Bot List

Here is a list of bots that we have identified. All of these bots will automatically be blocked by our Know Your Visitor (KYV) WordPress plugin.

Using Know Your Visitor (KYV), you can whitelist any bot that visits your site. There may be some bots that you find useful, such as Google’s crawler, among others.
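
The plugin’s internals are not shown here, but the underlying idea is straightforward: compare the User-Agent header of each incoming request against a blocklist, with whitelist entries taking precedence. The Python sketch below is purely illustrative; the list contents and function name are hypothetical and are not part of KYV or WordPress.

# Illustrative sketch only: KYV is a WordPress plugin, so this Python version just
# demonstrates the blocklist/whitelist matching idea described above.

BLOCKLIST = ["AhrefsBot", "SemrushBot", "MJ12bot", "Bytespider"]   # bots to block
WHITELIST = ["Googlebot", "bingbot"]                               # bots you choose to allow

def is_blocked(user_agent: str) -> bool:
    """Return True if the request's User-Agent matches a blocked bot
    and is not explicitly whitelisted."""
    ua = user_agent.lower()
    if any(name.lower() in ua for name in WHITELIST):
        return False
    return any(name.lower() in ua for name in BLOCKLIST)

print(is_blocked("Mozilla/5.0 (compatible; AhrefsBot/7.0; +http://ahrefs.com/robot)"))        # True
print(is_blocked("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")) # False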

This list is not complete and we will be adding to it as we encounter new bots.

Go-http-client/1.1
“Go-http-client/1.1” is the default User-Agent string used by the Go programming language’s standard library HTTP client in the net/http package. It identifies clients making requests with Go’s built-in HTTP functionality; despite occasional confusion, it does not indicate a Google bot, only that the client was written in Go. The “1.1” reflects the HTTP protocol version of the request (Go’s HTTP/2 transport identifies itself as “Go-http-client/2.0”); it is not a library version number.

l9scan/2.0.1393e2830313e2739313e2833313; +https://leakix.net
LeakIX’s scanning tools, including l9scan, systematically probe internet-connected devices to create a database of potential security issues.

OAI-SearchBot/1.0; +https://openai.com/searchbot
OAI-SearchBot is OpenAI’s web crawler designed to index and analyze websites specifically for its search features, such as those within ChatGPT. It helps power AI-driven search results and retrieve real-time information.

Bytespider; https://zhanzhang.toutiao.com
Bytespider is the web crawler operated by ByteDance, the parent company of TikTok. It crawls web content at scale, reportedly to feed ByteDance’s search and recommendation products and to train its large language models, and it has been widely reported to crawl aggressively.

DotBot/1.2; +https://opensiteexplorer.org/dotbot; [email protected]
DotBot/1.2 is Moz’s web crawler that gathers data for its Link Index, which is used in products like Link Explorer and the Moz Links API to provide SEO metrics. The URL https://opensiteexplorer.org/dotbot identifies the bot and its purpose, and [email protected] provides contact information for inquiries about the bot’s behavior.

Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot)
Amazonbot/0.1 is the user-agent string for Amazon’s web crawler, which indexes web content to improve services such as Alexa and other AI-powered features. It respects robots.txt directives and page-level controls: you can limit how Amazonbot interacts with your site by adding rules to robots.txt or by sending meta tags or HTTP headers such as noindex or noarchive to prevent content from being indexed or used for model training.
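
For example, a site that wants to keep Amazonbot away from all or part of its content can say so in robots.txt. The snippet below is a generic sketch; individual pages can additionally send a robots meta tag such as <meta name="robots" content="noindex, noarchive">.

# robots.txt (served from the site root)
# Block Amazonbot from the entire site; replace "/" with a specific path to be more selective.
User-agent: Amazonbot
Disallow: /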

Googlebot/2.1; +http://www.google.com/bot.html
“Googlebot/2.1” refers to Google’s web crawler software (also known as a spider or bot) and its specific version number, which is used to browse the internet, discover and download web pages, and collect information to build the searchable index for Google Search. The full string “Googlebot/2.1; +http://www.google.com/bot.html” is the user-agent string a website sees, identifying the bot and providing a link for further information about it.

Google-Site-Verification/1.0
“Google-Site-Verification/1.0” refers to the user agent string for a Google bot that verifies ownership of a website for services like Google Search Console. When you, as a website owner, prove you own your site by uploading a file, adding a meta tag, or using a DNS record, Google’s “Google-Site-Verification” bot checks for these verification tokens to confirm your ownership before granting you access to site-specific data and tools.

IbouBot/1.0; [email protected]; +https://ibou.io/iboubot.html)
IbouBot crawls URLs found on public pages, so it may visit any page that has been publicly linked somewhere.

meta-externalagent/1.1 (+https://developers.facebook.com/docs/sharing/webmasters/crawler)
The Meta-ExternalAgent crawler is operated by Meta Platforms, Inc. According to Meta’s crawler documentation, it crawls the web for uses such as training AI models and improving Meta’s products by indexing content directly. Link previews for URLs shared on Facebook, Instagram, or Messenger are generated by the separate facebookexternalhit crawler, described further down this list.

CensysInspect/1.1; +https://about.censys.io
CensysInspect/1.1 is the User-Agent string used by the Censys platform, an internet-wide scanning search engine that discovers and catalogs Internet-facing devices, services, and certificates to provide internet intelligence for cybersecurity research, threat hunting, and security operations. The +https://about.censys.io/ part of the string is a URL providing more information about the service and an opt-out option for users who wish to prevent their systems from being scanned.

SemrushBot/7~bl; +http://www.semrush.com/bot.html
SemrushBot/7~bl; +http://www.semrush.com/bot.html is the user agent string for SemrushBot, the web crawler for the Semrush SEO platform. This bot scans websites to collect data used in Semrush’s various SEO tools, such as for backlink analysis, site audits, content research, and monitoring keyword rankings. The URL in the string, http://www.semrush.com/bot.html, provides more information about the bot’s purpose and how to manage its access to a website.

python-requests/2.32.4
When a web server receives a request with the User-Agent python-requests/2.32.4, it means the request originated from a program written in Python that is using version 2.32.4 of the Requests library to interact with the server. Automated scripts (often referred to as “bots”) commonly use this library, but the User-Agent string itself is simply an identifier for the library.
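
As a quick illustration, the default header can be inspected and overridden from Python; the URL and custom agent name below are placeholders.

import requests

# Requests identifies itself as "python-requests/<version>" unless told otherwise.
print(requests.utils.default_user_agent())   # e.g. "python-requests/2.32.4"

# Any script can replace that header, so the User-Agent alone proves little about
# who is actually behind a request.
response = requests.get(
    "https://example.com",   # placeholder URL
    headers={"User-Agent": "MyScript/1.0 (+https://example.com/about)"},
)
print(response.status_code)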

PetalBot;+https://webmaster.petalsearch.com/site/petalbot
PetalBot is the web crawler for Huawei’s Petal Search engine, an AI-powered mobile search engine. Its purpose is to crawl and index the content of both mobile and PC websites to build a searchable database. This information is then used to provide search results, content recommendations, and AI Assistant services within the Petal ecosystem. You can find more information and verify PetalBot’s identity on the official webmaster.petalsearch.com site.

SeznamBot/4.0; +https://o-seznam.cz/napoveda/vyhledavani/en/seznambot-crawler/
SeznamBot/4.0 is the user-agent string for a web crawler developed by Seznam.cz, the leading search engine in the Czech Republic. This robot systematically browses websites to discover, index, and update content for Seznam’s search engine, ensuring that its search results are accurate and current. The URL in the user-agent string, “https://o-seznam.cz/napoveda/vyhledavani/en/seznambot-crawler/”, provides documentation for the bot and explains how website owners can manage its access using the robots.txt file.

GRequests/0.10
A user agent string containing GRequests/0.10 indicates that a request to a web server was made by a program using the grequests Python library; the 0.10 denotes the version number of the library. grequests is not a bot in the traditional sense, but a library that layers concurrent (asynchronous) requests on top of the Requests library using gevent, commonly used in custom programs for tasks like web scraping.
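
A minimal sketch of typical grequests usage follows (the URLs are placeholders): requests are prepared first and then sent concurrently.

import grequests  # pip install grequests (installs gevent and requests)

# Build a batch of unsent requests, then send them all concurrently.
urls = ["https://example.com", "https://example.org", "https://example.net"]
pending = (grequests.get(u, timeout=10) for u in urls)

# grequests.map() returns the responses in order; failed requests come back as None.
for response in grequests.map(pending):
    if response is not None:
        print(response.url, response.status_code)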

facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)
facebookexternalhit is a crawler used by Meta to fetch content from websites when links are shared on platforms like Facebook, Instagram, or Messenger. It retrieves metadata such as titles, descriptions, and images to generate link previews.

bingbot/2.0; +http://www.bing.com/bingbot.htm
“Bingbot/2.0” is the user agent string for Microsoft’s web crawler for the Bing search engine. The string identifies the bot to webmasters as Bing’s crawler indexing web pages for Bing search results, and the provided URL offers more information about the bot’s purpose and how to manage it. Because user-agent strings can be spoofed, Microsoft also provides tools for verifying that a request genuinely came from bingbot.

GPTBot/1.2; +https://openai.com/gptbot)
GPTBot/1.2; +https://openai.com/gptbot is the user agent string for GPTBot, a web crawler developed by OpenAI. Its primary purpose is to collect publicly available data from websites to train and improve OpenAI’s large language models (LLMs), such as the one that powers ChatGPT. The string identifies the crawler to website servers and often appears in server logs.

WinHttp.WinHttpRequest.5
A WinHttp.WinHttpRequest.5.1 object is not a bot, but a COM component that allows applications and scripts to make HTTP and HTTPS requests from the client side on Windows. It provides a scriptable interface to the WinHTTP API, enabling developers to programmatically send requests (such as GET and POST) and set arbitrary headers for web-related tasks, including retrieving data, handling redirects, and managing server and proxy authentication.

fasthttp
Fasthttp is a high-performance, standalone HTTP server and client library for the Go programming language, offering lower latency and higher throughput than Go’s standard net/http package. While ideal for tasks like chat or video servers, its efficiency has also made it a target for cybercriminals executing malicious activities such as brute-force login attacks and multi-factor authentication (MFA) spamming, especially against Microsoft Entra users.

ClaudeBot/1.0; [email protected]
ClaudeBot/1.0; [email protected] is the user agent string for ClaudeBot, a web crawler operated by the AI company Anthropic. The bot is designed to crawl the web to collect data, which is used to train Anthropic’s large language models (LLMs) that power its AI products, such as the chatbot Claude.

https://docs-cortex.paloaltonetworks.com/r/1/Cortex-Xpanse/Scanning-activity
Cortex Xpanse scans your public-facing websites, creating a continuously updated inventory of your web assets, including the server software and other technologies powering your web applications.

curl/7.61.1
curl/7.61.1 is not a specific, named web crawler; it identifies a request made with version 7.61.1 of the command-line tool cURL. Web administrators may see this in their server logs and interpret it as a bot if the request comes from an unexpected or automated source.

Barkrowler/0.9; +https://babbar.tech/crawler)
Barkrowler/0.9 (+https://babbar.tech/crawler) is the user agent string for Barkrowler, an SEO web crawler operated by Babbar.tech to analyze websites, gather data for online marketing and referencing tools, and contribute to their graph representation of the World Wide Web. This user agent identifies itself to websites and provides a link to Babbar.tech’s crawler documentation, which explains its purpose and includes information on its politeness policies, like respecting robots.txt files and crawl delays.

WebsiteDetectorBot/1.0
WebsiteDetectorBot/1.0 is a user-agent string used by an automated script or bot that accesses websites. The string identifies the request as coming from a web-scraping or data-collection bot, but there is no specific, widely recognized entity known to operate it. A web server’s logs would record this user agent when the bot makes a request.

AhrefsBot/7.0; +http://ahrefs.com/robot
AhrefsBot/7.0; +http://ahrefs.com/robot is the user-agent string for AhrefsBot, the web crawler for the SEO tool Ahrefs. It is one of the most active bots on the internet, responsible for constantly crawling websites to build and maintain Ahrefs’ comprehensive database of backlinks and other SEO data. This information is used by Ahrefs users for competitive analysis, backlink profiles, keyword research, and site audits to improve their website’s performance.

coccocbot-web/1.0; +http://help.coccoc.com/searchengine
coccocbot-web/1.0; +http://help.coccoc.com/searchengine is the user agent string for a search engine web crawler from Vietnam’s Coc Coc browser and search engine. This bot visits websites to index content for the Coc Coc search engine, similar to how Googlebot indexes content for Google Search. Websites can control its behavior by adding instructions in their robots.txt file.

Google-Read-Aloud; +https://support.google.com/webmasters/answer/1061943
Google Read Aloud is a text-to-speech (TTS) service that reads web content aloud for users, activated by the user agent string “Google-Read-Aloud” and often accessible through Google Assistant or Chrome’s “Listen to this page” feature. It functions by fetching web content to provide audio output, supporting users with vision impairments, reading disabilities, or those who prefer to listen rather than read. The service prioritizes user accessibility, ignoring robots.txt rules to ensure all users can access and utilize the feature.

python-httpx/0.28.1
When you see python-httpx/0.28.1 in a log or request header, it signifies that a program written in Python is using the httpx library, version 0.28.1, to interact with a web server or API. The term “bot” in this context implies that the client is an automated program or script, as opposed to a human user browsing the web.
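
A short sketch shows where that header comes from: inspecting an httpx client’s default headers reveals the exact string a server would log.

import httpx

# httpx advertises itself as "python-httpx/<version>" by default; like any client
# header, this can be overridden by the program using the library.
client = httpx.Client()
print(client.headers["user-agent"])   # e.g. "python-httpx/0.28.1"
client.close()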

AliyunSecBot/Aliyun ([email protected])
AliyunSecBot/Aliyun ([email protected]) is a user agent for a web crawler (or spider) associated with Alibaba’s cloud computing division, known as Aliyun or Alibaba Cloud. Web crawlers are bots that systematically browse the web to find and index information. While the purpose of many crawlers is benign, the AliyunSecBot has been described by some security services as a bot designed to carry out “continuous attacks using a customized spider”.

CMS-Checker/1.0; +https://example.com
“CMS-Checker/1.0; +https://example.com” is a User-Agent string used by an automated tool, such as a website crawler or scanner, that identifies itself as version 1.0 of a CMS (Content Management System) checker. By convention, the +URL in a bot’s user agent points to information about the bot and its operator; here it is only the placeholder domain example.com, so the tool cannot be attributed to any recognized operator and is most likely a generic scanner probing sites to detect which CMS they run.

MJ12bot/v1.4.8; http://mj12bot.com
MJ12bot is a web crawler operated by Majestic, a company specializing in SEO and link intelligence data. Its primary function is to index web pages to build a comprehensive map of the internet, which is then used to analyze link structures and provide insights into website authority and backlink profiles.

python-requests/2.27.1
A “python-requests/2.27.1 bot” is an automated program written in Python that uses version 2.27.1 of the requests library to send and receive data from web servers, effectively acting as a client that interacts with web resources.
