Google Introduces New Crawler To Optimize Googlebot’s Performance

Google has recently introduced a new web crawler called “GoogleOther,” designed to alleviate strain on Googlebot, its primary search index crawler. The addition will help Google optimize and streamline its crawling operations.

Web crawlers, also known as robots or spiders, automatically discover and scan websites. They collect information about a website’s content, links, and other relevant data to help search engines like Google index and rank pages. Googlebot is responsible for building the index for Google Search, which is used to match user queries with relevant web pages.

However, as the internet continues to grow, the number of websites and web pages is increasing rapidly. Googlebot is constantly crawling the web to keep up with this growth, but it can only crawl so many pages at once. This can cause delays in indexing, meaning new content surfaces in search results later than it otherwise would.

GoogleOther is designed to alleviate this strain on Googlebot by taking on some of its crawling responsibilities. While Googlebot will still be responsible for building the primary search index, GoogleOther will take over non-essential jobs, such as research and development (R&D) crawls, that are not directly tied to search indexing.

This division of labor frees up crawl capacity for Googlebot, which can lead to faster indexing of new content. It also keeps Googlebot’s crawl activity dedicated to the index that powers Search.

This new crawler is part of Google’s ongoing efforts to improve its search capabilities. Google has been exploring new ways to crawl the web more efficiently, such as using machine learning algorithms to prioritize which pages to crawl first.

GoogleOther has now been added to Google’s list of crawlers, and it will be interesting to see how it affects crawl patterns in the coming months. Google describes the change as a no-op for site owners, but it should ultimately benefit both Google and its users by improving the speed and efficiency of crawling.

GoogleOther is a generic web crawler that will be used by various product teams within Google to fetch publicly accessible content from websites.

In a LinkedIn post, Google Search Analyst Gary Illyes shares more details.

Dividing Responsibilities Between Googlebot & GoogleOther

The main purpose of the new GoogleOther crawler is to take over the non-essential tasks currently performed by Googlebot.

By doing so, Googlebot can now focus solely on building the search index utilized by Google Search.

Meanwhile, GoogleOther will handle other jobs, such as research and development (R&D) crawls, which are not directly related to search indexing.

Illyes states on LinkedIn:

“We added a new crawler, GoogleOther to our list of crawlers that ultimately will take some strain off of Googlebot. This is a no-op change for you, but it’s interesting nonetheless I reckon.

As we optimize how and what Googlebot crawls, one thing we wanted to ensure is that Googlebot’s crawl jobs are only used internally for building the index that’s used by Search. For this we added a new crawler, GoogleOther, that will replace some of Googlebot’s other jobs like R&D crawls to free up some crawl capacity for Googlebot.”

GoogleOther Inherits Googlebot’s Infrastructure

GoogleOther shares the same infrastructure as Googlebot, meaning it possesses the same limitations and features, including host load limitations, robots.txt (albeit with a different user-agent token), HTTP protocol version, and fetch size.

Essentially, GoogleOther is Googlebot operating under a different name.
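Because GoogleOther announces itself with its own user-agent token, it can be addressed separately in robots.txt. As a sketch (the directory path here is purely hypothetical), rules could look like this:

```
# Block GoogleOther from a hypothetical section while leaving Googlebot unaffected
User-agent: GoogleOther
Disallow: /experimental/

# Googlebot continues to follow its own (or the default) rules
User-agent: Googlebot
Allow: /
```

Per Illyes, no such rules are required; GoogleOther obeys the same robots.txt conventions as Googlebot, so the default behavior is unchanged if you do nothing.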

Implications For SEOs & Site Owners

The introduction of GoogleOther should not significantly impact websites, as it operates using the same infrastructure and limitations as Googlebot.

Nonetheless, it’s a noteworthy development in Google’s ongoing efforts to optimize and streamline its web crawling processes.

If you’re concerned about GoogleOther, you can monitor it in the following ways:

  • Analyze server logs: Regularly review server logs to identify requests made by GoogleOther. This will help you understand how often it crawls your website and which pages it visits.
  • Update robots.txt: Ensure your robots.txt file is updated to include specific rules for GoogleOther if necessary. This will help you control its access and crawling behavior on your website.
  • Monitor crawl stats in Google Search Console: Keep an eye on crawl stats within Google Search Console to observe any changes in crawl frequency, crawl budget, or the number of indexed pages since the introduction of GoogleOther.
  • Track website performance: Regularly monitor your website’s performance metrics, such as load times, bounce rates, and user engagement, to identify any potential correlations with GoogleOther’s crawling activities. This will help you detect if the new crawler is causing any unforeseen issues on your website.
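The first step above, reviewing server logs, can be sketched in a few lines of Python. This is a minimal illustration, assuming logs in the common Apache/nginx combined format with the user-agent as the final quoted field; real log formats vary by server configuration, and the function name is our own:

```python
import re
from collections import Counter

def count_crawler_hits(log_lines, token="GoogleOther"):
    """Count requests per path made by a crawler whose user-agent
    string contains `token`, given combined-format access log lines."""
    hits = Counter()
    for line in log_lines:
        # Capture the request path and the last quoted field (the user-agent).
        m = re.search(r'"(?:GET|POST) (\S+) [^"]*".*"([^"]*)"\s*$', line)
        if m and token in m.group(2):
            hits[m.group(1)] += 1
    return hits

logs = [
    '66.249.1.1 - - [21/Apr/2023:13:52:46 +0000] "GET /a HTTP/1.1" 200 512 "-" "GoogleOther"',
    '66.249.1.2 - - [21/Apr/2023:13:53:01 +0000] "GET /b HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '66.249.1.1 - - [21/Apr/2023:13:54:10 +0000] "GET /a HTTP/1.1" 200 512 "-" "GoogleOther"',
]
print(count_crawler_hits(logs))
```

Comparing the resulting counts over time would show whether GoogleOther’s visit frequency or page coverage changes after rollout.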
