openoffice-dev mailing list archives

From Peter Kovacs <>
Subject Re: Critical issue on and Google Search
Date Tue, 12 May 2020 16:07:34 GMT
Hi John,

We have not changed the robots.txt file in 11 years. After checking, this is a
long-standing, unchanged configuration.

The page itself is reachable by Yandex; a response check gives a 200 return
code, which indicates all is fine.
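A minimal sketch of such a response check in Python (an illustration only — the actual forum URL is elided above, and the crawler-style User-Agent string is an assumption):

```python
from urllib import request


def head_status(url: str,
                user_agent: str = "Mozilla/5.0 (compatible; Googlebot/2.1)") -> int:
    """Send a HEAD request with a crawler-like User-Agent and return
    the HTTP status code."""
    req = request.Request(url, method="HEAD",
                          headers={"User-Agent": user_agent})
    with request.urlopen(req) as resp:
        return resp.status


def is_reachable(status: int) -> bool:
    # Any 2xx status means the crawler could fetch the page.
    return 200 <= status < 300
```

A 200 from `head_status` matches the result described above; a 403 or a connection error would instead point at a firewall or abuse-protection layer.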

We were also able to curl the headers, and everything looked okay. No Google 
crawlers are blocked by IP address. I was able to confirm all of these things.
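Whether a given request really came from Googlebot can be checked with forward-confirmed reverse DNS, as Google recommends. A minimal sketch, assuming the documented `googlebot.com`/`google.com` domains (the live lookup needs network access; only the hostname check runs offline):

```python
import socket

# Per Google's documentation, Googlebot reverse-DNS names end in these domains.
GOOGLEBOT_DOMAINS = (".googlebot.com", ".google.com")


def hostname_is_google(hostname: str) -> bool:
    """Check whether a reverse-DNS hostname belongs to a Google crawler domain."""
    return hostname.rstrip(".").endswith(GOOGLEBOT_DOMAINS)


def verify_googlebot(ip: str) -> bool:
    """Forward-confirmed reverse DNS: reverse-resolve the IP, check the
    domain, then resolve the name forward and confirm it maps back to
    the same IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)       # reverse lookup
    except socket.herror:
        return False
    if not hostname_is_google(hostname):
        return False
    try:
        return ip in socket.gethostbyname_ex(hostname)[2]  # forward confirm
    except socket.gaierror:
        return False
```

This guards against spoofed User-Agent strings: an impostor can claim to be Googlebot in the headers, but cannot make its IP reverse-resolve into Google's domains.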

Let's try the Google search:

Search key: OpenOffice Reset Profile

The 2nd link points to the right topic on the forum.

 From our standpoint everything looks as intended.

Since we have not changed anything recently, but Google Search shows 
issues, I assume the problem lies within Google's infrastructure, causing 
the issue in your crawler.

I do not see what we should change and why. Feel free to respond to the 
mailing list.

All the Best


On 12.05.20 at 11:56, John Mueller wrote:
> Hi Peter
> It looks like Google's infrastructure for crawling the web can't access any
> URLs at all from, including the homepage. Sometimes
> this is due to a firewall or abuse protection system recognizing these
> requests as malicious. Over time, as we attempt to update the pages in the
> search results by crawling URLs from the site, if we see that we can't
> access them at all, they generally get removed from our search results. In
> practice, this means that users won't be able to find your pages in Google
> Search. Sometimes websites do that on purpose if they don't want to be
> found in search; I suspect it's more of an accident here. A simple way to
> test is to use to check
> URLs from your site (better would be to use
> , though that would
> require verification of the site in Google Search Console first).
> Hope this helps!
> John
> On Tue, May 12, 2020 at 10:33 AM Peter Kovacs <> wrote:
>> Hello Mr Mueller,
>> The is our support Forum. When people have issues
>> they are often directed to this page for solutions.
>> Do you have a list of URLs Googlebot has not been able to crawl? We can then
>> check whether the behavior is intended or not, and we can tell you the reason
>> for this measure.
>> I am not particularly skilled with the Google search engine. I do not
>> understand this sentence:
>> This will cause those pages to drop out of Google's search results, and
>> will prevent new pages from being picked up for Search.
>> Can you explain this in an example please?
>> Thanks for the support.
>> All the best
>> Peter
>> On 11.05.20 at 13:37, John Mueller wrote:
>> Dear webmaster of
>> I'm an analyst at Google in Switzerland. We wanted to bring your attention
>> to a critical issue with your website, and how it's available for Google's
>> web search.
>> In particular, Googlebot has been unable to crawl URLs from
>> . This will cause those pages to drop out
>> of Google's search results, and will prevent new pages from being picked up
>> for Search. If you're not aware of this issue, you may be accidentally
>> blocking these pages from Google Search due to a server issue. If you need
>> to block Googlebot from crawling pages on your website, we'd recommend
>> using the robots.txt file instead.
>> Should you need to recognize IP addresses of Googlebot requests, you can
>> use a reverse IP lookup to do so:
>> Should you have any questions, feel free to contact me directly. For
>> verification purposes, we are sending a copy of this message to your site's
>> Search Console account.
>> Thank you,
>> John Mueller (
>> Webmaster Trends Analyst
>> --
>> John Mueller, He/Him, Search Relations Team - go/search-rel
>> <>
>> WTA is now Search-Rel (info
>> <>)
>> *Time-critical? Resend with "URGENT" in the subject.*
>> Google Switzerland GmbH
>> Gustav-Gull-Platz 1, 3. Stock
>> 8004 Zurich, Switzerland
>> Identification number:
>> CH-
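For reference, the robots.txt approach John describes above might look like the fragment below — an illustrative sketch, not the project's actual file (the path is hypothetical):

```
# Block only Googlebot from one hypothetical path,
# while leaving every other crawler unrestricted.
User-agent: Googlebot
Disallow: /private/

User-agent: *
Disallow:
```

This is the deliberate, documented way to keep pages out of Google's crawl, as opposed to a firewall silently rejecting the requests.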

