Your server returned a 500-level error when the page was requested. See Fixing server errors.
Google experienced a redirect error when requesting the page, such as a redirect chain that was too long, a redirect loop, a redirect URL that exceeded the maximum URL length, or a bad or empty URL in the redirect chain.
Use a web debugging tool such as Lighthouse to get more details about the redirect.
This page was blocked by your site’s robots.txt file. You can verify this using the robots.txt tester. Note that this does not guarantee that the page won’t be indexed through some other means. If Google can find other information about this page without loading it, there is a very small chance that the page might still be indexed. To ensure that a page is not indexed by Google, remove the robots.txt block and use a ‘noindex’ directive.
When Google tried to index the page it encountered a ‘noindex’ directive and therefore did not index it. If you do not want this page indexed, congratulations! If you do want this page to be indexed, you should remove the ‘noindex’ directive.
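Note that ‘noindex’ only takes effect if Google can actually fetch the page, so any robots.txt block must be removed first. As a minimal sketch, the directive can be expressed in either of two standard ways (page names here are illustrative):

```html
<!-- Option 1: a meta tag in the page's <head> -->
<meta name="robots" content="noindex">

<!-- Option 2: an HTTP response header, useful for non-HTML files
     such as PDFs, sent by the server instead of markup:

     X-Robots-Tag: noindex
-->
```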
The page request returns what we think is a soft 404 response. This means that it returns a user-friendly “not found” message but not a 404 HTTP response code. We recommend returning a 404 response code for truly “not found” pages and adding more information on the page to let us know that it is not a soft 404. To see how Google sees the page, run a live URL inspection test against the page and click View tested page to see a screenshot showing how Google renders the page. Learn how to fix a soft 404.
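For reference, here is a minimal sketch of the recommended behavior using only Python's standard library: the "not found" page stays user-friendly, but the status code is a real 404, so Google does not treat it as a soft 404. Paths and messages are hypothetical.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGES = {"/": b"<h1>Home</h1>"}  # hypothetical site content

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = PAGES.get(self.path)
        if body is None:
            # A user-friendly message, but with a real 404 status code,
            # so this is not a soft 404.
            body = b"<h1>Sorry, that page could not be found.</h1>"
            self.send_response(404)
        else:
            self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# To run: HTTPServer(("", 8000), Handler).serve_forever()
```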
The page was blocked to Googlebot by a request for authorization (401 response). If you do want Googlebot to be able to index this page, either remove authorization requirements for this page, or else allow Googlebot to access your pages by verifying its identity. You can verify this error by visiting the page in incognito mode.
This page returned a 404 error when requested. Google discovered this URL without any explicit request or sitemap; it might have found the URL as a link from another page, or possibly the page existed before and was deleted. Googlebot will probably continue to try this URL for some period of time; there is no way to tell Googlebot to permanently forget a URL, although it will crawl it less and less often. 404 responses are not necessarily a problem if the page was removed without a replacement. If your page has moved, use a 301 redirect to the new location. See Fixing 404 errors.
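The moved-page case can be sketched in the same standard-library style: the old URL answers with a permanent (301) redirect to its new location instead of a 404. The URL mapping here is hypothetical.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping of moved URLs to their new locations.
MOVED = {"/old-page": "/new-page"}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = MOVED.get(self.path)
        if target is not None:
            # 301 tells crawlers the move is permanent, so they can
            # transfer signals to the new URL and stop trying the old one.
            self.send_response(301)
            self.send_header("Location", target)
            self.send_header("Content-Length", "0")
            self.end_headers()
        else:
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass
```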
HTTP 403 means that the user agent provided credentials, but was not granted access. However, Googlebot never provides credentials, so your server is returning this error incorrectly. The page will not be indexed.
If you do want Googlebot to index this page, either admit non-signed-in users or explicitly allow Googlebot to request the page without authentication (though you should verify its identity).
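Google's documented way to verify Googlebot's identity is a forward-confirmed reverse DNS check: reverse-resolve the requesting IP, confirm the hostname ends in googlebot.com or google.com, then forward-resolve that hostname and confirm it maps back to the same IP. A sketch follows; the resolver functions are parameters so the logic can be exercised without live DNS, and the hostnames in the usage comments are illustrative.

```python
import socket

def is_verified_googlebot(ip,
                          reverse_dns=lambda ip: socket.gethostbyaddr(ip)[0],
                          forward_dns=socket.gethostbyname):
    """Forward-confirmed reverse DNS check for Googlebot.

    1. Reverse-resolve the IP; the host must end in googlebot.com
       or google.com.
    2. Forward-resolve that host; it must map back to the same IP.
    """
    try:
        host = reverse_dns(ip)
    except OSError:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return forward_dns(host) == ip
    except OSError:
        return False

# e.g. is_verified_googlebot("66.249.66.1") would reverse-resolve to
# something like crawl-66-249-66-1.googlebot.com and confirm it forward.
```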
The server encountered a 4xx error not covered by any other issue type described here. Try debugging your page using the URL Inspection tool.
The page is currently blocked by a URL removal request, either by someone who manages this property in Search Console or by an approved request from a site visitor.
Use the URL removals tool to see who submitted the removal request. Removal requests expire about 90 days after the removal date; after that period, Googlebot may crawl and index the page again even if you do not submit another index request. If you don’t want the page indexed, use ‘noindex’, require authorization for the page, or remove the page.
The page was crawled by Google but not indexed. It may or may not be indexed in the future; no need to resubmit this URL for crawling.
The page was found by Google, but not crawled yet. Typically, Google wanted to crawl the URL but this was expected to overload the site; therefore Google rescheduled the crawl. This is why the last crawl date is empty on the report.
This page is marked as an alternate of another page (that is, an AMP page with a desktop canonical, or a mobile version of a desktop canonical, or the desktop version of a mobile canonical). This page correctly points to the canonical page, which is indexed, so there is nothing you need to do. Alternate language pages are not detected by Search Console.
This page is a duplicate of another page, although it doesn’t indicate a preferred canonical page. Google has chosen the other page as the canonical for this page, and so will not serve this page in Search. You can Inspect this URL to see which URL Google considers canonical for this page.
This is not an error, but is working as intended, because Google does not serve duplicate pages. However, if you think that Google has chosen the wrong URL as canonical, you can explicitly mark the canonical for this page. Alternately, if you think that this page is not a duplicate of the Google-chosen canonical, you should ensure that the content differs substantially between the two pages.
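To explicitly mark the canonical as described above, one common mechanism is a rel="canonical" link on the duplicate page; the URL below is a placeholder.

```html
<!-- On the duplicate page, pointing at the URL you prefer Google to index -->
<link rel="canonical" href="https://example.com/preferred-page">
```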
This page is marked as canonical for a set of pages, but Google thinks another URL makes a better canonical. Google has indexed the page that we consider canonical rather than this one. Inspect this URL to see the Google-selected canonical URL. If you think this page is not a duplicate of the Google-chosen canonical, you should ensure that the content between the pages differs substantively.
The URL redirects to another URL and therefore was not indexed. The final target URL might be indexed, and should appear in this report. If you test this URL in the URL Inspection report, the indexed test will show the redirect; the live test will follow and test the redirected page, though it won’t show the URL of the redirected and tested page.
Warnings are listed in the Improve page experience table on the summary page of the Page indexing report. These issues don’t prevent a page from being indexed, but they do reduce Google’s ability to understand and index your pages.
The page was indexed despite being blocked by your website’s robots.txt file. Google always respects robots.txt, but this doesn’t necessarily prevent indexing if someone else links to your page. Google won’t request and crawl the page, but we can still index it, using the information from the page that links to your blocked page. Because of the robots.txt rule, any snippet shown in Google Search results for the page will probably be very limited.
This page appears in the Google index, but for some reason Google could not read the content. Possible reasons are that the page might be cloaked to Google or the page might be in a format that Google can’t index. This is not a case of robots.txt blocking. Inspect the page, and look at the Coverage section for details.
Beyond the fixes described above, indexing is entirely at Google’s discretion; there is nothing more you can do to force it.