Index Coverage (Page Indexing) is a report in Google Search Console that shows the crawling and indexing status of all URLs that Google has discovered for your website.
It helps you track your website’s indexing status and keeps you informed about technical issues preventing your pages from being crawled and indexed correctly.
Checking the Index Coverage (Page Indexing) report regularly will help you spot and understand issues and learn how to address them.
In this article, I will describe:
- What the Index Coverage (Page Indexing) report is,
- When and how you should use it,
- The statuses shown in the report, including types of issues, what they mean, and how to fix them.
When was the Index Coverage (Page Indexing) report introduced?
Google introduced the Index Coverage report in January 2018 when it started releasing a revamped version of the Search Console to all users.
Apart from Index Coverage, the improved Search Console contained other valuable reports:
- The Search performance report,
- Reports on Search enhancements: AMP status and Job posting pages.
Google said the redesign of the Google Search Console was motivated by feedback from users. The goal was to:
- Add more actionable insights,
- Support the cooperation of different teams that use the tool,
- Offer quicker feedback loops between Google and users’ sites.
The report grouped URLs into the following four statuses, reflecting the different issues Google encountered on specific pages:
- Error – critical issues in crawling or indexing.
- Valid with warnings – URLs that are indexed but contain some non-critical errors.
- Valid – URLs that have been correctly indexed.
- Excluded – pages that haven’t been indexed due to issues – this is the most important section to focus on.
The 2021 Index Coverage report update
In January 2021, Google improved the Index Coverage report to make the reported indexing issues more accurate and clear to users.
The changes to the report consisted of:
- Removing the generic “crawl anomaly” issue type,
- Reporting pages that were submitted, blocked by robots.txt, but indexed anyway as “indexed but blocked” (a warning) instead of “submitted but blocked” (an error),
- Adding an issue called “indexed without content” (to warnings),
- Making the reporting of the soft 404 issues more accurate.
The 2022 Index Coverage (Page Indexing) report update
In August 2022, Google reorganized the Index Coverage report (from now on, visible as the Page indexing report in GSC).
Google’s goal was to simplify the report as users were saying that the “warning” status wasn’t a clear signal on how they should approach a given URL.
Google grouped:
- The Excluded and Error pages into the Not indexed status, and
- The Valid and Valid with warnings pages into the Indexed status.
Also, all the “Submitted but…” Error statuses were combined with their equivalents from the Excluded status within the Not indexed section.
Google’s indexing pipeline
Before digging into the report’s specifics, let’s discuss the steps Google needs to take to index and, eventually, rank web pages.
For a page to be ranked and shown to users, it needs to be discovered, crawled, and indexed.
Discovery
Google needs to first discover a page to be able to crawl it.
Discovery can happen in a few ways.
The most common ones are for Googlebot to follow external or internal links to a page or to find it in an XML sitemap – a file that lists and organizes the URLs on your domain.
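If you’ve never looked inside a sitemap, here is a minimal sketch of what one contains – a Python snippet that writes a basic sitemap.xml for a few placeholder URLs (the example.com addresses and file name are assumptions, not real pages):

```python
# A minimal sketch of a sitemap file: each <url>/<loc> entry lists one URL.
# The example.com URLs and the output file name are placeholders.
from xml.sax.saxutils import escape

urls = [
    "https://www.example.com/",
    "https://www.example.com/category/shoes",
    "https://www.example.com/product/blue-running-shoe",
]

entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>\n"
)

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(sitemap)
```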
Crawling
Crawling consists of search engines exploring web pages and analyzing their content.
An essential aspect of crawling is the crawl budget, which is the amount of time and resources that search engines can and want to spend on crawling your site. Search engines have limited crawling capabilities and can only crawl a portion of pages on a website. Read more about optimizing your crawl budget.
Indexing
During indexing, Google evaluates the pages and adds them to the index – a database of all web pages that Google can use to generate search results. This stage also consists of rendering, which helps Google see the pages’ layout and content. The information Google gathers about a page helps it decide how to show it in search results.
But just because Google can find and crawl your page doesn’t mean it will index it.
Getting indexed by Google has been getting more complicated. This is mainly because the web is growing, and websites are becoming heavier.
But here is the crucial indexing aspect to remember: you shouldn’t have all of your pages indexed.
Instead, ensure the index only contains your pages with high-quality content valuable to users. Some pages can have low-quality or duplicate content and, if search engines see them, it may negatively affect how they view your site as a whole.
That’s why it’s vital to create an indexing strategy and decide which pages should and shouldn’t be indexed. By preparing an indexing strategy, you can optimize your crawl budget, follow a clear indexing goal and fix any issues accordingly.
If you want to learn more about indexing, start by exploring our guide to Indexing SEO.
Ranking
Pages that are indexed can be ranked and appear in search results for relevant queries.
Google decides how to rank pages based on numerous ranking factors, such as the amount and quality of links, page speed, mobile-friendliness, content relevance, and many others.
How to use the Index Coverage (Page indexing) report?
To get to the Index Coverage (Page indexing) report, log in to your Google Search Console account. Then, in the menu on the left, select “Pages” in the Index section:

You will then see the report. By ticking both statuses, you can choose what you want to visualize on the chart:

“All known pages”, “All submitted pages” vs. “Unsubmitted pages only”
In the upper left corner, you can select whether you want to view:
- “All known pages”, which is the default option, showing URLs that Google discovered by any means,
- “All submitted pages”, including only URLs submitted in a sitemap, or
- “Unsubmitted pages only”, including only URLs that were not submitted in a sitemap.
You should find a stark difference between the status of “All submitted pages” and “All known pages” – “All known pages” normally contains more URLs, and more of them are reported as Not indexed. That’s because sitemaps should only contain indexable URLs, while most websites contain many pages that shouldn’t be indexed. One example is URLs with tracking parameters on eCommerce websites. Search engine bots like Googlebot may find those pages by various means, but they should not find them in your sitemap.
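As a rough illustration, here is one way you might filter tracking-parameter URLs out of a sitemap before generating it. The parameter names are common examples, not an exhaustive list:

```python
# A rough sketch: exclude URLs carrying tracking parameters from a sitemap.
# The parameter names below are common examples; adjust them to your site.
from urllib.parse import parse_qs, urlparse

TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "gclid", "fbclid"}

def is_sitemap_worthy(url: str) -> bool:
    """Return False for URLs that carry tracking parameters."""
    params = parse_qs(urlparse(url).query)
    return not (set(params) & TRACKING_PARAMS)

urls = [
    "https://www.example.com/product/blue-running-shoe",
    "https://www.example.com/product/blue-running-shoe?utm_source=newsletter",
]
print([u for u in urls if is_sitemap_worthy(u)])  # only the clean URL remains
```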
So always be mindful when opening the Index Coverage (Page indexing) report and make sure you’re looking at the data you’re interested in.
Inspecting the URL statuses
Indexed pages
To browse the URLs that are indexed within your website, go to the View data about indexed pages section, just below the chart.
Here you can see a chart of how the number of your indexed pages has changed over time.
Below the chart, you can explore the list of your indexed pages. But remember that you may not see all of them, as:
- The report shows up to 1,000 URLs, and
- New URLs may have been added after the last crawl.
To receive more information, you can inspect each URL by choosing the URL from the list and clicking Inspect URL on the right panel.

Not indexed pages
To see the details on the issues found as Not indexed, look below the chart in the Page indexing report:

This section displays the reason behind a given status, the source of it (whether your website or Google causes the issue), and the number of affected pages.
You can also see the validation status – after fixing an issue, you can inform Google that it has been addressed and ask to validate the fix.
This is possible at the top of the report after clicking on the issue:

The validation status can appear as “fixed”. But it can also show “failed” or “not started” – you should prioritize fixing issues with these statuses.
You can also see the trend for each status – whether the number of URLs has been rising, dropping, or staying at the same level.
After clicking on one of the issue types, you will see which URLs are affected by it. In addition, you can check when each URL was last crawled – however, this information is not always up-to-date due to possible delays in Google’s reporting.
There is also a chart showing the dates and how the issue changed over time.

Here are some important considerations you should be aware of when using the report:
- Always check if you’re looking at all submitted pages or all known pages. The difference between the status of the pages in your sitemap vs all pages that Google discovered can be very stark.
- The report may show changes with a delay, so whenever you release new content, give it at least a few days to get crawled and indexed.
- Google will send you email notifications about any particularly pressing issues encountered on your site.
- Your aim should be to index the canonical versions of the pages you want users and bots to find.
- As your website grows and you create more content, expect the number of indexed pages in the report to increase.
How often should you check the report?
You should check the Index Coverage report regularly to catch any mistakes in crawling and indexing your pages. Generally, try to check the report at least once a month.
But, if you make any significant changes to your site, like adjusting the layout or URL structure, or conducting a site migration, monitor the results more often to spot any negative impact. In that case, I recommend visiting the report at least once a week and paying particular attention to the Not indexed status.
URL Inspection tool
Before diving into the specifics of each status in the Index Coverage (Page indexing) report, I want to mention one other tool in the Search Console that will give you valuable insight into your crawled or indexed pages.
The URL Inspection tool provides details on whether:
- The page is indexed,
- The page is indexed but has issues (e.g., problems with structured data), or
- The page isn’t indexed.
You can find it in Google Search Console in the search bar at the top of the page.
Simply paste a URL that you want to inspect – you will then see the following data:

You can use the URL inspection tool to:
- Check the index status of a URL and, in case of issues, see what they are and troubleshoot them,
- Learn if a URL is indexable,
- View the rendered version of a URL,
- Request indexing of a URL – e.g., if a page has changed,
- View loaded resources, such as JavaScript,
- See what enhancements a URL is eligible for – e.g., based on the implementation of structured data and whether the page is mobile-friendly.
If you encounter any issues in the Index Coverage (Page indexing) report, use the URL inspection tool to verify them and test the URLs to better understand what should be fixed.
Not indexed status in Index Coverage (Page indexing) report and types of issues
It’s time to look at the Not indexed status in the report and discuss:
- The specific issue types it can show,
- What causes these issues, and
- How you should address them.
You may find that many URLs in this section have been excluded for the right reasons. But it’s important to regularly check which URLs are not indexed and why to ensure your critical URLs are not kept out of the index.
Excluded by ‘noindex’ tag
Googlebot found a page but could not index it because of a noindex directive in a robots meta tag or in the X-Robots-Tag HTTP response header. Go through these URLs to ensure the right ones are blocked from the index. If any of the URLs should be indexed, remove the directive.
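If you prefer to check this outside of Search Console, here is a rough sketch that looks for a noindex directive in both possible places. It assumes the requests package is installed and uses a placeholder URL; the meta-tag check is deliberately naive:

```python
# A rough sketch: report where a noindex directive comes from – the
# X-Robots-Tag response header or a robots meta tag in the HTML.
import re
import requests

def find_noindex(url: str) -> list[str]:
    sources = []
    response = requests.get(url, timeout=10)
    if "noindex" in response.headers.get("X-Robots-Tag", "").lower():
        sources.append("X-Robots-Tag header")
    # Naive check for <meta name="robots" content="...noindex...">
    if re.search(r"<meta[^>]+name=[\"']robots[\"'][^>]+noindex", response.text, re.I):
        sources.append("robots meta tag")
    return sources

print(find_noindex("https://www.example.com/some-page"))  # placeholder URL
```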
Blocked by page removal tool
These URLs have been blocked from Google using Google’s Removals tool. However, this method works only temporarily, and, typically after 90 days, Google may show them in search results again. If you want to block a page permanently, you can remove or redirect it or use a noindex tag.
Server error (5xx)
As indicated by the name, it refers to server errors with 5xx status codes, such as 502 Bad Gateway or 503 Service Unavailable.
You should monitor this section regularly, as Google will have trouble indexing pages with server errors. You may need to contact your server administrator to fix these errors or check if they are caused by any recent upgrades or changes on your site.
Check out Google’s suggestions on how to fix server errors.
Redirect error
Redirects transfer search engine bots and users from an old URL to a new one. They are usually implemented when URLs change or their content no longer exists.
Redirect errors point to the following problems:
- A redirect chain (multiple redirects between URLs) that is too long,
- A redirect loop, where URLs redirect to each other,
- A redirect URL that exceeds the maximum URL length,
- A wrong or empty URL in the redirect chain.
Check and fix the redirects for each affected URL – if you’re unsure where to start, follow my guide to redirects.
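As a starting point, this sketch follows a redirect chain hop by hop and flags chains that are too long or loop back on themselves. It assumes the requests package is installed; the URL and the 10-hop limit are placeholders:

```python
# A sketch that follows redirects manually to spot overly long chains
# and simple redirect loops.
import requests

def trace_redirects(url: str, max_hops: int = 10) -> list[str]:
    chain, seen = [url], {url}
    for _ in range(max_hops):
        response = requests.get(url, allow_redirects=False, timeout=10)
        if response.status_code not in (301, 302, 303, 307, 308):
            return chain  # final destination reached
        url = requests.compat.urljoin(url, response.headers["Location"])
        chain.append(url)
        if url in seen:
            raise RuntimeError(f"Redirect loop: {' -> '.join(chain)}")
        seen.add(url)
    raise RuntimeError(f"Chain longer than {max_hops} hops: {' -> '.join(chain)}")

print(trace_redirects("https://www.example.com/old-url"))  # placeholder URL
```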
Blocked by robots.txt
Robots.txt is a file containing instructions on how robots should crawl your site. If a URL should be indexed, Google needs to crawl it first, so go through these URLs and check whether you intended to block them.
Remember that robots.txt directives are not a bulletproof way to prevent pages from being indexed. Google may still index a page without visiting it, e.g., if other pages link to it. To keep a page out of Google’s index, use another method, such as password protection or a noindex tag.
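To quickly check whether a specific URL is blocked for Googlebot, you can use Python’s standard library robots.txt parser – a small sketch with placeholder URLs:

```python
# A small sketch: check whether a URL is blocked for Googlebot by robots.txt.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://www.example.com/robots.txt")  # placeholder
parser.read()

url = "https://www.example.com/private/report.html"  # placeholder
if not parser.can_fetch("Googlebot", url):
    print(f"{url} is blocked by robots.txt for Googlebot")
```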
Blocked due to unauthorized request (401)
The 401 Unauthorized status code means that a request cannot be completed without logging in with a valid user ID and password. Googlebot cannot index pages hidden behind logins – this tends to occur in staging environments. In this case, either remove the authorization requirement or verify Googlebot’s identity so it can access the pages.
If these URLs shouldn’t be indexed, this status is fine. However, to keep these URLs out of Google’s reach, ensure your staging environment cannot be found by Google. For example, remove any existing internal or external links pointing to it.
Crawled – currently not indexed
Googlebot has crawled a URL but is waiting to decide whether it should be indexed.
There could be many reasons for this. For instance, there may be no issue at all, and Google will index this URL soon. But, frequently, Google will wait to index a page if its content is low quality or looks similar to many other pages on the site. Google then puts it in the queue with a lower priority and focuses on indexing more valuable pages.
If you want to learn about what could be causing this status and how to address any issues, be sure to read our article on how to fix “Crawled – currently not indexed”.
Discovered – currently not indexed
This means that Google has found a URL – for example, in a sitemap – but hasn’t crawled it yet.
Keep in mind that in some cases, it could simply mean that Google will crawl it soon. This issue can also be connected with crawl budget problems – Google may view your website as low quality because it performs poorly or contains thin content.
Possibly, Google has not found any links pointing to this URL, or it encountered pages with stronger link signals that it will crawl first. If there are a lot of better-quality or more current pages, Google may postpone crawling this URL for months or never crawl it at all.
If you want to learn more about this status ‒ read our article on how to fix “Discovered – currently not indexed”.
Alternate page with proper canonical tag
This URL is a duplicate of another page and correctly points to the canonical version with a canonical tag. Canonical tags are used to specify the URL that represents the primary version of a page. They are a way of preventing duplicate content issues when many identical or similar pages exist.
In this situation, you don’t need to make any changes.
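If you want to double-check which canonical URL a page declares, here is a rough sketch that reads the rel=”canonical” link from the HTML. It assumes the requests and beautifulsoup4 packages are installed and uses a placeholder URL:

```python
# A rough sketch: extract the canonical URL a page declares in its <head>.
import requests
from bs4 import BeautifulSoup

def declared_canonical(url: str) -> str | None:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    link = soup.find("link", rel="canonical")
    return link.get("href") if link else None

# e.g. a parameterized product URL should point to the clean product page
print(declared_canonical("https://www.example.com/product/blue-shoe?color=blue"))
```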
Duplicate without user-selected canonical
There are duplicates of this page, and no canonical version has been specified. It means that Google doesn’t view this URL as the canonical one.
You can use the URL inspection tool to learn which URL Google chose as canonical. It’s best to choose the canonical version yourself and mark it accordingly using the rel=”canonical” tag.
Duplicate, Google chose different canonical than user
You chose a canonical page, but Google selected a different page as canonical.
The page you want to have as canonical may not be as strongly linked internally as a non-canonical page, which Google may then choose as the canonical version.
One way to address this issue is to consolidate your duplicate URLs. If you want to learn more about possible causes and solutions for the status, read our guide on how to fix the Duplicate, Google chose different canonical than user issue.
Not found (404)
404 errors indicate that the requested page could not be found because it was moved or deleted. Error pages exist on every website and, generally, a few of them won’t harm your site. But whenever a user encounters an error page, it may lead to a negative experience.
If you see this issue in the report, go through the affected URLs and check if you can fix the errors. For example, you could create a helpful custom 404 page or set up 301 redirects to working pages. Also, make sure that your sitemap doesn’t contain any URLs that return any HTTP status code other than 200 OK.
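One simple way to audit this is to fetch your sitemap and request each URL in it, flagging anything that does not answer with 200 OK. A sketch, assuming the requests package and a placeholder sitemap URL:

```python
# A sketch: flag every sitemap URL that does not return 200 OK.
import xml.etree.ElementTree as ET
import requests

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ET.fromstring(requests.get(SITEMAP_URL, timeout=10).content)
for loc in root.findall("sm:url/sm:loc", NS):
    url = loc.text.strip()
    status = requests.get(url, allow_redirects=False, timeout=10).status_code
    if status != 200:
        print(f"{status} {url}")
```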
Page with redirect
These pages redirect to other URLs, so they haven’t been indexed. Pages listed here generally don’t require your attention.
When permanently redirecting a page, make sure you implement a 301 redirect to the closest alternative page. Redirecting 404 pages to the homepage can result in Google treating them as soft 404s.
Soft 404
A soft 404 means a page returns a 200 OK status, but its content makes it look like an error page, e.g., because it’s empty or thin. It may also be a custom 404 page that contains user-friendly content directing visitors to other pages but still returns a 200 OK HTTP code.
To fix soft 404 errors, you can:
- Add or improve the content on these URLs,
- 301 redirect them to the closest matching alternatives, or
- Configure your server to return proper 404 or 410 codes.
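As a minimal illustration of the last option, here is a sketch using Flask as an example framework. The get_product() lookup is hypothetical; the point is returning a real 404 status instead of rendering an empty page with 200 OK:

```python
# A minimal sketch, assuming a Flask app and a hypothetical get_product() lookup.
from flask import Flask, abort

app = Flask(__name__)

def get_product(slug: str):
    """Hypothetical database lookup; returns None when the product is gone."""
    return None

@app.route("/product/<slug>")
def product_page(slug: str):
    product = get_product(slug)
    if product is None:
        abort(404)  # return a real 404 instead of a soft 404
    return f"<h1>{product['name']}</h1>"
```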
Also, as a follow-up, read our article on what soft 404s are in SEO.
Duplicate, submitted URL not selected as canonical
This includes URLs submitted in a sitemap but without canonical versions specified.
Google considers these URLs duplicates of other pages and has canonicalized them to Google-selected URLs. You should add canonical tags pointing to the preferred URL versions.
Blocked due to access forbidden (403)
The 403 Forbidden status code means the server understands the request but refuses to authorize it. You can either grant access to anonymous visitors so Googlebot can access the URL or, if this is not possible, remove the URL from sitemaps. And if Google shouldn’t access these URLs, it’s better to use a noindex tag.
Blocked due to other 4xx issue
Your URLs may not be indexed due to 4xx issues not covered by the other issue types. 4xx status code errors generally refer to problems caused by the client ‒ check these pages to learn what the error is.
You can learn more about what is causing each problem by using the URL Inspection tool. Fix the problems according to the specific code that appears. If you cannot resolve the error, remove the URL from your sitemap.
Conclusion
The Index Coverage (Page indexing) report shows a detailed overview of your crawling and indexing issues and points to how they should be addressed, making it a vital source of SEO data.
Your website’s crawling and indexing status is not straightforward – not all of your pages should be crawled or indexed. Ensuring such pages are not accessible to search engine bots is as crucial as having your most valuable pages indexed correctly.
The report reflects the fact that your indexing status is not simply black or white. It highlights the range of states that your URLs might be in, showing both serious errors and minor issues that don’t always require action. If you’re struggling to understand what action you should take to improve your website’s indexing, contact us for technical SEO services.
Ultimately, you should regularly browse Google’s Index Coverage (Page indexing) report and intervene when it doesn’t align with your indexing strategy.