Website Advisor scores are based on observable evidence from a public scan, so a score will sometimes not match your intuition about your own site. Here is why that happens and what to do about it.
Common reasons for unexpected scores
- The scan sees your site as a first-time visitor. Internal knowledge about your product, audience, or roadmap is not visible to the scanner.
- Low-confidence scores mean the scan does not have enough evidence, not necessarily that the site is bad. Check the Confidence section to see what is still unknown.
- Lighthouse scores are lab measurements taken from a server, not field data from real users. Your actual user experience may differ.
- The scanner samples a limited number of follow-on pages. Important pages that are not linked from the homepage may be missed.
- Sites that require JavaScript to render content may show lower scores if the rendered version differs significantly from the source HTML.
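You can check the page-sampling limitation yourself. The sketch below is a minimal, stdlib-only Python illustration of a homepage-seeded sampler (the function and class names are my own, not part of Website Advisor): it lists the same-site URLs linked from one page, which approximates the pool a scanner starting at your homepage can ever discover.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def same_site_links(html, base_url, limit=10):
    """Return up to `limit` unique same-host URLs linked from `html`.

    Mimics a homepage-seeded sampler: pages absent from this list
    (and not linked from the pages on it) would never be visited.
    """
    parser = LinkCollector()
    parser.feed(html)
    host = urlparse(base_url).netloc
    seen, found = set(), []
    for href in parser.hrefs:
        url = urljoin(base_url, href)  # resolve relative links
        if urlparse(url).netloc == host and url not in seen:
            seen.add(url)
            found.append(url)
        if len(found) >= limit:
            break
    return found
```

If a page you care about does not show up when you run this against your homepage HTML, consider linking it from the homepage or a page near it.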
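To gauge whether your site falls into the JavaScript-rendering case, compare how much visible text the raw HTML actually contains. This stdlib-only sketch is a rough heuristic of my own, not the scanner's actual method: it extracts the text a no-JavaScript visitor would see, skipping script and style contents. A near-empty result for a page you know is content-rich suggests the content is rendered client-side.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Extracts visible text, ignoring <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0  # >0 while inside script/style
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth:
            self.chunks.append(data)

def visible_text(html):
    """Return the text present in the source HTML before any JS runs."""
    parser = TextExtractor()
    parser.feed(html)
    # collapse runs of whitespace left behind by markup
    return " ".join("".join(parser.chunks).split())
```

Run it on `view-source` HTML (not the browser-rendered DOM): if `visible_text` returns almost nothing for a page full of content, the scanner may be seeing a very different page than your users do.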
How to improve accuracy
- Add business context: import GA4, Plausible, or Search Console data to give the report real traffic evidence.
- Add competitor URLs: benchmarking puts your scores in context relative to your market.
- Add private paths: if important content is behind login, add those URLs so the scan can evaluate them.
- Add a change note: tie scans to specific releases so score changes can be traced to actual work.
- Re-read the Confidence section first: the split between solid and inferred findings tells you exactly where the read is strong and where it is still guessing.