Product Deep-Dive

What Makes Our Audit Different: 14 AI Agents, 200+ Checks, and Why It Matters

Ross Forrester · 22 min read


The problem with most SEO audits is not that they find too little. It is that they find the wrong things — and miss the category of search that is growing fastest.

Run a site through SEMrush, Ahrefs, or Screaming Frog and you will get a list of technical issues: missing meta descriptions, redirect chains, slow page speed. These tools are good at what they do. But they were built for a world where Google was the only search engine that mattered. That world is changing faster than most audit tools have adapted.

Our audit was built from scratch for 2026. It runs 14 specialised AI agents in parallel, covering over 200 individual checks across both traditional SEO and AI search visibility (GEO). It adapts its recommendations to your specific CMS. And it produces a single, comparable score — the Digital Visibility Score — that tells you exactly where you stand and how to improve.

This article explains exactly how it works, what each agent checks, and why we built it this way.

Key Takeaway: Most SEO audit tools check for technical problems on your website. Our audit checks for those and for your visibility to AI search engines — ChatGPT, Perplexity, Google AI Overviews — which now handle up to 17% of global search queries. The two require entirely different checks.


The Problem With Existing Audit Tools

Before explaining what we built, it helps to understand what we were reacting to.

The generic problem

Existing tools like SEMrush Site Audit, Ahrefs, and Screaming Frog check for technical issues competently. SEMrush checks over 140 technical issues including crawl errors, duplicate content, Core Web Vitals, and schema markup problems. Ahrefs offers always-on crawling with a clean health score display. Screaming Frog is the gold standard for technical crawling at £279/year.

But every one of these tools has the same blind spot: they produce generic lists that are not adapted to your platform, your industry, or the way AI search engines actually find and cite content.

A Shopify site and a WordPress site get identical recommendations about canonicals — despite Shopify's canonical structure being a known platform-specific issue (generating four duplicate URLs per product by default), and WordPress canonicals depending entirely on which plugins are installed and how they are configured. A Wix site cannot edit robots.txt directly, so any recommendation to "update your robots.txt" is technically impossible without a platform migration.

The GEO gap

More significantly, none of these tools was designed with AI search visibility in mind. SEMrush recently added an "AI Search Health" score, but it is a bolt-on feature — a handful of checks about whether AI bots are blocked and whether structured data is present. It does not measure citability, brand entity signals, passage-level authority, or the specific requirements of ChatGPT, Perplexity, and Google AI Overviews — which each pull information differently.

This matters because AI search is no longer a future consideration. ChatGPT now handles approximately 17% of global search queries. Google AI Overviews appear in at least 16% of all searches. Perplexity crossed 100 million monthly active users and is growing over 200% year-on-year. If your website is not optimised for these platforms, you are invisible to a significant and growing share of the people searching for what you do.

The pricing problem

For a small business in the UK, the pricing of professional audit tools is a significant barrier. SEMrush Pro starts at $139.95 per month. Ahrefs Lite is $129 per month, with credit limits that penalise normal usage. These tools are designed for agencies running audits on dozens of clients, not for business owners who want to understand and improve their own website.


How Our 14-Agent System Works

We run 14 specialised AI agents in parallel, each focused on a distinct area of your website's visibility. Running them in parallel means the audit completes in 30–45 seconds rather than the hours a sequential process would take. Here is what each agent examines.

14 Agents — Running in Parallel

  • Technical SEO: crawl, indexability, robots
  • Core Web Vitals: LCP, INP, CLS real-world
  • Schema Markup: JSON-LD, FAQPage, Local
  • On-Page SEO: titles, H-tags, meta
  • GEO Crawlers: GPTBot, ClaudeBot access
  • AI Citability: passage authority, structure
  • llms.txt: AI instruction file audit
  • Brand Entity: cross-web brand signals
  • Content Quality: E-E-A-T, depth, readability
  • Local SEO: GBP, NAP, citations
  • Competitor: benchmarking your rivals
  • Platform: CMS-specific fixes
  • Backlink Profile: authority, toxic links
  • Sitemap & Robots: indexation control
  • Security Headers: HTTPS, CSP, HSTS

Digital Visibility Score: 60% SEO + 40% GEO — one number, one report

Agent 1: Technical SEO

This agent checks the fundamentals that allow search engines to crawl and index your site correctly. It verifies that robots.txt is not accidentally blocking important pages, that your sitemap is valid and submitted, that there are no crawl traps or infinite redirect chains, and that canonical tags are implemented correctly. It also checks for common indexation errors — noindex tags applied to pages that should be indexed, orphan pages with no internal links pointing to them, and thin pages that consume crawl budget without contributing to visibility.

Agent 2: Core Web Vitals

This agent does not rely on Lighthouse alone. Lighthouse scores are lab measurements taken on a simulated 2016 phone on Slow 4G — a stress test that does not reflect your actual visitors' experience. Of pages scoring 90+ in Lighthouse, 43% fail to meet one or more Core Web Vitals thresholds in real-world field data (CrUX). Our agent checks both Lighthouse and real-world CrUX data where available, measuring the three signals Google uses as ranking factors: Largest Contentful Paint (LCP), Interaction to Next Paint (INP — which replaced FID in 2024), and Cumulative Layout Shift (CLS).
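The thresholds the agent grades against are Google's published ones: an LCP of 2.5 s or less, an INP of 200 ms or less, and a CLS of 0.1 or less count as "good", with "needs improvement" bands up to 4 s, 500 ms, and 0.25 respectively. A minimal sketch of that classification in Python (the function name is ours for illustration, not the audit's internals):

```python
# Google's published thresholds for the three Core Web Vitals:
# (good-up-to, needs-improvement-up-to). LCP and INP in ms, CLS unitless.
THRESHOLDS = {
    "LCP": (2500, 4000),
    "INP": (200, 500),
    "CLS": (0.1, 0.25),
}

def rate_vital(metric: str, value: float) -> str:
    """Classify a field-data measurement as good / needs-improvement / poor."""
    good, needs_improvement = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= needs_improvement:
        return "needs-improvement"
    return "poor"
```

A page can pass one vital and fail another, which is why the agent reports all three rather than a single speed number.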

Agent 3: Schema Markup

Schema markup tells search engines — and increasingly, AI engines — exactly what your content is about. This agent checks for the presence, completeness, and accuracy of JSON-LD structured data. It looks for Organisation schema, LocalBusiness schema, FAQPage schema, Product schema where relevant, BreadcrumbList, and Article or BlogPosting schema on content pages. It also flags common errors: schema present in the wrong format, schema referencing images that do not exist, and required fields (like name and url) missing from Organisation markup.
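To illustrate the "required fields" part of this check, here is a minimal Python sketch that flags missing name and url in an Organisation JSON-LD block. It is an illustration of the idea, not our production validator, and the example snippet is invented:

```python
import json

# Fields the article treats as required on Organization markup.
REQUIRED_ORG_FIELDS = {"name", "url"}

def check_organisation_schema(json_ld: str) -> list[str]:
    """Return the required fields missing from an Organization JSON-LD block."""
    data = json.loads(json_ld)
    if data.get("@type") != "Organization":
        return ["not an Organization block"]
    return sorted(REQUIRED_ORG_FIELDS - data.keys())

# Invented example: "url" is absent, so it would be flagged.
snippet = '{"@context": "https://schema.org", "@type": "Organization", "name": "Acme Ltd"}'
```

A fuller validator would also resolve referenced images and check value types, which this sketch skips.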

Agent 4: On-Page SEO

Title tags, meta descriptions, heading hierarchy, keyword placement, image alt text, internal linking structure. This agent checks every on-page signal that affects how well individual pages perform for target keywords. Uniqueness matters here: duplicate title tags across multiple pages, or multiple H1 tags on a single page, both cause problems that this agent surfaces.
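The uniqueness checks are simple to sketch. Assuming page titles and extracted H1 tags are already available as plain Python values (the helper names here are ours, for illustration only):

```python
from collections import Counter

def find_duplicate_titles(pages: dict[str, str]) -> list[str]:
    """Given a mapping of URL -> title tag, return titles used by more than one URL."""
    counts = Counter(pages.values())
    return sorted(title for title, n in counts.items() if n > 1)

def has_multiple_h1(h1_tags: list[str]) -> bool:
    """Flag a page where more than one <h1> was extracted."""
    return len(h1_tags) > 1
```

The real agent works from crawled HTML, but the underlying logic is this kind of cross-page comparison.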

Agent 5: GEO Crawler Access

This agent specifically checks whether AI crawlers can access your site. The main ones are GPTBot (OpenAI), ClaudeBot (Anthropic), PerplexityBot (Perplexity AI), and Google-Extended (Google AI). According to industry research, 63% of businesses are accidentally blocking at least one of these crawlers through overly restrictive robots.txt rules. If these crawlers cannot access your content, the content cannot be cited in AI search answers — regardless of how good the content is. This agent also checks for Cloudflare bot protection and other firewall rules that might block AI crawlers without the site owner knowing.
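You can test a robots.txt file against these user-agent tokens with nothing but Python's standard library. This sketch (not our production crawler) parses the rules and reports which AI bots are shut out of a given URL; the example rules are invented:

```python
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def blocked_ai_crawlers(robots_txt: str, url: str) -> list[str]:
    """Return the AI crawler tokens that the given robots.txt disallows for url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [bot for bot in AI_CRAWLERS if not parser.can_fetch(bot, url)]

# Invented example: a site that blocks GPTBot but allows everything else.
rules = """User-agent: GPTBot
Disallow: /

User-agent: *
Disallow:
"""
```

Running this against your own live robots.txt is a quick way to spot an accidental block before it costs you citations.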

Agent 6: AI Citability

Being crawlable is necessary but not sufficient. This agent assesses whether your content is structured in a way that AI systems can extract and cite it accurately. It looks for: clear passage-level answers to questions (AI engines need to identify citable sentences, not just citable pages), proper use of headings to signal topic structure, answer-first content structure (the answer appears before the supporting detail, not after), and the presence of statistics and citations that make your content credible to AI synthesis systems.

Agent 7: llms.txt Audit

llms.txt is the AI equivalent of robots.txt — a file that tells AI systems what your website is about, which content is authoritative, and how you want your business to be described. The file is placed at yourdomain.com/llms.txt and uses a simple markdown format. Our agent checks whether you have this file, whether it is correctly formatted, and whether it accurately represents your key pages, products, and services. For Comprehensive tier audits, we generate a custom llms.txt file based on your site's content.
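For illustration, here is what a minimal file following the proposed llms.txt convention looks like: an H1 with the site name, a blockquote summary, and H2 sections of annotated links. The business and URLs below are invented:

```markdown
# Acme Plumbing

> Family-run plumbing firm serving Greater Manchester since 1998.
> Gas Safe registered. Emergency callouts within two hours.

## Services

- [Boiler repair](https://acmeplumbing.example/boiler-repair): Same-day diagnosis and repair
- [Bathroom installation](https://acmeplumbing.example/bathrooms): Full design and fit service

## About

- [Our team](https://acmeplumbing.example/about): Engineers, accreditations, service area
```

Because the format is plain markdown, there is no tooling barrier: the file can be written by hand and uploaded like any other page asset.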

Agent 8: Brand Entity Signals

AI search engines do not just look at your website. They look at the broader web to verify your brand exists, is credible, and is consistently described. This agent checks for brand mentions across authoritative external sources, consistency in how your business name, address, and phone number appear across directories, presence on key platforms (LinkedIn, Companies House, Google Business Profile), and Wikipedia or Wikidata presence for established brands. Weak or inconsistent brand entity signals make it harder for AI systems to confidently cite your business.

Agent 9: Content Quality (E-E-A-T)

Google's Quality Rater Guidelines define E-E-A-T — Experience, Expertise, Authoritativeness, and Trustworthiness. This agent assesses your content against these criteria: Does the content demonstrate first-hand experience? Are authors credited with verifiable credentials? Does the content cite sources? Is there clear contact information and business transparency? Content that scores poorly on E-E-A-T signals struggles both in traditional Google rankings and in AI citation quality.

Agent 10: Local SEO

For businesses that serve customers in a specific geographic area — whether that is a single postcode or a national service area — local SEO signals are critical. This agent checks your Google Business Profile completeness, NAP consistency (Name, Address, Phone) across all online directories, local schema markup, location-specific page content, and review signals. It also checks for common local SEO errors: service area businesses accidentally set to "storefront" mode, inconsistent address formats across citations, and GBP categories that do not match the site's content.
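NAP consistency checking comes down to normalising each citation before comparing, so that "Acme Plumbing Ltd." with "+44 161 496 0000" and "ACME Plumbing Ltd" with "0161 496 0000" register as the same business. A rough Python sketch (the normalisation rules here are illustrative, UK-centric, and far simpler than a real citation checker):

```python
import re

def normalise_nap(name: str, phone: str) -> tuple[str, str]:
    """Reduce a business name and phone number to a comparable canonical form."""
    name = re.sub(r"[^a-z0-9]", "", name.lower())  # strip case and punctuation
    phone = re.sub(r"\D", "", phone)               # digits only
    phone = re.sub(r"^440?", "0", phone)           # +44 / +44 (0) prefix -> leading 0
    return name, phone

# The same business listed two different ways should normalise identically.
a = normalise_nap("Acme Plumbing Ltd.", "+44 161 496 0000")
b = normalise_nap("ACME Plumbing Ltd", "0161 496 0000")
```

The agent applies the same principle across every directory listing it finds, then reports any citation that fails to match the canonical record.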

Agent 11: Competitor Benchmarking

You cannot improve your SEO in isolation. This agent benchmarks your Digital Visibility Score against the competitors you name in the audit form. It identifies keywords where they outrank you, GEO signals they have that you lack, and technical advantages they hold. The output is a gap analysis — specific areas where the gap between you and your competitors can be closed with actionable work, prioritised by likely impact.

Agent 12: Platform-Specific Analysis

This is the agent most audit tools skip. Once your CMS is identified, this agent applies a platform-specific knowledge base to your results:

WordPress: Common issues include plugin conflicts causing duplicate schema, Yoast or RankMath missing configuration for key page types, and PageSpeed issues from unoptimised images and unused plugin CSS.

Shopify: The platform generates four canonical URL variants per product (/products/slug, /collections/collection/products/slug, etc.). Without proper canonical handling, this creates duplicate content signals. The agent flags which products are affected.

Wix: Robots.txt cannot be edited directly on Wix — any recommendation requires workarounds or a platform migration. The agent accounts for this and only recommends what is actually achievable on the platform.

Squarespace: URL structures are often suboptimal (folders within folders), and the platform has limited server-side rendering options that affect JavaScript SEO. The agent identifies which settings can be changed and which require plan upgrades.

Webflow: Generally strong technical SEO out of the box, but CMS-generated pages often lack proper internal linking. The agent checks CMS collection linking patterns.
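The Shopify canonical check mentioned above can be sketched in a few lines: given a product page URL and the canonical href found in its head, verify that the canonical points at the bare /products/ form rather than a /collections/ variant. The function name and URLs are illustrative:

```python
from urllib.parse import urlparse

def canonical_ok(page_url: str, canonical_href: str) -> bool:
    """True if a Shopify product page's canonical targets /products/<handle>,
    not a /collections/.../products/<handle> duplicate."""
    handle = urlparse(page_url).path.rstrip("/").split("/")[-1]
    return urlparse(canonical_href).path == f"/products/{handle}"
```

Shopify themes usually emit the correct canonical by default, so a failure here typically points to a theme customisation that has overridden it.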

Agent 13: Backlink Profile

This agent assesses your backlink profile — not just quantity but quality and relevance. It flags toxic links that may be generating a penalty signal, and identifies high-authority linking opportunities based on your industry and competitors.

Agent 14: Security Headers

This agent checks security headers (HTTPS, HSTS, Content Security Policy) and core security signals that affect both trust and Google's "site quality" assessment.
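A first pass at the security header check is just set membership over the response headers. A real audit would also grade the values (HSTS max-age, CSP directives), which this sketch ignores; the helper name is ours:

```python
# The two response headers singled out in this article; HTTPS itself
# is verified separately at the connection level.
EXPECTED = ["strict-transport-security", "content-security-policy"]

def missing_security_headers(headers: dict[str, str]) -> list[str]:
    """Return the expected security headers absent from a response.

    Header names are compared case-insensitively, as HTTP requires.
    """
    present = {name.lower() for name in headers}
    return [h for h in EXPECTED if h not in present]
```

Missing headers are low-effort fixes on most platforms: a few lines of server or CDN configuration, with no visible change to the site.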


The 200+ Checks: What Each Category Covers

The 14 agents between them run over 200 individual checks. Here is how they break down by category:

200+ Checks — Category Breakdown

  • Technical SEO: 58 checks
  • GEO / AI Search: 44 checks
  • On-Page SEO: 34 checks
  • Content Quality: 30 checks
  • Platform-Specific: 28 checks
  • Local SEO: 22 checks
  • Other: 18 checks

Technical SEO (58 checks): Crawlability, indexability, redirect health, canonical implementation, sitemap validity, robots.txt analysis, Core Web Vitals, page speed, mobile usability, HTTPS, structured URL architecture, internal link depth, orphan page detection, crawl budget analysis.

GEO / AI Search (44 checks): AI crawler access (GPTBot, ClaudeBot, PerplexityBot), llms.txt presence and validity, schema markup for AI extractability, passage-level answer structure, brand entity signals, citation-ready content formatting, speakable schema, AI platform-specific signals (Google AI Overviews, Perplexity, ChatGPT web search).

On-Page SEO (34 checks): Title tag uniqueness and keyword placement, meta description presence and length, heading hierarchy (H1–H4), image optimisation and alt text, internal linking patterns, keyword density and placement, anchor text diversity, duplicate content detection.

Content Quality (30 checks): E-E-A-T signals (Experience, Expertise, Authoritativeness, Trustworthiness), author credibility signals, content depth versus thin content, readability score, citation and source quality, freshness signals, topical authority indicators, FAQ and structured answer presence.

Platform-Specific (28 checks): CMS identification, platform-specific known issues (Shopify duplicate URLs, Wix robots.txt limitations, WordPress plugin conflicts, etc.), achievable versus non-achievable recommendations filtered by platform, plugin and theme impact assessment for WordPress.

Local SEO (22 checks): Google Business Profile completeness, NAP consistency, local citation health, review signal quality, location schema, local keyword signals, service area coverage, mobile local pack appearance.

Backlinks, Security, and Other (18 checks): Domain authority assessment, backlink quality and toxicity screening, security headers (HTTPS, HSTS, CSP), site security signals.


The Digital Visibility Score: How It's Calculated

Traditional SEO audit scores measure only your SEO health. Our Digital Visibility Score (DVS) measures your total search visibility across both traditional and AI search — in one number.

The formula:

  • 60% SEO Score — How well-optimised your site is for traditional search engines (Google, Bing). Weighted across technical SEO, on-page, content quality, local, and backlinks.
  • 40% GEO Score — How visible and citable you are to AI search engines. Weighted across AI crawler access, citability structure, llms.txt, brand entity, and AI platform-specific signals.

The 40% GEO weighting reflects current AI search market share. As AI search grows, this weighting will be reviewed.
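The arithmetic behind the score is deliberately simple; a sketch (rounding choice is ours for illustration):

```python
def digital_visibility_score(seo: float, geo: float) -> float:
    """Combine the two sub-scores (each 0-100) with the 60/40 weighting."""
    return round(0.6 * seo + 0.4 * geo, 1)

# A site with decent traditional SEO but no GEO work lands in the
# typical small-business band: 0.6 * 75 + 0.4 * 30 = 57.0
digital_visibility_score(75, 30)
```

Because the weighting is fixed, two audits taken 90 days apart are directly comparable: any movement in the number reflects movement in the underlying checks.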

A score of 50–65 is typical for a UK small business website that has had some SEO work done but has not addressed GEO at all. A score below 40 indicates significant technical problems across both categories. A score above 80 represents strong visibility in both channels.

The DVS gives you one number to track over the 90 days between your initial audit and your free re-audit. If you implement the recommendations from your report, you should see a meaningful score improvement.


Why We Review Every Report Before Delivery

AI agents are excellent at pattern recognition across large numbers of data points. They are less reliable at contextual judgement — understanding which finding matters most for your specific business.

For example: a one-page brochure website for a local tradesperson and an e-commerce store on Shopify might both have "missing schema markup" flagged. But the schema types that matter are completely different. The tradesperson needs LocalBusiness and FAQPage schema. The Shopify store needs Product, Offer, and AggregateRating schema. The generic finding is technically correct in both cases, but the specific action is entirely different — and getting it wrong wastes time.

Before every report goes out, a member of the seoandgeo.co.uk team reviews the findings to:

  1. Catch false positives where the AI has flagged something that is actually intentional or correct
  2. Prioritise findings in order of likely impact for your business type
  3. Confirm that platform-specific recommendations are achievable on your CMS
  4. Add context where needed — not just "fix your Core Web Vitals" but "your LCP image is loading late because it is not defined as a priority resource in your theme's head section"

This review typically adds 15–30 minutes to delivery time. We think it is worth it.


The Three Tiers: What Each Includes

We offer three audit tiers because different businesses need different depths of analysis.

Essential (£49): Full automated audit across all 14 agents, 200+ checks, Digital Visibility Score, priority fix list (top 10 actions), platform-specific recommendations, free 90-day re-audit.

Professional (£97): Everything in Essential, plus competitor benchmarking (up to 3 competitors), detailed GEO analysis with specific AI citability scores per page, and an llms.txt file generated for your domain. This tier suits businesses actively investing in digital marketing.

Comprehensive (£197): Everything in Professional, plus a 45-minute strategy call to walk through the findings, a 90-day action plan with prioritised timelines, and a second human review of the re-audit at 90 days. This tier suits businesses where digital visibility is a primary revenue channel.

All three tiers include the same 200+ automated checks. The difference is in the depth of analysis and the level of human support.


Key Takeaways

  • Most SEO audit tools were built before AI search existed. They miss the category of search growing fastest.
  • Our audit runs 14 specialised agents in parallel, completing in 30–45 seconds with over 200 individual checks.
  • Platform-specific analysis is not optional — generic recommendations for the wrong CMS waste time and sometimes cause harm.
  • The Digital Visibility Score (60% SEO, 40% GEO) gives you one number to track across two increasingly distinct search channels.
  • Every report is reviewed by a human before delivery to catch false positives and ensure recommendations are actionable.
  • All three tiers include a free re-audit at 90 days — proof of progress, not a sales call.

Frequently Asked Questions

How long does the audit take?

The audit runs for 30–45 seconds while our 14 agents work in parallel. You receive a partial score and your top 3 findings immediately. The full report — all 200+ checks, competitor analysis, and platform-specific recommendations — is delivered by email within a few minutes.

What platforms does the audit cover?

We audit sites built on WordPress, Wix, Squarespace, Shopify, Webflow, Next.js, and AI website builders (Bolt, Lovable, v0). Each platform gets platform-specific recommendations. Shopify sites get advice about duplicate URL structures. Wix sites get advice about robots.txt limitations. WordPress sites get advice about plugin conflicts and crawl budget.

What is the Digital Visibility Score?

The Digital Visibility Score (DVS) is a single number out of 100 that combines your SEO performance (60% weighting) and your GEO performance — how visible you are to AI search engines like ChatGPT, Perplexity, and Google AI Overviews (40% weighting). It gives you one number to track over time instead of juggling two separate score sets.

Why does seoandgeo.co.uk review results before delivery?

AI agents identify patterns across hundreds of checks very well. They are less reliable at judging which finding matters most for your specific business context. We review every report before it goes out to catch false positives, prioritise findings by actual impact, and ensure recommendations are achievable on your specific platform — not theoretical.

What is included in the free 90-day re-audit?

Every tier includes a free re-audit at 90 days. We run the same 200+ checks on your site again and compare the results against your original report. You see which issues have been fixed, how your Digital Visibility Score has changed, and what still needs attention. It is proof of progress.


Want to see what the audit finds on your site? Get your audit score — starts at £49.

Tags: comprehensive SEO audit UK, SEO audit tool UK, SEO and GEO audit, AI SEO audit, Digital Visibility Score
