SEO · Answer Engine Optimization · Generative Engine Optimization

Search is changing. Get visible everywhere it goes.

SEO is no longer just about Google. A large and growing share of the queries that used to start in a search bar now start in ChatGPT, Claude, Perplexity, or Gemini — and Google itself increasingly answers in an AI Overview before any blue links appear. Ingress engineers your site to rank in classical search and get cited inside the generative answers replacing it.

Every fix shipped as code. Every claim verified against a live signal. No opaque dashboards, no link-building schemes, no AI-generated content slop.

Technical SEO · Schema.org Engineering · /llms.txt · AI Crawler Posture · Core Web Vitals · Append-Only Ledger

The new shape of search

Search has bifurcated. Most sites are optimizing for the half that’s shrinking.

For two decades the model was simple: rank on Google, earn the click. That model still drives substantial traffic — but it now competes with two surfaces that didn't exist five years ago.

Above the blue links, Google’s AI Overviews now answer a meaningful share of US queries before the user ever scrolls. The answer is generated, not retrieved — and whether your site is cited inside it depends on how machine-legibly your content is structured, not on which keyword you matched.

Outside Google entirely, ChatGPT, Claude, Perplexity, and Gemini collectively answer billions of queries that, five years ago, would have started on a search engine. They cite sources. They reference brands. They recommend vendors. And they do all of it with no Search Console, no PageRank, no clear feedback loop telling you whether you’re in the index or not.

The technical surface that determines who shows up has gotten more complex, not less. Most SEO programs are still optimizing for a 2018 SERP that no longer exists.

What is AI visibility?

AI visibility is the measurable degree to which a brand, site, or entity is surfaced and cited by large language models — ChatGPT, Claude, Perplexity, Gemini, and Google’s AI Overviews — when users ask questions inside those products. It is the generative-search analog to traditional search engine ranking.

AI visibility is engineered through three overlapping disciplines: technical SEO, the fundamentals of crawlability, indexability, canonicals, and Core Web Vitals; Answer Engine Optimization (AEO), which structures content so it can be extracted into featured snippets, People Also Ask, and AI Overviews; and Generative Engine Optimization (GEO), which makes your site discoverable and citable inside conversational AI through /llms.txt, a deliberate AI-crawler posture, structured data graphs, and entity disambiguation.

All three matter. None of them work without the others.

Three layers we engineer for

A real SEO and AI visibility program operates on all three at once. Skip one and the others underperform.

Classical SEO

The Google-shaped surface

Crawlability, indexability, and ranking on the ten blue links. Self-referential canonicals, accurate sitemaps, no 301 chains, lossless internal linking, on-page entity optimization, Core Web Vitals inside the green thresholds. The fundamentals that have always mattered — done correctly.

Answer Engine Optimization (AEO)

Featured snippets, People Also Ask, AI Overviews

Extractable 40–60 word answers that lead with the answer. FAQ schema where real Q&A exists. Definition-first paragraphs that match how Google's generative box and Bing Copilot pull source material. Heading structure, anchor text, and entity coverage tuned for answer extraction — not keyword density.
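To make the extraction target concrete, here is a minimal FAQPage JSON-LD sketch. The question, answer text, file path, and validation step are illustrative, not production markup; the point is an answer-first response inside the 40–60 word window, in a shape schema validators accept.

```shell
# Illustrative FAQPage JSON-LD: the answer leads with the definition and
# stays short enough to be lifted whole into an answer box.
cat > /tmp/faq-schema.json <<'EOF'
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is AI visibility?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "AI visibility is the measurable degree to which a brand or site is surfaced and cited by large language models when users ask questions inside those products."
      }
    }
  ]
}
EOF
# Confirm the JSON parses before shipping it in a <script type="application/ld+json"> tag.
python3 -m json.tool /tmp/faq-schema.json > /dev/null && echo "valid JSON-LD"
```

In production this block would be rendered server-side on the page that actually answers the question, not injected client-side.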

Generative Engine Optimization (GEO)

ChatGPT, Claude, Perplexity, Gemini

Discoverability and citation inside conversational AI. /llms.txt at the root, deliberate AI-crawler robots posture (GPTBot, ClaudeBot, PerplexityBot, Google-Extended, CCBot), entity disambiguation via Wikipedia/Wikidata sameAs, structured data graphs that LLMs ingest cleanly, and content shape that survives the chunking and reranking models actually use.

What we deliver

Concrete artifacts in your repository. Verifiable signals on the live site. A ledger that compounds over time instead of evaporating between vendor cycles.

Technical audit with curl-checkable proof

Every finding ships with the exact HTTP header, DOM node, schema validator pass, or Lighthouse metric that proves it. No opaque rubrics — just signals you can re-verify yourself.

Schema.org graph engineering

Organization, WebSite, BreadcrumbList, Service, FAQPage, Article, and Product schemas wired with @id cross-references — so search engines and LLMs read your site as a connected entity graph, not a pile of disconnected pages.
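A sketch of what "wired with @id cross-references" means in practice. The domain, names, and node set are illustrative; the mechanism is that every node carries a stable @id and other nodes reference it by that @id, so parsers resolve one connected graph rather than three disconnected blobs.

```shell
# Illustrative @graph: WebSite.publisher and Service.provider point back
# at the Organization node by @id instead of repeating its fields.
cat > /tmp/entity-graph.json <<'EOF'
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://example.com/#org",
      "name": "Example Co",
      "url": "https://example.com/"
    },
    {
      "@type": "WebSite",
      "@id": "https://example.com/#website",
      "url": "https://example.com/",
      "publisher": { "@id": "https://example.com/#org" }
    },
    {
      "@type": "Service",
      "@id": "https://example.com/#technical-seo",
      "name": "Technical SEO",
      "provider": { "@id": "https://example.com/#org" }
    }
  ]
}
EOF
python3 -m json.tool /tmp/entity-graph.json > /dev/null && echo "graph parses"
```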

/llms.txt and AI-crawler posture

A curated, canonical /llms.txt index for AI ingestion. Explicit Allow/Disallow rules per AI bot — GPTBot, ClaudeBot, PerplexityBot, Google-Extended, CCBot, anthropic-ai — instead of accidental defaults that may not match your intellectual-property and brand strategy.
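A sketch of what a deliberate posture can look like in robots.txt. Which side of the Allow/Disallow line each bot lands on is a business decision, not a default — the split below (allow the answer-engine crawlers, block the pure training-corpus collectors) is one illustrative policy, not a recommendation.

```shell
# Illustrative per-bot robots.txt posture; the sitemap URL is a placeholder.
cat > /tmp/robots.txt <<'EOF'
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
EOF
grep -c 'User-agent' /tmp/robots.txt   # prints 6
```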

Sitemap and lastmod hygiene

Real lastmod dates that reflect real edits — never new Date() on every request. Google distrusts always-fresh timestamps. Sitemaps stay under the 50,000-URL / 50 MB limits, contain only URLs that return 200, and point only at indexable canonicals.
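The two checks above are greppable. A sketch against a sample sitemap (URLs and dates are illustrative): count the URL entries against the 50,000 limit, and confirm the lastmod values vary — a sitemap where every lastmod is identical and always fresh is the signature of a timestamp generated at request time.

```shell
# Sample sitemap standing in for a live one.
cat > /tmp/sitemap.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc><lastmod>2024-11-02</lastmod></url>
  <url><loc>https://example.com/about</loc><lastmod>2025-01-18</lastmod></url>
</urlset>
EOF
urls=$(grep -c '<loc>' /tmp/sitemap.xml)
distinct=$(grep -oE '<lastmod>[^<]+' /tmp/sitemap.xml | sort -u | wc -l)
[ "$urls" -le 50000 ] && echo "URL count within limit: $urls"
[ "$distinct" -gt 1 ] && echo "lastmod values vary: $distinct distinct"
```

Against a live site, the heredoc is replaced by `curl -sS https://yoursite.com/sitemap.xml`.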

Core Web Vitals on the field, not the lab

LCP under 2.5s with hero preload and fetchpriority hints. INP under 200ms via deferred hydration and broken-up long tasks. CLS under 0.1 with explicit image dimensions and matched-fallback fonts. We measure with CrUX-grade field data, not synthetic lab runs.
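The markup-level levers behind those three thresholds, as an illustrative fragment (file names and the size-adjust value are placeholders): preload plus fetchpriority for LCP, explicit dimensions for CLS, and a metric-matched fallback font so the swap doesn't reflow.

```shell
# Illustrative head/body fragment showing the three CWV levers in markup.
cat > /tmp/hero-snippet.html <<'EOF'
<!-- LCP: fetch the hero image early and at high priority -->
<link rel="preload" as="image" href="/hero.avif" fetchpriority="high">
<img src="/hero.avif" width="1200" height="630" fetchpriority="high" alt="Hero">
<!-- CLS: size-adjust the local fallback so the web-font swap barely moves layout -->
<style>
  @font-face {
    font-family: "BrandFallback";
    src: local("Arial");
    size-adjust: 104%;
  }
</style>
EOF
grep -c 'fetchpriority' /tmp/hero-snippet.html   # prints 2
```

INP has no single markup lever; it comes from deferring hydration and splitting long main-thread tasks, which is framework-specific work.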

Internal linking and entity coverage

Descriptive anchor text, no orphan pages, breadcrumb chains on every non-root indexable route, and topical clusters that establish you as the authoritative entity for the queries that matter — instead of a site that occasionally mentions them.

E-E-A-T signals that pass scrutiny

Author bios linked to verified profiles (LinkedIn, GitHub, Crossref). Citations to primary sources. dateModified that matches real edits. Organization sameAs to Wikipedia/Wikidata/Crunchbase where defensible. Real expertise made machine-legible.

Append-only ledger of every fix

Every increment recorded with date, files, the live signal that verified it, and the commit. Nine months from now, when an audit asks why your canonicals look the way they do, you have the receipts. Reversal becomes a single git revert away.

Exhibit A: this site

We run our own SEO and AI visibility program in public.

Most SEO agencies cannot point at their own site as the proof. Many cannot point at their own site at all. We can. Open a terminal:

$ curl -sS https://ingresssoftware.com/ | grep -c '<h1'

1 — exactly one h1 in the server-rendered HTML, on every page

$ curl -sSI https://ingresssoftware.com/sitemap.xml

200 OK, content-type application/xml, served from the edge

$ curl -sS https://ingresssoftware.com/llms.txt | head -3

Curated /llms.txt for AI-crawler ingestion — present, canonical, hand-maintained

$ curl -sS https://ingresssoftware.com/about | grep -c '"@type":"BreadcrumbList"'

1 — BreadcrumbList JSON-LD on every non-root indexable route

$ curl -sS https://ingresssoftware.com/sitemap.xml | grep -oE '<lastmod>[^<]+' | sort -u | head

Real edit dates — not new Date() on every request, the kind of always-fresh timestamp Google learns to ignore

Behind the public site is an append-only ledger of every SEO and AI-visibility increment — date, files touched, the live signal that verified it, and the commit. When a future audit asks why a canonical looks the way it does, or whether a fix actually shipped, we have the receipts. We will run your program the same way.

Who hires us, and why

Three patterns we see most often. The shape of your engagement will look like one of these — or a combination.

B2B SaaS losing to AI assistants

For: Marketing and growth leadership

Problem:

Your top-of-funnel comparison queries — 'best X for Y', 'X vs Z', 'how does X work' — increasingly resolve inside ChatGPT and Perplexity instead of on your site. You can see the clicks evaporating in GA4, but the AI answers don't cite you.

What we do:

Audit which prompts surface competitors and which surface you. Restructure landing pages with definition-first paragraphs, FAQ schema for real Q&A, and a structured-data graph LLMs can resolve. Establish /llms.txt and AI-crawler posture. Track citation share over time across the major models.

Outcome:

Visibility inside AI assistants — measured by direct prompt sweeps and referrer logs — alongside traditional SERP recovery. The work is shippable as code, not as content slop.

Professional services with thin organic presence

For: Founders and partners at consulting, legal, and financial firms

Problem:

You sell expertise but your site reads like a brochure. Search engines and AI models can't find a coherent entity to cite. Competitors with worse credentials outrank you because they ship structured pages and you ship PDFs.

What we do:

Convert credentials into machine-legible entities — Person schemas with sameAs to verified profiles, Service schemas linked to Organization, real Q&A converted into FAQ schema, Article schema for thought-leadership pieces with author and dateModified that match reality.

Outcome:

Your firm becomes a discoverable, citable entity in both classical search and generative answers. Inbound qualified leads start arriving with prompts like 'I asked Claude about firms that handle X and they recommended you.'

Mid-market enterprise with technical-debt SEO

For: Engineering and platform leadership inheriting a CMS

Problem:

Your site has 800 pages, 47 redirect chains, sitemaps with 404s, JS-only content that Googlebot mostly renders but inconsistently, and seven different schema implementations from seven different vendors. Every audit finds different problems and nobody trusts the numbers.

What we do:

Engineering-led remediation: migrate render-blocking JS off the critical path, collapse redirect chains, deduplicate canonicals, unify schema under a typed helper, fix sitemap fidelity, and stand up monitoring that flags regressions in CI. Atomic increments, each shippable as a single PR.

Outcome:

A site you can reason about. Audits stop flagging the same issues every quarter. SEO becomes a software engineering practice with regression tests — not a recurring consulting bill with no compounding outcome.

How Ingress compares to other approaches

vs. traditional SEO agencies

Agencies sell content production and link building dressed up as expertise. We are senior software engineers who treat SEO as a code problem. Every fix is a diff, every claim has a verification command, every increment is recorded in a ledger. We will not pad an invoice with blog posts.

vs. AI content mills and 'GEO specialists'

The current crop of generative-SEO vendors mostly publishes AI-written content and hopes models pick it up. AI training and retrieval increasingly penalize that. We engineer the structural and entity-graph surface that real expertise needs to be surfaced through — and we leave the actual claims to your subject-matter experts.

vs. building it in-house

In-house teams know your business but rarely have a senior SEO + structured-data + Core Web Vitals + AI-crawler-posture specialist on staff. We come in for the heavy structural work, hand off a documented system your engineers can maintain, and do not require a permanent retainer to keep working.


Want to see what your site looks like to Google and to ChatGPT?

Bring the URL. We will walk through what the major crawlers and LLMs see, what is missing, and what the highest-leverage fixes are — in 30 minutes, with a senior engineer, no sales pitch.