My GEO process (if I ever did one)
Updated: 23/02/26
GEO (for Generative Engine Optimization) is the recently popular term for optimizing the visibility of a brand or product on AI chatbots.
I don't do GEO, but if I had to, here's what I'd do (based on 150+ hours of training and R&D on AI and RAG systems).
1 – Audit the current state of visibility
- Check whether, outside of RAG systems, LLMs "know" my brand.
- Analyze the frequent terms that come up when I ask for a description of my brand.
- Analyze the gap in frequent terms vs. competitors to spot strengths/weaknesses and figure out the right communication angle. I've got a tool that does this automatically, along with the previous steps.

- Run these frequent-term tests not only on the latest models but also on older ones, and keep track to monitor the evolution.
- Analyze the generated phrases used to describe my brand and list the inaccurate ones to know the kinds of mistakes my persona might see about my brand (same here, I've got a tool to help with that).

- Check if Common Crawl has recently added my site to its database (via their site: https://index.commoncrawl.org/).
- Generate a top 10/15 list of the most likely brands LLMs would suggest for a given product/service, with an estimated probability % (don't have a tool for this yet, but it's on my to-do list, lol).
I put "know" in quotes because, strictly speaking, an LLM doesn't actually have any knowledge.
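The frequent-term gap analysis above can be sketched as a simple word-frequency diff over LLM-generated descriptions. This is a minimal illustration, not my actual tool; the stop-word list, function names, and sample descriptions are placeholders:

```python
from collections import Counter
import re

# Placeholder stop-word list; swap in a real one for your language.
STOP_WORDS = {"a", "an", "the", "is", "and", "of", "in", "for", "with", "that"}

def term_frequencies(descriptions):
    """Count how often each non-stop-word appears across LLM-generated descriptions."""
    counts = Counter()
    for text in descriptions:
        counts.update(w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP_WORDS)
    return counts

def term_gap(my_descriptions, competitor_descriptions, top_n=10):
    """Terms the LLM associates with the competitor but rarely with my brand."""
    mine = term_frequencies(my_descriptions)
    theirs = term_frequencies(competitor_descriptions)
    gap = {t: c - mine.get(t, 0) for t, c in theirs.items()}
    return sorted(gap, key=gap.get, reverse=True)[:top_n]
```

Feed it multiple runs of "describe brand X" outputs per brand; the top of the gap list suggests the angles competitors own in the model's "mind" and you don't.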
2 – Technical Audit
- Are AI crawlers actually able to access my site, or are they getting blocked by something like robots.txt or a firewall? I've got a tool that automatically checks this against all the major LLM crawlers' user agents. If you want me to test a site for you, just hit me up (LinkedIn works fine).
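A minimal version of that check can be done with the standard library alone: parse a site's robots.txt and test it against the main AI crawler user agents. The user-agent list below is indicative, not exhaustive; check each vendor's documentation for the current names:

```python
from urllib import robotparser

# Indicative list of AI crawler user agents (verify against each vendor's docs).
AI_CRAWLERS = ["GPTBot", "ChatGPT-User", "ClaudeBot", "PerplexityBot", "Google-Extended", "CCBot"]

def check_ai_access(robots_txt: str, url: str) -> dict:
    """Return {user_agent: allowed?} for a given URL under this robots.txt."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {ua: rp.can_fetch(ua, url) for ua in AI_CRAWLERS}
```

To run it against a live site, fetch its /robots.txt and pass the text to check_ai_access (this only covers robots.txt; a firewall or bot-protection layer can still block crawlers that robots.txt allows).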

- If you're using Cloudflare, take a look at your settings, as we know it blocks AI bots by default. Go to Log in → Security → Bots → Block AI Bots.
- Run a task with an AI agent (like filling out a form or adding a product to the cart) to make sure it can actually complete the action. AI agents browsing the web on our behalf are already in development, and they might well be part of the future of the internet, so this isn't something to ignore.
- Make sure there are no spider traps on your website (especially if you run an e-commerce site). This issue can occur quite often when filters aren't working properly.
A spider trap happens when a broken link leads to another broken link, which leads to another bugged page, and so on. The crawler ends up stuck in an endless loop of useless, broken, or error pages.
You need to be extremely careful about this. As Kevin Lesieutre demonstrated in his excellent talk (which I highly recommend), with a case study of an e-commerce site, a crawler that hit a spider trap eventually stopped crawling the website altogether: after repeatedly hitting broken pages, it concluded that the site wasn't worth crawling and considered it low quality.
So, it's definitely worth running a crawl beforehand to make sure there are no spider traps. And if you're managing an e-commerce website, be especially vigilant about the filters on your product listing pages.
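To make the spider-trap idea concrete, here's a toy crawl over a pre-built link graph that flags pages reachable only through chains of error pages. The graph, status codes, and threshold are invented for illustration; a real crawler discovers links as it goes:

```python
from collections import deque

def find_spider_traps(link_graph, status_codes, start, error_threshold=3):
    """BFS the site; report URLs reached through chains of >= error_threshold error pages.

    link_graph: {url: [linked urls]}; status_codes: {url: http status}.
    """
    traps = set()
    seen = {start}
    queue = deque([(start, 0)])  # (url, consecutive error pages on this path)
    while queue:
        url, errors = queue.popleft()
        # Extend the error chain if this page errors, otherwise reset it.
        errors = errors + 1 if status_codes.get(url, 200) >= 400 else 0
        if errors >= error_threshold:
            traps.add(url)
        for nxt in link_graph.get(url, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, errors))
    return traps
```

Run a crawler over your faceted-filter URLs, feed the resulting graph and statuses into something like this, and any non-empty result is a red flag.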
- Some chatbots, like ChatGPT, don't handle JavaScript rendering, so make sure your content is accessible without JavaScript. To test this, don't hesitate to ask ChatGPT to summarize your page or ask it questions about the content to see what it's able to understand.
3 – Map Out Query Fan Outs
Query fan-outs are background queries that AI chatbots run on search engines like Google or Bing to retrieve information.
Example: if a user enters the prompt "Recommend me a vegan and LGBT-friendly restaurant in Lyon", the LLM might then search for "best vegan restaurant Lyon" and "LGBT-friendly restaurant Lyon".
- Bulk collect as many search needs of the persona as possible (via PAA, Reddit questions, keywords, etc.).
- Turn those search needs into prompts, making personalized variations with my brand's personas.
- With your list of keywords and the different personas you've found, discover all the prompts your users might type by using the following prompt on the LLM of your choice:
If I were [your persona] trying to find [your keywords or questions], what might I ask?
- Run all those prompts multiple times on AI Mode / Gemini / ChatGPT to grab the query fan-outs in bulk, then cluster them (you can use DataForSEO to get query fan-outs from ChatGPT in bulk). Finally, list out the query fan-outs where I'm not ranking in the top 10 on Google / Bing: that becomes a content roadmap. (I've got a tool to pull query fan-outs from Gemini and ChatGPT; check out my tutorials posted on LinkedIn to try them.)
By the way, some prompts will not require the chatbot to generate query fan-outs, so try to clean your list of prompts before checking their fan-outs.
Does your prompt generate query fan-outs? Here's how to check:
- For ChatGPT: try the OpenAI grounding tool made by Dejan.
- For AI Mode / AI Overviews / Gemini: try the Google Grounding API (if you don't get any query as an output, it means your prompt doesn't require query fan-out).
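The "cluster the fan-outs, then keep the ones where you're not in the top 10" step can be sketched like this: a naive token-overlap (Jaccard) clustering of collected fan-outs, plus a filter against your rankings. The ranking dict is a stand-in for whatever rank-tracking export you use:

```python
def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two queries, 0.0 to 1.0."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def cluster_fanouts(queries, threshold=0.5):
    """Greedy clustering: attach each query to the first cluster it overlaps enough with."""
    clusters = []
    for q in queries:
        for cluster in clusters:
            if jaccard(q, cluster[0]) >= threshold:
                cluster.append(q)
                break
        else:
            clusters.append([q])
    return clusters

def content_roadmap(fanouts, google_ranks, top_n=10):
    """Fan-out queries where the site ranks below top_n (or not at all)."""
    return [q for q in fanouts if google_ranks.get(q, 999) > top_n]
```

Embedding-based clustering would do better on real data; the greedy token-overlap version is just the smallest thing that shows the shape of the workflow.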

4 – Content Creation
- I'll do both internal and external content creation, since chatbots tend to favor earned content, according to this research paper.
- A classic move: make sure your brand shows up in as many ranking / "top" / "best X to do Y" list articles as possible.
- From my list of query fan-outs (but only the ones I know generate outputs mentioning brands), I check the rankings on Google (and maybe also Bing, which I don't think should be written off), and I reach out to the sites that are already ranking to ask them to add a mention of my brand. Having done this before, I know it's very time-consuming, so we reserve this action for the query fan-outs we consider highly prioritized or lucrative.
- If some query fan-outs are topics that can be covered on my site, then create one page per prompt (that page will be optimized for all the query fan-outs generated by that prompt).
- Some prompts will generate query fan-outs in English, even if the language of my site or the prompt itself is different. In that case, you'll need to either create external content on English-language websites or develop an English version of the site to target those English query fan-outs.
- I'd especially focus on BOFU content. For example, an article like "My review of site X: is it trustworthy?" ranks easily on Google/Bing and also gets pulled by ChatGPT when someone asks about the reliability of an e-commerce site before buying. I've tested it: it works well, and I think AI can serve as the final step for the persona before conversion. So you might as well control the narrative.
- Keep pushing a quality content strategy that covers the entire topic. If your pages aren't more "valuable" than basic AI summaries, then honestly your site has no reason to exist. We actually published an article with Paul Grillet that breaks down our process and method for creating outstanding content.
- If youâre running an e-commerce business, make sure your product feed and visibility on Google Shopping are up to date and working properly. Indeed, we know that ChatGPT relies on Google Shopping to power its own ChatGPT Shopping display.
- If you're running an e-commerce site, the way you write your product page descriptions also matters; here's a breakdown of an arXiv paper that explains how to do it.
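The "one page per prompt, optimized for all of that prompt's fan-outs" rule from this section is easy to operationalize as a grouping step. A hypothetical sketch (the brief format is made up; the input is your prompt-to-fan-outs mapping from section 3):

```python
def page_briefs(prompt_fanouts):
    """One content brief per user prompt, covering every fan-out query that prompt triggers.

    prompt_fanouts: {user prompt: [query fan-outs observed for it]}.
    """
    return [
        {"page_topic": prompt, "queries_to_cover": sorted(set(fanouts))}
        for prompt, fanouts in prompt_fanouts.items()
    ]
```

Each brief then becomes one page, with headings and sections mapped onto its `queries_to_cover` list.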
5 – Local GEO
If local visibility in GEO is something you care about, here’s a reminder worth keeping in mind: ChatGPT still relies on the Google Maps API for its local map results.
How do we know? In the JSON response code on the ChatGPT interface, the local entity IDs match the Google Maps API place ID format, confirming Google Maps remains the underlying data source.
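If you want to sanity-check this yourself: most Google Maps place IDs start with "ChIJ" followed by a base64url-style string (some use other prefixes, so treat this as a heuristic, not a spec). A rough pattern match on the entity IDs found in the response JSON could look like:

```python
import re

# Heuristic: the most common Google place ID shape is "ChIJ" + base64url characters.
PLACE_ID_RE = re.compile(r"^ChIJ[0-9A-Za-z_-]{10,}$")

def looks_like_place_id(entity_id: str) -> bool:
    """True if the ID matches the typical Google Maps place ID format."""
    return bool(PLACE_ID_RE.match(entity_id))
```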

Here’s what I’ve learned from personal testing in the kitchen retailer space:
Rating is a strong filter. In my test, no business with a rating below 4.5/5 appeared in ChatGPT's local results.
Review volume matters too. In my test, some businesses with a perfect 5/5 but fewer than 20 reviews were pushed to secondary results. So a high average alone isn't enough: you need sufficient review volume to back it up.
For “best X in [city]” queries, there’s a clear correlation between rating and visibility. ChatGPT favors high average scores for these intent types.
Web sources play a supporting role, not a selection role. The fan-out queries and blog sources cited by ChatGPT mostly served to enrich business descriptions. They didn’t drive which businesses were selected.
So even in a GEO context, maintaining a strong Google Business Profile with high ratings and a solid volume of reviews isn't optional. ChatGPT is pulling directly from Google's infrastructure to build its local answers. This reinforces what I've been advocating for a while: if you want to understand how local SEO should really be done, GBP optimization and review management are foundational, whether you're optimizing for Google or for AI-powered search.
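The thresholds I observed (rating ≥ 4.5, roughly 20+ reviews) can be turned into a quick self-diagnostic. To be clear, these cutoffs come from one test in one vertical; they are a heuristic, not an official ChatGPT rule:

```python
def local_visibility_check(rating: float, review_count: int,
                           min_rating: float = 4.5, min_reviews: int = 20) -> str:
    """Rough heuristic from my kitchen-retailer test, not an official selection rule."""
    if rating < min_rating:
        return "at risk: rating below the threshold seen in primary results"
    if review_count < min_reviews:
        return "secondary: high rating but too few reviews to back it up"
    return "eligible: rating and review volume both above the observed bar"
```

Running this over your GBP listing (and your local competitors') tells you at a glance who clears the bar I saw in testing.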
6 – Measure
- Check the AI logs (especially the ones triggered by a user request) to get an idea of how many times your content is being pulled as a source, and which content specifically. I've got a LinkedIn post listing all the logs worth tracking.
Recently, we found the following user agent: GoogleAgent-URLContext, which seems to let Gemini access the content of a URL directly, without going through its search engine or through Google. Monitoring this user agent therefore makes it possible to know that someone using Gemini has requested information about this URL. More info about the URL context API here.
- Here are the interesting and actionable insights you can uncover by analyzing the AI logs collected on your server (I stole them from Jerome Salomon):
- Check for 499 errors in the logs: that's when ChatGPT decides to cut its visit short because your site took too long to load (so yeah, time to revisit TTFB or other webperf issues).
- Even if it's marginal, keep an eye on traffic from AI chatbots by adding this regex into your analytics tool.
^.*ai|.*\.openai.*|.*copilot.*|.*chatgpt.*|.*gemini.*|.*gpt.*|.*neeva.*|.*writesonic.*|.*nimble.*|.*outrider.*|.*perplexity.*|.*google.*bard.*|.*bard.*google.*|.*bard.*|.*edgeservices.*|.*astastic.*|.*copy.ai.*|.*bnngpt.*|.*gemini.*google.*$
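Applied to pre-parsed access-log entries, a simplified variant of that regex gives you AI-bot hit counts and the 499 early-exits mentioned above. The user-agent pattern here is deliberately narrower than the analytics regex, and the expected input format is an assumption; adapt the parsing to your own log layout:

```python
import re
from collections import Counter

# Simplified variant of the AI-traffic regex above, applied to user-agent strings.
AI_UA = re.compile(r"gptbot|chatgpt|claudebot|perplexitybot|google-extended|ccbot", re.I)

def summarize_ai_hits(log_entries):
    """Count hits per AI user agent, plus 499 responses (the bot gave up waiting).

    Expects tuples of (status_code, user_agent), e.g. pre-parsed from an access log.
    """
    hits, early_exits = Counter(), 0
    for status, ua in log_entries:
        match = AI_UA.search(ua)
        if match:
            hits[match.group(0).lower()] += 1
            if status == 499:
                early_exits += 1
    return hits, early_exits
```

A rising early_exits count is exactly the TTFB/webperf signal the bullet above tells you to investigate.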
- Measure the number of clicks coming from AI Overviews by setting up a JavaScript variable in Google Tag Manager (check out the tutorial for how to do it). This tracking relies on the parameter used by AI Overviews, which (as far as I know) is always included in the source URLs. That said, this should be confirmed, since I haven't personally dug into AI Overviews for a while (it's still not rolled out in France).
At the end of the day, you've got to understand that GEO is way more about branding than traffic acquisition. The goal is to get your brand mentioned, not to expect people to click through to your site (they basically never do).
So apart from monitoring direct traffic or branded searches in search engines (which could just as well come from your overall comms efforts), it's basically impossible to measure the direct business impact of your GEO strategy.
Bing Webmaster Tools Just Launched AI Performance Reporting

Bing rolled out a new dashboard showing how your content is cited in AI-generated answers across Microsoft Copilot, Bing AI summaries, and select partner integrations.
What you can track:
- Pages cited in AI answers: which URLs are referenced most frequently
- Average cited pages: daily unique pages cited during your selected timeframe
- Grounding queries: key phrases the AI used when retrieving your content
- Citation trends over time: 7 days, 30 days, 3 months, or custom ranges
Important clarification from Jean-Christophe Chouinard:
The "grounding queries" shown are NOT the actual user prompts. They're labels assigned to prompts by Bing's system: grouped, generalized phrases summarizing citation activity.
Jean-Christophe Chouinard on LinkedIn

My take:
This is a first step from a major AI chatbot player, and I appreciate the effort. But honestly? It’s still disappointing and hard to get actionable data like we’re used to in traditional SEO.
The biggest issue: you need existing visibility on Bing to get meaningful data. In France (my market), Bing usage is minimal, which makes this tool pretty limited in practice.
We’re still far from having the granular, workable data we need to optimize for GEO the way we do for classic search.
If you feel overwhelmed by "GEO", before rushing into offering a half-baked service, I can only recommend one thing: seriously educate yourself. Yes, it takes time, but if you want to be an honest, trustworthy service provider, it's the baseline.
Here are two of my LinkedIn carousels that give a lot of insight into how AI Search and AI in general work: a good starting point, but obviously not enough to fully master the topic. Sorry, they're in French.
Ian Sorin is an SEO consultant at Empirik agency in Lyon, France. Passionate about search engine and LLM algorithms, he actively monitors patents, updates, and research papers to better understand possible manipulations on these systems. In his spare time, he develops tools to help automate analysis tasks and regularly conducts tests on personal projects to discover new GEO and SEO tricks.

