Last Updated: January 10, 2026
- Google AI Overviews still appear in the first organic slot most of the time, but a meaningful share now show up in positions 2 to 5 and sometimes even lower.
- When an AI Overview lands above your result, expect lower clickthrough rates on many informational queries, especially on mobile.
- You can track AI Overview placement with rank tracking tools, Google Search Console, and simple sheet workflows that tie SERP features to traffic shifts.
- The smart move right now is not to panic, but to segment keywords by AI impact, adjust content goals, and build traffic that does not depend only on Google.
Google’s AI Overviews are no longer just a shiny box glued to position 1; they now move up and down the page based on query type, layout, and how users seem to react.
Most overviews still sit at the very top, but a growing slice of queries now show them tucked under ads, mixed with other features, or sitting mid-page, which quietly reshapes how your rankings and clicks really work.
Key context and current numbers
Let me anchor this with numbers first, because hand-waving does not help when you are answering to a client or a boss.
Across a mix of third-party studies and client data from late 2025 into early 2026, AI Overviews show up on a minority of queries, and when they do, they sit in position 1 the large majority of the time, but not always.
| Dataset | Period | Queries with AI Overview | Overviews not in position 1 | Typical lowest position |
|---|---|---|---|---|
| Large 10M+ SERP panel (multi-country) | Mid 2025 | Single-digit % of all queries | ~9% | 5-6 |
| Mixed client portfolio (content + ecommerce) | Q4 2025 | Higher on broad informational terms | 10-13% | 5-6 |
| Client sample, mobile-only | Q4 2025 | More frequent vs desktop | 12-15% | 5-6 |
If you see your average position barely move but clicks fall, there is a fair chance an AI Overview or another feature slid in above you without a classic “ranking drop” in the report.
I would treat those numbers as a benchmark, not a fixed rule, because Google keeps tweaking where these boxes show up and on which queries.
The trend is clear, though: AI Overviews are still top-heavy, but they now spread down to positions 4 to 6 in a noticeable chunk of queries.

Where AI Overviews actually appear now
The old pattern was simple: AI Overview at the top, then the rest of the organic results fighting under it.
That still happens most of the time, but when you zoom into queries that trigger an overview, you see a more nuanced pattern.
Distribution by position when they are not at the top
Looking at mixed-tool and client data from late 2025, here is roughly how AI Overviews tend to be placed when they do not own position 1.
| AI Overview position | Share of non-position-1 cases |
|---|---|
| 2 | ~45-55% |
| 3-4 | ~30-40% |
| 5+ | ~10-20% |
In plain language, if an AI Overview is not first, it is usually second, and only a small share sinks into the mid-SERP range like position 5 or 6.
On mobile, those positions can still be above the fold or just below it, so you cannot assume a position 3 overview is harmless.
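To turn the conditional shares above into overall odds for a query that triggers an AI Overview, you can multiply them out. A quick back-of-envelope; the midpoints used here are assumptions picked from the ranges in the tables, not measured values:

```python
# Back-of-envelope: combine "share of overviews not in P1" with the
# conditional position split when they are not in P1.
# The midpoints below are assumptions taken from the ranges above.

non_p1_share = 0.10  # ~9-13% of AI Overviews are not in position 1

# Conditional distribution when the overview is NOT in position 1
position_split = {"P2": 0.50, "P3-P4": 0.35, "P5+": 0.15}

overall = {pos: non_p1_share * share for pos, share in position_split.items()}
overall["P1"] = 1 - non_p1_share

for pos, share in overall.items():
    print(f"{pos}: {share:.1%} of AI Overview queries")
```

So even under these rough assumptions, only about 1 to 2 queries in 100 with an overview will show it at position 5 or lower, which matches the "usually second" pattern described above.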
Country and language differences
The early pattern where Japan and Spain had a higher share of non-position-1 overviews has mostly held, but the gap is less dramatic now that more markets are live.
| Country | % of AI Overviews not in P1 | Notes |
|---|---|---|
| Japan | ~13-15% | More non-P1 on product comparisons and long questions |
| Spain | ~11-13% | Higher spread into P3-P5 |
| United States | ~9-11% | Overviews still heavily skewed to P1-P2 |
| United Kingdom | ~8-10% | Similar to US but with more tests on mobile |
| Germany, France, Canada | ~9-12% | Pattern sits between US and Spain |
This is not perfect census data, but it matches what you see if you run cross-country SERP checks on a broad keyword set.
The short version: non-position-1 overviews now exist in every major market, not just in a few early test regions.
Query types where non-position-1 is more common
Google is not random about this; some query patterns are more likely to get a lower-placed overview.
This is not theory: you can see it if you segment by intent.
| Query type | AI Overview presence | Non-P1 share (when present) | Comment |
|---|---|---|---|
| Simple informational (“how to boil eggs”) | High | Low | Usually at P1, sometimes ahead of all organic |
| Complex informational / YMYL | Medium | Medium-high | More likely to be lower or collapsed |
| Commercial research (“best laptop under 1000”) | Medium-high | Medium | Placement varies with ads and shopping units |
| Pure navigational (brand terms) | Low | Very low | Brand intent tends to win the top spot |
If your traffic leans heavily toward complex health, finance, or B2B research, expect AI Overviews to appear, but not always as the very first thing users see.
In practice, this means you cannot treat all AI Overview queries the same when you model traffic or plan content.
An AI box above everything is a different threat than one sitting between organic results or under a news box.

Why Google sometimes ranks AI Overviews lower
The honest answer is that no one outside Google has the full rulebook, and anybody pretending they do is stretching it.
But you can piece together a working model from patterns, public statements, and a lot of SERP watching.
Query intent and risk level
Google is much more careful with queries where a wrong or shallow answer can hurt people or cause serious confusion.
Think health, money, legal rights, or anything that hints at diagnosis or treatment.
- For low-risk, basic questions, AI Overviews are more likely to sit confidently at position 1.
- For YMYL topics, overviews tend to be either missing, collapsed, or not the first thing on the page.
- On some sensitive queries, Google seems to prefer surfacing official bodies or top publishers above any AI summary at all.
I do not think this is just about brand pressure; it is common sense that a machine-written box should not be your first exposure to complex medical or financial advice.
And yes, there are still messy examples where the AI box arguably shows up higher than it should, but the broad trend is more conservative placement on high-risk topics.
SERP feature crowding and layout choices
The position of AI Overviews is also shaped by what else Google wants to show for that query.
On many product and news-related searches, the page is already packed.
- Top Stories and video carousels often hold the top visual slots for breaking topics.
- Shopping units and product carousels fight for the upper fold on commercial queries.
- Local packs tend to sit high for location-heavy intent.
What you often see is an AI Overview getting nudged under these blocks, especially when Google thinks freshness, multimedia, or local answers matter more than a generic synthesis.
So even if the AI answer exists, it might not be what Google wants as the first screen on a phone.
User behavior signals and test loops
There is mounting evidence from click maps and session data that user response influences where these boxes stick around.
When users consistently scroll past AI Overviews on a certain query class, those overviews seem to appear less frequently or lower over time.
Hypothesis: Google is quietly testing how much people actually engage with AI Overviews at different positions, and tuning placement to where they help more than they annoy.
Some industry tests have shown that when an overview is not very tailored to the query, it gets low engagement, which correlates with later demotions or removals on those same terms.
This is still correlation, not a published rule, but it fits how Google has tuned other SERP features over the years.
How AI Overviews have evolved visually
These boxes are not static either; the UI has been changing fast.
If you look at 2025 snapshots vs now, you see at least four shifts that matter.
- More collapsed or expandable overviews, especially on mobile and on YMYL queries.
- More visible source links and sometimes small carousels or grids inside the overview.
- Product-leaning layouts that mix generative text with product cards and ratings.
- Subtle visual changes that make them feel a bit less like a giant answer box and more like another SERP element.
These shifts mean position is only part of the story, because a collapsed overview at position 1 is very different from a huge, open block at position 2 filled with products and links.
You need to look at both placement and presentation when you evaluate impact on your niche.

How much AI Overviews affect clicks and traffic
This is where the conversation stops being academic, because a 2-point drop in CTR on your money pages is not theoretical.
Let me break this into patterns you can actually track and model.
Directional CTR impact from real data
Across several sites I have seen three broad situations when an AI Overview appears above or close to a strong organic result.
| Scenario | Example query | Typical impact on CTR | Notes |
|---|---|---|---|
| Simple informational, AI at P1, site at P1→P2 | “how long to bake salmon” | CTR drop of ~20-40% | High zero-click behavior, many users stop at the overview |
| Commercial research, AI at P1 or P2, site in top 3 | “best noise cancelling headphones” | CTR drop of ~10-25% | Some users still explore brand reviews, but fewer total clicks |
| Brand-heavy or expertise-heavy queries | “[brand] vs [brand] honest review” | Small drop or sometimes flat | Users consciously look for human reviews or specific brands |
Those ranges are not exact science, but they line up with GA4 and Search Console comparisons before and after overview rollouts on tracked query sets.
The main shift is fewer clicks for straightforward answers, with more resilience on queries where user trust or depth really matter.
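If you want to turn those ranges into a rough forecast, multiply current clicks per scenario by the midpoint of the relevant CTR-drop range. A minimal sketch; the monthly click counts are invented examples, and the midpoints are my reading of the table above, not measured figures:

```python
# Rough projection of click loss per scenario, using assumed midpoints
# of the CTR-drop ranges from the table above. The monthly click counts
# are made-up examples, not real data.

scenarios = {
    # scenario: (current monthly clicks, assumed midpoint CTR drop)
    "simple informational": (12_000, 0.30),
    "commercial research": (5_000, 0.175),
    "brand/expertise-heavy": (2_000, 0.05),
}

total_before = total_after = 0
for name, (clicks, drop) in scenarios.items():
    after = clicks * (1 - drop)
    total_before += clicks
    total_after += after
    print(f"{name}: {clicks} -> {after:.0f} clicks/month")

loss = 1 - total_after / total_before
print(f"portfolio: {total_before} -> {total_after:.0f} ({loss:.1%} loss)")
```

Even this toy model makes one thing obvious: a portfolio heavy in simple informational queries loses far more than one anchored in expertise-heavy terms, which is exactly why segmentation matters before you panic.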
Zero-click behavior and engagement
AI Overview queries tend to show a higher share of zero-click sessions, especially when the question is short and factual.
Think definitions, quick how-tos, basic list questions, or simple comparisons.
- For some simple informational keywords, zero-click rates jumped by 10-20 points after the overview appeared.
- Time on SERP often increased a bit, because users scan the generative box and maybe expand related follow-up prompts.
- Down-funnel pages saw fewer raw visitors but a slightly higher intent mix, because casual fact-checkers never clicked through.
This is not always bad, by the way; some of the people you used to attract on these queries were never going to convert or remember your brand.
But you need to be honest about what is happening so you do not keep forecasting traffic from top-of-funnel phrases that are now mostly handled in-SERP.
How to measure impact in your own data
You do not need a fancy data science stack to see whether AI Overviews are hurting or helping you.
A basic workflow combining a rank tracker with Search Console is usually enough.
- In your rank tracking tool, tag all keywords where an AI Overview appears at any position.
- Split those into three buckets: overview above your result, overview below your result, and no overview at all.
- Pull Search Console data for those keywords over a 30-90 day period and compare impressions, clicks, and CTR across buckets.
- Add simple date annotations when AI Overviews first appear or move above your listing on major queries.
When you add this kind of tagging, it becomes much easier to say “this drop came from a new AI Overview” rather than blaming some vague core update every time.
Do this at least quarterly on your top 50 to 200 money terms, not just once when something looks scary.
Patterns will emerge, and those will guide smarter content and budget decisions.
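The sheet workflow above can also be scripted. Here is a minimal pandas sketch; in practice you would load your own exports, and the column names and sample rows below are illustrative assumptions, not any tool's real export format:

```python
import pandas as pd

# In practice you would load these from your own exports, e.g.
# pd.read_csv("rank_tracker.csv") and pd.read_csv("search_console.csv").
# The column names and sample rows are illustrative assumptions.
rank = pd.DataFrame({
    "query": ["how to x", "best y", "z review"],
    "ai_position": [1, 3, None],  # position of the AI Overview, None = absent
    "my_position": [2, 2, 1],     # your organic position
})
gsc = pd.DataFrame({
    "query": ["how to x", "best y", "z review"],
    "impressions": [10_000, 4_000, 1_500],
    "clicks": [300, 240, 180],
})

df = rank.merge(gsc, on="query", how="inner")

def bucket(row):
    """Tag each query by where the AI Overview sits relative to you."""
    if pd.isna(row["ai_position"]):
        return "No AI"
    return "AI above me" if row["ai_position"] < row["my_position"] else "AI below me"

df["bucket"] = df.apply(bucket, axis=1)

summary = df.groupby("bucket").agg(
    impressions=("impressions", "sum"),
    clicks=("clicks", "sum"),
)
summary["ctr"] = summary["clicks"] / summary["impressions"]
print(summary.round(3))
```

Once the buckets exist, the comparison you care about is simply CTR per bucket over time, which a scheduled version of this script can append to a log for quarterly review.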
Non-obvious upside cases
I am not going to pretend AI Overviews are secretly great for everyone, but there are a few edge cases where they help your visibility.
Sometimes the overview quotes or links to your content in-context, which can create an extra branded touch even if users do not always click.
- Pages with clean, structured answers and strong E-E-A-T signals sometimes get cited more.
- If your brand appears in multiple overview sources, your authority impression can rise even with fewer visits.
- For complex comparisons, the overview can actually drive clicks to deeper guides it surfaces from niche sites.
I would not build a strategy purely around being cited inside these boxes, but you should at least track when you are mentioned and see if those mentions correlate with brand searches or assisted conversions later.
A simple monthly check of high-value queries in an incognito browser can already tell you whether you are part of the story or invisible.

What to actually do about AI Overviews
This is where most articles either say “keep making great content” or push you toward gimmicks, and both extremes miss the point.
You need a clear, practical playbook across tracking, content, and diversification, not magic tricks.
Step 1, track AI Overviews properly
If your tools treat AI Overviews as a vague SERP feature with no position data, you are flying half-blind.
Look for or ask for three specific capabilities.
- Detection of AI Overview presence per keyword and per device.
- Recorded position of the block relative to classic organic results and ads.
- Country and device segmentation so you can see different layouts on mobile vs desktop.
Then, build a simple recurring workflow around that data.
- Once a month, export your main keyword set with AI Overview flags and positions.
- Join it with Search Console data, at least at the query level, in a sheet or BI tool.
- Tag each keyword as “AI above me”, “AI below me”, or “No AI” and track avg position, impressions, and CTR by tag.
- Watch how those tags shift over time and tie changes to traffic graphs in GA4.
You do not need perfection; you just need enough clarity to separate AI effects from regular competition and algorithm moves.
Without that, your SEO planning becomes guesswork.
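Watching how the tags shift month over month is the part people skip, so it is worth automating. A tiny sketch of a snapshot diff, using the same tag values as the workflow above; the example queries and tags are made up:

```python
# Compare two monthly snapshots of AI Overview tags to spot keywords
# that flipped to "AI above me". Tag values mirror the workflow above;
# the example queries and tags are made up.

last_month = {"how to x": "No AI", "best y": "AI below me", "z guide": "AI above me"}
this_month = {"how to x": "AI above me", "best y": "AI below me", "z guide": "AI above me"}

newly_above = [
    q for q, tag in this_month.items()
    if tag == "AI above me" and last_month.get(q) != "AI above me"
]
print(newly_above)  # queries worth annotating in your traffic reports
```

Each query in that list is a candidate for a date annotation in GA4, so that next quarter's traffic dips can be traced back to a specific layout change rather than a guess.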
Step 2, decide which queries are worth fighting for
Not every keyword with an AI Overview deserves the same effort.
Some are now so zero-click heavy that doubling down on them is a distraction.
- Keep investing in queries where users still need depth, nuance, or tools that a short overview cannot provide.
- Be cautious with ultra-basic questions where traffic and CTR have already collapsed after AI rollout.
- Look for long-tail versions of high-level queries where overviews appear less often or at lower positions.
I know it feels painful to walk away from a nice vanity keyword, but sometimes the numbers make that decision for you.
Channel that energy into pages where a click is more likely and more valuable.
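One way to make the keep-or-drop call less emotional is a simple score that weighs remaining clicks and business value against AI pressure. The weights, cutoff, and example keywords here are arbitrary starting points for illustration, not a standard:

```python
# Toy prioritization score: remaining clicks x value per click, discounted
# when an AI Overview sits above you. All numbers are arbitrary examples.

def priority_score(monthly_clicks, value_per_click, ai_above):
    ai_discount = 0.6 if ai_above else 1.0  # assumed penalty, tune to your data
    return monthly_clicks * value_per_click * ai_discount

keywords = [
    # (keyword, monthly clicks, value per click, AI Overview above you?)
    ("how to boil eggs", 8_000, 0.01, True),
    ("best crm for smb", 900, 1.50, True),
    ("crm migration checklist", 400, 2.00, False),
]

ranked = sorted(
    keywords,
    key=lambda k: priority_score(k[1], k[2], k[3]),
    reverse=True,
)
for kw, clicks, value, ai in ranked:
    print(f"{kw}: score {priority_score(clicks, value, ai):.0f}")
```

Note how the high-volume vanity keyword sinks to the bottom once value and AI pressure are factored in, which is exactly the "numbers make the decision for you" effect described above.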
Step 3, shape content so it can win with or without AI
There is no official AI Overview playbook, but common patterns keep showing up in pages that get cited or stay strong near these boxes.
Here is a balanced approach that does not chase every rumor.
- Put a clear, concise answer or summary near the top of the page, in 1-3 short paragraphs.
- Use simple HTML structure, h2 and h3 headings, lists where they actually help, and logical sections.
- Mark up appropriate content types with schema like FAQPage, HowTo, Product, and Article when they fit.
- Back claims with references, data, or at least thoughtful reasoning, not just filler copy.
- Layer in depth under the summary, with examples, edge cases, and real experience that an AI box usually cannot cover well.
Your goal is not to “beat” the AI Overview on basic facts; it is to be the page people click when they decide the AI summary is not enough.
I think this is where many sites go wrong: they either keep everything shallow and get replaced by the overview, or they bury the answer so deep that users bounce back to the SERP.
You need both: a fast answer and a deep path.
Step 4, adapt page goals when AI steals the basics
When an AI Overview clearly covers the simple part of a topic, do not keep your page stuck at that level.
Shift the page toward what the box cannot really do.
- Add interactive calculators, quizzes, or tools that require user input.
- Include downloadable resources, templates, or checklists gated or ungated.
- Highlight first-hand experience, case studies, and real numbers.
- Improve internal linking into deeper, related guides that expand on sub-questions.
In other words, let the overview handle the FAQ-level stuff, while your page becomes the destination for real decision-making and action.
That shift not only protects some traffic, it also tends to increase conversion rates for the visitors you still get.
Step 5, reduce your dependence on AI-exposed queries
Relying on broad, informational SEO traffic is a fragile strategy now.
You need more resilience baked into your marketing mix.
- Strengthen your capture on SEO-driven traffic with email signups, remarketing, and simple lead magnets.
- Invest in channels where you own the audience, like newsletters, communities, or YouTube subscribers.
- Push more into branded search by doing work off Google, through PR, partnerships, and content that people seek out by name.
None of this is new, but the stakes are higher, because AI Overviews are erasing part of the casual, top-funnel search traffic that many sites leaned on for years.
If your growth plan still assumes that endless free clicks on simple how-to’s will carry the business, I think you are betting on the wrong horse.
Paid search and AI Overviews
AI Overviews do not just affect SEO, they also move the furniture around for paid search layouts.
On some queries, ads sit above the AI box, on others they sit below or interleaved, and that matters for your numbers.
- Track CTR, impression share, and conversion rate separately on queries where you see an AI Overview frequently.
- Check live SERPs for your top paid terms on mobile and desktop to see whether your ads tend to appear above or below the overview.
- Test ad copy that positions you as the more detailed, human, or trusted alternative to the generic AI summary.
If AI is giving a clear answer that satisfies most users, you may need to tighten targeting or pull back bids on those terms.
On the other hand, when the overview is vague or conflicting, strong ads can still grab highly motivated users who want clarity from a real service or product.

How AI Overview behavior might change next
Trying to predict exactly where Google will place AI Overviews a year from now is a bit of a trap.
But you can still plan around broad directions without pretending to read minds.
- Expect continued testing of collapsed vs expanded layouts and different ad placements around AI blocks.
- Expect stricter handling of sensitive YMYL topics, with more reliance on named authorities and maybe fewer aggressive AI summaries.
- Expect more nuance by query type, with some categories becoming almost fully AI-handled and others staying mostly human-first.
Your best hedge here is flexibility, not trying to pin down one frozen SERP model.
If your reporting stack lets you see when formats change and how it hits your numbers, you can adapt without starting from zero each time.
FAQ
How do I know if AI Overviews caused my traffic drop?
Check your top queries in Search Console for impression and CTR shifts while your average position stays stable, then cross-check those queries in a rank tracker that logs AI Overview presence and placement, and add annotations for dates when AI first appears above your result.
Should I create special “AI-friendly” summaries at the top of every article?
Short, clear summaries at the top help both users and any feature that quotes you, but if you twist your writing just to please a hypothetical AI parser, you risk hurting readability and conversions, so keep it natural and let structure follow logic, not paranoia.
Are AI Overviews more common on mobile?
They do tend to feel more dominant on mobile simply because of screen size and layout, so you should monitor mobile SERPs and segment your tracking by device rather than assuming desktop screenshots tell the whole story.
What metrics should I track for AI-affected queries?
Focus on impressions, clicks, CTR, and average position in Search Console, combined with AI Overview presence and position from your rank tracker, plus page-level engagement and conversion data in GA4 to see if you are trading volume for quality.
Is it worth chasing AI Overview citations directly?
You can nudge your odds by having clean structure, strong sources, and concise answers, but I would treat citations as a bonus, not a core KPI, and keep your main focus on building pages that users trust and remember after they leave Google.
The real edge is not finding a hack for the next SERP test, it is building a site and a strategy that still make sense when those tests flip again.
Practical checklist you can reuse
If you want a simple loop to keep your site aligned with how AI Overviews behave, this monthly checklist is a decent starting point.
- Export your top keywords with AI Overview presence and positions from your rank tracker.
- Match them with Search Console query data and tag them by “AI above”, “AI below”, and “No AI”.
- Review the biggest CTR drops and see which ones line up with new or moved AI Overviews.
- Decide which harmed queries are still worth fighting for and which should get less priority.
- Update 2-5 key pages with clearer summaries, stronger depth, and better capture of visitors.
- Run at least one experiment outside of Google, like an email sequence, community initiative, or content on another platform.
If you keep running that loop, you will not need to panic every time AI Overviews shift by a few percentage points above or below position 1.
You will already know which parts of your traffic are fragile, which are stable, and where your time and budget actually move the needle for your business.