Why This Audit Matters

Most Google Ads accounts waste 20–40% of their budget — and the waste is almost always invisible unless you know where to look. Smart Bidding hides structural problems. Agencies avoid questions they can't answer. Performance Max masks underlying campaign issues.

This is the same 30-point audit we run on every new account before we spend a penny. It takes roughly 90 minutes on a mid-sized account. You don't need any tools beyond Google Ads, Google Analytics, and Google Tag Assistant.

Work through each section in order. Flag every issue as red (fix this week), amber (fix this quarter), or green (healthy). At the end, you'll have a prioritised punch list.

The single most common finding across every audit we've ever done: conversion tracking is broken in some meaningful way. Start here before you touch anything else.

Section 1: Conversion Tracking (Red Flags 1–6)

1. Every conversion action has a clear business meaning

Open Tools → Conversions. For each action, ask: "What business outcome does this represent, and what is it worth to us?" If you can't answer, delete it. Orphan conversion actions poison Smart Bidding.

2. Only one primary purchase action is counted in "Conversions"

Having two primary purchase actions (e.g. GA4 purchase + Google Ads purchase) causes double-counting. Set one as primary; mark the other as secondary.

3. Enhanced Conversions is enabled

Enhanced Conversions recovers 5–15% of conversions lost to iOS privacy changes and ad blockers. If the status is "Inactive" or "Needs attention" in the Conversions panel, that's a red flag.

4. Conversion values are accurate, not placeholders

Check a sample: does the value in Google Ads match the actual transaction in your ecommerce platform? A surprising number of accounts use hardcoded values (e.g. £50 for every lead) that bear no relation to reality. Smart Bidding on fake values is worse than no bidding at all.

5. Attribution model is set deliberately, not by default

Under the conversion action's settings, check the attribution model. Data-driven is now the default, but for lead-gen accounts with long sales cycles it can misallocate credit. Check the conversion window too: if your sales cycle runs past 30 days, the default 30-day click-through window will miss late conversions, so extend it to match.

6. Offline conversions are imported (if you have a sales cycle)

If you sell via phone calls, demos, or closed deals tracked in a CRM, Google Ads needs that data to optimise. No offline upload = Google optimising for form fills, not revenue. This alone can be worth a 30–50% improvement in ROAS.
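As a sketch of what a click-based upload feed looks like: the script below writes a CSV using the column names from Google's GCLID conversion import template. The click IDs, deal values, and conversion action name here are made up — in practice you capture the gclid on the ad click, store it against the lead in your CRM, and check Google's current template before uploading.

```python
import csv

# Hypothetical closed deals exported from a CRM. The gclid values are
# placeholders -- real ones come from the ad click's URL parameter.
closed_deals = [
    {"gclid": "EXAMPLE_GCLID_1", "name": "Closed Deal",
     "time": "2024-05-01 14:32:00+00:00", "value": 4200.00},
    {"gclid": "EXAMPLE_GCLID_2", "name": "Closed Deal",
     "time": "2024-05-03 09:10:00+00:00", "value": 1750.00},
]

with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Header row per Google's click-based conversion import template.
    writer.writerow(["Google Click ID", "Conversion Name",
                     "Conversion Time", "Conversion Value",
                     "Conversion Currency"])
    for deal in closed_deals:
        writer.writerow([deal["gclid"], deal["name"], deal["time"],
                         f"{deal['value']:.2f}", "GBP"])
```

The resulting file can be uploaded manually under the Conversions panel, or on a schedule via the API once the manual process is proven.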

Found 3+ Red Flags Already?

Book a free 30-minute audit review and we'll show you exactly what to fix first — prioritised by revenue impact, not vanity metrics.

Book a Free Audit Review

Section 2: Account Structure (Red Flags 7–12)

7. Campaigns are segmented by intent and margin, not product category

A £10 product and a £200 product in the same campaign share the same target ROAS. That's almost always wrong. Separate high-margin from low-margin SKUs.

8. Brand and non-brand are in separate campaigns

Brand campaigns naturally have 10x+ ROAS. Letting them sit in the same campaign as prospecting inflates reported performance and makes non-brand look worse than it is. Split them.
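A quick worked example of how blending hides the problem (all numbers illustrative):

```python
# Brand and non-brand blended into one campaign (illustrative figures).
brand_spend, brand_revenue = 1_000, 12_000        # 12x ROAS on brand terms
nonbrand_spend, nonbrand_revenue = 9_000, 9_000   # 1x ROAS -- break-even

blended_roas = (brand_revenue + nonbrand_revenue) / (brand_spend + nonbrand_spend)
print(f"Blended ROAS: {blended_roas:.1f}x")                          # 2.1x
print(f"Non-brand ROAS: {nonbrand_revenue / nonbrand_spend:.1f}x")   # 1.0x
```

The blended 2.1x looks acceptable in reporting, while 90% of the spend is actually breaking even.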

9. No "broad-match everything" campaigns

Broad match without strong conversion tracking, audience signals, and Smart Bidding is a budget fire. Check match types across all campaigns.

10. Ad groups have ≤20 keywords, all thematically tight

Sprawling ad groups with 100 loosely related keywords kill Quality Score. If you see this, restructure into single-theme ad groups.

11. Performance Max and Search aren't cannibalising each other

PMax outbids Search for the same queries if not configured carefully. Check the "Insights" tab in PMax for overlap, and use campaign-level negatives where needed.

12. Locations and languages match the target market

"Presence or interest in" is the default location setting — and it leaks spend to irrelevant countries. Switch to "Presence" unless you have a specific reason not to. Check languages match your ad copy.

Section 3: Wasted Spend (Red Flags 13–18)

13. Search terms report reviewed in the last 7 days

This is the fastest-compounding audit item. Go to Reports → Predefined → Search terms, sort by cost, and review the top 50. Every term that doesn't show buyer intent is a negative keyword waiting to happen.
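One way to speed this up is a simple n-gram roll-up of the search terms export: aggregate cost and conversions by word, so recurring waste themes ("free", "jobs", "diy") surface even when no single term is expensive on its own. A minimal sketch with made-up rows:

```python
from collections import defaultdict

# Made-up rows from a search terms export: (term, cost, conversions).
search_terms = [
    ("free running shoes", 42.10, 0),
    ("running shoes jobs", 18.30, 0),
    ("buy running shoes", 120.50, 6),
    ("free shoe samples", 25.00, 0),
]

stats = defaultdict(lambda: {"cost": 0.0, "conversions": 0})
for term, cost, convs in search_terms:
    for word in term.split():
        stats[word]["cost"] += cost
        stats[word]["conversions"] += convs

# Words that spent money but never converted = negative-keyword candidates.
candidates = sorted(
    (w for w, s in stats.items() if s["conversions"] == 0),
    key=lambda w: -stats[w]["cost"],
)
print(candidates)  # 'free' floats to the top; 'buy' never appears
```

The same roll-up works on two- and three-word phrases for larger accounts.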

14. Negative keyword lists exist and are applied

Check Tools → Shared library → Negative keyword lists. No lists? Red flag. At minimum you need a "Brand safety" list (competitor names, legal terms) and a "Job seekers" list ("jobs", "career", "salary").

15. Device bid adjustments reflect device-level conversion rates

Mobile often converts worse than desktop for B2B and considered purchases. Check device conversion rates in Campaigns → Devices and adjust bids accordingly.
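A common heuristic for sizing the adjustment (our rule of thumb, not an official Google formula) is to scale bids in proportion to each device's conversion rate relative to the account baseline:

```python
def device_bid_adjustment(device_cvr: float, baseline_cvr: float) -> float:
    """Suggested bid modifier: +0.25 means bid 1.25x on that device.

    Heuristic assumption: bids should scale with relative conversion rate.
    """
    return device_cvr / baseline_cvr - 1.0

# Illustrative rates: desktop converts at 4%, account average is 3.2%,
# mobile converts at 2%.
print(f"Desktop: {device_bid_adjustment(0.04, 0.032):+.0%}")  # +25%
print(f"Mobile:  {device_bid_adjustment(0.02, 0.032):+.0%}")  # -38%
```

Note that under fully automated Smart Bidding, device adjustments other than -100% are largely ignored; this heuristic applies to Manual CPC and Enhanced CPC setups.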

16. Day-parting is set if performance varies by time

If your business doesn't answer the phone between 10pm and 7am, leads from that window are wasted spend. Review the Ad schedule tab for each campaign.

17. No zero-conversion keywords with £200+ spend in last 30 days

Sort keywords by cost, filter to "Conversions = 0". Anything with meaningful spend and no conversions over 30 days is either a pause candidate or a symptom of bigger problems.
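The same check can be scripted against a keywords export. A sketch, assuming rows of (keyword, cost, conversions) over the last 30 days:

```python
def pause_candidates(rows, min_spend=200.0):
    """Return keywords with meaningful spend and zero conversions.

    rows: iterable of (keyword, cost_gbp, conversions) tuples.
    min_spend: spend threshold in GBP (200 matches the checklist item).
    """
    return [kw for kw, cost, convs in rows
            if convs == 0 and cost >= min_spend]

rows = [
    ("running shoes sale", 310.40, 0),   # real spend, no conversions: flag
    ("trail shoes", 95.00, 0),           # low spend: leave for now
    ("buy running shoes", 480.00, 12),   # converting: keep
]
print(pause_candidates(rows))  # ['running shoes sale']
```

Lower `min_spend` for smaller accounts so the check still catches something.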

18. Display and Search Partner networks are audited

Unless you're deliberately testing them, untick the Display Network and Google search partners options in each Search campaign's network settings. They usually convert 5–10x worse.

Section 4: Bidding & Budgets (Red Flags 19–23)

19. Bid strategy matches the campaign's stage

New campaigns with <30 conversions/month should usually be on Maximise Conversions or Manual CPC, not tROAS. Smart Bidding needs data to work.

20. Target ROAS / Target CPA aren't set too aggressively

If your target tROAS is higher than the campaign has ever achieved, Google restricts impressions. Sanity check: is your target realistic based on the last 90 days of actual performance?
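The sanity check is simple arithmetic. The 20% headroom factor below is our rule of thumb, not a Google threshold; the figures are illustrative:

```python
# Last 90 days of actual performance (illustrative).
spend_90d, conv_value_90d = 24_000, 72_000
achieved_roas = conv_value_90d / spend_90d   # 3.0, i.e. 300%

target_roas = 4.5  # a 450% tROAS setting in the campaign

# Rule-of-thumb assumption: a target more than ~20% above what the
# campaign has actually achieved will throttle impressions.
if target_roas > achieved_roas * 1.2:
    print(f"Target {target_roas:.0%} vs achieved {achieved_roas:.0%}: "
          "too aggressive -- expect restricted delivery.")
```

If the target is ahead of reality, walk it back towards the achieved figure and tighten gradually.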

21. Budgets aren't capping campaigns artificially

Check the Status column for campaigns marked "Limited by budget". If your best campaigns are budget-constrained while underperformers have headroom, budget allocation is broken.

22. No "experiment" campaigns that have run indefinitely

Legacy experiments, old remarketing campaigns, and "testing" campaigns that no-one's reviewed for 6+ months are common spend leaks.

23. Seasonality adjustments are used for known spikes/drops

If you have a predictable Black Friday spike or January drop, use seasonality adjustments under Tools → Bid strategies. Smart Bidding doesn't handle abrupt changes well on its own.

Want Us to Run the Full Audit?

We'll audit your account end-to-end using this checklist, plus 40 more advanced checks we don't share publicly. You get a prioritised action plan — and a realistic picture of what your current setup is costing you.

Book a Free Strategy Call

Section 5: Creative & Landing Pages (Red Flags 24–27)

24. Every ad group has at least 3 Responsive Search Ads

RSAs need rotation to learn. One ad per ad group = no learning signal. Check Ad Strength — "Poor" or "Average" indicates missing variation.

25. Sitelinks, callouts, and structured snippets are present on every campaign

Ad assets (formerly extensions) improve click-through rate by 10–20% on average, at no extra cost. Missing assets = leaving free lift on the table.

26. Landing pages load in under 3 seconds on mobile

Run your top 3 landing pages through PageSpeed Insights. Anything above 3 seconds on mobile is actively hurting Quality Score and conversion rate.

27. Landing page matches the ad copy and search intent

Sending "buy running shoes" searchers to a category homepage is a conversion rate killer. Each ad group should point to a matching product or landing page.

Section 6: Reporting & Hygiene (Red Flags 28–30)

28. Auto-apply recommendations are turned off (or carefully audited)

Google's auto-apply changes your campaigns without asking. Under Recommendations → Settings, review what's enabled. Most settings should be off for any account you care about.

29. Change history shows meaningful human activity

Pull the change history for the last 30/60/90 days. If your agency's total human-made changes are under 20 per month on a £10k+ account, that's a red flag. Automated changes don't count — Google does those for free. (This is the core of what FireMyAgency.ai analyses automatically.)

30. Account access and audit log is clean

Check Tools → Access and security. Remove anyone who's left the company or agency. Ensure you have admin-level access to your own account — not just "standard" — so an agency can't lock you out.

What to Do Next

Once you've worked through the checklist, you'll typically find 5–15 red flags on an average account. Don't try to fix everything at once. Prioritise:

  1. Conversion tracking first (items 1–6). Every other fix depends on accurate data.
  2. Wasted spend next (items 13–18). Fastest dollar-for-dollar lift.
  3. Structural fixes (items 7–12). Highest long-term impact but most disruptive.
  4. Everything else as capacity allows.

If this checklist surfaced more problems than you can fix alone, or you want a second pair of expert eyes, that's exactly what our free strategy call is for. We'll run through your findings and give you a prioritised roadmap — no sales pressure, no contracts.