How to Audit a Meta Ads Account: The Complete Agency Checklist (2026)
A complete Meta Ads audit in 2026 must cover eight layers: pre-audit discovery, pixel and CAPI health, campaign architecture, creative scoring, audience overlap, frequency, post-click funnel, and competitive intelligence. At Baker, we have audited Meta Ads accounts for B2B SaaS clients spending from $3K to $250K per month, and the consistent pattern is that 80% of underperformance traces back to two layers: pixel health and creative diversity, not to bidding or budget. This guide is Baker’s 8-Layer Meta Ads Audit Framework, with the exact thresholds, expert sources, and dependency order we use on every engagement.
Why Most Meta Ads Audits Fail
Most audit templates are flat checklists: 40 items with equal weight. The problem is that several layers depend on each other. Auditing creative quality on an account whose pixel is feeding the algorithm data that is 56% wrong produces conclusions that look right but are mathematically meaningless [1]. Baker’s 8-Layer Meta Ads Audit Framework runs the checks in dependency order, so each layer’s fixes compound rather than mask each other.
The three principles behind the framework:
- Data quality before optimisation. If the algorithm is training on the wrong conversions, every other lever is downstream of a broken signal [1].
- Andromeda-aware diagnostics. The 2024-2025 algorithm shift means audit thresholds for campaign count, audience targeting, and creative variety changed. See Baker’s Andromeda Adaptation Framework for the full breakdown [2].
- Tier-appropriate benchmarks. A $3K/month account and a $50K/month account have different healthy values. Applying enterprise thresholds to a small account produces false negatives, and vice versa [3].
Layer 1: Pre-Audit Discovery
Before opening Ads Manager, capture context. Maximilian von Mer’s account audit framework defines five questions every audit must answer before tactical review [4]:
| Question | Why It Matters |
|---|---|
| Status quo: where do we stand on hard numbers? | Establishes baseline ENCAC (effective new customer acquisition cost), ROAS, CPL, and lead-to-close rate |
| Goal: what specific outcome do we want in 90 days? | Audit recommendations must serve a target, not vague improvement |
| Historical review: what has been tried and at what scale? | Prevents recommending tactics that already failed |
| Industry benchmarking: how does this compare to mature peers? | Distinguishes account-level issues from category-level reality |
| Devil’s advocate: what current assumptions might be wrong? | Surfaces tracking, attribution, or offer issues hiding behind tactics |
Discovery questions Baker uses on B2B SaaS audits:
- What is the difference between in-platform CPL and CRM-tracked sales-qualified opportunity cost?
- How are leads handed off to sales, and what is the SLA for first contact?
- What is the average sales cycle length, and is the attribution window aligned to it?
- Has CAPI been implemented? When? Has Event Match Quality been verified since?
- What is the offline conversion import cadence, if any?
If discovery surfaces unanswered questions, pause the technical audit until they are answered. According to Sylvia Perez (Ad Conversion, $3M/month managed spend), “auditing a B2B account without knowing the close rate by lead source is auditing the wrong half of the funnel” [5].
Layer 2: Pixel and CAPI Health
Data quality is the foundational layer. Every downstream audit conclusion depends on the algorithm receiving accurate signals.
Standard pixel verification
| Check | Healthy | Action If Failing |
|---|---|---|
| Pixel firing on all key events | 100% | Reinstall via Events Manager, verify with Pixel Helper |
| PageView fires once per page | Single fire | Remove duplicate snippets |
| Standard events match conversion goals | Direct match | Map custom events to standard schema |
| Test events show no errors in Events Manager | No errors | Resolve schema mismatches before continuing |
CAPI and deduplication
According to Jon Loomer, the easiest non-Shopify CAPI setup is Conversions API Gateway via Stape.io at roughly $10/month, which automatically sends a server-side copy of each web pixel event through the API [6]. For Shopify accounts, the native Meta integration is a checkbox.
The non-negotiable check: deduplication. Rok Hladnik (Thumbstop) lists this as the first item in his low-hanging-fruit audit, noting that “even standard, two-click integrations like Shopify and Meta are frequently broken” [7].
| CAPI Check | Healthy | Failing Symptom |
|---|---|---|
| Deduplication rate | 70-100% browser+server matched | Duplicate conversions inflating ROAS |
| Event Match Quality (EMQ) | 7.0+ on key events | Algorithm cannot match users, CPMs rise |
| Server-only events on iOS traffic | Present | Major iOS undercount |
| Edge-tagged CAPI for cross-device | Present | CRM imports lose fingerprint match |
The 56% problem
According to John Moran (Tier 11), “Shopify’s own public attribution is only 44% accurate. Importing Shopify purchase data via the direct Shopify-to-Meta integration means feeding the algorithm 56% wrong data” [1]. The Tier 11 fix is first-click CAPI imports from the CRM with custom new-customer events, which raised attribution accuracy from 53% to 85%+ in the documented case studies [1].
A Tier 11 beauty account using this fix dropped ENCAC from $26 to $6.73 across six weeks while doubling spend [1]. For the full implementation, see Baker’s CAPI Setup Guide.
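The ENCAC arithmetic itself is simple, and worth scripting so it is computed the same way on every audit. A minimal sketch (the figures in the example are illustrative, not from the case study):

```python
def encac(ad_spend: float, attributed_customers: int, repeat_buyers: int) -> float:
    """Effective New Customer Acquisition Cost: spend divided by genuinely
    new customers, i.e. attributed customers minus repeat buyers."""
    new_customers = attributed_customers - repeat_buyers
    if new_customers <= 0:
        raise ValueError("no new customers attributed in this window")
    return ad_spend / new_customers

# Illustrative figures: $9,000 spend, 400 attributed customers, 100 repeats
print(encac(9_000, 400, 100))  # → 30.0
```

The key discipline is subtracting repeat buyers before dividing; blended CAC hides exactly the problem first-click CAPI is meant to expose.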
Layer 3: Campaign Architecture Review
Andromeda changed the right number of campaigns. Pre-Andromeda audits flagged accounts with too few campaigns. Post-Andromeda audits flag accounts with too many [2].
| Monthly Spend | Healthy Campaign Count | Common Audit Finding |
|---|---|---|
| Under $3K | 1 ABO campaign, broad targeting | 5+ campaigns split too thin to exit Learning |
| $3K-$10K | 2 campaigns (Test ABO + Scale CBO) | 8-10 campaigns with overlapping audiences |
| $10K-$30K | 3 campaigns (Test, Scale, Retargeting) | Persona-by-persona ABO splits over-segmented |
| $30K-$100K | 3-5 portfolio CBOs by awareness level | Legacy structure from 2022 still running |
| $100K+ | 5-8 CBOs, full-funnel sequencing | Insufficient retargeting investment |
Multi-expert consensus (Charley T at Disruptor School, Manel Gomez, Dara Denney) confirms that consolidation from 15+ campaigns to 3-5 typically improves CPA and accelerates Learning Phase exit [2] [3] [8].
Architecture audit checklist:
- Campaign count matches budget tier (table above)
- No campaign in Learning Limited for more than 14 days
- No ad set spending under $50/day with conversion-objective optimisation
- CBO budget weighted toward proven creative, not split equally
- Manual placement restrictions justified by regulation, not preference
- No duplicate ad sets in the same campaign with similar audiences
For the full structure-by-budget breakdown, see Baker’s Meta Ads Campaign Structure guide.
Layer 4: Creative Scoring
Creative is the targeting layer in Andromeda. Audit it accordingly.
Entity ID diversity
According to Ryan from Leadbase ($1B+ in Meta spend analysed), Andromeda measures variation by Entity ID (the underlying visual concept), not Creative ID. Six is the minimum for the algorithm to optimise effectively, and 6-10 is the target for scaled accounts [9].
| Account State | Entity IDs | Diagnosis |
|---|---|---|
| 1-3 distinct concepts | Below threshold | Andromeda cannot match diverse audience pockets |
| 4-5 distinct concepts | Marginal | Performance plateau likely |
| 6-10 distinct concepts | Healthy | Algorithm has room to optimise |
| 10+ distinct concepts | Scale-ready | Provided each has 3-5 variations |
Hook and hold rate benchmarks
Tier 11’s Ralph Burns publishes the operative benchmarks for video creative [10]:
| Metric | Definition | Minimum | Strong | Exceptional |
|---|---|---|---|---|
| Hook rate | 3-sec views ÷ impressions | 25% | 30% | 40-60% |
| Hold rate | 15-sec views ÷ 3-sec views | 30% | 40% | 50%+ |
| Thumbstop ratio | 3-sec views ÷ video plays | 30% | 40% | 50%+ |
Diagnostic logic: Healthy hook + low hold = fix the body of the ad (introduce the offer earlier, tighten the middle). Low hook + healthy hold = the audience is wrong for the visual, or the opening is generic [10].
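That diagnostic logic can be encoded directly, using the minimum thresholds from the table above. A sketch (inputs are raw counts from an ad-level report):

```python
def creative_diagnosis(impressions: int, views_3s: int, views_15s: int):
    """Tier 11 video benchmarks: hook rate = 3s views / impressions,
    hold rate = 15s views / 3s views. Minimums: 25% hook, 30% hold."""
    hook = views_3s / impressions
    hold = views_15s / views_3s if views_3s else 0.0
    if hook >= 0.25 and hold < 0.30:
        verdict = "fix the body: introduce the offer earlier, tighten the middle"
    elif hook < 0.25 and hold >= 0.30:
        verdict = "fix the opening: wrong audience for the visual, or generic hook"
    elif hook >= 0.25 and hold >= 0.30:
        verdict = "healthy: candidate for scaling"
    else:
        verdict = "replace the concept"
    return hook, hold, verdict
```

Running it across every active video in the account turns a subjective creative review into a sortable table.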
Format mix audit
Lattice rewards format diversity. Dara Denney’s recommended mix [8]:
| Format | Share of Active Creative | Common Audit Gap |
|---|---|---|
| Static images | 30-40% | Often under-invested for video-led brands |
| Video (15-60s) | 30-40% | Single format dominance |
| Carousel | 10-15% | Frequently 0% on B2B accounts |
| Experimental (UGC, text, AI) | 10-15% | Skipped entirely |
Creative fatigue signals
According to Sylvia Perez, the operational fatigue rule is frequency above tier-specific cap (7 for retargeting, lower for prospecting) combined with CTR below the trailing 90-day average [5]. Manel Gomez adds the 30-day frequency band of 2.7 to 3.3 as the most common silent CPM-spike trigger [3].
Hook type audit (Dara Denney’s 10 dominant formats in 2026):
- Going-against-tradition hooks
- Conversational openers (“no because”)
- Viral product hooks
- Reaction hooks
- Controversy hooks
- Trigger-word emotional hooks
- Demographic hooks
- Multiple stacked hooks
- “How do I know if X” mirror hooks
- Speed and transformation (before/after with urgency) [8]
A B2B SaaS account running only one or two of these in active rotation is undertesting hook variety, regardless of total ad count.
Layer 5: Audience Overlap Diagnostics
Use Meta’s Audience Overlap tool, then triangulate with first-click attribution.
| Overlap Type | Healthy | Audit Action |
|---|---|---|
| Two ad sets in same campaign | Under 25% | Above 25% = consolidate |
| Prospecting vs retargeting | Under 10% | Higher = exclusion list missing |
| Lookalike 1% vs lookalike 2% | Under 50% | Use seed-quality scoring instead of stacking |
| Cross-campaign overlap | Under 25% | Drop the lower-CTR campaign |
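If you export audience membership (for example, hashed customer lists), the thresholds above can be checked in code. A sketch, assuming overlap is measured as shared users divided by the smaller audience, which is one common convention; Meta's in-platform tool reports overlap relative to a selected comparison audience:

```python
def overlap_pct(audience_a: set, audience_b: set) -> float:
    """Share of the smaller audience that also appears in the other."""
    if not audience_a or not audience_b:
        return 0.0
    shared = len(audience_a & audience_b)
    return shared / min(len(audience_a), len(audience_b))

# Toy example: 300 shared users out of an 800-person smaller audience
a = set(range(1000))
b = set(range(700, 1500))
pct = overlap_pct(a, b)  # 300 / 800 = 0.375
print(f"{pct:.0%} overlap -> {'consolidate' if pct > 0.25 else 'ok'}")
```

For most ad sets you will not have raw membership and must rely on Meta's tool; the code is useful mainly for custom-audience seeds and CRM segments.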
The cross-channel overlap problem: John Moran’s Tier 11 framework documents that without first-click CAPI, increasing Meta spend lifts in-platform numbers across Google, email, and organic, because Meta creates the first touch but those channels take last-click credit. The fix is first-click CAPI imports, which restore Meta’s true contribution [1].
B2B audit nuance: Sylvia Perez’s rule for broad targeting on B2B is that it works only when (1) ABM is not required, (2) TAM is large enough that broad makes sense, and (3) creative explicitly calls out the ICP and repels non-buyers [5]. If any condition fails, narrow audiences with high creative discipline outperform broad.
Layer 6: Frequency Analysis
Frequency must be evaluated on a fixed window (7-day or 30-day), never lifetime [6].
| Funnel Stage | Healthy 7-Day Frequency | Cap Threshold |
|---|---|---|
| Cold prospecting | 1.5-2.5 | 3 per 7 days |
| Warm engagement | 2.0-3.5 | 4 per 7 days |
| Retargeting | 3.0-5.0 | 5 per 7 days |
| Brand-aware retargeting | 4.0-6.0 | 6 per 7 days |
Irina Buht’s diagnostic method: plot conversion rate by frequency bucket, and cap at the bucket where conversion rate peaks before declining [11]. For prospecting, that is typically frequency 2.3 (peak) to 4.0 (decline), so the cap goes at 3. For retargeting, the curve runs higher, peaking around 6 before drop-off [11].
Manel Gomez’s silent killer: 30-day frequency between 2.7 and 3.3 produces unexplained CPM increases and inconsistent ROAS in roughly two-thirds of audits Baker runs. Always check 30-day frequency before blaming the algorithm [3].
Jon Loomer’s context rule: Frequency 4.0 over four days is intrusive; frequency 4.0 over twelve months is invisible. Trust Meta’s native fatigue alerts in Ads Manager over arbitrary numerical thresholds [6].
Layer 7: Post-Click Funnel Audit
A clean ad audit means nothing if the landing page leaks 90% of clicks.
Landing page conversion rate floors
According to Aaron Young’s benchmarks [12]:
| Vertical | LP Conversion Rate Floor | Below Floor = Audit Required |
|---|---|---|
| E-commerce | 3% | Test offer, urgency, social proof |
| Lead generation | 7% | Test headline, form, hero copy |
| B2B SaaS demo request | 5-8% | Test ICP-specificity of LP copy |
Form audit checklist
Arsh Sanwarwala and Market Lead’s combined form framework [13]:
- Under 4 fields where possible
- “Full name” instead of separate first/last
- Phone number removed unless essential to the offer
- Single-column layout (never multi-column)
- Mental commitment: low-friction fields first, qualifying fields after
- Sales-team qualifying questions added on Step 2 of multi-step forms
- Thank-you page states exactly what happens next and when
Michael Nadlin’s progressive disclosure data shows that roughly 80% of users who complete a low-friction Step 1 (name, email, phone) will complete a higher-friction Step 2 if Step 2 is paired with a value escalation (custom pricing, custom plan, premium asset) [13].
Lead quality filtering
For local services and high-ticket B2B, Jack Hepp’s framework: ask the sales team for the first 3-5 qualification questions they use on prospects, and put those on the form. Total lead volume drops; lead quality rises significantly. The ENCAC almost always improves [14]. For the full lead-quality stack, see Baker’s Meta Ads Lead Quality guide.
Layer 8: Competitive Intelligence
The Meta Ad Library is the audit’s external benchmark.
| Check | What To Capture |
|---|---|
| Top 3 competitors’ active ad count | Volume signal |
| Longest-running ads (90+ days) | Proven concepts to study |
| Format mix used | Where competitors over- or under-invest |
| Hook patterns | Recurring opening structures |
| Landing page destinations | Funnel architecture insights |
Baker’s rule: if the audited account has fewer than half the active ads of the median competitor at the same scale, creative production is the bottleneck regardless of every other layer.
Growth-Phase Audit Calibration
Audit thresholds shift by spend tier. Adapted from the 99 Ads $0 to $1M/month playbook and Baker’s client work [15]:
| Phase | Monthly Spend | Audit Priority |
|---|---|---|
| Offer-Market Fit | $0-$10K | Test offer presentations, not channels. Audit creative volume and offer mix, not architecture |
| Iteration Muscle | $10K-$100K | 100-200 ads/month target. Audit creative production cadence and winner rate (1-5/week) |
| Angle Expansion | $100K-$500K | 1 audience, up to 4 angles, 400 ads/month. Audit angle diversification (EPIC: Emotional, Practical, Identity, Critical) |
| Audience Expansion | $500K-$1M | 2-3 adjacent audiences, 80-90% on core, 10-20% horizontal tests. Audit concentration risk |
| Beyond $1M | $1M+ | TAM, LTV, AOV. Audit international rollout, upsell architecture, subscription LTV |
A $3K/month account audited against $100K/month thresholds will look broken on every layer. A $100K/month account audited against $3K thresholds will look fine when it is leaving 7 figures of incremental revenue on the table.
The Baker Audit Output: From Findings to Fixes
A finished audit produces three artefacts:
- The 8-Layer scorecard (red, amber, green per layer)
- The dependency-ordered fix list (Layer 2 issues fixed before Layer 4, etc.)
- The 30/60/90 implementation plan mapping each fix to a measurable target
Multi-expert consensus (Tier 11, Sylvia Perez, Manel Gomez) is that fixing one tracking issue in Layer 2 typically delivers more performance lift than rebuilding 50 ads in Layer 4 [1] [3] [5]. The audit’s job is to prevent the team from spending two months on Layer 4 when Layer 2 is broken.
Common Audit Mistakes
Mistake 1: Auditing creative on broken pixels
Conclusions about which creative converts depend on conversion data being accurate. Always verify Layer 2 first.
Mistake 2: Applying enterprise thresholds to small accounts
A $3K/month account is not failing because it has only 4 active ads. It is in the offer-fit phase. Audit the offer, not the campaign architecture.
Mistake 3: Treating frequency as a single number
Frequency 4.0 across 7 days is fatigue. Frequency 4.0 across 90 days is healthy. Always specify the window.
Mistake 4: Ignoring cross-channel cannibalisation
Meta often creates the first touch that Google, email, or organic claim. Without first-click CAPI, the audit will recommend cutting Meta spend that is actually driving the entire funnel [1].
Mistake 5: Stopping at in-platform metrics
The CRM is the source of truth. CPL is a vanity metric without close rate, ENCAC, and lifetime value behind it. Audit to the closed deal, not the lead form submission.
Sources
1. John Moran, Tier 11. “First-Click CAPI Imports, Custom New-Customer Events, and the 56% Shopify Attribution Problem.” Beauty Account ENCAC Case Study ($26 to $6.73), 2026.
2. Manel Gomez, Crece sin Limite. “Andromeda Architecture and Campaign Consolidation.” $3.3M Ad Spend Analysis, 2026.
3. Manel Gomez, Crece sin Limite. “30-Day Frequency Band 2.7-3.3 as Silent CPM Trigger.” Meta Scaling Diagnostic Framework, 2026.
4. Maximilian von Mer. “Five-Question Account Audit Framework.” Strategic Audit Methodology, 2026.
5. Sylvia Perez, Ad Conversion ($3M/month managed spend). “B2B Meta Ads Lead Quality and Broad Targeting Prerequisites.” B2B Meta Playbook, 2026.
6. Jon Loomer. “Conversions API Gateway, Frequency Context, and Native Fatigue Alerts.” Meta CAPI and Frequency Guides, 2026.
7. Rok Hladnik, Thumbstop. “Low-Hanging-Fruit Audit Checklist: Pixel Deduplication First.” Meta Ads Audit Methodology, 2026.
8. Dara Denney. “Top 10 Hook Types and Format Mix Recommendations Under Andromeda.” Meta Creative Strategy, 2026.
9. Ryan, Leadbase. “Entity ID Diversity Requirements: 6-10 Distinct Visual Concepts.” $1B+ Meta Spend Analysis, 2026.
10. Ralph Burns, Tier 11. “Hook Rate, Hold Rate, and Thumbstop Benchmarks for Video Creative.” Tier 11 Creative Diagnostic, 2026.
11. Irina Buht. “Frequency Cap Methodology: Conversion Rate by Frequency Bucket.” Meta Frequency Cap Recommendations, 2026.
12. Aaron Young, Scale Bottom-Up. “Landing Page Conversion Rate Floors by Vertical.” Benchmarks and KPIs, 2026.
13. Arsh Sanwarwala, Michael Nadlin (Market Lead). “Form Field Strategy and Progressive Disclosure (80% Step-2 Completion).” High-Ticket Lead Gen Forms, 2026.
14. Jack Hepp, Local Ads. “Lead Quality Filtering via Sales-Team Qualifying Questions.” Local Lead Generation Framework, 2026.
15. Founder of 99 Ads. “Five-Stage Scaling Playbook: $0 to $1M/Month.” Scaling Strategy, 2026.
FAQ
- How do you audit a Meta Ads account?
- A complete Meta Ads audit covers eight layers: pre-audit discovery, pixel and CAPI health, campaign architecture, creative scoring, audience overlap, frequency, post-click funnel, and competitive intelligence. According to John Moran (Tier 11), the highest-leverage check is data quality: standard pixel and Shopify-Meta integrations are wrong roughly 56% of the time, which means the algorithm trains on bad signals before any creative or targeting issue matters. Baker's 8-Layer Meta Ads Audit Framework runs these checks in dependency order so that fixes compound rather than mask each other.
- What should be included in a Meta Ads audit checklist?
- A 2026 Meta Ads audit checklist should include: Conversions API and deduplication verification, Event Match Quality (EMQ) score, campaign and ad-set count vs budget tier, Entity ID diversity (Andromeda needs 6+ distinct visual concepts), hook rate (target ≥30%), hold rate (target ≥40%), 7-day and 30-day frequency by funnel stage, audience overlap and cannibalisation, landing page conversion rate (≥3% e-commerce, ≥7% lead gen per Aaron Young), form field count, and Ad Library competitive scan. Each item maps to a specific fix, not a generic recommendation.
- What is a healthy frequency for Meta Ads?
- Healthy frequency depends on placement and funnel stage. According to Irina Buht's analysis, prospecting should cap at 3 impressions per 7 days and retargeting at 5 per 7 days. Manel Gomez identifies 30-day frequency between 2.7 and 3.3 as the most common cause of unexplained CPM spikes and inconsistent ROAS. Jon Loomer's nuance is that frequency 4.0 over four days is high impact but the same number over months is low impact, so always evaluate frequency on a fixed window rather than lifetime. High-commitment purchases (real estate, B2B SaaS) tolerate higher frequency than impulse e-commerce.
- What is a good hook rate for Meta Ads in 2026?
- Hook rate (3-second video views divided by impressions) should be at least 25% for Meta to keep delivering, ≥30% to be considered strong, and 40-60% for ads that scale aggressively, according to Tier 11's Ralph Burns. Hold rate (15-second views divided by 3-second views) should be at least 30%, with 40-50% indicating a creative strong enough to scale. If hook rate is healthy but hold rate is low, fix the body of the ad (introduce the offer earlier, tighten the middle) rather than the opening.
- How do you check for audience overlap in Meta Ads?
- Use Meta's Audience Overlap tool inside Audiences, then triangulate with attribution. According to John Moran (Tier 11), the deeper problem is cross-channel overlap: Meta creates the first touch but Google or email takes the in-platform credit, so increasing Meta spend often appears to lift every other channel in backend reporting. The fix is first-click CAPI imports from the CRM, which attribute the journey-starting impression to Meta correctly. In-platform overlap above 25% between two ad sets is a consolidation signal.
- How often should I audit a Meta Ads account?
- Run a full 8-layer audit on a new account before any optimisation work, every quarter for active accounts, and after any structural change (CAPI rollout, new offer, budget increase above 50%). Run a creative-only audit (Layer 4) weekly because creative fatigue moves on a 7-14 day cycle. Run a frequency check (Layer 6) every 30 days because the rolling window shifts. Multi-expert consensus (Sylvia Perez, Manel Gomez, Tier 11) is that quarterly full audits plus weekly creative reviews catch 90% of issues before they compound into wasted spend.
- What does ENCAC mean in Meta Ads?
- ENCAC stands for Effective New Customer Acquisition Cost: the cost to acquire a genuinely new customer (excluding repeat buyers) attributed via first-click CAPI rather than last-click. According to John Moran (Tier 11), ENCAC is the only metric that survives iOS 14, Andromeda's first-touch reweighting, and cross-channel cannibalisation simultaneously. In one Tier 11 beauty account case study, ENCAC dropped from $26 to $6.73 over six weeks after switching from Shopify-Meta direct integration (44% accurate) to first-click CAPI imports with custom new-customer events (85%+ accurate).