Most marketing audits are theater — 80 slides of platform observations that conveniently justify whatever the auditor sells. This is the opposite. The actual diagnostic framework we run on every new Praxxii engagement: 36 specific checks across 6 audit zones, with red-flag thresholds and a prioritization matrix that ranks findings by leverage. Run it on your own account in a weekend. Within 90 minutes you'll know whether you have a campaign problem, a measurement problem, a creative problem, or an operating model problem — and the four answers lead to four different actions. This is the practical companion to Day 10's CMO Operating System. That piece gave the framework; this gives the operational checklist.

Why most audits produce nothing

Three failure patterns: cataloging problems instead of ranking them; conflating symptoms with causes (a falling ROAS is a symptom; broken server-side tracking is a cause); and recommending whatever the auditor sells. Good audits avoid all three by following a defined methodology, ranking findings by leverage, and surfacing the binding constraint regardless of where it lands.

The 6 audit zones

Every performance marketing operation has six interlocking systems, and a problem in one caps the others. The right order is Zone 1 first: without trustworthy data, every finding in Zones 2-6 is suspect. Most audits start at Zone 2 because that's where the visible noise is. That's the first mistake.

Zone 1: Data and Attribution Foundation (6 checks)

1.1 Server-side tracking on every paid channel (Meta CAPI, Google Enhanced Conversions, TikTok Events API, LinkedIn CAPI), deduplicated against the browser pixel; a minimal send sketch follows this zone's scoring. Red flag: any major channel running pixel-only.

1.2 Platform-claimed revenue reconciled against back-end revenue. Red flag: platforms overclaiming by 70% or more.

1.3 Attribution model deliberately chosen. Target above $1M annual ad spend: MMM (media mix modeling) plus MTA (multi-touch attribution) plus incrementality testing. Red flag: still allocating budget on last-click.

1.4 Conversion event taxonomy clean across Meta, Google, GA4, and the CRM. Red flag: 5+ overlapping events.

1.5 Source-of-truth dashboard: marketing, finance, and the CMO see the same number. Red flag: dashboard ≠ board deck ≠ CFO spreadsheet.

1.6 Compliance audit for regulated industries: PHI/PII flowing through pixels without a BAA, missing consent management, third-party cookie reliance. Red flag: any of the above.

Scoring: 6/6 trustworthy · 4-5 directional but flawed · 0-3 binding constraint, nothing else compounds.
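Check 1.1 is concrete enough to sketch. Below is a minimal, hedged example of a deduplicated server-side Purchase event against Meta's Conversions API, assuming Python with the requests library; PIXEL_ID and ACCESS_TOKEN are placeholders. The key detail is that event_id must match the eventID your browser pixel fires for the same order, because that shared ID is what Meta deduplicates on.

```python
import hashlib
import time
import requests

PIXEL_ID = "YOUR_PIXEL_ID"        # placeholder
ACCESS_TOKEN = "YOUR_CAPI_TOKEN"  # placeholder

def send_capi_purchase(order_id: str, email: str, value: float, currency: str = "USD") -> dict:
    """Send a server-side Purchase event to Meta's Conversions API.

    event_id must equal the eventID the browser pixel fired for the same
    purchase; that shared ID is what lets Meta deduplicate the pair.
    """
    payload = {
        "data": [{
            "event_name": "Purchase",
            "event_time": int(time.time()),
            "event_id": order_id,      # the dedup key shared with the pixel
            "action_source": "website",
            "user_data": {
                # Meta requires SHA-256 hashing of normalized PII
                "em": [hashlib.sha256(email.strip().lower().encode()).hexdigest()],
            },
            "custom_data": {"value": value, "currency": currency},
        }]
    }
    resp = requests.post(
        f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events",
        params={"access_token": ACCESS_TOKEN},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

TikTok's Events API follows the same shared-event-ID deduplication pattern; Google Enhanced Conversions works differently, layering hashed user data onto the existing tag.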

Zone 2: Acquisition Channel Execution (8 checks)

2.1 PMax: brand exclusions running, asset groups segmented by margin, search themes from converted Search keywords, Customer Match signals layered. Red flag: single asset group, no exclusions.

2.2 Meta Advantage+: existing-customer caps under 30%, 15-50+ active creatives, Incremental Attribution on. Red flag: ASC at default with under 10 creatives.

2.3 LinkedIn B2B: Predictive Audiences built from CRM, layered targeting (Industry × Size × Function × Seniority), Lead Gen Forms at 3-4 fields. Red flag: optimizing for the cheapest CPL and reaching practitioners, not decision-makers.

2.4 TikTok Smart+: module-level automation, Symphony auto-add, Spark Ads default, 7-10 day refresh. Red flag: manual campaigns running 6-month-old creatives.

2.5 YouTube Demand Gen at Excellent Ad Strength: 9+ images across all aspect ratios, 4+ videos including 9:16, post-March 2026 Lookalike configured. Red flag: force-migrated Video Action Campaigns (VAC) sitting paused.

2.6 AI search visibility tracked manually across ChatGPT, Perplexity, and Google AI Overviews for your top 20 commercial queries; a partial automation sketch follows this zone's scoring. Red flag: not appearing in any AI answer for material queries.

2.7 Entity hygiene across Wikidata, Crunchbase, LinkedIn, Google Business Profile, and industry directories. Red flag: outdated descriptions, missing canonical website on Wikidata.

2.8 Change cadence: no major edits more often than every 7 days, no ROAS-target shifts greater than 25% at once. Red flag: daily edits and weekly pivots, so the algorithm never exits the learning phase.

Scoring: 7-8/8 strong · 4-6 recoverable · 0-3 restructure or replace agency.
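Check 2.6 can be partially automated. A rough sketch, assuming the openai Python SDK and an OPENAI_API_KEY in the environment; BRAND and QUERIES are placeholders for your own brand and top-20 list. One caveat: this probes the raw model rather than ChatGPT's search-grounded product, Perplexity, or AI Overviews, so treat hits and misses as directional and keep the manual spot-checks.

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

BRAND = "YourBrand"  # placeholder
QUERIES = [          # placeholders for your top 20 commercial queries
    "best marketing audit framework for b2b saas",
    "top performance marketing agencies",
]

def brand_mentioned(query: str) -> bool:
    """Ask the model the query the way a buyer would; check for a brand mention."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any current chat model works
        messages=[{"role": "user", "content": query}],
    )
    answer = resp.choices[0].message.content or ""
    return BRAND.lower() in answer.lower()

for q in QUERIES:
    print(f"{'HIT ' if brand_mentioned(q) else 'MISS'}  {q}")
```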

Zone 3: Creative Production Pipeline (6 checks)

3.1 Volume against requirements: Meta 15-50+ active with 3-5 fresh weekly; PMax 15+ headlines, 5+ descriptions, 10-20 images, 3-5 videos per asset group; TikTok 5-10 fresh weekly. Red flag: under 8 new variants weekly across all paid combined.

3.2 AI creative stack across all four layers: Production (Veo, Runway, Arcads), Variation (AdCreative.ai, Canva), Intelligence (Uplifted, Segwise), Strategy (Claude/ChatGPT for briefs). Red flag: zero AI tools or single tool with no intelligence-layer feedback.

3.3 Brand prompt library exists and is actively used. Red flag: prompts improvised ad hoc each sprint.

3.4 Test plan structured: each batch tests specific dimensions deliberately, results documented. Red flag: "ship 30 variants and see what works" with no captured learnings.

3.5 Refresh cadence: Meta every 2-4 weeks, LinkedIn every 4 weeks at most, TikTok every 7-10 days, PMax monthly; a cadence-check sketch follows this zone's scoring. Red flag: any creative running unchanged for 60+ days.

3.6 A single producer owns variant volume as a manufacturing function. Red flag: production split across a paid manager, a brand designer, and an agency, with no single owner.

Scoring: 5-6/6 at algorithmic-required volume · 3-4 capped by supply · 0-2 binding constraint regardless of campaign management.
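The cadence windows in check 3.5 are easy to enforce with a script over a creative inventory export. A minimal sketch with assumed day counts (Meta's 2-4 weeks read as a 28-day ceiling, LinkedIn's 4-week max as 28 days, PMax's monthly as 30); the example rows are hypothetical:

```python
from datetime import date, timedelta

# Refresh windows from check 3.5, read as day counts (assumptions, not platform rules)
REFRESH_DAYS = {"meta": 28, "linkedin": 28, "tiktok": 10, "pmax": 30}
HARD_STOP = 60  # the check's red flag: any creative unchanged 60+ days

def flag_stale(creatives: list[dict], today: date | None = None) -> list[str]:
    """Return warnings for creatives past their channel window or the 60-day red flag."""
    today = today or date.today()
    warnings = []
    for c in creatives:
        age = (today - c["last_refreshed"]).days
        if age >= HARD_STOP:
            warnings.append(f"RED FLAG: {c['name']} unchanged for {age} days")
        elif age > REFRESH_DAYS.get(c["channel"], 30):
            warnings.append(f"stale: {c['name']} ({c['channel']}) at {age} days")
    return warnings

# Example rows; in practice, export these from each ad platform
inventory = [
    {"name": "ugc_hook_v3", "channel": "tiktok", "last_refreshed": date.today() - timedelta(days=14)},
    {"name": "brand_hero",  "channel": "meta",   "last_refreshed": date.today() - timedelta(days=75)},
]
print("\n".join(flag_stale(inventory)))
```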

Zone 4: Conversion Infrastructure (6 checks)

4.1 Mobile LCP (Largest Contentful Paint) under 2.5s, Core Web Vitals green; an automated check sketch follows this zone's scoring. Red flag: mobile LCP above 4 seconds.

4.2 Form fields kept to a minimum: 3-field forms convert at 10.1%; 9-field forms at 3.6%. Red flag: 4+ fields top-of-funnel, 6+ on demo requests.

4.3 Message match between ad and landing page: headlines aligned, offer identical. Red flag: ad promises X, page leads with Y.

4.4 Mobile parity, not just responsiveness: sticky CTA, single-column forms, 48×48dp tap targets. Red flag: mobile converting at less than 65% of desktop.

4.5 Above-the-fold composition: single primary CTA, specific value proposition (numbers, outcomes), social proof above the fold. Red flag: 3+ competing CTAs, generic value prop.

4.6 Test cadence: at least one structured A/B test per priority page per month, run to statistical significance, results documented. Red flag: no testing tool deployed, or tests run ad hoc.

Scoring: 5-6/6 healthy · 3-4 top-quartile gap recoverable with focused CRO · 0-2 doubling spend caps your ceiling.
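Check 4.1 doesn't need a manual Lighthouse run per page. A sketch against Google's public PageSpeed Insights v5 API, assuming the requests library; the page URL is a placeholder, an API key is optional but lifts the rate limit, and note this returns the lab LCP value, which can differ from field (CrUX) data.

```python
import requests

PSI = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def mobile_lcp_seconds(page_url: str, api_key: str | None = None) -> float:
    """Fetch a page's lab LCP on mobile via the PageSpeed Insights API."""
    params = {"url": page_url, "strategy": "mobile"}
    if api_key:
        params["key"] = api_key  # optional; lifts the unauthenticated rate limit
    data = requests.get(PSI, params=params, timeout=60).json()
    ms = data["lighthouseResult"]["audits"]["largest-contentful-paint"]["numericValue"]
    return ms / 1000.0

lcp = mobile_lcp_seconds("https://example.com/landing-page")  # placeholder URL
status = "green" if lcp <= 2.5 else ("RED FLAG" if lcp > 4.0 else "needs work")
print(f"mobile LCP: {lcp:.2f}s -> {status}")
```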

Zone 5: Retention and Lifecycle Engine (5 checks)

5.1 Email contributes 25-30%+ of revenue (D2C) or drives expansion (B2B), with five core flows: welcome, abandoned cart, browse abandonment, post-purchase, win-back. Red flag: under 15% of revenue, or only 1-2 flows.

5.2 Platform fits the model: Klaviyo for D2C, Customer.io for B2B SaaS, Braze or Iterable for enterprise. Red flag: Mailchimp on a $5M+ business; Klaviyo on B2B SaaS.

5.3 RFM (recency, frequency, monetary) segmentation foundation: Champions, Loyal, At Risk, Hibernating, with differentiated content and frequency per segment. Red flag: blasts to "all subscribers" with no segmentation.

5.4 AI personalization layer enabled: predictive product recommendations, predictive send-time, channel-affinity routing. Red flag: still relying on generic personalization.

5.5 Deliverability: SPF/DKIM/DMARC configured (a DNS check sketch follows this zone's scoring), 90-day unengaged suppression, inbox placement testing. Red flag: missing authentication, or a list older than 18 months without cleaning.

Scoring: 4-5/5 compounding · 2-3 leaving 2-3x improvement on the table · 0-1 fix in parallel; unit economics depend on it.
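Check 5.5's authentication piece is three DNS lookups. A minimal sketch assuming dnspython; the DKIM selector varies by sending platform, so "selector1" below is a placeholder you'd swap for your ESP's actual selector, and the domain is hypothetical.

```python
import dns.resolver  # pip install dnspython

def txt_records(name: str) -> list[str]:
    """Return TXT records for a DNS name, or an empty list if none exist."""
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

def check_email_auth(domain: str, dkim_selector: str = "selector1") -> None:
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dkim = txt_records(f"{dkim_selector}._domainkey.{domain}")
    dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    for label, recs in (("SPF", spf), ("DKIM", dkim), ("DMARC", dmarc)):
        print(f"{label}: {'configured' if recs else 'MISSING (red flag)'}")

check_email_auth("example.com")  # placeholder domain
```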

Zone 6: Operating Model and Org Structure (5 checks)

6.1 Team structured by platforms and outcomes, not channel silos: a platforms lead, a creative producer, an audience-and-conversion lead. Red flag: separate Meta, Google, SEO, and email people each optimizing their own channel.

6.2 Agency relationship integrated, not fragmented. Red flag: 3+ separate agencies with no integration layer.

6.3 Leadership runs on the four-metric dashboard: blended CAC vs incremental revenue, payback period, LTV:CAC at 12 months, and a downstream signal (pipeline velocity for B2B, repeat purchase for D2C); a calculation sketch follows this zone's scoring. Red flag: monthly reviews dominated by ROAS and CPL.

6.4 Reporting cadence aligns with decision cadence: weekly reporting informs daily decisions, monthly reporting informs quarterly ones. Red flag: weekly reports the CMO can't act on.

6.5 5% of the marketing budget allocated to a test-and-experiment reserve. Red flag: no test line item.

Scoring: 4-5/5 system can compound · 2-3 team is good but system caps them · 0-1 nothing else changes lasting outcomes until rebuilt.
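The four-metric dashboard in check 6.3 is arithmetic once finance agrees on inputs. A sketch with assumed metric definitions (your finance team's definitions govern; the downstream signal comes from CRM or commerce data, not this calculation), with hypothetical example numbers:

```python
def four_metric_dashboard(
    spend: float,                        # blended marketing spend for the period
    new_customers: int,
    incremental_revenue: float,          # revenue above the modeled baseline
    monthly_margin_per_customer: float,  # gross margin contribution per customer per month
    ltv_12mo: float,                     # modeled 12-month customer value
) -> dict:
    cac = spend / new_customers
    return {
        "blended_cac": round(cac, 2),
        "incremental_revenue_per_dollar": round(incremental_revenue / spend, 2),
        "payback_months": round(cac / monthly_margin_per_customer, 1),
        "ltv_to_cac_12mo": round(ltv_12mo / cac, 2),
    }

# Example: $250K spend, 500 new customers -> $500 CAC, 4.2-month payback, 3.6 LTV:CAC
print(four_metric_dashboard(
    spend=250_000, new_customers=500, incremental_revenue=600_000,
    monthly_margin_per_customer=120, ltv_12mo=1_800,
))
```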

The prioritization matrix

After scoring all 6 zones, the rule: always start with the lowest-scoring zone, weighted toward earlier-numbered zones. The zones compound. Broken Zone 1 (data) means every Zone 2 finding is suspect. Broken Zone 3 (creative) caps Zone 2 scaling regardless of campaign quality. Broken Zone 6 (operating model) means Zones 1-5 keep regressing because the system that should produce good outcomes is structurally producing the current ones.

The decision tree:

Zone 1 < 4/6 → data foundation rebuild is P0. 4-6 weeks.

Zone 6 < 3/5 → operating model rebuild is P0. Tactical fixes won't stick. 8-12 weeks.

Zone 3 or Zone 5 < 3 → creative supply or lifecycle is binding the system. Fix in parallel with Zones 1 and 6 if those are also weak. 6-8 weeks.

Zones 2 and 4 weakest → tactical optimization. Manageable with an existing team or a specialist partner. 4-6 weeks per zone.
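The tree is mechanical enough to express as code. A minimal sketch mirroring the thresholds above, with earlier zones winning ties by branch order; the zone keys and example scores are hypothetical:

```python
def prioritize(scores: dict[str, int]) -> str:
    """Map zone scores, e.g. {"z1": 5, ..., "z6": 4}, to the P0 move.

    Thresholds and timelines mirror the decision tree above.
    """
    if scores["z1"] < 4:
        return "P0: data foundation rebuild (4-6 weeks)"
    if scores["z6"] < 3:
        return "P0: operating model rebuild (8-12 weeks)"
    if scores["z3"] < 3 or scores["z5"] < 3:
        return "creative/lifecycle fix, in parallel with any weak Zone 1/6 work (6-8 weeks)"
    return "tactical optimization of Zones 2 and 4 (4-6 weeks per zone)"

print(prioritize({"z1": 5, "z2": 4, "z3": 2, "z4": 5, "z5": 4, "z6": 4}))
# -> creative/lifecycle fix, in parallel with any weak Zone 1/6 work (6-8 weeks)
```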

Most teams start with Zone 2 because that's the visible noise. Zone 2 fixes typically produce ROAS lifts that fade within two quarters and don't compound. Zone 1 and Zone 6 fixes produce 12-month efficiency gains that do.

What to do with this

Pull the 36 checks for your account. Score each zone. Identify the lowest-scoring two or three. Cross-reference against the matrix. Build the 90-day rebuild plan. The corresponding playbook for each zone lives in the catalog — Day 5/7 for data, Days 3, 4, 9, 12, 13 for channels, Day 8 for creative, Day 6 for CRO, Day 11 for lifecycle, Day 10 for operating model. If three or more zones score below 4, you're not looking at tactical optimization — you're looking at a structural rebuild. The right move is a 90-day diagnostic-and-rebuild engagement, not a campaign-level intervention.

If you'd rather have us run this audit on your account — same framework, ranked rebuild plan, shipped across operations from $500K to $30M in annual marketing spend — that's the entry point for Praxxii engagements at Praxxii Global. The audit produces a 6-zone scorecard, the prioritization matrix specific to your operation, and a 90-day rebuild plan you can run with your existing team or with us.

Most accounts mistake optimization for rebuild and spend a year on whichever wasn't the binding constraint. Run the audit. The binding constraint is usually not where you've been looking.