Ask ten Meta media buyers whether you should run CBO or ABO and you'll get ten different answers, all stated with complete confidence. "Always run CBO — it's what the algorithm wants." "Never run CBO — you lose control over budget allocation." Both camps are wrong, because the answer depends on account maturity, event volume, testing stage, and what you're optimizing for. Here's a data-backed framework for deciding which structure to use, when to switch between them, and why the hybrid approach outperforms both pure strategies.
CBO (Campaign Budget Optimization) means the budget is set at the campaign level and Meta's algorithm automatically distributes it across the adsets beneath that campaign based on which adsets are performing best. You set one number — the campaign budget — and the algorithm decides how much goes to each adset in real time.
ABO (Adset Budget Optimization) means the budget is set at the adset level. Each adset has its own independent budget that you control manually, and the algorithm only optimizes within each adset — not across them. If you run 5 adsets at $20/day each, every adset gets exactly $20, regardless of which one is performing best.
Meta officially renamed CBO to "Advantage Campaign Budget" in 2022, but everyone still calls it CBO, so we will too. The functionality hasn't changed.
The standard advice you'll hear on Twitter, Reddit, and Meta marketing podcasts goes something like this: "Always use CBO, the algorithm is smarter than you." "CBO is for scaling, ABO is for testing." "Lead gen accounts should stick to ABO."
Every one of these rules has exceptions that matter more than the rule itself. CBO is not universally better. CBO for scaling is fine but not required. Lead gen accounts can absolutely crush it with CBO. The real decision depends on four factors that nobody talks about.
Use these four questions to decide between CBO and ABO. If you answer honestly, the right structure becomes obvious.
Meta's algorithm needs 50+ optimization events per adset per week to exit the learning phase. If your budget is split across 5 adsets in an ABO structure, each one needs to hit 50 events independently. If your target CPA is $25 and you're spending $100/day, that's about 4 events/day total, or 0.8 events/day per adset. That's roughly 5-6 per week per adset, nowhere near enough to exit learning.
CBO fixes this by letting the budget flow disproportionately to the adset that's winning. Instead of 5 adsets each getting $20/day, one might get $60 and four might get $10. The winning adset reaches 50 events fast, exits learning, and stabilizes. The rest act as dormant options the algorithm can shift to if the winner fatigues.
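The arithmetic above is worth making explicit. A quick sanity check you can run before choosing a structure (a sketch: plug in your own budget, target CPA, and adset count; the ~50 events/week threshold is the one Meta documents for the learning phase):

```python
def weekly_events_per_adset(daily_budget: float, target_cpa: float, n_adsets: int) -> float:
    """Expected optimization events per adset per week under an ABO equal split."""
    events_per_day_total = daily_budget / target_cpa
    return (events_per_day_total / n_adsets) * 7

# $100/day at a $25 CPA split across 5 adsets: ~5.6 events/adset/week,
# far below the ~50/week the learning phase wants.
print(weekly_events_per_adset(100, 25, 5))   # 5.6

# The same budget concentrated into one adset (or funneled to one winner
# by CBO): 28 events/week -- closer, though still short of 50.
print(weekly_events_per_adset(100, 25, 1))   # 28.0
```

If the first number comes out in single digits, ABO is fragmenting your signal before you've made any other decision.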
If your 5 adsets target overlapping audiences (e.g., a broad lookalike, a 1% lookalike, a 3% lookalike, an interest-based audience, and broad demographic targeting), there's massive overlap between the people in each audience. In an ABO structure, you're effectively bidding against yourself: Meta's auction pits your own adsets against each other for the same impressions, driving your CPM up.
CBO fixes this because the algorithm can identify the best-performing adset and throttle delivery to the other overlapping ones, preventing self-competition. This is the #1 reason CBO typically wins when you're testing multiple similar audiences.
ABO wins when your adsets target completely different audiences with no overlap — e.g., one adset for men 25-34 and another for women 45-54 in a different country. No auction competition, so budget-level control doesn't hurt you.
If you're testing audiences, ABO is better because it guarantees equal spend to each audience so you can compare them fairly. In CBO, the algorithm will cut budget from the "worse" audience within hours, and you'll never get clean data on whether that audience could have scaled with more time.
If you're testing creative, CBO is better because it lets the algorithm route budget to the best creative fast, and you get performance data quicker. Equal spend across creatives is the slower path to the same answer.
If you're testing placements or bid strategies, ABO is better because those structural changes need clean, equal comparison data.
New accounts (fewer than 500 lifetime purchases or conversions) should almost always run CBO. The algorithm needs concentrated data, and ABO forces the spread too thin. Mature accounts (thousands of conversions) can run either, and ABO gives you finer control that actually pays off because the algorithm already has strong signal.
"The mistake new media buyers make is running ABO on an account with 3 conversions per week and wondering why nothing works. The algorithm needs 50 per adset per week to optimize, and ABO fragments the signal. CBO consolidates it. On a brand-new account, CBO isn't a preference — it's a mathematical requirement."
Most media buyers pick one structure and stick with it. The top-performing accounts in our portfolio run a hybrid: CBO for scaling, ABO for testing and retargeting, in parallel, within the same account. Here's how it works.
Dedicate a specific campaign to creative and audience testing. Run it as ABO so every test gets equal spend and clean comparison data. Budget is small ($30-100/day per adset). The goal isn't ROAS — it's learning. Kill losers fast, promote winners to the scaling layer.
Separate campaign, running CBO with your winning creatives and audiences. Budget is larger (your actual spend target). The goal is efficient delivery at scale — the algorithm routes budget to whichever winning adset is performing best on any given day.
Separate campaign, running ABO, with dedicated budgets for each retargeting pool (website visitors, cart abandoners, past purchasers, email list, etc.). You want guaranteed floor spend for each pool, not algorithm-driven allocation.
This three-campaign structure gives you the best of both worlds: clean test data from ABO, efficient scale from CBO, and controlled retargeting from ABO. Every adset in the scaling layer is a proven winner that graduated from the testing layer, and every retargeting pool has its own budget guarantee.
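One way to pin the three-layer structure down is as a plain config. Campaign names, adset names, and budgets here are illustrative placeholders, not Meta API objects; the only load-bearing field is `budget_level`, which marks where the budget lives (campaign = CBO, adset = ABO):

```python
# Illustrative hybrid account structure -- every name and number is a placeholder.
HYBRID_STRUCTURE = [
    {
        "campaign": "testing-layer",
        "budget_level": "adset",        # ABO: every test gets equal, guaranteed spend
        "adsets": [
            {"name": "test-creative-A", "daily_budget": 50},
            {"name": "test-creative-B", "daily_budget": 50},
            {"name": "test-audience-X", "daily_budget": 50},
        ],
    },
    {
        "campaign": "scaling-layer",
        "budget_level": "campaign",     # CBO: algorithm allocates across proven winners
        "daily_budget": 500,
        "adsets": [
            {"name": "winner-1"},
            {"name": "winner-2"},
            {"name": "winner-3"},
        ],
    },
    {
        "campaign": "retargeting-layer",
        "budget_level": "adset",        # ABO: guaranteed floor spend per pool
        "adsets": [
            {"name": "site-visitors", "daily_budget": 30},
            {"name": "cart-abandoners", "daily_budget": 40},
            {"name": "past-purchasers", "daily_budget": 20},
        ],
    },
]
```

The flow between layers is one-directional: adsets graduate from `testing-layer` into `scaling-layer` once they've proven out, never the other way around.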
With 10+ adsets, CBO starves most of them — 2-3 get all the budget and the rest receive almost nothing. The non-winners don't even get enough spend to fail honestly. Cap CBO campaigns at 3-5 adsets for clean algorithmic distribution.
When you add a new adset to a running CBO campaign, the algorithm has to reallocate budget to test the newcomer, which often disrupts performance of the proven winners. If you want to test something new, launch it in a separate ABO campaign first and graduate to the CBO campaign only after it proves out.
CBO actively prevents clean audience testing because it kills spend on adsets that look "worse" in the first 24 hours — which is exactly the window where audience tests have the most noise. Never test audiences in CBO.
Cost cap bidding + CBO on a new account is a recipe for Learning Limited. The cap restricts delivery, and CBO amplifies any delivery issues by choking the struggling adsets further. Use Lowest Cost for new CBO launches.
$20/day per adset in ABO will never exit the learning phase on most accounts. You'll get 5-10 events per adset per week and the algorithm will never stabilize. Either consolidate into fewer adsets with higher budgets or switch to CBO.
Running a 1% lookalike and a 3% lookalike in separate ABO adsets means you're bidding against yourself for the people in the 1% overlap. CPM goes up, CPA goes up, and both adsets underperform. Consolidate overlapping audiences into one adset or switch to CBO.
ABO forces equal spend, but performance is almost never equal. If one adset is crushing at $15 CPA and another is dying at $80 CPA, ABO will keep funding the losing one until you manually pause it. Check ABO campaigns daily and manually reallocate — the algorithm won't do it for you.
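That daily check can be semi-automated. A sketch: the `stats` list is a hypothetical stand-in for however you pull spend and conversion numbers (an ads reporting export, a CSV, etc.), and the thresholds are yours to set:

```python
def flag_abo_adsets(stats, max_cpa: float, min_spend: float = 20.0):
    """Flag ABO adsets whose CPA has blown past the threshold.

    `stats` is a list of dicts like {"name": ..., "spend": ..., "conversions": ...}.
    Adsets below `min_spend` are skipped -- too little data to judge.
    """
    flagged = []
    for s in stats:
        if s["spend"] < min_spend:
            continue  # too early to call
        # Zero conversions at meaningful spend is an automatic flag.
        cpa = s["spend"] / s["conversions"] if s["conversions"] else float("inf")
        if cpa > max_cpa:
            flagged.append((s["name"], cpa))
    return flagged

stats = [
    {"name": "winner", "spend": 150.0, "conversions": 10},  # $15 CPA: leave it
    {"name": "loser", "spend": 160.0, "conversions": 2},    # $80 CPA: flag it
    {"name": "fresh", "spend": 5.0, "conversions": 0},      # not enough spend yet
]
print(flag_abo_adsets(stats, max_cpa=30.0))  # [('loser', 80.0)]
```

The flag list is a prompt for a human decision, not an auto-pause: an adset a day into learning can look like a loser and recover.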
CBO vs ABO isn't a religious debate — it's a structural decision that depends on what you're doing and what data you have. The best media buyers use both, in parallel, for different jobs in the same account. Testing? ABO. Scaling winners? CBO. Retargeting? ABO. This hybrid approach beats pure CBO and pure ABO every time we've tested it.
The worst approach is picking a side based on Twitter advice and applying it to every campaign regardless of context. Meta's algorithm rewards strategic thinking, not dogma. Run the structure that fits the specific job, and your account will outperform 90% of competitors who picked a team and stuck with it.