
January 16, 2026 • 12 min read

You're running $700/day on Facebook ads. You upload 3 new creatives every day to your CBO campaign. ROAS looks decent at 3.5.
But here's the problem: Most of your new creatives aren't even getting $10 in daily spend. Meta's algorithm is dumping 80% of your budget into the same old ads while your fresh concepts sit at $2 spend with zero data.
Meanwhile, another advertiser swears by ABO, claiming it gives them perfect control. Except their overall ROAS is 1.8 while CBO users are hitting 4.5.
So which structure actually wins? The answer isn't what you think—and it's costing you money if you get it wrong.
This guide breaks down the real-world performance data from advertisers spending $45/day to $45,000/month, the critical budget threshold where everything changes, and the hybrid strategy that's actually working in 2026.

CBO vs ABO
Before we dive into which performs better, let's clarify exactly what these structures do—because the confusion starts here.
Campaign Budget Optimization means you set one budget at the campaign level, and Meta's algorithm decides how to distribute that money across all your ad sets and creatives.
How it works: You set one budget at the campaign level, and Meta shifts spend between ad sets and ads in real time based on predicted performance.
The theory: Meta's algorithm is smarter than you at finding the best-performing combinations and will maximize your ROAS by automatically funding winners.
The reality: "CBO allocates 70-100% of budget to old winners, completely ignoring new creatives you upload."
One advertiser explained: "CBO is adspend level budget so the creatives you put in each creative get the spend you determined per ad but meta will push the one that it thinks will perform the best."
Best for: Scaling proven winners at $500+/day with enough conversion volume for the algorithm to learn.
Worst for: Testing new creatives, and budgets under $100/day.
Ad Set Budget Optimization means you set specific budgets for each ad set individually, giving you complete control over spend distribution.
How it works: You assign a fixed daily budget to each ad set; spend never shifts between ad sets unless you change it manually.
The theory: You know your business and testing strategy better than an algorithm, so you should control where money goes.
The reality: "ABO gives you that control, but often at the cost of overall performance" because you're not letting the algorithm optimize dynamically.
Best for: Creative testing, low budgets, and any situation where each ad set must receive guaranteed spend.
Worst for: Scaling proven winners at high budgets, where dynamic allocation outperforms manual splits.
An advertiser posted: "Running Facebook ads for the past two months. Spend is around $700/day. I'm uploading 3 AI avatar creatives everyday to my CBO campaign. ROAS is okay around 3.5, but most of the creatives aren't even getting a daily spend of $10."
The breakdown:
Why this happens: CBO prioritizes proven performers. New ads get $5 of "testing" spend, show mediocre early signals (far too little data to judge), and the budget shifts back to old winners.
Result: 90 creatives uploaded monthly, only 5 actually tested. Wasted creative production with zero useful data.
For budgets under $100/day, CBO becomes nearly useless.
With £45/day split across 3 ad sets, CBO typically allocates: £35, £8, £2. The ad sets getting £2-8 don't generate meaningful data.
The math doesn't work: Without proven winners, CBO at low budgets starves everything.
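A rough way to sanity-check this yourself. A minimal sketch, where the skew ratios and the $10/day "meaningful data" threshold are illustrative assumptions modeled on the splits reported above, not Meta-documented behavior:

```python
# Sketch: estimate whether ad sets in a CBO campaign get enough spend to
# produce meaningful data. The skew ratios and the 10/day threshold are
# assumptions based on the splits reported in this article.

def cbo_spend_estimate(daily_budget, skew=(0.78, 0.18, 0.04), min_data_spend=10.0):
    """Return (spend, gets_meaningful_data) per ad set under an assumed CBO skew."""
    return [(round(daily_budget * s, 2), daily_budget * s >= min_data_spend)
            for s in skew]

# The £45/day example above, split across 3 ad sets:
for spend, ok in cbo_spend_estimate(45):
    print(f"£{spend:.2f}/day -> {'meaningful data' if ok else 'starved'}")
```

Under this assumed skew, only the top ad set clears the data threshold, which matches the £35/£8/£2 split described above.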
CBO isn't broken—it's being used wrong. Here's when it legitimately outperforms ABO.
At $700+ daily spend with proven winners, CBO becomes the clear winner.
Real success case:
Why this works: sufficient budget volume for the algorithm to optimize, only proven creatives (no testing dilution), and real-time budget shifts to top performers.
One advertiser running multiple products: "I run a master CBO for different products/offers. I add £20 min spend ad sets (w/3 ads) on new tests to force spending and collect data. If they flop I kill em, if they perform I keep them. ROAS 4.5 for the last few weeks."
The structure:
Key: "CBO prioritises performance by pushing budget to what's working best, hence the uneven spend but better ROAS."
CBO excels when you have:
3-5 proven winners already identified
Budget >$500/day minimum
Focus on scaling, not testing
Conversion volume of 50+ per week ideally (15-50 per week as the minimum threshold)
Once these conditions exist, CBO's dynamic allocation outperforms any manual split.
For creative testing with new concepts, ABO provides what CBO cannot: even budget distribution across unknowns.
Successful testing structure:
Each creative gets sufficient budget for meaningful data without algorithm interference.
Service business with £10/day budget:
Structure:
Process: Test 3-5 days → Move winners to winners campaign → Pause losers → Keep testing and scaling separate
Result: "Clean new ad sets, isolated test budget, promoting winners only after 3-5 days is the right move."
Critical rule: At <$100/day, separate testing and scaling campaigns. CBO always favors proven over unproven.
This is where CBO goes catastrophically wrong—and it's more common than you think.
An advertiser started a brand new CBO campaign with an $80 budget. After 3 days:
Spend breakdown:
Performance:
The advertiser asked: "Why is Meta spending all my budget on the underperforming ad?"
Expert response: "That's a classic Meta CBO issue right there. The algorithm, especially when it's new and hasn't had enough conversion data, often optimises for what it thinks is the best path to your goal. But sometimes, that means it just finds the cheapest impressions or clicks, not actual customers. You absolutely need to turn off that Video5 ad. A $9 CPC and 0.72% CTR is a clear signal it's not working. Don't let it run for a week, you'll just burn more money. Kill it today."
CBO's early-stage optimization problem:
Without conversion data, Meta's algorithm optimizes for:
Cheapest impressions (CPM)
Estimated action rates (clicks, views)
Historical account patterns
It doesn't optimize for conversions until it has conversion data—which it can't get if it's not funding the right ads.
The vicious cycle:
Solution: Manual intervention. Kill the obvious underperformers within 24-48 hours; don't wait for the algorithm to figure it out.
Note: ROAS ranges based on reported Reddit advertiser data from Sept 2025-Jan 2026
The advertisers winning in 2026 aren't using CBO or ABO—they're using both strategically.
Phase 1: ABO Testing Campaign
Phase 2: CBO Scaling Campaign
The process:
Test new concepts in ABO testing campaign
Let each concept gather 3-5 days of data
Winners (hitting target CPA/ROAS) move to CBO scaling campaign
Losers get paused immediately
CBO scales winners without testing dilution
Why this works: Testing and scaling are separated. ABO gives you control when you need it (testing unknowns). CBO gives you performance when you have data (scaling knowns).
For advertisers who want everything in one campaign:
Structure:
The critical technique: Setting minimum spend per ad set forces CBO to fund your tests even when it wants to dump everything into proven winners.
Real result: "ROAS 4.5 for the last few weeks" with this structure at high budget.
When this works:
Rule #1: Never scale without data
Quote: "Your AI creative tool is irrelevant if you're not giving each creative enough budget to generate statistical significance."
Minimum budget for valid test: 2-3x your target CPA per creative
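Rule #1 translates into simple arithmetic. A minimal sketch, where the 2-3x multiplier comes straight from the rule above, and the 4-day test window is an assumption taken from the article's 3-5 day guidance:

```python
def min_test_budget(target_cpa, creatives, multiplier=2.5, test_days=4):
    """Minimum spend needed for a statistically useful creative test.

    Each creative needs roughly 2-3x target CPA in total spend (rule #1);
    multiplier=2.5 takes the midpoint. test_days=4 reflects the article's
    3-5 day testing window.
    """
    per_creative = target_cpa * multiplier
    total = per_creative * creatives
    return {
        "per_creative_total": per_creative,
        "campaign_total": total,
        "daily_budget": round(total / test_days, 2),
    }

# Example: $30 target CPA, testing 3 creatives
budget = min_test_budget(target_cpa=30, creatives=3)
# per creative: $75 total; campaign: $225; $56.25/day over 4 days
```

If that daily budget is more than you can spend, test fewer creatives at once rather than underfunding all of them.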
Rule #2: Gradual budget increases only
Never increase >10% per week. One advertiser went from 2.2 ROAS to 1.0 ROAS overnight with a 20% increase.
Rule #3: Wait for stability
"Keep increases tiny and consistent" and wait several days between changes.
Rule #4: Data threshold before moving winners
Minimum: 15-50 purchases in one week before declaring something a "winner" worthy of CBO scaling.
An advertiser shared: "I had my top creatives in one CBO with 2 broad ad sets ($100/day) and 3 creatives per ad set. Performance went up. Then I tried increasing the budget by just 20%, and my ROAS dropped from 2.2 to almost 1 overnight."
Why 20% killed it:
The correct scaling protocol:
Vertical scaling:
Horizontal scaling (better):
Budget schedule example:
Week 1: $100/day
Week 2: $110/day (+10%)
Week 3: $120/day (+9%)
Week 4: $130/day (+8%)
Slow and steady wins. Aggressive scaling crashes campaigns.
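The schedule above can be generated mechanically. A sketch, assuming a flat weekly cap (the article's example actually tapers from +10% to +8%; a fixed 10% ceiling is the conservative reading of rule #2):

```python
def scaling_schedule(start_budget, weeks, weekly_increase=0.10):
    """Daily budgets per week, never raising the budget by more than weekly_increase.

    Rule #2: never increase more than ~10% per week. A 20% jump took
    one advertiser from 2.2 ROAS to roughly 1.0 overnight.
    """
    budget = start_budget
    schedule = [round(budget, 2)]
    for _ in range(weeks - 1):
        budget *= 1 + weekly_increase
        schedule.append(round(budget, 2))
    return schedule

print(scaling_schedule(100, 4))  # [100, 110.0, 121.0, 133.1]
```

Compounding at 10% per week roughly matches the $100 → $130 path in the example schedule.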
Here's exactly which structure to use based on your situation:
"ABO tests, CBO scales."
Use ABO to identify winners through controlled testing. Use CBO to maximize winners through dynamic optimization.
The structure that wins isn't CBO or ABO—it's using the right one at the right stage.
If struggling with CBO:
1. Budget <$100/day? Switch to ABO immediately
2. Testing new creatives? Move them to a separate ABO campaign
3. 80%+ of spend going to 1-2 ads? That's a budget concentration problem: kill the underperformers
4. Pause ads with $9+ CPC or <1% CTR within 48 hours

If using ABO and want to scale:
5. Identify proven winners (15-50+ conversions at target CPA)
6. Create a separate CBO campaign for winners only
7. Keep ABO for testing new concepts
8. Increase CBO budget 5-10% weekly maximum

Starting fresh:
9. Start with ABO: test 3-5 concepts, equal budget
10. Run 3-5 days minimum (15-50 conversions target)
11. Move winners to CBO for scaling
12. Continue testing in ABO (20-30% of budget), scaling in CBO (70-80%)
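The decision flow above can be collapsed into a small helper. A sketch, assuming the thresholds quoted in this article ($100/day ABO cutoff, $500/day CBO floor, 15 conversions/week minimum, 3+ proven winners); the exact cutoffs are judgment calls from the guidance above, not Meta rules:

```python
def choose_structure(daily_budget, proven_winners, weekly_conversions,
                     testing_new_creatives):
    """Recommend CBO, ABO, or a hybrid using the article's thresholds.

    Thresholds ($100/day, $500/day, 15 conversions/week, 3 winners) are
    assumptions taken from the guidance above.
    """
    if daily_budget < 100:
        return "ABO"  # CBO is nearly useless under $100/day
    if proven_winners >= 3 and weekly_conversions >= 15 and daily_budget >= 500:
        if testing_new_creatives:
            return "hybrid"  # ABO testing campaign + CBO scaling campaign
        return "CBO"  # scale proven winners only
    return "ABO"  # keep control until you have real data

print(choose_structure(700, proven_winners=4, weekly_conversions=60,
                       testing_new_creatives=True))  # hybrid
```

The hybrid branch encodes "ABO tests, CBO scales": both structures run at once, with testing and scaling kept in separate campaigns.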
Here's what the endless debate misses: The question isn't "which is better?" It's "which is better for what?"
CBO wins at scaling proven winners with high budgets. ABO wins at testing new concepts with controlled spend. Neither wins everywhere.
The advertisers succeeding in 2026:
The advertisers struggling:
The 2026 winner isn't a campaign structure—it's a strategy. Use ABO when you need control. Use CBO when you have data. Scale gradually. Kill losers fast.
That's how you win.
Rahul Mondal
Product, Design and Co-founder, Vibemyad

Arpita Mahato
Content Writer, Vibemyad
