Some “Optimizations” Hurt More Than They Help

Test with confidence. Optimize effectively. Grow sustainably.

Time and time again, I hear it: “Optimize and iterate to watch your profits explode!”

It sounds great, but it’s not that easy.

Some iterations flop… miserably. 

And when that happens, it’s not just disappointing. It’s expensive.

You might not realize the damage until months later when your revenue drops, and you’re running around like a chicken with its head cut off, wondering, “What went wrong?”

You don’t want to be the brand blindly rolling out changes just because “testing is good.”

A small tweak here, a slight change there. It’s all harmless…

Until you find yourself in your own Black Swan moment: one minor tweak, and your entire funnel suddenly collapses.

All you need to stop this damage in its tracks is a foundational A/B testing system, so you stop burning cash on changes that do more harm than good.

A/B Testing: Your Best Defense Against Revenue Loss

The simplest, most cost-effective way to protect your revenue is A/B testing.

But only if you do it right.

Some brands treat A/B testing like flipping a coin.
Roll out a change, cross your fingers, and hope for the best.

That’s not testing. That’s wishful thinking.

A client of ours was about to make a small change that seemed like a good idea.

So we asked to test it first, and it turned out that if they had rolled it out, it would have cost them $1.5 million a year in lost revenue.

A/B testing caught the issue before it became catastrophic.

But this kind of damage happens more often than you’d think.

Brands come to us after noticing unexplained revenue drops because they have no idea which changes are causing the problem.

What makes the situation even more frustrating is that they’ve made tweaks throughout the year without testing, so they can’t even pinpoint where the issue started.

And by the time they figure it out, the damage is already done.

A/B testing not only identifies the specific issue, but it also prevents small mistakes from snowballing into unfixable losses.

Where Brands Lose Money

One of the biggest revenue killers is bad testing hygiene.

Brands will run A/B tests too quickly and make decisions based on incomplete data. 

They assume a seven-day test is enough, but that's not long enough unless you pull in close to a million monthly visitors.

Short tests are dangerous because:

  • A test might look like a big winner in the first week, but that’s often due to the novelty effect (returning visitors engaging with a new change).

  • Over time, that initial boost fades, and the uglier reality starts to peek out.

  • Calling a test too early means you could be drawing false conclusions about wins and losses.

Case in Point:

We ran a free shipping threshold test where AOV jumped by $19 in week one. 

But after an entire test cycle, it settled at a $7 increase. 

If we had stopped early, the brand would have overestimated its impact and made terrible pricing decisions.

To Avoid Wasting Money on Bad Tests

Run tests long enough, typically 2-4 weeks, depending on traffic and funnel position.
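
For a rough sense of why a single week is rarely enough, here’s a minimal sample-size sketch in Python using the standard two-proportion approximation; the baseline conversion rate, target lift, and weekly traffic below are made-up numbers for illustration, not client data or our exact methodology.

    from statistics import NormalDist
    import math

    def visitors_needed(baseline_cr, relative_lift, alpha=0.05, power=0.80):
        # Rough per-variant sample size for a two-proportion test
        # (normal approximation); a sanity check, not a full stats tool.
        p1 = baseline_cr
        p2 = baseline_cr * (1 + relative_lift)
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
        z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
        p_bar = (p1 + p2) / 2
        top = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
               + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return math.ceil(top / (p2 - p1) ** 2)

    # Made-up inputs: 2.5% baseline conversion, trying to detect a 10% relative lift,
    # with roughly 40,000 test-eligible visitors per week split 50/50 across two variants.
    per_variant = visitors_needed(0.025, 0.10)
    weekly_traffic = 40_000
    weeks = (2 * per_variant) / weekly_traffic
    print(f"~{per_variant:,} visitors per variant, about {weeks:.1f} weeks at this traffic level")

With these assumed inputs, the test needs roughly three weeks of traffic before a 10% relative lift is reliably detectable, which is exactly why week-one calls are so risky.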

Then track key success metrics:

  • Transaction rate – Are more purchases happening?

  • AOV (Average Order Value) – Are people spending more per order?

  • UPT (Units Per Transaction) – Are they buying more items per checkout?

  • Add-to-cart rate – Are they adding more products, signaling strong intent?

  • Personalization engagement – Are visitors interacting with recommended or previously viewed products?

For subscription businesses, additional must-watch metrics include:

  • Subscriber conversion rate – How many first-time buyers become subscribers?

  • Churn rate – Are people sticking around longer?

  • Newsletter sign-ups – If quantified properly, this can be a major driver of LTV.
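
To make those metrics concrete, here’s a minimal sketch of how you might compute and compare the core ones per variation once a full test cycle is done. The class, field names, and figures are hypothetical, not a specific analytics schema or real results.

    from dataclasses import dataclass

    @dataclass
    class VariationStats:
        sessions: int        # visitors exposed to this variation
        add_to_carts: int    # sessions with at least one add-to-cart
        orders: int          # completed transactions
        revenue: float       # total revenue from those orders
        units: int           # total items sold

        @property
        def transaction_rate(self):  # purchases per session
            return self.orders / self.sessions

        @property
        def aov(self):               # average order value
            return self.revenue / self.orders

        @property
        def upt(self):               # units per transaction
            return self.units / self.orders

        @property
        def add_to_cart_rate(self):
            return self.add_to_carts / self.sessions

    # Hypothetical full-cycle numbers, just to show the comparison you'd run.
    control = VariationStats(50_000, 4_100, 1_250, 112_500.0, 2_000)
    variant = VariationStats(50_000, 4_400, 1_310, 121_800.0, 2_150)

    for metric in ("transaction_rate", "aov", "upt", "add_to_cart_rate"):
        c, v = getattr(control, metric), getattr(variant, metric)
        print(f"{metric}: control={c:.4f}  variant={v:.4f}  lift={(v - c) / c:+.1%}")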

If you're having trouble pinpointing the root cause of your optimization issues, our Usability Scorecard can help.

We have real users complete key site tasks and rate difficulty (1-4). 

Anything above a 1 triggers a follow-up asking “why,” which helps us pinpoint where friction is hurting conversions and what’s causing it.

We’ve found that tests generated from our usability scorecard have a 44% win rate, nearly double the industry average of 20-25%.

It helps brands optimize as efficiently as possible instead of making expensive guesses.
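
For a rough sense of the mechanics behind that kind of scorecard (the 1-4 scale and the “anything above a 1 gets a why” rule described above), here’s a small illustrative sketch; the tasks and answers are invented, not actual scorecard output.

    # Each rating is (task, difficulty 1-4, "why" answer). Tasks and answers are invented.
    ratings = [
        ("find a product via search", 1, None),
        ("pick a size on the product page", 3, "size chart was hard to find"),
        ("apply a discount code", 2, "code field hidden behind an accordion"),
        ("check out on mobile", 4, "form kept clearing my address"),
    ]

    friction = {}
    for task, score, why in ratings:
        if score > 1:  # anything above a 1 triggers the "why" follow-up
            friction.setdefault(task, []).append((score, why))

    # Worst-scoring tasks first; these become the test hypotheses.
    for task, issues in sorted(friction.items(), key=lambda kv: -max(s for s, _ in kv[1])):
        worst = max(s for s, _ in issues)
        print(f"[score {worst}] {task}: " + "; ".join(why for _, why in issues))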

Quote of the week:

The dominant pattern of history isn’t stability, but instability; the dominant pattern of business isn’t perpetuation of the incumbents, but triumph of the insurgents; the dominant pattern of capitalism isn’t equilibrium, but what Joseph Schumpeter famously described as the “perennial gale of creative destruction.”

The Biggest A/B Testing Myths (and Why They’re Costing You Money)

Running quick A/B tests won’t shoot your revenue up overnight. 

But thanks to some questionable advice floating around LinkedIn and X, too many brands fall into two major misconceptions that lead to wasted time, bad decisions, and lost revenue.

Myth #1: A/B Testing Should Be Fast-Paced

Some people love to brag about running a new test every seven days like they’re operating at a Google-level scale. 

That's not how this works unless you’re pulling in 700K+ monthly visitors.

Here’s why running tests too fast is a bad idea:

  • What looks like a big win in week one often evens out by week three.

  • Returning visitors engage with something new, boosting early numbers, but that spike rarely lasts.

  • A test that crushed it in 2020 might flop in 2024. Shopping habits evolve, and assumptions need to be revalidated.

We’ve seen tests that started looking transformative, only to level out once they ran long enough to reach statistical significance. 

Brands that call tests too early often make costly mistakes based on incomplete data.
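
Before calling a test, it’s worth at least a basic significance check on the full run rather than the first-week spike. Here’s a minimal two-proportion z-test sketch with made-up numbers; it’s a standard textbook formula, not our internal tooling, and you can run the same check separately for new vs. returning visitors to spot a novelty effect.

    from statistics import NormalDist
    import math

    def two_proportion_p_value(conv_a, visitors_a, conv_b, visitors_b):
        # Two-sided p-value for the difference between two conversion rates.
        p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
        p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
        z = (p_b - p_a) / se
        return p_a, p_b, 2 * (1 - NormalDist().cdf(abs(z)))

    # Invented numbers for one test: an early spike vs. the full cycle.
    checkpoints = {
        "week 1":     (310, 12_000, 365, 12_000),
        "full cycle": (1_150, 46_000, 1_205, 46_000),
    }
    for label, (ca, va, cb, vb) in checkpoints.items():
        p_a, p_b, p_value = two_proportion_p_value(ca, va, cb, vb)
        verdict = "significant at 95%" if p_value < 0.05 else "not significant"
        print(f"{label}: control {p_a:.2%} vs. variant {p_b:.2%}, p={p_value:.3f} ({verdict})")

In this invented example, the week-one spike clears the 95% bar while the full cycle doesn’t, which is exactly the trap of calling a test early.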

Myth #2: One Person Can Run Your Entire A/B Testing Program

Brands tend to hand A/B testing off to a single person and expect solid results, but that’s like asking your graphic designer to handle legal contracts.

A proper A/B testing team needs:

  • A strategist to develop test hypotheses and plans.

  • A designer to create test variations that make sense.

  • An engineer to build and deploy the tests correctly.

  • An analyst to interpret results without bias.

Larger teams also bring in UX researchers and QA specialists to ensure everything runs smoothly. Without them, things can get messy fast.

The biggest risk of a one-person testing team is bias.

  • If they’re emotionally invested in a test, they might push to prove it “worked” even when the data says otherwise.

  • They could unknowingly cherry-pick numbers to support their ideas (or dismiss tests they didn’t like).

  • Strategy, design, and development all require different skills. Stacking it all on one person guarantees something gets overlooked.

You don’t have to focus on speed. Instead, make the right move based on accurate data. 

Brands that take their time, build the right team, and focus on sustainable optimization will always win in the long run.

Win of the Week: When the Maker’s Story Didn’t Convert

Post-purchase surveys showed that customers cared about the craftsmanship and history behind Made In’s products, but the product pages (PDPs) didn’t highlight this. 

The idea seemed simple: add a video sharing the Maker’s Story to justify the price and increase conversions.

A test was launched on Made In’s PDPs across desktop and mobile. 

The variation (V1) added a “How It’s Made” CTA in the hero section, opening a video about the makers behind the products. 

The goal: More cart adds. 

The result: Not what was expected.

What the Data Showed

  • Only 7-8% of visitors clicked to watch the video. Those who did converted 26-46% better than the control.

  • For those who didn’t watch, performance dropped:

      – Mobile non-watchers: cart adds fell 4% vs. control.

      – Desktop non-watchers: cart adds dropped 15%.

  • Returning visitors responded poorly:

      – Mobile: cart adds down 11-15%.

      – Desktop: cart adds down 17%.

  • Paid search was the only traffic source that saw a lift. Other channels declined.

Why It Didn’t Work

  • The video engaged a small segment, but most ignored it. Those who clicked performed better, but the majority didn’t engage.

  • It may have distracted casual shoppers. Instead of guiding them toward a purchase, it added an extra decision that wasn’t essential to conversion.

  • Page load speed could have been a factor. The video likely increased load times, creating friction for users who weren’t interested in watching.

  • Sizing remained a serious concern. Users kept toggling between size options, reinforcing past data that better size guidance would be more valuable than an extra content feature.

What’s Next

  • Revert to control. 

The video didn’t justify the trade-off in lost conversions.

  • Test a more concise version of the Maker’s Story. 

A shorter, more digestible format may engage those who care while minimizing friction for those who don’t.

  • Refocus on core buying decisions. 

Data suggests users are more concerned with size selection than brand storytelling at this funnel stage.

  • Test placing an embedded video further down the page. 

The shoppers who watched the video and converted at a higher rate are also the ones most likely to scroll the full page, so they would still find it. Moving it down keeps it out of the decision-making path for shoppers who aren’t interested in watching.

Adding more content isn’t always the answer.

It might get in the way if it doesn’t support the buying decision.

To Conclude

A/B testing will ultimately help you make better decisions.

Every test, win or loss, helps you get closer to understanding how customers behave, what they respond to, and where friction exists.

Treat testing as an ongoing process rather than a one-off tactic.

Your focus shouldn’t just be on what to test, but on how to test better.

  • Think long-term and ignore the surface-level wins.

  • Optimize for the full customer journey. 

  • Challenge past assumptions. What worked last year might not work today. Retesting and revisiting hypotheses is just as important as launching new experiments.

  • Prioritize insights over results. Even a “failed” test teaches you something valuable. Every piece of data helps you refine the next move.

The point of this process is to improve a little each day rather than just react.

Don’t move fast for the sake of speed; iterate with intention and test better.

Looking forward,

Brian

P.S. If you’re looking to improve your conversion rate but you don’t know where to begin or don’t have the team to support it, give us a shout. We can help you determine the best path to increase your conversion rate and revenue.