The Law of 100: How To Test Your Marketing in your Auto Detailing Business


The Law of 100: A Strategic Thinking Guide For Detailing Business Owners

Why Most Growth Strategies Appear To Fail In The $30k-$40k/month Range (And What Is Actually Happening)

What I see most often is a detailer at your revenue level who has a mental list of things they have “tried.” Facebook ads are on that list. Referral programs are on that list. Hiring is on that list. Raising prices, implementing a CRM, running email campaigns, building a membership program. All of it sitting in a mental folder labeled “stuff that didn’t work for me.”

What I see most often is that the story feels completely true to the person telling it. They genuinely believe they gave these things a fair shot. They genuinely believe their market is different, their customers are different, their situation is unique.

This is why I want to walk you through what is actually happening when you conclude that something “didn’t work.” Because what I see most often is not a failed strategy. What I see most often is a test that never actually happened.

This Is Why Sample Size Matters

There is a principle in statistics called the Law of Large Numbers. In everyday speech it is often called the Law of Averages, and in business we often call it the Law of 100. Understanding why this principle exists will change how you make decisions about your business.

This is why it works this way: When you perform an action a small number of times, your results will be dominated by randomness. The outcomes you see will not reflect the true probability of success. They will reflect whatever random variation happened to occur during your limited window of observation.

This is why it works this way: When you perform that same action many times, the randomness begins to cancel itself out. The true underlying probability starts to show up in your results. The signal emerges from the noise.

Think about a casino. A casino does not care whether any individual gambler wins or loses on any given night. Someone might walk in and win ten thousand dollars. Someone else might lose fifty thousand. On any single hand of blackjack, the outcome is uncertain. The casino’s edge on each bet is small.

This is why casinos make money: They know that over hundreds of thousands of bets, over millions of bets, the mathematical edge will show up in the total results. The casino is not gambling. The casino is doing math. The casino has enough volume for the Law of Averages to work.

What I see most often with detailers is the exact opposite. They are drawing conclusions from sample sizes so small that the Law of Averages cannot work at all. They are trying to figure out if something works based on a handful of observations. A handful of observations contains almost no reliable information about the true probability of success.

Think about flipping a coin ten times. You might get seven heads and three tails. You might get four heads and six tails. You might get nine heads and one tail. All of these outcomes fall within the normal range of variation for ten coin flips. None of them tell you anything meaningful about whether the coin is fair. The sample is too small. Randomness is drowning out any real signal.

Flip that coin ten thousand times and you will land very close to five thousand heads and five thousand tails. The fifty percent probability will show up in your results because you have given it enough trials for the randomness to wash out.
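You can watch this happen with a few lines of Python. This is a simulated illustration, not data from any real campaign; the seed is fixed only so the runs are repeatable:

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

def heads_fraction(flips: int) -> float:
    """Flip a fair coin `flips` times and return the fraction of heads."""
    return sum(random.random() < 0.5 for _ in range(flips)) / flips

# Ten flips: the fraction of heads swings wildly from run to run.
small_runs = [heads_fraction(10) for _ in range(5)]

# Ten thousand flips: every run lands very close to 0.50.
large_runs = [heads_fraction(10_000) for _ in range(5)]

print("10 flips:     ", small_runs)
print("10,000 flips: ", [round(f, 3) for f in large_runs])
```

The ten-flip runs scatter all over the place. The ten-thousand-flip runs cluster tightly around fifty percent, because the randomness has had enough trials to wash out.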

This is why it works this way: Your marketing has a conversion rate. Your sales conversations have a close rate. Your hiring process has a success rate. Every input in your business has an underlying probability of working. You cannot know what that probability actually is until you have run enough volume through the system for the Law of Averages to reveal it.

This Is Why One Hundred Is The Number

The number one hundred is not random. It represents the minimum sample size where you can start to have reasonable confidence in what you are seeing.

This is why it works this way: Below 100 data points, the range of possible explanations for your results is too wide to be useful. You cannot tell the difference between a strategy that works ten percent of the time and a strategy that works thirty percent of the time when you only have fifteen data points. The uncertainty is too large.

At one hundred data points, the range starts to narrow enough that you can make informed decisions. You can start to see whether something is working at a level worth continuing. You can start to identify which factors might be affecting your results.

This is why it works this way: One hundred data points do not give you certainty. They give you enough information to make a rational decision about whether to keep going, make changes, or move on to something else.
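The fifteen-versus-one-hundred claim is not a metaphor; it is binomial arithmetic. The sketch below uses hypothetical close rates of ten percent and thirty percent to show how heavily the outcomes overlap at fifteen leads and how cleanly they separate at one hundred:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Exact probability of k successes in n trials at success rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def prob_in_range(n: int, p: float, lo: int, hi: int) -> float:
    """Probability the number of successes falls in [lo, hi]."""
    return sum(binom_pmf(k, n, p) for k in range(lo, hi + 1))

# With 15 leads, both a 10% process and a 30% process routinely
# produce 2-4 wins -- you cannot tell them apart from the count.
overlap_15 = min(prob_in_range(15, 0.10, 2, 4),
                 prob_in_range(15, 0.30, 2, 4))

# With 100 leads, a 10% process almost never produces 20+ wins,
# while a 30% process almost always does -- the outcomes separate.
p10_high = prob_in_range(100, 0.10, 20, 100)
p30_high = prob_in_range(100, 0.30, 20, 100)

print(f"P(2-4 wins | n=15,  p=10%): {prob_in_range(15, 0.10, 2, 4):.2f}")
print(f"P(2-4 wins | n=15,  p=30%): {prob_in_range(15, 0.30, 2, 4):.2f}")
print(f"P(20+ wins | n=100, p=10%): {p10_high:.4f}")
print(f"P(20+ wins | n=100, p=30%): {p30_high:.4f}")
```

At fifteen leads, both processes land in the 2-4 win range well over forty percent of the time. At one hundred leads, the same two processes produce results you can actually tell apart.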

What I see most often is detailers making permanent strategic decisions based on fifteen or twenty leads collected over a couple of weeks. They call this a data-driven decision. It is not. It is an emotional decision dressed up in business language. The data they collected does not support any conclusion. It’s noise.

Everything below one hundred data points is guessing. You can feel very certain about your conclusions. The mathematical reality is that you do not have enough information to know anything useful.

 

This Is Why You Quit Early

There is a reason you stop testing strategies before you collect enough data. The reason is not impatience. The reason is not lack of discipline.

Uncertainty is deeply uncomfortable. When you are running a marketing campaign and the early results are not obviously positive, you feel anxiety. You feel doubt. You feel like you might be wasting money. Every day without a clear win feels like evidence that you made a mistake.

When you quit, the uncertainty goes away. You get to stop wondering. You get to stop feeling like you might be making a mistake. You return to the familiar territory of your established business where you know what you are doing.

Quitting provides immediate emotional relief. The decision to quit is not a strategic decision based on data. It is a decision to escape discomfort. You are choosing to feel certain. You are not actually becoming certain through evidence.

What I see most often is that the detailers who break through the $30K-$40K/month ceiling are not people who found magic strategies that worked right away. They are people who tolerated the discomfort of uncertainty long enough to collect real data. They stayed with the testing process even when early results were unclear. They accumulated enough repetitions for the Law of Averages to reveal true performance.

This Is Why Your Gut Is Wrong About New Things

Your instincts feel reliable. You have been running your business for years. You trust your gut.

Your gut is only reliable in areas where you have massive experience. You have detailed thousands of vehicles. You have had thousands of customer interactions. You have mixed chemicals and adjusted processes through enormous repetition. In those areas, your intuition is excellent because it has been built on a huge sample size.

When you try something new, your gut has nothing to calibrate on. Your instincts about Facebook ads are not based on running hundreds of campaigns. They are based on one or two attempts. Your feelings about hiring are not based on interviewing hundreds of candidates. They are based on a handful of frustrating experiences.

What I see most often is detailers who feel completely confident about things they have almost no experience with. They feel like they know that certain strategies do not work. That feeling of knowing is the trap. It feels identical to real knowledge, but it is built on a completely different foundation.

Think about how you would respond if a brand new detailer came to you after washing three cars and told you paint correction does not work. He tried it on three vehicles and the scratches did not fully come out on two of them. So he concluded paint correction is not a real service.

You would tell him he has done three cars and you have done three thousand. You would explain that he needs to learn proper technique, adjust his process, try different products, and put in the reps before he can conclude anything.

You would be right to tell him that. And when it comes to marketing, sales systems, and hiring, you are that brand new detailer. You are drawing massive conclusions from three or four attempts.

This Is Why You Already Understand This Principle

You apply rigorous testing logic to your technical work without even realizing it.

What I see most often is a detailer who would never evaluate a new ceramic coating by applying it to one car, waiting a week, seeing a water spot, and concluding the product is garbage. You understand that one data point tells you almost nothing.

You know proper product testing requires multiple applications across different conditions. Different paint types. Different temperatures. Different humidity levels. Different maintenance routines. You need multiple tests to isolate what is actually happening.

When one application fails, you cannot know what caused it. Maybe the surface preparation was wrong. Maybe the temperature was wrong. Maybe the curing time was wrong. Maybe the customer did not follow proper maintenance. Too many variables could explain a single failure. You need more data to start isolating which variables actually matter.

What I see most often is detailers who understand this principle completely when it comes to products and processes, and then ignore it completely when it comes to marketing, sales, and hiring.

When a marketing channel appears to fail, you have no idea why based on a handful of leads. Was the targeting wrong? Was the ad creative wrong? Was the offer wrong? Was your follow-up speed wrong? Was your phone script wrong? Was it just random variation?

You need dozens of leads, ideally hundreds, before you can start identifying which variables are actually affecting your outcomes. The statistical logic is the same. You have just failed to apply it outside your technical comfort zone.

This Is Why Time Matters Too

Volume is not the only requirement. Time matters as well.

That’s because customer behavior varies by season. A marketing message that gets strong response in March might get weak response in December. A service package that sells well in spring might sell poorly in winter.

This is why it works this way: Customer behavior varies by day of week. Leads generated Monday might behave differently from leads generated Saturday. Calls made Tuesday morning might connect at different rates from calls made Thursday evening.

Market conditions change. Competitor activity changes. Economic conditions affect willingness to spend. Local events influence demand.

What I see most often is detailers running two-week tests and drawing conclusions. A two-week test tells you almost nothing regardless of how many leads you generate. The sample is drawn from too narrow a window. The variation caused by timing effects cannot be separated from the variation caused by actual strategy performance.

90 days is the minimum reasonable window for testing any business strategy.
90 days gives you enough time to see how performance varies across different weeks.
90 days captures some seasonal variation.
90 days allows you to observe multiple cycles of customer behavior.

This Is Why You Must Control Your Variables

Valid testing requires that you hold things constant.

If you are testing Facebook ads and you change your follow-up process halfway through, you have ruined your data. You no longer know whether results are being affected by the ads or by the change you made. If you raise prices in week three, you cannot tell whether the change in close rate is from the price increase or from natural variation in lead quality.

This is why it works this way: Every time you change something during a test, you reset your data collection. You need one hundred data points collected under the same conditions. If you change conditions, the data points collected before and after the change are measuring different things. You cannot combine them.

What I see most often is detailers who cannot resist tinkering. When results are not immediately positive, the temptation is to start changing things, start trying to fix it, start looking for quick adjustments. Every change undermines the validity of the test.

Testing is about gathering enough data under controlled conditions to understand baseline performance. Adjustment comes after testing. If you try to adjust during testing, you never learn what your baseline actually was.

This Is Why Execution Honesty Matters

There is another way tests get corrupted.

What I see most often is detailers who do not actually execute the test as designed, but still draw conclusions as if they did.


When someone tells you to respond to leads within five minutes and you agree to test this, you are committing to actually respond within five minutes. If you respond to half your leads within five minutes and the other half within several hours, you have not tested the five-minute strategy. You tested a mixed approach of your own creation. Any conclusions about the five-minute strategy are invalid.

When someone tells you to use a specific sales script and you agree to test this, you are committing to actually use the script. If you use it sometimes but revert to your old approach when it feels uncomfortable, you have not tested the new script. You tested an inconsistent mixture.

What I see most often is execution failure. The person running the test does not actually run the test as designed. They do a partial version, a modified version, a when-I-feel-like-it version. And then they conclude the strategy does not work.

Valid testing requires tracking your inputs as rigorously as your outputs. You need to know whether you actually did what you said you would do. If you committed to 50 follow-up calls per week and you made 23, you did not test the 50-call strategy. If you committed to asking every customer for a referral and you asked a third of them, you did not test the referral strategy.

What I see most often is that honest assessment of execution is the most uncomfortable part. It is easier to conclude that a strategy does not work than to admit you did not actually do the strategy, because that admission immediately shows you where you did not do your job.

What A Real Test Looks Like

Understanding this intellectually is not the same as doing it. You can agree with everything here and go back to your business tomorrow and keep abandoning tests too early. The pattern is deep. Breaking it requires structure.

Pick one strategy. Not three. Not a combination. One thing you are going to test properly.

Define what the test requires. What specific actions do you need to take? How many times? What does successful execution look like on a daily and weekly basis? Write it down in specific terms that leave no room for confusion about whether you are doing the work.

Commit to the timeline. 90 days minimum. Mark the end date. Between now and that date, you do not get to conclude anything about whether the strategy works. You do not get to have opinions. You execute and record data.

Track your inputs and your outputs. Track how many times you actually took the required action. Track the results you generated. If you committed to following up with every lead within five minutes and you had forty leads, track exactly how many you actually reached within five minutes.

Eliminate your escape hatches. Tell someone else what you are testing. Ask them to hold you accountable. Give yourself no room to quietly abandon and pretend it never happened.

At the end of 90 days, with at least 100 data points, you sit down and analyze what actually happened. You look at your execution compliance. You look at your results at each stage. You calculate your actual conversion rates and your actual cost to acquire a customer.
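That end-of-test math is simple division, and it is worth doing explicitly rather than by feel. Here is a sketch with made-up example numbers; the spend and lead counts are hypothetical placeholders, not benchmarks, so plug in your own tracked figures:

```python
# Hypothetical 90-day test figures -- replace with your tracked numbers.
ad_spend = 3000.00         # total spend over the test window
leads = 120                # leads generated
leads_called_in_5min = 96  # execution compliance on the follow-up rule
booked = 30                # leads that booked an appointment
customers = 24             # appointments that became paying customers

compliance_rate = leads_called_in_5min / leads
lead_to_booking = booked / leads
booking_to_customer = customers / booked
cost_per_lead = ad_spend / leads
cac = ad_spend / customers  # cost to acquire one customer

print(f"Execution compliance: {compliance_rate:.0%}")
print(f"Lead -> booking:      {lead_to_booking:.0%}")
print(f"Booking -> customer:  {booking_to_customer:.0%}")
print(f"Cost per lead:        ${cost_per_lead:.2f}")
print(f"Cost per customer:    ${cac:.2f}")
```

Notice that execution compliance sits alongside the conversion numbers. If compliance is low, the conversion rates are measuring a mixed approach, not the strategy you agreed to test.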

At that point, and only at that point, you have earned the right to draw a conclusion.

The Opportunity This Creates

Here is what should give you hope. All those strategies you tried and abandoned were not proven failures. They were untested possibilities you left on the table.

You may have a marketing channel sitting right there that could transform your business. You may have a referral system that would generate steady new customers. You may have a pricing approach that would significantly raise your average ticket. The strategy that actually works might be something you already attempted and dismissed.

What I see most often is detailers who think the path forward is finding some new tactic nobody knows about. The real opportunity is going back to what you already tried and testing it properly. Accumulating enough data to actually know whether it works. Giving the Law of Averages enough raw material to reveal true performance.

You are not stuck because the strategies do not work. You are stuck because you have never stayed with any strategy long enough to find out. You have been making permanent decisions based on temporary experiments. You have been treating noise as signal.

The detailers who build businesses generating $60k, $80k, $100k a month are not people who found strategies that worked right away. They are people who stayed with strategies long enough to make them work or to discover definitively that they could not. They accumulated the data. They followed the math. They earned their conclusions through disciplined execution over extended time.

The Law of 100 is the minimum threshold of effort required to know anything useful about whether a strategy works. Everything below that threshold is guessing.

Guessing is what has kept you stuck.


Gabe Fletcher

Gabe Fletcher is the automotive protection industry's most polarizing figure. Known equally for his business innovation and his brutal honesty about industry practices, he's earned both devoted followers and vocal critics. As owner of Ceramic Pro Pottstown/Total Detailing and co-founder of Detailing Growth, he's built a reputation for elevating industry standards while refusing to sugarcoat hard truths about the sector.

A Forbes Council member and creator of the Talkin' Paint Podcast, Gabe combines technical expertise with controversial yet transformative business insights.

Though often labeled "the most hated voice in detailing," his impact on reshaping industry standards and business practices is undeniable.

Through his work in building successful protection businesses and mentoring others, Gabe continues to challenge conventional thinking - critics be damned.
