Ignore your split tests for the first few days.
Want to improve your conversion rate? Ignore your split tests for the first few days. The first few days of a split test are VERY dangerous.

Check out the day-by-day of the test below. You'll notice that over the first ~4 days the variant was outpacing the control by a large margin… then it flipped. If you had taken 4 days' worth of data and implemented the test, you would have implemented a worse-converting variant.

So… why does this happen in the first place?

Random variance is a major culprit. Flip a coin 3x and you have a 1 in 8 chance of getting heads three times. Flip that same coin 10 times and you have a 1/1024 chance of getting heads 10 times in a row. The first few days are just a small amount of data. The longer you run a test, the more data you get, and the less chance that what you're seeing is due to random variation.

For those coming into split testing with a background in paid traffic, this is pretty much the opposite of what happens on ad platforms. Usually, if an ad works, it works immediately. Meta is TRYING to find buyers. Split testing is completely agnostic to who buys.

The next reason you can't trust the first few days of split-testing data is novelty bias. If a large percentage of the traffic to your site is returning visitors, the control version of your site is super familiar to them. When you introduce a new element, they'll pay more attention simply because it's new. This is exactly why, when you're analyzing your split test results, you have to analyze your returning visitors separately from new visitors.

I'm all for getting excited over new split test results, but remember, my first rule of CRO is "do no harm". And 2 or 3 days of data simply is not enough.
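The coin-flip logic above can be sketched in a quick simulation. This is a hypothetical illustration (the function name, the 3% conversion rate, and the 20% "lift" threshold are all my own assumptions, not data from the actual test): both arms have an IDENTICAL true conversion rate, and we count how often the variant still appears to beat the control by 20%+ purely by chance at different sample sizes.

```python
import random

random.seed(7)

def false_lead_rate(visitors_per_arm, cvr=0.03, trials=400):
    """How often a variant with the SAME true conversion rate as the
    control still shows a 20%+ relative 'lift' purely by chance."""
    leads = 0
    for _ in range(trials):
        # Simulate conversions for two identical arms.
        control = sum(random.random() < cvr for _ in range(visitors_per_arm))
        variant = sum(random.random() < cvr for _ in range(visitors_per_arm))
        if variant > control and variant >= control * 1.2:
            leads += 1
    return leads / trials

# The phantom lift shows up often on small samples and fades as data grows.
for n in (100, 1000, 5000):
    print(f"{n} visitors/arm: fake 20%+ lift in {false_lead_rate(n):.0%} of tests")
```

Running this, the phantom-lift rate drops sharply as visitors accumulate, which is the whole argument for not calling a test in the first few days.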
How long should you let a split test run?
Just wrote this up based on a question I got yesterday and I thought it would be useful for you guys! This is always a fun question because there isn't a clear answer and there's a lot of nuance.

First and foremost, we need to make sure the changes we make don't HARM conversion rate. That will happen about 50% of the time. The trick is we don't know which times that's gonna be… so we have to test.

Obviously, the more data we have the better. But we don't want to run tests for months and months. Ask any statistician if you have enough data and they're always going to say more is better. But we can't let tests run forever, so we need to compromise and be OK with some level of uncertainty. At the same time, running a test for one single day also doesn't feel right (for reasons we'll go over). So the optimal strategy must be somewhere in the middle.

Let's go over some of the competing interests:

✅ Volume of visitors in the test - We don't want to run a test to 20 visitors and decide the variant is a winner because it has one more conversion than the control. More data is almost certainly better for certainty that a variant is indeed better than the control.

✅ Difference in conversion rate - A control with a 1% CVR and a variant with a 4% CVR requires less data to be certain we have an improvement. By the same token, if you have a 1% vs. 1.1% conversion rate, you're going to need a lot of data to be confident that difference isn't due to random chance.

✅ Product pricing/AOV - Higher-ticket products can have a lot more variability day to day. If your product is more expensive, that generally means a longer buying cycle. If your average buying cycle from click to buy is 7 days, you don't want to make a decision after 4 days. You haven't even let one business cycle run through yet.
✅ Getting a representative sample of traffic (days of week) - Similar to above, when we are making long-term predictions about conversion rate differences, we need a sample that is close to our long-term traffic. Would you poll a random set of Americans to make predictions about the Japanese economy? So when running a split test, we want to make sure we're running it during a relatively normal time period AND accounting for how traffic differs throughout the week.
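The first two bullets above (visitor volume and size of the CVR difference) are exactly what a standard sample-size calculation trades off. A minimal sketch, assuming a two-sided two-proportion z-test at 95% confidence and 80% power (the z-scores are hard-coded for those values, and the function name is mine, not from any library):

```python
import math

def visitors_per_arm(p1, p2):
    """Rough visitors needed per arm to detect a move from CVR p1 to p2
    (two-sided z-test, alpha=0.05, power=0.8; z-scores hard-coded)."""
    z_alpha, z_beta = 1.96, 0.84
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Big lift (1% -> 4%): a few hundred visitors per arm is enough.
print(visitors_per_arm(0.01, 0.04))
# Tiny lift (1% -> 1.1%): orders of magnitude more traffic required.
print(visitors_per_arm(0.01, 0.011))
```

This mirrors the point in the bullets: the dramatic 1% → 4% difference resolves quickly, while a 1% vs. 1.1% difference needs a very large sample before you can rule out random chance.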
Split Testing Images on Sales pages
Hey guys! We just got a 34% lift by split testing the image on a sales page for a health brand and wanted to report back on it.

The importance of the above-the-fold image on your landing pages can't be overstated. It's also one of the easiest tests to run… even if you put zero thought into it. To be honest, randomly testing images on your LPs is probably a good use of your time. As in, putting 30 seconds of thought into it and testing will probably get you results.

But if you want to put 10 minutes of thought into it, you can use the following framework for a test: "aspirational" vs. "identifiable".

Aspirational images appeal to the end result/the person the customer will become by using the product. They showcase what and who your customer WANTS to be. If you sell skincare, this would be showing a young and attractive woman or man with perfect skin.

Identifiable images appeal to who the customer currently is.

Prevailing wisdom would say the aspirational one would win out. I mean, isn't the whole point of product marketing to show what the person can become if they buy the product? The truth is, it depends on the confidence of the avatar. Some markets and avatars are so mistrusting and jaded from trying dozens of solutions that they don't even believe they can get to the end goal. If you show them an aspirational image, it's just going to turn them off. If you're dealing with an insecure market, an identifiable image would likely be more appropriate.

So which image won in the test I referenced above? Aspirational. My theory is that the brand has a pretty clear unique mechanism with a ton of trust built into the product. Even jaded and sophisticated prospects believe the results.

Sidenote: You can use both aspirational and identifiable images in the same above-the-fold. Before-and-after images oftentimes show both - the before is identifiable, the after is aspirational. Showing the transformation builds trust.
The beauty of split testing is it puts all the armchair philosophizing to bed… even though I love armchair philosophizing about CRO. Ultimately, the market decides. What we think doesn’t matter.
Some FREE recruitment nuggets
Hey everyone! After being in the recruiting business for over 10 years, I have just launched a GUIDE ON TALENT ACQUISITION in the marketing & eCommerce space. This guide is a great resource to help you save the $15,000+ you'd spend on a wrong hire. For some time, it was only available to our clients, but now I'd like to share it with you too! If this is something you might need now or in the near future, drop me a message here and I'll send it over! P.S. Anyone want to network on a Zoom call? 😉
Facebook Group Engagement Farming 101
Posted a fun little square post in a big Facebook group for entrepreneurs. The comments are unhinged lol. You get seen a lot more when you're polarizing, but the price is that people start fuming at you. If you don't care about the opinions of people who aren't your dream client, you can post things like this and play your lyre like Nero as the FB group burns 🔥 What's the ROI, you ask? 🤷🏻‍♂️ I find it a little fun though.
Lords of Marketing
skool.com/lordsofmarketing
Welcome to the house of marketing lords.