
Memberships

The Reverse Engineer — Private • 1.4k • Paid
Lords of Marketing — Public • 90 • Free
Golden Era Society — Private • 3 • Free
Content Creator Cashflow — Private • 2.3k • Free

6 contributions to Lords of Marketing
Ignore your split tests for the first few days.
Want to improve your conversion rate? Ignore your split tests for the first few days. The first few days of a split test are VERY dangerous.

Check out the day-by-day of the test below. You'll notice that over the first ~4 days the variant was outpacing the control by a large margin… then it flipped. If you had taken 4 days' worth of data and implemented the test, you would have shipped a worse-converting page.

So… why does this happen in the first place?

Random variance is a major culprit. Flip a coin 3 times, and you have a 1 in 8 chance of getting heads three times. Flip that same coin 10 times and you have a 1 in 1,024 chance of getting heads 10 times in a row. The first few days are just a small amount of data. The longer you run a test, the more data you get, and the less chance that what you're seeing is due to random variation.

For those coming into split testing from a background in paid traffic, this is pretty much the opposite of what happens on ad platforms. Usually, if an ad works, it works immediately. Meta is TRYING to find buyers. Split testing is completely agnostic to who buys.

The next reason you can't trust the first few days of split testing data is novelty bias. If a large percentage of your traffic is returning visitors, the control version of your site is already familiar to them. When you introduce a new element, they'll pay more attention simply because it's new. This is exactly why you have to analyze your returning visitors separately from new visitors when reading split test results.

I'm all for getting excited over new split test results, but remember, my first rule of CRO is "do no harm". And 2 or 3 days of data simply is not enough.
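The coin-flip math above, and how often a few days of data produces a fake "winner", can be sketched in a few lines of Python. The 3% conversion rate and visitor counts below are made-up numbers for illustration, not from any real test:

```python
import random

# Exact coin-flip odds from the post
print(0.5 ** 3)   # 1 in 8 chance of 3 heads in a row
print(0.5 ** 10)  # 1 in 1,024 chance of 10 heads in a row

random.seed(42)

def simulate_aa_test(visitors_per_arm, cvr=0.03):
    """Simulate one A/A test: both arms share the SAME true conversion
    rate, so any measured difference is pure random variance."""
    a = sum(random.random() < cvr for _ in range(visitors_per_arm))
    b = sum(random.random() < cvr for _ in range(visitors_per_arm))
    return a, b

# With only a few days of traffic (say 200 visitors per arm),
# identical pages routinely show large apparent "wins" for one side.
big_swings = 0
trials = 1000
for _ in range(trials):
    a, b = simulate_aa_test(200)
    if a and b and max(a, b) / min(a, b) >= 1.3:
        big_swings += 1
print(f"{big_swings}/{trials} A/A tests showed a 30%+ 'lift' by pure chance")
```

Running the simulation with more visitors per arm shrinks the share of phantom wins, which is the whole argument for letting a test run longer.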
I have no idea what converts anymore.
I have no idea what converts anymore. The more split tests we run… the less I feel I'm able to predict what's going to win. I genuinely don't know anymore. And I think that's a good thing. The Dunning-Kruger effect is finally rearing its head, I guess.

Here are some recent tests that LOST but "should have" won based on conventional wisdom/common sense:

Image vs. video on a book-a-call page. Funny enough, this one was an accident at first. When uploading the video, we accidentally put up a thumbnail instead of the video. Turns out, that won. To validate the results, we ran the test twice more and… you guessed it, same thing. Image beat video. Everyone usually says "video converts better!" Yeah… not always.

Another one: on an upsell page for booking a call, we're noticing the same thing. Having no video is converting better than having an objectively good video that does an amazing job of framing the call. If you had asked me to bet money on either of these beforehand, I would have absolutely said video was going to win out. Without a doubt.

Headlines? My "favorite" out of the 4 we test loses plenty of times. Sometimes the most basic, least "copywritten" headline wins by a huge margin. We're testing a headline on an opt-in right now. I think the control is objectively better, but it's losing by 25% to one of the variants.

Takeaway: best practices aren't always best for you. When someone tells you something is GUARANTEED to convert better, be very wary unless it's an extremely pedantic thing (e.g. a working buy button works better than a broken buy button). I can't tell you which test will win… but I can tell you that if you test consistently, your business will win.
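For what it's worth, a "losing by 25%" call is only worth acting on once the gap clears random noise. A minimal two-proportion z-test (stdlib only; the traffic and conversion numbers below are hypothetical, not from the actual headline test) is one way to sanity-check it:

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z statistic for the difference between two conversion
    rates. |z| > 1.96 roughly corresponds to 95% confidence that the
    difference is not random chance."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical opt-in traffic: control at 4.0% vs. variant at 5.0%,
# i.e. a 25% relative lift on 3,000 visitors per arm.
z = z_test_two_proportions(120, 3000, 150, 3000)
print(round(z, 2))  # just under 1.96: looks good, but not conclusive yet
```

Even a 25% relative lift can sit below the significance threshold on a few thousand visitors, which is exactly why "I think the control is better" and "the data says otherwise" can coexist for a while.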
1 like • 27d
^ based on a bunch of tests and messaging woes with @Blake Wyatt lol
How long should you let a split test run?
Just wrote this up based on a question I got yesterday and thought it would be useful for you guys! This is always a fun question because there isn't a clear answer and there's a lot of nuance.

First and foremost, we need to make sure the changes we make don't HARM conversion rate. That will happen about 50% of the time. The trick is we don't know which times that's gonna be… so we have to test.

Obviously, the more data we have the better. But we don't want to run tests for months and months. Ask any statistician if you have enough data and they're always going to say more is better. But we can't let tests run forever, so we need to compromise and be OK with some level of uncertainty. At the same time, running a test for one single day also doesn't feel right (for reasons we'll go over). So the optimal strategy must be somewhere in the middle.

Let's go over some of the competing interests:

✅ Volume of visitors in the test - We don't want to run a test to 20 visitors and decide the variant is a winner because it has one more conversion than the control. More data is almost certainly better for certainty that a variant is indeed better than the control.

✅ Difference in conversion rate - A control with a 1% CVR and a variant with a 4% CVR requires less data to be certain we have an improvement. By the same token, if you have 1% vs. 1.1% conversion rates, you're going to need a lot of data to be confident the difference isn't due to random chance.

✅ Product pricing/AOV - Higher-ticket products can have a lot more variability day to day. A more expensive product generally means a longer buying cycle. If your average buying cycle from click to buy is 7 days, you don't want to make a decision after 4 days. You haven't even let one buying cycle run through yet.

✅ Getting a representative sample of traffic (days of week) - Similar to the above, when we're making long-term predictions about conversion rate differences, we need a sample that's close to our long-term traffic. Would you poll a random set of Americans to make predictions about the Japanese economy? So when running a split test, make sure you're running it during a relatively normal time period AND account for how traffic differs throughout the week.
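The interplay between "difference in conversion rate" and "volume of visitors" above can be made concrete with a standard sample-size formula. This is a sketch at the conventional 95% confidence / 80% power, and the CVR numbers simply echo the 1% vs. 4% and 1% vs. 1.1% examples from the post:

```python
import math

def visitors_per_arm(base_cvr, relative_lift, alpha_z=1.96, power_z=0.84):
    """Rough visitors needed per arm to detect a relative lift in
    conversion rate (two-proportion test, 95% confidence / 80% power)."""
    p1 = base_cvr
    p2 = base_cvr * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (alpha_z * math.sqrt(2 * p_bar * (1 - p_bar))
                 + power_z * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# 1% -> 4% CVR (a huge jump): only a few hundred visitors per arm
print(visitors_per_arm(0.01, 3.0))
# 1% -> 1.1% CVR (a tiny jump): well over a hundred thousand per arm
print(visitors_per_arm(0.01, 0.10))
```

The asymmetry is the point: a big difference shows itself fast, a small one needs orders of magnitude more traffic, which is why "how long should I run this?" has no single answer.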
2 likes • Sep 20
@Kyle Rutledge exactly. full funnel tracking is the way!
Split Testing Images on Sales Pages
Hey guys! We just got a 34% lift by split testing the image on a sales page for a health brand and wanted to report back on it.

The importance of the above-the-fold image on your landing pages can't be overstated. These are also some of the easiest tests to run… even if you put zero thought into it. To be honest, randomly testing images on your LPs is probably a good use of your time. As in, putting 30 seconds of thought into it and testing will probably get you results.

But if you want to put 10 minutes of thought into it, you can use the following framework for a test: "aspirational" vs. "identifiable".

Aspirational images appeal to the end result/the person the customer will become by using the product. They showcase what and who your customer WANTS to be. If you sell skincare, this would be showing a young and attractive woman or man with perfect skin. Identifiable images appeal to who the customer currently is.

Prevailing wisdom would say the aspirational one should win out. I mean, isn't the whole point of product marketing to show what the person can become if they buy? The truth is, it depends on the confidence of the avatar. Some markets and avatars are so mistrusting and jaded from trying dozens of solutions that they don't even believe they can get to the end goal. If you show them an aspirational image, it's just going to turn them off. If you're dealing with an insecure market, an identifiable image is likely more appropriate.

So which image won in the test I referenced above? Aspirational. My theory is that the brand has a pretty clear unique mechanism with a ton of trust built into the product. Even jaded and sophisticated prospects believe the results.

Sidenote: You can use both aspirational and identifiable images in the same above-the-fold. Before-and-after images often show both - the before is identifiable, the after is aspirational. Showing the transformation builds trust.

The beauty of split testing is that it puts all the armchair philosophizing to bed… even though I love armchair philosophizing about CRO. Ultimately, the market decides. What we think doesn't matter.
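When "the market decides", it's worth reporting the decision with an uncertainty range, not just a headline lift. Here's a sketch of how a result like the 34% lift above could be summarized; the visitor counts are invented for illustration and are not the brand's real data:

```python
import math

def lift_with_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """Relative lift of variant over control, plus a rough 95%
    confidence interval on the absolute CVR difference."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    lift = (p_b - p_a) / p_a
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return lift, (diff - z * se, diff + z * se)

# Hypothetical traffic producing a 34% relative lift (4.0% -> 5.36%)
lift, (low, high) = lift_with_ci(100, 2500, 134, 2500)
print(f"lift: {lift:.0%}, CVR difference 95% CI: [{low:.4f}, {high:.4f}]")
```

If the whole interval sits above zero, the variant's win isn't just armchair philosophizing; if it straddles zero, keep the test running.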
1 like • Aug 20
@Tobias Allen oh that's perfect. that's exactly what i was trying to say 🤣
New Marketing Lords Wall 👑
Y'all are sneaking in here and you thought I wouldn't notice. As mentioned, if you don't comment on the "pitch yourself post", I will. I did give you warning ;)

Ladies & Lords, here are some of your latest members:

@Logan Forsyth - In a nutshell, these guys make you famous on social media using what I call the "Andrew Tate" strategy. (They guarantee up to 1 billion views in 180 days.) I am personally itching to work with them one day.

@Blake Wyatt - Blake's one of the best FB media buyers in the world (in my totally biased opinion). He's VP of marketing for a company I won't mention. He's also a legitimate king at low-ticket ascension funnels. One he runs had done over 5,200 qualified sales calls in about 9 months. (That was over a year ago, so it's probably more impressive now. I remember stuff, Blake.)

@Ethan Bence - CRO whizz. Over $100m in profitable revenue scaled for his clients. Again, in my totally biased opinion, he's one of the best CRO guys out there. He's also a super chill dude.

@Tyler Foo - Tyler was previously head of advertising at MindValley. MindValley's a behemoth in the info/education space. You may have seen some of their manifestation ads on YouTube. Since leaving, he's now a killer gun for hire.

@Will Green - Man behind the curtain, so I won't reveal too much. He mentored me on how to market & write copy for over 8 months. Then gave me to Hormozi.

To finish, here are a couple of ways to get more value out of this group:

- Comment on the "shamelessly pitch yourself here post". Remember, every new person who joins reads the "pitch post" (let's be real). Meaning if you want to be top of mind and intro yourself on autopilot to every person who gets added, there's your shot.

- Value in public, ask in private. You're in a room with some of the brightest minds in marketing (with more joining). You've got a free platform to showcase what you know and open new doors. (I know of a couple of folks who are already collaborating on projects.)
1 like • Jun 20
Daaaangggg @Tobias Allen don't make me blush! Super happy to be here with you guys
0 likes • Jun 20
@Dakota Hermes a legend right above me ^
Ethan Bence
Level 3 - 41 points to level up
@ethan-bence-6736
Making music when not marketing

Active 15m ago
Joined May 24, 2024