
Memberships

Uplevel
Private • 2.1k • Paid

Mojo Mastery Challenge
Private • 1k • Free

Lazy Ripped
Private • 1.5k • Free

Paid Ad Secrets
Private • 715 • Free

Flow Collective
Private • 370 • Free

Lords of Marketing
Public • 83 • Free

Mojo Dojo
Private • 1.1k • Paid

SCALE by CreatorLaunch
Private • 1.6k • Free

UnitedMind
Private • 253 • Free

2 contributions to Lords of Marketing
I have no idea what converts anymore.
The more split tests we run, the less I feel I'm able to predict what's going to win. I genuinely don't know anymore. And I think that's a good thing. The Dunning-Kruger effect is finally rearing its head, I guess.

Here are some recent tests that LOST but "should have" won based on conventional wisdom/common sense:

Image vs. video on a book-a-call page. Funny enough, this one was an accident at first. When uploading the video, we accidentally put up a thumbnail instead of the video. Turns out, that won. To validate the results, we ran the test twice more and... you guessed it, same thing. Image beat video. Everyone usually says "video converts better!" Yeah... not always.

Another one: on an upsell page for booking a call, we're noticing the same thing. Having no video is converting better than having an objectively good video that does an amazing job of framing the call. If you'd asked me to bet money on either of these beforehand, I would have absolutely said video was going to win. Without a doubt.

Headlines? My "favorite" out of the 4 we test loses plenty of times. Sometimes the most basic, least "copywritten" headline wins by a huge margin. We're testing a headline on an opt-in right now; I think the control is objectively better, but it's losing by 25% to one of the variants.

Takeaway: best practices aren't always best for YOU. When someone tells you something is GUARANTEED to convert better, be very wary unless it's an extremely pedantic thing (e.g., a working buy button works better than a broken buy button). I can't tell you which test will win, but I can tell you that if you test consistently, your business will win.
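The "run it twice more to validate" step maps onto a standard significance check. Here's a minimal sketch of a two-proportion z-test (a common technique, not necessarily what the author used; all the numbers are made up) for deciding whether "image beat video" is real or just noise:

```python
# Hypothetical numbers: did "image" really beat "video", or is it chance?
# Minimal two-proportion z-test, self-contained (no scipy needed).
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)             # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Made-up example: image page books 80/1000 calls vs. video page 55/1000
z, p = two_proportion_z_test(80, 1000, 55, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p ~ 0.026 here: unlikely to be pure chance
```

Re-running the whole test, as described in the post, is a blunter but more robust version of the same idea: a fluke is unlikely to repeat three times.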
1 like • 6d
@Ethan Bence I can attest to this 😅. An interesting note... I'm starting to focus more on what the previous steps and experience were. For example, in the upsell call step that's winning without the video, it's because by that point they're already facing decision fatigue. They don't need MORE information, they need less. Just tell them what to do and grease the page to make it as slippery as possible (in terms of lowest cognitive load to complete the action we want). It's all contextual - there is no such thing as one "best practice" that fits all. It must be congruent.
How long should you let a split test run?
Just wrote this up based on a question I got yesterday, and I thought it would be useful for you guys! This is always a fun question because there isn't a clear answer and there's a lot of nuance.

First and foremost, we need to make sure the changes we make don't HARM conversion rate. That will happen about 50% of the time. The trick is we don't know which times that's gonna be... so we have to test.

Obviously, the more data we have the better. But we don't want to run tests for months and months. Ask any statistician if you have enough data and they're always going to say more is better. But we can't let tests run forever, so we need to compromise and be OK with some level of uncertainty. At the same time, running a test for one single day also doesn't feel right (for reasons we'll go over). So the optimal strategy must be somewhere in the middle.

Let's go over some of the competing interests:

✅ Volume of visitors in the test - We don't want to run a test to 20 visitors and decide the variant is a winner because it has one more conversion than the control. More data is almost certainly better for certainty that a variant is indeed beating the control.

✅ Difference in conversion rate - A control with a 1% CVR and a variant with a 4% CVR requires less data to be certain we have an improvement. By the same token, if you have 1% vs. 1.1%, you're going to need a lot of data to be confident that difference isn't due to random chance.

✅ Product pricing/AOV - Higher-ticket products can have a lot more variability day to day. If you have a product that's more expensive, generally that means there's a longer buying cycle. If your average buying cycle from click to buy is 7 days, you don't want to make a decision after 4 days. You haven't even let one business cycle run through yet.

✅ Getting a representative sample of traffic (days of week) - Similar to the above, when we're making long-term predictions about conversion rate differences, we need a sample that's close to our long-term traffic. Would you want to poll a random set of Americans to make predictions about the Japanese economy? So when running a split test, we want to make sure we're running it during a relatively normal time period AND accounting for different traffic throughout the week.
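To put numbers on the first two bullets, here's a minimal sketch using the standard two-proportion sample-size approximation (not anything from the post itself; the CVRs are the illustrative 1%/4% and 1%/1.1% pairs above). It shows how the required volume explodes as the lift shrinks:

```python
# Rough visitors needed PER VARIANT to detect a CVR lift
# at 95% confidence (two-sided) and 80% power.
# Standard two-proportion approximation; rates below are illustrative only.
from math import sqrt, ceil

def sample_size_per_variant(p_control, p_variant, z_alpha=1.96, z_beta=0.84):
    """Approximate n per arm (alpha = 0.05 two-sided, power = 0.80)."""
    p_bar = (p_control + p_variant) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_control * (1 - p_control)
                                 + p_variant * (1 - p_variant))) ** 2
    return ceil(numerator / (p_control - p_variant) ** 2)

print(sample_size_per_variant(0.01, 0.04))   # big lift: ~424 visitors per arm
print(sample_size_per_variant(0.01, 0.011))  # tiny lift: ~163,000 per arm
```

The same 0.1-point gap that's invisible at a few thousand visitors needs six figures of traffic to distinguish from noise, which is exactly why "how long should it run" has no single answer.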
1 like • Sep 17
@Ethan Bence worth mentioning that the first few days of a split-test seem to not even matter either... extremely volatile lol. Especially so on Google traffic it seems.
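That early volatility is easy to reproduce in simulation. Here's a quick sketch (made-up numbers: both arms share the same true 3% CVR and get ~200 visitors a day) showing how wildly the running estimates swing before settling:

```python
# Simulate an A/B test where there is NO real difference between arms.
# Any "winner" in the first few days is pure noise.
import random

random.seed(7)
TRUE_CVR = 0.03            # illustrative true rate for BOTH arms
DAILY_VISITORS = 200       # arbitrary volume per arm per day
conv = {"A": 0, "B": 0}

for day in range(1, 15):
    for arm in conv:
        conv[arm] += sum(random.random() < TRUE_CVR
                         for _ in range(DAILY_VISITORS))
    n = day * DAILY_VISITORS
    print(f"day {day:2d}: A = {conv['A']/n:.2%}  B = {conv['B']/n:.2%}")
# Early days can show gaps of a point or more that vanish as data accumulates.
```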
Blake Wyatt
@blake-wyatt
The Clark Kent of marketing... I swoop in to save your business from the bad marketers of Krypton.

Active 17h ago
Joined May 27, 2024