
Memberships

The Reverse Engineer

Private • 1.4k • Paid

Lords of Marketing

Public • 83 • Free

5 contributions to Lords of Marketing
I have no idea what converts anymore.
I have no idea what converts anymore. The more split tests we run, the less I feel able to predict what's going to win. I genuinely don't know anymore. And I think that's a good thing. The Dunning-Kruger effect is finally rearing its head, I guess.

Here are some recent tests that LOST but "should have" won based on conventional wisdom/common sense:

Image vs. video on a book-a-call page. Funny enough, this one was an accident at first. When uploading the video, we accidentally put up a thumbnail instead of the video. Turns out, that won. To validate the result, we ran the test twice more and... you guessed it, same thing. Image beat video. Everyone says "video converts better!" Yeah... not always.

Another one: on an upsell page for booking a call, we're noticing the same thing. Having no video is converting better than having an objectively good video that does an amazing job of framing the call. If you'd asked me to bet money on either of these beforehand, I would have absolutely said video would win. Without a doubt.

Headlines? My "favorite" of the 4 we test loses plenty of times. Sometimes the most basic, least "copywritten" headline wins by a huge margin. We're testing a headline on an opt-in right now. I think the control is objectively better, but it's losing by 25% to one of the variants.

Takeaway: best practices aren't always best for you. When someone tells you something is GUARANTEED to convert better, be very wary unless it's an extremely pedantic claim (e.g., a working buy button converts better than a broken buy button).

I can't tell you which test will win... but I can tell you that if you test consistently, your business will win.
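Side note: rerunning the whole test twice is one way to validate a surprising winner; a quicker sanity check is a two-proportion z-test on the raw counts. A minimal sketch, with made-up numbers (the 1,000-visitor counts below are hypothetical, not from our actual test):

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for whether two conversion rates differ.

    Returns (z, p_value) using the pooled-proportion normal
    approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Standard normal CDF via erf; two-tailed area beyond |z|
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))
    return z, 2 * (1 - phi)

# Hypothetical: image variant converted 60/1000, video 40/1000
z, p = two_proportion_z_test(60, 1000, 40, 1000)
```

If the p-value comes back well under 0.05, the surprise winner probably isn't random chance, and you can spend your rerun traffic on the next test instead.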
4
3
New comment 6d ago
1 like • 7d
^ based on a bunch of tests and messaging woes with @Blake Wyatt lol
How long should you let a split test run?
Just wrote this up based on a question I got yesterday and thought it would be useful for you guys! This is always a fun question because there isn't a clear answer and there's a lot of nuance.

First and foremost, we need to make sure the changes we make don't HARM conversion rate. That will happen about 50% of the time. The trick is we don't know which times that's gonna be... so we have to test.

Obviously, the more data we have the better. But we don't want to run tests for months and months. Ask any statistician if you have enough data and they'll always say more is better. But we can't let tests run forever, so we need to compromise and be OK with some level of uncertainty. At the same time, running a test for a single day also doesn't feel right (for reasons we'll go over). So the optimal strategy must be somewhere in the middle. Let's go over some of the competing interests:

✅ Volume of visitors in the test - We don't want to run a test to 20 visitors and declare the variant a winner because it has one more conversion than the control. More data is almost certainly better for being confident a variant is indeed better than the control.

✅ Difference in conversion rate - A control at 1% CVR and a variant at 4% CVR requires less data to be certain we have an improvement. By the same token, if you have 1% vs. 1.1%, you're going to need a lot of data to be confident the difference isn't due to random chance.

✅ Product pricing/AOV - Higher-ticket products can have a lot more variability day to day. A more expensive product generally means a longer buying cycle. If your average buying cycle from click to buy is 7 days, you don't want to make a decision after 4 days. You haven't even let one business cycle run through yet.

✅ Getting a representative sample of traffic (days of the week) - Similar to the above: when we're making long-term predictions about conversion rate differences, we need a sample that's close to our long-term traffic. Would you poll a random set of Americans to make predictions about the Japanese economy? So when running a split test, we want to run it during a relatively normal time period AND account for different traffic throughout the week.
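To put rough numbers on the "difference in conversion rate" point, here's a standard sample-size sketch using the normal approximation. The 95% confidence / 80% power settings are my assumption (the post doesn't prescribe any), but they show how dramatically required traffic grows as the lift shrinks:

```python
from math import ceil, sqrt

def visitors_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a change
    from baseline CVR p1 to variant CVR p2 (two-sided test,
    95% confidence and 80% power by default)."""
    p_bar = (p1 + p2) / 2
    a = z_alpha * sqrt(2 * p_bar * (1 - p_bar))
    b = z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    return ceil(((a + b) / (p1 - p2)) ** 2)

big_lift = visitors_per_variant(0.01, 0.04)    # 1% -> 4%: a few hundred per arm
tiny_lift = visitors_per_variant(0.01, 0.011)  # 1% -> 1.1%: six figures per arm
```

Same formula, but the 1% vs. 1.1% case needs hundreds of times more traffic than the 1% vs. 4% case, which is exactly why tiny lifts take "forever" to confirm.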
3
3
New comment Sep 20
2 likes • Sep 20
@Kyle Rutledge exactly. full funnel tracking is the way!
Split Testing Images on Sales pages
Hey guys! We just got a 34% lift by split testing the image on a sales page for a health brand and wanted to report back on it.

The importance of the above-the-fold image on your landing pages can't be overstated. These are also some of the easiest tests to run... even if you put zero thought into it. To be honest, randomly testing images on your LPs is probably a good use of your time. As in, putting 30 seconds of thought into it and testing will probably get you results.

But if you want to put 10 minutes of thought into it, you can use the following framework for a test: "aspirational" vs. "identifiable".

Aspirational images appeal to the end result/the person the customer will become by using the product. They showcase what and who your customer WANTS to be. If you sell skincare, this would be showing a young and attractive woman or man with perfect skin. Identifiable images appeal to who the customer currently is.

Prevailing wisdom would say the aspirational one would win out. I mean, isn't the whole point of product marketing to show what the person can become if they buy the product? The truth is that it depends on the confidence of the avatar. Some markets and avatars are so mistrusting and jaded from trying dozens of solutions that they don't even believe they can get to the end goal. If you show them an aspirational image, it's just going to turn them off. If you're dealing with an insecure market, an identifiable image would likely be more appropriate.

So which image won in the test I referenced above? Aspirational. My theory is that the brand has a pretty clear unique mechanism with a ton of trust built into the product. Even jaded and sophisticated prospects believe the results.

Sidenote: you can use both aspirational and identifiable images in the same above-the-fold. Before-and-after images oftentimes show both - the before is identifiable, the after is aspirational. Showing the transformation builds trust.
The beauty of split testing is it puts all the armchair philosophizing to bed… even though I love armchair philosophizing about CRO. Ultimately, the market decides. What we think doesn’t matter.
7
4
New comment Sep 2
1 like • Aug 20
@Tobias Allen oh that's perfect. that's exactly what i was trying to say 🤣
New Marketing Lords Wall 👑
Y'all are sneaking in here and you thought I wouldn't notice. As mentioned, if you don't comment on the "pitch yourself" post, I will. I did give you warning ;)

Ladies & Lords, here are some of your latest members:

@Logan Forsyth - In a nutshell, these guys make you famous on social media using what I call the "Andrew Tate" strategy. (They guarantee up to 1 billion views in 180 days.) I am personally itching to work with them one day.

@Blake Wyatt - Blake's one of the best FB media buyers in the world (in my totally biased opinion). He's VP of marketing for a company I won't mention. He's also a legitimate king at low-ticket ascension funnels. One he runs had done over 5,200 qualified sales calls in about 9 months. (That was over a year ago, so it's probably more impressive now. I remember stuff, Blake.)

@Ethan Bence - CRO whizz. Over $100m in profitable revenue scaled for his clients. Again, in my totally biased opinion, he's one of the best CRO guys out there. He's also a super chill dude.

@Tyler Foo - Tyler was previously head of advertising at MindValley. MindValley's a behemoth in the info/education space. You may have seen some of their manifestation ads on YouTube. Since leaving, he's now a killer gun for hire.

@Will Green - Man behind the curtain, so I won't reveal too much. He mentored me on how to market & write copy for over 8 months. Then gave me to Hormozi.

To finish, here are a couple of ways to get more value out of this group:
- Comment on the "shamelessly pitch yourself here" post. Remember, every new person who joins reads the pitch post (let's be real). Meaning if you want to be top of mind and intro yourself on autopilot to every person who gets added, there's your shot.
- Value in public, ask in private. You're in a room with some of the brightest minds in marketing (with more joining). You've got a free platform to showcase what you know and open new doors. (I know of a couple of folks who are already collaborating on projects.)
8
7
New comment Jun 20
1 like • Jun 20
Daaaangggg @Tobias Allen don't make me blush! Super happy to be here with you guys
0 likes • Jun 20
@Dakota Hermes a legend right above me ^
Generating $70k+ Incremental Revenue from 1 Split Test
What's up guys! I was sharing a split-test result from a webinar opt-in page we ran with Tobias and wanted to post it here.

Long and short of it:
- Opt-in % = same
- Book-call % = 36% increase
- High-ticket revenue = 300% increase
Incremental revenue increase: $70k+ (as of today)

Now, the cool part about this test is that it was literally just the opt-in page. We changed the headline and the bullet points - nothing else. We've seen this a BUNCH of times. *Your opt-in/landing page directly affects the QUALITY of leads - even if it doesn't affect the quantity.* This test showcases why even having the same (or lower) opt-in rate can in fact be better.

I can't share the exact copy change for privacy reasons, but I'll describe it: the control headline was talking about an outcome - "How to [get desired result]". It was a pretty general headline - not bad, just very straight to the point. The variant headline introduced a mechanism - "This new [mechanism] is a way to [solve sophisticated problem] to [get desired result] sustainably".

What *likely* happened here is that the headline introducing a mechanism attracted a more sophisticated lead - someone who has done research, is knowledgeable, and has tried a few solutions before. We changed the bullet points in the variant to speak to misconceptions - misconceptions that a more sophisticated audience would believe. Example:
- Why [this thing every competitor tells you to do] is actually wrong and hurting your progress
- How [this thing you don't want to do but think is helping] is unnecessary

And that was really it. We got the idea from reading customer feedback and applications. Is it good copywriting? Yeah, for sure. Is it going to impress other marketers? Probably not. Is the market impressed by it? Definitely!

The key with this is to split-test the result with software and then track it on the backend via Hyros or whatever attribution platform you use. Hope this is helpful!
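The funnel math behind "same opt-in %, way more revenue" is just multiplication down the funnel. A sketch with entirely hypothetical numbers (not from this test): hold opt-in rate constant, bump book-call rate by 36%, and let higher lead quality lift the close rate too:

```python
def funnel_revenue(visitors, optin_rate, book_rate, close_rate, avg_deal):
    """Revenue from a simple opt-in -> book-call -> close funnel."""
    return visitors * optin_rate * book_rate * close_rate * avg_deal

# Hypothetical control vs. variant: identical opt-in rate, but the
# variant's leads book 36% more calls AND close at a higher rate.
control = funnel_revenue(10_000, 0.30, 0.05, 0.20, 5_000)
variant = funnel_revenue(10_000, 0.30, 0.05 * 1.36, 0.30, 5_000)
```

Two modest quality improvements downstream compound into a revenue multiple, even though the top-of-funnel number never moved. That's why judging an opt-in page test on opt-in rate alone can point you at the wrong winner.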
3
7
New comment Jun 7
1 like • Jun 7
@Kyle Rutledge I'm always down! Yea, that's a great headline change. It really is that simple sometimes
Ethan Bence
3
42 points to level up
@ethan-bence-6736
Making music when not marketing

Active 6d ago
Joined May 24, 2024