Updated on March 2, 2023 by Nicole Mezei
“We must improve our conversions.”
“This conversion rate is an industry low.”
“What can we do to convert more customers into premium buyers?”
The above phrases have become ubiquitous in the ecommerce world. Getting higher conversions has become an obsession.
The problem with this obsession is that there will always be people looking for shortcuts and the fastest route to conversions. Too few people realize that consistently improving your conversion rate is a process that requires patience and sustained effort.
In this article, we’ll cover the Conversion Rate Optimization process and everything that comes with it.
Let’s jump in!
What is considered a good conversion rate?
A good conversion rate depends on a lot of factors. It can be an individual goal you set for yourself, a benchmark based on your industry's average conversion rates, or something else entirely.
Keep in mind that there are additional factors that weigh on conversion rates such as the device used for the purchase, geographical location, traffic source, etc.
ShippyPro has rounded up results from a variety of ecommerce niches, including agricultural supplies, food and drink, baby and child products, and much more. Average conversion rates across these niches range from 0.87% to 1.50% and beyond.
What is conversion rate optimization?
Conversion rate indicates the percentage of visitors that come to your site who become actual customers.
Conversion Rate Optimization is all about increasing the number of visitors who become customers using a variety of methods.
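To make the definition concrete, here's the calculation in a few lines of Python (the function name is our own):

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Percentage of visitors who completed the desired action."""
    if visitors == 0:
        return 0.0
    return conversions / visitors * 100

# 13 orders from 1,000 visitors is a 1.3% conversion rate
print(conversion_rate(13, 1_000))
```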
If you’re an ecommerce business owner or run any type of online business, learning about Conversion Rate Optimization (CRO) is essential.
Why? Because we live in a time of overwhelming choice. There are thousands, if not millions, of other businesses customers can turn to if you’re not meeting their needs.
Speaking of needs, they are as fickle as the customers themselves. Everything from how fast your online store loads to the colors you use can boost or lower your conversions.
Having a toolkit of time-tested, powerful conversion rate optimization techniques that you can employ at any stage of the buying journey gives you a significant advantage.
How to start Conversion Rate Optimization?
There are 5 steps to the Conversion Rate Optimization cycle. Here’s what goes down at each step:
1. Research Phase: This is where you discover the parts of your conversion funnel that need tweaking.
2. Hypothesis Phase: This is where you form a working hypothesis based on your metrics and research.
3. Prioritization Phase: This is where you figure out what to attack first for your optimization.
4. Testing Phase: This is where you put your hypothesis up against the existing version of your website.
5. Learning Phase: This is where you deploy the winning hypothesis and gather information for future tests and planning.
Stage 1: Research and data gathering
The very first part of CRO requires you to properly gather data to inform your testing.
Although this stage can be a long process, doing it properly will save you a lot of time and headaches down the road.
Start off your data gathering by consulting Key Performance Indicators such as Customer Acquisition Cost, Customer Lifetime Value, Monthly Recurring Revenue, and Sales Cycle duration.
The cold, hard numbers, also known as quantitative data, give a clear picture of metrics like click-through rates, time on site, and much more.
This type of data can be collected through heatmaps, surveys, KPI analytics, and A/B tests.
The cold, hard numbers are great but don’t necessarily paint the clearest picture. That’s where qualitative data comes in. This type of data leaves more room for interpretation and can be analyzed subjectively.
Instead of just accepting what the numbers say, qualitative data asks “why”.
- Why are the traffic numbers so low?
- What problems could users be having on the site?
- Why are the cart abandonment rates so high?
These are questions that qualitative data can answer.
Some popular ways to collect this data include: customer interviews, focus groups, and other observational methods.
When you’ve effectively collected this data, you’ll need to understand what it actually means. Let’s take customer surveys, for example. After you’ve collected survey responses, you’ll want to look at the patterns, objections, and language used within them.
Take note of repeated words, emphasized words, “points of resistance” or places that made browsing or shopping from you difficult.
Lastly, closely analyze the tone of the language used in these surveys. Your customers’ own words can serve as powerful raw material for social proof, sales copy, and other personalized messages.
Almost anything can be tracked with Google Analytics these days.
Every business has its own Key Performance Indicators and different priorities, but some conversion metrics are more or less the same regardless of industry or niche. Check out this article to help you identify which conversion metrics are worth tracking.
Although Google Analytics is an incredible tool, it’s not perfect.
Remember those headaches I mentioned earlier? Well, these can happen when you spend time collecting all the wrong data (which absolutely sucks but happens more often than you’d think).
Avoid making poor choices and being misled by your data and go here to learn how to run a Google Analytics Audit.
Stage 2: Hypotheses
A hypothesis is a tentative assumption made in order to draw out and test its logical or empirical consequences.
In CRO talk, this means a working theory (based on your research and data) of “If I change X, it will have Y effect”.
Once you’ve built your list of hypotheses, you need to prioritize that list from most urgent to not-so-pressing. We’ll go into this in-depth in the following chapter. Before that, let’s look at the components of a hypothesis.
Source: Optimizely Blog
Component 1: The variable (IF)
- This is a website element that can be modified, added, or taken away to result in a desired outcome.
Component 2: The result (THEN)
- The predicted outcome (e.g.: more clicks on a call-to-action, more signups on a landing page, etc).
Component 3: Rationale (DUE TO)
- This is where you show that the result occurred because it was informed by research (quantitative and qualitative).
It looks simple enough to do, and it can be when done right, but many ecommerce business owners fail to give enough detail in their hypotheses to really move the needle on their CRO process.
Here’s an example of a weaker and far-too-common hypothesis:
“If I make our landing page copy more personal, then we’ll get more click-throughs, due to customers saying on surveys that they felt the previous copy was too generic and felt cold.”
Now a stronger and more detailed hypothesis looks like this:
“Based on our heatmaps, survey results, and quantitative data, product page #3 is too long and visitors do not convert because they don’t scroll to the bottom to see the call-to-action. If we make the copy shorter, highlight key features, and put the call-to-action above the fold, we will see conversions go from the current 0.8% to the site average of 2.6%.”
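If it helps to keep hypotheses consistent across your team, the IF/THEN/DUE TO structure can be captured in a simple template. This is just a sketch; the field names and example values are ours, drawn from the hypothesis above:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    variable: str   # IF: the element being changed
    result: str     # THEN: the predicted outcome
    rationale: str  # DUE TO: the research backing it up

    def statement(self) -> str:
        return (f"If we {self.variable}, then {self.result}, "
                f"due to {self.rationale}.")

h = Hypothesis(
    variable="move the call-to-action above the fold on product page #3",
    result="conversions will rise from 0.8% toward the 2.6% site average",
    rationale="heatmaps showing most visitors never scroll to the bottom",
)
print(h.statement())
```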
Stage 3: Prioritizing ideas
Priorities, priorities, priorities. They help us make good decisions and guide our life choices. What we prioritize makes us who we are. In the CRO process, this also applies.
Not only do prioritizing ideas and hypotheses help you solve the most pressing issues in your business, but it also creates a precedent that you can follow for future optimization practices.
Done properly, you’ll have an effective testing system ready to go for everything.
Isn’t prioritizing wonderful? Let’s look at some well-known prioritizing frameworks!
The PIE Framework
Not the delicious dessert, but short for Potential for improvement, Importance, and Ease.
Source: Practical Ecommerce
The “Potential for Improvement” aspect looks at how likely it is that the hypothesis will result in an overall improvement. “Importance” refers to the gravity of the observed problem, and “Ease” looks at well, how much effort is required to implement the hypothesis (hours, days, weeks, etc).
Each factor is scored on a scale from 1 to 10, and hypotheses are then ranked by their scores.
The problems with this model?
The Potential part is often difficult to quantify/estimate. Secondly, the prioritization scale of 1 to 10 can lead some businesses to prioritize minor issues in hopes of achieving significant results.
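Mechanically, PIE scoring is just averaging three 1-to-10 ratings and sorting by the result. Here's a minimal sketch in Python; the hypotheses and ratings are invented for illustration:

```python
def pie_score(potential: int, importance: int, ease: int) -> float:
    """Average of the three 1-10 PIE ratings."""
    return (potential + importance + ease) / 3

# (hypothesis, potential, importance, ease) -- example ratings
ideas = [
    ("Shorten checkout form", 8, 9, 6),
    ("Redesign footer", 3, 2, 7),
    ("Add trust badges to cart", 7, 8, 9),
]

# Rank ideas from highest to lowest PIE score
ranked = sorted(ideas, key=lambda i: pie_score(*i[1:]), reverse=True)
for name, p, imp, e in ranked:
    print(f"{pie_score(p, imp, e):.1f}  {name}")
```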
The Hotwire Binary Scoring Method
Travel website Hotwire has an additive prioritization method that takes the emotion out of A/B testing.
If an idea meets a requirement, it’s given 1 point. If it doesn’t meet a requirement, it gets zero.
These points are then added up in a spreadsheet to give each idea an overall score out of 10. These ideas are then ranked according to the overall score. To learn more about this framework and to build a similar one, check out this blog post.
This method works best for large companies that have hundreds or thousands of optimization ideas in their backlog, but also for smaller companies that want to quickly implement ideas.
The PXL Framework
Created by Peep Laja of the popular conversion blog ConversionXL, this framework focuses on asking a set of questions about user behavior to better prioritize ideas.
The goal of this method is to make any “potential” or “impact” rating more objective, foster a data-informed culture, and make “ease of implementation” rating more objective.
The questions in the framework look like this:
- Is the change above the fold? → Changes above the fold are noticed by more people, thus increasing the likelihood of the test having an impact.
- Is the change noticeable within 5 seconds? → Show a group of people the control and then the variation(s): can they tell the difference after seeing each for 5 seconds? If not, the change is likely to have less impact.
- Does it add or remove anything? → Bigger changes like removing distractions or adding key information tend to have more impact.
- Does the test run on high traffic pages? → Relative improvement on a high traffic page results in more absolute dollars.
A solid aspect of this framework is that it’s fundamentally rooted in data.
It asks whether every single observed issue was discovered through user testing, heatmap tracking, or another analytics tool. This turns prioritizing from “I think we should focus on X” into “The numbers say that Y and Z are the two most likely causes of our low conversion rate”.
The TIR Model
TIR stands for Time, Impact, and Resources. The ranking system works like PIE’s, except the scale runs from 1 to 5.
Time – How many calendar days, man-hours, development hours, etc. will be necessary for this test to achieve maximum impact?
“A score of 5 would be given to a project that takes the least amount of time to execute and to realize the impact.”
Impact – The amount of revenue (or reduced costs) that will change in the event of a successful test. Are you testing on the whole customer base or just a segment? Are you looking at a 3% increase or 15%?
A score of 5 would be given to a project expected to produce the greatest impact.
Resources – How much are the tools, people, and everything else associated with this test going to cost?
“A score of 5 is given when resources needed are few and are available for the project.”
Created by Conversion Rate Optimization veteran Bryan Eisenberg, the TIR model encourages you to dig into the human aspect of conversion rate optimization.
This model makes you think about three important questions before testing:
1. Who are we trying to convince?
2. What particular action do we want them to take?
3. What action do they actually want to take?
What you’ll often find is the action that you want visitors to take isn’t necessarily the same action they want to take. This is where real customer feedback comes in handy.
When you really dig deep and mine those actionable points from your customer surveys and other data collection tools, you can make more powerful improvements to a particular page, email funnel, etc.
Step 4: Implementation and testing
Now that you’ve prioritized your hypotheses and you know which tests are the most pressing, it’s time to put those hypotheses into action.
Potentially game-changing tests require you to have the best tools at your disposal. To help you do that, here’s a list of the top tools to use for your experimentation:
1) Optimizely
Optimizely is one of the world’s leading experimentation platforms, allowing marketing and product teams to test, review, and deploy all sorts of digital experiences. More specifically, Optimizely gives you access to a suite of A/B testing tools that allow you to effectively target your messaging and launch more personalized campaigns.
2) Unbounce
Whether it’s your home page or sign-up page, you need a solid tool that can help you “squeeze the most juice” out of that page. Unbounce is a leader in landing page optimization and allows you to easily customize and test different versions of your most important pages. You can then study which versions work best (and why) to improve your conversion rate.
If you’re someone who doesn’t want to fuss around with coding or graphic design, Unbounce becomes even more appealing as you can get beautiful, responsive pages within minutes.
3) Usability Hub
Usability Hub is a remote user research platform that takes the guesswork out of design decisions by validating them with actual users. Also known as the Swiss Army knife of user research, it lets you perform a variety of tests, such as first click tests, preference tests, and five-second tests, to confirm your hypotheses.
With this tool, there’s no more “I think that’s the color that users like” and more of “the research and the tests we perform prove that users are most responsive to this color palette”.
Are you picking up the overarching theme of this post yet? It’s about coming as close as possible to certainty in your decisions, because your data sources support your ideas!
4) AB Tasty
AB Tasty is an all-in-one conversion rate optimization platform that allows you to run a multitude of tests on just about anything. With AI-powered implementation and personalization, you can quickly get user insights, experiment, personalize, and increase engagement and conversions.
With some of the biggest names in various industries using AB Tasty as their testing platform, you can be sure you’re using one of the best products on the market when it comes to Conversion Rate Optimization.
5) Google Optimize
Google Optimize is an optimization platform built on Google Analytics, which means you get to use your existing GA data to quickly see what can be improved. You also benefit from an advanced statistical modeling tool as well as a suite of sophisticated targeting tools.
6) VWO
VWO is another all-in-one platform for conversion rate optimization that helps you conduct visitor research, build an optimization roadmap, and run continuous experiments.
We spoke about the importance of creating a solid conversion experimentation system so that you don’t always have to start from scratch for future experiments. Well, you’ll be glad to know that VWO is rooted in process-driven optimization and can help you find ways to constantly improve your user experience.
Moreover, you can run A/B tests at scale without reducing performance because VWO is built to handle enterprise-level tests.
7) Rebrandly
Rebrandly is a well-known platform for creating and managing custom short URLs. It’s also a powerful analytics tool that can help you run conversion rate optimization tests on your website.
Custom short URLs (or branded links) can be used to collect a ton of source data based on clicks — data you can use to test small but important elements on any page you create, from buttons to navigation menu options, and much more.
A/B Testing vs. Split Testing vs. Multivariate testing
Source: Wingate Media
A/B testing is when you compare two or more versions of the same page by looking at the conversion rates and metrics that matter to your business (such as clicks, views, signups, etc).
For example: if you change the title on a landing page, you can target all landing pages at once, and they will be treated as variations of the same group. The group is simply the name you give to a particular test (example: Landing page title testgroup1). Hopefully you have a much cooler group name, but you get the picture.
A/B tests are great if you want to test radical ideas for conversion optimization as well as if you want to make small changes.
Lastly, A/B tests are a great way to get fast results and make the most of your testing time.
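Under the hood, most A/B testing tools assign each visitor to a variation deterministically, so a returning visitor always sees the same version. A common approach is to hash the visitor ID together with the experiment name. This is a sketch of the general technique, not any particular vendor's implementation:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str,
                   variants: tuple = ("control", "variation")) -> str:
    """Deterministically bucket a visitor into a test variant."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same visitor always lands in the same bucket for a given experiment
print(assign_variant("visitor-42", "landing-page-title-test"))
```

Because the hash mixes in the experiment name, the same visitor can fall into different buckets across different experiments, which keeps tests independent of each other.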
If you have a large amount of traffic to your site and want to test key sections on a page, this is where you run multivariate tests. A/B testing looks at making changes to a whole page whereas multivariate testing looks at key sections on a page and how they interact with each other.
With this being said, multivariate testing is more complicated than A/B testing because there are more layers involved. When you test different key sections, you can get a huge number of possible combinations that may prove too overwhelming to deal with if you’re not an experienced marketer.
Check out this post to get an idea of what a multivariate test looks like.
Split testing is where you test one variation of a page and see how its results differ from the original version. If this sounds just like A/B testing, that’s because it is: the two terms are used interchangeably.
The real distinction is between A/B (split) testing and multivariate testing: the former tests one variation at a time, whereas the latter tests multiple combinations at once.
Top Elements to A/B Test
Deciding the top elements to test can be difficult. Testing every random aspect of your website is obviously counter-productive (unless you have the time and resources to do so). Refer to the prioritization methods in the chapter above to help with this.
To give you a head start on obvious places to start testing on a page, here’s a helpful list from marketing guru, Neil Patel. If there’s any “marketing guru” whose conversion rate optimization techniques you should be following, it’s him.
In this article, you’ll find the logic behind testing for typography, colors, positioning, pricing schemes, video, images, and much more!
Remember, even though Neil is a well respected, data-driven marketer, some of the elements he mentions might not be aligned with your particular business goals. That’s perfectly fine.
Remember that a lot of conversion rate optimization is subjective and every business has different goals!
How long should A/B tests last?
This is a tricky question because a lot of factors play into it: sample size, statistical confidence, seasonality, how representative your sample is, and timing. There’s no single answer as to how short or long an A/B test should last because… drum roll… it depends on your industry, among a host of other factors.
However, that doesn’t mean that running a test for one or two days is enough. Generally, a few weeks to a month can be regarded as safe territory for a test, provided the data collection was done right, the conditions weren’t out of the ordinary, and the test was carried out by experienced marketers.
Determining The Winner and Mistakes Experts Make
Determining the validity of a test can be done in 3 steps:
Step 1: Calculate the minimum sample size
Define what level of confidence you’d like in your test results (ex: 90-95% is largely considered a solid target to aim for) and calculate a sample size based on that number. This will give you the minimum number of visitors that your variations need.
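For a two-proportion test, the minimum sample size per variation can be computed with the standard power formula using nothing but the Python standard library. The baseline rate, target rate, and power used below are illustrative assumptions:

```python
from math import ceil
from statistics import NormalDist

def min_sample_size(baseline: float, target: float,
                    confidence: float = 0.95, power: float = 0.8) -> int:
    """Minimum visitors per variation to detect baseline -> target."""
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided
    z_beta = NormalDist().inv_cdf(power)
    variance = baseline * (1 - baseline) + target * (1 - target)
    n = (z_alpha + z_beta) ** 2 * variance / (target - baseline) ** 2
    return ceil(n)

# e.g. detecting a lift from 2.0% to 2.5% at 95% confidence, 80% power
print(min_sample_size(0.020, 0.025))
```

Note how quickly the required sample grows as the effect you want to detect shrinks: halving the expected lift roughly quadruples the traffic you need.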
Step 2: Check for discrepancies in segments
Before completing the test, you should know how to segment your visitors. Once you have the minimum sample in each segment, check for major discrepancies between them; if there aren’t any, keep the test running.
Step 3: Assess your business cycle
As mentioned above, business cycles and seasonality can play a large role in the validity of any optimization tests. Run the test in different cycles and compare how they fare against one another (ex: Are visitors and sales the same in Q4 with Christmas/New Years as the rest of the year?)
Even with all the right numbers in the world, there are mistakes to be made. Here’s a summary of the top testing mistakes even the pros make:
- Doing A/B testing without enough traffic or conversions
- Not basing tests on a hypothesis
- Not sending data to Google Analytics
- Giving up after first tests fail
- Failing to understand false positives
- Not running tests regularly
Those are just a few out of many, many mistakes that are common with A/B testing.
Conversion Rate Optimization is not an easy thing to tick off your checklist.
It’s a perpetual process that will kick your ass many times but will also take your business to the next level if you learn to embrace it. This goes for newbies and professionals alike.
Stage 5: Learn and review
Analyzing your results
If you’re looking to increase the number of people that sign up for a free trial for a product, you might want to set up goals for people that make it to the signup page and people that actually make it across the line and sign up.
In whatever testing platform you use, you should see the running test and some indication of whether the new variation has improved conversions or not. Carefully look at the two numbers (original variation vs. new variation) and consider the percentage of growth as well as the probability (also shown as a percentage) that the new variation will beat the original. If that probability falls short of the ideal 90-95% goal, keep optimizing and keep running tests to hit it.
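The "chance to beat original" figure most platforms report comes from a standard statistical comparison of two proportions. Here's a sketch of a one-sided two-proportion z-test you can use to sanity-check your tool's numbers; the traffic figures are made up:

```python
from math import sqrt
from statistics import NormalDist

def chance_to_beat(control_conv: int, control_n: int,
                   variant_conv: int, variant_n: int) -> float:
    """One-sided probability that the variant outperforms the control."""
    p1 = control_conv / control_n
    p2 = variant_conv / variant_n
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
    z = (p2 - p1) / se
    return NormalDist().cdf(z)

# 200 conversions from 10,000 visitors (2.0%)
# vs 260 conversions from 10,000 visitors (2.6%)
print(f"{chance_to_beat(200, 10_000, 260, 10_000):.1%}")
```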
If you end up with inconclusive results, here are a few things you can do:
1) Segment the data
Individual segments often reveal clearer patterns than lumped-together totals. Look at segments like traffic sources, devices, and anything else that makes sense for your business. Sometimes you need to dig even deeper into the numbers to find clarity, especially with A/B tests.
2) Don’t test things that don’t matter
Another reason for inconclusive results is often tests that were run on things that didn’t actually matter to the business. Make sure all of your tests are backed up by hypotheses and are clearly prioritized before getting itchy fingers to test every single thing on your page.
3) Challenge your hypothesis
If you follow a process and still get inconclusive results, it could be time to revise your hypothesis or even scrap it altogether. You could test new variations on the same hypothesis or go back to the drawing board to better understand the data you collected and form a stronger hypothesis.
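Coming back to point 1 above, segmenting your results can be as simple as grouping raw visit records before computing the rate. In this invented example, the aggregate conversion rate would hide a large gap between devices:

```python
from collections import defaultdict

# (segment, converted) pairs -- invented sample data
visits = ([("desktop", True)] * 60 + [("desktop", False)] * 940
          + [("mobile", True)] * 5 + [("mobile", False)] * 995)

totals = defaultdict(lambda: [0, 0])  # segment -> [conversions, visitors]
for segment, converted in visits:
    totals[segment][0] += converted
    totals[segment][1] += 1

# Per-segment rates: the 3.25% aggregate masks 6.0% desktop vs 0.5% mobile
for segment, (conv, n) in sorted(totals.items()):
    print(f"{segment}: {conv / n:.1%}")
```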
In this article, you learned about the 5 stages of the Conversion Optimization Process.
You learned about the best ways to gather data, you discovered how to form data-driven hypotheses, you unearthed the best formats to prioritize your hypotheses, then you learned how to take those ideas and put them to the test using the best tools in the world. Lastly, you learned how to draw conclusions to your research and challenge your hypotheses.
Remember: Conversion Rate Optimization is an evolving process.
There’s a huge learning curve to just about every stage of the CRO process and it can be overwhelming at times. Commit to learning the process, commit to regularly optimizing and always aiming for better results, commit to growth.
Don’t forget that OptiMonk is also a conversion optimization tool that helps you deliver customized messages to your visitors and turn them into customers. Create a free account now and see what you can achieve with it.
Are you ready to dive into the waters of CRO with this article as your guide? Drop a comment below👇👇
Nicole is the CEO and co-founder of OptiMonk. She has been involved in digital marketing, including lead generation, ecommerce, CRO, and analytics, for over 7 years. Her strength lies in helping and motivating others to strive toward success. She truly believes that the recipe for success involves not only science but also creativity.