A reminder to dive deeper into your email testing results
Split testing of email campaigns is a great way to learn and improve your results. Testing works by changing particular campaign elements, sending both the original (control) and the new version (treatment), and then measuring the difference in results.
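As a rough illustration of that last step, here is a minimal sketch in Python (the click and send counts are hypothetical) of a two-proportion z-test, one common way to check whether the measured difference between control and treatment is likely to be real rather than random noise:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical results: clicks and sends for the control and treatment cells
control_clicks, control_sends = 420, 10_000
treatment_clicks, treatment_sends = 480, 10_000

p_control = control_clicks / control_sends
p_treatment = treatment_clicks / treatment_sends

# Pooled click rate and standard error for a two-proportion z-test
p_pool = (control_clicks + treatment_clicks) / (control_sends + treatment_sends)
se = sqrt(p_pool * (1 - p_pool) * (1 / control_sends + 1 / treatment_sends))

z = (p_treatment - p_control) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

print(f"Control CTR {p_control:.2%}, treatment CTR {p_treatment:.2%}, p-value {p_value:.3f}")
```

A low p-value only tells you the difference is unlikely to be chance; it says nothing about external factors that hit one cell and not the other, which is the problem this post is really about.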
However, test results can be ruined when additional factors affect one test cell's results but not the others. When this happens, you can pick the wrong winner and end up decreasing campaign performance and revenue.
I was recently running a test and was hit by an external factor that, without correction, would have led to the wrong conclusions.
When diving into the results of one test cell, I noticed that one email address had clicked five times on every single link in the email. On investigation, it turned out these were not clicks from a human but clicks by a…
New research suggests 38.5% more email subscribers view emails than open rates indicate
An interesting new study on email open rates has just been released by Agnitas, a European open source/paid email service provider.
It's well known in email marketing circles that open rates are measured through the number of subscribers who download tracking-pixel images. So if images are blocked, or the subscriber reads the email on their mobile, open rates are under-reported. Some ESPs take this into account within their tracking. The methodology is summarised by this graphic.
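As a rough sketch of how that pixel-based tracking works (illustrative only; Flask and the endpoint name are my own stand-ins, not anything a particular ESP uses), an "open" is only counted when the hidden image is actually requested:

```python
import base64
from flask import Flask, Response, request

app = Flask(__name__)

# A 1x1 transparent GIF, served as the "tracking pixel"
PIXEL = base64.b64decode("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

@app.route("/open/<subscriber_id>.gif")
def track_open(subscriber_id):
    # The open is only ever counted if the image is downloaded, which is why
    # blocked images lead to under-reported open rates.
    app.logger.info("Open recorded for subscriber %s (%s)",
                    subscriber_id, request.headers.get("User-Agent", "unknown"))
    return Response(PIXEL, mimetype="image/gif")

if __name__ == "__main__":
    app.run()
```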
In the study, twelve individual newsletters with a total of 17.5 million sent emails were assessed over the period from November 2010 to February 2011. The methodology involved reviewing the number of clicks received across all emails, regardless of whether they were recorded as opened, and then extrapolating open rates from this.
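Put roughly, and assuming (as the study does implicitly) that subscribers whose opens go unrecorded click at about the same rate as those whose opens are recorded, the extrapolation works like the sketch below. The counts are hypothetical, chosen so that 28% of clicks come from emails never recorded as opened:

```python
# Hypothetical counts, for illustration only
recorded_opens = 100_000      # opens measured via the tracking pixel
clicks_total = 5_000          # clicks across all sent emails
clicks_without_open = 1_400   # clicks from emails with no recorded open

# Share of clicks coming from "invisible" readers
f = clicks_without_open / clicks_total              # 0.28 here

# If recorded opens capture only (1 - f) of the real readers:
estimated_readers = recorded_opens / (1 - f)
uplift = estimated_readers / recorded_opens - 1

print(f"Estimated real readers: {estimated_readers:,.0f}")
print(f"Uplift over the reported open rate: {uplift:.1%}")   # about 39% here
```

With a 28% share of such clicks, the uplift works out at roughly 39%, in the same ballpark as the study's 38.5% headline figure.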
This graphic shows that 28% of clicks were received from emails…
Success seems such an easy thing to measure in email marketing.
Even the lowliest campaign software should report "opens" and clicks. And many marketers have access to more important post-click metrics, such as sales, downloads, page impressions and donations.
Most arguments around email metrics concern which numbers are best suited to campaign analysis. But there are two less-discussed issues which are equally important.
First, when an email performs particularly well (or badly), we attribute that success (or failure) to far fewer factors than may actually be at play, which means we risk drawing the wrong conclusions and making inappropriate changes in future campaigns.
Second, we forget that the typical measures of success we use aren't actually that good at measuring the true impacts of our emails.
In this article, I'll explain each problem in more depth and suggest solutions that will improve the usefulness of your email marketing analysis.
Attributing success
Assuming the audience is relatively unchanged, peaks…