I was on vacation recently in Galveston, TX. The first couple days were overcast but pleasant and I spent a lot of time in the pool and the Gulf. Now, I am plenty old enough to know better and, I guess, young enough to still feel immortal. Shame on me. No sunscreen. Of course, after day two, I was lobster red and couldn’t sleep. It gave me plenty of time to consider this article as I tried to recover from the hidden dangers of the sun.
Metrics and benchmarks, like anything, have positives and negatives. Solid benchmarks can provide a perfect plumb line by which your own marketing efforts can be judged. The right metrics can help you evaluate the effectiveness of your campaigns and the ROI they are achieving. But how do you know the benchmark is legit? And what if your metrics are skewed? You could be evaluating your apples on a benchmark for oranges using metrics designed for bananas. Who’s hungry?
A recent client requested benchmarks against which they could measure their native advertising campaign. If I had given them a national benchmark, it would have made their campaign look STELLAR. Good for me, right? No. We don’t operate that way. A national benchmark takes too many campaign types into account. Nationally, native advertising is over eighty percent social media; our local campaigns are about fifty percent. National numbers also include a lot of untargeted, crappy, run-out-of-a-basement creative that drives down the average effectiveness. Our campaigns use not only beautiful creative but many versions of it, and we always target our deployment unless we are specifically running a broad awareness campaign. So instead, I looked at all the campaigns we had run over the last two years, segmented them by advertiser, and pulled out all data from house marketing as well as campaigns I knew would skew the numbers for various reasons. What I ended up with was a native advertising benchmark much closer to my client’s current campaign. I was able to explain that, though this was a local benchmark and therefore more valuable, it still wasn’t a perfect comparison, since many different industries were represented. The client appreciated the work and the candor, and their performance was still above the benchmark. I was able to sleep at night knowing that I hadn’t inflated our campaign to the point of the ridiculous.
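For the analytically inclined, the filtering described above is simple to sketch. This is a hypothetical illustration only: the advertiser names and numbers are made up, and your own campaign data will live in whatever reporting tool you use. The idea is just to exclude house marketing and known outliers before averaging click-through rate.

```python
# Hypothetical sketch of building a local benchmark CTR.
# All campaign data below is illustrative, not real client figures.

campaigns = [
    {"advertiser": "house_marketing", "clicks": 500,  "impressions": 100_000},
    {"advertiser": "retail_client",   "clicks": 300,  "impressions": 50_000},
    {"advertiser": "auto_client",     "clicks": 120,  "impressions": 40_000},
    {"advertiser": "known_outlier",   "clicks": 5000, "impressions": 60_000},
]

# House marketing plus any campaign known to skew the data gets pulled out.
EXCLUDE = {"house_marketing", "known_outlier"}

def local_benchmark_ctr(campaigns, exclude):
    kept = [c for c in campaigns if c["advertiser"] not in exclude]
    total_clicks = sum(c["clicks"] for c in kept)
    total_impressions = sum(c["impressions"] for c in kept)
    return total_clicks / total_impressions

print(f"Local benchmark CTR: {local_benchmark_ctr(campaigns, EXCLUDE):.2%}")
```

With the made-up numbers above, the two remaining campaigns pool to 420 clicks on 90,000 impressions, a benchmark CTR of roughly 0.47% — the point being that the pooled average changes a lot depending on what you exclude.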
In another instance, we had a client running their very first native advertising campaign. As I went over the metrics after month one, they were happy with the click-through rate because it was above the national average for a display campaign. I explained that, while it was a good number, the average click-through for a native ad is much higher than for traditional display. I also pointed to some other metrics worth noting. First, since we were also promoting the campaign through social, page views mattered more than click-through, because click-through didn’t take the social impressions into account. We also had engagement metrics that were outstanding. Social sharing simply doesn’t happen with a display campaign, and showing them the number of shares and reactions their content was getting blew them away. Then I brought out the big gun: time on content. After all, it’s all well and good that so many people interacted and ended up on the content page, but what happened after that? The client’s content was very good (full disclosure: we created it) and resulted in the average reader spending over THREE MINUTES consuming their brand message. Hours and hours of eyeballs on brand. That goes way beyond click-through rate.
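The “hours and hours” claim is just arithmetic, and it is worth seeing how quickly it compounds. The figures below are hypothetical, not the client’s actual numbers: a few thousand readers at a three-plus-minute average adds up to a level of brand attention no banner ad can match.

```python
# Illustrative arithmetic only: the numbers are hypothetical stand-ins,
# not real campaign results.
page_views = 2_000            # hypothetical readers reaching the content page
avg_minutes_on_content = 3.2  # hypothetical average time on content

total_hours = page_views * avg_minutes_on_content / 60
print(f"Total attention: {total_hours:.0f} hours of eyeballs on brand")
```

Two thousand readers at 3.2 minutes each works out to roughly 107 hours of attention in a single month — the kind of number a click-through rate alone never surfaces.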
So, how can this data be used if you aren’t sure of the validity of the benchmarks and metrics provided? Carefully, and with subtle shifts. Of course, if your marketing is in the tank, switch it up and take some risks. If, on the other hand, you are looking for a slightly better CTR or more shares of your content, make small adjustments. If you make too many changes at once, you won’t know which ones were effective.