CEM and Web Analytics Overlaps

Recently I was working with a large web-based company using Tealeaf's CEM tools and happened upon an issue/opportunity that would save the client double-digit millions of dollars. Having worked at Omniture as a consultant and at HP as a web analyst, I had to think back on whether I would have discovered this same issue with the other web analytics toolsets.

As I thought about it, the resounding response was, “Yes, yes I could have found that issue with a web analytics package.” The difference is the process, and how the process fits in with the client’s/company’s processes.

Now, I don’t want to make this into another “my tool is better than your tool” post. I promise not to do that; I just want to point out the differences in the processes that can be used to find the same issue.

I’m not going to be the guy who pretends there are hard lines between CEM tools and web analytics tools. Those lines blur more every day; it’s a big Venn diagram whose circles keep pushing in toward the center, and I think most of us who use both kinds of tools realize that. The differences are the angles and the processes. At this point I highly respect the companies that use both a web analytics toolset and Tealeaf products. You can find different things with each tool, and while it is sometimes hard to justify both toolsets to the execs, each has its own value proposition (which also happens to overlap more and more as the years go on).

There are two types of web analytics issues/opportunities that can be found on a web site: your low-hanging fruit and your high-hanging fruit. When I was a consultant at Omniture, the head of consulting espoused finding the low-hanging fruit: first, because it was easy to do, and second, because there is often just as much value in the low-hanging fruit as in the high-hanging fruit. The problem I had with that approach was that I was always handed the high-hanging fruit, and I had the wrong tools to get at it. The anomalies often came my way because I was the guy who knew how the system worked. I either found the heart of the issue (through a lot of hard work) or failed because I just couldn’t get high enough up the tree, or broke a couple of branches in the process. It was a high-risk position with very little reward; I simply lacked the right tools. That is why it was so refreshing to discover Tealeaf. Tealeaf is the ladder I can place against the fruit tree to get at the high-hanging fruit that no one in the web analytics world is touching. The web analytics world can definitely see some juicy fruit high up there, but often just can’t reach it…

The same can be said about Tealeaf getting at the low-hanging fruit in the web analytics world: it can be done, but you have to try it from the top of the ladder. Thus the Venn diagram analogy…

So here is the process I went through to find the problem. I want to compare it against the web analytics processes I would expect to see at two different types of companies:

1. A large company that has strict release dates and heavy control over client-side scripting.

2. A mid-size company with virtually no restrictions on updating the implementation.

First, I discovered that a particular browser had a lower conversion rate than the other browsers. OK, this one is easy to find in both a web analytics tool and Tealeaf (the sketch after the list below shows the kind of segmentation involved). So we know there is a problem.

1. Large Company: Easy to find.

2. Mid-Size Company: Easy to find.
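
To make that first step concrete, here is a minimal sketch of segmenting conversion rate by browser. The data model is hypothetical (it is not Tealeaf’s or any vendor’s actual schema); the point is just the grouping:

```typescript
// Hypothetical session record; field names are illustrative only.
interface Session {
  browser: string;    // e.g. the browser name/version string
  converted: boolean; // did the session end in a completed order?
}

// Conversion rate per browser: conversions / sessions, grouped by browser.
function conversionByBrowser(sessions: Session[]): Map<string, number> {
  const totals = new Map<string, { sessions: number; conversions: number }>();
  for (const s of sessions) {
    const t = totals.get(s.browser) ?? { sessions: 0, conversions: 0 };
    t.sessions += 1;
    if (s.converted) t.conversions += 1;
    totals.set(s.browser, t);
  }
  const rates = new Map<string, number>();
  for (const [browser, t] of totals) {
    rates.set(browser, t.conversions / t.sessions);
  }
  return rates;
}
```

Any browser whose rate sits well below the others is the one worth chasing.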

Now I need to know if this is related to a specific checkout process. That’s easy to do in Tealeaf: just add each checkout process as its own event (it takes minutes) and let the data chug.

1. Large Company: Hopefully separating out the various checkout processes was thought through up front. I’ll assume it was, so this is easy to do.

2. Mid-Size Company: Even if it wasn’t thought out, it should be easy to have an engineer add the tracking for each process (see the sketch below). It may take an hour, it may take a day or two. Let the data chug.
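
What the mid-size company’s engineer would add is essentially this: a distinct tracking call per checkout flow. A rough sketch, assuming a generic `track()` beacon function standing in for whatever the vendor actually provides (the flow names are made up):

```typescript
// Stand-in for the analytics vendor's actual beacon call.
declare function track(eventName: string, data?: Record<string, string>): void;

// Tag each checkout flow as its own event so conversion can later be
// broken out per process ("standard", "express", "guest" are illustrative).
function tagCheckoutStep(flow: "standard" | "express" | "guest", step: string): void {
  track(`checkout:${flow}:${step}`, { flow, step });
}

// Fired from each page of each checkout flow, for example:
tagCheckoutStep("express", "payment");
```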

It is related to a single checkout process. Replaying a few browser sessions, I see a common occurrence: a message telling users to update the security settings in their browser. This is where the split often happens between CEM and web analytics.

1. Large Company: To find this issue there is a lot of digging that needs to happen. You can pull up the browser in question and walk through the process hoping you hit the same issue, but often, if QA didn’t see it, you won’t see it either.

2. Mid-Size Company: Same as the large company.

Now I want to see how prevalent the security message is for that browser in this process. Maybe there is something common across these sessions that will help pinpoint the problem. I add an event on the security message (minutes to do) and let the data chug.

1. Large Company: If the security message was discovered but there was no way to see that it happened in the web analytics tools, then you need to update the implementation. If that requires server-side coding, you could be looking at three months until the next release date. If there is less concern around client-side scripting AND you can identify that the message was displayed by looking in the DOM, you could get at it a little quicker (a sketch of that DOM check follows this list).

2. Mid-Size Company: If the security message was discovered and there was no way to see it in the analytics tools, just implement further tracking. It may take an hour, it may take a day or two. Let the data chug.
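
The “look in the DOM” route might look something like the sketch below. This is one possible approach, not how the client actually did it; the CSS selector is invented, and the real message element would have to be located in the page first:

```typescript
// Stand-in for the vendor's beacon call.
declare function track(eventName: string): void;

// Watch for the (hypothetical) security-warning element being injected
// into the page, then fire an event the analytics tool can report on.
const observer = new MutationObserver(() => {
  if (document.querySelector(".security-upgrade-warning")) { // selector is illustrative
    track("checkout:security-message-shown");
    observer.disconnect(); // report at most once per page view
  }
});
observer.observe(document.body, { childList: true, subtree: true });

// If the message is rendered with the page rather than injected later,
// a simple check on page load would do instead of the observer.
```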

I was able to determine that the message appeared for N% of users on that browser, and the conversion rate for those who saw the message was rather low. Replaying those specific sessions, I saw a series of clicks and page views leading up to the message. So I created a sequence event to track how often that series of events occurs (the sketch after this list shows the idea). Now let the data chug.

1. Large Company: Sequence events are nearly nonexistent in an out-of-the-box web analytics tool. You may be able to get at this with some advanced segmentation or data warehousing.

2. Mid-Size Company: Same.
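
Under the hood, a sequence event is just ordered pattern matching over a session’s event stream. A minimal sketch of the idea, with an invented data model and invented event names:

```typescript
// A session's ordered event stream, as captured during recording.
interface SessionEvents {
  id: string;
  events: string[]; // event names in the order they occurred
}

// True if `pattern` occurs as an ordered subsequence of `events`
// (other events may be interleaved; only relative order matters).
function containsSequence(events: string[], pattern: string[]): boolean {
  let matched = 0;
  for (const e of events) {
    if (e === pattern[matched]) matched += 1;
    if (matched === pattern.length) return true;
  }
  return false;
}

// Pattern and event names are invented for illustration.
const suspectPattern = ["view:payment", "click:edit-billing", "view:security-message"];

function flagSessions(sessions: SessionEvents[]): SessionEvents[] {
  return sessions.filter(s => containsSequence(s.events, suspectPattern));
}
```

Counting the flagged sessions against the sessions that showed the message gives you the “how often does this sequence precede the problem” number.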

Using the sequence event, I was able to determine that 99% of the time it was this sequence that triggered the security message. “Bag it and tag it!” Time to pass the data AND the replayable sessions on to QA, Product Management, and Engineering. The problem is then added to the list of bugs to fix.

1. Large Company: Finally able to determine the cause of the low conversion. Now, convincing Product Management and Engineering is a whole other ball of wax.

2. Mid-Size Company: Finally. Now get in a room with everyone and talk it through. They’ll see the issue easily enough, and it gets added to the list of bugs to fix.

The difference here: with CEM tools I was able to pull out the problem and pinpoint it in less than a day. By providing real evidence to the engineering group, the issue was taken seriously and the fix was added to the list.

With web analytics tools we may eventually get there, but it will take days to months to completely flesh out the problem, and convincing engineering will take even more time if you are in a large company.

Once again, this is not a “MY TOOL IS BETTER THAN YOUR TOOL” post. There are different processes that get you to the same solution. I just feel like I’m climbing a ladder with Tealeaf rather than struggling up the branches to get at that high-hanging fruit in the web analytics world.
