Posted on January 9th, 2012
I recently worked with Adam Greco on a post about integrating Tealeaf and SiteCatalyst. You should check it out when you get the chance; Adam is a very informed individual, and his passion for web analytics and digital marketing is, in my opinion, unsurpassed. I first met Adam years ago when I was “accepted” into the best practices group (now the consulting group) at Adobe. I say accepted because I was more of a techie than an analyst at the time (they let me in because I knew the data). I enjoyed working with Adam, Brent Dykes, Nathan Frodsham, Brian Jenkins and others, and I always liked the business side of digital marketing, measurement and optimization.
Now, Adam’s post does an exceptional job of explaining how the integration works without selling Tealeaf as a product. (I know, Adam, you’re not there for the vendors, you’re there for the clients!) Having come from the web analytics world with Adam, I wanted to give my two cents beyond the SiteCatalyst integration on how the two tools can work together. Or, more broadly, how web analytics and Tealeaf as a product can work together.
Adam already talked about how web analytics excels at slicing and dicing data. It is a great way to find issues and opportunities. But when those issues are found, it is often difficult to say why they happened. Borrowing from Brent Dykes here, it is like playing Clue: you know that Professor Plum did it in the library with the candlestick, but you don’t know why. You need the full story; you need more data. This is where quantitative analysis leans heavily on qualitative data to get the story. Sometimes a survey will point you in the right direction, or your customer support team will fill in the story. This is where having more data helps. The thing that really attracted me to Tealeaf was not the replay functionality; it was the huge amount of data storage they have tackled for accurate replay. With Tealeaf you can collect everything. Every server call, every internal API call, every external API call, every UI interaction. It tells the whole story because you have everything that happened to that customer laid out before you. Replay is great for a quick gut understanding of what happened, but being able to dive into a deep ocean of data at the individual level tells the entire story. It’s like having the novel to the game of Clue right at your fingertips. Yes, it takes some digging to find the issues, but it’s easy to become adept at pulling the story out of the data. The point is, having massive amounts of data at the individual level can tell you the whole story.
Adam also mentions that prior to version 8, slicing and dicing data in Tealeaf was not as powerful as in a web analytics tool, and that is certainly true. I was very lucky to come into Tealeaf during the launch of version 8. I LOVE version 8. It does all the breakdowns and eventing that traditional web analytics tools do, and dimensioning has been built in really well across all the reporting. So slicing and dicing data to find issues and opportunities can be done directly inside the Tealeaf UI and eventing engine. The example of tracking segments of users who abandon shopping carts for reason X can be easily tracked and reported in version 8, with the option to replay individual sessions. The beauty of having all the data is that what would take weeks, sometimes months, to pull out can often be found by replaying a session and diving into the code for that session. We are talking 20 minutes versus weeks or months, simply because you have access to all the data.
Now, I hate using the word “replay”; to me it is more of a “deep dive”. You find the issue and you dive into its cause. That can mean looking at what the user saw in the UI, his or her UI interactions, what was available in the request/response, what happened with the internal and external APIs, and what happened on previous visits. When the issue is found, you don’t have to wait for your development team to instrument it for measurement; you simply code events based on the data already gathered, and you soon know the extent of the problem. Or, if you have CxConnect, you can run a job to learn how it affected your past visits, but more on that next.
During my web analytics years I was a heavy user of Data Warehouse, which was basically NoSQL: a flat file of clickstream data collected for analytics. I can’t tell you how many times we had to solve issues or dig into an analysis using the data warehouse. The thing that really blew my mind when I heard of Tealeaf was the storage they do on sessions. Full session data stored for days, months, and sometimes years. That means all that raw data is available at any time to be pulled and used for historical analyses when eventing or dimensioning missed something. This is CxConnect. You have this store of ALL the data used to create the replayable sessions, and if you need to pull data out of those sessions, you can do it. When using Data Warehouse for web analytics analysis, around 50% of the time we would have to tell the client it could not be done without making a change to their implementation. The beauty of CxConnect is that at any time you can pull out data that was lost to your web analytics tool. It is seriously amazing to me. It means telling the client it can be done maybe 95% of the time. It’s as if you could open a time portal and go back weeks, months, or even years to tell your developer to code that one thing for your web analytics tool. Now, how can you use this functionality with your present web analytics tool? Simple: pull out the data using CxConnect and insert it into your web analytics environment. This gives you access to past data and allows for side-by-side reporting.
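To make that “pull and insert” step concrete, here is a minimal Python sketch of the reshaping involved. The record fields and the tab-delimited layout are my own illustrative assumptions, not an actual CxConnect export schema or a web analytics import spec; check your tool’s data-import documentation for the real format.

```python
import csv
import io

# Hypothetical records pulled from stored sessions via a CxConnect job.
# Field names here are illustrative assumptions, not a real export schema.
extracted_sessions = [
    {"date": "2011-11-02", "session_id": "abc123",
     "cart_abandon_reason": "shipping_cost", "revenue_lost": "89.99"},
    {"date": "2011-11-02", "session_id": "def456",
     "cart_abandon_reason": "payment_error", "revenue_lost": "42.50"},
]

def to_data_source_rows(records):
    """Reshape per-session records into a tab-delimited block (header row
    plus one data row per session) suitable for a generic web-analytics
    data-source upload."""
    out = io.StringIO()
    writer = csv.writer(out, delimiter="\t", lineterminator="\n")
    writer.writerow(["Date", "Abandon Reason", "Sessions", "Revenue Lost"])
    for r in records:
        writer.writerow([r["date"], r["cart_abandon_reason"], 1,
                         r["revenue_lost"]])
    return out.getvalue()

upload_file = to_data_source_rows(extracted_sessions)
```

The point of the sketch is only the shape of the work: historical session detail goes in one side, aggregatable rows keyed by date and dimension come out the other, ready for side-by-side reporting with your live data.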
The other thing that I LOVE about Tealeaf is that I can finally control DATA QUALITY. That’s right: there have been numerous articles in the digital analytics space about finding the balance between data quality and analysis. If you spend too much time on data quality, you sacrifice time you could have spent on analysis. Data quality was always a pet peeve of mine. I would be helping an analyst understand the clickstream data and have to explain why things were collected in such a manner, or discover an anomaly in the client’s implementation that ruined the whole analysis. Now, during my analyses with Tealeaf, when I find pages that are not coded correctly, I simply change my event or dimension, document it, and my data is a little cleaner. Tealeaf does well at scrubbing process flows. By searching for unexpected process orders, you can quickly see how the events should be recoded by “replaying” (deep diving into) a session. Often there are sub- or side-processes that use parts of other processes and need to be tracked separately, aggregated into the main process, or both. If you can make your Tealeaf eventing and dimensioning match or mirror your web analytics implementation, you can find the data measurement issues and slate them for updates, so your overall analytics outside of Tealeaf is more accurate.
Another point of integration is combining IT data with clickstream data. Tealeaf monitors many things on the server side that a web analytics tool would not: the time it takes a server to generate a page, network times, ACK times, and so on. This data is extremely useful when you are trying to understand why conversion rates may have dropped, if not to show it was server performance then to rule server performance out. In a previous post I mentioned that I used to work for a web analytics company, and I was on a call with a client frantically trying to figure out why a campaign was performing so poorly from a conversion standpoint. From the campaign management perspective it was incredibly successful, with a large click-through-to-impression rate. It turned out the servers could not handle the “success” of the campaign. The result was a wasted campaign budget and a bad user experience. With Tealeaf, IT data can be aggregated across pages, campaigns, or any other sub-relation to create further reporting inside SiteCatalyst that points out server-side issues that could affect conversion. This could also include 400- and 500-level status code pages. By taking the aggregations on a predefined time basis (10 minutes, 30 minutes, or maybe hourly), this data could be uploaded to SiteCatalyst with minimal API token costs.
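As a sketch of what that predefined-time aggregation might look like before upload, here is a small Python example. The hit fields, page names, and timings are invented for illustration; this is not a Tealeaf or SiteCatalyst API, just the rollup logic.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative page-level hits with server-side timings and status codes.
hits = [
    {"ts": "2012-01-09 10:03:12", "page": "/checkout", "status": 200, "gen_ms": 420},
    {"ts": "2012-01-09 10:07:55", "page": "/checkout", "status": 500, "gen_ms": 30000},
    {"ts": "2012-01-09 10:14:30", "page": "/checkout", "status": 200, "gen_ms": 380},
]

def aggregate(hits, bucket_minutes=10):
    """Roll hits up into fixed time buckets per page: hit count, count of
    400/500-level errors, and average server generation time. One row per
    (bucket start, page) keeps the upload volume, and API token cost, low."""
    buckets = defaultdict(lambda: {"hits": 0, "errors": 0, "gen_total": 0})
    for h in hits:
        t = datetime.strptime(h["ts"], "%Y-%m-%d %H:%M:%S")
        floored = t.replace(minute=(t.minute // bucket_minutes) * bucket_minutes,
                            second=0)
        b = buckets[(floored.isoformat(), h["page"])]
        b["hits"] += 1
        b["errors"] += 1 if h["status"] >= 400 else 0
        b["gen_total"] += h["gen_ms"]
    return {k: {"hits": v["hits"], "errors": v["errors"],
                "avg_gen_ms": v["gen_total"] / v["hits"]}
            for k, v in buckets.items()}

summary = aggregate(hits)
```

Each resulting row (bucket, page, hits, errors, average generation time) is small enough to push into a reporting tool on a schedule rather than per hit, which is the whole trick for keeping token costs minimal.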
Finally, Adam mentioned a couple of competitors in replay. And yes, they are competitors in replay, but not in deep diving. The pages those tools construct are not exactly what the user saw; they are a simple way to give an idea of how the user navigated the site. UI navigation, API calls, and the full request/response that the user sent and received are not available. So yes, this may give you some rudimentary understanding of the user experience, but it can’t give you an accurate view of what actually happened to the visitor.
Tealeaf is not a simple product, but it is chock full of all sorts of goodies that excite me every day as I work with the set of tools available. I hope you can use Tealeaf as a companion metric gatherer to your web analytics tool: as a deep dive into web analytics segments, as a data quality tuner, as an IT data gatherer, and as a way to pull missed data.
Please feel free to ping me with any questions on my Twitter account, @solanalytics.
Posted on January 4th, 2012
This is a post I wrote, available HERE. I am posting it on this site to make it more widely available.
In my earlier post, I shared two tips on how to perform campaign tracking beyond what a typical web analytics solution can provide. The goal is to avoid delivering a negative user experience that would ruin an otherwise well-run campaign. The first tip was to set up Tealeaf with performance metrics in order to measure your campaign’s user experience. The second tip was to add campaign IDs to a group list, allowing you to quickly identify campaigns that may be having an issue. In this post, I’ll give you two more tips on this topic.
Tip #3: Measure Conversion
Don’t forget your KPIs! If you’re a retailer, make sure you track your orders. If you’re a B2B company, make sure you keep track of your leads, and so on. Look at your success counts against campaign click-through ratios. Use the dimensional analysis capabilities in Tealeaf to home in on differences that merit replaying a few sessions in order to understand the user experience. Keep track of which campaign groups are converting and which ones are not. Replay sessions that convert well and sessions that don’t, and look for stark differences.
Tip #4: Non-Converting Metrics
There’s no avoiding it—some campaigns are going to be more successful than others. But don’t leave it to pure conversion rates to understand the campaign success and the user experience. Some campaigns do well at conversion, some are good for branding, others may have unexpected outcomes.
- Registration: Did the user register? If so, he may be open to further marketing, and that’s a win in itself.
- Abandoned Revenue: Did the user add products to the cart and then abandon? If he went into the checkout process, chances are you have a way to contact him again. Look at the campaigns that generate large amounts of abandoned revenue to find prospects that are open to more marketing. That means additional opportunity.
- Information Pages: Did the users spend a lot of time on information pages? Chances are you just successfully placed your brand in the mind of the user. A branding success.
- Don’t forget REPLAY: before you kill a campaign make sure there are no unexpected outcomes. Walk through the customer experience by replaying 5-10 sessions in Tealeaf. You may be surprised by what you find.
Although there may be some overlaps with the metrics you are tracking in your web analytics tool, adding campaign tracking to Tealeaf gives a holistic view of what your prospects experience when they click through from a campaign. Keep your eyes open for anomalies and stark differences. Then understand what’s going on by replaying web sessions. It’s a great way to be further informed about the campaigns you have running at your company.
How do you track your campaigns in terms of how well they are performing from a user’s point of view?
Posted on January 4th, 2012
This is a post I wrote, available HERE. I am posting it on this site to make it more widely available.
After several engagements where I walked clients through the importance of tracking their campaigns in Tealeaf, I think this important topic warrants a more detailed discussion here on our blog.
I’ll start by saying that when I first suggest tracking campaigns in Tealeaf, our customers typically show a hint of doubt. They will explain that they are already tracking campaigns in another system, typically a web analytics tool. And that’s fine. But let me highlight a few reasons for tracking campaigns in Tealeaf in addition to web analytics.
For starters, Tealeaf tracks things that are beyond the scope of your average web analytics tool. I spent many years at a web analytics company, so I can highlight the important distinction with a real-world example of a successful campaign.
Before I came to Tealeaf, I had a client with an interesting issue. The company delivering this client’s campaigns reported a large number of click-throughs, and their click-through-to-impression ratio was stellar. So this was a successful campaign, right? The problems were that the analytics tool showed only a fraction of the reported click-throughs, and conversions were actually very low. After some phone calls and discussions with their IT department, it turned out that their web servers could not handle the traffic. They had lost money on a “successful” campaign and had given their users (most of them new to the site) a who’s who list of poor customer experiences: slow-loading pages, status-code-500 errors, and the like. Now, if they had been combining the click-through data with their IT data in real time, this campaign might have had a better outcome. An alert would have warned them of the issue; they could have paused the campaign and worked through the hardware issues. Tracking your campaigns and site performance ensures that new customers, who are less forgiving, have a great experience.
Here are some tips on how you can track your campaigns to ensure the best user experience and, therefore, greater campaign success:

Tip #1: Site Performance
Set up Tealeaf with performance metrics to measure your campaign’s user experience. If you are not measuring these metrics, put them in place right away. Most of these events come built in with newer releases of Tealeaf.
- 500 Level Errors – Track how often the server returns internal server errors with status code 500. Can your servers handle the extra traffic from a successful campaign?
- Cancelled Requests – A request to the server where the response could not be delivered. Did the user just give up on loading the page? Maybe he or she accidentally clicked on a banner, then quickly hit the back button or closed the browser. This will at least give you some clues.
- Server Gen Time – Create buckets of times for the server generation time of a web page. If a page takes more than 30 seconds to generate, that is bad news, and most browsers give up waiting on a response from the server. If the user has to wait more than a couple of seconds for the page to load, it’s a bad experience for that user.
- Network Time – Is your network slowing down the response back to the browser? Though this is not often an issue, you’ll still want to rule it out.
- Page Render Time – How long does the page take to render in the browser? If it is too heavy, consider making the landing page lighter or varying it by browser version/type.
- Round Trip Time – From click-through to having the landing page loaded, how long did it take to serve the campaign landing page to the end user? If it took more than a couple of seconds, start looking at server page generation, network, or page render times.
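The bucketing idea behind the Server Gen Time tip above is simple enough to sketch in a few lines of Python. The thresholds here are my own illustrative assumptions, not Tealeaf defaults; tune them to your own pages.

```python
def time_bucket(ms):
    """Classify a timing value in milliseconds (server generation, network,
    or round-trip time) into a reporting bucket. Thresholds are illustrative
    assumptions; adjust them to your own SLAs."""
    if ms < 1000:
        return "under 1s"
    if ms < 3000:
        return "1-3s"
    if ms < 10000:
        return "3-10s"
    if ms < 30000:
        return "10-30s"
    return "over 30s (likely abandoned)"

# Example: bucket a batch of observed server generation times for reporting.
samples = [250, 1800, 45000]
buckets = [time_bucket(ms) for ms in samples]
```

Reporting on the bucket rather than the raw millisecond value is what makes the metric readable at a glance: a rising “10-30s” bucket during a campaign launch is an immediate red flag.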
Also, don’t forget your customer struggle metrics. Make sure to measure process restarts, form-field errors, time-to-complete, etc. The next section lists dimensions that you can use for your campaigns. Once you create the dimensions, don’t forget to add report groups and make sure all the events mentioned above are using the same report groups.
Tip #2: Group Lists
Adding your campaign IDs to a group list allows you to quickly identify campaigns that may be having an issue. Group lists are easy to manage, and you can export/import from an Excel file. Populate multiple attributes/dimensions with the campaign tracking code ID. For each attribute/dimension, use a group list to classify the tracking codes as part of a value group. Some popular value groups and their uses are shown below:
- Campaign Code – Make sure the campaign code is in its own attribute/dimension so you can home in on the individual campaign that may have a problem.
- Campaign Type – Was this a paid keyword? A banner display? This shows how performance and user experience may differ from one campaign type to another.
- Campaign Name – The general name for the campaign that is running. If you’re running multiple campaigns, it shows how the user experience may differ from one campaign to another.
- Campaign Creative – What creative group was this added to? This shows how a creative helps the user experience or creates a disconnect in the user experience.
- Paid Keyword – If the campaign was for a paid keyword, add the keyword to its own report. This shows how popular keywords may have low conversion because of user experience disconnects once users land on the site.
- Search Engine – Find out if users from different search engines are expecting different experiences.
- Branded Keywords – Track whether users click through from branded or non-branded keywords. Brand aware users often have different expectations from non-branded users.
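As a rough sketch of how a group list lets one tracking-code ID populate several value groups at once, here is a small Python example. The campaign IDs, column names, and CSV layout are invented for illustration; a real group list would be maintained in the Tealeaf UI or imported from Excel.

```python
import csv
import io

# A group list as it might be exported from Excel (CSV here for simplicity).
# Campaign IDs and groupings below are made up for illustration.
group_list_csv = """campaign_id,campaign_type,campaign_name,branded
em1001,email,winter_sale,no
pk2002,paid_keyword,winter_sale,yes
bn3003,banner,new_arrivals,no
"""

def load_group_list(text):
    """Map each campaign tracking-code ID to its value groups so that one
    ID on a hit can populate several attributes/dimensions at once."""
    return {row["campaign_id"]: row for row in csv.DictReader(io.StringIO(text))}

groups = load_group_list(group_list_csv)

def classify(campaign_id):
    """Return the value groups for a tracking code, flagging unknown codes
    so new campaigns that were never added to the list stand out."""
    row = groups.get(campaign_id)
    if row is None:
        return {"campaign_type": "unclassified"}
    return {"campaign_type": row["campaign_type"],
            "campaign_name": row["campaign_name"],
            "branded": row["branded"]}
```

The useful property is the one-to-many fan-out: a single campaign ID on a landing-page hit drives the Campaign Type, Campaign Name, and Branded dimensions simultaneously, so a performance problem can be sliced by any of them without extra tagging.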
I will share additional tips in my next post on this topic. Coming soon!
How are you measuring and monitoring your campaigns to ensure they are as successful as they can be?
Dear FTC. Give Amazon rights to user data and solve the privacy and hacker problems that plague America!

Posted on December 1st, 2011
[Update:] Maybe some people have misunderstood what I’m getting at, so I decided to spell out some of the main points below:
1. Amazon’s Silk browser uses their cloud to put together a web page and deliver an optimized version for a device. The device does not communicate with other data centers, only with Amazon’s EC2 cloud (unless disabled).
2. Although marketed as an optimizer, aggregating and optimizing web pages puts Amazon in a position to protect web users from hackers and nefarious marketers. If used as the sole access to the internet, Amazon can more easily protect user data than any other source (including anti-virus tools).
3. My guess is other companies are already working on a similar cloud optimizer. I would not be surprised to see Google, Microsoft, Apple, and IBM (partnered with Firefox?) release their own aggregator/optimizer.
4. I suggest to the FTC that, rather than policing user data, they allow the aggregators to gather it, protect it, and sell it to marketing firms.
5. I theorize that when an aggregator abuses user data, it is best to let the free market decide: users can move to a different aggregation source.
6. All of our problems solved (I know, just a tad too simple). Enjoy my blog post.
It looks like Facebook has agreed to be audited by the FTC for the next 20 years. It sounds like the FTC may have a hard time even finding a company that can audit how Facebook uses the data it collects. Facebook has been accused of tracking users across multiple sites and selling or using that data with advertisers. This really cracks me up! I remember sitting in a conference room with David Humphries when I worked at Omniture, discussing how we could do similar tracking. I was the technical resource for the Business Development group at the time. Of course, it was all about cookie sharing across multiple clients and creating an industry-wide standard for data collection. After seeing third-party “social plugins” offered by Facebook on cnn.com and other sites, it was obvious how they could use those to track users across the internet and use that data for hyper-focused advertising or, as I like to call it, “Marketers’ Gold”.
Now, dear FTC: please, please stop wasting your money. Please don’t send out sheriffs to patrol the wild west of user data. It’s time to give trustworthy people ownership of the user data and, here it is, privatize it! Let someone own it, protect it, and, yes, sell it. (Can’t we learn from the history of our own “Wild West”?) Let’s face it, all the online marketing companies (and hackers) have been given a free ride with user data. It is time for some order, and guess what: a company you are targeting for privacy concerns could end up solving all of the privacy and hacker issues that affect America and the world.
Amazon has just introduced their Silk browser, which uses the Amazon cloud to optimize how web pages are delivered to the browser. All requests are routed through Amazon’s servers and rebuilt for optimized delivery to the Silk browser. I hope you can see the potential here for Amazon to be a guardian of private data and not an abuser. All the data that gets pushed out from a user’s browser would be pushed out through Amazon and, potentially, blocked. Now, here is where you, the FTC, can parcel out the wild west of user data. Let Amazon block ALL data getting passed out and allow them to charge companies to stream user data out from their cloud. Doing so allows Amazon to more quickly monetize their position as a guardian of data. And, guess what, other cloud services that aggregate data will pop up, and consumers can choose which cloud they want to use to connect to the internet. These aggregators will be more adept at protecting themselves from hackers and nefarious marketers. At that point, it is just a matter of auditing the data that is passed out from the various clouds to marketing firms (and ensuring the marketing firms are not hackers). Users can choose the aggregation center that protects their data best, creating a free market where users can leave one cloud for another if their data is misused.
I believe at some point in the future this will be the predominant model of data distribution. We can fight against it, or we can embrace it and encourage it to grow. The internet has been really free, and not many expect or want organization applied to it. But I’m sure Henry Ford would never have imagined the number of laws and the amount of order that has been applied to driving a car today. This is coming, and I’m sure if you fight against it, 20 years from now many techies will smile, laugh, and say, “What was the FTC thinking!”
Posted on October 7th, 2011
How will Silk change everything?
Take heed: everyone is up in arms about the privacy implications of Silk, but the performance improvements and potential protection from malware will probably win out in the end. Let’s consider the implications.
At first, images will most likely be cached, but as time goes on, by determining which content is dynamic and which is static, most static content will be aggregated. Cookies may move from the browser to the server. Eventually the browser will die and become just a terminal. The request/response that builds the Document Object Model (DOM) for a page would soon morph, because it becomes server-to-server communication. Most likely, most pages will start out static and use a server-to-server AJAX-type request to update the requested page.
And what is to stop the other data centers from doing the same? If this model takes off, soon all requests will be built around a terminal system and everything will transfer from server to server. People will stop asking, “Which browser do you use?” and instead ask, “Which aggregation center do you use?”
At that point, aggregators like Amazon will be at the center of determining which data comes in and which data goes out. Third-party data collectors then become dependent on these aggregators. If you are collecting web data at your own data servers, you do have access to the dynamic content sent out, and hopefully some kind of request for changed static content every time the user requests it. In the worst-case scenario, Amazon and other aggregators close off data collection from their systems due to an increased desire for privacy, and we all move back to the data center for our data collection needs. They have every right to reassure their consumers with the statement, “We are protecting your privacy; companies are still able to optimize based on data requested directly from their data centers.” It would actually be a good move for them if privacy were a real concern. Many spammers and hackers do use beacons to mark users and computers for nefarious purposes.
Anyway, I’m actually looking forward to a quicker browsing experience, with the potential for protection from hackers and maybe even an increase in privacy (depending on how Amazon wants to approach it). Go ahead, Amazon. You’ll make the web tracking companies angry, but remember, they can still collect directly from the data center.
What do you think? Do you think Amazon would restrict data collection for beacon-based data-collection companies, and would there be an exodus to the data center? Or do you think a company like Amazon would keep it open in the name of web optimization?
Posted on August 25th, 2011
At a client I was surprised by one of their concerns with measuring web traffic in general. The concern is not with technology, manpower, or budget; the concern is with culture. Their culture is highly innovative and creative, and there are hints of resistance to web measurement, which has created concern that web measurement will not be fully embraced. I was actually a bit surprised by this. I see measurement and innovation, done well, as the next innovation-focused disruptor. One of my favorite subjects during my MBA was innovation; culture was always stressed as important for enabling innovation and implementing strategy. Of course, changing culture is akin to turning a large cruise liner: it is a large effort that takes a lot of time. The more I thought about this client, the more I could see the reasons for the resistance. Organization and innovation are polar opposites. The dark side of innovation is free movement but utter chaos; the dark side of organization is complete order with no movement. These two sides need each other to operate properly, but leaning to one side or the other depends on the state of the market. Anything with the web, mobile, cloud, etc. as a market needs to lean heavily to the innovative side. Otherwise, as we continue to see in this ever-changing world, companies focused on organization bite the dust. My hope is that this client can stop seeing web measurement as another form of measurement and accountability and start seeing it as a tool for learning.
We’ve all heard the mantras: “You don’t know what you can’t measure,” “If you can’t measure it, you can’t improve it,” and so on. These are valid statements that lean toward the organization side (needed to take advantage of innovation). They are like the brakes on a car: if you drive a car without brakes, how fast are you really going to drive? But any innovative company should be concerned that if these brakes are misused, they freeze up, the car stops moving, and the competition passes by. So yes, there is a dark side to measurement. Measurement is organization, plain and simple. If measurement is used just to show reports and ensure some incremental improvement to the status quo, there is reason for concern; used that way, the company is merely policing the status quo. The big question should always be, “Am I learning something?” If there is no learning, there is no way to challenge the status quo, which is necessary for innovations small and big. If measurement is used as a learning tool, it can empower and further accelerate innovation. Used as a learning tool, the incremental and LARGE improvements will come because you know your market and your customers. That is what I love about Tealeaf’s set of tools. Yes, you can create some great reports and measure incremental improvements, but the most powerful piece is understanding the customer experience. This puts a real story behind the numbers and empowers innovation. Being able to drill into individual sessions based on abandonment, voice of customer, time to complete, customer struggle, etc. moves it from numbers on a report to a learning experience. My hope is that eventually the company I am working with will see Tealeaf as an accelerator of innovation and not just another reporting tool. That way, turning that cruise liner of a culture doesn’t need to happen; innovation can move forward, accelerated by customer experience learning.
Posted on April 19th, 2011
Recently I was working with a large web-based company using Tealeaf CEM tools and happened upon an issue/opportunity that would save the client double-digit millions of dollars. Having worked at Omniture as a consultant and at HP as a web analyst, I had to think back on whether I would have discovered this same issue with the other web analytics toolsets.
As I thought about it, the resounding response was, “Yes, yes I could have found that issue with a web analytics package.” The difference is the process, and how the process fits in with the client’s/company’s processes.
(Now, I don’t want to make this into another “my tool is better than your tool” post. I promise not to do that; I just want to point out the difference in the processes that could be used to find the same issue.)
I’m not going to be the guy who pretends there are hard lines between CEM tools and web analytics tools. Those lines are crossed every day; it’s a big Venn diagram that keeps pushing in toward the center, and I think most of us who use both tools realize that. The differences are the angles and the processes. I highly respect companies that use both a web analytics toolset and Tealeaf products. You can find different things with each tool. Sometimes it is hard to justify both toolsets to the execs, but each has its unique value proposition (and those propositions overlap more and more as the years go on).
There are two types of web analytics issues/opportunities that can be found on a web site: low-hanging fruit and high-hanging fruit. When I was a consultant at Omniture, the head of consulting espoused finding the low-hanging fruit: first, because it was easy to do, and second, because many times there is just as much value in the low-hanging fruit as in the high-hanging fruit. The problem I had with that was that I was always handed the high-hanging fruit, and I had the wrong tools to get at it. The anomalies often came my way because I was the guy who knew how the system worked. I either found the heart of the issue (through a lot of hard work) or failed because I just couldn’t get high enough up the tree, or broke a couple of branches in the process. It was a high-risk position with very little reward. I simply lacked the right tools. That is why it was so refreshing to discover Tealeaf. Tealeaf is the ladder I can place against the fruit tree to get at the high-hanging fruit that no one is touching in the web analytics world. The web analytics world can definitely see some juicy fruit high up there, but often just can’t reach it.
The same can be said about Tealeaf getting at the low-hanging fruit of the web analytics world: it can be done, but you have to try it from the top of the ladder. Thus the Venn diagram analogy.
So here is the process that I went through to find the problem. I want to compare it to web analytics processes I would expect to see from two different types of companies:
1. A large company that has strict release dates and heavy control on client side scripting.
2. A mid-size company with virtually no restrictions on updating the implementation.
First I discovered that a particular browser had lower conversion rates than other browsers. OK, this one is easy to find in both a web analytics tool and Tealeaf. So we know there is a problem.
1. Large Company: Easy to find
2. Mid-Size Company: Easy to find
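This first step is plain aggregation that any toolset can do. As a rough sketch (the session records and field names here are invented for illustration), computing conversion rate per browser from raw session data looks like this:

```python
from collections import defaultdict

def conversion_by_browser(sessions):
    """Aggregate raw session records into a per-browser conversion rate."""
    totals, wins = defaultdict(int), defaultdict(int)
    for s in sessions:
        totals[s["browser"]] += 1
        if s["converted"]:
            wins[s["browser"]] += 1
    return {b: wins[b] / totals[b] for b in totals}

# Invented sample data: one browser clearly underperforms.
sessions = [
    {"browser": "IE7", "converted": False},
    {"browser": "IE7", "converted": False},
    {"browser": "IE7", "converted": True},
    {"browser": "Firefox", "converted": True},
    {"browser": "Firefox", "converted": True},
    {"browser": "Firefox", "converted": False},
]

rates = conversion_by_browser(sessions)
```

The gap between the two rates is the signal; the rest of the process is about explaining it.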
Now I need to know if this is related to a specific checkout process. Easy to do in Tealeaf, just add each checkout process as its own event (takes minutes) and let the data chug.
1. Large Company: Hopefully separating out varying checkout processes was thought through. I’ll assume it was, so easy to do.
2. Mid-Size Company: Even if it wasn’t thought out it should be easy to have an engineer add in the tracking for each process. May take an hour, may take a day or two. Let the data chug.
It is related to a single checkout process. Replaying a few browser sessions, I see a common occurrence: a message telling users to update the security settings in their browser. This is where the split often happens between CEM and web analytics.
1. Large Company: To find this issue, a lot of digging needs to happen. You can pull up the browser in question and walk through the process hoping to hit the same issue, but often, if QA didn’t see it, you won’t see it either.
2. Mid-Size Company: Same as a large company.
Now I want to see how prevalent the security message is for that browser in the process. Maybe there is a common occurrence between these sessions that will help pinpoint the problem. I add an event to the security message (minutes to do) and let the data chug.
1. Large Company: If the security message was discovered but there is no way in the web analytics tools to see that it happened, then the implementation needs to be updated. If that requires server-side coding, you could be looking at three months until the next release date. If there is less concern about client-side scripting AND you can identify that the message was displayed by looking in the DOM, you could get at it a little quicker.
2. Mid-Size Company: If the security message was discovered and there is no way to see it in the analytics tools, just implement further tracking. May take an hour, may take a day or two. Let the data chug.
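Once the message has its own event, quantifying its impact is simple arithmetic in any tool. A minimal sketch (session fields are assumptions for illustration): the share of sessions on the suspect browser that showed the message, and the conversion rate for just those sessions:

```python
def message_impact(sessions, browser):
    """Return (share of sessions showing the message, conversion rate among them)."""
    subset = [s for s in sessions if s["browser"] == browser]
    saw = [s for s in subset if s["saw_security_msg"]]
    prevalence = len(saw) / len(subset) if subset else 0.0
    conv_rate = sum(s["converted"] for s in saw) / len(saw) if saw else 0.0
    return prevalence, conv_rate

# Invented sample sessions for the suspect browser.
sessions = [
    {"browser": "IE7", "saw_security_msg": True, "converted": False},
    {"browser": "IE7", "saw_security_msg": True, "converted": False},
    {"browser": "IE7", "saw_security_msg": False, "converted": True},
    {"browser": "IE7", "saw_security_msg": False, "converted": True},
]

prevalence, conv_rate = message_impact(sessions, "IE7")
```

If the message sessions convert far worse than the rest, the message is worth chasing.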
I was able to determine that the message appeared for N% of users on that browser, and the conversion rate for those who saw the message was rather low. Replaying those specific sessions, I see a series of clicks and page views leading up to the message. So I create a sequence event to track how often that series of events occurs, and let the data chug.
1. Large Company: Sequence events are nearly nonexistent in out-of-the-box web analytics tools. You may be able to get at this with some advanced segmentation or data warehousing.
2. Mid-Size Company: Same
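For what it’s worth, the sequence event itself is simple logic once you have per-session event streams, which is exactly what a CEM tool stores. A sketch (event names invented): check whether a session’s events contain the suspect series in order, and compute how often the message sessions match it:

```python
def contains_sequence(events, pattern):
    """True if `pattern` occurs within `events` in order (gaps allowed)."""
    it = iter(events)
    # `step in it` advances the iterator, so order is enforced.
    return all(step in it for step in pattern)

# Invented event streams and an invented suspect click/page-view series.
PATTERN = ["view_payment", "click_edit_card", "view_payment"]
sessions_with_message = [
    ["home", "view_payment", "click_edit_card", "view_payment", "security_msg"],
    ["home", "view_payment", "click_edit_card", "view_payment", "security_msg"],
    ["home", "view_payment", "security_msg"],
]

share = sum(
    contains_sequence(ev, PATTERN) for ev in sessions_with_message
) / len(sessions_with_message)
```

A share near 100% is the “Professor Plum in the library” moment: the sequence and the message are tied together.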
Using the sequence event, I was able to determine that 99% of the time it was this sequence that created the security message. “Bag it and tag it”! Time to pass the data AND the replayable sessions on to QA, Product Management and Engineering. The bug is then added to the list of fixes.
1. Large Company: Finally able to determine the cause of the low conversion. Now, convincing Product Management and Engineering is a whole other ball of wax.
2. Mid-Size Company: Finally. Now get in a room with everyone and talk it through. They’ll see the issue easily enough. Added to the list of bugs to fix.
The difference here: with CEM tools I was able to pull out the problem and pinpoint it in less than a day. By providing real evidence to the engineering group, the issue was taken seriously and the fix was added to the list.
With web analytics tools we may eventually get there, but it will take days to months to completely flesh out the problem. Convincing engineering will take some more time if you are in a large company.
Once again, this is not a “MY TOOL IS BETTER THAN YOUR TOOL” post. There are different processes that get you to the same solution. I just feel like I’m climbing a ladder with Tealeaf rather than struggling up branches to get to that high-hanging fruit in the web analytics world.
Posted on January 26th, 2011 8 comments
So I just listened to the webinar from Peterson and Ensighten on Tag Management Systems. This has always been a hot topic in my career. At Omniture I was part of the original team to implement and identify directions for the “Universal Tag”. I use quotes because, as was pointed out in the webinar, it really wasn’t a “Universal Tag”; it was more of a helper tag to push out data to partners. It also came with unreasonable costs (at least in my opinion). Why would we charge for work that the browser was doing? Yes, the data that was already being collected through the Omniture implementation could be leveraged toward partners, but the cost was unreasonable and further entrenched the customer into the Omniture tagging architecture. I complained up the channels at Omniture, but the opportunity of leveraging the tags for further revenue streams was more appealing than building an open architecture free for everyone to use.
Fast forward and I left Omniture to be an analyst at HP. Managing tags was a HUGE issue and we looked into Tealium for help. What really made sense to us at the time was an open architecture that enterprise online tools could turn to for help in easily collecting data on customers. A central source where industry specific data was collected and then passed to any partner that wanted data to run their online tools. I had some contacts from the Omniture partner program and got some feedback on what would really work. We decided to build out our own architecture and make it open source so anyone could access it and partners could build out new functionality. I worked with Matt Wright (now the CTO at Keystone Solutions) to build out the architecture for an open source tag. Well, we had built the tag and were implementing it when I had an offer to make a lot of money and travel the world. So I left the #measure world for a year. During that time, Matt left HP for Keystone, he open-sourced our tag management architecture and has since inked a deal with webanalyticsdemystified (nice work guys). I know that Keystone has been having some success with the open sourced tag management system and that is why I was surprised to hear Eric Peterson say that online managers should run away from open source tag management.
For me, keeping tag management open makes more sense than building a new industry around it, because of the power that comes from being the center of data collection. Many online tools are vying to be the center of data collection for the web. It is an extremely strategic position to be in; everything begins at the center of data collection and distribution. That is the one reason I think keeping the architecture open makes sense. The one question I had during the webinar was how Ensighten planned on creating checks and balances so their position of power was not abused. I was also curious how they planned to work with online tools to implement new feature sets. Some kind of open architecture to develop on, reviewed by Ensighten developers and analysts, would be ideal. Maybe if Ensighten were a non-profit entity I would worry less about where they might end up.
But as some of you know, I joined Tealeaf because of their data collection setup (easily collect data without bugging developers) and the potential for extreme analysis of the data (they collect everything). Just because Tealeaf has a different way to collect data does not mean I think a TMS is moot. There will always be a need to access and distribute data directly from the browser (unless the request-response internet model ceases to exist). In fact, there is code that Tealeaf uses that would be nice to add to a TMS so data collection can be flipped on rather than reviewed, implemented and tested by clients (ideally). So, yes, I am on board for an architecture that can more easily implement all these tags that online managers need to run their website. My only concern is the strategic position that the de facto TMS may find itself in. Let’s make sure no abuse comes of it. My vote will always be for an open source or non-profit entity because of that strategic position.
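To make the “center of data collection and distribution” idea concrete, here is a tiny sketch of what an open data-layer dispatcher could look like. Everything here (partner names, payload formats) is invented for illustration; the point is only that the data is collected once and each partner supplies its own mapping:

```python
def to_analytics(data):
    """Map the shared data layer to a hypothetical analytics payload."""
    return {"pageName": data["page"], "events": data.get("events", [])}

def to_ad_partner(data):
    """Map the same data layer to a hypothetical ad-partner payload."""
    return {"pid": data["campaign"], "landing": data["page"]}

# Partners subscribe by registering a formatter; an open architecture
# would let anyone add one without touching the page's tagging.
PARTNERS = {"analytics": to_analytics, "ads": to_ad_partner}

def dispatch(data, partners=PARTNERS):
    """Collect once, fan out: one formatted payload per subscribed partner."""
    return {name: fmt(data) for name, fmt in partners.items()}

payloads = dispatch({"page": "/checkout", "campaign": "em123", "events": ["scAdd"]})
```

Whoever owns the `dispatch` step owns the strategic position the post is worried about, which is exactly why I would rather see it open.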
Posted on December 18th, 2010 No comments
My Omniture Story
I started at Omniture in 2000 as an engineer, then went to Implementation, then started the Engineering Services group, and then went into the Best Practices group. During my time at Omniture I was lucky to be put in a position that acted as the liaison between Professional Services and Engineering. Without all those great questions (mainly from clients through consultants) I wouldn’t have learned and thought through as much as I was able to. I really enjoyed seeking out solutions to out-of-the-ordinary reporting requests. Then in 2006, Omniture built out the Best Practices group and a lot of those questions from consultants ended up going to that group. I realized how much I missed getting those questions and seeking out technical answers. I talked to the group and the truth was, even though they were heavy on the business side, they lacked a lot of the technical insight to really solve some of the more advanced problems. So I was hired on with that group. While in BP I was asked to help move along the Genesis program, which frankly was dead in the water. I was lucky to have background in implementation and BP, and we were able to get things rolling. During that time, we had requests from clients to integrate with a company called Tealeaf. So jumping on a phone call we talked about what Tealeaf does. Once I heard what they did, I was excited. This is a company that collects everything? And replays the user’s experiences? Holy Crap!
What I liked about Tealeaf
So, what I really liked about Tealeaf is their implementation process. While a little heavy up front, the system they have built to update their data collection and eventing is amazing. One of the frustrating things at Omniture was creating a solution that required additional data collection. Tealeaf is set up to change the implementation on a dime. No more waiting two or three months for the client’s developers to update the implementation. So while costly up front, it saves both time (crucial to the strategy of getting data on customers) and money (paid out as additional development to update the implementation). When the time was right I moved on from Omniture and let the 1½-year non-compete agreement expire. Now that I am at Tealeaf, it’s really exciting to see what the tool can do and where they are headed.
Where Tealeaf is Now
Tealeaf is King of the “Customer Experience Management” industry. Adobe (Omniture) has split itself into two pieces from a consulting standpoint: Acquisition and Conversion. “Customer Experience Management” more closely resembles the Conversion realm: helping the customer get through web processes by discovering and resolving customer struggle. Tealeaf’s technology has always been very session oriented, and most reporting brings you back to specific sessions that can be replayed to VERY EASILY discover a problem. Tealeaf’s core competency is definitely replay. After walking through some replays, my conclusion is that one Tealeaf replay is worth 100 reports and 20 minutes of replay is worth 2 days of data mining. Amazing! Tealeaf has also perfected data collection through AJAX and RIA. The data collection from Flash and the subsequent replays are also amazing.
Where Tealeaf is Headed
I think there are a few things that will happen in the future that really drew me to Tealeaf. One is the drop in processing and storage costs, and general processing and storage improvements. These improvements are a boon to any technology company, but especially to Tealeaf, which has tons of data and needs to quickly sort through and reprocess older data. Also, I think traditional web analytics companies will experience a squeeze. They are being squeezed from the bottom right now by the elephant in the room at any web analytics company: Google Analytics. I think that as web analytics practitioners become more and more savvy, we will see a big demand for Tealeaf products and subsequent demands for improvements. This will culminate in a high-end offering and be the upper squeeze. Just conjecture. Finally, I think that cloud computing and media offerings like Netflix are going to be a huge disruptor that Tealeaf is positioned to take advantage of. The old days of request and response between server and client may just disappear into server processing and displays over broadband. Because Tealeaf is attached to the data center, they are a shoo-in for this. So, some wild predictions, but that is what excites me most about Tealeaf. They have been around for a long time (since ’99) and I think their day in the sun is coming this next decade.
Posted on November 25th, 2009 1 comment
I was recently contacted by an old friend from Omniture with an interesting proposition. Richard, who was one of the star developers at Omniture, had put together a prototype for a company he is working for. They had been having some success and he told me a little bit about what they were doing. After signing an NDA and getting more information, I was extremely intrigued, and it only reinforced my notion that analytics can solve almost any problem and will be used in very innovative ways in years to come. This company has a very intriguing business plan: using analytics to solve the ills of the music industry. He was looking for someone to support their business development team from an engineering perspective, which I had a lot of experience doing with the Genesis program at Omniture. It also meant managing a small team to employ rapid development techniques. Very interesting, and looking back on my career at Omniture, these were some of my favorite things to do (work with BizDev and create rapid solutions).
Personally I never saw myself working in the music industry, but who doesn’t like music, and who wouldn’t want to be close to the industry that inspires the world every day? This is also one of those opportunities that contains the two ingredients needed for extreme potential: it is manifestly important and it is nearly impossible. Manifestly important to both society and artists; artists need to get paid and society wants them to be paid (just not from their own pocketbook). Nearly impossible because of the complexities of working this out with partners and customers. I can’t get into specifics, but expect good things to come from the music industry in years to come.
In short, I will miss doing the traditional web analytics with HP, but this job offer shows me that the analytics industry extends well beyond the online web site and offline business intelligence. For me it feels like the internet is just starting to experiment with crawling. I’m amazed by the potentials that are presented every day.
I’ll still spend some time on the Yahoo! web analytics users group and follow #measure on Twitter. Once you are in analytics, I think it is hard to change that mindset.