Posted on January 9th, 2012 No comments
I recently worked with Adam Greco to do a post on integrating Tealeaf and SiteCatalyst. You should check it out when you get the chance; Adam is a very informed individual and his passion for web analytics and digital marketing is, in my opinion, unsurpassed. I first met Adam years ago when I was “accepted” into the best practices group (now the consulting group) at Adobe. I say accepted because I was more of a techie than an analyst at the time (they let me in because I knew the data). I enjoyed working with Adam, Brent Dykes, Nathan Frodsham, Brian Jenkins and others, and I always liked the business side of digital marketing, measurement and optimization.
Now, Adam’s post does an exceptional job of explaining how the integration works without selling Tealeaf as a product. (I know, Adam, you’re not there for the vendors, you’re there for the clients!) Having come from the web analytics world with Adam, I wanted to give my two cents beyond the SiteCatalyst integration on how the two tools can work together. Well, how web analytics and Tealeaf as a product can work together.
Adam already talked about how web analytics excels at slicing and dicing data. It is a great way to find issues/opportunities. But when those issues/opportunities are found, it is often difficult to say why something happened. Borrowing from Brent Dykes here, it is like playing Clue. You know that Professor Plum did it in the library with the candlestick, but you don’t know why. You need the full story; you need more data. This is where quantitative leans heavily on qualitative to get the story. Sometimes a survey will point you in the right direction, or your customer support team will fill in the story. This is where having more data helps. And the thing that really attracted me to Tealeaf was not the replay functionality; it was the huge amount of data storage they have tackled for accurate replay. With Tealeaf you can collect everything. Every server call, every internal API call, every external API call, every UI interaction; it tells the whole story because you have everything that happened to that customer laid out before you. The replay is great for a quick gut understanding of what happened, but being able to dive into a deep ocean of data at the individual level tells the entire story. It’s like having the novel to the game of Clue right at your fingertips. Yes, it takes some digging to find the issues, but it’s easy to become adept at pulling out what happened. The point is, having massive amounts of data like that at the individual level can tell you the whole story.
Adam also mentions that prior to version 8, slicing and dicing data was not as powerful as using a web analytics tool, and that is certainly true. I was very lucky to come into Tealeaf during the launch of version 8. I LOVE version 8. It does all the breakdowns and eventing of a traditional web analytics tool, and the dimensioning has been built really well into all the reporting. So slicing and dicing data to find issues/opportunities can be done directly inside the Tealeaf UI and eventing engine. The example of tracking segments of users who abandon shopping carts for X reason can be easily tracked and reported in version 8, with the option to replay individual sessions. The beauty of having all the data is that what would take weeks, sometimes months, to pull out can often be found by replaying a session and diving into the code for that session. We are talking 20 minutes vs. weeks or months, simply because you have access to all the data.
Now, I hate using the word “replay”; to me it is more of a “deep dive”. You find the issue and you dive into what the cause of the issue was. That can mean looking at what the user saw in the UI, his/her UI interactions, what was available in the request/response, what happened with the internal/external APIs, and what happened on previous visits. When the issue is found, you don’t have to wait for your development team to code up the issue for measurement; you simply code events based on the data gathered and you soon know the extent of the problem. Or, if you have CxConnect, you can run a job to know how it affected your past visits, but more on that next.
During my web analytics years I was a heavy user of Data Warehouse; it was essentially a NoSQL store, a flat file of clickstream data collected for analytics. I can’t tell you how many times we had to solve issues or dig into an analysis using the data warehouse. Now, the thing that really blew my mind when I heard of Tealeaf was the storage they do on sessions. Full session data stored for days, months and sometimes years. That means all that pure data is available at any time to be pulled and used for past analyses when eventing/dimensioning missed something. This is CxConnect. You have this store of ALL the data that is used to create the replayable sessions, and if you need to pull data out of those sessions, you can do it. Using the Data Warehouse for web analytics analysis, around 50% of the time we would have to tell the client it could not be done without making a change to their implementation. The beauty of CxConnect is that at any time you can pull out data that was lost to your web analytics tool. It is seriously amazing to me. That means telling the client it can be done maybe 95% of the time. It’s as if you were able to open a time portal and go back weeks, months, or maybe years to tell your developer to code this one thing for your web analytics tool. Now, how can you use this functionality with your present web analytics tool? Simple: pull out the data using CxConnect and insert it into your web analytics environment. This will give you access to past data and allow for side-by-side reporting.
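To make that last step concrete, here is a minimal sketch assuming the extraction job can hand you session fields as CSV. The column names and the flat import layout are hypothetical; a real integration would target your analytics vendor's own import format.

```python
import csv
from io import StringIO

def sessions_to_import_rows(export_csv, visitor_col="visitor_id", fields=("page", "event")):
    """Map extracted session rows into a flat layout ready for a bulk import.

    Column names are illustrative, not actual CxConnect output."""
    rows = []
    for rec in csv.DictReader(StringIO(export_csv)):
        rows.append({"visitor": rec[visitor_col],
                     **{f: rec.get(f, "") for f in fields}})
    return rows

# Hypothetical two-row export pulled from archived sessions
export = "visitor_id,page,event\nv1,/checkout,cart_error\nv2,/home,login"
print(sessions_to_import_rows(export))
```

The shape of the output would then be mapped onto whatever data-insertion mechanism your web analytics environment accepts.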
The other thing that I LOVE about Tealeaf is that I can finally control DATA QUALITY. That’s right, there have been numerous articles in the digital analytics space about finding your balance between data quality and analysis. If you spend too much time on data quality, you sacrifice time you could have spent on analysis. Data quality was always a pet peeve of mine. I would be helping an analyst understand the clickstream data and have to explain why things were being collected in such a manner, or discover an anomaly in the client’s implementation that ruined the whole analysis. Now, during my analyses with Tealeaf, when I find pages that are not coded correctly, I simply change my event/dimension, document it, and my data is a little cleaner. Tealeaf does well with scrubbing process flows. By searching for unexpected process orders, you can quickly see how the events should be recoded by “replaying” (deep diving into) a session. Often there are sub/side processes that use parts of other processes and need to be tracked separately, aggregated into the main process, or both. Now, if you were able to have your Tealeaf eventing/dimensioning match or mirror your web analytics implementation, you could find the data measurement issues and slate them for updates so your overall analytics outside of Tealeaf is more accurate.
Another point of integration is combining IT data with clickstream data. Tealeaf monitors many things on the server side that a web analytics tool would not: namely, the time it takes a server to generate a page, network times, ack times, etc. This data is extremely useful when you are trying to understand why conversion rates may have dropped, if not to show it was server performance then to rule out server performance. In a previous post I stated how I used to work for a web analytics company and was on a call with a client frantically trying to figure out why a campaign was performing so poorly from a conversion standpoint. From the campaign management perspective it was incredibly successful, with a high click-through-to-impression ratio. It turns out the servers could not handle the “success” of the campaign. This turned into a wasted campaign budget and a bad user experience. With Tealeaf, IT data can be aggregated across pages, campaigns, or any other sub-relation to create further reporting inside SiteCatalyst that points out server-side issues that could affect conversion. This could also include 400-level and 500-level status code pages. By taking the aggregations on a predefined time basis (10 minutes, 30 minutes, or maybe hourly), this data could be uploaded to SiteCatalyst with minimal API token costs.
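The time-window aggregation described above can be sketched as follows. The field layout is an assumption, not actual Tealeaf output; the idea is simply that rolling per-hit IT data up into fixed windows keeps the upload volume (and API token cost) small.

```python
from collections import defaultdict
from datetime import datetime

def aggregate_windows(hits, minutes=10):
    """Roll per-hit IT data (timestamp, server gen ms, status code)
    up into fixed time windows for a cheap bulk upload."""
    buckets = defaultdict(lambda: {"hits": 0, "gen_ms": 0, "errors_5xx": 0})
    for ts, gen_ms, status in hits:
        # Truncate the timestamp down to the start of its window
        t = ts.replace(minute=ts.minute - ts.minute % minutes, second=0, microsecond=0)
        b = buckets[t]
        b["hits"] += 1
        b["gen_ms"] += gen_ms
        b["errors_5xx"] += status >= 500
    return {t: {**b, "avg_gen_ms": b["gen_ms"] / b["hits"]} for t, b in buckets.items()}

# Invented sample hits: (timestamp, server generation ms, HTTP status)
hits = [(datetime(2012, 1, 9, 9, 3), 120, 200),
        (datetime(2012, 1, 9, 9, 7), 480, 500),
        (datetime(2012, 1, 9, 9, 14), 150, 200)]
agg = aggregate_windows(hits)
```

Each resulting window (hit count, 5xx count, average generation time) would then be one row in the upload, rather than one row per hit.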
Finally, Adam mentioned a couple of competitors in replay. And yes, they are competitors in replay, but not in deep diving. The pages those tools construct are not exactly what the user saw; they are a simple way to give an idea of how the user navigated the site. UI interactions, API calls, and the full request/response that the user sent/received are not available. So, yes, this may give you some rudimentary understanding of the user experience, but it can’t give you an accurate view of what actually happened to the visitor.
Tealeaf is not a simple product, but it is chock full of all sorts of goodies that excite me every day working with the set of tools available. I hope you can use Tealeaf as a companion metric gatherer to your web analytics tool, as a deep dive into web analytics segments, as a data quality tuner, as an IT data gatherer, and as a way to pull missed data.
Please feel free to ping me about any questions you may have on my twitter account @solanalytics.
Posted on January 4th, 2012 No comments
This is a post I wrote available HERE. I am posting on this site to make it more widely available.
In my earlier post, I shared two tips on how to perform campaign tracking beyond what a typical web analytics solution can provide. The goal is to avoid providing a negative user experience that would ruin an otherwise well-run campaign. The first tip was to set up Tealeaf with performance metrics in order to measure your campaign’s user experience. The second tip was to add campaign IDs to a group list, allowing you to quickly identify campaigns that may be having an issue. In this post, I’ll give you two more tips on this topic.
Tip #3: Measure Conversion
Don’t forget your KPIs! If you’re a retailer, make sure you track your orders. If you’re a B2B company, make sure you keep track of your leads, etc. Look at your success counts over campaign click-through ratios. Use the dimensional analysis capabilities in Tealeaf to home in on differences that merit replaying a few sessions in order to understand the user experience. Keep track of which campaign groups are converting and which ones are not. Replay sessions that convert well and sessions that don’t, and look for stark differences.
Tip #4: Non-Converting Metrics
There’s no avoiding it—some campaigns are going to be more successful than others. But don’t leave it to pure conversion rates to understand the campaign success and the user experience. Some campaigns do well at conversion, some are good for branding, others may have unexpected outcomes.
- Registration: Did the user register? If so, he may be open to further marketing, and that’s a win in itself.
- Abandoned Revenue: Did the user add products to the cart and then abandon? If he went into the checkout process, chances are you have a way to contact him again. Look at the campaigns that generate large amounts of abandoned revenue to find prospects that are open to more marketing. That means additional opportunity.
- Information Pages: Did the users spend a lot of time on information pages? Chances are you just successfully placed your brand in the mind of the user. A branding success.
- Don’t forget REPLAY: before you kill a campaign make sure there are no unexpected outcomes. Walk through the customer experience by replaying 5-10 sessions in Tealeaf. You may be surprised by what you find.
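The abandoned-revenue idea above boils down to summing the cart value left behind by non-converting sessions, grouped by campaign. A minimal sketch (the field names are invented):

```python
from collections import defaultdict

def abandoned_revenue_by_campaign(sessions):
    """Sum cart value left behind by sessions that added to cart but never ordered."""
    totals = defaultdict(float)
    for s in sessions:
        if s["cart_value"] > 0 and not s["ordered"]:
            totals[s["campaign"]] += s["cart_value"]
    return dict(totals)

# Invented sample sessions
sessions = [
    {"campaign": "email_jan", "cart_value": 80.0, "ordered": False},
    {"campaign": "email_jan", "cart_value": 50.0, "ordered": True},
    {"campaign": "banner_a",  "cart_value": 120.0, "ordered": False},
]
print(abandoned_revenue_by_campaign(sessions))
```

Campaigns with large abandoned totals are the ones worth re-marketing to, even though their straight conversion numbers look weak.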
Although there may be some overlaps with the metrics you are tracking in your web analytics tool, adding campaign tracking to Tealeaf gives a holistic view of what your prospects experience when they click through from a campaign. Keep your eyes open for anomalies and stark differences. Then understand what’s going on by replaying web sessions. It’s a great way to be further informed about the campaigns you have running at your company.
How do you track your campaigns in terms of how well they are performing from a user’s point of view?
Posted on January 4th, 2012 1 comment
This is a post I wrote available HERE. I am posting on this site to make it more widely available.
After several engagements where I walked clients through the importance of tracking their campaigns in Tealeaf, I think this important topic warrants more detailed discussion here in our blog.
I’ll start by saying that when I first suggest tracking campaigns in Tealeaf, our customers typically show a hint of doubt. They will explain that they are already tracking campaigns in another system, typically a web analytics tool. And that’s fine. But let me highlight a few of the reasons for tracking campaigns in Tealeaf in addition to web analytics.
For starters, Tealeaf tracks things that are beyond the scope of your average web analytics tool. I spent many years at a web analytics company, so I can highlight the important distinction with a real-world example of a successful campaign.
Before I came to Tealeaf, I had a client with an interesting issue. The company delivering this client’s campaigns reported a large number of click-throughs. And their click-throughs-to-impressions ratio was stellar. So this was a successful campaign, right? The problems were that the analytics tool showed only a fraction of the reported click-throughs, and conversions were actually very low. After some phone calls and discussions with their IT department, it turned out that their web servers could not handle the traffic. They had lost money on a “successful” campaign and had given their users (most being new to the site) a horrible customer experience: slow-loading pages, status-code-500 errors, and the like. Now, if they had been combining the click-through data with their IT data in real time, this campaign might have had a better outcome. An alert would have warned them of the issue, they would have paused the campaign and worked through the hardware issues. Tracking your campaigns and site performance ensures that new customers, who are less forgiving, have a great experience.
Here are some tips on how you can track your campaigns to ensure the best user experience and, therefore, greater campaign success:
Tip #1: Site Performance
Set up Tealeaf with performance metrics to measure your campaign’s user experience. If you are not measuring these metrics, put them in place right away. Most of these events come built in with newer releases of Tealeaf.
- 500 Level Errors – Track how often the server returns internal server errors with status code 500. Can your servers handle the extra traffic from a successful campaign?
- Cancelled Requests – This is a request to the server where the response could not be delivered. Did the user just give up on loading the page? Maybe he or she accidentally clicked on a banner, then quickly hit the back button or closed the browser. This will at least give you some clues.
- Server Gen Time – Create buckets of times for the server generation time of a web page. If the page is taking more than 30 seconds to load, this is bad news; most browsers give up waiting for a response from the server. If the user has to wait more than a couple of seconds for the page to load, it’s a bad experience for that user.
- Network Time – Is your network slowing down the response back to the browser? Though this is not often an issue, you’ll still want to rule it out.
- Page Render Time – How long is the page taking to render in the browser? If it is too heavy, consider making the landing page lighter or modifying it by browser version/type.
- Round Trip Time – From click-through to having the landing-page loaded, how long did it take to serve up the campaign landing page to the end user? If it took more than a couple seconds, start looking at server page generation, network or page render times.
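The bucketing mentioned under Server Gen Time might look something like this. The thresholds are illustrative, not Tealeaf defaults; tune them to your own tolerance levels.

```python
def gen_time_bucket(seconds):
    """Classify a page's server generation time into a reporting bucket.

    Thresholds are illustrative assumptions, not product defaults."""
    if seconds < 2:
        return "good (<2s)"
    if seconds < 10:
        return "slow (2-10s)"
    if seconds < 30:
        return "painful (10-30s)"
    return "abandoned (30s+)"

print([gen_time_bucket(t) for t in (0.4, 5, 31)])
```

Reporting on the bucket rather than the raw time makes it easy to see, per campaign, what share of landing pages fell into the painful ranges.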
Also, don’t forget your customer struggle metrics. Make sure to measure process restarts, form-field errors, time-to-complete, etc. The next section lists dimensions that you can use for your campaigns. Once you create the dimensions, don’t forget to add report groups and make sure all the events mentioned above are using the same report groups.
Tip #2: Group Lists
Adding your campaign IDs to a group list allows you to quickly identify campaigns that may be having an issue. Group lists are easy to manage, and you can export/import from an Excel file. Populate multiple attributes/dimensions with the campaign tracking code ID. For each attribute/dimension, use a group list to classify the tracking codes as part of a value group. Some popular value groups and their uses are shown below:
- Campaign Code – Make sure the campaign code is in its own attribute/dimension to home in on the individual campaign that may have a problem.
- Campaign Type – Was this a paid keyword? A banner display? This shows how performance and user experience may differ from one campaign type to another.
- Campaign Name – The general name for the campaign that is running. If you’re running multiple campaigns, it shows how the user experience may differ from one campaign to another.
- Campaign Creative – What creative group was this added to? This shows how a creative helps the user experience or creates a disconnect in the user experience.
- Paid Keyword – If the campaign was for a paid keyword add the keyword to its own report. This shows how popular keywords may have low conversion because of user experience disconnects once they land on the site.
- Search Engine – Find out if users from different search engines are expecting different experiences.
- Branded Keywords – Track whether users click through from branded or non-branded keywords. Brand aware users often have different expectations from non-branded users.
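Conceptually, a group list is just a lookup from tracking-code ID to value groups. A rough sketch of the classification (the codes and group values are made up):

```python
# Hypothetical group list: tracking-code ID -> value groups, as you
# might maintain in an Excel sheet and import.
GROUP_LIST = {
    "em_2012_wk1": {"type": "email", "name": "January Sale", "creative": "hero_red"},
    "ppc_brand_01": {"type": "paid_search", "name": "Brand Terms", "creative": "text_a"},
}

def classify(tracking_code):
    """Return the value groups for a campaign tracking code, if listed."""
    return GROUP_LIST.get(tracking_code, {"type": "unclassified"})

print(classify("ppc_brand_01")["type"])
```

Each value group (type, name, creative, and so on) then becomes its own dimension, so performance and struggle events can be broken down by any of them.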
I will share additional tips in my next post on this topic. Coming soon!
How are you measuring and monitoring your campaigns to ensure they are as successful as they can be?
Posted on October 7th, 2011 No comments
How will Silk change everything?
Take heed, everyone is up in arms about the privacy implications of Silk. But the performance improvements and potential protection from Malware will probably win out in the end. Let’s consider the implications.
At first, images will most likely be cached, but as time goes on, by determining which content is dynamic and which content is static, most static content will be aggregated. Cookies may move from the browser to the server. And eventually the browser will die and just become a terminal. The request/response that builds a Document Object Model (DOM) for the page would soon morph because it is now about server to server communications. Most likely it will mean most pages start out static and use a server-to-server AJAX type request to update the requested page.
And, what is to stop the other data centers from doing the same? If this model takes off, soon all requests will be built around a terminal system and everything transfers from server to server. People will stop asking, “Which browser do you use?” and instead ask, “Which Aggregation Center do you use?”.
At this point, it means that aggregators like Amazon will be at the center of determining which data comes in and which data goes out. Third-party data collectors are then dependent on these aggregators. If you are collecting web data at your own data servers, you do have access to the dynamic content sent out, and hopefully some kind of request for change to static content every time the user requests the static content. Worst-case scenario: Amazon and the aggregators close the world to data collection from their system due to an increased desire for privacy. Then we all move back to the data center for our data collection needs. They have every right to reassure their consumers with the statement “we are protecting your privacy; companies are still able to optimize based on data requested directly from their data centers”. It would actually be a good move for them if privacy were a real concern. Many spammers and hackers do use beacons to mark users/computers for nefarious purposes.
Anyway, I’m actually looking forward to a quicker browsing experience, with the potential of protection from hackers and maybe even an increase in privacy (depending on how Amazon wants to approach it). Go ahead Amazon, you’ll get the web tracking companies angry, but remember, they can still collect directly from the data center.
What do you think? Do you think Amazon would restrict data collection for beacon-based data-collection companies and would there be an exodus to the data center? Or do you think a company like Amazon would keep it open in the name of web optimization?
Posted on August 25th, 2011 No comments
At a client I was surprised by one of the concerns they have with measuring web traffic in general. Their concern is not with technology, manpower or budget; the concern is with culture. Their culture is highly innovative and creative, and there are hints of resistance to web measurement. This has created concerns that web measurement will not be fully embraced. I was actually a bit surprised by this. I see measurement and innovation, done well, as the next innovation-focused disruptor. One of my favorite subjects during my MBA was innovation; culture was always stressed as important for enabling innovation and implementing strategy. Of course, changing culture is akin to turning a large cruise liner. It is a large effort that takes a lot of time. The more I thought about this client, the more I could see the reasons for the resistance. Organization and innovation are polar opposites. The dark side of innovation is free movement but utter chaos. The dark side of organization is complete order with no movement. These two sides need each other to operate properly, but leaning to one side or the other depends on the state of the market. Anything with the web, mobile, cloud, etc. as a market needs to lean heavily to the innovative side. Otherwise, as we continue to see in this ever-changing world, companies focused on organization bite the dust. My hope is that this client can stop seeing web measurement as just another form of measurement and accountability, and start seeing it as a tool for learning.
We’ve all heard the mantras: “You don’t know what you can’t measure”, “If you can’t measure it, you can’t improve it”, etc. These are valid statements that sit more on the organization side (needed to take advantage of innovation). They are like the brakes on a car. If you drive a car without brakes, how fast are you really going to drive? But any innovative company should be concerned: if these brakes are misused, they freeze up, the car stops moving and the competition passes by. So, yes, there is a dark side to measurement. Measurement is organization, plain and simple. If measurement is used as a way to just show reports and ensure some incremental improvement to the status quo, there is reason for concern. If reports are used in this way, the company is merely policing the status quo. The big question should always be, “Am I Learning Something?”. If there is no learning, there is no way to challenge the status quo, which is necessary for innovations small and big. If measurement is used as a learning tool, it can empower and further accelerate innovation. Used that way, the incremental and LARGE improvements will come because you know your market and your customers. That is what I love about Tealeaf’s set of tools. Yes, you can create some great reports and measure incremental improvements, but the most powerful piece is understanding the customer experience. This puts a real story behind the numbers and empowers innovation. Being able to drill into individual sessions based on abandonments, voice-of-customer, time-to-complete, customer-struggle, etc. moves it from numbers on a report to a learning experience. My hope here is that eventually this company I am working with will see Tealeaf as an accelerator of innovation and not just another reporting tool. That way, turning that cruise liner of a culture doesn’t need to happen. Innovation can move forward, accelerated by customer experience learning.
Posted on April 19th, 2011 No comments
Recently I was working with a large web-based company using Tealeaf CEM tools and happened upon an issue/opportunity that would save the client double-digit millions of dollars. Having worked at Omniture as a consultant and at HP as a web analyst, I had to think back on whether I would have discovered this same issue with the other web analytics toolsets.
As I thought about it, the resounding response was, “Yes, yes I could have found that issue with a web analytics package.” The difference is the process, and how the process fits in with the client’s/company’s processes.
—Now I don’t want to make this into another “my tool is better than your tool” post. I promise not to do that, I just wanted to point out the difference in processes that could be used to find the same issue.—
I’m not going to be the guy that pretends there are hard lines between CEM tools and Web Analytics tools. Those lines are crossing every day. It’s just a big Venn diagram that keeps pushing in towards the center. I think most of us who use both tools realize that. The differences are the angles and the processes. At this point I highly respect the companies that use both a web analytics toolset and Tealeaf products. You can find different things with each tool, sometimes it is hard justifying both toolsets to the execs, but they both have their unique value propositions (which also happen to overlap more and more as years go on).
There are 2 types of web analytics issues/opportunities that can be found on a web site: your low hanging fruit and your high hanging fruit. When I was a consultant at Omniture, the head of consulting espoused finding the low hanging fruit: 1- because it was easy to do and 2- many times there is just as much value in the low hanging fruit as in the high hanging fruit. The problem I had with that was that I was always handed the high hanging fruit, and I had the wrong tools to get at it. Often the anomalies were handed my way because I was the guy who knew how the system worked. I either found the heart of the issue (through a lot of hard work) or failed because I just couldn’t get high enough up the tree, or broke a couple of branches in the process. It was a high-risk position with very little reward. I simply lacked the right tools. That is why it was so refreshing for me to discover Tealeaf. Tealeaf is the ladder that I can place against the fruit tree to get at that high hanging fruit that no one is touching in the web analytics world. The web analytics world can definitely see some juicy fruit high up there, but often just can’t reach it…
The same can be said about Tealeaf getting at the low hanging fruit in the web analytics world, it can be done, but you have to try it from the top of the ladder. Thus the Venn diagram analogy…
So here is the process that I went through to find the problem. I want to compare it to web analytics processes I would expect to see from two different types of companies:
1. A large company that has strict release dates and heavy control on client side scripting.
2. A mid-size company with virtually no restrictions to update the implementation.
First I discovered that a particular browser had lower conversion rates than other browsers. OK, this one is easy to find in both a web analytics tool and Tealeaf. So we know there is a problem.
1. Large Company: Easy to find
2. Mid-Size Company: Easy to find
Now I need to know if this is related to a specific checkout process. Easy to do in Tealeaf, just add each checkout process as its own event (takes minutes) and let the data chug.
1. Large Company: Hopefully separating out varying checkout processes was thought through. I’ll assume it was, so easy to do.
2. Mid-Size Company: Even if it wasn’t thought out it should be easy to have an engineer add in the tracking for each process. May take an hour, may take a day or two. Let the data chug.
It is related to a single checkout process. Replaying a few browser sessions I see a common occurrence, a message telling users to update the security in their browser. This is where the split often happens between CEM and web analytics.
1. Large Company: To find this issue there is a lot of digging that needs to happen. You can pull up the browser in question and walk through the process hoping you have the same issue, but often, if QA didn’t see it you won’t see it.
2. Mid-Size Company: Same as a large company.
Now I want to see how prevalent the security message is for that browser in the process. Maybe there is a common occurrence between these sessions that will help pinpoint the problem. I add an event to the security message (minutes to do) and let the data chug.
1. Large Company: If the security message was discovered but there was no way to see that it happened in the web analytics tools, then you need to update the implementation. If it requires server-side coding, you could be looking at 3 months until the next release date. If there is less concern around client-side scripting AND you can identify that the message was displayed by looking in the DOM, you could get at it a little quicker.
2. Mid-Size Company: If the security message was discovered and there was no way to see it in the analytics tools, just implement further tracking. May take an hour, may take a day or two. Let the data chug.
I was able to determine that the message appeared for N% of users on that browser. And the conversion rate for those that saw the message was rather low. Now replaying those specific sessions, I see a series of clicks and page views that lead up to the message. Now let me create a sequence event to track how often those series of events occur. Now let the data chug.
1. Large Company: Sequence events are nearly nonexistent in an out of the box web analytics tool. May be able to get at this with some advanced segmentation or data warehousing.
2. Mid-Size Company: Same
Using the sequence event, I was able to determine that 99% of the time it was this sequence that created the security message. “Bag it and tag it”! Time to pass on the data AND the replayable sessions to QA, Product Management and Engineering. It is then added to the list of bugs to fix.
1. Large Company: Finally able to determine the cause of the low conversion. Now, convincing Product Management and Engineering is a whole other ball of wax.
2. Mid-Size Company: Finally. Now get in a room with everyone and talk it through. They’ll see the issue easy enough. Added to the list of bugs to fix.
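The sequence event described above boils down to asking whether a session's ordered events contain a given subsequence, not necessarily adjacently. A minimal sketch of that check (the event names are invented, and this is my own illustration, not Tealeaf's eventing engine):

```python
def contains_sequence(session_events, pattern):
    """True if the events of `pattern` occur in order within the session,
    not necessarily adjacent to each other."""
    it = iter(session_events)
    # `step in it` consumes the iterator up to the match, so each
    # subsequent step must be found *after* the previous one.
    return all(step in it for step in pattern)

# Invented sessions: one leading to the security message, one not
bad = ["home", "cart", "edit_payment", "back", "edit_payment", "security_msg"]
ok = ["home", "cart", "edit_payment", "confirm"]
pattern = ["edit_payment", "back", "edit_payment"]
print(contains_sequence(bad, pattern), contains_sequence(ok, pattern))  # True False
```

Counting how many sessions match the pattern versus how many show the security message is what let me claim the 99% figure with confidence.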
The difference here: with CEM tools I was able to pull out the problem and pinpoint it in less than a day. By providing real evidence to the engineering group, the issue was taken seriously and the fix was added to the list.
With web analytics tools we may eventually get there, but it will take days to months to completely flesh out the problem. Convincing engineering will take some more time if you are in a large company.
Once again, this is not a “MY TOOL IS BETTER THAN YOUR TOOL” post. There are different processes that get you to the same solution. I just feel like I’m climbing a ladder with Tealeaf rather than struggling up branches to get to those high hanging fruits in the web analytics world.
Posted on January 26th, 2011 8 comments
So I just listened to the webinar from Peterson and Ensighten on Tag Management Systems. This has always been a hot topic in my career. At Omniture I was part of the original team to implement and identify directions for the “Universal Tag”. I use quotes because, as was pointed out in the webinar, it really wasn’t a “Universal Tag”; it was more of a helper tag to push out data to partners. It also came with unreasonable costs (at least in my opinion). Why would we charge for work that the browser was doing? Yes, the data that was already being collected through the Omniture implementation could be leveraged toward partners, but the cost was unreasonable and further entrenched the customer in the Omniture tagging architecture. I complained up the channels at Omniture, but the opportunity of leveraging the tags for further revenue streams was more appealing than building out an open architecture free for everyone to use.
Fast forward: I left Omniture to be an analyst at HP. Managing tags was a HUGE issue and we looked into Tealium for help. What really made sense to us at the time was an open architecture that enterprise online tools could turn to for help in easily collecting data on customers. A central source where industry-specific data was collected and then passed to any partner that wanted data to run their online tools. I had some contacts from the Omniture partner program and got some feedback on what would really work. We decided to build out our own architecture and make it open source so anyone could access it and partners could build out new functionality. I worked with Matt Wright (now the CTO at Keystone Solutions) to build out the architecture for an open source tag. Well, we had built the tag and were implementing it when I had an offer to make a lot of money and travel the world. So I left the #measure world for a year. During that time, Matt left HP for Keystone, he open-sourced our tag management architecture, and he has since inked a deal with Web Analytics Demystified (nice work, guys). I know that Keystone has been having some success with the open-source tag management system, and that is why I was surprised to hear Eric Peterson say that online managers should run away from open source tag management.
For me, keeping tag management open makes more sense than building a new industry around it, and the reason is the power that comes from being the center of data collection. Many online tools are vying to be the center of data collection for the web. It is an extremely strategic position to be in. Everything begins at the center of data collection and distribution. That is the one reason I think that keeping the architecture open makes sense. The one question I had during the webinar was how Ensighten planned on creating checks and balances so their position of power was not abused. I was also curious how they planned to work with online tools to implement new feature sets. Some kind of open architecture to develop on, reviewed by Ensighten developers and analysts, would be ideal. Maybe if Ensighten were a non-profit entity, I would worry less about where they might end up.
But as some of you know, I joined Tealeaf because of their data collection setup (easily collect data without bugging developers) and the potential for deep analysis of data (they collect everything). Just because Tealeaf has a different way to collect data does not mean I think that a TMS is moot. There will always be a need to access and distribute data directly from the browser (unless the request-response internet model ceases to exist). In fact, there is code that Tealeaf uses that would be nice to add to a TMS so data collection can be flipped on rather than reviewed, implemented, and tested by clients (ideally). So, yes, I am on board for an architecture that can more easily implement all these tags that online managers need to run their website. My only concern is the strategic position that the de facto TMS may find itself in. Let’s make sure no abuse comes of it. My vote will always be for open source or a non-profit entity because of that strategic position.
Posted on December 18th, 2010 No comments
My Omniture Story
I started at Omniture in 2000 as an engineer, then went to Implementation, then started the Engineering Services group, and then went into the Best Practices group. During my time at Omniture I was lucky to be put in a position that acted as the liaison between Professional Services and Engineering. Without all those great questions (mainly from clients through consultants) I wouldn’t have learned and thought through as much as I was able to. I really enjoyed seeking out solutions to out-of-the-ordinary reporting requests. Then in 2006, Omniture built out the Best Practices group and a lot of those questions from consultants ended up going to that group. I realized how much I missed getting those questions and seeking out technical answers. I talked to the group and the truth was, even though they were heavy on the business side, they lacked a lot of the technical insight to really solve some of the more advanced problems. So I was hired on with that group. While in BP I was asked to help move along the Genesis program, which frankly was dead in the water. I was lucky to have a background in implementation and BP, and we were able to get things rolling. During that time, we had requests from clients to integrate with a company called Tealeaf. So, jumping on a phone call, we talked about what Tealeaf does. Once I heard what they did, I was excited. This is a company that collects everything? And replays the user’s experiences? Holy Crap!
What I liked about Tealeaf
So, what I really liked about Tealeaf is their implementation process. While a little heavy up front, the system they have built to update their data collection and eventing is amazing. One of the frustrating things at Omniture was creating a solution that required additional data collection. Tealeaf is set up to change the implementation on a dime. No more waiting 2 or 3 months to get the clients’ developers to update the implementation. So while costly up front, it saves both time (crucial to strategy on getting data on customers) and money (no more paying for additional development to update the implementation). When the time was right I moved on from Omniture and let the 1½-year non-compete agreement expire. Now that I am at Tealeaf, it’s really exciting to see what the tool can do and where they are headed.
Where Tealeaf is Now
Tealeaf is King of the “Customer Experience Management” industry. Adobe (Omniture) has split themselves into two pieces from a consulting standpoint: Acquisition and Conversion. “Customer Experience Management” more closely resembles the Conversion realm: helping the customer get through web processes by discovering and resolving customer struggle. Tealeaf’s technology has always been very session oriented, and most reporting brings you back to specific sessions that can be replayed to VERY EASILY discover a problem. Tealeaf’s core competency is definitely the replay. After walking through some replays, my conclusion is that one Tealeaf replay is worth 100 reports, and 20 minutes of replay is worth 2 days of data mining. Amazing! Tealeaf has also perfected data collection through AJAX and RIA. The data collection from Flash and the subsequent replays are also amazing.
Where Tealeaf is Headed
I think there are a few things that will happen in the future that really drew me to Tealeaf. One is the drop in processing and storage costs, and just general processing and storage improvements. These improvements are a boon to any technology company, but especially to Tealeaf, which has tons of data and needs to quickly sort through and reprocess older data. Also, I think that traditional web analytics companies will experience a squeeze. They are experiencing a squeeze from the bottom side right now and it’s the elephant in the room at any web analytics company: Google Analytics. I think that as web analytics practitioners become more and more savvy, we will see a big demand for Tealeaf products and subsequent demands for improvements. This will culminate in a high-end offering and be the upper squeeze. Just conjecturing. Finally, I think that cloud computing and media offerings like Netflix are going to be a huge disruptor that Tealeaf is positioned to take advantage of. The old days of request and response between server and client may just disappear into server processing and displays over broadband. Because Tealeaf is attached to the data center, they are a shoo-in for this. So, some wild predictions, but that is what excites me most about Tealeaf. They have been around for a long time (since ’99) and I think their day in the sun is coming this next decade.
Posted on September 23rd, 2009 1 comment
So I have had some time to think about the Adobe acquisition of Omniture and wanted to relay some of my thoughts on the merger. I, like most, was extremely surprised at the move. This definitely feels like a good partnership, but an acquisition? It was a little hard for me to swallow. When I left Omniture almost one year ago, I was asked to drop everything and work on integrating metrics tracking into Flash Communication Server. I had scoped the work, but at the last moment decided I was ready to move on. Halloween was my last day and I got to see the execs hire little people as Munchkins for their Wizard of Oz theme. The execs did this every year, and it made me snicker (4 years total).
Like I said in previous posts, I was sad to leave, but Omniture wasn’t feeling as innovative as it had been (at least in the department I was in). Omniture does have a strong culture, but it was morphing into something more organized and clannish. For most at Omniture it didn’t matter, because at some point we were “destined to make it big”.
So, let’s speculate on what happened at Omniture. The one thing that gave me comfort at Omniture was that Josh James was shooting for the moon. He was looking to do $1 billion in revenue. I still remember at an Omniture Summit when he announced his vision and Eric Peterson, while explaining the importance of a vision, said soon afterward, “I don’t think that will happen, but at least Josh has a vision”. I don’t think Eric was invited back after that. Well, it looks like Eric was right, at least in part. The Omniture business unit may still achieve the billion mark, just not on its own now. Josh’s high expectations were comforting because they meant the company was going all the way. We were looking to be the next IBM or Oracle and it was part of our vision. I was on board 100 percent.
If they were shooting for the moon like this, why did they sell? Someone was feeling the pressure, and I don’t think it was the executive team. Those guys were hyper-focused on building the company to be the next Salesforce. I think the execs had confidence that the stock would rebound. Or, I have to question, did they? With Google and others putting extreme pressure on clients to switch, and with creeping costs, they may have been feeling the squeeze.
My guess is that the executive team still had the same vision. But with clients leaving for Google and Unica (maybe even Tealeaf) and with the difficulty in keeping costs low, the Omniture board voted to take a bird in the hand. My guess is there was an offer to enhance Adobe’s own position and the Omniture board took the bait. At this point the executive team is just making the best of the decision. That’s what I think happened, but I’ve been wrong before, sometimes very wrong.
I tend to agree with Omni_man Adam Greco about Omniture’s ability to integrate products. They did have a hard time focusing, but I think that comes from the change in their culture. Innovators became gun-shy because failure was getting high visibility from an organized-clannish culture. The old innovative culture just plowed through the failures to create some great products. Now there is a lot of finger pointing and power plays. I was very sad to see the change, but I do feel it was the fault of execs who hired management from large companies who were very good at being “organized”. The product and the digital marketing industry are not ready for that type of management. Innovation should have been the focus; it still should be the focus. Rather than hiring externally, Omniture could have promoted internal innovators and paid for them to get their MBAs. Innovators like Richard Zinn, Josh Ezro, Chris Error and Catherine Wong should have been given that opportunity. Catherine is now a VP over integrations, which is good, but who let Richard Zinn walk away? Richard’s combination of innovative spirit and ambition would have made for a superbly innovative leader with an MBA. If there was someone that fought for him to stay, that’s the person who knows the importance of these types of people. Chris Error would have been an innovative rock star with a few organizational behavior and project management classes.
I recently finished my MBA, and after taking a class on mergers and acquisitions I was surprised at some of the mistakes that Omniture made, specifically with Touch Clarity. It was a technology acquisition, and all the metrics that would indicate a successful technology acquisition were not ideal. Take a look at LinkedIn and the talent exodus is definitely not a good sign. A technology acquisition is all about the people. Keeping the people and allowing them to innovate is the big key. Omniture learned from this acquisition that due diligence could have been better and pre-acquisition integration work would have helped, or at least given some warning signs. After the Touch Clarity and Instadia acquisitions, Omniture created task forces to work on the organizational and cultural integration of acquisitions (e.g., WebSideStory). But when it comes to even more horizontal acquisitions like Offermatica, product integrations are a little more difficult. It would have been a breeze 3-5 years ago, but the culture change from innovative to organized-clannish made the innovators gun-shy and the bureaucracy was tiring. This is only the fault of management and not the fault of the innovators themselves. Offermatica is a technology acquisition where hopefully the technical innovators stay. As for Adobe, Omniture is also a technology acquisition; don’t forget the innovators.
As far as the product synergies, I think most everyone in other blogs has touched on them. One of the value propositions I was working on when I left was automatic tracking of videos served by Flash Communication Server. Obviously, with the convergence of traditional media and the web, this is a great move. They could easily unseat Nielsen for media tracking and push Flash as a standard for media delivery. The other tracking pieces are important, but the shot at standardized media delivery and tracking for top media companies is huge.
Overall, I wasn’t surprised to see this move by Adobe, but I was surprised by Omniture’s acceptance. The exec team was focused on making the company into the next Oracle. But with pressures from competitors, the economy, and cost structures my guess is the board voted to sell. I have heard that Adobe is even more business focused (organized) than Omniture. My only suggestion to Adobe is to find those innovators at Omniture and treat them VERY VERY well. Empower them and let them try things that may fail, because when there are failures, that is when they are at their best! (See my article on innovation and failures).
Posted on July 28th, 2009 No comments
Recently there was a post on the Web Analytics Demystified users group about having a mistake in your analysis. This got me thinking about some of the classes I took recently around innovation and strategy (I recently finished up my MBA). The classes always pointed to the same thing: successful companies are built to fail. And the model of being built for failure is becoming somewhat of a disruptor because it nurtures innovation. Constantly staying ahead requires that you try different things and measure the success.
My last post was about Omniture and some of the concerns I had about the change from an innovative culture to a more organized and clannish one. I thought about how this happened, or how it was allowed to happen, and it seems like a pretty common occurrence in any successful organization. Think about it: Omniture made some ground by being more innovative than Webtrends and other web analytics companies (well, maybe not Visual Sciences, but Visual didn’t get the marketing/sales piece). Having some success, the executive team started looking around for experienced management that had success. They looked for the people that really did well at making money from existing products. When you move those people into another company with a successful product, they will also be successful. Why? Because they organize and market things very well.
Well, put them in a company where there is great uncertainty and that is where they have a tough time. Using the same set of tools that they have developed doesn’t work because it stifles innovation. You start to see huge planning cycles for products that may or may not have a market. A lot more money is spent to go after a market that may or may not want the product or the market may not even exist. The cool thing about being innovative is that you can try many things at a low cost and see which ones find a market. I was sad to see Omniture lose some of the agility it once had, but maybe Omniture has found its market, and they are ready to make the switch to being more organized. But if there is still uncertainty about where things are going with web analytics and web optimization they should focus on management schooled in innovation.
So, back to planning to fail. Planning to fail means that you try things that you think will be successful, or at least have a chance at success, and measure the success of the changes. The culture that creates innovation is the one that says it is “OK to fail”. But with that failure they build in a way to fail safely. That is the needed piece: designing an architecture that allows for safely failing. This way many things can be tried and a fraction will end up being a bull’s-eye.
I probably watched the 60 Minutes video on IDEO a dozen times in different classes. There probably isn’t a better company than IDEO to get the point across about innovation. They basically get a bunch of people together from different backgrounds and allow them to brainstorm out a product for a market.
IDEO Innovation Techniques:
Different individuals’ backgrounds create more ideas.
- They don’t hire the “you are like me” people. They look for diversity to engender different ideas.
There is no bad idea.
- An idea should never be shot down. In fact the craziest ideas should be explored because there may be something there that has the seeds of innovation.
Prototype quickly.
- Once they have a few ideas they quickly create a prototype to see something tangible and get feedback from customers. They have a machine shop on premises to rapidly create a prototype for their ideas.
Get out in the real world.
- They do a lot of customer research by leaving the building and interacting with potential customers. This includes showing different prototypes.
Even though they take these steps to be successful, still only a fraction of their ideas succeed, but the point is, they have successful ideas. Most companies see a need and, plain and simple, get lucky. If they are looking at a market in flux or a market with an uncertain future, these types of concepts should probably be used to keep innovating and stay ahead of any potential competition, especially competitors that compete on price.
But, back to my original thought on web analytics and web site optimization. We use web analytics to create actions that optimize the site, meaning we ‘help’ our potential customers better find our site and ‘help’ them convert once they click through. Doing any analysis without taking action is just silly, but sometimes that analysis is going to be dead wrong. As web analysts, though, we can’t spend our time second-guessing if we truly want to find those gold nuggets that really kick the site into overdrive. That is why changes to the site need to be made, but measured, and measured quickly. And sometimes you really don’t know the effects your changes may have had. So, beyond using a good web analytics tool, a tool like RUM or Tealeaf would be great for quickly understanding how users are reacting to the changes individually rather than in aggregate. It is like observing your customers’ actions at a store when they pick up the new product. Also, survey any users who may have been affected by the change to get the attitudinal data. Combining the attitudinal and the behavioral should give the picture of the effects of the change that was made.
If the change is wreaking havoc on the site, get it back to how it was before and analyze what happened. You may learn something about your customers you did not know before and you may have just come closer to the gold vein you have been mining for.
And just to show that this applies to the web, here is part of a manifesto from Avinash Kaushik, a guru in web analytics.
“I believe that God created the Internet so we could fail faster. In the offline world it is very expensive to experiment and test, the cost of failure is very high. As a result we don’t take risks. We keep doing what we think ‘works’, until the day we go bankrupt. The web changes that. You can take dramatic risks, at very low costs and learn big. Your website is nothing but a machine built to make you smart by taking lots of risks. Why should you tolerate ideas getting killed on conference room tables or by your HiPPO’s? Why accept opinions when you can convert them into hypothesis and get them validated for cheap and quickly? Why not let your customers actively be a part of helping you create customer experiences that deliver value to them AND to you? The cost of taking risk on the web is low. You can try an idea. As soon as it is live data starts following it. If the idea is a total loser then kill it fast, does not have to cost you a ton of money. What is more likely is that you will find winners that you had never imagined. Give it a try. Fail faster.”