Dan Ariely (http://web.mit.edu/ariely/www/MIT/) notes how little experimentation is done in the ‘marketing’ world. It’s worth reading this through and then considering what we do…
“Much of the industry prides itself on the insightful intuition of strategists and creatives – and of course Dan spends much time warning about the fallibility of intuition. The other half of the industry heralds the availability of data and metrics from digital or interactive work, or the foundation of decisions in primary and secondary research.
Dan says there are a couple of problems here. First, research is often conducted with methods that are themselves based on intuitions – it’s assumed that focus groups provide direct insight into consumer preferences, when they are in fact very removed from the real context in which people make decisions. More importantly, Dan stresses that there’s a difference between data and experimentation – and the two are often confused. The availability of data is helpful, and can guide hypotheses, but it’s not the same as a robust method of experimentation in which researchers have a crystallized idea of exactly which limited variables they are testing.”
The point of crunching this huge quote is to flag up something that we often slip into conversations and contracts – the ‘pilot’ – when it comes to our projects.
I’d suggest we need to be a lot more positive (emotionally and certainly commercially) about the role a pilot stage (i.e. the research) plays in experience delivery. The point won’t be missed on you, the reader, but at present we don’t set out the parameters of what we’re testing in pilots (at least I’m not aware that we’ve done this recently, with the exception of Lexus Hybrid POS, the results of which were very useful).
So I raise it again and suggest that next time we put that throw-away line in a proposal, we ask the question: ‘which is more important – the data that led to the experience, or the experiment of it?’