Month: May 2008
Deep down, every single one of us is driven by self-interest. As much as we would like to think otherwise, that we can look beyond ourselves and do what is in the common good, the fact is, more often than not, we will do what is in our own self-interest. Or, at minimum, we will tilt the decisions we make to benefit that self-interest.
When deciding what should and shouldn’t go into a release, we try to look at market needs, the competition, the strategic direction of the company etc. etc. But when push comes to shove, and we have to make a hard call on including something or not, our subconscious will have a significant influence.
In a conversation about prioritizing some requirements, I had to make a hard choice between two important items of roughly equal effort. When another PM asked me why I chose one instead of the other, I said that if we implemented the one I selected, it would get a lot of people off my butt. And that I was tired of hearing people complain about the issue. The other one wasn’t causing the same stink to be raised.
As I said it, it surprised me somewhat. I told the other PM, “I’m being honest here.”
I like to think that I focus on what will help drive revenue, better position us against competitors, help strengthen relationships with key strategic partners and all that good stuff. But seriously, when a hard decision needed to be made, my reasons were none of those.
It’s not as if that functionality wasn’t needed or that it wasn’t something we should add to the product. Don’t get me wrong. People were complaining about it because it was a gap in the product and customers needed it. But the main reason for my choice was, first and foremost, self-interest.
So let me ask you a question. Have you ever been in that situation and made a decision for similar reasons? In retrospect, any thoughts on whether that was the right decision?
Let’s start with a haiku:
Research a concept?
How accurate will it be?
Build something, then see.
I’ve written a couple of articles recently on the difficulties of developing successful products and innovating in large organizations. A lot of what I wrote had to do with research and the ROI of initiatives. And certainly for companies such as McDonald’s or Procter & Gamble, the cost of launching a new product is significant, and the cost of launching a failure is even greater.
But in the software world, launching a product or service is very different. One (or two or three) people can successfully create something, launch it on the web, get feedback quickly, modify the offering, get more feedback, modify the offering some more, grow the audience, get more feedback… you get the picture.
Pierre Omidyar started eBay (originally AuctionWeb) in his spare time when he was working at another software company in Silicon Valley.
And of course, Google started out as a Ph.D. research project for Larry and Sergey, with the idea being that pages that have more links pointing to them should be more relevant in a given set of search results.
While these are HUGELY successful companies, there are many smaller ones that followed the same pattern in their origins. i.e. build something simple, useful, cool or different, get people to use it, get feedback, build it out a bit more etc. etc.
There was little or no market research behind the founding of these companies. What research could actually have been done? Who could have predicted the success of any of these companies early on? No one.
On a slightly smaller scale, and a more recent success, here’s an interesting example of a 1-person (OK, now it’s 2-person) company, started in 2003, that is generating about $10,000,000 in revenue per year. The NYTimes article provides a bit more detail. The article title alone, “From 10 hours a week, 10 million per year,” says a lot. The site was started as a way for its founder, Markus Frind, to teach himself ASP.NET.
The point of all these examples, and there are many more like them in the software world, is that given the very low barrier to entry, a “good idea” can be quickly developed and iterated on, and become a successful endeavour, without a lot of research, market segmentation, problem identification, persona development etc.
Just take your idea, write some code, put it out there and see where it takes you.
Is this a valid model for developing software, particularly in the context of the WWW? I’d like to hear your thoughts.
Phil had summed up his Chasing Outcomes post with the following line:
|At the end of the day, it’s simple. Create a product or service that your buyers want to buy and the rest takes care of itself.|
I argued that things don’t take care of themselves and asked for other people’s comments.
Scott took my comment that product success is not easy, analysed Phil’s post, and then provided guidance on how to increase the likelihood of product success. He states that 7 steps should be followed. These are:
Note: Step 7 should occasionally replace step 6, so that you stay focused on your market, and not just an out-of-date snapshot of what used to be important to your customers.
This is a good list that describes a good standard process for developing products. It basically says one should do their homework before implementing solutions to problems, and then iterate on the solution based on feedback from the market.
And while this should reduce the chances of failure, and help develop something that people want, it doesn’t mean that there will be product success or that anything will take care of itself.
There are several reasons for this.
First, even if you follow the steps above, none will be done perfectly, and there will be aspects of the market, the problem, your solution etc. that are not fully taken into account or addressed. It could be because there are things you cannot know for certain, because your resources or budget don’t permit more accurate research, or for other reasons. Regardless, it goes without saying that your research and your efforts will not be perfect at each step. Think of these imperfections as “error bars” associated with the decisions made at each step.
And remember, decisions are what we make when we have imperfect information. When we have ALL the information needed, we are no longer making a decision, but a calculation. Now, when do we have ALL the data? Well, almost never, or only once the market or opportunity has passed us by. And keep in mind that the error bars from these decisions can compound over the product development process.
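As a rough illustration of that compounding (the ±10% per-step uncertainty and the seven steps are assumed numbers for the sketch, not figures from Scott’s post), here is the back-of-the-envelope arithmetic:

```python
# Hypothetical sketch: if each of N decision steps carries an independent
# relative uncertainty of +/-e, the worst-case compounded error across the
# whole process grows multiplicatively: roughly (1 + e)^N - 1.
per_step_error = 0.10   # assumed +/-10% "error bar" at each step
num_steps = 7           # e.g. a seven-step process

compounded = (1 + per_step_error) ** num_steps - 1
print(f"Worst-case compounded error after {num_steps} steps: {compounded:.0%}")
```

Even a modest error bar at each individual step, multiplied across every step, can leave a surprisingly large gap between what you think the market needs and what it actually needs.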
In addition, the marketplace is not static. Customer needs or preferences may change. New competitors may appear. Economic conditions may change. The market may not develop as expected. Thus the solution you provide may no longer address the market need the way you thought. Now step 7 wisely says you should revisit step 1 and step 2.
While this is going to help reduce some of the error in the solution, it will not eliminate it. Thus no matter how hard you try, there will always be a delta between what you offer and what the market needs. That delta could be small or it could be large, but it will always be there.
Remember McPizza? McDonald’s can be accused of a lot of things, but not of rushing this product to market. They did a lot of research and test marketing. They had a well-defined target audience. They advertised and promoted heavily. It was hard not to know about their pizza.
Yet it failed miserably. Why? In short, people didn’t associate McDonald’s with pizza, and they could get better pizza elsewhere. Now why didn’t any of the market research turn up this insight before McDonald’s invested untold millions in pizza ovens and marketing for the nationwide launch?
Here’s a great recent article from the LA Times about Procter & Gamble — the company where modern brand management was developed. While part of the article refers to a new book by P&G Chairman and CEO A.G. Lafley, it contains a great line that is relevant to this topic.
|The book’s strongest message comes not from successes such as Olay, Febreze and Swiffer but from failures, such as Fit, a fruit and vegetable sterilizing wash launched in 2000 and sold three years later at a loss of $50 million.
After all the changes, the company still has a success rate of 60% on innovations. But that “is as high as P&G wants to go. Any higher would be playing it too safe,” the authors say.
Even at P&G, a failure rate of 40% is viewed as acceptable. Later in the article:
|He also insists on disciplined control of innovation that weeds out failures before they become painfully big. Each innovation team must, from the start, identify the issue that represents the biggest threat to its success and explain how to deal with it.
There will be plenty of failure to go round — and also, probably, innumerable new versions of Whitestrips.
So, what can be concluded from this? Bringing successful new products to market, even for large, well-established companies, takes a real, disciplined process. A disciplined approach is no guarantee of success. In fact, innovation has to be measured and carefully managed. And for those products that make it to market but don’t succeed, there needs to be enough discipline in the company to know when to kill them or sell them off, and to focus efforts and resources on more lucrative or potentially rewarding ventures.
P.S. There is a good article called “Why Startups Fail” that covers some of the things I’ve talked about here.
This has nothing to do with anything in product management, but maybe something to do with how well you get along with your co-workers, and your emotional intelligence. I just thought it was cool. Here’s a site that allows you to rate your own personality, and have your friends rate you. Then compare the two. Based on my articles, what would you say about me? (This is Alan writing. To be fair, Saeed has been writing most of the articles on our blog, so maybe our blog is weighted toward his personality. Maybe Saeed will also post a “come and guess me” link too.)
Being in sales is an interesting experience. So much of it is about self-perception and the perception of others. I have worked with sales people my whole career, and have often shaken my head at some of their reactionary behavior, but having been in the seat myself for over a year now, I can see how the mind starts to play tricks on a person.
Here’s a big one to get over: Sales is not the same as customer service. (Corollary: The customer is not always right.)
Quite often the customer calls up looking for information. They’re researching the market, comparing prices, educating themselves. That is fine and normal, but you must realize that when they are talking with you, you are funding their education. What are you getting in return?
Many of my friends in sales are hyper-responsive to customer requests. “The customer is always right …” … “The customer asked for X, I am going to give them X …”, and the good ones will even throw a popular PM-line right back at you: “Your opinion, while interesting, is irrelevant!” (you can buy the mug here).
There is no question that the customer’s goals and problems need to be understood. Of course they do. No one buys anything without a problem or a goal, even if that goal or problem is as basic as satisfying the discomfort created by envying the product.
But it is an entirely different thing to say we give customers exactly what they ask for… we don’t! And neither do we give it to them for free.
In the next few posts I’ll dissect this issue further. Stay tuned.