In my last post I discussed aspects of the execution gap at most companies when it comes to innovation. At the end, I posed a couple of thought questions, because these were questions I myself had been thinking about for some time, albeit in other contexts. In particular, I’ve wondered a lot about how companies can better define an open innovation strategy.
Open Innovation Fallacies
I’ve noticed a lot of confusion around the topic and meaning of open innovation. The term has become misconstrued over time, often mistaken as simply synonymous with crowdsourcing. Open innovation is actually a two-way street, with ideas flowing both into and out of a firm’s boundaries. Open innovation is characterized by that permeability, not by the directionality.
Another mistake is to conclude that the increased flow of ideas implies a loss of strategic focus. To the contrary, open innovation should allow for greater focus. Rather than waste resources trying to squeeze a square peg into a round hole, innovative ideas that do not fit well with a firm’s strategy or business model can be made available to others to monetize. Similarly, firms can focus limited resources on the most important, differentiating R&D while opening themselves up to outside ideas in areas further from their core competitive advantage.
Most importantly, open innovation is not an innovation panacea. It should not entirely supplant other sources of innovation – specifically, innovation driven by traditional R&D. Open innovation, in all its various forms (e.g. crowdsourcing, M&A, joint ventures, innovation contests, etc.), should be used in combination with R&D investing to reach a firm’s innovation goals.
Each of these fallacies leads to a misapplication of the principles of open innovation. So how do you know what innovation projects you should be opening up to others outside your firm and what you should continue to protect and incubate inside a walled garden?
The Strategic Openness Matrix
That is the essential question I have tried to answer with an initial version (call this v0.1) of what I am referring to as the Strategic Openness Matrix (please, help me come up with a better name!). I’ve really just repurposed the House of Quality (HOQ) matrix from Quality Function Deployment (QFD) to build a tool that will help companies craft an open innovation strategy. (Note: I’ll assume some basic familiarity with the HOQ matrix in the explanation that follows. A good primer can be found here.)
To build this matrix, I’ve started with a strategy canvas from Blue Ocean Strategy rather than the voice-of-the-customer as I would with the HOQ matrix (go here to learn more about the strategy canvas itself). For expediency, I used a canvas that I created for a previous discussion of online music services as an illustrative example. Each competitive factor in the canvas, along with its corresponding value, is listed on the left-hand side along the y-axis. For reference, I’ve also included some generic competitive benchmarks for the same factors (don’t get hung up on the values; they’re all just illustrative).
Relationships between Competitive Factors and Research
At the top, along the x-axis, are all the top level areas of research – what I’m calling L1 or Level 1 research areas. As you’ll see in a bit, each top level area of research can be broken down into a number of more specific, component research areas – Level 2 research and then potentially further into Level 3 research and so on to the desired level of granularity (much like a process decomposition). If the competitive factors are the “what’s” from the HOQ matrix, these are the “how’s” from the HOQ.
I’ve also indicated at the top an estimate of the firm’s current research capabilities in a given area. This could be a somewhat objective estimate – based on access to specialized lab equipment, for instance – or a highly subjective one. With only three possible values – leading, lagging, and pacing – even a guesstimate will suffice.
For each competitive factor, we rate its relationship to each research area on a scale of 1-10, to be consistent with the scale used in the strategy canvas. The scale is really somewhat arbitrary, since what will ultimately matter are the relative scores calculated from these numbers, not the absolute scores. In any event, these relationship values are entered at the intersection of the rows and columns, with 1 being a weak relationship and 10 being a strong one. (Note: a zero simply indicates no relationship and contributes nothing to the score.)
The Research Importance Score
For each research area, I calculated a research importance score by first multiplying the strategic importance rating by the relationship rating for each competitive factor (so 8, the strategic importance rating for Undirected Listening, multiplied by 10, the relationship score with Automated Song Selection Algorithm). I then summed all the values in a given column and divided by 10 to get my research importance score (dividing by 10 is, again, fine because the scale is arbitrary here). This follows the same procedure as the HOQ matrix.
To illustrate with an example, for the first L1 Research Area on the left, Automated Song Selection Algorithm, the math comes out to (8×10 + 3×3 + 6×0 + 5×6 + 3×6 + 9×6 + 7×8)/10 = 24.7. In the first term of the sum, 8 is the strategic importance rating for Undirected Listening and 10 is its relationship score with Automated Song Selection Algorithm; 24.7 is the resulting research importance score.
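The column calculation above can be sketched in a few lines of Python. This is just an illustrative re-implementation of the spreadsheet math, using the hypothetical values from the worked example, not the actual tool:

```python
# Research importance score for one L1 research area.
# strategic_importance: one rating per competitive factor (from the canvas).
# relationship: that factor's relationship rating with this research area.
strategic_importance = [8, 3, 6, 5, 3, 9, 7]
relationship = [10, 3, 0, 6, 6, 6, 8]

# Multiply pairwise, sum the column, divide by 10 (scale is arbitrary).
score = sum(si * r for si, r in zip(strategic_importance, relationship)) / 10
print(score)  # 24.7
```

Repeating this for every column yields one importance score per research area.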
If the math is confusing, try looking at the spreadsheet by clicking on the image and downloading it from Box. The logic is that the more important the competitive factor and the stronger the relationship, the higher the research importance score should be. The more relevance a given area of research has across the various competitive factors, the higher the score should be as well. I’ll explain how the score is used shortly, but for now, just think of it as a proxy for the strategic importance of an area of research, as the name implies.
Synergies and Trade-offs
There is one more relationship to consider, which I haven’t mentioned yet: the relationship between research areas. This is the “roof” in the HOQ matrix. The correlation adjustment is intended to increase or decrease the research importance score to account for synergies and trade-offs.
The math may get confusing again here. For a given research area, I assigned a correlation coefficient with each of the other research areas (these appear in a separate table on a different tab, “L1 Research Correlations”) and multiplied each correlation coefficient by the corresponding research importance score for that other research area.
Confused? Here’s the logic. The higher the positive correlation a research area has with other research areas, the higher the adjustment should be. If a research area creates a lot of good synergies, it’s going to be more important to the firm. If there’s a negative correlation – a trade-off with another research area – that results in a negative adjustment.
Adding the research importance score and the correlation adjustment, I can now calculate a net score for each research area. Next, I stack rank these net scores – the higher the score, the higher the ranking – and identify the top, 2nd, 3rd, and bottom quartiles (these cells are hidden in the spreadsheet I created).
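The correlation adjustment and net score can be sketched like this. The area names, importance scores, and correlation values below are all made up for illustration; only the mechanics mirror the description above:

```python
# Hypothetical importance scores for three L1 research areas.
importance = {"A": 24.7, "B": 18.0, "C": 12.5}

# Symmetric correlation coefficients between areas (trade-offs are negative),
# analogous to the "L1 Research Correlations" tab.
correlations = {("A", "B"): 0.5, ("A", "C"): -0.2, ("B", "C"): 0.0}

def corr(x, y):
    """Look up a correlation in either key order; default to no correlation."""
    return correlations.get((x, y)) or correlations.get((y, x)) or 0.0

# Net score = own importance score + sum of (correlation x other area's score).
net = {
    area: score + sum(corr(area, other) * importance[other]
                      for other in importance if other != area)
    for area, score in importance.items()
}

# Stack rank by net score, highest first (quartiles would be cut from this).
ranking = sorted(net, key=net.get, reverse=True)
```

Here area A nets 24.7 + 0.5×18.0 − 0.2×12.5 = 31.2, so its synergy with B lifts it while its trade-off with C drags it down.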
The Strategic Recommendations
The final step is to apply some logic to recommend how the firm should handle a given area of research. The logic I’ve used probably needs further tuning but works fine for a proof of concept. It looks at the relative net research importance score (specifically, where it falls in the stack ranking and quartiles) and at the firm’s current research capabilities in that area. Here it is laid out in plain-ish English:
- Lagging and relatively unimportant, Partner Openly to innovate
- Lagging and relatively important, Over Invest (internally) or Acquire to close the gap
- Pacing and relatively unimportant, Partner Strategically to amplify minimal internal investments
- Pacing and relatively important, continue to Invest internally to keep from falling behind but don’t share too much, which might allow others who are “drafting” behind your research to actually jump ahead
- Leading and relatively unimportant, consider additional ways to Monetize the research externally (generating more funds for internal innovation) or Reallocate some funds to more important areas
- Leading and relatively important, Protect the R&D investments supporting your competitive advantage
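The six rules above amount to a small lookup table keyed on capability and relative importance. In this sketch I treat “relatively important” as landing in the top two quartiles; that cutoff is my assumption, since the exact boundary is left to the stack ranking:

```python
def recommend(capability: str, quartile: int) -> str:
    """Map a capability level and quartile to a recommended action.

    capability: 'lagging', 'pacing', or 'leading'
    quartile: 1 (top) through 4 (bottom) from the net-score stack ranking
    """
    important = quartile <= 2  # assumption: top half counts as important
    rules = {
        ("lagging", False): "Partner Openly",
        ("lagging", True):  "Over Invest or Acquire",
        ("pacing",  False): "Partner Strategically",
        ("pacing",  True):  "Invest internally (but don't overshare)",
        ("leading", False): "Monetize or Reallocate",
        ("leading", True):  "Protect",
    }
    return rules[(capability, important)]

print(recommend("leading", 4))  # Monetize or Reallocate
```

A real implementation would presumably blend in more inputs, but even this crude table reproduces the non-obvious recommendations discussed next.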
These recommended actions are not strict rules but rather suggestive indicators – guidance for management to consider along with other perspectives. I was pleasantly surprised by some of the recommendations the logic actually makes. For instance, in this example, I placed a high priority on Sociality (see the original blog post to understand more about what Sociality means). The research most strongly related to Sociality is Synchronous Sharing and Communication. You might think this would be an area to Protect if the firm is Leading, or an area to Over Invest/Acquire if it is Lagging, but that’s not the recommendation the logic produces when you consider all the other competitive factors and research areas.
Synchronous Sharing and Communication gets a relatively lower importance score because it has a weaker relationship to the other competitive factors and only modest correlation with the other research areas. If the firm is leading, it might consider alternate monetization options (e.g., firms outside of music that could use the technology to add communications features to their products) or reallocate resources to other, more deserving areas. If the firm is lagging, it should consider partnering openly – perhaps integrating with instant messenger or VOIP partners – rather than going it alone, because closing the R&D capabilities gap would be too large a drain on resources.
This is precisely the kind of thought-provoking result a tool or framework should produce, overriding our mental biases and forcing us to think differently! Another interesting result from the tool is that if everything is lagging, the resulting recommendation is a very focused innovation strategy: put everything into the few most important research areas, and partner openly with others to close the gap for all the rest.
I’ve begun to build out the L2 section of the tool as well. It normalizes the net research importance scores back onto a scale of 1 to 10. From there, it is more of the same, except starting with the L1 research areas on the left-hand y-axis. The resulting importance scores will allow the higher-level recommendations to be overridden for more precise handling of the lower-level innovation projects. Different logic and rules may also need to be developed and applied as you get more granular.
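The post doesn’t specify how the normalization is done, so here is one plausible sketch using min-max scaling onto [1, 10]; the spreadsheet may well use a different method:

```python
def normalize(scores):
    """Rescale a list of net scores linearly onto the range 1-10."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [5.5] * len(scores)  # degenerate case: all scores equal
    return [1 + 9 * (s - lo) / (hi - lo) for s in scores]

# Hypothetical net scores carried down from the L1 calculation.
print(normalize([31.2, 30.35, 7.56]))
```

The normalized values then serve as the strategic importance ratings on the y-axis of the L2 matrix, and the whole procedure repeats one level down.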
I am really excited about this tool (or does this qualify as a framework?), but it is still just v0.1. It can only get better with feedback from others, so, following the example of Alex Osterwalder and the business model canvas, I’m making it available here under a Creative Commons license. Please cite this blog if you use it, and make any derivative works available under an equivalent Creative Commons license. Share this post on Twitter or Facebook, and please post comments with ways you think it might be improved.
Strategic Openness Matrix by openopine.wordpress.com is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.