December 10, 2012
I recently attended a meeting where we were going to be taught the secrets of becoming a “7-figure” professional speaker. That is, we would learn how to make $1,000,000 a year. The presenter is part of an elite group of speakers who earn at least this much every year. His presentation was based on the lessons extracted from this successful group.
In the audience, listening to him, were about 60 professional speakers, ranging in experience from novices to highly accomplished individuals.
He shared ideas like, “Be controversial; say things that others are not saying or are afraid to say,” or “Don’t just speak; have a process.”
Listening to these words of wisdom, I have to say what others were not saying or were afraid to say: “His premise on how to be successful is flawed.”
The truth is, he has no idea how he really got to where he is. He only thinks he does. And no, he was not intentionally being deceitful. Not at all. He was just not applying critical thinking to the process.
Here’s the mistaken logic of so many people…
If we study a lot of successful people (companies) we will know what to do in order to replicate their success.
This is faulty logic for so many reasons.
One reason is “the undersampling of failure.”
When trying to learn what to do, we study those who are successful. But we rarely study those who tried the same things yet were not successful in achieving the same outcome.
I bet that if we studied the speakers who make more than a million dollars a year, we would find that all of them shower every day. We could therefore conclude that showering is the key to making a lot of money. Although I suspect that never showering would indeed impact your success, I do not believe that showering will make you successful. Why? Because there are many people who shower every day yet are not as successful. This is the undersampling of failure.
For every million-dollar speaker who “is controversial and says what others are afraid to say,” there are 100 who did exactly that yet never became successful. But we never study the people who never made it, because we don’t know who they are (unless they were colossal failures). Their “failures” were not sampled, and therefore we wrongly conclude that this attribute leads to success.
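The undersampling of failure can be sketched with a toy simulation (all numbers are made up for illustration): give 10,000 hypothetical speakers a couple of traits that have nothing to do with success, award success at random, and then study only the winners, as the “best practices” approach does.

```python
import random

random.seed(42)

# Hypothetical population: everyone showers daily, about half are
# "controversial," and success is awarded purely at random (1% chance),
# independent of either trait.
speakers = [
    {
        "showers_daily": True,
        "controversial": random.random() < 0.5,
        "successful": random.random() < 0.01,
    }
    for _ in range(10_000)
]

# Study only the winners, as the "learn from the successful" method does.
winners = [s for s in speakers if s["successful"]]
pct_shower = sum(s["showers_daily"] for s in winners) / len(winners)
pct_controversial = sum(s["controversial"] for s in winners) / len(winners)

print(f"Winners who shower daily:     {pct_shower:.0%}")        # 100%
print(f"Winners who are controversial: {pct_controversial:.0%}")  # close to 50%

# The unsampled failures tell the real story: the same traits appear
# just as often among the speakers who never made it.
losers = [s for s in speakers if not s["successful"]]
pct_controversial_losers = sum(s["controversial"] for s in losers) / len(losers)
print(f"Losers who are controversial:  {pct_controversial_losers:.0%}")
```

Every winner showers and roughly half are controversial, yet neither trait caused anything; only by also sampling the failures, where the traits occur at the same rates, does that become visible.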
My latest book is called “Best Practices Are Stupid.” The undersampling of failure is one of three reasons why it is dangerous to blindly follow what others do.
Any time you receive advice, be skeptical. Any time you read a book, don’t follow blindly. Any time you study a best practice, carefully consider if it is right for you and if it truly will give you the results you want.
P.S. My hypothesis of why he was really successful will be shared in a later blog entry (and he confirmed it without coming out and directly saying it). It has to do with how to “manufacture serendipity” as a means of creating non-linear success. And to be fair, listening to the speaker, I did gather some nice tactics for improving my business that I will be implementing. I only questioned his premise on how to be successful.
December 4, 2012
About 10 years ago, I developed the Personality Poker® system.
Since then, I gathered a mountain of anecdotal evidence supporting its value. But one question remained in my mind: Is Personality Poker truly valid? That is, do the words actually measure what they are supposed to measure? Or is it just a fun game?
To help assess the situation, several years ago I decided to hire an expert on psychological testing. The perfect person for the job was Michael Wiederman, professor of psychology at Columbia College in South Carolina.
I mentioned to Michael that people found Personality Poker to be extremely simple and valuable. It provided deep insights in a short period of time, while being very easy and intuitive to play.
The question I had was, “Could something so simple also be scientifically valid?”
Michael’s response: “Simple is good, as long as it’s useful.” And just because something is valid does not mean it is useful.
He went on to recall a study published several years ago in which a battery of widely used depression tests and methods were administered to a group of people, along with some simple questions. Although the tests administered were complex and supposedly scientifically validated, the most accurate predictor of depression was the single question: “Are you depressed?”
So much for scientific validity!
Complexity does not equate to value. And scientific validation does not imply usefulness.
Only the real world matters. And sometimes the simplest solutions are the most useful.
Innovators need to stop relying on spreadsheets, statistics, lab tests, focus groups, surveys, and other attempts to “validate” a solution.
What works in the “laboratory” may not be the best solution for the real world.
Sometimes you need to get out and see what really works. See what solves a customer’s pain. Discover latent and hidden needs. And keep it simple.
As Antoine de Saint-Exupery, author of The Little Prince, once said, “Perfection is finally attained not when there is no longer anything to add but when there is no longer anything to take away.”
Our left-brained society seems to value things that are complex and “proven.”
But never confuse “validity” with usefulness. And never think that a complex solution is better than a simple one.
June 18, 2012
A potential client asked, “What is the best way to create a culture of innovation?”
My response: “Stop calling it innovation!”
Innovation has become the word du jour. Is it important? Of course. But the term has been used and abused by so many people that it means nothing. I am seeing a backlash against the word. Inside many organizations, there are antibodies waiting to kill anything called “innovation.”
If you want to have a chance at innovating, call it something else.
Although it is an old-fashioned term, I like “problem solving.” It calls the work what it really is.
Yes, maybe the problems when innovating seem bigger, like business model changes or the creation of new product lines. But you are still solving a problem (ok you can call it an “opportunity” if you prefer).
If you have an innovative idea and it doesn’t solve a problem, it will not be valuable.* (see footnote)
When starting an “innovation” program (excuse my perpetuation of the word), I ask the leaders of the organization (top executives, P&L owners, Business Unit/Lines of Business leads) to give me their three most important issues; ones that if solved would be incredibly valuable. These problems/opportunities could be related to improving productivity, developing new service offerings, stimulating sales, addressing changing market conditions, or dealing with commoditization. We look for leverage points; things that will create exponential value.
Everything ties back to an issue, challenge, problem, or opportunity.
Once the challenge is identified, we use the best method – brainstorming, skunkworks, open innovation, outsourcing, alliances, etc. – to find solutions.
After doing this with the senior leaders, we can then engage the entire organization in identifying and solving pressing challenges. This starts the cycle.
Every organization wants to know if it and its ideas are “innovative enough.” Who cares? The more important question is, “Do you know which problems, if solved, would create substantial value for your organization and your customers?”
There are many companies that produce unsexy products with few “breakthrough” technologies (they are not considered “cool” like Apple, 3M or Google). However, these organizations adapt and grow at incredibly fast rates. Does it matter that others don’t consider them to be innovative?
Explosive and continued growth is the name of the game. By calling it innovation, you may in fact be killing what you hope to create.
Look for important problems to solve and then find the best means for sourcing solutions. This is what you really want.
* FOOTNOTE: Please note that this does not mean that the problems/opportunities need to be known or understood by consumers or others. Focus groups and surveys are poor ways to identify problems because they only tap into conscious beliefs. For more on this, read my tip, “Your Market Research Sucks,” in my Best Practices Are Stupid book.
March 26, 2012
We are constantly bombarded by expert advice from advertisements, books, magazines, TV and the Internet. But how much of this information is actually true? From my experience, there is reason to believe that little of it is accurate. People (often unknowingly) make claims that are exaggerated or in some unfortunate cases, blatant lies.
I remember giving a presentation to a group of eager individuals who were either launching or advancing their speaking careers. During our 90-minute discussion, I provided dozens of tips and techniques for growing their business.
At the end of the evening, one attendee asked, “What is the most important tip?” I thought about this for a minute and replied, “I don’t know.”
Although this answer may sound like a cop out, it is in fact the truth. No one really knows what made them successful. More importantly, they have no idea how others can replicate their success. They may be able to look at a series of events that led to a particular outcome, but most likely the “most important tip” is something completely different than what is seen on the surface.
Several years ago, I attended a “book marketing” conference. It was led by a well-known author who sold millions (and millions) of books. His promise was to share the steps and tools that made him successful so that others could replicate and reap the same rewards. Over the years, thousands of people have tried his “formula,” and as far as I can tell, no one has come even close to his level of success. And those achieving some modicum of success mainly did so by leveraging this author’s name and network.
I am not implying that these experts are misleading or malicious. Not at all. The issue lies in our inability to find the correct correlations between cause and effect. Too many hidden factors play a major role—ones that we might never consider or notice.
Many experts use anecdotal evidence to support their conclusions: It worked for me and a few of my buddies, so it should work for you. This isn’t the most sound reasoning. Maybe the expert’s 10 Steps to Financial Wealth were not the true causes of their success. Maybe success was coincidental. Without more data, it is impossible to know. If 100 people tried the same 10 steps and each got the same results, then you might be able to claim a correlation. While there may be wisdom in anecdotal evidence, you shouldn’t blindly accept it as the truth.
There are many harder-to-measure factors that often play a substantial role. Your attitude plays a larger part than you might think. Your Rolodex of contacts (for the younger readers, this is where old-timers stored their addresses before computers) can be a huge factor in the equation. Being in the right place at the right time has launched many businesses, including Microsoft. Or sometimes, plain old dumb luck is the real cause.
So, how can you separate the accurate from the invalid? One way is to understand the difference between causality, correlation and coincidence.
I recall a study that claimed, “Individuals with greater wealth are happier.” Assuming that this statement is true, it is a correlation. Wealth and happiness are related. However, after reading this, some immediately jump to the conclusion that “money makes people happy.” This statement is causality suggesting that money is the cause of people’s happiness. According to this study though, this is not true. The research indicated that money did not make people happier. Happy people attracted more wealth into their lives. Money is correlated to happiness but is not the cause of happiness.
Beyond causality, correlation and coincidence, there is another factor: conditions. Just because something works for one company does not necessarily mean it will work for yours, even if there truly is a cause and effect relationship.
December 14, 2011
One of my favorite topics is to discuss how breakthroughs are generated by looking for someone who has solved a similar problem in a different space.
Some examples I talk about in my “Best Practices Are Stupid” book are:
- A company developed a new type of whitening toothpaste by studying the way non-bleach laundry detergent works
- A gas pipeline “sealing” system was developed by studying the way the capillaries in the finger coagulate blood and heal themselves
- An office supply company found a way to get customers to return used toner cartridges by studying Netflix’s DVD service
And there are so many more interesting case studies.
While giving a speech on this recently, a client shared another wonderful example.
The company is in the computer simulation space. They are able to build incredibly realistic models of what might happen in the real world by creating simulations in the virtual world.
When working for a medical device company that made angioplasty equipment, they wanted to create a computer simulation that would predict how the “balloon” would expand.
Where did they turn for an accurate computer model?
In the past, they worked with car manufacturers and built statistical models that simulated the expansion and contraction of airbags. This proved to be a wildly accurate way of predicting how a balloon catheter would operate.
When you are working on your next business challenge, ask yourself: “Who else has solved a similar problem?”
In doing so, you might significantly accelerate your innovation effort.