Periodically, I have a conversation where the topic turns to entrepreneurship researchers’ inability to answer—with precision—why some ventures succeed, some fail, some become zombies, and some become unicorns. Similar conversations surround the topic of startup communities and clusters, and the role of research universities in supporting entrepreneurial ecosystems. Often someone bemoans that we have study after study that addresses only one small piece of the puzzle, or that one study may contradict another, or that a study is simply too esoteric to be useful.
My response is, well, that’s social science.
I am a social scientist, and proud to be one. I think across the social science domain, including management and entrepreneurship research, we have much to offer the students, businesses, governments, and other stakeholders we serve. But the one thing we aren’t particularly good at is humility. Humility in the sense that when we talk about our research and what we can offer, we aren’t always very good at acknowledging the limitations of our work.
Think about predicting the weather. The cool thing about the weather is that it’s governed by the laws of physics, and we know a lot about physics. But even with our knowledge, computational power, and millions of data points, there remains considerable uncertainty about predicting the weather over the next 24, 48, and 72 hours. Part of the reason is that interactions between variables in the environment are difficult to account for, difficult to model, and especially difficult to predict. Meteorologists are exceptionally good forecasters, but far from perfect. And this is in a field where the fundamental relationships are law-like.
The hard reality is that establishing unequivocal causal relationships in the social sciences is extremely hard, let alone forecasting specific causes and effect sizes. We don’t deal with law-like relationships, measuring latent phenomena means error is always present, eliminating alternate explanations is maddeningly complex, and, well, we’re humans (that not-being-perfect thing). Interactions among social forces and social phenomena are not only difficult to model, but in many ways are simply incomprehensible.
One technique we use as social scientists is to hold constant many factors that we cannot control or observe, and to build a much simpler model of a phenomenon than exists in reality. It helps us make sense of the world, but it comes at the cost of ignoring other factors that may be as important as, or even more important than, what we are trying to understand. It also means that our models are subjective—the answer provided by one model may not be the answer provided by another. In a sense, models are equally right and equally wrong.
Where stakeholders who are not social scientists get frustrated with us is the desire for simple, unequivocal answers. What is also troublesome is that some social scientists—despite knowing better—are more than happy to tell the stakeholder that “yes, I’ve got the answer, and this is it.” When that answer turns out not to work as advertised, the search begins again, although this time with the stakeholder even more frustrated than before.
Making the matter even more complicated are statistical tools and methodologies that seem to provide that unequivocal answer; the effect of x on y is z—when x changes by a given amount, expect y to change by z amount. It seems so simple, so believable, that it’s easy to be fooled into thinking that the numbers produced by a statistics package represent truth, when the reality of that number is, well, far from ‘truth’.
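To make the point concrete, here is a minimal sketch (with made-up, simulated data) of what a statistics package actually gives us when it reports “the effect of x on y is z”: not a single true number, but an estimate surrounded by uncertainty. Even when we build the true effect into the simulation ourselves, a finite sample recovers only an approximation with an interval around it.

```python
import numpy as np

# Hypothetical simulation: we *construct* a world where the true effect
# of x on y is exactly 2.0, then see what a finite sample tells us.
rng = np.random.default_rng(42)
n = 100
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=1.5, size=n)  # noise hides the true effect

# Ordinary least squares estimate of the slope and its standard error
x_c = x - x.mean()
slope = (x_c @ (y - y.mean())) / (x_c @ x_c)
resid = (y - y.mean()) - slope * x_c
se = np.sqrt((resid @ resid) / (n - 2) / (x_c @ x_c))

# 95% interval (normal approximation): the honest answer is a range,
# not a point, and it will shift as new data come in
lo, hi = slope - 1.96 * se, slope + 1.96 * se
print(f"estimated effect: {slope:.2f}, 95% interval: [{lo:.2f}, {hi:.2f}]")
```

The printed estimate will be close to 2.0 but almost never exactly 2.0, and a different random sample would yield a different estimate and a different interval—which is precisely the uncertainty that a single reported coefficient conceals.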
In conversations that turn to wanting simple, unequivocal answers about entrepreneurship—what I call the grand theory of entrepreneurship fallacy—the weather analogy helps. But it’s also easiest to say that there simply aren’t simple answers. I can’t answer the question because there isn’t an answer; you are trying to solve an unsolvable problem. The best that I can provide, and the best that entrepreneurship data science can provide, is an educated guess. That guess will come with a credibility interval around it, will be narrowly applicable, and will be subject to update as new data come in and new relationships between variables emerge. That’s the best we can do—so be extremely wary of the researcher who says he or she can do better!
We characterize our human experience with uncertainty and with variance. Don’t expect anything better from data science on that human experience.
Originally published on August 9, 2018 on Dr. Anderson’s blog.