Don Bergh1 and colleagues published a great note in Strategic Organization recently on the question of reproducibility of results in strategy research. I agree with virtually everything in the paper, but this passage on page 8 caught my attention…

Overall, based on our sample of 88 SMJ articles, the strategic management literature appears vulnerable to credibility problems for two main reasons. One, the majority of the articles did not report their data sufficiently to permit reproduction, leaving us in the dark with regards to the accuracy of their reported results. Two, among those articles where reproduction analyses were possible, a significant number of discrepancies existed between reported and reproduced significance levels.

I’ve written about this before—what limits our impact on management practice is a lack of rigor, not an excess of it. Here is another example of the problem. When a second scholar cannot reproduce the results of a study using the same data (correlation matrix) and the same estimator, that’s a significant concern. We simply cannot say with confidence, especially given threats to causal inference, that a single study’s effect is as strong as reported when the data, code, and other disclosures about research design and methodology are absent. Rigor and transparency, to me, will be the keys to unlocking the potential impact of strategy and entrepreneurship research on management practice.

On a related note, it’s nice in this paper that the authors drew the distinction between reproducibility and replication, two terms that are sometimes confused. A reproduction of a study is the ability to generate the same results from a secondary analysis as reported in the original study, using the same data. A replication is the ability to draw similar nomological conclusions—generally with overlapping confidence intervals for the estimates—from a study using the same research design and methodology but on a different random sample.
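To make that distinction concrete, here is a minimal sketch in Python, using made-up simulated data (nothing from the Bergh et al. note): a reproduction re-runs the same estimator on the same data and should return identical estimates, while a replication draws a fresh sample from the same population, runs the same design, and should return a similar estimate, ideally with overlapping confidence intervals.

```python
import numpy as np

def simulate(n=200, beta=0.5, seed=None):
    """Draw one sample from the same (hypothetical) data-generating process."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    y = beta * x + rng.normal(size=n)
    return x, y

def fit(x, y):
    """OLS slope and a 95% confidence interval."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    b = coef[1]
    return b, (b - 1.96 * se, b + 1.96 * se)

# "Original study"
x, y = simulate(seed=1)
b_orig, ci_orig = fit(x, y)

# Reproduction: same data, same estimator -> identical estimate
b_repro, _ = fit(x, y)
assert np.isclose(b_orig, b_repro)

# Replication: new sample, same design -> similar estimate, ideally overlapping CIs
x2, y2 = simulate(seed=2)
b_rep, ci_rep = fit(x2, y2)
overlap = ci_orig[0] <= ci_rep[1] and ci_rep[0] <= ci_orig[1]
print(f"original:    b = {b_orig:.3f}, 95% CI = ({ci_orig[0]:.3f}, {ci_orig[1]:.3f})")
print(f"replication: b = {b_rep:.3f}, 95% CI = ({ci_rep[0]:.3f}, {ci_rep[1]:.3f}), overlap = {overlap}")
```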

Both reproducibility and replication are critical to building confidence and credibility in scientific findings. To me, though, reproducibility is a necessary but not sufficient condition for credibility. The easiest way to ensure reproducibility is to share data and code, and to do so early in the review process. For example, the Open Science Framework allows authors to make use of an anonymized data and file repository, so reviewers can check data and code without violating blind review.

While yes, many estimators (OLS, ML, covariance-based SEM) allow you to reproduce results from a correlation/covariance matrix, as reported in the paper, this can be a tall order, what with the garden of forking paths problem. More problematic for strategy research is the use of panel/multilevel data, an area the authors didn’t touch on. In this case, a multilevel study’s reported correlation matrix pools the lower- and higher-order variance together, effectively eliminating the panel structure. You could reproduce a naive, pooled model from the published correlation matrix, but not the multilevel model, which demonstrably limits its usefulness. This is a major reason why I’m in favor of dropping the standard convention of reporting a correlation matrix and instead requiring data and code.
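For the single-level case, the correlation-matrix trick is straightforward. Here is a minimal sketch, again with simulated (hypothetical) data, of recovering standardized OLS coefficients from nothing but a reported correlation matrix, using the standard identity that the standardized betas equal R_xx^{-1} r_xy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw data standing in for a study's sample: two predictors, one outcome.
n = 500
x1 = rng.normal(size=n)
x2 = 0.4 * x1 + rng.normal(size=n)
y = 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)
data = np.column_stack([x1, x2, y])

# What a paper typically reports: the correlation matrix.
R = np.corrcoef(data, rowvar=False)

# Reproduce the standardized OLS coefficients from the matrix alone:
# beta_std = R_xx^{-1} r_xy (predictor intercorrelations, predictor-outcome correlations).
R_xx = R[:2, :2]
r_xy = R[:2, 2]
beta_from_matrix = np.linalg.solve(R_xx, r_xy)

# The same coefficients from the raw (standardized) data, as a check.
Z = (data - data.mean(axis=0)) / data.std(axis=0)
beta_from_data, *_ = np.linalg.lstsq(Z[:, :2], Z[:, 2], rcond=None)

print(beta_from_matrix, beta_from_data)
assert np.allclose(beta_from_matrix, beta_from_data)
```

With grouped or panel data, though, applying the same recipe to the pooled matrix would only reproduce the naive pooled estimates; the within- and between-group effects that a multilevel model separates are gone once the matrix is pooled.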

Regardless though, lack of reproducibility is a significant problem in strategy, as in other disciplines. We’ve got a lot more work to do to build confidence in our results, and to have the impact on management practice that we could.

  1. In the interest of full disclosure, Dr. Bergh was a mentor of mine at the University of Denver—I was a big fan of his then, and I still am 🙂
