Making a theoretical contribution

I'm not a fan of the requirement that a paper make a novel theoretical contribution to publish in top management journals. I'd argue this standard has contributed to the replication crisis in the social sciences. Nonetheless, it is the standard, so it's helpful to think through just what a theoretical contribution means in the era of the replication crisis.

Making a theoretical contribution is absolutely in the eye of the beholder. My working hypothesis is that whether an editor believes you have met this standard has a lot to do with the clarity of the writing. The better the writing and argumentation, the higher the probability that the editor and reviewers will see the 'theoretical contribution' in the paper. This makes writing quality the single biggest predictor of paper acceptance, not the 'That's Interesting!' standard that also likely contributed to the replication crisis in the first place.

So given that writing quality is the key to crossing the theoretical contribution bar, I would argue that the single best way to enhance writing quality in a management study is clarity of purpose. This is, quite simply, being clear about what you are trying to accomplish. If the purpose of the study is to offer a grand theory of the firm, great. Just. Say. So.

If you have more modest aims, just say so. To me, a theoretical contribution is something that improves our understanding of the nomological relationship between two or more phenomena. If that means your study is a replication, GREAT! Just say that's what you are doing. If your study asks an existing question but does it in a more rigorous way, even better!!! We need to revisit a number of past findings with new data and new—yes, that means better—methods. My point is that the key to making a theoretical contribution is to just be clear; to be intellectually honest about the purpose behind a study.

As a spillover benefit, I think clarity of purpose will also help address the HARKing problem. If a study is using data from an already published paper, if the code isn't made available, and if the original study wasn't pre-registered, well, the paper is probably a fishing expedition masquerading as a new source of broad managerial insight. If it's fishing, just call it that. But you'd better have a self-replication included in the new submission!

Science and journalism in academic publishing

[Graphic: publication probability as a function of a paper's 'science' (ability to draw causal inference) and its 'journalism' (ability to tell a compelling story)]

I’ve drawn a version of that graphic dozens of times now when talking to PhD students about publishing in management/entrepreneurship. The purpose of the graphic is to talk about publishing probabilities based on a given paper’s strengths—its ability to draw causal inference (good science), or its ability to tell an interesting and compelling story (good journalism).

As a field, we have a silly devotion to 'making a theoretical contribution' as a standard for publication. The necessity for each study to bring something new to the table is the exact opposite of what we should want as scientists: trustworthy, replicable results that imbue confidence in a model's predictions.

Now, the happiest face, and hence the highest publication probability, is absolutely a paper that addresses an important topic, is well written and argued, AND has a high quality design with supporting empirics. This should be, of course, the goal. Producing such work consistently, however, is not easy. In our publish or perish world, promotion and tenure standards call for a portfolio of work, at least some of which is not likely to fall in our ideal box. So the question becomes, as a field, should we favor high quality science that may address less interesting topics or simple main effects? Or, should we favor papers that speak to an interesting topic but with research designs that represent a garden of forking paths and have less trustworthy results?

To put it another way, what matters more to our field, internal validity or external validity?

Again, the ideal is that both matter, although I'm in the camp that internal validity is a necessary element of external validity—what's the point of a generalizable finding that is wrong in the first place? But when it comes to editorial decisions—and I've certainly seen this in my own work—I would argue that good journalism improves the odds of publication even with questionable empirics. I don't have any data to support my hypothesis, although I typically don't get much resistance when I draw my picture during a seminar.

Fortunately though, I think we're slowly changing as a field. The increasing recognition of the replication crisis in science broadly, and in associated fields like psychology and organizational behavior, will over time, I believe, change the incentive structure to favor scientific rigor over journalistic novelty. Closer to my field, the encouraging changes in the editorial policies of Strategic Management Journal may help tilt the balance in favor of rigor.

In the spirit then of Joe Simmons’ recent post on prioritizing replication, I’d like our field to demonstrably lower the bar for novel theoretical insights in each new published study. It is, to me, what is holding our field back from bridging the gap between academia and practice—why should we expect a manager to use our work if we can’t show that our results replicate?

Rigor and relevance

This post challenges the assumption that for an academic paper to be relevant it must be interesting, and that for the paper to be interesting, it needs only passable empirics rather than a rigorous research design and empirical treatment.

An easy critique of this assumption is to say that I've set up a straw man; to publish, you need rigorous empirics AND a compelling story that makes a contribution. I don't think that's the case. I think as a field (management and entrepreneurship specifically), we are too willing to favor studies that are interesting over those that are less interesting, even when the less interesting paper has a stronger design and stronger empirics. The term interesting is, without question, subjectively determined by journal editors and reviewers—what is interesting to one scholar may or may not be interesting to another.

Generally we think of interesting in terms of making a theoretical contribution; the standard for publication at most of our top empirical journals is that a paper must make a novel insight—or insights—to be publishable. The problem with this standard, as has been amply covered by others, is that it encourages, or forgives, researcher degrees of freedom that weaken statistical and causal inference in order to maximize the 'interesting-ness' factor. The ongoing debate over the replicability of power-posing is a notable case in point.

My hypothesis is that the willingness to trade rigorous research design for 'novel' insights is the root cause of the very real gap between academic management research and management practice. The requirement to make a novel insight encourages poor research behavior while minimizing the critical role that replicability plays in the trustworthiness of scientific research. In entrepreneurship research, we have also been late to embrace concepts like counterfactual reasoning and appropriate techniques for dealing with endogeneity, which diminishes the causal inference of our research and hence its usefulness.

In short, managers are less likely to adopt practices borne out of academic research not because such findings are unapproachable—although, true, studies aren't easy reads—but because most academic research simply isn't trustworthy. I'm not suggesting most research is the result of academic misconduct; far from it. But I am suggesting that weak designs and poorly done analyses lower the trustworthiness of study results and their usefulness to practice.

To be clear, a well done study that maximizes causal inference AND is theoretically novel is, certainly, the ideal. But the next most important consideration should be a rigorously designed and executed study of a simple main effect relationship that maximizes causal inference and understanding. It may not be interesting, but at least it will be accurate.

The best way to be relevant is to be trustworthy, and the best way to be trustworthy is to be rigorous. You can’t have external validity without first maximizing internal validity.