‘Formulaic, cautious, dull and unreadable.’ Dennis Tourish struggles to understand management research papers.
Many decisions in organisations are taken in spite of evidence that they do more harm than good. Consider the use of stock options as a compensation strategy. Originally intended to align the interests of managers with those of shareholders, unintended consequences quickly multiplied.
These included managers focusing on the short term rather than the long term, leading many of them to exaggerate earnings and even commit fraud. One study found that companies using such systems were more likely to be forced into restating their earnings than those that did not.
Mergers and acquisitions likewise fare more poorly than their architects generally hope. A study of 947 acquisitions over almost 20 years reported that when large companies use stock to buy smaller firms, they can expect returns roughly 25% lower over the following five years.
It is, therefore, no surprise that managers are increasingly urged to use “evidence” when making decisions. This sounds better than merely imitating what others do, a process that has been described as “casual benchmarking”. Just because others are doing something doesn’t mean that it makes sense for you to do the same.
However, advice to study “the evidence” is itself flawed. For a start, where do you find it? An obvious place might be in academic journals devoted to management. But these are written by academics for academics. They are generally unintelligible to anyone outside a small fraternity already initiated into the history, vocabulary and methods of whatever topic is being addressed.
Yet careers within academia depend on building a strong portfolio of publications within such journals. Making matters worse, we climb the greasy pole much faster if we publish many such papers, and make sensational claims about their importance. These are strong incentives to cut methodological corners, hide weaknesses in our data, use dodgy statistical analyses and claim to have discovered entirely new theories.
I offer the example of “authentic leadership” theory (ALT). This has become something of a fad within the leadership development industry. ALT proposes that leaders who bring their “real” selves to the forefront of their interactions with others are more effective. If you are persuaded enough, or maybe just gullible, you can pay Harvard Business School $15,500 for a five-day course in how to be more authentic.
As the prominent management theorist Jeffrey Pfeffer has noted, there is a “delicious irony” in the notion that you can be trained – that is, taught – how to change yourself to be more authentic. Yet the evidence base behind all this is fundamentally flawed. Here is one example.
A group of academics published a paper on ALT in 2016. Its main claim is as follows: “Study 1 shows experimentally that compared to a leader who advances personal interests, a leader who advances the interests of a collective is (a) perceived as offering more authentic leadership and (b) more likely to inspire followership.”
Please note how flimsy this sounds. The paper is telling us, in effect, that leaders who promote the common good are more valued than those who pursue selfish ends. The closer one studies the paper’s methods and data, the louder such misgivings grow.
Its authors report that they gathered data from 73 students. No further details are given, not even whether they were undergraduates or postgraduates. This is an extraordinarily small sample on which to build claims about organisations and leadership throughout the world.
The practice of using student samples to draw conclusions about relationships at work is also dubious. Few of them have much experience of the world of business. Some journals now refuse to publish papers that rely only on student samples.
Those in this study were presented with a one-page commentary article about a senior politician who had switched his support from the incumbent minister to a challenger. The first experimental condition reported that the leader had reached this position for personal reasons, while the second reported that he did so because of collective interests.
Note that the first condition is a depiction of selfishness and will contaminate any responses that are obtained. The second condition, of course, is equally likely to skew responses. All this tells us is that when people are given negative information about a leader, they will rate that leader poorly on whatever measure of leadership you present them with; when they are given positive information about a leader, the opposite will happen. It doesn’t really tell us anything about leadership that is worth knowing, and it certainly doesn’t provide robust support for ALT.
There is another problem with management research: many of its theories are advanced with little or no evidence to support them.
A study of papers in one prestigious journal concluded that no more than 9% of the theoretical propositions they contained were ever tested. This problem is compounded by our reluctance to replicate. A study of 18 leading business journals from 1970 to 1991 found that less than 10% of the published empirical work in the accounting, economics and finance areas consisted of replications. The figure was 5% or less in the management and marketing fields.
Multiple replications are even rarer. This means that management research may be suffused with false positives – that is, it makes knowledge claims that are false. We don’t know which findings can be trusted and which cannot. Even when papers are retracted because of fraud, poor analysis, plagiarism or other issues, action is often far too cautious.
For example, one American accountancy professor has now had 37 papers retracted because he invented all his data. But that leaves nearly one hundred other papers by him in the public domain, even though they are in all probability fatally flawed. We are left to our own instincts in deciding whether or not to trust their conclusions.
There is yet another major problem with management research. Look at the guidelines for authors provided by any of our top journals, and the odds are that you will see a call for “theory development” in the papers that they publish.
There is no equivalent demand that authors should develop guidelines for practice, describe an interesting phenomenon for which no theory is yet available, be clearly written or address important issues. The result is a great deal of pretentious gibberish masquerading as “theory development.”
Here is an example. Why didn’t its author use words with which people would be more familiar? Consider this extract from the abstract: “The core hypothesis, supported by the results, is that the more similar the initially experienced level of organisational munificence is to the level of munificence in a subsequent period, the higher an individual’s job performance. This relationship between what I call ‘imprint–environment fit’ and performance is contingent on the individuals’ career stage when entering the organisation and the influence of second-hand imprinting resulting from the social transmission of others’ imprints.”
Not quite nonsense – but not quite English either. This kind of writing has become far too common in our field. I would translate it as follows: “When managers behave well towards new employees, and continue to do so, those employees work better; when they behave badly, the opposite happens. This is partly because existing employees also behave either well or badly, and so model attitudes and behaviour for others.” “Imprinting” seems to mean the impact on us of our experiences with others. But since this doesn’t sound sufficiently theoretical, we read instead of “imprint–environment fit”.
Note also the reference to “supported by the results.” This means that some hypotheses that stated the blindingly obvious were developed and, in the manner of someone hypothesising that alcohol can be bought in a bar, they were duly confirmed. The paper strives for enigma, and I suppose achieves it, since without the help of a dictionary it is hard to know what point is being made. There is a skill to this kind of writing, just as there is in playing Scrabble. Whether it really advances the sum of human knowledge is a different matter.
Most papers in mainstream journals are formulaic, cautious, dull and unreadable. They tend to ignore genuinely important problems in pursuit of this or that fancy that lends itself to quick data collection and facile theorising. Management researchers have so far published very little that addresses the looming technological revolution in the world of work, organisations and management.
What is to be done about all this? My alarm is now shared by many. A network of management researchers, Responsible Research for Business and Management (RRBM), has emerged, committed to promoting more responsible research in business and management in order to help produce a better and more sustainable world.
Among much else, our academic journals need to change.
The RRBM encourages elite journals to publish more problem-centred research “oriented toward critical social and business questions that are complex and span disciplinary boundaries.” For example, terrorism is an organisational problem, but it is one that we ignore. The organisational and management implications of climate change also cry out for deeper analysis, but they are again largely invisible in management journals.
Putting all this right means back-pedalling on our fascination with theory. There is, of course, nothing wrong with papers that seek to develop theory. But it is ridiculous to insist that they should all do so.
We also need more papers that have larger and more representative samples and consider the real-world implications of what they report. We need more replications of studies that make big claims and a greater willingness to retract those that look toxic when placed under the microscope. Managers would benefit and so would our wider society.
So too would academics. We might even find that our work becomes less boring and pointless, more interesting and more widely read.