We all know that correct science is needed for much of what
we do. So many parts of our modern world—things we buy, use, or interact with
in various ways—are based on science and work as they should only when the
science is correct. Social science does not build things, but it teaches us how
society works and how it can be improved. This is what gives social science,
including organization and management theory, its value. One could even argue that organization
theory has special motivations to be a correct science, because so many parts
of our modern world—things we buy, use, or interact with in various ways—are
made and operated by organizations and work as they should only when the
science of managing is correct.
In an editorial essay in Administrative Science Quarterly, William Starbuck argues that we need to improve social science. The argument
has multiple parts, and I will discuss just one here. It is a simple issue with
a lot of complications: Research is done by people. People have all sorts of
thoughts, feelings, actions, and reactions, which are often different from the
cold and objective look at evidence that science calls for.
People want safe jobs, success, and recognition. They often
feel under pressure to do well, both from what they want and from institutional
rules such as a promotion system strongly driven by publications. This
drives some of them to deviate from truthful reporting of findings, which could
involve selective modeling to strengthen results, holding back results that
go against expectations, making up theory after the fact to fit unexpected findings, or even
editing or falsifying data. A very important problem is that many researchers
who are selective, hold back, or make up theory believe that they are better
than those who edit or falsify. But they are not. Any one of the actions I just
mentioned is a misleading departure from scientific standards. We understand
why some people do it, but we need them to stop.
People assess scientific results, but they also think about
the people doing the science, and that colors their assessment because people
have stronger feelings about people than they have about scientific findings. Much
of social science is aware of this effect and tries to shield authors through
double-blind review, to hide their identity from those who assess them. The
fields that don’t do this end up rewarding the already famous over and over
again. But double blind is not enough. People are good readers and can make all
sorts of guesses about who the writer is, or at least what kind of person the
writer is. The guesses are
usually accurate, but the assessments that follow
the guesses are biased and disadvantage those who have low status. We
understand why some assessments are shaped by status, but we need it to stop.
When people make the science, with bias, and people assess
the science, with bias, how can we improve it? Starbuck has a wide range of
suggestions that can help improve our procedures. At ASQ we are thinking
carefully about these problems, and we have wondered whether part of the
problem is that the evidence is not presented with enough appeal and
transparency. We have changed our invitation to contributors to encourage
displaying data before presenting models and to encourage graphical
approaches that show the reader exactly how much the theory explains and
how much remains unexplained. I have also written a blog post on this issue.
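For concreteness, here is a minimal sketch in Python of the kind of display described above: the raw observations are plotted before any model is drawn, and the fitted line with its R^2 then makes visible how much the model explains and how much variation is left unexplained. Everything in it, from the simulated data to the variable names, is an illustrative assumption, not an ASQ requirement or a description of any particular study.

```python
# A minimal sketch, not a prescribed method: all numbers and variable
# names here are illustrative assumptions, not from any actual study.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)           # hypothetical predictor
y = 2.0 * x + rng.normal(0, 4, 100)   # hypothetical outcome with noise

# Ordinary least-squares fit; np.polyfit returns [slope, intercept] for degree 1.
slope, intercept = np.polyfit(x, y, 1)
fitted = slope * x + intercept

# R^2 is the share of variation the model explains; 1 - R^2 is what it
# leaves unexplained, and a reader should be able to see both in the plot.
ss_res = np.sum((y - fitted) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

fig, ax = plt.subplots()
ax.scatter(x, y, alpha=0.6, label="raw observations")   # data shown first
xs = np.sort(x)
ax.plot(xs, slope * xs + intercept, color="black",
        label=f"fitted line ($R^2$ = {r2:.2f})")
ax.set_xlabel("predictor")
ax.set_ylabel("outcome")
ax.legend()
plt.show()
```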
People will still be people. The solution has to involve better procedures, including
those that allow the data we work with to become more prominent.