I'm going to sketch a chronology and analysis that draw on several centuries of the history of science and on the many volumes that have been written about it. In being concise, I'll make some very sweeping generalizations without acknowledging necessary exceptions or nuances. But the basic story is solidly in the mainstream of the history of science, philosophy of science, sociology of science, and the like, what's nowadays called "science & technology studies" (STS).
It never was really true, of course, as the conventional wisdom tends even now to imagine, that "the scientific method" guarantees objectivity, that scientists work impersonally to discover truth, that scientists are notably smarter, more trustworthy, more honest, so tied up in their work that they neglect everything else and don't care about making money . . . There is no "scientific method." Science is done by people, and people aren't objective. But it is true that for centuries scientists weren't subject to multiple and powerful conflicts of interest. Scientists are just like other professionals – to use a telling contemporary parallel, they are like the wheelers and dealers on Wall Street: not exactly dishonest, but looking out first and foremost for Number One.
"Modern" science dates roughly from the 17th century. It was driven by the sheer curiosity of lay amateurs and the God-worshipping curiosity of churchmen; there was little or no conflict of interest with plain truth-seeking. The truth-seekers formed voluntary associations: academies like the Royal Society of London. Those began to publish what happened at their meetings, and some of those Proceedings and Transactions have continued publication to the present day. These meetings and publications were the first informal steps to contemporary "peer review."
During the 19th century, "scientist" became a profession: one could make a living at it. Research universities were founded, and with that came the inevitable conflict of interest between truth-seeking and career-making, especially since science gained a very high status and one could become famous through success in science. (An excellent account is by David Knight in The Age of Science.)
Still, it was pretty much an intellectual free market, in which the entrepreneurs could be highly independent because almost all science was quite inexpensive and there was a multitude of potential patrons and sponsors, circumstances that made for genuine intellectual competition.
The portentous change to "Big Science" really got going in mid-20th century. Iconic of the new circumstances remains the Manhattan Project to produce atomic bombs. Its dramatic success strengthened the popular faith that "science" can do anything, and very quickly, given enough resources. More than half a century later, people still talk about having a "Manhattan Project" to stop global warming, eradicate cancer, whatever.
So shortly after World War II, the National Science Foundation (NSF) was established, and researchers could get grants for almost anything they wanted to do, not only from NSF but also from the Atomic Energy Commission, the Army, the Navy, the Air Force, the Defense Advanced Research Projects Agency (DARPA), the Department of the Interior, the Agriculture Department . . . as well as from a number of private foundations. I experienced the tail end of this bonanza after I came to the United States in the mid-1960s. Everyone was getting grants. Teachers colleges were climbing the prestige ladder to become research universities, funded by grant-getting faculty "stars": colleges just had to appoint some researchers, who would bring in the moolah, which would pay for graduate students to do the actual work; and the "overhead" or "indirect costs" associated with the grants – often on the order of 25%, with private universities sometimes even double that – allowed the institutions to establish all sorts of infrastructure and administrative structures. In the 1940s, there had been 107 PhD-granting universities in the United States; by 1978 there were more than 300.
Institutions competed with one another for faculty stars and to be ranked high among "research universities," to get their graduate programs into the 20 or so "Top Graduate Departments" – rankings that were being published at intervals for quite a range of disciplines.
Everything was being quantified, and the rankings pretty much reflected quantity, because of course that's what you can measure "objectively": How many grants? How much money? How many papers published? How many citations to those papers? How many students? How many graduates placed where?
This quantitative explosion quickly reached the limits of possible growth. That had been predicted early on by Derek de Solla Price, historian of science and pioneer of "scientometrics" and "Science Indicators," quantitative measures of scientific and technological activity. Price had recognized that science had been growing exponentially with remarkable regularity since roughly the 17th century: the numbers of scientific journals being published, of papers published in them, of abstracts journals established to digest the flood of research, of researchers . . . all had been doubling about every 15 years.
Soon after WWII, Price noted, expenditures on research and development (R&D) had reached about 2.5% of GDP in industrialized countries, which meant quite obviously that continued exponential growth had become literally impossible. And indeed the growth slowed, and quite dramatically by the early 1970s. I saw recently that the Obama administration expressed the ambition to bring R&D to 3% of GDP, so there's indeed been little relative growth in the last half century.
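Price's arithmetic is easy to check with a rough back-of-the-envelope calculation (my own illustration using his approximate figures, not a computation taken from his writings): had R&D spending kept doubling every 15 years from a base of about 2.5% of GDP, it would have swallowed the entire economy within a human lifetime,

$$ 2.5\% \times 2^{\,t/15} = 100\% \;\Longrightarrow\; t = 15 \log_2 40 \approx 80 \text{ years}, $$

which is why the exponential curve had to flatten into something like a steady state.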
Now, modern science had developed a culture based on limitless growth. Huge numbers of graduates were being turned out, many with the ambition to do what their mentors had done: become entrepreneurial researchers bringing in grants wholesale and commanding a stable of students and post-docs who could churn out the research and generate a flood of publications. By the late 1960s or early 1970s, for example, to my personal knowledge, one of the leading electrochemists in the United States in one of the better universities was controlling annual expenditures of many hundreds of thousands of dollars (1970s dollars!), with several postdocs each supervising a horde of graduate students and pouring out the paper.
The change from unlimited possibilities to a culture of steady state, to science as a zero-sum game, represents a genuine crisis: If one person gets a grant, some number of others don't. The "success rate" in applications to NSF or the National Institutes of Health (NIH) is no more than 25% on average nowadays, and lower still for researchers at not-yet-established institutions. So it would make sense for researchers to change their aims and their beliefs about what is possible, to stop counting success in terms of quantities; but they can't do that, because the institutions that employ them still count success in terms of quantity, primarily the quantity of dollars brought in. To draw again on a contemporary analogy, scientific research and the production or training of researchers expanded in bubble-like fashion following World War II; that bubble was pricked in the early 1970s and has been deflating with increasingly obvious consequences ever since.
One consequence of the bubble's burst is that there are far too many would-be researchers and would-be research institutions chasing grants. Increasing desperation leads to corner-cutting and frank cheating. Senior researchers established in comfortable positions guard their own privileged circumstances jealously, and that means in some part not allowing their favored theories and approaches to be challenged by the Young Turks. Hence knowledge monopolies and research cartels.
A consequence of Big Science is that very few if any researchers can work as independent entrepreneurs. They belong to teams or institutions with inevitably hierarchical structures. Where independent scientists owed loyalty first and foremost to scientific truth, now employee researchers owe loyalty first to employers, grant-givers, sponsors. (For this change in ideals and mores, see John Ziman, Prometheus Bound, 1994.) Science used to be compared to religion, and scientists to monks – in the late 19th century, T. H. Huxley claimed quite seriously to be giving Lay Sermons on behalf of the Church Scientific; but today's scientists, as already said, are more like Wall Street professionals than like monks.
Since those who pay the piper call the tune, research projects are chosen increasingly for non-scientific reasons, perhaps political ones, as when President Nixon declared war on cancer at a time when the scientific background knowledge made such a declaration substantively ludicrous and doomed to failure for the foreseeable future. With administrators in control because the enterprises are so large, bureaucrats set the rules and make the decisions. For advice, they naturally listen to the senior, well-established figures, so grants go only to "mainstream" projects.
Nowadays there are conflicts of interest everywhere. Researchers benefit from individual consultancies. University faculty establish personal businesses to exploit their specialized knowledge, which was gained largely at public expense. Institutional conflicts of interest are everywhere: There are university-industry collaborations; some universities have toyed with establishing their own for-profit enterprises to exploit directly the patents generated by their faculty; research universities have whole bureaucracies devoted to finding ways to make money from the university's knowledge stock, just as the same or parallel university bureaucracies sell rights to use the university's athletics logos. It is not at all an exaggeration to talk of an academic-government-industry complex whose prime objective is not the search for abstract scientific truth.
Widely known is that President Eisenhower had warned of the dangers of a military-industrial complex. Much less well known is that Eisenhower was just as insightful and prescient about the dangers from Big Science:
in holding scientific research and discovery in respect . . . we must also be alert to the . . . danger that public policy could itself become the captive of a scientific-technological elite
That describes in a nutshell today's knowledge monopolies. A single theory acts as dogma once the senior, established researchers have managed to capture the cooperation of the political powers. The media take their cues also from the powers that be and from the established scientific authorities, so "no one" even knows that there are alternatives to HIV/AIDS theory and to the theory that human activities are contributing to climate change, or that the Big Bang might not have happened, or that it might not have been an asteroid that killed the dinosaurs, and so on.
The bitter lesson is that the traditionally normal process of science, open argument and unfettered competition, can no longer be relied upon to deliver empirically arrived at, relatively objective understanding of the world's workings. Political and social activism and public-relations efforts are needed, as public policies are increasingly determined by the actions of lobbyists backed by tremendous resources and pushing a single dogmatic approach. No collection of scientifically impeccable writings can compete against the Intergovernmental Panel on Climate Change and a Nobel Peace Prize awarded for Albert Gore's activism and "documentary" film – and that is no prophecy, for the evidence is here already, in the thousands of well-qualified environmental scientists who have for years petitioned for an unbiased analysis of the data. No collection of scientifically impeccable writings can compete against the National Institutes of Health, the World Health Organization, UNAIDS, and innumerable eminent charities like the Bill and Melinda Gates Foundation, when it comes to questions of HIV and AIDS – and again that is no prophecy, because the data have been clear for a couple of decades that HIV is not, cannot be, the cause of AIDS.
As to HIV and AIDS, maybe the impetus to truth will come from politicians who insist on finding out exactly what the benefits are of the roughly $20 billion we – the United States – are spending annually under the mistaken HIV/AIDS theory. Or maybe it will come from African Americans, who may finally rebel against the calumny that it is their reprehensible behavior that makes them 7 to 20 times more likely to test "HIV-positive" than their white American compatriots; or perhaps from South African blacks who are alleged to be "infected" at rates as high as 30%, supposedly because they are continually engaged in "concurrent multiple sexual relationships," having multiple sexual partners at any given time but changing them every few weeks or months. Or from a court case or series of them, because of ill health caused by toxic antiretroviral drugs administered on the basis of misleading "HIV" tests; or perhaps because one or more of the "AIDS denialists" wins a libel judgment against one or more of those who call them Holocaust deniers. Or maybe the impetus will come from the media finally seizing on any of the above as something "newsworthy."
At any rate, the science has long been clear, and the need is for action at a political, social, and public-relations level. In this age of knowledge monopolies and research cartels, scientific truth is suppressed by the most powerful forces in society. It used to be that this sort of thing would be experienced only in Nazi Germany or the Soviet Union, but nowadays it happens in democratic societies as a result of what President Eisenhower warned against: "public policy . . . become the captive of a scientific-technological elite."
December 19, 2009