Understanding the funding of science may be scientists’ greatest challenge.
AFTER a lifetime in science, my enthusiasm and fascination for biology and biological systems have not diminished, but over that time my understanding of the research funding process has, if anything, declined.
I think, I hope, that the benefit of scientific research is self-evident. Many, if not all, of the technologies we now enjoy have come from the application of scientific discoveries (vaccination, antibiotics, electricity and transistors, for example). I have observed that Darwin’s ideas on natural selection (survival of the fittest does not necessarily mean survival of the best; ask Alan Turing) can be applied very widely: to whole ecosystems, to organisms, and ultimately at the molecular level. Apply a selective pressure to a system and it will adapt to, reduce, or even negate that pressure. Put excessive antibiotics into the environment and bacteria become more resistant. Spray pesticides and the pests adapt. Treat cancer patients with chemotherapy and the tumour cells evolve.
You get the idea. So what about scientists under funding pressure? Lord Kelvin once observed: “If you cannot measure it, you cannot improve it.” Nor can you understand it; you have to know what to measure and how. Accordingly, the last decade has seen the increasing use of metrics to measure the output and benefits of science.
The science of measuring science, if you will.
The Australian Research Council (ARC) has been much lauded for its Excellence in Research for Australia (ERA) initiative, “which aims to identify and promote excellence across the full spectrum of research activity in Australia’s higher education institutions. ERA assesses research quality within Australia’s 41 higher education providers using a combination of indicators and expert review by committees comprising experienced, internationally recognised experts.”
ERA defines research as “the creation of new knowledge and/or the use of existing knowledge in a new and creative way so as to generate new concepts, methodologies and understandings. This could include synthesis and analysis of previous research to the extent that it is new and creative.” A description of the indicators used can be found on the ARC website. In a nutshell, it assesses the volume and quality of research publications and the amount of research funding raised. The reference to creativity is interesting, but I must confess I do not know how that can be measured.
A great deal of scientific research, at least in academia, is funded by grants, either from a government agency (in Australia, mainly the ARC or the National Health and Medical Research Council, NHMRC) or from private research foundations such as the Wellcome Trust and the Bill & Melinda Gates Foundation. Much time and effort is spent writing and agonising over the funding applications. Researchers are encouraged to develop “grantsmanship, the art of acquiring peer-reviewed research funding”. Universities have departments dedicated to managing the funding application process.
A typical application can run to 40 or 50 pages by the time the project description, the methods, the track records of the applicants (metrics such as the h-index) and a detailed, justified budget are assembled. I should point out that all this effort is required simply to raise the research funds. In itself it does not, as far as I can see, contribute to scientific understanding. It does, however, consume a great deal of time.
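Since track-record metrics will recur in what follows, it is worth being concrete about what the h-index measures: a researcher has index h if h of their papers have each been cited at least h times. A minimal sketch in Python, purely illustrative rather than any agency’s official implementation:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still supports a larger h
        else:
            break
    return h

# Five papers cited 10, 8, 5, 2 and 1 times give an h-index of 3:
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```

Note what the metric ignores: a single transformative paper cited thousands of times moves the index by at most one.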
I can hear the cries of outrage: ‘Hypotheses are discussed, experimental designs refined, peers consulted to debate the ideas’. On the contrary: while scientists are occupied with grant writing they are not carrying out research or debating scientific ideas; they are looking for the best way to market their ideas, to convince the reviewers to give them the money to actually do the research. The reviewers are, of course, peers, who are very often applying for funding from the same sources.
Reviewers are not paid for their efforts, and it is not uncommon for them not to be experts in the field of the application. In the interest of fairness, the applicants are allowed to comment on the reviewers’ assessments before the final selection, which generally means a few more weeks poring over the applications. Another, almost mandatory, requirement is preliminary data: it has to be shown that the proposed research already has results before it is given funding. The whole process is only a paper (or, more precisely, a web-based) exercise. There is no direct contact between the reviewers and the reviewed, nor are the institutions in which the research is to be conducted visited.
It can also happen that an application rejected this year is resubmitted and funded the next. Another issue, which has come to the fore recently, is duplication of research: the same research being funded in multiple instances. This wastes not only money but also effort. Since all applications are now submitted digitally, it should be possible to build a database and attempt to bring together like-minded research (a simple sketch of the matching step follows). To a large extent, projects are judged on a track record of published output and successful fundraising, so if you have won grant funding you improve your chances of getting more.
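The matching step in such a database would be technically straightforward. Here is a minimal sketch, assuming applications are available as plain-text abstracts; the helper name, the use of the open-source scikit-learn library, and the 0.8 similarity threshold are all illustrative choices, not a proposal for how an agency actually works:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def flag_similar(abstracts, threshold=0.8):
    """Return index pairs of abstracts that look suspiciously alike
    under TF-IDF cosine similarity (a crude proxy for overlap)."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
    similarity = cosine_similarity(vectors)
    n = len(abstracts)
    return [(i, j, similarity[i, j])
            for i in range(n)
            for j in range(i + 1, n)
            if similarity[i, j] >= threshold]
```

Pairs flagged this way would still need human judgement, of course; textual similarity is only a rough stand-in for scientific overlap.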
The overall success rate in this tortuous annual process is often less than 1 in 4 (see the ARC success rates). A typical project grant runs for 3 years. If the application is unsuccessful, the cycle is repeated and it is another year before the project might receive funding. This not only puts pressure on researchers with regard to funding their own positions; it also forces short-term thinking.
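To put rough numbers on this: if each annual round carries an independent one-in-four chance of success, a proposal must on average be submitted 1/0.25 = 4 times before it is funded. On that admittedly simplified assumption, a three-year grant can take longer to win than it lasts.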
Where is the selection pressure in all this? There are several. Firstly, if funding is for 3 years and it takes 1 or 2 years to secure it, then there is considerable pressure to submit grant applications every year. This means a great deal of time spent not only writing the application but often also running experiments designed solely to produce the preliminary data that will satisfy the reviewers. The requirement to publish (which, incidentally, means publishing only positive results) can lead to the splitting of work into multiple papers to give the impression of greater output. Another common practice is submission to one journal after another until a paper is accepted, which generally means a descent down the impact factor ladder (every journal now has an impact factor, a measure of the journal’s status; the higher the number, the better the journal is held to be).
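For the record, the standard two-year impact factor for year Y is the number of citations received in Y by items the journal published in the two preceding years, divided by the number of citable items it published in those years. It is a property of the journal as a whole and says nothing directly about the quality of any individual paper in it.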
These practices, though perhaps not consciously encouraged by universities and research institutes, do, I believe, result from official pressure to publish even when you have nothing to say. It is common to applaud a researcher for having hundreds of publications. But suppose a researcher has a productive period of 30 years and amasses 300 publications in that time: that is 10 a year, or nearly one a month. It would seem they do little but write. There is a clear danger in this. Quantity becomes the measure, not quality, and this pressure will lead to more competition than cooperation.
Given the complexity of contemporary science, this increase in competitive pressure will lead to a reduction in quality. Finally, there is increasing pressure to conduct research with one eye on its potential commercialisation; CSIRO has been pushed very much in that direction. This tends to lead to secrecy rather than open discussion, and to the ‘spinning’ of results so that a profit appears to be just around the corner.
The media sound bite of a scientific breakthrough is common, often conjuring up the image of Archimedes running down the street shouting ‘Eureka!’. The reality is very different; it is closer to Edison’s comment that invention is 99% perspiration and 1% inspiration. Science only becomes knowledge if the results published by one group are reproducible, both in-house and by others.
The funding bodies would do well to adhere to that principle, and perhaps to reflect on the Darwinian selection pressures their processes create.
Published in the Australian Rationalist, June 2013
Keith Ashman is an expert on proteomics and head of Clinical Applications Development at the University of Queensland.