Science-policy revolution



For related articles, see Heande.


Science-policy revolution is a collateral paradigm shift in both science and policy. It is based on systematic and open sharing of information that is
  • scientifically criticizable and
  • usable in policy-making.
Science-policy revolution builds on open assessment.


"I speak of none but the computer that is to come after me," intoned Deep Thought, his voice regaining its accustomed declamatory tones. "A computer whose merest operational parameters I am not worthy to calculate -- and yet I will design it for you. A computer that can calculate the Question to the Ultimate Answer, a computer of such infinite and subtle complexity that organic life itself shall form part of its operational matrix. And you yourselves shall take on new forms and go down into the computer to navigate its ten-million-year program! Yes! I shall design this computer for you. And I shall name it also unto you. And it shall be called ... the Earth."
-- Douglas Adams: The Hitchhiker's Guide to the Galaxy (1979)


Motivation

I think I know how scientific information should be handled. It is a much more efficient approach than the one we use now. It releases information for useful purposes earlier. It builds on existing information more cleverly than today's practice. And it is quick and merciless in destroying pseudo-scientific false beliefs.

I think that if scientists start to apply these new ideas for handling scientific information, this will also result in a revolution in policy-making. Why? Because then scientists (and all other people) would have much more policy-relevant information in their hands. They could start evaluating policies with scientific rigour before policies are actually decided. Fairly quickly, it could be shown that policies that go against the scientific evaluation are failing. Citizens would learn this and force politicians to drop the policies that have the least support from science, given explicit objectives.

A part of this revolution is that citizens will get tools to express their own objectives and valuations. And I don't mean web polls about random details such as who is the most talented competitor in a TV show. I mean the most important things in people's lives. Policies have a crucial impact on whether people are able to achieve them or not. This impact can and should be studied scientifically. With new methods, these studies can be done quickly and systematically. They would show that a massive share of current policies move people further away from their objectives.

All this sounds too good to be true. Indeed, I am afraid that the science-policy revolution will not happen. But I don't doubt that we can use scientific information much better. I don't doubt that, when given the power to choose, people will in general choose wisely. I am afraid that, afterwards, we will have to confess:

"We found out how to save the world, but we were too busy to do it."

Making a revolution is a time-consuming business. All the current reward systems have been developed for current scientific practices. Doing things in another way brings no credit, but it does take time. The revolution is not what scientists have promised to do or what is expected of them; they would have to go against expectations. This is a major problem if you have a lot to lose but little to gain personally. Revolutions usually start among people who have nothing to lose but hope for a gain. (Maybe this is why Ph.D. students typically listen to me more carefully than professors do.)

Opasnet aims at science-policy revolution

The failure of the Copenhagen climate meeting showed that the current way of making policy does not work. Despite thousands of researchers working to collect and synthesise scientific information, and thousands of politicians working hard to develop policies on an urgent issue, the result was only a statement of good will and funding commitments. What went wrong?

I believe that there was a major gap between scientific information and its use in policy. In Copenhagen, the countries tried to make international policy as if it were a matter of mutual agreement. It is not. It is not even a matter of majority vote. If mankind takes the +2 °C target seriously, there are huge numbers of policies that are simply insufficient to reach that target, including the one that was agreed on in Copenhagen. In contrast, there are dauntingly few policies that would actually lead to the target and be implementable in the real world.

Finding those effective policies is a scientific effort more than a political one. We should see potential policies as scientific hypotheses. Everyone is encouraged to develop new hypotheses about good policies and gain merit for this. Then, as a joint effort, we should attack these hypotheses with scientific evidence and aim to show that they do not help us reach the target. Only those that stand up against the attacks are worth further consideration. Those that fail should be abandoned immediately. This is how science works at its best.

Thus, we should bring this scientific approach to the policy arena. We should also bring politicians to the scientific arena, with their potential policies and questions. And thirdly, we should bring the citizens into this open discussion to tell what our targets should actually be. Researchers should give their valuable time and capacity in the service of policy analysis. Politicians should accept that there can be normative policy analysis, which limits their degrees of freedom in developing policies. And citizens should understand that their political pressure is needed to make things move forward faster. The climate challenge is too urgent to rely on the standard administrative rate of change.

This is the science-policy revolution. Combine the potential policies with the current scientific understanding, and apply the scientific method to separate good policies from poor ones, based on value judgements by politicians and citizens. All this should be done by immediately sharing all relevant information to be used and evaluated by everyone. Opasnet is a web workspace for performing all the work described above. Opasnet itself is also a series of research questions and hypotheses, and it is under continuous scrutiny. You are welcome to bring information to, or attack, any proposed method or policy assessment you find.

We, as the Open Assessors' Network, respect your contributions, and we believe that future generations will, too. But if you really want to gain merit, focus on issues where information is scarce. Bring in information about poorly known issues rather than well-known ones. Attack hypotheses when there are plenty of competing ones, rather than attacking the only one left standing. If there is only one hypothesis, it is better to develop new ones instead. This is simply because Opasnet is practically oriented, and its outcomes will be applied in the real world as soon as possible. If the only hypothesis is truly a non-working one, we will find that out in practice very soon anyway.

Anyone can solve common problems. Opasnet is the web workspace for solving them, by you and by us together.

Science-policy revolution in a nutshell

I will briefly describe the main ideas behind the science-policy revolution. More thorough descriptions can be found on the dedicated pages.


Scientific information should be published immediately.

It often takes one to two years to get scientific data from the original data files into a peer-reviewed scientific article. Even then, the article typically does not contain the original data, only the analyses and conclusions of the original researcher. This is very inefficient. Just think of the alternative: You would get merit simply for creating study designs, without necessarily carrying them out yourself; other people might have the time and resources to do it quicker than you. When you do a study, you publish the results immediately. Other people are likely to do the statistical analyses better than you, especially if the original data from all other studies on the topic are available as well. And finally, you can participate in or read the single discussion of the topic together with all researchers, instead of having to write a separate discussion about your study alone and having to read all the other discussions about separate studies.

Why is this not done in a better way? Because in the current system, a scientist who releases any material other than manuscripts to peer-reviewed scientific journals is a fool. This is obvious when you think about it. The only thing that brings scientific merit is a scientific article. The whole system is flawed. We should simply make a revolution and change it. Researchers should get merit when they release information to others; otherwise they will keep it in their own drawers waiting for further analyses - which often never happen.


Peer review should be open, continuous, and occur only after publication of the information.

Peer review is currently thought of as the cornerstone of the quality of scientific information. Let's look at this opinion critically. Peer review simply means that two or three researchers from the same or a related field have read the manuscript and thought that it is of good enough scientific quality to deserve publication. This tradition developed in the early 20th century, when the scientific community grew so large and rich that most readers were no longer capable of evaluating the quality themselves.

Despite a century of success, peer review is problematic. For one thing, it is a very laborious system. Also, its criteria are actually very fuzzy. After all, what does "scientific quality" or "good enough" mean? Good enough for what purpose? When publishing was expensive, it was useful to remove rubbish from the publishing pipeline. But in the Internet era, publishing is practically costless. Huge amounts of rubbish are published all the time, while researchers prevent each other from publishing mediocre results. Thus, peer review actually decreases the quality of information available to people.

Again, let's think of an alternative. Any research study could be published immediately in a study repository. People interested in the quality of the study could ask for an evaluation. If nobody is interested, there is no point in evaluating the study. Also, if the result conforms to the current understanding of the topic, the informativeness of the study, and thus the benefit of evaluation, is low. But if the study actually changes our thinking, it deserves a much more rigorous evaluation than one by two anonymous researchers. It should have a discussion section where any researcher can evaluate the strengths and weaknesses of the study. If the original data are available, as suggested above, much more intensive evaluation is possible than with current peer review. The alternative approach would result in a more thorough review of important studies but a smaller review burden overall - a clear improvement.


Science should be organised as information objects based on research questions, not articles.

The structure of a typical scientific article has proven a good one for reporting a single study. However, organising scientific information with the study as the basic building block is inefficient. There are often several studies looking at the same topic. Therefore a topic, or more specifically a research question, should be the basic building block of information. All studies about the topic are just pieces of data under the same research question. Of course, it is practical to report studies separately, because they have their specific materials, methods, and observed data. However, statistical analyses of results, and discussions and conclusions based on them, should not be specific to a single study; they are better handled under the topic than under each study separately. For this reason, we should start organising information into studies (materials, methods, and data) and variables (analyses of data, discussions, and conclusions about a topic, i.e. a research question). The usability of scientific information would improve a lot.
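As a rough illustration only - not Opasnet's actual data model, and with all names and fields hypothetical - the split between studies and variables could be sketched like this:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Study:
        """One individual study: the parts that stay study-specific."""
        title: str
        materials_and_methods: str
        data: List[dict]      # original observations, published as such

    @dataclass
    class Variable:
        """An information object built around a single research question."""
        research_question: str
        studies: List[Study] = field(default_factory=list)  # all studies informing the question
        analysis: str = ""    # pooled statistical analysis across the studies
        conclusions: str = "" # discussion and conclusions about the question itself

    # A new study is attached to the variable that shares its research question,
    # and the pooled analysis and conclusions are updated for the topic as a whole.
    ghg_impact = Variable(research_question="How does increasing biofuel use affect greenhouse gas emissions?")
    ghg_impact.studies.append(Study("Hypothetical field study", "Materials and methods described here", data=[]))

The point of the sketch is only that study-specific material stays inside the study, while everything that concerns the research question as a whole lives in the variable.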


Policy decisions should be made based on policy impact assessments that are based on scientific information.

There is an old story about a parliament that wanted to vote against the law of gravity, because some politicians thought it was causing more harm than good. When the case is clear, the story is a joke. When the case is fuzzy, it becomes a real-world tragedy. The EU has decided that it will reduce greenhouse gas emissions by increasing the use of biofuels in transport. Many experts see this as a poor joke, because the current plans for biofuel production are likely to actually increase greenhouse gas emissions. On the other hand, there might be ways to achieve both the biofuel and the greenhouse gas targets, but this might cause collateral damage such as problems in food production or high costs.

The problem in this kind of policy-making is a poor allocation of tasks between science and policy. Policy-making should set the targets for things that have intrinsic value. In this case, greenhouse gas reduction has intrinsic value. In contrast, increasing biofuels has only instrumental value, as a putatively beneficial means of reducing emissions. What its actual impact is is not a policy question at all but something that scientists should answer. With biofuels, scientists are arriving one or two years too late. The answers should have been on the desks of the European Parliament before the decisions were made. Actually, the answers should have come so early that there would have been enough time to develop new policies for greenhouse gas reduction before the decisions.

With the current resources and information flows, this does not seem to happen. But if science were published in the way described above, we would actually have tens of thousands of ready-made meta-analyses (or variables, which are essentially critical summaries of published data) about all kinds of questions. Now we only have hundreds of thousands of articles that can be understood and utilised only by peer experts.

Variables can be used directly in policy assessments. Like Lego blocks, they can be combined into models that estimate the impacts of policy-relevant actions. And they can be reused in other, similar assessments. They are much more usable than the current articles.
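To illustrate the Lego-block idea, here is a minimal, purely hypothetical sketch - the variable names, policy options, and numbers are placeholders, not results from any assessment. An assessment model simply chains existing variables together, and each variable remains reusable in other assessments:

    # Hypothetical variables; in practice each would be backed by its own research question and data.
    def biofuel_share(policy: str) -> float:
        """Variable: share of biofuels in transport fuel under a given policy option."""
        return {"business_as_usual": 0.05, "biofuel_mandate": 0.20}[policy]

    def emission_change(share: float) -> float:
        """Variable: relative change in greenhouse gas emissions as a function of biofuel share."""
        # Placeholder relationship; in a real assessment this would come from a meta-analysis.
        return -0.4 * share

    def assessment(policy: str) -> float:
        """Assessment: chain the variables to estimate the impact of a policy option."""
        return emission_change(biofuel_share(policy))

    for policy in ("business_as_usual", "biofuel_mandate"):
        print(policy, assessment(policy))

If a variable is later improved with new data, every assessment that uses it is improved at the same time, which is exactly what a study-by-study article format cannot offer.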

See also