Evaluating Data and Evidence

Many people mistakenly believe that the terms “data” and “evidence” are interchangeable, but the two words do not mean the same thing.

Data is factual information such as numbers, percentages, and statistics.

Evidence is data that is relevant and furnishes proof that supports a conclusion.

Evidence is the information that helps in the formation of a conclusion or judgment. Whether you realize it or not, you provide evidence in most of your conversations – it is everything you say to try to support your claims.

Scientific Evidence

Ultimately, scientific ideas must not only be testable but must actually be tested — preferably with many different lines of evidence by many different people. This characteristic is at the heart of all science. Scientists actively seek evidence to test their ideas — even if the test is difficult and means, for example, spending years working on a single experiment, traveling to Antarctica to measure carbon dioxide levels in an ice core, or collecting DNA samples from thousands of volunteers all over the world.

Performing such tests is so important to science because, in science, the acceptance or rejection of a scientific idea depends upon the evidence relevant to it — not upon dogma, popular opinion, or tradition. In science, ideas that are not supported by evidence are ultimately rejected. And ideas that are protected from testing or are only allowed to be tested by one group with a vested interest in the outcome are not a part of good science.

The phrase “scientific evidence” has become part of the vernacular – thrown about like a hot potato during discussions of major environmental, health or social issues. Climate change is one example. The EU’s ban on neonicotinoid pesticides is another.

Scientific evidence is information gathered from scientific research, which takes a lot of time (and patience!) to conduct. But there are a few things that all this research needs to have in common to make it possible for decision-makers, and ultimately all of us, to accept it as “evidence”.

Objective and unbiased

Research needs money to pay for laboratory equipment, field surveys, and materials – not to mention the wages of all the people involved in the project. The majority of researchers have to constantly apply for funds to carry out their research. These funds can come from different places, usually government bodies, corporations, academic or research institutions, non-profit organizations, or industry bodies. Applications are judged on scientific merit and their relevance to society or the funding body’s interests.

Mostly, funds are distributed fairly. But if an organization funds a research project that will benefit them financially, then we cannot accept the findings as “evidence” unless different researchers (from unrelated organizations) come to the same conclusions through their own independent research.

Ensuring results will be valid and accurate

Scientific evidence relies on data, and it is crucial for researchers to ensure that the data they collect is representative of the “true” situation. This means using proven, appropriate methods of collecting and analyzing the data, and ensuring the research is conducted ethically and safely.

Control scenarios may also be necessary when testing for effects or impacts – such as when developing new products (such as medicines), or evaluating management actions (such as farmland pesticide use). The control scenario is identical to the test scenario except that the factor being tested is absent. This way, any results seen in the test scenario can be attributed to the tested product or impact, and nothing else.

If the scenario involves environmental processes of some kind, the test and control should ideally be carried out under natural conditions (or in an environment where these processes normally occur).

Sometimes this can be virtually impossible to do, and lab-based or combined lab/field studies will need to be done instead so the “nuisance factors” can be controlled.

Take the recent neonicotinoid issue. If a researcher wants to prove that use of a pesticide does not affect bees flying about in the environment where the chemical is normally used, they will need to test two different scenarios.

One hive of bees will have to go about their business out in the field while being exposed to the pesticide. A second hive of bees will have to be in the same general environmental location as the first hive (to ensure both hives experience the same overall living conditions) but remain completely uncontaminated by the pesticide throughout the test.

It’s obvious how impossible this would be to manage under natural conditions, where no one can control the drift of chemical droplets or the movement of tiny insects across the landscape!  In this case, completely field-based studies may not exist, but it would be misleading to say that a “lack of field studies” means that the pesticide does not affect bees.
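
To make the test-versus-control logic concrete, here is a minimal sketch in Python using entirely made-up numbers. It compares a hypothetical measure (daily counts of foraging bees returning to each hive) between an exposed hive and an unexposed control hive using a two-sample t-test; the figures, sample sizes, and significance threshold are illustrative assumptions, not real study data.

    # A minimal sketch of a test-vs-control comparison, using made-up numbers.
    # Each list holds a hypothetical daily count of foraging bees returning to a hive.
    from scipy import stats

    exposed_hive = [182, 175, 168, 171, 160, 158, 166, 152]  # exposed to the pesticide (hypothetical)
    control_hive = [185, 190, 178, 183, 181, 188, 176, 184]  # unexposed control (hypothetical)

    # A two-sample t-test asks: could the difference between the two hives
    # plausibly be due to chance alone, rather than the pesticide?
    result = stats.ttest_ind(exposed_hive, control_hive)
    print(f"t-statistic: {result.statistic:.2f}, p-value: {result.pvalue:.4f}")

    # A small p-value (conventionally below 0.05) suggests the difference is unlikely
    # to be chance alone - but only if the control hive really did experience the same
    # conditions as the exposed hive, apart from the pesticide.

Even then, the numbers only become “evidence” once the other conditions described here are met: sound data collection, independence from vested interests, and peer review.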

Peer review and professional consensus

This step is the most crucial, and it turns research into the “evidence” that we all talk about. The researcher has to present their data, results, and conclusions in the form of a scientific report or paper. This must be reviewed by their scientific peers – only they are qualified to assess the validity of the methods and the accuracy of the conclusions the researcher has drawn from the results.

Having research findings published in an international peer-reviewed journal means that other professional scientists who specialize in that kind of research have verified the quality and validity of the research.

This process takes a long time – from submission of the manuscript to a journal through to final publication can take six months to a year, and often longer.

For really important decisions, especially ones that will affect lots of people (how we should manage our national parks, for example), multiple studies may need to be sourced to show that a majority of scientists experienced with the issue agree on the evidence (just like a jury in a court case).

This is to show there is a “scientific consensus” on the evidence, and it provides even more reason for taking action on the issue at hand.

Of course, not everyone agrees on everything – think of any topic from the Earth being round, to what you and your family will eat for dinner tonight! So if a few scientists disagree with the majority group of scientists over a particular issue, that is not immediate proof that the evidence is wrong, and neither is it shocking or newsworthy.

Interpreting the evidence presented to us

Most of us hear of “scientific evidence” from journalists, newsreaders, politicians or media commentators, and often we don’t have the opportunity to check the facts ourselves. But understanding where true scientific evidence comes from, and what it means, is imperative to helping us tackle the most important issues affecting our own lives and the world we live in.

So the next time someone says they have “scientific evidence” to back up their case, ask a few questions. Who funded the research and why? How much evidence is there and how was it gathered? Was the sample size or location representative of the “real” situation?

Has the research been published in an internationally accepted, peer-reviewed journal, or is it only available online on a personal or organizational website? Do a majority of other scientists agree with these results? If a few disagree, are they qualified to evaluate the issue? (For example, a medical doctor and an astronomer are both scientists – but that doesn’t mean the astronomer is qualified to perform heart surgery!)

And if someone claims there is a “lack” of evidence on a contested issue, ask them to clarify. Do they mean that peer-reviewed research has been carried out and found no proof of an effect? Or do they mean that no one has yet funded research to examine the issue? These do not mean the same thing.

Evaluating Sources Checklist

  1. Authority:
    • Who created the site?
    • What is their authority?
      • Do they have expertise or experience with the topic?
      • What are their credentials, institutional affiliation?
    • Is organizational information provided?
    • Does the URL suggest a reputable affiliation with regard to the topic – personal or official site; type of Internet domain (e.g., .edu: educational institution; .org: non-profit organization; .com: commercial enterprise; .net: Internet Service Provider; .gov: governmental body; .mil: military body)? (See the domain-check sketch after this checklist.)
  2. Objectivity:
    • Is the purpose and intention of the site clear, including any bias or particular viewpoint?
      • Are the purpose and scope stated?
      • Who is the intended audience?
      • Is the information clearly presented as being factual or opinion, primary or secondary in origin?
      • What criteria are used for inclusion of the information?
      • Is any sponsorship or underwriting fully disclosed?
  3. Accuracy:
    • Is the information presented accurate?
      • Are the facts documented or well-researched?
      • Are the facts similar to those reported in related print or other online sources?
      • Are the Web resources for which links are provided quality sites?
  4. Currency:
    • Is the information current?
      • Is the content current?
      • Are the pages date-stamped with last update?
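
As a rough illustration of the domain check in the Authority item above, the following Python sketch maps a URL’s top-level domain to the categories listed in the checklist. The example URL is hypothetical, and domain type is only one clue among many – it never substitutes for the other checks.

    # A rough sketch of the domain check from the Authority item above.
    # The example URL is hypothetical; domain type is a clue, not proof of quality.
    from urllib.parse import urlparse

    DOMAIN_TYPES = {
        "edu": "educational institution",
        "org": "non-profit organization",
        "com": "commercial enterprise",
        "net": "Internet Service Provider",
        "gov": "governmental body",
        "mil": "military body",
    }

    def domain_type(url: str) -> str:
        """Return the checklist category for a URL's top-level domain, if listed."""
        hostname = urlparse(url).hostname or ""
        tld = hostname.rsplit(".", 1)[-1].lower()
        return DOMAIN_TYPES.get(tld, "unlisted or country-specific domain")

    print(domain_type("https://example.edu/research/report"))  # -> educational institution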


A GOOD QUESTION TO ASK:
“What evidence would it take to convince you/change your mind?”

The answer to this question can provide a lot of insight into how reasonable the person is, their requirements for evidence, their perspectives, and a great deal more.