RECOGNIZING THE SIGNS OF BAD SCIENCE

QUESTIONS YOU SHOULD ASK BEFORE ACCEPTING
A SCIENTIFIC REPORT AS VALID:

Every year, many cases of bad science are palmed off onto the press as valid scientific findings, and the press publishes these reports as though they were proven facts. Here is a list of questions you should ask about each report before accepting it as valid:

  1. Have the results been verified by an independent study?

    If the results have not yet been verified, the report is nothing but a preliminary report. It should have been published only in scientific journals, not the general press. Independent verification by disinterested parties must occur before the study can be considered confirmed. And that verification must be obtained BEFORE the study is released to the press.

    "Independent verification" means that the group doing the verification must have no ties at all with the group doing the original study.

    "Disinterested" means that those doing the verification (or the study) have no stake in or desire for one of the possible outcomes of the study.

  2. Was a link of correlation misrepresented as a link of causality?

    This is a common mistake, and if the results have political connotations, it is often committed deliberately. To prove a causal link, one has to show that actively varying the variable said to be the cause produces the effect. No passive statistical study can ever be used to prove causality, though it can prove that causality is not present (see the sketch at the end of this question).

    Another mistake is to try to find a correlation with categorical data (see below). Categorical data can be tested for an association, but not a correlation.
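
    A minimal sketch of the causality point, in Python with numpy and scipy (the data are simulated and purely hypothetical): a hidden variable Z drives both X and Y, so a passive study finds a strong correlation between them, yet actively varying X by randomization shows no effect on Y.

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(0)
        z = rng.normal(size=10_000)                  # hidden common cause
        x = z + rng.normal(scale=0.3, size=10_000)   # "cause" under study
        y = z + rng.normal(scale=0.3, size=10_000)   # observed effect

        r, _ = pearsonr(x, y)
        print(f"passive study: r = {r:.2f}")         # ~0.9, looks causal

        # Intervene: assign X at random, independent of Z.
        x_forced = rng.normal(size=10_000)
        r2, _ = pearsonr(x_forced, y)
        print(f"active experiment: r = {r2:.2f}")    # ~0.0, no causal link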

  3. Are several predicted results of an effect being used to prove the presence of the effect itself?

    This is another common mistake. In this case also, if the results have political connotations, it is often committed deliberately. The direction of proof runs from cause to effect: establishing the cause can be used to prove the effect, not the other way around. The only way to use the effects to prove the presence of the cause is to prove that nothing else can produce any of those effects.

    See "Affirming the Consequent" on my BAD REASONING page for more on this.

  4. Were statistics or other social-science methods used to prove a result in the realm of the physical sciences?

    This is a big no-no! Social-science methods were developed because direct-action experimental methods can cause harm to the people being studied. But the social-science methods are much weaker, and cannot be substituted for the rigors of the laboratory methods of the physical sciences.

  5. Was the wrong population studied?

    This is one of the most ubiquitous mistakes in bad science. The researcher "extends" his limited funding by confining his research entirely to specimens that have the property whose cause he wants to study. The mistake is that he totally ignores the specimens which do not have that property. Without results for those specimens, he has nothing to compare his results to.

    It could be that he has discovered an uncorrelated factor which is present in the same proportions both in specimens with the property and in those without it. But since he has not collected any samples without the property, he will never know (a sketch of this appears at the end of this question).

    Any unorthodox method of collecting samples can also produce the wrong population. Any use of advertising to find subjects is automatically suspect in any social science, because only certain personality types would answer the ad.
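
    A minimal sketch of the wrong-population error (simulated, hypothetical numbers): a factor occurs at the same 40 percent rate whether or not a specimen has the property. A researcher who samples only specimens with the property sees "40 percent have the factor" and, with no comparison group, cannot tell that the factor is equally common everywhere.

        import numpy as np

        rng = np.random.default_rng(1)
        pop = 100_000
        has_property = rng.random(pop) < 0.10   # 10% have the property
        has_factor = rng.random(pop) < 0.40     # 40%, independent of it

        print(f"factor among those WITH the property: "
              f"{has_factor[has_property].mean():.2f}")    # ~0.40
        print(f"factor among those WITHOUT it:        "
              f"{has_factor[~has_property].mean():.2f}")   # ~0.40, the same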

  6. Did the researcher fail to have a control group?

    In active experimentation, this is the same error as the one described in the question above. The experimenter gave the experimental treatment to all of the test specimens; none of them received the control treatment (which is, in effect, to do nothing). Again, there is nothing to compare the result to.

    In cases where there is no difference in how subjects or samples are actively treated, failure to collect a totally random and unbiased sample constitutes the same error (see the sketch below).
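
    A minimal sketch (simulated data) of the comparison a control group makes possible: a two-sample t-test of treated specimens against untreated controls. With no control array, there is nothing to test against.

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(2)
        control = rng.normal(loc=10.0, scale=2.0, size=50)  # no treatment
        treated = rng.normal(loc=11.5, scale=2.0, size=50)  # treated group

        t, p = ttest_ind(treated, control)
        print(f"t = {t:.2f}, p = {p:.4f}")   # small p: treatment differs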

  7. Was the correct expected value for the null hypothesis used?

    This is often done incorrectly, especially in the area of genetics. Many genetics studies by bad scientists are analyzed incorrectly because the experimenter used the wrong expected values for an effect that is randomly distributed with respect to the gene being studied.

    A randomly distributed gene occurs in 75 percent of the population, not 50 percent: when the two alleles are equally common, the four possible pairings (AA, Aa, aA, aa) are equally likely, and three of the four carry the gene. A dominant gene is therefore actively expressed in 75 percent of the population. If the gene is recessive, it is expressed in only 25 percent of the population, but a DNA probe will still find it in 75 percent. Those using social-science norms tend to erroneously use a value of 50 percent.

    Other places where the wrong null hypothesis value can be found are in areas of probability of events, and events where several causes must act simultaneously or in sequence to produce the effect being studied. Most people will come up with the wrong expected values for games of chance and elections.

    Another case where a wrong expected value is used is where the data are categorical. The expected values must be obtained from the margins of the crosstable, not from any preconceived value (both cases are sketched below).
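
    A minimal sketch of both points, with hypothetical counts. First, the genetics case: testing 200 offspring against the wrong 50/50 null declares a bogus effect, while the correct 75/25 null does not. Second, for categorical data, scipy's chi2_contingency derives the expected counts from the crosstable margins automatically.

        from scipy.stats import chisquare, chi2_contingency

        observed = [152, 48]   # offspring showing / not showing the trait
        print(chisquare(observed, f_exp=[100, 100]))  # wrong 50/50 null:
                                                      #   p is tiny
        print(chisquare(observed, f_exp=[150, 50]))   # correct 75/25 null:
                                                      #   p ~ 0.74

        crosstable = [[30, 10],    # hypothetical 2x2 crosstable
                      [20, 40]]
        chi2, p, dof, expected = chi2_contingency(crosstable)
        print(expected)            # expected counts from the margins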

  8. Does the conclusion actually fit the data collected?

    It is amazing how often a conclusion does not agree with the data obtained. One study reported a causal link when only 27 percent of those with the studied property had the indicated causal factor. This is ridiculous, because the other 73 percent with the effect did NOT have the factor which was claimed to be the cause.

    Often these poor conclusions are the result of bad math, or of not understanding the meaning of the numbers obtained by the procedure. Other times, the conclusions given are outright lies, intended to fool the members of the press who do not understand mathematics or science.

  9. Was the researcher under any pressure to complete the study within a certain period of time or a certain budget?

    If so, the researcher may have been forced to cut corners to meet the deadline. In several cases, the researcher was a college student working on a dissertation, and some unfortunate accident damaged the experiment; the student then faked the results so he could receive his degree on time.

    In one of these cases, an exterminating company accidentally killed all of the test animals halfway through the test run. The student faked the results to get his degree, and a recall of a well-known product followed. His results, too, were never verified.

  10. Was there any financial or other pressure to find a certain result?

    If so, the study is suspect from the start. It is biased.

    It was recently revealed that a government research funding agency was denying funding to anyone whose opinion is that a controversial environmental threat is not real.

  11. Did the researcher have an axe to grind?

    If so, this study is also suspect from the start. It is biased: the researcher is not disinterested.

    The purpose of a scientific study is always to FIND OUT what the truth is. This must be approached blindly, with no prejudice for or against any possible outcome. If anything other than this is done, it is not science.

    Politics has NO PLACE in science. Favoring a particular result does not make it so. But it can make a biased researcher imagine a correlation, an association, or a causal connection where none exists.

  12. Was the wrong statistical method used?

    It is amazing how researchers collect volumes of data, and then choose the wrong statistic to analyze the data.

    Often, the researcher has an adequate scientific background, but lacks the understanding of statistics needed to choose the correct statistical method for the circumstances.

    On the other hand, the wrong statistic is often chosen because the correct statistic failed to produce the "wanted" result. Again, the desires of the experimenter have no place in science.

    1. Was the wrong statistic used because the researcher didn't know the difference between numerical and categorical data?

      Many don't know that different statistical methods are needed for each kind of data:

      • Numerical data are values that are obtained through actual measurements.

        Calculations for numerical values include normal, t, and correlation statistics.

      • Categorical data are obtained by observation for yes-no or multiple choice states.

        Calculations for categorical data include crosstables, Cramer's coefficient, Chi-squared, and association statistics.

      • A third set of tools is provided for cases with mixed numerical and categorical data.

    2. Was the wrong statistic used because the researcher didn't know how to or didn't want to use statistics for categorical data?

      Many researchers wrongly try to create numerical data from categorical data by finding the percentage of each category. But this gives false results.

      Some do this because they want a correlation (obtained from numerical data) to impress the news media. The media have usually never heard of an association (obtained from categorical data).

      Others deliberately choose to do this because the correct way didn't give the numbers they wanted (the correct treatment is sketched below).
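
      A minimal sketch of the correct treatment (hypothetical counts): testing a 2x2 crosstable for association with chi-squared and measuring its strength with Cramer's coefficient, instead of converting the categories to percentages and running a correlation.

          import numpy as np
          from scipy.stats import chi2_contingency

          table = np.array([[45, 15],    # category A: outcome yes / no
                            [20, 40]])   # category B: outcome yes / no

          chi2, p, dof, expected = chi2_contingency(table, correction=False)
          n = table.sum()
          k = min(table.shape) - 1       # here 1, for a 2x2 table
          cramers_v = np.sqrt(chi2 / (n * k))
          print(f"chi2 = {chi2:.2f}, p = {p:.6f}, V = {cramers_v:.2f}")
          # small p: an association exists; V ~ 0.42: moderate strength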

  13. Was some factor overlooked?

    Often a researcher makes an assumption that no other factors are at work. If that assumption is false, the results of the experiment will be wrong too.

    One example is that environmentalists assumed that ice cannot change the composition of the air trapped inside it. In reality, ice removes carbon dioxide from the air through the formation of carbonic acid, and deposits it in the sea or in rock. Carbon dioxide was the very gas they were studying.

  14. Is the direness of the predicted result of one possible outcome affecting the objectivity of the researcher?

    When a researcher discovers a possible outcome of the system being studied that has a frightening result (such as thousands of deaths), it often causes the scientist, sometimes subconsciously, to "err on the side of caution." So he slants the research to indicate that the dangerous outcome is likely, so that precautions can be put in place to prevent it.

    An example of this is the unwarranted fear of nuclear power, based on the nearly impossible event of a nuclear reactor exploding. Thousands of "scientists" banded together out of fear of an outcome that is virtually impossible.

  15. Is emotion a factor?

    If emotion is being presented in any argument requesting government action to correct a perceived hazard of a scientific nature, it has NO BEARING AT ALL on whether or not the hazard actually exists. Emotion has NO place in science. Any attempt to inject emotion into a scientific argument is usually a sign that the person making the argument has a motive other than finding out the truth.

  16. Is a political belief a factor?

    If a political belief is being presented in any argument requesting government action to correct something of a scientific nature, it has NO BEARING AT ALL on whether or not the problem actually exists. Politics have NO place in science. Any attempt to inject politics into a scientific argument is usually a sign that the person making the argument has a motive other than finding out the truth.

  17. Is a criminal motive a factor?

    If a criminal motive lies behind any argument requesting government action, it has no bearing on the science. Action for personal gain has no place in science, and any attempt to falsify a scientific argument is usually a sign that the person making the argument has a motive other than finding out the truth.

  18. Is a religious belief a factor?

    If a religious belief is being presented in any argument requesting government action, it has no bearing on the science, and the science has no bearing on the religious belief. Neither can be used to prove or disprove the other.

    Religion and science are normally not incompatible; they agree in most areas. When there is a disagreement, there is usually a political reason for the dispute: someone wants something that violates someone else's religion.

    The usual reason science is introduced into a religious debate is to try to disprove someone else's belief. This is a misuse of science.

    The usual reason religion is introduced into a scientific debate is not to try to change the science, but to explain the religious objection to a proposed law.


