INFORMATION TIPS THE SCALES


2490 Lee Boulevard #205
Cleveland Heights, OH 44118-1255
Phone (216) 321-5018
Fax (216) 321-3731
Email: johnmiller@nasw.org

ANALYZING DATA

When I was a graduate student at the University of Michigan, I discovered mistakes in two statistical programs widely used on campus:

  • In one case the Institute for Social Research incorrectly "rewrote" its program for the F-test, a workhorse statistic used to compare the scores of three or more groups of people in an experiment or survey (a sketch of such a test appears after this list). In the few days between the time the programmer made the mistake and the time I caught it, that centrally important test was run hundreds of times by many different researchers, yet no one else spotted the error. Unless they later noticed it on their own, all of those researchers' results were wrong.
  • In the other case the university computing center's main data-analysis program told me that a statistic I used, called a factor analysis, explained over 1,600 percent of the differences of opinion among people in my research project. Since the most any statistic can explain is 100 percent, that was a big problem. But when I pointed out the error to the co-author of the main program, he "fixed" this small part by removing the feature that printed out the percentage explained. He didn't repair it. He just covered it up!

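To make the first anecdote concrete, here is a minimal sketch of such an F-test in Python, using SciPy's one-way ANOVA routine. The three groups of scores are invented for illustration; they are not data from the Michigan incident.

    # One-way F-test (ANOVA): do the average scores of three groups differ?
    # All scores below are made up for illustration.
    from scipy import stats

    group_a = [72, 75, 78, 71, 74]   # e.g., people who saw version A
    group_b = [80, 82, 79, 85, 81]   # e.g., people who saw version B
    group_c = [68, 70, 66, 71, 69]   # e.g., people who saw version C

    f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
    # A small p-value (below 0.05, say) suggests at least one group's
    # average score differs from the others.

A one-character bug inside a routine like this would silently corrupt every analysis built on top of it, which is exactly what made the error I caught so dangerous.
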
I tell these anecdotes not to criticize those two programs, which were otherwise wonderful, or their authors. But the stories show that analyzing statistics is not straightforward. It matters whom you choose to analyze your data, because the results don't always come out the same.

Too many researchers analyze data in a "cookbook" style. They ignore the technical requirements that must be satisfied before a given statistic can be used, and simply plug numbers into the formula. For instance, some statistical tests won't give accurate results unless people's measured responses to a question fit a "normal" or bell curve. Researchers who violate this requirement get incorrect results, which they unfortunately go on to publish in academic journals.
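
As a sketch of the kind of check a careful analyst runs first, the hypothetical Python snippet below uses SciPy's Shapiro-Wilk test to see whether a set of responses fits a normal curve before a parametric statistic is applied. The response values are invented.

    # Check the normality requirement before choosing a statistical test.
    # The survey responses below are invented for illustration.
    from scipy import stats

    responses = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 3.2, 2.7, 3.5, 3.0]

    stat, p_value = stats.shapiro(responses)  # Shapiro-Wilk normality test
    if p_value < 0.05:
        print("Responses depart significantly from a normal curve;")
        print("a nonparametric test would be safer.")
    else:
        print("No significant departure from normality detected.")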

Too many statisticians cannot "translate" their results so that non-experts understand them. Indeed, a statistics professor once told a colleague of mine that, of all the statistics books he had ever seen, only two were clearly written!

In contrast, I have been explaining complicated research results in newspapers, on TV, on the Internet, and in college classes since 1973.

A statistician who cannot explain clearly and simply, to you or to a jury, what your data mean could cost you a victory. Interpreting complicated scientific concepts in a courtroom is risky business. Millions of potential jurors "don't do" numbers. They prefer the dentist to division and arthritis to algebra. So entire juries all too often only half "get it."

Bottom line? I analyze data with the utmost attention to detail, and I explain the results so anyone can understand them.


To see examples of Dr. John Miller's work analyzing data, click here.