How were the data analyzed?
When determining whether a particular study did a good job of analyzing its data, it is important to distinguish between quantitative data and qualitative data (see also Creswell, 2002).
Quantitative Data Analysis
Researchers analyze quantitative data through statistics. The wide availability of statistical software makes it easy for researchers to analyze data, but it also makes it easy to use statistics incorrectly, leading to invalid research conclusions.
The computation of inferential statistics is the primary basis for research conclusions about a treatment effect – that is, that a treatment or intervention worked. A statistically significant effect at the .05 level means that, if the treatment truly had no effect, a result at least this large would be expected to occur by chance 5% of the time or less. By convention, social scientists have chosen this percentage as the cut-off point (although other cut-offs are sometimes chosen). Thus, when a result is statistically significant, the researcher concludes that the treatment effect is unlikely to have occurred by chance.
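The logic of "occurring by chance" can be made concrete with a permutation test, one common way of estimating a p-value: shuffle the group labels many times and ask how often chance alone produces a difference as large as the one observed. The scores below are invented for illustration.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical outcome scores for a treatment and a control group
# (made-up numbers for illustration only).
treatment = [78, 85, 82, 90, 88, 84, 91, 87]
control = [74, 79, 76, 81, 77, 80, 75, 78]

observed = sum(treatment) / len(treatment) - sum(control) / len(control)

# Permutation test: reshuffle all scores into two arbitrary groups many
# times and count how often a difference at least as large arises by chance.
pooled = treatment + control
n = len(treatment)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / n
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.2f}")
print("significant at .05" if p_value <= 0.05 else "not significant at .05")
```

Because the two invented groups barely overlap, almost no random shuffle reproduces the observed difference, so the estimated p-value falls well below the .05 cut-off.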
Researchers should not discuss nonsignificant results – those whose probability of occurring by chance exceeds the cut-off – as if they indicated real treatment effects or group differences.
The probability of detecting a statistically significant effect increases with the size of the sample. There are two consequences of this relationship. First, a treatment effect might not be detected in a research study with a small sample size (e.g., less than 30 participants). As a result, the researcher’s conclusion that the treatment has no effect might be invalid. Second, with a large sample size, a very small treatment effect can be statistically significant, but the practical significance of the treatment might be limited. For this reason, the researcher should report the effect size of the treatment.
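One widely used effect-size index is Cohen's d: the difference between group means divided by the pooled standard deviation. The sketch below uses invented scores to show why a report should include it; a statistically significant difference can still be small in these standardized units.

```python
from statistics import mean, stdev

# Hypothetical test scores (made-up numbers for illustration only).
treatment = [82, 85, 88, 84, 90, 86, 83, 87]
control = [80, 83, 85, 82, 87, 84, 81, 85]


def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_sd = (((n1 - 1) * stdev(group1) ** 2 +
                  (n2 - 1) * stdev(group2) ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd


d = cohens_d(treatment, control)
print(f"Cohen's d = {d:.2f}")
```

By a common rule of thumb, d around 0.2 is a small effect, 0.5 medium, and 0.8 large; unlike a p-value, d does not grow simply because the sample does.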
The concept of error is at the heart of inferential statistics. The more error that occurs in a study, the more the scores will vary. The more variability there is, the less likely it is that a treatment effect will be detected. Think of error and variability as background noise and the treatment as a sound. When there is too much noise, some sounds cannot be detected. Error in a research study can occur due to small sample sizes, unsystematic treatment implementation and unreliable measurement. The researcher should report the efforts made to standardize the treatment and the measurement (such as pilot-testing the treatment and training the data collectors).
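The noise analogy can be simulated: give two samples the same assumed treatment effect but different amounts of random error, and the effect stands out clearly only when variability is low. All numbers below are invented for illustration.

```python
import random
from statistics import mean, stdev

random.seed(42)  # fixed seed so the sketch is reproducible

TRUE_EFFECT = 5.0  # assumed treatment effect, in score points


def simulate_group(true_mean, error_sd, n=30):
    """Each observed score is the true score plus random error."""
    return [random.gauss(true_mean, error_sd) for _ in range(n)]


ratios = {}
for error_sd in (2.0, 20.0):
    treated = simulate_group(100 + TRUE_EFFECT, error_sd)
    control = simulate_group(100, error_sd)
    diff = mean(treated) - mean(control)
    # Signal-to-noise: the observed effect relative to score variability.
    ratios[error_sd] = diff / stdev(treated + control)
    print(f"error sd {error_sd:>4}: observed difference {diff:6.2f}, "
          f"effect-to-variability ratio {ratios[error_sd]:.2f}")
```

With little error, the 5-point effect dominates the variability; with heavy error, the same true effect is buried in the noise and is much harder to detect.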
For a deeper understanding of these statistical concepts, see the UNDERSTANDING STATISTICS TUTORIAL.
Qualitative Data Analysis
In qualitative research, the data consist of narrative descriptions and observations. Although statistics are not used, qualitative data analyses need to be systematic to support valid research conclusions. Organization is at the heart of qualitative data analyses. In most qualitative research studies, large amounts of descriptive information are organized into categories and themes through coding. Coding is designed to reduce the information in ways that facilitate interpretations of the findings. A report on qualitative research should give detailed descriptions of the codes and the coding procedures.
Here is an example of coding qualitative data:
A researcher interviews the principals of 10 elementary schools to answer the following research question: “What challenges do schools face when adopting a comprehensive reform model?” The researcher reads the transcriptions of the interviews and lists all the topics that the 10 interviews addressed. Next the researcher groups similar topics into categories such as “parent approval,” “teacher collaboration” and “time issues.” The researcher uses these categories to code each interview and then assembles the information for each coded category across the 10 interviews. The researcher can then describe, for example, the degree to which parent approval was a challenge for the interviewed principals.
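The coding step in the example above can be sketched as a simple tally: apply a codebook to each interview, then assemble the coded information across interviews. The interview snippets and keyword codebook below are invented for illustration; real qualitative coding relies on researcher judgment, not keyword matching.

```python
# Invented interview excerpts (hypothetical data).
interviews = {
    "principal_1": "Parents were hesitant at first, and finding common "
                   "planning time was hard.",
    "principal_2": "Teachers collaborated well, but the schedule left "
                   "little time for training.",
    "principal_3": "We held meetings to win parent approval before "
                   "adopting the model.",
}

# A toy codebook mapping each category to indicator terms.
codebook = {
    "parent approval": ["parent"],
    "teacher collaboration": ["collaborat"],
    "time issues": ["time", "schedule"],
}

# Code each interview, then assemble the results across interviews.
coded = {category: [] for category in codebook}
for who, text in interviews.items():
    lowered = text.lower()
    for category, keywords in codebook.items():
        if any(kw in lowered for kw in keywords):
            coded[category].append(who)

for category, sources in coded.items():
    print(f"{category}: mentioned in {len(sources)} "
          f"of {len(interviews)} interviews")
```

The assembled tallies let the researcher describe, for instance, how widespread the "parent approval" challenge was across the interviewed principals.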
Qualitative researchers use verification methods to support their conclusions. For example, through triangulation of results, information from different measures in the study, such as interviews and documents, converges to support an interpretation. Member checking involves reporting the results of data analyses (i.e., the categories and themes) to the participants to verify that the researcher’s interpretations are correct. A researcher also can verify findings by conducting a deliberate search for disconfirming evidence, which is information that does not fit the categories, themes and interpretations.
The concept of error also applies to qualitative research studies. To minimize error, qualitative researchers need to maintain careful records of their field notes and observations. For this reason, interviews are often recorded and transcribed.