Data Analysis and Interpretation
Look before you leap. Begin analyzing your data by listing all the research questions you would like to answer using the variables you have. Start with the most important questions and then go to the minor ones. If you have collected information on a lot of variables, don't try to include them all. Keep in mind the main purpose of your study and stick to the questions that pertain to it. It can also be helpful at this time to rough out the tables and graphs you plan to include and to start planning the "story line" for your results section. This will provide a roadmap for your analysis.
Choosing statistical methods. Once you have a clear idea of what questions you want to ask, choosing the right statistical technique will depend on characteristics of your experimental design and the kinds of variables you will be using. Answers to a few questions about your study (How many groups are being compared? Are the measurements paired or independent? Is each variable continuous or categorical?) will steer you toward the correct analysis.
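As a rough illustration of how those design questions map onto common tests, here is a small lookup sketch in Python. The categories and suggestions are deliberate simplifications of textbook rules of thumb, not a substitute for statistical advice:

```python
# Rule-of-thumb test selection for simple two-variable comparisons.
# These mappings are illustrative simplifications; consult a statistician
# for anything beyond a standard textbook design.

def suggest_test(outcome, n_groups, paired=False):
    """Suggest a common statistical test given the outcome type,
    the number of groups, and whether measurements are paired."""
    if outcome == "continuous":
        if paired:
            return "paired t-test (or Wilcoxon signed-rank if non-normal)"
        if n_groups == 2:
            return "two-sample t-test (or Mann-Whitney U if non-normal)"
        return "one-way ANOVA (or Kruskal-Wallis if non-normal)"
    if outcome == "categorical":
        return "chi-square test (or Fisher's exact test for small counts)"
    raise ValueError("outcome must be 'continuous' or 'categorical'")

print(suggest_test("continuous", 2))
print(suggest_test("categorical", 2))
```

The point of the sketch is the decision structure, not the code: each branch corresponds to one of the design questions above.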
Performing the analyses. Although Excel has the tools for many statistical operations, it's easier and more foolproof to use a statistics program instead. SPSS can read Excel files directly and does analyses quickly and easily. The library computers have SPSS or you can buy it yourself from the bookstore for about $75. You can get statistical help from UW Statistical Consulting Services (http://www.stat.washington.edu/consulting/) or from the Student Resource Office in the Medical School.
How to interpret p-values. A low p-value (&lt;.05) is usually considered "significant". Significant, in this statistical sense, means that there is a low probability that the observed effect occurred by chance, that is, by the luck of the draw rather than because of some systematic difference between study groups. Two caveats are in order when interpreting significant p-values. First, pay attention to the size of the effect (the measured difference between groups) as well as the p-value. Ask yourself whether the observed difference is also clinically significant, meaning large enough to really matter. Second, before you can attribute the observed difference to a putative cause, you need to consider the contribution of confounding variables.
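To make the distinction between the p-value and the effect size concrete, here is a small sketch using an independent two-sample t-test from scipy. The blood-pressure values are invented for illustration; the point is that the test reports both a p-value (is the difference likely due to chance?) and, separately, a raw mean difference whose clinical importance you must judge yourself:

```python
# Hypothetical example: systolic blood pressure (mmHg) in a control group
# and a treated group. All values are made up for illustration only.
from scipy.stats import ttest_ind

control = [128, 131, 125, 136, 129, 133, 127, 130, 134, 126]
treated = [122, 127, 120, 130, 124, 128, 121, 125, 129, 123]

# The test answers: how likely is a difference this large by chance alone?
t_stat, p_value = ttest_ind(control, treated)

# The effect size answers a different question: how big is the difference,
# in the units you actually measured?
effect = sum(control) / len(control) - sum(treated) / len(treated)

print(f"p = {p_value:.4f}, mean difference = {effect:.1f} mmHg")
```

A tiny p-value with a clinically trivial mean difference, or a large difference with a non-significant p-value, each tell a different story; report both.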
Non-significant results can be just as interesting as significant results. Results can fail to achieve significance for two reasons: either there is no effect of the treatment being tested, or the study included too few subjects to demonstrate significance. The first possibility is informative; the second is not. To distinguish between the two you need to examine the power of your study. If you are trying to interpret negative results, this paper may help you: Detsky, AS and Sackett, DL (1985) When was a "negative" clinical trial big enough? Arch Intern Med 145, 709-712.
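A back-of-the-envelope power calculation can be sketched with the normal approximation to the two-sample t-test. The effect size and sample size below are illustrative assumptions, not values from any real study:

```python
# Rough power calculation for comparing two group means, using the normal
# approximation to the two-sample t-test. Inputs are illustrative only.
from math import sqrt
from scipy.stats import norm

alpha = 0.05          # two-sided significance level
effect_size = 0.5     # standardized difference between means (Cohen's d)
n_per_group = 64      # subjects in each group

# Critical z for a two-sided test at level alpha.
z_crit = norm.ppf(1 - alpha / 2)

# Probability of detecting an effect of this size with this sample size.
power = norm.cdf(effect_size * sqrt(n_per_group / 2) - z_crit)

print(f"approximate power = {power:.2f}")
```

With these assumptions the power comes out near the conventional 0.80 target; a much smaller sample would leave the study unable to distinguish "no effect" from "too few subjects", which is exactly the ambiguity discussed above.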