Given the variety of experimental designs, potential types of data, and analytical approaches, it is nearly impossible to develop a cookbook approach to reporting data summaries and analyses. That being said, this chapter aims to give some broad and practical advice for this task.
Packages used in this chapter
The packages used in this chapter include:
• FSA
The following commands will install these packages if they are not already installed:
if(!require(FSA)){install.packages("FSA")}
Reporting analyses, results, and assumptions
The following bullets list some of the information you should include in reporting summaries and analyses. These can be included in a combination of text, plots, tables, plot captions, and table headings.
A variety of plots are shown in the Basic Plots chapter, as well as in the chapters for specific statistical analyses. Simple tables for grouped data are shown in the chapters on Descriptive Statistics, Confidence Intervals, Descriptive Statistics for Likert Data, and Confidence Intervals for Medians.
Procedures
• A description of the analysis. Give the test used, indicate the dependent variables and independent variables, and describe the experimental design or model in as much detail as is appropriate. Mention the experimental units, dates, and location.
• Model assumption checks. Describe what assumptions about the data or residuals are made by the test and how you assessed your data’s conformity with these assumptions.
• Cite the software and packages you used.
• Cite a reference for statistical procedures or code.
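As a sketch of what assumption checks might look like in practice, the following uses base R on invented data; the data frame dat and the model are hypothetical and stand in for whatever model you actually fit.

```r
# Hypothetical data and model, for illustration only
set.seed(1)
dat <- data.frame(group = factor(rep(c("A", "B"), each = 10)),
                  score = c(rnorm(10, mean = 50), rnorm(10, mean = 60)))
model <- lm(score ~ group, data = dat)

# Visual checks of residuals for normality and homoscedasticity
hist(residuals(model))
plot(fitted(model), residuals(model))

# A formal normality test, if desired (interpret cautiously)
shapiro.test(residuals(model))
```

Visual inspection of the residual plots is usually sufficient to report; formal tests such as shapiro.test are sensitive to sample size and should not replace judgment.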
Results
• A measure of the central tendency or location of the data, such as the mean or median, for groups or populations
• A measure of the variation for the mean or median of each group, such as the standard deviation, standard error, first and third quartiles, or a confidence interval
• Size of the effect. Often this is conveyed by presenting means and medians in a plot or table so that the reader can see the differences. Sometimes, specific statistics are reported to indicate “effect size”. See the “Optional technical note on effect sizes” below for more information.
• Number of observations, either for each group or for total observations
• p-value from the analysis
• Goodness-of-fit statistics, if appropriate, such as r-squared or pseudo R-squared
• Any other relevant statistics, such as the predictive equation from linear regression or the critical x value from a linear plateau model.
• Results from post-hoc analyses. Summarize the differences among groups with letters, in a plot or table. Include the alpha value for group separations.
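To make these bullets concrete, here is a minimal base-R sketch on invented data, showing where the p-value and the post-hoc comparisons used for letter groupings come from; the data are hypothetical.

```r
# Hypothetical one-way data, for illustration only
set.seed(2)
dat <- data.frame(group = factor(rep(c("A", "B", "C"), each = 8)),
                  score = rnorm(24, mean = rep(c(50, 60, 55), each = 8)))

model <- aov(score ~ group, data = dat)
summary(model)      # p-value for the group effect
TukeyHSD(model)     # Tukey-adjusted pairwise comparisons for letter groupings
```

Letters summarizing the Tukey comparisons can then be reported in a table or added to a plot.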
Notes on different data and analyses
Interval/ratio or ordinal data analyzed by group
The bullet points above should be appropriate for this kind of analysis. Data can be summarized in tables or in a variety of plots, including “plot of means”, “plot of medians”, “interaction plot”, box plot, histogram, or bar plot. For some of these, error bars can indicate measures of variation. Mean separations can be indicated with letters within plots.
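One common presentation, a plot of means with standard-error bars, can be sketched in base R as follows; the data are invented for illustration.

```r
# Hypothetical grouped data, for illustration only
set.seed(3)
dat <- data.frame(group = rep(c("A", "B", "C"), each = 8),
                  value = rnorm(24, mean = rep(c(10, 12, 15), each = 8)))

means <- tapply(dat$value, dat$group, mean)
ses   <- tapply(dat$value, dat$group, function(x) sd(x) / sqrt(length(x)))

# Bar plot of means with standard-error bars
mids <- barplot(means, ylab = "Mean value",
                ylim = c(0, max(means + ses) * 1.2))
arrows(mids, means - ses, mids, means + ses,
       angle = 90, code = 3, length = 0.05)
```

The same means and standard errors can instead be reported in a table, with letters added for mean separations.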
Nominal data arranged in contingency tables
Nominal data arranged in contingency tables can be presented as frequencies in contingency tables, or in plots. Bar plots of frequencies can be used. Confidence intervals can be added as error bars to plots. Another alternative is using mosaic plots.
Bivariate data analyses
Bivariate relationships can be shown with a scatterplot of the two variables, often with the best-fit model drawn in. These models include those from linear regression or curvilinear regression. It is usually important to present the p-value for the model and the r-squared or pseudo R-squared value.
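A base-R sketch with invented data, showing the scatterplot, the best-fit line, and the statistics one would typically report:

```r
# Hypothetical bivariate data, for illustration only
set.seed(4)
x <- 1:30
y <- 2 * x + rnorm(30, sd = 5)
model <- lm(y ~ x)

plot(x, y)
abline(model)                 # best-fit line from linear regression

summary(model)$r.squared      # r-squared to report
anova(model)[1, "Pr(>F)"]     # p-value for the model
```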
Advice on tables and plots
Plots
Some advice for producing plots is given in the “Some advice on producing plots” section of the Basic Plots chapter.
For additional advice on presenting data in plots, see McDonald (2014a) in the “References” section.
Tables
Table style and format will vary widely from publication to publication. As general advice, the style and format should follow that of the journals or extension publications in your field or institution, or where you hope to get published.
For additional advice on presenting data in tables, see McDonald (2014b) in the “References” section.
Table headings and plot captions
Both table headings and plot captions should give as much information as is practical to make the table or plot understandable if it were separated from the rest of the publication.
Example elements
Element               Example
Description of data   “Mean of reading scores for program curricula”
Experimental units    “for 8th grade students”
Location              “in Watkins Glen, NY”
Date                  “1999–2000”
Statistics            “Error bars represent standard error of the mean. The effect of curricula on mean reading score was significant by one-way ANOVA (p = 0.027). Means sharing a letter are not significantly different by Tukey-adjusted mean separations (alpha = 0.05). Total observations = 24.”
Example description of statistical analysis and results
The citation information for the R software can be found with:
citation()
Citation information for individual packages can be found with, e.g.:
library(FSA)
citation("FSA")
Procedures
A study of student learning was conducted in 1999–2000 in Watkins Glen, NY. Students were randomly assigned to one of four curricula, which they studied for one hour per week under teacher-supervised conditions, and their scores on an assessment exam were recorded at the end of the study. A one-way analysis of variance was conducted with student score as the dependent variable and curriculum as the independent variable (Mangiafico, 2016). Treatment means were separated by Tukey-adjusted comparisons. Model residuals were checked for normality and homoscedasticity by visual inspection of residual plots. Analysis of variance and post-hoc tests were conducted in R (R Core Team, 2016) with the car and emmeans packages. Data summary was conducted with the FSA package.
Results
Figure 1. Mean of reading scores for program curricula for 8th grade students in Watkins Glen, NY, 1999–2000. Error bars represent standard error of the mean. The effect of curricula on mean reading score was significant by one-way ANOVA (p < 0.0001). Means sharing a letter are not significantly different by Tukey-adjusted mean separations (alpha = 0.05). Total observations = 24.
Table 1. Mean of reading scores for program curricula for 8th grade students in Watkins Glen, NY, 1999–2000. The effect of curricula on mean reading score was significant by one-way ANOVA (p < 0.0001). Means sharing a letter are not significantly different by Tukey-adjusted mean separations (alpha = 0.05). n indicates number of observations. Std. err. indicates the standard error of the mean.
Curriculum   n   Mean score   Std. err.   Tukey group
A            6   90           2.89        a
B            6   60           2.89        b
C            6   55           1.29        b
D            6   80           2.89        a
Optional technical note on effect sizes
In this chapter, I use the term “size of the effect” in a general sense of the magnitude of differences among groups or the degree of association of variables. “Effect size” usually describes specific statistics for a given analysis, such as Cohen's d, eta-squared, or odds ratio.
For some examples of these statistics, see Sullivan and Feinn (2012) or IDRE (2015) in the “References” section of this chapter.
The pwr package can calculate the effect size for proportions, chi-square goodness-of-fit, and chi-square test of association. For the effect size for one-way ANOVA (Cohen’s f), see Mangiafico (2015) or IDRE (2015). For effect sizes for nonparametric tests, see Tomczak and Tomczak (2014), and King and Rosopa (2010).
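As one small example, Cohen's h for comparing two proportions can be computed directly. The cohens_h function below is a hypothetical helper written for illustration; it uses the same arcsine-difference formula implemented by pwr::ES.h.

```r
# Hypothetical helper: Cohen's h for two proportions
# (the same arcsine-difference formula used by pwr::ES.h)
cohens_h <- function(p1, p2) {
  2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))
}

cohens_h(0.65, 0.45)   # a "medium"-ish effect by Cohen's conventions
```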
References
“One-way Anova” in Mangiafico, S.S. 2015. An R Companion for the Handbook of Biological Statistics, version 1.09. rcompanion.org/rcompanion/d_05.html.
“Guide to fairly good graphs” in McDonald, J.H. 2014a. Handbook of Biological Statistics. www.biostathandbook.com/graph.html.
“Presenting data in tables” in McDonald, J.H. 2014b. Handbook of Biological Statistics. www.biostathandbook.com/table.html.
Sullivan, G.M. and R. Feinn. 2012. Using Effect Size—or Why the P Value is Not Enough. Journal of Graduate Medical Education 4(3): 279–282. www.ncbi.nlm.nih.gov/pmc/articles/PMC3444174/.
Tomczak, M. and Tomczak, E. 2014. The need to report effect size estimates revisited. An overview of some recommended measures of effect size. Trends in Sports Sciences 1(21):1–25. www.tss.awf.poznan.pl/files/3_Trends_Vol21_2014__no1_20.pdf.
[IDRE] Institute for Digital Research and Education. 2015. How is effect size used in power analysis? UCLA. www.ats.ucla.edu/stat/mult_pkg/faq/general/effect_size_power/effect_size_power.htm/.
King, B.M. and P.J. Rosopa. 2010. Some (Almost) Assumption-Free Tests. In Statistical Reasoning in the Behavioral Sciences, 6th ed. Wiley.