The two-sample sign test assesses the number of observations in one group that are greater than paired observations in the other group without accounting for the magnitude of the difference. The test is similar in purpose to the two-sample Wilcoxon signed-rank test but looks specifically at the median value of differences (if the values are numeric), and is not affected by the distribution of the data.
The SIGN.test function in the BSDA package requires the data to be separated into two variables, each of which is ordered so that the first observation of each is paired, and so on. Information on options for the function can be viewed with ?SIGN.test. The SignTest function in the DescTools package is similar.
For appropriate plots and summary statistics, see the Two-sample Paired Signed-rank Test chapter.
The test is equivalent to the one-sample sign test on the differences of the pairs.
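This equivalence can be sketched directly with base R's binom.test, using the example scores that appear later in this chapter: the test statistic is simply the count of positive differences out of the nonzero differences.

```r
### Paired Likert scores (same data as the example below)
Time1 = c(1, 4, 3, 3, 3, 3, 4, 3, 3, 3)
Time2 = c(4, 5, 4, 5, 4, 5, 3, 4, 3, 4)

Diff = Time1 - Time2

### Sign test = exact binomial test on the signs of the nonzero differences
Positive = sum(Diff > 0)
Nonzero  = sum(Diff != 0)

binom.test(Positive, Nonzero)

### p-value = 0.03906, matching SignTest and SIGN.test below
```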
Appropriate data
• Two-sample paired data. That is, one-way data with two groups only, where the observations are paired between groups.
• Dependent variable is interval or ratio. Note that because the first step in the process is to find the differences in the pairs, the test is not appropriate for truly ordinal data.
• Independent variable is a factor with two levels. That is, two groups.
Hypotheses
• Null hypothesis: For numeric data, the median of the paired differences in the population from which the sample was drawn is equal to zero.
• Alternative hypothesis (two-sided): For numeric data, the median of the paired differences in the population from which the sample was drawn is not equal to zero.
Interpretation
Significant results can be reported as “There was a significant difference in values between group A and group B.”
Packages used in this chapter
The packages used in this chapter include:
• psych
• BSDA
• DescTools
• rcompanion
The following commands will install these packages if they are not already installed:
if(!require(psych)){install.packages("psych")}
if(!require(BSDA)){install.packages("BSDA")}
if(!require(DescTools)){install.packages("DescTools")}
if(!require(rcompanion)){install.packages("rcompanion")}
Sign test for paired two-sample data example
Data = read.table(header=TRUE, stringsAsFactors=TRUE, text="
Speaker Time Student Likert
Pooh 1 a 1
Pooh 1 b 4
Pooh 1 c 3
Pooh 1 d 3
Pooh 1 e 3
Pooh 1 f 3
Pooh 1 g 4
Pooh 1 h 3
Pooh 1 i 3
Pooh 1 j 3
Pooh 2 a 4
Pooh 2 b 5
Pooh 2 c 4
Pooh 2 d 5
Pooh 2 e 4
Pooh 2 f 5
Pooh 2 g 3
Pooh 2 h 4
Pooh 2 i 3
Pooh 2 j 4
")
### Check the data frame
library(psych)
headTail(Data)
str(Data)
summary(Data)
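Because the test assumes the observations are correctly paired, it can be worth confirming that each student has exactly one observation at each time. One way to sketch this check, rebuilding the example data compactly, is with xtabs:

```r
### Rebuild the example data compactly (same values as read.table above)
Data = data.frame(Speaker = "Pooh",
                  Time    = rep(c(1, 2), each = 10),
                  Student = rep(letters[1:10], 2),
                  Likert  = c(1, 4, 3, 3, 3, 3, 4, 3, 3, 3,
                              4, 5, 4, 5, 4, 5, 3, 4, 3, 4))

### Cross-tabulate students by time: every cell should be exactly 1
Counts = xtabs(~ Student + Time, data = Data)

all(Counts == 1)

### TRUE if the pairing is complete
```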
Two-sample sign test with DescTools package
Time.1 = Data$Likert[Data$Time == 1]
Time.2 = Data$Likert[Data$Time == 2]
library(DescTools)
SignTest(x = Time.1,
         y = Time.2)
Dependent-samples Sign-Test
S = 1, number of differences = 9, p-value = 0.03906
### p-value reported above
alternative hypothesis: true median difference is not equal to 0
97.9 percent confidence interval:
-2 0
sample estimates:
median of the differences
-1
### Median of differences and confidence interval of differences
Two-sample sign test with BSDA package
Time.1 = Data$Likert[Data$Time == 1]
Time.2 = Data$Likert[Data$Time == 2]
library(BSDA)
SIGN.test(x = Time.1,
          y = Time.2,
          alternative = "two.sided",
          conf.level = 0.95)
Dependent-samples Sign-Test
S = 1, p-value = 0.03906
### p-value reported above
95 percent confidence interval:
-2.0000000 -0.3244444
sample estimates:
median of x-y
-1
### Median of differences and confidence interval of differences
Two-sample sign test with nonpar package
Note that the paired differences between the two groups are calculated manually, and m=0 indicates the value to compare the differences to. At the time of writing, it appears that the exact=FALSE option actually produces the exact test.
Time.1 = Data$Likert[Data$Time == 1]
Time.2 = Data$Likert[Data$Time == 2]
Diff = Time.1 - Time.2
library(nonpar)
signtest(Diff, m=0, conf.level=0.95, exact=FALSE)
Exact Sign Test
The p-value is 0.03906
The 95 % confidence interval is [ -2 , -1 ].
Effect size measurement
One effect size statistic that can be used for the paired sign test is a dominance statistic. For more information on this statistic, see the Sign Test for One-sample Data chapter. Note that Median in the output represents the median of the paired differences.
Time.1 = Data$Likert[Data$Time == 1]
Time.2 = Data$Likert[Data$Time == 2]
Diff = Time.1 - Time.2
library(rcompanion)
oneSampleDominance(Diff, mu=0)
n Median mu Less Equal Greater Dominance VDA
1 10 -1 0 0.8 0.1 0.1 -0.7 0.15
### Note that a negative median of the differences, as calculated here,
### indicates that Time.2 has larger values than Time.1. This also
### applies to the dominance statistic.
oneSampleDominance(Diff, mu=0, ci=TRUE)
   n Median mu Less Equal Greater Dominance lower.ci upper.ci  VDA lower.vda.ci upper.vda.ci
1 10     -1  0  0.8   0.1     0.1      -0.7       -1     -0.2 0.15            0         0.45
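The dominance statistic and VDA can also be computed by hand from the differences. As a sketch: dominance is the proportion of positive differences minus the proportion of negative differences, and VDA rescales this to the interval from 0 to 1 by adding half the proportion of ties to the proportion of positive differences. This mirrors the oneSampleDominance output above.

```r
Time1 = c(1, 4, 3, 3, 3, 3, 4, 3, 3, 3)
Time2 = c(4, 5, 4, 5, 4, 5, 3, 4, 3, 4)

Diff = Time1 - Time2
n    = length(Diff)

### Proportions of negative, tied, and positive differences
Less    = sum(Diff < 0)  / n   # 0.8
Equal   = sum(Diff == 0) / n   # 0.1
Greater = sum(Diff > 0)  / n   # 0.1

Dominance = Greater - Less          # -0.7
VDA       = Greater + 0.5 * Equal   #  0.15
```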
It is helpful to look at the difference between medians of the two paired groups. The functions for the sign test above report the difference in medians and a confidence interval for that difference. Or this difference can be assessed manually.
Time.1 = Data$Likert[Data$Time == 1]
Time.2 = Data$Likert[Data$Time == 2]
median(Time.1)
3
median(Time.2)
4
Diff = Time.1 - Time.2
median(Diff)
-1
The confidence interval for the median of the differences by bootstrap can be assessed with the groupwiseMedian function, with the caveat that the bootstrap procedure may not be appropriate with discrete data or a small sample size.
Time.1 = Data$Likert[Data$Time == 1]
Time.2 = Data$Likert[Data$Time == 2]
Diff = Time.1 - Time.2
Sum = data.frame(Time.1, Time.2, Diff)
library(rcompanion)
groupwiseMedian(Diff ~ 1, data=Sum, bca=FALSE, perc=TRUE)
.id n Median Conf.level Percentile.lower Percentile.upper
1 <NA> 10 -1 0.95 -2 -0.5
library(DescTools)
MedianCI(Diff, method="exact")
median lwr.ci upr.ci
-1 -2 0
attr(,"conf.level")
[1] 0.9785156
Manual calculations
Time1 = c(1, 4, 3, 3, 3, 3, 4, 3, 3, 3)
Time2 = c(4, 5, 4, 5, 4, 5, 3, 4, 3, 4)
Time2Greater = sum(Time2 > Time1)
DifferentPairs = sum(Time1 != Time2)
binom.test(Time2Greater, DifferentPairs)
Exact binomial test
number of successes = 8, number of trials = 9, p-value = 0.03906
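This p-value can also be checked arithmetically. With 9 nonzero differences and a null proportion of 0.5, the binomial distribution is symmetric, so the two-sided exact p-value is twice the probability of observing 1 or fewer successes. A sketch using pbinom:

```r
### Two-sided exact p-value for 1 success out of 9 trials at p = 0.5
### (symmetric case, so doubling one tail is exact)
p = 2 * pbinom(1, size = 9, prob = 0.5)

p

### 0.0390625
```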
N = length(Time1)
Time2GreaterProp = sum(Time2 > Time1) / N
Time2GreaterProp
0.8
Time2LesserProp = sum(Time2 < Time1) / N
Time2LesserProp
0.1
EqualProp = sum(Time2 == Time1) / N
EqualProp
0.1