- In what way(s) are estimation and hypothesis testing similar? How do they differ?
- Distinguish between point and interval estimates. What are the advantages of each?
- What are confidence intervals? What does the width of the confidence interval indicate?
- How is the width of a confidence interval influenced by sample size, variability, and the level of confidence?
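As a study aid, here is the single-sample t interval in formula form; a minimal sketch, assuming the mean-estimation case, that makes all three influences on width visible:

```latex
% Width of a t-based confidence interval around a sample mean M:
% larger n narrows the interval, larger s widens it, and a higher
% confidence level (a larger critical t) widens it.
\mu = M \pm t_{\alpha/2}\,\frac{s}{\sqrt{n}}
\qquad\Longrightarrow\qquad
\text{width} = 2\,t_{\alpha/2}\,\frac{s}{\sqrt{n}}
```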
- When should we use ANOVA instead of t-tests?
- When would you use an independent samples design? A repeated measures design? A matched participants design?
- What are the advantages and disadvantages of each type of design?
- What is the logic behind ANOVA (e.g., why should F = 1.00 when H0 is true? How do between-group variability and within-group variability influence F?)?
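A minimal sketch of that logic, using the standard decomposition of the F-ratio:

```latex
% When H0 is true the treatment effect is zero, so the numerator and
% denominator both estimate experimental error and F should be near 1.00;
% a real treatment effect inflates only the numerator, pushing F above 1.
F = \frac{MS_{\text{between}}}{MS_{\text{within}}}
  = \frac{\text{treatment effect} + \text{experimental error}}{\text{experimental error}}
```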
- Why shouldn’t we do a bunch of t-tests instead of an analysis of variance (see Box 13.1)?
- Why is the critical value for the F-ratio positive?
- What does a statistically significant F-test allow you to conclude?
- What are post-hoc tests? Why and when do we use them?
- What are planned comparisons? Why and when do we use them?
- What is the relationship between independent samples F and t?
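For the two-group case the relationship is exact; a one-line reminder:

```latex
% With two independent samples, the single-factor ANOVA F equals the
% squared independent-samples t computed on the same data
% (df = 1, n_1 + n_2 - 2), so the two tests reach the same decision.
F = t^2
```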
- What is homogeneity of variance and why is it important?
- What is the statistical difference between the independent samples and repeated measures ANOVA?
- What is the advantage (both conceptually and computationally) of repeated measures ANOVA?
- What is the advantage of using a factorial design (rather than conducting multiple single-factor experiments)?
- What three “effects” are tested in a two-factor ANOVA?
- What are the assumptions for each type of ANOVA?
- What are the three characteristics of a Pearson correlation?
- When and why do we use correlations?
- What approximate values of r are considered weak, moderate, and strong?
- What is the relationship between r and z?
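One common way to express that relationship (a sketch, assuming sample z-scores, hence the n - 1 divisor):

```latex
% The Pearson r is the mean product of paired z-scores: X-Y pairs that
% fall on the same side of their respective means push r positive,
% pairs on opposite sides push it negative.
r = \frac{\sum z_X\, z_Y}{n - 1}
```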
- Does correlation indicate causation?
- How does restricted range influence r? How do outliers influence r?
- What is the coefficient of determination (r²) and what does it tell us?
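A quick worked example of the idea:

```latex
% r^2 is the proportion of variability in one variable accounted for by
% its linear relationship with the other. Example: r = 0.50 gives
% r^2 = 0.25, i.e., 25% of the variance in Y is explained by X.
r^2 = \frac{\text{explained variability}}{\text{total variability}}
```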
- What is regression to the mean? In what way does regression to the mean complicate interpretation of correlations?
- In what other ways (besides Pearson correlation) can the relationship between variables be measured? When is each type of correlation used?
- What is the goal of a regression analysis?
- Why do we compute the standard error of the estimate?
- What is the relationship between the standard error and the correlation?
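The connection can be read off the formula; a minimal sketch for simple regression:

```latex
% As |r| approaches 1, the unpredicted portion (1 - r^2)SS_Y shrinks,
% so the standard error of the estimate (the average prediction error)
% shrinks with it; at r = 0, prediction is no better than using M_Y.
SEE = \sqrt{\frac{SS_{\text{residual}}}{n - 2}}
    = \sqrt{\frac{(1 - r^2)\,SS_Y}{n - 2}}
```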
- Compute and interpret point and interval estimates (confidence intervals) for both z and t.
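To check hand calculations, here is a minimal Python sketch; the sample scores and the population sigma are hypothetical, chosen only for illustration:

```python
# Point and interval estimates for a sample mean: z (sigma known)
# versus t (sigma estimated from the sample). Data are hypothetical.
import numpy as np
from scipy import stats

scores = np.array([34, 41, 38, 45, 39, 42, 36, 40])
n, m = len(scores), scores.mean()          # point estimate of mu is M

sigma = 4.0                                # assumed known population sigma
z_crit = stats.norm.ppf(0.975)             # two-tailed 95% critical z
ci_z = (m - z_crit * sigma / np.sqrt(n), m + z_crit * sigma / np.sqrt(n))

s = scores.std(ddof=1)                     # sample standard deviation
t_crit = stats.t.ppf(0.975, df=n - 1)      # two-tailed 95% critical t
ci_t = (m - t_crit * s / np.sqrt(n), m + t_crit * s / np.sqrt(n))

print(f"M = {m:.2f}")
print(f"95% z interval: ({ci_z[0]:.2f}, {ci_z[1]:.2f})")
print(f"95% t interval: ({ci_t[0]:.2f}, {ci_t[1]:.2f})")
```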
- Read the description of a study and identify all of the following: the IV and levels of the IV, the DV, operational definitions of IV(s) and DV(s), the scale of measurement, the type of research design, and the specific statistical analysis that should be used to analyze the data.
- Distinguish independent samples from repeated measures designs.
- Translate a research question into statistical hypotheses (H0 and H1).
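For example, for the question “does the treatment change mean performance?”, one hypothetical two-tailed translation is:

```latex
H_0\colon \mu_{\text{treatment}} = \mu_{\text{control}}
\qquad
H_1\colon \mu_{\text{treatment}} \neq \mu_{\text{control}}
```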
- Find the rejection region and critical value(s) for a given alpha level and use this information to formulate a decision rule.
- Determine whether or not to reject H0.
- Interpret the results of a statistical test.
- Compute an F-ratio.
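A minimal Python sketch for checking a hand-computed single-factor, independent-samples F-ratio; the three groups of scores are hypothetical:

```python
# Build the F-ratio from its parts (SS, df, MS), then verify the result
# against scipy.stats.f_oneway. Group scores are hypothetical.
import numpy as np
from scipy import stats

groups = [np.array([3., 5., 4., 6.]),
          np.array([7., 8., 6., 9.]),
          np.array([2., 3., 4., 3.])]
k = len(groups)
N = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_between = ss_between / (k - 1)     # df_between = k - 1
ms_within = ss_within / (N - k)       # df_within = N - k
F = ms_between / ms_within

print(F, stats.f_oneway(*groups).statistic)   # the two values should match
```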
- Construct an ANOVA summary table and/or complete an ANOVA summary table if given partial information.
- Conduct Tukey’s HSD and Scheffé’s F tests.
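For reference, the HSD criterion in formula form (a sketch, assuming equal group sizes n):

```latex
% Two treatment means differ significantly when they are farther apart
% than HSD; q is the studentized range statistic for k treatments and
% df_within, and n is the number of scores in each treatment.
HSD = q \sqrt{\frac{MS_{\text{within}}}{n}}
```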
- Determine coefficients for a planned contrast.
- Use Hartley’s F-max test to assess homogeneity of variance.
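The test statistic itself is a one-liner:

```latex
% Hartley's F-max compares the largest and smallest sample variances;
% values near 1.00 support the homogeneity-of-variance assumption,
% while values beyond the F-max table value signal a violation.
F_{\max} = \frac{s^2_{\text{largest}}}{s^2_{\text{smallest}}}
```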
- Identify, describe, and interpret main effects and interactions (through hypothesis testing, by simply looking at the pattern of means, and by using ANOVA).
- Graph the results of a factorial experiment and/or interpret results of a factorial experiment from a graph.
- Identify and describe the direction, form, and degree (strength) of a correlation.
- Test hypotheses with the Pearson and Spearman correlations.
- Compute each type of correlation introduced in Chapter 16.
- Interpret correlations and understand why we can’t infer causality from correlations.
- Compute a regression equation.
- Use a regression equation to find a predicted value of Y.
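A minimal Python sketch of both regression skills together; the X and Y values are hypothetical:

```python
# Fit the least-squared-error line Y-hat = bX + a, then use it to
# predict Y for a new X value. Data are hypothetical.
import numpy as np
from scipy import stats

X = np.array([1, 2, 3, 4, 5, 6])
Y = np.array([2, 5, 4, 7, 8, 9])

fit = stats.linregress(X, Y)   # slope b = SP / SS_X, intercept a = M_Y - b * M_X
b, a = fit.slope, fit.intercept

y_hat = b * 4.5 + a            # predicted Y for X = 4.5
print(f"Y-hat = {b:.3f}X + {a:.3f}; predicted Y at X = 4.5 is {y_hat:.3f}")
```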
- Interpret SPSS output for ANOVA, correlation, and simple regression.
- estimation
- point estimate
- interval estimate
- confidence intervals
- independent variable (IV) = factor
- dependent variable
- single-factor vs. factorial designs
- independent samples designs
- repeated measures designs
- analysis of variance (ANOVA)
- F-ratio
- variability between groups
- variability within groups
- treatment effect
- experimental error
- MS (mean square)
- SS (sum of squares)
- df (degrees of freedom)
- F-distribution
- ANOVA summary table
- error term
- post-hoc tests
- experimentwise alpha level
- Tukey’s HSD
- studentized range statistic (q)
- Scheffé’s F test
- planned comparisons/contrasts
- individual differences
- main effect
- interaction
- simple main effect
- correlation
- scatterplot
- positive vs. negative correlation
- linear vs. non-linear relationships
- prediction
- validity
- reliability
- theory verification
- Pearson correlation coefficient (r)
- covariability
- sum of products (SP)
- restricted range
- coefficient of determination (r²)
- outliers (outriders)
- regression to the mean
- Spearman correlation (r_s)
- point-biserial correlation
- phi-coefficient
- linear equation
- regression
- regression line
- Y-intercept
- least squared error
- regression equation for Y
- standard error of the estimate (SEE)