The purpose of the AP course in statistics is to introduce students to the major concepts and tools for collecting, analyzing and drawing conclusions from data. Students are exposed to four broad conceptual themes:
1. Exploring Data: Describing patterns and departures from patterns
2. Sampling and Experimentation: Planning and conducting a study
3. Anticipating Patterns: Exploring random phenomena using probability and simulation
4. Statistical Inference: Estimating population parameters and testing hypotheses
Students who successfully complete the course and exam may receive credit, advanced placement or both for a one-semester introductory college statistics course.
The course follows the AP Statistics curriculum established by the College Board, while also including material that will guide students in conducting and communicating their own statistical analyses. Students will learn standard statistical terms and techniques through the presentation of real-world cases.
Technology will be an integral part of the course. Students will be expected to use the TI-83/84 graphing calculator to perform their analyses, to present their findings, and to investigate topics visually. All students are expected to possess the determination and initiative to take on a college-level course, including the corresponding workload. All students taking AP Statistics are required by the district to take the AP Statistics Examination in May.
The topics for AP Statistics are divided into four major themes: exploratory analysis (20–30 percent of the exam), planning and conducting a study (10–15 percent of the exam), probability (20–30 percent of the exam), and statistical inference (30–40 percent of the exam).
I. Exploratory analysis of data makes use of graphical and numerical techniques to study patterns and departures from patterns. In examining distributions of data, students should be able to detect important characteristics, such as shape, location, variability and unusual values. From careful observations of patterns in data, students can generate conjectures about relationships among variables. The notion of how one variable may be associated with another permeates almost all of statistics, from simple comparisons of proportions through linear regression. The difference between association and causation must accompany this conceptual development throughout.
II. Data must be collected according to a well-developed plan if valid information is to be obtained. If data are to be collected to provide an answer to a question of interest, a careful plan must be developed. Both the type of analysis that is appropriate and the nature of conclusions that can be drawn from that analysis depend in a critical way on how the data were collected. Collecting data in a reasonable way, through either sampling or experimentation, is an essential step in the data analysis process.
III. Probability is the tool used for anticipating what the distribution of data should look like under a given model. Random phenomena are not haphazard: they display an order that emerges only in the long run and is described by a distribution. The mathematical description of variation is central to statistics. The probability required for statistical inference is not primarily axiomatic or combinatorial but is oriented toward using probability distributions to describe data.
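The long-run regularity of random phenomena described above can be made concrete by simulation. The sketch below (a Python illustration, not part of the course's required TI-83/84 work) flips a fair coin many times and reports the relative frequency of heads, which by the Law of Large Numbers settles near the true probability of 0.5 as the number of flips grows.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def running_proportion(n_flips):
    """Simulate n_flips of a fair coin; return the proportion of heads."""
    heads = 0
    for _ in range(n_flips):
        heads += random.random() < 0.5  # True counts as 1
    return heads / n_flips

# The relative frequency is erratic for small n but stabilizes near 0.5.
for n in (10, 1000, 100000):
    print(n, running_proportion(n))
```

The same idea, run with dice, spinners, or random digit tables, underlies the simulation activities in the course.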
IV. Statistical inference guides the selection of appropriate models. Models and data interact in statistical work: models are used to draw conclusions from data, while the data are allowed to criticize and even falsify the model through inferential and diagnostic methods. Inference from data can be thought of as the process of selecting a reasonable model, including a statement in probability language, of how confident one can be about the selection.
Following is an outline of the major topics covered by the AP Statistics Exam. The ordering here is intended to define the scope of the course but not necessarily the sequence. The percentages in parentheses for each content area indicate the coverage for that content area in the exam.
I. Exploring Data: Describing patterns and departures from patterns (20%–30%)
Exploratory analysis of data makes use of graphical and numerical techniques to study patterns and departures from patterns. Emphasis should be placed on interpreting information from graphical and numerical displays and summaries.
A. Constructing and interpreting graphical displays of distributions of univariate data (dotplot, stemplot, histogram, cumulative frequency plot)
1. Center and spread
2. Clusters and gaps
3. Outliers and other unusual features
B. Summarizing distributions of univariate data
1. Measuring center: median, mean
2. Measuring spread: range, interquartile range, standard deviation
3. Measuring position: quartiles, percentiles, standardized scores (z-scores)
4. Using boxplots
5. The effect of changing units on summary measures
C. Comparing distributions of univariate data (dotplots, back-to-back stemplots, parallel boxplots)
1. Comparing center and spread: within group, between group variation
2. Comparing clusters and gaps
3. Comparing outliers and other unusual features
4. Comparing shapes
D. Exploring bivariate data
1. Analyzing patterns in scatterplots
2. Correlation and linearity
3. Least-squares regression line
4. Residual plots, outliers and influential points
5. Transformations to achieve linearity: logarithmic and power transformations
E. Exploring categorical data
1. Frequency tables and bar charts
2. Marginal and joint frequencies for two-way tables
3. Conditional relative frequencies and association
4. Comparing distributions using bar charts
II. Sampling and Experimentation: Planning and conducting a study (10%–15%)
Data must be collected according to a well-developed plan if valid information on a conjecture is to be obtained. This plan includes clarifying the question and deciding upon a method of data collection and analysis.
A. Overview of methods of data collection
1. Census
2. Sample survey
3. Experiment
4. Observational study
B. Planning and conducting surveys
1. Characteristics of a well-designed and well-conducted survey
2. Populations, samples and random selection
3. Sources of bias in sampling and surveys
4. Sampling methods, including simple random sampling, stratified random sampling and cluster sampling
C. Planning and conducting experiments
1. Characteristics of a well-designed and well-conducted experiment
2. Treatments, control groups, experimental units, random assignments and replication
3. Sources of bias and confounding, including placebo effect and blinding
4. Completely randomized design
5. Randomized block design, including matched pairs design
D. Generalizability of results and types of conclusions that can be drawn from observational studies, experiments and surveys
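The random assignment at the heart of a completely randomized design (part C above) can be sketched in a few lines. This Python illustration, with 20 hypothetical experimental units, uses the standard-library random module; in class the same assignment could be done with a random digit table or the calculator.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# Completely randomized design: 20 hypothetical experimental units
# are split at random into two groups of equal size.  Random
# assignment, not self-selection, is what supports cause-and-effect
# conclusions from an experiment.
units = list(range(1, 21))
treatment = random.sample(units, k=10)            # 10 units chosen at random
control = [u for u in units if u not in treatment]

print(sorted(treatment))
print(sorted(control))
```

A randomized block design would repeat this assignment separately within each block.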
III. Anticipating Patterns: Exploring random phenomena using probability and simulation (20%–30%)
Probability is the tool used for anticipating what the distribution of data should look like under a given model.
A. Probability
1. Interpreting probability, including long-run relative frequency interpretation
2. “Law of Large Numbers” concept
3. Addition rule, multiplication rule, conditional probability and independence
4. Discrete random variables and their probability distributions, including binomial and geometric
5. Simulation of random behavior and probability distributions
6. Mean (expected value) and standard deviation of a random variable, and linear transformation of a random variable
B. Combining independent random variables
1. Notion of independence versus dependence
2. Mean and standard deviation for sums and differences of independent random variables
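The rules in part B say that for independent random variables the means add or subtract, while the variances add in either case. A quick simulation check, sketched here in Python with two independent fair dice (an illustration, not a required tool), makes the perhaps surprising point that the variance of a difference is just as large as the variance of a sum.

```python
import random
import statistics

random.seed(3)  # fixed seed so the illustration is reproducible

# For independent X and Y:  mean(X ± Y) = mean(X) ± mean(Y) and
# var(X ± Y) = var(X) + var(Y)  (variances ADD even for a difference).
n = 200_000
x = [random.randint(1, 6) for _ in range(n)]   # first die
y = [random.randint(1, 6) for _ in range(n)]   # second, independent die

sums = [a + b for a, b in zip(x, y)]
diffs = [a - b for a, b in zip(x, y)]

print(statistics.mean(sums))        # close to 3.5 + 3.5 = 7
print(statistics.variance(sums))    # close to 35/12 + 35/12 = 35/6
print(statistics.variance(diffs))   # also close to 35/6, not 0
```

Each die has theoretical variance 35/12, so both the sum and the difference should show variance near 35/6.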
C. The normal distribution
1. Properties of the normal distribution
2. Using tables of the normal distribution
3. The normal distribution as a model for measurements
D. Sampling distributions
1. Sampling distribution of a sample proportion
2. Sampling distribution of a sample mean
3. Central Limit Theorem
4. Sampling distribution of a difference between two independent sample proportions
5. Sampling distribution of a difference between two independent sample means
6. Simulation of sampling distributions
7. t-distribution
8. Chi-square distribution
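Item 6 above, simulating a sampling distribution, pairs naturally with the Central Limit Theorem in item 3. The Python sketch below (an illustration; the population and sample sizes are invented) draws many samples from a skewed population with mean 1 and records each sample mean, showing that the means cluster near the population mean with standard deviation close to sigma divided by the square root of n.

```python
import random
import statistics

random.seed(11)  # fixed seed so the illustration is reproducible

# Sampling distribution of the sample mean, by simulation.
# The population is exponential with mean 1 (strongly skewed),
# yet the sample means are approximately normal for moderate n.
n = 40        # sample size
reps = 5000   # number of simulated samples

means = [statistics.mean(random.expovariate(1.0) for _ in range(n))
         for _ in range(reps)]

print(statistics.mean(means))   # near the population mean, 1
print(statistics.stdev(means))  # near sigma/sqrt(n) = 1/sqrt(40), about 0.158
```

A histogram of the simulated means would show the approximate normality the CLT predicts, despite the skewed population.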
IV. Statistical Inference: Estimating population parameters and testing hypotheses (30%–40%)
Statistical inference guides the selection of appropriate models.
A. Estimation (point estimators and confidence intervals)
1. Estimating population parameters and margins of error
2. Properties of point estimators, including unbiasedness and variability
3. Logic of confidence intervals, meaning of confidence level and confidence intervals, and properties of confidence intervals
4. Large sample confidence interval for a proportion
5. Large sample confidence interval for a difference between two proportions
6. Confidence interval for a mean
7. Confidence interval for a difference between two means (unpaired and paired)
8. Confidence interval for the slope of a least-squares regression line
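Item 4 above, the large-sample confidence interval for a proportion, follows the formula p-hat plus or minus z-star times the square root of p-hat(1 − p-hat)/n. The Python sketch below works through a hypothetical survey (520 "yes" responses out of 1,000); the function name and data are invented for the illustration.

```python
import math

def proportion_ci(successes, n, z_star=1.96):
    """Large-sample CI for a proportion:
    p_hat +/- z* * sqrt(p_hat * (1 - p_hat) / n).
    z* = 1.96 gives 95% confidence."""
    p_hat = successes / n
    margin = z_star * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# Hypothetical survey: 520 of 1,000 respondents say "yes".
low, high = proportion_ci(520, 1000)
print(round(low, 3), round(high, 3))   # interval of plausible values for p
```

The interpretation asked for on the exam, that the method captures the true proportion in about 95% of all samples, matters as much as the arithmetic.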
B. Tests of significance
1. Logic of significance testing, null and alternative hypotheses; p-values; one- and two-sided tests; concepts of Type I and Type II errors; concept of power
2. Large sample test for a proportion
3. Large sample test for a difference between two proportions
4. Test for a mean
5. Test for a difference between two means (unpaired and paired)
6. Chi-square test for goodness of fit, homogeneity of proportions, and independence (one- and two-way tables)
7. Test for the slope of a least-squares regression line
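The logic of item 1 above can be traced in code for the simplest case, the large-sample test for a proportion (item 2). This Python sketch uses invented data (560 heads in 1,000 flips of a possibly unfair coin) and builds the normal CDF from math.erf so that no outside libraries are needed; the function names are illustrative.

```python
import math

def normal_cdf(z):
    """Standard normal CDF, written with math.erf."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def one_prop_z_test(successes, n, p0):
    """One-proportion z test of H0: p = p0.
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)."""
    p_hat = successes / n
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    p_value = 2 * (1 - normal_cdf(abs(z)))   # two-sided alternative
    return z, p_value

# Hypothetical data: 560 heads in 1,000 flips.  Is the coin fair?
z, p = one_prop_z_test(560, 1000, 0.5)
print(round(z, 2), p)   # small p-value -> evidence against H0: p = 0.5
```

A small p-value leads to rejecting the null hypothesis; stating that conclusion in context, along with Type I/Type II error considerations, is the inferential skill the outline emphasizes.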
The Use of Technology:
The AP Statistics course adheres to the philosophy and methods of modern data analysis. Although the distinction between graphing calculators and computers is becoming blurred as technology advances, at present the fundamental tool of data analysis is the computer. The computer does more than eliminate the drudgery of hand computation and graphing — it is an essential tool for structured inquiry.
Data analysis is a journey of discovery. It is an iterative process that involves a dialogue between the data and a mathematical model. As more is learned about the data, the model is refined and new questions are formed. The computer aids in this journey in some essential ways. First, it produces graphs that are specifically designed for data analysis. These graphical displays make it easier to observe patterns in data, to identify important subgroups of the data and to locate any unusual data points. Second, the computer allows the student to fit complex mathematical models to the data and to assess how well the model fits the data by examining the residuals. Finally, the computer is helpful in identifying an observation that has an undue influence on the analysis and in isolating its effects.
In addition to its use in data analysis, the computer facilitates the simulation approach to probability that is emphasized in the AP Statistics course. Probabilities of random events, probability distributions of random variables and sampling distributions of statistics can be studied conceptually, using simulation. This frees the student and teacher from a narrow approach that depends on a few simple probabilistic models.
Because the computer is central to what statisticians do, it is considered essential for teaching the AP Statistics course. However, it is not yet possible for students to have access to a computer during the AP Statistics Exam. Without a computer and under the conditions of a timed exam, students cannot be asked to perform the amount of computation that is needed for many statistical investigations. Consequently, standard computer output will be provided as necessary and students will be expected to interpret it.
A graphing calculator is a useful computational aid, particularly in analyzing small data sets, but should not be considered equivalent to a computer in the teaching of statistics. If a graphing calculator is used in the course, its computational capabilities should include standard statistical univariate and bivariate summaries through linear regression. Its graphical capabilities should include common univariate and bivariate displays such as histograms, boxplots and scatterplots. Students find calculators that accept data in a spreadsheet format particularly easy to use. Ideally, students should have access to both computers and calculators for work in and outside the classroom.
Currently, the graphing calculator is the only computational aid that is available to students for use as a tool for data analysis on the AP Exam. Students who utilize graphing calculators on the exam should be aware of the following policy. It is not only inappropriate, but unethical, for students who are taking the AP Statistics Exam to have access to any information in their graphing calculators or elsewhere that is not directly related to upgrading the statistical functionality of older graphing calculators to make them comparable to statistical features found on newer models. Acceptable upgrades include improving the calculator’s computational functionalities and/or graphical functionalities for data that students key into the calculator while taking the exam. Unacceptable enhancements include, but are not limited to, keying or scanning text or response templates into the calculator. Students attempting to augment the capabilities of their graphing calculators in any way other than for the purpose of upgrading features, as described above, will be considered to be cheating on the exam.