- Specialization: Data Science Foundations: Statistical Inference
- Instructor: Dr. Anne Dougherty, Senior Instructor and Teaching Professor in Applied Mathematics, University of Colorado; Associate Department Chair; Undergraduate Studies Chair
- Prior knowledge needed: Calculus 1 and 2, Intro to R programming
Understand the foundations of probability and its relationship to statistics and data science. We’ll learn what it means to calculate a probability, and we’ll study independent and dependent outcomes and conditional events. We’ll study discrete and continuous random variables and see how they fit with data collection. We’ll end the course with Gaussian (normal) random variables and the Central Limit Theorem, and we’ll see why it is of fundamental importance for all of statistics and data science.
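As a quick illustration of the Central Limit Theorem mentioned above, the following sketch (in Python rather than the course's R, and with arbitrary example parameters) simulates averages of uniform(0, 1) draws and checks that they concentrate around the population mean with spread shrinking like $\sigma/\sqrt{n}$:

```python
import random

random.seed(42)

# CLT sketch: averages of n i.i.d. uniform(0,1) draws are approximately
# normal with mean 1/2 and standard deviation sqrt(1/12)/sqrt(n).
n = 50          # sample size per average (assumed for illustration)
reps = 20_000   # number of sample means to simulate

means = [sum(random.random() for _ in range(n)) / n for _ in range(reps)]

grand_mean = sum(means) / reps
spread = (sum((m - grand_mean) ** 2 for m in means) / reps) ** 0.5

# grand_mean should be near 0.5, and spread near (1/12)**0.5 / 50**0.5 ≈ 0.041
print(grand_mean, spread)
```

The same experiment with any other distribution of finite variance (exponential, Bernoulli, etc.) shows the same behavior, which is exactly what makes the theorem so central to statistics.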
Conditional probability is a very useful concept from probability theory, and in this module we introduce the idea of “conditioning” and Bayes’ Formula. The notion of an “independent event” then arises naturally from conditioning. Conditional and independent events are fundamental to understanding statistical results.
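To make Bayes’ Formula concrete, here is a minimal sketch (in Python, with made-up screening-test numbers chosen purely for illustration) of the classic computation $P(D \mid +) = P(+ \mid D)\,P(D)/P(+)$:

```python
# Hypothetical disease-screening example of Bayes' Formula (all numbers assumed).
prevalence = 0.01       # P(D): prior probability of having the disease
sensitivity = 0.95      # P(+ | D): test is positive given disease
false_positive = 0.05   # P(+ | not D): test is positive without disease

# Law of total probability: P(+) = P(+|D)P(D) + P(+|not D)P(not D)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Bayes' Formula: posterior probability of disease given a positive test
posterior = sensitivity * prevalence / p_positive
print(round(posterior, 3))  # → 0.161
```

Even with a fairly accurate test, the posterior is only about 16% because the disease is rare, a standard example of how conditioning can overturn intuition.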
The concept of a “random variable” (r.v.) is fundamental in statistics. In this module we’ll study several named discrete random variables, learn some of their properties, and see why they are important. We’ll also calculate the expectation and variance of these random variables.
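As one example of a named discrete random variable, the sketch below (Python, with assumed parameters) compares the textbook formulas $E[X] = np$ and $\mathrm{Var}(X) = np(1-p)$ for a Binomial$(n, p)$ variable against a simple Monte Carlo simulation:

```python
import random

random.seed(0)
n, p = 10, 0.3   # assumed Binomial(n, p) parameters for illustration

# Closed-form expectation and variance of a binomial random variable
expectation = n * p            # E[X] = np = 3.0
variance = n * p * (1 - p)     # Var(X) = np(1 - p) = 2.1

# Monte Carlo check: each draw counts successes in n Bernoulli(p) trials
draws = [sum(random.random() < p for _ in range(n)) for _ in range(100_000)]
sample_mean = sum(draws) / len(draws)
sample_var = sum((x - sample_mean) ** 2 for x in draws) / len(draws)

print(sample_mean, sample_var)  # both close to 3.0 and 2.1
```

The same pattern (compute the formula, then verify by simulation) works for the other named distributions covered in the module, such as the geometric and Poisson.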
This module contains materials for the final exam for MS-DS degree students. If you've upgraded to the for-credit version of this course, please make sure you review the additional for-credit materials in the Introductory module and anywhere else they may be found.