scrapbook
  • "Unorganized" Notes
  • The Best Public Datasets for Machine Learning and Data Science
  • Practice Coding
  • plaid-API project
  • Biotech
    • Machine Learning vs. Deep Learning
  • Machine Learning for Computer Graphics
  • Books (on GitHub)
  • Ideas/Thoughts
  • Ziva for feature animation: Stylized simulation and machine learning-ready workflows
  • Tools
  • math
    • Papers
    • Math for ML (coursera)
      • Linear Algebra
        • Wk1
        • Wk2
        • Wk3
        • Wk4
        • Wk5
      • Multivariate Calculus
    • Improving your Algorithms & Data Structure Skills
    • Algorithms
    • Algorithms (MIT)
      • Lecture 1: Algorithmic Thinking, Peak Finding
    • Algorithms (khan academy)
      • Binary Search
      • Asymptotic notation
      • Sorting
      • Insertion sort
      • Recursion
      • Solve Hanoi recursively
      • Merge Sort
      • Representing graphs
      • The breadth-first search algorithm
      • Breadth First Search in JavaScript
      • Breadth-first vs Depth-first Tree Traversal in Javascript
    • Algorithms (udacity)
      • Social Network
    • Udacity
      • Linear Algebra Refresher w/ Python
    • math-notes
      • functions
      • differential calculus
      • derivative
      • extras
      • Exponentials & logarithms
      • Trigonometry
    • Probability (MIT)
      • Unit 1
        • Probability Models and Axioms
        • Mathematical background: Sets; sequences, limits, and series; (un)countable sets.
    • Statistics and probability (khan academy)
      • Analyzing categorical data
      • Describing and comparing distributions
      • Outliers Definition
      • Mean Absolute Deviation (MAD)
      • Modeling data distribution
      • Exploring bivariate numerical data
      • Study Design
      • Probability
      • Counting, permutations, and combinations
      • Binomial variables
        • Binomial Distribution
        • Binomial mean and standard deviation formulas
        • Geometric random variable
      • Central Limit Theorem
      • Significance Tests (hypothesis testing)
    • Statistics (hackerrank)
      • Mean, Median, Mode
      • Weighted Mean
      • Quartiles
      • Standard Deviation
      • Basic Probability
      • Conditional Probability
      • Permutations & Combinations
      • Binomial Distribution
      • Negative Binomial
      • Poisson Distribution
      • Normal Distribution
      • Central Limit Theorem
      • Important Concepts in Bayesian Statistics
  • PRODUCT
    • Product Strategy
    • Product Design
    • Product Development
    • Product Launch
  • coding
    • of any interest
    • Maya API
      • Python API
    • Python
      • Understanding Class Inheritance in Python 3
      • 100+ Python challenging programming exercises
      • coding
      • Iterables vs. Iterators vs. Generators
      • Generator Expression
      • Stacks (LIFO) / Queues (FIFO)
      • What does -1 mean in numpy reshape?
      • Fold Left and Right in Python
      • Flatten a nested list of lists
      • Flatten a nested dictionary
      • Traverse A Tree
      • How to Implement Breadth-First Search
      • Breadth First Search
        • Level Order Tree Traversal
        • Breadth First Search or BFS for a Graph
        • BFS for Disconnected Graph
      • Trees and Tree Algorithms
      • Graph and its representations
      • Graph Data Structure Interview Questions
      • Graphs in Python
      • GitHub Repo's
    • Python in CG Production
    • GLSL/HLSL Shading programming
    • Deep Learning Specialization
      • Neural Networks and Deep Learning
      • Untitled
      • Untitled
      • Untitled
    • TensorFlow for AI, ML, and DL
      • Google ML Crash Course
      • TensorFlow C++ API
      • TensorFlow - coursera
      • Notes
      • An Introduction to different Types of Convolutions in Deep Learning
      • One by One [ 1 x 1 ] Convolution - counter-intuitively useful
      • SqueezeNet
      • Deep Compression
      • An Overview of ResNet and its Variants
      • Introducing capsule networks
      • What is a CapsNet or Capsule Network?
      • Xception
      • TensorFlow Eager
    • GitHub
      • Project README
    • Agile - User Stories
    • The Open-Source Data Science Masters
    • Coding Challenge Websites
    • Coding Interview
      • leetcode python
      • Data Structures
        • Arrays
        • Linked List
        • Hash Tables
        • Trees: Basic
        • Heaps, Stacks, Queues
        • Graphs
          • Shortest Path
      • Sorting & Searching
        • Depth-First Search & Breadth-First Search
        • Backtracking
        • Sorting
      • Dynamic Programming
        • Dynamic Programming: Basic
        • Dynamic Programming: Advanced
    • spaCy
    • Pandas
    • Python Packages
    • Julia
      • jupyter
    • macos
    • CPP
      • Debugging
      • Overview of memory management problems
      • What are lvalues and rvalues?
      • The Rule of Five
      • Concurrency
      • Avoiding Data Races
      • Mutex
      • The Monitor Object Pattern
      • Lambdas
      • Maya C++ API Programming Tips
      • How can I read and parse CSV files in C++?
      • Cpp NumPy
    • Advanced Machine Learning
      • Wk 1
      • Untitled
      • Untitled
      • Untitled
      • Untitled
  • data science
    • Resources
    • Tensorflow C++
    • Computerphile
      • Big Data
    • Google ML Crash Course
    • Kaggle
      • Data Versioning
      • The Basics of Rest APIs
      • How to Make an API
      • How to deploy your API
    • Jupyter Notebook Tips & Tricks
      • Jupyter
    • Image Datasets Notes
    • DS Cheatsheets
      • Websites & Blogs
      • Q&A
      • Strata
      • Data Visualisation
      • Matplotlib etc
      • Keras
      • Spark
      • Probability
      • Machine Learning
        • Fast Computation of AUC-ROC score
    • Data Visualisation
    • fast.ai
      • deep learning
      • How to work with Jupyter Notebook on a remote machine (Linux)
      • Up and Running With Fast.ai and Docker
      • AWS
    • Data Scientist
    • ML for Beginners (Video)
    • ML Mastery
      • Machine Learning Algorithms
      • Deep Learning With Python
    • Linear algebra cheat sheet for deep learning
    • DL_ML_Resources
    • Awesome Machine Learning
    • web scraping
    • SQL Style Guide
    • SQL - Tips & Tricks
  • Ideas & Thoughts
    • Outdoors
    • Blog
      • markdown
      • How to survive your first day as an On-set VFX Supervisor
    • Book Recommendations by Demi Lee
  • career
    • Skills
    • learn.co
      • SQL
      • Distribution
      • Hypothesis Testing Glossary
      • Hypothesis Tests
      • Hypothesis & AB Testing
      • Combinatorics Continued and Maximum Likelihood Estimation
      • Bayesian Classification
      • Resampling and Monte Carlo Simulation
      • Extensions To Linear Models
      • Time Series
      • Distance Metrics
      • Graph Theory
      • Logistic Regression
      • MLE (Maximum Likelihood Estimation)
      • Gradient Descent
      • Decision Trees
      • Ensemble Methods
      • Spark
      • Machine Learning
      • Deep Learning
        • Backpropagation - math notation
        • PRACTICE DATASETS
        • Big Data
      • Deep Learning Resources
      • DL Datasets
      • DL Tutorials
      • Keras
      • Word2Vec
        • Word2Vec Tutorial Part 1 - The Skip-Gram Model
        • Word2Vec Tutorial Part 2 - Negative Sampling
        • An Intuitive Explanation of Convolutional Neural Networks
      • Mod 4 Project
        • Presentation
      • Mod 5 Project
      • Capstone Project Notes
        • Streaming large training and test files into Tensorflow's DNNClassifier
    • Career Prep
      • The Job Search
        • Building a Strong Job Search Foundation
        • Key Traits of Successful Job Seekers
        • Your Job Search Mindset
        • Confidence
        • Job Search Action Plan
        • CSC Weekly Activity
        • Managing Your Job Search
      • Your Online Presence
        • GitHub
      • Building Your Resume
        • Writing Your Resume Summary
        • Technical Experience
      • Effective Networking
        • 30 Second Elevator Pitch
        • Leveraging Your Network
        • Building an Online Network
        • Linkedin For Research And Networking
        • Building An In-Person Network
        • Opening The Line Of Communication
      • Applying to Jobs
        • Applying To Jobs Online
        • Cover Letters
      • Interviewing
        • Networking Coffees vs Formal Interviews
        • The Coffee Meeting/ Informational Interview
        • Communicating With Recruiters And HR Professional
        • Research Before an Interview
        • Preparing Questions for Interviews
        • Phone And Video/Virtual Interviews
        • Cultural/HR Interview Questions
        • The Salary Question
        • Talking About Apps/Projects You Built
        • Sending Thank You's After an Interview
      • Technical Interviewing
        • Technical Interviewing Formats
        • Code Challenge Best Practices
        • Technical Interviewing Resources
      • Communication
        • Following Up
        • When You Haven't Heard From an Employer
      • Job Offers
        • Approaching Salary Negotiations
      • Staying Current in the Tech Industry
      • Module 6 Post Work
      • Interview Prep
  • projects
    • Text Classification
    • TERRA-REF
    • saildrone
  • Computer Graphics
  • AI/ML
  • 3deeplearning
    • Fast and Deep Deformation Approximations
    • Compress and Denoise MoCap with Autoencoders
    • ‘Fast and Deep Deformation Approximations’ Implementation
    • Running a NeuralNet live in Maya in a Python DG Node
    • Implement a Substance like Normal Map Generator with a Convolutional Network
    • Deploying Neural Nets to the Maya C++ API
  • Tools/Plugins
  • AR/VR
  • Game Engine
  • Rigging
    • Deformer Ideas
    • Research
    • brave rabbit
    • Useful Rigging Links
  • Maya
    • Optimizing Node Graph for Parallel Evaluation
  • Houdini
    • Stuff
    • Popular Built-in VEX Attributes (Global Variables)

Describing and comparing distributions



Shapes of distributions

  • right or left-tailed, if there is only one tail

  • approx. symmetrical

  • skewed to the left or right (the mean is off center and the tail on one side is longer than the other; e.g. if the left tail is longer, the distribution is skewed to the left; see the quick check below)
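A quick numeric illustration with a made-up left-skewed data set (illustrative numbers only, not from the course): the long left tail pulls the mean below the median.

```python
from statistics import mean, median

# Left-skewed: most values are high, with a long tail of low values.
left_skewed = [1, 2, 55, 60, 62, 63, 65, 66, 68, 70]
print(mean(left_skewed), median(left_skewed))  # mean 51.2 < median 62.5
```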

Clusters, gaps, peaks & outliers

  • cluster: a group of data points that bunch together

  • outlier: a data point that lies far away from the rest

box-plot

  • example: 1, 1, 2, 2, 3, 3, 4, 4, 6, 7, 8, 10, 11, 14, 15, 20, 21

  • the median is 6 (the middle value of the 17 data points)

  • the median of the lower half falls between 2 and 3 (Q1 = 2.5) and the median of the upper half falls between 11 and 14 (Q3 = 12.5); these quartiles form the ends of the box

  • the whiskers extend to the minimum (1) and the maximum (21); see the sketch below
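A minimal Python sketch (standard library only) that reproduces these numbers using the same median-of-each-half convention:

```python
from statistics import median

data = sorted([1, 1, 2, 2, 3, 3, 4, 4, 6, 7, 8, 10, 11, 14, 15, 20, 21])

mid = len(data) // 2
med = median(data)
# With an odd number of values, exclude the middle value from both halves.
lower = data[:mid]
upper = data[mid + 1:] if len(data) % 2 else data[mid:]

q1, q3 = median(lower), median(upper)
print(min(data), q1, med, q3, max(data))  # 1 2.5 6 12.5 21
```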

Arithmetic Mean

The "average" number; found by adding all data points and dividing by the number of data points.

example: 4 3 1 6 1 7 => \frac{4+3+1+6+1+7}{6} = \frac{22}{6} = 3.\overline{6}

Median

The middle number; found by ordering all data points and picking out the one in the middle (or if there are two middle numbers, taking the mean of those two numbers).

example: reorder the number set to 1 1 3 4 6 7; the two middle numbers are 3 and 4, so the median is (3+4)/2 = 3.5

Mode

The most frequent number—that is, the number that occurs the highest number of times. In the example above the most common number is 1.
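A quick check of all three with Python's statistics module, using the same numbers:

```python
from statistics import mean, median, mode

data = [4, 3, 1, 6, 1, 7]
print(mean(data))    # 3.666... (22 / 6)
print(median(data))  # 3.5  (average of the two middle values, 3 and 4)
print(mode(data))    # 1    (occurs twice, more than any other value)
```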

Interquartile Range (IQR)

The IQR describes the middle 50% of values when ordered from lowest to highest. To find the interquartile range (IQR), ​first find the median (middle value) of the lower and upper half of the data. These values are quartile 1 (Q1) and quartile 3 (Q3). The IQR is the difference between Q3 and Q1.

Example: 4, 4, 6, 7, 10, 11, 12, 14, 15

Q1 = (4+6)/2 = 5 and Q3 = (12+14)/2 = 13 --> IQR = 13 - 5 = 8
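As a quick check, Python's statistics.quantiles (Python 3.8+) returns the same quartiles for this data set, though its default convention can differ slightly from the median-of-halves method on other inputs:

```python
from statistics import quantiles

data = [4, 4, 6, 7, 10, 11, 12, 14, 15]
q1, _, q3 = quantiles(data, n=4)  # default method="exclusive"
print(q1, q3, q3 - q1)  # 5.0 13.0 8.0
```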

Measure of spread: range, variance & standard deviation

-10, 0, 10, 20, 30 --> mean == 10

Range is the max minus the min: 30 - (-10) = 40

8, 9, 10, 11, 12 --> mean == 10, range == 4

For the variance, take each data point's distance from the mean, square it, and average the squared distances:

-10, 0, 10, 20, 30 (mean = 10) ==> \frac{(-10-10)^2 + (0-10)^2 + (10-10)^2 + (20-10)^2 + (30-10)^2}{5} = \frac{400 + 100 + 0 + 100 + 400}{5} = 200

Or we can write it as the following steps (a Python version follows the list):

  1. Find the mean

  2. For each data point, find the square of its distance to the mean.

  3. Sum the values from Step 2.

  4. Divide by the number of data points.

  5. Take the square root.

In symbols: \text{Variance} = \sigma^2 = \frac{\sum_{i=1}^{N}(x_i-\mu)^2}{N}, where i = 1 stands for the first value, N for the number of values in the list, and \mu (mu) for the mean. The standard deviation is the square root of the variance, \sqrt{\sigma^2} = \sigma, so here \sigma = \sqrt{200} = 10\sqrt{2} \approx 14.1, ten times the standard deviation of 8, 9, 10, 11, 12 (which is \sqrt{2}).
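A minimal Python sketch of these five steps (population convention, dividing by N), using the same data set:

```python
from math import sqrt

data = [-10, 0, 10, 20, 30]

mean = sum(data) / len(data)                      # step 1 -> 10.0
squared_dists = [(x - mean) ** 2 for x in data]   # step 2
variance = sum(squared_dists) / len(data)         # steps 3 and 4 -> 200.0
std_dev = sqrt(variance)                          # step 5 -> ~14.14
print(mean, variance, std_dev)
```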

What are the similarities between SD and MAD?

SD = \sqrt{\frac{\sum\vert x-\mu\vert^2}{N}} (Standard Deviation) and MAD = \frac{\sum\vert x-\bar{x}\vert}{n} (Mean Absolute Deviation), where \sum means "sum of", x is a value in the data set, \mu (or \bar{x}) is the mean of the data set, and N (or n) is the number of data points.

The formulas are very similar: they are both based on the distance from each data point to the mean, \vert x-\bar{x}\vert, and they both include dividing by the number of data points.

What are the differences?

The difference between the two formulas is that when calculating standard deviation, we square the distance from each data point to the mean, and we take the square root as the last step of the formula.
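A small sketch contrasting the two on the -10 ... 30 data set used above; squaring makes the standard deviation weight large deviations more heavily than the MAD does:

```python
from math import sqrt

data = [-10, 0, 10, 20, 30]
m = sum(data) / len(data)

mad = sum(abs(x - m) for x in data) / len(data)            # 12.0
sd = sqrt(sum(abs(x - m) ** 2 for x in data) / len(data))  # ~14.14
print(mad, sd)
```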

Which one is better?

Standard deviation is more complicated, but it has some nice properties that make it statisticians' preferred measure of spread.

Population and sample standard deviation

Standard deviation measures the spread of a data distribution. It measures the typical distance between each data point and the mean. The formula we use for standard deviation depends on whether the data is being considered a population of its own, or a sample representing a larger population.

Population Standard Deviation

If the data is being considered a population on its own, we divide by the number of data points, N:

\sigma = \sqrt{\frac{\sum(x_i - \mu)^2}{N}}

Sample Standard Deviation

If the data is a sample from a larger population, we divide by one fewer than the number of data points in the sample, n - 1 (this gives the unbiased sample variance):

s_{n-1} = \sqrt{\frac{\sum(x_i - \bar{x})^2}{n - 1}}

where \bar{x} (pronounced "x bar", the sample mean) plays the same role as \mu.
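A quick check of the two conventions with Python's statistics module (pstdev divides by N, stdev divides by n - 1):

```python
from statistics import pstdev, stdev

data = [-10, 0, 10, 20, 30]
print(pstdev(data))  # 14.14... = sqrt(200), population formula
print(stdev(data))   # 15.81... = sqrt(250), sample formula (n - 1)
```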

Mean -> for a population the mean is a parameter; for a sample it is a statistic

Population (parameter) vs. Sample (statistic):

  • Mean: \mu = \frac{\sum_{i=1}^{N} x_i}{N} for a population, \bar{x} = \frac{\sum_{i=1}^{n} x_i}{n} for a sample; in both cases the sum of the data divided by the number of data points.

  • Variance: \sigma^2 = \frac{\sum_{i=1}^{N}(x_i-\mu)^2}{N} for a population; for a sample, s_n^2 = \frac{\sum_{i=1}^{n}(x_i-\bar{x})^2}{n} is the biased estimate and s_{n-1}^2 = \frac{\sum_{i=1}^{n}(x_i-\bar{x})^2}{n-1} is the unbiased estimate.