
Asymptotic notation


Last updated 6 years ago

Big-Θ (Big-Theta) notation

Logarithms grow more slowly than polynomials. That is, Θ(log₂ n) grows more slowly than Θ(n^a) for any positive constant a. But since the value of log₂ n increases as n increases, Θ(log₂ n) grows faster than Θ(1).

The following graph compares the growth of 1, n, and log₂ n:

Here's a list of functions in asymptotic notation that we often encounter when analyzing algorithms, ordered from slowest- to fastest-growing:

  1. Θ(1)

  2. Θ(log₂ n)

  3. Θ(n)

  4. Θ(n log₂ n)

  5. Θ(n²)

  6. Θ(n² log₂ n)

  7. Θ(n³)

  8. Θ(2ⁿ)

  9. Θ(n!)

Note that an exponential function aⁿ, with a > 1, grows faster than any polynomial function n^b, where b is any constant.
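To make the ordering concrete, here's a quick sketch (not part of the original notes) that evaluates a representative of each Θ-class at a single, hypothetical n = 50 and checks that the values already line up in the same slowest-to-fastest order as the list above:

```python
import math

# Representative of each Θ-class from the list, evaluated at n = 50.
n = 50
growth = [
    ("Θ(1)",          1),
    ("Θ(log2 n)",     math.log2(n)),
    ("Θ(n)",          n),
    ("Θ(n log2 n)",   n * math.log2(n)),
    ("Θ(n^2)",        n**2),
    ("Θ(n^2 log2 n)", n**2 * math.log2(n)),
    ("Θ(n^3)",        n**3),
    ("Θ(2^n)",        2**n),
    ("Θ(n!)",         math.factorial(n)),
]

values = [v for _, v in growth]
# At this n the representatives happen to be strictly increasing down the list;
# asymptotic ordering is about large n, but n = 50 is already large enough here.
assert values == sorted(values)
for name, v in growth:
    print(f"{name:16} {v:.3g}")
```

Keep in mind this is a pointwise check at one n, not a proof: asymptotic notation only promises the ordering for sufficiently large inputs, and constant factors can flip comparisons for small n.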

A function has "constant" growth if its output does not change based on its input, n. The easy way to identify constant functions is to find those that have no n in their expression anywhere, or have n⁰. In this case, 1 and 1000 are constant.

A function has "linear" growth if its output increases linearly with the size of its input. The way to identify linear functions is to find those where n is never raised to a power (although n¹ is OK) or used as a power. In this case, 3n and (3/2)n are linear.

A function has "polynomial" growth if its output increases according to a polynomial expression. The way to identify polynomial functions is to find those where n is raised to some constant power. In this case, 2n³ and 3n² are polynomial.

A function has "exponential" growth if its output increases according to an exponential expression. The way to identify exponential functions is to find those where a constant is raised to some expression involving n. In this case, 2ⁿ and (3/2)ⁿ are exponential.

We have several different types of functions here, so we start by thinking about the general properties of those function types and how their rates of growth compare. Here's a reminder of the different function types, in order of their growth:

  1. Constant functions

  2. Logarithmic functions

  3. Linear functions

  4. Linearithmic functions

  5. Polynomial functions

  6. Exponential functions

Which type is each of the presented options?

  1. Constant functions:

    • 64

  2. Logarithmic functions:

    • log₈ n, log₂ n

  3. Linear functions:

    • 4n

  4. Linearithmic functions:

    • n log₂ n, n log₈ n

  5. Polynomial functions:

    • 8n², 6n³

  6. Exponential functions:

    • 8²ⁿ

Within the logarithmic functions, the lesser bases grow more quickly than the higher bases, so log₂ n will grow more quickly than log₈ n. You can see that in the graph below:

Within the polynomial functions, 8n² will grow more slowly than 6n³, since it has a smaller exponent. We don't even have to look at the constants in front, since the exponent is more significant.
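Both comparisons are easy to check numerically. A small sketch (my own, not from the course) verifying that log₂ n is exactly 3 × log₈ n for every n, and that the larger exponent wins over the larger constant almost immediately:

```python
import math

# 1) Smaller log bases grow more quickly: log8 n = log2 n / log2 8,
#    so log2 n is exactly 3 times log8 n for every n.
for n in (10, 1000, 10**6):
    assert math.isclose(math.log2(n), 3 * math.log(n, 8))

# 2) The exponent beats the constant factor: 6n^3 > 8n^2 as soon as
#    n > 8/6, i.e. from n = 2 onward.
assert 6 * 1**3 < 8 * 1**2                      # at n = 1 the bigger constant still wins
assert all(6 * n**3 > 8 * n**2 for n in range(2, 1000))
```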

To conclude, the correct order of the functions would be:

  • 64

  • log₈ n

  • log₂ n

  • 4n

  • n log₈ n

  • n log₂ n

  • 8n²

  • 6n³

  • 8²ⁿ

Big-O notation

If a running time is O(f(n)), then for large enough n, the running time is at most k·f(n) for some constant k. Here's how to think of a running time that is O(f(n)):

We use big-O notation for asymptotic upper bounds, since it bounds the growth of the running time from above for large enough input sizes.
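Here's a small sketch of that definition in action, using a made-up running time t(n) = 3n² + 10n + 5. It is O(n²) because a single constant k = 4 bounds it for all large enough n (the constant and the threshold are my own choices, found by solving 3n² + 10n + 5 ≤ 4n²):

```python
# Hypothetical running time: t(n) = 3n^2 + 10n + 5, which is O(n^2).
def t(n):
    return 3 * n**2 + 10 * n + 5

k = 4
# 3n^2 + 10n + 5 <= 4n^2  <=>  n^2 - 10n - 5 >= 0, which holds from n = 11 on.
assert all(t(n) <= k * n**2 for n in range(11, 10_000))

# The bound may fail for small n, and that's fine: big-O only makes a
# claim about large enough inputs.
assert t(1) > k * 1**2
```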

Big-Ω (Big-Omega) notation

Sometimes, we want to say that an algorithm takes at least a certain amount of time, without providing an upper bound. We use big-Ω notation; that's the Greek letter "omega."

If a running time is Ω(f(n)), then for large enough n, the running time is at least k·f(n) for some constant k. Here's how to think of a running time that is Ω(f(n)):

We say that the running time is "big-Ω of f(n)." We use big-Ω notation for asymptotic lower bounds, since it bounds the growth of the running time from below for large enough input sizes.

Just as Θ(f(n)) automatically implies O(f(n)), it also automatically implies Ω(f(n)). So we can say that the worst-case running time of binary search is Ω(log₂ n).
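To see the Θ(log₂ n) behavior of binary search directly, here's a sketch (my own instrumented version, not from the course) that counts loop iterations and checks the worst case never exceeds ⌊log₂ n⌋ + 1:

```python
import math

def binary_search_steps(arr, target):
    """Binary search over a sorted list, returning the number of loop iterations."""
    lo, hi, steps = 0, len(arr) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return steps
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps  # target absent: this is the worst case

for n in (10, 1_000, 1_000_000):
    arr = list(range(n))
    worst = binary_search_steps(arr, -1)   # never found -> maximum iterations
    # Each iteration halves the remaining range, so at most floor(log2 n) + 1 steps.
    assert worst <= math.floor(math.log2(n)) + 1
```

Searching a million elements takes at most 20 iterations, while the same miss in a linear scan would take a million, which is the gap the logarithmic bound captures.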

The linearithmic functions are those that multiply a linear term by a logarithm, of the form n logₖ n. With the n being the same in both, the growth depends on the base of the logarithm. And as we just stated, the lesser bases grow more quickly than the higher bases, so n log₂ n will grow more quickly than n log₈ n. You can see that in the graph below:
