
Big O Notation

Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. In computer science, it is used to classify algorithms according to how their run time or space requirements grow as the input size grows.
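As a minimal sketch of this idea, the two functions below (hypothetical names) do the same kind of work but scale differently: one performs a single operation no matter how long the list is, while the other may scan every element.

```python
def first_element(items):
    """O(1): one operation, regardless of list length."""
    return items[0]

def contains(items, target):
    """O(n): in the worst case, every element is inspected."""
    for item in items:
        if item == target:
            return True
    return False
```

Doubling the list length leaves `first_element` unaffected but doubles the worst-case work done by `contains`.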

Common Time Complexities

| Notation   | Name         | Description                                           | Example                            |
|------------|--------------|-------------------------------------------------------|------------------------------------|
| O(1)       | Constant     | Execution time is independent of input size.          | Accessing an array index           |
| O(log n)   | Logarithmic  | Time grows logarithmically with input size.           | Binary search                      |
| O(n)       | Linear       | Time grows linearly with input size.                  | Linear search                      |
| O(n log n) | Linearithmic | Time grows as n log n; between linear and quadratic.  | Merge sort, Quick sort (average)   |
| O(n²)      | Quadratic    | Time grows with the square of the input size.         | Bubble sort                        |
| O(2ⁿ)      | Exponential  | Time doubles with each additional input element.      | Naive recursive Fibonacci          |
| O(n!)      | Factorial    | Time grows factorially with input size.               | Brute-force Traveling Salesperson  |
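To make the O(log n) entry concrete, here is a standard binary search sketch: each iteration halves the remaining search interval, so a sorted list of a million elements needs at most about 20 comparisons.

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    O(log n): the search interval is halved on every iteration.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1  # discard the lower half
        else:
            hi = mid - 1  # discard the upper half
    return -1
```

Compare this with linear search, which in the worst case must examine all n elements.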

Why it Matters

Understanding Big O helps developers:

  • Predict performance at scale.
  • Choose the right algorithm for the job.
  • Identify bottlenecks in code.

Visualizing Growth
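One way to get a feel for these growth rates is to print rough operation counts at a few input sizes. This is a minimal sketch; the `operations_estimate` helper is hypothetical and treats each complexity class as an exact formula, ignoring constant factors.

```python
import math

def operations_estimate(n):
    """Rough operation counts for common complexity classes at input size n."""
    log_n = math.ceil(math.log2(n))
    return {
        "O(1)": 1,
        "O(log n)": log_n,
        "O(n)": n,
        "O(n log n)": n * log_n,
        "O(n^2)": n * n,
    }

for n in (10, 1_000, 1_000_000):
    print(n, operations_estimate(n))
```

Even at n = 1,000,000, the logarithmic column stays around 20 operations while the quadratic column reaches 10¹², which is why the gap between these classes dominates performance at scale.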
