Help I'm dumb 4
Posted by Zearog@reddit | learnprogramming | 12 comments
Might be on the wrong thread. Apologies for the wording. I have the vocabulary of a two year old.
Java class // For some reason, big O notation is not clicking for me. So far, the two main concepts that keep coming up in this class are runtime and memory complexity (efficiency?). I understand those concepts, I think. Runtime: how long the code takes to run. Memory: how much resource(?) it takes to run the code. The issue I've repeatedly run into is the math(?) portion of it; rather, how do I determine what's O(1), O(log n), O(n), O(n log n), O(n^2), O(2^n), or O(n!)?
What I think I know:
If the longest runtime of a piece of my code is O(n^2), then it does not matter how many lower-runtime pieces of code I add; the end result will still be O(n^2).
Stacking pieces with the same runtime results in the same runtime, i.e. stacking a fixed number of O(n) pieces will still result in O(n).
Stretchslash@reddit
You're not being dumb. And I'm not saying forget about Big O; it's good to understand programming concepts. But at least in my experience it doesn't come up as often as you may think, so don't get stressed trying to figure out everything about Big O (unless it's not stressful and it's just curiosity).
If you have a general idea about it (more nested loops = worse performance the bigger the data set), you should be fine.
r_ht76@reddit
Big O is just answering one question. As your input grows, how does the work grow?
Imagine your code is a machine. You feed it an input of size n. Maybe n is the length of an array, the number of users, the size of a string. Big O describes the shape of the curve you'd draw if you plotted "amount of work done" against "size of input". It throws away constants and small terms because we only care about what happens when n gets huge.
O(1), constant. The work doesn't depend on n at all. Looking up arr[5] takes the same time whether the array has 10 elements or 10 million. You walk into a room, grab the thing nearest the door, and leave.
O(log n), logarithmic. Each step throws away half the remaining work. Binary search is the classic. If n doubles, you do one extra step. n = 1000, you do about 10 steps. n = 1,000,000, you do about 20. Logarithms grow painfully slowly, which is why we love them.
The signature is division by a constant factor inside the loop. Halving, thirding, whatever. Cut the problem down by a fixed fraction each step and you're in log n territory.
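That halving signature, as a minimal Java sketch (binary search over a sorted array; class and method names are mine):

```java
// Binary search: each comparison discards half of the remaining range,
// so at most about log2(n) + 1 comparisons are ever needed.
public class BinarySearchDemo {
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;  // safer than (lo + hi) / 2 for huge arrays
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;  // discard the left half
            else hi = mid - 1;                       // discard the right half
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] arr = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(arr, 7));  // 3
        System.out.println(binarySearch(arr, 4));  // -1
    }
}
```

The loop variable (`hi - lo`) shrinks by a constant fraction each pass, which is exactly the "division by a constant factor inside the loop" pattern.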
O(n), linear. You touch every element once. n doubles, work doubles.
O(n log n). This is what good sorting algorithms do. Mergesort, heapsort, the sort Java actually uses. The pattern is "do n work, log n times" or "do log n work, n times". You see it whenever you split something in half repeatedly and do linear work at each level.
O(n²), quadratic. Two nested loops, each going up to n. The dead giveaway.
n = 10 means 100 operations. n = 1000 means a million. n = 1,000,000 means a trillion and your laptop is on fire.
O(2^n), exponential. Each step branches into two. Naive recursion for Fibonacci does this. n = 30 is fine. n = 60 outlives the universe.
O(n!), factorial. Trying every permutation. Travelling salesman by brute force. n = 12 is already painful.
Walk through the code asking one question at every line. How many times does this run, as a function of n?
Single statement, runs once, O(1). A loop from 0 to n, O(n). A loop nested inside another loop, both going to n, O(n) times O(n) = O(n²). A loop where the counter halves or doubles, O(log n). A loop from 0 to n with a halving loop inside it, O(n) times O(log n) = O(n log n).
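Those patterns as Java loop skeletons, with an operation counter added purely so you can see the counts directly:

```java
// Count how many times the loop body runs for each common pattern.
public class LoopPatterns {
    static long linear(int n) {               // O(n): one loop over the input
        long ops = 0;
        for (int i = 0; i < n; i++) ops++;
        return ops;
    }
    static long quadratic(int n) {            // O(n^2): a loop nested inside a loop
        long ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) ops++;
        return ops;
    }
    static long logarithmic(int n) {          // O(log n): the counter halves each pass
        long ops = 0;
        for (int i = n; i > 0; i /= 2) ops++;
        return ops;
    }
    static long linearithmic(int n) {         // O(n log n): a halving loop inside a linear loop
        long ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = n; j > 0; j /= 2) ops++;
        return ops;
    }

    public static void main(String[] args) {
        System.out.println(linear(1000));        // 1000
        System.out.println(quadratic(1000));     // 1000000
        System.out.println(logarithmic(1000));   // 10
        System.out.println(linearithmic(1000));  // 10000
    }
}
```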
Then add the pieces together and keep only the biggest.
Total: O(1) + O(n) + O(n²) = O(n²). Which leads me to the two rules you already worked out, both correct.
So O(n²) + O(n) + O(1) collapses to O(n²). When n is huge, n² dwarfs the rest. At n = 1,000,000, n² is a trillion and n is just a million. Adding a million to a trillion changes nothing meaningful. Big O cares about the dominant term and ignores the rest.
Stacking the same complexity doesn't change the class. Three separate O(n) loops one after another do 3n work total, but Big O drops constants. 3n is still O(n). Same reason O(n/2) is just O(n) and O(100n) is just O(n). Constants don't survive.
The reason for both rules is that Big O is an asymptotic statement. It describes behaviour in the limit as n goes to infinity. Constants and smaller terms get swamped at infinity, so we throw them away.
Once you've done this enough, you stop deriving and start pattern-matching.
One loop over the input, O(n). Two nested loops over the input, O(n²). Halving or doubling inside a loop, log n shows up. Recursion that splits the input in half and recurses on both halves, O(n log n) usually. Recursion that branches into two calls on input of size n minus 1, O(2^n).
A useful sanity check. Plug in n = 10 and n = 100. Count operations roughly.
O(n) goes from 10 to 100. Tenfold. O(n²) goes from 100 to 10,000. Hundredfold. O(log n) goes from about 3 to about 7. Barely moves. O(2\^n) goes from 1,024 to a number with 30 digits. Catastrophic.
If your timing roughly matches one of these jumps when you increase n tenfold, you've identified the complexity empirically.
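One way to run that sanity check deterministically is to count operations instead of wall-clock time (a sketch; the counter stands in for a stopwatch):

```java
// Empirical complexity check: grow n tenfold and compare work done.
public class GrowthCheck {
    static long quadraticOps(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) ops++;  // the "work" being measured
        return ops;
    }

    public static void main(String[] args) {
        long small = quadraticOps(10);   // 100
        long big = quadraticOps(100);    // 10000
        // A hundredfold jump from a tenfold input increase points at O(n^2).
        System.out.println(big / small); // 100
    }
}
```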
Memory complexity is the same idea but for space instead of time. How much extra storage does the algorithm need as n grows? An algorithm that creates a copy of the input array is O(n) space. One that builds an n by n grid is O(n²) space. One that just uses a few variables regardless of input size is O(1) space.
Time and space complexity are independent. An algorithm can be fast in time and greedy in space, or slow in time and frugal in space. You analyse them separately, using the same rules.
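The three space cases above as Java sketches (class and method names are mine):

```java
import java.util.Arrays;

// Space complexity: how much *extra* storage grows with n.
public class SpaceDemo {
    // O(1) extra space: a few variables, regardless of input size.
    static int sumInPlace(int[] a) {
        int total = 0;
        for (int x : a) total += x;
        return total;
    }
    // O(n) extra space: allocates a copy the same size as the input.
    static int[] reversedCopy(int[] a) {
        int[] out = new int[a.length];
        for (int i = 0; i < a.length; i++) out[i] = a[a.length - 1 - i];
        return out;
    }
    // O(n^2) extra space: builds an n-by-n grid.
    static int[][] multiplicationTable(int n) {
        int[][] grid = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) grid[i][j] = (i + 1) * (j + 1);
        return grid;
    }

    public static void main(String[] args) {
        System.out.println(sumInPlace(new int[]{1, 2, 3}));                       // 6
        System.out.println(Arrays.toString(reversedCopy(new int[]{1, 2, 3})));    // [3, 2, 1]
        System.out.println(multiplicationTable(3)[2][2]);                         // 9
    }
}
```

Note `reversedCopy` is O(n) time *and* O(n) space, while `sumInPlace` is O(n) time but O(1) space — the two measures really are independent.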
Don't worry about the math feeling slippery at first. It clicks once you've stared at a few dozen loops and asked "how many times does this run".
TechBriefbyBMe@reddit
Big O is just "will this break if my data gets 10x bigger?" You already know this intuitively, you're just mad they won't let you use normal words for it.
ExtraTNT@reddit
Big O is really x*(whatever is inside the O)…
So one iteration takes x time and you do n log n iterations… If you know n will never be over 200, an algorithm that's O(n^2) can be better than one that's O(n), just by being much faster per iteration. And if you combine O(n) with O(n), you only increase that x constant; the iteration count stays in the same class…
Zesher_@reddit
It's a measure of complexity, not time. I could run a triple loop on 5 pieces of data and it would be faster than running a single loop on a data set of 5 million items.
The O notation is helpful for conceptualizing how the number of steps your code needs grows, not the actual total number, if that makes sense.
Smart_Tool247@reddit
You’re overthinking it. Big O just shows how your code slows down as input gets bigger. One loop = O(n), loop inside a loop = O(n²), and if it halves each step then it’s O(log n). Ignore small stuff, just focus on recognizing patterns.
iJustSeen2Dudes1Bike@reddit
Yeah, it's not "how long the code takes to run", it's more about how well the code handles increasing input size.
dmazzoni@reddit
Yes everything you understand so far is right.
Intuitively you just need to measure how many "steps" your function runs.
For example, if your function has a for loop that runs from 1 to n, then the inside of the loop runs n times, so your code is O(n).
It doesn't matter if you do one instruction or 10 instructions inside the loop; n and 10n are both O(n). The reason is that we only care about what happens as n gets large.
If you have two nested for loops, and each loop runs n times, then the stuff inside will run n^2 times, so it's O(n^2).
The way you get O(log n) is when you keep dividing the problem in half. A good example is binary search - if you're searching through an array with 1000 elements, the second time through you search 500 elements, then 250 elements and so on.
The number of times your loop will run is the number of times you can divide n in half before you get 1, which is log_2(n).
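That "count the halvings" idea is easy to check in a few lines of Java (a sketch; names are mine):

```java
// How many times can you halve n before reaching 1?
// That count is floor(log2(n)) -- the number of passes binary search makes.
public class HalvingCount {
    static int halvings(int n) {
        int steps = 0;
        while (n > 1) {
            n /= 2;   // each pass discards half the remaining elements
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(halvings(1024));     // 10
        System.out.println(halvings(1048576));  // 20 -- a million elements, twenty steps
    }
}
```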
Can you think of an O(2\^n) example?
johnpeters42@reddit
"As n gets large" is important. An O(n) algorithm may really be around 1000*n, while an O(n^2) algorithm may be 5n^2. So for n > 200 the latter is worse, but if in practice you only deal with n < 50, then the latter is better on that range.
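Plugging numbers into those hypothetical constants (1000n vs 5n²) shows where the crossover sits:

```java
// Compare 1000*n (an O(n) algorithm with a big constant factor)
// against 5*n^2 (an O(n^2) algorithm with a small one).
public class CrossoverDemo {
    static long costLinear(long n)    { return 1000 * n; }
    static long costQuadratic(long n) { return 5 * n * n; }

    public static void main(String[] args) {
        // At n = 50: 50,000 vs 12,500 -- the "worse" O(n^2) algorithm wins.
        System.out.println(costLinear(50) > costQuadratic(50));    // true
        // At n = 500: 500,000 vs 1,250,000 -- O(n) wins, and the gap only grows.
        System.out.println(costLinear(500) < costQuadratic(500));  // true
        // Break-even: 1000n = 5n^2  =>  n = 200.
    }
}
```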
SlightTip6811@reddit
The O(2^n) example that always comes to my mind is Fibonacci with naive recursion. Each call makes two more calls, so you get this explosion of function calls that doubles at every level.
Like, calculating fib(5) calls fib(4) and fib(3), then fib(4) calls fib(3) and fib(2), and so on. The tree gets massive really quickly because the same calculations happen over and over.
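You can watch that explosion happen by counting the calls (a sketch; the call counter is added purely for illustration):

```java
// Naive recursive Fibonacci: each call spawns two more,
// so the total number of calls grows exponentially with n.
public class FibCalls {
    static long calls = 0;

    static long fib(int n) {
        calls++;                 // track every invocation
        if (n < 2) return n;
        return fib(n - 1) + fib(n - 2);  // the two branches
    }

    public static void main(String[] args) {
        calls = 0;
        System.out.println(fib(5));  // 5
        System.out.println(calls);   // 15 calls just for fib(5)
        calls = 0;
        fib(20);
        System.out.println(calls);   // 21891 calls for fib(20)
    }
}
```

Caching results (memoization) collapses this back to O(n), because each fib(k) then only ever gets computed once.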
zeekar@reddit
So N is the size of the thing you're operating on. If it's a list, it's the number of elements. If it's a file, it's the size - lines or bytes, whatever unit you're reading it in.
O(1) means it doesn't matter - your program does the same amount of work no matter how big the input. Double the size and it runs just as fast or uses the same amount of space. Looking something up in a hashtable is O(1) as long as there are no collisions.
O(logN) means that doubling the size only adds a constant amount of extra work; you'd have to square the size to double the runtime or space.
O(N) means that if you double the size, the runtime or space doubles.
O(NlogN) is more than N but less than N^(2). The best sorting algorithms are here.
O(N^(2)) means that if you double the size, the runtime or space used by the program quadruples. Naïve sorting algorithms like bubble sort are here.
O(2^N) is exponential. If you double the size of the input, you square the runtime/space. If you triple the size of the input, you cube it. It grows very fast.
... but not as fast as O(n!), which grows as N factorial. You can't even say what happens when you double n because it depends on how big n is. If n is 2, doubling it multiplies the runtime by 12. If n is 10, doubling it multiplies the runtime by 670 billion.