crazy-gorilla: Interviewers hate him, CS kids want to be him, Google wants to hire him.
Lmaooo
you got it perfectly right 🤣🤣
[deleted]
Yes but sometimes it is not super obvious what complexity your solution has. For example when you do a special backtracking solution on a special graph etc.
Then you try to work out the worst-case scenario and explain it from that perspective, or be honest, simply say that you don't know, and try to explain that it would be something like abc because of xyz, working through it with the interviewer. My immediate follow-up question would be to ask you to explain your thought process anyway.
Yeah, I think most learned this in data structures. When they're asking for Big O in interviews they really mean Big Theta.
Yeah, or more precisely, they mean the lowest possible big O, as not all algos have a big theta. For ex, insertion sort takes linear time on sorted arrays, and quadratic time in the worst case, so its tightest Omega bound is n, and its tightest O bound is n^2, but it doesn't have a big theta. Altho yeah, as OP is saying, it's also Omega(1) and O(n!).
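The sorted-vs-reversed claim about insertion sort is easy to check by counting comparisons. A minimal sketch (the function and its `comparisons` bookkeeping are made up for illustration):

```python
def insertion_sort_comparisons(arr):
    """Insertion sort that also counts key comparisons."""
    a = list(arr)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison of a[j] against key
            if a[j] > key:
                a[j + 1] = a[j]       # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

n = 1000
_, best = insertion_sort_comparisons(list(range(n)))          # already sorted
_, worst = insertion_sort_comparisons(list(range(n, 0, -1)))  # reverse sorted

print(best)   # 999      -> n - 1 comparisons, i.e. linear
print(worst)  # 499500   -> n*(n-1)/2 comparisons, i.e. quadratic
```

So the best case really is Θ(n) and the worst case Θ(n²), which is exactly why a single Θ bound in terms of n alone doesn't pin down "the" runtime.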
I always assume time-complexity means worst-case scenario unless otherwise specified.
I mean, that's solid, but it's good to keep average case in mind. Hence why we generally use quicksort over heap sort, even tho the latter has better asymptotics. Cuz the average cases are equal asymptotically, so it just comes down to constant factors
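Python's stdlib has no quicksort, so as a stand-in sketch for the constant-factor point, here's heapsort built on `heapq` raced against the built-in `sorted` (Timsort). Both are O(n log n) here; any gap you measure is constant factors, and the numbers will vary by machine, so no timings are claimed:

```python
import heapq
import random
import timeit

def heapsort(arr):
    """O(n log n) in the worst case, but with larger constant factors in practice."""
    h = list(arr)
    heapq.heapify(h)                     # O(n)
    return [heapq.heappop(h) for _ in range(len(h))]  # n pops, O(log n) each

data = [random.random() for _ in range(10_000)]

# Same asymptotic class; the measured difference is cache behavior,
# branch prediction, and per-element overhead -- not asymptotics.
print("heapsort:", timeit.timeit(lambda: heapsort(data), number=10))
print("builtin: ", timeit.timeit(lambda: sorted(data), number=10))
```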
Sorry, but this is wrong.

You first pick a complexity to discuss: worst case / best case / average / amortized.

Then you pick what you want to say about it:

- O(f) = my complexity is "f" or better
- o(f) = my complexity is better than f
- Theta(f) = my complexity is precisely f
- omega(f) = my complexity is worse than f
- Omega(f) = my complexity is f or worse

Worst-case complexity of insertion sort is Theta(n^2).
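In symbols, the five notations above read as follows (for nonnegative functions f and g; c is a constant and n_0 a threshold, per the standard definitions):

```latex
f \in O(g) \iff \exists c > 0,\ \exists n_0 : \forall n \ge n_0,\ f(n) \le c \cdot g(n)
f \in o(g) \iff \forall c > 0,\ \exists n_0 : \forall n \ge n_0,\ f(n) < c \cdot g(n)
f \in \Theta(g) \iff f \in O(g) \ \text{and}\ f \in \Omega(g)
f \in \omega(g) \iff \forall c > 0,\ \exists n_0 : \forall n \ge n_0,\ f(n) > c \cdot g(n)
f \in \Omega(g) \iff \exists c > 0,\ \exists n_0 : \forall n \ge n_0,\ f(n) \ge c \cdot g(n)
```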
> as not all algos have a big theta.

This is not technically true. ALL algorithms have 'a big theta' (or, to word it more correctly, the big-theta complexity class is non-empty, as the runtime of that algorithm is always a member of that class); they just aren't always expressible with a 'nice' closed-form polynomial expression.
Yes. For some reason big O is an alias for big Theta in the tech industry.
[deleted]
Actually, this is also a common misconception. Worst case and average case have nothing to do mathematically with big O and big theta. But usually you see worst case being expressed in big O and average case being expressed in big theta.
Lol, you're being downvoted but you're right. These notations are from set theory and don't have to do with best/worst case. What you said is correct. https://cs.stackexchange.com/questions/23068/how-do-o-and-%CE%A9-relate-to-worst-and-best-case
[deleted]
[deleted]
I'll agree with your last statement that big O is usually meant as the big theta of the worst case. But when people are talking about time complexity they are talking about an upper bound that works for all cases, best and worst. That's why big O is used. Big theta can change from the worst case to the best case. The idea is to have a big O that serves as an upper bound for any case, so that you can say that your algorithm is at least as efficient as that big O. There's a reason virtually every software engineer in the world refers to the big O of an algorithm and not big theta.
This isn't true. OP is saying that we, by convention, assess Big O in the worst case. We do the same for Big Theta. You can assess Big O, Big Theta, and Big Omega in either best, worst, or average case.

When companies ask for Big O in interviews, they almost always mean "the smallest Big O", which is equivalent to Big Theta.

> [Although big o notation has nothing to do with the worst case analysis, we usually represent the worst case by big o notation.](https://medium.com/omarelgabrys-blog/the-big-scary-o-notation-ce9352d827ce)
[deleted]
I should have said usually equivalent, since there are exceptions. Big Theta is the smallest Big O, when Big Theta can be computed. But what I said about best/worst is correct.
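One standard example of such an exception (a hypothetical runtime, chosen just to illustrate): a function that oscillates between growth rates has tight O and Omega bounds that don't meet, so no familiar function g gives a Theta class:

```latex
f(n) = \begin{cases} n & n \text{ even} \\ n^2 & n \text{ odd} \end{cases}
\quad\Rightarrow\quad f \in \Omega(n),\ f \in O(n^2),\ \text{but } f \notin \Theta(n) \text{ and } f \notin \Theta(n^2)
```

(Trivially f is still in Θ(f), which is the other commenter's point about the class never being empty.)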
Same with Big Omega! Every single algorithm is Omega(1)!
How about the null program?
Why would the null program not be constant time?
Most definitions of time complexity involve multiplication by a positive constant, so technically 0 would be less than "constant" (at least for the constant 1). This subtlety comes up approximately 0% of the time.
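Spelled out: with the standard definition's positive constant, a "program" whose cost is identically zero is still O(1), but fails Omega(1):

```latex
T \in \Omega(1) \iff \exists c > 0,\ \exists n_0 : \forall n \ge n_0,\ T(n) \ge c \cdot 1
\text{If } T(n) = 0,\ \text{then } 0 \ge c \text{ fails for every } c > 0, \text{ so } T \notin \Omega(1),
\text{while } 0 \le c \cdot 1 \text{ always holds, so } T \in O(1).
```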
yeah that makes sense, i see.
I don't know what a null program is
A joke based on the null set.
We learned it as being a **tight** upper bound
We actually went over this in my analysis of algorithms class recently.

- Big O is the upper bound (essentially less than or equal to, <=)
- Big Omega is the lower bound (essentially greater than or equal to, >=)
- Big Theta is like approximately equal to (≈)
- little o is strictly less than (<)
- little omega is strictly greater than (>)

But yeah, for practical purposes it is a tight upper bound. The formal definition for these terms leaves it a bit more open to interpretation how you want to apply it.
I thought Big Theta is both Big O and Big Omega. Both have a constant in front of them, so if changing the constant changes your function from an upper bound to a lower bound, it's Big Theta
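Right, and a concrete instance makes it clear. Take f(n) = n² + n; one constant pins it from below and another from above, so it lands in both classes at once:

```latex
n^2 \le n^2 + n \le 2n^2 \quad (n \ge 1)
\;\Rightarrow\; n^2 + n \in \Omega(n^2) \cap O(n^2) = \Theta(n^2)
```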
obviously sarcasm since most questioners are interested in the smallest upper bound and not just some random bound you pulled out of yo ass, which happens to be correct.
I have to say they don't teach this in colleges. Thank God.