Also, big O analysis IMO should just be the starting point of maximizing efficiency. The coefficients that get dropped can have a huge impact. For example, any algorithm written in JavaScript or Visual Basic will be of the same order as the same algorithm written in C/C++ or Rust, but will likely perform much slower. And the exact manner of implementation could even result in the C version being slower than the VB one.
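To make that last point concrete, here's a minimal C sketch (purely illustrative, my own example): both functions are O(n²) sums over the same matrix, but the column-major version walks memory in a cache-unfriendly order and will typically run several times slower for large n. Same order, very different constant.

```c
#include <stddef.h>

/* Both functions are O(n*n); only the memory access pattern differs. */

/* Row-major traversal: walks memory contiguously, cache-friendly. */
double sum_row_major(const double *m, size_t n) {
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            total += m[i * n + j];
    return total;
}

/* Column-major traversal: same asymptotic cost, far more cache misses. */
double sum_col_major(const double *m, size_t n) {
    double total = 0.0;
    for (size_t j = 0; j < n; j++)
        for (size_t i = 0; i < n; i++)
            total += m[i * n + j];
    return total;
}
```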
And on the other hand, I wouldn’t call a lot of big O analysis very advanced math. You might get a tighter bound with advanced math on some algorithms, but you can get a rough estimate just by multiplying and adding the loops together. The harder question would be something like “does this early-exit optimization turn an O(x³) algorithm into an O(log(x)·x²) one?”
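To make the “multiply and add the loops” heuristic concrete, a rough C sketch (hypothetical functions, just for illustration): the first version is O(n³) simply by counting the nesting, while the early-exit version’s true cost depends on how often the break fires, which is exactly where the harder math starts.

```c
#include <stddef.h>

/* Three nested loops over n elements: by inspection, O(n^3). */
size_t count_zero_triples(const int *a, size_t n) {
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            for (size_t k = 0; k < n; k++)
                if (a[i] + a[j] + a[k] == 0)
                    count++;
    return count;
}

/* Same shape with an early exit, assuming a is sorted ascending: the inner
 * loop can stop as soon as the sum goes positive. The worst case is still
 * O(n^3); proving a tighter bound is where the more careful math comes in. */
size_t count_zero_triples_early_exit(const int *a, size_t n) {
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            for (size_t k = 0; k < n; k++) {
                if (a[i] + a[j] + a[k] > 0)
                    break; /* sorted input: larger k can only increase the sum */
                if (a[i] + a[j] + a[k] == 0)
                    count++;
            }
    return count;
}
```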
Another big thing that doesn’t get covered by big O analysis is the potential for parallelization and multithreading, because the speedup from multithreading only amounts to one of those dropped coefficients.
And yet, especially for workloads running on a server with 32-128 cores, being able to run algorithms in parallel makes a huge difference to performance.
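A hedged sketch of that point using OpenMP (one assumed threading model among many, compiled with something like -fopenmp): both sums are O(n), and the parallel one only divides the constant by roughly the core count, yet on a 32-128 core box that dropped coefficient is exactly what matters.

```c
#include <omp.h>

/* Serial and parallel sums are both O(n); threading only shrinks the
 * constant factor by (roughly) the number of cores. */

double sum_serial(const double *a, long n) {
    double total = 0.0;
    for (long i = 0; i < n; i++)
        total += a[i];
    return total;
}

double sum_parallel(const double *a, long n) {
    double total = 0.0;
    /* Each thread sums a chunk; OpenMP combines the partial sums. */
    #pragma omp parallel for reduction(+:total)
    for (long i = 0; i < n; i++)
        total += a[i];
    return total;
}
```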
There’s a lot of software engineering that doesn’t require understanding Big O
I think the tl;dr of what you said is that even when you have a theoretical handle on the growth function, you still need to actually benchmark anyway.
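For what it’s worth, a minimal benchmarking sketch (POSIX clock_gettime, with a hypothetical function under test): coarse wall-clock timing like this is enough to compare two implementations that big O says are “the same”, though it’s no substitute for a proper benchmarking harness.

```c
#include <time.h>

/* Hypothetical function under test; swap in the real workload. */
extern double sum_parallel(const double *a, long n);

/* Returns elapsed wall-clock seconds for a single call. */
double time_seconds(double (*fn)(const double *, long),
                    const double *a, long n) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    volatile double result = fn(a, n); /* volatile: keep the call from being optimized away */
    (void)result;
    clock_gettime(CLOCK_MONOTONIC, &end);
    return (double)(end.tv_sec - start.tv_sec)
         + (double)(end.tv_nsec - start.tv_nsec) / 1e9;
}
```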