Big-O analysis is methodical and depends purely on the control flow in your code, so it's definitely doable, but not exactly easy.
It would probably be best to let the compiler do the initial heavy lifting and analyze the control operations in the compiled bytecode rather than the source. Studying how compilers work might also help here, for example how they perform code-reachability analysis.
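For example, compile a simple counting loop and dump it with `javap -c`; the loop shows up as a single backward branch. (The exact offsets, constant-pool indices, and whether the test sits at the top or the bottom of the loop depend on the compiler version, but the shape is the same.)

```java
// Count.java -- one linear loop, O(n)
public class Count {
    static void count(int n) {
        for (int i = 0; i < n; i++) {
            System.out.println(i);
        }
    }
}
```

A typical `javap -c Count` disassembly of `count(int)` looks roughly like this:

```
 0: iconst_0             // i = 0
 1: istore_1
 2: iload_1              // loop condition: i < n
 3: iload_0
 4: if_icmpge     20     // forward branch: exit the loop
 7: getstatic     #..    // System.out
10: iload_1
11: invokevirtual #..    // PrintStream.println(int)
14: iinc          1, 1   // i++
17: goto          2      // backward branch: 17 -> 2, this is the loop
20: return
```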
Basically (as far as I know), every bytecode instruction runs in constant time except for backward branches (method invocations are handled by recursing into the callee, as described below).
So whenever you come across a backward-branch instruction, you need to record context about that loop, especially its exit condition, and determine the loop's Big-O. Then multiply that by the complexity of the instructions inside its body, which may include method calls or further backward branches that you analyze recursively.
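Here's a minimal sketch of that idea using the ASM tree API (an assumption; any bytecode reader would do). It only estimates the nesting depth of backward branches and pretends every loop is linear in n, so treat it as a starting point, not the full algorithm:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.tree.AbstractInsnNode;
import org.objectweb.asm.tree.ClassNode;
import org.objectweb.asm.tree.JumpInsnNode;
import org.objectweb.asm.tree.MethodNode;

public class LoopNesting {

    // A loop is the instruction range [branch target, backward branch].
    // A real analyzer would also inspect each loop's exit condition and
    // recurse into MethodInsnNode call targets.
    static int maxLoopDepth(MethodNode method) {
        List<int[]> loops = new ArrayList<>();
        for (AbstractInsnNode insn : method.instructions.toArray()) {
            if (insn instanceof JumpInsnNode) {
                JumpInsnNode jump = (JumpInsnNode) insn;
                int from = method.instructions.indexOf(jump);
                int to = method.instructions.indexOf(jump.label);
                if (to <= from) {                 // backward branch => loop
                    loops.add(new int[] {to, from});
                }
            }
        }
        // Nesting depth = max number of loop ranges covering one instruction.
        int deepest = 0;
        for (int i = 0; i < method.instructions.size(); i++) {
            int depth = 0;
            for (int[] loop : loops) {
                if (loop[0] <= i && i <= loop[1]) depth++;
            }
            deepest = Math.max(deepest, depth);
        }
        return deepest;
    }

    public static void main(String[] args) throws Exception {
        ClassNode cls = new ClassNode();
        new ClassReader(Files.readAllBytes(Paths.get(args[0]))).accept(cls, 0);
        for (MethodNode m : cls.methods) {
            // Assumes every loop is linear in n -- a loud simplification.
            System.out.println(m.name + m.desc + "  ~  O(n^" + maxLoopDepth(m) + ")");
        }
    }
}
```

Run it against the `Count.class` from above and the doubly-nested version of it to see the reported exponent go from 1 to 2.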
As for special context not known to the compiler: Big-O wouldn't care, since it assumes the worst case, but you could always have your analyzer report additional findings about the method's complexity, assuming you make it smart enough.
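For example, this linear search exits early on a match; a particular caller might know the needle is always near the front, but a worst-case analyzer should still report O(n):

```java
public class Search {
    // Worst case O(n): the analyzer must assume the needle may be last or
    // absent, even if every real caller hits the early exit immediately.
    // A smarter analyzer could additionally report "best case O(1),
    // early exit on match".
    static int indexOf(int[] haystack, int needle) {
        for (int i = 0; i < haystack.length; i++) {
            if (haystack[i] == needle) return i;
        }
        return -1;
    }
}
```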
Alternatively, you could just run the method on increasing input sizes and measure how long each run takes to execute, assuming that the algorithm is not O(ludicrous).
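A rough sketch of that empirical check (`methodUnderTest` is a hypothetical stand-in): time the method at doubling input sizes and eyeball the growth ratio. Roughly 2x per doubling suggests O(n), roughly 4x suggests O(n^2).

```java
import java.util.Arrays;
import java.util.Random;

public class EmpiricalCheck {
    // Hypothetical stand-in for the method being analyzed.
    static void methodUnderTest(int[] input) {
        Arrays.sort(input); // example workload, O(n log n)
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        // Naive timing: no JIT warmup or repetition, so expect noisy
        // numbers at small n; the trend at large n is what matters.
        for (int n = 1_000; n <= 1_024_000; n *= 2) {
            int[] input = rng.ints(n).toArray();
            long start = System.nanoTime();
            methodUnderTest(input);
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.printf("n=%,d  ->  %,d us%n", n, micros);
        }
    }
}
```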