I don't know how to phrase this question properly, so please feel free to suggest edits.
I've helped develop an in-house RESTful service for the company I work for, and it seems to have hit a performance wall without saturating the network channel (as far as I can tell). I need to make it run faster, but I don't have enough data to go on. How do I measure, collect, and analyze performance metrics like network, disk, memory, and CPU usage?

Right now I only have timestamped logs, so I could try to extract some timing data by parsing them, but it seems like there should be a more straightforward way. Also, if I make the system distributed in the future, I'd have to aggregate this data across multiple machines somehow, and the collection would have to be asynchronous and resilient to network glitches.
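For context, here is roughly what I'm imagining on the instrumentation side: a scoped timer inside the service that ships each measurement as a StatsD-style UDP datagram, so a dropped packet or an unreachable collector never blocks the request path. This is just a sketch of the idea; the collector host, port (8125 is the usual StatsD default), and metric names are placeholders, and StatsD is only one of several line protocols I've seen mentioned.

```cpp
// Sketch: time a code section and ship the result as a StatsD-style
// UDP datagram ("metric.name:123|ms"). UDP is fire-and-forget, so a
// lost packet never blocks the service. Host, port, and metric names
// below are placeholders, not anything the project actually uses.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#include <chrono>
#include <cstdint>
#include <cstdio>
#include <string>

class MetricSink {
public:
    MetricSink(const char* host, uint16_t port) {
        fd_ = socket(AF_INET, SOCK_DGRAM, 0);
        addr_.sin_family = AF_INET;
        addr_.sin_port = htons(port);
        inet_pton(AF_INET, host, &addr_.sin_addr);
    }
    ~MetricSink() { if (fd_ >= 0) close(fd_); }

    // Send "name:value|ms"; errors are deliberately ignored.
    void timing_ms(const std::string& name, long ms) {
        char buf[256];
        int n = snprintf(buf, sizeof buf, "%s:%ld|ms", name.c_str(), ms);
        if (fd_ >= 0 && n > 0 && n < static_cast<int>(sizeof buf))
            sendto(fd_, buf, n, 0,
                   reinterpret_cast<const sockaddr*>(&addr_), sizeof addr_);
    }

private:
    int fd_ = -1;
    sockaddr_in addr_{};
};

// RAII timer: reports the elapsed wall-clock time of the enclosing scope.
class ScopedTimer {
public:
    ScopedTimer(MetricSink& sink, std::string name)
        : sink_(sink), name_(std::move(name)),
          start_(std::chrono::steady_clock::now()) {}
    ~ScopedTimer() {
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now() - start_).count();
        sink_.timing_ms(name_, ms);
    }

private:
    MetricSink& sink_;
    std::string name_;
    std::chrono::steady_clock::time_point start_;
};

int main() {
    MetricSink sink("127.0.0.1", 8125);  // assumed local collector
    {
        ScopedTimer t(sink, "worker.handle_command");
        // ... actual request handling would go here ...
    }
    return 0;
}
```

Is hand-rolling something like this sensible, or do existing collectors/agents already cover it?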
What existing tools or general approaches work best in this kind of setup? I'd appreciate any nudge in the right direction; I don't know where to look.
UPD: I'm targeting FreeBSD; my dev box is a MacBook. The heavy-lifting parts are written in C++; the talk-to-client parts are written in PHP (currently Yii), run by Apache behind an nginx proxy, although I'm considering porting them to C++ and splitting them along command-query separation lines. Currently the command queue is managed by PostgreSQL, and it's one of the things I'm itching to rewrite from scratch. There is a simplistic supervisor that lets me watch the heavy-lifting daemon's stderr log in real time, but otherwise it doesn't do much.
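Since the daemon is C++ and I can already see its stderr in real time, one thing I could do today is have it log its own CPU and memory figures via getrusage(2), which exists on both FreeBSD and macOS (with the caveat that ru_maxrss is in kilobytes on FreeBSD but bytes on macOS). A minimal sketch of what I mean, under those assumptions:

```cpp
// Sketch: sample the daemon's own resource usage with getrusage(2)
// and print it alongside the existing timestamped stderr output.
// Note: ru_maxrss is kilobytes on FreeBSD, bytes on macOS.
#include <sys/resource.h>
#include <sys/time.h>

#include <cstdio>

static double tv_to_sec(const timeval& tv) {
    return tv.tv_sec + tv.tv_usec / 1e6;
}

void log_self_usage() {
    rusage ru{};
    if (getrusage(RUSAGE_SELF, &ru) != 0)
        return;  // sampling is best-effort; don't disturb the daemon

    fprintf(stderr,
            "usage user=%.3fs sys=%.3fs maxrss=%ld "
            "minflt=%ld majflt=%ld inblock=%ld oublock=%ld\n",
            tv_to_sec(ru.ru_utime), tv_to_sec(ru.ru_stime),
            ru.ru_maxrss, ru.ru_minflt, ru.ru_majflt,
            ru.ru_inblock, ru.ru_oublock);
}

int main() {
    // In the real daemon this could run on a timer or after each
    // batch of commands; here it is called once as a demonstration.
    log_self_usage();
    return 0;
}
```

But that still leaves the collection/aggregation side open, which is the part I'm really asking about.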