
I have an Nginx web server with the uWSGI app server installed on a single-CPU Ubuntu 14.04 image.

This uWSGI app server successfully handles requests for a Flask app. The problem I am facing is that sometimes requests from a single client will time out, and the timeouts persist for an extended period (1-2 hours).

This was happening without specifying workers or threads in my uwsgi.conf file. Is there an ideal number of workers/threads to use per CPU?

I am using the Emperor service to start the uWSGI app server. This is what my uwsgi.conf looks like:

description "uWSGI"
start on runlevel [2345]
stop on runlevel [06]
respawn
env UWSGI=/var/www/parachute_server/venv/bin/uwsgi
env LOGTO=/var/log/uwsgi/emperor.log
exec $UWSGI --master --workers 2 --threads 2 --emperor /etc/uwsgi/vassals --die-on-term --uid www-data --gid www-data --logto $LOGTO --stats 127.0.0.1:9191
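Since the Emperor loads per-app configs from /etc/uwsgi/vassals, here is a minimal sketch of what a vassal ini for the Flask app might look like. The module name, socket path, and filename are assumptions for illustration, not taken from the question:

```ini
[uwsgi]
; hypothetical vassal config, e.g. /etc/uwsgi/vassals/parachute.ini
module = app:app            ; Flask application object (assumed name)
processes = 3               ; worker count per app
threads = 2                 ; threads per worker
socket = /tmp/parachute.sock
chmod-socket = 660
vacuum = true               ; clean up the socket on exit
```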

Could this be a performance problem with nginx / uWSGI, or is it more probable that these timeouts are occurring because I am only using a single CPU?

Any help is much appreciated!

1 Answer


Interesting issue you have...

Generally, you'd specify at least 2 * #CPUs + 1 workers. The reasoning: while one worker is blocked reading from or writing to a socket, another worker is free to accept new requests. The threads flag is also useful if your workers are synchronous, because threads can notify the master that a worker is still busy, and so prevent a timeout.
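The rule of thumb above can be sketched as a small helper (the function name is mine, not part of any uWSGI API):

```python
import multiprocessing

def suggested_workers(cpus: int) -> int:
    """Common rule of thumb: 2 * CPUs + 1 workers, so that one worker
    can accept a new request while another is blocked on socket I/O."""
    return 2 * cpus + 1

# On the single-CPU box from the question this suggests 3 workers:
print(suggested_workers(1))                          # -> 3
print(suggested_workers(multiprocessing.cpu_count()))
```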

I think having one worker was the reason for your timeouts (a single slow request blocked all other requests), but you should also inspect the responses from your app. If they take a long time (say, reading from a database), you'll want to adjust the uwsgi_read_timeout directive in Nginx to give uWSGI enough time to process the request.
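For reference, a sketch of the relevant Nginx location block; the socket path and timeout value are placeholders you'd adapt to your own setup:

```nginx
location / {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/parachute.sock;  # hypothetical socket path
    uwsgi_read_timeout 120s;              # allow slow upstream responses (default is 60s)
}
```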

I hope this helps.
