
This isn't specifically a programming question, more so infrastructure, but of all the Stack Exchange sites Stack Overflow seems the most knowledgeable about RESTful APIs.

I have a single endpoint configured to handle events, and it can receive up to 1k events within a 3-minute window. I am noticing a lot of "missed" events, but I'm not willing to blame over-utilization right away without fully understanding what's happening.

The listening endpoint is /users/events?user=2345345, where 2345345 is the user id. From there we perform the necessary actions on that particular user. But what if, while this is happening, the next user, 2895467, performs an action that sends a new event to /users/events?user=2895467 before the first has been processed? What happens?

I intend to alleviate the concern by using Celery to dispatch tasks (see the sketch below), which should greatly reduce the time spent inside the request, but is it fair to assume that events could be missed while this single endpoint remains synchronous?
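
A rough sketch of what I have in mind, assuming Flask and a Redis broker; the names app, celery_app, and process_user_event are placeholders. The idea is to return 200 immediately and push the actual per-user work onto a Celery queue:

    from flask import Flask, request
    from celery import Celery

    app = Flask(__name__)
    celery_app = Celery("events", broker="redis://localhost:6379/0")

    @celery_app.task
    def process_user_event(user_id):
        # The slow per-user work happens here, in a worker process,
        # outside the request/response cycle.
        ...

    @app.route("/users/events")
    def handle_event():
        user_id = request.args.get("user")
        # Enqueue and acknowledge right away, so the sender gets its
        # 200 before any retry timeout fires.
        process_user_event.delay(user_id)
        return "", 200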

3 Comments

  • This is no different from a webserver handling requests from many users concurrently. There's nothing different about webhooks. – May 3, 2019 at 23:48
  • That's what I assumed, and the vendor in question does send retries if no 200 is returned, so I suppose the missed events must actually be on the vendor's side; otherwise they'd have been retried if my server didn't process them. – May 3, 2019 at 23:50
  • Sounds like they have a fairly short timeout and your server can't keep up. – May 3, 2019 at 23:56

1 Answer


Real-life behavior depends on how the application is deployed.

For example, if you are running uwsgi with a single unthreaded worker behind nginx, requests are processed sequentially: if a second request arrives before the first has been processed, the second one is queued (added to the backlog).

How long a request can sit in the queue and how many requests the queue can hold depend on the configuration of nginx (listen backlog), the configuration of uwsgi (concurrency, listen backlog), and even the OS kernel (see net.core.somaxconn and net.core.netdev_max_backlog). Once the queue is full, new "concurrent" connections are dropped instead of being queued.
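
For illustration, these are the knobs in question; the values below are placeholders, not recommendations, and need tuning for your actual load:

    # nginx: accept-queue size for the listening socket
    server {
        listen 80 backlog=1024;
    }

    # uwsgi.ini: worker concurrency and uwsgi's own listen backlog
    [uwsgi]
    processes = 4
    threads = 2
    listen = 1024

    # sysctl: kernel-level caps that bound any socket's backlog
    net.core.somaxconn = 1024
    net.core.netdev_max_backlog = 5000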
