  • 3
Surely a pub/sub message queue solution will easily scale to 100,000 rps? Commented Mar 18, 2022 at 19:23
  • 2
The WebSocket server is limited by the number of users it supports, right? So there is a natural limit to the number of messages. I guess with no restrictions, a single user could subscribe to every other message on the system, but they wouldn't be able to read them all. Commented Mar 19, 2022 at 9:51
  • 1
What's the throughput on your load balancer? Commented Mar 19, 2022 at 9:59
  • 2
If the number of clients per WebSocket server is very small compared to the number of active topics, then it is unlikely that the WebSocket server would be overwhelmed by the volume of messages. The precondition is that each WebSocket server subscribes only to those topics its clients are currently interested in (see the first sketch after this list). This in turn requires that messages are published on sufficiently fine-grained topics. While databases like Redis and Postgres provide basic PubSub systems, a system with more demanding requirements will likely look at Kafka instead. Commented Mar 19, 2022 at 10:21
  • 1
One approach would be to shard the channels/chat rooms: every user in a particular chat room connects to a particular WebSocket server, which then never has to forward those messages to other WebSocket servers (see the second sketch after this list). If even one chat room is still too big for one WebSocket server, you could cluster several servers together, with a PubSub server that only handles traffic for that chat room (or a few chat rooms, up to the capacity of one PubSub server). Commented Mar 21, 2022 at 9:42
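
To illustrate the topic-scoped subscription idea from the comments above, here is a minimal sketch (not anyone's actual implementation): a WebSocket server subscribes to Redis Pub/Sub only for the topics its own connected clients have asked for, and fans incoming messages out locally. It assumes recent versions of the `websockets` and `redis` (redis-py asyncio) packages; the port, topic names, and JSON wire format are made up for the example.

```python
# Minimal sketch, assuming recent `websockets` and `redis` (redis-py asyncio) packages.
# Each WebSocket server subscribes upstream only to the topics that its own
# connected clients have asked for, and fans incoming messages out locally.
import asyncio
import json

import redis.asyncio as aioredis
import websockets

# topic -> set of WebSocket connections on THIS server
local_subscribers: dict[str, set] = {}


async def handle_client(ws):
    """Clients send {"subscribe": "<topic>"} frames; track their interests locally."""
    joined: set[str] = set()
    try:
        async for raw in ws:
            topic = json.loads(raw).get("subscribe")
            if topic:
                local_subscribers.setdefault(topic, set()).add(ws)
                joined.add(topic)
    finally:
        # Forget this client; drop topics nobody on this server wants anymore.
        for topic in joined:
            subs = local_subscribers.get(topic, set())
            subs.discard(ws)
            if not subs:
                local_subscribers.pop(topic, None)


async def pubsub_loop() -> None:
    """One coroutine owns the Redis connection: it reconciles the upstream
    subscriptions with local interest, then forwards each message only to
    the local clients of that topic."""
    pubsub = aioredis.Redis(decode_responses=True).pubsub()
    subscribed: set[str] = set()
    while True:
        wanted = set(local_subscribers)
        if wanted - subscribed:
            await pubsub.subscribe(*(wanted - subscribed))
        if subscribed - wanted:
            await pubsub.unsubscribe(*(subscribed - wanted))
        subscribed = wanted
        if not subscribed:
            await asyncio.sleep(0.5)  # no upstream subscriptions yet
            continue
        msg = await pubsub.get_message(ignore_subscribe_messages=True, timeout=0.5)
        if msg and msg["type"] == "message":
            for client in set(local_subscribers.get(msg["channel"], ())):
                try:
                    await client.send(msg["data"])
                except websockets.ConnectionClosed:
                    pass  # handle_client's cleanup will drop it


async def main() -> None:
    async with websockets.serve(handle_client, "0.0.0.0", 8765):
        await pubsub_loop()


asyncio.run(main())
```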
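
And a second sketch for the room-sharding idea: a deterministic room-to-server mapping lets the load balancer (or a directory service) send every user in a given chat room to the same WebSocket server, so that server never has to forward the room's traffic to its peers. The server list and room names here are made up; a production setup would more likely use consistent hashing or a lookup table so that adding or removing servers does not remap every room.

```python
# Minimal sketch of room-based sharding: every user in a given chat room
# is routed to the same WebSocket server. Server hostnames are illustrative.
import hashlib

WEBSOCKET_SERVERS = [
    "ws1.example.internal:8765",
    "ws2.example.internal:8765",
    "ws3.example.internal:8765",
]


def server_for_room(room_id: str) -> str:
    """Deterministically map a room to one WebSocket server.

    A stable hash (not Python's randomized hash()) keeps the mapping
    consistent across processes and restarts.
    """
    digest = hashlib.sha256(room_id.encode()).hexdigest()
    return WEBSOCKET_SERVERS[int(digest, 16) % len(WEBSOCKET_SERVERS)]


# The load balancer (or the client, after asking a directory service)
# uses this mapping to pick the connection endpoint:
print(server_for_room("general"))  # always the same server for this room
print(server_for_room("random"))   # possibly a different server
```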