Timeline for java threading model for scale up
Current License: CC BY-SA 4.0
17 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Sep 11, 2022 at 7:51 | comment added | Mikhail | | Look at Akka Cluster with persistence. |
| Apr 18, 2021 at 19:56 | vote accept | london tom | | |
| Apr 18, 2021 at 9:21 | answer added | Helena | | timeline score: 1 |
| Apr 13, 2021 at 8:40 | comment added | london tom | | @user949300 I read this book 6-7 years ago, but it still doesn't resolve my current issue. |
| Apr 13, 2021 at 8:39 | comment added | london tom | | I don't think the disruptor has any performance issue. The current issue is that we use a "thread per request" model: each client has its own long-running, live thread, so we can't scale up the app on our current box if we want to support 20K concurrent users at the same time. |
| Apr 13, 2021 at 1:03 | comment added | user949300 | | Run, don't walk, and get a copy of Java Concurrency in Practice. Then read it. |
| Apr 12, 2021 at 16:04 | comment added | Shadows In Rain | | Is the disruptor piping data at full capacity? What's the current bottleneck? Can you detect when the thread pool is saturated? |
| Apr 11, 2021 at 21:00 | comment added | london tom | | No, Step C doesn't have any I/O ops, but it does some local cache updates; it's purely business logic. |
| Apr 11, 2021 at 20:58 | comment added | london tom | | Step B is a single-threaded consumer that reads the data from the disruptor and, based on the message header info, sends the data into the clients' queues. For example, if we have 20 clients streaming live data from us, we have 20 queues. It works fine for a small number of users, but if one day we have 20K users streaming live data, then we have a problem. |
| Apr 11, 2021 at 20:55 | comment added | Helena | | Does Step C have any blocking I/O like disk reads or network, or is it purely business logic? Could you just limit yourself to 144 queues and 144 threads that never go to sleep? |
| Apr 11, 2021 at 20:46 | comment added | Joop Eggen | | Too little info for me. Profiling seems to have been done, and hopefully load testing with varying dummy data too. Sometimes one can change the granularity of concurrency: larger/smaller pieces per thread, or defer some costly operations to a later point. There are others more experienced than me. |
| Apr 11, 2021 at 20:45 | comment added | Helena | | Also: how are messages dispatched in Step B? Is there usually one message per user, do all messages go to all users, or does every message go to a different set of users? |
| Apr 11, 2021 at 20:42 | comment added | Helena | | Sorry, that was hasty typing; I meant to ask "what do arrows represent?" I guess it is the flow of data? I usually read/write diagrams the other way around, with the user on the left and data sources on the right, but I guess that is just a different convention. Still, I think your diagram would benefit from labeling the arrows or having a legend. |
| Apr 11, 2021 at 20:39 | comment added | london tom | | The system works as expected, but we can't scale up to support a large number of clients because each client currently has one thread serving it. The actor is our client, requesting streaming live data from us; our system is a data distribution system. |
| Apr 11, 2021 at 19:58 | comment added | Helena | | What do errors in your diagram represent? I find it confusing to have arrows going from a component to an actor. |
| Apr 11, 2021 at 19:49 | review First posts | | | Completed Apr 11, 2021 at 21:00 |
| Apr 11, 2021 at 19:37 | history asked | london tom | CC BY-SA 4.0 | |
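
The comments of Apr 13, 2021 at 8:39 and Apr 11, 2021 at 20:58 describe the bottleneck: a thread-per-request model in which every client gets its own long-lived thread and queue, which cannot stretch to 20K concurrent users, while Helena's comment of Apr 11, 2021 at 20:55 suggests capping the system at a fixed number of queues and threads. A minimal sketch of that capped-queue idea follows; `PartitionedDispatcher`, `Message`, the 144-partition count, and the sample payload are hypothetical names invented for illustration, not code from the question or its answer.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.stream.IntStream;

// All names here are hypothetical; "Message" stands in for whatever the
// disruptor hands to the Step B consumer.
public class PartitionedDispatcher {

    record Message(String clientId, String payload) {}

    private final List<BlockingQueue<Message>> queues;

    PartitionedDispatcher(int partitions) {
        queues = IntStream.range(0, partitions)
                .<BlockingQueue<Message>>mapToObj(i -> new ArrayBlockingQueue<>(10_000))
                .toList();
        // One long-lived worker thread per partition, regardless of how many
        // clients are connected.
        for (BlockingQueue<Message> queue : queues) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        Message m = queue.take();
                        // Step C-style business logic / local cache update would
                        // run here, then the payload is written to the client.
                        System.out.println(m.clientId() + " <- " + m.payload());
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    // Called by the single Step B consumer: hash the client id onto a partition
    // instead of giving every client its own thread and queue.
    void dispatch(Message m) throws InterruptedException {
        int partition = Math.floorMod(m.clientId().hashCode(), queues.size());
        queues.get(partition).put(m);
    }

    public static void main(String[] args) throws InterruptedException {
        PartitionedDispatcher dispatcher = new PartitionedDispatcher(144);
        dispatcher.dispatch(new Message("client-42", "EURUSD 1.1802"));
        Thread.sleep(100); // let the daemon worker print before the JVM exits
    }
}
```

With this layout the worker-thread count stays constant as the client count grows, and per-client ordering is preserved because all of a client's messages land in the same partition queue.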
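Mikhail's later comment points at Akka Cluster with persistence. Purely as a hedged illustration of why an actor model fits this problem, the sketch below models each client session as a classic Akka actor scheduled on a shared dispatcher pool rather than a dedicated platform thread; `ClientSession`, the actor names, and the string message are invented for this example, and it does not use Akka's clustering or persistence modules.

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

// Hypothetical sketch: one lightweight actor per connected client instead of
// one dedicated platform thread per client, so 20K sessions do not mean
// 20K threads.
public class ClientSession extends AbstractActor {

    private final String clientId;

    private ClientSession(String clientId) {
        this.clientId = clientId;
    }

    static Props props(String clientId) {
        return Props.create(ClientSession.class, () -> new ClientSession(clientId));
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(String.class, tick -> {
                    // Business logic / local cache update would go here, then
                    // the tick is pushed out on the client's connection.
                    System.out.println(clientId + " <- " + tick);
                })
                .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("data-distribution");
        ActorRef session = system.actorOf(ClientSession.props("client-42"), "client-42");
        session.tell("EURUSD 1.1802", ActorRef.noSender());
    }
}
```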