Timeline for Separate Thread Pools for I/O and CPU Tasks
Current License: CC BY-SA 3.0
18 events
| when | what | action | by | license | comment |
|---|---|---|---|---|---|
| May 25, 2018 at 17:12 | comment | added | Miha_x64 | | They consume a lot of memory, and a context switch (scheduling one thread, descheduling another) costs significant processor time. Either way, blocking threads and using the same pool for I/O and computation is counterproductive. |
| May 25, 2018 at 16:59 | comment | added | Blrfl | | @Miha_x64 Threads consume no processor time while waiting to acquire semaphores. The exception would be if the semaphore were implemented as a spinlock, which is not something you find in sanely-developed userspace applications. |
| May 25, 2018 at 15:21 | comment | added | Miha_x64 | | Threads are not cheap. A context switch, e.g. when blocked on a Semaphore, is not cheap either. When a task begins executing while all semaphore permits are taken, the thread sleeps until a permit is released, wasting processor cores and time. |
| Apr 24, 2018 at 20:40 | history | edited | Blrfl | CC BY-SA 3.0 | Fixed variable name |
| Nov 1, 2017 at 14:52 | comment | added | TheCatWhisperer | | @Blrfl Fair enough. |
| Nov 1, 2017 at 14:51 | comment | added | Blrfl | | @TheCatWhisperer Cheaper than writing and maintaining software to pass the parts of the job around among thread pools. |
| Nov 1, 2017 at 14:39 | comment | added | TheCatWhisperer | | "Threads are cheap" [compared to I/O]? |
| Apr 15, 2017 at 18:50 | comment | added | Blrfl | | @ndm13 None that I can think of. Idle threads consume no resources, and a well-written executor should do its job as long as whatever container it uses to hold the pool doesn't overflow. (Not very likely.) |
| Apr 15, 2017 at 17:31 | comment | added | ndm13 | | By the way, are there any considerations for the executor service running these, if many threads could be idle at any given time? |
| Apr 15, 2017 at 17:29 | comment | added | ndm13 | | Apparently my comment didn't post last night. I misread the code snippet (really shouldn't comment when falling asleep) and thought the semaphore acquire was before both operations and not just the second. Thanks for a good answer, even though I couldn't appreciate it... |
| Apr 15, 2017 at 11:44 | comment | added | Blrfl | | @Basilevs Clarified. |
| Apr 15, 2017 at 11:35 | history | edited | Blrfl | CC BY-SA 3.0 | added 55 characters in body |
| Apr 15, 2017 at 3:39 | vote | accept | ndm13 | ||
| Apr 15, 2017 at 3:30 | comment | added | Blrfl | | @ndm13 Executors are just wrappers that make using thread pools a more asynchronous affair. They don't do anything to solve your problem, which is narrowing a lot of download threads into a few processing threads. The semaphore takes care of that by counting the number of available cores and putting the after-download processing on hold until one becomes available. (A sketch of this approach appears below the table.) |
| Apr 15, 2017 at 3:15 | history | edited | Blrfl | CC BY-SA 3.0 | added 259 characters in body |
| Apr 15, 2017 at 3:01 | comment | added | ndm13 | | The processing order doesn't matter. If it did, I'd put up with the efficiency drop for the sake of maintaining order. I like the idea that if a file is taking an unusually long time to process, other files can still download. I see where you're coming from, as this makes things much simpler implementation-wise, but I'm not sure I understand the semaphore bit: wouldn't a thread pool backed by an executor service be much more efficient at limiting access, since tasks are queued rather than started and set to wait? (The second sketch below the table illustrates this two-pool alternative.) |
| Apr 15, 2017 at 3:01 | history | edited | Blrfl | CC BY-SA 3.0 | added 4 characters in body |
| Apr 15, 2017 at 2:50 | history | answered | Blrfl | CC BY-SA 3.0 | |
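
The comments above describe the accepted approach: keep one pool with plenty of threads for downloads and gate the CPU-bound processing behind a counting semaphore sized to the number of cores. Below is a minimal Java sketch of that idea; `downloadFile`, `processFile`, the example URLs, and the pool size of 32 are hypothetical stand-ins, not code from the original answer.

```java
import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class DownloadAndProcess {
    // One permit per core: at most that many CPU-bound tasks run at once.
    private static final Semaphore cpuPermits =
            new Semaphore(Runtime.getRuntime().availableProcessors());

    public static void main(String[] args) {
        // Hypothetical URLs; the real list would come from elsewhere.
        List<String> urls = List.of("https://example.com/a", "https://example.com/b");

        // Plenty of threads for I/O-bound downloads; idle threads cost essentially nothing.
        ExecutorService pool = Executors.newFixedThreadPool(32);

        for (String url : urls) {
            pool.submit(() -> {
                Path file = downloadFile(url);       // I/O-bound: no permit needed
                cpuPermits.acquireUninterruptibly(); // block until a core is free
                try {
                    processFile(file);               // CPU-bound: gated by the semaphore
                } finally {
                    cpuPermits.release();
                }
            });
        }
        pool.shutdown();
    }

    // Hypothetical helpers standing in for the real download and processing code.
    private static Path downloadFile(String url) { return Path.of("/tmp", "download"); }
    private static void processFile(Path file) { /* CPU-heavy work here */ }
}
```

The download thread itself does the processing once a permit is free; threads blocked on `acquire` consume no CPU while they wait, which is the point made in the May 25, 2018 comments.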
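
For contrast, the Apr 15, 2017 at 3:01 comment asks about handing processing off to a second, fixed-size executor instead of blocking on a semaphore. A sketch of that two-pool variant, using the same hypothetical helper names, might look like this:

```java
import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TwoPoolVariant {
    public static void main(String[] args) {
        List<String> urls = List.of("https://example.com/a", "https://example.com/b");

        // A roomy pool for downloads and a core-sized pool for processing.
        ExecutorService ioPool  = Executors.newFixedThreadPool(32);
        ExecutorService cpuPool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        for (String url : urls) {
            ioPool.submit(() -> {
                Path file = downloadFile(url);            // I/O-bound work
                cpuPool.submit(() -> processFile(file));  // processing is queued, not blocked on
            });
        }
        ioPool.shutdown();
        // cpuPool would be shut down once all processing tasks have been submitted.
    }

    // Hypothetical helpers, as in the previous sketch.
    private static Path downloadFile(String url) { return Path.of("/tmp", "download"); }
    private static void processFile(Path file) { /* CPU-heavy work here */ }
}
```

Both versions limit concurrent processing to the core count. The semaphore version keeps everything in one pool at the cost of blocked threads, while the two-pool version queues work without blocking but adds a second pool to manage, which is the maintenance cost alluded to in the Nov 1, 2017 comments.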