How to manage a lot of little worker processes

So I have this idea I'm playing with: building my own neural network experiment from the ground up as a fun hobby exercise, so I have something to learn and to expand upon while researching what I want to code. I'm fully aware I'm reinventing the wheel, but I'm doing it for learning/fun/hobby purposes.

My approach will not be the traditional one of calculating probabilities and updating the state in the next generation so it improves on itself. I'm trying something else that should let my network adapt to changing situations, just to see what happens when I turn off the lights.

What would be a good way to handle the multitude of worker processes my app will end up with?

  1. Should I iterate over them on each "tick" and update their state?
  2. Should I put them in threads so they all run simultaneously (or as often as the thread scheduler sees fit)?
  3. Should I cluster them into groups and put the groups in threads that iterate them independently?
  4. Other...

My problem with scenario 1 is that if there are a lot of neurons, it could take a while to update them all, slowing overall progress. What comes to mind is adding update flags to the neurons and only iterating over the flagged ones. I could also use Java streams so Java can do its magic under the hood and parallelize the iteration where possible.
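
To make that concrete, here is a minimal sketch of what a tick with update flags plus a parallel stream could look like. The `Neuron` class and its `update()` body are placeholders I made up for illustration, not part of any real design:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class TickLoop {
    // Hypothetical minimal neuron: only "dirty" neurons are recomputed each tick.
    static class Neuron {
        final AtomicBoolean dirty = new AtomicBoolean(true);
        double state;

        void update() {
            state += 1.0; // stand-in for the real activation logic
        }
    }

    public static void main(String[] args) {
        List<Neuron> neurons = IntStream.range(0, 10_000)
                .mapToObj(i -> new Neuron())
                .collect(Collectors.toList());

        // One "tick": only neurons flagged dirty are updated, in parallel.
        // compareAndSet atomically clears the flag so no neuron runs twice.
        neurons.parallelStream()
                .filter(n -> n.dirty.compareAndSet(true, false))
                .forEach(Neuron::update);

        long updated = neurons.stream().filter(n -> n.state == 1.0).count();
        System.out.println(updated); // prints 10000
    }
}
```

The atomic flag matters here: with a plain `boolean`, two parallel stream workers could race on the same neuron.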

My problem with scenario 2 is that there are only so many threads. A normal CPU offers roughly 1 to 8 hardware threads, and it has to host an OS too, so there's not much wiggle room. I could expand to GPU cores, but their count varies as well, and they are built for specialized instructions. This also has the same traffic-jam problem as scenario 3, just on a smaller scale, and I want to avoid thrashing the thread scheduler.
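
One common middle ground between "one thread per worker" and "one big loop" is a fixed pool sized to the hardware, with each neuron update submitted as a small task. This is only a sketch (the task body is a dummy counter increment standing in for a neuron update):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolSizing {
    public static void main(String[] args) throws InterruptedException {
        // Size the pool to the hardware instead of one thread per neuron,
        // so the OS scheduler is never asked to juggle thousands of threads.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 1_000; i++) {
            pool.submit(done::incrementAndGet); // stand-in for one neuron update
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(done.get()); // prints 1000
    }
}
```

This keeps the thread count fixed while the number of workers (tasks) can grow arbitrarily.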

My problem with scenario 3 is that it causes a buildup/traffic jam of signals crossing from cluster A to cluster B whenever cluster B gets less time from the thread scheduler than cluster A.
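
For what it's worth, the buildup can be bounded rather than eliminated: if the clusters exchange signals through a bounded queue, a fast cluster A blocks instead of piling up an unbounded backlog behind a slow cluster B. A minimal sketch with two hypothetical clusters:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ClusterBridge {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue between cluster A and cluster B: if B falls behind,
        // A blocks on put() instead of building an unbounded backlog.
        BlockingQueue<Integer> signals = new LinkedBlockingQueue<>(100);

        Thread clusterA = new Thread(() -> {
            for (int i = 0; i < 1_000; i++) {
                try {
                    signals.put(i); // blocks when the queue is full (backpressure)
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        final long[] received = {0};
        Thread clusterB = new Thread(() -> {
            for (int i = 0; i < 1_000; i++) {
                try {
                    received[0] += signals.take(); // consume one signal
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        clusterA.start();
        clusterB.start();
        clusterA.join();
        clusterB.join();
        System.out.println(received[0]); // prints 499500 (sum of 0..999)
    }
}
```

This doesn't make B faster, but it caps the memory cost of the traffic jam and naturally slows A down to B's pace.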

My problem with scenario 4 is: I haven't come up with it yet.

My target language will be Java. I'm sure there are solutions to this and similar problems, where lots of little processes each have to do a small thing to complete one big thing (3D game rendering comes to mind, hence my scenario 1).

I'm leaning towards scenario 1, but what should I consider with all these little workers, and what would be the most advisable approach?
