Problem and Solution Description
Right now, MAGIST occupies only two threads: the main thread and the daemon thread. This is problematic when the AI is training and processing data simultaneously, which makes it nearly impossible to run without major improvements. If we had a manager that could assign tasks to threads across all cores in the system, MAGIST could run faster and more efficiently.
Detailed Description
Multi-threading and multi-processing managers can be used to resolve this. A multiprocessing pool would take tasks and assign them to a core manager; each core manager would then assign its tasks to a thread. This way, we get full resource utilization. However, Python's GIL (Global Interpreter Lock) makes this arduous to implement.
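A minimal sketch of that layout, purely illustrative: the function names (`_core_manager`, `run_all`) and the placeholder workload are assumptions, not MAGIST code. Each process gets its own interpreter (sidestepping the GIL for CPU-bound work), and each process runs a small thread pool as its "core manager".

```python
import multiprocessing as mp
from concurrent.futures import ThreadPoolExecutor

def _core_manager(tasks):
    """Hypothetical core manager: run one batch of tasks on threads
    inside a single process."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        # Placeholder work standing in for a real MAGIST task.
        return list(pool.map(lambda t: t * t, tasks))

def run_all(tasks, n_cores=2):
    """Split tasks across processes, one batch per core manager,
    then flatten the per-core results."""
    batches = [tasks[i::n_cores] for i in range(n_cores)]
    with mp.Pool(processes=n_cores) as pool:
        results = pool.map(_core_manager, batches)
    return [r for batch in results for r in batch]

if __name__ == "__main__":
    print(run_all([1, 2, 3, 4]))
```

Because each `_core_manager` runs in a separate process, the GIL only serializes the threads within one process, not the processes themselves.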
Alternatives
A much simpler idea would be to implement a second worker in the `PriorityQueue` class. If we add a second `__worker` function to the `PriorityQueue` class, as well as a manager to automatically distribute tasks once they are published, we can have more tasks running concurrently, with faster and more efficient execution. This also resolves the issue described in #9.
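A rough sketch of how the second worker could look, assuming a simplified `PriorityQueue` (the method names `publish` and `_worker`, and the result list, are assumptions for illustration). The shared `queue.PriorityQueue` itself acts as the distributing manager: whichever worker is idle picks up the next published task.

```python
import itertools
import queue
import threading

class PriorityQueue:
    """Simplified stand-in for MAGIST's PriorityQueue with N workers."""

    def __init__(self, n_workers=2):
        self._q = queue.PriorityQueue()
        self._counter = itertools.count()  # tiebreaker so tasks never compare
        self._results = []
        self._lock = threading.Lock()
        # Two daemon workers instead of one, as proposed.
        for _ in range(n_workers):
            threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        # Each worker pulls the highest-priority published task; the
        # shared queue distributes work to whichever worker is free.
        while True:
            _, _, fn = self._q.get()
            try:
                out = fn()
                with self._lock:
                    self._results.append(out)
            finally:
                self._q.task_done()

    def publish(self, priority, fn):
        """Publish a task; lower priority values run first."""
        self._q.put((priority, next(self._counter), fn))

    def join(self):
        """Block until all published tasks have finished."""
        self._q.join()
```

For example, `q = PriorityQueue(); q.publish(0, fn); q.join()` runs `fn` on one of the two workers and returns once the queue drains. Note that because of the GIL this only helps when tasks release the lock (I/O, NumPy, etc.), which is why the multiprocessing design above remains the heavier but more general option.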