-
Hello @hu6360567, I am not sure whether I get the problem correctly but maybe some remarks:
You can reuse states. For instance, you could create a client machine and pass it as the substate configuration of multiple parallel children:

```python
from transitions.extensions import HierarchicalMachine

states = ["unset", "initialized", "processing"]
transitions = [["init", ["processing", "unset"], "initialized"], ["event", "*", "processing"]]
client = HierarchicalMachine(model=None, states=states, transitions=transitions, initial="unset")

scheduler = HierarchicalMachine(states=[{"name": "client", "parallel": [
    {"name": "A", "states": client, "initial": "initialized"},
    {"name": "B", "states": client},
    {"name": "C", "states": client},
]}], send_event=True, initial="client")

print(scheduler.state)
# >>> ['client_A_initialized', 'client_B_unset', 'client_C_unset']

client.get_state("initialized").on_enter.append(lambda evt: print(f"initialized {evt.machine.scoped.name}"))
scheduler.init()
# >>> initialized B
# >>> initialized C
```

Since this is a question about how to use transitions but may also not be a good fit for Stack Overflow, I will transfer this to Q&A. Others might have better ideas about how to tackle this.
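For the aggregation part, one naive idea on top of the snippet above (the `all_clients_in` helper is made up here, and the suffix check assumes the leaf state names do not overlap):

```python
def all_clients_in(machine, leaf):
    # with parallel states, machine.state is a list of leaf state names
    # such as 'client_A_initialized'
    return all(s.endswith(leaf) for s in machine.state)

# after scheduler.init() above, every region reports 'initialized'
if all_clients_in(scheduler, "initialized"):
    print("all clients initialized -> trigger the next stage here")
```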
-
I'm trying to use transitions to implement a scheduler for a distributed system.

Let's say there are three participants, `A`, `B`, and `C`, in a distributed task. A scheduler runs on each participant to broadcast its own state periodically, collect the peers' states, and trigger callbacks on its own participant. Each participant has the states `['unspecified', 'inited', 'prepared', 'done', 'error']`; `to_prepared` should be triggered when all participants are `inited`, and `to_done` (execute) should be triggered when all participants are `prepared`.
In more detail: initially `A` is `inited` and all peers are `unspecified` from `A`'s perspective; once `B` and `C` are `inited`, the scheduler calls `to_prepared`; the scheduler of `A` becomes `prepared` after the trigger, and `to_done` follows the same pattern.
Since messages sent from one participant to another may be lost or arrive out of order, peer states can be skipped to stay robust; e.g., `B` may go straight from `unspecified` to `prepared` from the perspective of `A` (`B` received the `inited` messages from `A` and `C`, but `A` missed the `inited` message from `B`).

I'd like to implement the scheduler as an HSM in the first place, but it seems much more complicated: parallel states cannot be inherited between top-level states, so the skipped states cannot be represented.
For now, the scheduler is implemented as a separate state machine: it processes each incoming message to update the corresponding peer state, and afterwards it calculates the overall state of the peers to determine its own state.

What's the best practice for such a scenario? Can it be done with a nested state machine?
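For reference, the current approach looks roughly like the following stripped-down sketch (class and method names such as `Scheduler`, `on_peer_message`, and the `ORDER` ranking are only placeholders for illustration):

```python
from transitions import Machine

STATES = ["unspecified", "inited", "prepared", "done", "error"]
# orderable progress states; "error" is left out on purpose so it blocks progress
ORDER = {"unspecified": 0, "inited": 1, "prepared": 2, "done": 3}


class Scheduler:
    def __init__(self, peers):
        # one small tracker machine per peer; the default auto_transitions
        # provide to_<state>() triggers with a '*' source, so a peer may jump
        # straight from 'unspecified' to 'prepared' when messages were lost
        self.peers = {p: Machine(states=STATES, initial="unspecified") for p in peers}
        self.machine = Machine(
            model=self,
            states=STATES,
            transitions=[
                ["to_prepared", "inited", "prepared"],
                ["to_done", "prepared", "done"],
            ],
            initial="inited",
            auto_transitions=False,
        )

    def on_peer_message(self, peer, reported_state):
        # update the peer tracker with whatever the peer reported ...
        getattr(self.peers[peer], f"to_{reported_state}")()
        # ... then re-evaluate the aggregated view
        self._evaluate()

    def _peers_at_least(self, stage):
        # a later state implies the earlier ones, so skipped states are fine
        return all(ORDER.get(m.state, -1) >= ORDER[stage] for m in self.peers.values())

    def _evaluate(self):
        if self.state == "inited" and self._peers_at_least("inited"):
            self.to_prepared()
        elif self.state == "prepared" and self._peers_at_least("prepared"):
            self.to_done()


sched_a = Scheduler(peers=["B", "C"])
sched_a.on_peer_message("B", "inited")
sched_a.on_peer_message("C", "prepared")  # C skipped 'inited' from A's point of view
print(sched_a.state)  # >>> prepared
```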