
Script Environment #15

Open · mvandeberg opened this issue Dec 11, 2017 · 4 comments

@mvandeberg

Should the development of the testnet scripts be done entirely in Python, with each portion being a submodule that has traditional Python interfaces? Or should each part be a stand-alone Python script that is orchestrated in a Bash environment?

Or is there another language we would rather use that already has Steem support?

@theoreticalbts (Contributor)

> entirely in Python

Yes please

> a submodule that has traditional Python interfaces? Or should each part be a stand-alone Python script that is orchestrated in a Bash environment

This is already answered by the architecture of tinman. The individual parts are classes in Python source files, which can be imported, but they are also registered as sub-commands of the tinman command so they can be used from the shell.
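For readers landing on this thread later, the pattern being described is roughly the following. This is a minimal sketch with hypothetical names, not tinman's actual source:

```python
import argparse

# Hypothetical tool class, standing in for one of the pipeline stages.
# It has a plain Python interface, so other code can import and call it.
class Snapshot:
    """Collect state from a node and return it as a Python object."""
    def run(self):
        return {"accounts": []}  # placeholder result

def main():
    # The same class is also wired up as a sub-command of a single
    # command-line entry point, so it can be used from the shell.
    parser = argparse.ArgumentParser(prog="tinman")
    subcommands = parser.add_subparsers(dest="command", required=True)
    subcommands.add_parser("snapshot", help="dump chain state")
    args = parser.parse_args()
    if args.command == "snapshot":
        print(Snapshot().run())

if __name__ == "__main__":
    main()
```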

I think this ticket should be closed because the tinman code that's already implemented has made these architectural decisions.

@mvandeberg (Author)

Ok, so let's clean up the use of the existing tools a bit. I don't actually think it will be that bad. Currently, each tool is a stand-alone command-line utility, and the tools are connected via pipes. That is a useful use case; however, the main use case for these scripts is going to be creating an automated testnet in AWS. We can code the composite commands in a script, but that is pretty inefficient, with repeated I/O processing between tools that are part of the same Python library.

Let's create another target that links all of the tools in-process and skips the redundant I/O by passing objects by reference. We can keep the private keys for the initial witnesses in-process as well. That way, the individual tools remain usable, but the 99% use case can simply take a mainnet node and a testnet node as arguments and start a testnet.
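Something like the following, where each stage hands a Python object straight to the next. All function names, arguments, and URLs here are hypothetical placeholders, not tinman's real API:

```python
# Hypothetical in-process pipeline: each stage returns Python objects that
# the next stage consumes directly, so nothing is (de)serialized between tools.
def make_snapshot(mainnet_node_url):
    # ... query the mainnet node and collect chain state ...
    return {"accounts": [], "witnesses": []}  # placeholder snapshot

def generate_transactions(snapshot):
    # ... turn the snapshot into testnet bootstrap transactions ...
    return []  # placeholder transaction list

def substitute_keys(transactions, witness_keys):
    # The witness keys stay in memory for the whole run instead of
    # being piped between separate processes.
    return transactions

def launch_testnet(mainnet_node_url, testnet_node_url):
    snapshot = make_snapshot(mainnet_node_url)
    txs = substitute_keys(generate_transactions(snapshot), witness_keys={})
    # ... submit txs to the node at testnet_node_url ...

launch_testnet("https://mainnet.example", "http://127.0.0.1:8090")
```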

@theoreticalbts (Contributor) commented Dec 13, 2017

Sounds like premature optimization. UNIX pipes are pretty fast, and these scripts are Fast Enough To Do What We Need Them To Do. Also, Python doesn't do multithreading nicely because of the GIL. My hypothesis is that putting it all in a single Python process will actually be slower: while multiple processes chained by pipes do have to do redundant (de)serialization, the pipeline also allows the work to be parallelized somewhat on multi-core machines.
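Concretely, the trade-off looks like this when a two-stage pipeline is wired up from Python. The stage commands are placeholders for whatever tools sit on either side of the pipe:

```python
import subprocess

# Each stage is its own OS process with its own interpreter and its own
# GIL, so the kernel can schedule the stages on different cores while
# data streams through the pipe between them. The price is that each
# process boundary has to (de)serialize the stream.
producer = subprocess.Popen(["first_tool"], stdout=subprocess.PIPE)
consumer = subprocess.Popen(["second_tool"],
                            stdin=producer.stdout,
                            stdout=subprocess.PIPE)
producer.stdout.close()  # let the producer get SIGPIPE if the consumer exits
output, _ = consumer.communicate()
```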

@mvandeberg (Author) commented Dec 13, 2017

> My hypothesis is that putting it all in a single Python process will actually be slower (because of the GIL)

This is what I need: a clear and concise rationale for the existing design. I am not a Python guru, and the intricacies of Python multithreading are not something I actively think about.

I still want a single command to exist for launching a testnet. Perhaps that could be a Bash script or a Docker entry point, but I think it is important for usability.
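For instance, the entry point could be as thin as a script that composes the existing stages into one invocation. The pipeline string below is a placeholder, not the real tinman command line:

```python
#!/usr/bin/env python3
# Hypothetical single-command launcher: one invocation starts the whole
# chain, while each stage still runs as its own process (keeping the
# multi-core behavior discussed above).
import subprocess
import sys

PIPELINE = "first_tool | second_tool | third_tool"  # placeholder commands

if __name__ == "__main__":
    sys.exit(subprocess.run(PIPELINE, shell=True).returncode)
```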
