The primary operators in tomviz that produce children via dataset.create_child_dataset() are reconstruction operators. Since typically only one reconstruction is performed on a data source, running multiple child-producing operators on one data source is currently uncommon.
With #2061 coming, however, child-producing operators may become more common: setting scalars whose dimensions differ from the other scalars on the dataset is easier to do on a child than on the original dataset, since scalars with non-matching dimensions are disposed of.
This issue is partly here to document the current behavior of running multiple child-producing operators on one data source, and then discuss what the behavior should be. I think the big question is: should we be producing intermediate data sources, as some of the examples below are doing?
The attached operator is used in these examples. It simply produces a child that is the inverse of its parent.
make_inverted_child.py.gz
make_inverted_child.json.gz
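For reference, a child-producing operator along the lines of the attached make_inverted_child.py might look roughly like the sketch below. This is a guess at what such a script contains, not the attached code itself; the active_scalars accessor and the returned key name are assumptions.

```python
import numpy as np


def transform(dataset):
    """Produce a child dataset holding an inverted copy of its parent's scalars."""
    # Accessor name assumed for the Dataset wrapper; adjust if it differs.
    scalars = dataset.active_scalars

    # create_child_dataset() is the call discussed in this issue.
    child = dataset.create_child_dataset()
    child.active_scalars = np.max(scalars) - scalars

    # The child is handed back to the pipeline via the operator's return value;
    # the key name is illustrative and would need to match whatever the
    # description.json (when present) declares.
    return {'child': child}
```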
Internal pipeline with description.json
This produces intermediate data sources, which are kept rather than deleted when the next operator runs. Each child's parent is the data source before it. The modules are only moved down once, when the first child is created, but the output of each step appears to be correct.
Internal pipeline with no description.json
In this case, there are no intermediate data sources. However, the output is not correct, because the internal pipeline requires a description.json; without one, it ignores the child and just copies the input down.
External pipeline with description.json
This produces intermediate data sources. The first child output is correct, but all subsequent child outputs are not (it seems to just copy the input after the first one).
External pipeline with no description.json
There are no intermediate data sources, but each output is correct - it properly inverts the data source each time.
I spoke with @cryos and we are planning to keep the intermediate data sources, and fix the broken pipelines.
We typically want long-running operators to produce intermediate data sources, and we don't want the steps before the intermediate data source to have to run again when we add new operators. Fast-running operators do not really need intermediate data sources.
Good summary. As I said, reconstructions especially want a child output that further operators can be applied to without rerunning the reconstruction. For other operators we likely want to extend the API to express that the intermediate isn't needed and can be squashed away.
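One possible shape for that API extension, sketched purely for discussion (the keep_intermediate keyword here is illustrative, not part of the current API):

```python
def transform(dataset):
    # Hypothetical sketch only: the keyword argument below does not exist in
    # the current API. It illustrates one way an operator could mark its child
    # as transient, so the pipeline is free to squash the intermediate data
    # source away once the next operator has consumed it.
    child = dataset.create_child_dataset(keep_intermediate=False)
    child.active_scalars = dataset.active_scalars  # placeholder transform
    return {'child': child}
```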