This issue captures some ideas for exchanging processes and categorizing them. This is not necessarily meant for the Hub itself, but for whichever place ends up implementing a process exchange.
Processes should be analyzed on submission, and submitters should be made aware of the categories and their implications during submission.
1. Bound to a back-end
The process contains back-end specific details, e.g. collection IDs, band names, file paths, batch job result IDs, save_result options, etc. Therefore it usually also contains a load_collection/load_result and a save_result. These processes can only be run on a specific back-end, so the submitter must also provide the back-end URL (and API version?) so that people can actually reproduce it.
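A category-1 submission might look like the following sketch, written as an openEO-style process graph expressed as a Python dict. The collection ID, band names, temporal extent, and back-end URL are all hypothetical placeholders, not real values:

```python
# Sketch of a back-end-bound process (category 1).
# The hard-coded load_collection and save_result tie it to one back-end:
# "SENTINEL2_L2A", the band names, and the URL below are hypothetical.
backend_bound_process = {
    "process_graph": {
        "load": {
            "process_id": "load_collection",
            "arguments": {
                "id": "SENTINEL2_L2A",                      # back-end specific collection id
                "bands": ["B04", "B08"],                    # back-end specific band names
                "spatial_extent": None,
                "temporal_extent": ["2021-01-01", "2021-12-31"],
            },
        },
        "save": {
            "process_id": "save_result",
            "arguments": {
                "data": {"from_node": "load"},
                "format": "GTiff",                          # back-end specific format option
            },
            "result": True,
        },
    },
    # Category 1: the submitter must state where this actually runs.
    "links": [{"rel": "about", "href": "https://example.backend.eu/openeo/1.2"}],
}
```

Because everything is hard-coded, this graph is reproducible only on the back-end named in the link.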
2. Parametrized data-cube operations
The process parametrizes the back-end specific details (see above) and has at least one input parameter that accepts a data cube. It should not contain load_* and save_* processes. Some other processes may also need to be excluded, such as the new preprocessing steps, as these are often very back-end specific as well.
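A category-2 process could be sketched like this: the data cube arrives through a declared parameter instead of a hard-coded load_collection, so the same process can run anywhere the referenced processes exist. The process id and parameter description are hypothetical:

```python
# Sketch of a parametrized data-cube operation (category 2).
# No load_*/save_* nodes; the cube comes in via the "data" parameter.
ndvi_process = {
    "id": "ndvi_wrapper",  # hypothetical process id
    "parameters": [
        {
            "name": "data",
            "description": "A raster data cube with near-infrared and red bands.",
            "schema": {"type": "object", "subtype": "datacube"},
        }
    ],
    "process_graph": {
        "compute_ndvi": {
            "process_id": "ndvi",
            "arguments": {"data": {"from_parameter": "data"}},
            "result": True,
        }
    },
}

# The exclusion rule from the text, as a simple check:
has_io = any(
    node["process_id"].startswith(("load_", "save_"))
    for node in ndvi_process["process_graph"].values()
)
```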
3. Parametrized 'callbacks'
The process parametrizes the back-end specific details (see above) and has NO parameter that accepts a data cube, nor any process working on data cubes, so it (usually) works on arrays or scalars.
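A category-3 'callback' might look like the sketch below: it takes a scalar, not a data cube, so it can be plugged into processes like apply or reduce_dimension. The process id and the chosen scaling range are hypothetical:

```python
# Sketch of a parametrized 'callback' (category 3).
# It operates on a scalar parameter, not on a data cube.
scale_callback = {
    "id": "scale_0_255",  # hypothetical process id
    "parameters": [
        {
            "name": "x",
            "description": "A reflectance value between 0 and 1.",
            "schema": {"type": "number"},
        }
    ],
    "process_graph": {
        "scale": {
            "process_id": "linear_scale_range",
            "arguments": {
                "x": {"from_parameter": "x"},
                "inputMin": 0,
                "inputMax": 1,
                "outputMin": 0,
                "outputMax": 255,
            },
            "result": True,
        }
    },
}
```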
For categories 2 and 3, all processes should have well-defined process metadata: at least id, parameters, return value, summary, and description. Categories 2 and 3 can be promoted across back-ends, but category 1 should not be as visible, as these processes are more like examples and should therefore only be listed on a back-end specific page.
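The submission-time analysis mentioned at the top could be sketched roughly as follows. This is only a first approximation under simplifying assumptions: it assumes each parameter's schema is a single object (openEO also allows lists of schemas), assumes the return value is stored under a "returns" key, and uses the load_*/save_* prefix as a crude I/O test:

```python
# Rough categorization sketch for submitted processes (hypothetical logic).
REQUIRED_METADATA = ("id", "parameters", "returns", "summary", "description")


def categorize(process: dict) -> int:
    """Return 1 (back-end bound), 2 (data-cube operation), or 3 (callback)."""
    nodes = process.get("process_graph", {}).values()
    # Category 1: contains hard-coded I/O such as load_collection/save_result.
    if any(n["process_id"].startswith(("load_", "save_")) for n in nodes):
        return 1
    # Category 2: at least one parameter accepts a data cube
    # (assumes "schema" is a single object, not a list of schemas).
    cube_params = [
        p for p in process.get("parameters", [])
        if p.get("schema", {}).get("subtype") == "datacube"
    ]
    return 2 if cube_params else 3


def has_required_metadata(process: dict) -> bool:
    """Check the metadata fields the text requires for categories 2 and 3."""
    return all(key in process for key in REQUIRED_METADATA)
```

A real implementation would also need the deeper exclusions mentioned above, e.g. rejecting back-end specific preprocessing steps in category 2.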