Houdini: Farm caching submission to Deadline #4903
Conversation
Task linked: OP-5744 Houdini farm caching - Analysis
Some questions
Resolved review threads on openpype/modules/deadline/plugins/publish/submit_houdini_cache_deadline.py
@mustafa-zarkash can you give it a test again please?
I was testing this PR as a user.
The bad news:
Results
Oh, I noticed that I wrote that all Deadline caches weren't saved; however, I can remember that some families worked just fine.
@moonyuet
Side notes (not related to this PR):
It is not triggering the publishing plugin submit_publish_job, and more modifications are needed for that.
@MustafaJafar can you test it again?
@moonyuet However, Cache jobs and Render jobs don't inject environment variables in the same way, which is not compatible with this PR. Oh, Cache jobs don't inject any environment variables at all! I think it's related to Job Environments. I have a little question: could you tell me the difference between these two?
There are some differences between the two. One is for rendering images from Houdini, and the other is for the publish job that takes the finished renders to the publish folder.
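For context, here is a minimal sketch of how a Deadline submission can inject per-job environment variables through numbered `EnvironmentKeyValue` entries in the JobInfo payload, which is the mechanism this thread is discussing. The payload shape follows Deadline's Web Service REST API; the job name, frame range, and env values are illustrative, not taken from this PR:

```python
import requests


def submit_job(webservice_url, env):
    """Sketch: submit a Deadline job whose environment carries pipeline vars."""
    job_info = {
        "Plugin": "Houdini",
        "Name": "houdini_cache_example",
        "Frames": "1001-1100",
    }
    # Deadline reads the per-job environment from numbered
    # EnvironmentKeyValue entries in the JobInfo dictionary.
    for index, (key, value) in enumerate(env.items()):
        job_info["EnvironmentKeyValue%d" % index] = "%s=%s" % (key, value)

    payload = {"JobInfo": job_info, "PluginInfo": {}, "AuxFiles": []}
    return requests.post("%s/api/jobs" % webservice_url, json=payload)


# Example: a cache or publish job needs the same pipeline context
# (illustrative values) that a render job would normally receive.
response = submit_job(
    "http://localhost:8082",
    {"AVALON_PROJECT": "demo", "AVALON_ASSET": "shot010"},
)
```

If the cache job omits these entries entirely, the worker process starts without any pipeline context, which matches the "Cache jobs don't inject any environment variables at all" observation above.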
This is working 🥳️🥳️🥳️
One last thing: would you add the Kitsu keys like in this one? #5455
If Kitsu keys are needed, that means ftrack keys are also needed.
Hello, I'm sorry if this comes out as a bit harsh, but I think the approach this PR is taking to support caching on the farm is wrong and over-engineered.

First of all, caching on the farm (and rendering, or any other Houdini process) is already supported by third-party toolsets (Deadline, HQueue, Tractor...) in WAY more powerful ways than this PR tries to accomplish and than the OP plugin framework can manage. This is duplicating all of that logic in OP and adding 1,398 more lines to the already super complex code base!! Most Houdini TDs are already familiar with those vanilla workflows, and having them learn this other "black box" approach through OP is backwards and doesn't add any benefit in my opinion. You can see an example of a very normal submission to the farm here: #5621 (comment)

OpenPype shouldn't try to orchestrate the extract/render dependencies of the Houdini node graph; that's already done by these schedulers/submitters. We just need means to run OP publish tasks on the generated outputs, without any gimmicks: just take a path, a family, and a few other data inputs, and register it to the DB so it runs the other integrate plugins of OP, like publishing that to SG/ftrack as well (and ideally the API for doing that in OP should be super straightforward to call from anywhere! The current JSON file input isn't the best access point to that functionality).

If we wanted to help the existing vanilla submission, OP could provide a wrapper of the vanilla submitters so it sets some reasonable defaults, and we could intercept the submission to run some pre-validations on the graph... set some parms that might be driven by the OP settings, or create utility HDAs to facilitate the creation of the submitted graph so frame dependencies are set correctly, and chunk sizes for simulations... but that's it; we don't need to reinvent the wheel by interpreting how the graph needs to be evaluated.

On the other hand, I still don't quite get why the existing

Extra notes:
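To make the "super straightforward" registration API argued for above concrete, here is a hypothetical sketch of what such an entry point could look like. None of these names (`PublishRequest`, `register_publish`) exist in OpenPype; they only illustrate the shape of the call: a path, a family, and a few other data inputs.

```python
# Hypothetical illustration only -- this is not an existing OpenPype API.
from dataclasses import dataclass


@dataclass
class PublishRequest:
    project: str
    asset: str
    task: str
    family: str        # e.g. "pointcache", "vdbcache"
    path: str          # path (or sequence pattern) of the finished output
    frame_start: int
    frame_end: int


def register_publish(request: PublishRequest) -> None:
    """Register an already-rendered output so OP's integrate plugins run on it.

    A real implementation would create the version/representation documents
    in the database and let the existing integrators (ftrack, SG, Kitsu...)
    pick them up; here we only show the intended call shape.
    """
    print("would register %s -> %s" % (request.family, request.path))


register_publish(PublishRequest(
    project="demo", asset="shot010", task="fx",
    family="pointcache",
    path="/proj/demo/shot010/fx/cache.$F4.bgeo.sc",
    frame_start=1001, frame_end=1100,
))
```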
|
I've successfully published these types with the default Deadline options settings:
- ifd
- point cache abc
- point cache bgeo
- vdb
I can't test the other two.
Could we add a validator that checks whether the camera exists? I created a Mantra IFD and set it to farm, and then my job failed with no clue in the log. Although it was my fault that I didn't update the camera path on the node, I think such a validator would help. We can create a ticket later.
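As a starting point for that ticket, here is a minimal sketch of such a check, assuming a Mantra ROP whose `camera` parm should point at an existing node. It uses the standard `hou` API; the error wording and the example ROP path are illustrative:

```python
import hou


def validate_camera(rop_path):
    """Fail early if the ROP's camera parm points at a node that doesn't exist."""
    rop = hou.node(rop_path)
    cam_path = rop.parm("camera").eval()
    # hou.node() returns None when the path doesn't resolve to a node.
    if hou.node(cam_path) is None:
        raise RuntimeError(
            "Camera '%s' on '%s' does not exist; update the camera path "
            "before submitting to the farm." % (cam_path, rop_path))


validate_camera("/out/mantra1")
```

Running this as a validator before submission would surface the bad camera path locally instead of letting the farm job fail with no clue in the log.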
I agree with @fabiaserra, and I think we must put some effort into creating more lightweight Deadline support. But that is more of a long-term plan, because the vanilla Deadline support in DCCs differs vastly. There are multiple issues with farm publishing currently, ranging from farm-specific attributes in the Publisher UI (and how we handle local/farm rendering) to unifying Deadline-specific attributes per host/renderer. So I would merge it, as it is adding functionality that is useful even if it is not the "final" solution.
Changelog Description
Implements functionality to offload instances of specific families to be processed on Deadline instead of locally. This increases productivity, as the artist's local machine can be used for other tasks while the farm does the caching.
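A minimal sketch of the offloading pattern this describes, assuming the common OpenPype convention of a boolean `farm` flag in `instance.data`; the plugin name and families list here are illustrative, not the PR's actual code:

```python
import pyblish.api


class ExtractCacheLocal(pyblish.api.InstancePlugin):
    """Illustrative local extractor that steps aside for farm instances."""

    order = pyblish.api.ExtractorOrder
    families = ["pointcache", "vdbcache"]

    def process(self, instance):
        if instance.data.get("farm"):
            # Farm-bound instances are picked up by the Deadline submission
            # plugin instead of being extracted on the artist's machine.
            self.log.debug("Instance marked for farm; skipping local extract.")
            return
        # ... local extraction would happen here ...
```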
Implemented for families:
Additional info
Abc export via farm caching submission doesn't include any animation.
The current version of Deadline does not support VDB farm caching (tried with a manual Deadline submission).
Testing notes: