diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml index 2d9915609a8..83a4b6a8d4d 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.yml +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -35,6 +35,22 @@ body: label: Version description: What version are you running? Look to OpenPype Tray options: + - 3.16.5-nightly.4 + - 3.16.5-nightly.3 + - 3.16.5-nightly.2 + - 3.16.5-nightly.1 + - 3.16.4 + - 3.16.4-nightly.3 + - 3.16.4-nightly.2 + - 3.16.4-nightly.1 + - 3.16.3 + - 3.16.3-nightly.5 + - 3.16.3-nightly.4 + - 3.16.3-nightly.3 + - 3.16.3-nightly.2 + - 3.16.3-nightly.1 + - 3.16.2 + - 3.16.2-nightly.2 - 3.16.2-nightly.1 - 3.16.1 - 3.16.0 @@ -119,22 +135,6 @@ body: - 3.14.9-nightly.1 - 3.14.8 - 3.14.8-nightly.4 - - 3.14.8-nightly.3 - - 3.14.8-nightly.2 - - 3.14.8-nightly.1 - - 3.14.7 - - 3.14.7-nightly.8 - - 3.14.7-nightly.7 - - 3.14.7-nightly.6 - - 3.14.7-nightly.5 - - 3.14.7-nightly.4 - - 3.14.7-nightly.3 - - 3.14.7-nightly.2 - - 3.14.7-nightly.1 - - 3.14.6 - - 3.14.6-nightly.3 - - 3.14.6-nightly.2 - - 3.14.6-nightly.1 validations: required: true - type: dropdown diff --git a/.gitignore b/.gitignore index e5019a4e74c..622d55fb883 100644 --- a/.gitignore +++ b/.gitignore @@ -37,7 +37,7 @@ Temporary Items ########### /build /dist/ -/server_addon/package/* +/server_addon/packages/* /vendor/bin/* /vendor/python/* diff --git a/CHANGELOG.md b/CHANGELOG.md index 07b95c7343a..f1948b1a3f7 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,6 +1,1328 @@ # Changelog +## [3.16.4](https://github.com/ynput/OpenPype/tree/3.16.4) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.3...3.16.4) + +### **🆕 New features** + + +
+Feature: Download last published workfile specify version #4998
+
+Setting the `workfile_version` key in a hook's `self.launch_context.data` allows you to specify which workfile version the sync service should download when none is matched locally. This is helpful when the last version hasn't been published or synchronized correctly and you want to recover a previous one (or any other you'd like). The version can be set in two ways (see the sketch below):
+- OP's absolute version, matching the `version` index in the DB.
+- A relative version counted in reverse from the last one: `-2`, `-3`... Where to document this is still an open question.
+
+
+___
+
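+A minimal sketch of such a prelaunch hook, assuming the `PreLaunchHook` base class from `openpype.lib.applications`; the class name and host filter below are hypothetical:
+
+```python
+from openpype.lib.applications import PreLaunchHook
+
+
+class ForcePreviousWorkfileVersion(PreLaunchHook):
+    """Ask the sync service to fetch the second-to-last workfile version."""
+
+    app_groups = ["maya"]  # hypothetical host filter
+
+    def execute(self):
+        # Relative version: -2 resolves to the version before the last one.
+        self.launch_context.data["workfile_version"] = -2
+```
+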
+ +### **🚀 Enhancements** + + +
+Maya: allow not creation of group for Import loaders #5427
+
+This PR extends the previous one. All ReferenceLoaders can now skip wrapping imported products in an explicit group, and `Import` loaders get the same options. The control is separate in Settings, e.g. Reference might wrap loaded items in a group while `Import` might not.
+
+
+___
+
+ + +
+3dsMax: Settings for Ayon #5388
+
+Adds the 3ds Max addon settings for AYON.
+
+
+___
+
+ + +
+General: Navigation to Folder from Launcher #5404 + +Adds an action in launcher to open the directory of the asset. + + +___ + +
+ + +
+Chore: Default variant in create plugin #5429
+
+The `default_variant` attribute on create plugins now always returns a string; if no default variant is filled in, fallback ways to determine one are implemented.
+
+
+___
+
+ + +
+Publisher: Thumbnail widget enhancements #5439
+
+The thumbnail widget in Publisher has three new options to choose from: Paste (from clipboard), Take screenshot and Browse. The Clear button and the new options are hidden by default; the user must expand the options button to show them.
+
+
+___
+
+ + +
+AYON: Update ayon api to '0.3.5' #5460 + +Updated ayon-python-api to 0.3.5. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+AYON: Apply unknown ayon settings first #5435 + +Settings of custom addons are available in converted settings. + + +___ + +
+ + +
+Maya: Fix wrong subset name of render family in deadline #5442
+
+The new publisher was creating different subset names than before, which resulted in a duplicated `render` string in the final subset name of the `render` family published on Deadline. This PR solves that, and also fixes issues with legacy instances from the old publisher by matching the subset name as it was before. It solves the same issue in the Max implementation.
+
+
+___
+
+ + +
+Maya: Fix setting of version to workfile instance #5452
+
+If multiple renderlayer instances were published, the previous logic could unpredictably rewrite the instance family to 'workfile' when `Sync render version with workfile` was enabled.
+
+
+___
+
+ + +
+Maya: Context plugin shouldn't be tied to family #5464
+
+The `Maya Current File` collector was unnecessarily tied to `workfile`. It should run even if a `workfile` instance is not being published.
+
+
+___
+
+ + +
+Unreal: Fix loading hero version for static and skeletal meshes #5393
+
+Fixed a problem with loading hero versions for static and skeletal meshes.
+
+
+___
+
+ + +
+TVPaint: Fix 'repeat' behavior #5412 + +Calculation of frames for repeat behavior is working correctly. + + +___ + +
+ + +
+AYON: Thumbnails cache and api prep #5437
+
+Moved the thumbnails cache from ayon-python-api to OpenPype and prepared the AYON thumbnail resolver for new api functions. The current implementation should work with both the old and new ayon-python-api.
+
+
+___
+
+ + +
+Nuke: Name of the Read Node should be updated correctly when switching versions or assets. #5444
+
+Fixes the Read node's name not being updated correctly when setting a version or switching assets.
+
+
+___
+
+ + +
+Farm publishing: asymmetric handles fixed #5446
+
+Handles are now set correctly on farm-published product versions when asymmetric handles were set on shot attributes.
+
+
+___
+
+ + +
+Scene Inventory: Provider icons fix #5450 + +Fix how provider icons are accessed in scene inventory. + + +___ + +
+ + +
+Fix typo on Deadline OP plugin name #5453 + +Surprised that no one has hit this bug yet... but it seems like there was a typo on the name of the OP Deadline plugin when submitting jobs to it. + + +___ + +
+ + +
+AYON: Fix version attributes update #5472 + +Fixed updates of attribs in AYON mode. + + +___ + +
+ +### **Merged pull requests** + + +
+Added missing defaults for import_loader #5447 + + +___ + +
+ + +
+Bug: Local settings don't open on 3.14.7 #5220 + +### Before posting a new ticket, have you looked through the documentation to find an answer? + +Yes I have + +### Have you looked through the existing tickets to find any related issues ? + +Not yet + +### Author of the bug + +@FadyFS + +### Version + +3.15.11-nightly.3 + +### What platform you are running OpenPype on? + +Linux / Centos + +### Current Behavior: + +the previous behavior (bug) : +![image](https://github.com/quadproduction/OpenPype/assets/135602303/09bff9d5-3f8b-4339-a1e5-30c04ade828c) + + +### Expected Behavior: + +![image](https://github.com/quadproduction/OpenPype/assets/135602303/c505a103-7965-4796-bcdf-73bcc48a469b) + + +### What type of bug is it ? + +Happened only once in a particular configuration + +### Which project / workfile / asset / ... + +open settings with 3.14.7 + +### Steps To Reproduce: + +1. Run openpype on the 3.15.11-nightly.3 version +2. Open settings in 3.14.7 version + +### Relevant log output: + +_No response_ + +### Additional context: + +_No response_ + +___ + +
+ + +
+Tests: Add automated targets for tests #5443
+
+Without it, plugins with 'automated' targets won't be triggered (e.g. `CloseAE`).
+
+
+___
+
+ + + + +## [3.16.3](https://github.com/ynput/OpenPype/tree/3.16.3) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.2...3.16.3) + +### **🆕 New features** + + +
+AYON: 3rd party addon usage #5300
+
+Prepares the OpenPype code to use the `ayon-third-party` addon, which supplies ffmpeg and OpenImageIO executables. Because both can define custom arguments (more than one), new functions were needed: `get_ffmpeg_tool_args` and `get_oiio_tool_args`. They work like the previous ones, but return a list of strings instead of a single string. All places using the previous `get_ffmpeg_tool_path` and `get_oiio_tool_path` now use the new functions. They should be backwards compatible, even with an addon that returns a single argument.
+
+
+___
+
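+A hedged usage sketch; the exact signature of `get_ffmpeg_tool_args` is assumed from the description above:
+
+```python
+import subprocess
+
+from openpype.lib import get_ffmpeg_tool_args
+
+# The helper returns a list (the executable plus any custom arguments
+# supplied by the ayon-third-party addon), so it is concatenated with the
+# remaining arguments instead of being prepended as a single path string.
+cmd = get_ffmpeg_tool_args("ffmpeg") + ["-i", "input.mov", "output.mp4"]
+subprocess.run(cmd, check=True)
+```
+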
+ + +
+AYON: Addon settings in OpenPype #5347
+
+Moved settings addons into the OpenPype server addon. Modified the create-package script to create server zip files for each settings addon and for the openpype addon.
+
+
+___
+
+ + +
+AYON: Add folder to template data #5417 + +Added `folder` to template data, so `{folder[name]}` can be used in templates. + + +___ + +
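+A hedged illustration of the new template data key; the template and values below are hypothetical:
+
+```python
+# "{folder[name]}" works because str.format supports dict key lookups.
+template = "{project[name]}/{folder[name]}/{product[name]}"
+data = {
+    "project": {"name": "demo_project"},
+    "folder": {"name": "sh010"},
+    "product": {"name": "renderMain"},
+}
+print(template.format(**data))  # demo_project/sh010/renderMain
+```
+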
+ + +
+Option to start versioning from 0 #5262
+
+This PR adds a settings option to start all versioning from 0. It replaces #4455.
+
+
+___
+
+ + +
+Ayon: deadline implementation #5321
+
+Quick implementation of Deadline in AYON. A new AYON plugin was added to the Deadline repository.
+
+
+___
+
+ + +
+AYON: Remove AYON launch logic from OpenPype #5348 + +Removed AYON launch logic from OpenPype. The logic is outdated at this moment and is replaced by `ayon-launcher`. + + +___ + +
+ +### **🚀 Enhancements** + + +
+Bug: Error on multiple instance rig with maya #5310
+
+Changed the `endswith` check to `startswith`, because the sets are automatically named out_SET, out_SET1, out_SET2, ... (as the snippet below illustrates).
+
+
+___
+
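+A small sketch of why the check had to change, using Maya's automatic numbering of duplicate set names:
+
+```python
+set_names = ["out_SET", "out_SET1", "out_SET2"]
+
+# endswith only matches the first set:
+print([n for n in set_names if n.endswith("out_SET")])    # ['out_SET']
+# startswith matches all automatically numbered duplicates:
+print([n for n in set_names if n.startswith("out_SET")])  # all three
+```
+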
+ + +
+Applications: Use prelaunch hooks to extract environments #5387
+
+Environment variable preparation is based on prelaunch hooks. This should allow passing OCIO environment variables to farm jobs.
+
+
+___
+
+ + +
+Applications: Launch hooks cleanup #5395 + +Use `set` instead of `list` for filtering attributes in launch hooks. Celaction hooks dir does not contain `__init__.py`. Celaction prelaunch hook is reusing `CELACTION_ROOT_DIR`. Launch hooks are using full import from `openpype.lib.applications`. + + +___ + +
+ + +
+Applications: Environment variables order #5245
+
+Changed the order in which environment variables are set: context environment variables are set first, then project environment overrides. Asset and task environment variables are now optional.
+
+
+___
+
+ + +
+Autosave preferences can be read after Nuke opens the script #5295
+
+The script must be open in Nuke before the autosave preferences can be loaded correctly. This PR reads the Nuke script in context and offers overwriting the current script with the autosaved one if an autosave exists.
+
+
+___
+
+ + +
+Resolve: Update with compatible resolve version and latest docs #5317 + +Missing information about compatible Resolve version and latest docs from https://github.com/ynput/OpenPype/tree/develop/openpype/hosts/resolve + + +___ + +
+ + +
+Chore: Remove deprecated functions #5323 + +Removed functions/classes that are deprecated and marked to be removed. + + +___ + +
+ + +
+Nuke Render and Prerender nodes Process Order - OP-3555 #5332
+
+This PR exposes control over the processing order of instances by sorting the created instances (sketched below). The sorting happens on `render_order` and subset name: if the `render_order` knob is found on the instance, we sort by that first before sorting by subset name. Instances with `render_order` are processed before nodes without it. This could be extended in the future by querying other knobs, but no use case is known yet. The creator `order` attribute of the `prerender` class is hardcoded to come before `render`; this could be exposed to the user/studio, but again no use case is known.
+
+
+___
+
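+A hedged sketch of the described sorting; the instance data keys are assumed:
+
+```python
+def sort_instances(instances):
+    """Order instances: render_order first, then subset name."""
+    with_order = [i for i in instances if "render_order" in i.data]
+    without_order = [i for i in instances if "render_order" not in i.data]
+    # Instances carrying a render_order knob come first, ordered by it with
+    # subset name as tie-breaker; the rest sort by subset name only.
+    with_order.sort(
+        key=lambda i: (int(i.data["render_order"]), i.data["subset"])
+    )
+    without_order.sort(key=lambda i: i.data["subset"])
+    return with_order + without_order
+```
+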
+ + +
+Unreal: Python Environment Improvements #5344 + +Automatically set `UE_PYTHONPATH` as `PYTHONPATH` when launching Unreal. + + +___ + +
+ + +
+Unreal: Custom location for Unreal Ayon Plugin #5346 + +Added a new environment variable `AYON_BUILT_UNREAL_PLUGIN` to set an already existing and built Ayon Plugin for Unreal. + + +___ + +
+ + +
+Unreal: Better handling of Exceptions in UE Worker threads #5349
+
+Implemented a new `UEWorker` base class to handle exceptions during the execution of UE workers.
+
+
+___
+
+ + +
+Houdini: Add farm toggle on creation menu #5350
+
+Deadline farm publishing and rendering for Houdini became possible with PR #4825, where farm publishing is enabled by default on some ROP nodes, which may surprise new users. Adding a toggle (on by default) to the creation UI makes users aware that there is a farm option for the publish instance. ROPs modified:
+- [x] Mantra ROP
+- [x] Karma ROP
+- [x] Arnold ROP
+- [x] Redshift ROP
+- [x] Vray ROP
+
+
+___
+
+ + +
+Ftrack: Sync to avalon settings #5353 + +Added roles settings for sync to avalon action. + + +___ + +
+ + +
+Chore: Schemas inside OpenPype #5354 + +Moved/copied schemas from repository root inside openpype/pipeline. + + +___ + +
+ + +
+AYON: Addons creation enhancements #5356 + +Enhanced AYON addons creation. Fix issue with `Pattern` typehint. Zip filenames contain version. OpenPype package is skipping modules that are already separated in AYON. Updated settings of addons. + + +___ + +
+ + +
+AYON: Update staging icons #5372 + +Updated staging icons for staging mode. + + +___ + +
+ + +
+Enhancement: Houdini Update pointcache labels #5373
+
+It is logical to find the pointcache types listed one after another, but they were named differently, so this PR updates their labels.
+
+
+___
+
+ + +
+nuke: split write node product instance features #5389 + +Improving Write node product instances by allowing precise activation of specific features. + + +___ + +
+ + +
+Max: Use the empty modifiers in container to store AYON Parameter #5396
+
+Instead of adding the AYON/OP parameter along with other attributes inside the container, empty modifiers are created to store the AYON/OP custom attributes.
+
+
+___
+
+ + +
+AfterEffects: Removed unused imports #5397 + +Removed unused import from extract local render plugin file. + + +___ + +
+ + +
+Nuke: adding BBox knob type to settings #5405
+
+Nuke knob types in settings now have a new `Box` type for reposition nodes like Crop or Reformat.
+
+
+___
+
+ + +
+SyncServer: Existence of module is optional #5413
+
+The existence of the SyncServer module is now optional, not required. Added the `sync_server` module back to ignored modules when the openpype addon is created for AYON. The `syncserver` command is marked as deprecated and redirected to the sync server CLI.
+
+
+___
+
+ + +
+Webpublisher: Self contain test publish logic #5414 + +Moved test logic of publishing to webpublisher. Simplified `remote_publish` to remove webpublisher specific logic. + + +___ + +
+ + +
+Webpublisher: Cleanup targets #5418 + +Removed `remote` target from webpublisher and replaced it with 2 targets `webpublisher` and `automated`. + + +___ + +
+ + +
+nuke: update server addon settings with box #5419
+
+Updating Nuke AYON server settings with the Box option in knob types.
+
+
+___
+
+ +### **🐛 Bug fixes** + + +
+Maya: fix validate frame range on review attached to other instances #5296 + +Fixes situation where frame range validator can't be turned off on models if they are attached to reviewable camera in Maya. + + +___ + +
+ + +
+Maya: Apply project settings to creators #5303 + +Project settings were not applied to the creators. + + +___ + +
+ + +
+Maya: Validate Model Content #5336
+
+`assemblies` in `cmds.ls` does not seem to work:
+```python
+from maya import cmds
+
+# Expected: only the top-level assemblies of the given long paths.
+content_instance = [
+    '|group2|pSphere1_GEO',
+    '|group2|pSphere1_GEO|pSphere1_GEOShape',
+    '|group1|pSphere1_GEO',
+    '|group1|pSphere1_GEO|pSphere1_GEOShape',
+]
+assemblies = cmds.ls(content_instance, assemblies=True, long=True)
+print(assemblies)  # does not return the expected top-level nodes
+```
+
+Fixing with string splitting instead (sketched below).
+
+
+___
+
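+A hedged sketch of the string-splitting fallback: the top-level assembly of a long DAG path is simply its first path component.
+
+```python
+content_instance = [
+    '|group2|pSphere1_GEO',
+    '|group2|pSphere1_GEO|pSphere1_GEOShape',
+    '|group1|pSphere1_GEO',
+]
+# '|group2|pSphere1_GEO'.split('|')[1] -> 'group2'
+assemblies = {'|' + path.split('|')[1] for path in content_instance}
+print(sorted(assemblies))  # ['|group1', '|group2']
+```
+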
+ + +
+Bugfix: Maya update defaults variable #5368
+
+Something was forgotten while moving from `LegacyCreator` to `NewCreator`: `LegacyCreator` used `defaults` to list suggested subset names, which was renamed to `default_variants` in `NewCreator`, so setting `defaults` to any value has no effect (see the sketch after this list). This update affects:
+- [x] Model
+- [x] Set Dress
+
+
+___
+
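+A hedged before/after sketch of the rename; the classes below are illustrative stand-ins, not the actual plugins:
+
+```python
+class NewCreator:
+    """Stand-in for the create plugin base class."""
+    default_variants = []
+
+
+class CreateModel(NewCreator):
+    # defaults = ["Main", "Proxy"]        # LegacyCreator attribute: ignored
+    default_variants = ["Main", "Proxy"]  # NewCreator reads this instead
+```
+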
+ + +
+Chore: Python 2 support fix #5375 + +Fix Python 2 support by adding `click` into python 2 dependencies and removing f-string from maya. + + +___ + +
+ + +
+Maya: do not create top level group on reference #5402
+
+This PR allows loaded referenced assets not to be wrapped in a top-level group, either explicitly per artist or by configuration in Settings. Artists can control group creation in the ReferenceLoader options. Creating no group by default can be configured by emptying `Group Name` in `project_settings/maya/load/reference_loader`.
+
+
+___
+
+ + +
+Settings: Houdini & Maya create plugin settings #5436
+
+Fixes related to Maya and Houdini settings. Renamed `defaults` to `default_variants` in plugin settings to match the attribute name on create plugins, in both OpenPype and AYON settings. Fixed Houdini AYON settings, where settings for default variants were missing, and Maya AYON settings, where the default factory had a wrong assignment.
+
+
+___
+
+ + +
+Maya: Hide CreateAnimation #5297
+
+When converting the `animation` family or loading a `rig` family, we need to include the `animation` creator but hide it in the creator context.
+
+
+___
+
+ + +
+Nuke Anamorphic slate - Read pixel aspect from input #5304
+
+When the asset pixel aspect differs from the rendered pixel aspect, the Nuke slate pixel aspect is no longer taken from the asset but is read via ffprobe (see the sketch below).
+
+
+___
+
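+A hedged sketch of reading the pixel aspect ratio from rendered media with ffprobe; the helper name is hypothetical:
+
+```python
+import json
+import subprocess
+
+
+def get_pixel_aspect(path):
+    """Return the pixel aspect ratio of the first stream in the file."""
+    out = subprocess.check_output([
+        "ffprobe", "-v", "error", "-print_format", "json",
+        "-show_streams", path,
+    ])
+    stream = json.loads(out)["streams"][0]
+    # sample_aspect_ratio comes back like "2:1"; default to square pixels.
+    num, den = stream.get("sample_aspect_ratio", "1:1").split(":")
+    return float(num) / float(den)
+```
+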
+ + +
+Nuke - Allow ExtractReviewDataMov with no timecode knob #5305
+
+ExtractReviewDataMov allows specifying the file type. Writing an extension other than mov used to fail in generate_mov, which assumed that the mov64_write_timecode knob exists.
+
+
+___
+
+ + +
+Nuke: removing settings schema with defaults for OpenPype #5306 + +continuation of https://github.com/ynput/OpenPype/pull/5275 + + +___ + +
+ + +
+Bugfix: Dependency without 'inputLinks' not downloaded #5337 + +Remove condition that avoids downloading dependency without `inputLinks`. + + +___ + +
+ + +
+Bugfix: Houdini Creator use selection even if it was toggled off #5359
+
+When creating many product types (families) one after another without manually refreshing the creator window, toggling `Use selection` once made all later product types use the selection even after it was toggled off.
+
+Before (the selection kept being used unless the window was refreshed manually): https://github.com/ynput/OpenPype/assets/20871534/8b890122-5b53-4c6b-897d-6a2f3aa3388a
+
+After (works as expected): https://github.com/ynput/OpenPype/assets/20871534/6b1db990-de1b-428e-8828-04ab59a44e28
+
+
+___
+
+ + +
+Houdini: Correct camera selection for karma renderer when using selected node #5360
+
+When a user created the Karma ROP with a camera selected via "use selection", it gave the error "no render camera found in selection". This PR fixes creating the Karma ROP with a selected camera node in Houdini.
+
+
+___
+
+ + +
+AYON: Environment variables and functions #5361
+
+Prepares the code for ayon-launcher compatibility: fixed ayon-launcher subprocess calls, added more checks for `AYON_SERVER_ENABLED`, used launcher-suitable environment variables in AYON mode and changed outputs of some functions. Replaced usages of the `OPENPYPE_REPOS_ROOT` environment variable with the `PACKAGE_DIR` variable so that correct paths are used.
+
+
+___
+
+ + +
+Nuke: farm rendering of prerender ignore roots in nuke #5366
+
+The `prerender` family was using the wrong subset, the same one as `render`, which should be different.
+
+
+___
+
+ + +
+Bugfix: Houdini update defaults variable #5367
+
+Something was forgotten while moving from `LegacyCreator` to `NewCreator`: `LegacyCreator` used `defaults` to list suggested subset names, which was renamed to `default_variants` in `NewCreator`, so setting `defaults` to any value has no effect. This update affects:
+- [x] Arnold ASS
+- [x] Arnold ROP
+- [x] Karma ROP
+- [x] Mantra ROP
+- [x] Redshift ROP
+- [x] VRay ROP
+
+
+___
+
+ + +
+Publisher: Fix create/publish animation #5369 + +Use geometry movement instead of changing min/max width. + + +___ + +
+ + +
+Unreal: Move unreal splash screen to unreal #5370 + +Moved splash screen code to unreal integration and removed import from Igniter. + + +___ + +
+ + +
+Nuke: returned not cleaning of renders folder on the farm #5374
+
+A previous PR enabled explicit cleanup of the `renders` folder after farm publishing, which does not match customers' workflows: customers want access to the files in the `renders` folder so they can redo some frames of long sequences. This PR marks rendered files for deletion only when the instance does not have `stagingDir_persistent` (see the sketch below). For backwards compatibility, all Nuke instances have `stagingDir_persistent` set to True, i.e. the `renders` folder won't be cleaned after a farm publish.
+
+
+___
+
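+A hedged sketch of the cleanup rule; the `cleanupFullPaths` context key follows OpenPype's explicit cleanup convention, and the function name is hypothetical:
+
+```python
+def mark_renders_for_cleanup(instance, filepaths):
+    """Queue rendered files for deletion unless the staging dir persists."""
+    if instance.data.get("stagingDir_persistent"):
+        return  # keep the renders folder untouched
+    cleanup = instance.context.data.setdefault("cleanupFullPaths", [])
+    cleanup.extend(filepaths)
+```
+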
+ + +
+Nuke: loading sequences is working #5376
+
+Loading image sequences was broken after the 3.16 release; it now works as expected.
+
+
+___
+
+ + +
+AYON: Fix settings conversion for ayon addons #5377
+
+AYON addon settings are available in system settings even when the same values are not available under the `"modules"` subkey.
+
+
+___
+
+ + +
+Nuke: OCIO env var workflow #5379 + +The OCIO environment variable needs to be consistently handled across all platforms. Nuke resolves the custom OCIO config path differently depending on the platform, so we included the ocio config path in the workfile with a partial replacement using an environment variable. Additionally, for Windows sessions, we replaced backward slashes with a TCL expression. + + +___ + +
+ + +
+Unreal: Fix Unreal build script #5381 + +Define 'AYON_UNREAL_ROOT' environment variable in unreal addon. + + +___ + +
+ + +
+3dsMax: Use relative path to MAX_HOST_DIR #5382
+
+Use `MAX_HOST_DIR` to calculate the startup script path instead of a path relative to the `OPENPYPE_ROOT` environment variable.
+
+
+___
+
+ + +
+Bugfix: Houdini abc validator error message #5386
+
+When the ABC path validator failed, it printed node objects instead of node paths or names. This happened because the `get_invalid` method was updated to return nodes instead of node paths. (Before/after examples are shown in the PR.)
+
+
+___
+
+ + +
+Nuke: node name influence product (subset) name #5392 + +Nuke now allows users to duplicate publishing instances, making the workflow easier. By duplicating a node and changing its name, users can set the product (subset) name in the publishing context.Users now have the ability to change the variant name in Publisher, which will automatically rename the associated instance node. + + +___ + +
+ + +
+Houdini: delete redundant bgeo sop validator #5394
+
+The `Validate BGEO SOP Path` validator is redundant; it catches two cases that are already covered by "Validate Output Node", which works for `bgeo` as well as `abc` because `"pointcache"` is listed in its families.
+
+
+___
+
+ + +
+Nuke: workfile is not reopening after change of context #5399 + +Nuke no longer reopens the latest workfile when the context is changed to a different task using the Workfile tool. The issue also affected the Script Clean (from Nuke File menu) and Close feature, but it has now been fixed. + + +___ + +
+ + +
+Bugfix: houdini hard coded project settings #5400
+
+Solves the issue of hard-coded project settings in Houdini.
+
+
+___
+
+ + +
+AYON: 3dsMax settings #5401 + +Keep `adsk_3dsmax` group in applications settings. + + +___ + +
+ + +
+Bugfix: update defaults to default_variants in maya and houdini OP DCC settings #5407
+
+When moving to the new creator in Maya and Houdini, updating these settings was missed.
+
+
+___
+
+ + +
+Applications: Attributes creation #5408
+
+The Applications addon no longer causes an infinite server restart loop.
+
+
+___
+
+ + +
+Max: fix the bug of handling Object deletion in OP Parameter #5410
+
+If an object was added to the OP parameter and the user deleted it from the scene afterwards, the container with OP attributes errored out. This PR resolves that bug. It also fixes the attribute not being added to the OP parameter correctly when the user enables "use selections" to link the object into the OP parameter.
+
+
+___
+
+ + +
+Colorspace: including environments from launcher process #5411
+
+Fixed a bug where the OCIO config template was not properly formatting environment variables from System Settings `general/environment`.
+
+
+___
+
+ + +
+Nuke: workfile template fixes #5428
+
+Fixes a bunch of small workfile template bugs.
+
+
+___
+
+ + +
+Houdini, Max: Fix missed function interface change #5430 + +This PR https://github.com/ynput/OpenPype/pull/5321/files from @kalisp missed updating the `add_render_job_env_var` in Houdini and Max as they are passing an extra arg: +``` +TypeError: add_render_job_env_var() takes 1 positional argument but 2 were given +``` + + +___ + +
+ + +
+Scene Inventory: Fix issue with 'sync_server' #5431
+
+Fix access to the `sync_server` attribute in scene inventory.
+
+
+___
+
+ + +
+Unpack project: Fix import issue #5433 + +Added `load_json_file`, `replace_project_documents` and `store_project_documents` to mongo init. + + +___ + +
+ + +
+Chore: Versions post fixes #5441
+
+Fixed issues caused by my earlier mistake. The right version value is now filled into anatomy data.
+
+
+___
+
+ +### **📃 Testing** + + +
+Tests: Copy file_handler as it will be removed by purging ayon code #5357 + +Ayon code will get purged in the future from this repo/addon, therefore all `ayon_common` will be gone. `file_handler` gets internalized to tests as it is not used anywhere else. + + +___ + +
+ + + + +## [3.16.2](https://github.com/ynput/OpenPype/tree/3.16.2) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.1...3.16.2) + +### **🆕 New features** + + +
+Fusion - Set selected tool to active #5327
+
+When you run the action to select a node, the node flow now shows the selected node and the node's controls appear in the inspector.
+
+
+___
+
+ +### **🚀 Enhancements** + + +
+Maya: All base create plugins #5326 + +Prepared base classes for each creator type in Maya. Extended `MayaCreatorBase` to have default implementations of common logic with instances which is used in each type of plugin. + + +___ + +
+ + +
+Windows: Support long paths on zip updates. #5265
+
+Support long paths for version extraction on Windows. The use case is having long paths, for example in an addon: you can install to the C drive, but because the zip files are extracted in the local user's folder, additional subdirectories are added to the paths, which quickly become too long for Windows to handle during zip updates.
+
+
+___
+
+ + +
+Blender: Added setting to set resolution and start/end frames at startup #5338
+
+This PR adds the `set_resolution_startup` and `set_frames_startup` settings. They automatically set the resolution and the start/end frames plus FPS, respectively, in Blender when opening a file or creating a new one.
+
+
+___
+
+ + +
+Blender: Support for ExtractBurnin #5339 + +This PR adds support for ExtractBurnin for Blender, when publishing a Review. + + +___ + +
+ + +
+Blender: Extract Camera as Alembic #5343 + +Added support to extract Alembic Cameras in Blender. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+Maya: Validate Instance In Context #5335
+
+Added the missing new-publisher error so that the repair action shows up.
+
+
+___
+
+ + +
+Settings: Fix default settings #5311
+
+Fixed default settings for Shotgrid. Renamed `FarmRootEnumEntity` to `DynamicEnumEntity` and removed a doubled ABC metaclass definition (all settings entities have an abstract metaclass).
+
+
+___
+
+ + +
+Deadline: missing context argument #5312 + +Updated function arguments + + +___ + +
+ + +
+Qt UI: Multiselection combobox PySide6 compatibility #5314 + +- The check states are replaced with the values for PySide6 +- `QtCore.Qt.ItemIsUserTristate` is used instead of `QtCore.Qt.ItemIsTristate` to avoid crashes on PySide6 + + +___ + +
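+A hedged sketch of resolving the tristate item flag across Qt bindings via qtpy; PySide6 only exposes the user-tristate variant:
+
+```python
+from qtpy import QtCore
+
+# PySide6 exposes ItemIsUserTristate; older bindings may only have
+# ItemIsTristate, so fall back gracefully.
+tristate_flag = getattr(
+    QtCore.Qt,
+    "ItemIsUserTristate",
+    getattr(QtCore.Qt, "ItemIsTristate", None),
+)
+```
+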
+ + +
+Docker: handle openssl 1.1.1 for centos 7 docker build #5319
+
+The move to Python 3.9 added the need for OpenSSL 1.1.x, which is not available by default on the CentOS 7 image. This fixes it.
+
+
+___
+
+ + +
+houdini: fix typo in redshift proxy #5320
+
+There was a typo in `create_redshift_proxy.py` (an extra backtick in the filename); this PR fixes it.
+
+
+___
+
+ + +
+Houdini: fix wrong creator identifier in pointCache workflow #5324
+
+Fixes a bug in publishing alembics where an invalid creator identifier caused a missing family association.
+
+
+___
+
+ + +
+Fix colorspace compatibility check #5334
+
+A user may have `PyOpenColorIO` installed on their machine (in my case it came with RenderMan). This can trick the compatibility check: `import PyOpenColorIO` won't raise an error, yet it may be an old version. Before the fix the compatibility check passed and the wrapper was used directly; after the fix the wrapper is used via a subprocess instead.
+
+
+___
+
+ +### **Merged pull requests** + + +
+Remove forgotten dev logging #5315 + + +___ + +
+ + + + ## [3.16.1](https://github.com/ynput/OpenPype/tree/3.16.1) @@ -177,7 +1499,7 @@ ___ Add functional base for API Documentation using Sphinx and AutoAPI. -After unsuccessful #2512, #834 and #210 this is yet another try. But this time without ambition to solve the whole issue. This is making Shinx script to work and nothing else. Any changes and improvements in API docs should be made in subsequent PRs. +After unsuccessful #2512, #834 and #210 this is yet another try. But this time without ambition to solve the whole issue. This is making Shinx script to work and nothing else. Any changes and improvements in API docs should be made in subsequent PRs. ## How to use it @@ -188,7 +1510,7 @@ cd .\docs make.bat html ``` -or +or ```sh cd ./docs @@ -203,7 +1525,7 @@ During the build you'll see tons of red errors that are pointing to our issues: Invalid import are usually wrong relative imports (too deep) or circular imports. 2) **Invalid doc-strings** - Doc-strings to be processed into documentation needs to follow some syntax - this can be checked by running + Doc-strings to be processed into documentation needs to follow some syntax - this can be checked by running `pydocstyle` that is already included with OpenPype 3) **Invalid markdown/rst files** md/rst files can be included inside rst files using `.. include::` directive. But they have to be properly formatted. @@ -1390,11 +2712,11 @@ ___
Houdini: Redshift ROP image format bug #5218 -Problem : -"RS_outputFileFormat" parm value was missing -and there were more "image_format" than redshift rop supports +Problem : +"RS_outputFileFormat" parm value was missing +and there were more "image_format" than redshift rop supports -Fix: +Fix: 1) removed unnecessary formats from `image_format_enum` 2) add the selected format value to `RS_outputFileFormat` ___ @@ -3571,7 +4893,7 @@ ___
Maya Load References - Add Display Handle Setting #4904 -When we load a reference in Maya using OpenPype loader, display handle is checked by default and prevent us to select easily the object in the viewport. I understand that some productions like to keep this option, so I propose to add display handle to the reference loader settings. +When we load a reference in Maya using OpenPype loader, display handle is checked by default and prevent us to select easily the object in the viewport. I understand that some productions like to keep this option, so I propose to add display handle to the reference loader settings. ___ @@ -3679,7 +5001,7 @@ ___
Patchelf version locked #4853 -For Centos dockerfile it is necessary to lock the patchelf version to the older, otherwise the build process fails. +For Centos dockerfile it is necessary to lock the patchelf version to the older, otherwise the build process fails. ___ diff --git a/README.md b/README.md index 6caed8061c5..ce98f845e68 100644 --- a/README.md +++ b/README.md @@ -62,7 +62,7 @@ development tools like [CMake](https://cmake.org/) and [Visual Studio](https://v #### Clone repository: ```sh -git clone --recurse-submodules git@github.com:Pypeclub/OpenPype.git +git clone --recurse-submodules git@github.com:ynput/OpenPype.git ``` #### To build OpenPype: @@ -144,6 +144,10 @@ sudo ./tools/docker_build.sh centos7 If all is successful, you'll find built OpenPype in `./build/` folder. +Docker build can be also started from Windows machine, just use `./tools/docker_build.ps1` instead of shell script. + +This could be used even for building linux build (with argument `centos7` or `debian`) + #### Manual build You will need [Python >= 3.9](https://www.python.org/downloads/) and [git](https://git-scm.com/downloads). You'll also need [curl](https://curl.se) on systems that doesn't have one preinstalled. diff --git a/ayon_start.py b/ayon_start.py deleted file mode 100644 index 458c46bba6c..00000000000 --- a/ayon_start.py +++ /dev/null @@ -1,483 +0,0 @@ -# -*- coding: utf-8 -*- -"""Main entry point for AYON command. - -Bootstrapping process of AYON. -""" -import os -import sys -import site -import traceback -import contextlib - - -# Enabled logging debug mode when "--debug" is passed -if "--verbose" in sys.argv: - expected_values = ( - "Expected: notset, debug, info, warning, error, critical" - " or integer [0-50]." - ) - idx = sys.argv.index("--verbose") - sys.argv.pop(idx) - if idx < len(sys.argv): - value = sys.argv.pop(idx) - else: - raise RuntimeError(( - f"Expect value after \"--verbose\" argument. {expected_values}" - )) - - log_level = None - low_value = value.lower() - if low_value.isdigit(): - log_level = int(low_value) - elif low_value == "notset": - log_level = 0 - elif low_value == "debug": - log_level = 10 - elif low_value == "info": - log_level = 20 - elif low_value == "warning": - log_level = 30 - elif low_value == "error": - log_level = 40 - elif low_value == "critical": - log_level = 50 - - if log_level is None: - raise ValueError(( - "Unexpected value after \"--verbose\" " - f"argument \"{value}\". 
{expected_values}" - )) - - os.environ["OPENPYPE_LOG_LEVEL"] = str(log_level) - os.environ["AYON_LOG_LEVEL"] = str(log_level) - -# Enable debug mode, may affect log level if log level is not defined -if "--debug" in sys.argv: - sys.argv.remove("--debug") - os.environ["AYON_DEBUG"] = "1" - os.environ["OPENPYPE_DEBUG"] = "1" - -if "--automatic-tests" in sys.argv: - sys.argv.remove("--automatic-tests") - os.environ["IS_TEST"] = "1" - -SKIP_HEADERS = False -if "--skip-headers" in sys.argv: - sys.argv.remove("--skip-headers") - SKIP_HEADERS = True - -SKIP_BOOTSTRAP = False -if "--skip-bootstrap" in sys.argv: - sys.argv.remove("--skip-bootstrap") - SKIP_BOOTSTRAP = True - -if "--use-staging" in sys.argv: - sys.argv.remove("--use-staging") - os.environ["AYON_USE_STAGING"] = "1" - os.environ["OPENPYPE_USE_STAGING"] = "1" - -if "--headless" in sys.argv: - os.environ["AYON_HEADLESS_MODE"] = "1" - os.environ["OPENPYPE_HEADLESS_MODE"] = "1" - sys.argv.remove("--headless") - -elif ( - os.getenv("AYON_HEADLESS_MODE") != "1" - or os.getenv("OPENPYPE_HEADLESS_MODE") != "1" -): - os.environ.pop("AYON_HEADLESS_MODE", None) - os.environ.pop("OPENPYPE_HEADLESS_MODE", None) - -elif ( - os.getenv("AYON_HEADLESS_MODE") - != os.getenv("OPENPYPE_HEADLESS_MODE") -): - os.environ["OPENPYPE_HEADLESS_MODE"] = ( - os.environ["AYON_HEADLESS_MODE"] - ) - -IS_BUILT_APPLICATION = getattr(sys, "frozen", False) -HEADLESS_MODE_ENABLED = os.getenv("AYON_HEADLESS_MODE") == "1" - -_pythonpath = os.getenv("PYTHONPATH", "") -_python_paths = _pythonpath.split(os.pathsep) -if not IS_BUILT_APPLICATION: - # Code root defined by `start.py` directory - AYON_ROOT = os.path.dirname(os.path.abspath(__file__)) - _dependencies_path = site.getsitepackages()[-1] -else: - AYON_ROOT = os.path.dirname(sys.executable) - - # add dependencies folder to sys.pat for frozen code - _dependencies_path = os.path.normpath( - os.path.join(AYON_ROOT, "dependencies") - ) -# add stuff from `/dependencies` to PYTHONPATH. 
-sys.path.append(_dependencies_path) -_python_paths.append(_dependencies_path) - -# Vendored python modules that must not be in PYTHONPATH environment but -# are required for OpenPype processes -sys.path.insert(0, os.path.join(AYON_ROOT, "vendor", "python")) - -# Add common package to sys path -# - common contains common code for bootstraping and OpenPype processes -sys.path.insert(0, os.path.join(AYON_ROOT, "common")) - -# This is content of 'core' addon which is ATM part of build -common_python_vendor = os.path.join( - AYON_ROOT, - "openpype", - "vendor", - "python", - "common" -) -# Add tools dir to sys path for pyblish UI discovery -tools_dir = os.path.join(AYON_ROOT, "openpype", "tools") -for path in (AYON_ROOT, common_python_vendor, tools_dir): - while path in _python_paths: - _python_paths.remove(path) - - while path in sys.path: - sys.path.remove(path) - - _python_paths.insert(0, path) - sys.path.insert(0, path) - -os.environ["PYTHONPATH"] = os.pathsep.join(_python_paths) - -# enabled AYON state -os.environ["USE_AYON_SERVER"] = "1" -# Set this to point either to `python` from venv in case of live code -# or to `ayon` or `ayon_console` in case of frozen code -os.environ["AYON_EXECUTABLE"] = sys.executable -os.environ["OPENPYPE_EXECUTABLE"] = sys.executable -os.environ["AYON_ROOT"] = AYON_ROOT -os.environ["OPENPYPE_ROOT"] = AYON_ROOT -os.environ["OPENPYPE_REPOS_ROOT"] = AYON_ROOT -os.environ["AYON_MENU_LABEL"] = "AYON" -os.environ["AVALON_LABEL"] = "AYON" -# Set name of pyblish UI import -os.environ["PYBLISH_GUI"] = "pyblish_pype" -# Set builtin OCIO root -os.environ["BUILTIN_OCIO_ROOT"] = os.path.join( - AYON_ROOT, - "vendor", - "bin", - "ocioconfig", - "OpenColorIOConfigs" -) - -import blessed # noqa: E402 -import certifi # noqa: E402 - - -if sys.__stdout__: - term = blessed.Terminal() - - def _print(message: str): - if message.startswith("!!! "): - print(f'{term.orangered2("!!! ")}{message[4:]}') - elif message.startswith(">>> "): - print(f'{term.aquamarine3(">>> ")}{message[4:]}') - elif message.startswith("--- "): - print(f'{term.darkolivegreen3("--- ")}{message[4:]}') - elif message.startswith("*** "): - print(f'{term.gold("*** ")}{message[4:]}') - elif message.startswith(" - "): - print(f'{term.wheat(" - ")}{message[4:]}') - elif message.startswith(" . "): - print(f'{term.tan(" . ")}{message[4:]}') - elif message.startswith(" - "): - print(f'{term.seagreen3(" - ")}{message[7:]}') - elif message.startswith(" ! "): - print(f'{term.goldenrod(" ! ")}{message[7:]}') - elif message.startswith(" * "): - print(f'{term.aquamarine1(" * ")}{message[7:]}') - elif message.startswith(" "): - print(f'{term.darkseagreen3(" ")}{message[4:]}') - else: - print(message) -else: - def _print(message: str): - print(message) - - -# if SSL_CERT_FILE is not set prior to OpenPype launch, we set it to point -# to certifi bundle to make sure we have reasonably new CA certificates. 
-if not os.getenv("SSL_CERT_FILE"): - os.environ["SSL_CERT_FILE"] = certifi.where() -elif os.getenv("SSL_CERT_FILE") != certifi.where(): - _print("--- your system is set to use custom CA certificate bundle.") - -from ayon_api import get_base_url -from ayon_api.constants import SERVER_URL_ENV_KEY, SERVER_API_ENV_KEY -from ayon_common import is_staging_enabled -from ayon_common.connection.credentials import ( - ask_to_login_ui, - add_server, - need_server_or_login, - load_environments, - set_environments, - create_global_connection, - confirm_server_login, -) -from ayon_common.distribution import ( - AyonDistribution, - BundleNotFoundError, - show_missing_bundle_information, -) - - -def set_global_environments() -> None: - """Set global OpenPype's environments.""" - import acre - - from openpype.settings import get_general_environments - - general_env = get_general_environments() - - # first resolve general environment because merge doesn't expect - # values to be list. - # TODO: switch to OpenPype environment functions - merged_env = acre.merge( - acre.compute(acre.parse(general_env), cleanup=False), - dict(os.environ) - ) - env = acre.compute( - merged_env, - cleanup=False - ) - os.environ.clear() - os.environ.update(env) - - # Hardcoded default values - os.environ["PYBLISH_GUI"] = "pyblish_pype" - # Change scale factor only if is not set - if "QT_AUTO_SCREEN_SCALE_FACTOR" not in os.environ: - os.environ["QT_AUTO_SCREEN_SCALE_FACTOR"] = "1" - - -def set_addons_environments(): - """Set global environments for OpenPype modules. - - This requires to have OpenPype in `sys.path`. - """ - - import acre - from openpype.modules import ModulesManager - - modules_manager = ModulesManager() - - # Merge environments with current environments and update values - if module_envs := modules_manager.collect_global_environments(): - parsed_envs = acre.parse(module_envs) - env = acre.merge(parsed_envs, dict(os.environ)) - os.environ.clear() - os.environ.update(env) - - -def _connect_to_ayon_server(): - load_environments() - if not need_server_or_login(): - create_global_connection() - return - - if HEADLESS_MODE_ENABLED: - _print("!!! Cannot open v4 Login dialog in headless mode.") - _print(( - "!!! Please use `{}` to specify server address" - " and '{}' to specify user's token." - ).format(SERVER_URL_ENV_KEY, SERVER_API_ENV_KEY)) - sys.exit(1) - - current_url = os.environ.get(SERVER_URL_ENV_KEY) - url, token, username = ask_to_login_ui(current_url, always_on_top=True) - if url is not None and token is not None: - confirm_server_login(url, token, username) - return - - if url is not None: - add_server(url, username) - - _print("!!! Login was not successful.") - sys.exit(0) - - -def _check_and_update_from_ayon_server(): - """Gets addon info from v4, compares with local folder and updates it. - - Raises: - RuntimeError - """ - - distribution = AyonDistribution() - bundle = None - bundle_name = None - try: - bundle = distribution.bundle_to_use - if bundle is not None: - bundle_name = bundle.name - except BundleNotFoundError as exc: - bundle_name = exc.bundle_name - - if bundle is None: - url = get_base_url() - if not HEADLESS_MODE_ENABLED: - show_missing_bundle_information(url, bundle_name) - - elif bundle_name: - _print(( - f"!!! Requested release bundle '{bundle_name}'" - " is not available on server." - )) - _print( - "!!! Check if selected release bundle" - f" is available on the server '{url}'." - ) - - else: - mode = "staging" if is_staging_enabled() else "production" - _print( - f"!!! 
No release bundle is set as {mode} on the AYON server." - ) - _print( - "!!! Make sure there is a release bundle set" - f" as \"{mode}\" on the AYON server '{url}'." - ) - sys.exit(1) - - distribution.distribute() - distribution.validate_distribution() - os.environ["AYON_BUNDLE_NAME"] = bundle_name - - python_paths = [ - path - for path in os.getenv("PYTHONPATH", "").split(os.pathsep) - if path - ] - - for path in distribution.get_sys_paths(): - sys.path.insert(0, path) - if path not in python_paths: - python_paths.append(path) - os.environ["PYTHONPATH"] = os.pathsep.join(python_paths) - - -def boot(): - """Bootstrap OpenPype.""" - - from openpype.version import __version__ - - # TODO load version - os.environ["OPENPYPE_VERSION"] = __version__ - os.environ["AYON_VERSION"] = __version__ - - _connect_to_ayon_server() - _check_and_update_from_ayon_server() - - # delete OpenPype module and it's submodules from cache so it is used from - # specific version - modules_to_del = [ - sys.modules.pop(module_name) - for module_name in tuple(sys.modules) - if module_name == "openpype" or module_name.startswith("openpype.") - ] - - for module_name in modules_to_del: - with contextlib.suppress(AttributeError, KeyError): - del sys.modules[module_name] - - -def main_cli(): - from openpype import cli - from openpype.version import __version__ - from openpype.lib import terminal as t - - _print(">>> loading environments ...") - _print(" - global AYON ...") - set_global_environments() - _print(" - for addons ...") - set_addons_environments() - - # print info when not running scripts defined in 'silent commands' - if not SKIP_HEADERS: - info = get_info(is_staging_enabled()) - info.insert(0, f">>> Using AYON from [ {AYON_ROOT} ]") - - t_width = 20 - with contextlib.suppress(ValueError, OSError): - t_width = os.get_terminal_size().columns - 2 - - _header = f"*** AYON [{__version__}] " - info.insert(0, _header + "-" * (t_width - len(_header))) - - for i in info: - t.echo(i) - - try: - cli.main(obj={}, prog_name="ayon") - except Exception: # noqa - exc_info = sys.exc_info() - _print("!!! AYON crashed:") - traceback.print_exception(*exc_info) - sys.exit(1) - - -def script_cli(): - """Run and execute script.""" - - filepath = os.path.abspath(sys.argv[1]) - - # Find '__main__.py' in directory - if os.path.isdir(filepath): - new_filepath = os.path.join(filepath, "__main__.py") - if not os.path.exists(new_filepath): - raise RuntimeError( - f"can't find '__main__' module in '{filepath}'") - filepath = new_filepath - - # Add parent dir to sys path - sys.path.insert(0, os.path.dirname(filepath)) - - # Read content and execute - with open(filepath, "r") as stream: - content = stream.read() - - exec(compile(content, filepath, "exec"), globals()) - - -def get_info(use_staging=None) -> list: - """Print additional information to console.""" - - inf = [] - if use_staging: - inf.append(("AYON variant", "staging")) - else: - inf.append(("AYON variant", "production")) - inf.append(("AYON bundle", os.getenv("AYON_BUNDLE"))) - - # NOTE add addons information - - maximum = max(len(i[0]) for i in inf) - formatted = [] - for info in inf: - padding = (maximum - len(info[0])) + 1 - formatted.append(f'... 
{info[0]}:{" " * padding}[ {info[1]} ]') - return formatted - - -def main(): - if not SKIP_BOOTSTRAP: - boot() - - args = list(sys.argv) - args.pop(0) - if args and os.path.exists(args[0]): - script_cli() - else: - main_cli() - - -if __name__ == "__main__": - main() diff --git a/common/ayon_common/__init__.py b/common/ayon_common/__init__.py deleted file mode 100644 index ddabb7da2f4..00000000000 --- a/common/ayon_common/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -from .utils import ( - IS_BUILT_APPLICATION, - is_staging_enabled, - get_local_site_id, - get_ayon_appdirs, - get_ayon_launch_args, -) - - -__all__ = ( - "IS_BUILT_APPLICATION", - "is_staging_enabled", - "get_local_site_id", - "get_ayon_appdirs", - "get_ayon_launch_args", -) diff --git a/common/ayon_common/connection/credentials.py b/common/ayon_common/connection/credentials.py deleted file mode 100644 index 7f70cb7992a..00000000000 --- a/common/ayon_common/connection/credentials.py +++ /dev/null @@ -1,511 +0,0 @@ -"""Handle credentials and connection to server for client application. - -Cache and store used server urls. Store/load API keys to/from keyring if -needed. Store metadata about used urls, usernames for the urls and when was -the connection with the username established. - -On bootstrap is created global connection with information about site and -client version. The connection object lives in 'ayon_api'. -""" - -import os -import json -import platform -import datetime -import contextlib -import subprocess -import tempfile -from typing import Optional, Union, Any - -import ayon_api - -from ayon_api.constants import SERVER_URL_ENV_KEY, SERVER_API_ENV_KEY -from ayon_api.exceptions import UrlError -from ayon_api.utils import ( - validate_url, - is_token_valid, - logout_from_server, -) - -from ayon_common.utils import ( - get_ayon_appdirs, - get_local_site_id, - get_ayon_launch_args, - is_staging_enabled, -) - - -class ChangeUserResult: - def __init__( - self, logged_out, old_url, old_token, old_username, - new_url, new_token, new_username - ): - shutdown = logged_out - restart = new_url is not None and new_url != old_url - token_changed = new_token is not None and new_token != old_token - - self.logged_out = logged_out - self.old_url = old_url - self.old_token = old_token - self.old_username = old_username - self.new_url = new_url - self.new_token = new_token - self.new_username = new_username - - self.shutdown = shutdown - self.restart = restart - self.token_changed = token_changed - - -def _get_servers_path(): - return get_ayon_appdirs("used_servers.json") - - -def get_servers_info_data(): - """Metadata about used server on this machine. - - Store data about all used server urls, last used url and user username for - the url. Using this metadata we can remember which username was used per - url if token stored in keyring loose lifetime. - - Returns: - dict[str, Any]: Information about servers. - """ - - data = {} - servers_info_path = _get_servers_path() - if not os.path.exists(servers_info_path): - dirpath = os.path.dirname(servers_info_path) - if not os.path.exists(dirpath): - os.makedirs(dirpath) - - return data - - with open(servers_info_path, "r") as stream: - with contextlib.suppress(BaseException): - data = json.load(stream) - return data - - -def add_server(url: str, username: str): - """Add server to server info metadata. - - This function will also mark the url as last used url on the machine so on - next launch will be used. - - Args: - url (str): Server url. - username (str): Name of user used to log in. 
- """ - - servers_info_path = _get_servers_path() - data = get_servers_info_data() - data["last_server"] = url - if "urls" not in data: - data["urls"] = {} - data["urls"][url] = { - "updated_dt": datetime.datetime.now().strftime("%Y/%m/%d %H:%M:%S"), - "username": username, - } - - with open(servers_info_path, "w") as stream: - json.dump(data, stream) - - -def remove_server(url: str): - """Remove server url from servers information. - - This should be used on logout to completelly loose information about server - on the machine. - - Args: - url (str): Server url. - """ - - if not url: - return - - servers_info_path = _get_servers_path() - data = get_servers_info_data() - if data.get("last_server") == url: - data["last_server"] = None - - if "urls" in data: - data["urls"].pop(url, None) - - with open(servers_info_path, "w") as stream: - json.dump(data, stream) - - -def get_last_server( - data: Optional[dict[str, Any]] = None -) -> Union[str, None]: - """Last server used to log in on this machine. - - Args: - data (Optional[dict[str, Any]]): Prepared server information data. - - Returns: - Union[str, None]: Last used server url. - """ - - if data is None: - data = get_servers_info_data() - return data.get("last_server") - - -def get_last_username_by_url( - url: str, - data: Optional[dict[str, Any]] = None -) -> Union[str, None]: - """Get last username which was used for passed url. - - Args: - url (str): Server url. - data (Optional[dict[str, Any]]): Servers info. - - Returns: - Union[str, None]: Username. - """ - - if not url: - return None - - if data is None: - data = get_servers_info_data() - - if urls := data.get("urls"): - if url_info := urls.get(url): - return url_info.get("username") - return None - - -def get_last_server_with_username(): - """Receive last server and username used in last connection. - - Returns: - tuple[Union[str, None], Union[str, None]]: Url and username. - """ - - data = get_servers_info_data() - url = get_last_server(data) - username = get_last_username_by_url(url) - return url, username - - -class TokenKeyring: - # Fake username with hardcoded username - username_key = "username" - - def __init__(self, url): - try: - import keyring - - except Exception as exc: - raise NotImplementedError( - "Python module `keyring` is not available." - ) from exc - - # hack for cx_freeze and Windows keyring backend - if platform.system().lower() == "windows": - from keyring.backends import Windows - - keyring.set_keyring(Windows.WinVaultKeyring()) - - self._url = url - self._keyring_key = f"AYON/{url}" - - def get_value(self): - import keyring - - return keyring.get_password(self._keyring_key, self.username_key) - - def set_value(self, value): - import keyring - - if value is not None: - keyring.set_password(self._keyring_key, self.username_key, value) - return - - with contextlib.suppress(keyring.errors.PasswordDeleteError): - keyring.delete_password(self._keyring_key, self.username_key) - - -def load_token(url: str) -> Union[str, None]: - """Get token for url from keyring. - - Args: - url (str): Server url. - - Returns: - Union[str, None]: Token for passed url available in keyring. - """ - - return TokenKeyring(url).get_value() - - -def store_token(url: str, token: str): - """Store token by url to keyring. - - Args: - url (str): Server url. - token (str): User token to server. - """ - - TokenKeyring(url).set_value(token) - - -def ask_to_login_ui( - url: Optional[str] = None, - always_on_top: Optional[bool] = False -) -> tuple[str, str, str]: - """Ask user to login using UI. 
- - This should be used only when user is not yet logged in at all or available - credentials are invalid. To change credentials use 'change_user_ui' - function. - - Use a subprocess to show UI. - - Args: - url (Optional[str]): Server url that could be prefilled in UI. - always_on_top (Optional[bool]): Window will be drawn on top of - other windows. - - Returns: - tuple[str, str, str]: Url, user's token and username. - """ - - current_dir = os.path.dirname(os.path.abspath(__file__)) - ui_dir = os.path.join(current_dir, "ui") - - if url is None: - url = get_last_server() - username = get_last_username_by_url(url) - data = { - "url": url, - "username": username, - "always_on_top": always_on_top, - } - - with tempfile.NamedTemporaryFile( - mode="w", prefix="ayon_login", suffix=".json", delete=False - ) as tmp: - output = tmp.name - json.dump(data, tmp) - - code = subprocess.call( - get_ayon_launch_args(ui_dir, "--skip-bootstrap", output)) - if code != 0: - raise RuntimeError("Failed to show login UI") - - with open(output, "r") as stream: - data = json.load(stream) - os.remove(output) - return data["output"] - - -def change_user_ui() -> ChangeUserResult: - """Change user using UI. - - Show UI to user where he can change credentials or url. Output will contain - all information about old/new values of url, username, api key. If user - confirmed or declined values. - - Returns: - ChangeUserResult: Information about user change. - """ - - from .ui import change_user - - url, username = get_last_server_with_username() - token = load_token(url) - result = change_user(url, username, token) - new_url, new_token, new_username, logged_out = result - - output = ChangeUserResult( - logged_out, url, token, username, - new_url, new_token, new_username - ) - if output.logged_out: - logout(url, token) - - elif output.token_changed: - change_token( - output.new_url, - output.new_token, - output.new_username, - output.old_url - ) - return output - - -def change_token( - url: str, - token: str, - username: Optional[str] = None, - old_url: Optional[str] = None -): - """Change url and token in currently running session. - - Function can also change server url, in that case are previous credentials - NOT removed from cache. - - Args: - url (str): Url to server. - token (str): New token to be used for url connection. - username (Optional[str]): Username of logged user. - old_url (Optional[str]): Previous url. Value from 'get_last_server' - is used if not entered. - """ - - if old_url is None: - old_url = get_last_server() - if old_url and old_url == url: - remove_url_cache(old_url) - - # TODO check if ayon_api is already connected - add_server(url, username) - store_token(url, token) - ayon_api.change_token(url, token) - - -def remove_url_cache(url: str): - """Clear cache for server url. - - Args: - url (str): Server url which is removed from cache. - """ - - store_token(url, None) - - -def remove_token_cache(url: str, token: str): - """Remove token from local cache of url. - - Is skipped if cached token under the passed url is not the same - as passed token. - - Args: - url (str): Url to server. - token (str): Token to be removed from url cache. - """ - - if load_token(url) == token: - remove_url_cache(url) - - -def logout(url: str, token: str): - """Logout from server and throw token away. - - Args: - url (str): Url from which should be logged out. - token (str): Token which should be used to log out. 
- """ - - remove_server(url) - ayon_api.close_connection() - ayon_api.set_environments(None, None) - remove_token_cache(url, token) - logout_from_server(url, token) - - -def load_environments(): - """Load environments on startup. - - Handle environments needed for connection with server. Environments are - 'AYON_SERVER_URL' and 'AYON_API_KEY'. - - Server is looked up from environment. Already set environent is not - changed. If environemnt is not filled then last server stored in appdirs - is used. - - Token is skipped if url is not available. Otherwise, is also checked from - env and if is not available then uses 'load_token' to try to get token - based on server url. - """ - - server_url = os.environ.get(SERVER_URL_ENV_KEY) - if not server_url: - server_url = get_last_server() - if not server_url: - return - os.environ[SERVER_URL_ENV_KEY] = server_url - - if not os.environ.get(SERVER_API_ENV_KEY): - if token := load_token(server_url): - os.environ[SERVER_API_ENV_KEY] = token - - -def set_environments(url: str, token: str): - """Change url and token environemnts in currently running process. - - Args: - url (str): New server url. - token (str): User's token. - """ - - ayon_api.set_environments(url, token) - - -def create_global_connection(): - """Create global connection with site id and client version. - - Make sure the global connection in 'ayon_api' have entered site id and - client version. - - Set default settings variant to use based on 'is_staging_enabled'. - """ - - ayon_api.create_connection( - get_local_site_id(), os.environ.get("AYON_VERSION") - ) - ayon_api.set_default_settings_variant( - "staging" if is_staging_enabled() else "production" - ) - - -def need_server_or_login() -> bool: - """Check if server url or login to the server are needed. - - It is recommended to call 'load_environments' on startup before this check. - But in some cases this function could be called after startup. - - Returns: - bool: 'True' if server and token are available. Otherwise 'False'. - """ - - server_url = os.environ.get(SERVER_URL_ENV_KEY) - if not server_url: - return True - - try: - server_url = validate_url(server_url) - except UrlError: - return True - - token = os.environ.get(SERVER_API_ENV_KEY) - if token: - return not is_token_valid(server_url, token) - - token = load_token(server_url) - if token: - return not is_token_valid(server_url, token) - return True - - -def confirm_server_login(url, token, username): - """Confirm login of user and do necessary stepts to apply changes. - - This should not be used on "change" of user but on first login. - - Args: - url (str): Server url where user authenticated. - token (str): API token used for authentication to server. - username (Union[str, None]): Username related to API token. 
- """ - - add_server(url, username) - store_token(url, token) - set_environments(url, token) - create_global_connection() diff --git a/common/ayon_common/connection/ui/__init__.py b/common/ayon_common/connection/ui/__init__.py deleted file mode 100644 index 96e573df0d0..00000000000 --- a/common/ayon_common/connection/ui/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -from .login_window import ( - ServerLoginWindow, - ask_to_login, - change_user, -) - - -__all__ = ( - "ServerLoginWindow", - "ask_to_login", - "change_user", -) diff --git a/common/ayon_common/connection/ui/__main__.py b/common/ayon_common/connection/ui/__main__.py deleted file mode 100644 index 719b2b8ef58..00000000000 --- a/common/ayon_common/connection/ui/__main__.py +++ /dev/null @@ -1,23 +0,0 @@ -import sys -import json - -from ayon_common.connection.ui.login_window import ask_to_login - - -def main(output_path): - with open(output_path, "r") as stream: - data = json.load(stream) - - url = data.get("url") - username = data.get("username") - always_on_top = data.get("always_on_top", False) - out_url, out_token, out_username = ask_to_login( - url, username, always_on_top=always_on_top) - - data["output"] = [out_url, out_token, out_username] - with open(output_path, "w") as stream: - json.dump(data, stream) - - -if __name__ == "__main__": - main(sys.argv[-1]) diff --git a/common/ayon_common/connection/ui/login_window.py b/common/ayon_common/connection/ui/login_window.py deleted file mode 100644 index 94c239852ea..00000000000 --- a/common/ayon_common/connection/ui/login_window.py +++ /dev/null @@ -1,710 +0,0 @@ -import traceback - -from qtpy import QtWidgets, QtCore, QtGui - -from ayon_api.exceptions import UrlError -from ayon_api.utils import validate_url, login_to_server - -from ayon_common.resources import ( - get_resource_path, - get_icon_path, - load_stylesheet, -) -from ayon_common.ui_utils import set_style_property, get_qt_app - -from .widgets import ( - PressHoverButton, - PlaceholderLineEdit, -) - - -class LogoutConfirmDialog(QtWidgets.QDialog): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - self.setWindowTitle("Logout confirmation") - - message_widget = QtWidgets.QWidget(self) - - message_label = QtWidgets.QLabel( - ( - "You are going to logout. This action will close this" - " application and will invalidate your login." - " All other applications launched with this login won't be" - " able to use it anymore.
<br/><br/>" "You can cancel logout and only change server and user login" " in login dialog.<br/><br/>
" - "Press OK to confirm logout." - ), - message_widget - ) - message_label.setWordWrap(True) - - message_layout = QtWidgets.QHBoxLayout(message_widget) - message_layout.setContentsMargins(0, 0, 0, 0) - message_layout.addWidget(message_label, 1) - - sep_frame = QtWidgets.QFrame(self) - sep_frame.setObjectName("Separator") - sep_frame.setMinimumHeight(2) - sep_frame.setMaximumHeight(2) - - footer_widget = QtWidgets.QWidget(self) - - cancel_btn = QtWidgets.QPushButton("Cancel", footer_widget) - confirm_btn = QtWidgets.QPushButton("OK", footer_widget) - - footer_layout = QtWidgets.QHBoxLayout(footer_widget) - footer_layout.setContentsMargins(0, 0, 0, 0) - footer_layout.addStretch(1) - footer_layout.addWidget(cancel_btn, 0) - footer_layout.addWidget(confirm_btn, 0) - - main_layout = QtWidgets.QVBoxLayout(self) - main_layout.addWidget(message_widget, 0) - main_layout.addStretch(1) - main_layout.addWidget(sep_frame, 0) - main_layout.addWidget(footer_widget, 0) - - cancel_btn.clicked.connect(self._on_cancel_click) - confirm_btn.clicked.connect(self._on_confirm_click) - - self._cancel_btn = cancel_btn - self._confirm_btn = confirm_btn - self._result = False - - def showEvent(self, event): - super().showEvent(event) - self._match_btns_sizes() - - def resizeEvent(self, event): - super().resizeEvent(event) - self._match_btns_sizes() - - def _match_btns_sizes(self): - width = max( - self._cancel_btn.sizeHint().width(), - self._confirm_btn.sizeHint().width() - ) - self._cancel_btn.setMinimumWidth(width) - self._confirm_btn.setMinimumWidth(width) - - def _on_cancel_click(self): - self._result = False - self.reject() - - def _on_confirm_click(self): - self._result = True - self.accept() - - def get_result(self): - return self._result - - -class ServerLoginWindow(QtWidgets.QDialog): - default_width = 410 - default_height = 170 - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - icon_path = get_icon_path() - icon = QtGui.QIcon(icon_path) - self.setWindowIcon(icon) - self.setWindowTitle("Login to server") - - edit_icon_path = get_resource_path("edit.png") - edit_icon = QtGui.QIcon(edit_icon_path) - - # --- URL page --- - login_widget = QtWidgets.QWidget(self) - - user_cred_widget = QtWidgets.QWidget(login_widget) - - url_label = QtWidgets.QLabel("URL:", user_cred_widget) - - url_widget = QtWidgets.QWidget(user_cred_widget) - - url_input = PlaceholderLineEdit(url_widget) - url_input.setPlaceholderText("< https://ayon.server.com >") - - url_preview = QtWidgets.QLineEdit(url_widget) - url_preview.setReadOnly(True) - url_preview.setObjectName("LikeDisabledInput") - - url_edit_btn = PressHoverButton(user_cred_widget) - url_edit_btn.setIcon(edit_icon) - url_edit_btn.setObjectName("PasswordBtn") - - url_layout = QtWidgets.QHBoxLayout(url_widget) - url_layout.setContentsMargins(0, 0, 0, 0) - url_layout.addWidget(url_input, 1) - url_layout.addWidget(url_preview, 1) - - # --- URL separator --- - url_cred_sep = QtWidgets.QFrame(self) - url_cred_sep.setObjectName("Separator") - url_cred_sep.setMinimumHeight(2) - url_cred_sep.setMaximumHeight(2) - - # --- Login page --- - username_label = QtWidgets.QLabel("Username:", user_cred_widget) - - username_widget = QtWidgets.QWidget(user_cred_widget) - - username_input = PlaceholderLineEdit(username_widget) - username_input.setPlaceholderText("< Artist >") - - username_preview = QtWidgets.QLineEdit(username_widget) - username_preview.setReadOnly(True) - username_preview.setObjectName("LikeDisabledInput") - - username_edit_btn = 
PressHoverButton(user_cred_widget) - username_edit_btn.setIcon(edit_icon) - username_edit_btn.setObjectName("PasswordBtn") - - username_layout = QtWidgets.QHBoxLayout(username_widget) - username_layout.setContentsMargins(0, 0, 0, 0) - username_layout.addWidget(username_input, 1) - username_layout.addWidget(username_preview, 1) - - password_label = QtWidgets.QLabel("Password:", user_cred_widget) - password_input = PlaceholderLineEdit(user_cred_widget) - password_input.setPlaceholderText("< *********** >") - password_input.setEchoMode(PlaceholderLineEdit.Password) - - api_label = QtWidgets.QLabel("API key:", user_cred_widget) - api_preview = QtWidgets.QLineEdit(user_cred_widget) - api_preview.setReadOnly(True) - api_preview.setObjectName("LikeDisabledInput") - - show_password_icon_path = get_resource_path("eye.png") - show_password_icon = QtGui.QIcon(show_password_icon_path) - show_password_btn = PressHoverButton(user_cred_widget) - show_password_btn.setObjectName("PasswordBtn") - show_password_btn.setIcon(show_password_icon) - show_password_btn.setFocusPolicy(QtCore.Qt.ClickFocus) - - cred_msg_sep = QtWidgets.QFrame(self) - cred_msg_sep.setObjectName("Separator") - cred_msg_sep.setMinimumHeight(2) - cred_msg_sep.setMaximumHeight(2) - - # --- Credentials inputs --- - user_cred_layout = QtWidgets.QGridLayout(user_cred_widget) - user_cred_layout.setContentsMargins(0, 0, 0, 0) - row = 0 - - user_cred_layout.addWidget(url_label, row, 0, 1, 1) - user_cred_layout.addWidget(url_widget, row, 1, 1, 1) - user_cred_layout.addWidget(url_edit_btn, row, 2, 1, 1) - row += 1 - - user_cred_layout.addWidget(url_cred_sep, row, 0, 1, 3) - row += 1 - - user_cred_layout.addWidget(username_label, row, 0, 1, 1) - user_cred_layout.addWidget(username_widget, row, 1, 1, 1) - user_cred_layout.addWidget(username_edit_btn, row, 2, 2, 1) - row += 1 - - user_cred_layout.addWidget(api_label, row, 0, 1, 1) - user_cred_layout.addWidget(api_preview, row, 1, 1, 1) - row += 1 - - user_cred_layout.addWidget(password_label, row, 0, 1, 1) - user_cred_layout.addWidget(password_input, row, 1, 1, 1) - user_cred_layout.addWidget(show_password_btn, row, 2, 1, 1) - row += 1 - - user_cred_layout.addWidget(cred_msg_sep, row, 0, 1, 3) - row += 1 - - user_cred_layout.setColumnStretch(0, 0) - user_cred_layout.setColumnStretch(1, 1) - user_cred_layout.setColumnStretch(2, 0) - - login_layout = QtWidgets.QVBoxLayout(login_widget) - login_layout.setContentsMargins(0, 0, 0, 0) - login_layout.addWidget(user_cred_widget, 1) - - # --- Messages --- - # Messages for users (e.g. invalid url etc.) 
- message_label = QtWidgets.QLabel(self) - message_label.setWordWrap(True) - message_label.setTextInteractionFlags(QtCore.Qt.TextBrowserInteraction) - - footer_widget = QtWidgets.QWidget(self) - logout_btn = QtWidgets.QPushButton("Logout", footer_widget) - user_message = QtWidgets.QLabel(footer_widget) - login_btn = QtWidgets.QPushButton("Login", footer_widget) - confirm_btn = QtWidgets.QPushButton("Confirm", footer_widget) - - footer_layout = QtWidgets.QHBoxLayout(footer_widget) - footer_layout.setContentsMargins(0, 0, 0, 0) - footer_layout.addWidget(logout_btn, 0) - footer_layout.addWidget(user_message, 1) - footer_layout.addWidget(login_btn, 0) - footer_layout.addWidget(confirm_btn, 0) - - main_layout = QtWidgets.QVBoxLayout(self) - main_layout.addWidget(login_widget, 0) - main_layout.addWidget(message_label, 0) - main_layout.addStretch(1) - main_layout.addWidget(footer_widget, 0) - - url_input.textChanged.connect(self._on_url_change) - url_input.returnPressed.connect(self._on_url_enter_press) - username_input.textChanged.connect(self._on_user_change) - username_input.returnPressed.connect(self._on_username_enter_press) - password_input.returnPressed.connect(self._on_password_enter_press) - show_password_btn.change_state.connect(self._on_show_password) - url_edit_btn.clicked.connect(self._on_url_edit_click) - username_edit_btn.clicked.connect(self._on_username_edit_click) - logout_btn.clicked.connect(self._on_logout_click) - login_btn.clicked.connect(self._on_login_click) - confirm_btn.clicked.connect(self._on_login_click) - - self._message_label = message_label - - self._url_widget = url_widget - self._url_input = url_input - self._url_preview = url_preview - self._url_edit_btn = url_edit_btn - - self._login_widget = login_widget - - self._user_cred_widget = user_cred_widget - self._username_input = username_input - self._username_preview = username_preview - self._username_edit_btn = username_edit_btn - - self._password_label = password_label - self._password_input = password_input - self._show_password_btn = show_password_btn - self._api_label = api_label - self._api_preview = api_preview - - self._logout_btn = logout_btn - self._user_message = user_message - self._login_btn = login_btn - self._confirm_btn = confirm_btn - - self._url_is_valid = None - self._credentials_are_valid = None - self._result = (None, None, None, False) - self._first_show = True - - self._allow_logout = False - self._logged_in = False - self._url_edit_mode = False - self._username_edit_mode = False - - def set_allow_logout(self, allow_logout): - if allow_logout is self._allow_logout: - return - self._allow_logout = allow_logout - - self._update_states_by_edit_mode() - - def _set_logged_in(self, logged_in): - if logged_in is self._logged_in: - return - self._logged_in = logged_in - - self._update_states_by_edit_mode() - - def _set_url_edit_mode(self, edit_mode): - if self._url_edit_mode is not edit_mode: - self._url_edit_mode = edit_mode - self._update_states_by_edit_mode() - - def _set_username_edit_mode(self, edit_mode): - if self._username_edit_mode is not edit_mode: - self._username_edit_mode = edit_mode - self._update_states_by_edit_mode() - - def _get_url_user_edit(self): - url_edit = True - if self._logged_in and not self._url_edit_mode: - url_edit = False - user_edit = url_edit - if not user_edit and self._logged_in and self._username_edit_mode: - user_edit = True - return url_edit, user_edit - - def _update_states_by_edit_mode(self): - url_edit, user_edit = self._get_url_user_edit() - - 
self._url_preview.setVisible(not url_edit) - self._url_input.setVisible(url_edit) - self._url_edit_btn.setVisible(self._allow_logout and not url_edit) - - self._username_preview.setVisible(not user_edit) - self._username_input.setVisible(user_edit) - self._username_edit_btn.setVisible( - self._allow_logout and not user_edit - ) - - self._api_preview.setVisible(not user_edit) - self._api_label.setVisible(not user_edit) - - self._password_label.setVisible(user_edit) - self._show_password_btn.setVisible(user_edit) - self._password_input.setVisible(user_edit) - - self._logout_btn.setVisible(self._allow_logout and self._logged_in) - self._login_btn.setVisible(not self._allow_logout) - self._confirm_btn.setVisible(self._allow_logout) - self._update_login_btn_state(url_edit, user_edit) - - def _update_login_btn_state(self, url_edit=None, user_edit=None, url=None): - if url_edit is None: - url_edit, user_edit = self._get_url_user_edit() - - if url is None: - url = self._url_input.text() - - enabled = bool(url) and (url_edit or user_edit) - - self._login_btn.setEnabled(enabled) - self._confirm_btn.setEnabled(enabled) - - def showEvent(self, event): - super().showEvent(event) - if self._first_show: - self._first_show = False - self._on_first_show() - - def _on_first_show(self): - self.setStyleSheet(load_stylesheet()) - self.resize(self.default_width, self.default_height) - self._center_window() - if self._allow_logout is None: - self.set_allow_logout(False) - - self._update_states_by_edit_mode() - if not self._url_input.text(): - widget = self._url_input - elif not self._username_input.text(): - widget = self._username_input - else: - widget = self._password_input - - self._set_input_focus(widget) - - def result(self): - """Result of the login dialog. - - Returns: - tuple[Union[str, None], Union[str, None], Union[str, None], bool]: - Url, token and username used for login (all 'None' when the - dialog was not confirmed) and a flag telling whether the user - chose to log out. 
- """ - return self._result - - def _center_window(self): - """Move window to center of screen.""" - - if hasattr(QtWidgets.QApplication, "desktop"): - desktop = QtWidgets.QApplication.desktop() - screen_idx = desktop.screenNumber(self) - screen_geo = desktop.screenGeometry(screen_idx) - else: - screen = self.screen() - screen_geo = screen.geometry() - - geo = self.frameGeometry() - geo.moveCenter(screen_geo.center()) - if geo.y() < screen_geo.y(): - geo.setY(screen_geo.y()) - self.move(geo.topLeft()) - - def _on_url_change(self, text): - self._update_login_btn_state(url=text) - self._set_url_valid(None) - self._set_credentials_valid(None) - self._url_preview.setText(text) - - def _set_url_valid(self, valid): - if valid is self._url_is_valid: - return - - self._url_is_valid = valid - self._set_input_valid_state(self._url_input, valid) - - def _set_credentials_valid(self, valid): - if self._credentials_are_valid is valid: - return - - self._credentials_are_valid = valid - self._set_input_valid_state(self._username_input, valid) - self._set_input_valid_state(self._password_input, valid) - - def _on_url_enter_press(self): - self._set_input_focus(self._username_input) - - def _on_user_change(self, username): - self._username_preview.setText(username) - - def _on_username_enter_press(self): - self._set_input_focus(self._password_input) - - def _on_password_enter_press(self): - self._login() - - def _on_show_password(self, show_password): - if show_password: - placeholder_text = "< MySecret124 >" - echo_mode = QtWidgets.QLineEdit.Normal - else: - placeholder_text = "< *********** >" - echo_mode = QtWidgets.QLineEdit.Password - - self._password_input.setEchoMode(echo_mode) - self._password_input.setPlaceholderText(placeholder_text) - - def _on_username_edit_click(self): - self._username_edit_mode = True - self._update_states_by_edit_mode() - - def _on_url_edit_click(self): - self._url_edit_mode = True - self._update_states_by_edit_mode() - - def _on_logout_click(self): - dialog = LogoutConfirmDialog(self) - dialog.exec_() - if dialog.get_result(): - self._result = (None, None, None, True) - self.accept() - - def _on_login_click(self): - self._login() - - def _validate_url(self): - """Use url from input to connect and change window state on success. - - Todos: - Threaded check. - """ - - url = self._url_input.text() - valid_url = None - try: - valid_url = validate_url(url) - - except UrlError as exc: - parts = [f"{exc.title}"] - parts.extend(f"- {hint}" for hint in exc.hints) - self._set_message("
".join(parts)) - - except KeyboardInterrupt: - # Reraise KeyboardInterrupt error - raise - - except BaseException: - self._set_unexpected_error() - return - - if valid_url is None: - return False - - self._url_input.setText(valid_url) - return True - - def _login(self): - if ( - not self._login_btn.isEnabled() - and not self._confirm_btn.isEnabled() - ): - return - - if not self._url_is_valid: - self._set_url_valid(self._validate_url()) - - if not self._url_is_valid: - self._set_input_focus(self._url_input) - self._set_credentials_valid(None) - return - - self._clear_message() - - url = self._url_input.text() - username = self._username_input.text() - password = self._password_input.text() - try: - token = login_to_server(url, username, password) - except BaseException: - self._set_unexpected_error() - return - - if token is not None: - self._result = (url, token, username, False) - self.accept() - return - - self._set_credentials_valid(False) - message_lines = ["Invalid credentials"] - if not username.strip(): - message_lines.append("- Username is not filled") - - if not password.strip(): - message_lines.append("- Password is not filled") - - if username and password: - message_lines.append("- Check your credentials") - - self._set_message("
".join(message_lines)) - self._set_input_focus(self._username_input) - - def _set_input_focus(self, widget): - widget.setFocus(QtCore.Qt.MouseFocusReason) - - def _set_input_valid_state(self, widget, valid): - state = "" - if valid is True: - state = "valid" - elif valid is False: - state = "invalid" - set_style_property(widget, "state", state) - - def _set_message(self, message): - self._message_label.setText(message) - - def _clear_message(self): - self._message_label.setText("") - - def _set_unexpected_error(self): - # TODO add traceback somewhere - # - maybe a button to show or copy? - traceback.print_exc() - lines = [ - "Unexpected error happened", - "- Can be caused by wrong url (leading elsewhere)" - ] - self._set_message("
".join(lines)) - - def set_url(self, url): - self._url_preview.setText(url) - self._url_input.setText(url) - self._validate_url() - - def set_username(self, username): - self._username_preview.setText(username) - self._username_input.setText(username) - - def _set_api_key(self, api_key): - if not api_key or len(api_key) < 3: - self._api_preview.setText(api_key or "") - return - - api_key_len = len(api_key) - offset = 6 - if api_key_len < offset: - offset = api_key_len // 2 - api_key = api_key[:offset] + "." * (api_key_len - offset) - - self._api_preview.setText(api_key) - - def set_logged_in( - self, - logged_in, - url=None, - username=None, - api_key=None, - allow_logout=None - ): - if url is not None: - self.set_url(url) - - if username is not None: - self.set_username(username) - - if api_key: - self._set_api_key(api_key) - - if logged_in and allow_logout is None: - allow_logout = True - - self._set_logged_in(logged_in) - - if allow_logout: - self.set_allow_logout(True) - elif allow_logout is False: - self.set_allow_logout(False) - - -def ask_to_login(url=None, username=None, always_on_top=False): - """Ask user to login using Qt dialog. - - Function creates new QApplication if is not created yet. - - Args: - url (Optional[str]): Server url that will be prefilled in dialog. - username (Optional[str]): Username that will be prefilled in dialog. - always_on_top (Optional[bool]): Window will be drawn on top of - other windows. - - Returns: - tuple[str, str, str]: Returns Url, user's token and username. Url can - be changed during dialog lifetime that's why the url is returned. - """ - - app_instance = get_qt_app() - - window = ServerLoginWindow() - if always_on_top: - window.setWindowFlags( - window.windowFlags() - | QtCore.Qt.WindowStaysOnTopHint - ) - - if url: - window.set_url(url) - - if username: - window.set_username(username) - - if not app_instance.startingUp(): - window.exec_() - else: - window.open() - app_instance.exec_() - result = window.result() - out_url, out_token, out_username, _ = result - return out_url, out_token, out_username - - -def change_user(url, username, api_key, always_on_top=False): - """Ask user to login using Qt dialog. - - Function creates new QApplication if is not created yet. - - Args: - url (str): Server url that will be prefilled in dialog. - username (str): Username that will be prefilled in dialog. - api_key (str): API key that will be prefilled in dialog. - always_on_top (Optional[bool]): Window will be drawn on top of - other windows. - - Returns: - Tuple[str, str]: Returns Url and user's token. Url can be changed - during dialog lifetime that's why the url is returned. - """ - - app_instance = get_qt_app() - window = ServerLoginWindow() - if always_on_top: - window.setWindowFlags( - window.windowFlags() - | QtCore.Qt.WindowStaysOnTopHint - ) - window.set_logged_in(True, url, username, api_key) - - if not app_instance.startingUp(): - window.exec_() - else: - window.open() - # This can become main Qt loop. 
diff --git a/common/ayon_common/connection/ui/widgets.py b/common/ayon_common/connection/ui/widgets.py deleted file mode 100644 index 78b73e056d7..00000000000 --- a/common/ayon_common/connection/ui/widgets.py +++ /dev/null @@ -1,47 +0,0 @@ -from qtpy import QtWidgets, QtCore, QtGui - - -class PressHoverButton(QtWidgets.QPushButton): - """Keep track of mouse press/release and enter/leave.""" - - _mouse_pressed = False - _mouse_hovered = False - change_state = QtCore.Signal(bool) - - def mousePressEvent(self, event): - self._mouse_pressed = True - self._mouse_hovered = True - self.change_state.emit(self._mouse_hovered) - super(PressHoverButton, self).mousePressEvent(event) - - def mouseReleaseEvent(self, event): - self._mouse_pressed = False - self._mouse_hovered = False - self.change_state.emit(self._mouse_hovered) - super(PressHoverButton, self).mouseReleaseEvent(event) - - def mouseMoveEvent(self, event): - mouse_pos = self.mapFromGlobal(QtGui.QCursor.pos()) - under_mouse = self.rect().contains(mouse_pos) - if under_mouse != self._mouse_hovered: - self._mouse_hovered = under_mouse - self.change_state.emit(self._mouse_hovered) - - super(PressHoverButton, self).mouseMoveEvent(event) - - -class PlaceholderLineEdit(QtWidgets.QLineEdit): - """Set placeholder color of QLineEdit in Qt 5.12 and higher.""" - - def __init__(self, *args, **kwargs): - super(PlaceholderLineEdit, self).__init__(*args, **kwargs) - # Change placeholder palette color - if hasattr(QtGui.QPalette, "PlaceholderText"): - filter_palette = self.palette() - color = QtGui.QColor("#D3D8DE") - color.setAlpha(67) - filter_palette.setColor( - QtGui.QPalette.PlaceholderText, - color - ) - self.setPalette(filter_palette) diff --git a/common/ayon_common/distribution/README.md b/common/ayon_common/distribution/README.md deleted file mode 100644 index f1c34ba7223..00000000000 --- a/common/ayon_common/distribution/README.md +++ /dev/null @@ -1,18 +0,0 @@ -Addon distribution tool ------------------------- - -Code in this folder is the backend portion of the addon distribution logic for the v4 server. - -Each host and module will be a separate addon in the future. Each v4 server can run a different set of addons. - -The client (running on an artist machine) will first ask the v4 server for the list of enabled addons. -(It expects a list of json documents matching the `addon_distribution.py:AddonInfo` object.) -Next it will check the presence of each enabled addon version in the local folder. In the case of a missing version of -an addon, the client will use the information in the addon to download a zip file (from http/a shared local disk/git) -and unzip it. - -A required part of addon distribution will be the sharing of dependencies (python libraries, utilities), which is not part of this folder. - -The location of this folder might change in the future, as a client will be required to add this folder to sys.path reliably. - -This code needs to be independent of OpenPype code as much as possible! 
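The client flow the README outlines maps directly onto the `AyonDistribution` class defined in `control.py` below; a condensed sketch of the expected usage:

    import sys

    from ayon_common.distribution import AyonDistribution

    dist = AyonDistribution()
    # Download and unzip every addon and the dependency package required
    # by the server-side bundle.
    dist.distribute()
    # Raises RuntimeError when anything could not be distributed.
    dist.validate_distribution()
    # Make the distributed content importable.
    sys.path.extend(dist.get_sys_paths())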
diff --git a/common/ayon_common/distribution/__init__.py b/common/ayon_common/distribution/__init__.py deleted file mode 100644 index e3c0f0e1618..00000000000 --- a/common/ayon_common/distribution/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from .control import AyonDistribution, BundleNotFoundError -from .utils import show_missing_bundle_information - - -__all__ = ( - "AyonDistribution", - "BundleNotFoundError", - "show_missing_bundle_information", -) diff --git a/common/ayon_common/distribution/control.py b/common/ayon_common/distribution/control.py deleted file mode 100644 index 95c221d753d..00000000000 --- a/common/ayon_common/distribution/control.py +++ /dev/null @@ -1,1116 +0,0 @@ -import os -import sys -import json -import traceback -import collections -import datetime -import logging -import shutil -import threading -import platform -import attr -from enum import Enum - -import ayon_api - -from ayon_common.utils import is_staging_enabled - -from .utils import ( - get_addons_dir, - get_dependencies_dir, -) -from .downloaders import get_default_download_factory -from .data_structures import ( - AddonInfo, - DependencyItem, - Bundle, -) - -NOT_SET = type("UNKNOWN", (), {"__bool__": lambda self: False})() - - -class BundleNotFoundError(Exception): - """Bundle name is defined but is not available on server. - - Args: - bundle_name (str): Name of bundle that was not found. - """ - - def __init__(self, bundle_name): - self.bundle_name = bundle_name - super().__init__( - f"Bundle '{bundle_name}' is not available on server" - ) - - -class UpdateState(Enum): - UNKNOWN = "unknown" - UPDATED = "updated" - OUTDATED = "outdated" - UPDATE_FAILED = "failed" - MISS_SOURCE_FILES = "miss_source_files"
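`NOT_SET` above is a sentinel: a falsy singleton that is identity-distinct from `None`, so callers can tell "argument not provided" apart from an explicit `None`. A quick illustration (the `resolve` helper is hypothetical):

    def resolve(bundle_name=NOT_SET):
        # Hypothetical helper demonstrating the sentinel pattern.
        if bundle_name is NOT_SET:
            return "fall back to the production/staging bundle"
        return f"use explicit bundle: {bundle_name}"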
- """ - - return self._transfer_progress - - @property - def started(self): - return self._started - - @property - def hash_check_started(self): - return self._hash_check_started - - @property - def hash_check_finished(self): - return self._has_check_finished - - @property - def unzip_started(self): - return self._unzip_started - - @property - def unzip_finished(self): - return self._unzip_finished - - @property - def failed(self): - return self._failed or self._transfer_progress.failed - - @property - def fail_reason(self): - return self._fail_reason or self._transfer_progress.fail_reason - - -class DistributionItem: - """Distribution item with sources and target directories. - - Distribution item can be an addon or dependency package. Distribution item - can be already distributed and don't need any progression. The item keeps - track of the progress. The reason is to be able to use the distribution - items as source data for UI without implementing the same logic. - - Distribution is "state" based. Distribution can be 'UPDATED' or 'OUTDATED' - at the initialization. If item is 'UPDATED' the distribution is skipped - and 'OUTDATED' will trigger the distribution process. - - Because the distribution may have multiple sources each source has own - progress item. - - Args: - state (UpdateState): Initial state (UpdateState.UPDATED or - UpdateState.OUTDATED). - unzip_dirpath (str): Path to directory where zip is downloaded. - download_dirpath (str): Path to directory where file is unzipped. - file_hash (str): Hash of file for validation. - factory (DownloadFactory): Downloaders factory object. - sources (List[SourceInfo]): Possible sources to receive the - distribution item. - downloader_data (Dict[str, Any]): More information for downloaders. - item_label (str): Label used in log outputs (and in UI). - logger (logging.Logger): Logger object. - """ - - def __init__( - self, - state, - unzip_dirpath, - download_dirpath, - file_hash, - factory, - sources, - downloader_data, - item_label, - logger=None, - ): - if logger is None: - logger = logging.getLogger(self.__class__.__name__) - self.log = logger - self.state = state - self.unzip_dirpath = unzip_dirpath - self.download_dirpath = download_dirpath - self.file_hash = file_hash - self.factory = factory - self.sources = [ - (source, DistributeTransferProgress()) - for source in sources - ] - self.downloader_data = downloader_data - self.item_label = item_label - - self._need_distribution = state != UpdateState.UPDATED - self._current_source_progress = None - self._used_source_progress = None - self._used_source = None - self._dist_started = False - self._dist_finished = False - - self._error_msg = None - self._error_detail = None - - @property - def need_distribution(self): - """Need distribution based on initial state. - - Returns: - bool: Need distribution. - """ - - return self._need_distribution - - @property - def current_source_progress(self): - """Currently processed source progress object. - - Returns: - Union[DistributeTransferProgress, None]: Transfer progress or None. - """ - - return self._current_source_progress - - @property - def used_source_progress(self): - """Transfer progress that successfully distributed the item. - - Returns: - Union[DistributeTransferProgress, None]: Transfer progress or None. - """ - - return self._used_source_progress - - @property - def used_source(self): - """Data of source item. - - Returns: - Union[Dict[str, Any], None]: SourceInfo data or None. 
- """ - - return self._used_source - - @property - def error_message(self): - """Reason why distribution item failed. - - Returns: - Union[str, None]: Error message. - """ - - return self._error_msg - - @property - def error_detail(self): - """Detailed reason why distribution item failed. - - Returns: - Union[str, None]: Detailed information (maybe traceback). - """ - - return self._error_detail - - def _distribute(self): - if not self.sources: - message = ( - f"{self.item_label}: Don't have" - " any sources to download from." - ) - self.log.error(message) - self._error_msg = message - self.state = UpdateState.MISS_SOURCE_FILES - return - - download_dirpath = self.download_dirpath - unzip_dirpath = self.unzip_dirpath - for source, source_progress in self.sources: - self._current_source_progress = source_progress - source_progress.set_started() - - # Remove directory if exists - if os.path.isdir(unzip_dirpath): - self.log.debug(f"Cleaning {unzip_dirpath}") - shutil.rmtree(unzip_dirpath) - - # Create directory - os.makedirs(unzip_dirpath) - if not os.path.isdir(download_dirpath): - os.makedirs(download_dirpath) - - try: - downloader = self.factory.get_downloader(source.type) - except Exception: - message = f"Unknown downloader {source.type}" - source_progress.set_failed(message) - self.log.warning(message, exc_info=True) - continue - - source_data = attr.asdict(source) - cleanup_args = ( - source_data, - download_dirpath, - self.downloader_data - ) - - try: - zip_filepath = downloader.download( - source_data, - download_dirpath, - self.downloader_data, - source_progress.transfer_progress, - ) - except Exception: - message = "Failed to download source" - source_progress.set_failed(message) - self.log.warning( - f"{self.item_label}: {message}", - exc_info=True - ) - downloader.cleanup(*cleanup_args) - continue - - source_progress.set_hash_check_started() - try: - downloader.check_hash(zip_filepath, self.file_hash) - except Exception: - message = "File hash does not match" - source_progress.set_failed(message) - self.log.warning( - f"{self.item_label}: {message}", - exc_info=True - ) - downloader.cleanup(*cleanup_args) - continue - - source_progress.set_hash_check_finished() - source_progress.set_unzip_started() - try: - downloader.unzip(zip_filepath, unzip_dirpath) - except Exception: - message = "Couldn't unzip source file" - source_progress.set_failed(message) - self.log.warning( - f"{self.item_label}: {message}", - exc_info=True - ) - downloader.cleanup(*cleanup_args) - continue - - source_progress.set_unzip_finished() - downloader.cleanup(*cleanup_args) - self.state = UpdateState.UPDATED - self._used_source = source_data - break - - last_progress = self._current_source_progress - self._current_source_progress = None - if self.state == UpdateState.UPDATED: - self._used_source_progress = last_progress - self.log.info(f"{self.item_label}: Distributed") - return - - self.log.error(f"{self.item_label}: Failed to distribute") - self._error_msg = "Failed to receive or install source files" - - def distribute(self): - """Execute distribution logic.""" - - if not self.need_distribution or self._dist_started: - return - - self._dist_started = True - try: - if self.state == UpdateState.OUTDATED: - self._distribute() - - except Exception as exc: - self.state = UpdateState.UPDATE_FAILED - self._error_msg = str(exc) - self._error_detail = "".join( - traceback.format_exception(*sys.exc_info()) - ) - self.log.error( - f"{self.item_label}: Distibution filed", - exc_info=True - ) - - finally: - 
self._dist_finished = True - if self.state == UpdateState.OUTDATED: - self.state = UpdateState.UPDATE_FAILED - self._error_msg = "Distribution failed" - - if ( - self.state != UpdateState.UPDATED - and self.unzip_dirpath - and os.path.isdir(self.unzip_dirpath) - ): - self.log.debug(f"Cleaning {self.unzip_dirpath}") - shutil.rmtree(self.unzip_dirpath) - - -class AyonDistribution: - """Distribution control. - - Receive information from the server about which addons and dependency - packages should be available locally, and prepare/validate their - distribution. - - Arguments are available for testing of the class. - - Args: - addon_dirpath (Optional[str]): Where addons will be stored. - dependency_dirpath (Optional[str]): Where dependencies will be stored. - dist_factory (Optional[DownloadFactory]): Factory which takes care of - downloading items based on source type. - addons_info (Optional[list[dict[str, Any]]]): List of prepared - addons' info. - dependency_packages_info (Optional[list[dict[str, Any]]]): Info - about packages from server. - bundles_info (Optional[Dict[str, Any]]): Info about - bundles. - bundle_name (Optional[str]): Name of bundle to use. If not passed - an environment variable 'AYON_BUNDLE_NAME' is checked for value. - When both are not available the bundle is defined by 'use_staging' - value. - use_staging (Optional[bool]): Use staging versions of addons. - If not passed, 'is_staging_enabled' is used as default value. - """ - - def __init__( - self, - addon_dirpath=None, - dependency_dirpath=None, - dist_factory=None, - addons_info=NOT_SET, - dependency_packages_info=NOT_SET, - bundles_info=NOT_SET, - bundle_name=NOT_SET, - use_staging=None - ): - self._log = None - - self._dist_started = False - self._dist_finished = False - - self._addons_dirpath = addon_dirpath or get_addons_dir() - self._dependency_dirpath = dependency_dirpath or get_dependencies_dir() - self._dist_factory = ( - dist_factory or get_default_download_factory() - ) - - if bundle_name is NOT_SET: - bundle_name = os.environ.get("AYON_BUNDLE_NAME", NOT_SET) - - # Raw addons data from server - self._addons_info = addons_info - # Prepared data as Addon objects - self._addon_items = NOT_SET - # Distribution items of addons - # - only those addons and versions that should be distributed - self._addon_dist_items = NOT_SET - - # Raw dependency packages data from server - self._dependency_packages_info = dependency_packages_info - # Prepared dependency packages as objects - self._dependency_packages_items = NOT_SET - # Dependency package item that should be used - self._dependency_package_item = NOT_SET - # Distribution item of dependency package - self._dependency_dist_item = NOT_SET - - # Raw bundles data from server - self._bundles_info = bundles_info - # Bundles as objects - self._bundle_items = NOT_SET - - # Bundle that should be used in production - self._production_bundle = NOT_SET - # Bundle that should be used in staging - self._staging_bundle = NOT_SET - # Boolean that defines if staging bundle should be used - self._use_staging = use_staging - - # Specific bundle name should be used - self._bundle_name = bundle_name - # Final bundle that will be used - self._bundle = NOT_SET - - @property - def use_staging(self): - """Staging version of a bundle should be used. - - This value is completely ignored if a specific bundle name is set. - - Returns: - bool: True if staging version should be used. - """ - - if self._use_staging is None: - self._use_staging = is_staging_enabled() - return self._use_staging
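How the bundle resolution below is typically driven; the bundle name is illustrative:

    # Pin a specific bundle (mirrors the AYON_BUNDLE_NAME environment
    # variable); raises BundleNotFoundError when the server lacks it.
    dist = AyonDistribution(bundle_name="2023-09-01-prod")

    # Without a pinned name, the staging flag picks between the bundles
    # marked as production/staging on the server.
    dist = AyonDistribution(use_staging=False)
    bundle = dist.bundle_to_use  # may be None when nothing is marked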
- """ - - if self._use_staging is None: - self._use_staging = is_staging_enabled() - return self._use_staging - - @property - def log(self): - """Helper to access logger. - - Returns: - logging.Logger: Logger instance. - """ - if self._log is None: - self._log = logging.getLogger(self.__class__.__name__) - return self._log - - @property - def bundles_info(self): - """ - - Returns: - dict[str, dict[str, Any]]: Bundles information from server. - """ - - if self._bundles_info is NOT_SET: - self._bundles_info = ayon_api.get_bundles() - return self._bundles_info - - @property - def bundle_items(self): - """ - - Returns: - list[Bundle]: List of bundles info. - """ - - if self._bundle_items is NOT_SET: - self._bundle_items = [ - Bundle.from_dict(info) - for info in self.bundles_info["bundles"] - ] - return self._bundle_items - - def _prepare_production_staging_bundles(self): - production_bundle = None - staging_bundle = None - for bundle in self.bundle_items: - if bundle.is_production: - production_bundle = bundle - if bundle.is_staging: - staging_bundle = bundle - self._production_bundle = production_bundle - self._staging_bundle = staging_bundle - - @property - def production_bundle(self): - """ - Returns: - Union[Bundle, None]: Bundle that should be used in production. - """ - - if self._production_bundle is NOT_SET: - self._prepare_production_staging_bundles() - return self._production_bundle - - @property - def staging_bundle(self): - """ - Returns: - Union[Bundle, None]: Bundle that should be used in staging. - """ - - if self._staging_bundle is NOT_SET: - self._prepare_production_staging_bundles() - return self._staging_bundle - - @property - def bundle_to_use(self): - """Bundle that will be used for distribution. - - Bundle that should be used can be affected by 'bundle_name' - or 'use_staging'. - - Returns: - Union[Bundle, None]: Bundle that will be used for distribution - or None. - - Raises: - BundleNotFoundError: When bundle name to use is defined - but is not available on server. - """ - - if self._bundle is NOT_SET: - if self._bundle_name is not NOT_SET: - bundle = next( - ( - bundle - for bundle in self.bundle_items - if bundle.name == self._bundle_name - ), - None - ) - if bundle is None: - raise BundleNotFoundError(self._bundle_name) - - self._bundle = bundle - elif self.use_staging: - self._bundle = self.staging_bundle - else: - self._bundle = self.production_bundle - return self._bundle - - @property - def bundle_name_to_use(self): - bundle = self.bundle_to_use - return None if bundle is None else bundle.name - - @property - def addons_info(self): - """Server information about available addons. - - Returns: - Dict[str, dict[str, Any]: Addon info by addon name. - """ - - if self._addons_info is NOT_SET: - server_info = ayon_api.get_addons_info(details=True) - self._addons_info = server_info["addons"] - return self._addons_info - - @property - def addon_items(self): - """Information about available addons on server. - - Addons may require distribution of files. For those addons will be - created 'DistributionItem' handling distribution itself. - - Returns: - Dict[str, AddonInfo]: Addon info object by addon name. - """ - - if self._addon_items is NOT_SET: - addons_info = {} - for addon in self.addons_info: - addon_info = AddonInfo.from_dict(addon) - addons_info[addon_info.name] = addon_info - self._addon_items = addons_info - return self._addon_items - - @property - def dependency_packages_info(self): - """Server information about available dependency packages. 
- - Notes: - For testing purposes it is possible to pass dependency packages - information to '__init__'. - - Returns: - list[dict[str, Any]]: Dependency packages information. - """ - - if self._dependency_packages_info is NOT_SET: - self._dependency_packages_info = ( - ayon_api.get_dependency_packages())["packages"] - return self._dependency_packages_info - - @property - def dependency_packages_items(self): - """Dependency packages as objects. - - Returns: - dict[str, DependencyItem]: Dependency packages as objects by name. - """ - - if self._dependency_packages_items is NOT_SET: - dependency_package_items = {} - for item in self.dependency_packages_info: - item = DependencyItem.from_dict(item) - dependency_package_items[item.name] = item - self._dependency_packages_items = dependency_package_items - return self._dependency_packages_items - - @property - def dependency_package_item(self): - """Dependency package item that should be used by bundle. - - Returns: - Union[None, DependencyItem]: None if the bundle does not have a - dependency package specified. - """ - - if self._dependency_package_item is NOT_SET: - dependency_package_item = None - bundle = self.bundle_to_use - if bundle is not None: - package_name = bundle.dependency_packages.get( - platform.system().lower() - ) - dependency_package_item = self.dependency_packages_items.get( - package_name) - self._dependency_package_item = dependency_package_item - return self._dependency_package_item - - def _prepare_current_addon_dist_items(self): - addons_metadata = self.get_addons_metadata() - output = [] - addon_versions = {} - bundle = self.bundle_to_use - if bundle is not None: - addon_versions = bundle.addon_versions - for addon_name, addon_item in self.addon_items.items(): - addon_version = addon_versions.get(addon_name) - # Addon is not in bundle -> Skip - if addon_version is None: - continue - - addon_version_item = addon_item.versions.get(addon_version) - # Addon version is not available in addons info - # - TODO handle this case (raise error, skip, store, report, ...) - if addon_version_item is None: - print( - f"Version '{addon_version}' of addon '{addon_name}'" - " is not available on server." - ) - continue - - if not addon_version_item.require_distribution: - continue - full_name = addon_version_item.full_name - addon_dest = os.path.join(self._addons_dirpath, full_name) - self.log.debug(f"Checking {full_name} in {addon_dest}") - addon_in_metadata = ( - addon_name in addons_metadata - and addon_version_item.version in addons_metadata[addon_name] - ) - if addon_in_metadata and os.path.isdir(addon_dest): - self.log.debug( - f"Addon version folder {addon_dest} already exists." 
- ) - state = UpdateState.UPDATED - - else: - state = UpdateState.OUTDATED - - downloader_data = { - "type": "addon", - "name": addon_name, - "version": addon_version - } - - dist_item = DistributionItem( - state, - addon_dest, - addon_dest, - addon_version_item.hash, - self._dist_factory, - list(addon_version_item.sources), - downloader_data, - full_name, - self.log - ) - output.append({ - "dist_item": dist_item, - "addon_name": addon_name, - "addon_version": addon_version, - "addon_item": addon_item, - "addon_version_item": addon_version_item, - }) - return output - - def _prepare_dependency_progress(self): - package = self.dependency_package_item - if package is None: - return None - - metadata = self.get_dependency_metadata() - downloader_data = { - "type": "dependency_package", - "name": package.name, - "platform": package.platform_name - } - zip_dir = package_dir = os.path.join( - self._dependency_dirpath, package.name - ) - self.log.debug(f"Checking {package.name} in {package_dir}") - - if not os.path.isdir(package_dir) or package.name not in metadata: - state = UpdateState.OUTDATED - else: - state = UpdateState.UPDATED - - return DistributionItem( - state, - zip_dir, - package_dir, - package.checksum, - self._dist_factory, - package.sources, - downloader_data, - package.name, - self.log, - ) - - def get_addon_dist_items(self): - """Addon distribution items. - - These items describe the source files an addon requires to be - available on the machine. Each item may have 0-n sources from which it - can be obtained. If the file is already available, its state will be - 'UPDATED'. - - Example output: - [ - { - "dist_item": DistributionItem, - "addon_name": str, - "addon_version": str, - "addon_item": AddonInfo, - "addon_version_item": AddonVersionInfo - }, { - ... - } - ] - - Returns: - list[dict[str, Any]]: Distribution items with addon version item. - """ - - if self._addon_dist_items is NOT_SET: - self._addon_dist_items = ( - self._prepare_current_addon_dist_items()) - return self._addon_dist_items - - def get_dependency_dist_item(self): - """Dependency package distribution item. - - The item describes the source files the server requires to be - available on the machine. The item may have 0-n sources from which it - can be obtained. If the file is already available, its state will be - 'UPDATED'. - - 'None' is returned if the server does not have any dependency package - defined. - - Returns: - Union[None, DistributionItem]: Dependency item or None if server - does not have specified any dependency package. - """ - - if self._dependency_dist_item is NOT_SET: - self._dependency_dist_item = self._prepare_dependency_progress() - return self._dependency_dist_item - - def get_dependency_metadata_filepath(self): - """Path to distribution metadata file. - - Metadata contains information about distributed packages, the used - source, the expected file hash and the time when the file was - distributed. - - Returns: - str: Path to a file where dependency package metadata are stored. - """ - - return os.path.join(self._dependency_dirpath, "dependency.json") - - def get_addons_metadata_filepath(self): - """Path to addons metadata file. - - Metadata contains information about distributed addons, the used - sources, the expected file hashes and the times when the files were - distributed. - - Returns: - str: Path to a file where addons metadata are stored. - """ - - return os.path.join(self._addons_dirpath, "addons.json") - - def read_metadata_file(self, filepath, default_value=None): - """Read json file from path. 
- - The default value is returned when the file does not exist or does - not contain valid json. - - Args: - filepath (str): Path to json file. - default_value (Union[Dict[str, Any], List[Any], None]): Default - value if the file is not available (or valid). - - Returns: - Union[Dict[str, Any], List[Any]]: Value from file. - """ - - if default_value is None: - default_value = {} - - if not os.path.exists(filepath): - return default_value - - try: - with open(filepath, "r") as stream: - data = json.load(stream) - except ValueError: - data = default_value - return data - - def save_metadata_file(self, filepath, data): - """Store data to json file. - - The method creates the file (and missing directories) when it does - not exist. - - Args: - filepath (str): Path to json file. - data (Union[Dict[str, Any], List[Any]]): Data to store into file. - """ - - if not os.path.exists(filepath): - dirpath = os.path.dirname(filepath) - if not os.path.exists(dirpath): - os.makedirs(dirpath) - with open(filepath, "w") as stream: - json.dump(data, stream, indent=4) - - def get_dependency_metadata(self): - filepath = self.get_dependency_metadata_filepath() - return self.read_metadata_file(filepath, {}) - - def update_dependency_metadata(self, package_name, data): - dependency_metadata = self.get_dependency_metadata() - dependency_metadata[package_name] = data - filepath = self.get_dependency_metadata_filepath() - self.save_metadata_file(filepath, dependency_metadata) - - def get_addons_metadata(self): - filepath = self.get_addons_metadata_filepath() - return self.read_metadata_file(filepath, {}) - - def update_addons_metadata(self, addons_information): - if not addons_information: - return - addons_metadata = self.get_addons_metadata() - for addon_name, version_value in addons_information.items(): - if addon_name not in addons_metadata: - addons_metadata[addon_name] = {} - for addon_version, version_data in version_value.items(): - addons_metadata[addon_name][addon_version] = version_data - - filepath = self.get_addons_metadata_filepath() - self.save_metadata_file(filepath, addons_metadata) - - def finish_distribution(self): - """Store metadata about distributed items.""" - - self._dist_finished = True - stored_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S") - dependency_dist_item = self.get_dependency_dist_item() - if ( - dependency_dist_item is not None - and dependency_dist_item.need_distribution - and dependency_dist_item.state == UpdateState.UPDATED - ): - package = self.dependency_package_item - source = dependency_dist_item.used_source - if source is not None: - data = { - "source": source, - "file_hash": dependency_dist_item.file_hash, - "distributed_dt": stored_time - } - self.update_dependency_metadata(package.name, data) - - addons_info = {} - for item in self.get_addon_dist_items(): - dist_item = item["dist_item"] - if ( - not dist_item.need_distribution - or dist_item.state != UpdateState.UPDATED - ): - continue - - source_data = dist_item.used_source - if not source_data: - continue - - addon_name = item["addon_name"] - addon_version = item["addon_version"] - addons_info.setdefault(addon_name, {}) - addons_info[addon_name][addon_version] = { - "source": source_data, - "file_hash": dist_item.file_hash, - "distributed_dt": stored_time - } - - self.update_addons_metadata(addons_info)
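For reference, a sketch of the shape `finish_distribution` writes into 'addons.json'; the addon name, hash and timestamp below are illustrative:

    addons_metadata = {
        "example_addon": {
            "1.0.0": {
                "source": {"type": "server", "filename": None, "path": None},
                "file_hash": "sha256-digest-of-the-zip",
                "distributed_dt": "2023-09-01 12:00:00",
            }
        }
    }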
- def get_all_distribution_items(self): - """Distribution items required by server. - - Items contain the dependency package item and all addons that are - enabled and have distribution requirements. - - Items can already be available on the machine. - - Returns: - List[DistributionItem]: Distribution items required by server. - """ - - output = [ - item["dist_item"] - for item in self.get_addon_dist_items() - ] - dependency_dist_item = self.get_dependency_dist_item() - if dependency_dist_item is not None: - output.insert(0, dependency_dist_item) - - return output - - def distribute(self, threaded=False): - """Distribute all missing items. - - The method will try to distribute all items that are required by the - server. - - This method does not handle failed items. To validate the result call - 'validate_distribution' when this method finishes. - - Args: - threaded (bool): Distribute items in threads. - """ - - if self._dist_started: - raise RuntimeError("Distribution already started") - self._dist_started = True - threads = collections.deque() - for item in self.get_all_distribution_items(): - if threaded: - # Threads must be started, otherwise 'is_alive' is always - # False and 'join' would fail on a never started thread. - thread = threading.Thread(target=item.distribute) - thread.start() - threads.append(thread) - else: - item.distribute() - - while threads: - thread = threads.popleft() - if thread.is_alive(): - threads.append(thread) - else: - thread.join() - - self.finish_distribution() - - def validate_distribution(self): - """Check if all required distribution items are distributed. - - Raises: - RuntimeError: When any of the items is not available. - """ - - invalid = [] - dependency_package = self.get_dependency_dist_item() - if ( - dependency_package is not None - and dependency_package.state != UpdateState.UPDATED - ): - invalid.append("Dependency package") - - for item in self.get_addon_dist_items(): - dist_item = item["dist_item"] - if dist_item.state != UpdateState.UPDATED: - invalid.append(item["addon_name"]) - - if not invalid: - return - - raise RuntimeError("Failed to distribute {}".format( - ", ".join([f'"{item}"' for item in invalid]) - )) - - def get_sys_paths(self): - """Get all paths to python packages that should be added to python. - - These paths lead to addon directories and python dependencies in - dependency package. - - Todos: - Add dependency package directory to output. ATM the structure of - the dependency package is not 100% defined. - - Returns: - List[str]: Paths that should be added to 'sys.path' and - 'PYTHONPATH'. - """ - - output = [] - for item in self.get_all_distribution_items(): - if item.state != UpdateState.UPDATED: - continue - unzip_dirpath = item.unzip_dirpath - if unzip_dirpath and os.path.exists(unzip_dirpath): - output.append(unzip_dirpath) - return output - - -def cli(*args): - raise NotImplementedError diff --git a/common/ayon_common/distribution/data_structures.py b/common/ayon_common/distribution/data_structures.py deleted file mode 100644 index aa93d4ed714..00000000000 --- a/common/ayon_common/distribution/data_structures.py +++ /dev/null @@ -1,265 +0,0 @@ -import attr -from enum import Enum - - -class UrlType(Enum): - HTTP = "http" - GIT = "git" - FILESYSTEM = "filesystem" - SERVER = "server" - - -@attr.s -class MultiPlatformValue(object): - windows = attr.ib(default=None) - linux = attr.ib(default=None) - darwin = attr.ib(default=None) - - -@attr.s -class SourceInfo(object): - type = attr.ib() - - -@attr.s -class LocalSourceInfo(SourceInfo): - path = attr.ib(default=attr.Factory(MultiPlatformValue)) - - -@attr.s -class WebSourceInfo(SourceInfo): - url = attr.ib(default=None) - headers = attr.ib(default=None) - filename = attr.ib(default=None) - - -@attr.s -class ServerSourceInfo(SourceInfo): - filename = attr.ib(default=None) - path = attr.ib(default=None) - - -def convert_source(source): - """Create source object from data information. 
- - Args: - source (Dict[str, any]): Information about source. - - Returns: - Union[None, SourceInfo]: Object with source information if type is - known. - """ - - source_type = source.get("type") - if not source_type: - return None - - if source_type == UrlType.FILESYSTEM.value: - return LocalSourceInfo( - type=source_type, - path=source["path"] - ) - - if source_type == UrlType.HTTP.value: - url = source["path"] - return WebSourceInfo( - type=source_type, - url=url, - headers=source.get("headers"), - filename=source.get("filename") - ) - - if source_type == UrlType.SERVER.value: - return ServerSourceInfo( - type=source_type, - filename=source.get("filename"), - path=source.get("path") - ) - - -def prepare_sources(src_sources): - sources = [] - unknown_sources = [] - for source in (src_sources or []): - dependency_source = convert_source(source) - if dependency_source is not None: - sources.append(dependency_source) - else: - print(f"Unknown source {source.get('type')}") - unknown_sources.append(source) - return sources, unknown_sources - - -@attr.s -class VersionData(object): - version_data = attr.ib(default=None) - - -@attr.s -class AddonVersionInfo(object): - version = attr.ib() - full_name = attr.ib() - title = attr.ib(default=None) - require_distribution = attr.ib(default=False) - sources = attr.ib(default=attr.Factory(list)) - unknown_sources = attr.ib(default=attr.Factory(list)) - hash = attr.ib(default=None) - - @classmethod - def from_dict( - cls, addon_name, addon_title, addon_version, version_data - ): - """Addon version info. - - Args: - addon_name (str): Name of addon. - addon_title (str): Title of addon. - addon_version (str): Version of addon. - version_data (dict[str, Any]): Addon version information from - server. - - Returns: - AddonVersionInfo: Addon version info. - """ - - full_name = f"{addon_name}_{addon_version}" - title = f"{addon_title} {addon_version}" - - source_info = version_data.get("clientSourceInfo") - require_distribution = source_info is not None - sources, unknown_sources = prepare_sources(source_info) - - return cls( - version=addon_version, - full_name=full_name, - require_distribution=require_distribution, - sources=sources, - unknown_sources=unknown_sources, - hash=version_data.get("hash"), - title=title - ) - - -@attr.s -class AddonInfo(object): - """Object matching json payload from Server""" - name = attr.ib() - versions = attr.ib(default=attr.Factory(dict)) - title = attr.ib(default=None) - description = attr.ib(default=None) - license = attr.ib(default=None) - authors = attr.ib(default=None) - - @classmethod - def from_dict(cls, data): - """Addon info by available versions. - - Args: - data (dict[str, Any]): Addon information from server. Should - contain information about every version under 'versions'. - - Returns: - AddonInfo: Addon info with available versions. 
- """ - - # server payload contains info about all versions - addon_name = data["name"] - title = data.get("title") or addon_name - - src_versions = data.get("versions") or {} - dst_versions = { - addon_version: AddonVersionInfo.from_dict( - addon_name, title, addon_version, version_data - ) - for addon_version, version_data in src_versions.items() - } - return cls( - name=addon_name, - versions=dst_versions, - description=data.get("description"), - title=data.get("title") or addon_name, - license=data.get("license"), - authors=data.get("authors") - ) - - -@attr.s -class DependencyItem(object): - """Object matching payload from Server about single dependency package""" - name = attr.ib() - platform_name = attr.ib() - checksum = attr.ib() - sources = attr.ib(default=attr.Factory(list)) - unknown_sources = attr.ib(default=attr.Factory(list)) - source_addons = attr.ib(default=attr.Factory(dict)) - python_modules = attr.ib(default=attr.Factory(dict)) - - @classmethod - def from_dict(cls, package): - src_sources = package.get("sources") or [] - for source in src_sources: - if source.get("type") == "server" and not source.get("filename"): - source["filename"] = package["filename"] - sources, unknown_sources = prepare_sources(src_sources) - return cls( - name=package["filename"], - platform_name=package["platform"], - sources=sources, - unknown_sources=unknown_sources, - checksum=package["checksum"], - source_addons=package["sourceAddons"], - python_modules=package["pythonModules"] - ) - - -@attr.s -class Installer: - version = attr.ib() - filename = attr.ib() - platform_name = attr.ib() - size = attr.ib() - checksum = attr.ib() - python_version = attr.ib() - python_modules = attr.ib() - sources = attr.ib(default=attr.Factory(list)) - unknown_sources = attr.ib(default=attr.Factory(list)) - - @classmethod - def from_dict(cls, installer_info): - sources, unknown_sources = prepare_sources( - installer_info.get("sources")) - - return cls( - version=installer_info["version"], - filename=installer_info["filename"], - platform_name=installer_info["platform"], - size=installer_info["size"], - sources=sources, - unknown_sources=unknown_sources, - checksum=installer_info["checksum"], - python_version=installer_info["pythonVersion"], - python_modules=installer_info["pythonModules"] - ) - - -@attr.s -class Bundle: - """Class representing bundle information.""" - - name = attr.ib() - installer_version = attr.ib() - addon_versions = attr.ib(default=attr.Factory(dict)) - dependency_packages = attr.ib(default=attr.Factory(dict)) - is_production = attr.ib(default=False) - is_staging = attr.ib(default=False) - - @classmethod - def from_dict(cls, data): - return cls( - name=data["name"], - installer_version=data.get("installerVersion"), - addon_versions=data.get("addons", {}), - dependency_packages=data.get("dependencyPackages", {}), - is_production=data["isProduction"], - is_staging=data["isStaging"], - ) diff --git a/common/ayon_common/distribution/downloaders.py b/common/ayon_common/distribution/downloaders.py deleted file mode 100644 index 23280176c32..00000000000 --- a/common/ayon_common/distribution/downloaders.py +++ /dev/null @@ -1,250 +0,0 @@ -import os -import logging -import platform -from abc import ABCMeta, abstractmethod - -import ayon_api - -from .file_handler import RemoteFileHandler -from .data_structures import UrlType - - -class SourceDownloader(metaclass=ABCMeta): - """Abstract class for source downloader.""" - - log = logging.getLogger(__name__) - - @classmethod - @abstractmethod - def 
download(cls, source, destination_dir, data, transfer_progress):
-        """Returns local path to downloaded addon zip file.
-
-        Transfer progress can be ignored; in that case the file transfer
-        won't be shown as 0-100% but as 'running'. The first step should be
-        to set the destination content size and then add transferred chunk
-        sizes.
-
-        Args:
-            source (dict): Source description, e.g. {"type": "http", "url": "..."}.
-            destination_dir (str): local folder to unzip
-            data (dict): More information about download content. Always has
-                a 'type' key.
-            transfer_progress (ayon_api.TransferProgress): Progress of
-                transferred (copy/download) content.
-
-        Returns:
-            str: Local path to addon zip file.
-        """
-
-        pass
-
-    @classmethod
-    @abstractmethod
-    def cleanup(cls, source, destination_dir, data):
-        """Cleanup files when distribution finishes or crashes.
-
-        Cleanup e.g. temporary files (downloaded zip) or other stuff related
-        to the downloader.
-        """
-
-        pass
-
-    @classmethod
-    def check_hash(cls, addon_path, addon_hash, hash_type="sha256"):
-        """Compares hash of downloaded 'addon_path' file.
-
-        Args:
-            addon_path (str): Local path to addon file.
-            addon_hash (str): Hash of downloaded file.
-            hash_type (str): Type of hash.
-
-        Raises:
-            ValueError: If hashes don't match.
-        """
-
-        if not os.path.exists(addon_path):
-            raise ValueError(f"{addon_path} doesn't exist.")
-        if not RemoteFileHandler.check_integrity(
-            addon_path, addon_hash, hash_type=hash_type
-        ):
-            raise ValueError(f"{addon_path} doesn't match expected hash.")
-
-    @classmethod
-    def unzip(cls, addon_zip_path, destination_dir):
-        """Unzips local 'addon_zip_path' to 'destination_dir'.
-
-        Args:
-            addon_zip_path (str): local path to addon zip file
-            destination_dir (str): local folder to unzip
-        """
-
-        RemoteFileHandler.unzip(addon_zip_path, destination_dir)
-        os.remove(addon_zip_path)
-
-
-class OSDownloader(SourceDownloader):
-    """Downloader for files from a local file drive."""
-
-    @classmethod
-    def download(cls, source, destination_dir, data, transfer_progress):
-        # OS doesn't need to download, unzip directly
-        addon_url = source["path"].get(platform.system().lower())
-        if not os.path.exists(addon_url):
-            raise ValueError(f"{addon_url} is not accessible")
-        return addon_url
-
-    @classmethod
-    def cleanup(cls, source, destination_dir, data):
-        # Nothing to do - download does not copy anything
-        pass
-
-
-class HTTPDownloader(SourceDownloader):
-    """Downloader using http or https protocol."""
-
-    CHUNK_SIZE = 100000
-
-    @staticmethod
-    def get_filename(source):
-        source_url = source["url"]
-        filename = source.get("filename")
-        if not filename:
-            filename = os.path.basename(source_url)
-            basename, ext = os.path.splitext(filename)
-            allowed_exts = set(RemoteFileHandler.IMPLEMENTED_ZIP_FORMATS)
-            if ext.lower().lstrip(".") not in allowed_exts:
-                filename = f"{basename}.zip"
-        return filename
-
-    @classmethod
-    def download(cls, source, destination_dir, data, transfer_progress):
-        source_url = source["url"]
-        cls.log.debug(f"Downloading {source_url} to {destination_dir}")
-        headers = source.get("headers")
-        filename = cls.get_filename(source)
-
-        # TODO use transfer progress
-        RemoteFileHandler.download_url(
-            source_url,
-            destination_dir,
-            filename,
-            headers=headers
-        )
-
-        return os.path.join(destination_dir, filename)
-
-    @classmethod
-    def cleanup(cls, source, destination_dir, data):
-        filename = cls.get_filename(source)
-        filepath = os.path.join(destination_dir, filename)
-        if os.path.exists(filepath) and os.path.isfile(filepath):
-            os.remove(filepath)
-
-
-class AyonServerDownloader(SourceDownloader):
-    """Downloads static resource files from the AYON server.
-
-    Expects the env var AYON_SERVER_URL to be filled.
-    """
-
-    CHUNK_SIZE = 8192
-
-    @classmethod
-    def download(cls, source, destination_dir, data, transfer_progress):
-        path = source["path"]
-        filename = source["filename"]
-        if path and not filename:
-            filename = path.split("/")[-1]
-
-        cls.log.debug(f"Downloading {filename} to {destination_dir}")
-
-        _, ext = os.path.splitext(filename)
-        ext = ext.lower().lstrip(".")
-        valid_exts = set(RemoteFileHandler.IMPLEMENTED_ZIP_FORMATS)
-        if ext not in valid_exts:
-            raise ValueError((
-                f"Invalid file extension \"{ext}\"."
-                f" Expected {', '.join(valid_exts)}"
-            ))
-
-        if path:
-            filepath = os.path.join(destination_dir, filename)
-            return ayon_api.download_file(
-                path,
-                filepath,
-                chunk_size=cls.CHUNK_SIZE,
-                progress=transfer_progress
-            )
-
-        # dst_filepath = os.path.join(destination_dir, filename)
-        if data["type"] == "dependency_package":
-            return ayon_api.download_dependency_package(
-                data["name"],
-                destination_dir,
-                filename,
-                platform_name=data["platform"],
-                chunk_size=cls.CHUNK_SIZE,
-                progress=transfer_progress
-            )
-
-        if data["type"] == "addon":
-            return ayon_api.download_addon_private_file(
-                data["name"],
-                data["version"],
-                filename,
-                destination_dir,
-                chunk_size=cls.CHUNK_SIZE,
-                progress=transfer_progress
-            )
-
-        raise ValueError(f"Unknown type to download \"{data['type']}\"")
-
-    @classmethod
-    def cleanup(cls, source, destination_dir, data):
-        filename = source["filename"]
-        filepath = os.path.join(destination_dir, filename)
-        if os.path.exists(filepath) and os.path.isfile(filepath):
-            os.remove(filepath)
-
-
-class DownloadFactory:
-    """Factory for downloaders."""
-
-    def __init__(self):
-        self._downloaders = {}
-
-    def register_format(self, downloader_type, downloader):
-        """Register downloader for a download type.
-
-        Args:
-            downloader_type (UrlType): Type of source.
-            downloader (SourceDownloader): Downloader which takes care of
-                download, hash check and unzipping.
-        """
-
-        self._downloaders[downloader_type.value] = downloader
-
-    def get_downloader(self, downloader_type):
-        """Get registered downloader for a source type.
-
-        Args:
-            downloader_type (UrlType): Type of source.
-
-        Returns:
-            SourceDownloader: Downloader object which takes care of file
-                distribution.
-
-        Raises:
-            ValueError: If type does not have registered downloader.
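-
-        Example:
-            An illustrative sketch using the default factory below:
-
-            >>> factory = get_default_download_factory()
-            >>> downloader = factory.get_downloader(UrlType.HTTP.value)
-            >>> isinstance(downloader, HTTPDownloader)
-            True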
- """ - - if downloader := self._downloaders.get(downloader_type): - return downloader() - raise ValueError(f"{downloader_type} not implemented") - - -def get_default_download_factory(): - download_factory = DownloadFactory() - download_factory.register_format(UrlType.FILESYSTEM, OSDownloader) - download_factory.register_format(UrlType.HTTP, HTTPDownloader) - download_factory.register_format(UrlType.SERVER, AyonServerDownloader) - return download_factory diff --git a/common/ayon_common/distribution/tests/test_addon_distributtion.py b/common/ayon_common/distribution/tests/test_addon_distributtion.py deleted file mode 100644 index 3e7bd1bc6a4..00000000000 --- a/common/ayon_common/distribution/tests/test_addon_distributtion.py +++ /dev/null @@ -1,248 +0,0 @@ -import os -import sys -import copy -import tempfile - - -import attr -import pytest - -current_dir = os.path.dirname(os.path.abspath(__file__)) -root_dir = os.path.abspath(os.path.join(current_dir, "..", "..", "..", "..")) -sys.path.append(root_dir) - -from common.ayon_common.distribution.downloaders import ( - DownloadFactory, - OSDownloader, - HTTPDownloader, -) -from common.ayon_common.distribution.control import ( - AyonDistribution, - UpdateState, -) -from common.ayon_common.distribution.data_structures import ( - AddonInfo, - UrlType, -) - - -@pytest.fixture -def download_factory(): - addon_downloader = DownloadFactory() - addon_downloader.register_format(UrlType.FILESYSTEM, OSDownloader) - addon_downloader.register_format(UrlType.HTTP, HTTPDownloader) - - yield addon_downloader - - -@pytest.fixture -def http_downloader(download_factory): - yield download_factory.get_downloader(UrlType.HTTP.value) - - -@pytest.fixture -def temp_folder(): - yield tempfile.mkdtemp(prefix="ayon_test_") - - -@pytest.fixture -def sample_bundles(): - yield { - "bundles": [ - { - "name": "TestBundle", - "createdAt": "2023-06-29T00:00:00.0+00:00", - "installerVersion": None, - "addons": { - "slack": "1.0.0" - }, - "dependencyPackages": {}, - "isProduction": True, - "isStaging": False - } - ], - "productionBundle": "TestBundle", - "stagingBundle": None - } - - -@pytest.fixture -def sample_addon_info(): - yield { - "name": "slack", - "title": "Slack addon", - "versions": { - "1.0.0": { - "hasSettings": True, - "hasSiteSettings": False, - "clientPyproject": { - "tool": { - "poetry": { - "dependencies": { - "nxtools": "^1.6", - "orjson": "^3.6.7", - "typer": "^0.4.1", - "email-validator": "^1.1.3", - "python": "^3.10", - "fastapi": "^0.73.0" - } - } - } - }, - "clientSourceInfo": [ - { - "type": "http", - "path": "https://drive.google.com/file/d/1TcuV8c2OV8CcbPeWi7lxOdqWsEqQNPYy/view?usp=sharing", # noqa - "filename": "dummy.zip" - }, - { - "type": "filesystem", - "path": { - "windows": "P:/sources/some_file.zip", - "linux": "/mnt/srv/sources/some_file.zip", - "darwin": "/Volumes/srv/sources/some_file.zip" - } - } - ], - "frontendScopes": { - "project": { - "sidebar": "hierarchy", - } - }, - "hash": "4be25eb6215e91e5894d3c5475aeb1e379d081d3f5b43b4ee15b0891cf5f5658" # noqa - } - }, - "description": "" - } - - -def test_register(printer): - download_factory = DownloadFactory() - - assert len(download_factory._downloaders) == 0, "Contains registered" - - download_factory.register_format(UrlType.FILESYSTEM, OSDownloader) - assert len(download_factory._downloaders) == 1, "Should contain one" - - -def test_get_downloader(printer, download_factory): - assert download_factory.get_downloader(UrlType.FILESYSTEM.value), "Should find" # noqa - - with 
pytest.raises(ValueError): - download_factory.get_downloader("unknown"), "Shouldn't find" - - -def test_addon_info(printer, sample_addon_info): - """Tests parsing of expected payload from v4 server into AadonInfo.""" - valid_minimum = { - "name": "slack", - "versions": { - "1.0.0": { - "clientSourceInfo": [ - { - "type": "filesystem", - "path": { - "windows": "P:/sources/some_file.zip", - "linux": "/mnt/srv/sources/some_file.zip", - "darwin": "/Volumes/srv/sources/some_file.zip" - } - } - ] - } - } - } - - assert AddonInfo.from_dict(valid_minimum), "Missing required fields" - - addon = AddonInfo.from_dict(sample_addon_info) - assert addon, "Should be created" - assert addon.name == "slack", "Incorrect name" - assert "1.0.0" in addon.versions, "Version is not in versions" - - with pytest.raises(TypeError): - assert addon["name"], "Dict approach not implemented" - - addon_as_dict = attr.asdict(addon) - assert addon_as_dict["name"], "Dict approach should work" - - -def _get_dist_item(dist_items, name, version): - final_dist_info = next( - ( - dist_info - for dist_info in dist_items - if ( - dist_info["addon_name"] == name - and dist_info["addon_version"] == version - ) - ), - {} - ) - return final_dist_info["dist_item"] - - -def test_update_addon_state( - printer, sample_addon_info, temp_folder, download_factory, sample_bundles -): - """Tests possible cases of addon update.""" - - addon_version = list(sample_addon_info["versions"])[0] - broken_addon_info = copy.deepcopy(sample_addon_info) - - # Cause crash because of invalid hash - broken_addon_info["versions"][addon_version]["hash"] = "brokenhash" - distribution = AyonDistribution( - addon_dirpath=temp_folder, - dependency_dirpath=temp_folder, - dist_factory=download_factory, - addons_info=[broken_addon_info], - dependency_packages_info=[], - bundles_info=sample_bundles - ) - distribution.distribute() - dist_items = distribution.get_addon_dist_items() - slack_dist_item = _get_dist_item( - dist_items, - sample_addon_info["name"], - addon_version - ) - slack_state = slack_dist_item.state - assert slack_state == UpdateState.UPDATE_FAILED, ( - "Update should have failed because of wrong hash") - - # Fix cache and validate if was updated - distribution = AyonDistribution( - addon_dirpath=temp_folder, - dependency_dirpath=temp_folder, - dist_factory=download_factory, - addons_info=[sample_addon_info], - dependency_packages_info=[], - bundles_info=sample_bundles - ) - distribution.distribute() - dist_items = distribution.get_addon_dist_items() - slack_dist_item = _get_dist_item( - dist_items, - sample_addon_info["name"], - addon_version - ) - assert slack_dist_item.state == UpdateState.UPDATED, ( - "Addon should have been updated") - - # Is UPDATED without calling distribute - distribution = AyonDistribution( - addon_dirpath=temp_folder, - dependency_dirpath=temp_folder, - dist_factory=download_factory, - addons_info=[sample_addon_info], - dependency_packages_info=[], - bundles_info=sample_bundles - ) - dist_items = distribution.get_addon_dist_items() - slack_dist_item = _get_dist_item( - dist_items, - sample_addon_info["name"], - addon_version - ) - assert slack_dist_item.state == UpdateState.UPDATED, ( - "Addon should already exist") diff --git a/common/ayon_common/distribution/ui/missing_bundle_window.py b/common/ayon_common/distribution/ui/missing_bundle_window.py deleted file mode 100644 index ae7a6a2976a..00000000000 --- a/common/ayon_common/distribution/ui/missing_bundle_window.py +++ /dev/null @@ -1,146 +0,0 @@ -import sys - -from qtpy 
import QtWidgets, QtGui - -from ayon_common import is_staging_enabled -from ayon_common.resources import ( - get_icon_path, - load_stylesheet, -) -from ayon_common.ui_utils import get_qt_app - - -class MissingBundleWindow(QtWidgets.QDialog): - default_width = 410 - default_height = 170 - - def __init__( - self, url=None, bundle_name=None, use_staging=None, parent=None - ): - super().__init__(parent) - - icon_path = get_icon_path() - icon = QtGui.QIcon(icon_path) - self.setWindowIcon(icon) - self.setWindowTitle("Missing Bundle") - - self._url = url - self._bundle_name = bundle_name - self._use_staging = use_staging - self._first_show = True - - info_label = QtWidgets.QLabel("", self) - info_label.setWordWrap(True) - - btns_widget = QtWidgets.QWidget(self) - confirm_btn = QtWidgets.QPushButton("Exit", btns_widget) - - btns_layout = QtWidgets.QHBoxLayout(btns_widget) - btns_layout.setContentsMargins(0, 0, 0, 0) - btns_layout.addStretch(1) - btns_layout.addWidget(confirm_btn, 0) - - main_layout = QtWidgets.QVBoxLayout(self) - main_layout.addWidget(info_label, 0) - main_layout.addStretch(1) - main_layout.addWidget(btns_widget, 0) - - confirm_btn.clicked.connect(self._on_confirm_click) - - self._info_label = info_label - self._confirm_btn = confirm_btn - - self._update_label() - - def set_url(self, url): - if url == self._url: - return - self._url = url - self._update_label() - - def set_bundle_name(self, bundle_name): - if bundle_name == self._bundle_name: - return - self._bundle_name = bundle_name - self._update_label() - - def set_use_staging(self, use_staging): - if self._use_staging == use_staging: - return - self._use_staging = use_staging - self._update_label() - - def showEvent(self, event): - super().showEvent(event) - if self._first_show: - self._first_show = False - self._on_first_show() - self._recalculate_sizes() - - def resizeEvent(self, event): - super().resizeEvent(event) - self._recalculate_sizes() - - def _recalculate_sizes(self): - hint = self._confirm_btn.sizeHint() - new_width = max((hint.width(), hint.height() * 3)) - self._confirm_btn.setMinimumWidth(new_width) - - def _on_first_show(self): - self.setStyleSheet(load_stylesheet()) - self.resize(self.default_width, self.default_height) - - def _on_confirm_click(self): - self.accept() - self.close() - - def _update_label(self): - self._info_label.setText(self._get_label()) - - def _get_label(self): - url_part = f" {self._url}" if self._url else "" - - if self._bundle_name: - return ( - f"Requested release bundle {self._bundle_name}" - f" is not available on server{url_part}." - "

Try to restart AYON desktop launcher. Please"
-                " contact your administrator if the issue persists."
-            )
-        mode = "staging" if self._use_staging else "production"
-        return (
-            f"No release bundle is set as {mode} on the AYON"
-            f" server{url_part} so there is nothing to launch."
-            "

Please contact your administrator"
-            " to resolve the issue."
-        )
-
-
-def main():
-    """Show a message that the server has no bundle set to use.
-
-    It is possible to pass a url as an argument to show it in the message.
-    To use this feature, pass `--url <url>` as an argument to this script.
-    """
-
-    url = None
-    bundle_name = None
-    if "--url" in sys.argv:
-        url_index = sys.argv.index("--url") + 1
-        if url_index < len(sys.argv):
-            url = sys.argv[url_index]
-
-    if "--bundle" in sys.argv:
-        bundle_index = sys.argv.index("--bundle") + 1
-        if bundle_index < len(sys.argv):
-            bundle_name = sys.argv[bundle_index]
-
-    use_staging = is_staging_enabled()
-    app = get_qt_app()
-    window = MissingBundleWindow(url, bundle_name, use_staging)
-    window.show()
-    app.exec_()
-
-
-if __name__ == "__main__":
-    main()
diff --git a/common/ayon_common/distribution/utils.py b/common/ayon_common/distribution/utils.py
deleted file mode 100644
index a8b755707af..00000000000
--- a/common/ayon_common/distribution/utils.py
+++ /dev/null
@@ -1,90 +0,0 @@
-import os
-import subprocess
-
-from ayon_common.utils import get_ayon_appdirs, get_ayon_launch_args
-
-
-def get_local_dir(*subdirs):
-    """Get product directory in user's home directory.
-
-    Each user on a machine has their own local directory where updates,
-    addons etc. are downloaded.
-
-    Returns:
-        str: Path to product local directory.
-    """
-
-    if not subdirs:
-        raise ValueError("At least one subdirectory name must be provided!")
-
-    local_dir = get_ayon_appdirs(*subdirs)
-    if not os.path.isdir(local_dir):
-        try:
-            os.makedirs(local_dir)
-        except Exception:  # TODO fix exception
-            raise RuntimeError(f"Cannot create {local_dir}")
-
-    return local_dir
-
-
-def get_addons_dir():
-    """Directory where addon packages are stored.
-
-    Path to addons is defined using the python module 'appdirs'.
-
-    The path is stored into environment variable 'AYON_ADDONS_DIR'.
-    Value of the environment variable can be overridden, but we highly
-    recommend to use that option only for development purposes.
-
-    Returns:
-        str: Path to directory where addons should be downloaded.
-    """
-
-    addons_dir = os.environ.get("AYON_ADDONS_DIR")
-    if not addons_dir:
-        addons_dir = get_local_dir("addons")
-        os.environ["AYON_ADDONS_DIR"] = addons_dir
-    return addons_dir
-
-
-def get_dependencies_dir():
-    """Directory where dependency packages are stored.
-
-    Path to dependency packages is defined using the python module 'appdirs'.
-
-    The path is stored into environment variable 'AYON_DEPENDENCIES_DIR'.
-    Value of the environment variable can be overridden, but we highly
-    recommend to use that option only for development purposes.
-
-    Returns:
-        str: Path to directory where dependency packages should be downloaded.
-    """
-
-    dependencies_dir = os.environ.get("AYON_DEPENDENCIES_DIR")
-    if not dependencies_dir:
-        dependencies_dir = get_local_dir("dependency_packages")
-        os.environ["AYON_DEPENDENCIES_DIR"] = dependencies_dir
-    return dependencies_dir
-
-
-def show_missing_bundle_information(url, bundle_name=None):
-    """Show missing bundle information window.
-
-    This function should be called when the server does not have a bundle
-    set for production or staging, or when the bundle that should be used
-    is not available on the server.
-
-    Uses subprocess to show the dialog. The call blocks until the dialog
-    is closed.
-
-    Args:
-        url (str): Server url where bundle is not set.
-        bundle_name (Optional[str]): Name of bundle that was not found.
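-
-    Example:
-        An illustrative call with a hypothetical server url; it blocks
-        until the dialog is closed:
-
-        >>> show_missing_bundle_information(
-        ...     "https://ayon.example.com", bundle_name="my-bundle"
-        ... )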
- """ - - ui_dir = os.path.join(os.path.dirname(__file__), "ui") - script_path = os.path.join(ui_dir, "missing_bundle_window.py") - args = get_ayon_launch_args(script_path, "--skip-bootstrap", "--url", url) - if bundle_name: - args.extend(["--bundle", bundle_name]) - subprocess.call(args) diff --git a/common/ayon_common/resources/AYON.icns b/common/ayon_common/resources/AYON.icns deleted file mode 100644 index 2ec66cf3e0b..00000000000 Binary files a/common/ayon_common/resources/AYON.icns and /dev/null differ diff --git a/common/ayon_common/resources/AYON.ico b/common/ayon_common/resources/AYON.ico deleted file mode 100644 index e0ec3292f85..00000000000 Binary files a/common/ayon_common/resources/AYON.ico and /dev/null differ diff --git a/common/ayon_common/resources/AYON.png b/common/ayon_common/resources/AYON.png deleted file mode 100644 index ed13aeea527..00000000000 Binary files a/common/ayon_common/resources/AYON.png and /dev/null differ diff --git a/common/ayon_common/resources/AYON_staging.png b/common/ayon_common/resources/AYON_staging.png deleted file mode 100644 index 75dadfd56c8..00000000000 Binary files a/common/ayon_common/resources/AYON_staging.png and /dev/null differ diff --git a/common/ayon_common/resources/__init__.py b/common/ayon_common/resources/__init__.py deleted file mode 100644 index 2b516feff3d..00000000000 --- a/common/ayon_common/resources/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -import os - -from ayon_common.utils import is_staging_enabled - -RESOURCES_DIR = os.path.dirname(os.path.abspath(__file__)) - - -def get_resource_path(*args): - path_items = list(args) - path_items.insert(0, RESOURCES_DIR) - return os.path.sep.join(path_items) - - -def get_icon_path(): - if is_staging_enabled(): - return get_resource_path("AYON_staging.png") - return get_resource_path("AYON.png") - - -def load_stylesheet(): - stylesheet_path = get_resource_path("stylesheet.css") - - with open(stylesheet_path, "r") as stream: - content = stream.read() - return content diff --git a/common/ayon_common/resources/edit.png b/common/ayon_common/resources/edit.png deleted file mode 100644 index a5a07998a65..00000000000 Binary files a/common/ayon_common/resources/edit.png and /dev/null differ diff --git a/common/ayon_common/resources/eye.png b/common/ayon_common/resources/eye.png deleted file mode 100644 index 5a683e29748..00000000000 Binary files a/common/ayon_common/resources/eye.png and /dev/null differ diff --git a/common/ayon_common/resources/stylesheet.css b/common/ayon_common/resources/stylesheet.css deleted file mode 100644 index 01e664e9e8a..00000000000 --- a/common/ayon_common/resources/stylesheet.css +++ /dev/null @@ -1,84 +0,0 @@ -* { - font-size: 10pt; - font-family: "Noto Sans"; - font-weight: 450; - outline: none; -} - -QWidget { - color: #D3D8DE; - background: #2C313A; - border-radius: 0px; -} - -QWidget:disabled { - color: #5b6779; -} - -QLabel { - background: transparent; -} - -QPushButton { - text-align:center center; - border: 0px solid transparent; - border-radius: 0.2em; - padding: 3px 5px 3px 5px; - background: #434a56; -} - -QPushButton:hover { - background: rgba(168, 175, 189, 0.3); - color: #F0F2F5; -} - -QPushButton:pressed {} - -QPushButton:disabled { - background: #434a56; -} - -QLineEdit { - border: 1px solid #373D48; - border-radius: 0.3em; - background: #21252B; - padding: 0.1em; -} - -QLineEdit:disabled { - background: #2C313A; -} -QLineEdit:hover { - border-color: rgba(168, 175, 189, .3); -} -QLineEdit:focus { - border-color: rgb(92, 173, 214); -} - 
-QLineEdit[state="invalid"] { - border-color: #AA5050; -} - -#Separator { - background: rgba(75, 83, 98, 127); -} - -#PasswordBtn { - border: none; - padding: 0.1em; - background: transparent; -} - -#PasswordBtn:hover { - background: #434a56; -} - -#LikeDisabledInput { - background: #2C313A; -} -#LikeDisabledInput:hover { - border-color: #373D48; -} -#LikeDisabledInput:focus { - border-color: #373D48; -} diff --git a/common/ayon_common/ui_utils.py b/common/ayon_common/ui_utils.py deleted file mode 100644 index a3894d0d9cd..00000000000 --- a/common/ayon_common/ui_utils.py +++ /dev/null @@ -1,36 +0,0 @@ -import sys -from qtpy import QtWidgets, QtCore - - -def set_style_property(widget, property_name, property_value): - """Set widget's property that may affect style. - - Style of widget is polished if current property value is different. - """ - - cur_value = widget.property(property_name) - if cur_value == property_value: - return - widget.setProperty(property_name, property_value) - widget.style().polish(widget) - - -def get_qt_app(): - app = QtWidgets.QApplication.instance() - if app is not None: - return app - - for attr_name in ( - "AA_EnableHighDpiScaling", - "AA_UseHighDpiPixmaps", - ): - attr = getattr(QtCore.Qt, attr_name, None) - if attr is not None: - QtWidgets.QApplication.setAttribute(attr) - - if hasattr(QtWidgets.QApplication, "setHighDpiScaleFactorRoundingPolicy"): - QtWidgets.QApplication.setHighDpiScaleFactorRoundingPolicy( - QtCore.Qt.HighDpiScaleFactorRoundingPolicy.PassThrough - ) - - return QtWidgets.QApplication(sys.argv) diff --git a/common/ayon_common/utils.py b/common/ayon_common/utils.py deleted file mode 100644 index c0d0c7c0b10..00000000000 --- a/common/ayon_common/utils.py +++ /dev/null @@ -1,90 +0,0 @@ -import os -import sys -import appdirs - -IS_BUILT_APPLICATION = getattr(sys, "frozen", False) - - -def get_ayon_appdirs(*args): - """Local app data directory of AYON client. - - Args: - *args (Iterable[str]): Subdirectories/files in local app data dir. - - Returns: - str: Path to directory/file in local app data dir. - """ - - return os.path.join( - appdirs.user_data_dir("AYON", "Ynput"), - *args - ) - - -def is_staging_enabled(): - """Check if staging is enabled. - - Returns: - bool: True if staging is enabled. - """ - - return os.getenv("AYON_USE_STAGING") == "1" - - -def _create_local_site_id(): - """Create a local site identifier. - - Returns: - str: Randomly generated site id. - """ - - from coolname import generate_slug - - new_id = generate_slug(3) - - print("Created local site id \"{}\"".format(new_id)) - - return new_id - - -def get_local_site_id(): - """Get local site identifier. - - Site id is created if does not exist yet. - - Returns: - str: Site id. - """ - - # used for background syncing - site_id = os.environ.get("AYON_SITE_ID") - if site_id: - return site_id - - site_id_path = get_ayon_appdirs("site_id") - if os.path.exists(site_id_path): - with open(site_id_path, "r") as stream: - site_id = stream.read() - - if not site_id: - site_id = _create_local_site_id() - with open(site_id_path, "w") as stream: - stream.write(site_id) - return site_id - - -def get_ayon_launch_args(*args): - """Launch arguments that can be used to launch ayon process. - - Args: - *args (str): Additional arguments. - - Returns: - list[str]: Launch arguments. 
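-
-    Example:
-        A rough sketch; the executable and script path depend on how the
-        application was installed (paths below are hypothetical):
-
-        >>> get_ayon_launch_args("--skip-bootstrap")  # doctest: +SKIP
-        ['/usr/bin/python3', '/opt/ayon/start.py', '--skip-bootstrap']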
- """ - - output = [sys.executable] - if not IS_BUILT_APPLICATION: - output.append(sys.argv[0]) - output.extend(args) - return output diff --git a/openpype/action.py b/openpype/action.py deleted file mode 100644 index 6114c65fd44..00000000000 --- a/openpype/action.py +++ /dev/null @@ -1,135 +0,0 @@ -import warnings -import functools -import pyblish.api - - -class ActionDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", ActionDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=ActionDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.pipeline.publish.get_errored_instances_from_context") -def get_errored_instances_from_context(context, plugin=None): - """ - Deprecated: - Since 3.14.* will be removed in 3.16.* or later. - """ - - from openpype.pipeline.publish import get_errored_instances_from_context - - return get_errored_instances_from_context(context, plugin=plugin) - - -@deprecated("openpype.pipeline.publish.get_errored_plugins_from_context") -def get_errored_plugins_from_data(context): - """ - Deprecated: - Since 3.14.* will be removed in 3.16.* or later. - """ - - from openpype.pipeline.publish import get_errored_plugins_from_context - - return get_errored_plugins_from_context(context) - - -class RepairAction(pyblish.api.Action): - """Repairs the action - - To process the repairing this requires a static `repair(instance)` method - is available on the plugin. - - Deprecated: - 'RepairAction' and 'RepairContextAction' were moved to - 'openpype.pipeline.publish' please change you imports. - There is no "reasonable" way hot mark these classes as deprecated - to show warning of wrong import. Deprecated since 3.14.* will be - removed in 3.16.* - - """ - label = "Repair" - on = "failed" # This action is only available on a failed plug-in - icon = "wrench" # Icon from Awesome Icon - - def process(self, context, plugin): - - if not hasattr(plugin, "repair"): - raise RuntimeError("Plug-in does not have repair method.") - - # Get the errored instances - self.log.info("Finding failed instances..") - errored_instances = get_errored_instances_from_context(context, - plugin=plugin) - for instance in errored_instances: - plugin.repair(instance) - - -class RepairContextAction(pyblish.api.Action): - """Repairs the action - - To process the repairing this requires a static `repair(instance)` method - is available on the plugin. - - Deprecated: - 'RepairAction' and 'RepairContextAction' were moved to - 'openpype.pipeline.publish' please change you imports. - There is no "reasonable" way hot mark these classes as deprecated - to show warning of wrong import. 
Deprecated since 3.14.* will be - removed in 3.16.* - - """ - label = "Repair" - on = "failed" # This action is only available on a failed plug-in - - def process(self, context, plugin): - - if not hasattr(plugin, "repair"): - raise RuntimeError("Plug-in does not have repair method.") - - # Get the errored instances - self.log.info("Finding failed instances..") - errored_plugins = get_errored_plugins_from_data(context) - - # Apply pyblish.logic to get the instances for the plug-in - if plugin in errored_plugins: - self.log.info("Attempting fix ...") - plugin.repair(context) diff --git a/openpype/cli.py b/openpype/cli.py index bc837cdeba8..0df277fb0a1 100644 --- a/openpype/cli.py +++ b/openpype/cli.py @@ -5,6 +5,7 @@ import code import click +from openpype import AYON_SERVER_ENABLED from .pype_commands import PypeCommands @@ -46,7 +47,11 @@ def main(ctx): if ctx.invoked_subcommand is None: # Print help if headless mode is used - if os.environ.get("OPENPYPE_HEADLESS_MODE") == "1": + if AYON_SERVER_ENABLED: + is_headless = os.getenv("AYON_HEADLESS_MODE") == "1" + else: + is_headless = os.getenv("OPENPYPE_HEADLESS_MODE") == "1" + if is_headless: print(ctx.get_help()) sys.exit(0) else: @@ -57,6 +62,9 @@ def main(ctx): @click.option("-d", "--dev", is_flag=True, help="Settings in Dev mode") def settings(dev): """Show Pype Settings UI.""" + + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'settings' command.") PypeCommands().launch_settings_gui(dev) @@ -110,6 +118,8 @@ def eventserver(ftrack_url, on linux and window service). """ + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'eventserver' command.") PypeCommands().launch_eventservercli( ftrack_url, ftrack_user, @@ -134,6 +144,10 @@ def webpublisherwebserver(executable, upload_dir, host=None, port=None): Expect "pype.club" user created on Ftrack. """ + if AYON_SERVER_ENABLED: + raise RuntimeError( + "AYON does not support 'webpublisherwebserver' command." + ) PypeCommands().launch_webpublisher_webservercli( upload_dir=upload_dir, executable=executable, @@ -182,43 +196,10 @@ def publish(paths, targets, gui): PypeCommands.publish(list(paths), targets, gui) -@main.command() -@click.argument("path") -@click.option("-h", "--host", help="Host") -@click.option("-u", "--user", help="User email address") -@click.option("-p", "--project", help="Project") -@click.option("-t", "--targets", help="Targets", default=None, - multiple=True) -def remotepublishfromapp(project, path, host, user=None, targets=None): - """Start CLI publishing. - - Publish collects json from paths provided as an argument. - More than one path is allowed. - """ - - PypeCommands.remotepublishfromapp( - project, path, host, user, targets=targets - ) - - -@main.command() -@click.argument("path") -@click.option("-u", "--user", help="User email address") -@click.option("-p", "--project", help="Project") -@click.option("-t", "--targets", help="Targets", default=None, - multiple=True) -def remotepublish(project, path, user=None, targets=None): - """Start CLI publishing. - - Publish collects json from paths provided as an argument. - More than one path is allowed. 
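-
-    Example:
-        An illustrative invocation with hypothetical values:
-
-            openpype_console remotepublish -p MyProject /path/to/metadata.json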
- """ - - PypeCommands.remotepublish(project, path, user, targets=targets) - - @main.command(context_settings={"ignore_unknown_options": True}) def projectmanager(): + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'projectmanager' command.") PypeCommands().launch_project_manager() @@ -316,12 +297,18 @@ def runtests(folder, mark, pyargs, test_data_folder, persist, app_variant, persist, app_variant, timeout, setup_only) -@main.command() +@main.command(help="DEPRECATED - run sync server") +@click.pass_context @click.option("-a", "--active_site", required=True, - help="Name of active stie") -def syncserver(active_site): + help="Name of active site") +def syncserver(ctx, active_site): """Run sync site server in background. + Deprecated: + This command is deprecated and will be removed in future versions. + Use '~/openpype_console module sync_server syncservice' instead. + + Details: Some Site Sync use cases need to expose site to another one. For example if majority of artists work in studio, they are not using SS at all, but if you want to expose published assets to 'studio' site @@ -335,7 +322,12 @@ def syncserver(active_site): var OPENPYPE_LOCAL_ID set to 'active_site'. """ - PypeCommands().syncserver(active_site) + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'syncserver' command.") + + from openpype.modules.sync_server.sync_server_module import ( + syncservice) + ctx.invoke(syncservice, active_site=active_site) @main.command() @@ -347,6 +339,8 @@ def repack_version(directory): recalculating file checksums. It will try to use version detected in directory name. """ + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'repack-version' command.") PypeCommands().repack_version(directory) @@ -358,6 +352,9 @@ def repack_version(directory): "--dbonly", help="Store only Database data", default=False, is_flag=True) def pack_project(project, dirpath, dbonly): """Create a package of project with all files and database dump.""" + + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'pack-project' command.") PypeCommands().pack_project(project, dirpath, dbonly) @@ -370,6 +367,8 @@ def pack_project(project, dirpath, dbonly): "--dbonly", help="Store only Database data", default=False, is_flag=True) def unpack_project(zipfile, root, dbonly): """Create a package of project with all files and database dump.""" + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'unpack-project' command.") PypeCommands().unpack_project(zipfile, root, dbonly) @@ -384,9 +383,17 @@ def interactive(): Executable 'openpype_gui' on Windows won't work. 
""" - from openpype.version import __version__ + if AYON_SERVER_ENABLED: + version = os.environ["AYON_VERSION"] + banner = ( + f"AYON launcher {version}\nPython {sys.version} on {sys.platform}" + ) + else: + from openpype.version import __version__ - banner = f"OpenPype {__version__}\nPython {sys.version} on {sys.platform}" + banner = ( + f"OpenPype {__version__}\nPython {sys.version} on {sys.platform}" + ) code.interact(banner) @@ -395,11 +402,13 @@ def interactive(): is_flag=True, default=False) def version(build): """Print OpenPype version.""" + if AYON_SERVER_ENABLED: + print(os.environ["AYON_VERSION"]) + return from openpype.version import __version__ from igniter.bootstrap_repos import BootstrapRepos, OpenPypeVersion from pathlib import Path - import os if getattr(sys, 'frozen', False): local_version = BootstrapRepos.get_version( diff --git a/openpype/client/mongo/__init__.py b/openpype/client/mongo/__init__.py index 5c5143a7310..9f62d7a9cfa 100644 --- a/openpype/client/mongo/__init__.py +++ b/openpype/client/mongo/__init__.py @@ -6,6 +6,9 @@ OpenPypeMongoConnection, get_project_database, get_project_connection, + load_json_file, + replace_project_documents, + store_project_documents, ) @@ -17,4 +20,7 @@ "OpenPypeMongoConnection", "get_project_database", "get_project_connection", + "load_json_file", + "replace_project_documents", + "store_project_documents", ) diff --git a/openpype/client/mongo/entity_links.py b/openpype/client/mongo/entity_links.py index c97a828118b..fd13a2d83be 100644 --- a/openpype/client/mongo/entity_links.py +++ b/openpype/client/mongo/entity_links.py @@ -212,16 +212,12 @@ def _process_referenced_pipeline_result(result, link_type): continue for output in sorted(outputs_recursive, key=lambda o: o["depth"]): - output_links = output.get("data", {}).get("inputLinks") - if not output_links and output["type"] != "hero_version": - continue - # Leaf if output["_id"] not in correctly_linked_ids: continue _filter_input_links( - output_links, + output.get("data", {}).get("inputLinks"), link_type, correctly_linked_ids ) diff --git a/openpype/client/server/conversion_utils.py b/openpype/client/server/conversion_utils.py index 24d46780957..a6c190a0fc9 100644 --- a/openpype/client/server/conversion_utils.py +++ b/openpype/client/server/conversion_utils.py @@ -133,7 +133,6 @@ def _get_default_template_name(templates): def _template_replacements_to_v3(template): return ( template - .replace("{folder[name]}", "{asset}") .replace("{product[name]}", "{subset}") .replace("{product[type]}", "{family}") ) @@ -715,7 +714,6 @@ def convert_v4_representation_to_v3(representation): if "template" in output_data: output_data["template"] = ( output_data["template"] - .replace("{folder[name]}", "{asset}") .replace("{product[name]}", "{subset}") .replace("{product[type]}", "{family}") ) @@ -977,7 +975,6 @@ def convert_create_representation_to_v4(representation, con): representation_data = representation["data"] representation_data["template"] = ( representation_data["template"] - .replace("{asset}", "{folder[name]}") .replace("{subset}", "{product[name]}") .replace("{family}", "{product[type]}") ) @@ -1077,7 +1074,7 @@ def convert_update_folder_to_v4(project_name, asset_id, update_data, con): parent_id = None tasks = None new_data = {} - attribs = {} + attribs = full_update_data.pop("attrib", {}) if "type" in update_data: new_update_data["active"] = update_data["type"] == "asset" @@ -1116,6 +1113,9 @@ def convert_update_folder_to_v4(project_name, asset_id, update_data, con): print("Folder has 
new data: {}".format(new_data)) new_update_data["data"] = new_data + if attribs: + new_update_data["attrib"] = attribs + if has_task_changes: raise ValueError("Task changes of folder are not implemented") @@ -1129,7 +1129,7 @@ def convert_update_subset_to_v4(project_name, subset_id, update_data, con): full_update_data = _from_flat_dict(update_data) data = full_update_data.get("data") new_data = {} - attribs = {} + attribs = full_update_data.pop("attrib", {}) if data: if "family" in data: family = data.pop("family") @@ -1151,9 +1151,6 @@ def convert_update_subset_to_v4(project_name, subset_id, update_data, con): elif value is not REMOVED_VALUE: new_data[key] = value - if attribs: - new_update_data["attribs"] = attribs - if "name" in update_data: new_update_data["name"] = update_data["name"] @@ -1168,6 +1165,9 @@ def convert_update_subset_to_v4(project_name, subset_id, update_data, con): new_update_data["folderId"] = update_data["parent"] flat_data = _to_flat_dict(new_update_data) + if attribs: + flat_data["attrib"] = attribs + if new_data: print("Subset has new data: {}".format(new_data)) flat_data["data"] = new_data @@ -1182,7 +1182,7 @@ def convert_update_version_to_v4(project_name, version_id, update_data, con): full_update_data = _from_flat_dict(update_data) data = full_update_data.get("data") new_data = {} - attribs = {} + attribs = full_update_data.pop("attrib", {}) if data: if "author" in data: new_update_data["author"] = data.pop("author") @@ -1199,9 +1199,6 @@ def convert_update_version_to_v4(project_name, version_id, update_data, con): elif value is not REMOVED_VALUE: new_data[key] = value - if attribs: - new_update_data["attribs"] = attribs - if "name" in update_data: new_update_data["version"] = update_data["name"] @@ -1216,6 +1213,9 @@ def convert_update_version_to_v4(project_name, version_id, update_data, con): new_update_data["productId"] = update_data["parent"] flat_data = _to_flat_dict(new_update_data) + if attribs: + flat_data["attrib"] = attribs + if new_data: print("Version has new data: {}".format(new_data)) flat_data["data"] = new_data @@ -1255,7 +1255,7 @@ def convert_update_representation_to_v4( data = full_update_data.get("data") new_data = {} - attribs = {} + attribs = full_update_data.pop("attrib", {}) if data: for key, value in data.items(): if key in folder_attributes: @@ -1266,7 +1266,6 @@ def convert_update_representation_to_v4( if "template" in attribs: attribs["template"] = ( attribs["template"] - .replace("{asset}", "{folder[name]}") .replace("{family}", "{product[type]}") .replace("{subset}", "{product[name]}") ) @@ -1313,6 +1312,9 @@ def convert_update_representation_to_v4( new_update_data["files"] = new_files flat_data = _to_flat_dict(new_update_data) + if attribs: + flat_data["attrib"] = attribs + if new_data: print("Representation has new data: {}".format(new_data)) flat_data["data"] = new_data diff --git a/openpype/client/server/entities.py b/openpype/client/server/entities.py index 9579f13add1..39322627bb5 100644 --- a/openpype/client/server/entities.py +++ b/openpype/client/server/entities.py @@ -83,10 +83,10 @@ def _get_subsets( project_name, subset_ids, subset_names, - folder_ids, - names_by_folder_ids, - active, - fields + folder_ids=folder_ids, + names_by_folder_ids=names_by_folder_ids, + active=active, + fields=fields, ): yield convert_v4_subset_to_v3(subset) diff --git a/openpype/vendor/python/common/ayon_api/thumbnails.py b/openpype/client/server/thumbnails.py similarity index 93% rename from 
openpype/vendor/python/common/ayon_api/thumbnails.py rename to openpype/client/server/thumbnails.py index 11734ca7624..dc649b96515 100644 --- a/openpype/vendor/python/common/ayon_api/thumbnails.py +++ b/openpype/client/server/thumbnails.py @@ -1,3 +1,11 @@ +"""Cache of thumbnails downloaded from AYON server. + +Thumbnails are cached to appdirs to predefined directory. + +This should be moved to thumbnails logic in pipeline but because it would +overflow OpenPype logic it's here for now. +""" + import os import time import collections @@ -10,7 +18,7 @@ ) -class ThumbnailCache: +class AYONThumbnailCache: """Cache of thumbnails on local storage. Thumbnails are cached to appdirs to predefined directory. Each project has @@ -32,13 +40,14 @@ class ThumbnailCache: # Lifetime of thumbnails (in seconds) # - default 3 days - days_alive = 3 * 24 * 60 * 60 + days_alive = 3 # Max size of thumbnail directory (in bytes) # - default 2 Gb max_filesize = 2 * 1024 * 1024 * 1024 def __init__(self, cleanup=True): self._thumbnails_dir = None + self._days_alive_secs = self.days_alive * 24 * 60 * 60 if cleanup: self.cleanup() @@ -50,6 +59,7 @@ def get_thumbnails_dir(self): """ if self._thumbnails_dir is None: + # TODO use generic function directory = appdirs.user_data_dir("AYON", "Ynput") self._thumbnails_dir = os.path.join(directory, "thumbnails") return self._thumbnails_dir @@ -121,7 +131,7 @@ def _soft_cleanup(self, thumbnails_dir): for filename in filenames: path = os.path.join(root, filename) modification_time = os.path.getmtime(path) - if current_time - modification_time > self.days_alive: + if current_time - modification_time > self._days_alive_secs: os.remove(path) def _max_size_cleanup(self, thumbnails_dir): diff --git a/openpype/hooks/pre_add_last_workfile_arg.py b/openpype/hooks/pre_add_last_workfile_arg.py index c54acbc2039..1418bc210b1 100644 --- a/openpype/hooks/pre_add_last_workfile_arg.py +++ b/openpype/hooks/pre_add_last_workfile_arg.py @@ -1,6 +1,6 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class AddLastWorkfileToLaunchArgs(PreLaunchHook): @@ -13,8 +13,8 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook): # Execute after workfile template copy order = 10 - app_groups = [ - "3dsmax", + app_groups = { + "3dsmax", "adsk_3dsmax", "maya", "nuke", "nukex", @@ -26,8 +26,9 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook): "photoshop", "tvpaint", "substancepainter", - "aftereffects" - ] + "aftereffects", + } + launch_types = {LaunchTypes.local} def execute(self): if not self.data.get("start_last_workfile"): diff --git a/openpype/hooks/pre_copy_template_workfile.py b/openpype/hooks/pre_copy_template_workfile.py index 70c549919fa..2203ff43963 100644 --- a/openpype/hooks/pre_copy_template_workfile.py +++ b/openpype/hooks/pre_copy_template_workfile.py @@ -1,7 +1,7 @@ import os import shutil -from openpype.lib import PreLaunchHook from openpype.settings import get_project_settings +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.pipeline.workfile import ( get_custom_workfile_template, get_custom_workfile_template_by_string_context @@ -19,7 +19,8 @@ class CopyTemplateWorkfile(PreLaunchHook): # Before `AddLastWorkfileToLaunchArgs` order = 0 - app_groups = ["blender", "photoshop", "tvpaint", "aftereffects"] + app_groups = {"blender", "photoshop", "tvpaint", "aftereffects"} + launch_types = {LaunchTypes.local} def execute(self): """Check if can copy template for context and do it if possible. 
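Note on the hook changes in this and the following files: `app_groups` and `platforms` become sets and a new `launch_types` class attribute is added. Below is a minimal sketch of the filtering this enables, assuming a hook exposes these attributes as sets and the launch context knows its launch type; it is an illustration only, the real logic lives in `openpype.lib.applications` and may differ:

```python
# Illustrative sketch only, not the actual openpype.lib.applications code.
from openpype.lib.applications import LaunchTypes


def hook_should_run(hook, app_group, platform_name, launch_type):
    """Return True when a pre-launch hook applies to this launch.

    An empty set means "no restriction" for that dimension, which is
    why a hook may set ``launch_types = set()`` to run for every type.
    """
    if hook.app_groups and app_group not in hook.app_groups:
        return False
    if hook.platforms and platform_name not in hook.platforms:
        return False
    if hook.launch_types and launch_type not in hook.launch_types:
        return False
    return True


# A hook with ``launch_types = {LaunchTypes.local}`` would be skipped
# for any non-local (e.g. automated/farm) launch.
```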
diff --git a/openpype/hooks/pre_create_extra_workdir_folders.py b/openpype/hooks/pre_create_extra_workdir_folders.py index 8856281120f..4c9d08b3755 100644 --- a/openpype/hooks/pre_create_extra_workdir_folders.py +++ b/openpype/hooks/pre_create_extra_workdir_folders.py @@ -1,5 +1,5 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.pipeline.workfile import create_workdir_extra_folders @@ -14,6 +14,7 @@ class CreateWorkdirExtraFolders(PreLaunchHook): # Execute after workfile template copy order = 15 + launch_types = {LaunchTypes.local} def execute(self): if not self.application.is_host: diff --git a/openpype/hooks/pre_foundry_apps.py b/openpype/hooks/pre_foundry_apps.py index 21ec8e78814..7536df4c16d 100644 --- a/openpype/hooks/pre_foundry_apps.py +++ b/openpype/hooks/pre_foundry_apps.py @@ -1,5 +1,5 @@ import subprocess -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class LaunchFoundryAppsWindows(PreLaunchHook): @@ -13,8 +13,9 @@ class LaunchFoundryAppsWindows(PreLaunchHook): # Should be as last hook because must change launch arguments to string order = 1000 - app_groups = ["nuke", "nukeassist", "nukex", "hiero", "nukestudio"] - platforms = ["windows"] + app_groups = {"nuke", "nukeassist", "nukex", "hiero", "nukestudio"} + platforms = {"windows"} + launch_types = {LaunchTypes.local} def execute(self): # Change `creationflags` to CREATE_NEW_CONSOLE diff --git a/openpype/hooks/pre_global_host_data.py b/openpype/hooks/pre_global_host_data.py index 260e28a18b9..813df24af01 100644 --- a/openpype/hooks/pre_global_host_data.py +++ b/openpype/hooks/pre_global_host_data.py @@ -1,5 +1,5 @@ from openpype.client import get_project, get_asset_by_name -from openpype.lib import ( +from openpype.lib.applications import ( PreLaunchHook, EnvironmentPrepData, prepare_app_environments, @@ -10,6 +10,7 @@ class GlobalHostDataHook(PreLaunchHook): order = -100 + launch_types = set() def execute(self): """Prepare global objects to `data` that will be used for sure.""" diff --git a/openpype/hooks/pre_mac_launch.py b/openpype/hooks/pre_mac_launch.py index f85557a4f00..402e9a55172 100644 --- a/openpype/hooks/pre_mac_launch.py +++ b/openpype/hooks/pre_mac_launch.py @@ -1,5 +1,5 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class LaunchWithTerminal(PreLaunchHook): @@ -12,7 +12,8 @@ class LaunchWithTerminal(PreLaunchHook): """ order = 1000 - platforms = ["darwin"] + platforms = {"darwin"} + launch_types = {LaunchTypes.local} def execute(self): executable = str(self.launch_context.executable) diff --git a/openpype/hooks/pre_non_python_host_launch.py b/openpype/hooks/pre_non_python_host_launch.py index 043cb3c7f69..d9e912c8269 100644 --- a/openpype/hooks/pre_non_python_host_launch.py +++ b/openpype/hooks/pre_non_python_host_launch.py @@ -1,10 +1,11 @@ import os -from openpype.lib import ( +from openpype.lib import get_openpype_execute_args +from openpype.lib.applications import ( + get_non_python_host_kwargs, PreLaunchHook, - get_openpype_execute_args + LaunchTypes, ) -from openpype.lib.applications import get_non_python_host_kwargs from openpype import PACKAGE_DIR as OPENPYPE_DIR @@ -16,9 +17,10 @@ class NonPythonHostHook(PreLaunchHook): python script which launch the host. For these cases it is necessary to prepend python (or openpype) executable and script path before application's. 
""" - app_groups = ["harmony", "photoshop", "aftereffects"] + app_groups = {"harmony", "photoshop", "aftereffects"} order = 20 + launch_types = {LaunchTypes.local} def execute(self): # Pop executable @@ -54,4 +56,3 @@ def execute(self): self.launch_context.kwargs = \ get_non_python_host_kwargs(self.launch_context.kwargs) - diff --git a/openpype/hooks/pre_ocio_hook.py b/openpype/hooks/pre_ocio_hook.py index 8f462665bcf..add3a0adaf1 100644 --- a/openpype/hooks/pre_ocio_hook.py +++ b/openpype/hooks/pre_ocio_hook.py @@ -1,8 +1,6 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook -from openpype.pipeline.colorspace import ( - get_imageio_config -) +from openpype.pipeline.colorspace import get_imageio_config from openpype.pipeline.template_data import get_template_data_with_names @@ -10,7 +8,7 @@ class OCIOEnvHook(PreLaunchHook): """Set OCIO environment variable for hosts that use OpenColorIO.""" order = 0 - hosts = [ + hosts = { "substancepainter", "fusion", "blender", @@ -20,8 +18,9 @@ class OCIOEnvHook(PreLaunchHook): "maya", "nuke", "hiero", - "resolve" - ] + "resolve", + } + launch_types = set() def execute(self): """Hook entry method.""" @@ -39,12 +38,16 @@ def execute(self): host_name=self.host_name, project_settings=self.data["project_settings"], anatomy_data=template_data, - anatomy=self.data["anatomy"] + anatomy=self.data["anatomy"], + env=self.launch_context.env, ) if config_data: ocio_path = config_data["path"] + if self.host_name in ["nuke", "hiero"]: + ocio_path = ocio_path.replace("\\", "/") + self.log.info( f"Setting OCIO environment to config path: {ocio_path}") diff --git a/openpype/host/dirmap.py b/openpype/host/dirmap.py index e77f06e9d6c..96a98e808e7 100644 --- a/openpype/host/dirmap.py +++ b/openpype/host/dirmap.py @@ -32,19 +32,26 @@ class HostDirmap(object): """ def __init__( - self, host_name, project_name, project_settings=None, sync_module=None + self, + host_name, + project_name, + project_settings=None, + sync_module=None ): self.host_name = host_name self.project_name = project_name self._project_settings = project_settings - self._sync_module = sync_module # to limit reinit of Modules + self._sync_module = sync_module + # to limit reinit of Modules + self._sync_module_discovered = sync_module is not None self._log = None @property def sync_module(self): - if self._sync_module is None: + if not self._sync_module_discovered: + self._sync_module_discovered = True manager = ModulesManager() - self._sync_module = manager["sync_server"] + self._sync_module = manager.get("sync_server") return self._sync_module @property @@ -151,21 +158,25 @@ def _get_local_sync_dirmap(self): """ project_name = self.project_name + sync_module = self.sync_module mapping = {} - if (not self.sync_module.enabled or - project_name not in self.sync_module.get_enabled_projects()): + if ( + sync_module is None + or not sync_module.enabled + or project_name not in sync_module.get_enabled_projects() + ): return mapping - active_site = self.sync_module.get_local_normalized_site( - self.sync_module.get_active_site(project_name)) - remote_site = self.sync_module.get_local_normalized_site( - self.sync_module.get_remote_site(project_name)) + active_site = sync_module.get_local_normalized_site( + sync_module.get_active_site(project_name)) + remote_site = sync_module.get_local_normalized_site( + sync_module.get_remote_site(project_name)) self.log.debug( "active {} - remote {}".format(active_site, remote_site) ) if active_site == "local" and active_site != 
remote_site: - sync_settings = self.sync_module.get_sync_project_setting( + sync_settings = sync_module.get_sync_project_setting( project_name, exclude_locals=False, cached=False) @@ -179,7 +190,7 @@ def _get_local_sync_dirmap(self): self.log.debug("remote overrides {}".format(remote_overrides)) current_platform = platform.system().lower() - remote_provider = self.sync_module.get_provider_for_site( + remote_provider = sync_module.get_provider_for_site( project_name, remote_site ) # dirmap has sense only with regular disk provider, in the workfile diff --git a/openpype/hosts/aftereffects/plugins/create/create_render.py b/openpype/hosts/aftereffects/plugins/create/create_render.py index fa79fac78f8..dcf424b44f4 100644 --- a/openpype/hosts/aftereffects/plugins/create/create_render.py +++ b/openpype/hosts/aftereffects/plugins/create/create_render.py @@ -28,7 +28,6 @@ class RenderCreator(Creator): create_allow_context_change = True # Settings - default_variants = [] mark_for_review = True def create(self, subset_name_from_ui, data, pre_create_data): @@ -171,6 +170,10 @@ def apply_settings(self, project_settings, system_settings): ) self.mark_for_review = plugin_settings["mark_for_review"] + self.default_variants = plugin_settings.get( + "default_variants", + plugin_settings.get("defaults") or [] + ) def get_detail_description(self): return """Creator for Render instances diff --git a/openpype/hosts/aftereffects/plugins/publish/closeAE.py b/openpype/hosts/aftereffects/plugins/publish/closeAE.py index eff2573e8fb..0be20d9f05a 100644 --- a/openpype/hosts/aftereffects/plugins/publish/closeAE.py +++ b/openpype/hosts/aftereffects/plugins/publish/closeAE.py @@ -15,7 +15,7 @@ class CloseAE(pyblish.api.ContextPlugin): active = True hosts = ["aftereffects"] - targets = ["remotepublish"] + targets = ["automated"] def process(self, context): self.log.info("CloseAE") diff --git a/openpype/hosts/aftereffects/plugins/publish/collect_render.py b/openpype/hosts/aftereffects/plugins/publish/collect_render.py index aa464619157..49874d6cff5 100644 --- a/openpype/hosts/aftereffects/plugins/publish/collect_render.py +++ b/openpype/hosts/aftereffects/plugins/publish/collect_render.py @@ -138,7 +138,6 @@ def get_instances(self, context): fam = "render.farm" if fam not in instance.families: instance.families.append(fam) - instance.toBeRenderedOn = "deadline" instance.renderer = "aerender" instance.farm = True # to skip integrate if "review" in instance.families: diff --git a/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py b/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py index c70aa41dbe0..bdb48e11f8b 100644 --- a/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py +++ b/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py @@ -1,11 +1,5 @@ import os -import sys -import six -from openpype.lib import ( - get_ffmpeg_tool_path, - run_subprocess, -) from openpype.pipeline import publish from openpype.hosts.aftereffects.api import get_stub diff --git a/openpype/hosts/blender/api/ops.py b/openpype/hosts/blender/api/ops.py index 2c1b7245cd6..62d7987b47d 100644 --- a/openpype/hosts/blender/api/ops.py +++ b/openpype/hosts/blender/api/ops.py @@ -20,6 +20,7 @@ from openpype.tools.utils import host_tools from .workio import OpenFileCacher +from . 
import pipeline PREVIEW_COLLECTIONS: Dict = dict() @@ -344,6 +345,26 @@ def before_window_show(self): self._window.refresh() +class SetFrameRange(bpy.types.Operator): + bl_idname = "wm.ayon_set_frame_range" + bl_label = "Set Frame Range" + + def execute(self, context): + data = pipeline.get_asset_data() + pipeline.set_frame_range(data) + return {"FINISHED"} + + +class SetResolution(bpy.types.Operator): + bl_idname = "wm.ayon_set_resolution" + bl_label = "Set Resolution" + + def execute(self, context): + data = pipeline.get_asset_data() + pipeline.set_resolution(data) + return {"FINISHED"} + + class TOPBAR_MT_avalon(bpy.types.Menu): """Avalon menu.""" @@ -381,9 +402,11 @@ def draw(self, context): layout.operator(LaunchManager.bl_idname, text="Manage...") layout.operator(LaunchLibrary.bl_idname, text="Library...") layout.separator() + layout.operator(SetFrameRange.bl_idname, text="Set Frame Range") + layout.operator(SetResolution.bl_idname, text="Set Resolution") + layout.separator() layout.operator(LaunchWorkFiles.bl_idname, text="Work Files...") - # TODO (jasper): maybe add 'Reload Pipeline', 'Set Frame Range' and - # 'Set Resolution'? + # TODO (jasper): maybe add 'Reload Pipeline' def draw_avalon_menu(self, context): @@ -399,6 +422,8 @@ def draw_avalon_menu(self, context): LaunchManager, LaunchLibrary, LaunchWorkFiles, + SetFrameRange, + SetResolution, TOPBAR_MT_avalon, ] diff --git a/openpype/hosts/blender/api/pipeline.py b/openpype/hosts/blender/api/pipeline.py index eb696ec1849..29339a512c1 100644 --- a/openpype/hosts/blender/api/pipeline.py +++ b/openpype/hosts/blender/api/pipeline.py @@ -113,22 +113,21 @@ def message_window(title, message): _process_app_events() -def set_start_end_frames(): +def get_asset_data(): project_name = get_current_project_name() asset_name = get_current_asset_name() asset_doc = get_asset_by_name(project_name, asset_name) + return asset_doc.get("data") + + +def set_frame_range(data): scene = bpy.context.scene # Default scene settings frameStart = scene.frame_start frameEnd = scene.frame_end fps = scene.render.fps / scene.render.fps_base - resolution_x = scene.render.resolution_x - resolution_y = scene.render.resolution_y - - # Check if settings are set - data = asset_doc.get("data") if not data: return @@ -139,26 +138,47 @@ def set_start_end_frames(): frameEnd = data.get("frameEnd") if data.get("fps"): fps = data.get("fps") - if data.get("resolutionWidth"): - resolution_x = data.get("resolutionWidth") - if data.get("resolutionHeight"): - resolution_y = data.get("resolutionHeight") scene.frame_start = frameStart scene.frame_end = frameEnd scene.render.fps = round(fps) scene.render.fps_base = round(fps) / fps + + +def set_resolution(data): + scene = bpy.context.scene + + # Default scene settings + resolution_x = scene.render.resolution_x + resolution_y = scene.render.resolution_y + + if not data: + return + + if data.get("resolutionWidth"): + resolution_x = data.get("resolutionWidth") + if data.get("resolutionHeight"): + resolution_y = data.get("resolutionHeight") + scene.render.resolution_x = resolution_x scene.render.resolution_y = resolution_y def on_new(): - set_start_end_frames() - project = os.environ.get("AVALON_PROJECT") - settings = get_project_settings(project) + settings = get_project_settings(project).get("blender") + + set_resolution_startup = settings.get("set_resolution_startup") + set_frames_startup = settings.get("set_frames_startup") + + data = get_asset_data() + + if set_resolution_startup: + set_resolution(data) + if set_frames_startup: + 
set_frame_range(data) - unit_scale_settings = settings.get("blender").get("unit_scale_settings") + unit_scale_settings = settings.get("unit_scale_settings") unit_scale_enabled = unit_scale_settings.get("enabled") if unit_scale_enabled: unit_scale = unit_scale_settings.get("base_file_unit_scale") @@ -166,12 +186,20 @@ def on_new(): def on_open(): - set_start_end_frames() - project = os.environ.get("AVALON_PROJECT") - settings = get_project_settings(project) + settings = get_project_settings(project).get("blender") + + set_resolution_startup = settings.get("set_resolution_startup") + set_frames_startup = settings.get("set_frames_startup") + + data = get_asset_data() + + if set_resolution_startup: + set_resolution(data) + if set_frames_startup: + set_frame_range(data) - unit_scale_settings = settings.get("blender").get("unit_scale_settings") + unit_scale_settings = settings.get("unit_scale_settings") unit_scale_enabled = unit_scale_settings.get("enabled") apply_on_opening = unit_scale_settings.get("apply_on_opening") if unit_scale_enabled and apply_on_opening: diff --git a/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py b/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py index 559e9ae0ce1..68c9bfdd575 100644 --- a/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py +++ b/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py @@ -1,6 +1,6 @@ from pathlib import Path -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class AddPythonScriptToLaunchArgs(PreLaunchHook): @@ -8,9 +8,8 @@ class AddPythonScriptToLaunchArgs(PreLaunchHook): # Append after file argument order = 15 - app_groups = [ - "blender", - ] + app_groups = {"blender"} + launch_types = {LaunchTypes.local} def execute(self): if not self.launch_context.data.get("python_scripts"): diff --git a/openpype/hosts/blender/hooks/pre_pyside_install.py b/openpype/hosts/blender/hooks/pre_pyside_install.py index e5f66d2a26e..777e383215a 100644 --- a/openpype/hosts/blender/hooks/pre_pyside_install.py +++ b/openpype/hosts/blender/hooks/pre_pyside_install.py @@ -2,7 +2,7 @@ import re import subprocess from platform import system -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class InstallPySideToBlender(PreLaunchHook): @@ -16,7 +16,8 @@ class InstallPySideToBlender(PreLaunchHook): blender's python packages. 
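+
+    The added ``launch_types`` attribute limits this hook to local
+    application launches.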
""" - app_groups = ["blender"] + app_groups = {"blender"} + launch_types = {LaunchTypes.local} def execute(self): # Prelaunch hook is not crucial diff --git a/openpype/hosts/blender/hooks/pre_windows_console.py b/openpype/hosts/blender/hooks/pre_windows_console.py index d6be45b225c..2161b7a2f53 100644 --- a/openpype/hosts/blender/hooks/pre_windows_console.py +++ b/openpype/hosts/blender/hooks/pre_windows_console.py @@ -1,5 +1,5 @@ import subprocess -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class BlenderConsoleWindows(PreLaunchHook): @@ -13,8 +13,9 @@ class BlenderConsoleWindows(PreLaunchHook): # Should be as last hook because must change launch arguments to string order = 1000 - app_groups = ["blender"] - platforms = ["windows"] + app_groups = {"blender"} + platforms = {"windows"} + launch_types = {LaunchTypes.local} def execute(self): # Change `creationflags` to CREATE_NEW_CONSOLE diff --git a/openpype/hosts/blender/plugins/publish/collect_review.py b/openpype/hosts/blender/plugins/publish/collect_review.py index 82b3ca11ebd..64599270158 100644 --- a/openpype/hosts/blender/plugins/publish/collect_review.py +++ b/openpype/hosts/blender/plugins/publish/collect_review.py @@ -29,6 +29,8 @@ def process(self, instance): camera = cameras[0].name self.log.debug(f"camera: {camera}") + focal_length = cameras[0].data.lens + # get isolate objects list from meshes instance members . isolate_objects = [ obj @@ -40,6 +42,10 @@ def process(self, instance): task = instance.context.data["task"] + # Store focal length in `burninDataMembers` + burninData = instance.data.setdefault("burninDataMembers", {}) + burninData["focalLength"] = focal_length + instance.data.update({ "subset": f"{task}Review", "review_camera": camera, diff --git a/openpype/hosts/blender/plugins/publish/extract_abc.py b/openpype/hosts/blender/plugins/publish/extract_abc.py index 1cab9d225b7..f4babc94d3d 100644 --- a/openpype/hosts/blender/plugins/publish/extract_abc.py +++ b/openpype/hosts/blender/plugins/publish/extract_abc.py @@ -22,8 +22,6 @@ def process(self, instance): filepath = os.path.join(stagingdir, filename) context = bpy.context - scene = context.scene - view_layer = context.view_layer # Perform extraction self.log.info("Performing extraction..") @@ -31,24 +29,25 @@ def process(self, instance): plugin.deselect_all() selected = [] - asset_group = None + active = None for obj in instance: obj.select_set(True) selected.append(obj) + # Set as active the asset group if obj.get(AVALON_PROPERTY): - asset_group = obj + active = obj context = plugin.create_blender_context( - active=asset_group, selected=selected) - - # We export the abc - bpy.ops.wm.alembic_export( - context, - filepath=filepath, - selected=True, - flatten=False - ) + active=active, selected=selected) + + with bpy.context.temp_override(**context): + # We export the abc + bpy.ops.wm.alembic_export( + filepath=filepath, + selected=True, + flatten=False + ) plugin.deselect_all() diff --git a/openpype/hosts/blender/plugins/publish/extract_camera_abc.py b/openpype/hosts/blender/plugins/publish/extract_camera_abc.py new file mode 100644 index 00000000000..a21a59b151e --- /dev/null +++ b/openpype/hosts/blender/plugins/publish/extract_camera_abc.py @@ -0,0 +1,73 @@ +import os + +import bpy + +from openpype.pipeline import publish +from openpype.hosts.blender.api import plugin +from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY + + +class ExtractCameraABC(publish.Extractor): + """Extract camera as 
ABC.""" + + label = "Extract Camera (ABC)" + hosts = ["blender"] + families = ["camera"] + optional = True + + def process(self, instance): + # Define extract output file path + stagingdir = self.staging_dir(instance) + filename = f"{instance.name}.abc" + filepath = os.path.join(stagingdir, filename) + + context = bpy.context + + # Perform extraction + self.log.info("Performing extraction..") + + plugin.deselect_all() + + selected = [] + active = None + + asset_group = None + for obj in instance: + if obj.get(AVALON_PROPERTY): + asset_group = obj + break + assert asset_group, "No asset group found" + + # Need to cast to list because children is a tuple + selected = list(asset_group.children) + active = selected[0] + + for obj in selected: + obj.select_set(True) + + context = plugin.create_blender_context( + active=active, selected=selected) + + with bpy.context.temp_override(**context): + # We export the abc + bpy.ops.wm.alembic_export( + filepath=filepath, + selected=True, + flatten=True + ) + + plugin.deselect_all() + + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'abc', + 'ext': 'abc', + 'files': filename, + "stagingDir": stagingdir, + } + instance.data["representations"].append(representation) + + self.log.info("Extracted instance '%s' to: %s", + instance.name, representation) diff --git a/openpype/hosts/blender/plugins/publish/extract_camera.py b/openpype/hosts/blender/plugins/publish/extract_camera_fbx.py similarity index 98% rename from openpype/hosts/blender/plugins/publish/extract_camera.py rename to openpype/hosts/blender/plugins/publish/extract_camera_fbx.py index 9fd181825cd..315994140e9 100644 --- a/openpype/hosts/blender/plugins/publish/extract_camera.py +++ b/openpype/hosts/blender/plugins/publish/extract_camera_fbx.py @@ -9,7 +9,7 @@ class ExtractCamera(publish.Extractor): """Extract as the camera as FBX.""" - label = "Extract Camera" + label = "Extract Camera (FBX)" hosts = ["blender"] families = ["camera"] optional = True diff --git a/openpype/hosts/celaction/hooks/__init__.py b/openpype/hosts/celaction/hooks/__init__.py deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/openpype/hosts/celaction/hooks/pre_celaction_setup.py b/openpype/hosts/celaction/hooks/pre_celaction_setup.py index 96e784875c2..83aeab7c58f 100644 --- a/openpype/hosts/celaction/hooks/pre_celaction_setup.py +++ b/openpype/hosts/celaction/hooks/pre_celaction_setup.py @@ -2,20 +2,18 @@ import shutil import winreg import subprocess -from openpype.lib import PreLaunchHook, get_openpype_execute_args -from openpype.hosts.celaction import scripts - -CELACTION_SCRIPTS_DIR = os.path.dirname( - os.path.abspath(scripts.__file__) -) +from openpype.lib import get_openpype_execute_args +from openpype.lib.applications import PreLaunchHook, LaunchTypes +from openpype.hosts.celaction import CELACTION_ROOT_DIR class CelactionPrelaunchHook(PreLaunchHook): """ Bootstrap celacion with pype """ - app_groups = ["celaction"] - platforms = ["windows"] + app_groups = {"celaction"} + platforms = {"windows"} + launch_types = {LaunchTypes.local} def execute(self): asset_doc = self.data["asset_doc"] @@ -37,7 +35,9 @@ def execute(self): winreg.KEY_ALL_ACCESS ) - path_to_cli = os.path.join(CELACTION_SCRIPTS_DIR, "publish_cli.py") + path_to_cli = os.path.join( + CELACTION_ROOT_DIR, "scripts", "publish_cli.py" + ) subprocess_args = get_openpype_execute_args("run", path_to_cli) openpype_executable = subprocess_args.pop(0) workfile_settings = 
self.get_workfile_settings() @@ -122,9 +122,8 @@ def workfile_path(self): if not os.path.exists(workfile_path): # TODO add ability to set different template workfile path via # settings - openpype_celaction_dir = os.path.dirname(CELACTION_SCRIPTS_DIR) template_path = os.path.join( - openpype_celaction_dir, + CELACTION_ROOT_DIR, "resources", "celaction_template_scene.scn" ) diff --git a/openpype/hosts/flame/hooks/pre_flame_setup.py b/openpype/hosts/flame/hooks/pre_flame_setup.py index 83110bb6b55..850569cfdd1 100644 --- a/openpype/hosts/flame/hooks/pre_flame_setup.py +++ b/openpype/hosts/flame/hooks/pre_flame_setup.py @@ -6,13 +6,10 @@ from pprint import pformat from openpype.lib import ( - PreLaunchHook, get_openpype_username, run_subprocess, ) -from openpype.lib.applications import ( - ApplicationLaunchFailed -) +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.hosts import flame as opflame @@ -22,11 +19,12 @@ class FlamePrelaunch(PreLaunchHook): Will make sure flame_script_dirs are copied to user's folder defined in environment var FLAME_SCRIPT_DIR. """ - app_groups = ["flame"] + app_groups = {"flame"} permissions = 0o777 wtc_script_path = os.path.join( opflame.HOST_DIR, "api", "scripts", "wiretap_com.py") + launch_types = {LaunchTypes.local} def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) diff --git a/openpype/hosts/fusion/api/action.py b/openpype/hosts/fusion/api/action.py index 347d552108b..66b787c2f13 100644 --- a/openpype/hosts/fusion/api/action.py +++ b/openpype/hosts/fusion/api/action.py @@ -18,8 +18,10 @@ class SelectInvalidAction(pyblish.api.Action): icon = "search" # Icon from Awesome Icon def process(self, context, plugin): - errored_instances = get_errored_instances_from_context(context, - plugin=plugin) + errored_instances = get_errored_instances_from_context( + context, + plugin=plugin, + ) # Get the invalid nodes for the plug-ins self.log.info("Finding invalid nodes..") @@ -51,6 +53,7 @@ def process(self, context, plugin): names = set() for tool in invalid: flow.Select(tool, True) + comp.SetActiveTool(tool) names.add(tool.Name) self.log.info( "Selecting invalid tools: %s" % ", ".join(sorted(names)) diff --git a/openpype/hosts/fusion/hooks/pre_fusion_profile_hook.py b/openpype/hosts/fusion/hooks/pre_fusion_profile_hook.py index fd726ccda14..66b0f803aa4 100644 --- a/openpype/hosts/fusion/hooks/pre_fusion_profile_hook.py +++ b/openpype/hosts/fusion/hooks/pre_fusion_profile_hook.py @@ -2,12 +2,16 @@ import shutil import platform from pathlib import Path -from openpype.lib import PreLaunchHook, ApplicationLaunchFailed from openpype.hosts.fusion import ( FUSION_HOST_DIR, FUSION_VERSIONS_DICT, get_fusion_version, ) +from openpype.lib.applications import ( + PreLaunchHook, + LaunchTypes, + ApplicationLaunchFailed, +) class FusionCopyPrefsPrelaunch(PreLaunchHook): @@ -21,8 +25,9 @@ class FusionCopyPrefsPrelaunch(PreLaunchHook): Master.prefs is defined in openpype/hosts/fusion/deploy/fusion_shared.prefs """ - app_groups = ["fusion"] + app_groups = {"fusion"} order = 2 + launch_types = {LaunchTypes.local} def get_fusion_profile_name(self, profile_version) -> str: # Returns 'Default', unless FUSION16_PROFILE is set diff --git a/openpype/hosts/fusion/hooks/pre_fusion_setup.py b/openpype/hosts/fusion/hooks/pre_fusion_setup.py index f27cd1674ba..576628e8765 100644 --- a/openpype/hosts/fusion/hooks/pre_fusion_setup.py +++ b/openpype/hosts/fusion/hooks/pre_fusion_setup.py @@ -1,5 +1,9 @@ import os -from openpype.lib import PreLaunchHook, 
ApplicationLaunchFailed +from openpype.lib.applications import ( + PreLaunchHook, + LaunchTypes, + ApplicationLaunchFailed, +) from openpype.hosts.fusion import ( FUSION_HOST_DIR, FUSION_VERSIONS_DICT, @@ -17,8 +21,9 @@ class FusionPrelaunch(PreLaunchHook): Fusion 18 : Python 3.6 - 3.10 """ - app_groups = ["fusion"] + app_groups = {"fusion"} order = 1 + launch_types = {LaunchTypes.local} def execute(self): # making sure python 3 is installed at provided path diff --git a/openpype/hosts/fusion/plugins/publish/collect_render.py b/openpype/hosts/fusion/plugins/publish/collect_render.py index 9e48cc000e2..117347a4c2a 100644 --- a/openpype/hosts/fusion/plugins/publish/collect_render.py +++ b/openpype/hosts/fusion/plugins/publish/collect_render.py @@ -108,7 +108,6 @@ def get_instances(self, context): fam = "render.farm" if fam not in instance.families: instance.families.append(fam) - instance.toBeRenderedOn = "deadline" instance.farm = True # to skip integrate if "review" in instance.families: # to skip ExtractReview locally diff --git a/openpype/hosts/harmony/plugins/publish/collect_farm_render.py b/openpype/hosts/harmony/plugins/publish/collect_farm_render.py index 5e9b9094a74..af825c052ac 100644 --- a/openpype/hosts/harmony/plugins/publish/collect_farm_render.py +++ b/openpype/hosts/harmony/plugins/publish/collect_farm_render.py @@ -147,13 +147,13 @@ def get_instances(self, context): attachTo=False, setMembers=[node], publish=info[4], - review=False, renderer=None, priority=50, name=node.split("/")[1], family="render.farm", families=["render.farm"], + farm=True, resolutionWidth=context.data["resolutionWidth"], resolutionHeight=context.data["resolutionHeight"], @@ -174,7 +174,6 @@ def get_instances(self, context): outputFormat=info[1], outputStartFrame=info[3], leadingZeros=info[2], - toBeRenderedOn='deadline', ignoreFrameHandleCheck=True ) diff --git a/openpype/hosts/harmony/plugins/publish/extract_render.py b/openpype/hosts/harmony/plugins/publish/extract_render.py index 38b09902c15..5825d95a4a8 100644 --- a/openpype/hosts/harmony/plugins/publish/extract_render.py +++ b/openpype/hosts/harmony/plugins/publish/extract_render.py @@ -94,15 +94,14 @@ def process(self, instance): # Generate thumbnail. 
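        # get_ffmpeg_tool_args resolves the ffmpeg executable and any
        # wrapper arguments into one argument list, so the CLI flags below
        # are passed as additional arguments rather than appended to a
        # single executable path.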
thumbnail_path = os.path.join(path, "thumbnail.png") - ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg") - args = [ - ffmpeg_path, + args = openpype.lib.get_ffmpeg_tool_args( + "ffmpeg", "-y", "-i", os.path.join(path, list(collections[0])[0]), "-vf", "scale=300:-1", "-vframes", "1", thumbnail_path - ] + ) process = subprocess.Popen( args, stdout=subprocess.PIPE, diff --git a/openpype/hosts/hiero/plugins/publish/extract_frames.py b/openpype/hosts/hiero/plugins/publish/extract_frames.py index f865d2fb398..803c3387667 100644 --- a/openpype/hosts/hiero/plugins/publish/extract_frames.py +++ b/openpype/hosts/hiero/plugins/publish/extract_frames.py @@ -2,7 +2,7 @@ import pyblish.api from openpype.lib import ( - get_oiio_tools_path, + get_oiio_tool_args, run_subprocess, ) from openpype.pipeline import publish @@ -18,7 +18,7 @@ class ExtractFrames(publish.Extractor): movie_extensions = ["mov", "mp4"] def process(self, instance): - oiio_tool_path = get_oiio_tools_path() + oiio_tool_args = get_oiio_tool_args("oiiotool") staging_dir = self.staging_dir(instance) output_template = os.path.join(staging_dir, instance.data["name"]) sequence = instance.context.data["activeTimeline"] @@ -36,7 +36,7 @@ def process(self, instance): output_path = output_template output_path += ".{:04d}.{}".format(int(frame), output_ext) - args = [oiio_tool_path] + args = list(oiio_tool_args) ext = os.path.splitext(input_path)[1][1:] if ext in self.movie_extensions: diff --git a/openpype/hosts/houdini/api/creator_node_shelves.py b/openpype/hosts/houdini/api/creator_node_shelves.py index 7c6122cffee..1f9fef7417b 100644 --- a/openpype/hosts/houdini/api/creator_node_shelves.py +++ b/openpype/hosts/houdini/api/creator_node_shelves.py @@ -57,29 +57,32 @@ def create_interactive(creator_identifier, **kwargs): list: The created instances. 
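
    Example (illustrative; any registered manual creator identifier works):
        >>> create_interactive("io.openpype.creators.houdini.pointcache")
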
""" + host = registered_host() + context = CreateContext(host) + creator = context.manual_creators.get(creator_identifier) + if not creator: + raise RuntimeError("Invalid creator identifier: {}".format( + creator_identifier) + ) # TODO Use Qt instead - result, variant = hou.ui.readInput('Define variant name', - buttons=("Ok", "Cancel"), - initial_contents='Main', - title="Define variant", - help="Set the variant for the " - "publish instance", - close_choice=1) + result, variant = hou.ui.readInput( + "Define variant name", + buttons=("Ok", "Cancel"), + initial_contents=creator.get_default_variant(), + title="Define variant", + help="Set the variant for the publish instance", + close_choice=1 + ) + if result == 1: # User interrupted return + variant = variant.strip() if not variant: raise RuntimeError("Empty variant value entered.") - host = registered_host() - context = CreateContext(host) - creator = context.manual_creators.get(creator_identifier) - if not creator: - raise RuntimeError("Invalid creator identifier: " - "{}".format(creator_identifier)) - # TODO: Once more elaborate unique create behavior should exist per Creator # instead of per network editor area then we should move this from here # to a method on the Creators for which this could be the default diff --git a/openpype/hosts/houdini/api/lib.py b/openpype/hosts/houdini/api/lib.py index b03f8c8fc16..75c7ff9fee0 100644 --- a/openpype/hosts/houdini/api/lib.py +++ b/openpype/hosts/houdini/api/lib.py @@ -22,9 +22,12 @@ JSON_PREFIX = "JSON:::" -def get_asset_fps(): +def get_asset_fps(asset_doc=None): """Return current asset fps.""" - return get_current_project_asset()["data"].get("fps") + + if asset_doc is None: + asset_doc = get_current_project_asset(fields=["data.fps"]) + return asset_doc["data"]["fps"] def set_id(node, unique_id, overwrite=False): @@ -472,14 +475,19 @@ def maintained_selection(): def reset_framerange(): - """Set frame range to current asset""" + """Set frame range and FPS to current asset""" + # Get asset data project_name = get_current_project_name() asset_name = get_current_asset_name() # Get the asset ID from the database for the asset of current context asset_doc = get_asset_by_name(project_name, asset_name) asset_data = asset_doc["data"] + # Get FPS + fps = get_asset_fps(asset_doc) + + # Get Start and End Frames frame_start = asset_data.get("frameStart") frame_end = asset_data.get("frameEnd") @@ -493,6 +501,9 @@ def reset_framerange(): frame_start -= int(handle_start) frame_end += int(handle_end) + # Set frame range and FPS + print("Setting scene FPS to {}".format(int(fps))) + set_scene_fps(fps) hou.playbar.setFrameRange(frame_start, frame_end) hou.playbar.setPlaybackRange(frame_start, frame_end) hou.setFrame(frame_start) diff --git a/openpype/hosts/houdini/api/pipeline.py b/openpype/hosts/houdini/api/pipeline.py index 8a26bbb5040..3c325edfa7d 100644 --- a/openpype/hosts/houdini/api/pipeline.py +++ b/openpype/hosts/houdini/api/pipeline.py @@ -25,7 +25,6 @@ emit_event, ) -from .lib import get_asset_fps log = logging.getLogger("openpype.hosts.houdini") @@ -385,11 +384,6 @@ def _set_context_settings(): None """ - # Set new scene fps - fps = get_asset_fps() - print("Setting scene FPS to %i" % fps) - lib.set_scene_fps(fps) - lib.reset_framerange() diff --git a/openpype/hosts/houdini/api/plugin.py b/openpype/hosts/houdini/api/plugin.py index 1e7eaa7e22e..70c837205ed 100644 --- a/openpype/hosts/houdini/api/plugin.py +++ b/openpype/hosts/houdini/api/plugin.py @@ -167,9 +167,12 @@ def create_instance_node( class 
HoudiniCreator(NewCreator, HoudiniCreatorBase):
     """Base class for most of the Houdini creator plugins."""
     selected_nodes = []
+    settings_name = None

     def create(self, subset_name, instance_data, pre_create_data):
         try:
+            self.selected_nodes = []
+
             if pre_create_data.get("use_selection"):
                 self.selected_nodes = hou.selectedNodes()

@@ -292,3 +295,21 @@ def get_network_categories(self):
         """
         return [hou.ropNodeTypeCategory()]
+
+    def apply_settings(self, project_settings, system_settings):
+        """Method called on initialization of plugin to apply settings."""
+
+        settings_name = self.settings_name
+        if settings_name is None:
+            settings_name = self.__class__.__name__
+
+        settings = project_settings["houdini"]["create"]
+        settings = settings.get(settings_name)
+        if settings is None:
+            self.log.debug(
+                "No settings found for {}".format(self.__class__.__name__)
+            )
+            return
+
+        for key, value in settings.items():
+            setattr(self, key, value)
diff --git a/openpype/hosts/houdini/hooks/set_paths.py b/openpype/hosts/houdini/hooks/set_paths.py
index 04a33b16431..b23659e23b5 100644
--- a/openpype/hosts/houdini/hooks/set_paths.py
+++ b/openpype/hosts/houdini/hooks/set_paths.py
@@ -1,4 +1,4 @@
-from openpype.lib import PreLaunchHook
+from openpype.lib.applications import PreLaunchHook, LaunchTypes


 class SetPath(PreLaunchHook):
@@ -6,7 +6,8 @@ class SetPath(PreLaunchHook):
     Hook `GlobalHostDataHook` must be executed before this hook.
     """
-    app_groups = ["houdini"]
+    app_groups = {"houdini"}
+    launch_types = {LaunchTypes.local}

     def execute(self):
         workdir = self.launch_context.env.get("AVALON_WORKDIR", "")
diff --git a/openpype/hosts/houdini/plugins/create/create_arnold_ass.py b/openpype/hosts/houdini/plugins/create/create_arnold_ass.py
index 8b310753d02..12d08f7d838 100644
--- a/openpype/hosts/houdini/plugins/create/create_arnold_ass.py
+++ b/openpype/hosts/houdini/plugins/create/create_arnold_ass.py
@@ -10,9 +10,10 @@ class CreateArnoldAss(plugin.HoudiniCreator):
     label = "Arnold ASS"
     family = "ass"
     icon = "magic"
-    defaults = ["Main"]

     # Default extension: `.ass` or `.ass.gz`
+    # however, calling HoudiniCreator.create()
+    # will override it with the value from the project settings
     ext = ".ass"

     def create(self, subset_name, instance_data, pre_create_data):
diff --git a/openpype/hosts/houdini/plugins/create/create_arnold_rop.py b/openpype/hosts/houdini/plugins/create/create_arnold_rop.py
index bddf26dbd50..b58c377a204 100644
--- a/openpype/hosts/houdini/plugins/create/create_arnold_rop.py
+++ b/openpype/hosts/houdini/plugins/create/create_arnold_rop.py
@@ -1,5 +1,5 @@
 from openpype.hosts.houdini.api import plugin
-from openpype.lib import EnumDef
+from openpype.lib import EnumDef, BoolDef


 class CreateArnoldRop(plugin.HoudiniCreator):
@@ -9,7 +9,6 @@ class CreateArnoldRop(plugin.HoudiniCreator):
     label = "Arnold ROP"
     family = "arnold_rop"
     icon = "magic"
-    defaults = ["master"]

     # Default extension
     ext = "exr"
@@ -24,7 +23,7 @@ def create(self, subset_name, instance_data, pre_create_data):
         # Add chunk size attribute
         instance_data["chunkSize"] = 1
         # Submit for job publishing
-        instance_data["farm"] = True
+        instance_data["farm"] = pre_create_data.get("farm")

         instance = super(CreateArnoldRop, self).create(
             subset_name,
@@ -64,6 +63,9 @@ def get_pre_create_attr_defs(self):
         ]

         return attrs + [
+            BoolDef("farm",
+                    label="Submitting to Farm",
+                    default=True),
             EnumDef("image_format",
                     image_format_enum,
                     default=self.ext,
diff --git a/openpype/hosts/houdini/plugins/create/create_bgeo.py
b/openpype/hosts/houdini/plugins/create/create_bgeo.py index a1101fd0452..a3f31e7e94c 100644 --- a/openpype/hosts/houdini/plugins/create/create_bgeo.py +++ b/openpype/hosts/houdini/plugins/create/create_bgeo.py @@ -8,7 +8,7 @@ class CreateBGEO(plugin.HoudiniCreator): """BGEO pointcache creator.""" identifier = "io.openpype.creators.houdini.bgeo" - label = "BGEO PointCache" + label = "PointCache (Bgeo)" family = "pointcache" icon = "gears" diff --git a/openpype/hosts/houdini/plugins/create/create_karma_rop.py b/openpype/hosts/houdini/plugins/create/create_karma_rop.py index edfb992e1a0..4e1360ca45a 100644 --- a/openpype/hosts/houdini/plugins/create/create_karma_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_karma_rop.py @@ -11,7 +11,6 @@ class CreateKarmaROP(plugin.HoudiniCreator): label = "Karma ROP" family = "karma_rop" icon = "magic" - defaults = ["master"] def create(self, subset_name, instance_data, pre_create_data): import hou # noqa @@ -21,7 +20,7 @@ def create(self, subset_name, instance_data, pre_create_data): # Add chunk size attribute instance_data["chunkSize"] = 10 # Submit for job publishing - instance_data["farm"] = True + instance_data["farm"] = pre_create_data.get("farm") instance = super(CreateKarmaROP, self).create( subset_name, @@ -67,6 +66,7 @@ def create(self, subset_name, instance_data, pre_create_data): camera = None for node in self.selected_nodes: if node.type().name() == "cam": + camera = node.path() has_camera = pre_create_data.get("cam_res") if has_camera: res_x = node.evalParm("resx") @@ -96,6 +96,9 @@ def get_pre_create_attr_defs(self): ] return attrs + [ + BoolDef("farm", + label="Submitting to Farm", + default=True), EnumDef("image_format", image_format_enum, default="exr", diff --git a/openpype/hosts/houdini/plugins/create/create_mantra_rop.py b/openpype/hosts/houdini/plugins/create/create_mantra_rop.py index 5ca53e96de9..d2f0e735a88 100644 --- a/openpype/hosts/houdini/plugins/create/create_mantra_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_mantra_rop.py @@ -11,7 +11,6 @@ class CreateMantraROP(plugin.HoudiniCreator): label = "Mantra ROP" family = "mantra_rop" icon = "magic" - defaults = ["master"] def create(self, subset_name, instance_data, pre_create_data): import hou # noqa @@ -21,7 +20,7 @@ def create(self, subset_name, instance_data, pre_create_data): # Add chunk size attribute instance_data["chunkSize"] = 10 # Submit for job publishing - instance_data["farm"] = True + instance_data["farm"] = pre_create_data.get("farm") instance = super(CreateMantraROP, self).create( subset_name, @@ -76,6 +75,9 @@ def get_pre_create_attr_defs(self): ] return attrs + [ + BoolDef("farm", + label="Submitting to Farm", + default=True), EnumDef("image_format", image_format_enum, default="exr", diff --git a/openpype/hosts/houdini/plugins/create/create_pointcache.py b/openpype/hosts/houdini/plugins/create/create_pointcache.py index 554d5f2016b..7eaf2aff2ba 100644 --- a/openpype/hosts/houdini/plugins/create/create_pointcache.py +++ b/openpype/hosts/houdini/plugins/create/create_pointcache.py @@ -8,7 +8,7 @@ class CreatePointCache(plugin.HoudiniCreator): """Alembic ROP to pointcache""" identifier = "io.openpype.creators.houdini.pointcache" - label = "Point Cache" + label = "PointCache (Abc)" family = "pointcache" icon = "gears" diff --git a/openpype/hosts/houdini/plugins/create/create_redshift_rop.py b/openpype/hosts/houdini/plugins/create/create_redshift_rop.py index 4576e9a7214..1b8826a932c 100644 --- 
a/openpype/hosts/houdini/plugins/create/create_redshift_rop.py
+++ b/openpype/hosts/houdini/plugins/create/create_redshift_rop.py
@@ -3,7 +3,7 @@
 import hou  # noqa

 from openpype.hosts.houdini.api import plugin
-from openpype.lib import EnumDef
+from openpype.lib import EnumDef, BoolDef


 class CreateRedshiftROP(plugin.HoudiniCreator):
@@ -13,7 +13,6 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
     label = "Redshift ROP"
     family = "redshift_rop"
     icon = "magic"
-    defaults = ["master"]
     ext = "exr"

     def create(self, subset_name, instance_data, pre_create_data):
@@ -23,7 +22,7 @@ def create(self, subset_name, instance_data, pre_create_data):
         # Add chunk size attribute
         instance_data["chunkSize"] = 10
         # Submit for job publishing
-        instance_data["farm"] = True
+        instance_data["farm"] = pre_create_data.get("farm")

         instance = super(CreateRedshiftROP, self).create(
             subset_name,
@@ -100,6 +99,9 @@ def get_pre_create_attr_defs(self):
         ]

         return attrs + [
+            BoolDef("farm",
+                    label="Submitting to Farm",
+                    default=True),
             EnumDef("image_format",
                     image_format_enum,
                     default=self.ext,
diff --git a/openpype/hosts/houdini/plugins/create/create_vbd_cache.py b/openpype/hosts/houdini/plugins/create/create_vbd_cache.py
index c015cebd49b..9c96e48e3a4 100644
--- a/openpype/hosts/houdini/plugins/create/create_vbd_cache.py
+++ b/openpype/hosts/houdini/plugins/create/create_vbd_cache.py
@@ -33,7 +33,7 @@ def create(self, subset_name, instance_data, pre_create_data):
         }

         if self.selected_nodes:
-            parms["soppath"] = self.selected_nodes[0].path()
+            parms["soppath"] = self.get_sop_node_path(self.selected_nodes[0])

         instance_node.setParms(parms)

@@ -42,3 +42,63 @@ def get_network_categories(self):
             hou.ropNodeTypeCategory(),
             hou.sopNodeTypeCategory()
         ]
+
+    def get_sop_node_path(self, selected_node):
+        """Get the SOP path of the selected node.
+
+        Although Houdini allows an ObjNode path in `sop_path` for the
+        ROP node, we prefer to set it to the SopNode path explicitly.
+        """
+
+        # Allow sop level paths (e.g. /obj/geo1/box1)
+        if isinstance(selected_node, hou.SopNode):
+            self.log.debug(
+                "Valid SopNode selection, 'SOP Path' in ROP will"
+                " be set to '%s'.", selected_node.path()
+            )
+            return selected_node.path()
+
+        # Allow object level paths to Geometry nodes (e.g. /obj/geo1)
+        # but do not allow other object level node types like cameras, etc.
+        elif isinstance(selected_node, hou.ObjNode) and \
+                selected_node.type().name() == "geo":
+
+            # Try to find output node.
+            sop_node = self.get_obj_output(selected_node)
+            if sop_node:
+                self.log.debug(
+                    "Valid ObjNode selection, 'SOP Path' in ROP will "
+                    "be set to the child path '%s'.", sop_node.path()
+                )
+                return sop_node.path()
+
+        self.log.debug(
+            "Selection isn't valid. 'SOP Path' in ROP will be empty."
+        )
+        return ""
+
+    def get_obj_output(self, obj_node):
+        """Try to find output node.
+
+        If any output nodes are present, return the output node with
+        the minimum 'outputidx'.
+        If no output nodes are present, return the node with the display flag.
+        If no nodes are present at all, return None.
+        """
+
+        outputs = obj_node.subnetOutputs()
+
+        # if obj_node is empty
+        if not outputs:
+            return
+
+        # if obj_node has a single output child, whether it's
+        # a sop output node or a node with the render flag
+        elif len(outputs) == 1:
+            return outputs[0]
+
+        # if there are more than one, the node has multiple output nodes;
+        # return the one with the minimum 'outputidx'
+        else:
+            return min(outputs,
+                       key=lambda node: node.evalParm('outputidx'))
diff --git a/openpype/hosts/houdini/plugins/create/create_vray_rop.py b/openpype/hosts/houdini/plugins/create/create_vray_rop.py
index 1de9be4ed61..793a544fdff 100644
--- a/openpype/hosts/houdini/plugins/create/create_vray_rop.py
+++ b/openpype/hosts/houdini/plugins/create/create_vray_rop.py
@@ -14,8 +14,6 @@ class CreateVrayROP(plugin.HoudiniCreator):
     label = "VRay ROP"
     family = "vray_rop"
     icon = "magic"
-    defaults = ["master"]
-
     ext = "exr"

     def create(self, subset_name, instance_data, pre_create_data):
@@ -25,7 +23,7 @@ def create(self, subset_name, instance_data, pre_create_data):
         # Add chunk size attribute
         instance_data["chunkSize"] = 10
         # Submit for job publishing
-        instance_data["farm"] = True
+        instance_data["farm"] = pre_create_data.get("farm")

         instance = super(CreateVrayROP, self).create(
             subset_name,
@@ -139,6 +137,9 @@ def get_pre_create_attr_defs(self):
         ]

         return attrs + [
+            BoolDef("farm",
+                    label="Submitting to Farm",
+                    default=True),
             EnumDef("image_format",
                     image_format_enum,
                     default=self.ext,
diff --git a/openpype/hosts/houdini/plugins/load/load_hda.py b/openpype/hosts/houdini/plugins/load/load_hda.py
index 57edc341a3d..9630716253c 100644
--- a/openpype/hosts/houdini/plugins/load/load_hda.py
+++ b/openpype/hosts/houdini/plugins/load/load_hda.py
@@ -59,6 +59,9 @@ def update(self, container, representation):
         def_paths = [d.libraryFilePath() for d in defs]
         new = def_paths.index(file_path)
         defs[new].setIsPreferred(True)
+        hda_node.setParms({
+            "representation": str(representation["_id"])
+        })

     def remove(self, container):
         node = container["node"]
diff --git a/openpype/hosts/houdini/plugins/publish/collect_pointcache_type.py b/openpype/hosts/houdini/plugins/publish/collect_pointcache_type.py
index 6c527377e0a..3323e97c206 100644
--- a/openpype/hosts/houdini/plugins/publish/collect_pointcache_type.py
+++ b/openpype/hosts/houdini/plugins/publish/collect_pointcache_type.py
@@ -17,5 +17,5 @@ class CollectPointcacheType(pyblish.api.InstancePlugin):
     def process(self, instance):
         if instance.data["creator_identifier"] == "io.openpype.creators.houdini.bgeo":  # noqa: E501
             instance.data["families"] += ["bgeo"]
-        elif instance.data["creator_identifier"] == "io.openpype.creators.houdini.alembic":  # noqa: E501
+        elif instance.data["creator_identifier"] == "io.openpype.creators.houdini.pointcache":  # noqa: E501
             instance.data["families"] += ["abc"]
diff --git a/openpype/hosts/houdini/plugins/publish/validate_bgeo_file_sop_path.py b/openpype/hosts/houdini/plugins/publish/validate_bgeo_file_sop_path.py
deleted file mode 100644
index 22746aabb03..00000000000
--- a/openpype/hosts/houdini/plugins/publish/validate_bgeo_file_sop_path.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Validator plugin for SOP Path in bgeo isntance."""
-import pyblish.api
-from openpype.pipeline import PublishValidationError
-
-
-class ValidateNoSOPPath(pyblish.api.InstancePlugin):
-    """Validate if SOP Path in BGEO instance exists."""
-
-    order = pyblish.api.ValidatorOrder
-    families = ["bgeo"]
-    label = "Validate BGEO SOP Path"
-
-    def process(self, instance):
-
-        import hou
-
-        node = hou.node(instance.data.get("instance_node"))
-        sop_path = node.evalParm("soppath")
-        if not sop_path:
-            raise PublishValidationError(
-                ("Empty SOP Path ('soppath' parameter) found in "
-                 f"the BGEO instance Geometry - {node.path()}"))
-        if not isinstance(hou.node(sop_path), hou.SopNode):
-            raise PublishValidationError(
-                "SOP path is not pointing to valid SOP node.")
diff --git a/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py b/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py
index ca06617ab00..471fa5b6d13 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py
@@ -32,8 +32,9 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin):
     def process(self, instance):
         invalid = self.get_invalid(instance)
         if invalid:
+            nodes = [n.path() for n in invalid]
             raise PublishValidationError(
-                "See log for details. " "Invalid nodes: {0}".format(invalid),
+                "See log for details. " "Invalid nodes: {0}".format(nodes),
                 title=self.label
             )
diff --git a/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py b/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py
index 543c8e1407a..afe05e31732 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py
@@ -7,8 +7,6 @@
 )
 from openpype.pipeline.publish import RepairAction

-from openpype.pipeline.publish import RepairAction
-

 class ValidateWorkfilePaths(
         pyblish.api.InstancePlugin, OptionalPyblishPluginMixin):
diff --git a/openpype/hosts/houdini/startup/MainMenuCommon.xml b/openpype/hosts/houdini/startup/MainMenuCommon.xml
index 47a4653d5d7..5818a117eb2 100644
--- a/openpype/hosts/houdini/startup/MainMenuCommon.xml
+++ b/openpype/hosts/houdini/startup/MainMenuCommon.xml
@@ -2,7 +2,19 @@
[XML menu entry elements lost in extraction; the hunk replaces one menu item with a larger set of entries]
diff --git a/openpype/hosts/max/api/lib.py b/openpype/hosts/max/api/lib.py
index ccd4cd67e1a..82873414568 100644
--- a/openpype/hosts/max/api/lib.py
+++ b/openpype/hosts/max/api/lib.py
@@ -6,7 +6,7 @@
 import six

 from openpype.pipeline.context_tools import (
-    get_current_project, get_current_project_asset,)
+    get_current_project, get_current_project_asset)
 from pymxs import runtime as rt

 JSON_PREFIX = "JSON::"
@@ -312,3 +312,98 @@ def set_timeline(frameStart, frameEnd):
     """
     rt.animationRange = rt.interval(frameStart, frameEnd)
     return rt.animationRange
+
+
+def unique_namespace(namespace, format="%02d",
+                     prefix="", suffix="", con_suffix="CON"):
+    """Return unique namespace
+
+    Arguments:
+        namespace (str): Name of namespace to consider
+        format (str, optional): Formatting of the given iteration number
+        suffix (str, optional): Only consider namespaces with this suffix.
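+        prefix (str, optional): Prefix prepended to the numbered name.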
+        con_suffix: max only, for finding the name of the master container
+
+    >>> unique_namespace("bar")
+    # bar01
+    >>> unique_namespace(":hello")
+    # :hello01
+    >>> unique_namespace("bar:", suffix="_NS")
+    # bar01_NS:
+
+    """
+
+    def current_namespace():
+        current = namespace
+        # When inside a namespace Max adds no trailing :
+        if not current.endswith(":"):
+            current += ":"
+        return current
+
+    # Always check against the absolute namespace root
+    # There's no clash with :x if we're defining namespace :a:x
+    ROOT = ":" if namespace.startswith(":") else current_namespace()
+
+    # Strip trailing `:` tokens since we might want to add a suffix
+    start = ":" if namespace.startswith(":") else ""
+    end = ":" if namespace.endswith(":") else ""
+    namespace = namespace.strip(":")
+    if ":" in namespace:
+        # Split off any nesting that we don't uniqify anyway.
+        parents, namespace = namespace.rsplit(":", 1)
+        start += parents + ":"
+        ROOT += start
+
+    iteration = 1
+    while True:
+        nr_namespace = namespace + format % iteration
+        unique = prefix + nr_namespace + suffix
+        container_name = f"{unique}:{namespace}{con_suffix}"
+        if not rt.getNodeByName(container_name):
+            return start + unique + end
+        iteration += 1
+
+
+def get_namespace(container_name):
+    """Get the namespace and name of the sub-container
+
+    Args:
+        container_name (str): the name of the master container
+
+    Raises:
+        RuntimeError: when there is no master container found
+
+    Returns:
+        namespace (str): namespace of the sub-container
+        name (str): name of the sub-container
+    """
+    node = rt.getNodeByName(container_name)
+    if not node:
+        raise RuntimeError("Master container not found.")
+    name = rt.getUserProp(node, "name")
+    namespace = rt.getUserProp(node, "namespace")
+    return namespace, name
+
+
+def object_transform_set(container_children):
+    """Store the transforms of previously loaded object(s).
+
+    Args:
+        container_children(list): A list of nodes
+
+    Returns:
+        transform_set (dict): A dict with all transform data of
+            the previously loaded object(s)
+    """
+    transform_set = {}
+    for node in container_children:
+        name = f"{node.name}.transform"
+        transform_set[name] = node.pos
+        name = f"{node.name}.scale"
+        transform_set[name] = node.scale
+    return transform_set
diff --git a/openpype/hosts/max/api/lib_rendersettings.py b/openpype/hosts/max/api/lib_rendersettings.py
index 1b62edabeef..afde5008d53 100644
--- a/openpype/hosts/max/api/lib_rendersettings.py
+++ b/openpype/hosts/max/api/lib_rendersettings.py
@@ -43,7 +43,7 @@ def set_render_camera(self, selection):
                 rt.viewport.setCamera(sel)
                 break
         if not found:
-            raise RuntimeError("Camera not found")
+            raise RuntimeError("Active Camera not found")

     def render_output(self, container):
         folder = rt.maxFilePath
@@ -113,7 +113,8 @@ def arnold_setup(self):
         # for setting up renderable camera
         arv = rt.MAXToAOps.ArnoldRenderView()
         render_camera = rt.viewport.GetCamera()
-        arv.setOption("Camera", str(render_camera))
+        if render_camera:
+            arv.setOption("Camera", str(render_camera))

         # TODO: add AOVs and extension
         img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"]  # noqa
diff --git a/openpype/hosts/max/api/pipeline.py b/openpype/hosts/max/api/pipeline.py
index 03b85a40668..d9a66c60f52 100644
--- a/openpype/hosts/max/api/pipeline.py
+++ b/openpype/hosts/max/api/pipeline.py
@@ -15,8 +15,10 @@
 )
 from openpype.hosts.max.api.menu import OpenPypeMenu
 from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.plugin import MS_CUSTOM_ATTRIB
 from openpype.hosts.max import MAX_HOST_DIR

+
 from pymxs import runtime as rt  # noqa

 log = logging.getLogger("openpype.hosts.max")
@@ -152,17 +154,18 @@ def ls() -> list:
         yield lib.read(container)


-def containerise(name: str, nodes: list, context, loader=None, suffix="_CON"):
+def containerise(name: str, nodes: list, context,
+                 namespace=None, loader=None, suffix="_CON"):
     data = {
         "schema": "openpype:container-2.0",
         "id": AVALON_CONTAINER_ID,
         "name": name,
-        "namespace": "",
+        "namespace": namespace or "",
         "loader": loader,
         "representation": context["representation"]["_id"],
     }
-    container_name = f"{name}{suffix}"
+    container_name = f"{namespace}:{name}{suffix}"
     container = rt.container(name=container_name)
     for node in nodes:
         node.Parent = container
@@ -170,3 +173,52 @@ def containerise(name: str, nodes: list, context, loader=None, suffix="_CON"):
     if not lib.imprint(container_name, data):
         print(f"imprinting of {container_name} failed.")
     return container
+
+
+def load_custom_attribute_data():
+    """Re-load the OpenPype/AYON custom parameter built by the creator.
+
+    Returns:
+        attribute: the custom OP attribute definition evaluated
+            from MaxScript
+    """
+    return rt.Execute(MS_CUSTOM_ATTRIB)
+
+
+def import_custom_attribute_data(container: str, selections: list):
+    """Import the OpenPype/AYON custom parameter built by the creator.
+
+    Args:
+        container (str): target container to which the custom
+            attributes are added
+        selections (list): nodes to be added into the group
+            in the custom attributes
+    """
+    attrs = load_custom_attribute_data()
+    modifier = rt.EmptyModifier()
+    rt.addModifier(container, modifier)
+    container.modifiers[0].name = "OP Data"
+    rt.custAttributes.add(container.modifiers[0], attrs)
+    # map every selected node to a transform monitor, not only the last one
+    nodes = {}
+    for i in selections:
+        nodes[str(i)] = rt.NodeTransformMonitor(node=i)
+    # Setting the property
+    rt.setProperty(
+        container.modifiers[0].openPypeData,
+        "all_handles", nodes.values())
+    rt.setProperty(
+        container.modifiers[0].openPypeData,
+        "sel_list", nodes.keys())
+
+
+def update_custom_attribute_data(container: str, selections: list):
+    """Update the OpenPype/AYON custom parameter built by the creator.
+
+    Args:
+        container (str): target container to which the custom
+            attributes are added
+        selections (list): nodes to be added into the group
+            in the custom attributes
+    """
+    if container.modifiers[0].name == "OP Data":
+        rt.deleteModifier(container, container.modifiers[0])
+    import_custom_attribute_data(container, selections)
diff --git a/openpype/hosts/max/api/plugin.py b/openpype/hosts/max/api/plugin.py
index d8db716e6d5..3389447cb0e 100644
--- a/openpype/hosts/max/api/plugin.py
+++ b/openpype/hosts/max/api/plugin.py
@@ -136,6 +136,7 @@
     temp_arr = #()
     for x in all_handles do
     (
+        if x.node == undefined do continue
         handle_name = node_to_name x.node
         append temp_arr handle_name
     )
@@ -185,7 +186,10 @@ def create_instance_node(node):
         node = rt.Container(name=node)

         attrs = rt.Execute(MS_CUSTOM_ATTRIB)
-        rt.custAttributes.add(node.baseObject, attrs)
+        modifier = rt.EmptyModifier()
+        rt.addModifier(node, modifier)
+        node.modifiers[0].name = "OP Data"
+        rt.custAttributes.add(node.modifiers[0], attrs)

         return node

@@ -209,13 +213,19 @@ def create(self, subset_name, instance_data, pre_create_data):

         if pre_create_data.get("use_selection"):
             node_list = []
+            sel_list = []
             for i in self.selected_nodes:
                 node_ref = rt.NodeTransformMonitor(node=i)
                 node_list.append(node_ref)
+                sel_list.append(str(i))
             # Setting the
property rt.setProperty( - instance_node.openPypeData, "all_handles", node_list) + instance_node.modifiers[0].openPypeData, + "all_handles", node_list) + rt.setProperty( + instance_node.modifiers[0].openPypeData, + "sel_list", sel_list) self._add_instance_to_context(instance) imprint(instance_node.name, instance.data_to_store()) @@ -254,8 +264,8 @@ def remove_instances(self, instances): instance_node = rt.GetNodeByName( instance.data.get("instance_node")) if instance_node: - count = rt.custAttributes.count(instance_node) - rt.custAttributes.delete(instance_node, count) + count = rt.custAttributes.count(instance_node.modifiers[0]) + rt.custAttributes.delete(instance_node.modifiers[0], count) rt.Delete(instance_node) self._remove_instance_from_context(instance) diff --git a/openpype/hosts/max/hooks/force_startup_script.py b/openpype/hosts/max/hooks/force_startup_script.py index 4fcf4fef21f..5fb8334d4b6 100644 --- a/openpype/hosts/max/hooks/force_startup_script.py +++ b/openpype/hosts/max/hooks/force_startup_script.py @@ -1,7 +1,8 @@ # -*- coding: utf-8 -*- """Pre-launch to force 3ds max startup script.""" -from openpype.lib import PreLaunchHook import os +from openpype.hosts.max import MAX_HOST_DIR +from openpype.lib.applications import PreLaunchHook, LaunchTypes class ForceStartupScript(PreLaunchHook): @@ -13,12 +14,14 @@ class ForceStartupScript(PreLaunchHook): Hook `GlobalHostDataHook` must be executed before this hook. """ - app_groups = ["3dsmax"] + app_groups = {"3dsmax", "adsk_3dsmax"} order = 11 + launch_types = {LaunchTypes.local} def execute(self): startup_args = [ "-U", "MAXScript", - f"{os.getenv('OPENPYPE_ROOT')}\\openpype\\hosts\\max\\startup\\startup.ms"] # noqa + os.path.join(MAX_HOST_DIR, "startup", "startup.ms"), + ] self.launch_context.launch_args.append(startup_args) diff --git a/openpype/hosts/max/hooks/inject_python.py b/openpype/hosts/max/hooks/inject_python.py index d9753ccbd8f..e9dddbf710b 100644 --- a/openpype/hosts/max/hooks/inject_python.py +++ b/openpype/hosts/max/hooks/inject_python.py @@ -1,7 +1,7 @@ # -*- coding: utf-8 -*- """Pre-launch hook to inject python environment.""" -from openpype.lib import PreLaunchHook import os +from openpype.lib.applications import PreLaunchHook, LaunchTypes class InjectPythonPath(PreLaunchHook): @@ -13,7 +13,8 @@ class InjectPythonPath(PreLaunchHook): Hook `GlobalHostDataHook` must be executed before this hook. """ - app_groups = ["3dsmax"] + app_groups = {"3dsmax", "adsk_3dsmax"} + launch_types = {LaunchTypes.local} def execute(self): self.launch_context.env["MAX_PYTHONPATH"] = os.environ["PYTHONPATH"] diff --git a/openpype/hosts/max/hooks/set_paths.py b/openpype/hosts/max/hooks/set_paths.py index 3db53063441..4b961fa91e3 100644 --- a/openpype/hosts/max/hooks/set_paths.py +++ b/openpype/hosts/max/hooks/set_paths.py @@ -1,4 +1,4 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class SetPath(PreLaunchHook): @@ -6,7 +6,8 @@ class SetPath(PreLaunchHook): Hook `GlobalHostDataHook` must be executed before this hook. 
""" - app_groups = ["max"] + app_groups = {"max"} + launch_types = {LaunchTypes.local} def execute(self): workdir = self.launch_context.env.get("AVALON_WORKDIR", "") diff --git a/openpype/hosts/max/plugins/load/load_camera_fbx.py b/openpype/hosts/max/plugins/load/load_camera_fbx.py index 62284b23d9e..f040115417d 100644 --- a/openpype/hosts/max/plugins/load/load_camera_fbx.py +++ b/openpype/hosts/max/plugins/load/load_camera_fbx.py @@ -1,7 +1,16 @@ import os from openpype.hosts.max.api import lib, maintained_selection -from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api.lib import ( + unique_namespace, + get_namespace, + object_transform_set +) +from openpype.hosts.max.api.pipeline import ( + containerise, + import_custom_attribute_data, + update_custom_attribute_data +) from openpype.pipeline import get_representation_path, load @@ -13,50 +22,76 @@ class FbxLoader(load.LoaderPlugin): order = -9 icon = "code-fork" color = "white" + postfix = "param" def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt - filepath = self.filepath_from_context(context) filepath = os.path.normpath(filepath) rt.FBXImporterSetParam("Animation", True) rt.FBXImporterSetParam("Camera", True) rt.FBXImporterSetParam("AxisConversionMethod", True) + rt.FBXImporterSetParam("Mode", rt.Name("create")) rt.FBXImporterSetParam("Preserveinstances", True) rt.ImportFile( filepath, rt.name("noPrompt"), using=rt.FBXIMP) - container = rt.GetNodeByName(f"{name}") - if not container: - container = rt.Container() - container.name = f"{name}" + namespace = unique_namespace( + name + "_", + suffix="_", + ) + container = rt.container( + name=f"{namespace}:{name}_{self.postfix}") + selections = rt.GetCurrentSelection() + import_custom_attribute_data(container, selections) - for selection in rt.GetCurrentSelection(): + for selection in selections: selection.Parent = container + selection.name = f"{namespace}:{selection.name}" return containerise( - name, [container], context, loader=self.__class__.__name__) + name, [container], context, + namespace, loader=self.__class__.__name__) def update(self, container, representation): from pymxs import runtime as rt path = get_representation_path(representation) - node = rt.GetNodeByName(container["instance_node"]) - rt.Select(node.Children) - fbx_reimport_cmd = ( - f""" - -FBXImporterSetParam "Animation" true -FBXImporterSetParam "Cameras" true -FBXImporterSetParam "AxisConversionMethod" true -FbxExporterSetParam "UpAxis" "Y" -FbxExporterSetParam "Preserveinstances" true - -importFile @"{path}" #noPrompt using:FBXIMP - """) - rt.Execute(fbx_reimport_cmd) + node_name = container["instance_node"] + node = rt.getNodeByName(node_name) + namespace, name = get_namespace(node_name) + sub_node_name = f"{namespace}:{name}_{self.postfix}" + inst_container = rt.getNodeByName(sub_node_name) + rt.Select(inst_container.Children) + transform_data = object_transform_set(inst_container.Children) + for prev_fbx_obj in rt.selection: + if rt.isValidNode(prev_fbx_obj): + rt.Delete(prev_fbx_obj) + + rt.FBXImporterSetParam("Animation", True) + rt.FBXImporterSetParam("Camera", True) + rt.FBXImporterSetParam("Mode", rt.Name("merge")) + rt.FBXImporterSetParam("AxisConversionMethod", True) + rt.FBXImporterSetParam("Preserveinstances", True) + rt.ImportFile( + path, rt.name("noPrompt"), using=rt.FBXIMP) + current_fbx_objects = rt.GetCurrentSelection() + for fbx_object in current_fbx_objects: + if fbx_object.Parent != inst_container: + 
fbx_object.Parent = inst_container + fbx_object.name = f"{namespace}:{fbx_object.name}" + fbx_object.pos = transform_data[ + f"{fbx_object.name}.transform"] + fbx_object.scale = transform_data[ + f"{fbx_object.name}.scale"] + + for children in node.Children: + if rt.classOf(children) == rt.Container: + if children.name == sub_node_name: + update_custom_attribute_data( + children, current_fbx_objects) with maintained_selection(): rt.Select(node) diff --git a/openpype/hosts/max/plugins/load/load_max_scene.py b/openpype/hosts/max/plugins/load/load_max_scene.py index 76cd3bf3673..98e9be96e16 100644 --- a/openpype/hosts/max/plugins/load/load_max_scene.py +++ b/openpype/hosts/max/plugins/load/load_max_scene.py @@ -1,7 +1,15 @@ import os from openpype.hosts.max.api import lib -from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api.lib import ( + unique_namespace, + get_namespace, + object_transform_set +) +from openpype.hosts.max.api.pipeline import ( + containerise, import_custom_attribute_data, + update_custom_attribute_data +) from openpype.pipeline import get_representation_path, load @@ -16,22 +24,34 @@ class MaxSceneLoader(load.LoaderPlugin): order = -8 icon = "code-fork" color = "green" + postfix = "param" def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt - path = self.filepath_from_context(context) path = os.path.normpath(path) # import the max scene by using "merge file" path = path.replace('\\', '/') - rt.MergeMaxFile(path) + rt.MergeMaxFile(path, quiet=True, includeFullGroup=True) max_objects = rt.getLastMergedNodes() - max_container = rt.Container(name=f"{name}") - for max_object in max_objects: - max_object.Parent = max_container - + max_object_names = [obj.name for obj in max_objects] + # implement the OP/AYON custom attributes before load + max_container = [] + + namespace = unique_namespace( + name + "_", + suffix="_", + ) + container_name = f"{namespace}:{name}_{self.postfix}" + container = rt.Container(name=container_name) + import_custom_attribute_data(container, max_objects) + max_container.append(container) + max_container.extend(max_objects) + for max_obj, obj_name in zip(max_objects, max_object_names): + max_obj.name = f"{namespace}:{obj_name}" return containerise( - name, [max_container], context, loader=self.__class__.__name__) + name, max_container, context, + namespace, loader=self.__class__.__name__) def update(self, container, representation): from pymxs import runtime as rt @@ -39,15 +59,32 @@ def update(self, container, representation): path = get_representation_path(representation) node_name = container["instance_node"] - rt.MergeMaxFile(path, - rt.Name("noRedraw"), - rt.Name("deleteOldDups"), - rt.Name("useSceneMtlDups")) - - max_objects = rt.getLastMergedNodes() - container_node = rt.GetNodeByName(node_name) - for max_object in max_objects: - max_object.Parent = container_node + node = rt.getNodeByName(node_name) + namespace, name = get_namespace(node_name) + sub_container_name = f"{namespace}:{name}_{self.postfix}" + # delete the old container with attribute + # delete old duplicate + rt.Select(node.Children) + transform_data = object_transform_set(node.Children) + for prev_max_obj in rt.GetCurrentSelection(): + if rt.isValidNode(prev_max_obj) and prev_max_obj.name != sub_container_name: # noqa + rt.Delete(prev_max_obj) + rt.MergeMaxFile(path, rt.Name("deleteOldDups")) + + current_max_objects = rt.getLastMergedNodes() + current_max_object_names = [obj.name for obj + in 
current_max_objects] + sub_container = rt.getNodeByName(sub_container_name) + update_custom_attribute_data(sub_container, current_max_objects) + for max_object in current_max_objects: + max_object.Parent = node + for max_obj, obj_name in zip(current_max_objects, + current_max_object_names): + max_obj.name = f"{namespace}:{obj_name}" + max_obj.pos = transform_data[ + f"{max_obj.name}.transform"] + max_obj.scale = transform_data[ + f"{max_obj.name}.scale"] lib.imprint(container["instance_node"], { "representation": str(representation["_id"]) diff --git a/openpype/hosts/max/plugins/load/load_model.py b/openpype/hosts/max/plugins/load/load_model.py index cff82a593c7..c5a73b43276 100644 --- a/openpype/hosts/max/plugins/load/load_model.py +++ b/openpype/hosts/max/plugins/load/load_model.py @@ -1,8 +1,14 @@ import os from openpype.pipeline import load, get_representation_path -from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api.pipeline import ( + containerise, + import_custom_attribute_data, + update_custom_attribute_data +) from openpype.hosts.max.api import lib -from openpype.hosts.max.api.lib import maintained_selection +from openpype.hosts.max.api.lib import ( + maintained_selection, unique_namespace +) class ModelAbcLoader(load.LoaderPlugin): @@ -14,6 +20,7 @@ class ModelAbcLoader(load.LoaderPlugin): order = -10 icon = "code-fork" color = "orange" + postfix = "param" def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt @@ -30,7 +37,7 @@ def load(self, context, name=None, namespace=None, data=None): rt.AlembicImport.CustomAttributes = True rt.AlembicImport.UVs = True rt.AlembicImport.VertexColors = True - rt.importFile(file_path, rt.name("noPrompt")) + rt.importFile(file_path, rt.name("noPrompt"), using=rt.AlembicImport) abc_after = { c @@ -45,9 +52,22 @@ def load(self, context, name=None, namespace=None, data=None): self.log.error("Something failed when loading.") abc_container = abc_containers.pop() + import_custom_attribute_data( + abc_container, abc_container.Children) + + namespace = unique_namespace( + name + "_", + suffix="_", + ) + for abc_object in abc_container.Children: + abc_object.name = f"{namespace}:{abc_object.name}" + # rename the abc container with namespace + abc_container_name = f"{namespace}:{name}_{self.postfix}" + abc_container.name = abc_container_name return containerise( - name, [abc_container], context, loader=self.__class__.__name__ + name, [abc_container], context, + namespace, loader=self.__class__.__name__ ) def update(self, container, representation): @@ -55,21 +75,19 @@ def update(self, container, representation): path = get_representation_path(representation) node = rt.GetNodeByName(container["instance_node"]) - rt.Select(node.Children) - - for alembic in rt.Selection: - abc = rt.GetNodeByName(alembic.name) - rt.Select(abc.Children) - for abc_con in rt.Selection: - container = rt.GetNodeByName(abc_con.name) - container.source = path - rt.Select(container.Children) - for abc_obj in rt.Selection: - alembic_obj = rt.GetNodeByName(abc_obj.name) - alembic_obj.source = path with maintained_selection(): - rt.Select(node) + rt.Select(node.Children) + + for alembic in rt.Selection: + abc = rt.GetNodeByName(alembic.name) + update_custom_attribute_data(abc, abc.Children) + rt.Select(abc.Children) + for abc_con in abc.Children: + abc_con.source = path + rt.Select(abc_con.Children) + for abc_obj in abc_con.Children: + abc_obj.source = path lib.imprint( container["instance_node"], diff --git 
a/openpype/hosts/max/plugins/load/load_model_fbx.py b/openpype/hosts/max/plugins/load/load_model_fbx.py index 12f526ab957..56c8768675f 100644 --- a/openpype/hosts/max/plugins/load/load_model_fbx.py +++ b/openpype/hosts/max/plugins/load/load_model_fbx.py @@ -1,7 +1,15 @@ import os from openpype.pipeline import load, get_representation_path -from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api.pipeline import ( + containerise, import_custom_attribute_data, + update_custom_attribute_data +) from openpype.hosts.max.api import lib +from openpype.hosts.max.api.lib import ( + unique_namespace, + get_namespace, + object_transform_set +) from openpype.hosts.max.api.lib import maintained_selection @@ -13,6 +21,7 @@ class FbxModelLoader(load.LoaderPlugin): order = -9 icon = "code-fork" color = "white" + postfix = "param" def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt @@ -20,39 +29,69 @@ def load(self, context, name=None, namespace=None, data=None): filepath = os.path.normpath(self.filepath_from_context(context)) rt.FBXImporterSetParam("Animation", False) rt.FBXImporterSetParam("Cameras", False) + rt.FBXImporterSetParam("Mode", rt.Name("create")) rt.FBXImporterSetParam("Preserveinstances", True) rt.importFile(filepath, rt.name("noPrompt"), using=rt.FBXIMP) - container = rt.GetNodeByName(name) - if not container: - container = rt.Container() - container.name = name + namespace = unique_namespace( + name + "_", + suffix="_", + ) + container = rt.container( + name=f"{namespace}:{name}_{self.postfix}") + selections = rt.GetCurrentSelection() + import_custom_attribute_data(container, selections) - for selection in rt.GetCurrentSelection(): + for selection in selections: selection.Parent = container + selection.name = f"{namespace}:{selection.name}" return containerise( - name, [container], context, loader=self.__class__.__name__ + name, [container], context, + namespace, loader=self.__class__.__name__ ) def update(self, container, representation): from pymxs import runtime as rt path = get_representation_path(representation) - node = rt.getNodeByName(container["instance_node"]) - rt.select(node.Children) + node_name = container["instance_node"] + node = rt.getNodeByName(node_name) + namespace, name = get_namespace(node_name) + sub_node_name = f"{namespace}:{name}_{self.postfix}" + inst_container = rt.getNodeByName(sub_node_name) + rt.Select(inst_container.Children) + transform_data = object_transform_set(inst_container.Children) + for prev_fbx_obj in rt.selection: + if rt.isValidNode(prev_fbx_obj): + rt.Delete(prev_fbx_obj) rt.FBXImporterSetParam("Animation", False) rt.FBXImporterSetParam("Cameras", False) + rt.FBXImporterSetParam("Mode", rt.Name("merge")) rt.FBXImporterSetParam("AxisConversionMethod", True) - rt.FBXImporterSetParam("UpAxis", "Y") rt.FBXImporterSetParam("Preserveinstances", True) rt.importFile(path, rt.name("noPrompt"), using=rt.FBXIMP) + current_fbx_objects = rt.GetCurrentSelection() + for fbx_object in current_fbx_objects: + if fbx_object.Parent != inst_container: + fbx_object.Parent = inst_container + fbx_object.name = f"{namespace}:{fbx_object.name}" + fbx_object.pos = transform_data[ + f"{fbx_object.name}.transform"] + fbx_object.scale = transform_data[ + f"{fbx_object.name}.scale"] + + for children in node.Children: + if rt.classOf(children) == rt.Container: + if children.name == sub_node_name: + update_custom_attribute_data( + children, current_fbx_objects) with maintained_selection(): rt.Select(node) 
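For orientation, the FBX, OBJ and Max-scene loaders above all converge on the same update recipe: look up the namespaced `*_param` sub-container, snapshot the old children's transforms with `object_transform_set`, delete them, re-import in merge mode, then re-parent, re-namespace and restore transforms. A condensed sketch of that shared flow, assuming it runs inside 3ds Max; `reimport` is a hypothetical stand-in for the format-specific call (`rt.ImportFile`, `rt.MergeMaxFile`, ...):

```python
# Condensed sketch of the update flow shared by these Max loaders.
# Runs only inside 3ds Max; `reimport` is a hypothetical stand-in for
# the format-specific import call used by each loader.
from pymxs import runtime as rt

from openpype.hosts.max.api.lib import get_namespace, object_transform_set


def update_loaded_content(instance_node_name, path, postfix, reimport):
    node = rt.getNodeByName(instance_node_name)
    namespace, name = get_namespace(instance_node_name)
    sub_container = rt.getNodeByName(f"{namespace}:{name}_{postfix}")

    # Transforms are keyed by the old (already namespaced) node names.
    transform_data = object_transform_set(sub_container.Children)

    for old_node in list(sub_container.Children):
        if rt.isValidNode(old_node):
            rt.Delete(old_node)

    reimport(path)  # leaves the freshly imported nodes selected

    for new_node in rt.GetCurrentSelection():
        new_node.Parent = sub_container
        # Renaming first makes the names match the recorded keys again.
        new_node.name = f"{namespace}:{new_node.name}"
        new_node.pos = transform_data[f"{new_node.name}.transform"]
        new_node.scale = transform_data[f"{new_node.name}.scale"]
    return node
```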
lib.imprint( - container["instance_node"], + node_name, {"representation": str(representation["_id"])}, ) diff --git a/openpype/hosts/max/plugins/load/load_model_obj.py b/openpype/hosts/max/plugins/load/load_model_obj.py index 18a19414fab..314889e6ecd 100644 --- a/openpype/hosts/max/plugins/load/load_model_obj.py +++ b/openpype/hosts/max/plugins/load/load_model_obj.py @@ -1,8 +1,18 @@ import os from openpype.hosts.max.api import lib +from openpype.hosts.max.api.lib import ( + unique_namespace, + get_namespace, + maintained_selection, + object_transform_set +) from openpype.hosts.max.api.lib import maintained_selection -from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api.pipeline import ( + containerise, + import_custom_attribute_data, + update_custom_attribute_data +) from openpype.pipeline import get_representation_path, load @@ -14,6 +24,7 @@ class ObjLoader(load.LoaderPlugin): order = -9 icon = "code-fork" color = "white" + postfix = "param" def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt @@ -22,36 +33,49 @@ def load(self, context, name=None, namespace=None, data=None): self.log.debug("Executing command to import..") rt.Execute(f'importFile @"{filepath}" #noPrompt using:ObjImp') - # create "missing" container for obj import - container = rt.Container() - container.name = name + namespace = unique_namespace( + name + "_", + suffix="_", + ) + # create "missing" container for obj import + container = rt.Container(name=f"{namespace}:{name}_{self.postfix}") + selections = rt.GetCurrentSelection() + import_custom_attribute_data(container, selections) # get current selection - for selection in rt.GetCurrentSelection(): + for selection in selections: selection.Parent = container - - asset = rt.GetNodeByName(name) - + selection.name = f"{namespace}:{selection.name}" return containerise( - name, [asset], context, loader=self.__class__.__name__) + name, [container], context, + namespace, loader=self.__class__.__name__) def update(self, container, representation): from pymxs import runtime as rt path = get_representation_path(representation) node_name = container["instance_node"] - node = rt.GetNodeByName(node_name) - - instance_name, _ = node_name.split("_") - container = rt.GetNodeByName(instance_name) - for child in container.Children: - rt.Delete(child) + node = rt.getNodeByName(node_name) + namespace, name = get_namespace(node_name) + sub_node_name = f"{namespace}:{name}_{self.postfix}" + inst_container = rt.getNodeByName(sub_node_name) + rt.Select(inst_container.Children) + transform_data = object_transform_set(inst_container.Children) + for prev_obj in rt.selection: + if rt.isValidNode(prev_obj): + rt.Delete(prev_obj) rt.Execute(f'importFile @"{path}" #noPrompt using:ObjImp') # get current selection - for selection in rt.GetCurrentSelection(): - selection.Parent = container - + selections = rt.GetCurrentSelection() + update_custom_attribute_data(inst_container, selections) + for selection in selections: + selection.Parent = inst_container + selection.name = f"{namespace}:{selection.name}" + selection.pos = transform_data[ + f"{selection.name}.transform"] + selection.scale = transform_data[ + f"{selection.name}.scale"] with maintained_selection(): rt.Select(node) diff --git a/openpype/hosts/max/plugins/load/load_model_usd.py b/openpype/hosts/max/plugins/load/load_model_usd.py index 48b50b9b180..f35d8e63271 100644 --- a/openpype/hosts/max/plugins/load/load_model_usd.py +++ 
b/openpype/hosts/max/plugins/load/load_model_usd.py @@ -1,8 +1,16 @@ import os from openpype.hosts.max.api import lib +from openpype.hosts.max.api.lib import ( + unique_namespace, + get_namespace, + object_transform_set +) from openpype.hosts.max.api.lib import maintained_selection -from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api.pipeline import ( + containerise, + import_custom_attribute_data +) from openpype.pipeline import get_representation_path, load @@ -15,6 +23,7 @@ class ModelUSDLoader(load.LoaderPlugin): order = -10 icon = "code-fork" color = "orange" + postfix = "param" def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt @@ -30,11 +39,24 @@ def load(self, context, name=None, namespace=None, data=None): rt.LogLevel = rt.Name("info") rt.USDImporter.importFile(filepath, importOptions=import_options) - + namespace = unique_namespace( + name + "_", + suffix="_", + ) asset = rt.GetNodeByName(name) + import_custom_attribute_data(asset, asset.Children) + for usd_asset in asset.Children: + usd_asset.name = f"{namespace}:{usd_asset.name}" + + asset_name = f"{namespace}:{name}_{self.postfix}" + asset.name = asset_name + # need to get the correct container after renamed + asset = rt.GetNodeByName(asset_name) + return containerise( - name, [asset], context, loader=self.__class__.__name__) + name, [asset], context, + namespace, loader=self.__class__.__name__) def update(self, container, representation): from pymxs import runtime as rt @@ -42,11 +64,16 @@ def update(self, container, representation): path = get_representation_path(representation) node_name = container["instance_node"] node = rt.GetNodeByName(node_name) + namespace, name = get_namespace(node_name) + sub_node_name = f"{namespace}:{name}_{self.postfix}" + transform_data = None for n in node.Children: - for r in n.Children: - rt.Delete(r) + rt.Select(n.Children) + transform_data = object_transform_set(n.Children) + for prev_usd_asset in rt.selection: + if rt.isValidNode(prev_usd_asset): + rt.Delete(prev_usd_asset) rt.Delete(n) - instance_name, _ = node_name.split("_") import_options = rt.USDImporter.CreateOptions() base_filename = os.path.basename(path) @@ -55,11 +82,20 @@ def update(self, container, representation): rt.LogPath = log_filepath rt.LogLevel = rt.Name("info") - rt.USDImporter.importFile(path, - importOptions=import_options) + rt.USDImporter.importFile( + path, importOptions=import_options) - asset = rt.GetNodeByName(instance_name) + asset = rt.GetNodeByName(name) asset.Parent = node + import_custom_attribute_data(asset, asset.Children) + for children in asset.Children: + children.name = f"{namespace}:{children.name}" + children.pos = transform_data[ + f"{children.name}.transform"] + children.scale = transform_data[ + f"{children.name}.scale"] + + asset.name = sub_node_name with maintained_selection(): rt.Select(node) diff --git a/openpype/hosts/max/plugins/load/load_pointcache.py b/openpype/hosts/max/plugins/load/load_pointcache.py index 290503e053e..070dea88d4b 100644 --- a/openpype/hosts/max/plugins/load/load_pointcache.py +++ b/openpype/hosts/max/plugins/load/load_pointcache.py @@ -7,7 +7,12 @@ import os from openpype.pipeline import load, get_representation_path from openpype.hosts.max.api import lib, maintained_selection -from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api.lib import unique_namespace +from openpype.hosts.max.api.pipeline import ( + containerise, + import_custom_attribute_data, + 
update_custom_attribute_data +) class AbcLoader(load.LoaderPlugin): @@ -19,6 +24,7 @@ class AbcLoader(load.LoaderPlugin): order = -10 icon = "code-fork" color = "orange" + postfix = "param" def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt @@ -33,7 +39,7 @@ def load(self, context, name=None, namespace=None, data=None): } rt.AlembicImport.ImportToRoot = False - rt.importFile(file_path, rt.name("noPrompt")) + rt.importFile(file_path, rt.name("noPrompt"), using=rt.AlembicImport) abc_after = { c @@ -48,13 +54,27 @@ def load(self, context, name=None, namespace=None, data=None): self.log.error("Something failed when loading.") abc_container = abc_containers.pop() - - for abc in rt.GetCurrentSelection(): + selections = rt.GetCurrentSelection() + import_custom_attribute_data( + abc_container, abc_container.Children) + for abc in selections: for cam_shape in abc.Children: cam_shape.playbackType = 2 + namespace = unique_namespace( + name + "_", + suffix="_", + ) + + for abc_object in abc_container.Children: + abc_object.name = f"{namespace}:{abc_object.name}" + # rename the abc container with namespace + abc_container_name = f"{namespace}:{name}_{self.postfix}" + abc_container.name = abc_container_name + return containerise( - name, [abc_container], context, loader=self.__class__.__name__ + name, [abc_container], context, + namespace, loader=self.__class__.__name__ ) def update(self, container, representation): @@ -63,28 +83,23 @@ def update(self, container, representation): path = get_representation_path(representation) node = rt.GetNodeByName(container["instance_node"]) - alembic_objects = self.get_container_children(node, "AlembicObject") - for alembic_object in alembic_objects: - alembic_object.source = path - - lib.imprint( - container["instance_node"], - {"representation": str(representation["_id"])}, - ) - with maintained_selection(): rt.Select(node.Children) for alembic in rt.Selection: abc = rt.GetNodeByName(alembic.name) + update_custom_attribute_data(abc, abc.Children) rt.Select(abc.Children) - for abc_con in rt.Selection: - container = rt.GetNodeByName(abc_con.name) - container.source = path - rt.Select(container.Children) - for abc_obj in rt.Selection: - alembic_obj = rt.GetNodeByName(abc_obj.name) - alembic_obj.source = path + for abc_con in abc.Children: + abc_con.source = path + rt.Select(abc_con.Children) + for abc_obj in abc_con.Children: + abc_obj.source = path + + lib.imprint( + container["instance_node"], + {"representation": str(representation["_id"])}, + ) def switch(self, container, representation): self.update(container, representation) diff --git a/openpype/hosts/max/plugins/load/load_pointcloud.py b/openpype/hosts/max/plugins/load/load_pointcloud.py index 2a1175167ab..c4c4cfbc6cc 100644 --- a/openpype/hosts/max/plugins/load/load_pointcloud.py +++ b/openpype/hosts/max/plugins/load/load_pointcloud.py @@ -1,7 +1,14 @@ import os from openpype.hosts.max.api import lib, maintained_selection -from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api.lib import ( + unique_namespace, get_namespace +) +from openpype.hosts.max.api.pipeline import ( + containerise, + import_custom_attribute_data, + update_custom_attribute_data +) from openpype.pipeline import get_representation_path, load @@ -13,6 +20,7 @@ class PointCloudLoader(load.LoaderPlugin): order = -8 icon = "code-fork" color = "green" + postfix = "param" def load(self, context, name=None, namespace=None, data=None): """load point cloud by tyCache""" @@ 
-22,10 +30,19 @@ def load(self, context, name=None, namespace=None, data=None): obj = rt.tyCache() obj.filename = filepath - prt_container = rt.GetNodeByName(obj.name) + namespace = unique_namespace( + name + "_", + suffix="_", + ) + prt_container = rt.Container( + name=f"{namespace}:{name}_{self.postfix}") + import_custom_attribute_data(prt_container, [obj]) + obj.Parent = prt_container + obj.name = f"{namespace}:{obj.name}" return containerise( - name, [prt_container], context, loader=self.__class__.__name__) + name, [prt_container], context, + namespace, loader=self.__class__.__name__) def update(self, container, representation): """update the container""" @@ -33,15 +50,18 @@ def update(self, container, representation): path = get_representation_path(representation) node = rt.GetNodeByName(container["instance_node"]) + namespace, name = get_namespace(container["instance_node"]) + sub_node_name = f"{namespace}:{name}_{self.postfix}" + inst_container = rt.getNodeByName(sub_node_name) + update_custom_attribute_data( + inst_container, inst_container.Children) with maintained_selection(): rt.Select(node.Children) - for prt in rt.Selection: - prt_object = rt.GetNodeByName(prt.name) - prt_object.filename = path - - lib.imprint(container["instance_node"], { - "representation": str(representation["_id"]) - }) + for prt in inst_container.Children: + prt.filename = path + lib.imprint(container["instance_node"], { + "representation": str(representation["_id"]) + }) def switch(self, container, representation): self.update(container, representation) diff --git a/openpype/hosts/max/plugins/load/load_redshift_proxy.py b/openpype/hosts/max/plugins/load/load_redshift_proxy.py index 31692f6367c..f7dd95962b9 100644 --- a/openpype/hosts/max/plugins/load/load_redshift_proxy.py +++ b/openpype/hosts/max/plugins/load/load_redshift_proxy.py @@ -5,8 +5,15 @@ load, get_representation_path ) -from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api.pipeline import ( + containerise, + import_custom_attribute_data, + update_custom_attribute_data +) from openpype.hosts.max.api import lib +from openpype.hosts.max.api.lib import ( + unique_namespace, get_namespace +) class RedshiftProxyLoader(load.LoaderPlugin): @@ -18,6 +25,7 @@ class RedshiftProxyLoader(load.LoaderPlugin): order = -9 icon = "code-fork" color = "white" + postfix = "param" def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt @@ -30,24 +38,32 @@ def load(self, context, name=None, namespace=None, data=None): if collections: rs_proxy.is_sequence = True - container = rt.container() - container.name = name + namespace = unique_namespace( + name + "_", + suffix="_", + ) + container = rt.Container( + name=f"{namespace}:{name}_{self.postfix}") rs_proxy.Parent = container - - asset = rt.getNodeByName(name) + rs_proxy.name = f"{namespace}:{rs_proxy.name}" + import_custom_attribute_data(container, [rs_proxy]) return containerise( - name, [asset], context, loader=self.__class__.__name__) + name, [container], context, + namespace, loader=self.__class__.__name__) def update(self, container, representation): from pymxs import runtime as rt path = get_representation_path(representation) - node = rt.getNodeByName(container["instance_node"]) - for children in node.Children: - children_node = rt.getNodeByName(children.name) - for proxy in children_node.Children: - proxy.file = path + namespace, name = get_namespace(container["instance_node"]) + sub_node_name = f"{namespace}:{name}_{self.postfix}" + 
inst_container = rt.getNodeByName(sub_node_name)
+
+        update_custom_attribute_data(
+            inst_container, inst_container.Children)
+        for proxy in inst_container.Children:
+            proxy.file = path
 
         lib.imprint(container["instance_node"], {
             "representation": str(representation["_id"])
diff --git a/openpype/hosts/max/plugins/publish/collect_members.py b/openpype/hosts/max/plugins/publish/collect_members.py
index 812d82ff26e..2970cf0e247 100644
--- a/openpype/hosts/max/plugins/publish/collect_members.py
+++ b/openpype/hosts/max/plugins/publish/collect_members.py
@@ -17,6 +17,6 @@ def process(self, instance):
         container = rt.GetNodeByName(instance.data["instance_node"])
         instance.data["members"] = [
             member.node for member
-            in container.openPypeData.all_handles
+            in container.modifiers[0].openPypeData.all_handles
         ]
         self.log.debug("{}".format(instance.data["members"]))
diff --git a/openpype/hosts/max/plugins/publish/collect_render.py b/openpype/hosts/max/plugins/publish/collect_render.py
index db5c84fad99..8ee2f431037 100644
--- a/openpype/hosts/max/plugins/publish/collect_render.py
+++ b/openpype/hosts/max/plugins/publish/collect_render.py
@@ -34,6 +34,9 @@ def process(self, instance):
             aovs = RenderProducts().get_aovs(instance.name)
             files_by_aov.update(aovs)
 
+        camera = rt.viewport.GetCamera()
+        instance.data["cameras"] = [camera.name] if camera else None  # noqa
+
         if "expectedFiles" not in instance.data:
             instance.data["expectedFiles"] = list()
             instance.data["files"] = list()
diff --git a/openpype/hosts/max/plugins/publish/validate_no_max_content.py b/openpype/hosts/max/plugins/publish/validate_no_max_content.py
index c6a27dace36..73e12e75c90 100644
--- a/openpype/hosts/max/plugins/publish/validate_no_max_content.py
+++ b/openpype/hosts/max/plugins/publish/validate_no_max_content.py
@@ -13,7 +13,6 @@ class ValidateMaxContents(pyblish.api.InstancePlugin):
     order = pyblish.api.ValidatorOrder
     families = ["camera",
                 "maxScene",
-                "maxrender",
                 "review"]
     hosts = ["max"]
     label = "Max Scene Contents"
diff --git a/openpype/hosts/max/plugins/publish/validate_renderable_camera.py b/openpype/hosts/max/plugins/publish/validate_renderable_camera.py
new file mode 100644
index 00000000000..61321661b53
--- /dev/null
+++ b/openpype/hosts/max/plugins/publish/validate_renderable_camera.py
@@ -0,0 +1,46 @@
+# -*- coding: utf-8 -*-
+import pyblish.api
+from openpype.pipeline import (
+    PublishValidationError,
+    OptionalPyblishPluginMixin)
+from openpype.pipeline.publish import RepairAction
+from openpype.hosts.max.api.lib import get_current_renderer
+
+from pymxs import runtime as rt
+
+
+class ValidateRenderableCamera(pyblish.api.InstancePlugin,
+                               OptionalPyblishPluginMixin):
+    """Validates Renderable Camera.
+
+    Check whether a renderable camera is used for rendering.
+    """
+
+    order = pyblish.api.ValidatorOrder
+    families = ["maxrender"]
+    hosts = ["max"]
+    label = "Renderable Camera"
+    optional = True
+    actions = [RepairAction]
+
+    def process(self, instance):
+        if not self.is_active(instance.data):
+            return
+        if not instance.data["cameras"]:
+            raise PublishValidationError(
+                "No renderable camera found in scene."
+            )
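The repair half of this validator works because `RepairAction` invokes the plugin's `repair()` classmethod for each failed instance. Stripped to its skeleton, the optional-validator plus repair pairing looks roughly like this (the camera fix-up value is hypothetical):

```python
# Skeleton of the optional-validator + RepairAction pairing; the
# camera fix-up value used in repair() is hypothetical.
import pyblish.api
from openpype.pipeline import (
    PublishValidationError,
    OptionalPyblishPluginMixin,
)
from openpype.pipeline.publish import RepairAction


class ValidateExample(pyblish.api.InstancePlugin,
                      OptionalPyblishPluginMixin):
    order = pyblish.api.ValidatorOrder
    optional = True           # artists may toggle the check per instance
    actions = [RepairAction]  # shows a "Repair" button when it fails

    def process(self, instance):
        if not self.is_active(instance.data):
            return  # the check was disabled on this instance
        if not instance.data.get("cameras"):
            raise PublishValidationError("No renderable camera found.")

    @classmethod
    def repair(cls, instance):
        # Called by RepairAction; must leave the instance in a state
        # that passes process() on the next validation run.
        instance.data["cameras"] = ["persp"]
```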
+
+    @classmethod
+    def repair(cls, instance):
+
+        rt.viewport.setType(rt.Name("view_camera"))
+        camera = rt.viewport.GetCamera()
+        cls.log.info(f"Camera {camera} set as renderable camera")
+        renderer_class = get_current_renderer()
+        renderer = str(renderer_class).split(":")[0]
+        if renderer == "Arnold":
+            arv = rt.MAXToAOps.ArnoldRenderView()
+            arv.setOption("Camera", str(camera))
+            arv.close()
+        instance.data["cameras"] = [camera.name]
diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py
index cdc722a4093..40b3419e73c 100644
--- a/openpype/hosts/maya/api/lib.py
+++ b/openpype/hosts/maya/api/lib.py
@@ -27,20 +27,16 @@
 from openpype.pipeline import (
     get_current_project_name,
     get_current_asset_name,
+    get_current_task_name,
     discover_loader_plugins,
     loaders_from_representation,
     get_representation_path,
     load_container,
-    registered_host,
+    registered_host
 )
 from openpype.lib import NumberDef
 from openpype.pipeline.context_tools import get_current_project_asset
 from openpype.pipeline.create import CreateContext
-from openpype.pipeline.context_tools import (
-    get_current_asset_name,
-    get_current_project_name,
-    get_current_task_name
-)
 from openpype.lib.profiles_filtering import filter_profiles
diff --git a/openpype/hosts/maya/api/plugin.py b/openpype/hosts/maya/api/plugin.py
index 2b5aee9700e..00d6602ef92 100644
--- a/openpype/hosts/maya/api/plugin.py
+++ b/openpype/hosts/maya/api/plugin.py
@@ -8,13 +8,24 @@
 from maya.app.renderSetup.model import renderSetup
 
 from openpype.lib import BoolDef, Logger
-from openpype.pipeline import AVALON_CONTAINER_ID, Anatomy, CreatedInstance
-from openpype.pipeline import Creator as NewCreator
+from openpype.settings import get_project_settings
 from openpype.pipeline import (
-    CreatorError, LegacyCreator, LoaderPlugin, get_representation_path,
-    legacy_io)
+    AVALON_CONTAINER_ID,
+    Anatomy,
+
+    CreatedInstance,
+    Creator as NewCreator,
+    AutoCreator,
+    HiddenCreator,
+
+    CreatorError,
+    LegacyCreator,
+    LoaderPlugin,
+    get_representation_path,
+)
 from openpype.pipeline.load import LoadError
-from openpype.settings import get_project_settings
+from openpype.client import get_asset_by_name
+from openpype.pipeline.create import get_subset_name
 
 from . import lib
 from .lib import imprint, read
@@ -177,10 +188,42 @@ def read_instance_node(self, node):
 
         return node_data
 
+    def _default_collect_instances(self):
+        self.cache_subsets(self.collection_shared_data)
+        cached_subsets = self.collection_shared_data["maya_cached_subsets"]
+        for node in cached_subsets.get(self.identifier, []):
+            node_data = self.read_instance_node(node)
+
+            created_instance = CreatedInstance.from_existing(node_data, self)
+            self._add_instance_to_context(created_instance)
+
+    def _default_update_instances(self, update_list):
+        for created_inst, _changes in update_list:
+            data = created_inst.data_to_store()
+            node = data.get("instance_node")
+
+            self.imprint_instance_node(node, data)
+
+    def _default_remove_instances(self, instances):
+        """Remove specified instances from the scene.
+
+        This only removes the `id` parameter, so the node is no longer an
+        instance; the node itself is kept because it might contain
+        valuable data for the artist.
+ + """ + for instance in instances: + node = instance.data.get("instance_node") + if node: + cmds.delete(node) + + self._remove_instance_from_context(instance) + @six.add_metaclass(ABCMeta) class MayaCreator(NewCreator, MayaCreatorBase): + settings_name = None + def create(self, subset_name, instance_data, pre_create_data): members = list() @@ -202,34 +245,13 @@ def create(self, subset_name, instance_data, pre_create_data): return instance def collect_instances(self): - self.cache_subsets(self.collection_shared_data) - cached_subsets = self.collection_shared_data["maya_cached_subsets"] - for node in cached_subsets.get(self.identifier, []): - node_data = self.read_instance_node(node) - - created_instance = CreatedInstance.from_existing(node_data, self) - self._add_instance_to_context(created_instance) + return self._default_collect_instances() def update_instances(self, update_list): - for created_inst, _changes in update_list: - data = created_inst.data_to_store() - node = data.get("instance_node") - - self.imprint_instance_node(node, data) + return self._default_update_instances(update_list) def remove_instances(self, instances): - """Remove specified instance from the scene. - - This is only removing `id` parameter so instance is no longer - instance, because it might contain valuable data for artist. - - """ - for instance in instances: - node = instance.data.get("instance_node") - if node: - cmds.delete(node) - - self._remove_instance_from_context(instance) + return self._default_remove_instances(instances) def get_pre_create_attr_defs(self): return [ @@ -238,6 +260,61 @@ def get_pre_create_attr_defs(self): default=True) ] + def apply_settings(self, project_settings, system_settings): + """Method called on initialization of plugin to apply settings.""" + + settings_name = self.settings_name + if settings_name is None: + settings_name = self.__class__.__name__ + + settings = project_settings["maya"]["create"] + settings = settings.get(settings_name) + if settings is None: + self.log.debug( + "No settings found for {}".format(self.__class__.__name__) + ) + return + + for key, value in settings.items(): + setattr(self, key, value) + + +class MayaAutoCreator(AutoCreator, MayaCreatorBase): + """Automatically triggered creator for Maya. + + The plugin is not visible in UI, and 'create' method does not expect + any arguments. + """ + + def collect_instances(self): + return self._default_collect_instances() + + def update_instances(self, update_list): + return self._default_update_instances(update_list) + + def remove_instances(self, instances): + return self._default_remove_instances(instances) + + +class MayaHiddenCreator(HiddenCreator, MayaCreatorBase): + """Hidden creator for Maya. + + The plugin is not visible in UI, and it does not have strictly defined + arguments for 'create' method. + """ + + def create(self, *args, **kwargs): + return MayaCreator.create(self, *args, **kwargs) + + def collect_instances(self): + return self._default_collect_instances() + + def update_instances(self, update_list): + return self._default_update_instances(update_list) + + def remove_instances(self, instances): + return self._default_remove_instances(instances) + def ensure_namespace(namespace): """Make sure the namespace exists. @@ -328,14 +405,21 @@ def collect_instances(self): # No existing scene instance node for this layer. Note that # this instance will not have the `instance_node` data yet # until it's been saved/persisted at least once. 
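The change that follows replaces the hand-built `prefix + layer.name()` subset name with the shared template resolver, so renderlayer subsets obey the project's naming templates. Roughly, using illustrative project, asset and task names:

```python
# Illustrative only: "demo_project", "sh010" and "lighting" are made-up
# values; the call mirrors the one added to collect_instances() below.
from openpype.client import get_asset_by_name
from openpype.pipeline.create import get_subset_name

project_name = "demo_project"
asset_doc = get_asset_by_name(project_name, "sh010")

# family-like prefix, variant, task name, asset doc, project name
subset_name = get_subset_name(
    "render", "Main", "lighting", asset_doc, project_name)
# With a typical "{family}{Variant}" template this resolves to
# something like "renderMain" instead of a hand-concatenated name.
```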
-        # TODO: Correctly define the subset name using templates
-        prefix = self.layer_instance_prefix or self.family
-        subset_name = "{}{}".format(prefix, layer.name())
+        project_name = self.create_context.get_current_project_name()
+
         instance_data = {
-            "asset": legacy_io.Session["AVALON_ASSET"],
-            "task": legacy_io.Session["AVALON_TASK"],
+            "asset": self.create_context.get_current_asset_name(),
+            "task": self.create_context.get_current_task_name(),
             "variant": layer.name(),
         }
+        asset_doc = get_asset_by_name(project_name,
+                                      instance_data["asset"])
+        subset_name = self.get_subset_name(
+            layer.name(),
+            instance_data["task"],
+            asset_doc,
+            project_name)
+
         instance = CreatedInstance(
             family=self.family,
             subset_name=subset_name,
@@ -362,7 +446,7 @@ def find_layer_instance_node(self, layer):
             creator_identifier = cmds.getAttr(node + ".creator_identifier")
             if creator_identifier == self.identifier:
-                self.log.info(f"Found node: {node}")
+                self.log.info("Found node: {}".format(node))
                 return node
 
     def _create_layer_instance_node(self, layer):
@@ -442,10 +526,75 @@ def remove_instances(self, instances):
             if node and cmds.objExists(node):
                 cmds.delete(node)
 
+    def get_subset_name(
+        self,
+        variant,
+        task_name,
+        asset_doc,
+        project_name,
+        host_name=None,
+        instance=None
+    ):
+        # creator.family != 'render' as expected
+        return get_subset_name(self.layer_instance_prefix,
+                               variant,
+                               task_name,
+                               asset_doc,
+                               project_name)
+
 
 class Loader(LoaderPlugin):
     hosts = ["maya"]
 
+    def get_custom_namespace_and_group(self, context, options, loader_key):
+        """Queries Settings to get custom templates for namespace and group.
+
+        The group template might be empty; that forces the loader not to
+        wrap imported items into a separate group.
+
+        Args:
+            context (dict)
+            options (dict): artist modifiable options from dialog
+            loader_key (str): key to get separate configuration from Settings
+                ('reference_loader'|'import_loader')
+        """
+        options["attach_to_root"] = True
+
+        asset = context['asset']
+        subset = context['subset']
+        settings = get_project_settings(context['project']['name'])
+        custom_naming = settings['maya']['load'][loader_key]
+
+        if not custom_naming['namespace']:
+            raise LoadError("No namespace specified in "
+                            "Maya ReferenceLoader settings")
+        elif not custom_naming['group_name']:
+            self.log.debug("No custom group_name, no group will be created.")
+            options["attach_to_root"] = False
+
+        formatting_data = {
+            "asset_name": asset['name'],
+            "asset_type": asset['type'],
+            "folder": {
+                "name": asset["name"],
+            },
+            "subset": subset['name'],
+            "family": (
+                subset['data'].get('family') or
+                subset['data']['families'][0]
+            )
+        }
+
+        custom_namespace = custom_naming['namespace'].format(
+            **formatting_data
+        )
+
+        custom_group_name = custom_naming['group_name'].format(
+            **formatting_data
+        )
+
+        return custom_group_name, custom_namespace, options
+
 
 class ReferenceLoader(Loader):
     """A basic ReferenceLoader for Maya
 
@@ -488,39 +637,13 @@ def load(
         path = self.filepath_from_context(context)
 
         assert os.path.exists(path), "%s does not exist."
% path - asset = context['asset'] - subset = context['subset'] - settings = get_project_settings(context['project']['name']) - custom_naming = settings['maya']['load']['reference_loader'] - loaded_containers = [] - - if not custom_naming['namespace']: - raise LoadError("No namespace specified in " - "Maya ReferenceLoader settings") - elif not custom_naming['group_name']: - raise LoadError("No group name specified in " - "Maya ReferenceLoader settings") - - formatting_data = { - "asset_name": asset['name'], - "asset_type": asset['type'], - "subset": subset['name'], - "family": ( - subset['data'].get('family') or - subset['data']['families'][0] - ) - } - - custom_namespace = custom_naming['namespace'].format( - **formatting_data - ) - - custom_group_name = custom_naming['group_name'].format( - **formatting_data - ) + custom_group_name, custom_namespace, options = \ + self.get_custom_namespace_and_group(context, options, + "reference_loader") count = options.get("count") or 1 + loaded_containers = [] for c in range(0, count): namespace = lib.get_custom_namespace(custom_namespace) group_name = "{}:{}".format( diff --git a/openpype/hosts/maya/hooks/pre_auto_load_plugins.py b/openpype/hosts/maya/hooks/pre_auto_load_plugins.py index 689d7adb4f0..4b1ea698a6a 100644 --- a/openpype/hosts/maya/hooks/pre_auto_load_plugins.py +++ b/openpype/hosts/maya/hooks/pre_auto_load_plugins.py @@ -1,4 +1,4 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class MayaPreAutoLoadPlugins(PreLaunchHook): @@ -6,7 +6,8 @@ class MayaPreAutoLoadPlugins(PreLaunchHook): # Before AddLastWorkfileToLaunchArgs order = 9 - app_groups = ["maya"] + app_groups = {"maya"} + launch_types = {LaunchTypes.local} def execute(self): diff --git a/openpype/hosts/maya/hooks/pre_copy_mel.py b/openpype/hosts/maya/hooks/pre_copy_mel.py index 9cea829ad74..0fb5af149ad 100644 --- a/openpype/hosts/maya/hooks/pre_copy_mel.py +++ b/openpype/hosts/maya/hooks/pre_copy_mel.py @@ -1,4 +1,4 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.hosts.maya.lib import create_workspace_mel @@ -7,7 +7,8 @@ class PreCopyMel(PreLaunchHook): Hook `GlobalHostDataHook` must be executed before this hook. """ - app_groups = ["maya"] + app_groups = {"maya"} + launch_types = {LaunchTypes.local} def execute(self): project_doc = self.data["project_doc"] diff --git a/openpype/hosts/maya/hooks/pre_open_workfile_post_initialization.py b/openpype/hosts/maya/hooks/pre_open_workfile_post_initialization.py index 7582ce0591d..1fe3c3ca2c4 100644 --- a/openpype/hosts/maya/hooks/pre_open_workfile_post_initialization.py +++ b/openpype/hosts/maya/hooks/pre_open_workfile_post_initialization.py @@ -1,4 +1,4 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class MayaPreOpenWorkfilePostInitialization(PreLaunchHook): @@ -6,7 +6,8 @@ class MayaPreOpenWorkfilePostInitialization(PreLaunchHook): # Before AddLastWorkfileToLaunchArgs. 
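These hook hunks follow one shape across hosts: `app_groups` becomes a set and every hook declares which launch types it applies to, so local-only hooks no longer run for other launch flavors such as farm submissions. A minimal sketch under made-up names (the class name and env key are illustrative):

```python
# Shape of the migrated pre-launch hooks; class name and env key are
# made up for illustration.
from openpype.lib.applications import PreLaunchHook, LaunchTypes


class ExamplePreLaunchHook(PreLaunchHook):
    order = 9  # before AddLastWorkfileToLaunchArgs
    app_groups = {"maya"}
    launch_types = {LaunchTypes.local}

    def execute(self):
        # Hooks tweak the launch context before the host application starts.
        self.launch_context.env["EXAMPLE_FLAG"] = "1"
```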
order = 9 - app_groups = ["maya"] + app_groups = {"maya"} + launch_types = {LaunchTypes.local} def execute(self): diff --git a/openpype/hosts/maya/plugins/create/convert_legacy.py b/openpype/hosts/maya/plugins/create/convert_legacy.py index 6133abc2053..cd8faf291b8 100644 --- a/openpype/hosts/maya/plugins/create/convert_legacy.py +++ b/openpype/hosts/maya/plugins/create/convert_legacy.py @@ -2,6 +2,8 @@ from openpype.hosts.maya.api import plugin from openpype.hosts.maya.api.lib import read +from openpype.client import get_asset_by_name + from maya import cmds from maya.app.renderSetup.model import renderSetup @@ -51,7 +53,7 @@ def convert(self): # From all current new style manual creators find the mapping # from family to identifier family_to_id = {} - for identifier, creator in self.create_context.manual_creators.items(): + for identifier, creator in self.create_context.creators.items(): family = getattr(creator, "family", None) if not family: continue @@ -70,7 +72,6 @@ def convert(self): # logic was thus to be live to the current task to begin with. data = dict() data["task"] = self.create_context.get_current_task_name() - for family, instance_nodes in legacy.items(): if family not in family_to_id: self.log.warning( @@ -81,7 +82,7 @@ def convert(self): continue creator_id = family_to_id[family] - creator = self.create_context.manual_creators[creator_id] + creator = self.create_context.creators[creator_id] data["creator_identifier"] = creator_id if isinstance(creator, plugin.RenderlayerCreator): @@ -136,6 +137,18 @@ def _convert_per_renderlayer(self, instance_nodes, data, creator): # "rendering" family being converted to "renderlayer" family) original_data["family"] = creator.family + # recreate subset name as without it would be + # `renderingMain` vs correct `renderMain` + project_name = self.create_context.get_current_project_name() + asset_doc = get_asset_by_name(project_name, + original_data["asset"]) + subset_name = creator.get_subset_name( + original_data["variant"], + data["task"], + asset_doc, + project_name) + original_data["subset"] = subset_name + # Convert to creator attributes when relevant creator_attributes = {} for key in list(original_data.keys()): diff --git a/openpype/hosts/maya/plugins/create/create_animation.py b/openpype/hosts/maya/plugins/create/create_animation.py index cade8603ce3..214ac18aef2 100644 --- a/openpype/hosts/maya/plugins/create/create_animation.py +++ b/openpype/hosts/maya/plugins/create/create_animation.py @@ -8,15 +8,13 @@ ) -class CreateAnimation(plugin.MayaCreator): - """Animation output for character rigs""" - - # We hide the animation creator from the UI since the creation of it - # is automated upon loading a rig. There's an inventory action to recreate - # it for loaded rigs if by chance someone deleted the animation instance. - # Note: This setting is actually applied from project settings - enabled = False +class CreateAnimation(plugin.MayaHiddenCreator): + """Animation output for character rigs + We hide the animation creator from the UI since the creation of it is + automated upon loading a rig. There's an inventory action to recreate it + for loaded rigs if by chance someone deleted the animation instance. 
+ """ identifier = "io.openpype.creators.maya.animation" name = "animationDefault" label = "Animation" @@ -28,9 +26,6 @@ class CreateAnimation(plugin.MayaCreator): include_parent_hierarchy = False include_user_defined_attributes = False - # TODO: Would be great if we could visually hide this from the creator - # by default but do allow to generate it through code. - def get_instance_attr_defs(self): defs = lib.collect_animation_defs() @@ -85,3 +80,12 @@ def get_instance_attr_defs(self): """ return defs + + def apply_settings(self, project_settings, system_settings): + super(CreateAnimation, self).apply_settings( + project_settings, system_settings + ) + # Hardcoding creator to be enabled due to existing settings would + # disable the creator causing the creator plugin to not be + # discoverable. + self.enabled = True diff --git a/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py b/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py index 0c8cf8d2bbe..1ef132725f4 100644 --- a/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py +++ b/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py @@ -15,6 +15,7 @@ class CreateArnoldSceneSource(plugin.MayaCreator): label = "Arnold Scene Source" family = "ass" icon = "cube" + settings_name = "CreateAss" expandProcedurals = False motionBlur = True diff --git a/openpype/hosts/maya/plugins/create/create_model.py b/openpype/hosts/maya/plugins/create/create_model.py index 30f1a822814..5c3dd04af0f 100644 --- a/openpype/hosts/maya/plugins/create/create_model.py +++ b/openpype/hosts/maya/plugins/create/create_model.py @@ -12,7 +12,7 @@ class CreateModel(plugin.MayaCreator): label = "Model" family = "model" icon = "cube" - defaults = ["Main", "Proxy", "_MD", "_HD", "_LD"] + default_variants = ["Main", "Proxy", "_MD", "_HD", "_LD"] write_color_sets = False write_face_sets = False diff --git a/openpype/hosts/maya/plugins/create/create_rig.py b/openpype/hosts/maya/plugins/create/create_rig.py index 04104cb7cba..345ab6c00d3 100644 --- a/openpype/hosts/maya/plugins/create/create_rig.py +++ b/openpype/hosts/maya/plugins/create/create_rig.py @@ -20,6 +20,6 @@ def create(self, subset_name, instance_data, pre_create_data): instance_node = instance.get("instance_node") self.log.info("Creating Rig instance set up ...") - controls = cmds.sets(name="controls_SET", empty=True) - pointcache = cmds.sets(name="out_SET", empty=True) + controls = cmds.sets(name=subset_name + "_controls_SET", empty=True) + pointcache = cmds.sets(name=subset_name + "_out_SET", empty=True) cmds.sets([controls, pointcache], forceElement=instance_node) diff --git a/openpype/hosts/maya/plugins/create/create_setdress.py b/openpype/hosts/maya/plugins/create/create_setdress.py index 594a3dc46de..23a706380ac 100644 --- a/openpype/hosts/maya/plugins/create/create_setdress.py +++ b/openpype/hosts/maya/plugins/create/create_setdress.py @@ -9,7 +9,7 @@ class CreateSetDress(plugin.MayaCreator): label = "Set Dress" family = "setdress" icon = "cubes" - defaults = ["Main", "Anim"] + default_variants = ["Main", "Anim"] def get_instance_attr_defs(self): return [ diff --git a/openpype/hosts/maya/plugins/load/_load_animation.py b/openpype/hosts/maya/plugins/load/_load_animation.py index 49792b28060..981b9ef4340 100644 --- a/openpype/hosts/maya/plugins/load/_load_animation.py +++ b/openpype/hosts/maya/plugins/load/_load_animation.py @@ -33,6 +33,13 @@ def process_reference(self, context, name, namespace, options): suffix="_abc" ) + attach_to_root = 
options.get("attach_to_root", True) + group_name = options["group_name"] + + # no group shall be created + if not attach_to_root: + group_name = namespace + # hero_001 (abc) # asset_counter{optional} path = self.filepath_from_context(context) @@ -41,8 +48,8 @@ def process_reference(self, context, name, namespace, options): nodes = cmds.file(file_url, namespace=namespace, sharedReferenceFile=False, - groupReference=True, - groupName=options['group_name'], + groupReference=attach_to_root, + groupName=group_name, reference=True, returnNewNodes=True) diff --git a/openpype/hosts/maya/plugins/load/actions.py b/openpype/hosts/maya/plugins/load/actions.py index 348657e5928..d347ef0d080 100644 --- a/openpype/hosts/maya/plugins/load/actions.py +++ b/openpype/hosts/maya/plugins/load/actions.py @@ -5,8 +5,9 @@ from openpype.pipeline import load from openpype.hosts.maya.api.lib import ( maintained_selection, - unique_namespace + get_custom_namespace ) +import openpype.hosts.maya.api.plugin class SetFrameRangeLoader(load.LoaderPlugin): @@ -83,7 +84,7 @@ def load(self, context, name, namespace, data): animationEndTime=end) -class ImportMayaLoader(load.LoaderPlugin): +class ImportMayaLoader(openpype.hosts.maya.api.plugin.Loader): """Import action for Maya (unmanaged) Warning: @@ -130,13 +131,14 @@ def load(self, context, name=None, namespace=None, data=None): if choice is False: return - asset = context['asset'] + custom_group_name, custom_namespace, options = \ + self.get_custom_namespace_and_group(context, data, + "import_loader") - namespace = namespace or unique_namespace( - asset["name"] + "_", - prefix="_" if asset["name"][0].isdigit() else "", - suffix="_", - ) + namespace = get_custom_namespace(custom_namespace) + + if not options.get("attach_to_root", True): + custom_group_name = namespace path = self.filepath_from_context(context) with maintained_selection(): @@ -145,8 +147,9 @@ def load(self, context, name=None, namespace=None, data=None): preserveReferences=True, namespace=namespace, returnNewNodes=True, - groupReference=True, - groupName="{}:{}".format(namespace, name)) + groupReference=options.get("attach_to_root", + True), + groupName=custom_group_name) if data.get("clean_import", False): remove_attributes = ["cbId"] diff --git a/openpype/hosts/maya/plugins/load/load_reference.py b/openpype/hosts/maya/plugins/load/load_reference.py index d339aff69c1..91767249e09 100644 --- a/openpype/hosts/maya/plugins/load/load_reference.py +++ b/openpype/hosts/maya/plugins/load/load_reference.py @@ -123,6 +123,10 @@ def process_reference(self, context, name, namespace, options): attach_to_root = options.get("attach_to_root", True) group_name = options["group_name"] + # no group shall be created + if not attach_to_root: + group_name = namespace + path = self.filepath_from_context(context) with maintained_selection(): cmds.loadPlugin("AbcImport.mll", quiet=True) @@ -148,11 +152,10 @@ def process_reference(self, context, name, namespace, options): if current_namespace != ":": group_name = current_namespace + ":" + group_name - group_name = "|" + group_name - self[:] = new_nodes if attach_to_root: + group_name = "|" + group_name roots = cmds.listRelatives(group_name, children=True, fullPath=True) or [] @@ -205,6 +208,11 @@ def process_reference(self, context, name, namespace, options): self._post_process_rig(name, namespace, context, options) else: if "translate" in options: + if not attach_to_root and new_nodes: + root_nodes = cmds.ls(new_nodes, assemblies=True, + long=True) + # we assume only a single root 
is ever loaded + group_name = root_nodes[0] cmds.setAttr("{}.translate".format(group_name), *options["translate"]) return new_nodes diff --git a/openpype/hosts/maya/plugins/load/load_yeti_rig.py b/openpype/hosts/maya/plugins/load/load_yeti_rig.py index c9dfe9478bf..6cfcffe27d9 100644 --- a/openpype/hosts/maya/plugins/load/load_yeti_rig.py +++ b/openpype/hosts/maya/plugins/load/load_yeti_rig.py @@ -19,8 +19,15 @@ class YetiRigLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): def process_reference( self, context, name=None, namespace=None, options=None ): - group_name = options['group_name'] path = self.filepath_from_context(context) + + attach_to_root = options.get("attach_to_root", True) + group_name = options["group_name"] + + # no group shall be created + if not attach_to_root: + group_name = namespace + with lib.maintained_selection(): file_url = self.prepare_root_value( path, context["project"]["name"] @@ -30,7 +37,7 @@ def process_reference( namespace=namespace, reference=True, returnNewNodes=True, - groupReference=True, + groupReference=attach_to_root, groupName=group_name ) diff --git a/openpype/hosts/maya/plugins/publish/collect_current_file.py b/openpype/hosts/maya/plugins/publish/collect_current_file.py index e777a209d45..c7105a7f3ca 100644 --- a/openpype/hosts/maya/plugins/publish/collect_current_file.py +++ b/openpype/hosts/maya/plugins/publish/collect_current_file.py @@ -10,7 +10,6 @@ class CollectCurrentFile(pyblish.api.ContextPlugin): order = pyblish.api.CollectorOrder - 0.4 label = "Maya Current File" hosts = ['maya'] - families = ["workfile"] def process(self, context): """Inject the current working file""" diff --git a/openpype/hosts/maya/plugins/publish/collect_render.py b/openpype/hosts/maya/plugins/publish/collect_render.py index c37b54ea9a9..c17a8789e4d 100644 --- a/openpype/hosts/maya/plugins/publish/collect_render.py +++ b/openpype/hosts/maya/plugins/publish/collect_render.py @@ -304,9 +304,9 @@ def process(self, instance): if self.sync_workfile_version: data["version"] = context.data["version"] - for instance in context: - if instance.data['family'] == "workfile": - instance.data["version"] = context.data["version"] + for _instance in context: + if _instance.data['family'] == "workfile": + _instance.data["version"] = context.data["version"] # Define nice label label = "{0} ({1})".format(layer_name, instance.data["asset"]) diff --git a/openpype/hosts/maya/plugins/publish/collect_review.py b/openpype/hosts/maya/plugins/publish/collect_review.py index 6cb10f9066c..586939a3b85 100644 --- a/openpype/hosts/maya/plugins/publish/collect_review.py +++ b/openpype/hosts/maya/plugins/publish/collect_review.py @@ -107,6 +107,9 @@ def process(self, instance): data["displayLights"] = display_lights data["burninDataMembers"] = burninDataMembers + for key, value in instance.data["publish_attributes"].items(): + data["publish_attributes"][key] = value + # The review instance must be active cmds.setAttr(str(instance) + '.active', 1) diff --git a/openpype/hosts/maya/plugins/publish/extract_look.py b/openpype/hosts/maya/plugins/publish/extract_look.py index e2c88ef44ac..b13568c7813 100644 --- a/openpype/hosts/maya/plugins/publish/extract_look.py +++ b/openpype/hosts/maya/plugins/publish/extract_look.py @@ -15,8 +15,14 @@ from maya import cmds # noqa -from openpype.lib.vendor_bin_utils import find_executable -from openpype.lib import source_hash, run_subprocess, get_oiio_tools_path +from openpype.lib import ( + find_executable, + source_hash, + run_subprocess, + 
get_oiio_tool_args, + ToolNotFoundError, +) + from openpype.pipeline import legacy_io, publish, KnownPublishError from openpype.hosts.maya.api import lib @@ -267,12 +273,11 @@ def process(self, """ - maketx_path = get_oiio_tools_path("maketx") - - if not maketx_path: - raise AssertionError( - "OIIO 'maketx' tool not found. Result: {}".format(maketx_path) - ) + try: + maketx_args = get_oiio_tool_args("maketx") + except ToolNotFoundError: + raise KnownPublishError( + "OpenImageIO is not available on the machine") # Define .tx filepath in staging if source file is not .tx fname, ext = os.path.splitext(os.path.basename(source)) @@ -328,8 +333,7 @@ def process(self, self.log.info("Generating .tx file for %s .." % source) - subprocess_args = [ - maketx_path, + subprocess_args = maketx_args + [ "-v", # verbose "-u", # update mode # --checknan doesn't influence the output file but aborts the diff --git a/openpype/hosts/maya/plugins/publish/submit_maya_muster.py b/openpype/hosts/maya/plugins/publish/submit_maya_muster.py index 8e219eae85f..b79c9ed140a 100644 --- a/openpype/hosts/maya/plugins/publish/submit_maya_muster.py +++ b/openpype/hosts/maya/plugins/publish/submit_maya_muster.py @@ -249,7 +249,6 @@ def process(self, instance): Authenticate with Muster, collect all data, prepare path for post render publish job and submit job to farm. """ - instance.data["toBeRenderedOn"] = "muster" # setup muster environment self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL") diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_attributes.py b/openpype/hosts/maya/plugins/publish/validate_instance_attributes.py deleted file mode 100644 index f870c9f8c45..00000000000 --- a/openpype/hosts/maya/plugins/publish/validate_instance_attributes.py +++ /dev/null @@ -1,60 +0,0 @@ -from maya import cmds - -import pyblish.api -from openpype.pipeline.publish import ( - ValidateContentsOrder, PublishValidationError, RepairAction -) -from openpype.pipeline import discover_legacy_creator_plugins -from openpype.hosts.maya.api.lib import imprint - - -class ValidateInstanceAttributes(pyblish.api.InstancePlugin): - """Validate Instance Attributes. - - New attributes can be introduced as new features come in. Old instances - will need to be updated with these attributes for the documentation to make - sense, and users do not have to recreate the instances. 
- """ - - order = ValidateContentsOrder - hosts = ["maya"] - families = ["*"] - label = "Instance Attributes" - plugins_by_family = { - p.family: p for p in discover_legacy_creator_plugins() - } - actions = [RepairAction] - - @classmethod - def get_missing_attributes(self, instance): - plugin = self.plugins_by_family[instance.data["family"]] - subset = instance.data["subset"] - asset = instance.data["asset"] - objset = instance.data["objset"] - - missing_attributes = {} - for key, value in plugin(subset, asset).data.items(): - if not cmds.objExists("{}.{}".format(objset, key)): - missing_attributes[key] = value - - return missing_attributes - - def process(self, instance): - objset = instance.data.get("objset") - if objset is None: - self.log.debug( - "Skipping {} because no objectset found.".format(instance) - ) - return - - missing_attributes = self.get_missing_attributes(instance) - if missing_attributes: - raise PublishValidationError( - "Missing attributes on {}:\n{}".format( - objset, missing_attributes - ) - ) - - @classmethod - def repair(cls, instance): - imprint(instance.data["objset"], cls.get_missing_attributes(instance)) diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py b/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py index 41bb4148296..4ded57137cb 100644 --- a/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py +++ b/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py @@ -3,92 +3,19 @@ from __future__ import absolute_import import pyblish.api -from openpype.pipeline.publish import ValidateContentsOrder +import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, + PublishValidationError, + OptionalPyblishPluginMixin +) from maya import cmds -class SelectInvalidInstances(pyblish.api.Action): - """Select invalid instances in Outliner.""" - - label = "Select Instances" - icon = "briefcase" - on = "failed" - - def process(self, context, plugin): - """Process invalid validators and select invalid instances.""" - # Get the errored instances - failed = [] - for result in context.data["results"]: - if ( - result["error"] is None - or result["instance"] is None - or result["instance"] in failed - or result["plugin"] != plugin - ): - continue - - failed.append(result["instance"]) - - # Apply pyblish.logic to get the instances for the plug-in - instances = pyblish.api.instances_by_plugin(failed, plugin) - - if instances: - self.log.info( - "Selecting invalid nodes: %s" % ", ".join( - [str(x) for x in instances] - ) - ) - self.select(instances) - else: - self.log.info("No invalid nodes found.") - self.deselect() - - def select(self, instances): - cmds.select(instances, replace=True, noExpand=True) - - def deselect(self): - cmds.select(deselect=True) - - -class RepairSelectInvalidInstances(pyblish.api.Action): - """Repair the instance asset.""" - - label = "Repair" - icon = "wrench" - on = "failed" - - def process(self, context, plugin): - # Get the errored instances - failed = [] - for result in context.data["results"]: - if result["error"] is None: - continue - if result["instance"] is None: - continue - if result["instance"] in failed: - continue - if result["plugin"] != plugin: - continue - - failed.append(result["instance"]) - - # Apply pyblish.logic to get the instances for the plug-in - instances = pyblish.api.instances_by_plugin(failed, plugin) - - context_asset = context.data["assetEntity"]["name"] - for instance in instances: - 
self.set_attribute(instance, context_asset) - - def set_attribute(self, instance, context_asset): - cmds.setAttr( - instance.data.get("name") + ".asset", - context_asset, - type="string" - ) - - -class ValidateInstanceInContext(pyblish.api.InstancePlugin): +class ValidateInstanceInContext(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validator to check if instance asset match context asset. When working in per-shot style you always publish data in context of @@ -102,10 +29,49 @@ class ValidateInstanceInContext(pyblish.api.InstancePlugin): label = "Instance in same Context" optional = True hosts = ["maya"] - actions = [SelectInvalidInstances, RepairSelectInvalidInstances] + actions = [ + openpype.hosts.maya.api.action.SelectInvalidAction, RepairAction + ] def process(self, instance): + if not self.is_active(instance.data): + return + asset = instance.data.get("asset") - context_asset = instance.context.data["assetEntity"]["name"] - msg = "{} has asset {}".format(instance.name, asset) - assert asset == context_asset, msg + context_asset = self.get_context_asset(instance) + if asset != context_asset: + raise PublishValidationError( + message=( + "Instance '{}' publishes to different asset than current " + "context: {}. Current context: {}".format( + instance.name, asset, context_asset + ) + ), + description=( + "## Publishing to a different asset\n" + "There are publish instances present which are publishing " + "into a different asset than your current context.\n\n" + "Usually this is not what you want but there can be cases " + "where you might want to publish into another asset or " + "shot. If that's the case you can disable the validation " + "on the instance to ignore it." + ) + ) + + @classmethod + def get_invalid(cls, instance): + return [instance.data["instance_node"]] + + @classmethod + def repair(cls, instance): + context_asset = cls.get_context_asset(instance) + instance_node = instance.data["instance_node"] + cmds.setAttr( + "{}.asset".format(instance_node), + context_asset, + type="string" + ) + + @staticmethod + def get_context_asset(instance): + return instance.context.data["assetEntity"]["name"] diff --git a/openpype/hosts/maya/plugins/publish/validate_model_content.py b/openpype/hosts/maya/plugins/publish/validate_model_content.py index 9ba458a4160..19373efad92 100644 --- a/openpype/hosts/maya/plugins/publish/validate_model_content.py +++ b/openpype/hosts/maya/plugins/publish/validate_model_content.py @@ -63,15 +63,10 @@ def get_invalid(cls, instance): return True # Top group - assemblies = cmds.ls(content_instance, assemblies=True, long=True) - if len(assemblies) != 1 and cls.validate_top_group: + top_parents = set([x.split("|")[1] for x in content_instance]) + if cls.validate_top_group and len(top_parents) != 1: cls.log.error("Must have exactly one top group") - return assemblies - if len(assemblies) == 0: - cls.log.warning("No top group found. " - "(Are there objects in the instance?" 
- " Or is it parented in another group?)") - return assemblies or True + return top_parents def _is_visible(node): """Return whether node is visible""" @@ -82,11 +77,11 @@ def _is_visible(node): visibility=True) # The roots must be visible (the assemblies) - for assembly in assemblies: - if not _is_visible(assembly): - cls.log.error("Invisible assembly (root node) is not " - "allowed: {0}".format(assembly)) - invalid.add(assembly) + for parent in top_parents: + if not _is_visible(parent): + cls.log.error("Invisible parent (root node) is not " + "allowed: {0}".format(parent)) + invalid.add(parent) # Ensure at least one shape is visible if not any(_is_visible(shape) for shape in shapes): diff --git a/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py b/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py index 78334cd01f1..9f47bf7a3d0 100644 --- a/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py +++ b/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py @@ -4,6 +4,8 @@ import pyblish.api +from openpype.hosts.maya.api.lib import pairwise +from openpype.hosts.maya.api.action import SelectInvalidAction from openpype.pipeline.publish import ( ValidateContentsOrder, PublishValidationError @@ -19,31 +21,33 @@ class ValidatePluginPathAttributes(pyblish.api.InstancePlugin): hosts = ['maya'] families = ["workfile"] label = "Plug-in Path Attributes" + actions = [SelectInvalidAction] - def get_invalid(self, instance): + # Attributes are defined in project settings + attribute = [] + + @classmethod + def get_invalid(cls, instance): invalid = list() - # get the project setting - validate_path = ( - instance.context.data["project_settings"]["maya"]["publish"] - ) - file_attr = validate_path["ValidatePluginPathAttributes"]["attribute"] + file_attr = cls.attribute if not file_attr: return invalid - # get the nodes and file attributes - for node, attr in file_attr.items(): - # check the related nodes - targets = cmds.ls(type=node) + # Consider only valid node types to avoid "Unknown object type" warning + all_node_types = set(cmds.allNodeTypes()) + node_types = [key for key in file_attr.keys() if key in all_node_types] - for target in targets: - # get the filepath - file_attr = "{}.{}".format(target, attr) - filepath = cmds.getAttr(file_attr) + for node, node_type in pairwise(cmds.ls(type=node_types, + showType=True)): + # get the filepath + file_attr = "{}.{}".format(node, file_attr[node_type]) + filepath = cmds.getAttr(file_attr) - if filepath and not os.path.exists(filepath): - self.log.error("File {0} not exists".format(filepath)) # noqa - invalid.append(target) + if filepath and not os.path.exists(filepath): + cls.log.error("{} '{}' uses non-existing filepath: {}" + .format(node_type, node, filepath)) + invalid.append(node) return invalid @@ -51,5 +55,16 @@ def process(self, instance): """Process all directories Set as Filenames in Non-Maya Nodes""" invalid = self.get_invalid(instance) if invalid: - raise PublishValidationError("Non-existent Path " - "found: {0}".format(invalid)) + raise PublishValidationError( + title="Plug-in Path Attributes", + message="Non-existent filepath found on nodes: {}".format( + ", ".join(invalid) + ), + description=( + "## Plug-in nodes use invalid filepaths\n" + "The workfile contains nodes from plug-ins that use " + "filepaths which do not exist.\n\n" + "Please make sure their filepaths are correct and the " + "files exist on disk." 
+ ) + ) diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py index 75447fdfea7..cbc750baceb 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py @@ -47,7 +47,7 @@ def get_invalid_matches(cls, instance, compute=False): invalid = {} if compute: - out_set = next(x for x in instance if x.endswith("out_SET")) + out_set = next(x for x in instance if "out_SET" in x) instance_nodes = cmds.sets(out_set, query=True, nodesOnly=True) instance_nodes = cmds.ls(instance_nodes, long=True) diff --git a/openpype/hosts/maya/plugins/publish/validate_shape_zero.py b/openpype/hosts/maya/plugins/publish/validate_shape_zero.py index 7a7e9a0aeee..c7af6a60dbe 100644 --- a/openpype/hosts/maya/plugins/publish/validate_shape_zero.py +++ b/openpype/hosts/maya/plugins/publish/validate_shape_zero.py @@ -7,6 +7,7 @@ from openpype.pipeline.publish import ( ValidateContentsOrder, RepairAction, + PublishValidationError ) @@ -67,5 +68,30 @@ def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise ValueError("Shapes found with non-zero component tweaks: " - "{0}".format(invalid)) + raise PublishValidationError( + title="Shape Component Tweaks", + message="Shapes found with non-zero component tweaks: '{}'" + "".format(", ".join(invalid)), + description=( + "## Shapes found with component tweaks\n" + "Shapes were detected that have component tweaks on their " + "components. Please remove the component tweaks to " + "continue.\n\n" + "### Repair\n" + "The repair action will try to *freeze* the component " + "tweaks into the shapes, which is usually the correct fix " + "if the mesh has no construction history (= has its " + "history deleted)."), + detail=( + "Maya allows to store component tweaks within shape nodes " + "which are applied between its `inMesh` and `outMesh` " + "connections resulting in the output of a shape node " + "differing from the input. We usually want to avoid this " + "for published meshes (in particular for Maya scenes) as " + "it can have unintended results when using these meshes " + "as intermediate meshes since it applies positional " + "differences without being visible edits in the node " + "graph.\n\n" + "These tweaks are traditionally stored in the `.pnts` " + "attribute of shapes.") + ) diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py index 364c8eeff48..41e6a27cef3 100644 --- a/openpype/hosts/nuke/api/lib.py +++ b/openpype/hosts/nuke/api/lib.py @@ -424,10 +424,13 @@ def add_publish_knob(node): return node -@deprecated +@deprecated("openpype.hosts.nuke.api.lib.set_node_data") def set_avalon_knob_data(node, data=None, prefix="avalon:"): """[DEPRECATED] Sets data into nodes's avalon knob + This function is still used but soon will be deprecated. + Use `set_node_data` instead. + Arguments: node (nuke.Node): Nuke node to imprint with data, data (dict, optional): Data to be imprinted into AvalonTab @@ -487,10 +490,13 @@ def set_avalon_knob_data(node, data=None, prefix="avalon:"): return node -@deprecated +@deprecated("openpype.hosts.nuke.api.lib.get_node_data") def get_avalon_knob_data(node, prefix="avalon:", create=True): """[DEPRECATED] Gets a data from nodes's avalon knob + This function is still used but soon will be deprecated. + Use `get_node_data` instead. 
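
The `@deprecated(...)` decorator now carries the dotted path of the replacement function so the warning can point callers at it. A hedged sketch of what such a decorator typically looks like (the real helper lives elsewhere in openpype's lib and may differ, e.g. by also supporting bare `@deprecated` usage):

```python
import functools
import warnings


def deprecated(new_destination):
    """Mark a function deprecated and point callers at its replacement."""
    def _decorator(func):
        @functools.wraps(func)
        def _wrapper(*args, **kwargs):
            warnings.warn(
                "Call to deprecated function '{}'. Use '{}' instead.".format(
                    func.__name__, new_destination
                ),
                category=DeprecationWarning,
                stacklevel=2,
            )
            return func(*args, **kwargs)
        return _wrapper
    return _decorator
```
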
+ Arguments: node (obj): Nuke node to search for data, prefix (str, optional): filtering prefix @@ -1699,7 +1705,7 @@ def create_write_node_legacy( knob_value = float(knob_value) if knob_type == "bool": knob_value = bool(knob_value) - if knob_type in ["2d_vector", "3d_vector"]: + if knob_type in ["2d_vector", "3d_vector", "color", "box"]: knob_value = list(knob_value) GN[knob_name].setValue(knob_value) @@ -1715,7 +1721,7 @@ def set_node_knobs_from_settings(node, knob_settings, **kwargs): Args: node (nuke.Node): nuke node knob_settings (list): list of dict. Keys are `type`, `name`, `value` - kwargs (dict)[optional]: keys for formatable knob settings + kwargs (dict)[optional]: keys for formattable knob settings """ for knob in knob_settings: log.debug("__ knob: {}".format(pformat(knob))) @@ -1732,7 +1738,7 @@ def set_node_knobs_from_settings(node, knob_settings, **kwargs): ) continue - # first deal with formatable knob settings + # first deal with formattable knob settings if knob_type == "formatable": template = knob["template"] to_type = knob["to_type"] @@ -1741,8 +1747,8 @@ def set_node_knobs_from_settings(node, knob_settings, **kwargs): **kwargs ) except KeyError as msg: - log.warning("__ msg: {}".format(msg)) - raise KeyError(msg) + raise KeyError( + "Not able to format expression: {}".format(msg)) # convert value to correct type if to_type == "2d_vector": @@ -1781,8 +1787,8 @@ def convert_knob_value_to_correct_type(knob_type, knob_value): knob_value = knob_value elif knob_type == "color_gui": knob_value = color_gui_to_int(knob_value) - elif knob_type in ["2d_vector", "3d_vector", "color"]: - knob_value = [float(v) for v in knob_value] + elif knob_type in ["2d_vector", "3d_vector", "color", "box"]: + knob_value = [float(val_) for val_ in knob_value] return knob_value @@ -2035,6 +2041,7 @@ def set_root_colorspace(self, imageio_host): ) workfile_settings = imageio_host["workfile"] + viewer_process_settings = imageio_host["viewer"]["viewerProcess"] if not config_data: # TODO: backward compatibility for old projects - remove later @@ -2070,14 +2077,30 @@ def set_root_colorspace(self, imageio_host): str(workfile_settings["OCIO_config"])) else: - # set values to root + # OCIO config path is defined from prelaunch hook self._root_node["colorManagement"].setValue("OCIO") + # print previous settings in case some were found in workfile + residual_path = self._root_node["customOCIOConfigPath"].value() + if residual_path: + log.info("Residual OCIO config path found: `{}`".format( + residual_path + )) + # we dont need the key anymore workfile_settings.pop("customOCIOConfigPath", None) workfile_settings.pop("colorManagement", None) workfile_settings.pop("OCIO_config", None) + # get monitor lut from settings respecting Nuke version differences + monitor_lut = workfile_settings.pop("monitorLut", None) + monitor_lut_data = self._get_monitor_settings( + viewer_process_settings, monitor_lut) + + # set monitor related knobs luts (MonitorOut, Thumbnails) + for knob, value_ in monitor_lut_data.items(): + workfile_settings[knob] = value_ + # then set the rest for knob, value_ in workfile_settings.items(): # skip unfilled ocio config path @@ -2094,9 +2117,70 @@ def set_root_colorspace(self, imageio_host): # set ocio config path if config_data: - current_ocio_path = os.getenv("OCIO") - if current_ocio_path != config_data["path"]: - message = """ + config_path = config_data["path"].replace("\\", "/") + log.info("OCIO config path found: `{}`".format( + config_path)) + + # check if there's a mismatch between 
environment and settings + correct_settings = self._is_settings_matching_environment( + config_data) + + # if there's no mismatch between environment and settings + if correct_settings: + self._set_ocio_config_path_to_workfile(config_data) + + def _get_monitor_settings(self, viewer_lut, monitor_lut): + """ Get monitor settings from viewer and monitor lut + + Args: + viewer_lut (str): viewer lut string + monitor_lut (str): monitor lut string + + Returns: + dict: monitor settings + """ + output_data = {} + m_display, m_viewer = get_viewer_config_from_string(monitor_lut) + v_display, v_viewer = get_viewer_config_from_string(viewer_lut) + + # set monitor lut differently for nuke version 14 + if nuke.NUKE_VERSION_MAJOR >= 14: + output_data["monitorOutLUT"] = create_viewer_profile_string( + m_viewer, m_display, path_like=False) + # monitorLut=thumbnails - viewerProcess makes more sense + output_data["monitorLut"] = create_viewer_profile_string( + v_viewer, v_display, path_like=False) + + if nuke.NUKE_VERSION_MAJOR == 13: + output_data["monitorOutLUT"] = create_viewer_profile_string( + m_viewer, m_display, path_like=False) + # monitorLut=thumbnails - viewerProcess makes more sense + output_data["monitorLut"] = create_viewer_profile_string( + v_viewer, v_display, path_like=True) + if nuke.NUKE_VERSION_MAJOR <= 12: + output_data["monitorLut"] = create_viewer_profile_string( + m_viewer, m_display, path_like=True) + + return output_data + + def _is_settings_matching_environment(self, config_data): + """ Check if OCIO config path is different from environment + + Args: + config_data (dict): OCIO config data from settings + + Returns: + bool: True if settings are matching environment, False otherwise + """ + current_ocio_path = os.environ["OCIO"] + settings_ocio_path = config_data["path"] + + # normalize all paths to forward slashes + current_ocio_path = current_ocio_path.replace("\\", "/") + settings_ocio_path = settings_ocio_path.replace("\\", "/") + + if current_ocio_path != settings_ocio_path: + message = """ It seems like there's a mismatch between the OCIO config path set in your Nuke settings and the actual path set in your OCIO environment. @@ -2114,12 +2198,119 @@ def set_root_colorspace(self, imageio_host): Reopening Nuke should synchronize these paths and resolve any discrepancies. """ - nuke.message( - message.format( - env_path=current_ocio_path, - settings_path=config_data["path"] - ) + nuke.message( + message.format( + env_path=current_ocio_path, + settings_path=settings_ocio_path ) + ) + return False + + return True + + def _set_ocio_config_path_to_workfile(self, config_data): + """ Set OCIO config path to workfile + + Path set into nuke workfile. It is trying to replace path with + environment variable if possible. If not, it will set it as it is. + It also saves the script to apply the change, but only if it's not + empty Untitled script. 
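
The mismatch check above compares the `OCIO` environment variable against the settings path only after normalizing both sides to forward slashes, so Windows-style values do not produce false alarms. The comparison reduces to roughly this (illustrative):

```python
import os


def ocio_paths_match(config_data):
    # Normalize both sides to forward slashes before comparing, so
    # "C:\\configs\\config.ocio" equals "C:/configs/config.ocio".
    current = os.environ["OCIO"].replace("\\", "/")
    settings = config_data["path"].replace("\\", "/")
    return current == settings
```
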
+ + Args: + config_data (dict): OCIO config data from settings + + """ + # replace path with env var if possible + ocio_path = self._replace_ocio_path_with_env_var(config_data) + ocio_path = ocio_path.replace("\\", "/") + + log.info("Setting OCIO config path to: `{}`".format( + ocio_path)) + + self._root_node["customOCIOConfigPath"].setValue( + ocio_path + ) + self._root_node["OCIO_config"].setValue("custom") + + # only save script if it's not empty + if self._root_node["name"].value() != "": + log.info("Saving script to apply OCIO config path change.") + nuke.scriptSave() + + def _get_included_vars(self, config_template): + """ Get all environment variables included in template + + Args: + config_template (str): OCIO config template from settings + + Returns: + list: list of environment variables included in template + """ + # resolve all environments for whitelist variables + included_vars = [ + "BUILTIN_OCIO_ROOT", + ] + + # include all project root related env vars + for env_var in os.environ: + if env_var.startswith("OPENPYPE_PROJECT_ROOT_"): + included_vars.append(env_var) + + # use regex to find env var in template with format {ENV_VAR} + # this way we make sure only template used env vars are included + env_var_regex = r"\{([A-Z0-9_]+)\}" + env_var = re.findall(env_var_regex, config_template) + if env_var: + included_vars.append(env_var[0]) + + return included_vars + + def _replace_ocio_path_with_env_var(self, config_data): + """ Replace OCIO config path with environment variable + + Environment variable is added as TCL expression to path. TCL expression + is also replacing backward slashes found in path for windows + formatted values. + + Args: + config_data (str): OCIO config dict from settings + + Returns: + str: OCIO config path with environment variable TCL expression + """ + config_path = config_data["path"].replace("\\", "/") + config_template = config_data["template"] + + included_vars = self._get_included_vars(config_template) + + # make sure we return original path if no env var is included + new_path = config_path + + for env_var in included_vars: + env_path = os.getenv(env_var) + if not env_path: + continue + + # it has to be directory current process can see + if not os.path.isdir(env_path): + continue + + # make sure paths are in same format + env_path = env_path.replace("\\", "/") + path = config_path.replace("\\", "/") + + # check if env_path is in path and replace to first found positive + if env_path in path: + # with regsub we make sure path format of slashes is correct + resub_expr = ( + "[regsub -all {{\\\\}} [getenv {}] \"/\"]").format(env_var) + + new_path = path.replace( + env_path, resub_expr + ) + break + + return new_path def set_writes_colorspace(self): ''' Adds correct colorspace to write node dict @@ -2204,7 +2395,6 @@ def set_reads_colorspace(self, read_clrs_inputs): continue preset_clrsp = input["colorspace"] - log.debug(preset_clrsp) if preset_clrsp is not None: current = n["colorspace"].value() future = str(preset_clrsp) @@ -2234,7 +2424,7 @@ def set_reads_colorspace(self, read_clrs_inputs): knobs["to"])) def set_colorspace(self): - ''' Setting colorpace following presets + ''' Setting colorspace following presets ''' # get imageio nuke_colorspace = get_nuke_imageio_settings() @@ -2242,17 +2432,16 @@ def set_colorspace(self): log.info("Setting colorspace to workfile...") try: self.set_root_colorspace(nuke_colorspace) - except AttributeError: - msg = "set_colorspace(): missing `workfile` settings in template" + except AttributeError as _error: + msg 
= "Set Colorspace to workfile error: {}".format(_error) nuke.message(msg) log.info("Setting colorspace to viewers...") try: self.set_viewers_colorspace(nuke_colorspace["viewer"]) - except AttributeError: - msg = "set_colorspace(): missing `viewer` settings in template" + except AttributeError as _error: + msg = "Set Colorspace to viewer error: {}".format(_error) nuke.message(msg) - log.error(msg) log.info("Setting colorspace to write nodes...") try: @@ -2686,7 +2875,15 @@ def _launch_workfile_app(): host_tools.show_workfiles(parent=None, on_top=True) +@deprecated("openpype.hosts.nuke.api.lib.start_workfile_template_builder") def process_workfile_builder(): + """ [DEPRECATED] Process workfile builder on nuke start + + This function is deprecated and will be removed in future versions. + Use settings for `project_settings/nuke/templated_workfile_build` which are + supported by api `start_workfile_template_builder()`. + """ + # to avoid looping of the callback, remove it! nuke.removeOnCreate(process_workfile_builder, nodeClass="Root") @@ -2695,11 +2892,6 @@ def process_workfile_builder(): workfile_builder = project_settings["nuke"].get( "workfile_builder", {}) - # get all imortant settings - openlv_on = env_value_to_bool( - env_key="AVALON_OPEN_LAST_WORKFILE", - default=None) - # get settings createfv_on = workfile_builder.get("create_first_version") or None builder_on = workfile_builder.get("builder_on_start") or None @@ -2740,20 +2932,15 @@ def process_workfile_builder(): save_file(last_workfile_path) return - # skip opening of last version if it is not enabled - if not openlv_on or not os.path.exists(last_workfile_path): - return - - log.info("Opening last workfile...") - # open workfile - open_file(last_workfile_path) - def start_workfile_template_builder(): from .workfile_template_builder import ( build_workfile_template ) + # remove callback since it would be duplicating the workfile + nuke.removeOnCreate(start_workfile_template_builder, nodeClass="Root") + # to avoid looping of the callback, remove it! log.info("Starting workfile template builder...") try: @@ -2761,8 +2948,6 @@ def start_workfile_template_builder(): except TemplateProfileNotFound: log.warning("Template profile not found. 
Skipping...") - # remove callback since it would be duplicating the workfile - nuke.removeOnCreate(start_workfile_template_builder, nodeClass="Root") @deprecated def recreate_instance(origin_node, avalon_data=None): @@ -2954,6 +3139,7 @@ class DirmapCache: """Caching class to get settings and sync_module easily and only once.""" _project_name = None _project_settings = None + _sync_module_discovered = False _sync_module = None _mapping = None @@ -2971,8 +3157,10 @@ def project_settings(cls): @classmethod def sync_module(cls): - if cls._sync_module is None: - cls._sync_module = ModulesManager().modules_by_name["sync_server"] + if not cls._sync_module_discovered: + cls._sync_module_discovered = True + cls._sync_module = ModulesManager().modules_by_name.get( + "sync_server") return cls._sync_module @classmethod @@ -3178,11 +3366,11 @@ def get_viewer_config_from_string(input_string): display = split[0] elif "(" in viewer: pattern = r"([\w\d\s\.\-]+).*[(](.*)[)]" - result = re.findall(pattern, viewer) + result_ = re.findall(pattern, viewer) try: - result = result.pop() - display = str(result[1]).rstrip() - viewer = str(result[0]).rstrip() + result_ = result_.pop() + display = str(result_[1]).rstrip() + viewer = str(result_[0]).rstrip() except IndexError: raise IndexError(( "Viewer Input string is not correct. " @@ -3190,3 +3378,22 @@ def get_viewer_config_from_string(input_string): ).format(input_string)) return (display, viewer) + + +def create_viewer_profile_string(viewer, display=None, path_like=False): + """Convert viewer and display to string + + Args: + viewer (str): viewer name + display (Optional[str]): display name + path_like (Optional[bool]): if True, return path like string + + Returns: + str: viewer config string + """ + if not display: + return viewer + + if path_like: + return "{}/{}".format(display, viewer) + return "{} ({})".format(viewer, display) diff --git a/openpype/hosts/nuke/api/pipeline.py b/openpype/hosts/nuke/api/pipeline.py index cdfc8aa512a..a1d290646cb 100644 --- a/openpype/hosts/nuke/api/pipeline.py +++ b/openpype/hosts/nuke/api/pipeline.py @@ -2,7 +2,7 @@ import os import importlib -from collections import OrderedDict +from collections import OrderedDict, defaultdict import pyblish.api @@ -34,6 +34,7 @@ get_main_window, add_publish_knob, WorkfileSettings, + # TODO: remove this once workfile builder will be removed process_workfile_builder, start_workfile_template_builder, launch_workfiles_app, @@ -155,11 +156,18 @@ def add_nuke_callbacks(): """ nuke_settings = get_current_project_settings()["nuke"] workfile_settings = WorkfileSettings() + # Set context settings. 
nuke.addOnCreate( workfile_settings.set_context_settings, nodeClass="Root") + + # adding favorites to file browser nuke.addOnCreate(workfile_settings.set_favorites, nodeClass="Root") + + # template builder callbacks nuke.addOnCreate(start_workfile_template_builder, nodeClass="Root") + + # TODO: remove this callback once workfile builder will be removed nuke.addOnCreate(process_workfile_builder, nodeClass="Root") # fix ffmpeg settings on script @@ -169,11 +177,12 @@ def add_nuke_callbacks(): nuke.addOnScriptLoad(check_inventory_versions) nuke.addOnScriptSave(check_inventory_versions) - # # set apply all workfile settings on script load and save + # set apply all workfile settings on script load and save nuke.addOnScriptLoad(WorkfileSettings().set_context_settings) + if nuke_settings["nuke-dirmap"]["enabled"]: - log.info("Added Nuke's dirmaping callback ...") + log.info("Added Nuke's dir-mapping callback ...") # Add dirmap for file paths. nuke.addFilenameFilter(dirmap_file_name_filter) @@ -534,10 +543,16 @@ def list_instances(creator_id=None): For SubsetManager + Args: + creator_id (Optional[str]): creator identifier + Returns: (list) of dictionaries matching instances format """ - listed_instances = [] + instances_by_order = defaultdict(list) + subset_instances = [] + instance_ids = set() + for node in nuke.allNodes(recurseGroups=True): if node.Class() in ["Viewer", "Dot"]: @@ -563,9 +578,60 @@ def list_instances(creator_id=None): if creator_id and instance_data["creator_identifier"] != creator_id: continue - listed_instances.append((node, instance_data)) + instance_id = instance_data.get("instance_id") + if not instance_id: + pass + elif instance_id in instance_ids: + instance_data.pop("instance_id") + else: + instance_ids.add(instance_id) + + # node name could change, so update subset name data + _update_subset_name_data(instance_data, node) + + if "render_order" not in node.knobs(): + subset_instances.append((node, instance_data)) + continue + + order = int(node["render_order"].value()) + instances_by_order[order].append((node, instance_data)) + + # Sort instances based on order attribute or subset name. + # TODO: remove in future Publisher enhanced with sorting + ordered_instances = [] + for key in sorted(instances_by_order.keys()): + instances_by_subset = defaultdict(list) + for node, data_ in instances_by_order[key]: + instances_by_subset[data_["subset"]].append((node, data_)) + for subkey in sorted(instances_by_subset.keys()): + ordered_instances.extend(instances_by_subset[subkey]) + + instances_by_subset = defaultdict(list) + for node, data_ in subset_instances: + instances_by_subset[data_["subset"]].append((node, data_)) + for key in sorted(instances_by_subset.keys()): + ordered_instances.extend(instances_by_subset[key]) + + return ordered_instances + + +def _update_subset_name_data(instance_data, node): + """Update subset name data in instance data. 
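
Worked example of the rename handling in `_update_subset_name_data` below: the variant is recovered by stripping the shared subset-name root, so renaming the instance node effectively renames the publish. Assuming a `renderMain` instance whose node was renamed to `renderHero`:

```python
# Values as they would appear in instance_data / on the node (illustrative).
old_subset_name = "renderMain"
old_variant = "Main"
subset_name_root = old_subset_name.replace(old_variant, "")   # "render"

new_subset_name = "renderHero"                                # node.name()
new_variant = new_subset_name.replace(subset_name_root, "")   # "Hero"

assert (new_subset_name, new_variant) == ("renderHero", "Hero")
```
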
+ + Args: + instance_data (dict): instance creator data + node (nuke.Node): nuke node + """ + # make sure node name is subset name + old_subset_name = instance_data["subset"] + old_variant = instance_data["variant"] + subset_name_root = old_subset_name.replace(old_variant, "") + + new_subset_name = node.name() + new_variant = new_subset_name.replace(subset_name_root, "") - return listed_instances + instance_data["subset"] = new_subset_name + instance_data["variant"] = new_variant def remove_instance(instance): diff --git a/openpype/hosts/nuke/api/plugin.py b/openpype/hosts/nuke/api/plugin.py index cfdb407d266..6d48c09d60d 100644 --- a/openpype/hosts/nuke/api/plugin.py +++ b/openpype/hosts/nuke/api/plugin.py @@ -212,9 +212,15 @@ def collect_instances(self): created_instance["creator_attributes"].pop(key) def update_instances(self, update_list): - for created_inst, _changes in update_list: + for created_inst, changes in update_list: instance_node = created_inst.transient_data["node"] + # update instance node name if subset name changed + if "subset" in changes.changed_keys: + instance_node["name"].setValue( + changes["subset"].new_value + ) + # in case node is not existing anymore (user erased it manually) try: instance_node.fullName() @@ -256,6 +262,17 @@ class NukeWriteCreator(NukeCreator): family = "write" icon = "sign-out" + def get_linked_knobs(self): + linked_knobs = [] + if "channels" in self.instance_attributes: + linked_knobs.append("channels") + if "ordered" in self.instance_attributes: + linked_knobs.append("render_order") + if "use_range_limit" in self.instance_attributes: + linked_knobs.extend(["___", "first", "last", "use_limit"]) + + return linked_knobs + def integrate_links(self, node, outputs=True): # skip if no selection if not self.selected_node: @@ -310,6 +327,7 @@ def _get_render_target_enum(self): "frames": "Use existing frames" } if ("farm_rendering" in self.instance_attributes): + rendering_targets["frames_farm"] = "Use existing frames - farm" rendering_targets["farm"] = "Farm rendering" return EnumDef( @@ -921,7 +939,11 @@ def generate_mov(self, farm=False, **kwargs): except Exception: self.log.info("`mov64_codec` knob was not found") - write_node["mov64_write_timecode"].setValue(1) + try: + write_node["mov64_write_timecode"].setValue(1) + except Exception: + self.log.info("`mov64_write_timecode` knob was not found") + write_node["raw"].setValue(1) # connect write_node.setInput(0, self.previous_node) diff --git a/openpype/hosts/nuke/api/workfile_template_builder.py b/openpype/hosts/nuke/api/workfile_template_builder.py index a19cb9dfead..9d7604c58d0 100644 --- a/openpype/hosts/nuke/api/workfile_template_builder.py +++ b/openpype/hosts/nuke/api/workfile_template_builder.py @@ -114,6 +114,11 @@ def _parse_placeholder_node_data(self, node): placeholder_data[key] = value return placeholder_data + def delete_placeholder(self, placeholder): + """Remove placeholder if building was successful""" + placeholder_node = nuke.toNode(placeholder.scene_identifier) + nuke.delete(placeholder_node) + class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin): identifier = "nuke.load" @@ -276,14 +281,6 @@ def post_placeholder_process(self, placeholder, failed): placeholder.data["nb_children"] += 1 reset_selection() - # remove placeholders marked as delete - if ( - placeholder.data.get("delete") - and not placeholder.data.get("keep_placeholder") - ): - self.log.debug("Deleting node: {}".format(placeholder_node.name())) - nuke.delete(placeholder_node) - # go back to 
root group nuke.root().begin() @@ -690,14 +687,6 @@ def post_placeholder_process(self, placeholder, failed): placeholder.data["nb_children"] += 1 reset_selection() - # remove placeholders marked as delete - if ( - placeholder.data.get("delete") - and not placeholder.data.get("keep_placeholder") - ): - self.log.debug("Deleting node: {}".format(placeholder_node.name())) - nuke.delete(placeholder_node) - # go back to root group nuke.root().begin() diff --git a/openpype/hosts/nuke/api/workio.py b/openpype/hosts/nuke/api/workio.py index 8d29e0441f8..98e59eff719 100644 --- a/openpype/hosts/nuke/api/workio.py +++ b/openpype/hosts/nuke/api/workio.py @@ -1,6 +1,7 @@ """Host API required Work Files tool""" import os import nuke +import shutil from .utils import is_headless @@ -21,21 +22,37 @@ def save_file(filepath): def open_file(filepath): + + def read_script(nuke_script): + nuke.scriptClear() + nuke.scriptReadFile(nuke_script) + nuke.Root()["name"].setValue(nuke_script) + nuke.Root()["project_directory"].setValue(os.path.dirname(nuke_script)) + nuke.Root().setModified(False) + filepath = filepath.replace("\\", "/") # To remain in the same window, we have to clear the script and read # in the contents of the workfile. - nuke.scriptClear() + # Nuke Preferences can be read after the script is read. + read_script(filepath) + if not is_headless(): autosave = nuke.toNode("preferences")["AutoSaveName"].evaluate() - autosave_prmpt = "Autosave detected.\nWould you like to load the autosave file?" # noqa + autosave_prmpt = "Autosave detected.\n" \ + "Would you like to load the autosave file?" # noqa if os.path.isfile(autosave) and nuke.ask(autosave_prmpt): - filepath = autosave + try: + # Overwrite the filepath with autosave + shutil.copy(autosave, filepath) + # Now read the (auto-saved) script again + read_script(filepath) + except shutil.Error as err: + nuke.message( + "Detected autosave file could not be used.\n{}" + + .format(err)) - nuke.scriptReadFile(filepath) - nuke.Root()["name"].setValue(filepath) - nuke.Root()["project_directory"].setValue(os.path.dirname(filepath)) - nuke.Root().setModified(False) return True diff --git a/openpype/hosts/nuke/hooks/pre_nukeassist_setup.py b/openpype/hosts/nuke/hooks/pre_nukeassist_setup.py index 3948a665c6b..657291ec519 100644 --- a/openpype/hosts/nuke/hooks/pre_nukeassist_setup.py +++ b/openpype/hosts/nuke/hooks/pre_nukeassist_setup.py @@ -1,11 +1,12 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook class PrelaunchNukeAssistHook(PreLaunchHook): """ Adding flag when nukeassist """ - app_groups = ["nukeassist"] + app_groups = {"nukeassist"} + launch_types = set() def execute(self): self.launch_context.env["NUKEASSIST"] = "1" diff --git a/openpype/hosts/nuke/plugins/create/create_write_image.py b/openpype/hosts/nuke/plugins/create/create_write_image.py index 0c8adfb75c1..8c18739587e 100644 --- a/openpype/hosts/nuke/plugins/create/create_write_image.py +++ b/openpype/hosts/nuke/plugins/create/create_write_image.py @@ -64,9 +64,6 @@ def _get_frame_source_number(self): ) def create_instance_node(self, subset_name, instance_data): - linked_knobs_ = [] - if "use_range_limit" in self.instance_attributes: - linked_knobs_ = ["channels", "___", "first", "last", "use_limit"] # add fpath_template write_data = { @@ -81,7 +78,7 @@ def create_instance_node(self, subset_name, instance_data): write_data, input=self.selected_node, prenodes=self.prenodes, - linked_knobs=linked_knobs_, + linked_knobs=self.get_linked_knobs(), **{ "frame": 
nuke.frame() } diff --git a/openpype/hosts/nuke/plugins/create/create_write_prerender.py b/openpype/hosts/nuke/plugins/create/create_write_prerender.py index f46dd2d6d5d..395c3b002fd 100644 --- a/openpype/hosts/nuke/plugins/create/create_write_prerender.py +++ b/openpype/hosts/nuke/plugins/create/create_write_prerender.py @@ -30,6 +30,9 @@ class CreateWritePrerender(napi.NukeWriteCreator): temp_rendering_path_template = ( "{work}/renders/nuke/{subset}/{subset}.{frame}.{ext}") + # Before write node render. + order = 90 + def get_pre_create_attr_defs(self): attr_defs = [ BoolDef( @@ -42,10 +45,6 @@ def get_pre_create_attr_defs(self): return attr_defs def create_instance_node(self, subset_name, instance_data): - linked_knobs_ = [] - if "use_range_limit" in self.instance_attributes: - linked_knobs_ = ["channels", "___", "first", "last", "use_limit"] - # add fpath_template write_data = { "creator": self.__class__.__name__, @@ -68,7 +67,7 @@ def create_instance_node(self, subset_name, instance_data): write_data, input=self.selected_node, prenodes=self.prenodes, - linked_knobs=linked_knobs_, + linked_knobs=self.get_linked_knobs(), **{ "width": width, "height": height diff --git a/openpype/hosts/nuke/plugins/create/create_write_render.py b/openpype/hosts/nuke/plugins/create/create_write_render.py index c24405873a0..91acf4eabcb 100644 --- a/openpype/hosts/nuke/plugins/create/create_write_render.py +++ b/openpype/hosts/nuke/plugins/create/create_write_render.py @@ -56,11 +56,15 @@ def create_instance_node(self, subset_name, instance_data): actual_format = nuke.root().knob('format').value() width, height = (actual_format.width(), actual_format.height()) + self.log.debug(">>>>>>> : {}".format(self.instance_attributes)) + self.log.debug(">>>>>>> : {}".format(self.get_linked_knobs())) + created_node = napi.create_write_node( subset_name, write_data, input=self.selected_node, prenodes=self.prenodes, + linked_knobs=self.get_linked_knobs(), **{ "width": width, "height": height diff --git a/openpype/hosts/nuke/plugins/load/load_clip.py b/openpype/hosts/nuke/plugins/load/load_clip.py index 5539324fb70..19038b168d0 100644 --- a/openpype/hosts/nuke/plugins/load/load_clip.py +++ b/openpype/hosts/nuke/plugins/load/load_clip.py @@ -91,14 +91,14 @@ def load(self, context, name, namespace, options): # reset container id so it is always unique for each instance self.reset_container_id() - self.log.warning(self.extensions) - is_sequence = len(representation["files"]) > 1 if is_sequence: - representation = self._representation_with_hash_in_frame( - representation + context["representation"] = \ + self._representation_with_hash_in_frame( + representation ) + filepath = self.filepath_from_context(context) filepath = filepath.replace("\\", "/") self.log.debug("_ filepath: {}".format(filepath)) @@ -260,6 +260,7 @@ def update(self, container, representation): representation = self._representation_with_hash_in_frame( representation ) + filepath = get_representation_path(representation).replace("\\", "/") self.log.debug("_ filepath: {}".format(filepath)) diff --git a/openpype/hosts/nuke/plugins/load/load_image.py b/openpype/hosts/nuke/plugins/load/load_image.py index d8c0a822061..0dd3a940db3 100644 --- a/openpype/hosts/nuke/plugins/load/load_image.py +++ b/openpype/hosts/nuke/plugins/load/load_image.py @@ -96,7 +96,8 @@ def load(self, context, name, namespace, options): file = file.replace("\\", "/") - repr_cont = context["representation"]["context"] + representation = context["representation"] + repr_cont = 
representation["context"] frame = repr_cont.get("frame") if frame: padding = len(frame) @@ -104,16 +105,7 @@ def load(self, context, name, namespace, options): frame, format(frame_number, "0{}".format(padding))) - name_data = { - "asset": repr_cont["asset"], - "subset": repr_cont["subset"], - "representation": context["representation"]["name"], - "ext": repr_cont["representation"], - "id": context["representation"]["_id"], - "class_name": self.__class__.__name__ - } - - read_name = self.node_name_template.format(**name_data) + read_name = self._get_node_name(representation) # Create the Loader with the filename path set with viewer_update_and_undo_stop(): @@ -212,6 +204,8 @@ def update(self, container, representation): last = first = int(frame_number) # Set the global in to the start frame of the sequence + read_name = self._get_node_name(representation) + node["name"].setValue(read_name) node["file"].setValue(file) node["origfirst"].setValue(first) node["first"].setValue(first) @@ -250,3 +244,17 @@ def remove(self, container): with viewer_update_and_undo_stop(): nuke.delete(node) + + def _get_node_name(self, representation): + + repre_cont = representation["context"] + name_data = { + "asset": repre_cont["asset"], + "subset": repre_cont["subset"], + "representation": representation["name"], + "ext": repre_cont["representation"], + "id": representation["_id"], + "class_name": self.__class__.__name__ + } + + return self.node_name_template.format(**name_data) diff --git a/openpype/hosts/nuke/plugins/publish/collect_instance_data.py b/openpype/hosts/nuke/plugins/publish/collect_nuke_instance_data.py similarity index 71% rename from openpype/hosts/nuke/plugins/publish/collect_instance_data.py rename to openpype/hosts/nuke/plugins/publish/collect_nuke_instance_data.py index 3908aef4bcc..b0f69e8ab8b 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_instance_data.py +++ b/openpype/hosts/nuke/plugins/publish/collect_nuke_instance_data.py @@ -2,11 +2,13 @@ import pyblish.api -class CollectInstanceData(pyblish.api.InstancePlugin): - """Collect all nodes with Avalon knob.""" +class CollectNukeInstanceData(pyblish.api.InstancePlugin): + """Collect Nuke instance data + + """ order = pyblish.api.CollectorOrder - 0.49 - label = "Collect Instance Data" + label = "Collect Nuke Instance Data" hosts = ["nuke", "nukeassist"] # presets @@ -40,5 +42,14 @@ def process(self, instance): "pixelAspect": pixel_aspect }) + + # add creator attributes to instance + creator_attributes = instance.data["creator_attributes"] + instance.data.update(creator_attributes) + + # add review family if review activated on instance + if instance.data.get("review"): + instance.data["families"].append("review") + self.log.debug("Collected instance: {}".format( instance.data)) diff --git a/openpype/hosts/nuke/plugins/publish/collect_slate_node.py b/openpype/hosts/nuke/plugins/publish/collect_slate_node.py index 57010876977..c7d65ffd249 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_slate_node.py +++ b/openpype/hosts/nuke/plugins/publish/collect_slate_node.py @@ -5,7 +5,7 @@ class CollectSlate(pyblish.api.InstancePlugin): """Check if SLATE node is in scene and connected to rendering tree""" - order = pyblish.api.CollectorOrder + 0.09 + order = pyblish.api.CollectorOrder + 0.002 label = "Collect Slate Node" hosts = ["nuke"] families = ["render"] @@ -13,10 +13,14 @@ class CollectSlate(pyblish.api.InstancePlugin): def process(self, instance): node = instance.data["transientData"]["node"] - slate = next((n for n in 
nuke.allNodes() - if "slate" in n.name().lower() - if not n["disable"].getValue()), - None) + slate = next( + ( + n_ for n_ in nuke.allNodes() + if "slate" in n_.name().lower() + if not n_["disable"].getValue() + ), + None + ) if slate: # check if slate node is connected to write node tree diff --git a/openpype/hosts/nuke/plugins/publish/collect_writes.py b/openpype/hosts/nuke/plugins/publish/collect_writes.py index 2d1caacdc34..6f9245f5b96 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_writes.py +++ b/openpype/hosts/nuke/plugins/publish/collect_writes.py @@ -1,5 +1,4 @@ import os -from pprint import pformat import nuke import pyblish.api from openpype.hosts.nuke import api as napi @@ -15,30 +14,16 @@ class CollectNukeWrites(pyblish.api.InstancePlugin, hosts = ["nuke", "nukeassist"] families = ["render", "prerender", "image"] + # cache + _write_nodes = {} + _frame_ranges = {} + def process(self, instance): - self.log.debug(pformat(instance.data)) - creator_attributes = instance.data["creator_attributes"] - instance.data.update(creator_attributes) group_node = instance.data["transientData"]["node"] render_target = instance.data["render_target"] - family = instance.data["family"] - families = instance.data["families"] - - # add targeted family to families - instance.data["families"].append( - "{}.{}".format(family, render_target) - ) - if instance.data.get("review"): - instance.data["families"].append("review") - - child_nodes = napi.get_instance_group_node_childs(instance) - instance.data["transientData"]["childNodes"] = child_nodes - write_node = None - for x in child_nodes: - if x.Class() == "Write": - write_node = x + write_node = self._write_node_helper(instance) if write_node is None: self.log.warning( @@ -48,113 +33,134 @@ def process(self, instance): ) return - instance.data["writeNode"] = write_node - self.log.debug("checking instance: {}".format(instance)) + # get colorspace and add to version data + colorspace = napi.get_colorspace_from_node(write_node) - # Determine defined file type - ext = write_node["file_type"].value() + if render_target == "frames": + self._set_existing_files_data(instance, colorspace) - # Get frame range - handle_start = instance.context.data["handleStart"] - handle_end = instance.context.data["handleEnd"] + elif render_target == "frames_farm": + collected_frames = self._set_existing_files_data( + instance, colorspace) + + self._set_expected_files(instance, collected_frames) + + self._add_farm_instance_data(instance) + + elif render_target == "farm": + self._add_farm_instance_data(instance) + + # set additional instance data + self._set_additional_instance_data(instance, render_target, colorspace) + + def _set_existing_files_data(self, instance, colorspace): + """Set existing files data to instance data. + + Args: + instance (pyblish.api.Instance): pyblish instance + colorspace (str): colorspace + + Returns: + list: collected frames + """ + collected_frames = self._get_collected_frames(instance) + + representation = self._get_existing_frames_representation( + instance, collected_frames + ) + + # inject colorspace data + self.set_representation_colorspace( + representation, instance.context, + colorspace=colorspace + ) + + instance.data["representations"].append(representation) + + return collected_frames + + def _set_expected_files(self, instance, collected_frames): + """Set expected files to instance data. 
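
The rewritten collector above keeps two class-level caches (`_write_nodes`, `_frame_ranges`) keyed by instance name, so repeated helper calls during one publish resolve the write node and frame range only once. The pattern reduces to this sketch, with a stand-in `find_write_node` in place of the real group-walking logic:

```python
class ExampleCollector(object):
    # Shared across all instances processed within one publish session.
    _write_nodes = {}

    def _write_node_helper(self, instance):
        instance_name = instance.data["name"]
        if self._write_nodes.get(instance_name):
            # Return the cached node instead of re-walking the group.
            return self._write_nodes[instance_name]

        write_node = find_write_node(instance)
        self._write_nodes[instance_name] = write_node
        return write_node


def find_write_node(instance):
    """Stand-in for walking the instance group for its Write node."""
    return instance.data["transientData"].get("node")
```
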
+ + Args: + instance (pyblish.api.Instance): pyblish instance + collected_frames (list): collected frames + """ + write_node = self._write_node_helper(instance) + + write_file_path = nuke.filename(write_node) + output_dir = os.path.dirname(write_file_path) + + instance.data["expectedFiles"] = [ + os.path.join(output_dir, source_file) + for source_file in collected_frames + ] + + def _get_frame_range_data(self, instance): + """Get frame range data from instance. + + Args: + instance (pyblish.api.Instance): pyblish instance + + Returns: + tuple: first_frame, last_frame + """ + + instance_name = instance.data["name"] + + if self._frame_ranges.get(instance_name): + # return cashed write node + return self._frame_ranges[instance_name] + + write_node = self._write_node_helper(instance) + + # Get frame range from workfile first_frame = int(nuke.root()["first_frame"].getValue()) last_frame = int(nuke.root()["last_frame"].getValue()) - frame_length = int(last_frame - first_frame + 1) + # Get frame range from write node if activated if write_node["use_limit"].getValue(): first_frame = int(write_node["first"].getValue()) last_frame = int(write_node["last"].getValue()) - write_file_path = nuke.filename(write_node) - output_dir = os.path.dirname(write_file_path) + # add to cache + self._frame_ranges[instance_name] = (first_frame, last_frame) - # get colorspace and add to version data - colorspace = napi.get_colorspace_from_node(write_node) + return first_frame, last_frame - self.log.debug('output dir: {}'.format(output_dir)) + def _set_additional_instance_data( + self, instance, render_target, colorspace + ): + """Set additional instance data. - if render_target == "frames": - representation = { - 'name': ext, - 'ext': ext, - "stagingDir": output_dir, - "tags": [] - } - - # get file path knob - node_file_knob = write_node["file"] - # list file paths based on input frames - expected_paths = list(sorted({ - node_file_knob.evaluate(frame) - for frame in range(first_frame, last_frame + 1) - })) - - # convert only to base names - expected_filenames = [ - os.path.basename(filepath) - for filepath in expected_paths - ] - - # make sure files are existing at folder - collected_frames = [ - filename - for filename in os.listdir(output_dir) - if filename in expected_filenames - ] - - if collected_frames: - collected_frames_len = len(collected_frames) - frame_start_str = "%0{}d".format( - len(str(last_frame))) % first_frame - representation['frameStart'] = frame_start_str - - # in case slate is expected and not yet rendered - self.log.debug("_ frame_length: {}".format(frame_length)) - self.log.debug("_ collected_frames_len: {}".format( - collected_frames_len)) - - # this will only run if slate frame is not already - # rendered from previews publishes - if ( - "slate" in families - and frame_length == collected_frames_len - and family == "render" - ): - frame_slate_str = ( - "{{:0{}d}}".format(len(str(last_frame))) - ).format(first_frame - 1) - - slate_frame = collected_frames[0].replace( - frame_start_str, frame_slate_str) - collected_frames.insert(0, slate_frame) - - if collected_frames_len == 1: - representation['files'] = collected_frames.pop() - else: - representation['files'] = collected_frames - - # inject colorspace data - self.set_representation_colorspace( - representation, instance.context, - colorspace=colorspace - ) + Args: + instance (pyblish.api.Instance): pyblish instance + render_target (str): render target + colorspace (str): colorspace + """ + family = instance.data["family"] - 
instance.data["representations"].append(representation) - self.log.info("Publishing rendered frames ...") + # add targeted family to families + instance.data["families"].append( + "{}.{}".format(family, render_target) + ) + self.log.debug("Appending render target to families: {}.{}".format( + family, render_target) + ) - elif render_target == "farm": - farm_keys = ["farm_chunk", "farm_priority", "farm_concurrency"] - for key in farm_keys: - # Skip if key is not in creator attributes - if key not in creator_attributes: - continue - # Add farm attributes to instance - instance.data[key] = creator_attributes[key] - - # Farm rendering - instance.data["transfer"] = False - instance.data["farm"] = True - self.log.info("Farm rendering ON ...") + write_node = self._write_node_helper(instance) + + # Determine defined file type + ext = write_node["file_type"].value() + + # get frame range data + handle_start = instance.context.data["handleStart"] + handle_end = instance.context.data["handleEnd"] + first_frame, last_frame = self._get_frame_range_data(instance) + + # get output paths + write_file_path = nuke.filename(write_node) + output_dir = os.path.dirname(write_file_path) # TODO: remove this when we have proper colorspace support version_data = { @@ -188,9 +194,208 @@ def process(self, instance): "frameEndHandle": last_frame, }) + + # TODO temporarily set stagingDir as persistent for backward + # compatibility. This is mainly focused on `renders`folders which + # were previously not cleaned up (and could be used in read notes) + # this logic should be removed and replaced with custom staging dir + instance.data["stagingDir_persistent"] = True + + def _write_node_helper(self, instance): + """Helper function to get write node from instance. + + Also sets instance transient data with child nodes. + + Args: + instance (pyblish.api.Instance): pyblish instance + + Returns: + nuke.Node: write node + """ + instance_name = instance.data["name"] + + if self._write_nodes.get(instance_name): + # return cashed write node + return self._write_nodes[instance_name] + + # get all child nodes from group node + child_nodes = napi.get_instance_group_node_childs(instance) + + # set child nodes to instance transient data + instance.data["transientData"]["childNodes"] = child_nodes + + write_node = None + for node_ in child_nodes: + if node_.Class() == "Write": + write_node = node_ + + if write_node: + # for slate frame extraction + instance.data["transientData"]["writeNode"] = write_node + # add to cache + self._write_nodes[instance_name] = write_node + + return self._write_nodes[instance_name] + + def _get_existing_frames_representation( + self, + instance, + collected_frames + ): + """Get existing frames representation. 
+ + Args: + instance (pyblish.api.Instance): pyblish instance + collected_frames (list): collected frames + + Returns: + dict: representation + """ + + first_frame, last_frame = self._get_frame_range_data(instance) + + write_node = self._write_node_helper(instance) + + write_file_path = nuke.filename(write_node) + output_dir = os.path.dirname(write_file_path) + + # Determine defined file type + ext = write_node["file_type"].value() + + representation = { + "name": ext, + "ext": ext, + "stagingDir": output_dir, + "tags": [] + } + + frame_start_str = self._get_frame_start_str(first_frame, last_frame) + + representation['frameStart'] = frame_start_str + + # set slate frame + collected_frames = self._add_slate_frame_to_collected_frames( + instance, + collected_frames, + first_frame, + last_frame + ) + + if len(collected_frames) == 1: + representation['files'] = collected_frames.pop() + else: + representation['files'] = collected_frames + + return representation + + def _get_frame_start_str(self, first_frame, last_frame): + """Get frame start string. + + Args: + first_frame (int): first frame + last_frame (int): last frame + + Returns: + str: frame start string + """ + # convert first frame to string with padding + return ( + "{{:0{}d}}".format(len(str(last_frame))) + ).format(first_frame) + + def _add_slate_frame_to_collected_frames( + self, + instance, + collected_frames, + first_frame, + last_frame + ): + """Add slate frame to collected frames. + + Args: + instance (pyblish.api.Instance): pyblish instance + collected_frames (list): collected frames + first_frame (int): first frame + last_frame (int): last frame + + Returns: + list: collected frames + """ + frame_start_str = self._get_frame_start_str(first_frame, last_frame) + frame_length = int(last_frame - first_frame + 1) + + # this will only run if slate frame is not already + # rendered from previews publishes + if ( + "slate" in instance.data["families"] + and frame_length == len(collected_frames) + ): + frame_slate_str = self._get_frame_start_str( + first_frame - 1, + last_frame + ) + + slate_frame = collected_frames[0].replace( + frame_start_str, frame_slate_str) + collected_frames.insert(0, slate_frame) + + return collected_frames + + def _add_farm_instance_data(self, instance): + """Add farm publishing related instance data. + + Args: + instance (pyblish.api.Instance): pyblish instance + """ + # make sure rendered sequence on farm will # be used for extract review if not instance.data.get("review"): instance.data["useSequenceForReview"] = False - self.log.debug("instance.data: {}".format(pformat(instance.data))) + # Farm rendering + instance.data.update({ + "transfer": False, + "farm": True # to skip integrate + }) + self.log.info("Farm rendering ON ...") + + def _get_collected_frames(self, instance): + """Get collected frames. 
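
Worked example of `_get_frame_start_str` above: the zero-padding width derives from the last frame number, and the slate variant simply formats `first_frame - 1` with the same width:

```python
def frame_start_str(first_frame, last_frame):
    # Zero-pad to the width of the last frame number.
    return ("{{:0{}d}}".format(len(str(last_frame)))).format(first_frame)

assert frame_start_str(997, 1100) == "0997"
# The slate frame sits one frame before the sequence start:
assert frame_start_str(997 - 1, 1100) == "0996"
```
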
+ + Args: + instance (pyblish.api.Instance): pyblish instance + + Returns: + list: collected frames + """ + + first_frame, last_frame = self._get_frame_range_data(instance) + + write_node = self._write_node_helper(instance) + + write_file_path = nuke.filename(write_node) + output_dir = os.path.dirname(write_file_path) + + # get file path knob + node_file_knob = write_node["file"] + # list file paths based on input frames + expected_paths = list(sorted({ + node_file_knob.evaluate(frame) + for frame in range(first_frame, last_frame + 1) + })) + + # convert only to base names + expected_filenames = { + os.path.basename(filepath) + for filepath in expected_paths + } + + # make sure files are existing at folder + collected_frames = [ + filename + for filename in os.listdir(output_dir) + if filename in expected_filenames + ] + + return collected_frames diff --git a/openpype/hosts/nuke/plugins/publish/extract_camera.py b/openpype/hosts/nuke/plugins/publish/extract_camera.py index 4286f71e834..33df6258aef 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_camera.py +++ b/openpype/hosts/nuke/plugins/publish/extract_camera.py @@ -11,9 +11,9 @@ class ExtractCamera(publish.Extractor): - """ 3D camera exctractor + """ 3D camera extractor """ - label = 'Exctract Camera' + label = 'Extract Camera' order = pyblish.api.ExtractorOrder families = ["camera"] hosts = ["nuke"] diff --git a/openpype/hosts/nuke/plugins/publish/extract_model.py b/openpype/hosts/nuke/plugins/publish/extract_model.py index 814d4041375..00462f80351 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_model.py +++ b/openpype/hosts/nuke/plugins/publish/extract_model.py @@ -11,9 +11,9 @@ class ExtractModel(publish.Extractor): - """ 3D model exctractor + """ 3D model extractor """ - label = 'Exctract Model' + label = 'Extract Model' order = pyblish.api.ExtractorOrder families = ["model"] hosts = ["nuke"] diff --git a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py index 06c086b10dd..25262a74185 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py +++ b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py @@ -249,7 +249,7 @@ def _render_slate_to_sequence(self, instance): # Add file to representation files # - get write node - write_node = instance.data["writeNode"] + write_node = instance.data["transientData"]["writeNode"] # - evaluate filepaths for first frame and slate frame first_filename = os.path.basename( write_node["file"].evaluate(first_frame)) diff --git a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py index 21eefda249b..d57d55f85da 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py @@ -54,6 +54,7 @@ def process(self, instance): def render_thumbnail(self, instance, output_name=None, **kwargs): first_frame = instance.data["frameStartHandle"] last_frame = instance.data["frameEndHandle"] + colorspace = instance.data["colorspace"] # find frame range and define middle thumb frame mid_frame = int((last_frame - first_frame) / 2) @@ -112,8 +113,8 @@ def render_thumbnail(self, instance, output_name=None, **kwargs): if self.use_rendered and os.path.isfile(path_render): # check if file exist otherwise connect to write node rnode = nuke.createNode("Read") - rnode["file"].setValue(path_render) + rnode["colorspace"].setValue(colorspace) # turn it raw if none of baking is ON if all([ diff 
--git a/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py b/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py index 45c20412c8c..9a35b61a0e9 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py +++ b/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py @@ -14,27 +14,26 @@ def get_instance(context, plugin): # Get the errored instances return get_errored_instances_from_context(context, plugin=plugin) - def repair_knob(self, instances, state): + def repair_knob(self, context, instances, state): + create_context = context.data["create_context"] for instance in instances: - node = instance.data["transientData"]["node"] - files_remove = [os.path.join(instance.data["outputDir"], f) - for r in instance.data.get("representations", []) - for f in r.get("files", []) - ] - self.log.info("Files to be removed: {}".format(files_remove)) - for f in files_remove: - os.remove(f) - self.log.debug("removing file: {}".format(f)) - node["render"].setValue(state) + # Reset the render knob + instance_id = instance.data.get("instance_id") + created_instance = create_context.get_instance_by_id( + instance_id + ) + created_instance.creator_attributes["render_target"] = state self.log.info("Rendering toggled to `{}`".format(state)) + create_context.save_changes() + class RepairCollectionActionToLocal(RepairActionBase): label = "Repair - rerender with \"Local\"" def process(self, context, plugin): instances = self.get_instance(context, plugin) - self.repair_knob(instances, "Local") + self.repair_knob(context, instances, "local") class RepairCollectionActionToFarm(RepairActionBase): @@ -42,7 +41,7 @@ class RepairCollectionActionToFarm(RepairActionBase): def process(self, context, plugin): instances = self.get_instance(context, plugin) - self.repair_knob(instances, "On farm") + self.repair_knob(context, instances, "farm") class ValidateRenderedFrames(pyblish.api.InstancePlugin): diff --git a/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py b/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py index aeecea655f3..2a925fbefff 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py +++ b/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py @@ -1,3 +1,5 @@ +from collections import defaultdict + import pyblish.api from openpype.pipeline.publish import get_errored_instances_from_context from openpype.hosts.nuke.api.lib import ( @@ -87,6 +89,11 @@ def process(self, instance): correct_data )) + # Collect key values of same type in a list. 
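# --- Illustrative aside (editor's sketch, not part of the patch) -----------
# The grouping built just below lets settings define several acceptable
# values for one knob name; the validator then passes when the node value
# matches any of them. A minimal, standalone sketch with made-up knob data:
from collections import defaultdict

example_knobs = [
    {"name": "file_type", "value": "exr"},
    {"name": "file_type", "value": "tiff"},
    {"name": "channels", "value": "rgb"},
]
example_values_by_name = defaultdict(list)
for example_knob in example_knobs:
    example_values_by_name[example_knob["name"]].append(example_knob["value"])
assert example_values_by_name["file_type"] == ["exr", "tiff"]
# ----------------------------------------------------------------------------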
+ values_by_name = defaultdict(list) + for knob_data in correct_data["knobs"]: + values_by_name[knob_data["name"]].append(knob_data["value"]) + for knob_data in correct_data["knobs"]: knob_type = knob_data["type"] self.log.debug("__ knob_type: {}".format( @@ -105,28 +112,33 @@ def process(self, instance): ) key = knob_data["name"] - value = knob_data["value"] + values = values_by_name[key] node_value = write_node[key].value() # fix type differences - if type(node_value) in (int, float): - try: - if isinstance(value, list): - value = color_gui_to_int(value) - else: - value = float(value) - node_value = float(node_value) - except ValueError: + fixed_values = [] + for value in values: + if type(node_value) in (int, float): + try: + + if isinstance(value, list): + value = color_gui_to_int(value) + else: + value = float(value) + node_value = float(node_value) + except ValueError: + value = str(value) + else: value = str(value) - else: - value = str(value) - node_value = str(node_value) + node_value = str(node_value) + + fixed_values.append(value) - self.log.debug("__ key: {} | value: {}".format( - key, value + self.log.debug("__ key: {} | values: {}".format( + key, fixed_values )) if ( - node_value != value + node_value not in fixed_values and key != "file" and key != "tile_color" ): diff --git a/openpype/hosts/photoshop/plugins/publish/closePS.py b/openpype/hosts/photoshop/plugins/publish/closePS.py index b4ded96001c..b4c3a4c966e 100644 --- a/openpype/hosts/photoshop/plugins/publish/closePS.py +++ b/openpype/hosts/photoshop/plugins/publish/closePS.py @@ -17,7 +17,7 @@ class ClosePS(pyblish.api.ContextPlugin): active = True hosts = ["photoshop"] - targets = ["remotepublish"] + targets = ["automated"] def process(self, context): self.log.info("ClosePS") diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py index ce408f8d010..f1d84196082 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py @@ -6,8 +6,6 @@ class CollectAutoImage(pyblish.api.ContextPlugin): """Creates auto image in non artist based publishes (Webpublisher). 
- - 'remotepublish' should be renamed to 'autopublish' or similar in the future """ label = "Collect Auto Image" @@ -15,7 +13,7 @@ class CollectAutoImage(pyblish.api.ContextPlugin): hosts = ["photoshop"] order = pyblish.api.CollectorOrder + 0.2 - targets = ["remotepublish"] + targets = ["automated"] def process(self, context): family = "image" diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py index 7de4adcaf4b..82ba0ac09c2 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py @@ -20,7 +20,7 @@ class CollectAutoReview(pyblish.api.ContextPlugin): label = "Collect Auto Review" hosts = ["photoshop"] order = pyblish.api.CollectorOrder + 0.2 - targets = ["remotepublish"] + targets = ["automated"] publish = True diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py index d10cf62c677..01dc50af401 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py @@ -12,7 +12,7 @@ class CollectAutoWorkfile(pyblish.api.ContextPlugin): label = "Collect Workfile" hosts = ["photoshop"] - targets = ["remotepublish"] + targets = ["automated"] def process(self, context): family = "workfile" diff --git a/openpype/hosts/photoshop/plugins/publish/collect_batch_data.py b/openpype/hosts/photoshop/plugins/publish/collect_batch_data.py index a5fea7ac7d4..b13ff5e4763 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_batch_data.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_batch_data.py @@ -35,7 +35,7 @@ class CollectBatchData(pyblish.api.ContextPlugin): order = pyblish.api.CollectorOrder - 0.495 label = "Collect batch data" hosts = ["photoshop"] - targets = ["remotepublish"] + targets = ["webpublish"] def process(self, context): self.log.info("CollectBatchData") diff --git a/openpype/hosts/photoshop/plugins/publish/collect_color_coded_instances.py b/openpype/hosts/photoshop/plugins/publish/collect_color_coded_instances.py index 90fca8398f9..c16616bcb29 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_color_coded_instances.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_color_coded_instances.py @@ -34,7 +34,7 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin): label = "Instances" order = pyblish.api.CollectorOrder hosts = ["photoshop"] - targets = ["remotepublish"] + targets = ["automated"] # configurable by Settings color_code_mapping = [] diff --git a/openpype/hosts/photoshop/plugins/publish/collect_published_version.py b/openpype/hosts/photoshop/plugins/publish/collect_published_version.py index 2502689e4bb..eec6f1fae40 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_published_version.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_published_version.py @@ -18,6 +18,7 @@ import pyblish.api from openpype.client import get_last_version_by_subset_name +from openpype.pipeline.version_start import get_versioning_start class CollectPublishedVersion(pyblish.api.ContextPlugin): @@ -26,7 +27,7 @@ class CollectPublishedVersion(pyblish.api.ContextPlugin): order = pyblish.api.CollectorOrder + 0.190 label = "Collect published version" hosts = ["photoshop"] - targets = ["remotepublish"] + targets = ["automated"] def process(self, context): workfile_subset_name = None @@ -47,9 +48,17 @@ 
def process(self, context): version_doc = get_last_version_by_subset_name(project_name, workfile_subset_name, asset_id) - version_int = 1 + if version_doc: - version_int += int(version_doc["name"]) + version_int = int(version_doc["name"]) + 1 + else: + version_int = get_versioning_start( + project_name, + "photoshop", + task_name=context.data["task"], + task_type=context.data["taskType"], + project_settings=context.data["project_settings"] + ) self.log.debug(f"Setting {version_int} to context.") context.data["version"] = version_int diff --git a/openpype/hosts/photoshop/plugins/publish/extract_review.py b/openpype/hosts/photoshop/plugins/publish/extract_review.py index d5416a389d4..4aa7a05bd1e 100644 --- a/openpype/hosts/photoshop/plugins/publish/extract_review.py +++ b/openpype/hosts/photoshop/plugins/publish/extract_review.py @@ -1,10 +1,9 @@ import os -import shutil from PIL import Image from openpype.lib import ( run_subprocess, - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, ) from openpype.pipeline import publish from openpype.hosts.photoshop import api as photoshop @@ -85,7 +84,7 @@ def process(self, instance): instance.data["representations"].append(repre_skeleton) processed_img_names = [img_list] - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + ffmpeg_args = get_ffmpeg_tool_args("ffmpeg") instance.data["stagingDir"] = staging_dir @@ -94,13 +93,21 @@ def process(self, instance): source_files_pattern = self._check_and_resize(processed_img_names, source_files_pattern, staging_dir) - self._generate_thumbnail(ffmpeg_path, instance, source_files_pattern, - staging_dir) + self._generate_thumbnail( + list(ffmpeg_args), + instance, + source_files_pattern, + staging_dir) no_of_frames = len(processed_img_names) if no_of_frames > 1: - self._generate_mov(ffmpeg_path, instance, fps, no_of_frames, - source_files_pattern, staging_dir) + self._generate_mov( + list(ffmpeg_args), + instance, + fps, + no_of_frames, + source_files_pattern, + staging_dir) self.log.info(f"Extracted {instance} to {staging_dir}") @@ -142,8 +149,9 @@ def _generate_mov(self, ffmpeg_path, instance, fps, no_of_frames, "tags": self.mov_options['tags'] }) - def _generate_thumbnail(self, ffmpeg_path, instance, source_files_pattern, - staging_dir): + def _generate_thumbnail( + self, ffmpeg_args, instance, source_files_pattern, staging_dir + ): """Generates scaled down thumbnail and adds it as representation. Args: @@ -157,8 +165,7 @@ def _generate_thumbnail(self, ffmpeg_path, instance, source_files_pattern, # Generate thumbnail thumbnail_path = os.path.join(staging_dir, "thumbnail.jpg") self.log.info(f"Generate thumbnail {thumbnail_path}") - args = [ - ffmpeg_path, + args = ffmpeg_args + [ "-y", "-i", source_files_pattern, "-vf", "scale=300:-1", diff --git a/openpype/hosts/resolve/hooks/pre_resolve_last_workfile.py b/openpype/hosts/resolve/hooks/pre_resolve_last_workfile.py index bc03baad8d9..73f5ac75b1a 100644 --- a/openpype/hosts/resolve/hooks/pre_resolve_last_workfile.py +++ b/openpype/hosts/resolve/hooks/pre_resolve_last_workfile.py @@ -1,5 +1,5 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class PreLaunchResolveLastWorkfile(PreLaunchHook): @@ -9,7 +9,8 @@ class PreLaunchResolveLastWorkfile(PreLaunchHook): workfile. This property is set explicitly in Launcher. 
""" order = 10 - app_groups = ["resolve"] + app_groups = {"resolve"} + launch_types = {LaunchTypes.local} def execute(self): if not self.data.get("start_last_workfile"): diff --git a/openpype/hosts/resolve/hooks/pre_resolve_setup.py b/openpype/hosts/resolve/hooks/pre_resolve_setup.py index 3fd39d665c3..326f37dffce 100644 --- a/openpype/hosts/resolve/hooks/pre_resolve_setup.py +++ b/openpype/hosts/resolve/hooks/pre_resolve_setup.py @@ -1,7 +1,7 @@ import os from pathlib import Path import platform -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.hosts.resolve.utils import setup @@ -30,7 +30,8 @@ class PreLaunchResolveSetup(PreLaunchHook): """ - app_groups = ["resolve"] + app_groups = {"resolve"} + launch_types = {LaunchTypes.local} def execute(self): current_platform = platform.system().lower() diff --git a/openpype/hosts/resolve/hooks/pre_resolve_startup.py b/openpype/hosts/resolve/hooks/pre_resolve_startup.py index 599e0c00086..6dbfd09a377 100644 --- a/openpype/hosts/resolve/hooks/pre_resolve_startup.py +++ b/openpype/hosts/resolve/hooks/pre_resolve_startup.py @@ -1,6 +1,6 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes import openpype.hosts.resolve @@ -9,7 +9,8 @@ class PreLaunchResolveStartup(PreLaunchHook): """ order = 11 - app_groups = ["resolve"] + app_groups = {"resolve"} + launch_types = {LaunchTypes.local} def execute(self): # Set the openpype prelaunch startup script path for easy access diff --git a/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py b/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py index 9f02d65d00b..b99503b3c83 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py @@ -1,8 +1,9 @@ import os +import subprocess import tempfile import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, get_ffprobe_streams, path_to_subprocess_arg, run_subprocess, @@ -62,12 +63,12 @@ def process(self, instance): instance.context.data["cleanupFullPaths"].append(full_thumbnail_path) - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + ffmpeg_executable_args = get_ffmpeg_tool_args("ffmpeg") ffmpeg_args = self.ffmpeg_args or {} jpeg_items = [ - path_to_subprocess_arg(ffmpeg_path), + subprocess.list2cmdline(ffmpeg_executable_args), # override file if already exists "-y" ] diff --git a/openpype/hosts/tvpaint/api/lib.py b/openpype/hosts/tvpaint/api/lib.py index 49846d7f296..f8b8c29cdb1 100644 --- a/openpype/hosts/tvpaint/api/lib.py +++ b/openpype/hosts/tvpaint/api/lib.py @@ -233,7 +233,7 @@ def get_layers_pre_post_behavior(layer_ids, communicator=None): Pre and Post behaviors is enumerator of possible values: - "none" - - "repeat" / "loop" + - "repeat" - "pingpong" - "hold" @@ -242,7 +242,7 @@ def get_layers_pre_post_behavior(layer_ids, communicator=None): { 0: { "pre": "none", - "post": "loop" + "post": "repeat" } } ``` diff --git a/openpype/hosts/tvpaint/hooks/pre_launch_args.py b/openpype/hosts/tvpaint/hooks/pre_launch_args.py index c31403437ab..a1c946b60b6 100644 --- a/openpype/hosts/tvpaint/hooks/pre_launch_args.py +++ b/openpype/hosts/tvpaint/hooks/pre_launch_args.py @@ -1,7 +1,5 @@ -from openpype.lib import ( - PreLaunchHook, - get_openpype_execute_args -) +from openpype.lib import get_openpype_execute_args +from openpype.lib.applications import PreLaunchHook, 
LaunchTypes class TvpaintPrelaunchHook(PreLaunchHook): @@ -13,7 +11,8 @@ class TvpaintPrelaunchHook(PreLaunchHook): Existence of last workfile is checked. If workfile does not exists tries to copy templated workfile from predefined path. """ - app_groups = ["tvpaint"] + app_groups = {"tvpaint"} + launch_types = {LaunchTypes.local} def execute(self): # Pop tvpaint executable diff --git a/openpype/hosts/tvpaint/lib.py b/openpype/hosts/tvpaint/lib.py index 95653b6ecb1..97cf8d36339 100644 --- a/openpype/hosts/tvpaint/lib.py +++ b/openpype/hosts/tvpaint/lib.py @@ -77,13 +77,15 @@ def _calculate_pre_behavior_copy( for frame_idx in range(range_start, layer_frame_start): output_idx_by_frame_idx[frame_idx] = first_exposure_frame - elif pre_beh in ("loop", "repeat"): + elif pre_beh == "repeat": # Loop backwards from last frame of layer for frame_idx in reversed(range(range_start, layer_frame_start)): eq_frame_idx_offset = ( (layer_frame_end - frame_idx) % frame_count ) - eq_frame_idx = layer_frame_end - eq_frame_idx_offset + eq_frame_idx = layer_frame_start + ( + layer_frame_end - eq_frame_idx_offset + ) output_idx_by_frame_idx[frame_idx] = eq_frame_idx elif pre_beh == "pingpong": @@ -139,10 +141,10 @@ def _calculate_post_behavior_copy( for frame_idx in range(layer_frame_end + 1, range_end + 1): output_idx_by_frame_idx[frame_idx] = last_exposure_frame - elif post_beh in ("loop", "repeat"): + elif post_beh == "repeat": # Loop backwards from last frame of layer for frame_idx in range(layer_frame_end + 1, range_end + 1): - eq_frame_idx = frame_idx % frame_count + eq_frame_idx = layer_frame_start + (frame_idx % frame_count) output_idx_by_frame_idx[frame_idx] = eq_frame_idx elif post_beh == "pingpong": diff --git a/openpype/hosts/tvpaint/plugins/load/load_workfile.py b/openpype/hosts/tvpaint/plugins/load/load_workfile.py index 2155a1bbd54..169bfdcdd8f 100644 --- a/openpype/hosts/tvpaint/plugins/load/load_workfile.py +++ b/openpype/hosts/tvpaint/plugins/load/load_workfile.py @@ -18,6 +18,7 @@ from openpype.hosts.tvpaint.api.pipeline import ( get_current_workfile_context, ) +from openpype.pipeline.version_start import get_versioning_start class LoadWorkfile(plugin.Loader): @@ -95,7 +96,13 @@ def load(self, context, name, namespace, options): )[1] if version is None: - version = 1 + version = get_versioning_start( + project_name, + "tvpaint", + task_name=task_name, + task_type=data["task"]["type"], + family="workfile" + ) else: version += 1 diff --git a/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py b/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py index ab5bbc5e2c7..c10fc4de97f 100644 --- a/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py +++ b/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py @@ -9,7 +9,8 @@ import pyblish.api from openpype.lib import ( - get_oiio_tools_path, + get_oiio_tool_args, + ToolNotFoundError, run_subprocess, ) from openpype.pipeline import KnownPublishError @@ -34,11 +35,12 @@ def process(self, instance): if not repres: return - oiio_path = get_oiio_tools_path() - # Raise an exception when oiiotool is not available - # - this can currently happen on MacOS machines - if not os.path.exists(oiio_path): - KnownPublishError( + try: + oiio_args = get_oiio_tool_args("oiiotool") + except ToolNotFoundError: + # Raise an exception when oiiotool is not available + # - this can currently happen on MacOS machines + raise KnownPublishError( "OpenImageIO tool is not available on this machine." 
) @@ -64,8 +66,8 @@ def process(self, instance): src_filepaths.add(src_filepath) - args = [ - oiio_path, src_filepath, + args = oiio_args + [ + src_filepath, "--compression", self.exr_compression, # TODO how to define color conversion? "--colorconvert", "sRGB", "linear", diff --git a/openpype/hosts/unreal/addon.py b/openpype/hosts/unreal/addon.py index b5c978d98fb..fcc5d98ab6e 100644 --- a/openpype/hosts/unreal/addon.py +++ b/openpype/hosts/unreal/addon.py @@ -12,6 +12,11 @@ class UnrealAddon(OpenPypeModule, IHostAddon): def initialize(self, module_settings): self.enabled = True + def get_global_environments(self): + return { + "AYON_UNREAL_ROOT": UNREAL_ROOT_DIR, + } + def add_implementation_envs(self, env, app): """Modify environments to contain all required for implementation.""" # Set AYON_UNREAL_PLUGIN required for Unreal implementation @@ -54,7 +59,8 @@ def add_implementation_envs(self, env, app): # Set default environments if are not set via settings defaults = { - "OPENPYPE_LOG_NO_COLORS": "True" + "OPENPYPE_LOG_NO_COLORS": "True", + "UE_PYTHONPATH": os.environ.get("PYTHONPATH", ""), } for key, value in defaults.items(): if not env.get(key): diff --git a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py index 760d55077a4..a635bd4cab9 100644 --- a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py +++ b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py @@ -2,22 +2,25 @@ """Hook to launch Unreal and prepare projects.""" import os import copy +import shutil +import tempfile from pathlib import Path -from openpype.widgets.splash_screen import SplashScreen + from qtpy import QtCore -from openpype.hosts.unreal.ue_workers import ( - UEProjectGenerationWorker, - UEPluginInstallWorker -) from openpype import resources -from openpype.lib import ( +from openpype.lib.applications import ( PreLaunchHook, ApplicationLaunchFailed, - ApplicationNotFound, + LaunchTypes, ) from openpype.pipeline.workfile import get_workfile_template_key import openpype.hosts.unreal.lib as unreal_lib +from openpype.hosts.unreal.ue_workers import ( + UEProjectGenerationWorker, + UEPluginInstallWorker +) +from openpype.hosts.unreal.ui import SplashScreen class UnrealPrelaunchHook(PreLaunchHook): @@ -29,6 +32,8 @@ class UnrealPrelaunchHook(PreLaunchHook): shell script. """ + app_groups = {"unreal"} + launch_types = {LaunchTypes.local} def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @@ -187,32 +192,58 @@ def execute(self): project_path.mkdir(parents=True, exist_ok=True) - # Set "AYON_UNREAL_PLUGIN" to current process environment for - # execution of `create_unreal_project` - - if self.launch_context.env.get("AYON_UNREAL_PLUGIN"): - self.log.info(( - f"{self.signature} using Ayon plugin from " - f"{self.launch_context.env.get('AYON_UNREAL_PLUGIN')}" - )) - env_key = "AYON_UNREAL_PLUGIN" - if self.launch_context.env.get(env_key): - os.environ[env_key] = self.launch_context.env[env_key] - # engine_path points to the specific Unreal Engine root # so, we are going up from the executable itself 3 levels. engine_path: Path = Path(executable).parents[3] - if not unreal_lib.check_plugin_existence(engine_path): - self.exec_plugin_install(engine_path) + # Check if new env variable exists, and if it does, if the path + # actually contains the plugin. If not, install it. 
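# --- Illustrative sketch (helper name and return values are made up) -------
# The branch below prefers a plugin that was already built and exposed via
# the AYON_BUILT_UNREAL_PLUGIN environment variable, and only falls back to
# building from AYON_UNREAL_PLUGIN sources when no usable build is found.
# Condensed into one hypothetical helper:
from pathlib import Path


def resolve_plugin_source(env):
    built = env.get("AYON_BUILT_UNREAL_PLUGIN")
    if built and (Path(built) / "Binaries").is_dir():
        # A finished build exists - it can be copied into the engine as-is.
        return "copy-built", Path(built)
    source = env.get("AYON_UNREAL_PLUGIN")
    if source:
        # Only sources are available - the plugin must be built first.
        return "build-and-install", Path(source)
    return "missing", None
# ----------------------------------------------------------------------------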
+ + built_plugin_path = self.launch_context.env.get( + "AYON_BUILT_UNREAL_PLUGIN", None) + + if unreal_lib.check_built_plugin_existance(built_plugin_path): + self.log.info(( + f"{self.signature} using existing built Ayon plugin from " + f"{built_plugin_path}" + )) + unreal_lib.copy_built_plugin(engine_path, Path(built_plugin_path)) + else: + # Set "AYON_UNREAL_PLUGIN" to current process environment for + # execution of `create_unreal_project` + env_key = "AYON_UNREAL_PLUGIN" + if self.launch_context.env.get(env_key): + self.log.info(( + f"{self.signature} using Ayon plugin from " + f"{self.launch_context.env.get(env_key)}" + )) + if self.launch_context.env.get(env_key): + os.environ[env_key] = self.launch_context.env[env_key] + + if not unreal_lib.check_plugin_existence(engine_path): + self.exec_plugin_install(engine_path) project_file = project_path / unreal_project_filename if not project_file.is_file(): - self.exec_ue_project_gen(engine_version, - unreal_project_name, - engine_path, - project_path) + with tempfile.TemporaryDirectory() as temp_dir: + self.exec_ue_project_gen(engine_version, + unreal_project_name, + engine_path, + Path(temp_dir)) + try: + self.log.info(( + f"Moving from {temp_dir} to " + f"{project_path.as_posix()}" + )) + shutil.copytree( + temp_dir, project_path, dirs_exist_ok=True) + + except shutil.Error as e: + raise ApplicationLaunchFailed(( + f"{self.signature} Cannot copy directory {temp_dir} " + f"to {project_path.as_posix()} - {e}" + )) from e self.launch_context.env["AYON_UNREAL_VERSION"] = engine_version # Append project file to launch arguments diff --git a/openpype/hosts/unreal/integration b/openpype/hosts/unreal/integration index ff15c700771..63266607ceb 160000 --- a/openpype/hosts/unreal/integration +++ b/openpype/hosts/unreal/integration @@ -1 +1 @@ -Subproject commit ff15c700771e719cc5f3d561ac5d6f7590623986 +Subproject commit 63266607ceb972a61484f046634ddfc9eb0b5757 diff --git a/openpype/hosts/unreal/lib.py b/openpype/hosts/unreal/lib.py index 67e7891344d..6d544f65b2d 100644 --- a/openpype/hosts/unreal/lib.py +++ b/openpype/hosts/unreal/lib.py @@ -369,11 +369,11 @@ def get_compatible_integration( def get_path_to_cmdlet_project(ue_version: str) -> Path: cmd_project = Path( - os.path.abspath(os.getenv("OPENPYPE_ROOT"))) + os.path.dirname(os.path.abspath(__file__))) # For now, only tested on Windows (For Linux and Mac # it has to be implemented) - cmd_project /= f"openpype/hosts/unreal/integration/UE_{ue_version}" + cmd_project /= f"integration/UE_{ue_version}" # if the integration doesn't exist for current engine version # try to find the closest to it. 
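# --- Illustrative sketch (assumed logic, not the real implementation) ------
# "Try to find the closest to it" above can be read as: when the integration
# folder for the exact engine version is missing, pick the available UE_X.Y
# folder whose version is numerically nearest. The helper below is an
# assumption for illustration only; the actual lookup lives in
# get_compatible_integration().
from pathlib import Path


def closest_integration(integration_root, ue_version):
    wanted = tuple(int(part) for part in ue_version.split("."))
    candidates = []
    for child in integration_root.glob("UE_*"):
        try:
            found = tuple(int(part) for part in child.name[3:].split("."))
        except ValueError:
            continue
        # Compare (major delta, minor delta) so same-major folders win.
        distance = tuple(abs(a - b) for a, b in zip(wanted, found))
        candidates.append((distance, child.name, child))
    if not candidates:
        return None
    return min(candidates)[-1]
# ----------------------------------------------------------------------------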
@@ -429,6 +429,36 @@ def get_build_id(engine_path: Path, ue_version: str) -> str: return "{" + loaded_modules.get("BuildId") + "}" +def check_built_plugin_existance(plugin_path) -> bool: + if not plugin_path: + return False + + integration_plugin_path = Path(plugin_path) + + if not integration_plugin_path.is_dir(): + raise RuntimeError("Path to the integration plugin is null!") + + if not (integration_plugin_path / "Binaries").is_dir() \ + or not (integration_plugin_path / "Intermediate").is_dir(): + return False + + return True + + +def copy_built_plugin(engine_path: Path, plugin_path: Path) -> None: + ayon_plugin_path: Path = engine_path / "Engine/Plugins/Marketplace/Ayon" + + if not ayon_plugin_path.is_dir(): + ayon_plugin_path.mkdir(parents=True, exist_ok=True) + + engine_plugin_config_path: Path = ayon_plugin_path / "Config" + engine_plugin_config_path.mkdir(exist_ok=True) + + dir_util._path_created = {} + + dir_util.copy_tree(plugin_path.as_posix(), ayon_plugin_path.as_posix()) + + def check_plugin_existence(engine_path: Path, env: dict = None) -> bool: env = env or os.environ integration_plugin_path: Path = Path(env.get("AYON_UNREAL_PLUGIN", "")) diff --git a/openpype/hosts/unreal/plugins/load/load_alembic_animation.py b/openpype/hosts/unreal/plugins/load/load_alembic_animation.py index cb60197a4c2..1d60b63f9ac 100644 --- a/openpype/hosts/unreal/plugins/load/load_alembic_animation.py +++ b/openpype/hosts/unreal/plugins/load/load_alembic_animation.py @@ -76,11 +76,16 @@ def load(self, context, name, namespace, data): asset_name = "{}_{}".format(asset, name) else: asset_name = "{}".format(name) - version = context.get('version').get('name') + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" + else: + name_version = f"{name}_v{version.get('name'):03d}" tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name}_v{version:03d}", suffix="") + f"{root}/{asset}/{name_version}", suffix="") container_name += suffix diff --git a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py index 0b0030ff774..9285602b646 100644 --- a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py +++ b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py @@ -78,11 +78,16 @@ def load(self, context, name, namespace, data): asset_name = "{}_{}".format(asset, name) else: asset_name = "{}".format(name) - version = context.get('version').get('name') + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" + else: + name_version = f"{name}_v{version.get('name'):03d}" tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name}_v{version:03d}", suffix="") + f"{root}/{asset}/{name_version}", suffix="") container_name += suffix diff --git a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py index 09cd37b9db5..9aa0e4d1a8a 100644 --- a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py +++ b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py @@ -52,11 +52,16 @@ def load(self, context, name, namespace, options): 
asset_name = "{}_{}".format(asset, name) else: asset_name = "{}".format(name) - version = context.get('version').get('name') + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" + else: + name_version = f"{name}_v{version.get('name'):03d}" tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name}_v{version:03d}", suffix="") + f"{root}/{asset}/{name_version}", suffix="") container_name += suffix diff --git a/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py b/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py index 98e6d962b15..bb13692f9eb 100644 --- a/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py +++ b/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py @@ -79,11 +79,13 @@ def load(self, context, name, namespace, options): root = "/Game/Ayon/Assets" asset = context.get('asset').get('name') suffix = "_CON" - if asset: - asset_name = "{}_{}".format(asset, name) + asset_name = f"{asset}_{name}" if asset else f"{name}" + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" else: - asset_name = "{}".format(name) - version = context.get('version').get('name') + name_version = f"{name}_v{version.get('name'):03d}" default_conversion = False if options.get("default_conversion"): @@ -91,7 +93,7 @@ def load(self, context, name, namespace, options): tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name}_v{version:03d}", suffix="") + f"{root}/{asset}/{name_version}", suffix="") container_name += suffix diff --git a/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py b/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py index fa26e252f52..ffc68d83755 100644 --- a/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py +++ b/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py @@ -78,10 +78,16 @@ def load(self, context, name, namespace, options): asset_name = "{}_{}".format(asset, name) else: asset_name = "{}".format(name) + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" + else: + name_version = f"{name}_v{version.get('name'):03d}" tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name}", suffix="" + f"{root}/{asset}/{name_version}", suffix="" ) container_name += suffix diff --git a/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py b/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py index 76bb25fac39..96485d5a2da 100644 --- a/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py +++ b/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py @@ -1,4 +1,6 @@ import clique +import os +import re import pyblish.api @@ -21,7 +23,19 @@ def process(self, instance): representations = instance.data.get("representations") for repr in representations: data = instance.data.get("assetEntity", {}).get("data", {}) - patterns = [clique.PATTERNS["frames"]] + repr_files = repr["files"] + if isinstance(repr_files, str): + continue + + ext = 
repr.get("ext") + if not ext: + _, ext = os.path.splitext(repr_files[0]) + elif not ext.startswith("."): + ext = ".{}".format(ext) + pattern = r"\D?(?P(?P0*)\d+){}$".format( + re.escape(ext)) + patterns = [pattern] + collections, remainder = clique.assemble( repr["files"], minimum_items=1, patterns=patterns) @@ -30,6 +44,10 @@ def process(self, instance): collection = collections[0] frames = list(collection.indexes) + if instance.data.get("slate"): + # Slate is not part of the frame range + frames = frames[1:] + current_range = (frames[0], frames[-1]) required_range = (data["clipIn"], data["clipOut"]) diff --git a/openpype/hosts/unreal/ue_workers.py b/openpype/hosts/unreal/ue_workers.py index 3a0f9769572..386ad877d78 100644 --- a/openpype/hosts/unreal/ue_workers.py +++ b/openpype/hosts/unreal/ue_workers.py @@ -40,17 +40,34 @@ def retrieve_exit_code(line: str): return None -class UEProjectGenerationWorker(QtCore.QObject): +class UEWorker(QtCore.QObject): finished = QtCore.Signal(str) - failed = QtCore.Signal(str) + failed = QtCore.Signal(str, int) progress = QtCore.Signal(int) log = QtCore.Signal(str) + + engine_path: Path = None + env = None + + def execute(self): + raise NotImplementedError("Please implement this method!") + + def run(self): + try: + self.execute() + except Exception as e: + import traceback + self.log.emit(str(e)) + self.log.emit(traceback.format_exc()) + self.failed.emit(str(e), 1) + raise e + + +class UEProjectGenerationWorker(UEWorker): stage_begin = QtCore.Signal(str) ue_version: str = None project_name: str = None - env = None - engine_path: Path = None project_dir: Path = None dev_mode = False @@ -87,7 +104,7 @@ def setup(self, ue_version: str, self.project_name = unreal_project_name self.engine_path = engine_path - def run(self): + def execute(self): # engine_path should be the location of UE_X.X folder ue_editor_exe = ue_lib.get_editor_exe_path(self.engine_path, @@ -298,15 +315,8 @@ def run(self): self.finished.emit("Project successfully built!") -class UEPluginInstallWorker(QtCore.QObject): - finished = QtCore.Signal(str) +class UEPluginInstallWorker(UEWorker): installing = QtCore.Signal(str) - failed = QtCore.Signal(str, int) - progress = QtCore.Signal(int) - log = QtCore.Signal(str) - - engine_path: Path = None - env = None def setup(self, engine_path: Path, env: dict = None, ): self.engine_path = engine_path @@ -374,7 +384,7 @@ def _build_and_move_plugin(self, plugin_build_path: Path): dir_util.remove_tree(temp_dir.as_posix()) - def run(self): + def execute(self): src_plugin_dir = Path(self.env.get("AYON_UNREAL_PLUGIN", "")) if not os.path.isdir(src_plugin_dir): diff --git a/openpype/hosts/unreal/ui/__init__.py b/openpype/hosts/unreal/ui/__init__.py new file mode 100644 index 00000000000..606b21ef192 --- /dev/null +++ b/openpype/hosts/unreal/ui/__init__.py @@ -0,0 +1,5 @@ +from .splash_screen import SplashScreen + +__all__ = ( + "SplashScreen", +) diff --git a/openpype/widgets/splash_screen.py b/openpype/hosts/unreal/ui/splash_screen.py similarity index 98% rename from openpype/widgets/splash_screen.py rename to openpype/hosts/unreal/ui/splash_screen.py index 7c1ff72ecd3..7ac77821d9c 100644 --- a/openpype/widgets/splash_screen.py +++ b/openpype/hosts/unreal/ui/splash_screen.py @@ -1,6 +1,5 @@ from qtpy import QtWidgets, QtCore, QtGui from openpype import style, resources -from igniter.nice_progress_bar import NiceProgressBar class SplashScreen(QtWidgets.QDialog): @@ -143,7 +142,7 @@ def init_ui(self): button_layout.addWidget(self.close_btn) # Progress Bar - 
self.progress_bar = NiceProgressBar() + self.progress_bar = QtWidgets.QProgressBar() self.progress_bar.setValue(0) self.progress_bar.setAlignment(QtCore.Qt.AlignTop) diff --git a/openpype/hosts/webpublisher/README.md b/openpype/hosts/webpublisher/README.md index 0826e444902..07a957fa7f9 100644 --- a/openpype/hosts/webpublisher/README.md +++ b/openpype/hosts/webpublisher/README.md @@ -3,4 +3,4 @@ Webpublisher Plugins meant for processing of Webpublisher. -Gets triggered by calling openpype.cli.remotepublish with appropriate arguments. \ No newline at end of file +Gets triggered by calling `openpype_console modules webpublisher publish` with appropriate arguments. diff --git a/openpype/hosts/webpublisher/addon.py b/openpype/hosts/webpublisher/addon.py index eb7fced2e63..4438775b033 100644 --- a/openpype/hosts/webpublisher/addon.py +++ b/openpype/hosts/webpublisher/addon.py @@ -20,11 +20,10 @@ def headless_publish(self, log, close_plugin_name=None, is_test=False): Close Python process at the end. """ - from openpype.pipeline.publish.lib import remote_publish - from .lib import get_webpublish_conn, publish_and_log + from .lib import get_webpublish_conn, publish_and_log, publish_in_test if is_test: - remote_publish(log, close_plugin_name) + publish_in_test(log, close_plugin_name) return dbcon = get_webpublish_conn() diff --git a/openpype/hosts/webpublisher/lib.py b/openpype/hosts/webpublisher/lib.py index b207f85b46e..ecd28d24321 100644 --- a/openpype/hosts/webpublisher/lib.py +++ b/openpype/hosts/webpublisher/lib.py @@ -12,7 +12,6 @@ from openpype.settings import get_project_settings from openpype.lib import Logger from openpype.lib.profiles_filtering import filter_profiles -from openpype.pipeline.publish.lib import find_close_plugin ERROR_STATUS = "error" IN_PROGRESS_STATUS = "in_progress" @@ -68,6 +67,46 @@ def get_batch_asset_task_info(ctx): return asset, task_name, task_type +def find_close_plugin(close_plugin_name, log): + if close_plugin_name: + plugins = pyblish.api.discover() + for plugin in plugins: + if plugin.__name__ == close_plugin_name: + return plugin + + log.debug("Close plugin not found, app might not close.") + + +def publish_in_test(log, close_plugin_name=None): + """Loops through all plugins, logs to console. Used for tests. + + Args: + log (Logger) + close_plugin_name (Optional[str]): Name of plugin with responsibility + to close application. + """ + + # Error exit as soon as any error occurs. + error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}" + + close_plugin = find_close_plugin(close_plugin_name, log) + + for result in pyblish.util.publish_iter(): + for record in result["records"]: + # Why do we log again? pyblish logger is logging to stdout... + log.info("{}: {}".format(result["plugin"].label, record.msg)) + + if not result["error"]: + continue + + # QUESTION We don't break on error? + error_message = error_format.format(**result) + log.error(error_message) + if close_plugin: # close host app explicitly after error + context = pyblish.api.Context() + close_plugin().process(context) + + def get_webpublish_conn(): """Get connection to OP 'webpublishes' collection.""" mongo_client = OpenPypeMongoConnection.get_mongo_client() @@ -231,7 +270,7 @@ def find_variant_key(application_manager, host): def get_task_data(batch_dir): """Return parsed data from first task manifest.json - Used for `remotepublishfromapp` command where batch contains only + Used for `publishfromapp` command where batch contains only single task with publishable workfile. 
Returns: diff --git a/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py b/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py index 79ed499a20b..1416255083e 100644 --- a/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py +++ b/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py @@ -25,6 +25,7 @@ ) from openpype.pipeline.create import get_subset_name from openpype_modules.webpublisher.lib import parse_json +from openpype.pipeline.version_start import get_versioning_start class CollectPublishedFiles(pyblish.api.ContextPlugin): @@ -103,7 +104,13 @@ def process(self, context): project_settings=context.data["project_settings"] ) version = self._get_next_version( - project_name, asset_doc, subset_name + project_name, + asset_doc, + task_name, + task_type, + family, + subset_name, + context ) next_versions.append(version) @@ -141,8 +148,9 @@ def process(self, context): try: no_of_frames = self._get_number_of_frames(file_url) if no_of_frames: - frame_end = int(frame_start) + \ - math.ceil(no_of_frames) + frame_end = ( + int(frame_start) + math.ceil(no_of_frames) + ) frame_end = math.ceil(frame_end) - 1 instance.data["frameEnd"] = frame_end self.log.debug("frameEnd:: {}".format( @@ -270,7 +278,16 @@ def _get_family(self, settings, task_type, is_sequence, extension): config["families"], config["tags"]) - def _get_next_version(self, project_name, asset_doc, subset_name): + def _get_next_version( + self, + project_name, + asset_doc, + task_name, + task_type, + family, + subset_name, + context + ): """Returns version number or 1 for 'asset' and 'subset'""" version_doc = get_last_version_by_subset_name( @@ -279,9 +296,19 @@ def _get_next_version(self, project_name, asset_doc, subset_name): asset_doc["_id"], fields=["name"] ) - version = 1 if version_doc: - version += int(version_doc["name"]) + version = int(version_doc["name"]) + 1 + else: + version = get_versioning_start( + project_name, + "webpublisher", + task_name=task_name, + task_type=task_type, + family=family, + subset=subset_name, + project_settings=context.data["project_settings"] + ) + return version def _get_number_of_frames(self, file_url): diff --git a/openpype/hosts/webpublisher/publish_functions.py b/openpype/hosts/webpublisher/publish_functions.py index 83f53ced68b..f5dc88f54d7 100644 --- a/openpype/hosts/webpublisher/publish_functions.py +++ b/openpype/hosts/webpublisher/publish_functions.py @@ -6,7 +6,7 @@ from openpype.lib import Logger from openpype.lib.applications import ( ApplicationManager, - get_app_environments_for_context, + LaunchTypes, ) from openpype.pipeline import install_host from openpype.hosts.webpublisher.api import WebpublisherHost @@ -34,7 +34,7 @@ def cli_publish(project_name, batch_path, user_email, targets): Args: project_name (str): project to publish (only single context is - expected per call of remotepublish + expected per call of 'publish') batch_path (str): Path batch folder. Contains subfolders with resources (workfile, another subfolder 'renders' etc.) 
user_email (string): email address for webpublisher - used to @@ -49,8 +49,8 @@ def cli_publish(project_name, batch_path, user_email, targets): if not batch_path: raise RuntimeError("No publish paths specified") - log = Logger.get_logger("remotepublish") - log.info("remotepublish command") + log = Logger.get_logger("Webpublish") + log.info("Webpublish command") # Register target and host webpublisher_host = WebpublisherHost() @@ -107,7 +107,7 @@ def cli_publish_from_app( Args: project_name (str): project to publish (only single context is - expected per call of remotepublish + expected per call of publish batch_path (str): Path batch folder. Contains subfolders with resources (workfile, another subfolder 'renders' etc.) host_name (str): 'photoshop' @@ -117,9 +117,9 @@ def cli_publish_from_app( (to choose validator for example) """ - log = Logger.get_logger("RemotePublishFromApp") + log = Logger.get_logger("PublishFromApp") - log.info("remotepublishphotoshop command") + log.info("Webpublish photoshop command") task_data = get_task_data(batch_path) @@ -156,22 +156,31 @@ def cli_publish_from_app( found_variant_key = find_variant_key(application_manager, host_name) app_name = "{}/{}".format(host_name, found_variant_key) + data = { + "last_workfile_path": workfile_path, + "start_last_workfile": True, + "project_name": project_name, + "asset_name": asset_name, + "task_name": task_name, + "launch_type": LaunchTypes.automated, + } + launch_context = application_manager.create_launch_context( + app_name, **data) + launch_context.run_prelaunch_hooks() + # must have for proper launch of app - env = get_app_environments_for_context( - project_name, - asset_name, - task_name, - app_name - ) + env = launch_context.env print("env:: {}".format(env)) - os.environ.update(env) - - os.environ["OPENPYPE_PUBLISH_DATA"] = batch_path + env["OPENPYPE_PUBLISH_DATA"] = batch_path # must pass identifier to update log lines for a batch - os.environ["BATCH_LOG_ID"] = str(_id) - os.environ["HEADLESS_PUBLISH"] = 'true' # to use in app lib - os.environ["USER_EMAIL"] = user_email + env["BATCH_LOG_ID"] = str(_id) + env["HEADLESS_PUBLISH"] = 'true' # to use in app lib + env["USER_EMAIL"] = user_email + + os.environ.update(env) + # Why is this here? Registered host in this process does not affect + # registered host in launched process. 
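# --- Illustrative sketch of the new launch-context flow used above ---------
# Build a context, run prelaunch hooks to prepare environment variables,
# then either read the prepared env or actually launch the application.
# The app name and context values below are examples only.
from openpype.lib.applications import ApplicationManager, LaunchTypes

manager = ApplicationManager()
launch_ctx = manager.create_launch_context(
    "photoshop/2023",  # example application variant
    project_name="demo_project",
    asset_name="sh010",
    task_name="comp",
    launch_type=LaunchTypes.automated,
)
launch_ctx.run_prelaunch_hooks()  # fills launch_ctx.env without launching
prepared_env = launch_ctx.env
# ...or start the application with the same, already prepared context:
# manager.launch_with_context(launch_ctx)
# ----------------------------------------------------------------------------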
pyblish.api.register_host(host_name) if targets: if isinstance(targets, str): @@ -184,15 +193,7 @@ def cli_publish_from_app( os.environ["PYBLISH_TARGETS"] = os.pathsep.join( set(current_targets)) - data = { - "last_workfile_path": workfile_path, - "start_last_workfile": True, - "project_name": project_name, - "asset_name": asset_name, - "task_name": task_name - } - - launched_app = application_manager.launch(app_name, **data) + launched_app = application_manager.launch_with_context(launch_context) timeout = get_timeout(project_name, host_name, task_type) diff --git a/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py b/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py index 9fe4b4d3c18..20d585e9068 100644 --- a/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py +++ b/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py @@ -216,7 +216,7 @@ async def post(self, request) -> Response: "extensions": [".tvpp"], "command": "publish", "arguments": { - "targets": ["tvpaint_worker"] + "targets": ["tvpaint_worker", "webpublish"] }, "add_to_queue": False }, @@ -230,7 +230,7 @@ async def post(self, request) -> Response: # Make sure targets are set to None for cases that default # would change # - targets argument is not used in 'publishfromapp' - "targets": ["remotepublish"] + "targets": ["automated", "webpublish"] }, # does publish need to be handled by a queue, eg. only # single process running concurrently? @@ -247,7 +247,7 @@ async def post(self, request) -> Response: "project": content["project_name"], "user": content["user"], - "targets": ["filespublish"] + "targets": ["filespublish", "webpublish"] } add_to_queue = False @@ -280,13 +280,14 @@ async def post(self, request) -> Response: for key, value in add_args.items(): # Skip key values where value is None - if value is not None: - args.append("--{}".format(key)) - # Extend list into arguments (targets can be a list) - if isinstance(value, (tuple, list)): - args.extend(value) - else: - args.append(value) + if value is None: + continue + arg_key = "--{}".format(key) + if not isinstance(value, (tuple, list)): + value = [value] + + for item in value: + args += [arg_key, item] log.info("args:: {}".format(args)) if add_to_queue: diff --git a/openpype/hosts/webpublisher/webserver_service/webserver.py b/openpype/hosts/webpublisher/webserver_service/webserver.py index 093b53d9d33..d7c2ea01b93 100644 --- a/openpype/hosts/webpublisher/webserver_service/webserver.py +++ b/openpype/hosts/webpublisher/webserver_service/webserver.py @@ -45,7 +45,7 @@ def run_webserver(executable, upload_dir, host=None, port=None): server_manager = webserver_module.create_new_server_manager(port, host) webserver_url = server_manager.url - # queue for remotepublishfromapp tasks + # queue for publishfromapp tasks studio_task_queue = collections.deque() resource = RestApiResource(server_manager, diff --git a/openpype/lib/__init__.py b/openpype/lib/__init__.py index 06de486f2e1..f1eb564e5e5 100644 --- a/openpype/lib/__init__.py +++ b/openpype/lib/__init__.py @@ -5,11 +5,11 @@ import sys import os import site +from openpype import PACKAGE_DIR # Add Python version specific vendor folder python_version_dir = os.path.join( - os.getenv("OPENPYPE_REPOS_ROOT", ""), - "openpype", "vendor", "python", "python_{}".format(sys.version[0]) + PACKAGE_DIR, "vendor", "python", "python_{}".format(sys.version[0]) ) # Prepend path in sys paths sys.path.insert(0, python_version_dir) @@ -22,11 +22,14 @@ ) from .vendor_bin_utils import ( + 
ToolNotFoundError, find_executable, get_vendor_bin_path, get_oiio_tools_path, + get_oiio_tool_args, get_ffmpeg_tool_path, - is_oiio_supported + get_ffmpeg_tool_args, + is_oiio_supported, ) from .attribute_definitions import ( @@ -52,12 +55,13 @@ from .terminal import Terminal from .execute import ( + get_ayon_launcher_args, get_openpype_execute_args, - get_pype_execute_args, get_linux_launcher_args, execute, run_subprocess, run_detached_process, + run_ayon_launcher_process, run_openpype_process, clean_envs_for_openpype_process, path_to_subprocess_arg, @@ -65,7 +69,6 @@ ) from .log import ( Logger, - PypeLogger, ) from .path_templates import ( @@ -77,12 +80,6 @@ FormatObject, ) -from .mongo import ( - get_default_components, - validate_mongo_connection, - OpenPypeMongoConnection -) - from .dateutils import ( get_datetime_data, get_timestamp, @@ -115,25 +112,6 @@ convert_ffprobe_fps_value, convert_ffprobe_fps_to_float, ) -from .avalon_context import ( - CURRENT_DOC_SCHEMAS, - create_project, - - get_workfile_template_key, - get_workfile_template_key_from_context, - get_last_workfile_with_version, - get_last_workfile, - - BuildWorkfile, - - get_creator_by_name, - - get_custom_workfile_template, - - get_custom_workfile_template_by_context, - get_custom_workfile_template_by_string_context, - get_custom_workfile_template -) from .local_settings import ( IniSettingRegistry, @@ -163,9 +141,6 @@ ) from .plugin_tools import ( - TaskNotSetError, - get_subset_name, - get_subset_name_with_asset_doc, prepare_template_data, source_hash, ) @@ -177,9 +152,6 @@ version_up, get_version_from_path, get_last_version_from_path, - create_project_folders, - create_workdir_extra_folders, - get_project_basic_paths, ) from .openpype_version import ( @@ -205,13 +177,13 @@ "emit_event", "register_event_callback", - "find_executable", + "get_ayon_launcher_args", "get_openpype_execute_args", - "get_pype_execute_args", "get_linux_launcher_args", "execute", "run_subprocess", "run_detached_process", + "run_ayon_launcher_process", "run_openpype_process", "clean_envs_for_openpype_process", "path_to_subprocess_arg", @@ -220,9 +192,13 @@ "env_value_to_bool", "get_paths_from_environ", + "ToolNotFoundError", + "find_executable", "get_vendor_bin_path", "get_oiio_tools_path", + "get_oiio_tool_args", "get_ffmpeg_tool_path", + "get_ffmpeg_tool_args", "is_oiio_supported", "AbstractAttrDef", @@ -257,22 +233,6 @@ "convert_ffprobe_fps_value", "convert_ffprobe_fps_to_float", - "CURRENT_DOC_SCHEMAS", - "create_project", - - "get_workfile_template_key", - "get_workfile_template_key_from_context", - "get_last_workfile_with_version", - "get_last_workfile", - - "BuildWorkfile", - - "get_creator_by_name", - - "get_custom_workfile_template_by_context", - "get_custom_workfile_template_by_string_context", - "get_custom_workfile_template", - "IniSettingRegistry", "JSONSettingRegistry", "OpenPypeSecureRegistry", @@ -298,9 +258,7 @@ "filter_profiles", - "TaskNotSetError", - "get_subset_name", - "get_subset_name_with_asset_doc", + "prepare_template_data", "source_hash", "format_file_size", @@ -323,15 +281,6 @@ "get_formatted_current_time", "Logger", - "PypeLogger", - - "get_default_components", - "validate_mongo_connection", - "OpenPypeMongoConnection", - - "create_project_folders", - "create_workdir_extra_folders", - "get_project_basic_paths", "op_version_control_available", "get_openpype_version", diff --git a/openpype/lib/applications.py b/openpype/lib/applications.py index f47e11926ca..ff5e27c1226 100644 --- a/openpype/lib/applications.py +++ 
b/openpype/lib/applications.py @@ -11,10 +11,7 @@ import six -from openpype.client import ( - get_project, - get_asset_by_name, -) +from openpype import AYON_SERVER_ENABLED, PACKAGE_DIR from openpype.settings import ( get_system_settings, get_project_settings, @@ -46,6 +43,25 @@ } +class LaunchTypes: + """Launch types are filters for pre/post-launch hooks. + + Please use these variables instead of raw strings, in case their values change. + """ + + # Local launch - application is launched on local machine + local = "local" + # Farm render job - application is on farm + farm_render = "farm-render" + # Farm publish job - integration post-render job + farm_publish = "farm-publish" + # Remote launch - application is launched on remote machine from which + # can be started publishing + remote = "remote" + # Automated launch - application is launched with automated publishing + automated = "automated" + + def parse_environments(env_data, env_group=None, platform_name=None): """Parse environment values from settings by group and platform. @@ -482,6 +498,42 @@ def find_latest_available_variant_for_group(self, group_name): break return output + def create_launch_context(self, app_name, **data): + """Prepare launch context for application. + + Args: + app_name (str): Name of application that should be launched. + **data (Any): Any additional data. Data may be used during + preparation to store objects usable in multiple places. + + Returns: + ApplicationLaunchContext: Launch context for application. + + Raises: + ApplicationNotFound: Application was not found by entered name. + """ + + app = self.applications.get(app_name) + if not app: + raise ApplicationNotFound(app_name) + + executable = app.find_executable() + + return ApplicationLaunchContext( + app, executable, **data + ) + + def launch_with_context(self, launch_context): + """Launch application using existing launch context. + + Args: + launch_context (ApplicationLaunchContext): Prepared launch + context. + """ + + if not launch_context.executable: + raise ApplictionExecutableNotFound(launch_context.application) + return launch_context.launch() + def launch(self, app_name, **data): """Launch procedure. @@ -502,18 +554,10 @@ def launch(self, app_name, **data): failed. Exception should contain explanation message, traceback should not be needed. """ - app = self.applications.get(app_name) - if not app: - raise ApplicationNotFound(app_name) - executable = app.find_executable() - if not executable: - raise ApplictionExecutableNotFound(app) + context = self.create_launch_context(app_name, **data) + return self.launch_with_context(context) - context = ApplicationLaunchContext( - app, executable, **data - ) - return context.launch() class EnvironmentToolGroup: @@ -735,13 +779,17 @@ class LaunchHook: # Order of prelaunch hook, will be executed as last if set to None. order = None # List of host implementations, skipped if empty. - hosts = [] - # List of application groups - app_groups = [] - # List of specific application names - app_names = [] - # List of platform availability, skipped if empty. - platforms = [] + hosts = set() + # Set of application groups + app_groups = set() + # Set of specific application names + app_names = set() + # Set of platform availability + platforms = set() + # Set of launch types for which the hook is available + # - if empty then the hook is available for all launch types + # - by default has 'local', which is the most common reason for launch hooks + launch_types = {LaunchTypes.local} def __init__(self, launch_context): """Constructor of launch hook. 
@@ -789,6 +837,10 @@ def class_validation(cls, launch_context): if launch_context.app_name not in cls.app_names: return False + if cls.launch_types: + if launch_context.launch_type not in cls.launch_types: + return False + return True @property @@ -858,9 +910,9 @@ class PostLaunchHook(LaunchHook): class ApplicationLaunchContext: """Context of launching application. - Main purpose of context is to prepare launch arguments and keyword arguments - for new process. Most important part of keyword arguments preparations - are environment variables. + Main purpose of context is to prepare launch arguments and keyword + arguments for new process. Most important part of keyword arguments + preparations are environment variables. During the whole process it is possible to use `data` attribute to store objects usable in multiple places. @@ -873,14 +925,30 @@ class ApplicationLaunchContext: insert argument between `nuke.exe` and `--NukeX`. To keep them together it is better to wrap them in another list: `[["nuke.exe", "--NukeX"]]`. + Notes: + It is possible to use launch context only to prepare environment + variables. In that case `executable` may be None and the + 'run_prelaunch_hooks' method can be used to run prelaunch hooks + which prepare them. + Args: application (Application): Application definition. executable (ApplicationExecutable): Object with path to executable. + env_group (Optional[str]): Environment variable group. If not set + 'DEFAULT_ENV_SUBGROUP' is used. + launch_type (Optional[str]): Launch type. If not set 'local' is used. **data (dict): Any additional data. Data may be used during preparation to store objects usable in multiple places. """ - def __init__(self, application, executable, env_group=None, **data): + def __init__( + self, + application, + executable, + env_group=None, + launch_type=None, + **data + ): from openpype.modules import ModulesManager # Application object @@ -895,6 +963,10 @@ def __init__(self, application, executable, env_group=None, **data): self.executable = executable + if launch_type is None: + launch_type = LaunchTypes.local + self.launch_type = launch_type + if env_group is None: env_group = DEFAULT_ENV_SUBGROUP @@ -902,8 +974,11 @@ def __init__(self, application, executable, env_group=None, **data): self.data = dict(data) + launch_args = [] + if executable is not None: + launch_args = executable.as_args() # subprocess.Popen launch arguments (first argument in constructor) - self.launch_args = executable.as_args() + self.launch_args = launch_args self.launch_args.extend(application.arguments) if self.data.get("app_args"): self.launch_args.extend(self.data.pop("app_args")) @@ -945,6 +1020,7 @@ def __init__(self, application, executable, env_group=None, **data): self.postlaunch_hooks = None self.process = None + self._prelaunch_hooks_executed = False @property def env(self): @@ -1214,18 +1290,16 @@ def _run_process(self): # Return process which is already terminated return process - def launch(self): - """Collect data for new process and then create it. - - This method must not be executed more than once. + def run_prelaunch_hooks(self): + """Run prelaunch hooks. - Returns: - subprocess.Popen: Created process as Popen object. + This method will be executed only once; any future calls will skip + the processing. 
""" - if self.process is not None: - self.log.warning("Application was already launched.") - return + if self._prelaunch_hooks_executed: + self.log.warning("Prelaunch hooks were already executed.") + return # Discover launch hooks self.discover_launch_hooks() @@ -1235,6 +1309,22 @@ def launch(self): str(prelaunch_hook.__class__.__name__) )) prelaunch_hook.execute() + self._prelaunch_hooks_executed = True + + def launch(self): + """Collect data for new process and then create it. + + This method must not be executed more than once. + + Returns: + subprocess.Popen: Created process as Popen object. + """ + if self.process is not None: + self.log.warning("Application was already launched.") + return + + if not self._prelaunch_hooks_executed: + self.run_prelaunch_hooks() self.log.debug("All prelaunch hook executed. Starting new process.") @@ -1352,6 +1442,7 @@ def get_app_environments_for_context( task_name, app_name, env_group=None, + launch_type=None, env=None, modules_manager=None ): @@ -1362,54 +1453,33 @@ def get_app_environments_for_context( task_name (str): Name of task. app_name (str): Name of application that is launched and can be found by ApplicationManager. - env (dict): Initial environment variables. `os.environ` is used when - not passed. - modules_manager (ModulesManager): Initialized modules manager. + env_group (Optional[str]): Name of environment group. If not passed + default group is used. + launch_type (Optional[str]): Type for which prelaunch hooks are + executed. + env (Optional[dict[str, str]]): Initial environment variables. + `os.environ` is used when not passed. + modules_manager (Optional[ModulesManager]): Initialized modules + manager. Returns: dict: Environments for passed context and application. """ - from openpype.modules import ModulesManager - from openpype.pipeline import Anatomy - from openpype.lib.openpype_version import is_running_staging - - # Project document - project_doc = get_project(project_name) - asset_doc = get_asset_by_name(project_name, asset_name) - - if modules_manager is None: - modules_manager = ModulesManager() - - # Prepare app object which can be obtained only from ApplciationManager + # Prepare app object which can be obtained only from ApplicationManager app_manager = ApplicationManager() - app = app_manager.applications[app_name] - - # Project's anatomy - anatomy = Anatomy(project_name) - - data = EnvironmentPrepData({ - "project_name": project_name, - "asset_name": asset_name, - "task_name": task_name, - - "app": app, - - "project_doc": project_doc, - "asset_doc": asset_doc, - - "anatomy": anatomy, - - "env": env - }) - data["env"].update(anatomy.root_environments()) - if is_running_staging(): - data["env"]["OPENPYPE_IS_STAGING"] = "1" - - prepare_app_environments(data, env_group, modules_manager) - prepare_context_environments(data, env_group, modules_manager) - - return data["env"] + context = app_manager.create_launch_context( + app_name, + project_name=project_name, + asset_name=asset_name, + task_name=task_name, + env_group=env_group, + launch_type=launch_type, + env=env, + modules_manager=modules_manager, + ) + context.run_prelaunch_hooks() + return context.env def _merge_env(env, current_env): @@ -1435,10 +1505,8 @@ def _add_python_version_paths(app, env, logger, modules_manager): return # Add Python 2/3 modules - openpype_root = os.getenv("OPENPYPE_REPOS_ROOT") python_vendor_dir = os.path.join( - openpype_root, - "openpype", + PACKAGE_DIR, "vendor", "python" ) @@ -1640,11 +1708,7 @@ def prepare_context_environments(data, 
env_group=None, modules_manager=None): project_doc = data["project_doc"] asset_doc = data["asset_doc"] task_name = data["task_name"] - if ( - not project_doc - or not asset_doc - or not task_name - ): + if not project_doc: log.info( "Skipping context environments preparation." " Launch context does not contain required data." ) @@ -1657,18 +1721,16 @@ def prepare_context_environments(data, env_group=None, modules_manager=None): system_settings = get_system_settings() data["project_settings"] = project_settings data["system_settings"] = system_settings - # Apply project specific environments on current env value - apply_project_environments_value( - project_name, data["env"], project_settings, env_group - ) app = data["app"] context_env = { "AVALON_PROJECT": project_doc["name"], - "AVALON_ASSET": asset_doc["name"], - "AVALON_TASK": task_name, "AVALON_APP_NAME": app.full_name } + if asset_doc: + context_env["AVALON_ASSET"] = asset_doc["name"] + if task_name: + context_env["AVALON_TASK"] = task_name log.debug( "Context environments set:\n{}".format( @@ -1676,9 +1738,25 @@ def prepare_context_environments(data, env_group=None, modules_manager=None): ) ) data["env"].update(context_env) + + # Apply project specific environments on current env value + # - apply them once the context environments are set + apply_project_environments_value( + project_name, data["env"], project_settings, env_group + ) + if not app.is_host: return + data["env"]["AVALON_APP"] = app.host_name + + if not asset_doc or not task_name: + # QUESTION replace with log.info and skip workfile discovery? + # - technically it should be possible to launch host without context + raise ApplicationLaunchFailed( + "Host launch requires asset and task context." + ) + workdir_data = get_template_data( project_doc, asset_doc, task_name, app.host_name, system_settings ) @@ -1716,7 +1794,6 @@ def prepare_context_environments(data, env_group=None, modules_manager=None): "Couldn't create workdir because: {}".format(str(exc)) ) - data["env"]["AVALON_APP"] = app.host_name data["env"]["AVALON_WORKDIR"] = workdir _prepare_last_workfile(data, workdir, modules_manager) @@ -1950,17 +2027,28 @@ def get_non_python_host_kwargs(kwargs, allow_console=True): allow_console (bool): use False for inner Popen opening the app itself, otherwise it will open an additional console (at least for Harmony) """ + if kwargs is None: kwargs = {} if platform.system().lower() != "windows": return kwargs - executable_path = os.environ.get("OPENPYPE_EXECUTABLE") + if AYON_SERVER_ENABLED: + executable_path = os.environ.get("AYON_EXECUTABLE") + else: + executable_path = os.environ.get("OPENPYPE_EXECUTABLE") + executable_filename = "" if executable_path: executable_filename = os.path.basename(executable_path) - if "openpype_gui" in executable_filename: + + if AYON_SERVER_ENABLED: + is_gui_executable = "ayon_console" not in executable_filename + else: + is_gui_executable = "openpype_gui" in executable_filename + + if is_gui_executable: kwargs.update({ "creationflags": subprocess.CREATE_NO_WINDOW, "stdout": subprocess.DEVNULL, diff --git a/openpype/lib/avalon_context.py b/openpype/lib/avalon_context.py deleted file mode 100644 index a9ae27cb790..00000000000 --- a/openpype/lib/avalon_context.py +++ /dev/null @@ -1,654 +0,0 @@ -"""Should be used only inside of hosts.""" - -import platform -import logging -import functools -import warnings - -import six - -from openpype.client import ( - get_project, - get_asset_by_name, -) -from openpype.client.operations import ( - CURRENT_ASSET_DOC_SCHEMA, -
CURRENT_PROJECT_SCHEMA, - CURRENT_PROJECT_CONFIG_SCHEMA, -) -from .profiles_filtering import filter_profiles -from .path_templates import StringTemplate - -legacy_io = None - -log = logging.getLogger("AvalonContext") - - -# Backwards compatibility - should not be used anymore -# - Will be removed in OP 3.16.* -CURRENT_DOC_SCHEMAS = { - "project": CURRENT_PROJECT_SCHEMA, - "asset": CURRENT_ASSET_DOC_SCHEMA, - "config": CURRENT_PROJECT_CONFIG_SCHEMA -} - - -class AvalonContextDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", AvalonContextDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=AvalonContextDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.client.operations.create_project") -def create_project( - project_name, project_code, library_project=False, dbcon=None -): - """Create project using OpenPype settings. - - This project creation function is not validating project document on - creation. It is because project document is created blindly with only - minimum required information about project which is it's name, code, type - and schema. - - Entered project name must be unique and project must not exist yet. - - Args: - project_name(str): New project name. Should be unique. - project_code(str): Project's code should be unique too. - library_project(bool): Project is library project. - dbcon(AvalonMongoDB): Object of connection to MongoDB. - - Raises: - ValueError: When project name already exists in MongoDB. - - Returns: - dict: Created project document. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.client.operations import create_project - - return create_project(project_name, project_code, library_project) - - -def with_pipeline_io(func): - @functools.wraps(func) - def wrapped(*args, **kwargs): - global legacy_io - if legacy_io is None: - from openpype.pipeline import legacy_io - return func(*args, **kwargs) - return wrapped - - -@deprecated("openpype.client.get_linked_asset_ids") -def get_linked_asset_ids(asset_doc): - """Return linked asset ids for `asset_doc` from DB - - Args: - asset_doc (dict): Asset document from DB. - - Returns: - (list): MongoDB ids of input links. 
- - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.client import get_linked_asset_ids - from openpype.pipeline import legacy_io - - project_name = legacy_io.active_project() - - return get_linked_asset_ids(project_name, asset_doc=asset_doc) - - -@deprecated( - "openpype.pipeline.workfile.get_workfile_template_key_from_context") -def get_workfile_template_key_from_context( - asset_name, task_name, host_name, project_name=None, - dbcon=None, project_settings=None -): - """Helper function to get template key for workfile template. - - Do the same as `get_workfile_template_key` but returns value for "session - context". - - It is required to pass one of 'dbcon' with already set project name or - 'project_name' arguments. - - Args: - asset_name(str): Name of asset document. - task_name(str): Task name for which is template key retrieved. - Must be available on asset document under `data.tasks`. - host_name(str): Name of host implementation for which is workfile - used. - project_name(str): Project name where asset and task is. Not required - when 'dbcon' is passed. - dbcon(AvalonMongoDB): Connection to mongo with already set project - under `AVALON_PROJECT`. Not required when 'project_name' is passed. - project_settings(dict): Project settings for passed 'project_name'. - Not required at all but makes function faster. - Raises: - ValueError: When both 'dbcon' and 'project_name' were not - passed. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import ( - get_workfile_template_key_from_context - ) - - if not project_name: - if not dbcon: - raise ValueError(( - "`get_workfile_template_key_from_context` requires to pass" - " one of 'dbcon' or 'project_name' arguments." - )) - project_name = dbcon.active_project() - - return get_workfile_template_key_from_context( - asset_name, task_name, host_name, project_name, project_settings - ) - - -@deprecated( - "openpype.pipeline.workfile.get_workfile_template_key") -def get_workfile_template_key( - task_type, host_name, project_name=None, project_settings=None -): - """Workfile template key which should be used to get workfile template. - - Function is using profiles from project settings to return right template - for passet task type and host name. - - One of 'project_name' or 'project_settings' must be passed it is preferred - to pass settings if are already available. - - Args: - task_type(str): Name of task type. - host_name(str): Name of host implementation (e.g. "maya", "nuke", ...) - project_name(str): Name of project in which context should look for - settings. Not required if `project_settings` are passed. - project_settings(dict): Prepare project settings for project name. - Not needed if `project_name` is passed. - - Raises: - ValueError: When both 'project_name' and 'project_settings' were not - passed. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import get_workfile_template_key - - return get_workfile_template_key( - task_type, host_name, project_name, project_settings - ) - - -@deprecated("openpype.pipeline.context_tools.compute_session_changes") -def compute_session_changes( - session, task=None, asset=None, app=None, template_key=None -): - """Compute the changes for a Session object on asset, task or app switch - - This does *NOT* update the Session object, but returns the changes - required for a valid update of the Session. 
- - Args: - session (dict): The initial session to compute changes to. - This is required for computing the full Work Directory, as that - also depends on the values that haven't changed. - task (str, Optional): Name of task to switch to. - asset (str or dict, Optional): Name of asset to switch to. - You can also directly provide the Asset dictionary as returned - from the database to avoid an additional query. (optimization) - app (str, Optional): Name of app to switch to. - - Returns: - dict: The required changes in the Session dictionary. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline import legacy_io - from openpype.pipeline.context_tools import compute_session_changes - - if isinstance(asset, six.string_types): - project_name = legacy_io.active_project() - asset = get_asset_by_name(project_name, asset) - - return compute_session_changes( - session, - asset, - task, - template_key - ) - - -@deprecated("openpype.pipeline.context_tools.get_workdir_from_session") -def get_workdir_from_session(session=None, template_key=None): - """Calculate workdir path based on session data. - - Args: - session (Union[None, Dict[str, str]]): Session to use. If not passed - current context session is used (from legacy_io). - template_key (Union[str, None]): Precalculate template key to define - workfile template name in Anatomy. - - Returns: - str: Workdir path. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.context_tools import get_workdir_from_session - - return get_workdir_from_session(session, template_key) - - -@deprecated("openpype.pipeline.context_tools.change_current_context") -def update_current_task(task=None, asset=None, app=None, template_key=None): - """Update active Session to a new task work area. - - This updates the live Session to a different `asset`, `task` or `app`. - - Args: - task (str): The task to set. - asset (str): The asset to set. - app (str): The app to set. - - Returns: - dict: The changed key, values in the current Session. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline import legacy_io - from openpype.pipeline.context_tools import change_current_context - - project_name = legacy_io.active_project() - if isinstance(asset, six.string_types): - asset = get_asset_by_name(project_name, asset) - - return change_current_context(asset, task, template_key) - - -@deprecated("openpype.pipeline.workfile.BuildWorkfile") -def BuildWorkfile(): - """Build workfile class was moved to workfile pipeline. - - Deprecated: - Function will be removed after release version 3.16.* - """ - from openpype.pipeline.workfile import BuildWorkfile - - return BuildWorkfile() - - -@deprecated("openpype.pipeline.create.get_legacy_creator_by_name") -def get_creator_by_name(creator_name, case_sensitive=False): - """Find creator plugin by name. - - Args: - creator_name (str): Name of creator class that should be returned. - case_sensitive (bool): Match of creator plugin name is case sensitive. - Set to `False` by default. - - Returns: - Creator: Return first matching plugin or `None`. - - Deprecated: - Function will be removed after release version 3.16.* - """ - from openpype.pipeline.create import get_legacy_creator_by_name - - return get_legacy_creator_by_name(creator_name, case_sensitive) - - -def _get_task_context_data_for_anatomy( - project_doc, asset_doc, task_name, anatomy=None -): - """Prepare Task context for anatomy data. 
- - WARNING: this data structure is currently used only in workfile templates. - Key "task" is currently in rest of pipeline used as string with task - name. - - Args: - project_doc (dict): Project document with available "name" and - "data.code" keys. - asset_doc (dict): Asset document from MongoDB. - task_name (str): Name of context task. - anatomy (Anatomy): Optionally Anatomy for passed project name can be - passed as Anatomy creation may be slow. - - Returns: - dict: With Anatomy context data. - """ - - from openpype.pipeline.template_data import get_general_template_data - - if anatomy is None: - from openpype.pipeline import Anatomy - anatomy = Anatomy(project_doc["name"]) - - asset_name = asset_doc["name"] - project_task_types = anatomy["tasks"] - - # get relevant task type from asset doc - assert task_name in asset_doc["data"]["tasks"], ( - "Task name \"{}\" not found on asset \"{}\"".format( - task_name, asset_name - ) - ) - - task_type = asset_doc["data"]["tasks"][task_name].get("type") - - assert task_type, ( - "Task name \"{}\" on asset \"{}\" does not have specified task type." - ).format(asset_name, task_name) - - # get short name for task type defined in default anatomy settings - project_task_type_data = project_task_types.get(task_type) - assert project_task_type_data, ( - "Something went wrong. Default anatomy tasks are not holding" - "requested task type: `{}`".format(task_type) - ) - - data = { - "project": { - "name": project_doc["name"], - "code": project_doc["data"].get("code") - }, - "asset": asset_name, - "task": { - "name": task_name, - "type": task_type, - "short": project_task_type_data["short_name"] - } - } - - system_general_data = get_general_template_data() - data.update(system_general_data) - - return data - - -@deprecated( - "openpype.pipeline.workfile.get_custom_workfile_template_by_context") -def get_custom_workfile_template_by_context( - template_profiles, project_doc, asset_doc, task_name, anatomy=None -): - """Filter and fill workfile template profiles by passed context. - - It is expected that passed argument are already queried documents of - project and asset as parents of processing task name. - - Existence of formatted path is not validated. - - Args: - template_profiles(list): Template profiles from settings. - project_doc(dict): Project document from MongoDB. - asset_doc(dict): Asset document from MongoDB. - task_name(str): Name of task for which templates are filtered. - anatomy(Anatomy): Optionally passed anatomy object for passed project - name. - - Returns: - str: Path to template or None if none of profiles match current - context. (Existence of formatted path is not validated.) 
- - Deprecated: - Function will be removed after release version 3.16.* - """ - - if anatomy is None: - from openpype.pipeline import Anatomy - anatomy = Anatomy(project_doc["name"]) - - # get project, asset, task anatomy context data - anatomy_context_data = _get_task_context_data_for_anatomy( - project_doc, asset_doc, task_name, anatomy - ) - # add root dict - anatomy_context_data["root"] = anatomy.roots - - # get task type for the task in context - current_task_type = anatomy_context_data["task"]["type"] - - # get path from matching profile - matching_item = filter_profiles( - template_profiles, - {"task_types": current_task_type} - ) - # when path is available try to format it in case - # there are some anatomy template strings - if matching_item: - template = matching_item["path"][platform.system().lower()] - return StringTemplate.format_strict_template( - template, anatomy_context_data - ) - - return None - - -@deprecated( - "openpype.pipeline.workfile.get_custom_workfile_template_by_string_context" -) -def get_custom_workfile_template_by_string_context( - template_profiles, project_name, asset_name, task_name, - dbcon=None, anatomy=None -): - """Filter and fill workfile template profiles by passed context. - - Passed context are string representations of project, asset and task. - Function will query documents of project and asset to be able use - `get_custom_workfile_template_by_context` for rest of logic. - - Args: - template_profiles(list): Loaded workfile template profiles. - project_name(str): Project name. - asset_name(str): Asset name. - task_name(str): Task name. - dbcon(AvalonMongoDB): Optional avalon implementation of mongo - connection with context Session. - anatomy(Anatomy): Optionally prepared anatomy object for passed - project. - - Returns: - str: Path to template or None if none of profiles match current - context. (Existence of formatted path is not validated.) - - Deprecated: - Function will be removed after release version 3.16.* - """ - - project_name = None - if anatomy is not None: - project_name = anatomy.project_name - - if not project_name and dbcon is not None: - project_name = dbcon.active_project() - - if not project_name: - raise ValueError("Can't determina project") - - project_doc = get_project(project_name, fields=["name", "data.code"]) - asset_doc = get_asset_by_name( - project_name, asset_name, fields=["name", "data.tasks"]) - - return get_custom_workfile_template_by_context( - template_profiles, project_doc, asset_doc, task_name, anatomy - ) - - -@deprecated("openpype.pipeline.context_tools.get_custom_workfile_template") -def get_custom_workfile_template(template_profiles): - """Filter and fill workfile template profiles by current context. - - Current context is defined by `legacy_io.Session`. That's why this - function should be used only inside host where context is set and stable. - - Args: - template_profiles(list): Template profiles from settings. - - Returns: - str: Path to template or None if none of profiles match current - context. (Existence of formatted path is not validated.) 
- - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline import legacy_io - - return get_custom_workfile_template_by_string_context( - template_profiles, - legacy_io.Session["AVALON_PROJECT"], - legacy_io.Session["AVALON_ASSET"], - legacy_io.Session["AVALON_TASK"], - legacy_io - ) - - -@deprecated("openpype.pipeline.workfile.get_last_workfile_with_version") -def get_last_workfile_with_version( - workdir, file_template, fill_data, extensions -): - """Return last workfile version. - - Args: - workdir(str): Path to dir where workfiles are stored. - file_template(str): Template of file name. - fill_data(dict): Data for filling template. - extensions(list, tuple): All allowed file extensions of workfile. - - Returns: - tuple: Last workfile with version if there is any otherwise - returns (None, None). - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import get_last_workfile_with_version - - return get_last_workfile_with_version( - workdir, file_template, fill_data, extensions - ) - - -@deprecated("openpype.pipeline.workfile.get_last_workfile") -def get_last_workfile( - workdir, file_template, fill_data, extensions, full_path=False -): - """Return last workfile filename. - - Returns file with version 1 if there is not workfile yet. - - Args: - workdir(str): Path to dir where workfiles are stored. - file_template(str): Template of file name. - fill_data(dict): Data for filling template. - extensions(list, tuple): All allowed file extensions of workfile. - full_path(bool): Full path to file is returned if set to True. - - Returns: - str: Last or first workfile as filename of full path to filename. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import get_last_workfile - - return get_last_workfile( - workdir, file_template, fill_data, extensions, full_path - ) - - -@deprecated("openpype.client.get_linked_representation_id") -def get_linked_ids_for_representations( - project_name, repre_ids, dbcon=None, link_type=None, max_depth=0 -): - """Returns list of linked ids of particular type (if provided). - - Goes from representations to version, back to representations - Args: - project_name (str) - repre_ids (list) or (ObjectId) - dbcon (avalon.mongodb.AvalonMongoDB, optional): Avalon Mongo connection - with Session. - link_type (str): ['reference', '..] - max_depth (int): limit how many levels of recursion - - Returns: - (list) of ObjectId - linked representations - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.client import get_linked_representation_id - - if not isinstance(repre_ids, list): - repre_ids = [repre_ids] - - output = [] - for repre_id in repre_ids: - output.extend(get_linked_representation_id( - project_name, - repre_id=repre_id, - link_type=link_type, - max_depth=max_depth - )) - return output diff --git a/openpype/lib/delivery.py b/openpype/lib/delivery.py deleted file mode 100644 index efb542de753..00000000000 --- a/openpype/lib/delivery.py +++ /dev/null @@ -1,252 +0,0 @@ -"""Functions useful for delivery action or loader""" -import os -import shutil -import functools -import warnings - - -class DeliveryDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. 
- """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", DeliveryDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=DeliveryDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.lib.path_tools.collect_frames") -def collect_frames(files): - """Returns dict of source path and its frame, if from sequence - - Uses clique as most precise solution, used when anatomy template that - created files is not known. - - Assumption is that frames are separated by '.', negative frames are not - allowed. - - Args: - files(list) or (set with single value): list of source paths - - Returns: - (dict): {'/asset/subset_v001.0001.png': '0001', ....} - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from .path_tools import collect_frames - - return collect_frames(files) - - -@deprecated("openpype.lib.path_tools.format_file_size") -def sizeof_fmt(num, suffix=None): - """Returns formatted string with size in appropriate unit - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from .path_tools import format_file_size - return format_file_size(num, suffix) - - -@deprecated("openpype.pipeline.load.get_representation_path_with_anatomy") -def path_from_representation(representation, anatomy): - """Get representation path using representation document and anatomy. - - Args: - representation (Dict[str, Any]): Representation document. - anatomy (Anatomy): Project anatomy. - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.load import get_representation_path_with_anatomy - - return get_representation_path_with_anatomy(representation, anatomy) - - -@deprecated -def copy_file(src_path, dst_path): - """Hardlink file if possible(to save space), copy if not""" - from openpype.lib import create_hard_link # safer importing - - if os.path.exists(dst_path): - return - try: - create_hard_link( - src_path, - dst_path - ) - except OSError: - shutil.copyfile(src_path, dst_path) - - -@deprecated("openpype.pipeline.delivery.get_format_dict") -def get_format_dict(anatomy, location_path): - """Returns replaced root values from user provider value. - - Args: - anatomy (Anatomy) - location_path (str): user provided value - - Returns: - (dict): prepared for formatting of a template - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.delivery import get_format_dict - - return get_format_dict(anatomy, location_path) - - -@deprecated("openpype.pipeline.delivery.check_destination_path") -def check_destination_path(repre_id, - anatomy, anatomy_data, - datetime_data, template_name): - """ Try to create destination path based on 'template_name'. 
- - In the case that path cannot be filled, template contains unmatched - keys, provide error message to filter out repre later. - - Args: - anatomy (Anatomy) - anatomy_data (dict): context to fill anatomy - datetime_data (dict): values with actual date - template_name (str): to pick correct delivery template - - Returns: - (collections.defauldict): {"TYPE_OF_ERROR":"ERROR_DETAIL"} - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.delivery import check_destination_path - - return check_destination_path( - repre_id, - anatomy, - anatomy_data, - datetime_data, - template_name - ) - - -@deprecated("openpype.pipeline.delivery.deliver_single_file") -def process_single_file( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log -): - """Copy single file to calculated path based on template - - Args: - src_path(str): path of source representation file - _repre (dict): full repre, used only in process_sequence, here only - as to share same signature - anatomy (Anatomy) - template_name (string): user selected delivery template name - anatomy_data (dict): data from repre to fill anatomy with - format_dict (dict): root dictionary with names and values - report_items (collections.defaultdict): to return error messages - log (Logger): for log printing - - Returns: - (collections.defaultdict , int) - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.delivery import deliver_single_file - - return deliver_single_file( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log - ) - - -@deprecated("openpype.pipeline.delivery.deliver_sequence") -def process_sequence( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log -): - """ For Pype2(mainly - works in 3 too) where representation might not - contain files. - - Uses listing physical files (not 'files' on repre as a)might not be - present, b)might not be reliable for representation and copying them. - - TODO Should be refactored when files are sufficient to drive all - representations. - - Args: - src_path(str): path of source representation file - repre (dict): full representation - anatomy (Anatomy) - template_name (string): user selected delivery template name - anatomy_data (dict): data from repre to fill anatomy with - format_dict (dict): root dictionary with names and values - report_items (collections.defaultdict): to return error messages - log (Logger): for log printing - - Returns: - (collections.defaultdict , int) - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.delivery import deliver_sequence - - return deliver_sequence( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log - ) diff --git a/openpype/lib/events.py b/openpype/lib/events.py index dca58fcf93c..496b765a05f 100644 --- a/openpype/lib/events.py +++ b/openpype/lib/events.py @@ -3,6 +3,7 @@ import re import copy import inspect +import collections import logging import weakref from uuid import uuid4 @@ -340,8 +341,8 @@ def emit(self, topic, data, source): event.emit() return event - def emit_event(self, event): - """Emit event object. + def _process_event(self, event): + """Process event topic and trigger callbacks. Args: event (Event): Prepared event with topic and data. 
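With the rename above, `emit_event` stays the public entry point while the actual callback processing moves to the private `_process_event` helper, which the `QueuedEventSystem` introduced in the next hunk reuses for its queue loop. A minimal sketch of the flow, assuming the existing `add_callback` registration API; the topic and data are illustrative only:

```python
from openpype.lib.events import EventSystem

event_system = EventSystem()

def on_save(event):
    # 'event.data' carries whatever the emitter passed in.
    print("Saved:", event.data)

# Register a callback for a topic, then emit an event on that topic.
# 'emit' builds the Event and passes it to 'emit_event', which now
# delegates the callback processing to '_process_event'.
event_system.add_callback("workfile.save", on_save)
event_system.emit("workfile.save", {"path": "scene_v001.ma"}, "docs-example")
```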
@@ -356,6 +357,91 @@ def emit_event(self, event): + """Emit event object. + + Args: + event (Event): Prepared event with topic and data. + """ + + self._process_event(event) + + +class QueuedEventSystem(EventSystem): + """Events are automatically processed in queue. + + If a callback triggers another event, the new event is not processed + until all callbacks of the previous event are processed. + + Allows implementing a custom event processing loop by changing + 'auto_execute'. + + Note: + This probably should be default behavior of 'EventSystem'. Changing it + now could cause problems in existing code. + + Args: + auto_execute (Optional[bool]): If 'True', events are processed + automatically. A custom loop calling 'process_next_event' + must be implemented when set to 'False'. + """ + + def __init__(self, auto_execute=True): + super(QueuedEventSystem, self).__init__() + self._event_queue = collections.deque() + self._current_event = None + self._auto_execute = auto_execute + + def __len__(self): + return self.count() + + def count(self): + """Get number of events in queue. + + Returns: + int: Number of events in queue. + """ + + return len(self._event_queue) + + def process_next_event(self): + """Process next event in queue. + + Should be used only if 'auto_execute' is set to 'False'. Only a single + event is processed. + + Returns: + Union[Event, None]: Processed event. + """ + + if self._current_event is not None: + raise ValueError("An event is already in progress.") + + if not self._event_queue: + return None + event = self._event_queue.popleft() + self._current_event = event + self._process_event(event) + self._current_event = None + return event + + def emit_event(self, event): + """Emit event object. + + Args: + event (Event): Prepared event with topic and data. + """ + + if not self._auto_execute or self._current_event is not None: + self._event_queue.append(event) + return + + self._event_queue.append(event) + while self._event_queue: + event = self._event_queue.popleft() + self._current_event = event + self._process_event(event) + self._current_event = None class GlobalEventSystem: """Event system living in global scope of process. diff --git a/openpype/lib/execute.py b/openpype/lib/execute.py index 6c1425fc635..c54541a1163 100644 --- a/openpype/lib/execute.py +++ b/openpype/lib/execute.py @@ -164,12 +164,40 @@ def run_subprocess(*args, **kwargs): return full_output +def clean_envs_for_ayon_process(env=None): + """Modify environments that may affect ayon-launcher process. + + Main reason to implement this function is to pop PYTHONPATH which may be + affected by in-host environments.
""" + + if AYON_SERVER_ENABLED: + return clean_envs_for_ayon_process(env=env) + if env is None: env = os.environ @@ -181,7 +209,7 @@ def clean_envs_for_openpype_process(env=None): return env -def run_openpype_process(*args, **kwargs): +def run_ayon_launcher_process(*args, **kwargs): """Execute OpenPype process with passed arguments and wait. Wrapper for 'run_process' which prepends OpenPype executable arguments @@ -192,13 +220,52 @@ def run_openpype_process(*args, **kwargs): Example: ``` - run_detached_process("run", "") + run_ayon_process("run", "") ``` + Args: + *args (str): ayon-launcher cli arguments. + **kwargs (Any): Keyword arguments for subprocess.Popen. + + Returns: + str: Full output of subprocess concatenated stdout and stderr. + """ + + args = get_ayon_launcher_args(*args) + env = kwargs.pop("env", None) + # Keep env untouched if are passed and not empty + if not env: + # Skip envs that can affect OpenPype process + # - fill more if you find more + env = clean_envs_for_openpype_process(os.environ) + + # Only keep OpenPype version if we are running from build. + if not is_running_from_build(): + env.pop("OPENPYPE_VERSION", None) + + return run_subprocess(args, env=env, **kwargs) + + +def run_openpype_process(*args, **kwargs): + """Execute OpenPype process with passed arguments and wait. + + Wrapper for 'run_process' which prepends OpenPype executable arguments + before passed arguments and define environments if are not passed. + + Values from 'os.environ' are used for environments if are not passed. + They are cleaned using 'clean_envs_for_openpype_process' function. + + Example: + >>> run_openpype_process("version") + Args: *args (tuple): OpenPype cli arguments. - **kwargs (dict): Keyword arguments for for subprocess.Popen. + **kwargs (dict): Keyword arguments for subprocess.Popen. """ + + if AYON_SERVER_ENABLED: + return run_ayon_launcher_process(*args, **kwargs) + args = get_openpype_execute_args(*args) env = kwargs.pop("env", None) # Keep env untouched if are passed and not empty @@ -221,18 +288,18 @@ def run_detached_process(args, **kwargs): They are cleaned using 'clean_envs_for_openpype_process' function. Example: - ``` - run_detached_openpype_process("run", "") - ``` + >>> run_detached_process("run", "./path_to.py") + Args: *args (tuple): OpenPype cli arguments. - **kwargs (dict): Keyword arguments for for subprocess.Popen. + **kwargs (dict): Keyword arguments for subprocess.Popen. Returns: subprocess.Popen: Pointer to launched process but it is possible that launched process is already killed (on linux). """ + env = kwargs.pop("env", None) # Keep env untouched if are passed and not empty if not env: @@ -296,16 +363,37 @@ def path_to_subprocess_arg(path): return subprocess.list2cmdline([path]) -def get_pype_execute_args(*args): - """Backwards compatible function for 'get_openpype_execute_args'.""" - import traceback +def get_ayon_launcher_args(*args): + """Arguments to run ayon-launcher process. + + Arguments for subprocess when need to spawn new pype process. Which may be + needed when new python process for pype scripts must be executed in build + pype. + + Reasons: + Ayon-launcher started from code has different executable set to + virtual env python and must have path to script as first argument + which is not needed for built application. + + Args: + *args (str): Any arguments that will be added after executables. + + Returns: + list[str]: List of arguments to run ayon-launcher process. 
+ """ + + executable = os.environ["AYON_EXECUTABLE"] + launch_args = [executable] + + executable_filename = os.path.basename(executable) + if "python" in executable_filename.lower(): + filepath = os.path.join(os.environ["AYON_ROOT"], "start.py") + launch_args.append(filepath) + + if args: + launch_args.extend(args) - log = Logger.get_logger("get_pype_execute_args") - stack = "\n".join(traceback.format_stack()) - log.warning(( - "Using deprecated function 'get_pype_execute_args'. Called from:\n{}" - ).format(stack)) - return get_openpype_execute_args(*args) + return launch_args def get_openpype_execute_args(*args): @@ -323,17 +411,17 @@ def get_openpype_execute_args(*args): It is possible to pass any arguments that will be added after pype executables. """ + + if AYON_SERVER_ENABLED: + return get_ayon_launcher_args(*args) + executable = os.environ["OPENPYPE_EXECUTABLE"] launch_args = [executable] executable_filename = os.path.basename(executable) if "python" in executable_filename.lower(): - filename = "start.py" - if AYON_SERVER_ENABLED: - filename = "ayon_start.py" - launch_args.append( - os.path.join(os.environ["OPENPYPE_ROOT"], filename) - ) + filepath = os.path.join(os.environ["OPENPYPE_ROOT"], "start.py") + launch_args.append(filepath) if args: launch_args.extend(args) @@ -350,6 +438,9 @@ def get_linux_launcher_args(*args): It is possible that this function is used in OpenPype build which does not have yet the new executable. In that case 'None' is returned. + Todos: + Replace by script in scripts for ayon-launcher. + Args: args (iterable): List of additional arguments added after executable argument. @@ -358,19 +449,24 @@ def get_linux_launcher_args(*args): list: Executables with possible positional argument to script when called from code. """ + filename = "app_launcher" - openpype_executable = os.environ["OPENPYPE_EXECUTABLE"] + if AYON_SERVER_ENABLED: + executable = os.environ["AYON_EXECUTABLE"] + else: + executable = os.environ["OPENPYPE_EXECUTABLE"] - executable_filename = os.path.basename(openpype_executable) + executable_filename = os.path.basename(executable) if "python" in executable_filename.lower(): - script_path = os.path.join( - os.environ["OPENPYPE_ROOT"], - "{}.py".format(filename) - ) - launch_args = [openpype_executable, script_path] + if AYON_SERVER_ENABLED: + root = os.environ["AYON_ROOT"] + else: + root = os.environ["OPENPYPE_ROOT"] + script_path = os.path.join(root, "{}.py".format(filename)) + launch_args = [executable, script_path] else: new_executable = os.path.join( - os.path.dirname(openpype_executable), + os.path.dirname(executable), filename ) executable_path = find_executable(new_executable) diff --git a/openpype/lib/log.py b/openpype/lib/log.py index dc2e6615fe0..72071063ec6 100644 --- a/openpype/lib/log.py +++ b/openpype/lib/log.py @@ -492,21 +492,3 @@ def get_log_mongo_connection(cls): cls.initialize() return OpenPypeMongoConnection.get_mongo_client() - - -class PypeLogger(Logger): - """Duplicate of 'Logger'. - - Deprecated: - Class will be removed after release version 3.16.* - """ - - @classmethod - def get_logger(cls, *args, **kwargs): - logger = Logger.get_logger(*args, **kwargs) - # TODO uncomment when replaced most of places - logger.warning(( - "'openpype.lib.PypeLogger' is deprecated class." - " Please use 'openpype.lib.Logger' instead." 
- )) - return logger diff --git a/openpype/lib/mongo.py b/openpype/lib/mongo.py deleted file mode 100644 index bb2ee6016a9..00000000000 --- a/openpype/lib/mongo.py +++ /dev/null @@ -1,61 +0,0 @@ -import warnings -import functools -from openpype.client.mongo import ( - MongoEnvNotSet, - OpenPypeMongoConnection, -) - - -class MongoDeprecatedWarning(DeprecationWarning): - pass - - -def mongo_deprecated(func): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - @functools.wraps(func) - def new_func(*args, **kwargs): - warnings.simplefilter("always", MongoDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'." - " Function was moved to 'openpype.client.mongo'." - ).format(func.__name__), - category=MongoDeprecatedWarning, - stacklevel=2 - ) - return func(*args, **kwargs) - return new_func - - -@mongo_deprecated -def get_default_components(): - from openpype.client.mongo import get_default_components - - return get_default_components() - - -@mongo_deprecated -def should_add_certificate_path_to_mongo_url(mongo_url): - from openpype.client.mongo import should_add_certificate_path_to_mongo_url - - return should_add_certificate_path_to_mongo_url(mongo_url) - - -@mongo_deprecated -def validate_mongo_connection(mongo_uri): - from openpype.client.mongo import validate_mongo_connection - - return validate_mongo_connection(mongo_uri) - - -__all__ = ( - "MongoEnvNotSet", - "OpenPypeMongoConnection", - "get_default_components", - "should_add_certificate_path_to_mongo_url", - "validate_mongo_connection", -) diff --git a/openpype/lib/openpype_version.py b/openpype/lib/openpype_version.py index bdf7099f615..1c8356d5fe8 100644 --- a/openpype/lib/openpype_version.py +++ b/openpype/lib/openpype_version.py @@ -26,8 +26,25 @@ def get_openpype_version(): return openpype.version.__version__ +def get_ayon_launcher_version(): + version_filepath = os.path.join( + os.environ["AYON_ROOT"], + "version.py" + ) + if not os.path.exists(version_filepath): + return None + content = {} + with open(version_filepath, "r") as stream: + exec(stream.read(), content) + return content["__version__"] + + def get_build_version(): """OpenPype version of build.""" + + if AYON_SERVER_ENABLED: + return get_ayon_launcher_version() + # Return OpenPype version if is running from code if not is_running_from_build(): return get_openpype_version() @@ -51,7 +68,11 @@ def is_running_from_build(): Returns: bool: True if running from build. """ - executable_path = os.environ["OPENPYPE_EXECUTABLE"] + + if AYON_SERVER_ENABLED: + executable_path = os.environ["AYON_EXECUTABLE"] + else: + executable_path = os.environ["OPENPYPE_EXECUTABLE"] executable_filename = os.path.basename(executable_path) if "python" in executable_filename.lower(): return False @@ -59,6 +80,8 @@ def is_running_from_build(): def is_staging_enabled(): + if AYON_SERVER_ENABLED: + return os.getenv("AYON_USE_STAGING") == "1" return os.environ.get("OPENPYPE_USE_STAGING") == "1" diff --git a/openpype/lib/path_tools.py b/openpype/lib/path_tools.py index 0b6d0a3391a..fec6a0c47dc 100644 --- a/openpype/lib/path_tools.py +++ b/openpype/lib/path_tools.py @@ -2,59 +2,12 @@ import re import logging import platform -import functools -import warnings import clique log = logging.getLogger(__name__) -class PathToolsDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. 
- """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", PathToolsDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=PathToolsDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - def format_file_size(file_size, suffix=None): """Returns formatted string with size in appropriate unit. @@ -269,99 +222,3 @@ def get_last_version_from_path(path_dir, filter): return filtred_files[-1] return None - - -@deprecated("openpype.pipeline.project_folders.concatenate_splitted_paths") -def concatenate_splitted_paths(split_paths, anatomy): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import concatenate_splitted_paths - - return concatenate_splitted_paths(split_paths, anatomy) - - -@deprecated -def get_format_data(anatomy): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.template_data import get_project_template_data - - data = get_project_template_data(project_name=anatomy.project_name) - data["root"] = anatomy.roots - return data - - -@deprecated("openpype.pipeline.project_folders.fill_paths") -def fill_paths(path_list, anatomy): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import fill_paths - - return fill_paths(path_list, anatomy) - - -@deprecated("openpype.pipeline.project_folders.create_project_folders") -def create_project_folders(basic_paths, project_name): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import create_project_folders - - return create_project_folders(project_name, basic_paths) - - -@deprecated("openpype.pipeline.project_folders.get_project_basic_paths") -def get_project_basic_paths(project_name): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import get_project_basic_paths - - return get_project_basic_paths(project_name) - - -@deprecated("openpype.pipeline.workfile.create_workdir_extra_folders") -def create_workdir_extra_folders( - workdir, host_name, task_type, task_name, project_name, - project_settings=None -): - """Create extra folders in work directory based on context. - - Args: - workdir (str): Path to workdir where workfiles is stored. - host_name (str): Name of host implementation. - task_type (str): Type of task for which extra folders should be - created. - task_name (str): Name of task for which extra folders should be - created. - project_name (str): Name of project on which task is. - project_settings (dict): Prepared project settings. Are loaded if not - passed. 
- - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import create_workdir_extra_folders - - return create_workdir_extra_folders( - workdir, - host_name, - task_type, - task_name, - project_name, - project_settings - ) diff --git a/openpype/lib/plugin_tools.py b/openpype/lib/plugin_tools.py index 10fd3940b8a..d204fc2c8f1 100644 --- a/openpype/lib/plugin_tools.py +++ b/openpype/lib/plugin_tools.py @@ -4,157 +4,9 @@ import logging import re -import warnings -import functools - -from openpype.client import get_asset_by_id - log = logging.getLogger(__name__) -class PluginToolsDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", PluginToolsDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=PluginToolsDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.pipeline.create.TaskNotSetError") -def TaskNotSetError(*args, **kwargs): - from openpype.pipeline.create import TaskNotSetError - - return TaskNotSetError(*args, **kwargs) - - -@deprecated("openpype.pipeline.create.get_subset_name") -def get_subset_name_with_asset_doc( - family, - variant, - task_name, - asset_doc, - project_name=None, - host_name=None, - default_template=None, - dynamic_data=None -): - """Calculate subset name based on passed context and OpenPype settings. - - Subst name templates are defined in `project_settings/global/tools/creator - /subset_name_profiles` where are profiles with host name, family, task name - and task type filters. If context does not match any profile then - `DEFAULT_SUBSET_TEMPLATE` is used as default template. - - That's main reason why so many arguments are required to calculate subset - name. - - Args: - family (str): Instance family. - variant (str): In most of cases it is user input during creation. - task_name (str): Task name on which context is instance created. - asset_doc (dict): Queried asset document with it's tasks in data. - Used to get task type. - project_name (str): Name of project on which is instance created. - Important for project settings that are loaded. - host_name (str): One of filtering criteria for template profile - filters. - default_template (str): Default template if any profile does not match - passed context. Constant 'DEFAULT_SUBSET_TEMPLATE' is used if - is not passed. - dynamic_data (dict): Dynamic data specific for a creator which creates - instance. 
- """ - - from openpype.pipeline.create import get_subset_name - - return get_subset_name( - family, - variant, - task_name, - asset_doc, - project_name, - host_name, - default_template, - dynamic_data - ) - - -@deprecated -def get_subset_name( - family, - variant, - task_name, - asset_id, - project_name=None, - host_name=None, - default_template=None, - dynamic_data=None, - dbcon=None -): - """Calculate subset name using OpenPype settings. - - This variant of function expects asset id as argument. - - This is legacy function should be replaced with - `get_subset_name_with_asset_doc` where asset document is expected. - """ - - from openpype.pipeline.create import get_subset_name - - if project_name is None: - project_name = dbcon.project_name - - asset_doc = get_asset_by_id(project_name, asset_id, fields=["data.tasks"]) - - return get_subset_name( - family, - variant, - task_name, - asset_doc, - project_name, - host_name, - default_template, - dynamic_data - ) - - def prepare_template_data(fill_pairs): """ Prepares formatted data for filling template. diff --git a/openpype/lib/transcoding.py b/openpype/lib/transcoding.py index de6495900e7..2bae28786e6 100644 --- a/openpype/lib/transcoding.py +++ b/openpype/lib/transcoding.py @@ -11,8 +11,8 @@ from .execute import run_subprocess from .vendor_bin_utils import ( - get_ffmpeg_tool_path, - get_oiio_tools_path, + get_ffmpeg_tool_args, + get_oiio_tool_args, is_oiio_supported, ) @@ -83,11 +83,11 @@ def get_oiio_info_for_input(filepath, logger=None, subimages=False): Stdout should contain xml format string. """ - args = [ - get_oiio_tools_path(), + args = get_oiio_tool_args( + "oiiotool", "--info", "-v" - ] + ) if subimages: args.append("-a") @@ -486,12 +486,11 @@ def convert_for_ffmpeg( compression = "none" # Prepare subprocess arguments - oiio_cmd = [ - get_oiio_tools_path(), - + oiio_cmd = get_oiio_tool_args( + "oiiotool", # Don't add any additional attributes "--nosoftwareattrib", - ] + ) # Add input compression if available if compression: oiio_cmd.extend(["--compression", compression]) @@ -656,12 +655,11 @@ def convert_input_paths_for_ffmpeg( for input_path in input_paths: # Prepare subprocess arguments - oiio_cmd = [ - get_oiio_tools_path(), - + oiio_cmd = get_oiio_tool_args( + "oiiotool", # Don't add any additional attributes "--nosoftwareattrib", - ] + ) # Add input compression if available if compression: oiio_cmd.extend(["--compression", compression]) @@ -729,8 +727,8 @@ def get_ffprobe_data(path_to_file, logger=None): logger.info( "Getting information about input \"{}\".".format(path_to_file) ) - args = [ - get_ffmpeg_tool_path("ffprobe"), + ffprobe_args = get_ffmpeg_tool_args("ffprobe") + args = ffprobe_args + [ "-hide_banner", "-loglevel", "fatal", "-show_error", @@ -1084,13 +1082,13 @@ def convert_colorspace( if logger is None: logger = logging.getLogger(__name__) - oiio_cmd = [ - get_oiio_tools_path(), + oiio_cmd = get_oiio_tool_args( + "oiiotool", input_path, # Don't add any additional attributes "--nosoftwareattrib", "--colorconfig", config_path - ] + ) if all([target_colorspace, view, display]): raise ValueError("Colorspace and both screen and display" diff --git a/openpype/lib/usdlib.py b/openpype/lib/usdlib.py index cb96a0c1d01..c166feb3a6b 100644 --- a/openpype/lib/usdlib.py +++ b/openpype/lib/usdlib.py @@ -334,6 +334,9 @@ def get_usd_master_path(asset, subset, representation): "name": project_name, "code": project_doc.get("data", {}).get("code") }, + "folder": { + "name": asset_doc["name"], + }, "asset": asset_doc["name"], 
"subset": subset, "representation": representation, diff --git a/openpype/lib/vendor_bin_utils.py b/openpype/lib/vendor_bin_utils.py index f27c78d486c..dc8bb7435e1 100644 --- a/openpype/lib/vendor_bin_utils.py +++ b/openpype/lib/vendor_bin_utils.py @@ -3,9 +3,15 @@ import platform import subprocess +from openpype import AYON_SERVER_ENABLED + log = logging.getLogger("Vendor utils") +class ToolNotFoundError(Exception): + """Raised when tool arguments are not found.""" + + class CachedToolPaths: """Cache already used and discovered tools and their executables. @@ -252,7 +258,7 @@ def _check_args_returncode(args): return proc.returncode == 0 -def _oiio_executable_validation(filepath): +def _oiio_executable_validation(args): """Validate oiio tool executable if can be executed. Validation has 2 steps. First is using 'find_executable' to fill possible @@ -270,32 +276,63 @@ def _oiio_executable_validation(filepath): should be used. Args: - filepath (str): Path to executable. + args (Union[str, list[str]]): Arguments to launch tool or + path to tool executable. Returns: bool: Filepath is valid executable. """ - filepath = find_executable(filepath) - if not filepath: + if not args: return False - return _check_args_returncode([filepath, "--help"]) + if not isinstance(args, list): + filepath = find_executable(args) + if not filepath: + return False + args = [filepath] + return _check_args_returncode(args + ["--help"]) + + +def _get_ayon_oiio_tool_args(tool_name): + try: + # Use 'ayon-third-party' addon to get oiio arguments + from ayon_third_party import get_oiio_arguments + except Exception: + print("!!! Failed to import 'ayon_third_party' addon.") + return None + + try: + return get_oiio_arguments(tool_name) + except Exception as exc: + print("!!! Failed to get OpenImageIO args. Reason: {}".format(exc)) + return None def get_oiio_tools_path(tool="oiiotool"): - """Path to vendorized OpenImageIO tool executables. + """Path to OpenImageIO tool executables. - On Window it adds .exe extension if missing from tool argument. + On Windows it adds .exe extension if missing from tool argument. Args: - tool (string): Tool name (oiiotool, maketx, ...). + tool (string): Tool name 'oiiotool', 'maketx', etc. Default is "oiiotool". """ if CachedToolPaths.is_tool_cached(tool): return CachedToolPaths.get_executable_path(tool) + if AYON_SERVER_ENABLED: + args = _get_ayon_oiio_tool_args(tool) + if args: + if len(args) > 1: + raise ValueError( + "AYON oiio arguments consist of multiple arguments." + ) + tool_executable_path = args[0] + CachedToolPaths.cache_executable_path(tool, tool_executable_path) + return tool_executable_path + custom_paths_str = os.environ.get("OPENPYPE_OIIO_PATHS") or "" tool_executable_path = find_tool_in_custom_paths( custom_paths_str.split(os.pathsep), @@ -321,7 +358,33 @@ def get_oiio_tools_path(tool="oiiotool"): return tool_executable_path -def _ffmpeg_executable_validation(filepath): +def get_oiio_tool_args(tool_name, *extra_args): + """Arguments to launch OpenImageIO tool. + + Args: + tool_name (str): Tool name 'oiiotool', 'maketx', etc. + *extra_args (str): Extra arguments to add to after tool arguments. + + Returns: + list[str]: List of arguments. 
+ """ + + extra_args = list(extra_args) + + if AYON_SERVER_ENABLED: + args = _get_ayon_oiio_tool_args(tool_name) + if args: + return args + extra_args + + path = get_oiio_tools_path(tool_name) + if path: + return [path] + extra_args + raise ToolNotFoundError( + "OIIO '{}' tool not found.".format(tool_name) + ) + + +def _ffmpeg_executable_validation(args): """Validate ffmpeg tool executable if can be executed. Validation has 2 steps. First is using 'find_executable' to fill possible @@ -338,24 +401,45 @@ def _ffmpeg_executable_validation(filepath): It does not validate if the executable is really a ffmpeg tool. Args: - filepath (str): Path to executable. + args (Union[str, list[str]]): Arguments to launch tool or + path to tool executable. Returns: bool: Filepath is valid executable. """ - filepath = find_executable(filepath) - if not filepath: + if not args: return False - return _check_args_returncode([filepath, "-version"]) + if not isinstance(args, list): + filepath = find_executable(args) + if not filepath: + return False + args = [filepath] + return _check_args_returncode(args + ["--help"]) + + +def _get_ayon_ffmpeg_tool_args(tool_name): + try: + # Use 'ayon-third-party' addon to get ffmpeg arguments + from ayon_third_party import get_ffmpeg_arguments + + except Exception: + print("!!! Failed to import 'ayon_third_party' addon.") + return None + + try: + return get_ffmpeg_arguments(tool_name) + except Exception as exc: + print("!!! Failed to get FFmpeg args. Reason: {}".format(exc)) + return None def get_ffmpeg_tool_path(tool="ffmpeg"): """Path to vendorized FFmpeg executable. Args: - tool (string): Tool name (ffmpeg, ffprobe, ...). + tool (str): Tool name 'ffmpeg', 'ffprobe', etc. Default is "ffmpeg". Returns: @@ -365,6 +449,17 @@ def get_ffmpeg_tool_path(tool="ffmpeg"): if CachedToolPaths.is_tool_cached(tool): return CachedToolPaths.get_executable_path(tool) + if AYON_SERVER_ENABLED: + args = _get_ayon_ffmpeg_tool_args(tool) + if args is not None: + if len(args) > 1: + raise ValueError( + "AYON ffmpeg arguments consist of multiple arguments." + ) + tool_executable_path = args[0] + CachedToolPaths.cache_executable_path(tool, tool_executable_path) + return tool_executable_path + custom_paths_str = os.environ.get("OPENPYPE_FFMPEG_PATHS") or "" tool_executable_path = find_tool_in_custom_paths( custom_paths_str.split(os.pathsep), @@ -390,19 +485,44 @@ def get_ffmpeg_tool_path(tool="ffmpeg"): return tool_executable_path +def get_ffmpeg_tool_args(tool_name, *extra_args): + """Arguments to launch FFmpeg tool. + + Args: + tool_name (str): Tool name 'ffmpeg', 'ffprobe', exc. + *extra_args (str): Extra arguments to add to after tool arguments. + + Returns: + list[str]: List of arguments. + """ + + extra_args = list(extra_args) + + if AYON_SERVER_ENABLED: + args = _get_ayon_ffmpeg_tool_args(tool_name) + if args: + return args + extra_args + + executable_path = get_ffmpeg_tool_path(tool_name) + if executable_path: + return [executable_path] + extra_args + raise ToolNotFoundError( + "FFmpeg '{}' tool not found.".format(tool_name) + ) + + def is_oiio_supported(): """Checks if oiiotool is configured for this platform. Returns: bool: OIIO tool executable is available. 
""" - loaded_path = oiio_path = get_oiio_tools_path() - if oiio_path: - oiio_path = find_executable(oiio_path) - - if not oiio_path: - log.debug("OIIOTool is not configured or not present at {}".format( - loaded_path - )) + + try: + args = get_oiio_tool_args("oiiotool") + except ToolNotFoundError: + args = None + if not args: + log.debug("OIIOTool is not configured or not present.") return False - return True + return _oiio_executable_validation(args) diff --git a/openpype/modules/base.py b/openpype/modules/base.py index 9b3637c48af..84e213288ce 100644 --- a/openpype/modules/base.py +++ b/openpype/modules/base.py @@ -373,10 +373,12 @@ def _load_ayon_addons(openpype_modules, modules_key, log): addons_info = _get_ayon_addons_information() if not addons_info: return v3_addons_to_skip - addons_dir = os.path.join( - appdirs.user_data_dir("AYON", "Ynput"), - "addons" - ) + addons_dir = os.environ.get("AYON_ADDONS_DIR") + if not addons_dir: + addons_dir = os.path.join( + appdirs.user_data_dir("AYON", "Ynput"), + "addons" + ) if not os.path.exists(addons_dir): log.warning("Addons directory does not exists. Path \"{}\"".format( addons_dir diff --git a/openpype/modules/deadline/abstract_submit_deadline.py b/openpype/modules/deadline/abstract_submit_deadline.py index 3fa427204b3..23e959d84c1 100644 --- a/openpype/modules/deadline/abstract_submit_deadline.py +++ b/openpype/modules/deadline/abstract_submit_deadline.py @@ -25,6 +25,7 @@ from openpype.pipeline.publish.lib import ( replace_with_published_scene_path ) +from openpype import AYON_SERVER_ENABLED JSONDecodeError = getattr(json.decoder, "JSONDecodeError", ValueError) @@ -397,6 +398,15 @@ def update(self, data): for key, value in data.items(): setattr(self, key, value) + def add_render_job_env_var(self): + """Check if in OP or AYON mode and use appropriate env var.""" + if AYON_SERVER_ENABLED: + self.EnvironmentKeyValue["AYON_RENDER_JOB"] = "1" + self.EnvironmentKeyValue["AYON_BUNDLE_NAME"] = ( + os.environ["AYON_BUNDLE_NAME"]) + else: + self.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + @six.add_metaclass(AbstractMetaInstancePlugin) class AbstractSubmitDeadline(pyblish.api.InstancePlugin, diff --git a/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py b/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py index 2de6073e290..8a408d7f4f0 100644 --- a/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py +++ b/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py @@ -8,6 +8,7 @@ from maya import cmds import pyblish.api +from openpype.pipeline.publish import KnownPublishError class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin): @@ -21,6 +22,8 @@ class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin): def process(self, instance): instance.data["deadlineUrl"] = self._collect_deadline_url(instance) + instance.data["deadlineUrl"] = \ + instance.data["deadlineUrl"].strip().rstrip("/") self.log.info( "Using {} for submission.".format(instance.data["deadlineUrl"])) @@ -79,13 +82,14 @@ def _collect_deadline_url(self, render_instance): if k in default_servers } - msg = ( - "\"{}\" server on instance is not enabled in project settings." - " Enabled project servers:\n{}".format( - instance_server, project_enabled_servers + if instance_server not in project_enabled_servers: + msg = ( + "\"{}\" server on instance is not enabled in project settings." 
+ " Enabled project servers:\n{}".format( + instance_server, project_enabled_servers + ) ) - ) - assert instance_server in project_enabled_servers, msg + raise KnownPublishError(msg) self.log.debug("Using project approved server.") return project_enabled_servers[instance_server] diff --git a/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py b/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py index 1a0d615dc3d..58721efad39 100644 --- a/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py +++ b/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py @@ -48,3 +48,6 @@ def process(self, context): context.data["defaultDeadline"] = deadline_webservice self.log.debug("Overriding from project settings with {}".format( # noqa: E501 deadline_webservice)) + + context.data["defaultDeadline"] = \ + context.data["defaultDeadline"].strip().rstrip("/") diff --git a/openpype/modules/deadline/plugins/publish/help/validate_deadline_pools.xml b/openpype/modules/deadline/plugins/publish/help/validate_deadline_pools.xml index 0e7d72910e9..aa21df37343 100644 --- a/openpype/modules/deadline/plugins/publish/help/validate_deadline_pools.xml +++ b/openpype/modules/deadline/plugins/publish/help/validate_deadline_pools.xml @@ -1,31 +1,31 @@ - Scene setting + Deadline Pools - ## Invalid Deadline pools found +## Invalid Deadline pools found - Configured pools don't match what is set in Deadline. +Configured pools don't match available pools in Deadline. - {invalid_value_str} +### How to repair? - ### How to repair? +If your instance had deadline pools set on creation, remove or +change them. - If your instance had deadline pools set on creation, remove or - change them. +In other cases inform admin to change them in Settings. - In other cases inform admin to change them in Settings. +Available deadline pools: + +{pools_str} - Available deadline pools {pools_str}. - ### __Detailed Info__ +### __Detailed Info__ - This error is shown when deadline pool is not on Deadline anymore. It - could happen in case of republish old workfile which was created with - previous deadline pools, - or someone changed pools on Deadline side, but didn't modify Openpype - Settings. +This error is shown when a configured pool is not available on Deadline. It +can happen when publishing old workfiles which were created with previous +deadline pools, or someone changed the available pools in Deadline, +but didn't modify Openpype Settings to match the changes. 
\ No newline at end of file diff --git a/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py b/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py index 83dd5b49e24..009375e87ee 100644 --- a/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py @@ -106,8 +106,8 @@ def get_job_info(self): if value: dln_job_info.EnvironmentKeyValue[key] = value - # to recognize job from PYPE for turning Event On/Off - dln_job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + dln_job_info.add_render_job_env_var() return dln_job_info diff --git a/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py b/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py index ee28612b44d..4aef914023c 100644 --- a/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py @@ -27,7 +27,7 @@ class CelactionSubmitDeadline(pyblish.api.InstancePlugin): deadline_job_delay = "00:00:08:00" def process(self, instance): - instance.data["toBeRenderedOn"] = "deadline" + context = instance.context # get default deadline webservice url from deadline module diff --git a/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py b/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py index 84fca11d9d2..16e703fc91e 100644 --- a/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py @@ -265,7 +265,7 @@ def get_job_info(self): job_info.SecondaryPool = self._instance.data.get("secondaryPool") job_info.ChunkSize = self.chunk_size batch_name = os.path.basename(self._instance.data["source"]) - if is_in_tests: + if is_in_tests(): batch_name += datetime.now().strftime("%d%m%Y%H%M%S") job_info.BatchName = batch_name job_info.Department = self.department @@ -299,8 +299,8 @@ def get_job_info(self): if value: job_info.EnvironmentKeyValue[key] = value - # to recognize job from PYPE for turning Event On/Off - job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + job_info.add_render_job_env_var() return job_info diff --git a/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py b/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py index af341ca8e8a..8f21a920be5 100644 --- a/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py @@ -105,8 +105,8 @@ def get_job_info(self): if value: job_info.EnvironmentKeyValue[key] = value - # to recognize job from PYPE for turning Event On/Off - job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + job_info.add_render_job_env_var() for i, filepath in enumerate(instance.data["files"]): dirname = os.path.dirname(filepath) @@ -141,4 +141,3 @@ def process(self, instance): # Store output dir for unified publisher (filesequence) output_dir = os.path.dirname(instance.data["files"][0]) instance.data["outputDir"] = output_dir - instance.data["toBeRenderedOn"] = "deadline" diff --git a/openpype/modules/deadline/plugins/publish/submit_max_deadline.py b/openpype/modules/deadline/plugins/publish/submit_max_deadline.py index fff7a4ced50..63c6e4a0c72 100644 --- 
a/openpype/modules/deadline/plugins/publish/submit_max_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_max_deadline.py @@ -12,7 +12,9 @@ legacy_io, OpenPypePyblishPluginMixin ) -from openpype.settings import get_project_settings +from openpype.pipeline.publish.lib import ( + replace_with_published_scene_path +) from openpype.hosts.max.api.lib import ( get_current_renderer, get_multipass_setting @@ -131,8 +133,8 @@ def get_job_info(self): continue job_info.EnvironmentKeyValue[key] = value - # to recognize job from PYPE for turning Event On/Off - job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + job_info.add_render_job_env_var() job_info.EnvironmentKeyValue["OPENPYPE_LOG_NO_COLORS"] = "1" # Add list of expected files to job @@ -174,7 +176,6 @@ def process_submission(self): first_file = next(self._iter_expected_files(files)) output_dir = os.path.dirname(first_file) instance.data["outputDir"] = output_dir - instance.data["toBeRenderedOn"] = "deadline" filename = os.path.basename(filepath) @@ -236,7 +237,10 @@ def _use_published_name(self, data, project_settings): if renderer == "Redshift_Renderer": plugin_data["redshift_SeparateAovFiles"] = instance.data.get( "separateAovFiles") - + if instance.data["cameras"]: + plugin_info["Camera0"] = None + plugin_info["Camera"] = instance.data["cameras"][0] + plugin_info["Camera1"] = instance.data["cameras"][0] self.log.debug("plugin data:{}".format(plugin_data)) plugin_info.update(plugin_data) @@ -247,7 +251,8 @@ def from_published_scene(self, replace_in_path=True): if instance.data["renderer"] == "Redshift_Renderer": self.log.debug("Using Redshift...published scene wont be used..") replace_in_path = False - return replace_in_path + return replace_with_published_scene_path( + instance, replace_in_path) @staticmethod def _iter_expected_files(exp): diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py index 1dfb6e0e5c0..34f3905a174 100644 --- a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py @@ -226,8 +226,8 @@ def get_job_info(self): continue job_info.EnvironmentKeyValue[key] = value - # to recognize job from PYPE for turning Event On/Off - job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + job_info.add_render_job_env_var() job_info.EnvironmentKeyValue["OPENPYPE_LOG_NO_COLORS"] = "1" # Adding file dependencies. 
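+        # Illustrative sketch of the pattern these submitter hunks
+        # converge on ("MayaBatch" is a sample plugin name):
+        #
+        #     job_info = DeadlineJobInfo(Plugin="MayaBatch")
+        #     job_info.add_render_job_env_var()
+        #
+        # In AYON mode the helper sets AYON_RENDER_JOB plus
+        # AYON_BUNDLE_NAME, otherwise OPENPYPE_RENDER_JOB, so
+        # GlobalJobPreLoad can recognize and route the render job.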
@@ -300,7 +300,6 @@ def process_submission(self): first_file = next(iter_expected_files(expected_files)) output_dir = os.path.dirname(first_file) instance.data["outputDir"] = output_dir - instance.data["toBeRenderedOn"] = "deadline" # Patch workfile (only when use_published is enabled) if self.use_published: diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py index 39120f7c8ac..0d23f44333b 100644 --- a/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py @@ -4,6 +4,7 @@ from maya import cmds +from openpype import AYON_SERVER_ENABLED from openpype.pipeline import legacy_io, PublishXmlValidationError from openpype.tests.lib import is_in_tests from openpype.lib import is_running_from_build @@ -114,11 +115,14 @@ def get_job_info(self): environment["AVALON_TASK"] = instance.context.data["task"] environment["AVALON_APP_NAME"] = os.environ.get("AVALON_APP_NAME") environment["OPENPYPE_LOG_NO_COLORS"] = "1" - environment["OPENPYPE_REMOTE_JOB"] = "1" environment["OPENPYPE_USERNAME"] = instance.context.data["user"] environment["OPENPYPE_PUBLISH_SUBSET"] = instance.data["subset"] environment["OPENPYPE_REMOTE_PUBLISH"] = "1" + if AYON_SERVER_ENABLED: + environment["AYON_REMOTE_PUBLISH"] = "1" + else: + environment["OPENPYPE_REMOTE_PUBLISH"] = "1" for key, value in environment.items(): job_info.EnvironmentKeyValue[key] = value diff --git a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py index 49002317835..ded5cd179f7 100644 --- a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py @@ -8,6 +8,8 @@ import pyblish.api import nuke + +from openpype import AYON_SERVER_ENABLED from openpype.pipeline import legacy_io from openpype.pipeline.publish import ( OpenPypePyblishPluginMixin @@ -88,7 +90,6 @@ def process(self, instance): if not instance.data.get("farm"): self.log.debug("Skipping local instance.") return - instance.data["attributeValues"] = self.get_attr_values_from_data( instance.data) @@ -96,7 +97,6 @@ def process(self, instance): instance.data["suspend_publish"] = instance.data["attributeValues"][ "suspend_publish"] - instance.data["toBeRenderedOn"] = "deadline" families = instance.data["families"] node = instance.data["transientData"]["node"] @@ -121,13 +121,10 @@ def process(self, instance): render_path = instance.data['path'] script_path = context.data["currentFile"] - for item in context: - if "workfile" in item.data["families"]: - msg = "Workfile (scene) must be published along" - assert item.data["publish"] is True, msg - - template_data = item.data.get("anatomyData") - rep = item.data.get("representations")[0].get("name") + for item_ in context: + if "workfile" in item_.data["family"]: + template_data = item_.data.get("anatomyData") + rep = item_.data.get("representations")[0].get("name") template_data["representation"] = rep template_data["ext"] = rep template_data["comment"] = None @@ -139,19 +136,24 @@ def process(self, instance): "Using published scene for render {}".format(script_path) ) - response = self.payload_submit( - instance, - script_path, - render_path, - node.name(), - submit_frame_start, - submit_frame_end - ) - # Store output dir for unified publisher (filesequence) - 
instance.data["deadlineSubmissionJob"] = response.json() - instance.data["outputDir"] = os.path.dirname( - render_path).replace("\\", "/") - instance.data["publishJobState"] = "Suspended" + # only add main rendering job if target is not frames_farm + r_job_response_json = None + if instance.data["render_target"] != "frames_farm": + r_job_response = self.payload_submit( + instance, + script_path, + render_path, + node.name(), + submit_frame_start, + submit_frame_end + ) + r_job_response_json = r_job_response.json() + instance.data["deadlineSubmissionJob"] = r_job_response_json + + # Store output dir for unified publisher (filesequence) + instance.data["outputDir"] = os.path.dirname( + render_path).replace("\\", "/") + instance.data["publishJobState"] = "Suspended" if instance.data.get("bakingNukeScripts"): for baking_script in instance.data["bakingNukeScripts"]: @@ -159,18 +161,20 @@ def process(self, instance): script_path = baking_script["bakeScriptPath"] exe_node_name = baking_script["bakeWriteNodeName"] - resp = self.payload_submit( + b_job_response = self.payload_submit( instance, script_path, render_path, exe_node_name, submit_frame_start, submit_frame_end, - response.json() + r_job_response_json, + baking_submission=True ) # Store output dir for unified publisher (filesequence) - instance.data["deadlineSubmissionJob"] = resp.json() + instance.data["deadlineSubmissionJob"] = b_job_response.json() + instance.data["publishJobState"] = "Suspended" # add to list of job Id @@ -178,7 +182,7 @@ def process(self, instance): instance.data["bakingSubmissionJobs"] = [] instance.data["bakingSubmissionJobs"].append( - resp.json()["_id"]) + b_job_response.json()["_id"]) # redefinition of families if "render" in instance.data["family"]: @@ -197,15 +201,35 @@ def payload_submit( exe_node_name, start_frame, end_frame, - response_data=None + response_data=None, + baking_submission=False, ): + """Submit payload to Deadline + + Args: + instance (pyblish.api.Instance): pyblish instance + script_path (str): path to nuke script + render_path (str): path to rendered images + exe_node_name (str): name of the node to render + start_frame (int): start frame + end_frame (int): end frame + response_data Optional[dict]: response data from + previous submission + baking_submission Optional[bool]: if it's baking submission + + Returns: + requests.Response + """ render_dir = os.path.normpath(os.path.dirname(render_path)) - batch_name = os.path.basename(script_path) - jobname = "%s - %s" % (batch_name, instance.name) + + # batch name + src_filepath = instance.context.data["currentFile"] + batch_name = os.path.basename(src_filepath) + job_name = os.path.basename(render_path) + if is_in_tests(): batch_name += datetime.now().strftime("%d%m%Y%H%M%S") - output_filename_0 = self.preview_fname(render_path) if not response_data: @@ -226,11 +250,8 @@ def payload_submit( # Top-level group name "BatchName": batch_name, - # Asset dependency to wait for at least the scene file to sync. 
- # "AssetDependency0": script_path, - # Job name, as seen in Monitor - "Name": jobname, + "Name": job_name, # Arbitrary username, for visualisation in Monitor "UserName": self._deadline_user, @@ -292,12 +313,17 @@ def payload_submit( "AuxFiles": [] } - if response_data.get("_id"): + # TODO: rewrite for baking with sequences + if baking_submission: payload["JobInfo"].update({ "JobType": "Normal", + "ChunkSize": 99999999 + }) + + if response_data.get("_id"): + payload["JobInfo"].update({ "BatchName": response_data["Props"]["Batch"], "JobDependency0": response_data["_id"], - "ChunkSize": 99999999 }) # Include critical environment variables with submission @@ -337,8 +363,14 @@ def payload_submit( if _path.lower().startswith('openpype_'): environment[_path] = os.environ[_path] - # to recognize job from PYPE for turning Event On/Off - environment["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + if AYON_SERVER_ENABLED: + environment["AYON_BUNDLE_NAME"] = os.environ["AYON_BUNDLE_NAME"] + render_job_label = "AYON_RENDER_JOB" + else: + render_job_label = "OPENPYPE_RENDER_JOB" + + environment[render_job_label] = "1" # finally search replace in values of any key if self.env_search_replace_values: diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py index 2ed21c0621c..bf4411ef432 100644 --- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py +++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py @@ -3,22 +3,20 @@ import os import json import re -from copy import copy, deepcopy +from copy import deepcopy import requests import clique import pyblish.api +from openpype import AYON_SERVER_ENABLED from openpype.client import ( get_last_version_by_subset_name, ) -from openpype.pipeline import ( - legacy_io, -) -from openpype.pipeline import publish -from openpype.lib import EnumDef +from openpype.pipeline import publish, legacy_io +from openpype.lib import EnumDef, is_running_from_build from openpype.tests.lib import is_in_tests -from openpype.lib import is_running_from_build +from openpype.pipeline.version_start import get_versioning_start from openpype.pipeline.farm.pyblish_functions import ( create_skeleton_instance, @@ -94,13 +92,14 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, label = "Submit image sequence jobs to Deadline or Muster" order = pyblish.api.IntegratorOrder + 0.2 icon = "tractor" - deadline_plugin = "OpenPype" + targets = ["local"] hosts = ["fusion", "max", "maya", "nuke", "houdini", "celaction", "aftereffects", "harmony"] - families = ["render.farm", "prerender.farm", + families = ["render.farm", "render.frames_farm", + "prerender.farm", "prerender.frames_farm", "renderlayer", "imagesequence", "vrayscene", "maxrender", "arnold_rop", "mantra_rop", @@ -123,13 +122,11 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, "FTRACK_SERVER", "AVALON_APP_NAME", "OPENPYPE_USERNAME", - "OPENPYPE_SG_USER" + "OPENPYPE_SG_USER", + "KITSU_LOGIN", + "KITSU_PWD" ] - # Add OpenPype version if we are running from build. 
- if is_running_from_build(): - environ_keys.append("OPENPYPE_VERSION") - # custom deadline attributes deadline_department = "" deadline_pool = "" @@ -189,7 +186,7 @@ def _submit_deadline_post_job(self, instance, job, instances): instance.data.get("asset"), instances[0]["subset"], instance.context, - 'render', + instances[0]["family"], override_version ) @@ -203,13 +200,25 @@ def _submit_deadline_post_job(self, instance, job, instances): "AVALON_ASSET": instance.context.data["asset"], "AVALON_TASK": instance.context.data["task"], "OPENPYPE_USERNAME": instance.context.data["user"], - "OPENPYPE_PUBLISH_JOB": "1", - "OPENPYPE_RENDER_JOB": "0", - "OPENPYPE_REMOTE_JOB": "0", "OPENPYPE_LOG_NO_COLORS": "1", "IS_TEST": str(int(is_in_tests())) } + if AYON_SERVER_ENABLED: + environment["AYON_PUBLISH_JOB"] = "1" + environment["AYON_RENDER_JOB"] = "0" + environment["AYON_REMOTE_PUBLISH"] = "0" + environment["AYON_BUNDLE_NAME"] = os.environ["AYON_BUNDLE_NAME"] + deadline_plugin = "Ayon" + else: + environment["OPENPYPE_PUBLISH_JOB"] = "1" + environment["OPENPYPE_RENDER_JOB"] = "0" + environment["OPENPYPE_REMOTE_PUBLISH"] = "0" + deadline_plugin = "OpenPype" + # Add OpenPype version if we are running from build. + if is_running_from_build(): + self.environ_keys.append("OPENPYPE_VERSION") + # add environments from self.environ_keys for env_key in self.environ_keys: if os.getenv(env_key): @@ -252,7 +261,7 @@ def _submit_deadline_post_job(self, instance, job, instances): ) payload = { "JobInfo": { - "Plugin": self.deadline_plugin, + "Plugin": deadline_plugin, "BatchName": job["Props"]["Batch"], "Name": job_name, "UserName": job["Props"]["User"], @@ -293,7 +302,7 @@ def _submit_deadline_post_job(self, instance, job, instances): payload["JobInfo"]["JobDependency{}".format( job_index)] = assembly_id # noqa: E501 job_index += 1 - else: + elif job.get("_id"): payload["JobInfo"]["JobDependency0"] = job["_id"] for index, (key_, value_) in enumerate(environment.items()): @@ -469,6 +478,7 @@ def process(self, instance): "FTRACK_SERVER": os.environ.get("FTRACK_SERVER"), } + deadline_publish_job_id = None if submission_type == "deadline": # get default deadline webservice url from deadline module self.deadline_url = instance.context.data["defaultDeadline"] @@ -561,13 +571,32 @@ def _get_publish_folder(self, anatomy, template_data, if version: version = int(version["name"]) + 1 else: - version = 1 + version = get_versioning_start( + project_name, + template_data["app"], + task_name=template_data["task"]["name"], + task_type=template_data["task"]["type"], + family="render", + subset=subset, + project_settings=context.data["project_settings"] + ) + + host_name = context.data["hostName"] + task_info = template_data.get("task") or {} + + template_name = publish.get_publish_template_name( + project_name, + host_name, + family, + task_info.get("name"), + task_info.get("type"), + ) template_data["subset"] = subset template_data["family"] = family template_data["version"] = version - render_templates = anatomy.templates_obj["render"] + render_templates = anatomy.templates_obj[template_name] if "folder" in render_templates: publish_folder = render_templates["folder"].format_strict( template_data diff --git a/openpype/modules/deadline/plugins/publish/validate_deadline_connection.py b/openpype/modules/deadline/plugins/publish/validate_deadline_connection.py index d5016a4d825..a7b300befff 100644 --- a/openpype/modules/deadline/plugins/publish/validate_deadline_connection.py +++ 
b/openpype/modules/deadline/plugins/publish/validate_deadline_connection.py @@ -1,8 +1,7 @@ -import os -import requests - import pyblish.api +from openpype_modules.deadline.abstract_submit_deadline import requests_get + class ValidateDeadlineConnection(pyblish.api.InstancePlugin): """Validate Deadline Web Service is running""" @@ -10,7 +9,10 @@ class ValidateDeadlineConnection(pyblish.api.InstancePlugin): label = "Validate Deadline Web Service" order = pyblish.api.ValidatorOrder hosts = ["maya", "nuke"] - families = ["renderlayer"] + families = ["renderlayer", "render"] + + # cache + responses = {} def process(self, instance): # get default deadline webservice url from deadline module @@ -18,28 +20,16 @@ def process(self, instance): # if custom one is set in instance, use that if instance.data.get("deadlineUrl"): deadline_url = instance.data.get("deadlineUrl") - self.log.info( - "We have deadline URL on instance {}".format( - deadline_url)) + self.log.debug( + "We have deadline URL on instance {}".format(deadline_url) + ) assert deadline_url, "Requires Deadline Webservice URL" - # Check response - response = self._requests_get(deadline_url) + if deadline_url not in self.responses: + self.responses[deadline_url] = requests_get(deadline_url) + + response = self.responses[deadline_url] assert response.ok, "Response must be ok" assert response.text.startswith("Deadline Web Service "), ( "Web service did not respond with 'Deadline Web Service'" ) - - def _requests_get(self, *args, **kwargs): - """ Wrapper for requests, disabling SSL certificate validation if - DONT_VERIFY_SSL environment variable is found. This is useful when - Deadline or Muster server are running with self-signed certificates - and their certificate is not added to trusted certificates on - client machines. - - WARNING: disabling SSL certificate validation is defeating one line - of defense SSL is providing and it is not recommended. 
- """ - if 'verify' not in kwargs: - kwargs['verify'] = False if os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) else True # noqa - return requests.get(*args, **kwargs) diff --git a/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py b/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py index e1c05958304..949caff7d8c 100644 --- a/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py +++ b/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py @@ -19,38 +19,64 @@ class ValidateDeadlinePools(OptionalPyblishPluginMixin, order = pyblish.api.ValidatorOrder families = ["rendering", "render.farm", + "render.frames_farm", "renderFarm", "renderlayer", "maxrender"] optional = True + # cache + pools_per_url = {} + def process(self, instance): + if not self.is_active(instance.data): + return + if not instance.data.get("farm"): self.log.debug("Skipping local instance.") return - # get default deadline webservice url from deadline module - deadline_url = instance.context.data["defaultDeadline"] - self.log.info("deadline_url::{}".format(deadline_url)) - pools = DeadlineModule.get_deadline_pools(deadline_url, log=self.log) - self.log.info("pools::{}".format(pools)) - - formatting_data = { - "pools_str": ",".join(pools) - } + deadline_url = self.get_deadline_url(instance) + pools = self.get_pools(deadline_url) + invalid_pools = {} primary_pool = instance.data.get("primaryPool") if primary_pool and primary_pool not in pools: - msg = "Configured primary '{}' not present on Deadline".format( - instance.data["primaryPool"]) - formatting_data["invalid_value_str"] = msg - raise PublishXmlValidationError(self, msg, - formatting_data=formatting_data) + invalid_pools["primary"] = primary_pool secondary_pool = instance.data.get("secondaryPool") if secondary_pool and secondary_pool not in pools: - msg = "Configured secondary '{}' not present on Deadline".format( - instance.data["secondaryPool"]) - formatting_data["invalid_value_str"] = msg - raise PublishXmlValidationError(self, msg, - formatting_data=formatting_data) + invalid_pools["secondary"] = secondary_pool + + if invalid_pools: + message = "\n".join( + "{} pool '{}' not available on Deadline".format(key.title(), + pool) + for key, pool in invalid_pools.items() + ) + raise PublishXmlValidationError( + plugin=self, + message=message, + formatting_data={"pools_str": ", ".join(pools)} + ) + + def get_deadline_url(self, instance): + # get default deadline webservice url from deadline module + deadline_url = instance.context.data["defaultDeadline"] + if instance.data.get("deadlineUrl"): + # if custom one is set in instance, use that + deadline_url = instance.data.get("deadlineUrl") + return deadline_url + + def get_pools(self, deadline_url): + if deadline_url not in self.pools_per_url: + self.log.debug( + "Querying available pools for Deadline url: {}".format( + deadline_url) + ) + pools = DeadlineModule.get_deadline_pools(deadline_url, + log=self.log) + self.log.info("Available pools: {}".format(pools)) + self.pools_per_url[deadline_url] = pools + + return self.pools_per_url[deadline_url] diff --git a/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py b/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py index ff4be677e74..5d37e7357ee 100644 --- a/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py +++ b/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py @@ -20,8 +20,19 @@ class 
ValidateExpectedFiles(pyblish.api.InstancePlugin): allow_user_override = True def process(self, instance): - self.instance = instance - frame_list = self._get_frame_list(instance.data["render_job_id"]) + """Process all the nodes in the instance""" + + # get dependency jobs ids for retrieving frame list + dependent_job_ids = self._get_dependent_job_ids(instance) + + if not dependent_job_ids: + self.log.warning("No dependent jobs found for instance: {}" + "".format(instance)) + return + + # get list of frames from dependent jobs + frame_list = self._get_dependent_jobs_frames( + instance, dependent_job_ids) for repre in instance.data["representations"]: expected_files = self._get_expected_files(repre) @@ -59,7 +70,10 @@ def process(self, instance): # Update the representation expected files self.log.info("Update range from actual job range " "to frame list: {}".format(frame_list)) - repre["files"] = sorted(job_expected_files) + # single item files must be string not list + repre["files"] = (sorted(job_expected_files) + if len(job_expected_files) > 1 else + list(job_expected_files)[0]) # Update the expected files expected_files = job_expected_files @@ -78,26 +92,45 @@ def process(self, instance): ) ) - def _get_frame_list(self, original_job_id): + def _get_dependent_job_ids(self, instance): + """Returns list of dependent job ids from instance metadata.json + + Args: + instance (pyblish.api.Instance): pyblish instance + + Returns: + (list): list of dependent job ids + + """ + dependent_job_ids = [] + + # job_id collected from metadata.json + original_job_id = instance.data["render_job_id"] + + dependent_job_ids_env = os.environ.get("RENDER_JOB_IDS") + if dependent_job_ids_env: + dependent_job_ids = dependent_job_ids_env.split(',') + elif original_job_id: + dependent_job_ids = [original_job_id] + + return dependent_job_ids + + def _get_dependent_jobs_frames(self, instance, dependent_job_ids): """Returns list of frame ranges from all render job. Render job might be re-submitted so job_id in metadata.json could be invalid. GlobalJobPreload injects current job id to RENDER_JOB_IDS. Args: - original_job_id (str) + instance (pyblish.api.Instance): pyblish instance + dependent_job_ids (list): list of dependent job ids Returns: (list) """ all_frame_lists = [] - render_job_ids = os.environ.get("RENDER_JOB_IDS") - if render_job_ids: - render_job_ids = render_job_ids.split(',') - else: # fallback - render_job_ids = [original_job_id] - - for job_id in render_job_ids: - job_info = self._get_job_info(job_id) + + for job_id in dependent_job_ids: + job_info = self._get_job_info(instance, job_id) frame_list = job_info["Props"].get("Frames") if frame_list: all_frame_lists.extend(frame_list.split(',')) @@ -152,18 +185,25 @@ def _get_file_name_template_and_placeholder(self, files): return file_name_template, frame_placeholder - def _get_job_info(self, job_id): + def _get_job_info(self, instance, job_id): """Calls DL for actual job info for 'job_id' Might be different than job info saved in metadata.json if user manually changes job pre/during rendering. 
+ Args: + instance (pyblish.api.Instance): pyblish instance + job_id (str): Deadline job id + + Returns: + (dict): Job info from Deadline + """ # get default deadline webservice url from deadline module - deadline_url = self.instance.context.data["defaultDeadline"] + deadline_url = instance.context.data["defaultDeadline"] # if custom one is set in instance, use that - if self.instance.data.get("deadlineUrl"): - deadline_url = self.instance.data.get("deadlineUrl") + if instance.data.get("deadlineUrl"): + deadline_url = instance.data.get("deadlineUrl") assert deadline_url, "Requires Deadline Webservice URL" url = "{}/api/jobs?JobID={}".format(deadline_url, job_id) diff --git a/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.ico b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.ico new file mode 100644 index 00000000000..aea977a1251 Binary files /dev/null and b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.ico differ diff --git a/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.options b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.options new file mode 100644 index 00000000000..1fbe1ef2994 --- /dev/null +++ b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.options @@ -0,0 +1,9 @@ +[Arguments] +Type=string +Label=Arguments +Category=Python Options +CategoryOrder=0 +Index=1 +Description=The arguments to pass to the script. If no arguments are required, leave this blank. +Required=false +DisableIfBlank=true diff --git a/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.param b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.param new file mode 100644 index 00000000000..8ba044ff815 --- /dev/null +++ b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.param @@ -0,0 +1,35 @@ +[About] +Type=label +Label=About +Category=About Plugin +CategoryOrder=-1 +Index=0 +Default=Ayon Plugin for Deadline +Description=Not configurable + +[AyonExecutable] +Type=multilinemultifilename +Label=Ayon Executable +Category=Ayon Executables +CategoryOrder=1 +Index=0 +Default= +Description=The path to the Ayon executable. Enter alternative paths on separate lines. + +[AyonServerUrl] +Type=string +Label=Ayon Server Url +Category=Ayon Credentials +CategoryOrder=2 +Index=0 +Default= +Description=Url to Ayon server + +[AyonApiKey] +Type=password +Label=Ayon API key +Category=Ayon Credentials +CategoryOrder=2 +Index=0 +Default= +Description=API key for service account on Ayon Server diff --git a/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.py b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.py new file mode 100644 index 00000000000..a29acf98238 --- /dev/null +++ b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.py @@ -0,0 +1,156 @@ +#!/usr/bin/env python3 + +from System.IO import Path +from System.Text.RegularExpressions import Regex + +from Deadline.Plugins import PluginType, DeadlinePlugin +from Deadline.Scripting import ( + StringUtils, + FileUtils, + DirectoryUtils, + RepositoryUtils +) + +import re +import os +import platform + + +###################################################################### +# This is the function that Deadline calls to get an instance of the +# main DeadlinePlugin class. 
+######################################################################
+def GetDeadlinePlugin():
+    return AyonDeadlinePlugin()
+
+
+def CleanupDeadlinePlugin(deadlinePlugin):
+    deadlinePlugin.Cleanup()
+
+
+class AyonDeadlinePlugin(DeadlinePlugin):
+    """
+    Standalone plugin for publishing from Ayon.
+
+    Calls the Ayon executable 'ayon_console' from the first existing
+    path in the plugin configuration. Uses the 'publish' command and
+    passes a path to a metadata json file, which contains all
+    information needed for the publish process.
+    """
+    def __init__(self):
+        super().__init__()
+        self.InitializeProcessCallback += self.InitializeProcess
+        self.RenderExecutableCallback += self.RenderExecutable
+        self.RenderArgumentCallback += self.RenderArgument
+
+    def Cleanup(self):
+        for stdoutHandler in self.StdoutHandlers:
+            del stdoutHandler.HandleCallback
+
+        del self.InitializeProcessCallback
+        del self.RenderExecutableCallback
+        del self.RenderArgumentCallback
+
+    def InitializeProcess(self):
+        self.PluginType = PluginType.Simple
+        self.StdoutHandling = True
+
+        self.SingleFramesOnly = self.GetBooleanPluginInfoEntryWithDefault(
+            "SingleFramesOnly", False)
+        self.LogInfo("Single Frames Only: %s" % self.SingleFramesOnly)
+
+        self.AddStdoutHandlerCallback(
+            ".*Progress: (\d+)%.*").HandleCallback += self.HandleProgress
+
+    def RenderExecutable(self):
+        job = self.GetJob()
+
+        # set required env vars for Ayon
+        # cannot be in InitializeProcess as it is too soon
+        config = RepositoryUtils.GetPluginConfig("Ayon")
+        ayon_server_url = (
+            job.GetJobEnvironmentKeyValue("AYON_SERVER_URL") or
+            config.GetConfigEntryWithDefault("AyonServerUrl", "")
+        )
+        ayon_api_key = (
+            job.GetJobEnvironmentKeyValue("AYON_API_KEY") or
+            config.GetConfigEntryWithDefault("AyonApiKey", "")
+        )
+        ayon_bundle_name = job.GetJobEnvironmentKeyValue("AYON_BUNDLE_NAME")
+
+        environment = {
+            "AYON_SERVER_URL": ayon_server_url,
+            "AYON_API_KEY": ayon_api_key,
+            "AYON_BUNDLE_NAME": ayon_bundle_name,
+        }
+
+        for env, val in environment.items():
+            self.SetProcessEnvironmentVariable(env, val)
+
+        exe_list = self.GetConfigEntry("AyonExecutable")
+        # clean '\ ' for MacOS pasting
+        if platform.system().lower() == "darwin":
+            exe_list = exe_list.replace("\\ ", " ")
+
+        expanded_paths = []
+        for path in exe_list.split(";"):
+            if path.startswith("~"):
+                path = os.path.expanduser(path)
+            expanded_paths.append(path)
+        exe = FileUtils.SearchFileList(";".join(expanded_paths))
+
+        if exe == "":
+            self.FailRender(
+                "Ayon executable was not found " +
+                "in the semicolon separated list " +
+                "\"" + exe_list + "\". 
" + + "The path to the render executable can be configured " + + "from the Plugin Configuration in the Deadline Monitor.") + return exe + + def RenderArgument(self): + arguments = str(self.GetPluginInfoEntryWithDefault("Arguments", "")) + arguments = RepositoryUtils.CheckPathMapping(arguments) + + arguments = re.sub(r"<(?i)STARTFRAME>", str(self.GetStartFrame()), + arguments) + arguments = re.sub(r"<(?i)ENDFRAME>", str(self.GetEndFrame()), + arguments) + arguments = re.sub(r"<(?i)QUOTE>", "\"", arguments) + + arguments = self.ReplacePaddedFrame(arguments, + "<(?i)STARTFRAME%([0-9]+)>", + self.GetStartFrame()) + arguments = self.ReplacePaddedFrame(arguments, + "<(?i)ENDFRAME%([0-9]+)>", + self.GetEndFrame()) + + count = 0 + for filename in self.GetAuxiliaryFilenames(): + localAuxFile = Path.Combine(self.GetJobsDataDirectory(), filename) + arguments = re.sub(r"<(?i)AUXFILE" + str(count) + r">", + localAuxFile.replace("\\", "/"), arguments) + count += 1 + + return arguments + + def ReplacePaddedFrame(self, arguments, pattern, frame): + frameRegex = Regex(pattern) + while True: + frameMatch = frameRegex.Match(arguments) + if not frameMatch.Success: + break + paddingSize = int(frameMatch.Groups[1].Value) + if paddingSize > 0: + padding = StringUtils.ToZeroPaddedString( + frame, paddingSize, False) + else: + padding = str(frame) + arguments = arguments.replace( + frameMatch.Groups[0].Value, padding) + + return arguments + + def HandleProgress(self): + progress = float(self.GetRegexMatch(1)) + self.SetProgress(progress) diff --git a/openpype/modules/deadline/repository/custom/plugins/GlobalJobPreLoad.py b/openpype/modules/deadline/repository/custom/plugins/GlobalJobPreLoad.py index 15226bb7733..97875215aea 100644 --- a/openpype/modules/deadline/repository/custom/plugins/GlobalJobPreLoad.py +++ b/openpype/modules/deadline/repository/custom/plugins/GlobalJobPreLoad.py @@ -355,6 +355,13 @@ def inject_openpype_environment(deadlinePlugin): " AVALON_TASK, AVALON_APP_NAME" )) + openpype_mongo = job.GetJobEnvironmentKeyValue("OPENPYPE_MONGO") + if openpype_mongo: + # inject env var for OP extractenvironments + # SetEnvironmentVariable is important, not SetProcessEnv... + deadlinePlugin.SetEnvironmentVariable("OPENPYPE_MONGO", + openpype_mongo) + if not os.environ.get("OPENPYPE_MONGO"): print(">>> Missing OPENPYPE_MONGO env var, process won't work") @@ -398,6 +405,158 @@ def inject_openpype_environment(deadlinePlugin): raise +def inject_ayon_environment(deadlinePlugin): + """ Pull env vars from Ayon and push them to rendering process. + + Used for correct paths, configuration from OpenPype etc. + """ + job = deadlinePlugin.GetJob() + + print(">>> Injecting Ayon environments ...") + try: + exe_list = get_ayon_executable() + exe = FileUtils.SearchFileList(exe_list) + + if not exe: + raise RuntimeError(( + "Ayon executable was not found in the semicolon " + "separated list \"{}\"." + "The path to the render executable can be configured" + " from the Plugin Configuration in the Deadline Monitor." 
+ ).format(";".join(exe_list))) + + print("--- Ayon executable: {}".format(exe)) + + ayon_bundle_name = job.GetJobEnvironmentKeyValue("AYON_BUNDLE_NAME") + if not ayon_bundle_name: + raise RuntimeError("Missing env var in job properties " + "AYON_BUNDLE_NAME") + + config = RepositoryUtils.GetPluginConfig("Ayon") + ayon_server_url = ( + job.GetJobEnvironmentKeyValue("AYON_SERVER_URL") or + config.GetConfigEntryWithDefault("AyonServerUrl", "") + ) + ayon_api_key = ( + job.GetJobEnvironmentKeyValue("AYON_API_KEY") or + config.GetConfigEntryWithDefault("AyonApiKey", "") + ) + + if not all([ayon_server_url, ayon_api_key]): + raise RuntimeError(( + "Missing required values for server url and api key. " + "Please fill in Ayon Deadline plugin or provide by " + "AYON_SERVER_URL and AYON_API_KEY" + )) + + # tempfile.TemporaryFile cannot be used because of locking + temp_file_name = "{}_{}.json".format( + datetime.utcnow().strftime('%Y%m%d%H%M%S%f'), + str(uuid.uuid1()) + ) + export_url = os.path.join(tempfile.gettempdir(), temp_file_name) + print(">>> Temporary path: {}".format(export_url)) + + args = [ + "--headless", + "extractenvironments", + export_url + ] + + add_kwargs = { + "project": job.GetJobEnvironmentKeyValue("AVALON_PROJECT"), + "asset": job.GetJobEnvironmentKeyValue("AVALON_ASSET"), + "task": job.GetJobEnvironmentKeyValue("AVALON_TASK"), + "app": job.GetJobEnvironmentKeyValue("AVALON_APP_NAME"), + "envgroup": "farm", + } + + if job.GetJobEnvironmentKeyValue('IS_TEST'): + args.append("--automatic-tests") + + if all(add_kwargs.values()): + for key, value in add_kwargs.items(): + args.extend(["--{}".format(key), value]) + else: + raise RuntimeError(( + "Missing required env vars: AVALON_PROJECT, AVALON_ASSET," + " AVALON_TASK, AVALON_APP_NAME" + )) + + environment = { + "AYON_SERVER_URL": ayon_server_url, + "AYON_API_KEY": ayon_api_key, + "AYON_BUNDLE_NAME": ayon_bundle_name, + } + for env, val in environment.items(): + deadlinePlugin.SetEnvironmentVariable(env, val) + + args_str = subprocess.list2cmdline(args) + print(">>> Executing: {} {}".format(exe, args_str)) + process_exitcode = deadlinePlugin.RunProcess( + exe, args_str, os.path.dirname(exe), -1 + ) + + if process_exitcode != 0: + raise RuntimeError( + "Failed to run Ayon process to extract environments." + ) + + print(">>> Loading file ...") + with open(export_url) as fp: + contents = json.load(fp) + + for key, value in contents.items(): + deadlinePlugin.SetProcessEnvironmentVariable(key, value) + + script_url = job.GetJobPluginInfoKeyValue("ScriptFilename") + if script_url: + script_url = script_url.format(**contents).replace("\\", "/") + print(">>> Setting script path {}".format(script_url)) + job.SetJobPluginInfoKeyValue("ScriptFilename", script_url) + + print(">>> Removing temporary file") + os.remove(export_url) + + print(">> Injection end.") + except Exception as e: + if hasattr(e, "output"): + print(">>> Exception {}".format(e.output)) + import traceback + print(traceback.format_exc()) + print("!!! Injection failed.") + RepositoryUtils.FailJob(job) + raise + + +def get_ayon_executable(): + """Return OpenPype Executable from Event Plug-in Settings + + Returns: + (list) of paths + Raises: + (RuntimeError) if no path configured at all + """ + config = RepositoryUtils.GetPluginConfig("Ayon") + exe_list = config.GetConfigEntryWithDefault("AyonExecutable", "") + + if not exe_list: + raise RuntimeError("Path to Ayon executable not configured." 
+ "Please set it in Ayon Deadline Plugin.") + + # clean '\ ' for MacOS pasting + if platform.system().lower() == "darwin": + exe_list = exe_list.replace("\\ ", " ") + + # Expand user paths + expanded_paths = [] + for path in exe_list.split(";"): + if path.startswith("~"): + path = os.path.expanduser(path) + expanded_paths.append(path) + return ";".join(expanded_paths) + + def inject_render_job_id(deadlinePlugin): """Inject dependency ids to publish process as env var for validation.""" print(">>> Injecting render job id ...") @@ -422,16 +581,29 @@ def __main__(deadlinePlugin): openpype_publish_job = \ job.GetJobEnvironmentKeyValue('OPENPYPE_PUBLISH_JOB') or '0' openpype_remote_job = \ - job.GetJobEnvironmentKeyValue('OPENPYPE_REMOTE_JOB') or '0' + job.GetJobEnvironmentKeyValue('OPENPYPE_REMOTE_PUBLISH') or '0' - print("--- Job type - render {}".format(openpype_render_job)) - print("--- Job type - publish {}".format(openpype_publish_job)) - print("--- Job type - remote {}".format(openpype_remote_job)) if openpype_publish_job == '1' and openpype_render_job == '1': raise RuntimeError("Misconfiguration. Job couldn't be both " + "render and publish.") if openpype_publish_job == '1': inject_render_job_id(deadlinePlugin) - elif openpype_render_job == '1' or openpype_remote_job == '1': + if openpype_render_job == '1' or openpype_remote_job == '1': inject_openpype_environment(deadlinePlugin) + + ayon_render_job = \ + job.GetJobEnvironmentKeyValue('AYON_RENDER_JOB') or '0' + ayon_publish_job = \ + job.GetJobEnvironmentKeyValue('AYON_PUBLISH_JOB') or '0' + ayon_remote_job = \ + job.GetJobEnvironmentKeyValue('AYON_REMOTE_PUBLISH') or '0' + + if ayon_publish_job == '1' and ayon_render_job == '1': + raise RuntimeError("Misconfiguration. Job couldn't be both " + + "render and publish.") + + if ayon_publish_job == '1': + inject_render_job_id(deadlinePlugin) + if ayon_render_job == '1' or ayon_remote_job == '1': + inject_ayon_environment(deadlinePlugin) diff --git a/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.param b/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.param index ff2949766c5..43a54a464e0 100644 --- a/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.param +++ b/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.param @@ -77,4 +77,22 @@ CategoryOrder=0 Index=4 Label=Harmony 20 Render Executable Description=The path to the Harmony Render executable file used for rendering. Enter alternative paths on separate lines. -Default=c:\Program Files (x86)\Toon Boom Animation\Toon Boom Harmony 20 Premium\win64\bin\HarmonyPremium.exe;/Applications/Toon Boom Harmony 20 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium;/usr/local/ToonBoomAnimation/harmonyPremium_20/lnx86_64/bin/HarmonyPremium \ No newline at end of file +Default=c:\Program Files (x86)\Toon Boom Animation\Toon Boom Harmony 20 Premium\win64\bin\HarmonyPremium.exe;/Applications/Toon Boom Harmony 20 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium;/usr/local/ToonBoomAnimation/harmonyPremium_20/lnx86_64/bin/HarmonyPremium + +[Harmony_RenderExecutable_21] +Type=multilinemultifilename +Category=Render Executables +CategoryOrder=0 +Index=4 +Label=Harmony 21 Render Executable +Description=The path to the Harmony Render executable file used for rendering. Enter alternative paths on separate lines. 
+Default=c:\Program Files (x86)\Toon Boom Animation\Toon Boom Harmony 21 Premium\win64\bin\HarmonyPremium.exe;/Applications/Toon Boom Harmony 21 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium;/usr/local/ToonBoomAnimation/harmonyPremium_21/lnx86_64/bin/HarmonyPremium + +[Harmony_RenderExecutable_22] +Type=multilinemultifilename +Category=Render Executables +CategoryOrder=0 +Index=4 +Label=Harmony 22 Render Executable +Description=The path to the Harmony Render executable file used for rendering. Enter alternative paths on separate lines. +Default=c:\Program Files (x86)\Toon Boom Animation\Toon Boom Harmony 22 Premium\win64\bin\HarmonyPremium.exe;/Applications/Toon Boom Harmony 22 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium;/usr/local/ToonBoomAnimation/harmonyPremium_22/lnx86_64/bin/HarmonyPremium diff --git a/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py b/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py index 0615af95dd4..32ed76b58d5 100644 --- a/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py +++ b/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py @@ -1,3 +1,4 @@ +#!/usr/bin/env python3 from System import * from System.Diagnostics import * from System.IO import * @@ -8,13 +9,14 @@ def GetDeadlinePlugin(): return HarmonyOpenPypePlugin() - + def CleanupDeadlinePlugin( deadlinePlugin ): deadlinePlugin.Cleanup() - + class HarmonyOpenPypePlugin( DeadlinePlugin ): def __init__( self ): + super().__init__() self.InitializeProcessCallback += self.InitializeProcess self.RenderExecutableCallback += self.RenderExecutable self.RenderArgumentCallback += self.RenderArgument @@ -24,11 +26,11 @@ def Cleanup( self ): print("Cleanup") for stdoutHandler in self.StdoutHandlers: del stdoutHandler.HandleCallback - + del self.InitializeProcessCallback del self.RenderExecutableCallback del self.RenderArgumentCallback - + def CheckExitCode( self, exitCode ): print("check code") if exitCode != 0: @@ -36,20 +38,20 @@ def CheckExitCode( self, exitCode ): self.LogInfo( "Renderer reported an error with error code 100. This will be ignored, since the option to ignore it is specified in the Job Properties." ) else: self.FailRender( "Renderer returned non-zero error code %d. Check the renderer's output." % exitCode ) - + def InitializeProcess( self ): self.PluginType = PluginType.Simple self.StdoutHandling = True self.PopupHandling = True - + self.AddStdoutHandlerCallback( "Rendered frame ([0-9]+)" ).HandleCallback += self.HandleStdoutProgress - + def HandleStdoutProgress( self ): startFrame = self.GetStartFrame() endFrame = self.GetEndFrame() if( endFrame - startFrame + 1 != 0 ): self.SetProgress( 100 * ( int(self.GetRegexMatch(1)) - startFrame + 1 ) / ( endFrame - startFrame + 1 ) ) - + def RenderExecutable( self ): version = int( self.GetPluginInfoEntry( "Version" ) ) exe = "" @@ -58,7 +60,7 @@ def RenderExecutable( self ): if( exe == "" ): self.FailRender( "Harmony render executable was not found in the configured separated list \"" + exeList + "\". The path to the render executable can be configured from the Plugin Configuration in the Deadline Monitor." 
) return exe - + def RenderArgument( self ): renderArguments = "-batch" @@ -72,20 +74,20 @@ def RenderArgument( self ): resolutionX = self.GetIntegerPluginInfoEntryWithDefault( "ResolutionX", -1 ) resolutionY = self.GetIntegerPluginInfoEntryWithDefault( "ResolutionY", -1 ) fov = self.GetFloatPluginInfoEntryWithDefault( "FieldOfView", -1 ) - + if resolutionX > 0 and resolutionY > 0 and fov > 0: renderArguments += " -res " + str( resolutionX ) + " " + str( resolutionY ) + " " + str( fov ) - + camera = self.GetPluginInfoEntryWithDefault( "Camera", "" ) - + if not camera == "": renderArguments += " -camera " + camera - + startFrame = str( self.GetStartFrame() ) endFrame = str( self.GetEndFrame() ) - + renderArguments += " -frames " + startFrame + " " + endFrame - + if not self.GetBooleanPluginInfoEntryWithDefault( "IsDatabase", False ): sceneFilename = self.GetPluginInfoEntryWithDefault( "SceneFile", self.GetDataFilename() ) sceneFilename = RepositoryUtils.CheckPathMapping( sceneFilename ) @@ -99,12 +101,12 @@ def RenderArgument( self ): renderArguments += " -scene " + scene version = self.GetPluginInfoEntryWithDefault( "SceneVersion", "" ) renderArguments += " -version " + version - + #tempSceneDirectory = self.CreateTempDirectory( "thread" + str(self.GetThreadNumber()) ) - #preRenderScript = + #preRenderScript = rendernodeNum = 0 scriptBuilder = StringBuilder() - + while True: nodeName = self.GetPluginInfoEntryWithDefault( "Output" + str( rendernodeNum ) + "Node", "" ) if nodeName == "": @@ -115,35 +117,35 @@ def RenderArgument( self ): nodeLeadingZero = self.GetPluginInfoEntryWithDefault( "Output" + str( rendernodeNum ) + "LeadingZero", "" ) nodeFormat = self.GetPluginInfoEntryWithDefault( "Output" + str( rendernodeNum ) + "Format", "" ) nodeStartFrame = self.GetPluginInfoEntryWithDefault( "Output" + str( rendernodeNum ) + "StartFrame", "" ) - + if not nodePath == "": scriptBuilder.AppendLine("node.setTextAttr( \"" + nodeName + "\", \"drawingName\", 1, \"" + nodePath + "\" );") - + if not nodeLeadingZero == "": scriptBuilder.AppendLine("node.setTextAttr( \"" + nodeName + "\", \"leadingZeros\", 1, \"" + nodeLeadingZero + "\" );") - + if not nodeFormat == "": scriptBuilder.AppendLine("node.setTextAttr( \"" + nodeName + "\", \"drawingType\", 1, \"" + nodeFormat + "\" );") - + if not nodeStartFrame == "": scriptBuilder.AppendLine("node.setTextAttr( \"" + nodeName + "\", \"start\", 1, \"" + nodeStartFrame + "\" );") - + if nodeType == "Movie": nodePath = self.GetPluginInfoEntryWithDefault( "Output" + str( rendernodeNum ) + "Path", "" ) if not nodePath == "": scriptBuilder.AppendLine("node.setTextAttr( \"" + nodeName + "\", \"moviePath\", 1, \"" + nodePath + "\" );") - + rendernodeNum += 1 - + tempDirectory = self.CreateTempDirectory( "thread" + str(self.GetThreadNumber()) ) preRenderScriptName = Path.Combine( tempDirectory, "preRenderScript.txt" ) - + File.WriteAllText( preRenderScriptName, scriptBuilder.ToString() ) - + preRenderInlineScript = self.GetPluginInfoEntryWithDefault( "PreRenderInlineScript", "" ) if preRenderInlineScript: renderArguments += " -preRenderInlineScript \"" + preRenderInlineScript +"\"" - + renderArguments += " -preRenderScript \"" + preRenderScriptName +"\"" - + return renderArguments diff --git a/openpype/modules/deadline/repository/custom/plugins/OpenPype/OpenPype.py b/openpype/modules/deadline/repository/custom/plugins/OpenPype/OpenPype.py index 6e1b973fb91..004c58d3467 100644 --- a/openpype/modules/deadline/repository/custom/plugins/OpenPype/OpenPype.py +++ 
b/openpype/modules/deadline/repository/custom/plugins/OpenPype/OpenPype.py @@ -38,6 +38,7 @@ class OpenPypeDeadlinePlugin(DeadlinePlugin): for publish process. """ def __init__(self): + super().__init__() self.InitializeProcessCallback += self.InitializeProcess self.RenderExecutableCallback += self.RenderExecutable self.RenderArgumentCallback += self.RenderArgument @@ -107,7 +108,7 @@ def RenderExecutable(self): "Scanning for compatible requested " f"version {requested_version}")) dir_list = self.GetConfigEntry("OpenPypeInstallationDirs") - + # clean '\ ' for MacOS pasting if platform.system().lower() == "darwin": dir_list = dir_list.replace("\\ ", " ") diff --git a/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py b/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py index b51daffbc8f..9641c16d20d 100644 --- a/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py +++ b/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py @@ -249,6 +249,7 @@ class OpenPypeTileAssembler(DeadlinePlugin): def __init__(self): """Init.""" + super().__init__() self.InitializeProcessCallback += self.initialize_process self.RenderExecutableCallback += self.render_executable self.RenderArgumentCallback += self.render_argument diff --git a/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py b/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py index df9147bdf77..442206feba0 100644 --- a/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py +++ b/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py @@ -40,6 +40,7 @@ class SyncToAvalonServer(ServerAction): #: Action description. 
description = "Send data from Ftrack to Avalon" role_list = {"Pypeclub", "Administrator", "Project Manager"} + settings_key = "sync_to_avalon" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @@ -48,11 +49,16 @@ def __init__(self, *args, **kwargs): def discover(self, session, entities, event): """ Validation """ # Check if selection is valid + is_valid = False for ent in event["data"]["selection"]: # Ignore entities that are not tasks or projects if ent["entityType"].lower() in ["show", "task"]: - return True - return False + is_valid = True + break + + if is_valid: + is_valid = self.valid_roles(session, entities, event) + return is_valid def launch(self, session, in_entities, event): self.log.debug("{}: Creating job".format(self.label)) diff --git a/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py b/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py index 86ecffd5b80..ac4e499e417 100644 --- a/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py +++ b/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py @@ -2,11 +2,12 @@ import ftrack_api from openpype.settings import get_project_settings -from openpype.lib import PostLaunchHook +from openpype.lib.applications import PostLaunchHook, LaunchTypes class PostFtrackHook(PostLaunchHook): order = None + launch_types = {LaunchTypes.local} def execute(self): project_name = self.data.get("project_name") diff --git a/openpype/modules/ftrack/plugins/publish/collect_username.py b/openpype/modules/ftrack/plugins/publish/collect_username.py index 798f3960a8e..0c7c0a57bee 100644 --- a/openpype/modules/ftrack/plugins/publish/collect_username.py +++ b/openpype/modules/ftrack/plugins/publish/collect_username.py @@ -33,7 +33,7 @@ class CollectUsernameForWebpublish(pyblish.api.ContextPlugin): order = pyblish.api.CollectorOrder + 0.0015 label = "Collect ftrack username" hosts = ["webpublisher", "photoshop"] - targets = ["remotepublish", "filespublish", "tvpaint_worker"] + targets = ["webpublish"] def process(self, context): self.log.info("{}".format(self.__class__.__name__)) diff --git a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py index deb8b414f06..4d474fab101 100644 --- a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py +++ b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py @@ -11,10 +11,8 @@ """ import os -import sys import collections -import six import pyblish.api import clique @@ -355,7 +353,7 @@ def _ensure_asset_version_exists( status_name = asset_version_data.pop("status_name", None) # Try query asset version by criteria (asset id and version) - version = asset_version_data.get("version") or 0 + version = asset_version_data.get("version") or "0" asset_version_entity = self._query_asset_version( session, version, asset_id ) diff --git a/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py b/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py index 43f5d1ef0e8..db2e4eadc54 100644 --- a/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py +++ b/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py @@ -1,7 +1,5 @@ -import os - import pyblish.api -from openpype.lib.mongo import OpenPypeMongoConnection +from openpype.client.mongo import OpenPypeMongoConnection class CollectShotgridEntities(pyblish.api.ContextPlugin): diff --git a/openpype/modules/slack/launch_hooks/pre_python2_vendor.py 
b/openpype/modules/slack/launch_hooks/pre_python2_vendor.py index 0f4bc22a345..891c92bb7a0 100644 --- a/openpype/modules/slack/launch_hooks/pre_python2_vendor.py +++ b/openpype/modules/slack/launch_hooks/pre_python2_vendor.py @@ -1,5 +1,5 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook from openpype_modules.slack import SLACK_MODULE_DIR @@ -8,6 +8,7 @@ class PrePython2Support(PreLaunchHook): Path to vendor modules is added to the beginning of PYTHONPATH. """ + launch_types = set() def execute(self): if not self.application.use_python_2: diff --git a/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py b/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py index bbc220945c6..047e35e3ac1 100644 --- a/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py +++ b/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py @@ -1,12 +1,8 @@ import os import shutil -from openpype.client.entities import ( - get_representations, - get_project -) - -from openpype.lib import PreLaunchHook +from openpype.client.entities import get_representations +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.lib.profiles_filtering import filter_profiles from openpype.modules.sync_server.sync_server import ( download_last_published_workfile, @@ -32,6 +28,7 @@ class CopyLastPublishedWorkfile(PreLaunchHook): "nuke", "nukeassist", "nukex", "hiero", "nukestudio", "maya", "harmony", "celaction", "flame", "fusion", "houdini", "tvpaint"] + launch_types = {LaunchTypes.local} def execute(self): """Check if local workfile doesn't exist, else copy it. @@ -119,6 +116,18 @@ def execute(self): "task": {"name": task_name, "type": task_type} } + # Add version filter + workfile_version = self.launch_context.data.get("workfile_version", -1) + if workfile_version > 0 and workfile_version not in {None, "last"}: + context_filters["version"] = self.launch_context.data[ + "workfile_version" + ] + + # Only one version will be matched + version_index = 0 + else: + version_index = workfile_version + workfile_representations = list(get_representations( project_name, context_filters=context_filters @@ -136,9 +145,10 @@ def execute(self): lambda r: r["context"].get("version") is not None, workfile_representations ) - workfile_representation = max( + # Get workfile version + workfile_representation = sorted( filtered_repres, key=lambda r: r["context"]["version"] - ) + )[version_index] # Copy file and substitute path last_published_workfile_path = download_last_published_workfile( diff --git a/openpype/modules/sync_server/sync_server_module.py b/openpype/modules/sync_server/sync_server_module.py index 67856f0d8e9..8a926979205 100644 --- a/openpype/modules/sync_server/sync_server_module.py +++ b/openpype/modules/sync_server/sync_server_module.py @@ -34,7 +34,12 @@ from .providers.local_drive import LocalDriveHandler from .providers import lib -from .utils import time_function, SyncStatus, SiteAlreadyPresentError +from .utils import ( + time_function, + SyncStatus, + SiteAlreadyPresentError, + SYNC_SERVER_ROOT, +) log = Logger.get_logger("SyncServer") @@ -138,9 +143,23 @@ def initialize(self, module_settings): def get_plugin_paths(self): """Deadline plugin paths.""" - current_dir = os.path.dirname(os.path.abspath(__file__)) return { - "load": [os.path.join(current_dir, "plugins", "load")] + "load": [os.path.join(SYNC_SERVER_ROOT, "plugins", "load")] + } + + def 
get_site_icons(self): + """Icons for sites. + + Returns: + dict[str, str]: Path to icon by site. + """ + + resource_path = os.path.join( + SYNC_SERVER_ROOT, "providers", "resources" + ) + return { + provider: "{}/{}.png".format(resource_path, provider) + for provider in ["studio", "local_drive", "gdrive"] } """ Start of Public API """ @@ -904,10 +923,7 @@ def get_launch_hook_paths(self): (str): full absolut path to directory with hooks for the module """ - return os.path.join( - os.path.dirname(os.path.abspath(__file__)), - "launch_hooks" - ) + return os.path.join(SYNC_SERVER_ROOT, "launch_hooks") # Needs to be refactored after Settings are updated # # Methods for Settings to get appriate values to fill forms diff --git a/openpype/modules/sync_server/utils.py b/openpype/modules/sync_server/utils.py index 4caa01e9d73..b2f855539f0 100644 --- a/openpype/modules/sync_server/utils.py +++ b/openpype/modules/sync_server/utils.py @@ -1,9 +1,12 @@ +import os import time from openpype.lib import Logger log = Logger.get_logger("SyncServer") +SYNC_SERVER_ROOT = os.path.dirname(os.path.abspath(__file__)) + class ResumableError(Exception): """Error which could be temporary, skip current loop, try next time""" diff --git a/openpype/modules/timers_manager/launch_hooks/post_start_timer.py b/openpype/modules/timers_manager/launch_hooks/post_start_timer.py index d6ae0134033..76c3cca33e8 100644 --- a/openpype/modules/timers_manager/launch_hooks/post_start_timer.py +++ b/openpype/modules/timers_manager/launch_hooks/post_start_timer.py @@ -1,4 +1,4 @@ -from openpype.lib import PostLaunchHook +from openpype.lib.applications import PostLaunchHook, LaunchTypes class PostStartTimerHook(PostLaunchHook): @@ -7,6 +7,7 @@ class PostStartTimerHook(PostLaunchHook): This module requires enabled TimerManager module. """ order = None + launch_types = {LaunchTypes.local} def execute(self): project_name = self.data.get("project_name") diff --git a/openpype/pipeline/__init__.py b/openpype/pipeline/__init__.py index 5c15a5fa82d..8f370d389bf 100644 --- a/openpype/pipeline/__init__.py +++ b/openpype/pipeline/__init__.py @@ -13,6 +13,7 @@ BaseCreator, Creator, AutoCreator, + HiddenCreator, CreatedInstance, CreatorError, @@ -93,7 +94,7 @@ get_current_host_name, get_current_project_name, get_current_asset_name, - get_current_task_name, + get_current_task_name ) install = install_host uninstall = uninstall_host @@ -114,6 +115,7 @@ "BaseCreator", "Creator", "AutoCreator", + "HiddenCreator", "CreatedInstance", "CreatorError", diff --git a/openpype/pipeline/colorspace.py b/openpype/pipeline/colorspace.py index 3f2d4891c17..731132911a6 100644 --- a/openpype/pipeline/colorspace.py +++ b/openpype/pipeline/colorspace.py @@ -237,10 +237,17 @@ def get_data_subprocess(config_path, data_type): def compatibility_check(): - """Making sure PyOpenColorIO is importable""" + """checking if user has a compatible PyOpenColorIO >= 2. + + It's achieved by checking if PyOpenColorIO is importable + and calling any version 2 specific function + """ try: - import PyOpenColorIO # noqa: F401 - except ImportError: + import PyOpenColorIO + + # ocio versions lower than 2 will raise AttributeError + PyOpenColorIO.GetVersion() + except (ImportError, AttributeError): return False return True @@ -322,7 +329,8 @@ def get_imageio_config( host_name, project_settings=None, anatomy_data=None, - anatomy=None + anatomy=None, + env=None ): """Returns config data from settings @@ -335,6 +343,7 @@ def get_imageio_config( project_settings (Optional[dict]): Project settings. 
anatomy_data (Optional[dict]): anatomy formatting data. anatomy (Optional[Anatomy]): Anatomy object. + env (Optional[dict]): Environment variables. Returns: dict: config path data or empty dict @@ -407,13 +416,13 @@ def get_imageio_config( if override_global_config: config_data = _get_config_data( - host_ocio_config["filepath"], formatting_data + host_ocio_config["filepath"], formatting_data, env ) else: # get config path from global config_global = imageio_global["ocio_config"] config_data = _get_config_data( - config_global["filepath"], formatting_data + config_global["filepath"], formatting_data, env ) if not config_data: @@ -425,7 +434,7 @@ def get_imageio_config( return config_data -def _get_config_data(path_list, anatomy_data): +def _get_config_data(path_list, anatomy_data, env=None): """Return first existing path in path list. If template is used in path inputs, @@ -435,14 +444,17 @@ def _get_config_data(path_list, anatomy_data): Args: path_list (list[str]): list of abs paths anatomy_data (dict): formatting data + env (Optional[dict]): Environment variables. Returns: dict: config data """ formatting_data = deepcopy(anatomy_data) + environment_vars = env or dict(**os.environ) + # format the path for potential env vars - formatting_data.update(dict(**os.environ)) + formatting_data.update(environment_vars) # first try host config paths for path_ in path_list: diff --git a/openpype/pipeline/context_tools.py b/openpype/pipeline/context_tools.py index c12b76cc74b..f567118062d 100644 --- a/openpype/pipeline/context_tools.py +++ b/openpype/pipeline/context_tools.py @@ -21,6 +21,7 @@ from openpype.lib.events import emit_event from openpype.modules import load_modules, ModulesManager from openpype.settings import get_project_settings +from openpype.tests.lib import is_in_tests from .publish.lib import filter_pyblish_plugins from .anatomy import Anatomy @@ -35,7 +36,7 @@ register_inventory_action_path, register_creator_plugin_path, deregister_loader_plugin_path, - deregister_inventory_action_path, + deregister_inventory_action_path ) @@ -142,6 +143,10 @@ def modified_emit(obj, record): else: pyblish.api.register_target("local") + if is_in_tests(): + print("Registering pyblish target: automated") + pyblish.api.register_target("automated") + project_name = os.environ.get("AVALON_PROJECT") host_name = os.environ.get("AVALON_APP") diff --git a/openpype/pipeline/create/__init__.py b/openpype/pipeline/create/__init__.py index 6755224c194..94d575a7761 100644 --- a/openpype/pipeline/create/__init__.py +++ b/openpype/pipeline/create/__init__.py @@ -2,6 +2,7 @@ SUBSET_NAME_ALLOWED_SYMBOLS, DEFAULT_SUBSET_TEMPLATE, PRE_CREATE_THUMBNAIL_KEY, + DEFAULT_VARIANT_VALUE, ) from .utils import ( @@ -50,6 +51,7 @@ "SUBSET_NAME_ALLOWED_SYMBOLS", "DEFAULT_SUBSET_TEMPLATE", "PRE_CREATE_THUMBNAIL_KEY", + "DEFAULT_VARIANT_VALUE", "get_last_versions_for_instances", "get_next_versions_for_instances", diff --git a/openpype/pipeline/create/constants.py b/openpype/pipeline/create/constants.py index 375cfc4a12f..7d1d0154e9b 100644 --- a/openpype/pipeline/create/constants.py +++ b/openpype/pipeline/create/constants.py @@ -1,10 +1,12 @@ SUBSET_NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_." 
DEFAULT_SUBSET_TEMPLATE = "{family}{Variant}" PRE_CREATE_THUMBNAIL_KEY = "thumbnail_source" +DEFAULT_VARIANT_VALUE = "Main" __all__ = ( "SUBSET_NAME_ALLOWED_SYMBOLS", "DEFAULT_SUBSET_TEMPLATE", "PRE_CREATE_THUMBNAIL_KEY", + "DEFAULT_VARIANT_VALUE", ) diff --git a/openpype/pipeline/create/context.py b/openpype/pipeline/create/context.py index 98fcee5fe53..3076efcde7d 100644 --- a/openpype/pipeline/create/context.py +++ b/openpype/pipeline/create/context.py @@ -1165,8 +1165,8 @@ def from_existing(cls, instance_data, creator): Args: instance_data (Dict[str, Any]): Data in a structure ready for 'CreatedInstance' object. - creator (Creator): Creator plugin which is creating the instance - of for which the instance belong. + creator (BaseCreator): Creator plugin which is creating the + instance of for which the instance belong. """ instance_data = copy.deepcopy(instance_data) @@ -1979,7 +1979,11 @@ def create( if pre_create_data is None: pre_create_data = {} - precreate_attr_defs = creator.get_pre_create_attr_defs() or [] + precreate_attr_defs = [] + # Hidden creators do not have or need the pre-create attributes. + if isinstance(creator, Creator): + precreate_attr_defs = creator.get_pre_create_attr_defs() + # Create default values of precreate data _pre_create_data = get_default_values(precreate_attr_defs) # Update passed precreate data to default values @@ -2121,7 +2125,7 @@ def bulk_instances_collection(self): def reset_instances(self): """Reload instances""" - self._instances_by_id = {} + self._instances_by_id = collections.OrderedDict() # Collect instances error_message = "Collection of instances for creator {} failed. {}" diff --git a/openpype/pipeline/create/creator_plugins.py b/openpype/pipeline/create/creator_plugins.py index c9edbbfd71b..38d6b6f465c 100644 --- a/openpype/pipeline/create/creator_plugins.py +++ b/openpype/pipeline/create/creator_plugins.py @@ -1,4 +1,3 @@ -import os import copy import collections @@ -20,6 +19,7 @@ deregister_plugin_path ) +from .constants import DEFAULT_VARIANT_VALUE from .subset_name import get_subset_name from .utils import get_next_versions_for_instances from .legacy_create import LegacyCreator @@ -517,7 +517,7 @@ class Creator(BaseCreator): default_variants = [] # Default variant used in 'get_default_variant' - default_variant = None + _default_variant = None # Short description of family # - may not be used if `get_description` is overriden @@ -543,6 +543,21 @@ class Creator(BaseCreator): # - similar to instance attribute definitions pre_create_attr_defs = [] + def __init__(self, *args, **kwargs): + cls = self.__class__ + + # Fix backwards compatibility for plugins which override + # 'default_variant' attribute directly + if not isinstance(cls.default_variant, property): + # Move value from 'default_variant' to '_default_variant' + self._default_variant = self.default_variant + # Create property 'default_variant' on the class + cls.default_variant = property( + cls._get_default_variant_wrap, + cls._set_default_variant_wrap + ) + super(Creator, self).__init__(*args, **kwargs) + @property def show_order(self): """Order in which is creator shown in UI. @@ -595,10 +610,10 @@ def get_detail_description(self): def get_default_variants(self): """Default variant values for UI tooltips. - Replacement of `defatults` attribute. Using method gives ability to - have some "logic" other than attribute values. + Replacement of `default_variants` attribute. Using method gives + ability to have some "logic" other than attribute values. 
- By default returns `default_variants` value. + By default, returns `default_variants` value. Returns: List[str]: Whisper variants for user input. @@ -606,17 +621,63 @@ def get_default_variants(self): return copy.deepcopy(self.default_variants) - def get_default_variant(self): + def get_default_variant(self, only_explicit=False): """Default variant value that will be used to prefill variant input. This is for user input and value may not be content of result from `get_default_variants`. - Can return `None`. In that case first element from - `get_default_variants` should be used. + Note: + This method does not allow to have empty string as + default variant. + + Args: + only_explicit (Optional[bool]): If True, only explicit default + variant from '_default_variant' will be returned. + + Returns: + str: Variant value. """ - return self.default_variant + if only_explicit or self._default_variant: + return self._default_variant + + for variant in self.get_default_variants(): + return variant + return DEFAULT_VARIANT_VALUE + + def _get_default_variant_wrap(self): + """Default variant value that will be used to prefill variant input. + + Wrapper for 'get_default_variant'. + + Notes: + This method is wrapper for 'get_default_variant' + for 'default_variant' property, so creator can override + the method. + + Returns: + str: Variant value. + """ + + return self.get_default_variant() + + def _set_default_variant_wrap(self, variant): + """Set default variant value. + + This method is needed for automated settings overrides which are + changing attributes based on keys in settings. + + Args: + variant (str): New default variant value. + """ + + self._default_variant = variant + + default_variant = property( + _get_default_variant_wrap, + _set_default_variant_wrap + ) def get_pre_create_attr_defs(self): """Plugin attribute definitions needed for creation. 
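Aside: the `default_variant` change in `creator_plugins.py` above promotes a plain class attribute to a property the first time the class is instantiated, so legacy plugins that assign `default_variant` directly keep working while new code goes through `get_default_variant`. Below is a minimal standalone sketch of that pattern; the class names (`SketchCreator`, `LegacyCreator`) are simplified illustrations, not the actual OpenPype classes, and the `"Main"` fallback stands in for `DEFAULT_VARIANT_VALUE` from the diff.

```python
# Minimal sketch of the backwards-compatibility pattern from the
# creator_plugins.py hunk above. Class names are hypothetical.
class SketchCreator(object):
    default_variants = []
    _default_variant = None

    def __init__(self):
        cls = self.__class__
        # A legacy subclass overrode 'default_variant' with a plain value:
        # stash that value and replace the attribute with a property.
        if not isinstance(cls.default_variant, property):
            self._default_variant = cls.default_variant
            cls.default_variant = property(
                cls._get_default_variant_wrap,
                cls._set_default_variant_wrap,
            )

    def get_default_variant(self):
        # Explicit value wins, then the first default variant, then a
        # hardcoded fallback ("Main" via DEFAULT_VARIANT_VALUE in the diff).
        if self._default_variant:
            return self._default_variant
        for variant in self.default_variants:
            return variant
        return "Main"

    def _get_default_variant_wrap(self):
        return self.get_default_variant()

    def _set_default_variant_wrap(self, variant):
        self._default_variant = variant

    default_variant = property(
        _get_default_variant_wrap,
        _set_default_variant_wrap,
    )


class LegacyCreator(SketchCreator):
    # Old-style override; still works after the property swap in __init__.
    default_variant = "Legacy"


creator = LegacyCreator()
print(creator.default_variant)   # "Legacy" - routed through the property
creator.default_variant = "Alt"  # the generated setter is also available
print(creator.default_variant)   # "Alt"
```

Because the swap installs a real setter on the class, automated settings overrides can keep assigning to `default_variant` as if it were still a plain attribute, which is exactly the motivation stated in the `_set_default_variant_wrap` docstring above.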
diff --git a/openpype/pipeline/farm/pyblish_functions.py b/openpype/pipeline/farm/pyblish_functions.py index 2df8269d792..fe3ab97de89 100644 --- a/openpype/pipeline/farm/pyblish_functions.py +++ b/openpype/pipeline/farm/pyblish_functions.py @@ -116,8 +116,8 @@ def get_time_data_from_instance_or_context(instance): instance.context.data.get("fps")), handle_start=(instance.data.get("handleStart") or instance.context.data.get("handleStart")), # noqa: E501 - handle_end=(instance.data.get("handleStart") or - instance.context.data.get("handleStart")) + handle_end=(instance.data.get("handleEnd") or + instance.context.data.get("handleEnd")) ) @@ -139,7 +139,7 @@ def get_transferable_representations(instance): to_transfer = [] for representation in instance.data.get("representations", []): - if "publish_on_farm" not in representation.get("tags"): + if "publish_on_farm" not in representation.get("tags", []): continue trans_rep = representation.copy() @@ -212,7 +212,7 @@ def create_skeleton_instance( "This may cause issues.").format(source)) family = ("render" - if "prerender" not in instance.data["families"] + if "prerender.farm" not in instance.data["families"] else "prerender") families = [family] @@ -265,8 +265,10 @@ def create_skeleton_instance( instance_skeleton_data[v] = instance.data.get(v) representations = get_transferable_representations(instance) - instance_skeleton_data["representations"] = [] - instance_skeleton_data["representations"] += representations + instance_skeleton_data["representations"] = representations + + persistent = instance.data.get("stagingDir_persistent") is True + instance_skeleton_data["stagingDir_persistent"] = persistent return instance_skeleton_data @@ -565,9 +567,15 @@ def _create_instances_for_aov(instance, skeleton, aov_filter, additional_data, col = list(cols[0]) # create subset name `familyTaskSubset_AOV` - group_name = 'render{}{}{}{}'.format( - task[0].upper(), task[1:], - subset[0].upper(), subset[1:]) + # TODO refactor/remove me + family = skeleton["family"] + if not subset.startswith(family): + group_name = '{}{}{}{}{}'.format( + family, + task[0].upper(), task[1:], + subset[0].upper(), subset[1:]) + else: + group_name = subset # if there are multiple cameras, we need to add camera name if isinstance(col, (list, tuple)): diff --git a/openpype/pipeline/publish/abstract_collect_render.py b/openpype/pipeline/publish/abstract_collect_render.py index 6877d556c36..8a26402bd80 100644 --- a/openpype/pipeline/publish/abstract_collect_render.py +++ b/openpype/pipeline/publish/abstract_collect_render.py @@ -75,7 +75,6 @@ class RenderInstance(object): tilesY = attr.ib(default=0) # number of tiles in Y # submit_publish_job - toBeRenderedOn = attr.ib(default=None) deadlineSubmissionJob = attr.ib(default=None) anatomyData = attr.ib(default=None) outputDir = attr.ib(default=None) diff --git a/openpype/pipeline/publish/lib.py b/openpype/pipeline/publish/lib.py index 2768fe3fa1c..1ae6ea43b27 100644 --- a/openpype/pipeline/publish/lib.py +++ b/openpype/pipeline/publish/lib.py @@ -464,9 +464,8 @@ def apply_plugin_settings_automatically(plugin, settings, logger=None): for option, value in settings.items(): if logger: - logger.debug("Plugin {} - Attr: {} -> {}".format( - option, value, plugin.__name__ - )) + logger.debug("Plugin %s - Attr: %s -> %s", + plugin.__name__, option, value) setattr(plugin, option, value) @@ -537,44 +536,24 @@ class method 'def apply_settings(cls, project_settings, system_settings)' plugins.remove(plugin) -def find_close_plugin(close_plugin_name, 
log): - if close_plugin_name: - plugins = pyblish.api.discover() - for plugin in plugins: - if plugin.__name__ == close_plugin_name: - return plugin - - log.debug("Close plugin not found, app might not close.") - - -def remote_publish(log, close_plugin_name=None, raise_error=False): +def remote_publish(log): """Loops through all plugins, logs to console. Used for tests. Args: log (Logger) - close_plugin_name (str): name of plugin with responsibility to - close host app """ - # Error exit as soon as any error occurs. - error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}" - close_plugin = find_close_plugin(close_plugin_name, log) + # Error exit as soon as any error occurs. + error_format = "Failed {plugin.__name__}: {error}\n{error.traceback}" for result in pyblish.util.publish_iter(): - for record in result["records"]: - log.info("{}: {}".format( - result["plugin"].label, record.msg)) + if not result["error"]: + continue - if result["error"]: - error_message = error_format.format(**result) - log.error(error_message) - if close_plugin: # close host app explicitly after error - context = pyblish.api.Context() - close_plugin().process(context) - if raise_error: - # Fatal Error is because of Deadline - error_message = "Fatal Error: " + error_format.format(**result) - raise RuntimeError(error_message) + error_message = error_format.format(**result) + log.error(error_message) + # 'Fatal Error: ' is because of Deadline + raise RuntimeError("Fatal Error: {}".format(error_message)) def get_errored_instances_from_context(context, plugin=None): @@ -973,6 +952,7 @@ def _clean_name(path): return file_path + def add_repre_files_for_cleanup(instance, repre): """ Explicitly mark repre files to be deleted. @@ -981,7 +961,16 @@ def add_repre_files_for_cleanup(instance, repre): """ files = repre["files"] staging_dir = repre.get("stagingDir") - if not staging_dir: + + # first make sure representation level is not persistent + if ( + not staging_dir + or repre.get("stagingDir_persistent") + ): + return + + # then look into instance level if it's not persistent + if instance.data.get("stagingDir_persistent"): return if isinstance(files, str): diff --git a/openpype/pipeline/schema.py b/openpype/pipeline/schema/__init__.py similarity index 92% rename from openpype/pipeline/schema.py rename to openpype/pipeline/schema/__init__.py index 7e96bfe1b1c..d7b33f26219 100644 --- a/openpype/pipeline/schema.py +++ b/openpype/pipeline/schema/__init__.py @@ -24,6 +24,7 @@ ValidationError = jsonschema.ValidationError SchemaError = jsonschema.SchemaError +CURRENT_DIR = os.path.dirname(os.path.abspath(__file__)) _CACHED = False @@ -121,17 +122,14 @@ def _precache(): """Store available schemas in-memory for reduced disk access""" global _CACHED - repos_root = os.environ["OPENPYPE_REPOS_ROOT"] - schema_dir = os.path.join(repos_root, "schema") - - for schema in os.listdir(schema_dir): + for schema in os.listdir(CURRENT_DIR): if schema.startswith(("_", ".")): continue if not schema.endswith(".json"): continue - if not os.path.isfile(os.path.join(schema_dir, schema)): + if not os.path.isfile(os.path.join(CURRENT_DIR, schema)): continue - with open(os.path.join(schema_dir, schema)) as f: + with open(os.path.join(CURRENT_DIR, schema)) as f: log_.debug("Installing schema '%s'.." 
% schema) _cache[schema] = json.load(f) _CACHED = True diff --git a/openpype/pipeline/schema/application-1.0.json b/openpype/pipeline/schema/application-1.0.json new file mode 100644 index 00000000000..953abee569c --- /dev/null +++ b/openpype/pipeline/schema/application-1.0.json @@ -0,0 +1,68 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:application-1.0", + "description": "An application definition.", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "label", + "application_dir", + "executable" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "label": { + "description": "Nice name of application.", + "type": "string" + }, + "application_dir": { + "description": "Name of directory used for application resources.", + "type": "string" + }, + "executable": { + "description": "Name of callable executable, this is called to launch the application", + "type": "string" + }, + "description": { + "description": "Description of application.", + "type": "string" + }, + "environment": { + "description": "Key/value pairs for environment variables related to this application. Supports lists for paths, such as PYTHONPATH.", + "type": "object", + "items": { + "oneOf": [ + {"type": "string"}, + {"type": "array", "items": {"type": "string"}} + ] + } + }, + "default_dirs": { + "type": "array", + "items": { + "type": "string" + } + }, + "copy": { + "type": "object", + "patternProperties": { + "^.*$": { + "anyOf": [ + {"type": "string"}, + {"type": "null"} + ] + } + }, + "additionalProperties": false + } + } +} diff --git a/openpype/pipeline/schema/asset-1.0.json b/openpype/pipeline/schema/asset-1.0.json new file mode 100644 index 00000000000..ab104c002a1 --- /dev/null +++ b/openpype/pipeline/schema/asset-1.0.json @@ -0,0 +1,35 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:asset-1.0", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "name", + "subsets" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "name": { + "description": "Name of directory", + "type": "string" + }, + "subsets": { + "type": "array", + "items": { + "$ref": "subset.json" + } + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/asset-2.0.json b/openpype/pipeline/schema/asset-2.0.json new file mode 100644 index 00000000000..b894d797927 --- /dev/null +++ b/openpype/pipeline/schema/asset-2.0.json @@ -0,0 +1,55 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:asset-2.0", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "name", + "silo", + "data" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:asset-2.0"], + "example": "openpype:asset-2.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["asset"], + "example": "asset" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of asset", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "Bruce" + }, + "silo": { + "description": "Group or container of asset", + "type": "string", + "example": "assets" + }, + "data": { 
+ "description": "Document metadata", + "type": "object", + "example": {"key": "value"} + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/asset-3.0.json b/openpype/pipeline/schema/asset-3.0.json new file mode 100644 index 00000000000..948704d2a17 --- /dev/null +++ b/openpype/pipeline/schema/asset-3.0.json @@ -0,0 +1,55 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:asset-3.0", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:asset-3.0"], + "example": "openpype:asset-3.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["asset"], + "example": "asset" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of asset", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "Bruce" + }, + "silo": { + "description": "Group or container of asset", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "assets" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": {"key": "value"} + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/config-1.0.json b/openpype/pipeline/schema/config-1.0.json new file mode 100644 index 00000000000..49398a57cd7 --- /dev/null +++ b/openpype/pipeline/schema/config-1.0.json @@ -0,0 +1,85 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:config-1.0", + "description": "A project configuration.", + + "type": "object", + + "additionalProperties": false, + "required": [ + "tasks", + "apps" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "template": { + "type": "object", + "additionalProperties": false, + "patternProperties": { + "^.*$": { + "type": "string" + } + } + }, + "tasks": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": ["name"] + } + }, + "apps": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": ["name"] + } + }, + "families": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "label": {"type": "string"}, + "hideFilter": {"type": "boolean"} + }, + "required": ["name"] + } + }, + "groups": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "color": {"type": "string"}, + "order": {"type": ["integer", "number"]} + }, + "required": ["name"] + } + }, + "copy": { + "type": "object" + } + } +} diff --git a/openpype/pipeline/schema/config-1.1.json b/openpype/pipeline/schema/config-1.1.json new file mode 100644 index 00000000000..6e15514aafc --- /dev/null +++ b/openpype/pipeline/schema/config-1.1.json @@ -0,0 +1,87 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:config-1.1", + "description": "A project configuration.", + + "type": "object", + + 
"additionalProperties": false, + "required": [ + "tasks", + "apps" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "template": { + "type": "object", + "additionalProperties": false, + "patternProperties": { + "^.*$": { + "type": "string" + } + } + }, + "tasks": { + "type": "object", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": [ + "short_name" + ] + } + }, + "apps": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": ["name"] + } + }, + "families": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "label": {"type": "string"}, + "hideFilter": {"type": "boolean"} + }, + "required": ["name"] + } + }, + "groups": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "color": {"type": "string"}, + "order": {"type": ["integer", "number"]} + }, + "required": ["name"] + } + }, + "copy": { + "type": "object" + } + } +} diff --git a/openpype/pipeline/schema/config-2.0.json b/openpype/pipeline/schema/config-2.0.json new file mode 100644 index 00000000000..54b226711ac --- /dev/null +++ b/openpype/pipeline/schema/config-2.0.json @@ -0,0 +1,87 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:config-2.0", + "description": "A project configuration.", + + "type": "object", + + "additionalProperties": false, + "required": [ + "tasks", + "apps" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "templates": { + "type": "object" + }, + "roots": { + "type": "object" + }, + "imageio": { + "type": "object" + }, + "tasks": { + "type": "object", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": [ + "short_name" + ] + } + }, + "apps": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": ["name"] + } + }, + "families": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "label": {"type": "string"}, + "hideFilter": {"type": "boolean"} + }, + "required": ["name"] + } + }, + "groups": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "color": {"type": "string"}, + "order": {"type": ["integer", "number"]} + }, + "required": ["name"] + } + }, + "copy": { + "type": "object" + } + } +} diff --git a/openpype/pipeline/schema/container-1.0.json b/openpype/pipeline/schema/container-1.0.json new file mode 100644 index 00000000000..012e8499e69 --- /dev/null +++ b/openpype/pipeline/schema/container-1.0.json @@ -0,0 +1,100 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:container-1.0", + "description": "A loaded asset", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "id", + 
"objectName", + "name", + "author", + "loader", + "families", + "time", + "subset", + "asset", + "representation", + "version", + "silo", + "path", + "source" + ], + "properties": { + "id": { + "description": "Identifier for finding object in host", + "type": "string", + "enum": ["pyblish.mindbender.container"], + "example": "pyblish.mindbender.container" + }, + "objectName": { + "description": "Name of internal object, such as the objectSet in Maya.", + "type": "string", + "example": "Bruce_:rigDefault_CON" + }, + "name": { + "description": "Full name of application object", + "type": "string", + "example": "modelDefault" + }, + "author": { + "description": "Name of the author of the published version", + "type": "string", + "example": "Marcus Ottosson" + }, + "loader": { + "description": "Name of loader plug-in used to produce this container", + "type": "string", + "example": "ModelLoader" + }, + "families": { + "description": "Families associated with the this subset", + "type": "string", + "example": "mindbender.model" + }, + "time": { + "description": "File-system safe, formatted time", + "type": "string", + "example": "20170329T131545Z" + }, + "subset": { + "description": "Name of source subset", + "type": "string", + "example": "modelDefault" + }, + "asset": { + "description": "Name of source asset", + "type": "string" , + "example": "Bruce" + }, + "representation": { + "description": "Name of source representation", + "type": "string" , + "example": ".ma" + }, + "version": { + "description": "Version number", + "type": "number", + "example": 12 + }, + "silo": { + "description": "Silo of parent asset", + "type": "string", + "example": "assets" + }, + "path": { + "description": "Absolute path on disk", + "type": "string", + "example": "{root}/assets/Bruce/publish/rigDefault/v002" + }, + "source": { + "description": "Absolute path to file from which this version was published", + "type": "string", + "example": "{root}/assets/Bruce/work/rigging/maya/scenes/rig_v001.ma" + } + } +} diff --git a/openpype/pipeline/schema/container-2.0.json b/openpype/pipeline/schema/container-2.0.json new file mode 100644 index 00000000000..1673ee5d1de --- /dev/null +++ b/openpype/pipeline/schema/container-2.0.json @@ -0,0 +1,59 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:container-2.0", + "description": "A loaded asset", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "id", + "objectName", + "name", + "namespace", + "loader", + "representation" + ], + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:container-2.0"], + "example": "openpype:container-2.0" + }, + "id": { + "description": "Identifier for finding object in host", + "type": "string", + "enum": ["pyblish.avalon.container"], + "example": "pyblish.avalon.container" + }, + "objectName": { + "description": "Name of internal object, such as the objectSet in Maya.", + "type": "string", + "example": "Bruce_:rigDefault_CON" + }, + "loader": { + "description": "Name of loader plug-in used to produce this container", + "type": "string", + "example": "ModelLoader" + }, + "name": { + "description": "Internal object name of container in application", + "type": "string", + "example": "modelDefault_01" + }, + "namespace": { + "description": "Internal namespace of container in application", + "type": "string", + "example": "Bruce_" + }, + "representation": { + "description": "Unique id of representation in 
database", + "type": "string", + "example": "59523f355f8c1b5f6c5e8348" + } + } +} diff --git a/openpype/pipeline/schema/hero_version-1.0.json b/openpype/pipeline/schema/hero_version-1.0.json new file mode 100644 index 00000000000..b720dc28874 --- /dev/null +++ b/openpype/pipeline/schema/hero_version-1.0.json @@ -0,0 +1,44 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:hero_version-1.0", + "description": "Hero version of asset", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "version_id", + "schema", + "type", + "parent" + ], + + "properties": { + "_id": { + "description": "Document's id (database will create it's if not entered)", + "example": "ObjectId(592c33475f8c1b064c4d1696)" + }, + "version_id": { + "description": "The version ID from which it was created", + "example": "ObjectId(592c33475f8c1b064c4d1695)" + }, + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:hero_version-1.0"], + "example": "openpype:hero_version-1.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["hero_version"], + "example": "hero_version" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "ObjectId(592c33475f8c1b064c4d1697)" + } + } +} diff --git a/openpype/pipeline/schema/inventory-1.0.json b/openpype/pipeline/schema/inventory-1.0.json new file mode 100644 index 00000000000..2fe78794ab2 --- /dev/null +++ b/openpype/pipeline/schema/inventory-1.0.json @@ -0,0 +1,10 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:config-1.0", + "description": "A project configuration.", + + "type": "object", + + "additionalProperties": true +} diff --git a/openpype/pipeline/schema/inventory-1.1.json b/openpype/pipeline/schema/inventory-1.1.json new file mode 100644 index 00000000000..b61a76b32af --- /dev/null +++ b/openpype/pipeline/schema/inventory-1.1.json @@ -0,0 +1,10 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:config-1.1", + "description": "A project configuration.", + + "type": "object", + + "additionalProperties": true +} diff --git a/openpype/pipeline/schema/project-2.0.json b/openpype/pipeline/schema/project-2.0.json new file mode 100644 index 00000000000..0ed5a55599c --- /dev/null +++ b/openpype/pipeline/schema/project-2.0.json @@ -0,0 +1,86 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:project-2.0", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "name", + "data", + "config" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:project-2.0"], + "example": "openpype:project-2.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["project"], + "example": "project" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of directory", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "hulk" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": { + "fps": 24, + "width": 1920, + "height": 1080 + } + }, + "config": { + "type": "object", + "description": "Document metadata", + "example": { + "schema": "openpype:config-1.0", + "apps": [ + { + "name": "maya2016", 
+ "label": "Autodesk Maya 2016" + }, + { + "name": "nuke10", + "label": "The Foundry Nuke 10.0" + } + ], + "tasks": [ + {"name": "model"}, + {"name": "render"}, + {"name": "animate"}, + {"name": "rig"}, + {"name": "lookdev"}, + {"name": "layout"} + ], + "template": { + "work": + "{root}/{project}/{silo}/{asset}/work/{task}/{app}", + "publish": + "{root}/{project}/{silo}/{asset}/publish/{subset}/v{version:0>3}/{subset}.{representation}" + } + }, + "$ref": "config-1.0.json" + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/project-2.1.json b/openpype/pipeline/schema/project-2.1.json new file mode 100644 index 00000000000..9413c9f6913 --- /dev/null +++ b/openpype/pipeline/schema/project-2.1.json @@ -0,0 +1,86 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:project-2.1", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "name", + "data", + "config" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:project-2.1"], + "example": "openpype:project-2.1" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["project"], + "example": "project" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of directory", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "hulk" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": { + "fps": 24, + "width": 1920, + "height": 1080 + } + }, + "config": { + "type": "object", + "description": "Document metadata", + "example": { + "schema": "openpype:config-1.1", + "apps": [ + { + "name": "maya2016", + "label": "Autodesk Maya 2016" + }, + { + "name": "nuke10", + "label": "The Foundry Nuke 10.0" + } + ], + "tasks": { + "Model": {"short_name": "mdl"}, + "Render": {"short_name": "rnd"}, + "Animate": {"short_name": "anim"}, + "Rig": {"short_name": "rig"}, + "Lookdev": {"short_name": "look"}, + "Layout": {"short_name": "lay"} + }, + "template": { + "work": + "{root}/{project}/{silo}/{asset}/work/{task}/{app}", + "publish": + "{root}/{project}/{silo}/{asset}/publish/{subset}/v{version:0>3}/{subset}.{representation}" + } + }, + "$ref": "config-1.1.json" + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/project-3.0.json b/openpype/pipeline/schema/project-3.0.json new file mode 100644 index 00000000000..be23e10c936 --- /dev/null +++ b/openpype/pipeline/schema/project-3.0.json @@ -0,0 +1,59 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:project-3.0", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "name", + "data", + "config" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:project-3.0"], + "example": "openpype:project-3.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["project"], + "example": "project" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of directory", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "hulk" + }, + "data": { + "description": "Document metadata", + "type": 
"object", + "example": { + "fps": 24, + "width": 1920, + "height": 1080 + } + }, + "config": { + "type": "object", + "description": "Document metadata", + "$ref": "config-2.0.json" + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/representation-1.0.json b/openpype/pipeline/schema/representation-1.0.json new file mode 100644 index 00000000000..347c585f52f --- /dev/null +++ b/openpype/pipeline/schema/representation-1.0.json @@ -0,0 +1,28 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:representation-1.0", + "description": "The inverse of an instance", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "format", + "path" + ], + + "properties": { + "schema": {"type": "string"}, + "format": { + "description": "File extension, including '.'", + "type": "string" + }, + "path": { + "description": "Unformatted path to version.", + "type": "string" + } + } +} diff --git a/openpype/pipeline/schema/representation-2.0.json b/openpype/pipeline/schema/representation-2.0.json new file mode 100644 index 00000000000..f47c16a10a5 --- /dev/null +++ b/openpype/pipeline/schema/representation-2.0.json @@ -0,0 +1,78 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:representation-2.0", + "description": "The inverse of an instance", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "parent", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:representation-2.0"], + "example": "openpype:representation-2.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["representation"], + "example": "representation" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of representation", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "abc" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": { + "label": "Alembic" + } + }, + "dependencies": { + "description": "Other representation that this representation depends on", + "type": "array", + "items": {"type": "string"}, + "example": [ + "592d547a5f8c1b388093c145" + ] + }, + "context": { + "description": "Summary of the context to which this representation belong.", + "type": "object", + "properties": { + "project": {"type": "object"}, + "asset": {"type": "string"}, + "silo": {"type": ["string", "null"]}, + "subset": {"type": "string"}, + "version": {"type": "number"}, + "representation": {"type": "string"} + }, + "example": { + "project": "hulk", + "asset": "Bruce", + "silo": "assets", + "subset": "rigDefault", + "version": 12, + "representation": "ma" + } + } + } +} diff --git a/openpype/pipeline/schema/session-1.0.json b/openpype/pipeline/schema/session-1.0.json new file mode 100644 index 00000000000..5ced0a6f085 --- /dev/null +++ b/openpype/pipeline/schema/session-1.0.json @@ -0,0 +1,143 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:session-1.0", + "description": "The Avalon environment", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "AVALON_PROJECTS", + "AVALON_PROJECT", + "AVALON_ASSET", + "AVALON_SILO", + "AVALON_CONFIG" + ], + + "properties": { + "AVALON_PROJECTS": { + "description": "Absolute path to root of project directories", + "type": 
"string", + "example": "/nas/projects" + }, + "AVALON_PROJECT": { + "description": "Name of project", + "type": "string", + "pattern": "^\\w*$", + "example": "Hulk" + }, + "AVALON_ASSET": { + "description": "Name of asset", + "type": "string", + "pattern": "^\\w*$", + "example": "Bruce" + }, + "AVALON_SILO": { + "description": "Name of asset group or container", + "type": "string", + "pattern": "^\\w*$", + "example": "assets" + }, + "AVALON_TASK": { + "description": "Name of task", + "type": "string", + "pattern": "^\\w*$", + "example": "modeling" + }, + "AVALON_CONFIG": { + "description": "Name of Avalon configuration", + "type": "string", + "pattern": "^\\w*$", + "example": "polly" + }, + "AVALON_APP": { + "description": "Name of application", + "type": "string", + "pattern": "^\\w*$", + "example": "maya2016" + }, + "AVALON_MONGO": { + "description": "Address to the asset database", + "type": "string", + "pattern": "^mongodb://[\\w/@:.]*$", + "example": "mongodb://localhost:27017", + "default": "mongodb://localhost:27017" + }, + "AVALON_DB": { + "description": "Name of database", + "type": "string", + "pattern": "^\\w*$", + "example": "avalon", + "default": "avalon" + }, + "AVALON_LABEL": { + "description": "Nice name of Avalon, used in e.g. graphical user interfaces", + "type": "string", + "example": "Mindbender", + "default": "Avalon" + }, + "AVALON_SENTRY": { + "description": "Address to Sentry", + "type": "string", + "pattern": "^http[\\w/@:.]*$", + "example": "https://5b872b280de742919b115bdc8da076a5:8d278266fe764361b8fa6024af004a9c@logs.mindbender.com/2", + "default": null + }, + "AVALON_DEADLINE": { + "description": "Address to Deadline", + "type": "string", + "pattern": "^http[\\w/@:.]*$", + "example": "http://192.168.99.101", + "default": null + }, + "AVALON_TIMEOUT": { + "description": "Wherever there is a need for a timeout, this is the default value.", + "type": "string", + "pattern": "^[0-9]*$", + "default": "1000", + "example": "1000" + }, + "AVALON_UPLOAD": { + "description": "Boolean of whether to upload published material to central asset repository", + "type": "string", + "default": null, + "example": "True" + }, + "AVALON_USERNAME": { + "description": "Generic username", + "type": "string", + "pattern": "^\\w*$", + "default": "avalon", + "example": "myself" + }, + "AVALON_PASSWORD": { + "description": "Generic password", + "type": "string", + "pattern": "^\\w*$", + "default": "secret", + "example": "abc123" + }, + "AVALON_INSTANCE_ID": { + "description": "Unique identifier for instances in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.instance", + "example": "avalon.instance" + }, + "AVALON_CONTAINER_ID": { + "description": "Unique identifier for a loaded representation in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.container", + "example": "avalon.container" + }, + "AVALON_DEBUG": { + "description": "Enable debugging mode. Some applications may use this for e.g. 
extended verbosity or mock plug-ins.", + "type": "string", + "default": null, + "example": "True" + } + } +} diff --git a/openpype/pipeline/schema/session-2.0.json b/openpype/pipeline/schema/session-2.0.json new file mode 100644 index 00000000000..0a4d51beb2b --- /dev/null +++ b/openpype/pipeline/schema/session-2.0.json @@ -0,0 +1,134 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:session-2.0", + "description": "The Avalon environment", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "AVALON_PROJECT", + "AVALON_ASSET", + "AVALON_CONFIG" + ], + + "properties": { + "AVALON_PROJECTS": { + "description": "Absolute path to root of project directories", + "type": "string", + "example": "/nas/projects" + }, + "AVALON_PROJECT": { + "description": "Name of project", + "type": "string", + "pattern": "^\\w*$", + "example": "Hulk" + }, + "AVALON_ASSET": { + "description": "Name of asset", + "type": "string", + "pattern": "^\\w*$", + "example": "Bruce" + }, + "AVALON_SILO": { + "description": "Name of asset group or container", + "type": "string", + "pattern": "^\\w*$", + "example": "assets" + }, + "AVALON_TASK": { + "description": "Name of task", + "type": "string", + "pattern": "^\\w*$", + "example": "modeling" + }, + "AVALON_CONFIG": { + "description": "Name of Avalon configuration", + "type": "string", + "pattern": "^\\w*$", + "example": "polly" + }, + "AVALON_APP": { + "description": "Name of application", + "type": "string", + "pattern": "^\\w*$", + "example": "maya2016" + }, + "AVALON_DB": { + "description": "Name of database", + "type": "string", + "pattern": "^\\w*$", + "example": "avalon", + "default": "avalon" + }, + "AVALON_LABEL": { + "description": "Nice name of Avalon, used in e.g. graphical user interfaces", + "type": "string", + "example": "Mindbender", + "default": "Avalon" + }, + "AVALON_SENTRY": { + "description": "Address to Sentry", + "type": "string", + "pattern": "^http[\\w/@:.]*$", + "example": "https://5b872b280de742919b115bdc8da076a5:8d278266fe764361b8fa6024af004a9c@logs.mindbender.com/2", + "default": null + }, + "AVALON_DEADLINE": { + "description": "Address to Deadline", + "type": "string", + "pattern": "^http[\\w/@:.]*$", + "example": "http://192.168.99.101", + "default": null + }, + "AVALON_TIMEOUT": { + "description": "Wherever there is a need for a timeout, this is the default value.", + "type": "string", + "pattern": "^[0-9]*$", + "default": "1000", + "example": "1000" + }, + "AVALON_UPLOAD": { + "description": "Boolean of whether to upload published material to central asset repository", + "type": "string", + "default": null, + "example": "True" + }, + "AVALON_USERNAME": { + "description": "Generic username", + "type": "string", + "pattern": "^\\w*$", + "default": "avalon", + "example": "myself" + }, + "AVALON_PASSWORD": { + "description": "Generic password", + "type": "string", + "pattern": "^\\w*$", + "default": "secret", + "example": "abc123" + }, + "AVALON_INSTANCE_ID": { + "description": "Unique identifier for instances in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.instance", + "example": "avalon.instance" + }, + "AVALON_CONTAINER_ID": { + "description": "Unique identifier for a loaded representation in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.container", + "example": "avalon.container" + }, + "AVALON_DEBUG": { + "description": "Enable debugging mode. Some applications may use this for e.g. 
extended verbosity or mock plug-ins.", + "type": "string", + "default": null, + "example": "True" + } + } +} diff --git a/openpype/pipeline/schema/session-3.0.json b/openpype/pipeline/schema/session-3.0.json new file mode 100644 index 00000000000..9f785939e4b --- /dev/null +++ b/openpype/pipeline/schema/session-3.0.json @@ -0,0 +1,81 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:session-3.0", + "description": "The Avalon environment", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "AVALON_PROJECT", + "AVALON_ASSET" + ], + + "properties": { + "AVALON_PROJECTS": { + "description": "Absolute path to root of project directories", + "type": "string", + "example": "/nas/projects" + }, + "AVALON_PROJECT": { + "description": "Name of project", + "type": "string", + "pattern": "^\\w*$", + "example": "Hulk" + }, + "AVALON_ASSET": { + "description": "Name of asset", + "type": "string", + "pattern": "^\\w*$", + "example": "Bruce" + }, + "AVALON_TASK": { + "description": "Name of task", + "type": "string", + "pattern": "^\\w*$", + "example": "modeling" + }, + "AVALON_APP": { + "description": "Name of host", + "type": "string", + "pattern": "^\\w*$", + "example": "maya2016" + }, + "AVALON_DB": { + "description": "Name of database", + "type": "string", + "pattern": "^\\w*$", + "example": "avalon", + "default": "avalon" + }, + "AVALON_LABEL": { + "description": "Nice name of Avalon, used in e.g. graphical user interfaces", + "type": "string", + "example": "Mindbender", + "default": "Avalon" + }, + "AVALON_TIMEOUT": { + "description": "Wherever there is a need for a timeout, this is the default value.", + "type": "string", + "pattern": "^[0-9]*$", + "default": "1000", + "example": "1000" + }, + "AVALON_INSTANCE_ID": { + "description": "Unique identifier for instances in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.instance", + "example": "avalon.instance" + }, + "AVALON_CONTAINER_ID": { + "description": "Unique identifier for a loaded representation in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.container", + "example": "avalon.container" + } + } +} diff --git a/openpype/pipeline/schema/shaders-1.0.json b/openpype/pipeline/schema/shaders-1.0.json new file mode 100644 index 00000000000..7102ba1861a --- /dev/null +++ b/openpype/pipeline/schema/shaders-1.0.json @@ -0,0 +1,32 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:shaders-1.0", + "description": "Relationships between shaders and Avalon IDs", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "shader" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "shader": { + "description": "Name of directory", + "type": "array", + "items": { + "type": "string", + "description": "Avalon ID and optional face indexes, e.g. 
'f9520572-ac1d-11e6-b39e-3085a99791c9.f[5002:5185]'" + } + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/subset-1.0.json b/openpype/pipeline/schema/subset-1.0.json new file mode 100644 index 00000000000..a299a6d341e --- /dev/null +++ b/openpype/pipeline/schema/subset-1.0.json @@ -0,0 +1,35 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:subset-1.0", + "description": "A container of instances", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "name", + "versions" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "name": { + "description": "Name of directory", + "type": "string" + }, + "versions": { + "type": "array", + "items": { + "$ref": "version.json" + } + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/subset-2.0.json b/openpype/pipeline/schema/subset-2.0.json new file mode 100644 index 00000000000..db256ec7fb4 --- /dev/null +++ b/openpype/pipeline/schema/subset-2.0.json @@ -0,0 +1,51 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:subset-2.0", + "description": "A container of instances", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "parent", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:subset-2.0"], + "example": "openpype:subset-2.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["subset"], + "example": "subset" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of directory", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "shot01" + }, + "data": { + "type": "object", + "description": "Document metadata", + "example": { + "frameStart": 1000, + "frameEnd": 1201 + } + } + } +} diff --git a/openpype/pipeline/schema/subset-3.0.json b/openpype/pipeline/schema/subset-3.0.json new file mode 100644 index 00000000000..1a0db53c04a --- /dev/null +++ b/openpype/pipeline/schema/subset-3.0.json @@ -0,0 +1,62 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:subset-3.0", + "description": "A container of instances", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "parent", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:subset-3.0"], + "example": "openpype:subset-3.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["subset"], + "example": "subset" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of directory", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "shot01" + }, + "data": { + "description": "Document metadata", + "type": "object", + "required": ["families"], + "properties": { + "families": { + "type": "array", + "items": {"type": "string"}, + "description": "One or more families associated with this subset" + } + }, + "example": { + "families" : [ + "avalon.camera" + ], + "frameStart": 1000, + "frameEnd": 1201 + } + } + } +} diff --git a/openpype/pipeline/schema/thumbnail-1.0.json 
b/openpype/pipeline/schema/thumbnail-1.0.json new file mode 100644 index 00000000000..5bdf78a4b1e --- /dev/null +++ b/openpype/pipeline/schema/thumbnail-1.0.json @@ -0,0 +1,42 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:thumbnail-1.0", + "description": "Entity with thumbnail data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "data" + ], + + "properties": { + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:thumbnail-1.0"], + "example": "openpype:thumbnail-1.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["thumbnail"], + "example": "thumbnail" + }, + "data": { + "description": "Thumbnail data", + "type": "object", + "example": { + "binary_data": "Binary({byte data of image})", + "template": "{thumbnail_root}/{project[name]}/{_id}{ext}", + "template_data": { + "ext": ".jpg" + } + } + } + } +} diff --git a/openpype/pipeline/schema/version-1.0.json b/openpype/pipeline/schema/version-1.0.json new file mode 100644 index 00000000000..daa19977211 --- /dev/null +++ b/openpype/pipeline/schema/version-1.0.json @@ -0,0 +1,50 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:version-1.0", + "description": "An individual version", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "version", + "path", + "time", + "author", + "source", + "representations" + ], + + "properties": { + "schema": {"type": "string"}, + "representations": { + "type": "array", + "items": { + "$ref": "representation.json" + } + }, + "time": { + "description": "ISO formatted, file-system compatible time", + "type": "string" + }, + "author": { + "description": "User logged on to the machine at time of publish", + "type": "string" + }, + "version": { + "description": "Number of this version", + "type": "number" + }, + "path": { + "description": "Unformatted path, e.g. 
'{root}/assets/Bruce/publish/lookdevDefault/v001", + "type": "string" + }, + "source": { + "description": "Original file from which this version was made.", + "type": "string" + } + } +} diff --git a/openpype/pipeline/schema/version-2.0.json b/openpype/pipeline/schema/version-2.0.json new file mode 100644 index 00000000000..099e9be70a1 --- /dev/null +++ b/openpype/pipeline/schema/version-2.0.json @@ -0,0 +1,92 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:version-2.0", + "description": "An individual version", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "parent", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:version-2.0"], + "example": "openpype:version-2.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["version"], + "example": "version" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Number of version", + "type": "number", + "example": 12 + }, + "locations": { + "description": "Where on the planet this version can be found.", + "type": "array", + "items": {"type": "string"}, + "example": ["data.avalon.com"] + }, + "data": { + "description": "Document metadata", + "type": "object", + "required": ["families", "author", "source", "time"], + "properties": { + "time": { + "description": "ISO formatted, file-system compatible time", + "type": "string" + }, + "timeFormat": { + "description": "ISO format of time", + "type": "string" + }, + "author": { + "description": "User logged on to the machine at time of publish", + "type": "string" + }, + "version": { + "description": "Number of this version", + "type": "number" + }, + "path": { + "description": "Unformatted path, e.g. 
'{root}/assets/Bruce/publish/lookdevDefault/v001", + "type": "string" + }, + "source": { + "description": "Original file from which this version was made.", + "type": "string" + }, + "families": { + "type": "array", + "items": {"type": "string"}, + "description": "One or more families associated with this version" + } + }, + "example": { + "source" : "{root}/f02_prod/assets/BubbleWitch/work/modeling/marcus/maya/scenes/model_v001.ma", + "author" : "marcus", + "families" : [ + "avalon.model" + ], + "time" : "20170510T090203Z" + } + } + } +} diff --git a/openpype/pipeline/schema/version-3.0.json b/openpype/pipeline/schema/version-3.0.json new file mode 100644 index 00000000000..3e07fc4499a --- /dev/null +++ b/openpype/pipeline/schema/version-3.0.json @@ -0,0 +1,84 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:version-3.0", + "description": "An individual version", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "parent", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:version-3.0"], + "example": "openpype:version-3.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["version"], + "example": "version" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Number of version", + "type": "number", + "example": 12 + }, + "locations": { + "description": "Where on the planet this version can be found.", + "type": "array", + "items": {"type": "string"}, + "example": ["data.avalon.com"] + }, + "data": { + "description": "Document metadata", + "type": "object", + "required": ["author", "source", "time"], + "properties": { + "time": { + "description": "ISO formatted, file-system compatible time", + "type": "string" + }, + "timeFormat": { + "description": "ISO format of time", + "type": "string" + }, + "author": { + "description": "User logged on to the machine at time of publish", + "type": "string" + }, + "version": { + "description": "Number of this version", + "type": "number" + }, + "path": { + "description": "Unformatted path, e.g. 
'{root}/assets/Bruce/publish/lookdevDefault/v001", + "type": "string" + }, + "source": { + "description": "Original file from which this version was made.", + "type": "string" + } + }, + "example": { + "source" : "{root}/f02_prod/assets/BubbleWitch/work/modeling/marcus/maya/scenes/model_v001.ma", + "author" : "marcus", + "time" : "20170510T090203Z" + } + } + } +} diff --git a/openpype/pipeline/schema/workfile-1.0.json b/openpype/pipeline/schema/workfile-1.0.json new file mode 100644 index 00000000000..5f9600ef206 --- /dev/null +++ b/openpype/pipeline/schema/workfile-1.0.json @@ -0,0 +1,52 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:workfile-1.0", + "description": "Workfile additional information.", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "filename", + "task_name", + "parent" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:workfile-1.0"], + "example": "openpype:workfile-1.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["workfile"], + "example": "workfile" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "filename": { + "description": "Workfile's filename", + "type": "string", + "example": "kuba_each_case_Alpaca_01_animation_v001.ma" + }, + "task_name": { + "description": "Task name", + "type": "string", + "example": "animation" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": {"key": "value"} + } + } +} diff --git a/openpype/pipeline/template_data.py b/openpype/pipeline/template_data.py index fd21930eccc..a48f0721b6b 100644 --- a/openpype/pipeline/template_data.py +++ b/openpype/pipeline/template_data.py @@ -94,6 +94,9 @@ def get_asset_template_data(asset_doc, project_name): return { "asset": asset_doc["name"], + "folder": { + "name": asset_doc["name"] + }, "hierarchy": hierarchy, "parent": parent_name } diff --git a/openpype/pipeline/thumbnail.py b/openpype/pipeline/thumbnail.py index 9d4a6f3e484..b2b3679450d 100644 --- a/openpype/pipeline/thumbnail.py +++ b/openpype/pipeline/thumbnail.py @@ -3,6 +3,7 @@ import logging from openpype import AYON_SERVER_ENABLED +from openpype.lib import Logger from openpype.client import get_project from . 
import legacy_io from .anatomy import Anatomy @@ -11,13 +12,13 @@ register_plugin, register_plugin_path, ) -log = logging.getLogger(__name__) def get_thumbnail_binary(thumbnail_entity, thumbnail_type, dbcon=None): if not thumbnail_entity: return + log = Logger.get_logger(__name__) resolvers = discover_thumbnail_resolvers() resolvers = sorted(resolvers, key=lambda cls: cls.priority) if dbcon is None: @@ -133,6 +134,16 @@ def process(self, thumbnail_entity, thumbnail_type): class ServerThumbnailResolver(ThumbnailResolver): + _cache = None + + @classmethod + def _get_cache(cls): + if cls._cache is None: + from openpype.client.server.thumbnails import AYONThumbnailCache + + cls._cache = AYONThumbnailCache() + return cls._cache + def process(self, thumbnail_entity, thumbnail_type): if not AYON_SERVER_ENABLED: return None @@ -142,20 +153,40 @@ def process(self, thumbnail_entity, thumbnail_type): if not entity_type or not entity_id: return None - from openpype.client.server.server_api import get_server_api_connection + import ayon_api project_name = self.dbcon.active_project() thumbnail_id = thumbnail_entity["_id"] - con = get_server_api_connection() - filepath = con.get_thumbnail( - project_name, entity_type, entity_id, thumbnail_id - ) - content = None + + cache = self._get_cache() + filepath = cache.get_thumbnail_filepath(project_name, thumbnail_id) if filepath: with open(filepath, "rb") as stream: - content = stream.read() + return stream.read() + + # This is the new way thumbnails can be received from the server + # - the output is a 'ThumbnailContent' object + if hasattr(ayon_api, "get_thumbnail_by_id"): + result = ayon_api.get_thumbnail_by_id(thumbnail_id) + if result.is_valid: + filepath = cache.store_thumbnail( + project_name, + thumbnail_id, + result.content, + result.content_type + ) + else: + # Backwards compatibility for ayon api versions where + # 'get_thumbnail_by_id' is not implemented and the output is a filepath + filepath = ayon_api.get_thumbnail( + project_name, entity_type, entity_id, thumbnail_id + ) - return content + if not filepath: + return None + + with open(filepath, "rb") as stream: + return stream.read() # Thumbnail resolvers diff --git a/openpype/pipeline/version_start.py b/openpype/pipeline/version_start.py new file mode 100644 index 00000000000..0240ab0c7a0 --- /dev/null +++ b/openpype/pipeline/version_start.py @@ -0,0 +1,37 @@ +from openpype.lib.profiles_filtering import filter_profiles +from openpype.settings import get_project_settings + + +def get_versioning_start( + project_name, + host_name, + task_name=None, + task_type=None, + family=None, + subset=None, + project_settings=None, +): + """Get anatomy versioning start""" + if not project_settings: + project_settings = get_project_settings(project_name) + + version_start = 1 + settings = project_settings["global"] + profiles = settings.get("version_start_category", {}).get("profiles", []) + + if not profiles: + return version_start + + filtering_criteria = { + "host_names": host_name, + "families": family, + "task_names": task_name, + "task_types": task_type, + "subsets": subset + } + profile = filter_profiles(profiles, filtering_criteria) + + if profile is None: + return version_start + + return profile["version_start"]
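As a quick illustration of the new `get_versioning_start` helper added above, here is a minimal usage sketch. The project name, host, and settings profile are made up for the example; only the function signature and the profile keys come from the code in the diff:

```python
from openpype.pipeline.version_start import get_versioning_start

# Assuming the studio configured a profile such as:
#   project_settings["global"]["version_start_category"]["profiles"] = [{
#       "host_names": ["maya"],
#       "families": ["workfile"],
#       "task_names": [],
#       "task_types": [],
#       "subsets": [],
#       "version_start": 0,
#   }]
# The first matching profile wins; without any match the default is 1.
version = get_versioning_start(
    "MyProject",            # illustrative project name
    "maya",                 # host name
    task_name="modeling",
    task_type="Modeling",
    family="workfile",
)
print(version)  # 0 when the profile above matches, otherwise 1
```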
diff --git a/openpype/pipeline/workfile/path_resolving.py b/openpype/pipeline/workfile/path_resolving.py index 15689f4d994..78acee20dac 100644 --- a/openpype/pipeline/workfile/path_resolving.py +++ b/openpype/pipeline/workfile/path_resolving.py @@ -10,7 +10,7 @@ Logger, StringTemplate, ) -from openpype.pipeline import Anatomy +from openpype.pipeline import version_start, Anatomy from openpype.pipeline.template_data import get_template_data @@ -316,7 +316,13 @@ def get_last_workfile( ) if filename is None: data = copy.deepcopy(fill_data) - data["version"] = 1 + data["version"] = version_start.get_versioning_start( + data["project"]["name"], + data["app"], + task_name=data["task"]["name"], + task_type=data["task"]["type"], + family="workfile" + ) data.pop("comment", None) if not data.get("ext"): data["ext"] = extensions[0] diff --git a/openpype/pipeline/workfile/workfile_template_builder.py b/openpype/pipeline/workfile/workfile_template_builder.py index bdb13415bf5..b218a348685 100644 --- a/openpype/pipeline/workfile/workfile_template_builder.py +++ b/openpype/pipeline/workfile/workfile_template_builder.py @@ -1612,7 +1612,7 @@ def post_placeholder_process(self, placeholder, failed): pass - def delete_placeholder(self, placeholder, failed): + def delete_placeholder(self, placeholder): """Called when all item population is done.""" self.log.debug("Clean up of placeholder is not implemented.") @@ -1781,6 +1781,17 @@ def populate_create_placeholder(self, placeholder, pre_create_data=None): self.post_placeholder_process(placeholder, failed) + if failed: + self.log.debug( + "Placeholder cleanup skipped due to failed placeholder " + "population." + ) + return + + if not placeholder.data.get("keep_placeholder", True): + self.delete_placeholder(placeholder) + + def create_failed(self, placeholder, creator_data): if hasattr(placeholder, "create_failed"): placeholder.create_failed(creator_data) @@ -1800,9 +1811,12 @@ def post_placeholder_process(self, placeholder, failed): representation. failed (bool): Loading of representation failed. """ - pass + def delete_placeholder(self, placeholder): + """Called when all item population is done.""" + self.log.debug("Clean up of placeholder is not implemented.") + def _before_instance_create(self, placeholder): """Can be overridden. Is called before instance is created.""" diff --git a/openpype/plugin.py b/openpype/plugin.py deleted file mode 100644 index 7e906b4451e..00000000000 --- a/openpype/plugin.py +++ /dev/null @@ -1,128 +0,0 @@ -import functools -import warnings - -import pyblish.api - -# New location of orders: openpype.pipeline.publish.constants -# - can be imported as -# 'from openpype.pipeline.publish import ValidatePipelineOrder' -ValidatePipelineOrder = pyblish.api.ValidatorOrder + 0.05 -ValidateContentsOrder = pyblish.api.ValidatorOrder + 0.1 -ValidateSceneOrder = pyblish.api.ValidatorOrder + 0.2 -ValidateMeshOrder = pyblish.api.ValidatorOrder + 0.3 - - -class PluginDeprecatedWarning(DeprecationWarning): - pass - - -def _deprecation_warning(item_name, warning_message): - warnings.simplefilter("always", PluginDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(item_name, warning_message), - category=PluginDeprecatedWarning, - stacklevel=4 - ) - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." 
- ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - _deprecation_warning(decorated_func.__name__, warning_message) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -# Classes just inheriting from pyblish classes -# - seems to be unused in code (not 100% sure) -# - they should be removed but because it is not clear if they're used -# we'll keep then and log deprecation warning -# Deprecated since 3.14.* will be removed in 3.16.* -class ContextPlugin(pyblish.api.ContextPlugin): - def __init__(self, *args, **kwargs): - _deprecation_warning( - "openpype.plugin.ContextPlugin", - " Please replace your usage with 'pyblish.api.ContextPlugin'." - ) - super(ContextPlugin, self).__init__(*args, **kwargs) - - -# Deprecated since 3.14.* will be removed in 3.16.* -class InstancePlugin(pyblish.api.InstancePlugin): - def __init__(self, *args, **kwargs): - _deprecation_warning( - "openpype.plugin.ContextPlugin", - " Please replace your usage with 'pyblish.api.InstancePlugin'." - ) - super(InstancePlugin, self).__init__(*args, **kwargs) - - -class Extractor(pyblish.api.InstancePlugin): - """Extractor base class. - - The extractor base class implements a "staging_dir" function used to - generate a temporary directory for an instance to extract to. - - This temporary directory is generated through `tempfile.mkdtemp()` - - """ - - order = 2.0 - - def staging_dir(self, instance): - """Provide a temporary directory in which to store extracted files - - Upon calling this method the staging directory is stored inside - the instance.data['stagingDir'] - """ - - from openpype.pipeline.publish import get_instance_staging_dir - - return get_instance_staging_dir(instance) - - -@deprecated("openpype.pipeline.publish.context_plugin_should_run") -def contextplugin_should_run(plugin, context): - """Return whether the ContextPlugin should run on the given context. - - This is a helper function to work around a bug pyblish-base#250 - Whenever a ContextPlugin sets specific families it will still trigger even - when no instances are present that have those families. - - This actually checks it correctly and returns whether it should run. - - Deprecated: - Since 3.14.* will be removed in 3.16.* or later. 
- """ - - from openpype.pipeline.publish import context_plugin_should_run - - return context_plugin_should_run(plugin, context) diff --git a/openpype/plugins/actions/open_file_explorer.py b/openpype/plugins/actions/open_file_explorer.py new file mode 100644 index 00000000000..e4fbd911437 --- /dev/null +++ b/openpype/plugins/actions/open_file_explorer.py @@ -0,0 +1,125 @@ +import os +import platform +import subprocess + +from string import Formatter +from openpype.client import ( + get_project, + get_asset_by_name, +) +from openpype.pipeline import ( + Anatomy, + LauncherAction, +) +from openpype.pipeline.template_data import get_template_data + + +class OpenTaskPath(LauncherAction): + name = "open_task_path" + label = "Explore here" + icon = "folder-open" + order = 500 + + def is_compatible(self, session): + """Return whether the action is compatible with the session""" + return bool(session.get("AVALON_ASSET")) + + def process(self, session, **kwargs): + from qtpy import QtCore, QtWidgets + + project_name = session["AVALON_PROJECT"] + asset_name = session["AVALON_ASSET"] + task_name = session.get("AVALON_TASK", None) + + path = self._get_workdir(project_name, asset_name, task_name) + if not path: + return + + app = QtWidgets.QApplication.instance() + ctrl_pressed = QtCore.Qt.ControlModifier & app.keyboardModifiers() + if ctrl_pressed: + # Copy path to clipboard + self.copy_path_to_clipboard(path) + else: + self.open_in_explorer(path) + + def _find_first_filled_path(self, path): + if not path: + return "" + + fields = set() + for item in Formatter().parse(path): + _, field_name, format_spec, conversion = item + if not field_name: + continue + conversion = "!{}".format(conversion) if conversion else "" + format_spec = ":{}".format(format_spec) if format_spec else "" + orig_key = "{{{}{}{}}}".format( + field_name, conversion, format_spec) + fields.add(orig_key) + + for field in fields: + path = path.split(field, 1)[0] + return path + + def _get_workdir(self, project_name, asset_name, task_name): + project = get_project(project_name) + asset = get_asset_by_name(project_name, asset_name) + + data = get_template_data(project, asset, task_name) + + anatomy = Anatomy(project_name) + workdir = anatomy.templates_obj["work"]["folder"].format(data) + + # Remove any potential un-formatted parts of the path + valid_workdir = self._find_first_filled_path(workdir) + + # Path is not filled at all + if not valid_workdir: + raise AssertionError("Failed to calculate workdir.") + + # Normalize + valid_workdir = os.path.normpath(valid_workdir) + if os.path.exists(valid_workdir): + return valid_workdir + + # If task was selected, try to find asset path only to asset + if not task_name: + raise AssertionError("Folder does not exist.") + + data.pop("task", None) + workdir = anatomy.templates_obj["work"]["folder"].format(data) + valid_workdir = self._find_first_filled_path(workdir) + if valid_workdir: + # Normalize + valid_workdir = os.path.normpath(valid_workdir) + if os.path.exists(valid_workdir): + return valid_workdir + raise AssertionError("Folder does not exist.") + + @staticmethod + def open_in_explorer(path): + platform_name = platform.system().lower() + if platform_name == "windows": + args = ["start", path] + elif platform_name == "darwin": + args = ["open", "-na", path] + elif platform_name == "linux": + args = ["xdg-open", path] + else: + raise RuntimeError(f"Unknown platform {platform.system()}") + # Make sure path is converted correctly for 'os.system' + os.system(subprocess.list2cmdline(args)) + + 
@staticmethod + def copy_path_to_clipboard(path): + from qtpy import QtWidgets + + path = path.replace("\\", "/") + print(f"Copied to clipboard: {path}") + app = QtWidgets.QApplication.instance() + assert app, "Must have running QApplication instance" + + # Set to Clipboard + clipboard = QtWidgets.QApplication.clipboard() + clipboard.setText(os.path.normpath(path)) diff --git a/openpype/plugins/publish/collect_anatomy_instance_data.py b/openpype/plugins/publish/collect_anatomy_instance_data.py index 128ad90b4f8..b4f4d6a16a6 100644 --- a/openpype/plugins/publish/collect_anatomy_instance_data.py +++ b/openpype/plugins/publish/collect_anatomy_instance_data.py @@ -32,6 +32,7 @@ get_subsets, get_last_versions ) +from openpype.pipeline.version_start import get_versioning_start class CollectAnatomyInstanceData(pyblish.api.ContextPlugin): @@ -187,25 +188,13 @@ def fill_anatomy_data(self, context): project_task_types = project_doc["config"]["tasks"] for instance in context: - if self.follow_workfile_version: - version_number = context.data('version') - else: - version_number = instance.data.get("version") - # If version is not specified for instance or context - if version_number is None: - # TODO we should be able to change default version by studio - # preferences (like start with version number `0`) - version_number = 1 - # use latest version (+1) if already any exist - latest_version = instance.data["latestVersion"] - if latest_version is not None: - version_number += int(latest_version) - anatomy_updates = { "asset": instance.data["asset"], + "folder": { + "name": instance.data["asset"], + }, "family": instance.data["family"], "subset": instance.data["subset"], - "version": version_number } # Hierarchy @@ -225,6 +214,7 @@ def fill_anatomy_data(self, context): anatomy_updates["parent"] = parent_name # Task + task_type = None task_name = instance.data.get("task") if task_name: asset_tasks = asset_doc["data"]["tasks"] @@ -240,6 +230,30 @@ def fill_anatomy_data(self, context): "short": task_code } + # Define version + if self.follow_workfile_version: + version_number = context.data('version') + else: + version_number = instance.data.get("version") + + # use latest version (+1) if already any exist + if version_number is None: + latest_version = instance.data["latestVersion"] + if latest_version is not None: + version_number = int(latest_version) + 1 + + # If version is not specified for instance or context + if version_number is None: + version_number = get_versioning_start( + context.data["projectName"], + instance.context.data["hostName"], + task_name=task_name, + task_type=task_type, + family=instance.data["family"], + subset=instance.data["subset"] + ) + anatomy_updates["version"] = version_number + # Additional data resolution_width = instance.data.get("resolutionWidth") if resolution_width: diff --git a/openpype/plugins/publish/collect_farm_target.py b/openpype/plugins/publish/collect_farm_target.py new file mode 100644 index 00000000000..adcd842b485 --- /dev/null +++ b/openpype/plugins/publish/collect_farm_target.py @@ -0,0 +1,35 @@ +# -*- coding: utf-8 -*- +import pyblish.api + + +class CollectFarmTarget(pyblish.api.InstancePlugin): + """Collects the render target for the instance + """ + + order = pyblish.api.CollectorOrder + 0.499 + label = "Collect Farm Target" + targets = ["local"] + + def process(self, instance): + if not instance.data.get("farm"): + return + + context = instance.context + + farm_name = "" + op_modules = context.data.get("openPypeModules") + + for farm_renderer in 
["deadline", "royalrender", "muster"]: + op_module = op_modules.get(farm_renderer, False) + + if op_module and op_module.enabled: + farm_name = farm_renderer + elif not op_module: + self.log.error("Cannot get OpenPype {0} module.".format( + farm_renderer)) + + if farm_name: + self.log.debug("Collected render target: {0}".format(farm_name)) + instance.data["toBeRenderedOn"] = farm_name + else: + AssertionError("No OpenPype renderer module found") diff --git a/openpype/plugins/publish/extract_burnin.py b/openpype/plugins/publish/extract_burnin.py index e67739e842b..e5b37ee3b4a 100644 --- a/openpype/plugins/publish/extract_burnin.py +++ b/openpype/plugins/publish/extract_burnin.py @@ -52,8 +52,9 @@ class ExtractBurnin(publish.Extractor): "photoshop", "flame", "houdini", - "max" - # "resolve" + "max", + "blender", + "unreal" ] optional = True diff --git a/openpype/plugins/publish/extract_hierarchy_to_ayon.py b/openpype/plugins/publish/extract_hierarchy_to_ayon.py index 915650ae419..36a7042ba54 100644 --- a/openpype/plugins/publish/extract_hierarchy_to_ayon.py +++ b/openpype/plugins/publish/extract_hierarchy_to_ayon.py @@ -8,6 +8,11 @@ from ayon_api.entity_hub import EntityHub from openpype import AYON_SERVER_ENABLED +from openpype.client import get_assets +from openpype.pipeline.template_data import ( + get_asset_template_data, + get_task_template_data, +) def _default_json_parse(value): @@ -27,13 +32,51 @@ def process(self, context): hierarchy_context = context.data.get("hierarchyContext") if not hierarchy_context: - self.log.info("Skipping") + self.log.debug("Skipping") return project_name = context.data["projectName"] + self._create_hierarchy(context, project_name) + self._fill_instance_entities(context, project_name) + + def _fill_instance_entities(self, context, project_name): + instances_by_asset_name = collections.defaultdict(list) + for instance in context: + if instance.data.get("publish") is False: + continue + + instance_entity = instance.data.get("assetEntity") + if instance_entity: + continue + + # Skip if instance asset does not match + instance_asset_name = instance.data.get("asset") + instances_by_asset_name[instance_asset_name].append(instance) + + project_doc = context.data["projectEntity"] + asset_docs = get_assets( + project_name, asset_names=instances_by_asset_name.keys() + ) + asset_docs_by_name = { + asset_doc["name"]: asset_doc + for asset_doc in asset_docs + } + for asset_name, instances in instances_by_asset_name.items(): + asset_doc = asset_docs_by_name[asset_name] + asset_data = get_asset_template_data(asset_doc, project_name) + for instance in instances: + task_name = instance.data.get("task") + template_data = get_task_template_data( + project_doc, asset_doc, task_name) + template_data.update(copy.deepcopy(asset_data)) + + instance.data["anatomyData"].update(template_data) + instance.data["assetEntity"] = asset_doc + + def _create_hierarchy(self, context, project_name): hierarchy_context = self._filter_hierarchy(context) if not hierarchy_context: - self.log.info("All folders were filtered out") + self.log.debug("All folders were filtered out") return self.log.debug("Hierarchy_context: {}".format( diff --git a/openpype/plugins/publish/extract_otio_audio_tracks.py b/openpype/plugins/publish/extract_otio_audio_tracks.py index e19b7eeb137..4f177314526 100644 --- a/openpype/plugins/publish/extract_otio_audio_tracks.py +++ b/openpype/plugins/publish/extract_otio_audio_tracks.py @@ -1,7 +1,7 @@ import os import pyblish from openpype.lib import ( - get_ffmpeg_tool_path, + 
get_ffmpeg_tool_args, run_subprocess ) import tempfile @@ -20,9 +20,6 @@ class ExtractOtioAudioTracks(pyblish.api.ContextPlugin): label = "Extract OTIO Audio Tracks" hosts = ["hiero", "resolve", "flame"] - # FFmpeg tools paths - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - def process(self, context): """Convert otio audio track's content to audio representations @@ -91,13 +88,13 @@ def add_audio_to_instances(self, audio_file, instances): # temp audio file audio_fpath = self.create_temp_file(name) - cmd = [ - self.ffmpeg_path, + cmd = get_ffmpeg_tool_args( + "ffmpeg", "-ss", str(start_sec), "-t", str(duration_sec), "-i", audio_file, audio_fpath - ] + ) # run subprocess self.log.debug("Executing: {}".format(" ".join(cmd))) @@ -210,13 +207,13 @@ def create_empty(self, inputs): max_duration_sec = max(end_secs) # create empty cmd - cmd = [ - self.ffmpeg_path, + cmd = get_ffmpeg_tool_args( + "ffmpeg", "-f", "lavfi", "-i", "anullsrc=channel_layout=stereo:sample_rate=48000", "-t", str(max_duration_sec), empty_fpath - ] + ) # generate empty with ffmpeg # run subprocess @@ -295,7 +292,7 @@ def mix_audio(self, audio_inputs, audio_temp_fpath): filters_tmp_filepath = tmp_file.name tmp_file.write(",".join(filters)) - args = [self.ffmpeg_path] + args = get_ffmpeg_tool_args("ffmpeg") args.extend(input_args) args.extend([ "-filter_complex_script", filters_tmp_filepath, diff --git a/openpype/plugins/publish/extract_otio_review.py b/openpype/plugins/publish/extract_otio_review.py index 9ebcad2af1e..699207df8af 100644 --- a/openpype/plugins/publish/extract_otio_review.py +++ b/openpype/plugins/publish/extract_otio_review.py @@ -20,7 +20,7 @@ from pyblish import api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, run_subprocess, ) from openpype.pipeline import publish @@ -338,8 +338,6 @@ def _render_seqment(self, sequence=None, Returns: otio.time.TimeRange: trimmed available range """ - # get rendering app path - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") # create path and frame start to destination output_path, out_frame_start = self._get_ffmpeg_output() @@ -348,7 +346,7 @@ def _render_seqment(self, sequence=None, out_frame_start += end_offset # start command list - command = [ffmpeg_path] + command = get_ffmpeg_tool_args("ffmpeg") input_extension = None if sequence: diff --git a/openpype/plugins/publish/extract_otio_trimming_video.py b/openpype/plugins/publish/extract_otio_trimming_video.py index 70726338aaf..67ff6c538ca 100644 --- a/openpype/plugins/publish/extract_otio_trimming_video.py +++ b/openpype/plugins/publish/extract_otio_trimming_video.py @@ -11,7 +11,7 @@ import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, run_subprocess, ) from openpype.pipeline import publish @@ -75,14 +75,12 @@ def _ffmpeg_trim_seqment(self, input_file_path, otio_range): otio_range (opentime.TimeRange): range to trim to """ - # get rendering app path - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") # create path to destination output_path = self._get_ffmpeg_output(input_file_path) # start command list - command = [ffmpeg_path] + command = get_ffmpeg_tool_args("ffmpeg") video_path = input_file_path frame_start = otio_range.start_time.value diff --git a/openpype/plugins/publish/extract_review.py b/openpype/plugins/publish/extract_review.py index f053d1b500a..9cc456872e5 100644 --- a/openpype/plugins/publish/extract_review.py +++ b/openpype/plugins/publish/extract_review.py @@ -3,6 +3,7 @@ import copy import json import shutil +import subprocess from abc 
import ABCMeta, abstractmethod import six @@ -11,7 +12,7 @@ import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, filter_profiles, path_to_subprocess_arg, run_subprocess, @@ -72,9 +73,6 @@ class ExtractReview(pyblish.api.InstancePlugin): alpha_exts = ["exr", "png", "dpx"] - # FFmpeg tools paths - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - # Preset attributes profiles = None @@ -787,8 +785,9 @@ def ffmpeg_full_args( arg = arg.replace(identifier, "").strip() audio_filters.append(arg) - all_args = [] - all_args.append(path_to_subprocess_arg(self.ffmpeg_path)) + all_args = [ + subprocess.list2cmdline(get_ffmpeg_tool_args("ffmpeg")) + ] all_args.extend(input_args) if video_filters: all_args.append("-filter:v") diff --git a/openpype/plugins/publish/extract_review_slate.py b/openpype/plugins/publish/extract_review_slate.py index fca3d96ca6d..886384fee60 100644 --- a/openpype/plugins/publish/extract_review_slate.py +++ b/openpype/plugins/publish/extract_review_slate.py @@ -1,5 +1,6 @@ import os import re +import subprocess from pprint import pformat import pyblish.api @@ -7,7 +8,7 @@ from openpype.lib import ( path_to_subprocess_arg, run_subprocess, - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, get_ffprobe_data, get_ffprobe_streams, get_ffmpeg_codec_args, @@ -47,8 +48,6 @@ def process(self, instance): self.log.info("_ slates_data: {}".format(pformat(slates_data))) - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - if "reviewToWidth" in inst_data: use_legacy_code = True else: @@ -86,8 +85,11 @@ def process(self, instance): input_width, input_height, input_timecode, - input_frame_rate + input_frame_rate, + input_pixel_aspect ) = self._get_video_metadata(streams) + if input_pixel_aspect: + pixel_aspect = input_pixel_aspect # Raise exception of any stream didn't define input resolution if input_width is None: @@ -260,7 +262,7 @@ def process(self, instance): _remove_at_end.append(slate_v_path) slate_args = [ - path_to_subprocess_arg(ffmpeg_path), + subprocess.list2cmdline(get_ffmpeg_tool_args("ffmpeg")), " ".join(input_args), " ".join(output_args) ] @@ -281,7 +283,6 @@ def process(self, instance): os.path.splitext(slate_v_path)) _remove_at_end.append(slate_silent_path) self._create_silent_slate( - ffmpeg_path, slate_v_path, slate_silent_path, audio_codec, @@ -309,12 +310,12 @@ def process(self, instance): "[0:v] [1:v] concat=n=2:v=1:a=0 [v]", "-map", '[v]' ] - concat_args = [ - ffmpeg_path, + concat_args = get_ffmpeg_tool_args( + "ffmpeg", "-y", "-i", slate_v_path, "-i", input_path, - ] + ) concat_args.extend(fmap) if offset_timecode: concat_args.extend(["-timecode", offset_timecode]) @@ -421,6 +422,7 @@ def _get_video_metadata(self, streams): input_width = None input_height = None input_frame_rate = None + input_pixel_aspect = None for stream in streams: if stream.get("codec_type") != "video": continue @@ -438,6 +440,16 @@ def _get_video_metadata(self, streams): input_width = width input_height = height + input_pixel_aspect = stream.get("sample_aspect_ratio") + if input_pixel_aspect is not None: + try: + input_pixel_aspect = float( + eval(str(input_pixel_aspect).replace(':', '/'))) + except Exception: + self.log.debug( + "__Converting pixel aspect to float failed: {}".format( + input_pixel_aspect)) + tags = stream.get("tags") or {} input_timecode = tags.get("timecode") or "" @@ -448,7 +460,8 @@ def _get_video_metadata(self, streams): input_width, input_height, input_timecode, - input_frame_rate + input_frame_rate, + input_pixel_aspect ) def 
_get_audio_metadata(self, streams): @@ -490,7 +503,6 @@ def _get_audio_metadata(self, streams): def _create_silent_slate( self, - ffmpeg_path, src_path, dst_path, audio_codec, @@ -515,8 +527,8 @@ def _create_silent_slate( one_frame_duration = str(int(one_frame_duration)) + "us" self.log.debug("One frame duration is {}".format(one_frame_duration)) - slate_silent_args = [ - ffmpeg_path, + slate_silent_args = get_ffmpeg_tool_args( + "ffmpeg", "-i", src_path, "-f", "lavfi", "-i", "anullsrc=r={}:cl={}:d={}".format( @@ -531,7 +543,7 @@ def _create_silent_slate( "-shortest", "-y", dst_path - ] + ) # run slate generation subprocess self.log.debug("Silent Slate Executing: {}".format( " ".join(slate_silent_args) diff --git a/openpype/plugins/publish/extract_scanline_exr.py b/openpype/plugins/publish/extract_scanline_exr.py index 0e4c0ca65f1..9f22794a792 100644 --- a/openpype/plugins/publish/extract_scanline_exr.py +++ b/openpype/plugins/publish/extract_scanline_exr.py @@ -5,7 +5,12 @@ import pyblish.api -from openpype.lib import run_subprocess, get_oiio_tools_path +from openpype.lib import ( + run_subprocess, + get_oiio_tool_args, + ToolNotFoundError, +) +from openpype.pipeline import KnownPublishError class ExtractScanlineExr(pyblish.api.InstancePlugin): @@ -45,11 +50,11 @@ def process(self, instance): stagingdir = os.path.normpath(repre.get("stagingDir")) - oiio_tool_path = get_oiio_tools_path() - if not os.path.exists(oiio_tool_path): - self.log.error( - "OIIO tool not found in {}".format(oiio_tool_path)) - raise AssertionError("OIIO tool not found") + try: + oiio_tool_args = get_oiio_tool_args("oiiotool") + except ToolNotFoundError: + self.log.error("OIIO tool not found.") + raise KnownPublishError("OIIO tool not found") for file in input_files: @@ -57,8 +62,7 @@ def process(self, instance): temp_name = os.path.join(stagingdir, "__{}".format(file)) # move original render to temp location shutil.move(original_name, temp_name) - oiio_cmd = [ - oiio_tool_path, + oiio_cmd = oiio_tool_args + [ os.path.join(stagingdir, temp_name), "--scanline", "-o", os.path.join(stagingdir, original_name) ] diff --git a/openpype/plugins/publish/extract_thumbnail.py b/openpype/plugins/publish/extract_thumbnail.py index b98ab64f560..b72a6d02ad5 100644 --- a/openpype/plugins/publish/extract_thumbnail.py +++ b/openpype/plugins/publish/extract_thumbnail.py @@ -1,10 +1,11 @@ import os +import subprocess import tempfile import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, - get_oiio_tools_path, + get_ffmpeg_tool_args, + get_oiio_tool_args, is_oiio_supported, run_subprocess, @@ -174,12 +175,11 @@ def _get_filtered_repres(self, instance): def create_thumbnail_oiio(self, src_path, dst_path): self.log.info("Extracting thumbnail {}".format(dst_path)) - oiio_tool_path = get_oiio_tools_path() - oiio_cmd = [ - oiio_tool_path, + oiio_cmd = get_oiio_tool_args( + "oiiotool", "-a", src_path, "-o", dst_path - ] + ) self.log.debug("running: {}".format(" ".join(oiio_cmd))) try: run_subprocess(oiio_cmd, logger=self.log) @@ -194,27 +194,27 @@ def create_thumbnail_oiio(self, src_path, dst_path): def create_thumbnail_ffmpeg(self, src_path, dst_path): self.log.info("outputting {}".format(dst_path)) - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + ffmpeg_path_args = get_ffmpeg_tool_args("ffmpeg") ffmpeg_args = self.ffmpeg_args or {} - jpeg_items = [] - jpeg_items.append(path_to_subprocess_arg(ffmpeg_path)) - # override file if already exists - jpeg_items.append("-y") + jpeg_items = [ + subprocess.list2cmdline(ffmpeg_path_args) 
+ ] # flag for large file sizes max_int = 2147483647 - jpeg_items.append("-analyzeduration {}".format(max_int)) - jpeg_items.append("-probesize {}".format(max_int)) + jpeg_items.extend([ + "-y", + "-analyzeduration", str(max_int), + "-probesize", str(max_int), + ]) # use same input args like with mov jpeg_items.extend(ffmpeg_args.get("input") or []) # input file - jpeg_items.append("-i {}".format( - path_to_subprocess_arg(src_path) - )) + jpeg_items.extend(["-i", path_to_subprocess_arg(src_path)]) # output arguments from presets jpeg_items.extend(ffmpeg_args.get("output") or []) # we just want one frame from movie files - jpeg_items.append("-vframes 1") + jpeg_items.extend(["-vframes", "1"]) # output file jpeg_items.append(path_to_subprocess_arg(dst_path)) subprocess_command = " ".join(jpeg_items) diff --git a/openpype/plugins/publish/extract_thumbnail_from_source.py b/openpype/plugins/publish/extract_thumbnail_from_source.py index a9c95d60652..1b9f0a8baeb 100644 --- a/openpype/plugins/publish/extract_thumbnail_from_source.py +++ b/openpype/plugins/publish/extract_thumbnail_from_source.py @@ -17,8 +17,8 @@ import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, - get_oiio_tools_path, + get_ffmpeg_tool_args, + get_oiio_tool_args, is_oiio_supported, run_subprocess, @@ -128,7 +128,7 @@ def _create_thumbnail(self, context, thumbnail_source): if thumbnail_created: return full_output_path - self.log.warning("Thumbanil has not been created.") + self.log.warning("Thumbnail has not been created.") def _instance_has_thumbnail(self, instance): if "representations" not in instance.data: @@ -144,12 +144,12 @@ def _instance_has_thumbnail(self, instance): def create_thumbnail_oiio(self, src_path, dst_path): self.log.info("outputting {}".format(dst_path)) - oiio_tool_path = get_oiio_tools_path() - oiio_cmd = [ - oiio_tool_path, + oiio_cmd = get_oiio_tool_args( + "oiiotool", "-a", src_path, + "--ch", "R,G,B", "-o", dst_path - ] + ) self.log.info("Running: {}".format(" ".join(oiio_cmd))) try: run_subprocess(oiio_cmd, logger=self.log) @@ -162,18 +162,16 @@ def create_thumbnail_oiio(self, src_path, dst_path): return False def create_thumbnail_ffmpeg(self, src_path, dst_path): - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - max_int = str(2147483647) - ffmpeg_cmd = [ - ffmpeg_path, + ffmpeg_cmd = get_ffmpeg_tool_args( + "ffmpeg", "-y", "-analyzeduration", max_int, "-probesize", max_int, "-i", src_path, "-vframes", "1", dst_path - ] + ) self.log.info("Running: {}".format(" ".join(ffmpeg_cmd))) try: diff --git a/openpype/plugins/publish/extract_trim_video_audio.py b/openpype/plugins/publish/extract_trim_video_audio.py index b951136391a..2907ae1839f 100644 --- a/openpype/plugins/publish/extract_trim_video_audio.py +++ b/openpype/plugins/publish/extract_trim_video_audio.py @@ -4,7 +4,7 @@ import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, run_subprocess, ) from openpype.pipeline import publish @@ -32,7 +32,7 @@ def process(self, instance): instance.data["representations"] = list() # get ffmpeg tool arguments - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + ffmpeg_tool_args = get_ffmpeg_tool_args("ffmpeg") # get staging dir staging_dir = self.staging_dir(instance) @@ -76,8 +76,7 @@ def process(self, instance): if "trimming" not in fml ] - ffmpeg_args = [ - ffmpeg_path, + ffmpeg_args = ffmpeg_tool_args + [ "-ss", str(clip_start_h / fps), "-i", video_file_path, "-t", str(clip_dur_h / fps)
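The recurring change across these extractors swaps the single executable path from `get_ffmpeg_tool_path` for the token list returned by `get_ffmpeg_tool_args`, so any wrapper arguments around the executable survive. A minimal sketch of the pattern as used in the diffs above, with illustrative file paths:

```python
from openpype.lib import get_ffmpeg_tool_args, run_subprocess

# The helper returns the leading command tokens (the executable plus
# any wrapper arguments); extra tokens may be passed directly to it
# or appended to the returned list.
cmd = get_ffmpeg_tool_args(
    "ffmpeg",
    "-y",
    "-i", "/tmp/input.mov",   # illustrative input
    "/tmp/output.mp4",        # illustrative output
)
run_subprocess(cmd)
```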
diff --git a/openpype/plugins/publish/integrate.py b/openpype/plugins/publish/integrate.py index ffb9acf4a79..be07cffe727 100644 --- a/openpype/plugins/publish/integrate.py +++ b/openpype/plugins/publish/integrate.py @@ -2,9 +2,10 @@ import logging import sys import copy +import datetime + import clique import six - from bson.objectid import ObjectId import pyblish.api @@ -320,10 +321,16 @@ def register(self, instance, file_transactions, filtered_repres): # Get the accessible sites for Site Sync modules_by_name = instance.context.data["openPypeModules"] - sync_server_module = modules_by_name["sync_server"] - sites = sync_server_module.compute_resource_sync_sites( - project_name=instance.data["projectEntity"]["name"] - ) + sync_server_module = modules_by_name.get("sync_server") + if sync_server_module is None: + sites = [{ + "name": "studio", + "created_dt": datetime.datetime.now() + }] + else: + sites = sync_server_module.compute_resource_sync_sites( + project_name=instance.data["projectEntity"]["name"] + ) self.log.debug("Sync Server Sites: {}".format(sites)) # Compute the resource file infos once (files belonging to the diff --git a/openpype/plugins/publish/integrate_hero_version.py b/openpype/plugins/publish/integrate_hero_version.py index b7feeac6a45..6c21664b784 100644 --- a/openpype/plugins/publish/integrate_hero_version.py +++ b/openpype/plugins/publish/integrate_hero_version.py @@ -142,6 +142,12 @@ def integrate_instance( )) return + if AYON_SERVER_ENABLED and src_version_entity["name"] == 0: + self.log.debug( + "Version 0 cannot have hero version. Skipping." + ) + return + all_copied_files = [] transfers = instance.data.get("transfers", list()) for _src, dst in transfers: diff --git a/openpype/plugins/publish/validate_publish_dir.py b/openpype/plugins/publish/validate_publish_dir.py index 2f41127548a..ad5fd344347 100644 --- a/openpype/plugins/publish/validate_publish_dir.py +++ b/openpype/plugins/publish/validate_publish_dir.py @@ -7,12 +7,12 @@ class ValidatePublishDir(pyblish.api.InstancePlugin): - """Validates if 'publishDir' is a project directory + """Validates if files are being published into a project directory - 'publishDir' is collected based on publish templates. In specific cases - ('source' template) source folder of items is used as a 'publishDir', this - validates if it is inside any project dir for the project. - (eg. files are not published from local folder, unaccessible for studio' + In specific cases ('source' template - in place publishing) source folder + of published items is used as a regular `publish` dir. + This validates if it is inside any project dir for the project. + (e.g. 
files are not published from local folder, inaccessible for studio') """ @@ -44,6 +44,8 @@ def process(self, instance): anatomy = instance.context.data["anatomy"] + # original_dirname must be convertable to rootless path + # in other case it is path inside of root folder for the project success, _ = anatomy.find_root_template_from_path(original_dirname) formatting_data = { @@ -56,11 +58,12 @@ def process(self, instance): formatting_data=formatting_data) def _get_template_name_from_instance(self, instance): + """Find template which will be used during integration.""" project_name = instance.context.data["projectName"] host_name = instance.context.data["hostName"] anatomy_data = instance.data["anatomyData"] family = anatomy_data["family"] - family = self.family_mapping.get("family") or family + family = self.family_mapping.get(family) or family task_info = anatomy_data.get("task") or {} return get_publish_template_name( diff --git a/openpype/plugins/publish/validate_version.py b/openpype/plugins/publish/validate_version.py index 2b919a3119c..84d52fab731 100644 --- a/openpype/plugins/publish/validate_version.py +++ b/openpype/plugins/publish/validate_version.py @@ -25,16 +25,16 @@ def process(self, instance): # TODO: Remove full non-html version upon drop of old publisher msg = ( "Version '{0}' from instance '{1}' that you are " - " trying to publish is lower or equal to an existing version " - " in the database. Version in database: '{2}'." + "trying to publish is lower or equal to an existing version " + "in the database. Version in database: '{2}'." "Please version up your workfile to a higher version number " "than: '{2}'." ).format(version, instance.data["name"], latest_version) msg_html = ( "Version {0} from instance {1} that you are " - " trying to publish is lower or equal to an existing version " - " in the database. Version in database: {2}.
" + "trying to publish is lower or equal to an existing version " + "in the database. Version in database: {2}.
" "Please version up your workfile to a higher version number " "than: {2}." ).format(version, instance.data["name"], latest_version) diff --git a/openpype/pype_commands.py b/openpype/pype_commands.py index 8a3f25a0267..7f1c3b01e21 100644 --- a/openpype/pype_commands.py +++ b/openpype/pype_commands.py @@ -88,7 +88,10 @@ def publish(paths, targets=None, gui=False): """ from openpype.lib import Logger - from openpype.lib.applications import get_app_environments_for_context + from openpype.lib.applications import ( + get_app_environments_for_context, + LaunchTypes, + ) from openpype.modules import ModulesManager from openpype.pipeline import ( install_openpype_plugins, @@ -122,7 +125,8 @@ def publish(paths, targets=None, gui=False): context["project_name"], context["asset_name"], context["task_name"], - app_full_name + app_full_name, + launch_type=LaunchTypes.farm_publish, ) os.environ.update(env) @@ -161,74 +165,6 @@ def publish(paths, targets=None, gui=False): log.info("Publish finished.") - @staticmethod - def remotepublishfromapp(project_name, batch_path, host_name, - user_email, targets=None): - """Opens installed variant of 'host' and run remote publish there. - - Eventually should be yanked out to Webpublisher cli. - - Currently implemented and tested for Photoshop where customer - wants to process uploaded .psd file and publish collected layers - from there. Triggered by Webpublisher. - - Checks if no other batches are running (status =='in_progress). If - so, it sleeps for SLEEP (this is separate process), - waits for WAIT_FOR seconds altogether. - - Requires installed host application on the machine. - - Runs publish process as user would, in automatic fashion. - - Args: - project_name (str): project to publish (only single context is - expected per call of remotepublish - batch_path (str): Path batch folder. Contains subfolders with - resources (workfile, another subfolder 'renders' etc.) - host_name (str): 'photoshop' - user_email (string): email address for webpublisher - used to - find Ftrack user with same email - targets (list): Pyblish targets - (to choose validator for example) - """ - - from openpype.hosts.webpublisher.publish_functions import ( - cli_publish_from_app - ) - - cli_publish_from_app( - project_name, batch_path, host_name, user_email, targets - ) - - @staticmethod - def remotepublish(project, batch_path, user_email, targets=None): - """Start headless publishing. - - Used to publish rendered assets, workfiles etc via Webpublisher. - Eventually should be yanked out to Webpublisher cli. - - Publish use json from passed paths argument. - - Args: - project (str): project to publish (only single context is expected - per call of remotepublish - batch_path (str): Path batch folder. Contains subfolders with - resources (workfile, another subfolder 'renders' etc.) - user_email (string): email address for webpublisher - used to - find Ftrack user with same email - targets (list): Pyblish targets - (to choose validator for example) - - Raises: - RuntimeError: When there is no path to process. - """ - - from openpype.hosts.webpublisher.publish_functions import ( - cli_publish - ) - - cli_publish(project, batch_path, user_email, targets) - @staticmethod def extractenvironments(output_json_path, project, asset, task, app, env_group): @@ -237,11 +173,19 @@ def extractenvironments(output_json_path, project, asset, task, app, Called by Deadline plugin to propagate environment into render jobs. 
""" - from openpype.lib.applications import get_app_environments_for_context + from openpype.lib.applications import ( + get_app_environments_for_context, + LaunchTypes, + ) if all((project, asset, task, app)): env = get_app_environments_for_context( - project, asset, task, app, env_group + project, + asset, + task, + app, + env_group=env_group, + launch_type=LaunchTypes.farm_render, ) else: env = os.environ.copy() @@ -324,34 +268,6 @@ def run_tests(self, folder, mark, pyargs, import pytest pytest.main(args) - def syncserver(self, active_site): - """Start running sync_server in background. - - This functionality is available in directly in module cli commands. - `~/openpype_console module sync_server syncservice` - """ - - os.environ["OPENPYPE_LOCAL_ID"] = active_site - - def signal_handler(sig, frame): - print("You pressed Ctrl+C. Process ended.") - sync_server_module.server_exit() - sys.exit(0) - - signal.signal(signal.SIGINT, signal_handler) - signal.signal(signal.SIGTERM, signal_handler) - - from openpype.modules import ModulesManager - - manager = ModulesManager() - sync_server_module = manager.modules_by_name["sync_server"] - - sync_server_module.server_init() - sync_server_module.server_start() - - while True: - time.sleep(1.0) - def repack_version(self, directory): """Repacking OpenPype version.""" from openpype.tools.repack_version import VersionRepacker diff --git a/openpype/resources/icons/AYON_icon_staging.png b/openpype/resources/icons/AYON_icon_staging.png index 75dadfd56c8..9da5b0488e2 100644 Binary files a/openpype/resources/icons/AYON_icon_staging.png and b/openpype/resources/icons/AYON_icon_staging.png differ diff --git a/openpype/resources/icons/AYON_splash_staging.png b/openpype/resources/icons/AYON_splash_staging.png index 2923413664e..ab2537e8a8b 100644 Binary files a/openpype/resources/icons/AYON_splash_staging.png and b/openpype/resources/icons/AYON_splash_staging.png differ diff --git a/openpype/scripts/fusion_switch_shot.py b/openpype/scripts/fusion_switch_shot.py index 8ecf4fb5ea0..1cc728226f8 100644 --- a/openpype/scripts/fusion_switch_shot.py +++ b/openpype/scripts/fusion_switch_shot.py @@ -19,6 +19,7 @@ ) from openpype.pipeline.context_tools import get_workdir_from_session +from openpype.pipeline.version_start import get_versioning_start log = logging.getLogger("Update Slap Comp") @@ -26,9 +27,6 @@ def _format_version_folder(folder): """Format a version folder based on the filepath - Assumption here is made that, if the path does not exists the folder - will be "v001" - Args: folder: file path to a folder @@ -36,9 +34,13 @@ def _format_version_folder(folder): str: new version folder name """ - new_version = 1 + new_version = get_versioning_start( + get_current_project_name(), + "fusion", + family="workfile" + ) if os.path.isdir(folder): - re_version = re.compile("v\d+$") + re_version = re.compile(r"v\d+$") versions = [i for i in os.listdir(folder) if os.path.isdir(i) and re_version.match(i)] if versions: diff --git a/openpype/scripts/otio_burnin.py b/openpype/scripts/otio_burnin.py index 085b62501c7..189feaee3a1 100644 --- a/openpype/scripts/otio_burnin.py +++ b/openpype/scripts/otio_burnin.py @@ -8,21 +8,15 @@ import opentimelineio_contrib.adapters.ffmpeg_burnins as ffmpeg_burnins from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, get_ffmpeg_codec_args, get_ffmpeg_format_args, convert_ffprobe_fps_value, - convert_ffprobe_fps_to_float, ) - -ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") -ffprobe_path = get_ffmpeg_tool_path("ffprobe") - - 
FFMPEG = ( - '"{}"%(input_args)s -i "%(input)s" %(filters)s %(args)s%(output)s' -).format(ffmpeg_path) + '{}%(input_args)s -i "%(input)s" %(filters)s %(args)s%(output)s' +).format(subprocess.list2cmdline(get_ffmpeg_tool_args("ffmpeg"))) DRAWTEXT = ( "drawtext@'%(label)s'=fontfile='%(font)s':text=\\'%(text)s\\':" @@ -46,14 +40,14 @@ def _get_ffprobe_data(source): :param str source: source media file :rtype: [{}, ...] """ - command = [ - ffprobe_path, + command = get_ffmpeg_tool_args( + "ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", "-show_streams", source - ] + ) kwargs = { "stdout": subprocess.PIPE, } diff --git a/openpype/scripts/remote_publish.py b/openpype/scripts/remote_publish.py index 37df35e36c7..d362f7abdc1 100644 --- a/openpype/scripts/remote_publish.py +++ b/openpype/scripts/remote_publish.py @@ -9,4 +9,4 @@ if __name__ == "__main__": # Perform remote publish with thorough error checking log = Logger.get_logger(__name__) - remote_publish(log, raise_error=True) + remote_publish(log) diff --git a/openpype/settings/ayon_settings.py b/openpype/settings/ayon_settings.py index d2a2afbee04..9a4f0607e09 100644 --- a/openpype/settings/ayon_settings.py +++ b/openpype/settings/ayon_settings.py @@ -124,8 +124,6 @@ def _convert_applications_system_settings( # Applications settings ayon_apps = addon_settings["applications"] - if "adsk_3dsmax" in ayon_apps: - ayon_apps["3dsmax"] = ayon_apps.pop("adsk_3dsmax") additional_apps = ayon_apps.pop("additional_apps") applications = _convert_applications_groups( @@ -161,91 +159,95 @@ def _convert_general(ayon_settings, output, default_settings): output["general"] = general -def _convert_kitsu_system_settings(ayon_settings, output): - output["modules"]["kitsu"] = { - "server": ayon_settings["kitsu"]["server"] - } - - -def _convert_ftrack_system_settings(ayon_settings, output, defaults): - # Ftrack contains few keys that are needed for initialization in OpenPype - # mode and some are used on different places - ftrack_settings = defaults["modules"]["ftrack"] - ftrack_settings["ftrack_server"] = ( - ayon_settings["ftrack"]["ftrack_server"]) - output["modules"]["ftrack"] = ftrack_settings - - -def _convert_shotgrid_system_settings(ayon_settings, output): - ayon_shotgrid = ayon_settings["shotgrid"] - # Skip conversion if different ayon addon is used - if "leecher_manager_url" not in ayon_shotgrid: - output["shotgrid"] = ayon_shotgrid - return - - shotgrid_settings = {} - for key in ( - "leecher_manager_url", - "leecher_backend_url", - "filter_projects_by_login", - ): - shotgrid_settings[key] = ayon_shotgrid[key] - - new_items = {} - for item in ayon_shotgrid["shotgrid_settings"]: - name = item.pop("name") - new_items[name] = item - shotgrid_settings["shotgrid_settings"] = new_items - - output["modules"]["shotgrid"] = shotgrid_settings +def _convert_kitsu_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("kitsu") is not None + kitsu_settings = default_settings["modules"]["kitsu"] + kitsu_settings["enabled"] = enabled + if enabled: + kitsu_settings["server"] = ayon_settings["kitsu"]["server"] + output["modules"]["kitsu"] = kitsu_settings -def _convert_timers_manager_system_settings(ayon_settings, output): - ayon_manager = ayon_settings["timers_manager"] - manager_settings = { - key: ayon_manager[key] - for key in { - "auto_stop", "full_time", "message_time", "disregard_publishing" - } - } +def _convert_timers_manager_system_settings( + ayon_settings, output, addon_versions, 
default_settings +): + enabled = addon_versions.get("timers_manager") is not None + manager_settings = default_settings["modules"]["timers_manager"] + manager_settings["enabled"] = enabled + if enabled: + ayon_manager = ayon_settings["timers_manager"] + manager_settings.update({ + key: ayon_manager[key] + for key in { + "auto_stop", + "full_time", + "message_time", + "disregard_publishing" + } + }) output["modules"]["timers_manager"] = manager_settings -def _convert_clockify_system_settings(ayon_settings, output): - output["modules"]["clockify"] = ayon_settings["clockify"] +def _convert_clockify_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("clockify") is not None + clockify_settings = default_settings["modules"]["clockify"] + clockify_settings["enabled"] = enabled + if enabled: + clockify_settings["workspace_name"] = ( + ayon_settings["clockify"]["workspace_name"] + ) + output["modules"]["clockify"] = clockify_settings -def _convert_deadline_system_settings(ayon_settings, output): - ayon_deadline = ayon_settings["deadline"] - deadline_settings = { - "deadline_urls": { +def _convert_deadline_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("deadline") is not None + deadline_settings = default_settings["modules"]["deadline"] + deadline_settings["enabled"] = enabled + if enabled: + ayon_deadline = ayon_settings["deadline"] + deadline_settings["deadline_urls"] = { item["name"]: item["value"] for item in ayon_deadline["deadline_urls"] } - } + output["modules"]["deadline"] = deadline_settings -def _convert_muster_system_settings(ayon_settings, output): - ayon_muster = ayon_settings["muster"] - templates_mapping = { - item["name"]: item["value"] - for item in ayon_muster["templates_mapping"] - } - output["modules"]["muster"] = { - "templates_mapping": templates_mapping, - "MUSTER_REST_URL": ayon_muster["MUSTER_REST_URL"] - } +def _convert_muster_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("muster") is not None + muster_settings = default_settings["modules"]["muster"] + muster_settings["enabled"] = enabled + if enabled: + ayon_muster = ayon_settings["muster"] + muster_settings["MUSTER_REST_URL"] = ayon_muster["MUSTER_REST_URL"] + muster_settings["templates_mapping"] = { + item["name"]: item["value"] + for item in ayon_muster["templates_mapping"] + } + output["modules"]["muster"] = muster_settings -def _convert_royalrender_system_settings(ayon_settings, output): - ayon_royalrender = ayon_settings["royalrender"] - output["modules"]["royalrender"] = { - "rr_paths": { +def _convert_royalrender_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("royalrender") is not None + rr_settings = default_settings["modules"]["royalrender"] + rr_settings["enabled"] = enabled + if enabled: + ayon_royalrender = ayon_settings["royalrender"] + rr_settings["rr_paths"] = { item["name"]: item["value"] for item in ayon_royalrender["rr_paths"] } - } + output["modules"]["royalrender"] = rr_settings def _convert_modules_system( @@ -253,42 +255,39 @@ def _convert_modules_system( ): # TODO add all modules # TODO add 'enabled' values - for key, func in ( - ("kitsu", _convert_kitsu_system_settings), - ("shotgrid", _convert_shotgrid_system_settings), - ("timers_manager", _convert_timers_manager_system_settings), - ("clockify", _convert_clockify_system_settings), - ("deadline", 
_convert_deadline_system_settings), - ("muster", _convert_muster_system_settings), - ("royalrender", _convert_royalrender_system_settings), + for func in ( + _convert_kitsu_system_settings, + _convert_timers_manager_system_settings, + _convert_clockify_system_settings, + _convert_deadline_system_settings, + _convert_muster_system_settings, + _convert_royalrender_system_settings, ): - if key in ayon_settings: - func(ayon_settings, output) - - if "ftrack" in ayon_settings: - _convert_ftrack_system_settings( - ayon_settings, output, default_settings) - - output_modules = output["modules"] - # TODO remove when not needed - for module_name, value in default_settings["modules"].items(): - if module_name not in output_modules: - output_modules[module_name] = value - - for module_name, value in default_settings["modules"].items(): - if "enabled" not in value or module_name not in output_modules: - continue + func(ayon_settings, output, addon_versions, default_settings) + + modules_settings = output["modules"] + for module_name in ( + "sync_server", + "log_viewer", + "standalonepublish_tool", + "project_manager", + "job_queue", + "avalon", + "addon_paths", + ): + settings = default_settings["modules"][module_name] + if "enabled" in settings: + settings["enabled"] = False + modules_settings[module_name] = settings - ayon_module_name = module_name - if module_name == "sync_server": - ayon_module_name = "sitesync" - output_modules[module_name]["enabled"] = ( - ayon_module_name in addon_versions) + for key, value in ayon_settings.items(): + if key not in output: + output[key] = value - # Missing modules conversions - # - "sync_server" -> renamed to sitesync - # - "slack" -> only 'enabled' - # - "job_queue" -> completelly missing in ayon + # Make sure addons have access to settings in initialization + # - ModulesManager passes only modules settings into initialization + if key not in modules_settings: + modules_settings[key] = value def convert_system_settings(ayon_settings, default_settings, addon_versions): @@ -302,15 +301,20 @@ def convert_system_settings(ayon_settings, default_settings, addon_versions): if "core" in ayon_settings: _convert_general(ayon_settings, output, default_settings) + for key, value in ayon_settings.items(): + if key not in output: + output[key] = value + + for key, value in default_settings.items(): + if key not in output: + output[key] = value + _convert_modules_system( ayon_settings, output, addon_versions, default_settings ) - for key, value in default_settings.items(): - if key not in output: - output[key] = value return output @@ -599,13 +603,36 @@ def _convert_maya_project_settings(ayon_settings, output): reference_loader = ayon_maya_load["reference_loader"] reference_loader["namespace"] = ( reference_loader["namespace"] - .replace("{folder[name]}", "{asset_name}") .replace("{product[name]}", "{subset}") ) + if ayon_maya_load.get("import_loader"): + import_loader = ayon_maya_load["import_loader"] + import_loader["namespace"] = ( + import_loader["namespace"] + .replace("{product[name]}", "{subset}") + ) + output["maya"] = ayon_maya +def _convert_3dsmax_project_settings(ayon_settings, output): + if "max" not in ayon_settings: + return + + ayon_max = ayon_settings["max"] + _convert_host_imageio(ayon_max) + if "PointCloud" in ayon_max: + point_cloud_attribute = ayon_max["PointCloud"]["attribute"] + new_point_cloud_attribute = { + item["name"]: item["value"] + for item in point_cloud_attribute + } + ayon_max["PointCloud"]["attribute"] = new_point_cloud_attribute + + 
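+    # The list-to-dict reshaping above follows the same pattern used
+    # for deadline_urls, templates_mapping and rr_paths earlier in
+    # this file: AYON serializes mappings as lists of name/value
+    # items, while the OpenPype side expects plain dicts, e.g.
+    # (values invented):
+    #     [{"name": "age", "value": "@age"}] -> {"age": "@age"}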
output["max"] = ayon_max + + def _convert_nuke_knobs(knobs): new_knobs = [] for knob in knobs: @@ -645,6 +672,9 @@ def _convert_nuke_knobs(knobs): elif knob_type == "vector_3d": value = [value["x"], value["y"], value["z"]] + elif knob_type == "box": + value = [value["x"], value["y"], value["r"], value["t"]] + new_knob[value_key] = value return new_knobs @@ -724,11 +754,16 @@ def _convert_nuke_project_settings(ayon_settings, output): item_filter["subsets"] = item_filter.pop("product_names") item_filter["families"] = item_filter.pop("product_types") - item["reformat_node_config"] = _convert_nuke_knobs( - item["reformat_node_config"]) + reformat_nodes_config = item.get("reformat_nodes_config") or {} + reposition_nodes = reformat_nodes_config.get( + "reposition_nodes") or [] - for node in item["reformat_nodes_config"]["reposition_nodes"]: - node["knobs"] = _convert_nuke_knobs(node["knobs"]) + for reposition_node in reposition_nodes: + if "knobs" not in reposition_node: + continue + reposition_node["knobs"] = _convert_nuke_knobs( + reposition_node["knobs"] + ) name = item.pop("name") new_review_data_outputs[name] = item @@ -990,8 +1025,11 @@ def _convert_royalrender_project_settings(ayon_settings, output): if "royalrender" not in ayon_settings: return ayon_royalrender = ayon_settings["royalrender"] + rr_paths = ayon_royalrender.get("selected_rr_paths", []) + output["royalrender"] = { - "publish": ayon_royalrender["publish"] + "publish": ayon_royalrender["publish"], + "rr_paths": rr_paths, } @@ -1251,6 +1289,7 @@ def convert_project_settings(ayon_settings, default_settings): _convert_flame_project_settings(ayon_settings, output) _convert_fusion_project_settings(ayon_settings, output) _convert_maya_project_settings(ayon_settings, output) + _convert_3dsmax_project_settings(ayon_settings, output) _convert_nuke_project_settings(ayon_settings, output) _convert_hiero_project_settings(ayon_settings, output) _convert_photoshop_project_settings(ayon_settings, output) @@ -1266,6 +1305,10 @@ def convert_project_settings(ayon_settings, default_settings): _convert_global_project_settings(ayon_settings, output, default_settings) + for key, value in ayon_settings.items(): + if key not in output: + output[key] = value + for key, value in default_settings.items(): if key not in output: output[key] = value diff --git a/openpype/settings/defaults/project_settings/aftereffects.json b/openpype/settings/defaults/project_settings/aftereffects.json index 63f544e5360..77ccb744102 100644 --- a/openpype/settings/defaults/project_settings/aftereffects.json +++ b/openpype/settings/defaults/project_settings/aftereffects.json @@ -12,7 +12,7 @@ }, "create": { "RenderCreator": { - "defaults": [ + "default_variants": [ "Main" ], "mark_for_review": true diff --git a/openpype/settings/defaults/project_settings/blender.json b/openpype/settings/defaults/project_settings/blender.json index 29e61fe2332..df865adeba9 100644 --- a/openpype/settings/defaults/project_settings/blender.json +++ b/openpype/settings/defaults/project_settings/blender.json @@ -4,6 +4,8 @@ "apply_on_opening": false, "base_file_unit_scale": 0.01 }, + "set_resolution_startup": true, + "set_frames_startup": true, "imageio": { "activate_host_color_management": true, "ocio_config": { @@ -83,6 +85,11 @@ "optional": true, "active": true }, + "ExtractCameraABC": { + "enabled": true, + "optional": true, + "active": true + }, "ExtractLayout": { "enabled": true, "optional": true, diff --git a/openpype/settings/defaults/project_settings/ftrack.json 
b/openpype/settings/defaults/project_settings/ftrack.json index b87c45666d6..e2ca334b5f3 100644 --- a/openpype/settings/defaults/project_settings/ftrack.json +++ b/openpype/settings/defaults/project_settings/ftrack.json @@ -1,9 +1,10 @@ { "events": { "sync_to_avalon": { - "statuses_name_change": [ - "ready", - "not ready" + "role_list": [ + "Pypeclub", + "Administrator", + "Project manager" ] }, "prepare_project": { diff --git a/openpype/settings/defaults/project_settings/global.json b/openpype/settings/defaults/project_settings/global.json index b6eb2f52f18..06a595d1c50 100644 --- a/openpype/settings/defaults/project_settings/global.json +++ b/openpype/settings/defaults/project_settings/global.json @@ -1,4 +1,7 @@ { + "version_start_category": { + "profiles": [] + }, "imageio": { "activate_global_color_management": false, "ocio_config": { diff --git a/openpype/settings/defaults/project_settings/houdini.json b/openpype/settings/defaults/project_settings/houdini.json index a53f1ff2024..9d047c28bd2 100644 --- a/openpype/settings/defaults/project_settings/houdini.json +++ b/openpype/settings/defaults/project_settings/houdini.json @@ -14,48 +14,70 @@ "create": { "CreateArnoldAss": { "enabled": true, - "defaults": [], + "default_variants": [ + "Main" + ], "ext": ".ass" }, "CreateAlembicCamera": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateCompositeSequence": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreatePointCache": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateRedshiftROP": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateRemotePublish": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateVDBCache": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateUSD": { "enabled": false, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateUSDModel": { "enabled": false, - "defaults": [] + "default_variants": [ + "Main" + ] }, "USDCreateShadingWorkspace": { "enabled": false, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateUSDRender": { "enabled": false, - "defaults": [] + "default_variants": [ + "Main" + ] } }, "publish": { diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index a25775e5925..38f14ec022c 100644 --- a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -521,19 +521,19 @@ "enabled": true, "make_tx": true, "rs_tex": false, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateRender": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateUnrealStaticMesh": { "enabled": true, - "defaults": [ + "default_variants": [ "", "_Main" ], @@ -547,7 +547,9 @@ }, "CreateUnrealSkeletalMesh": { "enabled": true, - "defaults": [], + "default_variants": [ + "Main" + ], "joint_hints": "jnt_org" }, "CreateMultiverseLook": { @@ -555,12 +557,11 @@ "publish_mip_map": true }, "CreateAnimation": { - "enabled": false, "write_color_sets": false, "write_face_sets": false, "include_parent_hierarchy": false, "include_user_defined_attributes": false, - "defaults": [ + "default_variants": [ "Main" ] }, @@ -568,7 +569,7 @@ "enabled": true, "write_color_sets": false, "write_face_sets": false, - "defaults": [ + "default_variants": [ "Main", "Proxy", "Sculpt" @@ -579,7 +580,7 @@ "write_color_sets": false, "write_face_sets": false, 
"include_user_defined_attributes": false, - "defaults": [ + "default_variants": [ "Main" ] }, @@ -587,20 +588,20 @@ "enabled": true, "write_color_sets": false, "write_face_sets": false, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateReview": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ], "useMayaTimeline": true }, "CreateAss": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ], "expandProcedurals": false, @@ -622,61 +623,61 @@ "enabled": true, "vrmesh": true, "alembic": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateMultiverseUsd": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateMultiverseUsdComp": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateMultiverseUsdOver": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateAssembly": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateCamera": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateLayout": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateMayaScene": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateRenderSetup": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateRig": { "enabled": true, - "defaults": [ + "default_variants": [ "Main", "Sim", "Cloth" @@ -684,20 +685,20 @@ }, "CreateSetDress": { "enabled": true, - "defaults": [ + "default_variants": [ "Main", "Anim" ] }, "CreateVRayScene": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateYetiRig": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] } @@ -1464,6 +1465,10 @@ "namespace": "{asset_name}_{subset}_##_", "group_name": "_GRP", "display_handle": true + }, + "import_loader": { + "namespace": "{asset_name}_{subset}_##_", + "group_name": "_GRP" } }, "workfile_build": { diff --git a/openpype/settings/defaults/project_settings/nuke.json b/openpype/settings/defaults/project_settings/nuke.json index 85e3c0d3c3a..b736c462fff 100644 --- a/openpype/settings/defaults/project_settings/nuke.json +++ b/openpype/settings/defaults/project_settings/nuke.json @@ -465,34 +465,6 @@ "viewer_process_override": "", "bake_viewer_process": true, "bake_viewer_input_process": true, - "reformat_node_add": false, - "reformat_node_config": [ - { - "type": "text", - "name": "type", - "value": "to format" - }, - { - "type": "text", - "name": "format", - "value": "HD_1080" - }, - { - "type": "text", - "name": "filter", - "value": "Lanczos6" - }, - { - "type": "bool", - "name": "black_outside", - "value": true - }, - { - "type": "bool", - "name": "pbb", - "value": false - } - ], "reformat_nodes_config": { "enabled": false, "reposition_nodes": [ diff --git a/openpype/settings/defaults/project_settings/traypublisher.json b/openpype/settings/defaults/project_settings/traypublisher.json index dda958ebcd9..7f7b7d1452f 100644 --- a/openpype/settings/defaults/project_settings/traypublisher.json +++ b/openpype/settings/defaults/project_settings/traypublisher.json @@ -256,6 +256,23 @@ "allow_multiple_items": true, "allow_version_control": false, "extensions": [] + }, + { + "family": "audio", + "identifier": "", + "label": "Audio ", + "icon": "fa5s.file-audio", + "default_variants": [ + "Main" + ], + "description": "Audio product", + "detailed_description": "Audio files for review or final delivery", + "allow_sequences": false, + "allow_multiple_items": false, + "allow_version_control": 
false, + "extensions": [ + ".wav" + ] } ], "editorial_creators": { diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json b/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json index 35b8fede864..72f09a641de 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json @@ -32,7 +32,7 @@ "children": [ { "type": "list", - "key": "defaults", + "key": "default_variants", "label": "Default Variants", "object_type": "text", "docstring": "Fill default variant(s) (like 'Main' or 'Default') used in subset name creation." diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json b/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json index c549b577b26..aeb70dfd8cd 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json @@ -31,6 +31,16 @@ } ] }, + { + "key": "set_resolution_startup", + "type": "boolean", + "label": "Set Resolution on Startup" + }, + { + "key": "set_frames_startup", + "type": "boolean", + "label": "Set Start/End Frames and FPS on Startup" + }, { "key": "imageio", "type": "dict", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json b/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json index 157a8d297ef..d6efb118b9f 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json @@ -21,12 +21,9 @@ }, { "type": "list", - "key": "statuses_name_change", - "label": "Statuses", - "object_type": { - "type": "text", - "multiline": false - } + "key": "role_list", + "label": "Roles", + "object_type": "text" } ] }, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_global.json b/openpype/settings/entities/schemas/projects_schema/schema_project_global.json index 953361935c1..4094632c72a 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_global.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_global.json @@ -5,6 +5,61 @@ "label": "Global", "is_file": true, "children": [ + { + "type": "dict", + "key": "version_start_category", + "label": "Version Start", + "collapsible": true, + "collapsible_key": true, + "children": [ + { + "type": "list", + "collapsible": true, + "key": "profiles", + "label": "Profiles", + "object_type": { + "type": "dict", + "children": [ + { + "key": "host_names", + "label": "Host names", + "type": "hosts-enum", + "multiselection": true + }, + { + "key": "task_types", + "label": "Task types", + "type": "task-types-enum" + }, + { + "key": "task_names", + "label": "Task names", + "type": "list", + "object_type": "text" + }, + { + "key": "families", + "label": "Families", + "type": "list", + "object_type": "text" + }, + { + "key": "subsets", + "label": "Subset names", + "type": "list", + "object_type": "text" + }, + { + "key": "version_start", + "label": "Version Start", + "type": "number", + "minimum": 0 + } + ] + } + } + ] + }, { "key": "imageio", "type": "dict", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json b/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json index 
26c64e6219e..6b516ddf4a0 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json @@ -284,6 +284,10 @@ "type": "schema_template", "name": "template_workfile_options" }, + { + "type": "label", + "label": "^ Settings and for Workfile Builder is deprecated and will be soon removed.
Please use Template Workfile Build Settings instead." + }, { "type": "schema", "name": "schema_templated_workfile_build" diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json index 1037519f57d..2f0bf0a8316 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json @@ -105,7 +105,11 @@ }, { "key": "ExtractCamera", - "label": "Extract FBX Camera as FBX" + "label": "Extract Camera as FBX" + }, + { + "key": "ExtractCameraABC", + "label": "Extract Camera as ABC" }, { "key": "ExtractLayout", @@ -174,4 +178,4 @@ ] } ] -} \ No newline at end of file +} diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_create.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_create.json index 83e0cf789a8..799bc0e81aa 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_create.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_create.json @@ -18,8 +18,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" }, { @@ -39,51 +39,51 @@ ] }, - { - "type": "schema_template", - "name": "template_create_plugin", - "template_data": [ - { - "key": "CreateAlembicCamera", - "label": "Create Alembic Camera" - }, - { - "key": "CreateCompositeSequence", - "label": "Create Composite (Image Sequence)" - }, - { - "key": "CreatePointCache", - "label": "Create Point Cache" - }, - { - "key": "CreateRedshiftROP", - "label": "Create Redshift ROP" - }, - { - "key": "CreateRemotePublish", - "label": "Create Remote Publish" - }, - { - "key": "CreateVDBCache", - "label": "Create VDB Cache" - }, - { - "key": "CreateUSD", - "label": "Create USD" - }, - { - "key": "CreateUSDModel", - "label": "Create USD Model" - }, - { - "key": "USDCreateShadingWorkspace", - "label": "Create USD Shading Workspace" - }, - { - "key": "CreateUSDRender", - "label": "Create USD Render" - } - ] - } + { + "type": "schema_template", + "name": "template_create_plugin", + "template_data": [ + { + "key": "CreateAlembicCamera", + "label": "Create Alembic Camera" + }, + { + "key": "CreateCompositeSequence", + "label": "Create Composite (Image Sequence)" + }, + { + "key": "CreatePointCache", + "label": "Create Point Cache" + }, + { + "key": "CreateRedshiftROP", + "label": "Create Redshift ROP" + }, + { + "key": "CreateRemotePublish", + "label": "Create Remote Publish" + }, + { + "key": "CreateVDBCache", + "label": "Create VDB Cache" + }, + { + "key": "CreateUSD", + "label": "Create USD" + }, + { + "key": "CreateUSDModel", + "label": "Create USD Model" + }, + { + "key": "USDCreateShadingWorkspace", + "label": "Create USD Shading Workspace" + }, + { + "key": "CreateUSDRender", + "label": "Create USD Render" + } + ] + } ] -} \ No newline at end of file +} diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json index 1c37638c90f..b56e381c1da 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json @@ -28,15 +28,21 @@ }, { "type": 
"list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] }, - { - "type": "schema", - "name": "schema_maya_create_render" + { + "type": "schema_template", + "name": "template_create_plugin", + "template_data": [ + { + "key": "CreateRender", + "label": "Create Render" + } + ] }, { "type": "dict", @@ -52,8 +58,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" }, { @@ -84,8 +90,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" }, { @@ -120,12 +126,10 @@ "collapsible": true, "key": "CreateAnimation", "label": "Create Animation", - "checkbox_key": "enabled", "children": [ { - "type": "boolean", - "key": "enabled", - "label": "Enabled" + "type": "label", + "label": "This plugin is not optional due to implicit creation through loading the \"rig\" family.\nThis family is also hidden from creation due to complexity in setup." }, { "type": "boolean", @@ -149,8 +153,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] @@ -179,8 +183,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] @@ -214,8 +218,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] @@ -244,8 +248,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] @@ -264,8 +268,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" }, { @@ -289,8 +293,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" }, { @@ -391,8 +395,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create_render.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create_render.json deleted file mode 100644 index 68ad7ad63df..00000000000 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create_render.json +++ /dev/null @@ -1,20 +0,0 @@ -{ - "type": "dict", - "collapsible": true, - "key": "CreateRender", - "label": "Create Render", - "checkbox_key": "enabled", - "children": [ - { - "type": "boolean", - "key": "enabled", - "label": "Enabled" - }, - { - "type": "list", - "key": "defaults", - "label": "Default Subsets", - "object_type": "text" - } - ] -} \ No newline at end of file diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json index 4b6b97ab4e6..e73d39c06d3 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json @@ -121,6 +121,28 @@ "label": 
"Display Handle On Load References" } ] + }, + { + "type": "dict", + "collapsible": true, + "key": "import_loader", + "label": "Import Loader", + "children": [ + { + "type": "text", + "label": "Namespace", + "key": "namespace" + }, + { + "type": "text", + "label": "Group name", + "key": "group_name" + }, + { + "type": "label", + "label": "Here's a link to the doc where you can find explanations about customing the naming of referenced assets: https://openpype.io/docs/admin_hosts_maya#load-plugins" + } + ] } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json index 07c8d8715b7..b115ee3faa5 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json @@ -103,7 +103,7 @@ }, { "key": "exclude_families", - "label": "Families", + "label": "Exclude Families", "type": "list", "object_type": "text" } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json index 3019c9b1b52..f006392bef7 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json @@ -308,26 +308,6 @@ { "type": "separator" }, - { - "type": "label", - "label": "Currently we are supporting also multiple reposition nodes.
The older single reformat node is still supported;
if it is activated, it takes precedence. To use
multiple reformat nodes instead, disable the single reformat
node and enable multiple Reformat nodes here." - }, - { - "type": "boolean", - "key": "reformat_node_add", - "label": "Add Reformat Node", - "default": false - }, - { - "type": "schema_template", - "name": "template_nuke_knob_inputs", - "template_data": [ - { - "label": "Reformat Node Knobs", - "key": "reformat_node_config" - } - ] - }, { "key": "reformat_nodes_config", "type": "dict", diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/template_create_plugin.json b/openpype/settings/entities/schemas/projects_schema/schemas/template_create_plugin.json index 14d15e78401..3d2ed9f3d4c 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/template_create_plugin.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/template_create_plugin.json @@ -13,8 +13,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_knob_inputs.json b/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_knob_inputs.json index c9dee8681ab..51c78ce8f0c 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_knob_inputs.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_knob_inputs.json @@ -213,7 +213,7 @@ }, { "type": "number", - "key": "y", + "key": "z", "default": 1, "decimal": 4, "maximum": 99999999 @@ -238,28 +238,28 @@ "object_types": [ { "type": "number", - "key": "x", + "key": "r", "default": 1, "decimal": 4, "maximum": 99999999 }, { "type": "number", - "key": "x", + "key": "g", "default": 1, "decimal": 4, "maximum": 99999999 }, { "type": "number", - "key": "y", + "key": "b", "default": 1, "decimal": 4, "maximum": 99999999 }, { "type": "number", - "key": "y", + "key": "a", "default": 1, "decimal": 4, "maximum": 99999999 @@ -268,6 +268,52 @@ } ] }, + { + "key": "box", + "label": "Box", + "children": [ + { + "type": "text", + "key": "name", + "label": "Name" + }, + { + "type": "list-strict", + "key": "value", + "label": "Value", + "object_types": [ + { + "type": "number", + "key": "x", + "default": 0, + "decimal": 4, + "maximum": 99999999 + }, + { + "type": "number", + "key": "y", + "default": 0, + "decimal": 4, + "maximum": 99999999 + }, + { + "type": "number", + "key": "r", + "default": 1920, + "decimal": 4, + "maximum": 99999999 + }, + { + "type": "number", + "key": "t", + "default": 1080, + "decimal": 4, + "maximum": 99999999 + } + ] + } + ] + }, { "key": "__legacy__", "label": "_ Legacy type _", diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_write_attrs.json b/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_write_attrs.json index 8be48e669de..3a34858f4ec 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_write_attrs.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_write_attrs.json @@ -13,6 +13,12 @@ }, { "use_range_limit": "Use range limit" + }, + { + "ordered": "Defined order" + }, + { + "channels": "Channels override" } ] } diff --git a/openpype/settings/handlers.py b/openpype/settings/handlers.py index 1d4c838f1a4..671cabfbc28 100644 --- a/openpype/settings/handlers.py +++ b/openpype/settings/handlers.py @@ -1803,10 +1803,7 @@ class MongoLocalSettingsHandler(LocalSettingsHandler): def __init__(self, local_site_id=None): # Get mongo connection - from 
openpype.lib import ( - OpenPypeMongoConnection, - get_local_site_id - ) + from openpype.lib import get_local_site_id if local_site_id is None: local_site_id = get_local_site_id() diff --git a/openpype/tools/attribute_defs/widgets.py b/openpype/tools/attribute_defs/widgets.py index d46c238da1c..7967416e9f3 100644 --- a/openpype/tools/attribute_defs/widgets.py +++ b/openpype/tools/attribute_defs/widgets.py @@ -343,6 +343,7 @@ def current_value(self): return self._input_widget.text() def set_value(self, value, multivalue=False): + block_signals = False if multivalue: set_value = set(value) if None in set_value: @@ -352,13 +353,18 @@ def set_value(self, value, multivalue=False): if len(set_value) == 1: value = tuple(set_value)[0] else: + block_signals = True value = "< Multiselection >" if value != self.current_value(): + if block_signals: + self._input_widget.blockSignals(True) if self.multiline: self._input_widget.setPlainText(value) else: self._input_widget.setText(value) + if block_signals: + self._input_widget.blockSignals(False) class BoolAttrWidget(_BaseAttrDefWidget): @@ -391,7 +397,9 @@ def set_value(self, value, multivalue=False): set_value.add(self.attr_def.default) if len(set_value) > 1: + self._input_widget.blockSignals(True) self._input_widget.setCheckState(QtCore.Qt.PartiallyChecked) + self._input_widget.blockSignals(False) return value = tuple(set_value)[0] diff --git a/openpype/tools/libraryloader/app.py b/openpype/tools/libraryloader/app.py index bd105953339..e68e9a59319 100644 --- a/openpype/tools/libraryloader/app.py +++ b/openpype/tools/libraryloader/app.py @@ -114,9 +114,10 @@ def __init__( manager = ModulesManager() sync_server = manager.modules_by_name.get("sync_server") - sync_server_enabled = False - if sync_server is not None: - sync_server_enabled = sync_server.enabled + sync_server_enabled = ( + sync_server is not None + and sync_server.enabled + ) repres_widget = None if sync_server_enabled: diff --git a/openpype/tools/loader/model.py b/openpype/tools/loader/model.py index 5115f39a698..69b7e593b12 100644 --- a/openpype/tools/loader/model.py +++ b/openpype/tools/loader/model.py @@ -64,6 +64,7 @@ def reset_sync_server(self, project_name=None): """Sets/Resets sync server vars after every change (refresh.)""" repre_icons = {} sync_server = None + sync_server_enabled = False active_site = active_provider = None remote_site = remote_provider = None @@ -75,6 +76,7 @@ def reset_sync_server(self, project_name=None): if not project_name: self.repre_icons = repre_icons self.sync_server = sync_server + self.sync_server_enabled = sync_server_enabled self.active_site = active_site self.active_provider = active_provider self.remote_site = remote_site @@ -100,8 +102,13 @@ def reset_sync_server(self, project_name=None): self._modules_manager = ModulesManager() self._last_manager_cache = now_time - sync_server = self._modules_manager.modules_by_name["sync_server"] - if sync_server.is_project_enabled(project_name, single=True): + sync_server = self._modules_manager.modules_by_name.get("sync_server") + if ( + sync_server is not None + and sync_server.enabled + and sync_server.is_project_enabled(project_name, single=True) + ): + sync_server_enabled = True active_site = sync_server.get_active_site(project_name) active_provider = sync_server.get_provider_for_site( project_name, active_site) @@ -118,6 +125,7 @@ def reset_sync_server(self, project_name=None): self.repre_icons = repre_icons self.sync_server = sync_server + self.sync_server_enabled = sync_server_enabled self.active_site = 
active_site self.active_provider = active_provider self.remote_site = remote_site @@ -213,6 +221,7 @@ def __init__( self.repre_icons = {} self.sync_server = None + self.sync_server_enabled = False self.active_site = self.active_provider = None self.columns_index = dict( @@ -282,7 +291,7 @@ def setData(self, index, value, role=QtCore.Qt.EditRole): ) # update availability on active site when version changes - if self.sync_server.enabled and version_doc: + if self.sync_server_enabled and version_doc: repres_info = list( self.sync_server.get_repre_info_for_versions( project_name, @@ -507,7 +516,7 @@ def _fetch(self): return repre_info_by_version_id = {} - if self.sync_server.enabled: + if self.sync_server_enabled: versions_by_id = {} for _subset_id, doc in last_versions_by_subset_id.items(): versions_by_id[doc["_id"]] = doc @@ -1033,12 +1042,16 @@ def __init__(self, dbcon, header): self._version_ids = [] manager = ModulesManager() - sync_server = active_site = remote_site = None + active_site = remote_site = None active_provider = remote_provider = None + sync_server = manager.modules_by_name.get("sync_server") + sync_server_enabled = ( + sync_server is not None + and sync_server.enabled + ) project_name = dbcon.current_project() - if project_name: - sync_server = manager.modules_by_name["sync_server"] + if sync_server_enabled and project_name: active_site = sync_server.get_active_site(project_name) remote_site = sync_server.get_remote_site(project_name) @@ -1057,6 +1070,7 @@ def __init__(self, dbcon, header): remote_provider = 'studio' self.sync_server = sync_server + self.sync_server_enabled = sync_server_enabled self.active_site = active_site self.active_provider = active_provider self.remote_site = remote_site @@ -1174,9 +1188,15 @@ def _on_doc_fetched(self): repre_groups_items[doc["name"]] = 0 group = group_item - progress = self.sync_server.get_progress_for_repre( - doc, - self.active_site, self.remote_site) + progress = { + self.active_site: 0, + self.remote_site: 0, + } + if self.sync_server_enabled: + progress = self.sync_server.get_progress_for_repre( + doc, + self.active_site, + self.remote_site) active_site_icon = self._icons.get(self.active_provider) remote_site_icon = self._icons.get(self.remote_provider) diff --git a/openpype/tools/publisher/widgets/create_widget.py b/openpype/tools/publisher/widgets/create_widget.py index 1940d16eb8e..64fed1d70cc 100644 --- a/openpype/tools/publisher/widgets/create_widget.py +++ b/openpype/tools/publisher/widgets/create_widget.py @@ -6,6 +6,7 @@ from openpype.pipeline.create import ( SUBSET_NAME_ALLOWED_SYMBOLS, PRE_CREATE_THUMBNAIL_KEY, + DEFAULT_VARIANT_VALUE, TaskNotSetError, ) @@ -626,7 +627,7 @@ def _set_creator(self, creator_item): default_variants = creator_item.default_variants if not default_variants: - default_variants = ["Main"] + default_variants = [DEFAULT_VARIANT_VALUE] default_variant = creator_item.default_variant if not default_variant: @@ -642,7 +643,7 @@ def _set_creator(self, creator_item): elif variant: self.variant_hints_menu.addAction(variant) - variant_text = default_variant or "Main" + variant_text = default_variant or DEFAULT_VARIANT_VALUE # Make sure subset name is updated to new plugin if variant_text == self.variant_input.text(): self._on_variant_change() diff --git a/openpype/tools/publisher/widgets/images/browse.png b/openpype/tools/publisher/widgets/images/browse.png new file mode 100644 index 00000000000..b115bb67662 Binary files /dev/null and b/openpype/tools/publisher/widgets/images/browse.png differ diff 
--git a/openpype/tools/publisher/widgets/images/options.png b/openpype/tools/publisher/widgets/images/options.png new file mode 100644 index 00000000000..b394dbd4ce5 Binary files /dev/null and b/openpype/tools/publisher/widgets/images/options.png differ diff --git a/openpype/tools/publisher/widgets/images/paste.png b/openpype/tools/publisher/widgets/images/paste.png new file mode 100644 index 00000000000..14a6050da1a Binary files /dev/null and b/openpype/tools/publisher/widgets/images/paste.png differ diff --git a/openpype/tools/publisher/widgets/images/take_screenshot.png b/openpype/tools/publisher/widgets/images/take_screenshot.png new file mode 100644 index 00000000000..242a36a0264 Binary files /dev/null and b/openpype/tools/publisher/widgets/images/take_screenshot.png differ diff --git a/openpype/tools/publisher/widgets/overview_widget.py b/openpype/tools/publisher/widgets/overview_widget.py index 25fff731349..778aa1139f9 100644 --- a/openpype/tools/publisher/widgets/overview_widget.py +++ b/openpype/tools/publisher/widgets/overview_widget.py @@ -28,12 +28,14 @@ def __init__(self, controller, parent): self._refreshing_instances = False self._controller = controller - create_widget = CreateWidget(controller, self) + subset_content_widget = QtWidgets.QWidget(self) + + create_widget = CreateWidget(controller, subset_content_widget) # --- Created Subsets/Instances --- # Common widget for creation and overview subset_views_widget = BorderedLabelWidget( - "Subsets to publish", self + "Subsets to publish", subset_content_widget ) subset_view_cards = InstanceCardView(controller, subset_views_widget) @@ -45,14 +47,14 @@ def __init__(self, controller, parent): subset_views_layout.setCurrentWidget(subset_view_cards) # Buttons at the bottom of subset view - create_btn = CreateInstanceBtn(self) - delete_btn = RemoveInstanceBtn(self) - change_view_btn = ChangeViewBtn(self) + create_btn = CreateInstanceBtn(subset_views_widget) + delete_btn = RemoveInstanceBtn(subset_views_widget) + change_view_btn = ChangeViewBtn(subset_views_widget) # --- Overview --- # Subset details widget subset_attributes_wrap = BorderedLabelWidget( - "Publish options", self + "Publish options", subset_content_widget ) subset_attributes_widget = SubsetAttributesWidget( controller, subset_attributes_wrap @@ -81,7 +83,6 @@ def __init__(self, controller, parent): subset_views_widget.set_center_widget(subset_view_widget) # Whole subset layout with attributes and details - subset_content_widget = QtWidgets.QWidget(self) subset_content_layout = QtWidgets.QHBoxLayout(subset_content_widget) subset_content_layout.setContentsMargins(0, 0, 0, 0) subset_content_layout.addWidget(create_widget, 7) @@ -161,44 +162,62 @@ def __init__(self, controller, parent): self._change_anim = change_anim # Start in create mode - self._create_widget_policy = create_widget.sizePolicy() - self._subset_views_widget_policy = subset_views_widget.sizePolicy() - self._subset_attributes_wrap_policy = ( - subset_attributes_wrap.sizePolicy() - ) - self._max_widget_width = None self._current_state = "create" subset_attributes_wrap.setVisible(False) + def make_sure_animation_is_finished(self): + if self._change_anim.state() == QtCore.QAbstractAnimation.Running: + self._change_anim.stop() + self._on_change_anim_finished() + def set_state(self, new_state, animate): if new_state == self._current_state: return self._current_state = new_state - anim_is_running = ( - self._change_anim.state() == QtCore.QAbstractAnimation.Running - ) if not animate: - 
self._change_visibility_for_state() - if anim_is_running: - self._change_anim.stop() + self.make_sure_animation_is_finished() return - if self._max_widget_width is None: - self._max_widget_width = self._subset_views_widget.maximumWidth() - if new_state == "create": direction = QtCore.QAbstractAnimation.Backward else: direction = QtCore.QAbstractAnimation.Forward self._change_anim.setDirection(direction) - if not anim_is_running: - view_width = self._subset_views_widget.width() - self._subset_views_widget.setMinimumWidth(view_width) - self._subset_views_widget.setMaximumWidth(view_width) + if ( + self._change_anim.state() != QtCore.QAbstractAnimation.Running + ): + self._start_animation() + + def _start_animation(self): + views_geo = self._subset_views_widget.geometry() + layout_spacing = self._subset_content_layout.spacing() + if self._create_widget.isVisible(): + create_geo = self._create_widget.geometry() + subset_geo = QtCore.QRect(create_geo) + subset_geo.moveTop(views_geo.top()) + subset_geo.moveLeft(views_geo.right() + layout_spacing) + self._subset_attributes_wrap.setVisible(True) + + elif self._subset_attributes_wrap.isVisible(): + subset_geo = self._subset_attributes_wrap.geometry() + create_geo = QtCore.QRect(subset_geo) + create_geo.moveTop(views_geo.top()) + create_geo.moveRight(views_geo.left() - (layout_spacing + 1)) + self._create_widget.setVisible(True) + else: self._change_anim.start() + return + + while self._subset_content_layout.count(): + self._subset_content_layout.takeAt(0) + self._subset_views_widget.setGeometry(views_geo) + self._create_widget.setGeometry(create_geo) + self._subset_attributes_wrap.setGeometry(subset_geo) + + self._change_anim.start() def get_subset_views_geo(self): parent = self._subset_views_widget.parent() @@ -281,41 +300,39 @@ def _on_active_changed(self): def _on_change_anim(self, value): self._create_widget.setVisible(True) self._subset_attributes_wrap.setVisible(True) - width = ( - self._subset_content_widget.width() - - ( - self._subset_views_widget.width() - + (self._subset_content_layout.spacing() * 2) - ) + layout_spacing = self._subset_content_layout.spacing() + + content_width = ( + self._subset_content_widget.width() - (layout_spacing * 2) ) + content_height = self._subset_content_widget.height() + views_width = max( + int(content_width * 0.3), + self._subset_views_widget.minimumWidth() + ) + width = content_width - views_width + # Visible widths of other widgets subset_attrs_width = int((float(width) / self.anim_end_value) * value) - if subset_attrs_width > width: - subset_attrs_width = width - create_width = width - subset_attrs_width - self._create_widget.setMinimumWidth(create_width) - self._create_widget.setMaximumWidth(create_width) - self._subset_attributes_wrap.setMinimumWidth(subset_attrs_width) - self._subset_attributes_wrap.setMaximumWidth(subset_attrs_width) + views_geo = QtCore.QRect( + create_width + layout_spacing, 0, + views_width, content_height + ) + create_geo = QtCore.QRect(0, 0, width, content_height) + subset_attrs_geo = QtCore.QRect(create_geo) + create_geo.moveRight(views_geo.left() - (layout_spacing + 1)) + subset_attrs_geo.moveLeft(views_geo.right() + layout_spacing) + + self._subset_views_widget.setGeometry(views_geo) + self._create_widget.setGeometry(create_geo) + self._subset_attributes_wrap.setGeometry(subset_attrs_geo) def _on_change_anim_finished(self): self._change_visibility_for_state() - self._create_widget.setMinimumWidth(0) - self._create_widget.setMaximumWidth(self._max_widget_width) - 
self._subset_attributes_wrap.setMinimumWidth(0) - self._subset_attributes_wrap.setMaximumWidth(self._max_widget_width) - self._subset_views_widget.setMinimumWidth(0) - self._subset_views_widget.setMaximumWidth(self._max_widget_width) - self._create_widget.setSizePolicy( - self._create_widget_policy - ) - self._subset_attributes_wrap.setSizePolicy( - self._subset_attributes_wrap_policy - ) - self._subset_views_widget.setSizePolicy( - self._subset_views_widget_policy - ) + self._subset_content_layout.addWidget(self._create_widget, 7) + self._subset_content_layout.addWidget(self._subset_views_widget, 3) + self._subset_content_layout.addWidget(self._subset_attributes_wrap, 7) def _change_visibility_for_state(self): self._create_widget.setVisible( diff --git a/openpype/tools/publisher/widgets/screenshot_widget.py b/openpype/tools/publisher/widgets/screenshot_widget.py new file mode 100644 index 00000000000..4ccf9205710 --- /dev/null +++ b/openpype/tools/publisher/widgets/screenshot_widget.py @@ -0,0 +1,314 @@ +import os +import tempfile + +from qtpy import QtCore, QtGui, QtWidgets + + +class ScreenMarquee(QtWidgets.QDialog): + """Dialog to interactively define screen area. + + This allows to select a screen area through a marquee selection. + + You can use any of its classmethods for easily saving an image, + capturing to QClipboard or returning a QPixmap, respectively + `capture_to_file`, `capture_to_clipboard` and `capture_to_pixmap`. + """ + + def __init__(self, parent=None): + super(ScreenMarquee, self).__init__(parent=parent) + + self.setWindowFlags( + QtCore.Qt.FramelessWindowHint + | QtCore.Qt.WindowStaysOnTopHint + | QtCore.Qt.CustomizeWindowHint + | QtCore.Qt.Tool) + self.setAttribute(QtCore.Qt.WA_TranslucentBackground) + self.setCursor(QtCore.Qt.CrossCursor) + self.setMouseTracking(True) + + fade_anim = QtCore.QVariantAnimation() + fade_anim.setStartValue(0) + fade_anim.setEndValue(50) + fade_anim.setDuration(200) + fade_anim.setEasingCurve(QtCore.QEasingCurve.OutCubic) + fade_anim.start(QtCore.QAbstractAnimation.DeleteWhenStopped) + + fade_anim.valueChanged.connect(self._on_fade_anim) + + app = QtWidgets.QApplication.instance() + if hasattr(app, "screenAdded"): + app.screenAdded.connect(self._on_screen_added) + app.screenRemoved.connect(self._fit_screen_geometry) + elif hasattr(app, "desktop"): + desktop = app.desktop() + desktop.screenCountChanged.connect(self._fit_screen_geometry) + + for screen in QtWidgets.QApplication.screens(): + screen.geometryChanged.connect(self._fit_screen_geometry) + + self._opacity = fade_anim.currentValue() + self._click_pos = None + self._capture_rect = None + + self._fade_anim = fade_anim + + def get_captured_pixmap(self): + if self._capture_rect is None: + return QtGui.QPixmap() + + return self.get_desktop_pixmap(self._capture_rect) + + def paintEvent(self, event): + """Paint event""" + + # Convert click and current mouse positions to local space. + mouse_pos = self.mapFromGlobal(QtGui.QCursor.pos()) + click_pos = None + if self._click_pos is not None: + click_pos = self.mapFromGlobal(self._click_pos) + + painter = QtGui.QPainter(self) + + # Draw background. Aside from aesthetics, this makes the full + # tool region accept mouse events. 
+        painter.setBrush(QtGui.QColor(0, 0, 0, self._opacity))
+        painter.setPen(QtCore.Qt.NoPen)
+        painter.drawRect(event.rect())
+
+        # Clear the capture area
+        if click_pos is not None:
+            capture_rect = QtCore.QRect(click_pos, mouse_pos)
+            painter.setCompositionMode(
+                QtGui.QPainter.CompositionMode_Clear)
+            painter.drawRect(capture_rect)
+            painter.setCompositionMode(
+                QtGui.QPainter.CompositionMode_SourceOver)
+
+        pen_color = QtGui.QColor(255, 255, 255, 64)
+        pen = QtGui.QPen(pen_color, 1, QtCore.Qt.DotLine)
+        painter.setPen(pen)
+
+        # Draw cropping markers at click position
+        rect = event.rect()
+        if click_pos is not None:
+            painter.drawLine(
+                rect.left(), click_pos.y(),
+                rect.right(), click_pos.y()
+            )
+            painter.drawLine(
+                click_pos.x(), rect.top(),
+                click_pos.x(), rect.bottom()
+            )
+
+        # Draw cropping markers at current mouse position
+        painter.drawLine(
+            rect.left(), mouse_pos.y(),
+            rect.right(), mouse_pos.y()
+        )
+        painter.drawLine(
+            mouse_pos.x(), rect.top(),
+            mouse_pos.x(), rect.bottom()
+        )
+
+    def mousePressEvent(self, event):
+        """Mouse click event"""
+
+        if event.button() == QtCore.Qt.LeftButton:
+            # Begin click drag operation
+            self._click_pos = event.globalPos()
+
+    def mouseReleaseEvent(self, event):
+        """Mouse release event"""
+        if (
+            self._click_pos is not None
+            and event.button() == QtCore.Qt.LeftButton
+        ):
+            # End click drag operation and commit the current capture rect
+            self._capture_rect = QtCore.QRect(
+                self._click_pos, event.globalPos()
+            ).normalized()
+            self._click_pos = None
+        self.close()
+
+    def mouseMoveEvent(self, event):
+        """Mouse move event"""
+        self.repaint()
+
+    def keyPressEvent(self, event):
+        """Key press event"""
+        if event.key() == QtCore.Qt.Key_Escape:
+            self._click_pos = None
+            self._capture_rect = None
+            self.close()
+            return
+        return super(ScreenMarquee, self).keyPressEvent(event)
+
+    def showEvent(self, event):
+        self._fit_screen_geometry()
+        self._fade_anim.start()
+
+    def _fit_screen_geometry(self):
+        # Compute the union of all screen geometries, and resize to fit.
+        workspace_rect = QtCore.QRect()
+        for screen in QtWidgets.QApplication.screens():
+            workspace_rect = workspace_rect.united(screen.geometry())
+        self.setGeometry(workspace_rect)
+
+    def _on_fade_anim(self):
+        """Animation callback for opacity."""
+
+        self._opacity = self._fade_anim.currentValue()
+        self.repaint()
+
+    def _on_screen_added(self):
+        for screen in QtGui.QGuiApplication.screens():
+            screen.geometryChanged.connect(self._fit_screen_geometry)
+
+    @classmethod
+    def get_desktop_pixmap(cls, rect):
+        """Performs a screen capture on the specified rectangle.
+
+        Args:
+            rect (QtCore.QRect): The rectangle to capture.
+ + Returns: + QtGui.QPixmap: Captured pixmap image + """ + + if rect.width() < 1 or rect.height() < 1: + return QtGui.QPixmap() + + screen_pixes = [] + for screen in QtWidgets.QApplication.screens(): + screen_geo = screen.geometry() + if not screen_geo.intersects(rect): + continue + + screen_pix_rect = screen_geo.intersected(rect) + screen_pix = screen.grabWindow( + 0, + screen_pix_rect.x() - screen_geo.x(), + screen_pix_rect.y() - screen_geo.y(), + screen_pix_rect.width(), screen_pix_rect.height() + ) + paste_point = QtCore.QPoint( + screen_pix_rect.x() - rect.x(), + screen_pix_rect.y() - rect.y() + ) + screen_pixes.append((screen_pix, paste_point)) + + output_pix = QtGui.QPixmap(rect.width(), rect.height()) + output_pix.fill(QtCore.Qt.transparent) + pix_painter = QtGui.QPainter() + pix_painter.begin(output_pix) + for item in screen_pixes: + (screen_pix, offset) = item + pix_painter.drawPixmap(offset, screen_pix) + + pix_painter.end() + + return output_pix + + @classmethod + def capture_to_pixmap(cls): + """Take screenshot with marquee into pixmap. + + Note: + The pixmap can be invalid (use 'isNull' to check). + + Returns: + QtGui.QPixmap: Captured pixmap image. + """ + + tool = cls() + tool.exec_() + return tool.get_captured_pixmap() + + @classmethod + def capture_to_file(cls, filepath=None): + """Take screenshot with marquee into file. + + Args: + filepath (Optional[str]): Path where screenshot will be saved. + + Returns: + Union[str, None]: Path to the saved screenshot, or None if user + cancelled the operation. + """ + + pixmap = cls.capture_to_pixmap() + if pixmap.isNull(): + return None + + if filepath is None: + with tempfile.NamedTemporaryFile( + prefix="screenshot_", suffix=".png", delete=False + ) as tmpfile: + filepath = tmpfile.name + + else: + output_dir = os.path.dirname(filepath) + if not os.path.exists(output_dir): + os.makedirs(output_dir) + + pixmap.save(filepath) + return filepath + + @classmethod + def capture_to_clipboard(cls): + """Take screenshot with marquee into clipboard. + + Notes: + Screenshot is not in clipboard if user cancelled the operation. + + Returns: + bool: Screenshot was added to clipboard. + """ + + clipboard = QtWidgets.QApplication.clipboard() + pixmap = cls.capture_to_pixmap() + if pixmap.isNull(): + return False + image = pixmap.toImage() + clipboard.setImage(image, QtGui.QClipboard.Clipboard) + return True + + +def capture_to_pixmap(): + """Take screenshot with marquee into pixmap. + + Note: + The pixmap can be invalid (use 'isNull' to check). + + Returns: + QtGui.QPixmap: Captured pixmap image. + """ + + return ScreenMarquee.capture_to_pixmap() + + +def capture_to_file(filepath=None): + """Take screenshot with marquee into file. + + Args: + filepath (Optional[str]): Path where screenshot will be saved. + + Returns: + Union[str, None]: Path to the saved screenshot, or None if user + cancelled the operation. + """ + + return ScreenMarquee.capture_to_file(filepath) + + +def capture_to_clipboard(): + """Take screenshot with marquee into clipboard. + + Notes: + Screenshot is not in clipboard if user cancelled the operation. + + Returns: + bool: Screenshot was added to clipboard. 
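For orientation, here is a minimal usage sketch of the three module-level helpers this new `screenshot_widget.py` exposes. It assumes a running `QApplication`; the printed paths and messages are illustrative only.

```python
# Minimal usage sketch for the helpers defined above; assumes a running
# QApplication and that the module is importable from its new location.
from qtpy import QtWidgets

from openpype.tools.publisher.widgets.screenshot_widget import (
    capture_to_clipboard,
    capture_to_file,
    capture_to_pixmap,
)

app = QtWidgets.QApplication.instance() or QtWidgets.QApplication([])

# Interactive marquee returning a QPixmap (null when the user hits Escape).
pixmap = capture_to_pixmap()
if not pixmap.isNull():
    print("Captured {}x{} pixels".format(pixmap.width(), pixmap.height()))

# Without an explicit path a temporary '.png' is created; None on cancel.
saved_path = capture_to_file()
print("Screenshot saved to:", saved_path)

# Returns True only when an image actually landed in the clipboard.
if capture_to_clipboard():
    print("Screenshot is in the clipboard")
```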
+ """ + + return ScreenMarquee.capture_to_clipboard() diff --git a/openpype/tools/publisher/widgets/thumbnail_widget.py b/openpype/tools/publisher/widgets/thumbnail_widget.py index b17ca0adc8e..60970710d8f 100644 --- a/openpype/tools/publisher/widgets/thumbnail_widget.py +++ b/openpype/tools/publisher/widgets/thumbnail_widget.py @@ -7,8 +7,8 @@ from openpype.lib import ( run_subprocess, is_oiio_supported, - get_oiio_tools_path, - get_ffmpeg_tool_path, + get_oiio_tool_args, + get_ffmpeg_tool_args, ) from openpype.lib.transcoding import ( IMAGE_EXTENSIONS, @@ -22,6 +22,7 @@ from openpype.tools.publisher.control import CardMessageTypes from .icons import get_image +from .screenshot_widget import capture_to_file class ThumbnailPainterWidget(QtWidgets.QWidget): @@ -306,20 +307,43 @@ def __init__(self, controller, parent): thumbnail_painter = ThumbnailPainterWidget(self) + icon_color = get_objected_colors("bg-view-selection").get_qcolor() + icon_color.setAlpha(255) + buttons_widget = QtWidgets.QWidget(self) buttons_widget.setAttribute(QtCore.Qt.WA_TranslucentBackground) - icon_color = get_objected_colors("bg-view-selection").get_qcolor() - icon_color.setAlpha(255) clear_image = get_image("clear_thumbnail") clear_pix = paint_image_with_color(clear_image, icon_color) - clear_button = PixmapButton(clear_pix, buttons_widget) clear_button.setObjectName("ThumbnailPixmapHoverButton") + clear_button.setToolTip("Clear thumbnail") + + take_screenshot_image = get_image("take_screenshot") + take_screenshot_pix = paint_image_with_color( + take_screenshot_image, icon_color) + take_screenshot_btn = PixmapButton( + take_screenshot_pix, buttons_widget) + take_screenshot_btn.setObjectName("ThumbnailPixmapHoverButton") + take_screenshot_btn.setToolTip("Take screenshot") + + paste_image = get_image("paste") + paste_pix = paint_image_with_color(paste_image, icon_color) + paste_btn = PixmapButton(paste_pix, buttons_widget) + paste_btn.setObjectName("ThumbnailPixmapHoverButton") + paste_btn.setToolTip("Paste from clipboard") + + browse_image = get_image("browse") + browse_pix = paint_image_with_color(browse_image, icon_color) + browse_btn = PixmapButton(browse_pix, buttons_widget) + browse_btn.setObjectName("ThumbnailPixmapHoverButton") + browse_btn.setToolTip("Browse...") buttons_layout = QtWidgets.QHBoxLayout(buttons_widget) - buttons_layout.setContentsMargins(3, 3, 3, 3) - buttons_layout.addStretch(1) + buttons_layout.setContentsMargins(0, 0, 0, 0) + buttons_layout.addWidget(take_screenshot_btn, 0) + buttons_layout.addWidget(paste_btn, 0) + buttons_layout.addWidget(browse_btn, 0) buttons_layout.addWidget(clear_button, 0) layout = QtWidgets.QHBoxLayout(self) @@ -327,6 +351,9 @@ def __init__(self, controller, parent): layout.addWidget(thumbnail_painter) clear_button.clicked.connect(self._on_clear_clicked) + take_screenshot_btn.clicked.connect(self._on_take_screenshot) + paste_btn.clicked.connect(self._on_paste_from_clipboard) + browse_btn.clicked.connect(self._on_browse_clicked) self._controller = controller self._output_dir = controller.get_thumbnail_temp_dir_path() @@ -338,9 +365,16 @@ def __init__(self, controller, parent): self._adapted_to_size = True self._last_width = None self._last_height = None + self._hide_on_finish = False self._buttons_widget = buttons_widget self._thumbnail_painter = thumbnail_painter + self._clear_button = clear_button + self._take_screenshot_btn = take_screenshot_btn + self._paste_btn = paste_btn + self._browse_btn = browse_btn + + clear_button.setEnabled(False) @property def 
width_ratio(self): @@ -430,13 +464,75 @@ def set_height(self, height): self._thumbnail_painter.clear_cache() + def _set_current_thumbails(self, thumbnail_paths): + self._thumbnail_painter.set_current_thumbnails(thumbnail_paths) + self._update_buttons_position() + def set_current_thumbnails(self, thumbnail_paths=None): self._thumbnail_painter.set_current_thumbnails(thumbnail_paths) self._update_buttons_position() + self._clear_button.setEnabled(self._thumbnail_painter.has_pixes) def _on_clear_clicked(self): self.set_current_thumbnails() self.thumbnail_cleared.emit() + self._clear_button.setEnabled(False) + + def _on_take_screenshot(self): + window = self.window() + state = window.windowState() + window.setWindowState(QtCore.Qt.WindowMinimized) + output_path = os.path.join( + self._output_dir, uuid.uuid4().hex + ".png") + if capture_to_file(output_path): + self.thumbnail_created.emit(output_path) + # restore original window state + window.setWindowState(state) + + def _on_paste_from_clipboard(self): + """Set thumbnail from a pixmap image in the system clipboard""" + + clipboard = QtWidgets.QApplication.clipboard() + pixmap = clipboard.pixmap() + if pixmap.isNull(): + return + + # Save as temporary file + output_path = os.path.join( + self._output_dir, uuid.uuid4().hex + ".png") + + output_dir = os.path.dirname(output_path) + if not os.path.exists(output_dir): + os.makedirs(output_dir) + + if pixmap.save(output_path): + self.thumbnail_created.emit(output_path) + + def _on_browse_clicked(self): + ext_filter = "Source (*{0})".format( + " *".join(self._review_extensions) + ) + filepath, _ = QtWidgets.QFileDialog.getOpenFileName( + self, "Choose thumbnail", os.path.expanduser("~"), ext_filter + ) + if not filepath: + return + valid_path = False + ext = os.path.splitext(filepath)[-1].lower() + if ext in self._review_extensions: + valid_path = True + + output = None + if valid_path: + output = export_thumbnail(filepath, self._output_dir) + + if output: + self.thumbnail_created.emit(output) + else: + self._controller.emit_card_message( + "Couldn't convert the source for thumbnail", + CardMessageTypes.error + ) def _adapt_to_size(self): if not self._adapted_to_size: @@ -452,13 +548,25 @@ def _adapt_to_size(self): self._thumbnail_painter.clear_cache() def _update_buttons_position(self): - self._buttons_widget.setVisible(self._thumbnail_painter.has_pixes) size = self.size() + my_width = size.width() my_height = size.height() - height = self._buttons_widget.sizeHint().height() + buttons_sh = self._buttons_widget.sizeHint() + buttons_height = buttons_sh.height() + buttons_width = buttons_sh.width() + pos_x = my_width - (buttons_width + 3) + pos_y = my_height - (buttons_height + 3) + if pos_x < 0: + pos_x = 0 + buttons_width = my_width + if pos_y < 0: + pos_y = 0 + buttons_height = my_height self._buttons_widget.setGeometry( - 0, my_height - height, - size.width(), height + pos_x, + pos_y, + buttons_width, + buttons_height ) def resizeEvent(self, event): @@ -481,12 +589,12 @@ def _convert_thumbnail_oiio(src_path, dst_path): if not is_oiio_supported(): return None - oiio_cmd = [ - get_oiio_tools_path(), + oiio_cmd = get_oiio_tool_args( + "oiiotool", "-i", src_path, "--subimage", "0", "-o", dst_path - ] + ) try: _run_silent_subprocess(oiio_cmd) except Exception: @@ -495,12 +603,12 @@ def _convert_thumbnail_oiio(src_path, dst_path): def _convert_thumbnail_ffmpeg(src_path, dst_path): - ffmpeg_cmd = [ - get_ffmpeg_tool_path(), + ffmpeg_cmd = get_ffmpeg_tool_args( + "ffmpeg", "-y", "-i", src_path, dst_path - ] 
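The switch from `get_ffmpeg_tool_path`/`get_oiio_tools_path` to the `*_tool_args` variants deserves a note: the new helpers return a complete leading argument list rather than a single executable path, so callers extend a list instead of wrapping a bare path. A hedged sketch of the resulting call shape, with hypothetical input and output paths:

```python
# Sketch of the call shape used in the two converters above. The helper
# names come from this diff; 'src_path' and 'dst_path' are hypothetical.
from openpype.lib import get_ffmpeg_tool_args, run_subprocess

src_path = "/tmp/source.exr"  # hypothetical source image
dst_path = "/tmp/thumbnail.png"  # hypothetical output thumbnail

# The returned value is already a list of leading arguments, so extra
# arguments are appended to it rather than wrapped around a path string.
ffmpeg_cmd = get_ffmpeg_tool_args(
    "ffmpeg",
    "-y",
    "-i", src_path,
    dst_path
)
run_subprocess(ffmpeg_cmd)
```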
+ ) try: _run_silent_subprocess(ffmpeg_cmd) except Exception: diff --git a/openpype/tools/publisher/window.py b/openpype/tools/publisher/window.py index 2bda0c1cfe1..39e78c01bb7 100644 --- a/openpype/tools/publisher/window.py +++ b/openpype/tools/publisher/window.py @@ -634,16 +634,7 @@ def _on_tab_change(self, old_tab, new_tab): if old_tab == "details": self._publish_details_widget.close_details_popup() - if new_tab in ("create", "publish"): - animate = True - if old_tab not in ("create", "publish"): - animate = False - self._content_stacked_layout.setCurrentWidget( - self._overview_widget - ) - self._overview_widget.set_state(new_tab, animate) - - elif new_tab == "details": + if new_tab == "details": self._content_stacked_layout.setCurrentWidget( self._publish_details_widget ) @@ -654,6 +645,21 @@ def _on_tab_change(self, old_tab, new_tab): self._report_widget ) + old_on_overview = old_tab in ("create", "publish") + if new_tab in ("create", "publish"): + self._content_stacked_layout.setCurrentWidget( + self._overview_widget + ) + # Overview state is animated only when switching between + # 'create' and 'publish' tab + self._overview_widget.set_state(new_tab, old_on_overview) + + elif old_on_overview: + # Make sure animation finished if previous tab was 'create' + # or 'publish'. That is just for safety to avoid stuck animation + # when user clicks too fast. + self._overview_widget.make_sure_animation_is_finished() + is_create = new_tab == "create" if is_create: self._install_app_event_listener() diff --git a/openpype/tools/push_to_project/control_integrate.py b/openpype/tools/push_to_project/control_integrate.py index 37a0512d59f..a822339ccf0 100644 --- a/openpype/tools/push_to_project/control_integrate.py +++ b/openpype/tools/push_to_project/control_integrate.py @@ -40,6 +40,7 @@ from openpype.lib.file_transaction import FileTransaction from openpype.settings import get_project_settings from openpype.pipeline import Anatomy +from openpype.pipeline.version_start import get_versioning_start from openpype.pipeline.template_data import get_template_data from openpype.pipeline.publish import get_publish_template_name from openpype.pipeline.create import get_subset_name @@ -940,9 +941,17 @@ def make_sure_version_exists(self): last_version_doc = get_last_version_by_subset_id( project_name, subset_id ) - version = 1 if last_version_doc: - version += int(last_version_doc["name"]) + version = int(last_version_doc["name"]) + 1 + else: + version = get_versioning_start( + project_name, + self.host_name, + task_name=self.task_info["name"], + task_type=self.task_info["type"], + family=families[0], + subset=subset_doc["name"] + ) existing_version_doc = get_version_by_name( project_name, version, subset_id @@ -966,14 +975,6 @@ def make_sure_version_exists(self): return - if version is None: - last_version_doc = get_last_version_by_subset_id( - project_name, subset_id - ) - version = 1 - if last_version_doc: - version += int(last_version_doc["name"]) - version_doc = new_version_doc( version, subset_id, version_data ) diff --git a/openpype/tools/sceneinventory/lib.py b/openpype/tools/sceneinventory/lib.py index 4b1860342ac..0ac7622d654 100644 --- a/openpype/tools/sceneinventory/lib.py +++ b/openpype/tools/sceneinventory/lib.py @@ -1,9 +1,3 @@ -import os -from openpype_modules import sync_server - -from qtpy import QtGui - - def walk_hierarchy(node): """Recursively yield group node.""" for child in node.children(): @@ -12,19 +6,3 @@ def walk_hierarchy(node): for _child in walk_hierarchy(child): yield 
_child - - -def get_site_icons(): - resource_path = os.path.join( - os.path.dirname(sync_server.sync_server_module.__file__), - "providers", - "resources" - ) - icons = {} - # TODO get from sync module - for provider in ["studio", "local_drive", "gdrive"]: - pix_url = "{}/{}.png".format(resource_path, provider) - icons[provider] = QtGui.QIcon(pix_url) - - return icons - diff --git a/openpype/tools/sceneinventory/model.py b/openpype/tools/sceneinventory/model.py index 1cfcd0d8c03..4fd82f04a4b 100644 --- a/openpype/tools/sceneinventory/model.py +++ b/openpype/tools/sceneinventory/model.py @@ -24,10 +24,7 @@ from openpype.tools.utils.models import TreeModel, Item from openpype.modules import ModulesManager -from .lib import ( - get_site_icons, - walk_hierarchy, -) +from .lib import walk_hierarchy class InventoryModel(TreeModel): @@ -53,8 +50,10 @@ def __init__(self, family_config_cache, parent=None): self._default_icon_color = get_default_entity_icon_color() manager = ModulesManager() - sync_server = manager.modules_by_name["sync_server"] - self.sync_enabled = sync_server.enabled + sync_server = manager.modules_by_name.get("sync_server") + self.sync_enabled = ( + sync_server is not None and sync_server.enabled + ) self._site_icons = {} self.active_site = self.remote_site = None self.active_provider = self.remote_provider = None @@ -84,7 +83,10 @@ def __init__(self, family_config_cache, parent=None): self.active_provider = active_provider self.remote_site = remote_site self.remote_provider = remote_provider - self._site_icons = get_site_icons() + self._site_icons = { + provider: QtGui.QIcon(icon_path) + for provider, icon_path in sync_server.get_site_icons().items() + } if "active_site" not in self.Columns: self.Columns.append("active_site") if "remote_site" not in self.Columns: diff --git a/openpype/tools/sceneinventory/view.py b/openpype/tools/sceneinventory/view.py index d22b2bdd0f7..af463e48678 100644 --- a/openpype/tools/sceneinventory/view.py +++ b/openpype/tools/sceneinventory/view.py @@ -54,8 +54,11 @@ def __init__(self, parent=None): self._selected = None manager = ModulesManager() - self.sync_server = manager.modules_by_name["sync_server"] - self.sync_enabled = self.sync_server.enabled + sync_server = manager.modules_by_name.get("sync_server") + sync_enabled = sync_server is not None and sync_server.enabled + + self.sync_server = sync_server + self.sync_enabled = sync_enabled def _set_hierarchy_view(self, enabled): if enabled == self._hierarchy_view: diff --git a/openpype/tools/settings/local_settings/projects_widget.py b/openpype/tools/settings/local_settings/projects_widget.py index 4a4148d7cd7..f2b65351157 100644 --- a/openpype/tools/settings/local_settings/projects_widget.py +++ b/openpype/tools/settings/local_settings/projects_widget.py @@ -267,25 +267,26 @@ def _clear_widgets(self): self.input_objects = {} def _get_sites_inputs(self): - sync_server_module = ( - self.modules_manager.modules_by_name["sync_server"] - ) + output = [] + if self._project_name is None: + return output + + sync_server_module = self.modules_manager.modules_by_name.get( + "sync_server") + if sync_server_module is None or not sync_server_module.enabled: + return output site_configs = sync_server_module.get_all_site_configs( self._project_name, local_editable_only=True) - roots_entity = ( - self.project_settings[PROJECT_ANATOMY_KEY][LOCAL_ROOTS_KEY] - ) site_names = [self.active_site_widget.current_text(), self.remote_site_widget.current_text()] - output = [] for site_name in site_names: if not site_name: 
continue site_inputs = [] - site_config = site_configs[site_name] + site_config = site_configs.get(site_name, {}) for root_name, path_entity in site_config.get("root", {}).items(): if not path_entity: continue @@ -350,9 +351,6 @@ def _prepare_value_item(self, site_name, key): def refresh(self): self._clear_widgets() - if self._project_name is None: - return - # Site label for site_name, site_inputs in self._get_sites_inputs(): site_widget = QtWidgets.QWidget(self.content_widget) diff --git a/openpype/tools/standalonepublish/widgets/widget_drop_frame.py b/openpype/tools/standalonepublish/widgets/widget_drop_frame.py index f46e31786c5..306c43e85d1 100644 --- a/openpype/tools/standalonepublish/widgets/widget_drop_frame.py +++ b/openpype/tools/standalonepublish/widgets/widget_drop_frame.py @@ -5,6 +5,8 @@ import subprocess import openpype.lib from qtpy import QtWidgets, QtCore + +from openpype.lib import get_ffprobe_data from . import DropEmpty, ComponentsList, ComponentItem @@ -269,26 +271,8 @@ def _process_remainder(self, remainder): self._process_data(data) def load_data_with_probe(self, filepath): - ffprobe_path = openpype.lib.get_ffmpeg_tool_path("ffprobe") - args = [ - "\"{}\"".format(ffprobe_path), - '-v', 'quiet', - '-print_format json', - '-show_format', - '-show_streams', - '"{}"'.format(filepath) - ] - ffprobe_p = subprocess.Popen( - ' '.join(args), - stdout=subprocess.PIPE, - shell=True - ) - ffprobe_output = ffprobe_p.communicate()[0] - if ffprobe_p.returncode != 0: - raise RuntimeError( - 'Failed on ffprobe: check if ffprobe path is set in PATH env' - ) - return json.loads(ffprobe_output)['streams'][0] + ffprobe_data = get_ffprobe_data(filepath) + return ffprobe_data["streams"][0] def get_file_data(self, data): filepath = data['files'][0] diff --git a/openpype/tools/standalonepublish/widgets/widget_family.py b/openpype/tools/standalonepublish/widgets/widget_family.py index 8c18a93a002..73dc2122db2 100644 --- a/openpype/tools/standalonepublish/widgets/widget_family.py +++ b/openpype/tools/standalonepublish/widgets/widget_family.py @@ -10,6 +10,7 @@ ) from openpype.settings import get_project_settings from openpype.pipeline import LegacyCreator +from openpype.pipeline.version_start import get_versioning_start from openpype.pipeline.create import ( SUBSET_NAME_ALLOWED_SYMBOLS, TaskNotSetError, @@ -299,7 +300,15 @@ def on_version_refresh(self): project_name = self.dbcon.active_project() asset_name = self.asset_name subset_name = self.input_result.text() - version = 1 + plugin = self.list_families.currentItem().data(PluginRole) + family = plugin.family.rsplit(".", 1)[-1] + version = get_versioning_start( + project_name, + "standalonepublisher", + task_name=self.dbcon.Session["AVALON_TASK"], + family=family, + subset=subset_name + ) asset_doc = None subset_doc = None diff --git a/openpype/tools/utils/lib.py b/openpype/tools/utils/lib.py index 82ca23c8483..2df46c1eae3 100644 --- a/openpype/tools/utils/lib.py +++ b/openpype/tools/utils/lib.py @@ -760,20 +760,23 @@ def run(self): def get_repre_icons(): """Returns a dict {'provider_name': QIcon}""" + icons = {} try: from openpype_modules import sync_server except Exception: # Backwards compatibility - from openpype.modules import sync_server + try: + from openpype.modules import sync_server + except Exception: + return icons resource_path = os.path.join( os.path.dirname(sync_server.sync_server_module.__file__), "providers", "resources" ) - icons = {} if not os.path.exists(resource_path): print("No icons for Site Sync found") - return {} + 
return icons for file_name in os.listdir(resource_path): if file_name and not file_name.endswith("png"): diff --git a/openpype/tools/utils/tasks_widget.py b/openpype/tools/utils/tasks_widget.py index 8c0505223e3..b554ed50d36 100644 --- a/openpype/tools/utils/tasks_widget.py +++ b/openpype/tools/utils/tasks_widget.py @@ -75,7 +75,7 @@ def _get_current_project(self): def set_asset_id(self, asset_id): asset_doc = None - if self._context_is_valid(): + if asset_id and self._context_is_valid(): project_name = self._get_current_project() asset_doc = get_asset_by_id( project_name, asset_id, fields=["data.tasks"] diff --git a/openpype/tools/utils/widgets.py b/openpype/tools/utils/widgets.py index 5a8104611b8..a70437cc654 100644 --- a/openpype/tools/utils/widgets.py +++ b/openpype/tools/utils/widgets.py @@ -410,6 +410,18 @@ def __init__(self, pixmap, parent): self._pixmap = pixmap self._cached_pixmap = None + self._disabled = False + + def resizeEvent(self, event): + super(PixmapButtonPainter, self).resizeEvent(event) + self._cached_pixmap = None + self.repaint() + + def set_enabled(self, enabled): + if self._disabled != enabled: + return + self._disabled = not enabled + self.repaint() def set_pixmap(self, pixmap): self._pixmap = pixmap @@ -444,6 +456,8 @@ def paintEvent(self, event): if self._cached_pixmap is None: self._cache_pixmap() + if self._disabled: + painter.setOpacity(0.5) painter.drawPixmap(0, 0, self._cached_pixmap) painter.end() @@ -464,6 +478,10 @@ def setContentsMargins(self, *args): layout.setContentsMargins(*args) self._update_painter_geo() + def setEnabled(self, enabled): + self._button_painter.set_enabled(enabled) + super(PixmapButton, self).setEnabled(enabled) + def set_pixmap(self, pixmap): self._button_painter.set_pixmap(pixmap) diff --git a/openpype/tools/workfiles/save_as_dialog.py b/openpype/tools/workfiles/save_as_dialog.py index 9f1d1060da8..7052eaed067 100644 --- a/openpype/tools/workfiles/save_as_dialog.py +++ b/openpype/tools/workfiles/save_as_dialog.py @@ -12,6 +12,7 @@ from openpype.pipeline.workfile import get_last_workfile_with_version from openpype.pipeline.template_data import get_template_data_with_names from openpype.tools.utils import PlaceholderLineEdit +from openpype.pipeline import version_start, get_current_host_name log = logging.getLogger(__name__) @@ -218,7 +219,15 @@ def __init__( # Version number input version_input = QtWidgets.QSpinBox(version_widget) - version_input.setMinimum(1) + version_input.setMinimum( + version_start.get_versioning_start( + self.data["project"]["name"], + get_current_host_name(), + task_name=self.data["task"]["name"], + task_type=self.data["task"]["type"], + family="workfile" + ) + ) version_input.setMaximum(9999) # Last version checkbox @@ -420,7 +429,13 @@ def refresh(self): )[1] if version is None: - version = 1 + version = version_start.get_versioning_start( + data["project"]["name"], + get_current_host_name(), + task_name=self.data["task"]["name"], + task_type=self.data["task"]["type"], + family="workfile" + ) else: version += 1 diff --git a/openpype/vendor/python/common/ayon_api/__init__.py b/openpype/vendor/python/common/ayon_api/__init__.py index 4b4e0f33590..dc3d361f467 100644 --- a/openpype/vendor/python/common/ayon_api/__init__.py +++ b/openpype/vendor/python/common/ayon_api/__init__.py @@ -30,6 +30,8 @@ set_client_version, get_default_settings_variant, set_default_settings_variant, + get_sender, + set_sender, get_base_url, get_rest_url, @@ -46,6 +48,11 @@ patch, delete, + get_timeout, + set_timeout, + 
get_max_retries, + set_max_retries, + get_event, get_events, dispatch_event, @@ -78,6 +85,8 @@ download_dependency_package, upload_dependency_package, + upload_addon_zip, + get_bundles, create_bundle, update_bundle, @@ -90,6 +99,7 @@ get_users, get_attributes_for_type, + get_attributes_fields_for_type, get_default_fields_for_type, get_project_anatomy_preset, @@ -108,6 +118,11 @@ get_addons_project_settings, get_addons_settings, + get_secrets, + get_secret, + save_secret, + delete_secret, + get_project_names, get_projects, get_project, @@ -122,6 +137,8 @@ get_folders_hierarchy, get_tasks, + get_task_by_id, + get_task_by_name, get_folder_ids_with_products, get_product_by_id, @@ -152,6 +169,7 @@ get_workfile_info, get_workfile_info_by_id, + get_thumbnail_by_id, get_thumbnail, get_folder_thumbnail, get_version_thumbnail, @@ -214,6 +232,8 @@ "set_client_version", "get_default_settings_variant", "set_default_settings_variant", + "get_sender", + "set_sender", "get_base_url", "get_rest_url", @@ -230,6 +250,11 @@ "patch", "delete", + "get_timeout", + "set_timeout", + "get_max_retries", + "set_max_retries", + "get_event", "get_events", "dispatch_event", @@ -262,6 +287,8 @@ "download_dependency_package", "upload_dependency_package", + "upload_addon_zip", + "get_bundles", "create_bundle", "update_bundle", @@ -274,6 +301,7 @@ "get_users", "get_attributes_for_type", + "get_attributes_fields_for_type", "get_default_fields_for_type", "get_project_anatomy_preset", @@ -291,6 +319,11 @@ "get_addons_project_settings", "get_addons_settings", + "get_secrets", + "get_secret", + "save_secret", + "delete_secret", + "get_project_names", "get_projects", "get_project", @@ -304,6 +337,8 @@ "get_folders", "get_tasks", + "get_task_by_id", + "get_task_by_name", "get_folder_ids_with_products", "get_product_by_id", @@ -334,6 +369,7 @@ "get_workfile_info", "get_workfile_info_by_id", + "get_thumbnail_by_id", "get_thumbnail", "get_folder_thumbnail", "get_version_thumbnail", diff --git a/openpype/vendor/python/common/ayon_api/_api.py b/openpype/vendor/python/common/ayon_api/_api.py index 82ffdc7527e..22e137d6e5c 100644 --- a/openpype/vendor/python/common/ayon_api/_api.py +++ b/openpype/vendor/python/common/ayon_api/_api.py @@ -25,12 +25,29 @@ class GlobalServerAPI(ServerAPI): but that can be filled afterwards with calling 'login' method. """ - def __init__(self, site_id=None, client_version=None): + def __init__( + self, + site_id=None, + client_version=None, + default_settings_variant=None, + ssl_verify=None, + cert=None, + ): url = self.get_url() token = self.get_token() - super(GlobalServerAPI, self).__init__(url, token, site_id, client_version) - + super(GlobalServerAPI, self).__init__( + url, + token, + site_id, + client_version, + default_settings_variant, + ssl_verify, + cert, + # We want to make sure that server and api key validation + # happens all the time in 'GlobalServerAPI'. 
+ create_session=False, + ) self.validate_server_availability() self.create_session() @@ -129,17 +146,6 @@ class ServiceContext: addon_version = None service_name = None - @staticmethod - def get_value_from_envs(env_keys, value=None): - if value: - return value - - for env_key in env_keys: - value = os.environ.get(env_key) - if value: - break - return value - @classmethod def init_service( cls, @@ -150,14 +156,8 @@ def init_service( service_name=None, connect=True ): - token = cls.get_value_from_envs( - ("AY_API_KEY", "AYON_API_KEY"), - token - ) - server_url = cls.get_value_from_envs( - ("AY_SERVER_URL", "AYON_SERVER_URL"), - server_url - ) + token = token or os.environ.get("AYON_API_KEY") + server_url = server_url or os.environ.get("AYON_SERVER_URL") if not server_url: raise FailedServiceInit("URL to server is not set") @@ -166,18 +166,9 @@ def init_service( "Token to server {} is not set".format(server_url) ) - addon_name = cls.get_value_from_envs( - ("AY_ADDON_NAME", "AYON_ADDON_NAME"), - addon_name - ) - addon_version = cls.get_value_from_envs( - ("AY_ADDON_VERSION", "AYON_ADDON_VERSION"), - addon_version - ) - service_name = cls.get_value_from_envs( - ("AY_SERVICE_NAME", "AYON_SERVICE_NAME"), - service_name - ) + addon_name = addon_name or os.environ.get("AYON_ADDON_NAME") + addon_version = addon_version or os.environ.get("AYON_ADDON_VERSION") + service_name = service_name or os.environ.get("AYON_SERVICE_NAME") cls.token = token cls.server_url = server_url @@ -401,6 +392,28 @@ def set_default_settings_variant(variant): return con.set_default_settings_variant(variant) +def get_sender(): + """Sender used to send requests. + + Returns: + Union[str, None]: Sender name or None. + """ + + con = get_server_api_connection() + return con.get_sender() + + +def set_sender(sender): + """Change sender used for requests. + + Args: + sender (Union[str, None]): Sender name or None. 
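Since `get_sender` and `set_sender` are thin wrappers over the global connection, a short hedged sketch of their intended use (assumes `AYON_SERVER_URL`/`AYON_API_KEY` are configured so the global connection can be created; the sender value itself is illustrative):

```python
# Hedged sketch of the new sender helpers wrapped above. Requires a
# reachable server configured via AYON_SERVER_URL / AYON_API_KEY.
import ayon_api

# Tag all subsequent requests from this process with a sender name.
ayon_api.set_sender("openpype-tray")  # illustrative sender value

print(ayon_api.get_sender())  # -> "openpype-tray"

# Passing None clears the sender again.
ayon_api.set_sender(None)
```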
+    """
+
+    con = get_server_api_connection()
+    return con.set_sender(sender)
+
+
 def get_base_url():
     con = get_server_api_connection()
     return con.get_base_url()
@@ -461,6 +474,26 @@ def delete(*args, **kwargs):
     return con.delete(*args, **kwargs)
 
 
+def get_timeout(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_timeout(*args, **kwargs)
+
+
+def set_timeout(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.set_timeout(*args, **kwargs)
+
+
+def get_max_retries(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_max_retries(*args, **kwargs)
+
+
+def set_max_retries(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.set_max_retries(*args, **kwargs)
+
+
 def get_event(*args, **kwargs):
     con = get_server_api_connection()
     return con.get_event(*args, **kwargs)
@@ -618,6 +651,11 @@ def delete_dependency_package(*args, **kwargs):
     return con.delete_dependency_package(*args, **kwargs)
 
 
+def upload_addon_zip(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.upload_addon_zip(*args, **kwargs)
+
+
 def get_project_anatomy_presets(*args, **kwargs):
     con = get_server_api_connection()
     return con.get_project_anatomy_presets(*args, **kwargs)
@@ -708,6 +746,26 @@ def get_addons_settings(*args, **kwargs):
     return con.get_addons_settings(*args, **kwargs)
 
 
+def get_secrets(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_secrets(*args, **kwargs)
+
+
+def get_secret(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_secret(*args, **kwargs)
+
+
+def save_secret(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.save_secret(*args, **kwargs)
+
+
+def delete_secret(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.delete_secret(*args, **kwargs)
+
+
 def get_project_names(*args, **kwargs):
     con = get_server_api_connection()
     return con.get_project_names(*args, **kwargs)
@@ -738,6 +796,16 @@ def get_tasks(*args, **kwargs):
     return con.get_tasks(*args, **kwargs)
 
 
+def get_task_by_id(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_task_by_id(*args, **kwargs)
+
+
+def get_task_by_name(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_task_by_name(*args, **kwargs)
+
+
 def get_folder_by_id(*args, **kwargs):
     con = get_server_api_connection()
     return con.get_folder_by_id(*args, **kwargs)
@@ -908,6 +976,11 @@ def delete_project(project_name):
     return con.delete_project(project_name)
 
 
+def get_thumbnail_by_id(project_name, thumbnail_id):
+    con = get_server_api_connection()
+    return con.get_thumbnail_by_id(project_name, thumbnail_id)
+
+
 def get_thumbnail(project_name, entity_type, entity_id, thumbnail_id=None):
     con = get_server_api_connection()
     con.get_thumbnail(project_name, entity_type, entity_id, thumbnail_id)
@@ -938,6 +1011,11 @@ def update_thumbnail(project_name, thumbnail_id, src_filepath):
     return con.update_thumbnail(project_name, thumbnail_id, src_filepath)
 
 
+def get_attributes_fields_for_type(entity_type):
+    con = get_server_api_connection()
+    return con.get_attributes_fields_for_type(entity_type)
+
+
 def get_default_fields_for_type(entity_type):
     con = get_server_api_connection()
     return con.get_default_fields_for_type(entity_type)
diff --git a/openpype/vendor/python/common/ayon_api/constants.py b/openpype/vendor/python/common/ayon_api/constants.py
index e2b05a5cae6..eaeb77b607c 100644
--- a/openpype/vendor/python/common/ayon_api/constants.py
+++ b/openpype/vendor/python/common/ayon_api/constants.py
@@ -1,9 +1,31 @@
 # Environments
where server url and api key are stored for global connection SERVER_URL_ENV_KEY = "AYON_SERVER_URL" SERVER_API_ENV_KEY = "AYON_API_KEY" +SERVER_TIMEOUT_ENV_KEY = "AYON_SERVER_TIMEOUT" +SERVER_RETRIES_ENV_KEY = "AYON_SERVER_RETRIES" + # Backwards compatibility SERVER_TOKEN_ENV_KEY = SERVER_API_ENV_KEY +# --- User --- +DEFAULT_USER_FIELDS = { + "accessGroups", + "defaultAccessGroups", + "name", + "isService", + "isManager", + "isGuest", + "isAdmin", + "createdAt", + "active", + "hasPassword", + "updatedAt", + "apiKeyPreview", + "attrib.avatarUrl", + "attrib.email", + "attrib.fullName", +} + # --- Product types --- DEFAULT_PRODUCT_TYPE_FIELDS = { "name", diff --git a/openpype/vendor/python/common/ayon_api/entity_hub.py b/openpype/vendor/python/common/ayon_api/entity_hub.py index ab1e2584d7c..b9b017bac50 100644 --- a/openpype/vendor/python/common/ayon_api/entity_hub.py +++ b/openpype/vendor/python/common/ayon_api/entity_hub.py @@ -1,10 +1,11 @@ +import re import copy import collections from abc import ABCMeta, abstractmethod import six from ._api import get_server_api_connection -from .utils import create_entity_id, convert_entity_id +from .utils import create_entity_id, convert_entity_id, slugify_string UNKNOWN_VALUE = object() PROJECT_PARENT_ID = object() @@ -545,6 +546,7 @@ def fill_project_from_server(self): library=project["library"], folder_types=project["folderTypes"], task_types=project["taskTypes"], + statuses=project["statuses"], name=project["name"], attribs=project["ownAttrib"], data=project["data"], @@ -775,8 +777,7 @@ def commit_changes(self): "projects/{}".format(self.project_name), **project_changes ) - if response.status_code != 204: - raise ValueError("Failed to update project") + response.raise_for_status() self.project_entity.lock() @@ -1485,6 +1486,722 @@ def fill_children_ids(self, children_ids): self._children_ids = set(children_ids) +class ProjectStatus: + """Project status class. + + Args: + name (str): Name of the status. e.g. 'In progress' + short_name (Optional[str]): Short name of the status. e.g. 'IP' + state (Optional[Literal[not_started, in_progress, done, blocked]]): A + state of the status. + icon (Optional[str]): Icon of the status. e.g. 'play_arrow'. + color (Optional[str]): Color of the status. e.g. '#eeeeee'. + index (Optional[int]): Index of the status. + project_statuses (Optional[_ProjectStatuses]): Project statuses + wrapper. 
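To make the class docstring above concrete, a hedged sketch of constructing a standalone status (the vendored package is importable as `ayon_api`; values mirror the docstring examples):

```python
# Hedged sketch: creating a status detached from any project, using the
# example values from the docstring above. 'ayon_api' is the vendored
# package added under openpype/vendor.
from ayon_api.entity_hub import ProjectStatus

status = ProjectStatus(
    "In progress",
    short_name="IP",
    state="in_progress",
    icon="play_arrow",
    color="#eeeeee",
)

# Without an index or a parent '_ProjectStatuses' the status counts as new.
print(status.name, status.short_name, status.state)
```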
+ """ + + valid_states = ("not_started", "in_progress", "done", "blocked") + color_regex = re.compile(r"#([a-f0-9]{6})$") + default_state = "in_progress" + default_color = "#eeeeee" + + def __init__( + self, + name, + short_name=None, + state=None, + icon=None, + color=None, + index=None, + project_statuses=None, + is_new=None, + ): + short_name = short_name or "" + icon = icon or "" + state = state or self.default_state + color = color or self.default_color + self._name = name + self._short_name = short_name + self._icon = icon + self._slugified_name = None + self._state = None + self._color = None + self.set_state(state) + self.set_color(color) + + self._original_name = name + self._original_short_name = short_name + self._original_icon = icon + self._original_state = state + self._original_color = color + self._original_index = index + + self._index = index + self._project_statuses = project_statuses + if is_new is None: + is_new = index is None or project_statuses is None + self._is_new = is_new + + def __str__(self): + short_name = "" + if self.short_name: + short_name = "({})".format(self.short_name) + return "<{} {}{}>".format( + self.__class__.__name__, self.name, short_name + ) + + def __repr__(self): + return str(self) + + def __getitem__(self, key): + if key in { + "name", "short_name", "icon", "state", "color", "slugified_name" + }: + return getattr(self, key) + raise KeyError(key) + + def __setitem__(self, key, value): + if key in {"name", "short_name", "icon", "state", "color"}: + return setattr(self, key, value) + raise KeyError(key) + + def lock(self): + """Lock status. + + Changes were commited and current values are now the original values. + """ + + self._is_new = False + self._original_name = self.name + self._original_short_name = self.short_name + self._original_icon = self.icon + self._original_state = self.state + self._original_color = self.color + self._original_index = self.index + + @staticmethod + def slugify_name(name): + """Slugify status name for name comparison. + + Args: + name (str): Name of the status. + + Returns: + str: Slugified name. + """ + + return slugify_string(name.lower()) + + def get_project_statuses(self): + """Internal logic method. + + Returns: + _ProjectStatuses: Project statuses object. + """ + + return self._project_statuses + + def set_project_statuses(self, project_statuses): + """Internal logic method to change parent object. + + Args: + project_statuses (_ProjectStatuses): Project statuses object. + """ + + self._project_statuses = project_statuses + + def unset_project_statuses(self, project_statuses): + """Internal logic method to unset parent object. + + Args: + project_statuses (_ProjectStatuses): Project statuses object. + """ + + if self._project_statuses is project_statuses: + self._project_statuses = None + self._index = None + + @property + def changed(self): + """Status has changed. + + Returns: + bool: Status has changed. + """ + + return ( + self._is_new + or self._original_name != self._name + or self._original_short_name != self._short_name + or self._original_index != self._index + or self._original_state != self._state + or self._original_icon != self._icon + or self._original_color != self._color + ) + + def delete(self): + """Remove status from project statuses object.""" + + if self._project_statuses is not None: + self._project_statuses.remove(self) + + def get_index(self): + """Get index of status. + + Returns: + Union[int, None]: Index of status or None if status is not under + project. 
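The `slugify_name` helper above is what makes loose name comparisons possible. A short sketch, assuming the slugifier normalizes case and separators as the class docstring's 'In Progress' vs. 'in-progress' example implies:

```python
# Sketch of name comparison via slugs. The exact slug format depends on
# 'slugify_string' from ayon_api.utils; the equality below follows the
# 'In Progress' vs. 'in-progress' example in the class docstring.
from ayon_api.entity_hub import ProjectStatus

slug_a = ProjectStatus.slugify_name("In Progress")
slug_b = ProjectStatus.slugify_name("in-progress")
print(slug_a, slug_b, slug_a == slug_b)  # expected to compare equal
```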
+        """
+
+        return self._index
+
+    def set_index(self, index, **kwargs):
+        """Change status index.
+
+        Args:
+            index (int): New status index.
+        """
+
+        if kwargs.get("from_parent"):
+            self._index = index
+        else:
+            self._project_statuses.set_status_index(self, index)
+
+    def get_name(self):
+        """Status name.
+
+        Returns:
+            str: Status name.
+        """
+
+        return self._name
+
+    def set_name(self, name):
+        """Change status name.
+
+        Args:
+            name (str): New status name.
+        """
+
+        if not isinstance(name, six.string_types):
+            raise TypeError("Name must be a string.")
+        if name == self._name:
+            return
+        self._name = name
+        self._slugified_name = None
+
+    def get_short_name(self):
+        """Status short name, three letters at most.
+
+        Returns:
+            str: Status short name.
+        """
+
+        return self._short_name
+
+    def set_short_name(self, short_name):
+        """Change status short name.
+
+        Args:
+            short_name (str): New status short name, three letters at most.
+        """
+
+        if not isinstance(short_name, six.string_types):
+            raise TypeError("Short name must be a string.")
+        self._short_name = short_name
+
+    def get_icon(self):
+        """Name of icon to use for status.
+
+        Returns:
+            str: Name of the icon.
+        """
+
+        return self._icon
+
+    def set_icon(self, icon):
+        """Change status icon name.
+
+        Args:
+            icon (str): Name of the icon.
+        """
+
+        if icon is None:
+            icon = ""
+        if not isinstance(icon, six.string_types):
+            raise TypeError("Icon name must be a string.")
+        self._icon = icon
+
+    @property
+    def slugified_name(self):
+        """Slugified and lowercased status name.
+
+        Can be used for comparison of existing statuses. e.g. 'In Progress'
+            vs. 'in-progress'.
+
+        Returns:
+            str: Slugified and lowercased status name.
+        """
+
+        if self._slugified_name is None:
+            self._slugified_name = self.slugify_name(self.name)
+        return self._slugified_name
+
+    def get_state(self):
+        """Get state of project status.
+
+        Returns:
+            Literal[not_started, in_progress, done, blocked]: General
+                state of status.
+        """
+
+        return self._state
+
+    def set_state(self, state):
+        """Set state of project status.
+
+        Args:
+            state (Literal[not_started, in_progress, done, blocked]): General
+                state of status.
+        """
+
+        if state not in self.valid_states:
+            raise ValueError("Invalid state '{}'".format(str(state)))
+        self._state = state
+
+    def get_color(self):
+        """Get color of project status.
+
+        Returns:
+            str: Status color.
+        """
+
+        return self._color
+
+    def set_color(self, color):
+        """Set color of project status.
+
+        Args:
+            color (str): Color in hex format. Example: '#ff0000'.
+        """
+
+        if not isinstance(color, six.string_types):
+            raise TypeError(
+                "Color must be string got '{}'".format(type(color)))
+        color = color.lower()
+        if self.color_regex.fullmatch(color) is None:
+            raise ValueError("Invalid color value '{}'".format(color))
+        self._color = color
+
+    name = property(get_name, set_name)
+    short_name = property(get_short_name, set_short_name)
+    project_statuses = property(get_project_statuses, set_project_statuses)
+    index = property(get_index, set_index)
+    state = property(get_state, set_state)
+    color = property(get_color, set_color)
+    icon = property(get_icon, set_icon)
+
+    def _validate_other_p_statuses(self, other):
+        """Validate if other status can be used for move.
+
+        To position one status relative to another, both statuses must
+        belong to the same existing '_ProjectStatuses' object.
+
+        Args:
+            other (ProjectStatus): Other status to validate.
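A quick sketch of the validation behaviour implemented by `set_state` and `set_color` above; invalid values raise instead of being stored:

```python
# Sketch of the validation above: states must be one of 'valid_states'
# and colors must match the '#rrggbb' lowercase-hex pattern.
from ayon_api.entity_hub import ProjectStatus

status = ProjectStatus("Done", state="done", color="#33cc33")

try:
    status.set_state("finished")  # not in ProjectStatus.valid_states
except ValueError as exc:
    print("state rejected:", exc)

try:
    status.set_color("green")  # not a '#rrggbb' hex value
except ValueError as exc:
    print("color rejected:", exc)

status.set_color("#33CC33")  # accepted: lowercased before validation
```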
+ """ + + o_project_statuses = other.project_statuses + m_project_statuses = self.project_statuses + if o_project_statuses is None and m_project_statuses is None: + raise ValueError("Both statuses are not assigned to a project.") + + missing_status = None + if o_project_statuses is None: + missing_status = other + elif m_project_statuses is None: + missing_status = self + if missing_status is not None: + raise ValueError( + "Status '{}' is not assigned to a project.".format( + missing_status.name)) + if m_project_statuses is not o_project_statuses: + raise ValueError( + "Statuse are assigned to different projects." + " Cannot execute move." + ) + + def move_before(self, other): + """Move status before other status. + + Args: + other (ProjectStatus): Status to move before. + """ + + self._validate_other_p_statuses(other) + self._project_statuses.set_status_index(self, other.index) + + def move_after(self, other): + """Move status after other status. + + Args: + other (ProjectStatus): Status to move after. + """ + + self._validate_other_p_statuses(other) + self._project_statuses.set_status_index(self, other.index + 1) + + def to_data(self): + """Convert status to data. + + Returns: + dict[str, str]: Status data. + """ + + output = { + "name": self.name, + "shortName": self.short_name, + "state": self.state, + "icon": self.icon, + "color": self.color, + } + if ( + not self._is_new + and self._original_name + and self.name != self._original_name + ): + output["original_name"] = self._original_name + return output + + @classmethod + def from_data(cls, data, index=None, project_statuses=None): + """Create project status from data. + + Args: + data (dict[str, str]): Status data. + index (Optional[int]): Status index. + project_statuses (Optional[ProjectStatuses]): Project statuses + object which wraps the status for a project. + """ + + return cls( + data["name"], + data.get("shortName", data.get("short_name")), + data.get("state"), + data.get("icon"), + data.get("color"), + index=index, + project_statuses=project_statuses + ) + + +class _ProjectStatuses: + """Wrapper for project statuses. + + Supports basic methods to add, change or remove statuses from a project. + + To add new statuses use 'create' or 'add_status' methods. To change + statuses receive them by one of the getter methods and change their + values. + + Todos: + Validate if statuses are duplicated. + """ + + def __init__(self, statuses): + self._statuses = [ + ProjectStatus.from_data(status, idx, self) + for idx, status in enumerate(statuses) + ] + self._orig_status_length = len(self._statuses) + self._set_called = False + + def __len__(self): + return len(self._statuses) + + def __iter__(self): + """Iterate over statuses. + + Yields: + ProjectStatus: Project status. + """ + + for status in self._statuses: + yield status + + def create( + self, + name, + short_name=None, + state=None, + icon=None, + color=None, + ): + """Create project status. + + Args: + name (str): Name of the status. e.g. 'In progress' + short_name (Optional[str]): Short name of the status. e.g. 'IP' + state (Optional[Literal[not_started, in_progress, done, blocked]]): A + state of the status. + icon (Optional[str]): Icon of the status. e.g. 'play_arrow'. + color (Optional[str]): Color of the status. e.g. '#eeeeee'. + + Returns: + ProjectStatus: Created project status. + """ + + status = ProjectStatus( + name, short_name, state, icon, color, is_new=True + ) + self.append(status) + return status + + def lock(self): + """Lock statuses. 
+
+        Changes were committed and current values are now the original values.
+        """
+
+        self._orig_status_length = len(self._statuses)
+        self._set_called = False
+        for status in self._statuses:
+            status.lock()
+
+    def to_data(self):
+        """Convert to project statuses data."""
+
+        return [
+            status.to_data()
+            for status in self._statuses
+        ]
+
+    def set(self, statuses):
+        """Explicitly override statuses.
+
+        This method does not check if the statuses actually changed.
+
+        Args:
+            statuses (list[dict[str, str]]): List of statuses data.
+        """
+
+        self._set_called = True
+        self._statuses = [
+            ProjectStatus.from_data(status, idx, self)
+            for idx, status in enumerate(statuses)
+        ]
+
+    @property
+    def changed(self):
+        """Statuses have changed.
+
+        Returns:
+            bool: True if statuses changed, False otherwise.
+        """
+
+        if self._set_called:
+            return True
+
+        # Check if status length changed
+        # - when all statuses are removed it is a change
+        if self._orig_status_length != len(self._statuses):
+            return True
+        # Go through all statuses and check if any of them changed
+        for status in self._statuses:
+            if status.changed:
+                return True
+        return False
+
+    def get(self, name, default=None):
+        """Get status by name.
+
+        Args:
+            name (str): Status name.
+            default (Any): Default value returned when status is not found.
+
+        Returns:
+            Union[ProjectStatus, Any]: Status or default value.
+        """
+
+        return next(
+            (
+                status
+                for status in self._statuses
+                if status.name == name
+            ),
+            default
+        )
+
+    get_status_by_name = get
+
+    def index(self, status, **kwargs):
+        """Get status index.
+
+        Args:
+            status (ProjectStatus): Status to get index of.
+            default (Optional[Any]): Default value if status is not found.
+
+        Returns:
+            Union[int, Any]: Status index.
+
+        Raises:
+            ValueError: If status is not found and default value is not
+                defined.
+        """
+
+        output = next(
+            (
+                idx
+                for idx, st in enumerate(self._statuses)
+                if st is status
+            ),
+            None
+        )
+        if output is not None:
+            return output
+
+        if "default" in kwargs:
+            return kwargs["default"]
+        raise ValueError("Status '{}' not found".format(status.name))
+
+    def get_status_by_slugified_name(self, name):
+        """Get status by slugified name.
+
+        Args:
+            name (str): Status name. Is slugified before search.
+
+        Returns:
+            Union[ProjectStatus, None]: Status or None if not found.
+        """
+
+        slugified_name = ProjectStatus.slugify_name(name)
+        return next(
+            (
+                status
+                for status in self._statuses
+                if status.slugified_name == slugified_name
+            ),
+            None
+        )
+
+    def remove_by_name(self, name, ignore_missing=False):
+        """Remove status by name.
+
+        Args:
+            name (str): Status name.
+            ignore_missing (Optional[bool]): If True, no error is raised if
+                status is not found.
+
+        Returns:
+            Union[ProjectStatus, None]: Removed status.
+        """
+
+        matching_status = self.get(name)
+        if matching_status is None:
+            if ignore_missing:
+                return
+            raise ValueError(
+                "Status '{}' not found in project".format(name))
+        return self.remove(matching_status)
+
+    def remove(self, status, ignore_missing=False):
+        """Remove status.
+
+        Args:
+            status (ProjectStatus): Status to remove.
+            ignore_missing (Optional[bool]): If True, no error is raised if
+                status is not found.
+
+        Returns:
+            Union[ProjectStatus, None]: Removed status.
+        """
+
+        index = self.index(status, default=None)
+        if index is None:
+            if ignore_missing:
+                return None
+            raise ValueError("Status '{}' not in project".format(status))
+
+        return self.pop(index)
+
+    def pop(self, index):
+        """Remove status by index.
+
+        Args:
+            index (int): Status index.
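The reordering methods above keep indexes consistent whenever a status is popped or inserted. A hedged sketch (`_ProjectStatuses` is normally created internally by `ProjectEntity`; constructing it directly here is only for illustration):

```python
# Sketch of reordering based on the methods above: moving a status
# reindexes its neighbours. Building '_ProjectStatuses' directly is
# for illustration only; status data values are illustrative.
from ayon_api.entity_hub import _ProjectStatuses

statuses = _ProjectStatuses([
    {"name": "Not ready", "state": "not_started"},
    {"name": "In progress", "state": "in_progress"},
    {"name": "Done", "state": "done"},
])

done = statuses.get("Done")
done.move_before(statuses.get("Not ready"))

print([status.name for status in statuses])
# -> ['Done', 'Not ready', 'In progress'] (indexes stay consistent)
```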
+ + Returns: + ProjectStatus: Removed status. + """ + + status = self._statuses.pop(index) + status.unset_project_statuses(self) + for st in self._statuses[index:]: + st.set_index(st.index - 1, from_parent=True) + return status + + def insert(self, index, status): + """Insert status at index. + + Args: + index (int): Status index. + status (Union[ProjectStatus, dict[str, str]]): Status to insert. + Can be either status object or status data. + + Returns: + ProjectStatus: Inserted status. + """ + + if not isinstance(status, ProjectStatus): + status = ProjectStatus.from_data(status) + + start_index = index + end_index = len(self._statuses) + 1 + matching_index = self.index(status, default=None) + if matching_index is not None: + if matching_index == index: + status.set_index(index, from_parent=True) + return + + self._statuses.pop(matching_index) + if matching_index < index: + start_index = matching_index + end_index = index + 1 + else: + end_index -= 1 + + status.set_project_statuses(self) + self._statuses.insert(index, status) + for idx, st in enumerate(self._statuses[start_index:end_index]): + st.set_index(start_index + idx, from_parent=True) + return status + + def append(self, status): + """Add new status to the end of the list. + + Args: + status (Union[ProjectStatus, dict[str, str]]): Status to insert. + Can be either status object or status data. + + Returns: + ProjectStatus: Inserted status. + """ + + return self.insert(len(self._statuses), status) + + def set_status_index(self, status, index): + """Set status index. + + Args: + status (ProjectStatus): Status to set index. + index (int): New status index. + """ + + return self.insert(index, status) + + class ProjectEntity(BaseEntity): """Entity representing project on AYON server. @@ -1514,7 +2231,14 @@ class ProjectEntity(BaseEntity): default_task_type_icon = "task_alt" def __init__( - self, project_code, library, folder_types, task_types, *args, **kwargs + self, + project_code, + library, + folder_types, + task_types, + statuses, + *args, + **kwargs ): super(ProjectEntity, self).__init__(*args, **kwargs) @@ -1522,11 +2246,13 @@ def __init__( self._library_project = library self._folder_types = folder_types self._task_types = task_types + self._statuses_obj = _ProjectStatuses(statuses) self._orig_project_code = project_code self._orig_library_project = library self._orig_folder_types = copy.deepcopy(folder_types) self._orig_task_types = copy.deepcopy(task_types) + self._orig_statuses = copy.deepcopy(statuses) def _prepare_entity_id(self, entity_id): if entity_id != self.project_name: @@ -1573,13 +2299,24 @@ def set_task_types(self, task_types): new_task_types.append(task_type) self._task_types = new_task_types + def get_orig_statuses(self): + return copy.deepcopy(self._orig_statuses) + + def get_statuses(self): + return self._statuses_obj + + def set_statuses(self, statuses): + self._statuses_obj.set(statuses) + folder_types = property(get_folder_types, set_folder_types) task_types = property(get_task_types, set_task_types) + statuses = property(get_statuses, set_statuses) def lock(self): super(ProjectEntity, self).lock() self._orig_folder_types = copy.deepcopy(self._folder_types) self._orig_task_types = copy.deepcopy(self._task_types) + self._statuses_obj.lock() @property def changes(self): @@ -1590,6 +2327,9 @@ def changes(self): if self._orig_task_types != self._task_types: changes["taskTypes"] = self.get_task_types() + if self._statuses_obj.changed: + changes["statuses"] = self._statuses_obj.to_data() + return changes 
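Putting the pieces together, a hedged sketch of how the new statuses tracking surfaces in the `changes` payload (the exact `EntityHub` bootstrap may differ; the project name is hypothetical):

```python
# Hedged sketch: editing statuses through the entity hub. 'EntityHub' and
# 'ProjectEntity' come from this module; 'my_project' is hypothetical.
from ayon_api.entity_hub import EntityHub

hub = EntityHub("my_project")
project = hub.project_entity

statuses = project.get_statuses()
statuses.create("On hold", short_name="OH", state="blocked")

# A new status marks the wrapper as changed, and the serialized list is
# sent under the 'statuses' key of the project update on commit.
print(project.changes.get("statuses"))
hub.commit_changes()
```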
@classmethod diff --git a/openpype/vendor/python/common/ayon_api/graphql_queries.py b/openpype/vendor/python/common/ayon_api/graphql_queries.py index 4af8c53e4e3..2435fc8a177 100644 --- a/openpype/vendor/python/common/ayon_api/graphql_queries.py +++ b/openpype/vendor/python/common/ayon_api/graphql_queries.py
@@ -247,9 +247,11 @@ def products_graphql_query(fields): query = GraphQlQuery("ProductsQuery") project_name_var = query.add_variable("projectName", "String!") - folder_ids_var = query.add_variable("folderIds", "[String!]") product_ids_var = query.add_variable("productIds", "[String!]") product_names_var = query.add_variable("productNames", "[String!]") + folder_ids_var = query.add_variable("folderIds", "[String!]") + product_types_var = query.add_variable("productTypes", "[String!]") + statuses_var = query.add_variable("statuses", "[String!]") project_field = query.add_field("project") project_field.set_filter("name", project_name_var)
@@ -258,6 +260,8 @@ def products_graphql_query(fields): products_field.set_filter("ids", product_ids_var) products_field.set_filter("names", product_names_var) products_field.set_filter("folderIds", folder_ids_var) + products_field.set_filter("productTypes", product_types_var) + products_field.set_filter("statuses", statuses_var) nested_fields = fields_to_dict(set(fields)) add_links_fields(products_field, nested_fields)
@@ -462,3 +466,28 @@ def events_graphql_query(fields): for k, v in value.items(): query_queue.append((k, v, field)) return query + + +def users_graphql_query(fields): + query = GraphQlQuery("Users") + names_var = query.add_variable("userNames", "[String!]") + + users_field = query.add_field_with_edges("users") + users_field.set_filter("names", names_var) + + nested_fields = fields_to_dict(set(fields)) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, users_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query
diff --git a/openpype/vendor/python/common/ayon_api/operations.py b/openpype/vendor/python/common/ayon_api/operations.py index 7cf610a5664..eb2ca8afe31 100644 --- a/openpype/vendor/python/common/ayon_api/operations.py +++ b/openpype/vendor/python/common/ayon_api/operations.py
@@ -1,3 +1,4 @@ +import os import copy import collections import uuid
@@ -22,6 +23,8 @@ def new_folder_entity( name, folder_type, parent_id=None, + status=None, + tags=None, attribs=None, data=None, thumbnail_id=None,
@@ -32,12 +35,14 @@ Args: name (str): Is considered as unique identifier of folder in project. folder_type (str): Type of folder. - parent_id (Optional[str]]): Id of parent folder. + parent_id (Optional[str]): Parent folder id. + status (Optional[str]): Folder status. + tags (Optional[List[str]]): List of tags. attribs (Optional[Dict[str, Any]]): Explicitly set attributes of folder. data (Optional[Dict[str, Any]]): Custom folder data. Empty dictionary is used if not passed. - thumbnail_id (Optional[str]): Id of thumbnail related to folder. + thumbnail_id (Optional[str]): Thumbnail id related to folder. entity_id (Optional[str]): Predefined id of entity. New id is created if not passed.
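The two new query variables are wired into `ServerAPI.get_products` later in this diff. A hedged sketch of the resulting call follows; the global connection helper exists in upstream ayon_api, and the project, product types and status name are placeholders.

```python
# Hedged sketch of the new get_products filters (illustrative only).
import ayon_api

con = ayon_api.get_server_api_connection()
# Both filters are optional; an empty iterable short-circuits to no results.
for product in con.get_products(
    "my_project",                            # placeholder project name
    product_types={"render", "pointcache"},  # placeholder product types
    statuses={"Approved"},                   # placeholder status name
):
    print(product["id"], product["name"])
```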
@@ -54,7 +59,7 @@ def new_folder_entity( if parent_id is not None: parent_id = _create_or_convert_to_id(parent_id) - return { + output = { "id": _create_or_convert_to_id(entity_id), "name": name, # This will be ignored @@ -64,6 +69,11 @@ def new_folder_entity( "attrib": attribs, "thumbnailId": thumbnail_id } + if status: + output["status"] = status + if tags: + output["tags"] = tags + return output def new_product_entity( @@ -71,6 +81,7 @@ def new_product_entity( product_type, folder_id, status=None, + tags=None, attribs=None, data=None, entity_id=None @@ -81,8 +92,9 @@ def new_product_entity( name (str): Is considered as unique identifier of product under folder. product_type (str): Product type. - folder_id (str): Id of parent folder. + folder_id (str): Parent folder id. status (Optional[str]): Product status. + tags (Optional[List[str]]): List of tags. attribs (Optional[Dict[str, Any]]): Explicitly set attributes of product. data (Optional[Dict[str, Any]]): product entity data. Empty dictionary @@ -110,6 +122,8 @@ def new_product_entity( } if status: output["status"] = status + if tags: + output["tags"] = tags return output @@ -119,6 +133,8 @@ def new_version_entity( task_id=None, thumbnail_id=None, author=None, + status=None, + tags=None, attribs=None, data=None, entity_id=None @@ -128,10 +144,12 @@ def new_version_entity( Args: version (int): Is considered as unique identifier of version under product. - product_id (str): Id of parent product. - task_id (Optional[str]]): Id of task under which product was created. - thumbnail_id (Optional[str]]): Thumbnail related to version. - author (Optional[str]]): Name of version author. + product_id (str): Parent product id. + task_id (Optional[str]): Task id under which product was created. + thumbnail_id (Optional[str]): Thumbnail related to version. + author (Optional[str]): Name of version author. + status (Optional[str]): Version status. + tags (Optional[List[str]]): List of tags. attribs (Optional[Dict[str, Any]]): Explicitly set attributes of version. data (Optional[Dict[str, Any]]): Version entity custom data. @@ -164,6 +182,10 @@ def new_version_entity( output["thumbnailId"] = thumbnail_id if author: output["author"] = author + if tags: + output["tags"] = tags + if status: + output["status"] = status return output @@ -173,6 +195,8 @@ def new_hero_version_entity( task_id=None, thumbnail_id=None, author=None, + status=None, + tags=None, attribs=None, data=None, entity_id=None @@ -182,10 +206,12 @@ def new_hero_version_entity( Args: version (int): Is considered as unique identifier of version under product. Should be same as standard version if there is any. - product_id (str): Id of parent product. - task_id (Optional[str]): Id of task under which product was created. + product_id (str): Parent product id. + task_id (Optional[str]): Task id under which product was created. thumbnail_id (Optional[str]): Thumbnail related to version. author (Optional[str]): Name of version author. + status (Optional[str]): Version status. + tags (Optional[List[str]]): List of tags. attribs (Optional[Dict[str, Any]]): Explicitly set attributes of version. data (Optional[Dict[str, Any]]): Version entity data. 
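A hedged sketch of the extended skeleton helpers in use, anchored on `new_version_entity` with the newly added `status` and `tags` fields; `OperationsSession` is the existing helper from the same module, and the ids and values below are placeholders.

```python
# Hedged usage sketch for the extended entity skeletons (illustrative only).
from ayon_api.operations import OperationsSession, new_version_entity

session = OperationsSession()
version = new_version_entity(
    1,
    product_id="0123456789abcdef0123456789abcdef",  # placeholder id
    status="In progress",  # newly supported optional field
    tags=["review"],       # newly supported optional field
)
# Queue the create operation and send it to the server in one commit.
session.create_entity("my_project", "version", version)
session.commit()
```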
@@ -215,18 +241,32 @@ def new_hero_version_entity( output["thumbnailId"] = thumbnail_id if author: output["author"] = author + if tags: + output["tags"] = tags + if status: + output["status"] = status return output def new_representation_entity( - name, version_id, attribs=None, data=None, entity_id=None + name, + version_id, + files, + status=None, + tags=None, + attribs=None, + data=None, + entity_id=None ): """Create skeleton data of representation entity. Args: name (str): Representation name considered as unique identifier of representation under version. - version_id (str): Id of parent version. + version_id (str): Parent version id. + files (list[dict[str, str]]): List of files in representation. + status (Optional[str]): Representation status. + tags (Optional[List[str]]): List of tags. attribs (Optional[Dict[str, Any]]): Explicitly set attributes of representation. data (Optional[Dict[str, Any]]): Representation entity data.
@@ -243,27 +283,42 @@ def new_representation_entity( if data is None: data = {} - return { + output = { "id": _create_or_convert_to_id(entity_id), "versionId": _create_or_convert_to_id(version_id), + "files": files, "name": name, "data": data, "attrib": attribs } + if tags: + output["tags"] = tags + if status: + output["status"] = status + return output -def new_workfile_info_doc( - filename, folder_id, task_name, files, data=None, entity_id=None +def new_workfile_info( + filepath, + task_id, + status=None, + tags=None, + attribs=None, + description=None, + data=None, + entity_id=None ): """Create skeleton data of workfile info entity. Workfile entity is at this moment used primarily for artist notes. Args: - filename (str): Filename of workfile. - folder_id (str): Id of folder under which workfile live. - task_name (str): Task under which was workfile created. - files (List[str]): List of rootless filepaths related to workfile. + filepath (str): Rootless workfile filepath. + task_id (str): Task id under which the workfile was created. + status (Optional[str]): Workfile status. + tags (Optional[List[str]]): Workfile tags. + attribs (Optional[Dict[str, Any]]): Explicitly set attributes. + description (Optional[str]): Workfile description. data (Optional[Dict[str, Any]]): Additional metadata. entity_id (Optional[str]): Predefined id of entity. New id is created if not passed.
@@ -272,17 +327,31 @@ Dict[str, Any]: Skeleton of workfile info entity.
""" + if attribs is None: + attribs = {} + + if "extension" not in attribs: + attribs["extension"] = os.path.splitext(filepath)[-1] + + if description: + attribs["description"] = description + if not data: data = {} - return { + output = { "id": _create_or_convert_to_id(entity_id), - "parent": _create_or_convert_to_id(folder_id), - "task_name": task_name, - "filename": filename, + "taskId": task_id, + "path": filepath, "data": data, - "files": files + "attrib": attribs } + if status: + output["status"] = status + + if tags: + output["tags"] = tags + return output @six.add_metaclass(ABCMeta) diff --git a/openpype/vendor/python/common/ayon_api/server_api.py b/openpype/vendor/python/common/ayon_api/server_api.py index c886fed976c..511a239a831 100644 --- a/openpype/vendor/python/common/ayon_api/server_api.py +++ b/openpype/vendor/python/common/ayon_api/server_api.py @@ -2,9 +2,9 @@ import re import io import json +import time import logging import collections -import datetime import platform import copy import uuid @@ -15,9 +15,20 @@ HTTPStatus = None import requests -from requests.exceptions import JSONDecodeError as RequestsJSONDecodeError +try: + # This should be used if 'requests' have it available + from requests.exceptions import JSONDecodeError as RequestsJSONDecodeError +except ImportError: + # Older versions of 'requests' don't have custom exception for json + # decode error + try: + from simplejson import JSONDecodeError as RequestsJSONDecodeError + except ImportError: + from json import JSONDecodeError as RequestsJSONDecodeError from .constants import ( + SERVER_TIMEOUT_ENV_KEY, + SERVER_RETRIES_ENV_KEY, DEFAULT_PRODUCT_TYPE_FIELDS, DEFAULT_PROJECT_FIELDS, DEFAULT_FOLDER_FIELDS, @@ -28,8 +39,8 @@ REPRESENTATION_FILES_FIELDS, DEFAULT_WORKFILE_INFO_FIELDS, DEFAULT_EVENT_FIELDS, + DEFAULT_USER_FIELDS, ) -from .thumbnails import ThumbnailCache from .graphql import GraphQlQuery, INTROSPECTION_QUERY from .graphql_queries import ( project_graphql_query, @@ -44,6 +55,7 @@ representations_parents_qraphql_query, workfiles_info_graphql_query, events_graphql_query, + users_graphql_query, ) from .exceptions import ( FailedOperations, @@ -62,6 +74,7 @@ failed_json_default, TransferProgress, create_dependency_package_basename, + ThumbnailContent, ) PatternType = type(re.compile("")) @@ -117,6 +130,8 @@ def __init__(self, response, data=None): @property def text(self): + if self._response is None: + return self.detail return self._response.text @property @@ -125,6 +140,8 @@ def orig_response(self): @property def headers(self): + if self._response is None: + return {} return self._response.headers @property @@ -138,6 +155,8 @@ def data(self): @property def content(self): + if self._response is None: + return b"" return self._response.content @property @@ -320,12 +339,20 @@ class ServerAPI(object): default_settings_variant (Optional[Literal["production", "staging"]]): Settings variant used by default if a method for settings won't get any (by default is 'production'). + sender (Optional[str]): Sender of requests. Used in server logs and + propagated into events. ssl_verify (Union[bool, str, None]): Verify SSL certificate Looks for env variable value 'AYON_CA_FILE' by default. If not available then 'True' is used. cert (Optional[str]): Path to certificate file. Looks for env variable value 'AYON_CERT_FILE' by default. + create_session (Optional[bool]): Create session for connection if + token is available. Default is True. + timeout (Optional[float]): Timeout for requests. 
+ max_retries (Optional[int]): Number of retries for requests. """ + _default_timeout = 10.0 + _default_max_retries = 3 def __init__( self, @@ -334,8 +361,12 @@ def __init__( site_id=None, client_version=None, default_settings_variant=None, + sender=None, ssl_verify=None, cert=None, + create_session=True, + timeout=None, + max_retries=None, ): if not base_url: raise ValueError("Invalid server URL {}".format(str(base_url))) @@ -352,6 +383,14 @@ def __init__( default_settings_variant or "production" ) + self._sender = sender + + self._timeout = None + self._max_retries = None + + # Set timeout and max retries based on passed values + self.set_timeout(timeout) + self.set_max_retries(max_retries) if ssl_verify is None: # Custom AYON env variable for CA file or 'True' @@ -367,6 +406,7 @@ def __init__( self._access_token_is_service = None self._token_is_valid = None + self._token_validation_started = False self._server_available = None self._server_version = None self._server_version_tuple = None @@ -387,7 +427,11 @@ def __init__( self._entity_type_attributes_cache = {} self._as_user_stack = _AsUserStack() - self._thumbnail_cache = ThumbnailCache(True) + + # Create session + if self._access_token and create_session: + self.validate_server_availability() + self.create_session() @property def log(self): @@ -452,6 +496,87 @@ def set_cert(self, cert): ssl_verify = property(get_ssl_verify, set_ssl_verify) cert = property(get_cert, set_cert) + @classmethod + def get_default_timeout(cls): + """Default value for requests timeout. + + First looks for environment variable SERVER_TIMEOUT_ENV_KEY which + can affect timeout value. If not available then use class + attribute '_default_timeout'. + + Returns: + float: Timeout value in seconds. + """ + + try: + return float(os.environ.get(SERVER_TIMEOUT_ENV_KEY)) + except (ValueError, TypeError): + pass + + return cls._default_timeout + + @classmethod + def get_default_max_retries(cls): + """Default value for requests max retries. + + First looks for environment variable SERVER_RETRIES_ENV_KEY, which + can affect max retries value. If not available then use class + attribute '_default_max_retries'. + + Returns: + int: Max retries value. + """ + + try: + return int(os.environ.get(SERVER_RETRIES_ENV_KEY)) + except (ValueError, TypeError): + pass + + return cls._default_max_retries + + def get_timeout(self): + """Current value for requests timeout. + + Returns: + float: Timeout value in seconds. + """ + + return self._timeout + + def set_timeout(self, timeout): + """Change timeout value for requests. + + Args: + timeout (Union[float, None]): Timeout value in seconds. + """ + + if timeout is None: + timeout = self.get_default_timeout() + self._timeout = float(timeout) + + def get_max_retries(self): + """Current value for requests max retries. + + Returns: + int: Max retries value. + """ + + return self._max_retries + + def set_max_retries(self, max_retries): + """Change max retries value for requests. + + Args: + max_retries (Union[int, None]): Max retries value. + """ + + if max_retries is None: + max_retries = self.get_default_max_retries() + self._max_retries = int(max_retries) + + timeout = property(get_timeout, set_timeout) + max_retries = property(get_max_retries, set_max_retries) + @property def access_token(self): """Access token used for authorization to server. @@ -551,6 +676,29 @@ def set_default_settings_variant(self, variant): set_default_settings_variant ) + def get_sender(self): + """Sender used to send requests. 
+ + Returns: + Union[str, None]: Sender name or None. + """ + + return self._sender + + def set_sender(self, sender): + """Change sender used for requests. + + Args: + sender (Union[str, None]): Sender name or None. + """ + + if sender == self._sender: + return + self._sender = sender + self._update_session_headers() + + sender = property(get_sender, set_sender) + def get_default_service_username(self): """Default username used for callbacks when used with service API key. @@ -652,6 +800,7 @@ def validate_server_availability(self): def validate_token(self): try: + self._token_validation_started = True # TODO add other possible validations # - existence of 'user' key in info # - validate that 'site_id' is in 'sites' in info @@ -661,6 +810,9 @@ def validate_token(self): except UnauthorizedError: self._token_is_valid = False + + finally: + self._token_validation_started = False return self._token_is_valid def set_token(self, token): @@ -673,8 +825,25 @@ def reset_token(self): self._token_is_valid = None self.close_session() - def create_session(self): + def create_session(self, ignore_existing=True, force=False): + """Create a connection session. + + Session helps to keep connection with server without + need to reconnect on each call. + + Args: + ignore_existing (bool): If session already exists, + ignore creation. + force (bool): If session already exists, close it and + create new. + """ + + if force and self._session is not None: + self.close_session() + if self._session is not None: + if ignore_existing: + return raise ValueError("Session is already created.") self._as_user_stack.clear() @@ -713,6 +882,7 @@ def _update_session_headers(self): ("X-as-user", self._as_user_stack.username), ("x-ayon-version", self._client_version), ("x-ayon-site-id", self._site_id), + ("x-sender", self._sender), ): if value is not None: self._session.headers[key] = value @@ -797,10 +967,44 @@ def _get_user_info(self): self._access_token_is_service = None return None - def get_users(self): - # TODO how to find out if user have permission? - users = self.get("users") - return users.data + def get_users(self, usernames=None, fields=None): + """Get Users. + + Args: + usernames (Optional[Iterable[str]]): Filter by usernames. + fields (Optional[Iterable[str]]): fields to be queried + for users. + + Returns: + Generator[dict[str, Any]]: Queried users. 
+ """ + + filters = {} + if usernames is not None: + usernames = set(usernames) + if not usernames: + return + filters["userNames"] = list(usernames) + + if not fields: + fields = self.get_default_fields_for_type("user") + + query = users_graphql_query(set(fields)) + for attr, filter_value in filters.items(): + query.set_variable_value(attr, filter_value) + + # Backwards compatibility for server 0.3.x + # - will be removed in future releases + major, minor, _, _, _ = self.server_version_tuple + access_groups_field = "accessGroups" + if major == 0 and minor <= 3: + access_groups_field = "roles" + + for parsed_data in query.continuous_query(self): + for user in parsed_data["users"]: + user[access_groups_field] = json.loads( + user[access_groups_field]) + yield user def get_user(self, username=None): output = None @@ -830,6 +1034,9 @@ def get_headers(self, content_type=None): if self._client_version is not None: headers["x-ayon-version"] = self._client_version + if self._sender is not None: + headers["x-sender"] = self._sender + if self._access_token: if self._access_token_is_service: headers["X-Api-Key"] = self._access_token @@ -841,7 +1048,19 @@ def get_headers(self, content_type=None): self._access_token) return headers - def login(self, username, password): + def login(self, username, password, create_session=True): + """Login to server. + + Args: + username (str): Username. + password (str): Password. + create_session (Optional[bool]): Create session after login. + Default: True. + + Raises: + AuthenticationError: Login failed. + """ + if self.has_valid_token: try: user_info = self.get_user() @@ -851,31 +1070,40 @@ def login(self, username, password): current_username = user_info.get("name") if current_username == username: self.close_session() - self.create_session() + if create_session: + self.create_session() return self.reset_token() self.validate_server_availability() - response = self.post( - "auth/login", - name=username, - password=password - ) - if response.status_code != 200: - _detail = response.data.get("detail") - details = "" - if _detail: - details = " {}".format(_detail) + self._token_validation_started = True - raise AuthenticationError("Login failed {}".format(details)) + try: + response = self.post( + "auth/login", + name=username, + password=password + ) + if response.status_code != 200: + _detail = response.data.get("detail") + details = "" + if _detail: + details = " {}".format(_detail) + + raise AuthenticationError("Login failed {}".format(details)) + + finally: + self._token_validation_started = False self._access_token = response["token"] if not self.has_valid_token: raise AuthenticationError("Invalid credentials") - self.create_session() + + if create_session: + self.create_session() def logout(self, soft=False): if self._access_token: @@ -887,7 +1115,20 @@ def _logout(self): logout_from_server(self._base_url, self._access_token) def _do_rest_request(self, function, url, **kwargs): + kwargs.setdefault("timeout", self.timeout) + max_retries = kwargs.get("max_retries", self.max_retries) + if max_retries < 1: + max_retries = 1 if self._session is None: + # Validate token if was not yet validated + # - ignore validation if we're in middle of + # validation + if ( + self._token_is_valid is None + and not self._token_validation_started + ): + self.validate_token() + if "headers" not in kwargs: kwargs["headers"] = self.get_headers() @@ -897,38 +1138,54 @@ def _do_rest_request(self, function, url, **kwargs): elif isinstance(function, RequestType): function = 
self._session_functions_mapping[function] - try: - response = function(url, **kwargs) + response = None + new_response = None + for _ in range(max_retries): + try: + response = function(url, **kwargs) + break + + except ConnectionRefusedError: + # Server may be restarting + new_response = RestApiResponse( + None, + {"detail": "Unable to connect the server. Connection refused"} + ) + except requests.exceptions.Timeout: + # Connection timed out + new_response = RestApiResponse( + None, + {"detail": "Connection timed out."} + ) + except requests.exceptions.ConnectionError: + # Other connection error (ssl, etc) - does not make sense to + # try call server again + new_response = RestApiResponse( + None, + {"detail": "Unable to connect the server. Connection error"} + ) + break - except ConnectionRefusedError: - new_response = RestApiResponse( - None, - {"detail": "Unable to connect the server. Connection refused"} - ) - except requests.exceptions.ConnectionError: - new_response = RestApiResponse( - None, - {"detail": "Unable to connect the server. Connection error"} - ) - else: - content_type = response.headers.get("Content-Type") - if content_type == "application/json": - try: - new_response = RestApiResponse(response) - except JSONDecodeError: - new_response = RestApiResponse( - None, - { - "detail": "The response is not a JSON: {}".format( - response.text) - } - ) + time.sleep(0.1) - elif content_type in ("image/jpeg", "image/png"): - new_response = RestApiResponse(response) + if new_response is not None: + return new_response - else: + content_type = response.headers.get("Content-Type") + if content_type == "application/json": + try: new_response = RestApiResponse(response) + except JSONDecodeError: + new_response = RestApiResponse( + None, + { + "detail": "The response is not a JSON: {}".format( + response.text) + } + ) + + else: + new_response = RestApiResponse(response) self.log.debug("Response {}".format(str(new_response))) return new_response @@ -1074,7 +1331,7 @@ def get_events( filters["includeLogsFilter"] = include_logs if not fields: - fields = DEFAULT_EVENT_FIELDS + fields = self.get_default_fields_for_type("event") query = events_graphql_query(set(fields)) for attr, filter_value in filters.items(): @@ -1175,7 +1432,8 @@ def enroll_event_job( target_topic, sender, description=None, - sequential=None + sequential=None, + events_filter=None, ): """Enroll job based on events. @@ -1217,6 +1475,8 @@ def enroll_event_job( in target event. sequential (Optional[bool]): The source topic must be processed in sequence. + events_filter (Optional[ayon_server.sqlfilter.Filter]): A dict-like + with conditions to filter the source event. Returns: Union[None, dict[str, Any]]: None if there is no event matching @@ -1232,6 +1492,8 @@ def enroll_event_job( kwargs["sequential"] = sequential if description is not None: kwargs["description"] = description + if events_filter is not None: + kwargs["filter"] = events_filter response = self.post("enroll", **kwargs) if response.status_code == 204: return None @@ -1328,6 +1590,7 @@ def _upload_file(self, url, filepath, progress, request_type=None): response = post_func(url, data=stream, **kwargs) response.raise_for_status() progress.set_transferred_size(size) + return response def upload_file( self, endpoint, filepath, progress=None, request_type=None @@ -1344,6 +1607,9 @@ def upload_file( to track upload progress. request_type (Optional[RequestType]): Type of request that will be used to upload file. + + Returns: + requests.Response: Response object. 
""" if endpoint.startswith(self._base_url): @@ -1362,7 +1628,7 @@ def upload_file( progress.set_started() try: - self._upload_file(url, filepath, progress, request_type) + return self._upload_file(url, filepath, progress, request_type) except Exception as exc: progress.set_failed(str(exc)) @@ -1555,6 +1821,19 @@ def get_attributes_for_type(self, entity_type): return copy.deepcopy(attributes) + def get_attributes_fields_for_type(self, entity_type): + """Prepare attribute fields for entity type. + + Returns: + set[str]: Attributes fields for entity type. + """ + + attributes = self.get_attributes_for_type(entity_type) + return { + "attrib.{}".format(attr) + for attr in attributes + } + def get_default_fields_for_type(self, entity_type): """Default fields for entity type. @@ -1567,51 +1846,54 @@ def get_default_fields_for_type(self, entity_type): set[str]: Fields that should be queried from server. """ - attributes = self.get_attributes_for_type(entity_type) + # Event does not have attributes + if entity_type == "event": + return set(DEFAULT_EVENT_FIELDS) + if entity_type == "project": - return DEFAULT_PROJECT_FIELDS | { - "attrib.{}".format(attr) - for attr in attributes - } + entity_type_defaults = DEFAULT_PROJECT_FIELDS - if entity_type == "folder": - return DEFAULT_FOLDER_FIELDS | { - "attrib.{}".format(attr) - for attr in attributes - } + elif entity_type == "folder": + entity_type_defaults = DEFAULT_FOLDER_FIELDS - if entity_type == "task": - return DEFAULT_TASK_FIELDS | { - "attrib.{}".format(attr) - for attr in attributes - } + elif entity_type == "task": + entity_type_defaults = DEFAULT_TASK_FIELDS - if entity_type == "product": - return DEFAULT_PRODUCT_FIELDS | { - "attrib.{}".format(attr) - for attr in attributes - } + elif entity_type == "product": + entity_type_defaults = DEFAULT_PRODUCT_FIELDS - if entity_type == "version": - return DEFAULT_VERSION_FIELDS | { - "attrib.{}".format(attr) - for attr in attributes - } + elif entity_type == "version": + entity_type_defaults = DEFAULT_VERSION_FIELDS - if entity_type == "representation": - return ( + elif entity_type == "representation": + entity_type_defaults = ( DEFAULT_REPRESENTATION_FIELDS | REPRESENTATION_FILES_FIELDS - | { - "attrib.{}".format(attr) - for attr in attributes - } ) - if entity_type == "productType": - return DEFAULT_PRODUCT_TYPE_FIELDS + elif entity_type == "productType": + entity_type_defaults = DEFAULT_PRODUCT_TYPE_FIELDS + + elif entity_type == "workfile": + entity_type_defaults = DEFAULT_WORKFILE_INFO_FIELDS + + elif entity_type == "user": + entity_type_defaults = set(DEFAULT_USER_FIELDS) + # Backwards compatibility for server 0.3.x + # - will be removed in future releases + major, minor, _, _, _ = self.server_version_tuple + if major == 0 and minor <= 3: + entity_type_defaults.discard("accessGroups") + entity_type_defaults.discard("defaultAccessGroups") + entity_type_defaults.add("roles") + entity_type_defaults.add("defaultRoles") - raise ValueError("Unknown entity type \"{}\"".format(entity_type)) + else: + raise ValueError("Unknown entity type \"{}\"".format(entity_type)) + return ( + entity_type_defaults + | self.get_attributes_fields_for_type(entity_type) + ) def get_addons_info(self, details=True): """Get information about addons available on server. @@ -1640,7 +1922,7 @@ def get_addon_url(self, addon_name, addon_version, *subpaths): Args: addon_name (str): Name of addon. addon_version (str): Version of addon. 
- subpaths (tuple[str]): Any amount of subpaths that are added to + *subpaths (str): Any amount of subpaths that are added to addon url. Returns: @@ -1848,9 +2130,12 @@ def upload_installer(self, src_filepath, dst_filename, progress=None): dst_filename (str): Destination filename. progress (Optional[TransferProgress]): Object that gives ability to track download progress. + + Returns: + requests.Response: Response object. """ - self.upload_file( + return self.upload_file( "desktop/installers/{}".format(dst_filename), src_filepath, progress=progress @@ -1978,7 +2263,12 @@ def get_dependency_packages(self): server. """ - result = self.get("desktop/dependency_packages") + endpoint = "desktop/dependencyPackages" + major, minor, _, _, _ = self.server_version_tuple + if major == 0 and minor <= 3: + endpoint = "desktop/dependency_packages" + + result = self.get(endpoint) result.raise_for_status() return result.data @@ -2162,6 +2452,33 @@ def create_dependency_package_basename(self, platform_name=None): return create_dependency_package_basename(platform_name) + def upload_addon_zip(self, src_filepath, progress=None): + """Upload addon zip file to server. + + File is validated on server. If it is valid, it is installed. It will + create an event job which can be tracked (tracking part is not + implemented yet). + + Example output: + {'eventId': 'a1bfbdee27c611eea7580242ac120003'} + + Args: + src_filepath (str): Path to a zip file. + progress (Optional[TransferProgress]): Object to keep track about + upload state. + + Returns: + dict[str, Any]: Response data from server. + """ + + response = self.upload_file( + "addons/install", + src_filepath, + progress=progress, + request_type=RequestTypes.post, + ) + return response.json() + def _get_bundles_route(self): major, minor, patch, _, _ = self.server_version_tuple # Backwards compatibility for AYON server 0.3.0 @@ -2839,6 +3156,79 @@ def get_addons_settings( only_values=only_values ) + def get_secrets(self): + """Get all secrets. + + Example output: + [ + { + "name": "secret_1", + "value": "secret_value_1", + }, + { + "name": "secret_2", + "value": "secret_value_2", + } + ] + + Returns: + list[dict[str, str]]: List of secret entities. + """ + + response = self.get("secrets") + response.raise_for_status() + return response.data + + def get_secret(self, secret_name): + """Get secret by name. + + Example output: + { + "name": "secret_name", + "value": "secret_value", + } + + Args: + secret_name (str): Name of secret. + + Returns: + dict[str, str]: Secret entity data. + """ + + response = self.get("secrets/{}".format(secret_name)) + response.raise_for_status() + return response.data + + def save_secret(self, secret_name, secret_value): + """Save secret. + + This endpoint can create and update secret. + + Args: + secret_name (str): Name of secret. + secret_value (str): Value of secret. + """ + + response = self.put( + "secrets/{}".format(secret_name), + name=secret_name, + value=secret_value, + ) + response.raise_for_status() + return response.data + + + def delete_secret(self, secret_name): + """Delete secret by name. + + Args: + secret_name (str): Name of secret to delete. + """ + + response = self.delete("secrets/{}".format(secret_name)) + response.raise_for_status() + return response.data + # Entity getters def get_rest_project(self, project_name): """Query project by name. 
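A hedged sketch of the new secrets endpoints in use; the secret name and value below are placeholders.

```python
# Hedged sketch of the secrets API added above (illustrative only).
import ayon_api

con = ayon_api.get_server_api_connection()
# 'save_secret' both creates and updates a secret.
con.save_secret("deadline_api_key", "super-secret-value")  # placeholders
secret = con.get_secret("deadline_api_key")
print(secret["value"])
# List all secret names the current user can read.
print([s["name"] for s in con.get_secrets()])
con.delete_secret("deadline_api_key")
```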
@@ -2983,8 +3373,6 @@ def get_projects( else: use_rest = False fields = set(fields) - if own_attributes: - fields.add("ownAttrib") for field in fields: if field.startswith("config"): use_rest = True
@@ -2997,6 +3385,13 @@ yield project else: + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("project") + + if own_attributes: + fields.add("ownAttrib") + query = projects_graphql_query(fields) for parsed_data in query.continuous_query(self): for project in parsed_data["projects"]:
@@ -3037,8 +3432,12 @@ def get_project(self, project_name, fields=None, own_attributes=False): fill_own_attribs(project) return project + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("project") + if own_attributes: - field.add("ownAttrib") + fields.add("ownAttrib") query = project_graphql_query(fields) query.set_variable_value("projectName", project_name)
@@ -3051,6 +3450,65 @@ fill_own_attribs(project) return project + def get_folders_hierarchy( + self, + project_name, + search_string=None, + folder_types=None + ): + """Get project hierarchy. + + All folders in project in hierarchy data structure. + + Example output: + { + "hierarchy": [ + { + "id": "...", + "name": "...", + "label": "...", + "status": "...", + "folderType": "...", + "hasTasks": False, + "taskNames": [], + "parents": [], + "parentId": None, + "children": [...children folders...] + }, + ... + ] + } + + Args: + project_name (str): Project where to look for folders. + search_string (Optional[str]): Search string to filter folders. + folder_types (Optional[Iterable[str]]): Folder types to filter. + + Returns: + dict[str, Any]: Response data from server. + """ + + if folder_types: + folder_types = ",".join(folder_types) + + query_fields = [ + "{}={}".format(key, value) + for key, value in ( + ("search", search_string), + ("types", folder_types), + ) + if value + ] + query = "" + if query_fields: + query = "?{}".format("&".join(query_fields)) + + response = self.get( + "projects/{}/hierarchy{}".format(project_name, query) + ) + response.raise_for_status() + return response.data + def get_folders( self, project_name,
@@ -3136,10 +3594,13 @@ filters["parentFolderIds"] = list(parent_ids) - if fields: - fields = set(fields) - else: + if not fields: fields = self.get_default_fields_for_type("folder") + else: + fields = set(fields) + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("folder") use_rest = False if "data" in fields:
@@ -3373,8 +3834,11 @@ def get_tasks( if not fields: fields = self.get_default_fields_for_type("task") - - fields = set(fields) + else: + fields = set(fields) + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("task") use_rest = False if "data" in fields:
@@ -3490,6 +3954,8 @@ def get_products( product_ids=None, product_names=None, folder_ids=None, + product_types=None, + statuses=None, names_by_folder_ids=None, active=True, fields=None,
@@ -3508,6 +3974,10 @@ filtering. folder_ids (Optional[Iterable[str]]): Ids of task parents. Use 'None' if folder is direct child of project. + product_types (Optional[Iterable[str]]): Product types used for + filtering. + statuses (Optional[Iterable[str]]): Product statuses used for + filtering.
names_by_folder_ids (Optional[dict[str, Iterable[str]]]): Product name filtering by folder id. active (Optional[bool]): Filter active/inactive products.
@@ -3542,6 +4012,18 @@ if not filter_folder_ids: return + filter_product_types = None + if product_types is not None: + filter_product_types = set(product_types) + if not filter_product_types: + return + + filter_statuses = None + if statuses is not None: + filter_statuses = set(statuses) + if not filter_statuses: + return + # This will disable 'folder_ids' and 'product_names' filters # - maybe could be enhanced in future? if names_by_folder_ids is not None:
@@ -3559,6 +4041,9 @@ # Convert fields and add minimum required fields if fields: fields = set(fields) | {"id"} + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("product") else: fields = self.get_default_fields_for_type("product")
@@ -3585,6 +4070,12 @@ if filter_folder_ids: filters["folderIds"] = list(filter_folder_ids) + if filter_product_types: + filters["productTypes"] = list(filter_product_types) + + if filter_statuses: + filters["statuses"] = list(filter_statuses) + if product_ids: filters["productIds"] = list(product_ids)
@@ -3622,7 +4113,6 @@ if filtered_product is not None: yield filtered_product - def get_product_by_id( self, project_name,
@@ -3816,7 +4306,11 @@ def get_versions( if not fields: fields = self.get_default_fields_for_type("version") - fields = set(fields) + else: + fields = set(fields) + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("version") if active is not None: fields.add("active")
@@ -4274,7 +4768,11 @@ def get_representations( if not fields: fields = self.get_default_fields_for_type("representation") - fields = set(fields) + else: + fields = set(fields) + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("representation") use_rest = False if "data" in fields:
@@ -4620,8 +5118,15 @@ def get_workfiles_info( filters["workfileIds"] = list(workfile_ids) if not fields: - fields = DEFAULT_WORKFILE_INFO_FIELDS + fields = self.get_default_fields_for_type("workfile") + fields = set(fields) + if "attrib" in fields: + fields.remove("attrib") + fields |= { + "attrib.{}".format(attr) + for attr in self.get_attributes_for_type("workfile") + } if own_attributes: fields.add("ownAttrib")
@@ -4698,18 +5203,61 @@ def get_workfile_info_by_id( return workfile_info return None + def _prepare_thumbnail_content(self, project_name, response): + content = None + content_type = response.content_type + + # It is expected the response contains thumbnail id, otherwise the + # content is not considered valid + thumbnail_id = response.headers.get("X-Thumbnail-Id") + if thumbnail_id is not None: + content = response.content + + return ThumbnailContent( + project_name, thumbnail_id, content, content_type + ) + + def get_thumbnail_by_id(self, project_name, thumbnail_id): + """Get thumbnail from server by id. + + Permissions of thumbnails are related to entities so thumbnails must + be queried per entity. So an entity type and entity id are required + to be passed. + + Notes: + It is recommended to use one of prepared entity type specific + methods 'get_folder_thumbnail', 'get_version_thumbnail' or + 'get_workfile_thumbnail'. + We do recommend passing a thumbnail id if you have access to it.
Each + entity that allows thumbnails has 'thumbnailId' field, so it + can be queried. + + Args: + project_name (str): Project under which the entity is located. + thumbnail_id (str): Thumbnail id. + + Returns: + ThumbnailContent: Thumbnail content wrapper. Does not have to be + valid. + """ + + response = self.raw_get( + "projects/{}/thumbnails/{}".format( + project_name, + thumbnail_id + ) + ) + return self._prepare_thumbnail_content(project_name, response) + def get_thumbnail( self, project_name, entity_type, entity_id, thumbnail_id=None ): """Get thumbnail from server. - Permissions of thumbnails are related to entities so thumbnails must be - queried per entity. So an entity type and entity type is required to - be passed. - - If thumbnail id is passed logic can look into locally cached thumbnails - before calling server which can enhance loading time. If thumbnail id - is not passed the thumbnail is always downloaded even if is available. + Permissions of thumbnails are related to entities so thumbnails must + be queried per entity. So an entity type and entity id are required + to be passed. Notes: It is recommended to use one of prepared entity type specific
@@ -4723,20 +5271,16 @@ def get_thumbnail( project_name (str): Project under which the entity is located. entity_type (str): Entity type which passed entity id represents. entity_id (str): Entity id for which thumbnail should be returned. - thumbnail_id (Optional[str]): Prepared thumbnail id from entity. - Used only to check if thumbnail was already cached. + thumbnail_id (Optional[str]): DEPRECATED Use + 'get_thumbnail_by_id'. Returns: - Union[str, None]: Path to downloaded thumbnail or none if entity - does not have any (or if user does not have permissions). + ThumbnailContent: Thumbnail content wrapper. Does not have to be + valid. """ - # Look for thumbnail into cache and return the path if was found - filepath = self._thumbnail_cache.get_thumbnail_filepath( - project_name, thumbnail_id - ) - if filepath: - return filepath + if thumbnail_id: + return self.get_thumbnail_by_id(project_name, thumbnail_id) if entity_type in ( "folder", "version", "workfile", ): entity_type += "s" - # Receive thumbnail content from server - result = self.raw_get("projects/{}/{}/{}/thumbnail".format( + response = self.raw_get("projects/{}/{}/{}/thumbnail".format( project_name, entity_type, entity_id )) - - if result.content_type is None: - return None - - # It is expected the response contains thumbnail id otherwise the - # content cannot be cached and filepath returned - thumbnail_id = result.headers.get("X-Thumbnail-Id") - if thumbnail_id is None: - return None - - # Cache thumbnail and return path - return self._thumbnail_cache.store_thumbnail( - project_name, - thumbnail_id, - result.content, - result.content_type - ) + return self._prepare_thumbnail_content(project_name, response) def get_folder_thumbnail( self, project_name, folder_id, thumbnail_id=None
diff --git a/openpype/vendor/python/common/ayon_api/utils.py b/openpype/vendor/python/common/ayon_api/utils.py index 69fd8e9b41c..314d13faeca 100644 --- a/openpype/vendor/python/common/ayon_api/utils.py +++ b/openpype/vendor/python/common/ayon_api/utils.py
@@ -27,6 +27,45 @@ +class ThumbnailContent: + """Wrapper for thumbnail content. + + Args: + project_name (str): Project name. + thumbnail_id (Union[str, None]): Thumbnail id. + content (Union[bytes, None]): Thumbnail content.
+ content_type (Union[str, None]): Content type e.g. 'image/png'. + """ + + def __init__(self, project_name, thumbnail_id, content, content_type): + self.project_name = project_name + self.thumbnail_id = thumbnail_id + self.content_type = content_type + self.content = content or b"" + + @property + def id(self): + """Wrapper for thumbnail id. + + Returns: + Union[str, None]: Thumbnail id. + """ + + return self.thumbnail_id + + @property + def is_valid(self): + """Content of thumbnail is valid. + + Returns: + bool: Content is valid and can be used. + """ + return ( + self.thumbnail_id is not None + and self.content_type is not None + ) + + def prepare_query_string(key_values): """Prepare data to query string.
@@ -359,7 +398,7 @@ class TransferProgress: def __init__(self): self._started = False self._transfer_done = False - self._transfered = 0 + self._transferred = 0 self._content_size = None self._failed = False
@@ -369,25 +408,66 @@ def __init__(self): self._destination_url = "N/A" def get_content_size(self): + """Content size in bytes. + + Returns: + Union[int, None]: Content size in bytes or None + if it is unknown. + """ + return self._content_size def set_content_size(self, content_size): + """Set content size in bytes. + + Args: + content_size (int): Content size in bytes. + + Raises: + ValueError: If content size was already set. + """ + if self._content_size is not None: raise ValueError("Content size was set more than once") self._content_size = content_size def get_started(self): + """Transfer was started. + + Returns: + bool: True if transfer started. + """ + return self._started def set_started(self): + """Mark that transfer started. + + Raises: + ValueError: If transfer was already started. + """ + if self._started: raise ValueError("Progress already started") self._started = True def get_transfer_done(self): + """Transfer finished. + + Returns: + bool: Transfer finished. + """ + return self._transfer_done def set_transfer_done(self): + """Mark progress as transfer finished. + + Raises: + ValueError: If progress was already marked as done + or wasn't started yet. + """ + if self._transfer_done: raise ValueError("Progress was already marked as done") if not self._started:
@@ -395,41 +475,117 @@ def set_transfer_done(self): self._transfer_done = True def get_failed(self): + """Transfer failed. + + Returns: + bool: True if transfer failed. + """ + return self._failed def get_fail_reason(self): + """Get reason why transfer failed. + + Returns: + Union[str, None]: Reason why transfer + failed or None. + """ + return self._fail_reason def set_failed(self, reason): + """Mark progress as failed. + + Args: + reason (str): Reason why transfer failed. + """ + self._fail_reason = reason self._failed = True def get_transferred_size(self): - return self._transfered + """Already transferred size in bytes. + + Returns: + int: Already transferred size in bytes. + """ - def set_transferred_size(self, transfered): - self._transfered = transfered + return self._transferred + + def set_transferred_size(self, transferred): + """Set already transferred size in bytes. + + Args: + transferred (int): Already transferred size in bytes. + """ + + self._transferred = transferred def add_transferred_chunk(self, chunk_size): - self._transfered += chunk_size + """Add transferred chunk size in bytes. + + Args: + chunk_size (int): Transferred chunk size + in bytes. + """ + + self._transferred += chunk_size def get_source_url(self): + """Source url from where transfer happens. + + Note: + Consider this as a title.
Must be set using + 'set_source_url' or 'N/A' will be returned. + + Returns: + str: Source url from where transfer happens. + """ + return self._source_url def set_source_url(self, url): + """Set source url from where transfer happens. + + Args: + url (str): Source url from where transfer happens. + """ + self._source_url = url def get_destination_url(self): + """Destination url where transfer happens. + + Note: + Consider this as a title. Must be set using + 'set_destination_url' or 'N/A' will be returned. + + Returns: + str: Destination url where transfer happens. + """ + return self._destination_url def set_destination_url(self, url): + """Set destination url where transfer happens. + + Args: + url (str): Destination url where transfer happens. + """ + self._destination_url = url @property def is_running(self): + """Check if transfer is running. + + Returns: + bool: True if transfer is running. + """ + if ( not self.started - or self.done + or self.transfer_done or self.failed ): return False
@@ -437,9 +593,16 @@ def transfer_progress(self): @property def transfer_progress(self): + """Get transfer progress in percent. + + Returns: + Union[float, None]: Transfer progress in percent or 'None' + if content size is unknown. + """ + if self._content_size is None: return None - return (self._transfered * 100.0) / float(self._content_size) + return (self._transferred * 100.0) / float(self._content_size) content_size = property(get_content_size, set_content_size) started = property(get_started)
@@ -448,7 +611,6 @@ def transfer_progress(self): fail_reason = property(get_fail_reason) source_url = property(get_source_url, set_source_url) destination_url = property(get_destination_url, set_destination_url) - content_size = property(get_content_size, set_content_size) transferred_size = property(get_transferred_size, set_transferred_size)
diff --git a/openpype/vendor/python/common/ayon_api/version.py b/openpype/vendor/python/common/ayon_api/version.py index 238f6e94263..f3826a64075 100644 --- a/openpype/vendor/python/common/ayon_api/version.py +++ b/openpype/vendor/python/common/ayon_api/version.py
@@ -1,2 +1,2 @@ """Package declaring Python API for Ayon server.""" -__version__ = "0.3.2" +__version__ = "0.4.1"
diff --git a/openpype/vendor/python/python_2/click/__init__.py b/openpype/vendor/python/python_2/click/__init__.py new file mode 100644 index 00000000000..2b6008f2dd4 --- /dev/null +++ b/openpype/vendor/python/python_2/click/__init__.py
@@ -0,0 +1,79 @@ +""" +Click is a simple Python module inspired by the stdlib optparse to make +writing command line scripts fun. Unlike other modules, it's based +around a simple API that does not come with too much magic and is +composable.
+""" +from .core import Argument +from .core import BaseCommand +from .core import Command +from .core import CommandCollection +from .core import Context +from .core import Group +from .core import MultiCommand +from .core import Option +from .core import Parameter +from .decorators import argument +from .decorators import command +from .decorators import confirmation_option +from .decorators import group +from .decorators import help_option +from .decorators import make_pass_decorator +from .decorators import option +from .decorators import pass_context +from .decorators import pass_obj +from .decorators import password_option +from .decorators import version_option +from .exceptions import Abort +from .exceptions import BadArgumentUsage +from .exceptions import BadOptionUsage +from .exceptions import BadParameter +from .exceptions import ClickException +from .exceptions import FileError +from .exceptions import MissingParameter +from .exceptions import NoSuchOption +from .exceptions import UsageError +from .formatting import HelpFormatter +from .formatting import wrap_text +from .globals import get_current_context +from .parser import OptionParser +from .termui import clear +from .termui import confirm +from .termui import echo_via_pager +from .termui import edit +from .termui import get_terminal_size +from .termui import getchar +from .termui import launch +from .termui import pause +from .termui import progressbar +from .termui import prompt +from .termui import secho +from .termui import style +from .termui import unstyle +from .types import BOOL +from .types import Choice +from .types import DateTime +from .types import File +from .types import FLOAT +from .types import FloatRange +from .types import INT +from .types import IntRange +from .types import ParamType +from .types import Path +from .types import STRING +from .types import Tuple +from .types import UNPROCESSED +from .types import UUID +from .utils import echo +from .utils import format_filename +from .utils import get_app_dir +from .utils import get_binary_stream +from .utils import get_os_args +from .utils import get_text_stream +from .utils import open_file + +# Controls if click should emit the warning about the use of unicode +# literals. +disable_unicode_literals_warning = False + +__version__ = "7.1.2" diff --git a/openpype/vendor/python/python_2/click/_bashcomplete.py b/openpype/vendor/python/python_2/click/_bashcomplete.py new file mode 100644 index 00000000000..8bca24480f7 --- /dev/null +++ b/openpype/vendor/python/python_2/click/_bashcomplete.py @@ -0,0 +1,375 @@ +import copy +import os +import re + +from .core import Argument +from .core import MultiCommand +from .core import Option +from .parser import split_arg_string +from .types import Choice +from .utils import echo + +try: + from collections import abc +except ImportError: + import collections as abc + +WORDBREAK = "=" + +# Note, only BASH version 4.4 and later have the nosort option. +COMPLETION_SCRIPT_BASH = """ +%(complete_func)s() { + local IFS=$'\n' + COMPREPLY=( $( env COMP_WORDS="${COMP_WORDS[*]}" \\ + COMP_CWORD=$COMP_CWORD \\ + %(autocomplete_var)s=complete $1 ) ) + return 0 +} + +%(complete_func)setup() { + local COMPLETION_OPTIONS="" + local BASH_VERSION_ARR=(${BASH_VERSION//./ }) + # Only BASH version 4.4 and later have the nosort option. 
+ if [ ${BASH_VERSION_ARR[0]} -gt 4 ] || ([ ${BASH_VERSION_ARR[0]} -eq 4 ] \ +&& [ ${BASH_VERSION_ARR[1]} -ge 4 ]); then + COMPLETION_OPTIONS="-o nosort" + fi + + complete $COMPLETION_OPTIONS -F %(complete_func)s %(script_names)s +} + +%(complete_func)setup +""" + +COMPLETION_SCRIPT_ZSH = """ +#compdef %(script_names)s + +%(complete_func)s() { + local -a completions + local -a completions_with_descriptions + local -a response + (( ! $+commands[%(script_names)s] )) && return 1 + + response=("${(@f)$( env COMP_WORDS=\"${words[*]}\" \\ + COMP_CWORD=$((CURRENT-1)) \\ + %(autocomplete_var)s=\"complete_zsh\" \\ + %(script_names)s )}") + + for key descr in ${(kv)response}; do + if [[ "$descr" == "_" ]]; then + completions+=("$key") + else + completions_with_descriptions+=("$key":"$descr") + fi + done + + if [ -n "$completions_with_descriptions" ]; then + _describe -V unsorted completions_with_descriptions -U + fi + + if [ -n "$completions" ]; then + compadd -U -V unsorted -a completions + fi + compstate[insert]="automenu" +} + +compdef %(complete_func)s %(script_names)s +""" + +COMPLETION_SCRIPT_FISH = ( + "complete --no-files --command %(script_names)s --arguments" + ' "(env %(autocomplete_var)s=complete_fish' + " COMP_WORDS=(commandline -cp) COMP_CWORD=(commandline -t)" + ' %(script_names)s)"' +) + +_completion_scripts = { + "bash": COMPLETION_SCRIPT_BASH, + "zsh": COMPLETION_SCRIPT_ZSH, + "fish": COMPLETION_SCRIPT_FISH, +} + +_invalid_ident_char_re = re.compile(r"[^a-zA-Z0-9_]") + + +def get_completion_script(prog_name, complete_var, shell): + cf_name = _invalid_ident_char_re.sub("", prog_name.replace("-", "_")) + script = _completion_scripts.get(shell, COMPLETION_SCRIPT_BASH) + return ( + script + % { + "complete_func": "_{}_completion".format(cf_name), + "script_names": prog_name, + "autocomplete_var": complete_var, + } + ).strip() + ";" + + +def resolve_ctx(cli, prog_name, args): + """Parse into a hierarchy of contexts. Contexts are connected + through the parent variable. + + :param cli: command definition + :param prog_name: the program that is running + :param args: full list of args + :return: the final context/command parsed + """ + ctx = cli.make_context(prog_name, args, resilient_parsing=True) + args = ctx.protected_args + ctx.args + while args: + if isinstance(ctx.command, MultiCommand): + if not ctx.command.chain: + cmd_name, cmd, args = ctx.command.resolve_command(ctx, args) + if cmd is None: + return ctx + ctx = cmd.make_context( + cmd_name, args, parent=ctx, resilient_parsing=True + ) + args = ctx.protected_args + ctx.args + else: + # Walk chained subcommand contexts saving the last one. + while args: + cmd_name, cmd, args = ctx.command.resolve_command(ctx, args) + if cmd is None: + return ctx + sub_ctx = cmd.make_context( + cmd_name, + args, + parent=ctx, + allow_extra_args=True, + allow_interspersed_args=False, + resilient_parsing=True, + ) + args = sub_ctx.args + ctx = sub_ctx + args = sub_ctx.protected_args + sub_ctx.args + else: + break + return ctx + + +def start_of_option(param_str): + """ + :param param_str: param_str to check + :return: whether or not this is the start of an option declaration + (i.e. starts "-" or "--") + """ + return param_str and param_str[:1] == "-" + + +def is_incomplete_option(all_args, cmd_param): + """ + :param all_args: the full original list of args supplied + :param cmd_param: the current command paramter + :return: whether or not the last option declaration (i.e. starts + "-" or "--") is incomplete and corresponds to this cmd_param. 
In + other words whether this cmd_param option can still accept + values + """ + if not isinstance(cmd_param, Option): + return False + if cmd_param.is_flag: + return False + last_option = None + for index, arg_str in enumerate( + reversed([arg for arg in all_args if arg != WORDBREAK]) + ): + if index + 1 > cmd_param.nargs: + break + if start_of_option(arg_str): + last_option = arg_str + + return True if last_option and last_option in cmd_param.opts else False + + +def is_incomplete_argument(current_params, cmd_param): + """ + :param current_params: the current params and values for this + argument as already entered + :param cmd_param: the current command parameter + :return: whether or not the last argument is incomplete and + corresponds to this cmd_param. In other words whether or not the + this cmd_param argument can still accept values + """ + if not isinstance(cmd_param, Argument): + return False + current_param_values = current_params[cmd_param.name] + if current_param_values is None: + return True + if cmd_param.nargs == -1: + return True + if ( + isinstance(current_param_values, abc.Iterable) + and cmd_param.nargs > 1 + and len(current_param_values) < cmd_param.nargs + ): + return True + return False + + +def get_user_autocompletions(ctx, args, incomplete, cmd_param): + """ + :param ctx: context associated with the parsed command + :param args: full list of args + :param incomplete: the incomplete text to autocomplete + :param cmd_param: command definition + :return: all the possible user-specified completions for the param + """ + results = [] + if isinstance(cmd_param.type, Choice): + # Choices don't support descriptions. + results = [ + (c, None) for c in cmd_param.type.choices if str(c).startswith(incomplete) + ] + elif cmd_param.autocompletion is not None: + dynamic_completions = cmd_param.autocompletion( + ctx=ctx, args=args, incomplete=incomplete + ) + results = [ + c if isinstance(c, tuple) else (c, None) for c in dynamic_completions + ] + return results + + +def get_visible_commands_starting_with(ctx, starts_with): + """ + :param ctx: context associated with the parsed command + :starts_with: string that visible commands must start with. + :return: all visible (not hidden) commands that start with starts_with. + """ + for c in ctx.command.list_commands(ctx): + if c.startswith(starts_with): + command = ctx.command.get_command(ctx, c) + if not command.hidden: + yield command + + +def add_subcommand_completions(ctx, incomplete, completions_out): + # Add subcommand completions. 
+ if isinstance(ctx.command, MultiCommand): + completions_out.extend( + [ + (c.name, c.get_short_help_str()) + for c in get_visible_commands_starting_with(ctx, incomplete) + ] + ) + + # Walk up the context list and add any other completion + # possibilities from chained commands + while ctx.parent is not None: + ctx = ctx.parent + if isinstance(ctx.command, MultiCommand) and ctx.command.chain: + remaining_commands = [ + c + for c in get_visible_commands_starting_with(ctx, incomplete) + if c.name not in ctx.protected_args + ] + completions_out.extend( + [(c.name, c.get_short_help_str()) for c in remaining_commands] + ) + + +def get_choices(cli, prog_name, args, incomplete): + """ + :param cli: command definition + :param prog_name: the program that is running + :param args: full list of args + :param incomplete: the incomplete text to autocomplete + :return: all the possible completions for the incomplete + """ + all_args = copy.deepcopy(args) + + ctx = resolve_ctx(cli, prog_name, args) + if ctx is None: + return [] + + has_double_dash = "--" in all_args + + # In newer versions of bash long opts with '='s are partitioned, but + # it's easier to parse without the '=' + if start_of_option(incomplete) and WORDBREAK in incomplete: + partition_incomplete = incomplete.partition(WORDBREAK) + all_args.append(partition_incomplete[0]) + incomplete = partition_incomplete[2] + elif incomplete == WORDBREAK: + incomplete = "" + + completions = [] + if not has_double_dash and start_of_option(incomplete): + # completions for partial options + for param in ctx.command.params: + if isinstance(param, Option) and not param.hidden: + param_opts = [ + param_opt + for param_opt in param.opts + param.secondary_opts + if param_opt not in all_args or param.multiple + ] + completions.extend( + [(o, param.help) for o in param_opts if o.startswith(incomplete)] + ) + return completions + # completion for option values from user supplied values + for param in ctx.command.params: + if is_incomplete_option(all_args, param): + return get_user_autocompletions(ctx, all_args, incomplete, param) + # completion for argument values from user supplied values + for param in ctx.command.params: + if is_incomplete_argument(ctx.params, param): + return get_user_autocompletions(ctx, all_args, incomplete, param) + + add_subcommand_completions(ctx, incomplete, completions) + # Sort before returning so that proper ordering can be enforced in custom types. + return sorted(completions) + + +def do_complete(cli, prog_name, include_descriptions): + cwords = split_arg_string(os.environ["COMP_WORDS"]) + cword = int(os.environ["COMP_CWORD"]) + args = cwords[1:cword] + try: + incomplete = cwords[cword] + except IndexError: + incomplete = "" + + for item in get_choices(cli, prog_name, args, incomplete): + echo(item[0]) + if include_descriptions: + # ZSH has trouble dealing with empty array parameters when + # returned from commands, use '_' to indicate no description + # is present. 
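`get_choices` is the heart of the protocol: the shell exports `COMP_WORDS`/`COMP_CWORD` plus `_<PROG>_COMPLETE=complete...` (activation is typically `eval "$(_MYTOOL_COMPLETE=source mytool)"`), and `do_complete` prints one candidate per line. The function can also be exercised directly, which is handy for testing. A sketch assuming plain click 7.x, with hypothetical command names:

```python
import click
from click._bashcomplete import get_choices  # the helper defined above

@click.group()
def cli():
    pass

@cli.command(short_help="Upload a build")
@click.option("--verbose", is_flag=True, help="Chatty output")
def upload(verbose):
    pass

# `args` are the completed words after the program name; `incomplete`
# is the word under the cursor, exactly as do_complete() derives them.
print(get_choices(cli, "mytool", [], "up"))
# -> [('upload', 'Upload a build')]
print(get_choices(cli, "mytool", ["upload"], "--v"))
# -> [('--verbose', 'Chatty output')]
```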
+ echo(item[1] if item[1] else "_") + + return True + + +def do_complete_fish(cli, prog_name): + cwords = split_arg_string(os.environ["COMP_WORDS"]) + incomplete = os.environ["COMP_CWORD"] + args = cwords[1:] + + for item in get_choices(cli, prog_name, args, incomplete): + if item[1]: + echo("{arg}\t{desc}".format(arg=item[0], desc=item[1])) + else: + echo(item[0]) + + return True + + +def bashcomplete(cli, prog_name, complete_var, complete_instr): + if "_" in complete_instr: + command, shell = complete_instr.split("_", 1) + else: + command = complete_instr + shell = "bash" + + if command == "source": + echo(get_completion_script(prog_name, complete_var, shell)) + return True + elif command == "complete": + if shell == "fish": + return do_complete_fish(cli, prog_name) + elif shell in {"bash", "zsh"}: + return do_complete(cli, prog_name, shell == "zsh") + + return False diff --git a/openpype/vendor/python/python_2/click/_compat.py b/openpype/vendor/python/python_2/click/_compat.py new file mode 100644 index 00000000000..60cb115bc50 --- /dev/null +++ b/openpype/vendor/python/python_2/click/_compat.py @@ -0,0 +1,786 @@ +# flake8: noqa +import codecs +import io +import os +import re +import sys +from weakref import WeakKeyDictionary + +PY2 = sys.version_info[0] == 2 +CYGWIN = sys.platform.startswith("cygwin") +MSYS2 = sys.platform.startswith("win") and ("GCC" in sys.version) +# Determine local App Engine environment, per Google's own suggestion +APP_ENGINE = "APPENGINE_RUNTIME" in os.environ and "Development/" in os.environ.get( + "SERVER_SOFTWARE", "" +) +WIN = sys.platform.startswith("win") and not APP_ENGINE and not MSYS2 +DEFAULT_COLUMNS = 80 + + +_ansi_re = re.compile(r"\033\[[;?0-9]*[a-zA-Z]") + + +def get_filesystem_encoding(): + return sys.getfilesystemencoding() or sys.getdefaultencoding() + + +def _make_text_stream( + stream, encoding, errors, force_readable=False, force_writable=False +): + if encoding is None: + encoding = get_best_encoding(stream) + if errors is None: + errors = "replace" + return _NonClosingTextIOWrapper( + stream, + encoding, + errors, + line_buffering=True, + force_readable=force_readable, + force_writable=force_writable, + ) + + +def is_ascii_encoding(encoding): + """Checks if a given encoding is ascii.""" + try: + return codecs.lookup(encoding).name == "ascii" + except LookupError: + return False + + +def get_best_encoding(stream): + """Returns the default stream encoding if not found.""" + rv = getattr(stream, "encoding", None) or sys.getdefaultencoding() + if is_ascii_encoding(rv): + return "utf-8" + return rv + + +class _NonClosingTextIOWrapper(io.TextIOWrapper): + def __init__( + self, + stream, + encoding, + errors, + force_readable=False, + force_writable=False, + **extra + ): + self._stream = stream = _FixupStream(stream, force_readable, force_writable) + io.TextIOWrapper.__init__(self, stream, encoding, errors, **extra) + + # The io module is a place where the Python 3 text behavior + # was forced upon Python 2, so we need to unbreak + # it to look like Python 2. 
+ if PY2: + + def write(self, x): + if isinstance(x, str) or is_bytes(x): + try: + self.flush() + except Exception: + pass + return self.buffer.write(str(x)) + return io.TextIOWrapper.write(self, x) + + def writelines(self, lines): + for line in lines: + self.write(line) + + def __del__(self): + try: + self.detach() + except Exception: + pass + + def isatty(self): + # https://bitbucket.org/pypy/pypy/issue/1803 + return self._stream.isatty() + + +class _FixupStream(object): + """The new io interface needs more from streams than streams + traditionally implement. As such, this fix-up code is necessary in + some circumstances. + + The forcing of readable and writable flags are there because some tools + put badly patched objects on sys (one such offender are certain version + of jupyter notebook). + """ + + def __init__(self, stream, force_readable=False, force_writable=False): + self._stream = stream + self._force_readable = force_readable + self._force_writable = force_writable + + def __getattr__(self, name): + return getattr(self._stream, name) + + def read1(self, size): + f = getattr(self._stream, "read1", None) + if f is not None: + return f(size) + # We only dispatch to readline instead of read in Python 2 as we + # do not want cause problems with the different implementation + # of line buffering. + if PY2: + return self._stream.readline(size) + return self._stream.read(size) + + def readable(self): + if self._force_readable: + return True + x = getattr(self._stream, "readable", None) + if x is not None: + return x() + try: + self._stream.read(0) + except Exception: + return False + return True + + def writable(self): + if self._force_writable: + return True + x = getattr(self._stream, "writable", None) + if x is not None: + return x() + try: + self._stream.write("") + except Exception: + try: + self._stream.write(b"") + except Exception: + return False + return True + + def seekable(self): + x = getattr(self._stream, "seekable", None) + if x is not None: + return x() + try: + self._stream.seek(self._stream.tell()) + except Exception: + return False + return True + + +if PY2: + text_type = unicode + raw_input = raw_input + string_types = (str, unicode) + int_types = (int, long) + iteritems = lambda x: x.iteritems() + range_type = xrange + + def is_bytes(x): + return isinstance(x, (buffer, bytearray)) + + _identifier_re = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_]*$") + + # For Windows, we need to force stdout/stdin/stderr to binary if it's + # fetched for that. This obviously is not the most correct way to do + # it as it changes global state. Unfortunately, there does not seem to + # be a clear better way to do it as just reopening the file in binary + # mode does not change anything. + # + # An option would be to do what Python 3 does and to open the file as + # binary only, patch it back to the system, and then use a wrapper + # stream that converts newlines. It's not quite clear what's the + # correct option here. + # + # This code also lives in _winconsole for the fallback to the console + # emulation stream. + # + # There are also Windows environments where the `msvcrt` module is not + # available (which is why we use try-catch instead of the WIN variable + # here), such as the Google App Engine development server on Windows. In + # those cases there is just nothing we can do. 
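The long comment above boils down to a single `msvcrt` call. A hedged, stdlib-only sketch of the idea behind the `set_binary_mode` fallbacks that follow (the helper name is illustrative):

```python
import os
import sys

def force_binary(stream):
    # On Windows, flip the underlying C runtime descriptor to binary so
    # no newline translation happens; elsewhere this is a no-op.
    try:
        import msvcrt
    except ImportError:
        return stream
    try:
        msvcrt.setmode(stream.fileno(), os.O_BINARY)
    except (AttributeError, OSError, ValueError):
        pass  # no real fd, e.g. when a test runner replaced the stream
    return stream

stdout_binary = force_binary(sys.stdout)
```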
+ def set_binary_mode(f): + return f + + try: + import msvcrt + except ImportError: + pass + else: + + def set_binary_mode(f): + try: + fileno = f.fileno() + except Exception: + pass + else: + msvcrt.setmode(fileno, os.O_BINARY) + return f + + try: + import fcntl + except ImportError: + pass + else: + + def set_binary_mode(f): + try: + fileno = f.fileno() + except Exception: + pass + else: + flags = fcntl.fcntl(fileno, fcntl.F_GETFL) + fcntl.fcntl(fileno, fcntl.F_SETFL, flags & ~os.O_NONBLOCK) + return f + + def isidentifier(x): + return _identifier_re.search(x) is not None + + def get_binary_stdin(): + return set_binary_mode(sys.stdin) + + def get_binary_stdout(): + _wrap_std_stream("stdout") + return set_binary_mode(sys.stdout) + + def get_binary_stderr(): + _wrap_std_stream("stderr") + return set_binary_mode(sys.stderr) + + def get_text_stdin(encoding=None, errors=None): + rv = _get_windows_console_stream(sys.stdin, encoding, errors) + if rv is not None: + return rv + return _make_text_stream(sys.stdin, encoding, errors, force_readable=True) + + def get_text_stdout(encoding=None, errors=None): + _wrap_std_stream("stdout") + rv = _get_windows_console_stream(sys.stdout, encoding, errors) + if rv is not None: + return rv + return _make_text_stream(sys.stdout, encoding, errors, force_writable=True) + + def get_text_stderr(encoding=None, errors=None): + _wrap_std_stream("stderr") + rv = _get_windows_console_stream(sys.stderr, encoding, errors) + if rv is not None: + return rv + return _make_text_stream(sys.stderr, encoding, errors, force_writable=True) + + def filename_to_ui(value): + if isinstance(value, bytes): + value = value.decode(get_filesystem_encoding(), "replace") + return value + + +else: + import io + + text_type = str + raw_input = input + string_types = (str,) + int_types = (int,) + range_type = range + isidentifier = lambda x: x.isidentifier() + iteritems = lambda x: iter(x.items()) + + def is_bytes(x): + return isinstance(x, (bytes, memoryview, bytearray)) + + def _is_binary_reader(stream, default=False): + try: + return isinstance(stream.read(0), bytes) + except Exception: + return default + # This happens in some cases where the stream was already + # closed. In this case, we assume the default. + + def _is_binary_writer(stream, default=False): + try: + stream.write(b"") + except Exception: + try: + stream.write("") + return False + except Exception: + pass + return default + return True + + def _find_binary_reader(stream): + # We need to figure out if the given stream is already binary. + # This can happen because the official docs recommend detaching + # the streams to get binary streams. Some code might do this, so + # we need to deal with this case explicitly. + if _is_binary_reader(stream, False): + return stream + + buf = getattr(stream, "buffer", None) + + # Same situation here; this time we assume that the buffer is + # actually binary in case it's closed. + if buf is not None and _is_binary_reader(buf, True): + return buf + + def _find_binary_writer(stream): + # We need to figure out if the given stream is already binary. + # This can happen because the official docs recommend detatching + # the streams to get binary streams. Some code might do this, so + # we need to deal with this case explicitly. + if _is_binary_writer(stream, False): + return stream + + buf = getattr(stream, "buffer", None) + + # Same situation here; this time we assume that the buffer is + # actually binary in case it's closed. 
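The zero-length probe used by `_is_binary_reader`/`_is_binary_writer` above is worth noting on its own: reading zero bytes is side-effect free yet reveals whether a stream speaks bytes or text. A standalone sketch:

```python
import io

def looks_like_binary_reader(stream, default=False):
    try:
        return isinstance(stream.read(0), bytes)
    except Exception:
        # Closed or exotic stream: fall back to the caller's guess,
        # just as the vendored helpers do.
        return default

print(looks_like_binary_reader(io.BytesIO(b"data")))   # True
print(looks_like_binary_reader(io.StringIO(u"data")))  # False
```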
+ if buf is not None and _is_binary_writer(buf, True): + return buf + + def _stream_is_misconfigured(stream): + """A stream is misconfigured if its encoding is ASCII.""" + # If the stream does not have an encoding set, we assume it's set + # to ASCII. This appears to happen in certain unittest + # environments. It's not quite clear what the correct behavior is + # but this at least will force Click to recover somehow. + return is_ascii_encoding(getattr(stream, "encoding", None) or "ascii") + + def _is_compat_stream_attr(stream, attr, value): + """A stream attribute is compatible if it is equal to the + desired value or the desired value is unset and the attribute + has a value. + """ + stream_value = getattr(stream, attr, None) + return stream_value == value or (value is None and stream_value is not None) + + def _is_compatible_text_stream(stream, encoding, errors): + """Check if a stream's encoding and errors attributes are + compatible with the desired values. + """ + return _is_compat_stream_attr( + stream, "encoding", encoding + ) and _is_compat_stream_attr(stream, "errors", errors) + + def _force_correct_text_stream( + text_stream, + encoding, + errors, + is_binary, + find_binary, + force_readable=False, + force_writable=False, + ): + if is_binary(text_stream, False): + binary_reader = text_stream + else: + # If the stream looks compatible, and won't default to a + # misconfigured ascii encoding, return it as-is. + if _is_compatible_text_stream(text_stream, encoding, errors) and not ( + encoding is None and _stream_is_misconfigured(text_stream) + ): + return text_stream + + # Otherwise, get the underlying binary reader. + binary_reader = find_binary(text_stream) + + # If that's not possible, silently use the original reader + # and get mojibake instead of exceptions. + if binary_reader is None: + return text_stream + + # Default errors to replace instead of strict in order to get + # something that works. + if errors is None: + errors = "replace" + + # Wrap the binary stream in a text stream with the correct + # encoding parameters. + return _make_text_stream( + binary_reader, + encoding, + errors, + force_readable=force_readable, + force_writable=force_writable, + ) + + def _force_correct_text_reader(text_reader, encoding, errors, force_readable=False): + return _force_correct_text_stream( + text_reader, + encoding, + errors, + _is_binary_reader, + _find_binary_reader, + force_readable=force_readable, + ) + + def _force_correct_text_writer(text_writer, encoding, errors, force_writable=False): + return _force_correct_text_stream( + text_writer, + encoding, + errors, + _is_binary_writer, + _find_binary_writer, + force_writable=force_writable, + ) + + def get_binary_stdin(): + reader = _find_binary_reader(sys.stdin) + if reader is None: + raise RuntimeError("Was not able to determine binary stream for sys.stdin.") + return reader + + def get_binary_stdout(): + writer = _find_binary_writer(sys.stdout) + if writer is None: + raise RuntimeError( + "Was not able to determine binary stream for sys.stdout." + ) + return writer + + def get_binary_stderr(): + writer = _find_binary_writer(sys.stderr) + if writer is None: + raise RuntimeError( + "Was not able to determine binary stream for sys.stderr." 
+ ) + return writer + + def get_text_stdin(encoding=None, errors=None): + rv = _get_windows_console_stream(sys.stdin, encoding, errors) + if rv is not None: + return rv + return _force_correct_text_reader( + sys.stdin, encoding, errors, force_readable=True + ) + + def get_text_stdout(encoding=None, errors=None): + rv = _get_windows_console_stream(sys.stdout, encoding, errors) + if rv is not None: + return rv + return _force_correct_text_writer( + sys.stdout, encoding, errors, force_writable=True + ) + + def get_text_stderr(encoding=None, errors=None): + rv = _get_windows_console_stream(sys.stderr, encoding, errors) + if rv is not None: + return rv + return _force_correct_text_writer( + sys.stderr, encoding, errors, force_writable=True + ) + + def filename_to_ui(value): + if isinstance(value, bytes): + value = value.decode(get_filesystem_encoding(), "replace") + else: + value = value.encode("utf-8", "surrogateescape").decode("utf-8", "replace") + return value + + +def get_streerror(e, default=None): + if hasattr(e, "strerror"): + msg = e.strerror + else: + if default is not None: + msg = default + else: + msg = str(e) + if isinstance(msg, bytes): + msg = msg.decode("utf-8", "replace") + return msg + + +def _wrap_io_open(file, mode, encoding, errors): + """On Python 2, :func:`io.open` returns a text file wrapper that + requires passing ``unicode`` to ``write``. Need to open the file in + binary mode then wrap it in a subclass that can write ``str`` and + ``unicode``. + + Also handles not passing ``encoding`` and ``errors`` in binary mode. + """ + binary = "b" in mode + + if binary: + kwargs = {} + else: + kwargs = {"encoding": encoding, "errors": errors} + + if not PY2 or binary: + return io.open(file, mode, **kwargs) + + f = io.open(file, "{}b".format(mode.replace("t", ""))) + return _make_text_stream(f, **kwargs) + + +def open_stream(filename, mode="r", encoding=None, errors="strict", atomic=False): + binary = "b" in mode + + # Standard streams first. These are simple because they don't need + # special handling for the atomic flag. It's entirely ignored. + if filename == "-": + if any(m in mode for m in ["w", "a", "x"]): + if binary: + return get_binary_stdout(), False + return get_text_stdout(encoding=encoding, errors=errors), False + if binary: + return get_binary_stdin(), False + return get_text_stdin(encoding=encoding, errors=errors), False + + # Non-atomic writes directly go out through the regular open functions. + if not atomic: + return _wrap_io_open(filename, mode, encoding, errors), True + + # Some usability stuff for atomic writes + if "a" in mode: + raise ValueError( + "Appending to an existing file is not supported, because that" + " would involve an expensive `copy`-operation to a temporary" + " file. Open the file in normal `w`-mode and copy explicitly" + " if that's what you're after." + ) + if "x" in mode: + raise ValueError("Use the `overwrite`-parameter instead.") + if "w" not in mode: + raise ValueError("Atomic writes only make sense with `w`-mode.") + + # Atomic writes are more complicated. They work by opening a file + # as a proxy in the same folder and then using the fdopen + # functionality to wrap it in a Python file. Then we wrap it in an + # atomic file that moves the file over on close. 
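The atomic strategy described above, condensed: write everything to a sibling temporary file, then rename it over the target in one step, so readers never observe a half-written file. A minimal sketch using Python 3 semantics (`os.replace` also overwrites on Windows; the vendored code that follows falls back to `os.rename` where `os.replace` is missing); the helper name is illustrative:

```python
import os
import tempfile

def atomic_write_text(path, data, encoding="utf-8"):
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, prefix=".__atomic-")
    try:
        with os.fdopen(fd, "w", encoding=encoding) as f:
            f.write(data)
        os.replace(tmp, path)  # atomic swap into place
    except BaseException:
        os.unlink(tmp)  # never leave a half-written temp file behind
        raise

atomic_write_text("settings.json", u'{"ok": true}')
```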
+ import errno + import random + + try: + perm = os.stat(filename).st_mode + except OSError: + perm = None + + flags = os.O_RDWR | os.O_CREAT | os.O_EXCL + + if binary: + flags |= getattr(os, "O_BINARY", 0) + + while True: + tmp_filename = os.path.join( + os.path.dirname(filename), + ".__atomic-write{:08x}".format(random.randrange(1 << 32)), + ) + try: + fd = os.open(tmp_filename, flags, 0o666 if perm is None else perm) + break + except OSError as e: + if e.errno == errno.EEXIST or ( + os.name == "nt" + and e.errno == errno.EACCES + and os.path.isdir(e.filename) + and os.access(e.filename, os.W_OK) + ): + continue + raise + + if perm is not None: + os.chmod(tmp_filename, perm) # in case perm includes bits in umask + + f = _wrap_io_open(fd, mode, encoding, errors) + return _AtomicFile(f, tmp_filename, os.path.realpath(filename)), True + + +# Used in a destructor call, needs extra protection from interpreter cleanup. +if hasattr(os, "replace"): + _replace = os.replace + _can_replace = True +else: + _replace = os.rename + _can_replace = not WIN + + +class _AtomicFile(object): + def __init__(self, f, tmp_filename, real_filename): + self._f = f + self._tmp_filename = tmp_filename + self._real_filename = real_filename + self.closed = False + + @property + def name(self): + return self._real_filename + + def close(self, delete=False): + if self.closed: + return + self._f.close() + if not _can_replace: + try: + os.remove(self._real_filename) + except OSError: + pass + _replace(self._tmp_filename, self._real_filename) + self.closed = True + + def __getattr__(self, name): + return getattr(self._f, name) + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_value, tb): + self.close(delete=exc_type is not None) + + def __repr__(self): + return repr(self._f) + + +auto_wrap_for_ansi = None +colorama = None +get_winterm_size = None + + +def strip_ansi(value): + return _ansi_re.sub("", value) + + +def _is_jupyter_kernel_output(stream): + if WIN: + # TODO: Couldn't test on Windows, should't try to support until + # someone tests the details wrt colorama. + return + + while isinstance(stream, (_FixupStream, _NonClosingTextIOWrapper)): + stream = stream._stream + + return stream.__class__.__module__.startswith("ipykernel.") + + +def should_strip_ansi(stream=None, color=None): + if color is None: + if stream is None: + stream = sys.stdin + return not isatty(stream) and not _is_jupyter_kernel_output(stream) + return not color + + +# If we're on Windows, we provide transparent integration through +# colorama. This will make ANSI colors through the echo function +# work automatically. +if WIN: + # Windows has a smaller terminal + DEFAULT_COLUMNS = 79 + + from ._winconsole import _get_windows_console_stream, _wrap_std_stream + + def _get_argv_encoding(): + import locale + + return locale.getpreferredencoding() + + if PY2: + + def raw_input(prompt=""): + sys.stderr.flush() + if prompt: + stdout = _default_text_stdout() + stdout.write(prompt) + stdin = _default_text_stdin() + return stdin.readline().rstrip("\r\n") + + try: + import colorama + except ImportError: + pass + else: + _ansi_stream_wrappers = WeakKeyDictionary() + + def auto_wrap_for_ansi(stream, color=None): + """This function wraps a stream so that calls through colorama + are issued to the win32 console API to recolor on demand. It + also ensures to reset the colors if a write call is interrupted + to not destroy the console afterwards. 
+ """ + try: + cached = _ansi_stream_wrappers.get(stream) + except Exception: + cached = None + if cached is not None: + return cached + strip = should_strip_ansi(stream, color) + ansi_wrapper = colorama.AnsiToWin32(stream, strip=strip) + rv = ansi_wrapper.stream + _write = rv.write + + def _safe_write(s): + try: + return _write(s) + except: + ansi_wrapper.reset_all() + raise + + rv.write = _safe_write + try: + _ansi_stream_wrappers[stream] = rv + except Exception: + pass + return rv + + def get_winterm_size(): + win = colorama.win32.GetConsoleScreenBufferInfo( + colorama.win32.STDOUT + ).srWindow + return win.Right - win.Left, win.Bottom - win.Top + + +else: + + def _get_argv_encoding(): + return getattr(sys.stdin, "encoding", None) or get_filesystem_encoding() + + _get_windows_console_stream = lambda *x: None + _wrap_std_stream = lambda *x: None + + +def term_len(x): + return len(strip_ansi(x)) + + +def isatty(stream): + try: + return stream.isatty() + except Exception: + return False + + +def _make_cached_stream_func(src_func, wrapper_func): + cache = WeakKeyDictionary() + + def func(): + stream = src_func() + try: + rv = cache.get(stream) + except Exception: + rv = None + if rv is not None: + return rv + rv = wrapper_func() + try: + stream = src_func() # In case wrapper_func() modified the stream + cache[stream] = rv + except Exception: + pass + return rv + + return func + + +_default_text_stdin = _make_cached_stream_func(lambda: sys.stdin, get_text_stdin) +_default_text_stdout = _make_cached_stream_func(lambda: sys.stdout, get_text_stdout) +_default_text_stderr = _make_cached_stream_func(lambda: sys.stderr, get_text_stderr) + + +binary_streams = { + "stdin": get_binary_stdin, + "stdout": get_binary_stdout, + "stderr": get_binary_stderr, +} + +text_streams = { + "stdin": get_text_stdin, + "stdout": get_text_stdout, + "stderr": get_text_stderr, +} diff --git a/openpype/vendor/python/python_2/click/_termui_impl.py b/openpype/vendor/python/python_2/click/_termui_impl.py new file mode 100644 index 00000000000..88bec37701c --- /dev/null +++ b/openpype/vendor/python/python_2/click/_termui_impl.py @@ -0,0 +1,657 @@ +# -*- coding: utf-8 -*- +""" +This module contains implementations for the termui module. To keep the +import time of Click down, some infrequently used functionality is +placed in this module and only imported as needed. 
+""" +import contextlib +import math +import os +import sys +import time + +from ._compat import _default_text_stdout +from ._compat import CYGWIN +from ._compat import get_best_encoding +from ._compat import int_types +from ._compat import isatty +from ._compat import open_stream +from ._compat import range_type +from ._compat import strip_ansi +from ._compat import term_len +from ._compat import WIN +from .exceptions import ClickException +from .utils import echo + +if os.name == "nt": + BEFORE_BAR = "\r" + AFTER_BAR = "\n" +else: + BEFORE_BAR = "\r\033[?25l" + AFTER_BAR = "\033[?25h\n" + + +def _length_hint(obj): + """Returns the length hint of an object.""" + try: + return len(obj) + except (AttributeError, TypeError): + try: + get_hint = type(obj).__length_hint__ + except AttributeError: + return None + try: + hint = get_hint(obj) + except TypeError: + return None + if hint is NotImplemented or not isinstance(hint, int_types) or hint < 0: + return None + return hint + + +class ProgressBar(object): + def __init__( + self, + iterable, + length=None, + fill_char="#", + empty_char=" ", + bar_template="%(bar)s", + info_sep=" ", + show_eta=True, + show_percent=None, + show_pos=False, + item_show_func=None, + label=None, + file=None, + color=None, + width=30, + ): + self.fill_char = fill_char + self.empty_char = empty_char + self.bar_template = bar_template + self.info_sep = info_sep + self.show_eta = show_eta + self.show_percent = show_percent + self.show_pos = show_pos + self.item_show_func = item_show_func + self.label = label or "" + if file is None: + file = _default_text_stdout() + self.file = file + self.color = color + self.width = width + self.autowidth = width == 0 + + if length is None: + length = _length_hint(iterable) + if iterable is None: + if length is None: + raise TypeError("iterable or length is required") + iterable = range_type(length) + self.iter = iter(iterable) + self.length = length + self.length_known = length is not None + self.pos = 0 + self.avg = [] + self.start = self.last_eta = time.time() + self.eta_known = False + self.finished = False + self.max_width = None + self.entered = False + self.current_item = None + self.is_hidden = not isatty(self.file) + self._last_line = None + self.short_limit = 0.5 + + def __enter__(self): + self.entered = True + self.render_progress() + return self + + def __exit__(self, exc_type, exc_value, tb): + self.render_finish() + + def __iter__(self): + if not self.entered: + raise RuntimeError("You need to use progress bars in a with block.") + self.render_progress() + return self.generator() + + def __next__(self): + # Iteration is defined in terms of a generator function, + # returned by iter(self); use that to define next(). This works + # because `self.iter` is an iterable consumed by that generator, + # so it is re-entry safe. Calling `next(self.generator())` + # twice works and does "what you want". 
+ return next(iter(self)) + + # Python 2 compat + next = __next__ + + def is_fast(self): + return time.time() - self.start <= self.short_limit + + def render_finish(self): + if self.is_hidden or self.is_fast(): + return + self.file.write(AFTER_BAR) + self.file.flush() + + @property + def pct(self): + if self.finished: + return 1.0 + return min(self.pos / (float(self.length) or 1), 1.0) + + @property + def time_per_iteration(self): + if not self.avg: + return 0.0 + return sum(self.avg) / float(len(self.avg)) + + @property + def eta(self): + if self.length_known and not self.finished: + return self.time_per_iteration * (self.length - self.pos) + return 0.0 + + def format_eta(self): + if self.eta_known: + t = int(self.eta) + seconds = t % 60 + t //= 60 + minutes = t % 60 + t //= 60 + hours = t % 24 + t //= 24 + if t > 0: + return "{}d {:02}:{:02}:{:02}".format(t, hours, minutes, seconds) + else: + return "{:02}:{:02}:{:02}".format(hours, minutes, seconds) + return "" + + def format_pos(self): + pos = str(self.pos) + if self.length_known: + pos += "/{}".format(self.length) + return pos + + def format_pct(self): + return "{: 4}%".format(int(self.pct * 100))[1:] + + def format_bar(self): + if self.length_known: + bar_length = int(self.pct * self.width) + bar = self.fill_char * bar_length + bar += self.empty_char * (self.width - bar_length) + elif self.finished: + bar = self.fill_char * self.width + else: + bar = list(self.empty_char * (self.width or 1)) + if self.time_per_iteration != 0: + bar[ + int( + (math.cos(self.pos * self.time_per_iteration) / 2.0 + 0.5) + * self.width + ) + ] = self.fill_char + bar = "".join(bar) + return bar + + def format_progress_line(self): + show_percent = self.show_percent + + info_bits = [] + if self.length_known and show_percent is None: + show_percent = not self.show_pos + + if self.show_pos: + info_bits.append(self.format_pos()) + if show_percent: + info_bits.append(self.format_pct()) + if self.show_eta and self.eta_known and not self.finished: + info_bits.append(self.format_eta()) + if self.item_show_func is not None: + item_info = self.item_show_func(self.current_item) + if item_info is not None: + info_bits.append(item_info) + + return ( + self.bar_template + % { + "label": self.label, + "bar": self.format_bar(), + "info": self.info_sep.join(info_bits), + } + ).rstrip() + + def render_progress(self): + from .termui import get_terminal_size + + if self.is_hidden: + return + + buf = [] + # Update width in case the terminal has been resized + if self.autowidth: + old_width = self.width + self.width = 0 + clutter_length = term_len(self.format_progress_line()) + new_width = max(0, get_terminal_size()[0] - clutter_length) + if new_width < old_width: + buf.append(BEFORE_BAR) + buf.append(" " * self.max_width) + self.max_width = new_width + self.width = new_width + + clear_width = self.width + if self.max_width is not None: + clear_width = self.max_width + + buf.append(BEFORE_BAR) + line = self.format_progress_line() + line_len = term_len(line) + if self.max_width is None or self.max_width < line_len: + self.max_width = line_len + + buf.append(line) + buf.append(" " * (clear_width - line_len)) + line = "".join(buf) + # Render the line only if it changed. 
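The rendering code here builds on a classic terminal trick: carriage return to column 0, repaint, and pad with spaces so residue from a longer previous line is erased. A stdlib-only sketch of that core loop (state is kept in a default argument purely for brevity):

```python
import shutil
import sys
import time

def redraw(line, state={"prev": 0}):
    width = shutil.get_terminal_size().columns
    line = line[:width]
    # Trailing spaces wipe leftovers if the previous line was longer.
    padding = " " * max(state["prev"] - len(line), 0)
    sys.stdout.write("\r" + line + padding)
    sys.stdout.flush()
    state["prev"] = len(line)

for pct in range(0, 101, 5):
    redraw("Crunching  [{:3d}%]".format(pct))
    time.sleep(0.05)
sys.stdout.write("\n")
```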
+ + if line != self._last_line and not self.is_fast(): + self._last_line = line + echo(line, file=self.file, color=self.color, nl=False) + self.file.flush() + + def make_step(self, n_steps): + self.pos += n_steps + if self.length_known and self.pos >= self.length: + self.finished = True + + if (time.time() - self.last_eta) < 1.0: + return + + self.last_eta = time.time() + + # self.avg is a rolling list of length <= 7 of steps where steps are + # defined as time elapsed divided by the total progress through + # self.length. + if self.pos: + step = (time.time() - self.start) / self.pos + else: + step = time.time() - self.start + + self.avg = self.avg[-6:] + [step] + + self.eta_known = self.length_known + + def update(self, n_steps): + self.make_step(n_steps) + self.render_progress() + + def finish(self): + self.eta_known = 0 + self.current_item = None + self.finished = True + + def generator(self): + """Return a generator which yields the items added to the bar + during construction, and updates the progress bar *after* the + yielded block returns. + """ + # WARNING: the iterator interface for `ProgressBar` relies on + # this and only works because this is a simple generator which + # doesn't create or manage additional state. If this function + # changes, the impact should be evaluated both against + # `iter(bar)` and `next(bar)`. `next()` in particular may call + # `self.generator()` repeatedly, and this must remain safe in + # order for that interface to work. + if not self.entered: + raise RuntimeError("You need to use progress bars in a with block.") + + if self.is_hidden: + for rv in self.iter: + yield rv + else: + for rv in self.iter: + self.current_item = rv + yield rv + self.update(1) + self.finish() + self.render_progress() + + +def pager(generator, color=None): + """Decide what method to use for paging through text.""" + stdout = _default_text_stdout() + if not isatty(sys.stdin) or not isatty(stdout): + return _nullpager(stdout, generator, color) + pager_cmd = (os.environ.get("PAGER", None) or "").strip() + if pager_cmd: + if WIN: + return _tempfilepager(generator, pager_cmd, color) + return _pipepager(generator, pager_cmd, color) + if os.environ.get("TERM") in ("dumb", "emacs"): + return _nullpager(stdout, generator, color) + if WIN or sys.platform.startswith("os2"): + return _tempfilepager(generator, "more <", color) + if hasattr(os, "system") and os.system("(less) 2>/dev/null") == 0: + return _pipepager(generator, "less", color) + + import tempfile + + fd, filename = tempfile.mkstemp() + os.close(fd) + try: + if hasattr(os, "system") and os.system('more "{}"'.format(filename)) == 0: + return _pipepager(generator, "more", color) + return _nullpager(stdout, generator, color) + finally: + os.unlink(filename) + + +def _pipepager(generator, cmd, color): + """Page through text by feeding it to another program. Invoking a + pager through this might support colors. 
+ """ + import subprocess + + env = dict(os.environ) + + # If we're piping to less we might support colors under the + # condition that + cmd_detail = cmd.rsplit("/", 1)[-1].split() + if color is None and cmd_detail[0] == "less": + less_flags = "{}{}".format(os.environ.get("LESS", ""), " ".join(cmd_detail[1:])) + if not less_flags: + env["LESS"] = "-R" + color = True + elif "r" in less_flags or "R" in less_flags: + color = True + + c = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, env=env) + encoding = get_best_encoding(c.stdin) + try: + for text in generator: + if not color: + text = strip_ansi(text) + + c.stdin.write(text.encode(encoding, "replace")) + except (IOError, KeyboardInterrupt): + pass + else: + c.stdin.close() + + # Less doesn't respect ^C, but catches it for its own UI purposes (aborting + # search or other commands inside less). + # + # That means when the user hits ^C, the parent process (click) terminates, + # but less is still alive, paging the output and messing up the terminal. + # + # If the user wants to make the pager exit on ^C, they should set + # `LESS='-K'`. It's not our decision to make. + while True: + try: + c.wait() + except KeyboardInterrupt: + pass + else: + break + + +def _tempfilepager(generator, cmd, color): + """Page through text by invoking a program on a temporary file.""" + import tempfile + + filename = tempfile.mktemp() + # TODO: This never terminates if the passed generator never terminates. + text = "".join(generator) + if not color: + text = strip_ansi(text) + encoding = get_best_encoding(sys.stdout) + with open_stream(filename, "wb")[0] as f: + f.write(text.encode(encoding)) + try: + os.system('{} "{}"'.format(cmd, filename)) + finally: + os.unlink(filename) + + +def _nullpager(stream, generator, color): + """Simply print unformatted text. 
This is the ultimate fallback.""" + for text in generator: + if not color: + text = strip_ansi(text) + stream.write(text) + + +class Editor(object): + def __init__(self, editor=None, env=None, require_save=True, extension=".txt"): + self.editor = editor + self.env = env + self.require_save = require_save + self.extension = extension + + def get_editor(self): + if self.editor is not None: + return self.editor + for key in "VISUAL", "EDITOR": + rv = os.environ.get(key) + if rv: + return rv + if WIN: + return "notepad" + for editor in "sensible-editor", "vim", "nano": + if os.system("which {} >/dev/null 2>&1".format(editor)) == 0: + return editor + return "vi" + + def edit_file(self, filename): + import subprocess + + editor = self.get_editor() + if self.env: + environ = os.environ.copy() + environ.update(self.env) + else: + environ = None + try: + c = subprocess.Popen( + '{} "{}"'.format(editor, filename), env=environ, shell=True, + ) + exit_code = c.wait() + if exit_code != 0: + raise ClickException("{}: Editing failed!".format(editor)) + except OSError as e: + raise ClickException("{}: Editing failed: {}".format(editor, e)) + + def edit(self, text): + import tempfile + + text = text or "" + if text and not text.endswith("\n"): + text += "\n" + + fd, name = tempfile.mkstemp(prefix="editor-", suffix=self.extension) + try: + if WIN: + encoding = "utf-8-sig" + text = text.replace("\n", "\r\n") + else: + encoding = "utf-8" + text = text.encode(encoding) + + f = os.fdopen(fd, "wb") + f.write(text) + f.close() + timestamp = os.path.getmtime(name) + + self.edit_file(name) + + if self.require_save and os.path.getmtime(name) == timestamp: + return None + + f = open(name, "rb") + try: + rv = f.read() + finally: + f.close() + return rv.decode("utf-8-sig").replace("\r\n", "\n") + finally: + os.unlink(name) + + +def open_url(url, wait=False, locate=False): + import subprocess + + def _unquote_file(url): + try: + import urllib + except ImportError: + import urllib + if url.startswith("file://"): + url = urllib.unquote(url[7:]) + return url + + if sys.platform == "darwin": + args = ["open"] + if wait: + args.append("-W") + if locate: + args.append("-R") + args.append(_unquote_file(url)) + null = open("/dev/null", "w") + try: + return subprocess.Popen(args, stderr=null).wait() + finally: + null.close() + elif WIN: + if locate: + url = _unquote_file(url) + args = 'explorer /select,"{}"'.format(_unquote_file(url.replace('"', ""))) + else: + args = 'start {} "" "{}"'.format( + "/WAIT" if wait else "", url.replace('"', "") + ) + return os.system(args) + elif CYGWIN: + if locate: + url = _unquote_file(url) + args = 'cygstart "{}"'.format(os.path.dirname(url).replace('"', "")) + else: + args = 'cygstart {} "{}"'.format("-w" if wait else "", url.replace('"', "")) + return os.system(args) + + try: + if locate: + url = os.path.dirname(_unquote_file(url)) or "." 
+ else: + url = _unquote_file(url) + c = subprocess.Popen(["xdg-open", url]) + if wait: + return c.wait() + return 0 + except OSError: + if url.startswith(("http://", "https://")) and not locate and not wait: + import webbrowser + + webbrowser.open(url) + return 0 + return 1 + + +def _translate_ch_to_exc(ch): + if ch == u"\x03": + raise KeyboardInterrupt() + if ch == u"\x04" and not WIN: # Unix-like, Ctrl+D + raise EOFError() + if ch == u"\x1a" and WIN: # Windows, Ctrl+Z + raise EOFError() + + +if WIN: + import msvcrt + + @contextlib.contextmanager + def raw_terminal(): + yield + + def getchar(echo): + # The function `getch` will return a bytes object corresponding to + # the pressed character. Since Windows 10 build 1803, it will also + # return \x00 when called a second time after pressing a regular key. + # + # `getwch` does not share this probably-bugged behavior. Moreover, it + # returns a Unicode object by default, which is what we want. + # + # Either of these functions will return \x00 or \xe0 to indicate + # a special key, and you need to call the same function again to get + # the "rest" of the code. The fun part is that \u00e0 is + # "latin small letter a with grave", so if you type that on a French + # keyboard, you _also_ get a \xe0. + # E.g., consider the Up arrow. This returns \xe0 and then \x48. The + # resulting Unicode string reads as "a with grave" + "capital H". + # This is indistinguishable from when the user actually types + # "a with grave" and then "capital H". + # + # When \xe0 is returned, we assume it's part of a special-key sequence + # and call `getwch` again, but that means that when the user types + # the \u00e0 character, `getchar` doesn't return until a second + # character is typed. + # The alternative is returning immediately, but that would mess up + # cross-platform handling of arrow keys and others that start with + # \xe0. Another option is using `getch`, but then we can't reliably + # read non-ASCII characters, because return values of `getch` are + # limited to the current 8-bit codepage. + # + # Anyway, Click doesn't claim to do this Right(tm), and using `getwch` + # is doing the right thing in more situations than with `getch`. + if echo: + func = msvcrt.getwche + else: + func = msvcrt.getwch + + rv = func() + if rv in (u"\x00", u"\xe0"): + # \x00 and \xe0 are control characters that indicate special key, + # see above. 
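All of this is consumed through `click.getchar()`, which returns one logical keypress: on Windows the special keys arrive as the two-character sequences described above (Up is `u"\xe0H"`), while the POSIX implementation further down reads up to 32 bytes at once, so arrows come back as ANSI escapes such as `"\x1b[A"`. A sketch:

```python
import click

click.echo("Press arrows to move, q to quit.")
while True:
    key = click.getchar()  # Ctrl+C surfaces as KeyboardInterrupt
    if key == "q":
        break
    # Windows sends u"\xe0H" for Up; POSIX raw mode sends "\x1b[A".
    if key in (u"\xe0H", u"\x1b[A"):
        click.echo("up")
```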
+ rv += func() + _translate_ch_to_exc(rv) + return rv + + +else: + import tty + import termios + + @contextlib.contextmanager + def raw_terminal(): + if not isatty(sys.stdin): + f = open("/dev/tty") + fd = f.fileno() + else: + fd = sys.stdin.fileno() + f = None + try: + old_settings = termios.tcgetattr(fd) + try: + tty.setraw(fd) + yield fd + finally: + termios.tcsetattr(fd, termios.TCSADRAIN, old_settings) + sys.stdout.flush() + if f is not None: + f.close() + except termios.error: + pass + + def getchar(echo): + with raw_terminal() as fd: + ch = os.read(fd, 32) + ch = ch.decode(get_best_encoding(sys.stdin), "replace") + if echo and isatty(sys.stdout): + sys.stdout.write(ch) + _translate_ch_to_exc(ch) + return ch diff --git a/openpype/vendor/python/python_2/click/_textwrap.py b/openpype/vendor/python/python_2/click/_textwrap.py new file mode 100644 index 00000000000..6959087b7f3 --- /dev/null +++ b/openpype/vendor/python/python_2/click/_textwrap.py @@ -0,0 +1,37 @@ +import textwrap +from contextlib import contextmanager + + +class TextWrapper(textwrap.TextWrapper): + def _handle_long_word(self, reversed_chunks, cur_line, cur_len, width): + space_left = max(width - cur_len, 1) + + if self.break_long_words: + last = reversed_chunks[-1] + cut = last[:space_left] + res = last[space_left:] + cur_line.append(cut) + reversed_chunks[-1] = res + elif not cur_line: + cur_line.append(reversed_chunks.pop()) + + @contextmanager + def extra_indent(self, indent): + old_initial_indent = self.initial_indent + old_subsequent_indent = self.subsequent_indent + self.initial_indent += indent + self.subsequent_indent += indent + try: + yield + finally: + self.initial_indent = old_initial_indent + self.subsequent_indent = old_subsequent_indent + + def indent_only(self, text): + rv = [] + for idx, line in enumerate(text.splitlines()): + indent = self.initial_indent + if idx > 0: + indent = self.subsequent_indent + rv.append(indent + line) + return "\n".join(rv) diff --git a/openpype/vendor/python/python_2/click/_unicodefun.py b/openpype/vendor/python/python_2/click/_unicodefun.py new file mode 100644 index 00000000000..781c3652272 --- /dev/null +++ b/openpype/vendor/python/python_2/click/_unicodefun.py @@ -0,0 +1,131 @@ +import codecs +import os +import sys + +from ._compat import PY2 + + +def _find_unicode_literals_frame(): + import __future__ + + if not hasattr(sys, "_getframe"): # not all Python implementations have it + return 0 + frm = sys._getframe(1) + idx = 1 + while frm is not None: + if frm.f_globals.get("__name__", "").startswith("click."): + frm = frm.f_back + idx += 1 + elif frm.f_code.co_flags & __future__.unicode_literals.compiler_flag: + return idx + else: + break + return 0 + + +def _check_for_unicode_literals(): + if not __debug__: + return + + from . import disable_unicode_literals_warning + + if not PY2 or disable_unicode_literals_warning: + return + bad_frame = _find_unicode_literals_frame() + if bad_frame <= 0: + return + from warnings import warn + + warn( + Warning( + "Click detected the use of the unicode_literals __future__" + " import. This is heavily discouraged because it can" + " introduce subtle bugs in your code. You should instead" + ' use explicit u"" literals for your unicode strings. 
For' + " more information see" + " https://click.palletsprojects.com/python3/" + ), + stacklevel=bad_frame, + ) + + +def _verify_python3_env(): + """Ensures that the environment is good for unicode on Python 3.""" + if PY2: + return + try: + import locale + + fs_enc = codecs.lookup(locale.getpreferredencoding()).name + except Exception: + fs_enc = "ascii" + if fs_enc != "ascii": + return + + extra = "" + if os.name == "posix": + import subprocess + + try: + rv = subprocess.Popen( + ["locale", "-a"], stdout=subprocess.PIPE, stderr=subprocess.PIPE + ).communicate()[0] + except OSError: + rv = b"" + good_locales = set() + has_c_utf8 = False + + # Make sure we're operating on text here. + if isinstance(rv, bytes): + rv = rv.decode("ascii", "replace") + + for line in rv.splitlines(): + locale = line.strip() + if locale.lower().endswith((".utf-8", ".utf8")): + good_locales.add(locale) + if locale.lower() in ("c.utf8", "c.utf-8"): + has_c_utf8 = True + + extra += "\n\n" + if not good_locales: + extra += ( + "Additional information: on this system no suitable" + " UTF-8 locales were discovered. This most likely" + " requires resolving by reconfiguring the locale" + " system." + ) + elif has_c_utf8: + extra += ( + "This system supports the C.UTF-8 locale which is" + " recommended. You might be able to resolve your issue" + " by exporting the following environment variables:\n\n" + " export LC_ALL=C.UTF-8\n" + " export LANG=C.UTF-8" + ) + else: + extra += ( + "This system lists a couple of UTF-8 supporting locales" + " that you can pick from. The following suitable" + " locales were discovered: {}".format(", ".join(sorted(good_locales))) + ) + + bad_locale = None + for locale in os.environ.get("LC_ALL"), os.environ.get("LANG"): + if locale and locale.lower().endswith((".utf-8", ".utf8")): + bad_locale = locale + if locale is not None: + break + if bad_locale is not None: + extra += ( + "\n\nClick discovered that you exported a UTF-8 locale" + " but the locale system could not pick up from it" + " because it does not exist. The exported locale is" + " '{}' but it is not supported".format(bad_locale) + ) + + raise RuntimeError( + "Click will abort further execution because Python 3 was" + " configured to use ASCII as encoding for the environment." + " Consult https://click.palletsprojects.com/python3/ for" + " mitigation steps.{}".format(extra) + ) diff --git a/openpype/vendor/python/python_2/click/_winconsole.py b/openpype/vendor/python/python_2/click/_winconsole.py new file mode 100644 index 00000000000..b6c4274af0e --- /dev/null +++ b/openpype/vendor/python/python_2/click/_winconsole.py @@ -0,0 +1,370 @@ +# -*- coding: utf-8 -*- +# This module is based on the excellent work by Adam Bartoš who +# provided a lot of what went into the implementation here in +# the discussion to issue1602 in the Python bug tracker. +# +# There are some general differences in regards to how this works +# compared to the original patches as we do not need to patch +# the entire interpreter but just work in our little world of +# echo and prmopt. 
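One detail worth keeping in mind while reading this file: `ReadConsoleW`/`WriteConsoleW` count UTF-16-LE *code units* (two bytes each), not bytes, which is why the reader below rejects odd byte counts and halves sizes everywhere. A quick illustration:

```python
text = u"héllo"
data = text.encode("utf-16-le")

assert len(data) == 2 * len(text)  # two bytes per BMP character
code_units = len(data) // 2        # the unit the console APIs count
print(code_units)                  # 5
```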
+import ctypes +import io +import os +import sys +import time +import zlib +from ctypes import byref +from ctypes import c_char +from ctypes import c_char_p +from ctypes import c_int +from ctypes import c_ssize_t +from ctypes import c_ulong +from ctypes import c_void_p +from ctypes import POINTER +from ctypes import py_object +from ctypes import windll +from ctypes import WinError +from ctypes import WINFUNCTYPE +from ctypes.wintypes import DWORD +from ctypes.wintypes import HANDLE +from ctypes.wintypes import LPCWSTR +from ctypes.wintypes import LPWSTR + +import msvcrt + +from ._compat import _NonClosingTextIOWrapper +from ._compat import PY2 +from ._compat import text_type + +try: + from ctypes import pythonapi + + PyObject_GetBuffer = pythonapi.PyObject_GetBuffer + PyBuffer_Release = pythonapi.PyBuffer_Release +except ImportError: + pythonapi = None + + +c_ssize_p = POINTER(c_ssize_t) + +kernel32 = windll.kernel32 +GetStdHandle = kernel32.GetStdHandle +ReadConsoleW = kernel32.ReadConsoleW +WriteConsoleW = kernel32.WriteConsoleW +GetConsoleMode = kernel32.GetConsoleMode +GetLastError = kernel32.GetLastError +GetCommandLineW = WINFUNCTYPE(LPWSTR)(("GetCommandLineW", windll.kernel32)) +CommandLineToArgvW = WINFUNCTYPE(POINTER(LPWSTR), LPCWSTR, POINTER(c_int))( + ("CommandLineToArgvW", windll.shell32) +) +LocalFree = WINFUNCTYPE(ctypes.c_void_p, ctypes.c_void_p)( + ("LocalFree", windll.kernel32) +) + + +STDIN_HANDLE = GetStdHandle(-10) +STDOUT_HANDLE = GetStdHandle(-11) +STDERR_HANDLE = GetStdHandle(-12) + + +PyBUF_SIMPLE = 0 +PyBUF_WRITABLE = 1 + +ERROR_SUCCESS = 0 +ERROR_NOT_ENOUGH_MEMORY = 8 +ERROR_OPERATION_ABORTED = 995 + +STDIN_FILENO = 0 +STDOUT_FILENO = 1 +STDERR_FILENO = 2 + +EOF = b"\x1a" +MAX_BYTES_WRITTEN = 32767 + + +class Py_buffer(ctypes.Structure): + _fields_ = [ + ("buf", c_void_p), + ("obj", py_object), + ("len", c_ssize_t), + ("itemsize", c_ssize_t), + ("readonly", c_int), + ("ndim", c_int), + ("format", c_char_p), + ("shape", c_ssize_p), + ("strides", c_ssize_p), + ("suboffsets", c_ssize_p), + ("internal", c_void_p), + ] + + if PY2: + _fields_.insert(-1, ("smalltable", c_ssize_t * 2)) + + +# On PyPy we cannot get buffers so our ability to operate here is +# serverly limited. 
+if pythonapi is None: + get_buffer = None +else: + + def get_buffer(obj, writable=False): + buf = Py_buffer() + flags = PyBUF_WRITABLE if writable else PyBUF_SIMPLE + PyObject_GetBuffer(py_object(obj), byref(buf), flags) + try: + buffer_type = c_char * buf.len + return buffer_type.from_address(buf.buf) + finally: + PyBuffer_Release(byref(buf)) + + +class _WindowsConsoleRawIOBase(io.RawIOBase): + def __init__(self, handle): + self.handle = handle + + def isatty(self): + io.RawIOBase.isatty(self) + return True + + +class _WindowsConsoleReader(_WindowsConsoleRawIOBase): + def readable(self): + return True + + def readinto(self, b): + bytes_to_be_read = len(b) + if not bytes_to_be_read: + return 0 + elif bytes_to_be_read % 2: + raise ValueError( + "cannot read odd number of bytes from UTF-16-LE encoded console" + ) + + buffer = get_buffer(b, writable=True) + code_units_to_be_read = bytes_to_be_read // 2 + code_units_read = c_ulong() + + rv = ReadConsoleW( + HANDLE(self.handle), + buffer, + code_units_to_be_read, + byref(code_units_read), + None, + ) + if GetLastError() == ERROR_OPERATION_ABORTED: + # wait for KeyboardInterrupt + time.sleep(0.1) + if not rv: + raise OSError("Windows error: {}".format(GetLastError())) + + if buffer[0] == EOF: + return 0 + return 2 * code_units_read.value + + +class _WindowsConsoleWriter(_WindowsConsoleRawIOBase): + def writable(self): + return True + + @staticmethod + def _get_error_message(errno): + if errno == ERROR_SUCCESS: + return "ERROR_SUCCESS" + elif errno == ERROR_NOT_ENOUGH_MEMORY: + return "ERROR_NOT_ENOUGH_MEMORY" + return "Windows error {}".format(errno) + + def write(self, b): + bytes_to_be_written = len(b) + buf = get_buffer(b) + code_units_to_be_written = min(bytes_to_be_written, MAX_BYTES_WRITTEN) // 2 + code_units_written = c_ulong() + + WriteConsoleW( + HANDLE(self.handle), + buf, + code_units_to_be_written, + byref(code_units_written), + None, + ) + bytes_written = 2 * code_units_written.value + + if bytes_written == 0 and bytes_to_be_written > 0: + raise OSError(self._get_error_message(GetLastError())) + return bytes_written + + +class ConsoleStream(object): + def __init__(self, text_stream, byte_stream): + self._text_stream = text_stream + self.buffer = byte_stream + + @property + def name(self): + return self.buffer.name + + def write(self, x): + if isinstance(x, text_type): + return self._text_stream.write(x) + try: + self.flush() + except Exception: + pass + return self.buffer.write(x) + + def writelines(self, lines): + for line in lines: + self.write(line) + + def __getattr__(self, name): + return getattr(self._text_stream, name) + + def isatty(self): + return self.buffer.isatty() + + def __repr__(self): + return "".format( + self.name, self.encoding + ) + + +class WindowsChunkedWriter(object): + """ + Wraps a stream (such as stdout), acting as a transparent proxy for all + attribute access apart from method 'write()' which we wrap to write in + limited chunks due to a Windows limitation on binary console streams. + """ + + def __init__(self, wrapped): + # double-underscore everything to prevent clashes with names of + # attributes on the wrapped stream object. 
+ self.__wrapped = wrapped + + def __getattr__(self, name): + return getattr(self.__wrapped, name) + + def write(self, text): + total_to_write = len(text) + written = 0 + + while written < total_to_write: + to_write = min(total_to_write - written, MAX_BYTES_WRITTEN) + self.__wrapped.write(text[written : written + to_write]) + written += to_write + + +_wrapped_std_streams = set() + + +def _wrap_std_stream(name): + # Python 2 & Windows 7 and below + if ( + PY2 + and sys.getwindowsversion()[:2] <= (6, 1) + and name not in _wrapped_std_streams + ): + setattr(sys, name, WindowsChunkedWriter(getattr(sys, name))) + _wrapped_std_streams.add(name) + + +def _get_text_stdin(buffer_stream): + text_stream = _NonClosingTextIOWrapper( + io.BufferedReader(_WindowsConsoleReader(STDIN_HANDLE)), + "utf-16-le", + "strict", + line_buffering=True, + ) + return ConsoleStream(text_stream, buffer_stream) + + +def _get_text_stdout(buffer_stream): + text_stream = _NonClosingTextIOWrapper( + io.BufferedWriter(_WindowsConsoleWriter(STDOUT_HANDLE)), + "utf-16-le", + "strict", + line_buffering=True, + ) + return ConsoleStream(text_stream, buffer_stream) + + +def _get_text_stderr(buffer_stream): + text_stream = _NonClosingTextIOWrapper( + io.BufferedWriter(_WindowsConsoleWriter(STDERR_HANDLE)), + "utf-16-le", + "strict", + line_buffering=True, + ) + return ConsoleStream(text_stream, buffer_stream) + + +if PY2: + + def _hash_py_argv(): + return zlib.crc32("\x00".join(sys.argv[1:])) + + _initial_argv_hash = _hash_py_argv() + + def _get_windows_argv(): + argc = c_int(0) + argv_unicode = CommandLineToArgvW(GetCommandLineW(), byref(argc)) + if not argv_unicode: + raise WinError() + try: + argv = [argv_unicode[i] for i in range(0, argc.value)] + finally: + LocalFree(argv_unicode) + del argv_unicode + + if not hasattr(sys, "frozen"): + argv = argv[1:] + while len(argv) > 0: + arg = argv[0] + if not arg.startswith("-") or arg == "-": + break + argv = argv[1:] + if arg.startswith(("-c", "-m")): + break + + return argv[1:] + + +_stream_factories = { + 0: _get_text_stdin, + 1: _get_text_stdout, + 2: _get_text_stderr, +} + + +def _is_console(f): + if not hasattr(f, "fileno"): + return False + + try: + fileno = f.fileno() + except OSError: + return False + + handle = msvcrt.get_osfhandle(fileno) + return bool(GetConsoleMode(handle, byref(DWORD()))) + + +def _get_windows_console_stream(f, encoding, errors): + if ( + get_buffer is not None + and encoding in ("utf-16-le", None) + and errors in ("strict", None) + and _is_console(f) + ): + func = _stream_factories.get(f.fileno()) + if func is not None: + if not PY2: + f = getattr(f, "buffer", None) + if f is None: + return None + else: + # If we are on Python 2 we need to set the stream that we + # deal with to binary mode as otherwise the exercise if a + # bit moot. The same problems apply as for + # get_binary_stdin and friends from _compat. 
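The chunking idea in `WindowsChunkedWriter` above is portable and easy to reuse: never hand a console stream more than `MAX_BYTES_WRITTEN` bytes in one call. A standalone sketch (helper name illustrative):

```python
import io

MAX_BYTES_WRITTEN = 32767  # same cap as defined earlier in this file

def write_chunked(stream, data):
    total = len(data)
    written = 0
    while written < total:
        step = min(total - written, MAX_BYTES_WRITTEN)
        stream.write(data[written:written + step])
        written += step

buf = io.BytesIO()
write_chunked(buf, b"x" * 100000)
assert buf.getvalue() == b"x" * 100000
```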
+ msvcrt.setmode(f.fileno(), os.O_BINARY) + return func(f) diff --git a/openpype/vendor/python/python_2/click/core.py b/openpype/vendor/python/python_2/click/core.py new file mode 100644 index 00000000000..f58bf26d2f9 --- /dev/null +++ b/openpype/vendor/python/python_2/click/core.py @@ -0,0 +1,2030 @@ +import errno +import inspect +import os +import sys +from contextlib import contextmanager +from functools import update_wrapper +from itertools import repeat + +from ._compat import isidentifier +from ._compat import iteritems +from ._compat import PY2 +from ._compat import string_types +from ._unicodefun import _check_for_unicode_literals +from ._unicodefun import _verify_python3_env +from .exceptions import Abort +from .exceptions import BadParameter +from .exceptions import ClickException +from .exceptions import Exit +from .exceptions import MissingParameter +from .exceptions import UsageError +from .formatting import HelpFormatter +from .formatting import join_options +from .globals import pop_context +from .globals import push_context +from .parser import OptionParser +from .parser import split_opt +from .termui import confirm +from .termui import prompt +from .termui import style +from .types import BOOL +from .types import convert_type +from .types import IntRange +from .utils import echo +from .utils import get_os_args +from .utils import make_default_short_help +from .utils import make_str +from .utils import PacifyFlushWrapper + +_missing = object() + +SUBCOMMAND_METAVAR = "COMMAND [ARGS]..." +SUBCOMMANDS_METAVAR = "COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]..." + +DEPRECATED_HELP_NOTICE = " (DEPRECATED)" +DEPRECATED_INVOKE_NOTICE = "DeprecationWarning: The command %(name)s is deprecated." + + +def _maybe_show_deprecated_notice(cmd): + if cmd.deprecated: + echo(style(DEPRECATED_INVOKE_NOTICE % {"name": cmd.name}, fg="red"), err=True) + + +def fast_exit(code): + """Exit without garbage collection, this speeds up exit by about 10ms for + things like bash completion. + """ + sys.stdout.flush() + sys.stderr.flush() + os._exit(code) + + +def _bashcomplete(cmd, prog_name, complete_var=None): + """Internal handler for the bash completion support.""" + if complete_var is None: + complete_var = "_{}_COMPLETE".format(prog_name.replace("-", "_").upper()) + complete_instr = os.environ.get(complete_var) + if not complete_instr: + return + + from ._bashcomplete import bashcomplete + + if bashcomplete(cmd, prog_name, complete_var, complete_instr): + fast_exit(1) + + +def _check_multicommand(base_command, cmd_name, cmd, register=False): + if not base_command.chain or not isinstance(cmd, MultiCommand): + return + if register: + hint = ( + "It is not possible to add multi commands as children to" + " another multi command that is in chain mode." + ) + else: + hint = ( + "Found a multi command as subcommand to a multi command" + " that is in chain mode. This is not supported." + ) + raise RuntimeError( + "{}. Command '{}' is set to chain and '{}' was added as" + " subcommand but it in itself is a multi command. 
('{}' is a {}" + " within a chained {} named '{}').".format( + hint, + base_command.name, + cmd_name, + cmd_name, + cmd.__class__.__name__, + base_command.__class__.__name__, + base_command.name, + ) + ) + + +def batch(iterable, batch_size): + return list(zip(*repeat(iter(iterable), batch_size))) + + +def invoke_param_callback(callback, ctx, param, value): + code = getattr(callback, "__code__", None) + args = getattr(code, "co_argcount", 3) + + if args < 3: + from warnings import warn + + warn( + "Parameter callbacks take 3 args, (ctx, param, value). The" + " 2-arg style is deprecated and will be removed in 8.0.".format(callback), + DeprecationWarning, + stacklevel=3, + ) + return callback(ctx, value) + + return callback(ctx, param, value) + + +@contextmanager +def augment_usage_errors(ctx, param=None): + """Context manager that attaches extra information to exceptions.""" + try: + yield + except BadParameter as e: + if e.ctx is None: + e.ctx = ctx + if param is not None and e.param is None: + e.param = param + raise + except UsageError as e: + if e.ctx is None: + e.ctx = ctx + raise + + +def iter_params_for_processing(invocation_order, declaration_order): + """Given a sequence of parameters in the order as should be considered + for processing and an iterable of parameters that exist, this returns + a list in the correct order as they should be processed. + """ + + def sort_key(item): + try: + idx = invocation_order.index(item) + except ValueError: + idx = float("inf") + return (not item.is_eager, idx) + + return sorted(declaration_order, key=sort_key) + + +class Context(object): + """The context is a special internal object that holds state relevant + for the script execution at every single level. It's normally invisible + to commands unless they opt-in to getting access to it. + + The context is useful as it can pass internal objects around and can + control special execution features such as reading data from + environment variables. + + A context can be used as context manager in which case it will call + :meth:`close` on teardown. + + .. versionadded:: 2.0 + Added the `resilient_parsing`, `help_option_names`, + `token_normalize_func` parameters. + + .. versionadded:: 3.0 + Added the `allow_extra_args` and `allow_interspersed_args` + parameters. + + .. versionadded:: 4.0 + Added the `color`, `ignore_unknown_options`, and + `max_content_width` parameters. + + .. versionadded:: 7.1 + Added the `show_default` parameter. + + :param command: the command class for this context. + :param parent: the parent context. + :param info_name: the info name for this invocation. Generally this + is the most descriptive name for the script or + command. For the toplevel script it is usually + the name of the script, for commands below it it's + the name of the script. + :param obj: an arbitrary object of user data. + :param auto_envvar_prefix: the prefix to use for automatic environment + variables. If this is `None` then reading + from environment variables is disabled. This + does not affect manually set environment + variables which are always read. + :param default_map: a dictionary (like object) with default values + for parameters. + :param terminal_width: the width of the terminal. The default is + inherit from parent context. If no context + defines the terminal width then auto + detection will be applied. + :param max_content_width: the maximum width for content rendered by + Click (this currently only affects help + pages). This defaults to 80 characters if + not overridden. 
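`iter_params_for_processing` above is what makes eager parameters run before everything else; this is the mechanism behind options such as `--version` that must act before other values are validated. A minimal sketch of that idiom, assuming this vendored package is importable as `click`:

```python
import click


def print_version(ctx, param, value):
    # Do nothing during completion or when the flag is absent.
    if not value or ctx.resilient_parsing:
        return
    click.echo("1.0.0")
    ctx.exit()


@click.command()
@click.option("--version", is_flag=True, expose_value=False,
              is_eager=True, callback=print_version)
def cli():
    click.echo("running")
```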
In other words: even if the + terminal is larger than that, Click will not + format things wider than 80 characters by + default. In addition to that, formatters might + add some safety mapping on the right. + :param resilient_parsing: if this flag is enabled then Click will + parse without any interactivity or callback + invocation. Default values will also be + ignored. This is useful for implementing + things such as completion support. + :param allow_extra_args: if this is set to `True` then extra arguments + at the end will not raise an error and will be + kept on the context. The default is to inherit + from the command. + :param allow_interspersed_args: if this is set to `False` then options + and arguments cannot be mixed. The + default is to inherit from the command. + :param ignore_unknown_options: instructs click to ignore options it does + not know and keeps them for later + processing. + :param help_option_names: optionally a list of strings that define how + the default help parameter is named. The + default is ``['--help']``. + :param token_normalize_func: an optional function that is used to + normalize tokens (options, choices, + etc.). This for instance can be used to + implement case insensitive behavior. + :param color: controls if the terminal supports ANSI colors or not. The + default is autodetection. This is only needed if ANSI + codes are used in texts that Click prints which is by + default not the case. This for instance would affect + help output. + :param show_default: if True, shows defaults for all options. + Even if an option is later created with show_default=False, + this command-level setting overrides it. + """ + + def __init__( + self, + command, + parent=None, + info_name=None, + obj=None, + auto_envvar_prefix=None, + default_map=None, + terminal_width=None, + max_content_width=None, + resilient_parsing=False, + allow_extra_args=None, + allow_interspersed_args=None, + ignore_unknown_options=None, + help_option_names=None, + token_normalize_func=None, + color=None, + show_default=None, + ): + #: the parent context or `None` if none exists. + self.parent = parent + #: the :class:`Command` for this context. + self.command = command + #: the descriptive information name + self.info_name = info_name + #: the parsed parameters except if the value is hidden in which + #: case it's not remembered. + self.params = {} + #: the leftover arguments. + self.args = [] + #: protected arguments. These are arguments that are prepended + #: to `args` when certain parsing scenarios are encountered but + #: must be never propagated to another arguments. This is used + #: to implement nested parsing. + self.protected_args = [] + if obj is None and parent is not None: + obj = parent.obj + #: the user object stored. + self.obj = obj + self._meta = getattr(parent, "meta", {}) + + #: A dictionary (-like object) with defaults for parameters. + if ( + default_map is None + and parent is not None + and parent.default_map is not None + ): + default_map = parent.default_map.get(info_name) + self.default_map = default_map + + #: This flag indicates if a subcommand is going to be executed. A + #: group callback can use this information to figure out if it's + #: being executed directly or because the execution flow passes + #: onwards to a subcommand. By default it's None, but it can be + #: the name of the subcommand to execute. + #: + #: If chaining is enabled this will be set to ``'*'`` in case + #: any commands are executed. 
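Most of the `Context` parameters documented above are rarely constructed by hand; they are usually supplied through a command's `context_settings` dictionary. A minimal sketch, again assuming the package is importable as `click`:

```python
import click

# Make -h work alongside --help via help_option_names.
CONTEXT_SETTINGS = dict(help_option_names=["-h", "--help"])


@click.command(context_settings=CONTEXT_SETTINGS)
def cli():
    """A trivial command used only to show context_settings."""
    click.echo("Hello!")


if __name__ == "__main__":
    cli()
```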
It is however not possible to + #: figure out which ones. If you require this knowledge you + #: should use a :func:`resultcallback`. + self.invoked_subcommand = None + + if terminal_width is None and parent is not None: + terminal_width = parent.terminal_width + #: The width of the terminal (None is autodetection). + self.terminal_width = terminal_width + + if max_content_width is None and parent is not None: + max_content_width = parent.max_content_width + #: The maximum width of formatted content (None implies a sensible + #: default which is 80 for most things). + self.max_content_width = max_content_width + + if allow_extra_args is None: + allow_extra_args = command.allow_extra_args + #: Indicates if the context allows extra args or if it should + #: fail on parsing. + #: + #: .. versionadded:: 3.0 + self.allow_extra_args = allow_extra_args + + if allow_interspersed_args is None: + allow_interspersed_args = command.allow_interspersed_args + #: Indicates if the context allows mixing of arguments and + #: options or not. + #: + #: .. versionadded:: 3.0 + self.allow_interspersed_args = allow_interspersed_args + + if ignore_unknown_options is None: + ignore_unknown_options = command.ignore_unknown_options + #: Instructs click to ignore options that a command does not + #: understand and will store it on the context for later + #: processing. This is primarily useful for situations where you + #: want to call into external programs. Generally this pattern is + #: strongly discouraged because it's not possibly to losslessly + #: forward all arguments. + #: + #: .. versionadded:: 4.0 + self.ignore_unknown_options = ignore_unknown_options + + if help_option_names is None: + if parent is not None: + help_option_names = parent.help_option_names + else: + help_option_names = ["--help"] + + #: The names for the help options. + self.help_option_names = help_option_names + + if token_normalize_func is None and parent is not None: + token_normalize_func = parent.token_normalize_func + + #: An optional normalization function for tokens. This is + #: options, choices, commands etc. + self.token_normalize_func = token_normalize_func + + #: Indicates if resilient parsing is enabled. In that case Click + #: will do its best to not cause any failures and default values + #: will be ignored. Useful for completion. + self.resilient_parsing = resilient_parsing + + # If there is no envvar prefix yet, but the parent has one and + # the command on this level has a name, we can expand the envvar + # prefix automatically. + if auto_envvar_prefix is None: + if ( + parent is not None + and parent.auto_envvar_prefix is not None + and self.info_name is not None + ): + auto_envvar_prefix = "{}_{}".format( + parent.auto_envvar_prefix, self.info_name.upper() + ) + else: + auto_envvar_prefix = auto_envvar_prefix.upper() + if auto_envvar_prefix is not None: + auto_envvar_prefix = auto_envvar_prefix.replace("-", "_") + self.auto_envvar_prefix = auto_envvar_prefix + + if color is None and parent is not None: + color = parent.color + + #: Controls if styling output is wanted or not. 
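The `auto_envvar_prefix` expansion implemented above means every option can transparently pick up a value from a prefixed environment variable. A short sketch; the `GREET` prefix is illustrative:

```python
import click


@click.command()
@click.option("--username")
def greet(username):
    click.echo("Hello %s!" % username)


if __name__ == "__main__":
    # With this prefix, --username also reads GREET_USERNAME from the
    # environment when the flag is not given on the command line.
    greet(auto_envvar_prefix="GREET")
```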
+ self.color = color + + self.show_default = show_default + + self._close_callbacks = [] + self._depth = 0 + + def __enter__(self): + self._depth += 1 + push_context(self) + return self + + def __exit__(self, exc_type, exc_value, tb): + self._depth -= 1 + if self._depth == 0: + self.close() + pop_context() + + @contextmanager + def scope(self, cleanup=True): + """This helper method can be used with the context object to promote + it to the current thread local (see :func:`get_current_context`). + The default behavior of this is to invoke the cleanup functions which + can be disabled by setting `cleanup` to `False`. The cleanup + functions are typically used for things such as closing file handles. + + If the cleanup is intended the context object can also be directly + used as a context manager. + + Example usage:: + + with ctx.scope(): + assert get_current_context() is ctx + + This is equivalent:: + + with ctx: + assert get_current_context() is ctx + + .. versionadded:: 5.0 + + :param cleanup: controls if the cleanup functions should be run or + not. The default is to run these functions. In + some situations the context only wants to be + temporarily pushed in which case this can be disabled. + Nested pushes automatically defer the cleanup. + """ + if not cleanup: + self._depth += 1 + try: + with self as rv: + yield rv + finally: + if not cleanup: + self._depth -= 1 + + @property + def meta(self): + """This is a dictionary which is shared with all the contexts + that are nested. It exists so that click utilities can store some + state here if they need to. It is however the responsibility of + that code to manage this dictionary well. + + The keys are supposed to be unique dotted strings. For instance + module paths are a good choice for it. What is stored in there is + irrelevant for the operation of click. However what is important is + that code that places data here adheres to the general semantics of + the system. + + Example usage:: + + LANG_KEY = f'{__name__}.lang' + + def set_language(value): + ctx = get_current_context() + ctx.meta[LANG_KEY] = value + + def get_language(): + return get_current_context().meta.get(LANG_KEY, 'en_US') + + .. versionadded:: 5.0 + """ + return self._meta + + def make_formatter(self): + """Creates the formatter for the help and usage output.""" + return HelpFormatter( + width=self.terminal_width, max_width=self.max_content_width + ) + + def call_on_close(self, f): + """This decorator remembers a function as callback that should be + executed when the context tears down. This is most useful to bind + resource handling to the script execution. For instance, file objects + opened by the :class:`File` type will register their close callbacks + here. + + :param f: the function to execute on teardown. + """ + self._close_callbacks.append(f) + return f + + def close(self): + """Invokes all close callbacks.""" + for cb in self._close_callbacks: + cb() + self._close_callbacks = [] + + @property + def command_path(self): + """The computed command path. This is used for the ``usage`` + information on the help page. It's automatically created by + combining the info names of the chain of contexts to the root. 
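The `invoked_subcommand` attribute set in `__init__` above is typically consumed from a group callback to tell a direct invocation apart from dispatch to a subcommand. A minimal sketch:

```python
import click


@click.group(invoke_without_command=True)
@click.pass_context
def cli(ctx):
    if ctx.invoked_subcommand is None:
        click.echo("Invoked without a subcommand")
    else:
        click.echo("About to invoke %s" % ctx.invoked_subcommand)


@cli.command()
def sync():
    click.echo("Syncing")
```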
+ """ + rv = "" + if self.info_name is not None: + rv = self.info_name + if self.parent is not None: + rv = "{} {}".format(self.parent.command_path, rv) + return rv.lstrip() + + def find_root(self): + """Finds the outermost context.""" + node = self + while node.parent is not None: + node = node.parent + return node + + def find_object(self, object_type): + """Finds the closest object of a given type.""" + node = self + while node is not None: + if isinstance(node.obj, object_type): + return node.obj + node = node.parent + + def ensure_object(self, object_type): + """Like :meth:`find_object` but sets the innermost object to a + new instance of `object_type` if it does not exist. + """ + rv = self.find_object(object_type) + if rv is None: + self.obj = rv = object_type() + return rv + + def lookup_default(self, name): + """Looks up the default for a parameter name. This by default + looks into the :attr:`default_map` if available. + """ + if self.default_map is not None: + rv = self.default_map.get(name) + if callable(rv): + rv = rv() + return rv + + def fail(self, message): + """Aborts the execution of the program with a specific error + message. + + :param message: the error message to fail with. + """ + raise UsageError(message, self) + + def abort(self): + """Aborts the script.""" + raise Abort() + + def exit(self, code=0): + """Exits the application with a given exit code.""" + raise Exit(code) + + def get_usage(self): + """Helper method to get formatted usage string for the current + context and command. + """ + return self.command.get_usage(self) + + def get_help(self): + """Helper method to get formatted help page for the current + context and command. + """ + return self.command.get_help(self) + + def invoke(*args, **kwargs): # noqa: B902 + """Invokes a command callback in exactly the way it expects. There + are two ways to invoke this method: + + 1. the first argument can be a callback and all other arguments and + keyword arguments are forwarded directly to the function. + 2. the first argument is a click command object. In that case all + arguments are forwarded as well but proper click parameters + (options and click arguments) must be keyword arguments and Click + will fill in defaults. + + Note that before Click 3.2 keyword arguments were not properly filled + in against the intention of this code and no context was created. For + more information about this change and why it was done in a bugfix + release see :ref:`upgrade-to-3.2`. + """ + self, callback = args[:2] + ctx = self + + # It's also possible to invoke another command which might or + # might not have a callback. In that case we also fill + # in defaults and make a new context for this command. + if isinstance(callback, Command): + other_cmd = callback + callback = other_cmd.callback + ctx = Context(other_cmd, info_name=other_cmd.name, parent=self) + if callback is None: + raise TypeError( + "The given command does not have a callback that can be invoked." + ) + + for param in other_cmd.params: + if param.name not in kwargs and param.expose_value: + kwargs[param.name] = param.get_default(ctx) + + args = args[2:] + with augment_usage_errors(self): + with ctx: + return callback(*args, **kwargs) + + def forward(*args, **kwargs): # noqa: B902 + """Similar to :meth:`invoke` but fills in default keyword + arguments from the current context if the other command expects + it. This cannot invoke callbacks directly, only other commands. 
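The difference between `invoke` and `forward` described above is easiest to see side by side: `forward` re-uses the current context's parameter values, while `invoke` takes explicit keyword arguments. A sketch:

```python
import click


@click.command()
@click.option("--count", default=1)
def test(count):
    click.echo("Count: %d" % count)


@click.command()
@click.option("--count", default=1)
@click.pass_context
def dist(ctx, count):
    ctx.forward(test)           # re-uses dist's own --count value
    ctx.invoke(test, count=42)  # passes parameters explicitly
```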
+ """ + self, cmd = args[:2] + + # It's also possible to invoke another command which might or + # might not have a callback. + if not isinstance(cmd, Command): + raise TypeError("Callback is not a command.") + + for param in self.params: + if param not in kwargs: + kwargs[param] = self.params[param] + + return self.invoke(cmd, **kwargs) + + +class BaseCommand(object): + """The base command implements the minimal API contract of commands. + Most code will never use this as it does not implement a lot of useful + functionality but it can act as the direct subclass of alternative + parsing methods that do not depend on the Click parser. + + For instance, this can be used to bridge Click and other systems like + argparse or docopt. + + Because base commands do not implement a lot of the API that other + parts of Click take for granted, they are not supported for all + operations. For instance, they cannot be used with the decorators + usually and they have no built-in callback system. + + .. versionchanged:: 2.0 + Added the `context_settings` parameter. + + :param name: the name of the command to use unless a group overrides it. + :param context_settings: an optional dictionary with defaults that are + passed to the context object. + """ + + #: the default for the :attr:`Context.allow_extra_args` flag. + allow_extra_args = False + #: the default for the :attr:`Context.allow_interspersed_args` flag. + allow_interspersed_args = True + #: the default for the :attr:`Context.ignore_unknown_options` flag. + ignore_unknown_options = False + + def __init__(self, name, context_settings=None): + #: the name the command thinks it has. Upon registering a command + #: on a :class:`Group` the group will default the command name + #: with this information. You should instead use the + #: :class:`Context`\'s :attr:`~Context.info_name` attribute. + self.name = name + if context_settings is None: + context_settings = {} + #: an optional dictionary with defaults passed to the context. + self.context_settings = context_settings + + def __repr__(self): + return "<{} {}>".format(self.__class__.__name__, self.name) + + def get_usage(self, ctx): + raise NotImplementedError("Base commands cannot get usage") + + def get_help(self, ctx): + raise NotImplementedError("Base commands cannot get help") + + def make_context(self, info_name, args, parent=None, **extra): + """This function when given an info name and arguments will kick + off the parsing and create a new :class:`Context`. It does not + invoke the actual command callback though. + + :param info_name: the info name for this invokation. Generally this + is the most descriptive name for the script or + command. For the toplevel script it's usually + the name of the script, for commands below it it's + the name of the script. + :param args: the arguments to parse as list of strings. + :param parent: the parent context if available. + :param extra: extra keyword arguments forwarded to the context + constructor. + """ + for key, value in iteritems(self.context_settings): + if key not in extra: + extra[key] = value + ctx = Context(self, info_name=info_name, parent=parent, **extra) + with ctx.scope(cleanup=False): + self.parse_args(ctx, args) + return ctx + + def parse_args(self, ctx, args): + """Given a context and a list of arguments this creates the parser + and parses the arguments, then modifies the context as necessary. + This is automatically invoked by :meth:`make_context`. 
+ """ + raise NotImplementedError("Base commands do not know how to parse arguments.") + + def invoke(self, ctx): + """Given a context, this invokes the command. The default + implementation is raising a not implemented error. + """ + raise NotImplementedError("Base commands are not invokable by default") + + def main( + self, + args=None, + prog_name=None, + complete_var=None, + standalone_mode=True, + **extra + ): + """This is the way to invoke a script with all the bells and + whistles as a command line application. This will always terminate + the application after a call. If this is not wanted, ``SystemExit`` + needs to be caught. + + This method is also available by directly calling the instance of + a :class:`Command`. + + .. versionadded:: 3.0 + Added the `standalone_mode` flag to control the standalone mode. + + :param args: the arguments that should be used for parsing. If not + provided, ``sys.argv[1:]`` is used. + :param prog_name: the program name that should be used. By default + the program name is constructed by taking the file + name from ``sys.argv[0]``. + :param complete_var: the environment variable that controls the + bash completion support. The default is + ``"__COMPLETE"`` with prog_name in + uppercase. + :param standalone_mode: the default behavior is to invoke the script + in standalone mode. Click will then + handle exceptions and convert them into + error messages and the function will never + return but shut down the interpreter. If + this is set to `False` they will be + propagated to the caller and the return + value of this function is the return value + of :meth:`invoke`. + :param extra: extra keyword arguments are forwarded to the context + constructor. See :class:`Context` for more information. + """ + # If we are in Python 3, we will verify that the environment is + # sane at this point or reject further execution to avoid a + # broken script. + if not PY2: + _verify_python3_env() + else: + _check_for_unicode_literals() + + if args is None: + args = get_os_args() + else: + args = list(args) + + if prog_name is None: + prog_name = make_str( + os.path.basename(sys.argv[0] if sys.argv else __file__) + ) + + # Hook for the Bash completion. This only activates if the Bash + # completion is actually enabled, otherwise this is quite a fast + # noop. + _bashcomplete(self, prog_name, complete_var) + + try: + try: + with self.make_context(prog_name, args, **extra) as ctx: + rv = self.invoke(ctx) + if not standalone_mode: + return rv + # it's not safe to `ctx.exit(rv)` here! 
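The `standalone_mode` flag documented above is the usual hook for testing or embedding: with it disabled, `main` returns the callback's return value instead of terminating the interpreter. A sketch:

```python
import click


@click.command()
def cli():
    click.echo("ran")
    return 7


# Exceptions now propagate to the caller, and the callback's return
# value is handed back rather than being swallowed by sys.exit().
rv = cli.main([], standalone_mode=False)
assert rv == 7
```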
+ # note that `rv` may actually contain data like "1" which + # has obvious effects + # more subtle case: `rv=[None, None]` can come out of + # chained commands which all returned `None` -- so it's not + # even always obvious that `rv` indicates success/failure + # by its truthiness/falsiness + ctx.exit() + except (EOFError, KeyboardInterrupt): + echo(file=sys.stderr) + raise Abort() + except ClickException as e: + if not standalone_mode: + raise + e.show() + sys.exit(e.exit_code) + except IOError as e: + if e.errno == errno.EPIPE: + sys.stdout = PacifyFlushWrapper(sys.stdout) + sys.stderr = PacifyFlushWrapper(sys.stderr) + sys.exit(1) + else: + raise + except Exit as e: + if standalone_mode: + sys.exit(e.exit_code) + else: + # in non-standalone mode, return the exit code + # note that this is only reached if `self.invoke` above raises + # an Exit explicitly -- thus bypassing the check there which + # would return its result + # the results of non-standalone execution may therefore be + # somewhat ambiguous: if there are codepaths which lead to + # `ctx.exit(1)` and to `return 1`, the caller won't be able to + # tell the difference between the two + return e.exit_code + except Abort: + if not standalone_mode: + raise + echo("Aborted!", file=sys.stderr) + sys.exit(1) + + def __call__(self, *args, **kwargs): + """Alias for :meth:`main`.""" + return self.main(*args, **kwargs) + + +class Command(BaseCommand): + """Commands are the basic building block of command line interfaces in + Click. A basic command handles command line parsing and might dispatch + more parsing to commands nested below it. + + .. versionchanged:: 2.0 + Added the `context_settings` parameter. + .. versionchanged:: 7.1 + Added the `no_args_is_help` parameter. + + :param name: the name of the command to use unless a group overrides it. + :param context_settings: an optional dictionary with defaults that are + passed to the context object. + :param callback: the callback to invoke. This is optional. + :param params: the parameters to register with this command. This can + be either :class:`Option` or :class:`Argument` objects. + :param help: the help string to use for this command. + :param epilog: like the help string but it's printed at the end of the + help page after everything else. + :param short_help: the short help to use for this command. This is + shown on the command listing of the parent command. + :param add_help_option: by default each command registers a ``--help`` + option. This can be disabled by this parameter. + :param no_args_is_help: this controls what happens if no arguments are + provided. This option is disabled by default. + If enabled this will add ``--help`` as argument + if no arguments are passed + :param hidden: hide this command from help outputs. + + :param deprecated: issues a message indicating that + the command is deprecated. + """ + + def __init__( + self, + name, + context_settings=None, + callback=None, + params=None, + help=None, + epilog=None, + short_help=None, + options_metavar="[OPTIONS]", + add_help_option=True, + no_args_is_help=False, + hidden=False, + deprecated=False, + ): + BaseCommand.__init__(self, name, context_settings) + #: the callback to execute when the command fires. This might be + #: `None` in which case nothing happens. + self.callback = callback + #: the list of parameters for this command in the order they + #: should show up in the help page and execute. Eager parameters + #: will automatically be handled before non eager ones. 
+ self.params = params or [] + # if a form feed (page break) is found in the help text, truncate help + # text to the content preceding the first form feed + if help and "\f" in help: + help = help.split("\f", 1)[0] + self.help = help + self.epilog = epilog + self.options_metavar = options_metavar + self.short_help = short_help + self.add_help_option = add_help_option + self.no_args_is_help = no_args_is_help + self.hidden = hidden + self.deprecated = deprecated + + def get_usage(self, ctx): + """Formats the usage line into a string and returns it. + + Calls :meth:`format_usage` internally. + """ + formatter = ctx.make_formatter() + self.format_usage(ctx, formatter) + return formatter.getvalue().rstrip("\n") + + def get_params(self, ctx): + rv = self.params + help_option = self.get_help_option(ctx) + if help_option is not None: + rv = rv + [help_option] + return rv + + def format_usage(self, ctx, formatter): + """Writes the usage line into the formatter. + + This is a low-level method called by :meth:`get_usage`. + """ + pieces = self.collect_usage_pieces(ctx) + formatter.write_usage(ctx.command_path, " ".join(pieces)) + + def collect_usage_pieces(self, ctx): + """Returns all the pieces that go into the usage line and returns + it as a list of strings. + """ + rv = [self.options_metavar] + for param in self.get_params(ctx): + rv.extend(param.get_usage_pieces(ctx)) + return rv + + def get_help_option_names(self, ctx): + """Returns the names for the help option.""" + all_names = set(ctx.help_option_names) + for param in self.params: + all_names.difference_update(param.opts) + all_names.difference_update(param.secondary_opts) + return all_names + + def get_help_option(self, ctx): + """Returns the help option object.""" + help_options = self.get_help_option_names(ctx) + if not help_options or not self.add_help_option: + return + + def show_help(ctx, param, value): + if value and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + return Option( + help_options, + is_flag=True, + is_eager=True, + expose_value=False, + callback=show_help, + help="Show this message and exit.", + ) + + def make_parser(self, ctx): + """Creates the underlying option parser for this command.""" + parser = OptionParser(ctx) + for param in self.get_params(ctx): + param.add_to_parser(parser, ctx) + return parser + + def get_help(self, ctx): + """Formats the help into a string and returns it. + + Calls :meth:`format_help` internally. + """ + formatter = ctx.make_formatter() + self.format_help(ctx, formatter) + return formatter.getvalue().rstrip("\n") + + def get_short_help_str(self, limit=45): + """Gets short help for the command or makes it by shortening the + long help string. + """ + return ( + self.short_help + or self.help + and make_default_short_help(self.help, limit) + or "" + ) + + def format_help(self, ctx, formatter): + """Writes the help into the formatter if it exists. + + This is a low-level method called by :meth:`get_help`. 
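The form-feed handling in `Command.__init__` above (help text truncated at the first `\f`) is commonly used to keep long docstrings out of `--help` output. A minimal sketch:

```python
import click


@click.command()
def cli():
    """Summary line shown on the --help page.
    \f
    Everything after the form feed stays in __doc__ (useful for API
    documentation tools) but is stripped from the rendered help.
    """
    click.echo("done")
```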
+ + This calls the following methods: + + - :meth:`format_usage` + - :meth:`format_help_text` + - :meth:`format_options` + - :meth:`format_epilog` + """ + self.format_usage(ctx, formatter) + self.format_help_text(ctx, formatter) + self.format_options(ctx, formatter) + self.format_epilog(ctx, formatter) + + def format_help_text(self, ctx, formatter): + """Writes the help text to the formatter if it exists.""" + if self.help: + formatter.write_paragraph() + with formatter.indentation(): + help_text = self.help + if self.deprecated: + help_text += DEPRECATED_HELP_NOTICE + formatter.write_text(help_text) + elif self.deprecated: + formatter.write_paragraph() + with formatter.indentation(): + formatter.write_text(DEPRECATED_HELP_NOTICE) + + def format_options(self, ctx, formatter): + """Writes all the options into the formatter if they exist.""" + opts = [] + for param in self.get_params(ctx): + rv = param.get_help_record(ctx) + if rv is not None: + opts.append(rv) + + if opts: + with formatter.section("Options"): + formatter.write_dl(opts) + + def format_epilog(self, ctx, formatter): + """Writes the epilog into the formatter if it exists.""" + if self.epilog: + formatter.write_paragraph() + with formatter.indentation(): + formatter.write_text(self.epilog) + + def parse_args(self, ctx, args): + if not args and self.no_args_is_help and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + parser = self.make_parser(ctx) + opts, args, param_order = parser.parse_args(args=args) + + for param in iter_params_for_processing(param_order, self.get_params(ctx)): + value, args = param.handle_parse_result(ctx, opts, args) + + if args and not ctx.allow_extra_args and not ctx.resilient_parsing: + ctx.fail( + "Got unexpected extra argument{} ({})".format( + "s" if len(args) != 1 else "", " ".join(map(make_str, args)) + ) + ) + + ctx.args = args + return args + + def invoke(self, ctx): + """Given a context, this invokes the attached callback (if it exists) + in the right way. + """ + _maybe_show_deprecated_notice(self) + if self.callback is not None: + return ctx.invoke(self.callback, **ctx.params) + + +class MultiCommand(Command): + """A multi command is the basic implementation of a command that + dispatches to subcommands. The most common version is the + :class:`Group`. + + :param invoke_without_command: this controls how the multi command itself + is invoked. By default it's only invoked + if a subcommand is provided. + :param no_args_is_help: this controls what happens if no arguments are + provided. This option is enabled by default if + `invoke_without_command` is disabled or disabled + if it's enabled. If enabled this will add + ``--help`` as argument if no arguments are + passed. + :param subcommand_metavar: the string that is used in the documentation + to indicate the subcommand place. + :param chain: if this is set to `True` chaining of multiple subcommands + is enabled. This restricts the form of commands in that + they cannot have optional arguments but it allows + multiple commands to be chained together. + :param result_callback: the result callback to attach to this multi + command. 
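Chain mode, as described in the `MultiCommand` docstring above, lets several subcommands run in one invocation. A minimal sketch of a chained group:

```python
import click


@click.group(chain=True)
def cli():
    pass


@cli.command()
def sdist():
    click.echo("sdist called")


@cli.command()
def bdist():
    click.echo("bdist called")


# e.g. "setup.py sdist bdist" runs both subcommands in order.
```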
+ """ + + allow_extra_args = True + allow_interspersed_args = False + + def __init__( + self, + name=None, + invoke_without_command=False, + no_args_is_help=None, + subcommand_metavar=None, + chain=False, + result_callback=None, + **attrs + ): + Command.__init__(self, name, **attrs) + if no_args_is_help is None: + no_args_is_help = not invoke_without_command + self.no_args_is_help = no_args_is_help + self.invoke_without_command = invoke_without_command + if subcommand_metavar is None: + if chain: + subcommand_metavar = SUBCOMMANDS_METAVAR + else: + subcommand_metavar = SUBCOMMAND_METAVAR + self.subcommand_metavar = subcommand_metavar + self.chain = chain + #: The result callback that is stored. This can be set or + #: overridden with the :func:`resultcallback` decorator. + self.result_callback = result_callback + + if self.chain: + for param in self.params: + if isinstance(param, Argument) and not param.required: + raise RuntimeError( + "Multi commands in chain mode cannot have" + " optional arguments." + ) + + def collect_usage_pieces(self, ctx): + rv = Command.collect_usage_pieces(self, ctx) + rv.append(self.subcommand_metavar) + return rv + + def format_options(self, ctx, formatter): + Command.format_options(self, ctx, formatter) + self.format_commands(ctx, formatter) + + def resultcallback(self, replace=False): + """Adds a result callback to the chain command. By default if a + result callback is already registered this will chain them but + this can be disabled with the `replace` parameter. The result + callback is invoked with the return value of the subcommand + (or the list of return values from all subcommands if chaining + is enabled) as well as the parameters as they would be passed + to the main callback. + + Example:: + + @click.group() + @click.option('-i', '--input', default=23) + def cli(input): + return 42 + + @cli.resultcallback() + def process_result(result, input): + return result + input + + .. versionadded:: 3.0 + + :param replace: if set to `True` an already existing result + callback will be removed. + """ + + def decorator(f): + old_callback = self.result_callback + if old_callback is None or replace: + self.result_callback = f + return f + + def function(__value, *args, **kwargs): + return f(old_callback(__value, *args, **kwargs), *args, **kwargs) + + self.result_callback = rv = update_wrapper(function, f) + return rv + + return decorator + + def format_commands(self, ctx, formatter): + """Extra format methods for multi methods that adds all the commands + after the options. + """ + commands = [] + for subcommand in self.list_commands(ctx): + cmd = self.get_command(ctx, subcommand) + # What is this, the tool lied about a command. 
Ignore it + if cmd is None: + continue + if cmd.hidden: + continue + + commands.append((subcommand, cmd)) + + # allow for 3 times the default spacing + if len(commands): + limit = formatter.width - 6 - max(len(cmd[0]) for cmd in commands) + + rows = [] + for subcommand, cmd in commands: + help = cmd.get_short_help_str(limit) + rows.append((subcommand, help)) + + if rows: + with formatter.section("Commands"): + formatter.write_dl(rows) + + def parse_args(self, ctx, args): + if not args and self.no_args_is_help and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + rest = Command.parse_args(self, ctx, args) + if self.chain: + ctx.protected_args = rest + ctx.args = [] + elif rest: + ctx.protected_args, ctx.args = rest[:1], rest[1:] + + return ctx.args + + def invoke(self, ctx): + def _process_result(value): + if self.result_callback is not None: + value = ctx.invoke(self.result_callback, value, **ctx.params) + return value + + if not ctx.protected_args: + # If we are invoked without command the chain flag controls + # how this happens. If we are not in chain mode, the return + # value here is the return value of the command. + # If however we are in chain mode, the return value is the + # return value of the result processor invoked with an empty + # list (which means that no subcommand actually was executed). + if self.invoke_without_command: + if not self.chain: + return Command.invoke(self, ctx) + with ctx: + Command.invoke(self, ctx) + return _process_result([]) + ctx.fail("Missing command.") + + # Fetch args back out + args = ctx.protected_args + ctx.args + ctx.args = [] + ctx.protected_args = [] + + # If we're not in chain mode, we only allow the invocation of a + # single command but we also inform the current context about the + # name of the command to invoke. + if not self.chain: + # Make sure the context is entered so we do not clean up + # resources until the result processor has worked. + with ctx: + cmd_name, cmd, args = self.resolve_command(ctx, args) + ctx.invoked_subcommand = cmd_name + Command.invoke(self, ctx) + sub_ctx = cmd.make_context(cmd_name, args, parent=ctx) + with sub_ctx: + return _process_result(sub_ctx.command.invoke(sub_ctx)) + + # In chain mode we create the contexts step by step, but after the + # base command has been invoked. Because at that point we do not + # know the subcommands yet, the invoked subcommand attribute is + # set to ``*`` to inform the command that subcommands are executed + # but nothing else. + with ctx: + ctx.invoked_subcommand = "*" if args else None + Command.invoke(self, ctx) + + # Otherwise we make every single context and invoke them in a + # chain. In that case the return value to the result processor + # is the list of all invoked subcommand's results. + contexts = [] + while args: + cmd_name, cmd, args = self.resolve_command(ctx, args) + sub_ctx = cmd.make_context( + cmd_name, + args, + parent=ctx, + allow_extra_args=True, + allow_interspersed_args=False, + ) + contexts.append(sub_ctx) + args, sub_ctx.args = sub_ctx.args, [] + + rv = [] + for sub_ctx in contexts: + with sub_ctx: + rv.append(sub_ctx.command.invoke(sub_ctx)) + return _process_result(rv) + + def resolve_command(self, ctx, args): + cmd_name = make_str(args[0]) + original_cmd_name = cmd_name + + # Get the command + cmd = self.get_command(ctx, cmd_name) + + # If we can't find the command but there is a normalization + # function available, we try with that one. 
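The `token_normalize_func` lookup used by `resolve_command` (continued below) is how case-insensitive commands are usually implemented. A sketch:

```python
import click

# Lower-case every token so SYNC, Sync and sync resolve identically.
CONTEXT_SETTINGS = dict(token_normalize_func=lambda x: x.lower())


@click.group(context_settings=CONTEXT_SETTINGS)
def cli():
    pass


@cli.command()
def sync():
    click.echo("sync")
```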
+ if cmd is None and ctx.token_normalize_func is not None: + cmd_name = ctx.token_normalize_func(cmd_name) + cmd = self.get_command(ctx, cmd_name) + + # If we don't find the command we want to show an error message + # to the user that it was not provided. However, there is + # something else we should do: if the first argument looks like + # an option we want to kick off parsing again for arguments to + # resolve things like --help which now should go to the main + # place. + if cmd is None and not ctx.resilient_parsing: + if split_opt(cmd_name)[0]: + self.parse_args(ctx, ctx.args) + ctx.fail("No such command '{}'.".format(original_cmd_name)) + + return cmd_name, cmd, args[1:] + + def get_command(self, ctx, cmd_name): + """Given a context and a command name, this returns a + :class:`Command` object if it exists or returns `None`. + """ + raise NotImplementedError() + + def list_commands(self, ctx): + """Returns a list of subcommand names in the order they should + appear. + """ + return [] + + +class Group(MultiCommand): + """A group allows a command to have subcommands attached. This is the + most common way to implement nesting in Click. + + :param commands: a dictionary of commands. + """ + + def __init__(self, name=None, commands=None, **attrs): + MultiCommand.__init__(self, name, **attrs) + #: the registered subcommands by their exported names. + self.commands = commands or {} + + def add_command(self, cmd, name=None): + """Registers another :class:`Command` with this group. If the name + is not provided, the name of the command is used. + """ + name = name or cmd.name + if name is None: + raise TypeError("Command has no name.") + _check_multicommand(self, name, cmd, register=True) + self.commands[name] = cmd + + def command(self, *args, **kwargs): + """A shortcut decorator for declaring and attaching a command to + the group. This takes the same arguments as :func:`command` but + immediately registers the created command with this instance by + calling into :meth:`add_command`. + """ + from .decorators import command + + def decorator(f): + cmd = command(*args, **kwargs)(f) + self.add_command(cmd) + return cmd + + return decorator + + def group(self, *args, **kwargs): + """A shortcut decorator for declaring and attaching a group to + the group. This takes the same arguments as :func:`group` but + immediately registers the created command with this instance by + calling into :meth:`add_command`. + """ + from .decorators import group + + def decorator(f): + cmd = group(*args, **kwargs)(f) + self.add_command(cmd) + return cmd + + return decorator + + def get_command(self, ctx, cmd_name): + return self.commands.get(cmd_name) + + def list_commands(self, ctx): + return sorted(self.commands) + + +class CommandCollection(MultiCommand): + """A command collection is a multi command that merges multiple multi + commands together into one. This is a straightforward implementation + that accepts a list of different multi commands as sources and + provides all the commands for each of them. + """ + + def __init__(self, name=None, sources=None, **attrs): + MultiCommand.__init__(self, name, **attrs) + #: The list of registered multi commands. 
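Overriding the `get_command`/`list_commands` pair stubbed out above is the standard way to build a lazily loaded, plugin-style CLI. A sketch under the assumption that each file in a hypothetical `commands/` folder defines a `cli` command object:

```python
import os

import click

PLUGIN_FOLDER = os.path.join(os.path.dirname(__file__), "commands")


class PluginCLI(click.MultiCommand):
    def list_commands(self, ctx):
        return sorted(
            f[:-3] for f in os.listdir(PLUGIN_FOLDER) if f.endswith(".py")
        )

    def get_command(self, ctx, name):
        ns = {}
        path = os.path.join(PLUGIN_FOLDER, "%s.py" % name)
        with open(path) as f:
            code = compile(f.read(), path, "exec")
            eval(code, ns, ns)
        return ns.get("cli")


cli = PluginCLI(help="Commands are loaded from plugin files.")
```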
+ self.sources = sources or [] + + def add_source(self, multi_cmd): + """Adds a new multi command to the chain dispatcher.""" + self.sources.append(multi_cmd) + + def get_command(self, ctx, cmd_name): + for source in self.sources: + rv = source.get_command(ctx, cmd_name) + if rv is not None: + if self.chain: + _check_multicommand(self, cmd_name, rv) + return rv + + def list_commands(self, ctx): + rv = set() + for source in self.sources: + rv.update(source.list_commands(ctx)) + return sorted(rv) + + +class Parameter(object): + r"""A parameter to a command comes in two versions: they are either + :class:`Option`\s or :class:`Argument`\s. Other subclasses are currently + not supported by design as some of the internals for parsing are + intentionally not finalized. + + Some settings are supported by both options and arguments. + + :param param_decls: the parameter declarations for this option or + argument. This is a list of flags or argument + names. + :param type: the type that should be used. Either a :class:`ParamType` + or a Python type. The later is converted into the former + automatically if supported. + :param required: controls if this is optional or not. + :param default: the default value if omitted. This can also be a callable, + in which case it's invoked when the default is needed + without any arguments. + :param callback: a callback that should be executed after the parameter + was matched. This is called as ``fn(ctx, param, + value)`` and needs to return the value. + :param nargs: the number of arguments to match. If not ``1`` the return + value is a tuple instead of single value. The default for + nargs is ``1`` (except if the type is a tuple, then it's + the arity of the tuple). + :param metavar: how the value is represented in the help page. + :param expose_value: if this is `True` then the value is passed onwards + to the command callback and stored on the context, + otherwise it's skipped. + :param is_eager: eager values are processed before non eager ones. This + should not be set for arguments or it will inverse the + order of processing. + :param envvar: a string or list of strings that are environment variables + that should be checked. + + .. versionchanged:: 7.1 + Empty environment variables are ignored rather than taking the + empty string value. This makes it possible for scripts to clear + variables if they can't unset them. + + .. versionchanged:: 2.0 + Changed signature for parameter callback to also be passed the + parameter. The old callback format will still work, but it will + raise a warning to give you a chance to migrate the code easier. + """ + param_type_name = "parameter" + + def __init__( + self, + param_decls=None, + type=None, + required=False, + default=None, + callback=None, + nargs=None, + metavar=None, + expose_value=True, + is_eager=False, + envvar=None, + autocompletion=None, + ): + self.name, self.opts, self.secondary_opts = self._parse_decls( + param_decls or (), expose_value + ) + + self.type = convert_type(type, default) + + # Default nargs to what the type tells us if we have that + # information available. 
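`CommandCollection`, completed above, merges the commands of several groups into one front end. A minimal sketch:

```python
import click


@click.group()
def tooling():
    pass


@tooling.command()
def build():
    click.echo("building")


@click.group()
def housekeeping():
    pass


@housekeeping.command()
def clean():
    click.echo("cleaning")


# One CLI that exposes both "build" and "clean".
cli = click.CommandCollection(sources=[tooling, housekeeping])
```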
+ if nargs is None: + if self.type.is_composite: + nargs = self.type.arity + else: + nargs = 1 + + self.required = required + self.callback = callback + self.nargs = nargs + self.multiple = False + self.expose_value = expose_value + self.default = default + self.is_eager = is_eager + self.metavar = metavar + self.envvar = envvar + self.autocompletion = autocompletion + + def __repr__(self): + return "<{} {}>".format(self.__class__.__name__, self.name) + + @property + def human_readable_name(self): + """Returns the human readable name of this parameter. This is the + same as the name for options, but the metavar for arguments. + """ + return self.name + + def make_metavar(self): + if self.metavar is not None: + return self.metavar + metavar = self.type.get_metavar(self) + if metavar is None: + metavar = self.type.name.upper() + if self.nargs != 1: + metavar += "..." + return metavar + + def get_default(self, ctx): + """Given a context variable this calculates the default value.""" + # Otherwise go with the regular default. + if callable(self.default): + rv = self.default() + else: + rv = self.default + return self.type_cast_value(ctx, rv) + + def add_to_parser(self, parser, ctx): + pass + + def consume_value(self, ctx, opts): + value = opts.get(self.name) + if value is None: + value = self.value_from_envvar(ctx) + if value is None: + value = ctx.lookup_default(self.name) + return value + + def type_cast_value(self, ctx, value): + """Given a value this runs it properly through the type system. + This automatically handles things like `nargs` and `multiple` as + well as composite types. + """ + if self.type.is_composite: + if self.nargs <= 1: + raise TypeError( + "Attempted to invoke composite type but nargs has" + " been set to {}. This is not supported; nargs" + " needs to be set to a fixed value > 1.".format(self.nargs) + ) + if self.multiple: + return tuple(self.type(x or (), self, ctx) for x in value or ()) + return self.type(value or (), self, ctx) + + def _convert(value, level): + if level == 0: + return self.type(value, self, ctx) + return tuple(_convert(x, level - 1) for x in value or ()) + + return _convert(value, (self.nargs != 1) + bool(self.multiple)) + + def process_value(self, ctx, value): + """Given a value and context this runs the logic to convert the + value as necessary. + """ + # If the value we were given is None we do nothing. This way + # code that calls this can easily figure out if something was + # not provided. Otherwise it would be converted into an empty + # tuple for multiple invocations which is inconvenient. 
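The `nargs` handling set up in `Parameter.__init__` above determines whether a value arrives as a single object or a tuple, and `multiple` layers repetition on top of that. A sketch:

```python
import click


@click.command()
@click.option("--point", nargs=2, type=float)  # value arrives as a tuple
@click.option("--tag", multiple=True)          # option may be repeated
def cli(point, tag):
    click.echo("point=%r tags=%r" % (point, tag))


# $ cli --point 1.0 2.0 --tag a --tag b
# -> point=(1.0, 2.0) tags=('a', 'b')
```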
+ if value is not None: + return self.type_cast_value(ctx, value) + + def value_is_missing(self, value): + if value is None: + return True + if (self.nargs != 1 or self.multiple) and value == (): + return True + return False + + def full_process_value(self, ctx, value): + value = self.process_value(ctx, value) + + if value is None and not ctx.resilient_parsing: + value = self.get_default(ctx) + + if self.required and self.value_is_missing(value): + raise MissingParameter(ctx=ctx, param=self) + + return value + + def resolve_envvar_value(self, ctx): + if self.envvar is None: + return + if isinstance(self.envvar, (tuple, list)): + for envvar in self.envvar: + rv = os.environ.get(envvar) + if rv is not None: + return rv + else: + rv = os.environ.get(self.envvar) + + if rv != "": + return rv + + def value_from_envvar(self, ctx): + rv = self.resolve_envvar_value(ctx) + if rv is not None and self.nargs != 1: + rv = self.type.split_envvar_value(rv) + return rv + + def handle_parse_result(self, ctx, opts, args): + with augment_usage_errors(ctx, param=self): + value = self.consume_value(ctx, opts) + try: + value = self.full_process_value(ctx, value) + except Exception: + if not ctx.resilient_parsing: + raise + value = None + if self.callback is not None: + try: + value = invoke_param_callback(self.callback, ctx, self, value) + except Exception: + if not ctx.resilient_parsing: + raise + + if self.expose_value: + ctx.params[self.name] = value + return value, args + + def get_help_record(self, ctx): + pass + + def get_usage_pieces(self, ctx): + return [] + + def get_error_hint(self, ctx): + """Get a stringified version of the param for use in error messages to + indicate which param caused the error. + """ + hint_list = self.opts or [self.human_readable_name] + return " / ".join(repr(x) for x in hint_list) + + +class Option(Parameter): + """Options are usually optional values on the command line and + have some extra features that arguments don't have. + + All other parameters are passed onwards to the parameter constructor. + + :param show_default: controls if the default value should be shown on the + help page. Normally, defaults are not shown. If this + value is a string, it shows the string instead of the + value. This is particularly useful for dynamic options. + :param show_envvar: controls if an environment variable should be shown on + the help page. Normally, environment variables + are not shown. + :param prompt: if set to `True` or a non empty string then the user will be + prompted for input. If set to `True` the prompt will be the + option name capitalized. + :param confirmation_prompt: if set then the value will need to be confirmed + if it was prompted for. + :param hide_input: if this is `True` then the input on the prompt will be + hidden from the user. This is useful for password + input. + :param is_flag: forces this option to act as a flag. The default is + auto detection. + :param flag_value: which value should be used for this flag if it's + enabled. This is set to a boolean automatically if + the option string contains a slash to mark two options. + :param multiple: if this is set to `True` then the argument is accepted + multiple times and recorded. This is similar to ``nargs`` + in how it works but supports arbitrary number of + arguments. + :param count: this flag makes an option increment an integer. + :param allow_from_autoenv: if this is enabled then the value of this + parameter will be pulled from an environment + variable in case a prefix is defined on the + context. 
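`resolve_envvar_value` above also shows the 7.1 behaviour change in action: an empty environment variable is treated as unset. Typical per-option usage, with an illustrative variable name:

```python
import click


@click.command()
@click.option("--username", envvar="APP_USERNAME", default="guest")
def cli(username):
    click.echo("Hello %s!" % username)


# $ APP_USERNAME=alice cli   -> Hello alice!
# $ APP_USERNAME= cli        -> Hello guest!  (empty value is ignored)
```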
+ :param help: the help string. + :param hidden: hide this option from help outputs. + """ + + param_type_name = "option" + + def __init__( + self, + param_decls=None, + show_default=False, + prompt=False, + confirmation_prompt=False, + hide_input=False, + is_flag=None, + flag_value=None, + multiple=False, + count=False, + allow_from_autoenv=True, + type=None, + help=None, + hidden=False, + show_choices=True, + show_envvar=False, + **attrs + ): + default_is_missing = attrs.get("default", _missing) is _missing + Parameter.__init__(self, param_decls, type=type, **attrs) + + if prompt is True: + prompt_text = self.name.replace("_", " ").capitalize() + elif prompt is False: + prompt_text = None + else: + prompt_text = prompt + self.prompt = prompt_text + self.confirmation_prompt = confirmation_prompt + self.hide_input = hide_input + self.hidden = hidden + + # Flags + if is_flag is None: + if flag_value is not None: + is_flag = True + else: + is_flag = bool(self.secondary_opts) + if is_flag and default_is_missing: + self.default = False + if flag_value is None: + flag_value = not self.default + self.is_flag = is_flag + self.flag_value = flag_value + if self.is_flag and isinstance(self.flag_value, bool) and type in [None, bool]: + self.type = BOOL + self.is_bool_flag = True + else: + self.is_bool_flag = False + + # Counting + self.count = count + if count: + if type is None: + self.type = IntRange(min=0) + if default_is_missing: + self.default = 0 + + self.multiple = multiple + self.allow_from_autoenv = allow_from_autoenv + self.help = help + self.show_default = show_default + self.show_choices = show_choices + self.show_envvar = show_envvar + + # Sanity check for stuff we don't support + if __debug__: + if self.nargs < 0: + raise TypeError("Options cannot have nargs < 0") + if self.prompt and self.is_flag and not self.is_bool_flag: + raise TypeError("Cannot prompt for flags that are not bools.") + if not self.is_bool_flag and self.secondary_opts: + raise TypeError("Got secondary option for non boolean flag.") + if self.is_bool_flag and self.hide_input and self.prompt is not None: + raise TypeError("Hidden input does not work with boolean flag prompts.") + if self.count: + if self.multiple: + raise TypeError( + "Options cannot be multiple and count at the same time." + ) + elif self.is_flag: + raise TypeError( + "Options cannot be count and flags at the same time." + ) + + def _parse_decls(self, decls, expose_value): + opts = [] + secondary_opts = [] + name = None + possible_names = [] + + for decl in decls: + if isidentifier(decl): + if name is not None: + raise TypeError("Name defined twice") + name = decl + else: + split_char = ";" if decl[:1] == "/" else "/" + if split_char in decl: + first, second = decl.split(split_char, 1) + first = first.rstrip() + if first: + possible_names.append(split_opt(first)) + opts.append(first) + second = second.lstrip() + if second: + secondary_opts.append(second.lstrip()) + else: + possible_names.append(split_opt(decl)) + opts.append(decl) + + if name is None and possible_names: + possible_names.sort(key=lambda x: -len(x[0])) # group long options first + name = possible_names[0][1].replace("-", "_").lower() + if not isidentifier(name): + name = None + + if name is None: + if not expose_value: + return None, opts, secondary_opts + raise TypeError("Could not determine name for option") + + if not opts and not secondary_opts: + raise TypeError( + "No options defined but a name was passed ({}). 
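The flag and counting machinery in `Option.__init__` above covers the two idioms below: paired on/off flags declared with a slash, and a repeatable counting flag. A sketch:

```python
import click


@click.command()
@click.option("--shout/--no-shout", default=False)
@click.option("-v", "--verbose", count=True)
def cli(shout, verbose):
    msg = "verbosity level %d" % verbose
    click.echo(msg.upper() if shout else msg)


# $ cli --shout -vvv   -> VERBOSITY LEVEL 3
```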
Did you" + " mean to declare an argument instead of an option?".format(name) + ) + + return name, opts, secondary_opts + + def add_to_parser(self, parser, ctx): + kwargs = { + "dest": self.name, + "nargs": self.nargs, + "obj": self, + } + + if self.multiple: + action = "append" + elif self.count: + action = "count" + else: + action = "store" + + if self.is_flag: + kwargs.pop("nargs", None) + action_const = "{}_const".format(action) + if self.is_bool_flag and self.secondary_opts: + parser.add_option(self.opts, action=action_const, const=True, **kwargs) + parser.add_option( + self.secondary_opts, action=action_const, const=False, **kwargs + ) + else: + parser.add_option( + self.opts, action=action_const, const=self.flag_value, **kwargs + ) + else: + kwargs["action"] = action + parser.add_option(self.opts, **kwargs) + + def get_help_record(self, ctx): + if self.hidden: + return + any_prefix_is_slash = [] + + def _write_opts(opts): + rv, any_slashes = join_options(opts) + if any_slashes: + any_prefix_is_slash[:] = [True] + if not self.is_flag and not self.count: + rv += " {}".format(self.make_metavar()) + return rv + + rv = [_write_opts(self.opts)] + if self.secondary_opts: + rv.append(_write_opts(self.secondary_opts)) + + help = self.help or "" + extra = [] + if self.show_envvar: + envvar = self.envvar + if envvar is None: + if self.allow_from_autoenv and ctx.auto_envvar_prefix is not None: + envvar = "{}_{}".format(ctx.auto_envvar_prefix, self.name.upper()) + if envvar is not None: + extra.append( + "env var: {}".format( + ", ".join(str(d) for d in envvar) + if isinstance(envvar, (list, tuple)) + else envvar + ) + ) + if self.default is not None and (self.show_default or ctx.show_default): + if isinstance(self.show_default, string_types): + default_string = "({})".format(self.show_default) + elif isinstance(self.default, (list, tuple)): + default_string = ", ".join(str(d) for d in self.default) + elif inspect.isfunction(self.default): + default_string = "(dynamic)" + else: + default_string = self.default + extra.append("default: {}".format(default_string)) + + if self.required: + extra.append("required") + if extra: + help = "{}[{}]".format( + "{} ".format(help) if help else "", "; ".join(extra) + ) + + return ("; " if any_prefix_is_slash else " / ").join(rv), help + + def get_default(self, ctx): + # If we're a non boolean flag our default is more complex because + # we need to look at all flags in the same group to figure out + # if we're the the default one in which case we return the flag + # value as default. + if self.is_flag and not self.is_bool_flag: + for param in ctx.command.params: + if param.name == self.name and param.default: + return param.flag_value + return None + return Parameter.get_default(self, ctx) + + def prompt_for_value(self, ctx): + """This is an alternative flow that can be activated in the full + value processing if a value does not exist. It will prompt the + user until a valid value exists and then returns the processed + value as result. + """ + # Calculate the default before prompting anything to be stable. + default = self.get_default(ctx) + + # If this is a prompt for a flag we need to handle this + # differently. 
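`prompt_for_value`, continued below, is what drives interactive input; the classic use is a hidden, confirmed password prompt. A sketch:

```python
import click


@click.command()
@click.option("--password", prompt=True, hide_input=True,
              confirmation_prompt=True)
def cli(password):
    click.echo("Password length: %d" % len(password))
```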
+ if self.is_bool_flag: + return confirm(self.prompt, default) + + return prompt( + self.prompt, + default=default, + type=self.type, + hide_input=self.hide_input, + show_choices=self.show_choices, + confirmation_prompt=self.confirmation_prompt, + value_proc=lambda x: self.process_value(ctx, x), + ) + + def resolve_envvar_value(self, ctx): + rv = Parameter.resolve_envvar_value(self, ctx) + if rv is not None: + return rv + if self.allow_from_autoenv and ctx.auto_envvar_prefix is not None: + envvar = "{}_{}".format(ctx.auto_envvar_prefix, self.name.upper()) + return os.environ.get(envvar) + + def value_from_envvar(self, ctx): + rv = self.resolve_envvar_value(ctx) + if rv is None: + return None + value_depth = (self.nargs != 1) + bool(self.multiple) + if value_depth > 0 and rv is not None: + rv = self.type.split_envvar_value(rv) + if self.multiple and self.nargs != 1: + rv = batch(rv, self.nargs) + return rv + + def full_process_value(self, ctx, value): + if value is None and self.prompt is not None and not ctx.resilient_parsing: + return self.prompt_for_value(ctx) + return Parameter.full_process_value(self, ctx, value) + + +class Argument(Parameter): + """Arguments are positional parameters to a command. They generally + provide fewer features than options but can have infinite ``nargs`` + and are required by default. + + All parameters are passed onwards to the parameter constructor. + """ + + param_type_name = "argument" + + def __init__(self, param_decls, required=None, **attrs): + if required is None: + if attrs.get("default") is not None: + required = False + else: + required = attrs.get("nargs", 1) > 0 + Parameter.__init__(self, param_decls, required=required, **attrs) + if self.default is not None and self.nargs < 0: + raise TypeError( + "nargs=-1 in combination with a default value is not supported." + ) + + @property + def human_readable_name(self): + if self.metavar is not None: + return self.metavar + return self.name.upper() + + def make_metavar(self): + if self.metavar is not None: + return self.metavar + var = self.type.get_metavar(self) + if not var: + var = self.name.upper() + if not self.required: + var = "[{}]".format(var) + if self.nargs != 1: + var += "..." 
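+        # e.g. an optional variadic argument named "src" renders as
+        # "[SRC]..." in the usage line.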
+ return var + + def _parse_decls(self, decls, expose_value): + if not decls: + if not expose_value: + return None, [], [] + raise TypeError("Could not determine name for argument") + if len(decls) == 1: + name = arg = decls[0] + name = name.replace("-", "_").lower() + else: + raise TypeError( + "Arguments take exactly one parameter declaration, got" + " {}".format(len(decls)) + ) + return name, [arg], [] + + def get_usage_pieces(self, ctx): + return [self.make_metavar()] + + def get_error_hint(self, ctx): + return repr(self.make_metavar()) + + def add_to_parser(self, parser, ctx): + parser.add_argument(dest=self.name, nargs=self.nargs, obj=self) diff --git a/openpype/vendor/python/python_2/click/decorators.py b/openpype/vendor/python/python_2/click/decorators.py new file mode 100644 index 00000000000..c7b5af6cc57 --- /dev/null +++ b/openpype/vendor/python/python_2/click/decorators.py @@ -0,0 +1,333 @@ +import inspect +import sys +from functools import update_wrapper + +from ._compat import iteritems +from ._unicodefun import _check_for_unicode_literals +from .core import Argument +from .core import Command +from .core import Group +from .core import Option +from .globals import get_current_context +from .utils import echo + + +def pass_context(f): + """Marks a callback as wanting to receive the current context + object as first argument. + """ + + def new_func(*args, **kwargs): + return f(get_current_context(), *args, **kwargs) + + return update_wrapper(new_func, f) + + +def pass_obj(f): + """Similar to :func:`pass_context`, but only pass the object on the + context onwards (:attr:`Context.obj`). This is useful if that object + represents the state of a nested system. + """ + + def new_func(*args, **kwargs): + return f(get_current_context().obj, *args, **kwargs) + + return update_wrapper(new_func, f) + + +def make_pass_decorator(object_type, ensure=False): + """Given an object type this creates a decorator that will work + similar to :func:`pass_obj` but instead of passing the object of the + current context, it will find the innermost context of type + :func:`object_type`. + + This generates a decorator that works roughly like this:: + + from functools import update_wrapper + + def decorator(f): + @pass_context + def new_func(ctx, *args, **kwargs): + obj = ctx.find_object(object_type) + return ctx.invoke(f, obj, *args, **kwargs) + return update_wrapper(new_func, f) + return decorator + + :param object_type: the type of the object to pass. + :param ensure: if set to `True`, a new object will be created and + remembered on the context if it's not there yet. 
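+
+    For illustration, a hypothetical ``Repo`` state object could then be
+    injected into subcommands like this::
+
+        pass_repo = make_pass_decorator(Repo, ensure=True)
+
+        @click.command()
+        @pass_repo
+        def status(repo):
+            click.echo(repo.path)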
+ """ + + def decorator(f): + def new_func(*args, **kwargs): + ctx = get_current_context() + if ensure: + obj = ctx.ensure_object(object_type) + else: + obj = ctx.find_object(object_type) + if obj is None: + raise RuntimeError( + "Managed to invoke callback without a context" + " object of type '{}' existing".format(object_type.__name__) + ) + return ctx.invoke(f, obj, *args, **kwargs) + + return update_wrapper(new_func, f) + + return decorator + + +def _make_command(f, name, attrs, cls): + if isinstance(f, Command): + raise TypeError("Attempted to convert a callback into a command twice.") + try: + params = f.__click_params__ + params.reverse() + del f.__click_params__ + except AttributeError: + params = [] + help = attrs.get("help") + if help is None: + help = inspect.getdoc(f) + if isinstance(help, bytes): + help = help.decode("utf-8") + else: + help = inspect.cleandoc(help) + attrs["help"] = help + _check_for_unicode_literals() + return cls( + name=name or f.__name__.lower().replace("_", "-"), + callback=f, + params=params, + **attrs + ) + + +def command(name=None, cls=None, **attrs): + r"""Creates a new :class:`Command` and uses the decorated function as + callback. This will also automatically attach all decorated + :func:`option`\s and :func:`argument`\s as parameters to the command. + + The name of the command defaults to the name of the function with + underscores replaced by dashes. If you want to change that, you can + pass the intended name as the first argument. + + All keyword arguments are forwarded to the underlying command class. + + Once decorated the function turns into a :class:`Command` instance + that can be invoked as a command line utility or be attached to a + command :class:`Group`. + + :param name: the name of the command. This defaults to the function + name with underscores replaced by dashes. + :param cls: the command class to instantiate. This defaults to + :class:`Command`. + """ + if cls is None: + cls = Command + + def decorator(f): + cmd = _make_command(f, name, attrs, cls) + cmd.__doc__ = f.__doc__ + return cmd + + return decorator + + +def group(name=None, **attrs): + """Creates a new :class:`Group` with a function as callback. This + works otherwise the same as :func:`command` just that the `cls` + parameter is set to :class:`Group`. + """ + attrs.setdefault("cls", Group) + return command(name, **attrs) + + +def _param_memo(f, param): + if isinstance(f, Command): + f.params.append(param) + else: + if not hasattr(f, "__click_params__"): + f.__click_params__ = [] + f.__click_params__.append(param) + + +def argument(*param_decls, **attrs): + """Attaches an argument to the command. All positional arguments are + passed as parameter declarations to :class:`Argument`; all keyword + arguments are forwarded unchanged (except ``cls``). + This is equivalent to creating an :class:`Argument` instance manually + and attaching it to the :attr:`Command.params` list. + + :param cls: the argument class to instantiate. This defaults to + :class:`Argument`. + """ + + def decorator(f): + ArgumentClass = attrs.pop("cls", Argument) + _param_memo(f, ArgumentClass(param_decls, **attrs)) + return f + + return decorator + + +def option(*param_decls, **attrs): + """Attaches an option to the command. All positional arguments are + passed as parameter declarations to :class:`Option`; all keyword + arguments are forwarded unchanged (except ``cls``). + This is equivalent to creating an :class:`Option` instance manually + and attaching it to the :attr:`Command.params` list. 
+ + :param cls: the option class to instantiate. This defaults to + :class:`Option`. + """ + + def decorator(f): + # Issue 926, copy attrs, so pre-defined options can re-use the same cls= + option_attrs = attrs.copy() + + if "help" in option_attrs: + option_attrs["help"] = inspect.cleandoc(option_attrs["help"]) + OptionClass = option_attrs.pop("cls", Option) + _param_memo(f, OptionClass(param_decls, **option_attrs)) + return f + + return decorator + + +def confirmation_option(*param_decls, **attrs): + """Shortcut for confirmation prompts that can be ignored by passing + ``--yes`` as parameter. + + This is equivalent to decorating a function with :func:`option` with + the following parameters:: + + def callback(ctx, param, value): + if not value: + ctx.abort() + + @click.command() + @click.option('--yes', is_flag=True, callback=callback, + expose_value=False, prompt='Do you want to continue?') + def dropdb(): + pass + """ + + def decorator(f): + def callback(ctx, param, value): + if not value: + ctx.abort() + + attrs.setdefault("is_flag", True) + attrs.setdefault("callback", callback) + attrs.setdefault("expose_value", False) + attrs.setdefault("prompt", "Do you want to continue?") + attrs.setdefault("help", "Confirm the action without prompting.") + return option(*(param_decls or ("--yes",)), **attrs)(f) + + return decorator + + +def password_option(*param_decls, **attrs): + """Shortcut for password prompts. + + This is equivalent to decorating a function with :func:`option` with + the following parameters:: + + @click.command() + @click.option('--password', prompt=True, confirmation_prompt=True, + hide_input=True) + def changeadmin(password): + pass + """ + + def decorator(f): + attrs.setdefault("prompt", True) + attrs.setdefault("confirmation_prompt", True) + attrs.setdefault("hide_input", True) + return option(*(param_decls or ("--password",)), **attrs)(f) + + return decorator + + +def version_option(version=None, *param_decls, **attrs): + """Adds a ``--version`` option which immediately ends the program + printing out the version number. This is implemented as an eager + option that prints the version and exits the program in the callback. + + :param version: the version number to show. If not provided Click + attempts an auto discovery via setuptools. + :param prog_name: the name of the program (defaults to autodetection) + :param message: custom message to show instead of the default + (``'%(prog)s, version %(version)s'``) + :param others: everything else is forwarded to :func:`option`. 
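+
+    A minimal sketch with an explicit version string::
+
+        @click.command()
+        @click.version_option(version='1.0.0')
+        def cli():
+            pass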
+ """ + if version is None: + if hasattr(sys, "_getframe"): + module = sys._getframe(1).f_globals.get("__name__") + else: + module = "" + + def decorator(f): + prog_name = attrs.pop("prog_name", None) + message = attrs.pop("message", "%(prog)s, version %(version)s") + + def callback(ctx, param, value): + if not value or ctx.resilient_parsing: + return + prog = prog_name + if prog is None: + prog = ctx.find_root().info_name + ver = version + if ver is None: + try: + import pkg_resources + except ImportError: + pass + else: + for dist in pkg_resources.working_set: + scripts = dist.get_entry_map().get("console_scripts") or {} + for _, entry_point in iteritems(scripts): + if entry_point.module_name == module: + ver = dist.version + break + if ver is None: + raise RuntimeError("Could not determine version") + echo(message % {"prog": prog, "version": ver}, color=ctx.color) + ctx.exit() + + attrs.setdefault("is_flag", True) + attrs.setdefault("expose_value", False) + attrs.setdefault("is_eager", True) + attrs.setdefault("help", "Show the version and exit.") + attrs["callback"] = callback + return option(*(param_decls or ("--version",)), **attrs)(f) + + return decorator + + +def help_option(*param_decls, **attrs): + """Adds a ``--help`` option which immediately ends the program + printing out the help page. This is usually unnecessary to add as + this is added by default to all commands unless suppressed. + + Like :func:`version_option`, this is implemented as eager option that + prints in the callback and exits. + + All arguments are forwarded to :func:`option`. + """ + + def decorator(f): + def callback(ctx, param, value): + if value and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + attrs.setdefault("is_flag", True) + attrs.setdefault("expose_value", False) + attrs.setdefault("help", "Show this message and exit.") + attrs.setdefault("is_eager", True) + attrs["callback"] = callback + return option(*(param_decls or ("--help",)), **attrs)(f) + + return decorator diff --git a/openpype/vendor/python/python_2/click/exceptions.py b/openpype/vendor/python/python_2/click/exceptions.py new file mode 100644 index 00000000000..592ee38f0de --- /dev/null +++ b/openpype/vendor/python/python_2/click/exceptions.py @@ -0,0 +1,253 @@ +from ._compat import filename_to_ui +from ._compat import get_text_stderr +from ._compat import PY2 +from .utils import echo + + +def _join_param_hints(param_hint): + if isinstance(param_hint, (tuple, list)): + return " / ".join(repr(x) for x in param_hint) + return param_hint + + +class ClickException(Exception): + """An exception that Click can handle and show to the user.""" + + #: The exit code for this exception + exit_code = 1 + + def __init__(self, message): + ctor_msg = message + if PY2: + if ctor_msg is not None: + ctor_msg = ctor_msg.encode("utf-8") + Exception.__init__(self, ctor_msg) + self.message = message + + def format_message(self): + return self.message + + def __str__(self): + return self.message + + if PY2: + __unicode__ = __str__ + + def __str__(self): + return self.message.encode("utf-8") + + def show(self, file=None): + if file is None: + file = get_text_stderr() + echo("Error: {}".format(self.format_message()), file=file) + + +class UsageError(ClickException): + """An internal exception that signals a usage error. This typically + aborts any further handling. + + :param message: the error message to display. + :param ctx: optionally the context that caused this error. 
Click will + fill in the context automatically in some situations. + """ + + exit_code = 2 + + def __init__(self, message, ctx=None): + ClickException.__init__(self, message) + self.ctx = ctx + self.cmd = self.ctx.command if self.ctx else None + + def show(self, file=None): + if file is None: + file = get_text_stderr() + color = None + hint = "" + if self.cmd is not None and self.cmd.get_help_option(self.ctx) is not None: + hint = "Try '{} {}' for help.\n".format( + self.ctx.command_path, self.ctx.help_option_names[0] + ) + if self.ctx is not None: + color = self.ctx.color + echo("{}\n{}".format(self.ctx.get_usage(), hint), file=file, color=color) + echo("Error: {}".format(self.format_message()), file=file, color=color) + + +class BadParameter(UsageError): + """An exception that formats out a standardized error message for a + bad parameter. This is useful when thrown from a callback or type as + Click will attach contextual information to it (for instance, which + parameter it is). + + .. versionadded:: 2.0 + + :param param: the parameter object that caused this error. This can + be left out, and Click will attach this info itself + if possible. + :param param_hint: a string that shows up as parameter name. This + can be used as alternative to `param` in cases + where custom validation should happen. If it is + a string it's used as such, if it's a list then + each item is quoted and separated. + """ + + def __init__(self, message, ctx=None, param=None, param_hint=None): + UsageError.__init__(self, message, ctx) + self.param = param + self.param_hint = param_hint + + def format_message(self): + if self.param_hint is not None: + param_hint = self.param_hint + elif self.param is not None: + param_hint = self.param.get_error_hint(self.ctx) + else: + return "Invalid value: {}".format(self.message) + param_hint = _join_param_hints(param_hint) + + return "Invalid value for {}: {}".format(param_hint, self.message) + + +class MissingParameter(BadParameter): + """Raised if click required an option or argument but it was not + provided when invoking the script. + + .. versionadded:: 4.0 + + :param param_type: a string that indicates the type of the parameter. + The default is to inherit the parameter type from + the given `param`. Valid values are ``'parameter'``, + ``'option'`` or ``'argument'``. + """ + + def __init__( + self, message=None, ctx=None, param=None, param_hint=None, param_type=None + ): + BadParameter.__init__(self, message, ctx, param, param_hint) + self.param_type = param_type + + def format_message(self): + if self.param_hint is not None: + param_hint = self.param_hint + elif self.param is not None: + param_hint = self.param.get_error_hint(self.ctx) + else: + param_hint = None + param_hint = _join_param_hints(param_hint) + + param_type = self.param_type + if param_type is None and self.param is not None: + param_type = self.param.param_type_name + + msg = self.message + if self.param is not None: + msg_extra = self.param.type.get_missing_message(self.param) + if msg_extra: + if msg: + msg += ". {}".format(msg_extra) + else: + msg = msg_extra + + return "Missing {}{}{}{}".format( + param_type, + " {}".format(param_hint) if param_hint else "", + ". 
" if msg else ".", + msg or "", + ) + + def __str__(self): + if self.message is None: + param_name = self.param.name if self.param else None + return "missing parameter: {}".format(param_name) + else: + return self.message + + if PY2: + __unicode__ = __str__ + + def __str__(self): + return self.__unicode__().encode("utf-8") + + +class NoSuchOption(UsageError): + """Raised if click attempted to handle an option that does not + exist. + + .. versionadded:: 4.0 + """ + + def __init__(self, option_name, message=None, possibilities=None, ctx=None): + if message is None: + message = "no such option: {}".format(option_name) + UsageError.__init__(self, message, ctx) + self.option_name = option_name + self.possibilities = possibilities + + def format_message(self): + bits = [self.message] + if self.possibilities: + if len(self.possibilities) == 1: + bits.append("Did you mean {}?".format(self.possibilities[0])) + else: + possibilities = sorted(self.possibilities) + bits.append("(Possible options: {})".format(", ".join(possibilities))) + return " ".join(bits) + + +class BadOptionUsage(UsageError): + """Raised if an option is generally supplied but the use of the option + was incorrect. This is for instance raised if the number of arguments + for an option is not correct. + + .. versionadded:: 4.0 + + :param option_name: the name of the option being used incorrectly. + """ + + def __init__(self, option_name, message, ctx=None): + UsageError.__init__(self, message, ctx) + self.option_name = option_name + + +class BadArgumentUsage(UsageError): + """Raised if an argument is generally supplied but the use of the argument + was incorrect. This is for instance raised if the number of values + for an argument is not correct. + + .. versionadded:: 6.0 + """ + + def __init__(self, message, ctx=None): + UsageError.__init__(self, message, ctx) + + +class FileError(ClickException): + """Raised if a file cannot be opened.""" + + def __init__(self, filename, hint=None): + ui_filename = filename_to_ui(filename) + if hint is None: + hint = "unknown error" + ClickException.__init__(self, hint) + self.ui_filename = ui_filename + self.filename = filename + + def format_message(self): + return "Could not open file {}: {}".format(self.ui_filename, self.message) + + +class Abort(RuntimeError): + """An internal signalling exception that signals Click to abort.""" + + +class Exit(RuntimeError): + """An exception that indicates that the application should exit with some + status code. + + :param code: the status code to exit with. + """ + + __slots__ = ("exit_code",) + + def __init__(self, code=0): + self.exit_code = code diff --git a/openpype/vendor/python/python_2/click/formatting.py b/openpype/vendor/python/python_2/click/formatting.py new file mode 100644 index 00000000000..319c7f6163e --- /dev/null +++ b/openpype/vendor/python/python_2/click/formatting.py @@ -0,0 +1,283 @@ +from contextlib import contextmanager + +from ._compat import term_len +from .parser import split_opt +from .termui import get_terminal_size + +# Can force a width. 
This is used by the test system +FORCED_WIDTH = None + + +def measure_table(rows): + widths = {} + for row in rows: + for idx, col in enumerate(row): + widths[idx] = max(widths.get(idx, 0), term_len(col)) + return tuple(y for x, y in sorted(widths.items())) + + +def iter_rows(rows, col_count): + for row in rows: + row = tuple(row) + yield row + ("",) * (col_count - len(row)) + + +def wrap_text( + text, width=78, initial_indent="", subsequent_indent="", preserve_paragraphs=False +): + """A helper function that intelligently wraps text. By default, it + assumes that it operates on a single paragraph of text but if the + `preserve_paragraphs` parameter is provided it will intelligently + handle paragraphs (defined by two empty lines). + + If paragraphs are handled, a paragraph can be prefixed with an empty + line containing the ``\\b`` character (``\\x08``) to indicate that + no rewrapping should happen in that block. + + :param text: the text that should be rewrapped. + :param width: the maximum width for the text. + :param initial_indent: the initial indent that should be placed on the + first line as a string. + :param subsequent_indent: the indent string that should be placed on + each consecutive line. + :param preserve_paragraphs: if this flag is set then the wrapping will + intelligently handle paragraphs. + """ + from ._textwrap import TextWrapper + + text = text.expandtabs() + wrapper = TextWrapper( + width, + initial_indent=initial_indent, + subsequent_indent=subsequent_indent, + replace_whitespace=False, + ) + if not preserve_paragraphs: + return wrapper.fill(text) + + p = [] + buf = [] + indent = None + + def _flush_par(): + if not buf: + return + if buf[0].strip() == "\b": + p.append((indent or 0, True, "\n".join(buf[1:]))) + else: + p.append((indent or 0, False, " ".join(buf))) + del buf[:] + + for line in text.splitlines(): + if not line: + _flush_par() + indent = None + else: + if indent is None: + orig_len = term_len(line) + line = line.lstrip() + indent = orig_len - term_len(line) + buf.append(line) + _flush_par() + + rv = [] + for indent, raw, text in p: + with wrapper.extra_indent(" " * indent): + if raw: + rv.append(wrapper.indent_only(text)) + else: + rv.append(wrapper.fill(text)) + + return "\n\n".join(rv) + + +class HelpFormatter(object): + """This class helps with formatting text-based help pages. It's + usually just needed for very special internal cases, but it's also + exposed so that developers can write their own fancy outputs. + + At present, it always writes into memory. + + :param indent_increment: the additional increment for each level. + :param width: the width for the text. This defaults to the terminal + width clamped to a maximum of 78. + """ + + def __init__(self, indent_increment=2, width=None, max_width=None): + self.indent_increment = indent_increment + if max_width is None: + max_width = 80 + if width is None: + width = FORCED_WIDTH + if width is None: + width = max(min(get_terminal_size()[0], max_width) - 2, 50) + self.width = width + self.current_indent = 0 + self.buffer = [] + + def write(self, string): + """Writes a unicode string into the internal buffer.""" + self.buffer.append(string) + + def indent(self): + """Increases the indentation.""" + self.current_indent += self.indent_increment + + def dedent(self): + """Decreases the indentation.""" + self.current_indent -= self.indent_increment + + def write_usage(self, prog, args="", prefix="Usage: "): + """Writes a usage line into the buffer. + + :param prog: the program name. 
+ :param args: whitespace separated list of arguments. + :param prefix: the prefix for the first line. + """ + usage_prefix = "{:>{w}}{} ".format(prefix, prog, w=self.current_indent) + text_width = self.width - self.current_indent + + if text_width >= (term_len(usage_prefix) + 20): + # The arguments will fit to the right of the prefix. + indent = " " * term_len(usage_prefix) + self.write( + wrap_text( + args, + text_width, + initial_indent=usage_prefix, + subsequent_indent=indent, + ) + ) + else: + # The prefix is too long, put the arguments on the next line. + self.write(usage_prefix) + self.write("\n") + indent = " " * (max(self.current_indent, term_len(prefix)) + 4) + self.write( + wrap_text( + args, text_width, initial_indent=indent, subsequent_indent=indent + ) + ) + + self.write("\n") + + def write_heading(self, heading): + """Writes a heading into the buffer.""" + self.write("{:>{w}}{}:\n".format("", heading, w=self.current_indent)) + + def write_paragraph(self): + """Writes a paragraph into the buffer.""" + if self.buffer: + self.write("\n") + + def write_text(self, text): + """Writes re-indented text into the buffer. This rewraps and + preserves paragraphs. + """ + text_width = max(self.width - self.current_indent, 11) + indent = " " * self.current_indent + self.write( + wrap_text( + text, + text_width, + initial_indent=indent, + subsequent_indent=indent, + preserve_paragraphs=True, + ) + ) + self.write("\n") + + def write_dl(self, rows, col_max=30, col_spacing=2): + """Writes a definition list into the buffer. This is how options + and commands are usually formatted. + + :param rows: a list of two item tuples for the terms and values. + :param col_max: the maximum width of the first column. + :param col_spacing: the number of spaces between the first and + second column. + """ + rows = list(rows) + widths = measure_table(rows) + if len(widths) != 2: + raise TypeError("Expected two columns for definition list") + + first_col = min(widths[0], col_max) + col_spacing + + for first, second in iter_rows(rows, len(widths)): + self.write("{:>{w}}{}".format("", first, w=self.current_indent)) + if not second: + self.write("\n") + continue + if term_len(first) <= first_col - col_spacing: + self.write(" " * (first_col - term_len(first))) + else: + self.write("\n") + self.write(" " * (first_col + self.current_indent)) + + text_width = max(self.width - first_col - 2, 10) + wrapped_text = wrap_text(second, text_width, preserve_paragraphs=True) + lines = wrapped_text.splitlines() + + if lines: + self.write("{}\n".format(lines[0])) + + for line in lines[1:]: + self.write( + "{:>{w}}{}\n".format( + "", line, w=first_col + self.current_indent + ) + ) + + if len(lines) > 1: + # separate long help from next option + self.write("\n") + else: + self.write("\n") + + @contextmanager + def section(self, name): + """Helpful context manager that writes a paragraph, a heading, + and the indents. + + :param name: the section name that is written as heading. 
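+
+        For example, command help output builds its "Options" block
+        roughly like this::
+
+            with formatter.section('Options'):
+                formatter.write_dl([('--help', 'Show this message and exit.')])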
+ """ + self.write_paragraph() + self.write_heading(name) + self.indent() + try: + yield + finally: + self.dedent() + + @contextmanager + def indentation(self): + """A context manager that increases the indentation.""" + self.indent() + try: + yield + finally: + self.dedent() + + def getvalue(self): + """Returns the buffer contents.""" + return "".join(self.buffer) + + +def join_options(options): + """Given a list of option strings this joins them in the most appropriate + way and returns them in the form ``(formatted_string, + any_prefix_is_slash)`` where the second item in the tuple is a flag that + indicates if any of the option prefixes was a slash. + """ + rv = [] + any_prefix_is_slash = False + for opt in options: + prefix = split_opt(opt)[0] + if prefix == "/": + any_prefix_is_slash = True + rv.append((len(prefix), opt)) + + rv.sort(key=lambda x: x[0]) + + rv = ", ".join(x[1] for x in rv) + return rv, any_prefix_is_slash diff --git a/openpype/vendor/python/python_2/click/globals.py b/openpype/vendor/python/python_2/click/globals.py new file mode 100644 index 00000000000..1649f9a0bfb --- /dev/null +++ b/openpype/vendor/python/python_2/click/globals.py @@ -0,0 +1,47 @@ +from threading import local + +_local = local() + + +def get_current_context(silent=False): + """Returns the current click context. This can be used as a way to + access the current context object from anywhere. This is a more implicit + alternative to the :func:`pass_context` decorator. This function is + primarily useful for helpers such as :func:`echo` which might be + interested in changing its behavior based on the current context. + + To push the current context, :meth:`Context.scope` can be used. + + .. versionadded:: 5.0 + + :param silent: if set to `True` the return value is `None` if no context + is available. The default behavior is to raise a + :exc:`RuntimeError`. + """ + try: + return _local.stack[-1] + except (AttributeError, IndexError): + if not silent: + raise RuntimeError("There is no active click context.") + + +def push_context(ctx): + """Pushes a new context to the current stack.""" + _local.__dict__.setdefault("stack", []).append(ctx) + + +def pop_context(): + """Removes the top level from the stack.""" + _local.stack.pop() + + +def resolve_color_default(color=None): + """"Internal helper to get the default value of the color flag. If a + value is passed it's returned unchanged, otherwise it's looked up from + the current context. + """ + if color is not None: + return color + ctx = get_current_context(silent=True) + if ctx is not None: + return ctx.color diff --git a/openpype/vendor/python/python_2/click/parser.py b/openpype/vendor/python/python_2/click/parser.py new file mode 100644 index 00000000000..f43ebfe9fc0 --- /dev/null +++ b/openpype/vendor/python/python_2/click/parser.py @@ -0,0 +1,428 @@ +# -*- coding: utf-8 -*- +""" +This module started out as largely a copy paste from the stdlib's +optparse module with the features removed that we do not need from +optparse because we implement them in Click on a higher level (for +instance type handling, help formatting and a lot more). + +The plan is to remove more and more from here over time. + +The reason this is a different module and not optparse from the stdlib +is that there are differences in 2.x and 3.x about the error messages +generated and optparse in the stdlib uses gettext for no good reason +and might cause us issues. + +Click uses parts of optparse written by Gregory P. Ward and maintained +by the Python Software Foundation. 
This is limited to code in parser.py. + +Copyright 2001-2006 Gregory P. Ward. All rights reserved. +Copyright 2002-2006 Python Software Foundation. All rights reserved. +""" +import re +from collections import deque + +from .exceptions import BadArgumentUsage +from .exceptions import BadOptionUsage +from .exceptions import NoSuchOption +from .exceptions import UsageError + + +def _unpack_args(args, nargs_spec): + """Given an iterable of arguments and an iterable of nargs specifications, + it returns a tuple with all the unpacked arguments at the first index + and all remaining arguments as the second. + + The nargs specification is the number of arguments that should be consumed + or `-1` to indicate that this position should eat up all the remainders. + + Missing items are filled with `None`. + """ + args = deque(args) + nargs_spec = deque(nargs_spec) + rv = [] + spos = None + + def _fetch(c): + try: + if spos is None: + return c.popleft() + else: + return c.pop() + except IndexError: + return None + + while nargs_spec: + nargs = _fetch(nargs_spec) + if nargs == 1: + rv.append(_fetch(args)) + elif nargs > 1: + x = [_fetch(args) for _ in range(nargs)] + # If we're reversed, we're pulling in the arguments in reverse, + # so we need to turn them around. + if spos is not None: + x.reverse() + rv.append(tuple(x)) + elif nargs < 0: + if spos is not None: + raise TypeError("Cannot have two nargs < 0") + spos = len(rv) + rv.append(None) + + # spos is the position of the wildcard (star). If it's not `None`, + # we fill it with the remainder. + if spos is not None: + rv[spos] = tuple(args) + args = [] + rv[spos + 1 :] = reversed(rv[spos + 1 :]) + + return tuple(rv), list(args) + + +def _error_opt_args(nargs, opt): + if nargs == 1: + raise BadOptionUsage(opt, "{} option requires an argument".format(opt)) + raise BadOptionUsage(opt, "{} option requires {} arguments".format(opt, nargs)) + + +def split_opt(opt): + first = opt[:1] + if first.isalnum(): + return "", opt + if opt[1:2] == first: + return opt[:2], opt[2:] + return first, opt[1:] + + +def normalize_opt(opt, ctx): + if ctx is None or ctx.token_normalize_func is None: + return opt + prefix, opt = split_opt(opt) + return prefix + ctx.token_normalize_func(opt) + + +def split_arg_string(string): + """Given an argument string this attempts to split it into small parts.""" + rv = [] + for match in re.finditer( + r"('([^'\\]*(?:\\.[^'\\]*)*)'|\"([^\"\\]*(?:\\.[^\"\\]*)*)\"|\S+)\s*", + string, + re.S, + ): + arg = match.group().strip() + if arg[:1] == arg[-1:] and arg[:1] in "\"'": + arg = arg[1:-1].encode("ascii", "backslashreplace").decode("unicode-escape") + try: + arg = type(string)(arg) + except UnicodeError: + pass + rv.append(arg) + return rv + + +class Option(object): + def __init__(self, opts, dest, action=None, nargs=1, const=None, obj=None): + self._short_opts = [] + self._long_opts = [] + self.prefixes = set() + + for opt in opts: + prefix, value = split_opt(opt) + if not prefix: + raise ValueError("Invalid start character for option ({})".format(opt)) + self.prefixes.add(prefix[0]) + if len(prefix) == 1 and len(value) == 1: + self._short_opts.append(opt) + else: + self._long_opts.append(opt) + self.prefixes.add(prefix) + + if action is None: + action = "store" + + self.dest = dest + self.action = action + self.nargs = nargs + self.const = const + self.obj = obj + + @property + def takes_value(self): + return self.action in ("store", "append") + + def process(self, value, state): + if self.action == "store": + state.opts[self.dest] = 
value + elif self.action == "store_const": + state.opts[self.dest] = self.const + elif self.action == "append": + state.opts.setdefault(self.dest, []).append(value) + elif self.action == "append_const": + state.opts.setdefault(self.dest, []).append(self.const) + elif self.action == "count": + state.opts[self.dest] = state.opts.get(self.dest, 0) + 1 + else: + raise ValueError("unknown action '{}'".format(self.action)) + state.order.append(self.obj) + + +class Argument(object): + def __init__(self, dest, nargs=1, obj=None): + self.dest = dest + self.nargs = nargs + self.obj = obj + + def process(self, value, state): + if self.nargs > 1: + holes = sum(1 for x in value if x is None) + if holes == len(value): + value = None + elif holes != 0: + raise BadArgumentUsage( + "argument {} takes {} values".format(self.dest, self.nargs) + ) + state.opts[self.dest] = value + state.order.append(self.obj) + + +class ParsingState(object): + def __init__(self, rargs): + self.opts = {} + self.largs = [] + self.rargs = rargs + self.order = [] + + +class OptionParser(object): + """The option parser is an internal class that is ultimately used to + parse options and arguments. It's modelled after optparse and brings + a similar but vastly simplified API. It should generally not be used + directly as the high level Click classes wrap it for you. + + It's not nearly as extensible as optparse or argparse as it does not + implement features that are implemented on a higher level (such as + types or defaults). + + :param ctx: optionally the :class:`~click.Context` where this parser + should go with. + """ + + def __init__(self, ctx=None): + #: The :class:`~click.Context` for this parser. This might be + #: `None` for some advanced use cases. + self.ctx = ctx + #: This controls how the parser deals with interspersed arguments. + #: If this is set to `False`, the parser will stop on the first + #: non-option. Click uses this to implement nested subcommands + #: safely. + self.allow_interspersed_args = True + #: This tells the parser how to deal with unknown options. By + #: default it will error out (which is sensible), but there is a + #: second mode where it will ignore it and continue processing + #: after shifting all the unknown options into the resulting args. + self.ignore_unknown_options = False + if ctx is not None: + self.allow_interspersed_args = ctx.allow_interspersed_args + self.ignore_unknown_options = ctx.ignore_unknown_options + self._short_opt = {} + self._long_opt = {} + self._opt_prefixes = {"-", "--"} + self._args = [] + + def add_option(self, opts, dest, action=None, nargs=1, const=None, obj=None): + """Adds a new option named `dest` to the parser. The destination + is not inferred (unlike with optparse) and needs to be explicitly + provided. Action can be any of ``store``, ``store_const``, + ``append``, ``appnd_const`` or ``count``. + + The `obj` can be used to identify the option in the order list + that is returned from the parser. + """ + if obj is None: + obj = dest + opts = [normalize_opt(opt, self.ctx) for opt in opts] + option = Option(opts, dest, action=action, nargs=nargs, const=const, obj=obj) + self._opt_prefixes.update(option.prefixes) + for opt in option._short_opts: + self._short_opt[opt] = option + for opt in option._long_opts: + self._long_opt[opt] = option + + def add_argument(self, dest, nargs=1, obj=None): + """Adds a positional argument named `dest` to the parser. + + The `obj` can be used to identify the option in the order list + that is returned from the parser. 
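+
+        For example, ``parser.add_argument(dest='src', nargs=-1)`` registers
+        a variadic positional that consumes all remaining arguments.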
+ """ + if obj is None: + obj = dest + self._args.append(Argument(dest=dest, nargs=nargs, obj=obj)) + + def parse_args(self, args): + """Parses positional arguments and returns ``(values, args, order)`` + for the parsed options and arguments as well as the leftover + arguments if there are any. The order is a list of objects as they + appear on the command line. If arguments appear multiple times they + will be memorized multiple times as well. + """ + state = ParsingState(args) + try: + self._process_args_for_options(state) + self._process_args_for_args(state) + except UsageError: + if self.ctx is None or not self.ctx.resilient_parsing: + raise + return state.opts, state.largs, state.order + + def _process_args_for_args(self, state): + pargs, args = _unpack_args( + state.largs + state.rargs, [x.nargs for x in self._args] + ) + + for idx, arg in enumerate(self._args): + arg.process(pargs[idx], state) + + state.largs = args + state.rargs = [] + + def _process_args_for_options(self, state): + while state.rargs: + arg = state.rargs.pop(0) + arglen = len(arg) + # Double dashes always handled explicitly regardless of what + # prefixes are valid. + if arg == "--": + return + elif arg[:1] in self._opt_prefixes and arglen > 1: + self._process_opts(arg, state) + elif self.allow_interspersed_args: + state.largs.append(arg) + else: + state.rargs.insert(0, arg) + return + + # Say this is the original argument list: + # [arg0, arg1, ..., arg(i-1), arg(i), arg(i+1), ..., arg(N-1)] + # ^ + # (we are about to process arg(i)). + # + # Then rargs is [arg(i), ..., arg(N-1)] and largs is a *subset* of + # [arg0, ..., arg(i-1)] (any options and their arguments will have + # been removed from largs). + # + # The while loop will usually consume 1 or more arguments per pass. + # If it consumes 1 (eg. arg is an option that takes no arguments), + # then after _process_arg() is done the situation is: + # + # largs = subset of [arg0, ..., arg(i)] + # rargs = [arg(i+1), ..., arg(N-1)] + # + # If allow_interspersed_args is false, largs will always be + # *empty* -- still a subset of [arg0, ..., arg(i-1)], but + # not a very interesting subset! + + def _match_long_opt(self, opt, explicit_value, state): + if opt not in self._long_opt: + possibilities = [word for word in self._long_opt if word.startswith(opt)] + raise NoSuchOption(opt, possibilities=possibilities, ctx=self.ctx) + + option = self._long_opt[opt] + if option.takes_value: + # At this point it's safe to modify rargs by injecting the + # explicit value, because no exception is raised in this + # branch. This means that the inserted value will be fully + # consumed. + if explicit_value is not None: + state.rargs.insert(0, explicit_value) + + nargs = option.nargs + if len(state.rargs) < nargs: + _error_opt_args(nargs, opt) + elif nargs == 1: + value = state.rargs.pop(0) + else: + value = tuple(state.rargs[:nargs]) + del state.rargs[:nargs] + + elif explicit_value is not None: + raise BadOptionUsage(opt, "{} option does not take a value".format(opt)) + + else: + value = None + + option.process(value, state) + + def _match_short_opt(self, arg, state): + stop = False + i = 1 + prefix = arg[0] + unknown_options = [] + + for ch in arg[1:]: + opt = normalize_opt(prefix + ch, self.ctx) + option = self._short_opt.get(opt) + i += 1 + + if not option: + if self.ignore_unknown_options: + unknown_options.append(ch) + continue + raise NoSuchOption(opt, ctx=self.ctx) + if option.takes_value: + # Any characters left in arg? 
Pretend they're the + # next arg, and stop consuming characters of arg. + if i < len(arg): + state.rargs.insert(0, arg[i:]) + stop = True + + nargs = option.nargs + if len(state.rargs) < nargs: + _error_opt_args(nargs, opt) + elif nargs == 1: + value = state.rargs.pop(0) + else: + value = tuple(state.rargs[:nargs]) + del state.rargs[:nargs] + + else: + value = None + + option.process(value, state) + + if stop: + break + + # If we got any unknown options we re-combinate the string of the + # remaining options and re-attach the prefix, then report that + # to the state as new larg. This way there is basic combinatorics + # that can be achieved while still ignoring unknown arguments. + if self.ignore_unknown_options and unknown_options: + state.largs.append("{}{}".format(prefix, "".join(unknown_options))) + + def _process_opts(self, arg, state): + explicit_value = None + # Long option handling happens in two parts. The first part is + # supporting explicitly attached values. In any case, we will try + # to long match the option first. + if "=" in arg: + long_opt, explicit_value = arg.split("=", 1) + else: + long_opt = arg + norm_long_opt = normalize_opt(long_opt, self.ctx) + + # At this point we will match the (assumed) long option through + # the long option matching code. Note that this allows options + # like "-foo" to be matched as long options. + try: + self._match_long_opt(norm_long_opt, explicit_value, state) + except NoSuchOption: + # At this point the long option matching failed, and we need + # to try with short options. However there is a special rule + # which says, that if we have a two character options prefix + # (applies to "--foo" for instance), we do not dispatch to the + # short option code and will instead raise the no option + # error. + if arg[:2] not in self._opt_prefixes: + return self._match_short_opt(arg, state) + if not self.ignore_unknown_options: + raise + state.largs.append(arg) diff --git a/openpype/vendor/python/python_2/click/termui.py b/openpype/vendor/python/python_2/click/termui.py new file mode 100644 index 00000000000..02ef9e9f045 --- /dev/null +++ b/openpype/vendor/python/python_2/click/termui.py @@ -0,0 +1,681 @@ +import inspect +import io +import itertools +import os +import struct +import sys + +from ._compat import DEFAULT_COLUMNS +from ._compat import get_winterm_size +from ._compat import isatty +from ._compat import raw_input +from ._compat import string_types +from ._compat import strip_ansi +from ._compat import text_type +from ._compat import WIN +from .exceptions import Abort +from .exceptions import UsageError +from .globals import resolve_color_default +from .types import Choice +from .types import convert_type +from .types import Path +from .utils import echo +from .utils import LazyFile + +# The prompt functions to use. The doc tools currently override these +# functions to customize how they work. 
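+# A test harness might substitute its own callable here, e.g.:
+#
+#   import click.termui
+#   click.termui.visible_prompt_func = lambda prompt: 'simulated input'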
+visible_prompt_func = raw_input + +_ansi_colors = { + "black": 30, + "red": 31, + "green": 32, + "yellow": 33, + "blue": 34, + "magenta": 35, + "cyan": 36, + "white": 37, + "reset": 39, + "bright_black": 90, + "bright_red": 91, + "bright_green": 92, + "bright_yellow": 93, + "bright_blue": 94, + "bright_magenta": 95, + "bright_cyan": 96, + "bright_white": 97, +} +_ansi_reset_all = "\033[0m" + + +def hidden_prompt_func(prompt): + import getpass + + return getpass.getpass(prompt) + + +def _build_prompt( + text, suffix, show_default=False, default=None, show_choices=True, type=None +): + prompt = text + if type is not None and show_choices and isinstance(type, Choice): + prompt += " ({})".format(", ".join(map(str, type.choices))) + if default is not None and show_default: + prompt = "{} [{}]".format(prompt, _format_default(default)) + return prompt + suffix + + +def _format_default(default): + if isinstance(default, (io.IOBase, LazyFile)) and hasattr(default, "name"): + return default.name + + return default + + +def prompt( + text, + default=None, + hide_input=False, + confirmation_prompt=False, + type=None, + value_proc=None, + prompt_suffix=": ", + show_default=True, + err=False, + show_choices=True, +): + """Prompts a user for input. This is a convenience function that can + be used to prompt a user for input later. + + If the user aborts the input by sending a interrupt signal, this + function will catch it and raise a :exc:`Abort` exception. + + .. versionadded:: 7.0 + Added the show_choices parameter. + + .. versionadded:: 6.0 + Added unicode support for cmd.exe on Windows. + + .. versionadded:: 4.0 + Added the `err` parameter. + + :param text: the text to show for the prompt. + :param default: the default value to use if no input happens. If this + is not given it will prompt until it's aborted. + :param hide_input: if this is set to true then the input value will + be hidden. + :param confirmation_prompt: asks for confirmation for the value. + :param type: the type to use to check the value against. + :param value_proc: if this parameter is provided it's a function that + is invoked instead of the type conversion to + convert a value. + :param prompt_suffix: a suffix that should be added to the prompt. + :param show_default: shows or hides the default value in the prompt. + :param err: if set to true the file defaults to ``stderr`` instead of + ``stdout``, the same as with echo. + :param show_choices: Show or hide choices if the passed type is a Choice. + For example if type is a Choice of either day or week, + show_choices is true and text is "Group by" then the + prompt will be "Group by (day, week): ". + """ + result = None + + def prompt_func(text): + f = hidden_prompt_func if hide_input else visible_prompt_func + try: + # Write the prompt separately so that we get nice + # coloring through colorama on Windows + echo(text, nl=False, err=err) + return f("") + except (KeyboardInterrupt, EOFError): + # getpass doesn't print a newline if the user aborts input with ^C. + # Allegedly this behavior is inherited from getpass(3). 
+ # A doc bug has been filed at https://bugs.python.org/issue24711 + if hide_input: + echo(None, err=err) + raise Abort() + + if value_proc is None: + value_proc = convert_type(type, default) + + prompt = _build_prompt( + text, prompt_suffix, show_default, default, show_choices, type + ) + + while 1: + while 1: + value = prompt_func(prompt) + if value: + break + elif default is not None: + if isinstance(value_proc, Path): + # validate Path default value(exists, dir_okay etc.) + value = default + break + return default + try: + result = value_proc(value) + except UsageError as e: + echo("Error: {}".format(e.message), err=err) # noqa: B306 + continue + if not confirmation_prompt: + return result + while 1: + value2 = prompt_func("Repeat for confirmation: ") + if value2: + break + if value == value2: + return result + echo("Error: the two entered values do not match", err=err) + + +def confirm( + text, default=False, abort=False, prompt_suffix=": ", show_default=True, err=False +): + """Prompts for confirmation (yes/no question). + + If the user aborts the input by sending a interrupt signal this + function will catch it and raise a :exc:`Abort` exception. + + .. versionadded:: 4.0 + Added the `err` parameter. + + :param text: the question to ask. + :param default: the default for the prompt. + :param abort: if this is set to `True` a negative answer aborts the + exception by raising :exc:`Abort`. + :param prompt_suffix: a suffix that should be added to the prompt. + :param show_default: shows or hides the default value in the prompt. + :param err: if set to true the file defaults to ``stderr`` instead of + ``stdout``, the same as with echo. + """ + prompt = _build_prompt( + text, prompt_suffix, show_default, "Y/n" if default else "y/N" + ) + while 1: + try: + # Write the prompt separately so that we get nice + # coloring through colorama on Windows + echo(prompt, nl=False, err=err) + value = visible_prompt_func("").lower().strip() + except (KeyboardInterrupt, EOFError): + raise Abort() + if value in ("y", "yes"): + rv = True + elif value in ("n", "no"): + rv = False + elif value == "": + rv = default + else: + echo("Error: invalid input", err=err) + continue + break + if abort and not rv: + raise Abort() + return rv + + +def get_terminal_size(): + """Returns the current size of the terminal as tuple in the form + ``(width, height)`` in columns and rows. + """ + # If shutil has get_terminal_size() (Python 3.3 and later) use that + if sys.version_info >= (3, 3): + import shutil + + shutil_get_terminal_size = getattr(shutil, "get_terminal_size", None) + if shutil_get_terminal_size: + sz = shutil_get_terminal_size() + return sz.columns, sz.lines + + # We provide a sensible default for get_winterm_size() when being invoked + # inside a subprocess. Without this, it would not provide a useful input. 
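+    # get_winterm_size() returning (0, 0) is treated as "size unknown",
+    # so a conservative 79x24 fallback is used below.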
+ if get_winterm_size is not None: + size = get_winterm_size() + if size == (0, 0): + return (79, 24) + else: + return size + + def ioctl_gwinsz(fd): + try: + import fcntl + import termios + + cr = struct.unpack("hh", fcntl.ioctl(fd, termios.TIOCGWINSZ, "1234")) + except Exception: + return + return cr + + cr = ioctl_gwinsz(0) or ioctl_gwinsz(1) or ioctl_gwinsz(2) + if not cr: + try: + fd = os.open(os.ctermid(), os.O_RDONLY) + try: + cr = ioctl_gwinsz(fd) + finally: + os.close(fd) + except Exception: + pass + if not cr or not cr[0] or not cr[1]: + cr = (os.environ.get("LINES", 25), os.environ.get("COLUMNS", DEFAULT_COLUMNS)) + return int(cr[1]), int(cr[0]) + + +def echo_via_pager(text_or_generator, color=None): + """This function takes a text and shows it via an environment specific + pager on stdout. + + .. versionchanged:: 3.0 + Added the `color` flag. + + :param text_or_generator: the text to page, or alternatively, a + generator emitting the text to page. + :param color: controls if the pager supports ANSI colors or not. The + default is autodetection. + """ + color = resolve_color_default(color) + + if inspect.isgeneratorfunction(text_or_generator): + i = text_or_generator() + elif isinstance(text_or_generator, string_types): + i = [text_or_generator] + else: + i = iter(text_or_generator) + + # convert every element of i to a text type if necessary + text_generator = (el if isinstance(el, string_types) else text_type(el) for el in i) + + from ._termui_impl import pager + + return pager(itertools.chain(text_generator, "\n"), color) + + +def progressbar( + iterable=None, + length=None, + label=None, + show_eta=True, + show_percent=None, + show_pos=False, + item_show_func=None, + fill_char="#", + empty_char="-", + bar_template="%(label)s [%(bar)s] %(info)s", + info_sep=" ", + width=36, + file=None, + color=None, +): + """This function creates an iterable context manager that can be used + to iterate over something while showing a progress bar. It will + either iterate over the `iterable` or `length` items (that are counted + up). While iteration happens, this function will print a rendered + progress bar to the given `file` (defaults to stdout) and will attempt + to calculate remaining time and more. By default, this progress bar + will not be rendered if the file is not a terminal. + + The context manager creates the progress bar. When the context + manager is entered the progress bar is already created. With every + iteration over the progress bar, the iterable passed to the bar is + advanced and the bar is updated. When the context manager exits, + a newline is printed and the progress bar is finalized on screen. + + Note: The progress bar is currently designed for use cases where the + total progress can be expected to take at least several seconds. + Because of this, the ProgressBar class object won't display + progress that is considered too fast, and progress where the time + between steps is less than a second. + + No printing must happen or the progress bar will be unintentionally + destroyed. + + Example usage:: + + with progressbar(items) as bar: + for item in bar: + do_something_with(item) + + Alternatively, if no iterable is specified, one can manually update the + progress bar through the `update()` method instead of directly + iterating over the progress bar. The update method accepts the number + of steps to increment the bar with:: + + with progressbar(length=chunks.total_bytes) as bar: + for chunk in chunks: + process_chunk(chunk) + bar.update(chunks.bytes) + + .. 
versionadded:: 2.0 + + .. versionadded:: 4.0 + Added the `color` parameter. Added a `update` method to the + progressbar object. + + :param iterable: an iterable to iterate over. If not provided the length + is required. + :param length: the number of items to iterate over. By default the + progressbar will attempt to ask the iterator about its + length, which might or might not work. If an iterable is + also provided this parameter can be used to override the + length. If an iterable is not provided the progress bar + will iterate over a range of that length. + :param label: the label to show next to the progress bar. + :param show_eta: enables or disables the estimated time display. This is + automatically disabled if the length cannot be + determined. + :param show_percent: enables or disables the percentage display. The + default is `True` if the iterable has a length or + `False` if not. + :param show_pos: enables or disables the absolute position display. The + default is `False`. + :param item_show_func: a function called with the current item which + can return a string to show the current item + next to the progress bar. Note that the current + item can be `None`! + :param fill_char: the character to use to show the filled part of the + progress bar. + :param empty_char: the character to use to show the non-filled part of + the progress bar. + :param bar_template: the format string to use as template for the bar. + The parameters in it are ``label`` for the label, + ``bar`` for the progress bar and ``info`` for the + info section. + :param info_sep: the separator between multiple info items (eta etc.) + :param width: the width of the progress bar in characters, 0 means full + terminal width + :param file: the file to write to. If this is not a terminal then + only the label is printed. + :param color: controls if the terminal supports ANSI colors or not. The + default is autodetection. This is only needed if ANSI + codes are included anywhere in the progress bar output + which is not the case by default. + """ + from ._termui_impl import ProgressBar + + color = resolve_color_default(color) + return ProgressBar( + iterable=iterable, + length=length, + show_eta=show_eta, + show_percent=show_percent, + show_pos=show_pos, + item_show_func=item_show_func, + fill_char=fill_char, + empty_char=empty_char, + bar_template=bar_template, + info_sep=info_sep, + file=file, + label=label, + width=width, + color=color, + ) + + +def clear(): + """Clears the terminal screen. This will have the effect of clearing + the whole visible space of the terminal and moving the cursor to the + top left. This does not do anything if not connected to a terminal. + + .. versionadded:: 2.0 + """ + if not isatty(sys.stdout): + return + # If we're on Windows and we don't have colorama available, then we + # clear the screen by shelling out. Otherwise we can use an escape + # sequence. + if WIN: + os.system("cls") + else: + sys.stdout.write("\033[2J\033[1;1H") + + +def style( + text, + fg=None, + bg=None, + bold=None, + dim=None, + underline=None, + blink=None, + reverse=None, + reset=True, +): + """Styles a text with ANSI styles and returns the new string. By + default the styling is self contained which means that at the end + of the string a reset code is issued. This can be prevented by + passing ``reset=False``. 
+ + Examples:: + + click.echo(click.style('Hello World!', fg='green')) + click.echo(click.style('ATTENTION!', blink=True)) + click.echo(click.style('Some things', reverse=True, fg='cyan')) + + Supported color names: + + * ``black`` (might be a gray) + * ``red`` + * ``green`` + * ``yellow`` (might be an orange) + * ``blue`` + * ``magenta`` + * ``cyan`` + * ``white`` (might be light gray) + * ``bright_black`` + * ``bright_red`` + * ``bright_green`` + * ``bright_yellow`` + * ``bright_blue`` + * ``bright_magenta`` + * ``bright_cyan`` + * ``bright_white`` + * ``reset`` (reset the color code only) + + .. versionadded:: 2.0 + + .. versionadded:: 7.0 + Added support for bright colors. + + :param text: the string to style with ansi codes. + :param fg: if provided this will become the foreground color. + :param bg: if provided this will become the background color. + :param bold: if provided this will enable or disable bold mode. + :param dim: if provided this will enable or disable dim mode. This is + badly supported. + :param underline: if provided this will enable or disable underline. + :param blink: if provided this will enable or disable blinking. + :param reverse: if provided this will enable or disable inverse + rendering (foreground becomes background and the + other way round). + :param reset: by default a reset-all code is added at the end of the + string which means that styles do not carry over. This + can be disabled to compose styles. + """ + bits = [] + if fg: + try: + bits.append("\033[{}m".format(_ansi_colors[fg])) + except KeyError: + raise TypeError("Unknown color '{}'".format(fg)) + if bg: + try: + bits.append("\033[{}m".format(_ansi_colors[bg] + 10)) + except KeyError: + raise TypeError("Unknown color '{}'".format(bg)) + if bold is not None: + bits.append("\033[{}m".format(1 if bold else 22)) + if dim is not None: + bits.append("\033[{}m".format(2 if dim else 22)) + if underline is not None: + bits.append("\033[{}m".format(4 if underline else 24)) + if blink is not None: + bits.append("\033[{}m".format(5 if blink else 25)) + if reverse is not None: + bits.append("\033[{}m".format(7 if reverse else 27)) + bits.append(text) + if reset: + bits.append(_ansi_reset_all) + return "".join(bits) + + +def unstyle(text): + """Removes ANSI styling information from a string. Usually it's not + necessary to use this function as Click's echo function will + automatically remove styling if necessary. + + .. versionadded:: 2.0 + + :param text: the text to remove style information from. + """ + return strip_ansi(text) + + +def secho(message=None, file=None, nl=True, err=False, color=None, **styles): + """This function combines :func:`echo` and :func:`style` into one + call. As such the following two calls are the same:: + + click.secho('Hello World!', fg='green') + click.echo(click.style('Hello World!', fg='green')) + + All keyword arguments are forwarded to the underlying functions + depending on which one they go with. + + .. versionadded:: 2.0 + """ + if message is not None: + message = style(message, **styles) + return echo(message, file=file, nl=nl, err=err, color=color) + + +def edit( + text=None, editor=None, env=None, require_save=True, extension=".txt", filename=None +): + r"""Edits the given text in the defined editor. If an editor is given + (should be the full path to the executable but the regular operating + system search path is used for finding the executable) it overrides + the detected editor. Optionally, some environment variables can be + used. 
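For quick reference, a minimal sketch of how the styling helpers above compose, assuming the vendored package is importable as plain `click`:

```python
import click

# Self-contained styling: a reset code is appended automatically.
click.echo(click.style("ok", fg="green", bold=True))

# secho() is shorthand for echo(style(...)).
click.secho("warning: cache is stale", fg="yellow", err=True)

# reset=False leaves the style open so segments can be composed,
# and unstyle() strips the ANSI codes again, e.g. for log files.
prefix = click.style("[build] ", fg="cyan", reset=False)
click.echo(prefix + "compiling" + click.style("", reset=True))
assert click.unstyle(click.style("plain", fg="red")) == "plain"
```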
If the editor is closed without changes, `None` is returned. In + case a file is edited directly the return value is always `None` and + `require_save` and `extension` are ignored. + + If the editor cannot be opened a :exc:`UsageError` is raised. + + Note for Windows: to simplify cross-platform usage, the newlines are + automatically converted from POSIX to Windows and vice versa. As such, + the message here will have ``\n`` as newline markers. + + :param text: the text to edit. + :param editor: optionally the editor to use. Defaults to automatic + detection. + :param env: environment variables to forward to the editor. + :param require_save: if this is true, then not saving in the editor + will make the return value become `None`. + :param extension: the extension to tell the editor about. This defaults + to `.txt` but changing this might change syntax + highlighting. + :param filename: if provided it will edit this file instead of the + provided text contents. It will not use a temporary + file as an indirection in that case. + """ + from ._termui_impl import Editor + + editor = Editor( + editor=editor, env=env, require_save=require_save, extension=extension + ) + if filename is None: + return editor.edit(text) + editor.edit_file(filename) + + +def launch(url, wait=False, locate=False): + """This function launches the given URL (or filename) in the default + viewer application for this file type. If this is an executable, it + might launch the executable in a new session. The return value is + the exit code of the launched application. Usually, ``0`` indicates + success. + + Examples:: + + click.launch('https://click.palletsprojects.com/') + click.launch('/my/downloaded/file', locate=True) + + .. versionadded:: 2.0 + + :param url: URL or filename of the thing to launch. + :param wait: waits for the program to stop. + :param locate: if this is set to `True` then instead of launching the + application associated with the URL it will attempt to + launch a file manager with the file located. This + might have weird effects if the URL does not point to + the filesystem. + """ + from ._termui_impl import open_url + + return open_url(url, wait=wait, locate=locate) + + +# If this is provided, getchar() calls into this instead. This is used +# for unittesting purposes. +_getchar = None + + +def getchar(echo=False): + """Fetches a single character from the terminal and returns it. This + will always return a unicode character and under certain rare + circumstances this might return more than one character. The + situations which more than one character is returned is when for + whatever reason multiple characters end up in the terminal buffer or + standard input was not actually a terminal. + + Note that this will always read from the terminal, even if something + is piped into the standard input. + + Note for Windows: in rare cases when typing non-ASCII characters, this + function might wait for a second character and then return both at once. + This is because certain Unicode characters look like special-key markers. + + .. versionadded:: 2.0 + + :param echo: if set to `True`, the character read will also show up on + the terminal. The default is to not show it. + """ + f = _getchar + if f is None: + from ._termui_impl import getchar as f + return f(echo) + + +def raw_terminal(): + from ._termui_impl import raw_terminal as f + + return f() + + +def pause(info="Press any key to continue ...", err=False): + """This command stops execution and waits for the user to press any + key to continue. 
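A short interactive sketch of `getchar()` and `pause()`, again assuming the vendored package imports as `click`; both helpers need a real terminal:

```python
import click

# Read a single keypress without waiting for Enter.
click.echo("Overwrite existing file? [y/n] ", nl=False)
ch = click.getchar()
click.echo()

if ch.lower() == "y":
    click.echo("overwriting...")
else:
    click.echo("aborted")

# pause() blocks until any key is pressed and is a no-op when
# stdin/stdout are not terminals, so it is safe in CI scripts.
click.pause()
```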
This is similar to the Windows batch "pause" + command. If the program is not run through a terminal, this command + will instead do nothing. + + .. versionadded:: 2.0 + + .. versionadded:: 4.0 + Added the `err` parameter. + + :param info: the info string to print before pausing. + :param err: if set to message goes to ``stderr`` instead of + ``stdout``, the same as with echo. + """ + if not isatty(sys.stdin) or not isatty(sys.stdout): + return + try: + if info: + echo(info, nl=False, err=err) + try: + getchar() + except (KeyboardInterrupt, EOFError): + pass + finally: + if info: + echo(err=err) diff --git a/openpype/vendor/python/python_2/click/testing.py b/openpype/vendor/python/python_2/click/testing.py new file mode 100644 index 00000000000..a3dba3b3014 --- /dev/null +++ b/openpype/vendor/python/python_2/click/testing.py @@ -0,0 +1,382 @@ +import contextlib +import os +import shlex +import shutil +import sys +import tempfile + +from . import formatting +from . import termui +from . import utils +from ._compat import iteritems +from ._compat import PY2 +from ._compat import string_types + + +if PY2: + from cStringIO import StringIO +else: + import io + from ._compat import _find_binary_reader + + +class EchoingStdin(object): + def __init__(self, input, output): + self._input = input + self._output = output + + def __getattr__(self, x): + return getattr(self._input, x) + + def _echo(self, rv): + self._output.write(rv) + return rv + + def read(self, n=-1): + return self._echo(self._input.read(n)) + + def readline(self, n=-1): + return self._echo(self._input.readline(n)) + + def readlines(self): + return [self._echo(x) for x in self._input.readlines()] + + def __iter__(self): + return iter(self._echo(x) for x in self._input) + + def __repr__(self): + return repr(self._input) + + +def make_input_stream(input, charset): + # Is already an input stream. + if hasattr(input, "read"): + if PY2: + return input + rv = _find_binary_reader(input) + if rv is not None: + return rv + raise TypeError("Could not find binary reader for input stream.") + + if input is None: + input = b"" + elif not isinstance(input, bytes): + input = input.encode(charset) + if PY2: + return StringIO(input) + return io.BytesIO(input) + + +class Result(object): + """Holds the captured result of an invoked CLI script.""" + + def __init__( + self, runner, stdout_bytes, stderr_bytes, exit_code, exception, exc_info=None + ): + #: The runner that created the result + self.runner = runner + #: The standard output as bytes. + self.stdout_bytes = stdout_bytes + #: The standard error as bytes, or None if not available + self.stderr_bytes = stderr_bytes + #: The exit code as integer. + self.exit_code = exit_code + #: The exception that happened if one did. 
+ self.exception = exception + #: The traceback + self.exc_info = exc_info + + @property + def output(self): + """The (standard) output as unicode string.""" + return self.stdout + + @property + def stdout(self): + """The standard output as unicode string.""" + return self.stdout_bytes.decode(self.runner.charset, "replace").replace( + "\r\n", "\n" + ) + + @property + def stderr(self): + """The standard error as unicode string.""" + if self.stderr_bytes is None: + raise ValueError("stderr not separately captured") + return self.stderr_bytes.decode(self.runner.charset, "replace").replace( + "\r\n", "\n" + ) + + def __repr__(self): + return "<{} {}>".format( + type(self).__name__, repr(self.exception) if self.exception else "okay" + ) + + +class CliRunner(object): + """The CLI runner provides functionality to invoke a Click command line + script for unittesting purposes in a isolated environment. This only + works in single-threaded systems without any concurrency as it changes the + global interpreter state. + + :param charset: the character set for the input and output data. This is + UTF-8 by default and should not be changed currently as + the reporting to Click only works in Python 2 properly. + :param env: a dictionary with environment variables for overriding. + :param echo_stdin: if this is set to `True`, then reading from stdin writes + to stdout. This is useful for showing examples in + some circumstances. Note that regular prompts + will automatically echo the input. + :param mix_stderr: if this is set to `False`, then stdout and stderr are + preserved as independent streams. This is useful for + Unix-philosophy apps that have predictable stdout and + noisy stderr, such that each may be measured + independently + """ + + def __init__(self, charset=None, env=None, echo_stdin=False, mix_stderr=True): + if charset is None: + charset = "utf-8" + self.charset = charset + self.env = env or {} + self.echo_stdin = echo_stdin + self.mix_stderr = mix_stderr + + def get_default_prog_name(self, cli): + """Given a command object it will return the default program name + for it. The default is the `name` attribute or ``"root"`` if not + set. + """ + return cli.name or "root" + + def make_env(self, overrides=None): + """Returns the environment overrides for invoking a script.""" + rv = dict(self.env) + if overrides: + rv.update(overrides) + return rv + + @contextlib.contextmanager + def isolation(self, input=None, env=None, color=False): + """A context manager that sets up the isolation for invoking of a + command line tool. This sets up stdin with the given input data + and `os.environ` with the overrides from the given dictionary. + This also rebinds some internals in Click to be mocked (like the + prompt functionality). + + This is automatically done in the :meth:`invoke` method. + + .. versionadded:: 4.0 + The ``color`` parameter was added. + + :param input: the input stream to put into sys.stdin. + :param env: the environment overrides as dictionary. + :param color: whether the output should contain color codes. The + application can still override this explicitly. 
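A minimal sketch of the intended testing workflow with `CliRunner`, using a hypothetical `hello` command; in this vendored copy the import would normally go through the `openpype.vendor` path, shortened here to plain `click`:

```python
import click
from click.testing import CliRunner

@click.command()
@click.argument("name")
def hello(name):
    click.echo("Hello {}!".format(name))

def test_hello():
    runner = CliRunner()
    result = runner.invoke(hello, ["World"])
    # The Result object exposes the exit code and decoded output.
    assert result.exit_code == 0
    assert result.output == "Hello World!\n"
```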
+ """ + input = make_input_stream(input, self.charset) + + old_stdin = sys.stdin + old_stdout = sys.stdout + old_stderr = sys.stderr + old_forced_width = formatting.FORCED_WIDTH + formatting.FORCED_WIDTH = 80 + + env = self.make_env(env) + + if PY2: + bytes_output = StringIO() + if self.echo_stdin: + input = EchoingStdin(input, bytes_output) + sys.stdout = bytes_output + if not self.mix_stderr: + bytes_error = StringIO() + sys.stderr = bytes_error + else: + bytes_output = io.BytesIO() + if self.echo_stdin: + input = EchoingStdin(input, bytes_output) + input = io.TextIOWrapper(input, encoding=self.charset) + sys.stdout = io.TextIOWrapper(bytes_output, encoding=self.charset) + if not self.mix_stderr: + bytes_error = io.BytesIO() + sys.stderr = io.TextIOWrapper(bytes_error, encoding=self.charset) + + if self.mix_stderr: + sys.stderr = sys.stdout + + sys.stdin = input + + def visible_input(prompt=None): + sys.stdout.write(prompt or "") + val = input.readline().rstrip("\r\n") + sys.stdout.write("{}\n".format(val)) + sys.stdout.flush() + return val + + def hidden_input(prompt=None): + sys.stdout.write("{}\n".format(prompt or "")) + sys.stdout.flush() + return input.readline().rstrip("\r\n") + + def _getchar(echo): + char = sys.stdin.read(1) + if echo: + sys.stdout.write(char) + sys.stdout.flush() + return char + + default_color = color + + def should_strip_ansi(stream=None, color=None): + if color is None: + return not default_color + return not color + + old_visible_prompt_func = termui.visible_prompt_func + old_hidden_prompt_func = termui.hidden_prompt_func + old__getchar_func = termui._getchar + old_should_strip_ansi = utils.should_strip_ansi + termui.visible_prompt_func = visible_input + termui.hidden_prompt_func = hidden_input + termui._getchar = _getchar + utils.should_strip_ansi = should_strip_ansi + + old_env = {} + try: + for key, value in iteritems(env): + old_env[key] = os.environ.get(key) + if value is None: + try: + del os.environ[key] + except Exception: + pass + else: + os.environ[key] = value + yield (bytes_output, not self.mix_stderr and bytes_error) + finally: + for key, value in iteritems(old_env): + if value is None: + try: + del os.environ[key] + except Exception: + pass + else: + os.environ[key] = value + sys.stdout = old_stdout + sys.stderr = old_stderr + sys.stdin = old_stdin + termui.visible_prompt_func = old_visible_prompt_func + termui.hidden_prompt_func = old_hidden_prompt_func + termui._getchar = old__getchar_func + utils.should_strip_ansi = old_should_strip_ansi + formatting.FORCED_WIDTH = old_forced_width + + def invoke( + self, + cli, + args=None, + input=None, + env=None, + catch_exceptions=True, + color=False, + **extra + ): + """Invokes a command in an isolated environment. The arguments are + forwarded directly to the command line script, the `extra` keyword + arguments are passed to the :meth:`~clickpkg.Command.main` function of + the command. + + This returns a :class:`Result` object. + + .. versionadded:: 3.0 + The ``catch_exceptions`` parameter was added. + + .. versionchanged:: 3.0 + The result object now has an `exc_info` attribute with the + traceback if available. + + .. versionadded:: 4.0 + The ``color`` parameter was added. + + :param cli: the command to invoke + :param args: the arguments to invoke. It may be given as an iterable + or a string. When given as string it will be interpreted + as a Unix shell command. More details at + :func:`shlex.split`. + :param input: the input data for `sys.stdin`. + :param env: the environment overrides. 
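Because `isolation()` also swaps out the prompt machinery shown above, prompts can be fed from the `input` argument. A hedged sketch with a hypothetical `greet` command:

```python
import click
from click.testing import CliRunner

@click.command()
@click.option("--name", prompt=True)
def greet(name):
    click.echo("Hi {}".format(name))

def test_prompt_reads_from_input():
    runner = CliRunner()
    # The prompt reads from the supplied string, not a real terminal,
    # and the typed value is echoed back into the captured output.
    result = runner.invoke(greet, input="Ada\n")
    assert result.exit_code == 0
    assert "Hi Ada" in result.output
```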
+ :param catch_exceptions: Whether to catch any other exceptions than + ``SystemExit``. + :param extra: the keyword arguments to pass to :meth:`main`. + :param color: whether the output should contain color codes. The + application can still override this explicitly. + """ + exc_info = None + with self.isolation(input=input, env=env, color=color) as outstreams: + exception = None + exit_code = 0 + + if isinstance(args, string_types): + args = shlex.split(args) + + try: + prog_name = extra.pop("prog_name") + except KeyError: + prog_name = self.get_default_prog_name(cli) + + try: + cli.main(args=args or (), prog_name=prog_name, **extra) + except SystemExit as e: + exc_info = sys.exc_info() + exit_code = e.code + if exit_code is None: + exit_code = 0 + + if exit_code != 0: + exception = e + + if not isinstance(exit_code, int): + sys.stdout.write(str(exit_code)) + sys.stdout.write("\n") + exit_code = 1 + + except Exception as e: + if not catch_exceptions: + raise + exception = e + exit_code = 1 + exc_info = sys.exc_info() + finally: + sys.stdout.flush() + stdout = outstreams[0].getvalue() + if self.mix_stderr: + stderr = None + else: + stderr = outstreams[1].getvalue() + + return Result( + runner=self, + stdout_bytes=stdout, + stderr_bytes=stderr, + exit_code=exit_code, + exception=exception, + exc_info=exc_info, + ) + + @contextlib.contextmanager + def isolated_filesystem(self): + """A context manager that creates a temporary folder and changes + the current working directory to it for isolated filesystem tests. + """ + cwd = os.getcwd() + t = tempfile.mkdtemp() + os.chdir(t) + try: + yield t + finally: + os.chdir(cwd) + try: + shutil.rmtree(t) + except (OSError, IOError): # noqa: B014 + pass diff --git a/openpype/vendor/python/python_2/click/types.py b/openpype/vendor/python/python_2/click/types.py new file mode 100644 index 00000000000..505c39f8509 --- /dev/null +++ b/openpype/vendor/python/python_2/click/types.py @@ -0,0 +1,762 @@ +import os +import stat +from datetime import datetime + +from ._compat import _get_argv_encoding +from ._compat import filename_to_ui +from ._compat import get_filesystem_encoding +from ._compat import get_streerror +from ._compat import open_stream +from ._compat import PY2 +from ._compat import text_type +from .exceptions import BadParameter +from .utils import LazyFile +from .utils import safecall + + +class ParamType(object): + """Helper for converting values through types. The following is + necessary for a valid type: + + * it needs a name + * it needs to pass through None unchanged + * it needs to convert from a string + * it needs to convert its result type through unchanged + (eg: needs to be idempotent) + * it needs to be able to deal with param and context being `None`. + This can be the case when the object is used with prompt + inputs. + """ + + is_composite = False + + #: the descriptive name of this type + name = None + + #: if a list of this type is expected and the value is pulled from a + #: string environment variable, this is what splits it up. `None` + #: means any whitespace. For all parameters the general rule is that + #: whitespace splits them up. The exception are paths and files which + #: are split by ``os.path.pathsep`` by default (":" on Unix and ";" on + #: Windows). 
+ envvar_list_splitter = None + + def __call__(self, value, param=None, ctx=None): + if value is not None: + return self.convert(value, param, ctx) + + def get_metavar(self, param): + """Returns the metavar default for this param if it provides one.""" + + def get_missing_message(self, param): + """Optionally might return extra information about a missing + parameter. + + .. versionadded:: 2.0 + """ + + def convert(self, value, param, ctx): + """Converts the value. This is not invoked for values that are + `None` (the missing value). + """ + return value + + def split_envvar_value(self, rv): + """Given a value from an environment variable this splits it up + into small chunks depending on the defined envvar list splitter. + + If the splitter is set to `None`, which means that whitespace splits, + then leading and trailing whitespace is ignored. Otherwise, leading + and trailing splitters usually lead to empty items being included. + """ + return (rv or "").split(self.envvar_list_splitter) + + def fail(self, message, param=None, ctx=None): + """Helper method to fail with an invalid value message.""" + raise BadParameter(message, ctx=ctx, param=param) + + +class CompositeParamType(ParamType): + is_composite = True + + @property + def arity(self): + raise NotImplementedError() + + +class FuncParamType(ParamType): + def __init__(self, func): + self.name = func.__name__ + self.func = func + + def convert(self, value, param, ctx): + try: + return self.func(value) + except ValueError: + try: + value = text_type(value) + except UnicodeError: + value = str(value).decode("utf-8", "replace") + self.fail(value, param, ctx) + + +class UnprocessedParamType(ParamType): + name = "text" + + def convert(self, value, param, ctx): + return value + + def __repr__(self): + return "UNPROCESSED" + + +class StringParamType(ParamType): + name = "text" + + def convert(self, value, param, ctx): + if isinstance(value, bytes): + enc = _get_argv_encoding() + try: + value = value.decode(enc) + except UnicodeError: + fs_enc = get_filesystem_encoding() + if fs_enc != enc: + try: + value = value.decode(fs_enc) + except UnicodeError: + value = value.decode("utf-8", "replace") + else: + value = value.decode("utf-8", "replace") + return value + return value + + def __repr__(self): + return "STRING" + + +class Choice(ParamType): + """The choice type allows a value to be checked against a fixed set + of supported values. All of these values have to be strings. + + You should only pass a list or tuple of choices. Other iterables + (like generators) may lead to surprising results. + + The resulting value will always be one of the originally passed choices + regardless of ``case_sensitive`` or any ``ctx.token_normalize_func`` + being specified. + + See :ref:`choice-opts` for an example. + + :param case_sensitive: Set to false to make choices case + insensitive. Defaults to true. 
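A small sketch of `Choice` in use, with a hypothetical `--fmt` option and a plain `click` import assumed:

```python
import click

@click.command()
@click.option("--fmt", type=click.Choice(["json", "yaml"], case_sensitive=False))
def export(fmt):
    # The converted value is always one of the original choices,
    # so "--fmt JSON" arrives here as "json".
    click.echo("exporting as {}".format(fmt))
```

Passing a value outside the list fails with an "invalid choice" error that lists the allowed values.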
+ """ + + name = "choice" + + def __init__(self, choices, case_sensitive=True): + self.choices = choices + self.case_sensitive = case_sensitive + + def get_metavar(self, param): + return "[{}]".format("|".join(self.choices)) + + def get_missing_message(self, param): + return "Choose from:\n\t{}.".format(",\n\t".join(self.choices)) + + def convert(self, value, param, ctx): + # Match through normalization and case sensitivity + # first do token_normalize_func, then lowercase + # preserve original `value` to produce an accurate message in + # `self.fail` + normed_value = value + normed_choices = {choice: choice for choice in self.choices} + + if ctx is not None and ctx.token_normalize_func is not None: + normed_value = ctx.token_normalize_func(value) + normed_choices = { + ctx.token_normalize_func(normed_choice): original + for normed_choice, original in normed_choices.items() + } + + if not self.case_sensitive: + if PY2: + lower = str.lower + else: + lower = str.casefold + + normed_value = lower(normed_value) + normed_choices = { + lower(normed_choice): original + for normed_choice, original in normed_choices.items() + } + + if normed_value in normed_choices: + return normed_choices[normed_value] + + self.fail( + "invalid choice: {}. (choose from {})".format( + value, ", ".join(self.choices) + ), + param, + ctx, + ) + + def __repr__(self): + return "Choice('{}')".format(list(self.choices)) + + +class DateTime(ParamType): + """The DateTime type converts date strings into `datetime` objects. + + The format strings which are checked are configurable, but default to some + common (non-timezone aware) ISO 8601 formats. + + When specifying *DateTime* formats, you should only pass a list or a tuple. + Other iterables, like generators, may lead to surprising results. + + The format strings are processed using ``datetime.strptime``, and this + consequently defines the format strings which are allowed. + + Parsing is tried using each format, in order, and the first format which + parses successfully is used. + + :param formats: A list or tuple of date format strings, in the order in + which they should be tried. Defaults to + ``'%Y-%m-%d'``, ``'%Y-%m-%dT%H:%M:%S'``, + ``'%Y-%m-%d %H:%M:%S'``. + """ + + name = "datetime" + + def __init__(self, formats=None): + self.formats = formats or ["%Y-%m-%d", "%Y-%m-%dT%H:%M:%S", "%Y-%m-%d %H:%M:%S"] + + def get_metavar(self, param): + return "[{}]".format("|".join(self.formats)) + + def _try_to_convert_date(self, value, format): + try: + return datetime.strptime(value, format) + except ValueError: + return None + + def convert(self, value, param, ctx): + # Exact match + for format in self.formats: + dtime = self._try_to_convert_date(value, format) + if dtime: + return dtime + + self.fail( + "invalid datetime format: {}. (choose from {})".format( + value, ", ".join(self.formats) + ) + ) + + def __repr__(self): + return "DateTime" + + +class IntParamType(ParamType): + name = "integer" + + def convert(self, value, param, ctx): + try: + return int(value) + except ValueError: + self.fail("{} is not a valid integer".format(value), param, ctx) + + def __repr__(self): + return "INT" + + +class IntRange(IntParamType): + """A parameter that works similar to :data:`click.INT` but restricts + the value to fit into a range. The default behavior is to fail if the + value falls outside the range, but it can also be silently clamped + between the two edges. + + See :ref:`ranges` for an example. 
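A sketch contrasting the failing and clamping behaviour, with a hypothetical `--workers` option:

```python
import click

@click.command()
@click.option("--workers", type=click.IntRange(1, 8, clamp=True), default=1)
def run(workers):
    click.echo("using {} workers".format(workers))

# With clamp=True, "--workers 99" is silently clamped to 8.
# Without clamp, the same input aborts with:
#   "99 is not in the valid range of 1 to 8."
```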
+ """ + + name = "integer range" + + def __init__(self, min=None, max=None, clamp=False): + self.min = min + self.max = max + self.clamp = clamp + + def convert(self, value, param, ctx): + rv = IntParamType.convert(self, value, param, ctx) + if self.clamp: + if self.min is not None and rv < self.min: + return self.min + if self.max is not None and rv > self.max: + return self.max + if ( + self.min is not None + and rv < self.min + or self.max is not None + and rv > self.max + ): + if self.min is None: + self.fail( + "{} is bigger than the maximum valid value {}.".format( + rv, self.max + ), + param, + ctx, + ) + elif self.max is None: + self.fail( + "{} is smaller than the minimum valid value {}.".format( + rv, self.min + ), + param, + ctx, + ) + else: + self.fail( + "{} is not in the valid range of {} to {}.".format( + rv, self.min, self.max + ), + param, + ctx, + ) + return rv + + def __repr__(self): + return "IntRange({}, {})".format(self.min, self.max) + + +class FloatParamType(ParamType): + name = "float" + + def convert(self, value, param, ctx): + try: + return float(value) + except ValueError: + self.fail( + "{} is not a valid floating point value".format(value), param, ctx + ) + + def __repr__(self): + return "FLOAT" + + +class FloatRange(FloatParamType): + """A parameter that works similar to :data:`click.FLOAT` but restricts + the value to fit into a range. The default behavior is to fail if the + value falls outside the range, but it can also be silently clamped + between the two edges. + + See :ref:`ranges` for an example. + """ + + name = "float range" + + def __init__(self, min=None, max=None, clamp=False): + self.min = min + self.max = max + self.clamp = clamp + + def convert(self, value, param, ctx): + rv = FloatParamType.convert(self, value, param, ctx) + if self.clamp: + if self.min is not None and rv < self.min: + return self.min + if self.max is not None and rv > self.max: + return self.max + if ( + self.min is not None + and rv < self.min + or self.max is not None + and rv > self.max + ): + if self.min is None: + self.fail( + "{} is bigger than the maximum valid value {}.".format( + rv, self.max + ), + param, + ctx, + ) + elif self.max is None: + self.fail( + "{} is smaller than the minimum valid value {}.".format( + rv, self.min + ), + param, + ctx, + ) + else: + self.fail( + "{} is not in the valid range of {} to {}.".format( + rv, self.min, self.max + ), + param, + ctx, + ) + return rv + + def __repr__(self): + return "FloatRange({}, {})".format(self.min, self.max) + + +class BoolParamType(ParamType): + name = "boolean" + + def convert(self, value, param, ctx): + if isinstance(value, bool): + return bool(value) + value = value.lower() + if value in ("true", "t", "1", "yes", "y"): + return True + elif value in ("false", "f", "0", "no", "n"): + return False + self.fail("{} is not a valid boolean".format(value), param, ctx) + + def __repr__(self): + return "BOOL" + + +class UUIDParameterType(ParamType): + name = "uuid" + + def convert(self, value, param, ctx): + import uuid + + try: + if PY2 and isinstance(value, text_type): + value = value.encode("ascii") + return uuid.UUID(value) + except ValueError: + self.fail("{} is not a valid UUID value".format(value), param, ctx) + + def __repr__(self): + return "UUID" + + +class File(ParamType): + """Declares a parameter to be a file for reading or writing. The file + is automatically closed once the context tears down (after the command + finished working). + + Files can be opened for reading or writing. 
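The numeric and boolean types above can also be exercised directly, since `ParamType` instances are callable. A quick sketch, plain `click` import assumed:

```python
import click

# param and ctx may be None, e.g. when the type is used with prompts.
assert click.BOOL.convert("yes", None, None) is True
assert click.BOOL.convert("0", None, None) is False

ratio = click.FloatRange(0.0, 1.0, clamp=True)
assert ratio.convert("1.7", None, None) == 1.0  # clamped to the maximum
```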
The special value ``-`` + indicates stdin or stdout depending on the mode. + + By default, the file is opened for reading text data, but it can also be + opened in binary mode or for writing. The encoding parameter can be used + to force a specific encoding. + + The `lazy` flag controls if the file should be opened immediately or upon + first IO. The default is to be non-lazy for standard input and output + streams as well as files opened for reading, `lazy` otherwise. When opening a + file lazily for reading, it is still opened temporarily for validation, but + will not be held open until first IO. lazy is mainly useful when opening + for writing to avoid creating the file until it is needed. + + Starting with Click 2.0, files can also be opened atomically in which + case all writes go into a separate file in the same folder and upon + completion the file will be moved over to the original location. This + is useful if a file regularly read by other users is modified. + + See :ref:`file-args` for more information. + """ + + name = "filename" + envvar_list_splitter = os.path.pathsep + + def __init__( + self, mode="r", encoding=None, errors="strict", lazy=None, atomic=False + ): + self.mode = mode + self.encoding = encoding + self.errors = errors + self.lazy = lazy + self.atomic = atomic + + def resolve_lazy_flag(self, value): + if self.lazy is not None: + return self.lazy + if value == "-": + return False + elif "w" in self.mode: + return True + return False + + def convert(self, value, param, ctx): + try: + if hasattr(value, "read") or hasattr(value, "write"): + return value + + lazy = self.resolve_lazy_flag(value) + + if lazy: + f = LazyFile( + value, self.mode, self.encoding, self.errors, atomic=self.atomic + ) + if ctx is not None: + ctx.call_on_close(f.close_intelligently) + return f + + f, should_close = open_stream( + value, self.mode, self.encoding, self.errors, atomic=self.atomic + ) + # If a context is provided, we automatically close the file + # at the end of the context execution (or flush out). If a + # context does not exist, it's the caller's responsibility to + # properly close the file. This for instance happens when the + # type is used with prompts. + if ctx is not None: + if should_close: + ctx.call_on_close(safecall(f.close)) + else: + ctx.call_on_close(safecall(f.flush)) + return f + except (IOError, OSError) as e: # noqa: B014 + self.fail( + "Could not open file: {}: {}".format( + filename_to_ui(value), get_streerror(e) + ), + param, + ctx, + ) + + +class Path(ParamType): + """The path type is similar to the :class:`File` type but it performs + different checks. First of all, instead of returning an open file + handle it returns just the filename. Secondly, it can perform various + basic checks about what the file or directory should be. + + .. versionchanged:: 6.0 + `allow_dash` was added. + + :param exists: if set to true, the file or directory needs to exist for + this value to be valid. If this is not required and a + file does indeed not exist, then all further checks are + silently skipped. + :param file_okay: controls if a file is a possible value. + :param dir_okay: controls if a directory is a possible value. + :param writable: if true, a writable check is performed. + :param readable: if true, a readable check is performed. + :param resolve_path: if this is true, then the path is fully resolved + before the value is passed onwards. This means + that it's absolute and symlinks are resolved. 
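A minimal sketch of `File` arguments, including the stdin/stdout shorthand, with a hypothetical `copy` command:

```python
import click

@click.command()
@click.argument("src", type=click.File("r"))
@click.argument("dst", type=click.File("w"))
def copy(src, dst):
    # "-" maps to stdin or stdout depending on the mode, so
    # `copy - out.txt` reads from standard input.
    dst.write(src.read())
```

Opened files are closed automatically when the command's context tears down, per the `convert()` logic above.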
It + will not expand a tilde-prefix, as this is + supposed to be done by the shell only. + :param allow_dash: If this is set to `True`, a single dash to indicate + standard streams is permitted. + :param path_type: optionally a string type that should be used to + represent the path. The default is `None` which + means the return value will be either bytes or + unicode depending on what makes most sense given the + input data Click deals with. + """ + + envvar_list_splitter = os.path.pathsep + + def __init__( + self, + exists=False, + file_okay=True, + dir_okay=True, + writable=False, + readable=True, + resolve_path=False, + allow_dash=False, + path_type=None, + ): + self.exists = exists + self.file_okay = file_okay + self.dir_okay = dir_okay + self.writable = writable + self.readable = readable + self.resolve_path = resolve_path + self.allow_dash = allow_dash + self.type = path_type + + if self.file_okay and not self.dir_okay: + self.name = "file" + self.path_type = "File" + elif self.dir_okay and not self.file_okay: + self.name = "directory" + self.path_type = "Directory" + else: + self.name = "path" + self.path_type = "Path" + + def coerce_path_result(self, rv): + if self.type is not None and not isinstance(rv, self.type): + if self.type is text_type: + rv = rv.decode(get_filesystem_encoding()) + else: + rv = rv.encode(get_filesystem_encoding()) + return rv + + def convert(self, value, param, ctx): + rv = value + + is_dash = self.file_okay and self.allow_dash and rv in (b"-", "-") + + if not is_dash: + if self.resolve_path: + rv = os.path.realpath(rv) + + try: + st = os.stat(rv) + except OSError: + if not self.exists: + return self.coerce_path_result(rv) + self.fail( + "{} '{}' does not exist.".format( + self.path_type, filename_to_ui(value) + ), + param, + ctx, + ) + + if not self.file_okay and stat.S_ISREG(st.st_mode): + self.fail( + "{} '{}' is a file.".format(self.path_type, filename_to_ui(value)), + param, + ctx, + ) + if not self.dir_okay and stat.S_ISDIR(st.st_mode): + self.fail( + "{} '{}' is a directory.".format( + self.path_type, filename_to_ui(value) + ), + param, + ctx, + ) + if self.writable and not os.access(value, os.W_OK): + self.fail( + "{} '{}' is not writable.".format( + self.path_type, filename_to_ui(value) + ), + param, + ctx, + ) + if self.readable and not os.access(value, os.R_OK): + self.fail( + "{} '{}' is not readable.".format( + self.path_type, filename_to_ui(value) + ), + param, + ctx, + ) + + return self.coerce_path_result(rv) + + +class Tuple(CompositeParamType): + """The default behavior of Click is to apply a type on a value directly. + This works well in most cases, except for when `nargs` is set to a fixed + count and different types should be used for different items. In this + case the :class:`Tuple` type can be used. This type can only be used + if `nargs` is set to a fixed number. + + For more information see :ref:`tuple-type`. + + This can be selected by using a Python tuple literal as a type. + + :param types: a list of types that should be used for the tuple items. + """ + + def __init__(self, types): + self.types = [convert_type(ty) for ty in types] + + @property + def name(self): + return "<{}>".format(" ".join(ty.name for ty in self.types)) + + @property + def arity(self): + return len(self.types) + + def convert(self, value, param, ctx): + if len(value) != len(self.types): + raise TypeError( + "It would appear that nargs is set to conflict with the" + " composite type arity." 
+ ) + return tuple(ty(x, param, ctx) for ty, x in zip(self.types, value)) + + +def convert_type(ty, default=None): + """Converts a callable or python type into the most appropriate + param type. + """ + guessed_type = False + if ty is None and default is not None: + if isinstance(default, tuple): + ty = tuple(map(type, default)) + else: + ty = type(default) + guessed_type = True + + if isinstance(ty, tuple): + return Tuple(ty) + if isinstance(ty, ParamType): + return ty + if ty is text_type or ty is str or ty is None: + return STRING + if ty is int: + return INT + # Booleans are only okay if not guessed. This is done because for + # flags the default value is actually a bit of a lie in that it + # indicates which of the flags is the one we want. See get_default() + # for more information. + if ty is bool and not guessed_type: + return BOOL + if ty is float: + return FLOAT + if guessed_type: + return STRING + + # Catch a common mistake + if __debug__: + try: + if issubclass(ty, ParamType): + raise AssertionError( + "Attempted to use an uninstantiated parameter type ({}).".format(ty) + ) + except TypeError: + pass + return FuncParamType(ty) + + +#: A dummy parameter type that just does nothing. From a user's +#: perspective this appears to just be the same as `STRING` but internally +#: no string conversion takes place. This is necessary to achieve the +#: same bytes/unicode behavior on Python 2/3 in situations where you want +#: to not convert argument types. This is usually useful when working +#: with file paths as they can appear in bytes and unicode. +#: +#: For path related uses the :class:`Path` type is a better choice but +#: there are situations where an unprocessed type is useful which is why +#: it is is provided. +#: +#: .. versionadded:: 4.0 +UNPROCESSED = UnprocessedParamType() + +#: A unicode string parameter type which is the implicit default. This +#: can also be selected by using ``str`` as type. +STRING = StringParamType() + +#: An integer parameter. This can also be selected by using ``int`` as +#: type. +INT = IntParamType() + +#: A floating point value parameter. This can also be selected by using +#: ``float`` as type. +FLOAT = FloatParamType() + +#: A boolean parameter. This is the default for boolean flags. This can +#: also be selected by using ``bool`` as a type. +BOOL = BoolParamType() + +#: A UUID parameter. 
+UUID = UUIDParameterType() diff --git a/openpype/vendor/python/python_2/click/utils.py b/openpype/vendor/python/python_2/click/utils.py new file mode 100644 index 00000000000..79265e732d4 --- /dev/null +++ b/openpype/vendor/python/python_2/click/utils.py @@ -0,0 +1,455 @@ +import os +import sys + +from ._compat import _default_text_stderr +from ._compat import _default_text_stdout +from ._compat import auto_wrap_for_ansi +from ._compat import binary_streams +from ._compat import filename_to_ui +from ._compat import get_filesystem_encoding +from ._compat import get_streerror +from ._compat import is_bytes +from ._compat import open_stream +from ._compat import PY2 +from ._compat import should_strip_ansi +from ._compat import string_types +from ._compat import strip_ansi +from ._compat import text_streams +from ._compat import text_type +from ._compat import WIN +from .globals import resolve_color_default + +if not PY2: + from ._compat import _find_binary_writer +elif WIN: + from ._winconsole import _get_windows_argv + from ._winconsole import _hash_py_argv + from ._winconsole import _initial_argv_hash + +echo_native_types = string_types + (bytes, bytearray) + + +def _posixify(name): + return "-".join(name.split()).lower() + + +def safecall(func): + """Wraps a function so that it swallows exceptions.""" + + def wrapper(*args, **kwargs): + try: + return func(*args, **kwargs) + except Exception: + pass + + return wrapper + + +def make_str(value): + """Converts a value into a valid string.""" + if isinstance(value, bytes): + try: + return value.decode(get_filesystem_encoding()) + except UnicodeError: + return value.decode("utf-8", "replace") + return text_type(value) + + +def make_default_short_help(help, max_length=45): + """Return a condensed version of help string.""" + words = help.split() + total_length = 0 + result = [] + done = False + + for word in words: + if word[-1:] == ".": + done = True + new_length = 1 + len(word) if result else len(word) + if total_length + new_length > max_length: + result.append("...") + done = True + else: + if result: + result.append(" ") + result.append(word) + if done: + break + total_length += new_length + + return "".join(result) + + +class LazyFile(object): + """A lazy file works like a regular file but it does not fully open + the file but it does perform some basic checks early to see if the + filename parameter does make sense. This is useful for safely opening + files for writing. + """ + + def __init__( + self, filename, mode="r", encoding=None, errors="strict", atomic=False + ): + self.name = filename + self.mode = mode + self.encoding = encoding + self.errors = errors + self.atomic = atomic + + if filename == "-": + self._f, self.should_close = open_stream(filename, mode, encoding, errors) + else: + if "r" in mode: + # Open and close the file in case we're opening it for + # reading so that we can catch at least some errors in + # some cases early. + open(filename, mode).close() + self._f = None + self.should_close = True + + def __getattr__(self, name): + return getattr(self.open(), name) + + def __repr__(self): + if self._f is not None: + return repr(self._f) + return "".format(self.name, self.mode) + + def open(self): + """Opens the file if it's not yet open. This call might fail with + a :exc:`FileError`. Not handling this error will produce an error + that Click shows. 
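A brief sketch of the lazy-open behaviour described above, using the class directly; in this vendored copy the real import path goes through `openpype.vendor`, shortened here to `click.utils`:

```python
from click.utils import LazyFile

# For write modes nothing is opened or created yet; only the
# constructor arguments are stored.
out = LazyFile("report.txt", "w")

# The first attribute access opens the file for real.
out.write("done\n")

# close_intelligently() only closes files the wrapper itself opened,
# so it stays safe when the target was "-" (stdout).
out.close_intelligently()
```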
+ """ + if self._f is not None: + return self._f + try: + rv, self.should_close = open_stream( + self.name, self.mode, self.encoding, self.errors, atomic=self.atomic + ) + except (IOError, OSError) as e: # noqa: E402 + from .exceptions import FileError + + raise FileError(self.name, hint=get_streerror(e)) + self._f = rv + return rv + + def close(self): + """Closes the underlying file, no matter what.""" + if self._f is not None: + self._f.close() + + def close_intelligently(self): + """This function only closes the file if it was opened by the lazy + file wrapper. For instance this will never close stdin. + """ + if self.should_close: + self.close() + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_value, tb): + self.close_intelligently() + + def __iter__(self): + self.open() + return iter(self._f) + + +class KeepOpenFile(object): + def __init__(self, file): + self._file = file + + def __getattr__(self, name): + return getattr(self._file, name) + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_value, tb): + pass + + def __repr__(self): + return repr(self._file) + + def __iter__(self): + return iter(self._file) + + +def echo(message=None, file=None, nl=True, err=False, color=None): + """Prints a message plus a newline to the given file or stdout. On + first sight, this looks like the print function, but it has improved + support for handling Unicode and binary data that does not fail no + matter how badly configured the system is. + + Primarily it means that you can print binary data as well as Unicode + data on both 2.x and 3.x to the given file in the most appropriate way + possible. This is a very carefree function in that it will try its + best to not fail. As of Click 6.0 this includes support for unicode + output on the Windows console. + + In addition to that, if `colorama`_ is installed, the echo function will + also support clever handling of ANSI codes. Essentially it will then + do the following: + + - add transparent handling of ANSI color codes on Windows. + - hide ANSI codes automatically if the destination file is not a + terminal. + + .. _colorama: https://pypi.org/project/colorama/ + + .. versionchanged:: 6.0 + As of Click 6.0 the echo function will properly support unicode + output on the windows console. Not that click does not modify + the interpreter in any way which means that `sys.stdout` or the + print statement or function will still not provide unicode support. + + .. versionchanged:: 2.0 + Starting with version 2.0 of Click, the echo function will work + with colorama if it's installed. + + .. versionadded:: 3.0 + The `err` parameter was added. + + .. versionchanged:: 4.0 + Added the `color` flag. + + :param message: the message to print + :param file: the file to write to (defaults to ``stdout``) + :param err: if set to true the file defaults to ``stderr`` instead of + ``stdout``. This is faster and easier than calling + :func:`get_text_stderr` yourself. + :param nl: if set to `True` (the default) a newline is printed afterwards. + :param color: controls if the terminal supports ANSI colors or not. The + default is autodetection. + """ + if file is None: + if err: + file = _default_text_stderr() + else: + file = _default_text_stdout() + + # Convert non bytes/text into the native string type. 
+ if message is not None and not isinstance(message, echo_native_types): + message = text_type(message) + + if nl: + message = message or u"" + if isinstance(message, text_type): + message += u"\n" + else: + message += b"\n" + + # If there is a message, and we're in Python 3, and the value looks + # like bytes, we manually need to find the binary stream and write the + # message in there. This is done separately so that most stream + # types will work as you would expect. Eg: you can write to StringIO + # for other cases. + if message and not PY2 and is_bytes(message): + binary_file = _find_binary_writer(file) + if binary_file is not None: + file.flush() + binary_file.write(message) + binary_file.flush() + return + + # ANSI-style support. If there is no message or we are dealing with + # bytes nothing is happening. If we are connected to a file we want + # to strip colors. If we are on windows we either wrap the stream + # to strip the color or we use the colorama support to translate the + # ansi codes to API calls. + if message and not is_bytes(message): + color = resolve_color_default(color) + if should_strip_ansi(file, color): + message = strip_ansi(message) + elif WIN: + if auto_wrap_for_ansi is not None: + file = auto_wrap_for_ansi(file) + elif not color: + message = strip_ansi(message) + + if message: + file.write(message) + file.flush() + + +def get_binary_stream(name): + """Returns a system stream for byte processing. This essentially + returns the stream from the sys module with the given name but it + solves some compatibility issues between different Python versions. + Primarily this function is necessary for getting binary streams on + Python 3. + + :param name: the name of the stream to open. Valid names are ``'stdin'``, + ``'stdout'`` and ``'stderr'`` + """ + opener = binary_streams.get(name) + if opener is None: + raise TypeError("Unknown standard stream '{}'".format(name)) + return opener() + + +def get_text_stream(name, encoding=None, errors="strict"): + """Returns a system stream for text processing. This usually returns + a wrapped stream around a binary stream returned from + :func:`get_binary_stream` but it also can take shortcuts on Python 3 + for already correctly configured streams. + + :param name: the name of the stream to open. Valid names are ``'stdin'``, + ``'stdout'`` and ``'stderr'`` + :param encoding: overrides the detected default encoding. + :param errors: overrides the default error mode. + """ + opener = text_streams.get(name) + if opener is None: + raise TypeError("Unknown standard stream '{}'".format(name)) + return opener(encoding, errors) + + +def open_file( + filename, mode="r", encoding=None, errors="strict", lazy=False, atomic=False +): + """This is similar to how the :class:`File` works but for manual + usage. Files are opened non lazy by default. This can open regular + files as well as stdin/stdout if ``'-'`` is passed. + + If stdin/stdout is returned the stream is wrapped so that the context + manager will not close the stream accidentally. This makes it possible + to always use the function like this without having to worry to + accidentally close a standard stream:: + + with open_file(filename) as f: + ... + + .. versionadded:: 3.0 + + :param filename: the name of the file to open (or ``'-'`` for stdin/stdout). + :param mode: the mode in which to open the file. + :param encoding: the encoding to use. + :param errors: the error handling for this file. + :param lazy: can be flipped to true to open the file lazily. 
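A compact sketch of the byte/text handling described above, plain `click` import assumed:

```python
import click

# Text and bytes both work; on Python 3 bytes are routed to the
# underlying binary buffer instead of failing.
click.echo(u"unicode text")
click.echo(b"\xe2\x9c\x93 raw bytes")

# err=True switches the default target to stderr.
click.echo("something went wrong", err=True)

# For manual byte output, fetch the binary stream directly.
stream = click.get_binary_stream("stdout")
stream.write(b"binary payload\n")
```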
+ :param atomic: in atomic mode writes go into a temporary file and it's + moved on close. + """ + if lazy: + return LazyFile(filename, mode, encoding, errors, atomic=atomic) + f, should_close = open_stream(filename, mode, encoding, errors, atomic=atomic) + if not should_close: + f = KeepOpenFile(f) + return f + + +def get_os_args(): + """This returns the argument part of sys.argv in the most appropriate + form for processing. What this means is that this return value is in + a format that works for Click to process but does not necessarily + correspond well to what's actually standard for the interpreter. + + On most environments the return value is ``sys.argv[:1]`` unchanged. + However if you are on Windows and running Python 2 the return value + will actually be a list of unicode strings instead because the + default behavior on that platform otherwise will not be able to + carry all possible values that sys.argv can have. + + .. versionadded:: 6.0 + """ + # We can only extract the unicode argv if sys.argv has not been + # changed since the startup of the application. + if PY2 and WIN and _initial_argv_hash == _hash_py_argv(): + return _get_windows_argv() + return sys.argv[1:] + + +def format_filename(filename, shorten=False): + """Formats a filename for user display. The main purpose of this + function is to ensure that the filename can be displayed at all. This + will decode the filename to unicode if necessary in a way that it will + not fail. Optionally, it can shorten the filename to not include the + full path to the filename. + + :param filename: formats a filename for UI display. This will also convert + the filename into unicode without failing. + :param shorten: this optionally shortens the filename to strip of the + path that leads up to it. + """ + if shorten: + filename = os.path.basename(filename) + return filename_to_ui(filename) + + +def get_app_dir(app_name, roaming=True, force_posix=False): + r"""Returns the config folder for the application. The default behavior + is to return whatever is most appropriate for the operating system. + + To give you an idea, for an app called ``"Foo Bar"``, something like + the following folders could be returned: + + Mac OS X: + ``~/Library/Application Support/Foo Bar`` + Mac OS X (POSIX): + ``~/.foo-bar`` + Unix: + ``~/.config/foo-bar`` + Unix (POSIX): + ``~/.foo-bar`` + Win XP (roaming): + ``C:\Documents and Settings\\Local Settings\Application Data\Foo Bar`` + Win XP (not roaming): + ``C:\Documents and Settings\\Application Data\Foo Bar`` + Win 7 (roaming): + ``C:\Users\\AppData\Roaming\Foo Bar`` + Win 7 (not roaming): + ``C:\Users\\AppData\Local\Foo Bar`` + + .. versionadded:: 2.0 + + :param app_name: the application name. This should be properly capitalized + and can contain whitespace. + :param roaming: controls if the folder should be roaming or not on Windows. + Has no affect otherwise. + :param force_posix: if this is set to `True` then on any POSIX system the + folder will be stored in the home folder with a leading + dot instead of the XDG config home or darwin's + application support folder. 
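A short sketch of `get_app_dir()` for a hypothetical "My Tool" application, plain `click` import assumed:

```python
import os
import click

# Resolves to e.g. ~/.config/my-tool on Linux, ~/Library/Application
# Support/My Tool on macOS, and an AppData folder on Windows.
config_dir = click.get_app_dir("My Tool")
config_file = os.path.join(config_dir, "settings.json")

# force_posix=True always yields a dotted home-folder path instead.
posix_dir = click.get_app_dir("My Tool", force_posix=True)  # ~/.my-tool
```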
+ """ + if WIN: + key = "APPDATA" if roaming else "LOCALAPPDATA" + folder = os.environ.get(key) + if folder is None: + folder = os.path.expanduser("~") + return os.path.join(folder, app_name) + if force_posix: + return os.path.join(os.path.expanduser("~/.{}".format(_posixify(app_name)))) + if sys.platform == "darwin": + return os.path.join( + os.path.expanduser("~/Library/Application Support"), app_name + ) + return os.path.join( + os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config")), + _posixify(app_name), + ) + + +class PacifyFlushWrapper(object): + """This wrapper is used to catch and suppress BrokenPipeErrors resulting + from ``.flush()`` being called on broken pipe during the shutdown/final-GC + of the Python interpreter. Notably ``.flush()`` is always called on + ``sys.stdout`` and ``sys.stderr``. So as to have minimal impact on any + other cleanup code, and the case where the underlying file is not a broken + pipe, all calls and attributes are proxied. + """ + + def __init__(self, wrapped): + self.wrapped = wrapped + + def flush(self): + try: + self.wrapped.flush() + except IOError as e: + import errno + + if e.errno != errno.EPIPE: + raise + + def __getattr__(self, attr): + return getattr(self.wrapped, attr) diff --git a/openpype/version.py b/openpype/version.py index 40375bef43a..7de6fd752b2 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.16.2-nightly.1" +__version__ = "3.16.5-nightly.4" diff --git a/pyproject.toml b/pyproject.toml index fb6e222f274..a07c5471233 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "OpenPype" -version = "3.16.1" # OpenPype +version = "3.16.4" # OpenPype description = "Open VFX and Animation pipeline with support." authors = ["OpenPype Team "] license = "MIT License" diff --git a/server_addon/README.md b/server_addon/README.md index fa9a6001d22..c6d467adaa9 100644 --- a/server_addon/README.md +++ b/server_addon/README.md @@ -1,5 +1,5 @@ -# OpenPype addon for AYON server -Convert openpype into AYON addon which can be installed on AYON server. The versioning of the addon is following versioning of OpenPype. +# Addons for AYON server +Preparation of AYON addons based on OpenPype codebase. The output is a bunch of zip files in `./packages` directory that can be uploaded to AYON server. One of the packages is `openpype` which is OpenPype code converted to AYON addon. The addon is must have requirement to be able to use `ayon-launcher`. The versioning of `openpype` addon is following versioning of OpenPype. The other addons contain only settings models. ## Intro OpenPype is transitioning to AYON, a dedicated server with its own database, moving away from MongoDB. During this transition period, OpenPype will remain compatible with both MongoDB and AYON. However, we will gradually update the codebase to align with AYON's data structure and separate individual components into addons. @@ -11,11 +11,24 @@ Since the implementation of the AYON Launcher is not yet fully completed, we wil During this transitional period, the AYON Launcher addon will be a requirement as the entry point for using the AYON Launcher. ## How to start -There is a `create_ayon_addon.py` python file which contains logic how to create server addon from OpenPype codebase. Just run the code. +There is a `create_ayon_addons.py` python file which contains logic how to create server addon from OpenPype codebase. Just run the code. 
```shell -./.poetry/bin/poetry run python ./server_addon/create_ayon_addon.py +./.poetry/bin/poetry run python ./server_addon/create_ayon_addons.py ``` -It will create directory `./package/openpype//*` folder with all files necessary for AYON server. You can then copy `./package/openpype/` to server addons, or zip the folder and upload it to AYON server. Restart server to update addons information, add the addon version to server bundle and set the bundle for production or staging usage. +It will create directory `./packages/.zip` files for AYON server. You can then copy upload the zip files to AYON server. Restart server to update addons information, add the addon version to server bundle and set the bundle for production or staging usage. Once addon is on server and is enabled, you can just run AYON launcher. Content will be downloaded and used automatically. + +### Additional arguments +Additional arguments are useful for development purposes. + +To skip zip creation to keep only server ready folder structure, pass `--skip-zip` argument. +```shell +./.poetry/bin/poetry run python ./server_addon/create_ayon_addons.py --skip-zip +``` + +To create both zips and keep folder structure, pass `--keep-sources` argument. +```shell +./.poetry/bin/poetry run python ./server_addon/create_ayon_addons.py --keep-sources +``` diff --git a/server_addon/aftereffects/LICENSE b/server_addon/aftereffects/LICENSE new file mode 100644 index 00000000000..d6456956733 --- /dev/null +++ b/server_addon/aftereffects/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. 
For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/server_addon/aftereffects/README.md b/server_addon/aftereffects/README.md new file mode 100644 index 00000000000..b2f34f34077 --- /dev/null +++ b/server_addon/aftereffects/README.md @@ -0,0 +1,4 @@ +AfterEffects Addon +=============== + +Integration with Adobe AfterEffects. 
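Each of the server-addon files introduced below follows the same minimal pattern: a `BaseServerAddon` subclass that declares the addon's name, title and version, points at a pydantic settings model, and builds its default settings from a plain dict. The following is a compact sketch of that pattern; the `MyAddon`, `MyAddonSettings` and `DEFAULT_MYADDON_SETTINGS` names are illustrative placeholders (not part of this changeset), while the `ayon_server` and `pydantic` imports are the same ones used by the hunks that follow.

```python
from pydantic import Field

from ayon_server.addons import BaseServerAddon
from ayon_server.settings import BaseSettingsModel


class MyAddonSettings(BaseSettingsModel):
    # Every Field becomes an editable entry in the server settings UI.
    enabled: bool = Field(True, title="Enabled")


# Defaults are kept as a plain dict and validated through the model.
DEFAULT_MYADDON_SETTINGS = {"enabled": True}


class MyAddon(BaseServerAddon):
    name = "myaddon"
    title = "My Addon"
    version = "0.1.0"
    settings_model = MyAddonSettings

    async def get_default_settings(self):
        # Same idiom as the addons below: instantiate the settings
        # model class from the defaults dict.
        settings_model_cls = self.get_settings_model()
        return settings_model_cls(**DEFAULT_MYADDON_SETTINGS)
```

The per-host addons below (AfterEffects, Applications, Blender, and so on) differ only in the shape of their settings models.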
diff --git a/server_addon/aftereffects/server/__init__.py b/server_addon/aftereffects/server/__init__.py new file mode 100644 index 00000000000..e14e76e9dba --- /dev/null +++ b/server_addon/aftereffects/server/__init__.py @@ -0,0 +1,16 @@ +from ayon_server.addons import BaseServerAddon + +from .settings import AfterEffectsSettings, DEFAULT_AFTEREFFECTS_SETTING +from .version import __version__ + + +class AfterEffects(BaseServerAddon): + name = "aftereffects" + title = "AfterEffects" + version = __version__ + + settings_model = AfterEffectsSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_AFTEREFFECTS_SETTING) diff --git a/server_addon/aftereffects/server/settings/__init__.py b/server_addon/aftereffects/server/settings/__init__.py new file mode 100644 index 00000000000..4e96804b4aa --- /dev/null +++ b/server_addon/aftereffects/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + AfterEffectsSettings, + DEFAULT_AFTEREFFECTS_SETTING, +) + + +__all__ = ( + "AfterEffectsSettings", + "DEFAULT_AFTEREFFECTS_SETTING", +) diff --git a/server_addon/aftereffects/server/settings/creator_plugins.py b/server_addon/aftereffects/server/settings/creator_plugins.py new file mode 100644 index 00000000000..9cb03b0b266 --- /dev/null +++ b/server_addon/aftereffects/server/settings/creator_plugins.py @@ -0,0 +1,18 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CreateRenderPlugin(BaseSettingsModel): + mark_for_review: bool = Field(True, title="Review") + default_variants: list[str] = Field( + default_factory=list, + title="Default Variants" + ) + + +class AfterEffectsCreatorPlugins(BaseSettingsModel): + RenderCreator: CreateRenderPlugin = Field( + title="Create Render", + default_factory=CreateRenderPlugin, + ) diff --git a/server_addon/aftereffects/server/settings/imageio.py b/server_addon/aftereffects/server/settings/imageio.py new file mode 100644 index 00000000000..55160ffd11e --- /dev/null +++ b/server_addon/aftereffects/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class AfterEffectsImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/aftereffects/server/settings/main.py b/server_addon/aftereffects/server/settings/main.py new file mode 100644 
index 00000000000..4edc46d259b --- /dev/null +++ b/server_addon/aftereffects/server/settings/main.py @@ -0,0 +1,56 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from .imageio import AfterEffectsImageIOModel +from .creator_plugins import AfterEffectsCreatorPlugins +from .publish_plugins import ( + AfterEffectsPublishPlugins, + AE_PUBLISH_PLUGINS_DEFAULTS, +) +from .workfile_builder import WorkfileBuilderPlugin +from .templated_workfile_build import TemplatedWorkfileBuildModel + + +class AfterEffectsSettings(BaseSettingsModel): + """AfterEffects Project Settings.""" + + imageio: AfterEffectsImageIOModel = Field( + default_factory=AfterEffectsImageIOModel, + title="OCIO config" + ) + create: AfterEffectsCreatorPlugins = Field( + default_factory=AfterEffectsCreatorPlugins, + title="Creator plugins" + ) + publish: AfterEffectsPublishPlugins = Field( + default_factory=AfterEffectsPublishPlugins, + title="Publish plugins" + ) + workfile_builder: WorkfileBuilderPlugin = Field( + default_factory=WorkfileBuilderPlugin, + title="Workfile Builder" + ) + templated_workfile_build: TemplatedWorkfileBuildModel = Field( + default_factory=TemplatedWorkfileBuildModel, + title="Templated Workfile Build Settings" + ) + + +DEFAULT_AFTEREFFECTS_SETTING = { + "create": { + "RenderCreator": { + "mark_for_review": True, + "default_variants": [ + "Main" + ] + } + }, + "publish": AE_PUBLISH_PLUGINS_DEFAULTS, + "workfile_builder": { + "create_first_version": False, + "custom_templates": [] + }, + "templated_workfile_build": { + "profiles": [] + }, +} diff --git a/server_addon/aftereffects/server/settings/publish_plugins.py b/server_addon/aftereffects/server/settings/publish_plugins.py new file mode 100644 index 00000000000..78445d3223a --- /dev/null +++ b/server_addon/aftereffects/server/settings/publish_plugins.py @@ -0,0 +1,68 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CollectReviewPluginModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + + +class ValidateSceneSettingsModel(BaseSettingsModel): + """Validate naming of products and layers""" + + # _isGroup = True + enabled: bool = Field(True, title="Enabled") + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + skip_resolution_check: list[str] = Field( + default_factory=list, + title="Skip Resolution Check for Tasks", + ) + skip_timelines_check: list[str] = Field( + default_factory=list, + title="Skip Timeline Check for Tasks", + ) + + +class ValidateContainersModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + optional: bool = Field(True, title="Optional") + active: bool = Field(True, title="Active") + + +class AfterEffectsPublishPlugins(BaseSettingsModel): + CollectReview: CollectReviewPluginModel = Field( + default_factory=CollectReviewPluginModel, + title="Collect Review", + ) + ValidateSceneSettings: ValidateSceneSettingsModel = Field( + default_factory=ValidateSceneSettingsModel, + title="Validate Scene Settings", + ) + ValidateContainers: ValidateContainersModel = Field( + default_factory=ValidateContainersModel, + title="Validate Containers", + ) + + +AE_PUBLISH_PLUGINS_DEFAULTS = { + "CollectReview": { + "enabled": True + }, + "ValidateSceneSettings": { + "enabled": True, + "optional": True, + "active": True, + "skip_resolution_check": [ + ".*" + ], + "skip_timelines_check": [ + ".*" + ] + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True, + } 
+} diff --git a/server_addon/aftereffects/server/settings/templated_workfile_build.py b/server_addon/aftereffects/server/settings/templated_workfile_build.py new file mode 100644 index 00000000000..e0245c8d069 --- /dev/null +++ b/server_addon/aftereffects/server/settings/templated_workfile_build.py @@ -0,0 +1,33 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + task_types_enum, +) + + +class TemplatedWorkfileProfileModel(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + path: str = Field( + title="Path to template" + ) + keep_placeholder: bool = Field( + False, + title="Keep placeholders") + create_first_version: bool = Field( + True, + title="Create first version" + ) + + +class TemplatedWorkfileBuildModel(BaseSettingsModel): + profiles: list[TemplatedWorkfileProfileModel] = Field( + default_factory=list + ) diff --git a/server_addon/aftereffects/server/settings/workfile_builder.py b/server_addon/aftereffects/server/settings/workfile_builder.py new file mode 100644 index 00000000000..d45d3f7f24e --- /dev/null +++ b/server_addon/aftereffects/server/settings/workfile_builder.py @@ -0,0 +1,25 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel, MultiplatformPathModel + + +class CustomBuilderTemplate(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + ) + template_path: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel + ) + + +class WorkfileBuilderPlugin(BaseSettingsModel): + _title = "Workfile Builder" + create_first_version: bool = Field( + False, + title="Create first workfile" + ) + + custom_templates: list[CustomBuilderTemplate] = Field( + default_factory=list + ) diff --git a/server_addon/aftereffects/server/version.py b/server_addon/aftereffects/server/version.py new file mode 100644 index 00000000000..df0c92f1e27 --- /dev/null +++ b/server_addon/aftereffects/server/version.py @@ -0,0 +1,3 @@ +# -*- coding: utf-8 -*- +"""Package declaring addon version.""" +__version__ = "0.1.2" diff --git a/server_addon/applications/server/__init__.py b/server_addon/applications/server/__init__.py new file mode 100644 index 00000000000..e782e8a5917 --- /dev/null +++ b/server_addon/applications/server/__init__.py @@ -0,0 +1,233 @@ +import os +import json +import copy + +from ayon_server.addons import BaseServerAddon, AddonLibrary +from ayon_server.lib.postgres import Postgres + +from .version import __version__ +from .settings import ApplicationsAddonSettings, DEFAULT_VALUES + +try: + import semver +except ImportError: + semver = None + + +def sort_versions(addon_versions, reverse=False): + if semver is None: + for addon_version in sorted(addon_versions, reverse=reverse): + yield addon_version + return + + version_objs = [] + invalid_versions = [] + for addon_version in addon_versions: + try: + version_objs.append( + (addon_version, semver.VersionInfo.parse(addon_version)) + ) + except ValueError: + invalid_versions.append(addon_version) + + valid_versions = [ + addon_version + for addon_version, _ in sorted(version_objs, key=lambda x: x[1]) + ] + sorted_versions = list(sorted(invalid_versions)) + valid_versions + if reverse: + sorted_versions = reversed(sorted_versions) + for addon_version in sorted_versions: + yield addon_version + + +def merge_groups(output, new_groups): + groups_by_name = 
{
+        o_group["name"]: o_group
+        for o_group in output
+    }
+    extend_groups = []
+    for new_group in new_groups:
+        group_name = new_group["name"]
+        if group_name not in groups_by_name:
+            extend_groups.append(new_group)
+            continue
+        existing_group = groups_by_name[group_name]
+        existing_variants = existing_group["variants"]
+        existing_variants_by_name = {
+            variant["name"]: variant
+            for variant in existing_variants
+        }
+        for new_variant in new_group["variants"]:
+            if new_variant["name"] not in existing_variants_by_name:
+                existing_variants.append(new_variant)
+
+    output.extend(extend_groups)
+
+
+def get_enum_items_from_groups(groups):
+    label_by_name = {}
+    for group in groups:
+        group_name = group["name"]
+        group_label = group["label"] or group_name
+        for variant in group["variants"]:
+            variant_name = variant["name"]
+            if not variant_name:
+                continue
+            variant_label = variant["label"] or variant_name
+            full_name = f"{group_name}/{variant_name}"
+            full_label = f"{group_label} {variant_label}"
+            label_by_name[full_name] = full_label
+
+    return [
+        {"value": full_name, "label": label_by_name[full_name]}
+        for full_name in sorted(label_by_name)
+    ]
+
+
+class ApplicationsAddon(BaseServerAddon):
+    name = "applications"
+    title = "Applications"
+    version = __version__
+    settings_model = ApplicationsAddonSettings
+
+    async def get_default_settings(self):
+        applications_path = os.path.join(self.addon_dir, "applications.json")
+        tools_path = os.path.join(self.addon_dir, "tools.json")
+        default_values = copy.deepcopy(DEFAULT_VALUES)
+        with open(applications_path, "r") as stream:
+            default_values.update(json.load(stream))
+
+        with open(tools_path, "r") as stream:
+            default_values.update(json.load(stream))
+
+        return self.get_settings_model()(**default_values)
+
+    async def pre_setup(self):
+        """Make sure older versions of the addon use the new way of attributes."""
+
+        instance = AddonLibrary.getinstance()
+        app_defs = instance.data.get(self.name)
+        old_addon = app_defs.versions.get("0.1.0")
+        if old_addon is not None:
+            # Override 'create_applications_attribute' for older versions
+            # - avoid infinite server restart loop
+            old_addon.create_applications_attribute = (
+                self.create_applications_attribute
+            )
+
+    async def setup(self):
+        need_restart = await self.create_applications_attribute()
+        if need_restart:
+            self.request_server_restart()
+
+    async def create_applications_attribute(self) -> bool:
+        """Make sure the attributes required by the applications addon exist.
+
+        Returns:
+            bool: 'True' if an attribute was created or updated.
+ """ + + instance = AddonLibrary.getinstance() + app_defs = instance.data.get(self.name) + all_applications = [] + all_tools = [] + for addon_version in sort_versions( + app_defs.versions.keys(), reverse=True + ): + addon = app_defs.versions[addon_version] + for variant in ("production", "staging"): + settings_model = await addon.get_studio_settings(variant) + studio_settings = settings_model.dict() + application_settings = studio_settings["applications"] + app_groups = application_settings.pop("additional_apps") + for group_name, value in application_settings.items(): + value["name"] = group_name + app_groups.append(value) + merge_groups(all_applications, app_groups) + merge_groups(all_tools, studio_settings["tool_groups"]) + + query = "SELECT name, position, scope, data from public.attributes" + + apps_attrib_name = "applications" + tools_attrib_name = "tools" + + apps_enum = get_enum_items_from_groups(all_applications) + tools_enum = get_enum_items_from_groups(all_tools) + apps_attribute_data = { + "type": "list_of_strings", + "title": "Applications", + "enum": apps_enum + } + tools_attribute_data = { + "type": "list_of_strings", + "title": "Tools", + "enum": tools_enum + } + apps_scope = ["project"] + tools_scope = ["project", "folder", "task"] + + apps_match_position = None + apps_matches = False + tools_match_position = None + tools_matches = False + position = 1 + async for row in Postgres.iterate(query): + position += 1 + if row["name"] == apps_attrib_name: + # Check if scope is matching ftrack addon requirements + if ( + set(row["scope"]) == set(apps_scope) + and row["data"].get("enum") == apps_enum + ): + apps_matches = True + apps_match_position = row["position"] + + elif row["name"] == tools_attrib_name: + if ( + set(row["scope"]) == set(tools_scope) + and row["data"].get("enum") == tools_enum + ): + tools_matches = True + tools_match_position = row["position"] + + if apps_matches and tools_matches: + return False + + postgre_query = "\n".join(( + "INSERT INTO public.attributes", + " (name, position, scope, data)", + "VALUES", + " ($1, $2, $3, $4)", + "ON CONFLICT (name)", + "DO UPDATE SET", + " scope = $3,", + " data = $4", + )) + if not apps_matches: + # Reuse position from found attribute + if apps_match_position is None: + apps_match_position = position + position += 1 + + await Postgres.execute( + postgre_query, + apps_attrib_name, + apps_match_position, + apps_scope, + apps_attribute_data, + ) + + if not tools_matches: + if tools_match_position is None: + tools_match_position = position + position += 1 + + await Postgres.execute( + postgre_query, + tools_attrib_name, + tools_match_position, + tools_scope, + tools_attribute_data, + ) + return True diff --git a/server_addon/applications/server/applications.json b/server_addon/applications/server/applications.json new file mode 100644 index 00000000000..8e5b28623ec --- /dev/null +++ b/server_addon/applications/server/applications.json @@ -0,0 +1,1123 @@ +{ + "applications": { + "maya": { + "enabled": true, + "label": "Maya", + "icon": "{}/app_icons/maya.png", + "host_name": "maya", + "environment": "{\n \"MAYA_DISABLE_CLIC_IPM\": \"Yes\",\n \"MAYA_DISABLE_CIP\": \"Yes\",\n \"MAYA_DISABLE_CER\": \"Yes\",\n \"PYMEL_SKIP_MEL_INIT\": \"Yes\",\n \"LC_ALL\": \"C\"\n}\n", + "variants": [ + { + "name": "2023", + "label": "2023", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2023\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2023/bin/maya" + ] + }, + "arguments": { + "windows": [], + 
"darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2023\"\n}", + "use_python_2": false + }, + { + "name": "2022", + "label": "2022", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2022\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2022/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2022\"\n}", + "use_python_2": false + }, + { + "name": "2020", + "label": "2020", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2020\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2020/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2020\"\n}", + "use_python_2": true + }, + { + "name": "2019", + "label": "2019", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2019\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2019/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2019\"\n}", + "use_python_2": true + }, + { + "name": "2018", + "label": "2018", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2018\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2018/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2018\"\n}", + "use_python_2": true + } + ] + }, + "adsk_3dsmax": { + "enabled": true, + "label": "3ds Max", + "icon": "{}/app_icons/3dsmax.png", + "host_name": "max", + "environment": "{\n \"ADSK_3DSMAX_STARTUPSCRIPTS_ADDON_DIR\": \"{OPENPYPE_ROOT}/openpype/hosts/max/startup\"\n}", + "variants": [ + { + "name": "2023", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\3ds Max 2023\\3dsmax.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"3DSMAX_VERSION\": \"2023\"\n}" + } + ] + }, + "flame": { + "enabled": true, + "label": "Flame", + "icon": "{}/app_icons/flame.png", + "host_name": "flame", + "environment": "{\n \"FLAME_SCRIPT_DIRS\": {\n \"windows\": \"\",\n \"darwin\": \"\",\n \"linux\": \"\"\n },\n \"FLAME_WIRETAP_HOSTNAME\": \"\",\n \"FLAME_WIRETAP_VOLUME\": \"stonefs\",\n \"FLAME_WIRETAP_GROUP\": \"staff\"\n}", + "variants": [ + { + "name": "2021", + "label": "2021", + "executables": { + "windows": [], + "darwin": [ + "/opt/Autodesk/flame_2021/bin/flame.app/Contents/MacOS/startApp" + ], + "linux": [ + "/opt/Autodesk/flame_2021/bin/startApplication" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"OPENPYPE_FLAME_PYTHON_EXEC\": \"/opt/Autodesk/python/2021/bin/python2.7\",\n \"OPENPYPE_FLAME_PYTHONPATH\": \"/opt/Autodesk/flame_2021/python\",\n \"OPENPYPE_WIRETAP_TOOLS\": \"/opt/Autodesk/wiretap/tools/2021\"\n}", + "use_python_2": true + }, + { + "name": "2021_1", + "label": "2021.1", + "executables": { + "windows": [], + "darwin": [ + "/opt/Autodesk/flame_2021.1/bin/flame.app/Contents/MacOS/startApp" + ], + "linux": [ + "/opt/Autodesk/flame_2021.1/bin/startApplication" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"OPENPYPE_FLAME_PYTHON_EXEC\": \"/opt/Autodesk/python/2021.1/bin/python2.7\",\n \"OPENPYPE_FLAME_PYTHONPATH\": 
\"/opt/Autodesk/flame_2021.1/python\",\n \"OPENPYPE_WIRETAP_TOOLS\": \"/opt/Autodesk/wiretap/tools/2021.1\"\n}", + "use_python_2": true + } + ] + }, + "nuke": { + "enabled": true, + "label": "Nuke", + "icon": "{}/app_icons/nuke.png", + "host_name": "nuke", + "environment": "{\n \"NUKE_PATH\": [\n \"{NUKE_PATH}\",\n \"{OPENPYPE_STUDIO_PLUGINS}/nuke\"\n ]\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/Nuke14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/Nuke13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/Nuke13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "nukeassist": { + "enabled": true, + "label": "Nuke Assist", + "icon": "{}/app_icons/nuke.png", + "host_name": "nuke", + "environment": "{\n \"NUKE_PATH\": [\n \"{NUKE_PATH}\",\n \"{OPENPYPE_STUDIO_PLUGINS}/nuke\"\n ]\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/NukeAssist14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--nukeassist" + ], + "darwin": [], + "linux": [ + "--nukeassist" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/NukeAssist13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [ + "--nukeassist" + ], + "darwin": [], + "linux": [ + "--nukeassist" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/NukeAssist13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [ + "--nukeassist" + ], + "darwin": [], + "linux": [ + "--nukeassist" + ] + }, + "environment": "{}" + } + ] + }, + "nukex": { + "enabled": true, + "label": "Nuke X", + "icon": "{}/app_icons/nukex.png", + "host_name": "nuke", + "environment": "{\n \"NUKE_PATH\": [\n \"{NUKE_PATH}\",\n \"{OPENPYPE_STUDIO_PLUGINS}/nuke\"\n ]\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/NukeX14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--nukex" + ], + "darwin": [], + "linux": [ + "--nukex" + ] + }, + "environment": "{}", + 
"use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/NukeX13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [ + "--nukex" + ], + "darwin": [], + "linux": [ + "--nukex" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/NukeX13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [ + "--nukex" + ], + "darwin": [], + "linux": [ + "--nukex" + ] + }, + "environment": "{}" + } + ] + }, + "nukestudio": { + "enabled": true, + "label": "Nuke Studio", + "icon": "{}/app_icons/nukestudio.png", + "host_name": "hiero", + "environment": "{\n \"WORKFILES_STARTUP\": \"0\",\n \"TAG_ASSETBUILD_STARTUP\": \"0\"\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/NukeStudio14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--studio" + ], + "darwin": [], + "linux": [ + "--studio" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/NukeStudio13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [ + "--studio" + ], + "darwin": [], + "linux": [ + "--studio" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/NukeStudio13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [ + "--studio" + ], + "darwin": [], + "linux": [ + "--studio" + ] + }, + "environment": "{}" + } + ] + }, + "hiero": { + "enabled": true, + "label": "Hiero", + "icon": "{}/app_icons/hiero.png", + "host_name": "hiero", + "environment": "{\n \"WORKFILES_STARTUP\": \"0\",\n \"TAG_ASSETBUILD_STARTUP\": \"0\"\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/Hiero14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--hiero" + ], + "darwin": [], + "linux": [ + "--hiero" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/Hiero13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [ + "--hiero" + ], + "darwin": [], + "linux": [ + "--hiero" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/Hiero13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": 
{ + "windows": [ + "--hiero" + ], + "darwin": [], + "linux": [ + "--hiero" + ] + }, + "environment": "{}" + } + ] + }, + "fusion": { + "enabled": true, + "label": "Fusion", + "icon": "{}/app_icons/fusion.png", + "host_name": "fusion", + "environment": "{\n \"FUSION_PYTHON3_HOME\": {\n \"windows\": \"{LOCALAPPDATA}/Programs/Python/Python36\",\n \"darwin\": \"~/Library/Python/3.6/bin\",\n \"linux\": \"/opt/Python/3.6/bin\"\n }\n}", + "variants": [ + { + "name": "17", + "label": "17", + "executables": { + "windows": [ + "C:\\Program Files\\Blackmagic Design\\Fusion 17\\Fusion.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "16", + "label": "16", + "executables": { + "windows": [ + "C:\\Program Files\\Blackmagic Design\\Fusion 16\\Fusion.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "9", + "label": "9", + "executables": { + "windows": [ + "C:\\Program Files\\Blackmagic Design\\Fusion 9\\Fusion.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "resolve": { + "enabled": true, + "label": "Resolve", + "icon": "{}/app_icons/resolve.png", + "host_name": "resolve", + "environment": "{\n \"RESOLVE_UTILITY_SCRIPTS_SOURCE_DIR\": [],\n \"RESOLVE_PYTHON3_HOME\": {\n \"windows\": \"{LOCALAPPDATA}/Programs/Python/Python36\",\n \"darwin\": \"~/Library/Python/3.6/bin\",\n \"linux\": \"/opt/Python/3.6/bin\"\n }\n}", + "variants": [ + { + "name": "stable", + "label": "stable", + "executables": { + "windows": [ + "C:/Program Files/Blackmagic Design/DaVinci Resolve/Resolve.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "houdini": { + "enabled": true, + "label": "Houdini", + "icon": "{}/app_icons/houdini.png", + "host_name": "houdini", + "environment": "{}", + "variants": [ + { + "name": "18-5", + "label": "18.5", + "executables": { + "windows": [ + "C:\\Program Files\\Side Effects Software\\Houdini 18.5.499\\bin\\houdini.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": true + }, + { + "name": "18", + "label": "18", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": true + }, + { + "name": "17", + "label": "17", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": true + } + ] + }, + "blender": { + "enabled": true, + "label": "Blender", + "icon": "{}/app_icons/blender.png", + "host_name": "blender", + "environment": "{}", + "variants": [ + { + "name": "2-83", + "label": "2.83", + "executables": { + "windows": [ + "C:\\Program Files\\Blender Foundation\\Blender 2.83\\blender.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [ + "--python-use-system-env" + ], + "darwin": [ + "--python-use-system-env" + ], + "linux": [ + "--python-use-system-env" + ] + }, + "environment": "{}" + }, + { + "name": "2-90", + "label": "2.90", + "executables": { + "windows": [ + "C:\\Program Files\\Blender Foundation\\Blender 2.90\\blender.exe" 
+ ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [ + "--python-use-system-env" + ], + "darwin": [ + "--python-use-system-env" + ], + "linux": [ + "--python-use-system-env" + ] + }, + "environment": "{}" + }, + { + "name": "2-91", + "label": "2.91", + "executables": { + "windows": [ + "C:\\Program Files\\Blender Foundation\\Blender 2.91\\blender.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [ + "--python-use-system-env" + ], + "darwin": [ + "--python-use-system-env" + ], + "linux": [ + "--python-use-system-env" + ] + }, + "environment": "{}" + } + ] + }, + "harmony": { + "enabled": true, + "label": "Harmony", + "icon": "{}/app_icons/harmony.png", + "host_name": "harmony", + "environment": "{\n \"AVALON_HARMONY_WORKFILES_ON_LAUNCH\": \"1\"\n}", + "variants": [ + { + "name": "21", + "label": "21", + "executables": { + "windows": [ + "c:\\Program Files (x86)\\Toon Boom Animation\\Toon Boom Harmony 21 Premium\\win64\\bin\\HarmonyPremium.exe" + ], + "darwin": [ + "/Applications/Toon Boom Harmony 21 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium" + ], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "20", + "label": "20", + "executables": { + "windows": [ + "c:\\Program Files (x86)\\Toon Boom Animation\\Toon Boom Harmony 20 Premium\\win64\\bin\\HarmonyPremium.exe" + ], + "darwin": [ + "/Applications/Toon Boom Harmony 20 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium" + ], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "17", + "label": "17", + "executables": { + "windows": [ + "c:\\Program Files (x86)\\Toon Boom Animation\\Toon Boom Harmony 17 Premium\\win64\\bin\\HarmonyPremium.exe" + ], + "darwin": [ + "/Applications/Toon Boom Harmony 17 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium" + ], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "tvpaint": { + "enabled": true, + "label": "TVPaint", + "icon": "{}/app_icons/tvpaint.png", + "host_name": "tvpaint", + "environment": "{}", + "variants": [ + { + "name": "animation_11-64bits", + "label": "11 (64bits)", + "executables": { + "windows": [ + "C:\\Program Files\\TVPaint Developpement\\TVPaint Animation 11 (64bits)\\TVPaint Animation 11 (64bits).exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "animation_11-32bits", + "label": "11 (32bits)", + "executables": { + "windows": [ + "C:\\Program Files (x86)\\TVPaint Developpement\\TVPaint Animation 11 (32bits)\\TVPaint Animation 11 (32bits).exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "photoshop": { + "enabled": true, + "label": "Photoshop", + "icon": "{}/app_icons/photoshop.png", + "host_name": "photoshop", + "environment": "{\n \"AVALON_PHOTOSHOP_WORKFILES_ON_LAUNCH\": \"1\",\n \"WORKFILES_SAVE_AS\": \"Yes\"\n}", + "variants": [ + { + "name": "2020", + "label": "2020", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe Photoshop 2020\\Photoshop.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2021", + "label": "2021", + "executables": { + "windows": [ + 
"C:\\Program Files\\Adobe\\Adobe Photoshop 2021\\Photoshop.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2022", + "label": "2022", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe Photoshop 2022\\Photoshop.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "aftereffects": { + "enabled": true, + "label": "AfterEffects", + "icon": "{}/app_icons/aftereffects.png", + "host_name": "aftereffects", + "environment": "{\n \"AVALON_AFTEREFFECTS_WORKFILES_ON_LAUNCH\": \"1\",\n \"WORKFILES_SAVE_AS\": \"Yes\"\n}", + "variants": [ + { + "name": "2020", + "label": "2020", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe After Effects 2020\\Support Files\\AfterFX.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2021", + "label": "2021", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe After Effects 2021\\Support Files\\AfterFX.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2022", + "label": "2022", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe After Effects 2022\\Support Files\\AfterFX.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MULTIPROCESS\": \"No\"\n}" + } + ] + }, + "celaction": { + "enabled": true, + "label": "CelAction 2D", + "icon": "app_icons/celaction.png", + "host_name": "celaction", + "environment": "{\n \"CELACTION_TEMPLATE\": \"{OPENPYPE_REPOS_ROOT}/openpype/hosts/celaction/celaction_template_scene.scn\"\n}", + "variants": [ + { + "name": "local", + "label": "local", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "unreal": { + "enabled": true, + "label": "Unreal Editor", + "icon": "{}/app_icons/ue4.png", + "host_name": "unreal", + "environment": "{}", + "variants": [ + { + "name": "4-26", + "label": "4.26", + "executables": {}, + "arguments": {}, + "environment": "{}" + } + ] + }, + "djvview": { + "enabled": true, + "label": "DJV View", + "icon": "{}/app_icons/djvView.png", + "host_name": "", + "environment": "{}", + "variants": [ + { + "name": "1-1", + "label": "1.1", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "additional_apps": [] + } +} diff --git a/server_addon/applications/server/settings.py b/server_addon/applications/server/settings.py new file mode 100644 index 00000000000..fd481b6ce82 --- /dev/null +++ b/server_addon/applications/server/settings.py @@ -0,0 +1,201 @@ +import json +from pydantic import Field, validator + +from ayon_server.settings import BaseSettingsModel, ensure_unique_names +from ayon_server.exceptions import BadRequestException + + +def validate_json_dict(value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError as exc: + print(exc) + success = False + + if not success: + raise BadRequestException( + 
"Environment's can't be parsed as json object" + ) + return value + + +class MultiplatformStrList(BaseSettingsModel): + windows: list[str] = Field(default_factory=list, title="Windows") + linux: list[str] = Field(default_factory=list, title="Linux") + darwin: list[str] = Field(default_factory=list, title="MacOS") + + +class AppVariant(BaseSettingsModel): + name: str = Field("", title="Name") + label: str = Field("", title="Label") + executables: MultiplatformStrList = Field( + default_factory=MultiplatformStrList, title="Executables" + ) + arguments: MultiplatformStrList = Field( + default_factory=MultiplatformStrList, title="Arguments" + ) + environment: str = Field("{}", title="Environment", widget="textarea") + + @validator("environment") + def validate_json(cls, value): + return validate_json_dict(value) + + +class AppVariantWithPython(AppVariant): + use_python_2: bool = Field(False, title="Use Python 2") + + +class AppGroup(BaseSettingsModel): + enabled: bool = Field(True) + label: str = Field("", title="Label") + host_name: str = Field("", title="Host name") + icon: str = Field("", title="Icon") + environment: str = Field("{}", title="Environment", widget="textarea") + + variants: list[AppVariant] = Field( + default_factory=list, + title="Variants", + description="Different variants of the applications", + section="Variants", + ) + + @validator("variants") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class AppGroupWithPython(AppGroup): + variants: list[AppVariantWithPython] = Field( + default_factory=list, + title="Variants", + description="Different variants of the applications", + section="Variants", + ) + + +class AdditionalAppGroup(BaseSettingsModel): + enabled: bool = Field(True) + name: str = Field("", title="Name") + label: str = Field("", title="Label") + host_name: str = Field("", title="Host name") + icon: str = Field("", title="Icon") + environment: str = Field("{}", title="Environment", widget="textarea") + + variants: list[AppVariantWithPython] = Field( + default_factory=list, + title="Variants", + description="Different variants of the applications", + section="Variants", + ) + + @validator("variants") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class ToolVariantModel(BaseSettingsModel): + name: str = Field("", title="Name") + label: str = Field("", title="Label") + host_names: list[str] = Field(default_factory=list, title="Hosts") + # TODO use applications enum if possible + app_variants: list[str] = Field(default_factory=list, title="Applications") + environment: str = Field("{}", title="Environments", widget="textarea") + + @validator("environment") + def validate_json(cls, value): + return validate_json_dict(value) + + +class ToolGroupModel(BaseSettingsModel): + name: str = Field("", title="Name") + label: str = Field("", title="Label") + environment: str = Field("{}", title="Environments", widget="textarea") + variants: list[ToolVariantModel] = Field( + default_factory=ToolVariantModel + ) + + @validator("environment") + def validate_json(cls, value): + return validate_json_dict(value) + + @validator("variants") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class ApplicationsSettings(BaseSettingsModel): + """Applications settings""" + + maya: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Autodesk Maya") + adsk_3dsmax: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Autodesk 3ds Max") 
+ flame: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Autodesk Flame") + nuke: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke") + nukeassist: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke Assist") + nukex: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke X") + nukestudio: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke Studio") + hiero: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Hiero") + fusion: AppGroup = Field( + default_factory=AppGroupWithPython, title="Fusion") + resolve: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Resolve") + houdini: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Houdini") + blender: AppGroup = Field( + default_factory=AppGroupWithPython, title="Blender") + harmony: AppGroup = Field( + default_factory=AppGroupWithPython, title="Harmony") + tvpaint: AppGroup = Field( + default_factory=AppGroupWithPython, title="TVPaint") + photoshop: AppGroup = Field( + default_factory=AppGroupWithPython, title="Adobe Photoshop") + aftereffects: AppGroup = Field( + default_factory=AppGroupWithPython, title="Adobe After Effects") + celaction: AppGroup = Field( + default_factory=AppGroupWithPython, title="Celaction 2D") + unreal: AppGroup = Field( + default_factory=AppGroupWithPython, title="Unreal Editor") + additional_apps: list[AdditionalAppGroup] = Field( + default_factory=list, title="Additional Applications") + + @validator("additional_apps") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class ApplicationsAddonSettings(BaseSettingsModel): + applications: ApplicationsSettings = Field( + default_factory=ApplicationsSettings, + title="Applications", + scope=["studio"] + ) + tool_groups: list[ToolGroupModel] = Field( + default_factory=list, + scope=["studio"] + ) + only_available: bool = Field( + True, title="Show only available applications") + + @validator("tool_groups") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +DEFAULT_VALUES = { + "only_available": False +} diff --git a/server_addon/applications/server/tools.json b/server_addon/applications/server/tools.json new file mode 100644 index 00000000000..54bee11cf70 --- /dev/null +++ b/server_addon/applications/server/tools.json @@ -0,0 +1,55 @@ +{ + "tool_groups": [ + { + "environment": "{\n \"MTOA\": \"{STUDIO_SOFTWARE}/arnold/mtoa_{MAYA_VERSION}_{MTOA_VERSION}\",\n \"MAYA_RENDER_DESC_PATH\": \"{MTOA}\",\n \"MAYA_MODULE_PATH\": \"{MTOA}\",\n \"ARNOLD_PLUGIN_PATH\": \"{MTOA}/shaders\",\n \"MTOA_EXTENSIONS_PATH\": {\n \"darwin\": \"{MTOA}/extensions\",\n \"linux\": \"{MTOA}/extensions\",\n \"windows\": \"{MTOA}/extensions\"\n },\n \"MTOA_EXTENSIONS\": {\n \"darwin\": \"{MTOA}/extensions\",\n \"linux\": \"{MTOA}/extensions\",\n \"windows\": \"{MTOA}/extensions\"\n },\n \"DYLD_LIBRARY_PATH\": {\n \"darwin\": \"{MTOA}/bin\"\n },\n \"PATH\": {\n \"windows\": \"{PATH};{MTOA}/bin\"\n }\n}", + "name": "mtoa", + "label": "Autodesk Arnold", + "variants": [ + { + "host_names": [], + "app_variants": [], + "environment": "{\n \"MTOA_VERSION\": \"3.2\"\n}", + "name": "3-2", + "label": "3.2" + }, + { + "host_names": [], + "app_variants": [], + "environment": "{\n \"MTOA_VERSION\": \"3.1\"\n}", + "name": "3-1", + "label": "3.1" + } + ] + }, + { + "environment": "{}", + "name": "vray", + "label": "Chaos Group Vray", + "variants": [] 
+ }, + { + "environment": "{}", + "name": "yeti", + "label": "Peregrine Labs Yeti", + "variants": [] + }, + { + "environment": "{}", + "name": "renderman", + "label": "Pixar Renderman", + "variants": [ + { + "host_names": [ + "maya" + ], + "app_variants": [ + "maya/2022" + ], + "environment": "{\n \"RFMTREE\": {\n \"windows\": \"C:\\\\Program Files\\\\Pixar\\\\RenderManForMaya-24.3\",\n \"darwin\": \"/Applications/Pixar/RenderManForMaya-24.3\",\n \"linux\": \"/opt/pixar/RenderManForMaya-24.3\"\n },\n \"RMANTREE\": {\n \"windows\": \"C:\\\\Program Files\\\\Pixar\\\\RenderManProServer-24.3\",\n \"darwin\": \"/Applications/Pixar/RenderManProServer-24.3\",\n \"linux\": \"/opt/pixar/RenderManProServer-24.3\"\n }\n}", + "name": "24-3-maya", + "label": "24.3 RFM" + } + ] + } + ] +} diff --git a/server_addon/applications/server/version.py b/server_addon/applications/server/version.py new file mode 100644 index 00000000000..485f44ac21b --- /dev/null +++ b/server_addon/applications/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/blender/server/__init__.py b/server_addon/blender/server/__init__.py new file mode 100644 index 00000000000..a7d6cb4400f --- /dev/null +++ b/server_addon/blender/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import BlenderSettings, DEFAULT_VALUES + + +class BlenderAddon(BaseServerAddon): + name = "blender" + title = "Blender" + version = __version__ + settings_model: Type[BlenderSettings] = BlenderSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/blender/server/settings/__init__.py b/server_addon/blender/server/settings/__init__.py new file mode 100644 index 00000000000..3d51e5c3e15 --- /dev/null +++ b/server_addon/blender/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + BlenderSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "BlenderSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/blender/server/settings/imageio.py b/server_addon/blender/server/settings/imageio.py new file mode 100644 index 00000000000..a6d3c5ff643 --- /dev/null +++ b/server_addon/blender/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class BlenderImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO 
config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/blender/server/settings/main.py b/server_addon/blender/server/settings/main.py new file mode 100644 index 00000000000..f6118d39cd1 --- /dev/null +++ b/server_addon/blender/server/settings/main.py @@ -0,0 +1,63 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + TemplateWorkfileBaseOptions, +) + +from .imageio import BlenderImageIOModel +from .publish_plugins import ( + PublishPuginsModel, + DEFAULT_BLENDER_PUBLISH_SETTINGS +) + + +class UnitScaleSettingsModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + apply_on_opening: bool = Field( + False, title="Apply on Opening Existing Files") + base_file_unit_scale: float = Field( + 1.0, title="Base File Unit Scale" + ) + + +class BlenderSettings(BaseSettingsModel): + unit_scale_settings: UnitScaleSettingsModel = Field( + default_factory=UnitScaleSettingsModel, + title="Set Unit Scale" + ) + set_resolution_startup: bool = Field( + True, + title="Set Resolution on Startup" + ) + set_frames_startup: bool = Field( + True, + title="Set Start/End Frames and FPS on Startup" + ) + imageio: BlenderImageIOModel = Field( + default_factory=BlenderImageIOModel, + title="Color Management (ImageIO)" + ) + workfile_builder: TemplateWorkfileBaseOptions = Field( + default_factory=TemplateWorkfileBaseOptions, + title="Workfile Builder" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish Plugins" + ) + + +DEFAULT_VALUES = { + "unit_scale_settings": { + "enabled": True, + "apply_on_opening": False, + "base_file_unit_scale": 0.01 + }, + "set_frames_startup": True, + "set_resolution_startup": True, + "publish": DEFAULT_BLENDER_PUBLISH_SETTINGS, + "workfile_builder": { + "create_first_version": False, + "custom_templates": [] + } +} diff --git a/server_addon/blender/server/settings/publish_plugins.py b/server_addon/blender/server/settings/publish_plugins.py new file mode 100644 index 00000000000..65dda78411d --- /dev/null +++ b/server_addon/blender/server/settings/publish_plugins.py @@ -0,0 +1,283 @@ +import json +from pydantic import Field, validator +from ayon_server.exceptions import BadRequestException +from ayon_server.settings import BaseSettingsModel + + +def validate_json_dict(value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "Environment's can't be parsed as json object" + ) + return value + + +class ValidatePluginModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class ExtractBlendModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + families: list[str] = Field( + default_factory=list, + title="Families" + ) + + +class ExtractPlayblastModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + presets: str = Field("", title="Presets", widget="textarea") + + @validator("presets") + def validate_json(cls, value): + return validate_json_dict(value) + + +class PublishPuginsModel(BaseSettingsModel): + ValidateCameraZeroKeyframe: ValidatePluginModel = Field( + 
default_factory=ValidatePluginModel, + title="Validate Camera Zero Keyframe", + section="Validators" + ) + ValidateMeshHasUvs: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Mesh Has Uvs" + ) + ValidateMeshNoNegativeScale: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Mesh No Negative Scale" + ) + ValidateTransformZero: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Transform Zero" + ) + ValidateNoColonsInName: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate No Colons In Name" + ) + ExtractBlend: ExtractBlendModel = Field( + default_factory=ExtractBlendModel, + title="Extract Blend", + section="Extractors" + ) + ExtractFBX: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract FBX" + ) + ExtractABC: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract ABC" + ) + ExtractBlendAnimation: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Blend Animation" + ) + ExtractAnimationFBX: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Animation FBX" + ) + ExtractCamera: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Camera" + ) + ExtractCameraABC: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Camera as ABC" + ) + ExtractLayout: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Layout" + ) + ExtractThumbnail: ExtractPlayblastModel = Field( + default_factory=ExtractPlayblastModel, + title="Extract Thumbnail" + ) + ExtractPlayblast: ExtractPlayblastModel = Field( + default_factory=ExtractPlayblastModel, + title="Extract Playblast" + ) + + +DEFAULT_BLENDER_PUBLISH_SETTINGS = { + "ValidateCameraZeroKeyframe": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshHasUvs": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshNoNegativeScale": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateTransformZero": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoColonsInName": { + "enabled": True, + "optional": False, + "active": True + }, + "ExtractBlend": { + "enabled": True, + "optional": True, + "active": True, + "families": [ + "model", + "camera", + "rig", + "action", + "layout", + "blendScene" + ] + }, + "ExtractFBX": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractABC": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractBlendAnimation": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractAnimationFBX": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractCamera": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractCameraABC": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractLayout": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractThumbnail": { + "enabled": True, + "optional": True, + "active": True, + "presets": json.dumps( + { + "model": { + "image_settings": { + "file_format": "JPEG", + "color_mode": "RGB", + "quality": 100 + }, + "display_options": { + "shading": { + "light": "STUDIO", + "studio_light": "Default", + "type": "SOLID", + "color_type": "OBJECT", + "show_xray": False, + "show_shadows": False, + "show_cavity": True + }, + "overlay": { + 
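+                            # Presumably these keys mirror Blender viewport
+                            # overlay toggles (bpy.types.View3DOverlay
+                            # attributes) applied while capturing.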
"show_overlays": False + } + } + }, + "rig": { + "image_settings": { + "file_format": "JPEG", + "color_mode": "RGB", + "quality": 100 + }, + "display_options": { + "shading": { + "light": "STUDIO", + "studio_light": "Default", + "type": "SOLID", + "color_type": "OBJECT", + "show_xray": True, + "show_shadows": False, + "show_cavity": False + }, + "overlay": { + "show_overlays": True, + "show_ortho_grid": False, + "show_floor": False, + "show_axis_x": False, + "show_axis_y": False, + "show_axis_z": False, + "show_text": False, + "show_stats": False, + "show_cursor": False, + "show_annotation": False, + "show_extras": False, + "show_relationship_lines": False, + "show_outline_selected": False, + "show_motion_paths": False, + "show_object_origins": False, + "show_bones": True + } + } + } + }, + indent=4, + ) + }, + "ExtractPlayblast": { + "enabled": True, + "optional": True, + "active": True, + "presets": json.dumps( + { + "default": { + "image_settings": { + "file_format": "PNG", + "color_mode": "RGB", + "color_depth": "8", + "compression": 15 + }, + "display_options": { + "shading": { + "type": "MATERIAL", + "render_pass": "COMBINED" + }, + "overlay": { + "show_overlays": False + } + } + } + }, + indent=4 + ) + } +} diff --git a/server_addon/blender/server/version.py b/server_addon/blender/server/version.py new file mode 100644 index 00000000000..485f44ac21b --- /dev/null +++ b/server_addon/blender/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/celaction/server/__init__.py b/server_addon/celaction/server/__init__.py new file mode 100644 index 00000000000..90d3dbaa016 --- /dev/null +++ b/server_addon/celaction/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import CelActionSettings, DEFAULT_VALUES + + +class CelActionAddon(BaseServerAddon): + name = "celaction" + title = "CelAction" + version = __version__ + settings_model: Type[CelActionSettings] = CelActionSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/celaction/server/imageio.py b/server_addon/celaction/server/imageio.py new file mode 100644 index 00000000000..72da441528c --- /dev/null +++ b/server_addon/celaction/server/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class CelActionImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = 
Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/celaction/server/settings.py b/server_addon/celaction/server/settings.py new file mode 100644 index 00000000000..68d1d2dc312 --- /dev/null +++ b/server_addon/celaction/server/settings.py @@ -0,0 +1,92 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel +from .imageio import CelActionImageIOModel + + +class CollectRenderPathModel(BaseSettingsModel): + output_extension: str = Field( + "", + title="Output render file extension" + ) + anatomy_template_key_render_files: str = Field( + "", + title="Anatomy template key: render files" + ) + anatomy_template_key_metadata: str = Field( + "", + title="Anatomy template key: metadata job file" + ) + + +def _workfile_submit_overrides(): + return [ + { + "value": "render_chunk", + "label": "Pass chunk size" + }, + { + "value": "frame_range", + "label": "Pass frame range" + }, + { + "value": "resolution", + "label": "Pass resolution" + } + ] + + +class WorkfileModel(BaseSettingsModel): + submission_overrides: list[str] = Field( + default_factory=list, + title="Submission workfile overrides", + enum_resolver=_workfile_submit_overrides + ) + + +class PublishPuginsModel(BaseSettingsModel): + CollectRenderPath: CollectRenderPathModel = Field( + default_factory=CollectRenderPathModel, + title="Collect Render Path" + ) + + +class CelActionSettings(BaseSettingsModel): + imageio: CelActionImageIOModel = Field( + default_factory=CelActionImageIOModel, + title="Color Management (ImageIO)" + ) + workfile: WorkfileModel = Field( + title="Workfile" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish plugins", + ) + + +DEFAULT_VALUES = { + "imageio": { + "ocio_config": { + "enabled": False, + "filepath": [] + }, + "file_rules": { + "enabled": False, + "rules": [] + } + }, + "workfile": { + "submission_overrides": [ + "render_chunk", + "frame_range", + "resolution" + ] + }, + "publish": { + "CollectRenderPath": { + "output_extension": "png", + "anatomy_template_key_render_files": "render", + "anatomy_template_key_metadata": "render" + } + } +} diff --git a/server_addon/celaction/server/version.py b/server_addon/celaction/server/version.py new file mode 100644 index 00000000000..3dc1f76bc69 --- /dev/null +++ b/server_addon/celaction/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/clockify/server/__init__.py b/server_addon/clockify/server/__init__.py new file mode 100644 index 00000000000..0fa453fdf46 --- /dev/null +++ b/server_addon/clockify/server/__init__.py @@ -0,0 +1,15 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import ClockifySettings + + +class ClockifyAddon(BaseServerAddon): + name = "clockify" + title = "Clockify" + version = __version__ + settings_model: Type[ClockifySettings] = ClockifySettings + frontend_scopes = {} + services = {} diff --git a/server_addon/clockify/server/settings.py b/server_addon/clockify/server/settings.py new file mode 100644 index 00000000000..9067cd42435 --- /dev/null +++ b/server_addon/clockify/server/settings.py @@ -0,0 +1,10 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class ClockifySettings(BaseSettingsModel): + workspace_name: str = Field( + "", + title="Workspace name", + scope=["studio"] + 
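+        # Studio scope: the workspace name is configured once in studio
+        # settings, not per project.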
) diff --git a/server_addon/clockify/server/version.py b/server_addon/clockify/server/version.py new file mode 100644 index 00000000000..485f44ac21b --- /dev/null +++ b/server_addon/clockify/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/core/server/__init__.py b/server_addon/core/server/__init__.py new file mode 100644 index 00000000000..4de2b038a5c --- /dev/null +++ b/server_addon/core/server/__init__.py @@ -0,0 +1,15 @@ +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import CoreSettings, DEFAULT_VALUES + + +class CoreAddon(BaseServerAddon): + name = "core" + title = "Core" + version = __version__ + settings_model = CoreSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/core/server/settings/__init__.py b/server_addon/core/server/settings/__init__.py new file mode 100644 index 00000000000..527a2bdc0c0 --- /dev/null +++ b/server_addon/core/server/settings/__init__.py @@ -0,0 +1,7 @@ +from .main import CoreSettings, DEFAULT_VALUES + + +__all__ = ( + "CoreSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/core/server/settings/main.py b/server_addon/core/server/settings/main.py new file mode 100644 index 00000000000..ca8f7e63edd --- /dev/null +++ b/server_addon/core/server/settings/main.py @@ -0,0 +1,207 @@ +import json +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathListModel, + ensure_unique_names, + task_types_enum, +) +from ayon_server.exceptions import BadRequestException + +from .publish_plugins import PublishPuginsModel, DEFAULT_PUBLISH_VALUES +from .tools import GlobalToolsModel, DEFAULT_TOOLS_VALUES + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class CoreImageIOFileRulesModel(BaseSettingsModel): + activate_global_file_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class CoreImageIOConfigModel(BaseSettingsModel): + filepath: list[str] = Field(default_factory=list, title="Config path") + + +class CoreImageIOBaseModel(BaseSettingsModel): + activate_global_color_management: bool = Field( + False, + title="Enable Color Management" + ) + ocio_config: CoreImageIOConfigModel = Field( + default_factory=CoreImageIOConfigModel, + title="OCIO config" + ) + file_rules: CoreImageIOFileRulesModel = Field( + default_factory=CoreImageIOFileRulesModel, + title="File Rules" + ) + + +class VersionStartCategoryProfileModel(BaseSettingsModel): + _layout = "expanded" + host_names: list[str] = Field( + default_factory=list, + title="Host names" + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + product_names: list[str] = Field( + default_factory=list, + title="Product names" + ) + version_start: int = Field( + 1, + title="Version Start", + ge=0 + ) + + +class VersionStartCategoryModel(BaseSettingsModel): + 
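+    # Each profile picks the starting version number for publishes whose
+    # context matches its host/task/product filters (defaults to 1).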
profiles: list[VersionStartCategoryProfileModel] = Field(
+        default_factory=list,
+        title="Profiles"
+    )
+
+
+class CoreSettings(BaseSettingsModel):
+    studio_name: str = Field("", title="Studio name", scope=["studio"])
+    studio_code: str = Field("", title="Studio code", scope=["studio"])
+    environments: str = Field(
+        "{}",
+        title="Global environment variables",
+        widget="textarea",
+        scope=["studio"],
+    )
+    tools: GlobalToolsModel = Field(
+        default_factory=GlobalToolsModel,
+        title="Tools"
+    )
+    version_start_category: VersionStartCategoryModel = Field(
+        default_factory=VersionStartCategoryModel,
+        title="Version start"
+    )
+    imageio: CoreImageIOBaseModel = Field(
+        default_factory=CoreImageIOBaseModel,
+        title="Color Management (ImageIO)"
+    )
+    publish: PublishPuginsModel = Field(
+        default_factory=PublishPuginsModel,
+        title="Publish plugins"
+    )
+    project_plugins: MultiplatformPathListModel = Field(
+        default_factory=MultiplatformPathListModel,
+        title="Additional Project Plugin Paths",
+    )
+    project_folder_structure: str = Field(
+        "{}",
+        widget="textarea",
+        title="Project folder structure",
+        section="---"
+    )
+    project_environments: str = Field(
+        "{}",
+        widget="textarea",
+        title="Project environments",
+        section="---"
+    )
+
+    @validator(
+        "environments",
+        "project_folder_structure",
+        "project_environments")
+    def validate_json(cls, value):
+        if not value.strip():
+            return "{}"
+        try:
+            converted_value = json.loads(value)
+            success = isinstance(converted_value, dict)
+        except json.JSONDecodeError:
+            success = False
+
+        if not success:
+            raise BadRequestException(
+                "The value can't be parsed as a JSON object"
+            )
+        return value
+
+
+DEFAULT_VALUES = {
+    "imageio": {
+        "activate_global_color_management": False,
+        "ocio_config": {
+            "filepath": [
+                "{BUILTIN_OCIO_ROOT}/aces_1.2/config.ocio",
+                "{BUILTIN_OCIO_ROOT}/nuke-default/config.ocio"
+            ]
+        },
+        "file_rules": {
+            "activate_global_file_rules": False,
+            "rules": [
+                {
+                    "name": "example",
+                    "pattern": ".*(beauty).*",
+                    "colorspace": "ACES - ACEScg",
+                    "ext": "exr"
+                }
+            ]
+        }
+    },
+    "studio_name": "",
+    "studio_code": "",
+    "environments": "{}",
+    "tools": DEFAULT_TOOLS_VALUES,
+    "version_start_category": {
+        "profiles": []
+    },
+    "publish": DEFAULT_PUBLISH_VALUES,
+    "project_folder_structure": json.dumps({
+        "__project_root__": {
+            "prod": {},
+            "resources": {
+                "footage": {
+                    "plates": {},
+                    "offline": {}
+                },
+                "audio": {},
+                "art_dept": {}
+            },
+            "editorial": {},
+            "assets": {
+                "characters": {},
+                "locations": {}
+            },
+            "shots": {}
+        }
+    }, indent=4),
+    "project_plugins": {
+        "windows": [],
+        "darwin": [],
+        "linux": []
+    },
+    "project_environments": "{}"
}
diff --git a/server_addon/core/server/settings/publish_plugins.py b/server_addon/core/server/settings/publish_plugins.py
new file mode 100644
index 00000000000..c0123125796
--- /dev/null
+++ b/server_addon/core/server/settings/publish_plugins.py
@@ -0,0 +1,959 @@
+from pydantic import Field, validator
+
+from ayon_server.settings import (
+    BaseSettingsModel,
+    MultiplatformPathModel,
+    normalize_name,
+    ensure_unique_names,
+    task_types_enum,
+)
+
+from ayon_server.types import ColorRGBA_uint8
+
+
+class ValidateBaseModel(BaseSettingsModel):
+    _isGroup = True
+    enabled: bool = Field(True)
+    optional: bool = Field(True, title="Optional")
+    active: bool = Field(True, title="Active")
+
+
+class CollectAnatomyInstanceDataModel(BaseSettingsModel):
+    _isGroup = True
+    follow_workfile_version: bool = Field(
+        True, title="Follow Workfile Version"
+    )
+
+
+class CollectAudioModel(BaseSettingsModel):
+    _isGroup = True
+    enabled: bool = Field(True)
+    audio_product_name: str = Field(
+        "", title="Name of audio variant"
+    )
+
+
+class CollectSceneVersionModel(BaseSettingsModel):
+    _isGroup = True
+    hosts: list[str] = Field(
+        default_factory=list,
+        title="Host names"
+    )
+    skip_hosts_headless_publish: list[str] = Field(
+        default_factory=list,
+        title="Skip for host if headless publish"
+    )
+
+
+class CollectCommentPIModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    families: list[str] = Field(default_factory=list, title="Families")
+
+
+class CollectFramesFixDefModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    rewrite_version_enable: bool = Field(
+        True,
+        title="Show 'Rewrite latest version' toggle"
+    )
+
+
+class ValidateIntentProfile(BaseSettingsModel):
+    _layout = "expanded"
+    hosts: list[str] = Field(default_factory=list, title="Host names")
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    tasks: list[str] = Field(default_factory=list, title="Task names")
+    # TODO This was 'validate' in v3
+    validate_intent: bool = Field(True, title="Validate")
+
+
+class ValidateIntentModel(BaseSettingsModel):
+    """Validate that a publishing intent was selected.
+
+    Validation can be disabled for specific publishing contexts
+    using profiles.
+    """
+
+    _isGroup = True
+    enabled: bool = Field(False)
+    profiles: list[ValidateIntentProfile] = Field(default_factory=list)
+
+
+class ExtractThumbnailFFmpegModel(BaseSettingsModel):
+    _layout = "expanded"
+    input: list[str] = Field(
+        default_factory=list,
+        title="FFmpeg input arguments"
+    )
+    output: list[str] = Field(
+        default_factory=list,
+        title="FFmpeg output arguments"
+    )
+
+
+class ExtractThumbnailModel(BaseSettingsModel):
+    _isGroup = True
+    enabled: bool = Field(True)
+    ffmpeg_args: ExtractThumbnailFFmpegModel = Field(
+        default_factory=ExtractThumbnailFFmpegModel
+    )
+
+
+def _extract_oiio_transcoding_type():
+    return [
+        {"value": "colorspace", "label": "Use Colorspace"},
+        {"value": "display", "label": "Use Display&View"}
+    ]
+
+
+class OIIOToolArgumentsModel(BaseSettingsModel):
+    additional_command_args: list[str] = Field(
+        default_factory=list, title="Arguments")
+
+
+class ExtractOIIOTranscodeOutputModel(BaseSettingsModel):
+    extension: str = Field("", title="Extension")
+    transcoding_type: str = Field(
+        "colorspace",
+        title="Transcoding type",
+        enum_resolver=_extract_oiio_transcoding_type
+    )
+    colorspace: str = Field("", title="Colorspace")
+    display: str = Field("", title="Display")
+    view: str = Field("", title="View")
+    oiiotool_args: OIIOToolArgumentsModel = Field(
+        default_factory=OIIOToolArgumentsModel,
+        title="OIIOtool arguments")
+
+    tags: list[str] = Field(default_factory=list, title="Tags")
+    custom_tags: list[str] = Field(default_factory=list, title="Custom Tags")
+
+
+class ExtractOIIOTranscodeProfileModel(BaseSettingsModel):
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    hosts: list[str] = Field(
+        default_factory=list,
+        title="Host names"
+    )
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    task_names: list[str] = Field(
+        default_factory=list,
+        title="Task names"
+    )
+    product_names: list[str] = Field(
+        default_factory=list,
+        title="Product names"
+    )
+    delete_original: bool = Field(
+        True,
+        title="Delete Original Representation"
+    )
+    outputs: list[ExtractOIIOTranscodeOutputModel] = Field(
+        default_factory=list,
+        title="Output Definitions",
+    )
+
+
+class ExtractOIIOTranscodeModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    profiles: list[ExtractOIIOTranscodeProfileModel] = Field(
+        default_factory=list, title="Profiles"
+    )
+
+
+# --- [START] Extract Review ---
+class ExtractReviewFFmpegModel(BaseSettingsModel):
+    video_filters: list[str] = Field(
+        default_factory=list,
+        title="Video filters"
+    )
+    audio_filters: list[str] = Field(
+        default_factory=list,
+        title="Audio filters"
+    )
+    input: list[str] = Field(
+        default_factory=list,
+        title="Input arguments"
+    )
+    output: list[str] = Field(
+        default_factory=list,
+        title="Output arguments"
+    )
+
+
+def extract_review_filter_enum():
+    return [
+        {
+            "value": "everytime",
+            "label": "Always"
+        },
+        {
+            "value": "single_frame",
+            "label": "Only if input has 1 image frame"
+        },
+        {
+            "value": "multi_frame",
+            "label": "Only if input is video or sequence of frames"
+        }
+    ]
+
+
+class ExtractReviewFilterModel(BaseSettingsModel):
+    families: list[str] = Field(default_factory=list, title="Families")
+    product_names: list[str] = Field(
+        default_factory=list, title="Product names")
+    custom_tags: list[str] = Field(default_factory=list, title="Custom Tags")
+    single_frame_filter: str = Field(
+        "everytime",
+        description=(
+            "Use the output always / only if the input is a single-frame"
+            " image / only if the input has 2+ frames or is a video"
+        ),
+        enum_resolver=extract_review_filter_enum
+    )
+
+
+class ExtractReviewLetterBox(BaseSettingsModel):
+    enabled: bool = Field(True)
+    ratio: float = Field(
+        0.0,
+        title="Ratio",
+        ge=0.0,
+        le=10000.0
+    )
+    fill_color: ColorRGBA_uint8 = Field(
+        (0, 0, 0, 0.0),
+        title="Fill Color"
+    )
+    line_thickness: int = Field(
+        0,
+        title="Line Thickness",
+        ge=0,
+        le=1000
+    )
+    line_color: ColorRGBA_uint8 = Field(
+        (0, 0, 0, 0.0),
+        title="Line Color"
+    )
+
+
+class ExtractReviewOutputDefModel(BaseSettingsModel):
+    _layout = "expanded"
+    name: str = Field("", title="Name")
+    ext: str = Field("", title="Output extension")
+    # TODO use some different source of tags
+    tags: list[str] = Field(default_factory=list, title="Tags")
+    burnins: list[str] = Field(
+        default_factory=list, title="Link to a burnin by name"
+    )
+    ffmpeg_args: ExtractReviewFFmpegModel = Field(
+        default_factory=ExtractReviewFFmpegModel,
+        title="FFmpeg arguments"
+    )
+    filter: ExtractReviewFilterModel = Field(
+        default_factory=ExtractReviewFilterModel,
+        title="Additional output filtering"
+    )
+    overscan_crop: str = Field(
+        "",
+        title="Overscan crop",
+        description=(
+            "Crop input overscan. See the documentation for more information."
+        )
+    )
+    overscan_color: ColorRGBA_uint8 = Field(
+        (0, 0, 0, 0.0),
+        title="Overscan color",
+        description=(
+            "Overscan color is used when the input aspect ratio is not"
+            " the same as the output aspect ratio."
+        )
+    )
+    width: int = Field(
+        0,
+        ge=0,
+        le=100000,
+        title="Output width",
+        description=(
+            "Width and Height must both be set to a value higher"
+            " than 0, otherwise the source resolution is used."
+        )
+    )
+    height: int = Field(
+        0,
+        title="Output height",
+        ge=0,
+        le=100000,
+    )
+    scale_pixel_aspect: bool = Field(
+        True,
+        title="Scale pixel aspect",
+        description=(
+            "Rescale input when its pixel aspect ratio is not 1."
+            " Useful for anamorphic reviews."
+        )
+    )
+    bg_color: ColorRGBA_uint8 = Field(
+        (0, 0, 0, 0.0),
+        description=(
+            "Background color is used only when the input has transparency"
+            " and Alpha is higher than 0."
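+            # ColorRGBA_uint8: RGB channels are 0-255 ints, alpha is a
+            # 0.0-1.0 float; per the description above, the default
+            # alpha of 0.0 effectively disables the background fill.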
+        ),
+        title="Background color",
+    )
+    letter_box: ExtractReviewLetterBox = Field(
+        default_factory=ExtractReviewLetterBox,
+        title="Letter Box"
+    )
+
+    @validator("name")
+    def validate_name(cls, value):
+        """Ensure name does not contain weird characters"""
+        return normalize_name(value)
+
+
+class ExtractReviewProfileModel(BaseSettingsModel):
+    _layout = "expanded"
+    product_types: list[str] = Field(
+        default_factory=list, title="Product types"
+    )
+    # TODO use hosts enum
+    hosts: list[str] = Field(
+        default_factory=list, title="Host names"
+    )
+    outputs: list[ExtractReviewOutputDefModel] = Field(
+        default_factory=list, title="Output Definitions"
+    )
+
+    @validator("outputs")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class ExtractReviewModel(BaseSettingsModel):
+    _isGroup = True
+    enabled: bool = Field(True)
+    profiles: list[ExtractReviewProfileModel] = Field(
+        default_factory=list,
+        title="Profiles"
+    )
+# --- [END] Extract Review ---
+
+
+# --- [Start] Extract Burnin ---
+class ExtractBurninOptionsModel(BaseSettingsModel):
+    font_size: int = Field(0, ge=0, title="Font size")
+    font_color: ColorRGBA_uint8 = Field(
+        (255, 255, 255, 1.0),
+        title="Font color"
+    )
+    bg_color: ColorRGBA_uint8 = Field(
+        (0, 0, 0, 1.0),
+        title="Background color"
+    )
+    x_offset: int = Field(0, title="X Offset")
+    y_offset: int = Field(0, title="Y Offset")
+    bg_padding: int = Field(0, title="Padding around text")
+    font_filepath: MultiplatformPathModel = Field(
+        default_factory=MultiplatformPathModel,
+        title="Font file path"
+    )
+
+
+class ExtractBurninDefFilter(BaseSettingsModel):
+    families: list[str] = Field(
+        default_factory=list,
+        title="Families"
+    )
+    tags: list[str] = Field(
+        default_factory=list,
+        title="Tags"
+    )
+
+
+class ExtractBurninDef(BaseSettingsModel):
+    _isGroup = True
+    _layout = "expanded"
+    name: str = Field("")
+    TOP_LEFT: str = Field("", topic="Top Left")
+    TOP_CENTERED: str = Field("", topic="Top Centered")
+    TOP_RIGHT: str = Field("", topic="Top Right")
+    BOTTOM_LEFT: str = Field("", topic="Bottom Left")
+    BOTTOM_CENTERED: str = Field("", topic="Bottom Centered")
+    BOTTOM_RIGHT: str = Field("", topic="Bottom Right")
+    filter: ExtractBurninDefFilter = Field(
+        default_factory=ExtractBurninDefFilter,
+        title="Additional filtering"
+    )
+
+    @validator("name")
+    def validate_name(cls, value):
+        """Ensure name does not contain weird characters"""
+        return normalize_name(value)
+
+
+class ExtractBurninProfile(BaseSettingsModel):
+    _layout = "expanded"
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    hosts: list[str] = Field(
+        default_factory=list,
+        title="Host names"
+    )
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    task_names: list[str] = Field(
+        default_factory=list,
+        title="Task names"
+    )
+    product_names: list[str] = Field(
+        default_factory=list,
+        title="Product names"
+    )
+    burnins: list[ExtractBurninDef] = Field(
+        default_factory=list,
+        title="Burnins"
+    )
+
+    @validator("burnins")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+
+        return value
+
+
+class ExtractBurninModel(BaseSettingsModel):
+    _isGroup = True
+    enabled: bool = Field(True)
+    options: ExtractBurninOptionsModel = Field(
+        default_factory=ExtractBurninOptionsModel,
+        title="Burnin formatting options"
+    )
+    profiles: list[ExtractBurninProfile] = Field(
+        default_factory=list,
+        title="Profiles"
+    )
+# --- [END] Extract Burnin ---
+
+
+class PreIntegrateThumbnailsProfile(BaseSettingsModel):
+    _isGroup = True
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types",
+    )
+    hosts: list[str] = Field(
+        default_factory=list,
+        title="Hosts",
+    )
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    product_names: list[str] = Field(
+        default_factory=list,
+        title="Product names",
+    )
+    integrate_thumbnail: bool = Field(True)
+
+
+class PreIntegrateThumbnailsModel(BaseSettingsModel):
+    """Explicitly set whether the thumbnail representation is integrated.
+
+    If no profile matches, the existing state from the host
+    implementation is kept.
+    """
+
+    _isGroup = True
+    enabled: bool = Field(True)
+    integrate_profiles: list[PreIntegrateThumbnailsProfile] = Field(
+        default_factory=list,
+        title="Integrate profiles"
+    )
+
+
+class IntegrateProductGroupProfile(BaseSettingsModel):
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    hosts: list[str] = Field(default_factory=list, title="Hosts")
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    tasks: list[str] = Field(default_factory=list, title="Task names")
+    template: str = Field("", title="Template")
+
+
+class IntegrateProductGroupModel(BaseSettingsModel):
+    """Group published products by filtering logic.
+
+    All published instances are set as part of a group named
+    according to 'Template'.
+
+    Supported placeholders: '{task}', '{product[type]}',
+    '{host}', '{product[name]}', '{renderlayer}'.
+    """
+
+    _isGroup = True
+    product_grouping_profiles: list[IntegrateProductGroupProfile] = Field(
+        default_factory=list,
+        title="Product group profiles"
+    )
+
+
+class IntegrateANProductGroupProfileModel(BaseSettingsModel):
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    hosts: list[str] = Field(
+        default_factory=list,
+        title="Hosts"
+    )
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    tasks: list[str] = Field(
+        default_factory=list,
+        title="Task names"
+    )
+    template: str = Field("", title="Template")
+
+
+class IntegrateANTemplateNameProfileModel(BaseSettingsModel):
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    hosts: list[str] = Field(
+        default_factory=list,
+        title="Hosts"
+    )
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    tasks: list[str] = Field(
+        default_factory=list,
+        title="Task names"
+    )
+    template_name: str = Field("", title="Template name")
+
+
+class IntegrateHeroTemplateNameProfileModel(BaseSettingsModel):
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    hosts: list[str] = Field(
+        default_factory=list,
+        title="Hosts"
+    )
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    task_names: list[str] = Field(
+        default_factory=list,
+        title="Task names"
+    )
+    template_name: str = Field("", title="Template name")
+
+
+class IntegrateHeroVersionModel(BaseSettingsModel):
+    _isGroup = True
+    enabled: bool = Field(True)
+    optional: bool = Field(False, title="Optional")
+    active: bool = Field(True, title="Active")
+    families: list[str] = Field(default_factory=list, title="Families")
+    # TODO remove when removed from 
client code + template_name_profiles: list[IntegrateHeroTemplateNameProfileModel] = ( + Field( + default_factory=list, + title="Template name profiles" + ) + ) + + +class CleanUpModel(BaseSettingsModel): + _isGroup = True + paterns: list[str] = Field( + default_factory=list, + title="Patterns (regex)" + ) + remove_temp_renders: bool = Field(False, title="Remove Temp renders") + + +class CleanUpFarmModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + + +class PublishPuginsModel(BaseSettingsModel): + CollectAnatomyInstanceData: CollectAnatomyInstanceDataModel = Field( + default_factory=CollectAnatomyInstanceDataModel, + title="Collect Anatomy Instance Data" + ) + CollectAudio: CollectAudioModel = Field( + default_factory=CollectAudioModel, + title="Collect Audio" + ) + CollectSceneVersion: CollectSceneVersionModel = Field( + default_factory=CollectSceneVersionModel, + title="Collect Version from Workfile" + ) + collect_comment_per_instance: CollectCommentPIModel = Field( + default_factory=CollectCommentPIModel, + title="Collect comment per instance", + ) + CollectFramesFixDef: CollectFramesFixDefModel = Field( + default_factory=CollectFramesFixDefModel, + title="Collect Frames to Fix", + ) + ValidateEditorialAssetName: ValidateBaseModel = Field( + default_factory=ValidateBaseModel, + title="Validate Editorial Asset Name" + ) + ValidateVersion: ValidateBaseModel = Field( + default_factory=ValidateBaseModel, + title="Validate Version" + ) + ValidateIntent: ValidateIntentModel = Field( + default_factory=ValidateIntentModel, + title="Validate Intent" + ) + ExtractThumbnail: ExtractThumbnailModel = Field( + default_factory=ExtractThumbnailModel, + title="Extract Thumbnail" + ) + ExtractOIIOTranscode: ExtractOIIOTranscodeModel = Field( + default_factory=ExtractOIIOTranscodeModel, + title="Extract OIIO Transcode" + ) + ExtractReview: ExtractReviewModel = Field( + default_factory=ExtractReviewModel, + title="Extract Review" + ) + ExtractBurnin: ExtractBurninModel = Field( + default_factory=ExtractBurninModel, + title="Extract Burnin" + ) + PreIntegrateThumbnails: PreIntegrateThumbnailsModel = Field( + default_factory=PreIntegrateThumbnailsModel, + title="Override Integrate Thumbnail Representations" + ) + IntegrateProductGroup: IntegrateProductGroupModel = Field( + default_factory=IntegrateProductGroupModel, + title="Integrate Product Group" + ) + IntegrateHeroVersion: IntegrateHeroVersionModel = Field( + default_factory=IntegrateHeroVersionModel, + title="Integrate Hero Version" + ) + CleanUp: CleanUpModel = Field( + default_factory=CleanUpModel, + title="Clean Up" + ) + CleanUpFarm: CleanUpFarmModel = Field( + default_factory=CleanUpFarmModel, + title="Clean Up Farm" + ) + + +DEFAULT_PUBLISH_VALUES = { + "CollectAnatomyInstanceData": { + "follow_workfile_version": False + }, + "CollectAudio": { + "enabled": False, + "audio_product_name": "audioMain" + }, + "CollectSceneVersion": { + "hosts": [ + "aftereffects", + "blender", + "celaction", + "fusion", + "harmony", + "hiero", + "houdini", + "maya", + "nuke", + "photoshop", + "resolve", + "tvpaint" + ], + "skip_hosts_headless_publish": [] + }, + "collect_comment_per_instance": { + "enabled": False, + "families": [] + }, + "CollectFramesFixDef": { + "enabled": True, + "rewrite_version_enable": True + }, + "ValidateEditorialAssetName": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVersion": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateIntent": { + "enabled": False, + 
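+        # Off by default; add profiles to enforce intent selection
+        # for matching publish contexts.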
"profiles": [] + }, + "ExtractThumbnail": { + "enabled": True, + "ffmpeg_args": { + "input": [ + "-apply_trc gamma22" + ], + "output": [] + } + }, + "ExtractOIIOTranscode": { + "enabled": True, + "profiles": [] + }, + "ExtractReview": { + "enabled": True, + "profiles": [ + { + "product_types": [], + "hosts": [], + "outputs": [ + { + "name": "png", + "ext": "png", + "tags": [ + "ftrackreview", + "kitsureview" + ], + "burnins": [], + "ffmpeg_args": { + "video_filters": [], + "audio_filters": [], + "input": [], + "output": [] + }, + "filter": { + "families": [ + "render", + "review", + "ftrack" + ], + "product_names": [], + "custom_tags": [], + "single_frame_filter": "single_frame" + }, + "overscan_crop": "", + "overscan_color": [0, 0, 0, 1.0], + "width": 1920, + "height": 1080, + "scale_pixel_aspect": True, + "bg_color": [0, 0, 0, 0.0], + "letter_box": { + "enabled": False, + "ratio": 0.0, + "fill_color": [0, 0, 0, 1.0], + "line_thickness": 0, + "line_color": [255, 0, 0, 1.0] + } + }, + { + "name": "h264", + "ext": "mp4", + "tags": [ + "burnin", + "ftrackreview", + "kitsureview" + ], + "burnins": [], + "ffmpeg_args": { + "video_filters": [], + "audio_filters": [], + "input": [ + "-apply_trc gamma22" + ], + "output": [ + "-pix_fmt yuv420p", + "-crf 18", + "-intra" + ] + }, + "filter": { + "families": [ + "render", + "review", + "ftrack" + ], + "product_names": [], + "custom_tags": [], + "single_frame_filter": "multi_frame" + }, + "overscan_crop": "", + "overscan_color": [0, 0, 0, 1.0], + "width": 0, + "height": 0, + "scale_pixel_aspect": True, + "bg_color": [0, 0, 0, 0.0], + "letter_box": { + "enabled": False, + "ratio": 0.0, + "fill_color": [0, 0, 0, 1.0], + "line_thickness": 0, + "line_color": [255, 0, 0, 1.0] + } + } + ] + } + ] + }, + "ExtractBurnin": { + "enabled": True, + "options": { + "font_size": 42, + "font_color": [255, 255, 255, 1.0], + "bg_color": [0, 0, 0, 0.5], + "x_offset": 5, + "y_offset": 5, + "bg_padding": 5, + "font_filepath": { + "windows": "", + "darwin": "", + "linux": "" + } + }, + "profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "task_names": [], + "product_names": [], + "burnins": [ + { + "name": "burnin", + "TOP_LEFT": "{yy}-{mm}-{dd}", + "TOP_CENTERED": "", + "TOP_RIGHT": "{anatomy[version]}", + "BOTTOM_LEFT": "{username}", + "BOTTOM_CENTERED": "{folder[name]}", + "BOTTOM_RIGHT": "{frame_start}-{current_frame}-{frame_end}", + "filter": { + "families": [], + "tags": [] + } + }, + ] + }, + { + "product_types": ["review"], + "hosts": [ + "maya", + "houdini", + "max" + ], + "task_types": [], + "task_names": [], + "product_names": [], + "burnins": [ + { + "name": "focal_length_burnin", + "TOP_LEFT": "{yy}-{mm}-{dd}", + "TOP_CENTERED": "{focalLength:.2f} mm", + "TOP_RIGHT": "{anatomy[version]}", + "BOTTOM_LEFT": "{username}", + "BOTTOM_CENTERED": "{folder[name]}", + "BOTTOM_RIGHT": "{frame_start}-{current_frame}-{frame_end}", + "filter": { + "families": [], + "tags": [] + } + } + ] + } + ] + }, + "PreIntegrateThumbnails": { + "enabled": True, + "integrate_profiles": [] + }, + "IntegrateProductGroup": { + "product_grouping_profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "" + } + ] + }, + "IntegrateHeroVersion": { + "enabled": True, + "optional": True, + "active": True, + "families": [ + "model", + "rig", + "look", + "pointcache", + "animation", + "setdress", + "layout", + "mayaScene", + "simpleUnrealTexture" + ], + "template_name_profiles": [ + { + "product_types": [ + 
"simpleUnrealTexture" + ], + "hosts": [ + "standalonepublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "simpleUnrealTextureHero" + } + ] + }, + "CleanUp": { + "paterns": [], + "remove_temp_renders": False + }, + "CleanUpFarm": { + "enabled": False + } +} diff --git a/server_addon/core/server/settings/tools.py b/server_addon/core/server/settings/tools.py new file mode 100644 index 00000000000..7befc795e41 --- /dev/null +++ b/server_addon/core/server/settings/tools.py @@ -0,0 +1,506 @@ +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + normalize_name, + ensure_unique_names, + task_types_enum, +) + + +class ProductTypeSmartSelectModel(BaseSettingsModel): + _layout = "expanded" + name: str = Field("", title="Product type") + task_names: list[str] = Field(default_factory=list, title="Task names") + + @validator("name") + def normalize_value(cls, value): + return normalize_name(value) + + +class ProductNameProfile(BaseSettingsModel): + _layout = "expanded" + product_types: list[str] = Field( + default_factory=list, title="Product types" + ) + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + template: str = Field("", title="Template") + + +class CreatorToolModel(BaseSettingsModel): + # TODO this was dynamic dictionary '{name: task_names}' + product_types_smart_select: list[ProductTypeSmartSelectModel] = Field( + default_factory=list, + title="Create Smart Select" + ) + product_name_profiles: list[ProductNameProfile] = Field( + default_factory=list, + title="Product name profiles" + ) + + @validator("product_types_smart_select") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class WorkfileTemplateProfile(BaseSettingsModel): + _layout = "expanded" + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + # TODO this was using project anatomy template name + workfile_template: str = Field("", title="Workfile template") + + +class LastWorkfileOnStartupProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + enabled: bool = Field(True, title="Enabled") + use_last_published_workfile: bool = Field( + True, title="Use last published workfile" + ) + + +class WorkfilesToolOnStartupProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + enabled: bool = Field(True, title="Enabled") + + +class ExtraWorkFoldersProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = 
Field(default_factory=list, title="Task names") + folders: list[str] = Field(default_factory=list, title="Folders") + + +class WorkfilesLockProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + host_names: list[str] = Field(default_factory=list, title="Hosts") + enabled: bool = Field(True, title="Enabled") + + +class WorkfilesToolModel(BaseSettingsModel): + workfile_template_profiles: list[WorkfileTemplateProfile] = Field( + default_factory=list, + title="Workfile template profiles" + ) + last_workfile_on_startup: list[LastWorkfileOnStartupProfile] = Field( + default_factory=list, + title="Open last workfile on launch" + ) + open_workfile_tool_on_startup: list[WorkfilesToolOnStartupProfile] = Field( + default_factory=list, + title="Open workfile tool on launch" + ) + extra_folders: list[ExtraWorkFoldersProfile] = Field( + default_factory=list, + title="Extra work folders" + ) + workfile_lock_profiles: list[WorkfilesLockProfile] = Field( + default_factory=list, + title="Workfile lock profiles" + ) + + +def _product_types_enum(): + return [ + "action", + "animation", + "assembly", + "audio", + "backgroundComp", + "backgroundLayout", + "camera", + "editorial", + "gizmo", + "image", + "layout", + "look", + "matchmove", + "mayaScene", + "model", + "nukenodes", + "plate", + "pointcache", + "prerender", + "redshiftproxy", + "reference", + "render", + "review", + "rig", + "setdress", + "take", + "usdShade", + "vdbcache", + "vrayproxy", + "workfile", + "xgen", + "yetiRig", + "yeticache" + ] + + +class LoaderProductTypeFilterProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + is_include: bool = Field(True, title="Exclude / Include") + filter_product_types: list[str] = Field( + default_factory=list, + enum_resolver=_product_types_enum + ) + + +class LoaderToolModel(BaseSettingsModel): + product_type_filter_profiles: list[LoaderProductTypeFilterProfile] = Field( + default_factory=list, + title="Product type filtering" + ) + + +class PublishTemplateNameProfile(BaseSettingsModel): + _layout = "expanded" + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field(default_factory=list, title="Task names") + template_name: str = Field("", title="Template name") + + +class CustomStagingDirProfileModel(BaseSettingsModel): + active: bool = Field(True, title="Is active") + hosts: list[str] = Field(default_factory=list, title="Host names") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, title="Task names" + ) + product_types: list[str] = Field( + default_factory=list, title="Product types" + ) + product_names: list[str] = Field( + default_factory=list, title="Product names" + ) + custom_staging_dir_persistent: bool = Field( + False, title="Custom Staging Folder Persistent" + ) + template_name: str = Field("", title="Template Name") + + +class PublishToolModel(BaseSettingsModel): + template_name_profiles: list[PublishTemplateNameProfile] = Field( + default_factory=list, + 
title="Template name profiles" + ) + hero_template_name_profiles: list[PublishTemplateNameProfile] = Field( + default_factory=list, + title="Hero template name profiles" + ) + custom_staging_dir_profiles: list[CustomStagingDirProfileModel] = Field( + default_factory=list, + title="Custom Staging Dir Profiles" + ) + + +class GlobalToolsModel(BaseSettingsModel): + creator: CreatorToolModel = Field( + default_factory=CreatorToolModel, + title="Creator" + ) + Workfiles: WorkfilesToolModel = Field( + default_factory=WorkfilesToolModel, + title="Workfiles" + ) + loader: LoaderToolModel = Field( + default_factory=LoaderToolModel, + title="Loader" + ) + publish: PublishToolModel = Field( + default_factory=PublishToolModel, + title="Publish" + ) + + +DEFAULT_TOOLS_VALUES = { + "creator": { + "product_types_smart_select": [ + { + "name": "Render", + "task_names": [ + "light", + "render" + ] + }, + { + "name": "Model", + "task_names": [ + "model" + ] + }, + { + "name": "Layout", + "task_names": [ + "layout" + ] + }, + { + "name": "Look", + "task_names": [ + "look" + ] + }, + { + "name": "Rig", + "task_names": [ + "rigging", + "rig" + ] + } + ], + "product_name_profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "{product[type]}{variant}" + }, + { + "product_types": [ + "workfile" + ], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}" + }, + { + "product_types": [ + "render" + ], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}{Variant}" + }, + { + "product_types": [ + "renderLayer", + "renderPass" + ], + "hosts": [ + "tvpaint" + ], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}_{Renderlayer}_{Renderpass}" + }, + { + "product_types": [ + "review", + "workfile" + ], + "hosts": [ + "aftereffects", + "tvpaint" + ], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}" + }, + { + "product_types": ["render"], + "hosts": [ + "aftereffects" + ], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}{Composition}{Variant}" + }, + { + "product_types": [ + "staticMesh" + ], + "hosts": [ + "maya" + ], + "task_types": [], + "tasks": [], + "template": "S_{folder[name]}{variant}" + }, + { + "product_types": [ + "skeletalMesh" + ], + "hosts": [ + "maya" + ], + "task_types": [], + "tasks": [], + "template": "SK_{folder[name]}{variant}" + } + ] + }, + "Workfiles": { + "workfile_template_profiles": [ + { + "task_types": [], + "hosts": [], + "workfile_template": "work" + }, + { + "task_types": [], + "hosts": [ + "unreal" + ], + "workfile_template": "work_unreal" + } + ], + "last_workfile_on_startup": [ + { + "hosts": [], + "task_types": [], + "tasks": [], + "enabled": True, + "use_last_published_workfile": False + } + ], + "open_workfile_tool_on_startup": [ + { + "hosts": [], + "task_types": [], + "tasks": [], + "enabled": False + } + ], + "extra_folders": [], + "workfile_lock_profiles": [] + }, + "loader": { + "product_type_filter_profiles": [ + { + "hosts": [], + "task_types": [], + "is_include": True, + "filter_product_types": [] + } + ] + }, + "publish": { + "template_name_profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "task_names": [], + "template_name": "publish" + }, + { + "product_types": [ + "review", + "render", + "prerender" + ], + "hosts": [], + "task_types": [], + "task_names": [], + "template_name": "publish_render" + }, + { + "product_types": [ + 
"simpleUnrealTexture" + ], + "hosts": [ + "standalonepublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "publish_simpleUnrealTexture" + }, + { + "product_types": [ + "staticMesh", + "skeletalMesh" + ], + "hosts": [ + "maya" + ], + "task_types": [], + "task_names": [], + "template_name": "publish_maya2unreal" + }, + { + "product_types": [ + "online" + ], + "hosts": [ + "traypublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "publish_online" + } + ], + "hero_template_name_profiles": [ + { + "product_types": [ + "simpleUnrealTexture" + ], + "hosts": [ + "standalonepublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "hero_simpleUnrealTextureHero" + } + ] + } +} diff --git a/server_addon/core/server/version.py b/server_addon/core/server/version.py new file mode 100644 index 00000000000..b3f4756216d --- /dev/null +++ b/server_addon/core/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.2" diff --git a/server_addon/create_ayon_addon.py b/server_addon/create_ayon_addon.py deleted file mode 100644 index 657f4164412..00000000000 --- a/server_addon/create_ayon_addon.py +++ /dev/null @@ -1,140 +0,0 @@ -import os -import re -import shutil -import zipfile -import collections -from pathlib import Path -from typing import Any, Optional, Iterable - -# Patterns of directories to be skipped for server part of addon -IGNORE_DIR_PATTERNS: list[re.Pattern] = [ - re.compile(pattern) - for pattern in { - # Skip directories starting with '.' - r"^\.", - # Skip any pycache folders - "^__pycache__$" - } -] - -# Patterns of files to be skipped for server part of addon -IGNORE_FILE_PATTERNS: list[re.Pattern] = [ - re.compile(pattern) - for pattern in { - # Skip files starting with '.' - # NOTE this could be an issue in some cases - r"^\.", - # Skip '.pyc' files - r"\.pyc$" - } -] - - -def _value_match_regexes(value: str, regexes: Iterable[re.Pattern]) -> bool: - return any( - regex.search(value) - for regex in regexes - ) - - -def find_files_in_subdir( - src_path: str, - ignore_file_patterns: Optional[list[re.Pattern]] = None, - ignore_dir_patterns: Optional[list[re.Pattern]] = None -): - """Find all files to copy in subdirectories of given path. - - All files that match any of the patterns in 'ignore_file_patterns' will - be skipped and any directories that match any of the patterns in - 'ignore_dir_patterns' will be skipped with all subfiles. - - Args: - src_path (str): Path to directory to search in. - ignore_file_patterns (Optional[list[re.Pattern]]): List of regexes - to match files to ignore. - ignore_dir_patterns (Optional[list[re.Pattern]]): List of regexes - to match directories to ignore. - - Returns: - list[tuple[str, str]]: List of tuples with path to file and parent - directories relative to 'src_path'. 
- """ - - if ignore_file_patterns is None: - ignore_file_patterns = IGNORE_FILE_PATTERNS - - if ignore_dir_patterns is None: - ignore_dir_patterns = IGNORE_DIR_PATTERNS - output: list[tuple[str, str]] = [] - - hierarchy_queue = collections.deque() - hierarchy_queue.append((src_path, [])) - while hierarchy_queue: - item: tuple[str, str] = hierarchy_queue.popleft() - dirpath, parents = item - for name in os.listdir(dirpath): - path = os.path.join(dirpath, name) - if os.path.isfile(path): - if not _value_match_regexes(name, ignore_file_patterns): - items = list(parents) - items.append(name) - output.append((path, os.path.sep.join(items))) - continue - - if not _value_match_regexes(name, ignore_dir_patterns): - items = list(parents) - items.append(name) - hierarchy_queue.append((path, items)) - - return output - - -def main(): - openpype_addon_dir = Path(os.path.dirname(os.path.abspath(__file__))) - server_dir = openpype_addon_dir / "server" - package_root = openpype_addon_dir / "package" - pyproject_path = openpype_addon_dir / "client" / "pyproject.toml" - - root_dir = openpype_addon_dir.parent - openpype_dir = root_dir / "openpype" - version_path = openpype_dir / "version.py" - - # Read version - version_content: dict[str, Any] = {} - with open(str(version_path), "r") as stream: - exec(stream.read(), version_content) - addon_version: str = version_content["__version__"] - - output_dir = package_root / "openpype" / addon_version - private_dir = output_dir / "private" - - # Make sure package dir is empty - if package_root.exists(): - shutil.rmtree(str(package_root)) - # Make sure output dir is created - output_dir.mkdir(parents=True) - - # Copy version - shutil.copy(str(version_path), str(output_dir)) - for subitem in server_dir.iterdir(): - shutil.copy(str(subitem), str(output_dir / subitem.name)) - - # Make sure private dir exists - private_dir.mkdir(parents=True) - - # Copy pyproject.toml - shutil.copy( - str(pyproject_path), - (private_dir / pyproject_path.name) - ) - - # Zip client - zip_filepath = private_dir / "client.zip" - with zipfile.ZipFile(zip_filepath, "w", zipfile.ZIP_DEFLATED) as zipf: - # Add client code content to zip - for path, sub_path in find_files_in_subdir(str(openpype_dir)): - zipf.write(path, f"{openpype_dir.name}/{sub_path}") - - -if __name__ == "__main__": - main() diff --git a/server_addon/create_ayon_addons.py b/server_addon/create_ayon_addons.py new file mode 100644 index 00000000000..61dbd5c8d97 --- /dev/null +++ b/server_addon/create_ayon_addons.py @@ -0,0 +1,308 @@ +import os +import sys +import re +import json +import shutil +import zipfile +import platform +import collections +from pathlib import Path +from typing import Any, Optional, Iterable, Pattern, List, Tuple + +# Patterns of directories to be skipped for server part of addon +IGNORE_DIR_PATTERNS: List[Pattern] = [ + re.compile(pattern) + for pattern in { + # Skip directories starting with '.' + r"^\.", + # Skip any pycache folders + "^__pycache__$" + } +] + +# Patterns of files to be skipped for server part of addon +IGNORE_FILE_PATTERNS: List[Pattern] = [ + re.compile(pattern) + for pattern in { + # Skip files starting with '.' + # NOTE this could be an issue in some cases + r"^\.", + # Skip '.pyc' files + r"\.pyc$" + } +] + + +class ZipFileLongPaths(zipfile.ZipFile): + """Allows longer paths in zip files. + + Regular DOS paths are limited to MAX_PATH (260) characters, including + the string's terminating NUL character. 
+ That limit can be exceeded by using an extended-length path that + starts with the '\\?\' prefix. + """ + _is_windows = platform.system().lower() == "windows" + + def _extract_member(self, member, tpath, pwd): + if self._is_windows: + tpath = os.path.abspath(tpath) + if tpath.startswith("\\\\"): + tpath = "\\\\?\\UNC\\" + tpath[2:] + else: + tpath = "\\\\?\\" + tpath + + return super(ZipFileLongPaths, self)._extract_member( + member, tpath, pwd + ) + + +def _value_match_regexes(value: str, regexes: Iterable[Pattern]) -> bool: + return any( + regex.search(value) + for regex in regexes + ) + + +def find_files_in_subdir( + src_path: str, + ignore_file_patterns: Optional[List[Pattern]] = None, + ignore_dir_patterns: Optional[List[Pattern]] = None, + ignore_subdirs: Optional[Iterable[Tuple[str]]] = None +): + """Find all files to copy in subdirectories of given path. + + All files that match any of the patterns in 'ignore_file_patterns' will + be skipped and any directories that match any of the patterns in + 'ignore_dir_patterns' will be skipped with all subfiles. + + Args: + src_path (str): Path to directory to search in. + ignore_file_patterns (Optional[List[Pattern]]): List of regexes + to match files to ignore. + ignore_dir_patterns (Optional[List[Pattern]]): List of regexes + to match directories to ignore. + ignore_subdirs (Optional[Iterable[Tuple[str]]]): List of + subdirectories to ignore. + + Returns: + List[Tuple[str, str]]: List of tuples with path to file and parent + directories relative to 'src_path'. + """ + + if ignore_file_patterns is None: + ignore_file_patterns = IGNORE_FILE_PATTERNS + + if ignore_dir_patterns is None: + ignore_dir_patterns = IGNORE_DIR_PATTERNS + output: list[tuple[str, str]] = [] + + hierarchy_queue = collections.deque() + hierarchy_queue.append((src_path, [])) + while hierarchy_queue: + item: tuple[str, str] = hierarchy_queue.popleft() + dirpath, parents = item + if ignore_subdirs and parents in ignore_subdirs: + continue + for name in os.listdir(dirpath): + path = os.path.join(dirpath, name) + if os.path.isfile(path): + if not _value_match_regexes(name, ignore_file_patterns): + items = list(parents) + items.append(name) + output.append((path, os.path.sep.join(items))) + continue + + if not _value_match_regexes(name, ignore_dir_patterns): + items = list(parents) + items.append(name) + hierarchy_queue.append((path, items)) + + return output + + +def read_addon_version(version_path: Path) -> str: + # Read version + version_content: dict[str, Any] = {} + with open(str(version_path), "r") as stream: + exec(stream.read(), version_content) + return version_content["__version__"] + + +def get_addon_version(addon_dir: Path) -> str: + return read_addon_version(addon_dir / "server" / "version.py") + + +def create_addon_zip( + output_dir: Path, + addon_name: str, + addon_version: str, + keep_source: bool +): + zip_filepath = output_dir / f"{addon_name}-{addon_version}.zip" + addon_output_dir = output_dir / addon_name / addon_version + with ZipFileLongPaths(zip_filepath, "w", zipfile.ZIP_DEFLATED) as zipf: + zipf.writestr( + "manifest.json", + json.dumps({ + "addon_name": addon_name, + "addon_version": addon_version + }) + ) + # Add client code content to zip + src_root = os.path.normpath(str(addon_output_dir.absolute())) + src_root_offset = len(src_root) + 1 + for root, _, filenames in os.walk(str(addon_output_dir)): + rel_root = "" + if root != src_root: + rel_root = root[src_root_offset:] + + for filename in filenames: + src_path = os.path.join(root, filename) + 
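+                # NOTE: every file is written under an 'addon/' root inside
+                # the zip, next to the 'manifest.json' added above.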
if rel_root: + dst_path = os.path.join("addon", rel_root, filename) + else: + dst_path = os.path.join("addon", filename) + zipf.write(src_path, dst_path) + + if not keep_source: + shutil.rmtree(str(output_dir / addon_name)) + + +def create_openpype_package( + addon_dir: Path, + output_dir: Path, + root_dir: Path, + create_zip: bool, + keep_source: bool +): + server_dir = addon_dir / "server" + pyproject_path = addon_dir / "client" / "pyproject.toml" + + openpype_dir = root_dir / "openpype" + version_path = openpype_dir / "version.py" + addon_version = read_addon_version(version_path) + + addon_output_dir = output_dir / "openpype" / addon_version + private_dir = addon_output_dir / "private" + # Make sure dir exists + addon_output_dir.mkdir(parents=True) + private_dir.mkdir(parents=True) + + # Copy version + shutil.copy(str(version_path), str(addon_output_dir)) + for subitem in server_dir.iterdir(): + shutil.copy(str(subitem), str(addon_output_dir / subitem.name)) + + # Copy pyproject.toml + shutil.copy( + str(pyproject_path), + (private_dir / pyproject_path.name) + ) + + ignored_hosts = [] + ignored_modules = [ + "ftrack", + "shotgrid", + "sync_server", + "example_addons", + "slack" + ] + # Subdirs that won't be added to output zip file + ignored_subpaths = [ + ["addons"], + ["vendor", "common", "ayon_api"], + ] + ignored_subpaths.extend( + ["hosts", host_name] + for host_name in ignored_hosts + ) + ignored_subpaths.extend( + ["modules", module_name] + for module_name in ignored_modules + ) + + # Zip client + zip_filepath = private_dir / "client.zip" + with ZipFileLongPaths(zip_filepath, "w", zipfile.ZIP_DEFLATED) as zipf: + # Add client code content to zip + for path, sub_path in find_files_in_subdir( + str(openpype_dir), ignore_subdirs=ignored_subpaths + ): + zipf.write(path, f"{openpype_dir.name}/{sub_path}") + + if create_zip: + create_addon_zip(output_dir, "openpype", addon_version, keep_source) + + +def create_addon_package( + addon_dir: Path, + output_dir: Path, + create_zip: bool, + keep_source: bool +): + server_dir = addon_dir / "server" + addon_version = get_addon_version(addon_dir) + + addon_output_dir = output_dir / addon_dir.name / addon_version + if addon_output_dir.exists(): + shutil.rmtree(str(addon_output_dir)) + addon_output_dir.mkdir(parents=True) + + # Copy server content + src_root = os.path.normpath(str(server_dir.absolute())) + src_root_offset = len(src_root) + 1 + for root, _, filenames in os.walk(str(server_dir)): + dst_root = addon_output_dir + if root != src_root: + rel_root = root[src_root_offset:] + dst_root = dst_root / rel_root + + dst_root.mkdir(parents=True, exist_ok=True) + for filename in filenames: + src_path = os.path.join(root, filename) + shutil.copy(src_path, str(dst_root)) + + if create_zip: + create_addon_zip( + output_dir, addon_dir.name, addon_version, keep_source + ) + + +def main(create_zip=True, keep_source=False): + current_dir = Path(os.path.dirname(os.path.abspath(__file__))) + root_dir = current_dir.parent + output_dir = current_dir / "packages" + print("Package creation started...") + + # Make sure package dir is empty + if output_dir.exists(): + shutil.rmtree(str(output_dir)) + # Make sure output dir is created + output_dir.mkdir(parents=True) + + for addon_dir in current_dir.iterdir(): + if not addon_dir.is_dir(): + continue + + server_dir = addon_dir / "server" + if not server_dir.exists(): + continue + + if addon_dir.name == "openpype": + create_openpype_package( + addon_dir, output_dir, root_dir, create_zip, keep_source + ) + + else: 
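+            # Addons other than 'openpype' only get their 'server' folder
+            # copied; there is no bundled client code to zip for them.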
+            create_addon_package(
+                addon_dir, output_dir, create_zip, keep_source
+            )
+
+        print(f"- package '{addon_dir.name}' created")
+    print(f"Package creation finished. Output directory: {output_dir}")
+
+
+if __name__ == "__main__":
+    create_zip = "--skip-zip" not in sys.argv
+    keep_sources = "--keep-sources" in sys.argv
+    main(create_zip, keep_sources)
diff --git a/server_addon/deadline/server/__init__.py b/server_addon/deadline/server/__init__.py
new file mode 100644
index 00000000000..36d04189a94
--- /dev/null
+++ b/server_addon/deadline/server/__init__.py
@@ -0,0 +1,17 @@
+from typing import Type
+
+from ayon_server.addons import BaseServerAddon
+
+from .version import __version__
+from .settings import DeadlineSettings, DEFAULT_VALUES
+
+
+class Deadline(BaseServerAddon):
+    name = "deadline"
+    title = "Deadline"
+    version = __version__
+    settings_model: Type[DeadlineSettings] = DeadlineSettings
+
+    async def get_default_settings(self):
+        settings_model_cls = self.get_settings_model()
+        return settings_model_cls(**DEFAULT_VALUES)
diff --git a/server_addon/deadline/server/settings/__init__.py b/server_addon/deadline/server/settings/__init__.py
new file mode 100644
index 00000000000..0307862afa6
--- /dev/null
+++ b/server_addon/deadline/server/settings/__init__.py
@@ -0,0 +1,10 @@
+from .main import (
+    DeadlineSettings,
+    DEFAULT_VALUES,
+)
+
+
+__all__ = (
+    "DeadlineSettings",
+    "DEFAULT_VALUES",
+)
diff --git a/server_addon/deadline/server/settings/main.py b/server_addon/deadline/server/settings/main.py
new file mode 100644
index 00000000000..f158b7464dd
--- /dev/null
+++ b/server_addon/deadline/server/settings/main.py
@@ -0,0 +1,48 @@
+from pydantic import Field, validator
+
+from ayon_server.settings import BaseSettingsModel, ensure_unique_names
+
+from .publish_plugins import (
+    PublishPluginsModel,
+    DEFAULT_DEADLINE_PLUGINS_SETTINGS
+)
+
+
+class ServerListSubmodel(BaseSettingsModel):
+    _layout = "compact"
+    name: str = Field(title="Name")
+    value: str = Field(title="Value")
+
+
+class DeadlineSettings(BaseSettingsModel):
+    deadline_urls: list[ServerListSubmodel] = Field(
+        default_factory=list,
+        title="System Deadline Webservice URLs",
+        scope=["studio"],
+    )
+    deadline_servers: list[str] = Field(
+        title="Project deadline servers",
+        section="---",
+    )
+    publish: PublishPluginsModel = Field(
+        default_factory=PublishPluginsModel,
+        title="Publish Plugins",
+    )
+
+    @validator("deadline_urls")
+    def validate_unique_names(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+DEFAULT_VALUES = {
+    "deadline_urls": [
+        {
+            "name": "default",
+            "value": "http://127.0.0.1:8082"
+        }
+    ],
+    # TODO: this needs to be dynamic from "deadline_urls"
+    "deadline_servers": [],
+    "publish": DEFAULT_DEADLINE_PLUGINS_SETTINGS
+}
diff --git a/server_addon/deadline/server/settings/publish_plugins.py b/server_addon/deadline/server/settings/publish_plugins.py
new file mode 100644
index 00000000000..8d1b6673452
--- /dev/null
+++ b/server_addon/deadline/server/settings/publish_plugins.py
@@ -0,0 +1,435 @@
+from pydantic import Field, validator
+
+from ayon_server.settings import BaseSettingsModel, ensure_unique_names
+
+
+class CollectDefaultDeadlineServerModel(BaseSettingsModel):
+    """Settings for the Collect Default Deadline Webservice plugin."""
+
+    pass_mongo_url: bool = Field(title="Pass Mongo url to job")
+
+
+class CollectDeadlinePoolsModel(BaseSettingsModel):
+    """Settings for Deadline default pools."""
+
+    primary_pool: str = Field(title="Primary Pool")
+
+    secondary_pool: str = Field(title="Secondary Pool")
+
+
+class ValidateExpectedFilesModel(BaseSettingsModel):
+    enabled: bool = Field(True, title="Enabled")
+    active: bool = Field(True, title="Active")
+    allow_user_override: bool = Field(
+        True, title="Allow user change frame range"
+    )
+    families: list[str] = Field(
+        default_factory=list, title="Trigger on families"
+    )
+    targets: list[str] = Field(
+        default_factory=list, title="Trigger for plugins"
+    )
+
+
+def tile_assembler_enum():
+    """Return a list of value/label dicts for the enumerator.
+
+    Returning a list of dicts is used to allow for a custom label to be
+    displayed in the UI.
+    """
+    return [
+        {
+            "value": "DraftTileAssembler",
+            "label": "Draft Tile Assembler"
+        },
+        {
+            "value": "OpenPypeTileAssembler",
+            "label": "Open Image IO"
+        }
+    ]
Field(title="Active") + priority: int = Field(title="Priority") + chunk_size: int = Field(title="Chunk Size") + concurrent_tasks: int = Field(title="Number of concurrent tasks") + group: str = Field(title="Group") + department: str = Field(title="Department") + use_gpu: bool = Field(title="Use GPU") + + env_allowed_keys: list[str] = Field( + default_factory=list, + title="Allowed environment keys" + ) + + env_search_replace_values: list[EnvSearchReplaceSubmodel] = Field( + default_factory=list, + title="Search & replace in environment values", + ) + + limit_groups: list[LimitGroupsSubmodel] = Field( + default_factory=list, + title="Limit Groups", + ) + + @validator("limit_groups", "env_allowed_keys", "env_search_replace_values") + def validate_unique_names(cls, value): + ensure_unique_names(value) + return value + + +class HarmonySubmitDeadlineModel(BaseSettingsModel): + """Harmony deadline submitter settings.""" + + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + use_published: bool = Field(title="Use Published scene") + priority: int = Field(title="Priority") + chunk_size: int = Field(title="Chunk Size") + group: str = Field(title="Group") + department: str = Field(title="Department") + + +class AfterEffectsSubmitDeadlineModel(BaseSettingsModel): + """After Effects deadline submitter settings.""" + + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + use_published: bool = Field(title="Use Published scene") + priority: int = Field(title="Priority") + chunk_size: int = Field(title="Chunk Size") + group: str = Field(title="Group") + department: str = Field(title="Department") + multiprocess: bool = Field(title="Optional") + + +class CelactionSubmitDeadlineModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + deadline_department: str = Field("", title="Deadline apartment") + deadline_priority: int = Field(50, title="Deadline priority") + deadline_pool: str = Field("", title="Deadline pool") + deadline_pool_secondary: str = Field("", title="Deadline pool (secondary)") + deadline_group: str = Field("", title="Deadline Group") + deadline_chunk_size: int = Field(10, title="Deadline Chunk size") + deadline_job_delay: str = Field( + "", title="Delay job (timecode dd:hh:mm:ss)" + ) + + +class AOVFilterSubmodel(BaseSettingsModel): + _layout = "expanded" + name: str = Field(title="Host") + value: list[str] = Field( + default_factory=list, + title="AOV regex" + ) + + +class ProcessSubmittedJobOnFarmModel(BaseSettingsModel): + """Process submitted job on farm.""" + + enabled: bool = Field(title="Enabled") + deadline_department: str = Field(title="Department") + deadline_pool: str = Field(title="Pool") + deadline_group: str = Field(title="Group") + deadline_chunk_size: int = Field(title="Chunk Size") + deadline_priority: int = Field(title="Priority") + publishing_script: str = Field(title="Publishing script path") + skip_integration_repre_list: list[str] = Field( + default_factory=list, + title="Skip integration of representation with ext" + ) + aov_filter: list[AOVFilterSubmodel] = Field( + default_factory=list, + title="Reviewable products filter", + ) + + @validator("aov_filter", "skip_integration_repre_list") + def validate_unique_names(cls, value): + ensure_unique_names(value) + return value + + +class PublishPluginsModel(BaseSettingsModel): + CollectDefaultDeadlineServer: CollectDefaultDeadlineServerModel = Field( + 
default_factory=CollectDefaultDeadlineServerModel, + title="Default Deadline Webservice") + CollectDefaultDeadlineServer: CollectDefaultDeadlineServerModel = Field( + default_factory=CollectDefaultDeadlineServerModel, + title="Default Deadline Webservice") + CollectDeadlinePools: CollectDeadlinePoolsModel = Field( + default_factory=CollectDeadlinePoolsModel, + title="Default Pools") + ValidateExpectedFiles: ValidateExpectedFilesModel = Field( + default_factory=ValidateExpectedFilesModel, + title="Validate Expected Files" + ) + MayaSubmitDeadline: MayaSubmitDeadlineModel = Field( + default_factory=MayaSubmitDeadlineModel, + title="Maya Submit to deadline") + MaxSubmitDeadline: MaxSubmitDeadlineModel = Field( + default_factory=MaxSubmitDeadlineModel, + title="Max Submit to deadline") + FusionSubmitDeadline: FusionSubmitDeadlineModel = Field( + default_factory=FusionSubmitDeadlineModel, + title="Fusion submit to Deadline") + NukeSubmitDeadline: NukeSubmitDeadlineModel = Field( + default_factory=NukeSubmitDeadlineModel, + title="Nuke Submit to deadline") + HarmonySubmitDeadline: HarmonySubmitDeadlineModel = Field( + default_factory=HarmonySubmitDeadlineModel, + title="Harmony Submit to deadline") + AfterEffectsSubmitDeadline: AfterEffectsSubmitDeadlineModel = Field( + default_factory=AfterEffectsSubmitDeadlineModel, + title="After Effects to deadline") + CelactionSubmitDeadline: CelactionSubmitDeadlineModel = Field( + default_factory=CelactionSubmitDeadlineModel, + title="Celaction Submit Deadline" + ) + ProcessSubmittedJobOnFarm: ProcessSubmittedJobOnFarmModel = Field( + default_factory=ProcessSubmittedJobOnFarmModel, + title="Process submitted job on farm.") + + +DEFAULT_DEADLINE_PLUGINS_SETTINGS = { + "CollectDefaultDeadlineServer": { + "pass_mongo_url": True + }, + "CollectDeadlinePools": { + "primary_pool": "", + "secondary_pool": "" + }, + "ValidateExpectedFiles": { + "enabled": True, + "active": True, + "allow_user_override": True, + "families": [ + "render" + ], + "targets": [ + "deadline" + ] + }, + "MayaSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "tile_assembler_plugin": "DraftTileAssembler", + "use_published": True, + "import_reference": False, + "asset_dependencies": True, + "strict_error_checking": True, + "priority": 50, + "tile_priority": 50, + "group": "none", + "limit": [], + # this used to be empty dict + "jobInfo": "", + # this used to be empty dict + "pluginInfo": "", + "scene_patches": [] + }, + "MaxSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "use_published": True, + "priority": 50, + "chunk_size": 10, + "group": "none" + }, + "FusionSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "priority": 50, + "chunk_size": 10, + "concurrent_tasks": 1, + "group": "" + }, + "NukeSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "priority": 50, + "chunk_size": 10, + "concurrent_tasks": 1, + "group": "", + "department": "", + "use_gpu": True, + "env_allowed_keys": [], + "env_search_replace_values": [], + "limit_groups": [] + }, + "HarmonySubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "use_published": True, + "priority": 50, + "chunk_size": 10000, + "group": "", + "department": "" + }, + "AfterEffectsSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "use_published": True, + "priority": 50, + "chunk_size": 10000, + "group": "", + "department": "", + "multiprocess": True + }, + 
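+    # NOTE: "chunk_size" maps to Deadline's frames-per-task. The 10000 used
+    # by the Harmony and After Effects submitters above effectively keeps
+    # the whole frame range in a single task.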
"CelactionSubmitDeadline": { + "enabled": True, + "deadline_department": "", + "deadline_priority": 50, + "deadline_pool": "", + "deadline_pool_secondary": "", + "deadline_group": "", + "deadline_chunk_size": 10, + "deadline_job_delay": "00:00:00:00" + }, + "ProcessSubmittedJobOnFarm": { + "enabled": True, + "deadline_department": "", + "deadline_pool": "", + "deadline_group": "", + "deadline_chunk_size": 1, + "deadline_priority": 50, + "publishing_script": "", + "skip_integration_repre_list": [], + "aov_filter": [ + { + "name": "maya", + "value": [ + ".*([Bb]eauty).*" + ] + }, + { + "name": "aftereffects", + "value": [ + ".*" + ] + }, + { + "name": "celaction", + "value": [ + ".*" + ] + }, + { + "name": "harmony", + "value": [ + ".*" + ] + }, + { + "name": "max", + "value": [ + ".*" + ] + }, + { + "name": "fusion", + "value": [ + ".*" + ] + } + ] + } +} diff --git a/server_addon/deadline/server/version.py b/server_addon/deadline/server/version.py new file mode 100644 index 00000000000..485f44ac21b --- /dev/null +++ b/server_addon/deadline/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/flame/server/__init__.py b/server_addon/flame/server/__init__.py new file mode 100644 index 00000000000..7d5eb3960f7 --- /dev/null +++ b/server_addon/flame/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import FlameSettings, DEFAULT_VALUES + + +class FlameAddon(BaseServerAddon): + name = "flame" + title = "Flame" + version = __version__ + settings_model: Type[FlameSettings] = FlameSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/flame/server/settings/__init__.py b/server_addon/flame/server/settings/__init__.py new file mode 100644 index 00000000000..39b8220d407 --- /dev/null +++ b/server_addon/flame/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + FlameSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "FlameSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/flame/server/settings/create_plugins.py b/server_addon/flame/server/settings/create_plugins.py new file mode 100644 index 00000000000..374a7368d24 --- /dev/null +++ b/server_addon/flame/server/settings/create_plugins.py @@ -0,0 +1,120 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class CreateShotClipModel(BaseSettingsModel): + hierarchy: str = Field( + "shot", + title="Shot parent hierarchy", + section="Shot Hierarchy And Rename Settings" + ) + useShotName: bool = Field( + True, + title="Use Shot Name", + ) + clipRename: bool = Field( + False, + title="Rename clips", + ) + clipName: str = Field( + "{sequence}{shot}", + title="Clip name template" + ) + segmentIndex: bool = Field( + True, + title="Accept segment order" + ) + countFrom: int = Field( + 10, + title="Count sequence from" + ) + countSteps: int = Field( + 10, + title="Stepping number" + ) + + folder: str = Field( + "shots", + title="{folder}", + section="Shot Template Keywords" + ) + episode: str = Field( + "ep01", + title="{episode}" + ) + sequence: str = Field( + "a", + title="{sequence}" + ) + track: str = Field( + "{_track_}", + title="{track}" + ) + shot: str = Field( + "####", + title="{shot}" + ) + + vSyncOn: bool = Field( + False, + title="Enable Vertical Sync", + section="Vertical Synchronization Of Attributes" + 
) + + workfileFrameStart: int = Field( + 1001, + title="Workfiles Start Frame", + section="Shot Attributes" + ) + handleStart: int = Field( + 10, + title="Handle start (head)" + ) + handleEnd: int = Field( + 10, + title="Handle end (tail)" + ) + includeHandles: bool = Field( + False, + title="Enable handles including" + ) + retimedHandles: bool = Field( + True, + title="Enable retimed handles" + ) + retimedFramerange: bool = Field( + True, + title="Enable retimed shot frameranges" + ) + + +class CreatePuginsModel(BaseSettingsModel): + CreateShotClip: CreateShotClipModel = Field( + default_factory=CreateShotClipModel, + title="Create Shot Clip" + ) + + +DEFAULT_CREATE_SETTINGS = { + "CreateShotClip": { + "hierarchy": "{folder}/{sequence}", + "useShotName": True, + "clipRename": False, + "clipName": "{sequence}{shot}", + "segmentIndex": True, + "countFrom": 10, + "countSteps": 10, + "folder": "shots", + "episode": "ep01", + "sequence": "a", + "track": "{_track_}", + "shot": "####", + "vSyncOn": False, + "workfileFrameStart": 1001, + "handleStart": 5, + "handleEnd": 5, + "includeHandles": False, + "retimedHandles": True, + "retimedFramerange": True + } +} diff --git a/server_addon/flame/server/settings/imageio.py b/server_addon/flame/server/settings/imageio.py new file mode 100644 index 00000000000..ef1e4721d12 --- /dev/null +++ b/server_addon/flame/server/settings/imageio.py @@ -0,0 +1,130 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIORemappingRulesModel(BaseSettingsModel): + host_native_name: str = Field( + title="Application native colorspace name" + ) + ocio_name: str = Field(title="OCIO colorspace name") + + +class ImageIORemappingModel(BaseSettingsModel): + rules: list[ImageIORemappingRulesModel] = Field( + default_factory=list + ) + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ProfileNamesMappingInputsModel(BaseSettingsModel): + _layout = "expanded" + + flameName: str = Field("", title="Flame name") + ocioName: str = Field("", title="OCIO name") + + +class ProfileNamesMappingModel(BaseSettingsModel): + _layout = "expanded" + + inputs: list[ProfileNamesMappingInputsModel] = Field( + default_factory=list, + title="Profile names mapping" + ) + + +class ImageIOProjectModel(BaseSettingsModel): + colourPolicy: str = Field( + "ACES 1.1", + title="Colour Policy (name or path)", + section="Project" + ) + frameDepth: str = Field( + "16-bit fp", + title="Image Depth" + ) + fieldDominance: str = Field( + "PROGRESSIVE", + title="Field Dominance" + ) + + +class FlameImageIOModel(BaseSettingsModel): + _isGroup = True + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + remapping: ImageIORemappingModel = Field( + title="Remapping colorspace names", + 
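+
+
+# NOTE (illustrative): with "clipRename" enabled, "{shot}" = "####" is a
+# zero-padded counter driven by "countFrom"/"countSteps", so the defaults
+# above would presumably name the first two clips "a0010" and "a0020".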
diff --git a/server_addon/flame/server/settings/imageio.py b/server_addon/flame/server/settings/imageio.py
new file mode 100644
index 00000000000..ef1e4721d12
--- /dev/null
+++ b/server_addon/flame/server/settings/imageio.py
@@ -0,0 +1,130 @@
+from pydantic import Field, validator
+from ayon_server.settings import BaseSettingsModel, ensure_unique_names
+
+
+class ImageIOFileRuleModel(BaseSettingsModel):
+    name: str = Field("", title="Rule name")
+    pattern: str = Field("", title="Regex pattern")
+    colorspace: str = Field("", title="Colorspace name")
+    ext: str = Field("", title="File extension")
+
+
+class ImageIOFileRulesModel(BaseSettingsModel):
+    activate_host_rules: bool = Field(False)
+    rules: list[ImageIOFileRuleModel] = Field(
+        default_factory=list,
+        title="Rules"
+    )
+
+    @validator("rules")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class ImageIORemappingRulesModel(BaseSettingsModel):
+    host_native_name: str = Field(
+        title="Application native colorspace name"
+    )
+    ocio_name: str = Field(title="OCIO colorspace name")
+
+
+class ImageIORemappingModel(BaseSettingsModel):
+    rules: list[ImageIORemappingRulesModel] = Field(
+        default_factory=list
+    )
+
+
+class ImageIOConfigModel(BaseSettingsModel):
+    override_global_config: bool = Field(
+        False,
+        title="Override global OCIO config"
+    )
+    filepath: list[str] = Field(
+        default_factory=list,
+        title="Config path"
+    )
+
+
+class ProfileNamesMappingInputsModel(BaseSettingsModel):
+    _layout = "expanded"
+
+    flameName: str = Field("", title="Flame name")
+    ocioName: str = Field("", title="OCIO name")
+
+
+class ProfileNamesMappingModel(BaseSettingsModel):
+    _layout = "expanded"
+
+    inputs: list[ProfileNamesMappingInputsModel] = Field(
+        default_factory=list,
+        title="Profile names mapping"
+    )
+
+
+class ImageIOProjectModel(BaseSettingsModel):
+    colourPolicy: str = Field(
+        "ACES 1.1",
+        title="Colour Policy (name or path)",
+        section="Project"
+    )
+    frameDepth: str = Field(
+        "16-bit fp",
+        title="Image Depth"
+    )
+    fieldDominance: str = Field(
+        "PROGRESSIVE",
+        title="Field Dominance"
+    )
+
+
+class FlameImageIOModel(BaseSettingsModel):
+    _isGroup = True
+    activate_host_color_management: bool = Field(
+        True, title="Enable Color Management"
+    )
+    remapping: ImageIORemappingModel = Field(
+        title="Remapping colorspace names",
+        default_factory=ImageIORemappingModel
+    )
+    ocio_config: ImageIOConfigModel = Field(
+        default_factory=ImageIOConfigModel,
+        title="OCIO config"
+    )
+    file_rules: ImageIOFileRulesModel = Field(
+        default_factory=ImageIOFileRulesModel,
+        title="File Rules"
+    )
+    # NOTE 'project' attribute was expanded to this model but that caused
+    # inconsistency with v3 settings and harder conversion handling
+    # - it can be moved back but keep in mind that it must be handled in v3
+    #   conversion script too
+    project: ImageIOProjectModel = Field(
+        default_factory=ImageIOProjectModel,
+        title="Project"
+    )
+    profilesMapping: ProfileNamesMappingModel = Field(
+        default_factory=ProfileNamesMappingModel,
+        title="Profile names mapping"
+    )
+
+
+DEFAULT_IMAGEIO_SETTINGS = {
+    "project": {
+        "colourPolicy": "ACES 1.1",
+        "frameDepth": "16-bit fp",
+        "fieldDominance": "PROGRESSIVE"
+    },
+    "profilesMapping": {
+        "inputs": [
+            {
+                "flameName": "ACEScg",
+                "ocioName": "ACES - ACEScg"
+            },
+            {
+                "flameName": "Rec.709 video",
+                "ocioName": "Output - Rec.709"
+            }
+        ]
+    }
+}
diff --git a/server_addon/flame/server/settings/loader_plugins.py b/server_addon/flame/server/settings/loader_plugins.py
new file mode 100644
index 00000000000..6c27b926c2a
--- /dev/null
+++ b/server_addon/flame/server/settings/loader_plugins.py
@@ -0,0 +1,99 @@
+from ayon_server.settings import Field, BaseSettingsModel
+
+
+class LoadClipModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    reel_group_name: str = Field(
+        "OpenPype_Reels",
+        title="Reel group name"
+    )
+    reel_name: str = Field(
+        "Loaded",
+        title="Reel name"
+    )
+
+    clip_name_template: str = Field(
+        "{folder[name]}_{product[name]}<_{output}>",
+        title="Clip name template"
+    )
+    layer_rename_template: str = Field("", title="Layer name template")
+    layer_rename_patterns: list[str] = Field(
+        default_factory=list,
+        title="Layer rename patterns",
+    )
+
+
+class LoadClipBatchModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    reel_name: str = Field(
+        "OP_LoadedReel",
+        title="Reel name"
+    )
+    clip_name_template: str = Field(
+        "{batch}_{folder[name]}_{product[name]}<_{output}>",
+        title="Clip name template"
+    )
+    layer_rename_template: str = Field("", title="Layer name template")
+    layer_rename_patterns: list[str] = Field(
+        default_factory=list,
+        title="Layer rename patterns",
+    )
+
+
+class LoaderPluginsModel(BaseSettingsModel):
+    LoadClip: LoadClipModel = Field(
+        default_factory=LoadClipModel,
+        title="Load Clip"
+    )
+    LoadClipBatch: LoadClipBatchModel = Field(
+        default_factory=LoadClipBatchModel,
+        title="Load as clip to current batch"
+    )
+
+
+DEFAULT_LOADER_SETTINGS = {
+    "LoadClip": {
+        "enabled": True,
+        "product_types": [
+            "render2d",
+            "source",
+            "plate",
+            "render",
+            "review"
+        ],
+        "reel_group_name": "OpenPype_Reels",
+        "reel_name": "Loaded",
+        "clip_name_template": "{folder[name]}_{product[name]}<_{output}>",
+        "layer_rename_template": "{folder[name]}_{product[name]}<_{output}>",
+        "layer_rename_patterns": [
+            "rgb",
+            "rgba"
+        ]
+    },
+    "LoadClipBatch": {
+        "enabled": True,
+        "product_types": [
+            "render2d",
+            "source",
+            "plate",
+            "render",
+            "review"
+        ],
+        "reel_name": "OP_LoadedReel",
+        "clip_name_template": "{batch}_{folder[name]}_{product[name]}<_{output}>",
+        "layer_rename_template": "{folder[name]}_{product[name]}<_{output}>",
+        "layer_rename_patterns": [
+            "rgb",
+            "rgba"
+        ]
+    }
+}
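+
+
+# NOTE (illustrative): the "<...>" part of a clip name template marks an
+# optional section. With made-up values folder[name]="sh010" and
+# product[name]="plateMain", "{folder[name]}_{product[name]}<_{output}>"
+# would resolve to "sh010_plateMain_exr" when an 'output' is set and to
+# "sh010_plateMain" when it is not.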
"rgba" + ] + } +} diff --git a/server_addon/flame/server/settings/main.py b/server_addon/flame/server/settings/main.py new file mode 100644 index 00000000000..f28de6641bb --- /dev/null +++ b/server_addon/flame/server/settings/main.py @@ -0,0 +1,33 @@ +from ayon_server.settings import Field, BaseSettingsModel + +from .imageio import FlameImageIOModel, DEFAULT_IMAGEIO_SETTINGS +from .create_plugins import CreatePuginsModel, DEFAULT_CREATE_SETTINGS +from .publish_plugins import PublishPuginsModel, DEFAULT_PUBLISH_SETTINGS +from .loader_plugins import LoaderPluginsModel, DEFAULT_LOADER_SETTINGS + + +class FlameSettings(BaseSettingsModel): + imageio: FlameImageIOModel = Field( + default_factory=FlameImageIOModel, + title="Color Management (ImageIO)" + ) + create: CreatePuginsModel = Field( + default_factory=CreatePuginsModel, + title="Create plugins" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish plugins" + ) + load: LoaderPluginsModel = Field( + default_factory=LoaderPluginsModel, + title="Loader plugins" + ) + + +DEFAULT_VALUES = { + "imageio": DEFAULT_IMAGEIO_SETTINGS, + "create": DEFAULT_CREATE_SETTINGS, + "publish": DEFAULT_PUBLISH_SETTINGS, + "load": DEFAULT_LOADER_SETTINGS +} diff --git a/server_addon/flame/server/settings/publish_plugins.py b/server_addon/flame/server/settings/publish_plugins.py new file mode 100644 index 00000000000..ea7f109f739 --- /dev/null +++ b/server_addon/flame/server/settings/publish_plugins.py @@ -0,0 +1,190 @@ +from ayon_server.settings import Field, BaseSettingsModel, task_types_enum + + +class XMLPresetAttrsFromCommentsModel(BaseSettingsModel): + _layout = "expanded" + name: str = Field("", title="Attribute name") + type: str = Field( + default_factory=str, + title="Attribute type", + enum_resolver=lambda: ["number", "float", "string"] + ) + + +class AddTasksModel(BaseSettingsModel): + _layout = "expanded" + name: str = Field("", title="Task name") + type: str = Field( + default_factory=str, + title="Task type", + enum_resolver=task_types_enum + ) + create_batch_group: bool = Field( + True, + title="Create batch group" + ) + + +class CollectTimelineInstancesModel(BaseSettingsModel): + _isGroup = True + + xml_preset_attrs_from_comments: list[XMLPresetAttrsFromCommentsModel] = Field( + default_factory=list, + title="XML presets attributes parsable from segment comments" + ) + add_tasks: list[AddTasksModel] = Field( + default_factory=list, + title="Add tasks" + ) + + +class ExportPresetsMappingModel(BaseSettingsModel): + _layout = "expanded" + + name: str = Field( + ..., + title="Name" + ) + active: bool = Field(True, title="Is active") + export_type: str = Field( + "File Sequence", + title="Eport clip type", + enum_resolver=lambda: ["Movie", "File Sequence", "Sequence Publish"] + ) + ext: str = Field("exr", title="Output extension") + xml_preset_file: str = Field( + "OpenEXR (16-bit fp DWAA).xml", + title="XML preset file (with ext)" + ) + colorspace_out: str = Field( + "ACES - ACEScg", + title="Output color (imageio)" + ) + # TODO remove when resolved or v3 is not a thing anymore + # NOTE next 4 attributes were grouped under 'other_parameters' but that + # created inconsistency with v3 settings and harder conversion handling + # - it can be moved back but keep in mind that it must be handled in v3 + # conversion script too + xml_preset_dir: str = Field( + "", + title="XML preset directory" + ) + parsed_comment_attrs: bool = Field( + True, + title="Parsed comment attributes" + ) + representation_add_range: 
bool = Field( + True, + title="Add range to representation name" + ) + representation_tags: list[str] = Field( + default_factory=list, + title="Representation tags" + ) + load_to_batch_group: bool = Field( + True, + title="Load to batch group reel" + ) + batch_group_loader_name: str = Field( + "LoadClipBatch", + title="Use loader name" + ) + filter_path_regex: str = Field( + ".*", + title="Regex in clip path" + ) + + +class ExtractProductResourcesModel(BaseSettingsModel): + _isGroup = True + + keep_original_representation: bool = Field( + False, + title="Publish clip's original media" + ) + export_presets_mapping: list[ExportPresetsMappingModel] = Field( + default_factory=list, + title="Export presets mapping" + ) + + +class IntegrateBatchGroupModel(BaseSettingsModel): + enabled: bool = Field( + False, + title="Enabled" + ) + + +class PublishPuginsModel(BaseSettingsModel): + CollectTimelineInstances: CollectTimelineInstancesModel = Field( + default_factory=CollectTimelineInstancesModel, + title="Collect Timeline Instances" + ) + + ExtractProductResources: ExtractProductResourcesModel = Field( + default_factory=ExtractProductResourcesModel, + title="Extract Product Resources" + ) + + IntegrateBatchGroup: IntegrateBatchGroupModel = Field( + default_factory=IntegrateBatchGroupModel, + title="IntegrateBatchGroup" + ) + + +DEFAULT_PUBLISH_SETTINGS = { + "CollectTimelineInstances": { + "xml_preset_attrs_from_comments": [ + { + "name": "width", + "type": "number" + }, + { + "name": "height", + "type": "number" + }, + { + "name": "pixelRatio", + "type": "float" + }, + { + "name": "resizeType", + "type": "string" + }, + { + "name": "resizeFilter", + "type": "string" + } + ], + "add_tasks": [ + { + "name": "compositing", + "type": "Compositing", + "create_batch_group": True + } + ] + }, + "ExtractProductResources": { + "keep_original_representation": False, + "export_presets_mapping": [ + { + "name": "exr16fpdwaa", + "active": True, + "export_type": "File Sequence", + "ext": "exr", + "xml_preset_file": "OpenEXR (16-bit fp DWAA).xml", + "colorspace_out": "ACES - ACEScg", + "xml_preset_dir": "", + "parsed_comment_attrs": True, + "representation_add_range": True, + "representation_tags": [], + "load_to_batch_group": True, + "batch_group_loader_name": "LoadClipBatch", + "filter_path_regex": ".*" + } + ] + }, + "IntegrateBatchGroup": { + "enabled": False + } +} diff --git a/server_addon/flame/server/version.py b/server_addon/flame/server/version.py new file mode 100644 index 00000000000..3dc1f76bc69 --- /dev/null +++ b/server_addon/flame/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/fusion/server/__init__.py b/server_addon/fusion/server/__init__.py new file mode 100644 index 00000000000..4d43f288128 --- /dev/null +++ b/server_addon/fusion/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import FusionSettings, DEFAULT_VALUES + + +class FusionAddon(BaseServerAddon): + name = "fusion" + title = "Fusion" + version = __version__ + settings_model: Type[FusionSettings] = FusionSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/fusion/server/imageio.py b/server_addon/fusion/server/imageio.py new file mode 100644 index 00000000000..fe867af4243 --- /dev/null +++ b/server_addon/fusion/server/imageio.py @@ -0,0 
+1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class FusionImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/fusion/server/settings.py b/server_addon/fusion/server/settings.py new file mode 100644 index 00000000000..92fb362c66b --- /dev/null +++ b/server_addon/fusion/server/settings.py @@ -0,0 +1,95 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, +) + +from .imageio import FusionImageIOModel + + +class CopyFusionSettingsModel(BaseSettingsModel): + copy_path: str = Field("", title="Local Fusion profile directory") + copy_status: bool = Field(title="Copy profile on first launch") + force_sync: bool = Field(title="Resync profile on each launch") + + +def _create_saver_instance_attributes_enum(): + return [ + { + "value": "reviewable", + "label": "Reviewable" + }, + { + "value": "farm_rendering", + "label": "Farm rendering" + } + ] + + +class CreateSaverPluginModel(BaseSettingsModel): + _isGroup = True + temp_rendering_path_template: str = Field( + "", title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + default_factory=list, + title="Default variants" + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=_create_saver_instance_attributes_enum, + title="Instance attributes" + ) + + +class CreatPluginsModel(BaseSettingsModel): + CreateSaver: CreateSaverPluginModel = Field( + default_factory=CreateSaverPluginModel, + title="Create Saver" + ) + + +class FusionSettings(BaseSettingsModel): + imageio: FusionImageIOModel = Field( + default_factory=FusionImageIOModel, + title="Color Management (ImageIO)" + ) + copy_fusion_settings: CopyFusionSettingsModel = Field( + default_factory=CopyFusionSettingsModel, + title="Local Fusion profile settings" + ) + create: CreatPluginsModel = Field( + default_factory=CreatPluginsModel, + title="Creator plugins" + ) + + +DEFAULT_VALUES = { + "imageio": { + "ocio_config": { + "enabled": False, + "filepath": [] + }, + "file_rules": { + "enabled": False, + "rules": [] + } + }, + "copy_fusion_settings": { + "copy_path": "~/.openpype/hosts/fusion/profiles", + "copy_status": False, + "force_sync": False + }, + "create": { + "CreateSaver": { + "temp_rendering_path_template": "{workdir}/renders/fusion/{product[name]}/{product[name]}.{frame}.{ext}", + "default_variants": [ + "Main", + 
"Mask" + ], + "instance_attributes": [ + "reviewable", + "farm_rendering" + ] + } + } +} diff --git a/server_addon/fusion/server/version.py b/server_addon/fusion/server/version.py new file mode 100644 index 00000000000..3dc1f76bc69 --- /dev/null +++ b/server_addon/fusion/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/harmony/LICENSE b/server_addon/harmony/LICENSE new file mode 100644 index 00000000000..d6456956733 --- /dev/null +++ b/server_addon/harmony/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/server_addon/harmony/README.md b/server_addon/harmony/README.md new file mode 100644 index 00000000000..d971fa39f92 --- /dev/null +++ b/server_addon/harmony/README.md @@ -0,0 +1,4 @@ +ToonBoom Harmony Addon +=============== + +Integration with ToonBoom Harmony. diff --git a/server_addon/harmony/server/__init__.py b/server_addon/harmony/server/__init__.py new file mode 100644 index 00000000000..4ecda1989ec --- /dev/null +++ b/server_addon/harmony/server/__init__.py @@ -0,0 +1,16 @@ +from ayon_server.addons import BaseServerAddon + +from .settings import HarmonySettings, DEFAULT_HARMONY_SETTING +from .version import __version__ + + +class Harmony(BaseServerAddon): + name = "harmony" + title = "Harmony" + version = __version__ + + settings_model = HarmonySettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_HARMONY_SETTING) diff --git a/server_addon/harmony/server/settings/__init__.py b/server_addon/harmony/server/settings/__init__.py new file mode 100644 index 00000000000..4a8118d4da8 --- /dev/null +++ b/server_addon/harmony/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + HarmonySettings, + DEFAULT_HARMONY_SETTING, +) + + +__all__ = ( + "HarmonySettings", + "DEFAULT_HARMONY_SETTING", +) diff --git a/server_addon/harmony/server/settings/imageio.py b/server_addon/harmony/server/settings/imageio.py new file mode 100644 index 00000000000..4e01fae3d42 --- /dev/null +++ b/server_addon/harmony/server/settings/imageio.py @@ -0,0 +1,55 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + 
+ @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIORemappingRulesModel(BaseSettingsModel): + host_native_name: str = Field( + title="Application native colorspace name" + ) + ocio_name: str = Field(title="OCIO colorspace name") + + +class HarmonyImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/harmony/server/settings/main.py b/server_addon/harmony/server/settings/main.py new file mode 100644 index 00000000000..0936bc1fc7f --- /dev/null +++ b/server_addon/harmony/server/settings/main.py @@ -0,0 +1,63 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from .imageio import HarmonyImageIOModel +from .publish_plugins import HarmonyPublishPlugins + + +class HarmonySettings(BaseSettingsModel): + """Harmony Project Settings.""" + + imageio: HarmonyImageIOModel = Field( + default_factory=HarmonyImageIOModel, + title="OCIO config" + ) + publish: HarmonyPublishPlugins = Field( + default_factory=HarmonyPublishPlugins, + title="Publish plugins" + ) + + +DEFAULT_HARMONY_SETTING = { + "load": { + "ImageSequenceLoader": { + "family": [ + "shot", + "render", + "image", + "plate", + "reference" + ], + "representations": [ + "jpeg", + "png", + "jpg" + ] + } + }, + "publish": { + "CollectPalettes": { + "allowed_tasks": [ + ".*" + ] + }, + "ValidateAudio": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateSceneSettings": { + "enabled": True, + "optional": True, + "active": True, + "frame_check_filter": [], + "skip_resolution_check": [], + "skip_timelines_check": [] + } + } +} diff --git a/server_addon/harmony/server/settings/publish_plugins.py b/server_addon/harmony/server/settings/publish_plugins.py new file mode 100644 index 00000000000..bdaec2bbd47 --- /dev/null +++ b/server_addon/harmony/server/settings/publish_plugins.py @@ -0,0 +1,76 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CollectPalettesPlugin(BaseSettingsModel): + """Set regular expressions to filter triggering on specific task names. '.*' means on all.""" # noqa + + allowed_tasks: list[str] = Field( + default_factory=list, + title="Allowed tasks" + ) + + +class ValidateAudioPlugin(BaseSettingsModel): + """Check if scene contains audio track.""" # + _isGroup = True + enabled: bool = True + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + + +class ValidateContainersPlugin(BaseSettingsModel): + """Check if loaded container is scene are latest versions.""" + _isGroup = True + enabled: bool = True + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + + +class ValidateSceneSettingsPlugin(BaseSettingsModel): + """Validate if FrameStart, FrameEnd and Resolution match shot data in DB. 
+ Use regular expressions to limit validations only on particular asset + or task names.""" + _isGroup = True + enabled: bool = True + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + + frame_check_filter: list[str] = Field( + default_factory=list, + title="Skip Frame check for Assets with name containing" + ) + + skip_resolution_check: list[str] = Field( + default_factory=list, + title="Skip Resolution Check for Tasks" + ) + + skip_timelines_check: list[str] = Field( + default_factory=list, + title="Skip Timeline Check for Tasks" + ) + + +class HarmonyPublishPlugins(BaseSettingsModel): + + CollectPalettes: CollectPalettesPlugin = Field( + title="Collect Palettes", + default_factory=CollectPalettesPlugin, + ) + + ValidateAudio: ValidateAudioPlugin = Field( + title="Validate Audio", + default_factory=ValidateAudioPlugin, + ) + + ValidateContainers: ValidateContainersPlugin = Field( + title="Validate Containers", + default_factory=ValidateContainersPlugin, + ) + + ValidateSceneSettings: ValidateSceneSettingsPlugin = Field( + title="Validate Scene Settings", + default_factory=ValidateSceneSettingsPlugin, + ) diff --git a/server_addon/harmony/server/version.py b/server_addon/harmony/server/version.py new file mode 100644 index 00000000000..df0c92f1e27 --- /dev/null +++ b/server_addon/harmony/server/version.py @@ -0,0 +1,3 @@ +# -*- coding: utf-8 -*- +"""Package declaring addon version.""" +__version__ = "0.1.2" diff --git a/server_addon/hiero/server/__init__.py b/server_addon/hiero/server/__init__.py new file mode 100644 index 00000000000..d0f9bcefc36 --- /dev/null +++ b/server_addon/hiero/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import HieroSettings, DEFAULT_VALUES + + +class HieroAddon(BaseServerAddon): + name = "hiero" + title = "Hiero" + version = __version__ + settings_model: Type[HieroSettings] = HieroSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/hiero/server/settings/__init__.py b/server_addon/hiero/server/settings/__init__.py new file mode 100644 index 00000000000..246c8203e93 --- /dev/null +++ b/server_addon/hiero/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + HieroSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "HieroSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/hiero/server/settings/common.py b/server_addon/hiero/server/settings/common.py new file mode 100644 index 00000000000..eb4791f93e2 --- /dev/null +++ b/server_addon/hiero/server/settings/common.py @@ -0,0 +1,98 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel +from ayon_server.types import ( + ColorRGBA_float, + ColorRGB_uint8 +) + + +class Vector2d(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + + +class Vector3d(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + z: float = Field(1.0, title="Z") + + +def formatable_knob_type_enum(): + return [ + {"value": "text", "label": "Text"}, + {"value": "number", "label": "Number"}, + {"value": "decimal_number", "label": "Decimal number"}, + {"value": "2d_vector", "label": "2D vector"}, + # "3D vector" + ] + + +class Formatable(BaseSettingsModel): + _layout = 
"compact" + + template: str = Field( + "", + placeholder="""{{key}} or {{key}};{{key}}""", + title="Template" + ) + to_type: str = Field( + "Text", + title="To Knob type", + enum_resolver=formatable_knob_type_enum, + ) + + +knob_types_enum = [ + {"value": "text", "label": "Text"}, + {"value": "formatable", "label": "Formate from template"}, + {"value": "color_gui", "label": "Color GUI"}, + {"value": "boolean", "label": "Boolean"}, + {"value": "number", "label": "Number"}, + {"value": "decimal_number", "label": "Decimal number"}, + {"value": "vector_2d", "label": "2D vector"}, + {"value": "vector_3d", "label": "3D vector"}, + {"value": "color", "label": "Color"} +] + + +class KnobModel(BaseSettingsModel): + _layout = "expanded" + + type: str = Field( + title="Type", + description="Switch between different knob types", + enum_resolver=lambda: knob_types_enum, + conditionalEnum=True + ) + name: str = Field( + title="Name", + placeholder="Name" + ) + text: str = Field("", title="Value") + color_gui: ColorRGB_uint8 = Field( + (0, 0, 255), + title="RGB Uint8", + ) + boolean: bool = Field(False, title="Value") + number: int = Field(0, title="Value") + decimal_number: float = Field(0.0, title="Value") + vector_2d: Vector2d = Field( + default_factory=Vector2d, + title="Value" + ) + vector_3d: Vector3d = Field( + default_factory=Vector3d, + title="Value" + ) + color: ColorRGBA_float = Field( + (0.0, 0.0, 1.0, 1.0), + title="RGBA Float" + ) + formatable: Formatable = Field( + default_factory=Formatable, + title="Value" + ) diff --git a/server_addon/hiero/server/settings/create_plugins.py b/server_addon/hiero/server/settings/create_plugins.py new file mode 100644 index 00000000000..daec4a7cea6 --- /dev/null +++ b/server_addon/hiero/server/settings/create_plugins.py @@ -0,0 +1,97 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class CreateShotClipModels(BaseSettingsModel): + hierarchy: str = Field( + "{folder}/{sequence}", + title="Shot parent hierarchy", + section="Shot Hierarchy And Rename Settings" + ) + clipRename: bool = Field( + True, + title="Rename clips" + ) + clipName: str = Field( + "{track}{sequence}{shot}", + title="Clip name template" + ) + countFrom: int = Field( + 10, + title="Count sequence from" + ) + countSteps: int = Field( + 10, + title="Stepping number" + ) + + folder: str = Field( + "shots", + title="{folder}", + section="Shot Template Keywords" + ) + episode: str = Field( + "ep01", + title="{episode}" + ) + sequence: str = Field( + "sq01", + title="{sequence}" + ) + track: str = Field( + "{_track_}", + title="{track}" + ) + shot: str = Field( + "sh###", + title="{shot}" + ) + + vSyncOn: bool = Field( + False, + title="Enable Vertical Sync", + section="Vertical Synchronization Of Attributes" + ) + + workfileFrameStart: int = Field( + 1001, + title="Workfiles Start Frame", + section="Shot Attributes" + ) + handleStart: int = Field( + 10, + title="Handle start (head)" + ) + handleEnd: int = Field( + 10, + title="Handle end (tail)" + ) + + +class CreatorPluginsSettings(BaseSettingsModel): + CreateShotClip: CreateShotClipModels = Field( + default_factory=CreateShotClipModels, + title="Create Shot Clip" + ) + + +DEFAULT_CREATE_SETTINGS = { + "create": { + "CreateShotClip": { + "hierarchy": "{folder}/{sequence}", + "clipRename": True, + "clipName": "{track}{sequence}{shot}", + "countFrom": 10, + "countSteps": 10, + "folder": "shots", + "episode": "ep01", + "sequence": "sq01", + "track": "{_track_}", + "shot": "sh###", + "vSyncOn": False, + 
"workfileFrameStart": 1001, + "handleStart": 10, + "handleEnd": 10 + } + } +} diff --git a/server_addon/hiero/server/settings/filters.py b/server_addon/hiero/server/settings/filters.py new file mode 100644 index 00000000000..7e2702b3b7f --- /dev/null +++ b/server_addon/hiero/server/settings/filters.py @@ -0,0 +1,19 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value diff --git a/server_addon/hiero/server/settings/imageio.py b/server_addon/hiero/server/settings/imageio.py new file mode 100644 index 00000000000..f2c27280579 --- /dev/null +++ b/server_addon/hiero/server/settings/imageio.py @@ -0,0 +1,169 @@ +from pydantic import Field, validator + +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, +) + + +def ocio_configs_switcher_enum(): + return [ + {"value": "nuke-default", "label": "nuke-default"}, + {"value": "spi-vfx", "label": "spi-vfx"}, + {"value": "spi-anim", "label": "spi-anim"}, + {"value": "aces_0.1.1", "label": "aces_0.1.1"}, + {"value": "aces_0.7.1", "label": "aces_0.7.1"}, + {"value": "aces_1.0.1", "label": "aces_1.0.1"}, + {"value": "aces_1.0.3", "label": "aces_1.0.3"}, + {"value": "aces_1.1", "label": "aces_1.1"}, + {"value": "aces_1.2", "label": "aces_1.2"}, + {"value": "aces_1.3", "label": "aces_1.3"}, + {"value": "custom", "label": "custom"} + ] + + +class WorkfileColorspaceSettings(BaseSettingsModel): + """Hiero workfile colorspace preset. """ + """# TODO: enhance settings with host api: + we need to add mapping to resolve properly keys. 
+
+    Hiero is expecting camel case key names,
+    but for better code consistency we are using snake_case:
+
+    ocio_config = ocioConfigName
+    working_space_name = workingSpace
+    int_16_name = sixteenBitLut
+    int_8_name = eightBitLut
+    float_name = floatLut
+    log_name = logLut
+    viewer_name = viewerLut
+    thumbnail_name = thumbnailLut
+    """
+
+    ocioConfigName: str = Field(
+        title="OpenColorIO Config",
+        description="Switch between OCIO configs",
+        enum_resolver=ocio_configs_switcher_enum,
+        conditionalEnum=True
+    )
+    workingSpace: str = Field(
+        title="Working Space"
+    )
+    viewerLut: str = Field(
+        title="Viewer"
+    )
+    eightBitLut: str = Field(
+        title="8-bit files"
+    )
+    sixteenBitLut: str = Field(
+        title="16-bit files"
+    )
+    logLut: str = Field(
+        title="Log files"
+    )
+    floatLut: str = Field(
+        title="Float files"
+    )
+    thumbnailLut: str = Field(
+        title="Thumbnails"
+    )
+    monitorOutLut: str = Field(
+        title="Monitor"
+    )
+
+
+class ClipColorspaceRulesItems(BaseSettingsModel):
+    _layout = "expanded"
+
+    regex: str = Field("", title="Regex pattern")
+    colorspace: str = Field("", title="Colorspace")
+
+
+class RegexInputsModel(BaseSettingsModel):
+    inputs: list[ClipColorspaceRulesItems] = Field(
+        default_factory=list,
+        title="Inputs"
+    )
+
+
+class ImageIOConfigModel(BaseSettingsModel):
+    override_global_config: bool = Field(
+        False,
+        title="Override global OCIO config"
+    )
+    filepath: list[str] = Field(
+        default_factory=list,
+        title="Config path"
+    )
+
+
+class ImageIOFileRuleModel(BaseSettingsModel):
+    name: str = Field("", title="Rule name")
+    pattern: str = Field("", title="Regex pattern")
+    colorspace: str = Field("", title="Colorspace name")
+    ext: str = Field("", title="File extension")
+
+
+class ImageIOFileRulesModel(BaseSettingsModel):
+    activate_host_rules: bool = Field(False)
+    rules: list[ImageIOFileRuleModel] = Field(
+        default_factory=list,
+        title="Rules"
+    )
+
+    @validator("rules")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class ImageIOSettings(BaseSettingsModel):
+    """Hiero color management project settings."""
+    _isGroup: bool = True
+    activate_host_color_management: bool = Field(
+        True, title="Enable Color Management"
+    )
+    ocio_config: ImageIOConfigModel = Field(
+        default_factory=ImageIOConfigModel,
+        title="OCIO config"
+    )
+    file_rules: ImageIOFileRulesModel = Field(
+        default_factory=ImageIOFileRulesModel,
+        title="File Rules"
+    )
+    workfile: WorkfileColorspaceSettings = Field(
+        default_factory=WorkfileColorspaceSettings,
+        title="Workfile"
+    )
+    """# TODO: enhance settings with host api:
+    - old settings are using `regexInputs` key but we
+      need to rename to `regex_inputs`
+    - no need for `inputs` middle part. It can stay
+      directly on `regex_inputs`
+    """
+    regexInputs: RegexInputsModel = Field(
+        default_factory=RegexInputsModel,
+        title="Assign colorspace to clips via rules"
+    )
+
+
+DEFAULT_IMAGEIO_SETTINGS = {
+    "workfile": {
+        "ocioConfigName": "nuke-default",
+        "workingSpace": "linear",
+        "viewerLut": "sRGB",
+        "eightBitLut": "sRGB",
+        "sixteenBitLut": "sRGB",
+        "logLut": "Cineon",
+        "floatLut": "linear",
+        "thumbnailLut": "sRGB",
+        "monitorOutLut": "sRGB"
+    },
+    "regexInputs": {
+        "inputs": [
+            {
+                "regex": "[^-a-zA-Z0-9](plateRef).*(?=mp4)",
+                "colorspace": "sRGB"
+            }
+        ]
+    }
+}
diff --git a/server_addon/hiero/server/settings/loader_plugins.py b/server_addon/hiero/server/settings/loader_plugins.py new file mode 100644 index 00000000000..83b3564c2a4 --- /dev/null +++ b/server_addon/hiero/server/settings/loader_plugins.py @@ -0,0 +1,38 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+
+class LoadClipModel(BaseSettingsModel):
+    enabled: bool = Field(
+        True,
+        title="Enabled"
+    )
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    clip_name_template: str = Field(
+        title="Clip name template"
+    )
+
+
+class LoaderPluginsModel(BaseSettingsModel):
+    LoadClip: LoadClipModel = Field(
+        default_factory=LoadClipModel,
+        title="Load Clip"
+    )
+
+
+DEFAULT_LOADER_PLUGINS_SETTINGS = {
+    "LoadClip": {
+        "enabled": True,
+        "product_types": [
+            "render2d",
+            "source",
+            "plate",
+            "render",
+            "review"
+        ],
+        "clip_name_template": "{folder[name]}_{product[name]}_{representation}"
+    }
+}
diff --git a/server_addon/hiero/server/settings/main.py b/server_addon/hiero/server/settings/main.py new file mode 100644 index 00000000000..47f8110c22f --- /dev/null +++ b/server_addon/hiero/server/settings/main.py @@ -0,0 +1,64 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+from .imageio import (
+    ImageIOSettings,
+    DEFAULT_IMAGEIO_SETTINGS
+)
+from .create_plugins import (
+    CreatorPluginsSettings,
+    DEFAULT_CREATE_SETTINGS
+)
+from .loader_plugins import (
+    LoaderPluginsModel,
+    DEFAULT_LOADER_PLUGINS_SETTINGS
+)
+from .publish_plugins import (
+    PublishPluginsModel,
+    DEFAULT_PUBLISH_PLUGIN_SETTINGS
+)
+from .scriptsmenu import (
+    ScriptsmenuSettings,
+    DEFAULT_SCRIPTSMENU_SETTINGS
+)
+from .filters import PublishGUIFilterItemModel
+
+
+class HieroSettings(BaseSettingsModel):
+    """Hiero addon settings."""
+
+    imageio: ImageIOSettings = Field(
+        default_factory=ImageIOSettings,
+        title="Color Management (imageio)",
+    )
+
+    create: CreatorPluginsSettings = Field(
+        default_factory=CreatorPluginsSettings,
+        title="Creator Plugins",
+    )
+    load: LoaderPluginsModel = Field(
+        default_factory=LoaderPluginsModel,
+        title="Loader plugins"
+    )
+    publish: PublishPluginsModel = Field(
+        default_factory=PublishPluginsModel,
+        title="Publish plugins"
+    )
+    scriptsmenu: ScriptsmenuSettings = Field(
+        default_factory=ScriptsmenuSettings,
+        title="Scripts Menu Definition",
+    )
+    filters: list[PublishGUIFilterItemModel] = Field(
+        default_factory=list
+    )
+
+
+DEFAULT_VALUES = {
+    "imageio": DEFAULT_IMAGEIO_SETTINGS,
+    "create": DEFAULT_CREATE_SETTINGS,
+    "load": DEFAULT_LOADER_PLUGINS_SETTINGS,
+    "publish": DEFAULT_PUBLISH_PLUGIN_SETTINGS,
+    "scriptsmenu": DEFAULT_SCRIPTSMENU_SETTINGS,
+    "filters": [],
+}
diff --git a/server_addon/hiero/server/settings/publish_plugins.py b/server_addon/hiero/server/settings/publish_plugins.py new file mode 100644 index 00000000000..a85e62724b4 --- /dev/null +++ 
b/server_addon/hiero/server/settings/publish_plugins.py @@ -0,0 +1,48 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+
+class CollectInstanceVersionModel(BaseSettingsModel):
+    enabled: bool = Field(
+        True,
+        title="Enabled"
+    )
+
+
+class ExtractReviewCutUpVideoModel(BaseSettingsModel):
+    enabled: bool = Field(
+        True,
+        title="Enabled"
+    )
+    tags_addition: list[str] = Field(
+        default_factory=list,
+        title="Additional tags"
+    )
+
+
+class PublishPluginsModel(BaseSettingsModel):
+    CollectInstanceVersion: CollectInstanceVersionModel = Field(
+        default_factory=CollectInstanceVersionModel,
+        title="Collect Instance Version"
+    )
+    """# TODO: enhance settings with host api:
+    Rename class name and plugin name
+    to match title (it makes more sense)
+    """
+    ExtractReviewCutUpVideo: ExtractReviewCutUpVideoModel = Field(
+        default_factory=ExtractReviewCutUpVideoModel,
+        title="Extract Review Trim"
+    )
+
+
+DEFAULT_PUBLISH_PLUGIN_SETTINGS = {
+    "CollectInstanceVersion": {
+        "enabled": False,
+    },
+    "ExtractReviewCutUpVideo": {
+        "enabled": True,
+        "tags_addition": [
+            "review"
+        ]
+    }
+}
diff --git a/server_addon/hiero/server/settings/scriptsmenu.py b/server_addon/hiero/server/settings/scriptsmenu.py new file mode 100644 index 00000000000..51cb088298d --- /dev/null +++ b/server_addon/hiero/server/settings/scriptsmenu.py @@ -0,0 +1,41 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+
+class ScriptsmenuSubmodel(BaseSettingsModel):
+    """Item Definition"""
+    _isGroup = True
+
+    type: str = Field(title="Type")
+    command: str = Field(title="Command")
+    sourcetype: str = Field(title="Source Type")
+    title: str = Field(title="Title")
+    tooltip: str = Field(title="Tooltip")
+
+
+class ScriptsmenuSettings(BaseSettingsModel):
+    """Hiero script menu project settings."""
+    _isGroup = True
+
+    """# TODO: enhance settings with host api:
+    - in api rename key `name` to `menu_name`
+    """
+    name: str = Field(title="Menu name")
+    definition: list[ScriptsmenuSubmodel] = Field(
+        default_factory=list,
+        title="Definition",
+        description="Scriptmenu Items Definition")
+
+
+DEFAULT_SCRIPTSMENU_SETTINGS = {
+    "name": "OpenPype Tools",
+    "definition": [
+        {
+            "type": "action",
+            "sourcetype": "python",
+            "title": "OpenPype Docs",
+            "command": "import webbrowser;webbrowser.open(url='https://openpype.io/docs/artist_hosts_hiero')",
+            "tooltip": "Open the OpenPype Hiero user doc page"
+        }
+    ]
+}
diff --git a/server_addon/hiero/server/version.py b/server_addon/hiero/server/version.py new file mode 100644 index 00000000000..3dc1f76bc69 --- /dev/null +++ b/server_addon/hiero/server/version.py @@ -0,0 +1 @@
+__version__ = "0.1.0"
diff --git a/server_addon/houdini/server/__init__.py b/server_addon/houdini/server/__init__.py new file mode 100644 index 00000000000..870ec2d0b7e --- /dev/null +++ b/server_addon/houdini/server/__init__.py @@ -0,0 +1,17 @@
+from typing import Type
+
+from ayon_server.addons import BaseServerAddon
+
+from .version import __version__
+from .settings import HoudiniSettings, DEFAULT_VALUES
+
+
+class Houdini(BaseServerAddon):
+    name = "houdini"
+    title = "Houdini"
+    version = __version__
+    settings_model: Type[HoudiniSettings] = HoudiniSettings
+
+    async def get_default_settings(self):
+        settings_model_cls = self.get_settings_model()
+        return settings_model_cls(**DEFAULT_VALUES)
diff --git a/server_addon/houdini/server/settings/__init__.py b/server_addon/houdini/server/settings/__init__.py new file mode 100644 index 
00000000000..9fd26789250 --- /dev/null +++ b/server_addon/houdini/server/settings/__init__.py @@ -0,0 +1,10 @@
+from .main import (
+    HoudiniSettings,
+    DEFAULT_VALUES,
+)
+
+
+__all__ = (
+    "HoudiniSettings",
+    "DEFAULT_VALUES",
+)
diff --git a/server_addon/houdini/server/settings/imageio.py b/server_addon/houdini/server/settings/imageio.py new file mode 100644 index 00000000000..88aa40ecd64 --- /dev/null +++ b/server_addon/houdini/server/settings/imageio.py @@ -0,0 +1,48 @@
+from pydantic import Field, validator
+from ayon_server.settings import BaseSettingsModel
+from ayon_server.settings.validators import ensure_unique_names
+
+
+class ImageIOConfigModel(BaseSettingsModel):
+    override_global_config: bool = Field(
+        False,
+        title="Override global OCIO config"
+    )
+    filepath: list[str] = Field(
+        default_factory=list,
+        title="Config path"
+    )
+
+
+class ImageIOFileRuleModel(BaseSettingsModel):
+    name: str = Field("", title="Rule name")
+    pattern: str = Field("", title="Regex pattern")
+    colorspace: str = Field("", title="Colorspace name")
+    ext: str = Field("", title="File extension")
+
+
+class ImageIOFileRulesModel(BaseSettingsModel):
+    activate_host_rules: bool = Field(False)
+    rules: list[ImageIOFileRuleModel] = Field(
+        default_factory=list,
+        title="Rules"
+    )
+
+    @validator("rules")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class HoudiniImageIOModel(BaseSettingsModel):
+    activate_host_color_management: bool = Field(
+        True, title="Enable Color Management"
+    )
+    ocio_config: ImageIOConfigModel = Field(
+        default_factory=ImageIOConfigModel,
+        title="OCIO config"
+    )
+    file_rules: ImageIOFileRulesModel = Field(
+        default_factory=ImageIOFileRulesModel,
+        title="File Rules"
+    )
diff --git a/server_addon/houdini/server/settings/main.py b/server_addon/houdini/server/settings/main.py new file mode 100644 index 00000000000..fdb6838f5c1 --- /dev/null +++ b/server_addon/houdini/server/settings/main.py @@ -0,0 +1,79 @@
+from pydantic import Field
+from ayon_server.settings import (
+    BaseSettingsModel,
+    MultiplatformPathModel,
+    MultiplatformPathListModel,
+)
+
+from .imageio import HoudiniImageIOModel
+from .publish_plugins import (
+    PublishPluginsModel,
+    CreatePluginsModel,
+    DEFAULT_HOUDINI_PUBLISH_SETTINGS,
+    DEFAULT_HOUDINI_CREATE_SETTINGS
+)
+
+
+class ShelfToolsModel(BaseSettingsModel):
+    name: str = Field(title="Name")
+    help: str = Field(title="Help text")
+    script: MultiplatformPathModel = Field(
+        default_factory=MultiplatformPathModel,
+        title="Script Path"
+    )
+    icon: MultiplatformPathModel = Field(
+        default_factory=MultiplatformPathModel,
+        title="Icon Path"
+    )
+
+
+class ShelfDefinitionModel(BaseSettingsModel):
+    _layout = "expanded"
+    shelf_name: str = Field(title="Shelf name")
+    tools_list: list[ShelfToolsModel] = Field(
+        default_factory=list,
+        title="Shelf Tools"
+    )
+
+
+class ShelvesModel(BaseSettingsModel):
+    _layout = "expanded"
+    shelf_set_name: str = Field(title="Shelf set name")
+
+    shelf_set_source_path: MultiplatformPathListModel = Field(
+        default_factory=MultiplatformPathListModel,
+        title="Shelf Set Path (optional)"
+    )
+
+    shelf_definition: list[ShelfDefinitionModel] = Field(
+        default_factory=list,
+        title="Shelf Definitions"
+    )
+
+
+class HoudiniSettings(BaseSettingsModel):
+    imageio: HoudiniImageIOModel = Field(
+        default_factory=HoudiniImageIOModel,
+        title="Color Management (ImageIO)"
+    )
+    shelves: list[ShelvesModel] = Field(
+        default_factory=list,
+        title="Houdini Scripts Shelves",
+    )
+
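+    # Creator and publish plugin settings; models and defaults are defined
+    # in publish_plugins.py.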
+    publish: PublishPluginsModel = Field(
+        default_factory=PublishPluginsModel,
+        title="Publish Plugins",
+    )
+
+    create: CreatePluginsModel = Field(
+        default_factory=CreatePluginsModel,
+        title="Creator Plugins",
+    )
+
+
+DEFAULT_VALUES = {
+    "shelves": [],
+    "create": DEFAULT_HOUDINI_CREATE_SETTINGS,
+    "publish": DEFAULT_HOUDINI_PUBLISH_SETTINGS
+}
diff --git a/server_addon/houdini/server/settings/publish_plugins.py b/server_addon/houdini/server/settings/publish_plugins.py new file mode 100644 index 00000000000..7d35d7e6345 --- /dev/null +++ b/server_addon/houdini/server/settings/publish_plugins.py @@ -0,0 +1,156 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+
+# Creator Plugins
+class CreatorModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    default_variants: list[str] = Field(
+        title="Default Products",
+        default_factory=list,
+    )
+
+
+class CreateArnoldAssModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    default_variants: list[str] = Field(
+        title="Default Products",
+        default_factory=list,
+    )
+    ext: str = Field(title="Extension")
+
+
+class CreatePluginsModel(BaseSettingsModel):
+    CreateArnoldAss: CreateArnoldAssModel = Field(
+        default_factory=CreateArnoldAssModel,
+        title="Create Arnold Ass")
+    CreateAlembicCamera: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create Alembic Camera")
+    CreateCompositeSequence: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create Composite Sequence")
+    CreatePointCache: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create Point Cache")
+    CreateRedshiftROP: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create RedshiftROP")
+    CreateRemotePublish: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create Remote Publish")
+    CreateVDBCache: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create VDB Cache")
+    CreateUSD: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create USD")
+    CreateUSDModel: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create USD model")
+    USDCreateShadingWorkspace: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create USD shading workspace")
+    CreateUSDRender: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create USD render")
+
+
+DEFAULT_HOUDINI_CREATE_SETTINGS = {
+    "CreateArnoldAss": {
+        "enabled": True,
+        "default_variants": ["Main"],
+        "ext": ".ass"
+    },
+    "CreateAlembicCamera": {
+        "enabled": True,
+        "default_variants": ["Main"]
+    },
+    "CreateCompositeSequence": {
+        "enabled": True,
+        "default_variants": ["Main"]
+    },
+    "CreatePointCache": {
+        "enabled": True,
+        "default_variants": ["Main"]
+    },
+    "CreateRedshiftROP": {
+        "enabled": True,
+        "default_variants": ["Main"]
+    },
+    "CreateRemotePublish": {
+        "enabled": True,
+        "default_variants": ["Main"]
+    },
+    "CreateVDBCache": {
+        "enabled": True,
+        "default_variants": ["Main"]
+    },
+    "CreateUSD": {
+        "enabled": False,
+        "default_variants": ["Main"]
+    },
+    "CreateUSDModel": {
+        "enabled": False,
+        "default_variants": ["Main"]
+    },
+    "USDCreateShadingWorkspace": {
+        "enabled": False,
+        "default_variants": ["Main"]
+    },
+    "CreateUSDRender": {
+        "enabled": False,
+        "default_variants": ["Main"]
+    },
+}
+
+
+# Publish Plugins
+class ValidateWorkfilePathsModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    optional: bool = Field(title="Optional")
+    node_types: list[str] = Field(
+        default_factory=list,
+        title="Node Types"
+    )
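+    # Variables that must not appear in file paths on the node types above,
+    # e.g. "$HIP" or "$JOB" (see the defaults below).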
+    prohibited_vars: list[str] = Field(
+        default_factory=list,
+        title="Prohibited Variables"
+    )
+
+
+class ValidateContainersModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    optional: bool = Field(title="Optional")
+    active: bool = Field(title="Active")
+
+
+class PublishPluginsModel(BaseSettingsModel):
+    ValidateWorkfilePaths: ValidateWorkfilePathsModel = Field(
+        default_factory=ValidateWorkfilePathsModel,
+        title="Validate Workfile Paths")
+    ValidateContainers: ValidateContainersModel = Field(
+        default_factory=ValidateContainersModel,
+        title="Validate Latest Containers")
+
+
+DEFAULT_HOUDINI_PUBLISH_SETTINGS = {
+    "ValidateWorkfilePaths": {
+        "enabled": True,
+        "optional": True,
+        "node_types": [
+            "file",
+            "alembic"
+        ],
+        "prohibited_vars": [
+            "$HIP",
+            "$JOB"
+        ]
+    },
+    "ValidateContainers": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    }
+}
diff --git a/server_addon/houdini/server/version.py b/server_addon/houdini/server/version.py new file mode 100644 index 00000000000..485f44ac21b --- /dev/null +++ b/server_addon/houdini/server/version.py @@ -0,0 +1 @@
+__version__ = "0.1.1"
diff --git a/server_addon/kitsu/server/__init__.py b/server_addon/kitsu/server/__init__.py new file mode 100644 index 00000000000..69cf812dea0 --- /dev/null +++ b/server_addon/kitsu/server/__init__.py @@ -0,0 +1,19 @@
+from typing import Type
+
+from ayon_server.addons import BaseServerAddon
+
+from .version import __version__
+from .settings import KitsuSettings, DEFAULT_VALUES
+
+
+class KitsuAddon(BaseServerAddon):
+    name = "kitsu"
+    title = "Kitsu"
+    version = __version__
+    settings_model: Type[KitsuSettings] = KitsuSettings
+    frontend_scopes = {}
+    services = {}
+
+    async def get_default_settings(self):
+        settings_model_cls = self.get_settings_model()
+        return settings_model_cls(**DEFAULT_VALUES)
diff --git a/server_addon/kitsu/server/settings.py b/server_addon/kitsu/server/settings.py new file mode 100644 index 00000000000..a4d10d889de --- /dev/null +++ b/server_addon/kitsu/server/settings.py @@ -0,0 +1,112 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+
+class EntityPattern(BaseSettingsModel):
+    episode: str = Field(title="Episode")
+    sequence: str = Field(title="Sequence")
+    shot: str = Field(title="Shot")
+
+
+def _status_change_cond_enum():
+    return [
+        {"value": "equal", "label": "Equal"},
+        {"value": "not_equal", "label": "Not equal"}
+    ]
+
+
+class StatusChangeCondition(BaseSettingsModel):
+    condition: str = Field(
+        "equal",
+        enum_resolver=_status_change_cond_enum,
+        title="Condition"
+    )
+    short_name: str = Field("", title="Short name")
+
+
+class StatusChangeProductTypeRequirementModel(BaseSettingsModel):
+    condition: str = Field(
+        "equal",
+        enum_resolver=_status_change_cond_enum,
+        title="Condition"
+    )
+    product_type: str = Field("", title="Product type")
+
+
+class StatusChangeConditionsModel(BaseSettingsModel):
+    status_conditions: list[StatusChangeCondition] = Field(
+        default_factory=list,
+        title="Status conditions"
+    )
+    product_type_requirements: list[StatusChangeProductTypeRequirementModel] = Field(
+        default_factory=list,
+        title="Product type requirements"
+    )
+
+
+class CustomCommentTemplateModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    comment_template: str = Field("", title="Custom comment")
+
+
+class IntegrateKitsuNotes(BaseSettingsModel):
+    """Kitsu supports markdown, and you can create a custom comment template here.
+
+    You can use data from your publishing instance.
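+    For example, the default template below combines the {comment}, {version},
+    {product[type]} and {name} keys.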
+ """ + + set_status_note: bool = Field(title="Set status on note") + note_status_shortname: str = Field(title="Note shortname") + status_change_conditions: StatusChangeConditionsModel = Field( + default_factory=StatusChangeConditionsModel, + title="Status change conditions" + ) + custom_comment_template: CustomCommentTemplateModel = Field( + default_factory=CustomCommentTemplateModel, + title="Custom Comment Template", + ) + + +class PublishPlugins(BaseSettingsModel): + IntegrateKitsuNote: IntegrateKitsuNotes = Field( + default_factory=IntegrateKitsuNotes, + title="Integrate Kitsu Note" + ) + + +class KitsuSettings(BaseSettingsModel): + server: str = Field( + "", + title="Kitsu Server", + scope=["studio"], + ) + entities_naming_pattern: EntityPattern = Field( + default_factory=EntityPattern, + title="Entities naming pattern", + ) + publish: PublishPlugins = Field( + default_factory=PublishPlugins, + title="Publish plugins", + ) + + +DEFAULT_VALUES = { + "entities_naming_pattern": { + "episode": "E##", + "sequence": "SQ##", + "shot": "SH##" + }, + "publish": { + "IntegrateKitsuNote": { + "set_status_note": False, + "note_status_shortname": "wfa", + "status_change_conditions": { + "status_conditions": [], + "product_type_requirements": [] + }, + "custom_comment_template": { + "enabled": False, + "comment_template": "{comment}\n\n| | |\n|--|--|\n| version| `{version}` |\n| product type | `{product[type]}` |\n| name | `{name}` |" + } + } + } +} diff --git a/server_addon/kitsu/server/version.py b/server_addon/kitsu/server/version.py new file mode 100644 index 00000000000..485f44ac21b --- /dev/null +++ b/server_addon/kitsu/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/max/server/__init__.py b/server_addon/max/server/__init__.py new file mode 100644 index 00000000000..31c694a0844 --- /dev/null +++ b/server_addon/max/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import MaxSettings, DEFAULT_VALUES + + +class MaxAddon(BaseServerAddon): + name = "max" + title = "Max" + version = __version__ + settings_model: Type[MaxSettings] = MaxSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/max/server/settings/__init__.py b/server_addon/max/server/settings/__init__.py new file mode 100644 index 00000000000..986b1903a54 --- /dev/null +++ b/server_addon/max/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + MaxSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "MaxSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/max/server/settings/imageio.py b/server_addon/max/server/settings/imageio.py new file mode 100644 index 00000000000..5e46104fa73 --- /dev/null +++ b/server_addon/max/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", 
title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIOSettings(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/max/server/settings/main.py b/server_addon/max/server/settings/main.py new file mode 100644 index 00000000000..7f4561cbb1f --- /dev/null +++ b/server_addon/max/server/settings/main.py @@ -0,0 +1,60 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel +from .imageio import ImageIOSettings +from .render_settings import ( + RenderSettingsModel, DEFAULT_RENDER_SETTINGS +) +from .publishers import ( + PublishersModel, DEFAULT_PUBLISH_SETTINGS +) + + +class PRTAttributesModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: str = Field(title="Attribute") + + +class PointCloudSettings(BaseSettingsModel): + attribute: list[PRTAttributesModel] = Field( + default_factory=list, title="Channel Attribute") + + +class MaxSettings(BaseSettingsModel): + imageio: ImageIOSettings = Field( + default_factory=ImageIOSettings, + title="Color Management (ImageIO)" + ) + RenderSettings: RenderSettingsModel = Field( + default_factory=RenderSettingsModel, + title="Render Settings" + ) + PointCloud: PointCloudSettings = Field( + default_factory=PointCloudSettings, + title="Point Cloud" + ) + publish: PublishersModel = Field( + default_factory=PublishersModel, + title="Publish Plugins") + + +DEFAULT_VALUES = { + "RenderSettings": DEFAULT_RENDER_SETTINGS, + "PointCloud": { + "attribute": [ + {"name": "Age", "value": "age"}, + {"name": "Radius", "value": "radius"}, + {"name": "Position", "value": "position"}, + {"name": "Rotation", "value": "rotation"}, + {"name": "Scale", "value": "scale"}, + {"name": "Velocity", "value": "velocity"}, + {"name": "Color", "value": "color"}, + {"name": "TextureCoordinate", "value": "texcoord"}, + {"name": "MaterialID", "value": "matid"}, + {"name": "custFloats", "value": "custFloats"}, + {"name": "custVecs", "value": "custVecs"}, + ] + }, + "publish": DEFAULT_PUBLISH_SETTINGS + +} diff --git a/server_addon/max/server/settings/publishers.py b/server_addon/max/server/settings/publishers.py new file mode 100644 index 00000000000..a695b85e899 --- /dev/null +++ b/server_addon/max/server/settings/publishers.py @@ -0,0 +1,26 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class BasicValidateModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class PublishersModel(BaseSettingsModel): + ValidateFrameRange: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Frame Range", + section="Validators" + ) + + +DEFAULT_PUBLISH_SETTINGS = { + "ValidateFrameRange": { + "enabled": True, + "optional": True, + "active": True + } +} diff --git a/server_addon/max/server/settings/render_settings.py b/server_addon/max/server/settings/render_settings.py new file mode 100644 index 
00000000000..c00cb5e4360 --- /dev/null +++ b/server_addon/max/server/settings/render_settings.py @@ -0,0 +1,49 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +def aov_separators_enum(): + return [ + {"value": "dash", "label": "- (dash)"}, + {"value": "underscore", "label": "_ (underscore)"}, + {"value": "dot", "label": ". (dot)"} + ] + + +def image_format_enum(): + """Return enumerator for image output formats.""" + return [ + {"label": "bmp", "value": "bmp"}, + {"label": "exr", "value": "exr"}, + {"label": "tif", "value": "tif"}, + {"label": "tiff", "value": "tiff"}, + {"label": "jpg", "value": "jpg"}, + {"label": "png", "value": "png"}, + {"label": "tga", "value": "tga"}, + {"label": "dds", "value": "dds"} + ] + + +class RenderSettingsModel(BaseSettingsModel): + default_render_image_folder: str = Field( + title="Default render image folder" + ) + aov_separator: str = Field( + "underscore", + title="AOV Separator character", + enum_resolver=aov_separators_enum + ) + image_format: str = Field( + enum_resolver=image_format_enum, + title="Output Image Format" + ) + multipass: bool = Field(title="multipass") + + +DEFAULT_RENDER_SETTINGS = { + "default_render_image_folder": "renders/3dsmax", + "aov_separator": "underscore", + "image_format": "exr", + "multipass": True +} diff --git a/server_addon/max/server/version.py b/server_addon/max/server/version.py new file mode 100644 index 00000000000..3dc1f76bc69 --- /dev/null +++ b/server_addon/max/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/maya/LICENCE b/server_addon/maya/LICENCE new file mode 100644 index 00000000000..261eeb9e9f8 --- /dev/null +++ b/server_addon/maya/LICENCE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). 
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/server_addon/maya/README.md b/server_addon/maya/README.md new file mode 100644 index 00000000000..c65c09fba05 --- /dev/null +++ b/server_addon/maya/README.md @@ -0,0 +1,4 @@ +Maya Integration Addon +====================== + +WIP diff --git a/server_addon/maya/server/__init__.py b/server_addon/maya/server/__init__.py new file mode 100644 index 00000000000..8784427dcfe --- /dev/null +++ b/server_addon/maya/server/__init__.py @@ -0,0 +1,16 @@ +"""Maya Addon Module""" +from ayon_server.addons import BaseServerAddon + +from .settings.main import MayaSettings, DEFAULT_MAYA_SETTING +from .version import __version__ + + +class MayaAddon(BaseServerAddon): + name = "maya" + title = "Maya" + version = __version__ + settings_model = MayaSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_MAYA_SETTING) diff --git a/common/ayon_common/connection/__init__.py b/server_addon/maya/server/settings/__init__.py similarity index 100% rename from common/ayon_common/connection/__init__.py rename to server_addon/maya/server/settings/__init__.py diff --git a/server_addon/maya/server/settings/creators.py b/server_addon/maya/server/settings/creators.py new file mode 100644 index 00000000000..11e2b8a36cf --- /dev/null +++ b/server_addon/maya/server/settings/creators.py @@ -0,0 +1,410 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CreateLookModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + make_tx: bool = Field(title="Make tx files") + rs_tex: bool = Field(title="Make Redshift texture files") + default_variants: list[str] = Field( + default_factory=list, title="Default Products" + ) + + +class BasicCreatorModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + default_variants: list[str] = Field( + default_factory=list, + title="Default Products" + ) + + +class CreateUnrealStaticMeshModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + default_variants: list[str] = Field( + default_factory=list, + title="Default Products" + ) + static_mesh_prefixes: str = Field("S", title="Static Mesh Prefix") + collision_prefixes: list[str] = Field( + default_factory=list, + title="Collision Prefixes" + ) + + +class CreateUnrealSkeletalMeshModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + default_variants: list[str] = Field( + default_factory=list, title="Default Products") + joint_hints: str = Field("jnt_org", title="Joint root hint") + + +class CreateMultiverseLookModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + publish_mip_map: bool = Field(title="publish_mip_map") + + +class BasicExportMeshModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + write_color_sets: bool = Field(title="Write Color Sets") + write_face_sets: bool = Field(title="Write Face Sets") + default_variants: list[str] = Field( + default_factory=list, + title="Default Products" + ) + + +class CreateAnimationModel(BaseSettingsModel): + write_color_sets: bool = Field(title="Write Color Sets") + write_face_sets: bool = Field(title="Write Face Sets") + include_parent_hierarchy: bool = Field( + title="Include Parent Hierarchy") + include_user_defined_attributes: bool = Field( + title="Include User Defined Attributes") + default_variants: list[str] = Field( + default_factory=list, + title="Default Products" + ) + + +class CreatePointCacheModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + write_color_sets: bool 
= Field(title="Write Color Sets") + write_face_sets: bool = Field(title="Write Face Sets") + include_user_defined_attributes: bool = Field( + title="Include User Defined Attributes" + ) + default_variants: list[str] = Field( + default_factory=list, + title="Default Products" + ) + + +class CreateProxyAlembicModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + write_color_sets: bool = Field(title="Write Color Sets") + write_face_sets: bool = Field(title="Write Face Sets") + default_variants: list[str] = Field( + default_factory=list, + title="Default Products" + ) + + +class CreateAssModel(BasicCreatorModel): + expandProcedurals: bool = Field(title="Expand Procedurals") + motionBlur: bool = Field(title="Motion Blur") + motionBlurKeys: int = Field(2, title="Motion Blur Keys") + motionBlurLength: float = Field(0.5, title="Motion Blur Length") + maskOptions: bool = Field(title="Mask Options") + maskCamera: bool = Field(title="Mask Camera") + maskLight: bool = Field(title="Mask Light") + maskShape: bool = Field(title="Mask Shape") + maskShader: bool = Field(title="Mask Shader") + maskOverride: bool = Field(title="Mask Override") + maskDriver: bool = Field(title="Mask Driver") + maskFilter: bool = Field(title="Mask Filter") + maskColor_manager: bool = Field(title="Mask Color Manager") + maskOperator: bool = Field(title="Mask Operator") + + +class CreateReviewModel(BasicCreatorModel): + useMayaTimeline: bool = Field(title="Use Maya Timeline for Frame Range.") + + +class CreateVrayProxyModel(BaseSettingsModel): + enabled: bool = Field(True) + vrmesh: bool = Field(title="VrMesh") + alembic: bool = Field(title="Alembic") + default_variants: list[str] = Field( + default_factory=list, title="Default Products") + + +class CreatorsModel(BaseSettingsModel): + CreateLook: CreateLookModel = Field( + default_factory=CreateLookModel, + title="Create Look" + ) + CreateRender: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Render" + ) + # "-" is not compatible in the new model + CreateUnrealStaticMesh: CreateUnrealStaticMeshModel = Field( + default_factory=CreateUnrealStaticMeshModel, + title="Create Unreal_Static Mesh" + ) + # "-" is not compatible in the new model + CreateUnrealSkeletalMesh: CreateUnrealSkeletalMeshModel = Field( + default_factory=CreateUnrealSkeletalMeshModel, + title="Create Unreal_Skeletal Mesh" + ) + CreateMultiverseLook: CreateMultiverseLookModel = Field( + default_factory=CreateMultiverseLookModel, + title="Create Multiverse Look" + ) + CreateAnimation: CreateAnimationModel = Field( + default_factory=CreateAnimationModel, + title="Create Animation" + ) + CreateModel: BasicExportMeshModel = Field( + default_factory=BasicExportMeshModel, + title="Create Model" + ) + CreatePointCache: CreatePointCacheModel = Field( + default_factory=CreatePointCacheModel, + title="Create Point Cache" + ) + CreateProxyAlembic: CreateProxyAlembicModel = Field( + default_factory=CreateProxyAlembicModel, + title="Create Proxy Alembic" + ) + CreateMultiverseUsd: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Multiverse USD" + ) + CreateMultiverseUsdComp: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Multiverse USD Composition" + ) + CreateMultiverseUsdOver: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Multiverse USD Override" + ) + CreateAss: CreateAssModel = Field( + default_factory=CreateAssModel, + title="Create Ass" + ) + CreateAssembly: BasicCreatorModel = 
Field(
+        default_factory=BasicCreatorModel,
+        title="Create Assembly"
+    )
+    CreateCamera: BasicCreatorModel = Field(
+        default_factory=BasicCreatorModel,
+        title="Create Camera"
+    )
+    CreateLayout: BasicCreatorModel = Field(
+        default_factory=BasicCreatorModel,
+        title="Create Layout"
+    )
+    CreateMayaScene: BasicCreatorModel = Field(
+        default_factory=BasicCreatorModel,
+        title="Create Maya Scene"
+    )
+    CreateRenderSetup: BasicCreatorModel = Field(
+        default_factory=BasicCreatorModel,
+        title="Create Render Setup"
+    )
+    CreateReview: CreateReviewModel = Field(
+        default_factory=CreateReviewModel,
+        title="Create Review"
+    )
+    CreateRig: BasicCreatorModel = Field(
+        default_factory=BasicCreatorModel,
+        title="Create Rig"
+    )
+    CreateSetDress: BasicCreatorModel = Field(
+        default_factory=BasicCreatorModel,
+        title="Create Set Dress"
+    )
+    CreateVrayProxy: CreateVrayProxyModel = Field(
+        default_factory=CreateVrayProxyModel,
+        title="Create VRay Proxy"
+    )
+    CreateVRayScene: BasicCreatorModel = Field(
+        default_factory=BasicCreatorModel,
+        title="Create VRay Scene"
+    )
+    CreateYetiRig: BasicCreatorModel = Field(
+        default_factory=BasicCreatorModel,
+        title="Create Yeti Rig"
+    )
+
+
+DEFAULT_CREATORS_SETTINGS = {
+    "CreateLook": {
+        "enabled": True,
+        "make_tx": True,
+        "rs_tex": False,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateRender": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateUnrealStaticMesh": {
+        "enabled": True,
+        "default_variants": [
+            "",
+            "_Main"
+        ],
+        "static_mesh_prefixes": "S",
+        "collision_prefixes": [
+            "UBX",
+            "UCP",
+            "USP",
+            "UCX"
+        ]
+    },
+    "CreateUnrealSkeletalMesh": {
+        "enabled": True,
+        "default_variants": [
+            "Main",
+        ],
+        "joint_hints": "jnt_org"
+    },
+    "CreateMultiverseLook": {
+        "enabled": True,
+        "publish_mip_map": True
+    },
+    "CreateAnimation": {
+        "write_color_sets": False,
+        "write_face_sets": False,
+        "include_parent_hierarchy": False,
+        "include_user_defined_attributes": False,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateModel": {
+        "enabled": True,
+        "write_color_sets": False,
+        "write_face_sets": False,
+        "default_variants": [
+            "Main",
+            "Proxy",
+            "Sculpt"
+        ]
+    },
+    "CreatePointCache": {
+        "enabled": True,
+        "write_color_sets": False,
+        "write_face_sets": False,
+        "include_user_defined_attributes": False,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateProxyAlembic": {
+        "enabled": True,
+        "write_color_sets": False,
+        "write_face_sets": False,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateMultiverseUsd": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateMultiverseUsdComp": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateMultiverseUsdOver": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateAss": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ],
+        "expandProcedurals": False,
+        "motionBlur": True,
+        "motionBlurKeys": 2,
+        "motionBlurLength": 0.5,
+        "maskOptions": False,
+        "maskCamera": False,
+        "maskLight": False,
+        "maskShape": False,
+        "maskShader": False,
+        "maskOverride": False,
+        "maskDriver": False,
+        "maskFilter": False,
+        "maskColor_manager": False,
+        "maskOperator": False
+    },
+    "CreateAssembly": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateCamera": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateLayout": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateMayaScene": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    
"CreateRenderSetup": { + "enabled": True, + "default_variants": [ + "Main" + ] + }, + "CreateReview": { + "enabled": True, + "default_variants": [ + "Main" + ], + "useMayaTimeline": True + }, + "CreateRig": { + "enabled": True, + "default_variants": [ + "Main", + "Sim", + "Cloth" + ] + }, + "CreateSetDress": { + "enabled": True, + "default_variants": [ + "Main", + "Anim" + ] + }, + "CreateVrayProxy": { + "enabled": True, + "vrmesh": True, + "alembic": True, + "default_variants": [ + "Main" + ] + }, + "CreateVRayScene": { + "enabled": True, + "default_variants": [ + "Main" + ] + }, + "CreateYetiRig": { + "enabled": True, + "default_variants": [ + "Main" + ] + } +} diff --git a/server_addon/maya/server/settings/explicit_plugins_loading.py b/server_addon/maya/server/settings/explicit_plugins_loading.py new file mode 100644 index 00000000000..394adb728f2 --- /dev/null +++ b/server_addon/maya/server/settings/explicit_plugins_loading.py @@ -0,0 +1,429 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class PluginsModel(BaseSettingsModel): + _layout = "expanded" + enabled: bool = Field(title="Enabled") + name: str = Field("", title="Name") + + +class ExplicitPluginsLoadingModel(BaseSettingsModel): + """Maya Explicit Plugins Loading.""" + _isGroup: bool = True + enabled: bool = Field(title="enabled") + plugins_to_load: list[PluginsModel] = Field( + default_factory=list, title="Plugins To Load" + ) + + +DEFAULT_EXPLITCIT_PLUGINS_LOADING_SETTINGS = { + "enabled": False, + "plugins_to_load": [ + { + "enabled": False, + "name": "AbcBullet" + }, + { + "enabled": True, + "name": "AbcExport" + }, + { + "enabled": True, + "name": "AbcImport" + }, + { + "enabled": False, + "name": "animImportExport" + }, + { + "enabled": False, + "name": "ArubaTessellator" + }, + { + "enabled": False, + "name": "ATFPlugin" + }, + { + "enabled": False, + "name": "atomImportExport" + }, + { + "enabled": False, + "name": "AutodeskPacketFile" + }, + { + "enabled": False, + "name": "autoLoader" + }, + { + "enabled": False, + "name": "bifmeshio" + }, + { + "enabled": False, + "name": "bifrostGraph" + }, + { + "enabled": False, + "name": "bifrostshellnode" + }, + { + "enabled": False, + "name": "bifrostvisplugin" + }, + { + "enabled": False, + "name": "blast2Cmd" + }, + { + "enabled": False, + "name": "bluePencil" + }, + { + "enabled": False, + "name": "Boss" + }, + { + "enabled": False, + "name": "bullet" + }, + { + "enabled": True, + "name": "cacheEvaluator" + }, + { + "enabled": False, + "name": "cgfxShader" + }, + { + "enabled": False, + "name": "cleanPerFaceAssignment" + }, + { + "enabled": False, + "name": "clearcoat" + }, + { + "enabled": False, + "name": "convertToComponentTags" + }, + { + "enabled": False, + "name": "curveWarp" + }, + { + "enabled": False, + "name": "ddsFloatReader" + }, + { + "enabled": True, + "name": "deformerEvaluator" + }, + { + "enabled": False, + "name": "dgProfiler" + }, + { + "enabled": False, + "name": "drawUfe" + }, + { + "enabled": False, + "name": "dx11Shader" + }, + { + "enabled": False, + "name": "fbxmaya" + }, + { + "enabled": False, + "name": "fltTranslator" + }, + { + "enabled": False, + "name": "freeze" + }, + { + "enabled": False, + "name": "Fur" + }, + { + "enabled": False, + "name": "gameFbxExporter" + }, + { + "enabled": False, + "name": "gameInputDevice" + }, + { + "enabled": False, + "name": "GamePipeline" + }, + { + "enabled": False, + "name": "gameVertexCount" + }, + { + "enabled": False, + "name": "geometryReport" + }, + { + "enabled": 
False, + "name": "geometryTools" + }, + { + "enabled": False, + "name": "glslShader" + }, + { + "enabled": True, + "name": "GPUBuiltInDeformer" + }, + { + "enabled": False, + "name": "gpuCache" + }, + { + "enabled": False, + "name": "hairPhysicalShader" + }, + { + "enabled": False, + "name": "ik2Bsolver" + }, + { + "enabled": False, + "name": "ikSpringSolver" + }, + { + "enabled": False, + "name": "invertShape" + }, + { + "enabled": False, + "name": "lges" + }, + { + "enabled": False, + "name": "lookdevKit" + }, + { + "enabled": False, + "name": "MASH" + }, + { + "enabled": False, + "name": "matrixNodes" + }, + { + "enabled": False, + "name": "mayaCharacterization" + }, + { + "enabled": False, + "name": "mayaHIK" + }, + { + "enabled": False, + "name": "MayaMuscle" + }, + { + "enabled": False, + "name": "mayaUsdPlugin" + }, + { + "enabled": False, + "name": "mayaVnnPlugin" + }, + { + "enabled": False, + "name": "melProfiler" + }, + { + "enabled": False, + "name": "meshReorder" + }, + { + "enabled": True, + "name": "modelingToolkit" + }, + { + "enabled": False, + "name": "mtoa" + }, + { + "enabled": False, + "name": "mtoh" + }, + { + "enabled": False, + "name": "nearestPointOnMesh" + }, + { + "enabled": True, + "name": "objExport" + }, + { + "enabled": False, + "name": "OneClick" + }, + { + "enabled": False, + "name": "OpenEXRLoader" + }, + { + "enabled": False, + "name": "pgYetiMaya" + }, + { + "enabled": False, + "name": "pgyetiVrayMaya" + }, + { + "enabled": False, + "name": "polyBoolean" + }, + { + "enabled": False, + "name": "poseInterpolator" + }, + { + "enabled": False, + "name": "quatNodes" + }, + { + "enabled": False, + "name": "randomizerDevice" + }, + { + "enabled": False, + "name": "redshift4maya" + }, + { + "enabled": True, + "name": "renderSetup" + }, + { + "enabled": False, + "name": "retargeterNodes" + }, + { + "enabled": False, + "name": "RokokoMotionLibrary" + }, + { + "enabled": False, + "name": "rotateHelper" + }, + { + "enabled": False, + "name": "sceneAssembly" + }, + { + "enabled": False, + "name": "shaderFXPlugin" + }, + { + "enabled": False, + "name": "shotCamera" + }, + { + "enabled": False, + "name": "snapTransform" + }, + { + "enabled": False, + "name": "stage" + }, + { + "enabled": True, + "name": "stereoCamera" + }, + { + "enabled": False, + "name": "stlTranslator" + }, + { + "enabled": False, + "name": "studioImport" + }, + { + "enabled": False, + "name": "Substance" + }, + { + "enabled": False, + "name": "substancelink" + }, + { + "enabled": False, + "name": "substancemaya" + }, + { + "enabled": False, + "name": "substanceworkflow" + }, + { + "enabled": False, + "name": "svgFileTranslator" + }, + { + "enabled": False, + "name": "sweep" + }, + { + "enabled": False, + "name": "testify" + }, + { + "enabled": False, + "name": "tiffFloatReader" + }, + { + "enabled": False, + "name": "timeSliderBookmark" + }, + { + "enabled": False, + "name": "Turtle" + }, + { + "enabled": False, + "name": "Type" + }, + { + "enabled": False, + "name": "udpDevice" + }, + { + "enabled": False, + "name": "ufeSupport" + }, + { + "enabled": False, + "name": "Unfold3D" + }, + { + "enabled": False, + "name": "VectorRender" + }, + { + "enabled": False, + "name": "vrayformaya" + }, + { + "enabled": False, + "name": "vrayvolumegrid" + }, + { + "enabled": False, + "name": "xgenToolkit" + }, + { + "enabled": False, + "name": "xgenVray" + } + ] +} diff --git a/server_addon/maya/server/settings/imageio.py b/server_addon/maya/server/settings/imageio.py new file mode 100644 index 
00000000000..7512bfe253f --- /dev/null +++ b/server_addon/maya/server/settings/imageio.py @@ -0,0 +1,126 @@ +"""Providing models and setting values for image IO in Maya. + +Note: Names were changed to get rid of the versions in class names. +""" +from pydantic import Field, validator + +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ColorManagementPreferenceV2Model(BaseSettingsModel): + """Color Management Preference v2 (Maya 2022+).""" + _layout = "expanded" + + enabled: bool = Field(True, title="Use Color Management Preference v2") + + renderSpace: str = Field(title="Rendering Space") + displayName: str = Field(title="Display") + viewName: str = Field(title="View") + + +class ColorManagementPreferenceModel(BaseSettingsModel): + """Color Management Preference (legacy).""" + _layout = "expanded" + + renderSpace: str = Field(title="Rendering Space") + viewTransform: str = Field(title="Viewer Transform ") + + +class WorkfileImageIOModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + renderSpace: str = Field(title="Rendering Space") + displayName: str = Field(title="Display") + viewName: str = Field(title="View") + + +class ImageIOSettings(BaseSettingsModel): + """Maya color management project settings. + + Todo: What to do with color management preferences version? 
+ """ + + _isGroup: bool = True + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) + workfile: WorkfileImageIOModel = Field( + default_factory=WorkfileImageIOModel, + title="Workfile" + ) + # Deprecated + colorManagementPreference_v2: ColorManagementPreferenceV2Model = Field( + default_factory=ColorManagementPreferenceV2Model, + title="Color Management Preference v2 (Maya 2022+)" + ) + colorManagementPreference: ColorManagementPreferenceModel = Field( + default_factory=ColorManagementPreferenceModel, + title="Color Management Preference (legacy)" + ) + + +DEFAULT_IMAGEIO_SETTINGS = { + "activate_host_color_management": True, + "ocio_config": { + "override_global_config": False, + "filepath": [] + }, + "file_rules": { + "activate_host_rules": False, + "rules": [] + }, + "workfile": { + "enabled": False, + "renderSpace": "ACES - ACEScg", + "displayName": "ACES", + "viewName": "sRGB" + }, + "colorManagementPreference_v2": { + "enabled": True, + "renderSpace": "ACEScg", + "displayName": "sRGB", + "viewName": "ACES 1.0 SDR-video" + }, + "colorManagementPreference": { + "renderSpace": "scene-linear Rec 709/sRGB", + "viewTransform": "sRGB gamma" + } +} diff --git a/server_addon/maya/server/settings/include_handles.py b/server_addon/maya/server/settings/include_handles.py new file mode 100644 index 00000000000..3ba6aca66bd --- /dev/null +++ b/server_addon/maya/server/settings/include_handles.py @@ -0,0 +1,30 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel, task_types_enum + + +class IncludeByTaskTypeModel(BaseSettingsModel): + task_type: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + include_handles: bool = Field(True, title="Include handles") + + +class IncludeHandlesModel(BaseSettingsModel): + """Maya dirmap settings.""" + # _layout = "expanded" + include_handles_default: bool = Field( + True, title="Include handles by default" + ) + per_task_type: list[IncludeByTaskTypeModel] = Field( + default_factory=list, + title="Include/exclude handles by task type" + ) + + +DEFAULT_INCLUDE_HANDLES = { + "include_handles_default": False, + "per_task_type": [] +} diff --git a/server_addon/maya/server/settings/loaders.py b/server_addon/maya/server/settings/loaders.py new file mode 100644 index 00000000000..ed6b6fd2ace --- /dev/null +++ b/server_addon/maya/server/settings/loaders.py @@ -0,0 +1,129 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel +from ayon_server.types import ColorRGBA_uint8 + + +class ColorsSetting(BaseSettingsModel): + model: ColorRGBA_uint8 = Field( + (209, 132, 30, 1.0), title="Model:") + rig: ColorRGBA_uint8 = Field( + (59, 226, 235, 1.0), title="Rig:") + pointcache: ColorRGBA_uint8 = Field( + (94, 209, 30, 1.0), title="Pointcache:") + animation: ColorRGBA_uint8 = Field( + (94, 209, 30, 1.0), title="Animation:") + ass: ColorRGBA_uint8 = Field( + (249, 135, 53, 1.0), title="Arnold StandIn:") + camera: ColorRGBA_uint8 = Field( + (136, 114, 244, 1.0), title="Camera:") + fbx: ColorRGBA_uint8 = Field( + (215, 166, 255, 1.0), title="FBX:") + mayaAscii: ColorRGBA_uint8 = Field( + (67, 174, 255, 1.0), title="Maya Ascii:") + mayaScene: ColorRGBA_uint8 = Field( + (67, 174, 255, 1.0), title="Maya Scene:") + setdress: 
ColorRGBA_uint8 = Field( + (255, 250, 90, 1.0), title="Set Dress:") + layout: ColorRGBA_uint8 = Field( + (255, 250, 90, 1.0), title="Layout:") + vdbcache: ColorRGBA_uint8 = Field( + (249, 54, 0, 1.0), title="VDB Cache:") + vrayproxy: ColorRGBA_uint8 = Field( + (255, 150, 12, 1.0), title="VRay Proxy:") + vrayscene_layer: ColorRGBA_uint8 = Field( + (255, 150, 12, 1.0), title="VRay Scene:") + yeticache: ColorRGBA_uint8 = Field( + (99, 206, 220, 1.0), title="Yeti Cache:") + yetiRig: ColorRGBA_uint8 = Field( + (0, 205, 125, 1.0), title="Yeti Rig:") + + +class ReferenceLoaderModel(BaseSettingsModel): + namespace: str = Field(title="Namespace") + group_name: str = Field(title="Group name") + display_handle: bool = Field(title="Display Handle On Load References") + + +class ImportLoaderModel(BaseSettingsModel): + namespace: str = Field(title="Namespace") + group_name: str = Field(title="Group name") + + +class LoadersModel(BaseSettingsModel): + colors: ColorsSetting = Field( + default_factory=ColorsSetting, + title="Loaded Products Outliner Colors") + + reference_loader: ReferenceLoaderModel = Field( + default_factory=ReferenceLoaderModel, + title="Reference Loader" + ) + + import_loader: ImportLoaderModel = Field( + default_factory=ImportLoaderModel, + title="Import Loader" + ) + + +DEFAULT_LOADERS_SETTING = { + "colors": { + "model": [ + 209, 132, 30, 1.0 + ], + "rig": [ + 59, 226, 235, 1.0 + ], + "pointcache": [ + 94, 209, 30, 1.0 + ], + "animation": [ + 94, 209, 30, 1.0 + ], + "ass": [ + 249, 135, 53, 1.0 + ], + "camera": [ + 136, 114, 244, 1.0 + ], + "fbx": [ + 215, 166, 255, 1.0 + ], + "mayaAscii": [ + 67, 174, 255, 1.0 + ], + "mayaScene": [ + 67, 174, 255, 1.0 + ], + "setdress": [ + 255, 250, 90, 1.0 + ], + "layout": [ + 255, 250, 90, 1.0 + ], + "vdbcache": [ + 249, 54, 0, 1.0 + ], + "vrayproxy": [ + 255, 150, 12, 1.0 + ], + "vrayscene_layer": [ + 255, 150, 12, 1.0 + ], + "yeticache": [ + 99, 206, 220, 1.0 + ], + "yetiRig": [ + 0, 205, 125, 1.0 + ] + }, + "reference_loader": { + "namespace": "{folder[name]}_{product[name]}_##_", + "group_name": "_GRP", + "display_handle": True + }, + "import_loader": { + "namespace": "{folder[name]}_{product[name]}_##_", + "group_name": "_GRP" + } +}
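# Editor's note: a minimal sketch, not part of the patch, of how a namespace
# template such as "{folder[name]}_{product[name]}_##_" could be resolved.
# The helper name and the handling of the "##" counter run are assumptions
# made for illustration only.
def resolve_namespace(template, folder, product, existing):
    base = template.format(folder=folder, product=product)
    padding = base.count("#")  # assumes a single contiguous "#" run
    if not padding:
        return base
    # Swap the "#" run for a zero-padded counter and find a free index.
    stem = base.replace("#" * padding, "{:0%dd}" % padding)
    index = 1
    while stem.format(index) in existing:
        index += 1
    return stem.format(index)

# resolve_namespace("{folder[name]}_{product[name]}_##_",
#                   {"name": "hero"}, {"name": "modelMain"}, set())
# -> "hero_modelMain_01_"

diff --git a/server_addon/maya/server/settings/main.py b/server_addon/maya/server/settings/main.py new file mode 100644 index 00000000000..c8021614bea --- /dev/null +++ b/server_addon/maya/server/settings/main.py @@ -0,0 +1,141 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel, ensure_unique_names +from .imageio import ImageIOSettings, DEFAULT_IMAGEIO_SETTINGS +from .maya_dirmap import MayaDirmapModel, DEFAULT_MAYA_DIRMAP_SETTINGS +from .include_handles import IncludeHandlesModel, DEFAULT_INCLUDE_HANDLES +from .explicit_plugins_loading import ( + ExplicitPluginsLoadingModel, DEFAULT_EXPLITCIT_PLUGINS_LOADING_SETTINGS +) +from .scriptsmenu import ScriptsmenuModel, DEFAULT_SCRIPTSMENU_SETTINGS +from .render_settings import RenderSettingsModel, DEFAULT_RENDER_SETTINGS +from .creators import CreatorsModel, DEFAULT_CREATORS_SETTINGS +from .publishers import PublishersModel, DEFAULT_PUBLISH_SETTINGS +from .loaders import LoadersModel, DEFAULT_LOADERS_SETTING +from .workfile_build_settings import ProfilesModel, DEFAULT_WORKFILE_SETTING +from .templated_workfile_settings import ( + TemplatedProfilesModel, DEFAULT_TEMPLATED_WORKFILE_SETTINGS +) + + +class ExtMappingItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Product 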
type") + value: str = Field(title="Extension") + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class MayaSettings(BaseSettingsModel): + """Maya Project Settings.""" + + open_workfile_post_initialization: bool = Field( + True, title="Open Workfile Post Initialization") + explicit_plugins_loading: ExplicitPluginsLoadingModel = Field( + default_factory=ExplicitPluginsLoadingModel, + title="Explicit Plugins Loading") + imageio: ImageIOSettings = Field( + default_factory=ImageIOSettings, title="Color Management (imageio)") + mel_workspace: str = Field(title="Maya MEL Workspace", widget="textarea") + ext_mapping: list[ExtMappingItemModel] = Field( + default_factory=list, title="Extension Mapping") + maya_dirmap: MayaDirmapModel = Field( + default_factory=MayaDirmapModel, title="Maya dirmap Settings") + include_handles: IncludeHandlesModel = Field( + default_factory=IncludeHandlesModel, + title="Include/Exclude Handles in default playback & render range" + ) + scriptsmenu: ScriptsmenuModel = Field( + default_factory=ScriptsmenuModel, + title="Scriptsmenu Settings" + ) + render_settings: RenderSettingsModel = Field( + default_factory=RenderSettingsModel, title="Render Settings") + create: CreatorsModel = Field( + default_factory=CreatorsModel, title="Creators") + publish: PublishersModel = Field( + default_factory=PublishersModel, title="Publishers") + load: LoadersModel = Field( + default_factory=LoadersModel, title="Loaders") + workfile_build: ProfilesModel = Field( + default_factory=ProfilesModel, title="Workfile Build Settings") + templated_workfile_build: TemplatedProfilesModel = Field( + default_factory=TemplatedProfilesModel, + title="Templated Workfile Build Settings") + filters: list[PublishGUIFiltersModel] = Field( + default_factory=list, + title="Publish GUI Filters") + + @validator("filters", "ext_mapping") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +DEFAULT_MEL_WORKSPACE_SETTINGS = "\n".join(( + 'workspace -fr "shaders" "renderData/shaders";', + 'workspace -fr "images" "renders/maya";', + 'workspace -fr "particles" "particles";', + 'workspace -fr "mayaAscii" "";', + 'workspace -fr "mayaBinary" "";', + 'workspace -fr "scene" "";', + 'workspace -fr "alembicCache" "cache/alembic";', + 'workspace -fr "renderData" "renderData";', + 'workspace -fr "sourceImages" "sourceimages";', + 'workspace -fr "fileCache" "cache/nCache";', + '', +)) + +DEFAULT_MAYA_SETTING = { + "open_workfile_post_initialization": False, + "explicit_plugins_loading": DEFAULT_EXPLITCIT_PLUGINS_LOADING_SETTINGS, + "imageio": DEFAULT_IMAGEIO_SETTINGS, + "mel_workspace": DEFAULT_MEL_WORKSPACE_SETTINGS, + "ext_mapping": [ + {"name": "model", "value": "ma"}, + {"name": "mayaAscii", "value": "ma"}, + {"name": "camera", "value": "ma"}, + {"name": "rig", "value": "ma"}, + {"name": "workfile", "value": "ma"}, + {"name": "yetiRig", "value": "ma"} + ], + # `maya_dirmap` was originally with dash - `maya-dirmap` + "maya_dirmap": DEFAULT_MAYA_DIRMAP_SETTINGS, + "include_handles": DEFAULT_INCLUDE_HANDLES, + "scriptsmenu": DEFAULT_SCRIPTSMENU_SETTINGS, + "render_settings": 
DEFAULT_RENDER_SETTINGS, + "create": DEFAULT_CREATORS_SETTINGS, + "publish": DEFAULT_PUBLISH_SETTINGS, + "load": DEFAULT_LOADERS_SETTING, + "workfile_build": DEFAULT_WORKFILE_SETTING, + "templated_workfile_build": DEFAULT_TEMPLATED_WORKFILE_SETTINGS, + "filters": [ + { + "name": "preset 1", + "value": [ + {"name": "ValidateNoAnimation", "value": False}, + {"name": "ValidateShapeDefaultNames", "value": False}, + ] + }, + { + "name": "preset 2", + "value": [ + {"name": "ValidateNoAnimation", "value": False}, + ] + }, + ] +} diff --git a/server_addon/maya/server/settings/maya_dirmap.py b/server_addon/maya/server/settings/maya_dirmap.py new file mode 100644 index 00000000000..243261dc872 --- /dev/null +++ b/server_addon/maya/server/settings/maya_dirmap.py @@ -0,0 +1,40 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class MayaDirmapPathsSubmodel(BaseSettingsModel): + _layout = "compact" + source_path: list[str] = Field( + default_factory=list, title="Source Paths" + ) + destination_path: list[str] = Field( + default_factory=list, title="Destination Paths" + ) + + +class MayaDirmapModel(BaseSettingsModel): + """Maya dirmap settings.""" + # _layout = "expanded" + _isGroup: bool = True + + enabled: bool = Field(title="Enabled") + # Use ${} placeholder instead of absolute value of a root in + # referenced filepaths. + use_env_var_as_root: bool = Field( + title="Use env var placeholder in referenced paths" + ) + paths: MayaDirmapPathsSubmodel = Field( + default_factory=MayaDirmapPathsSubmodel, + title="Dirmap Paths" + ) + + +DEFAULT_MAYA_DIRMAP_SETTINGS = { + "use_env_var_as_root": False, + "enabled": False, + "paths": { + "source_path": [], + "destination_path": [] + } +}
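# Editor's note: a minimal sketch, not part of the patch, of what the dirmap
# source/destination pairing above amounts to: paths are matched by prefix and
# rewritten pair by pair. The helper name and the plain prefix matching are
# assumptions for illustration; Maya's own dirmap command does the real work.
def remap_path(path, source_paths, destination_paths):
    for source, destination in zip(source_paths, destination_paths):
        if path.startswith(source):
            return destination + path[len(source):]
    return path

# remap_path("P:/projects/sh010/tex.exr", ["P:/projects"], ["/mnt/projects"])
# -> "/mnt/projects/sh010/tex.exr"

diff --git a/server_addon/maya/server/settings/publish_playblast.py b/server_addon/maya/server/settings/publish_playblast.py new file mode 100644 index 00000000000..acfcaf59889 --- /dev/null +++ b/server_addon/maya/server/settings/publish_playblast.py @@ -0,0 +1,382 @@ +from pydantic import Field, validator + +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, + task_types_enum, +) +from ayon_server.types import ColorRGBA_uint8 + + +def hardware_falloff_enum(): + return [ + {"label": "Linear", "value": "0"}, + {"label": "Exponential", "value": "1"}, + {"label": "Exponential Squared", "value": "2"} + ] + + +def renderer_enum(): + return [ + {"label": "Viewport 2.0", "value": "vp2Renderer"} + ] + + +def displayLights_enum(): + return [ + {"label": "Default Lighting", "value": "default"}, + {"label": "All Lights", "value": "all"}, + {"label": "Selected Lights", "value": "selected"}, + {"label": "Flat Lighting", "value": "flat"}, + {"label": "No Lights", "value": "nolights"} + ] + + +def plugin_objects_default(): + return [ + { + "name": "gpuCacheDisplayFilter", + "value": False + } + ] + + +class CodecSetting(BaseSettingsModel): + _layout = "expanded" + compression: str = Field("png", title="Encoding") + format: str = Field("image", title="Format") + quality: int = Field(95, title="Quality", ge=0, le=100) + + +class DisplayOptionsSetting(BaseSettingsModel): + _layout = "expanded" + override_display: bool = Field(True, title="Override display options") + background: ColorRGBA_uint8 = Field( + (125, 125, 125, 1.0), title="Background Color" + ) + displayGradient: bool = Field(True, title="Display background gradient") + backgroundTop: ColorRGBA_uint8 = Field( + (125, 125, 125, 1.0), title="Background Top" + ) + backgroundBottom: ColorRGBA_uint8 = Field( + (125, 125, 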
125, 1.0), title="Background Bottom" + ) + + +class GenericSetting(BaseSettingsModel): + _layout = "expanded" + isolate_view: bool = Field(True, title="Isolate View") + off_screen: bool = Field(True, title="Off Screen") + pan_zoom: bool = Field(False, title="2D Pan/Zoom") + + +class RendererSetting(BaseSettingsModel): + _layout = "expanded" + rendererName: str = Field( + "vp2Renderer", + enum_resolver=renderer_enum, + title="Renderer name" + ) + + +class ResolutionSetting(BaseSettingsModel): + _layout = "expanded" + width: int = Field(0, title="Width") + height: int = Field(0, title="Height") + + +class PluginObjectsModel(BaseSettingsModel): + name: str = Field("", title="Name") + value: bool = Field(True, title="Enabled") + + +class ViewportOptionsSetting(BaseSettingsModel): + override_viewport_options: bool = Field( + True, title="Override viewport options" + ) + displayLights: str = Field( + "default", enum_resolver=displayLights_enum, title="Display Lights" + ) + displayTextures: bool = Field(True, title="Display Textures") + textureMaxResolution: int = Field(1024, title="Texture Clamp Resolution") + renderDepthOfField: bool = Field( + True, title="Depth of Field", section="Depth of Field" + ) + shadows: bool = Field(True, title="Display Shadows") + twoSidedLighting: bool = Field(True, title="Two Sided Lighting") + lineAAEnable: bool = Field( + True, title="Enable Anti-Aliasing", section="Anti-Aliasing" + ) + multiSample: int = Field(8, title="Anti Aliasing Samples") + useDefaultMaterial: bool = Field(False, title="Use Default Material") + wireframeOnShaded: bool = Field(False, title="Wireframe On Shaded") + xray: bool = Field(False, title="X-Ray") + jointXray: bool = Field(False, title="X-Ray Joints") + backfaceCulling: bool = Field(False, title="Backface Culling") + ssaoEnable: bool = Field( + False, title="Screen Space Ambient Occlusion", section="SSAO" + ) + ssaoAmount: int = Field(1, title="SSAO Amount") + ssaoRadius: int = Field(16, title="SSAO Radius") + ssaoFilterRadius: int = Field(16, title="SSAO Filter Radius") + ssaoSamples: int = Field(16, title="SSAO Samples") + fogging: bool = Field(False, title="Enable Hardware Fog", section="Fog") + hwFogFalloff: str = Field( + "0", enum_resolver=hardware_falloff_enum, title="Hardware Falloff" + ) + hwFogDensity: float = Field(0.0, title="Fog Density") + hwFogStart: int = Field(0, title="Fog Start") + hwFogEnd: int = Field(100, title="Fog End") + hwFogAlpha: int = Field(0, title="Fog Alpha") + hwFogColorR: float = Field(1.0, title="Fog Color R") + hwFogColorG: float = Field(1.0, title="Fog Color G") + hwFogColorB: float = Field(1.0, title="Fog Color B") + motionBlurEnable: bool = Field( + False, title="Enable Motion Blur", section="Motion Blur" + ) + motionBlurSampleCount: int = Field(8, title="Motion Blur Sample Count") + motionBlurShutterOpenFraction: float = Field( + 0.2, title="Shutter Open Fraction" + ) + cameras: bool = Field(False, title="Cameras", section="Show") + clipGhosts: bool = Field(False, title="Clip Ghosts") + deformers: bool = Field(False, title="Deformers") + dimensions: bool = Field(False, title="Dimensions") + dynamicConstraints: bool = Field(False, title="Dynamic Constraints") + dynamics: bool = Field(False, title="Dynamics") + fluids: bool = Field(False, title="Fluids") + follicles: bool = Field(False, title="Follicles") + greasePencils: bool = Field(False, title="Grease Pencils") + grid: bool = Field(False, title="Grid") + hairSystems: bool = Field(True, title="Hair Systems") + handles: bool = Field(False, 
title="Handles") + headsUpDisplay: bool = Field(False, title="HUD") + ikHandles: bool = Field(False, title="IK Handles") + imagePlane: bool = Field(True, title="Image Plane") + joints: bool = Field(False, title="Joints") + lights: bool = Field(False, title="Lights") + locators: bool = Field(False, title="Locators") + manipulators: bool = Field(False, title="Manipulators") + motionTrails: bool = Field(False, title="Motion Trails") + nCloths: bool = Field(False, title="nCloths") + nParticles: bool = Field(False, title="nParticles") + nRigids: bool = Field(False, title="nRigids") + controlVertices: bool = Field(False, title="NURBS CVs") + nurbsCurves: bool = Field(False, title="NURBS Curves") + hulls: bool = Field(False, title="NURBS Hulls") + nurbsSurfaces: bool = Field(False, title="NURBS Surfaces") + particleInstancers: bool = Field(False, title="Particle Instancers") + pivots: bool = Field(False, title="Pivots") + planes: bool = Field(False, title="Planes") + pluginShapes: bool = Field(False, title="Plugin Shapes") + polymeshes: bool = Field(True, title="Polygons") + strokes: bool = Field(False, title="Strokes") + subdivSurfaces: bool = Field(False, title="Subdiv Surfaces") + textures: bool = Field(False, title="Texture Placements") + pluginObjects: list[PluginObjectsModel] = Field( + default_factory=plugin_objects_default, + title="Plugin Objects" + ) + + @validator("pluginObjects") + def validate_unique_plugin_objects(cls, value): + ensure_unique_names(value) + return value + + +class CameraOptionsSetting(BaseSettingsModel): + displayGateMask: bool = Field(False, title="Display Gate Mask") + displayResolution: bool = Field(False, title="Display Resolution") + displayFilmGate: bool = Field(False, title="Display Film Gate") + displayFieldChart: bool = Field(False, title="Display Field Chart") + displaySafeAction: bool = Field(False, title="Display Safe Action") + displaySafeTitle: bool = Field(False, title="Display Safe Title") + displayFilmPivot: bool = Field(False, title="Display Film Pivot") + displayFilmOrigin: bool = Field(False, title="Display Film Origin") + overscan: int = Field(1.0, title="Overscan") + + +class CapturePresetSetting(BaseSettingsModel): + Codec: CodecSetting = Field( + default_factory=CodecSetting, + title="Codec", + section="Codec") + DisplayOptions: DisplayOptionsSetting = Field( + default_factory=DisplayOptionsSetting, + title="Display Options", + section="Display Options") + Generic: GenericSetting = Field( + default_factory=GenericSetting, + title="Generic", + section="Generic") + Renderer: RendererSetting = Field( + default_factory=RendererSetting, + title="Renderer", + section="Renderer") + Resolution: ResolutionSetting = Field( + default_factory=ResolutionSetting, + title="Resolution", + section="Resolution") + ViewportOptions: ViewportOptionsSetting = Field( + default_factory=ViewportOptionsSetting, + title="Viewport Options") + CameraOptions: CameraOptionsSetting = Field( + default_factory=CameraOptionsSetting, + title="Camera Options") + + +class ProfilesModel(BaseSettingsModel): + _layout = "expanded" + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field(default_factory=list, title="Task names") + product_names: list[str] = Field(default_factory=list, title="Products names") + capture_preset: CapturePresetSetting = Field( + default_factory=CapturePresetSetting, + title="Capture Preset" + ) + + +class ExtractPlayblastSetting(BaseSettingsModel): + capture_preset: 
CapturePresetSetting = Field( + default_factory=CapturePresetSetting, + title="DEPRECATED! Please use \"Profiles\" below. Capture Preset" + ) + profiles: list[ProfilesModel] = Field( + default_factory=list, + title="Profiles" + ) + + +DEFAULT_PLAYBLAST_SETTING = { + "capture_preset": { + "Codec": { + "compression": "png", + "format": "image", + "quality": 95 + }, + "DisplayOptions": { + "override_display": True, + "background": [ + 125, + 125, + 125, + 1.0 + ], + "backgroundBottom": [ + 125, + 125, + 125, + 1.0 + ], + "backgroundTop": [ + 125, + 125, + 125, + 1.0 + ], + "displayGradient": True + }, + "Generic": { + "isolate_view": True, + "off_screen": True, + "pan_zoom": False + }, + "Renderer": { + "rendererName": "vp2Renderer" + }, + "Resolution": { + "width": 1920, + "height": 1080 + }, + "ViewportOptions": { + "override_viewport_options": True, + "displayLights": "default", + "displayTextures": True, + "textureMaxResolution": 1024, + "renderDepthOfField": True, + "shadows": True, + "twoSidedLighting": True, + "lineAAEnable": True, + "multiSample": 8, + "useDefaultMaterial": False, + "wireframeOnShaded": False, + "xray": False, + "jointXray": False, + "backfaceCulling": False, + "ssaoEnable": False, + "ssaoAmount": 1, + "ssaoRadius": 16, + "ssaoFilterRadius": 16, + "ssaoSamples": 16, + "fogging": False, + "hwFogFalloff": "0", + "hwFogDensity": 0.0, + "hwFogStart": 0, + "hwFogEnd": 100, + "hwFogAlpha": 0, + "hwFogColorR": 1.0, + "hwFogColorG": 1.0, + "hwFogColorB": 1.0, + "motionBlurEnable": False, + "motionBlurSampleCount": 8, + "motionBlurShutterOpenFraction": 0.2, + "cameras": False, + "clipGhosts": False, + "deformers": False, + "dimensions": False, + "dynamicConstraints": False, + "dynamics": False, + "fluids": False, + "follicles": False, + "greasePencils": False, + "grid": False, + "hairSystems": True, + "handles": False, + "headsUpDisplay": False, + "ikHandles": False, + "imagePlane": True, + "joints": False, + "lights": False, + "locators": False, + "manipulators": False, + "motionTrails": False, + "nCloths": False, + "nParticles": False, + "nRigids": False, + "controlVertices": False, + "nurbsCurves": False, + "hulls": False, + "nurbsSurfaces": False, + "particleInstancers": False, + "pivots": False, + "planes": False, + "pluginShapes": False, + "polymeshes": True, + "strokes": False, + "subdivSurfaces": False, + "textures": False, + "pluginObjects": [ + { + "name": "gpuCacheDisplayFilter", + "value": False + } + ] + }, + "CameraOptions": { + "displayGateMask": False, + "displayResolution": False, + "displayFilmGate": False, + "displayFieldChart": False, + "displaySafeAction": False, + "displaySafeTitle": False, + "displayFilmPivot": False, + "displayFilmOrigin": False, + "overscan": 1.0 + } + }, + "profiles": [] +} diff --git a/server_addon/maya/server/settings/publishers.py b/server_addon/maya/server/settings/publishers.py new file mode 100644 index 00000000000..bd7ccdf4d59 --- /dev/null +++ b/server_addon/maya/server/settings/publishers.py @@ -0,0 +1,1262 @@ +import json +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + ensure_unique_names, +) +from ayon_server.exceptions import BadRequestException +from .publish_playblast import ( + ExtractPlayblastSetting, + DEFAULT_PLAYBLAST_SETTING, +) + + +def linear_unit_enum(): + """Get linear units enumerator.""" + return [ + {"label": "mm", "value": "millimeter"}, + {"label": "cm", "value": "centimeter"}, + {"label": "m", "value": "meter"}, + {"label": "km", 
"value": "kilometer"}, + {"label": "in", "value": "inch"}, + {"label": "ft", "value": "foot"}, + {"label": "yd", "value": "yard"}, + {"label": "mi", "value": "mile"} + ] + + +def angular_unit_enum(): + """Get angular units enumerator.""" + return [ + {"label": "deg", "value": "degree"}, + {"label": "rad", "value": "radian"}, + ] + + +class BasicValidateModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class ValidateMeshUVSetMap1Model(BasicValidateModel): + """Validate model's default uv set exists and is named 'map1'.""" + pass + + +class ValidateNoAnimationModel(BasicValidateModel): + """Ensure no keyframes on nodes in the Instance.""" + pass + + +class ValidateRigOutSetNodeIdsModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateSkinclusterDeformerSet") + optional: bool = Field(title="Optional") + allow_history_only: bool = Field(title="Allow history only") + + +class ValidateModelNameModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + database: bool = Field(title="Use database shader name definitions") + material_file: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel, + title="Material File", + description=( + "Path to material file defining list of material names to check." + ) + ) + regex: str = Field( + "(.*)_(\\d)*_(?P.*)_(GEO)", + title="Validation regex", + description=( + "Regex for validating name of top level group name. You can use" + " named capturing groups:(?P.*) for Asset name" + ) + ) + top_level_regex: str = Field( + ".*_GRP", + title="Top level group name regex", + description=( + "To check for asset in name so *_some_asset_name_GRP" + " is valid, use:.*?_(?P.*)_GEO" + ) + ) + + +class ValidateModelContentModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + validate_top_group: bool = Field(title="Validate one top group") + + +class ValidateTransformNamingSuffixModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + SUFFIX_NAMING_TABLE: str = Field( + "{}", + title="Suffix Naming Tables", + widget="textarea", + description=( + "Validates transform suffix based on" + " the type of its children shapes." 
+ ) + ) + + @validator("SUFFIX_NAMING_TABLE") + def validate_json(cls, value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "The text can't be parsed as a JSON object" + ) + return value + + ALLOW_IF_NOT_IN_SUFFIX_TABLE: bool = Field( + title="Allow if suffix not in table" + ) + + +class CollectMayaRenderModel(BaseSettingsModel): + sync_workfile_version: bool = Field( + title="Sync render version with workfile" + ) + + +class CollectFbxCameraModel(BaseSettingsModel): + enabled: bool = Field(title="CollectFbxCamera") + + +class CollectGLTFModel(BaseSettingsModel): + enabled: bool = Field(title="CollectGLTF") + + +class ValidateFrameRangeModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateFrameRange") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + exclude_product_types: list[str] = Field( + default_factory=list, + title="Exclude product types" + ) + + +class ValidateShaderNameModel(BaseSettingsModel): + """Shader name regex can use the named capture group (?P<asset>) + to validate against the current asset name. + """ + enabled: bool = Field(title="ValidateShaderName") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + regex: str = Field("(?P<asset>.*)_(.*)_SHD", title="Validation regex") + + +class ValidateAttributesModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateAttributes") + attributes: str = Field( + "{}", title="Attributes", widget="textarea") + + @validator("attributes") + def validate_json(cls, value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "The attributes can't be parsed as a JSON object" + ) + return value
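# Editor's note: a minimal sketch, not part of the patch: the "Attributes"
# textarea above stores JSON as a string, so the publish plugin reads it back
# with json.loads. The attribute name and value below are made up.
import json

attributes_setting = '{"defaultRenderGlobals.currentRenderer": "arnold"}'
for attr, expected in json.loads(attributes_setting).items():
    print(attr, "->", expected)

 + + +class ValidateLoadedPluginModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateLoadedPlugin") + optional: bool = Field(title="Optional") + whitelist_native_plugins: bool = Field( + title="Whitelist Maya Native Plugins" + ) + authorized_plugins: list[str] = Field( + default_factory=list, title="Authorized plugins" + ) + + +class ValidateMayaUnitsModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateMayaUnits") + optional: bool = Field(title="Optional") + validate_linear_units: bool = Field(title="Validate linear units") + linear_units: str = Field( + enum_resolver=linear_unit_enum, title="Linear Units" + ) + validate_angular_units: bool = Field(title="Validate angular units") + angular_units: str = Field( + enum_resolver=angular_unit_enum, title="Angular units" + ) + validate_fps: bool = Field(title="Validate fps") + + +class ValidateUnrealStaticMeshNameModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateUnrealStaticMeshName") + optional: bool = Field(title="Optional") + validate_mesh: bool = Field(title="Validate mesh names") + validate_collision: bool = Field(title="Validate collision names") + + +class ValidateCycleErrorModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateCycleError") + optional: bool = Field(title="Optional") + families: list[str] = Field(default_factory=list, title="Families") + + +class ValidatePluginPathAttributesAttrModel(BaseSettingsModel): + name: str = Field(title="Node type") + value: str = Field(title="Attribute") + + +class 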
ValidatePluginPathAttributesModel(BaseSettingsModel): + """Fill in the node types and attributes you want to validate. + +

e.g. AlembicNode.abc_file, the node type is AlembicNode + and the node attribute is abc_file + """ + + enabled: bool = True + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + attribute: list[ValidatePluginPathAttributesAttrModel] = Field( + default_factory=list, + title="File Attribute" + ) + + @validator("attribute") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +# Validate Render Setting +class RendererAttributesModel(BaseSettingsModel): + _layout = "compact" + type: str = Field(title="Type") + value: str = Field(title="Value") + + +class ValidateRenderSettingsModel(BaseSettingsModel): + arnold_render_attributes: list[RendererAttributesModel] = Field( + default_factory=list, title="Arnold Render Attributes") + vray_render_attributes: list[RendererAttributesModel] = Field( + default_factory=list, title="VRay Render Attributes") + redshift_render_attributes: list[RendererAttributesModel] = Field( + default_factory=list, title="Redshift Render Attributes") + renderman_render_attributes: list[RendererAttributesModel] = Field( + default_factory=list, title="Renderman Render Attributes") + + +class BasicValidateModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class ValidateCameraContentsModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + validate_shapes: bool = Field(title="Validate presence of shapes") + + +class ExtractProxyAlembicModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + families: list[str] = Field( + default_factory=list, + title="Families") + + +class ExtractAlembicModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + families: list[str] = Field( + default_factory=list, + title="Families") + + +class ExtractObjModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + + +class ExtractMayaSceneRawModel(BaseSettingsModel): + """Add loaded instances to those published families:""" + enabled: bool = Field(title="ExtractMayaSceneRaw") + add_for_families: list[str] = Field(default_factory=list, title="Families") + + +class ExtractCameraAlembicModel(BaseSettingsModel): + """ + List of attributes that will be added to the baked alembic camera. Needs to be written in python list syntax. 
+ """ + enabled: bool = Field(title="ExtractCameraAlembic") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + bake_attributes: str = Field( + "[]", title="Base Attributes", widget="textarea" + ) + + @validator("bake_attributes") + def validate_json_list(cls, value): + if not value.strip(): + return "[]" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, list) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "The text can't be parsed as json object" + ) + return value + + +class ExtractGLBModel(BaseSettingsModel): + enabled: bool = True + active: bool = Field(title="Active") + ogsfx_path: str = Field(title="GLSL Shader Directory") + + +class ExtractLookArgsModel(BaseSettingsModel): + argument: str = Field(title="Argument") + parameters: list[str] = Field(default_factory=list, title="Parameters") + + +class ExtractLookModel(BaseSettingsModel): + maketx_arguments: list[ExtractLookArgsModel] = Field( + default_factory=list, + title="Extra arguments for maketx command line" + ) + + +class ExtractGPUCacheModel(BaseSettingsModel): + enabled: bool = True + families: list[str] = Field(default_factory=list, title="Families") + step: float = Field(1.0, ge=1.0, title="Step") + stepSave: int = Field(1, ge=1, title="Step Save") + optimize: bool = Field(title="Optimize Hierarchy") + optimizationThreshold: int = Field(1, ge=1, title="Optimization Threshold") + optimizeAnimationsForMotionBlur: bool = Field( + title="Optimize Animations For Motion Blur" + ) + writeMaterials: bool = Field(title="Write Materials") + useBaseTessellation: bool = Field(title="User Base Tesselation") + + +class PublishersModel(BaseSettingsModel): + CollectMayaRender: CollectMayaRenderModel = Field( + default_factory=CollectMayaRenderModel, + title="Collect Render Layers", + section="Collectors" + ) + CollectFbxCamera: CollectFbxCameraModel = Field( + default_factory=CollectFbxCameraModel, + title="Collect Camera for FBX export", + ) + CollectGLTF: CollectGLTFModel = Field( + default_factory=CollectGLTFModel, + title="Collect Assets for GLB/GLTF export" + ) + ValidateInstanceInContext: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Instance In Context", + section="Validators" + ) + ValidateContainers: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Containers" + ) + ValidateFrameRange: ValidateFrameRangeModel = Field( + default_factory=ValidateFrameRangeModel, + title="Validate Frame Range" + ) + ValidateShaderName: ValidateShaderNameModel = Field( + default_factory=ValidateShaderNameModel, + title="Validate Shader Name" + ) + ValidateShadingEngine: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Look Shading Engine Naming" + ) + ValidateMayaColorSpace: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Colorspace" + ) + ValidateAttributes: ValidateAttributesModel = Field( + default_factory=ValidateAttributesModel, + title="Validate Attributes" + ) + ValidateLoadedPlugin: ValidateLoadedPluginModel = Field( + default_factory=ValidateLoadedPluginModel, + title="Validate Loaded Plugin" + ) + ValidateMayaUnits: ValidateMayaUnitsModel = Field( + default_factory=ValidateMayaUnitsModel, + title="Validate Maya Units" + ) + ValidateUnrealStaticMeshName: ValidateUnrealStaticMeshNameModel = Field( + default_factory=ValidateUnrealStaticMeshNameModel, + title="Validate Unreal Static Mesh 
Name" + ) + ValidateCycleError: ValidateCycleErrorModel = Field( + default_factory=ValidateCycleErrorModel, + title="Validate Cycle Error" + ) + ValidatePluginPathAttributes: ValidatePluginPathAttributesModel = Field( + default_factory=ValidatePluginPathAttributesModel, + title="Plug-in Path Attributes" + ) + ValidateRenderSettings: ValidateRenderSettingsModel = Field( + default_factory=ValidateRenderSettingsModel, + title="Validate Render Settings" + ) + ValidateCurrentRenderLayerIsRenderable: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Current Render Layer Has Renderable Camera" + ) + ValidateGLSLMaterial: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate GLSL Material" + ) + ValidateGLSLPlugin: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate GLSL Plugin" + ) + ValidateRenderImageRule: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Render Image Rule (Workspace)" + ) + ValidateRenderNoDefaultCameras: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Default Cameras Renderable" + ) + ValidateRenderSingleCamera: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Render Single Camera " + ) + ValidateRenderLayerAOVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Render Passes/AOVs Are Registered" + ) + ValidateStepSize: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Step Size" + ) + ValidateVRayDistributedRendering: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Distributed Rendering" + ) + ValidateVrayReferencedAOVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Referenced AOVs" + ) + ValidateVRayTranslatorEnabled: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Translator Settings" + ) + ValidateVrayProxy: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Proxy Settings" + ) + ValidateVrayProxyMembers: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Proxy Members" + ) + ValidateYetiRenderScriptCallbacks: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Render Script Callbacks" + ) + ValidateYetiRigCacheState: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Rig Cache State" + ) + ValidateYetiRigInputShapesInInstance: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Rig Input Shapes In Instance" + ) + ValidateYetiRigSettings: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Rig Settings" + ) + # Model - START + ValidateModelName: ValidateModelNameModel = Field( + default_factory=ValidateModelNameModel, + title="Validate Model Name", + section="Model", + ) + ValidateModelContent: ValidateModelContentModel = Field( + default_factory=ValidateModelContentModel, + title="Validate Model Content", + ) + ValidateTransformNamingSuffix: ValidateTransformNamingSuffixModel = Field( + default_factory=ValidateTransformNamingSuffixModel, + title="Validate Transform Naming Suffix", + ) + ValidateColorSets: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Color Sets", + ) + ValidateMeshHasOverlappingUVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Has Overlapping 
UVs", + ) + ValidateMeshArnoldAttributes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Arnold Attributes", + ) + ValidateMeshShaderConnections: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Shader Connections", + ) + ValidateMeshSingleUVSet: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Single UV Set", + ) + ValidateMeshHasUVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Has UVs", + ) + ValidateMeshLaminaFaces: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Lamina Faces", + ) + ValidateMeshNgons: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Ngons", + ) + ValidateMeshNonManifold: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Non-Manifold", + ) + ValidateMeshNoNegativeScale: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh No Negative Scale", + ) + ValidateMeshNonZeroEdgeLength: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Edge Length Non Zero", + ) + ValidateMeshNormalsUnlocked: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Normals Unlocked", + ) + ValidateMeshUVSetMap1: ValidateMeshUVSetMap1Model = Field( + default_factory=ValidateMeshUVSetMap1Model, + title="Validate Mesh UV Set Map 1", + ) + ValidateMeshVerticesHaveEdges: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Vertices Have Edges", + ) + ValidateNoAnimation: ValidateNoAnimationModel = Field( + default_factory=ValidateNoAnimationModel, + title="Validate No Animation", + ) + ValidateNoNamespace: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Namespace", + ) + ValidateNoNullTransforms: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Null Transforms", + ) + ValidateNoUnknownNodes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Unknown Nodes", + ) + ValidateNodeNoGhosting: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Node No Ghosting", + ) + ValidateShapeDefaultNames: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Shape Default Names", + ) + ValidateShapeRenderStats: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Shape Render Stats", + ) + ValidateShapeZero: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Shape Zero", + ) + ValidateTransformZero: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Transform Zero", + ) + ValidateUniqueNames: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Unique Names", + ) + ValidateNoVRayMesh: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No V-Ray Proxies (VRayMesh)", + ) + ValidateUnrealMeshTriangulated: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate if Mesh is Triangulated", + ) + ValidateAlembicVisibleOnly: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Alembic Visible Node", + ) + ExtractProxyAlembic: ExtractProxyAlembicModel = Field( + default_factory=ExtractProxyAlembicModel, + 
title="Extract Proxy Alembic", + section="Model Extractors", + ) + ExtractAlembic: ExtractAlembicModel = Field( + default_factory=ExtractAlembicModel, + title="Extract Alembic", + ) + ExtractObj: ExtractObjModel = Field( + default_factory=ExtractObjModel, + title="Extract OBJ" + ) + # Model - END + + # Rig - START + ValidateRigContents: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Contents", + section="Rig", + ) + ValidateRigJointsHidden: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Joints Hidden", + ) + ValidateRigControllers: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Controllers", + ) + ValidateAnimationContent: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Animation Content", + ) + ValidateOutRelatedNodeIds: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Animation Out Set Related Node Ids", + ) + ValidateRigControllersArnoldAttributes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Controllers (Arnold Attributes)", + ) + ValidateSkeletalMeshHierarchy: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skeletal Mesh Top Node", + ) + ValidateSkinclusterDeformerSet: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skincluster Deformer Relationships", + ) + ValidateRigOutSetNodeIds: ValidateRigOutSetNodeIdsModel = Field( + default_factory=ValidateRigOutSetNodeIdsModel, + title="Validate Rig Out Set Node Ids", + ) + # Rig - END + ValidateCameraAttributes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Camera Attributes" + ) + ValidateAssemblyName: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Assembly Name" + ) + ValidateAssemblyNamespaces: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Assembly Namespaces" + ) + ValidateAssemblyModelTransforms: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Assembly Model Transforms" + ) + ValidateAssRelativePaths: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Ass Relative Paths" + ) + ValidateInstancerContent: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Instancer Content" + ) + ValidateInstancerFrameRanges: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Instancer Cache Frame Ranges" + ) + ValidateNoDefaultCameras: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Default Cameras" + ) + ValidateUnrealUpAxis: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Unreal Up-Axis Check" + ) + ValidateCameraContents: ValidateCameraContentsModel = Field( + default_factory=ValidateCameraContentsModel, + title="Validate Camera Content" + ) + ExtractPlayblast: ExtractPlayblastSetting = Field( + default_factory=ExtractPlayblastSetting, + title="Extract Playblast Settings", + section="Extractors" + ) + ExtractMayaSceneRaw: ExtractMayaSceneRawModel = Field( + default_factory=ExtractMayaSceneRawModel, + title="Maya Scene(Raw)" + ) + ExtractCameraAlembic: ExtractCameraAlembicModel = Field( + default_factory=ExtractCameraAlembicModel, + title="Extract Camera Alembic" + ) + ExtractGLB: ExtractGLBModel = Field( + 
default_factory=ExtractGLBModel, + title="Extract GLB" + ) + ExtractLook: ExtractLookModel = Field( + default_factory=ExtractLookModel, + title="Extract Look" + ) + ExtractGPUCache: ExtractGPUCacheModel = Field( + default_factory=ExtractGPUCacheModel, + title="Extract GPU Cache", + ) + + +DEFAULT_SUFFIX_NAMING = { + "mesh": ["_GEO", "_GES", "_GEP", "_OSD"], + "nurbsCurve": ["_CRV"], + "nurbsSurface": ["_NRB"], + "locator": ["_LOC"], + "group": ["_GRP"] +} + +DEFAULT_PUBLISH_SETTINGS = { + "CollectMayaRender": { + "sync_workfile_version": False + }, + "CollectFbxCamera": { + "enabled": False + }, + "CollectGLTF": { + "enabled": False + }, + "ValidateInstanceInContext": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateFrameRange": { + "enabled": True, + "optional": True, + "active": True, + "exclude_product_types": [ + "model", + "rig", + "staticMesh" + ] + }, + "ValidateShaderName": { + "enabled": False, + "optional": True, + "active": True, + "regex": "(?P<asset>.*)_(.*)_SHD" + }, + "ValidateShadingEngine": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMayaColorSpace": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateAttributes": { + "enabled": False, + "attributes": "{}" + }, + "ValidateLoadedPlugin": { + "enabled": False, + "optional": True, + "whitelist_native_plugins": False, + "authorized_plugins": [] + }, + "ValidateMayaUnits": { + "enabled": True, + "optional": False, + "validate_linear_units": True, + "linear_units": "cm", + "validate_angular_units": True, + "angular_units": "deg", + "validate_fps": True + }, + "ValidateUnrealStaticMeshName": { + "enabled": True, + "optional": True, + "validate_mesh": False, + "validate_collision": True + }, + "ValidateCycleError": { + "enabled": True, + "optional": False, + "families": [ + "rig" + ] + }, + "ValidatePluginPathAttributes": { + "enabled": True, + "optional": False, + "active": True, + "attribute": [ + {"name": "AlembicNode", "value": "abc_File"}, + {"name": "VRayProxy", "value": "fileName"}, + {"name": "RenderManArchive", "value": "filename"}, + {"name": "pgYetiMaya", "value": "cacheFileName"}, + {"name": "aiStandIn", "value": "dso"}, + {"name": "RedshiftSprite", "value": "tex0"}, + {"name": "RedshiftBokeh", "value": "dofBokehImage"}, + {"name": "RedshiftCameraMap", "value": "tex0"}, + {"name": "RedshiftEnvironment", "value": "tex2"}, + {"name": "RedshiftDomeLight", "value": "tex1"}, + {"name": "RedshiftIESLight", "value": "profile"}, + {"name": "RedshiftLightGobo", "value": "tex0"}, + {"name": "RedshiftNormalMap", "value": "tex0"}, + {"name": "RedshiftProxyMesh", "value": "fileName"}, + {"name": "RedshiftVolumeShape", "value": "fileName"}, + {"name": "VRayTexGLSL", "value": "fileName"}, + {"name": "VRayMtlGLSL", "value": "fileName"}, + {"name": "VRayVRmatMtl", "value": "fileName"}, + {"name": "VRayPtex", "value": "ptexFile"}, + {"name": "VRayLightIESShape", "value": "iesFile"}, + {"name": "VRayMesh", "value": "materialAssignmentsFile"}, + {"name": "VRayMtlOSL", "value": "fileName"}, + {"name": "VRayTexOSL", "value": "fileName"}, + {"name": "VRayTexOCIO", "value": "ocioConfigFile"}, + {"name": "VRaySettingsNode", "value": "pmap_autoSaveFile2"}, + {"name": "VRayScannedMtl", "value": "file"}, + {"name": "VRayScene", "value": "parameterOverrideFilePath"}, + {"name": "VRayMtlMDL", "value": "filename"}, + {"name": "VRaySimbiont", "value": "file"}, + {"name": "dlOpenVDBShape", 
"value": "filename"}, + {"name": "pgYetiMayaShape", "value": "liveABCFilename"}, + {"name": "gpuCache", "value": "cacheFileName"}, + ] + }, + "ValidateRenderSettings": { + "arnold_render_attributes": [], + "vray_render_attributes": [], + "redshift_render_attributes": [], + "renderman_render_attributes": [] + }, + "ValidateCurrentRenderLayerIsRenderable": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateGLSLMaterial": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateGLSLPlugin": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRenderImageRule": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRenderNoDefaultCameras": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRenderSingleCamera": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRenderLayerAOVs": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateStepSize": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVRayDistributedRendering": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVrayReferencedAOVs": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVRayTranslatorEnabled": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVrayProxy": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVrayProxyMembers": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateYetiRenderScriptCallbacks": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateYetiRigCacheState": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateYetiRigInputShapesInInstance": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateYetiRigSettings": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateModelName": { + "enabled": False, + "database": True, + "material_file": { + "windows": "", + "darwin": "", + "linux": "" + }, + "regex": "(.*)_(\\d)*_(?P.*)_(GEO)", + "top_level_regex": ".*_GRP" + }, + "ValidateModelContent": { + "enabled": True, + "optional": False, + "validate_top_group": True + }, + "ValidateTransformNamingSuffix": { + "enabled": True, + "optional": True, + "SUFFIX_NAMING_TABLE": json.dumps(DEFAULT_SUFFIX_NAMING, indent=4), + "ALLOW_IF_NOT_IN_SUFFIX_TABLE": True + }, + "ValidateColorSets": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshHasOverlappingUVs": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshArnoldAttributes": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshShaderConnections": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshSingleUVSet": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshHasUVs": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshLaminaFaces": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshNgons": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshNonManifold": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshNoNegativeScale": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateMeshNonZeroEdgeLength": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshNormalsUnlocked": { + "enabled": False, + "optional": True, + 
"active": True + }, + "ValidateMeshUVSetMap1": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshVerticesHaveEdges": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateNoAnimation": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateNoNamespace": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoNullTransforms": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoUnknownNodes": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNodeNoGhosting": { + "enabled": False, + "optional": False, + "active": True + }, + "ValidateShapeDefaultNames": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateShapeRenderStats": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateShapeZero": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateTransformZero": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateUniqueNames": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateNoVRayMesh": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateUnrealMeshTriangulated": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateAlembicVisibleOnly": { + "enabled": True, + "optional": False, + "active": True + }, + "ExtractProxyAlembic": { + "enabled": True, + "families": [ + "proxyAbc" + ] + }, + "ExtractAlembic": { + "enabled": True, + "families": [ + "pointcache", + "model", + "vrayproxy.alembic" + ] + }, + "ExtractObj": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateRigContents": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateRigJointsHidden": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateRigControllers": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateAnimationContent": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateOutRelatedNodeIds": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRigControllersArnoldAttributes": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateSkeletalMeshHierarchy": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateSkinclusterDeformerSet": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRigOutSetNodeIds": { + "enabled": True, + "optional": False, + "allow_history_only": False + }, + "ValidateCameraAttributes": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateAssemblyName": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateAssemblyNamespaces": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateAssemblyModelTransforms": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateAssRelativePaths": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateInstancerContent": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateInstancerFrameRanges": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoDefaultCameras": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateUnrealUpAxis": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateCameraContents": { + "enabled": True, + "optional": False, + "validate_shapes": True + }, + "ExtractPlayblast": 
DEFAULT_PLAYBLAST_SETTING, + "ExtractMayaSceneRaw": { + "enabled": True, + "add_for_families": [ + "layout" + ] + }, + "ExtractCameraAlembic": { + "enabled": True, + "optional": True, + "active": True, + "bake_attributes": "[]" + }, + "ExtractGLB": { + "enabled": True, + "active": True, + "ogsfx_path": "/maya2glTF/PBR/shaders/glTF_PBR.ogsfx" + }, + "ExtractLook": { + "maketx_arguments": [] + }, + "ExtractGPUCache": { + "enabled": False, + "families": [ + "model", + "animation", + "pointcache" + ], + "step": 1.0, + "stepSave": 1, + "optimize": True, + "optimizationThreshold": 40000, + "optimizeAnimationsForMotionBlur": True, + "writeMaterials": True, + "useBaseTessellation": True + } +} diff --git a/server_addon/maya/server/settings/render_settings.py b/server_addon/maya/server/settings/render_settings.py new file mode 100644 index 00000000000..b6163a04ce0 --- /dev/null +++ b/server_addon/maya/server/settings/render_settings.py @@ -0,0 +1,500 @@ +"""Providing models and values for Maya Render Settings.""" +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +def aov_separators_enum(): + return [ + {"value": "dash", "label": "- (dash)"}, + {"value": "underscore", "label": "_ (underscore)"}, + {"value": "dot", "label": ". (dot)"} + ] + + +def arnold_image_format_enum(): + """Return enumerator for Arnold output formats.""" + return [ + {"label": "jpeg", "value": "jpeg"}, + {"label": "png", "value": "png"}, + {"label": "deepexr", "value": "deep exr"}, + {"label": "tif", "value": "tif"}, + {"label": "exr", "value": "exr"}, + {"label": "maya", "value": "maya"}, + {"label": "mtoa_shaders", "value": "mtoa_shaders"} + ] + + +def arnold_aov_list_enum(): + """Return enumerator for Arnold AOVs. + + Note: Key is value, Value in this case is Label. This + was taken from v3 settings. 
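+
+    In AYON each item is an explicit {"value": ..., "label": ...}
+    dict; see the conversion sketch after redshift_aov_list_enum
+    below.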
+ """ + return [ + {"value": "empty", "label": "< empty >"}, + {"value": "ID", "label": "ID"}, + {"value": "N", "label": "N"}, + {"value": "P", "label": "P"}, + {"value": "Pref", "label": "Pref"}, + {"value": "RGBA", "label": "RGBA"}, + {"value": "Z", "label": "Z"}, + {"value": "albedo", "label": "albedo"}, + {"value": "background", "label": "background"}, + {"value": "coat", "label": "coat"}, + {"value": "coat_albedo", "label": "coat_albedo"}, + {"value": "coat_direct", "label": "coat_direct"}, + {"value": "coat_indirect", "label": "coat_indirect"}, + {"value": "cputime", "label": "cputime"}, + {"value": "crypto_asset", "label": "crypto_asset"}, + {"value": "crypto_material", "label": "cypto_material"}, + {"value": "crypto_object", "label": "crypto_object"}, + {"value": "diffuse", "label": "diffuse"}, + {"value": "diffuse_albedo", "label": "diffuse_albedo"}, + {"value": "diffuse_direct", "label": "diffuse_direct"}, + {"value": "diffuse_indirect", "label": "diffuse_indirect"}, + {"value": "direct", "label": "direct"}, + {"value": "emission", "label": "emission"}, + {"value": "highlight", "label": "highlight"}, + {"value": "indirect", "label": "indirect"}, + {"value": "motionvector", "label": "motionvector"}, + {"value": "opacity", "label": "opacity"}, + {"value": "raycount", "label": "raycount"}, + {"value": "rim_light", "label": "rim_light"}, + {"value": "shadow", "label": "shadow"}, + {"value": "shadow_diff", "label": "shadow_diff"}, + {"value": "shadow_mask", "label": "shadow_mask"}, + {"value": "shadow_matte", "label": "shadow_matte"}, + {"value": "sheen", "label": "sheen"}, + {"value": "sheen_albedo", "label": "sheen_albedo"}, + {"value": "sheen_direct", "label": "sheen_direct"}, + {"value": "sheen_indirect", "label": "sheen_indirect"}, + {"value": "specular", "label": "specular"}, + {"value": "specular_albedo", "label": "specular_albedo"}, + {"value": "specular_direct", "label": "specular_direct"}, + {"value": "specular_indirect", "label": "specular_indirect"}, + {"value": "sss", "label": "sss"}, + {"value": "sss_albedo", "label": "sss_albedo"}, + {"value": "sss_direct", "label": "sss_direct"}, + {"value": "sss_indirect", "label": "sss_indirect"}, + {"value": "transmission", "label": "transmission"}, + {"value": "transmission_albedo", "label": "transmission_albedo"}, + {"value": "transmission_direct", "label": "transmission_direct"}, + {"value": "transmission_indirect", "label": "transmission_indirect"}, + {"value": "volume", "label": "volume"}, + {"value": "volume_Z", "label": "volume_Z"}, + {"value": "volume_albedo", "label": "volume_albedo"}, + {"value": "volume_direct", "label": "volume_direct"}, + {"value": "volume_indirect", "label": "volume_indirect"}, + {"value": "volume_opacity", "label": "volume_opacity"}, + ] + + +def vray_image_output_enum(): + """Return output format for Vray enumerator.""" + return [ + {"label": "png", "value": "png"}, + {"label": "jpg", "value": "jpg"}, + {"label": "vrimg", "value": "vrimg"}, + {"label": "hdr", "value": "hdr"}, + {"label": "exr", "value": "exr"}, + {"label": "exr (multichannel)", "value": "exr (multichannel)"}, + {"label": "exr (deep)", "value": "exr (deep)"}, + {"label": "tga", "value": "tga"}, + {"label": "bmp", "value": "bmp"}, + {"label": "sgi", "value": "sgi"} + ] + + +def vray_aov_list_enum(): + """Return enumerator for Vray AOVs. + + Note: Key is value, Value in this case is Label. This + was taken from v3 settings. 
+ """ + + return [ + {"value": "empty", "label": "< empty >"}, + {"value": "atmosphereChannel", "label": "atmosphere"}, + {"value": "backgroundChannel", "label": "background"}, + {"value": "bumpNormalsChannel", "label": "bumpnormals"}, + {"value": "causticsChannel", "label": "caustics"}, + {"value": "coatFilterChannel", "label": "coat_filter"}, + {"value": "coatGlossinessChannel", "label": "coatGloss"}, + {"value": "coatReflectionChannel", "label": "coat_reflection"}, + {"value": "vrayCoatChannel", "label": "coat_specular"}, + {"value": "CoverageChannel", "label": "coverage"}, + {"value": "cryptomatteChannel", "label": "cryptomatte"}, + {"value": "customColor", "label": "custom_color"}, + {"value": "drBucketChannel", "label": "DR"}, + {"value": "denoiserChannel", "label": "denoiser"}, + {"value": "diffuseChannel", "label": "diffuse"}, + {"value": "ExtraTexElement", "label": "extraTex"}, + {"value": "giChannel", "label": "GI"}, + {"value": "LightMixElement", "label": "None"}, + {"value": "lightingChannel", "label": "lighting"}, + {"value": "LightingAnalysisChannel", "label": "LightingAnalysis"}, + {"value": "materialIDChannel", "label": "materialID"}, + {"value": "MaterialSelectElement", "label": "materialSelect"}, + {"value": "matteShadowChannel", "label": "matteShadow"}, + {"value": "MultiMatteElement", "label": "multimatte"}, + {"value": "multimatteIDChannel", "label": "multimatteID"}, + {"value": "normalsChannel", "label": "normals"}, + {"value": "nodeIDChannel", "label": "objectId"}, + {"value": "objectSelectChannel", "label": "objectSelect"}, + {"value": "rawCoatFilterChannel", "label": "raw_coat_filter"}, + {"value": "rawCoatReflectionChannel", "label": "raw_coat_reflection"}, + {"value": "rawDiffuseFilterChannel", "label": "rawDiffuseFilter"}, + {"value": "rawGiChannel", "label": "rawGI"}, + {"value": "rawLightChannel", "label": "rawLight"}, + {"value": "rawReflectionChannel", "label": "rawReflection"}, + { + "value": "rawReflectionFilterChannel", + "label": "rawReflectionFilter" + }, + {"value": "rawRefractionChannel", "label": "rawRefraction"}, + { + "value": "rawRefractionFilterChannel", + "label": "rawRefractionFilter" + }, + {"value": "rawShadowChannel", "label": "rawShadow"}, + {"value": "rawSheenFilterChannel", "label": "raw_sheen_filter"}, + { + "value": "rawSheenReflectionChannel", + "label": "raw_sheen_reflection" + }, + {"value": "rawTotalLightChannel", "label": "rawTotalLight"}, + {"value": "reflectIORChannel", "label": "reflIOR"}, + {"value": "reflectChannel", "label": "reflect"}, + {"value": "reflectionFilterChannel", "label": "reflectionFilter"}, + {"value": "reflectGlossinessChannel", "label": "reflGloss"}, + {"value": "refractChannel", "label": "refract"}, + {"value": "refractionFilterChannel", "label": "refractionFilter"}, + {"value": "refractGlossinessChannel", "label": "refrGloss"}, + {"value": "renderIDChannel", "label": "renderId"}, + {"value": "FastSSS2Channel", "label": "SSS"}, + {"value": "sampleRateChannel", "label": "sampleRate"}, + {"value": "samplerInfo", "label": "samplerInfo"}, + {"value": "selfIllumChannel", "label": "selfIllum"}, + {"value": "shadowChannel", "label": "shadow"}, + {"value": "sheenFilterChannel", "label": "sheen_filter"}, + {"value": "sheenGlossinessChannel", "label": "sheenGloss"}, + {"value": "sheenReflectionChannel", "label": "sheen_reflection"}, + {"value": "vraySheenChannel", "label": "sheen_specular"}, + {"value": "specularChannel", "label": "specular"}, + {"value": "Toon", "label": "Toon"}, + {"value": "toonLightingChannel", 
"label": "toonLighting"}, + {"value": "toonSpecularChannel", "label": "toonSpecular"}, + {"value": "totalLightChannel", "label": "totalLight"}, + {"value": "unclampedColorChannel", "label": "unclampedColor"}, + {"value": "VRScansPaintMaskChannel", "label": "VRScansPaintMask"}, + {"value": "VRScansZoneMaskChannel", "label": "VRScansZoneMask"}, + {"value": "velocityChannel", "label": "velocity"}, + {"value": "zdepthChannel", "label": "zDepth"}, + {"value": "LightSelectElement", "label": "lightselect"}, + ] + + +def redshift_engine_enum(): + """Get Redshift engine type enumerator.""" + return [ + {"value": "0", "label": "None"}, + {"value": "1", "label": "Photon Map"}, + {"value": "2", "label": "Irradiance Cache"}, + {"value": "3", "label": "Brute Force"} + ] + + +def redshift_image_output_enum(): + """Return output format for Redshift enumerator.""" + return [ + {"value": "iff", "label": "Maya IFF"}, + {"value": "exr", "label": "OpenEXR"}, + {"value": "tif", "label": "TIFF"}, + {"value": "png", "label": "PNG"}, + {"value": "tga", "label": "Targa"}, + {"value": "jpg", "label": "JPEG"} + ] + + +def redshift_aov_list_enum(): + """Return enumerator for Vray AOVs. + + Note: Key is value, Value in this case is Label. This + was taken from v3 settings. + """ + return [ + {"value": "empty", "label": "< none >"}, + {"value": "AO", "label": "Ambient Occlusion"}, + {"value": "Background", "label": "Background"}, + {"value": "Beauty", "label": "Beauty"}, + {"value": "BumpNormals", "label": "Bump Normals"}, + {"value": "Caustics", "label": "Caustics"}, + {"value": "CausticsRaw", "label": "Caustics Raw"}, + {"value": "Cryptomatte", "label": "Cryptomatte"}, + {"value": "Custom", "label": "Custom"}, + {"value": "Z", "label": "Depth"}, + {"value": "DiffuseFilter", "label": "Diffuse Filter"}, + {"value": "DiffuseLighting", "label": "Diffuse Lighting"}, + {"value": "DiffuseLightingRaw", "label": "Diffuse Lighting Raw"}, + {"value": "Emission", "label": "Emission"}, + {"value": "GI", "label": "Global Illumination"}, + {"value": "GIRaw", "label": "Global Illumination Raw"}, + {"value": "Matte", "label": "Matte"}, + {"value": "MotionVectors", "label": "Ambient Occlusion"}, + {"value": "N", "label": "Normals"}, + {"value": "ID", "label": "ObjectID"}, + {"value": "ObjectBumpNormal", "label": "Object-Space Bump Normals"}, + {"value": "ObjectPosition", "label": "Object-Space Positions"}, + {"value": "PuzzleMatte", "label": "Puzzle Matte"}, + {"value": "Reflections", "label": "Reflections"}, + {"value": "ReflectionsFilter", "label": "Reflections Filter"}, + {"value": "ReflectionsRaw", "label": "Reflections Raw"}, + {"value": "Refractions", "label": "Refractions"}, + {"value": "RefractionsFilter", "label": "Refractions Filter"}, + {"value": "RefractionsRaw", "label": "Refractions Filter"}, + {"value": "Shadows", "label": "Shadows"}, + {"value": "SpecularLighting", "label": "Specular Lighting"}, + {"value": "SSS", "label": "Sub Surface Scatter"}, + {"value": "SSSRaw", "label": "Sub Surface Scatter Raw"}, + { + "value": "TotalDiffuseLightingRaw", + "label": "Total Diffuse Lighting Raw" + }, + { + "value": "TotalTransLightingRaw", + "label": "Total Translucency Filter" + }, + {"value": "TransTint", "label": "Translucency Filter"}, + {"value": "TransGIRaw", "label": "Translucency Lighting Raw"}, + {"value": "VolumeFogEmission", "label": "Volume Fog Emission"}, + {"value": "VolumeFogTint", "label": "Volume Fog Tint"}, + {"value": "VolumeLighting", "label": "Volume Lighting"}, + {"value": "P", "label": "World Position"}, + ] 
+ + +class AdditionalOptionsModel(BaseSettingsModel): + """Additional Option""" + _layout = "compact" + + attribute: str = Field("", title="Attribute name") + value: str = Field("", title="Value") + + +class ArnoldSettingsModel(BaseSettingsModel): + image_prefix: str = Field(title="Image prefix template") + image_format: str = Field( + enum_resolver=arnold_image_format_enum, title="Output Image Format") + multilayer_exr: bool = Field(title="Multilayer (exr)") + tiled: bool = Field(title="Tiled (tif, exr)") + aov_list: list[str] = Field( + default_factory=list, + enum_resolver=arnold_aov_list_enum, + title="AOVs to create" + ) + additional_options: list[AdditionalOptionsModel] = Field( + default_factory=list, + title="Additional Arnold Options", + description=( + "Add additional options - put attribute and value, like AASamples" + ) + ) + + +class VraySettingsModel(BaseSettingsModel): + image_prefix: str = Field(title="Image prefix template") + # engine was str because of JSON limitation (key must be string) + engine: str = Field( + enum_resolver=lambda: [ + {"label": "V-Ray", "value": "1"}, + {"label": "V-Ray GPU", "value": "2"} + ], + title="Production Engine" + ) + image_format: str = Field( + enum_resolver=vray_image_output_enum, + title="Output Image Format" + ) + aov_list: list[str] = Field( + default_factory=list, + enum_resolver=vray_aov_list_enum, + title="AOVs to create" + ) + additional_options: list[AdditionalOptionsModel] = Field( + default_factory=list, + title="Additional Vray Options", + description=( + "Add additional options - put attribute and value," + " like aaFilterSize" + ) + ) + + +class RedshiftSettingsModel(BaseSettingsModel): + image_prefix: str = Field(title="Image prefix template") + # both engines are using the same enumerator, + # both were originally str because of JSON limitation. 
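+    # (the values are the strings "0"-"3" resolved by redshift_engine_enum)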
+    primary_gi_engine: str = Field(
+        enum_resolver=redshift_engine_enum,
+        title="Primary GI Engine"
+    )
+    secondary_gi_engine: str = Field(
+        enum_resolver=redshift_engine_enum,
+        title="Secondary GI Engine"
+    )
+    image_format: str = Field(
+        enum_resolver=redshift_image_output_enum,
+        title="Output Image Format"
+    )
+    multilayer_exr: bool = Field(title="Multilayer (exr)")
+    force_combine: bool = Field(title="Force combine beauty and AOVs")
+    aov_list: list[str] = Field(
+        default_factory=list,
+        enum_resolver=redshift_aov_list_enum,
+        title="AOVs to create"
+    )
+    additional_options: list[AdditionalOptionsModel] = Field(
+        default_factory=list,
+        title="Additional Redshift Options",
+        description=(
+            "Add additional options - put attribute and value,"
+            " like reflectionMaxTraceDepth"
+        )
+    )
+
+
+def renderman_display_filters():
+    return [
+        "PxrBackgroundDisplayFilter",
+        "PxrCopyAOVDisplayFilter",
+        "PxrEdgeDetect",
+        "PxrFilmicTonemapperDisplayFilter",
+        "PxrGradeDisplayFilter",
+        "PxrHalfBufferErrorFilter",
+        "PxrImageDisplayFilter",
+        "PxrLightSaturation",
+        "PxrShadowDisplayFilter",
+        "PxrStylizedHatching",
+        "PxrStylizedLines",
+        "PxrStylizedToon",
+        "PxrWhitePointDisplayFilter"
+    ]
+
+
+def renderman_sample_filters_enum():
+    return [
+        "PxrBackgroundSampleFilter",
+        "PxrCopyAOVSampleFilter",
+        "PxrCryptomatte",
+        "PxrFilmicTonemapperSampleFilter",
+        "PxrGradeSampleFilter",
+        "PxrShadowFilter",
+        "PxrWatermarkFilter",
+        "PxrWhitePointSampleFilter"
+    ]
+
+
+class RendermanSettingsModel(BaseSettingsModel):
+    image_prefix: str = Field(
+        "", title="Image prefix template")
+    image_dir: str = Field(
+        "", title="Image Output Directory")
+    display_filters: list[str] = Field(
+        default_factory=list,
+        title="Display Filters",
+        enum_resolver=renderman_display_filters
+    )
+    imageDisplay_dir: str = Field(
+        "", title="Image Display Filter Directory")
+    sample_filters: list[str] = Field(
+        default_factory=list,
+        title="Sample Filters",
+        enum_resolver=renderman_sample_filters_enum
+    )
+    cryptomatte_dir: str = Field(
+        "", title="Cryptomatte Output Directory")
+    watermark_dir: str = Field(
+        "", title="Watermark Filter Directory")
+    additional_options: list[AdditionalOptionsModel] = Field(
+        default_factory=list,
+        title="Additional Renderer Options"
+    )
+
+
+class RenderSettingsModel(BaseSettingsModel):
+    apply_render_settings: bool = Field(
+        title="Apply Render Settings on creation"
+    )
+    default_render_image_folder: str = Field(
+        title="Default render image folder"
+    )
+    enable_all_lights: bool = Field(
+        title="Include all lights in Render Setup Layers by default"
+    )
+    aov_separator: str = Field(
+        "underscore",
+        title="AOV Separator character",
+        enum_resolver=aov_separators_enum
+    )
+    reset_current_frame: bool = Field(
+        title="Reset Current Frame")
+    remove_aovs: bool = Field(
+        title="Remove existing AOVs")
+    arnold_renderer: ArnoldSettingsModel = Field(
+        default_factory=ArnoldSettingsModel,
+        title="Arnold Renderer")
+    vray_renderer: VraySettingsModel = Field(
+        default_factory=VraySettingsModel,
+        title="Vray Renderer")
+    redshift_renderer: RedshiftSettingsModel = Field(
+        default_factory=RedshiftSettingsModel,
+        title="Redshift Renderer")
+    renderman_renderer: RendermanSettingsModel = Field(
+        default_factory=RendermanSettingsModel,
+        title="Renderman Renderer")
+
+
+DEFAULT_RENDER_SETTINGS = {
+    "apply_render_settings": True,
+    "default_render_image_folder": "renders/maya",
+    "enable_all_lights": True,
+    "aov_separator": "underscore",
+    "reset_current_frame": False,
+
"remove_aovs": False, + "arnold_renderer": { + "image_prefix": "//_", + "image_format": "exr", + "multilayer_exr": True, + "tiled": True, + "aov_list": [], + "additional_options": [] + }, + "vray_renderer": { + "image_prefix": "//", + "engine": "1", + "image_format": "exr", + "aov_list": [], + "additional_options": [] + }, + "redshift_renderer": { + "image_prefix": "//", + "primary_gi_engine": "0", + "secondary_gi_engine": "0", + "image_format": "exr", + "multilayer_exr": True, + "force_combine": True, + "aov_list": [], + "additional_options": [] + }, + "renderman_renderer": { + "image_prefix": "{aov_separator}..", + "image_dir": "/", + "display_filters": [], + "imageDisplay_dir": "/{aov_separator}imageDisplayFilter..", + "sample_filters": [], + "cryptomatte_dir": "/{aov_separator}cryptomatte..", + "watermark_dir": "/{aov_separator}watermarkFilter..", + "additional_options": [] + } +} diff --git a/server_addon/maya/server/settings/scriptsmenu.py b/server_addon/maya/server/settings/scriptsmenu.py new file mode 100644 index 00000000000..82c1c2e53c7 --- /dev/null +++ b/server_addon/maya/server/settings/scriptsmenu.py @@ -0,0 +1,43 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class ScriptsmenuSubmodel(BaseSettingsModel): + """Item Definition""" + _isGroup = True + type: str = Field(title="Type") + command: str = Field(title="Command") + sourcetype: str = Field(title="Source Type") + title: str = Field(title="Title") + tooltip: str = Field(title="Tooltip") + tags: list[str] = Field(default_factory=list, title="A list of tags") + + +class ScriptsmenuModel(BaseSettingsModel): + _isGroup = True + + name: str = Field(title="Menu Name") + definition: list[ScriptsmenuSubmodel] = Field( + default_factory=list, + title="Menu Definition", + description="Scriptmenu Items Definition" + ) + + +DEFAULT_SCRIPTSMENU_SETTINGS = { + "name": "OpenPype Tools", + "definition": [ + { + "type": "action", + "command": "import openpype.hosts.maya.api.commands as op_cmds; op_cmds.edit_shader_definitions()", + "sourcetype": "python", + "title": "Edit shader name definitions", + "tooltip": "Edit shader name definitions used in validation and renaming.", + "tags": [ + "pipeline", + "shader" + ] + } + ] +} diff --git a/server_addon/maya/server/settings/templated_workfile_settings.py b/server_addon/maya/server/settings/templated_workfile_settings.py new file mode 100644 index 00000000000..ef81b31a071 --- /dev/null +++ b/server_addon/maya/server/settings/templated_workfile_settings.py @@ -0,0 +1,25 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel, task_types_enum + + +class WorkfileBuildProfilesModel(BaseSettingsModel): + _layout = "expanded" + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field(default_factory=list, title="Task names") + path: str = Field("", title="Path to template") + + +class TemplatedProfilesModel(BaseSettingsModel): + profiles: list[WorkfileBuildProfilesModel] = Field( + default_factory=list, + title="Profiles" + ) + + +DEFAULT_TEMPLATED_WORKFILE_SETTINGS = { + "profiles": [] +} diff --git a/server_addon/maya/server/settings/workfile_build_settings.py b/server_addon/maya/server/settings/workfile_build_settings.py new file mode 100644 index 00000000000..dc56d1a3208 --- /dev/null +++ b/server_addon/maya/server/settings/workfile_build_settings.py @@ -0,0 +1,131 @@ +from pydantic import Field +from ayon_server.settings import 
BaseSettingsModel, task_types_enum + + +class ContextItemModel(BaseSettingsModel): + _layout = "expanded" + product_name_filters: list[str] = Field( + default_factory=list, title="Product name Filters") + product_types: list[str] = Field( + default_factory=list, title="Product types") + repre_names: list[str] = Field( + default_factory=list, title="Repre Names") + loaders: list[str] = Field( + default_factory=list, title="Loaders") + + +class WorkfileSettingModel(BaseSettingsModel): + _layout = "expanded" + task_types: list[str] = Field( + default_factory=list, + enum_resolver=task_types_enum, + title="Task types") + tasks: list[str] = Field( + default_factory=list, + title="Task names") + current_context: list[ContextItemModel] = Field( + default_factory=list, + title="Current Context") + linked_assets: list[ContextItemModel] = Field( + default_factory=list, + title="Linked Assets") + + +class ProfilesModel(BaseSettingsModel): + profiles: list[WorkfileSettingModel] = Field( + default_factory=list, + title="Profiles" + ) + + +DEFAULT_WORKFILE_SETTING = { + "profiles": [ + { + "task_types": [], + "tasks": [ + "Lighting" + ], + "current_context": [ + { + "product_name_filters": [ + ".+[Mm]ain" + ], + "product_types": [ + "model" + ], + "repre_names": [ + "abc", + "ma" + ], + "loaders": [ + "ReferenceLoader" + ] + }, + { + "product_name_filters": [], + "product_types": [ + "animation", + "pointcache", + "proxyAbc" + ], + "repre_names": [ + "abc" + ], + "loaders": [ + "ReferenceLoader" + ] + }, + { + "product_name_filters": [], + "product_types": [ + "rendersetup" + ], + "repre_names": [ + "json" + ], + "loaders": [ + "RenderSetupLoader" + ] + }, + { + "product_name_filters": [], + "product_types": [ + "camera" + ], + "repre_names": [ + "abc" + ], + "loaders": [ + "ReferenceLoader" + ] + } + ], + "linked_assets": [ + { + "product_name_filters": [], + "product_types": [ + "sedress" + ], + "repre_names": [ + "ma" + ], + "loaders": [ + "ReferenceLoader" + ] + }, + { + "product_name_filters": [], + "product_types": [ + "ArnoldStandin" + ], + "repre_names": [ + "ass" + ], + "loaders": [ + "assLoader" + ] + } + ] + } + ] +} diff --git a/server_addon/maya/server/version.py b/server_addon/maya/server/version.py new file mode 100644 index 00000000000..e57ad007184 --- /dev/null +++ b/server_addon/maya/server/version.py @@ -0,0 +1,3 @@ +# -*- coding: utf-8 -*- +"""Package declaring addon version.""" +__version__ = "0.1.3" diff --git a/server_addon/muster/server/__init__.py b/server_addon/muster/server/__init__.py new file mode 100644 index 00000000000..2cb8943554e --- /dev/null +++ b/server_addon/muster/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import MusterSettings, DEFAULT_VALUES + + +class MusterAddon(BaseServerAddon): + name = "muster" + version = __version__ + title = "Muster" + settings_model: Type[MusterSettings] = MusterSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/muster/server/settings.py b/server_addon/muster/server/settings.py new file mode 100644 index 00000000000..e37c7628705 --- /dev/null +++ b/server_addon/muster/server/settings.py @@ -0,0 +1,41 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class TemplatesMapping(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: int = 
Field(title="mapping") + + +class MusterSettings(BaseSettingsModel): + enabled: bool = True + MUSTER_REST_URL: str = Field( + "", + title="Muster Rest URL", + scope=["studio"], + ) + + templates_mapping: list[TemplatesMapping] = Field( + default_factory=list, + title="Templates mapping", + ) + + +DEFAULT_VALUES = { + "enabled": False, + "MUSTER_REST_URL": "http://127.0.0.1:9890", + "templates_mapping": [ + {"name": "file_layers", "value": 7}, + {"name": "mentalray", "value": 2}, + {"name": "mentalray_sf", "value": 6}, + {"name": "redshift", "value": 55}, + {"name": "renderman", "value": 29}, + {"name": "software", "value": 1}, + {"name": "software_sf", "value": 5}, + {"name": "turtle", "value": 10}, + {"name": "vector", "value": 4}, + {"name": "vray", "value": 37}, + {"name": "ffmpeg", "value": 48} + ] +} diff --git a/server_addon/muster/server/version.py b/server_addon/muster/server/version.py new file mode 100644 index 00000000000..485f44ac21b --- /dev/null +++ b/server_addon/muster/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/nuke/server/__init__.py b/server_addon/nuke/server/__init__.py new file mode 100644 index 00000000000..032ceea5fbd --- /dev/null +++ b/server_addon/nuke/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import NukeSettings, DEFAULT_VALUES + + +class NukeAddon(BaseServerAddon): + name = "nuke" + title = "Nuke" + version = __version__ + settings_model: Type[NukeSettings] = NukeSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/nuke/server/settings/__init__.py b/server_addon/nuke/server/settings/__init__.py new file mode 100644 index 00000000000..1e588653951 --- /dev/null +++ b/server_addon/nuke/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + NukeSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "NukeSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/nuke/server/settings/common.py b/server_addon/nuke/server/settings/common.py new file mode 100644 index 00000000000..700f01f3dc6 --- /dev/null +++ b/server_addon/nuke/server/settings/common.py @@ -0,0 +1,142 @@ +import json +from pydantic import Field +from ayon_server.exceptions import BadRequestException +from ayon_server.settings import BaseSettingsModel +from ayon_server.types import ( + ColorRGBA_float, + ColorRGB_uint8 +) + + +def validate_json_dict(value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "Environment's can't be parsed as json object" + ) + return value + + +class Vector2d(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + + +class Vector3d(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + z: float = Field(1.0, title="Z") + + +class Box(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + r: float = Field(1.0, title="R") + t: float = Field(1.0, title="T") + + +def formatable_knob_type_enum(): + return [ + {"value": "text", "label": "Text"}, + {"value": "number", "label": "Number"}, + {"value": "decimal_number", "label": "Decimal number"}, 
+ {"value": "2d_vector", "label": "2D vector"}, + # "3D vector" + ] + + +class Formatable(BaseSettingsModel): + _layout = "compact" + + template: str = Field( + "", + placeholder="""{{key}} or {{key}};{{key}}""", + title="Template" + ) + to_type: str = Field( + "Text", + title="To Knob type", + enum_resolver=formatable_knob_type_enum, + ) + + +knob_types_enum = [ + {"value": "text", "label": "Text"}, + {"value": "formatable", "label": "Formate from template"}, + {"value": "color_gui", "label": "Color GUI"}, + {"value": "boolean", "label": "Boolean"}, + {"value": "number", "label": "Number"}, + {"value": "decimal_number", "label": "Decimal number"}, + {"value": "vector_2d", "label": "2D vector"}, + {"value": "vector_3d", "label": "3D vector"}, + {"value": "color", "label": "Color"}, + {"value": "box", "label": "Box"}, + {"value": "expression", "label": "Expression"} +] + + +class KnobModel(BaseSettingsModel): + """# TODO: new data structure + - v3 was having type, name, value but + ayon is not able to make it the same. Current model is + defining `type` as `text` and instead of `value` the key is `text`. + So if `type` is `boolean` then key is `boolean` (value). + """ + _layout = "expanded" + + type: str = Field( + title="Type", + description="Switch between different knob types", + enum_resolver=lambda: knob_types_enum, + conditionalEnum=True + ) + + name: str = Field( + title="Name", + placeholder="Name" + ) + text: str = Field("", title="Value") + color_gui: ColorRGB_uint8 = Field( + (0, 0, 255), + title="RGB Uint8", + ) + boolean: bool = Field(False, title="Value") + number: int = Field(0, title="Value") + decimal_number: float = Field(0.0, title="Value") + vector_2d: Vector2d = Field( + default_factory=Vector2d, + title="Value" + ) + vector_3d: Vector3d = Field( + default_factory=Vector3d, + title="Value" + ) + color: ColorRGBA_float = Field( + (0.0, 0.0, 1.0, 1.0), + title="RGBA Float" + ) + box: Box = Field( + default_factory=Box, + title="Value" + ) + formatable: Formatable = Field( + default_factory=Formatable, + title="Formatable" + ) + expression: str = Field( + "", + title="Expression" + ) diff --git a/server_addon/nuke/server/settings/create_plugins.py b/server_addon/nuke/server/settings/create_plugins.py new file mode 100644 index 00000000000..0bbae4ee774 --- /dev/null +++ b/server_addon/nuke/server/settings/create_plugins.py @@ -0,0 +1,223 @@ +from pydantic import validator, Field +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names +) +from .common import KnobModel + + +def instance_attributes_enum(): + """Return create write instance attributes.""" + return [ + {"value": "reviewable", "label": "Reviewable"}, + {"value": "farm_rendering", "label": "Farm rendering"}, + {"value": "use_range_limit", "label": "Use range limit"} + ] + + +class PrenodeModel(BaseSettingsModel): + # TODO: missing in host api + # - good for `dependency` + name: str = Field( + title="Node name" + ) + + # TODO: `nodeclass` should be renamed to `nuke_node_class` + nodeclass: str = Field( + "", + title="Node class" + ) + dependent: str = Field( + "", + title="Incoming dependency" + ) + + """# TODO: Changes in host api: + - Need complete rework of knob types in nuke integration. + - We could not support v3 style of settings. 
+ """ + knobs: list[KnobModel] = Field( + title="Knobs", + ) + + @validator("knobs") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreateWriteRenderModel(BaseSettingsModel): + temp_rendering_path_template: str = Field( + title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + title="Default variants", + default_factory=list + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=instance_attributes_enum, + title="Instance attributes" + ) + + """# TODO: Changes in host api: + - prenodes key was originally dict and now is list + (we could not support v3 style of settings) + """ + prenodes: list[PrenodeModel] = Field( + title="Preceding nodes", + ) + + @validator("prenodes") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreateWritePrerenderModel(BaseSettingsModel): + temp_rendering_path_template: str = Field( + title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + title="Default variants", + default_factory=list + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=instance_attributes_enum, + title="Instance attributes" + ) + + """# TODO: Changes in host api: + - prenodes key was originally dict and now is list + (we could not support v3 style of settings) + """ + prenodes: list[PrenodeModel] = Field( + title="Preceding nodes", + ) + + @validator("prenodes") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreateWriteImageModel(BaseSettingsModel): + temp_rendering_path_template: str = Field( + title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + title="Default variants", + default_factory=list + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=instance_attributes_enum, + title="Instance attributes" + ) + + """# TODO: Changes in host api: + - prenodes key was originally dict and now is list + (we could not support v3 style of settings) + """ + prenodes: list[PrenodeModel] = Field( + title="Preceding nodes", + ) + + @validator("prenodes") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreatorPluginsSettings(BaseSettingsModel): + CreateWriteRender: CreateWriteRenderModel = Field( + default_factory=CreateWriteRenderModel, + title="Create Write Render" + ) + CreateWritePrerender: CreateWritePrerenderModel = Field( + default_factory=CreateWritePrerenderModel, + title="Create Write Prerender" + ) + CreateWriteImage: CreateWriteImageModel = Field( + default_factory=CreateWriteImageModel, + title="Create Write Image" + ) + + +DEFAULT_CREATE_SETTINGS = { + "CreateWriteRender": { + "temp_rendering_path_template": "{work}/renders/nuke/{product[name]}/{product[name]}.{frame}.{ext}", + "default_variants": [ + "Main", + "Mask" + ], + "instance_attributes": [ + "reviewable", + "farm_rendering" + ], + "prenodes": [ + { + "name": "Reformat01", + "nodeclass": "Reformat", + "dependent": "", + "knobs": [ + { + "type": "text", + "name": "resize", + "text": "none" + }, + { + "type": "boolean", + "name": "black_outside", + "boolean": True + } + ] + } + ] + }, + "CreateWritePrerender": { + 
"temp_rendering_path_template": "{work}/renders/nuke/{product[name]}/{product[name]}.{frame}.{ext}", + "default_variants": [ + "Key01", + "Bg01", + "Fg01", + "Branch01", + "Part01" + ], + "instance_attributes": [ + "farm_rendering", + "use_range_limit" + ], + "prenodes": [] + }, + "CreateWriteImage": { + "temp_rendering_path_template": "{work}/renders/nuke/{product[name]}/{product[name]}.{ext}", + "default_variants": [ + "StillFrame", + "MPFrame", + "LayoutFrame" + ], + "instance_attributes": [ + "use_range_limit" + ], + "prenodes": [ + { + "name": "FrameHold01", + "nodeclass": "FrameHold", + "dependent": "", + "knobs": [ + { + "type": "expression", + "name": "first_frame", + "expression": "parent.first" + } + ] + } + ] + } +} diff --git a/server_addon/nuke/server/settings/dirmap.py b/server_addon/nuke/server/settings/dirmap.py new file mode 100644 index 00000000000..2da6d7bf60b --- /dev/null +++ b/server_addon/nuke/server/settings/dirmap.py @@ -0,0 +1,47 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class DirmapPathsSubmodel(BaseSettingsModel): + _layout = "compact" + source_path: list[str] = Field( + default_factory=list, + title="Source Paths" + ) + destination_path: list[str] = Field( + default_factory=list, + title="Destination Paths" + ) + + +class DirmapSettings(BaseSettingsModel): + """Nuke color management project settings.""" + _isGroup: bool = True + + enabled: bool = Field(title="enabled") + paths: DirmapPathsSubmodel = Field( + default_factory=DirmapPathsSubmodel, + title="Dirmap Paths" + ) + + +"""# TODO: +nuke is having originally implemented +following data inputs: + +"nuke-dirmap": { + "enabled": false, + "paths": { + "source-path": [], + "destination-path": [] + } +} +""" + +DEFAULT_DIRMAP_SETTINGS = { + "enabled": False, + "paths": { + "source_path": [], + "destination_path": [] + } +} diff --git a/server_addon/nuke/server/settings/filters.py b/server_addon/nuke/server/settings/filters.py new file mode 100644 index 00000000000..7e2702b3b7f --- /dev/null +++ b/server_addon/nuke/server/settings/filters.py @@ -0,0 +1,19 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value diff --git a/server_addon/nuke/server/settings/general.py b/server_addon/nuke/server/settings/general.py new file mode 100644 index 00000000000..bcbb1839520 --- /dev/null +++ b/server_addon/nuke/server/settings/general.py @@ -0,0 +1,42 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class MenuShortcut(BaseSettingsModel): + """Nuke general project settings.""" + + create: str = Field( + title="Create..." + ) + publish: str = Field( + title="Publish..." + ) + load: str = Field( + title="Load..." + ) + manage: str = Field( + title="Manage..." + ) + build_workfile: str = Field( + title="Build Workfile..." 
+ ) + + +class GeneralSettings(BaseSettingsModel): + """Nuke general project settings.""" + + menu: MenuShortcut = Field( + default_factory=MenuShortcut, + title="Menu Shortcuts", + ) + + +DEFAULT_GENERAL_SETTINGS = { + "menu": { + "create": "ctrl+alt+c", + "publish": "ctrl+alt+p", + "load": "ctrl+alt+l", + "manage": "ctrl+alt+m", + "build_workfile": "ctrl+alt+b" + } +} diff --git a/server_addon/nuke/server/settings/gizmo.py b/server_addon/nuke/server/settings/gizmo.py new file mode 100644 index 00000000000..4cdd614da8a --- /dev/null +++ b/server_addon/nuke/server/settings/gizmo.py @@ -0,0 +1,79 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + MultiplatformPathListModel, +) + + +class SubGizmoItem(BaseSettingsModel): + title: str = Field( + title="Label" + ) + sourcetype: str = Field( + title="Type of usage" + ) + command: str = Field( + title="Python command" + ) + icon: str = Field( + title="Icon Path" + ) + shortcut: str = Field( + title="Hotkey" + ) + + +class GizmoDefinitionItem(BaseSettingsModel): + gizmo_toolbar_path: str = Field( + title="Gizmo Menu" + ) + sub_gizmo_list: list[SubGizmoItem] = Field( + default_factory=list, title="Sub Gizmo List") + + +class GizmoItem(BaseSettingsModel): + """Nuke gizmo item """ + + toolbar_menu_name: str = Field( + title="Toolbar Menu Name" + ) + gizmo_source_dir: MultiplatformPathListModel = Field( + default_factory=MultiplatformPathListModel, + title="Gizmo Directory Path" + ) + toolbar_icon_path: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel, + title="Toolbar Icon Path" + ) + gizmo_definition: list[GizmoDefinitionItem] = Field( + default_factory=list, title="Gizmo Definition") + + +DEFAULT_GIZMO_ITEM = { + "toolbar_menu_name": "OpenPype Gizmo", + "gizmo_source_dir": { + "windows": [], + "darwin": [], + "linux": [] + }, + "toolbar_icon_path": { + "windows": "", + "darwin": "", + "linux": "" + }, + "gizmo_definition": [ + { + "gizmo_toolbar_path": "/path/to/menu", + "sub_gizmo_list": [ + { + "sourcetype": "python", + "title": "Gizmo Note", + "command": "nuke.nodes.StickyNote(label='You can create your own toolbar menu in the Nuke GizmoMenu of OpenPype')", + "icon": "", + "shortcut": "" + } + ] + } + ] +} diff --git a/server_addon/nuke/server/settings/imageio.py b/server_addon/nuke/server/settings/imageio.py new file mode 100644 index 00000000000..b43017ef8be --- /dev/null +++ b/server_addon/nuke/server/settings/imageio.py @@ -0,0 +1,410 @@ +from typing import Literal +from pydantic import validator, Field +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, +) + +from .common import KnobModel + + +class NodesModel(BaseSettingsModel): + """# TODO: This needs to be somehow labeled in settings panel + or at least it could show gist of configuration + """ + _layout = "expanded" + plugins: list[str] = Field( + title="Used in plugins" + ) + # TODO: rename `nukeNodeClass` to `nuke_node_class` + nukeNodeClass: str = Field( + title="Nuke Node Class", + ) + + """ # TODO: Need complete rework of knob types + in nuke integration. We could not support v3 style of settings. 
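+    Knob values follow the same per-type key convention as
+    KnobModel in common.py.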
+ """ + knobs: list[KnobModel] = Field( + title="Knobs", + ) + + @validator("knobs") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class NodesSetting(BaseSettingsModel): + # TODO: rename `requiredNodes` to `required_nodes` + requiredNodes: list[NodesModel] = Field( + title="Plugin required", + default_factory=list + ) + # TODO: rename `overrideNodes` to `override_nodes` + overrideNodes: list[NodesModel] = Field( + title="Plugin's node overrides", + default_factory=list + ) + + +def ocio_configs_switcher_enum(): + return [ + {"value": "nuke-default", "label": "nuke-default"}, + {"value": "spi-vfx", "label": "spi-vfx"}, + {"value": "spi-anim", "label": "spi-anim"}, + {"value": "aces_0.1.1", "label": "aces_0.1.1"}, + {"value": "aces_0.7.1", "label": "aces_0.7.1"}, + {"value": "aces_1.0.1", "label": "aces_1.0.1"}, + {"value": "aces_1.0.3", "label": "aces_1.0.3"}, + {"value": "aces_1.1", "label": "aces_1.1"}, + {"value": "aces_1.2", "label": "aces_1.2"}, + {"value": "aces_1.3", "label": "aces_1.3"}, + {"value": "custom", "label": "custom"} + ] + + +class WorkfileColorspaceSettings(BaseSettingsModel): + """Nuke workfile colorspace preset. """ + """# TODO: enhance settings with host api: + we need to add mapping to resolve properly keys. + Nuke is excpecting camel case key names, + but for better code consistency we need to + be using snake_case: + + color_management = colorManagement + ocio_config = OCIO_config + working_space_name = workingSpaceLUT + monitor_name = monitorLut + monitor_out_name = monitorOutLut + int_8_name = int8Lut + int_16_name = int16Lut + log_name = logLut + float_name = floatLut + """ + + colorManagement: Literal["Nuke", "OCIO"] = Field( + title="Color Management" + ) + + OCIO_config: str = Field( + title="OpenColorIO Config", + description="Switch between OCIO configs", + enum_resolver=ocio_configs_switcher_enum, + conditionalEnum=True + ) + + workingSpaceLUT: str = Field( + title="Working Space" + ) + monitorLut: str = Field( + title="Monitor" + ) + int8Lut: str = Field( + title="8-bit files" + ) + int16Lut: str = Field( + title="16-bit files" + ) + logLut: str = Field( + title="Log files" + ) + floatLut: str = Field( + title="Float files" + ) + + +class ReadColorspaceRulesItems(BaseSettingsModel): + _layout = "expanded" + + regex: str = Field("", title="Regex expression") + colorspace: str = Field("", title="Colorspace") + + +class RegexInputsModel(BaseSettingsModel): + inputs: list[ReadColorspaceRulesItems] = Field( + default_factory=list, + title="Inputs" + ) + + +class ViewProcessModel(BaseSettingsModel): + viewerProcess: str = Field( + title="Viewer Process Name" + ) + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIOSettings(BaseSettingsModel): 
+ """Nuke color management project settings. """ + _isGroup: bool = True + + """# TODO: enhance settings with host api: + to restruture settings for simplification. + + now: nuke/imageio/viewer/viewerProcess + future: nuke/imageio/viewer + """ + activate_host_color_management: bool = Field( + True, title="Enable Color Management") + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) + viewer: ViewProcessModel = Field( + default_factory=ViewProcessModel, + title="Viewer", + description="""Viewer profile is used during + Creation of new viewer node at knob viewerProcess""" + ) + + """# TODO: enhance settings with host api: + to restruture settings for simplification. + + now: nuke/imageio/baking/viewerProcess + future: nuke/imageio/baking + """ + baking: ViewProcessModel = Field( + default_factory=ViewProcessModel, + title="Baking", + description="""Baking profile is used during + publishing baked colorspace data at knob viewerProcess""" + ) + + workfile: WorkfileColorspaceSettings = Field( + default_factory=WorkfileColorspaceSettings, + title="Workfile" + ) + + nodes: NodesSetting = Field( + default_factory=NodesSetting, + title="Nodes" + ) + """# TODO: enhance settings with host api: + - old settings are using `regexInputs` key but we + need to rename to `regex_inputs` + - no need for `inputs` middle part. It can stay + directly on `regex_inputs` + """ + regexInputs: RegexInputsModel = Field( + default_factory=RegexInputsModel, + title="Assign colorspace to read nodes via rules" + ) + + +DEFAULT_IMAGEIO_SETTINGS = { + "viewer": { + "viewerProcess": "sRGB" + }, + "baking": { + "viewerProcess": "rec709" + }, + "workfile": { + "colorManagement": "Nuke", + "OCIO_config": "nuke-default", + "workingSpaceLUT": "linear", + "monitorLut": "sRGB", + "int8Lut": "sRGB", + "int16Lut": "sRGB", + "logLut": "Cineon", + "floatLut": "linear" + }, + "nodes": { + "requiredNodes": [ + { + "plugins": [ + "CreateWriteRender" + ], + "nukeNodeClass": "Write", + "knobs": [ + { + "type": "text", + "name": "file_type", + "text": "exr" + }, + { + "type": "text", + "name": "datatype", + "text": "16 bit half" + }, + { + "type": "text", + "name": "compression", + "text": "Zip (1 scanline)" + }, + { + "type": "boolean", + "name": "autocrop", + "boolean": True + }, + { + "type": "color_gui", + "name": "tile_color", + "color_gui": [ + 186, + 35, + 35 + ] + }, + { + "type": "text", + "name": "channels", + "text": "rgb" + }, + { + "type": "text", + "name": "colorspace", + "text": "linear" + }, + { + "type": "boolean", + "name": "create_directories", + "boolean": True + } + ] + }, + { + "plugins": [ + "CreateWritePrerender" + ], + "nukeNodeClass": "Write", + "knobs": [ + { + "type": "text", + "name": "file_type", + "text": "exr" + }, + { + "type": "text", + "name": "datatype", + "text": "16 bit half" + }, + { + "type": "text", + "name": "compression", + "text": "Zip (1 scanline)" + }, + { + "type": "boolean", + "name": "autocrop", + "boolean": True + }, + { + "type": "color_gui", + "name": "tile_color", + "color_gui": [ + 171, + 171, + 10 + ] + }, + { + "type": "text", + "name": "channels", + "text": "rgb" + }, + { + "type": "text", + "name": "colorspace", + "text": "linear" + }, + { + "type": "boolean", + "name": "create_directories", + "boolean": True + } + ] + }, + { + "plugins": [ + "CreateWriteImage" + ], + "nukeNodeClass": "Write", + "knobs": [ + { + "type": 
"text", + "name": "file_type", + "text": "tiff" + }, + { + "type": "text", + "name": "datatype", + "text": "16 bit" + }, + { + "type": "text", + "name": "compression", + "text": "Deflate" + }, + { + "type": "color_gui", + "name": "tile_color", + "color_gui": [ + 56, + 162, + 7 + ] + }, + { + "type": "text", + "name": "channels", + "text": "rgb" + }, + { + "type": "text", + "name": "colorspace", + "text": "sRGB" + }, + { + "type": "boolean", + "name": "create_directories", + "boolean": True + } + ] + } + ], + "overrideNodes": [] + }, + "regexInputs": { + "inputs": [ + { + "regex": "(beauty).*(?=.exr)", + "colorspace": "linear" + } + ] + } +} diff --git a/server_addon/nuke/server/settings/loader_plugins.py b/server_addon/nuke/server/settings/loader_plugins.py new file mode 100644 index 00000000000..6db381bffb8 --- /dev/null +++ b/server_addon/nuke/server/settings/loader_plugins.py @@ -0,0 +1,80 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class LoadImageModel(BaseSettingsModel): + enabled: bool = Field( + title="Enabled" + ) + """# TODO: v3 api used `_representation` + New api is hiding it so it had to be renamed + to `representations_include` + """ + representations_include: list[str] = Field( + default_factory=list, + title="Include representations" + ) + + node_name_template: str = Field( + title="Read node name template" + ) + + +class LoadClipOptionsModel(BaseSettingsModel): + start_at_workfile: bool = Field( + title="Start at workfile's start frame" + ) + add_retime: bool = Field( + title="Add retime" + ) + + +class LoadClipModel(BaseSettingsModel): + enabled: bool = Field( + title="Enabled" + ) + """# TODO: v3 api used `_representation` + New api is hiding it so it had to be renamed + to `representations_include` + """ + representations_include: list[str] = Field( + default_factory=list, + title="Include representations" + ) + + node_name_template: str = Field( + title="Read node name template" + ) + options_defaults: LoadClipOptionsModel = Field( + default_factory=LoadClipOptionsModel, + title="Loader option defaults" + ) + + +class LoaderPuginsModel(BaseSettingsModel): + LoadImage: LoadImageModel = Field( + default_factory=LoadImageModel, + title="Load Image" + ) + LoadClip: LoadClipModel = Field( + default_factory=LoadClipModel, + title="Load Clip" + ) + + +DEFAULT_LOADER_PLUGINS_SETTINGS = { + "LoadImage": { + "enabled": True, + "representations_include": [], + "node_name_template": "{class_name}_{ext}" + }, + "LoadClip": { + "enabled": True, + "representations_include": [], + "node_name_template": "{class_name}_{ext}", + "options_defaults": { + "start_at_workfile": True, + "add_retime": True + } + } +} diff --git a/server_addon/nuke/server/settings/main.py b/server_addon/nuke/server/settings/main.py new file mode 100644 index 00000000000..4687d48ac91 --- /dev/null +++ b/server_addon/nuke/server/settings/main.py @@ -0,0 +1,128 @@ +from pydantic import validator, Field + +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names +) + +from .general import ( + GeneralSettings, + DEFAULT_GENERAL_SETTINGS +) +from .imageio import ( + ImageIOSettings, + DEFAULT_IMAGEIO_SETTINGS +) +from .dirmap import ( + DirmapSettings, + DEFAULT_DIRMAP_SETTINGS +) +from .scriptsmenu import ( + ScriptsmenuSettings, + DEFAULT_SCRIPTSMENU_SETTINGS +) +from .gizmo import ( + GizmoItem, + DEFAULT_GIZMO_ITEM +) +from .create_plugins import ( + CreatorPluginsSettings, + DEFAULT_CREATE_SETTINGS +) +from .publish_plugins import ( + 
PublishPuginsModel, + DEFAULT_PUBLISH_PLUGIN_SETTINGS +) +from .loader_plugins import ( + LoaderPuginsModel, + DEFAULT_LOADER_PLUGINS_SETTINGS +) +from .workfile_builder import ( + WorkfileBuilderModel, + DEFAULT_WORKFILE_BUILDER_SETTINGS +) +from .templated_workfile_build import ( + TemplatedWorkfileBuildModel +) +from .filters import PublishGUIFilterItemModel + + +class NukeSettings(BaseSettingsModel): + """Nuke addon settings.""" + + general: GeneralSettings = Field( + default_factory=GeneralSettings, + title="General", + ) + + imageio: ImageIOSettings = Field( + default_factory=ImageIOSettings, + title="Color Management (imageio)", + ) + """# TODO: fix host api: + - rename `nuke-dirmap` to `dirmap` was inevitable + """ + dirmap: DirmapSettings = Field( + default_factory=DirmapSettings, + title="Nuke Directory Mapping", + ) + + scriptsmenu: ScriptsmenuSettings = Field( + default_factory=ScriptsmenuSettings, + title="Scripts Menu Definition", + ) + + gizmo: list[GizmoItem] = Field( + default_factory=list, title="Gizmo Menu") + + create: CreatorPluginsSettings = Field( + default_factory=CreatorPluginsSettings, + title="Creator Plugins", + ) + + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish Plugins", + ) + + load: LoaderPuginsModel = Field( + default_factory=LoaderPuginsModel, + title="Loader Plugins", + ) + + workfile_builder: WorkfileBuilderModel = Field( + default_factory=WorkfileBuilderModel, + title="Workfile Builder", + ) + + templated_workfile_build: TemplatedWorkfileBuildModel = Field( + title="Templated Workfile Build", + default_factory=TemplatedWorkfileBuildModel + ) + + filters: list[PublishGUIFilterItemModel] = Field( + default_factory=list + ) + + @validator("filters") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +DEFAULT_VALUES = { + "general": DEFAULT_GENERAL_SETTINGS, + "imageio": DEFAULT_IMAGEIO_SETTINGS, + "dirmap": DEFAULT_DIRMAP_SETTINGS, + "scriptsmenu": DEFAULT_SCRIPTSMENU_SETTINGS, + "gizmo": [DEFAULT_GIZMO_ITEM], + "create": DEFAULT_CREATE_SETTINGS, + "publish": DEFAULT_PUBLISH_PLUGIN_SETTINGS, + "load": DEFAULT_LOADER_PLUGINS_SETTINGS, + "workfile_builder": DEFAULT_WORKFILE_BUILDER_SETTINGS, + "templated_workfile_build": { + "profiles": [] + }, + "filters": [] +} diff --git a/server_addon/nuke/server/settings/publish_plugins.py b/server_addon/nuke/server/settings/publish_plugins.py new file mode 100644 index 00000000000..7e898f8c9a2 --- /dev/null +++ b/server_addon/nuke/server/settings/publish_plugins.py @@ -0,0 +1,504 @@ +from pydantic import validator, Field +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, + task_types_enum +) +from .common import KnobModel, validate_json_dict + + +def nuke_render_publish_types_enum(): + """Return all nuke render families available in creators.""" + return [ + {"value": "render", "label": "Render"}, + {"value": "prerender", "label": "Prerender"}, + {"value": "image", "label": "Image"} + ] + + +def nuke_product_types_enum(): + """Return all nuke families available in creators.""" + return [ + {"value": "nukenodes", "label": "Nukenodes"}, + {"value": "model", "label": "Model"}, + {"value": "camera", "label": "Camera"}, + {"value": "gizmo", "label": "Gizmo"}, + {"value": "source", "label": "Source"} + ] + nuke_render_publish_types_enum() + + +class NodeModel(BaseSettingsModel): + # TODO: missing in host api + name: str = Field( + title="Node name" + ) + 
# TODO: rename `nodeclass` to `nuke_node_class`
+    nodeclass: str = Field(
+        "",
+        title="Node class"
+    )
+    dependent: str = Field(
+        "",
+        title="Incoming dependency"
+    )
+    """# TODO: Changes in host api:
+    - Needs a complete rework of knob types in the nuke integration.
+    - We could not support the v3 style of settings.
+    """
+    knobs: list[KnobModel] = Field(
+        title="Knobs",
+    )
+
+    @validator("knobs")
+    def ensure_unique_names(cls, value):
+        """Ensure name fields within the lists have unique names."""
+        ensure_unique_names(value)
+        return value
+
+
+class ThumbnailRepositionNodeModel(BaseSettingsModel):
+    node_class: str = Field(title="Node class")
+    knobs: list[KnobModel] = Field(title="Knobs", default_factory=list)
+
+    @validator("knobs")
+    def ensure_unique_names(cls, value):
+        """Ensure name fields within the lists have unique names."""
+        ensure_unique_names(value)
+        return value
+
+
+class CollectInstanceDataModel(BaseSettingsModel):
+    sync_workfile_version_on_product_types: list[str] = Field(
+        default_factory=list,
+        enum_resolver=nuke_product_types_enum,
+        title="Sync workfile versions for families"
+    )
+
+
+class OptionalPluginModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    optional: bool = Field(title="Optional")
+    active: bool = Field(title="Active")
+
+
+class ValidateKnobsModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    knobs: str = Field(
+        "{}",
+        title="Knobs",
+        widget="textarea",
+    )
+
+    @validator("knobs")
+    def validate_json(cls, value):
+        return validate_json_dict(value)
+
+
+class ExtractThumbnailModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    use_rendered: bool = Field(title="Use rendered images")
+    bake_viewer_process: bool = Field(title="Bake view process")
+    bake_viewer_input_process: bool = Field(title="Bake viewer input process")
+    """# TODO: needs a rewrite from v3 to ayon
+    - `nodes` in v3 was a dict but now `prenodes` is a list of dicts
+    - also later `nodes` should become `prenodes`
+    """
+
+    nodes: list[NodeModel] = Field(
+        title="Nodes (deprecated)"
+    )
+    reposition_nodes: list[ThumbnailRepositionNodeModel] = Field(
+        title="Reposition nodes",
+        default_factory=list
+    )
+
+
+class ExtractReviewDataModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+
+
+class ExtractReviewDataLutModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+
+
+class BakingStreamFilterModel(BaseSettingsModel):
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    product_types: list[str] = Field(
+        default_factory=list,
+        enum_resolver=nuke_render_publish_types_enum,
+        title="Product types"
+    )
+    product_names: list[str] = Field(
+        default_factory=list, title="Product names")
+
+
+class ReformatNodesRepositionNodes(BaseSettingsModel):
+    node_class: str = Field(title="Node class")
+    knobs: list[KnobModel] = Field(
+        default_factory=list,
+        title="Node knobs")
+
+
+class ReformatNodesConfigModel(BaseSettingsModel):
+    """Only reposition nodes are supported.
+
+    You can add multiple reformat nodes and set their knobs.
+    The order of reformat nodes matters: they are applied in
+    the order listed, first to last.
+ """ + enabled: bool = Field(False) + reposition_nodes: list[ReformatNodesRepositionNodes] = Field( + default_factory=list, + title="Reposition knobs" + ) + + +class BakingStreamModel(BaseSettingsModel): + name: str = Field(title="Output name") + filter: BakingStreamFilterModel = Field( + title="Filter", default_factory=BakingStreamFilterModel) + read_raw: bool = Field(title="Read raw switch") + viewer_process_override: str = Field(title="Viewer process override") + bake_viewer_process: bool = Field(title="Bake view process") + bake_viewer_input_process: bool = Field(title="Bake viewer input process") + reformat_nodes_config: ReformatNodesConfigModel = Field( + default_factory=ReformatNodesConfigModel, + title="Reformat Nodes") + extension: str = Field(title="File extension") + add_custom_tags: list[str] = Field( + title="Custom tags", default_factory=list) + + +class ExtractReviewDataMovModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + viewer_lut_raw: bool = Field(title="Viewer lut raw") + outputs: list[BakingStreamModel] = Field( + title="Baking streams" + ) + + +class FSubmissionNoteModel(BaseSettingsModel): + enabled: bool = Field(title="enabled") + template: str = Field(title="Template") + + +class FSubmistingForModel(BaseSettingsModel): + enabled: bool = Field(title="enabled") + template: str = Field(title="Template") + + +class FVFXScopeOfWorkModel(BaseSettingsModel): + enabled: bool = Field(title="enabled") + template: str = Field(title="Template") + + +class ExctractSlateFrameParamModel(BaseSettingsModel): + f_submission_note: FSubmissionNoteModel = Field( + title="f_submission_note", + default_factory=FSubmissionNoteModel + ) + f_submitting_for: FSubmistingForModel = Field( + title="f_submitting_for", + default_factory=FSubmistingForModel + ) + f_vfx_scope_of_work: FVFXScopeOfWorkModel = Field( + title="f_vfx_scope_of_work", + default_factory=FVFXScopeOfWorkModel + ) + + +class ExtractSlateFrameModel(BaseSettingsModel): + viewer_lut_raw: bool = Field(title="Viewer lut raw") + """# TODO: v3 api different model: + - not possible to replicate v3 model: + {"name": [bool, str]} + - not it is: + {"name": {"enabled": bool, "template": str}} + """ + key_value_mapping: ExctractSlateFrameParamModel = Field( + title="Key value mapping", + default_factory=ExctractSlateFrameParamModel + ) + + +class IncrementScriptVersionModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class PublishPuginsModel(BaseSettingsModel): + CollectInstanceData: CollectInstanceDataModel = Field( + title="Collect Instance Version", + default_factory=CollectInstanceDataModel, + section="Collectors" + ) + ValidateCorrectAssetName: OptionalPluginModel = Field( + title="Validate Correct Folder Name", + default_factory=OptionalPluginModel, + section="Validators" + ) + ValidateContainers: OptionalPluginModel = Field( + title="Validate Containers", + default_factory=OptionalPluginModel + ) + ValidateKnobs: ValidateKnobsModel = Field( + title="Validate Knobs", + default_factory=ValidateKnobsModel + ) + ValidateOutputResolution: OptionalPluginModel = Field( + title="Validate Output Resolution", + default_factory=OptionalPluginModel + ) + ValidateGizmo: OptionalPluginModel = Field( + title="Validate Gizmo", + default_factory=OptionalPluginModel + ) + ValidateBackdrop: OptionalPluginModel = Field( + title="Validate Backdrop", + default_factory=OptionalPluginModel + ) + ValidateScript: OptionalPluginModel = 
Field( + title="Validate Script", + default_factory=OptionalPluginModel + ) + ExtractThumbnail: ExtractThumbnailModel = Field( + title="Extract Thumbnail", + default_factory=ExtractThumbnailModel, + section="Extractors" + ) + ExtractReviewData: ExtractReviewDataModel = Field( + title="Extract Review Data", + default_factory=ExtractReviewDataModel + ) + ExtractReviewDataLut: ExtractReviewDataLutModel = Field( + title="Extract Review Data Lut", + default_factory=ExtractReviewDataLutModel + ) + ExtractReviewDataMov: ExtractReviewDataMovModel = Field( + title="Extract Review Data Mov", + default_factory=ExtractReviewDataMovModel + ) + ExtractSlateFrame: ExtractSlateFrameModel = Field( + title="Extract Slate Frame", + default_factory=ExtractSlateFrameModel + ) + # TODO: plugin should be renamed - `workfile` not `script` + IncrementScriptVersion: IncrementScriptVersionModel = Field( + title="Increment Workfile Version", + default_factory=IncrementScriptVersionModel, + section="Integrators" + ) + + +DEFAULT_PUBLISH_PLUGIN_SETTINGS = { + "CollectInstanceData": { + "sync_workfile_version_on_product_types": [ + "nukenodes", + "camera", + "gizmo", + "source", + "render", + "write" + ] + }, + "ValidateCorrectAssetName": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateKnobs": { + "enabled": False, + "knobs": "\n".join([ + '{', + ' "render": {', + ' "review": true', + ' }', + '}' + ]) + }, + "ValidateOutputResolution": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateGizmo": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateBackdrop": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateScript": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractThumbnail": { + "enabled": True, + "use_rendered": True, + "bake_viewer_process": True, + "bake_viewer_input_process": True, + "nodes": [ + { + "name": "Reformat01", + "nodeclass": "Reformat", + "dependency": "", + "knobs": [ + { + "type": "text", + "name": "type", + "text": "to format" + }, + { + "type": "text", + "name": "format", + "text": "HD_1080" + }, + { + "type": "text", + "name": "filter", + "text": "Lanczos6" + }, + { + "type": "boolean", + "name": "black_outside", + "boolean": True + }, + { + "type": "boolean", + "name": "pbb", + "boolean": False + } + ] + } + ], + "reposition_nodes": [ + { + "node_class": "Reformat", + "knobs": [ + { + "type": "text", + "name": "type", + "text": "to format" + }, + { + "type": "text", + "name": "format", + "text": "HD_1080" + }, + { + "type": "text", + "name": "filter", + "text": "Lanczos6" + }, + { + "type": "bool", + "name": "black_outside", + "boolean": True + }, + { + "type": "bool", + "name": "pbb", + "boolean": False + } + ] + } + ] + }, + "ExtractReviewData": { + "enabled": False + }, + "ExtractReviewDataLut": { + "enabled": False + }, + "ExtractReviewDataMov": { + "enabled": True, + "viewer_lut_raw": False, + "outputs": [ + { + "name": "baking", + "filter": { + "task_types": [], + "product_types": [], + "product_names": [] + }, + "read_raw": False, + "viewer_process_override": "", + "bake_viewer_process": True, + "bake_viewer_input_process": True, + "reformat_nodes_config": { + "enabled": False, + "reposition_nodes": [ + { + "node_class": "Reformat", + "knobs": [ + { + "type": "text", + "name": "type", + "text": "to format" + }, + { + "type": "text", + "name": "format", + "text": "HD_1080" + }, + { + "type": 
"text", + "name": "filter", + "text": "Lanczos6" + }, + { + "type": "bool", + "name": "black_outside", + "boolean": True + }, + { + "type": "bool", + "name": "pbb", + "boolean": False + } + ] + } + ] + }, + "extension": "mov", + "add_custom_tags": [] + } + ] + }, + "ExtractSlateFrame": { + "viewer_lut_raw": False, + "key_value_mapping": { + "f_submission_note": { + "enabled": True, + "template": "{comment}" + }, + "f_submitting_for": { + "enabled": True, + "template": "{intent[value]}" + }, + "f_vfx_scope_of_work": { + "enabled": False, + "template": "" + } + } + }, + "IncrementScriptVersion": { + "enabled": True, + "optional": True, + "active": True + } +} diff --git a/server_addon/nuke/server/settings/scriptsmenu.py b/server_addon/nuke/server/settings/scriptsmenu.py new file mode 100644 index 00000000000..9d1c32ebac3 --- /dev/null +++ b/server_addon/nuke/server/settings/scriptsmenu.py @@ -0,0 +1,54 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class ScriptsmenuSubmodel(BaseSettingsModel): + """Item Definition""" + _isGroup = True + + type: str = Field(title="Type") + command: str = Field(title="Command") + sourcetype: str = Field(title="Source Type") + title: str = Field(title="Title") + tooltip: str = Field(title="Tooltip") + + +class ScriptsmenuSettings(BaseSettingsModel): + """Nuke script menu project settings.""" + _isGroup = True + + # TODO: in api rename key `name` to `menu_name` + name: str = Field(title="Menu Name") + definition: list[ScriptsmenuSubmodel] = Field( + default_factory=list, + title="Definition", + description="Scriptmenu Items Definition" + ) + + +DEFAULT_SCRIPTSMENU_SETTINGS = { + "name": "OpenPype Tools", + "definition": [ + { + "type": "action", + "sourcetype": "python", + "title": "OpenPype Docs", + "command": "import webbrowser;webbrowser.open(url='https://openpype.io/docs/artist_hosts_nuke_tut')", + "tooltip": "Open the OpenPype Nuke user doc page" + }, + { + "type": "action", + "sourcetype": "python", + "title": "Set Frame Start (Read Node)", + "command": "from openpype.hosts.nuke.startup.frame_setting_for_read_nodes import main;main();", + "tooltip": "Set frame start for read node(s)" + }, + { + "type": "action", + "sourcetype": "python", + "title": "Set non publish output for Write Node", + "command": "from openpype.hosts.nuke.startup.custom_write_node import main;main();", + "tooltip": "Open the OpenPype Nuke user doc page" + } + ] +} diff --git a/server_addon/nuke/server/settings/templated_workfile_build.py b/server_addon/nuke/server/settings/templated_workfile_build.py new file mode 100644 index 00000000000..e0245c8d069 --- /dev/null +++ b/server_addon/nuke/server/settings/templated_workfile_build.py @@ -0,0 +1,33 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + task_types_enum, +) + + +class TemplatedWorkfileProfileModel(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + path: str = Field( + title="Path to template" + ) + keep_placeholder: bool = Field( + False, + title="Keep placeholders") + create_first_version: bool = Field( + True, + title="Create first version" + ) + + +class TemplatedWorkfileBuildModel(BaseSettingsModel): + profiles: list[TemplatedWorkfileProfileModel] = Field( + default_factory=list + ) diff --git a/server_addon/nuke/server/settings/workfile_builder.py 
b/server_addon/nuke/server/settings/workfile_builder.py
new file mode 100644
index 00000000000..ee67c7c16af
--- /dev/null
+++ b/server_addon/nuke/server/settings/workfile_builder.py
@@ -0,0 +1,72 @@
+from pydantic import Field
+from ayon_server.settings import (
+    BaseSettingsModel,
+    task_types_enum,
+    MultiplatformPathModel,
+)
+
+
+class CustomTemplateModel(BaseSettingsModel):
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    path: MultiplatformPathModel = Field(
+        default_factory=MultiplatformPathModel,
+        title="Path to template"
+    )
+
+
+class BuilderProfileItemModel(BaseSettingsModel):
+    product_name_filters: list[str] = Field(
+        default_factory=list,
+        title="Product name"
+    )
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    repre_names: list[str] = Field(
+        default_factory=list,
+        title="Representations"
+    )
+    loaders: list[str] = Field(
+        default_factory=list,
+        title="Loader plugins"
+    )
+
+
+class BuilderProfileModel(BaseSettingsModel):
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    tasks: list[str] = Field(
+        default_factory=list,
+        title="Task names"
+    )
+    current_context: list[BuilderProfileItemModel] = Field(
+        title="Current context")
+    linked_assets: list[BuilderProfileItemModel] = Field(
+        title="Linked assets/shots")
+
+
+class WorkfileBuilderModel(BaseSettingsModel):
+    create_first_version: bool = Field(
+        title="Create first workfile")
+    custom_templates: list[CustomTemplateModel] = Field(
+        title="Custom templates")
+    builder_on_start: bool = Field(
+        title="Run Builder at first workfile")
+    profiles: list[BuilderProfileModel] = Field(
+        title="Builder profiles")
+
+
+DEFAULT_WORKFILE_BUILDER_SETTINGS = {
+    "create_first_version": False,
+    "custom_templates": [],
+    "builder_on_start": False,
+    "profiles": []
+}
diff --git a/server_addon/nuke/server/version.py b/server_addon/nuke/server/version.py
new file mode 100644
index 00000000000..b3f4756216d
--- /dev/null
+++ b/server_addon/nuke/server/version.py
@@ -0,0 +1 @@
+__version__ = "0.1.2"
diff --git a/server_addon/client/pyproject.toml b/server_addon/openpype/client/pyproject.toml
similarity index 100%
rename from server_addon/client/pyproject.toml
rename to server_addon/openpype/client/pyproject.toml
diff --git a/server_addon/server/__init__.py b/server_addon/openpype/server/__init__.py
similarity index 100%
rename from server_addon/server/__init__.py
rename to server_addon/openpype/server/__init__.py
diff --git a/server_addon/photoshop/LICENSE b/server_addon/photoshop/LICENSE
new file mode 100644
index 00000000000..d6456956733
--- /dev/null
+++ b/server_addon/photoshop/LICENSE
@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity.
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
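Reviewer note on the Nuke settings files above: each settings module pairs a pydantic model with a plain `DEFAULT_*` dict, and the two only meet at runtime when `get_default_settings()` instantiates the model, so a mistyped or missing key in a defaults dict surfaces only once the addon is loaded on the server. A minimal sanity-check sketch, assuming the repository layout shown in the file paths is importable as a package and that `ayon_server` is available in the test environment:

    # Hypothetical test, not part of this PR: validate the Nuke defaults
    # against the NukeSettings model, mirroring what
    # BaseServerAddon.get_default_settings() does on the server.
    from server_addon.nuke.server.settings.main import (
        NukeSettings,
        DEFAULT_VALUES,
    )


    def test_nuke_defaults_validate():
        # Pydantic raises ValidationError if any default key or value
        # does not fit the model.
        settings = NukeSettings(**DEFAULT_VALUES)
        # Spot-check one nested value; the "viewerProcess" key is taken
        # from DEFAULT_IMAGEIO_SETTINGS above.
        assert settings.dict()["imageio"]["viewer"]["viewerProcess"] == "sRGB"

The same check applies to every addon in this PR that ships a defaults dict.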
diff --git a/server_addon/photoshop/README.md b/server_addon/photoshop/README.md
new file mode 100644
index 00000000000..2d1e1c745c9
--- /dev/null
+++ b/server_addon/photoshop/README.md
@@ -0,0 +1,4 @@
+Photoshop Addon
+===============
+
+Integration with Adobe Photoshop.
diff --git a/server_addon/photoshop/server/__init__.py b/server_addon/photoshop/server/__init__.py
new file mode 100644
index 00000000000..3a45f7a809d
--- /dev/null
+++ b/server_addon/photoshop/server/__init__.py
@@ -0,0 +1,16 @@
+from ayon_server.addons import BaseServerAddon
+
+from .settings import PhotoshopSettings, DEFAULT_PHOTOSHOP_SETTING
+from .version import __version__
+
+
+class Photoshop(BaseServerAddon):
+    name = "photoshop"
+    title = "Photoshop"
+    version = __version__
+
+    settings_model = PhotoshopSettings
+
+    async def get_default_settings(self):
+        settings_model_cls = self.get_settings_model()
+        return settings_model_cls(**DEFAULT_PHOTOSHOP_SETTING)
diff --git a/server_addon/photoshop/server/settings/__init__.py b/server_addon/photoshop/server/settings/__init__.py
new file mode 100644
index 00000000000..9ae5764362d
--- /dev/null
+++ b/server_addon/photoshop/server/settings/__init__.py
@@ -0,0 +1,10 @@
+from .main import (
+    PhotoshopSettings,
+    DEFAULT_PHOTOSHOP_SETTING,
+)
+
+
+__all__ = (
+    "PhotoshopSettings",
+    "DEFAULT_PHOTOSHOP_SETTING",
+)
diff --git a/server_addon/photoshop/server/settings/creator_plugins.py b/server_addon/photoshop/server/settings/creator_plugins.py
new file mode 100644
index 00000000000..2fe63a7e3a9
--- /dev/null
+++ b/server_addon/photoshop/server/settings/creator_plugins.py
@@ -0,0 +1,79 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+
+class CreateImagePluginModel(BaseSettingsModel):
+    enabled: bool = Field(True, title="Enabled")
+    active_on_create: bool = Field(True, title="Active by default")
+    mark_for_review: bool = Field(False, title="Review by default")
+    default_variants: list[str] = Field(
+        default_factory=list,
+        title="Default Variants"
+    )
+
+
+class AutoImageCreatorPluginModel(BaseSettingsModel):
+    enabled: bool = Field(False, title="Enabled")
+    active_on_create: bool = Field(True, title="Active by default")
+    mark_for_review: bool = Field(False, title="Review by default")
+    default_variant: str = Field("", title="Default Variant")
+
+
+class CreateReviewPlugin(BaseSettingsModel):
+    enabled: bool = Field(True, title="Enabled")
+    active_on_create: bool = Field(True, title="Active by default")
+    default_variant: str = Field("", title="Default Variant")
+
+
+class CreateWorkfilelugin(BaseSettingsModel):
+    enabled: bool = Field(True, title="Enabled")
+    active_on_create: bool = Field(True, title="Active by default")
+    default_variant: str = Field("", title="Default Variant")
+
+
+class PhotoshopCreatorPlugins(BaseSettingsModel):
+    ImageCreator: CreateImagePluginModel = Field(
+        title="Create Image",
+        default_factory=CreateImagePluginModel,
+    )
+    AutoImageCreator: AutoImageCreatorPluginModel = Field(
+        title="Create Flatten Image",
+        default_factory=AutoImageCreatorPluginModel,
+    )
+    ReviewCreator: CreateReviewPlugin = Field(
+        title="Create Review",
+        default_factory=CreateReviewPlugin,
+    )
+    WorkfileCreator: CreateWorkfilelugin = Field(
+        title="Create Workfile",
+        default_factory=CreateWorkfilelugin,
+    )
+
+
+DEFAULT_CREATE_SETTINGS = {
+    "ImageCreator": {
+        "enabled": True,
+        "active_on_create": True,
+        "mark_for_review": False,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "AutoImageCreator": {
+        "enabled": False,
+
"active_on_create": True, + "mark_for_review": False, + "default_variant": "" + }, + "ReviewCreator": { + "enabled": True, + "active_on_create": True, + "default_variant": "" + }, + "WorkfileCreator": { + "enabled": True, + "active_on_create": True, + "default_variant": "Main" + } +} diff --git a/server_addon/photoshop/server/settings/imageio.py b/server_addon/photoshop/server/settings/imageio.py new file mode 100644 index 00000000000..56b7f2fa328 --- /dev/null +++ b/server_addon/photoshop/server/settings/imageio.py @@ -0,0 +1,64 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIORemappingRulesModel(BaseSettingsModel): + host_native_name: str = Field( + title="Application native colorspace name" + ) + ocio_name: str = Field(title="OCIO colorspace name") + + +class ImageIORemappingModel(BaseSettingsModel): + rules: list[ImageIORemappingRulesModel] = Field( + default_factory=list) + + +class PhotoshopImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + remapping: ImageIORemappingModel = Field( + title="Remapping colorspace names", + default_factory=ImageIORemappingModel + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/photoshop/server/settings/main.py b/server_addon/photoshop/server/settings/main.py new file mode 100644 index 00000000000..ae7705b3dbb --- /dev/null +++ b/server_addon/photoshop/server/settings/main.py @@ -0,0 +1,41 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from .imageio import PhotoshopImageIOModel +from .creator_plugins import PhotoshopCreatorPlugins, DEFAULT_CREATE_SETTINGS +from .publish_plugins import PhotoshopPublishPlugins, DEFAULT_PUBLISH_SETTINGS +from .workfile_builder import WorkfileBuilderPlugin + + +class PhotoshopSettings(BaseSettingsModel): + """Photoshop Project Settings.""" + + imageio: PhotoshopImageIOModel = Field( + default_factory=PhotoshopImageIOModel, + title="OCIO config" + ) + + create: PhotoshopCreatorPlugins = Field( + default_factory=PhotoshopCreatorPlugins, + title="Creator plugins" + ) + + publish: PhotoshopPublishPlugins = Field( + default_factory=PhotoshopPublishPlugins, + title="Publish plugins" + ) + + workfile_builder: WorkfileBuilderPlugin = Field( + default_factory=WorkfileBuilderPlugin, + title="Workfile Builder" + ) + + +DEFAULT_PHOTOSHOP_SETTING = { + "create": DEFAULT_CREATE_SETTINGS, + "publish": DEFAULT_PUBLISH_SETTINGS, + "workfile_builder": { + 
"create_first_version": False, + "custom_templates": [] + } +} diff --git a/server_addon/photoshop/server/settings/publish_plugins.py b/server_addon/photoshop/server/settings/publish_plugins.py new file mode 100644 index 00000000000..6bc72b40728 --- /dev/null +++ b/server_addon/photoshop/server/settings/publish_plugins.py @@ -0,0 +1,221 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +create_flatten_image_enum = [ + {"value": "flatten_with_images", "label": "Flatten with images"}, + {"value": "flatten_only", "label": "Flatten only"}, + {"value": "no", "label": "No"}, +] + + +color_code_enum = [ + {"value": "red", "label": "Red"}, + {"value": "orange", "label": "Orange"}, + {"value": "yellowColor", "label": "Yellow"}, + {"value": "grain", "label": "Green"}, + {"value": "blue", "label": "Blue"}, + {"value": "violet", "label": "Violet"}, + {"value": "gray", "label": "Gray"}, +] + + +class ColorCodeMappings(BaseSettingsModel): + color_code: list[str] = Field( + title="Color codes for layers", + default_factory=list, + enum_resolver=lambda: color_code_enum, + ) + + layer_name_regex: list[str] = Field( + "", + title="Layer name regex" + ) + + product_type: str = Field( + "", + title="Resulting product type" + ) + + product_name_template: str = Field( + "", + title="Product name template" + ) + + +class ExtractedOptions(BaseSettingsModel): + tags: list[str] = Field( + title="Tags", + default_factory=list + ) + + +class CollectColorCodedInstancesPlugin(BaseSettingsModel): + """Set color for publishable layers, set its resulting product type + and template for product name. \n Can create flatten image from published + instances. + (Applicable only for remote publishing!)""" + + enabled: bool = Field(True, title="Enabled") + create_flatten_image: str = Field( + "", + title="Create flatten image", + enum_resolver=lambda: create_flatten_image_enum, + ) + + flatten_product_type_template: str = Field( + "", + title="Subset template for flatten image" + ) + + color_code_mapping: list[ColorCodeMappings] = Field( + title="Color code mappings", + default_factory=ColorCodeMappings, + ) + + +class CollectReviewPlugin(BaseSettingsModel): + """Should review product be created""" + enabled: bool = Field(True, title="Enabled") + + +class CollectVersionPlugin(BaseSettingsModel): + """Synchronize version for image and review instances by workfile version""" # noqa + enabled: bool = Field(True, title="Enabled") + + +class ValidateContainersPlugin(BaseSettingsModel): + """Check that workfile contains latest version of loaded items""" # noqa + _isGroup = True + enabled: bool = True + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + + +class ValidateNamingPlugin(BaseSettingsModel): + """Validate naming of products and layers""" # noqa + invalid_chars: str = Field( + '', + title="Regex pattern of invalid characters" + ) + + replace_char: str = Field( + '', + title="Replacement character" + ) + + +class ExtractImagePlugin(BaseSettingsModel): + """Currently only jpg and png are supported""" + formats: list[str] = Field( + title="Extract Formats", + default_factory=list, + ) + + +class ExtractReviewPlugin(BaseSettingsModel): + make_image_sequence: bool = Field( + False, + title="Make an image sequence instead of flatten image" + ) + + max_downscale_size: int = Field( + 8192, + title="Maximum size of sources for review", + description="FFMpeg can only handle limited resolution for creation of review and/or thumbnail", # noqa + gt=300, # 
greater than
+        le=16384,  # less or equal
+    )
+
+    jpg_options: ExtractedOptions = Field(
+        title="Extracted jpg Options",
+        default_factory=ExtractedOptions
+    )
+
+    mov_options: ExtractedOptions = Field(
+        title="Extracted mov Options",
+        default_factory=ExtractedOptions
+    )
+
+
+class PhotoshopPublishPlugins(BaseSettingsModel):
+    CollectColorCodedInstances: CollectColorCodedInstancesPlugin = Field(
+        title="Collect Color Coded Instances",
+        default_factory=CollectColorCodedInstancesPlugin,
+    )
+    CollectReview: CollectReviewPlugin = Field(
+        title="Collect Review",
+        default_factory=CollectReviewPlugin,
+    )
+
+    CollectVersion: CollectVersionPlugin = Field(
+        title="Collect Version",
+        default_factory=CollectVersionPlugin,
+    )
+
+    ValidateContainers: ValidateContainersPlugin = Field(
+        title="Validate Containers",
+        default_factory=ValidateContainersPlugin,
+    )
+
+    ValidateNaming: ValidateNamingPlugin = Field(
+        title="Validate naming of products and layers",
+        default_factory=ValidateNamingPlugin,
+    )
+
+    ExtractImage: ExtractImagePlugin = Field(
+        title="Extract Image",
+        default_factory=ExtractImagePlugin,
+    )
+
+    ExtractReview: ExtractReviewPlugin = Field(
+        title="Extract Review",
+        default_factory=ExtractReviewPlugin,
+    )
+
+
+DEFAULT_PUBLISH_SETTINGS = {
+    "CollectColorCodedInstances": {
+        "create_flatten_image": "no",
+        "flatten_product_type_template": "",
+        "color_code_mapping": []
+    },
+    "CollectReview": {
+        "enabled": True
+    },
+    "CollectVersion": {
+        "enabled": False
+    },
+    "ValidateContainers": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateNaming": {
+        "invalid_chars": "[ \\\\/+\\*\\?\\(\\)\\[\\]\\{\\}:,;]",
+        "replace_char": "_"
+    },
+    "ExtractImage": {
+        "formats": [
+            "png",
+            "jpg"
+        ]
+    },
+    "ExtractReview": {
+        "make_image_sequence": False,
+        "max_downscale_size": 8192,
+        "jpg_options": {
+            "tags": [
+                "review",
+                "ftrackreview"
+            ]
+        },
+        "mov_options": {
+            "tags": [
+                "review",
+                "ftrackreview"
+            ]
+        }
+    }
+}
diff --git a/server_addon/photoshop/server/settings/workfile_builder.py b/server_addon/photoshop/server/settings/workfile_builder.py
new file mode 100644
index 00000000000..ec2ee136ad9
--- /dev/null
+++ b/server_addon/photoshop/server/settings/workfile_builder.py
@@ -0,0 +1,41 @@
+from pydantic import Field
+from pathlib import Path
+
+from ayon_server.settings import BaseSettingsModel
+
+
+class PathsTemplate(BaseSettingsModel):
+    windows: Path = Field(
+        '',
+        title="Windows"
+    )
+    darwin: Path = Field(
+        '',
+        title="MacOS"
+    )
+    linux: Path = Field(
+        '',
+        title="Linux"
+    )
+
+
+class CustomBuilderTemplate(BaseSettingsModel):
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+    )
+    template_path: PathsTemplate = Field(
+        default_factory=PathsTemplate
+    )
+
+
+class WorkfileBuilderPlugin(BaseSettingsModel):
+    _title = "Workfile Builder"
+    create_first_version: bool = Field(
+        False,
+        title="Create first workfile"
+    )
+
+    custom_templates: list[CustomBuilderTemplate] = Field(
+        default_factory=list
+    )
diff --git a/server_addon/photoshop/server/version.py b/server_addon/photoshop/server/version.py
new file mode 100644
index 00000000000..d4b9e2d7f35
--- /dev/null
+++ b/server_addon/photoshop/server/version.py
@@ -0,0 +1,3 @@
+# -*- coding: utf-8 -*-
+"""Package declaring addon version."""
+__version__ = "0.1.0"
diff --git a/server_addon/resolve/server/__init__.py b/server_addon/resolve/server/__init__.py
new file mode 100644
index 00000000000..a84180d0f55
--- /dev/null
+++
b/server_addon/resolve/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import ResolveSettings, DEFAULT_VALUES + + +class ResolveAddon(BaseServerAddon): + name = "resolve" + title = "DaVinci Resolve" + version = __version__ + settings_model: Type[ResolveSettings] = ResolveSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/resolve/server/imageio.py b/server_addon/resolve/server/imageio.py new file mode 100644 index 00000000000..c2bfcd40d04 --- /dev/null +++ b/server_addon/resolve/server/imageio.py @@ -0,0 +1,64 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIORemappingRulesModel(BaseSettingsModel): + host_native_name: str = Field( + title="Application native colorspace name" + ) + ocio_name: str = Field(title="OCIO colorspace name") + + +class ImageIORemappingModel(BaseSettingsModel): + rules: list[ImageIORemappingRulesModel] = Field( + default_factory=list) + + +class ResolveImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + remapping: ImageIORemappingModel = Field( + title="Remapping colorspace names", + default_factory=ImageIORemappingModel + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/resolve/server/settings.py b/server_addon/resolve/server/settings.py new file mode 100644 index 00000000000..326f6bea1e0 --- /dev/null +++ b/server_addon/resolve/server/settings.py @@ -0,0 +1,114 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from .imageio import ResolveImageIOModel + + +class CreateShotClipModels(BaseSettingsModel): + hierarchy: str = Field( + "{folder}/{sequence}", + title="Shot parent hierarchy", + section="Shot Hierarchy And Rename Settings" + ) + clipRename: bool = Field( + True, + title="Rename clips" + ) + clipName: str = Field( + "{track}{sequence}{shot}", + title="Clip name template" + ) + countFrom: int = Field( + 10, + title="Count sequence from" + ) + countSteps: int = Field( + 10, + title="Stepping number" + ) + + folder: str = Field( + "shots", + title="{folder}", + section="Shot Template Keywords" + ) + episode: str = Field( + "ep01", + title="{episode}" + ) + sequence: str = Field( + "sq01", + title="{sequence}" + ) + 
track: str = Field( + "{_track_}", + title="{track}" + ) + shot: str = Field( + "sh###", + title="{shot}" + ) + + vSyncOn: bool = Field( + False, + title="Enable Vertical Sync", + section="Vertical Synchronization Of Attributes" + ) + + workfileFrameStart: int = Field( + 1001, + title="Workfiles Start Frame", + section="Shot Attributes" + ) + handleStart: int = Field( + 10, + title="Handle start (head)" + ) + handleEnd: int = Field( + 10, + title="Handle end (tail)" + ) + + +class CreatorPuginsModel(BaseSettingsModel): + CreateShotClip: CreateShotClipModels = Field( + default_factory=CreateShotClipModels, + title="Create Shot Clip" + ) + + +class ResolveSettings(BaseSettingsModel): + launch_openpype_menu_on_start: bool = Field( + False, title="Launch OpenPype menu on start of Resolve" + ) + imageio: ResolveImageIOModel = Field( + default_factory=ResolveImageIOModel, + title="Color Management (ImageIO)" + ) + create: CreatorPuginsModel = Field( + default_factory=CreatorPuginsModel, + title="Creator plugins", + ) + + +DEFAULT_VALUES = { + "launch_openpype_menu_on_start": False, + "create": { + "CreateShotClip": { + "hierarchy": "{folder}/{sequence}", + "clipRename": True, + "clipName": "{track}{sequence}{shot}", + "countFrom": 10, + "countSteps": 10, + "folder": "shots", + "episode": "ep01", + "sequence": "sq01", + "track": "{_track_}", + "shot": "sh###", + "vSyncOn": False, + "workfileFrameStart": 1001, + "handleStart": 10, + "handleEnd": 10 + } + } +} diff --git a/server_addon/resolve/server/version.py b/server_addon/resolve/server/version.py new file mode 100644 index 00000000000..3dc1f76bc69 --- /dev/null +++ b/server_addon/resolve/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/royal_render/server/__init__.py b/server_addon/royal_render/server/__init__.py new file mode 100644 index 00000000000..c5f0aafa006 --- /dev/null +++ b/server_addon/royal_render/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import RoyalRenderSettings, DEFAULT_VALUES + + +class RoyalRenderAddon(BaseServerAddon): + name = "royalrender" + version = __version__ + title = "Royal Render" + settings_model: Type[RoyalRenderSettings] = RoyalRenderSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/royal_render/server/settings.py b/server_addon/royal_render/server/settings.py new file mode 100644 index 00000000000..677d7e2671e --- /dev/null +++ b/server_addon/royal_render/server/settings.py @@ -0,0 +1,70 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel, MultiplatformPathModel + + +class CustomPath(MultiplatformPathModel): + _layout = "expanded" + + +class ServerListSubmodel(BaseSettingsModel): + _layout = "expanded" + name: str = Field("", title="Name") + value: CustomPath = Field( + default_factory=CustomPath + ) + + +class CollectSequencesFromJobModel(BaseSettingsModel): + review: bool = Field(True, title="Generate reviews from sequences") + + +class PublishPluginsModel(BaseSettingsModel): + CollectSequencesFromJob: CollectSequencesFromJobModel = Field( + default_factory=CollectSequencesFromJobModel, + title="Collect Sequences from the Job" + ) + + +class RoyalRenderSettings(BaseSettingsModel): + enabled: bool = True + # WARNING/TODO this needs change + # - both system and project settings contained 'rr_path' + # where 
project settings did choose one of rr_path from system settings + # that is not possible in AYON + rr_paths: list[ServerListSubmodel] = Field( + default_factory=list, + title="Royal Render Root Paths", + scope=["studio"], + ) + # This was 'rr_paths' in project settings and should be enum of + # 'rr_paths' from system settings, but that's not possible in AYON + selected_rr_paths: list[str] = Field( + default_factory=list, + title="Selected Royal Render Paths", + section="---", + ) + publish: PublishPluginsModel = Field( + default_factory=PublishPluginsModel, + title="Publish plugins", + ) + + +DEFAULT_VALUES = { + "enabled": False, + "rr_paths": [ + { + "name": "default", + "value": { + "windows": "", + "darwin": "", + "linux": "" + } + } + ], + "selected_rr_paths": ["default"], + "publish": { + "CollectSequencesFromJob": { + "review": True + } + } +} diff --git a/server_addon/royal_render/server/version.py b/server_addon/royal_render/server/version.py new file mode 100644 index 00000000000..485f44ac21b --- /dev/null +++ b/server_addon/royal_render/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/timers_manager/server/__init__.py b/server_addon/timers_manager/server/__init__.py new file mode 100644 index 00000000000..29f9d47370b --- /dev/null +++ b/server_addon/timers_manager/server/__init__.py @@ -0,0 +1,13 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import TimersManagerSettings + + +class TimersManagerAddon(BaseServerAddon): + name = "timers_manager" + version = __version__ + title = "Timers Manager" + settings_model: Type[TimersManagerSettings] = TimersManagerSettings diff --git a/server_addon/timers_manager/server/settings.py b/server_addon/timers_manager/server/settings.py new file mode 100644 index 00000000000..a5c5721a575 --- /dev/null +++ b/server_addon/timers_manager/server/settings.py @@ -0,0 +1,25 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class TimersManagerSettings(BaseSettingsModel): + auto_stop: bool = Field( + True, + title="Auto stop timer", + scope=["studio"], + ) + full_time: int = Field( + 15, + title="Max idle time", + scope=["studio"], + ) + message_time: float = Field( + 0.5, + title="When dialog will show", + scope=["studio"], + ) + disregard_publishing: bool = Field( + False, + title="Disregard publishing", + scope=["studio"], + ) diff --git a/server_addon/timers_manager/server/version.py b/server_addon/timers_manager/server/version.py new file mode 100644 index 00000000000..485f44ac21b --- /dev/null +++ b/server_addon/timers_manager/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/traypublisher/server/LICENSE b/server_addon/traypublisher/server/LICENSE new file mode 100644 index 00000000000..d6456956733 --- /dev/null +++ b/server_addon/traypublisher/server/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
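All of the server packages introduced in this diff (timers_manager above; traypublisher, tvpaint and unreal below) follow the same shape: a `BaseServerAddon` subclass that declares `name`, `title` and `version`, points `settings_model` at a Pydantic settings model, and optionally overrides `get_default_settings` to hydrate that model from a plain defaults dict. A minimal sketch of the pattern — the `my_addon` name and the `MyAddonSettings` fields are hypothetical placeholders, not part of this changeset:

```python
from typing import Type

from pydantic import Field

from ayon_server.addons import BaseServerAddon
from ayon_server.settings import BaseSettingsModel


class MyAddonSettings(BaseSettingsModel):
    # Each field becomes an editable entry in the server settings UI.
    enabled: bool = Field(True, title="Enabled")
    output_dir: str = Field("", title="Output directory")


DEFAULT_VALUES = {
    "enabled": True,
    "output_dir": "",
}


class MyAddon(BaseServerAddon):
    name = "my_addon"
    title = "My Addon"
    version = "0.1.0"
    settings_model: Type[MyAddonSettings] = MyAddonSettings

    async def get_default_settings(self):
        # Instantiating the model from the defaults dict validates the
        # defaults up front, so a bad key fails at addon load time
        # rather than surfacing later in the settings UI.
        settings_model_cls = self.get_settings_model()
        return settings_model_cls(**DEFAULT_VALUES)
```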
diff --git a/server_addon/traypublisher/server/README.md b/server_addon/traypublisher/server/README.md
new file mode 100644
index 00000000000..c0029bc7823
--- /dev/null
+++ b/server_addon/traypublisher/server/README.md
@@ -0,0 +1,4 @@
+TrayPublisher Addon
+===================
+
+Integration with the TrayPublisher standalone publishing tool.
diff --git a/server_addon/traypublisher/server/__init__.py b/server_addon/traypublisher/server/__init__.py
new file mode 100644
index 00000000000..e6f079609f0
--- /dev/null
+++ b/server_addon/traypublisher/server/__init__.py
@@ -0,0 +1,16 @@
+from ayon_server.addons import BaseServerAddon
+
+from .version import __version__
+from .settings import TraypublisherSettings, DEFAULT_TRAYPUBLISHER_SETTING
+
+
+class Traypublisher(BaseServerAddon):
+    name = "traypublisher"
+    title = "TrayPublisher"
+    version = __version__
+
+    settings_model = TraypublisherSettings
+
+    async def get_default_settings(self):
+        settings_model_cls = self.get_settings_model()
+        return settings_model_cls(**DEFAULT_TRAYPUBLISHER_SETTING)
diff --git a/server_addon/traypublisher/server/settings/__init__.py b/server_addon/traypublisher/server/settings/__init__.py
new file mode 100644
index 00000000000..bcf8beffa7d
--- /dev/null
+++ b/server_addon/traypublisher/server/settings/__init__.py
@@ -0,0 +1,10 @@
+from .main import (
+    TraypublisherSettings,
+    DEFAULT_TRAYPUBLISHER_SETTING,
+)
+
+
+__all__ = (
+    "TraypublisherSettings",
+    "DEFAULT_TRAYPUBLISHER_SETTING",
+)
diff --git a/server_addon/traypublisher/server/settings/creator_plugins.py b/server_addon/traypublisher/server/settings/creator_plugins.py
new file mode 100644
index 00000000000..345cb92e635
--- /dev/null
+++ b/server_addon/traypublisher/server/settings/creator_plugins.py
@@ -0,0 +1,46 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+
+class BatchMovieCreatorPlugin(BaseSettingsModel):
+    """Allows publishing multiple video files in one go.
+
+    The name of the matching asset is parsed from the file names
+    ('asset.mov', 'asset_v001.mov', 'my_asset_to_publish.mov').
+    """
+
+    default_variants: list[str] = Field(
+        title="Default variants",
+        default_factory=list
+    )
+
+    default_tasks: list[str] = Field(
+        title="Default tasks",
+        default_factory=list
+    )
+
+    extensions: list[str] = Field(
+        title="Extensions",
+        default_factory=list
+    )
+
+
+class TrayPublisherCreatePluginsModel(BaseSettingsModel):
+    BatchMovieCreator: BatchMovieCreatorPlugin = Field(
+        title="Batch Movie Creator",
+        default_factory=BatchMovieCreatorPlugin
+    )
+
+
+DEFAULT_CREATORS = {
+    "BatchMovieCreator": {
+        "default_variants": [
+            "Main"
+        ],
+        "default_tasks": [
+            "Compositing"
+        ],
+        "extensions": [
+            ".mov"
+        ]
+    },
+}
diff --git a/server_addon/traypublisher/server/settings/editorial_creators.py b/server_addon/traypublisher/server/settings/editorial_creators.py
new file mode 100644
index 00000000000..4111f225762
--- /dev/null
+++ b/server_addon/traypublisher/server/settings/editorial_creators.py
@@ -0,0 +1,181 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel, task_types_enum
+
+
+class ClipNameTokenizerItem(BaseSettingsModel):
+    _layout = "expanded"
+    # TODO was 'dict-modifiable', is list of dicts now, must be fixed in code
+    name: str = Field("#TODO", title="Tokenizer name")
+    regex: str = Field("", title="Tokenizer regex")
+
+
+class ShotAddTasksItem(BaseSettingsModel):
+    _layout = "expanded"
+    # TODO was 'dict-modifiable', is list of dicts now, must be fixed in code
+    name: str = Field('', title="Key")
+    task_type: list[str] = Field(
+        title="Task type",
+        default_factory=list,
+        enum_resolver=task_types_enum)
+
+
+class ShotRenameSubmodel(BaseSettingsModel):
+    enabled: bool = True
+    shot_rename_template: str = Field(
+        "",
+        title="Shot rename template"
+    )
+
+
+parent_type_enum = [
+    {"value": "Project", "label": "Project"},
+    {"value": "Folder", "label": "Folder"},
+    {"value": "Episode", "label": "Episode"},
+    {"value": "Sequence", "label": "Sequence"},
+]
+
+
+class TokenToParentConvertorItem(BaseSettingsModel):
+    # TODO - was 'type', must be renamed in code to `parent_type`
+    parent_type: str = Field(
+        "Project",
+        enum_resolver=lambda: parent_type_enum
+    )
+    name: str = Field(
+        "",
+        title="Parent token name",
+        description="Unique name used in `Parent path template`"
+    )
+    value: str = Field(
+        "",
+        title="Parent token value",
+        description="Template in which any text, Anatomy keys and Tokens can be used"  # noqa
+    )
+
+
+class ShotHierarchySubmodel(BaseSettingsModel):
+    enabled: bool = True
+    parents_path: str = Field(
+        "",
+        title="Parents path template",
+        description="Using keys from \"Token to parent convertor\" or tokens directly"  # noqa
+    )
+    parents: list[TokenToParentConvertorItem] = Field(
+        default_factory=list,
+        title="Token to parent convertor"
+    )
+
+
+output_file_type = [
+    {"value": ".mp4", "label": "MP4"},
+    {"value": ".mov", "label": "MOV"},
+    {"value": ".wav", "label": "WAV"}
]
+
+
+class ProductTypePresetItem(BaseSettingsModel):
+    product_type: str = Field("", title="Product type")
+    # TODO add placeholder '< Inherited >'
+    variant: str = Field("", title="Variant")
+    review: bool = Field(True, title="Review")
+    output_file_type: str = Field(
+        ".mp4",
+        enum_resolver=lambda: output_file_type
+    )
+
+
+class EditorialSimpleCreatorPlugin(BaseSettingsModel):
+    default_variants: list[str] = Field(
+        default_factory=list,
+        title="Default Variants"
+    )
+    clip_name_tokenizer: list[ClipNameTokenizerItem] = Field(
+        default_factory=list,
+        description=(
+            "Using Regex expression to create tokens. \nThose can be used"
+            " later in \"Shot rename\" creator \nor \"Shot hierarchy\"."
+            "\n\nTokens should be decorated with \"_\" on each side"
+        )
+    )
+    shot_rename: ShotRenameSubmodel = Field(
+        title="Shot Rename",
+        default_factory=ShotRenameSubmodel
+    )
+    shot_hierarchy: ShotHierarchySubmodel = Field(
+        title="Shot Hierarchy",
+        default_factory=ShotHierarchySubmodel
+    )
+    shot_add_tasks: list[ShotAddTasksItem] = Field(
+        title="Add tasks to shot",
+        default_factory=list
+    )
+    product_type_presets: list[ProductTypePresetItem] = Field(
+        default_factory=list
+    )
+
+
+class TraypublisherEditorialCreatorPlugins(BaseSettingsModel):
+    editorial_simple: EditorialSimpleCreatorPlugin = Field(
+        title="Editorial simple creator",
+        default_factory=EditorialSimpleCreatorPlugin,
+    )
+
+
+DEFAULT_EDITORIAL_CREATORS = {
+    "editorial_simple": {
+        "default_variants": [
+            "Main"
+        ],
+        "clip_name_tokenizer": [
+            {"name": "_sequence_", "regex": "(sc\\d{3})"},
+            {"name": "_shot_", "regex": "(sh\\d{3})"}
+        ],
+        "shot_rename": {
+            "enabled": True,
+            "shot_rename_template": "{project[code]}_{_sequence_}_{_shot_}"
+        },
+        "shot_hierarchy": {
+            "enabled": True,
+            "parents_path": "{project}/{folder}/{sequence}",
+            "parents": [
+                {
+                    "parent_type": "Project",
+                    "name": "project",
+                    "value": "{project[name]}"
+                },
+                {
+                    "parent_type": "Folder",
+                    "name": "folder",
+                    "value": "shots"
+                },
+                {
+                    "parent_type": "Sequence",
+                    "name": "sequence",
+                    "value": "{_sequence_}"
+                }
+            ]
+        },
+        "shot_add_tasks": [],
+        "product_type_presets": [
+            {
+                "product_type": "review",
+                "variant": "Reference",
+                "review": True,
+                "output_file_type": ".mp4"
+            },
+            {
+                "product_type": "plate",
+                "variant": "",
+                "review": False,
+                "output_file_type": ".mov"
+            },
+            {
+                "product_type": "audio",
+                "variant": "",
+                "review": False,
+                "output_file_type": ".wav"
+            }
+        ]
+    }
+}
diff --git a/server_addon/traypublisher/server/settings/imageio.py b/server_addon/traypublisher/server/settings/imageio.py
new file mode 100644
index 00000000000..3df0d2f2fb5
--- /dev/null
+++ b/server_addon/traypublisher/server/settings/imageio.py
@@ -0,0 +1,48 @@
+from pydantic import Field, validator
+from ayon_server.settings import BaseSettingsModel
+from ayon_server.settings.validators import ensure_unique_names
+
+
+class ImageIOConfigModel(BaseSettingsModel):
+    override_global_config: bool = Field(
+        False,
+        title="Override global OCIO config"
+    )
+    filepath: list[str] = Field(
+        default_factory=list,
+        title="Config path"
+    )
+
+
+class ImageIOFileRuleModel(BaseSettingsModel):
+    name: str = Field("", title="Rule name")
+    pattern: str = Field("", title="Regex pattern")
+    colorspace: str = Field("", title="Colorspace name")
+    ext: str = Field("", title="File extension")
+
+
+class ImageIOFileRulesModel(BaseSettingsModel):
+    activate_host_rules: bool = Field(False)
+    rules: list[ImageIOFileRuleModel] = Field(
+        default_factory=list,
+        title="Rules"
+    )
+
+    @validator("rules")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class TrayPublisherImageIOModel(BaseSettingsModel):
+    activate_host_color_management: bool = Field(
+        True, title="Enable Color Management"
+    )
+    ocio_config: ImageIOConfigModel = Field(
+        default_factory=ImageIOConfigModel,
+        title="OCIO config"
+    )
+    file_rules: ImageIOFileRulesModel = Field(
+        default_factory=ImageIOFileRulesModel,
+        title="File Rules"
+    )
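The `editorial_simple` defaults above chain together: each `clip_name_tokenizer` regex extracts one token (`_sequence_`, `_shot_`) from the clip name, and `shot_rename_template` then consumes those tokens. A rough sketch of how that resolution plays out for a hypothetical clip `sc010_sh020.mov` — the project code `PRJ` is invented for illustration, and the creator's actual template resolution may differ in detail:

```python
import re

clip_name = "sc010_sh020"  # hypothetical clip file name without extension
tokenizers = {
    "_sequence_": r"(sc\d{3})",
    "_shot_": r"(sh\d{3})",
}

# Each tokenizer regex is searched against the clip name; the first
# capture group becomes the token value.
tokens = {}
for token_name, regex in tokenizers.items():
    match = re.search(regex, clip_name)
    if match:
        tokens[token_name] = match.group(1)

# Resolved tokens then feed the "Shot rename" template from the defaults.
template = "{project[code]}_{_sequence_}_{_shot_}"
print(template.format(project={"code": "PRJ"}, **tokens))
# -> PRJ_sc010_sh020
```

The same tokens can also be referenced by the `shot_hierarchy` parent templates (e.g. `{_sequence_}` in the `sequence` parent above), which is why the defaults decorate token names with underscores on both sides.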
diff --git a/server_addon/traypublisher/server/settings/main.py b/server_addon/traypublisher/server/settings/main.py
new file mode 100644
index 00000000000..fad96bef2f5
--- /dev/null
+++ b/server_addon/traypublisher/server/settings/main.py
@@ -0,0 +1,52 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+from .imageio import TrayPublisherImageIOModel
+from .simple_creators import (
+    SimpleCreatorPlugin,
+    DEFAULT_SIMPLE_CREATORS,
+)
+from .editorial_creators import (
+    TraypublisherEditorialCreatorPlugins,
+    DEFAULT_EDITORIAL_CREATORS,
+)
+from .creator_plugins import (
+    TrayPublisherCreatePluginsModel,
+    DEFAULT_CREATORS,
+)
+from .publish_plugins import (
+    TrayPublisherPublishPlugins,
+    DEFAULT_PUBLISH_PLUGINS,
+)
+
+
+class TraypublisherSettings(BaseSettingsModel):
+    """Traypublisher Project Settings."""
+    imageio: TrayPublisherImageIOModel = Field(
+        default_factory=TrayPublisherImageIOModel,
+        title="Color Management (ImageIO)"
+    )
+    simple_creators: list[SimpleCreatorPlugin] = Field(
+        title="Simple Create Plugins",
+        default_factory=list,
+    )
+    editorial_creators: TraypublisherEditorialCreatorPlugins = Field(
+        title="Editorial Creators",
+        default_factory=TraypublisherEditorialCreatorPlugins,
+    )
+    create: TrayPublisherCreatePluginsModel = Field(
+        title="Create",
+        default_factory=TrayPublisherCreatePluginsModel
+    )
+    publish: TrayPublisherPublishPlugins = Field(
+        title="Publish Plugins",
+        default_factory=TrayPublisherPublishPlugins
+    )
+
+
+DEFAULT_TRAYPUBLISHER_SETTING = {
+    "simple_creators": DEFAULT_SIMPLE_CREATORS,
+    "editorial_creators": DEFAULT_EDITORIAL_CREATORS,
+    "create": DEFAULT_CREATORS,
+    "publish": DEFAULT_PUBLISH_PLUGINS,
+}
diff --git a/server_addon/traypublisher/server/settings/publish_plugins.py b/server_addon/traypublisher/server/settings/publish_plugins.py
new file mode 100644
index 00000000000..8c844f29f2b
--- /dev/null
+++ b/server_addon/traypublisher/server/settings/publish_plugins.py
@@ -0,0 +1,50 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+
+class ValidatePluginModel(BaseSettingsModel):
+    _isGroup = True
+    enabled: bool = True
+    optional: bool = Field(True, title="Optional")
+    active: bool = Field(True, title="Active")
+
+
+class ValidateFrameRangeModel(ValidatePluginModel):
+    """Validates that the published frame range matches the frame range
+    set on the folder entity."""
+
+
+class TrayPublisherPublishPlugins(BaseSettingsModel):
+    CollectFrameDataFromAssetEntity: ValidatePluginModel = Field(
+        default_factory=ValidatePluginModel,
+        title="Collect Frame Data From Folder Entity",
+    )
+    ValidateFrameRange: ValidateFrameRangeModel = Field(
+        title="Validate Frame Range",
+        default_factory=ValidateFrameRangeModel,
+    )
+    ValidateExistingVersion: ValidatePluginModel = Field(
+        title="Validate Existing Version",
+        default_factory=ValidatePluginModel,
+    )
+
+
+DEFAULT_PUBLISH_PLUGINS = {
+    "CollectFrameDataFromAssetEntity": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateFrameRange": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateExistingVersion": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    }
+}
diff --git a/server_addon/traypublisher/server/settings/simple_creators.py b/server_addon/traypublisher/server/settings/simple_creators.py
new file mode 100644
index 00000000000..8335b9d34e9
--- /dev/null
+++ b/server_addon/traypublisher/server/settings/simple_creators.py
@@ -0,0 +1,309 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+
+class SimpleCreatorPlugin(BaseSettingsModel):
+    _layout = "expanded"
+    product_type: str = Field("", title="Product type")
+    # TODO add placeholder
+    identifier: str = Field("", title="Identifier")
+    label: str = Field("", title="Label")
+    icon: str = Field("", title="Icon")
+    default_variants: list[str] = Field(
+        default_factory=list,
+        title="Default Variants"
+    )
+    description: str = Field(
+        "",
+        title="Description",
+        widget="textarea"
+    )
+    detailed_description: str = Field(
+        "",
+        title="Detailed Description",
+        widget="textarea"
+    )
+    allow_sequences: bool = Field(
+        False,
+        title="Allow sequences"
+    )
+    allow_multiple_items: bool = Field(
+        False,
+        title="Allow multiple items"
+    )
+    allow_version_control: bool = Field(
+        False,
+        title="Allow version control"
+    )
+    extensions: list[str] = Field(
+        default_factory=list,
+        title="Extensions"
+    )
+
+
+DEFAULT_SIMPLE_CREATORS = [
+    {
+        "product_type": "workfile",
+        "identifier": "",
+        "label": "Workfile",
+        "icon": "fa.file",
+        "default_variants": [
+            "Main"
+        ],
+        "description": "Backup of a working scene",
+        "detailed_description": "Workfiles are full scenes from any application that are directly edited by artists. They represent a state of work on a task at a given point and are usually not directly referenced into other scenes.",
+        "allow_sequences": False,
+        "allow_multiple_items": False,
+        "allow_version_control": False,
+        "extensions": [
+            ".ma",
+            ".mb",
+            ".nk",
+            ".hrox",
+            ".hip",
+            ".hiplc",
+            ".hipnc",
+            ".blend",
+            ".scn",
+            ".tvpp",
+            ".comp",
+            ".zip",
+            ".prproj",
+            ".drp",
+            ".psd",
+            ".psb",
+            ".aep"
+        ]
+    },
+    {
+        "product_type": "model",
+        "identifier": "",
+        "label": "Model",
+        "icon": "fa.cubes",
+        "default_variants": [
+            "Main",
+            "Proxy",
+            "Sculpt"
+        ],
+        "description": "Clean models",
+        "detailed_description": "Models should only contain geometry data, without any extras like cameras, locators or bones.\n\nKeep in mind that models published from tray publisher are not validated for correctness.",
", + "allow_sequences": False, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".ma", + ".mb", + ".obj", + ".abc", + ".fbx", + ".bgeo", + ".bgeogz", + ".bgeosc", + ".usd", + ".blend" + ] + }, + { + "product_type": "pointcache", + "identifier": "", + "label": "Pointcache", + "icon": "fa.gears", + "default_variants": [ + "Main" + ], + "description": "Geometry Caches", + "detailed_description": "Alembic or bgeo cache of animated data", + "allow_sequences": True, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".abc", + ".bgeo", + ".bgeogz", + ".bgeosc" + ] + }, + { + "product_type": "plate", + "identifier": "", + "label": "Plate", + "icon": "mdi.camera-image", + "default_variants": [ + "Main", + "BG", + "Animatic", + "Reference", + "Offline" + ], + "description": "Footage Plates", + "detailed_description": "Any type of image seqeuence coming from outside of the studio. Usually camera footage, but could also be animatics used for reference.", + "allow_sequences": True, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".exr", + ".png", + ".dpx", + ".jpg", + ".tiff", + ".tif", + ".mov", + ".mp4", + ".avi" + ] + }, + { + "product_type": "render", + "identifier": "", + "label": "Render", + "icon": "mdi.folder-multiple-image", + "default_variants": [], + "description": "Rendered images or video", + "detailed_description": "Sequence or single file renders", + "allow_sequences": True, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".exr", + ".png", + ".dpx", + ".jpg", + ".jpeg", + ".tiff", + ".tif", + ".mov", + ".mp4", + ".avi" + ] + }, + { + "product_type": "camera", + "identifier": "", + "label": "Camera", + "icon": "fa.video-camera", + "default_variants": [], + "description": "3d Camera", + "detailed_description": "Ideally this should be only camera itself with baked animation, however, it can technically also include helper geometry.", + "allow_sequences": False, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".abc", + ".ma", + ".hip", + ".blend", + ".fbx", + ".usd" + ] + }, + { + "product_type": "image", + "identifier": "", + "label": "Image", + "icon": "fa.image", + "default_variants": [ + "Reference", + "Texture", + "Concept", + "Background" + ], + "description": "Single image", + "detailed_description": "Any image data can be published as image product type. References, textures, concept art, matte paints. 
+        "allow_sequences": False,
+        "allow_multiple_items": True,
+        "allow_version_control": False,
+        "extensions": [
+            ".exr",
+            ".jpg",
+            ".jpeg",
+            ".dpx",
+            ".bmp",
+            ".tif",
+            ".tiff",
+            ".png",
+            ".psb",
+            ".psd"
+        ]
+    },
+    {
+        "product_type": "vdb",
+        "identifier": "",
+        "label": "VDB Volumes",
+        "icon": "fa.cloud",
+        "default_variants": [],
+        "description": "Sparse volumetric data",
+        "detailed_description": "Hierarchical data structure for the efficient storage and manipulation of sparse volumetric data discretized on three-dimensional grids",
+        "allow_sequences": True,
+        "allow_multiple_items": True,
+        "allow_version_control": False,
+        "extensions": [
+            ".vdb"
+        ]
+    },
+    {
+        "product_type": "matchmove",
+        "identifier": "",
+        "label": "Matchmove",
+        "icon": "fa.empire",
+        "default_variants": [
+            "Camera",
+            "Object",
+            "Mocap"
+        ],
+        "description": "Matchmoving script",
+        "detailed_description": "Script exported from matchmoving application to be later processed into a tracked camera with additional data",
+        "allow_sequences": False,
+        "allow_multiple_items": True,
+        "allow_version_control": False,
+        "extensions": []
+    },
+    {
+        "product_type": "rig",
+        "identifier": "",
+        "label": "Rig",
+        "icon": "fa.wheelchair",
+        "default_variants": [],
+        "description": "CG rig file",
+        "detailed_description": "CG rigged character or prop. Rig should be clean of any extra data and directly loadable into its respective application",
+        "allow_sequences": False,
+        "allow_multiple_items": False,
+        "allow_version_control": False,
+        "extensions": [
+            ".ma",
+            ".blend",
+            ".hip",
+            ".hda"
+        ]
+    },
+    {
+        "product_type": "simpleUnrealTexture",
+        "identifier": "",
+        "label": "Simple UE texture",
+        "icon": "fa.image",
+        "default_variants": [],
+        "description": "Simple Unreal Engine texture",
+        "detailed_description": "Texture files with Unreal Engine naming conventions",
+        "allow_sequences": False,
+        "allow_multiple_items": True,
+        "allow_version_control": False,
+        "extensions": []
+    },
+    {
+        "product_type": "audio",
+        "identifier": "",
+        "label": "Audio",
+        "icon": "fa5s.file-audio",
+        "default_variants": [
+            "Main"
+        ],
+        "description": "Audio product",
+        "detailed_description": "Audio files for review or final delivery",
+        "allow_sequences": False,
+        "allow_multiple_items": False,
+        "allow_version_control": False,
+        "extensions": [
+            ".wav"
+        ]
+    }
+]
diff --git a/server_addon/traypublisher/server/version.py b/server_addon/traypublisher/server/version.py
new file mode 100644
index 00000000000..df0c92f1e27
--- /dev/null
+++ b/server_addon/traypublisher/server/version.py
@@ -0,0 +1,3 @@
+# -*- coding: utf-8 -*-
+"""Package declaring addon version."""
+__version__ = "0.1.2"
diff --git a/server_addon/tvpaint/server/__init__.py b/server_addon/tvpaint/server/__init__.py
new file mode 100644
index 00000000000..033d7d3792b
--- /dev/null
+++ b/server_addon/tvpaint/server/__init__.py
@@ -0,0 +1,17 @@
+from typing import Type
+
+from ayon_server.addons import BaseServerAddon
+
+from .version import __version__
+from .settings import TvpaintSettings, DEFAULT_VALUES
+
+
+class TvpaintAddon(BaseServerAddon):
+    name = "tvpaint"
+    title = "TVPaint"
+    version = __version__
+    settings_model: Type[TvpaintSettings] = TvpaintSettings
+
+    async def get_default_settings(self):
+        settings_model_cls = self.get_settings_model()
+        return settings_model_cls(**DEFAULT_VALUES)
diff --git a/server_addon/tvpaint/server/settings/__init__.py b/server_addon/tvpaint/server/settings/__init__.py
new file mode 100644
index 00000000000..abee32e8976
--- /dev/null
+++ b/server_addon/tvpaint/server/settings/__init__.py
@@ -0,0 +1,10 @@
+from .main import (
+    TvpaintSettings,
+    DEFAULT_VALUES,
+)
+
+
+__all__ = (
+    "TvpaintSettings",
+    "DEFAULT_VALUES",
+)
diff --git a/server_addon/tvpaint/server/settings/create_plugins.py b/server_addon/tvpaint/server/settings/create_plugins.py
new file mode 100644
index 00000000000..349bfdd2882
--- /dev/null
+++ b/server_addon/tvpaint/server/settings/create_plugins.py
@@ -0,0 +1,133 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+
+class CreateWorkfileModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    default_variant: str = Field(title="Default variant")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default variants")
+
+
+class CreateReviewModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    active_on_create: bool = Field(True, title="Active by default")
+    default_variant: str = Field(title="Default variant")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default variants")
+
+
+class CreateRenderSceneModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    active_on_create: bool = Field(True, title="Active by default")
+    mark_for_review: bool = Field(True, title="Review by default")
+    default_pass_name: str = Field(title="Default beauty pass")
+    default_variant: str = Field(title="Default variant")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default variants")
+
+
+class CreateRenderLayerModel(BaseSettingsModel):
+    mark_for_review: bool = Field(True, title="Review by default")
+    default_pass_name: str = Field(title="Default beauty pass")
+    default_variant: str = Field(title="Default variant")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default variants")
+
+
+class CreateRenderPassModel(BaseSettingsModel):
+    mark_for_review: bool = Field(True, title="Review by default")
+    default_variant: str = Field(title="Default variant")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default variants")
+
+
+class AutoDetectCreateRenderModel(BaseSettingsModel):
+    """The creator tries to auto-detect Render Layers and Render Passes in the scene.
+
+    For Render Layers the group name is used as the variant, and for Render
+    Passes the TVPaint layer name is used.
+
+    Groups can be renamed based on the order in which they are used in the
+    scene. The renaming template may contain the '{group_index}' formatting
+    key, which is filled with the group's position index.
+    - Template: 'L{group_index}'
+    - Group offset: '10'
+    - Group padding: '3'
+
+    Would create group names "L010", "L020", ...
+    """
+
+    enabled: bool = Field(True)
+    allow_group_rename: bool = Field(title="Allow group rename")
+    group_name_template: str = Field(title="Group name template")
+    group_idx_offset: int = Field(1, title="Group index Offset", ge=1)
+    group_idx_padding: int = Field(4, title="Group index Padding", ge=1)
+
+
+class CreatePluginsModel(BaseSettingsModel):
+    create_workfile: CreateWorkfileModel = Field(
+        default_factory=CreateWorkfileModel,
+        title="Create Workfile"
+    )
+    create_review: CreateReviewModel = Field(
+        default_factory=CreateReviewModel,
+        title="Create Review"
+    )
+    create_render_scene: CreateRenderSceneModel = Field(
+        default_factory=CreateRenderSceneModel,
+        title="Create Render Scene"
+    )
+    create_render_layer: CreateRenderLayerModel = Field(
+        default_factory=CreateRenderLayerModel,
+        title="Create Render Layer"
+    )
+    create_render_pass: CreateRenderPassModel = Field(
+        default_factory=CreateRenderPassModel,
+        title="Create Render Pass"
+    )
+    auto_detect_render: AutoDetectCreateRenderModel = Field(
+        default_factory=AutoDetectCreateRenderModel,
+        title="Auto-Detect Create Render",
+    )
+
+
+DEFAULT_CREATE_SETTINGS = {
+    "create_workfile": {
+        "enabled": True,
+        "default_variant": "Main",
+        "default_variants": []
+    },
+    "create_review": {
+        "enabled": True,
+        "active_on_create": True,
+        "default_variant": "Main",
+        "default_variants": []
+    },
+    "create_render_scene": {
+        "enabled": True,
+        "active_on_create": False,
+        "mark_for_review": True,
+        "default_pass_name": "beauty",
+        "default_variant": "Main",
+        "default_variants": []
+    },
+    "create_render_layer": {
+        "mark_for_review": False,
+        "default_pass_name": "beauty",
+        "default_variant": "Main",
+        "default_variants": []
+    },
+    "create_render_pass": {
+        "mark_for_review": False,
+        "default_variant": "Main",
+        "default_variants": []
+    },
+    "auto_detect_render": {
+        "enabled": False,
+        "allow_group_rename": True,
+        "group_name_template": "L{group_index}",
+        "group_idx_offset": 10,
+        "group_idx_padding": 3
+    }
+}
diff --git a/server_addon/tvpaint/server/settings/filters.py b/server_addon/tvpaint/server/settings/filters.py
new file mode 100644
index 00000000000..009febae069
--- /dev/null
+++ b/server_addon/tvpaint/server/settings/filters.py
@@ -0,0 +1,19 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+
+class FiltersSubmodel(BaseSettingsModel):
+    _layout = "compact"
+    name: str = Field(title="Name")
+    value: str = Field(
+        "",
+        title="Textarea",
+        widget="textarea",
+    )
+
+
+class PublishFiltersModel(BaseSettingsModel):
+    env_search_replace_values: list[FiltersSubmodel] = Field(
+        default_factory=list
+    )
diff --git a/server_addon/tvpaint/server/settings/imageio.py b/server_addon/tvpaint/server/settings/imageio.py
new file mode 100644
index 00000000000..50f8b7eef4d
--- /dev/null
+++ b/server_addon/tvpaint/server/settings/imageio.py
@@ -0,0 +1,48 @@
+from pydantic import Field, validator
+from ayon_server.settings import BaseSettingsModel
+from ayon_server.settings.validators import ensure_unique_names
+
+
+class ImageIOConfigModel(BaseSettingsModel):
+    override_global_config: bool = Field(
+        False,
+        title="Override global OCIO config"
+    )
+    filepath: list[str] = Field(
+        default_factory=list,
+        title="Config path"
+    )
+
+
+class ImageIOFileRuleModel(BaseSettingsModel):
+    name: str = Field("", title="Rule name")
+    pattern: str = Field("", title="Regex pattern")
+    colorspace: str = Field("", title="Colorspace name")
+    ext: str = Field("", title="File extension")
+
+
+class 
ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class TVPaintImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/tvpaint/server/settings/main.py b/server_addon/tvpaint/server/settings/main.py new file mode 100644 index 00000000000..4cd6ac4b1a7 --- /dev/null +++ b/server_addon/tvpaint/server/settings/main.py @@ -0,0 +1,90 @@ +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, +) + +from .imageio import TVPaintImageIOModel +from .workfile_builder import WorkfileBuilderPlugin +from .create_plugins import CreatePluginsModel, DEFAULT_CREATE_SETTINGS +from .publish_plugins import ( + PublishPluginsModel, + LoadPluginsModel, + DEFAULT_PUBLISH_SETTINGS, +) + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class TvpaintSettings(BaseSettingsModel): + imageio: TVPaintImageIOModel = Field( + default_factory=TVPaintImageIOModel, + title="Color Management (ImageIO)" + ) + stop_timer_on_application_exit: bool = Field( + title="Stop timer on application exit") + create: CreatePluginsModel = Field( + default_factory=CreatePluginsModel, + title="Create plugins" + ) + publish: PublishPluginsModel = Field( + default_factory=PublishPluginsModel, + title="Publish plugins") + load: LoadPluginsModel = Field( + default_factory=LoadPluginsModel, + title="Load plugins") + workfile_builder: WorkfileBuilderPlugin = Field( + default_factory=WorkfileBuilderPlugin, + title="Workfile Builder" + ) + filters: list[PublishGUIFiltersModel] = Field( + default_factory=list, + title="Publish GUI Filters") + + @validator("filters") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +DEFAULT_VALUES = { + "stop_timer_on_application_exit": False, + "create": DEFAULT_CREATE_SETTINGS, + "publish": DEFAULT_PUBLISH_SETTINGS, + "load": { + "LoadImage": { + "defaults": { + "stretch": True, + "timestretch": True, + "preload": True + } + }, + "ImportImage": { + "defaults": { + "stretch": True, + "timestretch": True, + "preload": True + } + } + }, + "workfile_builder": { + "create_first_version": False, + "custom_templates": [] + }, + "filters": [] +} diff --git a/server_addon/tvpaint/server/settings/publish_plugins.py b/server_addon/tvpaint/server/settings/publish_plugins.py new file mode 100644 index 00000000000..76c7eaac01e --- /dev/null +++ b/server_addon/tvpaint/server/settings/publish_plugins.py @@ -0,0 +1,132 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel +from ayon_server.types import ColorRGBA_uint8 + + +class 
CollectRenderInstancesModel(BaseSettingsModel): + ignore_render_pass_transparency: bool = Field( + title="Ignore Render Pass opacity" + ) + + +class ExtractSequenceModel(BaseSettingsModel): + """Review BG color is used for whole scene review and for thumbnails.""" + # TODO Use alpha color + review_bg: ColorRGBA_uint8 = Field( + (255, 255, 255, 1.0), + title="Review BG color") + + +class ValidatePluginModel(BaseSettingsModel): + enabled: bool = True + optional: bool = Field(True, title="Optional") + active: bool = Field(True, title="Active") + + +def compression_enum(): + return [ + {"value": "ZIP", "label": "ZIP"}, + {"value": "ZIPS", "label": "ZIPS"}, + {"value": "DWAA", "label": "DWAA"}, + {"value": "DWAB", "label": "DWAB"}, + {"value": "PIZ", "label": "PIZ"}, + {"value": "RLE", "label": "RLE"}, + {"value": "PXR24", "label": "PXR24"}, + {"value": "B44", "label": "B44"}, + {"value": "B44A", "label": "B44A"}, + {"value": "none", "label": "None"} + ] + + +class ExtractConvertToEXRModel(BaseSettingsModel): + """WARNING: This plugin does not work on MacOS (using OIIO tool).""" + enabled: bool = False + replace_pngs: bool = True + + exr_compression: str = Field( + "ZIP", + enum_resolver=compression_enum, + title="EXR Compression" + ) + + +class LoadImageDefaultModel(BaseSettingsModel): + _layout = "expanded" + stretch: bool = Field(title="Stretch") + timestretch: bool = Field(title="TimeStretch") + preload: bool = Field(title="Preload") + + +class LoadImageModel(BaseSettingsModel): + defaults: LoadImageDefaultModel = Field( + default_factory=LoadImageDefaultModel + ) + + +class PublishPluginsModel(BaseSettingsModel): + CollectRenderInstances: CollectRenderInstancesModel = Field( + default_factory=CollectRenderInstancesModel, + title="Collect Render Instances") + ExtractSequence: ExtractSequenceModel = Field( + default_factory=ExtractSequenceModel, + title="Extract Sequence") + ValidateProjectSettings: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Project Settings") + ValidateMarks: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate MarkIn/Out") + ValidateStartFrame: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Scene Start Frame") + ValidateAssetName: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Folder Name") + ExtractConvertToEXR: ExtractConvertToEXRModel = Field( + default_factory=ExtractConvertToEXRModel, + title="Extract Convert To EXR") + + +class LoadPluginsModel(BaseSettingsModel): + LoadImage: LoadImageModel = Field( + default_factory=LoadImageModel, + title="Load Image") + ImportImage: LoadImageModel = Field( + default_factory=LoadImageModel, + title="Import Image") + + +DEFAULT_PUBLISH_SETTINGS = { + "CollectRenderInstances": { + "ignore_render_pass_transparency": False + }, + "ExtractSequence": { + "review_bg": [255, 255, 255, 1.0] + }, + "ValidateProjectSettings": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMarks": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateStartFrame": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateAssetName": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractConvertToEXR": { + "enabled": False, + "replace_pngs": True, + "exr_compression": "ZIP" + } +} diff --git a/server_addon/tvpaint/server/settings/workfile_builder.py b/server_addon/tvpaint/server/settings/workfile_builder.py new file mode 100644 
index 00000000000..e0aba5da7e1
--- /dev/null
+++ b/server_addon/tvpaint/server/settings/workfile_builder.py
@@ -0,0 +1,30 @@
+from pydantic import Field
+
+from ayon_server.settings import (
+    BaseSettingsModel,
+    MultiplatformPathModel,
+    task_types_enum,
+)
+
+
+class CustomBuilderTemplate(BaseSettingsModel):
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    template_path: MultiplatformPathModel = Field(
+        default_factory=MultiplatformPathModel
+    )
+
+
+class WorkfileBuilderPlugin(BaseSettingsModel):
+    _title = "Workfile Builder"
+    create_first_version: bool = Field(
+        False,
+        title="Create first workfile"
+    )
+
+    custom_templates: list[CustomBuilderTemplate] = Field(
+        default_factory=list
+    )
diff --git a/server_addon/tvpaint/server/version.py b/server_addon/tvpaint/server/version.py
new file mode 100644
index 00000000000..3dc1f76bc69
--- /dev/null
+++ b/server_addon/tvpaint/server/version.py
@@ -0,0 +1 @@
+__version__ = "0.1.0"
diff --git a/server_addon/unreal/server/__init__.py b/server_addon/unreal/server/__init__.py
new file mode 100644
index 00000000000..a5f3e9597d6
--- /dev/null
+++ b/server_addon/unreal/server/__init__.py
@@ -0,0 +1,19 @@
+from typing import Type
+
+from ayon_server.addons import BaseServerAddon
+
+from .version import __version__
+from .settings import UnrealSettings, DEFAULT_VALUES
+
+
+class UnrealAddon(BaseServerAddon):
+    name = "unreal"
+    title = "Unreal"
+    version = __version__
+    settings_model: Type[UnrealSettings] = UnrealSettings
+    frontend_scopes = {}
+    services = {}
+
+    async def get_default_settings(self):
+        settings_model_cls = self.get_settings_model()
+        return settings_model_cls(**DEFAULT_VALUES)
diff --git a/server_addon/unreal/server/imageio.py b/server_addon/unreal/server/imageio.py
new file mode 100644
index 00000000000..dde042ba47e
--- /dev/null
+++ b/server_addon/unreal/server/imageio.py
@@ -0,0 +1,48 @@
+from pydantic import Field, validator
+from ayon_server.settings import BaseSettingsModel
+from ayon_server.settings.validators import ensure_unique_names
+
+
+class ImageIOConfigModel(BaseSettingsModel):
+    override_global_config: bool = Field(
+        False,
+        title="Override global OCIO config"
+    )
+    filepath: list[str] = Field(
+        default_factory=list,
+        title="Config path"
+    )
+
+
+class ImageIOFileRuleModel(BaseSettingsModel):
+    name: str = Field("", title="Rule name")
+    pattern: str = Field("", title="Regex pattern")
+    colorspace: str = Field("", title="Colorspace name")
+    ext: str = Field("", title="File extension")
+
+
+class ImageIOFileRulesModel(BaseSettingsModel):
+    activate_host_rules: bool = Field(False)
+    rules: list[ImageIOFileRuleModel] = Field(
+        default_factory=list,
+        title="Rules"
+    )
+
+    @validator("rules")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class UnrealImageIOModel(BaseSettingsModel):
+    activate_host_color_management: bool = Field(
+        True, title="Enable Color Management"
+    )
+    ocio_config: ImageIOConfigModel = Field(
+        default_factory=ImageIOConfigModel,
+        title="OCIO config"
+    )
+    file_rules: ImageIOFileRulesModel = Field(
+        default_factory=ImageIOFileRulesModel,
+        title="File Rules"
+    )
diff --git a/server_addon/unreal/server/settings.py b/server_addon/unreal/server/settings.py
new file mode 100644
index 00000000000..479e041e25e
--- /dev/null
+++ b/server_addon/unreal/server/settings.py
@@ -0,0 +1,64 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+from .imageio import UnrealImageIOModel
+
+
+class ProjectSetup(BaseSettingsModel):
+    dev_mode: bool = Field(
+        False,
+        title="Dev mode"
+    )
+
+
+def _render_format_enum():
+    return [
+        {"value": "png", "label": "PNG"},
+        {"value": "exr", "label": "EXR"},
+        {"value": "jpg", "label": "JPG"},
+        {"value": "bmp", "label": "BMP"}
+    ]
+
+
+class UnrealSettings(BaseSettingsModel):
+    imageio: UnrealImageIOModel = Field(
+        default_factory=UnrealImageIOModel,
+        title="Color Management (ImageIO)"
+    )
+    level_sequences_for_layouts: bool = Field(
+        False,
+        title="Generate level sequences when loading layouts"
+    )
+    delete_unmatched_assets: bool = Field(
+        False,
+        title="Delete assets that are not matched"
+    )
+    render_config_path: str = Field(
+        "",
+        title="Render Config Path"
+    )
+    preroll_frames: int = Field(
+        0,
+        title="Pre-roll frames"
+    )
+    render_format: str = Field(
+        "png",
+        title="Render format",
+        enum_resolver=_render_format_enum
+    )
+    project_setup: ProjectSetup = Field(
+        default_factory=ProjectSetup,
+        title="Project Setup",
+    )
+
+
+DEFAULT_VALUES = {
+    "level_sequences_for_layouts": False,
+    "delete_unmatched_assets": False,
+    "render_config_path": "",
+    "preroll_frames": 0,
+    "render_format": "png",
+    "project_setup": {
+        "dev_mode": False
+    }
+}
diff --git a/server_addon/unreal/server/version.py b/server_addon/unreal/server/version.py
new file mode 100644
index 00000000000..3dc1f76bc69
--- /dev/null
+++ b/server_addon/unreal/server/version.py
@@ -0,0 +1 @@
+__version__ = "0.1.0"
diff --git a/setup.py b/setup.py
index 260728dde67..4b6f2867304 100644
--- a/setup.py
+++ b/setup.py
@@ -1,7 +1,6 @@
 # -*- coding: utf-8 -*-
 """Setup info for building OpenPype 3.0."""
 import os
-import sys
 import re
 import platform
 import distutils.spawn
@@ -125,7 +124,6 @@ def validate_thirdparty_binaries():
 include_files = [
     "igniter",
     "openpype",
-    "common",
     "schema",
     "LICENSE",
     "README.md"
@@ -170,22 +168,7 @@ def validate_thirdparty_binaries():
         target_name="openpype_console",
         icon=icon_path.as_posix()
     ),
-    Executable(
-        "ayon_start.py",
-        base=base,
-        target_name="ayon",
-        icon=icon_path.as_posix()
-    ),
 ]
-if IS_WINDOWS:
-    executables.append(
-        Executable(
-            "ayon_start.py",
-            base=None,
-            target_name="ayon_console",
-            icon=icon_path.as_posix()
-        )
-    )
 
 if IS_LINUX:
     executables.append(
diff --git a/tests/integration/hosts/nuke/test_deadline_publish_in_nuke_prerender.py b/tests/integration/hosts/nuke/test_deadline_publish_in_nuke_prerender.py
new file mode 100644
index 00000000000..57e2f78973d
--- /dev/null
+++ b/tests/integration/hosts/nuke/test_deadline_publish_in_nuke_prerender.py
@@ -0,0 +1,106 @@
+import logging
+
+from tests.lib.assert_classes import DBAssert
+from tests.integration.hosts.nuke.lib import NukeDeadlinePublishTestClass
+
+log = logging.getLogger("test_publish_in_nuke")
+
+
+class TestDeadlinePublishInNukePrerender(NukeDeadlinePublishTestClass):
+    """Basic test case for publishing in Nuke and Deadline for prerender.
+
+    It is different from `test_deadline_publish_in_nuke`, as that one covers
+    the `render` family, so this test expects different subset names.
+
+    Uses the generic TestCase to prepare fixtures for test data, testing DBs
+    and env vars.
+
+    !!!
+    It expects a path in the WriteNode starting with 'c:/projects'; it
+    replaces it with the correct value in a temp folder.
+    Access the file path by selecting the WriteNode group, pressing
+    CTRL+Enter, and updating the file input.
+    !!!
+
+    Opens Nuke and runs publish on the prepared workfile.
+
+    Then checks the content of the DB (whether the subset, version and
+    representations were created).
+    Checks the tmp folder to verify that all expected files were published.
+
+    How to run:
+    (in cmd with activated {OPENPYPE_ROOT}/.venv)
+    {OPENPYPE_ROOT}/.venv/Scripts/python.exe {OPENPYPE_ROOT}/start.py
+    runtests ../tests/integration/hosts/nuke  # noqa: E501
+
+    To check logs/errors from the launched app's publish process, keep
+    PERSIST set to True and check the `test_openpype.logs` collection.
+    """
+    TEST_FILES = [
+        ("1aQaKo3cF-fvbTfvODIRFMxgherjbJ4Ql",
+         "test_nuke_deadline_publish_in_nuke_prerender.zip", "")
+    ]
+
+    APP_GROUP = "nuke"
+
+    TIMEOUT = 180  # publish timeout
+
+    # could be overwritten by command line arguments
+    # keep empty to locate latest installed variant or set explicitly
+    APP_VARIANT = ""
+    PERSIST = False  # True - keep test_db, test_openpype, outputted test files
+    TEST_DATA_FOLDER = None
+
+    def test_db_asserts(self, dbcon, publish_finished):
+        """Host and input data dependent expected results in DB."""
+        print("test_db_asserts")
+        failures = []
+
+        failures.append(DBAssert.count_of_types(dbcon, "version", 2))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "version", 0, name={"$ne": 1}))
+
+        # prerender has only the default subset format `{family}{variant}`;
+        # Key01 is the variant used
+        failures.append(
+            DBAssert.count_of_types(dbcon, "subset", 1,
+                                    name="prerenderKey01"))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "subset", 1,
+                                    name="workfileTest_task"))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 2))
+
+        additional_args = {"context.subset": "workfileTest_task",
+                           "context.ext": "nk"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 1,
+                                    additional_args=additional_args))
+
+        additional_args = {"context.subset": "prerenderKey01",
+                           "context.ext": "exr"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 1,
+                                    additional_args=additional_args))
+
+        # prerender doesn't have creation of a review set by default
+        additional_args = {"context.subset": "prerenderKey01",
+                           "name": "thumbnail"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 0,
+                                    additional_args=additional_args))
+
+        additional_args = {"context.subset": "prerenderKey01",
+                           "name": "h264_mov"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 0,
+                                    additional_args=additional_args))
+
+        assert not any(failures)
+
+
+if __name__ == "__main__":
+    test_case = TestDeadlinePublishInNukePrerender()
diff --git a/common/ayon_common/distribution/file_handler.py b/tests/lib/file_handler.py
similarity index 100%
rename from common/ayon_common/distribution/file_handler.py
rename to tests/lib/file_handler.py
diff --git a/tests/lib/testing_classes.py b/tests/lib/testing_classes.py
index f04607dc27b..2af4af02dea 100644
--- a/tests/lib/testing_classes.py
+++ b/tests/lib/testing_classes.py
@@ -12,7 +12,7 @@ import re
 
 from tests.lib.db_handler import DBHandler
-from common.ayon_common.distribution.file_handler import RemoteFileHandler
+from tests.lib.file_handler import RemoteFileHandler
 from openpype.modules import ModulesManager
 from openpype.settings import get_project_settings
diff --git a/tests/unit/openpype/hosts/unreal/plugins/publish/test_validate_sequence_frames.py b/tests/unit/openpype/hosts/unreal/plugins/publish/test_validate_sequence_frames.py
index 17e47c9f646..f472b8052a0 100644
--- a/tests/unit/openpype/hosts/unreal/plugins/publish/test_validate_sequence_frames.py
+++ b/tests/unit/openpype/hosts/unreal/plugins/publish/test_validate_sequence_frames.py
@@ -19,7 +19,7 @@ from pyblish.api import Instance as PyblishInstance
 from
tests.lib.testing_classes import BaseTest -from openpype.plugins.publish.validate_sequence_frames import ( +from openpype.hosts.unreal.plugins.publish.validate_sequence_frames import ( ValidateSequenceFrames ) @@ -38,7 +38,13 @@ class Instance(PyblishInstance): data = { "frameStart": 1001, "frameEnd": 1002, - "representations": [] + "representations": [], + "assetEntity": { + "data": { + "clipIn": 1001, + "clipOut": 1002, + } + } } yield Instance @@ -58,6 +64,7 @@ def test_validate_sequence_frames_single_frame(self, instance, plugin): ] instance.data["representations"] = representations instance.data["frameEnd"] = 1001 + instance.data["assetEntity"]["data"]["clipOut"] = 1001 plugin.process(instance) @@ -84,49 +91,11 @@ def test_validate_sequence_frames_name(self, instance, plugin.process(instance) - @pytest.mark.parametrize("files", - [["Main_beauty.1001.v001.exr", - "Main_beauty.1002.v001.exr"]]) - def test_validate_sequence_frames_wrong_name(self, instance, - plugin, files): - # tests for names with number inside, caused clique failure before - representations = [ - { - "ext": "exr", - "files": files, - } - ] - instance.data["representations"] = representations - - with pytest.raises(AssertionError) as excinfo: - plugin.process(instance) - assert ("Must detect single collection" in - str(excinfo.value)) - - @pytest.mark.parametrize("files", - [["Main_beauty.v001.1001.ass.gz", - "Main_beauty.v001.1002.ass.gz"]]) - def test_validate_sequence_frames_possible_wrong_name( - self, instance, plugin, files): - # currently pattern fails on extensions with dots - representations = [ - { - "files": files, - } - ] - instance.data["representations"] = representations - - with pytest.raises(AssertionError) as excinfo: - plugin.process(instance) - assert ("Must not have remainder" in - str(excinfo.value)) - @pytest.mark.parametrize("files", [["Main_beauty.v001.1001.ass.gz", "Main_beauty.v001.1002.ass.gz"]]) def test_validate_sequence_frames__correct_ext( self, instance, plugin, files): - # currently pattern fails on extensions with dots representations = [ { "ext": "ass.gz", @@ -147,6 +116,7 @@ def test_validate_sequence_frames_multi_frame(self, instance, plugin): ] instance.data["representations"] = representations instance.data["frameEnd"] = 1003 + instance.data["assetEntity"]["data"]["clipOut"] = 1003 plugin.process(instance) @@ -160,6 +130,7 @@ def test_validate_sequence_frames_multi_frame_missing(self, instance, ] instance.data["representations"] = representations instance.data["frameEnd"] = 1003 + instance.data["assetEntity"]["data"]["clipOut"] = 1003 with pytest.raises(ValueError) as excinfo: plugin.process(instance) @@ -175,6 +146,7 @@ def test_validate_sequence_frames_multi_frame_hole(self, instance, plugin): ] instance.data["representations"] = representations instance.data["frameEnd"] = 1003 + instance.data["assetEntity"]["data"]["clipOut"] = 1003 with pytest.raises(AssertionError) as excinfo: plugin.process(instance) @@ -195,6 +167,7 @@ def test_validate_sequence_frames_slate(self, instance, plugin): instance.data["slate"] = True instance.data["representations"] = representations instance.data["frameEnd"] = 1003 + instance.data["assetEntity"]["data"]["clipOut"] = 1003 plugin.process(instance) diff --git a/tests/unit/openpype/lib/test_delivery.py b/tests/unit/openpype/lib/test_delivery.py index 04a71655e32..f1e435f3f8c 100644 --- a/tests/unit/openpype/lib/test_delivery.py +++ b/tests/unit/openpype/lib/test_delivery.py @@ -1,6 +1,6 @@ # -*- coding: utf-8 -*- """Test suite for delivery 
functions.""" -from openpype.lib.delivery import collect_frames +from openpype.lib import collect_frames def test_collect_frames_multi_sequence(): @@ -153,4 +153,3 @@ def test_collect_frames_single_file(): print(ret) assert ret == expected, "Not matching" - diff --git a/tests/unit/openpype/lib/test_event_system.py b/tests/unit/openpype/lib/test_event_system.py new file mode 100644 index 00000000000..aa3f9290659 --- /dev/null +++ b/tests/unit/openpype/lib/test_event_system.py @@ -0,0 +1,83 @@ +from openpype.lib.events import EventSystem, QueuedEventSystem + + +def test_default_event_system(): + output = [] + expected_output = [3, 2, 1] + event_system = EventSystem() + + def callback_1(): + event_system.emit("topic.2", {}, None) + output.append(1) + + def callback_2(): + event_system.emit("topic.3", {}, None) + output.append(2) + + def callback_3(): + output.append(3) + + event_system.add_callback("topic.1", callback_1) + event_system.add_callback("topic.2", callback_2) + event_system.add_callback("topic.3", callback_3) + + event_system.emit("topic.1", {}, None) + + assert output == expected_output, ( + "Callbacks were not called in correct order") + + +def test_base_event_system_queue(): + output = [] + expected_output = [1, 2, 3] + event_system = QueuedEventSystem() + + def callback_1(): + event_system.emit("topic.2", {}, None) + output.append(1) + + def callback_2(): + event_system.emit("topic.3", {}, None) + output.append(2) + + def callback_3(): + output.append(3) + + event_system.add_callback("topic.1", callback_1) + event_system.add_callback("topic.2", callback_2) + event_system.add_callback("topic.3", callback_3) + + event_system.emit("topic.1", {}, None) + + assert output == expected_output, ( + "Callbacks were not called in correct order") + + +def test_manual_event_system_queue(): + output = [] + expected_output = [1, 2, 3] + event_system = QueuedEventSystem(auto_execute=False) + + def callback_1(): + event_system.emit("topic.2", {}, None) + output.append(1) + + def callback_2(): + event_system.emit("topic.3", {}, None) + output.append(2) + + def callback_3(): + output.append(3) + + event_system.add_callback("topic.1", callback_1) + event_system.add_callback("topic.2", callback_2) + event_system.add_callback("topic.3", callback_3) + + event_system.emit("topic.1", {}, None) + + while True: + if event_system.process_next_event() is None: + break + + assert output == expected_output, ( + "Callbacks were not called in correct order") diff --git a/tests/unit/openpype/modules/sync_server/test_site_operations.py b/tests/unit/openpype/modules/sync_server/test_site_operations.py index 6a861100a42..c4a83e33a68 100644 --- a/tests/unit/openpype/modules/sync_server/test_site_operations.py +++ b/tests/unit/openpype/modules/sync_server/test_site_operations.py @@ -12,16 +12,19 @@ removes temporary databases (?) 
""" import pytest +from bson.objectid import ObjectId from tests.lib.testing_classes import ModuleUnitTest -from bson.objectid import ObjectId + +from openpype.modules.sync_server.utils import SiteAlreadyPresentError + class TestSiteOperation(ModuleUnitTest): REPRESENTATION_ID = "60e578d0c987036c6a7b741d" - TEST_FILES = [("1eCwPljuJeOI8A3aisfOIBKKjcmIycTEt", + TEST_FILES = [("1FHE70Hi7y05LLT_1O3Y6jGxwZGXKV9zX", "test_site_operations.zip", '')] @pytest.fixture(scope="module") @@ -71,7 +74,7 @@ def test_add_site(self, dbcon, setup_sync_server_module): @pytest.mark.usefixtures("setup_sync_server_module") def test_add_site_again(self, dbcon, setup_sync_server_module): """Depends on test_add_site, must throw exception.""" - with pytest.raises(ValueError): + with pytest.raises(SiteAlreadyPresentError): setup_sync_server_module.add_site(self.TEST_PROJECT_NAME, self.REPRESENTATION_ID, site_name='test_site') diff --git a/tools/docker_build.ps1 b/tools/docker_build.ps1 new file mode 100644 index 00000000000..392165288c4 --- /dev/null +++ b/tools/docker_build.ps1 @@ -0,0 +1,98 @@ +$current_dir = Get-Location +$script_dir = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent +$repo_root = (Get-Item $script_dir).parent.FullName + +$env:PSModulePath = $env:PSModulePath + ";$($repo_root)\tools\modules\powershell" + +function Exit-WithCode($exitcode) { + # Only exit this host process if it's a child of another PowerShell parent process... + $parentPID = (Get-CimInstance -ClassName Win32_Process -Filter "ProcessId=$PID" | Select-Object -Property ParentProcessId).ParentProcessId + $parentProcName = (Get-CimInstance -ClassName Win32_Process -Filter "ProcessId=$parentPID" | Select-Object -Property Name).Name + if ('powershell.exe' -eq $parentProcName) { $host.SetShouldExit($exitcode) } + + exit $exitcode +} + +function Restore-Cwd() { + $tmp_current_dir = Get-Location + if ("$tmp_current_dir" -ne "$current_dir") { + Write-Color -Text ">>> ", "Restoring current directory" -Color Green, Gray + Set-Location -Path $current_dir + } +} + +function Get-Container { + if (-not (Test-Path -PathType Leaf -Path "$($repo_root)\build\docker-image.id")) { + Write-Color -Text "!!! ", "Docker command failed, cannot find image id." -Color Red, Yellow + Restore-Cwd + Exit-WithCode 1 + } + $id = Get-Content "$($repo_root)\build\docker-image.id" + Write-Color -Text ">>> ", "Creating container from image id ", "[", $id, "]" -Color Green, Gray, White, Cyan, White + $cid = docker create $id bash + if ($LASTEXITCODE -ne 0) { + Write-Color -Text "!!! ", "Cannot create container." -Color Red, Yellow + Restore-Cwd + Exit-WithCode 1 + } + return $cid +} + +function Change-Cwd() { + Set-Location -Path $repo_root +} + +function New-DockerBuild { + $version_file = Get-Content -Path "$($repo_root)\openpype\version.py" + $result = [regex]::Matches($version_file, '__version__ = "(?\d+\.\d+.\d+.*)"') + $openpype_version = $result[0].Groups['version'].Value + $startTime = [int][double]::Parse((Get-Date -UFormat %s)) + Write-Color -Text ">>> ", "Building OpenPype using Docker ..." -Color Green, Gray, White + $variant = $args[0] + if ($variant.Length -eq 0) { + $dockerfile = "$($repo_root)\Dockerfile" + } else { + $dockerfile = "$( $repo_root )\Dockerfile.$variant" + } + if (-not (Test-Path -PathType Leaf -Path $dockerfile)) { + Write-Color -Text "!!! ", "Dockerfile for specifed platform ", "[", $variant, "]", "doesn't exist." 
-Color Red, Yellow, Cyan, White, Cyan, Yellow + Restore-Cwd + Exit-WithCode 1 + } + Write-Color -Text ">>> ", "Using Dockerfile for ", "[ ", $variant, " ]" -Color Green, Gray, White, Cyan, White + + $build_dir = "$($repo_root)\build" + if (-not(Test-Path $build_dir)) { + New-Item -ItemType Directory -Path $build_dir + } + Write-Color -Text "--- ", "Cleaning build directory ..." -Color Yellow, Gray + try { + Remove-Item -Recurse -Force "$($build_dir)\*" + } catch { + Write-Color -Text "!!! ", "Cannot clean build directory, possibly because process is using it." -Color Red, Gray + Write-Color -Text $_.Exception.Message -Color Red + Exit-WithCode 1 + } + + Write-Color -Text ">>> ", "Running Docker build ..." -Color Green, Gray, White + docker build --pull --iidfile $repo_root/build/docker-image.id --build-arg BUILD_DATE=$(Get-Date -UFormat %Y-%m-%dT%H:%M:%SZ) --build-arg VERSION=$openpype_version -t pypeclub/openpype:$openpype_version -f $dockerfile . + if ($LASTEXITCODE -ne 0) { + Write-Color -Text "!!! ", "Docker command failed.", $LASTEXITCODE -Color Red, Yellow, Red + Restore-Cwd + Exit-WithCode 1 + } + Write-Color -Text ">>> ", "Copying build from container ..." -Color Green, Gray, White + $cid = Get-Container + + docker cp "$($cid):/opt/openpype/build/exe.linux-x86_64-3.9" "$($repo_root)/build" + docker cp "$($cid):/opt/openpype/build/build.log" "$($repo_root)/build" + + $endTime = [int][double]::Parse((Get-Date -UFormat %s)) + try { + New-BurntToastNotification -AppLogo "$openpype_root/openpype/resources/icons/openpype_icon.png" -Text "OpenPype build complete!", "All done in $( $endTime - $startTime ) secs. You will find OpenPype and build log in build directory." + } catch {} + Write-Color -Text "*** ", "All done in ", $($endTime - $startTime), " secs. You will find OpenPype and build log in ", "'.\build'", " directory." -Color Green, Gray, White, Gray, White, Gray +} + +Change-Cwd +New-DockerBuild $ARGS diff --git a/tools/run_tray_ayon.ps1 b/tools/run_tray_ayon.ps1 deleted file mode 100644 index 54a80f93fd1..00000000000 --- a/tools/run_tray_ayon.ps1 +++ /dev/null @@ -1,41 +0,0 @@ -<# -.SYNOPSIS - Helper script AYON Tray. - -.DESCRIPTION - - -.EXAMPLE - -PS> .\run_tray.ps1 - -#> -$current_dir = Get-Location -$script_dir = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent -$ayon_root = (Get-Item $script_dir).parent.FullName - -# Install PSWriteColor to support colorized output to terminal -$env:PSModulePath = $env:PSModulePath + ";$($ayon_root)\tools\modules\powershell" - -$env:_INSIDE_OPENPYPE_TOOL = "1" - -# make sure Poetry is in PATH -if (-not (Test-Path 'env:POETRY_HOME')) { - $env:POETRY_HOME = "$ayon_root\.poetry" -} -$env:PATH = "$($env:PATH);$($env:POETRY_HOME)\bin" - - -Set-Location -Path $ayon_root - -Write-Color -Text ">>> ", "Reading Poetry ... " -Color Green, Gray -NoNewline -if (-not (Test-Path -PathType Container -Path "$($env:POETRY_HOME)\bin")) { - Write-Color -Text "NOT FOUND" -Color Yellow - Write-Color -Text "*** ", "We need to install Poetry create virtual env first ..." 
-Color Yellow, Gray - & "$ayon_root\tools\create_env.ps1" -} else { - Write-Color -Text "OK" -Color Green -} - -& "$($env:POETRY_HOME)\bin\poetry" run python "$($ayon_root)\ayon_start.py" tray --debug -Set-Location -Path $current_dir diff --git a/tools/run_tray_ayon.sh b/tools/run_tray_ayon.sh deleted file mode 100755 index 3039750b87d..00000000000 --- a/tools/run_tray_ayon.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env bash -# Run AYON Tray - -# Colors for terminal - -RST='\033[0m' # Text Reset - -# Regular Colors -Black='\033[0;30m' # Black -Red='\033[0;31m' # Red -Green='\033[0;32m' # Green -Yellow='\033[0;33m' # Yellow -Blue='\033[0;34m' # Blue -Purple='\033[0;35m' # Purple -Cyan='\033[0;36m' # Cyan -White='\033[0;37m' # White - -# Bold -BBlack='\033[1;30m' # Black -BRed='\033[1;31m' # Red -BGreen='\033[1;32m' # Green -BYellow='\033[1;33m' # Yellow -BBlue='\033[1;34m' # Blue -BPurple='\033[1;35m' # Purple -BCyan='\033[1;36m' # Cyan -BWhite='\033[1;37m' # White - -# Bold High Intensity -BIBlack='\033[1;90m' # Black -BIRed='\033[1;91m' # Red -BIGreen='\033[1;92m' # Green -BIYellow='\033[1;93m' # Yellow -BIBlue='\033[1;94m' # Blue -BIPurple='\033[1;95m' # Purple -BICyan='\033[1;96m' # Cyan -BIWhite='\033[1;97m' # White - - -############################################################################## -# Return absolute path -# Globals: -# None -# Arguments: -# Path to resolve -# Returns: -# None -############################################################################### -realpath () { - echo $(cd $(dirname "$1"); pwd)/$(basename "$1") -} - -# Main -main () { - # Directories - ayon_root=$(realpath $(dirname $(dirname "${BASH_SOURCE[0]}"))) - - _inside_openpype_tool="1" - - if [[ -z $POETRY_HOME ]]; then - export POETRY_HOME="$ayon_root/.poetry" - fi - - echo -e "${BIGreen}>>>${RST} Reading Poetry ... \c" - if [ -f "$POETRY_HOME/bin/poetry" ]; then - echo -e "${BIGreen}OK${RST}" - else - echo -e "${BIYellow}NOT FOUND${RST}" - echo -e "${BIYellow}***${RST} We need to install Poetry and virtual env ..." - . "$ayon_root/tools/create_env.sh" || { echo -e "${BIRed}!!!${RST} Poetry installation failed"; return; } - fi - - pushd "$ayon_root" > /dev/null || return > /dev/null - - echo -e "${BIGreen}>>>${RST} Running AYON Tray with debug option ..." - "$POETRY_HOME/bin/poetry" run python3 "$ayon_root/ayon_start.py" tray --debug -} - -main diff --git a/website/docs/admin_hosts_maya.md b/website/docs/admin_hosts_maya.md index 700822843f1..93acf316c23 100644 --- a/website/docs/admin_hosts_maya.md +++ b/website/docs/admin_hosts_maya.md @@ -113,7 +113,8 @@ This is useful to fix some specific renderer glitches and advanced hacking of Ma #### Namespace and Group Name Here you can create your own custom naming for the reference loader. -The custom naming is split into two parts: namespace and group name. If you don't set the namespace or the group name, an error will occur. +The custom naming is split into two parts: namespace and group name. If you don't set the namespace, an error will occur. +Group name could be set empty, that way no wrapping group will be created for loaded item. Here's the different variables you can use:

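> Editor's illustration for the Maya hunk above: a minimal sketch of the "empty group name skips grouping" behavior. The helper name `wrap_in_group_if_needed` is hypothetical and only demonstrates the rule; it is not the actual ReferenceLoader code.

```python
# Minimal sketch, assuming a hypothetical helper rather than OpenPype's API.
from maya import cmds


def wrap_in_group_if_needed(nodes, group_name):
    """Wrap loaded nodes in a group only when a group name is configured."""
    if not group_name:
        # Empty group name template: leave the loaded nodes ungrouped.
        return nodes
    # Non-empty template: parent everything under a single wrapping group.
    group = cmds.group(nodes, name=group_name)
    return [group]
```

With a configured name the loaded nodes end up under one group; with an empty template they stay at their original place in the scene.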
diff --git a/website/docs/admin_hosts_resolve.md b/website/docs/admin_hosts_resolve.md
index 09e7df1d9f5..8bb8440f785 100644
--- a/website/docs/admin_hosts_resolve.md
+++ b/website/docs/admin_hosts_resolve.md
@@ -4,100 +4,38 @@ title: DaVinci Resolve Setup
 sidebar_label: DaVinci Resolve
 ---
 
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
+:::warning
+Only Resolve Studio is supported due to Python API limitation in Resolve (free).
+:::
 
 ## Resolve requirements
 
 Due to the way Resolve handles Python and Python scripts, there are a few required steps that need to be done on any machine that will be using OpenPype with Resolve.
 
-### Installing Resolve's own python 3.6 interpreter.
-Resolve uses a hardcoded method to look for the python executable path. All of tho following paths are defined automatically by Python msi installer. We are using Python 3.6.2.
-
-
-
-
-`%LOCALAPPDATA%\Programs\Python\Python36`
-
-
-
-`/opt/Python/3.6/bin`
-
-
-
-`~/Library/Python/3.6/bin`
-
-
-
-
-
-### Installing PySide2 into python 3.6 for correct gui work
-OpenPype is using its own window widget inside Resolve, for that reason PySide2 has to be installed into the python 3.6 (as explained above).
-
-
-
-
-paste to any terminal of your choice
-
-```bash
-%LOCALAPPDATA%\Programs\Python\Python36\python.exe -m pip install PySide2
-```
-
-
-
-paste to any terminal of your choice
-
-```bash
-/opt/Python/3.6/bin/python -m pip install PySide2
-```
-
-
-
-paste to any terminal of your choice
-
-```bash
-~/Library/Python/3.6/bin/python -m pip install PySide2
-```
-
-
-
-
-
-### Set Resolve's Fusion settings for Python 3.6 interpereter
-
-
-
-As it is shown in below picture you have to go to Fusion Tab and then in Fusion menu find Fusion Settings. Go to Fusion/Script and find Default Python Version and switch to Python 3.6
-
-
-
-
-![Create menu](assets/resolve_fusion_tab.png)
-![Create menu](assets/resolve_fusion_menu.png)
-![Create menu](assets/resolve_fusion_script_settings.png)
-
-
-
\ No newline at end of file
+## Basic setup
+
+- Supported versions go up to Resolve v18.
+- Install Python 3.6.2 (latest version tested with v17) or up to 3.9.13 (latest tested with v18).
+- pip install PySide2:
+  - Python 3.9.*: open a terminal, go to the python.exe directory, then `python -m pip install PySide2`
+- pip install OpenTimelineIO:
+  - Python 3.9.*: open a terminal, go to the python.exe directory, then `python -m pip install OpenTimelineIO`
+  - Python 3.6: open a terminal, go to the python.exe directory, then `python -m pip install git+https://github.com/PixarAnimationStudios/OpenTimelineIO.git@5aa24fbe89d615448876948fe4b4900455c9a3e8` and move the built files from `./Lib/site-packages/opentimelineio/cxx-libs/bin and lib` to `./Lib/site-packages/opentimelineio/`. It was built on a Windows 10 machine with Visual Studio Community 2019 and
+  ![image](https://user-images.githubusercontent.com/40640033/102792588-ffcb1c80-43a8-11eb-9c6b-bf2114ed578e.png) with CMake installed in PATH.
+- Make sure Resolve Fusion (Fusion Tab/menu/Fusion/Fusion Settings) is set to Python 3.6
+  ![image](https://user-images.githubusercontent.com/40640033/102631545-280b0f00-414e-11eb-89fc-98ac268d209d.png)
+- Open OpenPype **Tray/Admin/Studio settings** > `applications/resolve/environment` and add the Python 3 path to `RESOLVE_PYTHON3_HOME` for the relevant platform (see the sanity-check sketch below).
+
+## Editorial setup
+
+This is how it looks on a testing project timeline:
+![image](https://user-images.githubusercontent.com/40640033/102637638-96ec6600-4156-11eb-9656-6e8e3ce4baf8.png)
+Notice the tracks were renamed to `main` (holding metadata markers) and `review` (used for generating review data with ffmpeg conversion to a jpg sequence).
+
+1. Start the OpenPype menu from Resolve/EditTab/Menu/Workspace/Scripts/Comp/**__OpenPype_Menu__**.
+2. Select any clips in the `main` track and change their color to `Chocolate`.
+3. In the OpenPype menu, select `Create`.
+4. In the Creator, select `Create Publishable Clip [New]` (temporary name).
+5. Set `Rename clips` to True, Master Track to `main` and Use review track to `review`, as in the picture.
+   ![image](https://user-images.githubusercontent.com/40640033/102643773-0d419600-4160-11eb-919e-9c2be0aecab8.png)
+6. After you hit `ok`, all clips are colored `Pink` and marked with an OpenPype metadata tag.
+7. Hit `Publish` in the OpenPype menu and check that everything has been collected correctly. That is the last step for now, as the rest is work in progress. Next steps will follow.
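> Editor's note: a quick way to verify that the interpreter configured in `RESOLVE_PYTHON3_HOME` can actually talk to Resolve is a minimal connection check. This sketch assumes Resolve Studio is running and that Resolve's scripting module (shipped under its `Developer/Scripting/Modules` folder) is importable, e.g. via `PYTHONPATH`.

```python
# Sanity check: run with the same Python configured in RESOLVE_PYTHON3_HOME.
# Assumes Resolve Studio is running and DaVinciResolveScript is importable.
import DaVinciResolveScript as dvr_script

# scriptapp() returns None when no running Resolve instance can be reached.
resolve = dvr_script.scriptapp("Resolve")
if resolve is None:
    raise RuntimeError("Could not connect to a running Resolve Studio instance")

project = resolve.GetProjectManager().GetCurrentProject()
print("Connected to Resolve project:", project.GetName())
```

If this prints the current project name, the interpreter, PySide2/OpenTimelineIO installs, and the Resolve scripting bridge are all in place for the OpenPype integration.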