Examples for using XPU with IPEX #904
Conversation
This commit introduces an example of using an XPU with the Workflow Interface. The example demonstrates how to leverage the power of XPU to optimize the execution of complex workflows with OpenFL.
This commit introduces an example of using an XPU with the non-federated_case.
The PR looks good! The two changes needed are:
- Explain what device(s) these examples support (Intel® Data Center GPU Max Series) before mentioning XPU
- Update copyright
...fl-tutorials/interactive_api/PyTorch_TinyImageNet_XPU/envoy/tinyimagenet_shard_descriptor.py
openfl-tutorials/experimental/Workflow_Interface_104_MNIST_XPU.ipynb
All changes were completed.
Approved! Great addition to OpenFL, @manuelhsantana 🚀
* Added XPU example for Workflow Interface: introduces an example of using an XPU with the Workflow Interface, demonstrating how to leverage the power of XPU to optimize the execution of complex workflows with OpenFL.
* Added XPU example for non-federated_case: introduces an example of using an XPU with the non-federated_case.
* Removed non-federated_case_XPU file
* Updated Workflow_Interface_104_MNIST_XPU file
* Added TinyImagenet example for interactive api and XPU
* Update xpu definition and copyright
* Added link for download xpu driver

Signed-off-by: nammbash <[email protected]>
Signed-off-by: manuelhsantana <[email protected]>
New Examples: XPU and Intel® Extension for PyTorch
This PR introduces new examples demonstrating the use of XPU and the Intel® Extension for PyTorch. These examples aim to give users practical guides and a clear understanding of these features, promoting their efficient and effective use in OpenFL workflows.
In these examples, when we refer to XPU we are specifically referring to the Intel® Data Center GPU Max Series. When using the Intel® Extension for PyTorch* package, selecting 'xpu' as the device targets this Intel® Data Center GPU Max Series.
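As a minimal sketch of what device selection looks like (not code taken from the PR; it assumes an XPU build of the Intel® Extension for PyTorch* is installed alongside a matching PyTorch version):

```python
# Minimal sketch: check for an XPU device and move a tensor onto it.
# Assumes an XPU-enabled install of intel_extension_for_pytorch.
import torch
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device with PyTorch

device = torch.device("xpu" if torch.xpu.is_available() else "cpu")
x = torch.randn(4, 4).to(device)
print(x.device)  # e.g. 'xpu:0' on an Intel® Data Center GPU Max Series card
```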
The following examples have been added:
Federated PyTorch TinyImageNet Tutorial (XPU version): Demonstrates a practical application of XPU and the Intel® Extension for PyTorch by training a federated PyTorch model on the TinyImageNet dataset using the interactive API. It shows how these features fit into the training process and offers a hands-on experience for users.
Workflow Interface 104: MNIST XPU: A beginner's guide to using an XPU with the Workflow Interface in OpenFL. It provides step-by-step instructions on how to integrate and use the XPU, helping users get started quickly with this feature (a minimal training-step sketch follows below).
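The sketch below illustrates the general pattern these examples build on; it is hypothetical and not the notebooks' exact code (the model, batch shapes, and learning rate are placeholders). It shows moving a model to the 'xpu' device and letting `ipex.optimize` apply Intel-specific optimizations before a training step:

```python
# Hypothetical training-step sketch, assuming an XPU-enabled install of
# intel_extension_for_pytorch; not the exact code from the MNIST XPU notebook.
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# Placeholder model and optimizer, moved to the XPU before optimization.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).to("xpu")
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# ipex.optimize returns the optimized model/optimizer pair for XPU execution.
model, optimizer = ipex.optimize(model, optimizer=optimizer)

criterion = nn.CrossEntropyLoss()
data = torch.randn(32, 1, 28, 28).to("xpu")        # stand-in for an MNIST batch
target = torch.randint(0, 10, (32,)).to("xpu")     # stand-in labels

optimizer.zero_grad()
loss = criterion(model(data), target)
loss.backward()
optimizer.step()
```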
These additions reflect our commitment to providing comprehensive resources that help users leverage the full potential of Intel technologies in their OpenFL workflows. Please review these changes; feedback and suggestions to further improve the examples are welcome.