
How does XLA handle op code that needs to run on the CPU? #4

Open
engineer1109 opened this issue Apr 12, 2024 · 2 comments


@engineer1109

Some PyTorch code needs CPU op support.

How does XLA handle this?

Does it build multiple graphs?

@antkillerfarm

The pipeline is as follows:
pytorch op -> xla op -> TIM-VX op

We don't support PyTorch ops directly.

There are two ways to support an unofficial PyTorch op:

  • compose your custom op out of official PyTorch ops (see the sketch after this list).
  • add an op lowering in torch_xla that transforms your custom op into XLA op form.
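For the first approach, here is a minimal sketch. The op name `my_custom_act` and its formula are hypothetical; the point is that the body uses only official PyTorch ops, so the traced graph contains nothing torch_xla cannot lower:

```python
import torch

# Hypothetical custom op (name and formula are illustrative only).
# It is built purely from official PyTorch ops, so when it is traced
# on an XLA device every node already has an XLA lowering and no
# CPU fallback is needed.
def my_custom_act(x: torch.Tensor) -> torch.Tensor:
    return x * torch.clamp(x + 3.0, min=0.0, max=6.0) / 6.0
```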

@engineer1109 (Author)

It seems this is lazy mode, not immediate (eager) mode.
Ops are first written into a graph.
When an output is needed, the graph is compiled and run.
So all ops have to be in the graph.
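That matches torch_xla's lazy-tensor model. A minimal sketch of the behavior, assuming a standard torch_xla install (`xm.xla_device()` and `xm.mark_step()` are the stock torch_xla API; that the configured backend is TIM-VX is an assumption here):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # whichever XLA backend is configured (TIM-VX assumed here)

a = torch.randn(2, 2, device=device)
b = torch.randn(2, 2, device=device)
c = torch.matmul(a, b) + 1.0  # only traced into the graph; nothing runs yet

# Materializing a value (printing, .cpu(), or an explicit xm.mark_step())
# cuts the trace, compiles the accumulated graph, and executes it.
print(c.cpu())
xm.mark_step()
```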
