Add ONNX-MLIR dialect support #40

Open · wants to merge 18 commits into main

Conversation
Conversation

@kartikeyporwal (Contributor) commented on Apr 27, 2022:

This PR adds ONNX-MLIR support to nebullvm, based on #31.

A few notable points:

  • The ONNX-MLIR project needs to be built from source
  • It depends on LLVM-PROJECT, which also needs to be built from source
  • The Python wrapper, PyRuntime, is built with pybind11 and needs to be on the path (see the sketch after this list)
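
For readers unfamiliar with the workflow, here is a minimal sketch of how an ONNX model is typically compiled and then executed through PyRuntime. This is illustrative only and is not the exact code added in this PR: it assumes the onnx-mlir driver is on the PATH, that --EmitLib emits a shared library next to the input model, and that the pybind11-built PyRuntime module exposes an ExecutionSession class (its constructor arguments have varied between ONNX-MLIR releases).

```python
# Illustrative sketch only; not the exact integration code in this PR.
# Assumes: the `onnx-mlir` driver is on PATH, and the pybind11-built
# PyRuntime module (shipped with an ONNX-MLIR source build) is importable.
import subprocess
from pathlib import Path

import numpy as np


def compile_onnx_model(onnx_path: str) -> str:
    """Compile an ONNX model to a shared library with the onnx-mlir driver."""
    # `--EmitLib` asks onnx-mlir to emit a loadable shared library for the model.
    subprocess.run(["onnx-mlir", "--EmitLib", onnx_path], check=True)
    return str(Path(onnx_path).with_suffix(".so"))


def run_compiled_model(shared_lib: str, inputs):
    """Run inference through ONNX-MLIR's PyRuntime wrapper."""
    # ExecutionSession's constructor arguments differ between ONNX-MLIR
    # releases (older ones also require the entry-point name "run_main_graph").
    from PyRuntime import ExecutionSession

    session = ExecutionSession(shared_lib)
    return session.run(inputs)


if __name__ == "__main__":
    lib = compile_onnx_model("model.onnx")
    outs = run_compiled_model(lib, [np.random.rand(1, 3, 224, 224).astype(np.float32)])
    print([o.shape for o in outs])
```

In nebullvm, a session like this would presumably be wrapped inside the inference learner added by this PR (nebullvm/inference_learners/onnx_mlir.py) rather than called directly.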

@kartikeyporwal changed the title from "- Add ONNX-MLIR dialect support" to "Add ONNX-MLIR dialect support" on Apr 27, 2022
@kartikeyporwal marked this pull request as ready for review on April 27, 2022 00:32
@diegofiori self-requested a review on April 27, 2022 18:18
Review comments (now outdated and resolved) were left on:

  • nebullvm/base.py
  • nebullvm/inference_learners/onnx_mlir.py (4 threads)
  • nebullvm/optimizers/multi_compiler.py
@diegofiori (Collaborator) commented:
@kartikeyporwal 🥇🚀 thanks for the amazing contribution ⭐! However, there are a few things to change before merging the code into master (see the specific comments in the code section). By the way, we're working on a huge release for next week, so I won't merge this PR before then.

@kartikeyporwal requested a review from diegofiori on May 8, 2022 10:44
@kartikeyporwal (Contributor, Author) commented:

@morgoth95 Thanks for suggesting the changes; I've made them. Please feel free to review again. Thanks!

kartikeyporwal and others added 9 commits May 8, 2022 16:28
* add half precision and transformations logic

* fix bugs

* add support to gpu

* add support to gpu

* fix minor bug in gpu code

* fix minor bug in gpu code

* refactor quantization

* add dataset interface

* fix bug

* fix bugs with dataset api

* fix bug

* update test

* solve minor issues

* fix error in cuda

* fix error with tensorRT

* fix bug in tvm and change name of quantization_ths

* add resources

* Modify readme (nebuly-ai#48)

* Create section - integration with other libraries

* update readme (work-in-progress)

* Minor Readme update

* rename notebook

* update notebook

* rename notebook

* Rename notebook

* Rename notebook

* Rename notebook

* Rename notebook

* Rename notebook

* update version to 0.3.0

* Update readme with latest release information

* Benchmarks

* Update readme

* Update readme with benchmarks

* Update readme, minor changes

* solve api issue with tf

* fix typos in benchmarks

Co-authored-by: morgoth95 <[email protected]>
Co-authored-by: Emile Courthoud <[email protected]>
Co-authored-by: Nebuly <[email protected]>
@kartikeyporwal (Contributor, Author) commented:

Hi team,

Could you please let me know where we stand on merging this PR into nebullvm?

Thanks

CC: @morgoth95
