Merge branch 'main' into dev/docs
onuralpszr authored Mar 21, 2024
2 parents e5b9e86 + 3694a69 commit ebfd883
25 changes: 18 additions & 7 deletions README.md
- [Libraries, Platforms and Development Platform-specific Resources](#libraries-platforms-and-development-platform-specific-resources)
- [Platforms](#platforms)
- [Development Platform](#development-platform)
- [Framework](#framework)
- [Web](#web)
- [Mobile](#mobile)
- [Edge](#edge)
### Platforms

- [Hugging Face Hub](https://huggingface.co/): Collaborative platform for machine learning. Discover hundreds of thousands of open-source models able to work off-the-shelf in [/models](https://huggingface.co/models).
- [Kaggle Models](https://www.kaggle.com/models): Discover and use thousands of machine learning models, including the most popular diffusion models and LLMs.
- [PyTorch Hub](https://pytorch.org/hub/): Discover and publish models in a pre-trained model repository designed for research exploration.
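
As a sketch of how Hub models are addressed programmatically (assuming the `huggingface_hub` Python client is installed; `bert-base-uncased` is just an example repo):

```python
from huggingface_hub import hf_hub_url

# Build the resolve URL for one file in a model repo. This is pure string
# construction, no network access; "bert-base-uncased" is an arbitrary example.
url = hf_hub_url(repo_id="bert-base-uncased", filename="config.json")
print(url)

# To actually download and cache the file locally (requires network access):
# from huggingface_hub import hf_hub_download
# path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
```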

### Development Platform

- [ONNX Runtime](https://onnxruntime.ai/): A platform-agnostic runtime for executing ML models.

#### Framework

- [TensorFlow](https://www.tensorflow.org): An end-to-end open source platform for machine learning.
- [PyTorch](https://pytorch.org/): An open source machine learning framework that accelerates the path from research prototyping to production deployment.
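
To give a flavour of these frameworks' APIs, here is an illustrative PyTorch sketch that fits y = 2x with a single linear layer (numbers and hyperparameters are arbitrary):

```python
import torch

# Fit y = 2x with one linear layer and plain SGD.
torch.manual_seed(0)
model = torch.nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.tensor([[1.0], [2.0], [3.0]])
y = 2 * x

for _ in range(500):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()

# The learned weight should end up close to 2.
weight = model.weight.item()
```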

#### Web

- [Transformers.js](https://huggingface.co/docs/transformers.js/en/index): A library to run cutting-edge models directly in the browser.
- [huggingface.js](https://huggingface.co/docs/huggingface.js/en/index): A JavaScript library to interact with models on the Hugging Face Hub.
- [TensorFlow.js](https://www.tensorflow.org/js): A library for machine learning in JavaScript.
- [Mediapipe](https://developers.google.com/mediapipe/api/solutions/js): A framework that has prebuilt and customizable ML solutions, ready to deploy on the web.

#### Mobile

- [TensorFlow Lite](https://www.tensorflow.org/lite): A library to deploy models on mobile and edge devices.
- [Mediapipe](https://developers.google.com/mediapipe): A framework that has prebuilt and customizable ML solutions, ready to deploy on Android and iOS.
- [ExecuTorch](https://pytorch.org/executorch/): A library for on-device ML with PyTorch models on mobile and edge devices.
- [huggingface.dart](https://github.com/shivance/huggingface.dart): A Dart SDK to interact with models on the Hugging Face Hub.
- [flutter-tflite](https://github.com/tensorflow/flutter-tflite): A Flutter plugin providing an easy, flexible, and fast Dart API to integrate TFLite models in Flutter apps across mobile and desktop platforms.
- [NCNN](https://github.com/Tencent/ncnn): A high-performance neural network inference framework optimized for the mobile platform.

#### Edge

- [TensorFlow Lite](https://www.tensorflow.org/lite): A library to deploy models on mobile and edge devices.
- [ExecuTorch](https://pytorch.org/executorch/): A library for on-device ML with PyTorch models, everywhere from AR/VR wearables to mobile and edge devices.

#### Cloud Deployment

#### Serving

- [Text Generation Inference](https://huggingface.co/docs/text-generation-inference/index): Toolkit to serve large language models.
- [Text Embeddings Inference](https://huggingface.co/docs/text-embeddings-inference/index): Toolkit to serve text embeddings.
- [Nvidia Triton Inference Server](https://github.com/triton-inference-server/server): Open source inference serving software that streamlines AI inferencing. Triton enables teams to deploy any AI model from multiple deep learning and machine learning frameworks, including TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more.
- [PyTriton](https://triton-inference-server.github.io/pytriton/latest/): A Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments, allowing machine learning models to be served directly from Python through NVIDIA's Triton Inference Server.
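
To give a feel for how such servers are consumed, here is a sketch of a request to Text Generation Inference's `generate` endpoint. The host and port are assumptions for a locally running server, and the actual POST is commented out since it needs one:

```python
import json

# Request body for TGI's /generate endpoint; localhost:8080 is an assumed
# address for a locally running server.
url = "http://localhost:8080/generate"
payload = {
    "inputs": "What is deep learning?",
    "parameters": {"max_new_tokens": 20, "temperature": 0.7},
}

# With a server running, the call would look like (needs the `requests` package):
# import requests
# generated = requests.post(url, json=payload).json()["generated_text"]

body = json.dumps(payload)
```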

#### Game Development

## Modalities and Tasks

