Deploy Llama 2 (Chat 7B and 13B) in a few clicks on Inference Endpoints
The AI community building the future.
Build, train and deploy state-of-the-art models powered by the reference open source in machine learning.
- Meta AI
- Amazon Web Services
- Google
- Intel
- SpeechBrain
- Microsoft
- Grammarly
Hub
Home of Machine Learning

Create, discover and collaborate on ML better.

Join the community to start your ML journey.
Tasks
Problem solvers

Thousands of creators work as a community to solve Audio, Vision, and Language with AI.

Explore tasks

Open Source
Transformers

Transformers is our natural language processing library, and our hub is now open to all ML models, with support from libraries like Flair, Asteroid, ESPnet, Pyannote, and more to come.

Read documentation
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
```
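Loading the checkpoint is enough to start predicting. As a minimal illustrative sketch (not part of the original page), the same model can fill masked tokens through the `fill-mask` pipeline:

```python
from transformers import pipeline

# Fill a masked token with bert-base-uncased (illustrative sketch).
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prediction in unmasker("Hugging Face is a [MASK] for machine learning."):
    # Each prediction is a dict with the proposed token and its score.
    print(prediction["token_str"], round(prediction["score"], 3))
```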
On demand
Inference API

Serve your models directly from Hugging Face infrastructure and run large-scale NLP models in milliseconds with just a few lines of code.
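As a sketch of what "a few lines of code" can look like, the hosted API can be called over plain HTTP; the model id and the `hf_xxx` token below are placeholders to replace with your own:

```python
import requests

# Placeholder model id and API token: substitute your own values.
API_URL = "https://api-inference.huggingface.co/models/bert-base-uncased"
headers = {"Authorization": "Bearer hf_xxx"}

response = requests.post(
    API_URL,
    headers=headers,
    json={"inputs": "Paris is the [MASK] of France."},
)
print(response.json())  # ranked fill-mask predictions as JSON
```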
Learn more

Our Research contributions

We’re on a journey to advance and democratize NLP for everyone. Along the way, we contribute to the development of technology for the better.
🌸 T0
Multitask Prompted Training Enables Zero-Shot Task Generalization

Open-source, state-of-the-art zero-shot language model out of BigScience.

Read more
🐎 DistilBERT
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter

A smaller, faster, lighter, cheaper version of BERT obtained via model distillation.
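To make "smaller" concrete, here is a quick sketch comparing parameter counts; the checkpoint names are the public ones on the Hub, and the counts in the comments are approximate:

```python
from transformers import AutoModelForMaskedLM

# Compare parameter counts of the distilled and original checkpoints.
distilbert = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")
bert = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

print(f"DistilBERT parameters: {distilbert.num_parameters():,}")  # ~66M
print(f"BERT parameters:       {bert.num_parameters():,}")        # ~110M
```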
Read more

📚 HMTL
Hierarchical Multi-Task Learning

Learning embeddings from semantic tasks for multi-task learning. We have open-sourced code and a demo.

Read more
🐸 Dynamical Language Models
Meta-learning for language modeling

A meta-learner is trained via gradient descent to continuously and dynamically update language model weights.

Read more
🤖 State of the art
Neuralcoref

Our open-source coreference resolution library. You can train it on your own dataset and language.
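A minimal usage sketch, assuming a compatible spaCy 2.x installation with the `en_core_web_sm` model downloaded:

```python
import spacy
import neuralcoref

# Load an English spaCy model and attach NeuralCoref to its pipeline.
nlp = spacy.load("en_core_web_sm")
neuralcoref.add_to_pipe(nlp)

doc = nlp("My sister has a dog. She loves him.")
print(doc._.has_coref)       # True if any coreference was found
print(doc._.coref_clusters)  # e.g. [My sister: [My sister, She], a dog: [a dog, him]]
```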
Read more

🦄 Auto-complete your thoughts
Write with Transformers

This web app is the official demo of the Transformers repository’s text generation capabilities.
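The same generation capability can be reproduced locally; a small sketch using the public `gpt2` checkpoint (chosen here as an assumed example) might look like:

```python
from transformers import pipeline

# Auto-complete a prompt with GPT-2 (illustrative sketch).
generator = pipeline("text-generation", model="gpt2")
result = generator("Hugging Face is", max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])
```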
Start writing