Rasa NLU (Natural Language Understanding) is a tool for understanding what is being said in short pieces of text. For example, taking a short message like:
"I'm looking for a Mexican restaurant in the center of town"
And returning structured data like:
intent: search_restaurant
entities:
- cuisine : Mexican
- location : center
Rasa NLU is primarily used to build chatbots and voice apps, where this is called intent classification and entity extraction. To use Rasa, you have to provide some training data. That is, a set of messages which you've already labelled with their intents and entities. Rasa then uses machine learning to pick up patterns and generalise to unseen sentences.
You can think of Rasa NLU as a set of high level APIs for building your own language parser using existing NLP and ML libraries.
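To make that concrete, here is a minimal training sketch using the classes documented for the 0.x Python API. The file names are placeholders of this sketch (not files shipped with the project), standing in for your own labelled training data and pipeline configuration.
# A minimal training sketch with the rasa_nlu Python API (0.x).
# "nlu_training_data.md" and "nlu_config.yml" are placeholder names;
# point them at your own labelled data and pipeline configuration.
from rasa_nlu import config
from rasa_nlu.model import Trainer
from rasa_nlu.training_data import load_data

training_data = load_data("nlu_training_data.md")  # messages labelled with intents/entities
trainer = Trainer(config.load("nlu_config.yml"))   # build the processing pipeline
trainer.train(training_data)                       # learn patterns from the examples
model_directory = trainer.persist("./projects/")   # save the trained model to disk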
If you are new to Rasa NLU and want to create a bot, you should start with the tutorial.
- What does Rasa NLU do? → Read About the Rasa Stack
- I'd like to read the detailed docs → Read The Docs
- I'm ready to install Rasa NLU! → Installation
- I have a question → Rasa Community Forum
- I would like to contribute → How to contribute
The current GitHub master version does NOT support Python 2.7 anymore (nor will the next major release). If you want to use Rasa NLU with Python 2.7, please install the most recent release from PyPI (0.14).
For the full installation instructions, please head over to the documentation: Installation
Via Docker Image (from Docker Hub):
docker run -p 5000:5000 rasa/rasa_nlu:latest-full
(for more Docker installation options see Advanced Docker Installation)
Via Python Library (from PyPI):
pip install rasa_nlu
python -m rasa_nlu.server &
(for more Python installation options see Advanced Python Installation)
The commands below can be executed with either installation method used above.
curl 'http://localhost:5000/parse?q=hello'
curl 'http://localhost:5000/status'
curl 'http://localhost:5000/version'
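If you would rather exercise these endpoints from Python than from curl, a quick check could look like the sketch below. It assumes the third-party requests package (not something Rasa NLU installs for you) and the server started above on localhost:5000.
# Query a running Rasa NLU server from Python instead of curl.
# Assumes `pip install requests` and the server started above.
import requests

BASE = "http://localhost:5000"
print(requests.get(BASE + "/status").json())   # projects and loaded models
print(requests.get(BASE + "/version").json())  # server / library version
print(requests.get(BASE + "/parse", params={"q": "hello"}).json())  # parse a message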
Examples and documentation of the training data format are provided. As a quick start, execute the command below to train a new model:
curl 'https://raw.githubusercontent.com/RasaHQ/rasa_nlu/master/sample_configs/config_train_server_json.yml' | \
curl --request POST --header 'content-type: application/x-yml' --data-binary @- --url 'localhost:5000/train?project=test_model'
This will train a simple keyword-based model (not usable for anything but this demo). For better pipelines, consult the documentation. Alternatively, download the markdown sample configuration locally and POST it to the server:
wget 'https://raw.githubusercontent.com/RasaHQ/rasa_nlu/master/sample_configs/config_train_server_md.yml'
curl --request POST --header 'content-type: application/x-yml' --data-binary @config_train_server_md.yml --url 'localhost:5000/train?project=test_model'
The above command does the following:
- It fetches some of the example data in the repo
- It POSTs that data to the /train endpoint and names the model project=test_model
Make sure the above command has finished before executing the one below. You can check with the /status command above.
curl 'http://localhost:5000/parse?q=hello&project=test_model'
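The same request from Python, with a look at the returned intent and entities, might look like the sketch below. The requests package is again an assumption of the sketch, and the exact response fields can vary with the pipeline, so treat the keys as illustrative rather than guaranteed.
# Parse a message against the freshly trained project and inspect the result.
# Third-party `requests` assumed; response keys are illustrative and may
# vary by pipeline and version.
import requests

resp = requests.get(
    "http://localhost:5000/parse",
    params={"q": "hello", "project": "test_model"},
).json()
print(resp.get("intent"))    # e.g. {"name": ..., "confidence": ...}
print(resp.get("entities"))  # list of extracted entities, possibly empty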
The intended audience is mainly people developing bots, starting from scratch or looking for a drop-in replacement for wit, LUIS, or Dialogflow. The setup process is designed to be as simple as possible. Rasa NLU is written in Python, but you can use it from any language through an HTTP API. If your project is written in Python you can simply import the relevant classes (see the sketch after the lists below). If you're currently using wit/LUIS/Dialogflow, you just:
- Download your app data from wit, LUIS, or Dialogflow and feed it into Rasa NLU
- Run Rasa NLU on your machine and switch the URL of your wit/LUIS API calls to localhost:5000/parse.
Reasons you might use this over one of the aforementioned services:
- You don't have to hand over your data to FB/MSFT/GOOG
- You don't have to make an HTTPS call to parse every message
- You can tune models to work well on your particular use case
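For Python projects, a persisted model can also be loaded and queried in-process, without going through the HTTP server at all. A minimal sketch, where the model directory is a placeholder for wherever your trained model was persisted:
# Load a persisted model directly and parse messages in-process.
# The model directory below is a placeholder; use the path returned by
# Trainer.persist() or your server's project directory.
from rasa_nlu.model import Interpreter

interpreter = Interpreter.load("./projects/default/model_XXXXXXXX")
result = interpreter.parse("I'm looking for a Mexican restaurant in the center of town")
print(result["intent"], result["entities"])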
These points are laid out in more detail in a blog post. Rasa is a set of tools for building more advanced bots, developed by the company Rasa. Rasa NLU is the natural language understanding module, and the first component to be open-sourced.
The supervised_embeddings pipeline works in any language.
If you want to use pre-trained word embeddings, there are models available for many languages. See details here.
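As a rough sketch, a minimal configuration file for that pipeline needs only two keys; the file name is arbitrary, and you should check the documentation for the options your version supports.
# nlu_config.yml (sketch): minimal pipeline configuration
language: "en"
pipeline: "supervised_embeddings"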
We are very happy to receive and merge your contributions. There is some more information about the style of the code and docs in the documentation.
In general the process is rather simple:
- create an issue describing the feature you want to work on (or have a look at issues with the label help wanted)
- write your code, tests and documentation
- create a pull request describing your changes
Your pull request will be reviewed by a maintainer, who might get back to you about any necessary changes or questions. You will also be asked to sign the Contributor License Agreement.
From GitHub:
git clone [email protected]:RasaHQ/rasa_nlu.git
cd rasa_nlu
pip install -r requirements.txt
pip install -e .
For local development make sure you install the development requirements:
pip install -r alt_requirements/requirements_dev.txt
pip install -e .
To test the installation use (this will run a very stupid default model; you need to train your own model to do something useful!):
curl 'http://localhost:5000/parse?q=hello'
Before you start, ensure you have the latest version of Docker Engine on your machine. You can check if you have Docker installed by typing docker -v in your terminal.
To see all available builds, go to the Rasa Docker Hub; to get up and running the quickest, just run:
docker run -p 5000:5000 rasa/rasa_nlu:latest-full
There are also three volumes which you may want to map: /app/projects, /app/logs, and /app/data. It is also possible to override the config file used by the server by mapping a new config file to the volume /app/config.json. For complete Docker usage instructions, go to the official Docker Hub readme.
To test, run the command below after the container has started. For more info on using the HTTP API, see here:
curl 'http://localhost:5000/parse?q=hello'
Warning! Setting up Docker Cloud is quite involved; this method isn't recommended unless you've already configured Docker Cloud Nodes (or swarms).
In order to use the spaCy or MITIE backends, make sure you have one of their pretrained models installed.
python -m spacy download en
To download the MITIE model, run the commands below and place the extracted model in a location that you can reference in your configuration during model training (a sample configuration sketch follows at the end of this section):
wget https://github.com/mit-nlp/MITIE/releases/download/v0.4/MITIE-models-v0.2.tar.bz2
tar jxf MITIE-models-v0.2.tar.bz2
If you want to run the tests, you need to copy the model into the Rasa folder:
cp MITIE-models/english/total_word_feature_extractor.dat RASA_NLU_ROOT/data/
Where RASA_NLU_ROOT points to your Rasa installation directory.
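For reference, a MITIE-based pipeline configuration roughly follows the shape below. The component names mirror the 0.x sample configs shipped with the repo, so double-check them against the documentation for your version, and adjust the model path to wherever you placed the extracted file.
# Sketch of a MITIE pipeline configuration; verify component names
# against the documentation for your version.
language: "en"
pipeline:
- name: "nlp_mitie"
  model: "data/total_word_feature_extractor.dat"
- name: "tokenizer_mitie"
- name: "ner_mitie"
- name: "intent_classifier_mitie"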
Releasing a new version is quite simple, as the packages are built and distributed by Travis. The following things need to be done to release a new version:
- update rasa_nlu/version.py to reflect the correct version number
- edit the CHANGELOG.rst, create a new section for the release (e.g. by moving the items from the collected master section) and create a new master logging section
- edit the migration guide to provide assistance for users updating to the new version
- commit all the above changes and tag a new release, e.g. using
git tag -f 0.7.0 -m "Some helpful line describing the release"
git push origin 0.7.0
Travis will build this tag and push a package to PyPI.
- only if it is a major release, a new branch should be created pointing to the same commit as the tag to allow for future minor patches, e.g.
git checkout -b 0.7.x
git push origin 0.7.x
In order to run the tests make sure that you have the development requirements installed.
make test
Licensed under the Apache License, Version 2.0. Copyright 2019 Rasa Technologies GmbH. Copy of the license.
A list of the Licenses of the dependencies of the project can be found at the bottom of the Libraries Summary.