During this lab, you will use the AlchemyLanguage and Speech to Text services to build an application that transcribes audio from YouTube videos in real time, and then applies NLP services to annotate the transcription using AlchemyLanguage. The finished application will display the real-time annotation and the associated concepts that have been identified in the transcribed text as a user-provided YouTube video plays.
So let’s get started. The first thing to do is to build out the shell of our application in Bluemix.
Creating an IBM Bluemix Account
- Go to https://bluemix.net/
- Create a Bluemix account if required.
- Log in with your IBM ID (the ID used to create your Bluemix account)
Note: The confirmation email from Bluemix may take up to 1 hour to arrive.
- Clone the repository to your computer:
git clone https://github.com/watson-developer-cloud/audio-analysis.git
- Sign up for Bluemix or use an existing account.
- If it is not already installed on your system, download and install the Cloud Foundry CLI tool.
- Edit the `manifest.yml` file in the folder that contains your code and replace `audio-analysis-starter-kit` with a unique name for your application. The name that you specify determines the application's URL, such as `application-name.mybluemix.net`. The relevant portion of the `manifest.yml` file looks like the following:
```yml
applications:
- services:
  - speech-to-text-service
  - alchemy-language-service
  name: application-name
  command: npm start
  path: .
  memory: 512M
```
- Connect to Bluemix by running the following commands in a terminal window:
cf api https://api.ng.bluemix.net
cf login
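The `cf login` command prompts you for your IBM ID and password and, if you belong to more than one, your organization and space. If you prefer to supply these values directly, `cf login` also accepts flags; for example (substitute your own values for the placeholders):
cf login -a https://api.ng.bluemix.net -u your-ibm-id -o your-org -s your-space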
- Create and retrieve service keys to access the AlchemyLanguage service by running the following commands:
cf create-service alchemy_api free alchemy-language-service
cf create-service-key alchemy-language-service myKey
cf service-key alchemy-language-service myKey
- Create and retrieve service keys to access the Speech to Text service by running the following commands:
cf create-service speech_to_text standard speech-to-text-service
cf create-service-key speech-to-text-service myKey
cf service-key speech-to-text-service myKey
- Create a `.env` file in the root directory of your clone of the project repository by copying the sample `.env.example` file using the following command:
cp .env.example .env
You will update the `.env` file with the credentials you retrieved when you created the AlchemyLanguage and Speech to Text service keys above.
The `.env` file will look something like the following:
```none
ALCHEMY_LANGUAGE_API_KEY=
SPEECH_TO_TEXT_USERNAME=
SPEECH_TO_TEXT_PASSWORD=
```
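Once you have the credentials from the `cf service-key` commands above, paste them into the `.env` file. With placeholder values (yours will differ), the completed file will look roughly like this:
```none
ALCHEMY_LANGUAGE_API_KEY=your-alchemy-api-key
SPEECH_TO_TEXT_USERNAME=your-speech-to-text-username
SPEECH_TO_TEXT_PASSWORD=your-speech-to-text-password
```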
- Push the updated application live by running the following command:
cf push
Right now, our app is interesting, but we can add more functionality to make it much more useful.
- It's time to edit our source code and add one more Watson service to the app.
- Open the `app.js` file.
- Uncomment lines 57 to 63 and comment out line 66. The final method should look like:
```javascript
app.post('/api/concepts', function(req, res, next) {
  alchemyLanguage.concepts(req.body, function(err, result) {
    if (err)
      next(err);
    else
      res.json(result);
  });
});
```
The code above connects the app to the AlchemyLanguage service.
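For context, the `alchemyLanguage` object used in that route is a client from the Watson Node.js SDK. The sketch below is not the starter kit's exact code; it is a minimal example assuming the `watson-developer-cloud` module (v2.x style) and that the `ALCHEMY_LANGUAGE_API_KEY` value from your `.env` file is available in the environment:
```javascript
// Minimal sketch (not the starter kit's exact code): create an AlchemyLanguage
// client with the watson-developer-cloud Node.js SDK and ask it for concepts.
// Assumes ALCHEMY_LANGUAGE_API_KEY is set in the environment (e.g. via .env).
var watson = require('watson-developer-cloud');

var alchemyLanguage = watson.alchemy_language({
  api_key: process.env.ALCHEMY_LANGUAGE_API_KEY
});

// Example call: extract concepts from a short piece of text.
alchemyLanguage.concepts(
  { text: 'Watson transcribes audio in real time and annotates the text with concepts.' },
  function(err, result) {
    if (err) {
      console.error(err);
    } else {
      console.log(result.concepts);
    }
  }
);
```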
We've added AlchemyLanguage, but we need to update our application to reflect these changes. 🚀
- Install the dependencies your application needs:
npm install
- Start the application locally:
npm start
- Test your application by going to: http://localhost:3000/
- Push the updated application live by running the following command:
cf push
After completing the steps above, you are ready to test your application. Start a browser and enter the URL of your application.
<application-name>.mybluemix.net
You can also find your application's name and URL by clicking on your application in the Bluemix dashboard.
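If you want to check the AlchemyLanguage route directly, in addition to watching the annotations in the UI, one hypothetical way is to POST a short piece of text to `/api/concepts` from your browser's JavaScript console while you are on the application's page. This assumes the app accepts JSON request bodies and that the response's `concepts` array holds the identified concepts:
```javascript
// Hypothetical quick check of the /api/concepts route, run from the browser's
// JavaScript console while viewing the application (locally or on Bluemix).
fetch('/api/concepts', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ text: 'IBM Watson combines speech recognition with natural language processing.' })
})
  .then(function(response) { return response.json(); })
  .then(function(result) { console.log(result.concepts); })
  .catch(function(err) { console.error(err); });
```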
You have completed the Audio Analysis Lab!