
Commit 4100fb9

One way translation to add Multi-Language One-Way Translation example (#1706)

1 parent: 70c790b

29 files changed (+21727 / -1 lines)

authors.yaml (+5)

```diff
@@ -257,3 +257,8 @@ thli-openai:
   name: "Thomas Li"
   website: "https://www.linkedin.com/in/thli/"
   avatar: "https://avatars.githubusercontent.com/u/189043632?v=4"
+
+erikakettleson-openai:
+  name: "Erika Kettleson"
+  website: "https://www.linkedin.com/in/erika-kettleson-85763196/"
+  avatar: "https://avatars.githubusercontent.com/u/186107044?v=4"
```
@@ -0,0 +1,161 @@

# Multi-Language Conversational Translation with the Realtime API

One of the most exciting things about the Realtime API is that the emotion, tone, and pace of speech are all passed to the model for inference. Traditional cascaded voice systems (involving STT and TTS) introduce an intermediate transcription step and rely on SSML or prompting to approximate prosody, which inherently loses fidelity. The speaker's expressiveness is literally lost in translation. Because it can process raw audio, the Realtime API preserves those audio attributes through inference, minimizing latency and enriching responses with tonal and inflectional cues. This makes LLM-powered speech translation closer to a live interpreter than ever before.

This cookbook demonstrates how to use OpenAI's [Realtime API](https://platform.openai.com/docs/guides/realtime) to build a multilingual, one-way translation workflow with WebSockets. It is implemented using the [Realtime + WebSockets integration](https://platform.openai.com/docs/guides/realtime-websocket) in a speaker application, plus a WebSocket server that mirrors the translated audio to a listener application.

A real-world use case for this demo is multilingual, conversational translation, where a speaker talks into the speaker app and listeners hear translations in their selected native language via the listener app. Imagine a conference room with a speaker talking in English and a participant with headphones choosing to listen to a Tagalog translation. Due to the current turn-based nature of audio models, the speaker must pause briefly to allow the model to process and translate speech. However, as models become faster and more efficient, this latency will decrease significantly and the translation will become more seamless.

Let's explore the main functionalities and code snippets that illustrate how the app works. You can find the code in the [accompanying repo](https://github.com/openai/openai-cookbook/tree/main/examples/voice_solutions/one_way_translation_using_realtime_api/README.md) if you want to run the app locally.
### High-Level Architecture Overview

This project has two applications: a speaker app and a listener app. The speaker app takes in audio from the browser, forks it, creates a unique Realtime session for each language, and sends the audio to the OpenAI Realtime API via WebSocket. Translated audio streams back and is mirrored via a separate WebSocket server to the listener app. The listener app receives all translated audio streams simultaneously, but only the selected language is played. This architecture is designed for a POC and is not intended for production use. Let's dive into the workflow!

![Architecture](translation_images/Realtime_flow_diagram.png)
### Step 1: Language & Prompt Setup

We need a unique stream for each language: each language requires its own prompt and its own session with the Realtime API. We define these prompts in `translation_prompts.js`.

The Realtime API is powered by [GPT-4o Realtime](https://platform.openai.com/docs/models/gpt-4o-realtime-preview) or [GPT-4o mini Realtime](https://platform.openai.com/docs/models/gpt-4o-mini-realtime-preview), which are turn-based and trained for conversational speech use cases. To ensure the model returns translated audio (i.e. a direct translation of a question rather than an answer to it), we steer the model with few-shot examples of questions in the prompts. If you're translating for a specific reason or context, or have specialized vocabulary that will help the model understand the context of the translation, include that in the prompt as well. If you want the model to speak with a specific accent or otherwise steer the voice, you can follow the tips from our cookbook on [Steering Text-to-Speech for more dynamic audio generation](https://cookbook.openai.com/examples/voice_solutions/steering_tts). A sketch of what one of these prompts might look like is shown below.
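For illustration, here is a minimal sketch of what one entry in `translation_prompts.js` could look like. The wording and few-shot examples are hypothetical, not the exact prompts shipped with the demo.

```js
// translation_prompts.js -- illustrative sketch only; the demo's actual prompts differ
export const french_instructions = `
You are a real-time translator. Translate everything you hear into French.
Never answer questions or add commentary; only output the French translation,
preserving the speaker's tone, emotion, and pacing.

Examples:
Input: "Where is the nearest train station?"
Output: "Où est la gare la plus proche ?"
Input: "Can everyone hear me okay?"
Output: "Est-ce que tout le monde m'entend bien ?"
`;

// spanish_instructions, tagalog_instructions, english_instructions, and
// mandarin_instructions follow the same pattern.
```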
We can dynamically input speech in any language; each supported output language gets its own entry in `languageConfigs`:
```js
// Define language codes and import their corresponding instructions from our prompt config file
const languageConfigs = [
  { code: 'fr', instructions: french_instructions },
  { code: 'es', instructions: spanish_instructions },
  { code: 'tl', instructions: tagalog_instructions },
  { code: 'en', instructions: english_instructions },
  { code: 'zh', instructions: mandarin_instructions },
];
```

## Step 2: Setting up the Speaker App

![SpeakerApp](translation_images/SpeakerApp.png)

We need to handle the setup and management of client instances that connect to the Realtime API, allowing the application to process and stream audio in different languages. `clientRefs` holds a map of `RealtimeClient` instances, each associated with a language code (e.g., 'fr' for French, 'es' for Spanish) representing each unique client connection to the Realtime API.
```js
const clientRefs = useRef(
  languageConfigs.reduce((acc, { code }) => {
    acc[code] = new RealtimeClient({
      apiKey: OPENAI_API_KEY,
      dangerouslyAllowAPIKeyInBrowser: true,
    });
    return acc;
  }, {} as Record<string, RealtimeClient>)
).current;

// Update languageConfigs to include client references
const updatedLanguageConfigs = languageConfigs.map(config => ({
  ...config,
  clientRef: { current: clientRefs[config.code] }
}));
```

Note: The `dangerouslyAllowAPIKeyInBrowser` option is set to true because we are using our OpenAI API key in the browser for demo purposes, but in production you should use an [ephemeral API key](https://platform.openai.com/docs/api-reference/realtime-sessions) generated via the OpenAI REST API.
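For reference, a minimal sketch of a backend endpoint that mints an ephemeral key might look like the following. The route name, port, and response shape handed to the browser are assumptions; the `POST /v1/realtime/sessions` call and the `client_secret` field come from the API reference.

```js
// Hypothetical Express endpoint that returns an ephemeral Realtime key to the browser
// (requires Node 18+ for the global fetch)
import express from 'express';

const app = express();

app.get('/ephemeral-key', async (req, res) => {
  const r = await fetch('https://api.openai.com/v1/realtime/sessions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // main key stays server-side
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ model: 'gpt-4o-realtime-preview-2024-12-17', voice: 'coral' }),
  });
  const session = await r.json();
  res.json({ key: session.client_secret.value }); // short-lived key, safe to hand to the client
});

app.listen(3002);
```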
We need to actually initiate the connection to the Realtime API and send audio data to the server. When a user clicks 'Connect' on the speaker page, we start that process.

The `connectConversation` function orchestrates the connection, ensuring that all necessary components are initialized and ready for use.
```js
const connectConversation = useCallback(async () => {
  try {
    setIsLoading(true);
    const wavRecorder = wavRecorderRef.current;
    await wavRecorder.begin();
    await connectAndSetupClients();
    setIsConnected(true);
  } catch (error) {
    console.error('Error connecting to conversation:', error);
  } finally {
    setIsLoading(false);
  }
}, []);
```

`connectAndSetupClients` ensures we are using the right model and voice. For this demo, we are using `gpt-4o-realtime-preview-2024-12-17` and the `coral` voice.
```js
// Function to connect and set up all clients
const connectAndSetupClients = async () => {
  for (const { clientRef } of updatedLanguageConfigs) {
    const client = clientRef.current;
    await client.realtime.connect({ model: DEFAULT_REALTIME_MODEL });
    await client.updateSession({ voice: DEFAULT_REALTIME_VOICE });
  }
};
```

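Two more pieces of per-client wiring are worth sketching here, since they tie the speaker app to the rest of the architecture: applying each language's translation prompt to its session, and forwarding the translated audio that streams back to the Socket.IO mirror server. The `updateSession` call and the `conversation.updated` event come from the `RealtimeClient` reference client; the `socketRef` holding the mirror-server connection and the overall shape of this loop are assumptions about the demo, not its exact code.

```js
// Sketch: apply each language's instructions and mirror its translated audio
updatedLanguageConfigs.forEach(({ code, instructions, clientRef }) => {
  const client = clientRef.current;

  // Give each session its language-specific translation prompt
  client.updateSession({ instructions });

  // As translated audio deltas stream back, relay them to the mirror server
  // on a per-language channel (the server rebroadcasts them as `audioFrame:<code>`)
  client.on('conversation.updated', ({ delta }) => {
    if (delta?.audio) {
      socketRef.current?.emit(`mirrorAudio:${code}`, delta.audio);
    }
  });
});
```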
### Step 3: Audio Streaming

Sending audio with WebSockets requires work to manage the inbound and outbound PCM16 audio streams ([more details on that](https://platform.openai.com/docs/guides/realtime-model-capabilities#handling-audio-with-websockets)). We abstract that away using `wavtools`, a library for both recording and streaming audio data in the browser. Here we use `WavRecorder` for capturing audio in the browser.

This demo supports both [manual and voice activity detection (VAD)](https://platform.openai.com/docs/guides/realtime-model-capabilities#voice-activity-detection-vad) modes for recording, which the speaker can toggle between. For cleaner audio capture we recommend using manual mode here; a sketch of how that toggle might be wired is shown below.
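A minimal sketch of the mode toggle, assuming the `turn_detection` options exposed by the reference client's `updateSession` (the `changeTurnEndType` helper name is hypothetical):

```js
// 'none' = manual push-to-talk, 'server_vad' = automatic voice activity detection
const changeTurnEndType = async (value) => {
  for (const { clientRef } of updatedLanguageConfigs) {
    await clientRef.current.updateSession({
      turn_detection: value === 'none' ? null : { type: 'server_vad' },
    });
  }
};
```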
```js
const startRecording = async () => {
  setIsRecording(true);
  const wavRecorder = wavRecorderRef.current;

  await wavRecorder.record((data) => {
    // Send mic PCM to all clients
    updatedLanguageConfigs.forEach(({ clientRef }) => {
      clientRef.current.appendInputAudio(data.mono);
    });
  });
};
```

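In manual mode the speaker also needs a way to end a turn. A minimal counterpart to `startRecording` might look like this; the `createResponse` call (which asks each client to translate the audio buffered so far) is from the reference client, while the function name and state setter are assumptions.

```js
const stopRecording = async () => {
  setIsRecording(false);
  const wavRecorder = wavRecorderRef.current;
  await wavRecorder.pause();

  // Ask every language session to translate the audio captured during this turn
  updatedLanguageConfigs.forEach(({ clientRef }) => {
    clientRef.current.createResponse();
  });
};
```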
### Step 4: Showing Transcripts

We listen for `response.audio_transcript.done` events to update the transcripts of the audio. These input transcripts are generated by the Whisper model in parallel with the GPT-4o Realtime inference that performs the translations on raw audio.

We have a Realtime session running simultaneously for every selectable language, so we get transcriptions for every language (regardless of which language is selected in the listener application). Those can be shown by toggling the 'Show Transcripts' button.
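A sketch of that listener, attached per language (e.g., inside the same per-language setup loop sketched in Step 2). The `realtime.event` passthrough is from the reference client; the `setTranscripts` state setter and the `code` variable are hypothetical.

```js
// Update the on-screen transcript for a given language when its translation finishes
client.on('realtime.event', ({ event }) => {
  if (event.type === 'response.audio_transcript.done') {
    setTranscripts((prev) => ({ ...prev, [code]: event.transcript }));
  }
});
```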
## Step 5: Setting up the Listener App

Listeners can choose from a dropdown menu of translation streams and, after connecting, can dynamically change languages. The demo application uses French, Spanish, Tagalog, English, and Mandarin, but OpenAI supports 57+ languages.

The app connects to a simple `Socket.IO` server that acts as a relay for audio data. When translated audio streams back from the Realtime API, we mirror those audio streams to the listener page and allow users to select a language and listen to the translated stream.

The key function here is `connectServer`, which connects to the server and sets up audio streaming.
```js
// Function to connect to the server and set up audio streaming
const connectServer = useCallback(async () => {
  if (socketRef.current) return;
  try {
    const socket = io('http://localhost:3001');
    socketRef.current = socket;
    await wavStreamPlayerRef.current.connect();
    socket.on('connect', () => {
      console.log('Listener connected:', socket.id);
      setIsConnected(true);
    });
    socket.on('disconnect', () => {
      console.log('Listener disconnected');
      setIsConnected(false);
    });
  } catch (error) {
    console.error('Error connecting to server:', error);
  }
}, []);
```

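On the listener side, each `audioFrame:<code>` event carries a chunk of translated PCM16 audio. A minimal sketch of receiving every stream but playing only the selected language, using wavtools' `WavStreamPlayer`; the `selectedLanguageRef` is a hypothetical ref, and the demo's actual handler may differ.

```js
// Subscribe to every mirrored stream, but only play the currently selected language
Object.keys(languages).forEach((code) => {
  socket.on(`audioFrame:${code}`, (audioChunk) => {
    if (code === selectedLanguageRef.current) {
      wavStreamPlayerRef.current.add16BitPCM(audioChunk, code);
    }
  });
});
```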
### POC to Production

This is a demo and meant for inspiration. We are using WebSockets here for easy local development. However, in a production environment we’d suggest using WebRTC (which is much better for streaming audio quality and lower latency) and connecting to the Realtime API with an [ephemeral API key](https://platform.openai.com/docs/api-reference/realtime-sessions) generated via the OpenAI REST API.

Current Realtime models are turn-based: this is best for conversational use cases, as opposed to the uninterrupted, UN-style live interpretation we really want for a one-directional streaming use case. For this demo, we can capture additional audio from the speaker app as soon as the model returns translated audio (i.e. capture more input audio while the translated audio is played from the listener app), but there is a limit to the length of audio we can capture at a time. The speaker needs to pause to let the translation catch up.
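For completeness, a browser-side WebRTC connection following the Realtime WebRTC guide looks roughly like the sketch below. `EPHEMERAL_KEY` would come from your backend (see the ephemeral-key sketch earlier), and `audioEl` is a hypothetical `<audio>` element for playback; this is a sketch of the documented flow, not code from the demo.

```js
async function connectOverWebRTC(EPHEMERAL_KEY, audioEl) {
  const pc = new RTCPeerConnection();

  // Play remote (translated) audio from the model
  pc.ontrack = (e) => { audioEl.srcObject = e.streams[0]; };

  // Send microphone audio to the model
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  pc.addTrack(mic.getTracks()[0]);

  // Data channel for Realtime events (session updates, transcripts, etc.)
  const dc = pc.createDataChannel('oai-events');

  // Standard SDP offer/answer exchange with the Realtime API
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const resp = await fetch('https://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-12-17', {
    method: 'POST',
    headers: { Authorization: `Bearer ${EPHEMERAL_KEY}`, 'Content-Type': 'application/sdp' },
    body: offer.sdp,
  });
  await pc.setRemoteDescription({ type: 'answer', sdp: await resp.text() });

  return { pc, dc };
}
```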
## Conclusion

In summary, this POC demonstrates a one-way translation use case for the Realtime API, but the idea of forking audio for multiple simultaneous uses can expand beyond translation. Other workflows might include simultaneous sentiment analysis, live guardrails, or generating subtitles.
@@ -0,0 +1 @@

```
REACT_APP_OPENAI_API_KEY=sk-proj-1234567890
```
@@ -0,0 +1,31 @@

```
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.

# dependencies
/node_modules
/.pnp
.pnp.js

# testing
/coverage

# production
/build

# packaging
*.zip
*.tar.gz
*.tar
*.tgz
*.bla

# misc
.DS_Store
.env
.env.local
.env.development.local
.env.test.local
.env.production.local

npm-debug.log*
yarn-debug.log*
yarn-error.log*
```
@@ -0,0 +1,128 @@
# Translation Demo

This project demonstrates how to use the [OpenAI Realtime API](https://platform.openai.com/docs/guides/realtime) to build a one-way translation application with WebSockets. It is implemented using the [Realtime + WebSockets integration](https://platform.openai.com/docs/guides/realtime-websocket). A real-world use case for this demo is multilingual, conversational translation, where a speaker talks into the speaker app and listeners hear translations in their selected native languages via the listener app. Imagine a conference room with multiple participants wearing headphones, each listening live to the speaker in their own language. Due to the current turn-based nature of audio models, the speaker must pause briefly to allow the model to process and translate speech. However, as models become faster and more efficient, this latency will decrease significantly and the translation will become more seamless.
## How to Use

### Running the Application

1. **Set up the OpenAI API:**

   - If you're new to the OpenAI API, [sign up for an account](https://platform.openai.com/signup).
   - Follow the [Quickstart](https://platform.openai.com/docs/quickstart) to retrieve your API key.

2. **Clone the Repository:**

   ```bash
   git clone <repository-url>
   ```

3. **Set your API key:**

   - Create a `.env` file at the root of the project and add the following line:

     ```bash
     REACT_APP_OPENAI_API_KEY=<your_api_key>
     ```

4. **Install dependencies:**

   Navigate to the project directory and run:

   ```bash
   npm install
   ```

5. **Run the Speaker & Listener Apps:**

   ```bash
   npm start
   ```

   The speaker and listener apps will be available at:
   - [http://localhost:3000/speaker](http://localhost:3000/speaker)
   - [http://localhost:3000/listener](http://localhost:3000/listener)

6. **Start the Mirror Server:**

   In another terminal window, navigate to the project directory and run:

   ```bash
   node mirror-server/mirror-server.mjs
   ```

### Adding a New Language

To add a new language to the codebase, follow these steps:

1. **Socket Event Handling in Mirror Server:**

   - Open `mirror-server/mirror-server.mjs`.
   - Add a new socket event for the new language. For example, for Hindi:

     ```javascript
     socket.on('mirrorAudio:hi', (audioChunk) => {
       console.log('logging Hindi mirrorAudio', audioChunk);
       socket.broadcast.emit('audioFrame:hi', audioChunk);
     });
     ```

2. **Instructions Configuration:**

   - Open `src/utils/translation_prompts.js`.
   - Add new instructions for the new language. For example:

     ```javascript
     export const hindi_instructions = "Your Hindi instructions here...";
     ```

3. **Realtime Client Initialization in SpeakerPage:**

   - Open `src/pages/SpeakerPage.tsx`.
   - Import the new language instructions:

     ```typescript
     import { hindi_instructions } from '../utils/translation_prompts.js';
     ```

   - Add the new language to the `languageConfigs` array:

     ```typescript
     const languageConfigs = [
       // ... existing languages ...
       { code: 'hi', instructions: hindi_instructions },
     ];
     ```

4. **Language Configuration in ListenerPage:**

   - Open `src/pages/ListenerPage.tsx`.
   - Locate the `languages` object, which centralizes all language-related data.
   - Add a new entry for your language. The key should be the language code, and the value should be an object containing the language name.

     ```typescript
     const languages = {
       fr: { name: 'French' },
       es: { name: 'Spanish' },
       tl: { name: 'Tagalog' },
       en: { name: 'English' },
       zh: { name: 'Mandarin' },
       // Add your new language here
       hi: { name: 'Hindi' }, // Example for adding Hindi
     } as const;
     ```

   - The `ListenerPage` component will automatically handle the new language in the dropdown menu and audio stream handling.

5. **Test the New Language:**

   - Run your application and test the new language by selecting it from the dropdown menu.
   - Ensure that the audio stream for the new language is correctly received and played.

### Demo Flow

1. **Connect in the Speaker App:**

   - Click "Connect" and wait for the WebSocket connections to be established with the Realtime API.
   - Choose between VAD (Voice Activity Detection) and Manual (push-to-talk) mode.
   - The speaker should pause to allow the translation to catch up; the model is turn-based and cannot stream translations continuously.
   - The speaker can view live translations in the Speaker App for each language.

2. **Select a Language in the Listener App:**

   - Select a language from the dropdown menu.
   - The listener app will play the translated audio. The app translates all audio streams simultaneously, but only the selected language is played. You can switch languages at any time.
@@ -0,0 +1,42 @@

```js
// mirror-server/mirror-server.mjs
import express from 'express';
import http from 'http';
import { Server } from 'socket.io';

const app = express();
const server = http.createServer(app);
const io = new Server(server, {
  cors: { origin: '*' }
});

io.on('connection', (socket) => {
  console.log('Client connected', socket.id);

  socket.on('mirrorAudio:fr', (audioChunk) => {
    socket.broadcast.emit('audioFrame:fr', audioChunk);
  });

  socket.on('mirrorAudio:es', (audioChunk) => {
    socket.broadcast.emit('audioFrame:es', audioChunk);
  });

  socket.on('mirrorAudio:tl', (audioChunk) => {
    socket.broadcast.emit('audioFrame:tl', audioChunk);
  });

  socket.on('mirrorAudio:en', (audioChunk) => {
    socket.broadcast.emit('audioFrame:en', audioChunk);
  });

  socket.on('mirrorAudio:zh', (audioChunk) => {
    socket.broadcast.emit('audioFrame:zh', audioChunk);
  });

  socket.on('disconnect', () => {
    console.log('Client disconnected', socket.id);
  });
});

server.listen(3001, () => {
  console.log('Socket.IO mirror server running on port 3001');
});
```
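As a side note, the per-language handlers above could be collapsed into one generic relay using Socket.IO's `onAny` (available in v3+). This is a sketch of an alternative design, not what the demo ships:

```js
// Inside io.on('connection', ...): relay any mirrorAudio:<code> event generically
socket.onAny((eventName, audioChunk) => {
  if (eventName.startsWith('mirrorAudio:')) {
    const code = eventName.slice('mirrorAudio:'.length);
    socket.broadcast.emit(`audioFrame:${code}`, audioChunk);
  }
});
```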
