RecordRTC is a JavaScript-based media-recording library for modern web browsers (those supporting the WebRTC getUserMedia API). It is optimized for different devices and browsers to bring all client-side (plugin-free) recording solutions into a single place.
Please check the dev directory for development files.
- RecordRTC API Reference
- MRecordRTC API Reference
- MediaStreamRecorder API Reference
- StereoAudioRecorder API Reference
- WhammyRecorder API Reference
- Whammy API Reference
- CanvasRecorder API Reference
- GifRecorder API Reference
- Global API Reference
Browser | Support |
---|---|
Firefox | Stable / Aurora / Nightly |
Google Chrome | Stable / Canary / Beta / Dev |
Opera | Stable / NEXT |
Android | Chrome / Firefox / Opera |
Microsoft Edge | Normal Build |
Media File | Bitrate / Framerate | Encoder | Sample Rate / Frame Size | Additional Info |
---|---|---|---|---|
Audio File (WAV) | 1411 kbps | pcm_s16le | 44100 Hz | stereo, s16 |
Video File (WebM) | 60 kb/s | (whammy) vp8 codec yuv420p | -- | SAR 1:1 DAR 4:3, 1k tbr, 1k tbn, 1k tbc (default) |
- RecordRTC to Node.js
- RecordRTC to PHP
- RecordRTC to ASP.NET MVC
- RecordRTC & HTML-2-Canvas i.e. Canvas/HTML Recording!
- MRecordRTC i.e. Multi-RecordRTC!
- RecordRTC on Ruby!
- RecordRTC over Socket.io
- ffmpeg-asm.js and RecordRTC! Audio/Video Merging & Transcoding!
- RecordRTC / PHP / FFmpeg
- Record Audio and upload to Nodejs server
- ConcatenateBlobs.js - Concatenate multiple recordings in single Blob!
- Remote stream recording
- Mp3 or Wav Recording
npm install recordrtc
// use it with "require" (Browserify / Node.js)
var RecordRTC = require('recordrtc');
var recorder = RecordRTC(mediaStream, { type: 'audio' });
or using Bower:
bower install recordrtc
To use it:
<script src="./node_modules/recordrtc/RecordRTC.js"></script>
<!-- or -->
<script src="https://cdn.WebRTC-Experiment.com/RecordRTC.js"></script>
<!-- or -->
<script src="https://www.WebRTC-Experiment.com/RecordRTC.js"></script>
It is suggested to link a specific release:
E.g.
<!-- use 5.2.6 or any other version -->
<script src="https://github.com/muaz-khan/RecordRTC/releases/download/5.2.6/RecordRTC.js"></script>
There are some other NPM packages related to RecordRTC as well.
<script src="https://cdn.webrtc-experiment.com/gumadapter.js"></script>
<script>
function successCallback(stream) {
// RecordRTC usage goes here
}
function errorCallback(error) {
// maybe another application is using the device
}
var mediaConstraints = { video: true, audio: true };
navigator.mediaDevices.getUserMedia(mediaConstraints).then(successCallback).catch(errorCallback);
</script>
You'll be recording both audio and video in a single WebM container, though you can edit RecordRTC.js to record in MP4.
var recordRTC;

function successCallback(stream) {
    // RecordRTC usage goes here
    var options = {
        mimeType: 'video/webm', // or video/mp4 or audio/ogg
        audioBitsPerSecond: 128000,
        videoBitsPerSecond: 128000,
        bitsPerSecond: 128000 // if this line is provided, skip above two
    };
    recordRTC = RecordRTC(stream, options);
    recordRTC.startRecording();
}

function errorCallback(error) {
    // maybe another application is using the device
}

var mediaConstraints = { video: true, audio: true };
navigator.mediaDevices.getUserMedia(mediaConstraints).then(successCallback).catch(errorCallback);

btnStopRecording.onclick = function() {
    recordRTC.stopRecording(function(audioVideoWebMURL) {
        video.src = audioVideoWebMURL;

        var recordedBlob = recordRTC.getBlob();
        recordRTC.getDataURL(function(dataURL) { });
    });
};
Demo: AudioVideo-on-Firefox.html
var recordRTC = RecordRTC(mediaStream);
recordRTC.startRecording();
recordRTC.stopRecording(function(audioURL) {
    audio.src = audioURL;

    var recordedBlob = recordRTC.getBlob();
    recordRTC.getDataURL(function(dataURL) { });
});
Simply set volume=0 or muted=true on the <audio> or <video> element:
videoElement.muted = true;
audioElement.muted = true;
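For instance, the local preview element can stay muted while the recorder keeps capturing both tracks. A minimal sketch, assuming a previewVideo element and default constraints:

var previewVideo = document.querySelector('video'); // assumed preview element

navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then(function(stream) {
    // mute only the local preview; the recorder still receives the audio track
    previewVideo.muted = true;
    previewVideo.srcObject = stream;
    previewVideo.play();

    var recorder = RecordRTC(stream, { type: 'video' });
    recorder.startRecording();
});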
Alternatively, you can pass some media constraints:
function successCallback(stream) {
    // RecordRTC usage goes here
}

function errorCallback(error) {
    // maybe another application is using the device
}

var mediaConstraints = {
    audio: {
        mandatory: {
            echoCancellation: false,
            googAutoGainControl: false,
            googNoiseSuppression: false,
            googHighpassFilter: false
        },
        optional: [{
            googAudioMirroring: false
        }]
    }
};

navigator.mediaDevices.getUserMedia(mediaConstraints).then(successCallback).catch(errorCallback);
Everything is optional except type: 'video':
var options = {
    type: 'video',
    frameInterval: 20 // minimum time (in milliseconds) between pushing frames to Whammy
};

var recordRTC = RecordRTC(mediaStream, options);
recordRTC.startRecording();
recordRTC.stopRecording(function(videoURL) {
    video.src = videoURL;

    var recordedBlob = recordRTC.getBlob();
    recordRTC.getDataURL(function(dataURL) { });
});
Everything is optional except type: 'gif':
// you must "manually" link:
// https://cdn.webrtc-experiment.com/gif-recorder.js
var options = {
type: 'gif',
frameRate: 200,
quality: 10
};
var recordRTC = RecordRTC(mediaStream || canvas || context, options);
recordRTC.startRecording();
recordRTC.stopRecording(function(gifURL) {
mediaElement.src = gifURL;
});
You can call it "HTML/Canvas Recording using RecordRTC"!
<script src="https://cdn.WebRTC-Experiment.com/RecordRTC.js"></script>
<script src="https://cdn.webrtc-experiment.com/screenshot.js"></script>
<div id="elementToShare" style="width:100%;height:100%;background:green;"></div>
<script>
var elementToShare = document.getElementById('elementToShare');
var recordRTC = RecordRTC(elementToShare, {
type: 'canvas'
});
recordRTC.startRecording();
recordRTC.stopRecording(function(videoURL) {
video.src = videoURL;
var recordedBlob = recordRTC.getBlob();
recordRTC.getDataURL(function(dataURL) { });
});
</script>
See a demo: /Canvas-Recording/
You can even record Canvas2D drawings:
<script src="https://cdn.webrtc-experiment.com/RecordRTC/Whammy.js"></script>
<script src="https://cdn.webrtc-experiment.com/RecordRTC/CanvasRecorder.js"></script>
<canvas></canvas>
<script>
var canvas = document.querySelector('canvas');
var recorder = new CanvasRecorder(window.canvasElementToBeRecorded, {
disableLogs: false
});
// start recording <canvas> drawings
recorder.record();
// a few minutes later
recorder.stop(function(blob) {
var url = URL.createObjectURL(blob);
window.open(url);
});
</script>
Live Demo:
Watch a video: https://vimeo.com/152119435
initRecorder is a function that can be used to initialize the recorder while skipping the recording outputs. It provides maximum accuracy in the outputs once the startRecording method is used. Here is how to use it:
var audioRecorder = RecordRTC(mediaStream, {
    recorderType: StereoAudioRecorder
});

var videoRecorder = RecordRTC(mediaStream, {
    recorderType: WhammyRecorder
});

videoRecorder.initRecorder(function() {
    audioRecorder.initRecorder(function() {
        // both recorders are ready to record things accurately
        videoRecorder.startRecording();
        audioRecorder.startRecording();
    });
});
After using stopRecording, you'll see that both the WAV and WebM blobs have the following characteristics:
- Both have the same recording duration, i.e. length
- The video recording has no blank frames
- The audio recording has no empty buffers
This method is really useful for syncing audio and video outputs.
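Continuing the snippet above, both recorders can then be stopped together and their outputs attached to media elements. A minimal sketch, assuming videoElement and audioElement are existing <video>/<audio> elements:

videoRecorder.stopRecording(function(webmURL) {
    audioRecorder.stopRecording(function(wavURL) {
        // both blobs now share the same duration
        videoElement.src = webmURL;
        audioElement.src = wavURL;
    });
});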
You can ask RecordRTC to auto-stop recording after a specific duration. setRecordingDuration accepts one mandatory and one optional argument:
recordRTC.setRecordingDuration(milliseconds, stoppedCallback);
// the easiest one:
recordRTC.setRecordingDuration(milliseconds).onRecordingStopped(stoppedCallback);
Try a simple demo; paste this into the Chrome console:
navigator.mediaDevices.getUserMedia({
    video: true
}).then(function(stream) {
    var recordRTC = RecordRTC(stream, {
        recorderType: WhammyRecorder
    });

    // auto stop recording after 5 seconds
    recordRTC.setRecordingDuration(5 * 1000).onRecordingStopped(function(url) {
        console.debug('setRecordingDuration', url);
        window.open(url);
    });

    recordRTC.startRecording();
}).catch(function(error) {
    console.error(error);
});
The clearRecordedData method can be used to clear old recorded frames/buffers. Snippet:
recorder.clearRecordedData();
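For example, to throw away an unwanted take as soon as it is stopped (a minimal sketch):

recorder.stopRecording(function() {
    // discard the previous take's buffers/frames
    recorder.clearRecordedData();
});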
If you're using recorderType, then you don't need to use type; the latter is redundant and will be skipped.
You can force any recorder by passing this object to the RecordRTC constructor:
var audioRecorder = RecordRTC(mediaStream, {
    recorderType: StereoAudioRecorder
});
This means that all browsers will use StereoAudioRecorder, i.e. the WebAudio API, for audio recording.
This feature brings remote audio recording support in Firefox, and local audio recording support in Microsoft Edge.
You can even force WhammyRecorder on Firefox; however, the WebP format isn't yet supported in standard Firefox builds. It simply means that you're skipping the MediaRecorder API in Firefox.
If you are NOT using the recorderType parameter, then the type parameter can be used to ask RecordRTC to choose the best recorder type for the recording.
// if it is Firefox, then RecordRTC will be using MediaStreamRecorder.js
// if it is Chrome or Opera, then RecordRTC will be using WhammyRecorder.js
var recordVideo = RecordRTC(mediaStream, {
    type: 'video'
});

// if it is Firefox, then RecordRTC will be using MediaStreamRecorder.js
// if it is Chrome or Opera or Edge, then RecordRTC will be using StereoAudioRecorder.js
var recordAudio = RecordRTC(mediaStream, {
    type: 'audio'
});
Set the minimum interval (in milliseconds) between frames pushed to the Whammy recorder.
var whammyRecorder = RecordRTC(videoStream, {
    recorderType: WhammyRecorder,
    frameInterval: 1 // setTimeout interval
});
You can disable all the RecordRTC logs by passing this Boolean:
var recorder = RecordRTC(mediaStream, {
    disableLogs: true
});
You can force StereoAudioRecorder to record a single audio channel only. It allows you to reduce the WAV file size by half.
var audioRecorder = RecordRTC(audioStream, {
    recorderType: StereoAudioRecorder,
    numberOfAudioChannels: 1 // or leftChannel: true
});
It will reduce the WAV size by half! This feature is mainly useful in Chrome and Microsoft Edge (the WAV recorders), though it can work in Firefox as well.
var options = {
    type: 'video',
    video: {
        width: 320,
        height: 240
    },
    canvas: {
        width: 320,
        height: 240
    }
};

var recordVideo = RecordRTC(mediaStream, options);
RecordRTC pauses recording buffers/frames.
recordRTC.pauseRecording();
If you're using "initRecorder" then it asks RecordRTC that now its time to record buffers/frames. Otherwise, it asks RecordRTC to not only initialize recorder but also record buffers/frames.
recordRTC.resumeRecording();
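Putting both together, a pause/resume flow might look like this (a minimal sketch; btnPause and btnResume are assumed button elements):

var recordRTC = RecordRTC(mediaStream, { type: 'video' });
recordRTC.startRecording();

btnPause.onclick = function() {
    recordRTC.pauseRecording(); // buffers/frames are no longer recorded
};

btnResume.onclick = function() {
    recordRTC.resumeRecording(); // recording continues where it paused
};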
Optionally get "DataURL" object instead of "Blob".
recordRTC.getDataURL(function(dataURL) {
    mediaElement.src = dataURL;
});
Get "Blob" object. A blob object looks similar to input[type=file]
. Which means that you can append it to FormData
and upload to server using XMLHttpRequest object. Even socket.io nowadays supports blob-transmission.
blob = recordRTC.getBlob();
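For example, the blob can be appended to FormData and posted with XMLHttpRequest. A minimal sketch; the /upload endpoint, field name and file name are assumptions, not part of RecordRTC:

recordRTC.stopRecording(function() {
    var blob = recordRTC.getBlob();

    var formData = new FormData();
    formData.append('recorded-file', blob, 'recording.webm'); // illustrative field/file names

    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/upload', true); // illustrative endpoint
    xhr.onload = function() {
        console.log('upload finished with status', xhr.status);
    };
    xhr.send(formData);
});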
toURL returns a virtual (blob) URL. It can be used only inside the same browser; you can't share it. It just provides a preview of the recording.
window.open( recordRTC.toURL() );
Invoke the save-as dialog. You can pass a "fileName" as well, though the fileName argument is optional.
recordRTC.save('File Name');
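If you prefer to build the download yourself instead of using the save-as helper, the recorded blob can be attached to an anchor element. A minimal sketch using plain DOM APIs (not part of RecordRTC; the file name is illustrative):

var downloadLink = document.createElement('a');
downloadLink.href = URL.createObjectURL(recordRTC.getBlob());
downloadLink.download = 'recording.webm'; // illustrative file name
document.body.appendChild(downloadLink);
downloadLink.click();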
Here is how to customize the buffer size for audio recording:
// From the spec: This value controls how frequently the audioprocess event is
// dispatched and how many sample-frames need to be processed each call.
// Lower values for buffer size will result in a lower (better) latency.
// Higher values will be necessary to avoid audio breakup and glitches.
// (To minimize WAV size, a server-side tool such as ffmpeg is the usual workaround.)

// The size of the buffer (in sample-frames) which needs to
// be processed each time onaudioprocess is called.
// Legal values are 256, 512, 1024, 2048, 4096, 8192 and 16384.
var options = {
    bufferSize: 16384
};

var recordRTC = RecordRTC(audioStream, options);
The following values are allowed for the buffer size: 256, 512, 1024, 2048, 4096, 8192 and 16384. If you pass an invalid value, you'll get blank audio.
Here is how to customize the sample rate for audio recording:
// The sample rate (in sample-frames per second) at which the
// AudioContext handles audio. It is assumed that all AudioNodes
// in the context run at this rate. In making this assumption,
// sample-rate converters or "varispeed" processors are not supported
// in real-time processing.

// The sampleRate parameter describes the sample-rate of the
// linear PCM audio data in the buffer in sample-frames per second.
// An implementation must support sample-rates in at least
// the range 22050 to 96000.
var options = {
    sampleRate: 96000
};

var recordRTC = RecordRTC(audioStream, options);
Values for the sample rate must be between 22050 and 96000 (inclusive). If you pass an invalid value, you'll get blank audio. You can pass custom sample-rate values only on Mac (and possibly on Windows 10).
This option allows you to set the MediaRecorder output format (currently it works only in Firefox; Chrome support is coming soon):
var options = {
    mimeType: 'video/webm', // or video/mp4 or audio/ogg
    bitsPerSecond: 128000
};

var recorder = RecordRTC(mediaStream, options);
Note: For Chrome, it will simply auto-set the type: 'audio' or type: 'video' parameter to keep supporting StereoAudioRecorder.js and WhammyRecorder.js. That is, you can skip passing the type: 'audio' parameter when you're using the mimeType parameter.
The chosen bitrate for the audio and video components of the media. If this is specified along with audioBitsPerSecond or videoBitsPerSecond, it will be used for the component that isn't specified.
var options = {
    mimeType: 'video/webm', // or video/mp4 or audio/ogg
    bitsPerSecond: 128000
};

var recorder = RecordRTC(mediaStream, options);
The chosen bitrate for the audio component of the media.
var options = {
    mimeType: 'audio/ogg',
    audioBitsPerSecond: 128000
};

var recorder = RecordRTC(mediaStream, options);
The chosen bitrate for the video component of the media.
var options = {
    mimeType: 'video/webm', // or video/mp4
    videoBitsPerSecond: 128000
};

var recorder = RecordRTC(mediaStream, options);
Note: "initRecorder" is preferred over this old hack. Both works similarly.
Useful for recovering from audio/video sync issues inside the browser:
var recordAudio = RecordRTC(stream, {
    onAudioProcessStarted: function() {
        recordVideo.startRecording();
    }
});

var recordVideo = RecordRTC(stream, {
    type: 'video'
});

recordAudio.startRecording();
onAudioProcessStarted fixes the shared/exclusive audio gap (a little bit), because shared audio sometimes causes a 100 ms delay, and sometimes about a 400-500 ms delay. The delay depends upon the number of applications concurrently requesting the same audio device and the available CPU/memory. Shared mode is the only mode currently available on 90% of Windows systems, especially on Windows 7.
Using autoWriteToDisk, you can ask RecordRTC to auto-write to IndexedDB as soon as you call the stopRecording method.
var recordRTC = RecordRTC(mediaStream, {
    autoWriteToDisk: true
});
autoWriteToDisk is helpful for single-stream recording and writing to disk; however, for MRecordRTC, writeToDisk is the preferred one.
You can write the recorded blob to disk using the writeToDisk method:
recordRTC.stopRecording();
recordRTC.writeToDisk();
You can get the recorded blob from disk using the getFromDisk method:
// get all blobs from disk
RecordRTC.getFromDisk('all', function(dataURL, type) {
    // type === 'audio'
    // type === 'video'
    // type === 'gif'
});

// or get just a single blob
RecordRTC.getFromDisk('audio', function(dataURL) {
    // only the audio blob is returned from disk!
});
For MRecordRTC, you can use the word MRecordRTC instead of RecordRTC!
Another possible situation!
var recordRTC = RecordRTC(mediaStream);
recordRTC.startRecording();
recordRTC.stopRecording(function(audioURL) {
    mediaElement.src = audioURL;
});

// use the "recordRTC" instance object to invoke the "getFromDisk" method!
recordRTC.getFromDisk(function(dataURL) {
    // audio blob is automatically returned from disk!
});
In the above example, you can see that the recordRTC instance object is used instead of the global RecordRTC object.
There is no support for Chrome on Windows XP SP2. However, RecordRTC works on Windows XP Service Pack 3.
RecordRTC uses the WebAudio API for stereo audio recording. AFAIK, WebAudio is not yet supported in Android Chrome releases.
Firefox only supports audio recording on Android devices.
Audio recording fails for mono audio, so try stereo audio only. The MediaRecorder API (in Firefox) seems to use mono audio recording instead.
This section applies only to StereoAudioRecorder:
Do you know "RecordRTC" fails recording audio because following conditions fails:
- Sample rate and channel configuration must be the same for input and output sides on Windows i.e. audio input/output devices mismatch
- Only the Default microphone device can be used for capturing.
- The requesting scheme is none of the following: http, https, chrome, extension, or file (the file scheme only works with --allow-file-access-from-files)
- The browser cannot create/initialize the metadata database for the API under the profile directory
If you see this error message: Uncaught Error: SecurityError: DOM Exception 18; it means that you're using HTTP whilst your webpage is loading the worker file (i.e. audio-recorder.js) from HTTPS. Both files' schemes (i.e. RecordRTC.js and audio-recorder.js) MUST be the same!
- If you're on Windows, you have to be running WinXP SP3, Windows Vista or better (will not work on Windows XP SP2 or earlier).
- On Windows, audio input hardware must be set to the same sample rate as audio output hardware.
- On Mac and Windows, the audio input device must be at least stereo (i.e. a mono/single-channel USB microphone WILL NOT work).
If you explore the Chromium code, you'll see that some APIs can only be successfully called for WAV files with stereo audio. Stereo audio is only supported for WAV files.
RecordRTC is unable to record "mono" audio on Chrome; however, it seems that channels can be converted from "stereo" to "mono" using the WebAudio API. The MediaRecorder API's encoder only supports 48k/16k mono audio channels (on Firefox Nightly).
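A minimal sketch of such a stereo-to-mono downmix with the WebAudio API (this is plain WebAudio, not part of RecordRTC; stereoStream is assumed to be an existing getUserMedia stream):

var audioContext = new AudioContext();
var source = audioContext.createMediaStreamSource(stereoStream);

// force a single output channel; the browser downmixes the stereo input
var downmix = audioContext.createGain();
downmix.channelCount = 1;
downmix.channelCountMode = 'explicit';
downmix.channelInterpretation = 'speakers';

var destination = audioContext.createMediaStreamDestination();
source.connect(downmix);
downmix.connect(destination);

// destination.stream now carries a mono track and can be handed to RecordRTC
var monoRecorder = RecordRTC(destination.stream, { type: 'audio' });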
- Recorderjs for audio recording
- whammy for video recording
- jsGif for gif recording
Contribute to the RecordRTC.org domain. The domain www.RecordRTC.org is open-sourced here:
RecordRTC.js is released under the MIT license. Copyright (c) Muaz Khan.