Get Started with WebRTC
"WebRTC is a new front in the long war for an open and unencumbered web." (Brendan Eich, inventor of JavaScript)
Real-time communication without plugins
Imagine a world where your phone, TV, and computer could communicate on a common platform. Imagine it was easy to add video chat and peer-to-peer data sharing to your web app. That's the vision of WebRTC.
Want to try it out? WebRTC is available on desktop and mobile in Google Chrome, Safari, Firefox, and Opera. A good place to start is the simple video chat app at appr.tc:
- Open appr.tc in your browser.
- Click Join to join a chat room and let the app use your webcam.
- Open the URL displayed at the end of the page in a new tab or, better still, on a different computer.
Quick start
Haven't got time to read this article or only want code?
- To get an overview of WebRTC, watch the Google I/O video or view its slides.
- If you haven't used the getUserMedia API, see Capture audio and video in HTML5 and simpl.info getUserMedia.
- To learn about the RTCPeerConnection API, see the following example and simpl.info RTCPeerConnection.
- To learn how WebRTC uses servers for signaling, and firewall and NAT traversal, see the code and console logs from appr.tc.
- Can’t wait and just want to try WebRTC right now? Try some of the more than 20 demos that exercise the WebRTC JavaScript APIs.
- Having trouble with your machine and WebRTC? Visit the WebRTC Troubleshooter.
Alternatively, jump straight into the WebRTC codelab, a step-by-step guide that explains how to build a complete video chat app, including a simple signaling server.
A very short history of WebRTC
One of the last major challenges for the web is to enable human communication through voice and video: real-time communication or RTC for short. RTC should be as natural in a web app as entering text in a text input. Without it, you're limited in your ability to innovate and develop new ways for people to interact.
Historically, RTC has been corporate and complex, requiring expensive audio and video technologies to be licensed or developed in house. Integrating RTC technology with existing content, data, and services has been difficult and time-consuming, particularly on the web.
Gmail video chat became popular in 2008 and, in 2011, Google introduced Hangouts, which used Talk (as did Gmail). Google bought GIPS, a company that developed many components required for RTC, such as codecs and echo cancellation techniques. Google open sourced the technologies developed by GIPS and engaged with relevant standards bodies at the Internet Engineering Task Force (IETF) and World Wide Web Consortium (W3C) to ensure industry consensus. In May 2011, Ericsson built the first implementation of WebRTC.
WebRTC implemented open standards for real-time, plugin-free video, audio, and data communication. The need was real:
- Many web services used RTC, but needed downloads, native apps, or plugins. These included Skype, Facebook, and Hangouts.
- Downloading, installing, and updating plugins is complex, error prone, and annoying.
- Plugins are difficult to deploy, debug, troubleshoot, test, and maintain—and may require licensing and integration with complex, expensive technology. It's often difficult to persuade people to install plugins in the first place!
The guiding principles of the WebRTC project are that its APIs should be open source, free, standardized, built into web browsers, and more efficient than existing technologies.
Where are we now?
WebRTC is used in various apps, such as Google Meet. WebRTC has also been integrated with WebKitGTK+ and Qt native apps.
WebRTC implements these three APIs:
- MediaStream (also known as getUserMedia)
- RTCPeerConnection
- RTCDataChannel
The APIs are defined in these two specs:
- WebRTC
- Media Capture and Streams (getUserMedia)
All three APIs are supported on mobile and desktop by Chrome, Safari, Firefox, Edge, and Opera.
- getUserMedia: For demos and code, see WebRTC samples or try Chris Wilson's amazing examples that use getUserMedia as input for web audio.
- RTCPeerConnection: For a simple demo and a fully functional video-chat app, see WebRTC samples Peer connection and appr.tc, respectively. This app uses adapter.js, a JavaScript shim maintained by Google with help from the WebRTC community, to abstract away browser differences and spec changes.
- RTCDataChannel: To see this API in action, check out one of the data-channel demos at WebRTC samples.
The WebRTC codelab shows how to use all three APIs to build a simple app for video chat and file sharing.
Your first WebRTC app
WebRTC apps need to do several things:
- Get streaming audio, video, or other data.
- Get network information, such as IP addresses and ports, and exchange it with other WebRTC clients (known as peers) to enable connection, even through NATs and firewalls.
- Coordinate signaling communication to report errors and initiate or close sessions.
- Exchange information about media and client capability, such as resolution and codecs.
- Communicate streaming audio, video, or data.
To acquire and communicate streaming data, WebRTC implements the following APIs:
- MediaStream gets access to data streams, such as from the user's camera and microphone.
- RTCPeerConnection enables audio or video calling with facilities for encryption and bandwidth management.
- RTCDataChannel enables peer-to-peer communication of generic data.
(There is detailed discussion of the network and signaling aspects of WebRTC later.)
MediaStream API (also known as getUserMedia API)
The MediaStream API represents synchronized streams of media. For example, a stream taken from camera and microphone input has synchronized video and audio tracks. (Don't confuse MediaStreamTrack with the <track> element, which is something entirely different.)
Probably the easiest way to understand the MediaStream API is to look at it in the wild:
- In your browser, navigate to WebRTC samples getUserMedia.
- Open the console.
- Inspect the stream variable, which is in global scope.
Each MediaStream has an input, which might be a MediaStream generated by getUserMedia(), and an output, which might be passed to a video element or an RTCPeerConnection.
The getUserMedia() method takes a MediaStreamConstraints object parameter and returns a Promise that resolves to a MediaStream object.
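For example, here is a minimal sketch of a call that requests both microphone and camera access; the constraint values and logging are illustrative only:

// Request audio and video. The returned promise resolves to a MediaStream.
const constraints = {audio: true, video: true};
navigator.mediaDevices.getUserMedia(constraints)
  .then((stream) => {
    // stream is a MediaStream containing the requested tracks.
    console.log('Got MediaStream:', stream);
  })
  .catch((error) => {
    // The user denied access or no matching device was found.
    console.error('getUserMedia() error:', error);
  });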
Each MediaStream has a label, such as 'Xk7EuLhsuHKbnjLWkW4yYGNJJ8ONsgwHBvLQ'. An array of MediaStreamTracks is returned by the getAudioTracks() and getVideoTracks() methods.
For the getUserMedia example, stream.getAudioTracks() returns an empty array (because there's no audio) and, assuming a working webcam is connected, stream.getVideoTracks() returns an array of one MediaStreamTrack representing the stream from the webcam. Each MediaStreamTrack has a kind ('video' or 'audio'), a label (something like 'FaceTime HD Camera (Built-in)'), and represents one or more channels of either audio or video. In this case, there is only one video track and no audio, but it is easy to imagine use cases where there are more, such as a chat app that gets streams from the front camera, rear camera, microphone, and an app sharing its screen.
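A quick sketch of that inspection in the console, assuming a video-only stream like the one in the sample; the logged values are illustrative:

// Assumes stream came from getUserMedia({audio: false, video: true}).
console.log(stream.getAudioTracks());  // [] because no audio was requested.
const videoTracks = stream.getVideoTracks();
console.log(videoTracks.length);       // 1
console.log(videoTracks[0].kind);      // 'video'
console.log(videoTracks[0].label);     // For example, 'FaceTime HD Camera (Built-in)'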
A MediaStream can be attached to a video element by setting the srcObject attribute. Previously, this was done by setting the src attribute to an object URL created with URL.createObjectURL(), but this has been deprecated.
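A sketch of the current approach, assuming a <video> element selected as localVideo (the element and its selector are hypothetical):

const localVideo = document.querySelector('#localVideo');
navigator.mediaDevices.getUserMedia({video: true})
  .then((stream) => {
    // Attach the stream directly. No object URL is needed.
    localVideo.srcObject = stream;
  })
  .catch((error) => console.error('getUserMedia() error:', error));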
The MediaStreamTrack is actively using the camera, which takes resources, and keeps the camera open and camera light on. When you are no longer using a track, make sure to call track.stop() so that the camera can be closed.
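For example, a small sketch that releases every device held by a stream obtained earlier:

// Stop all tracks so the devices are released and the camera light turns off.
stream.getTracks().forEach((track) => track.stop());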
getUserMedia can also be used as an input node for the Web Audio API:
// Cope with browser differences.
let audioContext;
if (typeof AudioContext === 'function') {
audioContext = new AudioContext();
} else if (typeof webkitAudioContext === 'function') {
audioContext = new webkitAudioContext(); // eslint-disable-line new-cap
} else {
console.log('Sorry! Web Audio not supported.');
}
// Create a filter node.
const filterNode = audioContext.createBiquadFilter();
// See https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#BiquadFilterNode-section
filterNode.type = 'highpass';
// Cutoff frequency. For highpass, audio is attenuated below this frequency.
filterNode.frequency.value = 10000;
// Create a gain node to change audio volume.
const gainNode = audioContext.createGain();
// Default is 1 (no change). Less than 1 means audio is attenuated
// and vice versa.
gainNode.gain.value = 0.5;
navigator.mediaDevices.getUserMedia({audio: true}).then((stream) => {
// Create an AudioNode from the stream.
const mediaStreamSource =
audioContext.createMediaStreamSource(stream);
mediaStreamSource.connect(filterNode);
filterNode.connect(gainNode);
// Connect the gain node to the destination. For example, play the sound.
gainNode.connect(audioContext.destination);
});
Chromium-based apps and extensions can also incorporate getUserMedia. Adding audioCapture and/or videoCapture permissions to the manifest enables permission to be requested and granted only once upon installation. Thereafter, the user is not asked for permission for camera or microphone access.
Permission only has to be granted once for getUserMedia(). First time around, an Allow button is displayed in the browser's infobar. HTTP access for getUserMedia() was deprecated by Chrome at the end of 2015 due to it being classified as a Powerful feature.
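Because modern browsers only expose getUserMedia() in secure contexts (HTTPS or localhost), a quick feature check is a reasonable safeguard; this snippet is only a sketch:

// getUserMedia() is only available in secure contexts in modern browsers.
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
  console.log('getUserMedia() is available.');
} else {
  console.log('getUserMedia() is not supported or this is not a secure context.');
}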
The intention is potentially to enable a MediaStream for any streaming data source, not only a camera or microphone. This would enable streaming from stored data or arbitrary data sources, such as sensors or other inputs.
getUserMedia() really comes to life in combination with other JavaScript APIs and libraries:
- Webcam Toy is a photobooth app that uses WebGL to add weird and wonderful effects to photos that can be shared or saved locally.
- FaceKat is a face-tracking game built with headtrackr.js.
- ASCII Camera uses the Canvas API to generate ASCII images.
Constraints
Constraints can be used to set values for video resolution for getUserMedia(). This also allows support for other constraints, such as aspect ratio; facing mode (front or back camera); frame rate, height, and width; and an applyConstraints() method.
For an example, see WebRTC samples getUserMedia: select resolution.
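As a rough sketch, a constraints object requesting HD video from the front camera might look like the following; the specific values are illustrative, and applyConstraints() can adjust a running track later:

const constraints = {
  audio: true,
  video: {
    width: {ideal: 1280},
    height: {ideal: 720},
    facingMode: 'user', // Front camera on mobile devices.
    frameRate: {ideal: 30}
  }
};
navigator.mediaDevices.getUserMedia(constraints)
  .then((stream) => {
    const [videoTrack] = stream.getVideoTracks();
    // Adjust the already-running track, for example to lower the frame rate.
    return videoTrack.applyConstraints({frameRate: {max: 15}});
  })
  .catch((error) => console.error('Constraint error:', error));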
One gotcha: getUserMedia constraints may affect the available configurations of a shared resource. For example, if a camera was opened in 640 x 480 mode by one tab, another tab will not be able to use constraints to open it in a higher-resolution mode because it can only be opened in one mode. Note that this is an implementation detail. It would be possible to let the second tab reopen the camera in a higher resolution mode and use video processing to downscale the video track to 640 x 480 for the first tab, but this has not been implemented.
Setting a disallowed constraint value gives a DOMException or an OverconstrainedError if, for example, a requested resolution is not available. To see this in action, see WebRTC samples getUserMedia: select resolution for a demo.
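A sketch of handling that failure; the 8K resolution requested here is deliberately unrealistic:

navigator.mediaDevices.getUserMedia({
  video: {width: {exact: 7680}, height: {exact: 4320}}
})
  .then((stream) => console.log('Unexpectedly got 8K video:', stream))
  .catch((error) => {
    if (error.name === 'OverconstrainedError') {
      // error.constraint names the constraint that could not be satisfied.
      console.error('Constraint not satisfiable:', error.constraint);
    } else {
      console.error('getUserMedia() error:', error);
    }
  });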
Screen and tab capture
Chrome apps also make it possible to share a live video of a single browser tab or the entire desktop through the chrome.tabCapture and chrome.desktopCapture APIs. (For a demo and more information, see Screensharing with WebRTC. The article is a few years old, but it's still interesting.)
It's also possible to use screen capture as a MediaStream source in Chrome using the experimental chromeMediaSource constraint. Note that screen capture requires HTTPS and should only be used for development due to it being enabled through a command-line flag as explained in this post.
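As an aside, browsers now also ship the standardized getDisplayMedia() API for screen sharing, which is a different mechanism from the Chrome-specific constraint described above; this is only a sketch:

// Prompt the user to pick a screen, window, or tab to share.
// Must usually be called from a user gesture, such as a button click.
navigator.mediaDevices.getDisplayMedia({video: true})
  .then((screenStream) => {
    // The stream can be shown locally or sent over an RTCPeerConnection.
    document.querySelector('video').srcObject = screenStream;
  })
  .catch((error) => console.error('getDisplayMedia() error:', error));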
Signaling: Session control, network, and media information
WebRTC uses RTCPeerConnection to communicate streaming data between browsers (also known as peers), but also needs a mechanism to coordinate communication and to send control messages, a process known as signaling. Signaling methods and protocols are not specified by WebRTC. Signaling is not part of the RTCPeerConnection API.
Instead, WebRTC app developers can choose whatever messaging protocol they prefer, such as SIP or XMPP, and any appropriate duplex (two-way) communication channel. The appr.tc example uses XHR and the Channel API as the signaling mechanism. The codelab uses Socket.io running on a Node server.
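As one hypothetical illustration (not appr.tc's or the codelab's actual implementation), a minimal client-side signaling channel wrapped around a WebSocket might look like this; the server URL and message format are assumptions:

// A minimal SignalingChannel sketch that JSON-encodes messages over a WebSocket.
class SignalingChannel {
  constructor(url = 'wss://signaling.example.org') {
    this.onmessage = null;
    this.socket = new WebSocket(url);
    this.socket.onmessage = (event) => {
      // Parse incoming JSON and hand the message object to the handler.
      if (this.onmessage) {
        this.onmessage(JSON.parse(event.data));
      }
    };
  }
  send(message) {
    // Stringify outgoing messages before sending them to the signaling server.
    this.socket.send(JSON.stringify(message));
  }
}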
Signaling is used to exchange three types of information:
- Session control messages: to initialize or close communication and report errors.
- Network configuration: to the outside world, what's your computer's IP address and port?
- Media capabilities: what codecs and resolutions can be handled by your browser and the browser it wants to communicate with?
The exchange of information through signaling must have completed successfully before peer-to-peer streaming can begin.
For example, imagine Alice wants to communicate with Bob. Here's a code sample from the W3C WebRTC spec, which shows the signaling process in action. The code assumes the existence of some signaling mechanism, represented here by the SignalingChannel class. Also note that on Chrome and Opera, RTCPeerConnection is currently prefixed.
// handles JSON.stringify/parse
const signaling = new SignalingChannel();
const constraints = {audio: true, video: true};
const configuration = {iceServers: [{urls: 'stuns:stun.example.org'}]};
const pc = new RTCPeerConnection(configuration);
// Send any ice candidates to the other peer.
pc.onicecandidate = ({candidate}) => signaling.send({candidate});
// Let the "negotiationneeded" event trigger offer generation.
pc.onnegotiationneeded = async () => {
try {
await pc.setLocalDescription(await pc.createOffer());
// Send the offer to the other peer.
signaling.send({desc: pc.localDescription});
} catch (err) {
console.error(err);
}
};
// Once remote track media arrives, show it in remote video element.
pc.ontrack = (event) => {
// Don't set srcObject again if it is already set.
if (remoteView.srcObject) return;
remoteView.srcObject = event.streams[0];
};
// Call start() to initiate.
async function start() {
try {
// Get local stream, show it in self-view, and add it to be sent.
const stream =
await navigator.mediaDevices.getUserMedia(constraints);
stream.getTracks().forEach((track) =>
pc.addTrack(track, stream));
selfView.srcObject = stream;
} catch (err) {
console.error(err);
}
}
signaling.onmessage = async ({desc