Cameras, microphones, and more—your device is alive with sensors. Let's see what you can do.
This Noop is a little different from the others: there's no API!
Accessing devices can be a little intimidating, but modern browsers have made it easy. Read the documentation below for tips on how to get started, and then use the other Noop APIs to enliven the data with random colors, polygons, or sounds to create something awesome.
Share what you make with us on Twitter (#noopschallenge) or on the GitHub Community page for this challenge.
On modern browsers, it is (relatively) simple to access the user's sensors. getUserMedia() is the workhorse you'll need to learn how to work with.
getUserMedia() takes a constraints object where you ask for what you need. If you want access to both audio and video, pass this object: { audio: true, video: true }
You can specify a lot more details like pixel size, which camera, audio sample rate, and more.
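For example, a constraints object that asks for a particular resolution and the front-facing camera might look like this (these are standard MediaTrackConstraints fields; exact support varies by browser):

const constraints = {
  audio: true,
  video: {
    width:  { ideal: 1280 },  // preferred, not guaranteed
    height: { ideal: 720 },
    facingMode: 'user'        // the front-facing camera on phones
  }
};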
getUserMedia() returns a promise, because access to user media requires the user to grant permission, which halts execution until the user acts.
This means your code will need to request access and then asynchronously gain access to the media stream. Here's a standard implementation:
const constraints = { audio: true, video: true };

navigator.mediaDevices.getUserMedia(constraints)
  .then(function(stream) {
    /* do something amazing with the stream */
  })
  .catch(function(err) {
    /*
      handle what happens if the user doesn't
      grant permission or there's another error.
    */
  });
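If you prefer async/await, the same request can be written like this (a sketch equivalent to the version above; the function name is just a placeholder):

async function startMedia() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia(constraints);
    /* do something amazing with the stream */
  } catch (err) {
    /* handle permission denial or another error */
  }
}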
If you haven't used promises before, the TL;DR is that the code inside the promise handlers (in this case, the then and catch callbacks) runs "later," while the rest of your code keeps executing. "Later" is whenever the promise resolves.
When your promise resolves, you can access your video stream. The first thing to do is show the video:
const video = document.querySelector('video');
const constraints = { audio: true, video: true };

navigator.mediaDevices.getUserMedia(constraints)
  .then(function(stream) {
    video.srcObject = stream;
    video.play();
  }).catch(function(err) {
    // handle error
  });
This will set the stream as the source of your video tag on screen. You can style your video tag however you like; it's part of your HTML.
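For reference, here's the minimal markup the snippets on this page assume (a sketch; the sizes are just one choice). The playsinline and muted attributes help satisfy mobile autoplay policies:

<video autoplay playsinline muted></video>
<canvas width="800" height="450"></canvas>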
🎩 Congratulations! You've now accessed your video camera and put the image on a webpage.
But we're just getting started, because your video tag is also accessible to JavaScript. You can capture those pixels and draw them to a canvas, manipulating the pixels along the way.
Using the video as a source, you can use the canvas API's drawImage() to render the pixels.
const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
const constraints = { audio: true, video: true };

navigator.mediaDevices.getUserMedia(constraints)
  .then(function(stream) {
    video.srcObject = stream;
    video.play();
    kickoff();
  }).catch(function(err) {
    console.error("Shoot, we need to access your camera to make this demo work.");
  });

function kickoff() {
  function drawVideo() {
    // copy the current video frame onto the canvas
    ctx.drawImage(video, 0, 0, 800, 450);
    // read the pixels back out so we can manipulate them
    const frameData = ctx.getImageData(0, 0, 800, 450);
    ctx.putImageData(frameData, 0, 0);
    // schedule the next frame
    requestAnimationFrame(drawVideo);
  }
  // kick off the render loop
  requestAnimationFrame(drawVideo);
}
Now that we're drawing the pixels to canvas as data, we can manipulate them. Let's invert the colors.
Add a call to invertColors(frameData) to the loop above, just before putImageData.
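Applied to drawVideo(), that change looks like this:

function drawVideo() {
  ctx.drawImage(video, 0, 0, 800, 450);
  const frameData = ctx.getImageData(0, 0, 800, 450);
  invertColors(frameData);  // flip every pixel's RGB values
  ctx.putImageData(frameData, 0, 0);
  requestAnimationFrame(drawVideo);
}

And here's invertColors itself: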
function invertColors(frameData) {
  // ImageData stores pixels as a flat [R, G, B, A, R, G, B, A, ...] array
  const data = frameData.data;
  for (let i = 0; i < data.length; i += 4) {
    // invert RGB; leave alpha (data[i + 3]) alone
    data[i] = data[i] ^ 255;
    data[i + 1] = data[i + 1] ^ 255;
    data[i + 2] = data[i + 2] ^ 255;
  }
}
🌈 Huzzah! We're manipulating video data in a canvas, and now we can pull in elements from the other Noops (like Hexbot) to spice up our video.
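For example, here's a rough sketch that tints each frame with a random Hexbot color (assuming Hexbot's documented response shape of { "colors": [{ "value": "#RRGGBB" }] }):

let tint = null;

fetch('https://api.noopschallenge.com/hexbot')
  .then(function(response) { return response.json(); })
  .then(function(json) { tint = json.colors[0].value; });

// call this from drawVideo(), after putImageData()
function applyTint() {
  if (!tint) return;
  ctx.fillStyle = tint;
  ctx.globalAlpha = 0.25;  // keep the video visible underneath
  ctx.fillRect(0, 0, 800, 450);
  ctx.globalAlpha = 1.0;
}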
🔒 Note about security: you can't access getUserMedia from a local file (meaning you can't just drag the HTML file into your browser). The page needs to be served from a secure context: https://, or localhost during development. But don't worry, that's easy! You can use http-server to quickly serve a file on your local computer, and you can use GitHub Pages to easily share your creation with the world.
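If you have Node installed, http-server is one command away; it serves the current directory, by default at http://localhost:8080:

npx http-server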
Almost every device has a camera, and it's easy to access it with your web browser.
Libraries to help you get started:
- p5js: an expressive framework for working with media and JavaScript, with a library dedicated to working with browser media: p5.dom
- react-webcam: a popular and well-supported library for accessing the webcam in React. Accessing the webcam is as simple as adding a tag to your React app, and it comes with a built-in method for taking a screenshot (see the sketch after this list).
- three.js: a popular 3D library with ready-made webcam code (example and code).
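For instance, a hypothetical snap-a-photo component with react-webcam might look roughly like this (check the library's README for the current props and API):

import React, { useRef } from 'react';
import Webcam from 'react-webcam';

function Camera() {
  const webcamRef = useRef(null);

  const snap = () => {
    // getScreenshot() returns the current frame as a base64-encoded image
    const imageSrc = webcamRef.current.getScreenshot();
    console.log(imageSrc);
  };

  return (
    <div>
      <Webcam ref={webcamRef} screenshotFormat="image/jpeg" />
      <button onClick={snap}>Take photo</button>
    </div>
  );
}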
Audio data is a little more subtle to work with than camera data, but you can do fun things like visualizing the sound waves or using the microphone as a game controller.
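Before reaching for a library, note that the plain Web Audio API gets you quite far. Here's a minimal sketch that pipes the microphone into an AnalyserNode you can sample every frame:

navigator.mediaDevices.getUserMedia({ audio: true })
  .then(function(stream) {
    const audioCtx = new AudioContext();
    const source = audioCtx.createMediaStreamSource(stream);
    const analyser = audioCtx.createAnalyser();
    analyser.fftSize = 256;
    source.connect(analyser);

    const levels = new Uint8Array(analyser.frequencyBinCount);
    function sample() {
      analyser.getByteFrequencyData(levels);
      // levels now holds 128 frequency buckets (0-255 each):
      // feed them to a visualizer, or treat them as a controller
      requestAnimationFrame(sample);
    }
    requestAnimationFrame(sample);
  });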
Libraries to help you get started:
- p5js: not only does it support video, it has a library dedicated to working with sound: p5.sound
- pizzicato: play and manipulate sounds using the Web Audio API.
- waveform-data.js: from the BBC, a library that gives you representations of audio waveforms you can zoom, browse, and manipulate.
Need some inspiration? Check out this great list of Audio Visualization examples and tools from @williamjusten.
While most laptops only have microphones and cameras, phones and tablets have a wealth of sensors you can access, like the accelerometer, gyroscope, GPS, and even a light sensor. Support is spotty across devices, but we'd love to see what you come up with!
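The accelerometer, for example, is usually reachable through the devicemotion event (recent iOS versions gate it behind a DeviceMotionEvent.requestPermission() prompt):

window.addEventListener('devicemotion', function(event) {
  const acc = event.accelerationIncludingGravity;
  if (acc) {
    // x, y, and z are in m/s^2; tilt the phone and watch them change
    console.log(acc.x, acc.y, acc.z);
  }
});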
There are millions of things you can do with Cambot and your device sensors, but here are a few ideas to get you started:
- Glitch: Make a glitchy view of your webcam.
- ASCII-fy everything: Name one thing that isn't made better by ASCII art. Try this asciify library to render your webcam in text art.
- Accelerometer: Use your phone's accelerometer combined with the Mazebot to solve mazes by rolling a "ball" through them.
More about Cambot on the challenge page at noopschallenge.com