I’ve worked on a couple of audio-centered apps with other developers at Fullstack over the last two months, and one thing I’ve noticed is that audio seems to intimidate some non-musician developers, even when they’re genuinely interested in an idea for an app that incorporates audio.
Another great Web Audio Meetup here in NYC today. We got to see a voice pitch-controlled game (sing to shoot your opponent), an app that pipes audio from one location to many devices in another, and a live-coded music performance.
So what does the Web MIDI API tell you about the device you’re playing and what that device is currently doing? Let’s have a look.
To follow along, check out my fork of the Toptal Web MIDI API demo.
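Before digging into the demo, here’s a minimal sketch of the kind of thing the API exposes: a list of connected input ports (with manufacturer and name) and a stream of raw messages from whichever device you’re playing. This isn’t the demo’s code, just an illustration, and it assumes a browser that implements Web MIDI (Chrome, at the time of writing) with a controller plugged in:

```typescript
// Minimal sketch: enumerate MIDI inputs and log what the device is doing.
navigator.requestMIDIAccess().then(
  (access: MIDIAccess) => {
    access.inputs.forEach((input: MIDIInput) => {
      // Device identity: manufacturer and name come straight off the port.
      console.log(`Found input: ${input.manufacturer} ${input.name}`);

      // Live activity: each message arrives as a Uint8Array of
      // [status, data1, data2], e.g. [0x90, note, velocity] for note on.
      input.onmidimessage = (event: MIDIMessageEvent) => {
        const [status, note, velocity] = event.data ?? new Uint8Array(3);
        const command = status >> 4; // 0x9 = note on, 0x8 = note off
        console.log(`command=${command.toString(16)} note=${note} velocity=${velocity}`);
      };
    });
  },
  () => console.error("MIDI access denied or unsupported")
);
```

Play a key on your controller and you should see note and velocity values stream into the console.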
I’ll be working with a couple of other developers on a 4-day hackathon, cracking into the Web MIDI API; their work is already shining a lot of light on this API, which is currently sparsely documented.
We had a great talk at Fullstack yesterday by data scientist Ben Wellington, who writes about NYC’s wealth of urban data at I Quant NY.
He shared some examples of his work, like figuring out which NY neighborhoods Californians tend to visit based on parking ticket data (hint: Williamsburg and Bushwick) and showing how many millions of dollars Texans and Canadians pay in fines to NYC each year.