Last month, we hosted Come and Play — an audio storytelling hackathon where artists, producers, developers, and designers came together at the Thoughtworks offices in San Francisco to find new and fun ways to tell stories with audio (the event was co-organized by Audiosear.ch and Buzzfeed, and sponsored by Stitcher and Detour).  

Today, we want to share the eight amaaaazing projects developed by the teams who participated, plus one outlier. Broadly speaking, they fell into three categories: sharing/discovery, context/commentary, and audience participation. These projects will make you think about who gets to be a creator, the ways we use audio in our daily lives, the editorial role of the community, and more. (And if you want to start with a quick primer on what exactly a hackathon is, check out this conversation between two of the participants, Sonia Paul and Claire Mullen.)

Sharing and discovery

Aud.io

It’s no secret that audio as a medium is sharing-challenged. It’s also tough to discover. So what can we do about it?

Aud.io is a crowdsourced library of short audio clips (10 seconds and under) where you can upload, download, and discover soundbites (think Giphy, but for audio). Users can also share soundbites to social media platforms and find the original audio source. In the future, the site could also include trending hashtags and curated sections on specific themes.

The team envisioned three user types as they developed Aud.io: the serious producer/creator (a professional who wants a way to get their work into the world), the sharer (a millennial, active social media user), and the consumer (not necessarily a podcast listener, but someone who would listen if a show were trending or getting a lot of buzz).

Imagine if, the next time someone said oops, you could send them a clip of Britney Spears’s 2000 hit, “Oops!… I Did It Again.” Whether sharing a clip of President Trump bemoaning his bad fortune or an inspirational quote from a favorite artist, Aud.io wants to make sharing quick, simple, and intuitive.

Team members: Seth Benton, Emma Cillekens, Carolyn Han, Karen Hao, Fernando Hernández, Eli Lyonhart

Technology used: HTML/CSS, JavaScript, Node.js, repurposed WNYC’s Audiogram code

Aud.io slides

Tappas

Tappas is a sampling technology that lets listeners quickly judge whether a podcast is worth their listening time by providing a short audio sample upfront, along with playable episode transcripts for their perusal.

Users search Tappas by topic, person, place, etc., and get back a grid of podcast shows. Hover the mouse over any of the results (or hold down your finger on a smartphone) to hear a short sample of the podcast. If the listener wants to hear more, a double click/tap takes them to the next screen: a transcript of the episode. Using a mouse or finger, the listener can scroll through the transcript to drill into a specific portion of the episode they want to hear, such as a guest interview.
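
To give a concrete sense of that interaction, here’s a minimal sketch of how the hover-to-sample behavior could be wired up in the browser. This is an illustration only, not the team’s actual code; the CSS class and data attribute names are made up for the example.

```javascript
// Attach preview behavior to each show tile in the results grid.
// Assumes each tile carries data-sample-url and data-transcript-url
// attributes (hypothetical names, not from the Tappas code).
document.querySelectorAll('.show-tile').forEach((tile) => {
  const preview = new Audio(tile.dataset.sampleUrl);

  // Play a short sample while the pointer rests on the tile.
  tile.addEventListener('mouseenter', () => preview.play());
  tile.addEventListener('mouseleave', () => {
    preview.pause();
    preview.currentTime = 0;
  });

  // A double click (or double tap) drills into the transcript view.
  tile.addEventListener('dblclick', () => {
    window.location.href = tile.dataset.transcriptUrl;
  });
});
```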

Team members: Erika Aguilar, Edvin Besic, Laurian Gridinoc, Iris Jong, Savanna Nilsen, Brian Underwood

Technology used: Audiosear.ch, Descript

Code repo

Audiofeels

Audio is fundamentally a feeling-driven medium: we can’t help but react, sometimes deeply, to what we’re listening to. But there’s no way to share that auditory emotional reaction; instead, we’re forced to resort to emoji and words. Audiofeels wants to change that. This platform allows users to add auditory emotional reactions to podcasts, with easy-to-use peer-to-peer sharing to let others know how you felt about something.

The premise of Audiofeels is that many podcast listeners want to react to and share the pieces they’re hearing. The “feels” get sent over mobile messenger (iMessage, Facebook Messenger, WhatsApp, etc.), and reactions like yes, haha!, mmm!, and ugh overlay the audio so the user’s reaction is heard concurrently with the original piece.
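
To make the “heard concurrently” idea concrete, here’s a minimal sketch of how a reaction clip could be mixed on top of a playing episode with the Web Audio API. It assumes the podcast plays in an audio element and the reaction is a short pre-decoded clip; the function names are illustrative, not from the team’s implementation.

```javascript
const ctx = new AudioContext();

// Fetch and decode a short reaction clip (haha!, mmm!, ugh, ...).
async function loadClip(url) {
  const res = await fetch(url);
  return ctx.decodeAudioData(await res.arrayBuffer());
}

// When playback reaches the reaction's timestamp, play the clip on
// top of the episode rather than pausing it.
function playReactionAt(podcastEl, reactionBuffer, timestamp) {
  podcastEl.addEventListener('timeupdate', function handler() {
    if (podcastEl.currentTime >= timestamp) {
      podcastEl.removeEventListener('timeupdate', handler);
      const src = ctx.createBufferSource();
      src.buffer = reactionBuffer;
      src.connect(ctx.destination); // mixes with the episode's output
      src.start();
    }
  });
}
```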

See the Pen AudioFeels by Benjamin Titcomb (@Ravenstine) on CodePen.

If Audiofeels gets continued development, the team would like to add the functionality to podcatchers, go beyond peer-to-peer sharing, and provide users the ability to turn it off and on as they want.

Team members: Larry Berger, Katherine Rae Mondo, Ash Ngu, Eric Silver, Ben Titcomb

Technology used: JavaScript, Wavesurfer.js

Audiofeels slides, demo

Context and commentary

Call Collect

Collecting audio samples from a wide group of people, or directly from your audience, is difficult, time-consuming, and costly. Call Collect is a tool that makes it easy for listeners to submit their own audio in response to questions or prompts from the host.

It works like an email blast, but to phone numbers. Say you are a host or producer: you put out a call to action asking interested members of your community to provide their phone numbers. Then, whenever the time is right, you record a prompt in Call Collect and send it out. Users receive a call from their favorite host asking them for a story, reaction, or other response of some kind. They press 1 to start recording, and then send their response back.
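
The team built this flow in Python on top of the Twilio API; as a rough illustration (in the JavaScript used for the other sketches in this post), here’s how the call-out-and-record loop could look with Twilio’s Node helper library. The routes, URLs, and environment variable names are hypothetical.

```javascript
const twilio = require('twilio');
const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

// Ring each subscriber; Twilio fetches the call instructions (TwiML)
// from the URL we hand it.
async function sendPrompt(phoneNumbers) {
  for (const number of phoneNumbers) {
    await client.calls.create({
      to: number,
      from: process.env.TWILIO_NUMBER,
      url: 'https://example.com/prompt', // serves the TwiML below
    });
  }
}

// Play the host's recorded prompt, then wait for the listener to press 1.
function promptTwiML() {
  const response = new twilio.twiml.VoiceResponse();
  const gather = response.gather({ numDigits: 1, action: '/record' });
  gather.play('https://example.com/host-prompt.mp3');
  return response.toString();
}

// Record up to five minutes of the listener's response.
function recordTwiML() {
  const response = new twilio.twiml.VoiceResponse();
  response.say('After the beep, record your response. Press pound when you are done.');
  response.record({ maxLength: 300, finishOnKey: '#' });
  return response.toString();
}
```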

What could Call Collect be used for?

  • Crowdsource stories and ideas for future shows
  • Gather audience reactions and feedback
  • Create mash-ups of multiple characters/narrators
  • Gather sound on a topic from multiple locations
  • Record actor voiceovers for your audio drama
  • Conduct pre-interviews and interviews
  • Gather sound effects

The choice of phone — as opposed to a messenger service or an app — was deliberate. Phones are accessible to nearly everyone, even those with landlines only. There’s virtually no barrier to entry: no clicks, no downloads. Call time is currently limited to five minutes; in the future that limit would probably go down. Call Collect would also like to offer the option for respondents to re-record if they wish.

Technology used: Twilio API, Python, Heroku, GitHub

Team members: Leah Culver, Alec Glassford, Brandon Grugle, Claire Mullen, Sonia Paul

Call Collect slides, Medium post

duck under

Duck under asks the question: Who gets to be a storyteller, and can we expand that definition?

This listening platform brings producers and listeners together to add context to audio, making stories more inclusive and creating new ways of telling them through expanded participation. Using duck under, a producer can provide annotating clips for the main story that they couldn’t include in the original piece. Audiophiles and listeners can use it to share their thoughts throughout the story, giving the audience a voice as well.

Duck under gives audio stories a more genuine, less packaged feel and provides a tool for adding nuance and depth to reporting. Comments are recorded and shared in the duck under app, and appear underneath the audio. The audio of the original story fades out when a comment plays, and fades back in when the comment is complete (hence the name).
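
That fade behavior is classic audio “ducking,” which the Web Audio API expresses naturally as gain automation. Here’s a minimal sketch under that assumption; the element ID, ramp times, and function names are illustrative rather than taken from the team’s code.

```javascript
const ctx = new AudioContext();

// Route the story's <audio> element through a gain node we can automate.
const storyEl = document.querySelector('#story'); // hypothetical element ID
const storyGain = ctx.createGain();
ctx.createMediaElementSource(storyEl).connect(storyGain).connect(ctx.destination);

// Fade the story out, play the comment, then fade the story back in.
function playComment(commentBuffer) {
  const now = ctx.currentTime;
  const src = ctx.createBufferSource();
  src.buffer = commentBuffer;
  src.connect(ctx.destination);

  storyGain.gain.setValueAtTime(1.0, now);
  storyGain.gain.linearRampToValueAtTime(0.0, now + 0.5); // duck under
  src.onended = () => {
    const t = ctx.currentTime;
    storyGain.gain.setValueAtTime(0.0, t);
    storyGain.gain.linearRampToValueAtTime(1.0, t + 0.5); // fade back in
  };
  src.start(now + 0.5);
}
```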

Team members: Adwoa Boakye, Katie Briggs, Reid Delahunt, Sarah Siplak, Shindo Strzelczyk, Stephen Suen, Todd Whitney

Technology used: Sketch, Mockuuups (wireframes/mockups/identity work), Pro Tools, Adobe Audition (audio editing), Glitch, JavaScript, Node/Express, jQuery, jPlayer, SQLite/Sequelize, Recordmp3.js, LAME (development)

Duck under slides, code

Hot Takes

Hot Takes is a commenting tool that allows you to record and share audio comments on podcasts, so that listeners can respond to a podcast with their voices.

Hot Takes has two primary functions:

First, you can add your comments to an episode for public visibility, and/or listen through an entire podcast episode, toggling whether you want to hear any comments affiliated with that episode. This is like a choose-your-own-adventure, in that you can dig into the marginalia of any given portion of an episode, or not.

Second, you can excerpt a clip of the episode, up to 30 seconds long, and add your own commentary to the clip for sharing with your own social media circles. This feature is much like the This American Life Shortcut tool, but with the option to add your own audio hot take on the clip.
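
For a concrete sense of the excerpting step, here’s a minimal sketch of cutting a capped 30-second clip out of a decoded episode with the Web Audio API. It’s an illustration only; the Hot Takes code may do this quite differently (for example, server-side).

```javascript
// Copy a window of the episode into a new AudioBuffer, capped at 30s.
function excerptClip(ctx, episodeBuffer, startSec, endSec) {
  const duration = Math.min(endSec - startSec, 30);
  const { sampleRate, numberOfChannels } = episodeBuffer;
  const clip = ctx.createBuffer(
    numberOfChannels,
    Math.floor(duration * sampleRate),
    sampleRate
  );
  for (let ch = 0; ch < numberOfChannels; ch++) {
    const start = Math.floor(startSec * sampleRate);
    const samples = episodeBuffer
      .getChannelData(ch)
      .subarray(start, start + clip.length);
    clip.copyToChannel(samples, ch);
  }
  return clip; // ready to mix with the user's recorded hot take
}
```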

Team members: Markus Ahlstrand, Ting-Ju Chen, Joshua Curry, Aayush Iyer, Will Rogers, Emily Shaw

Hot Takes code, demo, audio cards

Audience participation

VoxAnon

Voices are, by their nature, intensely personal, especially when they’re telling stories that deal with trauma, embarrassment, or something revealing. VoxAnon wants to invite more voices, and get more stories out into the world, by giving anyone the comfort and space to be a potential creator.

VoxAnon is a platform for sharing audio anonymously. The VoxAnon website stores and categorizes content, suggesting storytelling prompts for users, but is essentially an open platform similar to PostSecret.

Users are provided with an easy link to begin chatting with a Facebook Messenger bot, which guides them through the process of contributing audio content anonymously. They have the option to use various vocal filters to make sure their story isn’t linked to their identity, or to add a little something creative to their posting.
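
The team used a Python wrapper for the World Vocoder for its vocal filters; as a much cruder stand-in that still illustrates the “disguise the voice” idea, the Web Audio API can shift a recording’s pitch via detune. This sketch is illustrative only, not the team’s approach.

```javascript
const ctx = new AudioContext();

// Play a decoded recording shifted down roughly four semitones.
// Note that detune also changes playback speed as a side effect,
// unlike a true vocoder-based pitch shift.
function playDisguised(buffer, cents = -400) {
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.detune.value = cents;
  src.connect(ctx.destination);
  src.start();
}
```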

Each sound file is automatically transcribed into a searchable transcript. A showcase page on the VoxAnon website highlights different content, and users can set the privacy settings for their audio, allowing journalists and producers to use the site as a way to find tape for their own stories.

Team members: Ted Han, Miko Lee, Cyrus Nemati, Emily Saltz, Maya Sugarman, Robert M Ochshorn

Technology used: Ruby on Rails, Hypersolo Facebook Messenger, Python Wrapper for World Vocoder, Lower Quality Gentle, HTML, CSS, JS, jQuery

VoxAnon slides, blog post

BackTalk

It’s not easy for audiences to interact with a show’s content and share their own experiences with friends and the show itself. BackTalk puts listeners in the hot seat and allows them to answer the questions of their favorite interviewers.

Here’s how it works: a user chooses a question from a generator populated by producers, then records their own answer. The short Q&A becomes a shareable clip for social media. BackTalk also provides a link to the original interview so the audience can see how the guest on the show answered.
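
Since the team’s stack lists the Web Audio API and Recorder.js on the client, here’s a minimal sketch of how the record-an-answer step could look with that pairing. The upload route and form field names are hypothetical.

```javascript
// Assumes recorder.js is loaded globally as Recorder.
const ctx = new AudioContext();
let recorder;

// Start capturing the user's answer from the microphone.
async function startAnswer() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const source = ctx.createMediaStreamSource(stream);
  recorder = new Recorder(source, { numChannels: 1 });
  recorder.record();
}

// Stop, export a WAV blob, and send it to the back end.
function finishAnswer(questionId) {
  recorder.stop();
  recorder.exportWAV((blob) => {
    const form = new FormData();
    form.append('question_id', questionId);
    form.append('answer', blob, 'answer.wav');
    fetch('/answers', { method: 'POST', body: form }); // hypothetical route
  });
}
```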

Producers will use BackTalk because they’re already isolating teasers to share at the top of the show or promote elsewhere — if they also upload them to BackTalk, users can interact directly with those clips, allowing audience participation and driving traffic back to the episode. The audience will use BackTalk as a funny way to show how they would have responded to the same question.

In the future, if the team were to develop BackTalk further, they might introduce the option to upload questions or teasers from upcoming episodes, to help creators find the content they’re looking for. The team could also see producers using it in reverse: when they have a hot piece of tape they want to use but aren’t sure how, they could use BackTalk to solicit audience responses they can then produce and share. The team also made a Telegram bot (demo video here).

Team members: Ninna Gaensler-Debs, Phillip Hermans, Katrina Huber-Juma, Jenny Luna, Guillermo Mario Narvaja, Jim Sam, Dario Slavazza

Technology used: Web Audio API and Recorder.js (client side), Flask (back end), FFmpeg, RadioCut (hosting)

The outlier

Tasty Machine Learning

Some of the data that Audiosear.ch collects concerns tastemakers (people or organizations who curate content) and the podcasts they write about in articles, newsletters, and tweets. But many of the podcasts mentioned in tweets are self-promotional — and while there’s nothing wrong with someone promoting their own podcast, we want to be able to identify recommendations that don’t come from the creator.

Tasty Machine Learning is a project to develop a classifier that determines whether a tweet is self-promotional and, if it is, flags it in the Audiosear.ch tasty database. We started with a corpus of 1,000 tweets containing links to podcasts; each tweet was then evaluated to assess whether it was self-promotional.

Tali used a Maximum Entropy model from the Natural Language Toolkit (NLTK), which allows you to specify certain features (for example, whether a specific word appears in a tweet) that the model then learns from. In this case, the model looked at the words in a tweet: each word was a feature that helped determine whether the tweet was self-promotional. Since each of the 1,000 tweets was selected specifically because it linked to a podcast, we followed the link to get text that was then used to determine the values of some additional features.
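
The project itself is in Python with NLTK; purely to illustrate what “each word was a feature” means, here’s the shape of that feature extraction in JavaScript. The feature-naming convention is made up for the example.

```javascript
// Turn a tweet into a bag of boolean word-presence features,
// the kind of input a maximum entropy classifier learns from.
function tweetFeatures(tweetText) {
  const features = {};
  for (const word of tweetText.toLowerCase().split(/\s+/)) {
    if (word) features[`contains(${word})`] = true;
  }
  return features;
}

// tweetFeatures('Check out my new podcast!')
// => { 'contains(check)': true, 'contains(out)': true, 'contains(my)': true, ... }
```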

The goal for this project is to introduce a new type of recommendation data into the Audiosear.ch tasty database and get more information about which people are associated with which podcasts. In this exercise, we determined that around 27% of the 1,000 tweets were not self-promotional. As this project develops, and we get more information, we’ll be retraining the model to deliver a higher degree of accuracy.

Team: Tali Singer

Technology used: Audiosear.ch, Natural Language Toolkit (NLTK)

Tasty Machine Learning slides

***

It was a fantastic event that could not have happened without the tireless effort of Lam Thuy Vo (Buzzfeed), Jared Hatch (Thoughtworks), and Anne Wootton (Audiosear.ch). To hear more from Anne and Lam about the event, check out this interview with them conducted by hackathon participant Emma Cillekens. You can also listen to a brief summary of the key takeaways and experiences in this “post-event podcast” from participant Adwoa Boakye. Top photo by Lam Thuy Vo.