Facial recognition, landmark detection, and even audio cues picked up through your phone’s microphone could let Instagram Video intelligently suggest a cover frame for mini-movies or even tag them, according to recent Facebook patents. The tech could route each video to the right people and put its best (motionless) foot forward so it gets viewed amongst the sea of more easily-consumed photos.
When I first discovered these patents a month ago, it was hard to envision how they’d be applied. Facebook hadn’t been putting much focus on recording video in its own app. The schematics described choosing a cover frame for videos you shot — something Facebook’s camera didn’t offer — while the patents imagined using every sensor possible to help you make that choice. It was all a little hazy.
The Instagram Video launch made it a lot clearer.
Why Cover Frames Matter
Currently, after you’ve shot a video on Instagram, you can scrub your finger across a timeline to choose what’s shown as the video’s thumbnail. This costly extra step in the publishing process makes Instagram Video feel more sluggish than Vine.
While picking a cover frame might seem mundane or annoying, it’s hugely important. Videos are just a much bigger investment to view than photos. You can glean the beauty of a classic Instagram photo as quickly or slowly as you want. With video, you’re at the mercy of the director. Deciding to watch a video is a time investment. Maybe only 15 seconds, but on mobile that can feel agonizingly long if the content is boring.
The only clues to whether that investment will be well spent are the author’s reputation, the description, and the cover frame.
Not all frames of a video are ripe for being frozen, pulled out of context, and put on display as the defining moment of a moment. Right now Facebook automatically gives you about 15 frames from across your video to choose from, though you can drag your finger to choose frames in between. There’s no suggestion of which is best, and it defaults to the first frame of the video.
But with these patents, Facebook and Instagram could pick out the most interesting moments of your video, determine who and what is in them, and recommend tags or what to use as your cover frame.
Instagram Sees Your Smile, Hears Your Laugh
The patents were granted in April 2013 and filed for in October 2011 by Facebook and its employees Andrew “Boz” Bosworth, David Garcia, and Soleio Cuervo. The patents are for Automatic Photo Capture Based on Social Components and Identity Recognition (’80), Preferred images from captured video sequence (’00), and Image selection from captured video sequence based on social components (’65).
Essentially, they describe technology for looking at each frame of a video as if it were a photo. Detection algorithms can then be used to identify people, written words, brands, and landmarks through facial and pattern recognition.
“The image capturing process may analyze frames of the sequence of video frames to identify…a place (e.g., Eiffel Tower, Golden Gate Bridge, Yosemite National Park, Hollywood), a business or an organization (e.g., a coffee shop, San Francisco Giants), or a brand or product (e.g., Coca-Cola, Louis Vuitton).”
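The patents don’t spell out an implementation, but the idea of scanning a video frame by frame and promoting repeat detections into video-level tags can be sketched roughly like this. The detector itself is stubbed out here; the frame data and threshold are hypothetical, chosen to show how a one-frame false positive might be filtered while a landmark that appears throughout the clip gets tagged:

```python
from collections import Counter

def aggregate_video_tags(frame_detections, min_fraction=0.4):
    """Collect per-frame entity detections into video-level tags.

    frame_detections: one set per frame of labels (people, landmarks,
    brands) that some recognizer claims to see in that frame. An entity
    is only tagged if it shows up in enough frames, so a single
    misfire on one frame doesn't become a tag.
    """
    counts = Counter(label for frame in frame_detections for label in frame)
    threshold = max(1, int(min_fraction * len(frame_detections)))
    return {label for label, n in counts.items() if n >= threshold}

# Hypothetical output of a face/landmark recognizer across five frames:
frames = [{"Alice"},
          {"Alice", "Golden Gate Bridge"},
          {"Alice", "Golden Gate Bridge"},
          set(),
          {"Bob"}]
print(aggregate_video_tags(frames))  # Alice and the bridge clear the bar; Bob doesn't
```

The output set could then feed tag suggestions, or be logged quietly as metadata for news feed targeting, as the next paragraph describes.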
This could let Facebook suggest formal tags of people, places, and brands in a video, or just quietly log that information as metadata to be used for determining who to surface the video to in the news feed. For example, the video could be shown more prominently to people nearby, who Like those landmarks or brands, or who are friends of those who appear in the video. Though Instagram uses an unfiltered feed, it recently added a tagging system that could take advantage of this kind of detection.
That same information could also be used to suggest the best possible cover frames, such as ones featuring people or famous places. The patents also outline picking the best frame by detecting good lighting or contrast, and even detecting preferred facial expressions. If you pan over a couch full of friends, Instagram could suggest a cover frame where they’re all well lit and smiling. The patents also describe using data from a phone’s accelerometer to pick a still, stable image for the cover frame rather than a blurry one.
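As a rough illustration, the signals the patents list — lighting, smiling faces, a steady camera — could be folded into a single per-frame score and the top frame suggested as the cover. Everything here is assumption: the measurement names, the weights, and the scoring formula are invented for the sketch, not drawn from the patents:

```python
def score_cover_frame(frame):
    """Score one candidate cover frame from hypothetical per-frame
    measurements: accelerometer motion (0 = still), overall brightness
    (0-1), and a count of detected smiling faces."""
    stability = 1.0 - min(frame["motion"], 1.0)           # penalize shaky frames
    lighting = 1.0 - abs(frame["brightness"] - 0.5) * 2   # peaks at mid exposure
    smiles = min(frame["smiling_faces"], 3) / 3.0         # saturates at 3 faces
    return 0.4 * stability + 0.3 * lighting + 0.3 * smiles

def suggest_cover(frames):
    """Return the index of the best-scoring candidate frame."""
    return max(range(len(frames)), key=lambda i: score_cover_frame(frames[i]))

candidates = [
    {"motion": 0.9, "brightness": 0.5, "smiling_faces": 0},  # shaky pan
    {"motion": 0.1, "brightness": 0.5, "smiling_faces": 2},  # steady, smiling
    {"motion": 0.0, "brightness": 0.1, "smiling_faces": 0},  # dark frame
]
print(suggest_cover(candidates))  # → 1, the steady frame with smiles
```

A real system would compute these measurements from the pixels and sensor logs; the point is just that the patent’s separate cues compose naturally into one ranking.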
What most excites me, though, is the potential use of the microphone for determining the most interesting moment of a video:
“The image selection process may analyze content of the voice segments (e.g., by using a speech recognition algorithm) for indication of importance (e.g., “Say cheese!”, “Cheese!”, “This is beautiful!”, “Amazing!”)”
That means Instagram could potentially *hear* you shouting with joy when your camera lens hovers over a beautiful sunset, skyline, or smile, and then make it easy to choose that part of the video as the cover.
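A bare-bones sketch of that audio cue, assuming a speech recognizer has already produced a timestamped transcript: scan the words for excitement cues and step back a beat so the suggested frame shows whatever prompted the reaction. The cue list, timings, and one-second offset are all hypothetical:

```python
# Hypothetical excitement cues, echoing the examples in the patent text.
EXCITEMENT_CUES = {"cheese", "beautiful", "amazing", "wow"}

def excited_moments(transcript, lookback=1.0):
    """Given (seconds, word) pairs from a speech recognizer, return
    timestamps just before each excitement cue was heard — candidate
    cover-frame moments."""
    return [max(0.0, t - lookback)
            for t, word in transcript
            if word.strip("!,.").lower() in EXCITEMENT_CUES]

transcript = [(2.1, "okay"), (5.4, "this"), (5.7, "is"),
              (6.0, "beautiful!"), (9.0, "bye")]
print(excited_moments(transcript))  # → [5.0]
```

Frames around those timestamps could then be handed to the cover-frame picker rather than sampling the whole video evenly.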
As the battle to be the premier social video app rages on between Instagram, Vine, and other competitors, serious technology like this will become a factor. The apps need to stay lightweight with a minimum number of steps before publishing. While extra features and editing tools might appeal to power users, it’s the elimination or simplification of steps in the core flow that may turn the tide for one developer or another.
Lucky for Instagram, its parent company Facebook has spent years envisioning how to smooth the media capture and sharing experience. Boz, Garcia, and Soleio seem to have had the foresight that technology would eventually make recording video almost as easy as snapping a photo. If these patents bear fruit for Instagram, consuming video could get easier as well, and we might actually watch more of the moments our friends are capturing.