Samim Winiger, whose work we’ve covered recently, sent along his latest experiment. He used Neural-Storyteller, an open-source neural network built by Ryan Kiros, a University of Toronto PhD student specializing in machine learning, and trained on 14 million passages from romance novels. The network analyzes images and retrieves appropriate captions from its vast store of sexy knowledge, creating “little stories about images,” says Kiros.
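The retrieval step described above can be sketched in miniature: embed an image description and every stored passage, then return the passage whose vector is closest. This is a toy illustration under loose assumptions; Neural-Storyteller itself uses learned sentence embeddings, not the word-count vectors used here, and all function names are hypothetical.

```python
# Toy sketch of retrieval-style captioning: compare a query against
# stored passages by cosine similarity over word-count vectors.
from collections import Counter
import math

def embed(text):
    """Toy stand-in for a learned embedding: a word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(image_description, passages):
    """Return the stored passage most similar to the image description."""
    query = embed(image_description)
    return max(passages, key=lambda p: cosine(query, embed(p)))

passages = [
    "her heart raced as the sun set over the quiet beach",
    "he rode his horse hard across the windswept moor",
]
print(retrieve("a couple watching the sun set on a beach", passages))
# → "her heart raced as the sun set over the quiet beach"
```

In the real system, swapping the toy `embed` for a trained sentence encoder is what makes the retrieved passages feel like little stories rather than keyword matches.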
To make poorly labeled videos easier to discover, Manhattan-based video analysis startup Dextro is launching a platform that analyzes and tags the contents of publicly available videos, using algorithms to identify common scenes, objects, and speech. Mic, a news site aimed at millennials, has partnered with Dextro and will use the platform, called Sight, Sound & Motion (SSM), to discover newsworthy videos that may otherwise be difficult to find. READ MORE: This New Platform Makes The Contents Of Videos As Searchable As Text (Fast Company)
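The idea of making video contents "as searchable as text" can be sketched with an inverted index: once an analysis pipeline emits tags per video (scenes, objects, speech keywords), a tag-to-videos index makes footage queryable. This is a minimal illustration, not Dextro's API; all names and data are hypothetical.

```python
# Minimal inverted index over per-video tags, so footage can be
# queried by content the way text is queried by keywords.
from collections import defaultdict

def build_index(video_tags):
    """Map each tag to the set of video ids that contain it."""
    index = defaultdict(set)
    for video_id, tags in video_tags.items():
        for tag in tags:
            index[tag].add(video_id)
    return index

def search(index, *tags):
    """Return video ids matching all requested tags."""
    results = [index.get(t, set()) for t in tags]
    return set.intersection(*results) if results else set()

video_tags = {
    "clip_001": {"protest", "crowd", "night"},
    "clip_002": {"beach", "crowd", "day"},
    "clip_003": {"protest", "speech", "day"},
}
index = build_index(video_tags)
print(search(index, "protest", "day"))  # → {'clip_003'}
```

The hard part in practice is producing accurate tags from raw pixels and audio; once they exist, search reduces to ordinary set operations like the intersection above.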
SAN FRANCISCO — For Google, the map of the future is taking everything it knows about you and the world and plotting it in real time as you move through your life.
“We can build a whole new map for every context and every person,” said Bernhard Seefeld, product management director for Google Maps, speaking at the GigaOm Roadmap 2013 conference. “It’s a specific map nobody has seen before, and it’s just there for that moment to visualize the data.”
Echoing the early days of mapmaking, when maps told stories of discovery and forged an emotional connection with the unfolding world, Google wants to build what Seefeld called “emotional maps that reflect our real life connections and peek into the future and possibly travel there.”
Google’s context-aware maps will require refining and extending the underlying map data, and combining it with the kind of personal data from applications that powers Google Now, the company’s personal digital assistant technology.