Images archived in digital libraries are either born digital or scans/photos of hard-copy originals. This technology may be useful for enhancing low-quality images of historical photos and documents.
You know how in CSI, the cops always try to “enhance” a shot to zoom in and read (non-existent) details in photos? It’s amusing to the rest of us, but perhaps one day it won’t be all that impossible, thanks to artificial intelligence.
Researchers have been adopting neural networks and machine learning technologies to help computers fill in missing detail in photos.
Some consumer-ready websites are already making some of this magic accessible to you and me.
Artists beware! AI is coming for your paintbrush too… A new iOS app called Prisma uses deep learning algorithms to turn smartphone photos into stylized artworks modeled on different artistic and graphic styles.
Samim Winiger, whose work we’ve covered recently, sent along his latest experiment. He used an open-source neural network built by Ryan Kiros, a University of Toronto PhD student specializing in machine learning, and trained on 14 million passages from romance novels. Called Neural-Storyteller, the network was trained to analyze images and retrieve appropriate captions from its vast store of sexy knowledge, creating “little stories about images,” says Kiros.
With its Color Alive line, Crayola was the first company to merge coloring books and apps so kids could bring their on-page creations to life. But Disney Research is taking that idea one step further by letting kids see a coloring book character move in 3D while they’re still coloring it. It’s all made possible by a new augmented reality app from Disney Research that tracks and captures real-time images from a mobile device’s camera, then maps them onto a deformable 3D surface. READ MORE: Disney Has Invented 3D Coloring Books | Gizmodo
Until Azriel Knight developed the film in his darkroom, no one had ever seen these photographs, or hundreds of others like them, dating back decades and showing everything from mundane life to the historically fascinating. “It’s easier than throwing the film away — I come across them by happenstance, by buying old cameras,” said Knight. “In the photos, a lot of the time the people seem really happy, so I figure they would probably like to have them back.” Like a voyeuristic Indiana Jones, Knight is an archeological treasure hunter, sifting through forgotten rolls of film for clues as to who lost them, and when and where they were taken.
Knight’s website, Mysterious Developments (http://mysteriousdev.com), contains 61 documented cases of lost film so far, presented both as still pictures and as narrated video episodes that explain what is known about each.
Remember a few weeks back, when we learned that Google’s artificial neural network was having creepy daydreams, turning buildings into acid trips and landscapes into Magic Eye pictures? Well, prepare to never sleep again, because last week, Google made its “inceptionism” algorithm available to the public, and the nightmarish images are cropping up everywhere.
The “Deep Dream” system essentially feeds an image through the layers of an artificial neural network, asking the AI to enhance and build on whatever features it detects, such as edges, and then repeats the process. Over many iterations, pictures can become so distorted that they morph into something entirely different, or just a bunch of colorful, random noise.
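The loop underneath is gradient ascent on the input image itself: measure how strongly a layer responds, then nudge the pixels so that response gets stronger, and repeat. A minimal sketch of that idea in NumPy, using a single hand-built edge-detecting “layer” (a fixed Sobel-style kernel, my stand-in assumption) rather than a trained network:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# A fixed Sobel-style kernel stands in for one "edge-detecting" layer.
EDGE_KERNEL = np.array([[-1.0, 0.0, 1.0],
                        [-2.0, 0.0, 2.0],
                        [-1.0, 0.0, 1.0]])

def layer_response(img, kernel):
    """Valid cross-correlation: the layer's activation map."""
    windows = sliding_window_view(img, kernel.shape)
    return np.einsum("ijkl,kl->ij", windows, kernel)

def response_gradient(resp, kernel):
    """Gradient of sum(resp**2) w.r.t. the input image,
    which works out to a full convolution of resp with the kernel."""
    kh, kw = kernel.shape
    padded = np.pad(resp, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    return 2.0 * layer_response(padded, kernel[::-1, ::-1])

def dream_step(img, kernel, lr=0.01):
    """One 'dream' iteration: nudge the pixels to excite the layer more."""
    resp = layer_response(img, kernel)
    grad = response_gradient(resp, kernel)
    grad /= np.abs(grad).mean() + 1e-8   # normalize the step size
    return img + lr * grad

rng = np.random.default_rng(0)
img = rng.random((32, 32))
before = (layer_response(img, EDGE_KERNEL) ** 2).sum()
for _ in range(20):
    img = dream_step(img, EDGE_KERNEL)
after = (layer_response(img, EDGE_KERNEL) ** 2).sum()
# Each step increases the layer's activation, exaggerating edge-like structure.
```

The real Deep Dream runs this same loop against deep layers of a trained convolutional network (Google used its Inception model), which is why the exaggerated “features” end up looking like eyes and dog faces rather than plain edges.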
Now that the code for the system is publicly available, anyone can upload a photo of their baby and watch it metamorphose into a surrealist cockroach, or whatever. If you need some inspiration, or an excuse to crawl back into bed, pull the covers over your face, and wait for the world to end, just check out the hashtag ‘DeepDream’ on your social media platform of choice. READ MORE: Google’s Dream Robot Is Running Wild Across the Internet | Gizmodo.