Camera draws cartoons by reinterpreting what it sees through a neural network
Dan McNish has built a new type of Polaroid camera, and... well, if I were the Polaroid brand manager, I'd be pinging Dan with a chequebook at the ready.
The camera is a mash-up of a neural network for object recognition, the Google QuickDraw dataset, a thermal printer, and a Raspberry Pi.

Initially, I began with some experiments on my laptop. I set up an image-processing pipeline in Python to take pre-captured images and recognise the objects in them, using pre-trained models from Google. At the same time, I explored the QuickDraw dataset and mapped the categories available in it to the categories recognisable by the image processor.

After writing some code to patch the two together, wrapping the lot in a Docker image, and cobbling together some electronics, interspersed with some hair-pulling moments of frustration, the camera was ready.
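The trickiest conceptual step above is the category mapping: the object detector and QuickDraw use different vocabularies, so detections have to be translated before a doodle can be looked up. A minimal sketch of that idea is below; the label names and aliases are illustrative assumptions, not the project's actual code or the full vocabularies.

```python
# Illustrative sketch: bridge an object detector's labels to QuickDraw
# category names. Real vocabularies are far larger; these are examples.
DETECTOR_LABELS = {"person", "dog", "bicycle", "potted plant"}
QUICKDRAW_CATEGORIES = {"dog", "bicycle", "house plant", "face"}

# Manual aliases for labels whose names differ between the two vocabularies
# (hypothetical pairings for demonstration).
LABEL_ALIASES = {
    "potted plant": "house plant",
    "person": "face",
}

def to_quickdraw(label):
    """Map one detector label to a QuickDraw category, or None if unmapped."""
    candidate = LABEL_ALIASES.get(label, label)
    return candidate if candidate in QUICKDRAW_CATEGORIES else None

def drawable_categories(detections):
    """Keep only the recognised objects that have a doodle category to print."""
    return [c for c in (to_quickdraw(d) for d in detections) if c is not None]

print(drawable_categories(["person", "dog", "toaster"]))
```

Objects with no QuickDraw counterpart simply drop out of the output, which matches the article's point that the two category sets had to be reconciled by hand before the camera could pick a cartoon for each thing it saw.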