
Neural Segmentation


2019 - Digital Videos

I believe that Machine Learning holds the potential to create the alphabet of cinema and the moving image. This would mean the democratization of narratives and the birth of a true Open Source Cinema. Our cultural artifacts are being collected and monetized by a handful of entities that rent them back to individuals through their platforms, while claiming ownership over an ever-growing share of the cultural artifacts human civilization produces. Democratizing our narratives could, in fact, prove the platform model unsustainable.


In this video series, I’ve explored the possibilities of world building and abstraction by making basic color-coded animations, in which each color corresponds to an object or texture such as “grass”, “sky”, or “fabric” that the machine learning model then renders into the image.


Animation through segmentation is, as of yet, a limited form of expression, and it isn’t the only direction to explore within the field of machine learning while pondering Open Source Cinema. Having said that, it’s a good symbolic method for starting to meditate on the endless possibilities of remixing things visually.

The machine learning model I’ve used in my workflow is called SPADE-COCO. COCO is short for Common Objects in Context, a large dataset that enables object segmentation. SPADE-COCO lets one create images from segmented imagery through a technique called semantic image synthesis.
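The color-coded animation workflow described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual SPADE pipeline: the specific colors, class IDs, and the `color_frame_to_label_map` function are hypothetical, standing in for whatever palette an animator might choose. The idea is simply that each flat color in a hand-made frame is translated into a class label, producing the segmentation map a semantic-synthesis model consumes.

```python
import numpy as np

# Hypothetical color-to-class palette: each flat color painted in the
# animation stands for one object class. The RGB values and label IDs
# here are illustrative only, not the real COCO-Stuff assignments.
PALETTE = {
    (135, 206, 235): 157,  # "sky"    (example color/ID pairing)
    (34, 139, 34):   124,  # "grass"  (example color/ID pairing)
    (128, 64, 0):    111,  # "fabric" (example color/ID pairing)
}

def color_frame_to_label_map(frame: np.ndarray) -> np.ndarray:
    """Convert a color-coded RGB animation frame (H, W, 3) into a
    single-channel label map (H, W) of class IDs, the kind of input a
    semantic image synthesis model turns into a rendered image.
    Colors not in the palette map to 0 ("unlabeled")."""
    h, w, _ = frame.shape
    labels = np.zeros((h, w), dtype=np.uint8)
    for color, class_id in PALETTE.items():
        # Boolean mask of every pixel painted exactly this color.
        mask = np.all(frame == np.array(color, dtype=frame.dtype), axis=-1)
        labels[mask] = class_id
    return labels

# A tiny example frame: "sky" painted on top, "grass" below.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:2] = (135, 206, 235)  # top half painted sky-blue
frame[2:] = (34, 139, 34)    # bottom half painted grass-green

label_map = color_frame_to_label_map(frame)
```

Run frame by frame over an animation, this yields a sequence of label maps; feeding each map to the synthesis model is what turns the flat color animation into moving, rendered scenery.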

Scenery I

Scenery II

Other Experiments
