16 May | 18:30 - 21:30
Workshop: pix2pix and deep generative models
Deep generative models are a large class of learning algorithms which have captured the attention of artists over the past two years by hallucinating uncanny imitations of real images. This workshop will survey the fast-moving landscape of these algorithms, reviewing the properties of variational autoencoders and generative adversarial networks, as well as existing codebases which implement them and artistic projects which have made use of them.
The workshop will also feature a tutorial on the related technique pix2pix (https://phillipi.github.io/pix2pix/). pix2pix and its cousin CycleGAN (https://junyanz.github.io/CycleGAN/) have been responsible for restyling cities (https://opendot.github.io/ml4a-invisible-cities/) and streetviews (https://twitter.com/JaspervanLoenen/status/841633164846084097), puppeteering pop stars (https://twitter.com/quasimondo/status/827901041349890049) and heads of state (https://twitter.com/genekogan/status/857922705412239362), zebrafying horses (https://twitter.com/goodfellow_ian/status/851124988903997440), and much more. The tutorial will cover how to install and use the software, and various considerations in constructing a dataset to train it on.
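To give a flavor of the dataset-construction step: pix2pix learns a mapping from paired images, and its reference codebase stores each training sample as the input image (A) and the target image (B) concatenated side by side into a single A|B image. A minimal sketch of building such a pair, using NumPy arrays as stand-ins for real photographs (the sizes and array contents here are illustrative assumptions, not part of the workshop materials):

```python
import numpy as np

def make_pair(a, b):
    """Concatenate an input image (A) and its target (B) horizontally
    into one A|B training sample, as the pix2pix codebase expects."""
    assert a.shape == b.shape, "A and B must share height, width, and channels"
    return np.concatenate([a, b], axis=1)

# Stand-in images: a black input and a white target, both 256x256 RGB.
a = np.zeros((256, 256, 3), dtype=np.uint8)
b = np.full((256, 256, 3), 255, dtype=np.uint8)

pair = make_pair(a, b)
print(pair.shape)  # (256, 512, 3): same height, doubled width
```

In practice the tutorial's dataset work is mostly about collecting and aligning such A/B pairs (e.g. edges and photos, maps and aerial views) so that every sample teaches the network a consistent translation.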
About the workshop leader
Gene Kogan is an artist and programmer who is interested in generative systems, artificial intelligence, and software for creativity and self-expression. He is a collaborator on numerous open-source software projects, and leads workshops and demonstrations on topics at the intersection of code, art, and technology activism. Gene initiated and contributes to ml4a, a free book about machine learning for artists, activists, and citizen scientists. He regularly publishes video lectures, writings, and tutorials to facilitate a greater public understanding of the topic.