“Here is my wife,” wrote Robert Osazuwa Ness on June 20, 2020.

First, we need to know a little bit about the technology being used here. The program generating these images is an algorithm called PULSE, which uses a technique known as upscaling to process visual data. Upscaling is like the “zoom and enhance” tropes you see in TV and film but, unlike in Hollywood, real software can’t just generate new data from nothing. In order to turn a low-resolution image into a high-resolution one, the software has to fill in the blanks using machine learning.

In the case of PULSE, the algorithm doing this work is StyleGAN, which was created by researchers from NVIDIA. Although you might not have heard of StyleGAN before, you’re probably familiar with its work. It’s the algorithm responsible for making those eerily realistic human faces you may have seen around the web: faces so realistic they’re often used to generate fake social media profiles.

What PULSE does is use StyleGAN to “imagine” the high-res version of pixelated inputs. It does this not by “enhancing” the original low-res image, but by generating a completely new high-res face that, when pixelated, looks the same as the one inputted by the user. This means each depixelated image can be upscaled in a variety of ways, the same way a single set of ingredients makes different dishes. It’s not that the algorithm is “finding” new detail in the image as in the “zoom and enhance” trope; instead, it’s inventing new faces that revert to the input data. It’s also why you can use PULSE to see what Doom guy, or the hero of Wolfenstein 3D, or even the crying emoji look like at high resolution.

“It does appear that PULSE is producing white faces much more frequently than faces of people of color,” wrote the algorithm’s creators on Github. “This bias is likely inherited from the dataset StyleGAN was trained on, though there could be other factors that we are unaware of.”

In other words, because of the data StyleGAN was trained on, when it’s trying to come up with a face that looks like the pixelated input image, it defaults to white features. This problem is extremely common in machine learning, and it’s one of the reasons facial recognition algorithms perform worse on non-white and female faces. Data used to train AI is often skewed toward a single demographic, white men, and when a program sees data not in that demographic it performs poorly. Not coincidentally, it’s white men who dominate AI research.

But exactly what the Obama example reveals about bias, and how the problems it represents might be fixed, are complicated questions. Indeed, they’re so complicated that this single image has sparked heated disagreement among AI academics, engineers, and researchers. On a technical level, some experts aren’t sure this is even an example of dataset bias. The AI artist Mario Klingemann suggests that the PULSE selection algorithm itself, rather than the data, is to blame. Klingemann notes that he was able to use StyleGAN to generate more non-white outputs from the same pixelated Obama image: “Not sure if you can call it an improvement, but by simply starting the gradient descent from different random locations in latent space you can already get more variation in the results,” he wrote on June 21, 2020. “I had to try my own method for this problem.”
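PULSE’s mechanism, searching a generator’s latent space for a code whose output matches the pixelated input once downscaled, can be sketched in miniature. Everything below is an illustrative toy, not PULSE’s actual code: a random linear map stands in for StyleGAN, average pooling stands in for pixelation, and the dimensions and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a generator: a fixed random linear map from a 16-dim
# latent code to a 64-pixel "high-res" image. (PULSE uses StyleGAN here;
# this linear map is only so the sketch stays small and runnable.)
LATENT_DIM, HI_RES, LO_RES = 16, 64, 8
W = rng.normal(size=(HI_RES, LATENT_DIM))
# Average-pooling operator: 64 high-res pixels -> 8 low-res "pixels".
POOL = np.kron(np.eye(LO_RES), np.full((1, HI_RES // LO_RES), LO_RES / HI_RES))

def generate(z):
    return W @ z          # the "high-res image" for latent code z

def downscale(img):
    return POOL @ img     # pixelate an image

def upscale(target_lo, steps=5000, lr=0.05, seed=1):
    """Gradient-descend a latent code until the generated image,
    once downscaled, matches the low-res target."""
    z = np.random.default_rng(seed).normal(size=LATENT_DIM)
    M = POOL @ W          # downscale(generate(z)) == M @ z
    for _ in range(steps):
        residual = M @ z - target_lo
        z -= lr * 2 * M.T @ residual   # gradient of ||M z - y||^2
    return z

# Build a low-res target by pixelating a known "true" high-res image.
true_hi = rng.normal(size=HI_RES)
target = downscale(true_hi)

z_hat = upscale(target)
recovered_hi = generate(z_hat)
# The recovered image matches the target once pixelated (near-zero error)...
print(np.linalg.norm(downscale(recovered_hi) - target))
# ...but it is NOT the original high-res image: the algorithm invented one.
print(np.linalg.norm(recovered_hi - true_hi) > 1.0)  # True
```

Because the low-res constraint pins down far fewer numbers than the high-res image contains (8 versus 64 here), many distinct high-res images satisfy it, and the descent simply lands on one of them; that is the “single set of ingredients, different dishes” point in code.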
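Klingemann’s suggestion, restarting the same gradient descent from different random points in latent space to get more varied results, can be illustrated with a similar toy setup. Again, a random linear map stands in for StyleGAN and average pooling for pixelation; nothing here is PULSE’s or Klingemann’s actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for StyleGAN: a linear map from a 16-dim latent space to a
# 64-pixel image; average pooling plays the role of pixelation.
W = rng.normal(size=(64, 16))
POOL = np.kron(np.eye(8), np.full((1, 8), 1 / 8))
M = POOL @ W                          # pixelated output as a function of z
target = POOL @ rng.normal(size=64)   # the low-res input image

def solve_from(seed, steps=5000, lr=0.05):
    """Run the same latent-space gradient descent from a random start."""
    z = np.random.default_rng(seed).normal(size=16)
    for _ in range(steps):
        z -= lr * 2 * M.T @ (M @ z - target)
    return W @ z                      # the "depixelated" high-res image

a, b = solve_from(seed=1), solve_from(seed=2)
# Both restarts satisfy the constraint: pixelated, they match the input
# (both residuals are near zero)...
print(np.linalg.norm(POOL @ a - target), np.linalg.norm(POOL @ b - target))
# ...yet the high-res images themselves differ: the problem is
# underdetermined, so where the descent starts decides which image you get.
print(np.linalg.norm(a - b))
```

The descent only moves the latent code in directions that change the pixelated output; whatever part of the random starting point has no effect on the downscaled image is never corrected, so different starts converge to genuinely different high-res solutions.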