Colormind II

In last Monday's post, I wrote about Colormind, a program which extracts colour palettes from photos. And on Friday, I turned to pix2pix, a program which can be trained to transform images, producing effects such as these:
Screenshot of the pix2pix page, showing a sketch for a cat, and the picture generated from it.
Screenshot of the pix2pix page, showing my sketch for a handbag, and the picture generated from it. [Images: (1) from a tweet of 19 Feb 2017 by Christopher Hesse; (2) Chromophilia]

As it happens, Colormind is two different programs. On Monday, I discussed one of these, the extractor. But there's also an ab initio generator. Colormind's author Jack Qiao describes it in his blog entry "Generating Color Palettes with Deep Learning". There, he trained pix2pix to generate complete palettes from partial ones. He did this by giving it a database of pairs of images. In each pair, the "output" image was a complete palette from Adobe Color, and the "input" image was the same palette with some colours missing. So in effect, he was training pix2pix to "fill in" missing colours.
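
Jack's blog entry doesn't give his rendering code, but the idea is easy to sketch. Below is a minimal illustration in Python with Pillow: it draws a five-colour palette as a horizontal strip of swatches, then blanks out a random subset of them to make the "input" half of a training pair. The swatch size, the blanking colour, and how many swatches to drop are my assumptions, not his.

    import random
    from PIL import Image, ImageDraw

    SWATCH = 64  # swatch width and height in pixels (my assumption)

    def render_palette(colours, blank=()):
        """Draw a palette as a horizontal strip; swatches whose index is
        in `blank` are painted white to stand for a missing colour."""
        img = Image.new("RGB", (SWATCH * len(colours), SWATCH))
        draw = ImageDraw.Draw(img)
        for i, colour in enumerate(colours):
            fill = (255, 255, 255) if i in blank else colour
            draw.rectangle([i * SWATCH, 0, (i + 1) * SWATCH - 1, SWATCH - 1],
                           fill=fill)
        return img

    def training_pair(colours):
        """'Output' image: the full palette. 'Input' image: the same
        palette with one to four randomly chosen swatches blanked."""
        missing = random.sample(range(len(colours)), k=random.randint(1, 4))
        return render_palette(colours, blank=missing), render_palette(colours)

    inp, out = training_pair([(200, 40, 40), (230, 120, 90), (240, 200, 180),
                              (90, 30, 50), (30, 20, 25)])
    inp.save("input.png")
    out.save("output.png")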

One could regard this as analogous to what I showed on Friday, where pix2pix was being trained to "fill in" handbags, shoes or cats from their sketches. (For the technically minded, the original authors of pix2pix note under "Color palette completion" in "Image-to-Image Translation with Conditional Adversarial Nets" that this "stretches the definition of what counts as 'image-to-image translation' in an exciting way", though treating a palette as an image may not be the best choice of representation.)

I'm not clear from Jack Qiao's writeups how closely the ab initio generated palettes resemble those created by people. In describing the palette extractor, he says it submits the palettes it generates to a gatekeeper, which rejects those that don't look like human-created ones. The ab initio generator doesn't have a gatekeeper: its knowledge comes from the complete palettes from Adobe Color that it was trained on. Do these have the same kind of high-level structure that human-created palettes do? I don't know.

To experiment with the ab initio generator, go to http://colormind.io/ . You'll see a strip of five colours. Each box in the strip has either three or four controls under it, represented by icons: a padlock; sliders; and a left arrow, a right arrow, or both. Clicking on the sliders icon gives you controls for changing the colour. Clicking on the padlock locks in your choice. And clicking on an arrow exchanges your colour with the one to its left or right. Clicking "Generate" will generate a new palette from the locked-in colours.
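
If you'd rather script this than click, Colormind also has a small JSON API, described at http://colormind.io/api-access/ . As I understand that page, you POST a five-slot list in which locked-in colours appear as [R, G, B] triples and free slots as the string "N"; treat the endpoint and format in this sketch as my reading of the documentation rather than gospel.

    import json
    from urllib.request import Request, urlopen

    # Two locked-in colours; "N" marks the slots Colormind should fill in.
    query = {"model": "default",
             "input": [[44, 43, 44], [90, 83, 82], "N", "N", "N"]}
    req = Request("http://colormind.io/api/", data=json.dumps(query).encode())
    palette = json.load(urlopen(req))["result"]
    print(palette)  # five [R, G, B] triples, the locked-in ones passed through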

Primark and the Spectrum Suckers IV: Brown Needs Purple?

Here's an interesting sidelight on human-designed colour palettes. I tried running my photo from "Primark and the Spectrum Suckers" through Colormind. The photo is predominantly brown, and every single palette Colormind made from it contained some kind of purple, not too different from the one at the left of the palette below.
Colour palette from Colormind for the photo of Primark used in 'Primark and the Spectrum Suckers'.

Colormind, I explained in my post about it, extracts the main colours from a photo, produces random variations on them, and then sends these for scrutiny by a gatekeeper: a machine-learning program trained on palettes that Jack Qiao, Colormind's author, thought were good looking. I wonder whether Jack used too few palettes to teach it that brown doesn't always have to go with purple. Google "colour palette brown" and you'll see that there are other choices.

Colormind

While I was looking for photos of green-and-purple clothing, I came across a colour-scheme generator named Colormind. There are lots of such generators on the web. What distinguishes Colormind is that it tries to make its schemes acceptable to humans.

This is difficult, says Colormind's author, Jack Qiao. In his blog post "Extracting Colors from Photos and Video", he writes:

Human-designed color palettes typically have some high-level structure — a gradient running left to right, similar hues grouped together etc., and have some minimum amount of contrast between each color. Automatically created palettes [ones automatically created from an image] look more haphazard, with colors distributed according to how they were used in the original image.
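To make "some minimum amount of contrast" concrete, here is one way such a check could be scored: the WCAG relative-luminance contrast ratio between neighbouring swatches. Jack doesn't say which contrast measure he has in mind, so take this Python sketch as an illustrative stand-in.

    def relative_luminance(rgb):
        """WCAG 2.0 relative luminance of an sRGB colour (0-255 channels)."""
        def channel(c):
            c /= 255
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (channel(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def min_adjacent_contrast(palette):
        """Smallest WCAG contrast ratio between neighbouring colours:
        1.0 means identical luminance, 21.0 is black against white."""
        ratios = []
        for a, b in zip(palette, palette[1:]):
            lighter, darker = sorted((relative_luminance(a),
                                      relative_luminance(b)), reverse=True)
            ratios.append((lighter + 0.05) / (darker + 0.05))
        return min(ratios)

    print(min_adjacent_contrast([(200, 40, 40), (240, 200, 180), (30, 20, 25)]))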

There's a short discussion about this on the Y Combinator Hacker News forum at https://news.ycombinator.com/item?id=16351409 . There, Jack proposes an experiment to demonstrate the difference between randomly generated palettes and ones designed by experts. Go to https://color.adobe.com and click on one of the color rules: Adobe will generate a random palette based on that rule. Then compare it with the palettes uploaded by users at https://coolors.co/ or https://color.adobe.com/explore/ .
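
If you want a feel for what "a random palette based on that rule" means without visiting Adobe, here's a toy version of one such rule, analogous colours: pick a base hue at random, then step the other hues off it around the colour wheel. The step size and the fixed lightness and saturation are arbitrary choices of mine; Adobe's actual rules are more sophisticated.

    import colorsys
    import random

    def random_analogous_palette(n=5, step=1 / 24):
        """A random 'analogous' palette: n hues spaced `step` of a turn
        apart on the colour wheel, centred on a random base hue."""
        base = random.random()
        palette = []
        for i in range(n):
            h = (base + (i - n // 2) * step) % 1.0
            r, g, b = colorsys.hls_to_rgb(h, 0.5, 0.7)
            palette.append(tuple(round(255 * c) for c in (r, g, b)))
        return palette

    print(random_analogous_palette())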

Given that there is this difference, how can one make a machine generate human-style palettes? Jack's answer is to use the results of machine learning. Here's his diagram for the process:
Diagram of how Colormind generates a palette from an image. It starts with an extracted palette labelled 'MMCQ'. This is followed by four slightly different palettes labelled 'Random Variations'. These lead to another four palettes labelled 'Shuffle'. They are all fed into a 'Classifier'. The output from the classifier is the same as the 'Shuffle' palettes, except that each is annotated with a number. Finally, there is an 'Output' palette: in the diagram, this is the one with the highest number. [Image: from "Extracting Colors from Photos and Video" by Jack Qiao in his blog.]

The first stage is colour quantisation. And now you know why I devoted a post to this last Friday. In the diagram above, that's represented by the first sub-image, the one labelled MMCQ. That's an abbreviation for the name of a particular colour-quantisation algorithm, modified median cut quantization. The second stage is to produce a few random variations on the extracted palette, shown in the row below. The third stage is labelled 'Shuffle'. From Jack's diagram, this appears to mean that it shuffles the order of colours within each palette. The fourth stage feeds all the shuffled palettes to a "classifier", which rates them for acceptability. And the fifth stage picks the highest-rated palette as the output.
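
Putting the five stages together, here's a sketch of the whole loop in Python. I'm using the colorthief package, whose extractor is MMCQ-based, as a stand-in for stage one, and the classifier is a placeholder: Jack's is trained on real palettes, while mine just rewards spread-out luminances. The file name, the jitter amount, and the number of variations are all my assumptions; treat this as the shape of the pipeline, not Colormind's code.

    import random
    from colorthief import ColorThief  # MMCQ-based palette extraction

    def jitter(palette, amount=20):
        """Stage 2: a random variation of a palette (jitter each channel)."""
        return [tuple(min(255, max(0, c + random.randint(-amount, amount)))
                      for c in colour)
                for colour in palette]

    def score_palette(palette):
        """Stage 4 placeholder: Colormind uses a trained classifier here;
        this stand-in just rewards well-separated luminances."""
        lums = sorted(sum(c) / 3 for c in palette)
        return min(b - a for a, b in zip(lums, lums[1:]))

    base = ColorThief("photo.jpg").get_palette(color_count=5)  # stage 1: MMCQ
    candidates = []
    for _ in range(4):                         # stage 2: random variations
        variant = jitter(base)
        random.shuffle(variant)                # stage 3: shuffle the order
        candidates.append(variant)
    best = max(candidates, key=score_palette)  # stages 4 and 5: rate, keep best
    print(best)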

The classifier is where the machine learning comes in. Jack trained it on palettes that he'd chosen as "good looking". As he says, "In the end [after some experiments] I built a self-contained classifier and trained it on a hand-picked list of examples. Good color palettes generally have good color contrast and an overarching theme, and bad ones look random and/or has bad inter-color contrast." Once trained, Jack's classifier acts as a gatekeeper, letting through only palettes that it thinks are good looking.
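
Jack doesn't say what kind of classifier he built, so the following is purely illustrative: a logistic-regression model from scikit-learn, trained on flattened RGB channel values, with hand-picked "good" and "bad" palettes as the two classes. A real version would need far more examples, and probably better features than raw channel values.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hand-labelled palettes: 1 = good looking, 0 = bad. These two are
    # stand-ins; a real training set would need hundreds of examples.
    good = [[(200, 40, 40), (230, 120, 90), (240, 200, 180),
             (90, 30, 50), (30, 20, 25)]]
    bad = [[(12, 200, 5), (203, 7, 190), (8, 9, 10),
            (250, 250, 3), (100, 100, 100)]]

    # Feature vector: the 15 RGB channel values, scaled to the range 0-1.
    X = np.array([np.ravel(p) for p in good + bad]) / 255.0
    y = np.array([1] * len(good) + [0] * len(bad))

    clf = LogisticRegression().fit(X, y)

    def acceptable(palette, threshold=0.5):
        """Gatekeeper: let a palette through only if the classifier thinks
        it looks good with probability above `threshold`."""
        features = np.ravel(palette).reshape(1, -1) / 255.0
        return clf.predict_proba(features)[0, 1] > threshold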

So to summarise, Colormind reduces a photo to a palette consisting of a small number of colours. It then generates random variations on this, and rejects those that look bad to a gatekeeper trained on appealing palettes designed by humans. I was curious to see how this would apply to my red silk top, which, as I mentioned in "Visualising Clothing Colours as a 3D Cloud of Points II", is an intense red with little white. Here are three palettes Colormind generated from it:
Three palettes generated by Colormind for my red silk Chinese top. Each has an intense red, two pale brick-reddish-pinks, a very pale whitish red, and a dark maroony-aubergine.

Each has an intense red, two pale brick-reddish-pinks, a very pale whitish red, and a dark maroony-aubergine. For comparison, here's the TinEye palette. It has a very different distribution, one which hasn't balanced the darks with a pale colour:
Colour palette from TinEye for my red Chinese silk top, with a copy of the photo reduced to that set of colours. The palette contains one intense red (Cinnabar), two much browner reds (Guardsman Red and Monarch), a blackish brown (Aubergine), and a pink (Sea Pink).

Here's one other example, from the photo of the blue, green, and plum shirts together. The first image is from Colormind, and the second from TinEye. I don't know why TinEye hasn't given me the colour labels this time.
Colour palette from Colormind for my sage-green, ice-blue, and plum velvet Moroccan shirts.
Colour palette from TinEye for my sage-green, ice-blue, and plum velvet Moroccan shirts.

To see how Colormind does on other images, try it yourself. Should you want to use my photos, I've made them available in this zip file.