Cranston-Pickle Pink

Cranston Pickles' bubble-and-squeak Scotch eggs. One is sliced open, showing the pink colour of the coating.
It isn't often that my clothing gets compared to a Scotch egg. In fact, until Wednesday, it never had been. But then in the Gloucester Green market, I stopped to look at a new food stall. It was called "Cranston Pickles" — no relation to Branston, but the owner's surname — and sold pickles and vegetarian Scotch eggs.


While I was looking at these, the stall's owner said, "That matches your outfit!" What I was wearing was the pink Gerry Weber silk jacket which I posted about last summer:
Pink silk jacket

I tried one of her spicy kedgeree eggs, and the coating was pleasingly light, without the cloddy heaviness that I find in the supermarket brands. The supermarket ones show a combination of stodge and impenetrability that inspired one humourist — Alan Coren perhaps, or Bill Bryson — to describe them as eggs coated with firebrick.

But this blog is supposed to be about colour, not taste. So I then decided to find out whether Cranston Scotch-egg pink really does match my outfit. I loaded photos of the egg, and of my jacket, into the Gimp image-processing program, cut out a small uniform portion of each, and fed both into Color Inspector 3D, Kai Uwe Barthel's colour-analysis program that I wrote about here. This plots the distribution of colours in colour cubes whose axes represent the strengths of red, green, and blue. Here are my results, the jacket colours on the left:


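Cubes are good for eyeballing, but numbers help too. Here's a minimal sketch of the comparison, assuming Pillow and numpy, with placeholder file names for the two crops cut out in Gimp: a purer pink should show higher mean saturation, and a whiter one higher value with lower saturation.

```python
# Compare two uniform crops by their mean hue, saturation, and value.
# The file names are placeholders for the portions cut out in Gimp.
import numpy as np
from PIL import Image

def mean_hsv(path):
    """Mean hue, saturation, value of an image, each scaled to 0..1."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float) / 255.0
    return hsv.reshape(-1, 3).mean(axis=0)

for name in ("jacket_crop.png", "egg_crop.png"):
    h, s, v = mean_hsv(name)
    print(f"{name}: hue={h:.2f} saturation={s:.2f} value={v:.2f}")
```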
I conclude that my jacket is a purer colour, and more towards the white. Which is what I thought it would be; I just decided I'd use this post to remind readers of Color Inspector 3D's existence, as well as writing about some colourful and tasty new foods I'd seen. So thanks to Cranston for the photos and the eggs. Now, has anyone written a Taste Inspector program?...
Output from an (imaginary) Taste Inspector 3D program, favourably comparing the Cranston Pickles Scotch egg with one from a well-known supermarket.

Colormind II

In last Monday's post, I wrote about Colormind, a program which extracts colour palettes from photos. And on Friday, I turned to pix2pix, a program which can be trained to transform images, producing effects such as these:
Screenshot of the pix2pix page, showing a sketch for a cat, and the picture generated from it.
Screenshot of the pix2pix page, showing my sketch for a handbag, and the picture generated from it. [Images: (1) from a tweet of 19 Feb 2017 by Christopher Hesse; (2) Chromophilia]

As it happens, Colormind is two different programs. On Monday, I wrote about one of these, the extractor. But there's also an ab initio generator. Colormind's author Jack Qiao describes it in his blog entry "Generating Color Palettes with Deep Learning". Here, he trained pix2pix to generate complete palettes from partial ones. He did this by giving it a database of pairs of images. In each pair, the "output" image was a complete palette from Adobe Color, and the "input" image was the same palette with some colours missing. So in effect, he was training pix2pix to "fill in" missing colours.
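To make that concrete, here's a minimal sketch of how such training pairs might be built. This is my own illustration, with made-up colours and swatch sizes, not Jack Qiao's actual code:

```python
# Render a 5-colour palette as a small image, then blank out a random
# subset of its swatches to make the "incomplete" input image.
import random
from PIL import Image

SWATCH = 64  # pixels per colour swatch; an arbitrary choice

def palette_image(colours):
    """Render a list of RGB tuples as one row of square swatches."""
    img = Image.new("RGB", (SWATCH * len(colours), SWATCH))
    for i, colour in enumerate(colours):
        img.paste(colour, (i * SWATCH, 0, (i + 1) * SWATCH, SWATCH))
    return img

def training_pair(colours, n_missing=2):
    """Return (input, output): the palette with n_missing swatches
    blanked to mid-grey, and the complete palette."""
    masked = list(colours)
    for i in random.sample(range(len(colours)), n_missing):
        masked[i] = (128, 128, 128)  # stand-in for a missing colour
    return palette_image(masked), palette_image(colours)

inp, out = training_pair([(200, 30, 40), (240, 200, 190), (250, 245, 240),
                          (90, 20, 40), (60, 10, 25)])
inp.save("input.png")
out.save("output.png")
```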

One could regard this as analogous to what I showed on Friday, where pix2pix was being trained to "fill in" handbags, shoes or cats from their sketches. (For the technically minded, the original authors of pix2pix note under "Color palette completion" in "Image-to-Image Translation with Conditional Adversarial Nets" that this "stretches the definition of what counts as 'image-to-image translation' in an exciting way", though it may not be the best choice of representation.)

I'm not clear from Jack Qiao's writeups how closely the ab initio generated palettes resemble those created by people. In describing the palette extractor, he says it submits the palettes it generates to a gatekeeper, which rejects those that don't look like human-created ones. The ab initio generator doesn't have a gatekeeper: its knowledge comes from complete palettes from Adobe Color. Do these have the same kind of high-level structure that human-created palettes do? I don't know.

To experiment with the ab initio generator, go to http://colormind.io/. You'll see a strip of five colours. Each box in it has either three or four controls under it, represented by icons: a padlock; sliders; and a left arrow, a right arrow, or both. Clicking on the sliders icon gives you controls for changing the colour. Clicking on the padlock locks in your choice. And clicking on an arrow exchanges your colour with the one on its left or right. Clicking "Generate" will generate a new palette from the locked-in colours.
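Colormind can also be driven programmatically: the site links to a JSON API. Assuming the request format described on its API page, where "N" marks the palette slots you want filled in, the lock-and-generate workflow looks something like this:

```python
# Ask Colormind's JSON endpoint to complete a palette. This assumes the
# request format on its API page: "N" marks slots to be filled in.
import json
from urllib import request

def generate(slots):
    """slots: a list of five entries, each an [r, g, b] triple or "N"."""
    body = json.dumps({"model": "default", "input": slots}).encode()
    req = request.Request("http://colormind.io/api/", data=body)
    with request.urlopen(req) as resp:
        return json.load(resp)["result"]

# Lock in one intense red and let Colormind fill in the other four.
print(generate([[200, 30, 40], "N", "N", "N", "N"]))
```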

Primark and the Spectrum Suckers IV: Brown Needs Purple?

Here's an interesting sidelight on human-designed colour palettes. I tried running my photo from "Primark and the Spectrum Suckers" through Colormind. The photo is predominantly brown, and every single palette Colormind made from it contained some kind of purple, not too different from the one on the left below.
Colour palette from Colormind for the photo of Primark used in 'Primark and the Spectrum Suckers'.

Colormind, I explained in my post about it, extracts the main colours from a photo, produces random variations on them, and then sends these for scrutiny by a gatekeeper: a machine-learning program trained on palettes that Jack Qiao, Colormind's author, thought were good looking. I wonder whether Jack didn't use enough palettes to teach it that brown doesn't always have to go with purple. Google "colour palette brown" and you'll see that there are other choices.

Colormind

While I was looking for photos of green-and-purple clothing, I came across a colour-scheme generator named Colormind. There are lots of generators on the web. What distinguishes Colormind is that it tries to make its schemes acceptable to humans.

This is difficult, says Colormind's author, Jack Qiao. In his blog post "Extracting Colors from Photos and Video", he writes:

Human-designed color palettes typically have some high-level structure — a gradient running left to right, similar hues grouped together etc., and have some minimum amount of contrast between each color. Automatically created palettes [ones automatically created from an image] look more haphazard, with colors distributed according to how they were used in the original image.
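One of those properties is easy to make concrete. As a rough, home-made stand-in (my own illustration, nothing from Colormind), take the "minimum amount of contrast" to be the smallest pairwise distance between palette colours in RGB space:

```python
# Smallest pairwise "contrast" of a palette, as Euclidean distance in RGB.
# A perceptual space such as CIELAB would be truer to how we see; this
# is only meant to make the idea concrete.
from itertools import combinations
from math import dist

def min_contrast(palette):
    """Smallest pairwise RGB distance between any two palette colours."""
    return min(dist(a, b) for a, b in combinations(palette, 2))

palette = [(43, 30, 39), (153, 62, 80), (230, 122, 110),
           (242, 211, 171), (247, 244, 235)]
print(min_contrast(palette))  # higher means better-separated colours
```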

There's a short discussion about this on the YCombinator Hacker News group at https://news.ycombinator.com/item?id=16351409. There, Jack proposes an experiment to demonstrate the difference between randomly generated palettes and ones designed by experts. Go to https://color.adobe.com and click on one of the color rules. Adobe will generate a random palette based on that rule. Then compare it with palettes uploaded by users on https://coolors.co/ or https://color.adobe.com/explore/.

Given that there is this difference, how can one make a machine generate human-style palettes? Jack's answer is to use the results of machine learning. Here's his diagram for the process:
Diagram of how Colormind generates a palette from an image. It starts with an extracted palette labelled 'MMCQ'. This is followed by four slightly different palettes labelled 'Random Variations'. These lead to another four palettes labelled 'Shuffle'. They are all fed into a 'Classifier'. The output from the classifier is the same as the 'Shuffle' palettes, except that each is annotated with a number. Finally, there is an 'Output' palette: in the diagram, the one with the highest number. [Image: from "Extracting Colors from Photos and Video" by Jack Qiao in his blog.]

The first stage is colour quantisation. And now you know why I devoted a post to this last Friday. In the diagram above, that's represented by the first sub-image, the one labelled MMCQ. That's an abbreviation for Modified Median Cut Quantization, the particular colour-quantisation algorithm used. The second stage is to produce a few random variations on the extracted palette, shown in the row below. The third stage is labelled 'Shuffle'. From Jack's diagram, this appears to mean that it shuffles the order of colours within each palette. The fourth stage feeds all the shuffled palettes to a "classifier", which rates them for acceptability. And the fifth stage keeps the highest-rated palette, rejecting the rest.
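Here's a skeleton of that pipeline as I read it from the diagram. It is my own sketch, not Jack's code: Pillow's default median-cut quantisation stands in for MMCQ, the classifier is a stub, and "photo.jpg" is a placeholder.

```python
# The five Colormind stages: quantise, vary, shuffle, classify, select.
import random
from PIL import Image

def extract_palette(path, n=5):
    """Stage 1: quantise the image down to n colours (median cut)."""
    img = Image.open(path).convert("RGB").quantize(colors=n)
    flat = img.getpalette()[:n * 3]
    return [tuple(flat[i:i + 3]) for i in range(0, n * 3, 3)]

def vary(palette, amount=20):
    """Stage 2: jitter every channel of every colour a little."""
    return [tuple(min(255, max(0, c + random.randint(-amount, amount)))
                  for c in colour)
            for colour in palette]

def score(palette):
    """Stage 4: rate a palette. The real classifier is a trained
    network; a random score is only a placeholder."""
    return random.random()

def colormind_like(path, n_candidates=4):
    base = extract_palette(path)
    candidates = []
    for _ in range(n_candidates):
        candidate = vary(base)
        random.shuffle(candidate)        # stage 3: shuffle the order
        candidates.append(candidate)
    return max(candidates, key=score)    # stage 5: keep the top-rated

print(colormind_like("photo.jpg"))
```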

The classifier is where machine learning enters. Jack trained it on palettes that he'd chosen as "good looking". As he says, "In the end [after some experiments] I built a self-contained classifier and trained it on a hand-picked list of examples. Good color palettes generally have good color contrast and an overarching theme, and bad ones look random and/or has bad inter-color contrast." Once trained, Jack's classifier acts as a gatekeeper, letting through only palettes that it thinks are good looking.

So to summarise, Colormind reduces a photo to a palette consisting of a small number of colours. It then generates random variations on this, rejecting those that, to a gatekeeper trained on appealing palettes designed by humans, look bad. I was curious to see how this would apply to my red silk top, which, as I mentioned in "Visualising Clothing Colours as a 3D Cloud of Points II", is an intense red with little white. Here are three palettes Colormind generated from it:
Three palettes generated by Colormind for my red silk Chinese top. Each has an intense red, two pale brick-reddish-pinks, a very pale whitish red, and a dark maroony-aubergine.

Each has an intense red, two pale brick-reddish-pinks, a very pale whitish red, and a dark maroony-aubergine. For comparison, here's the TinEye palette. It has a very different distribution, which hasn't balanced the darks with a pale colour:
Colour palette from TinEye for my red Chinese silk top, with a copy of the photo reduced to that set of colours. The palette contains one intense red (Cinnabar), two much browner reds (Guardsman Red and Monarch), a blackish brown (Aubergine), and a pink (Sea Pink).

Here's one other example, from the photo of the blue, green, and plum shirts together. The first image is from Colormind, and the second from TinEye. I don't know why Colormind hasn't given me the colour labels this time.
Colour palette from Colormind for my sage-green, ice-blue, and plum velvet Moroccan shirts.
Colour palette from TinEye for my sage-green, ice-blue, and plum velvet Moroccan shirts.

To see how Colormind does on other images, try it yourself. Should you want to use my photos, I've made them available in this zip file.

Primark and the Spectrum Suckers III: Four Browns, Two Greys, and a Black

Here's a TinEye colour extraction for my Primark photo from "Primark and the Spectrum Suckers". Four browns, two greys, and a black.
Colour palette from TinEye for the photo of Primark used in 'Primark and the Spectrum Suckers', with a copy of the photo reduced to that set of colours. The palette contains four shades of brown, two shades of grey, and a black.

For comparison, here is the palette for mud.
Colour palette from TinEye for a Wikipedia photo of dirt and mud. The palette contains two shades of brown and a shade of grey.

Extracted from the photograph at https://commons.wikimedia.org/wiki/File:Dirt_and_Mud_007_-_Mud.jpg, credited to user 0x0077BE.

Colour Quantisation

In my previous two posts, I showed off the colour distributions for some of my clothes. These vary widely from one garment to the next. But they all consist of innumerable points, each representing a slightly different colour from those nearby. Sometimes, we need to reduce this multiplicity to a much smaller number. That's called colour quantisation, and is what I'm going to introduce today.

Wikipedia explains colour quantisation as "a process that reduces the number of distinct colors used in an image, usually with the intention that the new image should be as visually similar as possible to the original image". I referred to Wikipedia because I wanted to use two of its public-domain images. The first is this rose:
A yellow rose. The photo has had all its blue removed. [ Image: from Wikipedia article on "Color Quantization". Credited to Dcoetzee. ]

The second picture I wanted is below, and is the colour distribution for the rose. It was produced by different software from the distributions in my last post, so doesn't look exactly the same, but the idea is the same. One difference matters: because the blue has been removed, there are only two colour axes, i.e. the distribution lies in a plane:
The color space of the photograph: a plane containing a broad diagonal coloured band running from the black corner at bottom left towards the white corner at top right. The plane also contains lines dividing it into 16 polygonal regions, each with a dot marking its centre. [Image: from Wikipedia article on "Color Quantization". Credited to Dcoetzee.]

I mention lying in a plane only because it makes the next bit easier to understand. As well as the distribution itself, the image contains lines dividing it into 16 regions, and blue dots marking the centres of the regions. These describe, Wikipedia says, "an optimized palette generated by Photoshop via standard methods". What this means is that Photoshop has squished the multiplicity of colours down to 16. It thinks that these are "optimized", in that if you were to replace each region's colours by the region's centre, this would do less damage to the image than if you used any other set of 16 regions.
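Many image libraries will do the same squishing. Here's a minimal sketch with Pillow, whose quantize method uses median cut by default; the file names are placeholders:

```python
# Reduce an image to an "optimized" 16-colour palette, much as
# Photoshop did for the rose, and save the colour-reduced result.
from PIL import Image

img = Image.open("rose.png").convert("RGB")   # placeholder file name
reduced = img.quantize(colors=16)             # median-cut quantisation
reduced.convert("RGB").save("rose_16.png")
```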

What would such a colour-reduced image look like? I don't have one for the rose, but I've made a different example. At the top of this post, you'll see my logo: a pattern of fruit and flowers taken from a rather lovely velvet waistcoat by Oakland. I fed this to the online TinEye Color extraction page. Doing so is easy: just browse and upload an image, or submit its URL. Here's the result for my logo. The reduced set of colours is on the right, and the reduced image made from them is the top one on the left.
Colour palette from TinEye for my logo, with a copy of the logo reduced to that set of colours.

That's almost all I want to say about colour quantisation, now that I've introduced the concept. There are a variety of algorithms for achieving it, and these have been built into lots of different software packages. Wikipedia sounds a caution about these:

The name "color quantization" is primarily used in computer graphics research literature; in applications, terms such as optimized palette generation, optimal palette generation, or decreasing color depth are used. Some of these are misleading, as the palettes generated by standard algorithms are not necessarily the best possible.

In other words, distrust the words "optimal", "optimised", and "optimum".

Before finishing, here are some more examples, using the clothing photos I analysed for colour distribution in the previous two posts. These are also from TinEye.
Colour palette from TinEye for my chocolate-brown velvet Moroccan shirt, with a copy of the photo reduced to that set of colours.
Colour palette from TinEye for the Colorpoint silk shirt, with a copy of the photo reduced to that set of colours.

Colour palette from TinEye for my garnet-red velvet Moroccan shirt, with a copy of the photo reduced to that set of colours.
Colour palette from TinEye for my orange qandrissi and ice-blue velvet Moroccan shirt, with a copy of the photo reduced to that set of colours.
Colour palette from TinEye for my plum velvet Moroccan shirt, with a copy of the photo reduced to that set of colours.
Colour palette from TinEye for my red Chinese silk top, with a copy of the photo reduced to that set of colours.
Colour palette from TinEye for my rose-pink velvet Moroccan shirt, with a copy of the photo reduced to that set of colours.
Colour palette from TinEye for my sage-green, ice-blue and plum velvet Moroccan shirts, with a copy of the photo reduced to that set of colours.
Colour palette from TinEye for my sage-green velvet Moroccan shirt, with a copy of the photo reduced to that set of colours.

Visualising Clothing Colours as a 3D Cloud of Points II

Friday's post was about displaying the colour distribution in clothing photos. Here are the results of trying it on some more of my clothes. The first few are velvet Moroccan shirts in chocolate-brown, garnet-red, plum, ice-blue, rose-pink, and sage-green. Then there's a cube for the sage-green, ice-blue, and plum shirts together. Then one for my red silk Chinese top. And finally, one for this silk shirt by Colorpoint, decorated with figures of the Roman-inspired Marvin the Martian.

The preponderance of Moroccan shirts is because I need the background removed, so that its colour distribution doesn't get mixed in with that of the clothes. As it happened, I'd already done that for most of the Moroccan ones when I was preparing for a show. But background removal is tedious and imprecise, and I've not done it for most of my other clothes.
Colour-distribution cube for my brown velvet Moroccan shirt. The cube contains a fairly tight diagonal line going from the black corner to the white; i.e. from (0,0,0) to (1,1,1).
Colour-distribution cube for my garnet velvet Moroccan shirt. The cube contains a diagonal line going from the black corner to the white. It has a small offshoot going to the blue corner, and a bigger offshoot parallel to the red-blue face of the cube, going about 1/3 of the distance to the white corner.
Colour-distribution cube for my plum velvet Moroccan shirt. The cube contains a bluish-violet banana going from the black corner to the white. The middle part of the banana bends away from the green corner.
Colour-distribution cube for my rose velvet Moroccan shirt. The cube contains a white-pinky-blue banana going from the black corner to the white. The middle part of the banana bends away from the green corner.
Colour-distribution cube for my sage-green velvet Moroccan shirt. The cube contains a greenish-yellowy-white banana going from the black corner to the white.
Colour-distribution cube for my sage-green, ice-blue, and plum velvet Moroccan shirts. The cube contains tongues of pale green, ice blue, and pinkish-violet, and a little white.
Colour-distribution cube for my red silk Chinese top. The cube contains a plume of red going from the black corner to the red corner, and a fine spray of white going to the white corner.
Colour-distribution cube for the Colorpoint silk shirt, which is black decorated with green, yellow, and orange-red aliens. The cube contains vivid, roughly parallel sprays of these three colours, plus a bit of pale blue.

I love the vividness and contrast in the distribution for the Colorpoint shirt. But the most noticeable thing is the intensity of the colour plume for the red silk Chinese top. Unlike with the other clothes, the plume goes off to a colour corner (red), with only a very thin offshoot to the white corner. This may be because most of the other clothes have either patches of whitish shine, or actual whitish material as in the garnet-red shirt. But also, the silk top does look an intense red when I wear it — a red that I've not seen on any other garment. I've been told that silk dyes more intensely than other fabrics, though I can't find anything online that confirms this.

Primark and the Spectrum Suckers II: Visualising Clothing Colours as a 3D Cloud of Points

In my post "Primark and the Spectrum Suckers", I imagined white light passing through a Primark shop and exiting as a spectrum made entirely from grey:
Two pictures in one image. The first is a prism with light going through and forming a spectrum, labelled 'PRISM'. The second is the same prism with an interior photo of the Oxford Westgate Primark shop superimposed. It is labelled 'PRIMARK'. The light coming out is the same spectrum as for the first prism, but grey, not coloured.

My collage was inspired by the stunningly dreary Primark in the new Westgate shopping centre in Oxford. I'm sure it's obvious that I made the grey "spectrum" by monochromaticising the one in the upper half of the collage, which I'd taken from some free clip-art. But how could I produce a real picture of the colour distribution? I've been looking for tools, and recently found one whose output I'll show. As this is also related to an article I'll post next week about generating colour schemes — including green and purple ones — I've decided to write about colour distributions today.

One standard tool for displaying colour distributions is the two-dimensional colour histogram. Here's one for the Primark photo from the collage. I made it in Gimp, free image-processing software that I use for editing photos and a host of other things, including retouching cartoons.
Photo of Primark shop from the above collage, with a colour histogram superimposed.

Such histograms are easy to produce, but as David Tschumperlé explains in "Visualizing the 3D point cloud of RGB colors", they have disadvantages. He displays a photo of the Swedish model Lena Söderberg, and another photo edited so that the green channel has been reflected around the X-axis, and the blue around the Y-axis. The second photo appears to have much more green, but its histogram is exactly the same as the first photo's.

But there are other ways to plot colour distributions, using the idea that because we have three colour components, we can represent them as points in three-dimensional space. Take any pixel, and express it as so much red, so much green, and so much blue. This gives us three numbers, each between 0 and 1. A pixel that's black would be (0,0,0); one that's white would be (1,1,1); and one that's pure red would be (1,0,0). Treat these as a point, and plot it. Repeat for all the pixels. This gives us a cloud of points lying within a cube. To display the distribution of colours, colour each point with its actual colour; to display the number of occurrences, give each point a colour that represents that count. These plots, as David Tschumperlé says, give us more information about the global variety of colour in the image, and the local dispersion of tones around each point.
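If you want to make such a cloud yourself, here's a minimal sketch with Pillow, numpy, and matplotlib; "photo.jpg" is a placeholder:

```python
# Plot each distinct pixel colour as a point in the RGB unit cube,
# coloured as itself.
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("photo.jpg").convert("RGB")) / 255.0
points = np.unique(rgb.reshape(-1, 3), axis=0)  # one point per distinct colour

ax = plt.figure().add_subplot(projection="3d")
ax.scatter(points[:, 0], points[:, 1], points[:, 2], c=points, s=2)
ax.set_xlabel("red"); ax.set_ylabel("green"); ax.set_zlabel("blue")
plt.show()
```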

There are examples for Lena and other images in David's article. Here are some of my own, made using Kai Uwe Barthel's Color Inspector 3D. Each of these shows an image and the corresponding distribution cube.
Colour-distribution cube for a red square. There's a red dot at the (1,0,0) corner of the cube.
Colour-distribution cube for a green square. There's a green dot at the (0,1,0) corner of the cube.
Colour-distribution cube for a red square next to a green square. There's a red dot at the (1,0,0) corner of the cube, and a green dot at the (0,1,0) corner.
Colour-distribution cube for a red square next to a thin green rectangle. There's a red dot at the (1,0,0) corner of the cube, and a green dot at the (0,1,0) corner.
Colour-distribution cube for my logo. There's a spray of green, purple, and pink, fanning out from the (0,0,0) corner.
Colour-distribution cube for my orange qandrissi and ice-blue shirt. There are two 'bananas' going from near the black corner to near the white corner. One is blue, the other orange.

In the first image, all the pixels are pure red. So strictly speaking, we'd end up with just one point, at the position (1,0,0), the red corner of the distribution cube. Color Inspector 3D has made a small dot rather than a point, probably to make the results easier to see. The second image is all green, so we get its counterpart: a cube with a green dot at (0,1,0), the green corner.

The third and fourth images show what happens for more than one colour, in this case pure red and pure green. We get a red dot and a green dot.

And the fifth and sixth images show results for two realistic images. One is for my logo, which is from this Oakland velvet waistcoat. The other is for my orange qandrissi and ice-blue shirt.

Note that in the third and fourth images, the results are the same regardless of the proportions of red and green. I have a reason for mentioning this, and it's to do with "anti-aliasing". Look at this:
Colour-distribution cube for a red line. There's a red dot at the (1,0,0) corner of the cube, and other dots on a diagonal leading to the white corner, (1,1,1).

What I did here was to get the colour-distribution cube for a red line. But if it's red, why are there those increasingly white dots leading down to the white corner of the cube? I suspect the reason is that to counteract the relatively low resolution of the line, the drawing program (Gimp, but other programs would do the same) gives pixels at the edge of the line colours intermediate between the line and its background: in this case, various shades of reddish-white. This is called anti-aliasing. You can see it in this zoomed-in portion of the line:
The red line enlarged. On its edges are paler, reddish-white, pixels.

Because the distribution plots don't record how many pixels of each colour they see, these few intermediate pixels have as much impact on the display as the pure red ones.
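A quick way to catch such artefacts is to count an image's distinct colours. A hard-edged red-on-white image has exactly two; after a smoothing resize, intermediate reddish-whites appear, just as they did around the anti-aliased line. A sketch:

```python
# Count distinct colours before and after a smoothing (Lanczos) resize.
import numpy as np
from PIL import Image

def distinct_colours(img):
    rgb = np.asarray(img.convert("RGB")).reshape(-1, 3)
    return len(np.unique(rgb, axis=0))

img = Image.new("RGB", (100, 100), "white")
img.paste((255, 0, 0), (0, 40, 100, 60))   # a hard-edged red bar
print(distinct_colours(img))               # 2
print(distinct_colours(img.resize((33, 33), Image.LANCZOS)))  # many more
```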

I didn't start today's post with the intent of writing a graphics tutorial. But I noticed these unexpected intermediate colours when analysing example coloured images I'd drawn, and decided I'd better understand where they came from, because they could mislead. I don't know whether similar artefacts could arise in other ways, for example when resizing JPEG files. But I'm fairly sure that some of my images will be affected by the following. Some of my clothing pictures are ones where I've separated the clothing from its background. But it's hard to do so perfectly, so minute remnants of background cling to their borders. This seems likely to bias the distributions, perhaps by giving them tails that shade off towards black.
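Where a cutout keeps its transparency, one partial defence is to build the point cloud only from fully opaque pixels, so that any semi-transparent fringe is dropped. A sketch, assuming an RGBA PNG with a hypothetical file name:

```python
# Keep only fully opaque pixels when collecting colours from a cutout.
import numpy as np
from PIL import Image

rgba = np.asarray(Image.open("shirt_cutout.png").convert("RGBA"))
opaque = rgba[rgba[..., 3] == 255][:, :3] / 255.0  # N x 3 array of r, g, b
print(len(np.unique(opaque, axis=0)), "distinct colours")
```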

With that out of the way, let's look at Primark. Here's my original photo:
The photo used in 'Primark and the Spectrum Suckers'. Interior of the Primark shop in the Westgate Shopping Centre, Oxford.

If the photo seems rather "chewy", it's because the original suffered from motion blur, which I reduced by using Focus Magic. I didn't want to stand around with a tripod, and my camera isn't sensitive enough for a fast point-and-shoot response to indoor lighting.

And here's the colour distribution:
Colour-distribution cube for the photo of the Primark shop used in 'Primark and the Spectrum Suckers'. It's a moderately loose diagonal line, stretching from the black corner to the white corner. There are a few dots of red, green, and blue further out.

So what do we have? It's a moderately loose diagonal line, stretching from the black corner to the white corner. There are a few dots of red, green, and blue further out. These, I suspect, come not from the clothes, but from their labels.

It's rather sad that the labels on the socks are the most colourful thing in the shop.
Detail from the photo used in 'Primark and the Spectrum Suckers'. The sock rack in the Primark shop in the Westgate Shopping Centre, Oxford.

Footnotes

    "Visualizing the 3D point cloud of RGB colors", Open Source Graphics, by David Tschumperlé, 24 February 2018.
    http://opensource.graphics/tag/color-distribution/

I used Color Inspector 3D by Kai Uwe Barthel. This is a Java program, packaged as a JAR file: something rather like a zip file, containing all the program components. On Windows 10, I was able to run it by following Kai's instructions: download ColorInspector3D.jar from the link, and double-click on it. This requires the computer to have Java, which the machine I'm using evidently has. Once you've started the program, click "File" and then "Open", and select an image. Its colour cube should then appear. I found that the program failed on very big images, and I had to reduce their size.
https://imagej.nih.gov/ij/plugins/color-inspector.html