Citizen of Nowhere on a Brexit Farewell Tour

I've been pondering questions. Why do Brexiteers sound dashing, whereas Remainers sound like Remoaners? Did you know Theresa May has three ears? A left ear, a right ear, and a rabid Brexiteer. And did you hear the new idiom coined to signify something minuscule in influence or effect: "as meaningless as a Remainer's vote"?

If I sound bitter, it's because I'm going to lose my European citizenship and my right to move freely through 27 countries. Guy Verhofstadt, Brexit coordinator for the European Parliament, has suggested giving EU citizenship to those Brits who want it post-Brexit, but nothing has yet happened. My previous experiences of visa-based travel involve trogging up to the Romanian embassy in Berlin's Dorotheenstraße, after expending some effort to fit in with its peculiar opening hours; and, later on the same trip, being woken by a border official at 3am as my sleeper car crossed the Danube from Romania to Bulgaria. "Come, Mr Englishman", murmured the guard as he beckoned me from the train, onto a platform at Ruse station, and into the customs room. I'm not thrilled at the prospect of such things returning.

In the 17th and 18th centuries, wealthy young men were wont to take the Grand Tour. They'd meander around Europe, acquainting themselves with the Renaissance and Europe's classical heritage. I suggest we hold a one-off equivalent: the Brexit Farewell Tour. Everyone will be encouraged to spend as much of a year as they can visiting, or revisiting, those countries that will henceforth no longer be accessible without visa-based inconvenience.

It's almost Christmas, so I'd begin with Germany. Probably Münster, because I know the city well, the cathedral with its green copper roof is sublime against the snow, and Münster does Christmas markets as well as anywhere in Germany. I'd order what I think was called "drin, draus": a small glass of Glühwein with two joined sausages balanced on the edge of the glass, one dipping into the wine, and one thereout. But so that I can stand at the stall and drink in comfort, first I'd follow the advice of a friend. When I arrived in Münster in winter one year, he said, "Right. The first thing you do is to go to the nearest department store and buy yourself a pair of lange Unterhosen." These are German long johns. The best are made by Schiesser, a company known to every German, and I do recommend them. They're very warm, and they last.

The Schiesser lange Unterhosen I bought lasted for over 10 years, and did me very good service against UK winters. Here, you can survive without them. But in Germany, such protection really is needed. I once walked across the Aasee, the lake in the middle of Münster, on a — for there — perfectly normal November day. It was -12°C, and the air felt like flame in my stomach. Against such extremes, another item of clothing is useful, and that's the Unterziehrolli: a polo neck that you wear under your shirt. There are thermal skiing versions which I've worn when walking in Bavaria, and these are great against the mountain air.

But enough of the cold. After Germany, I'd make tracks for Greece. In Athens in January, you sometimes get the "halcyon" days when the temperature can rise to 20°C. This happens, according to one myth, because it's then that the kingfisher — the halcyon — builds her nest on the sea, and her father, Aeolus god of the winds, calms the winds to protect the young. For such warmth, the baggy trousers shown below — vráka or βράκα, plural vrákes or βράκες — look ideal. The two pictures below show versions from Skyros and Crete.
Man wearing Skyros shepherd's costume, including vraka trousers. Postcard of man wearing Cretan costume, including vraka trousers.
[ Images: (1) via user Pycckhcoz on Wikipedia, originally by E. Athanasiades; (2) via user Nepuzedin on Wikipedia, originally by Nicolas Sperling ]

I never saw vráka while living in Athens, but I often saw another pleated garment: the fustanella or φουστανέλλα, worn by the Presidential Guards in Syntagma Square.
A member of the Greek Presidential Guard, wearing a fustanella. [ Image: via Wikipedia, by Stanislav Amelchyts ]

Wikipedia on the fustanella cites the paper "Akritan Ikonography on Byzantine Pottery" by J. A. Notopoulos, which says that the fustanella evolved from the Roman toga, and that in cold climates, pleats were added for extra warmth. I was puzzled about how this works, but it makes perfect sense when you look at the diagrams in "'Military' Box Pleats" in the blog Matthew A. C. Newsome Kiltmaker. These show how, as you add cloth, the shape of the pleats changes, resulting in more overlap, up to an effective four or five layers of cloth per pleat. It makes me think that none of the pleated Moroccan trousers I own were designed for British winters, and that I should commission a kiltmaker to make a pair that are. In Scotland, because it's the only country I'll be able to visit without a visa. Until the next independence referendum, that is, when Scotland rejoins the EU.

On the way from Germany to Greece, I'd pass through Italy. If Münster cathedral is sublime, so is the train journey down through the Alps via Rosenheim, Kufstein, Innsbruck, Brenner, and the towns of the Italian Tyrol. Watching shoppers getting on and off the train as it stops at the stations: weekly routine to them, a thrice-in-a-decade occurrence to me. Looking down at the onion-dome churches in little villages far below the railway track. And sitting in the dining car as the train speeds past the Adige south of Trento, the river foaming past near the rails, pine forests and bare peaks in the background.

Italian tailoring is legendary, and I have a fine specimen of it myself, this Falabella velvet jacket. But Italian clothes still share the same body plan, shall I call it, as English ones, so I don't find them as interesting as the more exotic styles I've blogged on this site. But there is one splendid exception. Strictly speaking, it's not in Italy, but I'm still going to show it. It's the Pontifical Swiss Guard of Vatican City:
A member of the Pontifical Swiss Guard, in his colourful blue, yellow, and red uniform. [ Image: by Mircea Iancu, via Pexels ]

Now to Portugal. If Athens has the halcyon days, the Algarve has folksongs about the almond blossom coming out in January. So that's another place I'd go for warmth. But for tailoring, I might need to go further north. There's a saying: "Braga reza, o Porto trabalha, Coimbra estuda e Lisboa diverte-se". Braga prays, Oporto works, Coimbra studies, and Lisbon plays. The tailoring I've seen would bear out the part about Oporto working. When I was last employed in Portugal, Oporto was full of little independent clothes shops, selling excellently cut clothes made from local fabrics. One shop owner told me that Portugal has excellent textiles, but that they were little known outside Portugal, because the companies didn't do enough marketing.

There was a great deal of black. The Portuguese have this word "saudades" that they claim no-one else truly understands: a national melancholy, perhaps for the loss of an empire. (That's better than Tory party nostalgia, which manifests itself in unrealistic longings for Empire 2.0 while playing silly buggers with the EU.) Saudades shows itself in much use of minor keys in music; and in the colours of everyday dress. Traditional costumes, however, are more colourful. Search for "trajes tradicionais Portugueses" and "trajes folkloricos Portugueses". The minstrel and academic costumes, "trajes de tunas Portugueses", are also worth seeing.

But that's quite enough of Europe, I hear Theresa May yell. "If you believe you are a citizen of the world, you are a citizen of nowhere. You don't understand what citizenship means." I've seen at first hand the national costumes of four countries, bought clothes in six, and had haircuts in three. I know where to buy the best bread in Athens, and the best beer in Brussels; how to order doces de ovos in Coimbra, speculaas in Maastricht, torte in Münster, and churros in Salamanca. If I'm not now as fat as a house, it's because I've also walked and run: amidst the piney aromas of Strefi hill in Athens; 20 km from Eindhoven to the Achelse Kluis monastery on the Belgian border, returning with a rucksack full of Trappist beers; meandering amongst the National Rebirth architecture of Plovdiv; exploring Sintra, of which a Spanish proverb says, "to see the world and leave out Sintra is to go blind about."

I've made phone calls in five languages, written emails in four, and listened to lyrics in eight. There's a lovely melancholic song "De Fanfare van Honger en Dorst" — "The Fanfare of Hunger and Thirst" — written by Lieven Tavernier and sung by Gerard van Maasakkers that I found out about by reading a music review in a discarded paper in a café in Maastricht. It still makes me want to cry. So, in a different way, do tracks from the Galician group Luar na Lubre's Plenilunio. I once walked back at night in Sintra singing pre-revolutionary songs by Vitorino with a musician from the Grupo de Ação Cultural, a group that fought against the dictator Salazar. I don't know whether she ever heard Manos Loïzos singing "Ο Δρόμος" and "Τρίτος Παγκόσμιος", but I've heard those too, and they would have fitted right in. With a few languages and some persistence, the EU provides such treasures for the taking. Has Theresa May ever opened herself to such opportunities? I think not. So here, from Redbubble, is my final item of clothing.
Man wearing a T-shirt with the legend 'CITIZEN OF NOWHERE'. The shirt is black, with the legend in white and Theresa May's face under it.
Available in small, medium, large, extra large, and double extra large.

Colormind II

In last Monday's post, I wrote about Colormind, a program which extracts colour palettes from photos. And on Friday, I turned to pix2pix, a program which can be trained to transform images, producing effects such as these:
Screenshot of the pix2pix page, showing a sketch for a cat, and the picture generated from it.
Screenshot of the pix2pix page, showing my sketch for a handbag, and the picture generated from it. [Images: (1) in tweet 19 Feb 2017 by Christopher Hesse; (2) Chromophilia ]

As it happens, Colormind is two different programs. On Monday, I discussed one of these, the extractor. But there's also an ab initio generator. Colormind's author Jack Qiao describes it in his blog entry "Generating Color Palettes with Deep Learning". Here, he trained pix2pix to generate complete palettes from partial ones. He did this by giving it a database of pairs of images. In each pair, the "output" image was a complete palette from Adobe Color, and the "input" image was the same palette with some colours missing. So in effect, he was training pix2pix to "fill in" missing colours.
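As a rough illustration of what such a training pair might look like (this is my guess at the shape of the data, not Jack Qiao's actual code; the palette values and file names are invented), one could render each palette as a strip of swatches and blank out a couple of swatches to make the "input":

```python
# Sketch only: building "input"/"output" image pairs for palette completion.
# The palette values and file names below are invented for illustration.
import random
from PIL import Image, ImageDraw

def palette_strip(colours, width=256, height=64):
    """Render a list of RGB tuples as a horizontal strip of equal-width swatches."""
    img = Image.new("RGB", (width, height))
    draw = ImageDraw.Draw(img)
    w = width // len(colours)
    for i, colour in enumerate(colours):
        x0 = i * w
        x1 = width - 1 if i == len(colours) - 1 else (i + 1) * w - 1
        draw.rectangle([x0, 0, x1, height - 1], fill=colour)
    return img

def make_pair(colours, n_missing=2, blank=(128, 128, 128)):
    """Return (input, output): the palette with n_missing swatches blanked out,
    and the complete palette."""
    partial = list(colours)
    for i in random.sample(range(len(colours)), n_missing):
        partial[i] = blank
    return palette_strip(partial), palette_strip(colours)

# One five-colour palette (values invented), written out as a training pair.
palette = [(20, 24, 35), (70, 90, 120), (160, 180, 190), (230, 220, 200), (200, 90, 60)]
inp, out = make_pair(palette)
inp.save("palette_0001_input.png")
out.save("palette_0001_output.png")
```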

One could regard this as analogous to what I showed on Friday, where pix2pix was being trained to "fill in" handbags, shoes or cats from their sketches. (For the technically minded, the original authors of pix2pix note under "Color palette completion" in "Image-to-Image Translation with Conditional Adversarial Nets" that this "stretches the definition of what counts as 'image-to-image translation' in an exciting way"; it may not be the best choice of representation.)

I'm not clear from Jack Qiao's writeups how closely the ab initio generated palettes resemble those created by people. In describing the palette extractor, he says it submits the palettes it generates to a gatekeeper, which rejects those that don't look like human-created ones. The ab initio generator doesn't have a gatekeeper: its knowledge comes from complete palettes from Adobe Color. Do these have the same kind of high-level structure that human-created palettes do? I don't know.

To experiment with the ab initio generator, go to the Colormind site. You'll see a strip of five colours. Each box in it has either three or four controls under it. These are represented by icons for: a padlock; sliders; and a left arrow or a right arrow or both. Clicking on the sliders icon gives you controls for changing the colour. Clicking on the padlock locks in your choice. And clicking on the arrow(s) exchanges your colour with the one on its left or right. Clicking "Generate" will generate a new palette from the locked-in colours.

Designing Handbags with pix2pix

I just designed a handbag!
Screenshot of a handbag designed by Christopher Hesse's pix2pix page.

To be fair, there's very little about the bag that's mine, apart from its outline. I made the image by sketching a handbag in the "Input" box in the "edges2handbags" section of "Image-to-Image Demo: Interactive Image Translation with pix2pix-tensorflow", by Christopher Hesse. Once I'd done so and pressed "Process", his software did the rest:
Screenshot of the pix2pix page, showing the generated handbag above, and my sketch for it.

As well as handbags, Christopher Hesse's page allows you to generate shoes from sketches, cats from sketches (with gruesome results if you get it wrong), and buildings from facade plans. It's all based on Hesse's re-implementation of pix2pix, a rather wonderful piece of machine-learning software, which can be trained to carry out a variety of general-purpose — and hard — image transformations.

To train pix2pix, it must be fed with a database of pairs of images. With the handbags, shoes, and cats, the "output" image of each pair was a photo of a handbag, shoe, or cat. The other image in the pair, the "input", was a black-and-white "sketch" thereof, automatically generated by software that detects the edges of objects. Once pix2pix has been trained, it can take new inputs and generate outputs from them.
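As a rough sketch of how such sketch-photo pairs can be built (this is not Christopher Hesse's actual pipeline, which I believe used a more sophisticated edge detector; handbag.jpg is a hypothetical local photo), a simple Canny edge detector already gives the flavour:

```python
# Sketch: turning a photo into a crude black-on-white "sketch" to pair with it.
# handbag.jpg is a hypothetical local file; the real datasets used a fancier edge detector.
import cv2

photo = cv2.imread("handbag.jpg")                 # the "output" half of the training pair
grey = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(grey, threshold1=100, threshold2=200)
sketch = cv2.bitwise_not(edges)                   # black lines on white, like the demo's Input box
cv2.imwrite("handbag_input.png", sketch)          # the "input" half of the training pair
cv2.imwrite("handbag_output.png", photo)
```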

You can try this for yourself, at various levels. To try Christopher Hesse's generators, go to his page. He recommends using it in Chrome. I tried it in Firefox, and found that the browser kept popping up messages saying "A script is slowing down the page: do you want to kill it or wait?". (Obviously, one should then wait, not kill.) Typically, this would happen three or four times during each run. But the runs do eventually end, and then you get a new handbag you can admire, or a new cat you can run away from screaming.

Training pix2pix on new sets of images would be fun. At the moment, I think this still requires knowledge of programming: that is, there aren't yet systems that will allow you to (for example) click on loads of handbag photos, automatically turn them into sketches, feed the sketch-photo pairs to a learning program, leave it to train on them, and then embed the result into a web page or app you can use to generate new pictures from sketches. No doubt someone will eventually build one, but in the meantime, the pages above plus "Pix2Pix" by Machine Learning for Artists contain enough information for a reasonably skilled programmer to get started.

And at an even deeper level, one can research into improved learning programs for fashion design, as in this recent paper: "DeSIGN: Design Inspiration from Generative Networks" by Othman Sbai, Mohamed Elhoseiny, Antoine Bordes, Yann LeCun, and Camille Couprie. That requires a deep knowledge of machine-learning-related things such as loss functions, as well as the visual language of clothing. But let's return to something simpler, the handbags. Here are some more of my runs:
Screenshots of more handbags generated by Hesse's pix2pix from my sketches.

It's notable how sensitive the output is to minute changes in input. See how the texture and colour of the right-hand face of bags 1 to 4 change when I add small details to the sketch. Or the way the colouring of bag 5 changes when I add a handle.

Why? Christopher Hesse says that he trained the handbag generator on a database of about 137,000 handbag pictures collected from Amazon. But bags vary hugely in surface detailing: one bag could be made from indigo ruched satin, while another with almost the same outline could be navy viscose/polyamide netted with black lace. A not-too-clever edge detector might output very similar sketches for both. So the mapping from sketch to bag is, as mathematicians like to say, "not well behaved": moving from one point to the next, you feel like a chamois leaping around a million-dimensional version of the Brenner Pass. One infinitesimal step in one direction, and you plummet down a precipice in some other direction that you can't define and never wanted to go.

In addition, the edge detection isn't perfect, so if you sketch a handbag using unbroken even lines, your drawing won't be using the same "notation" that the inputs do.

And, according to a remark by Jack Qiao:

Pix2pix is great for texture generation but bad at creating structure, like in the photo->map example straight lines and arcs tend to come out looking "liquified".

Here for comparison is a real bag: an evening bag that I bought from Unicorn to use as a purse. It has lots of structure.
Black velvet evening bag. It's the size of a large purse, rectangular, and decorated with sequins, small plastic beads, and leaves and spirals made from metal segments.

Primark and the Spectrum Suckers IV: Brown Needs Purple?

Here's an interesting sidelight on human-designed colour palettes. I tried running my photo from "Primark and the Spectrum Suckers" through Colormind. The photo is predominantly brown, and every single palette Colormind made from it contained some kind of purple, not too different from the one on the left below.
Colour palette from Colormind for the photo of Primark used in 'Primark and the Spectrum Suckers'.

Colormind, I explained in my post about it, extracts the main colours from a photo, produces random variations on them, and then sends these for scrutiny by a gatekeeper: a machine-learning program trained on palettes that Jack Qiao, Colormind's author, thought were good looking. I wonder whether Jack didn't use enough palettes to teach it that brown doesn't always have to go with purple. Google "colour palette brown" and you'll see that there are other choices.

Colormind

While I was looking for photos of green-and-purple clothing, I came across a colour-scheme generator named Colormind. There are lots of generators on the web. What distinguishes Colormind is that it tries to make its schemes acceptable to humans.

This is difficult, says Colormind's author, Jack Qiao. In his blog post "Extracting Colors from Photos and Video", he writes that:

Human-designed color palettes typically have some high-level structure — a gradient running left to right, similar hues grouped together etc., and have some minimum amount of contrast between each color. Automatically created palettes [ones automatically created from an image] look more haphazard, with colors distributed according to how they were used in the original image.
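One crude way to put a number on that "minimum amount of contrast" is to measure the smallest pairwise colour difference within a palette. A sketch (plain RGB distance is only a rough stand-in for perceived contrast, and the palette values below are invented):

```python
# Sketch: smallest pairwise RGB distance in a palette, as a crude proxy for
# "minimum amount of contrast between each color". Values below are invented.
from itertools import combinations
from math import dist

def min_pairwise_distance(palette):
    """palette is a list of (r, g, b) tuples with channels in 0-255."""
    return min(dist(a, b) for a, b in combinations(palette, 2))

designed = [(34, 40, 49), (57, 62, 70), (0, 173, 181), (238, 238, 238), (255, 211, 105)]
print(min_pairwise_distance(designed))
```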

There's a short discussion about this on the Y Combinator Hacker News group. There, Jack proposes an experiment to demonstrate the difference between randomly generated palettes and ones designed by experts: go to Adobe Color and click on one of the color rules, and Adobe will generate a random palette based on that rule. Then compare it with palettes uploaded by users.

Given that there is this difference, how can one make a machine generate human-style palettes? Jack's answer is to use the results of machine learning. Here's his diagram for the process:
Diagram of how Colormind generates a palette from an image. It starts with an extracted palette labelled 'MMCQ'. This is followed by four slightly different palettes labelled 'Random Variations'. These lead to another four palettes labelled 'Shuffle'. They are all fed into a 'Classifier'. The output from the classifier is the same as the 'Shuffle' palettes, except that each is annotated with a number. Finally, there is an 'Output' palette. In the diagram, this is the one with the highest number. [ Image: from "Extracting Colors from Photos and Video" by Jack Qiao in his blog. ]

The first stage is colour quantisation. And now you know why I devoted a post to this last Friday. In the diagram above, that's represented by the first sub-image, the one labelled MMCQ: an abbreviation for a particular colour-quantisation algorithm, Modified Median Cut Quantization. The second stage is to produce a few random variations on the extracted palette, shown in the row below. The third stage is labelled 'Shuffle'. From Jack's diagram, this appears to mean that it shuffles the order of colours within each palette. The fourth stage feeds all the shuffled palettes to a "classifier", which rates them for acceptability. And the fifth stage rejects unacceptable palettes.

The classifier is where machine learning comes in. Jack trained it on palettes that he'd chosen as "good looking". As he says, "In the end [after some experiments] I built a self-contained classifier and trained it on a hand-picked list of examples. Good color palettes generally have good color contrast and an overarching theme, and bad ones look random and/or has bad inter-color contrast." Once trained, Jack's classifier acts as a gatekeeper, letting through only palettes that it thinks are good looking.
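Putting Jack's stages together, the whole thing amounts to "quantise, perturb, shuffle, score, keep the best". Here's a minimal sketch of that loop; the perturbation amounts are arbitrary, and score_palette stands in for his trained classifier, which I don't have:

```python
# Sketch of the extract-then-filter idea, not Jack Qiao's actual code.
# score_palette is a stand-in for his trained classifier.
import random

def perturb(palette, amount=20):
    """Random variation of a palette: jitter each channel a little."""
    return [tuple(min(255, max(0, c + random.randint(-amount, amount))) for c in colour)
            for colour in palette]

def best_palette(base_palette, score_palette, n_variations=8):
    """Generate shuffled random variations of an extracted palette and keep
    the one the classifier rates as most human-looking."""
    candidates = []
    for _ in range(n_variations):
        variation = perturb(base_palette)
        random.shuffle(variation)
        candidates.append(variation)
    return max(candidates, key=score_palette)
```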

So to summarise, Colormind reduces a photo to a palette consisting of a small number of colours. It then generates random variations on this, and rejects those that look bad to a gatekeeper trained on appealing palettes designed by humans. I was curious to see how this would apply to my red silk top, which, as I mentioned in "Visualising Clothing Colours as a 3D Cloud of Points II", is an intense red with little white. Here are three palettes Colormind generated from it:
Three palettes generated by Colormind for my red silk Chinese top. Each has an intense red, two pale brick-reddish-pinks, a very pale whitish red, and a dark maroony-aubergine.

Each has an intense red, two pale brick-reddish-pinks, a very pale whitish red, and a dark maroony-aubergine. For comparison, here's the TinEye palette. It has a very different distribution, which hasn't balanced the darks with a pale:
Colour palette from TinEye for my red Chinese silk top, with a copy of the photo reduced to that set of colours. The palette contains one intense red (Cinnabar), two much browner reds (Guardsman Red and Monarch), a blackish brown (Aubergine), and a pink (Sea Pink).

Here's one other example, from the photo of the blue, green, and plum shirts together. The first image is from Colormind, and the second from TinEye. I don't know why Colormind hasn't given me the colour labels this time.
Colour palette from Colormind for my sage-green, ice-blue, and plum velvet Moroccan shirts.
Colour palette from TinEye for my sage-green, ice-blue, and plum velvet Moroccan shirts.

To see how Colormind does on other images, try it yourself. Should you want to use my photos, I've made them available in this zip file.

Primark and the Spectrum Suckers III: Four Browns, Two Greys, and a Black

Here's a TinEye colour extraction for my Primark photo from "Primark and the Spectrum Suckers". Four browns, two greys, and a black.
Colour palette from TinEye for the photo of Primark used in 'Primark and the Spectrum Suckers', with a copy of the photo reduced to that set of colours. The palette contains four shades of brown, two shades of grey, and a black.

For comparison, here is the palette for mud.
Colour palette from TinEye for a Wikipedia photo of dirt and mud. The palette contains two shades of brown and a shade of grey.

Extracted from a photograph on Wikipedia, credited to user 0x0077BE.

Colour Quantisation

In my previous two posts, I showed off the colour distributions for some of my clothes. These vary widely from one garment to the next. But they all consist of innumerable points, each representing a slightly different colour from those nearby. Sometimes, we need to reduce this multiplicity to a much smaller number. That's called colour quantisation, and is what I'm going to introduce today.

Wikipedia explains colour quantisation as "a process that reduces the number of distinct colors used in an image, usually with the intention that the new image should be as visually similar as possible to the original image". I referred to Wikipedia because I wanted to use two of its public-domain images. The first is this rose:
A yellow rose. The photo has had all its blue removed. [ Image: from Wikipedia article on "Color Quantization". Credited to Dcoetzee. ]

The second picture I wanted is below, and is the colour distribution for the rose. It was produced by different software from the distributions in my last post, so it doesn't look exactly the same, but the idea is the same. The only difference is that, because the blue has been removed, there are only two colour axes; that is, the distribution lies in a plane:
The color space of the photograph: a plane containing a broad diagonal coloured band running from the black corner at bottom left towards the white corner at top right. The plane also contains lines dividing it into 16 polygonal regions, each with a dot marking its centre. [ Image: from Wikipedia article on "Color Quantization". Credited to Dcoetzee. ]

I mention lying in a plane only because it makes the next bit easier to understand. As well as the distribution itself, the image contains lines dividing it into 16 regions, and blue dots marking the centre of the regions. These describe, Wikipedia says, "an optimized palette generated by Photoshop via standard methods". What this means is that Photoshop has squished the multiplicity of colours down to 16. It thinks that these are "optimized", in that if you were to replace each region's colours by the region's centre, this would do less damage to the image than if you used any other set of 16 regions.

What would such a colour-reduced image look like? I don't have one for the rose, but I've made a different example. At the top of this post, you'll see my logo: a pattern of fruit and flowers taken from a rather lovely velvet waistcoat by Oakland. I fed this to the online TinEye Color extraction page. Doing so is easy: just browse and upload an image, or submit its URL. Here's the result for my logo. The reduced set of colours is on the right, and the reduced image made from them is the top one on the left.
Colour palette from TinEye for my logo, with a copy of the logo reduced to that set of colours.
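If you'd rather not upload anything, most image libraries can do the same kind of reduction locally. A minimal sketch with Pillow, whose default quantiser is a median-cut algorithm; logo.png is a hypothetical local copy of the image, and the result won't exactly match TinEye's or Photoshop's, since the algorithms differ:

```python
# Sketch: reduce an image to 16 colours with Pillow's default (median-cut) quantiser.
# logo.png is a hypothetical local file.
from PIL import Image

img = Image.open("logo.png").convert("RGB")
reduced = img.quantize(colors=16)                  # palette-mode image with 16 entries
reduced.convert("RGB").save("logo_16_colours.png")

# List the 16 colours actually used, as (pixel count, (r, g, b)) pairs.
print(reduced.convert("RGB").getcolors(maxcolors=16))
```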

That's almost all I want to say about colour quantisation, now that I've introduced the concept. There are a variety of algorithms for achieving it, and these have been built into lots of different software packages. Wikipedia sounds a caution about these:

The name "color quantization" is primarily used in computer graphics research literature; in applications, terms such as optimized palette generation, optimal palette generation, or decreasing color depth are used. Some of these are misleading, as the palettes generated by standard algorithms are not necessarily the best possible.

In other words, distrust the words "optimal", "optimised", and "optimum".

Before finishing, here are some more examples, using the clothing photos I analysed for colour distribution in the previous two posts. These are also from TinEye.
Colour palette from TinEye for my chocolate-brown velvet Moroccan shirt, with a copy of the photo reduced to that set of colours.
Colour palette from TinEye for the Colorpoint silk shirt, with a copy of the photo reduced to that set of colours.

Colour palette from TinEye for my garnet-red velvet Moroccan shirt, with a copy of the photo reduced to that set of colours.
Colour palette from TinEye for my orange qandrissi and ice-blue velvet Moroccan shirt, with a copy of the photo reduced to that set of colours.
Colour palette from TinEye for my plum velvet Moroccan shirt, with a copy of the photo reduced to that set of colours.
Colour palette from TinEye for my red Chinese silk top, with a copy of the photo reduced to that set of colours.
Colour palette from TinEye for my rose-pink velvet Moroccan shirt, with a copy of the photo reduced to that set of colours.
Colour palette from TinEye for my sage-green, ice-blue and plum velvet Moroccan shirts, with a copy of the photo reduced to that set of colours.
Colour palette from TinEye for my sage-green velvet Moroccan shirt, with a copy of the photo reduced to that set of colours.

Visualising Clothing Colours as a 3D Cloud of Points II

Friday's post was about displaying the colour distribution in clothing photos. Here are the results of trying it on some more of my clothes. The first few are velvet Moroccan shirts in chocolate-brown, garnet-red, violet, ice-blue, rose-pink, and sage-green. Then there's a cube for the green, blue and violet shirts together. Then one for my red silk Chinese top. And finally, one for this silk shirt by Colorpoint, decorated with figures of the Roman-inspired Marvin the Martian.

The preponderance of Moroccan shirts is because I need the background removed, so that its colour distribution doesn't get mixed in with that of the clothes. As it happened, I'd already done so for most of the Moroccan ones, when I was preparing for a show. But background removal is tedious and imprecise, and I've not done it for most of the other clothes.
Colour-distribution cube for my brown velvet Moroccan shirt. The cube contains a fairly tight diagonal line going from the black corner to the white; i.e. from (0,0,0) to (1,1,1).
Colour-distribution cube for my garnet velvet Moroccan shirt. The cube contains a diagonal line going from the black corner to the white. It has a small offshoot going to the blue corner, and a bigger offshoot parallel to the red-blue face of the cube, going about 1/3 of the distance to the white corner.
Colour-distribution cube for my plum velvet Moroccan shirt. The cube contains a bluish-violet banana going from the black corner to the white. The middle part of the banana bends away from the green corner.
Colour-distribution cube for my rose velvet Moroccan shirt. The cube contains a white-pinky-blue banana going from the black corner to the white. The middle part of the banana bends away from the green corner.
Colour-distribution cube for my sage-green velvet Moroccan shirt. The cube contains a greenish-yellowy-white banana going from the black corner to the white.
Colour-distribution cube for my sage-green, ice-blue, and plum velvet Moroccan shirts. The cube contains tongues of pale green, ice blue, and pinkish-violet, and a little white.
Colour-distribution cube for my red silk Chinese top. The cube contains a plume of red going from the black corner to the red corner, and a fine spray of white going to the white corner.
Colour-distribution cube for the Colorpoint silk shirt, which is black decorated with green, yellow, and orange-red aliens. The cube contains vivid roughly parallel sprays of these three colours, plus a bit of pale blue.

I love the vividness and contrast in the distribution for the Colorpoint shirt. But the most noticeable thing is the intensity of the colour plume for the red silk Chinese top. Unlike with the other clothes, the plume goes off to a colour corner (red), with only a very thin offshoot to the white corner. This may be because most of the other clothes have either patches of whitish shine, or actual whitish material as in the garnet-red shirt. But also, the silk top does look an intense red when I wear it — a red that I've not seen on any other garment. I've been told that silk dyes more intensely than other fabrics, though I can't find anything online that confirms this.

Primark and the Spectrum Suckers II: Visualising Clothing Colours as a 3D Cloud of Points

    In my post "Primark and the Spectrum Suckers", I imagined white light passing through a Primark shop and exiting as a spectrum made entirely from grey:
    Two pictures in one image. The first is a prism with light going through and forming a spectrum, labelled 'PRISM'. The second is the same prism with an interior photo of the Oxford Westgate Primark shop superimposed. It is labelled 'PRIMARK'. The light coming out is the same spectrum as for the first prism, but grey, not coloured.

    My collage was inspired by the stunningly dreary Primark in the new Westgate shopping centre in Oxford. I'm sure it's obvious that I made the grey "spectrum" by monochromaticising the one in the upper half of the collage, which I'd taken from some free clip-art. But how could I produce a real picture of the colour distribution? I've been looking for tools, and found one recently whose output I'll show. As this is also related to an article I'll post next week about generating colour schemes — including green and purple colour schemes — I've decided to write about colour distributions today.

    One standard tool for displaying colour distributions is the two-dimensional colour histogram. Here's one for the Primark photo from the collage. I made it in Gimp, free image-processing software that I use for editing photos and a host of other things, including retouching cartoons.
    Photo of Primark shop from the above collage, with a colour histogram superimposed.

Such histograms are easy to produce, but as David Tschumperlé explains in "Visualizing the 3D point cloud of RGB colors", they have disadvantages. He displays a photo of the Swedish model Lena Söderberg, and another photo edited so that the green channel has been reflected around the X-axis, and the blue around the Y-axis. The second photo appears to have much more green, but its histogram is exactly the same as the first photo's.

But there are other ways to plot colour distributions, using the idea that because we have three colour components, we can represent them as points in three-dimensional space. Take any pixel, and express it as so much red, so much green, and so much blue. This gives us three numbers, each between 0 and 1. A pixel that's black would be (0,0,0); one that's white would be (1,1,1); and one that's pure red would be (1,0,0). Treat these as a point, and plot it. Repeat for all the pixels. This gives us a cloud of points lying within a cube. To display the distribution of colours, colour each point with its actual colour; to display the number of occurrences, give each point a colour that represents these. Such plots, as David Tschumperlé says, give us more information about the global variety of colour in the image, and the local dispersion of tones around each point.
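The plotting itself is easy to sketch in Python, if you want to try it on your own photos; shirt.png is a hypothetical clothing photo, and I downsample it first so the scatter plot stays manageable:

```python
# Sketch: plot an image's pixels as a cloud of points in the RGB cube.
# shirt.png is a hypothetical local photo.
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

img = Image.open("shirt.png").convert("RGB")
img.thumbnail((128, 128))                       # downsample so the plot stays responsive
pixels = np.asarray(img).reshape(-1, 3) / 255   # one row per pixel, channels scaled to 0..1

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(pixels[:, 0], pixels[:, 1], pixels[:, 2], c=pixels, s=2)
ax.set_xlabel("red"); ax.set_ylabel("green"); ax.set_zlabel("blue")
plt.show()
```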

There are examples for Lena and other images in David's article. Here are some of my own, made using Kai Uwe Barthel's Color Inspector 3D. Each of these shows an image and the corresponding distribution cube.
Colour-distribution cube for a red square. There's a red dot at the (1,0,0) corner of the cube.
Colour-distribution cube for a green square. There's a green dot at the (0,1,0) corner of the cube.
Colour-distribution cube for a red square next to a green square. There's a red dot at the (1,0,0) corner of the cube, and a green dot at the (0,1,0) corner.
Colour-distribution cube for a red square next to a thin green rectangle. There's a red dot at the (1,0,0) corner of the cube, and a green dot at the (0,1,0) corner.
Colour-distribution cube for my logo. There's a spray of green, purple, and pink, fanning out from the (0,0,0) corner.
Colour-distribution cube for orange sarouel and ice-blue shirt. There are two 'bananas' going from near the black corner to near the white corner. One is blue, the other orange.

In the first image, all the pixels are pure red. So strictly speaking, we'd end up with just one point, at the position (1,0,0), the red corner of the distribution cube. Color Inspector 3D has made a small dot rather than a point, probably to make the results easier to see. The second image is all green, so we get its counterpart: a cube with a green dot at (0,1,0), the green corner.

The third and fourth images show what happens for more than one colour, in this case pure red and pure green. We get a red dot and a green dot.

And the fifth and sixth images show results for two realistic images. One is for my logo, which is from this Oakland velvet waistcoat. The other is for my orange qandrissi and ice-blue shirt.

Note that in the third and fourth images, the results are the same regardless of the proportions of red and green. I have a reason for mentioning this, and it's to do with "anti-aliasing". Look at this:
Colour-distribution cube for a red line. There's a red dot at the (1,0,0) corner of the cube, and other dots on a diagonal leading to the white corner, (1,1,1).

What I did here was to get the colour-distribution cube for a red line. But if it's red, why are there those increasingly white dots leading down to the white corner of the cube? I suspect the reason is that to counteract the relatively low resolution of the line, the drawing program (Gimp, but other programs would do the same) gives pixels at the edge of the line colours intermediate between the line and its background. In this case, various shades of reddish-white. This is called anti-aliasing. You can see this in this zoomed-in portion of the line:
The red line enlarged. On its edges are paler, reddish-white pixels.
As the distribution plots don't record how many pixels of each colour they see, these few intermediate pixels have as much impact on the display as the pure red ones.
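If you want to check this for yourself, counting the distinct pixel values in the drawing is enough; a sketch, assuming the line has been exported as red_line.png (a hypothetical file name):

```python
# Sketch: count the distinct colours in a supposedly pure-red line drawing.
# Anti-aliasing shows up as extra reddish-white shades along the edges.
# red_line.png is a hypothetical local file.
import numpy as np
from PIL import Image

pixels = np.asarray(Image.open("red_line.png").convert("RGB")).reshape(-1, 3)
colours, counts = np.unique(pixels, axis=0, return_counts=True)
for colour, count in sorted(zip(colours.tolist(), counts.tolist()), key=lambda p: -p[1]):
    print(colour, count)
```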

I didn't start today's post with the intent of writing a graphics tutorial. But I noticed these unexpected intermediate colours when analysing example coloured images I'd drawn, and decided I'd better understand where they come from, because they could mislead. I don't know whether similar artefacts could arise in other ways, for example when resizing JPEG files. But I'm fairly sure that some of my images will be affected by the following. Some of my clothing pictures are ones where I've separated the clothing from its background. But it's hard to do so perfectly, so there are minute remnants of background clinging to their borders. This seems likely to bias the distributions, perhaps by giving them tails that shade off towards black.

With that out of the way, let's look at Primark. Here's my original photo:
The photo used in 'Primark and the Spectrum Suckers'. Interior of the Primark shop in the Westgate Shopping Centre, Oxford.
If the photo seems rather "chewy", it's because the original suffered from motion blur, which I reduced by using Focus Magic. I didn't want to stand around with a tripod, and my camera isn't sensitive enough for a fast point-and-shoot response to indoor lighting.

And here's the colour distribution:
Colour-distribution cube for the photo of the Primark shop used in 'Primark and the Spectrum Suckers'. It's a moderately loose diagonal line, stretching from the black corner to the white corner. There are a few dots of red, green, and blue further out.

So what do we have? It's a moderately loose diagonal line, stretching from the black corner to the white corner. There are a few dots of red, green, and blue further out. These, I suspect, come not from the clothes, but from their labels.

It's rather sad that the labels on the socks are the most colourful thing in the shop.
Detail from the photo used in 'Primark and the Spectrum Suckers'. The sock rack in the Primark shop in the Westgate Shopping Centre, Oxford.


    "Visualizing the 3D point cloud of RGB colors", Open Source Graphics, by David Tschumperlé, 24 February 2018.

I used Color Inspector 3D by Kai Uwe Barthel. This is a Java program, packaged as a JAR file: something rather like a zip file, containing all the program components. On Windows 10, I was able to run it by following Kai's instructions: download ColorInspector3D.jar from the link, and double-click on it. This requires the computer to have Java, which the computer I'm using evidently has. Once you've started the program, click "File" and then "Open", and select an image. Its colour cube should then appear. I found that the program failed on very big images, and I had to reduce their size.

Colourful Cowboys with Green and Purple

Cover of the Monster Book for Boys, possibly 1954
Title page of the Monster Book for Boys, possibly 1954
Because it's related to the theme of green and purple, I'm going to post an illustration from a children's annual. It's from the Monster Book for Boys and shows three colourfully dressed cowboys. One is wearing green and purple. Real cowboys, I suspect, were not so picturesque.
Illustration from the Monster Book for Boys, possibly 1954. Shows three cowboys wearing blue trousers with red shirt, black waistcoat, and green bandana; grey-blue trousers, red-and-white checked shirt, yellow jacket, and blue bandana; and purple trousers, green shirt, and bottle-green waistcoat.


The website Old Classic Car has a page "Old childrens books and annuals that feature cars". One of the front covers shown is the same as my Monster Book for Boys. The author says their copy was a Christmas gift from 1954, so perhaps the book was published in 1953 or 1954.