Here's a follow-up to my style-transfer post, itself inspired by this flowery but earth-toned kimono, this not-at-all earth-toned Yves Saint-Laurent dress, and the onset of this year's very-definitely earth-toned autumn. Put these together, and I'm sure you can see what I'm aiming at.
Briefly — I want a kimono with colours as vivid, bright and bold as that dress. I'm probably not allowed to pay someone to make one with exactly that dress's designs, because it would violate Yves Saint-Laurent's copyright. And I can't afford to pay an artist to invent a new design that's equally good but just different enough to avoid infringement. But suppose I could run a computer program that either (a) inputs designs from all the "Homage to Pablo Picasso" dresses and invents one in the same spirit, or (b) inputs one design and mutates it to produce an equally good variation.
That seems to be what is described in the paper "Fashioning with Networks: Neural Style Transfer to Design Clothes" by Prutha Date, Ashwinkumar Ganesan and Tim Oates, 31 July 2017, posted on the arXiv. The method is similar to what I described in my style-transfer post, which is why I went into so much detail.
For our purposes, the differences seem to be that, first, the garment and its parts are the "content": that is, the objects in the image. The colouring and texture are the "style". Transferring the style from one shape of garment to another automatically makes the colouring and texturing follow the second shape:
Actually, I'm not quite sure whether I have that right, because it doesn't always seem to happen in style transfer from paintings to photos: see for example the Tübingen examples in my style-transfer post. In the one for Munch's The Scream, some of Munch's sky has migrated into the frontage of a house. On the other hand, I did note that van Gogh's stars stay strictly within Tübingen's sky. I think the truth is that there are no rules in the style-transfer code that make style follow shape. However, style still tends to follow shape, because the optimisation process that I described in my post tries to preserve as much content as possible. But content is the objects in the image, and they will be lost if restyling erases or blurs too much of their boundaries. So there is an implicit bias in favour of retaining their shape, and therefore in favour of making the styling flow round them.
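That trade-off can be made concrete. In the Gatys-style formulation I described in my earlier post, "content" is compared activation by activation, so it is tied to spatial position, while "style" is compared via Gram matrices, which throw spatial position away. Here's a minimal NumPy sketch of the two losses (the feature maps would really come from a convolutional network; these toy arrays just stand in for them):

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, positions) array of feature-map activations.
    # The Gram matrix records which channels co-activate, capturing
    # colour and texture statistics while discarding spatial layout.
    return features @ features.T

def style_loss(gen, style):
    # Match texture statistics; normalisation follows the usual
    # 1 / (4 * C^2 * N^2) convention for C channels and N positions.
    c, n = gen.shape
    return np.sum((gram_matrix(gen) - gram_matrix(style)) ** 2) / (4 * c**2 * n**2)

def content_loss(gen, content):
    # Compare activations position by position, so erasing or blurring
    # an object's boundary is penalised directly. This is the implicit
    # bias that keeps the styling flowing round shapes.
    return 0.5 * np.sum((gen - content) ** 2)

# Shuffling an image's features spatially leaves its style loss at zero
# but not its content loss -- style is blind to shape, content is not.
F = np.arange(12.0).reshape(3, 4)
P = F[:, [2, 0, 3, 1]]  # same activations, shuffled positions
print(style_loss(F, P), content_loss(F, P))
```

The little experiment at the end is the whole point: the style loss cannot tell the original from the shuffled version, so any pressure to keep Munch's sky out of the houses has to come from the content term.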
At any rate, the second difference between painting-to-photo style transfer (in the research I've written about) and the garment-to-garment style-transfer paper is that the latter can merge styles from several garments before transferring. This is demonstrated in the following image from the paper:
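I don't want to put words in the paper's mouth about exactly how it blends styles, but one natural way — continuing the Gram-matrix sketch above — is to take a weighted combination of several garments' style targets. Something like this (a sketch, not the paper's implementation):

```python
import numpy as np

def gram(features):
    # (channels, positions) feature map -> channel co-activation matrix.
    return features @ features.T

def merged_style_loss(gen, style_features, weights):
    # Blend several garments' styles by averaging their Gram matrices
    # with the given weights, then match the generated image to that
    # blended target. Weights let one garment dominate the mix.
    target = sum(w * gram(F) for w, F in zip(weights, style_features))
    c, n = gen.shape
    return np.sum((gram(gen) - target) ** 2) / (4 * c**2 * n**2)
```

With weights (1, 0) this reduces to plain single-garment style transfer; anything in between gives a mixture, which is presumably what lets the paper draw on a whole family of designs rather than copying any one of them.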
So is that all I need? Not quite. I want a garment, not a picture of a garment. Amazon is said to be developing factories that could automatically make clothes given their specifications: see "Amazon won a patent for an on-demand clothing manufacturing warehouse" by Jason Del Rey, recode, 18 April 2017. The style-transfer work would have to generate that specification, however — that is, some kind of sewing pattern — and as far as I know, it can't yet. But who knows what research is being done that hasn't yet been reported?