AI Technology

Artist Uses AI To Extract Color Palettes From Text Descriptions (arstechnica.com)

A London-based artist named Matt DesLauriers has developed a tool to generate color palettes from any text prompt, allowing someone to type in "beautiful sunset" and get a series of colors that matches a typical sunset scene, for example. Ars Technica reports: Or you could get more abstract, finding colors that match "a sad and rainy Tuesday." To achieve the effect, DesLauriers uses Stable Diffusion, an open source image synthesis model, to generate an image that matches the text prompt. Next, a JavaScript GIF encoder named gifenc extracts the palette information by analyzing the image and quantizing its colors down to a small set.
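For readers curious about the second half of that pipeline, here is a minimal sketch of the palette-extraction step in Node.js. It assumes the Stable Diffusion stage has already rendered the prompt to a PNG on disk, and it uses the sharp library to decode that PNG into raw RGBA bytes; sharp is an illustrative choice, not necessarily what DesLauriers' tool does.

```js
// Sketch of the palette-extraction stage only: assumes Stable Diffusion has
// already written ./sunset.png. "sharp" decodes the PNG into flat RGBA bytes
// (an assumption for illustration), which are then handed to gifenc's quantizer.
import sharp from "sharp";
import { quantize } from "gifenc";

async function extractPalette(imagePath, maxColors = 6) {
  // Decode to a flat RGBA buffer; downscale first so quantization stays cheap.
  const { data } = await sharp(imagePath)
    .resize(256, 256, { fit: "inside" })
    .ensureAlpha()
    .raw()
    .toBuffer({ resolveWithObject: true });

  // gifenc's quantizer reduces the RGBA buffer to at most maxColors colors.
  const palette = quantize(new Uint8ClampedArray(data), maxColors);

  // Convert the [r, g, b] triplets to hex strings for display.
  return palette.map(
    ([r, g, b]) =>
      "#" + [r, g, b].map((v) => v.toString(16).padStart(2, "0")).join("")
  );
}

extractPalette("./sunset.png").then((colors) => console.log(colors));
```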

DesLauriers has posted his code on GitHub; it requires a local Stable Diffusion installation and Node.js. It's a bleeding-edge prototype at the moment that requires some technical skill to set up, but it's also a noteworthy example of the unexpected graphical innovations that can come from open source releases of powerful image synthesis models. Stable Diffusion, which went open source on August 22, generates images from a neural network that has been trained on tens of millions of images pulled from the Internet. Its ability to draw from a wide range of visual influences translates well to extracting color palette information. Other palette examples DesLauriers provided include "Tokyo neon," which suggests colors from a vibrant Japanese cityscape, "living coral," which echoes a coral reef with deep pinks and blues, and "green garden, blue sky," which suggests a saturated pastoral scene. In a tweet earlier today, DesLauriers demonstrated how different quantization methods (reducing the vast number of colors in an image down to just a handful that represent the image) could produce different color palettes.
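The quantization step is where much of a palette's character comes from. As a generic illustration of that point (these are not DesLauriers' actual methods), the sketch below runs two deliberately naive quantizers over the same flat RGBA buffer: one keeps the most popular posterized color bins, the other averages every pixel that falls into a bin. On the same image they usually return noticeably different palettes.

```js
// Two deliberately naive quantizers, to illustrate (not reproduce) how the
// choice of method changes the palette extracted from identical pixels.
// `pixels` is a flat RGBA buffer like the one decoded in the earlier sketch.

// Method A: posterize each channel to 3 bits and keep the most frequent bins.
function popularityPalette(pixels, maxColors = 6) {
  const counts = new Map();
  for (let i = 0; i < pixels.length; i += 4) {
    const key =
      ((pixels[i] >> 5) << 6) | ((pixels[i + 1] >> 5) << 3) | (pixels[i + 2] >> 5);
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, maxColors)
    .map(([key]) => [((key >> 6) & 7) * 36, ((key >> 3) & 7) * 36, (key & 7) * 36]);
}

// Method B: average every pixel that falls into the same coarse bin, which
// tends to give softer, more blended colors than method A.
function averagedPalette(pixels, maxColors = 6) {
  const bins = new Map(); // key -> [sumR, sumG, sumB, count]
  for (let i = 0; i < pixels.length; i += 4) {
    const key =
      ((pixels[i] >> 5) << 6) | ((pixels[i + 1] >> 5) << 3) | (pixels[i + 2] >> 5);
    const bin = bins.get(key) || [0, 0, 0, 0];
    bin[0] += pixels[i];
    bin[1] += pixels[i + 1];
    bin[2] += pixels[i + 2];
    bin[3]++;
    bins.set(key, bin);
  }
  return [...bins.values()]
    .sort((a, b) => b[3] - a[3])
    .slice(0, maxColors)
    .map(([r, g, b, n]) => [Math.round(r / n), Math.round(g / n), Math.round(b / n)]);
}

// Tiny synthetic buffer (two reddish, two bluish pixels) just to show the two
// methods disagreeing on the exact colors they return.
const demo = new Uint8Array([
  250, 40, 30, 255, 240, 60, 50, 255, 20, 40, 220, 255, 30, 60, 230, 255,
]);
console.log(popularityPalette(demo, 2), averagedPalette(demo, 2));
```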

This discussion has been archived. No new comments can be posted.


Comments:
  • Generating an image from a prompt just so you can grab the color histogram? That's so silly. Maybe it gives a better result than taking the first hit from a Google image search, which would take a small fraction of the resources, but I doubt it.
    • I'm not going to mod you down; I understand your opinion and where it comes from, but I would disagree. I expect a lot of artists, both fine art and graphic professionals, would hear this news and say "I'm not sure HOW that will be useful, but I bet some people will do some cool stuff with that."
      Sure, using the entire Stable Diffusion data set for such a limited tool is technically overkill, but it's very interesting that it can do this so (relatively) easily, and it does make me think about the possibilities.

      • by EmoryM ( 2726097 )
        No, I have seen too many posts lambasting cryptocurrency for its power usage; having an AI generate a coherent image just so you can throw the image away and extract the top-N colors is a waste of technology and energy, and it's pretty far from being useful or cool. Don't do me any favors; mod me down if you want to. Google image search "tokyo neon", grab the first result, extract a palette, done. Almost free.
        • by Rei ( 128717 )

          What about generating a 64x64 image, or even smaller? That would take basically zero time.

          • The OP's implementation is still far better: it will always be faster and may give you more results. The Stable Diffusion method can work offline, which is the only plus side I see (30s to generate an image on an M1), but who can't work online? Also, both methods ignore copyrights...
            • by narcc ( 412956 )

              What copyright issues?

            • by Rei ( 128717 )

              There are no "images" being copied with SD. The amount that SD knows about the average image in its training dataset is less than 1 byte each.

              • Sure, they are not copied, but information is extracted from the training set and encoded in the network. SD can generate images with a kind of blurry watermark because it has been trained on watermarked images. I don't know what the copyright implications of this are; I think it is currently an open problem. The same thing applies to GitHub's Copilot for code. Maybe people should have the choice to disallow the use of their content for machine learning? Maybe it could be opt-in only?
    • by narcc ( 412956 )

      Yeah, it's absolutely ridiculous. There are much better ways to achieve the same end.

      I don't even think you need the image generation step, or an image search in its place. There are a ton of named colors and palettes around already, which are probably closer to what you'd expect than whatever you'd get from some random image. A simple lookup matching search words to named colors, averaging similar colors until you get down to the number of colors you want, would probably work just fine (roughly sketched after this thread).

      If it absolutely mus
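For what it's worth, here is a rough sketch of the approach described in the comment above: match prompt words against a table of named colors, then repeatedly average the two closest matches until only the desired number remain. The tiny NAMED_COLORS table is a hand-picked stand-in for a real color-name database (for example, the full CSS named-color list).

```js
// Rough sketch of the named-color idea: keyword lookup plus iterative merging.
const NAMED_COLORS = {
  tomato: [255, 99, 71],
  coral: [255, 127, 80],
  salmon: [250, 128, 114],
  gold: [255, 215, 0],
  orchid: [218, 112, 214],
  skyblue: [135, 206, 235],
  seagreen: [46, 139, 87],
  slategray: [112, 128, 144],
};

// Step 1: keep every named color whose name shares a word with the prompt.
function matchColors(prompt) {
  const words = prompt.toLowerCase().split(/\W+/).filter(Boolean);
  return Object.entries(NAMED_COLORS)
    .filter(([name]) => words.some((w) => name.includes(w)))
    .map(([, rgb]) => rgb);
}

// Step 2: repeatedly merge (average) the two closest colors until only `max`
// remain; this is the "averaging similar colors" step described above.
function reducePalette(colors, max = 5) {
  const dist = (a, b) => a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0);
  const out = colors.map((c) => [...c]);
  while (out.length > max) {
    let bi = 0, bj = 1;
    for (let i = 0; i < out.length; i++) {
      for (let j = i + 1; j < out.length; j++) {
        if (dist(out[i], out[j]) < dist(out[bi], out[bj])) { bi = i; bj = j; }
      }
    }
    const merged = out[bi].map((v, k) => Math.round((v + out[bj][k]) / 2));
    out.splice(bj, 1); // drop the second of the closest pair...
    out[bi] = merged;  // ...and replace the first with their average
  }
  return out;
}

console.log(reducePalette(matchColors("coral, gold and salmon sunset"), 2));
// e.g. [ [ 253, 128, 97 ], [ 255, 215, 0 ] ]
```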

  • by John.Banister ( 1291556 ) * on Tuesday September 13, 2022 @04:35PM (#62879013) Homepage
    It sounds like this could work in reverse as well, and that could be very useful for people who make paints, lipsticks, and other things that come in a variety of colors. Half of the color names these companies use sound like they're made up by an AI already. Getting color names theoretically connected to the zeitgeist, while also having the cover of being able to say "marketing had an AI make up the names for us," sounds like something these manufacturers would value.
  • Comment removed based on user account deletion
  • Yet, we know how useful it is. LOL!
  • by Big Hairy Gorilla ( 9839972 ) on Tuesday September 13, 2022 @08:46PM (#62879533)
    Might as well get the AI to paint your next painting; then, on top of not having to choose colours, you don't have to paint ... or think. Then sell it as an NFT. Do nothing and make money ... that's what everybody wants, amirite?
  • This seems like the logical way to splice two existing processes (AI image generation and palette extraction) together. But for any useful purpose, wouldn't I want an additional step of interactivity when picking the image result from AI generation?

  • OK, AI , grab this:

    Her rouge lips contrasted with the deep pink of her hard nipples as she took his big black....

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...