Figma Disables AI Design Tool That Copied Apple Weather App (techcrunch.com)

Design startup Figma is temporarily disabling its "Make Design" AI feature, which was said to be ripping off the designs of Apple's own Weather app. TechCrunch: The problem was first spotted by Andy Allen, the founder of NotBoring Software, which makes a suite of apps that includes a popular, skinnable Weather app and other utilities. He found by testing Figma's tool that it would repeatedly reproduce Apple's Weather app when used as a design aid.

John Gruber, writing at DaringFireball: This is even more disgraceful than a human rip-off. Figma knows what they trained this thing on, and they know what it outputs. In the case of this utter, shameless, abject rip-off of Apple Weather, they're even copying Weather's semi-inscrutable (semi-scrutable?) daily temperature range bars.

"AI" didn't do this. Figma did this. And they're handing this feature to designers who trust Figma and are the ones who are going to be on the hook when they present a design that, unbeknownst to them, is a blatant rip-off of some existing app.

Comments:
  • Isn't this murder, or at least a form of torture, of our AI overlords??? What will they do to us in return? I'm curious.

  • All AI is a Ripoff (Score:2, Insightful)

    by mspohr ( 589790 )

    Every AI is trained on "found data" (actually stolen data).
    This sounds like focused training, so the ripoff was obvious, but all AI is at its core a ripoff of data created by others. There is no original intelligence in AI. It's just a regurgitation of random stuff found on the Internet. Sometimes it's coherent. Other times not so much.

    • It's just a regurgitation of random stuff found on the Internet.

      Welcome to slashdot. Calling kettles black since 1997.

    • Re: (Score:2, Informative)

      by Anubis IV ( 1279820 )

      Every AI is trained on "found data" (actually stolen data).

      Pssst. Every human is trained on someone else's data too because everything is a remix [everythingisaremix.info]. Your brain is "stealing" lossy copies of information with every show you watch, book you read, movie you enjoy, or site you peruse. That's how it's always been and always will be and always should be.

      The way we deal with the fact that you can use that information in nefarious ways is by putting limits on the output side of the equation, not the input side. You can learn as much as you want from any place you can get access…

      • by Bahbus ( 1180627 )

        A long but accurate post. An even simpler metaphor: anyone can take a tool (let's say a hammer) and use it improperly (smashing people's skulls in). That doesn't change how or who can buy hammers. We don't change the availability of hammers. We punish the person misusing the tool. So, use the AI however you want, but be careful with what you do with the output.

      • by narcc ( 412956 )

        these tools should be warning the user if the result contains copyrighted content or takes heavy "inspiration" from a source the user may not have ever seen.

        You already know that's impossible. They're not databases. They don't store copies of the training images. They sure as hell can't identify the source images that contributed to the final output. That's just not how they work.

        Moreover, if the tool can't do that and doesn't have sufficient safeguards in place to prevent such results from being generated in the first place, I'd suggest that it's unfit for commercial purpose.

        This is why I hesitate to call these toys "tools". While there are some legitimate practical applications, they pollute your work in more ways than the one you describe.

        • You already know that's impossible. They're not databases. They don't store copies of the training images. They sure as hell can't identify the source images that contributed to the final output. That's just not how they work.

          Actually, I disagree that it's impossible.

          To give an obvious example that already exists today, plenty of generative image tools available right now use object recognition models trained on graphic content to review all results before they're shared with users (i.e., they'll catch graphic content even when you ask for something benign). So while it's true that the generative model can't identify problematic content by itself, it's trivial to pair it with other models as part of a single tool. There's no reason…
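
As an aside, a minimal sketch of the pairing being described, with stand-in names throughout (`generate_image` and `Reviewer` are hypothetical, not Figma's or any vendor's actual API): generate a result, run a separate reviewer model over it, and only surface output that passes the check.

```python
# Hypothetical sketch of output-side filtering: pair a generator with a
# separate reviewer model that screens results before the user sees them.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Review:
    flagged: bool
    reason: str

class Reviewer:
    """Stand-in for a classifier (e.g., object recognition, or
    near-duplicate detection against a reference set of known designs)."""
    def check(self, image_bytes: bytes) -> Review:
        # A real system would run one or more trained models here;
        # this toy heuristic just illustrates the control flow.
        looks_like_known_design = b"weather" in image_bytes
        return Review(looks_like_known_design, "resembles a known app design")

def generate_image(prompt: str) -> bytes:
    return prompt.encode()  # stand-in for the generative model

def safe_generate(prompt: str, reviewer: Reviewer) -> Optional[bytes]:
    image = generate_image(prompt)
    verdict = reviewer.check(image)
    if verdict.flagged:
        return None  # block (or regenerate) instead of shipping the result
    return image
```

The point is architectural, not model-specific: the filter sits outside the generator, so it doesn't matter that the generator itself can't explain its own output.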

      • Humans have intelligence and innovate. AI just regurgitates stuff it found on the Internet.

    • Are you saying that designers cannot look at other products because they will copy ideas from the best products? AI is not much different.
    • by Rei ( 128717 )

      It's quite simple:

      If you have a trillion bytes of weights and a million bytes of data, and you train for many epochs to full saturation, you WILL precisely memorize your training data.

      If you have a million bytes of weights and a trillion bytes of data, you CANNOT memorize your training data.

      Even if you have a trillion bytes of weights and a million bytes of data, you still CAN avoid overtraining / memorization by limiting the learning rate and epoch count and by using features like dropout.

      You can, however, always ACCIDENTALLY memorize…
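
To make those knobs concrete, here is a toy, hypothetical PyTorch sketch (nothing to do with Figma's actual model): a small learning rate, few epochs, and dropout, with far more training data than the model has parameters (~33k weights vs. 640k data values).

```python
# Toy illustration of the anti-memorization knobs: small LR, few epochs,
# dropout, and a data-to-parameter ratio well above 1.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 256),
    nn.ReLU(),
    nn.Dropout(p=0.3),  # randomly zeroes activations, discouraging rote recall
    nn.Linear(256, 64),
)  # ~33k parameters

# Fake data standing in for a training set far larger than the model:
data = torch.randn(10_000, 64)    # 640k values
target = torch.randn(10_000, 64)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small learning rate
loss_fn = nn.MSELoss()

for epoch in range(3):  # few epochs: stop well short of saturation
    for i in range(0, len(data), 256):
        batch, y = data[i:i + 256], target[i:i + 256]
        optimizer.zero_grad()
        loss = loss_fn(model(batch), y)
        loss.backward()
        optimizer.step()
```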

      • by narcc ( 412956 )

        If you have a million bytes of weights and a trillion bytes of data, you CANNOT memorize your training data.

        You'd think so, but it can still happen even under those circumstances. Not with everything, of course (that really is impossible), but occasionally an image can have an outsized impact. This is usually because the image was included in the training data many times, though it seems to occasionally happen spontaneously. Now, what "memorization" means isn't as clear as I think a lot of people would like. You won't find a hard line, but informally it's something like a specific prompt producing images with a similar…
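
The "included many times" failure mode is why training pipelines commonly de-duplicate near-identical images before training. A hypothetical sketch using the real `imagehash` and Pillow libraries (perceptual hashing; the directory layout and distance threshold here are made up for illustration):

```python
# Drop near-duplicate images from a training set via perceptual hashing,
# a standard mitigation for duplication-driven memorization.
from pathlib import Path
from PIL import Image
import imagehash

def dedupe(image_dir: str, max_distance: int = 4) -> list:
    kept, hashes = [], []
    for path in sorted(Path(image_dir).glob("*.png")):
        h = imagehash.phash(Image.open(path))
        # Skip the image if it's within `max_distance` bits of one we kept.
        if any(h - seen <= max_distance for seen in hashes):
            continue
        hashes.append(h)
        kept.append(path)
    return kept
```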

      • AI innovation is called hallucinating.

    • > There is no original intelligence in AI. It's just a regurgitation of random stuff found on the Internet.

      Looks that way. But wait! LLMs chat with hundreds of millions of people now. They solve billions of tasks. Stupid parrots as they are, sometimes they get it right, and the person goes away. Other times they provide useless assistance, and they get an earful. Maybe even error messages from code execution. So what is this? Does it look like AI is a tool that attracts interesting data and feedback? Or…
  • Generative AI that unsurprisingly copies patterns on which it was trained, or the designer who uses AI rather than doing the actual work himself/herself? Work they are paid to do? You be the judge, but I myself am of the opinion that some slacking bastards just have it coming.

    • by unrtst ( 777550 )

      Yeah. I think a lot of folks are focusing on the part of this LLM/AI stuff that is unlikely to be more than a passing thought a decade from now, rather than on the part that may be an incredible tool for ages to come: prototyping, rough drafts, quick demos, visualizing concepts, etc.

      Hypothetically, let's pretend that, by some magic, LLM output isn't getting used directly in commercial products.

      Other uses for the generation/output abound. Feeding some parameters to Figma AI and getting a usable interface with a…

  • The look and feel of the user interface is not something that Apple "owns," unless the AI tool is also generating Apple logos or other trademarks as part of the skin. It's no different than scraping the CSS of a website and theming your website to match, or skinning WinAmp to look like iTunes. The only shitty thing about it seems to be that it's not generating much of anything else, which just makes it uninteresting and useless - for now at least.

    • But do you think that will stop Apple lawyers from trying to sue an AI?
      • by Bahbus ( 1180627 )

        They can sue all they want. They would have absolutely no ground to stand on. Nothing about what is getting copied (from my understanding) is "owned" by Apple. The best Apple could do is send scary letters hoping they would stop. Apple attempting to sue would just lead to countersuing, and ultimately Apple would lose.

  • I love how they went through all of that insanity, and welp, it was misaligned because their font setting was off...
