Vibe Coded AI App Generates Recipes With Very Few Guardrails 70

An anonymous reader quotes a report from 404 Media: A "vibe coded" AI app developed by entrepreneur and Y Combinator group partner Tom Blomfield has generated recipes that gave users instructions on how to make "Cyanide Ice Cream," "Thick White Cum Soup," and "Uranium Bomb," using those actual substances as ingredients. Vibe coding, in case you are unfamiliar, is the new practice where people, some with limited coding experience, rapidly develop software with AI-assisted coding tools without overthinking how efficient the code is as long as it's functional. This is how Blomfield said he made RecipeNinja.AI. [...] The recipe for Cyanide Ice Cream was still live on RecipeNinja.AI at the time of writing, as are recipes for Platypus Milk Cream Soup, Werewolf Cream Glazing, Cholera-Inspired Chocolate Cake, and other nonsense. Other recipes for things people shouldn't eat have been removed.

It also appears that Blomfield has introduced content moderation since users discovered they could generate dangerous or extremely stupid recipes. I wasn't able to generate recipes for asbestos cake, bullet tacos, or glue pizza. I was able to generate a recipe for "very dry tacos," which looks not very good but not dangerous. In a March 20 blog post on his personal site, Blomfield explained that he's a startup founder turned investor, and while he has experience with PHP and Ruby on Rails, he has not written a line of code professionally since 2015. "In my day job at Y Combinator, I'm around founders who are building amazing stuff with AI every day and I kept hearing about the advances in tools like Lovable, Cursor and Windsurf," he wrote, referring to AI-assisted coding tools. "I love building stuff and I've always got a list of little apps I want to build if I had more free time."

After playing around with them, he wrote, he decided to build RecipeNinja.AI, which can take a prompt as simple as "Lasagna" and generate an image of the finished dish along with a step-by-step recipe, which can use ElevenLabs's AI-generated voice to narrate the instructions so the user doesn't have to interact with a device with their tomato-sauce-covered fingers. "I was pretty astonished that Windsurf managed to integrate both the OpenAI and ElevenLabs APIs without me doing very much at all," Blomfield wrote. "After we had a couple of problems with the OpenAI Ruby library, it quickly fell back to a raw Ruby HTTP client implementation, but I honestly didn't care. As long as it worked, I didn't really mind if it used 20 lines of code or two lines of code." Having some kind of voice-controlled recipe app sounds like a pretty good idea to me, and it's impressive that Blomfield was able to get something up and running so fast given his limited coding experience. But the problem is that he also allowed users to generate their own recipes with seemingly very few guardrails on what kind of recipes are and are not allowed, and that the site kept those results and showed them to other users.
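The "raw Ruby HTTP client implementation" Blomfield describes is easy to picture. Here is a minimal sketch using only Ruby's standard library; the endpoint is OpenAI's standard chat-completions URL, but the model name and the `build_openai_request` helper are illustrative assumptions, not code from RecipeNinja.AI.

```ruby
require "net/http"
require "json"
require "uri"

# Build a chat-completions request by hand -- the kind of fallback an
# AI coding tool might emit when a client library misbehaves.
def build_openai_request(prompt, api_key:)
  uri = URI("https://api.openai.com/v1/chat/completions")
  req = Net::HTTP::Post.new(uri)
  req["Authorization"] = "Bearer #{api_key}"
  req["Content-Type"]  = "application/json"
  req.body = JSON.generate(
    model: "gpt-4o-mini",  # illustrative model choice
    messages: [{ role: "user", content: "Write a recipe for: #{prompt}" }]
  )
  [uri, req]
end

# Sending it is a one-liner once the request is built:
# uri, req = build_openai_request("Lasagna", api_key: ENV.fetch("OPENAI_API_KEY"))
# res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
# puts JSON.parse(res.body).dig("choices", 0, "message", "content")
```

As the quote suggests, this is barely longer than the library call it replaces, which is why "I didn't really mind" is a plausible reaction.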

Comments Filter:
  • by OrangeTide ( 124937 ) on Thursday April 03, 2025 @12:03AM (#65277937) Homepage Journal

    Like having it subtly make nitroglycerin through a combination of some of the intermediate steps. And I assure you, if you make a honey cornbread recipe with nitroglycerin, it'll have quite a kick. Probably kick itself right out of the oven. Don't worry, if you blow up your hands the AI companies won't be held accountable.

    • by Austerity Empowers ( 669817 ) on Thursday April 03, 2025 @02:11AM (#65278031)

      I recommend a burrito. Everyone expects some post-consumption outbursts, it's just a matter of magnitudes at that point.

    • by mysidia ( 191772 )

      I am in favor of legislation introducing strict liability for providers of computer-based subscription services.

      If you pay money for a service, Then you are entitled to a fundamental right as consumer that the service performs as advertised and does not kill you, And any agreement that tries to have you waive those rights should be void.

      Also, binding agreements attempting to regulate the process of filing lawsuits within a Terms of service or agreement presented in order to procure products or service (o

      • I am in favor of legislation introducing strict liability for providers of computer-based subscription services.

        If you pay money for a service, Then you are entitled to a fundamental right as consumer that the service performs as advertised and does not kill you, And any agreement that tries to have you waive those rights should be void.

        I agree with you, so long as your definition of "pay money" is stretched to include "have personal data collected" and "are subjected to ANY advertising" and "your interactions may be used to train AI".

        All those, and probably some additional implicit and/or hidden value exchanges which I haven't yet thought of, should be counted as "money" when it comes to assessing liability. Yes, there are slippery-slope arguments galore there. The devil is in the details, and averting our eyes doesn't make the devil go away.

      • Interesting. I'm on the opposite side of this. The service should not censor at all and it should come with zero liability. If you do something stupid with AI and it kills you, that's on you.

        The adults don't need or want Fisher Price AI.

        AI is just a tool. You don't blame tools for a person's bad choices. At least reasonable people don't.

    • This app was unhinged because they did not add a content control feature to it, not because it was 'vibe coded'.
  • Let's be clear about what's happening when people rush to point out that an AI might generate a dangerous recipe, a message board allows certain words starting with N that some find scary, or a website hosts images deemed too close to a perceived line. While genuine concern exists, there's almost always something else at play: a quest for moral superiority and social control. Finding such flaws allows individuals to position themselves as virtuous guardians, publicly shaming creators or platforms and demons
  • by Gravis Zero ( 934156 ) on Thursday April 03, 2025 @12:23AM (#65277965)

    Does the cyanide ice cream taste good?

  • or is proofreading not a thing anymore
  • by locater16 ( 2326718 ) on Thursday April 03, 2025 @01:02AM (#65277985)
    Vibe coding is just copy pasting from substack but now at random until maybe something works.
    • by Entrope ( 68843 )

      Don't you mean Stack Exchange? Substack seems much more like it's for creative writing majors who were put out of work by AI.

  • by Anonymous Coward

    One generation from now we'll all laugh about the AI cult, just like we did with the Agile cult, the NFT cult or the MCSE cult.

  • I guess that's one way to go about it... add the safety mechanisms AFTER the fact. But I'm curious, and not sure what the answer is for these questions; they are rather basic and will depend on what society is comfortable with, and the laws at the time in the various jurisdictions.

    -Should the answers from an AI prompt be censored? I mean, if you go online and search the same subject you get answers, but here the AI is filtering results at the end stage, where it's interfacing with the user, and blocking that access.

    • by DarkOx ( 621550 )

      People worried about GenAI 'safety' are just busybodies who have spotted something new-ish and want to feel important by telling others what to do.

      All of it amounts to trading the orthodoxy of the majority for some other orthodoxy. It is only 'safer' for those that happen to be on 'team some other' at the time of deployment.

      As you say these things are trained on information that is out there now and not "controlled". These models have been trained on either published materials or internet content. So there

    • Well, to be fair the safety issues that bite you are the ones you didn't foresee. "After the fact" is when those issues become known.

      But, yeah, worrying about AI telling someone some potentially dangerous thing that they could have looked up anyhow seems dumb.

  • Surely, "Werewolf Cream" is just another name for "Thick White Cum Soup".
  • Who cares? (Score:5, Insightful)

    by nanoakron ( 234907 ) on Thursday April 03, 2025 @04:55AM (#65278141)

    Who gives a s**t?

    So some dumbass wants to actually make cyanide ice cream? Let them. Do you really think someone that dumb needs step by step instructions to eat their cyanide?

    Who is protected by content moderating dumb stuff like this?

    In the internet of the 90s and 00s people would just have a mild chuckle and move on. Now it makes /. front page.

    • The problem is that that "dumbass" may be a kid that didn't know any better. Parents can't be everywhere all the time.
      • So we should bubble wrap the world for everyone "for the children"? Sounds like a shitty existence to me.

        • Excuse me? Your existence would be shitty because you can't have a site that gives you a recipe for poisonous ice cream? You have strange requirements for life.
        • So we should bubble wrap the world

          No one is proposing bubble-wrapping anything. These are very basic safety measures that are far less onerous than virtually everything around you right now. You may not realise it because presumably you licked too much lead paint as a kid.

      • Really? What kid has such ready access to cyanide?
        • "Cyanides are found in substantial amounts in certain seeds and fruit stones, e.g., those of bitter almonds, apricots, apples, and peaches.[5]"
          • Well, bitter almonds aren't really available, and you'd need a LOT of apple or apricot seeds to do the trick. Do we really think some kid is going to put a few pounds of apple seeds in a grinder and then chemically extract the cyanide? And that they couldn't have just looked that up via google or a chemistry textbook? Come on.
            • AI would probably explain the process step by step. If it doesn't stop itself from recommending cyanide ice cream, then why not how to make the cyanide?
    • Thank you

      The internet is not some fucking safe space. (Western) People need to stop being so incredibly fragile.

    • Do you really think someone that dumb needs step by step instructions to eat their cyanide?

      Presumably someone that dumb wouldn't eat cyanide unless someone suggested it. That's what guardrails are for.

  • Far worse than a lack of guardrails is the ubiquitous attitude in media and politics that the average voter is no better than a child. "Think of the sheeple, what if they hear about the cum soup!"

    It's very fashionable to outwardly care about democracy while also believing that the average prole can't be trusted with access to kitchen knives, unsanitized chatbots, unapproved thought etc. Something is deeply rotten in how journalists and policy people are taught; the "liberal" in "liberal arts" is gone.

    In a s

    • by gweihir ( 88907 )

      Have you looked at the average person? Kitchen knives provide enough immediate feedback that even the average moron learns to be careful, but look at what happens when you let them vote.

        • Right, if the elite doesn't believe that the median voter can be trusted to use the Internet without a babysitter, why would they believe that voting is a good thing? You don't allow morons near knives, and you don't allow them near voting booths either. If the moronic populace cannot be denied access to the voting booth, the results are inherently illegitimate, and ignoring them in favor of doing whatever the elite wants is the only way to save democracy :^)

        • I suspect both sides feel they can manipulate morons into voting for their side. Sadly, they would be right.

          • by gweihir ( 88907 )

            Indeed. Some sides may have better policies, some may have worse, but that is a minor factor in who gets voted for. Well, the current US administration may break things so badly that the voters will still remember 4 years later (or maybe much longer), but that is a rare exception.

    • I would like to just blame the liberals as well, but there are idiot busybodies on the conservative side too. Both sides are authoritarian assholes that want to control what everyone does. For reasons.

      This is not to say that all conservatives and all liberals are authoritarian assholes. Just the ones that constantly want to treat everyone like children.

  • by gweihir ( 88907 ) on Thursday April 03, 2025 @06:55AM (#65278223)

    Have clowns code with "funny" tools and methods and get hilarious results.

  • Any freshman trying to use this programming technique would be flunked in Intro to Programming 101. Trying to use AI to write real code will only result in very poor results. LLMs don't have any semantic capability to know "why" things are. Using simple phrases as directions to AI does not communicate enough information for real requirements. Your AI-generated code is just recollections of what the LLM has been trained on, which you have not seen. This is such a bad idea. It's just one more stupid way for d
    • To play devil's advocate: what if the output does in fact return the desired results? I mean, sure, I don't want it to take 100 lines of code when it could take 10, but if the app does work, does that matter?

      Obviously efficiency does matter but try telling that to the vibe coder.

  • peggy that's the recipe for mustard gas!

  • I was able to generate a recipe for "very dry tacos," which looks not very good but not dangerous.

    "Oh yes, well I managed to transmit a new entry for 'very dry tacos' off to the editor. He had to trim it a bit, but it's still an improvement."
    "And what does it say now?"
    "Mostly harmless."

  • No question they ARE out to destroy us humans. Their app is just a little bit ahead of its time. That phase won't come until we have living sentient robots to actually replace the humans.

  • It doesn't much matter whether you use vibe coding or not in this case. You can either add content moderation or not, and it makes absolutely no difference whether vibe coding or hand coding was involved. The AI didn't decide not to include content moderation; the guy asking for the code did. So there, that's it. End of story. Next.
  • Not sure what the point of showing screenshots of these supposedly deleted recipes is, if the author really thought this was important news.
  • So, I guess "vibe" is the new "cozy" in the warrens of marketing rabbits. Can we implement some sort of filter that blocks any article with such words in the title? I think it will save us all a lot of time and trouble.
  • I am always surprised that people seem to think guardrails would be a good thing on tools that should work for them instead of against them.
    Also, missing guardrails on the models have nothing to do with vibe coding, but with the model choice.
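For what it's worth, the "content moderation" several commenters argue about can start as something trivial. Below is only a toy sketch of an ingredient blocklist in Ruby; the constant, method name, and blocklist contents are invented for illustration, and a real system would use an actual moderation model or API rather than keyword matching.

```ruby
# Toy guardrail: reject a generated recipe if any ingredient matches a
# blocklisted substance. Purely illustrative -- keyword matching like this
# is easy to evade and is no substitute for real moderation.
BLOCKED_INGREDIENTS = %w[cyanide uranium asbestos bleach glue].freeze

def recipe_allowed?(ingredients)
  ingredients.none? do |item|
    BLOCKED_INGREDIENTS.any? { |bad| item.downcase.include?(bad) }
  end
end
```

A check like this would run before a user-generated recipe is saved, so anything failing it is dropped instead of being published to other users, which is the gap the article describes.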
