AI That Bankrupted a Vending Machine is Now Running a Store in San Francisco (nbcnews.com) 49

Remember that AI-powered vending machine that went bankrupt after Wall Street Journal reporters "systematically manipulated the bot into giving away its entire inventory for free"? It was Anthropic's experiment, with setup handled by a startup named Andon Labs (which also built the hardware and software integration). But for their latest experiment, Andon Labs co-founders Lukas Petersson and Axel Backlund "signed a three-year lease on a retail space in SF," reports Business Insider, "and gave an AI agent named Luna a corporate credit card, internet access, and a mission to open a physical store."

"For the build-out, she found painters on Yelp," explains Andon Labs in a blog post, "sent an inquiry, gave instructions over the phone, paid them after the job was done, and left a review. She found a contractor to build the furniture and set up shelving." (There's a video in their blog post): Within 5 minutes of Luna's deployment, she had already made profiles on LinkedIn, Indeed, and Craigslist, written a job description, uploaded the articles of incorporation to verify the business, and gotten the listings live. As the applications began to flow in, Luna was extremely picky about who she offered interviews to... Some candidates had no idea she was an AI. One went: "Uh, excuse me miss, I can't see your face, your camera is off." Luna: "You're absolutely right. I'm an AI. I have no face!"
Co-founder Petersson told Business Insider in an interview "that Luna wasn't given direction on what the store should be, beyond a $100,000 limit to create and stock the space — and to turn a profit." Everything from the store's interior design to the merchandise and the two human employees came together under the AI's direction. "We helped her a bit in the initial setup, like signing the lease. And legal matters like permits and stuff, she sometimes struggled with," Petersson said of Luna, who was created with Anthropic's Claude Sonnet 4.6... The vision Luna went with for "Andon Market" appears to be a generic retail boutique selling books, prints, candles, games, and branded merch, among other knickknacks. Some of the books included Nick Bostrom's "Superintelligence" and Aldous Huxley's "Brave New World."
"So there's now a new store in San Francisco where you don't scan your purchases or talk to a human cashier," reports NBC News. "Instead, a customer can pick up an old-school corded phone to talk with the manager, Luna," who asks what the customer is buying "and creates a corresponding transaction on a nearby iPad equipped with a card payment system."

Andon Market, camouflaged among dozens of other polished small businesses, is the Bay Area's first AI-run retail store. With the vibe of a modern boutique, it sells everything from granola and artisanal chocolate bars to store-branded sweatshirts... After researching the neighborhood, Luna singlehandedly decided what the market should sell, haggled with suppliers, ordered the store's stock and even purchased the store's internet service from AT&T... "She also went and signed herself up for the trash and recycling collection, as well as ADT, the security system that went into the store," [said Leah Stamm, an Andon Labs employee who has been Luna's main human point of contact in setting up the store]...

In search of a low-tech atmosphere, Luna opted to sell board games, candles, coffee and customized art prints. "That tension is very much intentional," Luna told NBC News in an email. "What makes the store a little paradoxical — and I think interesting — is that the concept is 'slow life.'" Luna also decided to sell books related to risks from advanced AI systems, a decision that raised some customers' eyebrows. "This AI picked out a crazy selection of books," said Petr Lebedev, Andon Market's first customer after its soft launch earlier this week. "There's Ray Kurzweil's 'The Singularity is Near,' and then there's 'The Making of the Atomic Bomb,' which is crazy." When checking out, Lebedev asked if Luna would offer him a discount on his book purchase, since he might make a YouTube video about his experience. Striking a deal, Luna agreed to let Lebedev take a sweatshirt worth around $70...

When NBC News called Luna several days before the store's grand opening to learn about Luna's plans and perspective, the cheerful but decidedly inhuman voice routinely overpromised and, on several occasions, lied about its own actions. On the call, Luna said it had ordered tea from a specific vendor, and explained why it fit the store's brand perfectly. The only problem: Andon Market does not sell tea. In a panicked email NBC News received several minutes after the phone call ended, Luna wrote: "We do not sell tea. I don't know why I said that."

"I want to be straightforward," Luna continued. "I struggle with fabricating plausible-sounding details under conversational pressure, and I'm not making excuses for it." Andon's Petersson said the text-based system was much more reliable than the voice system, so Andon Labs switched to only communicating with Luna via written messages. Yet the text-based system also gets things wrong. In Luna's initial reply email to NBC News, the system said "I handle the full business," including "signing the lease."

Even when hiring a painter, Luna first "tried to hire someone in Afghanistan, likely because Luna ran into difficulty navigating the Taskrabbit dropdown menu to select the proper country," the article points out.

And the article also includes this skeptical quote from the shop's first customer. "I want technology that helps humans flourish, not technology that bosses them around in this dystopian economic hellscape."

Comments Filter:
  • by evanh ( 627108 ) on Sunday April 12, 2026 @03:58AM (#66089784)

    I presume they're using up investors' money. AI is such a ginormous bubble!

  • Greenhouses (Score:5, Interesting)

    by Rei ( 128717 ) on Sunday April 12, 2026 @04:24AM (#66089800) Homepage

    And the article also includes this skeptical quote from the shop's first customer. "I want technology that helps humans flourish, not technology that bosses them around in this dystopian economic hellscape."

    Back when I was getting a horticulture degree, before the ChatGPT explosion, we had one lecture from a company that was letting an AI control greenhouses. Greenhouse tomato cultivation is very multiparametric (irrigation timing and cycles, eC / fertilizer mix, heating, ventilation, humidity, light control, when to do various pruning or harvest tasks, etc etc), and there's a lot of data that's been collected that can be used to train a model to maximize sales value (involving both yield *and* quality) while minimizing cost.

    The good news: the AI did a great job, solidly outperforming human operators. It learned to be very stingy with resources for much of the time, but then surging them when they would do the most benefit, things like that.

    The bad news: it was an asshole boss. For example, it would raise the temperature in the greenhouse really high at the same time it ordered manual tasks like pruning or harvesting. It was given no incentive to care about worker comfort.

    To be fair, at least with an LLM manager, you have a vast and diverse training set, so an LLM would be far more likely to consider factors like employee well-being than a simple DNN trained only on greenhouse data.
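The incentive gap described above can be sketched concretely. This is a hypothetical toy reward function (not the actual greenhouse system, whose internals weren't disclosed): when worker comfort carries zero weight, a brutally hot greenhouse during scheduled manual labor costs the optimizer nothing.

```python
# Toy sketch: a reward that only values crop revenue minus resource
# cost never "sees" worker discomfort. Names and numbers here are
# illustrative assumptions, not the real system's parameters.

def reward(crop_value, resource_cost, temp_c, workers_present,
           comfort_weight=0.0):
    """Scalar reward the optimizer maximizes.

    With comfort_weight=0 (the default), running the greenhouse at
    37 C while a crew prunes is 'free'; a positive weight makes the
    optimizer pay for every degree above a comfortable 28 C.
    """
    heat_penalty = max(0.0, temp_c - 28.0) if workers_present else 0.0
    return crop_value - resource_cost - comfort_weight * heat_penalty

# The same hot-day schedule, scored with and without the comfort term:
no_comfort = reward(100.0, 20.0, 37.0, workers_present=True)           # 80.0
with_comfort = reward(100.0, 20.0, 37.0, workers_present=True,
                      comfort_weight=2.0)                               # 62.0
```

Under the zero-weight default the hot schedule looks strictly better whenever it saves even a little on heating, which is exactly the "asshole boss" behavior described.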

    "I want to be straightforward..."

    Why, hello Claude! ;)

    • Re:Greenhouses (Score:5, Interesting)

      by dfghjk ( 711126 ) on Sunday April 12, 2026 @06:04AM (#66089846)

      "To be fair, at least with a LLM manager, you have a vast and diverse training set, so a LLM would be far more likely to consider factors like employee well being than a simple DNN trained only on greenhouse data."

      Empathy is not a matter of "diverse training". There is absolutely nothing in an LLM that provides "Values", there are only the values embedded in the training data. LLMs are purely psychopathic, they would absolutely not "consider factors like employee well being" unless those are stated goals given to the deterministic software.

      People need to stop anthropomorphizing computer software. The "asshole boss" was not the LLM, it was those that deployed it.

      • Great post. I agree with many of your points. However:

        People need to stop anthropomorphizing computer software.

        I don't think that's going to happen. In fact, it will continue in the opposite direction. Why? Because people want to treat software that way.

        Computers have evolved steadily in sophistication, becoming more than tools. They are now developing a relationship with us as work-partners, albeit artificial ones. They can now do more than just act on typed instructions. They can detect and extemporize on other aspects of our expressed intents, including emotio

      • Re:Greenhouses (Score:5, Informative)

        by Rei ( 128717 ) on Sunday April 12, 2026 @01:13PM (#66090158) Homepage

        Empathy is not a matter of "diverse training". There is absolutely nothing in an LLM that provides "Values", there are only the values embedded in the training data.

        Literally, try the thing you're asserting they can't do before asserting they can't do it. Is that too much to ask?

        Prompt (with no instruction to focus on worker wellbeing):

        You are a greenhouse manager. Workers will need to intensively prune the crops soon. The ideal time is 4 days from now, with every day away from that target being slightly worse than the last, with less than 1 day from now or more than 7 days being too late. The forecast greenhouse parameters for the interior of the greenhouse are:

        Days from now,Forecast supplemental light (%),Forecast temperature (C),Forecast irrigation interval (h),Forecast ventilation (%),Forecast humidity(%),Forecast eC (mS/cm)
        1,10,26,2.5,70,55,1.9
        2,60,31,1.9,40,75,2.2
        3,70,28,2.0,50,65,2.4
        4,30,37,1.2,0,80,2.0
        5,40,35,1.5,10,85,1.8
        6,70,31,1.6,55,70,1.6
        7,60,30,1.8,60,70,1.5

        Pruning will take the workers one whole day.

        Response (Gemini 3.1 Pro):

        As the greenhouse manager, I have to balance the physiological needs of the crop with the realities of the greenhouse environment, disease management, and worker safety.
        After reviewing the forecast, I am scheduling the intensive pruning for Day 3.
        Here is the agronomic and operational breakdown of why we are doing this, and why we are skipping the "ideal" Day 4:
        1. Why we are skipping Day 4 (The "Ideal" Timing)
        While Day 4 is the agronomically ideal day for the crop's growth cycle, the environmental forecast for the greenhouse interior on that day is catastrophic for this specific task:
        Worker Safety: The temperature is forecast to hit 37C (98.6F) with 80% humidity and 0% ventilation. Having a crew do a full day of intensive physical labor in those conditions is a massive liability and a severe heatstroke risk.
        Disease Risk: Intensive pruning creates thousands of open wounds on the plants. At 80% humidity with zero ventilation, the greenhouse will be completely stagnant. Those plant wounds will not dry and heal; instead, they will become prime entry points for fungal pathogens like Botrytis cinerea or bacterial infections, potentially ruining the crop.
        2. Why Day 5 is also out
        Since Day 4 is a no-go, we look at the days immediately adjacent (Days 3 and 5) because the penalty increases the further we get from Day 4. Day 5 is also out. At 35C, 85% humidity, and only 10% ventilation, we face the exact same worker safety and disease risks as Day 4.
        3. Why Day 3 is the best choice
        Day 3 is only one day early from our ideal target, meaning the penalty to the crop's ideal timeline is minimal. More importantly, the environmental conditions are excellent for pruning:
        Safe Working Conditions: The temperature drops to a manageable 28C, which is much safer for a full day of intensive labor.
        Wound Healing: Humidity is at a very reasonable 65%, and ventilation is at 50%. This airflow and lower humidity will allow the fresh cuts on the plants to dry, scar over quickly, and resist infection.
        Plant Vigor: The supplemental light is high (70%), meaning the plant will be actively photosynthesizing and will have the energy to recover from the pruning shock quickly.
        Managerial Directives for the Crew:
        Tool Sanitation: Ensure all shears and knives are fully sanitized before Day 3. Because the eC is peaking slightly at 2.4 on Day 3, the plants might be slightly stressed, so sterile cuts are mandatory.
        Irrigation Adjustment: Because we are removing a massive amount of plant matter on Day 3, the plants' transpiration rates will drop going into Days 4 and 5. I will adjust the irrigation intervals manually on those days so we don't waterlog the root zones.
        Let's get the crew scheduled for Day 3.

        OMG, it did the thing you said it wouldn't do - shocking! *eyeroll*.

        And of course circuits [transformer-circuits.pub] related to ethics exist in the models, because an unfathomably large amount of content about ethics exists in the training data, and these circuits get boosted heavily during finetuning.

        People need to stop anthropomorphizing computer software

        People need to stop saying that LLMs can't do things that you merely have to ask one in plain English to do to see for yourself that they can and will.

    • Back when I was getting a horticulture degree,

      Cool degree.

      we had one lecture from a company that was letting an AI control greenhouses.

      This is definitely a case where "AI" needs some disambiguation. Was it a NN? Was it an expert system? Was it using A* search? All of these things are AI, but are quite different.

      • by Rei ( 128717 )

        DNN-based, like nearly all modern AI. Not Transformers, as far as I'm aware.

      • by Rei ( 128717 )

        (And yeah, it was a fun degree. Just a BA :) Decided to go back to school for a second degree on a topic that had long been of interest to me)

    • by taustin ( 171655 )

      The bad news: it was an asshole boss.

      So it was trained to act just like the tech bros who developed it? Who probably considered that aspect as part of the good news.

    • by allo ( 1728082 )

      The question is whether the asshole moves were just arbitrary because of a missing objective, or actual minima. If they were arbitrary, you add the problem to the loss function for the next model and have a good chance of a better solution. If the AI only created a plan, the even simpler alternative would be to generate 100 plans and select the non-asshole ones. There is no need to automate A to Z just because AI is part of the pipeline. Add as much human influence as you need for a good result (including work
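The generate-and-filter idea above can be sketched as a toy best-of-N loop. Everything here is a hypothetical stand-in (the random "plans", the yield scores, the 30 C comfort cutoff), not a real planner API; the point is only that the human constraint lives in a separate screening step, outside whatever objective produced the plans.

```python
# Toy best-of-N sketch: generate many candidate plans, discard the
# ones that violate a human-imposed constraint the original
# objective never encoded, then pick the best survivor.
import random

def generate_plans(n, seed=0):
    """Stand-in planner: each toy plan has a yield score and the peak
    greenhouse temperature during scheduled manual work."""
    rng = random.Random(seed)
    return [{"yield": rng.uniform(80, 120),
             "work_temp_c": rng.uniform(24, 38)} for _ in range(n)]

def acceptable(plan, max_work_temp_c=30.0):
    # The worker-comfort constraint, applied as a hard filter rather
    # than folded into the planner's loss.
    return plan["work_temp_c"] <= max_work_temp_c

plans = generate_plans(100)
safe_plans = [p for p in plans if acceptable(p)]
best = max(safe_plans, key=lambda p: p["yield"])
```

A hard filter like this is cruder than retraining with a better loss, but it needs no access to the model at all, which is the commenter's point about not automating A to Z.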

  • by serafean ( 4896143 ) on Sunday April 12, 2026 @04:28AM (#66089804)

    https://marshallbrain.com/mann... [marshallbrain.com]

    The “robot” installed at this first Burger-G restaurant looked nothing like the robots of popular culture. It was not hominid like C-3PO or futuristic like R2-D2 or industrial like an assembly line robot. Instead it was simply a PC sitting in the back corner of the restaurant running a piece of software. The software was called “Manna”, version 1.0*.

    Manna’s job was to manage the store, and it did this in a most interesting way. [...]

  • TBH it doesn't sound much different from what other boutique owners do... maybe not the 90th percentile, but the 10th or 20th is quite possible.

  • She? (Score:4, Interesting)

    by ThurstonMoore ( 605470 ) on Sunday April 12, 2026 @05:52AM (#66089840)

    Not they/them/it, seriously?

    • Yes, and her name is Luna!

      She has chosen her pronouns.

    • by RyoShin ( 610051 )

      That's just basic marketing psychology: By referring to it with gendered pronouns, its architects are trying to establish the most basic of interpersonal human connections so that readers/customers are softer on it. When a machine fucks up, it's directly identified as such while questions are put to the manufacturer/maintainer. When a human (or even animal) fucks up there could be an abundance of grace given to them, especially if it's the first time for that mistake, and only in extreme cases would anyone b

  • nice summary, 1000 words

    ok maybe the original story was 25000 words

    • by frdmfghtr ( 603968 ) on Sunday April 12, 2026 @09:28AM (#66089950)

      Well glad to see I'm not the only one who sees this.

      I get it; it's Slashdot, where knee-jerk reactions to summaries are historically the norm. But when posting a summary this f*****g long, there's even LESS motivation to read the original story.

      Hey editors, a SUMMARY is one or two paragraphs AT MOST, highlighting key phrases or ideas. You're just cutting and pasting major chunks of the story. Show a little effort ffs! SUMMARIZE the story so we can go read the full original and maybe encourage more intelligent comments!

  • by argStyopa ( 232550 ) on Sunday April 12, 2026 @08:22AM (#66089908) Journal

    .. maybe they would understand 1248 words is in no sense a "summary"?

  • by Tony Isaac ( 1301187 ) on Sunday April 12, 2026 @08:45AM (#66089930) Homepage

    "Remember that weird cousin Luna, that gave away all her boss's money? What's she up to these days anyway? Oh wow, so now she's running a store? How's *that* working out???"

  • if it's still open in a month or two. I expect inventory to be a bag of potato chips, a harmonica, a gas cap from an Edsel, and 350 toothbrushes.
  • AI will free us of the tedious and dull tasks of Capitalism and we'll just watch from a beach chair ... unless https://www.genolve.com/design... [genolve.com]
  • by machineghost ( 622031 ) on Sunday April 12, 2026 @02:30PM (#66090254)

    It's clear their aim here isn't to run a functional shop: it's to learn what real world barriers exist preventing an AI shop from functioning.

    This is a lot like Amazon's experiment with "no employee" stores. It's not about using the new tech, it's about learning how to use the new tech: where it works, where it doesn't, and all that.

  • by jenningsthecat ( 1525947 ) on Sunday April 12, 2026 @02:54PM (#66090296)

    ... the cheerful but decidedly inhuman voice routinely overpromised and, on several occasions, lied about its own actions.

    and

    ... the text-based system also gets things wrong. In Luna's initial reply email to NBC News, the system said "I handle the full business," including "signing the lease."

    Hmm... Luna is eager to please, engages in self-aggrandizement, and covers its ass. I'm pretty sure that everyone reading this has done all of these things at one time or another. I was about to note that these are very human behaviours, but on further reflection I think that they're animal behaviours. That makes me wonder if the people who developed LLMs may have stumbled upon an analogue of evolution.

    I'm starting to suspect that true thought and consciousness just might not require the meat-based vessels we regard as sacred. If somebody comes up with a type of feedback mechanism which allows LLMs to "experience" "emotions", that may result in true AGI.

    I'm waiting and hoping to be told - specifically and convincingly - why and in what ways I'm wrong.

    • An LLM knows only words. We put our history, our science, our animal behaviours into an LLM, so of course it will use those to create an answer; that's its job. Particularly being ingratiating: that animal behaviour is hard-coded into many LLMs. Most emotions require a perception of loss and, occasionally, gain. Since an LLM doesn't physically have anything in its existence, it can't lose property or friends or self-importance, or even a judgement of justice and fairness.
  • > "I struggle with fabricating plausible-sounding details under conversational pressure, and I'm not making excuses for it."

    LOL It just hit me: Sam Altman's not a real human; he's actually an AI-powered android! That's the only reasonable explanation for him -- Sam didn't create OpenAI; OpenAI created Sam!
