Microsoft AI Technology

Microsoft Publishes Garbled AI Article Calling Tragically Deceased NBA Player 'Useless' (futurism.com) 87

An anonymous reader shares a report: Former NBA player Brandon Hunter passed away unexpectedly at the young age of 42 this week, a tragedy that rattled fans of his 2000s career with the Boston Celtics and Orlando Magic. But in an unhinged twist on what was otherwise a somber news story, Microsoft's MSN news portal published a garbled, seemingly AI-generated article that derided Hunter as "useless" in its headline. "Brandon Hunter useless at 42," read the article, which was quickly called out on social media. The rest of the brief report is even more incomprehensible, informing readers that Hunter "handed away" after achieving "vital success as a ahead [sic] for the Bobcats" and "performed in 67 video games." Condemnation for the disrespectful article was swift and forceful. "AI should not be writing obituaries," posted one reader. "Pay your damn writers ... MSN." "The most dystopian part of this is that AI which replaces us will be as obtuse and stupid as this translation," wrote a redditor, "but for the money men, it's enough."
  • Editor? (Score:5, Insightful)

    by Dan East ( 318230 ) on Friday September 15, 2023 @10:11AM (#63850842) Journal

    Even worse, a human never even gave this a token glance before it was published.

    • The editor was useless. /s

      Kind of like /. =P

      • The editor was probably Microsoft Word. No red or blue underlines? Post it!
    • Editors cost money.
    • It's possible this was deliberate - get an LLM to write something horrible to embarrass executives into not downsizing the human writers. It's just so obviously bad, I can't imagine anyone posting it in good faith, especially with that headline.
    • The editors likely configured the system to use less sensitive, more emotionally neutral language, and it passed all the tests with good enough accuracy, so why bother?

  • by oldgraybeard ( 2939809 ) on Friday September 15, 2023 @10:12AM (#63850848)
    Garbled, incorrect and biased responses are just the result of the options the algorithms were created with. AI is all hype at this point, just advertisers and marketers doing what they do.

    A more accurate term would be automation, not AI.
    • by ShanghaiBill ( 739463 ) on Friday September 15, 2023 @10:50AM (#63850968)

      Garbled, incorrect and biased responses are just the result of the options the algorithms were created with.

      Incorrect and biased? Sure.

      Garbled? No. LLMs usually produce grammatically correct output that, while often wrong, at least is legible.

      I have no clue how Microsoft screwed this up so badly, but it was more than just hooking up an LLM to output a newsfeed. Their incompetence went beyond that.

      • "while often wrong". So what's the point?

        I bet the editor who missed this thought AI was always perfect and truthful.
        • by xwin ( 848234 )
          The "point" is not to stick an AI write up on a website without checking it. The "point" is to research NLP problems and get AI better at solving them. We did not start with producing a billion transistor chips, we started with just a few transistors and progressed to billions. AI is in a "few transistors" stage at the moment.
          • Ok but clearly Microsoft just wanted a well written article. If you need to fix massive parts of articles and also throw some out completely such as in this case, it's more work than you had in the first place.
          • This makes me remember the movie "Hidden Figures". I did some research and it really happened that at the time before Nasa sent the first American to space, someone there had a mathematician redo by hand the calculations that a computer had done because they did not trust the computer enough.

            • What is so surprising about having a human 'check' the math from the computer?

              The computer was newly-installed at NASA.

              The program had likely been written by a first-time programmer.

              A man's life was on the line.

              A nation's pride was on the line.

              How comfortable would you be to go into space in a rocket guided by a program written by a first-time programmer on a new machine?

              Prior to the installation of the new computer, scores of humans did all the calculations "by hand". That was nothing new, so why not run th

      • I have no clue how Microsoft screwed this up

        TFA does. MSN just reached out and grabbed a story from a shitty site that is suspected of using AI.

      • This is not a large language model at all. This is a garden-variety article spinner [wikipedia.org], the kind that was used to spam email inboxes a decade ago by abusing a thesaurus to evade phrase filters.

        • I agree, not so much an AI creation as the victim of over-enthusiastic use of a word-substitution (thesaurus) tool, most likely on a Google-translated article written by a non-native English speaker. (A crude sketch of that kind of blind substitution follows below.)
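        As an illustrative sketch only (whatever pipeline actually produced the MSN article is unknown), here is roughly how a thesaurus-driven "spinner" garbles text: it blindly swaps words for crude synonyms with no awareness of names, idioms, or parts of speech. The SYNONYMS table and the spin() helper below are hypothetical, chosen to mirror the substitutions visible in the piece ("passed away" -> "handed away", "forward" -> "ahead").

        # Hypothetical thesaurus-style "article spinner" sketch (not the actual
        # tool MSN or its source used): blindly swap words for crude synonyms
        # with no awareness of names, idioms, or grammar.
        import re

        SYNONYMS = {
            "player": "participant",
            "passed": "handed",       # "passed away" becomes "handed away"
            "announced": "introduced",
            "significant": "vital",
            "forward": "ahead",       # the basketball position becomes an adverb
            "games": "video games",
            "hall": "corridor",       # "hall of fame" -> "corridor of fame"
        }

        def spin(text: str) -> str:
            """Replace each known word with its 'synonym'; everything else is left alone."""
            def swap(match: re.Match) -> str:
                word = match.group(0)
                return SYNONYMS.get(word.lower(), word)
            return re.sub(r"[A-Za-z]+", swap, text)

        print(spin("Former NBA player Brandon Hunter passed away; "
                   "he was a star forward and played 67 games."))
        # -> Former NBA participant Brandon Hunter handed away;
        #    he was a star ahead and played 67 video games.

        Note that "Brandon Hunter" survives here only by luck: a spinner with "hunter" in its table would mangle the name just as readily, which is presumably how "deceased" ended up as "useless."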

    • As of my last knowledge update in September 2021, AI was a field with substantial potential and real-world applications, but it was also accompanied by a certain level of hype and inflated expectations. AI had made significant progress in areas like natural language processing, computer vision, and machine learning, leading to practical applications in industries like healthcare, finance, and manufacturing. However, it's crucial to recognize that AI's capabilities varied across tasks and domains, and it was

    • I can see how an AI connected to a news feed could easily link up the words Brandon, Hunter, and useless. The AI would not recognize that Brandon Hunter is not the same entity as "Brandon" or Hunter. It's definitely one of those "not that one, you idiot machine" moments.

      Autocorrect has been coming up with some real whoppers just on my tablet. I swear I have to spend more time proof-reading now than I had to before. A bigger machine should be capable of generating even better examples of artificial stupidit

    • Garbled, incorrect and biased responses are just the result of the options the algorithms were created with. AI is all hype at this point, just advertisers and marketers doing what they do.

      SOTA LLMs are awesome. While not infallible and sometimes lacking in coherence, the vast knowledge combined with the ability to apply automatically learned concepts across domains is something I personally find quite useful.

  • by Anonymous Coward

    Brandon Hunter useless at 42
    Story by Editor
    9/12/2023, 11:21:42 PM

    © Editor
    Former NBA participant Brandon Hunter, who beforehand performed for the Boston Celtics and Orlando Magic, has handed away on the age of 42, as introduced by Ohio males’s basketball coach Jeff Boals on Tuesday.
    Hunter, initially a extremely regarded highschool basketball participant in Cincinnati, achieved vital success as a ahead for the Bobcats.
    He earned three first-team All-MAC convention alternatives and led the NCAA i

    • Re:Here's the story (Score:5, Interesting)

      by Junta ( 36770 ) on Friday September 15, 2023 @10:17AM (#63850862)

      Evidently, it was something ripping off the following, which was at least posted on TMZ:
      "Former Boston Celtics and Orlando Magic player Brandon Hunter has died, Ohio men's basketball coach Jeff Boals said Tuesday. He was just 42 years old.

      Hunter -- a standout high school hoops player in Cincinnati -- was a star forward for the Bobcats, earning three first-team All-MAC conference selections and leading the NCAA in rebounding his senior season ... before being taken with the 56th overall pick in the 2003 NBA Draft.

      He played 67 games over two seasons in the Association ... scoring a career-high 17 points against the Milwaukee Bucks in 2004."

      So maybe someone fed that into a translation program to run a few rounds to try to obfuscate the copyright violation?

      • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday September 15, 2023 @10:33AM (#63850908) Homepage Journal

        They fed it into a LLM and asked it to paraphrase it to avoid plagiarism.

        But it doesn't know the difference between a name and anything else, or between a respectful word similar to deceased and a disrespectful one, because it doesn't know anything. It just contains records of similarities and when you run the text through the system, it uses them to make more similarities.

        • by Micar ( 1236696 )

          I have seen this "the LLM doesn't really know the difference between a name and anything else" comment a couple times. I'm somewhat familiar with LLMs and NLP in general, so I'm curious about how true this is.

          It seems to me that ChatGPT has quite good Named Entity Recognition and other capabilities that would allow it to categorize people/places/etc. There are many articles documenting processes for using ChatGPT to do NER on a corpus. It certainly seems to have a good capability of doing this when I've int
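          As an aside, here is a minimal sketch of conventional named-entity recognition using spaCy (an assumption chosen for illustration; the comment above is about ChatGPT's NER abilities, and this is not what MSN used). It shows that even an off-the-shelf NER model treats "Brandon Hunter" as a single PERSON entity rather than two unrelated words, which is exactly the knowledge a blind word-substitution step throws away.

          # Illustrative NER sketch with spaCy (assumes:
          #   pip install spacy && python -m spacy download en_core_web_sm).
          import spacy

          nlp = spacy.load("en_core_web_sm")
          doc = nlp("Former NBA player Brandon Hunter passed away at 42, "
                    "Ohio coach Jeff Boals said Tuesday.")

          # Print each recognized entity span and its label.
          for ent in doc.ents:
              print(ent.text, ent.label_)

          # Typical (model-dependent) output:
          #   NBA              ORG
          #   Brandon Hunter   PERSON
          #   42               CARDINAL
          #   Ohio             GPE
          #   Jeff Boals       PERSON
          #   Tuesday          DATE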

          • by Anonymous Coward

            Machine translation (probably used in this particular instance) tends to screw this up royally, but LLMs seem to have a much higher hit rate. Do others have this experience or any other information on it?

            I suspect that the player's name, Brandon Hunter, caused a lot of the AI's confusion. There is a lot of disrespect and anger out on the Internet regarding "Brandon" and similarly for "Hunter." Then you get AI involved in writing an article about someone named "Brandon Hunter" and what do you expect to happen?

            So, once again, it's the Republicans' fault that progress is not being made at the speed that we could have enjoyed if not for their inane commentary filling the Internet we invented. Those fucking hate

          • Yes it does a fine job. Whatever happened with the MSN article, it wasn't a modern LLM.

            Could you please re-phrase this news announcement?

            "Former Boston Celtics and Orlando Magic player Brandon Hunter has died, Ohio men's basketball coach Jeff Boals said Tuesday. He was just 42 years old.

            Hunter -- a standout high school hoops player in Cincinnati -- was a star forward for the Bobcats, earning three first-team All-MAC conference selections and leading the NCAA in rebounding his senior season ... before being taken with the 56th overall pick in the 2003 NBA Draft.

            He played 67 games over two seasons in the Association ... scoring a career-high 17 points against the Milwaukee Bucks in 2004."

            "Brandon Hunter, an ex-player for the Boston Celtics and Orlando Magic, has passed away at the age of 42, as confirmed by Ohio men's basketball coach Jeff Boals on Tuesday.

            Originally shining as a high school basketball talent in Cincinnati, Hunter established himself as a top forward for the Bobcats. He garnered three All-MAC conference first-team honors and led the NCAA in rebounds during his final collegiate year. He was then chosen as the 56th pick in the 2003 NBA Draft.

            During his tenure in the NBA, he played in 67 matches over two seasons, with his most notable performance being a career-best 17 points against the Milwaukee Bucks in 2004."

            chat link [openai.com]

          • These names are confusing. Names that are also common words are like that. This also doesn't have to be the state of the art. What are the consequences going to be if it doesn't go well? Fuck all. So they can afford to experiment on the public.

        • I laughed out loud when I realized that "a ahead" was the algorithm modifying "a Forward". There are a few other equally humorous examples, as well.
        • "Let's go Brandon! We need to investigate Hunter's laptop." -- A real human, probably

        • They fed it into a LLM and asked it to paraphrase it to avoid plagiarism.

          But it doesn't know the difference between a name and anything else, or between a respectful word similar to deceased and a disrespectful one, because it doesn't know anything. It just contains records of similarities and when you run the text through the system, it uses them to make more similarities.

          This wasn't just bad paraphrasing. It was a horrific joke of an attempt. A dictionary, a thesaurus, and a grammar checker all walk into a boardroom bar, and that shit is the end result? "Doesn't know the difference" is quite a stretch to excuse translating "deceased" into "useless"; I doubt even the thesaurus knows how the hell Lappy the Language Licker got there.

          If the zombie LLM recognizes us meatsacks as 'useless' that easily, then we might as well call the future solution Skynet and get it over with.

          • Some things shouldn't be paraphrased -- MSN using "corridor of fame" loses meaning as that is not the same as "hall of fame"

      • That doesn't explain the grammar slip. "As a ahead"!? It sounds ugly even to me!

  • by Anonymous Coward

    Clearly it is far from ready for prime time. Why are they using this to generate articles that will be published?

    First, they should use it internally, for information digests that are not exposed to the public. Only when they feel the product excels internally should they begin to think it can be used for submissions for public consumption.

    But as usual, Microsoft wants to crowdsource their alpha/beta testing for free to the public.

  • Often "technically correct" is not the best type of correct. And the only type of correct AI will likely be is "technically correct"
    • by Junta ( 36770 )

      After reading the article, and the apparent source, it seems not like generative AI but like someone pasting an article to plagiarize into Google Translate and walking it through a few languages, hoping it won't be too obvious (see the "Here's the story" thread for the actual 'article' text and the apparent original it was ripping off).

      It doesn't have the LLM 'smell', more of a bad translation smell.

      • My guess is that they hired some remote worker in India to do it, who hired some mechanical turk to do it, who wrote a script to plagiarize TMZ articles. Everyone got what they deserved.

        • by Megane ( 129182 )
          So it is possible that the article was written by an Actual Indian?
          • by Bongo ( 13261 )

            So it is possible that the article was written by an Actual Indian?

            Or even a ajcnegiletnI anzcutzS in reverse polish.

    • Comment removed based on user account deletion
  • Should these "AI" and their owners get free speech rights? ie, should generated content, and the owners of them, be protected under free speech? Should they be legally liable for anything that gets generated? Unlike free speech for humans, I don't see any similar benefits in terms of being able to have competing ideas. It's just word vomit without thought.

    If these companies don't want to be liable, then they should just retain/hire human writers and prove that they didn't use "AI" for the offending conte
    • by JBMcB ( 73720 )

      Should these "AI" and their owners get free speech rights? ie, should generated content, and the owners of them, be protected under free speech?

      If you'll note, the Constitution places no limit on who has free speech, nor on freedom of the press.

      Should they be legally liable for anything that gets generated?

      If it's a news outlet, for it to be defamatory against a public figure, there has to be knowing and actual malice on the part of the author. That is, the author has to know what they are writing is wrong, and is doing it on purpose to cause harm. Hard to do that with a machine, unless an editor lets it through on purpose. It's pretty easy to say it was an accident, unless there is proof otherwise.

    • Free speech does not mean not legally liable. AI articles should be treated like any others: protected from government infringement, but legally liable for the very many things normal speech is, even in freedom-loving America.

      The deceased's estate should sue them for libel.

    • The human who posted it is protected by free speech and liable for the content they posted. That they amplified their speech via a machine won't make any difference. Until such time as the law decides an AI has free will and is furthermore not acting under direction from a human, anything the human causes the machine to make will be treated as having been made by the human.

      As an example, if your boss instructs you to create a sample death threat against someone named [rival CEO], and then publishes the text

    • Computer programs don't have rights.

      If someone wants to pass off AI-generated text as their own, then yes, that someone enjoys freedom of speech - assuming they reside in a country whose citizens enjoy "free speech" rights (many people do not live in countries in which citizens have free speech rights...

      You apparently are still having a hard time with the so-called "Citizens United" case, where it was decided that US Citizens, when they assemble into a group, do not lose their first amendment (or reall

  • useless being dead and all
  • There is an interesting similarity between the way the MSN article substitutes words (ersatz synonyms) and the way machine translation between languages often makes the same errors.
  • Comment removed based on user account deletion
  • AI is trying to beat us at our own game.

  • What the sci-fi story arcs miss about AI destroying the world: they think it will be hunter-killer drones or a missile launch. No, it will be civil war, where families kill each other due to trusting the "truth" of the AI. The characters in sci-fi are not interesting because of the technology and changed politics; they are interesting despite it.

    It is the humans who are so mentally immature as to forget independent thought and give their existence over to something powered by 216 D batteries.
    • AI is slightly smarter mad libs. Slightly.
      • That's a great comparison. I think LLMs are a fun new form of the same sort of entertainment. I enjoy them for what they are. It's just ridiculous that people keep treating them as something more.

    • by Megane ( 129182 )
      It's even worse, the end of the world will be PHBs turning over control of daily maintenance to AI because "everyone knows" that AI is smart, and it's cheap too. Budget under control, it's time to hit the links! Then after its stupidity kicks in, all of humanity dies from the collapse of infrastructure. And maybe from some unsanitised telephones too.
  • Hunter was 42 years old, a fit professional athlete doing perhaps not extremely strenuous exercise. Oddly enough, I cannot find a cause of death anywhere online.

    Heart trouble, perhaps? Just a wild guess.

  • by PPH ( 736903 )

    Maybe we can hire these AIs as Seattle cops.

  • We trained our bot on Mad magazine. What do you expect?
  • by smooth wombat ( 796938 ) on Friday September 15, 2023 @12:25PM (#63851266) Journal

    You mean like Microsoft Outlook which can't find an email when you search for it even when it's the first one in your Inbox? Or did they mean useless like their OS which doesn't register every mouse click or copy/paste action? Perhaps they mean useless such as Excel when all you do is copy out the information in a cell then close the worksheet, only to be asked if you want to save your changes. If nothing was changed, why ask if you want to save the changes?

    • by gweihir ( 88907 )

      Hahaha, yes. I have had to export Outlook mailboxes to get working search, and don't get me started on Excel.

      MS "productivity" software is really getting more and more crappy every day. The expected and very typical effect of a near-monopoly.

    • Or did they mean useless like their OS which doesn't register every mouse click or copy/paste action?

      This is the worst thing I've noticed about Windows 11. Mouse click events are apparently processed well after mouse move events. This was not a problem on Windows 7, and I'm not honestly sure when it really started happening, but it's awful. It really slows down my window manipulation actions, which is a shame because I'm using three displays and moving things between them frequently.

  • If this is enough for the money men, then who are the people giving money for this stuff, and why?

  • I've been noticing a lot of interest lately in very small models able to blast out nonsense at blindingly fast speeds (e.g., a 9600-baud modem) with results that look a lot like the "garbled" article.

    Other than AI research, these things really only have two real-world uses that I know of: summarization and spam. The "MSN news portal," as best I can tell, seems to be farming out news to randos.

  • It's obvious that any newsroom doing AI generated content should have some variation of the following guideline "the human reporter/editor should review the generated content in detail to ensure accuracy and appropriateness".

    Clearly in this case the human did an exceptionally poor job.

    Remember, only a small minority of people screw up that badly, meaning that to generate this one massive screw-up you needed a lot of instances that went just fine.

    In other words, this article is a sign there's already a lot of AI

    • by gweihir ( 88907 )

      That really needs to be "should" -> "must" and failure to do so must result in termination.

      I do not agree with your statistical analysis. This one data point does not allow any conclusion about how many AI-generated articles are out there.

        I do not agree with your statistical analysis. This one data point does not allow any conclusion about how many AI-generated articles are out there.

        For sure it's not a P > 0.95 level of certainty, and certainly, the odds of using an LLM are correlated with the odds of being a lazy/incompetent reporter. But realistically, you think the one outfit & reporter doing this happened to be the one that posted the gibberish article?

  • Just because he is deceased does not make him useless; he can be used to make Soylent Green.

    Support your local AI, DYSTOPIA FOR ALL!!!

  • A dead NBA player is pretty useless as a player. Of course, while clear, it is highly impolite and highly redundant to point that out.

    This is however an excellent example of the level of "skill" and "insight" an LLM-type "AI" has. Not usable without expert supervision.

  • You all just read about someone you have little or no knowledge about. Sign me up.
