AI Education

How An MIT Student Awed Top Economists With His AI Study - Until It All Fell Apart (msn.com)

In May MIT announced "no confidence" in a preprint paper on how AI increased scientific discovery, asking arXiv to withdraw it. The paper, authored by 27-year-old grad student Aidan Toner-Rodgers, had claimed an AI-driven materials discovery tool helped 1,018 scientists at a U.S. R&D lab.

But within weeks his academic mentors "were asking an unthinkable question," reports the Wall Street Journal. Had Toner-Rodgers made it all up? Toner-Rodgers's illusory success seems in part thanks to the dynamics he has now upset: an academic culture at MIT where high levels of trust, integrity and rigor are all — for better or worse — assumed. He focused on AI, a field where peer-reviewed research is still in its infancy and the hunger for data is insatiable. What has stunned his former colleagues and mentors is the sheer breadth of his apparent deception. He didn't just tweak a few variables. It appears he invented the entire study. In the aftermath, MIT economics professors have been discussing ways to raise standards for graduate students' research papers, including scrutinizing raw data, and students are going out of their way to show their work isn't counterfeit, according to people at the school.

Since parting with the university, Toner-Rodgers has told other students that his paper's problems were essentially a mere issue with data rights. According to him, he had indeed burrowed into a trove of data from a large materials-science company, as his paper said he did. But instead of getting formal permission to use the data, he faked a data-use agreement after the company wanted to pull out, he told other students via a WhatsApp message in May... On Jan. 31, Corning filed a complaint with the World Intellectual Property Organization against the registrar of the domain name corningresearch.com. Someone who controlled that domain name could potentially create email addresses or webpages that gave the impression they were affiliated with the company. WIPO soon found that Toner-Rodgers had apparently registered the domain name, according to the organization's written decision on the case. Toner-Rodgers never responded to the complaint, and Corning successfully won the transfer of the domain name. WIPO declined to comment...

In the WhatsApp chat in May, in which Toner-Rodgers told other students he had faked the data-use agreement, he wrote, "This was a huge and embarrassing act of dishonesty on my part, and in hindsight it clearly would've been better to just abandon the paper." Both Corning and 3M told the Journal that they didn't roll out the experiment Toner-Rodgers described, and that they didn't share data with him.

This discussion has been archived. No new comments can be posted.


  • It even sounds like a name AI would make up. How do we know this guy even exists at all?

  • by Norbert de Jonge ( 1225 ) on Monday November 24, 2025 @06:12AM (#65814769)

    It is possible that Aidan Toner-Rodgers's paper bore the hallmarks of AI-assisted writing. The polished structure, fluent transitions, and journal-ready formatting seemed unusually advanced for a first-year PhD student, especially given the speed with which the work was produced. Large language models are adept at generating academic-style prose, weaving bold claims into coherent narratives, and mimicking the authoritative tone expected in economics journals. If AI tools were used, they may have helped transform fabricated data into a text that appeared credible enough to attract citations from policymakers and coverage in major outlets.

    Speculating further, the combination of fabricated data and AI-generated polish would create a particularly deceptive package. The fake numbers provided the illusion of groundbreaking findings, while AI-assisted writing could have smoothed over inconsistencies and amplified the sense of authority. This synergy might explain how the paper gained traction so quickly despite lacking real evidence. While no investigation has confirmed AI involvement, the possibility adds another dimension to the scandal: not only was the data fraudulent, but the very voice of the paper may have been artificially constructed to maximize its persuasive power.

    • by Anonymous Coward

      Yes, thank you for your AI-generated comment :(

    • by Viol8 ( 599362 )

It's always good to hear ChatGPT's opinion on the matter at hand.

    • artificially constructed to maximize its persuasive power.

      Imagine having such an immense vocabulary, and using it that weakly.

    • If AI tools were used, they may have helped transform fabricated data into a text that appeared credible enough to attract citations from policymakers and coverage in major outlets.

      How is this different from standard economics?

    • by gtall ( 79522 )

      The dead giveaway your comment is some AI-slop is the use of "synergy". No one uses that word. Hell, even MBAs know better than to use it lest they sound like their dork bosses.

    • If agentic AI were involved, given sufficient prompts and naivete on the part of Toner-Rogers, the AI could have performed the registration of the false Corning website on its own and even fooled the human researcher into believing the materials were real. Won't be long before we're all questioning whether the answers we get from our computers are legitimate or a tremendously detailed hallucination conjured to match what we appear to want to see.

    • by shanen ( 462549 )

Was that an AI-generated joke driven by an old timer, or has the old timer (4-digit UID) passed away and the identity been hijacked by an AI generating its own joke? Seems to be a lot of that going around these years...

  • Banned. (Score:5, Interesting)

    by Smidge204 ( 605297 ) on Monday November 24, 2025 @06:15AM (#65814771) Journal

    This should be a career-ending move. Demonstrating this level of dishonesty should bar him from holding a graduate degree of any kind, really, let alone anything in scientific research.

    Increasing and enforcing standards is needed, but also higher standards mean nothing if there are no consequences. Make it clear that this kind of nonsense will obliterate your academic career.
    =Smidge=

    • by gweihir ( 88907 )

      Yes. That should get this person barred for life from any research position. Had that gone on for longer, it would have done significant damage.

      • Re:Banned. (Score:5, Interesting)

        by hdyoung ( 5182939 ) on Monday November 24, 2025 @10:30AM (#65815013)
There's no single group of people called "academia" that makes that decision. The ecosystem is extremely fragmented. This guy's academic/research career is definitely over, but there won't be any satisfying single moment where he's banished by some higher power. But his chances of convincing any decent PhD program to accept him have gotten asymptotically close to zero.
    • Re:Banned. (Score:4, Insightful)

      by supabeast! ( 84658 ) on Monday November 24, 2025 @07:35AM (#65814849)

      Unlikely. With skills in economics, AI, and deception he'll either become a think tank wonk or a Silicon Valley startup charlatan.

    • This should be a career-ending move.

      You would think, but just like in corporate America where someone runs a company into the ground then is hired elsewhere for an even bigger pay package, he'll be picked up. All will be forgotten.

      • by DarkOx ( 621550 )

There is, and always has been, an underlying element to the American psychology where we sort of admire the conman and the outlaw. It's really baked into how we have characterized our conception.

Even going back to Patriots vs. Loyalists: while there were plenty of legitimate grievances with colonial governance, they were inflated to a degree that almost beclowns everyone involved, doubly so in the context of what was implemented in the aftermath, at least on the representation, regulatory, and taxation fronts. Our very founding revolution was sold on, if not lies, radical liberties with the truth. Everyone knows; we just don't really talk about it.

There is, and always has been, an underlying element to the American psychology where we sort of admire the conman and the outlaw. It's really baked into how we have characterized our conception.

Even going back to Patriots vs. Loyalists: while there were plenty of legitimate grievances with colonial governance, they were inflated to a degree that almost beclowns everyone involved, doubly so in the context of what was implemented in the aftermath, at least on the representation, regulatory, and taxation fronts. Our very founding revolution was sold on, if not lies, radical liberties with the truth. Everyone knows; we just don't really talk about it.

So too with something like this. Most people will outwardly condemn the guy. At least some people will inwardly be impressed by how much he got away with and wonder what he might do for them, if they happened to throw a little gold his way...

          If I had seen your comment before I wrote my own, I would have simply modded your comment 'Insightful' instead. Or maybe 'Informative' - for me it's both.

          It looks as though you may have been downmodded. Maybe what you wrote hit a little too close to home. Even I was - as you said - "inwardly impressed". And I think there's a good chance that he'll do well in life, at least in the financial sense. After all, the upper echelons of our society are full of smart people with flexible morals and the ability to l

        • Yes, America does love conmen - how else could the Democratic party exist?
    • This guy's academic career is *definitely* over.

      The wheels of academia turn very slowly. And, there's no singular body of ultrapowerful authoritative academic judges that can make a decision, point their finger at the guy and thunder YOU'RE FIRED. Academia doesn't work like that. This is what's gonna happen - he's gonna apply to a bunch of grad school programs, and a hundred separate admissions committees are gonna say "nope, not touching this guy". He might manage to squeak through into some program
I'd hope so; he screwed MIT, a world-renowned school. I'd hope MIT goes beyond that and demotes his major prof for his PhD. This is a major embarrassment for MIT, and his prof should have been overseeing what got published; so while MIT has egg on its face, the major prof has the whole breakfast.
      • Fabricating data is generally an academic death sentence.

        Ironically, it might not be a death sentence when it comes to working for an AI company. The guy is obviously very bright, and he can tap-dance, and he's willing to have a "flexible" relationship with fact and truth. I imagine that if I was Sam Altman I'd be thinking "Ladies and gentlemen, we have a winner!"

        • Oh, he's definitely got a bright future in the corporate world. Half of corporate America is "fake it till you make it". The other half is rock solid.
        • I don't think he will or should receive a lifetime ban from any and all employment.

          But as for MIT, or any other research institution with any prestige, my prediction is he is done.

          As for Sam Altman, maybe you can point us to some example when he or OpenAI violated academic integrity by fabricating data like this?

          • As for Sam Altman, maybe you can point us to some example when he or OpenAI violated academic integrity by fabricating data like this?

            I wasn't thinking about academic integrity specifically. I was thinking of examples of both Sam's and his company's apparently flexible relationship with fact and truth [time.com].

I had forgotten about Altman getting fired by the board for not 'being candid' (presumably tantamount to lying). Admittedly, if there is a business equivalent to being expelled for academic fraud, that might be it.
    • Meh, this kind of crap is what peer review is for. As long as he learns his lesson I'd be fine with letting him keep going. I mean he's still going to MIT so he's not an idiot.

      I mean we all act like he got away with this but he was caught during the initial process of peer review. The system really does work.

      We all like to complain about how there's thousands and thousands of papers that are just garbage but here's the thing so what? If the papers aren't doing any harm and they're just sitting out t
      • Can you explain the anon who always replies to your posts with what appears to be a parody? Secret admirer perhaps? Are they using AI, or do you think they actually type out a unique post every single time?

    • by necro81 ( 917438 )
      What is actually interesting to me in this situation is: where was MIT's oversight and review process? Faculty can, and often do, go to arXiv and/or publication without necessarily clearing things through their institution. But this guy was a grad student, and so presumably has an advisor. Did the advisor not check the student's work? Was the advisor not a part of conducting the research, analyzing the results, and proofing the paper? If not, then the advisor, and the entire department, should share so
      • > Did the advisor not check the student's work?

The student made up the data, claiming it came from a legitimate source. Other than independently trying to get that same data from the same source and verifying it, how exactly do you 'check the work'?

        The review is typically focused on how the data is processed and if the conclusions follow logically from the data presented. If you just make shit up at the very start it can be very difficult to catch or prove short of completely redoing the study - which is

        • by necro81 ( 917438 )

The student made up the data, claiming it came from a legitimate source.

          Well, since a lot of the supposed data came from industry, one might have expected the advisor to say "gee, that's an impressive amount of access you've gotten to proprietary data. How did you manage that?" Also: "Why have you been having these conversations with industry - including signed [faked] agreements - without keeping me and/or MIT's legal team in the loop?"

  • by gweihir ( 88907 ) on Monday November 24, 2025 @06:32AM (#65814787)

Let's face it: LLMs are somewhat better search, can summarize non-complex texts, and can do simple, well-known work, though not very reliably. But that really is it. And that in no way justifies the hype.

    • by Entrope ( 68843 )

DuckDuckGo's LLM generates this, for example:

      Right-hand drive (RHD) refers to vehicles designed with the steering wheel on the right side, which is typical in countries that follow right-hand traffic (RHT) rules. In these countries, vehicles drive on the right side of the road, and roundabouts circulate counterclockwise.

... (emphases added) and if you click "more" it may change to say that RHD vehicles drive on the *left* side of the road (LHT, which is true).

    • by TheStatsMan ( 1763322 ) on Monday November 24, 2025 @09:42AM (#65814951)

      Last week if you posted, "AI can do LITERALLY NOTHING" you got 5 points.

      This week, "LLMs are somewhat better than search but THAT'S IT"

An LLM helped me build an automatic farm in Minecraft with my child. And if you don't understand why that's impressive, you ought to reassess. An LLM helped my SO learn how to use a piece of new commercial equipment. An LLM helped me put together a marketing model that solves several systems of equations in an afternoon (it would have taken me a lot longer with Google).

      If you can't figure out how to use this stuff, it's on you at this point.

      • by gweihir ( 88907 )

        If you can't figure out how to use this stuff, it's on you at this point.

I know how to use this stuff: stay away from it; it adds nothing and wastes my time. Oh, I have one use: I currently have a student evaluating the major coding assistants and some general LLMs on how well they can judge code security. The results so far are that they work well for toy examples and not well, or not at all, for real situations. This may eventually get me a nice publication.

So you want an LLM that does everything you already do, but better than you? That's not the use case; the use case is helping you with things you don't have mastery over already.

Your challenge to the LLM is a test to replace you (probably because you're afraid it will replace you). Instead, try to use it when you need help with something you aren't already good at.

          Or do you only do things you're good at? A character flaw to rival incuriosity.

          • by gweihir ( 88907 )

            What kind of a nonsense posting is that? Stop projecting and read what I actually wrote.

            • >I have a student evaluate the major coding assistant and some general LLMs on how good they can judge code security

              I think it's pretty clear you're evaluating the outer perimeter of what an LLM is meant to do, and not where people might find it useful.

              So if you want to judge LLMs by the insufficiency to perform a specific, precise task, then fine - but all of the rest of us are using it in a totally different way and finding a lot of value.

              • by gweihir ( 88907 )

                Hahaha, no. It is just "clear" to you, because you have no clue how research, or risk management or securing software works.

I, on the other hand, am quite capable of identifying critical parts of a performance landscape and then looking into those as a priority.

                Or in other words, insightless comment is without insight...

                • >because you have no clue how research, or risk management or securing software works.

                  It's so sad that in devoting your life to one particular thing you don't see the value in learning new things. Good luck with that.

      • And I would be highly surprised if some decent conventional web searches couldn’t do the exact same thing.
      • Three examples given, all about doing something that could have been done with a traditional search, but faster. That sounds a lot like "somewhat better than search" to me. Got an example where it's something more than that?

        The problem is the tendency to hallucinate. You call yourself TheStatsMan; you should know that LLMs are just statistical engines that string a bunch of words together which are statistically likely to follow from the prompt, given the body of text the LLM has been trained on. Garbage

        • >you should know that LLMs are just statistical engines that string a bunch of words together which are statistically likely to follow from the prompt, given the body of text the LLM has been trained on

          This is not a serious critique, and it's definitely not statistical in nature.

I find, in practice, that the LLM is correct most of the time, nearly always writes the code I need to close specifications the first time, and is borderline insightful for certain tasks.

          > If you're doing something fairly

          • by gweihir ( 88907 )

            >you should know that LLMs are just statistical engines that string a bunch of words together which are statistically likely to follow from the prompt, given the body of text the LLM has been trained on

            This is not a serious critique, and it's definitely not statistical in nature.

            Excuse me? Are you seriously claiming an LLM is not a statistical engine? If so, you need to have your head examined, because that is the literal, mathematical truth.
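As a minimal sketch of what "statistical engine" means here, consider a toy bigram model in Python. This is a deliberately simplified, hypothetical example (the tiny corpus and helper names are invented for illustration); real LLMs use neural networks over subword tokens, but the generation loop - sample the statistically likely next token, append, repeat - is conceptually the same.

```python
import random

# Toy "language model": count which word follows which in a tiny corpus,
# then generate text by sampling the next word in proportion to frequency.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample a continuation weighted by observed frequency, or None."""
    options = counts.get(prev)
    if not options:  # no observed continuation for this word
        return None
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short continuation from a one-word "prompt".
word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))
```

Every word produced is just a weighted draw from what followed that word in the training data; nothing in the loop checks whether the output is true, which is also why fluent-sounding fabrication comes so naturally to the real thing.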

      • by Xarius ( 691264 )

        An LLM helped me build an automatic farm in minecraft with my child. And if you don't understand why that's impressive, you ought to reassess.

        It's not that impressive at all, it just regurgitated and re-wrote the content from hundreds of websites and video transcripts that tell you how to do that... The impressive part was all of those people figuring this out and writing it up themselves.

        https://www.ign.com/wikis/mine... [ign.com]
        https://ender-chest.com/tutor [ender-chest.com]

        • by gweihir ( 88907 )

The person you answered to clearly thinks the LLM constructed something and had agency and insight, when in reality it just found everything in its training data, did a bit of aggregation via correlation (not implication; LLMs cannot do implications), and then presented what it found in a seemingly polished fashion.

If you have no clue how an LLM works and are not very smart (i.e. Dunning-Kruger left-side), you can come to this invalid conclusion.

          • >thinks the LLM constructed something and had agency and insight

No, you absolute pollywhompus-brained buffoon - but the value is in the solution it provided me in two paragraphs, not in reading 10 different webpages.

      • If you can't figure out how to use this stuff, it's on you at this point.

        The vast majority of people have no idea how an LLM works and have no idea how to interact with it properly. Numerous folks are able to work around that with experience; however, the vast majority of folks are going to use it like the Oracle at Delphi. That is not much better than rolling dice.

  • by Viol8 ( 599362 ) on Monday November 24, 2025 @07:31AM (#65814839) Homepage

    He'll go a long way in the AI tech-bro world.

  • ... The whole point of AI is to fake things!
  • by SlashbotAgent ( 6477336 ) on Monday November 24, 2025 @08:38AM (#65814885)

I did it! I cured cancer and Alzheimer's. It turns out that it's super simple. You just need to do a tiny DNA edit, and you can do it with common kitchen chemicals.

    I am the greatest. Totally amazing. My paper proves it all. Trust me, Bro.

    • by gtall ( 79522 )

      HHS has a job opening for you. Please see JFK Jr.'s appt secretary. You need (2) faulty references, proof of your ability to lie with a straight face, and a bone and a rattle. The latter are for explaining new "cures".

I did it! I cured cancer and Alzheimer's. It turns out that it's super simple. You just need to do a tiny DNA edit, and you can do it with common kitchen chemicals.

      I am the greatest. Totally amazing. My paper proves it all. Trust me, Bro.

      Totally! AI regrew my severed arm, which was an advanced risk for it to even try!

    • by allo ( 1728082 )

You cured Alzheimer's and then forgot where you put the paper?

    • Too bad I already forgot what chemical to look for under the kitchen sink.

  • So, summary:

    Corning's core competence resides within the AI they're using to do their work for them but prefer to keep this secret from their shareholders. Protagonist took one for the team to reveal this.

    ?

  • by jenningsthecat ( 1525947 ) on Monday November 24, 2025 @10:42AM (#65815043)

    AI is famous for hallucinating results, and now an MIT student has hallucinated some benefits of AI and written a research paper about it. It's dishonest and unethical, but in a twisted way it's also rather appropriate.

A lot of regular folks are starting to use AI and getting completely wrong answers to obvious questions. Their common sense makes them ask "if you got that wrong, what else are you getting wrong?"
Don't give them the bs that they don't use the correct prompts.
OR the explanation I am seeing more and more: "when you ask a person a question, often they get it wrong too." I've seen that in several articles in the past few weeks.
Is that the best they've

You think they are using introspection and multiple source checking now? I've had many people double and triple down that the AI slop was right and every other factual source was wrong, because AI is super intelligent and can't be wrong. They take it personally because they lack the ability to tell crap from credible.
  • by Ecuador ( 740021 ) on Monday November 24, 2025 @11:47AM (#65815159) Homepage

    He should have pivoted to "Ha! This obvious AI-generated paper fooled so many researchers! THAT was my actual research into fake AI-generated papers!".

  • Wow, who could have seen that as a possibility?

So he can always get the AI to hallucinate that he had a stellar academic career.
