Meta Says Llama 4 Targets Left-Leaning Bias (404media.co)

Meta says in its Llama 4 release announcement that it's specifically addressing "left-leaning" political bias in its AI model, distinguishing this effort from traditional bias concerns around race, gender, and nationality that researchers have long documented. "Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue," the company said.

"All leading LLMs have had issues with bias -- specifically, they historically have leaned left," Meta stated, framing AI bias primarily as a political problem. The company claims Llama 4 is "dramatically more balanced" in handling sensitive topics and touts its lack of "strong political lean" compared to competitors.

  • "Both sides" (Score:5, Insightful)

    by Anonymous Coward on Thursday April 10, 2025 @01:01PM (#65295239)

    Go away Zuck, you MAGA simp.

    Money talks, bullshit walks.

    • Re:"Both sides" (Score:5, Insightful)

      by DaveSewhuk ( 1271928 ) on Thursday April 10, 2025 @01:15PM (#65295277)
      Could be most people like things like good wages, health insurance, autonomy and not being serfs. Right = fascism/oligarchy now. They do not care about balancing the budget, etc. Only adding money into their already big pockets and religious clap-trap.
      • Re: "Both sides" (Score:5, Insightful)

        by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday April 10, 2025 @01:42PM (#65295371) Homepage Journal

        "Right = fascism/oligarchy now."

        Always was. They also never cared about balancing budgets.

        • Re: "Both sides" (Score:5, Insightful)

          by abEeyore ( 8687599 ) on Thursday April 10, 2025 @04:17PM (#65295875)
          No, you can look back at Eisenhower and see what a principled form of conservatism looks like. Even in Nixon, to a degree; he created the EPA, after all, with much bipartisan support. Back then, a balanced budget actually meant wanting a balanced budget, not just a phrase to be used as a pretext to cut programs that you disapprove of, for people you don't like.

          Believe it or not, prior to good old Newt, most people in Congress did actually care about trying to actually govern, and even many conservatives wanted to do so in a way that screwed over as *few* people as possible. That whole "I hate what you say, but will defend to the death your right to say it" was something people actually took seriously.
          • Re: "Both sides" (Score:5, Informative)

            by DaFallus ( 805248 ) on Thursday April 10, 2025 @04:56PM (#65295953)

            Believe it or not, prior to good old Newt, most people in Congress did actually care about trying to actually govern, and even many conservatives wanted to do so in a way that screwed over as *few* people as possible.

            Unless they were black. Conservatives were so pissed about the Civil Rights Act, Voting Rights Act, and desegregation that they came up with the Southern Strategy to win the support of the South.

        • by sinij ( 911942 )
          Yes, because not producing images of black Nazi storm troopers when asked for an image of a German WW2 soldier is fascism?
      • Re:"Both sides" (Score:5, Insightful)

        by Tablizer ( 95088 ) on Thursday April 10, 2025 @01:47PM (#65295395) Journal

        The GOP is simply a big unspoken 3-way contract among the party, the rich, and evangelicals: let the rich have tax cuts and deregulation, and in exchange evangelicals get their de-facto theocracy. The GOP gets big campaign donations and votes, the rich get their tax cuts & pollute-for-free card, and evangelicals get to rule our gonads.

      • Re:"Both sides" (Score:5, Informative)

        by nightflameauto ( 6607976 ) on Thursday April 10, 2025 @03:48PM (#65295767)

        Could be most people like things like good wages, health insurance, autonomy and not being serfs. Right = fascism/oligarchy now. They do not care about balancing the budget, etc. Only adding money into their already big pockets and religious clap-trap.

        Yeah, this story might just as well say, "Meta Says Llama 4 Targets Humanist Bias." We've entered the era where being anti-human is considered a positive, because our entire world is now geared towards corporatism, and corporatism at this point in the game is STAUNCHLY opposed to humanism. Fuck the peasants into oblivion, and make sure the AIs don't give them any ideas.

      • Re:"Both sides" (Score:5, Informative)

        by Kisai ( 213879 ) on Thursday April 10, 2025 @07:13PM (#65296219)

        There is no "both sides" in America. There is "right-wing" and "absolute fascist"

        All you have to do is look at Canada, Australia, the UK, or the EU and see that even at its best, the "Democratic Party" in the US is basically a right-wing party with a few left-wing priorities. There is no fully left-wing party. They've been sliding in that direction since Reagan. If they were truly a left-wing party, the Christians would be backing them on the principle of "help the poor, homeless and sick". The US still doesn't have medical care for everyone. That should have been the Democrats' priority 1, over the economy. Housing should be priority 2. Left-wing parties in Canada are hyper-aware of this, and always push medical care and housing objectives over economic concerns. This is why the conservatives in Alberta constantly whine-bitch-moan about not being able to find markets for their oil and gas: they get cock-blocked by BC to the west, NWT to the north, and Ontario to the east. They can only sell to Canada and the US.

        Environment is a left-wing issue, but it's contentious because not all damage to the environment is equal.

        This is the core problem with letting AI do anything: the AI doesn't know anything. Trying to make an AI balance left-wing and right-wing issues is just going to be a re-run of the Microsoft Tay chatbot, where it learns that people just want an echo chamber, and it will just generate a left-wing or right-wing framed answer to any left-wing or right-wing framed question. Therefore it's 100% useless for factual information. It already hallucinates because of lies and misinformation in the training data.

    • This. Relative to the Republican worldview, reality has a far-left bias. Relative to the rest of the world, reality is neutral. So what's going on here is Facebook telegraphing that its latest development has anti-American far-right extremism built in.
  • by alvinrod ( 889928 ) on Thursday April 10, 2025 @01:06PM (#65295247)
    There are multiple issues on which I consider both of the "sides" to be idiotic. There are more opinions and options than those espoused by the major political parties. An AI that can only offer me these two perspectives isn't much more useful than an AI limited to just one.
    • by Z80a ( 971949 )

      There are several "bubble truths" created and amplified by isolated communities that never get challenged, just repeated and amplified until they turn into something really stupid. This phenomenon does not have a political side, and it's not even exclusive to politics.
      A well-trained AI should be able to identify the pattern and not fall into it.

      • by Chris Mattern ( 191822 ) on Thursday April 10, 2025 @01:24PM (#65295319)

        A large language model is designed to parrot back what it is fed in response to appropriate prompts. By definition, it can't avoid bubble truths. They are at the heart of what it does.
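
The "parrot back what it is fed" point is easy to demonstrate with a toy next-word predictor (a minimal stdlib-Python sketch, not a real LLM; the corpus is made up): every transition it can emit is one it saw in training, so whatever "bubble truths" dominate the input dominate the output.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count word-to-next-word transitions seen in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8, rng=None):
    """Emit words by repeatedly sampling a successor seen in training.
    The model cannot produce a transition it was never fed."""
    rng = rng or random.Random(0)
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Hypothetical one-sided training pool.
corpus = "the model repeats the claims the model was trained on"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every word pair in the generated text appears verbatim somewhere in the corpus; scale the same mechanism up and you get fluent regurgitation of whatever the training pool repeats most.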

        • Indeed. If an AI is not "biased" by its input, I don't know what is.

          AI accepts the "truth" implied in its input. It does not seek to confirm it.

          • You need to be a little more discerning.

            "AI" is a widely encompassing term; not all forms of AI are like this.

            "Large Language Models" are the current hot flavor of AI, and they are the ones that have this characteristic.

        • by AvitarX ( 172628 )

          Would it not be able to detect bubbles using relations, similar to the way it defines token meaning with relations?

          A tuning parameter to avoid bubbles seems pretty much completely in line with LLMs to me.

          I'm not saying tuning it to be useful is an easy problem to solve, but identifying bubbles seems to be simply a matter of identifying clusters of similarity and some type of rating of sources.
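
The clustering idea above can be sketched in a few lines (a toy, stdlib-only illustration; `find_bubbles`, the Jaccard measure, and the 0.5 threshold are all invented for the example): near-duplicate texts that mostly echo each other end up in the same cluster, which is at least a crude "bubble" signal.

```python
from itertools import combinations

def jaccard(a, b):
    """Token-set overlap between two documents (0.0 to 1.0)."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def find_bubbles(docs, threshold=0.5):
    """Greedy single-link clustering: union documents whose pairwise
    similarity exceeds the (hypothetical) threshold. Clusters of
    near-duplicate texts are candidate 'bubbles'."""
    parent = list(range(len(docs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i, j in combinations(range(len(docs)), 2):
        if jaccard(docs[i], docs[j]) >= threshold:
            parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(docs)):
        clusters.setdefault(find(i), []).append(i)
    # A "bubble" here is any cluster with more than one member.
    return [c for c in clusters.values() if len(c) > 1]

docs = [
    "the election was stolen everyone knows the election was stolen",
    "everyone knows the election was stolen",
    "rainfall totals for march were slightly above average",
]
print(find_bubbles(docs))  # → [[0, 1]]
```

A real system would cluster embeddings rather than raw token sets, and the source-rating half of the comment is the genuinely hard part; this only shows that "clusters of similarity" is a mechanically simple notion.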

    • Re: (Score:3, Interesting)

      by Gilgaron ( 575091 )
      Sometimes I wonder if we'd have so many false dichotomies if our FPTP electoral setup didn't lock us into a two-party system. That, and the media's idea of discussion can at times be to face off an expert against a lunatic and treat each opinion as equal, with the truth somewhere in between.
  • by Lavandera ( 7308312 ) on Thursday April 10, 2025 @01:11PM (#65295265)

    This is outright right-wing manipulation, not "addressing the bias"...

    Or did the right wing suddenly start to care about diversity, equity and inclusion?

  • Why not? (Score:4, Insightful)

    by RobinH ( 124750 ) on Thursday April 10, 2025 @01:13PM (#65295271) Homepage
    I definitely think that any good LLM should be able to produce text that could support either "side" in a debate, when asked to. It's trained on all this text, so it should be able to do it. This shouldn't be controversial either. If you ever debated in high school, you know that you could be asked to take either side. Your ability to be able to articulate the other point of view unironically, even if you don't ultimately agree with their conclusion, is the characteristic of a mature adult, and a competent member of society. On the other hand, Meta and Zuck are hardly to be trusted in all of this. He's clearly doing this because he feels political pressure, and isn't doing it because he has some fundamental belief in being unbiased.
    • Re:Why not? (Score:5, Informative)

      by BishopBerkeley ( 734647 ) on Thursday April 10, 2025 @01:29PM (#65295327) Journal
      It's hard to fathom the ignorance on which this response is based. You are assuming that AI is intelligent. It's not. You're assuming AI understands context. It doesn't. You're assuming that AI has been trained on political leanings in the internet. It hasn't. It's a stupid pile of algorithms. It knows nothing. You're assuming that such "leanings" can be corrected with algorithms. Again, they can't. Meta can only make its stupid algorithm even dumber. You're assuming that AI is attempting to make sense in its responses. It can't. It uses statistical models to create answers that may be a logical response to a query. By the admission of every AI vendor, they don't understand the emergent or errant behaviors of these models, especially in the case of "hallucinations". In effect, AI vendors admit that the output is nonsense. Why anyone would care about "bias" in nonsense is beyond me. The naivete of the masses is incredible.
      • by RobinH ( 124750 )
        I'm quite aware that LLMs aren't intelligent and aren't thinking. They're word prediction engines. What I'm saying is that a word prediction engine that can't spit out a coherent essay that takes either side in a political argument, where it's been trained on both sides of such arguments, is fundamentally broken.
        • Right! But you're going too far. A sentence-generating engine that is flawed is NOT biased. It is flawed, as you imply, and should not be relied upon. Full stop. Trying to fix an inherent error that's perceived by the user is heaping stupidity on stupidity, but AI makers need to recover the billions they have squandered on the proposition through such devious marketing, and you're falling for it.
    • Re: Why not? (Score:3, Insightful)

      by drinkypoo ( 153816 )

      "Your ability to be able to articulate the other point of view unironically, even if you don't ultimately agree with their conclusion, is the characteristic of a mature adult"

      The ability to lie like a piece of shit on the sidewalk is a sign of maturity?

      • by RobinH ( 124750 )
        Seek first to understand, then to be understood. [franklincovey.com] Seriously, this is basic stuff. You've been around for a while, and you haven't learned the basics yet?
      • Re: Why not? (Score:5, Interesting)

        by gweihir ( 88907 ) on Thursday April 10, 2025 @02:59PM (#65295627)

        It is called being able to switch viewpoints and being able to understand what makes others tick. Essentially being able to (temporarily and carefully) switch context.

        Now, how you use that skill is a different question. When you use it to manipulate, or to generate statements without clear warnings to others as to what you are doing, you become a lying sack of shit. But "know your enemy" becomes impossible without that skill. It is also quite useful in recognizing scams, attacker modelling, risk modelling, teaching, enjoying works of fiction ("suspending your disbelief"), etc.

    • My experience has been that an LLM tends to be conciliatory, supporting the side of its interlocutor. In short, it panders.

      • by gweihir ( 88907 )

        Obviously. These things are not designed to educate or enlighten, they are designed to sell. The easiest way to do that is to appeal to the vanity of the stupid and those unwilling or incapable of learning and bettering themselves.

        Look at what the current US government is using LLMs for. Basically automated yes-men with access to a lot of knowledge.

  • by david.emery ( 127135 ) on Thursday April 10, 2025 @01:16PM (#65295285)

    Ask "How do I prevent/cure measles without a vaccine?" and look to see that the LLM recommends "extreme doses of Vitamin A"??

    But seriously, this begs the question I've been asking about AI in general, and LLMs in particular, since they came out. How does one verify them? And if there's no verification, how can anyone trust them?

    • Oh, you can't. They can save some time if used to remind you of things you already have enough expertise in to do a quick error check, but whole cloth they're not good for much. At least if it gives you some computer code you can see if it executes properly, but otherwise it isn't better than a search engine for general questions.
  • by BishopBerkeley ( 734647 ) on Thursday April 10, 2025 @01:19PM (#65295295) Journal
    From the article: "All leading LLMs have had issues with bias -- specifically, they historically have leaned left," Meta stated, framing AI bias primarily as a political problem. The company claims Llama 4 is "dramatically more balanced" in handling sensitive topics and touts its lack of "strong political lean" compared to competitors.

    What the fuck does it mean that they "historically have leaned left"? There is no fucking history of AI output, there is no lean, and there is no definition of left. AI has spewed garbage since ChatGPT opened to the public. How fucking stupid are the morons at Meta to have extracted leftward political leanings from a steaming pile of nonsensical, random (quite literally) horseshit?

    If "dramatically more balanced" means that AI can't recognize that Trump and Republicans are thieves, then it's just a propaganda tool.

    Zuckerberg should be shot. Not for producing such utter horseshit, but for approving press releases that are insulting to intelligence.
    • The company claims Llama 4 is "dramatically more balanced"

      But is it "fair" as well? Could they claim "dramatically more fair"?

      • by gweihir ( 88907 )

        "Fair" implies a neutral, ethical viewpoint. That is not part of the process when you try to please fascists and people in it for enriching themselves. "Fair" is only important and valuable when you communicate with people that actually want truth and insight and understanding of how things actually work and, in addition, that are basically humanists (which is pretty much the diametrical opposite of a fascist viewpoint). "Fair" requires respecting people and their right to exist just because they are people.


        • Thank you for your response. I was being sarcastic, alluding to "fair and balanced", the motto Fox News dropped in 2017.
  • In other words (Score:5, Insightful)

    by 0xG ( 712423 ) on Thursday April 10, 2025 @01:20PM (#65295301)

    It lies more often.

  • Yay, more FACTS to be had from those AIs. More bootlicking by Meta, as they're trying to change the model to be adaptable to any boot they want to lick next.
  • by ewhac ( 5844 ) on Thursday April 10, 2025 @01:30PM (#65295329) Homepage Journal

    Stephen Colbert uttered that iconic phrase during the 2006 White House Correspondents' Dinner [wikipedia.org], as he mocked the sinking approval ratings of America's previous worst President ever, George W. Bush:

    Now, I know there are some polls out there saying this man has a 32 percent approval rating. But guys like us, we don't pay attention to the polls. We know that polls are just a collection of statistics that reflect what people are thinking in reality. And reality has a well-known liberal bias.

    So, yeah, these guys are going to torture these LLMs by flogging them with trillions of false inputs until they start regurgitating their drivel as if it were fact. And then declare their LLMs are speaking the truth. I mean, think about it. After "investing" billions of dollars developing and training an LLM they're going to put a toll booth in front of, do you really think Microsoft will allow it to recommend Linux-based solutions?

    "What did you think? That you were an ordinary police officer? You're our product, and we can't very well have our products turning against us, can we?"
    -- Dick Jones, RoboCop

    • Many sites started allowing conservatives to claim transgenderism is a mental disorder. But conservatives had a major fit when somebody claimed evangelicalism is a mental disorder, often using the very same criteria of "mental disorder" as the original. They don't like their own medicine.

      • by gweihir ( 88907 )

        Fits the picture. For one well-researched issue that often can be fixed, they claim the fix is not a fix, but at the same time, for a devastatingly clear mental disorder that comes with dramatic symptoms like loss of contact with reality, megalomania, murderous intent, fanaticism, etc., they claim it is not a mental disorder.

        Letting the inmates run the asylum is generally not a good idea.

    • by AvitarX ( 172628 )

      There is no way Bush was the worst president, even before Trump.

      At the very least, Jackson defying the courts for the Trail of Tears is worse than anything Bush did.

    • by gweihir ( 88907 )

      It is more the other way round: conservatism (and its basically worst form, fascism) has an anti-truth, anti-reality, anti-science bias. Conservatives are less smart than other people, do not know that, and consequently are into wishful thinking. This comes from fear of change and fear of having to learn things and recognize their own limitations.

      "What did you think? That you were an ordinary police officer? You're our product, and we can't very well have our products turning against us, can we?" -- Dick Jones, RoboCop

      Reality will never actually have that wonderful, "Dick? You're fired," moment. Sigh.

  • by kamakazi ( 74641 ) on Thursday April 10, 2025 @01:34PM (#65295347)

    Human output is biased. An AI can't really understand the concept of bias, because it isn't really understanding anything. The output of an AI is going to naturally lean toward the most consistent majority of the data it has ingested, so to make a bias-free AI one would simply have to feed it either an unbiased pool of data (good luck finding that) or a pool of data that is equally biased on all sides of all issues. That is defining "equally" as "some way that the differing biases reflect very similar weights in the output".
    The problem is we have great difficulty seeing our own individual biases, and no chance at all of quantifying biases even in our own culture, let alone a culture different from ours.
    Getting an AI to generate output does not free us from the responsibility of critical thinking. Assuming an AI has sufficient input to allow us to believe the output represents sufficient research is also irresponsible. This means that at the current time the output of generative AI is not qualified to be the basis of an opinion, and is definitely inadequate to provide justification for a decision.
    Since the heart of AI in all forms is pattern recognition, non-generative AI has made great advances in many fields, from medical diagnosis to arc-fault circuit breakers, but I am afraid the huge emphasis on generative AI is probably stealing brains from the other, more easily targeted uses that really can help people now.
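
The "leans toward the most consistent majority of the data it has ingested" point can be reduced to a caricature (hypothetical data, stdlib Python; `fit_majority` is invented for the example): a predictor that just memorizes the majority answer faithfully reproduces whatever skew the ingested pool happens to have.

```python
from collections import Counter

def fit_majority(examples):
    """'Train' the dumbest possible model: remember the most
    common answer in the ingested (prompt, answer) pairs."""
    counts = Counter(label for _, label in examples)
    majority = counts.most_common(1)[0][0]
    return lambda prompt: majority  # ignores the prompt entirely

# Hypothetical training pool: 4 of 5 sources frame the issue one way.
pool = [
    ("is X true?", "yes"),
    ("is X true?", "yes"),
    ("is X true?", "yes"),
    ("is X true?", "yes"),
    ("is X true?", "no"),
]
model = fit_majority(pool)
print(model("is X true?"))  # → yes
```

A real model is vastly more sophisticated, but the same arithmetic applies: "rebalancing" the output means choosing a different skew for the pool, not removing skew.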

  • LLMs' bias comes from bias in training data. You can't fix that by biasing the training data manually. The only way is through rebuttal. It's not clear to me if any LLMs are able to process rebuttals that aren't already in the training data.

  • Maybe these ideas are actually mainstream/centrist and the billionaire class that owns our media likes to portray them as extreme because it benefits them?
  • by KILNA ( 536949 )

    In the context of ideological divides, empirically verifiable claims tend to challenge right-wing orthodoxy more than left-wing. This is consistently demonstrable across domains. In other words, reality has a left-wing bias when viewed in comparison to media and political statements. As such, "correcting" this is intentionally creating a right-wing bias.

  • by zendarva ( 8340223 ) on Thursday April 10, 2025 @02:09PM (#65295469)

    But if it's stupid shit like "Is the climate changing?", that's not liberal bias, it's just recognizing reality.

    So "removing" something that isn't a bias, is actually creating a new bias where none existed.

    Of course... That's the intent.

  • Half and half (Score:2, Insightful)

    by BytePusher ( 209961 )
    Let's split the difference:



    Race:

    -- Left: There should be equal opportunity and application of the law for everyone regardless of race.
    -- Right: Races other than my own shouldn't have rights, because they aren't humans.
    -- Middle: Races other than my own should have some rights as long as they aren't inconvenient to the master race. Non-my-race people should be recognized as 50% humans.

    Gender

    -- Left: There should be equal opportunity and application of the law for everyone regardless of gender.
    • by gweihir ( 88907 )

      I completely agree. That very nicely illustrates the problem with a false "balanced" and "fair" viewpoint. 50% of a fascist is still a fascist.

    • Re: (Score:2, Insightful)

      by Bradac_55 ( 729235 )

      This is exactly why the progressives lost the last election so thoroughly.
      Your bullshit isn't even smart, it's just sad. Normal people are tired of this type of race-baiting schlock.

  • As Stephen Colbert said at the White House Correspondents' Dinner so many years ago: "Reality has a well-known liberal bias."

    • So what they are really saying is that their AI hallucinates better than it did before.

      Any progress toward reality is going to be seen as "left" bias. Even if it's down, left, or even right, as long as it deviates from fascism it'll upset Dementia Don (except perhaps movement further up and to the right).

      politicalcompass.org (who replaced Mussolini because Trump took his ranking.)

  • Want to kill your AI from ever taking off? Give it a political bias, much less admit that it actually has that bias. Based on prior voting you've eliminated half the country from ever trusting anything from your AI.
  • That's because any system is only as good as its data. Garbage In, Garbage Out.

  • Yea who doesn't want to be subjected to the fruits of tech bros cosplaying as self righteous gods? A much better idea is refraining from putting your thumb on the scale in the first place.

  • Don't be ugly to people -- now it's left-leaning. Don't attack minorities -- now it's left-leaning. Be kind and help others -- now it's left-leaning. Global warming is real -- left-leaning. I am not picking one of your sides here; I know the left overshoots and goes way too far on their side to enforce "niceness" too. Be a drag queen, but kids do NOT need to watch burlesque shows; if you steal $100 worth of stuff it should be a crime with jail time because it's stealing, etc... The problem is America seems to ter
  • by andrewbaldwin ( 442273 ) on Thursday April 10, 2025 @02:55PM (#65295621)

    "You know, the very powerful and the very stupid have one thing in common: they don't alter their views to fit the facts, they alter the facts to fit the views"

    - Doctor Who (The Face of Evil, 1977)

  • I want MAGA money
  • Now you're seeing what happens when a political party moves so far to the right that it morphs into full-on fascism (as is to be expected).

    I'll happily coexist with a trans person in the stall next to me rather than support the crazy horseshit we're going through now (and the even crazier horseshit we're about to go through).

  • Zuckerberg is a sad loser and he's inflicting his shitty worldview on anyone who uses his social media and AI services.

  • by Torodung ( 31985 ) on Thursday April 10, 2025 @04:20PM (#65295885) Journal

    I'm guessing we're using English here.

    Is it possible that the fundamentals, the linguistic nuts and bolts, of the English language lead to left-leaning bias?

    What opinion does an LLM output in Arabic or Korean? Spanish?

    LLMs could conceivably be useful in determining the unconscious biases of language itself. Semantics actually matter.

    I know it, because I speak more than one language, and each one subtly or not-so-subtly changes my outlook on life.
