
After Meta Cheating Allegations, 'Unmodified' Llama 4 Maverick Model Tested - Ranks #32 (neowin.net)

Remember how last weekend Meta claimed its "Maverick" AI model (in the newly-released Llama 4 series) beat GPT-4o and Gemini Flash 2 "on all benchmarks... This thing is a beast"?

And then how, within a day, several AI researchers noted that even Meta's own announcement admitted the Maverick tested on LM Arena was an "experimental chat version," as TechCrunch pointed out. ("As we've written about before, for various reasons, LM Arena has never been the most reliable measure of an AI model's performance. But AI companies generally haven't customized or otherwise fine-tuned their models to score better on LM Arena — or haven't admitted to doing so, at least.")

On Friday, TechCrunch reported what happened when LM Arena tested the unmodified release version of Maverick (Llama-4-Maverick-17B-128E-Instruct).

It ranked 32nd.

"For the record, older models like Claude 3.5 Sonnet, released last June, and Gemini-1.5-Pro-002, released last September, rank higher," notes the tech site Neowin.