AI Software Hardware

A Look At AI Benchmarking For Mobile Devices In a Rapidly Evolving Ecosystem (hothardware.com) 10

MojoKid writes: AI and machine learning performance benchmarks have been well explored in the data center, but are fairly new and unestablished for edge devices like smartphones. While AI implementations on phones are typically limited to inferencing tasks like speech-to-text transcription and camera image optimization, there are real-world neural network models employed on mobile devices and accelerated by their dedicated processing engines. A deep-dive look at HotHardware into three popular AI benchmarking apps for Android shows not only that not all platforms are created equal, but also that performance results can vary wildly depending on the app used for benchmarking.

Generally speaking, it all hinges on which neural networks (NNs) the benchmarks are testing and which numeric precision is being tested and weighted. Most mobile apps that currently employ some level of AI make use of INT8 (quantized 8-bit integer) math. While INT8 offers less precision than FP16 (16-bit floating point), it's also more power-efficient and offers enough precision for most consumer applications. Typically, Qualcomm Snapdragon 865-powered devices offer the best INT8 performance, while Huawei's Kirin 990 in the P40 Pro 5G offers superior FP16 performance. Since INT8 precision for NN processing is more common in today's mobile apps, it could be said that Qualcomm has the upper hand, but the landscape in this area is ever-evolving to be sure.
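As a rough illustration of the INT8-vs-FP16 trade-off the summary describes, the sketch below quantizes a hypothetical weight tensor (random values, not a real model) with the common symmetric INT8 scheme and compares the round-trip error against simply rounding to FP16. This is a toy comparison under stated assumptions, not how any specific benchmark app measures precision.

```python
import numpy as np

# Hypothetical weights (illustrative only; real mobile NN weights are
# trained values, not random samples).
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.2, size=1000).astype(np.float32)

# Symmetric INT8 quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(weights).max() / 127.0
int8_w = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequant = int8_w.astype(np.float32) * scale  # back to float for comparison

# FP16 simply rounds each value to half precision.
fp16_w = weights.astype(np.float16).astype(np.float32)

int8_err = np.abs(weights - dequant).mean()
fp16_err = np.abs(weights - fp16_w).mean()
print(f"mean INT8 round-trip error: {int8_err:.6f}")
print(f"mean FP16 round-trip error: {fp16_err:.6f}")
```

On data like this, FP16 retains noticeably more precision per value, while INT8's coarser grid is what buys the power and throughput advantage on mobile silicon.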

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Then I wouldn't have to bother with them again for the rest of my life.
  • On a more serious note does anyone have comparisons between INT8 and FP16 showing _actual_ differences in quality?

    • I'd liken the integer results to digital, whereas the floating point results are closer to an analog system, or at least as close as a digital system can come to emulating an analog one.
      • Maybe.

        I would expect that after N nodes there shouldn't be TOO much of a difference, due to 8^N vs 11^N being "large enough" (having enough precision as a group).

        I see that nVidia has this page comparing FP16 and FP32 [nvidia.com] which gives insight into the FP16 format.

  • Sick of hearing 'AI' this and 'AI' that. How about we work on improving human intelligence instead? Doesn't seem to be a whole lot of that around lately.
  • Comment removed based on user account deletion
    • And that would be completely inaccurate. Neural networks and libraries are employed in multiple real-world applications on phones currently. Your on-device speech-to-text processing doesn't happen unless the machine is listening to your spoken word, inferring what you're saying, and translating it to text. This is done with anything from tensor cores, which do exist in modern smartphone SoCs like Snapdragon chips, to on-chip DSP complexes. Also, machine vision to improve image capture and
  • As per the article here the other day, recent "advances" in AI seem to be mainly vaporware designed to con public and especially private investors.
    Certainly, the "AI" cameras I have tried on smartphones (even high-end ones) do not seem to do much; as for "voice recognition," I thought Google and others uploaded the audio and then processed it in the backend?

