AI Technology

Nvidia Launches Entry-Level AI Computer for Small Developers 29

Nvidia unveiled a $249 version of its Jetson AI computer Tuesday, targeting hobbyists and small companies with a device that offers 70% more processing power than its predecessor at half the cost.

The Jetson Orin Nano Super functions as a portable AI brain for robotics and industrial automation, allowing developers to run AI computations directly without data center connections. The palm-sized device, demonstrated by Nvidia founder Jensen Huang, uses less advanced chips than the company's high-end products. While Nvidia primarily serves major companies and AI startups, the budget-friendly Jetson line aims to make AI development more accessible to students and smaller developers working on drones and cameras.
This discussion has been archived. No new comments can be posted.

  • by AleRunner ( 4556245 ) on Tuesday December 17, 2024 @11:07AM (#65019477)

    It's been a while since we used to hear about people calling limited-memory, Turing-machine-equivalent systems "electronic brains". It's kind of sweet.

    • I thought the federation banned thinking machines. When do we move on to mentats for computation?
    • by narcc ( 412956 )

      This bullshit again? Ugh...

      Unbounded memory is not the same as unlimited memory. Your home computer is Turing complete. So are you.

      Go review basic automata theory, then stop repeating that nonsense.

      • Then again your home computer is an application dump with the app creator being the node controller. I've always suspected any AI application possible as a distributed grid environment.

      • This bullshit again? Ugh...

        Unbounded memory is not the same as unlimited memory. Your home computer is Turing complete. So are you.

        Go review basic automata theory, then stop repeating that nonsense.

        If it's got finite memory and a finite processor then by definition it is a finite state machine. Think about it. The only possible way you could justify your claim is by including the other stuff on the network. The language definition may be Turing complete, however the machine it runs on is not.

        • by narcc ( 412956 )

          Again, you REALLY need to review basic automata theory. You're completely full of shit.

          • Teach your mother to suck eggs. I'll point you to a Stack Exchange answer because apparently that's about your level, but a finite automaton cannot be Turing complete [stackexchange.com]. There is an equivalent Wikipedia article, but don't go there, it'll probably blow your brain.

            A processor is a state machine with a finite number of states. Computer memory has a finite number of states. The full potential set of states of a computer like this is bounded by the number of possible states of the processor multiplied by the number of
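The counting argument in that comment can be sketched with toy numbers (the function and values are illustrative, not from any real machine):

```python
# Upper bound on distinct configurations of a machine with finite
# internal state and finite memory: every configuration is a pair
# (processor state, memory contents), so the counts multiply.
def total_configurations(processor_states: int, memory_bits: int) -> int:
    return processor_states * 2 ** memory_bits

# A toy CPU with 16 internal states and one byte of RAM:
print(total_configurations(16, 8))  # 4096

# Real machines are the same in kind, just astronomically larger:
# 8 GB of RAM alone gives 2**(8 * 2**30 * 8) memory states -- finite,
# though far larger than anything reachable in practice.
```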

            • by narcc ( 412956 )

              I know the "argument". Every moron destined to drop out in their first year brings it up when they first encounter it. You'll see the same bullshit repeated endlessly online by "self-taught" wannabe computer scientists like you. You'll notice that we have not updated the textbooks to include that silly nonsense. Can you figure out why? Do you think it's because you're a rogue outsider bravely taking on the educated elites? LOL! You're just a crackpot that doesn't understand the subject well enough t

    • Very 1960s indeed; the product itself is named after a popular 1960s cartoon series about family life in the distant future

      https://en.wikipedia.org/wiki/... [wikipedia.org]

  • Hook them up early and you'll have customers for life.

  • by Rei ( 128717 ) on Tuesday December 17, 2024 @11:16AM (#65019503) Homepage

    I didn't realize how cheap some of the options were relative to the amount of VRAM. This Nano is only 8GB, but like you can get a 64GB one [amazon.com] for $2k.

    The Orin Nano would be great for an AI-powered drone. Small, and only 7-15 watts.

    Looking around, though, it sounds like it can be hard to get arbitrary software stacks running on them. [reddit.com] Hmm. Will still need to think about this for possible future purchases....

    • by bjamesv ( 1528503 ) on Tuesday December 17, 2024 @12:09PM (#65019661)

      ...

      Looking around, though, it sounds like it can be hard to get arbitrary software stacks running on them. [reddit.com] Hmm. Will still need to think about this for possible future purchases....

      The major issue is that, unlike Nvidia's arm64 datacenter GPUs, the Tegra mobile chips like the Jetsons cannot run Nvidia's mainline CUDA, so many arm64 Python wheels, etc. can only use CPU-only computation. (No one but Nvidia really does builds against the Tegra-flavor CUDA https://catalog.ngc.nvidia.com... [nvidia.com], and Nvidia is frequently inconsistent about the versions and compile-time options of the various libs they build and release.)

      It can be a whole song and dance to get the special fork of CUDA that Nvidia maintains specifically for these Tegra/Android GPUs installed (doubly so if you're containerizing your system components) and to rebuild all your libs from source. Even then, many PyTorch features etc. are not fully built out for arm64 if you stray too far from the beaten path.
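As a rough way to tell whether a box is one of these L4T/Tegra systems before deciding which wheels to pull, something like the following heuristic is common (the marker file is a convention of NVIDIA's L4T packaging, not a guaranteed API):

```python
import os
import platform

def looks_like_jetson() -> bool:
    # NVIDIA's L4T (Linux for Tegra) releases ship /etc/nv_tegra_release;
    # its presence on an aarch64 machine is a common, though not guaranteed,
    # sign that you need Tegra-flavor CUDA builds rather than mainline ones.
    return (platform.machine() == "aarch64"
            and os.path.exists("/etc/nv_tegra_release"))

print(looks_like_jetson())
```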

      • Looks like you are back to writing your own client/server or just using the available drivers/platforms

      • That's not a fork is it?
        The runtime is different, but cuda works the same afaik.
        So you may need to rebuild. But code should be the same.
        Did I miss something?

      • by tlhIngan ( 30335 )

        The major issue is, unlike Nvida's arm64 datacenter GPUs the Tegra mobile chips like the Jetsons cannot run the Nvidia mainline CUDA so many arm64 Python wheels, etc. can only use cpu-only computation (No one but Nvidia really does builds against the Tegra-flavor CUDA https://catalog.ngc.nvidia.com... [ngc.nvidia.com] and Nvidia is frequently inconsistent with the version and compile time options of the various libs they build and release.)

        It can be a whole song and dance to get the special fork of CUDA Nvidia maintains spe

    • really? All the code I ran on the platform (well, the generation before) worked with the standard configure / make / make install sequence.
      Fundamentally it runs Linux. And CUDA works the same afaik.

      • really? Every code I ran on the platform (well the generation before) worked out of standard configure make makeinstall sequence. Fundamentally it runs linux. And cuda works the same afaik.

        Yes, best-case scenario you just have to rebuild your code and all your deps from source, like you did. If you had the Jetpack SDK installed and set up correctly for CUDA development work, that means you had l4t-cuda installed, so when you make you are building against the Tegra version of CUDA.

        Go ahead and take an arm64 binary built against mainline CUDA: it will not run on Jetson/Tegra. You have to seek out builds specifically done against the Tegra CUDA lib (l4t) or make sure you are doing all your own builds. T

        • by Rei ( 128717 )

          What I'd like to know is... assuming one gets CUDA installed on it, which is kind of a prerequisite for everything... can I pip install e.g.:

          pytorch
          transformers
          peft
          tokenizers
          bitsandbytes
          accelerate
          deepspeed
          flash-attn
          sentencepiece
          wandb
          xformers
          numpy
          scipy
          scikit-learn
          ...
          and so on?
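One way to answer that empirically after a pip run is to check which of those import at all, then let torch report its own CUDA linkage. A minimal sketch (only a subset of the list above; note that a couple of pip names differ from their import names, e.g. scikit-learn imports as sklearn):

```python
import importlib.util

def missing_packages(names):
    """Return the subset of names that cannot be imported in this environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Import names for part of the stack listed above.
stack = ["torch", "transformers", "numpy", "scipy", "sklearn"]
print(missing_packages(stack))

# Once torch itself imports, torch.version.cuda and
# torch.cuda.is_available() are the real smoke test for whether the
# wheel was built against a CUDA runtime the Jetson can actually use.
```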

        • ah, I see what you are saying. I've been building directly on the Jetson, so I didn't notice the binaries were not compatible.
          That seems simple enough to set up to rebuild from deb-src packages though.

  • by will4 ( 7250692 ) on Tuesday December 17, 2024 @11:57AM (#65019609)

    Does anyone have details on what it would take to get a fully trained system on this computer, including everything from training on a xGB data set, to loading the AI model, to running verification tests?

  • For the vertically challenged?

  • by Tablizer ( 95088 ) on Tuesday December 17, 2024 @01:20PM (#65019879) Journal

    Rather than reinvent PCs, they should make it connectable to a PC or Mac via USB and run the UI and high-level coordination on the PC or Mac. That way we don't need to learn a different OS. Use Tk or Qt as the UI engine so it can run on both with only minor tweaks.

    Making it a plug-in also facilitates blurring the distinction between a cloud service and a local box, such that one can upscale or downscale as needed.

    But it looks like Nvidia is targeting embedded use, such as mobile robots. This is not necessarily mutually exclusive, but I realize it's more expensive to cater to both workstation and embedded uses at the same time. I wonder if MS is working on a Windows-friendly plug-in box?

    • I've used the previous version of this to teach distributed computing.
      It is not meant to be a computational accelerator; it is an embedded device.
      They run Linux with some Nvidia drivers on it.
      You can control it via GPIO, or get tty access to the device.
      If you want high bandwidth, I think Ethernet is probably the best bet.

  • Data point (Score:4, Informative)

    by ElizabethGreene ( 1185405 ) on Tuesday December 17, 2024 @02:35PM (#65020107)

    GPU: NVIDIA Ampere architecture with 1024 CUDA cores and 32 tensor cores
    CPU: 6-core Arm Cortex-A78AE v8.2 64-bit CPU
    Memory: 8GB 128-bit LPDDR5 102 GB/s
    Storage: SD card slot and external NVMe
    Power: 7-25W
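The quoted memory numbers are self-consistent: a 128-bit bus at LPDDR5 running 6400 MT/s (the transfer rate is an assumption on my part; it is not in the spec lines above) works out to the quoted ~102 GB/s:

```python
# Peak bandwidth = (bus width in bytes) * (transfers per second).
bus_width_bits = 128
transfer_rate_mts = 6400                 # mega-transfers/s, assumed for LPDDR5
bytes_per_transfer = bus_width_bits // 8
bandwidth_gb_s = bytes_per_transfer * transfer_rate_mts / 1000
print(bandwidth_gb_s)  # 102.4
```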

  • Wow...I didn't realize NVidia had a height requirement for its computers; and I'm over 6' tall so not small.

    Perhaps NVidia needs to call it the 'midget', like the Japanese midget submarines of WW2?

    JoshK.

  • I've been playing around with CCTV recording and I know that Frigate NVR supports these things. You can use an AI model to do local object detection. For example, you can have rules stating that if a person is in a zone and loiters for x amount of time, then raise an alert. If it's a cat or dog, then ignore it. Pretty cool.

"The great question... which I have not been able to answer... is, `What does woman want?'" -- Sigmund Freud
