Nvidia Launches Entry-Level AI Computer for Small Developers
Nvidia unveiled a $249 version of its Jetson AI computer Tuesday, targeting hobbyists and small companies with a device that offers 70% more processing power than its predecessor at half the cost.
The Jetson Orin Nano Super functions as a portable AI brain for robotics and industrial automation, allowing developers to run AI computations directly without data center connections. The palm-sized device, demonstrated by Nvidia founder Jensen Huang, uses less advanced chips than the company's high-end products. While Nvidia primarily serves major companies and AI startups, the budget-friendly Jetson line aims to make AI development more accessible to students and smaller developers working on drones and cameras.
"Portable brains" - we're back in the 1960s (Score:3)
It's been a while since people routinely called limited-memory, Turing-machine-equivalent systems "electronic brains". It's kind of sweet.
Re: (Score:2)
This bullshit again? Ugh...
Unbounded memory is not the same as unlimited memory. Your home computer is Turing complete. So are you.
Go review basic automata theory, then stop repeating that nonsense.
Re: "Portable brains" - we're back in the 1960s (Score:2)
Then again, your home computer is an application dump with the app creator being the node controller. I've always suspected any AI application is possible as a distributed grid environment.
Just like McD (Score:2)
Hook them up early and you'll have customers for life.
I hadn't followed the Jetson line well enough. (Score:2)
I didn't realize how cheap some of the options were relative to the amount of VRAM. This Nano is only 8GB, but like you can get a 64GB one [amazon.com] for $2k.
The Orin Nano would be great for an AI-powered drone. Small, and only 7-15 watts.
Looking around, though, it sounds like it can be hard to get arbitrary software stacks running on them. [reddit.com] Hmm. Will still need to think about this for possible future purchases....
Jetson requires Tegra CUDA fork (Score:2)
...
Looking around, though, it sounds like it can be hard to get arbitrary software stacks running on them. [reddit.com] Hmm. Will still need to think about this for possible future purchases....
The major issue is that, unlike Nvidia's arm64 datacenter GPUs, the Tegra mobile chips like the Jetsons cannot run Nvidia's mainline CUDA, so many arm64 Python wheels, etc. fall back to CPU-only computation. (No one but Nvidia really does builds against the Tegra-flavor CUDA https://catalog.ngc.nvidia.com... [nvidia.com], and Nvidia is frequently inconsistent with the versions and compile-time options of the various libs they build and release.)
It can be a whole song and dance to get the special fork of CUDA that Nvidia maintains working.
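One quick sanity check before trusting a pip-installed wheel on a Jetson is to probe whether a CUDA runtime library is even loadable. A minimal standard-library sketch (the sonames are the usual Linux ones; this is a first-pass check only, since a Tegra-fork runtime can be present while a mainline-linked wheel still fails):

```python
import ctypes

def cuda_runtime_present() -> bool:
    """Return True if a CUDA runtime shared library can be dlopen'd.

    Tries a few common sonames. On a Jetson carrying only the Tegra
    CUDA fork, mainline-linked wheels may still break even when this
    returns True, so treat a True here as necessary, not sufficient.
    """
    for soname in ("libcudart.so", "libcudart.so.12", "libcudart.so.11.0"):
        try:
            ctypes.CDLL(soname)
            return True
        except OSError:
            continue
    return False

print(cuda_runtime_present())
```

On a machine with no CUDA install this simply prints False, which is exactly the CPU-only fallback situation described above.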
Re: Jetson requires Tegra CUDA fork (Score:2)
Looks like you are back to writing your own client/server or just using the available drivers/platforms
Re: Jetson requires Tegra CUDA fork (Score:2)
That's not a fork, is it?
The runtime is different, but CUDA works the same afaik.
So you may need to rebuild, but the code should be the same.
Did I miss something?
Re: I hadn't followed the Jetson line well enough. (Score:2)
Really? All the code I ran on the platform (well, the generation before) worked out of the standard configure / make / make install sequence.
Fundamentally it runs Linux. And CUDA works the same afaik.
TCO of building a system including training data (Score:3)
Does anyone have details on what it would take to get a fully trained system on this computer, covering everything from training on an xGB data set, to loading the AI model, to running verification tests?
Small developers? (Score:2)
For the vertically challenged?
Would rather have a plugin (Score:2)
Rather than reinvent PCs, they should make it connectable to a PC or Mac via USB and run the UI and high-level coordination on the PC or Mac. That way we don't need to learn a different OS. Use Tk or Qt as the UI engine so it can run on both with only minor tweaks.
Making it a plugin also facilitates blurring the distinction between a cloud service and a local box, such that one can upscale or downscale as needed.
But it looks like Nvidia is targeting embedded use, such as mobile robots. This is not necessarily...
Re: Would rather have a plugin (Score:2)
I've used the previous version of this to teach distributed computing.
This is not meant to be a computational accelerator; it's an embedded device.
They run Linux with some Nvidia drivers on it.
You can control it with GPIO if you want, or get tty access to the device.
If you want high bandwidth, I think Ethernet is probably the best bet.
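The client/server split suggested above can be sketched in a few lines: the Jetson listens on TCP, the host PC ships a length-prefixed payload and reads the result back. `run_model` here is a hypothetical stand-in for whatever inference actually runs on the device, and the loopback demo just shows the framing:

```python
import socket
import struct
import threading

def run_model(payload: bytes) -> bytes:
    # Hypothetical stand-in: real code would run GPU inference here.
    return payload[::-1]

def _recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket or raise on early close."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

def serve_one(server: socket.socket) -> None:
    """Device side: accept one connection, run the model, reply."""
    conn, _ = server.accept()
    with conn:
        (n,) = struct.unpack("!I", _recv_exact(conn, 4))
        result = run_model(_recv_exact(conn, n))
        conn.sendall(struct.pack("!I", len(result)) + result)

def send_frame(addr, frame: bytes) -> bytes:
    """Host-PC side: send one length-prefixed frame, return the result."""
    with socket.create_connection(addr) as conn:
        conn.sendall(struct.pack("!I", len(frame)) + frame)
        (n,) = struct.unpack("!I", _recv_exact(conn, 4))
        return _recv_exact(conn, n)

if __name__ == "__main__":
    # Demo on loopback; on real hardware the server runs on the Jetson
    # and the host connects over Ethernet or USB networking.
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    t = threading.Thread(target=serve_one, args=(server,))
    t.start()
    print(send_frame(server.getsockname(), b"hello"))  # prints b'olleh'
    t.join()
    server.close()
```

The length prefix is the only protocol; swapping in a real model on the device side doesn't change the host code at all, which is what makes the "plugin" workflow workable without learning a different OS.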
Data point (Score:2)
GPU: NVIDIA Ampere architecture with 1024 CUDA cores and 32 tensor cores
CPU: 6-core Arm Cortex-A78AE v8.2 64-bit CPU
Memory: 8GB 128-bit LPDDR5 102 GB/s
Storage: SD card slot and external NVMe
Power: 7-25W
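Those memory numbers put a ceiling on LLM inference speed: generating one token has to stream every active weight through the memory bus, so 102 GB/s divided by the model's footprint gives a quick upper bound. A back-of-envelope sketch (the model sizes are illustrative assumptions, not measurements, and KV cache traffic is ignored):

```python
def max_tokens_per_sec(bandwidth_gb_s: float, model_weights_gb: float) -> float:
    """Rough upper bound: each generated token reads every weight once."""
    return bandwidth_gb_s / model_weights_gb

# Orin Nano Super memory bandwidth from the spec above.
BW = 102.0

# Illustrative model footprints (weights only), chosen to fit in 8GB:
for name, gb in [("7B @ 4-bit", 3.5), ("7B @ 8-bit", 7.0)]:
    print(f"{name}: <= {max_tokens_per_sec(BW, gb):.1f} tok/s")
```

So a 4-bit 7B model tops out around 29 tok/s on paper; real throughput will be lower once compute and cache overheads bite.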