NVIDIA
2026-03-11
Vendor Strategy · Importance: High · Confidence: 90%

NVIDIA Jetson Advances Localized Deployment of Open-Source AI Models at Edge

Summary

NVIDIA's Jetson edge AI platform enables localized, on-device deployment of open-source generative AI models such as Qwen3 4B and Mistral 3. The hardware lineup spans Jetson Orin Nano through Jetson Thor, integrating compute and memory on a system-on-module (SoM) to simplify device design. On the performance side, Jetson Thor reaches 52 tokens/sec for Mistral 3 inference.

Key Takeaways

NVIDIA Jetson brings open-source generative AI models to edge devices and robots, enabling low-latency, offline autonomous intelligence. Caterpillar's CES demo of its Cat AI assistant runs on Jetson Thor with a Nemotron voice model and a local Qwen3 4B (served via vLLM) for real-time in-cab voice interaction.
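As a rough illustration of the local-serving pattern described above, a client might talk to an on-device vLLM instance through its OpenAI-compatible endpoint. This is a minimal sketch, not Caterpillar's actual integration: the endpoint URL, model identifier, and system prompt below are assumptions.

```python
import json

# Assumed local vLLM endpoint (vLLM's OpenAI-compatible server listens
# on port 8000 by default); the model name is illustrative only.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"
MODEL = "Qwen/Qwen3-4B"

def build_chat_request(user_text: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat-completion payload for a local server."""
    return {
        "model": MODEL,
        "messages": [
            # Hypothetical system prompt for a cockpit voice assistant.
            {"role": "system", "content": "You are an in-cab voice assistant."},
            {"role": "user", "content": user_text},
        ],
        "max_tokens": max_tokens,
        "stream": True,  # stream tokens for low perceived latency
    }

payload = build_chat_request("Raise the bucket to transport height.")
print(json.dumps(payload, indent=2))
```

Because the endpoint is on-device, voice requests never leave the vehicle, which is what makes the offline, low-latency interaction possible.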

Hardware spans the Jetson Orin Nano 8GB to Jetson Thor, supporting frameworks such as TensorRT (TRT), llama.cpp, and vLLM. Performance: Mistral 3 on Jetson Thor reaches 52 tokens/sec; Qwen 3.5-35B-A3B reaches 35 tokens/sec. Isaac GR00T N1.6 enables end-to-end, on-board real-time perception and action.
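To put those throughput figures in user-facing terms, a quick back-of-the-envelope sketch: the 52 and 35 tokens/sec rates come from the text, while the 300-token reply length is an assumed example.

```python
def generation_time_s(num_tokens: int, tokens_per_sec: float) -> float:
    """Time to generate num_tokens at a steady decode rate."""
    return num_tokens / tokens_per_sec

# Rates from the text: Mistral 3 at 52 tok/s, Qwen 3.5-35B-A3B at 35 tok/s.
for name, rate in [("Mistral 3", 52.0), ("Qwen 3.5-35B-A3B", 35.0)]:
    t = generation_time_s(300, rate)  # assumed 300-token reply
    print(f"{name}: ~{t:.1f}s for a 300-token reply")
```

At these rates a typical voice-assistant reply completes in a few seconds on-device, which is what makes real-time interaction without a cloud round trip plausible.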

The ecosystem is expanding from research (e.g., NYU robotics) to general developers, with tools like OpenClaw for building 24/7 private AI assistants that process all data locally, preserving privacy at zero API cost.
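The "full local data processing" claim above amounts to never sending requests off-box. A minimal guard for that policy might look like the following; the helper name and localhost-only rule are illustrative assumptions, not part of OpenClaw.

```python
from urllib.parse import urlparse

# Hosts considered on-device; anything else would send data off the box.
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def is_local_endpoint(url: str) -> bool:
    """Return True only if the inference endpoint stays on this machine."""
    host = urlparse(url).hostname
    return host in LOCAL_HOSTS

print(is_local_endpoint("http://localhost:8000/v1/chat/completions"))
print(is_local_endpoint("https://api.example.com/v1/chat/completions"))
```

A check like this, applied before any request is issued, is one simple way an assistant can enforce that inference stays private and incurs no per-call API cost.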

Why It Matters

NVIDIA is strengthening its edge AI strategy: hardware integration and open-source model optimization lower the deployment barrier, which could accelerate innovation in industrial automation and robotics and advance the broader convergence of edge computing and AI. ...

Source: NVIDIA Newsroom