- Planned software update adds external accelerator feature to Nvidia’s DGX Spark
- MacBook Pro users can send heavy AI processing to external system
- Performance and tooling changes focus on open source local AI
At CES 2026, Nvidia revealed that it is planning a software update for the DGX Spark that will significantly expand the device’s capabilities.
Powered by the GB10 Grace Blackwell superchip, Nvidia’s small powerhouse combines CPU and GPU cores with 128GB of unified memory, allowing users to load and run large language models locally without relying on cloud infrastructure.
Early reviews of the Spark, while broadly positive, noted limitations in the software, something Nvidia is now looking to address. A key part of the change will be expanded support for open source AI frameworks and models.
Good news for MacBook Pro users
The move is a software-only change, with no new hardware involved. For organizations that rely on open tools, it should reduce custom configuration work and help keep systems running consistently as models and frameworks evolve.
The mini PC update will add support for tools such as PyTorch, vLLM, SGLang, llama.cpp and LlamaIndex, as well as models from Qwen, Meta, Stability AI and Wan.
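For developers, running a model on the Spark through one of these frameworks should look much like it does on any other Linux machine with a GPU. As a rough illustration, here is a minimal sketch using vLLM's offline Python API; it assumes vLLM is installed on the device, and the model name is purely an example:

```python
# Minimal local-inference sketch using vLLM's offline Python API.
# Assumes vLLM is installed on the DGX Spark; the model name is illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3-30B-A3B")  # weights load into local unified memory, no cloud involved
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Summarize why local inference matters."], params)
print(outputs[0].outputs[0].text)
```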
Nvidia says users can expect performance gains of up to 2.5x compared with the Spark at launch, driven primarily by TensorRT-LLM updates, tighter quantization, and decoding improvements.
One example shared by Nvidia involved Qwen3-235B, which more than doubles in performance when moving from FP8 to NVFP4 with speculative decoding. Other workloads, including Qwen3-30B and Stable Diffusion 3.5 Large, reportedly show smaller gains.
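Speculative decoding, one of the techniques Nvidia credits for the speedup, uses a small, fast draft model to propose several tokens at once, which the large model then verifies in a single pass. The toy sketch below illustrates the idea only; both "models" are stand-in functions, and this is not Nvidia's TensorRT-LLM implementation:

```python
import random

# Toy illustration of speculative decoding: a cheap draft model proposes
# k tokens, and the expensive target model verifies them, accepting the
# longest correct prefix. Both models here are stand-in functions.
VOCAB = list("abcdefgh")

def target_next(context):
    # "Expensive" target model: deterministic toy rule for demonstration.
    return VOCAB[sum(map(ord, context)) % len(VOCAB)]

def draft_next(context):
    # Cheap draft model: usually agrees with the target, sometimes guesses.
    return target_next(context) if random.random() < 0.8 else random.choice(VOCAB)

def speculative_step(context, k=4):
    # Draft proposes k tokens sequentially (cheap).
    proposed, ctx = [], context
    for _ in range(k):
        tok = draft_next(ctx)
        proposed.append(tok)
        ctx += tok

    # Target verifies the proposals; accept tokens until the first mismatch,
    # then substitute the target's own token and stop.
    accepted, ctx = [], context
    for tok in proposed:
        expected = target_next(ctx)
        accepted.append(tok if tok == expected else expected)
        ctx += accepted[-1]
        if tok != expected:
            break
    return context + "".join(accepted)

context = "prompt"
for _ in range(5):
    context = speculative_step(context)
print(context)
```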
The update also introduces DGX Spark guides that bundle tools, models, and configuration steps into reusable workflows, designed to run locally without rebuilding entire environments.
One interesting demo combined a MacBook Pro with DGX Spark for AI video generation. Nvidia showed off a 4K pipeline that took eight minutes to complete on the laptop and about a minute when the heavy compute steps were offloaded to the Spark.
The approach keeps creative tools on the laptop while Spark handles the heavy processing, bringing AI video work closer to interactive use rather than long batch runs.
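In practice, this kind of offload can be as simple as pointing the laptop-side tool at an inference server running on the Spark over the local network. The sketch below is a hypothetical client using the OpenAI-compatible HTTP API that servers such as vLLM expose; the hostname, port, and model name are placeholders, not details from Nvidia's demo:

```python
import requests

# Hypothetical client-side sketch: the laptop keeps the creative tools and
# sends only the heavy generation step to a DGX Spark on the local network.
# Hostname, port, and model name are placeholders; the request shape follows
# the OpenAI-compatible API that servers such as vLLM expose.
SPARK_URL = "http://spark.local:8000/v1/chat/completions"

payload = {
    "model": "qwen3-30b",  # whatever model the Spark happens to be serving
    "messages": [
        {"role": "user", "content": "Describe a storyboard for a 10-second clip."}
    ],
    "max_tokens": 256,
}

resp = requests.post(SPARK_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```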
DGX Spark can also act as a background processor for 3D workflows, generating assets while creatives continue working on their core systems.
An on-premises Nsight Copilot is also included, providing CUDA development assistance without sending code or data to the cloud.
Taken together, the planned changes will move the DGX Spark from a standalone developer system to a flexible on-premises AI node capable of supporting laptops, workstations, and edge deployments.
Via StorageReview
TechRadar will cover this year's CES extensively and will bring you all the important announcements as they happen. Go to our CES 2026 news page for the latest stories and our hands-on verdicts on everything from wireless TVs and foldable screens to new phones, laptops, smart home devices and the latest in artificial intelligence. You can also ask us a question about the show on our CES 2026 Live Q&A and we will do our best to answer it.
And don't forget to follow us on TikTok and WhatsApp for the latest from the CES show floor!




