- XpertStation WS300 supports billion-parameter models without relying on cloud infrastructure
- Dual 400GbE LAN ports enable high-speed distributed multi-node AI workloads
- Unified HBM3e GPU and LPDDR5X CPU memory maximize bandwidth for AI
MSI has officially launched the XpertStation WS300, an AI desktop workstation based on Nvidia’s DGX Station architecture.
This system is designed to handle large and demanding language models, generative AI, and advanced data science workloads.
The platform is powered by the Nvidia GB300 Grace Blackwell Ultra desktop superchip and supports up to 748GB of coherent unified memory.
Unified memory architecture for high-bandwidth AI processing
The XpertStation WS300 combines HBM3e GPU memory with LPDDR5X CPU memory for high-bandwidth data sharing.
This setup enables local processing of billion-parameter models and supports extensive AI workflows without relying on cloud infrastructure.
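To put the memory claim in perspective, here is a back-of-envelope sketch (our own illustration, not an MSI or Nvidia figure) of how model size and weight precision determine whether a model fits in a given pool of unified memory:

```python
# Illustrative only: estimates weight storage for a model of a given size.
# Ignores KV cache, activations, and framework overhead, which add more.

def model_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GB for a model with the given
    parameter count and per-weight precision."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# A hypothetical 70B-parameter model at FP16 (2 bytes/param) needs roughly
# 140 GB for weights alone; 4-bit quantization (0.5 bytes/param) cuts that
# to about 35 GB, leaving ample headroom in a 748GB pool.
print(model_memory_gb(70, 2))    # → 140.0
print(model_memory_gb(70, 0.5))  # → 35.0
```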
The workstation includes two 400 GbE LAN ports, enabling multi-node distributed computing with aggregate bandwidth of up to 800 Gbps.
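The aggregate bandwidth figure follows directly from the port count, and converting it to bytes gives a feel for node-to-node transfer times. A minimal sketch, using a hypothetical 140 GB weight set as the payload:

```python
# Illustrative arithmetic: aggregate link bandwidth and a line-rate
# transfer estimate (ignores protocol overhead and real-world contention).
ports = 2
per_port_gbps = 400
aggregate_gbps = ports * per_port_gbps       # 800 Gb/s, as MSI states
aggregate_gb_per_s = aggregate_gbps / 8      # 100 GB/s in bytes

# Time to move a hypothetical 140 GB set of FP16 weights between nodes
# at full line rate.
transfer_s = 140 / aggregate_gb_per_s
print(aggregate_gbps, aggregate_gb_per_s, transfer_s)  # → 800 100.0 1.4
```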
MSI claims that the XpertStation WS300 brings data center-class performance directly to the desktop environment, and its configuration is intended to help organizations move from experimentation to production while maintaining consistent computing reliability.
The XpertStation WS300 supports the entire AI lifecycle, including large-scale model training, intensive data analysis, and real-time inference.
Functioning as a centralized AI computing node, the platform enables collaborative tuning and on-demand deployment while keeping data and intellectual property under the organization's control.
High-speed PCIe Gen5 and Gen6 NVMe storage accelerates data set ingestion and AI pipelines, ensuring sustained utilization during compute-intensive operations.
Combined with the Nvidia AI Software Stack, the workstation integrates hardware and software to enable seamless workflow transitions from research to production environments.
MSI also integrated Nvidia NemoClaw, an open-source stack that runs OpenShell within a policy-controlled sandbox.
This allows autonomous AI agents to operate continuously and securely on the desktop, drawing on the workstation's 20 petaFLOPS of computing potential.
The setup supports always-on AI processes locally, enabling experiments with advanced AI and robotics applications without moving sensitive workloads to cloud servers.
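A rough sense of what 20 petaFLOPS means for inference can be had from the common rule of thumb that transformer inference costs about 2 FLOPs per parameter per generated token. The sketch below is a theoretical ceiling under that assumption, for a hypothetical 70B-parameter model, not a benchmark of this machine:

```python
# Illustrative ceiling, not a benchmark: assumes ~2 FLOPs per parameter
# per generated token, a common rule of thumb for transformer inference.
PFLOPS = 20e15          # claimed peak compute, in FLOP/s
params = 70e9           # hypothetical 70B-parameter model

tokens_per_s_ceiling = PFLOPS / (2 * params)
print(round(tokens_per_s_ceiling))  # → 142857
```

Real throughput falls far below this ceiling once memory bandwidth, batching, and quantization enter the picture, but the order of magnitude illustrates why always-on local agents are plausible on this class of hardware.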
“MSI has a strategic vision to advance AI-centric computing,” said Danny Hsu, general manager of enterprise platform solutions at MSI.
“With Nvidia, we are defining the next era of AI infrastructure, uniting centralized performance and distributed innovation, and enabling organizations to move from experimentation to production with greater speed, scale and confidence.”
The platform offers extensive capabilities for advanced AI workflows, but its $84,999.99 price tag raises concerns about cost-effectiveness.
Organizations that do not require maximum memory or continuous operation of a trillion-parameter model may find it difficult to justify the investment.
The system delivers unprecedented local AI performance, enabling demanding calculations on the desktop.
However, the practical value of this workstation is likely to be limited to companies with high-performance AI workloads and specific infrastructure requirements.