SAN JOSE, Calif., March 17, 2026 /PRNewswire/ — MSI today announced the launch of the XpertStation WS300 on NVIDIA DGX Station Architecture, a next-generation deskside AI supercomputer built to support the accelerating demands of large language models (LLMs), generative AI, and advanced data science workflows. Powered by the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip and supporting up to 748GB of large coherent memory and dual 400GbE networking, the platform extends advanced AI infrastructure capabilities into a compact deskside deployment model and is available for order starting today.
“MSI has a strategic vision to advance AI-first computing,” said Danny Hsu, General Manager of MSI’s Enterprise Platform Solutions. “With NVIDIA, we are defining the next era of AI infrastructure, bridging centralized performance and distributed innovation, and enabling organizations to move from experimentation to production with greater speed, scale, and confidence.”
Bringing Data-Center AI to the Desktop
XpertStation WS300 integrates up to 748GB of large coherent memory, combining high-bandwidth HBM3e GPU memory and LPDDR5X CPU memory into a unified domain to enable efficient CPU-GPU data sharing for large-scale model training and fine-tuning.
With dual 400GbE connectivity powered by the NVIDIA ConnectX-8 SuperNIC, the platform delivers up to 800Gb/s of aggregate networking bandwidth to support distributed AI workloads and multi-node scalability. High-speed PCIe Gen5 and Gen6 NVMe storage accelerates dataset ingestion and AI data pipelines, ensuring sustained compute utilization during intensive training and inference operations. Combined with full support for the NVIDIA AI software stack, the platform provides an integrated hardware-software foundation for seamless AI development and deployment from desktop to data center.
Expanding AI Workflows from Development to Deployment
XpertStation WS300 supports the full AI lifecycle, from large-scale model training and data-intensive analytics to real-time inference and emerging physical AI and robotics workloads. The platform enables organizations to accelerate deep learning models, process massive datasets efficiently, and execute complex AI workloads locally with high-throughput performance.
The system can also function as a centralized AI compute node for collaborative fine-tuning and on-demand deployment, providing teams greater operational flexibility while maintaining control over proprietary data and intellectual property.
By extending data-center-class performance to the deskside, XpertStation WS300 allows organizations to move AI initiatives from experimentation to production with infrastructure-level consistency and reliability.
Supporting Autonomous AI Agents
NVIDIA NemoClaw is an open-source stack that installs the OpenShell runtime with a policy-controlled sandbox, enabling autonomous AI agents to operate continuously and more safely. Running OpenShell on the XpertStation WS300, developers can run trillion-parameter models locally with up to 20 petaFLOPS of AI compute and 748GB of memory, enabling always-on AI agents at the deskside without relying on cloud infrastructure.