
Intel Aims to Undercut Nvidia with New AI Workstation GPUs: A Battlematrix of Affordable Power
At Computex 2025, Intel unveiled its new Arc Pro B50 and B60 series workstation GPUs, setting the stage for a potential shake-up in the AI accelerator market. While Intel acknowledges it may not directly compete with Nvidia in raw performance, the company is strategically focusing on affordability. But can cheaper cards really challenge Nvidia's dominance?

The Arc Pro B50, priced at $299, is aimed at traditional graphics-intensive tasks. The Arc Pro B60, however, with an expected price tag of around $500, is designed for both graphics and AI inference. That price point sits well below Nvidia's RTX 4000 Ada and 4500 Ada Generation GPUs, which range from $1,250 to $2,400. Intel VP Vivian Lien says the company expects the pricing to be highly competitive, particularly for design and engineering workloads.
On paper, the B60 lags behind Nvidia's high-end offerings. For example, Nvidia's RTX Pro 6000 boasts significantly higher INT8 performance and memory capacity. But Intel's strategy revolves around parallelism.
Intel envisions “Project Battlematrix,” systems packing up to eight Arc Pro B60 GPUs. An eight-GPU system could deliver 1.5 petaOPS of dense INT8 performance and 192GB of VRAM for around $4,000 (based on Lien's estimate). That would let Intel offer two eight-GPU systems for roughly the price of a single dual-GPU Nvidia workstation. Each individual B60 consumes less power (120W-200W) than the RTX Pro 6000 workstation cards (300W-600W), but it is also less power-efficient.
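The aggregate numbers follow directly from the figures quoted above. Here is a minimal back-of-the-envelope sketch in Python; the per-card values are simply the eight-GPU totals divided by eight, and the system price is Lien's rough estimate rather than an official MSRP:

```python
# Back-of-the-envelope math based on the figures cited in the article.
B60_CARDS = 8
TOTAL_INT8_PETAOPS = 1.5   # dense INT8 for an eight-GPU Battlematrix system
TOTAL_VRAM_GB = 192
SYSTEM_PRICE_USD = 4_000   # Lien's estimate, not an official price

per_card_tops = TOTAL_INT8_PETAOPS * 1000 / B60_CARDS
per_card_vram = TOTAL_VRAM_GB / B60_CARDS

print(f"Per B60: ~{per_card_tops:.0f} TOPS dense INT8, {per_card_vram:.0f} GB VRAM")
print(f"VRAM per $1,000: {TOTAL_VRAM_GB / SYSTEM_PRICE_USD * 1000:.0f} GB")
```

Run as written, this works out to roughly 187 TOPS and 24GB per card, or about 48GB of VRAM per $1,000 of system cost.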

While the RTX Pro 6000 may outperform the B60 in specific tasks, Intel argues that inference workloads are often bound by memory bandwidth rather than raw compute, which blunts the advantage of sparsity. Further, existing platforms like llama.cpp already support Intel GPUs via SYCL, making it straightforward to run quantized models.
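As a concrete illustration of that software path, here is a minimal sketch using the llama-cpp-python bindings to llama.cpp. It assumes the package was built with the SYCL backend enabled (for example, installed with CMAKE_ARGS="-DGGML_SYCL=on") and that a quantized GGUF model file is available locally; the model path is a placeholder, not a file Intel ships:

```python
# Sketch: running a quantized GGUF model through llama.cpp's SYCL backend
# using the llama-cpp-python bindings. Assumes the package was built with
# SYCL support, e.g.:
#   CMAKE_ARGS="-DGGML_SYCL=on" pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,   # offload all layers to the GPU
    n_ctx=4096,        # context window
)

out = llm(
    "Summarize the benefits of multi-GPU inference in one sentence.",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```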
Beyond raw power, multiple GPUs offer unique deployment opportunities, especially in multi-user lab environments. Intel is also working on SR-IOV support, which simplifies partitioning GPU resources among users. This competes with Nvidia's Multi-Instance GPU (MIG) partitioning, which may require an additional vGPU license.
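SR-IOV itself is exposed through standard Linux sysfs attributes, so an administrator can check how many virtual functions a card advertises and has enabled without vendor tooling. A small sketch under that assumption follows; sriov_totalvfs and sriov_numvfs are standard Linux PCI attributes, and whether Intel's B60 driver exposes them depends on the promised SR-IOV support actually landing:

```python
# Sketch: enumerate PCI devices and report SR-IOV virtual-function counts
# via standard Linux sysfs attributes. Whether the Arc Pro B60 exposes these
# depends on Intel's planned SR-IOV support.

from pathlib import Path

PCI_ROOT = Path("/sys/bus/pci/devices")

def read_attr(dev: Path, name: str) -> str | None:
    attr = dev / name
    return attr.read_text().strip() if attr.exists() else None

for dev in sorted(PCI_ROOT.iterdir()):
    total = read_attr(dev, "sriov_totalvfs")   # VFs the device can expose
    if total is None:
        continue                               # not SR-IOV capable
    enabled = read_attr(dev, "sriov_numvfs")   # VFs currently enabled
    vendor = read_attr(dev, "vendor")          # 0x8086 is Intel
    print(f"{dev.name}: vendor={vendor}, VFs enabled={enabled}/{total}")
```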

Acknowledging past software challenges, Intel has committed to improving its software ecosystem with pre-baked container images for common frameworks. These containers are expected to roll out alongside the cards in Q3, with SR-IOV and virtual desktop infrastructure following in Q4.
Intel's strategy is a bold move to offer a more accessible entry point into the world of AI workstations. Will their focus on affordability and software improvements be enough to challenge Nvidia's dominance? What are your thoughts on Intel's approach? Share your opinions in the comments below!