Intel today announced, alongside its 3rd generation Intel Xeon Scalable processors, its latest FPGA designed for artificial intelligence applications: the Intel Stratix 10 NX. It brings major improvements over its predecessor, the Intel Stratix 10 MX, and will arrive later this year with up to 15 times higher INT8 performance, joining the Stratix family.
The Intel Stratix 10 NX is an FPGA designed with AI acceleration built into the chip itself, making it a dedicated AI accelerator quite different from the NPUs found in the latest smartphone processors, which are aimed mainly at inference. The package uses a chiplet design: the main die, built on 14 nanometers, sits on a substrate alongside HBM memory stacks attached to the chip itself. As we already know, this arrangement provides extremely high bandwidth, allowing much faster memory accesses with very low latency, something very important in artificial intelligence workloads.
Likewise, we find the familiar EMIB technology, which connects multiple Intel Ethernet blocks to the PAM4 transceivers already seen in the Intel Stratix 10 TX, with speeds of up to 57.8 Gbps. The interconnect is fully flexible and customizable, so designs can adapt and scale across multiple nodes. Because the architecture is customizable, ASIC extensions can be attached and chip interfaces tailored through EMIB, meaning that, on paper, each company will be able to adapt its implementation of the Intel Stratix 10 NX to its own needs.
One of the strengths of the Intel Stratix 10 NX is its new AI Tensor Block. As we already know from NVIDIA's Turing and Ampere GPUs, this kind of dedicated hardware dramatically accelerates artificial intelligence workloads, giving the new model 15 times the INT8 performance of the Intel Stratix 10 MX.
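To see why INT8 matters for this kind of hardware, here is a minimal sketch (not Intel's implementation, and independent of the Tensor Block's actual internals): tensor blocks pack many 8-bit multiply-accumulate units into each clock cycle, so a model's FP32 weights and activations are quantized to INT8, the dot product is accumulated in a wider integer, and the result is scaled back to a real value.

```python
def quantize(values, scale):
    """Map floats to signed 8-bit integers using a per-tensor scale."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_dot(weights, activations, w_scale, a_scale):
    """INT8 dot product with wide integer accumulation, dequantized at the end."""
    qw = quantize(weights, w_scale)
    qa = quantize(activations, a_scale)
    acc = sum(w * a for w, a in zip(qw, qa))  # integer MACs, as in tensor hardware
    return acc * w_scale * a_scale            # scale back to a real-valued result

# Illustrative values only: compare the quantized result with the FP32 one.
weights = [0.12, -0.53, 0.87, 0.05]
activations = [1.0, 0.25, -0.75, 0.5]
exact = sum(w * a for w, a in zip(weights, activations))
approx = int8_dot(weights, activations, w_scale=0.01, a_scale=0.01)
print(exact, approx)
```

With well-chosen scales the quantized result closely tracks the FP32 one, which is why inference workloads tolerate INT8 so well while the hardware trades precision for many more operations per cycle.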