Nvidia DGX

Nvidia DGX is a line of Nvidia-produced servers and workstations that specialize in using GPGPU to accelerate deep learning applications. The typical design of a DGX system is based on a rackmount chassis with a motherboard carrying high-performance x86 server CPUs (typically Intel Xeons, though the more recent DGX A100 and DGX Station A100 use AMD EPYC CPUs). The main component of a DGX system is a set of 4 to 16 Nvidia Tesla GPU modules on an independent system board. DGX systems have large heatsinks and powerful fans to adequately cool thousands of watts of thermal output. The GPU modules are typically integrated into the system using a version of the SXM socket.