This HTML5 document contains 43 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any processor of HTML5 Microdata.
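As a quick orientation (not part of the original data), the same statements can also be consumed with a generic RDF toolkit instead of a Microdata parser. The sketch below is a minimal example assuming Python with rdflib installed; it loads DBpedia's Turtle serialization of the same resource (the dbpedia.org/data/<name>.ttl URL pattern is an assumption) and prints its rdfs:label values.

    from rdflib import Graph, URIRef

    # Assumed URL pattern for DBpedia's Turtle serialization of this resource.
    g = Graph()
    g.parse("https://dbpedia.org/data/SXM_(socket).ttl", format="turtle")

    subject = URIRef("http://dbpedia.org/resource/SXM_(socket)")
    label = URIRef("http://www.w3.org/2000/01/rdf-schema#label")

    # Print every rdfs:label statement found for the subject.
    for value in g.objects(subject, label):
        print(value)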

Namespace Prefixes

Prefix        IRI
n13           https://hpc.fau.de/systems-services/systems-documentation-instructions/clusters/alex-cluster/
dcterms       http://purl.org/dc/terms/
dbo           http://dbpedia.org/ontology/
n17           http://dbpedia.org/resource/File:
foaf          http://xmlns.com/foaf/0.1/
n10           https://global.dbpedia.org/id/
dbt           http://dbpedia.org/resource/Template:
rdfs          http://www.w3.org/2000/01/rdf-schema#
n4            http://commons.wikimedia.org/wiki/Special:FilePath/
rdf           http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl           http://www.w3.org/2002/07/owl#
wikipedia-en  http://en.wikipedia.org/wiki/
dbc           http://dbpedia.org/resource/Category:
dbp           http://dbpedia.org/property/
prov          http://www.w3.org/ns/prov#
xsdh          http://www.w3.org/2001/XMLSchema#
wikidata      http://www.wikidata.org/entity/
dbr           http://dbpedia.org/resource/
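For readers unfamiliar with the notation: each prefix above is just an abbreviation for the full IRI to its right, so a CURIE such as dbr:SXM_(socket) expands to http://dbpedia.org/resource/SXM_(socket). A minimal sketch of that expansion, assuming Python with rdflib:

    from rdflib import Namespace

    # Two of the prefixes from the table above.
    DBR = Namespace("http://dbpedia.org/resource/")
    DBO = Namespace("http://dbpedia.org/ontology/")

    # A CURIE is shorthand for the namespace IRI concatenated with the local name.
    print(DBR["SXM_(socket)"])  # http://dbpedia.org/resource/SXM_(socket)
    print(DBO["wikiPageID"])    # http://dbpedia.org/ontology/wikiPageID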

Statements

Subject Item
dbr:SXM_(socket)
rdfs:label
SXM (socket)
rdfs:comment
SXM is a high-bandwidth socket solution for connecting Nvidia compute accelerators to a system. Every generation of Nvidia Tesla since the P100 models, as well as the DGX computer series and the HGX boards, comes with an SXM socket type that provides high bandwidth, power delivery and more for the matching GPU daughter cards. Nvidia offers these combinations as end-user products, e.g. in the models of its DGX system series. Current socket generations are SXM for Pascal-based GPUs, SXM2 and SXM3 for Volta-based GPUs, SXM4 for Ampere-based GPUs, and SXM5 for Hopper-based GPUs. These sockets are used for specific models of these accelerators and offer higher performance per card than PCIe equivalents. The DGX-1 system was the first to be equipped with SXM-2 sockets and thus was the first to carry the form
foaf:depiction
n4:TSUBAME_3.0_PA075079.jpg
dcterms:subject
dbc:Nvidia_hardware
dbo:wikiPageID
70326671
dbo:wikiPageRevisionID
1120402553
dbo:wikiPageWikiLink
dbr:NVLink dbr:Volta_(microarchitecture) dbr:Pascal_(microarchitecture) dbr:PCI_Express dbc:Nvidia_hardware dbr:Systems_integrator dbr:Hopper_(microarchitecture) dbr:CPU_socket dbr:Nvidia dbr:Thermal_design_power dbr:Central_processing_unit dbr:Tegra dbr:Bandwidth_(computing) dbr:Ampere_(microarchitecture) dbr:Supermicro dbr:Nvidia_Tesla dbr:Nvidia_HGX n17:TSUBAME_3.0_PA075079.jpg dbr:Mobile_PCI_Express_Module dbr:Nvidia_DGX
dbo:wikiPageExternalLink
n13:%23a100
owl:sameAs
wikidata:Q111945017 n10:GPmmu
dbp:wikiPageUsesTemplate
dbt:More_citations_needed dbt:More_footnotes_needed dbt:About dbt:Technical dbt:NvidiaDgxAccelerators dbt:! dbt:Short_description dbt:Reflist dbt:Multiple_issues
dbo:thumbnail
n4:TSUBAME_3.0_PA075079.jpg?width=300
dbo:abstract
SXM is a high-bandwidth socket solution for connecting Nvidia compute accelerators to a system. Every generation of Nvidia Tesla since the P100 models, as well as the DGX computer series and the HGX boards, comes with an SXM socket type that provides high bandwidth, power delivery and more for the matching GPU daughter cards. Nvidia offers these combinations as end-user products, e.g. in the models of its DGX system series. Current socket generations are SXM for Pascal-based GPUs, SXM2 and SXM3 for Volta-based GPUs, SXM4 for Ampere-based GPUs, and SXM5 for Hopper-based GPUs. These sockets are used for specific models of these accelerators and offer higher performance per card than PCIe equivalents. The DGX-1 system was the first to be equipped with SXM-2 sockets and thus was the first to carry the form-factor-compatible SXM modules with P100 GPUs; it was later revealed to support upgrading to (or shipping pre-equipped with) SXM2 modules with V100 GPUs. SXM boards are typically built with four or eight GPU slots, although some solutions such as the Nvidia DGX-2 connect multiple boards to deliver high performance. While third-party solutions for SXM boards exist, most system integrators such as Supermicro use prebuilt Nvidia HGX boards, which come in four- or eight-socket configurations. This solution greatly lowers the cost and difficulty of SXM-based GPU servers and enables compatibility and reliability across all boards of the same generation. SXM modules on HGX boards, particularly in recent generations, may have NVLink switches to allow faster GPU-to-GPU communication. This also reduces bottlenecks that would otherwise arise in the CPU and over PCIe, since the GPUs on the daughter cards use NVLink as their main communication protocol. For example, a Hopper-based H100 SXM5 GPU can use up to 900 GB/s of bandwidth across 18 NVLink 4 channels, each contributing 50 GB/s of bandwidth; by comparison, PCIe 5.0 can handle up to 64 GB/s of bandwidth in an x16 slot. This high bandwidth also means that GPUs can share memory over the NVLink bus, allowing an entire HGX board to present itself to the host system as a single, massive GPU. Power delivery is also handled by the SXM socket, removing the need for external power cables such as those required by equivalent PCIe cards. This, combined with the horizontal mounting, allows more efficient cooling options, which in turn allow SXM-based GPUs to operate at a much higher TDP. The Hopper-based H100, for example, can draw up to 700 W solely from the SXM socket. The lack of cabling also makes assembling and repairing large systems much easier and reduces the number of possible points of failure. The early Nvidia Tegra automotive-targeted evaluation board, 'Drive PX2', had two MXM (Mobile PCI Express Module) sockets on both sides of the card; this dual-MXM design can be considered a predecessor to the Nvidia Tesla implementation of the SXM socket. Comparison of accelerators used in DGX:
prov:wasDerivedFrom
wikipedia-en:SXM_(socket)?oldid=1120402553&ns=0
dbo:wikiPageLength
6206
foaf:isPrimaryTopicOf
wikipedia-en:SXM_(socket)
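As a back-of-the-envelope check of the bandwidth figures quoted in the dbo:abstract above (the numbers are taken from the abstract itself, not from a specification), the per-link arithmetic works out as sketched below in Python:

    # Figures as quoted in the abstract above (assumptions, not a spec sheet).
    nvlink4_links = 18           # NVLink 4 channels on an H100 SXM5 module
    nvlink4_gbps_per_link = 50   # GB/s contributed by each channel
    pcie5_x16_gbps = 64          # GB/s for a PCIe 5.0 x16 slot

    nvlink_total = nvlink4_links * nvlink4_gbps_per_link
    print(f"NVLink 4 aggregate: {nvlink_total} GB/s")                       # 900 GB/s
    print(f"vs PCIe 5.0 x16: {nvlink_total / pcie5_x16_gbps:.1f}x higher")  # ~14.1x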