
P/N: A100-8x40GB-Baseboard
45 231 € (inc. VAT (Spain))
EU delivery is typically completed within 3–7 days.
This is an estimated timeframe. Delivery times may vary depending on logistics and stock availability.
Warranty: 2 years
The product is covered by the manufacturer’s standard warranty.
In stock
NVIDIA A100 8×40GB Baseboard with fast EU delivery and worldwide shipping. Includes an official warranty.
Our specialist will help you choose the right server components and ensure full compatibility with your system.
| Specification | Value |
|---|---|
| Weight | 10.00 kg |
| Dimensions | 26.7 × 11.1 × 17 cm |
| Country of manufacture | Taiwan |
| Manufacturer's warranty (years) | 1 |
| Model | NVIDIA A100 |
| Cache L2 (MB) | 40 |
| Process technology (nm) | 7 |
| Memory type | HBM2 |
| Graphics Processing Unit (Chip) | GA100 |
| Number of CUDA cores | 6912 |
| Number of Tensor cores | 432 |
| Video memory size (GB) | 40 |
| Memory frequency (MHz) | 1215 |
| Memory bus width (bits) | 5120 |
| Memory Bandwidth (GB/s) | 1555 |
| Connection interface (PCIe) | PCIe 4.0 x16 |
| FP16 Tensor performance (TFLOPS) | 312 |
| TF32 Tensor performance (TFLOPS) | 156 |
| FP64 performance (TFLOPS) | 9.7 |
| Cooling type | Passive (server module) |
| Number of occupied slots (pcs) | 8 |
| Temperature range (°C) | 0–85 |
| NVLink Throughput (GB/s) | 600 |
| Multi-GPU support | Yes (NVSwitch) |
| Virtualization/MIG support | MIG (up to 7 instances) |
| Architecture | Ampere |
The 8×NVIDIA A100 SXM 40GB GPU Baseboard is a high-performance server module that integrates eight NVIDIA A100 GPUs, each with 40 GB of HBM2 memory. In total, the system delivers 320 GB of GPU memory and tremendous computing capacity for artificial intelligence, machine learning, and high-performance computing (HPC) workloads.
Built on the Ampere architecture, the module uses the SXM4 form factor and interconnects the GPUs via NVLink and NVSwitch. With up to 600 GB/s of interconnect bandwidth per GPU, all eight GPUs operate as a unified compute fabric, eliminating the bottlenecks typical of PCIe-based systems.
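The interconnect advantage can be put in rough numbers. The sketch below compares the ideal transfer time of a ring all-reduce (the collective at the heart of data-parallel training) over NVLink at 600 GB/s per GPU versus a nominal PCIe 4.0 x16 link at ~32 GB/s. The payload size and the ideal-ring model are illustrative assumptions, not measurements of this board.

```python
# Back-of-the-envelope comparison of gradient all-reduce transfer time
# across 8 GPUs over NVLink (600 GB/s per GPU) versus PCIe 4.0 x16
# (~32 GB/s nominal). Real throughput depends on topology, message
# size, and library overheads.

def ring_allreduce_seconds(payload_gb: float, n_gpus: int, link_gbps: float) -> float:
    """An ideal ring all-reduce moves 2*(n-1)/n of the payload per GPU."""
    traffic_gb = 2 * (n_gpus - 1) / n_gpus * payload_gb
    return traffic_gb / link_gbps

payload = 10.0  # e.g. ~10 GB of FP16 gradients (a 5B-parameter model)
t_nvlink = ring_allreduce_seconds(payload, 8, 600.0)
t_pcie = ring_allreduce_seconds(payload, 8, 32.0)
print(f"NVLink: {t_nvlink*1000:.1f} ms, PCIe 4.0: {t_pcie*1000:.1f} ms")
```

Under these assumptions the NVLink path is faster by the same factor as the raw link speeds (600/32 ≈ 19×), which is why per-step synchronization stops dominating training time.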
The 8×NVIDIA A100 SXM 40GB GPU Baseboard is the ideal choice for enterprises and research organizations that need serious compute power without overpaying for maximum configurations. It is engineered for data centers, scientific institutions, and cloud platforms that demand consistent performance, scalability, and energy efficiency.
The NVIDIA A100 8x40GB Baseboard is a highly integrated HGX-class solution: a platform of eight A100 accelerators combined into a single computing cluster. This system is designed for the most resource-intensive artificial intelligence, deep learning, and high-performance computing (HPC) tasks. The Baseboard architecture provides exceptional compute density and scalability, enabling workloads beyond the reach of single PCIe cards.
Based on the Ampere architecture, this platform provides the enormous total video memory and bandwidth necessary for training large language models (LLMs) and running complex simulations. NVLink and NVSwitch technology allows the memory of all eight GPUs to be addressed as a single pool, eliminating bottlenecks during data transfer.
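To see what 320 GB of pooled memory buys for LLM training, a common rule of thumb is ~16 bytes per parameter under mixed-precision Adam (FP16 weights and gradients plus FP32 optimizer state). The figures below are an illustrative estimate, not a specification of this product; activations and framework overhead come on top.

```python
# Rough estimate of how many model parameters fit in the baseboard's
# 320 GB of aggregate HBM2 during mixed-precision Adam training.
# Assumes the common ~16 bytes/parameter accounting; activations,
# KV caches, and framework overhead are extra.

BYTES_PER_PARAM = 2 + 2 + 4 + 4 + 4  # FP16 weights, FP16 grads,
                                     # FP32 master weights, Adam m, Adam v
TOTAL_HBM_GB = 8 * 40                # 320 GB across the board

max_params_billion = TOTAL_HBM_GB * 1e9 / BYTES_PER_PARAM / 1e9
print(f"~{max_params_billion:.0f}B parameters before activations")
```

Under this accounting, a model of roughly 20 billion parameters fits in device memory before any activation memory is counted; sharded-optimizer techniques stretch this further.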
The table presents a comparison of the NVIDIA A100 8x40GB Baseboard with its nine closest competitors, including flagship H100 series solutions, single A100 cards, and professional graphics accelerators of the RTX and L series.
| Model | VRAM (GB) | Memory Type | Bandwidth (GB/s)* | CUDA Cores | Tensor Cores | FP16 (TFLOPS) | FP32 (TFLOPS) | FP64 (TFLOPS) | Tensor Perf (TFLOPS) | Interface | TDP (W) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| NVIDIA A100 8x40GB Baseboard | 40** | HBM2 | 1555 | 6912 | 432 | 312 | 156 | 9.7 | 624 | PCIe 4.0 x16 | 400 |
| NVIDIA H100 80GB | 80 | HBM2e | 2039 | 14592 | 432 | 1979 | 989 | 49 | 3958 | PCIe 5.0 x16 | 350 |
| NVIDIA H100 NVL 94GB | 94 | HBM3 | 3900 | 14592 | 432 | 1513 | 756 | 30 | 3026 | PCIe 5.0 x16 | 400 |
| NVIDIA H100 96GB | 94 | HBM3 | 3900 | 16896 | 432 | 1979 | 989 | 49 | 3958 | PCIe 5.0 x16 | 400 |
| NVIDIA A100 80GB | 80 | HBM2e | 1935 | 6912 | 432 | 312 | 156 | 9.7 | 624 | PCIe 4.0 x16 | 300 |
| NVIDIA A100 40GB | 40 | HBM2 | 1555 | 6912 | 432 | 312 | 156 | 9.7 | 624 | PCIe 4.0 x16 | 250 |
| NVIDIA RTX 6000 Ada 48GB | 48 | GDDR6 ECC | 960 | 18176 | 568 | – | 91.1 | – | 1457 | PCIe 4.0 x16 | 300 |
| NVIDIA L40S 48GB | 48 | GDDR6 ECC | 864 | 18176 | 568 | 733 | 366 | – | – | PCIe 4.0 x16 | 350 |
| NVIDIA L40 48GB | 48 | GDDR6 ECC | 864 | 18176 | 568 | 362.1 | 90.52 | 1.414 | – | PCIe 4.0 x16 | 300 |
| NVIDIA A40 48GB | 48 | GDDR6 | 695.8 | 10752 | 336 | 37.42 | 37.42 | 0.5846 | – | PCIe 4.0 x16 | 300 |
*Bandwidth = memory bandwidth. A dash (–) indicates that no data is available for that cell.
**Memory capacity per single GPU within the Baseboard is indicated. The total system memory capacity is 320 GB.
An analysis of the technical specifications and architectural features of the NVIDIA A100 8x40GB Baseboard reveals the key advantages that make this solution dominant in its segment.
The NVIDIA A100 8x40GB Baseboard is an uncompromising solution for industrial-scale artificial intelligence. With its massive array of CUDA cores and 320 GB of scalable memory, this accelerator platform significantly outperforms single-card alternatives when building high-density computing clusters. It remains the preferred choice for tasks requiring maximum reliability, a proven architecture, and the ability to work with the largest data models.
It is a high-performance server module (HGX platform) that integrates eight NVIDIA A100 GPUs into a single system. Unlike standard graphics cards, these GPUs are mounted on a single board and interconnected via a high-speed interface, allowing them to operate as one massive computational accelerator for artificial intelligence tasks.
The key difference is the interconnect technology. This platform uses NVSwitch and NVLink, providing data transfer speeds between chips of up to 600 GB/s, which is many times faster than standard PCIe bus capabilities. This eliminates bottlenecks in neural network training, as all 8 GPUs can exchange data directly, bypassing the central processor (CPU).
The system provides a total of 320 GB of high-speed HBM2 memory. Each of the eight processors is equipped with 40 GB of its own memory. Thanks to high bandwidth (1.6 TB/s per GPU), this solution is ideal for working with Big Data.
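The aggregate figure follows directly from the per-GPU numbers above; this one-liner just makes the arithmetic explicit.

```python
# Aggregate memory bandwidth across the baseboard: eight GPUs, each
# with ~1555 GB/s of HBM2 bandwidth (the "1.6 TB/s per GPU" figure).

PER_GPU_GBPS = 1555
N_GPUS = 8

total_tbps = PER_GPU_GBPS * N_GPUS / 1000
print(f"{total_tbps:.1f} TB/s aggregate memory bandwidth")
```

That works out to roughly 12.4 TB/s of combined memory bandwidth across the module.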
The NVIDIA A100 8x40GB Baseboard is designed for the most resource-intensive workloads: training large language models (LLMs), deep learning, high-performance computing (HPC), as well as complex scientific simulations and modeling.
No, this module has the SXM4 Baseboard form factor and requires a specialized server chassis compatible with the NVIDIA HGX A100 platform. It is not intended for installation in standard PCIe expansion slots of regular servers or workstations.
The module is covered by an official manufacturer’s warranty for a period of 3 years. We guarantee that the equipment is new and original, fully supporting the stated specifications.
Yes, the Ampere architecture supports MIG technology, which allows partitioning each physical A100 GPU into several isolated instances (up to 7 per chip). On the scale of the entire Baseboard, this enables the creation of up to 56 independent compute units for different users or tasks.
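The MIG capacity math can be sketched as follows. Each A100 exposes 7 compute slices, and profile names follow NVIDIA's `<slices>g.<memory>gb` convention for the A100 40GB; the validation helper itself is a hypothetical illustration, not an NVIDIA API.

```python
# Sketch of MIG capacity math for the 8-GPU baseboard: each A100 can
# be partitioned into at most 7 isolated instances, so the full board
# exposes up to 56 independent compute units.

SLICES_PER_GPU = 7
N_GPUS = 8

def fits_on_one_gpu(profiles: list[str]) -> bool:
    """Check a per-GPU MIG layout against the 7-slice budget.

    Profiles use NVIDIA's naming, e.g. "3g.20gb" = 3 compute slices.
    """
    used = sum(int(p.split("g")[0]) for p in profiles)
    return used <= SLICES_PER_GPU

print(fits_on_one_gpu(["3g.20gb", "2g.10gb", "2g.10gb"]))  # True: 3+2+2 = 7
print(fits_on_one_gpu(["4g.20gb", "4g.20gb"]))             # False: 8 > 7
print(SLICES_PER_GPU * N_GPUS, "instances maximum")
```

In practice the actual partitioning is done with `nvidia-smi mig` on the host; the sketch only shows why the 56-instance ceiling holds.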
Despite its high power draw, integrating eight GPUs on a single board allows the power delivery and cooling systems to be optimized. The module delivers one of the highest performance-per-watt ratios in the industry, which is critical for reducing operating costs in data centers.
We provide reliable packaging and specialized logistics for high-value server equipment. Delivery is carried out across all regions of operation, complying with all requirements for transporting complex electronics.
Fast and reliable delivery across the European Union
Estimated transit time: 3–7 days from order confirmation. Worldwide shipping is available for customers outside the EU.
All orders are processed within 24 hours after confirmation. Tracking information is provided as soon as the parcel leaves our logistics center.
Multiple Secure Payment Methods
We accept Visa, MasterCard, Apple Pay, Google Pay, PayPal, SEPA, iDEAL, Amazon Pay, Bancontact, international bank transfers, and Klarna, all via the Stripe gateway.
All transactions are encrypted and processed via certified Stripe payment gateway for your security.