
NVIDIA A100 8×80GB Baseboard
P/N: A100-8x80GB-Baseboard
84 805 € (inc. VAT (Spain))
EU delivery within 3–7 days
This is an estimated timeframe. Delivery times may vary depending on logistics and stock availability.
Warranty: 2 years
The product is covered by the manufacturer’s standard warranty.
In stock
NVIDIA A100 8×80GB Baseboard with fast EU delivery and worldwide shipping. Includes official warranty.
Our specialist will help you choose the right server components and ensure full compatibility with your system.
| Specification | Value |
|---|---|
| Weight | 10.00 kg |
| Dimensions | 26.7 × 11.1 × 17 cm |
| Country of manufacture | Taiwan |
| Manufacturer's warranty (years) | 1 |
| Model | NVIDIA A100 |
| L2 cache (MB) | 40 |
| Process technology (nm) | 7 |
| Memory type | HBM2e |
| Graphics Processing Unit (Chip) | GA100 |
| Number of CUDA cores | 6912 |
| Number of Tensor Cores | 432 |
| Video memory size (GB) | 80 per GPU (640 total) |
| Effective memory data rate (MT/s) | 3186 |
| Memory bus width (bits) | 5120 |
| Memory bandwidth (GB/s) | 2039 |
| Connection interface | SXM4 module (PCIe 4.0 x16 host interface) |
| FP16 Tensor performance (TFLOPS) | 312 |
| TF32 Tensor performance (TFLOPS) | 156 |
| FP64 performance (TFLOPS) | 9.7 |
| Cooling type | Passive (server module) |
| Number of GPUs | 8 |
| Temperature range (°C) | 0–85 |
| NVLink throughput (GB/s) | 600 |
| Multi-GPU support | Yes (NVSwitch) |
| Virtualization/MIG support | MIG (up to 7 instances per GPU) |
| Architecture | Ampere |
The 8× NVIDIA A100 SXM 80GB GPU Baseboard is a high-density compute module that integrates eight NVIDIA A100 GPUs, each with 80 GB of HBM2e memory, providing a total of 640 GB of ultra-fast memory and immense computational throughput for artificial intelligence, data analytics, and high-performance computing (HPC) workloads.
Unlike traditional PCIe configurations, GPUs within this baseboard are interconnected via NVLink and NVSwitch, enabling direct high-bandwidth communication between all eight accelerators. This design allows them to operate as a unified system, eliminating PCIe bottlenecks and maximizing parallel efficiency for the most demanding AI and scientific applications.
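The bandwidth advantage can be put in rough numbers. Below is a minimal sketch assuming a ring all-reduce (the common gradient-synchronization pattern) and an illustrative 32 GB/s unidirectional figure for PCIe 4.0 x16; the 600 GB/s NVLink figure comes from the spec table above:

```python
def allreduce_time_s(payload_gb: float, n_gpus: int, link_gbps: float) -> float:
    """Ring all-reduce: each GPU sends/receives 2*(N-1)/N of the payload."""
    traffic_gb = 2 * (n_gpus - 1) / n_gpus * payload_gb
    return traffic_gb / link_gbps

NVLINK_GBPS = 600   # per-GPU NVLink bandwidth (from the spec table)
PCIE4_GBPS = 32     # assumed PCIe 4.0 x16 unidirectional bandwidth

# Synchronizing a 10 GB gradient payload across all 8 GPUs:
t_nvlink = allreduce_time_s(10, 8, NVLINK_GBPS)
t_pcie = allreduce_time_s(10, 8, PCIE4_GBPS)
print(f"NVLink: {t_nvlink*1e3:.1f} ms, PCIe 4.0: {t_pcie*1e3:.1f} ms")
```

Under these assumptions NVLink is roughly 19× faster per synchronization step, which is why gradient exchange stops being the training bottleneck on the SXM baseboard.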
The 8× NVIDIA A100 SXM 80GB GPU Baseboard is a foundational building block for modern data centers and AI clusters. It provides unmatched performance, scalability, and efficiency, making it an ideal solution for enterprises and research institutions advancing large-scale AI, simulation, and scientific computing projects.
The NVIDIA A100 8x80GB Baseboard is a high-performance data center solution focused on the most resource-intensive tasks in artificial intelligence, data analytics, and high-performance computing (HPC). This model represents a key infrastructure element for training neural networks and scientific simulation, offering a balance between massive video memory capacity and high bandwidth.
A key advantage of this configuration is its HBM2e memory with 2039 GB/s of bandwidth per GPU, allowing efficient work with massive models. Unlike standard PCIe versions, the Baseboard implementation (SXM form factor) has an expanded 400 W thermal envelope, sustaining higher clock speeds and stable performance under prolonged workloads. Support for double-precision (FP64) computing makes it indispensable for engineering calculations where maximum precision is required.
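Why double precision matters for engineering workloads can be seen with a tiny round-trip through IEEE-754 single precision (pure Python floats are already FP64):

```python
import struct

def to_fp32(x: float) -> float:
    """Round-trip a Python float (FP64) through IEEE-754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

big = 16_777_216.0               # 2**24: the FP32 integer-precision limit
print(to_fp32(big + 1) == big)   # FP32 silently loses the +1
print((big + 1) == big)          # FP64 still resolves it
```

Past 2^24, FP32 can no longer distinguish consecutive integers, so long accumulations and ill-conditioned solvers drift; FP64 pushes that limit to 2^53, which is why simulation codes insist on it.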
The table presents a comparison of the NVIDIA A100 8x80GB Baseboard with nine closest competitors, including the latest flagships of the H100 and H200 series, as well as professional solutions from the L40 and RTX lines.
| Model | VRAM (GB) | Memory Type | Bandwidth (GB/s)* | CUDA Cores | Tensor Cores | FP16 (TFLOPS) | FP32 (TFLOPS) | FP64 (TFLOPS) | Tensor Perf (TFLOPS) | Interface | TDP (W) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| NVIDIA A100 8x80GB Baseboard | 80** | HBM2e | 2039 | 6912 | 432 | 312 | 156 | 9.7 | 624 | SXM4 (PCIe 4.0 host) | 400 |
| NVIDIA H100 80GB | 80 | HBM2e | 2039 | 14592 | 432 | 1979 | 989 | 49 | 3958 | PCIe 5.0 x16 | 350 |
| NVIDIA H200 141GB | 141 | HBM3e | 4800 | 14592 | 432 | 1979 | 989 | 49 | 3958 | PCIe 5.0 x16 | 600 |
| NVIDIA H100 NVL 94GB | 94 | HBM3 | 3900 | 14592 | 432 | 1513 | 756 | 30 | 3026 | PCIe 5.0 x16 | 400 |
| NVIDIA H100 96GB | 96 | HBM3 | 3900 | 16896 | 432 | 1979 | 989 | 49 | 3958 | PCIe 5.0 x16 | 400 |
| NVIDIA A100 80GB | 80 | HBM2e | 1935 | 6912 | 432 | 312 | 156 | 9.7 | 624 | PCIe 4.0 x16 | 300 |
| NVIDIA RTX Pro 6000 Blackwell 96GB | 96 | GDDR7 ECC | 1792 | 24064 | 752 | 110.1 | 55.1 | 1.9 | 220.2 | PCIe 5.0 x16 | 600 |
| NVIDIA L40S 48GB | 48 | GDDR6 ECC | 864 | 18176 | 568 | 733 | 366 | – | – | PCIe 4.0 x16 | 350 |
| NVIDIA RTX 6000 Ada 48GB | 48 | GDDR6 ECC | 960 | 18176 | 568 | – | 91.1 | – | 1457 | PCIe 4.0 x16 | 300 |
| NVIDIA A40 48GB | 48 | GDDR6 | 695.8 | 10752 | 336 | 37.4 | 37.4 | 0.5 | – | PCIe 4.0 x16 | 300 |
*Bandwidth refers to memory bandwidth. A dash ("–") indicates data is unavailable.
**Memory per single GPU within the Baseboard; total system memory is 640 GB.
Analysis of the technical data confirms that the NVIDIA A100 8x80GB Baseboard holds its position as one of the most balanced solutions in the segment.
The NVIDIA A100 8x80GB Baseboard remains a benchmark of reliability and performance for deep-learning infrastructure and scientific computing. Despite the release of newer generations, the combination of a massive 80 GB memory buffer per GPU, high HBM2e bandwidth, and full FP64 support makes this platform the preferred choice for tasks requiring maximum stability and speed with large datasets, where it confidently outperforms many consumer- and professional-class alternatives.
**What form factor does the NVIDIA A100 8x80GB Baseboard use?**
The NVIDIA A100 8x80GB Baseboard is an SXM form-factor solution designed for installation in specialized server platforms (HGX). Unlike PCIe versions, this layout allows a higher power limit (400 W TDP per GPU), which sustains maximum GPU performance over extended periods without clock-speed reduction (throttling).
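As a back-of-the-envelope sketch of the resulting power budget (the host-overhead and PSU-efficiency figures below are illustrative assumptions, not vendor data):

```python
GPU_TDP_W = 400          # per-GPU TDP from the spec table
N_GPUS = 8
HOST_OVERHEAD_W = 1500   # assumed: CPUs, NVSwitches, NICs, fans (illustrative)
PSU_EFFICIENCY = 0.9     # assumed PSU efficiency

gpu_power = GPU_TDP_W * N_GPUS            # the baseboard alone draws 3200 W
system_power = gpu_power + HOST_OVERHEAD_W
psu_capacity = system_power / PSU_EFFICIENCY
print(f"GPUs: {gpu_power} W, system: {system_power} W, "
      f"PSU budget: {psu_capacity:.0f} W")
```

Even before host overhead, the GPU complex alone exceeds 3 kW, which is why these boards ship only inside purpose-built HGX chassis with matching power delivery.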
**How much memory does each GPU have?**
Each graphics processor in the configuration is equipped with 80 GB of high-speed HBM2e memory. This is double the capacity of the first A100 version (40 GB), which is critical for loading massive neural networks and working with large datasets (Big Data).
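A quick way to see what this capacity means in practice is to estimate the weight footprint of a model in FP16 (weights only; activations, optimizer state, and KV caches are deliberately ignored in this sketch):

```python
def model_mem_gb(params_b: float, bytes_per_param: int = 2) -> float:
    """GB needed for the weights alone, FP16/BF16 (2 bytes) by default."""
    return params_b * bytes_per_param  # params_b is billions of parameters

GPU_GB, N_GPUS = 80, 8
for params in (13, 70, 175):
    need = model_mem_gb(params)
    print(f"{params}B model in FP16: {need:.0f} GB "
          f"-> one GPU: {need <= GPU_GB}, baseboard: {need <= GPU_GB * N_GPUS}")
```

A 13B model fits comfortably on a single 80 GB GPU, while 70B and 175B models need the pooled 640 GB across the baseboard, which is exactly the scenario NVLink-connected SXM modules are built for.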
**What is the memory bandwidth?**
The memory bandwidth is 2039 GB/s per GPU, one of the highest figures in the industry. It lets the GPU access data for calculations almost instantly, eliminating bottlenecks when training complex AI models and preventing compute cores from sitting idle.
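The headline figure follows directly from the bus width and the effective transfer rate. Note the 3186 MT/s value here is back-derived from the stated bandwidth and bus width rather than taken from a datasheet:

```python
BUS_WIDTH_BITS = 5120    # HBM2e bus width from the spec table
DATA_RATE_GTPS = 3.186   # effective transfer rate (assumed, back-derived)

# bytes moved per transfer cycle, times billions of transfers per second
bandwidth_gbps = BUS_WIDTH_BITS / 8 * DATA_RATE_GTPS
print(round(bandwidth_gbps))  # matches the 2039 GB/s in the table
```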
**Can a single GPU be shared between several users or tasks?**
Yes. Thanks to Multi-Instance GPU (MIG) technology, each physical A100 80GB accelerator can be partitioned into up to 7 isolated instances. This allows serving multiple users simultaneously or running different tasks (for example, inference of smaller models) with guaranteed allocation of memory and cache resources.
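A rough sketch of the canonical 7× 1g.10gb partition: the per-slice SM and memory figures are consistent with the A100's published MIG profiles, but this is an illustrative model of the layout, not a driver API:

```python
TOTAL_SMS = 108      # SMs enabled on the A100
SM_PER_SLICE = 14    # one MIG compute slice
MEM_SLICES = 8       # memory divides into 8 slices of ~10 GB each
GPU_MEM_GB = 80

def mig_1g_layout(n_instances: int = 7) -> list:
    """Sketch of a 7x 1g.10gb partition: each instance receives one
    compute slice (14 SMs) and one memory slice (~10 GB)."""
    assert n_instances <= 7, "at most 7 MIG instances per A100"
    return [{"sms": SM_PER_SLICE, "mem_gb": GPU_MEM_GB / MEM_SLICES}
            for _ in range(n_instances)]

parts = mig_1g_layout()
print(len(parts), "instances,", sum(p["sms"] for p in parts), "SMs in use")
```

Note that 7 × 14 = 98 of the 108 SMs are usable in this layout; the remainder and one memory slice are reserved, which is why the maximum is 7 instances rather than 8.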
**What workloads is the Baseboard best suited for?**
The NVIDIA A100 8x80GB Baseboard is ideally suited to the most resource-intensive calculations: deep learning, high-performance computing (HPC), scientific simulation (weather, genomics, physics), and big-data analysis. Its high FP64 performance (9.7 TFLOPS) makes it a standard for engineering calculations.
**What cooling does the system require?**
Given the high heat output (400 W TDP per GPU), the Baseboard must be installed in a server chassis with powerful airflow or a liquid-cooling system. The passive heatsinks on the modules are designed for external airflow from server fans.
**Is the A100 still relevant after the release of newer generations?**
Yes. The A100 remains the "gold standard" for its availability-to-performance ratio. A huge base of optimized software and libraries makes it the most stable and predictable choice for deploying existing AI pipelines.
**How many Tensor Cores does each GPU have?**
Each accelerator is equipped with 432 third-generation Tensor Cores, providing peak performance of up to 624 TFLOPS (with sparsity). This delivers enormous acceleration of the matrix calculations that underpin modern artificial intelligence.
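The 624 TFLOPS figure is the dense FP16 Tensor throughput doubled by 2:4 structured sparsity; the dense number can be reconstructed from core count and clock. The 1.41 GHz boost clock and 512 FLOPs/clock per core are assumptions consistent with published A100 specifications:

```python
TENSOR_CORES = 432
FLOPS_PER_CORE_PER_CLK = 512   # FP16 FMA throughput per 3rd-gen Tensor Core
BOOST_CLOCK_GHZ = 1.41         # assumed A100 boost clock

# cores * FLOPs/clock * clocks/ns, expressed in TFLOPS
dense_tflops = TENSOR_CORES * FLOPS_PER_CORE_PER_CLK * BOOST_CLOCK_GHZ / 1e3
sparse_tflops = dense_tflops * 2   # 2:4 structured sparsity doubles throughput
print(f"{dense_tflops:.0f} dense / {sparse_tflops:.0f} sparse TFLOPS")
```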
**Do the GPUs support NVLink?**
Yes. The Baseboard (SXM) form factor is specifically designed for maximum NVLink efficiency, combining all GPUs on the board into a single computing cluster with data-exchange speeds between them that standard PCIe cards cannot reach.
Fast and reliable delivery across the European Union
Estimated transit time: 3–7 days from order confirmation. Worldwide shipping is available for customers outside the EU.
All orders are processed within 24 hours after confirmation. Tracking information is provided as soon as the parcel leaves our logistics center.
Multiple Secure Payment Methods
We accept: Visa, MasterCard, Apple Pay, Google Pay, PayPal, SEPA, iDEAL, Amazon Pay, Bancontact, Klarna, and international bank transfers.
All transactions are encrypted and processed via certified Stripe payment gateway for your security.
Reviews
There are no reviews yet.