
P/N: 900-21001-0020-100
8 844€ (inc. VAT (Spain))
Delivery is made within 3-7 days
This is an estimated timeframe. Delivery times may vary depending on logistics and stock availability.
Warranty 1 year
The product is covered by the manufacturer’s standard warranty.
Out of stock
NVIDIA A100 80GB with fast EU delivery and worldwide shipping. The best price in the European Union. Includes official warranty.
Our specialist will help you choose the right server components and ensure full compatibility with your system.
| Weight | 1 kg |
|---|---|
| Dimensions | 26.7 × 11.1 × 17 cm |
| Country of manufacture | Taiwan |
| Manufacturer's warranty (years) | 1 |
| Model | NVIDIA A100 |
| Cache L2 (MB) | 40 |
| Process technology (nm) | 7 |
| Memory type | HBM2e |
| Graphics Processing Unit (Chip) | GA100 |
| Number of CUDA cores | 6912 |
| Number of Tensor cores | 432 |
| GPU Frequency (MHz) | 1065 |
| GPU Boost Frequency (MHz) | 1410 |
| Video memory size (GB) | 80 |
| Memory data rate (MT/s) | 3024 |
| Memory bus width (bits) | 5120 |
| Memory Bandwidth (GB/s) | 1935 |
| Connection interface (PCIe) | PCIe 4.0 x16 |
| FP16 Tensor performance (TFLOPS) | 312 |
| TF32 Tensor performance (TFLOPS) | 156 |
| FP64 performance (TFLOPS) | 9.7 |
| Cooling type | Passive (server module) |
| Number of occupied slots (pcs) | 2 |
| Length (cm) | 26.7 |
| Width (cm) | 11.1 |
| Weight (kg) | 1 |
| Temperature range (°C) | 0–85 |
| NVLink Throughput (GB/s) | 600 |
| Multi-GPU support | Yes, via NVLink |
| Virtualization/MIG support | MIG (up to 7 instances) |
| SKU | 900-21001-0020-000, 900-21001-0120-130, 900-21001-2720-030 |
| Architecture | Ampere |
NVIDIA A100 80GB PCIe OEM is a professional accelerator built on the Ampere architecture, designed for artificial intelligence, high-performance computing (HPC), and big data analytics. This GPU remains an industry standard for data centers and research institutions, offering the perfect balance between performance and cost.
80 GB of HBM2e memory with ECC and a bandwidth of up to 1,935 GB/s allows efficient processing of large AI models and massive datasets. Support for Multi-Instance GPU (MIG) technology makes it ideal for cloud and distributed environments, enabling a single GPU to be partitioned into up to seven independent instances.
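To see why the 80 GB capacity matters in practice, here is a rough back-of-envelope sketch of whether a model's weights alone fit in the A100's VRAM. The model sizes are hypothetical examples, and real memory use also includes activations, optimizer state, and framework overhead.

```python
# Rough estimate of whether a model's weights fit in the A100's 80 GB of HBM2e.
# Illustrative only: real memory use also includes activations, optimizer
# state, KV caches, and framework overhead.

def weights_gib(num_params: float, bytes_per_param: int) -> float:
    """Memory needed for the weights alone, in GiB."""
    return num_params * bytes_per_param / 2**30

A100_VRAM_GIB = 80

for name, params in [("13B model", 13e9), ("30B model", 30e9), ("70B model", 70e9)]:
    fp16 = weights_gib(params, 2)  # FP16/BF16: 2 bytes per parameter
    fits = "fits" if fp16 <= A100_VRAM_GIB else "needs multi-GPU or quantization"
    print(f"{name}: {fp16:.1f} GiB in FP16 -> {fits}")
```

Under these assumptions, a 30B-parameter model in FP16 (~56 GiB) fits on a single card, while a 70B model (~130 GiB) requires multiple GPUs or lower-precision quantization.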
NVIDIA A100 80GB is still regarded as a reliable standard for data centers — ideal for neural network training, inference, and scientific workloads, offering excellent stability and predictable performance. However, the release of NVIDIA H100 has shifted the focus to an entirely new level of AI computing.
While the A100 was designed as a universal accelerator for AI and HPC workloads, the H100 was built specifically for generative AI and large-scale language model training. It introduces next-generation Tensor Cores and FP8 precision support, dramatically boosting performance in LLM training and inference. With the same memory size, the H100 delivers much higher bandwidth and data throughput, while its Hopper architecture is overall more efficient than Ampere.
Thus, the A100 remains a more affordable and proven choice for organizations that need reliable, time-tested AI and big data accelerators — while the H100 is the solution for those working on the cutting edge of generative AI, seeking maximum performance for the most demanding workloads.
Purchasing the A100 80GB PCIe OEM means investing in a proven accelerator that has become the industry standard and remains relevant for the vast majority of enterprise and research applications.
NVIDIA A100 80GB is a high-performance graphics accelerator based on the Ampere architecture, which has become the industry standard for artificial intelligence tasks, data analysis, and high-performance computing (HPC). This model offers double the memory capacity compared to the original A100 40GB version and significantly increased bandwidth, enabling efficient work with the largest models and datasets.
A key feature of the A100 80GB is the use of fast HBM2e memory, providing a bandwidth of nearly 2 TB/s, as well as support for Multi-Instance GPU (MIG) technology, which allows a single physical accelerator to be partitioned into multiple isolated instances to optimize resource utilization.
The table presents a comparison of the NVIDIA A100 80GB with nine competitors and alternative solutions, including newer Hopper (H100, H200) and Blackwell architecture models, as well as workstation and server solutions (RTX and L40 series).
| Model | VRAM (GB) | Memory Type | Bandwidth (GB/s)* | CUDA Cores | Tensor Cores | FP16 (TFLOPS) | FP32 (TFLOPS) | FP64 (TFLOPS) | Tensor Perf (TFLOPS) | Interface | TDP (W) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| NVIDIA A100 80GB | 80 | HBM2e | 1935 | 6912 | 432 | 312 | 156 | 9.7 | 624 | PCIe 4.0 x16 | 300 |
| NVIDIA H100 80GB | 80 | HBM2e | 2039 | 14592 | 432 | 1979 | 989 | 49 | 3958 | PCIe 5.0 x16 | 350 |
| NVIDIA H200 141GB | 141 | HBM3e | 4800 | 14592 | 432 | 1979 | 989 | 49 | 3958 | PCIe 5.0 x16 | 600 |
| NVIDIA H100 96GB | 96 | HBM3 | 3900 | 16896 | 432 | 1979 | 989 | 49 | 3958 | PCIe 5.0 x16 | 400 |
| NVIDIA H100 NVL 94GB | 94 | HBM3 | 3900 | 14592 | 432 | 1513 | 756 | 30 | 3026 | PCIe 5.0 x16 | 400 |
| NVIDIA A100 40GB | 40 | HBM2e | 1555 | 6912 | 432 | 312 | 156 | 9.7 | 624 | PCIe 4.0 x16 | 250 |
| NVIDIA RTX Pro 6000 Blackwell 96GB | 96 | GDDR7 ECC | 1792 | 24064 | 752 | 110.1 | 55.1 | 1.97 | 220.2 | PCIe 5.0 x16 | 600 |
| NVIDIA RTX 6000 Ada 48GB | 48 | GDDR6 ECC | 960 | 18176 | 568 | – | 91.1 | – | 1457 | PCIe 4.0 x16 | 300 |
| NVIDIA L40S 48GB | 48 | GDDR6 ECC | 864 | 18176 | 568 | 733 | 366 | – | – | PCIe 4.0 x16 | 350 |
| NVIDIA L40 48GB | 48 | GDDR6 ECC | 864 | 18176 | 568 | 362.1 | 90.52 | 1.414 | – | PCIe 4.0 x16 | 300 |
*Bandwidth = peak memory bandwidth. Values marked with “–” indicate data missing from the available specifications.
The analysis shows that the NVIDIA A100 80GB retains key advantages in specific task segments, especially when compared to alternative server and workstation solutions.
The NVIDIA A100 80GB remains a powerful and sought-after tool for the corporate sector and research institutes. Possessing a significant advantage in memory bandwidth and FP64 performance over modern universal L and RTX series accelerators, it represents the gold standard for tasks requiring high computational precision and rapid processing of large data volumes, yielding only to its direct successors of the Hopper architecture.
The main difference lies in the volume and speed of the video memory. The 80GB version is equipped with HBM2e standard memory, which not only doubles the available capacity for loading massive neural networks and datasets but also provides memory bandwidth at 1935 GB/s (compared to 1555 GB/s for the 40GB version), significantly accelerating model training and high-performance computing.
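The bandwidth gap between the two versions can be quantified with a simple sketch, using only the peak figures quoted above (real workloads rarely sustain peak bandwidth):

```python
# Back-of-envelope comparison of the two A100 memory configurations,
# using the peak bandwidth figures from the spec sheet.

def sweep_time_ms(vram_gb: float, bandwidth_gbs: float) -> float:
    """Time for one full read of VRAM at peak bandwidth, in milliseconds."""
    return vram_gb / bandwidth_gbs * 1000

t80 = sweep_time_ms(80, 1935)  # A100 80GB: 1935 GB/s
t40 = sweep_time_ms(40, 1555)  # A100 40GB: 1555 GB/s
ratio = 1935 / 1555            # ~1.24x higher peak bandwidth

print(f"80GB full sweep: {t80:.1f} ms, 40GB: {t40:.1f} ms, "
      f"bandwidth ratio: {ratio:.2f}x")
```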
No, the NVIDIA A100 80GB is not designed for gaming and does not have video outputs (HDMI or DisplayPort) for connecting a monitor. It is a specialized accelerator for servers and workstations, created for artificial intelligence (AI), data analysis, and scientific calculations.
The graphics card is equipped with a passive cooling system. This means it only has a heatsink without fans. For correct operation, the card must be installed in a server chassis or workstation with powerful directional airflow ensuring necessary heat dissipation.
Yes, the card supports Multi-Instance GPU (MIG) technology. It allows a single physical A100 GPU to be partitioned into seven fully isolated virtual instances, each with its own dedicated memory and compute resources. This is ideal for cloud solutions and simultaneous multi-user operation.
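The core MIG constraint can be sketched as follows: the A100 exposes 7 compute slices, and a mix of instance profiles is valid only if the slice counts fit. This is a simplified check using the standard A100 80GB profile names; the real MIG placement rules (see NVIDIA's MIG documentation) impose additional layout constraints beyond the slice sum.

```python
# Simplified sketch of the MIG partitioning constraint on an A100 80GB:
# 7 compute slices total, and a requested mix of GPU-instance profiles
# must not exceed them. Real MIG placement has further layout rules.

A100_80GB_PROFILES = {  # profile name -> compute slices it consumes
    "1g.10gb": 1,
    "2g.20gb": 2,
    "3g.40gb": 3,
    "4g.40gb": 4,
    "7g.80gb": 7,
}
TOTAL_SLICES = 7

def partition_fits(requested: list[str]) -> bool:
    """True if the requested instance mix fits within the 7 compute slices."""
    return sum(A100_80GB_PROFILES[p] for p in requested) <= TOTAL_SLICES

print(partition_fits(["1g.10gb"] * 7))                    # maximum isolation
print(partition_fits(["3g.40gb", "3g.40gb", "1g.10gb"]))  # 3 + 3 + 1 = 7
print(partition_fits(["4g.40gb", "4g.40gb"]))             # 8 slices: too many
```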
Yes, the A100 80GB PCIe supports combining two graphics cards using an NVLink Bridge. This provides a data exchange speed between cards of up to 600 GB/s, which is critical for scaling large language model training tasks and complex simulations.
The card uses the PCI Express 4.0 x16 interface. It is fully compatible with modern servers supporting PCIe 4.0 and 5.0 (in backward compatibility mode), ensuring high data transfer speeds between the processor and the video card.
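A quick illustration of why the NVLink bridge matters for multi-GPU scaling: the sketch below compares moving a hypothetical gradient buffer over NVLink (600 GB/s, the figure quoted above) versus PCIe 4.0 x16 (~32 GB/s per direction, theoretical peak). These are peak numbers; real transfers see protocol overhead.

```python
# Illustrative comparison of moving a gradient buffer between two A100s
# over the NVLink bridge (600 GB/s) versus PCIe 4.0 x16 (~32 GB/s per
# direction, theoretical peak). Peak numbers only.

def transfer_ms(size_gb: float, link_gbs: float) -> float:
    """Transfer time at the given link speed, in milliseconds."""
    return size_gb / link_gbs * 1000

GRADIENTS_GB = 10  # hypothetical 10 GB gradient exchange per step

nvlink = transfer_ms(GRADIENTS_GB, 600)
pcie = transfer_ms(GRADIENTS_GB, 32)
print(f"NVLink: {nvlink:.1f} ms, PCIe 4.0 x16: {pcie:.1f} ms "
      f"({pcie / nvlink:.0f}x slower)")
```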
The maximum power consumption (TDP) of the card is 300 W. Connection usually requires the use of specialized power cables (often 8-pin CPU/EPS12V in server configurations), and the server power supply must have sufficient power reserve to service all installed accelerators.
This model is the industry standard for training and inference of large language models (LLM), deep learning, high-performance computing (HPC) in physics, genomics, and chemistry, as well as for real-time big data analytics.
When purchasing from our store, an official 3-year warranty is provided. This ensures investment reliability and support in case of warranty claims throughout the long operation period.
Fast and reliable delivery across the European Union
Estimated transit time: 3–7 days from order confirmation. Worldwide shipping is available for customers outside the EU.
All orders are processed within 24 hours after confirmation. Tracking information is provided as soon as the parcel leaves our logistics center.
Multiple Secure Payment Methods
We accept: Visa, MasterCard, PayPal, Bank Transfer, Klarna, Stripe, Revolut Pay, Google Pay, Apple Pay, and USDT (TRC20) cryptocurrency payments.
All transactions are encrypted and processed via certified payment gateways for your security.
Additional Notes