
P/N: 900-21001-0000-000
4 783€ (incl. VAT, Spain)
Delivery is made within 3-7 days
This is an estimated timeframe. Delivery times may vary depending on logistics and stock availability.
Warranty 1 year
The product is covered by the manufacturer’s standard warranty.
Out of stock
NVIDIA A100 40GB with fast EU delivery and worldwide shipping. The best price in the European Union. Includes an official warranty.
Our specialist will help you choose the right server components and ensure full compatibility with your system.
| Weight | 1 kg |
|---|---|
| Dimensions | 26.7 × 11.1 × 17 cm |
| Country of manufacture | Taiwan |
| Manufacturer's warranty (years) | 1 |
| Model | NVIDIA A100 |
| Cache L2 (MB) | 40 |
| Process technology (nm) | 7 |
| Memory type | HBM2e |
| Graphics Processing Unit (Chip) | GA100 |
| Number of CUDA cores | 6912 |
| Number of Tensor cores | 432 |
| GPU Frequency (MHz) | 765 |
| GPU Boost Frequency (MHz) | 1410 |
| Video memory size (GB) | 40 |
| Memory frequency (MHz) | 1215 (2430 effective) |
| Memory bus width (bits) | 5120 |
| Memory Bandwidth (GB/s) | 1555 |
| Connection interface (PCIe) | PCIe 4.0 x16 |
| FP16 Tensor Core performance (TFLOPS) | 312 |
| TF32 Tensor Core performance (TFLOPS) | 156 |
| FP64 performance (TFLOPS) | 9.7 |
| Cooling type | Passive (server module) |
| Number of occupied slots (pcs) | 2 |
| Length (cm) | 26.7 |
| Width (cm) | 11.1 |
| Weight (kg) | 1 |
| Temperature range (°C) | 0–85 |
| NVLink Throughput (GB/s) | 600 |
| Multi-GPU support | Yes, via NVLink |
| Virtualization/MIG support | MIG (up to 7 instances) |
| SKU | 900-21001-0100-030, 900-21001-2700-030 |
| Architecture | Ampere |
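As a quick consistency check on the memory figures above, the listed bandwidth follows directly from the bus width and the per-pin data rate. A back-of-envelope sketch using only the spec-table numbers (it implies roughly 2.43 Gbit/s per pin, i.e. a DDR clock of about 1215 MHz):

```python
# Consistency check on the spec table: memory bandwidth should equal
# bus width * per-pin data rate.
bus_width_bits = 5120   # memory bus width from the spec table
bandwidth_gb_s = 1555   # memory bandwidth from the spec table

# Per-pin data rate implied by the listed bandwidth (Gbit/s per pin)
data_rate_gbps = bandwidth_gb_s * 8 / bus_width_bits
print(f"Implied per-pin data rate: {data_rate_gbps:.2f} Gbit/s")
```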
NVIDIA A100 40GB PCIe OEM is a professional accelerator based on the Ampere architecture — the benchmark for modern data centers and enterprise AI solutions. Featuring 40 GB of high-bandwidth HBM2 memory, it delivers an exceptional balance of power and efficiency, allowing organizations to scale compute infrastructure with flexibility and confidence.
This GPU is widely used across various industries — from machine learning training and inference to complex scientific simulations and industrial modeling. Unlike consumer graphics cards, the A100 is purpose-built for professional workloads, where precision, memory bandwidth, and enterprise-grade reliability are essential.
NVIDIA A100 40GB PCIe OEM is a universal accelerator suitable for deep learning training and inference, high-performance computing (HPC), big data analytics, and scientific simulation.
The A100 40GB marked the beginning of the Ampere generation, setting new standards for enterprise AI acceleration. It’s designed for organizations requiring high training throughput and the ability to scale compute resources efficiently.
Compared with the 80 GB version, the A100 40GB PCIe targets workloads where ultra-high memory capacity isn’t required but bandwidth and Tensor Core power remain critical. With MIG technology, the GPU can be split into seven independent virtual instances — ideal for cloud providers and distributed compute environments.
Relative to the previous Tesla V100 generation, the A100 delivers up to a 20× performance increase in AI and HPC tasks, along with significantly improved energy efficiency.
NVIDIA A100 40GB PCIe OEM is a trusted enterprise-class accelerator that combines the Ampere architecture, high-speed HBM2 memory, and powerful Tensor Cores. It unlocks new possibilities in AI development, big data analytics, and scientific computing — the ideal choice for organizations that demand performance, scalability, and reliability.
NVIDIA A100 40GB is a high-performance compute accelerator based on the Ampere architecture, which has become the industry standard for data centers, artificial intelligence tasks, and scientific computing. It is designed for maximum scalability and efficiency in fields such as deep learning, big data analysis, and high-performance computing (HPC).
A key advantage of the A100 is its third-generation Tensor Cores with support for Tensor Float 32 (TF32), which significantly accelerates neural network training without a loss in precision. Thanks to Multi-Instance GPU (MIG) support, a single accelerator can be partitioned into several isolated hardware instances, optimizing resource usage across different workloads running simultaneously.
The table below presents a comparison of the NVIDIA A100 40GB with its nine closest competitors and alternative solutions in the professional accelerator segment, including previous-generation models, modern flagship architectures such as Hopper and Blackwell, as well as specialized solutions from the L and RTX series.
| Model | VRAM (GB) | Memory Type | Bandwidth (GB/s)* | CUDA Cores | Tensor Cores | FP16 (TFLOPS) | FP32 (TFLOPS) | FP64 (TFLOPS) | Tensor Perf (TFLOPS) | Interface | TDP (W) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| NVIDIA A100 40GB | 40 | HBM2e | 1555 | 6912 | 432 | 312 | 156 | 9.7 | 624 | PCIe 4.0 x16 | 250 |
| NVIDIA A100 80GB | 80 | HBM2e | 1935 | 6912 | 432 | 312 | 156 | 9.7 | 624 | PCIe 4.0 x16 | 300 |
| NVIDIA H100 80GB | 80 | HBM2e | 2039 | 14592 | 456 | 1979 | 989 | 49 | 3958 | PCIe 5.0 x16 | 350 |
| NVIDIA H200 141GB | 141 | HBM3e | 4800 | 14592 | 432 | 1979 | 989 | 49 | 3958 | PCIe 5.0 x16 | 600 |
| NVIDIA RTX Pro 6000 Blackwell 96GB | 96 | GDDR7 ECC | 1792 | 24064 | 752 | 110.1 | 55.1 | 1.97 | 220.2 | PCIe 5.0 x16 | 600 |
| NVIDIA RTX 6000 Ada 48GB | 48 | GDDR6 ECC | 960 | 18176 | 568 | – | 91.1 | – | 1457 | PCIe 4.0 x16 | 300 |
| NVIDIA L40S 48GB | 48 | GDDR6 ECC | 864 | 18176 | 568 | 733 | 366 | – | – | PCIe 4.0 x16 | 350 |
| NVIDIA L40 48GB | 48 | GDDR6 ECC | 864 | 18176 | 568 | 362.1 | 90.5 | 1.4 | – | PCIe 4.0 x16 | 300 |
| NVIDIA RTX A6000 48GB | 48 | GDDR6 | 768 | 10752 | 336 | – | 38.7 | 1.2 | 309.7 | PCIe 4.0 x16 | 300 |
| NVIDIA A40 48GB | 48 | GDDR6 | 695.8 | 10752 | 336 | 37.4 | 37.4 | 0.6 | – | PCIe 4.0 x16 | 300 |
*Bandwidth refers to memory throughput. Values marked with “–” indicate missing data.
Analysis of the table highlights the advantages that keep the NVIDIA A100 40GB competitive in the high-performance computing segment. It remains one of the most balanced solutions for professional computing: strong FP64 performance and fast HBM memory make it an efficient choice for scientific and analytical tasks compared with many newer but more specialized cards, while its moderate power consumption and MIG support confirm its status as a reliable foundation for modern computing infrastructure.
This accelerator is optimized for the most demanding workloads in modern data centers. Primary application areas include training deep neural networks (Deep Learning), artificial intelligence inference (AI Inference), high-performance computing (HPC), and complex big data analysis. Thanks to the Ampere architecture, the card provides a massive performance boost in tasks requiring high memory bandwidth and complex mathematical computations.
MIG technology allows a single physical A100 GPU to be partitioned into seven isolated hardware instances. Each instance has its own dedicated compute cores and memory. This enables efficient utilization of accelerator resources by running different types of tasks simultaneously or providing access to multiple users without performance degradation and with guaranteed Quality of Service (QoS).
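The seven-way split described above can be sketched with simple resource arithmetic, assuming the 1g.5gb profile, which carves a 40 GB A100 into seven 5 GB instances (actual profile IDs vary by driver version and should be listed with `nvidia-smi mig -lgip`):

```python
# Resource arithmetic for splitting a 40 GB A100 into seven 1g.5gb
# MIG instances: each gets 1 of 7 compute slices and a 5 GB memory slice.
TOTAL_MEMORY_GB = 40
COMPUTE_SLICES = 7

instances = [{"profile": "1g.5gb", "memory_gb": 5, "slices": 1}
             for _ in range(COMPUTE_SLICES)]

used = sum(i["memory_gb"] for i in instances)
print(f"{len(instances)} isolated instances, {used} GB of {TOTAL_MEMORY_GB} GB")
# On a real system the equivalent split is created with nvidia-smi;
# the remaining memory is reserved by the driver for MIG overhead.
```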
HBM2e (High Bandwidth Memory) in the NVIDIA A100 40GB provides a bandwidth of 1555 GB/s. This is significantly higher than cards using GDDR6. Such speed is critical for working with massive datasets and complex AI models, as it prevents GPU idle time while waiting for data from memory, thereby accelerating overall computation time.
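A rough roofline estimate shows why this bandwidth matters: using the spec-table peaks, a kernel needs roughly 200 FLOPs per byte of memory traffic before the GPU stops being memory-bound. A simplified sketch that ignores caches:

```python
# Ridge point of a simple roofline model for the A100 40GB: the
# arithmetic intensity (FLOP/byte) where peak compute time equals
# peak memory-transfer time. Below it, kernels are memory-bound.
peak_fp16_tensor_flops = 312e12   # FP16 Tensor Core peak, from the spec table
memory_bandwidth = 1555e9         # HBM bandwidth in bytes/s

ridge_point = peak_fp16_tensor_flops / memory_bandwidth
print(f"Compute-bound above ~{ridge_point:.0f} FLOP/byte")
```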
The NVIDIA A100 40GB is a leader in double-precision computing, delivering 9.7 TFLOPS of performance. This makes it an ideal choice for scientific modeling, weather forecasting, quantum chemistry, and engineering simulations where high data precision is crucial. In this regard, it outperforms RTX or L-series accelerators by several times.
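To put the FP64 figure in perspective, a back-of-envelope lower bound for a dense double-precision matrix multiply at the quoted peak (an ideal estimate; real kernels reach some fraction of this):

```python
# Ideal time for an 8192 x 8192 dense FP64 matrix multiply at the
# A100's 9.7 TFLOPS double-precision peak (no launch or memory overheads).
n = 8192
flops = 2 * n**3            # one multiply + one add per inner-product term
peak_fp64 = 9.7e12          # FP64 peak in FLOP/s, from the spec table

print(f"{flops / peak_fp64 * 1000:.1f} ms at peak")
```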
The card has a maximum power consumption (TDP) of 250W. The PCIe version features a dual-slot design with passive cooling, which requires installation in server chassis with high-velocity directional airflow. For stable operation, it is recommended to use high-quality server-grade power supplies designed for high peak loads.
Yes, the accelerator supports third-generation NVIDIA NVLink technology. This allows multiple GPUs to be combined into a single computing system with shared memory and a bandwidth of up to 600 GB/s between GPUs. This solution is essential for training ultra-large language models that do not fit into the memory of a single accelerator.
The NVIDIA A100 40GB utilizes a PCIe 4.0 x16 interface, which provides double the data transfer rate of the previous generation. However, it is fully backward compatible with PCIe 3.0. It should be noted that when using the older interface, the bandwidth between the CPU and GPU will be limited, which may slow down performance in tasks with intensive data exchange.
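The practical impact of the interface generation can be estimated from theoretical link speeds (approximate figures; sustained throughput is typically around 80% of these):

```python
# Host-to-device transfer time for a 10 GB batch over PCIe 3.0 vs 4.0 x16,
# using theoretical peak link rates in GB/s.
payload_gb = 10
links = {"PCIe 3.0 x16": 16, "PCIe 4.0 x16": 32}

for name, bw in links.items():
    print(f"{name}: {payload_gb / bw * 1000:.0f} ms")
```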
The TF32 format in third-generation Tensor Cores can accelerate neural network training by up to 10 times compared to FP32 without requiring changes to the code or model structure. TF32 keeps the same dynamic range as FP32 (an 8-bit exponent) while reducing mantissa precision to 10 bits, allowing it to run at the speed of specialized AI formats and significantly shortening development and deployment time.
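A rough pure-Python illustration of the format: TF32 keeps FP32's 8-bit exponent but only 10 explicit mantissa bits. The sketch below simply truncates the low mantissa bits, whereas the hardware applies proper rounding, so it only approximates TF32 behavior:

```python
import struct

def tf32_round(x: float) -> float:
    """Reduce a float to TF32-like precision: FP32's 8-bit exponent is
    kept, but only 10 explicit mantissa bits survive (low 13 bits dropped)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= ~((1 << 13) - 1)   # zero the 13 least-significant mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(tf32_round(3.14159265))  # 3.140625 -- dynamic range kept, fine detail lost
```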
The NVIDIA A100 supports NVIDIA vGPU software, including NVIDIA Virtual Compute Server (vCS). This allows the card to be used in virtualized environments to distribute computing power among virtual machines, which is particularly in demand in cloud infrastructures and corporate data centers for AI and analytics workloads.
Fast and reliable delivery across the European Union
Estimated transit time: 3–7 days from order confirmation. Worldwide shipping is available for customers outside the EU.
All orders are processed within 24 hours after confirmation. Tracking information is provided as soon as the parcel leaves our logistics center.
Multiple Secure Payment Methods
We accept: Visa, MasterCard, PayPal, Bank Transfer, Klarna, Stripe, Revolut Pay, Google Pay, Apple Pay, and USDT (TRC20) cryptocurrency payments.
All transactions are encrypted and processed via certified payment gateways for your security.
Additional Notes