
NVIDIA H100 80GB PCIe OEM
P/N: 900-21010-0000-000
22 293 € (incl. VAT, Spain)
EU delivery within 3–7 days
This is an estimated timeframe. Delivery times may vary depending on logistics and stock availability.
Warranty: 2 years
The product is covered by the manufacturer’s standard warranty.
In stock
NVIDIA H100 80GB with fast EU delivery and worldwide shipping. Includes official warranty.
Our specialist will help you choose the right server components and ensure full compatibility with your system.
| Specification | Value |
|---|---|
| Weight | 1.55 kg |
| Dimensions | 26.8 × 11.1 × 17 cm |
| Country of manufacture | Taiwan |
| Manufacturer's warranty (years) | 1 |
| Model | NVIDIA H100 |
| Cache L2 (MB) | 50 |
| Process technology (nm) | 4 |
| Memory type | HBM2e |
| Graphics Processing Unit (Chip) | GH100 |
| Number of CUDA cores | 14592 |
| Number of Tensor cores | 456 |
| GPU Frequency (MHz) | 1095 |
| GPU Boost Frequency (MHz) | 1755 |
| Video memory size (GB) | 80 |
| Memory frequency (MHz) | 16000 |
| Memory bus width (bits) | 5120 |
| Memory Bandwidth (GB/s) | 2039 |
| Connection interface (PCIe) | PCIe 5.0 x16 |
| FP16 performance (TFLOPS) | 1979 |
| FP32 performance (TFLOPS) | 989 |
| FP64 performance (TFLOPS) | 49 |
| Cooling type | Passive (server module) |
| Number of occupied slots (pcs) | 2 |
| Temperature range (°C) | 0–85 |
| NVLink Throughput (GB/s) | 600 |
| Multi-GPU support | Yes, via NVLink |
| Virtualization/MIG support | MIG (up to 7 instances) |
| SKU | 900-21010-0000-100, 900-21010-0300-030, 900-21010-6200-030 |
| Architecture | Hopper |
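The table above lists MIG support with up to seven instances per card. As a hedged illustration only, not an official procedure from this page, partitioning is typically done with nvidia-smi on the host. The profile name `1g.10gb` and the instance count below are assumptions that depend on the driver version and the exact H100 variant; the commands require root privileges and an idle GPU.

```python
# Hedged sketch: partition an H100 into MIG instances via nvidia-smi.
# Assumptions: NVIDIA Data Center driver installed, root privileges, GPU 0 idle,
# and that the "1g.10gb" profile exists on this card (profile names vary by driver).
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["nvidia-smi", "-i", "0", "-mig", "1"])                  # enable MIG mode on GPU 0
run(["nvidia-smi", "mig", "-cgi", "1g.10gb,1g.10gb", "-C"])  # create two GPU instances with compute instances
run(["nvidia-smi", "mig", "-lgi"])                           # list the resulting GPU instances
```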
NVIDIA H100 80GB PCIe OEM is a professional accelerator based on the Hopper architecture, designed for training and inference of artificial intelligence models, big data processing, and high-performance computing. This card is intended for use in data centres and corporate infrastructures where scalability and efficiency are important.
Unlike gaming graphics cards, the H100 is not equipped with video outputs or multimedia units. It is a specialised tool for building clusters and server solutions, optimised for machine learning and HPC tasks.
The H100 80GB PCIe is in demand wherever world-class computing power is required: training and inference of AI models, big data processing, and high-performance computing.
NVIDIA H100 80GB PCIe OEM combines high performance with scalability. Support for PCIe Gen5, large memory capacity, and optimisation for AI frameworks make it a versatile solution for enterprise customers.
Compared to the previous-generation A100, the H100 accelerator delivers a several-fold increase in performance for machine learning and inference tasks, opening up opportunities for generative AI and the processing of LLM models.
While the A100 80GB was a versatile accelerator for HPC and AI, the H100 was designed specifically for generative models and ultra-large-scale language systems.
Thus, the A100 remains a proven and more affordable solution for data centres, while the H100 is the choice for those working at the forefront of generative AI.
We offer original NVIDIA H100 PCIe OEM server accelerators with warranty and official support.
Buying the H100 80GB PCIe OEM at OSODOSO.NET means investing in performance and stability for modern AI systems and data centres.
NVIDIA H100 80GB PCIe OEM is a specialised accelerator designed for AI clusters, HPC and enterprise computing. It combines the Hopper architecture, HBM2e memory, and support for modern tools, providing the foundation for the future of artificial intelligence.
The NVIDIA H100 80GB is a flagship graphics accelerator built on the Hopper architecture, designed for High-Performance Computing (HPC) and Artificial Intelligence tasks. The video card is engineered to accelerate the training of giant language models, deep neural networks, as well as to solve the most complex problems in genomics, quantum chemistry, and scientific simulation.
A key feature of the H100 is the use of fourth-generation Tensor Cores and the Transformer Engine, which provide unprecedented performance in AI tasks. The accelerator supports the PCIe 5.0 interface, which doubles the data exchange speed with the central processor compared to the previous generation, ensuring maximum bandwidth for scalable systems.
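As a minimal, hedged illustration of how those Tensor Cores are typically exercised from software (assuming a host with PyTorch and a CUDA-capable driver; the model, sizes, and data below are placeholders, and FP8 via the separate Transformer Engine library is an optional further step):

```python
# Minimal sketch: mixed-precision training on an H100 with PyTorch autocast.
# bfloat16 autocast routes matrix multiplications onto the GPU's Tensor Cores.
import torch
import torch.nn as nn

device = "cuda"
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(32, 1024, device=device)       # dummy batch
target = torch.randn(32, 1024, device=device)  # dummy targets

for step in range(10):
    optimizer.zero_grad(set_to_none=True)
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = nn.functional.mse_loss(model(x), target)
    loss.backward()
    optimizer.step()
```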
The table presents a comparison of the NVIDIA H100 80GB with nine closest competitors and alternative solutions available in this segment, including predecessors (A100), enhanced versions (H200, H100 NVL), and professional solutions based on Ada Lovelace and Blackwell architectures.
| Model | VRAM (GB) | Memory Type | Bandwidth (GB/s)* | CUDA Cores | Tensor Cores | FP16 (TFLOPS) | FP32 (TFLOPS) | FP64 (TFLOPS) | Tensor Perf (TFLOPS) | Interface | TDP (W) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| NVIDIA H100 80GB | 80 | HBM2e | 2039 | 14592 | 432 | 1979 | 989 | 49 | 3958 | PCIe 5.0 x16 | 350 |
| NVIDIA H200 141GB | 141 | HBM3e | 4800 | 14592 | 432 | 1979 | 989 | 49 | 3958 | PCIe 5.0 x16 | 600 |
| NVIDIA H100 96GB | 94 | HBM3 | 3900 | 16896 | 432 | 1979 | 989 | 49 | 3958 | PCIe 5.0 x16 | 400 |
| NVIDIA H100 NVL 94GB | 94 | HBM3 | 3900 | 14592 | 432 | 1513 | 756 | 30 | 3026 | PCIe 5.0 x16 | 400 |
| NVIDIA A100 80GB | 80 | HBM2e | 1935 | 6912 | 432 | 312 | 156 | 9.7 | 624 | PCIe 4.0 x16 | 300 |
| NVIDIA A100 40GB | 40 | HBM2e | 1555 | 6912 | 432 | 312 | 156 | 9.7 | 624 | PCIe 4.0 x16 | 250 |
| NVIDIA RTX Pro 6000 Blackwell 96GB | 96 | GDDR7 ECC | 1792 | 24064 | 752 | 110.1 | 55.1 | 1.97 | 220.2 | PCIe 5.0 x16 | 600 |
| NVIDIA RTX 6000 Ada 48GB | 48 | GDDR6 ECC | 960 | 18176 | 568 | – | 91.1 | – | 1457 | PCIe 4.0 x16 | 300 |
| NVIDIA L40S 48GB | 48 | GDDR6 ECC | 864 | 18176 | 568 | 733 | 366 | – | – | PCIe 4.0 x16 | 350 |
| NVIDIA L40 48GB | 48 | GDDR6 ECC | 864 | 18176 | 568 | 362.1 | 90.5 | 1.4 | – | PCIe 4.0 x16 | 300 |
*Memory Bandwidth (GB/s)
An analysis of the technical specifications shows the clear dominance of the NVIDIA H100 80GB in the high-performance computing segment compared to the reviewed competitors.
The NVIDIA H100 80GB is the undisputed technological leader in the segment under review. It represents a qualitative leap compared to the Ampere generation (A100), offering a multiple increase in performance in key AI and HPC metrics. It is the most powerful and balanced solution for building modern machine learning infrastructure, ensuring maximum computing density and readiness for tomorrow’s challenges.
The NVIDIA H100 80GB is a specialized computing accelerator created for the most resource-intensive workloads. Its main areas of application include training and inference of Large Language Models (LLMs, such as GPT), generative AI, High-Performance Computing (HPC), genomics, molecular dynamics, and complex scientific simulations. It is not intended for standard workstations or graphic design in the traditional sense.
The Hopper architecture delivers a significant performance boost. In AI tasks using FP16 precision, the H100 outperforms the A100 by more than 6 times (1979 TFLOPS vs. 312 TFLOPS). Thanks to the new Transformer Engine, neural network training speeds can be up to 9 times faster, and inference speeds up to 30 times faster, depending on the specific model and optimization.
Yes, the NVIDIA H100 features a PCIe 5.0 interface but is backward compatible with PCIe 4.0 slots. However, to unlock the full potential of data transfer speeds (up to 128 GB/s bi-directional) and memory performance, it is recommended to use platforms supporting PCIe Gen5.
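A hedged sketch for verifying the negotiated link on a given host (assuming the NVIDIA driver and its nvidia-smi utility are installed; the field names follow the standard --query-gpu syntax):

```python
# Query the current and maximum PCIe generation and the link width for each GPU.
import subprocess

fields = "name,pcie.link.gen.current,pcie.link.gen.max,pcie.link.width.current"
result = subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
# On a PCIe 4.0 host the current generation is expected to report 4 rather than 5.
```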
No. The NVIDIA H100 80GB is a computational accelerator (GPGPU) designed for server racks. The card lacks video outputs for connecting monitors. System management and access are performed remotely via the server console.
The card has a Thermal Design Power (TDP) of 350W and is equipped with a passive cooling system. This means it has no fans of its own and requires installation in a server chassis with powerful directional airflow passing through the card’s heatsink. For power, it uses a modern 16-pin connector (12VHPWR) or adapters included with certified servers.
Yes, the H100 PCIe version supports NVLink technology (via a special NVLink Bridge), allowing up to two video cards to be connected for direct data exchange at speeds of up to 600 GB/s, bypassing the PCIe bus. This is critically important for scaling AI training tasks when the model does not fit into the memory of a single accelerator.
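A hedged sketch for checking whether two installed cards can exchange data directly (assuming PyTorch with CUDA and at least two GPUs visible to the process; whether the direct path actually runs over NVLink rather than PCIe is shown by `nvidia-smi topo -m` on the host):

```python
# Check peer-to-peer access between GPU 0 and GPU 1, the path an NVLink bridge accelerates.
import torch

if torch.cuda.device_count() >= 2:
    p2p = torch.cuda.can_device_access_peer(0, 1)
    print("GPU0 <-> GPU1 peer access possible:", p2p)
else:
    print("Fewer than two GPUs visible; nothing to pair.")
```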
The accelerator is equipped with 80 GB of high-speed HBM2e memory (or HBM3, depending on the specific supply revision) with a bandwidth of over 2000 GB/s (up to 3.35 TB/s for SXM versions and around 2 TB/s for PCIe). This ensures lightning-fast data loading, which is necessary for working with large datasets.
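As a quick consistency check using only the figures from the specification table above (the bus width and bandwidth of the PCIe variant), the implied per-pin data rate works out to roughly 3.2 Gbit/s:

```python
# Derive the effective per-pin data rate from the spec-table values.
bus_width_bits = 5120   # memory bus width from the table
bandwidth_gb_s = 2039   # memory bandwidth from the table (PCIe variant)

per_pin_gbit_s = bandwidth_gb_s * 8 / bus_width_bits
print(f"Effective per-pin data rate: {per_pin_gbit_s:.2f} Gbit/s")  # about 3.19 Gbit/s
```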
The card itself works with NVIDIA Data Center Drivers. However, accessing the full stack of optimized software, including pre-trained models and frameworks (NVIDIA AI Enterprise), may require a separate enterprise license, which is often purchased alongside the hardware for commercial use.
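A hedged sketch of a basic sanity check before deploying the rest of the software stack (assuming PyTorch is installed on the host; NVIDIA AI Enterprise components are licensed and installed separately):

```python
# Confirm the driver and CUDA runtime are visible to a framework.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("CUDA runtime bundled with PyTorch:", torch.version.cuda)
    print("cuDNN version:", torch.backends.cudnn.version())
```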
Fast and reliable delivery across the European Union
Estimated transit time: 3–7 days from order confirmation. Worldwide shipping is available for customers outside the EU.
All orders are processed within 24 hours after confirmation. Tracking information is provided as soon as the parcel leaves our logistics center.
Multiple Secure Payment Methods
We accept Visa, MasterCard, Apple Pay, Google Pay, PayPal, SEPA, iDEAL, Amazon Pay, Bancontact, international bank transfers, and Klarna via the Stripe gateway.
All transactions are encrypted and processed through the certified Stripe payment gateway for your security.