A new cloud for a new generation of innovation

Taiga Cloud is Europe’s largest AI cloud service provider, fuelled by the very latest NVIDIA technology solutions. Our infrastructure as a service delivers true data sovereignty and runs on carbon-free energy, accessible via the Taiga Cloud self-service portal or through our API.

5 types of NVIDIA cloud solutions
1 goal: helping you bring your best ideas to life

NVIDIA GB200 GPUs

The GB200 NVL72 is a high-performance AI engine with 72 Blackwell GPUs and 36 Grace CPUs, delivering up to 30X faster real-time inference on trillion-parameter LLMs than H100-based systems. Designed for efficiency with liquid cooling, it offers 25X more performance than NVIDIA H100 air-cooled systems at the same power while reducing water consumption. Its second-generation Transformer Engine with FP8 precision enables 4X faster training for large language models, and it accelerates key database queries by 18X compared to CPUs, achieving 5X better total cost of ownership (TCO).
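
The FP8 training claim refers to NVIDIA’s Transformer Engine. As a minimal, illustrative sketch only (the layer size, batch size, and training step below are assumptions, not a Taiga Cloud or NVIDIA reference configuration), FP8 autocast with the open-source transformer-engine library looks roughly like this and requires a Hopper- or Blackwell-class GPU:

    import torch
    import transformer_engine.pytorch as te
    from transformer_engine.common import recipe

    # Illustrative single layer; a real LLM would stack te.TransformerLayer blocks.
    model = te.Linear(4096, 4096, bias=True).cuda()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # Delayed-scaling FP8 recipe: HYBRID uses E4M3 for activations/weights, E5M2 for gradients.
    fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

    x = torch.randn(32, 4096, device="cuda")

    # The forward pass runs its GEMMs in FP8; Transformer Engine manages the scaling factors.
    with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
        out = model(x)
        loss = out.float().pow(2).mean()

    loss.backward()
    optimizer.step()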

Top 3 GB200 GPU use cases

LLM inference, model acceleration, and large-scale AI training.

Product design, physics simulations, and circuit acceleration.

Data compression, storage efficiency, and accelerated analytics.

Pre-register now

NVIDIA H200 Tensor Core GPUs

With almost double the memory capacity of the NVIDIA H100 Tensor Core GPU, plus advanced performance capabilities, the H200 is a game changer. Our AI platform, Taiga Cloud, is one of the first in Europe to offer instant access to this revolutionary hardware, which delivers 141 gigabytes of HBM3e memory at 4.8 terabytes per second and 4 petaFLOPS of FP8 performance. Get ready to supercharge your AI and HPC workloads with up to 2x the LLM inference performance of the H100, up to 110x faster time to results than CPU-only systems for HPC, and a 50% reduction in LLM energy use and TCO.
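
To put 141 GB of HBM3e in context, here is a rough back-of-the-envelope estimate of single-GPU LLM serving memory (the model size, precision, and serving load below are illustrative assumptions, not vendor figures):

    # Rough memory estimate for serving an LLM on a single GPU.
    # All model and serving parameters are illustrative assumptions.
    GIB = 1024**3

    params = 70e9            # assumed model: 70B parameters
    bytes_per_param = 1      # FP8 weights: 1 byte per parameter
    weights_gib = params * bytes_per_param / GIB

    # KV cache = 2 (K and V) * layers * kv_heads * head_dim * bytes * context * batch
    layers, kv_heads, head_dim = 80, 8, 128   # assumed GQA architecture
    kv_bytes, context, batch = 2, 8192, 8     # FP16 cache, assumed serving load
    kv_cache_gib = 2 * layers * kv_heads * head_dim * kv_bytes * context * batch / GIB

    total_gib = weights_gib + kv_cache_gib
    print(f"weights ~{weights_gib:.0f} GiB + KV cache ~{kv_cache_gib:.0f} GiB = ~{total_gib:.0f} GiB")
    print(f"fits in 141 GB of HBM3e: {total_gib < 141e9 / GIB}")

Under these assumptions the weights and KV cache fit comfortably on a single H200, while the same workload would exceed the 80 GB of an H100.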

Top 3 H200 Tensor Core GPU use cases

Market simulations, risk assessment, and fraud detection.

Digital twin simulations for factories and logistics.

Processing larger datasets for drug discovery and medicine.

Request H200s now

NVIDIA H100 Tensor Core GPUs

Choose the NVIDIA H100 for enterprise AI, with up to 9x faster AI training on the largest models. Our H100s are configured into pods of 512 GPUs, connected into islands of four pods each (2,048 GPUs) using NVIDIA BlueField DPUs and the NVIDIA Quantum-2 InfiniBand platform, giving you an efficient, fast way to train LLMs. This configuration delivers business AI solutions in a much shorter timeframe. These H100 Tensor Core GPU islands will be spread across our European, clean-energy data center estate, providing additional resilience and redundancy.
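
Within a pod, training jobs are typically launched with standard multi-node tooling, and NCCL traffic rides the InfiniBand fabric without application changes. A minimal PyTorch distributed sketch (the model, sizes, and launch parameters are illustrative assumptions, not a prescribed Taiga Cloud setup) could look like this:

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE; NCCL uses the
        # InfiniBand fabric between nodes when it is available.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Illustrative model only; a real LLM run would layer a sharding
        # framework (e.g. FSDP or Megatron-style parallelism) on top.
        model = DDP(torch.nn.Linear(4096, 4096).cuda(), device_ids=[local_rank])
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        x = torch.randn(32, 4096, device="cuda")
        loss = model(x).pow(2).mean()
        loss.backward()          # gradients are all-reduced across every GPU in the job
        optimizer.step()
        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Launched with, for example, torchrun --nnodes=64 --nproc_per_node=8 --rdzv_backend=c10d --rdzv_endpoint=<head-node>:29500 train.py, the job spans 512 GPUs, i.e. one pod (the node count and endpoint are placeholders).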

Top 3 H100 Tensor Core GPU use cases

Industry-leading conversational AI and deep learning applications

AI-aided design for the manufacturing and automotive industries

Advanced medical research and scientific discoveries

Request H100s now

NVIDIA A100 Cloud GPUs

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration for AI, data analytics, and HPC at any scale. The A100 delivers up to 20x the performance of the previous-generation NVIDIA Volta architecture.

Top 3 A100 use cases

Deep learning training

Deep learning inferencing

High-performance data analytics

Request A100s now

NVIDIA RTX™ A6000 GPUs

The NVIDIA RTX™ A6000 GPUs provide the speed and performance that enable engineers to develop innovative products, architects to design cutting-edge buildings, and scientists to make breakthrough discoveries.

Top 3 NVIDIA RTX™ A6000 use cases

Graphics and simulation workflows

Photorealistic rendering, CAD and CAE

AI model training and data science

Request A6000s now

Operated in Europe. Running on carbon-free energy.

Hosted in Europe to help you meet data sovereignty and compliance standards

Non-blocking network with DE-CIX access and low latency (sub-10 ms)

PUE (power usage effectiveness) between 1.06 and 1.15

H100 InfiniBand pods of 512 GPUs and islands of 2,048 GPUs or more

NVIDIA GPUs alongside CPU and RAM resources – dedicated to your workload

Access via API or Taiga self-service portal
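
For programmatic access, provisioning is typically a matter of an authenticated HTTPS call. The sketch below is purely illustrative: the base URL, endpoint, fields, and bearer-token scheme are assumptions, not Taiga Cloud’s documented API, so consult the actual API reference for the real schema.

    import os
    import requests

    # Placeholder base URL and payload; not the documented Taiga Cloud API.
    API_BASE = "https://api.example.com/v1"
    TOKEN = os.environ["TAIGA_API_TOKEN"]   # assumed bearer-token authentication

    payload = {
        "gpu_type": "H100",    # assumed field: GPU model to reserve
        "gpu_count": 8,        # assumed field: number of GPUs
        "region": "eu-north",  # assumed field: European region
    }

    resp = requests.post(
        f"{API_BASE}/instances",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())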