
A new cloud for a new generation of innovation

Taiga Cloud is Europe’s largest AI cloud service provider, fuelled by the very latest NVIDIA technology solutions. Our infrastructure as a service delivers true data sovereignty and runs on carbon-free energy, accessible via the Taiga Cloud self-service portal or through our API.

4 types of NVIDIA cloud solutions
1 goal: helping you bring your best ideas to life

NVIDIA H200 Tensor Core GPUs

With almost double the memory capacity of the NVIDIA H100 Tensor Core GPU, plus advanced performance capabilities, the H200 is a game changer. Our AI platform, Taiga Cloud, is one of the first in Europe to offer instant access to this revolutionary hardware, which delivers 141 gigabytes of HBM3e memory at 4.8 terabytes per second and 4 petaFLOPS of FP8 performance. Get ready to supercharge your AI and HPC workloads with up to 2x the LLM inference performance, 110x faster time to results, and a 50% reduction in LLM energy use and TCO.
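
For a rough sense of what 141 gigabytes of HBM3e per GPU means in practice, the sketch below is an illustrative back-of-the-envelope estimate (weights only, ignoring KV cache, activations and runtime overhead) of which model sizes fit on a single H200 at common weight precisions; the model sizes chosen are examples, not a supported list.

```python
# Back-of-the-envelope check: which model sizes fit in a single H200's 141 GB
# of HBM3e? Weights only -- KV cache, activations and runtime overhead ignored.

H200_MEMORY_GB = 141

BYTES_PER_PARAM = {"fp16/bf16": 2, "fp8/int8": 1}

def weights_gb(params_billions: float, bytes_per_param: int) -> float:
    """Approximate memory footprint of the weights alone, in gigabytes."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for params in (7, 13, 70, 180):
    for precision, nbytes in BYTES_PER_PARAM.items():
        size = weights_gb(params, nbytes)
        verdict = "fits on one GPU" if size <= H200_MEMORY_GB else "needs multiple GPUs"
        print(f"{params:>4}B params @ {precision:<9}: {size:6.0f} GB -> {verdict}")
```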

Top 3 H200 Tensor Core GPU use cases

Industry-leading conversational AI and deep learning applications

AI-aided design for the manufacturing and automotive industries

Advanced medical research and scientific discoveries

Request H200s now

NVIDIA H100 Tensor Core GPUs

Choose the NVIDIA H100 for enterprise AI, with up to 9x faster AI training on the largest models. Our H100s are configured into pods of 512 GPUs and connected into islands of four pods each (2,048 GPUs) using NVIDIA BlueField DPUs and the NVIDIA Quantum-2 InfiniBand platform, giving you an efficient and fast way to train LLMs and delivering AI solutions to businesses in a much shorter timeframe. These H100 Tensor Core GPU islands are spread across our European, clean-energy data center estate, providing additional resilience and redundancy.
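
Training frameworks consume this pod-and-island fabric in the usual multi-node way, with NCCL running its collectives over InfiniBand. The sketch below is a minimal PyTorch DistributedDataParallel loop, assuming instances are already provisioned and the job is launched with torchrun; the toy linear model and the rendezvous endpoint are placeholders, not part of the Taiga Cloud platform.

```python
# Minimal multi-node data-parallel sketch with PyTorch + NCCL (NCCL uses
# InfiniBand when it is available). Launch with torchrun on every node, e.g.:
#   torchrun --nnodes=4 --nproc_per_node=8 \
#            --rdzv_backend=c10d --rdzv_endpoint=<head-node>:29500 train.py
# <head-node> is a placeholder for whichever instance acts as rendezvous host.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for every process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # A toy model stands in for a real LLM; DDP synchronises gradients
    # across every GPU in the job via NCCL all-reduce.
    model = torch.nn.Linear(4096, 4096).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 4096, device=f"cuda:{local_rank}")
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()          # gradients all-reduced over InfiniBand
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```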

Top 3 H100 Tensor Core GPU use cases

Industry-leading conversational AI and deep learning applications

AI-aided design for the manufacturing and automotive industries

Advanced medical research and scientific discoveries

Request H100s now

NVIDIA A100 Cloud GPUs

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration for AI, data analytics, and HPC at any scale. The current generation of A100 GPUs delivers up to 20x the performance of the previous generation.
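
Much of that generational speedup for deep learning comes from the A100's Tensor Cores. As a minimal illustration (a toy model stands in for a real workload; this is not Taiga Cloud-specific code), the sketch below opts a PyTorch training loop into TF32 matmuls and bf16 mixed precision.

```python
# Enabling TF32 matmuls and bf16 autocast on an A100 -- a minimal sketch with
# a toy model in place of a real training loop.

import torch

# Allow TF32 Tensor Core math for float32 matmuls and convolutions
# (available on Ampere-class GPUs such as the A100 and newer).
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
x = torch.randn(64, 1024, device="cuda")

for _ in range(5):
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = model(x).pow(2).mean()   # forward pass runs in mixed precision
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```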

Top 3 A100 use cases

Deep learning training

Deep learning inferencing

High performance data analytics

Request A100s now

NVIDIA RTX™ A6000 GPUs

The NVIDIA RTX™ A6000 GPUs provide the speed and performance to enable engineers to develop innovative products, help architects design cutting-edge buildings, and help scientists make breakthrough discoveries.

Top 3 NVIDIA RTX™ A6000 use cases

Graphics and simulation workflows

Photorealistic rendering, CAD and CAE

AI model training and data science

Request A6000s now

Operated in Europe. Running on carbon-free energy.

Hosted in Europe to help achieve sovereignty and compliance standards

Non-blocking network with DE-CIX access and low latency (sub-10 ms)

PUE performance between 1.06 and 1.15

H100 InfiniBand pods of 512 GPUs and islands of 2,048 GPUs or more

NVIDIA GPUs alongside CPU and RAM resources – dedicated to your workload

Access via API or Taiga self-service portal
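
Programmatic access follows the familiar REST pattern. The sketch below is purely illustrative: the endpoint URL, payload fields and environment variable are placeholders rather than Taiga Cloud's actual API schema, so refer to the API documentation for the real calls.

```python
# Illustrative only: a generic REST-style provisioning request using the
# requests library. The URL, payload fields and response shape are
# placeholders -- the real endpoints come from the Taiga Cloud API docs.

import os
import requests

API_TOKEN = os.environ["TAIGA_API_TOKEN"]    # placeholder variable name
BASE_URL = "https://api.example.com/v1"      # placeholder endpoint

def request_gpu_instance(gpu_type: str, count: int) -> dict:
    """Ask the (hypothetical) provisioning endpoint for GPU capacity."""
    response = requests.post(
        f"{BASE_URL}/instances",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"gpu_type": gpu_type, "gpu_count": count},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    instance = request_gpu_instance("H200", 8)
    print(instance)
```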