
Verda Monthly Digest: January Edition ❄️

Verda Content Team · 5 min read

January marks a strong start to the new year at Verda. We entered 2026 with a focus on scaling our platform, expanding next-generation infrastructure, and supporting builders. This month’s updates reflect our continued commitment to building reliable, high-performance, and sustainable AI infrastructure for teams across Europe and beyond.

Verda in 2026

As we look ahead to 2026, our focus at Verda is clear. We are building a truly sovereign European AI cloud, designed to give builders, enterprises, and organizations access to high-performance AI infrastructure without compromising on data protection, transparency, or sustainability.

In the year ahead, we will continue expanding next-generation compute, improving self-service and automation across the platform, and deepening our engagement with the open-source and enterprise ecosystems. Our goal is to move beyond promise and build real, production-ready AI infrastructure that strengthens Europe’s position in the global AI landscape.

Platform Updates

This month, we rolled out a series of platform updates designed to make Verda easier to use, automate, and scale. The release includes early access to next-generation GB300 NVL72 hardware, expanded GPU offerings, new infrastructure tooling, and managed inference endpoints.

Terraform Integrations


We launched a Terraform provider for Verda, making it easy to provision and manage Verda infrastructure using infrastructure-as-code with Terraform and OpenTofu.

To validate the provider, we deployed DeepSeek-R1 (NVFP4) with SGLang on 4× NVIDIA B300 SXM6 with local NVMe storage.

We also published a step-by-step guide covering provider setup, compute and storage configuration, the DeepSeek-R1 deployment workflow, and a fully reproducible SGLang benchmark.
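For a sense of what infrastructure-as-code on Verda can look like, here is a minimal sketch. The provider source, resource names, and arguments below are illustrative assumptions, not the provider's actual schema, so refer to the step-by-step guide for the real syntax.

```hcl
terraform {
  required_providers {
    verda = {
      source = "verda/verda" # assumed registry address
    }
  }
}

provider "verda" {
  # Credentials are assumed to come from an environment variable
  # such as VERDA_API_TOKEN; see the provider docs for the real setup.
}

# Hypothetical resource names and arguments, for illustration only
resource "verda_instance" "inference" {
  name      = "deepseek-r1"
  gpu_type  = "B300-SXM6"
  gpu_count = 4
}

resource "verda_volume" "scratch" {
  name    = "nvme-scratch"
  size_gb = 2000
}
```

With a configuration like this in place, the usual `terraform init`, `terraform plan`, and `terraform apply` workflow (or the OpenTofu equivalents) provisions and updates the resources declaratively.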

Want to get started? Read the blog →

GB300 NVL72

The latest Blackwell Ultra GB300 NVL72 is now available on Verda, delivering next-generation performance for large-scale AI training and inference. You can request access directly through the Verda Cloud Platform.

Reserve now →

For large-scale deployments, please reach out to our VP of Sales, Anssi Harjunpaa, to discuss your use case and requirements.

Book a meeting →

RTX PRO 6000

A new wave of RTX PRO 6000 GPUs is now live on Verda, expanding capacity for high-performance AI inference, simulation, and media workloads. Spin up instances instantly, scale with ease, and put powerful GPUs to work across a wide range of production use cases.

Deploy now →

Managed Endpoints for FLUX.2 [klein]

We have added managed endpoints for FLUX.2 [klein] on Verda, making it easy to run fast, scalable image generation and editing workloads without manual setup. You can now deploy FLUX.2 [klein] through a simple API and focus on building rather than managing infrastructure.
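As a rough sketch of what calling a managed endpoint can look like: the endpoint URL, payload fields, and auth scheme below are assumptions for illustration, not the documented API, so check the Verda docs for the real interface.

```python
import json
import os
import urllib.request

# Hypothetical endpoint URL -- the real address is in the Verda docs.
ENDPOINT = "https://api.verda.com/v1/endpoints/flux-2-klein/generate"


def build_request(prompt: str, width: int = 1024, height: int = 1024) -> dict:
    """Assemble a generation request body (field names are assumptions)."""
    return {
        "prompt": prompt,
        "width": width,
        "height": height,
        "output_format": "png",
    }


def generate_image(prompt: str) -> bytes:
    """POST a generation request and return the raw response bytes."""
    body = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            # Bearer-token auth is an assumption here.
            "Authorization": f"Bearer {os.environ['VERDA_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The point of a managed endpoint is that this request is all the integration work there is: no model weights to download, no GPU instance to provision, and scaling is handled behind the API.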

Try it out →

Transferring funds between projects

You can now transfer funds between projects on Verda, making it easier to manage budgets across teams and workloads. To get started, follow the step-by-step instructions in our documentation.

Read the docs →

Ecosystem Updates

LMSYS Org

LMSYS Org published a technical deep dive on INT4 Quantization-Aware Training (QAT), in which the SGLang RL team demonstrated how extreme low-bit quantization enables ~1 TB-scale models to run rollouts on a single GPU with strong train-to-inference consistency. Verda proudly sponsored the compute resources used in this work, helping support open-source advances in efficient large-model training and inference.

Read the research

ISTAS Quartet II

We provided NVIDIA Blackwell GPU hardware to support recent research on NVFP4 quantized training. The work introduces MS-EDEN and Quartet II, advancing fully quantized NVFP4 training with significantly lower quantization error and up to 4.2× speedup over BF16. This research demonstrates how Verda’s Blackwell-based infrastructure enables cutting-edge experimentation in low-precision training for large language models.

Learn more

Job Openings

We’re growing fast and expanding our team. If you’re passionate about AI infrastructure and want to help build Europe’s next hyperscaler, we’d love to hear from you.

Explore our open roles and join us on the journey to shape the future of cloud computing.

PR

Verda was featured in recent media coverage highlighting the growing demand for European cloud infrastructure and digital sovereignty. The articles position Verda as part of the emerging ecosystem of AI-focused cloud providers building alternatives to traditional hyperscalers.

Events

MLOps Meet-up Helsinki

We kicked off the year with an AI meetup at Maria 01, bringing together the local MLOps and AI community.

The event featured a talk from Riccardo Mereu, ML Engineer at Verda, on FP4 and low-precision AI on NVIDIA Blackwell Ultra, alongside insights from Shantipriya Parida of AMD Silo AI on model compression and inference efficiency.

The evening wrapped up with drinks and conversations, highlighting Verda’s role in supporting the growing AI ecosystem in Europe.

Upcoming Events

Check out our event calendar for the latest updates on our next events.

Event Calendar

Communities

Developer Community

Our developer community is a space where AI builders share benchmarks, workflow tips, and real-world lessons from training and inference at scale. It is the best place to ask questions, learn from other users, and stay close to what we are shipping next.

Join the conversation on forum.verda.com and build with the community.

Discord

You can also join our Discord, where the Verda community connects in real time. It is a great place to get quick help, exchange ideas, share experiments, and keep up with new features as they launch. Come hang out with AI builders from around the world and be part of the conversation.

Join the discussion
