November was a month of steady growth, connection, and meaningful transformation for Verda. We continued to attend events across Europe and engage in discussions about the future of AI and the infrastructure required to support it. Alongside these community moments, we rolled out new product capabilities, expanded our hardware capacity, and improved the performance and reliability of the Verda Cloud Platform.
This month also marked a major milestone in our journey. We began the official transition from DataCrunch to Verda. The name change represents more than a new identity. It reinforces the values and mission that have guided us from day one.
As we close out the year, we remain committed to empowering builders, researchers, and enterprises with the tools and infrastructure they need to create the next generation of AI innovations on Verda.
Mission & Values
Our rebrand to Verda marks a new chapter in how we present ourselves to the world, while staying true to everything we have built as DataCrunch. This is the same company, the same team, and the same platform, now with a name that better reflects who we are becoming as a full-stack, AI-first European cloud.
Verda is still committed to building a sustainable cloud platform powered entirely by clean energy, protecting the privacy and independence of our users, and strengthening Europe’s ability to compete in the global AI landscape. These values guide every decision we make as we continue to expand Verda for the developers and companies who build the future.
Tech Update
This month we rolled out a new pricing structure, expanded capacity across key GPU and CPU nodes, and shipped new platform features. We also deepened collaborations with partners to bring more cutting-edge models and infrastructure to Verda users.
Pricing Restructure
Dynamic Pricing is being phased out as we shift to a simpler and more predictable fixed pricing model across Verda. Our goal is to make it easier for teams to plan budgets and scale production workloads without surprises.
As part of this transition, fixed prices will become more competitive throughout the platform, while still being reviewed and refreshed periodically. The result is stable, transparent pricing you can rely on, with performance and value that keep improving over time.
Capacity Expansion
In November we significantly expanded our compute capacity to support growing demand from builders and enterprises. New B200 and B300 nodes are now online, alongside additional RTX PRO 6000 instances and high-performance CPU nodes.
These upgrades ensure faster provisioning, higher availability, and more flexibility for users running everything from large-scale training to complex inference and simulation workloads.
B300 SXM6
B300 SXM6 instances are now available on Verda, giving builders immediate access to NVIDIA’s latest Blackwell Ultra generation for demanding AI training and inference. These instances are designed for high throughput, large-memory workloads, and efficient scaling, making them a strong fit for everything from frontier model training to production inference at scale.
You can launch B300 SXM6 today through the Verda Cloud Platform and start running your most ambitious workloads with next-gen performance.
Spot prices starting from $1.24 per GPU per hour
B200 SXM6
To celebrate our latest capacity expansion, we have lowered B200 pricing by 5% across both fixed and spot options.
Instant Clusters are now an even better deal. On top of the 5% B200 price drop, pay-as-you-go Instant Clusters get an additional 10% off, giving you the fastest way to spin up a cluster at the lowest cost.
FLUX.2 by Black Forest Labs

FLUX.2 models are now available on Verda, and we are excited to be an official inference partner for Black Forest Labs’ latest state-of-the-art image generation lineup. Users can now get fast, secure access to FLUX.2 [pro] and [flex], along with an ultra-efficient hosted endpoint for FLUX.2 [dev], all running on Verda infrastructure.
Batch Jobs
Batch Jobs are now officially available to everyone on Verda. After a successful early-access phase, we are moving this serverless container feature into general availability, so any team can run long, compute-heavy workloads without manual setup or infrastructure management.
Partnerships
We’re excited to partner with leading teams across the AI ecosystem to bring new capabilities to Verda. These collaborations help us deliver faster access to cutting-edge models and infrastructure, while supporting builders with reliable and scalable compute.
dstack
We are thrilled to announce our partnership with dstack, an open-source alternative to Kubernetes and Slurm that simplifies and accelerates container orchestration for AI workloads.
Through our joint technical integration, you can use dstack’s unified control plane to provision and orchestrate GPU resources across multi- and hybrid-cloud setups, such as Verda and on-prem.
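To give a flavor of what this looks like in practice, here is a minimal sketch of a dstack task definition in dstack’s standard YAML run-configuration format. The task name, commands, and GPU spec (`H100:8`) are illustrative placeholders, and the exact backend and credential setup for Verda should be taken from the dstack documentation rather than from this example.

```yaml
# example.dstack.yml — a minimal, illustrative dstack task definition
type: task
name: fp4-training-example

# Commands executed inside the provisioned container
commands:
  - pip install -r requirements.txt
  - python train.py

# GPU resources to provision; the GPU model and count are placeholders
resources:
  gpu: H100:8
```

Once your dstack server is configured with a Verda backend, a run like this can be submitted with `dstack apply -f example.dstack.yml`, and dstack handles provisioning the instances and launching the container.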
Get started with the docs.
General News
MLPerf
Huge milestone for us in MLPerf Training this round. By successfully training Llama 3.1 8B on 64 GPUs across 8 nodes and reaching the target perplexity, Verda is now listed in the same MLCommons Training categories as CoreWeave, Lambda, NVIDIA, AMD, and other leading GPU cloud and hardware providers.
This validation shows that our infrastructure can deliver real, benchmarked performance for large-scale model training, not just in theory but in production.
Job Openings
We’re growing fast and expanding our team across engineering, infrastructure, and product. If you’re excited about AI infrastructure, large-scale systems, and building a sustainable European cloud, we would love to hear from you. Take a look at our open roles and join us in shaping what’s next at Verda.
- Senior Application Security Analyst
- Senior OpenStack Engineer
- GPU Container Expert
- AI/ML Developer Advocate (Marketing)
- Senior/Principal Site Reliability Engineer
- Senior Backend Developer
- Principal Frontend Engineer
Events
Our team was on the move throughout November, meeting AI builders, researchers, and founders, and sharing insights at events across Europe. From community meet-ups to technical sessions and partner gatherings, we had the chance to showcase Verda’s latest infrastructure, learn from the ecosystem, and connect with people pushing AI forward.
Here’s a look at where we showed up this month and what we’ve been talking about.
Slush

Our CEO Ruben Bryon took the stage at Slush to speak about the rapid growth of AI worldwide and why Europe needs a trustworthy cloud alternative built for this new era.
He shared Verda’s vision to meet that need by becoming a leading European AI cloud provider focused on reliability, strong data security, and sustainability.
AI meetup for founders and researchers (EurIPS side event)
We hosted a Verda and byFounders tech meetup in Copenhagen alongside EurIPS, bringing together frontier AI builders from the local ecosystem and conference community. The evening focused on low-bit training in FP4 and what it unlocks for both research and real-world AI product scaling.
Our ML engineers Paul Chang and Riccardo Mereu opened with a deep dive into FP4 on NVIDIA Blackwell, followed by Andrei Panferov from ISTA sharing when FP4 pre-training makes sense for LLMs. The talks sparked lively discussion, and the night wrapped with dinner, drinks, and great conversations among founders, ML engineers, product leaders, and researchers.
Finland Agentics x Verda meetup #3 - Slush Edition
We partnered with Finland Agentics for a Slush side event in Helsinki, bringing together AI leaders, engineers, and founders for an evening focused on agentic AI and scaling real products. The program featured a tech talk from Solita, a panel with builders from Verda, Root Signals, Invinite, and Supabase, plus lightning demos and networking. It was a packed, high-energy meetup that highlighted Finland’s growing agentic AI ecosystem.
What it takes to be AI Native
We co-hosted an AI Native knowledge-sharing session with FSC at Maria 01 in Helsinki. Our CEO Ruben Bryon and Chief of Staff Magnus Hambleton led an engaging discussion on what it truly means to be AI native, covering product strategy, cost efficiency, infrastructure choices, and emerging trends.
Attendees gained practical insights from real-world case studies and explored how European, renewable-powered infrastructure can shape the next generation of AI-powered businesses.
Aalto Agentic AI Hackathon
We were proud to sponsor the Aalto Agentic AI Hackathon this month, supporting teams exploring the next wave of autonomous and agent-based AI. It was inspiring to see builders move fast from ideas to working prototypes, and we are excited that Verda credits helped power projects throughout the event.
Junction
We also joined Junction as a sponsored partner this month, supporting one of Europe’s biggest hackathons and the builders behind it. It was especially exciting to see one of the winning teams power their project using Verda credits, showing what teams can achieve with fast, scalable compute.
Upcoming Events
Check out our Event Calendar for the latest updates on upcoming events.
Our Communities
Developer Community
Our developer community is a space where AI builders share benchmarks, workflow tips, and real-world lessons from training and inference at scale. It is the best place to ask questions, learn from other users, and stay close to what we are shipping next.
Join the conversation on forum.verda.com and build with the community.
Discord
You can also join our Discord, where the Verda community connects in real time. It is a great place to get quick help, exchange ideas, share experiments, and keep up with new features as they launch. Come hang out with AI builders from around the world and be part of the conversation.