December closed out a transformative year at Verda. From platform improvements to new partnerships and community milestones, the month capped a significant chapter of our growth. Here's a look at what happened in December.
Tech Updates
GB300 NVL72
We’ve onboarded the first customers to the GB300 NVL72 early access program. We’ll be sharing more about their groundbreaking work very soon!
- If you’d like to join this early access program, contact us.
- To get notified about general availability, head over to the cloud console.
Capacity expansion
This month we expanded capacity across our GPU fleet, bringing new waves of Blackwell and Blackwell Ultra online: B300, B200, and RTX PRO 6000.
Whether you are training large models, running high-throughput inference, or building media and simulation pipelines, this expansion improves availability and reduces wait times, making it easier for teams to spin up compute when they need it most.
ICE-01 discontinuation
As part of our ongoing infrastructure optimization, we have discontinued the ICE-01 location. This change allows us to focus capacity on our newest, most efficient regions and improve overall reliability and scalability across the platform.
All services remain available through our other data-center locations, and we continue to expand infrastructure to support growing demand.
If you have any questions or need assistance, don’t hesitate to contact us.
Ecosystems
Siili Solutions
We’re delighted to announce a new partnership with Siili Solutions to deliver a sovereign and compliant LLM-as-a-Service offering for European enterprises and public-sector organizations.
Built for scalability and trust, the solution runs on NVIDIA GPU infrastructure hosted in Finland, is powered by 100% renewable energy, and provides serverless, high-performance inference with predictable pay-per-token pricing.
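For teams curious what consuming such a service looks like in practice, serverless pay-per-token inference is typically accessed through an OpenAI-compatible API. Below is a minimal, hypothetical sketch using the OpenAI Python client; the base URL, model name, and API key are illustrative placeholders, not published values for this offering.

```python
# Hypothetical example: calling a serverless, pay-per-token inference endpoint
# through the OpenAI Python client. The base_url, model name, and key are
# placeholders -- consult the actual service documentation for real values.
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.example.eu/v1",  # placeholder EU-hosted endpoint
    api_key="YOUR_API_KEY",                      # placeholder credential
)

response = client.chat.completions.create(
    model="example-llm",  # placeholder model identifier
    messages=[{"role": "user", "content": "Summarize our data residency policy."}],
    max_tokens=200,
)

print(response.choices[0].message.content)
# Billing on services like this is typically per input/output token,
# which is what makes per-request cost predictable.
```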
SGLang
Huge congrats to the SGLang team on the latest release!
This update brings a major boost to the Miles RL training framework by adding FSDP2 (Fully Sharded Data Parallel) and introducing a more flexible training backend, making large-scale RL training easier to run, easier to adapt, and better suited for newer model architectures like Alibaba Cloud’s Qwen3-Next.
We’re proud to have supported this work by sponsoring Blackwell GPU compute, helping enable the large-scale training and experimentation needed to push Miles forward.
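For readers unfamiliar with FSDP2, the core idea is to shard model parameters across ranks and gather them layer by layer during forward and backward passes, keeping per-GPU memory low at scale. Here is a minimal sketch in plain PyTorch using the `fully_shard` API; it assumes a recent PyTorch that exposes `fully_shard` under `torch.distributed.fsdp`, and the toy model is purely illustrative, not Miles internals.

```python
# Minimal FSDP2-style sharding sketch in plain PyTorch -- illustrative only,
# not Miles code. Assumes launch via torchrun with one GPU per rank and a
# PyTorch version that exposes `fully_shard` under torch.distributed.fsdp.
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import fully_shard

dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())


class Block(nn.Module):
    """Stand-in for a transformer block in a policy model."""

    def __init__(self, dim: int = 1024):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.ff(x)


model = nn.Sequential(*[Block() for _ in range(4)]).cuda()

# Shard each block, then the root module, so weights are gathered
# layer by layer during forward/backward rather than all at once.
for block in model:
    fully_shard(block)
fully_shard(model)

# Training then proceeds as usual; the optimizer steps on sharded parameters.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(8, 1024, device="cuda")
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
```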
Job Openings
We’re growing fast and expanding our team. If you’re passionate about AI infrastructure and want to help build Europe’s next hyperscaler, we’d love to hear from you.
Explore our open roles and join us on the journey to shape the future of cloud computing:
- GPU Container Expert
- Senior / Principal Site Reliability Engineer (EU and US)
- Senior Software Developer, Go/Kubernetes
- Senior Application Security Analyst
- Senior Backend Developer
- Principal Frontend Engineer
- Senior OpenStack Engineer
- Senior Data Center Networking Technician
- AI/ML Developer Advocate (Marketing)
- Product Marketing Director
- Forward Deployed Engineer (UK)
- Enterprise Sales Director (UK)
- Open Application / Community Talent
Events
December was a quieter month for us, but we still showed up at EurIPS in Copenhagen 🇩🇰 and hosted a side event to connect with the community and share insights on AI engineering and infrastructure.
AI meetup for Founders and Researchers (EurIPS side event)
We hosted a Verda and byFounders tech meetup in Copenhagen alongside EurIPS, bringing together frontier AI builders from the local ecosystem and conference community.
The evening focused on FP4 low-bit training on NVIDIA Blackwell, featuring talks from Verda ML engineers Paul Chang and Riccardo Mereu, followed by Andrei Panferov from ISTA. The night wrapped with great discussions and networking over dinner and drinks.
Upcoming Events
On February 4th, Verda is sponsoring the Helsinki MLOps Community's next meetup. Our ML Engineer Riccardo Mereu will give a talk on NVIDIA Blackwell Ultra.
More details on the speaker lineup and agenda will follow soon.
Communities
Developer Community
Our developer community is a space where AI builders share benchmarks, workflow tips, and real-world lessons from training and inference at scale. It is the best place to ask questions, learn from other users, and stay close to what we are shipping next.
Join the conversation on forum.verda.com and build with the community.
Discord
You can also join our Discord, where the Verda community connects in real time. It is a great place to get quick help, exchange ideas, share experiments, and keep up with new features as they launch. Come hang out with AI builders from around the world and be part of the conversation.