General FAQs
Verda (formerly DataCrunch) is a next-generation AI cloud that gives AI builders instant access to powerful, production-grade GPUs
at unbeatable prices. With self-service instances and clusters, and top-tier support,
we remove infrastructure barriers, so AI teams can focus on what matters most: building great models and products.
Verda accelerates AI projects by becoming an extension of your team, focused on optimizing the performance, reliability, and cost of your AI workloads.
Verda supports a broad range of use cases, including model training and inference on virtual machines
(instances and clusters), bare-metal servers, serverless containers and managed service endpoints. We also support
co-development of custom AI stacks and software integrations. Our customers and users include:
• AI-first startups and scaleups
• Applied research teams
• Infrastructure engineers deploying ML systems
• Enterprises needing inference or training at scale
1. We give AI builders instant access. With Verda, you get the latest GPU instances, clusters, bare-metal servers, serverless containers, and managed endpoints without hassle or hurdles.
2. We're an extension of your AI team. We're always available with a team of AI infrastructure experts
to help solve performance, latency, and availability issues.
3. We have optimized our AI stack to help you get more value from every GPU hour. With Verda, startup times are 30-50% faster, which cuts unproductive waiting time out of your AI spend. We also offer Spot Pricing, which provides additional cost-saving opportunities.
We believe in transparency, trust, and sustainability. We're open about our communication, security practices, privacy protections, and responsible energy usage. We're ISO 27001 certified and GDPR compliant.
At Verda, we provide support through multiple channels to ensure you get the help you need,
when you need it. You can reach us via chat support, email, the Developer Community, or Discord.
Our team of AI infrastructure experts is here to assist with performance optimization, scalability,
and infrastructure management. In most cases, you'll be able to access your project and start running
workloads instantly without assistance. Nonetheless, whenever you need help, we're ready to step in.
We see ourselves as an extension of your AI infrastructure team, whether it's troubleshooting,
fine-tuning workloads, or sharing knowledge and best practices to help you get the most out of Verda.
Verda makes it easy to access production-grade GPU resources, with higher availability of AI-optimized systems such as HGX servers, all at affordable prices. There are no sales hurdles or delays to get started running AI workloads, and we provide a developer-first experience through our cloud dashboard and APIs. In many cases, you'll also be able to save up to 90% compared to hyperscalers.
Compute & Infrastructure FAQs
Verda provides access to a broad range of GPU models to meet the performance and cost requirements
of your specific workloads. GPUs include NVIDIA HGX B200, H200, H100, A100, L40S,
RTX 6000 Ada, RTX A6000, and V100. CPUs are primarily high-end server CPUs from the AMD EPYC family. Please see our instances and clusters pages for the latest information.
Verda supports distributed training via dedicated multi-GPU clusters (instant or bare metal),
high-speed networking, and compatibility with industry-standard and open-source distributed training
frameworks. We support distributed frameworks such as PyTorch DDP, TensorFlow, and Hivemind, as well as advanced frameworks like OpenDiLoCo. Verda actively engages in multi-datacenter distributed training research and development, such as the global training of Prime Intellect's INTELLECT-1.
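As a concrete illustration, below is a minimal PyTorch DDP training sketch of the kind that runs unchanged on a multi-GPU Verda instance or cluster; the model and data are placeholders, not part of any Verda-specific API.

```python
# Minimal PyTorch DDP sketch. Launch with torchrun, e.g.:
#   torchrun --nproc_per_node=8 train_ddp.py
# The model and data here are placeholders for illustration.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):  # placeholder loop over synthetic data
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).square().mean()
        optimizer.zero_grad()
        loss.backward()  # gradients are all-reduced across GPUs here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Each process drives one GPU, and DDP performs the gradient all-reduce automatically during the backward pass.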
Yes, you can reserve or schedule compute resources on the Verda cloud. Verda offers the
ability to reserve compute capacity by purchasing long-term rentals, which are paid upfront and
ensure that specific GPU instances or clusters are held exclusively for you during the contract period.
This is the primary method for guaranteeing access to high-demand resources, which can be crucial if
you need predictable, uninterrupted capacity for large projects or peak periods.
To schedule compute jobs, you can use the API combined with typical cron jobs or your own scheduling
code to ensure workloads run according to your schedule. Verda instant GPU clusters come with
the Slurm job scheduling system pre-installed. Please keep in mind that scheduling is subject to available
capacity unless the capacity has been reserved.
There are many ways to deploy an AI model. The most common is a containerized model deployment using Docker, which can run on a Verda instance or cluster, or via the managed Serverless Containers service.
We also provide support for the NVIDIA Triton Inference Server, vLLM, SGLang, FastAPI, Flask, and other tools and
frameworks.
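For the containerized route, a deployment can be as small as a FastAPI app wrapping the model, built into a Docker image and run on an instance or as a serverless container. Here is a minimal sketch with a placeholder standing in for a real model call:

```python
# Minimal FastAPI inference server sketch; the /predict route and the
# placeholder "model" are illustrative. Run with:
#   uvicorn app:app --host 0.0.0.0 --port 8000
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    text: str

@app.post("/predict")
def predict(req: PredictRequest):
    # Swap this placeholder for a real backend call (e.g. vLLM or Triton).
    return {"prediction": req.text.upper()}
```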
Verda's managed inference endpoints, such as the FLUX models for image generation and editing or the Whisper model for transcription and translation, come built in, optimized, and ready to use.
The FLUX.1 Kontext managed service endpoint is a turnkey, production-grade API for leveraging state-of-the-art FLUX models from Black Forest Labs for next-gen image generation and editing, hosted and operated entirely as a managed service that abstracts away the infrastructure, scaling, and performance tuning for the end user. Verda plans to release additional managed endpoints, such as FLUX.1 on the Krea platform.
Yes, Verda is designed to let customers deploy and run their own machine learning models, containerized or otherwise, on demand, with minimal friction and support for a broad spectrum of deployment scenarios. You can upload weights or mount them from local files, the Hugging Face Hub, or cloud object storage (e.g., S3 or GCS).
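For example, pulling weights from the Hugging Face Hub onto an instance takes only a few lines with the transformers library. The model ID below is just an example, and gated models additionally require an access token:

```python
# Sketch: loading example weights from the Hugging Face Hub (requires the
# transformers and accelerate packages; the model ID is an example).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the dtype stored in the checkpoint
    device_map="auto",   # spread weights across available GPUs
)
```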
Developer Tools and APIs
Verda focuses on a great developer experience, with fast access, easy onboarding, and an API-first approach.
Verda provides a comprehensive set of APIs and client libraries for interacting with its GPU cloud resources, running workloads, managing infrastructure, and deploying inference endpoints. These include a public REST API and a Python SDK.
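A brief sketch of the Python SDK in use, based on the SDK published under the DataCrunch name before the rename; the exact class and method names are assumptions, so check the SDK documentation for current releases:

```python
# Hypothetical SDK usage sketch; names follow the pre-rename DataCrunch
# Python SDK and may differ in current releases.
from datacrunch import DataCrunchClient

client = DataCrunchClient("<client-id>", "<client-secret>")

# List the instances currently deployed under this account.
for instance in client.instances.get():
    print(instance.id, instance.status)
```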
Verda provides the following developer tools, in addition to the published API:
• CLI: Deploy, monitor, and scale workloads
• Python SDK: Automate pipelines and experiment tracking
• Web Dashboard: Manage endpoints, track usage, inspect logs
• Monitoring: View latency, throughput, memory, and GPU metrics in real time for serverless containers
Yes, Verda supports OpenAI-style APIs for deploying and serving language models. This is possible
via integrations with popular open-source LLM frameworks such as SGLang and vLLM, both of which can be
deployed on Verda in configurations that expose endpoints compatible with the OpenAI API protocol.
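For instance, a vLLM server started on an instance exposes an OpenAI-compatible endpoint that the standard openai Python client can call; the host, port, and model name below are illustrative.

```python
# Sketch: querying a vLLM server running on a Verda instance through the
# OpenAI-compatible API it exposes. Host, port, and model are illustrative;
# start the server on the instance first, e.g.:
#   vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
from openai import OpenAI

client = OpenAI(
    base_url="http://<instance-ip>:8000/v1",  # your instance's endpoint
    api_key="EMPTY",  # vLLM accepts any key unless one is configured
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Say hello from Verda."}],
)
print(response.choices[0].message.content)
```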
Billing & Pricing
At Verda, we are committed to a flexible and transparent pricing structure designed to align with market
demand. There are two billing options available: On-demand and Spot Pricing.
On-demand Pricing: We offer two contract types: pay-as-you-go and long-term rental.
• Pay-as-you-go: Usage is billed in 10-minute increments. Note: one 10-minute increment is always charged up front when deploying an Instance or Cluster.
• Long-term rental: For users who need resources over an extended period of time, with discounts that increase with contract length. Note: only available on Instances.
Spot Pricing: At least 25% cheaper than On-demand Pricing, but capacity can be terminated at any time without warning. Billed in 10-minute increments, like pay-as-you-go. Note: only available on Instances and Serverless Containers.
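As a worked example of increment billing, assuming usage always rounds up to the next 10-minute block (consistent with the up-front increment above), the arithmetic looks like this; the hourly rate is invented for illustration:

```python
# Sketch of 10-minute increment billing; the rounding-up assumption and the
# $2.00/h rate are illustrative, not actual Verda prices.
import math

def billed_cost(runtime_minutes: float, hourly_rate: float) -> float:
    increments = math.ceil(runtime_minutes / 10)  # whole 10-minute blocks
    return increments * (10 / 60) * hourly_rate

# A 25-minute session bills as three increments: 3 * (10/60) h * $2.00/h = $1.00
print(billed_cost(25, 2.00))
```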
Payments can be made through the payment card on file or by bank transfer.
Yes: startups and research teams may qualify for free compute credits. In addition, qualified new accounts may be granted limited credits to conduct a trial or proof of concept. Reach out at [email protected]. Read our Docs to learn more about how to receive free credits.
Verda does not impose traditional hard quota limits on GPU usage for general customers. Instead,
access is governed by real-time availability and elastic scaling. For special cases or very large clusters,
users can reserve capacity in advance. For constrained resources, such as recently released GPU models in high demand, users can request additional availability by contacting [email protected].
Security, Compliance & Privacy
We take privacy and security compliance seriously. We prioritize top-notch data security and safeguard your
intellectual property. Verda is ISO 27001 certified and adheres to GDPR requirements.
Verda is dedicated to upholding full European Union data sovereignty in all our data centers and cloud
services. We ensure that our infrastructure, operations, and contractual commitments rigorously adhere to the
core requirements for EU digital autonomy, regulatory compliance, and data protection.
Please see additional details about security controls and compliance in the Verda Trust Center and in the separate security and compliance FAQs.
All of our data centers are located in the EU (Finland) or the European Economic Area (Iceland); as such, they are subject to the GDPR. Please see our docs for additional details.
Our data centers adhere to the highest standards for physical and environmental security. These include 24/7 monitoring,
biometric access controls, and on-site security teams. All systems are protected by redundant power, cooling, and fire suppression systems to maintain the highest possible availability of our services.
Our first line of defense is Cloudflare, which filters incoming traffic before it reaches us. Once traffic reaches our services, additional protections filter it further so that only legitimate requests are allowed through. Several automated systems continuously monitor the behavior of our servers and networks.