Vultr unveils GPU Stack and Container Registry for AI application lifecycle management

Cloud computing platform Vultr has launched the Vultr GPU Stack and Container Registry, enabling enterprises and digital startups to build, test, and operationalize AI models at scale.

The company says the GPU Stack supports instant provisioning of the full array of NVIDIA GPUs, while the new Vultr Container Registry makes pre-trained NVIDIA NGC AI models available globally for on-demand provisioning, development, training, tuning and inference.
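In practice, pulling a pre-trained model container from an NGC-backed registry onto a GPU instance typically looks like the sketch below. This is a minimal illustration, not Vultr-specific documentation: the image tag is an example of the NGC PyTorch container naming scheme, and the script assumes Docker and NVIDIA GPU support are present on the instance.

```shell
# Illustrative NGC container pull on a GPU cloud instance.
# The image tag is an example; check the NGC catalog for current tags.
IMAGE="nvcr.io/nvidia/pytorch:24.01-py3"

if command -v docker >/dev/null 2>&1; then
    # Fetch the pre-trained container image from the registry
    docker pull "$IMAGE"
    # Run it with GPU access and confirm the framework sees the device
    docker run --rm --gpus all "$IMAGE" \
        python -c "import torch; print(torch.cuda.is_available())"
else
    echo "docker not available; skipping pull of $IMAGE"
fi
```

The `--gpus all` flag exposes the host's NVIDIA GPUs to the container via the NVIDIA Container Toolkit, which is the usual mechanism for GPU-accelerated container workloads.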

Available across Vultr’s 32 cloud data center locations on six continents, the new Vultr GPU Stack and Container Registry aim to accelerate collaboration and the development and deployment of AI and machine learning (ML) models.

When asked about the launch, J.J. Kardwell, CEO of Vultr’s parent company, Constant, said: “Vultr is committed to enabling innovation ecosystems around the world – from Silicon Valley and Miami to São Paulo, Tel Aviv, Tokyo, Singapore, London, Amsterdam and beyond – providing instant access to high-performance cloud GPU and cloud computing resources to accelerate AI and cloud-native innovation.

“By working closely with NVIDIA and our growing ecosystem of technology partners, we are removing access barriers to the latest technologies, and offering enterprises the first composable, full-stack solution for end-to-end AI application lifecycle management. This enables data science, MLOps and engineering teams to build on a globally distributed basis, without worrying about security, latency, local compliance, or data sovereignty requirements.”

“The Vultr GPU Stack and Container Registry provide organizations with instant access to the entire library of pre-trained LLMs on the NVIDIA NGC catalog, so they can accelerate their AI initiatives and provision and scale NVIDIA cloud GPU instances from anywhere,” said Dave Salvator, director of accelerated computing products at NVIDIA.

The Vultr GPU Stack uses the full array of NVIDIA GPUs, pre-configured with the NVIDIA CUDA Toolkit, NVIDIA cuDNN and NVIDIA drivers, for immediate deployment.
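A quick way to confirm that a freshly provisioned instance really ships with these components is to query each layer's standard tooling. The sketch below assumes a typical Linux GPU instance; the commands (`nvidia-smi`, `nvcc`) are standard NVIDIA utilities, and the cuDNN header path is the common default but may differ by installation.

```shell
# Sanity-check the three pre-configured layers: driver, CUDA Toolkit, cuDNN.
set -u

# Driver: nvidia-smi ships with the NVIDIA driver
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
else
    echo "nvidia-smi not found: NVIDIA driver not installed"
fi

# CUDA Toolkit: nvcc is the CUDA compiler
if command -v nvcc >/dev/null 2>&1; then
    nvcc --version | grep release
else
    echo "nvcc not found: CUDA Toolkit not installed"
fi

# cuDNN: the version header records major/minor version
CUDNN_HDR="/usr/include/cudnn_version.h"   # common default; may vary
if [ -f "$CUDNN_HDR" ]; then
    grep -E "#define CUDNN_(MAJOR|MINOR)" "$CUDNN_HDR"
else
    echo "cuDNN headers not found at $CUDNN_HDR"
fi
```

Each check degrades gracefully, so the same script doubles as a diagnostic when any layer is missing.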

The company says that the solution’s goal is to remove the complexity of configuring GPUs, calibrating them to each application’s specific model requirements, and integrating them with the AI model accelerators of choice.
