StackPath upgrades virtual machine product with NVIDIA edge computing instances

Edge computing platform StackPath has added NVIDIA GPU-accelerated instances to its Virtual Machine (VM) and container product options.

The new instances use NVIDIA A2 Tensor Core and NVIDIA A16 GPUs to deliver the compute power required for workloads such as deep learning, graphics processing, and other massively parallel computations.

Currently, the instances are available in StackPath's Dallas, San Jose, and Frankfurt locations, and will be rolled out across the StackPath platform throughout 2024, the company says.

Speaking about the announcement, Tom Reyes, chief product officer at StackPath, says: “Our GPU-Accelerated Instances are exactly what new and next-generation workloads—like AI inference, computer vision, and natural language processing—really need to succeed.

“These are real-time applications. So, as much as they need high computational power, they also need exceptionally low latency. The physical location of our platform minimizes the number of hops in and out of our instances, so the advantages provided by a GPU aren’t undermined by geographic distance.”

StackPath edge compute instances are provisioned on demand through the StackPath Customer Portal or API. The company says the instances are billed by the hour and by volume of data transferred. Additional options include virtual private clouds, built-in L3-L4 DDoS protection, storage, image capture and deployment, private IP addresses, and more.
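As a rough illustration of on-demand provisioning through a REST API like the one described above, the sketch below assembles a request body for a GPU-accelerated instance. The endpoint shape, field names, and flavor identifier are illustrative assumptions for this article, not StackPath's documented API; consult the StackPath API reference for the real schema.

```python
import json

# Hypothetical request builder for an on-demand GPU instance.
# All field names and values below are assumptions, not the real API schema.
def build_instance_request(stack_id: str, name: str, flavor: str, location: str) -> dict:
    """Assemble the JSON body for an on-demand instance request."""
    return {
        "stack_id": stack_id,   # the stack (project) the instance belongs to
        "name": name,           # instance name chosen by the user
        "flavor": flavor,       # e.g. 1x NVIDIA A2, 12 vCPUs, 48 GiB RAM
        "location": location,   # e.g. "dallas", one of the announced sites
    }

payload = build_instance_request(
    stack_id="my-stack",
    name="gpu-vm-1",
    flavor="a2-12vcpu-48gib",
    location="dallas",
)
print(json.dumps(payload, indent=2))
```

In practice this body would be POSTed to the provisioning endpoint with an API token, and billing would then accrue hourly plus data transfer, as described above.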

StackPath edge compute VM GPU-Accelerated Instances are available in three configurations:

- 1 NVIDIA A2/A16 GPU, 12 vCPUs, 48 GiB RAM, 25 GiB root disk
- 2 NVIDIA A2/A16 GPUs, 24 vCPUs, 96 GiB RAM, 25 GiB root disk
- 4 NVIDIA A2/A16 GPUs, 48 vCPUs, 192 GiB RAM, 25 GiB root disk

Additionally, the company’s edge compute container GPU-Accelerated Instances are available in several configurations, including:

- 1 NVIDIA A2/A16 GPU, 12 vCPUs, 48 GiB RAM, 40 GiB root disk
- 2 NVIDIA A2/A16 GPUs, 24 vCPUs, 96 GiB RAM, 40 GiB root disk
- 4 NVIDIA A2/A16 GPUs, 48 vCPUs, 192 GiB RAM, 40 GiB root disk
