Lenovo launches ultra-compact AI inferencing server for the edge

Lenovo has unveiled the ThinkEdge SE100, which it describes as the first compact, entry-level AI inferencing server designed for edge computing, aimed at making AI accessible and affordable for businesses of all sizes.

The ThinkEdge SE100 is 85% smaller than traditional servers, GPU-ready, and offers high-performance, low-latency AI capabilities for real-time tasks like video analytics and object detection. 

It targets diverse industries, including retail, manufacturing, healthcare, and energy, with applications such as inventory management, quality assurance, and process automation. The server is adaptable, scalable, and energy-efficient, consuming under 140W even in its fullest configuration, which Lenovo says reduces carbon emissions by up to 84%.

Lenovo’s Open Cloud Automation (LOC-A) simplifies deployment, cutting costs by up to 47% and saving up to 60% in resources and time. 

“Lenovo is committed to bringing AI-powered innovation to everyone with continued innovation that simplifies deployment and speeds the time to results,” says Scott Tease, Vice President of Lenovo Infrastructure Solutions Group, Products. “The Lenovo ThinkEdge SE100 is a high-performance, low-latency platform for inferencing. Its compact and cost-effective design is easily tailored to diverse business needs across a broad range of industries. This unique, purpose-driven system adapts to any environment, seamlessly scaling from a base device, to a GPU-optimized system that enables easy-to-deploy, low-cost inferencing at the edge.”

Enhanced security features, such as tamper protection and disk encryption, ensure data safety in real-world environments. 

The ThinkEdge SE100 is part of Lenovo’s broader hybrid AI portfolio, which includes sustainable and scalable solutions to bring AI to the edge. 

Lenovo continues to lead in edge computing, with over a million edge systems shipped globally and 13 consecutive quarters of growth in edge revenue.

This innovation reinforces the growing trend of AI-driven edge computing, where low-latency, high-performance inferencing can operate closer to data sources, reducing costs and accelerating insights in diverse, distributed environments.
