Reference Architecture: Lenovo ThinkEdge for AI

Last update: 18 July 2025 | Version 1.0

Introduction

As organizations generate massive volumes of data at the network edge from smart sensors, cameras, and IoT devices, extracting real-time business value has become essential. To meet this need, companies are increasingly adopting edge computing and real-time event-streaming technologies. This shift away from centralized data centers has made AI inference at the edge a critical enabler of intelligent decision making.

Unlike traditional AI architectures that depend on cloud-based processing, edge AI allows data to be processed locally, at the point of generation. This localized approach reduces latency, strengthens data privacy and sovereignty, and addresses growing concerns around compliance with regulations such as GDPR, particularly in scenarios where transferring sensitive data to the cloud is restricted. Additionally, by minimizing the need to transmit large volumes of raw data over the backhaul to central servers, edge AI significantly reduces bandwidth consumption and associated operational costs. The result is faster, context-aware insights that support real-time actions and improved efficiency.

To support this evolution, Lenovo's ThinkEdge servers provide a robust, secure, efficient, and scalable infrastructure for edge AI deployments. Beyond performance and compliance, Lenovo ThinkEdge servers offer environmental and operational benefits with their compact size, low acoustic output, and energy-efficient design, ideal for space-constrained and noise-sensitive environments.

The reference architecture includes integrated data services, built-in protection mechanisms, seamless scalability, and hybrid connectivity for cloud integration. With this solution, organizations can confidently deploy and manage AI workloads at the edge, unlocking transformative value across industries such as retail, manufacturing, healthcare, and smart cities.

Target Audience

Solution Overview

Lenovo's Edge AI architecture is built on a comprehensive foundation that integrates infrastructure, compute, software, and services, shown in the diagram below.

Figure 1 - End-to-End Lenovo Architecture for AI at the Edge: This diagram illustrates the integration of Services, Models & Software, Data & Compute, and Infrastructure components in Lenovo's Edge AI architecture.

To streamline solution design and validation, this architecture document outlines nine standard edge server sizing models that address a broad range of use cases. Customers can use these models to identify the configuration that best aligns with their specific requirements. Each sizing model includes recommended workloads optimized for the available hardware; however, a clear understanding of the intended use case remains the most critical factor in selecting the appropriate configuration.

Solution Areas

The key advantage of AI inferencing and edge computing lies in their ability to process and analyze data locally with minimal latency. This enables faster decision-making and real-time responsiveness across industries. Below are leading use cases across five major sectors:

Retail: Self-Checkout and Shopper Analytics

Edge computing enables AI-driven self-checkout systems that reduce wait times and enhance the customer experience. These systems manage real-time authentication (linking a shopper to an account), product selection tracking via computer vision, and inventory monitoring using RFID and sensors. Edge servers are deployed at each checkout station to process visual and sensor data locally, while shared storage centrally manages AI models and transaction logs across the store. The visual data processing can also help manage checkout queues, abandoned shopping carts, and alerts for required cleanups.

Manufacturing: Quality Inspection and Predictive Maintenance (Industry 4.0)

In smart factories, AI and edge computing support predictive maintenance and real-time quality inspection. Edge servers collect data from a wide array of sensors and vision systems to detect defects, monitor equipment health, and manage production lines with minimal delay. Updated inference models can be centrally stored and pushed from a shared storage system. This setup helps reduce downtime, extends equipment life, and improves operational efficiency, supporting ISO-aligned quality management and automated decision-making without constant cloud reliance.

Smart Cities: Traffic Management and Public Safety

Urban infrastructure now benefits from real-time AI insights at the edge. Use cases include smart traffic lights, pedestrian safety monitoring, license plate recognition, and video analytics for crowd or threat detection. Edge servers placed at intersections or public buildings can locally process high-resolution video and sensor data. Shared storage provides a central repository for AI models, event history, and analytics dashboards, enabling city planners to respond proactively to evolving conditions.

Healthcare: Remote and In-Hospital Patient Monitoring

AI at the edge supports continuous health monitoring in both clinical and home settings. Devices tracking vitals, such as heart rate, respiratory patterns, glucose levels, and neurological signals, require immediate inferencing for timely interventions. Edge servers analyze incoming patient data in real-time, triggering alerts or adjusting treatment automatically. Shared storage systems maintain model libraries and anonymized health data, ensuring rapid access while meeting regulatory requirements for data security.

Financial Services: Smart ATMs and Fraud Detection

Banks are transforming customer experience and safety using AI-powered kiosks and surveillance at the edge. Interactive ATMs use facial recognition, behavioral analysis, and object detection to assess potential threats, identify fraud attempts, and provide enhanced services. Edge compute nodes integrated with kiosk cameras process data locally, ensuring real-time inferencing without depending on cloud latency. Shared storage supports centralized model updates and synchronization across multiple branches or kiosk networks.

Edge Location in a Hybrid AI Platform

Enabling Real-Time Intelligence at the Edge

Edge AI empowers real-time decision-making by reducing the latency, bandwidth usage, and privacy concerns often associated with data center or cloud-based inference. While centralized AI infrastructure remains essential for training large models and handling complex batch workloads, many real-time applications require immediate responses at the point of data generation. Lenovo ThinkEdge servers are purpose-built to meet these demands, enabling scalable AI deployments at the edge, from compact form factors for constrained spaces to high-performance systems supporting multimodal inference workloads. This section outlines the technologies and frameworks that make this possible, including vLLM, TensorRT-LLM, and OpenVINO, each optimized to deliver high-throughput, low-latency inference tailored to specific application needs.

vLLM: Unlocking Efficient LLM Serving at the Edge

vLLM is an open-source inference engine designed for efficient, high-throughput serving of Large Language Models (LLMs). While traditionally used in data center and cloud environments with powerful GPUs, vLLM is now emerging as a practical option for edge deployments.

Why vLLM at the Edge?

By enabling efficient LLM deployment on edge hardware, such as industrial PCs, intelligent robotics, or high-performance mobile devices, vLLM makes real-time AI more accessible and practical outside the data center. These capabilities enable smarter, real-time decision-making in environments such as smart cities, retail, industrial automation, and security operations.
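Much of vLLM's serving efficiency comes from PagedAttention, which manages the KV cache in fixed-size blocks allocated on demand, combined with continuous batching. The toy sketch below (plain Python, not vLLM code) illustrates why block-based allocation commits far less memory than reserving every request's worst-case sequence length up front; the 16-token block size matches vLLM's default, while the request lengths are invented for illustration.

```python
# Toy illustration of the block-based KV-cache idea behind PagedAttention:
# sequences receive fixed-size blocks on demand, so memory is committed per
# token actually generated rather than preallocated for the maximum length.

BLOCK_TOKENS = 16          # tokens per KV-cache block (vLLM's default is 16)
MAX_SEQ_LEN = 2048         # worst case a naive contiguous allocator reserves

def blocks_needed(num_tokens: int) -> int:
    """Blocks a sequence of `num_tokens` occupies under paged allocation."""
    return -(-num_tokens // BLOCK_TOKENS)   # ceiling division

def paged_vs_naive(seq_lengths):
    """Compare token slots committed by paged vs. contiguous preallocation."""
    paged = sum(blocks_needed(n) * BLOCK_TOKENS for n in seq_lengths)
    naive = len(seq_lengths) * MAX_SEQ_LEN
    return paged, naive

if __name__ == "__main__":
    # Ten concurrent requests with typical, mostly short outputs.
    lengths = [37, 120, 64, 300, 18, 90, 45, 512, 25, 70]
    paged, naive = paged_vs_naive(lengths)
    print(f"paged slots: {paged}, naive slots: {naive}")
```

The gap between the two numbers is memory that can instead hold more concurrent requests, which is exactly what makes high-throughput serving feasible on memory-constrained edge GPUs.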

TensorRT-LLM: High-Throughput LLM Inference on Edge GPUs

TensorRT-LLM, built on NVIDIA's TensorRT inference engine, is optimized for running large language models (LLMs) with extremely low latency and high efficiency. It is particularly suited for powerful edge servers like the ThinkEdge SE360 V2 and SE455 V3, equipped with NVIDIA L4 or L40S GPUs. TensorRT-LLM is ideal for demanding use cases such as:

Edge servers powered by TensorRT-LLM can support secure, high-performance LLM inference entirely on-premises, ensuring minimal latency and greater control over data privacy.

OpenVINO™: Accelerated Vision and AI Inference on Intel CPUs

OpenVINO™, Intel's open-source AI toolkit, delivers optimized inference performance on Intel CPU platforms, including those in the ThinkEdge SE100 and SE350 V2 servers. OpenVINO enables efficient execution of models such as YOLO, SSD, and ResNet for vision applications, even without GPUs. Its lightweight footprint and broad compatibility make it ideal for:

OpenVINO offers quantization and graph optimization capabilities that reduce compute overhead while maintaining accuracy, which is especially valuable in power- and cost-sensitive deployments.
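The quantization referred to above typically converts FP32 weights and activations to INT8. The pure-Python toy below shows only the basic symmetric-quantization arithmetic with a single per-tensor scale; OpenVINO's actual pipeline (e.g. via its NNCF tooling) is calibration-based and far more sophisticated.

```python
# Toy symmetric int8 quantization: map floats into [-128, 127] with one
# scale, then reconstruct. Rounding error is bounded by half a quantization
# step, which is why well-chosen scales preserve model accuracy.

def quantize_int8(values):
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # 1.0 if all zeros
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

if __name__ == "__main__":
    weights = [0.42, -1.97, 0.003, 1.5, -0.77]
    q, scale = quantize_int8(weights)
    restored = dequantize(q, scale)
    max_err = max(abs(a - b) for a, b in zip(weights, restored))
    print(q, f"max error {max_err:.5f} vs half-step {scale / 2:.5f}")
```

Storing 8-bit integers instead of 32-bit floats cuts weight memory by roughly 4x and lets CPUs use vectorized integer instructions, which is the source of the compute-overhead reduction mentioned above.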

Computer Vision: Foundational for Edge AI Insights

Computer vision is one of the foundational workloads for edge AI platforms. It enables machines to interpret and act on visual data captured from cameras, drones, or industrial sensors. When deployed at the edge, these capabilities can deliver real-time insights critical for safety, efficiency, and customer engagement.

Edge-based computer vision supports a wide range of applications:

Lenovo's ThinkEdge portfolio provides the ideal infrastructure for deploying vision AI workloads, ranging from compact, fanless SE100 models to GPU-powered SE360 V2 and SE455 V3 servers, supporting models accelerated by vLLM, TensorRT, and OpenVINO.
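At its simplest, edge vision is a loop that examines each incoming frame locally and forwards only events. The toy frame-differencing sketch below uses plain lists of pixel intensities as stand-in "frames"; a production pipeline would decode real camera streams and run a trained detector (for example, a YOLO-class model accelerated by OpenVINO or TensorRT) instead.

```python
# Toy motion detector via frame differencing: flag frames where enough
# pixels change versus the previous frame. Only the event indices would be
# sent upstream, not the frames themselves.

def motion_events(frames, threshold=20, min_changed=3):
    """Return indices of frames that differ enough from the previous one."""
    events = []
    for i in range(1, len(frames)):
        changed = sum(1 for a, b in zip(frames[i - 1], frames[i])
                      if abs(a - b) > threshold)
        if changed >= min_changed:
            events.append(i)
    return events

if __name__ == "__main__":
    still = [10] * 8
    moving = [10, 10, 90, 95, 80, 10, 10, 10]
    frames = [still, still, moving, still]
    print(motion_events(frames))   # frame 2 changes, frame 3 changes back
```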

Audio Environmental Intelligence (Sound AI)

While the human ear is remarkably adept at recognizing a wide range of sounds and their sources, it is still prone to error, distraction, or limitations in noisy environments. AI-powered auditory systems, however, can continuously listen, classify, and interpret audio signals—often with greater precision and consistency than humans.

This capability is known as Auditory AI or Sound AI, which involves training models to detect and respond to various sound categories, such as:

Training AI systems on a growing library of environmental sounds continuously enhances their accuracy and responsiveness. This expanding intelligence is the foundation of Sound AI's value, enabling real-time monitoring, anomaly detection, and proactive responses across industries such as security, healthcare, manufacturing, retail, and smart cities.
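Before any sound classifier runs, a continuously listening edge node usually frames the audio stream into windows and flags the ones worth analyzing. The sketch below shows only that event-framing step with a simple RMS-level heuristic; real Sound AI would pass flagged windows to a trained model (e.g. a spectrogram classifier), and all thresholds here are illustrative.

```python
import math

# Toy sound-event detector: flag windows whose RMS level jumps well above
# the running background level observed so far.

def loud_events(samples, window=4, factor=3.0, floor=1e-9):
    """Return start indices of windows much louder than the background."""
    events, levels = [], []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        rms = math.sqrt(sum(x * x for x in chunk) / window)
        background = sum(levels) / len(levels) if levels else rms
        if rms > factor * max(background, floor):
            events.append(start)
        levels.append(rms)
    return events

if __name__ == "__main__":
    quiet = [0.01, -0.01, 0.02, -0.02]
    bang = [0.9, -0.8, 0.7, -0.9]
    print(loud_events(quiet * 3 + bang + quiet * 2))  # the loud window only
```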

Asset Location – Real-Time Location System (RTLS)

Real-time tracking of critical assets with low-latency performance enhances both operational efficiency and security. RTLS enables organizations to locate, monitor, and manage assets across facilities, campuses, or field environments using a variety of access technologies, including:

Each of these technologies offers unique advantages depending on the use case, environment, and performance requirements.

RTLS is not a one-size-fits-all solution—it requires a thoughtful strategy that considers business objectives, physical layout, infrastructure constraints, and the trade-offs of each technology.

Lenovo can integrate and process data from these access technologies directly at the edge, enabling fast, secure, and scalable RTLS deployments tailored to industry-specific needs across manufacturing, healthcare, logistics, retail, and smart cities.
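For RF-based RTLS technologies (BLE or Wi-Fi beacons, for example), two calculations recur at the edge: estimating distance from received signal strength, and trilaterating a position from several anchor distances. The sketch below shows both in minimal form; the path-loss constants are illustrative and must be calibrated per site, and real deployments filter noisy RSSI heavily before solving.

```python
import math

# (1) Log-distance path-loss model: RSSI -> distance in metres.
# (2) 2D trilateration from three anchors: subtracting the circle
#     equations pairwise yields a 2x2 linear system in (x, y).

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, n=2.0):
    """tx_power_dbm: RSSI at 1 m; n: path-loss exponent (both calibrated)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))

def trilaterate(anchors, distances):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

if __name__ == "__main__":
    anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
    tag = (4.0, 3.0)
    dists = [math.dist(a, tag) for a in anchors]
    print(trilaterate(anchors, dists))  # recovers approximately (4.0, 3.0)
```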

Data Analytics at the Edge

Combining computer vision, real-time asset location (RTLS), and environmental audio data enables a powerful and context-rich IoT experience at the edge. This convergence of modalities drives intelligent, situational awareness and enables actionable insights across industries.

Lenovo Validated Designs (LVD) demonstrate how these integrated data streams can be analyzed and utilized to power smart solutions that deliver:

Lenovo Hybrid AI Vision Strategy

Lenovo's Hybrid AI strategy unifies edge, data center, and cloud capabilities to deliver secure, scalable, and context-aware intelligence. This approach prioritizes data privacy, GenAI integration, real-time profiling, and multimodal analysis, allowing customers to deploy AI workloads across the most appropriate environments. By enabling LLM-driven contextual understanding and semantic video analysis, while keeping sensitive data local or selectively processed in the cloud or data center, Lenovo's Hybrid AI empowers organizations to maintain control over their information without sacrificing performance or innovation.

Scalable Multi-Node, Multi-GPU Architecture

Our architecture supports multi-node, multi-GPU edge deployments, enabling distributed AI processing closer to the data source. This configuration can either:

By integrating technologies like vLLM and LLM-d, Lenovo solutions provide scalable, high-throughput inferencing with low latency—ideal for mission-critical applications requiring rapid decision-making and edge autonomy.

Architecture Overview

Figure 1 - Connectivity for ThinkEdge servers in an Edge Location: Depicts the secure connectivity of ThinkEdge servers within an edge environment, showing LAN/WAN connections, management tools, and internet access.

The traditional Edge architecture is deployed inside a protected traffic area of the organization and transports data securely. Using tools such as Lenovo Open Cloud Automation (LOC-A) and XClarity for management and REST API integration, the Edge nodes can be deployed and managed from a remote or central location. The Edge nodes keep data actionable, continuing to perform tasks even if back-haul connectivity is interrupted.

Figure 2 - Single Node Kubernetes Cluster in an Edge Location: Illustrates a single-node Kubernetes cluster setup at the edge, detailing application and infrastructure components, including operating systems and management tools.

The Edge stack is designed as a fully optimized, end-to-end solution supporting a wide range of operating systems and platforms, including Red Hat, SUSE, Canonical, Windows, and AKS, with workloads running in Kubernetes-managed containers. This flexible architecture ensures compatibility with existing IT investments while enabling rapid adoption of emerging technologies.

To support evolving workloads and future scalability, the infrastructure can be deployed as a single-node cluster or expanded to thousands of nodes across distributed edge locations. This flexibility allows applications to be deployed and orchestrated in a containerized environment, ensuring services are delivered precisely when and where they are needed.

With support for automated provisioning, bare metal deployment, and multi-cloud orchestration, the platform is built for future-proofing and operational efficiency. Applications can seamlessly run on ThinkEdge systems, including SE455 V3, SE360 V2, SE350 V2, and SE100, integrated into a Kubernetes-managed cluster with secure remote access and network automation.

This solution empowers organizations to manage edge computing workloads with centralized visibility, simplified operations, and consistent performance from edge to cloud.

Figure 3 - High level overview of edge platform: Shows the interaction between Sensors and Devices, Edge Computer, Data Center or Cloud, and Human Interface in an edge AI platform.

Edge computing plays a critical role in a unified AI Hybrid Strategy, seamlessly integrating with data center and cloud environments to deliver a consistent and complementary platform experience.

By deploying sensors, cameras, microphones, and other far-edge devices, data can be captured and processed locally at the Edge, close to where it is generated. This reduces latency, minimizes bandwidth usage by avoiding unnecessary backhaul, and enables real-time inferencing, rendering, Stable Diffusion image generation, text-to-speech, and retrieval-augmented generation (RAG) directly at the edge nodes.
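To make the bandwidth point concrete, the back-of-the-envelope sketch below compares shipping raw camera streams over the backhaul with shipping only event metadata. Every constant here is an illustrative assumption, not a measurement.

```python
# Back-of-the-envelope backhaul comparison: raw video vs. edge-side events.
# All figures are assumptions chosen for illustration.

CAMERA_MBPS = 4.0            # assumed compressed 1080p stream, Mbit/s
CAMERAS = 20
EVENT_BYTES = 512            # assumed metadata payload per detected event
EVENTS_PER_MIN = 30          # assumed events across all cameras

def daily_gb(mbps):
    """Convert a sustained Mbit/s rate to GB transferred per day."""
    return mbps * 86400 / 8 / 1000

raw_gb = daily_gb(CAMERA_MBPS * CAMERAS)
event_gb = EVENT_BYTES * EVENTS_PER_MIN * 60 * 24 / 1e9

if __name__ == "__main__":
    print(f"raw backhaul: {raw_gb:,.1f} GB/day")
    print(f"events only:  {event_gb:.3f} GB/day")
```

Under these assumptions the raw streams amount to hundreds of gigabytes per day, while the event stream is a few tens of megabytes, which is the core economic argument for inferencing at the edge.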

Modern edge computing has evolved to match the growing demands of AI workloads, enabling intelligent decision-making at the point of action while remaining tightly coupled with centralized infrastructure for model training, data storage, and visualization.

This distributed architecture not only enhances performance and responsiveness but also ensures the infrastructure is scalable, adaptable, and future-ready, supporting a wide range of AI use cases from the edge to the cloud.

Lenovo ThinkEdge Server Portfolio

Figure 4 - Lenovo ThinkEdge server wall-mounted in an industrial manufacturing environment: A visual representation of a Lenovo ThinkEdge server installed in an industrial setting.

Lenovo ThinkEdge servers combine cutting-edge hardware, software, and services to address today's most pressing customer challenges while offering a modular, future-ready design to meet evolving business needs. Built on industry-standard x86 technologies and enhanced with Lenovo's unique innovations, these servers deliver exceptional flexibility, scalability, and performance at the edge. Designed for durability and adaptability, ThinkEdge servers can be deployed in a wide range of constrained environments, including space-limited locations, thanks to their compact form factor, low acoustic output, and support for wall-mount installation. This makes them ideal for retail, industrial, and remote edge use cases where space, power, and environmental constraints are critical.

Key advantages of deploying Lenovo ThinkEdge servers include:

In the AI area, Lenovo is taking a practical approach to helping enterprises understand and adopt the benefits of ML and AI for their workloads. Lenovo customers can explore and evaluate Lenovo AI offerings in Lenovo AI Innovation Centers to fully understand the value for their particular use case. To improve time to value, this customer-centric approach gives customers proof-of-concept and solution-development platforms that are ready to use and optimized for AI.

Lenovo ThinkEdge SE100 Edge Server

Edge computing allows data from Internet of Things (IoT) devices to be analyzed at the edge of the network before being sent to a data center or the cloud. The Lenovo ThinkEdge SE100 is a purpose-built server that is one-third the width and significantly shorter than a traditional server, making it ideal for deployment in tight spaces. It can be mounted on a wall or desktop, or installed in a rack. The ThinkEdge SE100 is optimized for artificial intelligence, placing increased processing power, storage, and networking closer to where data is generated.

Table 1 - Lenovo ThinkEdge SE100
Attribute | Value
Form Factor | Base node: Height: 53mm, Width: 142mm, Depth: 278mm, 2.1L; with Expansion Kit: Height: 53mm, Width: 214mm, Depth: 278mm, 3.1L
RAID Support | N/A
Mounting Options | Single-node mounting options for desktop, VESA, DIN, wall, or ceiling; 1U2N or 1U3N rackmount
Power | Dual-redundant external 140W power supplies for standalone; 1U2N or 1U3N in 1U enclosure with 300W power adapter
Network Interfaces (Wired) | 2x 1GbE; 1GbE RJ45 management port
Front I/O | 2x USB 3.2 Gen2 (Type-A); 1x USB 3.2 Gen2 (Type-C, with XCC display); 1x USB 3.2 Gen2 (Type-C, with CPU display); 2x HDMI 2.0; 1x RJ45 for RS232 serial COM
Rear I/O | 1x Type-C (power), with locking screw; 1x Type-C (power, BMC USB 2.0), supporting power redundancy, with locking screw; 2x USB 3.2 Gen2 Type-A; 2x 1GbE + 1x 1GbE management
Systems Management | Lenovo XClarity Administrator
Environmental | Extended operating temperature of 5 to 45°C; 15G shock and 0.15 Grms vibration (IEC 60068); IP50 dust protection; noise level: 35 dBA (base node); MERV5 filter with expansion kit
Security | Security 2.0 with ThinkShield Key Vault or XCC management; optional Key Vault SED encrypted storage for boot and data drives; Lenovo Trusted Supplier Program, Secure Boot, and Smart USB Protection; NIST SP800-193 compliance using hardware Root of Trust and Platform Firmware Resilience; TPM 2.0; intrusion tamper protection; optional Kensington keyed lock compatible chassis; Type-C locking screw
Operating Systems | Microsoft Windows 11 IoT Enterprise LTSC, Microsoft Windows 11 Enterprise, Ubuntu 24.04, RHEL 10.0*. Visit lenovopress.lenovo.com/osig for details.
Limited Warranty | 3-year customer replaceable unit and onsite service, next business day 9x5; optional service upgrades

Lenovo ThinkEdge SE350 V2 Edge Server

The ThinkEdge SE350 V2 server puts increased processing power, storage, and networking closer to where data is generated, allowing actions resulting from the analysis of that data to take place more quickly. The SE350 V2 is targeted toward hybrid cloud at the edge, virtualization, NFV, web hosting, and management server workloads. Note that the SE350 V2 cannot be configured with a GPU.

Table 2 - Lenovo ThinkEdge SE350 V2
Attribute | Value
Form Factor | 1U height, half-width edge server; Height: 41.7mm, Width: 209mm, Depth: 384mm
RAID Support | RAID 0, 1 for boot drives; RAID 0, 1, 5, 10 for data drives
Mounting Options | Single-node mounting options for desktop, VESA, DIN, wall, or ceiling; 1U2N enclosure for two nodes side by side with internal power supply (Depth: 476.1mm, Height: 1U); 1U2N enclosure for two nodes side by side with 4x external power supplies (Depth: 771.1mm, Height: 1U); 2U2N short-depth enclosure for two nodes side by side with 4x power supplies (Depth: 476.1mm, Height: 2U); locking bezels and dust filter options
Power | Dual-redundant external power supplies, 300W 115V/230V AC; dual DC supply: 12-48V DC; single internal AC power supply: 500W
Network Interfaces (Wired) | 4x 10GbE/25GbE SFP+/SFP28 + 2x 2.5GbE TSN + 1GbE RJ45 management port; or 4x 1GbE + 2x 2.5GbE TSN + 1GbE RJ45 management port
I/O | Front: 2x USB 3.2 Gen 1 (Type-A), 1x BMC USB 2.0 (Type-C), 1x Display USB-C (USB 2.0 + DisplayPort video / USB 3.2 Gen 1 auto switch), 1x BMC serial RJ45 management port. Rear: 1x RJ45 console serial port (can be disabled)
Systems Management | Lenovo XClarity Administrator with mobile option
Environmental | Extended operating temperature of 0-55°C; up to 40G shock and 1.91 Grms vibration (IEC 60068); optional dust filter
Security | ThinkShield Key Vault secure management with motion and intrusion tamper protection; optional Key Vault SED encrypted storage for boot and data drives; Lenovo Trusted Supplier Program, Secure Boot, and Smart USB Protection; optional Kensington keyed lock compatible chassis; cable locking bezel
Operating Systems | Microsoft Windows Server, SLES, Ubuntu, RHEL, VMware ESXi, vSAN. Visit lenovopress.lenovo.com/osig for details.
Limited Warranty | 3-year customer replaceable unit and onsite service, next business day 9x5; optional service upgrades

Lenovo ThinkEdge SE360 V2 Edge Server

The ThinkEdge SE360 V2 is a versatile solution supporting a wide range of workloads, including Augmented Reality, Edge AI & MRP, CDN, NFV, Gaming, and Video Streaming. All of this compute power fits inside a 2U-height, half-width server, making the SE360 V2 a great option for GPU workloads that must be processed close to the source, saving network bandwidth and reducing latency.

Key characteristics:

Table 3 - Lenovo ThinkEdge SE360 V2
Attribute | Value
Form Factor | 2U height, half-width edge server; Height: 84.5mm, Width: 212mm, Depth: 317.5mm
RAID Support | RAID 0, 1 for boot drives; RAID 0, 1, 5, 10 for data drives
Mounting Options | Single-node mounting options for desktop, VESA, DIN, wall, or ceiling; 2U2N short-depth enclosure for two nodes side by side (Depth: 466mm, Height: 2U); locking bezels and dust filter options
Power | Dual-redundant external power supplies, 300W 115V/230V AC; dual DC supply: 12-48V DC; single internal AC power supply: 500W
Network Interfaces (Wired) | 4x 10GbE/25GbE SFP+/SFP28 + 2x 2.5GbE TSN + 1GbE RJ45 management port; or 4x 1GbE + 2x 2.5GbE TSN + 1GbE RJ45 management port
Network Interfaces (Wireless) | 4x wireless SMA connectors for WLAN; 1x wireless SMA connector for Bluetooth; WLAN 128/192-bit encrypted WPA2, WPA3, 802.11 a/b/g/n/ac/ax; geotracking
I/O | Front: 2x USB 3.2 Gen 1 (Type-A), 1x BMC USB 2.0 (Type-C), 1x Display USB-C (USB 2.0 + DisplayPort video / USB 3.2 Gen 1 auto switch), 1x BMC serial RJ45 management port. Rear: 1x RJ45 console serial port; USB and console ports can be disabled
Systems Management | Lenovo XClarity Administrator with mobile option
Environmental | Extended operating temperature of 0-55°C (-20 to 65°C with certain configurations); up to 40G shock and 1.91 Grms vibration (IEC 60068); optional dust filter; IP3X; marine certification
Security | ThinkShield Key Vault secure management with motion and intrusion tamper protection; optional Key Vault SED encrypted storage for boot and data drives; Lenovo Trusted Supplier Program, Secure Boot, and Smart USB Protection; Lenovo WLAN Security, Lenovo Bluetooth Security; optional Kensington keyed lock compatible chassis; Nationz TPM 2.0 for customers in China; cable locking bezel; geotracking
Operating Systems | Microsoft Windows Server, SLES, Ubuntu, RHEL, VMware ESXi, vSAN.
Limited Warranty | 3-year customer replaceable unit and onsite service, next business day 9x5; optional service upgrades

Lenovo ThinkEdge SE455 V3 Edge Server

The ThinkEdge SE455 V3 is Lenovo's most powerful edge server, pairing an AMD EPYC CPU with up to two NVIDIA L40S GPUs. As a 2U server with a short-depth chassis, it can be mounted in a 2-post or 4-post rack. This server is well suited for transformative AI systems, especially those that need video processing and AI inference at the edge.

Key characteristics:

Table 4 – Lenovo ThinkEdge SE455 V3
Attribute | Value
Form Factor | 2U rack server, 440mm depth
Network Interface | 1/10/25/100 Gb LOM adapter in OCP 3.0 slot; up to 6x 1/10/25/100/200 Gb PCIe network adapters
Ports, Buttons | Front: 1x power button (with green LED), 1x system locator (with blue LED), 1x NMI button, 1x USB-C, 2x USB 3.0, 1x USB 2.0 for XCC2, 1x RJ45 for XCC2, 1x diagnostic handset, COM port via PCI slot
LED | Security (green), Attention (yellow), XCC2 Ethernet (link and activity)
HBA/RAID Support | HW RAID with/without cache or SAS HBAs: 4350-8i, 5350-8i, 440-8i, 540-8i, 940-8i with supercap; 2- and 4-port QLogic QLE277x 32Gb Fibre Channel HBAs
Power | Dual-redundant AC power supplies (1100W Platinum/Titanium, 1800W Platinum) or dual-redundant -48V DC 1100W power supplies
Security | Security 2.0 with ThinkShield or XCC management, security bezel, tamper detection, rack security bracket, encrypted SSD, System Lockdown, Silicon Root of Trust, TPM 2.0, System Guard, AMD Infinity Guard, NIST SP800-193 compliance using hardware Root of Trust and Platform Firmware Resilience
Easy to Deploy | ThinkShield or XCC managed lockdown mode and SED; Lenovo Open Cloud Automation (LOC-A) and XClarity Administrator/Pro
Environmental | 5°C to 55°C standard support; NEBS 3: -5°C to 55°C (<96 hours); 40/45 dBA acoustic modes; MERV2 air filter with clog-detecting airflow sensor
Systems Management | Lenovo XClarity Controller (AST2600), DC-SCM 2.0
OS Support | Tier 1/1.5: Microsoft Windows Server 2019/2022, Red Hat 8.x/9.x, SLES 15, Ubuntu 20.x/22.x, VMware 7.x/8.x. Tier 2: XenServer 8.2. Tier 3: Alma Linux 8.x/9.x, Rocky Linux 8.x/9.x. Future support: Windows 11 IoT Enterprise LTSC
Limited Warranty | 3-year customer replaceable unit and onsite service, next business day 9x5; optional service upgrades

Workload-Optimized AI Hardware Guide

This Edge AI Server Sizing Guide provides recommended configurations for Lenovo ThinkEdge platforms, optimized for a range of AI workloads. Each model size from X-Small to X-Large is tailored to specific performance needs, balancing compute power, memory, storage, and GPU acceleration. These reference configurations are designed to support key use cases such as computer vision, natural language processing (NLP), self-checkout systems, and LLM-based inference. By matching workloads with the appropriate hardware profile, customers can achieve efficient, scalable AI deployments at the edge.

Table 5 - Edge AI Server Sizing Guide
Server | Size | GPU | CPU | RAM | Storage | AI Task | Framework | Model type examples
SE100 | X-Small | None | 14-core Intel Core Ultra 5 225H | 32GB (2x 16GB) | 2x 480GB | Image identification, natural language processing, data analytics | OpenVINO | BERT, YOLO
SE100 | Small | 1x RTX A1000 | 8-core Intel Xeon D-2733NT | 128GB (2x 64GB) | — | Video and image analysis | OpenVINO | Specialized machine learning models
SE100 | Small | 1x RTX 2000E Ada | — | — | — | — | — | —
SE360 V2 | Small | 1x NVIDIA A2 | 8-core Intel Xeon D-2733NT | 32GB (2x 16GB) | 2x 960GB | Text-to-speech, image and video AI | vLLM, TensorRT, PyTorch, TensorFlow | EfficientDet, BERT-Large, Llama 3.2 3B
SE360 V2 | Medium | 1x NVIDIA L4 | 16-core Intel Xeon D-2775TE | — | 2x 1.92TB | LLM/VLM inference, Stable Diffusion at 512x512 on a per-GPU basis (see Test Results) | vLLM, TensorRT-LLM | LLaVA-NeXT, LLaVA-OneVision, MiniCPM-V
SE360 V2 | Large | 2x NVIDIA L4 | — | — | — | — | — | —
SE455 V3 | Large | 3x NVIDIA L4 | 32-core AMD EPYC 8324P | 64GB (2x 32GB) | — | Multi-tenancy, fine-tuning, RAG, Stable Diffusion at 1024x1024 | vLLM, TensorRT-LLM | LLaVA-NeXT, LLaVA-OneVision, MiniCPM-V
SE455 V3 | X-Large | 2x NVIDIA L40S | 32-core AMD EPYC 8324PN | 512GB (6x 96) | 2x 3.84TB | — | — | —

Note: All SE360 V2 models support an optional Qualcomm Cloud AI 100 or Intel Flex 140 accelerator. The SE455 V3 supports only the Qualcomm Cloud AI 100.
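As the guide stresses, the intended use case should drive the final configuration choice. The hypothetical helper below sketches one way the sizing profiles could be encoded for a first-pass shortlist; the profile data is a loose, illustrative mirror of Table 5, not an official selection tool.

```python
# Illustrative sizing-model shortlist helper. Profiles loosely mirror the
# Edge AI Server Sizing Guide: (server, size, GPU count, example workloads).

PROFILES = [
    ("SE100",    "X-Small", 0, {"image identification", "nlp", "data analytics"}),
    ("SE100",    "Small",   1, {"video analysis", "image analysis"}),
    ("SE360 V2", "Small",   1, {"text-to-speech", "image and video ai"}),
    ("SE360 V2", "Medium",  1, {"llm inference", "vlm inference", "stable diffusion"}),
    ("SE360 V2", "Large",   2, {"llm inference", "vlm inference", "stable diffusion"}),
    ("SE455 V3", "Large",   3, {"multi-tenancy", "fine-tuning", "rag"}),
    ("SE455 V3", "X-Large", 2, {"multi-tenancy", "fine-tuning", "rag"}),
]

def candidates(workload: str, min_gpus: int = 0):
    """Return (server, size) pairs whose profile lists the workload."""
    w = workload.lower()
    return [(srv, size) for srv, size, gpus, tasks in PROFILES
            if gpus >= min_gpus and any(w in t for t in tasks)]

if __name__ == "__main__":
    print(candidates("LLM inference", min_gpus=2))
```

A shortlist like this is only a starting point; validating the chosen configuration against the actual model sizes, concurrency, and latency targets remains essential.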

Test Overview

Vision Language Models (VLMs) are a class of generative AI models designed to interpret and respond to visual inputs such as images and videos through natural language. In smart city and public safety applications, VLMs can be used for tasks like situational awareness, anomaly detection, and identifying persons of interest—all in real time.

To assess the performance of these models in edge environments, the Lenovo team has developed a purpose-built benchmark that evaluates the throughput of VLMs running on Lenovo ThinkEdge servers. This benchmark provides valuable insights into how efficiently these models can process and analyze video frames at the edge, enabling scalable, low-latency AI deployments for critical use cases.

The benchmark evaluates the time taken for the models to generate a textual response from a 6-second video processed at 4 frames per second (24 frames total). Each model is prompted a total of 10 times, with the video and prompt for each run drawn randomly from 5 sample videos and 5 sample prompts. The 5 prompts are: "Describe the characters/objects in this video", "Is anything dangerous occurring in this video?", "Answer yes or no: is there an animal in this video?", "Why is this video funny?", "What is shown in this video? Be concise." These prompts were chosen to cover a variety of scenarios in which VLMs could be used, ensuring that contextual reasoning is a factor in the benchmark.
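The procedure above can be sketched as a small harness. This is not the Lenovo benchmark itself: `infer` is a stub standing in for a real VLM call, and the placeholder video file names are illustrative.

```python
import random
import time

# Sketch of the benchmark procedure: 10 trials per model, each with a video
# and prompt drawn at random from 5 samples of each; per-run latency is
# converted to frames-per-second throughput (24 frames per video).

PROMPTS = [
    "Describe the characters/objects in this video",
    "Is anything dangerous occurring in this video?",
    "Answer yes or no: is there an animal in this video?",
    "Why is this video funny?",
    "What is shown in this video? Be concise.",
]
VIDEOS = [f"video_{i}.mp4" for i in range(5)]    # placeholder file names
FRAMES_PER_VIDEO = 24                            # 6 s at 4 fps

def run_benchmark(infer, trials=10, seed=0):
    rng = random.Random(seed)                    # seeded for repeatability
    fps = []
    for _ in range(trials):
        video, prompt = rng.choice(VIDEOS), rng.choice(PROMPTS)
        t0 = time.perf_counter()
        infer(video, prompt)
        elapsed = time.perf_counter() - t0
        fps.append(FRAMES_PER_VIDEO / max(elapsed, 1e-9))
    return sum(fps) / len(fps)

if __name__ == "__main__":
    stub = lambda video, prompt: "a short answer"
    print(f"mean throughput: {run_benchmark(stub):.1f} frames/s")
```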

Tests were conducted on the Lenovo ThinkEdge SE360 V2 equipped with 2 NVIDIA L4 GPUs, in its out-of-the-box configuration with no optimizations or tuning. Two inference platforms were used: vLLM and TensorRT-LLM. Since model support varies, three models were tested on vLLM (LLaVA-NeXT, MiniCPM-V, and LLaVA-OneVision), with LLaVA-OneVision serving as the common model for comparison across both platforms.

The LLaVA models are chosen to fit within a single L4 GPU, enabling dual-model concurrency, while MiniCPM-V requires both GPUs. Frame rate and video length were set to achieve a consistent input size (24 frames), but results are model-dependent due to limitations like context length, token output (capped at 10 for this test), and model architecture.

Both platforms were installed using default configurations via pip, with the sampling temperature set to 0 to ensure deterministic, repeatable outputs. The results primarily reflect each platform's inference efficiency under consistent conditions. However, selecting the right platform and model should also consider factors such as model compatibility, accuracy requirements, and the specific context of the deployment. This benchmark helps identify which platform and model combinations offer the best balance of performance, efficiency, and model fidelity for edge inferencing use cases in bandwidth-constrained, latency-sensitive environments.

Test Results

The Lenovo SE360 V2 with 2 L4 GPUs achieved impressive video throughput in frames per second, and the Vision Language Models demonstrated contextual understanding of the actions occurring in the videos. Consider the following example.

The image below shows a single frame from a longer video of a vehicle incident, with the model's response to the question "Is anything dangerous occurring in this video?". This is just one of the scenarios tested here and is used for demonstrative purposes. The model's textual response is superimposed onto the image.

Figure 5 - AI video analytics detecting and explaining a vehicle incident in real time: Shows a video frame with an AI model's analysis of a vehicle incident, including textual output and frame rate.

The model accurately identifies a vehicle incident in the video and assesses its potential danger. It recognizes the sequence of events and tracks objects in relation to one another. This showcases the model's capability to contextualize visual information and respond to natural language queries with precision. The green and red text on the screen originates from the source video feed and is not relevant to the analysis.

For benchmark results on CPU-only AI workloads with OpenVINO, please see Lenovo Press Article 2249: Implementing Generative AI Using Intel Xeon 6 CPUs.

To evaluate TensorRT-LLM's performance relative to vLLM, we compared results using the LLaVA-OneVision model, which is supported on both platforms. TensorRT-LLM consistently demonstrated slightly higher throughput for the same model, indicating greater efficiency in frame processing. While this advantage may not extend to all models, similar trends have been reported in external benchmarking efforts. For instance, the LLaVA-NeXT model shows exceptional performance on vLLM, highlighting that both platforms can deliver impressive results depending on the use case.

It is important to note that model compatibility varies between TensorRT-LLM and vLLM. While TensorRT-LLM may deliver higher throughput, vLLM may support a broader range of models or offer features better suited for certain applications. For inference scenarios where model fidelity and accuracy are more critical than speed, models such as MiniCPM-V may be more appropriate. These larger models often produce higher-quality outputs but require increased computational resources, as observed in our test results.

Ultimately, selecting the optimal inference platform and model depends on the characteristics of the video input and the desired outcomes. Benchmarking across a representative dataset is recommended to identify the best fit for the specific use case and deployment constraints.

For full test data, please contact your sales representative.

Life Cycle Management Software

Deployment orchestration and management of the ThinkEdge servers can be performed at the container level by integrating any validated solution into the environment. Lenovo XClarity One and LOC-A provide unified dashboards to manage IT infrastructure and workloads, with advanced automation capabilities at both the hardware and Containers-as-a-Service (CaaS) layers. These tools also support open APIs, enabling integration with northbound systems to federate operations, streamline maintenance, and enhance end-to-end visibility.
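To illustrate the northbound-integration pattern, the sketch below builds an authenticated inventory request such as an external operations system might issue against a management portal's REST API. The endpoint path, header scheme, and URL are hypothetical placeholders, not the documented XClarity One API; consult the product's API reference for the real routes.

```python
import urllib.request

def build_inventory_request(portal_url, api_token):
    """Build an authenticated GET request that a northbound system
    might use to pull device inventory from a management portal.
    The /api/v1/devices route is a hypothetical example, not the
    actual XClarity One endpoint."""
    return urllib.request.Request(
        url=f"{portal_url}/api/v1/devices",  # hypothetical route
        headers={"Authorization": f"Bearer {api_token}",
                 "Accept": "application/json"},
        method="GET",
    )

# Example: construct (but do not send) a request against a placeholder portal
req = build_inventory_request("https://portal.example.com", "TOKEN")
```

In practice, the response payload would be federated into the northbound system's own inventory and ticketing workflows.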

Lenovo XClarity One

Lenovo XClarity One is a management-as-a-service offering for hybrid-cloud management of on-premises data-center assets. XClarity One makes use of local management hubs across multiple sites to collect inventory, incidents, and service data, and to provision resources. XClarity One provides a modern, intuitive interface that centralizes IT orchestration, deployment, automation, and support from edge to cloud, with enhanced visibility into infrastructure performance, usage metering, and analytics.

Figure 6 - Lenovo XClarity One Dashboard: A screenshot of the Lenovo XClarity One dashboard, displaying managed devices, firmware updates, and collection summaries.

Lenovo XClarity One offers the following key features:

Lenovo XClarity One manages devices through lightweight software components known as Management Hubs. These hubs operate as virtual appliances deployed on-premises, typically within customer datacenters across one or more locations. This architecture enables low-latency communication, rapid response times, and enhanced data privacy by keeping sensitive operations local.

The supported hub, Lenovo XClarity Management Hub 2.0, facilitates secure provisioning and management of Lenovo devices across distributed environments.

The following figure illustrates the XClarity One infrastructure architecture, highlighting the logical placement of Management Hubs within the environment.

Figure 7 - XClarity One infrastructure: An architectural diagram illustrating the placement of Management Hubs within the XClarity One ecosystem, connecting to the portal and managed devices.

XClarity One Overview

The XClarity One Dashboard is a cloud-based interface designed for fast, efficient resource discovery and task execution. It provides a centralized, intuitive view of your infrastructure, enabling administrators to quickly locate devices and act.

XClarity One enhances security by requiring only a single secure connection between the cloud portal and your on-premises or private cloud-managed devices. This reduces exposure and simplifies security management while maintaining robust connectivity.

Key Capabilities of Lenovo XClarity One

With its simplified, unified dashboard, Lenovo XClarity One enables a broad set of administrative functions, including:

Operational Benefits

Firmware management

Firmware management is streamlined using firmware compliance policies, which can be assigned to supported managed endpoints to ensure that firmware remains up to date and aligned with compliance standards. When validated firmware levels differ from pre-defined policies, custom firmware compliance policies can be created or edited to reflect approved configurations. In addition to policy-based updates, firmware versions that are newer than the currently installed versions can be applied and activated directly, either on individual managed endpoints or across groups, without requiring compliance policies. This flexibility supports both structured policy-based management and on-demand updates as needed.
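The policy-based compliance check described above can be sketched as a simple drift detector. The data shapes here (endpoint and component names, version strings) are illustrative assumptions, not the XClarity One schema.

```python
def find_noncompliant(installed, policy):
    """Return endpoints whose installed firmware differs from the
    version the assigned compliance policy expects.
    installed: {endpoint: {component: version}}
    policy:    {component: expected_version}
    Illustrative data model only; not the XClarity One schema."""
    issues = {}
    for endpoint, components in installed.items():
        drift = {c: (v, policy[c]) for c, v in components.items()
                 if c in policy and v != policy[c]}
        if drift:
            issues[endpoint] = drift  # (installed, expected) per component
    return issues

# Example with hypothetical component names and versions
policy = {"xcc": "3.10", "uefi": "2.41"}
installed = {
    "se360-node1": {"xcc": "3.10", "uefi": "2.41"},  # compliant
    "se360-node2": {"xcc": "3.08", "uefi": "2.41"},  # xcc out of date
}
noncompliant = find_noncompliant(installed, policy)
```

Once drift is identified, the update can be applied per endpoint or per group, matching the policy-based and on-demand paths described above.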

Security

Lenovo XClarity One provides robust security features to protect infrastructure and support compliance:

Lenovo Open Cloud Automation (LOC-A)

LOC-A is integrated into Lenovo XClarity One, a unified, cloud-based platform for managing and automating infrastructure from edge to cloud. LOC-A enables streamlined Edge IT automation, authentication, and provisioning within XClarity One, making it easier to deploy and manage large, distributed compute environments. Key benefits include:

Summary

This reference architecture presents Lenovo's approach to deploying and scaling AI inference workloads at the edge using ThinkEdge servers. It details validated server configurations across small, medium, and large sizing models, tailored for specific edge AI workloads in industries such as retail, manufacturing, healthcare, smart cities, and financial services.

Key highlights include:

Through this flexible and scalable architecture, Lenovo empowers organizations to deliver real-time AI insights where data is generated, driving faster decisions, enhanced customer experiences, and operational efficiency at the edge.

Appendix: Lenovo Bill of Materials

X-Small SE100 No GPU

Part Number | Description | Quantity
7DGRCTO1WW | Node : ThinkEdge SE100 - 3 Year Warranty | 1
C31D | ThinkEdge SE100 Chassis | 1
C30L | ThinkEdge SE100 Planar with Intel Core Ultra 5 225H, 14C, 28W, 1.7GHz | 1
C39J | ThinkEdge 16GB TruDDR5 6400MHz CSODIMM | 2
C8V3 | ThinkSystem M.2 7450 PRO 480GB Read Intensive NVMe PCIe 4.0 x4 NHS SSD | 1
BYF8 | ThinkSystem M.2 ER3 480GB Read Intensive SATA 6Gb NHS SSD | 1
C30D | ThinkEdge SE100 Expansion Connection Cover | 1
C39R | ThinkEdge 140W 230V/115V External Power Supply | 2
A4VP | 1.0m, 10A/100-250V, C13 to C14 Jumper Cord | 2
C31J | ThinkEdge SE100 Bottom Rubber Feet | 1
C31A | ThinkEdge SE100 Fan Module | 1
C31C | ThinkEdge SE100 Port Dust Cover Kit | 1
BRPJ | XCC Platinum | 1
C8U9 | Top-Cover Thermal Gap Pad Kit | 1
C319 | ThinkEdge SE100 Node Cosmetic Cover | 1
C308 | ThinkEdge SE100 M.2 Holder | 1
C8UC | Front I/O Panel | 1
C8UA | Bottom-Cover Thermal Gap Pad Kit | 1
7S0XCTO5WW | XClarity Controller Platin-FOD | 1
SBCV | Lenovo XClarity XCC2 Platinum Upgrade (FOD) | 1

X-Small SE100

Part Number | Description | Quantity
7DGRCTO1WW | SE100 X-Small : ThinkEdge SE100 - 3 Year Warranty | 1
C31D | ThinkEdge SE100 Chassis | 1
C30L | ThinkEdge SE100 Planar with Intel Ultra 5-225H, 14C, 28W, 1.7GHz | 1
C39J | ThinkEdge 16GB TruDDR5 6400MHz CSODIMM | 2
C8V3 | ThinkSystem M.2 7450 PRO 480GB Read Intensive NVMe PCIe 4.0 x4 NHS SSD | 1
BS2P | ThinkSystem M.2 7450 PRO 480GB Read Intensive NVMe PCIe 4.0 x4 NHS SSD | 1
C39N | ThinkSystem NVIDIA RTX A1000 8GB PCIe Gen4 Active GPU | 1
C39R | ThinkEdge 140W 230V/115V External Power Supply | 1
A4VP | 1.0m, 10A/100-250V, C13 to C14 Jumper Cord | 1
C31J | ThinkEdge SE100 Bottom Rubber Feet | 1
C31A | ThinkEdge SE100 Fan Module | 1
C31C | ThinkEdge SE100 Port Dust Cover Kit | 1
BRPJ | XCC Platinum | 1
C8U9 | Top-Cover Thermal Gap Pad Kit | 1
C319 | ThinkEdge SE100 Node Cosmetic Cover | 1
C308 | ThinkEdge SE100 M.2 Holder | 1
C8UC | Front I/O Panel | 1
C8UB | Expansion Kit Rubber Feet | 1
C8UA | Bottom-Cover Thermal Gap Pad Kit | 1
C30F | SE100 Expansion Kit for Active Cooling GPU | 1
7S0XCTO5WW | XClarity Controller Platin-FOD | 1
SBCV | Lenovo XClarity XCC2 Platinum Upgrade (FOD) | 1

Small SE100

Part Number | Description | Quantity
7DGRCTO1WW | Node : ThinkEdge SE100 - 3 Year Warranty | 1
C31D | ThinkEdge SE100 Chassis | 1
C30L | ThinkEdge SE100 Planar with Intel Core Ultra 5 225H, 14C, 28W, 1.7GHz | 1
C39J | ThinkEdge 16GB TruDDR5 6400MHz CSODIMM | 2
C8V3 | ThinkSystem M.2 7450 PRO 480GB Read Intensive NVMe PCIe 4.0 x4 NHS SSD | 1
BYF8 | ThinkSystem M.2 ER3 480GB Read Intensive SATA 6Gb NHS SSD | 1
C39P | ThinkSystem NVIDIA RTX 2000E Ada 16GB PCIe Active GPU | 1
C39R | ThinkEdge 140W 230V/115V External Power Supply | 2
A4VP | 1.0m, 10A/100-250V, C13 to C14 Jumper Cord | 2
C31J | ThinkEdge SE100 Bottom Rubber Feet | 1
C31A | ThinkEdge SE100 Fan Module | 1
C31C | ThinkEdge SE100 Port Dust Cover Kit | 1
BRPJ | XCC Platinum | 1
C8U9 | Top-Cover Thermal Gap Pad Kit | 1
C319 | ThinkEdge SE100 Node Cosmetic Cover | 1
C308 | ThinkEdge SE100 M.2 Holder | 1
C8UC | Front I/O Panel | 1
C8UB | Expansion Kit Rubber Feet | 1
C8UA | Bottom-Cover Thermal Gap Pad Kit | 1
C30F | SE100 Expansion Kit for Active Cooling GPU | 1
7S0XCTO5WW | XClarity Controller Platin-FOD | 1
SBCV | Lenovo XClarity XCC2 Platinum Upgrade (FOD) | 1

Small SE350 V2 No GPU

Part Number | Description | Quantity
7DA9CTO1WW | Node : ThinkEdge SE350 V2 - 3 Year Warranty | 1
BS3S | ThinkEdge SE350 V2 Chassis | 1
BS3T | ThinkEdge SE350 V2 4x 10/25Gb, 2x 2.5Gb(TSN) I/O Module | 1
BS41 | ThinkEdge SE350 V2/SE360 V2 Planar with Intel Xeon D-2733NT 8C 80W 2.1 GHz | 1
B966 | ThinkSystem 64GB TruDDR4 3200 MHz (2Rx4 1.2V) RDIMM | 2
BQ1V | ThinkSystem 7mm 5400 PRO 480GB Read Intensive SATA 6Gb HS SSD | 1
BS48 | ThinkEdge SE350 V2 7mm SSD Module | 1
BS46 | ThinkSystem M.2 7450 PRO 480GB Read Intensive NVMe PCIe 4.0 x4 NHS SSD (with Heatsink) | 1
BUGP | ThinkEdge SE350 V2 AC Power Input Board | 1
BWK7 | ThinkEdge SE350 V2 500W 230V/115V Non-Hot Swap Power Supply | 1
6201 | 1.5m, 10A/100-250V, C13 to C14 Jumper Cord | 1
BS4E | ThinkEdge 130mm USB-C to VGA Display Cable | 1
B6Q3 | ThinkEdge Rubber Feet | 1
BRPJ | XCC Platinum | 1
B8L4 | ThinkSystem 7mm Tray Filler | 3
BS4V | ThinkEdge SE350 V2 Front IO Bezel (25G/10G) Assembly | 1
BS4M | ThinkEdge SE350 V2 Operational Panel Module | 1
BS4L | ThinkEdge SE350 V2 Bridge Board | 1
7S0XCTO5WW | XClarity Controller Platin-FOD | 1
SBCV | Lenovo XClarity XCC2 Platinum Upgrade (FOD) | 1
5641PX3 | XClarity Pro, Per Endpoint w/3 Yr SW S&S | 1
1340 | Lenovo XClarity Pro, Per Managed Endpoint w/3 Yr SW S&S | 1
7S0YCTO1WW | Lenovo Open Cloud Automation w/Support | 1
SD2S | Lenovo Open Cloud Automation - nZTP with Device Management platform onboarding for 1-socket ThinkEdge server with 1 year support. Price per node | 1
7Q01CTS4WW | SERVER PREMIER 24X7 4HR RESP | 1
7Q01CTSAWW | SERVER KEEP YOUR DRIVE ADD-ON | 1

Small SE360 V2

Part Number | Description | Quantity
7DAMCTO1WW | Node : ThinkEdge SE360 V2 - 3 Year Warranty | 1
BS56 | ThinkEdge SE360 V2 Chassis | 1
BS58 | ThinkEdge SE360 V2 4x 1Gb, 2x 2.5Gb(TSN) I/O Module | 1
BS41 | ThinkEdge SE350 V2/SE360 V2 Planar with Intel Xeon D-2733NT 8C 80W 2.1 GHz | 1
B963 | ThinkSystem 16GB TruDDR4 3200 MHz (2Rx8 1.2V) RDIMM | 2
BSW6 | ThinkSystem M.2 7450 PRO 480GB Read Intensive NVMe PCIe 4.0 x4 NHS SSD (with Heatsink) | 1
BS5M | ThinkEdge SE360 V2 M.2 Cabled Adapter Module | 1
BS46 | ThinkSystem M.2 7450 PRO 480GB Read Intensive NVMe PCIe 4.0 x4 NHS SSD (with Heatsink) | 1
BQZT | ThinkSystem NVIDIA A2 16GB PCIe Gen4 Passive GPU w/o CEC | 1
BS5J | ThinkEdge SE360 V2 Riser Assembly (PCIe Riser + 7mm Backplane) | 1
BUGU | ThinkEdge SE360 V2 AC Power Input Board | 1
BW8U | ThinkEdge SE360 V2 500W 230V/115V Non-Hot Swap Power Supply | 1
6313 | 2.8m, 10A/120V, C13 to NEMA 5-15P (US) Line Cord | 1
BS5W | ThinkEdge SE360 V2 Fan Assembly (Front to Rear) | 1
BS4E | ThinkEdge 130mm USB-C to VGA Display Cable | 1
B6Q3 | ThinkEdge Rubber Feet | 1
BRPJ | XCC Platinum | 1
BUGS | ThinkEdge SE350 V2/SE360 V2 7mm Tray Filler | 2
BS69 | ThinkEdge SE360 V2 Top Cover | 1
BTJK | ThinkEdge SE360 V2 Air Baffle for Processor | 1
BS66 | ThinkEdge SE360 V2 IO Cover Assembly for 1GbE I/O Module | 1
BS64 | ThinkEdge SE360 V2 Rear Operational Panel Module | 1
BS63 | ThinkEdge SE360 V2 Operational Panel Module | 1
BUGV | ThinkEdge SE360 V2 AC Power Module Board Air Baffle | 1
7S0XCTO5WW | XClarity Controller Platin-FOD | 1
SBCV | Lenovo XClarity XCC2 Platinum Upgrade (FOD) | 1
5641PX3 | XClarity Pro, Per Endpoint w/3 Yr SW S&S | 1
1340 | Lenovo XClarity Pro, Per Managed Endpoint w/3 Yr SW S&S | 1
7S0YCTO1WW | Lenovo Open Cloud Automation w/Support | 1
SD2S | Lenovo Open Cloud Automation - nZTP with Device Management platform onboarding for 1-socket ThinkEdge server with 1 year support. Price per node | 1
7Q01CTS4WW | SERVER PREMIER 24X7 4HR RESP | 1
7Q01CTSAWW | SERVER KEEP YOUR DRIVE ADD-ON | 1

Medium SE360 V2

Part Number | Description | Quantity
7DAMCTO1WW | Node : ThinkEdge SE360 V2 - 3 Year Warranty with Controlled GPU | 1
BS56 | ThinkEdge SE360 V2 Chassis | 1

Large SE360 V2

Part Number | Description | Quantity
BS58 | ThinkEdge SE360 V2 4x 1Gb, 2x 2.5Gb(TSN) I/O Module | 1
BS42 | ThinkEdge SE350 V2/SE360 V2 Planar with Intel Xeon D-2775TE 16C 100W 2.0 GHz | 1
B963 | ThinkSystem 16GB TruDDR4 3200 MHz (2Rx8 1.2V) RDIMM | 2
BZEF | ThinkSystem M.2 N-30m2 960GB Read Intensive NVMe PCIe 3.0 x4 NHS SSD (Industrial) | 1
BS5M | ThinkEdge SE360 V2 M.2 Cabled Adapter Module | 1
BYLN | ThinkSystem M.2 N-30m2 1.92TB Read Intensive NVMe PCIe 3.0 x4 NHS SSD (Industrial) | 1
BS2C | ThinkSystem NVIDIA L4 24GB PCIe Gen4 Passive GPU | 2
BS5E | ThinkEdge SE360 V2 Riser Assembly (PCIe Riser + PCIe Riser) w/ Geotracking | 1
BUGU | ThinkEdge SE360 V2 AC Power Input Board | 1
BW8U | ThinkEdge SE360 V2 500W 230V/115V Non-Hot Swap Power Supply | 1
6313 | 2.8m, 10A/120V, C13 to NEMA 5-15P (US) Line Cord | 1
BS5W | ThinkEdge SE360 V2 Fan Assembly (Front to Rear) | 1
BS4E | ThinkEdge 130mm USB-C to VGA Display Cable | 1
B6Q3 | ThinkEdge Rubber Feet | 1
BRPJ | XCC Platinum | 1
BUGS | ThinkEdge SE350 V2/SE360 V2 7mm Tray Filler | 2
BS69 | ThinkEdge SE360 V2 Top Cover | 1
BTJK | ThinkEdge SE360 V2 Air Baffle for Processor | 1
BS66 | ThinkEdge SE360 V2 IO Cover Assembly for 1GbE I/O Module | 1
BS64 | ThinkEdge SE360 V2 Rear Operational Panel Module | 1
BS63 | ThinkEdge SE360 V2 Operational Panel Module | 1
BUGV | ThinkEdge SE360 V2 AC Power Module Board Air Baffle | 1
7S0XCTO5WW | XClarity Controller Platin-FOD | 1
SBCV | Lenovo XClarity XCC2 Platinum Upgrade (FOD) | 1
5641PX3 | XClarity Pro, Per Endpoint w/3 Yr SW S&S | 1
1340 | Lenovo XClarity Pro, Per Managed Endpoint w/3 Yr SW S&S | 1
7S0YCTO1WW | Lenovo Open Cloud Automation w/Support | 1
SD2S | Lenovo Open Cloud Automation - nZTP with Device Management platform onboarding for 1-socket ThinkEdge server with 1 year support. Price per node | 1
7Q01CTS4WW | SERVER PREMIER 24X7 4HR RESP | 1
7Q01CTSAWW | SERVER KEEP YOUR DRIVE ADD-ON | 1

Large SE455 V3

Part Number | Description | Quantity
7DBYCTOAWW | SE455 V3 Large : ThinkEdge SE455 V3 - 3Yr Warranty with Controlled GPU | 1
BVTK | ThinkEdge SE455 V3 Chassis | 1
BW2T | ThinkEdge SE455 V3 AMD EPYC 8324P 32C 180W 2.65GHz Processor | 1
BQ39 | ThinkSystem 32GB TruDDR5 4800MHz (1Rx4) 10x4 RDIMM-A | 2
C18N | ThinkSystem 2.5" U.2 VA 1.92TB Read Intensive NVMe PCIe 4.0 x4 HS SSD | 1
BVUU | ThinkEdge SE455 V3 2.5" NVMe Backplane | 1
BVUY | ThinkEdge SE455 V3 M.2 SATA/x4 NVMe Adapter with Carrier | 1
BXMG | ThinkSystem M.2 PM9A3 1.92TB Read Intensive NVMe PCIe 4.0 x4 NHS SSD | 1
BS2C | ThinkSystem NVIDIA L4 24GB PCIe Gen4 Passive GPU | 3
BVUR | ThinkEdge SE455 V3 Riser1 | 1
BMH8 | ThinkEdge 1100W 230V/115V Platinum Hot-Swap Gen2 Power Supply | 2
BMH2 | ThinkEdge 600mm Ball Bearing Rail Kit | 1
BS4E | ThinkEdge 130mm USB-C to VGA Display Cable | 1
BVV6 | ThinkEdge SE455 V3 Intrusion Switch | 1
BVTX | ThinkEdge SE455 V3 Standard EIA Bracket | 1
BVTL | ThinkEdge SE455 V3 Motherboard | 1
BRPJ | XCC Platinum | 1
BW38 | ThinkEdge SE455 V3 Supercap Holder | 1
BW37 | ThinkEdge SE455 V3 M.2 Air Baffle Extension | 1
BW36 | ThinkEdge SE455 V3 M.2 Air Baffle | 1
BVVY | ThinkEdge SE455 V3 2.5" Drive Bay Filler | 3
BVUT | ThinkEdge SE455 V3 Riser2 Filler | 1
BY8T | ThinkEdge SE455 V3 OCP Filler | 1
BVTP | ThinkEdge SE455 V3 Fan | 5
BVV1 | ThinkEdge SE455 V3 Fan Board | 1
BVTM | ThinkEdge SE455 V3 Root of Trust | 1
BVUK | ThinkEdge SE455 V3 Power Distribution Board | 1
BVVF | ThinkEdge SE455 V3 Riser Side Support | 3
BW3A | ThinkEdge SE455 V3 CPU Air Baffle for 2U Heatsink | 1
7S0XCTO5WW | XClarity Controller Platin-FOD | 1

X-Large SE455 V3

Part Number | Description | Quantity
7DBYCTOAWW | SE455 V3 XLarge : ThinkEdge SE455 V3 - 3Yr Warranty with Controlled GPU | 1
BVTK | ThinkEdge SE455 V3 Chassis | 1
BY8X | ThinkEdge SE455 V3 AMD EPYC 8324PN 32C 130W 2.05GHz Processor | 1
BUVV | ThinkSystem 96GB TruDDR5 4800MHz (2Rx4) 10x4 RDIMM-A | 6
C18M | ThinkSystem 2.5" U.2 VA 3.84TB Read Intensive NVMe PCIe 4.0 x4 HS SSD | 1
BVUU | ThinkEdge SE455 V3 2.5" NVMe Backplane | 1
BVUY | ThinkEdge SE455 V3 M.2 SATA/x4 NVMe Adapter with Carrier | 1
BXMF | ThinkSystem M.2 PM9A3 3.84TB Read Intensive NVMe PCIe 4.0 x4 NHS SSD | 1
BYFH | ThinkSystem NVIDIA L40S 48GB PCIe Gen4 Passive GPU | 2
BVUR | ThinkEdge SE455 V3 Riser1 | 1
BVUS | ThinkEdge SE455 V3 Riser2 | 1
BMH9 | ThinkEdge 1800W 230V Platinum Hot-Swap Gen2 Power Supply | 2
BMH2 | ThinkEdge 600mm Ball Bearing Rail Kit | 1
BS4E | ThinkEdge 130mm USB-C to VGA Display Cable | 1
BVV6 | ThinkEdge SE455 V3 Intrusion Switch | 1
BVTX | ThinkEdge SE455 V3 Standard EIA Bracket | 1
BVTL | ThinkEdge SE455 V3 Motherboard | 1
BRPJ | XCC Platinum | 1
BW39 | ThinkEdge SE455 V3 CPU Air Baffle for 1U Heatsink | 1
BW38 | ThinkEdge SE455 V3 Supercap Holder | 1
BW37 | ThinkEdge SE455 V3 M.2 Air Baffle Extension | 1
BW36 | ThinkEdge SE455 V3 M.2 Air Baffle | 1
BVVY | ThinkEdge SE455 V3 2.5" Drive Bay Filler | 3
BY8T | ThinkEdge SE455 V3 OCP Filler | 1
BVTP | ThinkEdge SE455 V3 Fan | 5
BVV1 | ThinkEdge SE455 V3 Fan Board | 1
BVTM | ThinkEdge SE455 V3 Root of Trust | 1
BVUK | ThinkEdge SE455 V3 Power Distribution Board | 1
BVVJ | ThinkEdge SE455 V3 Riser2 Rear Support | 1
BVVH | ThinkEdge SE455 V3 Riser1 Rear Support | 1
BVVF | ThinkEdge SE455 V3 Riser Side Support | 2
7S0XCTO5WW | XClarity Controller Platin-FOD | 1

Resources

Change history

Version 1.0 | June 27, 2025 | Initial release with 9 Edge Configurations and sizing models

Trademarks and special notices

© Copyright Lenovo 2025.

Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web at https://www.lenovo.com/us/en/legal/copytrade/.

The following terms are trademarks of Lenovo in the United States, other countries, or both: Lenovo®, ThinkAgile®, ThinkEdge®, ThinkShield®, ThinkSystem®, XClarity®

The following terms are trademarks of other companies:

AMD and AMD EPYC™ are trademarks of Advanced Micro Devices, Inc.

Intel®, Intel Core®, OpenVINO®, and Xeon® are trademarks of Intel Corporation or its subsidiaries.

Linux® is the trademark of Linus Torvalds in the U.S. and other countries.

Microsoft®, Windows Server®, and Windows® are trademarks of Microsoft Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

All customer examples described are presented as illustrations of how those customers have used Lenovo products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-Lenovo products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by Lenovo. Sources for non-Lenovo list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. Lenovo has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-Lenovo products. Questions on the capability of non-Lenovo products should be addressed to the supplier of those products.

All statements regarding Lenovo future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only. Contact your local Lenovo office or Lenovo authorized reseller for the full text of the specific Statement of Direction. Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in Lenovo product announcements. The information is presented here to communicate Lenovo's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard Lenovo benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Photographs shown are of engineering prototypes. Changes may be incorporated in production models.

Any references in this information to non-Lenovo websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this Lenovo product and use of those websites is at your own risk.


Related Documents

Running Edge AI Workloads with Lenovo ThinkAgile HX360 V2 Edge Servers
Explore how Lenovo ThinkAgile HX360 V2 Edge servers, powered by Nutanix Cloud Platform, accelerate AI inferencing deployments at the edge. This document details the server's capabilities, validated designs, and performance testing for AI workloads.
Lenovo ThinkAgile HX Series User Guide
This user guide provides a comprehensive overview of the Lenovo ThinkAgile HX series, detailing its architecture, various models, deployment considerations, and troubleshooting steps. It covers hardware specifications, software updates, and support resources for optimizing hyper-converged cluster solutions.
Smarter AI for Higher Education: The Complete AI Portfolio
Explore how Lenovo and AMD are transforming Higher Education with AI. Discover AI's impact on learning, research, and campus operations, featuring ThinkShield and AMD PRO Security, hybrid AI models, and innovative solutions for educational institutions.
Lenovo AI Solutions for Financial Services: Secure, Hybrid, and Innovative
Explore how Lenovo and Intel deliver comprehensive, secure, and hybrid AI solutions for the Financial Services Industry, driving innovation, efficiency, and compliance.
Lenovo ThinkEdge SE350 V2 Server Product Guide
Explore the Lenovo ThinkEdge SE350 V2, a purpose-built edge server designed for tight spaces and demanding workloads. Learn about its key features, scalability, performance, manageability, and security.
Lenovo ThinkSystem SR665 V3 Server Product Guide
Explore the Lenovo ThinkSystem SR665 V3 Server, a 2U, 2-socket server featuring AMD EPYC 9004 processors for high performance, scalability, and flexibility in enterprise workloads.
Lenovo ThinkSystem SR860 V4 Server Product Guide
Explore the Lenovo ThinkSystem SR860 V4 Server, a 4U rack server designed for demanding data services workloads. Discover its key features, scalability, performance, and advanced capabilities powered by Intel Xeon 6700-Series processors and DDR5 memory.
Lenovo ThinkSystem SR675 V3 Server: Product Guide for AI & HPC
Explore the Lenovo ThinkSystem SR675 V3, a versatile 3U rack server designed for demanding AI, HPC, and graphical workloads. Featuring AMD EPYC processors, NVIDIA GPUs, NVLink, and advanced cooling, it offers high performance and flexibility for data-intensive computing.