Arista 7280R3 Switch Architecture: A Day in the Life of a Packet
The Arista 7280R3 Series is a family of purpose-built, high-performance, fixed-configuration Universal Leaf switches and routers featuring a deep-buffer virtual output queue (VoQ) architecture combined with a rich feature set. Evolving from systems first introduced in 2010, the 7280R3 series supports interfaces up to 400G and is based on Broadcom Jericho2 silicon.
The 7280R3 series addresses the demands of modern networks that require lossless forwarding. The switches are ideal for Universal Cloud Networks, offering deep buffers, wire-speed L2 and L3 forwarding, and advanced features for network virtualization, monitoring, resiliency, and architectural flexibility. The deep packet buffers and support for highly scalable IPv4 and IPv6 tables enable open networking solutions for Cloud WAN aggregation, Service Provider NFV, Internet Peering, Overlay Networks, and more.
This white paper provides an overview of the architecture of the Arista 7280R3 Universal Leaf platform.
Arista 7280R3: Overview
The Arista 7280R3 Universal Leaf platform represents the evolution of the 7280R family. Key differentiators include the proven VoQ and deep buffer architecture, combined with the rich EOS feature set and a programmable pipeline. Key features include:
- High-density 100G and 400G switching for future-proof designs.
- Segment Routing and EVPN with flexible underlay and overlay topologies.
- Ultra-deep buffers for lossless performance in demanding environments.
- Direct connection of 25GbE, 40GbE, and 50GbE storage systems.
- Flexible support for 100G and 400G with various optics and cables.
- Comprehensive L2 and L3 feature set for open, multi-vendor networks.
- Scalable forwarding table resources for deployment flexibility.
- Accelerated sFlow and IPFIX for network forensics.
- Streaming network state for advanced analytics with Arista CloudVision®.
- Network-wide virtualization for next-generation cloud bursting with wire-speed VXLAN routing.
- Hardware-assisted PTP for accurate timing solutions.
- Unique monitoring and provisioning features: LANZ, DANZ, AEM, IEEE 1588 PTP, ZTP, VM Tracer, VXLAN, and eAPI.
- Programmable packet processor for advanced features and flexible profiles.
- NEBS compliance and DC power supplies for service provider environments.
- MACsec encryption for secure data center interconnects.
The 7280R3 platform is designed for lossless behavior in environments requiring large-scale routing, VXLAN routing, DANZ, and enhanced LANZ. It scales to 96x100G in 2RU and 24x400G in 1RU, offering industry-leading performance and density.
System Specifications
7280R3 Series | 7280PR3-24 & 7280PR3K-24 | 7280DR3-24 & 7280DR3K-24 | 7280CR3-32P4 & 7280CR3K-32P4 | 7280CR3-32D4 & 7280CR3K-32D4 | 7280CR3-96 & 7280CR3K-96 | 7280SR3-48YC8 & 7280SR3K-48YC8 |
Max 400GbE Ports* | 24 | 24 | 4 | 4 | - | - |
Max 100GbE Ports* | 96 | 96 | 48 | 48 | 96 | 8 |
Max 50GbE Ports* | 192 | 192 | 96 | 96 | 192 | 16 |
Max 40GbE Ports* | 24 | 24 | 36 | 36 | 36 | 8 |
Max 25GbE Ports* | 192 | 192 | 96 | 96 | 192 | 80 |
Max 10GbE Ports* | 192 | 192 | 96 | 96 | 192 | 80 |
Max Total Interfaces § | 192 | 192 | 96 | 96 | 192 | 80 |
L2/3 Throughput | 9.6 Tbps | 9.6 Tbps | 4.8 Tbps | 4.8 Tbps | 9.6 Tbps | 2.0 Tbps |
L2/3 PPS | 4 Bpps | 4 Bpps | 2 Bpps | 2 Bpps | 4 Bpps | 1 Bpps |
Latency | From 3.8 µs | From 3.8 µs | From 3.8 µs | From 3.8 µs | From 3.8 µs | From 3.8 µs |
Total System Buffer | 16 GB | 16 GB | 8 GB | 8 GB | 16 GB | 4 GB |
Rack Units | 1 | 1 | 1 | 1 | 2 | 1 |
Airflow | F/R | F/R | F/R and R/F | F/R and R/F | F/R | F/R and R/F |
* Maximum port numbers are uni-dimensional, may require the use of break-outs and are subject to transceiver/cable capabilities.
§ Where supported by EOS, each system supports a maximum number of interfaces. Certain configurations may impose restrictions on which physical ports can be used.
Arista 7280R - Router Table Scale, Features and Functionality
Arista's FlexRoute™ Engine enables more than a million IPv4 and IPv6 route prefixes in hardware. Extensions to FlexRoute increase this capability to over 2 million routes. The table below shows key scale metrics:
| 7280R Series (Jericho) | 7280R2 Series (Jericho+) | 7280R2K Series (Jericho+) | 7280R3 Series (Jericho2)¹ | 7280R3K Series (Jericho2)¹ |
MAC Table Size | 768 K | 768 K | 768 K | 736 K | 448 K |
IPv4 Host Routes | 768 K | 768 K | 768 K | 896 K | 1.4 M |
IPv4 Route Prefixes | 1 M+ | Up to 1.3 M | 2 M+ | 704 K | 1.2 M |
IPv6 Host Routes | 768 K | 768 K | 768 K | 224 K | 368 K |
IPv6 Route Prefixes | 928 K | Up to 1 M | 1.4 M+ | 235 K | 411 K |
Multicast Routes | Up to 768 K | Up to 768 K | Up to 768 K | 448 K | 448 K |
LAG Groups | 1152 | 1152 | 1152 | 1152 | 1152 |
LAG Members | 64 Ports | 64 Ports | 64 Ports | 128 Ports | 128 Ports |
ECMP Fanout | 128-way | 128-way | 128-way | 256-way | 256-way |
¹ Represents a balanced profile for the partitioning of the MDB.
Arista 7280R3 - Cloud Scale And Features
The 7280R3 family provides Internet routing scale through Arista's FlexRoute™ Engine, extending forwarding table capacity. The 7280R3 series introduces the Modular Database (MDB) for flexible allocation of forwarding resources. The MDB provides a common database of forwarding and lookup resources, allocated using forwarding profiles for different networking use-cases. The L3 optimized profile expands routing and next-hop tables for large-scale networks, while the balanced profile suits leaf and spine data center applications.
The fungible nature of MDB resources ensures operators have the flexibility to standardize on a common platform across various roles, streamlining deployments, simplifying sparing, and consolidating testing.
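The profile-driven carving of a shared resource pool described above can be modeled in a few lines. This is an illustrative sketch only: the table names, profile names, and fractions below are hypothetical and do not reflect Arista's actual MDB geometry.

```python
# Illustrative model of profile-based partitioning of a shared
# forwarding-resource pool. Table names, profiles, and fractions are
# hypothetical, not the actual MDB geometry.

MDB_PROFILES = {
    # fraction of the shared pool granted to each table family
    "balanced":     {"l2_mac": 0.35, "l3_routes": 0.35, "acl_tcam": 0.30},
    "l3-optimized": {"l2_mac": 0.15, "l3_routes": 0.60, "acl_tcam": 0.25},
}

def allocate(pool_entries: int, profile: str) -> dict:
    """Split a common pool of lookup entries according to a named profile."""
    shares = MDB_PROFILES[profile]
    return {table: round(pool_entries * frac) for table, frac in shares.items()}

balanced = allocate(1_000_000, "balanced")
l3_optimized = allocate(1_000_000, "l3-optimized")
```

The key property is that the same pool serves different roles: switching a profile re-divides one set of entries rather than provisioning fixed, separate tables.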
Airflow
The 7280R3 Series offers a choice of airflow direction (front-to-rear or rear-to-front). Power and fan modules are color-coded to indicate airflow direction for proper rack installation.
Arista 7280R3 Universal Leaf System Architecture
All 7280R3 Series switches share a common system design built around a high-performance x86 CPU for the control plane. The CPU is connected to system memory, internal flash, SSD, boot flash, power supplies, fans, management I/O, and peripherals. The packet processors, running all data plane forwarding, are connected via PCIe to the CPU.
Key high availability features include:
- 1+1 hot-swappable power supplies and hot-swap fans.
- Color-coded PSUs and fan modules indicating airflow direction, with Platinum-level power efficiency.
- Live software patching.
- Self-healing software with Stateful Fault Repair (SFR).
- Smart System Upgrade (SSU).
Each packet processor is a System-on-Chip (SoC) that provides ingress and egress packet forwarding pipeline stages. Jericho2 supports network interface speeds from 10G to 400G, with up to 4.8 Tbps of total network capacity. It features 96 x 50G PAM4 SerDes interfaces that can be combined for flexible interface speeds.
Gearboxes are employed in many 7280R3 models to increase front panel interface density and maximize capabilities by converting 50G PAM4 SerDes lanes to more lanes at lower speeds and different encoding.
Jericho2C, a member of the Jericho2 family, provides 2.4 Tbps of front panel bandwidth and is designed for lower capacity systems with a focus on 1G to 100G network connectivity.
Arista 7280CR3-32P4 Switch Architecture
The 7280CR3-32P4 switch utilizes a single Jericho2 chip. A total of 8 gearboxes support 32 ports of 100G and a diverse range of optics. Four logical interfaces are assigned to each odd+even pair of QSFP ports, allowing combinations such as 1x100G, 1x40G, 4x25G, or 4x10G.
Each of the four 400G ports supports up to 8 unique logical interfaces and can operate as 1x400G, 2x200G, 4x100G, 8x50G, or 8x25G interfaces, subject to cable or transceiver capabilities.
Port Identification
40G and 100G QSFP transceivers share the same physical form factor, so ports are marked on the front panel to ensure correct transceiver installation. 100G QSFP ports are highlighted with a purple line and support QSFP+ (40G) or QSFP100 (100G). 400G QSFP-DD ports are highlighted in orange and support 40G, 100G, and 400G transceivers. OSFP 400G ports are similarly marked.
Arista 7280R3 Universal Leaf Platform Layout
Arista 7280R3 series switches utilize high-performance packet processors. The packet forwarding architecture is consistent across systems, with front-panel ports connected to each packet processor.
7280CR3-32P4
The 7280CR3-32P4 system uses a single Jericho2 chip supporting up to 96 interfaces in breakout mode. It supports 32 ports of 100G and a range of optics. Combinations include 2x100G, 2x40G, 4x50G, 4x25G, or 4x10G per pair of QSFP ports.
The four 400G ports support various logical interfaces (400G, 200G, 100G, 50G, 25G) and can operate with copper, AOC, or OSFP optics.
7280CR3-32D4
The 7280CR3-32D4 is the QSFP-DD version of the 7280CR3-32P4, with identical packet processor to port assignment. Each 400G port supports copper, AOC, or QSFP-DD optics.
7280PR3-24 and 7280DR3-24
These 1U systems feature two Jericho2 chips, delivering 24 ports of 400G with 9.6 Tbps of non-blocking performance. They support up to 192 interfaces in breakout mode. Each 400G port supports various logical interfaces and optics.
7280CR3-96
This 2U system uses two Jericho2 chips for 9.6 Tbps performance and up to 192 interfaces in breakout mode. It supports 96 ports of 100G and a range of optics. Combinations are similar to the 7280CR3-32P4.
7280SR3-48YC8 and 7280SR3K-48YC8
These 1U systems feature a single Jericho2C chip, providing 2 Tbps of capacity across 48 SFP and 8 QSFP ports. SFP28 ports support 1/10/25G, while QSFP28 ports support 100G or 40G, with breakout options for 4x10/25G.
Scaling the Control Plane
A central x86 CPU complex handles control-plane and management functions, while all data-plane forwarding occurs at the packet processor level. Arista EOS® executes on multi-core x86 CPUs with ample DRAM. EOS is multi-threaded, runs on a Linux kernel, and is extensible, allowing for third-party software.
Out-of-band management is available via a serial console port and a 10/100/1000 Ethernet management interface. USB 2.0 interfaces are also provided for image and log transfers.
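EOS exposes its management plane programmatically through eAPI, which accepts CLI commands as JSON-RPC calls over HTTPS. The sketch below builds such a request using only the standard library; the switch address and credentials are placeholders, and eAPI must first be enabled on the switch (via the management api http-commands configuration).

```python
import json

# Sketch of an Arista eAPI request body (JSON-RPC 2.0, method "runCmds").
# Building the payload only; actually sending it requires a reachable
# switch with eAPI enabled and valid credentials.

def eapi_payload(commands, req_id="1"):
    """Build the JSON-RPC body eAPI expects for the runCmds method."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {"version": 1, "cmds": commands, "format": "json"},
        "id": req_id,
    })

body = eapi_payload(["show version"])
# POST `body` to https://<switch>/command-api with basic auth to execute.
```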
Arista 7280R3 Universal Leaf Platform Packet Forwarding Pipeline
Each packet processor is a System-on-Chip (SoC) providing ingress and egress forwarding pipeline stages. Forwarding is always hardware-based.
Stage 1: Networking Interface (Ingress)
This stage implements the Physical Layer (PHY) and Ethernet Media Access Control (MAC) layers, including Forward Error Correction (FEC). Programmable lane mappings support various interface types and breakout configurations.
Stage 2: Ingress Receive Packet Processor
This stage performs forwarding decisions, including packet parsing, SMAC/DMAC/DIP lookups, forwarding table lookups, tunnel decap, and ingress ACLs. It supports various tunnel formats (MPLS, IPinIP, GRE, VXLAN) and can handle multi-label stacks. Forwarding decisions are made for L2 (bridging) or L3 (routing) pipelines.
For L2, it performs SMAC and DMAC lookups in the MAC table. For L3, it performs lookups on the Destination IP address (DIP) within the VRF. The result of a forwarding lookup is a pointer to a Forwarding Equivalence Class (FEC) or FEC group (LAG, ECMP, UCMP).
Multicast traffic logic is similar, with an adjacency entry providing a Multicast ID for replication requirements. The forwarding pipeline remains in the hardware data-plane, with hardware rate limiters and Control Plane Policing protecting the control-plane.
Ingress ACL lookups and Quality of Service (QoS) lookups are also performed. Counters provide accounting and statistics on ACLs, VLANs, sub-interfaces, tunnels, and next-hop groups.
Arista FlexRoute™ Engine
The FlexRoute Engine enables Internet-scale L3 routing tables with significant power savings. It supports IPv4 and IPv6 Longest Prefix Match (LPM) lookups without partitioning table resources, optimized for Internet routing tables and projected growth. FlexRoute enables scale beyond 1.4 million IPv4 and IPv6 prefixes combined, with fast route programming and reprogramming.
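The core operation FlexRoute performs in hardware is a longest-prefix-match (LPM) lookup. The toy routing table and linear scan below illustrate the semantics only; real implementations use tries or TCAM-like structures, and the routes and next-hop names are made up.

```python
import ipaddress

# Minimal longest-prefix-match (LPM) lookup over a route table.
# Linear scan for clarity; hardware uses far more efficient structures.

ROUTES = {
    ipaddress.ip_network("0.0.0.0/0"):   "next-hop-default",
    ipaddress.ip_network("10.0.0.0/8"):  "next-hop-A",
    ipaddress.ip_network("10.1.0.0/16"): "next-hop-B",
}

def lpm(dst: str) -> str:
    """Return the next hop for the most specific matching prefix."""
    addr = ipaddress.ip_address(dst)
    best = max((n for n in ROUTES if addr in n), key=lambda n: n.prefixlen)
    return ROUTES[best]
```

A destination of 10.1.2.3 matches both 10.0.0.0/8 and 10.1.0.0/16; LPM selects the /16 because it is the more specific prefix.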
The multi-stage programmable forwarding pipeline provides flexible and scalable solutions for access control, policy-based networking, and telemetry. ACLs leverage forwarding lookup capabilities for traffic management actions.
sFlow
Hardware-accelerated sFlow on the 7280R3 platform enhances flow instrumentation, providing visibility into high-volume traffic flows and enabling effective traffic management, even with various tunnel overlay technologies.
Inband Network Telemetry (INT)
INT provides insight into per-hop latency, paths, congestion, and drops. It can be correlated for hotspot analysis and path topology to influence traffic engineering decisions. INT processes inband OAM frames and annotates them with metadata for detailed path and transit quality details.
Stage 3: Ingress Traffic Manager
This stage handles packet queuing and scheduling using Virtual Output Queuing (VoQ). Buffering is primarily on the ingress port, representing packets queued for the output side. VoQ balances buffers across sources contending for a congested output port, ensuring fairness and QoS policies.
Packets are queued in input buffers (on-chip and external HBM2 memory) while awaiting a VoQ grant. The memory is used dynamically, with portions reserved for traffic per Traffic Class, multi-destination traffic, and a dynamic buffer pool.
Output Port Characteristic | Maximum Packet Buffer Depth (MB) | Maximum Packet Buffer Depth (msec) |
VoQ for a 10G output port | 50 MB | 40 msec |
VoQ for a 25G output port | 125 MB | 40 msec |
VoQ for a 40G output port | 200 MB | 40 msec |
VoQ for a 50G output port | 250 MB | 40 msec |
VoQ for a 100G output port | 500 MB | 40 msec |
VoQ for a 400G output port | 500 MB | 10 msec |
The VoQ subsystem provides dynamic, intelligent, and deep buffers, ensuring packet buffer space availability and fairness, enabling deployment for various workloads.
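The buffer depths in the table above follow directly from depth = line rate x absorption time. For example, holding 40 ms of traffic for a 10G port requires 10 Gbit/s x 0.040 s / 8 = 50 MB. A one-line helper confirms the table:

```python
# Buffer depth needed to absorb `msec` of congestion at line rate.
# Gbit/s x ms = Mbit; dividing by 8 yields MB (decimal units).

def buffer_mb(rate_gbps: float, msec: float) -> float:
    return rate_gbps * msec / 8
```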
7280R3 Deep Packet Buffers
The 7280R3 series utilizes on-chip buffers (32 MB with Jericho2) in conjunction with flexible packet buffer memory (8 GB of HBM2 per packet processor). On-chip buffers handle non-congested forwarding; during congestion, traffic seamlessly spills into the HBM2 buffers. Buffers are allocated per VoQ and require no tuning.
Stage 4: Ingress Transmit Packet Processor
This stage transfers frames from the input packet processor to the relevant output packet processor. Frames are segmented into variable-sized cells and forwarded across fabric links simultaneously. Each cell has a header for reassembly and in-order delivery. Forward Error Correction (FEC) is enabled for traffic across fabric modules.
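The segmentation-and-reassembly behavior of stages 4 and 5 can be sketched as follows. Each cell carries (frame id, sequence number, last-cell flag) so the egress side can restore order even when cells take different fabric links; the cell size and tag fields here are illustrative, not the actual cell format.

```python
# Sketch of fabric cell segmentation (ingress) and reassembly (egress).
# Each cell is tagged so frames can be rebuilt in order regardless of
# the order in which cells arrive.

def segment(frame: bytes, frame_id: int, cell_size: int = 256):
    chunks = [frame[i:i + cell_size] for i in range(0, len(frame), cell_size)]
    return [
        {"frame_id": frame_id, "seq": n, "last": n == len(chunks) - 1, "data": c}
        for n, c in enumerate(chunks)
    ]

def reassemble(cells):
    cells = sorted(cells, key=lambda c: c["seq"])  # restore original order
    assert cells[-1]["last"], "incomplete frame"
    return b"".join(c["data"] for c in cells)
```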
Stage 5: Egress Receive Packet Processor
This stage reassembles cells back into packets/frames. It handles multicast packet/frame replication for locally attached receivers. This stage ensures no frame or packet reordering and provides data-plane health tracing.
Egress ACLs are performed at this stage based on packet header updates.
Stage 6: Egress Traffic Manager
This stage grants VoQ credit requests from input packet processors and manages egress queues. If the output port is not congested, requests are granted immediately. If congested, it balances service requests between contending input ports, adhering to QoS configuration and PFC/ETS policies.
It also manages egress buffering, with an additional 32MB on-chip buffer primarily for multicast traffic.
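The credit-grant loop between ingress VoQs and the egress traffic manager can be modeled as a toy scheduler. This sketch assumes plain round-robin granting among non-empty VoQs for a single output port; the real scheduler additionally honors QoS configuration and PFC/ETS policies.

```python
from collections import deque

# Toy model of VoQ credit granting: each ingress holds a queue for the
# output port and may only transmit when the egress scheduler grants a
# credit. Round-robin among non-empty VoQs approximates fairness.

class VoQScheduler:
    def __init__(self):
        self.voqs = {}        # ingress id -> deque of queued packets
        self.order = deque()  # round-robin order of ingress ports

    def enqueue(self, ingress, pkt):
        if ingress not in self.voqs:
            self.voqs[ingress] = deque()
            self.order.append(ingress)
        self.voqs[ingress].append(pkt)

    def grant(self):
        """Grant one credit to the next non-empty VoQ; return its packet."""
        for _ in range(len(self.order)):
            ingress = self.order[0]
            self.order.rotate(-1)  # move to the back for fairness
            if self.voqs[ingress]:
                return self.voqs[ingress].popleft()
        return None  # no VoQ has traffic pending
```

With two contending inputs, grants alternate between them rather than letting one input monopolize the congested output.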
Stage 7: Egress Transmit Packet Processor
This stage performs packet header updates, such as next-hop DMAC, Dot1q updates, and tunnel encapsulation operations. It can optionally set TCP Explicit Congestion Notification (ECN) bits. Flexible counters provide packet and byte counts.
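ECN marking at this stage follows the standard IP semantics: if a packet advertises an ECN-capable transport (ECT(0) or ECT(1) in the two ECN bits of the IP header), a congested queue rewrites those bits to Congestion Experienced (CE) instead of dropping the packet. The threshold logic below is a simplification of the real marking policy.

```python
# ECN codepoints per RFC 3168 (the two low-order bits of the IP TOS/TC field).
NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11

def mark_ecn(ecn_bits: int, queue_depth: int, threshold: int) -> int:
    """Rewrite ECT(0)/ECT(1) to CE when the queue exceeds the threshold."""
    if queue_depth > threshold and ecn_bits in (ECT0, ECT1):
        return CE
    return ecn_bits  # non-ECN-capable packets are never marked
```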
Stage 8: Network Interface (Egress)
Packets/frames are transmitted onto the wire as a bit stream, complying with IEEE 802.3 standards.
Arista EOS: A Platform For Scale, Stability and Extensibility
Arista EOS® (Extensible Operating System) is built on an innovative multi-process state-sharing architecture, separating system state from packet forwarding and protocol processing. System state is maintained in a highly efficient System Database (SysDB), accessed via a publish/subscribe/notify model, enabling self-healing resiliency and ease of maintenance.
EOS contrasts with legacy network operating systems that embed system state within each process, relying on extensive Inter-Process Communications (IPC). Legacy systems lack an automated structured core like SysDB, making recovery difficult.
EOS is built on an unmodified Linux kernel, providing access to Linux shell and utilities, and supporting features like Docker Containers. It offers open APIs at management, control, and data-plane levels for extensibility.
The NetDB evolution of SysDB enhances EOS by enabling NetDB NetTable for scaling routing stacks, NetDB Network Central for streaming system state to central repositories, and NetDB Replication for fault-tolerant state streaming.
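The publish/subscribe/notify pattern at the heart of SysDB can be illustrated with a toy state store: agents write state into a central tree and subscribers are notified of changes, so processes never exchange state directly over point-to-point IPC. This is a greatly simplified sketch of the pattern, not SysDB itself, and the path names are invented.

```python
# Toy publish/subscribe state store in the spirit of SysDB: writers
# publish state to a central database; interested agents subscribe to
# paths and receive change notifications.

class StateDB:
    def __init__(self):
        self.state = {}  # path -> current value
        self.subs = {}   # path -> list of notification callbacks

    def subscribe(self, path, callback):
        self.subs.setdefault(path, []).append(callback)

    def publish(self, path, value):
        self.state[path] = value
        for cb in self.subs.get(path, []):
            cb(path, value)  # notify every subscriber of the change

db = StateDB()
seen = []
db.subscribe("interface/Ethernet1/status", lambda p, v: seen.append(v))
db.publish("interface/Ethernet1/status", "up")
```

Because state lives in the database rather than inside each process, a crashed agent can restart, resubscribe, and recover its view, which is the basis of the self-healing behavior described above.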
System Health Tracer And Integrity Checks
Arista EOS includes numerous subsystems for continuous system health and integrity validation:
- ECC protected memories with automatic error correction.
- Parity protected data-plane forwarding tables with shadow copies in ECC protected memory.
- Data-plane packet buffers protected using CRC32 checksums, validated at multiple points.
- Forward Error Correction (FEC) for fabric modules.
- Continuous testing and checking of reachability between data-plane forwarding elements.
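The CRC32 protection of packet buffers listed above amounts to computing a checksum when data is written and re-verifying it when the data is read back, so any corruption in between is detected. A minimal sketch using the standard library:

```python
import zlib

# Sketch of CRC32-based buffer validation: checksum on write,
# re-verify on read; a mismatch flags corruption in the buffer.

def protect(payload: bytes):
    """Store a payload together with its CRC32."""
    return payload, zlib.crc32(payload)

def verify(payload: bytes, crc: int) -> bool:
    """Re-check the payload against its stored CRC32."""
    return zlib.crc32(payload) == crc

pkt, crc = protect(b"example packet payload")
intact = verify(pkt, crc)
corrupted = verify(pkt[:-1] + b"\x00", crc)  # flip the last byte
```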
Conclusion
The Arista 7280R3 Series switches provide a proven, industry-leading platform for cloud and service providers. They combine 400G density with Internet-scale capabilities and next-generation packet processing. The platform leverages a successful architecture focused on efficient design, reliability, and flexibility, with innovations in packet processors enabling wider use cases.
Arista's EOS network operating system leads in openness, extensibility, and software quality, with innovations in telemetry and automation through NetDB and open APIs.
The 7280R3 is ideally suited for cloud-scale data centers, Service Provider WAN backbones, Peering edges, and large enterprise networks.
Santa Clara—Corporate Headquarters
5453 Great America Parkway, Santa Clara, CA 95054
Phone: +1-408-547-5500
Fax: +1-408-538-8920
Email: info@arista.com
Ireland—International Headquarters
3130 Atlantic Avenue
Westpark Business Campus
Shannon, Co. Clare
Ireland
Vancouver-R&D Office
9200 Glenlyon Pkwy, Unit 300
Burnaby, British Columbia
Canada V5J 5J8
San Francisco-R&D and Sales Office
1390 Market Street, Suite 800
San Francisco, CA 94102
India—R&D Office
Global Tech Park, Tower A & B, 11th Floor
Marathahalli Outer Ring Road
Devarabeesanahalli Village, Varthur Hobli
Bangalore, India 560103
Singapore-APAC Administrative Office
9 Temasek Boulevard
#29-01, Suntec Tower Two
Singapore 038989
Nashua-R&D Office
10 Tara Boulevard
Nashua, NH 03062
Copyright © 2020 Arista Networks, Inc. All rights reserved. CloudVision, and EOS are registered trademarks and Arista Networks is a trademark of Arista Networks, Inc. All other company names are trademarks of their respective holders. Information in this document is subject to change without notice. Certain features may not yet be available. Arista Networks, Inc. assumes no responsibility for any errors that may appear in this document. 11/20 02-0087-02