General purpose (M7i/M5/M5d) instances provide a balance of compute, memory, and network resources and are suited for a broad range of workloads such as machine learning (ML) inference, web and application servers, databases, backend servers for enterprise applications, gaming servers, video streaming, and caching fleets. M7i instances are available on second-generation Outposts racks. M5 and M5d instances are available on first-generation Outposts racks.
Compute-optimized (C7i/C5/C5d) instances are optimized for compute-intensive workloads and deliver cost-effective high performance at a low price-to-compute ratio. They are well suited for compute-intensive applications such as batch processing, distributed analytics, media transcoding, high-performance web servers, high-performance computing (HPC), scientific modeling, high-performance gaming servers, ad server engines, and ML inference. C7i instances are available on second-generation Outposts racks. C5 and C5d instances are available on first-generation Outposts racks.
Memory-optimized (R7i/R5/R5d) instances are designed to deliver fast performance for workloads that process large data sets in memory. They are well suited for memory-intensive applications such as high-performance databases, distributed web-scale in-memory caches, in-memory databases, and real-time big data analytics. R7i instances are available on second-generation Outposts racks. R5 and R5d instances are available on first-generation Outposts racks.
Accelerated computing (G4dn) instances are designed to accelerate ML inference and graphics-intensive workloads. They can be used for inference applications such as adding metadata to an image, object detection, recommender systems, automated speech recognition, and language translation. They also provide a cost-effective platform for building and running graphics-intensive applications such as remote graphics workstations, video transcoding, photo-realistic design, and game streaming in the cloud.
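Regardless of instance family, launching an instance on an Outpost works the same way as launching one in the parent AWS Region: you target a subnet that was created on the Outpost. The boto3 sketch below is illustrative only; the Region, AMI ID, subnet ID, and instance size are placeholder values, not values taken from this page.

```python
import boto3

# Minimal sketch: launch a general purpose instance into a subnet associated
# with an Outpost. The instance lands on the Outpost because the subnet does.
ec2 = boto3.client("ec2", region_name="us-west-2")  # placeholder Region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI ID
    InstanceType="m7i.large",             # placeholder size; must be available on your rack
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet created on the Outpost
)
print(response["Instances"][0]["InstanceId"])
```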
Accelerated networking (Bmn-sf2e/Bmn-cx2) instances on Outposts racks, powered by 4th generation Intel Xeon Scalable Processors (Sapphire Rapids), are purpose-built for the most compute-intensive, network-intensive, and latency-sensitive on-premises workloads. To deliver the best possible performance, these instances feature, in addition to the Outposts logical network, a secondary bare metal network with network accelerator cards connected to customer top-of-rack (TOR) switches within each Outposts compute rack.
Bmn-sf2e instances are high-frequency, high-memory instances powered by 4th generation Intel Xeon Scalable Processors (Sapphire Rapids), delivering a sustained all-core turbo frequency of up to 3.9 GHz and a 1:8 CPU-to-memory ratio. Bmn-sf2e instances feature AMD Solarflare™ X2522 Ethernet adapters connected to the TOR switches. These instances support native L2 multicast, Precision Time Protocol (PTP), and equal cable lengths between the network accelerator cards and the TOR switches, making them ideally suited for complex workloads with stringent requirements for ultra-low latency, deterministic networking, and fair and equal access.
Bmn-cx2 instances are powered by 4th generation Intel Xeon Scalable Processors (Sapphire Rapids) and feature NVIDIA ConnectX-7 400G NICs physically connected to high-speed TOR switches. Bmn-cx2 instances support native L2 multicast and hardware-based PTP. Compared to Bmn-sf2e instances, Bmn-cx2 instances are designed for workloads with higher throughput but less stringent latency requirements, such as real-time market data ingestion and distribution, market and risk analytics, telecom 5G core, and media distribution.
* Outposts racks with Bmn-sf2e and Bmn-cx2 instances are configured differently from Outposts racks with other Amazon EC2 instances. Certain features described on this page may not apply to Outposts racks with Bmn instances.
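For context on the multicast use cases mentioned above: applications such as market data feed handlers generally consume L2 multicast traffic as ordinary UDP/IP multicast at the socket layer. The sketch below is a generic Python receiver with a placeholder group address and port; it is not tied to any Bmn-specific configuration or Outposts API.

```python
import socket
import struct

# Generic sketch of a UDP multicast receiver, as a market data feed handler
# might use. Group address and port are placeholders.
GROUP = "239.1.1.1"
PORT = 15000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the multicast group on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(65535)
    print(f"received {len(data)} bytes from {addr}")
```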
Support for more EC2 instances, including GPU-enabled instances, is coming soon.
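Because the set of supported instance types varies by rack generation and configuration, it can be useful to check programmatically which instance types your Outpost offers. One way is the AWS Outposts GetOutpostInstanceTypes API; the boto3 sketch below assumes a placeholder Outpost ID and Region.

```python
import boto3

# Sketch: list the instance types configured on a specific Outpost.
# The Outpost ID and Region are placeholders. Results may be paginated;
# follow NextToken for Outposts with many configured instance types.
outposts = boto3.client("outposts", region_name="us-west-2")

response = outposts.get_outpost_instance_types(OutpostId="op-0123456789abcdef0")
for item in response["InstanceTypes"]:
    print(item["InstanceType"])
```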