Apple Unveils M5 Pro and M5 Max, Raises MacBook Prices

Apple introduced the M5 Pro and M5 Max, built on a new Fusion Architecture that joins 3nm chiplets, refreshed the MacBook Air, MacBook Pro, and Studio Display, raised base storage, and set higher starting prices.

Overview

A summary of the key points of this story, verified across multiple sources.

1. Apple unveiled M5 Pro and M5 Max chips and refreshed MacBook Pro and MacBook Air models alongside updated Studio Displays, with preorders beginning on March 4 and availability starting March 11, according to company statements.

2. Apple said the M5 Pro and M5 Max use an all-new Fusion Architecture that joins a CPU/I/O chiplet and a graphics chiplet on a 3nm TSMC process and adds a third type of CPU core to boost on-device AI.

3. Apple said the new chips can process large language model prompts nearly four times faster than comparable M4 machines and up to eight times faster than M1 models, and reports said tighter memory supply tied to AI data center demand pushed component costs higher.

4. Apple set the 13-inch MacBook Air starting price at $1,099 and the 15-inch at $1,299, raised MacBook Pro starting prices to about $2,199–$2,699 for M5 Pro and $3,599–$3,899 for M5 Max configurations, and increased base storage to 512GB or more, Apple said.

5. Apple is holding a "special experience" event on Wednesday, and reports said a rumored lower-cost MacBook could appear there as the company looks to expand its lineup at multiple price points.

Written using shared reports from 10 sources.

Analysis

Compare how each side frames the story — including which facts they emphasize or leave out.

Center-leaning sources frame this coverage around consumer cost and supply-driven scarcity, using causal phrasing ('thank the RAM shortage') and evaluative words ('whopping', 'surge', 'plummet'). They foreground price increases and analysts' bleak forecasts while offering little counterbalance from Apple or alternative supply-chain explanations, shaping a scarcity-and-cost narrative.

FAQ

Dig deeper on this story with frequently asked questions.

What is the Fusion Architecture?

The Fusion Architecture is Apple's new design that bonds two separate silicon dies into a single system-on-chip (SoC) instead of using a traditional monolithic design[1][2]. Each die is built on TSMC's third-generation 3-nanometer process and connected with high-bandwidth, low-latency interconnects[1]. While physically two separate dies, the system appears as one unified chip to macOS and maintains Apple's unified memory architecture[2][5]. This represents a significant shift from Apple's previous approach of fitting everything on a single die[5].

How much faster are the M5 Pro and M5 Max?

The M5 Pro and M5 Max deliver up to 30% faster CPU performance for pro workloads with their new 18-core CPU configuration[2], up to 20% faster overall GPU performance, and ray-tracing workloads that are up to 35% faster[2]. For AI workloads specifically, the new chips offer over 4x the peak GPU compute compared to the previous generation[1][2], and can process large language model prompts nearly four times faster than comparable M4 machines[6].

How much unified memory do the new chips support?

The M5 Pro supports up to 64GB of unified memory with bandwidth of 307GB/s, an increase from the M4 Pro's 48GB[2]. The M5 Max supports up to 128GB of unified memory with bandwidth increased to 614GB/s, double that of the M5 Pro[2]. Both chips feature a faster 16-core Neural Engine with higher bandwidth connections to memory for accelerating on-device AI features[1].
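Memory bandwidth matters for AI because autoregressive LLM decoding is typically bandwidth-bound: generating each token requires reading roughly all model weights once. The sketch below is a back-of-the-envelope upper bound only, not an Apple benchmark; the 8-billion-parameter, 4-bit model is a hypothetical example.

```python
# Back-of-the-envelope ceiling for bandwidth-bound LLM decoding:
# throughput <= memory bandwidth / bytes read per token (~ model size).
# This ignores KV-cache reads, compute limits, and overheads, so real
# numbers will be lower. All figures below are illustrative assumptions.

def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on decode speed when weight reads dominate."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical 8B-parameter model quantized to 4 bits (0.5 bytes/param).
model_gb = 8e9 * 0.5 / 1e9  # ~4 GB of weights

for name, bw in [("M5 Pro (307 GB/s)", 307), ("M5 Max (614 GB/s)", 614)]:
    print(f"{name}: <= {max_tokens_per_sec(bw, model_gb):.0f} tokens/s")
```

The doubling from 307GB/s to 614GB/s directly doubles this theoretical decode ceiling, which is why the bandwidth figures are as relevant to on-device AI as the raw capacity.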

Why did Apple move to a chiplet design?

Apple transitioned to the Fusion Architecture to scale the capabilities of its silicon while maintaining performance and power efficiency[1]. The dual-die design allows Apple to improve performance within the MacBook Pro form factor without resorting to discrete GPUs[6]. By separating the CPU/I/O chiplet from the graphics chiplet, Apple could reallocate approximately 15-20% of GPU die area to dedicated AI logic with Neural Accelerators in each GPU core[4], enabling enhanced on-device AI capabilities without resorting to larger, less efficient designs.

What are the Neural Accelerators in the GPU?

The M5 Pro and M5 Max feature a next-generation GPU with up to 40 cores, each equipped with a dedicated Neural Accelerator[2]. These Neural Accelerators can execute standard floating-point graphics instructions and matrix multiply AI instructions simultaneously without interrupting data flow, making them an industry-first consumer implementation of heterogeneous compute within shader cores[4]. This architecture enables the chips to deliver over 4x the peak GPU compute for AI compared to the M4 generation[1] and supports on-device processing of 30-billion-parameter language models through hardware compression techniques[4].
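To see why compression is what makes a 30-billion-parameter model practical on a laptop, compare the approximate weight footprint at standard precisions against the chips' memory ceilings. The bytes-per-parameter values are common quantization sizes; mapping them to Apple's hardware compression is an illustrative assumption, not a description of Apple's technique.

```python
# Approximate weight footprint of a 30B-parameter model at common
# precisions, versus the M5 Pro (64 GB) and M5 Max (128 GB) unified
# memory ceilings. Bytes-per-parameter are standard quantization
# sizes; this is illustrative arithmetic, not Apple's method.

PARAMS = 30e9
PRECISIONS = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def footprint_gb(params: float, bytes_per_param: float) -> float:
    """Raw weight storage in GB, ignoring activations and KV cache."""
    return params * bytes_per_param / 1e9

for name, bpp in PRECISIONS.items():
    gb = footprint_gb(PARAMS, bpp)
    fits_pro = "yes" if gb <= 64 else "no"
    fits_max = "yes" if gb <= 128 else "no"
    print(f"{name}: {gb:.0f} GB (fits 64 GB: {fits_pro}, 128 GB: {fits_max})")
```

Uncompressed fp32 weights alone (about 120GB) would not fit on an M5 Pro, while 4-bit compression cuts the footprint to roughly 15GB, leaving headroom for the OS, activations, and the KV cache.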