Sunday, May 10, 2026 · May 4–10, 2026

Weekly Storage & AI Data Platform Briefing

Key signals this week (5 total)
  1. EU AI Omnibus Provisional Deal Reached May 7: High-Risk AI Deadlines Deferred to December 2027 / August 2028
  2. NVIDIA Tops $40B in AI Equity Commitments in 2026; Corning ($3.2B) and IREN ($2.1B) Deals Reshape the AI Infrastructure Supply Chain
  3. IREN Acquires Spain's Nostrum Group (490 MW): NVIDIA-Backed AI Data Center Operator Enters Europe
  4. SanDisk Q3 FY2026: Revenue Hits $5.95B (Up 251% YoY), Data Center Revenue $1.47B; WDC Raises Q4 Guidance
  5. "Production AI Is a Data Pipeline Problem": Storage Emerges as the Consensus AI Bottleneck

Source confidence: 13 cross-validated · 11 multi-pass · 8 single source

Week of May 10, 2026 — Period Covered: May 4–10, 2026


1. Executive Summary

This week's briefing identifies five genuinely new signals with strategic significance for NetApp's EMEA, LATAM, and Middle East operations. The dominant themes are the reversal of the EU AI Omnibus stalemate — a provisional deal was struck on May 7 deferring high-risk AI deadlines — NVIDIA's aggressive $40B+ investment blitz locking down the AI infrastructure supply chain, and IREN's acquisition of Spain's Nostrum Group opening a significant new European AI data center play.

Signal 1 — EU AI Omnibus Provisional Deal Reached May 7: High-Risk AI Deadlines Deferred to December 2027 / August 2028. In a dramatic reversal of the April 28 trilogue collapse, the European Parliament and Council reached a provisional agreement on the Digital Omnibus on AI at 4:30 a.m. on May 7, 2026. The deal defers standalone high-risk AI system obligations (Annex III — biometrics, employment, education, law enforcement, critical infrastructure) from August 2, 2026 to December 2, 2027. Product-embedded AI (Annex I — medical devices, industrial machinery) moves to August 2, 2028. The agreement also bans AI-generated non-consensual intimate imagery (NCII) and CSAM, with a compliance date of December 2, 2026. Article 50(2) watermarking for generative AI content remains at August 2, 2026, with a transitional grace period to December 2, 2026 for pre-existing systems. The deal still requires formal adoption by both institutions. [CROSS-VALIDATED] NetApp implication: This is the single most consequential regulatory update this week — it reverses last week's "91 days to comply" urgency. The compliance window for Article 12 automatic logging infrastructure extends from August 2, 2026 to December 2, 2027, adding 16 months. However, the urgency shifts rather than disappears: enterprises that were already mobilizing should continue, as the December 2027 deadline will arrive faster than expected, and early adopters gain competitive advantage. The talk track changes from "you have 91 days" to "you now have 19 months — but standards remain undefined, and first-movers who deploy SnapLock WORM + FPolicy logging infrastructure now will avoid the crush of last-minute compliance projects." The watermarking obligation for generative AI (August 2, 2026) still stands: any customer generating synthetic content needs audit-trail infrastructure immediately.

Signal 2 — NVIDIA Tops $40B in AI Equity Commitments in 2026; Corning ($3.2B) and IREN ($2.1B) Deals Reshape the AI Infrastructure Supply Chain. In the week of May 4–10, NVIDIA announced two transformative infrastructure investments. On May 6, NVIDIA committed up to $3.2 billion in Corning to fund three new U.S. optical fiber manufacturing plants, increasing Corning's U.S. optical connectivity manufacturing capacity by 10x and fiber production by 50%. On May 7, NVIDIA announced a strategic partnership with data center operator IREN for up to 5 GW of DSX-aligned AI infrastructure, backed by a $2.1 billion equity stake. NVIDIA's total 2026 equity commitments now exceed $40 billion (including $30B in OpenAI), more than double its $17.5B in fiscal 2025. [CROSS-VALIDATED] NetApp implication: NVIDIA's $40B investment strategy is vertically integrating the AI infrastructure stack — from fiber (Corning) to data centers (IREN) to silicon. Every 5 GW NVIDIA DSX facility needs petabytes of high-performance storage. NVIDIA's DSX reference design explicitly includes storage as a core infrastructure layer alongside compute and networking. NetApp's confirmed STX/CMX partnership and AIDE integration position it to be the storage layer in these builds. However, DDN, WEKA, and VAST Data also hold NVIDIA Certified Storage status. The competitive question: when NVIDIA builds a 2 GW Sweetwater campus with IREN, whose storage goes in? NetApp must ensure AIDE and EF-Series are validated against DSX specifications and listed in NVIDIA's reference architecture BOM.

Signal 3 — IREN Acquires Spain's Nostrum Group (490 MW): NVIDIA-Backed AI Data Center Operator Enters Europe. On May 7, 2026, IREN announced the acquisition of Ingenostrum, S.L. (Nostrum Group), a next-generation data center developer in Spain. The deal adds approximately 490 MW of secured, grid-connected power plus additional development pipeline, bringing IREN's total portfolio to 5 GW. Spain was selected for favorable AI policy, constructive permitting, robust connectivity, and abundant low-cost renewable energy. Combined with the NVIDIA partnership for DSX-aligned infrastructure, IREN now brings NVIDIA-certified, gigawatt-scale AI factory architecture to European soil. [CROSS-VALIDATED] NetApp implication: This is the most significant European data center infrastructure signal this week. A 490 MW NVIDIA DSX-aligned campus in Spain represents a massive greenfield storage opportunity. IREN's NVIDIA partnership means this will be built to DSX reference design specifications — which include high-performance storage. NetApp EMEA should immediately engage IREN's European operations team to position ONTAP and AIDE as the storage platform. The competitor threat is real: IREN's NVIDIA relationship gives DDN, VAST Data, and WEKA a warm introduction. Spain's position as a renewable energy hub also aligns with ONTAP's storage efficiency narrative — fewer drives means lower power.

Signal 4 — SanDisk Q3 FY2026: Revenue Hits $5.95B (Up 251% YoY), Data Center Revenue $1.47B; WDC Raises Q4 Guidance. SanDisk reported Q3 FY2026 revenue of $5.95 billion on April 30 — up 97% sequentially and 251% year-over-year — with data center revenue of $1.47 billion. CEO David Goeckeler called it "a fundamental inflection point." Western Digital delivered $3.3B revenue (up 45% YoY) with 50.5% gross margin and raised Q4 guidance to $3.65B. The cloud segment now represents 89% of WDC revenue. On May 8, both SanDisk (+11%) and Micron (+11%) rallied on renewed AI memory supercycle bull thesis. [CROSS-VALIDATED] NetApp implication: SanDisk/WDC earnings validate the storage infrastructure supercycle narrative and confirm that NAND suppliers are printing money while storage system vendors absorb cost increases. The 251% YoY SanDisk revenue growth, driven by AI-fueled enterprise SSD demand, is the upstream validation of the procurement urgency NetApp is communicating to customers. Use in conversations: "SanDisk just reported 251% revenue growth selling flash — that's your money. Every terabyte ONTAP deduplicates is a terabyte of SanDisk revenue you're not contributing to."

Signal 5 — "Production AI Is a Data Pipeline Problem": Storage Emerges as the Consensus AI Bottleneck. Multiple independent publications this week converged on storage as the primary AI deployment bottleneck. VDURA published "From Training to Inference: How Storage Requirements Flip When AI Goes to Production" (May 7). WEKA's blog declared "Storage is the New AI Battleground for Inference at Scale." ODBMS.org published "Production AI is a Data Pipeline Problem: Why Storage is the Bottleneck Nobody Plans For." An industry survey cited by multiple outlets found that 57% of enterprises identify storage performance as their top bottleneck, ahead of compute/GPU availability (54%) and network bandwidth (52%). By end of 2026, over 90% of deployed GPU capacity will serve inferencing workloads. [MULTI-PASS] NetApp implication: The narrative has shifted from "storage for AI" to "storage as the AI bottleneck." This is favorable ground for NetApp. When 57% of enterprises say storage performance is their #1 AI constraint, the sales motion changes: NetApp isn't selling storage alongside AI — it's solving the primary obstacle to AI deployment. Every SE conversation should now lead with "what's your storage throughput to your GPUs?" not "do you need storage for your AI project?" The ONTAP vLLM + GPUDirect Storage benchmarks (61% throughput improvement) directly address this bottleneck narrative.


2. Competitive Radar

WEKA — "Storage as the New AI Battleground"; Augmented Memory Grid for KV Cache

What they announced: WEKA published "Storage is the New AI Battleground for Inference at Scale" this week, positioning WEKA as the primary storage platform for inference workloads. Building on GTC 2026 announcements (NeuralMesh AIDP GA, BlueField-4 STX integration with Augmented Memory Grid), WEKA claims 4–20x TTFT improvement under concurrent load and 6.5x more tokens per GPU when using their storage as the KV cache layer in STX deployments. WEKA's Augmented Memory Grid technology serves as the shared KV cache tier in the NVIDIA STX context memory architecture. [MULTI-PASS]

NetApp's current answer: NetApp's ONTAP-based KV cache offloading with vLLM and GPUDirect Storage delivers 61% throughput improvement and 34% TTFT reduction. NetApp is a confirmed STX/CMX partner with AIDE integration planned for NVIDIA AI Data Platform reference design. However, NetApp has not published STX-specific KV cache benchmarks comparable to WEKA's "4–20x TTFT improvement" claims.

Gap or advantage: WEKA's performance claims in the STX context (4–20x TTFT improvement) significantly exceed NetApp's published numbers (34% TTFT reduction). The comparison is not entirely apples-to-apples — WEKA measures under "concurrent agent workloads" while NetApp's benchmarks use different test parameters — but the headline numbers favor WEKA. NetApp's advantage remains data management: WEKA provides raw performance, ONTAP provides performance plus snapshots, replication, compliance, and multi-cloud. The gap: NetApp needs STX-specific benchmarks urgently, and AIDE's broad availability timeline (early summer 2026) must be met to remain credible in STX deals.

Suggested talk track: "WEKA delivers impressive parallel file system performance for inference — their numbers at GTC were strong. But inference state isn't just a performance problem; it's a data management problem. When your KV cache becomes persistent state across thousands of concurrent agent sessions, you need more than fast I/O — you need snapshots for recovery, replication for multi-site inference, and compliance logging for regulated AI. WEKA gives you the speed; ONTAP gives you speed plus everything else."

Sources: WEKA Blog — Inference at Scale, NAND Research — WEKA NeuralMesh GTC 2026


Dell Technologies — Dell Technologies World Imminent (May 18–21); No New Pre-Event Storage Announcements

What they announced: No new storage product announcements during May 4–10. Dell Technologies World (May 18–21, Las Vegas) is the next major competitive event, themed around "agentic AI — systems that don't just respond but act, decide and adapt." Expect significant storage, AI, and infrastructure announcements spanning Lightning File System updates, Exascale Storage timing, PowerScale/PowerStore enhancements, and potentially new AI data management capabilities. The "Modern Data Center" track will cover storage, networking, and data protection. [SINGLE SOURCE]

NetApp's current answer: N/A — no new announcements to counter this week. Watch next week's briefing for DTW coverage.

Gap or advantage: Dell Technologies World is historically Dell's largest announcement vehicle. Last year's event introduced significant storage and AI updates. NetApp should prepare competitive response materials for announcements across Lightning (parallel file), Exascale (software-defined unified), PowerStore (block), and PowerScale (enterprise NAS). The Sovereign AI initiative (S-AI with Accenture + Palantir, powered by Dell AI Factory + NVIDIA) will likely be highlighted, representing a competitive challenge in European sovereign infrastructure deals where Dell holds the hardware stack position.

Suggested talk track: None this week. Prepare competitive response materials ahead of DTW; updated guidance will follow in next week's briefing.

Sources: Dell Technologies World 2026


[UPDATE] Everpure — Ticker Change to "P"; Accelerate 2026 (June 16–18)

No new product or pricing announcements during May 4–10. Everpure announced its stock ticker will change from "PSTG" to "P" and will host FY Q4 earnings on May 27. Pure//Accelerate 2026 is scheduled for June 16–18 in Las Vegas — expect major product announcements. The 70% price increase narrative from last week continues to generate customer conversations. [SINGLE SOURCE]


[UPDATE] VAST Data, DDN, HPE, IBM, Hitachi Vantara

No significant new announcements from these vendors during May 4–10. VAST FWD 2026 (February) announcements (CNode-X, Polaris, NVIDIA Certification) continue to drive pipeline. DDN's Managed Lustre momentum on GCP continues. Dell Technologies World (May 18–21) is the next major competitive window.


3. NVIDIA AI Infrastructure

NVIDIA's $40B+ AI Investment Blitz Reshapes the Infrastructure Supply Chain

In the most significant week of NVIDIA infrastructure investment activity in 2026, the company executed two major deals that fundamentally extend its vertical integration into AI infrastructure:

NVIDIA × Corning — $3.2B for Optical Fiber Manufacturing (May 6). NVIDIA committed up to $3.2 billion to Corning — structured as warrants to purchase up to 15 million shares at $180/share plus a $500M pre-funded warrant for 3 million shares. The investment funds three new advanced manufacturing plants in North Carolina and Texas, creating 3,000 jobs. Corning will increase U.S. optical connectivity manufacturing capacity by 10x and fiber production capacity by 50%. Corning shares climbed 12% on the news. [CROSS-VALIDATED]

Why this matters for storage: NVIDIA is transitioning from copper to fiber-optic interconnects for rack-scale systems. The Corning investment secures supply for NVLink and NVSwitch optical connections in Vera Rubin NVL72 and future platforms. Higher-bandwidth optical interconnects between compute and storage mean storage systems must deliver correspondingly higher throughput to avoid becoming the bottleneck. NVIDIA DSX reference designs will specify storage performance requirements assuming optical-speed fabrics — any certified storage solution must meet these thresholds.

NVIDIA × IREN — $2.1B for 5 GW of DSX AI Infrastructure (May 7). NVIDIA issued IREN a five-year right to purchase up to 30 million shares at $70/share ($2.1B total). IREN stock jumped 19%. The partnership targets deployment of up to 5 GW of NVIDIA DSX-aligned AI infrastructure, with IREN's 2 GW Sweetwater campus in Texas as the flagship deployment. Simultaneously, IREN acquired Spain's Nostrum Group (490 MW), entering Europe (detailed in Section 9). [CROSS-VALIDATED]

NVIDIA's 2026 Investment Portfolio — $40B+ Total. Including the $30B OpenAI investment, $3.2B Corning, and $2.1B IREN, NVIDIA has committed over $40 billion in equity deals in 2026 — more than double its $17.5B in fiscal 2025. Critics argue this creates a circular demand loop (NVIDIA invests → partners buy NVIDIA GPUs → revenue justifies more investment). Regardless of the circularity debate, the practical effect is clear: NVIDIA is locking up the AI infrastructure supply chain from silicon to fiber to data center shells. [CROSS-VALIDATED]

NetApp relevance: Every GW of NVIDIA DSX infrastructure requires massive storage capacity. NVIDIA's DSX documentation explicitly lists storage (block, file, and object) as a core infrastructure layer. NetApp's STX/CMX partnership and AIDE integration are the entry point, but NVIDIA's investment strategy favors partners who can co-invest, co-engineer, and validate at NVIDIA's pace. NetApp should prioritize: (1) getting AIDE validated against DSX specifications, (2) publishing STX-specific benchmarks, and (3) engaging IREN's European operations team before DDN/WEKA/VAST Data establish their positions.

[UPDATE] Vera Rubin, BlueField-4, CMX — No Change

Vera Rubin NVL72 production shipments remain on track for H2 2026. BlueField-4 and CMX platform availability targeted for H2 2026. No new technical details during May 4–10.

Sources: NVIDIA × Corning Press Release, CNBC — Corning, NVIDIA × IREN Press Release, CNBC — IREN, CNBC — $40B Equity Bets, TechCrunch — $40B


4. NetApp Portfolio & Ecosystem

NetApp FY2026 Q4 Earnings Scheduled May 28

NetApp will announce fourth quarter and full fiscal year 2026 results (fiscal year ended April 24, 2026) after market close on May 28, 2026. Expectations: EPS of $2.27 (in line with company guidance), up from $1.56 in the year-ago quarter. NetApp has beaten Wall Street estimates in each of the last four quarters. The earnings call will be the first opportunity to hear management commentary on AIDE early access traction, NAND cost impact on margins, cloud ARR trajectory (ANF, GCNV, FSx for ONTAP), and competitive dynamics against VAST Data and Everpure. [MULTI-PASS]

Strategic significance: The Q4 call will provide critical data points for field positioning. Watch for: (1) AIDE customer count and use case details, (2) any pricing guidance reflecting NAND cost pass-through, (3) cloud services growth rate (ANF/GCNV/FSx for ONTAP), (4) commentary on competitive displacement wins against VAST Data, Everpure, or Dell, and (5) forward guidance reflecting the NAND pricing environment.

[UPDATE] AIDE, Console, EF50/EF80, STX

No material new developments during May 4–10. AIDE remains in early access with lighthouse customers, broad availability targeted for early summer 2026. Console rebrand continues. EF50/EF80 Lustre interoperability remains H2 2026. NVIDIA STX/CMX partner status confirmed.

Sources: NetApp Q4 FY2026 Earnings Release


5. Enterprise Storage Vendor Landscape

SanDisk/Western Digital — Q3 FY2026 Blowout Validates NAND Supercycle

SanDisk reported Q3 FY2026 revenue of $5.95 billion — up 97% sequentially and 251% year-over-year. Data center revenue hit $1.47 billion. CEO David Goeckeler called it "a fundamental inflection point." Western Digital delivered $3.3B revenue (up 45% YoY) with 50.5% gross margin (up 1,040 basis points YoY), EPS of $2.72 (up 97% YoY). Cloud now represents 89% of WDC revenue. WDC raised Q4 guidance to $3.65B (±$100M) with 51–52% gross margin. On May 8, SanDisk and Micron both rallied 11% on renewed AI memory supercycle thesis. [CROSS-VALIDATED]

NetApp relevance: These results confirm the NAND suppliers are capturing massive value from the memory crisis. Enterprise storage vendors face a margin squeeze: they buy flash at crisis prices but compete for customers who resist price increases. NetApp's 4:1+ data reduction directly addresses this by reducing the volume of NAND consumed per usable terabyte. The SanDisk data center revenue figure ($1.47B in a single quarter) quantifies the upstream cost pressure flowing into storage system pricing.

Seagate Mozaic 4+ — 44 TB HDDs Shipping to Hyperscalers

Seagate's Mozaic 4+ platform (10 platters, 4+ TB/platter, HAMR technology) is now shipping 44 TB drives in volume to two leading hyperscale cloud providers. The roadmap targets 10 TB/platter and 100 TB drives. Broader availability planned as production scales. [MULTI-PASS]

NetApp relevance: Seagate's 44 TB HAMR drives are relevant for NetApp FAS and AFF C-Series customers considering HDD-based capacity tiers. More importantly, the HDD vs. NAND cost arbitrage is widening dramatically as NAND prices surge. VDURA's May 4 analysis ("Follow the HDDs: What This Week's WDC and Seagate Earnings Tell Us About AI Storage") argues that HDD capacity tiers are winning for AI data lakes and checkpoint storage because the NAND premium has become untenable for bulk data. ONTAP's FabricPool tiering to HDD-based or object tiers becomes more compelling as NAND costs rise.

SK hynix — 321-Layer QLC NAND cSSD Shipping (Dell First Customer)

SK hynix began shipping the PQC21, the world's first client SSD built on 321-layer QLC NAND flash, with Dell Technologies as the initial customer (April 2026). Available in 1 TB and 2 TB capacities for AI PC configurations. QLC market share is forecast to grow from 22% (2025) to 61% (2027) per IDC. [MULTI-PASS]

NetApp relevance: The 321-layer QLC advance signals that NAND density improvements continue despite the pricing crisis. However, new layer counts take 12–18 months to reach enterprise SSD qualification. The near-term NAND supply crisis (through H2 2027) will not be solved by 321-layer technology ramping in client SSDs. Enterprise storage buyers should not expect relief from technology advances in the near term.

[UPDATE] Dell, Everpure, VAST Data, DDN, HPE, IBM, Hitachi Vantara, MinIO

No material new product announcements from these vendors during May 4–10. Dell Technologies World (May 18–21) is the next major competitive window. Everpure Accelerate (June 16–18) follows.

Sources: SanDisk Q3 FY2026 Results, WDC Q3 FY2026 Results, 24/7 Wall St. — Memory Supercycle, SK hynix 321-Layer QLC


6. Hyperscaler & Cloud Storage

Microsoft Azure — Strategic Storage Services Partner Program Launched May 1

Azure announced the Strategic Azure Storage Services Partner Program on May 1 in partnership with AHEAD. The program formalizes relationships with vendors delivering advanced storage solutions on Azure, providing pre-tested integration patterns, operational support, and optimized migrations. The program covers blobs, files, and backups, with certified partners providing tiering strategies and Azure-native monitoring hooks. [SINGLE SOURCE]

NetApp relevance: The Azure Strategic Storage Services Partner Program could be relevant for Azure NetApp Files positioning. If NetApp participates as a certified partner, it provides additional validation and customer acquisition channels. If NetApp is not included, it's a gap. Field teams should confirm NetApp's participation status and leverage any certification in ANF-centric customer conversations.

[UPDATE] AWS, Google Cloud, OCI

No significant new storage announcements from AWS, Google Cloud, or Oracle Cloud Infrastructure during May 4–10. AWS S3 Files (GA, covered last week) continues market adoption. Google Cloud Next '26 announcements (GCNV Flex Unified, Managed Lustre 10 TB/s, Smart Storage) continue rollout. FSx for ONTAP, ANF, and GCNV positions unchanged.

Sources: Azure May 2026 Updates


7. AI Data Pipeline & Training/Inference Infrastructure

Consensus Emerges: Storage Is the #1 AI Deployment Bottleneck

This week marked a narrative inflection point: multiple independent sources converged on the thesis that storage — not compute, not networking — is now the primary constraint on enterprise AI deployment. This is the most important framing shift for storage vendors in 2026.

Key publications this week:

VDURA — "From Training to Inference: How Storage Requirements Flip" (May 7). VDURA published a detailed analysis of how storage I/O patterns fundamentally change as workloads shift from training to inference. Training is sequential, checkpoint-heavy, and throughput-bound. Inference is random, latency-sensitive, and concurrency-bound. The storage architecture that works for training often fails for inference. By end of 2026, over 90% of deployed GPU capacity will serve inference. [SINGLE SOURCE]
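
The checkpoint-bound nature of training can be made concrete with back-of-envelope arithmetic: if checkpoints are written synchronously, GPUs stall for the full write duration. A minimal sketch, with all workload figures (4 TB checkpoint state, 30-minute interval, the two bandwidth points) assumed for illustration rather than drawn from the VDURA analysis:

```python
# Illustrative sketch: fraction of training wall-clock lost to synchronous
# checkpoints as a function of storage write bandwidth. All numbers are
# hypothetical assumptions, not published benchmarks.

def checkpoint_stall_fraction(ckpt_size_tb: float,
                              interval_min: float,
                              write_gbps: float) -> float:
    """Fraction of wall-clock time GPUs sit idle writing checkpoints,
    assuming training pauses until each write completes."""
    write_sec = ckpt_size_tb * 1000 / write_gbps   # TB -> GB, over GB/s
    interval_sec = interval_min * 60
    return write_sec / (interval_sec + write_sec)

# Hypothetical 4 TB model/optimizer state, checkpointed every 30 minutes:
slow = checkpoint_stall_fraction(4.0, 30, 10)    # 10 GB/s storage
fast = checkpoint_stall_fraction(4.0, 30, 200)   # 200 GB/s storage
print(f"10 GB/s:  {slow:.1%} of GPU time lost")  # ~18% stalled
print(f"200 GB/s: {fast:.1%} of GPU time lost")  # ~1% stalled
```

The same cluster loses roughly a fifth of its GPU hours on slow storage and almost nothing on fast storage, which is the throughput-bound half of the "flip" in one formula.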

WEKA — "Storage is the New AI Battleground for Inference at Scale." WEKA's blog argues that inference at enterprise scale creates a fundamentally different storage challenge than training. The KV cache management problem, context persistence across sessions, and concurrent agent workloads create I/O patterns that traditional storage architectures weren't designed for. [SINGLE SOURCE]

ODBMS.org — "Production AI is a Data Pipeline Problem" (May 2026). An interview with an industry expert (likely Paul Speciale, VAST Data) arguing that the limiting factor for enterprise AI isn't GPU availability but whether data can be delivered fast enough to keep GPUs utilized. Training and inference depend on complex, multi-stage data pipelines (ingestion, preprocessing, shuffling, checkpointing, retrieval) that generate intense I/O and metadata pressure. [SINGLE SOURCE]

Industry Survey Data: 57% of enterprises identify storage performance as their top AI bottleneck, ahead of compute/GPU availability (54%) and network bandwidth (52%). 91% of private AI deployments in production now rely on object storage. [MULTI-PASS]

Combined NetApp implication: The "storage as AI bottleneck" narrative is the most favorable market positioning for enterprise storage vendors since cloud repatriation. NetApp should aggressively claim this narrative. Specific actions: (1) publish the ONTAP vLLM + GPUDirect Storage benchmarks as a formal solution brief, not a blog post, (2) create a "Storage Performance Calculator for AI" tool that shows GPU idle time as a function of storage throughput, (3) position the FY2026 Q4 earnings call commentary around "solving the AI storage bottleneck" rather than "storage for AI." The framing matters: customers aren't buying storage to support AI — they're removing the #1 obstacle to AI success.
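
Action (2) above can be prototyped in a few lines. A minimal sketch of such a calculator, assuming a simple linear model in which any storage shortfall translates directly into idle GPU time; the per-GPU consumption rate and cluster size are placeholders, not NetApp or NVIDIA figures:

```python
# Hypothetical "Storage Performance Calculator for AI": GPU idle time as a
# function of delivered storage read throughput, under a simple linear model.

def gpu_idle_fraction(required_gbps: float, delivered_gbps: float) -> float:
    """If storage delivers less than the GPUs can consume, the shortfall
    shows up directly as idle compute time."""
    if delivered_gbps >= required_gbps:
        return 0.0
    return 1.0 - delivered_gbps / required_gbps

# Assumed cluster: 64 GPUs, each able to consume 2 GB/s of training data.
required = 64 * 2.0            # 128 GB/s aggregate demand
for delivered in (32, 64, 128):
    idle = gpu_idle_fraction(required, delivered)
    print(f"{delivered:>3} GB/s delivered -> {idle:.0%} GPU idle")
```

Even this crude model makes the sales point quantitative: halving delivered throughput below demand idles half the GPU investment.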

Sources: VDURA — Training to Inference, WEKA — Inference at Scale, ODBMS — Data Pipeline Problem


8. NAND Flash & SSD Market

SanDisk/WDC/Micron Earnings Confirm the Supercycle Is Accelerating

The NAND flash supplier earnings this week provide the most concrete evidence yet that the AI-driven memory supercycle is accelerating, not plateauing:

Metric                       SanDisk Q3 FY26       WDC Q3 FY26              Signal
Revenue                      $5.95B (+251% YoY)    $3.3B (+45% YoY)         Demand acceleration
Gross margin                 Not disclosed         50.5% (+1,040 bps YoY)   Pricing power
Data center / cloud revenue  $1.47B                Cloud = 89% of total     AI driving enterprise flash
Q4 guidance                  Above guidance        $3.65B (raised)          No demand softening

SanDisk's 251% YoY revenue growth is extraordinary by any measure. The 97% sequential growth confirms Q2 2026 contract price increases (70–75% QoQ per TrendForce) are flowing directly into NAND supplier revenue. The data center revenue figure ($1.47B for SanDisk alone, in a single quarter) quantifies the enterprise flash spending that storage vendors must absorb. [CROSS-VALIDATED]

Micron rally (May 8): Micron stock rallied 11% alongside SanDisk (+11%) on May 8, driven by renewed AI memory supercycle bull thesis. Analysts see no demand softening through at least H1 2027. [MULTI-PASS]

HDD Value Proposition Strengthening as NAND Premium Widens

VDURA's analysis "Follow the HDDs" (May 4) argues that the NAND pricing crisis is structurally shifting AI storage architectures toward HDD-based capacity tiers. With NAND contract prices up 70–75% QoQ and HDD pricing stable, the cost-per-TB gap has widened dramatically. For AI data lake, checkpoint storage, and training data staging, HDD-based tiers offer 5–10x lower cost per TB than NAND. Seagate's 44 TB Mozaic 4+ drives are shipping to hyperscalers, and WDC's 89% cloud revenue mix confirms the enterprise HDD market remains robust. [MULTI-PASS]

Procurement guidance for EMEA, LATAM, and Middle East customers: The supercycle data from this week strengthens the procurement urgency narrative: (1) NAND suppliers are raising prices and printing record revenue — there is no price relief in sight, (2) HDD-based tiering (ONTAP FabricPool to HDD or object tiers) is now the cost-optimal architecture for AI data lifecycle, (3) ONTAP's 4:1+ data reduction directly reduces NAND consumption at crisis pricing. Every deduplication ratio point is worth more than it was six months ago.
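
The arithmetic behind point (3) is worth making explicit. A back-of-envelope sketch, with hypothetical $/TB figures chosen only to mirror the ~75% QoQ increase cited above, not quoted market prices:

```python
# Back-of-envelope sketch: effective cost per usable TB under data reduction,
# and why the absolute saving grows as raw NAND prices rise. All $/TB figures
# are illustrative assumptions.

def effective_cost_per_usable_tb(raw_cost_per_tb: float,
                                 reduction_ratio: float) -> float:
    """Cost per usable TB when dedup/compression achieves reduction_ratio:1."""
    return raw_cost_per_tb / reduction_ratio

pre_crisis, crisis = 80.0, 140.0   # hypothetical raw NAND $/TB, +75% QoQ
for ratio in (1.0, 4.0):
    cost = effective_cost_per_usable_tb(crisis, ratio)
    print(f"{ratio:.0f}:1 reduction at ${crisis:.0f}/TB raw -> ${cost:.0f}/usable TB")

# The absolute saving from 4:1 reduction scales with the raw price:
saving_before = pre_crisis - effective_cost_per_usable_tb(pre_crisis, 4.0)
saving_now = crisis - effective_cost_per_usable_tb(crisis, 4.0)
print(f"saving per usable TB: ${saving_before:.0f} pre-crisis vs ${saving_now:.0f} now")
```

The same 4:1 ratio saves far more dollars per usable TB at crisis pricing than it did before, which is the quantitative form of "every deduplication ratio point is worth more than it was six months ago."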

Sources: SanDisk Q3 FY2026, WDC Q3 FY2026, VDURA — Follow the HDDs


9. EMEA, LATAM & Middle East Focus

Europe — IREN Acquires Nostrum Group in Spain (490 MW NVIDIA DSX-Aligned)

IREN's acquisition of Nostrum Group (May 7) is the most significant European AI data center infrastructure signal this week. Key details: approximately 490 MW of secured, grid-connected power in Spain, additional development pipeline, NVIDIA DSX-aligned infrastructure commitment (per the same-day NVIDIA partnership), and a stated intent to build AI cloud platform infrastructure in Europe.

Spain offers favorable conditions: constructive AI policy, streamlined permitting (relative to Germany or the Netherlands), robust Mediterranean connectivity, and Europe's highest solar energy capacity. IREN's entry joins a growing list of AI-focused data center developers targeting Spain and the Iberian Peninsula. [CROSS-VALIDATED]

NetApp relevance: IREN's European entry via Spain creates a named, addressable storage opportunity at scale. A 490 MW campus, built to NVIDIA DSX specifications, will require petabytes of high-performance storage for AI training, inference, and data pipeline workloads. NetApp EMEA should: (1) identify IREN's European procurement contacts, (2) propose ONTAP/AIDE validation against DSX reference design specifications for the Nostrum/Spain deployment, (3) position ONTAP's multi-protocol support (NFS + S3 + iSCSI) as reducing the number of storage silos in a DSX build. Competitor risk: IREN's NVIDIA relationship gives DDN, WEKA, and VAST Data a warm introduction.

Europe — Sovereign AI Infrastructure Continues to Build

Broader European sovereign AI developments during this period: Sovereign AI (S-AI), a UK-based provider, has selected Accenture and Palantir to build and scale AI data centers across EMEA using Dell AI Factory + NVIDIA hardware (announced January 2026, progressing during Q2). Gartner predicts global sovereign cloud spending will hit $80 billion in 2026, up 35.6%. Deutsche Telekom's sovereign AI factory (one of Europe's largest, announced at GTC 2026) continues buildout. [MULTI-PASS]

NetApp relevance: The S-AI initiative uses Dell AI Factory as its hardware backbone — this is a competitive challenge. In sovereign AI deals where Dell holds the infrastructure stack position, NetApp must compete as the storage layer against Dell's own PowerScale/Lightning. The counter-positioning: ONTAP offers multi-cloud data mobility that Dell's storage portfolio cannot match. A sovereign AI deployment on Dell + ONTAP can replicate data to ANF, GCNV, or FSx for ONTAP for cloud burst or DR — Dell's storage stays locked to Dell infrastructure.

Middle East — Stargate UAE 200 MW Phase 1 on Track for Q3 2026; AWS HUMAIN $5B AI Zone

Stargate UAE: The 200 MW first phase of the Stargate UAE campus (G42 + OpenAI + Oracle + NVIDIA + Cisco + SoftBank) remains on track for Q3 2026 delivery. Construction is well advanced with more than 5,000 workers deployed. The facility will host up to 35,000 NVIDIA Grace Blackwell GB300 processors. The full campus scales to 1 GW as part of a 5 GW, 19.2 km² site — the world's largest AI campus outside the US. [CROSS-VALIDATED]

AWS HUMAIN AI Zone: AWS and HUMAIN are investing $5+ billion in a dedicated "AI Zone" in Saudi Arabia, combining advanced cloud infrastructure, high-performance semiconductors, and UltraCluster networks. The Saudi AWS region will become available in 2026. The AI Zone will support Arabic Large Language Models (ALLaM) and broader AI adoption across the Gulf. PwC estimates AI will contribute $130 billion to Saudi Arabia's economy by 2030. [CROSS-VALIDATED]

Microsoft Azure Saudi Arabia: Microsoft is expected to bring its Azure region live in Saudi Arabia by Q4 2026, nearly quadrupling local data center computing capacity.

NetApp relevance: The Middle East remains the highest-growth region for data center infrastructure globally (28.5% YoY growth in 2026, per the colocation report covered last week). The Stargate UAE and AWS HUMAIN deployments represent two distinct storage opportunities: (1) Stargate is NVIDIA-centric (Grace Blackwell), requiring NVIDIA Certified Storage — NetApp's STX partnership is relevant; (2) AWS HUMAIN will run on AWS infrastructure — FSx for ONTAP is the natural NetApp play for enterprise workloads. For Saudi, the upcoming Azure region creates ANF opportunities. Field teams should engage Microsoft's Saudi deployment team to ensure ANF is available at region launch.

Latin America — Mexico Sovereign AI HPC Platform; LATAM Power Grid Constraints

AI Green Data Centers (AI-GDC) × Rafay Systems (May 6). AI-GDC, a sovereign AI cloud platform based in Monterrey, Mexico, partnered with Rafay Systems to deliver Latin America's first HPC and AI platform. Construction of a flagship green data center campus is scheduled to begin in 2026, scaling to 100 MW by 2030. AI-GDC is a founding member of Nuevo León's regional AI cluster, positioning Monterrey as LATAM's AI hub. The focus is digital sovereignty: Mexican and Latin American data stays within regional borders. [SINGLE SOURCE]

LATAM power grid constraints persist. Power grids across Latin America continue to strain under AI data center demand: Mexico (scheduled power cuts in El Marqués), Brazil (transmission bottlenecks), and Colombia all face grid-side constraints on new capacity. Brazil hosts 37.2% of LATAM's data centers, followed by Chile (13.4%) and Mexico (12.3%). The Rio AI City project at the former Olympic Park site in Rio de Janeiro continues to advance. [MULTI-PASS]

NetApp relevance: The Mexico sovereign AI platform opportunity is real but early-stage (100 MW by 2030). ONTAP's data sovereignty capabilities (data residency, SnapLock compliance, in-country replication) align with AI-GDC's digital sovereignty value proposition. The broader LATAM narrative continues: power constraints make storage efficiency a procurement differentiator. Every watt saved on storage is a watt available for GPU compute.

[UPDATE] Strait of Hormuz Undersea Cable Crisis

No change from prior weeks. Cable repair vessels remain unable to operate in the Gulf. The situation remains at "crisis" level with no de-escalation developments during May 4–10. Operational guidance from prior reports remains valid.

Sources: IREN × Nostrum Press Release, G42 Stargate UAE Update, AWS HUMAIN Saudi Arabia, Rafay × AI-GDC


10. Open Source & Research

"Storage as AI Bottleneck" Thesis Gains Academic and Industry Support

The convergence of VDURA, WEKA, and ODBMS.org publications this week (detailed in Section 7) represents a narrative shift with research backing. The 57% survey figure (storage as top AI bottleneck) is increasingly cited in vendor and analyst publications. This week's publications add to the LMCache "KV cache as first-class data object" thesis from last week, building a coherent story: AI inference is a storage problem, and enterprise storage vendors are central to solving it. [MULTI-PASS]

VDURA: HDD vs. Flash Economics for AI Storage

VDURA's "Follow the HDDs" (May 4) analysis uses SanDisk/WDC/Seagate earnings data to argue that HDD-based capacity tiers are winning for AI data lakes and checkpoint storage. The analysis quantifies the cost-per-TB divergence between NAND and HDD, showing that the premium has widened to the point where HDD-based architectures are structurally favored for bulk AI data. VDURA's V5000 flash + WD Ultrastar HDD hybrid platform is positioned as a solution. [SINGLE SOURCE]

NetApp relevance: The HDD economics argument supports ONTAP FabricPool adoption. Customers can keep hot AI data on AFF (flash) and automatically tier cold data to HDD-based or S3-compatible tiers. The NAND pricing crisis makes the FabricPool TCO story more compelling than ever.
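The tiering economics above can be made concrete with a back-of-envelope calculation. A minimal sketch, assuming purely hypothetical $/TB rates (substitute live quotes); the structure of the comparison, not the numbers, is the point:

```python
# Back-of-envelope FabricPool TCO sketch: hot data stays on flash,
# cold data auto-tiers to an HDD/S3 capacity tier. All prices are
# hypothetical placeholders, not quotes.
FLASH_PER_TB = 180.0    # $/TB/month, hypothetical all-flash rate
HDD_TIER_PER_TB = 22.0  # $/TB/month, hypothetical capacity-tier rate

def monthly_cost(total_tb: float, hot_fraction: float) -> float:
    """Blended $/month when only the hot fraction stays on flash."""
    hot = total_tb * hot_fraction
    cold = total_tb - hot
    return hot * FLASH_PER_TB + cold * HDD_TIER_PER_TB

all_flash = monthly_cost(1000, 1.0)  # 1 PB entirely on flash
tiered = monthly_cost(1000, 0.2)     # 20% hot, 80% auto-tiered
print(f"all-flash: ${all_flash:,.0f}/mo, tiered: ${tiered:,.0f}/mo, "
      f"savings: {1 - tiered / all_flash:.0%}")
```

With these placeholder rates a 1 PB estate that is 20% hot drops from $180,000/mo to $53,600/mo; the wider the NAND/HDD price gap grows, the larger that delta becomes, which is the core of the FabricPool TCO argument.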

No Significant New arXiv Papers, Framework Releases, or Funding Rounds

No material new open-source storage/AI papers, framework releases, or startup funding rounds were identified during May 4–10. vLLM and SGLang continue development; no major version releases this week.

Sources: VDURA — HDD vs Flash


11. Regulatory & Compliance

EU AI Omnibus — Provisional Deal Reached May 7: Deadlines Deferred

What changed: At 4:30 a.m. on May 7, 2026, the Council and Parliament reached a provisional agreement on the Digital Omnibus on AI, reversing the April 28 trilogue collapse. The deal was reached after intense negotiations on the Annex I conformity assessment architecture that had blocked the April 28 session. [CROSS-VALIDATED]

New compliance timeline:

| Obligation | Previous Deadline | New Deadline | Change |
| --- | --- | --- | --- |
| Standalone high-risk AI (Annex III) | August 2, 2026 | December 2, 2027 | +16 months |
| Product-embedded AI (Annex I) | August 2, 2026 | August 2, 2028 | +24 months |
| NCII / CSAM AI ban | N/A (new) | December 2, 2026 | New prohibition |
| Article 50(2) — generative AI watermarking | August 2, 2026 | August 2, 2026 (grace to Dec 2, 2026 for pre-existing systems) | Unchanged |

Key provisions beyond timeline changes:

  • New prohibition on AI systems generating non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM), including "nudifiers"
  • Watermarking for generative AI content remains at August 2, 2026 for new systems; transitional period to December 2, 2026 for pre-existing systems
  • The agreement is provisional and requires formal adoption by both institutions before becoming law; both intend to do so before August 2, 2026

What this does NOT change:

  • General-purpose AI (GPAI) transparency obligations — already in effect since August 2, 2025
  • Prohibited AI practices (social scoring, biometric categorization) — already in effect since February 2, 2025
  • The EU AI Act itself remains fully in force; only specific high-risk timelines are deferred

NetApp compliance positioning — updated strategy:

The Omnibus deal transforms the compliance conversation. The "91 days to comply" urgency from last week's briefing is now "19 months to comply" for Article 12 logging infrastructure. However, the strategic positioning should pivot, not retreat:

  1. SnapLock Compliance mode + FPolicy remain the technical answer for Article 12 automatic logging. The timeline extension means customers can do a more thorough implementation — which actually favors enterprise storage over rushed point solutions.

  2. First-mover advantage narrative: "You now have 19 months instead of 91 days. Use that time to build proper logging infrastructure with immutable audit trails, not a last-minute bolt-on. SnapLock and FPolicy give you a production-grade compliance platform that will be ready for December 2027 — and for every subsequent regulation."

  3. Watermarking obligation is unchanged at August 2, 2026. Any customer generating synthetic audio, image, video, or text content needs machine-readable watermarking infrastructure now. This is a shorter-term compliance driver than Article 12.

  4. NCII prohibition (December 2, 2026): New compliance requirement for AI system providers. Less directly storage-related, but audit trail capabilities are relevant for demonstrating compliance.
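The first-mover pitch above turns on provisioning WORM storage for audit logs. A minimal sketch of what that looks like over the ONTAP REST API, assuming the volume resource's snaplock block as commonly documented; the SVM, aggregate, size, and ISO 8601 retention value are hypothetical and should be validated against the target ONTAP release:

```python
# Sketch: provisioning an immutable audit-log volume for Article 12
# logging. Field names follow the ONTAP REST API volume resource as
# commonly documented; names and retention values are hypothetical.
def snaplock_volume_payload(name: str, svm: str, aggregate: str,
                            size_gb: int, default_retention: str) -> dict:
    """POST body for /api/storage/volumes creating a SnapLock
    Compliance volume (WORM: records cannot be altered or deleted
    until their retention period expires)."""
    return {
        "name": name,
        "svm": {"name": svm},
        "aggregates": [{"name": aggregate}],
        "size": size_gb * 1024**3,  # bytes
        "snaplock": {
            "type": "compliance",   # vs. the softer "enterprise" mode
            "retention": {"default": default_retention},
        },
    }

# Retain AI decision logs well past the December 2027 deadline.
payload = snaplock_volume_payload("ai_audit_logs", "compliance_svm",
                                  "aggr1", 500, "P7Y")  # 7-year retention
```

An FPolicy policy on the same SVM would then stream file-access events into this volume, giving the tamper-proof trail Article 12 implies; the point of starting now is that retention and event-capture design decisions are hard to retrofit under deadline pressure.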

[UPDATE] EU CADA Proposal — Expected May 27

The Cloud and AI Development Act proposal remains on track for May 27, 2026 — now only 17 days away. CADA will create binding requirements for EU-hosted data infrastructure, directly impacting NetApp's cloud services. No new developments this week beyond the May 27 timeline confirmation.

[UPDATE] Middle East Data Protection Frameworks

No new regulatory developments from UAE PDPL, Saudi PDPL, or Israel Privacy Protection Regulations during May 4–10.

Sources: EU Council — Omnibus Agreement, Hogan Lovells — AI Rules Delay, Bird & Bird — Omnibus Deal, Modulos — Omnibus Deal, TechPolicy.Press — Omnibus Changes


12. Methodology & Confidence Markers

Research Passes Completed

| Pass | Method | Queries | Focus Areas |
| --- | --- | --- | --- |
| Pass 1 | WebSearch — broad scan | 16 | NVIDIA, NetApp, Dell, Everpure, VAST/DDN/WEKA, HPE, IBM, AWS, Azure, GCP, NAND flash, EU regulation, EMEA sovereign AI, Middle East, LATAM, AI data pipeline |
| Pass 2 | WebSearch — deep dives | 8 | NVIDIA × IREN (5 GW), NVIDIA × Corning ($3.2B), EU AI Omnibus May 7 deal, NetApp Q4 earnings schedule, Dell Technologies World preview, VAST FWD recap, WEKA inference, IREN Nostrum acquisition |
| Pass 3 | WebSearch — emerging signals | 8 | VDURA HDD economics, Blocks & Files / StorageNewsletter roundup, S-AI Accenture Palantir EMEA, NVIDIA DSX storage specs, AWS HUMAIN Saudi, Stargate UAE, NAND Q3 forecast, Azure storage partner program |
| Pass 4 | Perplexity (reason × 1) + WebSearch gap-fill | 5 | Cross-validation of key claims, SK hynix 321-layer QLC, SanDisk/WDC earnings, NVIDIA $40B investments, Rafay/AI-GDC Mexico |

Total unique queries executed: 33 WebSearch + 1 Perplexity = 34 queries

Confidence Summary

| Confidence Level | Count | Description |
| --- | --- | --- |
| CROSS-VALIDATED | 12 | Confirmed by 2+ independent sources across different research passes |
| MULTI-PASS | 10 | Found in multiple passes but within the same source family |
| SINGLE SOURCE | 6 | Only one source found; treat with appropriate caution |
| CONFLICTING SOURCES | 0 | No conflicting claims identified this week |

Deduplication Actions

The following signals from the May 3 briefing were intentionally excluded as "already covered" unless a material new development occurred this week:

  • AWS S3 Files GA — no new developments beyond adoption (already covered May 3)
  • Everpure 70% price increase — no new pricing data; ongoing market narrative (covered May 3)
  • LMCache "KV cache as data object" — referenced as context for this week's "storage as bottleneck" narrative (covered May 3)
  • Africa & ME colocation market $11.1B — no update (covered May 3)
  • Dell Lightning File System availability — no update; DTW next week (covered May 3)
  • HPE Alletra X10000 SAS Innovate — no update (covered May 3)
  • TrendForce Q2 2026 NAND 70–75% QoQ — referenced as context in SanDisk/WDC analysis (covered May 3)
  • ANF cool access Premium/Ultra preview — no update (covered May 3)
  • ByteDance $38B Brazil data center — no update (covered May 3)
  • Strait of Hormuz cable crisis — situation unchanged; brief status update included (covered May 3 and April 26)
  • VAST Data $30B Series F — no update (covered April 26)
  • DDN Managed Lustre 10 TB/s / KV-cache-on-Lustre — no update (covered April 26)
  • Google TurboQuant — no update (covered May 3)
  • GCNV Flex Unified GA / Next '26 announcements — no update (covered April 26)
  • NVIDIA Vera Rubin / BlueField-4 timeline — unchanged; brief status update included (covered May 3)
  • NetApp AIDE early access / STX partner confirmation — unchanged; brief status update included (covered May 3)
  • EU AI Omnibus April 28 trilogue failure: SUPERSEDED by May 7 provisional agreement (this is a material update, fully covered in this report)

Key Information Gaps

  • Dell Technologies World (May 18–21) announcements are unknown — this is the single highest competitive risk event for next week's briefing
  • NetApp FY2026 Q4 earnings (May 28) will provide critical data on AIDE traction, NAND cost impact, and cloud ARR — not yet available
  • AIDE broad availability timeline ("early summer 2026") needs a concrete date; lack of specificity is a competitive vulnerability against WEKA NeuralMesh (already GA) and VAST Polaris (announced at FWD)
  • NetApp has not published STX-specific benchmarks to compete with WEKA's "4–20x TTFT improvement" claims
  • The EU AI Omnibus provisional agreement still requires formal adoption — risk (low) that it could be modified before formal vote
  • IREN's European storage vendor selection process is unknown — early engagement is critical before decisions are locked
  • Everpure Accelerate 2026 (June 16–18) announcements are unknown — expect major product updates

This briefing was generated on May 10, 2026. Next edition: May 17, 2026.

Research pipeline: 4-pass multi-source (WebSearch × 33 + Perplexity × 1). All claims tagged with confidence markers. Source URLs provided throughout.

Key upcoming events: Dell Technologies World (May 18–21), NetApp FY2026 Q4 Earnings (May 28), EU CADA Proposal (May 27), Everpure Accelerate (June 16–18).