AWS Service Announcements Worth Knowing: March 2026 Edition

AWS News · Palaniappan P · 7 min read

Quick summary: Nova Forge SDK, Lambda Durable Functions, Graviton5, Trainium3 UltraServers, Route 53 Global Resolver GA, and more — the AWS announcements that actually matter from March 2026.


March 2026 was a dense month for AWS. re:Invent 2025 launches are now fully rolling out, and AWS shipped a meaningful set of updates across AI, compute, databases, and developer tooling. Here’s a practitioner-focused breakdown of what changed, why it matters, and what to look at first.

AI & Machine Learning

Nova Forge SDK — Fine-Tune Your Own Nova Models

AWS released the Nova Forge SDK in March, giving enterprises a dedicated path to fine-tune and customize Amazon Nova foundation models on their own data. Unlike generic prompt engineering, Forge lets you adapt Nova’s reasoning and generation behavior to domain-specific vocabulary, tone, and task patterns — and deploy the result directly within Amazon Bedrock without managing separate infrastructure.

This is the piece that was missing from the Nova launch. Nova models are strong out of the box, but regulated industries (financial services, healthcare, legal) have always needed adaptation before production. Forge closes that gap.
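To make the fine-tuning path concrete, here is a hypothetical sketch of what a Forge job submission might look like. The parameter names, model ID, and payload shape below are assumptions for illustration, not the actual SDK surface; check the Nova Forge documentation before building against any of it.

```python
# Hypothetical sketch of a Nova Forge fine-tuning job payload.
# All field names and the model ID are assumptions, not the real SDK.

def build_forge_job(job_name: str, base_model: str, training_data_s3: str,
                    output_s3: str, role_arn: str) -> dict:
    """Assemble a fine-tuning job request (hypothetical shape)."""
    return {
        "jobName": job_name,
        "baseModelIdentifier": base_model,           # e.g. an Amazon Nova model ID
        "roleArn": role_arn,                         # IAM role the job assumes
        "trainingDataConfig": {"s3Uri": training_data_s3},
        "outputDataConfig": {"s3Uri": output_s3},
        "hyperParameters": {
            "epochCount": "3",
            "learningRate": "0.00001",
        },
    }

job = build_forge_job(
    job_name="support-tone-adapter",
    base_model="amazon.nova-pro-v1:0",               # illustrative ID
    training_data_s3="s3://my-bucket/train.jsonl",
    output_s3="s3://my-bucket/output/",
    role_arn="arn:aws:iam::123456789012:role/ForgeJobRole",
)
print(job["trainingDataConfig"]["s3Uri"])
```

The point of the sketch is the workflow shape: base model in, domain data in, a deployable Bedrock artifact out.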

NVIDIA Nemotron 3 Super Lands on Bedrock

The NVIDIA Nemotron 3 Super model is now available on Amazon Bedrock (announced March 23, 2026). It joins an increasingly crowded model roster that already includes Mistral, Google Gemini, OpenAI GPT models, MiniMax, Moonshot, and Qwen.

AWS is clearly positioning Bedrock as a model marketplace rather than a single-model platform. For teams evaluating models, this is good news: you can benchmark across providers without changing infrastructure, IAM policies, or billing setup.
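One way to exploit that: run the same prompt against several models through the one Converse API surface. The request builder below matches boto3's bedrock-runtime Converse shape; the Nemotron model ID is a placeholder assumption, so confirm the real ID in the Bedrock console.

```python
# Sketch: benchmarking one prompt across multiple Bedrock models.
# Request shape follows boto3's bedrock-runtime converse() kwargs;
# the Nemotron model ID is a placeholder assumption.

def build_converse_request(model_id: str, prompt: str,
                           max_tokens: int = 512) -> dict:
    """Build kwargs for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.2},
    }

MODELS = [
    "nvidia.nemotron-3-super-v1:0",   # placeholder -- check the console
    "mistral.mistral-large-2402-v1:0",
]

requests = [build_converse_request(m, "Summarize our Q1 incident report.")
            for m in MODELS]

# To actually run the comparison (requires AWS credentials):
# import boto3
# client = boto3.client("bedrock-runtime")
# for req in requests:
#     resp = client.converse(**req)
#     print(req["modelId"], resp["output"]["message"]["content"][0]["text"][:80])
```

Because IAM policies and billing are shared across the roster, swapping a model is a one-string change.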

Bedrock AgentCore + Nova Act: The Stateful Agent Stack

Two updates combine to form something meaningful:

Bedrock AgentCore received stateful runtime improvements with memory streaming notifications, enabling agents to maintain long-term context across sessions rather than starting fresh each time. This is critical for any agent that handles multi-session workflows — onboarding, support escalations, iterative research tasks.

Nova Act hit general availability for browser-based agent automation. Early customer data shows 90% reliability on UI automation workflows — a number that matters because agent reliability directly affects whether you can trust autonomous workflows in production.

Together, these two updates define AWS’s answer to the autonomous agent question: stateful memory in Bedrock, reliable execution in the browser via Nova Act.
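To make "stateful" concrete, here is a toy session-memory store illustrating the concept of context persisted across sessions. This is purely illustrative and is not the AgentCore API; it just shows why an agent that can recall prior sessions behaves differently from one that starts fresh.

```python
# Illustrative only: a toy session-memory store demonstrating the
# concept of stateful agents. Not the AgentCore API.

from collections import defaultdict

class SessionMemory:
    def __init__(self):
        self._store = defaultdict(list)   # actor_id -> list of memory events

    def append(self, actor_id: str, event: str) -> None:
        self._store[actor_id].append(event)

    def recall(self, actor_id: str, last_n: int = 5) -> list:
        """Return the most recent events for an actor, oldest first."""
        return self._store[actor_id][-last_n:]

memory = SessionMemory()
memory.append("user-42", "opened support ticket #881")
memory.append("user-42", "escalated to tier 2")

# A new session for the same user starts with prior context available:
print(memory.recall("user-42"))
```

In AgentCore the equivalent state lives in the managed runtime, with memory streaming notifications telling your agent when new context lands.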

Web Grounding for Nova Models

AWS added Web Grounding as a built-in tool for Nova models on Bedrock — a turnkey RAG option that lets Nova models pull in up-to-date information from the web without you building and maintaining your own retrieval pipeline. The model decides when to retrieve, retrieves relevant content, and incorporates it into the response.

For teams building assistants or Q&A tools over knowledge that changes frequently, this reduces significant infrastructure overhead.

Elastic Beanstalk Gets AI-Powered Troubleshooting

A smaller but telling announcement: Elastic Beanstalk can now feed degraded environment data (recent events, instance health, logs) directly to Amazon Bedrock and return step-by-step troubleshooting recommendations.

Beanstalk is not a new service. The fact that AWS is wiring Bedrock into it signals the broader strategy: AI-assisted operations is being pushed into every managed service, not just purpose-built AI products.


Compute & Serverless

Lambda Durable Functions: Workflows Up to 1 Year

Lambda Durable Functions (announced at re:Invent 2025, now rolling out) lets you build multi-step workflows that run reliably for up to one year — without paying for idle compute time between steps. The runtime handles checkpointing, retries, and state persistence.

This sits in interesting territory between Step Functions and traditional Lambda. If your workflow has long pauses between steps (waiting for human approval, polling an external API, processing large batches), Durable Functions eliminates the orchestration overhead while keeping costs tied to actual execution.
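The core idea, checkpointed steps that survive a restart, can be sketched in a few lines. This mimics the concept of durable execution, not Lambda's actual programming model: completed steps are recorded, so a resume skips them instead of re-running the whole workflow.

```python
# Conceptual sketch of durable execution: steps checkpoint their results,
# so a resume continues from the last completed step. This imitates the
# idea only -- it is not the Lambda Durable Functions API.

def run_workflow(steps, checkpoints: dict) -> dict:
    """Run named steps in order, skipping any already checkpointed."""
    for name, fn in steps:
        if name in checkpoints:
            continue                      # completed before the "crash"
        checkpoints[name] = fn()          # persist result as a checkpoint
    return checkpoints

steps = [
    ("validate", lambda: "ok"),
    ("wait_for_approval", lambda: "approved"),
    ("finalize", lambda: "done"),
]

# Simulate a prior run that stopped after the first step:
state = {"validate": "ok"}
# Resume: only the remaining two steps execute.
state = run_workflow(steps, state)
print(state)
```

In the real service, that checkpoint store is managed for you, and the gaps between steps cost nothing because no compute is running.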

Lambda Managed Instances Now Supports Rust

Lambda Managed Instances — the capability that lets you run Lambda functions on EC2 instances you control while retaining serverless operational simplicity — now supports Rust as a runtime. This matters for teams with performance-critical functions that need more control over the underlying compute, particularly ML inference or high-throughput data processing workloads.

Graviton5 CPU: 192 Cores, 33% Lower Latency

The Graviton5 chip is now in broader regional rollout. The headline numbers: 192 processor cores in a dense design that cuts inter-core communication latency by up to 33% compared to Graviton4, with higher memory bandwidth.

For compute-heavy workloads — big data processing, HPC, in-memory databases — this is a meaningful jump. Graviton5 instances are also AWS’s strongest argument for moving off x86 on price-performance grounds.

Trainium3 UltraServers Hit GA

Trainium3 UltraServers are now generally available for large-scale ML training. Powered by AWS’s own Trainium3 chips, UltraServers are designed for the scale of foundation model training — dense interconnects, high memory bandwidth, and deep integration with SageMaker.

For teams doing serious pre-training or fine-tuning at scale, this gives you an alternative to GPU clusters that’s tightly integrated with the AWS stack.


Databases & Networking

Database Savings Plans Expand to OpenSearch & Neptune Analytics

Database Savings Plans now extend coverage to Amazon OpenSearch Service and Amazon Neptune Analytics — adding to the existing RDS and Aurora coverage. A one-year commitment unlocks up to 35% off eligible serverless and provisioned instance usage.

If you’re running OpenSearch for search or log analytics, or Neptune for graph workloads, this is straightforward cost reduction with no architectural change required.
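A back-of-envelope calculation shows the stakes, using the article's quoted up-to-35% figure. The monthly spend below is made up; your actual discount depends on which usage qualifies as eligible.

```python
# Back-of-envelope savings estimate at the quoted up-to-35% discount.
# The $4,000/month figure is an illustrative assumption.

def annual_savings(monthly_on_demand: float, discount: float = 0.35) -> float:
    """Yearly savings if all eligible usage received the full discount."""
    return round(monthly_on_demand * 12 * discount, 2)

# e.g. $4,000/month of eligible OpenSearch + Neptune Analytics usage:
print(annual_savings(4000.0))   # upper bound, assuming full eligibility
```

Even at half the headline discount, a commitment on steady-state database spend pays for the ten minutes it takes to set up.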

Route 53 Global Resolver: Now Generally Available

Amazon Route 53 Global Resolver hit general availability on March 16, 2026. It provides a globally distributed DNS resolution layer that reduces DNS latency for users across regions without requiring you to manage resolver infrastructure in each region manually.

For multi-region applications where DNS resolution latency is a real concern (real-time applications, globally distributed APIs), this removes meaningful operational complexity.

Amazon S3 Turns 20

Not an announcement exactly, but worth acknowledging: Amazon S3 turned 20 in March 2026. It launched in March 2006 and effectively created the cloud storage market. It remains the foundation of most AWS architectures and processes an almost incomprehensible volume of requests daily.


Developer Tools & Modernization

AWS Transform: AI-Powered Migration at Scale

AWS Transform is an AI-powered modernization service that learns your codebase’s patterns, automates transformations across repositories, and reportedly cuts execution time by up to 80% compared to manual migration.

The mainframe-specific variant is particularly notable: it transforms legacy COBOL and other mainframe applications into cloud-native architectures while automating the testing process — reducing timelines from years to months.

For AWS consulting partners, this is the product most likely to change how large-scale migration engagements are scoped. The 80% time reduction claim deserves skepticism on complex codebases, but even a 40% reduction in migration execution time has significant commercial impact.


Security & Pricing Changes

VPC Encryption Controls: Free Preview Is Over

Starting March 1, 2026, VPC Encryption Controls transitioned from free preview to a paid feature. This service lets you audit and enforce encryption-in-transit for all traffic flows within and across VPCs in a region — monitor mode detects unencrypted traffic, enforce mode blocks it.

If your team enabled this during the preview period and forgot about it, check your billing. It’s not an expensive feature, but it’s now on your bill.
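One way to check is a Cost Explorer query scoped to March 2026. The `get_cost_and_usage` parameter shape below is real boto3, but the SERVICE dimension value is an assumption; confirm the exact billing name for the feature in the Cost Explorer console before filtering on it.

```python
# Sketch: spotting new VPC Encryption Controls charges via Cost Explorer.
# The get_cost_and_usage parameter shape is real boto3; the SERVICE
# dimension value is an assumption -- verify the billing name first.

def encryption_controls_cost_query(start: str, end: str) -> dict:
    """Build params for Cost Explorer's get_cost_and_usage."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "Filter": {
            "Dimensions": {
                "Key": "SERVICE",
                # Assumed billing name -- check your own bill:
                "Values": ["Amazon Virtual Private Cloud"],
            }
        },
    }

params = encryption_controls_cost_query("2026-03-01", "2026-04-01")

# To run (requires credentials):
# import boto3
# ce = boto3.client("ce")
# print(ce.get_cost_and_usage(**params)["ResultsByTime"])
```

A nonzero delta versus February is your cue to decide whether to keep enforce mode or turn the feature off.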


What This Means for AWS Teams

A few patterns stand out in this month’s releases:

Bedrock is becoming infrastructure, not a feature. It’s now wired into Elastic Beanstalk for troubleshooting, available as a runtime for agents via AgentCore, and the deployment target for Nova Forge fine-tuned models. The direction is clear: Bedrock becomes the operational AI layer across the AWS stack, not just a standalone API.

AWS’s own silicon is ready for production consideration. Graviton5 and Trainium3 UltraServers together cover the two ends of the compute spectrum — general-purpose and ML-training. If you haven’t evaluated Graviton for your workloads, the Graviton5 rollout is a reasonable moment to revisit that.

Lambda is getting more powerful at the edges. Durable Functions and Managed Instances both expand what Lambda can do without forcing you to move to EC2 or ECS. The serverless model is being extended upward (longer workflows) and outward (your own hardware).

If you want help evaluating which of these announcements applies to your architecture, or need a quick assessment of where these changes create cost or performance opportunities, reach out to our team.

Palaniappan P

AWS Cloud Architect & AI Expert

AWS-certified cloud architect and AI expert with deep expertise in cloud migrations, cost optimization, and generative AI on AWS.

Tags: AWS Architecture · Cloud Migration · GenAI on AWS · Cost Optimization · DevOps

