AWS Glossary
AWS Lambda
Serverless compute service that runs code in response to events without provisioning or managing servers.
Definition
AWS Lambda is a serverless compute service that runs your code in response to events — HTTP requests, database changes, file uploads, scheduled timers — without you provisioning or managing any servers. You upload your code, configure a trigger, and Lambda handles capacity, scaling, patching, and availability automatically. You pay only for the compute time consumed (billed per millisecond).
How Lambda Works
- Trigger: An event source invokes your function — API Gateway request, S3 object upload, DynamoDB stream, SQS message, EventBridge rule, etc.
- Execution environment: Lambda provisions a container (execution environment) with your runtime, code, and configuration.
- Function runs: Your handler processes the event and returns a response.
- Scale: Lambda scales automatically — from 0 to thousands of concurrent executions in seconds.
Invocation models:
- Synchronous: Caller waits for response (API Gateway, ALB)
- Asynchronous: Lambda queues the event and processes it (S3 events, SNS)
- Stream-based: Lambda polls and processes records in batches (Kinesis, DynamoDB Streams, SQS)
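The synchronous model above can be sketched with a minimal handler. This is an illustrative example, not tied to any real deployment: the event shape follows the API Gateway proxy integration, where query parameters arrive under `queryStringParameters` and the response must include `statusCode` and `body`.

```python
import json

def handler(event, context):
    # API Gateway (proxy integration) puts query parameters under
    # "queryStringParameters"; it is None when the request has none.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# The same function can be exercised locally with a hand-built event
# (the context object is unused here, so None suffices):
if __name__ == "__main__":
    print(handler({"queryStringParameters": {"name": "Lambda"}}, None)["body"])
```

In a real deployment you would point API Gateway at this function; Lambda calls `handler(event, context)` for each request.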
Runtimes
Lambda supports managed runtimes for Node.js, Python, Java, .NET, Go, and Ruby, plus custom runtimes built against the Lambda Runtime API (typically packaged as Lambda Layers or container images). AWS updates managed runtimes and applies security patches automatically.
Key Limits (2025/2026)
- Memory: 128 MB – 10 GB
- Timeout: Up to 15 minutes per invocation
- Ephemeral storage (/tmp): Up to 10 GB
- Concurrency: 1,000 per region by default (quota increase available)
- Deployment package: 50 MB (zipped), 250 MB (unzipped), or via container images up to 10 GB
Cold Starts
A cold start occurs when Lambda must provision a new execution environment. It adds latency (typically 100 ms – 1 s, depending on runtime and package size). Mitigations:
- Provisioned Concurrency: Pre-warms execution environments to eliminate cold starts for latency-sensitive workloads
- SnapStart (Java): Snapshots initialized execution environment; reduces Java cold starts by up to 90%
- Use lightweight runtimes (Node.js, Python) or minimize package size to reduce cold start duration
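A related cost-amortization pattern: code outside the handler (module-level init) runs once per cold start, and warm invocations reuse it. A minimal sketch, with a counter standing in for an expensive step such as creating an SDK client or loading a model:

```python
INIT_COUNT = 0  # tracks how many times init code has run in this environment

def _expensive_init():
    # stand-in for creating an SDK client, opening a DB connection, etc.
    global INIT_COUNT
    INIT_COUNT += 1
    return {"client": "ready"}

# Module-level code executes once, during the cold start:
RESOURCES = _expensive_init()

def handler(event, context):
    # warm invocations reuse RESOURCES without paying the init cost again
    return {"init_count": INIT_COUNT, "client": RESOURCES["client"]}
```

Calling `handler` repeatedly in the same environment leaves `INIT_COUNT` at 1; only a new cold start (a new environment) runs the init again.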
Lambda Managed Instances (New 2025)
Lambda Managed Instances let you run Lambda functions on EC2 compute while maintaining the serverless programming model:
- Access to specialized hardware (GPU instances, local NVMe storage)
- EC2 pricing models (Savings Plans, Reserved Instances)
- Functions still deploy and scale like standard Lambda
- Designed for ML inference workloads requiring dedicated hardware
Lambda Durable Functions (New 2025)
Lambda Durable Functions enable long-running, multi-step workflows coordinated entirely within Lambda:
- Coordinate steps that run over seconds to up to one year
- No idle compute charges between steps
- Built-in state management and retry logic
- Alternative to AWS Step Functions for code-centric teams
Common Mistakes
Mistake 1: Opening a new database connection inside the handler on every invocation. Each execution environment holds its own connection, so at high concurrency Lambda can exhaust database connection limits. Initialize the connection outside the handler so warm invocations reuse it, and use RDS Proxy or a connection pooling library.
Mistake 2: Ignoring function timeout. Lambda has a 15-minute maximum. For workloads that might exceed this, use Step Functions or Lambda Durable Functions to orchestrate multiple invocations.
Mistake 3: Over-allocating memory. Lambda CPU scales with memory allocation — but not every function needs 3 GB. Use AWS Lambda Power Tuning (open-source tool) to find the optimal memory for cost/performance balance.
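To see why memory tuning (Mistake 3) matters, a back-of-envelope cost model helps: Lambda compute cost scales with GB-seconds (memory × duration). The rates below are assumptions (a us-east-1 ballpark; check current pricing), and the function is a hypothetical helper for illustration.

```python
# Assumed rates -- verify against the current AWS Lambda pricing page.
PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_REQUEST = 0.0000002

def monthly_cost(memory_mb: int, avg_duration_ms: float, invocations: int) -> float:
    """Rough monthly Lambda cost: GB-seconds of compute plus per-request charge."""
    gb_seconds = (memory_mb / 1024) * (avg_duration_ms / 1000) * invocations
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# e.g. monthly_cost(1024, 200, 1_000_000) vs monthly_cost(512, 200, 1_000_000)
```

Halving memory roughly halves compute cost, but only if duration does not grow to match; tools like AWS Lambda Power Tuning measure that trade-off empirically.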
Related AWS Services
- Amazon API Gateway: HTTP trigger for Lambda (REST and HTTP APIs)
- AWS Step Functions: Orchestrate multiple Lambda functions as workflows
- Amazon EventBridge: Event bus for triggering Lambda on schedule or application events
- Amazon SQS: Queue-based trigger for fan-out and rate limiting
- RDS Proxy: Database connection pooling for Lambda-to-RDS