
AWS Glossary

Amazon Bedrock

Fully managed service providing access to foundation models from Amazon, Anthropic, Meta, Mistral, and others — for building generative AI applications.


Definition

Amazon Bedrock is a fully managed service that provides API access to high-performance foundation models (FMs) from Amazon and leading AI companies — including Anthropic (Claude), Meta (Llama), Mistral AI, Cohere, and Amazon’s own Nova and Titan models. Bedrock enables you to build, tune, and deploy generative AI applications without managing ML infrastructure. All model interactions occur within your AWS account — data never leaves your environment.

Available Model Families (2025/2026)

Amazon Nova (Amazon’s flagship multimodal models)

Anthropic Claude (frontier models for reasoning, coding, and agents)

Meta Llama (open-weight model family)

Mistral AI (efficient open and commercial models)

Bedrock provides roughly 100 serverless models accessible via a single API, with no GPU infrastructure to provision and no idle hosting costs; you pay per token.
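As a sketch of what that single API looks like, the snippet below assembles a request in the shape Bedrock’s Converse API expects. The model ID and region are illustrative; substitute any model you have enabled. The actual boto3 call is shown in a comment, since it requires AWS credentials and model access.

```python
import json

def build_converse_request(model_id: str, user_text: str, max_tokens: int = 512) -> dict:
    """Assemble a request in the shape the Bedrock Converse API expects."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.2},
    }

req = build_converse_request(
    "anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    "Summarize Amazon Bedrock in one sentence.",
)
print(json.dumps(req, indent=2))

# To actually invoke the model (requires credentials and model access):
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   resp = client.converse(**req)
#   print(resp["output"]["message"]["content"][0]["text"])
```

Because the request shape is the same for every model, swapping providers is a one-line change to `modelId`.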

Core Bedrock Capabilities

Knowledge Bases (Managed RAG)
Connects foundation models to your own data sources; Bedrock manages ingestion, chunking, embedding, and vector storage so models answer with retrieved context.

Agents
Orchestrates multi-step tasks: the model plans, calls APIs and Lambda functions, and queries Knowledge Bases on your behalf.

Guardrails
Configurable safeguards that filter harmful content, block denied topics, and redact sensitive information (PII), applied consistently across models.

Model Customization
Fine-tuning and continued pre-training to adapt a base model to your domain-specific data.

Model Evaluation
Automatic and human-review benchmarks for comparing models on your own prompts and tasks.
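To make the Knowledge Bases flow concrete, here is a hedged sketch of the RetrieveAndGenerate request shape used by the bedrock-agent-runtime client. The knowledge base ID and model ARN below are placeholders, not real resources.

```python
def build_rag_request(kb_id: str, model_arn: str, question: str) -> dict:
    """Build a RetrieveAndGenerate request: Bedrock retrieves passages from
    the knowledge base, then generates an answer grounded in them."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

req = build_rag_request(
    "KB_ID_PLACEHOLDER",  # your knowledge base ID
    "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
    "What does our refund policy say?",
)
# To run it (requires credentials and an existing knowledge base):
#   client = boto3.client("bedrock-agent-runtime")
#   resp = client.retrieve_and_generate(**req)
#   print(resp["output"]["text"])
```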

Bedrock vs Self-Hosted Models

Aspect           | Amazon Bedrock            | Self-Hosted (EC2/SageMaker)
-----------------|---------------------------|--------------------------------------------
Infra management | None                      | You manage GPUs, CUDA, serving
Model variety    | 100+ models, one API      | What you deploy
Data privacy     | Stays in your AWS account | Stays on your infrastructure
Cost             | Per-token                 | EC2/SageMaker instance cost
Latency          | Optimized routing         | Variable
Fine-tuning      | Supported                 | Full control
Best for         | Fast time-to-value        | Custom models, lowest serving cost at scale

Common Mistakes

Mistake 1: Not using Guardrails in production. Without Guardrails, your application may generate harmful, off-topic, or PII-revealing content. Apply Guardrails regardless of which model you use.
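For illustration, attaching a Guardrail to a Converse call is a single extra field on the request; the guardrail ID and model ID below are placeholders.

```python
def with_guardrail(request: dict, guardrail_id: str, version: str = "DRAFT") -> dict:
    """Return a copy of a Converse request with a Guardrail attached.
    Bedrock then screens both the prompt and the model's response."""
    guarded = dict(request)
    guarded["guardrailConfig"] = {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": version,
    }
    return guarded

base = {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # illustrative
    "messages": [{"role": "user", "content": [{"text": "Hello"}]}],
}
guarded = with_guardrail(base, "GUARDRAIL_ID_PLACEHOLDER")
print(guarded["guardrailConfig"])
```

Because the Guardrail is attached to the request rather than baked into a model, the same policy applies no matter which model you route to.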

Mistake 2: Skipping model evaluation before choosing a model. Different models have dramatically different performance on specific tasks. Use Bedrock Model Evaluation to benchmark on your actual prompts before committing.

Mistake 3: Treating Bedrock as a single-model service. Bedrock’s value is model diversity: use the cheapest model that meets your quality bar. Haiku-class models are roughly 20x cheaper per token than Opus-class models; use them for classification, routing, and simple extraction tasks.
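A minimal sketch of this cost-tiered routing idea, using illustrative Anthropic model IDs (check the Bedrock console for the current identifiers in your region):

```python
# Route each task type to the cheapest model class that meets its quality bar.
ROUTES = {
    "classification": "anthropic.claude-3-haiku-20240307-v1:0",
    "extraction":     "anthropic.claude-3-haiku-20240307-v1:0",
    "routing":        "anthropic.claude-3-haiku-20240307-v1:0",
    "reasoning":      "anthropic.claude-3-opus-20240229-v1:0",
}

def pick_model(task_type: str) -> str:
    """Fall back to the strongest model when the task type is unknown."""
    return ROUTES.get(task_type, ROUTES["reasoning"])

print(pick_model("classification"))  # a Haiku-class model ID
print(pick_model("deep-analysis"))   # falls back to an Opus-class model ID
```

Because every model sits behind the same Converse API, the router only has to change a string; the rest of the request is untouched.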

Need Help with This Topic?

Our AWS experts can help you implement and optimize these concepts for your organization.