Amazon Q in QuickSight: Building Natural-Language BI for Enterprise Data Teams
Quick summary: Amazon Q in QuickSight lets business users ask data questions in plain English and get visualizations. Here's how to deploy it, secure it, and measure adoption.
Key Takeaways
- Q Topics bound the universe of questions Q can answer; curation quality (field names, synonyms, relationships) sets the quality ceiling
- Q inherits row-level and column-level security from the underlying QuickSight dataset, so verify RLS as a restricted user before rollout
- Treat rollout as the start, not the finish line: track answer rates, thumbs-up feedback, and unanswered questions to guide Topic enrichment
Most enterprise BI programs solve 80% of business questions well. The standard dashboard covers revenue by region, pipeline by stage, support ticket volume by category — the known questions that get asked every week. The other 20% are the ad-hoc questions that arrive without warning: “Can you show me top-performing accounts in the Northeast where we increased spend last quarter but churn risk is above 40%?” That question requires either a data analyst to build a new view, or a data engineer to run a custom query. Both paths take hours to days.
Amazon Q in QuickSight is AWS’s answer to that 20%. Introduced in 2023 and significantly expanded through 2024–2025, Q lets business users type those questions in plain English and receive visualizations in seconds — without involving the data team. This post covers how Q works, how to prepare datasets for it, how to secure it for multi-tenant or sensitive data environments, and how to measure whether it is actually being adopted.
How Q in QuickSight Works
Q in QuickSight has three distinct feature sets that are grouped under the “Q” branding but serve different purposes.
Q Topics are the core feature. A Topic is a registered QuickSight dataset that has been enriched with business-friendly metadata: field display names, synonyms, glossary terms, and relationship definitions between fields. When a user types a question in the Q search bar, QuickSight maps their natural language to fields in the Topic and generates a visualization automatically.
The key constraint to understand: Q Topics bound the universe of questions Q can answer. Q does not write arbitrary SQL against your data warehouse. It maps natural language to the fields and relationships defined in the Topic, generates a query against the QuickSight dataset, and returns a visualization. This is both a limitation (Q can only answer questions the Topic supports) and a significant security property (Q cannot be prompted to access data outside the Topic’s scope).
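Topic registration itself can be scripted through the QuickSight Topics API. A minimal sketch of assembling the CreateTopic request parameters; all account, topic, and dataset identifiers here are hypothetical:

```python
# Minimal sketch: build the parameters for the QuickSight CreateTopic API,
# which registers a dataset as a Q Topic. IDs and ARNs are hypothetical.
def build_create_topic_request(account_id, topic_id, name, dataset_arn):
    return {
        'AwsAccountId': account_id,
        'TopicId': topic_id,
        'Topic': {
            'Name': name,
            'Description': 'Natural-language access to sales data',
            'DataSets': [{'DatasetArn': dataset_arn, 'DatasetName': name}],
        },
    }

request = build_create_topic_request(
    '123456789012',
    'sales-analytics-topic',
    'Sales Analytics',
    'arn:aws:quicksight:us-east-1:123456789012:dataset/sales-analytics-main',
)
# Apply with: boto3.client('quicksight').create_topic(**request)
```

The enrichment that follows (field names, synonyms, relationships) is what turns this registered Topic into something users can actually query.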
Generative Q&A / Stories is the second feature: an AI-generated narrative summary that combines visualizations with written analysis. Ask “why did revenue decline in March?” and Q Stories returns a visual plus a written interpretation — which segments fell, by how much, and what correlates with the decline. This feature uses a large language model to generate the narrative text, which is where per-query generative charges can apply.
AI-powered data prep is the third feature: Q can suggest calculated fields (“you might want a profit margin field”), detect data quality issues (“this date column has 12% null values”), and suggest field type corrections. This surfaces during dataset preparation and is most useful when onboarding a new dataset into QuickSight.
QuickSight Q requires Enterprise edition. The domain of Q’s knowledge is strictly the Topics you configure — expanding coverage means expanding Topics.
Dataset Preparation: Making a Dataset Q-Ready
The single biggest determinant of Q adoption quality is how well you prepare the underlying dataset as a Q Topic. A raw data warehouse table with abbreviated column names and no business context will produce poor Q results even if the data is correct.
Field naming
Replace technical column names with full business names in the Topic configuration. Q maps user vocabulary to field names:
| Column (raw) | Q Topic display name | Why it matters |
|---|---|---|
| tot_rev_usd | Total Revenue (USD) | Users say "revenue", not "tot_rev" |
| cust_acq_dt | Customer Acquisition Date | Date context needed for time-based questions |
| rgn_cd | Region | "region" is a natural word; "rgn_cd" is not |
| churn_risk_scr | Churn Risk Score | "Score" implies numeric; Q will treat it as a measure |
| prod_sku_id | Product SKU | Clarifies this is a categorical identifier |
Synonyms
Add synonyms for every business term that has multiple natural-language representations. This is where most Q Topic configurations are under-invested:
"revenue" → synonyms: sales, income, bookings, ARR, top-line
"customer" → synonyms: account, client, company, org
"churn" → synonyms: cancellation, attrition, lost customer, churned
"quarter" → synonyms: Q1, Q2, Q3, Q4 (map to date filter logic)Synonyms are configured in the Q Topic editor. There is no programmatic bulk import — you enter them in the console, but the time investment pays off in Q accuracy.
Field types and measures vs. dimensions
Q needs to know which fields are measures (aggregatable numbers: revenue, count, score) and which are dimensions (categorical groupings: region, product, customer segment). QuickSight auto-detects based on data type, but you should review and correct these manually — a ZIP code might be detected as a numeric measure when it should be a dimension.
Relationship definitions
For Topics built from multiple datasets (a star schema with fact and dimension tables), define the relationships explicitly in the Topic:
orders.customer_id → customers.customer_id (many-to-one)
orders.product_id → products.product_id (many-to-one)
orders.region_code → regions.region_code (many-to-one)
Without relationship definitions, Q cannot answer questions that span multiple tables (e.g., “revenue by product category” where category lives in the products dimension table).
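At the dataset level, these joins are expressed in the LogicalTableMap passed to create_data_set or update_data_set. A sketch of one of the joins above; the logical table IDs and aliases are assumptions:

```python
# Sketch: a JoinInstruction entry in a dataset's LogicalTableMap, expressing
# the orders-to-customers many-to-one join. Table IDs/aliases are assumed.
orders_customers_join = {
    'Alias': 'orders_customers',
    'Source': {
        'JoinInstruction': {
            'LeftOperand': 'orders',      # logical table ID of the fact table
            'RightOperand': 'customers',  # logical table ID of the dimension
            'Type': 'LEFT',               # keep all fact rows
            'OnClause': 'orders.customer_id = customers.customer_id',
        }
    },
}
```

A LEFT join from the fact table is the usual choice here so that fact rows with missing dimension keys are not silently dropped from aggregates.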
Dataset Q-readiness checklist:
- All column names replaced with full business names
- Synonyms added for top 20 most-searched business terms
- All measures/dimensions correctly classified
- Date fields formatted correctly (YYYY-MM-DD) and marked as date type
- Categorical fields with > 50,000 unique values excluded from Topic (cardinality limit)
- Relationships between tables defined if using multiple datasets
- Glossary terms from the business glossary linked to relevant fields
- Test 10 sample questions that real users would ask — review accuracy before launch
Row-Level and Column-Level Security with Generative BI
Q in QuickSight inherits the security configuration of the underlying QuickSight dataset. This is the security property you need to verify before rolling out Q to multi-tenant environments or datasets containing sensitive information.
Row-level security (RLS)
Configure RLS rules on the QuickSight dataset that backs the Q Topic. When RLS is active, Q Topic queries automatically apply the RLS filter for the authenticated user:
import boto3

quicksight = boto3.client('quicksight', region_name='us-east-1')

# Create an RLS rule using a rules dataset that maps user emails to data segments
quicksight.create_data_set(
    AwsAccountId='123456789012',
    DataSetId='customer-rls-rules',
    Name='Customer RLS Rules',
    ImportMode='SPICE',
    PhysicalTableMap={
        'rls-source': {
            'S3Source': {
                'DataSourceArn': 'arn:aws:quicksight:us-east-1:123456789012:datasource/s3-source',
                'InputColumns': [
                    {'Name': 'UserName', 'Type': 'STRING'},
                    {'Name': 'region_code', 'Type': 'STRING'}  # Filter field matching dataset
                ],
                'UploadSettings': {'Format': 'CSV', 'ContainsHeader': True}
            }
        }
    }
)

# Apply RLS to the main dataset. There is no standalone RLS API call; the rules
# dataset is attached via the RowLevelPermissionDataSet parameter of
# update_data_set (or create_data_set)
quicksight.update_data_set(
    AwsAccountId='123456789012',
    DataSetId='sales-analytics-main',  # The dataset registered as a Q Topic
    Name='Sales Analytics',
    ImportMode='SPICE',
    PhysicalTableMap=existing_physical_table_map,  # reuse the dataset's current table map
    RowLevelPermissionDataSet={
        'Arn': 'arn:aws:quicksight:us-east-1:123456789012:dataset/customer-rls-rules',
        'PermissionPolicy': 'GRANT_ACCESS',
        'FormatVersion': 'VERSION_2',
        'Status': 'ENABLED'
    }
)
Once RLS is applied, a regional sales manager asking Q “show me revenue by product” will see only revenue for their region — even though Q is querying the full dataset. Verify this works by testing Q queries as a restricted user before rolling out to business users. RLS failures (where a restricted user sees data they should not) are hard to detect post-rollout.
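One practical way to run that restricted-user test is to embed the Q search bar for a test user who is covered by an RLS rule. A sketch of the generate_embed_url_for_registered_user parameters; the user ARN and topic ID are assumptions:

```python
# Sketch: parameters for generate_embed_url_for_registered_user with the
# Q search bar experience. User and topic identifiers are assumptions.
embed_request = {
    'AwsAccountId': '123456789012',
    'UserArn': ('arn:aws:quicksight:us-east-1:123456789012:'
                'user/default/rls-test-user'),  # a user restricted by an RLS rule
    'ExperienceConfiguration': {
        'QSearchBar': {'InitialTopicId': 'sales-analytics-topic'}
    },
    'SessionLifetimeInMinutes': 60,
}
# url = boto3.client('quicksight').generate_embed_url_for_registered_user(
#     **embed_request)['EmbedUrl']
```

Open the returned URL in a private browser session and run the same questions a real restricted user would ask; any row that should be filtered out but appears is an RLS gap to fix before launch.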
Column-level security
For datasets with sensitive columns (compensation data, SSNs, customer PII), use QuickSight column-level permissions to exclude specific fields from Q Topics:
# Exclude sensitive columns from the dataset used in Q Topics
quicksight.update_data_set(
    AwsAccountId='123456789012',
    DataSetId='hr-analytics-dataset',
    Name='HR Analytics',
    ImportMode='SPICE',
    ColumnLevelPermissionRules=[
        {
            'Principals': ['arn:aws:quicksight:us-east-1:123456789012:group/default/hr-executives'],
            'ColumnNames': ['base_salary', 'bonus_target', 'ssn_hash', 'performance_rating']
        }
    ],
    # ... rest of dataset config
)
Columns not included in the column-level permission rules for the Q-querying user group will not appear in Q responses — Q will not surface salary data in an answer about headcount trends if the authenticated user does not have column-level access to the salary field.
CloudTrail audit for Q queries
Every Q Topic query generates a CloudTrail event (quicksight:GenerateEmbedUrlForRegisteredUser for embedded Q, quicksight:SearchAnswers for console Q). This is your audit trail:
import json
import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client('cloudtrail', region_name='us-east-1')

# Pull Q query events from the last 7 days
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {'AttributeKey': 'EventName', 'AttributeValue': 'SearchAnswers'}
    ],
    StartTime=datetime.now() - timedelta(days=7),
    EndTime=datetime.now()
)

for event in response['Events']:
    detail = json.loads(event['CloudTrailEvent'])
    print(f"User: {detail.get('userIdentity', {}).get('arn', 'unknown')}")
    print(f"Time: {event['EventTime']}")
    print(f"Topic: {detail.get('requestParameters', {}).get('topicId', 'unknown')}")
Preserve these logs in S3 via CloudTrail’s S3 delivery configuration for long-term compliance archiving.
Measuring Q Adoption
The most common mistake in enterprise Q deployments is treating rollout as the finish line. Q is a product that improves with curation — unanswered questions tell you where to invest Topic enrichment effort.
Unanswered question analysis
QuickSight Q tracks which questions produced a result, which did not, and which answers have been reviewed by a Topic author. Reviewed answers are accessible via the API; unanswered questions surface in the Topic’s usage view in the console:
quicksight = boto3.client('quicksight')

# Pull the human-reviewed answers for a Topic (a baseline of known-good questions)
response = quicksight.list_topic_reviewed_answers(
    AwsAccountId='123456789012',
    TopicId='sales-analytics-topic'
)

# For production reporting, pull Q usage events from CloudWatch Logs Insights
# once account-level QuickSight logging is enabled
Run an unanswered question review weekly during the first 90 days. Each cluster of failed questions points to a missing synonym, a missing relationship definition, or a field that should be added to the Topic. After 90 days, the Topic’s coverage stabilizes and review frequency can drop to monthly.
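Once account-level logging is enabled, the Logs Insights side of that review can be scripted. A sketch of a daily question-count query; the log group name is an assumption, so check what your account actually delivers:

```python
from datetime import datetime, timedelta

# Sketch: a CloudWatch Logs Insights query counting Q questions per day.
# The log group name is an assumption; verify it in your account.
QUERY = """
fields @timestamp, @message
| filter @message like /topicId/
| stats count() as questions by bin(1d)
"""

now = datetime.now()
insights_request = {
    'logGroupName': '/aws/quicksight/q-usage',  # assumed name
    'queryString': QUERY,
    'startTime': int((now - timedelta(days=30)).timestamp()),
    'endTime': int(now.timestamp()),
}
# query_id = boto3.client('logs').start_query(**insights_request)['queryId']
# results = boto3.client('logs').get_query_results(queryId=query_id)
```

Logs Insights queries run asynchronously, so production code should poll get_query_results until the query status reaches Complete.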
User satisfaction scoring
Q responses include thumbs up/down feedback buttons. Track these via CloudWatch metrics (QuickSight publishes Q feedback events to CloudWatch when the account-level CloudWatch logging is enabled):
| Metric | Target (90-day) | Action if below target |
|---|---|---|
| Q answer rate | > 80% of questions produce a result | Review unanswered questions weekly |
| Thumbs-up rate | > 70% of rated answers | Review thumbs-down questions for Topic gaps |
| Unique Q users / total users | > 40% within 60 days | Train users; embed Q in primary dashboard |
| Questions per Q user per week | > 5 | Indicates genuine adoption vs. one-time trial |
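The targets in the table reduce to simple ratios, so they are easy to compute from whatever counts your logging surfaces. A small helper with invented example counts:

```python
# Compute the adoption metrics from the table above; the input counts are
# invented for illustration.
def adoption_metrics(questions, answered, thumbs_up, thumbs_rated,
                     q_users, total_users):
    return {
        'answer_rate': answered / questions if questions else 0.0,
        'thumbs_up_rate': thumbs_up / thumbs_rated if thumbs_rated else 0.0,
        'user_penetration': q_users / total_users if total_users else 0.0,
    }

m = adoption_metrics(questions=1200, answered=1010, thumbs_up=640,
                     thumbs_rated=850, q_users=55, total_users=120)
# 1010/1200 clears the 80% answer-rate target; 55/120 clears the 40%
# user-penetration target
```

Wiring this into the monthly adoption report gives a single pass/fail view against the 90-day targets.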
Monthly adoption report setup
Create a QuickSight dashboard that tracks Q adoption metrics by pulling from CloudWatch:
- Enable QuickSight CloudWatch logging at the account level (AWS console → QuickSight → Admin → CloudWatch integration)
- Create a CloudWatch Logs Insights query that extracts Q query events, answer rates, and user counts
- Connect CloudWatch Logs as a QuickSight data source
- Build an internal “Q Adoption” dashboard visible to the BI team and data owners
This creates a feedback loop: the data team can see which Topics are heavily used (worth additional curation) and which are underused (worth training or deprecation).
Q in QuickSight vs. Building a Custom RAG BI Tool
When enterprise teams first encounter Q in QuickSight’s Topic-based scope limitation, the reaction is often “let’s just build our own generative BI tool with an LLM and a vector database.” This is worth evaluating honestly.
| Dimension | Q in QuickSight | Custom RAG BI Tool |
|---|---|---|
| Time to first user value | 1–2 weeks (Topic setup + testing) | 3–6 months (architecture, engineering, testing) |
| ML engineering required | None | Yes — prompt engineering, RAG pipeline, eval |
| Infrastructure to manage | None (QuickSight-managed) | Vector DB (OpenSearch/pgvector), Lambda/ECS, API Gateway |
| Response quality | Good for well-curated Topics | Depends heavily on prompt engineering quality |
| Response control | Limited — QuickSight controls LLM prompts | Full — you control the prompt, model, retrieval |
| Security integration | Native RLS/CLS from QuickSight datasets | Custom implementation required |
| Audit trail | CloudTrail automatically | Custom logging implementation required |
| Ongoing maintenance | Topic curation only | Model upgrades, prompt updates, vector DB maintenance |
| Multi-source querying | Topics only | Possible — can query multiple data sources |
| Cost (Year 1, 100 users) | ~$13,000–$22,000 (Enterprise license) | ~$80,000–$150,000 (engineering + infra) |
Choose Q in QuickSight when:
- Your data assets are already in QuickSight datasets (or can be)
- Business users are the primary audience (not developers querying programmatically)
- Time-to-value in weeks matters more than full control over responses
- Your security model is row/column-based (well-supported natively)
Choose a custom RAG BI tool when:
- You need to query data sources that cannot be loaded into QuickSight (operational DBs, real-time streams)
- Your use case requires multi-step reasoning across multiple data sources in one answer
- You want to embed the BI experience in a product with heavy custom branding
- Your organization already has the ML engineering capacity to build and maintain it
For most enterprises without a dedicated ML platform team, Q in QuickSight delivers 80% of the generative BI value at 20% of the cost and time investment. The RAG BI path is a significant engineering project that frequently takes longer and costs more than estimated.
Amazon Q in QuickSight addresses a real problem: the data team bottleneck on ad-hoc BI questions. The 20% of questions that don’t fit existing dashboards no longer need to wait for a sprint cycle. But Q’s quality ceiling is the curation quality of the underlying Topics — a poorly described dataset with abbreviated field names will produce poor Q results regardless of the underlying model quality.
The investment is in Topic preparation, synonym coverage, and adoption measurement — not in ML infrastructure. For teams already on QuickSight in production, Q integration is a natural next step. For teams running real-time analytics dashboards or embedding QuickSight in SaaS applications, Q’s embedding capabilities extend naturally into those architectures. See also the broader AWS AI services landscape for 2026 for where generative BI fits in the AI adoption roadmap.
Need help deploying Amazon Q in QuickSight for your organization, including Topic setup, RLS configuration, and adoption measurement? FactualMinds works with enterprise BI teams on QuickSight architecture, Q Topic optimization, and the build-vs-buy decision for generative analytics.