Try 10 focused AWS DEA-C01 questions on Data Security and Governance, with explanations, then continue with IT Mastery.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
| Field | Detail |
|---|---|
| Exam route | AWS DEA-C01 |
| Topic area | Data Security and Governance |
| Blueprint weight | 18% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Data Security and Governance for AWS DEA-C01. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 18% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Data Security and Governance
A security team must prepare AWS CloudTrail logs for audit across 40 AWS accounts in an AWS Organization and multiple Regions. Requirements:
- Centralize events from all accounts and Regions in a dedicated security account.
- Support ad-hoc, interactive SQL investigations over the last 90 days of events.
- Encrypt the stored events with AWS KMS.
- Minimize infrastructure to build and operate (no ETL pipelines, partition management, or search clusters).
Which solution best meets these requirements?
Options:
A. Deliver CloudTrail logs to Amazon S3, crawl with AWS Glue, and query with Amazon Athena
B. Create a CloudTrail Lake event data store in the security account and enable organization event collection for querying
C. Route CloudTrail to Amazon CloudWatch Logs and use CloudWatch Logs Insights for investigations
D. Send CloudTrail events to Amazon OpenSearch Service for indexing and dashboard-based investigation
Best answer: B
Explanation: CloudTrail Lake is designed for centralized, queryable audit logs with managed ingestion, retention, and SQL-based querying. It fits scenarios that need fast, ad-hoc investigation across many accounts without standing up analytics infrastructure or maintaining partitions/ETL. Exporting to other stores is typically better when you need long-term, low-cost archival or broader lakehouse analytics beyond audit use cases.
The core decision is whether you need a managed, centralized audit log store that is immediately queryable versus exporting raw logs to a separate analytics store. CloudTrail Lake lets you collect events (including across an AWS Organization) into an event data store and query them directly with SQL, while controlling access in the security account and encrypting with KMS. This meets the requirements for interactive investigations over the last 90 days without building ETL pipelines, managing S3 partitions/catalogs, or operating a search/analytics cluster.
Exporting CloudTrail to Amazon S3 (and then using Athena/Glue/other engines) is a common pattern for large-scale analytics and low-cost archival, but it introduces data-lake operations (partitioning and cataloging), and query cost typically scales with the amount of data scanned.
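As an illustration of the pattern behind option B, the boto3 sketch below creates an organization-scoped CloudTrail Lake event data store and starts an ad-hoc SQL query against it. The store name, KMS alias, and query text are placeholders, not part of the question.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Organization-scoped, multi-Region event data store (placeholder name, alias, retention).
eds = cloudtrail.create_event_data_store(
    Name="org-audit-store",            # placeholder name
    MultiRegionEnabled=True,
    OrganizationEnabled=True,          # collect events from all accounts in the organization
    RetentionPeriod=90,                # keep 90 days of events for investigations
    KmsKeyId="alias/audit-logs",       # placeholder customer managed key alias
)
eds_id = eds["EventDataStoreArn"].split("/")[-1]

# Ad-hoc SQL investigation, for example: who called DeleteTrail?
query = cloudtrail.start_query(
    QueryStatement=(
        f"SELECT eventTime, userIdentity.arn, eventName "
        f"FROM {eds_id} "
        f"WHERE eventName = 'DeleteTrail'"
    )
)
print(query["QueryId"])
```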
Topic: Data Security and Governance
A company runs regulated workloads on AWS and must retain data platform audit logs for 7 years. Auditors require the logs to be immutable, tamper-evident (integrity verifiable), and accessible only to a small security team. The company already uses AWS CloudTrail, Amazon S3, AWS KMS, and AWS Organizations.
Which approach should you AVOID because it does NOT meet these audit-log requirements?
Options:
A. Use an AWS Organizations CloudTrail trail to centralize logs into a dedicated security account, limiting access to a security-admin role
B. Store CloudTrail logs in a versioned S3 bucket and allow the security team to delete or overwrite objects when needed
C. Encrypt the log bucket with SSE-KMS, restrict access with bucket policies and least-privilege IAM roles, and audit access with CloudTrail data events
D. Enable CloudTrail log file integrity validation and deliver logs to an S3 bucket with Object Lock (compliance mode) and a retention period
Best answer: B
Explanation: Immutable, regulated audit logs must prevent modification and deletion and provide a way to verify integrity over time. Centralizing logs in a dedicated account, enforcing strict access controls, using KMS encryption, and enabling integrity validation are all aligned with those requirements. Allowing anyone (even a security team) to delete or overwrite log objects defeats tamper resistance.
For regulated audit logging, design for write-once/read-many behavior and independent evidence of integrity. On AWS, CloudTrail can generate audit records and provides log file integrity validation to help detect tampering. Storing logs in a dedicated, tightly controlled S3 bucket in a separate security account reduces the risk that workload administrators can alter evidence.
To meet immutability and retention requirements, use S3 Object Lock in compliance mode with an appropriate retention period, and restrict access using least-privilege IAM and bucket policies (including limiting s3:DeleteObject and preventing changes to retention settings except by a break-glass role). Encrypt logs with SSE-KMS and monitor access via CloudTrail (including S3 data events if needed). The key takeaway is that “versioning only” does not prevent deletion or overwrite of evidence.
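As a rough sketch of the controls described above (with a hypothetical bucket and trail name), the boto3 calls below enable Object Lock in compliance mode with a 7-year default retention and turn on CloudTrail log file integrity validation.

```python
import boto3

s3 = boto3.client("s3")
cloudtrail = boto3.client("cloudtrail")

BUCKET = "org-audit-logs-example"  # hypothetical bucket name

# Object Lock can only be enabled at bucket creation.
# (Add CreateBucketConfiguration when outside us-east-1.)
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Default retention: compliance mode, 7 years. Objects cannot be deleted or
# overwritten until retention expires, even by administrators.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)

# Integrity validation produces digest files so tampering can be detected later.
cloudtrail.update_trail(Name="org-trail", EnableLogFileValidation=True)  # placeholder trail name
```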
Topic: Data Security and Governance
A security analyst is investigating suspected credential misuse and needs to query AWS CloudTrail logs with Amazon Athena to identify the principal that called DeleteTrail last night.
Current setup:
- CloudTrail logs are cataloged in the AWS Glue Data Catalog as the table audit.cloudtrail_logs.
- The analyst queries Athena using the IAM role SecurityAnalystRole.
- AWS Lake Formation is the permission control plane for the Data Catalog.

When the analyst runs SELECT eventtime, useridentity.arn, eventname FROM audit.cloudtrail_logs WHERE eventname='DeleteTrail' LIMIT 10;, Athena fails with:
HIVE_ACCESS_CONTROL_ERROR: Insufficient Lake Formation permission(s)
on audit.cloudtrail_logs
Which action will fix the root cause with the least change while keeping Lake Formation as the permission control plane?
Options:
A. Add an S3 bucket policy to allow s3:GetObject on the CloudTrail prefix to SecurityAnalystRole
B. Disable Lake Formation permissions for the Data Catalog and use only IAM policies for access control
C. Grant SecurityAnalystRole Lake Formation SELECT on audit.cloudtrail_logs (and the underlying data location if required)
D. Add an IAM policy to SecurityAnalystRole granting athena:StartQueryExecution and glue:GetTable
Best answer: C
Explanation: The Athena error is a Lake Formation authorization error on the Data Catalog table, not an Athena API permission problem or an S3 bucket policy issue. When Lake Formation is enabled, it must explicitly grant data permissions (such as SELECT) to the querying principal. Granting the role the required Lake Formation permissions restores the analyst’s ability to query CloudTrail logs for the audit investigation.
Symptom: Athena returns HIVE_ACCESS_CONTROL_ERROR stating insufficient Lake Formation permissions on the CloudTrail table.
Root cause: With Lake Formation enabled as the permission control plane, Athena enforces Lake Formation grants for Data Catalog resources. Even if the IAM role can call Athena and read S3, the query will fail unless Lake Formation grants the role permissions (typically SELECT on the table, and sometimes access to the registered S3 data location).
Fix: Use Lake Formation to grant SecurityAnalystRole the needed table permissions (and data location access if the S3 path is registered in Lake Formation) so Athena can read the CloudTrail log data while preserving centralized governance.
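A minimal sketch of the fix in option C, using the role, database, and table names from the question; the account ID and bucket ARN are placeholders.

```python
import boto3

lakeformation = boto3.client("lakeformation")

ROLE_ARN = "arn:aws:iam::123456789012:role/SecurityAnalystRole"  # placeholder account ID

# Table-level SELECT on the CloudTrail table in the Data Catalog.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": ROLE_ARN},
    Resource={"Table": {"DatabaseName": "audit", "Name": "cloudtrail_logs"}},
    Permissions=["SELECT"],
)

# Only needed if the underlying S3 path is registered with Lake Formation.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": ROLE_ARN},
    Resource={"DataLocation": {"ResourceArn": "arn:aws:s3:::audit-cloudtrail-logs"}},  # placeholder bucket
    Permissions=["DATA_LOCATION_ACCESS"],
)
```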
Topic: Data Security and Governance
A company moves sensitive customer data in two ways: (1) AWS Database Migration Service (AWS DMS) replicates an on-premises PostgreSQL database to Amazon Redshift, and (2) partners upload CSV files directly into an Amazon S3 landing bucket. Security requires encryption in transit for all transfers, and client-side encryption is acceptable when appropriate.
Which actions meet these requirements? (Select THREE.)
Options:
A. Enable encryption on the DMS replication instance storage
B. Enable SSE-KMS on the S3 landing bucket
C. Deny S3 requests that do not use HTTPS via aws:SecureTransport
D. Require SSL/TLS for AWS DMS source and target database endpoints
E. Use an S3 gateway endpoint so partner uploads are encrypted in transit
F. Encrypt files client-side with AWS Encryption SDK before S3 upload
Correct answers: C, D and F
Explanation: To meet the requirement, the data must be protected while it is moving (TLS) or protected before it moves (client-side encryption). Enforcing HTTPS-only access on S3 ensures TLS for direct uploads, and configuring DMS endpoints for SSL/TLS encrypts database replication traffic. Client-side encryption ensures files are encrypted even before they leave the uploader.
Encryption for data movement is typically achieved by (1) using TLS for the network connection between the sender and receiver, and/or (2) encrypting the payload on the client before sending it. For partner uploads to S3, you can enforce TLS by denying any request that is not made over HTTPS using an S3 bucket policy condition on aws:SecureTransport. For AWS DMS, you enable encryption in transit by configuring the source and target endpoints to require SSL/TLS (service-specific settings such as sslMode=require or providing certificates). Client-side encryption (for example, with the AWS Encryption SDK backed by AWS KMS keys) encrypts the file before upload, so the data remains protected regardless of the path taken to S3.
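For the partner-upload path, a hedged sketch of the HTTPS-only control from option C, assuming a placeholder landing-bucket name: a bucket policy that denies any request not made over TLS.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "partner-landing-example"  # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            # Deny anything not sent over HTTPS (TLS).
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```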
Key takeaway: at-rest encryption settings (SSE-KMS, encrypted volumes) do not, by themselves, ensure encryption in transit; a bucket policy condition on aws:SecureTransport is what forces TLS for uploads.
Topic: Data Security and Governance
A company runs a data lake in Amazon S3 and analytics in Amazon Redshift. Transactional data is stored in Amazon Aurora PostgreSQL and is replicated to S3 with AWS DMS for reporting.
Requirements:
- Encrypt data at rest consistently across S3, Redshift, and Aurora.
- Let security administrators centrally control, rotate, and audit the encryption keys.
- Allow data access to be revoked by disabling keys.
- Keep managed services (AWS Glue, Athena/Redshift Spectrum, AWS DMS) working without custom cryptography.
Which solution best meets these requirements?
Options:
A. Use client-side encryption before writing to S3 and store encryption keys in an EC2-hosted key store
B. Use customer managed AWS KMS keys for S3 SSE-KMS, Redshift encryption, and Aurora storage encryption; enforce S3 encryption with bucket policy and control keys with key policies
C. Use AWS managed KMS keys for S3, Redshift, and Aurora encryption
D. Use SSE-S3 for S3, enable TLS for all connections, and restrict access with VPC endpoints only
Best answer: B
Explanation: Using customer managed AWS KMS keys provides the required centralized control, auditing, and the ability to revoke access by disabling keys. Enabling native encryption for S3 (SSE-KMS), Redshift, and Aurora preserves compatibility with managed services such as Glue, Athena/Redshift Spectrum, and DMS. Enforcing encryption through policies reduces operational overhead and configuration drift.
The core requirement is consistent at-rest encryption across S3, Redshift, and Aurora with centralized governance and the ability to revoke data access by disabling keys. Customer managed AWS KMS keys (CMKs) satisfy this because security administrators can manage key policies, rotation, and audit via CloudTrail, and disabling a CMK prevents further decrypt operations.
A best-fit approach is:
- Use customer managed KMS keys for S3 (SSE-KMS), Amazon Redshift cluster encryption, and Aurora storage encryption.
- Enforce encryption on the S3 bucket with a bucket policy and default bucket encryption.
- Control and audit key usage through KMS key policies and CloudTrail, and revoke access by disabling keys when required.

This meets compliance while keeping Glue/Athena/Redshift Spectrum and DMS working without introducing custom client-side cryptography.
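A minimal boto3 sketch of two parts of that approach, using placeholder key and bucket identifiers: setting a customer managed key as the bucket's default SSE-KMS key, and disabling that key to revoke further decryption.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID"  # placeholder CMK ARN
BUCKET = "curated-analytics-example"                               # placeholder bucket

# Default encryption: every new object is encrypted with the customer managed key.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KEY_ARN,
                },
                "BucketKeyEnabled": True,  # reduces KMS request volume and cost
            }
        ]
    },
)

# Revocation: disabling the key blocks further Decrypt calls for data protected by it.
kms.disable_key(KeyId=KEY_ARN)
```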
Topic: Data Security and Governance
A company uses AWS Lake Formation to govern a data lake in Amazon S3. The curated data is registered as a Lake Formation data location at s3://dea-datalake/curated/ and is cataloged in the AWS Glue Data Catalog as curated_db.sales.
Users and jobs are failing with the following errors:
Athena: AccessDeniedException: Insufficient Lake Formation permission(s): Required SELECT on curated_db.sales
Crawler: AccessDeniedException: Insufficient Lake Formation permission(s): Required DATA_LOCATION_ACCESS on s3://dea-datalake/curated/
Glue job: InvalidInputException: Service is unable to assume role arn:aws:iam::111122223333:role/GlueETLRole
Which actions should a data engineer take to resolve these failures with least privilege? (Select THREE.)
Options:
A. Grant Lake Formation DATA_LOCATION_ACCESS on s3://dea-datalake/curated/ to CrawlerRole
B. Update GlueETLRole trust policy to allow glue.amazonaws.com to assume the role
C. Add an S3 bucket policy that allows s3:GetObject on curated/* to AnalyticsRole
D. Disable Lake Formation permissions enforcement for the curated_db database
E. Grant Lake Formation SELECT on curated_db.sales to AnalyticsRole
F. Attach an IAM policy granting athena:StartQueryExecution to AnalyticsRole
Correct answers: A, B and E
Explanation: The errors indicate two missing Lake Formation grants (table-level SELECT and data location access) and one incorrect IAM role trust relationship for AWS Glue. Granting the specific Lake Formation permissions resolves the governed-access failures. Updating the Glue role trust policy resolves the service’s inability to assume the execution role.
Troubleshoot authorization failures by matching the error message to the authorization layer being enforced. “Insufficient Lake Formation permission(s)” means Lake Formation grants (not just IAM/S3) are missing for the principal accessing a governed resource. A table read requires a Lake Formation table permission such as SELECT, and access to a registered S3 location requires DATA_LOCATION_ACCESS for the principal that needs to reach that location.
“Service is unable to assume role” indicates the target role’s trust policy does not allow the AWS service principal (for example, glue.amazonaws.com) to perform sts:AssumeRole. Fixing the trust relationship is the direct, least-privilege remedy; adding unrelated IAM permissions won’t help if the service cannot assume the role in the first place.
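The table and data-location grants look like the earlier Lake Formation example; the trust-policy repair for option B could look like the sketch below, which reuses the role name from the error message and an otherwise generic trust-policy document.

```python
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Let the AWS Glue service assume the job execution role.
            "Principal": {"Service": "glue.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.update_assume_role_policy(
    RoleName="GlueETLRole",
    PolicyDocument=json.dumps(trust_policy),
)
```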
Allowing glue.amazonaws.com in the Glue job role's trust policy resolves the assume-role failure. Granting athena:StartQueryExecution doesn't fix a Lake Formation authorization failure on the table.
Topic: Data Security and Governance
A company stores curated analytics datasets in an Amazon S3 bucket. AWS Glue jobs run in a VPC using an IAM role. The security team requires that curated objects can be read only by that IAM role and only through an S3 Access Point from within the VPC (not through direct bucket access from the internet).
Which action should you AVOID?
Options:
A. Add a bucket policy deny for requests not from the VPC endpoint
B. Attach a bucket policy granting s3:GetObject to Principal: *
C. Restrict the S3 gateway endpoint policy to the Access Point ARN
D. Use an S3 Access Point policy allowing only the Glue role
Best answer: B
Explanation: To control access paths at a high level, combine identity-based permissions on the Glue IAM role with resource policies on the S3 Access Point and network controls like VPC endpoint policies. This ensures access is possible only through the approved endpoint and access point. Granting access to everyone undermines those controls and breaks least-privilege design.
The core control is to force a specific access path: requests must come from the Glue job’s IAM role and traverse the VPC’s S3 endpoint to the approved S3 Access Point. You do this by layering policies at the identity and resource/network boundaries:
- Grant s3:GetObject to the Glue IAM role through an identity-based policy scoped to the access point.
- Allow only the Glue role in the S3 Access Point policy.
- Add a bucket policy that denies requests that do not arrive through the VPC endpoint.
- Restrict the S3 gateway endpoint policy to the Access Point ARN.

Any policy that broadly allows access (for example, to all principals) is an anti-pattern because it defeats path restriction and least privilege.
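As one of those layers, a sketch of an S3 Access Point policy that allows only the Glue role (option D); the account ID, access point name, and role name are placeholders.

```python
import json
import boto3

s3control = boto3.client("s3control")

ACCOUNT_ID = "123456789012"                                                 # placeholder
AP_ARN = f"arn:aws:s3:us-east-1:{ACCOUNT_ID}:accesspoint/curated-ap"        # placeholder access point
GLUE_ROLE = f"arn:aws:iam::{ACCOUNT_ID}:role/GlueCuratedReadRole"           # placeholder role

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": GLUE_ROLE},   # only the Glue job role
            "Action": "s3:GetObject",
            "Resource": f"{AP_ARN}/object/*",  # objects addressed through the access point
        }
    ],
}

s3control.put_access_point_policy(
    AccountId=ACCOUNT_ID,
    Name="curated-ap",
    Policy=json.dumps(policy),
)
```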
Topic: Data Security and Governance
A team runs an AWS Glue job in private subnets (no NAT). The job must access an Amazon S3 bucket only through an S3 Access Point (raw-ap) using an S3 gateway VPC endpoint.
The Glue job fails when reading the input path that uses the access point alias. The Glue job IAM role and the S3 Access Point policy already allow the required S3 actions.
Exhibit: Error and current VPC endpoint policy (partial)
ERROR AccessDenied: Access Denied (Status Code: 403)
Operation: s3:ListBucket
Resource: arn:aws:s3:us-east-1:123456789012:accesspoint/raw-ap
VPC endpoint policy allows only:
- arn:aws:s3:::raw-bucket
- arn:aws:s3:::raw-bucket/*
Which change will resolve the failure with the least impact while preserving the access-path requirement?
Options:
A. Remove all statements from the VPC endpoint policy
B. Attach AmazonS3FullAccess to the Glue job role
C. Update the S3 Access Point policy to allow all principals
D. Allow the access point ARNs in the VPC endpoint policy
Best answer: D
Explanation: S3 gateway VPC endpoint policies are an additional authorization layer. Because the Glue job is accessing S3 through an access point, the evaluated resource is the access point ARN, not the bucket ARN in the endpoint policy. Adding the access point and object ARNs to the endpoint policy removes the implicit deny while keeping private, controlled access paths.
When an S3 gateway VPC endpoint is used, its endpoint policy must allow the requested S3 actions for the specific resources being addressed; otherwise, the request is denied even if the IAM role and the S3 resource policy allow it. In this case, the Glue job uses an S3 Access Point alias, so S3 authorizes against access point ARNs (for example, ...:accesspoint/raw-ap and ...:accesspoint/raw-ap/object/*). The endpoint policy only allows raw-bucket ARNs, so s3:ListBucket on the access point is implicitly denied.
Fix by updating the VPC endpoint policy to include the access point ARN(s) (and object ARNs) with the required actions, preserving the requirement to keep S3 access private and constrained to the access point.
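A rough sketch of the endpoint-policy update from option D, using the access point ARN from the exhibit and a placeholder VPC endpoint ID; the key point is that the access point and its object ARNs are what S3 authorizes against.

```python
import json
import boto3

ec2 = boto3.client("ec2")

AP_ARN = "arn:aws:s3:us-east-1:123456789012:accesspoint/raw-ap"

endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                AP_ARN,                # the access point itself (ListBucket)
                f"{AP_ARN}/object/*",  # objects read through the access point
            ],
        }
    ],
}

ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",  # placeholder endpoint ID
    PolicyDocument=json.dumps(endpoint_policy),
)
```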
Topic: Data Security and Governance
Which THREE statements about rotating and managing credentials with AWS Secrets Manager are INCORRECT (false or unsafe) because they increase operational risk?
Select THREE.
Options:
A. Enable rotation and have consumers retrieve the secret value at runtime by ARN
B. Grant a pipeline role secretsmanager:* on all secrets to prevent failures
C. Encrypt secrets with a customer managed KMS key and scope KMS permissions tightly
D. Keep the previous secret version available during rotation for a short rollback window
E. Hardcode the secret in a Lambda environment variable and redeploy to rotate
F. Use one shared IAM access key and email new keys during rotation
Correct answers: B, E and F
Explanation: Lower-risk credential management uses AWS Secrets Manager as the source of truth, rotates secrets automatically where possible, and ensures consumers retrieve the current value at runtime. Operational risk increases when secrets are embedded in deployments, when permissions are overbroad, or when long-lived keys are shared and distributed manually.
The core practice is to centralize credentials in AWS Secrets Manager and automate rotation, while keeping access least-privilege and auditable. Consumers (ETL jobs, Lambda functions, EMR/Glue roles) should read the secret at runtime (often with caching) so rotation doesn’t require redeployments.
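A minimal sketch of that runtime-retrieval pattern, assuming a hypothetical secret ARN that stores database credentials as a JSON string.

```python
import json
import boto3

secretsmanager = boto3.client("secretsmanager")

SECRET_ARN = (
    "arn:aws:secretsmanager:us-east-1:123456789012:secret:etl/db-credentials-AbCdEf"
)  # placeholder ARN

# Fetch the current version at runtime (consider caching); rotation changes
# the stored value, not the consumer's code or deployment.
response = secretsmanager.get_secret_value(SecretId=SECRET_ARN)
credentials = json.loads(response["SecretString"])

db_user = credentials["username"]
db_password = credentials["password"]
# ...use the values to open the database connection as the consumer requires.
```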
The flagged statements are unsafe because:
- secretsmanager:* on all secrets is excessive and expands the blast radius; permissions should be limited to specific secret ARNs and the required actions.
- Hardcoding a secret in a Lambda environment variable ties rotation to a redeployment and keeps the value outside Secrets Manager as the source of truth.
- Sharing one long-lived IAM access key and emailing new keys during rotation removes individual accountability and distributes credentials over an insecure channel.

A key takeaway is that automated rotation plus runtime retrieval and least privilege reduces both outage and compromise risk.
Topic: Data Security and Governance
In AWS Lake Formation, what does LF-Tag-based access control (LF-TBAC) mean?
Options:
A. Granting Lake Formation permissions using LF-Tag expressions on Data Catalog resources
B. Using Iceberg snapshots to provide read-consistent access to historical table versions
C. Encrypting data with KMS envelope encryption so only tagged roles can decrypt
D. Restricting Amazon S3 object access using IAM aws:ResourceTag conditions
Best answer: A
Explanation: LF-TBAC is a Lake Formation authorization model where you attach LF-Tags to Glue Data Catalog databases/tables/columns and then grant permissions based on tag expressions. This supports least-privilege at scale because access is controlled by data classifications (tags) rather than enumerating each resource ARN or table name.
LF-Tag-based access control (LF-TBAC) is an AWS Lake Formation feature for authorizing access to data lake resources registered in the AWS Glue Data Catalog. You attach LF-Tags (for example, pii=true, domain=marketing) to databases, tables, or columns, and then create Lake Formation grants to IAM principals based on LF-Tag expressions (such as pii=false). This creates reusable, “policy-like” permissions that automatically apply as new resources are tagged, helping enforce least privilege while keeping operational overhead low. It is different from IAM/S3 tag conditions, which primarily control access to S3 APIs and do not provide Lake Formation’s table/column permission model across integrated services like Athena, Glue, and Redshift Spectrum.
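A hedged boto3 sketch of LF-TBAC using the pii tag from the example above; the database, table, and role names are placeholders.

```python
import boto3

lakeformation = boto3.client("lakeformation")

ANALYST_ROLE = "arn:aws:iam::123456789012:role/AnalystRole"  # placeholder role

# Define the LF-Tag and its allowed values.
lakeformation.create_lf_tag(TagKey="pii", TagValues=["true", "false"])

# Classify a table by attaching the LF-Tag (placeholder database/table names).
lakeformation.add_lf_tags_to_resource(
    Resource={"Table": {"DatabaseName": "sales_db", "Name": "orders"}},
    LFTags=[{"TagKey": "pii", "TagValues": ["false"]}],
)

# Grant SELECT on everything tagged pii=false instead of enumerating tables.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": ANALYST_ROLE},
    Resource={
        "LFTagPolicy": {
            "ResourceType": "TABLE",
            "Expression": [{"TagKey": "pii", "TagValues": ["false"]}],
        }
    },
    Permissions=["SELECT"],
)
```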
Use the AWS DEA-C01 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the AWS DEA-C01 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.