AWS DEA-C01: Data Security and Governance

Try 10 focused AWS DEA-C01 questions on Data Security and Governance, with explanations, then continue with IT Mastery.

Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.


Topic snapshot

  • Exam route: AWS DEA-C01
  • Topic area: Data Security and Governance
  • Blueprint weight: 18%
  • Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Data Security and Governance for AWS DEA-C01. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.

  • First attempt: answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
  • Transfer: return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 18% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Data Security and Governance

A security team must prepare AWS CloudTrail logs for audit across 40 AWS accounts in an AWS Organization and multiple Regions. Requirements:

  • Centralize events in a dedicated security account with KMS encryption and tightly controlled query access.
  • Support ad-hoc, interactive SQL queries over the most recent 90 days of events (incident response), without building ETL jobs or managing S3 partitions.
  • Minimize operational overhead (no cluster management) and avoid query costs that depend on scanning large object stores.

Which solution best meets these requirements?

Options:

  • A. Deliver CloudTrail logs to Amazon S3, crawl with AWS Glue, and query with Amazon Athena

  • B. Create a CloudTrail Lake event data store in the security account and enable organization event collection for querying

  • C. Route CloudTrail to Amazon CloudWatch Logs and use CloudWatch Logs Insights for investigations

  • D. Send CloudTrail events to Amazon OpenSearch Service for indexing and dashboard-based investigation

Best answer: B

Explanation: CloudTrail Lake is designed for centralized, queryable audit logs with managed ingestion, retention, and SQL-based querying. It fits scenarios that need fast, ad-hoc investigation across many accounts without standing up analytics infrastructure or maintaining partitions/ETL. Exporting to other stores is typically better when you need long-term, low-cost archival or broader lakehouse analytics beyond audit use cases.

The core decision is whether you need a managed, centralized audit log store that is immediately queryable versus exporting raw logs to a separate analytics store. CloudTrail Lake lets you collect events (including across an AWS Organization) into an event data store and query them directly with SQL, while controlling access in the security account and encrypting with KMS. This meets the requirements for interactive investigations over the last 90 days without building ETL pipelines, managing S3 partitions/catalogs, or operating a search/analytics cluster.

Exporting CloudTrail to Amazon S3 (and then using Athena/Glue/other engines) is a common pattern for large-scale analytics and low-cost archival, but it introduces data-lake operations (partitioning/cataloging) and query cost typically depends on the amount of data scanned.

  • S3 + Glue + Athena adds data-lake operational work (catalog/partitions) and query cost is tied to scanned data, conflicting with the requirements.
  • OpenSearch indexing increases operational overhead and cost to run and scale a search cluster.
  • CloudWatch Logs Insights is not a best fit for organization-wide, SQL-style audit querying at 90-day scope with the stated cost/overhead constraints.
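As a sketch of the kind of ad-hoc investigation CloudTrail Lake enables, the query below looks for DeleteTrail calls. The event data store ID and the cutoff timestamp are hypothetical placeholders; CloudTrail Lake queries reference the event data store ID in the FROM clause.

```python
# Sketch only: CloudTrail Lake SQL for an incident-response query.
# EVENT_DATA_STORE_ID and the timestamp are placeholders, not real values.
EVENT_DATA_STORE_ID = "EXAMPLE-event-data-store-id"

query = f"""
SELECT eventTime, userIdentity.arn, eventName, awsRegion, recipientAccountId
FROM {EVENT_DATA_STORE_ID}
WHERE eventName = 'DeleteTrail'
  AND eventTime > '2025-01-01 00:00:00'  -- replace with the 90-day cutoff
ORDER BY eventTime DESC
"""
print(query)
```

Because the store is managed, no Glue catalog, partitions, or cluster sizing is involved in running this.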

Question 2

Topic: Data Security and Governance

A company runs regulated workloads on AWS and must retain data platform audit logs for 7 years. Auditors require the logs to be immutable, tamper-evident (integrity verifiable), and accessible only to a small security team. The company already uses AWS CloudTrail, Amazon S3, AWS KMS, and AWS Organizations.

Which approach should you AVOID because it does NOT meet these audit-log requirements?

Options:

  • A. Use an AWS Organizations CloudTrail trail to centralize logs into a dedicated security account, limiting access to a security-admin role

  • B. Store CloudTrail logs in a versioned S3 bucket and allow the security team to delete or overwrite objects when needed

  • C. Encrypt the log bucket with SSE-KMS, restrict access with bucket policies and least-privilege IAM roles, and audit access with CloudTrail data events

  • D. Enable CloudTrail log file integrity validation and deliver logs to an S3 bucket with Object Lock (compliance mode) and a retention period

Best answer: B

Explanation: Immutable, regulated audit logs must prevent modification and deletion and provide a way to verify integrity over time. Centralizing logs in a dedicated account, enforcing strict access controls, using KMS encryption, and enabling integrity validation are all aligned with those requirements. Allowing anyone (even a security team) to delete or overwrite log objects defeats tamper resistance.

For regulated audit logging, design for write-once/read-many behavior and independent evidence of integrity. On AWS, CloudTrail can generate audit records and provides log file integrity validation to help detect tampering. Storing logs in a dedicated, tightly controlled S3 bucket in a separate security account reduces the risk that workload administrators can alter evidence.

To meet immutability and retention requirements, use S3 Object Lock in compliance mode with an appropriate retention period, and restrict access using least-privilege IAM and bucket policies (including limiting s3:DeleteObject and preventing changes to retention settings except by a break-glass role). Encrypt logs with SSE-KMS and monitor access via CloudTrail (including S3 data events if needed). The key takeaway is that “versioning only” does not prevent deletion or overwrite of evidence.

  • Object Lock + integrity supports WORM retention and tamper detection.
  • KMS + least privilege enforces access control while keeping logs encrypted.
  • Central security account reduces the chance that producers can modify audit evidence.
  • Versioning alone still allows delete/overwrite, which undermines immutability.
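A minimal sketch of the Object Lock configuration that enforces the WORM behavior described above, shaped like the payload boto3's `put_object_lock_configuration` accepts (the bucket it would apply to is hypothetical):

```python
# Sketch: S3 Object Lock default retention in compliance mode.
# Compliance mode means no principal, including root, can shorten or
# remove the retention period before it expires.
object_lock_configuration = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",  # vs. GOVERNANCE, which privileged roles can bypass
            "Years": 7,            # matches the 7-year audit requirement
        }
    },
}
```

Pair this with CloudTrail log file integrity validation so auditors can independently verify that delivered log files were not altered.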

Question 3

Topic: Data Security and Governance

A security analyst is investigating suspected credential misuse and needs to query AWS CloudTrail logs with Amazon Athena to identify the principal that called DeleteTrail last night.

Current setup:

  • An organization trail delivers CloudTrail logs to an Amazon S3 bucket.
  • An AWS Glue crawler created a Data Catalog table audit.cloudtrail_logs.
  • AWS Lake Formation is enabled and is the central permission system for the data lake.
  • The analyst runs Athena queries using an IAM role SecurityAnalystRole.

When the analyst runs SELECT eventtime, useridentity.arn, eventname FROM audit.cloudtrail_logs WHERE eventname='DeleteTrail' LIMIT 10;, Athena fails with:

HIVE_ACCESS_CONTROL_ERROR: Insufficient Lake Formation permission(s)
on audit.cloudtrail_logs

Which action will fix the root cause with the least change while keeping Lake Formation as the permission control plane?

Options:

  • A. Add an S3 bucket policy to allow s3:GetObject on the CloudTrail prefix to SecurityAnalystRole

  • B. Disable Lake Formation permissions for the Data Catalog and use only IAM policies for access control

  • C. Grant SecurityAnalystRole Lake Formation SELECT on audit.cloudtrail_logs (and the underlying data location if required)

  • D. Add an IAM policy to SecurityAnalystRole granting athena:StartQueryExecution and glue:GetTable

Best answer: C

Explanation: The Athena error is a Lake Formation authorization error on the Data Catalog table, not an Athena API permission problem or an S3 bucket policy issue. When Lake Formation is enabled, it must explicitly grant data permissions (such as SELECT) to the querying principal. Granting the role the required Lake Formation permissions restores the analyst’s ability to query CloudTrail logs for the audit investigation.

Symptom: Athena returns HIVE_ACCESS_CONTROL_ERROR stating insufficient Lake Formation permissions on the CloudTrail table.

Root cause: With Lake Formation enabled as the permission control plane, Athena enforces Lake Formation grants for Data Catalog resources. Even if the IAM role can call Athena and read S3, the query will fail unless Lake Formation grants the role permissions (typically SELECT on the table, and sometimes access to the registered S3 data location).

Fix: Use Lake Formation to grant SecurityAnalystRole the needed table permissions (and data location access if the S3 path is registered in Lake Formation) so Athena can read the CloudTrail log data while preserving centralized governance.

  • S3-only access can still fail because Lake Formation blocks table/data access without LF grants.
  • Athena/Glue IAM APIs may be necessary, but the specific error indicates Lake Formation data permissions are missing.
  • Disabling Lake Formation violates the constraint to keep Lake Formation as the permission control plane and is not a minimal operational fix.

Question 4

Topic: Data Security and Governance

A company moves sensitive customer data in two ways: (1) AWS Database Migration Service (AWS DMS) replicates an on-premises PostgreSQL database to Amazon Redshift, and (2) partners upload CSV files directly into an Amazon S3 landing bucket. Security requires encryption in transit for all transfers, and client-side encryption is acceptable when appropriate.

Which actions meet these requirements? (Select THREE.)

Options:

  • A. Enable encryption on the DMS replication instance storage

  • B. Enable SSE-KMS on the S3 landing bucket

  • C. Deny S3 requests that do not use HTTPS via aws:SecureTransport

  • D. Require SSL/TLS for AWS DMS source and target database endpoints

  • E. Use an S3 gateway endpoint so partner uploads are encrypted in transit

  • F. Encrypt files client-side with AWS Encryption SDK before S3 upload

Correct answers: C, D and F

Explanation: To meet the requirement, the data must be protected while it is moving (TLS) or protected before it moves (client-side encryption). Enforcing HTTPS-only access on S3 ensures TLS for direct uploads, and configuring DMS endpoints for SSL/TLS encrypts database replication traffic. Client-side encryption ensures files are encrypted even before they leave the uploader.

Encryption for data movement is typically achieved by (1) using TLS for the network connection between the sender and receiver, and/or (2) encrypting the payload on the client before sending it. For partner uploads to S3, you can enforce TLS by denying any request that is not made over HTTPS using an S3 bucket policy condition on aws:SecureTransport. For AWS DMS, you enable encryption in transit by configuring the source and target endpoints to require SSL/TLS (service-specific settings such as sslMode=require or providing certificates). Client-side encryption (for example, with the AWS Encryption SDK backed by AWS KMS keys) encrypts the file before upload, so the data remains protected regardless of the path taken to S3.

Key takeaway: at-rest encryption settings (SSE-KMS, encrypted volumes) do not, by themselves, ensure encryption in transit.

  • OK: Encrypting with AWS Encryption SDK before upload provides client-side encryption (before transit).
  • OK: Requiring SSL/TLS on DMS endpoints encrypts replication traffic in transit.
  • OK: Denying non-HTTPS S3 access with aws:SecureTransport forces TLS for uploads.
  • NO: SSE-KMS on S3 is encryption at rest and does not enforce TLS during upload.
  • NO: Encrypting DMS replication instance storage protects data at rest on the instance, not during transfer.
  • NO: An S3 gateway endpoint provides private connectivity, but does not itself guarantee TLS for requests.
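The TLS-enforcement statement can be sketched as a bucket policy; the bucket name is a hypothetical placeholder:

```python
import json

# Sketch: deny any S3 request that is not made over HTTPS.
# "partner-landing-bucket" is a placeholder name.
deny_insecure_transport = {
    "Sid": "DenyNonTLSRequests",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::partner-landing-bucket",
        "arn:aws:s3:::partner-landing-bucket/*",
    ],
    # aws:SecureTransport is "false" for plain-HTTP requests
    "Condition": {"Bool": {"aws:SecureTransport": "false"}},
}
policy = {"Version": "2012-10-17", "Statement": [deny_insecure_transport]}
print(json.dumps(policy, indent=2))
```

Because it is an explicit deny on all actions, it applies regardless of what any allow statement grants.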

Question 5

Topic: Data Security and Governance

A company runs a data lake in Amazon S3 and analytics in Amazon Redshift. Transactional data is stored in Amazon Aurora PostgreSQL and is replicated to S3 with AWS DMS for reporting.

Requirements:

  • Encrypt data at rest for S3, Redshift, and Aurora using customer managed keys with centralized administration and audit
  • Security team must be able to revoke access by disabling a key
  • Existing Glue/Athena/Redshift Spectrum and DMS jobs must keep working with minimal changes
  • Minimize ongoing operational overhead

Which solution best meets these requirements?

Options:

  • A. Use client-side encryption before writing to S3 and store encryption keys in an EC2-hosted key store

  • B. Use customer managed AWS KMS keys for S3 SSE-KMS, Redshift encryption, and Aurora storage encryption; enforce S3 encryption with bucket policy and control keys with key policies

  • C. Use AWS managed KMS keys for S3, Redshift, and Aurora encryption

  • D. Use SSE-S3 for S3, enable TLS for all connections, and restrict access with VPC endpoints only

Best answer: B

Explanation: Using customer managed AWS KMS keys provides the required centralized control, auditing, and the ability to revoke access by disabling keys. Enabling native encryption for S3 (SSE-KMS), Redshift, and Aurora preserves compatibility with managed services such as Glue, Athena/Redshift Spectrum, and DMS. Enforcing encryption through policies reduces operational overhead and configuration drift.

The core requirement is consistent at-rest encryption across S3, Redshift, and Aurora with centralized governance and the ability to revoke data access by disabling keys. Customer managed AWS KMS keys (CMKs) satisfy this because security administrators can manage key policies, rotation, and audit via CloudTrail, and disabling a CMK prevents further decrypt operations.

A best-fit approach is:

  • Set S3 default encryption to SSE-KMS with a CMK and enforce it with an S3 bucket policy (deny uploads that are not SSE-KMS or not using the approved CMK).
  • Create/enable Redshift encryption using a CMK so managed storage and snapshots are encrypted under KMS.
  • Enable Aurora storage encryption with a CMK (and configure DMS to use KMS where applicable) so the replication pipeline remains service-native.

This meets compliance while keeping Glue/Athena/Redshift Spectrum and DMS working without introducing custom client-side cryptography.

  • SSE-S3 plus TLS alone fails the customer managed key and key-revocation requirements.
  • AWS managed keys do not provide the required centralized administrative control and revocation model demanded by many compliance programs.
  • Client-side encryption with an EC2 key store adds significant operational overhead and commonly breaks/complicates managed-service interoperability for querying and ETL.
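The S3 enforcement piece of the best-fit approach can be sketched as two deny statements; the bucket name and CMK ARN are hypothetical placeholders:

```python
# Sketch: enforce SSE-KMS with a specific customer managed key on uploads.
# Bucket name "analytics-lake" and CMK_ARN are placeholders.
CMK_ARN = "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID"

statements = [
    {   # Deny uploads that do not request SSE-KMS at all
        "Sid": "DenyNonKmsUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::analytics-lake/*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
        },
    },
    {   # Deny uploads encrypted under any key other than the approved CMK
        "Sid": "DenyWrongKmsKey",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::analytics-lake/*",
        "Condition": {
            "StringNotEquals": {
                "s3:x-amz-server-side-encryption-aws-kms-key-id": CMK_ARN
            }
        },
    },
]
```

Combined with SSE-KMS default bucket encryption, this prevents configuration drift without changing how Glue, Athena, or DMS write data.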

Question 6

Topic: Data Security and Governance

A company uses AWS Lake Formation to govern a data lake in Amazon S3. The curated data is registered as a Lake Formation data location at s3://dea-datalake/curated/ and is cataloged in the AWS Glue Data Catalog as curated_db.sales.

Users and jobs are failing with the following errors:

Athena: AccessDeniedException: Insufficient Lake Formation permission(s): Required SELECT on curated_db.sales
Crawler: AccessDeniedException: Insufficient Lake Formation permission(s): Required DATA_LOCATION_ACCESS on s3://dea-datalake/curated/
Glue job: InvalidInputException: Service is unable to assume role arn:aws:iam::111122223333:role/GlueETLRole

Which actions should a data engineer take to resolve these failures with least privilege? (Select THREE.)

Options:

  • A. Grant Lake Formation DATA_LOCATION_ACCESS on s3://dea-datalake/curated/ to CrawlerRole

  • B. Update GlueETLRole trust policy to allow glue.amazonaws.com to assume the role

  • C. Add an S3 bucket policy that allows s3:GetObject on curated/* to AnalyticsRole

  • D. Disable Lake Formation permissions enforcement for the curated_db database

  • E. Grant Lake Formation SELECT on curated_db.sales to AnalyticsRole

  • F. Attach an IAM policy granting athena:StartQueryExecution to AnalyticsRole

Correct answers: A, B and E

Explanation: The errors indicate two missing Lake Formation grants (table-level SELECT and data location access) and one incorrect IAM role trust relationship for AWS Glue. Granting the specific Lake Formation permissions resolves the governed-access failures. Updating the Glue role trust policy resolves the service’s inability to assume the execution role.

Troubleshoot authorization failures by matching the error message to the authorization layer being enforced. “Insufficient Lake Formation permission(s)” means Lake Formation grants (not just IAM/S3) are missing for the principal accessing a governed resource. A table read requires a Lake Formation table permission such as SELECT, and access to a registered S3 location requires DATA_LOCATION_ACCESS for the principal that needs to reach that location.

“Service is unable to assume role” indicates the target role’s trust policy does not allow the AWS service principal (for example, glue.amazonaws.com) to perform sts:AssumeRole. Fixing the trust relationship is the direct, least-privilege remedy; adding unrelated IAM permissions won’t help if the service cannot assume the role in the first place.

  • OK: Grant Lake Formation SELECT on the table — addresses the explicit missing SELECT grant for Athena.
  • OK: Grant Lake Formation DATA_LOCATION_ACCESS on the curated S3 location — addresses the explicit missing data location permission for the crawler.
  • OK: Allow glue.amazonaws.com in the Glue job role trust policy — resolves the assume-role failure.
  • NO: Adding an S3 bucket policy for read access doesn’t resolve a Lake Formation “missing permission(s)” denial.
  • NO: Granting athena:StartQueryExecution doesn’t fix a Lake Formation authorization failure on the table.
  • NO: Disabling Lake Formation enforcement is overly broad and not a least-privilege fix.
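The trust-policy fix for the assume-role failure can be sketched as the document attached to GlueETLRole:

```python
# Sketch: trust policy allowing the AWS Glue service principal to assume
# GlueETLRole, resolving "Service is unable to assume role".
glue_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "glue.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
```

This is a trust relationship on the role itself, separate from the permission policies that define what the role can do once assumed.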

Question 7

Topic: Data Security and Governance

A company stores curated analytics datasets in an Amazon S3 bucket. AWS Glue jobs run in a VPC using an IAM role. The security team requires that curated objects can be read only by that IAM role and only through an S3 Access Point from within the VPC (not through direct bucket access from the internet).

Which action should you AVOID?

Options:

  • A. Add a bucket policy deny for requests not from the VPC endpoint

  • B. Attach a bucket policy granting s3:GetObject to Principal: *

  • C. Restrict the S3 gateway endpoint policy to the Access Point ARN

  • D. Use an S3 Access Point policy allowing only the Glue role

Best answer: B

Explanation: To control access paths at a high level, combine identity-based permissions on the Glue IAM role with resource policies on the S3 Access Point and network controls like VPC endpoint policies. This ensures access is possible only through the approved endpoint and access point. Granting access to everyone undermines those controls and breaks least-privilege design.

The core control is to force a specific access path: requests must come from the Glue job’s IAM role and traverse the VPC’s S3 endpoint to the approved S3 Access Point. You do this by layering policies at the identity and resource/network boundaries:

  • IAM policy on the Glue role grants only required S3 actions.
  • S3 Access Point policy allows only that role (and can be VPC-restricted).
  • S3 gateway endpoint policy limits which S3 resources (such as the Access Point) are reachable from the VPC.
  • Optional bucket policy explicit denies can enforce “only via VPC endpoint/access point.”

Any policy that broadly allows access (for example to all principals) is an anti-pattern because it defeats path restriction and least privilege.

  • Access Point policy is an appropriate resource policy to scope access to a specific role.
  • Endpoint policy restriction is a valid way to constrain what the VPC can reach in S3.
  • Bucket explicit deny can enforce that requests must originate from the VPC endpoint.
  • Wildcard principal allow is a data platform anti-pattern that can enable unintended access paths.
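The access point layer of this design can be sketched as a resource policy scoped to a single role; the access point name, account ID, and role name are hypothetical placeholders:

```python
# Sketch: S3 Access Point policy that allows only one role to read objects.
# Access point "curated-ap", the account ID, and the role name are placeholders.
access_point_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:role/GlueCuratedReadRole"
            },
            "Action": "s3:GetObject",
            # Access point object ARNs use the /object/ prefix
            "Resource": "arn:aws:s3:us-east-1:123456789012:accesspoint/curated-ap/object/*",
        }
    ],
}
```

Contrast this with option B: a `Principal: *` allow on the bucket would bypass exactly this scoping.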

Question 8

Topic: Data Security and Governance

A team runs an AWS Glue job in private subnets (no NAT). The job must access an Amazon S3 bucket only through an S3 Access Point (raw-ap) using an S3 gateway VPC endpoint.

The Glue job fails when reading the input path that uses the access point alias. The Glue job IAM role and the S3 Access Point policy already allow the required S3 actions.

Exhibit: Error and current VPC endpoint policy (partial)

ERROR AccessDenied: Access Denied (Status Code: 403)
Operation: s3:ListBucket
Resource: arn:aws:s3:us-east-1:123456789012:accesspoint/raw-ap

VPC endpoint policy allows only:
- arn:aws:s3:::raw-bucket
- arn:aws:s3:::raw-bucket/*

Which change will resolve the failure with the least impact while preserving the access-path requirement?

Options:

  • A. Remove all statements from the VPC endpoint policy

  • B. Attach AmazonS3FullAccess to the Glue job role

  • C. Update the S3 Access Point policy to allow all principals

  • D. Allow the access point ARNs in the VPC endpoint policy

Best answer: D

Explanation: S3 gateway VPC endpoint policies are an additional authorization layer. Because the Glue job is accessing S3 through an access point, the evaluated resource is the access point ARN, not the bucket ARN in the endpoint policy. Adding the access point and object ARNs to the endpoint policy removes the implicit deny while keeping private, controlled access paths.

When an S3 gateway VPC endpoint is used, its endpoint policy must allow the requested S3 actions for the specific resources being addressed; otherwise, the request is denied even if the IAM role and the S3 resource policy allow it. In this case, the Glue job uses an S3 Access Point alias, so S3 authorizes against access point ARNs (for example, ...:accesspoint/raw-ap and ...:accesspoint/raw-ap/object/*). The endpoint policy only allows raw-bucket ARNs, so s3:ListBucket on the access point is implicitly denied.

Fix by updating the VPC endpoint policy to include the access point ARN(s) (and object ARNs) with the required actions, preserving the requirement to keep S3 access private and constrained to the access point.

  • More role permissions doesn’t help because the deny is coming from the VPC endpoint policy layer.
  • Removing the endpoint policy is a broader change that weakens path and resource restrictions.
  • Opening the access point policy doesn’t fix an endpoint-policy implicit deny and reduces security.
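The corrected endpoint policy can be sketched as follows, reusing the bucket, access point, Region, and account ID from the question's exhibit (the exact action list is illustrative):

```python
# Sketch: VPC gateway endpoint policy that allows both the bucket ARNs and
# the access point ARNs, removing the implicit deny at the endpoint layer.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::raw-bucket",
                "arn:aws:s3:::raw-bucket/*",
                "arn:aws:s3:us-east-1:123456789012:accesspoint/raw-ap",
                "arn:aws:s3:us-east-1:123456789012:accesspoint/raw-ap/object/*",
            ],
        }
    ],
}
```

The role policy and access point policy stay unchanged; only the layer that was actually denying the request is widened, and only to the named access point.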

Question 9

Topic: Data Security and Governance

Which THREE statements about rotating and managing credentials with AWS Secrets Manager are INCORRECT (false or unsafe) because they increase operational risk?

Select THREE.

Options:

  • A. Enable rotation and have consumers retrieve the secret value at runtime by ARN

  • B. Grant a pipeline role secretsmanager:* on all secrets to prevent failures

  • C. Encrypt secrets with a customer managed KMS key and scope KMS permissions tightly

  • D. Keep the previous secret version available during rotation for a short rollback window

  • E. Hardcode the secret in a Lambda environment variable and redeploy to rotate

  • F. Use one shared IAM access key and email new keys during rotation

Correct answers: B, E and F

Explanation: Lower-risk credential management uses AWS Secrets Manager as the source of truth, rotates secrets automatically where possible, and ensures consumers retrieve the current value at runtime. Operational risk increases when secrets are embedded in deployments, when permissions are overbroad, or when long-lived keys are shared and distributed manually.

The core practice is to centralize credentials in AWS Secrets Manager and automate rotation, while keeping access least-privilege and auditable. Consumers (ETL jobs, Lambda functions, EMR/Glue roles) should read the secret at runtime (often with caching) so rotation doesn’t require redeployments.

The unsafe statements are unsafe because:

  • Hardcoding secrets in environment variables shifts rotation into deployments and increases the chance of stale credentials and accidental exposure.
  • Granting secretsmanager:* on all secrets is excessive and expands blast radius; permissions should be limited to specific secret ARNs and required actions.
  • Sharing and emailing IAM access keys is insecure and undermines traceability; use per-role access (STS) and retrieve needed secrets securely.

A key takeaway is that automated rotation plus runtime retrieval and least privilege reduces both outage and compromise risk.

  • Hardcoded env vars require redeployments to rotate and increase secret sprawl.
  • Overbroad Secrets Manager permissions increase blast radius; scope to specific secret ARNs and required actions.
  • Shared, emailed access keys are insecure distribution and poor governance compared to per-role access and Secrets Manager retrieval.
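The "retrieve at runtime with caching" pattern can be sketched as a small wrapper. The client is injected so the same code works with a boto3 `secretsmanager` client in production or a stub in tests; the secret name and TTL are hypothetical choices:

```python
import json
import time

# Sketch: cached runtime retrieval from Secrets Manager, so rotation does
# not require redeploying consumers. Secret ID and TTL are illustrative.
class CachedSecret:
    def __init__(self, client, secret_id, ttl_seconds=300):
        self.client = client          # e.g. boto3.client("secretsmanager")
        self.secret_id = secret_id
        self.ttl = ttl_seconds
        self._value = None
        self._fetched_at = 0.0

    def get(self):
        # Refresh from Secrets Manager only when the cached copy is stale,
        # so rotation is picked up within one TTL without hammering the API.
        if self._value is None or time.time() - self._fetched_at > self.ttl:
            resp = self.client.get_secret_value(SecretId=self.secret_id)
            self._value = json.loads(resp["SecretString"])
            self._fetched_at = time.time()
        return self._value
```

Consumers always see the current credentials within one cache TTL of a rotation, which is why hardcoded environment variables are unnecessary in this design.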

Question 10

Topic: Data Security and Governance

In AWS Lake Formation, what does LF-Tag-based access control (LF-TBAC) mean?

Options:

  • A. Granting Lake Formation permissions using LF-Tag expressions on Data Catalog resources

  • B. Using Iceberg snapshots to provide read-consistent access to historical table versions

  • C. Encrypting data with KMS envelope encryption so only tagged roles can decrypt

  • D. Restricting Amazon S3 object access using IAM aws:ResourceTag conditions

Best answer: A

Explanation: LF-TBAC is a Lake Formation authorization model where you attach LF-Tags to Glue Data Catalog databases/tables/columns and then grant permissions based on tag expressions. This supports least-privilege at scale because access is controlled by data classifications (tags) rather than enumerating each resource ARN or table name.

LF-Tag-based access control (LF-TBAC) is an AWS Lake Formation feature for authorizing access to data lake resources registered in the AWS Glue Data Catalog. You attach LF-Tags (for example, pii=true, domain=marketing) to databases, tables, or columns, and then create Lake Formation grants to IAM principals based on LF-Tag expressions (such as pii=false). This creates reusable, “policy-like” permissions that automatically apply as new resources are tagged, helping enforce least privilege while keeping operational overhead low. It is different from IAM/S3 tag conditions, which primarily control access to S3 APIs and do not provide Lake Formation’s table/column permission model across integrated services like Athena, Glue, and Redshift Spectrum.

  • IAM tag conditions can restrict S3 API access but don’t implement Lake Formation table/column grants in the Data Catalog.
  • Iceberg snapshots are a table format capability for versioned reads, not an authorization mechanism.
  • KMS envelope encryption protects data at rest/in transit but does not define who is allowed to query specific tables/columns.
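An LF-TBAC grant can be sketched as the payload for Lake Formation's `grant_permissions` with an `LFTagPolicy` resource; the role ARN is a hypothetical placeholder and the tag mirrors the `pii` example above:

```python
# Sketch: LF-Tag-based grant. Any TABLE resource tagged pii=false becomes
# readable by the principal, with no per-table grants. Role ARN is a placeholder.
lf_tag_grant = {
    "Principal": {
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/MarketingAnalysts"
    },
    "Resource": {
        "LFTagPolicy": {
            "ResourceType": "TABLE",
            "Expression": [{"TagKey": "pii", "TagValues": ["false"]}],
        }
    },
    "Permissions": ["SELECT"],
}
```

New tables automatically fall under this grant as soon as they are tagged `pii=false`, which is the scaling benefit over resource-by-resource grants.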

Continue with full practice

Use the AWS DEA-C01 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.


Free review resource

Read the AWS DEA-C01 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.

Revised on Thursday, May 14, 2026