Microsoft Azure AZ-104 Practice Test: Administrator

Practice Microsoft Azure AZ-104 Administrator with free sample questions, timed mock exams, topic drills, and detailed answer explanations in IT Mastery.

Use this AZ-104 exam simulator page when you want realistic AZ-104 practice exam questions, clearer explanations, and a direct route into the full IT Mastery experience on web, iOS, and Android. It covers searches such as AZ-104 mock exam, AZ-104 practice test, Azure Administrator simulator, and AZ-104 practice questions.

Interactive Practice Center

Start a practice session for Microsoft Azure Administrator (AZ-104) below. For the best experience, open the full app in a new tab and navigate with swipes, gestures, or the mouse wheel, just like on your phone or tablet.

Open Full App in a New Tab

A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same account they use on web and mobile.

Prefer to practice on your phone or tablet? Download the IT Mastery – AWS, Azure, GCP & CompTIA exam prep app for iOS, or the IT Mastery app on Google Play for Android, and use the same account across web and mobile.

Why this AZ-104 practice page is useful

  • a direct route into the IT Mastery simulator for AZ-104
  • topic drills and mixed sets across identity, compute, storage, networking, and monitoring
  • detailed explanations that show why the strongest Azure administrator answer is correct
  • a clear free-preview path before you subscribe
  • the same account across web and mobile

What premium unlocks in IT Mastery

  • the full AZ-104 question bank instead of the smaller free preview
  • more timed mock exams and mixed domain sets
  • progress tracking and review history
  • access across web, iPhone, iPad, and Android with the same subscription

AZ-104 exam snapshot

  • Issuer: Microsoft
  • Platform: Microsoft Azure
  • Official exam name: Microsoft Azure Administrator (AZ-104)
  • Exam code: AZ-104
  • Passing score: 700 (scaled score on a 1–1000 scale)
  • Assessment style: scenario-based Azure administration, operations, governance, networking, and recovery decisions

AZ-104 questions usually reward the option that is operationally realistic, least-privilege aligned, and consistent with Azure-native management patterns rather than the most elaborate design.

What AZ-104 practice should cover

  • Identities and governance: Microsoft Entra ID, RBAC, subscriptions, management groups, Azure Policy, tags, and locks
  • Storage: account design, redundancy, lifecycle, private access, recovery, and Azure Files fundamentals
  • Compute: VMs, scale sets, images, extensions, App Service admin basics, backup, and restore
  • Networking: VNets, subnets, routing, NSGs, private endpoints, DNS, load balancing, VPN, and ExpressRoute fundamentals
  • Monitoring and recovery: Azure Monitor, alerts, Log Analytics, KQL basics, backup, and resilience

How to use the AZ-104 simulator efficiently

  1. Start with domain drills so you can isolate whether your misses come from governance, storage, compute, networking, or monitoring.
  2. Review every miss until you can explain the scope, security boundary, resilience choice, or operational workflow behind the best answer.
  3. Move into mixed sets once you can shift between RBAC, storage redundancy, VM operations, network isolation, and alerting scenarios without hesitation.
  4. Finish with timed runs so your Azure admin decisions stay sharp under exam pressure.

Free preview vs premium

  • Free preview: a smaller web set so you can validate the question style and explanation depth.
  • Premium: the full AZ-104 practice bank, focused drills, mixed sets, timed mock exams, detailed explanations, and progress tracking across web and mobile.

24 AZ-104 sample questions with detailed explanations

These sample questions include the same mix of single-answer and multiple-response items you should practice for AZ-104. Use them to check your readiness here, then move into the full IT Mastery question bank for broader timed coverage.

Question 1

Topic: Domain 5: Monitor and maintain Azure resources

You manage Azure Site Recovery for a production VM named WebVM1. You must validate the disaster recovery plan without impacting the running production workload or its network connectivity.

Based on the Failover options shown in the exhibit, what should you do?

Exhibit:

| Operation | Description | Impact on production |
| --- | --- | --- |
| Planned failover | Shuts down and fails over the primary VM to the target | Production VM is stopped/migrated |
| Unplanned failover | Fails over using the latest replicated data | Production VM may still be running; data loss possible |
| Test failover | Starts a test VM copy in an isolated test network | No impact to production VM or network |

Options:

  • A. Run an Unplanned failover of WebVM1 to the DR production network to verify that failover succeeds.
  • B. Run a Test failover for WebVM1 to an isolated test network and perform test cleanup when validation is complete.
  • C. Run a Planned failover of WebVM1 to the DR production network during the next maintenance window.
  • D. Disable replication for WebVM1 and then re-enable it to simulate a disaster recovery test.

Best answer: B

Explanation: The choice to run a Test failover for WebVM1 to an isolated test network and then perform test cleanup directly matches the exhibit entry for Test failover: it “starts a test VM copy in an isolated test network” with “no impact to production VM or network.” This fulfills the requirement to validate DR while keeping the production workload and its network connectivity unaffected.


Question 2

Topic: Domain 5: Monitor and maintain Azure resources

Which of the following statements about Azure Monitor workbooks are correct for building operational views of your environment? (Select THREE.)

Options:

  • A. Workbooks are Azure Resource Manager (ARM) resources stored in resource groups and secured with Azure RBAC like other Azure resources.
  • B. Using Azure Monitor workbooks requires a separate Power BI license because all workbook charts are rendered through the Power BI service.
  • C. Each workbook is limited to visualizing data from a single Azure resource and cannot span multiple subscriptions.
  • D. Workbooks automatically start collecting guest OS logs from virtual machines without requiring any additional agent or configuration.
  • E. Workbooks support parameters (such as subscription, resource group, or time range) that can apply filters across multiple visualizations on a page.
  • F. Workbooks can combine metrics and log query results from multiple Azure resources into a single interactive report.

Correct answers: A, E, and F

Explanation: The statement that workbooks can combine metrics and log query results from multiple Azure resources is correct because workbooks are specifically designed for multi-source, multi-resource views, including both metric and KQL-based visualizations.

The statement about supporting parameters is correct because workbooks can use dropdowns, text inputs, time pickers, and other controls whose values can be passed into multiple queries and visualizations, allowing operators to filter the entire workbook by subscription, resource group, time range, or other criteria.

The statement that workbooks are ARM resources stored in resource groups and secured with Azure RBAC is correct because each workbook is an Azure resource type. As such, it is created in a resource group and access to view or edit it is controlled by RBAC roles at the resource, resource group, or subscription scope.


Question 3

Topic: Domain 3: Deploy and manage Azure compute resources

You manage an ARM template that deploys a single virtual machine. The VM size is hard-coded, and the template does not return any values after deployment. You must reuse the template for test and production with different VM sizes and automatically display the VM’s public IP after each deployment. Which change to the template best meets these goals?

Current template skeleton:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "name": "appvm1",
      "apiVersion": "2023-03-01",
      "location": "[resourceGroup().location]",
      "properties": {
        "hardwareProfile": {
          "vmSize": "Standard_DS2_v2"
        }
      }
    }
  ]
}

Options:

  • A. Create two separate templates, one for test and one for production, each with a different hard-coded VM size, and leave the outputs section empty in both.
  • B. Define a vmSize variable in the variables section and reference [variables('vmSize')] in the VM hardwareProfile; no changes are made to outputs.
  • C. Define a vmSize parameter in the parameters section, reference it with [parameters('vmSize')] in the VM hardwareProfile, and add an outputs section that returns the public IP resource’s properties.ipAddress.
  • D. Add an environment parameter (test or prod) in the parameters section and use it only as a tag on the VM; keep the hard-coded VM size and no outputs.

Best answer: C

Explanation: The choice that defines a vmSize parameter, uses [parameters('vmSize')] in the VM resource, and adds an outputs section that returns the public IP’s properties.ipAddress directly addresses both requirements.

It uses the parameters section for deployment-time configurability, allowing different VM sizes in test and production without modifying the template. It also uses the outputs section to surface the public IP value at the end of deployment, improving operational visibility. This is precisely how ARM template structure is intended to be used for reusable and observable deployments.
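As an illustration, a revised skeleton along the lines of option C might look like the following sketch. The public IP resource name appvm1-pip, the parameter default, and the output name are assumptions added for the example, not part of the original template:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vmSize": {
      "type": "string",
      "defaultValue": "Standard_DS2_v2",
      "metadata": {
        "description": "VM size, e.g. Standard_DS2_v2 for test or a larger SKU for production"
      }
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "name": "appvm1",
      "apiVersion": "2023-03-01",
      "location": "[resourceGroup().location]",
      "properties": {
        "hardwareProfile": {
          "vmSize": "[parameters('vmSize')]"
        }
      }
    }
  ],
  "outputs": {
    "publicIpAddress": {
      "type": "string",
      "value": "[reference(resourceId('Microsoft.Network/publicIPAddresses', 'appvm1-pip')).ipAddress]"
    }
  }
}
```

At deployment time you pass a different vmSize per environment, and the public IP appears in the deployment's outputs without any extra lookup step.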


Question 4

Topic: Domain 4: Configure and manage virtual networking

You manage the public DNS zone contoso.com in Azure DNS. A web server is reachable on public IP address 52.160.10.20 and must be accessible as both www.contoso.com and shop.contoso.com. Which of the following Azure DNS record configurations is INCORRECT for this requirement?

Options:

  • A. Create a CNAME record named “shop” in contoso.com that points to “52.160.10.20”.
  • B. Create an A record named “shop” in contoso.com that also points to 52.160.10.20.
  • C. Create a CNAME record named “shop” in contoso.com that points to “www.contoso.com”.
  • D. Create an A record named “www” in contoso.com that points to 52.160.10.20.

Best answer: A

Explanation: The option that creates a CNAME record for shop.contoso.com targeting “52.160.10.20” is incorrect because CNAME targets must be hostnames, not IP addresses. Azure DNS expects the CNAME value to be another DNS name that ultimately resolves to an IP through an A or AAAA record. Using an IP as a CNAME target violates DNS rules and does not meet the requirement to correctly map hostnames to the web server’s public IP.
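Sketched in Bicep (assuming an existing zone resource named contoso.com; the TTL values are illustrative), the valid record set from options B, C, and D could look like this:

```bicep
resource zone 'Microsoft.Network/dnsZones@2018-05-01' existing = {
  name: 'contoso.com'
}

// A record: www.contoso.com -> 52.160.10.20
resource wwwRecord 'Microsoft.Network/dnsZones/A@2018-05-01' = {
  parent: zone
  name: 'www'
  properties: {
    TTL: 3600
    ARecords: [
      { ipv4Address: '52.160.10.20' }
    ]
  }
}

// CNAME record: shop.contoso.com -> www.contoso.com
// The target is a hostname, never a bare IP address.
resource shopRecord 'Microsoft.Network/dnsZones/CNAME@2018-05-01' = {
  parent: zone
  name: 'shop'
  properties: {
    TTL: 3600
    CNAMERecord: {
      cname: 'www.contoso.com'
    }
  }
}
```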


Question 5

Topic: Domain 3: Deploy and manage Azure compute resources

You manage a Bicep file that deploys an Azure Storage account:

resource sa 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'contosostore001'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

You must modify the Bicep file so that redeployment adds a tag environment=prod to this existing storage account. Which change should you make to the Bicep file?

Options:

  • A. Add a new parameter param environment string = 'prod' and reference it only in an output value, without changing the resource body.
  • B. Add a new resource of type Microsoft.Resources/tags that targets the storage account and sets environment=prod.
  • C. Change the resource to use the existing keyword and then define tags in a separate variable block that references sa.
  • D. Add a tags block to the existing sa resource definition:
resource sa 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'contosostore001'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  tags: {
    environment: 'prod'
  }
}

Best answer: D

Explanation: The choice that adds a tags block inside the existing sa resource definition directly instructs ARM to manage the tags of that storage account. This matches the standard pattern for tagging resources in Bicep, ensures the change is tracked in code, and allows an idempotent redeploy that only updates the tag while keeping the resource name, type, and SKU the same.


Question 6

Topic: Domain 4: Configure and manage virtual networking

You are designing a new Azure landing zone and planning the virtual network layout and connectivity for several workloads. You must respect Azure virtual network scope/limits and correctly use service endpoints to secure PaaS services.

Which of the following configurations should you AVOID? (Select TWO.)

Options:

  • A. Enable an Azure Storage service endpoint at the virtual network level and assume it will automatically apply to all existing and future subnets.
  • B. Plan to create a single virtual network that spans two Azure regions so that you don’t hit per‑VNet address space limits.
  • C. Design each virtual network to exist in a single region and connect workloads in different regions by using VNet peering, staying within per‑subscription VNet limits.
  • D. Create separate subnets for workloads that need different sets of service endpoints, because service endpoints are configured on individual subnets.
  • E. Restrict an Azure Storage account’s network access to specific subnets where the Azure Storage service endpoint is enabled, blocking traffic from other virtual networks.

Correct answers: A and B

Explanation: The configuration that plans to create a single virtual network spanning two Azure regions is wrong because Azure does not support multi‑region VNets. Each VNet is bound to a single region; to connect regions you must use VNet peering or other connectivity options.

The configuration that enables an Azure Storage service endpoint “at the virtual network level” and assumes it covers all subnets is also incorrect. Service endpoints are a subnet‑scoped configuration. If you do not explicitly enable the endpoint on a particular subnet, traffic from that subnet will not be recognized as coming via the service endpoint, and corresponding firewall rules on the storage account will not apply as expected.
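Because service endpoints are subnet-scoped, each subnet that needs one must declare it explicitly. A minimal Bicep sketch (the VNet name, subnet names, and address prefixes are assumptions for the example):

```bicep
resource vnet 'Microsoft.Network/virtualNetworks@2023-04-01' = {
  name: 'vnet-prod'
  location: resourceGroup().location
  properties: {
    addressSpace: {
      addressPrefixes: [ '10.0.0.0/16' ]
    }
    subnets: [
      {
        // This subnet CAN reach storage accounts via the service endpoint.
        name: 'snet-app'
        properties: {
          addressPrefix: '10.0.1.0/24'
          serviceEndpoints: [
            { service: 'Microsoft.Storage' }
          ]
        }
      }
      {
        // No serviceEndpoints block: traffic from this subnet is NOT
        // recognized by the storage account's VNet rules.
        name: 'snet-mgmt'
        properties: {
          addressPrefix: '10.0.2.0/24'
        }
      }
    ]
  }
}
```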


Question 7

Topic: Domain 1: Manage Azure identities and governance

You are defining a standard tag set for new Azure resources to support cost allocation, environment separation, and clear ownership. Which combination of tag keys is the most appropriate baseline standard for this purpose?

Options:

  • A. Region, VMSize, SKU, OSVersion
  • B. CostCenter, Environment, Owner, Application
  • C. Backup, Encryption, DR, Compliance
  • D. CreatedDate, LastPatched, Ticket, Comment

Best answer: B

Explanation: The choice that uses CostCenter, Environment, Owner, Application satisfies all three stated needs:

  • CostCenter enables chargeback/showback and cost analysis by business unit.
  • Environment clearly separates resources by lifecycle stage (for example, Prod, Dev, Test).
  • Owner identifies accountability for approvals, troubleshooting, and lifecycle decisions.
  • Application links resources to a specific workload, aiding both operations and reporting.

This aligns with common Microsoft guidance for baseline tag keys and provides a strong foundation that can be extended with additional tags as needed.


Question 8

Topic: Domain 5: Monitor and maintain Azure resources

You manage several Azure virtual machines that host a line-of-business application. You plan to use Azure Monitor VM Insights to troubleshoot intermittent performance and connectivity issues. Which of the following statements about VM Insights is INCORRECT?

Options:

  • A. VM Insights can automatically change network security group (NSG) rules to resolve detected connectivity problems.
  • B. To use VM Insights, the VM must send data to a Log Analytics workspace where the collected metrics and logs are stored.
  • C. VM Insights provides near real-time charts for CPU, memory, disk, and network performance of the virtual machine.
  • D. The dependency map in VM Insights can show inbound and outbound connections between the VM and other services.

Best answer: A

Explanation: The statement that VM Insights can automatically change NSG rules to resolve connectivity problems is incorrect because VM Insights does not perform configuration changes. It surfaces metrics, logs, and dependency information to help you diagnose issues, but actions such as editing NSG rules, changing routes, or reconfiguring VMs must be done manually or via separate automation. Treating VM Insights as an auto-remediation tool misrepresents its capabilities and could lead to unsafe expectations in production operations.


Question 9

Topic: Domain 3: Deploy and manage Azure compute resources

Which TWO statements about Azure Container Apps environments, ingress, and revisions are correct? (Select TWO.)

Options:

  • A. Each Azure Container App must be deployed into its own dedicated environment, and an environment cannot host more than one app.
  • B. When a new revision of a Container App is created, all previous revisions are immediately deleted and can never receive traffic again.
  • C. To expose a Container App to the internet, you must assign a public IP address directly to the running container and open ports on that IP.
  • D. You can configure Azure Container Apps revisions to route a percentage of HTTP traffic to different revisions of the same app for gradual rollouts.
  • E. An Azure Container Apps environment provides a secure boundary and shared runtime/networking fabric for multiple container apps deployed in the same region.

Correct answers: D and E

Explanation: The statement that an Azure Container Apps environment provides a secure boundary and shared runtime/networking fabric is correct because environments are explicitly designed as multi-tenant containers for multiple apps in one region, with shared infrastructure for networking, logging, and Dapr.

The statement about revisions supporting percentage-based traffic routing is also correct. Revisions allow you to run multiple versions of the same app side by side and configure weights (for example, 80/20) so that incoming HTTP requests are distributed between them for safe rollouts.


Question 10

Topic: Domain 4: Configure and manage virtual networking

Which of the following statements about Azure virtual network service endpoints and their relationship to virtual networks and subnets are correct? (Select THREE.)

Options:

  • A. To use a service endpoint, you must deploy a dedicated network virtual appliance (NVA) into the subnet to route traffic to the PaaS service.
  • B. Service endpoints are configured at the virtual network level and automatically apply to all existing and future subnets in that VNet.
  • C. Service endpoints extend your virtual network’s private address space to the Azure PaaS service over the Azure backbone, without requiring a private IP address for the service in your VNet.
  • D. You enable a service endpoint on a specific subnet, and only resources in that subnet benefit from the endpoint.
  • E. After you enable a service endpoint for Azure Storage on a subnet and configure the storage account firewall to allow only that virtual network, resources in the subnet can access the account while other public internet traffic is blocked.

Correct answers: C, D, and E

Explanation: The statement that service endpoints are enabled on a specific subnet and only benefit resources in that subnet is correct because configuration is always scoped to a subnet, and routing changes apply only to that subnet.

The statement about combining a service endpoint with a storage account firewall is also correct. Enabling a service endpoint on the subnet and then allowing that VNet/subnet in the storage account’s network rules lets resources in that subnet reach the account while other public internet traffic is blocked.

The statement that service endpoints extend your VNet address space to the PaaS service over the Azure backbone, without requiring a private IP in the VNet, accurately reflects how service endpoints work. They keep the PaaS resource’s public IP but route traffic over Microsoft’s backbone and do not inject a private IP into the VNet, which distinguishes them from private endpoints.


Question 11

Topic: Domain 3: Deploy and manage Azure compute resources

You manage a new microservices-based workload that will run entirely in Azure. The development team packages each component as a Linux container image and pushes them to Azure Container Registry. Some components are HTTP APIs that must be reachable from the internet via HTTPS, and other components are background workers that process messages from an Azure Service Bus queue. The team has no Kubernetes experience and wants to minimize infrastructure management while meeting these requirements:

  • Automatically scale out/in based on HTTP request load and queue length.
  • Use a fully managed platform without managing VM or cluster nodes.
  • Expose public HTTPS endpoints for the APIs.

You must recommend a single Azure hosting platform for this workload. Which option should you choose?

Options:

  • A. Deploy the APIs and workers as multi-container Web Apps for Containers on Azure App Service and configure autoscaling rules based on CPU utilization.
  • B. Deploy the containers to Azure Container Apps in a Container Apps environment, creating separate container apps for APIs and worker services with HTTP ingress and event-driven autoscaling rules.
  • C. Create an Azure Kubernetes Service (AKS) cluster and deploy the containers as Kubernetes deployments with a Horizontal Pod Autoscaler configured for CPU and queue metrics.
  • D. Host each container as a separate Azure Container Instance (ACI) container group and use Azure Automation runbooks to create or delete instances based on metrics.

Best answer: B

Explanation: Choosing Azure Container Apps in a Container Apps environment with separate container apps for APIs and worker services best meets all the requirements:

  • It is a fully managed container platform; you do not manage VMs or Kubernetes nodes.
  • It supports HTTP ingress, giving the APIs public HTTPS endpoints.
  • It supports event-driven autoscaling rules driven by HTTP traffic and message queue length, matching the specified scaling behavior.
  • It integrates directly with Azure Container Registry and fits an Azure administrator’s responsibility scope without demanding deep Kubernetes expertise.
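As a heavily abbreviated sketch, one of the API container apps from option B could be declared in Bicep roughly as follows. The environment name, image reference, port, and scale thresholds are all assumptions for illustration:

```bicep
resource containerAppsEnv 'Microsoft.App/managedEnvironments@2023-05-01' existing = {
  name: 'cae-prod'
}

resource api 'Microsoft.App/containerApps@2023-05-01' = {
  name: 'orders-api'
  location: resourceGroup().location
  properties: {
    managedEnvironmentId: containerAppsEnv.id
    configuration: {
      ingress: {
        external: true // public HTTPS endpoint, TLS terminated by the platform
        targetPort: 8080
      }
    }
    template: {
      containers: [
        {
          name: 'orders-api'
          image: 'contosoacr.azurecr.io/orders-api:1.0'
        }
      ]
      scale: {
        minReplicas: 0
        maxReplicas: 10
        rules: [
          {
            // HTTP-based autoscaling; a queue-triggered worker app would
            // use a queue-length scale rule instead.
            name: 'http-scaling'
            http: {
              metadata: {
                concurrentRequests: '50'
              }
            }
          }
        ]
      }
    }
  }
}
```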

Question 12

Topic: Domain 3: Deploy and manage Azure compute resources

You administer a production Windows web app hosted in Azure App Service. The app stores its content in the wwwroot folder and uses an Azure SQL Database referenced via an app setting connection string. The app also uses Azure Key Vault for secrets and an Azure DNS zone for a custom domain.

Compliance requires:

  • The ability to restore both the web content and the database to any daily point within the last 14 days.
  • Backups of the app to be stored in a geo-redundant storage account in a different region from the web app.

You decide to use Azure App Service backup for the web app where appropriate and use native backup features for other services as needed.

Which of the following backup configurations should you AVOID? (Select THREE.)

Options:

  • A. Schedule App Service backup to run once per day, store backups in a geo-redundant storage account in another region, and include the Azure SQL Database by selecting its connection string in the backup configuration.
  • B. Assume that App Service backup will include all aspects of the App Service resource (such as custom domains, TLS/SSL bindings, and VNet integration), so you do not export or template those settings anywhere else.
  • C. Schedule App Service backup to run once per week, store backups in a locally redundant storage account in the same region as the web app, and include the Azure SQL Database via its connection string.
  • D. Protect the Azure SQL Database primarily by using its built-in automated backups and point-in-time restore, and configure App Service backup only for the web app’s content and configuration.
  • E. Rely on App Service backup to also capture Azure Key Vault secrets and the Azure DNS zone, so you do not configure any separate backup, export, or documentation for those services.

Correct answers: B, C, and E

Explanation: The configuration that schedules weekly backups to a locally redundant storage account in the same region is clearly unsuitable. Weekly backups cannot provide daily restore points, and locally redundant storage in the same region does not satisfy the requirement to keep backups regionally separate.

The configuration that relies on App Service backup to protect Azure Key Vault secrets and the Azure DNS zone is unsafe because App Service backup only understands the web app and supported databases; it does not reach into external services. Without separate backup or export for those services, their critical configuration is unprotected.

The configuration that assumes App Service backup will include all aspects of the App Service resource, such as custom domains, TLS/SSL bindings, and VNet integration, is also an anti-pattern. App Service backup focuses on app content and selected configuration (for example, app settings and connection strings), not all platform-level settings. Failing to export or template those settings means you cannot fully reconstruct the environment from App Service backups alone.


Question 13

Topic: Domain 1: Manage Azure identities and governance

Which of the following statements about Azure role assignment scopes is NOT correct?

Options:

  • A. A role assignment at a subscription scope is inherited by all resource groups and resources in that subscription.
  • B. A role assignment created at a specific resource scope automatically grants the same role at the parent resource group and subscription scopes.
  • C. To restrict a user’s permissions to a single resource, you can assign the role only at that specific resource’s scope.
  • D. Assigning a role at a management group scope applies that role to all subscriptions contained in that management group.

Best answer: B

Explanation: The statement claiming that a role assignment at a specific resource scope automatically grants the same role at the parent resource group and subscription is incorrect because RBAC inheritance in Azure is strictly top-down, not bottom-up. When you assign a role at a resource scope, the user gains permissions only on that resource, not on the containing resource group or subscription.

This makes it the statement that is NOT correct, and therefore the answer to this question.
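A resource-scoped assignment like the one option C describes can be sketched in Bicep as follows. The storage account name and the principal ID parameter are assumptions; the GUID is the built-in Reader role definition ID:

```bicep
@description('Object ID of the user or group to grant access.')
param principalId string

// Built-in Reader role definition (subscription-level resource ID)
var readerRoleId = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')

resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' existing = {
  name: 'contosostore001'
}

// Scoping the assignment to the storage account grants Reader on that
// resource only; nothing propagates upward to the resource group or
// subscription, because RBAC inheritance is strictly top-down.
resource readerAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(storage.id, principalId, readerRoleId)
  scope: storage
  properties: {
    roleDefinitionId: readerRoleId
    principalId: principalId
  }
}
```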


Question 14

Topic: Domain 3: Deploy and manage Azure compute resources

You administer an Azure VM that runs a 24x7 line-of-business database. Azure Backup creates a VM backup once per day at 00:00 and keeps daily backups for 30 days.

A new requirement states that, for disk recovery, you must have a recovery point objective (RPO) of no more than 45 minutes of data loss. You decide to keep the daily Azure Backup for long-term protection and add regular managed disk snapshots for short-term protection.

Snapshots will be taken at a fixed interval throughout the day. Assume worst-case data loss equals the time between snapshots, and that you want to minimize the total number of snapshots per day.

You may calculate snapshots per day as:

snapshots per day = 24 hours ÷ snapshot interval (in hours)

Which configuration should you use?

Options:

  • A. Keep daily Azure Backup and add managed disk snapshots every 30 minutes.
  • B. Keep daily Azure Backup and add managed disk snapshots every 45 minutes.
  • C. Keep daily Azure Backup and add managed disk snapshots every 2 hours.
  • D. Use only the existing daily Azure Backup at 00:00 and do not add snapshots.

Best answer: B

Explanation: The option that keeps daily Azure Backup and adds snapshots every 45 minutes is best because it:

  • Meets the RPO requirement: a 45‑minute interval gives a worst-case RPO of exactly 45 minutes.
  • Minimizes snapshots: 24 ÷ 0.75 = 32 snapshots per day, which is fewer than the 48 snapshots per day required with a 30‑minute interval.
  • Retains Azure Backup for full VM protection and 30‑day retention, while using snapshots specifically to improve short-term disk RPO.

Question 15

Topic: Domain 4: Configure and manage virtual networking

Which of the following statements about configuring DNS for Azure virtual networks are correct? (Select THREE.)

Options:

  • A. Setting custom DNS server IP addresses on a virtual network applies those DNS servers to all subnets in that VNet, unless a NIC has its own DNS override.
  • B. After you change the DNS servers on a virtual network, existing VMs in that VNet start using the new DNS servers after a restart or DHCP lease renewal.
  • C. To use a custom DNS server for a VM, you must always configure the DNS server IP directly on the VM's NIC; VNet-level DNS settings are ignored by VMs.
  • D. Azure-provided DNS can resolve private hostnames of VMs located in different VNets that are connected through virtual network peering.
  • E. If you do not specify any DNS servers on a virtual network, Azure automatically uses the Azure-provided DNS service for all VMs in that VNet.
  • F. DNS server IP addresses configured on a virtual network must be public IPs; Azure cannot use private IP addresses of DNS servers inside the VNet.

Correct answers: A, B, and E

Explanation: The statement that the VNet uses Azure-provided DNS by default is correct because Azure automatically assigns its internal DNS service to VNets unless you explicitly specify custom DNS servers.

The statement that setting custom DNS on a VNet applies to all subnets (with NIC-level overrides possible) is correct because VNet DNS is a VNet-wide setting. All subnets inherit it, and NIC-specific DNS is only used when you intentionally override the VNet defaults.

The statement that existing VMs begin using new VNet DNS settings after a restart or DHCP lease renewal is also correct. DNS configuration is delivered via DHCP, so VMs need a lease renewal (often triggered by a reboot) to acquire the updated DNS server list.
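In Bicep, VNet-level DNS servers are set through the dhcpOptions block; every subnet inherits them unless a NIC has its own override. The VNet name, address ranges, and DNS server addresses below are assumptions for the sketch:

```bicep
resource vnet 'Microsoft.Network/virtualNetworks@2023-04-01' = {
  name: 'vnet-core'
  location: resourceGroup().location
  properties: {
    addressSpace: {
      addressPrefixes: [ '10.10.0.0/16' ]
    }
    // VNet-wide custom DNS servers. Omit dhcpOptions entirely to fall
    // back to Azure-provided DNS for all VMs in the VNet.
    dhcpOptions: {
      dnsServers: [
        '10.10.0.4'
        '10.10.0.5'
      ]
    }
    subnets: [
      {
        name: 'snet-workload'
        properties: {
          addressPrefix: '10.10.1.0/24'
        }
      }
    ]
  }
}
```

After changing dnsServers, existing VMs pick up the new list only on a DHCP lease renewal or restart, as noted above.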


Question 16

Topic: Domain 1: Manage Azure identities and governance

You administer several Azure subscriptions used for production workloads. Leadership wants a single place to assign Azure Policy so the same security rules automatically apply to all production subscriptions. Which Azure construct should you use?

Options:

  • A. Tags on all production resources with a value of Environment=Prod
  • B. A management group that contains all production subscriptions
  • C. A dedicated subscription named Production for all workloads
  • D. A single resource group that contains all production resources

Best answer: B

Explanation: Using a management group that contains all production subscriptions is correct because management groups sit above subscriptions and are explicitly designed to provide centralized governance. Assigning Azure Policy at the management group scope ensures that all child subscriptions automatically receive and enforce the same policy set without needing per-subscription configuration.


Question 17

Topic: Domain 3: Deploy and manage Azure compute resources

Which TWO statements about configuring environment variables, secrets, and networking for Azure Container Instances are NOT correct or represent unsafe guidance? (Select TWO.)

Options:

  • A. When you create an ACI container group with a public IP address, the container’s exposed ports are reachable from the internet unless you restrict access at the application layer or place it behind other Azure security components such as Azure Firewall or Application Gateway.
  • B. For sensitive configuration values, you should prefer storing secrets in Azure Key Vault and letting the container retrieve them using a managed identity rather than embedding them directly in the container image or deployment template.
  • C. If an ACI container group is deployed into a subnet of a virtual network, you can use network security groups (NSGs) associated with that subnet to control which inbound and outbound traffic is allowed to reach the containers.
  • D. A container group that is deployed into a subnet of an Azure virtual network receives only a private IP address and does not expose a direct public endpoint from Azure Container Instances.
  • E. An ACI container group can be configured to simultaneously use a public IP address for internet access and a private IP address from a virtual network subnet for internal access on the same set of container ports.
  • F. It is safe to store database passwords as plain-text environment variables in an Azure Container Instances (ACI) container group because environment variables are automatically hidden from anyone who has access to the Azure portal.

Correct answers: E and F

Explanation: The incorrect or unsafe statements are:

  • The statement that it is safe to store database passwords as plain-text environment variables in ACI because environment variables are automatically hidden in the portal. This is unsafe; anyone with appropriate access can still retrieve them via the portal, CLI, or template export. Environment variables should not be relied on as a secure secret store for high-value secrets.

  • The statement that an ACI container group can simultaneously use both a public IP and a private IP from a VNet subnet for inbound access. ACI does not support dual inbound IP configurations on a single container group in this way. You either deploy the group with a public IP (no VNet integration) or into a VNet (private IP only), not both together for the same container group.
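For contrast, here is a hedged Azure CLI sketch of the two supported deployment shapes. Resource names are illustrative, and in real deployments the secret should come from Key Vault via a managed identity rather than the command line:

```shell
# Public container group: exposed ports are internet-reachable; use
# --secure-environment-variables so values are masked in portal/CLI output.
az container create \
  --resource-group rg-apps \
  --name orders-api \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --ip-address Public \
  --ports 80 \
  --secure-environment-variables DB_PASSWORD=<secret-value>

# VNet-integrated container group: receives a private IP only,
# with no direct public endpoint from ACI.
az container create \
  --resource-group rg-apps \
  --name orders-api-private \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --vnet vnet-apps \
  --subnet aci-subnet
```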


Question 18

Topic: Domain 4: Configure and manage virtual networking

You host an internal order-processing API on three Azure VMs in subnet backend-subnet of vnet-prod. The VMs are behind an Azure load balancer named prod-ilb. Only clients inside vnet-prod must access the API; no internet traffic should reach the service.

The following exhibit shows the current configuration of prod-ilb.

  • SKU: Standard
  • Type: Public load balancer
  • Frontend IP config name: prod-ilb-fe
  • Frontend IP type: Public
  • Frontend IP address: 52.160.10.24
  • Public IP resource: prod-ilb-pip
  • Virtual network: vnet-prod (10.20.0.0/16)
  • Backend pool: backendpool (3 NICs)
  • Load-balancing rule: TCP 443 from prod-ilb-fe

You must change the configuration so that traffic is distributed privately within the virtual network and the API is not reachable from the internet.

Based on the information in the exhibit, what should you do?

Options:

  • A. Change the load balancer SKU from Standard to Basic so the public IP can no longer receive traffic from the internet.
  • B. Delete the existing frontend IP configuration and create a new frontend IP configuration that uses a private IP address on backend-subnet, then update the load-balancing rule to use the new frontend.
  • C. Keep the existing public frontend and attach a network security group to backend-subnet that denies all inbound traffic from the internet.
  • D. Enable Floating IP (Direct Server Return) on the existing TCP 443 load-balancing rule so only private traffic is distributed.

Best answer: B

Explanation: Creating a new frontend IP configuration with a private IP address on backend-subnet and updating the load-balancing rule to use that frontend directly implements an internal load balancer pattern. The frontend is no longer bound to a public IP, so there is no internet-facing endpoint, and traffic is distributed privately within vnet-prod to the three VM NICs in backendpool. This exactly matches the requirement for private-only traffic distribution within the virtual network.
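A hedged CLI sketch of the frontend swap is below. Names beyond those in the question are illustrative, and note that a single load balancer cannot mix public and private frontends, so in practice this change may require recreating the load balancer as internal:

```shell
# Create a private frontend on backend-subnet.
az network lb frontend-ip create \
  --resource-group rg-prod \
  --lb-name prod-ilb \
  --name prod-ilb-fe-private \
  --vnet-name vnet-prod \
  --subnet backend-subnet \
  --private-ip-address 10.20.1.10

# Point the existing TCP 443 rule at the new private frontend.
az network lb rule update \
  --resource-group rg-prod \
  --lb-name prod-ilb \
  --name <rule-name> \
  --frontend-ip-name prod-ilb-fe-private
```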


Question 19

Topic: Domain 3: Deploy and manage Azure compute resources

You manage an Azure VM running Windows Server that hosts a SQL Server database. The VM uses a premium SSD OS disk and a separate premium SSD data disk that stores only the database transaction log files. The log volume is write-intensive with very few reads. You must:

  • improve general OS responsiveness;
  • follow Microsoft-recommended settings for write-intensive database log disks;
  • keep the configuration simple using built-in host caching options.

Which disk caching configuration should you use?

Options:

  • A. Configure the OS disk with None (no caching) and the data (log) disk with Read/Write caching.
  • B. Configure the OS disk with Read-only caching and the data (log) disk with Read-only caching.
  • C. Configure the OS disk with Read/Write caching and the data (log) disk with None (no caching).
  • D. Configure the OS disk with Read/Write caching and the data (log) disk with Read/Write caching.

Best answer: C

Explanation: The choice that configures Read/Write caching on the OS disk and None (no caching) on the data/log disk best matches all requirements.

  • The OS disk with Read/Write caching improves general OS responsiveness and aligns with the default, recommended setting for most Azure VM OS disks.
  • The write-intensive transaction log disk with caching set to None follows Microsoft guidance for database log volumes, avoiding host caching overhead and potential write performance degradation.
  • This uses only built-in host caching modes and keeps the configuration simple while directly addressing the different I/O patterns of OS and log disks.
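A hedged CLI sketch of applying these caching modes follows; the VM and resource-group names are illustrative, and changing caching on an attached disk may briefly disrupt I/O, so plan a maintenance window:

```shell
# OS disk: Read/Write caching (the default for most OS disks).
az vm update \
  --resource-group rg-sql \
  --name sqlvm01 \
  --set storageProfile.osDisk.caching=ReadWrite

# Data disk at index 0 (assumed here to be the log disk): no caching.
az vm update \
  --resource-group rg-sql \
  --name sqlvm01 \
  --set storageProfile.dataDisks[0].caching=None
```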

Question 20

Topic: Domain 3: Deploy and manage Azure compute resources

Which TWO of the following statements about Azure virtual machine availability sets are INCORRECT? (Select TWO.)

Options:

  • A. An availability set distributes its virtual machines across multiple Azure regions to protect against a regional outage.
  • B. Virtual machines in an availability set are placed into multiple fault domains so that a rack or power failure is less likely to affect all VMs at the same time.
  • C. You can add any existing virtual machine to an availability set at any time without redeploying or recreating the VM.
  • D. Deploying two or more VMs in the same availability set qualifies them for a higher virtual machine SLA than a single standalone VM in the same region.
  • E. Availability sets use update domains so that planned platform maintenance is applied to different subsets of VMs at different times, helping keep the application online.

Correct answers: A and C

Explanation: The statement that availability sets distribute VMs across multiple Azure regions is incorrect because availability sets are strictly a regional construct; they operate only within one datacenter in a single region. Cross‑region resilience requires additional deployments, such as a second set of VMs in another region.

The statement that you can add any existing VM to an availability set at any time without redeploying is also incorrect. Azure requires that a VM be created within an availability set so it can be placed appropriately in fault and update domains. Moving an existing VM into an availability set involves deletion and redeployment (or recreation) of the VM, not a simple property change.
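A hedged CLI sketch shows why membership is a create-time decision (names are illustrative):

```shell
# Create the availability set first, with explicit fault and update domains.
az vm availability-set create \
  --resource-group rg-prod \
  --name avset-web \
  --platform-fault-domain-count 2 \
  --platform-update-domain-count 5

# The VM must be created inside the set; there is no flag to move an
# existing VM into an availability set without recreating it.
az vm create \
  --resource-group rg-prod \
  --name web01 \
  --image Ubuntu2204 \
  --availability-set avset-web \
  --admin-username azureuser \
  --generate-ssh-keys
```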


Question 21

Topic: Domain 2: Implement and manage storage

Which THREE statements about using Microsoft-managed keys versus customer-managed keys for Azure Storage encryption at rest are correct? (Select THREE.)

Options:

  • A. Customer-managed keys are stored only inside the storage account itself and cannot be integrated with Azure Key Vault or a managed HSM.
  • B. Customer-managed keys are stored in Azure Key Vault or a managed HSM, and you control their creation, rotation, and deletion.
  • C. Organizations that must prove control of encryption keys or enforce separation of duties often choose customer-managed keys for compliance reasons.
  • D. If a customer-managed key becomes unavailable, Azure automatically falls back to Microsoft-managed keys so data is always decrypted transparently.
  • E. With Microsoft-managed keys, you can download the raw key material from Azure and import it into on-premises HSM devices.
  • F. With Microsoft-managed keys, Azure automatically creates, manages, and rotates the encryption keys without any customer configuration.

Correct answers: B, C, and F

Explanation: The statement that Microsoft-managed keys are created, managed, and rotated automatically is correct because server-side encryption (SSE) with Microsoft-managed keys is the default behavior; Azure handles everything without any customer configuration.

The statement that customer-managed keys are stored in Azure Key Vault or a managed HSM and that you control their lifecycle is correct; CMKs are integrated with those services and you define policies, rotation, and deletion.

The statement about organizations with strict compliance or separation-of-duties requirements preferring customer-managed keys is also correct. Customer-managed keys provide clear ownership, audit logs, and explicit control over key usage, which is important for many regulated industries.
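A hedged CLI sketch of switching between the two models is below. The vault, key, and account names are illustrative, and the storage account's managed identity also needs Key Vault key permissions, which are omitted here:

```shell
# Move the account to a customer-managed key held in Key Vault.
az storage account update \
  --resource-group rg-data \
  --name mystorageacct \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault https://kv-prod.vault.azure.net \
  --encryption-key-name storage-cmk

# Revert to Microsoft-managed keys (the default).
az storage account update \
  --resource-group rg-data \
  --name mystorageacct \
  --encryption-key-source Microsoft.Storage
```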


Question 22

Topic: Domain 2: Implement and manage storage

You administer an Azure Storage account for the marketing team, which will host product images in a new blob container and reference them from a public website. Images must be readable anonymously, but users must not list all blobs or view container metadata. You want to use a simple built-in setting, not shared access signatures. Which public access level should you configure?

Options:

  • A. Container set to private while distributing the storage account access key to the website
  • B. Container (anonymous read access for container and blobs)
  • C. Private (no anonymous access to blobs or the container)
  • D. Blob (anonymous read access for blobs only)

Best answer: D

Explanation: The choice that sets the container to Blob public access is correct because it:

  • Allows anonymous read access to blobs so the public website can load images directly via URL.
  • Prevents anonymous users from listing all blobs or reading container metadata, satisfying the security requirement.
  • Uses a simple built-in public access level without introducing shared access signatures or additional complexity.

This aligns exactly with the scenario’s functional and security needs.
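A hedged CLI sketch follows; the account and container names are illustrative, and the account-level switch must also permit public access before the container setting takes effect:

```shell
# Allow blob-level public access at the account scope.
az storage account update \
  --resource-group rg-marketing \
  --name mktstorage01 \
  --allow-blob-public-access true

# Create the container with anonymous read on blobs only;
# anonymous listing of the container stays blocked.
az storage container create \
  --account-name mktstorage01 \
  --name product-images \
  --public-access blob \
  --auth-mode login
```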


Question 23

Topic: Domain 4: Configure and manage virtual networking

You manage an Azure virtual network with two subnets: AppSubnet for application VMs and AdminSubnet for management tools. The application stores data in an Azure Storage account named appdatawest. You must:

  • Allow the app VMs in AppSubnet to access appdatawest.
  • Block access to appdatawest from the internet and from AdminSubnet.
  • Keep using the storage account’s public endpoint (no private endpoints).
  • Ensure traffic between AppSubnet and appdatawest stays on the Azure backbone.

Which configuration should you implement to meet these requirements with the least complexity?

Options:

  • A. Enable a Microsoft.Storage service endpoint on AppSubnet and configure appdatawest to allow access only from that virtual network subnet using Selected networks.
  • B. Create a private endpoint for appdatawest in AdminSubnet and disable public network access for the storage account.
  • C. Add an NSG to AppSubnet that allows outbound traffic only to the public IP of appdatawest and denies all other outbound internet traffic.
  • D. Create a user-defined route on AppSubnet that sends all traffic for appdatawest through an Azure Firewall, and allow only appdatawest’s public IP in the firewall rules.

Best answer: A

Explanation: Enabling a Microsoft.Storage service endpoint on AppSubnet and configuring the storage account to allow only that virtual network subnet with Selected networks is the only option that:

  • Ties access control to a specific subnet (AppSubnet) at the storage account level.
  • Keeps the storage account’s public endpoint in use, as required.
  • Forces traffic from AppSubnet to traverse the Azure backbone to the storage service rather than the public internet.
  • Blocks all other sources, including AdminSubnet and general internet traffic, with minimal configuration changes.

This directly uses the designed combination of service endpoints plus storage firewall rules to secure PaaS access by subnet.
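A hedged CLI sketch of the three steps follows; the virtual network name and resource groups are illustrative assumptions, since the question does not name them:

```shell
# 1. Enable the Microsoft.Storage service endpoint on AppSubnet.
az network vnet subnet update \
  --resource-group rg-net \
  --vnet-name vnet-app \
  --name AppSubnet \
  --service-endpoints Microsoft.Storage

# 2. Switch the storage account firewall to deny by default.
az storage account update \
  --resource-group rg-data \
  --name appdatawest \
  --default-action Deny

# 3. Allow only AppSubnet through the storage firewall.
az storage account network-rule add \
  --resource-group rg-data \
  --account-name appdatawest \
  --vnet-name vnet-app \
  --subnet AppSubnet
```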


Question 24

Topic: Domain 5: Monitor and maintain Azure resources

You manage several production Azure Storage accounts that hold customer data. Security requires that:

  • all read/write operations are searchable with Kusto queries for 90 days;
  • logs are retained for 3 years in low-cost storage;
  • no sensitive logs are exposed publicly.

You will use diagnostic settings on each storage account. Which configuration is INCORRECT?

Options:

  • A. Send all log categories to a Log Analytics workspace with 90-day retention, and send all logs to a storage account in another region secured with a private endpoint and 3-year retention.
  • B. Send all log categories and metrics to a Log Analytics workspace with 90-day retention, and send all logs to a storage account that is secured with a private endpoint and 3-year retention.
  • C. Send only metrics to a Log Analytics workspace with 90-day retention, and send all logs and metrics to a storage account configured for public anonymous read access with 3-year retention.
  • D. Send StorageRead, StorageWrite, and StorageDelete logs and metrics to a Log Analytics workspace with 90-day retention, and send all logs to a private storage account with 3-year retention.

Best answer: C

Explanation: The configuration that sends only metrics to a Log Analytics workspace and stores logs in a storage account with public anonymous read access is incorrect because it misses two key requirements:

  • No searchable logs in Log Analytics: Only metrics are sent to Log Analytics, so resource log categories like StorageRead and StorageWrite are not available for Kusto queries. This violates the requirement to have all read/write operations searchable for 90 days.
  • Public exposure of logs: The storage account is configured for public anonymous read access, which can expose sensitive log data to the internet, directly contradicting the requirement that no sensitive logs are exposed publicly.

Because it both fails the functional requirement (searchability) and the security requirement (no public exposure), this diagnostic configuration is unambiguously wrong.
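For reference, a hedged CLI sketch of a compliant diagnostic setting is below. The resource IDs are placeholders, and the 90-day and 3-year retentions are configured on the workspace and the archive account respectively, not in this command:

```shell
# Send read/write/delete logs to Log Analytics (Kusto-searchable) and
# archive all logs to a private storage account for long-term retention.
az monitor diagnostic-settings create \
  --name storage-audit \
  --resource <blob-service-resource-id> \
  --workspace <log-analytics-workspace-id> \
  --storage-account <private-archive-account-id> \
  --logs '[{"category":"StorageRead","enabled":true},{"category":"StorageWrite","enabled":true},{"category":"StorageDelete","enabled":true}]' \
  --metrics '[{"category":"Transaction","enabled":true}]'
```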

Need deeper concept review first?

If you want concept-first reading before heavier simulator work, use the companion guide at TechExamLexicon.com.

Revised on Sunday, April 26, 2026