Try 50 free AZ-104 practice questions across the exam domains, with explanations, then continue with full IT Mastery practice. This free full-length practice exam contains 50 original IT Mastery questions spanning all five AZ-104 domains.
These questions are for self-assessment. They are not official exam questions and do not imply affiliation with the exam sponsor.
Count note: this page uses the full-length practice count maintained in the Mastery exam catalog. Some certification vendors publish total questions, scored questions, duration, or unscored/pretest-item rules differently; always confirm exam-day rules with the sponsor.
Need concept review first? Read the AZ-104 Cheat Sheet on Tech Exam Lexicon, then return here for timed mocks and full IT Mastery practice.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
| Domain | Weight |
|---|---|
| Manage Azure Identities and Governance | 15% |
| Implement and Manage Storage | 20% |
| Deploy and Manage Azure Compute Resources | 25% |
| Configure and Manage Virtual Networking | 25% |
| Monitor and Maintain Azure Resources | 15% |
Use this as one diagnostic run. IT Mastery gives you timed mocks, topic drills, analytics, code-reading practice where relevant, and full practice.
Topic: Implement and Manage Storage
You administer several Azure Storage accounts used by internal line-of-business apps and public-facing services. Your security team requires that all data to and from Azure Storage be encrypted in transit and that unencrypted HTTP access be blocked wherever possible.
Which of the following configuration approaches should you AVOID because they violate this requirement or Azure best practices for enforcing encryption in transit? (Select THREE.)
Options:
A. For SMB access to Azure Files from Windows 10 clients over a site‑to‑site VPN, keep ‘secure transfer required’ enabled and ensure clients connect using SMB 3.0 with encryption support.
B. Use connection strings in internal applications that specify http://<account>.blob.core.windows.net endpoints to reduce TLS overhead on high‑throughput workloads.
C. Leave the ‘secure transfer required’ setting disabled on storage accounts that host public website content so users can continue to access blobs over HTTP.
D. Create an Azure Policy definition that explicitly permits storage accounts with ‘secure transfer required’ disabled in the production subscription so legacy applications are not disrupted.
E. Enable the ‘secure transfer required’ setting on all storage accounts and update any hard‑coded storage endpoints in applications to use HTTPS URLs instead of HTTP.
F. Use an Azure Policy with a ‘Deny’ effect to block creation of any new storage account where ‘secure transfer required’ is set to Disabled.
Correct answers: B, C and D
Explanation: The ‘secure transfer required’ setting on an Azure Storage account enforces that clients use secure protocols such as HTTPS for REST endpoints and SMB 3.0 with encryption for file shares. When this setting is enabled, requests made over HTTP are rejected, helping ensure that data in transit is protected against eavesdropping and tampering.
To meet a requirement that all data to and from Azure Storage is encrypted in transit, you must both enable ‘secure transfer required’ and configure clients (applications, scripts, tools, and SMB clients) to use secure protocols and HTTPS URLs. You should avoid any configuration that leaves ‘secure transfer required’ disabled, uses HTTP endpoints, or deliberately weakens policy enforcement around secure transfer.
Azure Policy can be used to enforce this setting at scale by auditing or denying storage accounts that do not have ‘secure transfer required’ enabled. This is preferable to relying solely on network controls such as NSGs, which do not enforce encryption themselves.
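For readers who want to see this in template form, here is a minimal Bicep sketch of both controls. The account name and assignment name are placeholders, and the GUID shown for the built-in "Secure transfer to storage accounts should be enabled" policy definition should be verified in your tenant before use:

```bicep
// Sketch: enforce HTTPS-only on a storage account and deny insecure accounts via policy.
resource sa 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'contosodata001' // placeholder account name
  location: resourceGroup().location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
  properties: {
    supportsHttpsTrafficOnly: true // the 'secure transfer required' setting
    minimumTlsVersion: 'TLS1_2'
  }
}

resource denyInsecureTransfer 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
  name: 'deny-insecure-storage'
  properties: {
    // GUID is illustrative; confirm the built-in definition ID in your tenant
    policyDefinitionId: tenantResourceId('Microsoft.Authorization/policyDefinitions', '404c3081-a854-4457-ae30-26a93ef643f9')
    parameters: {
      effect: { value: 'Deny' }
    }
  }
}
```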
Topic: Deploy and Manage Azure Compute Resources
Which of the following statements about using ARM templates or Bicep files instead of manual Azure portal deployments are INCORRECT? (Select THREE.)
Options:
A. Storing ARM templates or Bicep files in source control improves compliance and change auditing for infrastructure.
B. They are well suited for deploying the same infrastructure consistently to development, test, and production environments.
C. Bicep files can only be used interactively through the Azure portal and cannot be run from the Azure CLI or pipelines.
D. Manual portal deployments are generally preferable to templates when you must redeploy identical infrastructure many times.
E. ARM templates and Bicep files cannot define RBAC role assignments or policy assignments, so these must always be configured manually in the portal.
F. Using infrastructure as code helps reduce configuration drift between environments over time.
Correct answers: C, D and E
Explanation: ARM templates and Bicep files implement infrastructure as code, letting you define Azure resources declaratively and deploy them consistently. They shine in scenarios where you need repeatable, auditable, and automated deployments, such as multi-environment setups and environments with strict compliance requirements.
Using manual portal clicks is fine for quick experiments or one-off changes, but it is error-prone and hard to reproduce. Templates and Bicep files can be stored in source control to track changes, reviewed for compliance, and used in automation pipelines. They can define a wide range of Azure resources, including RBAC role assignments and Azure Policy assignments, helping you codify both infrastructure and governance.
The incorrect statements claim that Bicep can only be used in the portal, that manual portal deployments are better for repeated identical deployments, and that ARM/Bicep cannot manage RBAC or policy. All of these conflict with how infrastructure as code and Azure deployment tooling are intended to work.
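To ground the RBAC point, here is a minimal Bicep sketch of a role assignment deployed from the CLI rather than the portal. The principal ID and resource group name are assumptions supplied at deployment time:

```bicep
// Sketch: a role assignment defined in Bicep and deployed from the CLI, for example:
//   az deployment group create --resource-group rg-demo --template-file main.bicep
param principalId string // object ID of the user, group, or managed identity (supplied at deploy time)

// Built-in Reader role definition ID
var readerRoleId = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')

resource readerAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(resourceGroup().id, principalId, readerRoleId) // deterministic name, unique per scope/principal/role
  properties: {
    roleDefinitionId: readerRoleId
    principalId: principalId
  }
}
```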
Topic: Manage Azure Identities and Governance
Which of the following statements about removing Azure role-based access control (RBAC) role assignments is NOT correct?
Options:
A. If you remove a user’s role assignment at resource group scope, their permissions on resources in that resource group are unaffected because access is always configured directly on each resource.
B. Deleting a role assignment that grants the Owner role at subscription scope is an appropriate way to clean up over-privileged accounts that no longer require broad administrative access.
C. Role assignments are evaluated cumulatively, so removing one assignment reduces a principal’s permissions only if no other role assignments grant the same actions.
D. If you remove a role assignment at a higher scope, such as a subscription, a user might still have access through a separate assignment at a lower scope, such as a resource group or individual resource.
Best answer: A
Explanation: Azure RBAC controls access to Azure resources through role assignments at different scopes (management group, subscription, resource group, or resource). Permissions are inherited from parent scopes to child resources and are additive across all role assignments.
When cleaning up unnecessary or inappropriate role assignments, you must understand how inheritance and cumulative permissions work. Removing an assignment at a parent scope can remove broad access to many resources, while separate assignments at lower scopes can still grant access. Likewise, removing an over-privileged role such as Owner at subscription scope is a common way to reduce risk when a user no longer needs that level of control.
The incorrect statement is the one that claims removing a role assignment at resource group scope does not affect access to resources in that group. In reality, resource group–level assignments flow down to all resources in that group, so removing them usually revokes those inherited permissions unless another role assignment still grants access.
Topic: Configure and Manage Virtual Networking
In a public Azure DNS zone for contoso.com, you need to map the hostname www.contoso.com directly to the IPv4 address 52.160.1.10. Which type of DNS record should you create for www?
Options:
A. A CNAME record that points www to 52.160.1.10
B. A TXT record containing the IPv4 address 52.160.1.10
C. An AAAA record with the IPv4 address 52.160.1.10
D. An A record with the IPv4 address 52.160.1.10
Best answer: D
Explanation: In Azure DNS, a public DNS zone such as contoso.com holds records that map hostnames to IP addresses or to other hostnames. When you need a hostname like www.contoso.com to resolve directly to an IPv4 address, you use an A record.
An A record is specifically designed to map a host (for example, www) to an IPv4 address (for example, 52.160.1.10). When clients query DNS for www.contoso.com, the A record is returned and the client uses that IP address to reach the resource (for example, a web server).
Other record types such as AAAA, CNAME, and TXT serve different purposes and do not satisfy the requirement of directly mapping a hostname to an IPv4 address in this scenario.
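For reference, a minimal Bicep sketch of the record from this question, assuming the contoso.com zone already exists in the same resource group:

```bicep
// Sketch: an A record mapping www.contoso.com to 52.160.1.10 in an existing zone.
resource zone 'Microsoft.Network/dnsZones@2018-05-01' existing = {
  name: 'contoso.com'
}

resource www 'Microsoft.Network/dnsZones/A@2018-05-01' = {
  parent: zone
  name: 'www' // relative record name; resolves as www.contoso.com
  properties: {
    TTL: 3600
    ARecords: [
      { ipv4Address: '52.160.1.10' }
    ]
  }
}
```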
Topic: Deploy and Manage Azure Compute Resources
You manage a Bicep file that deploys a single virtual machine.
```bicep
param location string = resourceGroup().location
param vmName string

resource vm 'Microsoft.Compute/virtualMachines@2023-03-01' = {
  name: vmName
  location: location
  tags: {
    project: 'App1'
  }
  // VM properties omitted for brevity
}
```
You must update the file to:

- Add an environment tag whose value is supplied at deployment time.
- Deploy additional virtual machines that follow the same pattern.
Several colleagues propose the following changes.
Which proposed modification is INCORRECT and should NOT be implemented in the Bicep file?
Options:
A. Add a new param environment string and update the tags block to include environment: environment so the tag value is passed in at deployment time.
B. Replace the single vmName parameter with an array parameter vmNames and use a for-expression loop to create one Microsoft.Compute/virtualMachines resource for each value in vmNames.
C. Add param adminPassword string = 'P@ssw0rd!' and use this parameter as the admin password for all VMs in the OS profile section of the resources.
D. Add a second Microsoft.Compute/virtualMachines resource named vm2 whose name property is set to '${vmName}-02' and that reuses the same location and tag structure as the first VM.
Best answer: C
Explanation: The scenario focuses on making straightforward changes to an existing Bicep file: adding or updating simple resource properties (tags) and adding more resources of the same type (another VM). Several of the proposed modifications do exactly that in a safe, maintainable way.
The only clearly unsafe proposal is to introduce a plain string parameter with a hard-coded default value for an administrator password. Infrastructure-as-code files such as Bicep are often stored in source control and shared across teams. Storing secrets directly in those files, especially as default values, leaks credentials and violates security best practices.
In Azure deployments, administrator passwords and other secrets should be handled as secure parameters (for example, using the @secure() decorator in Bicep) without default values and ideally integrated with Azure Key Vault so the actual secret values are not stored in the template or code repository.
The other options either add a parameterized tag, add a second VM resource, or use a loop to create multiple VMs from an array of names. All of these are acceptable ways to modify the Bicep file to meet the functional requirements without introducing security issues.
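A minimal sketch of those safe patterns, reusing the question's own conventions (VM names are illustrative, and the unused parameter would be consumed by the omitted OS profile):

```bicep
// Sketch: the safe variants of the proposed changes.
@secure()
param adminPassword string // no default value; supply at deploy time or reference Key Vault

param location string = resourceGroup().location
param environment string // tag value passed in at deployment time
param vmNames array = ['vm-01', 'vm-02'] // illustrative names

resource vms 'Microsoft.Compute/virtualMachines@2023-03-01' = [for name in vmNames: {
  name: name
  location: location
  tags: {
    project: 'App1'
    environment: environment
  }
  // VM properties (including the OS profile that consumes adminPassword) omitted for brevity
}]
```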
Topic: Manage Azure Identities and Governance
Which TWO statements about how Azure role-based access control (RBAC) role assignments and scope inheritance work are correct? (Select TWO.)
Options:
A. You can block inheritance of a management group role assignment on a specific subscription by enabling a setting on that subscription.
B. If you assign a user the Contributor role at the subscription scope, they automatically have Contributor permissions on all existing and new resource groups in that subscription.
C. You can create additional role assignments at a resource group or resource scope to grant extra permissions beyond what a principal inherits from a higher scope.
D. Role assignments are not inherited; you must assign roles separately at the subscription, resource group, and resource scopes.
E. A more restrictive role assignment at a resource scope overrides broader permissions inherited from a subscription, effectively reducing the user’s rights on that resource.
Correct answers: B and C
Explanation: Azure RBAC is built around scopes arranged in a hierarchy: management groups at the top, then subscriptions, resource groups, and individual resources at the bottom. A role assignment at any scope grants permissions that automatically flow down to all child scopes beneath it.
For example, assigning the Contributor role at the subscription level means that identity has Contributor rights on every resource group and resource in that subscription, including ones that are created in the future. This makes it easy to set broad administrative access without having to reassign roles on each new resource.
RBAC is additive. A principal’s effective permissions are the union of all role assignments that apply to them across all scopes. You can add more permissions at a lower scope (such as a resource group) to let someone do extra actions on a subset of resources. However, you cannot use a more restrictive role at a lower scope to take away broader permissions inherited from a higher scope. To reduce access, you must remove or change the higher-scope assignment (or adjust which identities receive that assignment).
Topic: Manage Azure Identities and Governance
Which TWO of the following statements about Azure Advisor recommendations are NOT correct? (Select TWO.)
Options:
A. Azure Advisor can automatically apply all high-severity security recommendations to your resources as soon as they are generated.
B. Azure Advisor is available at no additional charge, although following its recommendations might increase or decrease your overall Azure spend depending on the changes you make.
C. Azure Advisor analyzes your Azure resource configuration and usage telemetry and generates recommendations across categories such as cost, security, reliability, performance, and operational excellence.
D. You can filter Azure Advisor recommendations by subscription and resource type, and you can export them for further analysis using tools such as CSV export or the Advisor API.
E. You can integrate Azure Advisor with Azure Monitor alerts so that administrators are notified when new recommendations are created for selected scopes.
F. Dismissing or suppressing a specific Advisor recommendation for a resource is permanent and cannot be reversed later.
Correct answers: A and F
Explanation: Azure Advisor is a free service that analyzes your Azure resource configuration and usage telemetry, then surfaces recommendations across categories such as cost, security, reliability, performance, and operational excellence. It is a guidance tool: it highlights opportunities to improve, but it does not automatically change your environment.
The two incorrect statements are the ones claiming that Advisor automatically applies high-severity security recommendations and that suppressing a recommendation is permanent. In reality, administrators remain in control of when and how to apply recommendations, and they can manage or remove suppressions at any time. The other statements accurately describe Advisor capabilities such as recommendation categories, filtering, export, alerts, and pricing.
Topic: Deploy and Manage Azure Compute Resources
You are deploying an Azure Container Instances (ACI) container group to host an internal REST API that connects to an Azure SQL Database.
Requirements:
- The API must support 6 concurrent requests, each consuming up to 0.25 vCPU.
- The container group must be deployed into backend-subnet of an existing virtual network; it must not have a public IP address.
- The database connection string must not be visible in clear text in the Azure portal after deployment.
- Assume that the total required vCPUs is the number of concurrent requests multiplied by the vCPU per request, and you must choose a vCPU value that is greater than or equal to that total.
Which container group configuration should you use? (Choose the single best answer.)
Options:
A. 1.5 vCPUs; assign a public IP address with a DNS label; retrieve the connection string from Azure Key Vault at deployment time and inject it into a standard environment variable.
B. 2 vCPUs; deploy the group into backend-subnet with a private IP address; store the connection string as a standard (non-secure) environment variable.
C. 1.5 vCPUs; deploy the group into backend-subnet with a private IP address; store the connection string as a secure environment variable in the container group definition.
D. 1 vCPU; assign a public IP address with a DNS label; store the connection string as a secure environment variable in the container group definition.
Best answer: C
Explanation: To pick the correct Azure Container Instances configuration, you must satisfy three types of requirements: CPU capacity, networking isolation, and secret handling.
For CPU capacity, the stem states that each concurrent request uses up to 0.25 vCPU and you must support 6 concurrent requests. The total required vCPUs is:
\[ 6 \text{ requests} \times 0.25\,\text{vCPU/request} = 1.5\,\text{vCPUs} \]

Because ACI only allows choosing 1, 1.5, or 2 vCPUs in this scenario, and you must select a value greater than or equal to the requirement, 1.5 vCPUs is the lowest acceptable value.
For networking, the API must be reachable only from backend-subnet in a virtual network. That means the container group must be deployed with VNet integration and a private IP address, and it must not be assigned a public IP or DNS label.
For secret handling, the database connection string must not be visible in clear text in the Azure portal after deployment. In ACI, this is achieved by configuring the value as a secure environment variable (or designing the app to fetch it from Key Vault at runtime), not as a standard environment variable, which is visible in the portal and via API.
The configuration that uses 1.5 vCPUs, deploys into backend-subnet with a private IP, and stores the connection string as a secure environment variable is therefore the only option that meets all stated requirements while using the minimum required CPU allocation.
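As a sketch of that configuration in Bicep, with the container name, image, and port as assumptions (Bicep has no float literals, so json() supplies the fractional CPU value):

```bicep
// Sketch: private VNet deployment, 1.5 vCPUs, and a secure environment variable.
param subnetId string // resource ID of backend-subnet
@secure()
param sqlConnectionString string

resource api 'Microsoft.ContainerInstance/containerGroups@2023-05-01' = {
  name: 'internal-api' // placeholder name
  location: resourceGroup().location
  properties: {
    osType: 'Linux'
    subnetIds: [ { id: subnetId } ] // VNet integration; no public IP or DNS label
    ipAddress: {
      type: 'Private'
      ports: [ { protocol: 'TCP', port: 443 } ]
    }
    containers: [
      {
        name: 'api'
        properties: {
          image: 'contoso.azurecr.io/internal-api:latest' // placeholder image
          resources: {
            requests: { cpu: json('1.5'), memoryInGB: json('2.0') } // 6 x 0.25 vCPU = 1.5
          }
          ports: [ { protocol: 'TCP', port: 443 } ]
          environmentVariables: [
            { name: 'SQL_CONNECTION_STRING', secureValue: sqlConnectionString } // not shown in portal/API
          ]
        }
      }
    ]
  }
}
```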
Topic: Configure and Manage Virtual Networking
You administer an Azure environment with two peered virtual networks: prod-vnet and dev-vnet. Several Azure Storage accounts expose blob services only through private endpoints in these VNets.
You have the following requirements:
- VMs in both prod-vnet and dev-vnet must resolve each storage account's standard FQDN (for example, mystorage.blob.core.windows.net) to the private endpoint IPs.
- Name resolution must work without any client-side DNS changes on the VMs.
- The private endpoint records must not be published to public DNS.

Which of the following actions/solutions will meet these requirements? (Select THREE.)
Options:
A. Link both prod-vnet and dev-vnet to the privatelink.blob.core.windows.net private DNS zone with auto-registration disabled (resolution only).
B. Create an Azure private DNS zone named privatelink.blob.core.windows.net in the subscription.
C. When creating each blob private endpoint, choose the existing privatelink.blob.core.windows.net private DNS zone so that the necessary A records are created automatically.
D. Create a private DNS zone named contoso.internal and add A records mapping each storage account FQDN (for example, mystorage.blob.core.windows.net) to the corresponding private endpoint IP.
E. In the public DNS zone hosted at your domain registrar, create A records for each storage account FQDN (for example, mystorage.blob.core.windows.net) pointing to the private endpoint IPs.
Correct answers: A, B and C
Explanation: Azure private endpoints for platform services, such as Azure Storage, rely on Azure private DNS zones to provide internal-only name resolution. For blob storage, the correct private DNS zone suffix is privatelink.blob.core.windows.net. When you create private endpoints and associate them with this zone, Azure automatically creates A records mapping each storage account’s standard FQDN (for example, mystorage.blob.core.windows.net) to the corresponding private IP address.
To allow VMs in multiple virtual networks to resolve these names, each VNet that should use the zone must be linked to the private DNS zone. Once linked, VMs using the default Azure-provided DNS in those VNets can resolve the service FQDNs to the private IPs without any client-side changes. Because this is a private DNS zone, its records are not published to public DNS on the internet, keeping the private endpoint IPs internal.
Creating public DNS records with private IPs is not appropriate; it both exposes internal IP information and does not integrate with Azure’s private endpoint mechanism. Similarly, using an arbitrary internal DNS suffix like contoso.internal does not work for standard Azure service FQDNs unless applications are changed to use those custom names, which is outside the scenario and unnecessary.
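A minimal Bicep sketch of the zone and the two resolution-only links, assuming the VNet resource IDs are passed in as a parameter:

```bicep
// Sketch: private DNS zone for blob private endpoints, linked to multiple VNets.
param vnetIds array // resource IDs of prod-vnet and dev-vnet

resource zone 'Microsoft.Network/privateDnsZones@2020-06-01' = {
  name: 'privatelink.blob.core.windows.net'
  location: 'global'
}

resource links 'Microsoft.Network/privateDnsZones/virtualNetworkLinks@2020-06-01' = [for vnetId in vnetIds: {
  parent: zone
  name: uniqueString(vnetId)
  location: 'global'
  properties: {
    registrationEnabled: false // resolution only; auto-registration disabled
    virtualNetwork: { id: vnetId }
  }
}]
```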
Topic: Deploy and Manage Azure Compute Resources
Which of the following statements about securing Azure App Service custom domains with TLS/SSL certificates are INCORRECT? (Select THREE.)
Options:
A. You must add and validate a custom domain on the App Service app before you can bind a TLS/SSL certificate to that hostname.
B. Azure App Service managed certificates for custom domains can automatically renew before expiration as long as the custom domain’s DNS records remain valid and reachable.
C. To renew a certificate for an App Service custom domain, you must delete the existing hostname binding and recreate the web app before adding the new certificate.
D. Binding a certificate to one App Service app automatically secures all other custom domains across the same subscription without additional bindings.
E. When uploading a custom certificate to App Service manually, it must be in PFX format and include the private key to be used for HTTPS bindings.
F. You can upload a certificate in CER format without a private key and still use it directly for HTTPS bindings on an App Service app.
Correct answers: C, D and F
Explanation: Azure App Service secures custom domains by binding TLS/SSL certificates to specific hostnames on a web app. Before you can bind a certificate, the custom domain must be added to the app and validated so that App Service can ensure you control that hostname.
For certificates you upload manually, App Service requires a PFX file that contains the private key because the app must present the certificate and prove possession of the private key to terminate HTTPS sessions. CER files without a private key are useful only as public certificates or intermediate/CA certificates, not as server certificates for HTTPS.
Certificate renewal is typically straightforward: you obtain or renew the certificate, upload or sync the new PFX, and update the existing TLS/SSL binding to point to the renewed certificate. You do not need to delete hostnames or rebuild the app. App Service managed certificates simplify the process further by issuing and automatically renewing domain-validated certificates for supported custom domains, as long as DNS remains correctly configured.
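For illustration, here is a hedged Bicep sketch of an App Service managed certificate and SNI binding. It assumes the custom domain has already been added and validated on the app, and the app and hostname names are placeholders:

```bicep
// Sketch: managed certificate plus SNI binding for an already-validated hostname.
param appName string = 'contoso-web'      // placeholder app name
param hostname string = 'www.contoso.com' // placeholder custom domain

resource app 'Microsoft.Web/sites@2022-03-01' existing = {
  name: appName
}

resource managedCert 'Microsoft.Web/certificates@2022-03-01' = {
  name: '${hostname}-cert'
  location: resourceGroup().location
  properties: {
    serverFarmId: app.properties.serverFarmId
    canonicalName: hostname // App Service managed certificate for this hostname
  }
}

resource binding 'Microsoft.Web/sites/hostNameBindings@2022-03-01' = {
  parent: app
  name: hostname
  properties: {
    sslState: 'SniEnabled'
    thumbprint: managedCert.properties.thumbprint
  }
}
```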
Topic: Monitor and Maintain Azure Resources
You manage an Azure App Service web API that already has Application Insights enabled. Users report that some requests occasionally take more than 5 seconds to complete, but CPU and memory metrics on the App Service are normal. You must quickly identify which backend dependency (the Azure SQL database or an external REST API) is introducing the most latency by comparing their average call duration over the last hour. What should you use?
Options:
A. In the App Service, enable and download HTTP server logs, then manually review the log files to estimate which dependency is slowest.
B. In Azure Monitor, create a metric chart for the App Service that shows CPU Percentage and Average Response Time on the same graph for the last hour.
C. In Application Insights, open the Failures blade and group by operation name to find the request with the highest failure rate.
D. In Application Insights, open the Performance blade, select the Dependencies tab, and sort dependencies by average duration.
Best answer: D
Explanation: Application Insights collects several core telemetry types: requests (incoming operations), dependencies (outbound calls such as SQL queries and HTTP calls), and exceptions (errors). When an application is slow but infrastructure metrics like CPU and memory look healthy, the next step is usually to see whether a specific dependency is introducing latency.
The Application Insights Performance blade has a Dependencies tab that aggregates dependency telemetry. It shows each dependency type or target, along with call count and average duration over the selected time range. By sorting this view by average duration for the last hour, you can immediately see whether the Azure SQL database or the external REST API is slower and likely responsible for the long request times.
Other blades and logs (such as Failures, App Service metrics, or raw HTTP logs) provide useful information about errors and overall performance, but they do not directly compare dependency durations. Using dependency telemetry is the most direct and efficient way to answer the question, “Which backend is slow?”
Topic: Manage Azure Identities and Governance
You are an Azure administrator for a company that uses dedicated Microsoft Entra user accounts as identities for automation (“service accounts”) so they can be managed by existing user-based policies and reports. You must create a new cloud-only service account for a backup application. The security team has given these requirements:
- The account must be cloud-only and created with specific attributes: display name, user principal name, department, and job title.
- The password must be long, randomly generated, and must not require a change at next sign-in (the account is non-interactive).
- The password must be stored securely, for example in Azure Key Vault, rather than shared over email.
- The creation process must be repeatable in other tenants with minimal changes.

You have the necessary privileges in the tenant. What is the BEST way to create this account and meet all the requirements?
Options:
A. Use the Azure portal to create a new user manually, accept the autogenerated password, leave “Require password change at next sign-in” enabled, and email the password to the backup team.
B. Create a user account in the on-premises Active Directory, configure its department and job title, then synchronize it to Microsoft Entra ID by using Microsoft Entra Connect.
C. Use Azure PowerShell with the Microsoft Graph module to run a script that creates the user with the required attributes, sets a randomly generated strong password with “password must be changed at next sign-in” disabled, and store the password in Azure Key Vault so the script can be reused in other tenants.
D. Create a new app registration in Microsoft Entra ID for the backup application, generate a client secret, and treat the app registration as the required service account identity.
Best answer: C
Explanation: The scenario focuses on choosing an appropriate way to create a Microsoft Entra user account with specific attributes and password behavior, while keeping the process repeatable across tenants. The account is a cloud-only “service account” that must not require an interactive sign-in and must be created in a way that can be automated.
Using an automated script with Azure PowerShell and the Microsoft Graph PowerShell module lets you define all user attributes (display name, user principal name, department, job title) at creation time. You can programmatically generate a long, strong password, set the flag so the password does not need to be changed at next sign-in, and then securely store that password in Azure Key Vault. Because this logic is in a script, you can run the same script in another tenant with only minimal changes (for example, tenant ID or domain), satisfying the repeatability requirement.
In contrast, manual portal operations are not easily repeatable at scale or across tenants, and they often default to requiring a password change on first sign-in, which is unsuitable for non-interactive service accounts. Creating an app registration or syncing from on-premises AD does not create the requested type of identity (a cloud-only Microsoft Entra user) and adds unnecessary complexity or breaks the stated constraints.
Topic: Manage Azure Identities and Governance
You are an Azure administrator for Contoso. You must create a Microsoft Entra group named “HR-App-Access” to control access to an internal HR application.
The solution must meet the following requirements:
- Group membership must update automatically based on each user's Department attribute, without helpdesk involvement.
- Two HR managers must be able to oversee the group without receiving any directory-wide administrator roles.
- Regular users must not be able to create their own security groups.

Which of the following actions will meet these requirements? (Select THREE.)
Options:
A. Create a Microsoft Entra security group with Membership type set to Dynamic User and a membership rule of user.department -eq "HR".
B. Create a Microsoft 365 group with Membership type set to Assigned and have the HR managers add and remove members as HR staffing changes.
C. Enable self-service group membership by turning on “Users can request security group membership” so HR employees can join the group themselves.
D. Add the two HR managers as owners of the “HR-App-Access” group without assigning them any Microsoft Entra administrator roles.
E. Grant the two HR managers the User Administrator role so they can manage the group and user attributes for the entire tenant.
F. In Microsoft Entra Groups general settings, disable “Users can create security groups in Azure portals, API or PowerShell.”
Correct answers: A, D and F
Explanation: This scenario focuses on configuring Microsoft Entra group properties—specifically membership type, group owners, and tenant-level group governance—to meet a set of access-control and delegation requirements.
To satisfy the automatic and attribute-based membership requirement, the group must be a dynamic user group using a rule that evaluates the Department attribute. This removes the need for manual helpdesk management and ensures that membership always reflects current HR staff.
To allow HR managers to oversee the group without broad administrative power, they should be configured as group owners only, not given directory-wide admin roles. Group owners have rights limited to that group and do not gain tenant-wide permissions by default.
Finally, to prevent regular HR employees from creating their own security groups, you must adjust the tenant-level Groups settings to disable user-driven security group creation. This enforces governance over who can create groups across the directory.
Topic: Deploy and Manage Azure Compute Resources
You plan to host two web workloads in Azure App Service:
- A public marketing website that needs a custom domain with SSL, a financially backed SLA, autoscale, and built-in backup/restore.
- An internal dev/test REST API that can run on the default azurewebsites.net domain. The main goal is to minimize cost.

You will deploy each workload to an appropriate App Service plan and pricing tier.
Which of the following actions will meet these requirements? (Select TWO.)
Options:
A. Create a single App Service plan named SharedPlan in the Free F1 tier and deploy both the public marketing website and the dev/test API to this plan.
B. Create a single App Service plan in the Premium v3 P2v3 tier and deploy both the public marketing website and the dev/test API to this plan.
C. Create an App Service plan named ProdPlan in the Basic B1 tier and deploy the public marketing website to this plan.
D. Create an App Service plan named DevPlan in the Free F1 tier in the same region and deploy the internal dev/test API to this plan.
E. Create an App Service plan named ProdPlan in the Standard S1 tier in the production region and deploy the public marketing website to this plan.
Correct answers: D and E
Explanation: App Service plans define the compute resources and capabilities for web apps. Different pricing tiers provide different features such as SLAs, autoscale, backup/restore, and support for custom domains. In this scenario, you must choose a tier for a small but real production website and a separate tier for a low-cost dev/test API.
The production marketing site requires:

- A custom domain secured with SSL
- A financially backed SLA
- Autoscale
- Built-in backup/restore
The dev/test REST API requires only minimal cost and can use the default domain with no SLA or autoscale commitments.
Standard S1 is the lowest App Service tier that meets the production requirements for SLA, autoscale, and built-in backup/restore while still supporting custom domains and SSL. The Free F1 tier is suitable for the dev/test API because it has no SLA, limited resources, and allows the app to sleep when idle, which is acceptable for dev/test and minimizes cost.
Other tiers either fail to provide required production features (Free and Basic) or provide more capacity and isolation than needed at a much higher cost (Premium v3).
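A minimal Bicep sketch of the two plans chosen in this question (plan names follow the stem; location is a parameter):

```bicep
// Sketch: Standard S1 for production, Free F1 for dev/test.
param location string = resourceGroup().location

resource prodPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
  name: 'ProdPlan'
  location: location
  sku: {
    name: 'S1'
    tier: 'Standard' // SLA, autoscale, backup/restore, custom domain with SSL
  }
}

resource devPlan 'Microsoft.Web/serverfarms@2022-03-01' = {
  name: 'DevPlan'
  location: location
  sku: {
    name: 'F1'
    tier: 'Free' // no SLA; app may sleep when idle; minimal cost
  }
}
```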
Topic: Configure and Manage Virtual Networking
You manage several production virtual networks hosting Windows and Linux VMs. Administrators work remotely from various locations and must manage the VMs securely. You are evaluating options including Azure Bastion, jumpbox VMs, and VPN-based administration.
Which TWO of the following remote administration configurations should you AVOID? (Select TWO.)
Options:
A. Create a jumpbox VM in a dedicated subnet with an NSG restricting RDP/SSH to a small set of known public IP ranges, and require per-admin accounts with MFA.
B. Assign a public IP to each production VM and allow inbound RDP/SSH from the Internet, protected only by strong passwords and default NSG rules.
C. Deploy Azure Bastion in a hub virtual network and use it for browser-based RDP/SSH to VMs in peered spoke networks that have no public IPs.
D. Provision a jumpbox VM with a public IP that allows RDP from any Internet address, and configure a single shared local administrator account for all admins to use.
E. Use a point-to-site VPN that requires certificate-based authentication and MFA, then restrict RDP/SSH to only the VPN subnet using NSGs on the management subnet.
Correct answers: B and D
Explanation: For secure remote administration of Azure VMs, you should minimize public exposure of management ports, enforce strong identity-based access, and keep management overhead reasonable. Common secure options include Azure Bastion, well-hardened jumpbox VMs, and VPN-based access with tight NSG rules.
Exposing RDP/SSH directly to the Internet, especially on many VMs or with weak identity practices, is a well-known anti-pattern. It increases the attack surface and makes brute-force or credential-stuffing attacks more likely. Similarly, poorly managed jumpbox designs (for example, wide-open RDP and shared admin accounts) erode accountability and make incident response harder.
In contrast, Azure Bastion provides browser-based access over TLS without public IPs on the target VMs, VPN-based access keeps management traffic on private channels, and hardened jumpboxes with strict NSG and identity controls can be acceptable when managed carefully. These patterns prioritize security and manageable overhead while giving admins a usable experience.
Topic: Configure and Manage Virtual Networking
You manage a hub-and-spoke network. The hub VNet has a VPN gateway connected to on-premises. Two spoke VNets are peered with the hub using default settings. On-premises servers can reach only VMs in the hub, not in the spokes. You must: centralize connectivity through the hub gateway, avoid deploying additional gateways, and minimize changes. Which configuration change should you implement to meet these goals?
Options:
A. Create VNet-to-VNet VPN connections between the hub and each spoke while leaving the existing peering unchanged, so traffic can traverse both the peering and the VPN tunnels.
B. Update each hub-to-spoke VNet peering to enable Allow gateway transit and Allow forwarded traffic on the hub side, and update each spoke-to-hub peering to use remote gateways.
C. Create direct VNet peering between the spoke VNets only, leaving the hub VNet configuration and gateway unchanged.
D. Deploy a new VPN gateway in each spoke VNet and create separate site-to-site VPN connections from on-premises to each spoke VNet, then remove the existing peering.
Best answer: B
Explanation: In a hub-and-spoke topology, a common design is to terminate site-to-site VPN or ExpressRoute in the hub VNet and allow spoke VNets to reuse that gateway. This is done through gateway transit and the use remote gateways setting on VNet peering.
By default, when you peer hub and spoke VNets, the hub’s VPN gateway is not automatically advertised to the spokes. As a result, on-premises networks can reach only the hub VNet, not the spokes. To let spokes send and receive traffic through the hub gateway, you must configure the peering correctly.
On the hub-to-spoke peering, enabling Allow gateway transit makes the hub’s gateway available to peered VNets. Enabling Allow forwarded traffic permits the hub to forward traffic arriving from on-premises into the spokes. On the spoke-to-hub peering, selecting Use remote gateways tells the spoke to treat the hub’s VPN gateway as its own default gateway to on-premises.
This configuration keeps a single, central VPN gateway in the hub (meeting the cost and simplicity goals) while allowing on-premises networks to reach resources in each spoke VNet.
The other options either add unnecessary gateways and tunnels or fail to expose the hub gateway to the spokes, so they do not meet all the stated requirements.
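As a sketch, the hub side of one such peering in Bicep (VNet names are placeholders; the spoke side is a separate resource deployed in the spoke's scope):

```bicep
// Sketch: hub-to-spoke peering with gateway transit enabled.
param spokeVnetId string // resource ID of the spoke VNet

resource hubVnet 'Microsoft.Network/virtualNetworks@2023-04-01' existing = {
  name: 'hub-vnet' // placeholder name
}

resource hubToSpoke 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings@2023-04-01' = {
  parent: hubVnet
  name: 'hub-to-spoke1'
  properties: {
    remoteVirtualNetwork: { id: spokeVnetId }
    allowVirtualNetworkAccess: true
    allowForwardedTraffic: true // forward on-premises traffic into the spoke
    allowGatewayTransit: true   // expose the hub's VPN gateway to the peer
    useRemoteGateways: false
  }
}
// On the spoke-to-hub peering (deployed against the spoke VNet), set
// useRemoteGateways: true and allowGatewayTransit: false instead.
```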
Topic: Deploy and Manage Azure Compute Resources
Which TWO of the following statements about Azure virtual machine snapshots and Azure Backup are INCORRECT? (Select TWO.)
Options:
A. Restoring from Azure Backup can recreate an entire VM, while restoring from a snapshot typically involves creating a new managed disk and then attaching it to a VM or using it to create a new VM.
B. Azure Backup is generally preferred over ad-hoc snapshots for production workloads that require regular backups, centralized management, and long-term retention.
C. Azure VM snapshots are point-in-time copies of individual managed disks and do not, by themselves, provide scheduling, retention policies, or application-aware consistency.
D. Azure Backup for virtual machines can protect all managed disks attached to a VM and supports policy-based scheduling and long-term retention.
E. Using only snapshots to protect a VM guarantees application-consistent backups and automatic log truncation for workloads such as SQL Server and Exchange.
F. Snapshots are stored in a Recovery Services vault and support the same long-term retention capabilities as Azure Backup.
Correct answers: E and F
Explanation: Azure VM snapshots and Azure Backup both help protect VM data, but they serve different purposes and offer different capabilities.
Snapshots are point-in-time copies of individual managed disks. They are useful for quick, ad hoc protection before changes or for capturing a particular disk state. However, they do not provide built-in scheduling, retention policies, or application-aware consistency. They are typically crash-consistent and must be managed manually or via custom automation.
Azure Backup for virtual machines is a managed backup service that uses a backup vault and policies to regularly back up entire VMs, including all attached disks. It supports scheduled backups, long-term retention, centralized management, and, for many workloads, application-consistent backups using VSS on Windows or appropriate Linux mechanisms. It can restore an entire VM or individual disks.
The incorrect statements are the ones that claim snapshots are stored in Recovery Services vaults with the same long-term retention capabilities as Azure Backup, and that snapshots alone guarantee application-consistent backups and automatic log truncation for workloads such as SQL Server and Exchange. Both of these claims describe capabilities of Azure Backup, not of snapshots.
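For contrast with vault-based backup, here is what an ad hoc snapshot looks like in Bicep; the snapshot name is a placeholder and the source disk ID is supplied as a parameter:

```bicep
// Sketch: a one-off incremental snapshot of a managed disk.
param diskId string // resource ID of the managed disk to snapshot

resource snap 'Microsoft.Compute/snapshots@2022-07-02' = {
  name: 'osdisk-before-change' // placeholder name
  location: resourceGroup().location
  properties: {
    creationData: {
      createOption: 'Copy'
      sourceResourceId: diskId
    }
    incremental: true // stores only changes since the previous snapshot
  }
}
```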
Topic: Deploy and Manage Azure Compute Resources
You manage an Azure web application that currently runs on a single Windows Server virtual machine (VM) in the West Europe region. The app experiences occasional downtime during Azure platform maintenance, and you also want to limit the impact of a host hardware failure. You plan to deploy a second VM for the web tier.
The solution must:
- Keep both web-tier VMs in the West Europe region.
- Maximize the Microsoft SLA for VM availability.
- Ensure the VMs are spread across multiple fault domains and update domains.

What should you do to meet these requirements?
Options:
A. Move the existing web VM to a different Azure region, deploy the second VM there, and configure Azure Traffic Manager for failover between the two regions.
B. Create an availability set in West Europe that uses managed disks, deploy both web VMs into that availability set, and place them behind an Azure Load Balancer.
C. Create two availability sets in West Europe and place one web VM in each availability set, each with its own public IP address.
D. Deploy the second web VM into a different subnet in the same virtual network and enable accelerated networking on both VMs.
Best answer: B
Explanation: Availability sets are designed to improve the availability of multi-VM deployments within a single Azure region. When you place two or more virtual machines in the same availability set, Azure automatically distributes those VMs across multiple fault domains (separate racks, power, and network) and multiple update domains (groups that receive planned maintenance at different times).
By doing this, Azure reduces the chance that all instances of a workload are affected simultaneously by a single hardware failure or a planned maintenance event. When using managed disks and at least two VMs in the same availability set, Microsoft offers a higher SLA for VM uptime compared with a single VM.
In this scenario, the requirements explicitly call for staying in the same region, maximizing the Microsoft SLA, and ensuring that the VMs are spread across multiple fault and update domains. The correct way to achieve this is to create an availability set in the existing region, deploy both web-tier VMs into that availability set, and typically place them behind an Azure Load Balancer to distribute client traffic across the healthy instances.
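A minimal Bicep sketch of such an availability set (names and domain counts are illustrative):

```bicep
// Sketch: availability set for the two web VMs.
param location string = 'westeurope'

resource webAvSet 'Microsoft.Compute/availabilitySets@2023-03-01' = {
  name: 'web-avset'
  location: location
  sku: { name: 'Aligned' } // required when member VMs use managed disks
  properties: {
    platformFaultDomainCount: 2  // separate racks, power, and network
    platformUpdateDomainCount: 5 // staggered planned maintenance
  }
}
// Each web VM then references the set in its properties:
//   availabilitySet: { id: webAvSet.id }
```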
Topic: Deploy and Manage Azure Compute Resources
You administer an Azure App Service web app that stores its main data in an Azure SQL Database and user-uploaded files on an Azure Files share. You open the backup configuration shown in the exhibit.
Based on the exhibit, which statement about what the App Service backup will capture is correct?
Exhibit:
| Setting | Value |
|---|---|
| Backup status | Enabled |
| Backup storage | mystorageaccount (StorageV2, GRS) |
| Backup schedule | Every 6 hours, retain 14 days |
| App content | Included |
| Databases selected | MainDb (Azure SQL Database) |
| Last backup status | Succeeded |
| Notes | App Service backups include app content and only the selected supported databases. External data sources such as Azure Files shares and other Azure resources are not included. |
Options:
A. The backup will include all resources used by the app, including Azure Files shares and any other external services it connects to.
B. The backup configuration will fail because the selected storage account uses GRS and GRS accounts cannot be used for App Service backups.
C. The backup will capture only app settings and connection strings; app content and databases are not backed up by this configuration.
D. The backup will capture the web app files and the MainDb Azure SQL database, but not the Azure Files share.
Best answer: D
Explanation: Azure App Service backups are intended to capture the web app’s content (files under the web app, such as site code and configuration) and, when configured, supported databases referenced via connection strings (for example, Azure SQL Database). They do not automatically include all external resources your app might use.
In the exhibit, app content is explicitly marked as Included and a specific Azure SQL Database (MainDb) is selected under Databases selected. The Notes field clarifies that external data sources such as Azure Files shares and other Azure resources are not part of the App Service backup.
Therefore, this backup will protect the web app files and the selected Azure SQL database, but you must handle backup of the Azure Files share and any other external dependencies separately, using appropriate storage backup or snapshot mechanisms.
Topic: Implement and Manage Storage
You manage an Azure Storage account that hosts several blob containers for different workloads. Security is a high priority, but some content must be accessible over the public internet.
Which of the following container configurations should you AVOID? (Select TWO.)
Options:
A. Public product brochures in a container named public-assets with public access level set to Blob, linked from the company website.
B. A static marketing website hosted from a container named web with static website hosting enabled and public access level set to Blob to serve HTML, CSS, and images anonymously.
C. An HR document archive in a container named hr-docs with public access level set to Container so anyone with the URL can browse and download all files.
D. Application logs in a container named app-logs with public access level set to Private, written by the app using a managed identity.
E. Nightly database exports in a container named db-backups with public access level set to Container so a third-party vendor can download backups anonymously via HTTP.
Correct answers: C and E
Explanation: Azure Blob Storage containers support three main public access levels: Private (no anonymous access), Blob (anonymous read access to blobs, but not container metadata or listing), and Container (anonymous read access to blobs and container metadata, including the ability to list all blobs).
From a security perspective, any data that is not explicitly meant to be world-readable should remain private. When public access is required, blob-level access is typically safer than container-level access because it does not allow anonymous listing of all content in the container.
In this scenario, HR documents and database backups are clearly sensitive workloads. Configuring their containers for anonymous, container-level public access would expose all files and metadata to anyone on the internet, which is a serious security misconfiguration. Public marketing content and static website files, on the other hand, are intended for anonymous access and can appropriately use blob-level public access when isolated in dedicated containers and accounts.
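A minimal Bicep sketch contrasting the two postures (the account name is a placeholder, and the account-level allowBlobPublicAccess setting must also permit anonymous access for the Blob level to take effect):

```bicep
// Sketch: blob-level public access for marketing assets, private for sensitive data.
resource sa 'Microsoft.Storage/storageAccounts@2023-01-01' existing = {
  name: 'contosoassets001' // placeholder account name
}

resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2023-01-01' existing = {
  parent: sa
  name: 'default'
}

resource publicAssets 'Microsoft.Storage/storageAccounts/blobServices/containers@2023-01-01' = {
  parent: blobService
  name: 'public-assets'
  properties: { publicAccess: 'Blob' } // anonymous blob reads, no container listing
}

resource hrDocs 'Microsoft.Storage/storageAccounts/blobServices/containers@2023-01-01' = {
  parent: blobService
  name: 'hr-docs'
  properties: { publicAccess: 'None' } // private; no anonymous access
}
```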
Topic: Configure and Manage Virtual Networking
You administer an Azure virtual network that contains a subnet named WebSubnet. A virtual machine scale set that hosts an internet-facing web app runs in WebSubnet behind a standard public load balancer. Administrators must use SSH to manage the VMs from a fixed corporate public IP of 203.0.113.10. You need to configure a network security group (NSG) to meet the following requirements:
- Allow inbound HTTP (TCP 80) and HTTPS (TCP 443) from any internet source.
- Allow inbound SSH (TCP 22) only from the corporate IP 203.0.113.10.
- Protect all current and future VMs in WebSubnet.
- Follow least-privilege and zero-trust principles; avoid broad any-any inbound rules.

Which NSG configuration is the most appropriate?
Options:
A. Create an NSG and associate it to WebSubnet. Add inbound rules to allow TCP port 80 from Any, allow TCP port 443 from Any, and allow TCP port 22 from source 203.0.113.10/32. Do not add any additional inbound allow rules and rely on the NSG’s default deny rule for other traffic.
B. Create an NSG and associate it to WebSubnet. Add inbound rules to allow TCP port 80 and 443 from the VirtualNetwork service tag, and allow TCP port 22 from 203.0.113.10/32. Rely on the NSG’s default deny rule for other traffic.
C. Create an NSG and associate it only to one VM NIC in the scale set. Add inbound rules to allow TCP port 80 and 443 from Any and allow TCP port 22 from 203.0.113.10/32. Leave other VMs without an NSG to simplify management.
D. Create an NSG and associate it to WebSubnet. Add a single inbound allow rule for TCP ports 22,80,443 with source Any and destination Any. Rely on the NSG’s default rules for other traffic.
Best answer: A
Explanation: To design NSG rules that follow least-privilege and zero-trust principles, you allow only the specific traffic that is required and block everything else by default. In this scenario, the web app must be reachable over HTTP/HTTPS from anywhere on the internet, while SSH management must be tightly restricted to a single known public IP (203.0.113.10). An NSG applied at the subnet level to WebSubnet will protect all current and future VMs in that subnet.
NSGs include an implicit deny-all inbound rule with the lowest priority. You typically create explicit allow rules for required ports and sources and then rely on this implicit deny to block everything else. A zero-trust approach avoids overly broad rules such as allowing multiple sensitive ports from Any to Any, especially for management ports like SSH or RDP.
The best solution therefore is an NSG associated with WebSubnet that has explicit allow rules for HTTP and HTTPS from Any, and an allow rule for SSH only from the specific corporate IP /32, with no additional broad inbound permits. This meets connectivity needs while maintaining a tight security posture.
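A Bicep sketch of the NSG from option A (the NSG name is a placeholder, and the subnet association is omitted):

```bicep
// Sketch: subnet-level NSG allowing 80/443 from Any and 22 from one corporate IP.
param location string = resourceGroup().location

resource webNsg 'Microsoft.Network/networkSecurityGroups@2023-04-01' = {
  name: 'websubnet-nsg'
  location: location
  properties: {
    securityRules: [
      {
        name: 'allow-http'
        properties: {
          priority: 100
          direction: 'Inbound'
          access: 'Allow'
          protocol: 'Tcp'
          sourceAddressPrefix: '*'
          sourcePortRange: '*'
          destinationAddressPrefix: '*'
          destinationPortRange: '80'
        }
      }
      {
        name: 'allow-https'
        properties: {
          priority: 110
          direction: 'Inbound'
          access: 'Allow'
          protocol: 'Tcp'
          sourceAddressPrefix: '*'
          sourcePortRange: '*'
          destinationAddressPrefix: '*'
          destinationPortRange: '443'
        }
      }
      {
        name: 'allow-ssh-corp'
        properties: {
          priority: 120
          direction: 'Inbound'
          access: 'Allow'
          protocol: 'Tcp'
          sourceAddressPrefix: '203.0.113.10/32' // corporate IP only
          sourcePortRange: '*'
          destinationAddressPrefix: '*'
          destinationPortRange: '22'
        }
      }
      // no further allow rules; the default DenyAllInBound rule blocks everything else
    ]
  }
}
```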
Topic: Manage Azure Identities and Governance
You receive a ticket: A user is trying to create a new Microsoft Teams team based on an existing group named “Sales-Europe”, but the group does not appear in the “Create from an existing Microsoft 365 group” list.
In the Microsoft Entra admin center, you review the group:
| Property | Value |
|---|---|
| Group type | Security |
| Membership type | Assigned |
| Mail-enabled | No |
What is the most likely cause of the issue?
Options:
A. The group is not mail-enabled; enabling mail on the security group will make it eligible for use with Teams.
B. The group is a security group, and only Microsoft 365 groups can be used as the backing group for a team.
C. The group is not dynamic; only dynamic groups can be used to create a team from an existing group.
D. The group is assigned; changing it to device membership will allow it to appear in the Teams group selection list.
Best answer: B
Explanation: Microsoft Teams uses Microsoft 365 groups as its underlying membership and collaboration construct. When you create a team “from an existing group”, the picker only lists Microsoft 365 groups that meet certain criteria (for example, not already team-enabled). Standard security groups do not appear in this list because they are designed primarily for access control (for example, assigning permissions to apps or resources) rather than for collaboration workloads.
In the scenario, the group “Sales-Europe” is clearly identified as a security group, with mail-enabled = No. This means it is not a Microsoft 365 group and therefore cannot be used as the backing group for a Microsoft Teams team. To fix the issue, you must use or create a Microsoft 365 group (for example, a Microsoft 365 group with the same membership) and then create the team from that group or create a new team that automatically provisions a Microsoft 365 group.
Changing dynamic vs assigned membership, device vs user membership, or mail-enabling a security group does not transform it into the correct group type for Teams collaboration scenarios.
Topic: Configure and Manage Virtual Networking
You manage two Azure virtual networks in the same region: VNet-App (10.10.0.0/16) and VNet-DB (10.20.0.0/16). Each VNet contains a VM. You created VNet peering between them.
The VMs cannot RDP or ping each other. Both NSGs allow all traffic between the subnets, and there are no firewalls or custom routes.
The peering configuration is shown:
| Peering name | From VNet | To VNet | Allow virtual network access | Allow forwarded traffic | Use remote gateways |
|---|---|---|---|---|---|
| AppToDb | VNet-App | VNet-DB | Enabled | Disabled | Disabled |
| DbToApp | VNet-DB | VNet-App | Disabled | Disabled | Disabled |
What should you do to restore connectivity between the VMs?
Options:
A. Create a VPN gateway in each VNet and configure VNet-to-VNet VPN between VNet-App and VNet-DB.
B. Enable “Allow forwarded traffic” on both peerings so traffic can traverse between the VNets.
C. Enable “Use remote gateways” on the AppToDb peering so VNet-DB can use VNet-App’s gateway.
D. Enable “Allow virtual network access” on the DbToApp peering from VNet-DB to VNet-App.
Best answer: D
Explanation: Azure virtual network peering allows private IP connectivity between VNets as if they were part of the same network, provided key conditions are met: non-overlapping address spaces, successful peering on both sides, and permissive network security rules.
In this scenario, address spaces are non-overlapping, NSGs allow traffic, and there are no firewalls or custom routes. The main clue is the peering configuration table: on the DbToApp peering (from VNet-DB to VNet-App), the “Allow virtual network access” setting is disabled.
The “Allow virtual network access” flag controls whether traffic from the source VNet can use the peering to reach the target VNet. If it is disabled, traffic originating from that VNet is blocked on the peering, even if the peering exists. Since basic VM-to-VM communication depends on this setting being enabled on both directions, disabling it on one side will break connectivity.
Therefore, the correct fix is to enable “Allow virtual network access” on the DbToApp peering. No additional gateways, forwarded traffic settings, or complex routing are required for simple peered VNet connectivity.
Topic: Implement and Manage Storage
A client application accesses Azure Blob Storage over HTTPS by using a valid shared access signature (SAS). Time synchronization has been verified, and the SAS token has not expired. Which configuration change on the storage account would still cause the client’s requests from the public internet to be rejected with HTTP 403?
Options:
A. Remove all Storage Blob Data Reader role assignments from the user account that originally generated the SAS token.
B. Change the storage account network access setting to “Enabled from selected virtual networks and IP addresses” without adding the client’s public IP.
C. Set the blob container’s public access level to Private.
D. Change the access tier of the blobs in the container from Hot to Cool.
Best answer: B
Explanation: Azure Storage accounts have a built-in firewall that controls which networks and IP addresses can reach the service over the public endpoint. This network-level check happens before authorization based on SAS tokens, keys, or Azure RBAC.
When you set a storage account’s network access setting to “Enabled from selected virtual networks and IP addresses”, any client connecting from the public internet must have its public IP address explicitly allowed in the storage account’s firewall rules. If the IP is not listed, the request fails with a 403 error, even if the SAS token is correctly formed, not expired, and has the right permissions.
By contrast, settings such as the container’s public access level or blob access tier do not block authenticated access via SAS. Likewise, Azure RBAC assignments to the user who created the SAS are not evaluated when the SAS is later used; the SAS itself is the credential at that point.
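In template form, the firewall posture described above looks like the following sketch; the account name and the allowed client IP are hypothetical:

```bicep
// Sketch: 'Enabled from selected virtual networks and IP addresses' with one allowed IP.
resource sa 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'contosodata002' // placeholder account name
  location: resourceGroup().location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
  properties: {
    networkAcls: {
      defaultAction: 'Deny' // public traffic blocked unless explicitly allowed
      bypass: 'AzureServices'
      ipRules: [
        // hypothetical client public IP; omit it and that client's SAS requests fail with 403
        { value: '198.51.100.7', action: 'Allow' }
      ]
    }
  }
}
```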
Topic: Implement and Manage Storage
Which of the following statements about using AzCopy for large, scripted Azure Storage data transfers are NOT correct? (Select THREE.)
Options:
A. AzCopy can transfer data between on-premises file systems and Azure Blob storage using an azcopy copy command and the --recursive flag to include subfolders.
B. AzCopy cannot perform direct server-side copies between two Azure Storage accounts; you must always download data locally and then re-upload it.
C. AzCopy can only run on Windows hosts; Linux-based migration servers must use REST APIs or SDKs instead of AzCopy.
D. AzCopy supports authenticating to Azure Storage using a SAS token or Microsoft Entra ID credentials, depending on the scenario.
E. You can script AzCopy commands in batch files or shell scripts to automate recurring large data transfers such as nightly uploads or sync jobs.
F. AzCopy requires the Azure CLI to be installed on the same machine, because it is an extension of the az command.
Correct answers: B, C and F
Explanation: AzCopy is a cross-platform, standalone command-line utility designed for high-performance data transfer to, from, and between Azure Storage accounts. It is frequently used for initial data seeding, large one-time migrations, and scripted recurring transfers. Understanding what AzCopy can and cannot do helps you choose it confidently for bulk operations and avoid unnecessary complexity, such as staging data locally when it is not required.
Valid use cases include copying entire folder trees from on-premises file systems to Azure Blob or Azure Files, automating recurring transfers using scripts and schedulers, and performing server-side copies between storage accounts. AzCopy supports multiple authentication methods, including SAS tokens and Microsoft Entra ID, which enables secure, least-privilege access.
The incorrect statements in this question reflect common misconceptions: that AzCopy requires the Azure CLI, that it cannot do direct service-to-service copies, and that it only runs on Windows. Each of these is false for AzCopy v10, and recognizing this helps you plan efficient and portable migration workflows.
Topic: Implement and Manage Storage
Which of the following statements about using Azure Storage Explorer to manage Azure Blob Storage containers and Azure file shares are NOT correct? (Select THREE.)
Options:
A. Azure Storage Explorer can only connect to storage accounts that allow anonymous public access to blobs.
B. Azure Storage Explorer can display and manage blob containers and Azure file shares from multiple subscriptions within a single tree view.
C. You can use Azure Storage Explorer to confirm whether your current identity or connection (for example, SAS or key) has permission to list or modify data by attempting those operations in the tool.
D. Azure Storage Explorer cannot manage Azure file shares; it is limited to blobs, queues, and tables only.
E. To copy data between two blob containers in different storage accounts, you must first download the data locally and then re-upload it to the destination account.
Correct answers: A, D and E
Explanation: Azure Storage Explorer is a cross-platform client tool that lets administrators manage Azure Storage data across multiple accounts and subscriptions. It supports several authentication methods, including Microsoft Entra ID, shared keys, connection strings, and SAS tokens. Administrators commonly use it to browse containers and file shares, upload and download data, perform server-side copies, and verify that permissions are configured correctly by attempting real operations.
Understanding what Storage Explorer can and cannot do helps avoid inefficient workflows, such as unnecessary local downloads, and prevents incorrect assumptions about required permissions or supported storage types.
Topic: Monitor and Maintain Azure Resources
Which THREE statements about Azure Monitor alert processing rules are INCORRECT? (Select THREE.)
Options:
A. Alert processing rules apply only to metric alerts; they cannot affect log or activity log alerts.
B. Alert processing rules are defined only at the action group level and cannot be scoped by resource group or subscription.
C. Alert processing rules permanently delete matching alerts so they no longer appear in the Azure Monitor alerts list.
D. Alert processing rules can add or remove action groups on matching alerts, letting you route alerts to different teams without editing alert rules.
E. Alert processing rules can filter alerts based on properties like severity, monitor service type, and alert rule name.
F. Alert processing rules can suppress notifications from matching alerts during specified schedule windows, without disabling the underlying alert rules.
Correct answers: A, B and C
Explanation: Azure Monitor alert processing rules act as a post-alert layer that can change how alerts are handled after they fire but before notifications and other actions run. They are commonly used to suppress notifications during maintenance windows, change routing based on time or conditions, or modify which action groups are triggered.
Processing rules do not change whether alerts are generated; they only affect notification and action behavior. Rules can be scoped to subscriptions, resource groups, or individual resources and can filter on alert properties such as severity, monitor service, or alert rule name. This makes them ideal for implementing schedules and targeted routing without rewriting existing alert rules.
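As a rough illustration of scoped suppression, the sketch below removes all action groups from alerts fired in one resource group during a one-off maintenance window. It assumes the alertsmanagement CLI extension; the names, subscription ID placeholder, and window are illustrative, and flag spellings should be verified against your CLI version.

```bash
# Suppress all notifications for alerts fired in rg-prod during a window.
az monitor alert-processing-rule create \
  --name maintenance-suppression \
  --resource-group rg-ops \
  --rule-type RemoveAllActionGroups \
  --scopes "/subscriptions/<sub-id>/resourceGroups/rg-prod" \
  --schedule-start-datetime "2025-06-01 22:00:00" \
  --schedule-end-datetime "2025-06-02 02:00:00"
```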
Topic: Monitor and Maintain Azure Resources
You administer several production web applications hosted on Azure virtual machines. Metrics and application logs are already sent to a single Log Analytics workspace.
The platform team asks for a unified “operations view” that meets these requirements:
- Combine VM metrics (CPU and memory) with recent application error entries from Log Analytics in a single, interactive Azure portal view.
- Allow runtime filtering by subscription and resource group.
- Not require elevated permissions on the underlying VMs or the Log Analytics workspace.
Which of the following actions/solutions will meet these requirements? (Select THREE.)
Options:
A. Continuously export data from Log Analytics to Azure Storage and build a Power BI report that visualizes VM metrics and application errors, then share the Power BI workspace with the operations team.
B. Add dropdown parameters (such as subscription and resource group) to the workbook and bind them to all metrics and log queries used by the workbook’s visualizations.
C. Pin the workbook or specific workbook visualizations to a shared Azure portal dashboard and assign the operations team the Reader role on the resource group that contains the dashboard and workbook.
D. Create metric alert rules for CPU and memory on each VM, using an action group that emails the operations team when thresholds are exceeded.
E. Create an Azure Monitor workbook that combines VM metrics visualizations (CPU and memory) with a table visualization based on a Log Analytics query that returns recent application error entries.
Correct answers: B, C and E
Explanation: The scenario calls for a single, interactive operations view in the Azure portal that combines VM metrics (CPU and memory) with application error logs from Log Analytics. It must allow runtime filtering by subscription and resource group, and it must not require elevated permissions on the underlying VMs or the Log Analytics workspace.
Azure Monitor workbooks are designed exactly for this kind of combined visualization. A workbook can include metrics charts, Log Analytics queries, and various visualizations (such as tables and time charts) on the same page. Workbooks also support parameters that can be bound to queries, enabling interactive filtering without changing query text.
Once the workbook is created, its overall page or individual charts and tables can be pinned to an Azure portal dashboard. Dashboards and workbooks are Azure resources stored in resource groups, so RBAC at the resource group level controls who can view them. Granting the operations team the Reader role on that resource group allows them to see and interact with the dashboard and workbook, while not giving Contributor access to VMs or the workspace.
Alert rules and external BI tooling such as Power BI solve different problems: alerts handle notifications, and Power BI is a separate reporting platform that introduces extra complexity and may live outside the Azure portal-centric workflow. Neither is required to meet the portal-based, interactive, combined metrics-and-logs visualization requirement.
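A minimal sketch of the RBAC piece, assuming a hypothetical operations principal ops-team@contoso.com and a resource group rg-ops-view that holds the dashboard and workbook:

```bash
# Reader on the resource group containing the dashboard and workbook only;
# no rights are granted on the VMs or the Log Analytics workspace.
az role assignment create \
  --assignee "ops-team@contoso.com" \
  --role "Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/rg-ops-view"
```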
Topic: Manage Azure Identities and Governance
You are planning to reorganize resources by moving them between resource groups and subscriptions in Azure. You want to avoid common mistakes related to move constraints and unsupported scenarios.
Which of the following configurations should you AVOID when planning these moves? (Select TWO.)
Options:
A. Attempting to move a virtual network that currently has active virtual network peering connections without removing or reconfiguring the peering relationships first.
B. Using the Azure portal Move action and running the Validate step before starting the move to detect any unsupported resource types or dependencies.
C. Ensuring the source and target subscriptions are associated with the same Microsoft Entra tenant, or planning a documented tenant migration process if they are in different tenants before attempting any move operations.
D. Attempting to move a storage account to a different Azure region by using the “Move to another subscription” operation and expecting the region and data location to change as part of the move.
E. Planning to move a production virtual machine and its managed disks to a different resource group in the same subscription during a scheduled maintenance window, after confirming in Microsoft documentation that the VM size and region support move operations.
Correct answers: A and D
Explanation: When moving Azure resources between resource groups or subscriptions, you must respect platform constraints such as supported resource types, dependencies, and the fact that move operations do not change a resource’s region. Ignoring these constraints commonly leads to failed moves and unexpected downtime.
Moving a resource never changes its region; if you need to relocate data or compute to another region, you typically must create new resources in the target region and migrate data or configuration. Additionally, certain dependencies (such as virtual network peering) can block moves until you remove or reconfigure them.
Good practices include validating moves in the Azure portal, confirming supportability in documentation, ensuring tenant alignment, and scheduling maintenance windows for production workloads.
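For reference, a move between resource groups is a single CLI call; Azure validates the request (supported types, dependencies) as part of the operation. The subscription ID and resource IDs below are placeholders.

```bash
# Move two resources into another resource group in the same subscription.
az resource move \
  --destination-group rg-target \
  --ids \
    "/subscriptions/<sub-id>/resourceGroups/rg-source/providers/Microsoft.Web/sites/app1" \
    "/subscriptions/<sub-id>/resourceGroups/rg-source/providers/Microsoft.Web/serverfarms/plan1"
```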
Topic: Deploy and Manage Azure Compute Resources
You administer an Azure VM named vm-app-prod in the North Europe region. You must move the VM to West Europe while meeting the requirements shown in the following exhibit.
Exhibit:
| Setting | Value |
|---|---|
| Source region | North Europe |
| Target region | West Europe |
| VM type | Single VM |
| OS disk | Managed |
| Data disks | 2 managed disks |
| Downtime allowed at cutover | ≤15 minutes |
| Need to test move before cutover | Yes |
| Need to keep same private IP | No |
Based only on the information in the exhibit, which approach should you use to move the VM?
Options:
A. Export the ARM template of vm-app-prod from North Europe and redeploy it manually in West Europe, then copy disk data using AzCopy before powering on the new VM.
B. Use Azure Resource Mover to move vm-app-prod from North Europe to West Europe, perform a test move, then commit the move during a short cutover window.
C. Create a managed image of vm-app-prod in North Europe and deploy a new VM from the image in West Europe during a scheduled maintenance window.
D. Configure Azure Backup to back up vm-app-prod to a Recovery Services vault in West Europe and restore the VM from that backup in West Europe.
Best answer: B
Explanation: The exhibit shows that vm-app-prod uses managed disks and that the business needs a region move from North Europe to West Europe with limited downtime at cutover and an explicit requirement to test the move before finalizing it.
Azure Resource Mover is designed for cross-region migrations of Azure resources, including VMs with managed disks. It orchestrates replication of VM disks to the target region, allows you to run a test move (test migration) so you can validate the workload in the target region, and then perform a controlled commit during a short downtime window.
The other options are more manual migration patterns (image-based redeploy, backup/restore, or ARM template plus manual data copy). They do not provide continuous replication with an integrated test move workflow and usually involve longer downtime or more operational risk than Resource Mover for this scenario.
Topic: Configure and Manage Virtual Networking
Which THREE statements about Azure Load Balancer health probes are correct? (Select THREE.)
Options:
A. Network security groups attached to backend subnets or NICs must allow probe traffic from Azure’s load balancer infrastructure, or the health probe will fail.
B. For a TCP health probe, the backend must return an HTTP 200 status code for the probe to succeed.
C. You must configure a separate health probe object for each backend VM in a backend pool.
D. The health probe must target a port on which the backend instance is actually listening; if that port is closed, the probe will fail.
E. If a backend instance fails the configured number of consecutive probe checks, the load balancer stops sending new connections to that instance.
F. Existing TCP connections to a backend are immediately dropped when the first health probe failure occurs for that backend instance.
Correct answers: A, D and E
Explanation: Azure Load Balancer uses health probes to determine which backend instances are healthy and can receive new traffic. A probe periodically tests each backend on a specific protocol and port. When enough consecutive probe failures occur, the backend is considered unhealthy and is removed from load-balancing rotation for new connections.
For probes to succeed, the backend must be listening on the configured port and any network security groups between the Azure Load Balancer and the backend must allow the probe traffic. Misconfigurations such as using the wrong probe port, blocking probe packets in NSGs, or expecting HTTP behavior from a TCP probe are common causes of failed probes and load-balancing issues.
Understanding how probes behave, including how thresholds and NSG rules affect health, is key to troubleshooting why certain VMs are not receiving traffic from an Azure Load Balancer.
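A minimal sketch of a TCP probe definition, assuming a load balancer named lb-web; with a threshold of 2, an instance stops receiving new connections after two consecutive failed checks.

```bash
# TCP probe on port 443, checked every 5 seconds.
az network lb probe create \
  --resource-group rg-net \
  --lb-name lb-web \
  --name probe-https \
  --protocol Tcp \
  --port 443 \
  --interval 5 \
  --threshold 2
```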
Topic: Manage Azure Identities and Governance
You are planning role assignments for a new Azure subscription and are reviewing the following statements about common built-in roles with your team.
Which of the following statements about these Azure roles is INCORRECT?
Options:
A. Reader can view existing resources in a scope but cannot modify or delete those resources.
B. User Access Administrator can manage all resources in a scope and can also manage role assignments.
C. Owner can manage all resources in a scope and can also delegate access by assigning roles to others.
D. Contributor can create and delete resources in a scope but cannot grant access to others by assigning roles.
Best answer: B
Explanation: Azure provides several common built-in roles that control what a user can and cannot do at a given scope (management group, subscription, resource group, or resource).
Owner is the most privileged of the common roles, with full resource management and access control permissions. Contributor allows full management of resources but not access control. Reader is strictly read-only. User Access Administrator is focused on access control only and does not grant permissions to change resources.
In this question, the incorrect statement is the one that claims User Access Administrator can manage all resources. That role is limited to managing who has access, not managing the resources themselves.
Topic: Deploy and Manage Azure Compute Resources
You manually created a production resource group named RG1 in the Azure portal. It contains a web app, an App Service plan, a storage account, and a key vault with all settings already tuned for your workload.
Management asks you to:
- Create a test environment that is a copy of the current configuration of RG1, deployed into a new resource group.
- Make it possible for other administrators to deploy additional standardized copies of the environment in the future.
You have limited experience with ARM/Bicep and want to minimize manual configuration effort while reusing the current configuration of RG1.
Which of the following actions/solutions will meet these requirements? (Select TWO.)
Options:
A. After validating the exported template, save it as a template spec in a shared “Templates” resource group so other admins can deploy standardized copies of the environment directly from the portal or automation tools.
B. Write a new Bicep file by hand that models all required resources based on what exists in RG1, then use Azure CLI to deploy it for the test and future environments.
C. In the Azure portal, open RG1, use Export template to generate an ARM template for the entire resource group, then deploy that template to a new resource group for the test environment, changing only names and other parameters during deployment.
D. Use Azure Policy to automatically deploy the full web application stack whenever a new resource group is created, embedding all required settings directly in custom policy definitions.
E. Recreate each resource manually in a new resource group using the Azure portal, copying settings from screenshots of RG1, and document the steps in a wiki for others to follow.
Correct answers: A and C
Explanation: Exporting an ARM template from an existing resource group is a practical way for an Azure administrator to turn a manually built environment into reusable infrastructure-as-code.
When you use the Export template feature on a resource group in the Azure portal, Azure generates a template (plus parameters) that describes the resources and their configuration. You can then deploy this template to another resource group to clone the environment with minimal changes, typically just updating names and environment-specific parameters.
To standardize this deployment for future use, you can save the exported template as a template spec in a shared resource group. Template specs provide a centrally stored, versioned template that can be re-deployed from the portal, Azure CLI, PowerShell, or pipelines, enabling consistent, repeatable deployments without having to recreate or re-document every setting.
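A rough end-to-end sketch of this workflow in Azure CLI: export, redeploy to a test resource group, then publish a template spec. Resource, group, and parameter names are illustrative; a real exported template exposes its own parameter names.

```bash
# 1. Export the existing resource group to an ARM template.
az group export --name RG1 > rg1-template.json

# 2. Deploy the exported template to a new test resource group,
#    overriding only environment-specific parameters.
az deployment group create \
  --resource-group RG1-test \
  --template-file rg1-template.json \
  --parameters webAppName=webapp-test storageAccountName=sttest001

# 3. Publish the template as a versioned template spec for other admins.
az ts create \
  --name rg1-environment \
  --resource-group rg-templates \
  --location westeurope \
  --version "1.0" \
  --template-file rg1-template.json
```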
Topic: Configure and Manage Virtual Networking
An administrator manages VM1 in a subnet that has a subnet NSG and a NIC NSG associated with VM1. Users from the branch office (10.20.0.0/16) cannot reach a web app on VM1 over HTTPS from on-premises. You open the Effective security rules view for VM1 and see:
| Priority | Name | Source | Destination | Port | Action |
|---|---|---|---|---|---|
| 100 | DenyCorp443 | 10.0.0.0/8 | Any | 443 | Deny |
| 200 | AllowBranch | 10.20.0.0/16 | Any | 443 | Allow |
| 65000 | AllowVNetIn | VirtualNetwork | Any | * | Allow |
| 65500 | DenyAllIn | Any | Any | * | Deny |
DenyCorp443 is defined on the NIC NSG; AllowBranch is defined on the subnet NSG. You must:
- Allow HTTPS (TCP 443) from 10.20.0.0/16 to VM1.
- Continue to deny TCP 443 from the rest of 10.0.0.0/8.
- Make a minimal, targeted change.
Which change should you make?
Options:
A. Change the priority of the AllowBranch rule on the subnet NSG to 90 so it appears above DenyCorp443 in the effective security rules list.
B. Add an inbound allow rule for source 10.20.0.0/16 on the NIC NSG with priority 90 (lower than 100) allowing TCP 443 to VM1.
C. Disassociate the NIC NSG from VM1 so that only the subnet NSG evaluates inbound HTTPS traffic.
D. Delete the DenyCorp443 rule from the NIC NSG and rely only on the subnet NSG rules to control HTTPS access.
Best answer: B
Explanation: Network security groups process rules by ascending priority number, stopping at the first matching rule. A packet is allowed only if it is permitted by both the subnet NSG and the NIC NSG. When you look at Effective security rules, Azure shows the combined view, but under the hood each NSG makes its own allow/deny decision.
In this scenario, the NIC NSG has a deny rule for source 10.0.0.0/8 on TCP 443 with priority 100. The branch prefix 10.20.0.0/16 is part of 10.0.0.0/8, so that deny rule matches the branch traffic. Even though the subnet NSG has an allow rule for 10.20.0.0/16 at priority 200, the NIC deny still wins, because both NSGs must allow the traffic for it to be delivered.
To allow 10.20.0.0/16 but still deny the rest of 10.0.0.0/8, you need a more specific allow rule on the NIC NSG that takes precedence over the broader deny. By adding an inbound allow rule for 10.20.0.0/16 on port 443 with a lower priority number (for example, 90), the NIC NSG will first match that allow for 10.20.0.0/16, while continuing to use the existing deny for any other 10.0.0.0/8 addresses. This meets all requirements with a minimal, targeted change.
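The targeted NIC-NSG change might look like the following sketch, assuming the NIC NSG is named nsg-vm1-nic:

```bash
# More specific allow on the NIC NSG; priority 90 beats the broader deny at 100.
az network nsg rule create \
  --resource-group rg-net \
  --nsg-name nsg-vm1-nic \
  --name AllowBranchHttps \
  --priority 90 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 10.20.0.0/16 \
  --destination-port-ranges 443
```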
Topic: Implement and Manage Storage
In Azure Files, an administrator configures per-share quotas on several SMB file shares used for lift-and-shift user home directories, mainly to control capacity growth and avoid paying for unused storage. Which cloud operations principle does this configuration best exemplify?
Options:
A. Defense in depth and network isolation
B. Operational observability and diagnostics
C. Cost optimization and capacity management
D. High availability within a region
Best answer: C
Explanation: In Azure Files, a per-share quota sets an upper limit on the logical size of a file share. When you apply quotas to SMB or NFS shares, you constrain how much data each workload or group of users can store on that share.
This directly supports cost optimization and capacity management. By preventing uncontrolled growth of lift-and-shift home directories or shared application data, you reduce the risk of paying for large volumes of rarely used or unnecessary data. It also makes capacity planning easier, because each share has a defined maximum size aligned with expected usage.
Availability, security, and observability are all important operational principles, but they are addressed through different Azure Files features, such as redundancy settings for availability, network rules and permissions for security, and diagnostic logs and metrics for observability. The scenario described focuses specifically on controlling storage consumption and associated cost, so cost optimization is the best match.
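For context, setting such a quota is a one-line management-plane operation; the sketch below caps an illustrative share at 100 GiB.

```bash
# Cap a lift-and-shift home-directory share at 100 GiB.
az storage share-rm update \
  --resource-group rg-storage \
  --storage-account stfiles001 \
  --name home-directories \
  --quota 100
```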
Topic: Monitor and Maintain Azure Resources
Which statement accurately describes a core capability of Azure VM Insights when it is enabled on a virtual machine?
Options:
A. It replaces network security group (NSG) flow logs by capturing and storing all packet-level traffic for the VM.
B. It provides detailed performance charts and a dependency map that show the VM’s resource usage and network connections to other components.
C. It automatically scales the VM’s size up or down when CPU usage exceeds a configured threshold.
D. It backs up the VM’s managed disks on a schedule and reports the status of each restore point.
Best answer: B
Explanation: Azure VM Insights is an Azure Monitor feature that gives deep visibility into the performance and dependencies of virtual machines. When enabled and connected to a Log Analytics workspace, it collects guest-level metrics such as CPU, memory, disk, and network usage. With the Dependency agent installed, it can also discover processes and map inbound and outbound network connections between the VM and other components.
These insights are presented as performance charts and a dependency map in the Azure portal, helping administrators quickly identify resource bottlenecks and understand how a VM interacts with other services. VM Insights is a monitoring and diagnostics tool only; it does not directly change resource configurations, scale VMs, capture all packets, or perform backups.
Topic: Deploy and Manage Azure Compute Resources
Which TWO statements about disk caching for Azure virtual machine managed disks are correct? (Select TWO.)
Options:
A. Read-only caching is well-suited for data disks that are heavily read and infrequently written, such as disks holding reference data or application binaries.
B. Using Read/Write caching on the OS disk of most general-purpose VMs helps speed up operating system boot and frequent OS file access.
C. Write-intensive data disks, such as database log disks, typically benefit from enabling Read/Write caching so that most writes are absorbed in the cache.
D. You can change the caching mode of any attached disk on a running virtual machine without stopping it, and the change takes effect immediately without downtime.
E. Premium SSD data disks must have some form of caching enabled; otherwise, data durability is not guaranteed.
F. You must disable caching on the OS disk before you can resize the virtual machine to a different size within the same VM family.
Correct answers: A and B
Explanation: Azure VM disk caching uses the local storage of the host to speed up access to managed disks. It offers three modes: Read/Write, Read-only, and None. The choice depends on the workload pattern.
For the OS disk, Azure defaults to Read/Write caching because operating systems perform many small, random reads and writes that benefit from caching on the host, improving boot time and general responsiveness.
For data disks, caching is chosen based on I/O characteristics. Read-only caching is beneficial when the workload is read-heavy and writes are rare, such as reference data, application binaries, or some reporting datasets. In that case, frequently accessed blocks stay in the cache, improving read performance.
Write-intensive workloads (for example, database log disks or high-throughput ingestion) are usually configured with caching set to None. This avoids extra latency and ensures that writes go directly to durable storage without passing through a host cache layer that can become a bottleneck.
Importantly, cache settings are not about durability or basic storage reliability; they are about performance behavior. Changing caching modes is also not a trivial live operation: usually, the VM must be stopped/deallocated or the disk detached to safely apply a new caching configuration.
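Two illustrative CLI fragments, one per pattern (all names are placeholders): attach a read-heavy disk with ReadOnly caching, and set None on a write-intensive disk at LUN 1. Per the note above, expect the disk to be detached and reattached when the caching mode changes.

```bash
# Attach a read-heavy reference-data disk with ReadOnly host caching.
az vm disk attach \
  --resource-group rg-app \
  --vm-name vm-app-01 \
  --name disk-refdata \
  --caching ReadOnly

# Switch caching to None on a write-intensive data disk at LUN 1.
az vm update \
  --resource-group rg-app \
  --name vm-app-01 \
  --set "storageProfile.dataDisks[1].caching=None"
```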
Topic: Configure and Manage Virtual Networking
Your company runs dozens of production VMs in Azure. Administrators work remotely over the internet. You must minimize exposed management ports, improve security, and keep administration manageable. Which of the following remote-access configurations should you AVOID? (Select THREE.)
Options:
A. Deploy Azure Bastion in a hub virtual network, remove public IP addresses from the VMs, and require portal sign-in with RBAC and MFA for administrators.
B. Use Azure Bastion for ad-hoc administrator access to isolated VMs, but require daily administration to occur over a site-to-site VPN, with administrators connecting to VM private IP addresses from on-premises devices.
C. Assign a public IP address to each production VM and control RDP/SSH access only with host-based firewalls on the VMs, without using NSGs, Bastion, or VPN.
D. Enable point-to-site VPN for administrators, but leave existing NSG rules that allow RDP/SSH from any internet source to all VMs so that administrators can connect directly if VPN fails.
E. Create a dedicated management subnet that contains a jumpbox VM with no public IP; allow RDP to the jumpbox only from on-premises IPs over a site-to-site VPN, and block direct RDP/SSH to workload VMs from the internet.
F. Configure a Windows jumpbox VM with a public IP and a network security group (NSG) rule allowing RDP from any internet source, and have all administrators use a shared local administrator account on the jumpbox.
Correct answers: C, D and F
Explanation: The scenario emphasizes minimizing exposed management ports, improving security, and keeping administration manageable. In Azure, this usually means avoiding direct RDP/SSH from the internet, preferring Azure Bastion or VPN-based access, and keeping controls centralized and identity-aware.
Azure Bastion provides browser-based RDP/SSH to virtual machines over their private IPs, without exposing public IPs or opening management ports to the internet. VPN-based administration (point-to-site or site-to-site) allows admins to connect into the virtual network and then use RDP/SSH privately. A jumpbox can be acceptable if it is isolated, reachable only via private connectivity, and tightly controlled.
The unsafe patterns are those that expose many public IPs, allow wide-open RDP/SSH from the internet, use shared local accounts, or keep insecure “fallback” paths that bypass Bastion or VPN. These increase attack surface and reduce manageability and auditing, which conflicts with the stated requirements.
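A minimal sketch of the Bastion pattern, assuming a hub VNet that already contains the required AzureBastionSubnet; names are illustrative.

```bash
# Standard-SKU public IP for the Bastion host itself (the VMs keep none).
az network public-ip create \
  --resource-group rg-hub \
  --name pip-bastion \
  --sku Standard

# Bastion deploys into the dedicated AzureBastionSubnet of the hub VNet.
az network bastion create \
  --resource-group rg-hub \
  --name bastion-hub \
  --public-ip-address pip-bastion \
  --vnet-name vnet-hub
```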
Topic: Implement and Manage Storage
An administrator replaces shared storage account keys hard-coded in applications with Microsoft Entra ID–based access for internal apps and user delegation SAS tokens for partners that grant read access to specific containers for 24 hours only. Which cloud security principle is primarily being applied?
Options:
A. Improving network latency by routing traffic over private endpoints
B. Optimizing storage costs by using lower-cost access tiers
C. Enforcing least privilege and reducing the blast radius of credential compromise
D. Providing cross-region disaster recovery for critical data
Best answer: C
Explanation: Storage account access methods differ in how well they support least privilege and secure operations.
Using storage account keys is powerful but coarse-grained: any holder has full account-level access, and revoking a key affects all callers using it. This makes least privilege and targeted revocation difficult.
Switching internal workloads to Microsoft Entra ID–based access lets you use RBAC roles scoped to specific containers or blobs and audited via identity. For external parties, issuing user delegation SAS tokens that are limited to specific containers, permissions, and short expiry windows provides time-bound, resource-scoped access without sharing account keys.
Together, identity-based access plus scoped, short-lived SAS tokens implement the principle of least privilege and significantly reduce the blast radius if a token is leaked, while also improving manageability of storage access.
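A sketch of issuing such a token with Azure CLI: a read-only user delegation SAS for one container, expiring after roughly 24 hours. The account and container names and the expiry timestamp are illustrative.

```bash
# --auth-mode login signs the SAS with Entra credentials, not account keys.
az storage container generate-sas \
  --account-name stpartner001 \
  --name partner-data \
  --permissions r \
  --expiry 2025-06-02T00:00Z \
  --auth-mode login \
  --as-user
```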
Topic: Implement and Manage Storage
You manage an Azure Storage account used by several web apps that all connect using key1 in their connection strings. You must rotate the access keys while minimizing application downtime and keeping the same storage account. Which approach is the most appropriate?
Options:
A. First update all applications to use connection strings with key2, verify they work, then regenerate key1 for future use.
B. Immediately regenerate key1, then quickly update all application connection strings to use the new key1 value.
C. Create a new storage account, copy all data to it, and reconfigure every application to use the new account instead of rotating the keys.
D. Disable key1, wait for applications to fail to confirm they are using it, then regenerate both keys and redeploy all apps.
Best answer: A
Explanation: Azure Storage accounts provide two access keys (key1 and key2) specifically to enable safe key rotation without downtime. At any time, both keys are valid. To minimize disruption, you move all callers off the key you plan to regenerate before you change it.
The correct rotation pattern is:
1. Identify the key that applications currently use (here, key1).
2. Update all applications to use the other key (key2) and verify they connect successfully.
3. Once no callers use key1, regenerate key1. It becomes the “spare” key for the next rotation.
This approach keeps at least one valid, unchanged key in use at all times, so applications continue to work during the rotation, satisfying the requirement to minimize downtime while keeping the same storage account.
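Expressed as an Azure CLI sketch (account and group names illustrative): read key2, repoint the applications, then regenerate key1, which the CLI addresses as the primary key.

```bash
# 1. Read the current value of key2 so applications can be repointed to it.
az storage account keys list \
  --resource-group rg-app \
  --account-name stapp001 \
  --query "[?keyName=='key2'].value" -o tsv

# 2. Update application connection strings to key2 and verify connectivity.

# 3. Regenerate key1 ("primary") once nothing uses it.
az storage account keys renew \
  --resource-group rg-app \
  --account-name stapp001 \
  --key primary
```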
Topic: Monitor and Maintain Azure Resources
In Azure Site Recovery for Azure-to-Azure VM replication, what is the primary purpose of configuring network mapping between the primary and secondary regions?
Options:
A. To associate each source virtual network and subnet with a target virtual network and subnet so failed-over VMs attach to the correct subnet and use its IP range and NSG rules
B. To automatically copy all NSGs, route tables, and IP configurations from the source virtual network to an identically named virtual network in the secondary region
C. To ensure that all failed-over VMs in the secondary region are assigned new dynamic public IP addresses, regardless of their original private IP configuration
D. To create an automatic VPN or ExpressRoute connection between the primary and secondary regions so that no subnet or NSG configuration is required on the target side
Best answer: A
Explanation: In Azure Site Recovery (ASR) for Azure-to-Azure VM replication, you must plan what the network layout will look like in the secondary region. Network mapping is the mechanism that links the source virtual network and its subnets to the appropriate virtual network and subnets in the target region.
When you configure network mapping, you specify that VMs coming from a particular source subnet will fail over into a specific target subnet. During failover, ASR connects the VM’s NIC to that target subnet. As a result, the VM uses the target subnet’s address space, DHCP behavior, and Network Security Group (NSG) rules that are associated with that subnet or NIC.
ASR does not automatically clone NSGs or other network resources from the primary region. You must precreate the target virtual network, subnets, and any NSGs you require, then use network mapping to direct failover into those subnets. This separation lets you design different IP ranges or security rules in the failover environment while still controlling where each protected VM lands.
Topic: Configure and Manage Virtual Networking
You manage an Azure virtual network that hosts several production VMs. The security team has defined requirements and compared remote access options as shown in the following exhibit.
Security requirements:
| Item | Requirement |
|---|---|
| R1 | No public IP addresses on any VM |
| R2 | No inbound RDP/SSH directly from the Internet |
| R3 | Admins may connect from unmanaged devices without installing client software |
| R4 | Minimize remote access management overhead |
Remote access options:
| Option | Public IPs on VMs | Inbound RDP/SSH from Internet | Client software needed | Access interface | Management overhead |
|---|---|---|---|---|---|
| Azure Bastion | No | No | No | Browser/portal | Low |
| Jumpbox VM | Yes (jumpbox) | Yes (to jumpbox) | No | Native RDP/SSH | Medium |
| VPN gateway | No | No | Yes | Native RDP/SSH | High |
Based on the exhibit, which remote access option should you implement?
Options:
A. Assign public IP addresses to all production VMs and restrict RDP/SSH using NSG rules to corporate IP ranges only.
B. Deploy Azure Bastion in the virtual network and use browser-based RDP/SSH sessions to the VMs.
C. Create a jumpbox VM with a public IP and allow RDP/SSH from the Internet only to that VM.
D. Deploy a site-to-site or point-to-site VPN gateway and require admins to connect via a VPN client before using RDP/SSH.
Best answer: B
Explanation: The exhibit lists four security and operational requirements (R1–R4) and compares three remote access options across those dimensions. The goal is to pick the option whose characteristics align with all requirements.
Azure Bastion is shown as providing remote access without public IPs on VMs, without inbound RDP/SSH from the Internet, and without needing client software. It also has low management overhead because it is a managed PaaS service that integrates directly with the Azure portal. This exactly matches requirements R1–R4.
The jumpbox VM requires at least one VM with a public IP and inbound RDP/SSH from the Internet, even if other VMs remain private. This breaks the rule of having no public IPs on any VM and no direct RDP/SSH from the Internet. The VPN gateway avoids public IPs on individual VMs and blocks direct Internet RDP/SSH, but requires client software and is rated as high overhead in the exhibit, so it fails the unmanaged-device and low-overhead requirements.
Therefore, deploying Azure Bastion is the only option that meets all stated requirements in the exhibit.
Topic: Deploy and Manage Azure Compute Resources
You are deploying microservices to Azure Kubernetes Service (AKS) and Azure Container Apps in a single region. You will store container images in Azure Container Registry (ACR).
You must:
- Ensure that only workloads in the AKS and Container Apps virtual networks can reach the registry, using private endpoints.
- Ensure the registry is not accessible over the public internet.
- Minimize cost while meeting these requirements.
Which ACR SKU and configuration should you choose?
Options:
A. Create a Premium-tier ACR, disable public network access, and create private endpoints to the registry in both VNets.
B. Store images in a Standard general-purpose v2 storage account and provide SAS tokens to the workloads instead of using ACR.
C. Create a Basic-tier ACR and restrict access by allowing only the outbound public IP addresses of the AKS and Container Apps environments.
D. Create a Standard-tier ACR, enable a service endpoint for the subnets, and block all other IP addresses.
Best answer: A
Explanation: Azure Container Registry (ACR) is offered in Basic, Standard, and Premium tiers. While all tiers support storing and serving container images, advanced networking features such as private endpoints and disabling public network access are only available in the Premium tier.
In this scenario, you must ensure that only workloads in two specific VNets can reach the registry using private endpoints, and that the registry is not accessible over the public internet. This directly points to using ACR Premium with private endpoints created in each VNet and public network access disabled. Choosing Premium also satisfies the requirement to minimize cost, because it is the lowest SKU that provides these specific networking capabilities.
Basic and Standard tiers rely on a public endpoint and cannot provide VNet-only access via private endpoints. Alternative storage services, such as general-purpose storage accounts, do not fulfill the requirement to use Azure Container Registry and lack registry-specific features used by container platforms like AKS and Azure Container Apps.
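A rough sketch of that configuration for one of the two VNets (repeat the private endpoint step for the second); registry, VNet, and subnet names are illustrative.

```bash
# Premium is the lowest ACR tier that supports private endpoints.
az acr create \
  --resource-group rg-containers \
  --name acrprod001 \
  --sku Premium

# Remove the public endpoint entirely.
az acr update --name acrprod001 --public-network-enabled false

# Private endpoint in the AKS VNet; repeat for the Container Apps VNet.
az network private-endpoint create \
  --resource-group rg-containers \
  --name pe-acr-aks \
  --vnet-name vnet-aks \
  --subnet snet-endpoints \
  --private-connection-resource-id "$(az acr show --name acrprod001 --query id -o tsv)" \
  --group-id registry \
  --connection-name acr-connection
```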
Topic: Implement and Manage Storage
You plan to store application logs in an Azure Blob Storage container. Anonymous users must be able to download a specific log file only if they know its full URL. Anonymous users must not be able to list the blobs in the container. Which container public access level should you configure?
Options:
A. Set the container public access level to Blob (anonymous read access for blobs only).
B. Set the container public access level to Container (anonymous read access for container and blobs).
C. Disable public access at the storage account level (AllowBlobPublicAccess = Disabled).
D. Set the container public access level to Private (no anonymous access).
Best answer: A
Explanation: Azure Blob Storage supports three main container public access levels: Private, Blob, and Container. Private blocks all anonymous access; Blob allows anonymous read of individual blobs if the URL is known but does not allow anonymous listing of the container; Container allows anonymous listing of blobs and anonymous read.
In this scenario, you must allow anonymous downloads of blobs via their direct URLs while preventing anonymous listing of the container contents. The Blob public access level is designed for exactly this use case: it enables public read access to blobs only, without exposing the container listing.
Container public access would be less secure because it exposes the full list of blobs, and Private or disabling public access at the account level would be too restrictive and block the required anonymous read access by URL.
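Two illustrative CLI steps (names are placeholders): public blob access must be permitted at the account level before the container-level Blob setting takes effect.

```bash
# The account-level switch must permit public access before container levels apply.
az storage account update \
  --resource-group rg-logs \
  --name stlogs001 \
  --allow-blob-public-access true

# Blob level: anonymous read by direct URL, no anonymous container listing.
az storage container set-permission \
  --account-name stlogs001 \
  --name app-logs \
  --public-access blob
```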
Topic: Configure and Manage Virtual Networking
You manage an e-commerce website hosted on Windows virtual machines in a single Azure region. A public Standard Azure Load Balancer distributes HTTP traffic to these VMs.
Developers deployed a new VM scale set that hosts an API at /api. They want:
- Requests to https://contoso.com/ to go to the web VMs.
- Requests to https://contoso.com/api to go to the API VM scale set.
They created a second backend pool on the existing load balancer and added the API VM scale set. The existing load-balancing rule remains unchanged: frontend port 80 forwarding to the web VM backend pool.
Health probes show both backend pools as healthy, but all /api requests still go to the web VMs.
You must fix the routing while keeping a single public endpoint. What should you do?
Options:
A. Change the health probe for the API backend pool to HTTP and set the probe path to /api instead of using a TCP probe.
B. Add a second load-balancing rule on the existing load balancer for port 80 that targets the API backend pool and enable session persistence.
C. Configure Azure Traffic Manager with two endpoints: one for the existing load balancer and one directly for the API VM scale set, and point contoso.com to the Traffic Manager profile.
D. Deploy an Azure Application Gateway in front of the VMs and scale set, configure it as the public endpoint, and use URL path-based routing rules for / and /api.
Best answer: D
Explanation: In this scenario, the key issue is that the team expects routing decisions to be made based on the HTTP path (/ vs /api) while using an Azure Load Balancer. Azure Load Balancer operates at OSI layer 4 (TCP/UDP). It makes decisions based on IP address and port, not on HTTP headers, hostnames, or paths.
Because the existing rule uses a single frontend IP and port (80) and targets the web VM backend pool, the load balancer sends all HTTP traffic on that IP:port to the configured backend pool, regardless of the URL path. Creating additional backend pools does not allow path-based routing, because the load balancer has no way to distinguish which flows should go to which pool based on the HTTP request.
To route / to one backend pool and /api to another while preserving a single public endpoint, you need a layer-7 (application layer) load balancer that can inspect HTTP requests and apply URL path-based rules. Azure Application Gateway is specifically designed for this: it terminates HTTP/HTTPS, can look at host and path, and then forwards traffic to different backend pools accordingly.
Therefore, the appropriate fix is to introduce an Application Gateway as the public endpoint, configure it with backend pools for the web VMs and the API scale set, and create URL path-based routing rules that send / to the web pool and /api to the API pool.
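As an illustration of the path-based piece, the sketch below adds a URL path map to an existing gateway. The gateway, pool, and HTTP-settings names are assumptions, the pools and settings are presumed to exist already, and the map must also be referenced from a path-based request routing rule.

```bash
# URL path map: /api/* goes to the API pool, everything else to the web pool.
az network application-gateway url-path-map create \
  --resource-group rg-web \
  --gateway-name agw-contoso \
  --name contoso-paths \
  --rule-name api-rule \
  --paths "/api/*" \
  --address-pool pool-api \
  --http-settings settings-default \
  --default-address-pool pool-web \
  --default-http-settings settings-default
```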
Topic: Implement and Manage Storage
Which action is a core capability of Azure Storage Explorer for managing Azure Storage data across accounts and subscriptions?
Options:
A. Assign or modify Azure RBAC roles at the subscription and resource group scope
B. Collect and visualize real-time storage metrics and alerts for all accounts in a tenant
C. Browse multiple storage accounts from different subscriptions and copy blobs between containers using drag-and-drop
D. Create new Azure Storage accounts and configure their replication settings directly from the tool
Best answer: C
Explanation: Azure Storage Explorer is a cross-platform, standalone client application used by administrators to manage Azure Storage data. Its primary focus is data-level operations for blobs, files, queues, and tables rather than provisioning or monitoring storage resources.
With Storage Explorer, you can sign in with your Azure account or attach storage using connection strings, SAS, or shared keys. Once connected, you can browse containers and file shares, upload and download objects, and copy data between containers, storage accounts, and even subscriptions using a graphical interface, often via drag-and-drop.
It does not replace the Azure portal, CLI, or PowerShell for tasks such as creating storage accounts, configuring redundancy, managing RBAC at subscription scope, or setting up monitoring and alerts. Those are still performed through Azure management tools. Storage Explorer sits “on top” of existing storage resources to make everyday data management easier.
Topic: Monitor and Maintain Azure Resources
You manage backups for production Azure virtual machines using a Recovery Services vault. Business requirements are:
- Take backups every day.
- Retain daily recovery points for at least 30 days.
- Retain a monthly recovery point for at least 12 months.
You are reviewing proposed backup policies.
Which TWO policy configurations should you AVOID because they do NOT meet the stated requirements? (Select TWO.)
Options:
A. Daily backup at 22:00; retain daily backups for 35 days; retain one backup on the last day of each month for 12 months.
B. Daily backup at 02:00; retain daily backups for 30 days; retain one backup on the first day of each month for 36 months.
C. Weekly backup every Sunday at 01:00; retain weekly backups for 12 weeks; retain one backup on the first Sunday of each month for 12 months.
D. Daily backup at 23:00; retain daily backups for 7 days; retain weekly backups (every Sunday) for 4 weeks; do not configure any monthly retention.
E. Daily backup at 00:00; retain daily backups for 30 days; retain one backup on the first Sunday of each month for 12 months.
Correct answers: C and D
Explanation: Azure Backup policies for virtual machines let you configure a backup schedule (daily or weekly) and separate retention rules for daily, weekly, monthly, and yearly recovery points.
In this scenario, you must back up daily, keep daily recovery points for at least 30 days, and keep a monthly recovery point for at least 12 months.
Any acceptable policy must therefore use a daily schedule, configure daily retention of 30 days or more, and configure monthly retention of 12 months or more.
Policies that use only weekly backups, keep daily backups for fewer than 30 days, or lack 12 months of monthly retention fail these requirements and should be avoided.
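Once a compliant policy exists in the vault, applying it to a VM is a single call; the vault, policy, and VM names below are illustrative.

```bash
# Apply a compliant policy (daily schedule, >=30-day daily and >=12-month
# monthly retention, defined in the vault) to a production VM.
az backup protection enable-for-vm \
  --resource-group rg-backup \
  --vault-name rsv-prod \
  --vm vm-app-prod \
  --policy-name DailyWithMonthly
```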
Topic: Monitor and Maintain Azure Resources
You manage several production virtual networks and site-to-site VPN connections. You must use Azure Monitor Insights for networks to proactively detect high latency, packet loss, and connection health issues with minimal manual effort. Which of the following configurations should you AVOID? (Select THREE.)
Options:
A. Disable alert rules for intermittent packet loss and instead review the Network Insights dashboard once per week for any visible issues.
B. Create Connection Monitor tests between critical subnets and set alert rules on increased latency and packet loss thresholds.
C. Enable NSG flow logs and Traffic Analytics, sending data to a Log Analytics workspace used by Network Insights to observe traffic patterns.
D. Rely on administrators to manually run ping tests from selected VMs during business hours to confirm connectivity and latency.
E. Turn off Azure Monitor for networks and rely solely on an external tool so that no Azure-native metrics or logs are collected for virtual networks.
F. Configure diagnostic settings on VPN gateways and Azure Firewall to send metrics and logs to a shared Log Analytics workspace for Network Insights analysis.
Correct answers: A, D and E
Explanation: Azure Monitor Insights for networks (often surfaced through Network Insights and Connection Monitor) is designed to provide continuous visibility into virtual network health, including latency, packet loss, and connection status. To use it effectively, you should configure automated tests, diagnostics, and alert rules so that issues are detected and surfaced without relying on manual checks.
Good configurations include enabling Connection Monitor tests between critical endpoints and setting alerts on latency and packet loss. Similarly, enabling NSG flow logs, Traffic Analytics, and diagnostic settings on network resources (such as VPN gateways and firewalls) and sending them to Log Analytics ensures that Network Insights has the data it needs to highlight traffic patterns and health problems.
In contrast, approaches that depend on manual ping tests, weekly dashboard reviews without alerts, or fully disabling Azure-native monitoring undermine the value of Azure Monitor Insights. These anti-patterns increase the risk that connectivity problems will go undetected or be identified only after users report incidents, which conflicts with the requirement for proactive monitoring.
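As one concrete example of the recommended configurations, the sketch below enables NSG flow logs with Traffic Analytics into a Log Analytics workspace. Names and region are illustrative, and the flow-log flags should be confirmed for your CLI version.

```bash
# NSG flow logs with Traffic Analytics feeding the shared workspace.
az network watcher flow-log create \
  --location westeurope \
  --name fl-nsg-prod \
  --resource-group rg-net \
  --nsg nsg-prod \
  --storage-account stflowlogs001 \
  --workspace law-network \
  --traffic-analytics true
```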
Topic: Configure and Manage Virtual Networking
Which THREE of the following statements about Azure virtual network (VNet) peering and hub-and-spoke topologies are INCORRECT? (Select THREE.)
Options:
A. Peered VNets must use non-overlapping IP address ranges; otherwise, the peering connection cannot be created.
B. Gateway transit can be enabled on both virtual networks in the same peering so that each VNet can use the other’s VPN gateway.
C. To force spoke-to-spoke traffic through a network virtual appliance in the hub, you can use user-defined routes so that the traffic is inspected in the hub before reaching the destination spoke.
D. VNet peering allows overlapping IP address spaces between VNets as long as you do not enable gateway transit.
E. In a hub-and-spoke design, a VPN gateway is typically deployed only in the hub VNet and shared with spoke VNets by enabling gateway transit on the hub and ‘Use remote gateways’ on each spoke.
F. VNet peering automatically provides transitive routing between all peered VNets; additional routing or appliances are never required for spoke-to-spoke traffic.
Correct answers: B, D and F
Explanation: In Azure, VNet peering connects VNets at the network fabric level, but it has strict rules about IP address spaces and routing behavior. VNets must have non-overlapping address ranges to be peered. VNet peering itself is not transitive, so traffic does not automatically flow between all VNets that are indirectly connected through a hub.
In a hub-and-spoke design, a single VPN or ExpressRoute gateway is commonly deployed in the hub VNet. Gateway transit is enabled from the hub so that spoke VNets can share this gateway by configuring their peering to use remote gateways. For more advanced scenarios like inspecting traffic between spokes, you typically use user-defined routes to send traffic via a network virtual appliance in the hub.
The incorrect statements either violate the non-overlapping IP requirement, misrepresent how gateway transit is configured, or incorrectly claim that peering is automatically transitive.
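A sketch of the hub-and-spoke gateway-sharing configuration; the subscription and resource names are placeholders, and the remote VNet is passed by resource ID because hub and spoke live in different resource groups.

```bash
# Hub side: offer the hub's VPN gateway to the peered spoke.
az network vnet peering create \
  --resource-group rg-hub \
  --vnet-name vnet-hub \
  --name hub-to-spoke1 \
  --remote-vnet "/subscriptions/<sub-id>/resourceGroups/rg-spoke1/providers/Microsoft.Network/virtualNetworks/vnet-spoke1" \
  --allow-vnet-access \
  --allow-gateway-transit

# Spoke side: consume the hub gateway instead of deploying its own.
az network vnet peering create \
  --resource-group rg-spoke1 \
  --vnet-name vnet-spoke1 \
  --name spoke1-to-hub \
  --remote-vnet "/subscriptions/<sub-id>/resourceGroups/rg-hub/providers/Microsoft.Network/virtualNetworks/vnet-hub" \
  --allow-vnet-access \
  --use-remote-gateways
```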
Topic: Deploy and Manage Azure Compute Resources
You manage a production Windows Server VM in Azure that hosts sensitive data on both the OS and a data disk. Compliance requires encrypting both disks with customer-managed keys stored in Azure Key Vault and protecting the keys from accidental deletion. You want to use Azure-native features only. Which of the following actions will meet these requirements? (Select TWO.)
Options:
A. Rely only on the default server-side encryption with platform-managed keys on the managed disks; no further configuration is required.
B. Create an Azure Key Vault in the same region as the VM, enable soft delete and purge protection, and generate or import a customer-managed key to be used for disk encryption.
C. Enable Azure Disk Encryption on the VM and select the customer-managed key from the Azure Key Vault to encrypt both the OS disk and the attached data disk.
D. Store the encryption key in an Azure Storage account and configure a custom script extension in the VM to retrieve the key during startup.
E. Create a disk encryption set that references a Key Vault key without enabling soft delete or purge protection, and associate both the OS and data disks with this disk encryption set.
Correct answers: B and C
Explanation: To satisfy the scenario, you must both prepare a compliant key store for customer-managed keys and then enable disk encryption on the VM using those keys. Customer-managed keys for disk encryption must be stored in Azure Key Vault, typically in the same region as the disks, and many compliance standards require that Key Vault have soft delete and purge protection enabled to prevent accidental key loss. Only after this prerequisite is met can you configure Azure Disk Encryption on the VM and select the appropriate key to encrypt both OS and data disks.
Creating a properly configured Key Vault with soft delete and purge protection, and then enabling Azure Disk Encryption on the VM using the customer-managed key from that vault, together meet all stated requirements. Other options either rely on platform-managed keys instead of customer-managed keys, store keys in the wrong service, or omit required protection against accidental key deletion.
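A condensed sketch of the two correct actions; the vault, key, VM, and group names are illustrative.

```bash
# Key vault prepared for disk encryption; soft delete is on by default and
# purge protection is enabled explicitly to guard the key against deletion.
az keyvault create \
  --resource-group rg-sec \
  --name kv-diskenc-001 \
  --enabled-for-disk-encryption true \
  --enable-purge-protection true

# Customer-managed key-encryption key.
az keyvault key create --vault-name kv-diskenc-001 --name kek-disks

# Azure Disk Encryption across OS and data disks using that key.
az vm encryption enable \
  --resource-group rg-app \
  --name vm-data-prod \
  --disk-encryption-keyvault kv-diskenc-001 \
  --key-encryption-key kek-disks \
  --volume-type ALL
```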
Use the AZ-104 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Try AZ-104 on Web | View AZ-104 Practice Test
Read the AZ-104 Cheat Sheet on Tech Exam Lexicon for concept review before another timed run.