Try 50 free AZ-900 questions across the exam domains, with explanations, then continue with full IT Mastery practice.
This free full-length AZ-900 practice exam includes 50 original IT Mastery questions across the exam domains.
These questions are for self-assessment. They are not official exam questions and do not imply affiliation with the exam sponsor.
Count note: this page uses the full-length practice count maintained in the Mastery exam catalog. Some certification vendors publish total questions, scored questions, duration, or unscored/pretest-item rules differently; always confirm exam-day rules with the sponsor.
Need concept review first? Read the AZ-900 Cheat Sheet on Tech Exam Lexicon, then return here for timed mocks and full IT Mastery practice.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
| Domain | Weight |
|---|---|
| Describe Cloud Concepts | 25% |
| Describe Azure Architecture and Services | 39% |
| Describe Azure Management and Governance | 36% |
Use this as one diagnostic run. IT Mastery gives you timed mocks, topic drills, analytics, code-reading practice where relevant, and full practice.
Topic: Describe Azure Architecture and Services
Which statement BEST describes how an Azure availability set improves virtual machine availability within a single region?
Options:
A. It guarantees that virtual machines will have zero downtime during all Azure maintenance operations.
B. It automatically scales the number of virtual machine instances in or out based on CPU usage.
C. It automatically replicates virtual machines to a second Azure region for disaster recovery.
D. It distributes virtual machines across multiple fault and update domains so that not all VMs are affected by the same hardware failure or maintenance event.
Best answer: D
Explanation: An Azure availability set is a logical grouping of virtual machines that helps keep your application available during hardware failures and planned maintenance events within a single Azure region.
When you place VMs in an availability set, Azure distributes them across multiple fault domains and update domains. Fault domains represent different physical hardware groups, such as separate racks with independent power and network. Update domains represent separate groups that Azure updates one at a time during planned maintenance.
Because your VMs are spread across these domains, a single hardware failure or maintenance update will not take all VMs down at once, which increases the overall availability of your application running in that region.
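The fault- and update-domain spreading described above can be sketched as a simple round-robin assignment. This is a conceptual model only, not Azure's actual placement logic; the domain counts (3 fault domains, 5 update domains) mirror common Azure defaults but are assumptions for this example.

```python
# Conceptual sketch: round-robin placement of VMs across fault domains
# and update domains, as an availability set does. Not an Azure API.

def place_vms(vm_names, fault_domains=3, update_domains=5):
    """Return {vm_name: (fault_domain, update_domain)} assignments."""
    placement = {}
    for i, name in enumerate(vm_names):
        placement[name] = (i % fault_domains, i % update_domains)
    return placement

vms = [f"web-{n}" for n in range(4)]
placement = place_vms(vms)
# The first three VMs land in three different fault domains, so a single
# rack failure or maintenance pass cannot take all of them down at once.
```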
Topic: Describe Azure Architecture and Services
Your company is designing its initial Azure governance structure. You need to decide how to organize management groups, subscriptions, resource groups, and resources.
Which TWO of the following design decisions are INCORRECT uses of Azure’s management hierarchy and should be AVOIDED? (Select TWO.)
Options:
A. Place resource groups directly under a management group, without assigning them to any subscription.
B. Create a management group that contains multiple subscriptions, one for each department.
C. Use multiple resource groups inside each subscription to group resources that share the same lifecycle and access requirements.
D. Put all production and test workloads for all departments into a single resource group in one subscription, even though they have different lifecycles and owners.
E. Use separate subscriptions under the same management group to isolate workloads for each business unit.
Correct answers: A and D
Explanation: Azure uses a clear management hierarchy: management groups → subscriptions → resource groups → resources.
Management groups help you organize and apply governance (like policies and RBAC) across multiple subscriptions. Each subscription is a billing and security boundary that contains one or more resource groups. Each resource group contains Azure resources and is meant to group items that share the same lifecycle, permissions, and management needs.
In this question, the incorrect choices either violate the formal hierarchy (trying to put resource groups directly under a management group) or ignore the intended use of resource groups for lifecycle and access boundaries (putting all workloads in a single resource group regardless of environment or ownership).
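The hierarchy rules can be expressed as a tiny parent-child table. This is a conceptual model of the management hierarchy, not the Azure SDK; a resource group with no parent subscription is invalid, which is exactly why option A above is wrong.

```python
# Conceptual model: management group -> subscription -> resource group
# -> resource. Each kind has exactly one valid parent kind.

VALID_PARENT = {
    "subscription": "management_group",
    "resource_group": "subscription",
    "resource": "resource_group",
}

def is_valid_placement(child_kind, parent_kind):
    """True if child_kind may sit directly under parent_kind."""
    return VALID_PARENT.get(child_kind) == parent_kind

is_valid_placement("resource_group", "subscription")      # valid
is_valid_placement("resource_group", "management_group")  # invalid (option A)
```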
Topic: Describe Cloud Concepts
In the context of moving IT workloads from an on-premises datacenter to Microsoft Azure, which statement BEST describes operational expenditure (OpEx)?
Options:
A. Money spent up front to buy servers, storage, and networking hardware that your organization owns
B. Money reserved for building and owning a private datacenter facility, including power, cooling, and physical security
C. A one-time fee to migrate all existing servers and applications into Azure
D. Ongoing costs where you pay for cloud services based on how much you use them, such as per hour or per gigabyte
Best answer: D
Explanation: In traditional on-premises environments, organizations often use capital expenditure (CapEx) to buy servers, storage, networking equipment, and datacenter facilities up front. These are large, one-time purchases of physical assets that the organization owns.
When moving to Microsoft Azure, most spending shifts to operational expenditure (OpEx). With OpEx, you pay for services as you use them—such as compute hours, storage capacity, or outbound data transfer—on a recurring basis. This pay-as-you-go model aligns costs directly with consumption and reduces the need for large up-front investments.
Therefore, the description that OpEx is ongoing, usage-based payment for cloud services is the most accurate in the context of Azure and cloud adoption.
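The pay-as-you-go idea can be shown with a one-line cost function. The hourly and per-gigabyte rates below are invented for illustration and are not real Azure prices.

```python
# Hedged illustration of consumption-based (OpEx) billing: cost scales
# with usage and stops when usage stops. Rates are made up.

def monthly_opex(compute_hours, storage_gb,
                 rate_per_hour=0.05, rate_per_gb=0.02):
    return compute_hours * rate_per_hour + storage_gb * rate_per_gb

full_month = monthly_opex(compute_hours=730, storage_gb=100)  # ~38.5
deallocated = monthly_opex(compute_hours=0, storage_gb=100)   # ~2.0
# With the VM deallocated, only the storage that still exists is billed;
# there is no large up-front purchase in either case.
```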
Topic: Describe Azure Architecture and Services
You host a public company website on a single Azure Virtual Machine running IIS. During traffic peaks, the site becomes slow or unavailable, and your team spends time patching the OS and web server manually. You want built-in autoscaling and to reduce server management effort while staying on Azure. What should you do?
Options:
A. Migrate the website to Azure App Service and configure autoscale on the App Service plan.
B. Add a second Virtual Machine and place both VMs behind an Azure Load Balancer.
C. Create a Virtual Machine Scale Set to host multiple VMs for the website.
D. Increase the size of the existing Virtual Machine to a larger SKU.
Best answer: A
Explanation: The symptom is that the website becomes slow or unavailable during traffic peaks and the team spends significant time managing and patching the underlying server. This indicates that the current IaaS approach using a single Azure Virtual Machine is creating both scalability and management challenges.
Azure App Service is a Platform as a Service (PaaS) offering for hosting web apps, APIs, and mobile back ends. With App Service, Azure manages the underlying OS, runtime, and web server, and you manage only the application and its configuration. App Service plans support built-in autoscaling based on metrics like CPU or HTTP queue length, which directly addresses the performance issues during traffic peaks.
Moving the site to Azure App Service therefore both reduces management overhead and provides easier autoscaling than adding or resizing VMs.
Topic: Describe Azure Architecture and Services
A company uses a single Microsoft Entra ID tenant (contoso.com) for all 500 employees and has one Azure subscription used by the IT department. The finance department wants its own Azure resources with separate billing, but all employees must continue to sign in with their existing corporate accounts. What should the administrator do to meet these requirements?
Options:
A. Create a new management group for the finance department and keep all resources in the current subscription.
B. Create a new Microsoft Entra ID tenant for the finance department and move all finance users into that tenant.
C. Create a new resource group in the existing subscription for the finance department and use tags to track costs.
D. Create a new Azure subscription in the existing Microsoft Entra ID tenant and grant finance users access to that subscription.
Best answer: D
Explanation: In Azure, a Microsoft Entra ID tenant represents the organization’s identity and is the home for users, groups, and app registrations. An Azure subscription is a separate construct that defines a billing and usage boundary for Azure resources.
Because the company wants separate billing for the finance department but wants to keep using the same corporate user accounts, the right approach is to add another Azure subscription under the existing Microsoft Entra ID tenant. This way, both subscriptions share the same tenant (and therefore the same users and SSO), while each subscription can be billed and managed independently.
Resource groups, tags, and management groups help with organization, governance, and reporting but do not replace subscriptions as the fundamental billing boundary. Multiple subscriptions can exist within a single tenant, but a user account belongs to a specific tenant.
Topic: Describe Cloud Concepts
Which statement BEST explains how Azure’s consumption-based pricing model supports experimentation with limited financial risk?
Options:
A. All experimental workloads in Azure run for free during development, and you start paying only when they are moved to production.
B. You must commit to at least a one-year subscription for each service, but the monthly price is discounted for experimental workloads.
C. You pay only for the resources you provision and use, and charges stop when you delete or deallocate them, so you can try services without large upfront costs.
D. You pay a fixed monthly fee for Azure, regardless of how many resources you deploy or how long you use them.
Best answer: C
Explanation: Azure uses a consumption-based (pay-as-you-go) pricing model. In this model, most services are billed based on the actual resources you allocate and how long you use them, such as compute hours, storage capacity, or data processed.
This supports experimentation because you can provision resources, run a test or proof of concept for a short time, then delete or deallocate the resources when you are done. Once the resources are removed (or fully deallocated, in the case of virtual machines), billing for those resources stops. There is typically no large upfront purchase or long-term commitment required just to try a service.
As a result, you can experiment with new ideas at relatively low cost and low financial risk: if an experiment fails or is no longer needed, you simply remove the resources and stop paying for them.
Topic: Describe Azure Management and Governance
Which statement BEST describes the relationship between Azure CLI and Azure PowerShell for managing Azure resources?
Options:
A. Azure CLI is a cross-platform command-line tool that uses simple, Bash-like commands, while Azure PowerShell is a collection of PowerShell cmdlets for managing Azure from PowerShell.
B. Azure CLI is used only inside the Azure portal, while Azure PowerShell can run only on Windows servers.
C. Azure CLI is a graphical management tool, while Azure PowerShell is a web-based scripting editor built into the Azure portal.
D. Azure CLI can manage only compute resources, while Azure PowerShell can manage all types of Azure resources.
Best answer: A
Explanation: Azure CLI is a cross-platform command-line tool designed to work naturally in shells like Bash or PowerShell using simple, text-based commands for Azure management. Azure PowerShell is a set of PowerShell modules that expose Azure operations as PowerShell cmdlets, fitting into PowerShell’s object-based scripting model. The idea that Azure PowerShell can run only on Windows servers is outdated; both tools are available on Windows, macOS, Linux, and in Azure Cloud Shell.
Topic: Describe Azure Management and Governance
Your company wants all new Azure resources to be created only in the “West Europe” and “North Europe” regions. A developer is still able to deploy a new virtual machine in “East US” without any warning or error. You need to prevent this from happening in the future. Which action should you take?
Options:
A. Use Azure Policy to create and assign a policy that restricts allowed locations for resources.
B. Tag all existing resources with the approved region names and require developers to use the same tags.
C. Enable Azure Advisor recommendations and review them regularly for region usage issues.
D. Create a custom Azure RBAC role that denies creating resources in the “East US” region and assign it to all developers.
Best answer: A
Explanation: This scenario is about enforcing organizational rules on where Azure resources can be created. The symptom is that a developer can still deploy a virtual machine in an unapproved region despite a company rule.
Azure Policy is the Azure governance service used to create and assign policies that enforce rules over resources, such as restricting allowed locations or requiring specific tags. When you assign an allowed locations policy at the subscription or management group scope, Azure evaluates new deployments against the policy and can deny any that do not meet the defined conditions.
Other tools like Azure RBAC, tags, and Azure Advisor support governance, but they do not enforce deployment conditions like region restrictions in the same way. RBAC controls permissions, tags provide metadata, and Advisor offers recommendations; only Azure Policy directly enforces rules such as where resources may be deployed.
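The effect of an allowed-locations policy can be sketched as a simple membership check. This is a simplified stand-in for the Azure Policy engine, not its real evaluation logic, and the region names are just the ones from the scenario.

```python
# Simplified sketch of an "allowed locations" policy with a deny effect:
# a deployment is rejected when its region is not in the allowed list.

ALLOWED_LOCATIONS = {"westeurope", "northeurope"}

def evaluate_deployment(location, allowed=ALLOWED_LOCATIONS):
    """Return 'allow' or 'deny', mimicking a deny-effect policy."""
    return "allow" if location in allowed else "deny"

evaluate_deployment("westeurope")  # allow
evaluate_deployment("eastus")      # deny: the VM deployment is blocked
```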
Topic: Describe Azure Management and Governance
A company plans to deploy a new web application to Azure and wants to control costs. The team is discussing how the choice of Azure region affects pricing.
Which of the following statements about region selection and cost is INCORRECT?
Options:
A. Some regions may not offer certain service SKUs, so you might need to pick a different region if you want a lower-priced SKU that is unavailable in your preferred region.
B. Placing resources in a far-away region might reduce some resource prices but increase network egress costs and latency for users, so you should consider total cost, not just the per-resource price.
C. Prices for a given Azure service are always identical in every Azure region, so cost is never a factor when choosing a region.
D. Pricing for a given Azure service can vary between regions, so you should compare regions in the Azure pricing calculator before choosing where to deploy.
Best answer: C
Explanation: Azure uses a consumption-based pricing model, and resource location (region) is one of the key factors that can affect the price of many services. Different regions can have different underlying infrastructure and operational costs, which are reflected in service pricing.
Because of this, choosing a region is not just a technical or compliance decision; it is also a cost decision. Cost-conscious organizations use tools like the Azure pricing calculator to compare estimated costs across regions and consider both resource prices and related expenses such as network egress.
The statement claiming that prices are always identical in every region is wrong because it directly contradicts this cost principle. The other statements correctly describe how region choice can influence cost and service availability.
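The "total cost, not just per-resource price" point can be made concrete with a small comparison. The prices and egress rates below are invented for illustration, not real Azure figures.

```python
# Hedged example: a region with a lower resource price can still cost
# more overall once network egress is included.

def total_monthly_cost(resource_price, egress_gb, egress_rate):
    return resource_price + egress_gb * egress_rate

near = total_monthly_cost(resource_price=110.0, egress_gb=50, egress_rate=0.05)
far = total_monthly_cost(resource_price=100.0, egress_gb=50, egress_rate=0.30)
# near ~112.5, far ~115.0: the "cheaper" region is more expensive overall.
```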
Topic: Describe Azure Management and Governance
Which TWO statements correctly describe Azure Monitor? (Select TWO.)
Options:
A. Azure Monitor replaces Microsoft Entra ID by storing all user identities for Azure subscriptions.
B. Azure Monitor is primarily used to create and enforce compliance policies on resources across subscriptions.
C. Azure Monitor is the central Azure service for collecting and analyzing telemetry such as metrics and logs from applications and resources.
D. Azure Monitor is a backup service used to create snapshots and restore points for virtual machines.
E. Azure Monitor uses features like Log Analytics workspaces and Application Insights to store, query, and visualize the data it collects.
Correct answers: C and E
Explanation: Azure Monitor is Azure’s central, umbrella monitoring service. Its main job is to collect telemetry data (metrics, logs, traces, and alerts) from Azure resources, applications, and even on-premises or other-cloud environments, then help you analyze and act on that data.
Within Azure Monitor, capabilities such as Log Analytics workspaces and Application Insights provide storage, querying, visualization, and application performance monitoring.
It does not enforce policies, manage identities, or perform backups; instead, it focuses on observability and alerting so you can understand and maintain the health and performance of your environment.
Topic: Describe Azure Architecture and Services
Your company has two Azure virtual networks (VNets) in the same region. Virtual machines in each VNet must communicate over private IP addresses with very low latency and without using VPN devices or public endpoints. Which Azure networking feature should you use?
Options:
A. Use an Azure Load Balancer with public IPs to expose the VMs and restrict traffic with network security groups.
B. Configure VNet peering between the two VNets.
C. Provision an ExpressRoute circuit and connect both VNets to it.
D. Create VPN gateways in each VNet and configure a site-to-site VPN between them.
Best answer: B
Explanation: The scenario describes two Azure virtual networks that need to communicate privately, with low latency, and without using VPN devices or exposing services to the internet. Azure VNet peering is designed exactly for this: it connects VNets using the Microsoft backbone network so that resources in each VNet can talk to each other using private IP addresses as if they were on the same network.
Because VNet peering is a native Azure feature, it avoids the overhead of VPN gateways and IPsec tunnels and typically provides lower latency and higher bandwidth for traffic within Azure. It is therefore the most appropriate and simplest option for private, low-latency connectivity between VNets in the same region or across regions.
Topic: Describe Azure Architecture and Services
Your team is building a static single-page marketing site that will call serverless APIs for business logic and data access. They want to use Azure Static Web Apps for the front end with integrated APIs. Which of the following designs is NOT an appropriate way to meet these requirements?
Options:
A. Expose the database directly to the Internet and have the Azure Static Web Apps front-end JavaScript call the database instead of using an API, to avoid creating Azure Functions.
B. Use Azure Static Web Apps only for the static HTML and JavaScript files, and run the API in a separate Azure Functions app that is linked through the Static Web Apps configuration.
C. Connect Azure Static Web Apps to a GitHub repository so that changes to the static front end and Azure Functions API are automatically built and deployed.
D. Host the static single-page app in Azure Static Web Apps and implement the API using Azure Functions integrated with the Static Web App.
Best answer: A
Explanation: Azure Static Web Apps is designed to host static front-end applications (such as single-page apps built with React, Angular, or Vue) and connect them to serverless back-end APIs, most commonly implemented with Azure Functions. The service provides integrated routing, authentication, and CI/CD workflows from source repositories like GitHub or Azure DevOps.
In a secure, recommended architecture, the browser-based front end calls an API endpoint. The API runs on the server side (for example, in Azure Functions), where it can safely access databases and other resources that are not directly exposed to the public Internet. This follows principles like defense in depth and least privilege.
The option that suggests exposing the database directly to the Internet and calling it from JavaScript in the browser is clearly unsafe and does not use the intended Static Web Apps + serverless API model. The other options all describe valid ways to combine Azure Static Web Apps with Azure Functions and CI/CD pipelines for static site hosting with integrated APIs.
Topic: Describe Azure Management and Governance
A company runs a critical application on an Azure virtual machine. Operators currently sign in to the Azure portal a few times a day to watch the VM’s CPU chart. They now want an automatic email whenever CPU usage stays above 85% for 5 minutes. Which option is the most appropriate way to meet this requirement?
Options:
A. Pin the VM’s CPU metric chart to an Azure dashboard and ask operators to monitor it more frequently.
B. Enable Azure Advisor recommendations for the virtual machine and review performance suggestions weekly.
C. Configure a budget alert in Azure Cost Management + Billing for the subscription that hosts the virtual machine.
D. Create an Azure Monitor metric alert rule on the VM’s CPU percentage and configure it to send an email when it exceeds 85% for 5 minutes.
Best answer: D
Explanation: Azure Monitor is the central service in Azure for collecting and analyzing telemetry from resources such as virtual machines, and for generating alerts when important conditions occur.
For performance-based conditions like “CPU above 85% for 5 minutes,” you use Azure Monitor metric alerts. A metric alert rule continuously evaluates the chosen metric (for example, CPU percentage) against a threshold and time aggregation. When the condition is met, the alert rule fires and uses an action (such as sending email via an action group) to notify the team.
In this scenario, the key requirement is proactive, automatic notification based on a performance threshold over time, without relying on manual monitoring. Only a metric alert rule on the VM’s CPU metric satisfies that requirement directly.
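The "above 85% for 5 minutes" condition can be sketched as a check over a sliding window of samples. This is a conceptual model, not Azure Monitor itself; it assumes one CPU reading per minute and a "fire when every sample in the window exceeds the threshold" rule.

```python
# Conceptual sketch of a metric alert rule: fire only when CPU stays
# above the threshold for the whole evaluation window.

def alert_fires(cpu_samples, threshold=85.0, window=5):
    """cpu_samples: one reading per minute, newest last."""
    if len(cpu_samples) < window:
        return False
    return all(s > threshold for s in cpu_samples[-window:])

alert_fires([90, 92, 88, 91, 95])  # True: sustained high CPU
alert_fires([90, 92, 60, 91, 95])  # False: a dip breaks the condition
```

In real Azure Monitor, the fired alert would then invoke an action group that sends the email, so no operator has to watch the chart.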
Topic: Describe Azure Architecture and Services
An online retail company stores product images in Azure Blob Storage in a single Azure region. They require high availability within that region and are willing to pay slightly more than the lowest-cost option but do not need cross-region disaster recovery. Which storage redundancy option is the most appropriate choice?
Options:
A. Geo-redundant storage (GRS)
B. Locally redundant storage (LRS)
C. Read-access geo-redundant storage (RA-GRS)
D. Zone-redundant storage (ZRS)
Best answer: D
Explanation: This question focuses on how different Azure Storage redundancy options balance resiliency and cost. The company wants higher availability within a single region but explicitly does not need cross-region disaster recovery.
Zone-redundant storage (ZRS) replicates data synchronously across multiple availability zones within the same region. This protects against a full datacenter or zone failure while keeping data in-region. It typically costs more than locally redundant storage (LRS) but less than geo-redundant options, aligning with the requirement to improve resiliency within the region while avoiding unnecessary cross-region cost.
Geo-redundant storage (GRS) and read-access geo-redundant storage (RA-GRS) extend protection to another region, which improves disaster recovery but increases cost and complexity beyond what is required. Since the scenario explicitly states that cross-region disaster recovery is not needed, choosing a geo-redundant option would not be the best fit.
Topic: Describe Azure Architecture and Services
A company plans to migrate hundreds of on-premises virtual machines to Azure. They decide to use Azure Migrate as the central hub to discover, assess, and migrate these servers.
Which TWO actions should you AVOID when planning this migration project? (Select TWO.)
Options:
A. Use Azure Migrate server migration tools to run test migrations of a few non-production workloads before migrating remaining servers.
B. Treat Azure Migrate only as a basic inventory list and plan migration timing and sizing in separate spreadsheets without reviewing its assessment results.
C. Use Azure Migrate assessment reports to right-size Azure virtual machine SKUs for each on-premises server before migration.
D. Use Azure Migrate to run discovery on on-premises servers and identify application dependencies before defining migration waves.
E. Move critical production virtual machines directly to Azure using ad-hoc scripts without first running any Azure Migrate discovery or assessment.
Correct answers: B and E
Explanation: Azure Migrate is designed to be a central hub for discovering, assessing, and migrating on-premises servers and applications to Azure. At the Azure Fundamentals level, its key value is that it automatically discovers your environment, analyzes readiness and sizing for Azure, and helps organize and execute migrations.
Actions that bypass or ignore this discovery and assessment capability are anti-patterns. Moving critical workloads with ad-hoc scripts and no prior assessment introduces unnecessary risk, because you do not fully understand dependencies, compatibility, or resource requirements. Likewise, treating Azure Migrate as just an inventory list and ignoring its reports discards the main reason to use the service in the first place.
In contrast, using Azure Migrate to discover dependencies, right-size virtual machines, and run test migrations aligns well with how Microsoft intends customers to use Azure Migrate during a cloud migration project.
Topic: Describe Azure Management and Governance
Your company wants to understand which Azure resources and business units generated the highest costs over the last three months in a specific subscription. You must use Azure-native tools and avoid exporting data to external systems. Which of the following actions will meet these requirements? (Select TWO.)
Options:
A. In Azure Cost Management’s Cost analysis, set the scope to the target subscription, select the last three months, and group costs by resource.
B. In Azure Cost Management’s Cost analysis, filter on the target subscription and last three months, then group costs by the department tag.
C. Download the latest Azure invoice PDF for the billing account and manually total costs per resource.
D. Use Azure Advisor’s cost recommendations to identify expensive resources based on the last three months.
E. Use the Azure Pricing calculator to estimate the monthly cost of each planned resource.
F. Create a cost budget on the subscription and review the budget alert emails to see which resources exceeded the threshold.
Correct answers: A and B
Explanation: Azure Cost Management’s Cost analysis feature is the main Azure-native tool for exploring historical and current costs. In Cost analysis you can set a scope (such as a subscription), choose a time range (such as the last three months), and then filter and group costs by different dimensions like resource, resource group, subscription, or tags (for example, a department tag). This lets you see which specific resources and which business units are driving the most cost without exporting data to external tools.
Other Azure tools related to cost, such as the Pricing calculator, budgets, invoices, and Azure Advisor, are useful for estimating, controlling, and optimizing costs but they do not replace Cost analysis for flexible, interactive breakdowns by resource and tag over a defined historical period.
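The "group by tag" idea behind Cost analysis can be sketched with plain dictionaries. The cost records below are invented sample data, not real billing output.

```python
# Sketch of grouping historical costs by a tag (here, "department"),
# the way Cost analysis groups by tag dimensions.

from collections import defaultdict

records = [
    {"resource": "vm-web-01", "tags": {"department": "finance"}, "cost": 120.0},
    {"resource": "sql-db-01", "tags": {"department": "finance"}, "cost": 80.0},
    {"resource": "vm-app-01", "tags": {"department": "it"}, "cost": 60.0},
]

def cost_by_tag(records, tag_key):
    """Sum costs per tag value; untagged resources get their own bucket."""
    totals = defaultdict(float)
    for r in records:
        totals[r["tags"].get(tag_key, "(untagged)")] += r["cost"]
    return dict(totals)

cost_by_tag(records, "department")
# {'finance': 200.0, 'it': 60.0}: finance drives the most cost.
```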
Topic: Describe Azure Management and Governance
Your organization already has several production workloads running in Azure. Management wants to be automatically notified if the actual monthly Azure spending for a subscription goes above a defined threshold. Which Azure capability should you use?
Options:
A. Azure Cost Management + Billing budgets
B. Azure Advisor
C. Azure Pricing calculator
D. Azure Service Health
Best answer: A
Explanation: This scenario is about monitoring actual Azure spending after resources are deployed and sending alerts when costs exceed a defined limit. The Azure feature designed for this is budgets in Azure Cost Management + Billing.
With a budget, you specify a spending amount and time period (for example, a monthly budget on a subscription). Azure tracks real usage and charges against that budget and can automatically trigger email alerts or action groups when spending reaches certain percentages of the budget (such as 80% or 100%). This helps organizations proactively control and monitor cloud costs.
The other tools listed are important but solve different problems: one estimates costs before deployment (Pricing calculator), one provides optimization recommendations (Advisor), and one focuses on service health and availability (Service Health), not cost alerts.
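The budget-alert behavior described above can be sketched as a threshold check. This is a minimal model, not the Cost Management API; the 80% and 100% levels are the example thresholds mentioned in the explanation.

```python
# Minimal sketch of budget alert thresholds: report which alert levels
# actual spending has crossed for a given budget.

def crossed_thresholds(actual_spend, budget, thresholds=(0.8, 1.0)):
    """Return the percentages (e.g. 80, 100) that spending has reached."""
    return [int(t * 100) for t in thresholds if actual_spend >= budget * t]

crossed_thresholds(actual_spend=850, budget=1000)   # [80]
crossed_thresholds(actual_spend=1000, budget=1000)  # [80, 100]
```

In Azure, each crossed threshold would trigger the configured email or action group, so management is notified automatically.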
Topic: Describe Azure Management and Governance
Which Azure tool should you use to estimate the expected monthly cost of a new solution before deploying any Azure resources? (Select ONE answer.)
Options:
A. Azure pricing calculator
B. Azure Advisor
C. Azure Cost Management + Billing
D. Azure Monitor
Best answer: A
Explanation: The scenario explicitly asks for estimating the monthly cost of a new solution before deploying any resources. At this stage, you have no usage data yet; you need a planning tool where you can select services, regions, and expected usage to see an estimated price.
The Azure pricing calculator is designed exactly for this purpose. You choose the services you plan to use (such as Virtual Machines, Storage, or App Service), select regions and configurations, and enter approximate usage (for example, hours per month or GB stored). The calculator then shows an estimated monthly cost so you can plan a budget or compare design options.
By contrast, tools like Azure Cost Management + Billing, Azure Advisor, and Azure Monitor mostly work with already deployed resources and actual usage data, so they are used for tracking, optimizing, or monitoring costs rather than for initial estimates.
Topic: Describe Azure Management and Governance
Which TWO of the following statements about Microsoft Purview are NOT accurate? (Select TWO.)
Options:
A. Microsoft Purview is designed to replace backup and disaster recovery solutions by automatically keeping secondary copies of all governed data.
B. Microsoft Purview helps you discover and classify sensitive data across on-premises, multicloud, and SaaS data sources.
C. Microsoft Purview is primarily a tool for configuring virtual networks, network security groups, and other Azure network security settings.
D. Microsoft Purview provides a central data catalog so users can search for, understand, and govern enterprise data assets.
E. Microsoft Purview can help organizations demonstrate that they meet data protection and access requirements by providing insights and reports about where sensitive data is stored and how it is used.
Correct answers: A and C
Explanation: Microsoft Purview is a unified data governance solution that helps organizations discover, classify, catalog, and manage data across on-premises, multicloud, and SaaS environments. In the context of Azure governance and compliance, it is used to understand where sensitive information resides, how it moves, and who has access to it so that organizations can meet data protection and regulatory requirements.
It is not a networking configuration tool and it does not replace backup or disaster recovery products. Instead, it provides visibility, classification, policies, and reporting around data, complementing other Azure services such as Azure Backup, storage redundancy options, and security services.
Topic: Describe Azure Architecture and Services
An organization is developing a new mobile app that will be used by millions of retail customers. Customers must be able to sign in with social accounts such as Google and Facebook, and the organization does not want to create user accounts for each customer. Which Microsoft Entra capability should they use?
Options:
A. Azure Virtual Desktop
B. Microsoft Entra External ID for customers (B2C)
C. Microsoft Entra Domain Services
D. Microsoft Entra B2B collaboration (guest users)
Best answer: B
Explanation: This question targets external identity scenarios in Microsoft Entra, specifically the difference between business-to-business (B2B) and business-to-customer (B2C) access.
For large, public-facing applications where you need to authenticate many customers, often with social identities, you use Microsoft Entra External ID for customers (B2C). It is optimized for customer sign-up and sign-in, supports identity providers such as Google and Facebook, and does not require you to create and manage a separate internal account for every customer.
By contrast, Microsoft Entra B2B collaboration is used when you want to give external business partners controlled access to your organization’s resources as guest users, not when you are building a retail customer app. Domain Services and Azure Virtual Desktop address entirely different problems (legacy domain services and virtual desktops).
Topic: Describe Azure Architecture and Services
Which TWO statements about Azure network security groups (NSGs) are correct? (Select TWO.)
Options:
A. An NSG is a global resource that automatically applies to all virtual networks within a subscription.
B. You can associate an NSG with a subnet or a network interface to control traffic for resources in a virtual network.
C. An NSG uses security rules to allow or deny inbound and outbound traffic based on source, destination, port, and protocol.
D. NSGs provide deep application-layer inspection to filter traffic based on HTTP URLs and payload content.
E. NSGs are primarily used to encrypt traffic between virtual networks across a VPN connection.
Correct answers: B and C
Explanation: Azure network security groups (NSGs) are a core networking security feature that let you control inbound and outbound traffic to Azure resources in a virtual network. They contain a list of security rules that either allow or deny traffic based on properties such as source and destination IP address, port, and protocol.
You apply NSGs to subnets and/or individual network interfaces. When applied, the NSG evaluates each packet against its rules to decide whether the traffic is permitted. NSGs do not perform deep packet inspection, content filtering, or encryption; they simply act as a stateful, rule-based filter at the network level.
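As a sketch of how such rules are defined in practice, the Azure CLI commands below create an NSG, add one inbound rule, and attach the NSG to a subnet. The resource group, resource names, and port are illustrative placeholders, and the commands require an authenticated Azure subscription.

```shell
# Create an NSG in an existing resource group (all names are placeholders)
az network nsg create --resource-group rg-demo --name nsg-web

# Allow inbound HTTPS (TCP 443) from any source; lower priority numbers are evaluated first
az network nsg rule create \
  --resource-group rg-demo \
  --nsg-name nsg-web \
  --name AllowHttpsInbound \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443

# Associate the NSG with a subnet so its rules apply to resources in that subnet
az network vnet subnet update \
  --resource-group rg-demo \
  --vnet-name vnet-demo \
  --name subnet-web \
  --network-security-group nsg-web
```

The same NSG could instead be associated with an individual network interface; applying it at the subnet level covers every resource placed in that subnet.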
Topic: Describe Cloud Concepts
A company hosts a web application in a single Azure region in Europe. Users in North America and Asia report slow page loads due to high network latency. The company wants to improve responsiveness for these users by using Azure’s global infrastructure, without redesigning the app. Which of the following actions/solutions will meet these requirements? (Select TWO.)
Options:
A. Increase the virtual machine size for the existing web server in the current European Azure region to improve performance.
B. Enable Azure Content Delivery Network (CDN) for the application’s static content so it is cached at edge locations around the world.
C. Move rarely accessed files to the archive storage tier in the existing region to lower storage costs.
D. Configure an Azure VPN Gateway to connect the users’ corporate networks to the existing Azure virtual network.
E. Deploy an additional instance of the web application to an Azure region that is geographically close to the North American users.
Correct answers: B and E
Explanation: This scenario is about using Azure’s global reach to reduce latency for users who are far from the current deployment region.
Azure provides many regions and a global edge network. Placing workloads or caches closer to users shortens the physical distance data must travel, which reduces network latency and speeds up page load times.
Per option:
- A: A larger VM in Europe may speed up processing, but it does not shorten the network distance that causes the latency.
- B: Correct. A CDN caches static content at edge locations close to users around the world.
- C: The archive storage tier lowers storage cost; it does nothing for page-load latency.
- D: A VPN gateway provides private connectivity, not lower latency to a distant region.
- E: Correct. Deploying an additional instance in a region near North American users shortens the network path for those users.
The key idea is that Azure’s global reach lets you deploy workloads and caches closer to where users are, which is the main way to reduce network latency at the fundamentals level.
Topic: Describe Azure Management and Governance
A company must deploy the same group of Azure resources to development, test, and production every week. They want deployments to be consistent, repeatable, and tracked in source control so changes can be reviewed before deployment. Which deployment approach is the most appropriate for this requirement?
Options:
A. Use Azure CLI commands typed interactively in Cloud Shell for each environment.
B. Define the resources in an ARM or Bicep template stored in source control and deploy the template to each environment.
C. Use the Azure mobile app to quickly create the resources whenever they are needed.
D. Create all resources manually each time by using the Azure portal.
Best answer: B
Explanation: The scenario emphasizes consistent, repeatable deployments across multiple environments (development, test, production) and the need to track and review changes in source control. These are classic requirements for Infrastructure as Code (IaC).
ARM templates and Bicep files let you define your Azure resources declaratively. You can store these files in a source code repository (such as Git), apply version control, use pull requests for review, and then deploy the exact same template to different environments. This minimizes configuration drift and provides an auditable history of changes.
Ad-hoc CLI commands can automate parts of the process, but neither they nor manual portal actions inherently provide the repeatability, versioning, and review that IaC offers. The Azure mobile app is even less suited, as it is aimed at quick checks and small changes, not structured deployments.
Therefore, defining resources in ARM or Bicep templates and deploying those templates is the best fit for the stated requirements.
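A minimal sketch of the deployment step: with a Bicep template checked into source control, the same file is deployed to each environment with the Azure CLI. The resource group names, template file, and parameter are illustrative placeholders.

```shell
# Deploy the identical template (tracked in Git) to each environment
az deployment group create \
  --resource-group rg-dev \
  --template-file main.bicep \
  --parameters environment=dev

az deployment group create \
  --resource-group rg-prod \
  --template-file main.bicep \
  --parameters environment=prod
```

Because the template is the single source of truth, any change goes through a pull request before it can reach production.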
Topic: Describe Cloud Concepts
A software company is moving its development and test environments to Azure. The goal is to release features faster and allow teams to experiment quickly without long wait times for infrastructure. Which action is NOT aligned with using cloud services to increase agility and speed to market?
Options:
A. Use Azure DevTest Labs or similar automation to create and automatically shut down temporary test environments.
B. Use autoscaling and pay-as-you-go pricing so applications can scale up for testing only when needed and then scale down.
C. Provide developers with preapproved templates so they can deploy test environments on demand within minutes.
D. Require all new Azure resources to be requested through a monthly capacity-planning meeting and provisioned manually by IT.
Best answer: D
Explanation: One of the key business benefits of cloud computing is increased agility: the ability to provision and deprovision resources quickly, experiment, and get features to market faster. Azure supports this with on-demand self-service, automation, and consumption-based pricing so teams do not wait weeks or months for hardware.
Re-creating slow, manual, on-premises approval and provisioning processes in the cloud removes much of this benefit. To gain agility, organizations should adopt practices such as self-service deployment within guardrails, automated environment creation, and autoscaling based on demand.
Topic: Describe Cloud Concepts
In Azure, Microsoft automatically encrypts data at rest for many services, but customers must still choose key management options, configure role assignments, and enable extra protections (such as customer-managed keys) for their data. This situation mainly illustrates which cloud security concept?
Options:
A. Zero Trust security
B. The shared responsibility model between cloud provider and customer
C. Defense in depth
D. Least privilege access
Best answer: B
Explanation: The scenario highlights that in Azure, Microsoft provides certain security controls by default, such as encryption at rest for many services. However, it also stresses that customers must still actively configure security settings such as role assignments, key management options, and additional encryption features.
This is a textbook example of the shared responsibility model in cloud computing. Under this model, Microsoft is responsible for securing the underlying cloud infrastructure, including physical datacenters, host operating systems, and many platform services. Customers, however, remain responsible for securing their data, identities, access controls, and many of the configuration choices made within Azure services.
Even when Azure enables a default security feature, customers typically decide how strictly to use it (for example, whether to use Microsoft-managed keys or customer-managed keys, how to structure RBAC roles, and which encryption options to enable). Understanding this boundary is critical so organizations do not assume that “Azure handles everything” for security.
Therefore, the scenario primarily illustrates the shared responsibility model, not broader security design patterns like defense in depth, Zero Trust, or least privilege, even though those patterns can involve similar tools (encryption, RBAC, policies).
Topic: Describe Cloud Concepts
A company plans to host a customer-facing web app in Microsoft Azure for users worldwide. The team is new to cloud computing. Which of the following assumptions or design decisions about Azure should you AVOID? (Select TWO.)
Options:
A. Assume Azure runs from a single data center, so there is no need to choose or plan for specific regions.
B. Assume Azure is a private cloud that is only accessible from the company’s on-premises network.
C. Plan to deploy the app in an Azure region that is geographically close to most users to reduce latency.
D. Treat Azure as a global public cloud platform available over the internet from many regions around the world.
E. Use multiple Azure regions if you later need to improve global performance or resilience.
Correct answers: A and B
Explanation: Microsoft Azure is a global public cloud platform. As a hyperscale provider, Microsoft operates many Azure regions around the world. Customers use these regions to place their resources closer to users, meet data residency needs, and improve availability.
Because Azure is a public cloud, it is generally accessed securely over the internet (or private connections like VPN/ExpressRoute), not limited to a single company’s data center. Treating Azure as either a single data center or a private, on-premises-only environment ignores its core characteristics.
In this scenario, you should avoid assumptions that contradict Azure’s global, multi-region public cloud nature. Reasonable choices will recognize that Azure has many regions and that you can choose and combine them to meet business requirements.
Topic: Describe Azure Architecture and Services
Your company recently had an incident where an attacker signed in to the Azure portal using a stolen employee password. Currently, users authenticate to Microsoft Entra ID with only a username and password, from any location. The security team wants to move closer to a Zero Trust model, focusing on “verify explicitly” and “assume breach,” without blocking remote work. What should you do first?
Options:
A. Configure Microsoft Entra Conditional Access to require multi-factor authentication (MFA) for Azure portal access and high-risk sign-ins.
B. Add the corporate office public IP range to a trusted network list and allow users from that range to sign in without additional verification.
C. Move all virtual machines into a new virtual network that is not connected to on-premises networks and keep the existing sign-in process unchanged.
D. Assign the Owner role on the subscription to all users so they can reset their own security settings if their account is compromised.
Best answer: A
Explanation: The scenario describes an attacker successfully accessing the Azure portal with a stolen password. In a Zero Trust model, you assume credentials and networks can be compromised (“assume breach”) and require strong, continuous verification for access (“verify explicitly”). Simply relying on a username and password is not enough.
Using Microsoft Entra Conditional Access to require multi-factor authentication (MFA) for Azure portal access adds an extra verification factor, such as a phone prompt or hardware token, that an attacker with only the password is unlikely to have. This directly mitigates the observed attack path while still allowing users to work remotely, which aligns well with Zero Trust principles.
Topic: Describe Azure Architecture and Services
A company is planning its Azure governance strategy and wants to apply policies starting at the broadest scope and flowing down to individual resources. They ask you to confirm the correct order of Azure scopes from the widest to the most specific. Which option shows the correct order?
Options:
A. Resource group → Subscription → Management group → Resource
B. Subscription → Management group → Resource group → Resource
C. Management group → Subscription → Resource group → Resource
D. Management group → Resource group → Subscription → Resource
Best answer: C
Explanation: Azure uses a clear hierarchy of management scopes to help you organize, govern, and secure cloud resources at different levels.
From the broadest to the most specific, the scopes are: management group → subscription → resource group → resource.
When you apply a policy or role assignment at a higher scope, it can inherit down to lower scopes in this order. The only option that correctly reflects this top-down hierarchy is the sequence from management group to subscription to resource group to resource.
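As a sketch of how scope affects a policy assignment, the Azure CLI commands below assign the same policy definition at two different scopes; the management group name, subscription ID, and policy identifier are placeholders.

```shell
# Management-group scope: inherited by every subscription, resource group,
# and resource beneath that management group
az policy assignment create \
  --name require-tag-mg \
  --policy "<policy-definition-name-or-id>" \
  --scope "/providers/Microsoft.Management/managementGroups/contoso-mg"

# Resource-group scope: applies only to resources in that one group
az policy assignment create \
  --name require-tag-rg \
  --policy "<policy-definition-name-or-id>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-demo"
```

The scope string itself encodes the hierarchy: the management-group path sits above subscriptions, which sit above resource groups.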
Topic: Describe Cloud Concepts
An analytics application uses Azure Virtual Machines heavily for 3–4 days at the end of each month and has very low usage for the rest of the month. The company wants costs to closely follow actual usage and does not want any long-term commitments. Which Azure pricing approach best aligns with this requirement?
Options:
A. Run the workload on Azure Dedicated Host with bring-your-own-licenses
B. Purchase 3-year reserved instances for the virtual machines
C. Move the application to on-premises servers purchased as capital expenditure
D. Use pay-as-you-go pricing for the virtual machines
Best answer: D
Explanation: This question focuses on the cloud economics principle of consumption-based pricing and how to choose a matching Azure pricing model for a spiky workload.
The application uses a lot of compute only a few days each month and very little for the rest of the time. The company wants costs to closely track this pattern and does not want long-term commitments. In Azure, pay-as-you-go pricing is designed for this scenario: you are billed per unit of resource consumed (for example, per second or per hour of VM runtime) with no upfront commitment. When the VMs run during the busy days, costs go up; when they are stopped or scaled down during quiet days, costs drop.
Reserved instances, dedicated hosts, and buying on-premises hardware all assume more stable, always-on usage and/or long-term commitments. They can reduce cost for predictable workloads but do not align with the stated need for flexibility and cost that scales directly with short bursts of usage.
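A minimal sketch of how that cost pattern is realized with the Azure CLI: deallocating the VMs outside the month-end window stops compute billing. The resource group and VM names are placeholders, and note that deallocated VMs still accrue storage charges for their disks.

```shell
# End of the busy month-end window: stop compute billing for the VM
az vm deallocate --resource-group rg-analytics --name vm-batch01

# Start of the next month-end run: resume the VM (and its per-second billing)
az vm start --resource-group rg-analytics --name vm-batch01
```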
Topic: Describe Cloud Concepts
Your company is moving a line-of-business app to Azure. Management wants predictable uptime backed by documented service commitments and protection against data loss if underlying hardware fails. Which of the following actions/solutions will meet these requirements? (Select TWO.)
Options:
A. Disable storage redundancy to minimize costs and keep only a single copy of each file in Azure Storage.
B. Rely only on manual nightly backups to an on-premises file server instead of using Azure’s built-in redundancy options.
C. Store critical application data in Azure Storage using redundancy options such as zone-redundant or geo-redundant storage so multiple copies exist on separate hardware.
D. Select Azure services that have published uptime SLAs that meet your target availability and design the app according to those SLA guidelines.
E. Run the entire application on a single large virtual machine in one Azure region to simplify management.
Correct answers: C and D
Explanation: Azure improves reliability and predictability in two key ways at the fundamentals level: service-level agreements (SLAs) and built-in redundancy.
An Azure SLA is a formal, published document that describes the target uptime (for example, a percentage of time the service will be available) and the conditions under which Microsoft provides credits if the target is not met. By choosing services whose SLAs match your business needs, you gain predictable, contractually defined availability instead of informal best-effort promises.
Redundancy means Azure keeps multiple copies of your data on separate pieces of hardware, and sometimes in separate availability zones or regions. Options like zone-redundant storage (ZRS) and geo-redundant storage (GRS) are designed so that if one storage node or even an entire datacenter fails, your data is still available from another copy.
How each option fits this scenario:
- A: Keeping only a single copy removes protection against hardware failure and risks permanent data loss.
- B: Manual nightly backups provide no SLA-backed uptime and can lose up to a day of data.
- C: Correct. ZRS or GRS keeps multiple copies of the data on separate hardware, zones, or regions.
- D: Correct. Published SLAs provide the documented, predictable uptime commitments management asked for.
- E: A single VM in one region is a single point of failure and carries a lower availability guarantee.
Topic: Describe Azure Management and Governance
In Azure, which statement correctly describes a main reason to apply tags such as Department=Finance or Environment=Test to many resources?
Options:
A. Tags automatically encrypt all tagged resources without any additional configuration.
B. Tags guarantee that resources are deployed only in specific Azure regions that match the tag value.
C. Tags increase the performance of tagged resources by allocating more CPU and memory to them.
D. Tags let you group and filter resources in cost and management reports, such as Azure Cost Management, to understand spending by department or environment.
Best answer: D
Explanation: Azure tags are user-defined key–value pairs that you can apply to many Azure resources, resource groups, and subscriptions. They do not change how the resource runs but instead provide metadata that helps you organize and manage resources.
A primary benefit of tags at the fundamentals level is cost visibility. When you consistently tag resources (for example, by department, project, or environment), you can use those tags in Azure Cost Management and other management reports to group, filter, and break down costs. This helps you understand who is spending what and on which workloads, without changing the underlying resources.
Tags therefore support governance and cost management by enabling better reporting and analysis, not by enforcing security, performance, or deployment behavior directly.
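As a sketch of tagging in practice, the Azure CLI commands below apply tags at creation time, tag an existing resource, and filter by tag; the names, IDs, and tag values are illustrative placeholders.

```shell
# Apply tags when creating a resource group
az group create --name rg-finance-test --location westeurope \
  --tags Department=Finance Environment=Test

# Tag an existing resource by ID (this replaces the resource's current tags)
az resource tag --tags Department=Finance Environment=Test \
  --ids "/subscriptions/<subscription-id>/resourceGroups/rg-finance-test/providers/Microsoft.Web/sites/app-demo"

# List resources carrying a given tag, e.g. for a quick inventory or cost check
az resource list --tag Department=Finance --output table
```

The same tag keys then appear as grouping and filtering dimensions in Azure Cost Management reports.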
Topic: Describe Azure Architecture and Services
A company stores 10 TB of files on an on-premises file server and wants a one-time transfer of these files into Azure Blob Storage over its existing internet connection. Administrators prefer an automated, scriptable command-line solution rather than a GUI. Which of the following actions/solutions will meet these requirements? (Select TWO.)
Options:
A. Use Azure Migrate: Discovery and assessment to move the on-premises file server and its data as a virtual machine into Azure.
B. Install AzCopy on the on-premises server and run commands to upload the files directly to an Azure Blob Storage container over HTTPS.
C. Use AzCopy with a shared access signature (SAS) to synchronize a local folder on the file server with a target Azure Blob Storage container.
D. Install Azure Storage Explorer and drag and drop the files from the on-premises server into the Azure Storage account.
E. Order an Azure Data Box to ship physical disks containing the files to Microsoft so they can be imported into Azure Storage.
Correct answers: B and C
Explanation: AzCopy is a Microsoft-supported command-line utility designed specifically for moving and copying data to, from, and between Azure Storage services such as Blob and File storage. It supports high-performance transfers over HTTPS and is well suited to automated or script-based migrations.
In this scenario, the company wants a one-time migration of 10 TB of files over its existing internet connection and prefers a scriptable command-line solution. This aligns directly with AzCopy’s purpose: a CLI tool that can be installed on an on-premises server, authenticated (for example, using a shared access signature), and then used in scripts or batch files to upload or synchronize large data sets into Azure Blob Storage.
Other tools like Azure Migrate, Azure Data Box, and Azure Storage Explorer address different needs (VM migration, offline bulk transfers, or GUI-based operations) and therefore do not fully meet the stated requirements of using the current network path and a command-line, automatable approach.
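A sketch of the scripted transfer described above, using AzCopy with a SAS token; the local path, storage account, container, and token are placeholders.

```shell
# One-time recursive upload of a local folder to a blob container over HTTPS
azcopy copy "/data/files" \
  "https://<account>.blob.core.windows.net/<container>?<sas-token>" \
  --recursive

# Or keep the container synchronized with the local folder
# (only new or changed files are transferred on each run)
azcopy sync "/data/files" \
  "https://<account>.blob.core.windows.net/<container>?<sas-token>"
```

Either command can be placed in a scheduled script, which is what makes AzCopy a fit for the automated, command-line requirement in the scenario.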
Topic: Describe Azure Architecture and Services
Which Azure Blob Storage access tier is most appropriate for data that is accessed very infrequently, can tolerate long retrieval times, and should minimize ongoing storage cost?
Options:
A. Archive tier
B. Zone-redundant storage (ZRS)
C. Cool tier
D. Hot tier
Best answer: A
Explanation: Azure Blob Storage offers three main access tiers—hot, cool, and archive—to balance storage cost against how often and how quickly you need to access data.
The archive tier is specifically designed for data that is accessed very infrequently, such as long-term backups or compliance archives. It has the lowest storage cost but requires rehydration before data can be read, which introduces higher latency and additional access costs. This matches the scenario where data can tolerate long retrieval times and the priority is minimizing ongoing storage cost.
The hot and cool tiers are for data that is accessed more often and therefore have higher storage costs compared to archive. Redundancy options like ZRS relate to how data is copied for durability and availability, not to how frequently it is accessed or its storage price level.
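As a sketch of tier management from the command line, the Azure CLI can move an individual blob between tiers; the account, container, and blob names are placeholders, and the command assumes an authenticated session with access to the storage account.

```shell
# Move a rarely read blob to the archive tier to minimize storage cost
az storage blob set-tier \
  --account-name <account> \
  --container-name backups \
  --name backup-2023.tar.gz \
  --tier Archive

# Reading it later requires rehydrating it first, e.g. back to the cool tier;
# rehydration can take hours, which is the latency trade-off described above
az storage blob set-tier \
  --account-name <account> \
  --container-name backups \
  --name backup-2023.tar.gz \
  --tier Cool
```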
Topic: Describe Cloud Concepts
You are teaching a new team about the shared responsibility model in Azure. They review the following summary table.
| Responsibility task | Microsoft (Azure) | Customer |
|---|---|---|
| Securing physical access to datacenters | Yes | No |
| Patching the Azure host infrastructure | Yes | No |
| Defining who can access your app data | No | Yes |
Based on the information in the table, which action is the customer responsible for when using Azure services?
Options:
A. Applying patches to the physical servers hosting Azure
B. Controlling which employees can read sensitive data in the application
C. Managing power and cooling for Azure server rooms
D. Maintaining physical security at Azure datacenters
Best answer: B
Explanation: The exhibit summarizes the shared responsibility model by listing who is responsible for different security tasks.
In the shared responsibility model, Microsoft secures and manages the underlying cloud infrastructure (physical datacenters, power, cooling, and the Azure host platform). Customers remain responsible for things “inside” their environment: data, access control, identities, and application-level configuration.
In the table, physical datacenter security and host infrastructure patching are marked as Microsoft responsibilities. The only task marked as a customer responsibility is “Defining who can access your app data.” That directly maps to controlling which employees or users can read sensitive information in the application, such as by setting permissions, roles, and access policies.
This reflects a core idea of the shared responsibility model in Azure: even in the cloud, customers must manage and protect their own data and access to that data, while Microsoft secures the cloud infrastructure itself.
Topic: Describe Azure Management and Governance
Which TWO of the following statements about Azure Blob Storage access tiers and cost are NOT correct? (Select TWO.)
Options:
A. Cool and archive access tiers charge about the same low price for storing data and for retrieving it, so they reduce costs even for very frequently accessed data.
B. The archive access tier offers the lowest per-GB storage price but has higher retrieval costs and significant read latency, so it is best for data that is rarely read, such as long-term backups.
C. Using the hot access tier for all data is always the cheapest choice over time, because it avoids additional retrieval and early-deletion charges.
D. The hot access tier is typically most cost-effective for data that is read and written frequently, even though its per-GB storage price is higher than cool or archive.
E. The cool access tier is intended for data that is accessed infrequently but must be available quickly, and it usually has lower storage cost but higher access charges than the hot tier.
Correct answers: A and C
Explanation: Azure Blob Storage access tiers (hot, cool, and archive) let you balance storage cost, access cost, and performance based on how often you read the data.
The hot tier has the highest per-GB storage price but the lowest access cost and best performance, which makes it suitable for frequently accessed data. The cool tier lowers storage cost but increases access and potential early-deletion charges, so it fits data that is accessed infrequently but must still be available online. The archive tier offers the lowest storage cost but the highest retrieval cost and significant latency, so it is intended for data you rarely need, such as long-term backups or compliance archives.
Choosing the right tier over time means matching the data’s access pattern to these trade-offs: frequent access usually belongs in hot, infrequent access often belongs in cool, and very rare access can belong in archive. Assuming a single tier is always cheapest, regardless of access pattern, leads to higher long-term costs.
Topic: Describe Azure Architecture and Services
A company wants to provide employees with secure, remote access to a full Windows desktop and certain business applications hosted in Azure. Users will connect from various personal devices over the internet, and the company does not want to manage physical PCs. Which Azure service is specifically designed to deliver virtualized Windows desktops and apps from Azure to end users?
Options:
A. Azure Virtual Machines
B. Azure Virtual Desktop
C. Microsoft 365 Apps
D. Azure App Service
Best answer: B
Explanation: The scenario is asking for an Azure service whose primary purpose is to deliver virtualized Windows desktops and applications from Azure to end users over the internet. The discriminating factor is: a managed desktop and app virtualization service, not just generic compute or web hosting.
Azure Virtual Desktop is exactly that: a cloud-based desktop and application virtualization service that runs on Azure. It lets organizations publish full Windows desktops and individual applications to users on various devices, while centralizing management and keeping data in Azure.
Other compute services like Azure Virtual Machines or Azure App Service can host workloads in Azure, but they are not purpose-built solutions for end-user desktop virtualization. Similarly, Microsoft 365 Apps are SaaS applications, not a desktop-streaming platform.
Topic: Describe Azure Architecture and Services
A company needs to move 300 TB of on-premises archival data into Azure Blob Storage within three weeks. Their Internet link is limited to 50 Mbps, cannot be upgraded or supplemented with new circuits in this timeframe, and is needed for daily business traffic. They prefer an offline, bulk transfer method. Which of the following actions/solutions will meet these requirements? (Select TWO.)
Options:
A. Order Azure Data Box devices, copy the data to them locally, and have Microsoft upload the data into Azure Blob Storage.
B. Use Azure Data Box Heavy to copy the data locally and ship the device back for offline ingestion into Azure.
C. Enable Azure Site Recovery to replicate the on-premises file server to an Azure virtual machine and then perform a failover.
D. Use AzCopy to continuously upload the data over the existing 50 Mbps Internet connection until all files are in Azure.
E. Provision an Azure ExpressRoute circuit dedicated to the migration, then cancel it after the data transfer is complete.
Correct answers: A and B
Explanation: Azure Data Box is a family of Microsoft-managed physical devices that you can order, connect in your datacenter, copy data to, and then ship back to Microsoft for ingestion into Azure. This is ideal when you must move many terabytes of data within a limited time but do not have enough network bandwidth or cannot add new network capacity.
In this scenario, transferring 300 TB over a 50 Mbps link would take far longer than three weeks and would also interfere with normal business use of the connection. Using offline, shipped devices such as Azure Data Box and Azure Data Box Heavy meets the requirements for bulk, time-bound migration without relying on the constrained network link.
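The claim that the link is far too slow is easy to verify with back-of-the-envelope shell arithmetic, assuming decimal units and the full 50 Mbps line rate with no protocol overhead:

```shell
# 300 TB over a 50 Mbps link, best case (full line rate, no overhead)
BITS=$(( 300 * 1000**4 * 8 ))       # total bits in 300 TB (decimal units)
SECS=$(( BITS / (50 * 1000**2) ))   # seconds at 50 Mbit/s
echo "$(( SECS / 86400 )) days"     # prints "555 days" - versus a 3-week deadline
```

Even this idealized estimate exceeds the deadline by more than an order of magnitude, which is why an offline device-based transfer is the right choice.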
Topic: Describe Azure Management and Governance
In Azure, which feature is primarily used to label resources with business information (such as cost center or environment) so that you can filter and group them in cost reports across multiple resource groups and subscriptions?
Options:
A. Management groups
B. Subscriptions
C. Resource groups
D. Tags
Best answer: D
Explanation: The feature in Azure designed specifically for attaching business metadata to resources is tags. Tags are simple key–value pairs, such as Department = Finance or Environment = Production, that you can apply to resources, resource groups, and subscriptions.
Because tags are queryable in Azure Cost Management + Billing and other reports, they allow you to group and filter costs across different resource groups and even across multiple subscriptions. This makes them ideal for tracking spending by cost center, project, owner, or environment.
In contrast, resource groups, subscriptions, and management groups are structural organization and governance constructs. They define boundaries for deployment, management, billing, and policy, but they are not the flexible, reusable labeling mechanism required for detailed business reporting across those boundaries.
Topic: Describe Cloud Concepts
Under the shared responsibility model in Microsoft Azure, which security task remains the customer’s responsibility, even when using fully managed platform services such as Azure App Service?
Select the correct answer.
Options:
A. Designing and operating the perimeter fencing and on-site surveillance systems at Azure facilities
B. Installing security patches on the physical hosts that run Azure virtual machines
C. Defining and managing who can access the application’s data and functionality
D. Controlling physical access to the Azure datacenters where the application runs
Best answer: C
Explanation: In the Azure shared responsibility model, Microsoft and the customer share security duties. Microsoft secures the cloud infrastructure, including datacenter facilities, physical servers, networking hardware, and the hypervisor platform.
Customers, however, are always responsible for security in the cloud. This includes their data, identities, access controls, and configuration of security settings on the services they use. Even when using fully managed platform services such as Azure App Service, customers must decide who can access the application and its data and must configure roles, permissions, and authentication policies accordingly.
Therefore, the task that clearly remains the customer’s responsibility is defining and managing who can access the application’s data and functionality.
Topic: Describe Azure Management and Governance
Your company recently received an unexpectedly high Azure invoice. You are asked to use Azure Cost Management budgets and alerts to proactively control future spending. Which of the following actions is NOT appropriate for this goal?
Options:
A. Create a separate budget for each production subscription and configure alerts when the forecasted cost is expected to exceed the budget.
B. Create a monthly budget equal to the planned spend and configure an 80% alert to email the operations and finance teams.
C. Turn off all Azure cost alerts and rely only on the monthly invoice to notice overspending.
D. Configure quarterly budgets with alerts at both 50% and 90% of the budget, sending notifications to a shared monitoring mailbox.
Best answer: C
Explanation: Azure Cost Management + Billing provides budgets and alerts so organizations can monitor and control Azure spending before costs get out of hand. A budget defines a spending limit over a time period (for example, per month), and alerts notify the right people when actual or forecasted costs approach or exceed that budget.
To proactively control costs, you should define realistic budgets and configure alerts at appropriate thresholds (such as 50%, 80%, or 90% of the budget) so teams can investigate and take action before the invoice arrives. Simply waiting for the monthly invoice is reactive and does not use these tools as intended.
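As a rough illustration of the proactive approach described above, a budget can also be created from the Azure CLI. This is a hedged sketch: the budget name, amount, and dates are hypothetical, and alert notification thresholds (such as 80%) are typically configured in the Azure portal or via newer CLI versions; check `az consumption budget create --help` for the parameters available in your CLI version.

```shell
# Sketch: create a monthly cost budget for the current subscription.
# All names, amounts, and dates here are illustrative placeholders.
az consumption budget create \
  --budget-name monthly-ops-budget \
  --amount 5000 \
  --category cost \
  --time-grain monthly \
  --start-date 2025-01-01 \
  --end-date 2025-12-31
```

After the budget exists, alert rules at thresholds like 50%, 80%, or 90% notify the chosen recipients before the invoice arrives, which is the proactive behavior the correct options in this question describe.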
Topic: Describe Azure Management and Governance
A developer wants to manage Azure resources from various shared computers without installing any tools locally. They decide to use Azure Cloud Shell from a web browser. Which of the following statements about Azure Cloud Shell is INCORRECT?
Options:
A. It provides both Azure CLI and Azure PowerShell environments.
B. It runs in a browser and is integrated with the Azure portal.
C. It can also be accessed directly from a browser at shell.azure.com without opening the Azure portal.
D. It requires the Azure CLI to be installed locally on each computer before it can be used.
Best answer: D
Explanation: Azure Cloud Shell is a browser-based command-line environment hosted in Azure. It is integrated with the Azure portal and also available directly at shell.azure.com. Its main purpose is to let you run Azure CLI or Azure PowerShell without installing or maintaining these tools on your local computer.
Because Cloud Shell runs in Azure and is accessed through a browser, it removes the requirement for local installation of Azure management tools. Any statement that claims you must install Azure CLI or Azure PowerShell locally in order to use Cloud Shell is therefore incorrect and contradicts its core design goal.
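To make the "no local install" point concrete, here is a small sketch of commands you could run directly in Azure Cloud Shell from a browser. Nothing here requires local tooling; the commands are standard Azure CLI queries, and the PowerShell equivalents (commented out) work if you select the PowerShell experience in Cloud Shell instead.

```shell
# Run inside Azure Cloud Shell (Bash experience) -- no local install needed.
# List the subscriptions and resource groups you can see:
az account list --output table
az group list --output table

# Equivalent queries in the Cloud Shell PowerShell experience:
# Get-AzSubscription
# Get-AzResourceGroup | Format-Table ResourceGroupName, Location
```

Because both the CLI and PowerShell environments are preinstalled in Cloud Shell, the same commands work from any shared computer with a browser, which is exactly why option D is the incorrect statement.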
Topic: Describe Cloud Concepts
A small startup wants to try several Azure services for a new product idea. They have a very limited budget and want to minimize financial risk by avoiding long-term commitments and stopping charges as soon as experiments end. Which approach BEST uses Azure’s consumption-based model to meet these needs?
Options:
A. Purchase 3-year reserved instances for all planned virtual machines before starting any experiments to lock in lower prices.
B. Use a pay-as-you-go Azure subscription, create only the needed test resources, and delete them immediately when each experiment is finished.
C. Buy new on-premises servers to run tests locally and later migrate only successful workloads to Azure.
D. Sign up for an Enterprise Agreement and prepay for a large amount of Azure capacity to ensure discounts during experimentation.
Best answer: B
Explanation: Azure’s consumption-based (pay-as-you-go) model means you are billed based on actual resource usage over time, rather than large upfront purchases or long-term commitments. When you create a resource such as a virtual machine or database, you start incurring charges; when you stop and delete that resource, billing for it stops.
This model is ideal for experimentation, proofs of concept, and short-lived test environments because you can try services with minimal initial cost and shut them down if an idea does not work. The financial risk is limited to the short period during which the resources are running, and there is no need to commit to multi-year contracts just to experiment.
In the scenario, the startup wants to avoid long-term commitments and stop paying as soon as experiments end, so the best choice is to use a pay-as-you-go subscription, create only the necessary resources, and delete them when tests are complete so charges stop immediately.
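A common pattern for the kind of short-lived experiment described above is to group all test resources into one resource group and delete the group when the experiment ends, which stops charges for everything inside it. This is an illustrative sketch; the resource group name and region are hypothetical.

```shell
# Create a throwaway resource group for an experiment (names are placeholders):
az group create --name rg-experiment-01 --location eastus

# ... create test VMs, databases, etc. inside rg-experiment-01 ...

# When the experiment ends, delete the group and everything in it,
# which ends the pay-as-you-go charges for those resources:
az group delete --name rg-experiment-01 --yes --no-wait
```

Grouping experiments by resource group keeps cleanup to a single command and makes it easy to verify that nothing is left running and billing.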
Topic: Describe Azure Management and Governance
An organization configures an Azure budget for each subscription and sets email alerts when actual spending reaches 80% and 100% of the budget. Which primary cloud principle or Azure concept does this practice best represent?
Options:
A. Security hardening using defense in depth and Zero Trust
B. Cost management and optimization using Azure budgets and alerts
C. High availability through redundant resources across regions
D. Scalability and elasticity to handle variable workloads
Best answer: B
Explanation: Azure budgets and alerts are part of Azure Cost Management + Billing. You use a budget to define how much you plan or are willing to spend over a period (for example, per month or per quarter). You then configure alerts at specific percentage thresholds (such as 80% and 100%) so you are notified before or when you exceed the planned amount.
This is a classic example of cost management and optimization. By monitoring usage and receiving proactive notifications, you can investigate unexpected spend, adjust resource usage, or change configurations before costs grow beyond what the business has approved. It does not change performance, availability, or security directly; it gives financial visibility and control over cloud consumption.
Other principles like high availability, scalability, or security are important, but they are implemented with different services and features (such as Availability Zones, autoscale, Microsoft Defender for Cloud, and Conditional Access), not with budgets and cost alerts.
Topic: Describe Azure Architecture and Services
Your company created a separate Windows virtual machine in Azure for each remote user so they can sign in with RDP and use corporate desktop applications. Users can connect, but IT reports high management overhead and rising costs for maintaining many individual VMs. You need a simpler Azure-based way to deliver Windows desktops and apps from the cloud. Which Azure service should you use?
Options:
A. Azure VPN Gateway
B. Azure Virtual Desktop
C. Azure App Service
D. Azure Virtual Machines with autoscale
Best answer: B
Explanation: In this scenario, the symptom is high management overhead and cost caused by running a separate Azure virtual machine for each remote user just to provide access to Windows desktops and corporate applications. The likely cause is that the organization is using raw infrastructure (individual VMs) instead of a managed desktop virtualization service.
Azure Virtual Desktop is the correct choice because it is a platform service that lets you deliver virtualized Windows desktops and remote apps from Azure. It simplifies deployment and management compared to having one VM per user, while still allowing secure remote access to corporate desktops and applications over the internet.
Other options either focus on basic compute, networking, or web hosting and do not solve the core requirement of centrally delivering and managing Windows desktops and apps from Azure.
Topic: Describe Azure Management and Governance
A company runs a payment-processing application on Azure. They want to use logging, metrics, and alerts to detect issues early and improve reliability and security. Which of the following actions is INCORRECT and should be AVOIDED?
Options:
A. Turn off most alerts and logging on production resources to reduce noise and save costs, relying on users to report problems when they notice them.
B. Configure Azure Monitor alerts on key metrics such as CPU usage, request failure rate, and queue length, and send notifications to the on-call team.
C. Enable diagnostic logs on critical resources and send them to a Log Analytics workspace for centralized querying and investigation.
D. Create security alerts that trigger when Microsoft Defender for Cloud detects suspicious activity on critical resources.
Best answer: A
Explanation: Logging, metrics, and alerts are core parts of observability in Azure. They allow you to detect reliability, performance, and security issues before users are heavily impacted.
A good monitoring strategy collects telemetry (logs and metrics) from key resources, sends that data to a central place (such as Azure Monitor and Log Analytics), and defines alerts that automatically notify operators when conditions indicating a problem are met. This enables proactive detection and faster response.
Turning off monitoring or relying solely on user reports removes this proactive capability. Problems may go unnoticed for long periods, incidents may be harder to investigate due to missing data, and security threats can spread without detection. That is why the choice that disables logging and alerts and depends on users is the incorrect approach that should be avoided.
Topic: Describe Azure Management and Governance
A company stores customer data in several Azure SQL databases and on-premises databases. They must identify and classify sensitive data, track where it is stored, and demonstrate compliance with data protection requirements. Which Azure service should they use?
Options:
A. Microsoft Defender for Cloud
B. Azure Monitor
C. Microsoft Purview
D. Azure Policy
Best answer: C
Explanation: The scenario is about governance and compliance for data: the company wants to identify and classify sensitive data, track where it lives across Azure and on-premises, and show that they meet data protection requirements. This is a classic data governance use case.
Microsoft Purview is Azure’s unified data governance solution. It can connect to many data sources (including Azure SQL and on-premises databases), automatically scan and classify data, build a searchable data catalog, and produce lineage and classification reports. These capabilities directly support tracking sensitive information and demonstrating compliance.
The other services listed are important governance and security tools but focus on resource configuration, security posture, or monitoring, not on cataloging and classifying data itself. That’s why Microsoft Purview is the most appropriate choice here.
Topic: Describe Azure Architecture and Services
In Azure Virtual Machines, which choice best describes what directly determines both the VM’s operating system and its hardware characteristics such as vCPU count and memory?
Options:
A. The combination of the selected VM image and the chosen VM size
B. The resource group name and the subscription that contain the VM
C. The Azure region and availability zone where the VM is deployed
D. The virtual network and subnet to which the VM is connected
Best answer: A
Explanation: In Azure, an individual Virtual Machine is defined by both its software stack and its hardware profile. The VM image specifies the operating system and any preinstalled software that will run on the VM. The VM size specifies the amount of compute resources (such as vCPUs, memory, and sometimes storage and network performance characteristics) that the VM will have. Together, these two selections determine the VM’s OS and hardware characteristics.
Region, resource group, subscription, and network configuration are all important, but they serve different purposes: location and availability, organization and billing, and connectivity, rather than defining what OS runs on the VM or how powerful it is.
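The image-plus-size pairing is visible directly in how a VM is created. In this hedged Azure CLI sketch, `--image` selects the operating system and preinstalled software, while `--size` selects the hardware profile (vCPU count and memory); the resource group, VM name, and username are hypothetical placeholders.

```shell
# Sketch: --image picks the OS/software stack, --size picks the hardware
# profile (vCPUs, memory). Names below are illustrative placeholders.
az vm create \
  --resource-group rg-demo \
  --name vm-demo \
  --image Ubuntu2204 \
  --size Standard_D2s_v3 \
  --admin-username azureuser \
  --generate-ssh-keys
```

Changing only `--size` gives the same OS on more or less powerful hardware; changing only `--image` gives a different OS on the same hardware, which is the distinction the question tests.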
Topic: Describe Cloud Concepts
A manufacturing company runs a latency-sensitive production control system and a database in its on-premises datacenter. Due to regulatory and performance requirements, these systems must remain on-premises, but the company wants to deploy new analytics and reporting applications in Azure. They need both environments to integrate and share data. Which cloud deployment model is most appropriate for this scenario?
Options:
A. Multi-cloud deployment model using multiple public cloud providers but no on-premises systems
B. Public cloud deployment model that moves all workloads from the datacenter into Azure
C. Private cloud deployment model running only in the company’s on-premises datacenter
D. Hybrid cloud deployment model that connects on-premises systems with Azure resources
Best answer: D
Explanation: This scenario describes a company that must keep certain critical systems on-premises because of regulatory and latency requirements, but also wants to use Azure for new analytics workloads and integrate the two environments.
A hybrid cloud deployment model is specifically designed for this type of situation. In a hybrid cloud, an organization runs some workloads in its on-premises datacenter and others in a public cloud like Azure, with connectivity and integration between them. This supports phased migrations, data residency requirements, and low-latency access to on-premises systems while still gaining cloud benefits such as scalability and agility for new applications.
Because the key deciding factor here is needing to keep some workloads on-premises while moving others to Azure and integrating them, the hybrid cloud model is the best fit.
Topic: Describe Cloud Concepts
Your team is deploying a new customer portal to Azure App Service. The security lead wants to focus only on tasks that remain your responsibility under the shared responsibility model, rather than tasks Microsoft handles for you. Which of the following actions will meet these requirements? (Select TWO.)
Options:
A. Configure Microsoft Entra ID Conditional Access policies for users of the portal.
B. Classify and encrypt sensitive customer data stored by the portal.
C. Physically secure the buildings and server rooms of the Azure regional data centers used by the portal.
D. Maintain power, cooling, and hardware replacement for the servers that run the portal.
E. Apply operating system security patches to the Azure App Service worker machines.
F. Upgrade and harden the virtualization layer (hypervisor) that hosts the App Service platform.
Correct answers: A and B
Explanation: In the Azure shared responsibility model, Microsoft is responsible for security of the cloud (the physical data centers, networking, hosts, and managed platform components), while customers are responsible for security in the cloud (accounts, identities, data, and configurations of the services they use).
With a PaaS service like Azure App Service, Microsoft operates and secures the underlying hardware, hypervisor, networking, and operating system that run the service. Your organization must still control who can access the app, how they authenticate, and how application data is protected.
Therefore, actions about identity and access policies and data protection fall to you, while tasks related to physical facilities, power, hardware, OS, and hypervisor remain Microsoft’s responsibility.
Topic: Describe Azure Architecture and Services
A company has two Azure virtual machines in the same region: one hosts a public web app and the other hosts a database. Both currently have public IP addresses. The company wants the database to be isolated from the internet but still reachable from the web server over a private network. You should keep the design simple and support secure communication between the VMs. Which action is the most appropriate?
Options:
A. Create a separate subscription for the database VM and continue to use public IP addresses for both VMs to keep them logically separated.
B. Move the database VM to a different Azure region and keep its public IP so only the web VM can connect over the internet.
C. Place both VMs in the same Azure virtual network, remove the public IP from the database VM, and allow the web VM to access the database using its private IP address.
D. Keep both VMs with public IP addresses and use a network security group to block most ports on the database VM from the internet.
Best answer: C
Explanation: An Azure virtual network (VNet) is a logically isolated network in Azure that lets you securely connect Azure resources to each other, to the internet, and to on-premises networks. Within a VNet, resources can use private IP addresses to communicate without exposing themselves directly to the public internet.
In this scenario, the goal is to remove direct internet exposure for the database while allowing the web server and database to communicate securely over a private network. Using a single Azure VNet for both VMs meets both requirements: it isolates the database from the internet by removing its public IP, and it lets the web VM reach the database over private IP within the VNet.
Other options either keep the database directly reachable from the internet or rely on public IP communication instead of the VNet’s private connectivity. That breaks the requirement to isolate the database while still allowing secure, private communication within Azure.
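The correct design can be sketched with the Azure CLI: both VMs share one virtual network, and the database VM is created with an empty `--public-ip-address` value so it gets only a private address. This is an illustrative sketch under assumed names; the resource group, VNet, subnet, address ranges, and VM names are all hypothetical.

```shell
# Sketch: one VNet for both VMs; the database VM gets no public IP,
# so it is reachable only over its private address inside the VNet.
# All names and address ranges are illustrative placeholders.
az network vnet create \
  --resource-group rg-app \
  --name vnet-app \
  --address-prefix 10.0.0.0/16 \
  --subnet-name subnet-app \
  --subnet-prefix 10.0.1.0/24

az vm create \
  --resource-group rg-app \
  --name vm-db \
  --image Ubuntu2204 \
  --vnet-name vnet-app \
  --subnet subnet-app \
  --public-ip-address ""
```

The web VM, created in the same VNet and subnet, then reaches the database over its private 10.0.1.x address, while the database has no internet-facing endpoint at all.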
Use the AZ-900 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Try AZ-900 on Web · View AZ-900 Practice Test
Read the AZ-900 Cheat Sheet on Tech Exam Lexicon for concept review before another timed run.