Browse Certification Practice Tests by Exam Family

AZ-900: Describe Azure Architecture and Services

Try 10 focused AZ-900 questions on Describe Azure Architecture and Services, with explanations, then continue with IT Mastery.

On this page

Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Try AZ-900 on Web · View full AZ-900 practice page

Topic snapshot

Exam route: AZ-900
Topic area: Describe Azure Architecture and Services
Blueprint weight: 39%
Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Describe Azure Architecture and Services for AZ-900. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.

First attempt
  What to do: Answer without checking the explanation first.
  What to record: The fact, rule, calculation, or judgment point that controlled your answer.

Review
  What to do: Read the explanation even when you were correct.
  What to record: Why the best answer is stronger than the closest distractor.

Repair
  What to do: Repeat only missed or uncertain items after a short break.
  What to record: The pattern behind misses, not the answer letter.

Transfer
  What to do: Return to mixed practice once the topic feels stable.
  What to record: Whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 39% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Describe Azure Architecture and Services

A company plans to deploy an Azure virtual machine for a line-of-business application. They want to understand how VM images and sizes affect the VM configuration. Which statement about Azure VM images and sizes is INCORRECT?

Options:

  • A. The VM image controls the CPU and memory resources, while the VM size only affects the name and billing label of the VM.

  • B. The VM size specifies the number of vCPUs, amount of memory, and available storage performance for the VM.

  • C. The VM image determines the operating system and any pre-installed software that are available when the VM is created.

  • D. You can change the VM size later if the workload requirements change, as long as the new size is supported by the underlying hardware in the region.

Best answer: A

Explanation: In Azure, a virtual machine is defined by both an image and a size, and each has a different purpose.

The VM image is like a template. It specifies the operating system (such as Windows Server or Linux) and may include extra software or configurations. When the VM is created, it is based on this image so you do not have to manually install the OS from scratch.

The VM size defines the underlying virtual hardware: how many vCPUs the VM gets, how much RAM is available, and what level of storage and networking performance it can use. Because the size determines the resource capacity, it also strongly affects cost.

The incorrect statement claims that the image controls CPU and memory while the size is just a name and billing label. This reverses the roles of images and sizes: the size, not the image, determines hardware capacity and therefore cost. Misunderstanding this mapping can lead to significant under- or over-sizing of VMs and poor cost optimization.

The other statements correctly describe that images define software/OS, sizes define hardware, and that you can often resize a VM later (within Azure’s compatibility limits) if your workload needs change.
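The split described above can be captured in a toy model. This is illustrative only (the class and field names are hypothetical, not an Azure SDK API): the image carries the OS, the size carries the hardware profile, and resizing changes only the size.

```python
from dataclasses import dataclass

# Toy model of the image/size split -- illustrative only, not an Azure SDK API.
@dataclass(frozen=True)
class VmImage:
    os: str          # operating system and any pre-installed software

@dataclass(frozen=True)
class VmSize:
    name: str
    vcpus: int       # compute capacity lives on the size,
    memory_gib: int  # not on the image

@dataclass
class VirtualMachine:
    image: VmImage
    size: VmSize

    def resize(self, new_size: VmSize) -> None:
        # Resizing swaps the hardware profile; the image (OS) is untouched.
        self.size = new_size

vm = VirtualMachine(VmImage("Windows Server 2022"), VmSize("D2s_v5", 2, 8))
vm.resize(VmSize("D4s_v5", 4, 16))
print(vm.size.vcpus, vm.size.memory_gib, vm.image.os)  # -> 4 16 Windows Server 2022
```

Note how the resize mirrors option D: the hardware profile changes while the OS from the image stays the same.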


Question 2

Topic: Describe Azure Architecture and Services

A company runs several custom line-of-business applications on Windows and Linux servers in its own datacenter. The apps require full control of the operating system to install custom agents and drivers, but the company wants to stop buying and maintaining physical hardware. Which Azure option is the most appropriate to improve this situation?

Options:

  • A. Deploy the applications to Azure App Service as web apps.

  • B. Migrate the servers to Azure Virtual Machines and run the applications on those VMs.

  • C. Rewrite the applications to run as serverless functions in Azure Functions.

  • D. Replace the applications with a Software as a Service (SaaS) productivity suite such as Microsoft 365.

Best answer: B

Explanation: This scenario is about choosing the right Azure compute option to migrate existing Windows and Linux workloads that require full operating system control, while eliminating the need to manage physical hardware.

Azure Virtual Machines are an infrastructure as a service (IaaS) offering. With IaaS, Azure provides and manages the underlying physical infrastructure (servers, storage, networking, and virtualization), while you control the virtual machines: the OS, middleware, and applications. This is ideal when you need to keep your existing application stack and require OS-level customization like installing agents and drivers.

Platform as a service (PaaS) and serverless offerings, such as Azure App Service and Azure Functions, simplify management further but deliberately hide the underlying OS and limit the ability to install arbitrary software at the OS level. SaaS offerings like Microsoft 365 provide complete, ready-made applications and are not intended to host custom line-of-business workloads.

Therefore, moving the workloads to Azure Virtual Machines improves the situation by removing on-premises hardware responsibility while preserving the needed control over the servers.


Question 3

Topic: Describe Azure Architecture and Services

A company uses Microsoft Entra ID for Microsoft 365 and several SaaS apps. Today, users authenticate only with a username and password. New security policy requires sign-ins to be stronger than a single password and to reduce the impact of stolen passwords, without adding new on-premises infrastructure. Which of the following actions/solutions will meet these requirements? (Select TWO.)

Options:

  • A. Deploy on-premises federation servers so users can sign in with their existing corporate passwords to cloud apps.

  • B. Enable Microsoft Entra multi-factor authentication (MFA) so users must provide an additional verification method during sign-in.

  • C. Configure passwordless authentication methods in Microsoft Entra ID, such as the Microsoft Authenticator app or Windows Hello for Business.

  • D. Require users to connect to the corporate network by VPN before accessing Microsoft 365 and other SaaS apps.

  • E. Increase password complexity requirements and require users to change their password every 30 days.

Correct answers: B and C

Explanation: The scenario focuses on strengthening authentication beyond a single password and reducing the risk from stolen passwords, without adding new on-premises infrastructure. In Azure, this is addressed by modern sign-in options like multi-factor authentication (MFA) and passwordless authentication.

Multi-factor authentication combines two or more different factors (for example, password plus a mobile app notification or phone call). Even if an attacker knows a user’s password, they still cannot sign in without the second factor.

Passwordless authentication methods, such as the Microsoft Authenticator app or Windows Hello for Business, remove passwords from the sign-in flow altogether. Users authenticate using something they have (a registered device or app) and often something they are (biometrics), which significantly reduces password-related attacks.

Simply making passwords more complex, forcing VPN use, or adding federation servers does not change the fact that access is still protected mainly by a password, and federation also violates the requirement to avoid extra on-premises infrastructure.


Question 4

Topic: Describe Azure Architecture and Services

Your security team notices that several users successfully signed in to Microsoft 365 from countries where your company has no offices, using only their passwords. Most users currently use only passwords, and you want to require multi-factor authentication (MFA) only when users sign in from unfamiliar locations, not for every sign-in. Which action should you take?

Options:

  • A. Enable multi-factor authentication for all users in Microsoft Entra ID so that MFA is always required.

  • B. Apply a read-only resource lock to the Microsoft Entra ID tenant to prevent unauthorized sign-ins from unfamiliar locations.

  • C. Assign an Azure Policy at the subscription level to enforce MFA for all user sign-ins.

  • D. Create a Microsoft Entra Conditional Access policy that requires MFA for sign-ins from non-trusted locations.

Best answer: D

Explanation: This scenario describes a security gap: users are signing in from countries where the company has no presence, using only a password. The requirement is to add multi-factor authentication (MFA) only when users sign in from unfamiliar locations, without forcing MFA on every sign-in.

Microsoft Entra Conditional Access is the feature that allows you to define policies that evaluate conditions such as user, app, sign-in risk, device state, and sign-in location. Based on these conditions, the policy can require controls such as MFA or block access.

By creating a Conditional Access policy that targets sign-ins from locations that are not marked as trusted (for example, by using named locations), you can require MFA only when the sign-in originates from an unfamiliar location. Sign-ins from trusted corporate locations can still proceed with fewer prompts, satisfying both the security and the usability requirements.

Other tools mentioned, such as Azure Policy and resource locks, govern Azure resources, not user authentication logic, so they cannot solve this sign-in security problem.
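The decision logic of such a policy can be sketched as follows. Real Conditional Access policies are configured in the Microsoft Entra admin center, not written as code; the location names here are hypothetical "named locations" used purely for illustration:

```python
# Toy sketch of a location-based Conditional Access decision.
# Location names are hypothetical "named locations" -- illustrative only.
TRUSTED_LOCATIONS = {"HQ-Office", "Branch-Amsterdam"}

def required_controls(sign_in_location: str) -> list[str]:
    if sign_in_location in TRUSTED_LOCATIONS:
        return ["password"]         # familiar location: no extra prompt
    return ["password", "mfa"]      # unfamiliar location: step up to MFA

print(required_controls("HQ-Office"))    # ['password']
print(required_controls("Unknown-Geo"))  # ['password', 'mfa']
```

This mirrors the requirement in the question: MFA is demanded conditionally, based on sign-in location, rather than on every sign-in.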


Question 5

Topic: Describe Azure Architecture and Services

Your company has 15 Azure subscriptions used by different departments. Leadership wants to enforce the same set of security policies and provide the audit team with consistent read-only access across all subscriptions, with minimal ongoing administration.

Which TWO actions should you AVOID?

Options:

  • A. Create a top-level management group and place all current and future subscriptions under it to centralize governance.

  • B. Assign the Reader role at the management group level to the audit team so they can view resources across all subscriptions without making changes.

  • C. Assign a set of Azure Policies at the management group level to enforce allowed regions for resources across all subscriptions.

  • D. Grant each department a custom Owner role at the tenant root so they can fully manage any subscription in the organization.

  • E. Assign the required Azure Policies separately to each individual subscription instead of using management groups.

Correct answers: D and E

Explanation: Azure management groups sit above subscriptions and let you organize multiple subscriptions into a hierarchy. Policies and Azure role-based access control (RBAC) assignments made at a management group scope are inherited by all child subscriptions.

For consistent governance across many subscriptions, you typically create one or more management groups and assign Azure Policy and RBAC roles at those group levels. This reduces duplication, simplifies administration, and helps ensure that all subscriptions follow the same standards.

The actions to avoid are the ones that ignore management groups for policy assignment and that grant overly broad permissions at the highest scope, both of which conflict with core Azure governance and least-privilege principles.
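The inheritance behavior that makes management groups efficient can be modeled in miniature. The group, subscription, and role names below are hypothetical; this is a sketch of the inheritance principle, not how Azure stores assignments:

```python
# Toy model of management-group inheritance (all names are hypothetical).
HIERARCHY = {"mg-corp": ["sub-hr", "sub-finance", "sub-it"]}  # group -> subscriptions
ASSIGNMENTS = {
    "mg-corp": {
        "policies": ["allowed-regions"],
        "rbac": [("audit-team", "Reader")],
    }
}

def effective(subscription: str) -> dict:
    # Assignments at a parent management group are inherited by every child.
    merged = {"policies": [], "rbac": []}
    for group, subs in HIERARCHY.items():
        if subscription in subs:
            merged["policies"] += ASSIGNMENTS[group]["policies"]
            merged["rbac"] += ASSIGNMENTS[group]["rbac"]
    return merged

print(effective("sub-finance"))
# One assignment at mg-corp covers all child subscriptions -- no per-subscription duplication.
```

Assigning the same policy to each of the 15 subscriptions individually (the action to avoid) would mean 15 assignments to keep in sync instead of one.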


Question 6

Topic: Describe Azure Architecture and Services

An application running in an Azure virtual network stores sensitive documents in an Azure Storage account using Blob storage. The company requires that all data in the storage account be encrypted at rest and that access to the blobs occurs only over the virtual network using private IP addresses, with no public internet access. Which of the following actions/solutions will meet these requirements? (Select TWO.)

Options:

  • A. Create a private endpoint for the storage account in the application’s virtual network and disable public network access for the account.

  • B. Deploy an Azure VPN Gateway to connect the virtual network to the on-premises network and route storage traffic through the VPN.

  • C. Rely on Azure Storage’s built-in encryption at rest with Microsoft-managed keys for the storage account.

  • D. Enable the “HTTPS only” setting on the storage account so that all traffic uses TLS over the public endpoint.

  • E. Use shared access signatures (SAS) in the application to generate time-limited blob access URLs over the internet.

Correct answers: A and C

Explanation: Azure Storage provides encryption at rest by default using Storage Service Encryption, so you do not need to build your own disk or application-level encryption just to meet basic compliance requirements. For private access, you can use a private endpoint so that the storage account is reachable only via private IP addresses inside a virtual network, and you can disable its public endpoint.

Option review:

  • ✔ Rely on Azure Storage’s built-in encryption at rest with Microsoft-managed keys for the storage account: This satisfies the requirement that all data be encrypted at rest using the platform’s default encryption.
  • ✔ Create a private endpoint for the storage account in the application’s virtual network and disable public network access for the account: This ensures traffic flows over the virtual network using private IPs and blocks public internet access.
  • ✖ Enable the “HTTPS only” setting on the storage account so that all traffic uses TLS over the public endpoint: This secures data in transit but still exposes the account over a public endpoint and does not address at-rest encryption directly.
  • ✖ Use shared access signatures (SAS) in the application to generate time-limited blob access URLs over the internet: This controls who can access data and for how long, but still relies on public endpoints and does not enforce private-only network access.
  • ✖ Deploy an Azure VPN Gateway to connect the virtual network to the on-premises network and route storage traffic through the VPN: This focuses on hybrid connectivity with on-premises, not on restricting the storage account to private IP access within Azure.

At the AZ-900 level, it is important to recognize that Azure Storage already encrypts data at rest and that private endpoints are a primary way to keep traffic on private IPs inside a virtual network while blocking the public endpoint.
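The two requirements map cleanly onto three account settings, which can be expressed as a simple check. The field names below are illustrative shorthand, not the actual ARM resource schema:

```python
# Toy requirements check mapping the scenario to account settings
# (field names are illustrative shorthand, not the ARM schema).
account = {
    "encryption_at_rest": True,      # on by default via Storage Service Encryption
    "private_endpoint": True,        # private IP access from the virtual network
    "public_network_access": False,  # public endpoint disabled
}

def meets_requirements(acct: dict) -> bool:
    return (acct["encryption_at_rest"]
            and acct["private_endpoint"]
            and not acct["public_network_access"])

print(meets_requirements(account))  # True
```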


Question 7

Topic: Describe Azure Architecture and Services

A company wants to copy files from an on-premises server to Azure Blob Storage every night as part of an automated script that runs without user interaction. Administrators prefer a command-line tool they can schedule with existing automation. Which option is the MOST appropriate tool to use?

Options:

  • A. Use Azure Storage Explorer to manually drag and drop files into the Blob container each morning.

  • B. Use AzCopy in a script scheduled by an automation tool such as Task Scheduler or a CI/CD pipeline.

  • C. Order an Azure Data Box device and ship it to Microsoft each week with the updated files.

  • D. Use the Azure portal to upload files to Blob Storage through the browser interface when needed.

Best answer: B

Explanation: This scenario focuses on how the data movement will be performed: it must run every night as part of an automated script without any user interaction. Among the tools listed, the key deciding factor is whether the tool is designed for non-interactive, scriptable automation or for interactive, manual use.

AzCopy is a command-line utility specifically built for high-performance data transfer to and from Azure Storage. Because it runs from the command line, it can be easily integrated into scripts (for example, PowerShell or shell scripts) and scheduled with tools like Task Scheduler, cron, or CI/CD pipelines. This makes it the best fit when you need repeatable, unattended transfers.

In contrast, Azure Storage Explorer and the Azure portal are graphical interfaces that are ideal for manual, ad-hoc management of storage resources. They require a user to sign in and perform actions interactively, which does not meet the requirement for nightly, unattended automation.

Azure Data Box targets large, one-time or infrequent bulk data migrations by shipping physical devices, not ongoing daily transfers over the network. It is not intended for regular, scripted jobs.

Therefore, using AzCopy in a scheduled script is the most appropriate solution for the stated requirements.
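As a sketch of what the nightly job assembles, the snippet below builds the AzCopy command line without running it. The folder path and container URL are placeholders; in practice the destination URL carries a SAS token or AzCopy is authenticated beforehand, and a scheduler (Task Scheduler, cron, or a pipeline) invokes the command unattended:

```python
# Sketch of how a nightly job might assemble the AzCopy command.
# SOURCE and DEST are placeholders; in practice the destination carries
# a SAS token or AzCopy is signed in before the transfer.
SOURCE = "/data/exports"
DEST = "https://examplestore.blob.core.windows.net/nightly"  # assumed account/container

def build_azcopy_command(source: str, dest: str) -> list[str]:
    # "azcopy copy <src> <dst> --recursive" is the core transfer command;
    # Task Scheduler, cron, or a CI/CD pipeline would run it unattended.
    return ["azcopy", "copy", source, dest, "--recursive"]

cmd = build_azcopy_command(SOURCE, DEST)
print(" ".join(cmd))
```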


Question 8

Topic: Describe Azure Architecture and Services

You are planning to move a small line-of-business application to Azure. The application details are shown in the exhibit.

Exhibit:

Application type: Internal web API
Packaging: Single Docker container image
Runtime behavior: Runs at month-end only
Management preference: Minimal infrastructure effort

Based on the exhibit, which Azure hosting option is the most appropriate?

Options:

  • A. Deploy the application to Azure Container Instances

  • B. Host the application on Azure Virtual Machines

  • C. Deploy the application to Azure Kubernetes Service (AKS)

  • D. Deploy the application to Azure App Service (code-based deployment)

Best answer: A

Explanation: The exhibit describes an internal web API that is already packaged as a single Docker container image, runs only at month-end, and should require minimal infrastructure effort.

Azure Container Instances (ACI) is a serverless container service designed exactly for these scenarios: running one or a few containers without managing virtual machines or a Kubernetes cluster. You pay for the container only while it is running, which aligns well with a periodic, month-end workload.

By contrast, virtual machines and Kubernetes clusters require significantly more operational management, and a full Azure App Service web app is more suited to always-on web workloads rather than an infrequent, single-container job.


Question 9

Topic: Describe Azure Architecture and Services

A company wants to standardize on Azure App Service to reduce management overhead when hosting new applications. Which of the following planned uses of Azure App Service is NOT appropriate?

Options:

  • A. Hosting the server-side back end for a mobile application, including authentication and data access logic.

  • B. Hosting a custom network virtual appliance that must inspect and route all traffic between on-premises networks and Azure virtual networks.

  • C. Hosting a public-facing company website built with ASP.NET Core.

  • D. Hosting a REST-based API used by web and mobile clients to access business data.

Best answer: B

Explanation: Azure App Service is a Platform as a Service (PaaS) offering that simplifies hosting of web apps, REST APIs, and mobile back ends. Microsoft manages the underlying infrastructure, operating system, and runtime, so you can focus on your application code rather than servers and patches.

Because App Service is optimized for HTTP/HTTPS-based application workloads, it is not suitable for infrastructure-focused components such as network virtual appliances, firewalls, or routers that need deep control over networking. Those workloads typically require virtual machines or container-based solutions with more control over the OS and network configuration.

In this question, the valid uses of App Service all involve hosting application logic exposed over HTTP/HTTPS, while the incorrect option attempts to use App Service for a network appliance scenario, which violates the principle of choosing the right service type for the workload.


Question 10

Topic: Describe Azure Architecture and Services

Which Azure service is specifically designed to continuously assess the security posture of your Azure and hybrid workloads, provide hardening recommendations, and raise alerts when threats are detected?

Options:

  • A. Azure Advisor

  • B. Azure Monitor

  • C. Microsoft Defender for Cloud

  • D. Microsoft Sentinel

Best answer: C

Explanation: Microsoft Defender for Cloud is Azure’s built-in service for cloud security posture management (CSPM) and threat protection. It continuously assesses the configuration and security of your Azure, multicloud, and hybrid resources, then provides prioritized recommendations to harden them.

Defender for Cloud also enables threat protection by analyzing signals from resources and raising security alerts when it detects suspicious or malicious activity. This makes it the central Azure service for monitoring and improving your overall cloud security posture, not just collecting logs or generic best-practice tips.

Other services like Azure Advisor, Azure Monitor, and Microsoft Sentinel can contribute to security, but they focus on optimization, observability, or enterprise-wide security analytics rather than posture management and targeted hardening recommendations for Azure workloads.

Continue with full practice

Use the AZ-900 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Try AZ-900 on Web · View AZ-900 Practice Test

Free review resource

Read the AZ-900 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.

Revised on Thursday, May 14, 2026