Browse Certification Practice Tests by Exam Family

AZ-104: Deploy and Manage Azure Compute Resources

Try 10 focused AZ-104 questions on Deploy and Manage Azure Compute Resources, with explanations, then continue with IT Mastery.

On this page

Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.


Topic snapshot

Exam route: AZ-104
Topic area: Deploy and Manage Azure Compute Resources
Blueprint weight: 25%
Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Deploy and Manage Azure Compute Resources for AZ-104. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.

  • First attempt — What to do: answer without checking the explanation first. What to record: the fact, rule, calculation, or judgment point that controlled your answer.

  • Review — What to do: read the explanation even when you were correct. What to record: why the best answer is stronger than the closest distractor.

  • Repair — What to do: repeat only missed or uncertain items after a short break. What to record: the pattern behind misses, not the answer letter.

  • Transfer — What to do: return to mixed practice once the topic feels stable. What to record: whether the same skill holds up when the topic is no longer obvious.

Blueprint context: this topic area carries 25% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Deploy and Manage Azure Compute Resources

You plan to deploy a single-container workload using Azure Container Instances (ACI). The container hosts an internal REST API that must be called only by other services in your existing virtual network. The API connects to an Azure SQL Database that is exposed only through a private endpoint in the same virtual network. You must: 1) ensure the container is not reachable from the internet, and 2) store the database connection string in a way that it is not visible in the Azure portal after deployment and is not written to container logs. Which configuration should you implement in ACI to meet these requirements?

Options:

  • A. Deploy the container into a subnet using ACI virtual network integration with only a private IP address, and bake the database connection string into the container image so it is not visible in the portal configuration.

  • B. Deploy the container with a public IP address and restrict inbound access by attaching a network security group (NSG) that allows traffic only from the SQL Database private endpoint IP range, and store the connection string as a regular environment variable.

  • C. Deploy the container with a public IP address and configure a system-assigned managed identity on the container group; retrieve the database connection string from Azure Key Vault at runtime from the application code.

  • D. Deploy the container into a subnet using ACI virtual network integration with only a private IP address, and configure the database connection string as a secure environment variable in the container settings.

Best answer: D

Explanation: Azure Container Instances can be deployed either with a public IP address or into a virtual network with only a private IP. To keep a container internal-only, you should use virtual network integration so that the container group is placed in a subnet and is reachable only over private IP addresses.

For secrets, ACI supports both regular and secure environment variables. Regular environment variables expose their values in the portal and in exported templates. Secure environment variables are treated as secrets: their values are not displayed in the portal after creation and are redacted from diagnostic views. This is the simplest platform-level way to store connection strings without modifying application code.

In this scenario, the requirements are to avoid any public internet exposure and to prevent the database connection string from appearing in the portal after deployment or in logs. The configuration that uses ACI virtual network integration with private IP only, plus secure environment variables for the connection string, satisfies both requirements directly within the ACI settings in the Azure portal.
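As a rough sketch, the correct configuration could be expressed with the Azure CLI. All resource names and the connection string placeholder below are hypothetical; the flags shown (`--vnet`, `--subnet`, `--ip-address`, `--secure-environment-variables`) are the relevant `az container create` options.

```shell
# Sketch (hypothetical names): deploy the container group into an existing
# subnet with only a private IP, and pass the connection string as a secure
# environment variable so it is hidden in the portal and diagnostic views.
az container create \
  --resource-group rg-internal \
  --name orders-api \
  --image myregistry.azurecr.io/orders-api:1.0 \
  --vnet vnet-prod \
  --subnet snet-aci \
  --ip-address Private \
  --secure-environment-variables SQL_CONNECTION_STRING="<connection-string>"
```

Because the container group lives in the subnet, it can reach the SQL private endpoint over private IP, and nothing is exposed publicly.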


Question 2

Topic: Deploy and Manage Azure Compute Resources

You administer a Windows VM in Azure that uses managed OS and data disks encrypted with platform-managed keys. Security now requires Azure Disk Encryption using customer-managed keys stored in Azure Key Vault. You must protect the keys from accidental deletion and avoid managing secrets in scripts. Which approach BEST meets these requirements?

Options:

  • A. Create a Disk Encryption Set that uses a key in Key Vault and assign it to the managed disks, but do not enable Azure Disk Encryption on the VM.

  • B. Create a Key Vault without soft-delete. Store a BitLocker recovery password as a secret and run a custom PowerShell script on the VM that uses a stored service principal secret to enable BitLocker on each disk.

  • C. Enable a system-assigned managed identity on the VM. Create a Key Vault in the same region with soft-delete and purge protection enabled. Generate a key, grant the VM’s managed identity key permissions, then enable Azure Disk Encryption on the VM using that key.

  • D. Enable encryption at host on the VM size and continue using platform-managed keys. Configure regular VM backups to protect data.

Best answer: C

Explanation: Azure Disk Encryption (ADE) encrypts the OS and data volumes inside the guest OS using technologies like BitLocker on Windows, but it is orchestrated and managed through Azure. When using customer-managed keys, those keys are stored in Azure Key Vault. For strong governance, Microsoft recommends enabling soft-delete and purge protection on the Key Vault to prevent accidental or malicious key deletion.

In this scenario, security specifically requires Azure Disk Encryption with customer-managed keys in Key Vault, protection against key deletion, and avoiding secrets in scripts. Using a system-assigned managed identity on the VM lets the ADE extension access the Key Vault key without storing credentials. Enabling soft-delete and purge protection on the Key Vault ensures that even if keys are deleted, they can be recovered within the retention period, reducing risk.

Therefore, the best solution is to configure a managed identity for the VM, create a properly protected Key Vault in the same region, grant appropriate key permissions to the VM identity, and then enable Azure Disk Encryption using the Key Vault key. This meets all stated requirements without introducing extra complexity or security risks.
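The steps in option C could be sketched with the Azure CLI as follows. Resource names are hypothetical; note that Azure Disk Encryption also requires the vault to be enabled for disk encryption, and soft-delete is enabled by default on new vaults.

```shell
# Sketch (hypothetical names): protected Key Vault + managed identity + ADE.
az keyvault create \
  --resource-group rg-sec --name kv-diskenc-prod --location eastus \
  --enabled-for-disk-encryption true \
  --enable-purge-protection true   # soft-delete is on by default for new vaults

# Create the key-encryption key and give the VM an identity.
az keyvault key create --vault-name kv-diskenc-prod --name vm-kek
az vm identity assign --resource-group rg-sec --name vm-app01

# Enable Azure Disk Encryption on OS and data volumes with the CMK.
az vm encryption enable \
  --resource-group rg-sec --name vm-app01 \
  --disk-encryption-keyvault kv-diskenc-prod \
  --key-encryption-key vm-kek \
  --volume-type All
```

No service principal secret or stored credential appears anywhere in this flow.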


Question 3

Topic: Deploy and Manage Azure Compute Resources

You manage a small dev/test API that runs in an Azure App Service plan (Standard S1, 1 instance). The app uses a custom domain and a TLS certificate. You must reduce monthly cost without losing these capabilities. What should you do?

Options:

  • A. Move the app to an Isolated I1 App Service Environment-based plan to improve isolation.

  • B. Move the app to a Free F1 App Service plan.

  • C. Change the App Service plan pricing tier to Basic B1 and keep a single instance.

  • D. Scale the App Service plan up to Premium V3 P1v3 and enable autoscale.

Best answer: C

Explanation: Azure App Service plans are offered in several pricing tiers (Free/Shared, Basic, Standard, Premium, Isolated), each with different capabilities and cost. When you choose a tier, you must balance feature needs (custom domains, SSL, autoscale, staging slots) against budget and workload criticality.

In this scenario, the workload is explicitly described as a small dev/test API. It runs on a Standard S1 plan and uses a custom domain and TLS certificate. The only stated goal is to reduce cost while keeping the custom domain and certificate working.

Standard S1 provides features such as autoscale and deployment slots that are not required for this simple dev/test use case. Basic B1, by contrast, supports custom domains and SSL and allows manual scaling to multiple instances if needed, while costing less than Standard. Changing to Basic B1 therefore preserves the required capabilities and reduces cost.

Free, Premium, and Isolated tiers either remove required features (Free) or greatly increase cost (Premium, Isolated), so they are not appropriate optimizations for this case.
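The tier change itself is a single in-place operation on the plan. The resource names below are hypothetical; `az appservice plan update --sku` is the relevant command.

```shell
# Sketch (hypothetical names): scale the plan down from Standard S1 to Basic B1.
# Apps in the plan keep their custom domains and certificate bindings.
az appservice plan update \
  --resource-group rg-devtest \
  --name plan-devtest-api \
  --sku B1
```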


Question 4

Topic: Deploy and Manage Azure Compute Resources

You manage an Azure VNet with two subnets: Web-Subnet for internet-facing web VMs and Db-Subnet for backend SQL VMs. Admins use a site-to-site VPN and must manage all VMs only over private IPs. Web VMs may be reachable from the internet only on ports 80 and 443; database VMs must never be internet-accessible.

Which of the following VM networking configurations should you AVOID? (Select THREE.)

Options:

  • A. Assign a public IP to each database VM NIC and configure an NSG on Db-Subnet that allows inbound SQL (TCP 1433) from any source on the internet.

  • B. Place each web VM NIC in Web-Subnet, associate an NSG to Web-Subnet allowing inbound HTTP/HTTPS (80/443) from any internet source and RDP/SSH only from the on-premises address range, and assign a Standard public IP to each web VM.

  • C. Configure the NSG on Web-Subnet to allow inbound RDP and SSH from any internet source to web VMs, so administrators can connect directly without using the VPN.

  • D. Place each database VM NIC in Db-Subnet with only a private IP, and associate an NSG to Db-Subnet that allows inbound SQL traffic only from Web-Subnet and RDP/SSH only from the on-premises address range.

  • E. Place both web and database VM NICs in Web-Subnet and use a single NSG that allows inbound HTTP/HTTPS from any internet source to any VM in the subnet.

Correct answers: A, C, and E

Explanation: The scenario requires a classic two-tier design with strict network isolation. Web VMs are allowed limited internet exposure on ports 80 and 443, while database VMs must never be directly reachable from the internet. All management must occur over private IPs via the site-to-site VPN.

To meet these requirements, you should:

  • Place web and database VMs in separate subnets.
  • Use NSGs so that only HTTP/HTTPS reach the web tier from the internet.
  • Ensure the database tier has no public IPs and only accepts traffic from the web tier and on-premises networks.
  • Restrict RDP/SSH to on-premises address ranges, not to arbitrary internet sources.

Configurations that give database VMs public IPs, allow SQL from the internet, or expose RDP/SSH from any public source are clear anti-patterns. Likewise, collapsing both tiers into a single internet-facing subnet undermines network segmentation and makes it easier for misconfiguration to expose the database tier.
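A correct web-tier NSG could be sketched as below. Names and the on-premises address range are hypothetical; the same pattern on Db-Subnet would allow only TCP 1433 from Web-Subnet plus management from on-premises.

```shell
# Sketch (hypothetical names): web-tier NSG — HTTP/HTTPS from the internet,
# management traffic only from the on-premises range reached via the VPN.
az network nsg rule create \
  --resource-group rg-net --nsg-name nsg-web \
  --name allow-web --priority 100 \
  --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-port-ranges 80 443

az network nsg rule create \
  --resource-group rg-net --nsg-name nsg-web \
  --name allow-mgmt-onprem --priority 110 \
  --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes 10.10.0.0/16 \
  --destination-port-ranges 3389 22
```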


Question 5

Topic: Deploy and Manage Azure Compute Resources

You manage 15 production Azure App Service web apps. You must centrally collect and retain HTTP access logs for 90 days with minimal administrative effort and be able to query across all apps. Which configuration should you use?

Options:

  • A. Enable application logging to the file system for each app and configure each app with its own Application Insights resource.

  • B. Configure diagnostic settings on each app to send HTTP logs to a shared Log Analytics workspace and set the workspace retention to 90 days.

  • C. Enable web server logging to the file system for each app and manually download log files every month.

  • D. Enable web server logging to blob storage for each app, writing logs to a separate storage account per app.

Best answer: B

Explanation: Azure App Service exposes several logging options, including web server (HTTP) logs and application logs. For centralized, queryable logging with consistent retention across many apps, you should use Azure Monitor diagnostic settings to send logs to a shared Log Analytics workspace.

By configuring each App Service to stream HTTP logs (such as AppServiceHTTPLogs/AppServiceConsoleLogs) into the same Log Analytics workspace, you gain a single data store for all apps. You can then set workspace retention to 90 days once, and use Azure Monitor Logs queries to correlate requests across apps. This approach minimizes ongoing administrative effort and supports centralized troubleshooting.

File system or per-app storage solutions may capture logs, but they require manual aggregation and lack out-of-the-box query capabilities across all apps. Application Insights is excellent for application telemetry but is not the simplest solution for centralized HTTP access logs across many apps when created separately for each app.
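As a sketch, the per-app diagnostic setting and the one-time retention change look like this with the Azure CLI. Subscription ID, group, app, and workspace names are hypothetical placeholders; in practice you would loop the first command over all 15 apps.

```shell
# Sketch (hypothetical names/IDs): route HTTP logs from one app to the shared
# Log Analytics workspace; repeat (or script a loop) for each of the 15 apps.
az monitor diagnostic-settings create \
  --name http-to-law \
  --resource "/subscriptions/<sub-id>/resourceGroups/rg-apps/providers/Microsoft.Web/sites/app-01" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/rg-monitor/providers/Microsoft.OperationalInsights/workspaces/law-shared" \
  --logs '[{"category":"AppServiceHTTPLogs","enabled":true}]'

# Set 90-day retention once, at the workspace level.
az monitor log-analytics workspace update \
  --resource-group rg-monitor --workspace-name law-shared \
  --retention-time 90
```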


Question 6

Topic: Deploy and Manage Azure Compute Resources

You manage an Azure virtual machine scale set that currently has one instance hosting a web app. An autoscale rule is configured to add one instance if the average CPU percentage of the existing instance exceeds 70% over a 60-minute period.

Over the last hour, Azure Monitor shows the following CPU values for the instance:

Interval (minutes): Average CPU (%)
0–30: 80
30–45: 60
45–60: 40

Assume the hourly CPU percentage is the time-weighted average of the intervals shown. Round to the nearest whole percent.

The scale set did not scale out. Based on these data, what should you conclude?

Options:

  • A. The average CPU was 70%, so the autoscale threshold was exceeded and a scale-out should have occurred.

  • B. The average CPU was 65%, so the autoscale threshold was not exceeded and no scale-out is expected.

  • C. The average CPU was 80%, so the autoscale threshold was exceeded and a scale-out should have occurred.

  • D. The average CPU was 75%, so the autoscale threshold was exceeded and a scale-out should have occurred.

Best answer: B

Explanation: To understand why the scale set did not scale out, you must verify whether the autoscale condition was truly met based on the measured CPU values.

The autoscale rule is: if the average CPU over 60 minutes exceeds 70%, add an instance. The table gives three intervals with different CPU percentages and durations. The correct way to calculate the hourly average is to compute a time-weighted average:

  • For 30 minutes, CPU = 80%
  • For 15 minutes, CPU = 60%
  • For 15 minutes, CPU = 40%

Compute total CPU-minutes:

\[ \begin{aligned} \text{Total CPU-minutes} &= 80 \times 30 + 60 \times 15 + 40 \times 15 \\ &= 2,400 + 900 + 600 \\ &= 3,900 \end{aligned} \]

Now divide by the total time (60 minutes) to get the average percentage:

\[ \text{Average CPU} = \frac{3,900}{60} = 65\%. \]

Rounded to the nearest whole percent, the average CPU is 65%, which is below the 70% threshold. Therefore, the autoscale rule condition was not satisfied and the scale set correctly did not add an instance. There is no autoscale fault here; the metrics simply do not justify a scale-out.

In a real troubleshooting scenario, you would next decide whether to adjust the autoscale threshold, use a shorter evaluation period to react faster to spikes, or investigate the workload pattern further.
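The calculation above can be checked with a few lines of Python (a generic helper, not anything Azure-specific):

```python
# Time-weighted average of the observed CPU values: weight each interval's
# average by its duration, then divide by the total window length.
def time_weighted_average(samples):
    """samples: list of (duration_minutes, avg_cpu_percent) tuples."""
    total_minutes = sum(d for d, _ in samples)
    cpu_minutes = sum(d * cpu for d, cpu in samples)
    return cpu_minutes / total_minutes

samples = [(30, 80), (15, 60), (15, 40)]  # the three intervals from the table
avg = time_weighted_average(samples)
print(round(avg))  # 65 — below the 70% threshold, so no scale-out
```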


Question 7

Topic: Deploy and Manage Azure Compute Resources

You administer a production Azure virtual machine that runs a line-of-business application and is running low on disk space for application data. You must increase available storage following Microsoft best practices. Which TWO configurations should you AVOID? (Select TWO.)

Options:

  • A. Attach an additional managed data disk to the VM, initialize it in the guest OS, create a new volume, and move the application’s data to this new volume during a maintenance window.

  • B. Detach the only data disk that holds the production database while the VM and application are still running, then reattach it after a few minutes.

  • C. Stop the VM, increase the size of the existing managed data disk in the Azure portal, start the VM, and then extend the volume inside the guest OS.

  • D. Before resizing a production data disk, create a snapshot of the disk for backup, then perform the resize and extend the filesystem during a scheduled maintenance window.

  • E. Store all application data on the OS disk to avoid adding or resizing managed data disks.

Correct answers: B and E

Explanation: This scenario focuses on safely increasing storage for application data on an Azure virtual machine by attaching, detaching, or resizing managed data disks. Azure best practices separate OS and data workloads, and any operation that changes storage attached to a running workload must avoid data corruption and unplanned downtime.

Using managed data disks, administrators can either increase the size of an existing disk or add additional disks. Both approaches are valid when performed with appropriate safeguards: backups (such as snapshots), maintenance windows, and correct steps inside the guest OS (initializing disks, creating partitions, and extending volumes).

The configurations to avoid are those that either misuse the OS disk for application data or change the attachment of a disk that is actively in use by the application. Both practices significantly increase the risk of data loss and service interruption, which contradicts standard Azure administration guidance.
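The safe resize path (option C with the snapshot safeguard from option D) could be sketched as follows; disk and VM names are hypothetical. Depending on disk type and size, online expansion without deallocating may also be supported.

```shell
# Sketch (hypothetical names): snapshot first, then grow the data disk
# during a scheduled maintenance window.
az snapshot create \
  --resource-group rg-prod --name datadisk01-pre-resize \
  --source datadisk01

az vm deallocate --resource-group rg-prod --name vm-lob01
az disk update --resource-group rg-prod --name datadisk01 --size-gb 512
az vm start --resource-group rg-prod --name vm-lob01
# Finally, extend the volume inside the guest OS (Disk Management/diskpart on
# Windows; growpart plus resize2fs or xfs_growfs on Linux).
```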


Question 8

Topic: Deploy and Manage Azure Compute Resources

Your company runs line-of-business HTTP APIs on Azure Container Apps. You are deploying a new internal API named Orders to an existing Azure Container Apps environment that is already integrated with a virtual network.

Requirements:

  • Orders must be reachable only from other apps and services inside the virtual network (no public internet access).
  • The platform team wants to keep using the existing shared Container Apps environment in each region instead of creating additional ones.
  • Deployments must use controlled blue/green-style traffic shifting between revisions, and the API must always have at least one replica running.

You are reviewing several proposed configuration approaches for the Orders container app.

Which of the following configurations should you AVOID? (Select THREE.)

Options:

  • A. Configure ingress as external with a public fully qualified domain name (FQDN) and allow unauthenticated HTTP requests from the internet so that other apps can reach Orders.

  • B. Place Orders in the existing Container Apps environment in the region and configure ingress with an internal-only endpoint so that it is accessible only inside the virtual network.

  • C. Set revision mode to multiple revisions and route 100% of traffic to the current stable revision, then adjust traffic weights when rolling out a new revision.

  • D. Set revision mode to single revision with automatic traffic to the latest revision and configure minimum replicas to 0 to allow Orders to scale to zero when idle.

  • E. Create a new Container Apps environment in the same region dedicated solely to the Orders app, instead of using the existing shared environment.

Correct answers: A, D, and E

Explanation: Azure Container Apps lets you manage app isolation and deployment behaviors through the environment, ingress configuration, and revision settings.

In this scenario, Orders is an internal line-of-business API that must be private to a virtual network, share the existing regional environment, and support controlled blue/green-style deployments while always keeping at least one running replica.

Configurations that create extra environments in the same region, expose public unauthenticated ingress, or eliminate controlled revision traffic and always-on replicas contradict those requirements and should be avoided. Reusing the existing environment, using internal-only ingress, and configuring multiple revisions with manual traffic splitting all align with best practices for this scenario.
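A matching configuration for the Orders app could be sketched with the `az containerapp` commands below. App, group, and revision names are hypothetical; the app is assumed to already exist in the shared environment.

```shell
# Sketch (hypothetical names): internal-only ingress, always at least one
# replica, multiple-revision mode, and a manual blue/green traffic split.
az containerapp ingress enable \
  --resource-group rg-apps --name orders \
  --type internal --target-port 8080

az containerapp update \
  --resource-group rg-apps --name orders --min-replicas 1

az containerapp revision set-mode \
  --resource-group rg-apps --name orders --mode multiple

# After deploying a new revision, shift traffic gradually:
az containerapp ingress traffic set \
  --resource-group rg-apps --name orders \
  --revision-weight orders--stable=90 orders--canary=10
```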


Question 9

Topic: Deploy and Manage Azure Compute Resources

You plan to deploy a new dev/test internal API to Azure App Service. The API must use your company’s custom DNS name and a custom TLS/SSL certificate, but you also must keep hosting costs as low as possible. Which App Service plan pricing tier should you choose?

Options:

  • A. Free (F1) App Service plan

  • B. Shared (D1) App Service plan

  • C. Basic (B1) App Service plan

  • D. Premium (P1v3) App Service plan

Best answer: C

Explanation: To choose an App Service plan tier, you match workload needs with tier capabilities and cost. In this scenario, the workload is explicitly described as dev/test and cost-sensitive, but it still requires a custom DNS name and a custom TLS/SSL certificate.

Free and Shared tiers are optimized for experiments and very low-cost scenarios but have important limitations. Free does not support custom domains at all. Shared supports custom domains but cannot use your own TLS/SSL certificate for that domain. To bind a custom certificate to a custom domain, you need at least the Basic tier. Premium provides additional performance and scale features useful for production and mission-critical workloads, but that is unnecessary for this small dev/test API when the main deciding factor is minimizing cost while meeting the custom domain and certificate requirements.

Therefore, the Basic (B1) App Service plan is the most appropriate choice: it is the lowest-priced tier that satisfies both custom domain and custom TLS/SSL certificate support, aligning with the dev/test and cost constraints.
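The end-to-end setup on Basic B1 could be sketched as below. All names, the domain, the PFX file, and the thumbprint placeholder are hypothetical; the web app is assumed to already exist in the plan, and the domain's DNS records must already point at the app.

```shell
# Sketch (hypothetical names): Basic B1 plan with a custom domain and a
# custom TLS/SSL certificate bound via SNI.
az appservice plan create \
  --resource-group rg-devtest --name plan-internal-api --sku B1

az webapp config hostname add \
  --resource-group rg-devtest --webapp-name internal-api \
  --hostname api.contoso.com

az webapp config ssl upload \
  --resource-group rg-devtest --name internal-api \
  --certificate-file api-contoso.pfx --certificate-password "<pfx-password>"

az webapp config ssl bind \
  --resource-group rg-devtest --name internal-api \
  --certificate-thumbprint <thumbprint> --ssl-type SNI
```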


Question 10

Topic: Deploy and Manage Azure Compute Resources

Which TWO of the following statements about deploying applications to Azure App Service are INCORRECT? (Select TWO.)

Options:

  • A. FTP/FTPS is the recommended primary deployment option for production workloads because it provides built-in source control, automated builds, and easy rollbacks.

  • B. When deploying a custom container to Azure App Service, you must also deploy your application files separately using ZIP deployment or FTP so that App Service can mount them into the container.

  • C. To deploy a containerized application to Azure App Service, you can configure the app to use a container image stored in a registry such as Azure Container Registry or Docker Hub.

  • D. Using ZIP deployment with a WEBSITE_RUN_FROM_PACKAGE setting (run-from-package) mounts the ZIP file as read-only, improving deployment consistency and preventing in-place modification of files.

  • E. Source control–based deployment can pull code from a repository such as GitHub or Azure Repos and automatically deploy changes when commits are pushed to a configured branch.

Correct answers: A and B

Explanation: Azure App Service supports several deployment methods, including source control integration, ZIP deployment (with or without run-from-package), and container images from a registry. Each method has different trade-offs in terms of automation, reliability, and operational tooling.

Source control–based deployment is tightly integrated with Git repositories and can trigger automatic deployments when new commits are pushed. ZIP deployment with run-from-package mounts a read-only package for consistency and immutability. Container-based deployment relies on publishing a fully built image to a container registry.

The two incorrect statements misrepresent how container deployments and FTP/FTPS work. With containers, the image already includes the application files, and there is no requirement to deploy additional files separately. FTP/FTPS is a legacy, manual deployment method and lacks the advanced capabilities expected for modern production workflows, such as integrated source control, automatic builds, and rollbacks.
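The run-from-package flow described in option D could be sketched in two commands. App and group names and the ZIP path are hypothetical.

```shell
# Sketch (hypothetical names): read-only run-from-package ZIP deployment.
az webapp config appsettings set \
  --resource-group rg-apps --name orders-web \
  --settings WEBSITE_RUN_FROM_PACKAGE=1

# The package is mounted read-only at wwwroot; files cannot be edited in place.
az webapp deploy \
  --resource-group rg-apps --name orders-web \
  --src-path ./app.zip --type zip
```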

Continue with full practice

Use the AZ-104 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.


Free review resource

Read the AZ-104 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.

Revised on Thursday, May 14, 2026