Browse Certification Practice Tests by Exam Family

AZ-104: Implement and Manage Storage

Try 10 focused AZ-104 questions on Implement and Manage Storage, with explanations, then continue with IT Mastery.

Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Try AZ-104 on Web · View full AZ-104 practice page

Topic snapshot

Exam route: AZ-104
Topic area: Implement and Manage Storage
Blueprint weight: 20%
Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Implement and Manage Storage for AZ-104. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.

First attempt: Answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
Review: Read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
Repair: Repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
Transfer: Return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 20% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Implement and Manage Storage

You manage an Azure storage account used by a legacy application. The application currently sends 180,000 requests per day over HTTPS and 20,000 requests per day over HTTP. To enforce encryption in transit, you enable the storage account’s secure transfer required setting, which rejects any HTTP requests.

Assuming the application behavior does not change, how many of the application’s daily requests will now fail because they are not using HTTPS?

Options:

  • A. 0 requests

  • B. 180,000 requests

  • C. 200,000 requests

  • D. 20,000 requests

Best answer: D

Explanation: The secure transfer required setting on an Azure storage account enforces encryption in transit by requiring all requests to use HTTPS. When this setting is enabled, any unencrypted HTTP requests to the storage account are rejected.

In the scenario, the application sends a total of 200,000 requests per day:

  • 180,000 requests over HTTPS (already encrypted in transit)
  • 20,000 requests over HTTP (not encrypted in transit)

When secure transfer required is enabled, the storage account will allow only HTTPS traffic. Therefore, all 180,000 HTTPS requests will continue to succeed, while all 20,000 HTTP requests will now be rejected.

The arithmetic is straightforward:

Rejected requests per day = 200,000 total requests - 180,000 HTTPS requests = 20,000 HTTP requests

So, 20,000 of the application’s daily requests will fail until the application is updated to use HTTPS for those calls.

This demonstrates how enabling secure transfer required immediately blocks unencrypted connections, helping enforce encryption in transit for all storage account access.
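The arithmetic above can be double-checked with a few lines of shell, using the request volumes given in the scenario:

```shell
# Daily request counts from the scenario.
https_per_day=180000   # already HTTPS; unaffected when secure transfer is required
http_per_day=20000     # plain HTTP; rejected once the setting is enabled

total=$((https_per_day + http_per_day))
rejected=$http_per_day   # only the HTTP portion fails

echo "total=$total rejected=$rejected"   # prints: total=200000 rejected=20000
```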


Question 2

Topic: Implement and Manage Storage

You manage an Azure storage account that hosts an Azure file share named corpdata. Users must access the share from both on-premises and Azure workloads.

Requirements:

  • On-premises Windows 11 devices are joined to an on-premises Active Directory domain and must access \\storageacct.file.core.windows.net\corpdata using their domain user accounts over SMB, without configuring storage account keys on those PCs.
  • An Ubuntu Server VM running in Azure is not domain-joined. It must mount the same Azure file share over SMB at boot for an application. Using a storage account key on this VM is acceptable.
  • Storage account keys must not be distributed to any on-premises Windows clients.

Which of the following actions will meet these requirements? (Select TWO.)

Options:

  • A. Distribute the storage account key to all Windows and Linux clients via a secure documentation portal and update their login scripts to pass the storage account key as the password when running net use and mount -t cifs commands.

  • B. Generate an account-level shared access signature (SAS) for the storage account and configure both Windows and Linux clients to mount the Azure file share over SMB using the SAS token as the password in their net use and mount commands.

  • C. On the Ubuntu VM, install cifs-utils, store the storage account name and key in a protected credentials file (for example /etc/smbcredentials/storageacct.cred with 600 permissions), and mount the share using an SMB 3.0 mount -t cifs command that specifies the credentials file in the -o options.

  • D. Configure Azure Files Active Directory authentication for the storage account using the on-premises AD DS environment, assign share and NTFS permissions to domain groups, and have Windows clients map \\storageacct.file.core.windows.net\corpdata over SMB using their existing domain credentials (for example, with a net use command).

  • E. Enable public anonymous access on the corpdata file share and have both Windows and Linux clients mount the share over SMB without providing any credentials.

Correct answers: C and D

Explanation: Azure Files supports SMB access with two main authentication models relevant here: storage account key–based authentication and identity-based authentication using Active Directory (on-prem AD DS, Microsoft Entra Domain Services, or Entra Kerberos for Azure Files). For domain-joined Windows clients where you want users to access shares with their existing domain accounts, you configure Azure Files for AD-based authentication so access is controlled by share and NTFS permissions. For non-domain-joined Linux workloads, the common pattern is to mount Azure file shares over SMB using the storage account name as the username and a storage account key as the password in a mount -t cifs command, with credentials stored in a protected file.

In this scenario, Windows 11 devices must use their domain accounts and must not store storage account keys. That points directly to configuring Azure Files Active Directory authentication and using SMB with Kerberos tickets instead of keys. The Ubuntu VM is explicitly allowed to use the storage account key and is not domain-joined, so a key-based SMB mount with cifs-utils is appropriate. Options that rely on SAS tokens for SMB, anonymous access, or broad distribution of storage keys either will not work technically or violate the stated security requirements.
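As a sketch of the Ubuntu side of this setup, the key-based mount described in option C could look like the following. The account name storageacct, share name corpdata, and mount point come from the scenario; treat the exact package name and mount options as assumptions to verify against current Azure Files documentation.

```shell
# Sketch only: key-based SMB mount of an Azure file share on Ubuntu.
sudo apt-get install -y cifs-utils
sudo mkdir -p /mnt/corpdata /etc/smbcredentials

# Keep the storage account key in a root-only credentials file,
# not on the command line or in shell history.
sudo tee /etc/smbcredentials/storageacct.cred > /dev/null <<'EOF'
username=storageacct
password=<storage-account-key>
EOF
sudo chmod 600 /etc/smbcredentials/storageacct.cred

# SMB 3.x mount; add a matching /etc/fstab entry so the share mounts at boot.
sudo mount -t cifs //storageacct.file.core.windows.net/corpdata /mnt/corpdata \
  -o vers=3.0,credentials=/etc/smbcredentials/storageacct.cred,serverino
```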


Question 3

Topic: Implement and Manage Storage

You store confidential reports in an Azure Storage blob container. About 100 employees must read the reports using the Azure portal and AzCopy. The security team requires that access be controlled and audited per user via Microsoft Entra ID, without sharing storage account keys or SAS tokens. Which access method should you configure to meet this requirement?

Options:

  • A. Create an application registration in Microsoft Entra ID that stores the storage account key in Azure Key Vault, and have users call this app to access the blobs.

  • B. Use Microsoft Entra ID–based authorization by assigning users or groups the Storage Blob Data Reader role on the container, and require them to sign in with their Entra ID accounts.

  • C. Share the storage account access key with all users and instruct them to configure the key in the Azure portal and AzCopy.

  • D. Generate an account SAS for the container and email a unique SAS URL to each user, regenerating the SAS when someone leaves the organization.

Best answer: B

Explanation: The requirement focuses on controlling and auditing access per user through Microsoft Entra ID, while avoiding distribution of storage account keys or SAS tokens. The best way to achieve this is to use identity-based authorization for Azure Storage, where users authenticate with their Entra ID identities and are granted data-plane permissions via Azure RBAC.

For blob data, you can assign roles such as Storage Blob Data Reader or Storage Blob Data Contributor at the storage account or container scope (the container is the narrowest scope for these data-plane roles). Users then access blobs through the Azure portal, AzCopy, or SDKs using their Entra ID credentials. Access events can be logged and correlated to specific Entra users, and revoking access is as simple as removing a role assignment or disabling an account.

Key-based and SAS-based methods revolve around secrets derived from the storage account key. They are not inherently bound to individual identities and are harder to manage and audit per user, especially at scale. They also require secret distribution and rotation, which increases operational effort and risk.
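A hedged Azure CLI sketch of the role assignment in option B is shown below. The subscription ID, resource group, account, container name, and group object ID are placeholders, not values from the scenario.

```shell
# Grant a security group read access to blob data in one container.
az role assignment create \
  --assignee "<reports-readers-group-object-id>" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default/containers/reports"

# Users then sign in with Entra ID instead of presenting a key or SAS:
azcopy login
azcopy copy "https://<account>.blob.core.windows.net/reports/summary.pdf" .
```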


Question 4

Topic: Implement and Manage Storage

Which of the following statements about Azure file share snapshots are correct from an administrator’s perspective? (Select THREE.)

Options:

  • A. Snapshots are incremental; after the first snapshot, only changes consume additional storage, which makes taking frequent snapshots more cost-effective.

  • B. Snapshots are automatically replicated to the paired region, so they can be used as a disaster recovery solution for a regional outage without additional configuration.

  • C. You can restore an individual file from a snapshot by copying the earlier version from the snapshot to the active share using tools such as the Azure portal, PowerShell, or SMB clients.

  • D. A snapshot is a point-in-time, read-only view of a file share that is stored in the same storage account and counts toward the share’s capacity.

  • E. Deleting the file share immediately deletes all associated snapshots, even if they were taken recently, and you can still restore the share from those snapshots later.

  • F. Azure Backup is required in order to create any Azure Files snapshots; snapshots cannot be created directly on the storage account.

Correct answers: A, C and D

Explanation: Azure file share snapshots provide a lightweight, point-in-time way to protect data in an Azure Files share. They are designed for fast, within-share recovery of files or entire directory trees after accidental deletion or corruption, but they are not a replacement for full-featured backup.

A snapshot captures the state of the share at a specific time and is stored alongside the share in the same storage account and region. Because snapshots are incremental, only blocks that change after the first snapshot consume extra capacity, which makes frequent snapshots relatively cost-effective compared to creating full copies of the share.

Administrators can browse snapshots and recover individual files or folders by copying data from the snapshot back into the live share. This supports simple restore scenarios without rolling back the entire share. However, snapshots do not provide independent retention or region-level resilience: they are tied to the life of the share and the storage account and rely on the account’s redundancy (such as LRS or GRS) for durability.

For long-term retention, protection against accidental deletion of the whole share, and compliance scenarios, Azure Backup for Azure Files should be used in addition to snapshots, not instead of them.
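The snapshot-and-restore workflow described above might be scripted with the Azure CLI roughly as follows. All names and the snapshot timestamp are placeholders, and the exact parameters (particularly --file-snapshot) should be checked against the current az reference.

```shell
# Take a snapshot of a file share (all names are placeholders).
az storage share snapshot --account-name <account> --name corpdata

# List shares including snapshots to find the snapshot timestamp.
az storage share list --account-name <account> --include-snapshots

# Restore one file by copying it from the snapshot back into the live share.
az storage file copy start \
  --account-name <account> \
  --source-share corpdata --source-path reports/budget.xlsx \
  --file-snapshot "2026-05-01T09:00:00.0000000Z" \
  --destination-share corpdata --destination-path reports/budget.xlsx
```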


Question 5

Topic: Implement and Manage Storage

You manage an Azure storage account that has its network access configured to “Selected networks”. A Microsoft.Storage service endpoint is enabled on a subnet in a virtual network (VNet1). VMs in VNet1 can access the blobs, but an on-premises file server connected to VNet1 by a site-to-site VPN receives 403 Forbidden when accessing the same storage account. You must restore access from on-premises without exposing the storage account to the public internet. What should you do?

Options:

  • A. Enable the Microsoft.Storage service endpoint on the VPN gateway subnet so that on-premises traffic can use the endpoint.

  • B. Add the on-premises subnet address range to the storage account’s “Virtual networks” list in the firewall settings.

  • C. Change the storage account firewall setting from “Selected networks” to “All networks” and rely on shared access signatures (SAS) for security.

  • D. Create a private endpoint for the storage account in a subnet in VNet1 and configure on-premises DNS to resolve the storage account name to the private IP.

Best answer: D

Explanation: The storage account is locked down to “Selected networks” and a Microsoft.Storage service endpoint is configured on a subnet in VNet1. As a result, traffic from that Azure subnet is allowed even though public network access to the storage account is restricted.

However, service endpoints only apply to traffic originating from Azure virtual networks. On-premises systems, even when connected via site-to-site VPN, are not considered part of the VNet for the purpose of service endpoints. When the on-premises file server accesses the storage account, its traffic reaches the storage account’s public endpoint, and the storage firewall evaluates the source as an external IP, not a VNet. Because only selected VNets are allowed, the firewall returns 403 Forbidden.

To maintain a non-public posture and still allow on-premises access, you should use a private endpoint. A private endpoint assigns a private IP address from your VNet to the storage account. Traffic from on-premises to that private IP flows entirely over your VPN and is evaluated as internal, bypassing the public endpoint restrictions. Proper DNS configuration ensures that the storage account name resolves to the private IP for on-premises clients.

Therefore, creating a private endpoint in VNet1 and updating DNS so on-premises clients resolve the storage account to this private IP is the best solution that satisfies both secure and functional requirements.
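A minimal sketch of the private endpoint creation, assuming placeholder names and a subnet called endpoints in VNet1 (DNS zone setup is summarized in a comment rather than shown):

```shell
# Create a private endpoint for the blob service in VNet1 (placeholder names).
az network private-endpoint create \
  --resource-group <rg> --name storage-pe \
  --vnet-name VNet1 --subnet endpoints \
  --private-connection-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
  --group-id blob \
  --connection-name storage-pe-conn

# On-premises DNS must then resolve <account>.blob.core.windows.net to the
# endpoint's private IP, typically via the privatelink.blob.core.windows.net zone.
```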


Question 6

Topic: Implement and Manage Storage

You manage several applications that read and write data to an Azure Blob Storage account.

Security requirements:

  • Do not store long-lived secrets (keys or tokens) in application code or configuration.
  • Prefer centrally managed, easily revocable access.
  • Use Azure-native authentication where possible for Azure-hosted workloads.

Which of the following configurations should you AVOID for these applications? (Select THREE.)

Options:

  • A. Configure a user-assigned managed identity shared by several Azure Virtual Machines, restrict the storage account with a private endpoint, and assign the Storage Blob Data Reader or Contributor role to that identity at the container scope.

  • B. Configure an Azure Virtual Machine to use a system-assigned managed identity, grant that identity the Storage Blob Data Contributor role at the container scope, and use Azure AD authentication from the VM code to access blobs.

  • C. Create a single account-level SAS with full permissions on the storage account, set it to expire in one year with no IP restrictions, store it as a secret in Azure Key Vault, and have an Azure Function retrieve and use that SAS at runtime.

  • D. Configure an Azure App Service web app to use a storage account connection string that embeds the account key, stored in App Service application settings, and rotate the key manually once per year.

  • E. Configure an Azure App Service containerized web app with a hard-coded SAS token compiled into the application image, with a 5-year expiry, and distribute the image to multiple partners.

Correct answers: C, D and E

Explanation: For Azure services such as Virtual Machines and App Service, the recommended way to access Azure Blob Storage is via managed identities and Azure RBAC. This removes the need to store account keys or SAS tokens in code or configuration and allows access to be controlled centrally.

Long-lived secrets like storage account keys or broad, long-duration SAS tokens are harder to rotate and revoke and increase the risk of compromise, especially when embedded in application code or widely distributed images. When workloads run inside Azure, managed identities should be preferred so that the platform handles credential issuance and rotation.

In this scenario, you must avoid configurations that rely on long-lived secrets or that do not use Azure-native identity when it is available. The unsafe options are those that embed or rely on powerful, long-lived keys or SAS tokens rather than managed identities and RBAC.
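For contrast with the unsafe options, the recommended pattern from options A and B could be sketched like this; every name is a placeholder rather than a value from the scenario.

```shell
# Enable a system-assigned managed identity on a VM and grant it
# data-plane access to one container (placeholder names throughout).
az vm identity assign --resource-group <rg> --name app-vm

principal_id=$(az vm show --resource-group <rg> --name app-vm \
  --query identity.principalId --output tsv)

az role assignment create \
  --assignee "$principal_id" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default/containers/appdata"

# Code on the VM can then obtain tokens from the platform (for example via
# DefaultAzureCredential in the Azure SDKs) with no stored secret.
```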


Question 7

Topic: Implement and Manage Storage

You administer a mission-critical app that uses a single Azure Storage account from multiple web apps. You must rotate the storage account access keys monthly while minimizing downtime and manual updates. Which THREE of the following configurations should you AVOID? (Select THREE.)

Options:

  • A. Ensure all applications currently use key1, regenerate key2, update connection strings to key2, verify access, and then regenerate key1.

  • B. Regenerate both key1 and key2 at the same time during a maintenance window, then update all application connection strings to use the new keys.

  • C. Hard-code the storage account connection string, including key1, inside the application source code and perform a full redeploy every time you rotate the key.

  • D. Store the storage account keys as secrets in Azure Key Vault and configure apps to use Key Vault references in their settings, then rotate keys by updating the Key Vault secrets one key at a time.

  • E. For new workloads that support Microsoft Entra ID, use identity-based access to Azure Storage instead of account keys, reducing how often you must rotate keys for those apps.

  • F. Regenerate key1 even though applications are still using key1, accept that storage access will fail temporarily, and plan to update application connection strings over the next few hours.

Correct answers: B, C and F

Explanation: Azure Storage accounts expose two access keys, key1 and key2, so you can rotate them without interrupting applications. To minimize downtime, you should use a rolling rotation approach: ensure all applications use one key, regenerate the unused key, update applications to use the newly regenerated key, validate access, and only then regenerate the original key.

Centralizing secrets in Azure Key Vault and using configuration, not source code, for connection strings helps update keys without code changes. Where possible, identity-based access via Microsoft Entra ID further reduces dependence on storage account keys, lowering both security risk and operational overhead.

The configurations to avoid are those that invalidate all keys at once, invalidate the key currently in use before applications are updated, or force redeployments and manual changes by hard-coding keys in application code. These patterns create unnecessary downtime and operational risk.
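The rolling rotation described above can be sketched with the Azure CLI as follows. This assumes all applications start on key1 and read the active key from a Key Vault secret; names are placeholders, and the --key values (primary/secondary) should be confirmed against the current az reference.

```shell
# Rolling key rotation sketch; assumes all apps currently use key1.
# 1. Regenerate the key that is NOT in use.
az storage account keys renew --resource-group <rg> \
  --account-name <account> --key secondary

# 2. Publish the fresh key2 where apps read it (here, a Key Vault secret).
new_key2=$(az storage account keys list --resource-group <rg> \
  --account-name <account> --query "[1].value" --output tsv)
az keyvault secret set --vault-name <vault> --name storage-key --value "$new_key2"

# 3. After confirming apps work on key2, regenerate the now-idle key1.
az storage account keys renew --resource-group <rg> \
  --account-name <account> --key primary
```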


Question 8

Topic: Implement and Manage Storage

You manage an Azure Storage account configured with the firewall set to “Selected networks.” An on‑premises application that previously worked now receives 403 errors when accessing blobs using a valid connection string. You confirm that the application’s public IP address is not listed in the storage firewall. You must restore access for this application while keeping all other internet clients blocked. Which action should you take?

Options:

  • A. Generate a new user delegation SAS with read and write permissions and update the application to use it.

  • B. Assign the Storage Blob Data Contributor role to the application’s identity at the storage account scope.

  • C. Add the application’s public IP address to the storage account firewall allowed IP addresses list.

  • D. Change the storage account firewall setting to allow access from all networks.

Best answer: C

Explanation: The scenario indicates that the storage account firewall is configured for Selected networks, and the on‑premises application’s public IP address is not listed. This points to the firewall blocking the traffic, even though the connection string and credentials are valid. The requirement is to restore access for this specific application while keeping other internet clients blocked, so the fix must be a targeted firewall configuration change, not a broad relaxation of network security.

Azure Storage firewalls operate at the network layer. If a client’s public IP is not in the allowed list (or not coming from an allowed virtual network or private endpoint), the request is blocked before authorization is evaluated. SAS tokens and Azure RBAC are evaluated after the network access check; they cannot override a firewall deny.

Therefore, the correct solution is to add the on‑premises application’s public IP to the storage account’s allowed IP addresses. This restores the application’s access and preserves the security posture that blocks other internet clients.
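The targeted firewall change might look like this in the Azure CLI; the resource names are placeholders and 203.0.113.10 is a documentation example address standing in for the application's real public IP.

```shell
# Allow one public IP through the storage firewall.
az storage account network-rule add \
  --resource-group <rg> --account-name <account> \
  --ip-address 203.0.113.10

# Confirm the rule; "Selected networks" stays in effect for everyone else.
az storage account network-rule list \
  --resource-group <rg> --account-name <account>
```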


Question 9

Topic: Implement and Manage Storage

Which of the following statements about mounting Azure Files SMB shares from Windows and Linux clients is NOT correct?

Options:

  • A. On Windows, you can map an Azure Files SMB share using the net use command and authenticate with the storage account name and key.

  • B. SMB access to Azure Files supports only storage account key authentication; integration with Microsoft Entra ID or Active Directory for identity-based access is not available.

  • C. When using identity-based access to Azure Files over SMB, access is controlled through NTFS-style ACLs that are assigned to user and group identities.

  • D. On Linux, you can mount an Azure Files SMB share using the mount -t cifs command and provide the storage account name and key as credentials.

Best answer: B

Explanation: Azure Files offers SMB file shares that can be mounted from Windows and Linux clients using familiar OS tools. For authentication, administrators can choose between using storage account keys (a shared secret) or identity-based access integrated with Microsoft Entra ID or Active Directory.

On Windows, Azure Files SMB shares are typically mapped using the net use command or the File Explorer UI. With key-based authentication, the storage account name is used as the username and the storage account key as the password; with identity-based authentication, users connect with their domain credentials instead.

On Linux, Azure Files SMB shares are usually mounted with the mount -t cifs command or persistently via /etc/fstab, again using the storage account name and key when using key-based authentication.

In addition to key-based access, Azure Files supports identity-based authentication over SMB using Microsoft Entra ID or Active Directory. With identity-based access, users access shares with their domain credentials, and administrators control permissions via NTFS-style ACLs on the Azure file share.

Therefore, any statement that claims SMB access to Azure Files only supports storage account keys and cannot use identity-based authentication is inaccurate.
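The two key-based mount commands referenced in options A and D might look like the following sketch; the account name, share name, and key are placeholders, and the Azure\ username prefix for net use is the documented convention to verify for your environment.

```shell
# Windows (cmd), key-based mapping; the username is conventionally
# prefixed with "Azure\":
#   net use Z: \\<account>.file.core.windows.net\<share> /user:Azure\<account> <storage-account-key>

# Linux, key-based SMB 3.x mount:
sudo mkdir -p /mnt/share
sudo mount -t cifs //<account>.file.core.windows.net/<share> /mnt/share \
  -o vers=3.0,username=<account>,password='<storage-account-key>',serverino
```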


Question 10

Topic: Implement and Manage Storage

You manage an Azure storage account that hosts confidential reports in a blob container. External partners access the reports using SAS URLs that are hard-coded in their applications. You must be able to shorten or revoke partners’ SAS access centrally, without requiring them to update their stored URLs. Which approach should you use?

Options:

  • A. Enable the storage account firewall and add or remove the partners’ public IP addresses as needed.

  • B. Create a stored access policy on the container and generate SAS tokens that reference this policy.

  • C. Use an account-level SAS with a short expiry and regenerate the storage account keys when a partner should lose access.

  • D. Require partners to authenticate with Microsoft Entra ID and issue user delegation SAS tokens directly to each partner.

Best answer: B

Explanation: Stored access policies provide a way to centrally manage permissions and lifetimes for Shared Access Signatures (SAS) on a container or file share. When you create a SAS that references a stored access policy, the SAS does not hard-code its own start time, expiry, or permissions. Instead, it points to the policy.

Because the SAS depends on the stored access policy, you can later update or delete the policy to immediately affect all SAS tokens that reference it. This means you can shorten the validity period, remove permissions, or revoke access altogether without requiring clients to change the SAS URLs they already use.

In the scenario, the key deciding factor is the ability to centrally adjust or revoke SAS access without redistributing new tokens to partners. The only option that directly supports this capability is generating SAS tokens that are bound to a stored access policy on the container.
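A hedged Azure CLI sketch of the stored access policy workflow, with placeholder account and policy names:

```shell
# Create a stored access policy on the container and issue a SAS bound to it
# (permissions "r" = read).
az storage container policy create \
  --account-name <account> --container-name reports \
  --name partner-read --permissions r \
  --expiry 2026-12-31T00:00:00Z

az storage container generate-sas \
  --account-name <account> --name reports \
  --policy-name partner-read --output tsv

# Shortening or deleting the policy later revokes every SAS that references it:
az storage container policy delete \
  --account-name <account> --container-name reports --name partner-read
```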

Continue with full practice

Use the AZ-104 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Try AZ-104 on Web · View AZ-104 Practice Test

Free review resource

Read the AZ-104 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.

Revised on Thursday, May 14, 2026