Try 10 focused Java 21 1Z0-830 questions on Managing Concurrent Code Execution, with explanations, then continue with IT Mastery.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
Try Java 21 1Z0-830 on Web | View full Java 21 1Z0-830 practice page
| Field | Detail |
|---|---|
| Exam route | Java 21 1Z0-830 |
| Topic area | Managing Concurrent Code Execution |
| Blueprint weight | 10% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Managing Concurrent Code Execution for Java 21 1Z0-830. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 10% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Managing Concurrent Code Execution
An order-report job was changed from stream() to parallelStream() to reduce runtime. It now sometimes reports a total less than orders.size(), although no orders are filtered. Assume orders is a fixed List<Order> that is not modified while this method runs.
```java
record Order(String region) {}

static Map<String, Long> summarize(List<Order> orders) {
    var counts = new HashMap<String, Long>();
    orders.parallelStream()
          .forEach(o -> counts.merge(o.region(), 1L, Long::sum));
    return counts;
}
```
Which option is the best cause and fix while preserving parallel processing?
Options:
A. forEach returns early; wait before reading the map.
B. merge drops duplicate keys; replace it with putIfAbsent.
C. Shared HashMap updates race; use groupingByConcurrent.
D. The source list is unsafe; copy it inside the lambda.
Best answer: C
Explanation: The symptom comes from shared mutable state in a parallel stream. Multiple worker threads call HashMap.merge() on the same map, but HashMap is not thread-safe. Use a concurrent reduction, such as Collectors.groupingByConcurrent(Order::region, Collectors.counting()), instead of side-effecting forEach.
Parallel stream lambdas should be non-interfering and avoid unsafe shared mutable state. Here, the shared HashMap is updated concurrently by multiple worker threads, so updates can be lost or the map can become internally inconsistent. A better parallel solution is to let the stream perform a concurrent reduction:
```java
return orders.parallelStream()
             .collect(Collectors.groupingByConcurrent(
                 Order::region, Collectors.counting()));
```
The terminal operation already waits for the pipeline to finish; the problem is the unsafe target of mutation, not premature return.
forEach completes only after the stream pipeline finishes. Changing merge to putIfAbsent would not count duplicates and would not make HashMap thread-safe.

Topic: Managing Concurrent Code Execution
A monitoring component has a Runnable poll that performs one polling cycle. It must run the first cycle 1 second after startup, then wait 3 seconds after each completed cycle before starting the next one. The executor will be stored and shut down later. Which concurrency pattern correctly applies the Java SE 21 scheduled executor API?
Options:
A. Create a ScheduledExecutorService and call schedule(poll, 3, TimeUnit.SECONDS).
B. Create a ScheduledExecutorService and call scheduleAtFixedRate(poll, 1, 3, TimeUnit.SECONDS).
C. Create a ScheduledExecutorService and call scheduleWithFixedDelay(poll, 1, 3, TimeUnit.SECONDS).
D. Create an ExecutorService and call submit(poll) after Thread.sleep(1_000).
Best answer: C
Explanation: The requirement is completion-to-start spacing: each run finishes, then the service waits 3 seconds before the next run begins. ScheduledExecutorService.scheduleWithFixedDelay() is the Java SE API designed for that pattern, with a separate initial delay for the first run.
ScheduledExecutorService supports both one-shot and periodic task execution. For periodic tasks, scheduleWithFixedDelay uses the delay between the end of one execution and the start of the next. Its arguments are the task, initial delay, delay between completed runs, and time unit, so poll, 1, 3, TimeUnit.SECONDS matches the stated timing. By contrast, fixed-rate scheduling aims for a regular start-to-start period, which is not the same rule when task duration matters.
The key distinction is fixed delay versus fixed rate.
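A minimal runnable sketch of the fixed-delay pattern, assuming a placeholder poll body (the real polling cycle and the class name are hypothetical; the 1-second and 3-second delays come from the stem):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class PollScheduler {
    // Completion-to-start spacing: first run after initialDelay, then `delay`
    // between the END of one run and the START of the next.
    static ScheduledFuture<?> start(ScheduledExecutorService ses, Runnable task,
                                    long initialDelay, long delay, TimeUnit unit) {
        return ses.scheduleWithFixedDelay(task, initialDelay, delay, unit);
    }

    public static void main(String[] args) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        // Hypothetical poll body; the real polling cycle is an assumption.
        start(ses, () -> System.out.println("polling"), 1, 3, TimeUnit.SECONDS);
        ses.shutdown(); // in the real service the executor is stored and shut down later
    }
}
```

Swapping in scheduleAtFixedRate with the same arguments would instead target a 3-second start-to-start period, which drifts from the requirement whenever a cycle takes measurable time.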
schedule runs the task once after the given delay.

Topic: Managing Concurrent Code Execution
A Java 21 batch job was changed to use a parallel stream. In production, it sometimes prints a number smaller than 10,000 and occasionally fails while adding results. The team must keep parallel processing and avoid shared mutable state inside the pipeline. What is the best next fix?
```java
var ids = IntStream.range(0, 10_000).boxed().toList();
var results = new ArrayList<String>();
ids.parallelStream()
   .map(id -> "U" + id)
   .forEach(results::add);
System.out.println(results.size());
```
Options:
A. Change the terminal operation to forEachOrdered.
B. Use ids.parallelStream().map(...).toList() for results.
C. Call results.ensureCapacity(10_000) before the stream.
D. Wrap results with Collections.synchronizedList.
Best answer: B
Explanation: The failure comes from mutating a shared ArrayList from multiple parallel stream worker threads. A better parallel-stream design keeps the pipeline non-interfering and lets the stream framework build the result, such as with toList() after map.
Parallel stream operations should use stateless, non-interfering functions. In the snippet, each worker can call results.add(...) concurrently, but ArrayList is not thread-safe; its internal size and array updates can be corrupted or lost. Building the result through the stream terminal operation lets the stream implementation manage per-task accumulation and combination safely.
The key takeaway is to avoid shared mutable side effects in parallel pipelines rather than trying to patch the shared list.
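Under that rule, the batch job can be rewritten so the terminal operation builds the result. A sketch with a hypothetical class and method name:

```java
import java.util.List;
import java.util.stream.IntStream;

public class ParallelCollect {
    // The stream framework accumulates per-thread and combines the pieces
    // safely; toList() also preserves encounter order for the parallel stream.
    static List<String> userIds(int n) {
        return IntStream.range(0, n)
                        .boxed()
                        .toList()
                        .parallelStream()
                        .map(id -> "U" + id)
                        .toList();
    }

    public static void main(String[] args) {
        System.out.println(userIds(10_000).size()); // reliably 10000
    }
}
```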
forEachOrdered and ensureCapacity do not make ArrayList.add() atomic or thread-safe, and a synchronized wrapper keeps the shared mutable state the team must avoid.

Topic: Managing Concurrent Code Execution
An application submits one worker to an ExecutorService. The worker calls this method:
```java
static int load(Path path) throws IOException { /* ... */ }
```
The submitting code must obtain the int result from a Future<Integer> and detect an IOException failure when calling Future.get(). Which worker declaration best fits?
Options:
A. Runnable task = () -> load(path);
B. Callable<Void> task = () -> { load(path); return null; };
C. Callable<Integer> task = () -> load(path);
D. Runnable task = () -> { try { load(path); } catch (IOException e) { throw e; } };
Best answer: C
Explanation: A Callable<V> is the worker type designed for tasks that produce a result and may throw checked exceptions. When submitted to an executor, its result is available through Future<V>, and a checked exception is wrapped for retrieval by Future.get().
The decisive distinction is Callable versus Runnable. Runnable.run() returns void and does not declare checked exceptions, so it cannot directly represent a worker that both returns an int and propagates IOException. Callable<Integer>.call() returns an Integer and declares throws Exception, so the lambda can call load(path). If load throws IOException, the executor stores the failure and Future.get() throws ExecutionException whose cause is the original IOException.
The key takeaway is to use Callable when the task has a result or checked failure behavior that must be observed by the submitter.
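A runnable sketch of the pattern; the load body here (counting lines in a file) and the class name are assumptions standing in for the stem's implementation:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class LoadTask {
    // Stand-in for the stem's load(Path): counts lines (an assumption).
    static int load(Path path) throws IOException {
        try (var lines = Files.lines(path)) {
            return (int) lines.count();
        }
    }

    static int submitAndGet(ExecutorService pool, Path path)
            throws InterruptedException, IOException, ExecutionException {
        Callable<Integer> task = () -> load(path); // may throw the checked IOException
        Future<Integer> future = pool.submit(task);
        try {
            return future.get();
        } catch (ExecutionException e) {
            // The checked failure arrives wrapped; unwrap to detect it.
            if (e.getCause() instanceof IOException io) throw io;
            throw e;
        }
    }
}
```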
Runnable cannot produce the Future<Integer> result required by the stem. Callable<Void> discards the int result. Rethrowing from Runnable.run() is invalid because run() does not declare checked exceptions.

Topic: Managing Concurrent Code Execution
A metrics component must support this loop while other code may add keys. It does not need a point-in-time snapshot, but iteration must be weakly consistent rather than fail-fast. Which declaration for counts correctly satisfies the requirement?
```java
/* choose declaration for counts */
counts.put("a", 1);
counts.put("b", 2);
for (var key : counts.keySet()) {
    counts.put("c", 3);
}
```
Options:
A. Map<String, Integer> counts = new ConcurrentHashMap<>();
B. Map<String, Integer> counts = Collections.synchronizedMap(new HashMap<>());
C. Map<String, Integer> counts = new HashMap<>();
D. Map<String, Integer> counts = new TreeMap<>();
Best answer: A
Explanation: ConcurrentHashMap provides weakly consistent iterators for its collection views, including keySet(). They do not throw ConcurrentModificationException merely because the map changes during traversal, and they are not required to show every update.
The key rule is that iterator behavior comes from the collection implementation, not just the Map reference type. ConcurrentHashMap view iterators are weakly consistent: they can proceed while the map is being updated and may reflect some, all, or none of the changes made after traversal begins. By contrast, ordinary map implementations such as HashMap and TreeMap have fail-fast iterators for their views; structural modification outside the iterator can cause ConcurrentModificationException. A synchronized wrapper serializes individual method calls but does not turn the wrapped map’s iterator into a weakly consistent iterator. For this requirement, a concurrent collection implementation is the deciding feature.
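The contrast can be demonstrated directly; a sketch with a hypothetical class name, wrapping the stem's loop in a method so either map implementation can be passed in:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WeaklyConsistent {
    // Returns how many keys the loop visited. With a ConcurrentHashMap the
    // mid-iteration put never throws, and the new key "c" may or may not be seen.
    static int iterateWhileAdding(Map<String, Integer> counts) {
        counts.put("a", 1);
        counts.put("b", 2);
        int visited = 0;
        for (var key : counts.keySet()) {
            counts.put("c", 3); // structural change during traversal
            visited++;
        }
        return visited;
    }

    public static void main(String[] args) {
        System.out.println(iterateWhileAdding(new ConcurrentHashMap<>())); // 2 or 3
    }
}
```

Passing a plain HashMap instead typically ends in ConcurrentModificationException on the iteration after the put, which is exactly the fail-fast behavior the stem rules out.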
HashMap and TreeMap use fail-fast view iterators for structural changes.

Topic: Managing Concurrent Code Execution
A method submits a Callable<String> using executor.submit(task). The caller must wait no more than 500 ms for the result; if the task is not finished, it should request interruption and return a fallback value. Which approach best meets this requirement?
Options:
A. Poll isDone() until it returns true, then call get().
B. Use get(500, TimeUnit.MILLISECONDS); on TimeoutException, call cancel(true).
C. Call cancel(false) immediately, then call timed get().
D. Call get() first; if it takes too long, then call cancel(true).
Best answer: B
Explanation: Future.get(long, TimeUnit) is the API that bounds how long the caller waits. If it throws TimeoutException, the task is still not complete, so cancel(true) is the appropriate way to request interruption before returning a fallback.
A plain Future.get() waits until the task completes, fails, or is canceled, so it does not satisfy a fixed 500 ms limit. The timed overload, get(500, TimeUnit.MILLISECONDS), either returns the result in time or throws TimeoutException. A timeout does not automatically cancel the task; it may keep running. Calling cancel(true) after the timeout requests cancellation and interruption if the task is running.
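A minimal sketch of that timeout-then-cancel pattern, with hypothetical class, method, and parameter names:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimedGet {
    // Bounded wait: return the result within the deadline, otherwise request
    // cancellation with interruption and return the fallback value.
    static String resultOrFallback(Future<String> future, long timeoutMs, String fallback)
            throws InterruptedException, ExecutionException {
        try {
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // mayInterruptIfRunning = true
            return fallback;
        }
    }
}
```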
isDone() is only a status check, not a timeout mechanism, and cancel(false) does not interrupt a running task.
get() can block indefinitely from the caller’s perspective. isDone() does not enforce the 500 ms limit by itself. cancel(false) does not request interruption of an already running task.

Topic: Managing Concurrent Code Execution
A log-cleanup task must allow the loop to add a follow-up task while iterating, and it must visit only the tasks that were present when the loop began. Which declaration should replace line X so this Java 21 program reliably prints scan,index | [scan, index, report]?
```java
import java.util.*;
import java.util.concurrent.*;

public class Tasks {
    public static void main(String[] args) {
        // line X
        var seen = new StringJoiner(",");
        for (String job : jobs) {
            if (job.equals("scan")) {
                jobs.add("report");
            }
            seen.add(job);
        }
        System.out.println(seen + " | " + jobs);
    }
}
```
Options:
A. Queue<String> jobs = new ConcurrentLinkedQueue<>(List.of("scan", "index"));
B. List<String> jobs = Collections.synchronizedList(new ArrayList<>(List.of("scan", "index")));
C. List<String> jobs = new CopyOnWriteArrayList<>(List.of("scan", "index"));
D. List<String> jobs = new ArrayList<>(List.of("scan", "index"));
Best answer: C
Explanation: CopyOnWriteArrayList is the collection that matches both requirements: safe structural modification during iteration and snapshot iteration behavior. The loop sees only scan and index, while the list printed afterward includes the added report element.
The core rule is that concurrent collections differ in their iterator guarantees. CopyOnWriteArrayList creates iterators over a snapshot of the array as it existed when iteration began. Adding report during the loop creates a new backing array, so the current enhanced for loop does not visit that new element. After the loop, the list itself contains the update, so the final print includes [scan, index, report].
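With option C plugged in at line X, the behavior is deterministic and can be checked directly; a sketch (class and method name are illustrative):

```java
import java.util.List;
import java.util.StringJoiner;
import java.util.concurrent.CopyOnWriteArrayList;

public class SnapshotDemo {
    static String run() {
        // line X: snapshot-iterating, safe-to-modify list
        List<String> jobs = new CopyOnWriteArrayList<>(List.of("scan", "index"));
        var seen = new StringJoiner(",");
        for (String job : jobs) {
            if (job.equals("scan")) {
                jobs.add("report"); // new backing array; this loop never sees it
            }
            seen.add(job);
        }
        return seen + " | " + jobs;
    }

    public static void main(String[] args) {
        System.out.println(run()); // scan,index | [scan, index, report]
    }
}
```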
A synchronized wrapper protects individual method calls, but it does not create snapshot iterators. A concurrent queue has weakly consistent iteration, which is safe but not a guarantee that only the original elements are visited.
An enhanced for loop over a plain ArrayList is still fail-fast: adding during iteration throws ConcurrentModificationException.

Topic: Managing Concurrent Code Execution
A file-ingestion service uses several producer threads and consumer threads. Producers must wait when the fixed-size work buffer is full, consumers must wait when the buffer is empty, and consumers must increment a per-customer processed count safely without surrounding the map update with synchronized. Assume InterruptedException is handled or propagated. Which combination best satisfies these requirements?
Options:
A. Use ConcurrentLinkedQueue.add()/remove() and ConcurrentHashMap.putIfAbsent().
B. Use ArrayBlockingQueue.offer()/poll() and ConcurrentHashMap.get() then put().
C. Use ArrayBlockingQueue.put()/take() and ConcurrentHashMap.merge().
D. Use ArrayDeque.addLast()/removeFirst() and Collections.synchronizedMap().
Best answer: C
Explanation: The requirement calls for blocking producer-consumer behavior and atomic shared-map updates. BlockingQueue.put() waits for space, take() waits for an element, and ConcurrentHashMap.merge() safely combines the read-modify-write count update for a key.
A bounded BlockingQueue such as ArrayBlockingQueue is designed for producer-consumer coordination. Its put() method blocks when the queue is full, and its take() method blocks when the queue is empty. For the shared count map, a plain get() followed by put() is a race because multiple threads can read the same old value. ConcurrentHashMap.merge(key, 1, Integer::sum) performs the per-key update atomically for this counting use case. The key distinction is using blocking queue operations and concurrent map compound operations instead of separate manual steps.
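A minimal sketch of that combination; the Job record, buffer capacity, and class name are assumptions for illustration:

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

public class Ingest {
    record Job(String customer) {}

    static final BlockingQueue<Job> buffer = new ArrayBlockingQueue<>(16);
    static final Map<String, Integer> processed = new ConcurrentHashMap<>();

    static void produce(Job job) throws InterruptedException {
        buffer.put(job); // blocks while the fixed-size buffer is full
    }

    static void consumeOne() throws InterruptedException {
        Job job = buffer.take(); // blocks while the buffer is empty
        processed.merge(job.customer(), 1, Integer::sum); // atomic per-key count
    }
}
```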
offer() and poll() return immediately rather than waiting for capacity or data. get() followed by put() is not an atomic increment under concurrent access. ConcurrentLinkedQueue is nonblocking, and putIfAbsent() does not increment an existing count. ArrayDeque is not a blocking queue, and a synchronized map wrapper does not make compound updates atomic without external locking.

Topic: Managing Concurrent Code Execution
An application uses Java 21 virtual threads. A shared counter is declared as private volatile int count;. Many tasks run concurrently and each executes count++;. Which Java rule best describes the thread-safety of this counter?
Options:
A. volatile gives visibility, but count++ can lose updates.
B. volatile makes count++ atomic for an int field.
C. An int field write can be torn without a lock.
D. Virtual threads serialize access to shared variables.
Best answer: A
Explanation: A volatile field provides visibility guarantees for reads and writes, but it does not make compound operations atomic. The increment expression reads the old value, computes a new value, and writes it back, so concurrent executions can cause lost updates.
The core issue is a lost update caused by a non-atomic compound action. In Java, count++ is not a single indivisible operation; it performs a read, an addition, and a write. Declaring the field volatile ensures that threads see recent writes to the field, but two threads can still read the same value and both write back the same incremented result. Virtual threads follow the same Java Memory Model rules as platform threads. Use synchronization, a lock, or an atomic class such as AtomicInteger when the update itself must be atomic.
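The lost-update effect and the AtomicInteger fix can be sketched side by side; class and method names are hypothetical, and the thread/iteration counts are arbitrary:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    static volatile int unsafeCount = 0;                        // visibility only
    static final AtomicInteger safeCount = new AtomicInteger(); // atomic updates

    static void runTasks(int threads, int perThread) throws InterruptedException {
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = Thread.ofVirtual().unstarted(() -> {
                for (int j = 0; j < perThread; j++) {
                    unsafeCount++;               // read-add-write: updates can be lost
                    safeCount.incrementAndGet(); // single indivisible operation
                }
            });
        }
        for (Thread t : workers) t.start();
        for (Thread t : workers) t.join();
    }

    public static void main(String[] args) throws InterruptedException {
        runTasks(8, 100_000);
        // safeCount is exactly 800000; unsafeCount usually ends up smaller.
        System.out.println(unsafeCount + " vs " + safeCount.get());
    }
}
```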
volatile does not make a read-modify-write expression indivisible. Writes to an int field are atomic (not torn), though compound increments are not.

Topic: Managing Concurrent Code Execution
In Java 21, a service uses a bounded BlockingQueue<Job> for producer-consumer handoff and a ConcurrentMap<String, Worker> for shared registration. Producers must wait when the queue is full, consumers must wait when it is empty, and registration must not overwrite an existing worker for the same ID. Which API choice satisfies these rules without external locking?
Options:
A. Use put, take, and putIfAbsent.
B. Use peek, remove, and a synchronized HashMap.
C. Use offer, poll, and put.
D. Use add, remove, and containsKey followed by put.
Best answer: A
Explanation: BlockingQueue has distinct methods for blocking, nonblocking, and exception-based queue operations. For a bounded producer-consumer handoff, put() waits for space and take() waits for an element. ConcurrentMap.putIfAbsent() performs the conditional map update atomically.
The core rule is to choose operations whose concurrency behavior matches the requirement. BlockingQueue.put(e) blocks until capacity is available, and take() blocks until an element is available. For the shared map, ConcurrentMap.putIfAbsent(key, value) is an atomic check-then-insert operation for that key, so no separate containsKey() test or external lock is needed.
offer() and poll() are useful when immediate failure is acceptable, while add() and remove() are exception-oriented queue methods. The key takeaway is that blocking queue operations and atomic concurrent-map operations encode the synchronization contract in the API call.
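The registration half of the answer can be sketched as follows; the Worker record and class name are illustrative assumptions (the queue half uses the same put()/take() calls shown for the previous question):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class Registry {
    record Worker(String id) {}

    static final ConcurrentMap<String, Worker> workers = new ConcurrentHashMap<>();

    // Atomic check-then-insert: true when this call registered the worker,
    // false when the ID was already taken (the existing worker survives).
    static boolean register(Worker worker) {
        return workers.putIfAbsent(worker.id(), worker) == null;
    }
}
```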
add() and remove() do not provide the required waiting behavior. offer() and poll() return immediately instead of blocking indefinitely. containsKey() followed by put() is not an atomic registration operation.

Use the Java 21 1Z0-830 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Try Java 21 1Z0-830 on Web | View Java 21 1Z0-830 Practice Test
Read the Java 21 1Z0-830 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.