Try 10 focused Java 17 1Z0-829 questions on Managing Concurrent Code Execution, with explanations, then continue with IT Mastery.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
Try Java 17 1Z0-829 on Web
View full Java 17 1Z0-829 practice page
| Field | Detail |
|---|---|
| Exam route | Java 17 1Z0-829 |
| Topic area | Managing Concurrent Code Execution |
| Blueprint weight | 10% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Managing Concurrent Code Execution for Java 17 1Z0-829. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 10% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Managing Concurrent Code Execution
In a Java 17 class, a method uses a shared ReentrantLock to protect an update but should avoid waiting forever if another thread already owns the lock. Which statement describes the correct rule for Lock.tryLock() usage?
Options:
A. tryLock() blocks like lock() until the lock is available.
B. A successful tryLock() is released automatically on method return.
C. Call unlock() after tryLock() even when it returned false.
D. Call unlock() only when tryLock() returned true.
Best answer: D
Explanation: tryLock() is a non-blocking lock attempt. If it returns true, the current thread acquired the lock and must release it, usually in a finally block; if it returns false, the thread must not call unlock().
With java.util.concurrent.locks.Lock, explicit locking means explicit release. The no-argument tryLock() attempts to acquire the lock immediately and returns a boolean result. A true result means the current thread owns the lock and should release it in a finally block to avoid leaving the lock held if protected code throws. A false result means no ownership was obtained, so calling unlock() would be invalid and can throw IllegalMonitorStateException for ReentrantLock.
The key pattern is: attempt, check the result, run protected work only when acquired, and release only when acquired.
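The attempt-check-release pattern can be sketched as a runnable example (TryLockDemo, tryUpdate, and value are illustrative names, not part of the question):

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    static final ReentrantLock LOCK = new ReentrantLock();
    static int value = 0;

    // Attempt, check the result, run protected work only when acquired,
    // release only when acquired.
    static boolean tryUpdate() {
        if (LOCK.tryLock()) {            // non-blocking attempt
            try {
                value++;                 // protected work
                return true;
            } finally {
                LOCK.unlock();           // release only because tryLock() returned true
            }
        }
        return false;                    // lock was busy: must NOT call unlock()
    }

    public static void main(String[] args) {
        System.out.println(tryUpdate()); // uncontended here, so prints true
    }
}
```

Note that the unlock() call sits inside the if-branch where acquisition succeeded; moving it outside would reproduce the IllegalMonitorStateException failure mode described above.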
Why the other options fail: tryLock() returns immediately; lock() is the blocking acquisition method. Explicit Lock objects are not released automatically the way synchronized monitors are; they must be released explicitly. A false return from tryLock() does not make the current thread the lock owner, so unlock() must not be called after it.
Topic: Managing Concurrent Code Execution
A scheduled cleanup runs while worker threads update the same registry. The current Java 17 code sometimes throws ConcurrentModificationException. The cleanup must not hold a lock for the whole scan, and it is acceptable for the scan to see some, all, or none of the updates made during the scan. Which refactor is the best fix?
    class ActiveJobs {
        private final Map<String, Long> lastSeen = new HashMap<>();
        void touch(String id) {
            lastSeen.put(id, System.nanoTime());
        }
        void purge(long cutoff) {
            for (var e : lastSeen.entrySet()) {
                if (e.getValue() < cutoff)
                    lastSeen.remove(e.getKey());
            }
        }
    }
Options:
A. Declare lastSeen as ConcurrentHashMap and call remove(key, value).
B. Run the cleanup with lastSeen.entrySet().parallelStream().
C. Iterate new ArrayList<>(lastSeen.entrySet()) while keeping HashMap.
D. Use Collections.synchronizedMap and leave the enhanced loop unchanged.
Best answer: A
Explanation: ConcurrentHashMap is designed for concurrent updates and weakly consistent iteration. Its iterators do not fail fast with ConcurrentModificationException; instead, a traversal may observe some concurrent changes, which the requirement explicitly allows. The remove(key, value) form protects a refreshed entry from being deleted.
Fail-fast iterators, such as those from HashMap views, are not designed for concurrent structural changes. They may throw ConcurrentModificationException when the map is changed outside the iterator, including by another thread or by calling map.remove() inside an enhanced for loop. ConcurrentHashMap provides weakly consistent iterators: they do not throw ConcurrentModificationException and may reflect concurrent updates made during the traversal. That matches the stated requirement because the cleanup can proceed without locking all writers for the whole scan. Using remove(key, value) also avoids removing a job that was refreshed after the entry value was read. A synchronized wrapper or snapshot of a HashMap does not provide the same nonblocking, concurrent traversal behavior.
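A minimal sketch of the option-A refactor, reusing the question's class and method names (the size accessor and main are added only for demonstration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ActiveJobs {
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

    void touch(String id) {
        lastSeen.put(id, System.nanoTime());
    }

    void purge(long cutoff) {
        // Weakly consistent iteration: no ConcurrentModificationException,
        // and remove(key, value) skips an entry that was refreshed after
        // this thread read its value.
        for (var e : lastSeen.entrySet()) {
            if (e.getValue() < cutoff) {
                lastSeen.remove(e.getKey(), e.getValue());
            }
        }
    }

    int size() {
        return lastSeen.size();
    }

    public static void main(String[] args) {
        var jobs = new ActiveJobs();
        jobs.touch("a");
        long cutoff = System.nanoTime(); // everything touched before this is stale
        jobs.touch("b");
        jobs.purge(cutoff);
        System.out.println(jobs.size()); // "a" purged, "b" kept: prints 1
    }
}
```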
Why the other options fail: copying the entry set does not help, because HashMap itself is not thread-safe. A parallel stream does not make a HashMap concurrent; it can add races while solving the wrong problem.
Topic: Managing Concurrent Code Execution
In Java 17, a developer reviews a parallel stream pipeline whose source is an ArrayList, so the stream has encounter order unless it is made unordered. Which statement correctly describes how encounter order affects forEach, forEachOrdered, limit, skip, and find operations?
Options:
A. limit and skip select arbitrary elements in parallel streams even when the stream is ordered.
B. findFirst and findAny are equivalent on ordered streams because both return the first element.
C. forEachOrdered and findFirst honor encounter order; forEach and findAny need not, while limit and skip are order-sensitive on ordered streams.
D. forEach must honor encounter order on ordered streams, while forEachOrdered is only useful for unordered streams.
Best answer: C
Explanation: Encounter order matters for several stream operations, especially with parallel streams. On an ordered stream, forEachOrdered, findFirst, limit, and skip have order-related guarantees, while forEach and findAny do not require encounter-order behavior.
A stream may have an encounter order based on its source, such as an ArrayList. In a parallel stream, forEach is allowed to process elements in any order, but forEachOrdered performs the action in encounter order when one exists. Similarly, findFirst returns the first element in encounter order, while findAny may return any element for better parallel performance. Operations such as limit(n) and skip(n) are also order-sensitive on ordered streams because they refer to the first n elements or the elements after them. The key distinction is that parallel execution does not remove encounter order, but some operations choose not to preserve it.
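These guarantees can be checked with a small runnable sketch (OrderDemo is an illustrative name):

```java
import java.util.List;

public class OrderDemo {
    public static void main(String[] args) {
        var nums = List.of(1, 2, 3, 4, 5);

        // findFirst honors encounter order even in parallel: always 1.
        int first = nums.parallelStream().findFirst().orElseThrow();

        // limit is order-sensitive on an ordered stream: always [1, 2].
        var firstTwo = nums.parallelStream().limit(2).toList();

        // forEachOrdered performs the action in encounter order,
        // even though upstream work may run on several threads.
        StringBuilder sb = new StringBuilder();
        nums.parallelStream().forEachOrdered(sb::append);

        System.out.println(first);    // 1
        System.out.println(firstTwo); // [1, 2]
        System.out.println(sb);       // 12345
    }
}
```

Replacing forEachOrdered with forEach here could print the digits in any order; replacing findFirst with findAny could return any element.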
Why the other options fail: forEach is explicitly not required to preserve encounter order. limit and skip are constrained by encounter order when the stream is ordered. findAny is intentionally less restrictive than findFirst, especially in parallel pipelines.
Topic: Managing Concurrent Code Execution
A reporting job was changed to use a parallel stream. jobs is a List<Job> in the required report order, and JobRunner.run returns a Result for that job.
    var completed = new ConcurrentLinkedQueue<String>();
    jobs.parallelStream()
        .map(JobRunner::run)
        .forEach(r -> completed.add(r.jobName()));
    System.out.println(completed);
The report must list job names in the same encounter order as jobs, not in the order work happens to complete. Which refactor is the best fix?
Options:
A. Store names in a ConcurrentSkipListSet.
B. Collect names with map(Result::jobName).toList().
C. Keep forEach, but use CopyOnWriteArrayList.
D. Add .unordered() before the terminal operation.
Best answer: B
Explanation: A parallel stream does not make forEach run in encounter order. A thread-safe collection prevents data races, but it does not make insertion order match the original list. Collecting from the ordered stream with toList() gives a deterministic report order.
The core issue is confusing thread safety with ordering. ConcurrentLinkedQueue makes concurrent add calls safe, but forEach on a parallel stream may invoke the action in any order. Because jobs is a List, the stream has an encounter order. Mapping each job to a result and collecting with toList() preserves that encounter order in the resulting list, while still allowing upstream work to be evaluated in parallel. The key takeaway is that completion order is not the same as stream encounter order.
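A sketch of the option-B fix, with hypothetical stand-ins for the question's Job, Result, and JobRunner types (the record definitions and run method are assumptions for the demo):

```java
import java.util.List;

public class ReportDemo {
    // Minimal stand-ins for the question's Job / Result / JobRunner types.
    record Job(String name) {}
    record Result(String jobName) {}
    static Result run(Job j) { return new Result(j.name()); }

    // Parallel mapping, but toList() preserves the list's encounter order.
    static List<String> orderedNames(List<Job> jobs) {
        return jobs.parallelStream()
                .map(ReportDemo::run)
                .map(Result::jobName)
                .toList();
    }

    public static void main(String[] args) {
        var jobs = List.of(new Job("alpha"), new Job("beta"), new Job("gamma"));
        System.out.println(orderedNames(jobs)); // [alpha, beta, gamma] every run
    }
}
```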
Topic: Managing Concurrent Code Execution
A job tracker is used by several worker threads. No ordering is required for IDs, but the completed count must not lose increments. report() may run while workers call done() and must not require an explicit synchronized block around the loop. Which replacement best fixes the unsafe state?
    class Tracker {
        private int completed;
        private final Set<String> ids = new HashSet<>();
        void done(String id) {
            completed++;
            ids.add(id);
        }
        void report() {
            for (String id : ids) System.out.println(id);
        }
    }
Options:
A. Use AtomicInteger and ConcurrentHashMap.newKeySet(); call incrementAndGet().
B. Use volatile int and HashSet; keep completed++.
C. Use LongAdder and TreeSet; call increment().
D. Use int and Collections.synchronizedSet(new HashSet<>()); keep completed++.
Best answer: A
Explanation: The shared counter needs an atomic update operation, not just visibility. The shared set also needs a concurrent collection whose iterator can tolerate updates while report() is running, which ConcurrentHashMap.newKeySet() provides.
This scenario combines two common thread-safety problems: a read-modify-write field update and concurrent collection traversal. completed++ is not atomic, so multiple workers can overwrite each other’s increments. AtomicInteger.incrementAndGet() performs the increment atomically. A regular HashSet or TreeSet is unsafe when modified concurrently, and a synchronized wrapper still requires manual synchronization when using an iterator in an enhanced for loop. A set from ConcurrentHashMap.newKeySet() is designed for concurrent access and has weakly consistent iteration, so report() can run while updates continue without throwing ConcurrentModificationException. The key distinction is visibility versus atomicity and ordinary collections versus concurrent collections.
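A sketch of the option-A replacement, reusing the question's Tracker shape (the completedCount accessor and main are added only to demonstrate that no increments are lost):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class Tracker {
    private final AtomicInteger completed = new AtomicInteger();
    private final Set<String> ids = ConcurrentHashMap.newKeySet();

    void done(String id) {
        completed.incrementAndGet(); // atomic read-modify-write
        ids.add(id);                 // concurrent set, weakly consistent iteration
    }

    void report() {
        // Safe to iterate while workers keep calling done().
        for (String id : ids) System.out.println(id);
    }

    int completedCount() {
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        var tracker = new Tracker();
        var threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            final int n = i;
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) tracker.done("t" + n + "-" + j);
            });
            threads[i].start();
        }
        for (var t : threads) t.join();
        System.out.println(tracker.completedCount()); // always 4000
    }
}
```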
Why the other options fail: completed++ is still a non-atomic read-modify-write operation, and HashSet remains unsafe. TreeSet is not thread-safe, even though LongAdder can be useful for counters.
Topic: Managing Concurrent Code Execution
A service uses the following methods to build a list of uppercase IDs. The source List is not modified while either method runs, and the required result is a deterministic list containing all mapped IDs in source encounter order.
    class Report {
        static List<String> a(List<String> ids) {
            var out = new ArrayList<String>();
            ids.parallelStream()
                .map(String::toUpperCase)
                .forEach(out::add);
            return out;
        }
        static List<String> b(List<String> ids) {
            return ids.parallelStream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());
        }
    }
Which statement correctly applies the Java SE 17 parallel stream rule?
Options:
A. Both methods are safe for this goal.
B. Only a() is safe for this goal.
C. Only b() is safe for this goal.
D. Neither method is safe for this goal.
Best answer: C
Explanation: Parallel stream operations should avoid unsafe mutation of shared external state. Method a() passes ArrayList::add to forEach, allowing concurrent mutation and nondeterministic ordering. Method b() uses a collector-based reduction, which is the safe aggregation pattern for this goal.
The core rule is that stream pipeline actions should be non-interfering and should not rely on unsafely shared mutable state. In a(), multiple worker threads may call out.add(...) on the same ArrayList, which is not thread-safe, and forEach does not preserve encounter order in a parallel stream. In b(), collect(Collectors.toList()) is a reduction operation: the stream framework manages accumulation and combines partial results safely. Because the source is an ordered List and the pipeline is not made unordered, the collected result preserves encounter order. The key takeaway is to aggregate parallel stream results with collectors rather than external mutation.
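The collector-based pattern from b() can be exercised directly (SafeAggregation is an illustrative wrapper class):

```java
import java.util.List;
import java.util.stream.Collectors;

public class SafeAggregation {
    public static void main(String[] args) {
        var ids = List.of("a", "b", "c", "d");

        // Collector-managed accumulation: the framework builds per-thread
        // partial lists and combines them, preserving encounter order.
        var upper = ids.parallelStream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(upper); // [A, B, C, D] every run
    }
}
```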
Why the verdict holds: forEach(out::add) mutates one unsynchronized ArrayList from parallel tasks, so a() has both a data-race risk and no forEach encounter-order guarantee, while b() is a safe way to produce the list.
Topic: Managing Concurrent Code Execution
An audit service stores records in priority order. Assume no other thread writes to standard output.
var records = List.of("A", "B", "C", "D");
Which pipeline is guaranteed by Java SE 17 to print BC in that order, even though the stream is parallel?
Options:
A. records.parallelStream().skip(1).limit(2).findAny().ifPresent(System.out::print);
B. records.parallelStream().skip(1).limit(2).forEachOrdered(System.out::print);
C. records.parallelStream().skip(1).limit(2).forEach(System.out::print);
D. records.parallelStream().unordered().skip(1).limit(2).forEachOrdered(System.out::print);
Best answer: B
Explanation: A List stream has a defined encounter order. For an ordered stream, skip and limit select elements using that order, and forEachOrdered preserves it even when the stream runs in parallel.
Encounter order is the source-defined traversal order of a stream. List.of("A", "B", "C", "D") supplies an ordered stream, so skip(1).limit(2) selects B and C in that order unless the stream is made unordered. In a parallel stream, forEach may perform actions in any order, but forEachOrdered performs actions according to the encounter order when one exists. findAny is also intentionally flexible and is not a way to process all selected elements. The key rule is that ordered intermediate selection does not make forEach ordered; the terminal operation still matters.
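Option B as a runnable sketch (PriorityDemo is an illustrative name):

```java
import java.util.List;

public class PriorityDemo {
    public static void main(String[] args) {
        var records = List.of("A", "B", "C", "D");

        // skip(1).limit(2) selects B and C by encounter order;
        // forEachOrdered prints them in that order even in parallel.
        records.parallelStream()
               .skip(1)
               .limit(2)
               .forEachOrdered(System.out::print); // prints BC every run
        System.out.println();
    }
}
```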
Why the other options fail: forEach may print the selected elements in either order when the stream is parallel. unordered() removes the requirement that skip and limit refer to the original list positions. findAny prints at most one selected element and does not guarantee the first one.
Topic: Managing Concurrent Code Execution
A telemetry service has a shared counter updated by many worker threads. The requirement is that each call to next() returns a distinct increasing value, without using a synchronized block or method. Which implementation meets the requirement?
Options:
A. Use AtomicInteger count; and return count.get() + 1.
B. Use int count; and return ++count.
C. Use volatile int count; and return ++count.
D. Use AtomicInteger count; and return count.incrementAndGet().
Best answer: D
Explanation: The key distinction is atomic compound update versus visibility only. volatile makes reads and writes visible across threads, but ++count is still a read-modify-write sequence that can lose updates. AtomicInteger.incrementAndGet() performs that compound action atomically.
For a shared counter, the operation must read the current value, add one, store the result, and return it as one indivisible action. An ordinary int provides no thread-safety guarantee for this shared mutation. A volatile int improves visibility, but ++count is still multiple steps, so two threads can read the same old value and both write the same new value. AtomicInteger provides atomic update methods such as incrementAndGet() and getAndIncrement() that are designed for this exact lock-free counter use case.
The key takeaway is that volatile is not a substitute for atomic read-modify-write operations.
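A minimal sketch of the option-D counter (the Telemetry class name and next method follow the question's wording; main is added only for demonstration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Telemetry {
    private final AtomicInteger count = new AtomicInteger();

    // One indivisible read-add-store-return per call.
    int next() {
        return count.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        var t = new Telemetry();
        var a = new Thread(() -> { for (int i = 0; i < 100_000; i++) t.next(); });
        var b = new Thread(() -> { for (int i = 0; i < 100_000; i++) t.next(); });
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(t.next()); // 200001: no increment was lost
    }
}
```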
get() + 1 does not update the counter at all.
Topic: Managing Concurrent Code Execution
An application must start two independent tasks using an ExecutorService so they may run concurrently, but the final message must be exactly AB every run. The order in which worker threads actually execute the tasks is not controlled. Which approach meets the requirement without relying on thread scheduling?
Options:
A. Call Thread.yield() in the task that prints B.
B. Call awaitTermination() and assume tasks finished in submission order.
C. Submit Callable<String> tasks and concatenate futureA.get() + futureB.get().
D. Submit two Runnable tasks that each call System.out.print().
Best answer: C
Explanation: Concurrent execution does not imply a deterministic execution order. Returning values through Future objects lets the main thread assemble the final message in a fixed order after the tasks complete.
With an ExecutorService, worker thread scheduling is controlled by the runtime and operating system, so tasks submitted first are not guaranteed to run or finish first. A reliable design avoids using task-side printing order as the output order. By making each task a Callable<String>, each Future represents one known result; calling get() waits for completion, and concatenating futureA.get() before futureB.get() fixes the final message order. This preserves concurrency for the work while making the final assembly deterministic.
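A sketch of the option-C approach (OrderedOutput and buildMessage are illustrative names):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OrderedOutput {
    static String buildMessage() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Callable<String> taskA = () -> "A"; // may run on either worker first
            Callable<String> taskB = () -> "B";
            var futureA = pool.submit(taskA);
            var futureB = pool.submit(taskB);
            // get() waits for each result; the concatenation order is fixed
            // by the main thread, not by scheduling.
            return futureA.get() + futureB.get();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildMessage()); // AB every run
    }
}
```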
yield() is only a scheduling hint and provides no ordering guarantee.
Topic: Managing Concurrent Code Execution
An account class is used by multiple platform threads. The team wants to keep simple monitor-based synchronization, but the current code sometimes loses updates:
    class Account {
        private Integer lock = 0;
        private int balance;
        void deposit(int amount) {
            synchronized (lock) {
                balance += amount;
                lock++;
            }
        }
        int getBalance() {
            synchronized (lock) {
                return balance;
            }
        }
    }
Which refactor is the best correction?
Options:
A. Use a private final Object and lock all balance access on it.
B. Keep Integer lock, but remove the lock++ statement.
C. Synchronize on new Object() inside each method.
D. Use virtual threads for deposit calls.
Best answer: A
Explanation: The lock object used with synchronized must have stable identity and be shared by all critical sections that protect the same state. This code changes the Integer reference and also uses a wrapper object as a monitor, both of which are unsafe locking choices.
Intrinsic locks work only when every critical section uses the same monitor object. In the snippet, synchronized (lock) evaluates the current Integer; then lock++ assigns a different wrapper object, so later calls can synchronize on a different monitor while an earlier call is still inside its critical section. Integer is also a value-based wrapper, and small values may be cached and shared, so it is not a safe private monitor. Use a dedicated private final Object lock = new Object(); and synchronize both deposit() and getBalance() on that lock. The key is a stable lock identity, not a changing value.
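The option-A refactor as a runnable sketch (main is added only to demonstrate that no updates are lost):

```java
public class Account {
    // Dedicated monitor with stable identity; never reassigned.
    private final Object lock = new Object();
    private int balance;

    void deposit(int amount) {
        synchronized (lock) {
            balance += amount;
        }
    }

    int getBalance() {
        synchronized (lock) {
            return balance;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        var acct = new Account();
        var a = new Thread(() -> { for (int i = 0; i < 10_000; i++) acct.deposit(1); });
        var b = new Thread(() -> { for (int i = 0; i < 10_000; i++) acct.deposit(1); });
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(acct.getBalance()); // 20000: no lost updates
    }
}
```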
Option B still synchronizes on an Integer, which is a poor monitor choice even if it is no longer incremented.
Use the Java 17 1Z0-829 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Try Java 17 1Z0-829 on Web
View Java 17 1Z0-829 Practice Test
Read the Java 17 1Z0-829 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.