Try 10 focused Java 21 1Z0-830 questions on Streams and Lambdas, with explanations, then continue with IT Mastery.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
Try Java 21 1Z0-830 on Web
View full Java 21 1Z0-830 practice page
| Field | Detail |
|---|---|
| Exam route | Java 21 1Z0-830 |
| Topic area | Working with Streams and Lambda Expressions |
| Blueprint weight | 11% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Working with Streams and Lambda Expressions for Java 21 1Z0-830. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 11% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Working with Streams and Lambda Expressions
Which statement about creating streams in Java 21 is correct?
Options:
A. Arrays.stream(int[]) creates an IntStream; Stream.of(int[]) creates a one-element Stream<int[]>.
B. Collection.stream() creates a stream that can be traversed again after a terminal operation.
C. IntStream.range(1, 4) includes both 1 and 4.
D. Files.lines(path) returns a List<String> after reading all lines eagerly.
Best answer: A
Explanation: Stream source APIs differ by source type. For a primitive array, Arrays.stream uses a primitive specialization such as IntStream, but Stream.of receives the entire primitive array as a single object element.
The core rule is that stream creation depends on the overload selected and the source type. Arrays.stream(int[]) has a primitive-array overload and returns IntStream, so its elements are the individual int values. Stream.of(int[]) is a generic varargs factory; because int[] is itself a reference type and cannot be expanded into Integer elements, the result is a Stream<int[]> containing one element: the array object. This distinction matters when choosing between primitive streams and object streams.
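A minimal runnable sketch of the distinction (class name is illustrative):

```java
import java.util.Arrays;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class StreamCreationDemo {
    public static void main(String[] args) {
        int[] nums = {1, 2, 3};
        // Primitive-array overload: an IntStream of the individual values.
        System.out.println(Arrays.stream(nums).sum());      // 6
        // Generic varargs factory: one element, the array object itself.
        Stream<int[]> wrapped = Stream.of(nums);
        System.out.println(wrapped.count());                // 1
        // range(1, 4) yields 1, 2, 3; the end bound is exclusive.
        System.out.println(IntStream.range(1, 4).count());  // 3
    }
}
```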
IntStream.range(start, end) excludes the end value. Files.lines(path) returns a lazy Stream<String>, not a List<String>.
Topic: Working with Streams and Lambda Expressions
A developer is troubleshooting a Java 21 report. The code throws IllegalStateException: Duplicate key open when a team has two tickets with the same status. The required result type is Map<String, Map<String, List<Integer>>>: team -> status -> ticket IDs.
record Ticket(String team, String status, int id) {}
var report = tickets.stream().collect(
    Collectors.groupingBy(Ticket::team,
        Collectors.toMap(Ticket::status, Ticket::id)));
Which change best fixes the stream result issue?
Options:
A. Add .distinct() before the collect() call.
B. Change the outer collector to Collectors.groupingBy(Ticket::status, Collectors.toList()).
C. Use downstream Collectors.groupingBy(Ticket::status, Collectors.mapping(Ticket::id, Collectors.toList())).
D. Add merge function (first, second) -> second to downstream toMap.
Best answer: C
Explanation: The downstream toMap requires unique status keys within each team unless a merge function is provided. The required output keeps all ticket IDs, so the better model is nested grouping with a downstream mapping collector that builds List<Integer> values.
Collectors.toMap() throws an IllegalStateException for duplicate keys when no merge function is supplied. In this code, the outer groupingBy groups by team, but the downstream toMap still requires each status to be unique inside a team. The target type says each team maps to a status map, and each status maps to a list of IDs. A nested groupingBy on status with mapping(Ticket::id, toList()) matches that shape directly: Map<String, Map<String, List<Integer>>>. A merge function that keeps one ID may avoid the exception, but it does not preserve all IDs or produce the required list-valued result.
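A runnable sketch of the corrected collector (the sample tickets are invented for illustration):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TicketReportDemo {
    record Ticket(String team, String status, int id) {}

    public static void main(String[] args) {
        var tickets = List.of(
            new Ticket("alpha", "open", 1),
            new Ticket("alpha", "open", 2),   // duplicate status within a team
            new Ticket("alpha", "closed", 3));
        // Nested grouping: team -> status -> list of ticket IDs.
        Map<String, Map<String, List<Integer>>> report = tickets.stream().collect(
            Collectors.groupingBy(Ticket::team,
                Collectors.groupingBy(Ticket::status,
                    Collectors.mapping(Ticket::id, Collectors.toList()))));
        System.out.println(report.get("alpha").get("open")); // [1, 2] — no exception
    }
}
```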
A merge function such as (first, second) -> second keeps only one ID per status instead of building List<Integer> values. distinct() compares whole Ticket records; it does not aggregate different tickets with the same status.
Topic: Working with Streams and Lambda Expressions
A developer is refactoring a stream pipeline to count names whose length is greater than zero. The following Java 21 code does not compile. Which change is the best correction while preserving that requirement?
import java.util.*;

class Demo {
    public static void main(String[] args) {
        var names = List.of("Ana", "", "Bo", " ");
        long count = names.stream()
            .filter(String::length)
            .count();
        System.out.println(count);
    }
}
Options:
A. Replace it with .filter(String::isBlank)
B. Replace it with .filter(String::length > 0)
C. Replace it with .filter((int n) -> n > 0)
D. Replace it with .filter(s -> s.length() > 0)
Best answer: D
Explanation: Stream.filter is target-typed to a Predicate, so its argument must accept a stream element and return boolean. String::length returns an int, which is not compatible with Predicate<String>. A lambda comparing the length to zero supplies the required boolean condition.
A method reference is not valid by itself; it must match the functional interface expected by the target context. In this pipeline, names.stream() is a Stream<String>, so filter expects a Predicate<? super String>. That means the argument must take a String and return boolean. The method reference String::length can fit a function-like target such as ToIntFunction<String>, but not a predicate. Writing s -> s.length() > 0 converts the length check into the boolean test that filter needs. The key distinction is between computing a value and testing a condition.
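The distinction between computing a value and testing a condition can be sketched as follows (variable names are illustrative):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.function.ToIntFunction;

public class PredicateDemo {
    public static void main(String[] args) {
        // String::length fits a function-like target that returns int...
        ToIntFunction<String> length = String::length;
        // ...but filter needs a boolean-valued test.
        Predicate<String> nonEmpty = s -> s.length() > 0;
        var names = List.of("Ana", "", "Bo", " ");
        System.out.println(names.stream().filter(nonEmpty).count()); // 3
        System.out.println(length.applyAsInt("Ana"));                // 3
    }
}
```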
The lambda in option C declares the wrong parameter type: filter receives a String, not an int. String::isBlank compiles but tests for blankness rather than a length greater than 0.
Topic: Working with Streams and Lambda Expressions
An application team is reviewing two implementations. names is a List, and the caller only reads the returned list.
static List<String> sequential(List<String> names) {
    var out = new ArrayList<String>();
    names.stream().map(String::toUpperCase).forEach(out::add);
    return out;
}

static List<String> parallel(List<String> names) {
    var out = new ArrayList<String>();
    names.parallelStream().map(String::toUpperCase).forEach(out::add);
    return out;
}
Which comparison is correct?
Options:
A. Both methods preserve encounter order because names is a List.
B. parallel() is safe but unordered; replacing forEach with forEachOrdered only changes order.
C. sequential() preserves encounter order; parallel() has unsafe shared mutation and no forEach order guarantee.
D. parallel() is safe and ordered because the terminal operation waits for all tasks.
Best answer: C
Explanation: The sequential version processes the ordered List without concurrent calls to out.add. The parallel version uses forEach, which does not guarantee encounter order, and it mutates a non-thread-safe ArrayList from multiple worker threads.
Parallel streams can split work across threads, so side effects that update shared mutable state must be thread-safe. In the parallel() method, multiple tasks may call out.add at the same time, but ArrayList is not designed for concurrent mutation. Also, forEach on a parallel stream is not required to respect the source list’s encounter order. A safer ordered approach is usually to avoid external mutation and let the stream collect the result, such as using map(...).toList() for a read-only result. The key distinction is not whether the source is ordered, but whether the terminal operation and side effects preserve order safely.
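A sketch of the safer collect-based alternative mentioned above (the input data is illustrative):

```java
import java.util.List;

public class SafeParallelDemo {
    public static void main(String[] args) {
        var names = List.of("ada", "grace", "linus");
        // No shared mutable state: toList() collects safely and, for an
        // ordered source, preserves encounter order even in parallel.
        List<String> upper = names.parallelStream()
            .map(String::toUpperCase)
            .toList();
        System.out.println(upper); // [ADA, GRACE, LINUS]
    }
}
```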
A List has encounter order, but parallel forEach need not preserve it. ArrayList.add is not safe for concurrent updates from a parallel stream.
Topic: Working with Streams and Lambda Expressions
An order service needs the alphabetically first 5 distinct SKU strings beginning with BK- across all orders. The current Java 21 code does not compile because it treats each order’s SKU list as if it were a single SKU. Which refactor is the best fix?
record Order(String id, List<String> skus) {}
var result = orders.stream()
    .map(Order::skus)
    .filter(s -> s.startsWith("BK-"))
    .distinct()
    .sorted()
    .limit(5)
    .toList();
Options:
A. Use peek(o -> o.skus().stream()), then continue with the existing filter.
B. Use map(o -> o.skus().stream()), then filter, distinct, sorted, and limit.
C. Use flatMap(o -> o.skus().stream()), then filter, distinct, sorted, and limit.
D. Keep map(Order::skus), then cast each mapped value to String.
Best answer: C
Explanation: The pipeline needs to flatten nested SKU lists into a stream of individual strings. flatMap is the appropriate operation because it maps each order to a stream of SKUs and then flattens those streams into one Stream<String>. After that, filter, distinct, sorted, and limit operate on SKUs as intended.
The original map(Order::skus) produces a Stream<List<String>>, so the next lambda parameter is a list, not a String. Since startsWith() is a String method, the pipeline does not compile. Use flatMap when each input element produces multiple output elements and the downstream operations should see one flattened stream.
A suitable pipeline is:
orders.stream()
    .flatMap(o -> o.skus().stream())
    .filter(s -> s.startsWith("BK-"))
    .distinct()
    .sorted()
    .limit(5)
    .toList();
sorted() before limit() is important here because the requirement asks for the alphabetically first 5 matching distinct SKUs.
map(o -> o.skus().stream()) produces a Stream<Stream<String>>, so the string filter still is not applied to SKU values. peek is for observing elements and does not transform or flatten the stream.
Topic: Working with Streams and Lambda Expressions
An analyst is merging two ordered feeds of batched product IDs. This code fails to compile with an error indicating that List<Stream<String>> cannot be converted to List<String>.
var left = Stream.of(List.of("A1", "A2"), List.of("A3"));
var right = Stream.of(List.of("B1"), List.of("B2", "B3"));
List<String> ids =
    Stream.concat(left, right)
        .map(List::stream)
        .toList();
Which change is the best fix while preserving the encounter order of the IDs?
Options:
A. Change the target type to List<Stream<String>>.
B. Use Stream.of(left, right).flatMap(s -> s).toList().
C. Use left.flatMap(List::stream).concat(right.flatMap(List::stream)).
D. Replace .map(List::stream) with .flatMap(List::stream).
Best answer: D
Explanation: Stream.concat(left, right) produces one stream whose elements are still List<String> objects. Using map(List::stream) creates a nested Stream<Stream<String>>, so the collected list is not a List<String>. flatMap(List::stream) performs the needed decomposition and flattening.
The core issue is the difference between map and flatMap after concatenating streams. Stream.concat(left, right) combines the two Stream<List<String>> sources into another Stream<List<String>> and preserves encounter order: all left elements, then all right elements. Applying map(List::stream) maps each list to its own stream, producing Stream<Stream<String>>. Applying flatMap(List::stream) maps each list to a stream and then flattens those inner streams into a single Stream<String>, so toList() returns the intended List<String>. The closest distractor is changing the target type, but that accepts the wrong nested result instead of producing the required IDs.
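Putting the fix together, a runnable sketch of the corrected pipeline:

```java
import java.util.List;
import java.util.stream.Stream;

public class ConcatFlattenDemo {
    public static void main(String[] args) {
        var left = Stream.of(List.of("A1", "A2"), List.of("A3"));
        var right = Stream.of(List.of("B1"), List.of("B2", "B3"));
        List<String> ids = Stream.concat(left, right)
            .flatMap(List::stream) // flatten each List<String> into the result
            .toList();
        System.out.println(ids); // [A1, A2, A3, B1, B2, B3]
    }
}
```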
concat is a static method on Stream, not an instance method. Stream.of(left, right).flatMap(s -> s) still leaves List<String> elements, not String elements.
Topic: Working with Streams and Lambda Expressions
A defect was found in this Java 21 method. It must return the front items first, followed by every item from each later batch. Assume neither input stream has been consumed.
static Stream<String> combine(Stream<String> front,
                              Stream<List<String>> batches) {
    return Stream.concat(front, batches); // line X
}
Which replacement for line X is the best fix?
Options:
A. return Stream.concat(front, batches.flatMap(List::stream));
B. return Stream.concat(front, batches.map(List::stream));
C. return Stream.concat(front, batches.toList().stream());
D. return Stream.concat(front, batches).flatMap(List::stream);
Best answer: A
Explanation: Stream.concat combines two streams whose element types are compatible with the requested result type. Here, the second input is a stream of lists, so it must be decomposed with flatMap(List::stream) before concatenation.
The core issue is the difference between mapping and flattening. batches has type Stream<List<String>>, but the method must return Stream<String>. Calling batches.flatMap(List::stream) turns each List<String> into a Stream<String> and flattens those inner streams into one stream. Then Stream.concat(front, ...) produces a single stream with the front elements first, followed by the flattened batch elements.
Using map(List::stream) would produce Stream<Stream<String>>, which is still nested. The key takeaway is to flatten the nested stream before concatenating it with the already-flat stream.
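A sketch of the repaired method with a small caller (the sample data is invented):

```java
import java.util.List;
import java.util.stream.Stream;

public class CombineDemo {
    static Stream<String> combine(Stream<String> front,
                                  Stream<List<String>> batches) {
        // Flatten the batches before concatenating with the already-flat front.
        return Stream.concat(front, batches.flatMap(List::stream));
    }

    public static void main(String[] args) {
        var out = combine(Stream.of("x", "y"),
                          Stream.of(List.of("a"), List.of("b", "c"))).toList();
        System.out.println(out); // [x, y, a, b, c]
    }
}
```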
map(List::stream) creates a nested Stream<Stream<String>>, not Stream<String>. Concatenating a Stream<String> with a Stream<List<String>> does not produce a stream that List::stream can flatten safely. batches.toList().stream() still yields a Stream<List<String>> and unnecessarily consumes the input stream eagerly.
Topic: Working with Streams and Lambda Expressions
Assume java.util.function.* is imported. Which lambda assignment is valid in Java 21?
Options:
A. Function<String, Integer> f = s -> s.length();
B. Function<String, Integer> f = s -> s.isEmpty();
C. Predicate<String> p = s -> s.length();
D. UnaryOperator<Number> op = (Integer n) -> n + 1;
Best answer: A
Explanation: A lambda is checked against its functional interface target type. For Function<String, Integer>, the parameter is a String, and s.length() returns an int that is assignment-compatible with Integer by boxing.
Lambda expressions are target-typed: Java uses the functional interface’s single abstract method to determine the lambda parameter types and required return type. Function<String, Integer> represents a method that accepts a String and returns an Integer. With s -> s.length(), s is inferred as String, and String.length() returns int, which can be boxed to Integer in the return context.
Explicit lambda parameter types must also match the target function type. A lambda targeted to UnaryOperator<Number> has a Number parameter, so declaring it as Integer is not accepted.
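The target-typing rules can be verified quickly with a sketch like this (names are illustrative):

```java
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

public class TargetTypeDemo {
    public static void main(String[] args) {
        Function<String, Integer> f = s -> s.length(); // int boxes to Integer
        Predicate<String> p = s -> s.length() > 0;     // boolean test, not a value
        UnaryOperator<Integer> op = n -> n + 1;        // parameter type matches target
        System.out.println(f.apply("java")); // 4
        System.out.println(p.test(""));      // false
        System.out.println(op.apply(41));    // 42
    }
}
```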
s.isEmpty() returns boolean, not something compatible with Integer. Predicate<String> requires a boolean, but s.length() returns int. (Integer n) does not match the Number parameter required by UnaryOperator<Number>.
Topic: Working with Streams and Lambda Expressions
A reporting method receives batches in this fixed encounter order:
var batches = List.of(
    List.of("A2", "A1"),
    List.of("A1", "B9"),
    List.of("A0")
);
The method must flatten the batches, keep only the initial run of codes that start with A, remove duplicates from that run, sort the remaining codes, and return at most two codes. It must not include A0, because it appears after B9. Which stream expression correctly applies the requirement?
Options:
A. batches.stream().flatMap(List::stream).distinct().sorted().takeWhile(s -> s.startsWith("A")).limit(2).toList()
B. batches.stream().flatMap(List::stream).filter(s -> s.startsWith("A")).distinct().sorted().limit(2).toList()
C. batches.stream().flatMap(List::stream).takeWhile(s -> s.startsWith("A")).distinct().sorted().limit(2).toList()
D. batches.stream().map(List::stream).takeWhile(s -> s.startsWith("A")).distinct().sorted().limit(2).toList()
Best answer: C
Explanation: For an ordered stream, takeWhile is prefix-based, not a general filter. After flattening, the prefix that starts with A ends at B9, so A0 is excluded before duplicate removal, sorting, and limiting are applied.
The flattened encounter order is A2, A1, A1, B9, A0. On an ordered stream, takeWhile(s -> s.startsWith("A")) keeps the longest initial prefix that satisfies the predicate and stops when B9 is reached. That leaves A2, A1, A1; distinct() removes the duplicate, sorted() orders the remaining codes, and limit(2) returns at most two values. A normal filter is not equivalent because it checks the whole stream and could include matching elements after the first nonmatching element.
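Tracing the correct pipeline step by step, as a runnable sketch using the data above:

```java
import java.util.List;

public class TakeWhileDemo {
    public static void main(String[] args) {
        var batches = List.of(
            List.of("A2", "A1"),
            List.of("A1", "B9"),
            List.of("A0"));
        var result = batches.stream()
            .flatMap(List::stream)             // A2, A1, A1, B9, A0
            .takeWhile(s -> s.startsWith("A")) // prefix ends at B9: A2, A1, A1
            .distinct()                        // A2, A1
            .sorted()                          // A1, A2
            .limit(2)
            .toList();
        System.out.println(result); // [A1, A2] — A0 never enters the pipeline
    }
}
```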
A plain filter can include A0, even though it appears after the first nonmatching code. map(List::stream) creates stream-valued elements, so the lambda parameter is not a String. Sorting first means takeWhile would operate on sorted order rather than the original flattened encounter order.
Topic: Working with Streams and Lambda Expressions
A developer wants to compare a sequential stream with a parallel stream while storing results in mutable lists. What is guaranteed when this Java 21 program runs?
import java.util.*;

public class StreamAudit {
    public static void main(String[] args) {
        var ids = List.of(1, 2, 3, 4, 5);
        var sequential = new ArrayList<Integer>();
        var parallel = Collections.synchronizedList(new ArrayList<Integer>());
        ids.stream().forEachOrdered(sequential::add);
        ids.parallelStream().forEach(parallel::add);
        System.out.println(sequential);
        System.out.println(parallel);
    }
}
Options:
A. The first line is ordered; the second has the same values in unspecified order.
B. The second line can miss values because parallel streams race.
C. Both lines are always [1, 2, 3, 4, 5].
D. The program can throw ConcurrentModificationException while adding to parallel.
Best answer: A
Explanation: The sequential pipeline uses forEachOrdered, so it adds values in the list’s encounter order. The parallel pipeline uses forEach, whose execution order is not guaranteed. The synchronized list protects the shared mutation but does not impose stream encounter order.
Parallel stream forEach may process elements on different threads and does not guarantee encounter order. Here, parallel is a synchronized list, so each add operation is thread-safe and the stream still performs the action for each source element. However, the order of those adds depends on parallel execution timing. The first pipeline uses forEachOrdered on an ordered source, so it prints [1, 2, 3, 4, 5]. The second line contains the same five values, but its printed order is not guaranteed.
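When ordered output is required from a parallel pipeline, forEachOrdered restores the encounter-order guarantee at the cost of parallelism. A minimal sketch:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class OrderedParallelDemo {
    public static void main(String[] args) {
        var ids = List.of(1, 2, 3, 4, 5);
        var out = Collections.synchronizedList(new ArrayList<Integer>());
        // forEachOrdered performs the action in encounter order,
        // even though the stream itself is parallel.
        ids.parallelStream().forEachOrdered(out::add);
        System.out.println(out); // always [1, 2, 3, 4, 5]
    }
}
```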
An ordered List source does not make parallel forEach preserve order. Collections.synchronizedList makes the individual add operations thread-safe. ids is not being modified during traversal.
Use the Java 21 1Z0-830 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the Java 21 1Z0-830 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.