Java 21 Sequenced Collections

1. Introduction

Java 21 introduces one of the most overdue additions to the Collections Framework: Sequenced Collections. This feature addresses a problem that has annoyed Java developers for over two decades — there was no uniform way to access the first and last elements of an ordered collection.

Think about it. You have a List, a Deque, a SortedSet, and a LinkedHashSet. All four maintain a defined encounter order. All four have a concept of “first element” and “last element.” Yet the method you call to get that first or last element is completely different for each type. It is as if every car manufacturer put the steering wheel in a different location. The car works, but every time you switch cars, you have to relearn how to drive.

Java 21 fixes this with three new interfaces: SequencedCollection, SequencedSet, and SequencedMap. These interfaces provide a unified API for accessing, adding, and removing elements at both ends of any ordered collection, plus a reversed() method that gives you a reversed view without copying.

This is not a minor convenience. It is a fundamental improvement to the Collections Framework that changes how you write collection-handling code. Let us walk through the problem, the solution, and practical applications.

2. The Problem Before Java 21

Before Java 21, getting the first and last element of different collection types required completely different code. There was no common interface, no polymorphism, no way to write a generic method that says “give me the first element of this ordered collection.” Let us look at how bad this was.

2.1 Getting the First Element

Here is how you get the first element from four different collection types — all of which maintain order:

// Getting the first element -- 4 different APIs for the same concept

// List -- use index
List<String> list = List.of("alpha", "beta", "gamma");
String firstOfList = list.get(0);

// Deque -- dedicated method
Deque<String> deque = new ArrayDeque<>(List.of("alpha", "beta", "gamma"));
String firstOfDeque = deque.getFirst();

// SortedSet -- yet another method
SortedSet<String> sortedSet = new TreeSet<>(List.of("alpha", "beta", "gamma"));
String firstOfSortedSet = sortedSet.first();

// LinkedHashSet -- no direct method at all!
LinkedHashSet<String> linkedHashSet = new LinkedHashSet<>(List.of("alpha", "beta", "gamma"));
String firstOfLinkedSet = linkedHashSet.iterator().next(); // ugly

2.2 Getting the Last Element

Getting the last element was even worse:

// Getting the last element -- even more inconsistent

// List -- calculate size minus one
List<String> list = List.of("alpha", "beta", "gamma");
String lastOfList = list.get(list.size() - 1);

// Deque -- dedicated method
Deque<String> deque = new ArrayDeque<>(List.of("alpha", "beta", "gamma"));
String lastOfDeque = deque.getLast();

// SortedSet -- different name than Deque
SortedSet<String> sortedSet = new TreeSet<>(List.of("alpha", "beta", "gamma"));
String lastOfSortedSet = sortedSet.last();

// LinkedHashSet -- absolutely terrible
LinkedHashSet<String> linkedHashSet = new LinkedHashSet<>(List.of("alpha", "beta", "gamma"));
String lastOfLinkedSet = null;
for (String s : linkedHashSet) {
    lastOfLinkedSet = s; // iterate through EVERYTHING to get the last one
}

2.3 Reversing a Collection

Reversing the iteration order was equally inconsistent:

// Reversing -- no common approach

// List -- reverse in place (mutates the original!)
List<String> list = new ArrayList<>(List.of("alpha", "beta", "gamma"));
Collections.reverse(list);

// or use ListIterator to go backwards (verbose)
ListIterator<String> it = list.listIterator(list.size());
while (it.hasPrevious()) {
    System.out.println(it.previous());
}

// Deque -- use descendingIterator
Deque<String> deque = new ArrayDeque<>(List.of("alpha", "beta", "gamma"));
Iterator<String> descIt = deque.descendingIterator();

// NavigableSet -- descendingSet returns a view
NavigableSet<String> navSet = new TreeSet<>(List.of("alpha", "beta", "gamma"));
NavigableSet<String> reversed = navSet.descendingSet();

// LinkedHashSet -- no built-in way to reverse at all
// You have to copy to a List, reverse it, and create a new LinkedHashSet

This inconsistency made it impossible to write generic utility methods. If you wanted a method getFirst(Collection c) that works with any ordered collection, you could not do it cleanly. You needed instanceof checks everywhere. The lack of a common interface for ordered collections was a fundamental gap in Java’s type system.
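To see that gap concretely, here is a sketch of what a "generic" first-element helper had to look like before Java 21. The class and method names are hypothetical, and it uses pattern-matching instanceof (Java 16+) for brevity:

```java
import java.util.*;

public class OldGetFirst {

    // Pre-Java-21 workaround: one instanceof branch per ordered type,
    // and the method still cannot cover every ordered collection.
    static <E> E getFirst(Collection<E> c) {
        if (c instanceof List<E> list) {
            return list.get(0);
        } else if (c instanceof Deque<E> deque) {
            return deque.getFirst();
        } else if (c instanceof SortedSet<E> set) {
            return set.first();
        } else if (c instanceof LinkedHashSet<E> lhs) {
            return lhs.iterator().next();
        }
        throw new IllegalArgumentException("no defined encounter order");
    }

    public static void main(String[] args) {
        System.out.println(getFirst(new ArrayList<>(List.of("a", "b"))));  // a
        System.out.println(getFirst(new ArrayDeque<>(List.of("a", "b")))); // a
        System.out.println(getFirst(new TreeSet<>(List.of("b", "a"))));    // a
    }
}
```

With Java 21, this entire method collapses into a single call on SequencedCollection.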

3. SequencedCollection Interface

The SequencedCollection interface is the core of this feature. It represents a collection with a defined encounter order — meaning there is a well-defined first element, second element, and so on, through the last element. It extends Collection and adds the following methods:

public interface SequencedCollection<E> extends Collection<E> {

    // Returns a reversed-order view of this collection
    SequencedCollection<E> reversed();

    // First and last element operations
    void addFirst(E e);
    void addLast(E e);
    E getFirst();
    E getLast();
    E removeFirst();
    E removeLast();
}

3.1 Using SequencedCollection Methods

Now every ordered collection speaks the same language:

import java.util.*;

public class SequencedCollectionDemo {

    public static void main(String[] args) {

        // ArrayList implements SequencedCollection
        List<String> list = new ArrayList<>(List.of("alpha", "beta", "gamma"));
        System.out.println(list.getFirst());  // alpha
        System.out.println(list.getLast());   // gamma

        // ArrayDeque implements SequencedCollection
        Deque<String> deque = new ArrayDeque<>(List.of("alpha", "beta", "gamma"));
        System.out.println(deque.getFirst()); // alpha
        System.out.println(deque.getLast());  // gamma

        // LinkedHashSet implements SequencedCollection (via SequencedSet)
        LinkedHashSet<String> linkedSet = new LinkedHashSet<>(List.of("alpha", "beta", "gamma"));
        System.out.println(linkedSet.getFirst()); // alpha
        System.out.println(linkedSet.getLast());  // gamma

        // TreeSet implements SequencedCollection (via SequencedSet)
        TreeSet<String> treeSet = new TreeSet<>(List.of("alpha", "beta", "gamma"));
        System.out.println(treeSet.getFirst()); // alpha
        System.out.println(treeSet.getLast());  // gamma
    }
}

Same method, same name, same behavior — regardless of the concrete collection type. This is the polymorphism that was missing for 25 years.

3.2 addFirst() and addLast()

These methods add elements at the beginning or end of the collection:

List<String> languages = new ArrayList<>(List.of("Java", "Python", "Go"));

languages.addFirst("Rust");  // [Rust, Java, Python, Go]
languages.addLast("Kotlin"); // [Rust, Java, Python, Go, Kotlin]

System.out.println(languages);
// Output: [Rust, Java, Python, Go, Kotlin]

// Works with Deque too
Deque<String> stack = new ArrayDeque<>();
stack.addFirst("bottom");
stack.addFirst("middle");
stack.addFirst("top");
System.out.println(stack); // [top, middle, bottom]

3.3 removeFirst() and removeLast()

These methods remove and return elements from the ends:

List<String> tasks = new ArrayList<>(List.of("email", "code review", "standup", "deploy"));

String firstTask = tasks.removeFirst(); // "email"
String lastTask = tasks.removeLast();   // "deploy"

System.out.println(firstTask);  // email
System.out.println(lastTask);   // deploy
System.out.println(tasks);      // [code review, standup]

// Throws NoSuchElementException on empty collections
List<String> empty = new ArrayList<>();
try {
    empty.getFirst(); // NoSuchElementException
} catch (NoSuchElementException e) {
    System.out.println("Collection is empty: " + e.getMessage());
}

3.4 Writing Generic Code

The biggest win is that you can now write methods that work with any sequenced collection:

// Generic method that works with ANY sequenced collection
public static <E> void printEndpoints(SequencedCollection<E> collection) {
    if (collection.isEmpty()) {
        System.out.println("Collection is empty");
        return;
    }
    System.out.println("First: " + collection.getFirst());
    System.out.println("Last:  " + collection.getLast());
    System.out.println("Size:  " + collection.size());
}

// Works with all ordered collection types
printEndpoints(new ArrayList<>(List.of(1, 2, 3)));
printEndpoints(new ArrayDeque<>(List.of(1, 2, 3)));
printEndpoints(new LinkedHashSet<>(List.of(1, 2, 3)));
printEndpoints(new TreeSet<>(List.of(1, 2, 3)));

Before Java 21, this method would have required either method overloading for each collection type or ugly instanceof checks. Now you just declare the parameter as SequencedCollection and it works everywhere.

4. SequencedSet Interface

SequencedSet extends SequencedCollection and adds set semantics — no duplicate elements are allowed. It also refines the return type of reversed() to return a SequencedSet:

public interface SequencedSet<E> extends Set<E>, SequencedCollection<E> {

    @Override
    SequencedSet<E> reversed();
}

The classes that implement SequencedSet include LinkedHashSet, TreeSet, and ConcurrentSkipListSet. The set-specific behavior affects addFirst() and addLast(): if the element already exists, it is repositioned to the requested end rather than creating a duplicate.

// SequencedSet repositions existing elements
LinkedHashSet<String> colors = new LinkedHashSet<>();
colors.add("red");
colors.add("green");
colors.add("blue");
System.out.println(colors); // [red, green, blue]

// addFirst moves "blue" to the front (no duplicate)
colors.addFirst("blue");
System.out.println(colors); // [blue, red, green]

// addLast moves "blue" to the end
colors.addLast("blue");
System.out.println(colors); // [red, green, blue]

// Adding a new element works as expected
colors.addFirst("yellow");
System.out.println(colors); // [yellow, red, green, blue]

4.1 SequencedSet with TreeSet

For sorted sets like TreeSet, addFirst() and addLast() throw UnsupportedOperationException because the position of elements is determined by the sort order, not by insertion order. However, getFirst(), getLast(), removeFirst(), removeLast(), and reversed() all work perfectly:

TreeSet<Integer> scores = new TreeSet<>(List.of(85, 92, 78, 95, 88));

System.out.println(scores);          // [78, 85, 88, 92, 95]
System.out.println(scores.getFirst()); // 78  (lowest)
System.out.println(scores.getLast());  // 95  (highest)

// Remove the lowest and highest
int lowest = scores.removeFirst();  // 78
int highest = scores.removeLast();  // 95
System.out.println(scores);         // [85, 88, 92]

// addFirst/addLast throw UnsupportedOperationException on TreeSet
try {
    scores.addFirst(100);
} catch (UnsupportedOperationException e) {
    System.out.println("Cannot addFirst on TreeSet -- order is determined by comparator");
}

5. SequencedMap Interface

SequencedMap brings the same concept to maps. It extends Map and provides methods to access the first and last entries, put entries at specific positions, and get sequenced views of keys, values, and entries:

public interface SequencedMap<K, V> extends Map<K, V> {

    // Reversed view
    SequencedMap<K, V> reversed();

    // First and last entries
    Map.Entry<K, V> firstEntry();
    Map.Entry<K, V> lastEntry();

    // Positional put (returns the previous value for the key, or null)
    V putFirst(K key, V value);
    V putLast(K key, V value);

    // Remove from ends
    Map.Entry<K, V> pollFirstEntry();
    Map.Entry<K, V> pollLastEntry();

    // Sequenced views
    SequencedSet<K> sequencedKeySet();
    SequencedCollection<V> sequencedValues();
    SequencedSet<Map.Entry<K, V>> sequencedEntrySet();
}

5.1 Using SequencedMap with LinkedHashMap

LinkedHashMap<String, Integer> rankings = new LinkedHashMap<>();
rankings.put("Alice", 95);
rankings.put("Bob", 88);
rankings.put("Charlie", 92);

// Access first and last entries
Map.Entry<String, Integer> first = rankings.firstEntry();
Map.Entry<String, Integer> last = rankings.lastEntry();
System.out.println("First: " + first); // First: Alice=95
System.out.println("Last: " + last);   // Last: Charlie=92

// Put at specific positions
rankings.putFirst("Diana", 99); // Diana goes to the front
rankings.putLast("Eve", 85);    // Eve goes to the end
System.out.println(rankings);
// {Diana=99, Alice=95, Bob=88, Charlie=92, Eve=85}

// If the key already exists, putFirst/putLast repositions it
rankings.putFirst("Charlie", 97); // Charlie moves to front with new value
System.out.println(rankings);
// {Charlie=97, Diana=99, Alice=95, Bob=88, Eve=85}

// Poll (remove and return) from ends
Map.Entry<String, Integer> polledFirst = rankings.pollFirstEntry();
Map.Entry<String, Integer> polledLast = rankings.pollLastEntry();
System.out.println("Polled first: " + polledFirst); // Charlie=97
System.out.println("Polled last: " + polledLast);   // Eve=85
System.out.println(rankings); // {Diana=99, Alice=95, Bob=88}

5.2 Sequenced Views

The sequencedKeySet(), sequencedValues(), and sequencedEntrySet() methods return sequenced views that support all the sequenced operations:

LinkedHashMap<String, Double> prices = new LinkedHashMap<>();
prices.put("Apple", 1.50);
prices.put("Banana", 0.75);
prices.put("Cherry", 3.00);
prices.put("Date", 5.50);

// Sequenced key set -- supports getFirst/getLast
SequencedSet<String> keys = prices.sequencedKeySet();
System.out.println("First key: " + keys.getFirst()); // Apple
System.out.println("Last key: " + keys.getLast());   // Date

// Sequenced values -- supports getFirst/getLast
SequencedCollection<Double> values = prices.sequencedValues();
System.out.println("First value: " + values.getFirst()); // 1.5
System.out.println("Last value: " + values.getLast());   // 5.5

// Sequenced entry set
SequencedSet<Map.Entry<String, Double>> entries = prices.sequencedEntrySet();
System.out.println("First entry: " + entries.getFirst()); // Apple=1.5
System.out.println("Last entry: " + entries.getLast());   // Date=5.5

// Iterate in reverse using reversed views
for (String key : keys.reversed()) {
    System.out.println(key + " -> " + prices.get(key));
}
// Output: Date -> 5.5, Cherry -> 3.0, Banana -> 0.75, Apple -> 1.5

5.3 SequencedMap with TreeMap

Just like TreeSet, TreeMap supports most sequenced operations except putFirst() and putLast() (because entry order is determined by key comparison):

TreeMap<String, Integer> sortedScores = new TreeMap<>();
sortedScores.put("Charlie", 92);
sortedScores.put("Alice", 95);
sortedScores.put("Bob", 88);

// Sorted by key (natural order)
System.out.println(sortedScores); // {Alice=95, Bob=88, Charlie=92}

Map.Entry<String, Integer> firstEntry = sortedScores.firstEntry();
Map.Entry<String, Integer> lastEntry = sortedScores.lastEntry();
System.out.println("First: " + firstEntry); // Alice=95
System.out.println("Last: " + lastEntry);   // Charlie=92

// Poll operations work
Map.Entry<String, Integer> polled = sortedScores.pollFirstEntry();
System.out.println("Polled: " + polled);      // Alice=95
System.out.println(sortedScores);             // {Bob=88, Charlie=92}

6. Collections Hierarchy Changes

Java 21 retrofits the three new interfaces into the existing collections hierarchy. Here is how the hierarchy looks after Java 21:

6.1 Interface Hierarchy

New Interface          | Extends                        | Purpose
SequencedCollection<E> | Collection<E>                  | Ordered collection with first/last access
SequencedSet<E>        | Set<E>, SequencedCollection<E> | Ordered set with no duplicates
SequencedMap<K, V>     | Map<K, V>                      | Ordered map with first/last entry access

6.2 Which Classes Implement What

Class                 | Implements                                 | addFirst/addLast                                     | Notes
ArrayList             | SequencedCollection (via List)             | Supported                                            | addFirst is O(n) due to shifting
LinkedList            | SequencedCollection (via List, Deque)      | Supported                                            | O(1) for both ends
ArrayDeque            | SequencedCollection (via Deque)            | Supported                                            | O(1) amortized for both ends
LinkedHashSet         | SequencedSet                               | Supported (repositions if exists)                    | Maintains insertion order
TreeSet               | SequencedSet (via SortedSet, NavigableSet) | Throws UnsupportedOperationException                 | Order determined by comparator
ConcurrentSkipListSet | SequencedSet                               | Throws UnsupportedOperationException                 | Concurrent sorted set
LinkedHashMap         | SequencedMap                               | putFirst/putLast supported                           | Maintains insertion order
TreeMap               | SequencedMap (via SortedMap, NavigableMap) | putFirst/putLast throw UnsupportedOperationException | Order determined by key comparator
ConcurrentSkipListMap | SequencedMap                               | putFirst/putLast throw UnsupportedOperationException | Concurrent sorted map

6.3 List Now Extends SequencedCollection

The List interface itself now extends SequencedCollection. This means every List implementation automatically inherits getFirst(), getLast(), and all other sequenced methods. The same is true for Deque, SortedSet, and NavigableSet.

// List extends SequencedCollection -- so these methods are available on ALL lists
List<String> immutableList = List.of("one", "two", "three");

System.out.println(immutableList.getFirst()); // one
System.out.println(immutableList.getLast());  // three

// reversed() also works on immutable lists -- returns a view
List<String> reversedView = immutableList.reversed();
System.out.println(reversedView); // [three, two, one]

// The original is unchanged
System.out.println(immutableList); // [one, two, three]

// Note: addFirst/addLast/removeFirst/removeLast throw
// UnsupportedOperationException on immutable lists

7. The reversed() View

The reversed() method is one of the most powerful additions. It returns a view of the collection in reverse order — not a copy. This is an important distinction. A view does not allocate new memory for the elements. It simply provides a reversed perspective of the same underlying data. Modifications through the view are reflected in the original, and vice versa.

7.1 reversed() Returns a View, Not a Copy

List<String> original = new ArrayList<>(List.of("A", "B", "C", "D", "E"));
List<String> reversed = original.reversed();

System.out.println("Original: " + original); // [A, B, C, D, E]
System.out.println("Reversed: " + reversed); // [E, D, C, B, A]

// Modify through the reversed view
reversed.addFirst("Z"); // adds to the END of the original
System.out.println("Original: " + original); // [A, B, C, D, E, Z]
System.out.println("Reversed: " + reversed); // [Z, E, D, C, B, A]

// Modify the original -- reflected in the view
original.addFirst("START");
System.out.println("Original: " + original); // [START, A, B, C, D, E, Z]
System.out.println("Reversed: " + reversed); // [Z, E, D, C, B, A, START]

7.2 Using reversed() for Iteration

The reversed view makes backward iteration trivial with enhanced for loops and streams:

List<String> history = new ArrayList<>(List.of("page1", "page2", "page3", "page4"));

// Iterate in reverse with enhanced for loop -- clean and readable
System.out.println("Recent history (newest first):");
for (String page : history.reversed()) {
    System.out.println("  " + page);
}
// Output:
//   page4
//   page3
//   page2
//   page1

// Use with streams
history.reversed().stream()
    .limit(3)
    .forEach(page -> System.out.println("Recent: " + page));
// Output:
//   Recent: page4
//   Recent: page3
//   Recent: page2

// Works with forEach too
history.reversed().forEach(System.out::println);

7.3 reversed() on Maps

The reversed view on a SequencedMap reverses the entry order:

LinkedHashMap<String, Integer> orderedMap = new LinkedHashMap<>();
orderedMap.put("Monday", 1);
orderedMap.put("Tuesday", 2);
orderedMap.put("Wednesday", 3);
orderedMap.put("Thursday", 4);
orderedMap.put("Friday", 5);

// Reversed map view
SequencedMap<String, Integer> reversedMap = orderedMap.reversed();
System.out.println("Original first: " + orderedMap.firstEntry());  // Monday=1
System.out.println("Reversed first: " + reversedMap.firstEntry()); // Friday=5

// Iterate the map in reverse
for (var entry : orderedMap.reversed().entrySet()) {
    System.out.println(entry.getKey() + " = " + entry.getValue());
}
// Friday = 5
// Thursday = 4
// Wednesday = 3
// Tuesday = 2
// Monday = 1

// Double reverse returns original order
SequencedMap<String, Integer> doubleReversed = orderedMap.reversed().reversed();
System.out.println(doubleReversed.firstEntry()); // Monday=1

7.4 Creating an Independent Reversed Copy

If you need a reversed copy that is independent of the original, use the copy constructor or stream().toList():

List<String> original = new ArrayList<>(List.of("A", "B", "C"));

// Independent reversed copy (changes to original do not affect the copy)
List<String> reversedCopy = new ArrayList<>(original.reversed());

// Or with streams
List<String> reversedImmutable = original.reversed().stream().toList();

original.add("D");
System.out.println("Original: " + original);            // [A, B, C, D]
System.out.println("Reversed copy: " + reversedCopy);   // [C, B, A] (unaffected)
System.out.println("Reversed immutable: " + reversedImmutable); // [C, B, A] (unaffected)

8. Practical Examples

Let us look at real-world scenarios where sequenced collections make your code cleaner and more expressive.

8.1 LRU Cache with SequencedMap

A Least Recently Used (LRU) cache evicts the oldest entry when the cache is full. With SequencedMap, this becomes trivial:

import java.util.*;

public class LRUCache<K, V> {
    private final int maxSize;
    private final LinkedHashMap<K, V> cache;

    public LRUCache(int maxSize) {
        this.maxSize = maxSize;
        // accessOrder=true means most recently accessed entry moves to the end
        this.cache = new LinkedHashMap<>(16, 0.75f, true);
    }

    public V get(K key) {
        return cache.get(key); // automatically moves to end (most recent)
    }

    public void put(K key, V value) {
        cache.put(key, value);
        // Evict the oldest entry (first entry) if over capacity
        while (cache.size() > maxSize) {
            Map.Entry<K, V> eldest = cache.pollFirstEntry(); // Java 21!
            System.out.println("Evicted: " + eldest);
        }
    }

    public V getMostRecent() {
        return cache.isEmpty() ? null : cache.lastEntry().getValue(); // Java 21!
    }

    public V getLeastRecent() {
        return cache.isEmpty() ? null : cache.firstEntry().getValue(); // Java 21!
    }

    @Override
    public String toString() {
        return cache.toString();
    }

    public static void main(String[] args) {
        LRUCache<String, String> cache = new LRUCache<>(3);
        cache.put("user:1", "Alice");
        cache.put("user:2", "Bob");
        cache.put("user:3", "Charlie");
        System.out.println(cache); // {user:1=Alice, user:2=Bob, user:3=Charlie}

        cache.get("user:1"); // access moves user:1 to the end
        System.out.println(cache); // {user:2=Bob, user:3=Charlie, user:1=Alice}

        cache.put("user:4", "Diana"); // evicts user:2 (least recently used)
        System.out.println(cache); // {user:3=Charlie, user:1=Alice, user:4=Diana}

        System.out.println("Most recent: " + cache.getMostRecent());   // Diana
        System.out.println("Least recent: " + cache.getLeastRecent()); // Charlie
    }
}

8.2 Browser History / Undo System

A history system where you need to access both the most recent action and the oldest, and iterate in reverse order to show “recent first”:

import java.util.*;

public class BrowsingHistory {
    private final LinkedHashSet<String> visited = new LinkedHashSet<>();
    private final int maxHistory;
    private final int maxHistory;

    public BrowsingHistory(int maxHistory) {
        this.maxHistory = maxHistory;
    }

    public void visit(String url) {
        // If already visited, move to the end (most recent)
        visited.addLast(url); // repositions if already exists -- Java 21!

        // Trim old history
        while (visited.size() > maxHistory) {
            String oldest = visited.removeFirst(); // Java 21!
            System.out.println("Trimmed from history: " + oldest);
        }
    }

    public String currentPage() {
        return visited.isEmpty() ? null : visited.getLast(); // Java 21!
    }

    public String oldestPage() {
        return visited.isEmpty() ? null : visited.getFirst(); // Java 21!
    }

    public List<String> recentHistory(int count) {
        // Recent pages first using reversed view -- Java 21!
        return visited.reversed().stream()
            .limit(count)
            .toList();
    }

    public static void main(String[] args) {
        BrowsingHistory history = new BrowsingHistory(5);
        history.visit("google.com");
        history.visit("stackoverflow.com");
        history.visit("github.com");
        history.visit("docs.oracle.com");
        history.visit("reddit.com");

        System.out.println("Current: " + history.currentPage());   // reddit.com
        System.out.println("Oldest: " + history.oldestPage());     // google.com
        System.out.println("Recent 3: " + history.recentHistory(3));
        // [reddit.com, docs.oracle.com, github.com]

        // Re-visiting a page moves it to the end
        history.visit("google.com");
        System.out.println("Current: " + history.currentPage());   // google.com
        System.out.println("Recent 3: " + history.recentHistory(3));
        // [google.com, reddit.com, docs.oracle.com]
    }
}

8.3 Task Queue with Priority and Ordering

A task queue where you can add high-priority tasks to the front and normal tasks to the back:

import java.util.*;

public class TaskQueue {
    private final List<String> tasks = new ArrayList<>();

    public void addTask(String task) {
        tasks.addLast(task); // Java 21 -- same as add() but more expressive
    }

    public void addUrgentTask(String task) {
        tasks.addFirst(task); // Java 21 -- urgent tasks go to front
    }

    public String processNext() {
        if (tasks.isEmpty()) return null;
        return tasks.removeFirst(); // Java 21 -- process from the front
    }

    public String peekNext() {
        return tasks.isEmpty() ? null : tasks.getFirst(); // Java 21
    }

    public String peekLast() {
        return tasks.isEmpty() ? null : tasks.getLast(); // Java 21
    }

    public List<String> getAllTasks() {
        return Collections.unmodifiableList(tasks);
    }

    public static void main(String[] args) {
        TaskQueue queue = new TaskQueue();
        queue.addTask("Write unit tests");
        queue.addTask("Update documentation");
        queue.addTask("Deploy to staging");

        queue.addUrgentTask("Fix production bug"); // goes to front!

        System.out.println("All tasks: " + queue.getAllTasks());
        // [Fix production bug, Write unit tests, Update documentation, Deploy to staging]

        System.out.println("Processing: " + queue.processNext()); // Fix production bug
        System.out.println("Processing: " + queue.processNext()); // Write unit tests
    }
}

8.4 Sorted Leaderboard

A leaderboard that always keeps scores sorted and lets you quickly get the top and bottom players:

import java.util.*;

public class Leaderboard {

    // TreeMap keyed by score in descending order; players with the same score share a list
    private final TreeMap<Integer, List<String>> scoreBoard = new TreeMap<>(Comparator.reverseOrder());

    public void addScore(String player, int score) {
        scoreBoard.computeIfAbsent(score, k -> new ArrayList<>()).add(player);
    }

    public Map.Entry<Integer, List<String>> getTopScore() {
        return scoreBoard.firstEntry(); // Java 21 -- highest score (reversed order)
    }

    public Map.Entry<Integer, List<String>> getLowestScore() {
        return scoreBoard.lastEntry(); // Java 21 -- lowest score
    }

    public void printLeaderboard() {
        System.out.println("=== Leaderboard ===");
        int rank = 1;
        for (var entry : scoreBoard.sequencedEntrySet()) { // Java 21
            for (String player : entry.getValue()) {
                System.out.printf("#%d  %s - %d points%n", rank++, player, entry.getKey());
            }
        }
    }

    public void printBottomUp() {
        System.out.println("=== Bottom to Top ===");
        for (var entry : scoreBoard.reversed().sequencedEntrySet()) { // Java 21
            for (String player : entry.getValue()) {
                System.out.printf("  %s - %d points%n", player, entry.getKey());
            }
        }
    }

    public static void main(String[] args) {
        Leaderboard lb = new Leaderboard();
        lb.addScore("Alice", 1500);
        lb.addScore("Bob", 1200);
        lb.addScore("Charlie", 1800);
        lb.addScore("Diana", 1500); // same score as Alice
        lb.addScore("Eve", 900);

        lb.printLeaderboard();
        // #1  Charlie - 1800 points
        // #2  Alice - 1500 points
        // #3  Diana - 1500 points
        // #4  Bob - 1200 points
        // #5  Eve - 900 points

        System.out.println("Top: " + lb.getTopScore());     // 1800=[Charlie]
        System.out.println("Bottom: " + lb.getLowestScore()); // 900=[Eve]
    }
}

9. Comparison Table: Old Way vs New Way

Here is a comprehensive comparison of how common operations looked before Java 21 versus the clean API that sequenced collections provide:

Operation | Old Way (Pre-Java 21) | New Way (Java 21)
Get first element of a List | list.get(0) | list.getFirst()
Get last element of a List | list.get(list.size() - 1) | list.getLast()
Get first element of a SortedSet | sortedSet.first() | sortedSet.getFirst()
Get last element of a SortedSet | sortedSet.last() | sortedSet.getLast()
Get first element of a LinkedHashSet | linkedHashSet.iterator().next() | linkedHashSet.getFirst()
Get last element of a LinkedHashSet | Loop through entire set | linkedHashSet.getLast()
Remove first element of a List | list.remove(0) | list.removeFirst()
Remove last element of a List | list.remove(list.size() - 1) | list.removeLast()
Add to front of a List | list.add(0, element) | list.addFirst(element)
Reverse iteration of a List | Collections.reverse(copy) or ListIterator | list.reversed()
Get first entry of a LinkedHashMap | map.entrySet().iterator().next() | map.firstEntry()
Get last entry of a LinkedHashMap | Loop through entire entry set | map.lastEntry()
Reverse iteration of a Map | Copy keys to list, reverse, iterate | map.reversed().forEach(...)
Generic "get first" for any ordered collection | Impossible without instanceof checks | sequencedCollection.getFirst()

The pattern is clear: the new API is more readable, more consistent, and more composable. You no longer need to remember different method names for conceptually identical operations.

10. Best Practices

10.1 Prefer SequencedCollection as a Parameter Type

When writing methods that need ordered access, use SequencedCollection as the parameter type instead of concrete types. This makes your methods work with lists, deques, and ordered sets:

// Good -- accepts any sequenced collection
public static <E> E getLastOrDefault(SequencedCollection<E> collection, E defaultValue) {
    return collection.isEmpty() ? defaultValue : collection.getLast();
}

// Good -- works with SequencedMap
public static <K, V> V getNewestValue(SequencedMap<K, V> map) {
    Map.Entry<K, V> last = map.lastEntry();
    return last == null ? null : last.getValue();
}

// Avoid -- too specific
public static String getLastElement(ArrayList<String> list) {
    return list.get(list.size() - 1);
}

10.2 Be Aware of Performance Characteristics

Not all sequenced operations are O(1) for every collection type:

Operation     | ArrayList | LinkedList | ArrayDeque | LinkedHashSet | TreeSet
getFirst()    | O(1)      | O(1)       | O(1)       | O(1)          | O(log n)
getLast()     | O(1)      | O(1)       | O(1)       | O(1)          | O(log n)
addFirst()    | O(n)      | O(1)       | O(1)       | O(1)          | N/A
addLast()     | O(1)*     | O(1)       | O(1)*      | O(1)          | N/A
removeFirst() | O(n)      | O(1)       | O(1)       | O(1)          | O(log n)
removeLast()  | O(1)      | O(1)       | O(1)       | O(1)          | O(log n)
reversed()    | O(1)      | O(1)       | O(1)       | O(1)          | O(1)

* Amortized O(1). Key takeaway: addFirst() and removeFirst() on ArrayList are O(n) because all elements must be shifted. If you frequently add or remove from the front, use ArrayDeque or LinkedList instead.
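For a front-heavy workload, the fix is usually a one-line change of implementation; the calling code stays identical because both types share the SequencedCollection API. A small illustrative sketch (class and method names are hypothetical):

```java
import java.util.*;

public class FrontHeavyDemo {

    // Prepend 0..4 to any sequenced collection via the shared API.
    static SequencedCollection<Integer> prependAll(SequencedCollection<Integer> c) {
        for (int i = 0; i < 5; i++) {
            c.addFirst(i);
        }
        return c;
    }

    public static void main(String[] args) {
        // ArrayList shifts every element on each addFirst (O(n) per call);
        // ArrayDeque prepends in O(1) amortized. The results are identical.
        System.out.println(prependAll(new ArrayList<>()));  // [4, 3, 2, 1, 0]
        System.out.println(prependAll(new ArrayDeque<>())); // [4, 3, 2, 1, 0]
    }
}
```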

10.3 Use reversed() Instead of Collections.reverse()

Collections.reverse() mutates the list in place. reversed() returns a lightweight view with zero allocation overhead. Prefer reversed() unless you specifically need to reorder the underlying data:

List<String> logs = getRecentLogs();

// Bad -- mutates the list in place, so you must reverse it back afterwards
Collections.reverse(logs);
for (String log : logs) {
    process(log);
}
Collections.reverse(logs); // have to reverse back!

// Good -- view-based, no mutation, no allocation
for (String log : logs.reversed()) {
    process(log);
}

10.4 Handle Empty Collections

getFirst(), getLast(), removeFirst(), and removeLast() throw NoSuchElementException on empty collections. Always check for emptiness first, or use a try-catch if the empty case is exceptional:

// Safe access pattern
public static <E> Optional<E> safeGetFirst(SequencedCollection<E> collection) {
    return collection.isEmpty() ? Optional.empty() : Optional.of(collection.getFirst());
}

public static <E> Optional<E> safeGetLast(SequencedCollection<E> collection) {
    return collection.isEmpty() ? Optional.empty() : Optional.of(collection.getLast());
}

// Usage
List<String> items = fetchItems();
String first = safeGetFirst(items).orElse("No items");
String last = safeGetLast(items).orElse("No items");

10.5 Remember That reversed() Views Are Live

Since reversed() returns a view, be careful not to modify the original collection while iterating over its reversed view (unless you specifically want to). This follows the same concurrent modification rules as other collection views:

List<String> names = new ArrayList<>(List.of("Alice", "Bob", "Charlie"));

// This will throw ConcurrentModificationException
try {
    for (String name : names.reversed()) {
        if (name.startsWith("B")) {
            names.remove(name); // modifying original while iterating view!
        }
    }
} catch (ConcurrentModificationException e) {
    System.out.println("Cannot modify during iteration");
}

// Safe approach: collect items to remove first
List<String> toRemove = names.reversed().stream()
    .filter(n -> n.startsWith("B"))
    .toList();
names.removeAll(toRemove);

10.6 Migrate Gradually

You do not need to rewrite all your code at once. Here is a prioritized migration approach:

  1. Replace list.get(0) with list.getFirst() — immediate readability improvement, zero risk
  2. Replace list.get(list.size() - 1) with list.getLast() — eliminates off-by-one risk
  3. Replace Collections.reverse() with reversed() — when you only need reversed iteration, not mutation
  4. Update method signatures to use SequencedCollection — when refactoring utility methods
  5. Use firstEntry()/lastEntry() on maps — when working with LinkedHashMap or TreeMap
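Applied to a small example list (the sample data here is made up), steps 1 through 3 look like this:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

List<String> logs = new ArrayList<>(List.of("boot", "connect", "ready"));

// Before Java 21: index arithmetic and mutating reversal
String oldestBefore = logs.get(0);
String newestBefore = logs.get(logs.size() - 1);
List<String> reversedCopy = new ArrayList<>(logs);
Collections.reverse(reversedCopy);

// Java 21: intent-revealing accessors and a non-mutating view
String oldest = logs.getFirst();
String newest = logs.getLast();
List<String> reversedView = logs.reversed();

System.out.println(oldest + " .. " + newest); // boot .. ready
System.out.println(reversedView);             // [ready, connect, boot]
```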

Sequenced Collections is one of those features that seems small on the surface but fundamentally improves the Java Collections Framework. The uniform API, the lightweight reversed views, and the ability to write truly generic collection-handling code make this one of the most practical additions in Java 21. Start using getFirst(), getLast(), and reversed() today — your code will be cleaner for it.

March 1, 2026

Java 21 Virtual Threads

1. Introduction

For over two decades, Java’s concurrency model has been built on a simple idea: one thread per request. A web server receives an HTTP request, assigns it to a thread, that thread does everything — reads from the database, calls an external API, formats the response — and then returns to the pool. This model is easy to reason about. It is also hitting a wall.

The problem is that Java threads are thin wrappers around operating system threads. Every new Thread() call asks the OS kernel to allocate a real thread with its own stack space, register set, and scheduling context. On most systems, each platform thread consumes 512 KB to 1 MB of memory just for the stack. That means 10,000 threads need roughly 5-10 GB of RAM — just for stacks, before your application does anything useful.

Think of it this way: imagine you run a restaurant where every customer gets a dedicated waiter who stands at their table for the entire meal — even while the kitchen is cooking, even while the customer is reading the menu. You would run out of waiters very quickly. What you really want is waiters who can walk away while the customer is thinking and come back when the food is ready. That is exactly what virtual threads do.

Java 21 introduces virtual threads as a permanent feature (JEP 444), after two preview rounds in Java 19 and 20. Virtual threads are lightweight threads managed by the JVM rather than the OS. They are cheap to create (a few hundred bytes each), cheap to block (the JVM unmounts them from the carrier thread), and you can run millions of them simultaneously. They do not require you to rewrite your code in reactive or callback style — your existing synchronous, blocking code just works, but now it scales.

This post covers everything you need to know about virtual threads: how they work under the hood, how to create and use them, when they shine, when they do not, and how to migrate your existing applications.

2. Platform Threads vs Virtual Threads

Before diving into code, you need to understand the fundamental difference between the two types of threads now available in Java 21. Platform threads are what you have been using all along — they are the traditional java.lang.Thread backed by an OS thread. Virtual threads are a new kind of thread that is managed entirely by the JVM.

Characteristic | Platform Thread | Virtual Thread
Backed by | OS kernel thread (1:1 mapping) | JVM-managed; many virtual threads share a few OS threads
Memory cost | ~512 KB – 1 MB per thread (stack) | ~200 bytes – a few KB initially; grows as needed
Creation cost | Expensive (kernel syscall, memory allocation) | Cheap (JVM object allocation, no syscall)
Max practical count | ~2,000 – 10,000 per JVM | Millions per JVM
Scheduling | OS kernel scheduler (preemptive) | JVM scheduler using ForkJoinPool (cooperative at I/O)
Blocking behavior | Blocks the OS thread; wastes resources | Unmounts from carrier; carrier thread reused immediately
CPU-bound work | Well-suited | No advantage (still uses carrier threads)
I/O-bound work | Wastes thread while waiting | Ideal — blocks cheaply, scales massively
Thread pooling | Required (creating threads is expensive) | Not needed and discouraged (creating is cheap)
Thread identity | Has a meaningful OS thread ID | Has a Java thread ID; no OS thread identity
ThreadLocal | Works normally | Works but discouraged (millions of copies = memory waste)
Stack trace | Shows OS thread info | Shows virtual thread info; carrier thread is hidden

The OS Thread Model Problem

To understand why this matters, consider what happens when a platform thread makes a JDBC call. The thread sends the query to the database and then blocks, waiting for the response. During that wait — which could be 5, 50, or 500 milliseconds — the OS thread sits idle. It cannot be used for anything else. It is consuming memory, occupying an OS scheduling slot, and doing nothing. Multiply that by thousands of concurrent requests, and you have a server that is mostly idle threads waiting for I/O.

The reactive programming movement (Project Reactor, RxJava, Vert.x) tried to solve this by eliminating blocking entirely — you write everything as callbacks, operators, and event loops. It works, but it comes at a steep cost: your code becomes harder to read, harder to debug (stack traces are useless), and every library in your stack needs to be reactive-aware. Virtual threads give you the scalability of reactive with the simplicity of blocking code.

3. Creating Virtual Threads

Java 21 provides several ways to create virtual threads. All of them are straightforward and feel familiar if you have ever created platform threads. The key difference is that you are now creating something extremely lightweight — do not think of these as resources to conserve. Think of them as tasks to launch.

Method 1: Thread.ofVirtual().start()

The new Thread.Builder API (added in Java 21) gives you a fluent way to create threads. Thread.ofVirtual() returns a builder for virtual threads, and Thread.ofPlatform() returns one for platform threads.

// Create and start a virtual thread
Thread vThread = Thread.ofVirtual().start(() -> {
    System.out.println("Hello from virtual thread: " + Thread.currentThread());
});

// Wait for it to finish
vThread.join();

// Output: Hello from virtual thread: VirtualThread[#21]/runnable@ForkJoinPool-1-worker-1

Method 2: Thread.startVirtualThread()

This is the simplest way — a one-liner that creates and starts a virtual thread immediately.

// One-liner to start a virtual thread
Thread vThread = Thread.startVirtualThread(() -> {
    System.out.println("Running in a virtual thread");
    System.out.println("Is virtual: " + Thread.currentThread().isVirtual()); // true
});

vThread.join();

Method 3: Thread.ofVirtual() with Unstarted Thread

Sometimes you want to create the thread but not start it immediately. Use unstarted() for that.

// Create without starting
Thread vThread = Thread.ofVirtual()
    .name("my-worker")
    .unstarted(() -> {
        System.out.println(Thread.currentThread().getName()); // "my-worker"
        // do work here
    });

// Start later when ready
vThread.start();
vThread.join();

Method 4: Naming Virtual Threads with a Factory

When you are creating thousands of virtual threads, you want meaningful names for debugging. Use the name() method with a prefix and start number to get auto-incrementing names.

// Create a thread factory with auto-incrementing names
ThreadFactory factory = Thread.ofVirtual()
    .name("worker-", 0)  // worker-0, worker-1, worker-2, ...
    .factory();

// Use the factory to create threads
for (int i = 0; i < 5; i++) {
    Thread t = factory.newThread(() -> {
        System.out.println(Thread.currentThread().getName() + " running");
    });
    t.start();
}

// Output:
// worker-0 running
// worker-1 running
// worker-2 running
// worker-3 running
// worker-4 running

Checking If a Thread Is Virtual

Use Thread.currentThread().isVirtual() to check at runtime.

Thread.startVirtualThread(() -> {
    Thread current = Thread.currentThread();
    System.out.println("Name: " + current.getName());
    System.out.println("Is virtual: " + current.isVirtual());    // true
    System.out.println("Is daemon: " + current.isDaemon());      // true (always)
    System.out.println("Thread ID: " + current.threadId());
});

Important note: Virtual threads are always daemon threads. You cannot change this. If the main thread exits, virtual threads will be terminated. This is by design — virtual threads are meant for tasks, not for long-lived background processing that outlives the application.
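A minimal sketch of that behavior: the join() call is what guarantees the virtual thread's work completes before the program moves on, since the JVM will not wait for daemon threads on its own.

```java
import java.util.concurrent.atomic.AtomicBoolean;

AtomicBoolean done = new AtomicBoolean(false);

// Virtual threads are always daemons: the JVM will not stay alive for them
Thread vt = Thread.startVirtualThread(() -> done.set(true));

// join() is what guarantees the task ran before we move on
vt.join();

System.out.println("virtual=" + vt.isVirtual()
    + ", daemon=" + vt.isDaemon()
    + ", done=" + done.get());
// virtual=true, daemon=true, done=true
```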

4. Virtual Thread Executors

In real applications, you rarely create threads directly. You use ExecutorService to submit tasks. Java 21 adds a new executor designed specifically for virtual threads: Executors.newVirtualThreadPerTaskExecutor(). This executor creates a new virtual thread for every submitted task — there is no pool, no queue, no capacity limit.

Basic Usage

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Every task gets its own virtual thread -- no pooling
try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {

    for (int i = 0; i < 10_000; i++) {
        final int taskId = i;
        executor.submit(() -> {
            // Simulate an I/O operation (database call, HTTP request, etc.)
            Thread.sleep(1000);
            System.out.println("Task " + taskId + " completed on " + Thread.currentThread());
            return taskId;
        });
    }

} // executor.close() is called here, which waits for all tasks to complete
// All 10,000 tasks complete in ~1 second, not 10,000 seconds

Migrating from Fixed Thread Pools

If you have existing code that uses Executors.newFixedThreadPool(), the migration is often a one-line change. Here is a before-and-after comparison.

// BEFORE: Fixed thread pool with 200 threads
// At most 200 tasks run concurrently. Others queue and wait.
ExecutorService executor = Executors.newFixedThreadPool(200);

// AFTER: Virtual thread executor
// Every task runs immediately in its own virtual thread. No queuing.
ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();

Warning: Before making this swap, think about whether the fixed pool size was serving as a rate limiter. If you had 200 threads because your database can only handle 200 connections, switching to virtual threads will allow all tasks to run at once, potentially overwhelming your database. In that case, use a Semaphore to control concurrency.

import java.util.concurrent.Semaphore;

// Control concurrency when downstream resources have limits
Semaphore dbPermits = new Semaphore(200); // max 200 concurrent DB calls

try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    for (int i = 0; i < 100_000; i++) {
        executor.submit(() -> {
            dbPermits.acquire();
            try {
                // At most 200 virtual threads execute this block at once
                return queryDatabase();
            } finally {
                dbPermits.release();
            }
        });
    }
}

5. How Virtual Threads Work

Understanding the internals helps you use virtual threads correctly and debug problems when they arise. The key concept is the carrier thread model.

Carrier Threads

The JVM maintains a pool of platform threads called carrier threads. By default, this pool is a ForkJoinPool with a number of threads equal to the number of available CPU cores. Virtual threads are mounted onto carrier threads to execute. When a virtual thread performs a blocking operation, it is unmounted from the carrier thread, and the carrier thread is free to run another virtual thread.

Think of carrier threads as taxi cabs and virtual threads as passengers. There are a limited number of cabs (carrier threads = CPU cores), but they can serve many passengers (virtual threads) throughout the day. When a passenger arrives at a stop and goes inside a building (blocking I/O), the cab does not wait — it picks up another passenger.

Mounting and Unmounting

Here is what happens step by step when a virtual thread runs:

  1. Mount: The JVM scheduler picks a virtual thread from the run queue and mounts it onto an available carrier thread. The virtual thread’s stack (its continuation) is loaded.
  2. Execute: The virtual thread runs on the carrier thread, executing bytecode normally. To the JVM, it looks like regular code running on a regular thread.
  3. Block: When the virtual thread hits a blocking operation (e.g., Socket.read(), Thread.sleep(), Lock.lock()), the JVM intercepts it.
  4. Unmount: The virtual thread’s stack is saved (its continuation is parked). The carrier thread is released back to the pool.
  5. Resume: When the I/O completes (data arrives, sleep expires, lock is acquired), the virtual thread is put back on the run queue.
  6. Remount: A carrier thread picks it up (possibly a different carrier than before) and the virtual thread continues execution from where it left off.
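The effect of this cycle is easy to measure, even though it is invisible in code. In this sketch 5,000 virtual threads all block in sleep() concurrently, and the whole batch completes in roughly one sleep interval because every sleeper releases its carrier:

```java
import java.time.Duration;
import java.util.concurrent.Executors;

long start = System.nanoTime();

// 5,000 tasks, each "blocking" for 200 ms
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    for (int i = 0; i < 5_000; i++) {
        executor.submit(() -> {
            Thread.sleep(Duration.ofMillis(200)); // unmount: the carrier is freed here
            return null;
        });
    }
} // close() waits for every task to finish

long elapsedMs = (System.nanoTime() - start) / 1_000_000;
System.out.println("5,000 blocking tasks finished in ~" + elapsedMs + " ms");
// Close to 200 ms, not 5,000 x 200 ms, because carriers are shared
```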

Continuation-Based Scheduling

Under the hood, virtual threads use continuations — a mechanism that allows the JVM to save and restore the execution state of a thread. A continuation captures the entire call stack, local variables, and program counter. When a virtual thread is unmounted, its continuation is stored on the heap (not on the OS thread stack). This is why virtual threads are so memory-efficient: their stack starts small and grows on the heap as needed, unlike platform threads that pre-allocate a fixed stack.

// Demonstrating the carrier thread behavior
Thread.startVirtualThread(() -> {
    System.out.println("Before sleep - carrier: " + carrierThread());

    try {
        Thread.sleep(100); // Virtual thread unmounts here
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }

    // After sleep, may be on a DIFFERENT carrier thread
    System.out.println("After sleep - carrier: " + carrierThread());
});

// Helper to get carrier thread info (for demonstration)
// In production, use -Djdk.tracePinnedThreads=full
static String carrierThread() {
    // Virtual thread toString includes carrier info
    return Thread.currentThread().toString();
}

ForkJoinPool Configuration

The carrier thread pool can be configured via system properties:

Property | Default | Description
jdk.virtualThreadScheduler.parallelism | Number of CPU cores | Number of carrier threads
jdk.virtualThreadScheduler.maxPoolSize | 256 | Maximum carrier threads (for compensating pinned threads)
jdk.virtualThreadScheduler.minRunnable | 1 | Minimum runnable threads before creating compensation threads

6. Blocking Operations

The magic of virtual threads is how they handle blocking. In traditional Java, when a thread calls InputStream.read() or Socket.accept(), the OS thread blocks — it sits idle, consuming memory and an OS scheduling slot, until the I/O completes. With virtual threads, the JVM intercepts the blocking call and unmounts the virtual thread from its carrier, freeing the carrier to do other work.

Which Operations Unmount Correctly?

The JDK has been updated so that most blocking operations properly unmount virtual threads. Here is what works:

Operation | Virtual Thread Behavior | Notes
Thread.sleep() | Unmounts correctly | Virtual thread yields carrier
Socket.read()/write() | Unmounts correctly | Non-blocking I/O under the hood
InputStream/OutputStream | Unmounts correctly | Rewired to use non-blocking I/O
java.net.http.HttpClient | Unmounts correctly | Already async internally
JDBC (most drivers) | Unmounts correctly | Uses socket I/O which is intercepted
ReentrantLock.lock() | Unmounts correctly | Virtual-thread-aware since Java 21
BlockingQueue.take() | Unmounts correctly | Uses LockSupport.park internally
LockSupport.park() | Unmounts correctly | Core parking mechanism for virtual threads
synchronized block | PINS the carrier | Does NOT unmount — see Thread Pinning section
JNI / native code | PINS the carrier | Cannot unmount during native execution
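Because synchronized pins the carrier while ReentrantLock does not, the standard mitigation is to guard blocking critical sections with a ReentrantLock. A minimal sketch of that pattern:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.locks.ReentrantLock;

ReentrantLock lock = new ReentrantLock(); // virtual-thread-friendly, unlike synchronized
int[] counter = {0};

try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    for (int i = 0; i < 100; i++) {
        executor.submit(() -> {
            lock.lock(); // waiting here unmounts the virtual thread
            try {
                counter[0]++;
                Thread.sleep(1); // blocking while holding the lock does not pin the carrier
            } finally {
                lock.unlock();
            }
            return null;
        });
    }
}

System.out.println("counter = " + counter[0]); // counter = 100
```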

JDBC Example

Here is a practical example showing how virtual threads handle database calls. Each virtual thread blocks on JDBC, but the carrier threads stay busy serving other virtual threads.

import java.sql.*;
import java.util.*;
import java.util.concurrent.*;

public class VirtualThreadJdbc {

    private static final String DB_URL = "jdbc:postgresql://localhost:5432/mydb";

    public static void main(String[] args) throws Exception {

        long start = System.currentTimeMillis();

        // Launch 1,000 concurrent database queries
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {

            List<Future<String>> futures = new ArrayList<>();

            for (int i = 0; i < 1000; i++) {
                final int userId = i;
                futures.add(executor.submit(() -> queryUser(userId)));
            }

            // Collect results
            for (Future<String> future : futures) {
                String result = future.get(); // blocks, but cheaply
            }
        }

        long elapsed = System.currentTimeMillis() - start;
        System.out.println("1,000 DB queries completed in " + elapsed + "ms");
        // With platform threads (pool of 50): ~20 seconds
        // With virtual threads: ~1 second (limited by DB connection pool)
    }

    static String queryUser(int userId) throws SQLException {
        try (Connection conn = DriverManager.getConnection(DB_URL, "user", "pass");
             PreparedStatement stmt = conn.prepareStatement(
                 "SELECT name FROM users WHERE id = ?")) {

            stmt.setInt(1, userId);
            ResultSet rs = stmt.executeQuery(); // blocks here -- virtual thread unmounts

            if (rs.next()) {
                return rs.getString("name");
            }
            return "not found";
        }
    }
}

HTTP Client Example

The built-in java.net.http.HttpClient works beautifully with virtual threads. Each request blocks the virtual thread (not the carrier), allowing thousands of concurrent HTTP calls.

import java.net.URI;
import java.net.http.*;
import java.util.concurrent.*;
import java.util.List;
import java.util.ArrayList;

public class VirtualThreadHttp {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {

            List<Future<Integer>> futures = new ArrayList<>();

            // Fire off 500 HTTP requests concurrently
            for (int i = 0; i < 500; i++) {
                futures.add(executor.submit(() -> {
                    HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("https://httpbin.org/delay/1"))
                        .GET()
                        .build();

                    HttpResponse<String> response = client.send(
                        request, HttpResponse.BodyHandlers.ofString()
                    ); // blocks the virtual thread, not the carrier

                    return response.statusCode();
                }));
            }

            long successCount = futures.stream()
                .map(f -> {
                    try { return f.get(); }
                    catch (Exception e) { return -1; }
                })
                .filter(code -> code == 200)
                .count();

            System.out.println("Successful requests: " + successCount + "/500");
        }
    }
}

File I/O

File I/O with java.nio also works with virtual threads. The virtual thread unmounts while waiting for disk I/O.

import java.nio.file.*;
import java.util.*;
import java.util.concurrent.*;

try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    // Read 1000 files concurrently
    List<Future<String>> results = new ArrayList<>();

    for (int i = 0; i < 1000; i++) {
        Path file = Path.of("/data/files/file_" + i + ".txt");
        results.add(executor.submit(() -> {
            return Files.readString(file); // unmounts while reading from disk
        }));
    }

    for (Future<String> result : results) {
        String content = result.get();
        process(content);
    }
}

7. Structured Concurrency (Preview)

Structured concurrency (JEP 453, preview in Java 21) brings a disciplined approach to managing multiple concurrent operations. The core idea is that concurrent tasks should have a clear owner and a defined lifetime, just like local variables have a clear scope. When you open a StructuredTaskScope, all tasks launched within it are bounded by that scope — if the scope closes, all tasks are cancelled and cleaned up.

Think of it like a meeting: you assign tasks to people at the start of the meeting, and the meeting does not end until everyone reports back. If someone fails catastrophically, you end the meeting early and cancel the remaining work.

ShutdownOnFailure — All Must Succeed

Use ShutdownOnFailure when you need all subtasks to complete successfully. If any one fails, the scope shuts down and cancels the rest.

// In Java 21, StructuredTaskScope lives in java.util.concurrent (preview API)
import java.util.concurrent.StructuredTaskScope;
import java.util.concurrent.StructuredTaskScope.Subtask;

// Fetch user data from multiple services -- all must succeed
record UserProfile(String name, String email, List<Order> orders) {}

UserProfile fetchUserProfile(long userId) throws Exception {

    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {

        // Fork three concurrent tasks
        Subtask<String> nameFuture   = scope.fork(() -> fetchUserName(userId));
        Subtask<String> emailFuture  = scope.fork(() -> fetchUserEmail(userId));
        Subtask<List<Order>> ordersFuture = scope.fork(() -> fetchUserOrders(userId));

        // Wait for all tasks to complete (or one to fail)
        scope.join();

        // Throw if any task failed
        scope.throwIfFailed();

        // All succeeded -- combine results
        return new UserProfile(
            nameFuture.get(),
            emailFuture.get(),
            ordersFuture.get()
        );
    }
    // If fetchUserEmail throws, fetchUserName and fetchUserOrders are cancelled
}

ShutdownOnSuccess — First One Wins

Use ShutdownOnSuccess when you want the result of the first task to complete successfully. The rest are cancelled.

import java.util.concurrent.StructuredTaskScope.ShutdownOnSuccess;

// Query multiple replicas -- use the first response
String queryWithFallback(String query) throws Exception {

    try (var scope = new StructuredTaskScope.ShutdownOnSuccess<String>()) {

        // Race three database replicas
        scope.fork(() -> queryReplica("replica-1.db.internal", query));
        scope.fork(() -> queryReplica("replica-2.db.internal", query));
        scope.fork(() -> queryReplica("replica-3.db.internal", query));

        scope.join();

        // Return the first successful result
        return scope.result();
    }
    // The slower replicas are automatically cancelled
}

Fan-Out Pattern

Structured concurrency excels at the fan-out/fan-in pattern where you split work into many subtasks and combine the results. Here is a practical example that fetches data from multiple APIs in parallel.

// Fan-out: fetch prices from multiple vendors concurrently
record PriceQuote(String vendor, double price) {}

List<PriceQuote> getBestPrices(String product) throws Exception {

    List<String> vendors = List.of(
        "https://vendor-a.com/api/price",
        "https://vendor-b.com/api/price",
        "https://vendor-c.com/api/price",
        "https://vendor-d.com/api/price",
        "https://vendor-e.com/api/price"
    );

    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {

        // Fork one task per vendor
        List<Subtask<PriceQuote>> tasks = vendors.stream()
            .map(url -> scope.fork(() -> fetchPrice(url, product)))
            .toList();

        scope.join();
        scope.throwIfFailed();

        // Collect and sort by price
        return tasks.stream()
            .map(Subtask::get)
            .sorted(Comparator.comparingDouble(PriceQuote::price))
            .toList();
    }
}

PriceQuote fetchPrice(String vendorUrl, String product) throws Exception {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(vendorUrl + "?product=" + product))
        .build();
    HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
    // Parse JSON response...
    return new PriceQuote(vendorUrl, parsePrice(response.body()));
}

Note: Structured concurrency is a preview API in Java 21, living in java.util.concurrent. To use it, compile and run with --enable-preview (the jdk.incubator.concurrent module was only used in the Java 19 and 20 incubator releases).

8. Scoped Values (Preview)

Scoped values (JEP 446, preview in Java 21) are a modern replacement for ThreadLocal that work well with virtual threads. The problem with ThreadLocal is that each thread gets its own mutable copy of the value, and when you have millions of virtual threads, that means millions of copies consuming memory. Scoped values solve this by providing immutable, bounded-lifetime values that are automatically shared with child threads.

ThreadLocal Problems with Virtual Threads

Problem ThreadLocal ScopedValue
Memory One copy per thread (millions = millions of copies) Shared immutably; no per-thread copies
Mutability Mutable — can be changed anywhere, hard to track Immutable within a scope — set once, read many
Lifetime Lives until explicitly removed (easy to leak) Bounded to scope — automatically cleaned up
Inheritance InheritableThreadLocal copies to child threads (expensive) Naturally inherited without copying
Debugging Hard to trace where value was set Clear scope boundary in code

Basic ScopedValue Usage

// ScopedValue lives in java.lang (preview in Java 21) -- no import needed

public class RequestHandler {

    // Declare a scoped value -- typically a static final field
    private static final ScopedValue<String> CURRENT_USER = ScopedValue.newInstance();
    private static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

    void handleRequest(HttpServletRequest request) {
        String user = request.getHeader("X-User");
        String requestId = request.getHeader("X-Request-Id");

        // Bind values for the duration of the lambda
        ScopedValue.where(CURRENT_USER, user)
            .where(REQUEST_ID, requestId)
            .run(() -> {
                // All code in this scope (and child threads) can read these values
                processOrder();
            });
        // Values are automatically unbound here
    }

    void processOrder() {
        // Read the scoped value -- no need to pass it as a parameter
        String user = CURRENT_USER.get();
        String requestId = REQUEST_ID.get();

        System.out.println("[" + requestId + "] Processing order for " + user);

        auditLog(); // Also has access to CURRENT_USER and REQUEST_ID
    }

    void auditLog() {
        // Scoped values are available deep in the call stack
        System.out.println("Audit: " + CURRENT_USER.get() + " at " + Instant.now());
    }
}

ScopedValue with Structured Concurrency

Scoped values combine naturally with structured concurrency. Values bound in the parent scope are automatically visible to forked subtasks.

private static final ScopedValue<String> TENANT_ID = ScopedValue.newInstance();

void processMultiTenantRequest(String tenantId) throws Exception {
    ScopedValue.where(TENANT_ID, tenantId).call(() -> {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            // Both forked tasks can read TENANT_ID
            scope.fork(() -> loadTenantConfig()); // reads TENANT_ID.get()
            scope.fork(() -> loadTenantData());   // reads TENANT_ID.get()

            scope.join();
            scope.throwIfFailed();
            return null; // call() is used instead of run() so join()'s checked exceptions can propagate
        }
    });
}

String loadTenantConfig() {
    String tenant = TENANT_ID.get(); // works -- inherited from parent scope
    return "config for " + tenant;
}

Note: Scoped values are a preview feature in Java 21. ScopedValue lives in java.lang, so no extra module is needed; compile and run with --enable-preview.

9. Virtual Threads with Spring Boot

Spring Boot 3.2 (released November 2023) added first-class support for virtual threads. Enabling them is remarkably simple — a single configuration property — but understanding what happens under the hood helps you make the right architectural decisions.

Enabling Virtual Threads

Add this to your application.properties or application.yml:

# application.properties
spring.threads.virtual.enabled=true

Or in YAML:

spring:
  threads:
    virtual:
      enabled: true

That single property does the following:

  • The embedded Tomcat/Jetty/Undertow server creates a virtual thread for each incoming HTTP request instead of using its platform thread pool
  • Spring MVC controller methods run on virtual threads
  • @Async methods use virtual threads by default
  • Spring task scheduling (@Scheduled) uses virtual threads

What This Means for Your Architecture

Component | Without Virtual Threads | With Virtual Threads
Tomcat thread pool | 200 threads (default max) | Unlimited virtual threads
Concurrent request capacity | ~200 simultaneous requests | Thousands to millions
Blocking JDBC calls | Ties up a Tomcat thread | Virtual thread unmounts; Tomcat capacity unaffected
Code changes needed | N/A | None — same @Controller, @Service, @Repository

Spring Boot + Virtual Threads Example

@RestController
@RequestMapping("/api/users")
public class UserController {

    private final UserService userService;
    private final OrderService orderService;

    @GetMapping("/{id}")
    public ResponseEntity<UserProfile> getUser(@PathVariable Long id) {
        // This method runs on a virtual thread (when spring.threads.virtual.enabled=true)
        // The JDBC calls inside these services block the virtual thread,
        // NOT the carrier thread

        User user = userService.findById(id);           // JDBC call -- blocks cheaply
        List<Order> orders = orderService.findByUser(id); // JDBC call -- blocks cheaply

        // No reactive code needed. Simple, readable, synchronous code.
        return ResponseEntity.ok(new UserProfile(user, orders));
    }
}

@Service
public class UserService {

    private final JdbcTemplate jdbc;

    public User findById(Long id) {
        // This blocking call unmounts the virtual thread from its carrier
        return jdbc.queryForObject(
            "SELECT * FROM users WHERE id = ?",
            new UserRowMapper(),
            id
        );
    }
}

Virtual Threads vs WebFlux

With virtual threads, you might wonder: do I still need Spring WebFlux? Here is how they compare:

Aspect | Spring MVC + Virtual Threads | Spring WebFlux
Programming model | Imperative, blocking, synchronous | Reactive, non-blocking, async
Code complexity | Simple — looks like normal Java | Complex — Mono, Flux, operators everywhere
Debugging | Normal stack traces | Difficult — async stack traces are fragmented
Ecosystem | All existing libraries work (JDBC, JPA, etc.) | Requires reactive drivers (R2DBC, WebClient)
Throughput | Excellent for I/O-bound workloads | Excellent for I/O-bound workloads
Backpressure | Not built-in (use Semaphore) | Built-in with reactive streams
Learning curve | Low — same Spring MVC you know | Steep — reactive programming is a paradigm shift

Recommendation: For most new Spring Boot applications, Spring MVC + virtual threads is now the better default choice. Choose WebFlux only if you need streaming data, backpressure, or are already invested in the reactive ecosystem.

10. Performance Comparison

Let us put numbers to the theory. This benchmark compares platform threads and virtual threads handling 10,000 concurrent simulated HTTP calls, each with a 1-second latency.

import java.util.concurrent.*;
import java.util.List;
import java.util.ArrayList;

public class ThreadBenchmark {

    static final int TASK_COUNT = 10_000;
    static final int IO_LATENCY_MS = 1000; // Simulate 1-second I/O per task

    public static void main(String[] args) throws Exception {
        System.out.println("=== Thread Benchmark: " + TASK_COUNT + " concurrent I/O tasks ===\n");

        // Benchmark 1: Platform threads with fixed pool
        benchmarkPlatformThreads(50);   // Typical pool size
        benchmarkPlatformThreads(200);  // Large pool

        // Benchmark 2: Virtual threads
        benchmarkVirtualThreads();

        // Benchmark 3: Platform threads trying to match virtual (will likely OOM)
        // benchmarkPlatformThreads(10_000); // Don't try this at home
    }

    static void benchmarkPlatformThreads(int poolSize) throws Exception {
        System.out.println("Platform Threads (pool=" + poolSize + "):");
        long start = System.currentTimeMillis();

        try (var executor = Executors.newFixedThreadPool(poolSize)) {
            List<Future<Integer>> futures = new ArrayList<>();

            for (int i = 0; i < TASK_COUNT; i++) {
                futures.add(executor.submit(() -> {
                    Thread.sleep(IO_LATENCY_MS); // Simulate I/O
                    return 1;
                }));
            }

            int completed = 0;
            for (Future<Integer> f : futures) {
                completed += f.get();
            }

            long elapsed = System.currentTimeMillis() - start;
            System.out.println("  Completed: " + completed + " tasks");
            System.out.println("  Time: " + elapsed + "ms");
            System.out.println("  Throughput: " + (TASK_COUNT * 1000L / elapsed) + " tasks/sec\n");
        }
    }

    static void benchmarkVirtualThreads() throws Exception {
        System.out.println("Virtual Threads:");
        long start = System.currentTimeMillis();

        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<Integer>> futures = new ArrayList<>();

            for (int i = 0; i < TASK_COUNT; i++) {
                futures.add(executor.submit(() -> {
                    Thread.sleep(IO_LATENCY_MS); // Simulate I/O
                    return 1;
                }));
            }

            int completed = 0;
            for (Future<Integer> f : futures) {
                completed += f.get();
            }

            long elapsed = System.currentTimeMillis() - start;
            System.out.println("  Completed: " + completed + " tasks");
            System.out.println("  Time: " + elapsed + "ms");
            System.out.println("  Throughput: " + (TASK_COUNT * 1000L / elapsed) + " tasks/sec\n");
        }
    }
}

Expected Results

Configuration | Time | Throughput | Memory
Platform Threads (pool=50) | ~200 seconds | ~50 tasks/sec | ~50 MB
Platform Threads (pool=200) | ~50 seconds | ~200 tasks/sec | ~200 MB
Virtual Threads | ~1-2 seconds | ~5,000-10,000 tasks/sec | ~20 MB

The result is dramatic. Virtual threads complete all 10,000 tasks in about the time it takes for one I/O operation, because all 10,000 virtual threads are blocked simultaneously — and blocking is free. Platform threads can only run as many tasks concurrently as there are threads in the pool, so the rest queue up.

Key insight: The advantage of virtual threads is not that they run code faster. They run code at the same speed. The advantage is that they do not waste resources while waiting. If your application is CPU-bound (crunching numbers, not waiting for I/O), virtual threads offer zero benefit.

11. When NOT to Use Virtual Threads

Virtual threads are powerful, but they are not a silver bullet. There are specific scenarios where they provide no benefit or can actually hurt performance.

1. CPU-Bound Tasks

If your task is pure computation — image processing, encryption, matrix multiplication, data compression — it never blocks. A virtual thread running CPU-bound code occupies its carrier thread the entire time, just like a platform thread would. You gain nothing.

// BAD: No benefit from virtual threads -- CPU-bound work
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    executor.submit(() -> {
        // This never blocks, so the virtual thread never unmounts
        // It just occupies a carrier thread the whole time
        double result = 0;
        for (long i = 0; i < 1_000_000_000L; i++) {
            result += Math.sin(i) * Math.cos(i);
        }
        return result;
    });
}

// GOOD: Use platform threads for CPU-bound work
// Match the number of threads to CPU cores
try (var executor = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors())) {
    executor.submit(() -> {
        double result = 0;
        for (long i = 0; i < 1_000_000_000L; i++) {
            result += Math.sin(i) * Math.cos(i);
        }
        return result;
    });
}

2. Synchronized Blocks (Pinning)

When a virtual thread holds a synchronized lock and then blocks inside the synchronized block, the carrier thread is pinned -- it cannot be reused. This can effectively reduce your application to as many concurrent operations as you have carrier threads (typically the number of CPU cores). We cover this in detail in the Thread Pinning section.

3. Native Code / JNI

When a virtual thread calls native code through JNI, the carrier thread is pinned for the entire duration of the native call. If your application makes heavy use of native libraries, virtual threads will not help.

4. Tasks That Are Already Fast

If each task takes microseconds (e.g., in-memory cache lookups, simple calculations), the overhead of creating and scheduling virtual threads -- while small -- can exceed the actual work. Use platform threads or direct method calls.

5. When Thread Pooling Serves as Rate Limiting

Sometimes a fixed thread pool is intentional: it limits concurrency to protect a downstream resource (database connection pool, rate-limited API). Blindly switching to virtual threads removes that limit. Use a Semaphore if you need explicit concurrency control.
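
As a sketch (the limit of 10 below is a hypothetical stand-in for the old pool size), the fixed pool's implicit cap can be reproduced explicitly:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class BoundedVirtualThreads {
    public static void main(String[] args) {
        // The old fixed pool's size becomes an explicit permit count
        Semaphore permits = new Semaphore(10);

        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 100; i++) {
                executor.submit(() -> {
                    permits.acquire();      // at most 10 tasks proceed concurrently
                    try {
                        Thread.sleep(5);    // stand-in for the protected downstream call
                    } finally {
                        permits.release();
                    }
                    return null;            // Callable, so acquire() may throw checked exceptions
                });
            }
        } // close() waits for all 100 tasks to finish
        System.out.println("done");
    }
}
```

The concurrency limit now lives in one named constant instead of being implied by a pool size, which makes the intent visible to the next reader.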

Decision Flowchart

Question | Answer | Use
Is the task I/O-bound (network, disk, DB)? | Yes | Virtual threads
Is the task CPU-bound? | Yes | Platform threads (ForkJoinPool or fixed pool)
Do you need millions of concurrent tasks? | Yes | Virtual threads
Does the task use synchronized + blocking? | Yes | Refactor to ReentrantLock first, then virtual threads
Does the task call native code heavily? | Yes | Platform threads
Is the task very short-lived (microseconds)? | Yes | Platform threads or direct execution

12. Thread Pinning

Thread pinning is the most important pitfall to understand when adopting virtual threads. It occurs when a virtual thread cannot unmount from its carrier thread, even though it is blocked. The carrier thread becomes dedicated to that one virtual thread, defeating the entire purpose of virtual threads.

What Causes Pinning?

There are two situations that cause pinning:

  1. Blocking inside a synchronized block or method -- The JVM cannot unmount the virtual thread because the object monitor is tied to the OS thread.
  2. Blocking during a native method call (JNI) -- The native code expects to run on the same OS thread throughout.

Pinning Example

public class PinningExample {

    private final Object lock = new Object();

    // BAD: This pins the carrier thread
    void pinnedMethod() {
        synchronized (lock) {
            // This blocks INSIDE a synchronized block
            // The virtual thread CANNOT unmount -- carrier thread is stuck
            try {
                Thread.sleep(1000); // Carrier is pinned for 1 full second
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // GOOD: Use ReentrantLock instead
    private final ReentrantLock reentrantLock = new ReentrantLock();

    void unpinnedMethod() {
        reentrantLock.lock();
        try {
            // This blocks INSIDE a ReentrantLock
            // The virtual thread CAN unmount -- carrier thread is free
            Thread.sleep(1000); // Carrier is released during sleep
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            reentrantLock.unlock();
        }
    }
}

How to Detect Pinning

The JVM provides a system property to log pinning events:

// Run your application with this flag:
// java -Djdk.tracePinnedThreads=full MyApp

// Output shows the full stack trace where pinning occurs:
// Thread[#28,ForkJoinPool-1-worker-1,5,CarrierThreads]
//     java.base/java.lang.VirtualThread$VThreadContinuation.onPinned(VirtualThread.java:180)
//     java.base/jdk.internal.vm.Continuation.onPinned0(Continuation.java:393)
//     java.base/java.lang.VirtualThread.parkOnCarrierThread(VirtualThread.java:661)
//     ...at com.example.PinningExample.pinnedMethod(PinningExample.java:8)

// Use "short" for just the location:
// java -Djdk.tracePinnedThreads=short MyApp

How to Fix Pinning

The fix is straightforward: replace synchronized with java.util.concurrent.locks.ReentrantLock. Here is a systematic approach:

// STEP 1: Identify synchronized blocks (search your codebase)
// grep -rn "synchronized" src/

// STEP 2: Replace synchronized block with ReentrantLock

// BEFORE
public class ConnectionPool {
    private final List<Connection> available = new ArrayList<>();

    public synchronized Connection acquire() {
        while (available.isEmpty()) {
            wait(); // PINS the carrier thread!
        }
        return available.remove(0);
    }

    public synchronized void release(Connection conn) {
        available.add(conn);
        notifyAll();
    }
}

// AFTER
public class ConnectionPool {
    private final List<Connection> available = new ArrayList<>();
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();

    public Connection acquire() throws InterruptedException {
        lock.lock();
        try {
            while (available.isEmpty()) {
                notEmpty.await(); // Does NOT pin the carrier thread
            }
            return available.remove(0);
        } finally {
            lock.unlock();
        }
    }

    public void release(Connection conn) {
        lock.lock();
        try {
            available.add(conn);
            notEmpty.signalAll();
        } finally {
            lock.unlock();
        }
    }
}

Monitoring Pinning with JFR

Java Flight Recorder (JFR) can capture pinning events in production:

// Start your app with JFR enabled
// java -XX:StartFlightRecording=filename=recording.jfr,duration=60s MyApp

// Then analyze the recording with JDK Mission Control (JMC)
// Look for jdk.VirtualThreadPinned events

// Or programmatically:
import jdk.jfr.consumer.RecordingFile;
import java.nio.file.Path;

try (var file = new RecordingFile(Path.of("recording.jfr"))) {
    while (file.hasMoreEvents()) {
        var event = file.readEvent();
        if (event.getEventType().getName().equals("jdk.VirtualThreadPinned")) {
            System.out.println("Pinned at: " + event.getStackTrace());
            System.out.println("Duration: " + event.getDuration());
        }
    }
}

Key takeaway: Thread pinning is not a bug -- it is a known limitation. The JVM handles it gracefully by creating compensation threads (up to jdk.virtualThreadScheduler.maxPoolSize, default 256). But frequent pinning reduces throughput. Identify and fix the worst offenders, especially in hot paths like connection pools, caches, and request processing pipelines.
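
The carrier pool itself can be tuned with two system properties (the values below are illustrative, not recommendations):

```
// Number of carrier threads (defaults to the number of available processors):
// java -Djdk.virtualThreadScheduler.parallelism=8 MyApp

// Ceiling for compensation threads created when carriers are pinned (default 256):
// java -Djdk.virtualThreadScheduler.maxPoolSize=256 MyApp
```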

13. Migration Guide

Migrating from traditional thread pools to virtual threads is usually straightforward, but it requires careful analysis. Here is a step-by-step process.

Step 1: Inventory Your Thread Usage

Find every place in your codebase that creates threads or thread pools:

// Search for these patterns in your codebase:
// 1. Direct thread creation
new Thread(...)
Thread.ofPlatform()

// 2. Executor services
Executors.newFixedThreadPool(...)
Executors.newCachedThreadPool()
Executors.newSingleThreadExecutor()
new ThreadPoolExecutor(...)

// 3. Scheduled executors
Executors.newScheduledThreadPool(...)

// 4. CompletableFuture with custom executors
CompletableFuture.supplyAsync(task, executor)

// 5. Parallel streams (these use ForkJoinPool, not affected by virtual threads)
list.parallelStream()

Step 2: Classify Each Usage

Current Usage | Purpose | Migration Action
newFixedThreadPool(n) for I/O tasks | Handle concurrent I/O | Replace with newVirtualThreadPerTaskExecutor()
newFixedThreadPool(n) for rate limiting | Limit concurrent access to a resource | Replace with virtual threads + Semaphore(n)
newCachedThreadPool() | Dynamic thread pool for I/O | Replace with newVirtualThreadPerTaskExecutor()
newFixedThreadPool(cores) for CPU tasks | Parallel computation | Keep as-is -- virtual threads do not help
newScheduledThreadPool(n) | Periodic tasks | Keep as-is (no virtual thread equivalent yet)
new Thread(...) | One-off task | Replace with Thread.startVirtualThread(...)

Step 3: Check for Pinning Risks

// Search your codebase for synchronized blocks that contain blocking calls
// These WILL cause pinning and should be refactored first

// Pattern 1: synchronized + I/O
synchronized (lock) {
    inputStream.read(); // PINS carrier
}

// Pattern 2: synchronized + sleep/wait
synchronized (lock) {
    Thread.sleep(100); // PINS carrier
    lock.wait();       // PINS carrier
}

// Pattern 3: synchronized + JDBC
synchronized (lock) {
    statement.executeQuery(); // PINS carrier (if JDBC driver blocks)
}

// Fix: Replace synchronized with ReentrantLock (see Thread Pinning section)

Step 4: Audit ThreadLocal Usage

ThreadLocal works with virtual threads but is wasteful. If you have a million virtual threads, each gets its own copy of every ThreadLocal. Audit your usage:

// BEFORE: ThreadLocal in an environment with many virtual threads
private static final ThreadLocal<SimpleDateFormat> dateFormat =
    ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

// AFTER: Use a local variable (if possible) or a thread-safe alternative
private static final DateTimeFormatter dateFormat = DateTimeFormatter.ofPattern("yyyy-MM-dd");

// For request-scoped data, consider ScopedValue (preview):
private static final ScopedValue<RequestContext> CONTEXT = ScopedValue.newInstance(); // RequestContext: your request-scoped type

Step 5: Migrate Incrementally

Do not convert everything at once. Start with one executor, measure the impact, and then expand.

// Phase 1: Feature flag the migration
public class ExecutorFactory {

    private static final boolean USE_VIRTUAL_THREADS =
        Boolean.getBoolean("app.useVirtualThreads"); // -Dapp.useVirtualThreads=true

    public static ExecutorService createIOExecutor() {
        if (USE_VIRTUAL_THREADS) {
            return Executors.newVirtualThreadPerTaskExecutor();
        } else {
            return Executors.newFixedThreadPool(200);
        }
    }

    public static ExecutorService createCPUExecutor() {
        // Always use platform threads for CPU-bound work
        return Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors()
        );
    }
}

// Phase 2: Monitor with JFR and -Djdk.tracePinnedThreads=short
// Phase 3: Fix any pinning issues found
// Phase 4: Remove the feature flag, make virtual threads the default

14. Best Practices

After working with virtual threads across multiple production systems, these are the practices that matter most.

1. Do Not Pool Virtual Threads

Pooling virtual threads defeats their purpose. They are designed to be created and discarded freely. Creating a pool of virtual threads is like buying a fleet of taxis and then only using three at a time.

// BAD: Pooling virtual threads
ExecutorService pool = Executors.newFixedThreadPool(100, Thread.ofVirtual().factory());
// This creates a pool of 100 virtual threads. Why limit yourself?

// GOOD: Create one per task
ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();

2. Avoid ThreadLocal -- Use ScopedValue or Parameters

// BAD: ThreadLocal with virtual threads (1 million copies)
private static final ThreadLocal<UserContext> context = new ThreadLocal<>();

// GOOD option 1: Pass values explicitly
void processRequest(UserContext context) {
    handleOrder(context);
}

// GOOD option 2: Use ScopedValue (preview)
private static final ScopedValue<UserContext> CONTEXT = ScopedValue.newInstance();
ScopedValue.where(CONTEXT, userContext).run(() -> processRequest(CONTEXT.get()));

3. Use try-with-resources for Executors

In Java 21, ExecutorService implements AutoCloseable. Always use try-with-resources to ensure tasks complete and resources are cleaned up.

// GOOD: try-with-resources ensures all tasks complete before proceeding
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    for (var url : urls) {
        executor.submit(() -> fetch(url));
    }
} // blocks here until all tasks complete

// BAD: manual shutdown -- easy to forget or get wrong
ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
try {
    for (var url : urls) {
        executor.submit(() -> fetch(url));
    }
} finally {
    executor.shutdown();
    executor.awaitTermination(1, TimeUnit.HOURS); // easy to forget
}

4. Use Semaphores for Concurrency Control

// Control access to limited resources (DB connections, API rate limits)
private final Semaphore dbPermits = new Semaphore(50); // max 50 concurrent DB calls

List<Map<String, Object>> queryWithLimit() throws InterruptedException {
    dbPermits.acquire();
    try {
        // At most 50 virtual threads here at once
        return jdbcTemplate.queryForList("SELECT * FROM orders");
    } finally {
        dbPermits.release();
    }
}

5. Monitor with Java Flight Recorder (JFR)

JFR has been updated with virtual-thread-specific events:

JFR Event | What It Tells You
jdk.VirtualThreadStart | A virtual thread was created and started
jdk.VirtualThreadEnd | A virtual thread completed
jdk.VirtualThreadPinned | A virtual thread was pinned to its carrier (problem!)
jdk.VirtualThreadSubmitFailed | Failed to submit a virtual thread for execution

6. Handle InterruptedException Properly

// Virtual threads respond to interruption -- do not swallow it
Thread.startVirtualThread(() -> {
    try {
        while (!Thread.currentThread().isInterrupted()) {
            String data = blockingRead(); // InterruptedException possible
            process(data);
        }
    } catch (InterruptedException e) {
        // Restore the interrupted status and exit cleanly
        Thread.currentThread().interrupt();
        System.out.println("Virtual thread interrupted, shutting down");
    }
});

7. Do Not Set Thread Priority or Daemon Status

Virtual threads are always daemon threads with normal priority. Attempts to change priority are silently ignored. Do not rely on thread priority for scheduling behavior.
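
Both claims are easy to verify in a few lines:

```java
public class VirtualThreadProperties {
    public static void main(String[] args) {
        Thread vt = Thread.ofVirtual().unstarted(() -> {});
        System.out.println("daemon: " + vt.isDaemon());      // always true for virtual threads
        vt.setPriority(Thread.MAX_PRIORITY);                 // silently ignored
        System.out.println("priority: " + vt.getPriority()); // stays at 5 (NORM_PRIORITY)
    }
}
```

Note that calling setDaemon(false) on a virtual thread is not merely ignored -- it throws IllegalArgumentException, since a virtual thread cannot be a non-daemon thread.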

Summary of Best Practices

Do | Do Not
Create virtual threads freely -- one per task | Pool virtual threads in a fixed pool
Use ReentrantLock for synchronization | Use synchronized around blocking operations
Use try-with-resources for executors | Manually manage executor shutdown
Use ScopedValue for request context | Use ThreadLocal with millions of threads
Use Semaphore for rate limiting | Rely on pool size for concurrency control
Monitor with JFR and -Djdk.tracePinnedThreads | Deploy without observability
Keep virtual threads for I/O-bound tasks | Use virtual threads for CPU-bound computation

15. Complete Example: Concurrent Web Scraper

Let us put everything together with a real-world example: a web scraper that fetches 100 URLs concurrently using virtual threads. This example demonstrates virtual thread creation, executor usage, error handling, concurrency control, and result aggregation.

import java.net.URI;
import java.net.http.*;
import java.time.Duration;
import java.time.Instant;
import java.util.*;
import java.util.concurrent.*;

/**
 * A concurrent web scraper using Java 21 Virtual Threads.
 *
 * Demonstrates:
 * - Executors.newVirtualThreadPerTaskExecutor()
 * - Semaphore for rate limiting
 * - Structured error handling
 * - Result aggregation
 * - Performance measurement
 */
public class VirtualThreadWebScraper {

    // Rate limit: max 20 concurrent HTTP requests
    // (Be a good citizen -- don't overwhelm target servers)
    private static final Semaphore HTTP_PERMITS = new Semaphore(20);

    // HTTP client configured for virtual threads
    private static final HttpClient HTTP_CLIENT = HttpClient.newBuilder()
        .connectTimeout(Duration.ofSeconds(10))
        .followRedirects(HttpClient.Redirect.NORMAL)
        .build();

    // Result record
    record ScrapeResult(String url, int statusCode, int contentLength,
                        long latencyMs, String error) {

        boolean isSuccess() { return error == null && statusCode >= 200 && statusCode < 300; }

        static ScrapeResult success(String url, int statusCode, int contentLength, long latencyMs) {
            return new ScrapeResult(url, statusCode, contentLength, latencyMs, null);
        }

        static ScrapeResult failure(String url, String error) {
            return new ScrapeResult(url, -1, 0, 0, error);
        }
    }

    /**
     * Scrape a list of URLs concurrently using virtual threads.
     */
    public List<ScrapeResult> scrape(List<String> urls) throws InterruptedException {

        List<ScrapeResult> results = Collections.synchronizedList(new ArrayList<>());

        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {

            List<Future<ScrapeResult>> futures = new ArrayList<>();

            for (String url : urls) {
                futures.add(executor.submit(() -> {
                    ScrapeResult result = fetchUrl(url);
                    results.add(result);
                    return result;
                }));
            }

            // Wait for all tasks to complete
            for (Future<ScrapeResult> future : futures) {
                try {
                    future.get(30, TimeUnit.SECONDS);
                } catch (TimeoutException e) {
                    future.cancel(true);
                } catch (ExecutionException e) {
                    // Individual failures are captured in ScrapeResult
                }
            }
        }

        return results;
    }

    /**
     * Fetch a single URL with rate limiting.
     */
    private ScrapeResult fetchUrl(String url) {
        try {
            // Acquire a permit (blocks the virtual thread, not the carrier)
            HTTP_PERMITS.acquire();

            try {
                Instant start = Instant.now();

                HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(url))
                    .header("User-Agent", "VirtualThreadScraper/1.0")
                    .timeout(Duration.ofSeconds(15))
                    .GET()
                    .build();

                // This blocks the virtual thread while waiting for the response
                // The carrier thread is freed to run other virtual threads
                HttpResponse<String> response = HTTP_CLIENT.send(
                    request, HttpResponse.BodyHandlers.ofString()
                );

                long latencyMs = Duration.between(start, Instant.now()).toMillis();

                return ScrapeResult.success(
                    url,
                    response.statusCode(),
                    response.body().length(),
                    latencyMs
                );

            } finally {
                HTTP_PERMITS.release();
            }

        } catch (Exception e) {
            return ScrapeResult.failure(url, e.getClass().getSimpleName() + ": " + e.getMessage());
        }
    }

    /**
     * Print a summary report.
     */
    public void printReport(List<ScrapeResult> results, long totalMs) {
        long successes = results.stream().filter(ScrapeResult::isSuccess).count();
        long failures = results.size() - successes;
        double avgLatency = results.stream()
            .filter(ScrapeResult::isSuccess)
            .mapToLong(ScrapeResult::latencyMs)
            .average()
            .orElse(0);

        System.out.println("\n=== Scrape Report ===");
        System.out.println("Total URLs:      " + results.size());
        System.out.println("Successes:       " + successes);
        System.out.println("Failures:        " + failures);
        System.out.printf("Avg latency:     %.0f ms%n", avgLatency);
        System.out.println("Total time:      " + totalMs + " ms");
        System.out.println("Throughput:      " + (results.size() * 1000L / Math.max(totalMs, 1)) + " urls/sec");

        if (failures > 0) {
            System.out.println("\nFailed URLs:");
            results.stream()
                .filter(r -> !r.isSuccess())
                .forEach(r -> System.out.println("  " + r.url() + " -> " + r.error()));
        }
    }

    // ===== Main =====

    public static void main(String[] args) throws Exception {

        // Generate 100 URLs to scrape
        List<String> urls = new ArrayList<>();
        for (int i = 1; i <= 100; i++) {
            urls.add("https://httpbin.org/delay/1?page=" + i);
        }
        // Add some real sites
        urls.addAll(List.of(
            "https://docs.oracle.com/en/java/",
            "https://spring.io/",
            "https://github.com/",
            "https://stackoverflow.com/"
        ));

        System.out.println("Scraping " + urls.size() + " URLs with virtual threads...");
        System.out.println("Max concurrent requests: " + HTTP_PERMITS.availablePermits());
        System.out.println("Available carrier threads: " +
            Runtime.getRuntime().availableProcessors());

        VirtualThreadWebScraper scraper = new VirtualThreadWebScraper();

        long start = System.currentTimeMillis();
        List<ScrapeResult> results = scraper.scrape(urls);
        long elapsed = System.currentTimeMillis() - start;

        scraper.printReport(results, elapsed);

        // With a platform pool of 20 threads: ~6 seconds (about 100 one-second URLs / 20 threads)
        // With virtual threads: also ~6 seconds here, because the Semaphore caps concurrency at 20.
        // The difference appears at scale -- with thousands of URLs, a platform pool of matching
        // size becomes infeasible, while virtual threads handle them without breaking a sweat.
    }
}

Running the Example

// Compile (no special flags needed -- virtual threads are final in Java 21)
javac VirtualThreadWebScraper.java

// Run
java VirtualThreadWebScraper

// Run with pinning detection enabled
java -Djdk.tracePinnedThreads=short VirtualThreadWebScraper

// Run with JFR for production monitoring
java -XX:StartFlightRecording=filename=scraper.jfr,duration=60s VirtualThreadWebScraper

What This Example Demonstrates

Concept | Where in the Code
Virtual thread executor | Executors.newVirtualThreadPerTaskExecutor() in scrape()
Rate limiting with Semaphore | HTTP_PERMITS.acquire()/release() in fetchUrl()
try-with-resources for executor | try (var executor = ...) ensures all tasks complete
Blocking I/O that unmounts | HTTP_CLIENT.send() releases the carrier thread
Error handling per task | ScrapeResult.failure() captures errors without crashing
Result aggregation | Collections.synchronizedList collects results from all threads
Timeout handling | future.get(30, TimeUnit.SECONDS) with cancellation

Virtual threads transform how we write concurrent Java code. The thread-per-request model that served us for decades is no longer a scalability bottleneck. You can write simple, synchronous, blocking code and let the JVM handle the complexity of multiplexing millions of tasks onto a handful of OS threads. For I/O-bound applications -- which is most web services, microservices, and data processing pipelines -- virtual threads are the most impactful feature in Java since generics.

March 1, 2026

Java 17 Migration Guide (11→17)

1. Introduction

Java 11 was released in September 2018. Java 17 was released in September 2021. Both are Long-Term Support (LTS) releases, and migrating from 11 to 17 is one of the most common upgrade paths in the Java ecosystem today. If you are still on Java 11, you are missing three years of language features, performance improvements, and security updates.

Unlike the Java 8 to 11 migration (which broke many applications due to module system changes and API removals), the 11 to 17 migration is significantly smoother. Most of the painful changes happened in Java 9-11. The 12-17 releases are largely additive — new language features, new APIs, and incremental improvements. That said, there are breaking changes you must plan for, particularly around strong encapsulation of JDK internals.

Here is what is at stake:

Factor | Java 11 | Java 17
LTS Support (Oracle) | Extended support until September 2026 | Extended support until September 2029
LTS Support (Adoptium/Eclipse) | Available but winding down | Actively maintained
Spring Boot Compatibility | Spring Boot 2.x (maintenance mode) | Spring Boot 3.x (active development, Java 17 required)
Language Features | var, HTTP Client | Records, sealed classes, pattern matching, text blocks, switch expressions
Performance | Baseline | 15-20% GC improvements, faster startup, smaller footprint
Security | Known CVEs accumulating | Latest security patches

The bottom line: If you are using Spring Boot and plan to stay current, you must migrate to Java 17 because Spring Boot 3.x requires it. Even without Spring, the language features and performance improvements alone justify the upgrade.

2. New Language Features Summary

Here is every language feature added between Java 12 and Java 17. Features marked as “Standard” are finalized and ready for production use:

Java Version | Feature | Status in 17 | JEP
Java 12 | Switch Expressions (preview) | Standard (Java 14) | JEP 325
Java 12 | Compact Number Formatting | Standard | -
Java 13 | Text Blocks (preview) | Standard (Java 15) | JEP 355
Java 14 | Switch Expressions (finalized) | Standard | JEP 361
Java 14 | Helpful NullPointerExceptions | Standard (default on) | JEP 358
Java 14 | Records (preview) | Standard (Java 16) | JEP 359
Java 14 | Pattern Matching for instanceof (preview) | Standard (Java 16) | JEP 305
Java 15 | Text Blocks (finalized) | Standard | JEP 378
Java 15 | Sealed Classes (preview) | Standard (Java 17) | JEP 360
Java 16 | Records (finalized) | Standard | JEP 395
Java 16 | Pattern Matching for instanceof (finalized) | Standard | JEP 394
Java 16 | Stream.toList() | Standard | -
Java 17 | Sealed Classes (finalized) | Standard | JEP 409
Java 17 | RandomGenerator API | Standard | JEP 356
Java 17 | Foreign Function & Memory API | Incubator | JEP 412
Java 17 | Vector API | Incubator (2nd) | JEP 414
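
Several of the finalized features above fit in one small sketch (the class and method names here are illustrative, not from any particular codebase):

```java
// Records, sealed interfaces, pattern matching for instanceof,
// switch expressions, and text blocks -- all standard in Java 17
public class Java17Features {

    sealed interface Shape permits Circle, Square {}   // sealed classes (Java 17)
    record Circle(double radius) implements Shape {}   // records (Java 16)
    record Square(double side) implements Shape {}

    static double area(Shape s) {
        // Pattern matching for instanceof (Java 16)
        if (s instanceof Circle c) return Math.PI * c.radius() * c.radius();
        if (s instanceof Square sq) return sq.side() * sq.side();
        throw new IllegalArgumentException("Unknown shape: " + s);
    }

    static String describe(int sides) {
        // Switch expression (Java 14)
        return switch (sides) {
            case 0 -> "circle";
            case 4 -> "square";
            default -> "polygon";
        };
    }

    public static void main(String[] args) {
        // Text block (Java 15)
        System.out.println("""
                Java 17 feature demo:""");
        System.out.println(describe(4) + "(2) has area " + area(new Square(2.0)));
    }
}
```

Pattern matching for switch (which would let area() be a single switch over Shape) is still a preview feature in Java 17, which is why the sketch uses instanceof patterns instead.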

3. Breaking Changes

While the 11 to 17 migration is smoother than 8 to 11, there are still breaking changes that can cause compilation errors and runtime failures. Here are the ones you must address:

3.1 Strong Encapsulation (Most Impactful)

In Java 11, the --illegal-access=permit flag was the default, which meant reflective access to JDK internals produced warnings but still worked. In Java 17, the flag is removed entirely. All reflective access to JDK internals is blocked by default.

// This code compiled and ran on Java 11 with a warning
// On Java 17, it throws InaccessibleObjectException
var field = String.class.getDeclaredField("value");
field.setAccessible(true); // FAILS on Java 17

// Fix: Use --add-opens as a temporary workaround
// java --add-opens java.base/java.lang=ALL-UNNAMED -jar app.jar

// Better fix: Stop using internal APIs and use public alternatives

3.2 Removed APIs

Removed API | Removed In | Replacement
Nashorn JavaScript Engine | Java 15 | GraalJS or standalone V8
RMI Activation (java.rmi.activation) | Java 17 | gRPC, REST APIs, or standard RMI
Pack200 API and tools | Java 14 | Standard compression (gzip, bzip2)
AOT and JIT compiler (Graal-based) | Java 17 | GraalVM (external)
Solaris and SPARC ports | Java 17 | Linux, macOS, or Windows

3.3 Behavioral Changes

// 1. Stricter JAR signature validation
// Java 17 rejects JARs signed with SHA-1 by default
// Fix: Re-sign JARs with SHA-256 or stronger

// 2. Default charset change (Java 18 heads-up)
// Java 17 still uses platform-specific default charset
// But Java 18 defaults to UTF-8. Start using explicit charsets now:
Files.readString(path, StandardCharsets.UTF_8);  // explicit -- always safe
Files.readString(path);  // uses platform default -- risky

// 3. DatagramSocket reimplementation (Java 15+)
// The underlying implementation of DatagramSocket was rewritten
// Unlikely to affect most apps, but test network code thoroughly

4. Step-by-Step Migration

Follow this checklist in order. Each step builds on the previous one.

Step 1: Audit Your Dependencies

Before changing any JDK version, check that all your dependencies support Java 17. This is the single most important step.

Step 2: Update Build Tool Plugins



<!-- pom.xml: compiler plugin targeting Java 17 -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.11.0</version>
    <configuration>
        <release>17</release>
    </configuration>
</plugin>

<!-- Surefire 3.x is needed to run tests on Java 17 -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>3.1.2</version>
</plugin>

Step 3: Update Critical Dependencies

Dependency | Minimum for Java 17 | Recommended
Spring Boot | 2.5.x | 3.x (requires Java 17)
Spring Framework | 5.3.15 | 6.x (requires Java 17)
Hibernate | 5.6.x | 6.x
Lombok | 1.18.22 | 1.18.30+
Mockito | 4.0 | 5.x
Jackson | 2.13 | 2.15+
JUnit 5 | 5.8 | 5.10+
ByteBuddy | 1.12 | 1.14+
ASM | 9.0 | 9.5+
Flyway | 8.0 | 9.x+

Step 4: Install Java 17 JDK

# Using SDKMAN (recommended)
sdk install java 17.0.9-tem
sdk use java 17.0.9-tem

# Or download directly:
# - Eclipse Temurin: https://adoptium.net/
# - Amazon Corretto: https://aws.amazon.com/corretto/
# - Oracle JDK: https://www.oracle.com/java/

# Verify installation
java --version
# openjdk 17.0.9 2023-10-17
# OpenJDK Runtime Environment Temurin-17.0.9+9 (build 17.0.9+9)

# Set JAVA_HOME
export JAVA_HOME=$HOME/.sdkman/candidates/java/17.0.9-tem

Step 5: Compile and Fix Errors

Compile your project with Java 17 and address each compilation error. The most common errors fall into these categories:

// Error 1: Using removed APIs
// javax.script.ScriptEngine with Nashorn
// Fix: Remove Nashorn usage or add GraalJS dependency

// Error 2: Deprecated API warnings (promoted to errors)
// Fix: Replace deprecated APIs with recommended alternatives

// Error 3: Internal API access
// sun.misc.Unsafe, com.sun.*, etc.
// Fix: Use public API alternatives or --add-opens as temporary fix

// Error 4: New contextual keywords
// "sealed" and "permits" are contextual keywords -- ordinary identifiers
// with these names still compile, but rename them to avoid confusion
String sealed = "value"; // legal in Java 17, but best avoided
// "record" IS restricted as a type name since Java 16:
// class record {}  // compile error

Step 6: Run jdeps to Find Internal API Usage

# Scan your application for JDK internal API usage
jdeps --jdk-internals --multi-release 17 -cp 'lib/*' your-app.jar

# Sample output:
# your-app.jar -> java.base
#   com.example.MyClass -> sun.misc.Unsafe (JDK internal API)
#
# Warning: JDK internal APIs are unsupported and private to JDK implementation
# that are subject to be removed or changed incompatibly and could break your
# application.
#
# JDK Internal API                Suggested Replacement
# ----------------                ---------------------
# sun.misc.Unsafe                 Use VarHandle or MethodHandle API

Step 7: Add --add-opens for Libraries (If Needed)

Some libraries (especially older versions of ORMs, serialization frameworks, and mocking tools) need reflective access to JDK internals. Add --add-opens flags as needed:



<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>3.1.2</version>
    <configuration>
        <argLine>
            --add-opens java.base/java.lang=ALL-UNNAMED
            --add-opens java.base/java.lang.reflect=ALL-UNNAMED
            --add-opens java.base/java.util=ALL-UNNAMED
        </argLine>
    </configuration>
</plugin>


Step 8: Run All Tests

After compilation succeeds, run your entire test suite. Pay special attention to:

  • Serialization/deserialization tests
  • Reflection-heavy code (custom annotations, DI containers)
  • Network and socket tests (DatagramSocket reimplementation)
  • Security-related tests (SecurityManager deprecation)
  • Date/time formatting tests (locale data updates)
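The serialization check can be smoke-tested without any framework. Below is a minimal round-trip sketch using only the JDK; the Person class and its field are hypothetical stand-ins for your own serializable types:

```java
import java.io.*;

public class SerializationRoundTrip {

    // Hypothetical serializable class; a fixed serialVersionUID keeps
    // Java 11 and Java 17 builds wire-compatible
    static class Person implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        Person(String name) { this.name = name; }
    }

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Person restored = (Person) deserialize(serialize(new Person("John")));
        System.out.println(restored.name); // prints "John"
    }
}
```

For true cross-version coverage, serialize test fixtures on the Java 11 build and deserialize them in the Java 17 test suite, as in the example in section 8.1.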

Step 9: Performance Testing

Java 17 includes significant GC improvements. Run performance benchmarks to validate:

# Compare GC performance between Java 11 and Java 17
# Run your application with GC logging enabled

# Java 17 with G1GC (default)
java -Xlog:gc*:file=gc-java17.log -jar app.jar

# Key improvements in Java 17 GC:
# - G1GC: Reduced pause times, better throughput
# - ZGC: Production-ready, sub-millisecond pauses (Java 15+)
# - Shenandoah: Production-ready, low-pause concurrent GC

# Consider switching GC if applicable:
# java -XX:+UseZGC -jar app.jar          # Ultra-low pause times
# java -XX:+UseShenandoahGC -jar app.jar # Low-pause alternative

Step 10: Adopt New Language Features Incrementally

Once your application runs on Java 17, start adopting new language features gradually. You do not need to rewrite everything at once:

// Priority 1: Use text blocks for multi-line strings (immediate wins)
// Before
String sql = "SELECT u.id, u.name, u.email\n" +
             "FROM users u\n" +
             "WHERE u.active = true\n" +
             "ORDER BY u.name";

// After
String sql = """
    SELECT u.id, u.name, u.email
    FROM users u
    WHERE u.active = true
    ORDER BY u.name
    """;

// Priority 2: Use pattern matching instanceof (reduce casting)
// Before
if (obj instanceof String) {
    String s = (String) obj;
    System.out.println(s.length());
}

// After
if (obj instanceof String s) {
    System.out.println(s.length());
}

// Priority 3: Use switch expressions where appropriate
// Before
String label;
switch (status) {
    case ACTIVE:   label = "Active";   break;
    case INACTIVE: label = "Inactive"; break;
    case PENDING:  label = "Pending";  break;
    default:       label = "Unknown";
}

// After
String label = switch (status) {
    case ACTIVE   -> "Active";
    case INACTIVE -> "Inactive";
    case PENDING  -> "Pending";
};

// Priority 4: Use records for new DTOs and value objects
// Priority 5: Use sealed classes for new type hierarchies

5. Build Tool Updates

5.1 Maven



<!-- pom.xml -->
<properties>
    <java.version>17</java.version>
    <maven.compiler.source>17</maven.compiler.source>
    <maven.compiler.target>17</maven.compiler.target>
    <!-- or use a single release property instead of source/target -->
    <maven.compiler.release>17</maven.compiler.release>
</properties>

<build>
    <plugins>
        <!-- Compiler -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.11.0</version>
            <configuration>
                <release>17</release>
            </configuration>
        </plugin>

        <!-- Unit tests -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>3.1.2</version>
        </plugin>

        <!-- Integration tests -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-failsafe-plugin</artifactId>
            <version>3.1.2</version>
        </plugin>

        <!-- JAR packaging -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-jar-plugin</artifactId>
            <version>3.3.0</version>
        </plugin>
    </plugins>
</build>

5.2 Gradle

// build.gradle updates for Java 17
// Gradle 7.3+ is required for Java 17 support

plugins {
    id 'java'
    id 'org.springframework.boot' version '3.2.0' // if using Spring Boot
}

java {
    sourceCompatibility = JavaVersion.VERSION_17
    targetCompatibility = JavaVersion.VERSION_17
    toolchain {
        languageVersion = JavaLanguageVersion.of(17)
    }
}

tasks.withType(JavaCompile).configureEach {
    options.release = 17
}

// For Kotlin DSL (build.gradle.kts)
java {
    toolchain {
        languageVersion.set(JavaLanguageVersion.of(17))
    }
}

6. Framework Compatibility

6.1 Spring Boot 3.x (Requires Java 17)

If you are migrating to Spring Boot 3.x, Java 17 is the minimum requirement. This is the most significant framework change most Java developers will encounter:

// Key Spring Boot 3.x changes:
// 1. javax.* -> jakarta.* namespace migration
//    javax.persistence.* -> jakarta.persistence.*
//    javax.servlet.*     -> jakarta.servlet.*
//    javax.validation.*  -> jakarta.validation.*
//    javax.annotation.*  -> jakarta.annotation.*

// Before (Spring Boot 2.x / Java 11)
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.servlet.http.HttpServletRequest;
import javax.validation.constraints.NotNull;

// After (Spring Boot 3.x / Java 17)
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.validation.constraints.NotNull;

Spring Boot 2.x to 3.x migration checklist:

  • Replace all javax.* imports with jakarta.*
  • Update Spring Security configuration (new Lambda DSL)
  • Update Spring Data repositories (new method signatures)
  • Replace WebSecurityConfigurerAdapter with SecurityFilterChain bean
  • Update Actuator endpoint paths if customized

6.2 Hibernate 6

Hibernate 6 is the default JPA provider in Spring Boot 3.x. Key changes:

  • javax.persistence to jakarta.persistence namespace
  • New type system for custom types
  • Some HQL query syntax changes
  • Removed deprecated criteria API methods

6.3 Compatibility Matrix

Framework/Library | Java 11 Compatible | Java 17 Compatible | Java 17 Required
Spring Boot 2.7.x | Yes | Yes | No
Spring Boot 3.x | No | Yes | Yes
Quarkus 3.x | No | Yes | Yes (11+ for 2.x)
Micronaut 4.x | No | Yes | Yes
Jakarta EE 10 | No | Yes | Yes (11 minimum)
Hibernate 5.6 | Yes | Yes | No
Hibernate 6.x | No | Yes | Yes (11 minimum)
JUnit 5.9+ | Yes | Yes | No (Java 8+)

7. Common Migration Issues

Here are the most common problems teams encounter during the 11 to 17 migration, along with their solutions:

# | Problem | Symptom | Solution
1 | InaccessibleObjectException | Library uses reflection on JDK internals | Update library or add --add-opens
2 | Lombok compilation failure | java.lang.IllegalAccessError | Update Lombok to 1.18.22+
3 | Mockito/ByteBuddy failure | Cannot access internal API for mocking | Update Mockito to 4.0+, ByteBuddy 1.12+
4 | ASM version incompatibility | UnsupportedClassVersionError | Update ASM to 9.0+ (transitive dependency)
5 | Nashorn removal | ScriptEngineManager returns null for "nashorn" | Add GraalJS dependency or rewrite
6 | JAR signature failures | SHA-1 signed JARs rejected | Re-sign with SHA-256 or disable validation
7 | Gradle version too old | Cannot compile Java 17 sources | Update Gradle to 7.3+
8 | Maven compiler plugin too old | "source release 17 requires target release 17" | Update maven-compiler-plugin to 3.8.1+
9 | Annotation processor failures | Processors fail on Java 17 bytecode | Update processors (MapStruct, Dagger, etc.)
10 | Javadoc generation fails | Stricter Javadoc linting in newer JDK | Add -Xdoclint:none or fix Javadoc warnings
11 | Date/time formatting differences | Locale-dependent formatting changed | Use explicit locale and format patterns
12 | JaCoCo coverage failures | Cannot instrument Java 17 classes | Update JaCoCo to 0.8.7+

8. Testing Strategy

A thorough testing strategy is essential for a safe migration. Here is the approach that works for production systems:

8.1 Unit Tests

// Step 1: Run ALL existing unit tests on Java 17
// If tests fail, categorize the failures:
// - Compilation error -> fix source code
// - Runtime error -> update dependencies or add --add-opens
// - Behavior change -> investigate and update test expectations

// Step 2: Add tests for any workarounds
@Test
void testReflectiveAccessWorkaround() {
    // If you added --add-opens, test that the workaround works
    assertDoesNotThrow(() -> {
        // Code that requires reflective access
    });
}

// Step 3: Verify serialization compatibility
@Test
void testSerializationBackwardCompatibility() {
    // Deserialize objects that were serialized on Java 11
    byte[] java11Serialized = loadFromFile("test-data/user-java11.ser");
    User user = deserialize(java11Serialized);
    assertEquals("John", user.getName());
}

8.2 Integration Tests

Focus on the areas most likely to be affected:

  • Database layer — JDBC drivers, connection pools, ORM behavior
  • HTTP clients and servers — TLS handshakes, HTTP/2 behavior
  • Serialization — JSON, XML, and Java serialization compatibility
  • External service integrations — LDAP, SMTP, message queues

8.3 Performance Testing

Java 17 should be faster than 11 in most scenarios, but verify with your workload:

  • Run load tests with production-like traffic patterns
  • Compare GC pause times (G1GC improved significantly)
  • Monitor memory footprint — Java 17 generally uses less memory
  • Check startup time — CDS (Class Data Sharing) improvements help
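To compare pause times between the Java 11 and Java 17 runs, the pause durations can be scraped from the unified GC logs produced by -Xlog:gc. A rough sketch, assuming the standard unified-logging line shape (adjust the regex to your actual logs):

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcPauseStats {

    // Matches lines like:
    // [0.512s][info][gc] GC(3) Pause Young (Normal) (G1 Evacuation Pause) 24M->4M(256M) 1.234ms
    private static final Pattern PAUSE = Pattern.compile("Pause.*?([0-9]+\\.[0-9]+)ms");

    static double averagePauseMs(List<String> logLines) {
        double total = 0;
        int count = 0;
        for (String line : logLines) {
            Matcher m = PAUSE.matcher(line);
            if (m.find()) {
                total += Double.parseDouble(m.group(1));
                count++;
            }
        }
        return count == 0 ? 0 : total / count;
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
            "[0.512s][info][gc] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 24M->4M(256M) 1.200ms",
            "[1.034s][info][gc] GC(1) Pause Young (Normal) (G1 Evacuation Pause) 30M->5M(256M) 1.800ms");
        System.out.println(averagePauseMs(sample)); // 1.5
    }
}
```

Run it once over gc-java11.log and once over gc-java17.log (from Step 9) and compare the averages; percentile statistics are a straightforward extension.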

9. Docker and CI/CD

9.1 Dockerfile Updates

# Before (Java 11)
FROM eclipse-temurin:11-jre-alpine
COPY target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]

# After (Java 17)
FROM eclipse-temurin:17-jre-alpine
COPY target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]

# Multi-stage build for smaller images
FROM eclipse-temurin:17-jdk-alpine AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn clean package -DskipTests

FROM eclipse-temurin:17-jre-alpine
COPY --from=build /app/target/app.jar /app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]

# With --add-opens if needed
ENTRYPOINT ["java", \
    "--add-opens", "java.base/java.lang=ALL-UNNAMED", \
    "-jar", "/app.jar"]

9.2 CI/CD Pipeline Updates

# GitHub Actions example
name: Build and Test
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Java 17
        uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: '17'

      - name: Build and test
        run: mvn clean verify

      # Optional: Test on both Java 11 and 17 during migration
  compatibility-test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        java-version: [11, 17]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: ${{ matrix.java-version }}
      - run: mvn clean verify

9.3 Container Image Comparison

Image | Base OS | Approximate Size | Use Case
eclipse-temurin:17-jre-alpine | Alpine Linux | ~150 MB | Production (smallest)
eclipse-temurin:17-jre-jammy | Ubuntu 22.04 | ~260 MB | Production (better compatibility)
eclipse-temurin:17-jdk-alpine | Alpine Linux | ~330 MB | CI/CD builds
amazoncorretto:17-alpine | Alpine Linux | ~200 MB | AWS deployments

10. Best Practices

10.1 Incremental Adoption

Do not try to adopt every Java 17 feature at once. Follow this order:

  1. Week 1: Get the application compiling and all tests passing on Java 17 without using any new features
  2. Week 2-3: Start using text blocks in new code (lowest risk, highest readability gain)
  3. Week 3-4: Adopt pattern matching instanceof during regular code changes
  4. Month 2: Use switch expressions in new code
  5. Month 2-3: Create new DTOs as records
  6. Month 3+: Evaluate sealed classes for new type hierarchies

10.2 Migration Dos and Don’ts

Do | Don't
Update dependencies before changing JDK | Change JDK and dependencies simultaneously
Run jdeps to find internal API usage | Assume your code does not use internal APIs
Test on Java 17 in CI before production | Deploy to production without thorough testing
Use --add-opens as a temporary fix | Leave --add-opens permanently without a plan to remove
Adopt new features incrementally | Rewrite everything in new syntax at once
Keep Java 11 builds running during transition | Remove Java 11 CI jobs before migration is complete
Document all --add-opens flags with reasons | Add --add-opens flags without understanding why
Benchmark before and after migration | Assume Java 17 is faster for your specific workload

10.3 Feature Flags for New Syntax

When adopting new language features in a team, consider these guidelines:

  • Text blocks: Use everywhere immediately — purely cosmetic, zero risk
  • Pattern matching instanceof: Use in new code and during refactoring — simple and safe
  • Switch expressions: Use in new code — slightly higher learning curve for team members unfamiliar with arrow syntax
  • Records: Use for new DTOs and value objects — do not convert existing classes unless you have comprehensive tests
  • Sealed classes: Use for new type hierarchies — do not retroactively seal existing class hierarchies without careful analysis

The key principle is: new code uses new features, existing code is migrated only during regular maintenance. Do not create a separate “migration sprint” for syntax changes. Instead, adopt the boy scout rule — leave each file a little better than you found it.

March 1, 2026

Java 17 Other Improvements

1. Introduction

Java 17 is not just about sealed classes, records, and pattern matching. It ships with a collection of smaller but highly practical improvements that affect your daily coding work. These changes span better error messages, new API conveniences, formatting utilities, and significant runtime changes that every Java developer needs to understand.

Some of these features were introduced in earlier releases (Java 14-16) and are standard by the time you reach Java 17. Others are incubator or preview features that hint at Java’s future direction. This post covers all of them in one place, so you have a complete picture of what Java 17 brings beyond the headline features.

Here is a quick overview of what we will cover:

Feature | Introduced In | Status in Java 17 | Impact
Helpful NullPointerExceptions | Java 14 | Standard (on by default) | High — saves debugging time daily
Stream.toList() | Java 16 | Standard | Medium — cleaner stream terminal ops
New Random Generator API | Java 17 | Standard | Medium — better random number generation
Compact Number Formatting | Java 12 | Standard | Low-Medium — user-facing number display
Day Period Support | Java 16 | Standard | Low — human-friendly time display
Foreign Function & Memory API | Java 14 | Incubator | High (future) — JNI replacement
Vector API | Java 16 | Incubator | High (future) — SIMD performance
Strong JDK Encapsulation | Java 16 | Standard (enforced) | High — breaks code using internals
Deprecations and Removals | Java 17 | Standard | Medium — must check for usage

2. Helpful NullPointerExceptions

This is arguably the most developer-friendly change in recent Java history. Before Java 14, a NullPointerException gave you a stack trace pointing to a line number, but if that line had a chain of method calls, you had no idea which reference was null. Introduced in Java 14 (JEP 358) and enabled by default since Java 15, the helpful message tells you exactly what was null, so on Java 17 you get it out of the box.

2.1 The Old Problem

// This code throws NPE -- but which reference is null?
String city = user.getAddress().getCity().toUpperCase();

// Java 11 error message:
// Exception in thread "main" java.lang.NullPointerException
//     at com.example.App.main(App.java:15)
//
// Which one is null? user? getAddress()? getCity()?
// You have to add breakpoints or null checks to find out.

2.2 The Java 17 Solution

// Same code on Java 17:
String city = user.getAddress().getCity().toUpperCase();

// Java 17 error message:
// Exception in thread "main" java.lang.NullPointerException:
//     Cannot invoke "Address.getCity()" because the return value of
//     "User.getAddress()" is null
//     at com.example.App.main(App.java:15)
//
// Instantly tells you: getAddress() returned null

The enhanced messages work for all NPE scenarios:

// Array access
int[] arr = null;
int x = arr[0];
// Cannot load from int array because "arr" is null

// Field access
String name = person.name;
// Cannot read field "name" because "person" is null

// Method invocation
String upper = str.toUpperCase();
// Cannot invoke "String.toUpperCase()" because "str" is null

// Array store
Object[] objs = null;
objs[0] = "hello";
// Cannot store to object array because "objs" is null

// Nested chains
int length = order.getCustomer().getProfile().getBio().length();
// Cannot invoke "String.length()" because the return value of
// "Profile.getBio()" is null

Key point: In Java 14, you had to explicitly enable this with -XX:+ShowCodeDetailsInExceptionMessages; since Java 15 it is enabled by default. On Java 17 you get better NPE messages without changing a single line of code or any JVM flags.
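You can see the improved messages in action with a few lines of code. This sketch (with hypothetical User and Address classes) triggers a chained NPE and prints its message; it assumes Java 15 or later, where the detailed messages are on by default:

```java
public class HelpfulNpeDemo {

    // Hypothetical classes used only to trigger a chained NPE
    static class Address { String getCity() { return "Springfield"; } }
    static class User { Address getAddress() { return null; } }

    public static void main(String[] args) {
        User user = new User();
        try {
            String city = user.getAddress().getCity().toUpperCase();
        } catch (NullPointerException e) {
            // On Java 15+ the message names the null reference, e.g.:
            // Cannot invoke "...Address.getCity()" because the return value
            // of "...User.getAddress()" is null
            System.out.println(e.getMessage());
        }
    }
}
```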

3. Stream.toList()

Before Java 16, every time you wanted to collect a stream into a list, you wrote .collect(Collectors.toList()). Java 16 added Stream.toList() as a convenience method, but there is a critical difference you must understand.

3.1 Basic Usage

List<String> names = List.of("Alice", "Bob", "Charlie", "Diana");

// Before Java 16 -- verbose
List<String> filtered = names.stream()
    .filter(n -> n.length() > 3)
    .collect(Collectors.toList());

// Java 16+ -- concise
List<String> filtered = names.stream()
    .filter(n -> n.length() > 3)
    .toList();

3.2 The Critical Difference: Immutability

This is the detail that catches developers off guard. Stream.toList() returns an unmodifiable list, while Collectors.toList() returns a mutable ArrayList:

List<String> names = List.of("Alice", "Bob", "Charlie");

// Collectors.toList() -- mutable (returns ArrayList)
List<String> mutable = names.stream()
    .filter(n -> n.length() > 3)
    .collect(Collectors.toList());
mutable.add("Eve");         // works fine
mutable.sort(null);         // works fine
mutable.set(0, "Updated");  // works fine

// Stream.toList() -- UNMODIFIABLE
List<String> immutable = names.stream()
    .filter(n -> n.length() > 3)
    .toList();
immutable.add("Eve");         // throws UnsupportedOperationException
immutable.sort(null);         // throws UnsupportedOperationException
immutable.set(0, "Updated");  // throws UnsupportedOperationException

3.3 Null Handling Difference

// Stream.toList() ALLOWS null elements
List withNulls = Stream.of("a", null, "b").toList();
// [a, null, b] -- works fine

// List.of() does NOT allow nulls
List fails = List.of("a", null, "b");
// throws NullPointerException

// Collectors.toList() also allows nulls
List alsoWorks = Stream.of("a", null, "b")
    .collect(Collectors.toList());
// [a, null, b] -- works fine

3.4 When to Use Which

Method | Returns | Nulls Allowed | Use When
.toList() | Unmodifiable list | Yes | Default choice — you usually do not need to modify the result
Collectors.toList() | Mutable ArrayList | Yes | You need to modify the list after collection
Collectors.toUnmodifiableList() | Unmodifiable list | No (throws NPE) | You want immutability AND want to reject nulls

Recommendation: Use .toList() as your default. It is shorter, returns an unmodifiable list (which is safer), and handles nulls gracefully. Only fall back to Collectors.toList() when you need mutability.
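When a call site does need to mutate the result, copying the unmodifiable list is usually simpler than switching back to Collectors.toList(). A small sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class ToListCopy {
    public static void main(String[] args) {
        // Default: concise terminal operation, unmodifiable result
        List<String> snapshot = Stream.of("Alice", "Bob", "Charlie")
            .filter(n -> n.length() > 3)
            .toList();

        // Need mutability? Copy into an ArrayList explicitly
        List<String> editable = new ArrayList<>(snapshot);
        editable.add("Eve"); // fine -- this list is a mutable copy

        System.out.println(snapshot);  // [Alice, Charlie]
        System.out.println(editable);  // [Alice, Charlie, Eve]
    }
}
```

The explicit copy also documents intent: readers can see exactly where mutation begins.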

4. New Random Generator API

Java 17 overhauled random number generation with the RandomGenerator API (JEP 356). The old java.util.Random class still works, but the new API provides a unified interface, better algorithms, and support for jumpable and splittable generators that are critical for parallel workloads.

4.1 The RandomGenerator Interface

import java.util.random.RandomGenerator;
import java.util.random.RandomGeneratorFactory;

// Create a random generator using the default algorithm
RandomGenerator generator = RandomGenerator.getDefault();

// Generate different types
int randomInt = generator.nextInt(100);           // 0-99
long randomLong = generator.nextLong();
double randomDouble = generator.nextDouble();     // 0.0 (inclusive) to 1.0 (exclusive)
boolean randomBool = generator.nextBoolean();
double gaussian = generator.nextGaussian();       // normal distribution

// Bounded ranges
int between10And50 = generator.nextInt(10, 50);   // 10 to 49
double between1And10 = generator.nextDouble(1.0, 10.0);

4.2 Choosing an Algorithm

// List all available algorithms
RandomGeneratorFactory.all()
    .sorted(Comparator.comparing(RandomGeneratorFactory::name))
    .forEach(factory -> System.out.printf("%-20s | Group: %-15s | Period: 2^%d%n",
        factory.name(), factory.group(),
        factory.period().bitLength() - 1)); // bitLength() gives log2 of the period

// Choose a specific algorithm
RandomGenerator xoshiro = RandomGeneratorFactory.of("Xoshiro256PlusPlus").create();
RandomGenerator l128x256 = RandomGeneratorFactory.of("L128X256MixRandom").create();

// Create with a seed for reproducibility
RandomGenerator seeded = RandomGeneratorFactory.of("Xoshiro256PlusPlus").create(42L);

// Common algorithms available in Java 17:
// - L32X64MixRandom    (good general purpose)
// - L64X128MixRandom   (better period)
// - L128X256MixRandom  (very large period)
// - Xoshiro256PlusPlus (fast, good quality)
// - Xoroshiro128PlusPlus (fast, smaller state)

4.3 SplittableRandom for Parallel Streams

When using parallel streams, you need a random generator that can be split into independent sub-generators without correlation. The old ThreadLocalRandom handles this, but the new API makes it explicit:

import java.util.random.RandomGenerator.SplittableGenerator;

// Create a splittable generator
SplittableGenerator splittable =
    (SplittableGenerator) RandomGeneratorFactory.of("L64X128MixRandom").create();

// Generate random numbers in parallel safely
List<Integer> randomNumbers = splittable.splits(10) // 10 independent generators
    .parallel()
    .mapToInt(gen -> gen.nextInt(1000))
    .boxed()
    .toList();

// Monte Carlo simulation example
SplittableGenerator rng =
    (SplittableGenerator) RandomGeneratorFactory.of("L128X256MixRandom").create(42L);

long totalPoints = 10_000_000L;
long insideCircle = rng.splits(totalPoints)
    .parallel()
    .filter(gen -> {
        double x = gen.nextDouble();
        double y = gen.nextDouble();
        return x * x + y * y <= 1.0;
    })
    .count();

double piEstimate = 4.0 * insideCircle / totalPoints;
System.out.printf("Pi estimate: %.6f%n", piEstimate);

4.4 Migration from Old Random

Old Way | New Way (Java 17)
new Random() | RandomGenerator.getDefault()
new Random(seed) | RandomGeneratorFactory.of("...").create(seed)
ThreadLocalRandom.current() | Still valid; recommended for per-thread use
new SplittableRandom() | Use SplittableGenerator from the factory

Note: The old java.util.Random now implements RandomGenerator, so existing code continues to work. You can gradually adopt the new API where it makes sense.
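Because the legacy classes implement the new interface, you can write methods against RandomGenerator and pass any implementation, old or new. A minimal sketch:

```java
import java.security.SecureRandom;
import java.util.Random;
import java.util.random.RandomGenerator;

public class DiceRoller {

    // Accepts any generator: legacy Random, SecureRandom, or a new algorithm
    static int roll(RandomGenerator rng) {
        return rng.nextInt(1, 7); // 1 to 6
    }

    public static void main(String[] args) {
        System.out.println(roll(new Random()));                 // legacy class works
        System.out.println(roll(new SecureRandom()));           // so does SecureRandom
        System.out.println(roll(RandomGenerator.getDefault())); // and the new API
    }
}
```

This makes the algorithm a caller-side decision, which is handy for swapping in a seeded generator in tests.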

5. Compact Number Formatting

When building user-facing applications, you often need to display large numbers in a human-readable format -- "1.5K" instead of "1,500" or "2.3M" instead of "2,300,000". Java 12 introduced CompactNumberFormat, and it is fully available in Java 17.

import java.text.NumberFormat;
import java.util.Locale;

// Short format (default)
NumberFormat shortFormat = NumberFormat.getCompactNumberInstance(
    Locale.US, NumberFormat.Style.SHORT);

System.out.println(shortFormat.format(1_000));         // "1K"
System.out.println(shortFormat.format(1_500));         // "2K" (rounds)
System.out.println(shortFormat.format(1_000_000));     // "1M"
System.out.println(shortFormat.format(1_500_000));     // "2M"
System.out.println(shortFormat.format(1_000_000_000)); // "1B"
System.out.println(shortFormat.format(42));            // "42"

// Long format -- spells out the suffix
NumberFormat longFormat = NumberFormat.getCompactNumberInstance(
    Locale.US, NumberFormat.Style.LONG);

System.out.println(longFormat.format(1_000));         // "1 thousand"
System.out.println(longFormat.format(1_000_000));     // "1 million"
System.out.println(longFormat.format(1_000_000_000)); // "1 billion"

// Control decimal places
shortFormat.setMaximumFractionDigits(1);
System.out.println(shortFormat.format(1_500));         // "1.5K"
System.out.println(shortFormat.format(1_230_000));     // "1.2M"

// Different locales
NumberFormat germanShort = NumberFormat.getCompactNumberInstance(
    Locale.GERMANY, NumberFormat.Style.SHORT);
System.out.println(germanShort.format(1_000_000)); // "1 Mio."

NumberFormat japaneseShort = NumberFormat.getCompactNumberInstance(
    Locale.JAPAN, NumberFormat.Style.SHORT);
System.out.println(japaneseShort.format(10_000)); // "1万"

Practical use case: Displaying follower counts, view counts, file sizes, or financial summaries in dashboards and mobile UIs where space is limited.

6. Day Period Support

Java 16 added the "B" pattern letter to DateTimeFormatter, which formats the time of day into human-readable periods like "in the morning," "in the afternoon," "in the evening," and "at night." This is more natural than the simple AM/PM distinction.

import java.time.LocalTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

DateTimeFormatter formatter = DateTimeFormatter.ofPattern("h:mm B", Locale.US);

System.out.println(LocalTime.of(8, 30).format(formatter));   // "8:30 in the morning"
System.out.println(LocalTime.of(12, 0).format(formatter));   // "12:00 noon"
System.out.println(LocalTime.of(14, 30).format(formatter));  // "2:30 in the afternoon"
System.out.println(LocalTime.of(18, 45).format(formatter));  // "6:45 in the evening"
System.out.println(LocalTime.of(22, 0).format(formatter));   // "10:00 at night"
System.out.println(LocalTime.of(0, 0).format(formatter));    // "12:00 midnight"

// Compare with AM/PM
DateTimeFormatter amPm = DateTimeFormatter.ofPattern("h:mm a", Locale.US);
System.out.println(LocalTime.of(14, 30).format(amPm));       // "2:30 PM"
// vs
System.out.println(LocalTime.of(14, 30).format(formatter));  // "2:30 in the afternoon"

// Works with different locales
DateTimeFormatter germanFormatter = DateTimeFormatter.ofPattern("H:mm B", Locale.GERMANY);
System.out.println(LocalTime.of(14, 30).format(germanFormatter)); // "14:30 nachmittags"

Use case: Chat applications, notification systems, and any user-facing time display where "2:30 in the afternoon" reads better than "2:30 PM" -- particularly in internationalized applications.

7. Foreign Function & Memory API (Incubator)

The Foreign Function & Memory API (Project Panama) is one of the most ambitious changes in Java's evolution. It provides a pure Java alternative to JNI (Java Native Interface) for calling native code and managing off-heap memory. In Java 17, it is available as an incubator module (JEP 412).

7.1 Why Replace JNI?

JNI has been the standard way to call C/C++ libraries from Java since Java 1.1, but it has serious problems:

  • Complex boilerplate -- requires writing C header files, JNI wrapper code, and managing native library loading
  • Unsafe -- incorrect JNI code can crash the JVM with no recovery
  • Brittle -- tight coupling between Java and C code, hard to maintain
  • No modern memory management -- ByteBuffer is limited to 2GB and has poor performance characteristics

7.2 What Panama Offers

// Note: This requires --add-modules jdk.incubator.foreign
// and is an incubator API in Java 17. Syntax may change.

import jdk.incubator.foreign.*;
import java.lang.invoke.MethodHandle;

// Call strlen from the C standard library -- no JNI code needed
public class PanamaExample {

    public static void main(String[] args) throws Throwable {
        // Get a linker for the platform's C ABI
        CLinker linker = CLinker.systemCLinker();

        // Look up the strlen function
        MethodHandle strlen = linker.downcallHandle(
            CLinker.systemLookup().lookup("strlen").get(),
            FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS)
        );

        // Allocate native memory for a C string
        try (ResourceScope scope = ResourceScope.newConfinedScope()) {
            MemorySegment cString = CLinker.toCString("Hello, Panama!", scope);
            long length = (long) strlen.invoke(cString);
            System.out.println("String length: " + length); // 14
        }
        // Native memory is automatically freed when scope closes
    }
}

Status: The Foreign Function & Memory API was incubating in Java 17 and was finalized in Java 22 (JEP 454). If you are on Java 17, you can experiment with it, but expect API changes in future versions. The core concepts -- linkers, function descriptors, memory segments, and resource scopes -- remain consistent.

8. Vector API (Incubator)

The Vector API (JEP 338) enables Java programs to express SIMD (Single Instruction, Multiple Data) computations that the JVM can map to hardware vector instructions on supported CPU architectures (x86 SSE/AVX, ARM NEON). This can provide massive performance gains for numerical workloads.

8.1 What SIMD Means

Normally, if you want to add two arrays of 256 numbers, the CPU processes them one at a time -- 256 additions. With SIMD instructions, the CPU can process 4, 8, or even 16 additions in a single instruction, depending on the hardware. The Vector API lets you write this kind of code in pure Java.

// Note: Requires --add-modules jdk.incubator.vector
// This is an incubator API in Java 17

import jdk.incubator.vector.*;

public class VectorExample {

    // Traditional scalar addition -- processes one element at a time
    public static float[] scalarAdd(float[] a, float[] b) {
        float[] result = new float[a.length];
        for (int i = 0; i < a.length; i++) {
            result[i] = a[i] + b[i];
        }
        return result;
    }

    // Vector API -- processes multiple elements per instruction
    static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    public static float[] vectorAdd(float[] a, float[] b) {
        float[] result = new float[a.length];
        int i = 0;
        int upperBound = SPECIES.loopBound(a.length);

        // Process chunks using SIMD
        for (; i < upperBound; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            FloatVector vr = va.add(vb);
            vr.intoArray(result, i);
        }

        // Handle remaining elements
        for (; i < a.length; i++) {
            result[i] = a[i] + b[i];
        }
        return result;
    }
}

Performance impact: For numerical workloads like image processing, machine learning inference, scientific computing, and financial calculations, the Vector API can provide 2x-8x speedups over scalar code, depending on the algorithm and hardware.

Status: The Vector API has been incubating since Java 16 and continues to evolve. As of Java 22, it remains in its seventh incubation. It is not yet finalized but is stable enough for experimentation and performance-sensitive applications.

9. Strong Encapsulation of JDK Internals

This is one of the most impactful changes in Java 17 and the one most likely to break existing applications. Starting with Java 16 and enforced in Java 17, the JDK strongly encapsulates its internal APIs by default. The --illegal-access flag no longer has a permissive mode.

9.1 What Changed

Java Version Default Behavior --illegal-access Options
Java 9-15 --illegal-access=permit (warning only) permit, warn, debug, deny
Java 16 --illegal-access=deny (blocked) permit, warn, debug, deny
Java 17 Strong encapsulation (no option) Flag removed entirely

9.2 What This Breaks

Code that uses reflection to access internal JDK classes will fail with InaccessibleObjectException:

// Reflective access to java.base internals produced warnings in Java 9-15
// but FAILS in Java 17:
var field = String.class.getDeclaredField("value");
field.setAccessible(true); // throws InaccessibleObjectException

// One notable carve-out: sun.misc.Unsafe is still reachable, because the
// jdk.unsupported module deliberately opens sun.misc to ease migration
import sun.misc.Unsafe;

Field f = Unsafe.class.getDeclaredField("theUnsafe");
f.setAccessible(true); // still works in Java 17 -- but plan to move off Unsafe
Unsafe unsafe = (Unsafe) f.get(null);

9.3 How to Fix It

Option 1: Use public APIs (preferred)

// Instead of sun.misc.BASE64Encoder (removed)
import java.util.Base64;
String encoded = Base64.getEncoder().encodeToString(data);

// Instead of sun.misc.Unsafe for memory operations
// Use VarHandle (Java 9+)
import java.lang.invoke.VarHandle;
import java.lang.invoke.MethodHandles;

// Instead of com.sun.org.apache.xerces for XML
// Use javax.xml.parsers (public API)
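The VarHandle imports above hint at the replacement but show no usage. Here is a minimal sketch of the kind of atomic field update people previously reached for Unsafe to get; the Counter class is a made-up example:

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// Hypothetical example: atomic counter without sun.misc.Unsafe
public class Counter {
    private volatile int count;

    private static final VarHandle COUNT;
    static {
        try {
            // A VarHandle gives Unsafe-style atomic access through a supported API
            COUNT = MethodHandles.lookup()
                    .findVarHandle(Counter.class, "count", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public int incrementAndGet() {
        // getAndAdd returns the previous value, so add 1 for the new one
        return (int) COUNT.getAndAdd(this, 1) + 1;
    }

    public int get() {
        return count;
    }
}
```

Unlike the Unsafe version, this compiles without flags, survives JDK upgrades, and the lookup fails fast at class load if the field name or type is wrong.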

Option 2: --add-opens at runtime (temporary workaround)

If you must use internal APIs during migration, you can open specific modules at runtime:

// JVM flags to open internal modules
// --add-opens java.base/java.lang=ALL-UNNAMED
// --add-opens java.base/java.util=ALL-UNNAMED
// --add-opens java.base/sun.nio.ch=ALL-UNNAMED

// In Maven, add to maven-surefire-plugin configuration:
// --add-opens java.base/java.lang=ALL-UNNAMED

Common libraries affected: Older versions of Lombok, Spring, Hibernate, Mockito, and Apache Commons may use internal APIs. Update to recent versions that support Java 17:

Library Minimum Version for Java 17
Lombok 1.18.22+
Spring Framework 5.3.15+
Spring Boot 2.5.x+ (recommended: 3.x)
Hibernate 5.6.x+ (recommended: 6.x)
Mockito 4.0+
ByteBuddy 1.12+
Jackson 2.13+

10. Deprecations and Removals

Java 17 deprecates and removes several APIs that have been part of Java for decades. If your codebase uses any of these, you need to plan for migration.

10.1 Applet API (Deprecated for Removal)

The entire java.applet package is deprecated for removal. Applets have not been supported by any major browser since 2017. If your codebase has any java.applet.Applet references, they need to go.

10.2 SecurityManager (Deprecated for Removal)

The SecurityManager and its associated infrastructure are deprecated for removal. This is a significant change for applications that relied on Java's sandboxing model. The replacement approach is to use OS-level security (containers, process isolation) rather than in-JVM security policies.

// Deprecated -- do not use in new code
System.setSecurityManager(new SecurityManager()); // produces deprecation warning

// The SecurityManager will be removed in a future Java version.
// Migration strategy:
// - Use containerization (Docker) for isolation
// - Use OS-level permissions
// - Use Java module system for encapsulation
// - Use custom ClassLoaders for restricted code execution

10.3 RMI Activation (Removed)

The RMI Activation mechanism (java.rmi.activation) was removed in Java 17 (JEP 407). It was deprecated in Java 15. If your application uses java.rmi.activation.Activatable or the rmid daemon, you need to migrate to a different remote invocation mechanism. The core RMI functionality (without activation) remains available.

10.4 Complete Deprecation and Removal Summary

API / Feature Status in Java 17 Replacement
Applet API Deprecated for removal Use web technologies (HTML/JS)
SecurityManager Deprecated for removal Container-level security, JPMS
RMI Activation Removed gRPC, REST, or standard RMI
AOT/Graal JIT Compiler Removed GraalVM native-image (external)
Experimental AOT Compilation Removed GraalVM
Nashorn JavaScript Engine Removed (Java 15) GraalJS or standalone V8/Node
Pack200 tools Removed (Java 14) Use standard compression (gzip, bzip2)
Solaris/SPARC Ports Removed Use Linux/x64 or Linux/AArch64

11. Summary Table

Here is a consolidated reference of all improvements covered in this post, along with the Java version they were introduced and their current status:

# Feature Introduced Status in Java 17 Category
1 Helpful NullPointerExceptions Java 14 Standard (on by default) Developer Experience
2 Stream.toList() Java 16 Standard API Convenience
3 RandomGenerator API Java 17 Standard API Addition
4 Compact Number Formatting Java 12 Standard Internationalization
5 Day Period Support ("B") Java 16 Standard Internationalization
6 Foreign Function & Memory API Java 14 Incubator Native Interop
7 Vector API Java 16 Incubator Performance
8 Strong JDK Encapsulation Java 16 Enforced Security/Modularity
9 Applet API Deprecation Java 9 Deprecated for Removal Cleanup
10 SecurityManager Deprecation Java 17 Deprecated for Removal Security
11 RMI Activation Removal Java 15 (dep) Removed Cleanup
12 AOT/Graal JIT Removal Java 17 Removed Cleanup

Each of these changes reflects Java's ongoing evolution toward a more secure, performant, and developer-friendly platform. While the headline features like records and sealed classes get the most attention, these smaller improvements collectively make a significant difference in your daily development experience.

March 1, 2026

Java 17 Records

1. Introduction

Records were introduced as a preview feature in Java 14, refined in Java 15, and finalized as a permanent language feature in Java 16. By the time Java 17 LTS shipped, records had become a core part of the language that every Java developer should master. They are not experimental. They are production-ready and here to stay.

If you are new to records or need a refresher on the fundamentals — what the compiler generates, the canonical constructor, accessor methods, and basic syntax — read our comprehensive Java Record tutorial first. This post assumes you know the basics and focuses on practical patterns, real-world usage, and advanced techniques that make records shine in production Java 17 codebases.

We will cover records as POJO/DTO replacements, compact constructors for validation, custom methods, records with collections and streams, serialization with Jackson, local records, and a detailed comparison with classes and Lombok. By the end, you will know exactly when and how to use records in your day-to-day work.

2. Records as Data Carriers

The most common use case for records is replacing POJOs, DTOs, and value objects. In enterprise Java, these classes are everywhere — carrying data between layers, representing API responses, holding configuration values, and wrapping query results. Before records, every one of these classes required dozens of lines of boilerplate code.

2.1 Replacing a POJO

Consider a typical POJO used to represent a user in a REST API. Here is the traditional approach:

// Traditional POJO -- 40+ lines of boilerplate
public class UserDTO {
    private final String id;
    private final String username;
    private final String email;
    private final LocalDate joinDate;

    public UserDTO(String id, String username, String email, LocalDate joinDate) {
        this.id = id;
        this.username = username;
        this.email = email;
        this.joinDate = joinDate;
    }

    public String getId() { return id; }
    public String getUsername() { return username; }
    public String getEmail() { return email; }
    public LocalDate getJoinDate() { return joinDate; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        UserDTO userDTO = (UserDTO) o;
        return Objects.equals(id, userDTO.id) &&
               Objects.equals(username, userDTO.username) &&
               Objects.equals(email, userDTO.email) &&
               Objects.equals(joinDate, userDTO.joinDate);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, username, email, joinDate);
    }

    @Override
    public String toString() {
        return "UserDTO{id='" + id + "', username='" + username +
               "', email='" + email + "', joinDate=" + joinDate + "}";
    }
}

With a record, the entire class becomes:

// Record -- 1 line, same behavior
public record UserDTO(String id, String username, String email, LocalDate joinDate) { }

That single line gives you the constructor, accessor methods (id(), username(), email(), joinDate()), properly implemented equals() and hashCode(), and a readable toString(). The fields are private and final. The class is implicitly final. You go from 40+ lines to 1 line with zero loss of functionality.
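A quick sketch exercising those generated members (the record is repeated here so the snippet stands alone):

```java
import java.time.LocalDate;

// Same shape as the UserDTO record above
record UserDTO(String id, String username, String email, LocalDate joinDate) { }

var a = new UserDTO("1", "jane", "jane@example.com", LocalDate.of(2024, 1, 15));
var b = new UserDTO("1", "jane", "jane@example.com", LocalDate.of(2024, 1, 15));

System.out.println(a.username());                 // jane -- accessor, no "get" prefix
System.out.println(a.equals(b));                  // true -- value-based equality
System.out.println(a.hashCode() == b.hashCode()); // true -- consistent hashing
System.out.println(a);
// UserDTO[id=1, username=jane, email=jane@example.com, joinDate=2024-01-15]
```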

2.2 Replacing Value Objects in Domain-Driven Design

In Domain-Driven Design, value objects are defined by their attribute values rather than identity. Records are a natural fit because they implement value-based equality by default:

// Value objects using records
public record Money(BigDecimal amount, Currency currency) {

    public Money add(Money other) {
        if (!this.currency.equals(other.currency)) {
            throw new IllegalArgumentException("Cannot add different currencies");
        }
        return new Money(this.amount.add(other.amount), this.currency);
    }

    public Money multiply(int quantity) {
        return new Money(this.amount.multiply(BigDecimal.valueOf(quantity)), this.currency);
    }
}

public record Address(String street, String city, String state, String zipCode, String country) { }

public record DateRange(LocalDate start, LocalDate end) {

    public long days() {
        return ChronoUnit.DAYS.between(start, end);
    }

    public boolean contains(LocalDate date) {
        return !date.isBefore(start) && !date.isAfter(end);
    }

    public boolean overlaps(DateRange other) {
        return !this.end.isBefore(other.start) && !other.end.isBefore(this.start);
    }
}

2.3 API Response and Request DTOs

Records work exceptionally well for REST API request and response objects. They make your API contracts explicit and concise:

// API DTOs
public record CreateOrderRequest(
    String customerId,
    List<OrderItemRequest> items,
    String shippingAddress
) { }

public record OrderItemRequest(String productId, int quantity) { }

public record OrderResponse(
    String orderId,
    String status,
    BigDecimal totalAmount,
    LocalDateTime createdAt,
    List<OrderItemResponse> items
) { }

public record OrderItemResponse(
    String productName,
    int quantity,
    BigDecimal unitPrice,
    BigDecimal subtotal
) { }

// Usage in a Spring controller
@PostMapping("/orders")
public ResponseEntity<OrderResponse> createOrder(@RequestBody CreateOrderRequest request) {
    Order order = orderService.create(request);
    return ResponseEntity.ok(toResponse(order));
}

Notice how the record declarations read like documentation. You can look at CreateOrderRequest and immediately understand what the API expects. No scrolling through getters and setters to figure out the fields.

2.4 Replacing Tuples and Anonymous Data

Before records, Java developers often resorted to Map.Entry, arrays, or generic Pair classes to return multiple values. Records eliminate this anti-pattern:

// Before: returning multiple values was awkward
public Map.Entry<User, List<Permission>> loadUserWithPermissions(String userId) {
    // type-unsafe, hard to read
    return Map.entry(user, permissions);
}

// After: a record makes the return type explicit
public record UserWithPermissions(User user, List<Permission> permissions) { }

public UserWithPermissions loadUserWithPermissions(String userId) {
    User user = userRepo.findById(userId);
    List<Permission> permissions = permissionService.getFor(userId);
    return new UserWithPermissions(user, permissions);
}

// Even better -- the method signature documents itself
// No guessing what the Map.Entry key or value represents

3. Compact Constructors

One of the most powerful features of records is the compact constructor. Unlike a regular constructor, a compact constructor does not declare parameters and does not assign fields — the compiler handles those automatically. You just write the validation and normalization logic. Think of it as a pre-assignment hook.

3.1 Input Validation

Production code must validate inputs. The compact constructor is the perfect place to enforce invariants:

public record EmailAddress(String value) {

    // Compact constructor -- no parentheses, no parameter list
    public EmailAddress {
        Objects.requireNonNull(value, "Email cannot be null");
        if (!value.matches("^[A-Za-z0-9+_.-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$")) {
            throw new IllegalArgumentException("Invalid email: " + value);
        }
    }
}

public record Age(int value) {

    public Age {
        if (value < 0 || value > 150) {
            throw new IllegalArgumentException("Age must be between 0 and 150, got: " + value);
        }
    }
}

public record Percentage(double value) {

    public Percentage {
        if (value < 0.0 || value > 100.0) {
            throw new IllegalArgumentException("Percentage must be 0-100, got: " + value);
        }
    }
}

With a compact constructor, it is impossible to create an EmailAddress with an invalid format. The validation runs before the fields are assigned. This is a major advantage over traditional classes, where validation is easy to forget or, worse, can be bypassed entirely when objects are created through deserialization instead of the constructor.
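A short sketch showing that guarantee in action, with the EmailAddress record duplicated so the snippet runs on its own:

```java
import java.util.Objects;

// EmailAddress repeated from above so this snippet stands alone
record EmailAddress(String value) {
    EmailAddress {
        Objects.requireNonNull(value, "Email cannot be null");
        if (!value.matches("^[A-Za-z0-9+_.-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$")) {
            throw new IllegalArgumentException("Invalid email: " + value);
        }
    }
}

var ok = new EmailAddress("dev@example.com"); // passes validation

try {
    new EmailAddress("not-an-email");
} catch (IllegalArgumentException e) {
    System.out.println(e.getMessage()); // Invalid email: not-an-email
}
```

Every construction path runs the check, so holding an EmailAddress instance is itself proof the value is well-formed.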

3.2 Input Normalization

Compact constructors can also normalize data. In a compact constructor, you can reassign the parameter variables and the compiler uses the normalized values for field assignment:

public record Username(String value) {

    public Username {
        Objects.requireNonNull(value, "Username cannot be null");
        // Normalize: trim whitespace and convert to lowercase
        value = value.trim().toLowerCase();
        if (value.length() < 3 || value.length() > 30) {
            throw new IllegalArgumentException(
                "Username must be 3-30 characters, got: " + value.length());
        }
    }
}

public record PhoneNumber(String countryCode, String number) {

    public PhoneNumber {
        // Strip all non-digit characters
        countryCode = countryCode.replaceAll("[^0-9]", "");
        number = number.replaceAll("[^0-9]", "");
        // Validate after normalization
        if (countryCode.isEmpty()) {
            throw new IllegalArgumentException("Country code is required");
        }
        if (number.length() < 7 || number.length() > 15) {
            throw new IllegalArgumentException("Invalid phone number length");
        }
    }

    public String formatted() {
        return "+" + countryCode + " " + number;
    }
}

// Usage
var username = new Username("  JohnDoe  ");
System.out.println(username.value()); // "johndoe" -- trimmed and lowercased

var phone = new PhoneNumber("+1", "(555) 123-4567");
System.out.println(phone.formatted()); // "+1 5551234567"

3.3 Defensive Copies

Records are immutable, but if a component is a mutable collection, someone can modify the list after creating the record. Compact constructors should make defensive copies:

public record Team(String name, List<String> members) {

    public Team {
        Objects.requireNonNull(name, "Team name cannot be null");
        Objects.requireNonNull(members, "Members list cannot be null");
        // Defensive copy -- make the list unmodifiable
        members = List.copyOf(members);
    }
}

// Now the record is truly immutable
List<String> originalList = new ArrayList<>(List.of("Alice", "Bob"));
Team team = new Team("Backend", originalList);

originalList.add("Charlie"); // modifying the original list
System.out.println(team.members()); // [Alice, Bob] -- record is unaffected
team.members().add("Dave"); // throws UnsupportedOperationException

Rule of thumb: If any record component is a mutable type (List, Map, Set, Date, arrays), always make a defensive copy in the compact constructor using List.copyOf(), Map.copyOf(), or Set.copyOf(). This guarantees true immutability.
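List.copyOf() and friends cover collections, but the arrays mentioned above need manual handling: there is no copyOf for arrays, and the generated equals()/hashCode() compare arrays by reference. A sketch with a hypothetical Measurement record:

```java
import java.util.Arrays;

// Hypothetical record with an array component
public record Measurement(String sensorId, double[] readings) {

    public Measurement {
        readings = readings.clone(); // copy on the way in
    }

    @Override
    public double[] readings() {
        return readings.clone(); // copy on the way out
    }

    // The generated equals/hashCode compare arrays by reference,
    // so override them when value equality matters
    @Override
    public boolean equals(Object o) {
        return o instanceof Measurement m
                && sensorId.equals(m.sensorId)
                && Arrays.equals(readings, m.readings);
    }

    @Override
    public int hashCode() {
        return 31 * sensorId.hashCode() + Arrays.hashCode(readings);
    }
}
```

With both clones in place, mutating the caller's array or the returned array never affects the record's state.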

4. Records with Custom Methods

Records are not limited to accessor methods. You can add any instance method, static method, or static factory method to a record. The one hard restriction is on state: you cannot declare instance fields beyond the record components, so all per-instance state lives in the record header. Static fields and static initializers are allowed.

4.1 Business Logic Methods

Adding behavior to records keeps related logic close to the data it operates on:

public record Product(String id, String name, BigDecimal price, int stockQuantity) {

    public boolean isInStock() {
        return stockQuantity > 0;
    }

    public boolean isLowStock(int threshold) {
        return stockQuantity > 0 && stockQuantity <= threshold;
    }

    public BigDecimal calculateTotal(int quantity) {
        if (quantity > stockQuantity) {
            throw new IllegalArgumentException(
                "Requested " + quantity + " but only " + stockQuantity + " in stock");
        }
        return price.multiply(BigDecimal.valueOf(quantity));
    }

    // Records are immutable -- "modifying" returns a new record
    public Product withPrice(BigDecimal newPrice) {
        return new Product(id, name, newPrice, stockQuantity);
    }

    public Product withStockReduced(int sold) {
        return new Product(id, name, price, stockQuantity - sold);
    }
}

Notice the withPrice() and withStockReduced() methods. Since records are immutable, you cannot change a field — you create a new record with the modified value. This “wither” pattern is common in immutable designs and keeps the record’s data integrity intact.
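A quick usage sketch of the wither pattern; the Product record is trimmed to just the two withers so the snippet runs on its own:

```java
import java.math.BigDecimal;

public class WitherDemo {

    // Trimmed copy of the Product record from above, withers only
    record Product(String id, String name, BigDecimal price, int stockQuantity) {
        Product withPrice(BigDecimal newPrice) {
            return new Product(id, name, newPrice, stockQuantity);
        }
        Product withStockReduced(int sold) {
            return new Product(id, name, price, stockQuantity - sold);
        }
    }

    public static void main(String[] args) {
        Product original = new Product("P1", "Keyboard", new BigDecimal("49.99"), 10);

        // Each call returns a new instance; the original is never mutated
        Product discounted = original.withPrice(new BigDecimal("39.99"));
        Product afterSale  = discounted.withStockReduced(3);

        System.out.println(original.price());          // 49.99 -- original untouched
        System.out.println(afterSale.price());         // 39.99
        System.out.println(afterSale.stockQuantity()); // 7
    }
}
```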

4.2 Static Factory Methods

Static factory methods provide named constructors that make object creation clearer and can encapsulate complex initialization:

public record Temperature(double value, String unit) {

    public Temperature {
        if (!unit.equals("C") && !unit.equals("F") && !unit.equals("K")) {
            throw new IllegalArgumentException("Unit must be C, F, or K");
        }
        if (unit.equals("K") && value < 0) {
            throw new IllegalArgumentException("Kelvin cannot be negative");
        }
    }

    // Named factory methods -- much clearer than new Temperature(100, "C")
    public static Temperature celsius(double value) {
        return new Temperature(value, "C");
    }

    public static Temperature fahrenheit(double value) {
        return new Temperature(value, "F");
    }

    public static Temperature kelvin(double value) {
        return new Temperature(value, "K");
    }

    // Conversion methods
    public Temperature toCelsius() {
        return switch (unit) {
            case "C" -> this;
            case "F" -> celsius((value - 32) * 5.0 / 9.0);
            case "K" -> celsius(value - 273.15);
            default -> throw new IllegalStateException("Unknown unit: " + unit);
        };
    }

    public Temperature toFahrenheit() {
        double inCelsius = toCelsius().value();
        return fahrenheit(inCelsius * 9.0 / 5.0 + 32);
    }
}

// Usage
Temperature boiling = Temperature.celsius(100);
Temperature inFahrenheit = boiling.toFahrenheit();
System.out.println(inFahrenheit); // Temperature[value=212.0, unit=F]

4.3 Implementing Interfaces

Records can implement interfaces, which makes them work seamlessly with polymorphic designs:

public interface Printable {
    String toPrintFormat();
}

public interface Discountable {
    BigDecimal applyDiscount(BigDecimal discountPercent);
}

public record Invoice(
    String invoiceNumber,
    String customerName,
    BigDecimal amount,
    LocalDate dueDate
) implements Printable, Discountable, Comparable<Invoice> {

    @Override
    public String toPrintFormat() {
        return String.format("Invoice #%s | %s | $%s | Due: %s",
            invoiceNumber, customerName, amount, dueDate);
    }

    @Override
    public BigDecimal applyDiscount(BigDecimal discountPercent) {
        BigDecimal factor = BigDecimal.ONE.subtract(
            discountPercent.divide(BigDecimal.valueOf(100)));
        return amount.multiply(factor);
    }

    @Override
    public int compareTo(Invoice other) {
        return this.dueDate.compareTo(other.dueDate);
    }
}

5. Records and Collections

Records have properly implemented equals() and hashCode() by default, which makes them ideal for use as map keys, in sets, and for deduplication. This is a significant advantage over regular classes where forgetting to implement these methods leads to subtle bugs.

5.1 Records as Map Keys

public record Coordinate(double latitude, double longitude) { }

public record CacheKey(String endpoint, Map<String, String> params) {

    public CacheKey {
        params = Map.copyOf(params); // defensive copy for immutability
    }
}

// Records work perfectly as map keys because equals/hashCode are correct
Map<Coordinate, String> landmarks = new HashMap<>();
landmarks.put(new Coordinate(48.8566, 2.3522), "Eiffel Tower");
landmarks.put(new Coordinate(40.7484, -73.9857), "Empire State Building");

// This works because records compare by value
String name = landmarks.get(new Coordinate(48.8566, 2.3522));
System.out.println(name); // "Eiffel Tower"

// Cache keys with the same data are equal
Map<CacheKey, String> cache = new HashMap<>();
var key1 = new CacheKey("/api/users", Map.of("page", "1"));
cache.put(key1, "{cached response}");

var key2 = new CacheKey("/api/users", Map.of("page", "1"));
System.out.println(cache.get(key2)); // "{cached response}" -- key2 equals key1

5.2 Records in Sets for Deduplication

public record Tag(String name) {

    public Tag {
        name = name.trim().toLowerCase();
    }
}

// Automatic deduplication
List<Tag> allTags = List.of(
    new Tag("Java"), new Tag("java"), new Tag(" JAVA "), new Tag("Python"), new Tag("java")
);

Set<Tag> uniqueTags = new HashSet<>(allTags);
System.out.println(uniqueTags); // e.g. [Tag[name=java], Tag[name=python]] -- 2 unique tags (iteration order not guaranteed)

5.3 Sorting Records

public record Employee(String name, String department, double salary) { }

List<Employee> employees = List.of(
    new Employee("Alice", "Engineering", 120000),
    new Employee("Bob", "Marketing", 90000),
    new Employee("Charlie", "Engineering", 115000),
    new Employee("Diana", "Marketing", 95000),
    new Employee("Eve", "Engineering", 130000)
);

// Sort by salary descending
List<Employee> bySalary = employees.stream()
    .sorted(Comparator.comparingDouble(Employee::salary).reversed())
    .toList();

// Sort by department, then by salary within department
List<Employee> byDeptAndSalary = employees.stream()
    .sorted(Comparator.comparing(Employee::department)
                       .thenComparing(Comparator.comparingDouble(Employee::salary).reversed()))
    .toList();

// Accessor method references work naturally with Comparator

6. Records and Streams

Records and streams are a powerful combination. Records provide clean data containers, and streams provide the processing pipeline. The accessor method references (Employee::salary, Employee::department) work seamlessly with stream operations.

6.1 Stream Processing with Records

public record Transaction(
    String id,
    String customerId,
    BigDecimal amount,
    String category,
    LocalDateTime timestamp
) {

    public boolean isLargeTransaction() {
        return amount.compareTo(BigDecimal.valueOf(10000)) > 0;
    }

    public boolean isRecent(int daysBack) {
        return timestamp.isAfter(LocalDateTime.now().minusDays(daysBack));
    }
}

// Sample data
List<Transaction> transactions = loadTransactions();

// Find total spending by category
Map<String, BigDecimal> totalByCategory = transactions.stream()
    .collect(Collectors.groupingBy(
        Transaction::category,
        Collectors.reducing(BigDecimal.ZERO, Transaction::amount, BigDecimal::add)
    ));

// Find the highest-spending customer
Optional<Map.Entry<String, BigDecimal>> topSpender = transactions.stream()
    .collect(Collectors.groupingBy(
        Transaction::customerId,
        Collectors.reducing(BigDecimal.ZERO, Transaction::amount, BigDecimal::add)
    ))
    .entrySet().stream()
    .max(Map.Entry.comparingByValue());

// Get large transactions from the last 30 days
List<Transaction> recentLarge = transactions.stream()
    .filter(Transaction::isLargeTransaction)
    .filter(t -> t.isRecent(30))
    .sorted(Comparator.comparing(Transaction::amount).reversed())
    .toList();

6.2 Transforming Between Record Types

Streams are excellent for mapping between different record types — for example, converting domain entities to API responses:

// Domain entity
public record OrderEntity(
    Long id, Long customerId, BigDecimal total,
    String status, LocalDateTime createdAt
) { }

// API response
public record OrderSummary(
    String orderId, BigDecimal total, String status, String createdDate
) {

    public static OrderSummary fromEntity(OrderEntity entity) {
        return new OrderSummary(
            "ORD-" + entity.id(),
            entity.total(),
            entity.status(),
            entity.createdAt().format(DateTimeFormatter.ISO_LOCAL_DATE)
        );
    }
}

// Convert a list of entities to API responses
List<OrderSummary> summaries = orderEntities.stream()
    .map(OrderSummary::fromEntity)
    .toList();

// Group orders by status and count them
record StatusCount(String status, long count) { }

List<StatusCount> statusCounts = orderEntities.stream()
    .collect(Collectors.groupingBy(OrderEntity::status, Collectors.counting()))
    .entrySet().stream()
    .map(e -> new StatusCount(e.getKey(), e.getValue()))
    .sorted(Comparator.comparingLong(StatusCount::count).reversed())
    .toList();

6.3 Collectors.groupingBy with Records

Records make grouped data structures much cleaner. Instead of dealing with Map<String, Map<String, List<...>>>, you can create a record to represent the grouping key:

public record SaleRecord(
    String region, String productCategory, int quarter, BigDecimal revenue
) { }

// Multi-level grouping key as a record
public record RegionCategory(String region, String category) { }

List<SaleRecord> sales = loadSalesData();

// Group by region AND category using a record as the key
Map<RegionCategory, BigDecimal> revenueByRegionCategory = sales.stream()
    .collect(Collectors.groupingBy(
        s -> new RegionCategory(s.region(), s.productCategory()),
        Collectors.reducing(BigDecimal.ZERO, SaleRecord::revenue, BigDecimal::add)
    ));

// The record key makes the map easy to query
BigDecimal usElectronics = revenueByRegionCategory.get(
    new RegionCategory("US", "Electronics")
);

// Statistics per group
record SalesStats(RegionCategory group, long count, BigDecimal totalRevenue) { }

List<SalesStats> stats = sales.stream()
    .collect(Collectors.groupingBy(
        s -> new RegionCategory(s.region(), s.productCategory())))
    .entrySet().stream()
    .map(e -> new SalesStats(
        e.getKey(),
        e.getValue().size(),
        e.getValue().stream()
            .map(SaleRecord::revenue)
            .reduce(BigDecimal.ZERO, BigDecimal::add)
    ))
    .toList();

7. Records in JSON/Serialization

Serialization is where records meet the real world. Most Java applications need to convert records to JSON (for REST APIs) or serialize them for caching, messaging, and persistence. Both Jackson and Gson support records, but there are important details to know.

7.1 Jackson with Records

Jackson has supported records since version 2.12. Serialization works out of the box. Deserialization uses the canonical constructor automatically:

public record ApiResponse<T>(
    boolean success,
    String message,
    T data,
    Instant timestamp
) {

    public static <T> ApiResponse<T> ok(T data) {
        return new ApiResponse<>(true, "OK", data, Instant.now());
    }

    public static <T> ApiResponse<T> error(String message) {
        return new ApiResponse<>(false, message, null, Instant.now());
    }
}

// Jackson serialization -- works out of the box
ObjectMapper mapper = new ObjectMapper();
mapper.registerModule(new JavaTimeModule());
mapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);

record UserProfile(String name, String email, int age) { }

var profile = new UserProfile("John", "john@example.com", 30);
String json = mapper.writeValueAsString(profile);
// {"name":"John","email":"john@example.com","age":30}

// Deserialization -- Jackson calls the canonical constructor
UserProfile deserialized = mapper.readValue(json, UserProfile.class);
System.out.println(deserialized.name()); // "John"

7.2 Customizing JSON Property Names

You can use Jackson annotations on record components:

public record GitHubRepo(
    @JsonProperty("full_name") String fullName,
    @JsonProperty("stargazers_count") int stars,
    @JsonProperty("open_issues_count") int openIssues,
    @JsonProperty("html_url") String url
) { }

// Deserialization maps snake_case JSON keys to camelCase record components
String githubJson = """
    {
        "full_name": "openjdk/jdk",
        "stargazers_count": 17500,
        "open_issues_count": 324,
        "html_url": "https://github.com/openjdk/jdk"
    }
    """;

GitHubRepo repo = mapper.readValue(githubJson, GitHubRepo.class);
System.out.println(repo.fullName()); // "openjdk/jdk"

7.3 Gson with Records

Gson support for records was added in version 2.10. Earlier versions require a custom TypeAdapter. With Gson 2.10+, records work similarly to regular classes:

// Requires Gson 2.10+
Gson gson = new GsonBuilder()
    .setPrettyPrinting()
    .create();

record Config(String host, int port, boolean ssl) { }

var config = new Config("api.example.com", 443, true);
String json = gson.toJson(config);
// {"host":"api.example.com","port":443,"ssl":true}

Config parsed = gson.fromJson(json, Config.class);
System.out.println(parsed.host()); // "api.example.com"

7.4 Serialization Considerations

Key things to remember when serializing records:

  • Java Serialization: Records are serializable if they declare implements java.io.Serializable. Deserialization always uses the canonical constructor, which is safer than regular class deserialization (validation cannot be bypassed)
  • Jackson version: Use Jackson 2.12+ for record support. Older versions do not recognize records
  • Gson version: Use Gson 2.10+ for native record support
  • Compact constructor validation: Validation in compact constructors runs during deserialization, so invalid JSON throws at parse time; this is a feature, not a bug
  • Nested records: Both Jackson and Gson handle nested records automatically
  • Generic records: Jackson uses TypeReference for generic records; Gson uses TypeToken
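
To see the Java Serialization point in action, here is a minimal, self-contained sketch (the class and record names are hypothetical): because deserialization invokes the canonical constructor, the compact constructor's validation runs again when the object is read back.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class RecordSerializationDemo {

    // Records are not serializable by default; Serializable must be declared
    record Temperature(double celsius) implements Serializable {
        Temperature {
            if (celsius < -273.15) {
                throw new IllegalArgumentException("Below absolute zero: " + celsius);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        var bytes = new ByteArrayOutputStream();
        try (var oos = new ObjectOutputStream(bytes)) {
            oos.writeObject(new Temperature(21.5));
        }

        try (var ois = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            // readObject() calls the canonical constructor, so the
            // compact-constructor validation above runs here as well
            Temperature t = (Temperature) ois.readObject();
            System.out.println(t.celsius()); // prints 21.5
        }
    }
}
```

A tampered byte stream encoding an impossible temperature would fail with the same IllegalArgumentException, which regular class deserialization cannot guarantee.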

8. Local Records

Java allows you to define records inside a method. These are called local records. They are useful for intermediate data transformations where creating a top-level class would be overkill. Think of them as named tuples scoped to a single method.

public class ReportGenerator {

    public String generateSalesReport(List<Transaction> transactions) {
        // Local record -- only exists inside this method
        record DailySummary(LocalDate date, long transactionCount, BigDecimal totalRevenue) { }

        List<DailySummary> dailySummaries = transactions.stream()
            .collect(Collectors.groupingBy(
                t -> t.timestamp().toLocalDate()))
            .entrySet().stream()
            .map(e -> new DailySummary(
                e.getKey(),
                e.getValue().size(),
                e.getValue().stream()
                    .map(Transaction::amount)
                    .reduce(BigDecimal.ZERO, BigDecimal::add)
            ))
            .sorted(Comparator.comparing(DailySummary::date))
            .toList();

        // Use the local record to build the report
        StringBuilder report = new StringBuilder("Daily Sales Report\n");
        for (var summary : dailySummaries) {
            report.append(String.format("%s: %d transactions, $%s revenue%n",
                summary.date(), summary.transactionCount(), summary.totalRevenue()));
        }
        return report.toString();
    }
}

Local records are also excellent for multi-field sorting and grouping keys inside stream pipelines:

public Map<String, List<String>> getTopProductsByCategory(List<SaleRecord> sales) {
    // Local record for the intermediate aggregation
    record ProductRevenue(String product, BigDecimal revenue) { }

    return sales.stream()
        .collect(Collectors.groupingBy(SaleRecord::productCategory))
        .entrySet().stream()
        .collect(Collectors.toMap(
            Map.Entry::getKey,
            entry -> entry.getValue().stream()
                .collect(Collectors.groupingBy(
                    SaleRecord::productName,
                    Collectors.reducing(BigDecimal.ZERO, SaleRecord::revenue, BigDecimal::add)))
                .entrySet().stream()
                .map(e -> new ProductRevenue(e.getKey(), e.getValue()))
                .sorted(Comparator.comparing(ProductRevenue::revenue).reversed())
                .limit(5)
                .map(ProductRevenue::product)
                .toList()
        ));
}

Local records keep your code clean by scoping temporary data structures to where they are used. They do not pollute the class namespace and make refactoring easier — if the method goes away, so does the record.

9. Records vs Classes vs Lombok

One of the most common questions developers ask is: “Should I use a record, a regular class, or Lombok?” The answer depends on your use case. Here is a detailed comparison:

| Feature | Record | Regular Class | Lombok @Data | Lombok @Value |
| --- | --- | --- | --- | --- |
| Boilerplate | None | Full manual | None (generated) | None (generated) |
| Immutability | Built-in (always) | Manual (final fields) | Not immutable | Immutable |
| Inheritance | Cannot extend classes | Full inheritance | Full inheritance | Cannot extend (final) |
| Interfaces | Can implement | Can implement | Can implement | Can implement |
| Mutable fields | Not allowed | Allowed | Allowed (setters) | Not allowed |
| Custom methods | Yes | Yes | Yes | Yes |
| equals/hashCode | All fields (auto) | Manual or IDE | All fields (auto) | All fields (auto) |
| toString | All fields (auto) | Manual or IDE | All fields (auto) | All fields (auto) |
| Compact constructor | Yes | No | No | No |
| Deconstruction | Yes (Java 21+) | No | No | No |
| External dependency | None (JDK built-in) | None | Lombok library | Lombok library |
| IDE support | Full (Java 17+) | Full | Requires plugin | Requires plugin |
| Serialization safety | Canonical constructor always used | Can bypass constructor | Can bypass constructor | Can bypass constructor |

When to Use Each

Use records when:

  • The primary purpose is carrying data (DTOs, value objects, API models)
  • You want guaranteed immutability
  • You do not need inheritance
  • You want clean equals()/hashCode() without thinking about it
  • You are on Java 16+ and want to minimize external dependencies

Use regular classes when:

  • You need mutable state (JPA entities with setters)
  • You need class inheritance
  • You need fine-grained control over equals()/hashCode() (e.g., only compare by ID)
  • You are writing framework code that uses reflection to set fields

Use Lombok when:

  • You are stuck on Java 8-15 and cannot use records
  • You need the builder pattern (@Builder)
  • You need mutable objects with auto-generated getters/setters
  • Your team already uses Lombok and consistency matters

10. Record Patterns Preview

Java 21 finalized record patterns, which allow you to deconstruct records directly in instanceof checks and switch expressions. Records were designed with pattern matching in mind, and this feature is one of the most compelling reasons to move beyond Java 17.

Here is a preview of what record patterns look like:

// Java 21 Record Patterns (preview of what's coming if you're on Java 17)

// Traditional approach
public String describeShape(Object shape) {
    if (shape instanceof Circle c) {
        return "Circle with radius " + c.radius();
    } else if (shape instanceof Rectangle r) {
        return "Rectangle " + r.width() + "x" + r.height();
    }
    return "Unknown shape";
}

// With record patterns (Java 21) -- deconstruction in instanceof
public String describeShape(Object shape) {
    if (shape instanceof Circle(double radius)) {
        return "Circle with radius " + radius; // radius extracted directly
    } else if (shape instanceof Rectangle(double width, double height)) {
        return "Rectangle " + width + "x" + height; // both fields extracted
    }
    return "Unknown shape";
}

// Record patterns in switch expressions (Java 21)
sealed interface Shape permits Circle, Rectangle, Triangle { }

record Circle(double radius) implements Shape { }
record Rectangle(double width, double height) implements Shape { }
record Triangle(double a, double b, double c) implements Shape { }

public double area(Shape shape) {
    return switch (shape) {
        case Circle(var r)           -> Math.PI * r * r;
        case Rectangle(var w, var h) -> w * h;
        case Triangle(var a, var b, var c) -> {
            double s = (a + b + c) / 2;
            yield Math.sqrt(s * (s - a) * (s - b) * (s - c));
        }
    };
}

// Nested record patterns -- deconstruct deeply
record Point(double x, double y) { }
record Line(Point start, Point end) { }

public double lineLength(Line line) {
    // Deconstruct Line AND its nested Point records in one step
    if (line instanceof Line(Point(var x1, var y1), Point(var x2, var y2))) {
        return Math.sqrt(Math.pow(x2 - x1, 2) + Math.pow(y2 - y1, 2));
    }
    return 0;
}

If you are currently on Java 17, record patterns give you a compelling reason to plan your upgrade to Java 21. The combination of sealed interfaces + records + record patterns enables type-safe, exhaustive pattern matching that eliminates entire categories of bugs.

11. Best Practices

After working with records extensively in production Java 17 applications, here are the guidelines that matter most:

11.1 Always Validate in Compact Constructors

// Good -- validated at construction time
public record UserId(String value) {

    public UserId {
        Objects.requireNonNull(value, "UserId cannot be null");
        if (value.isBlank()) {
            throw new IllegalArgumentException("UserId cannot be blank");
        }
    }
}

// Bad -- no validation, allows invalid state
public record UserId(String value) { }

11.2 Make Defensive Copies for Mutable Components

// Good -- truly immutable
public record Config(String name, List<String> allowedOrigins) {

    public Config {
        allowedOrigins = List.copyOf(allowedOrigins);
    }
}

// Bad -- external code can modify the list
public record Config(String name, List allowedOrigins) { }

11.3 Use the “With” Pattern for Modifications

public record Settings(int maxRetries, Duration timeout, boolean verbose) {

    public Settings withMaxRetries(int maxRetries) {
        return new Settings(maxRetries, this.timeout, this.verbose);
    }

    public Settings withTimeout(Duration timeout) {
        return new Settings(this.maxRetries, timeout, this.verbose);
    }

    public Settings withVerbose(boolean verbose) {
        return new Settings(this.maxRetries, this.timeout, verbose);
    }
}

// Usage -- reads like a builder
Settings defaults = new Settings(3, Duration.ofSeconds(30), false);
Settings custom = defaults.withMaxRetries(5).withVerbose(true);

11.4 When NOT to Use Records

Records are powerful, but they are not the right tool for every situation:

  • JPA/Hibernate entities — Records cannot be JPA entities because JPA requires a no-arg constructor, mutable fields, and non-final classes. Use regular classes for entities.
  • Objects that need mutable state — Builders, configuration objects being assembled incrementally, or anything that accumulates state over time.
  • Classes that need inheritance — Records are implicitly final and extend java.lang.Record. If you need a class hierarchy, use regular classes or sealed classes.
  • Large numbers of fields — A record with 15+ components becomes hard to construct and read. Consider using a builder or breaking the data into nested records.
  • Custom equals/hashCode logic — If you need equals to only compare by ID (common in entities), a record is the wrong choice because its equals always uses all fields.
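
The last point is worth seeing side by side. In this sketch (the names UserRecord and UserEntity are hypothetical), the record's generated equals compares every component, while the class compares by ID only, which is the usual entity convention:

```java
public class EqualityDemo {

    // Record: equals/hashCode generated over ALL components
    record UserRecord(long id, String email) { }

    // Regular class: identity-based equality, only the id matters
    static final class UserEntity {
        final long id;
        String email; // mutable, as is typical for entities

        UserEntity(long id, String email) {
            this.id = id;
            this.email = email;
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof UserEntity other && other.id == id;
        }

        @Override
        public int hashCode() {
            return Long.hashCode(id);
        }
    }

    public static void main(String[] args) {
        // Same id, different email: the record says "not equal"...
        System.out.println(new UserRecord(1, "a@x.com")
            .equals(new UserRecord(1, "b@x.com"))); // prints false

        // ...while the entity says "equal" because only the id is compared
        System.out.println(new UserEntity(1, "a@x.com")
            .equals(new UserEntity(1, "b@x.com"))); // prints true
    }
}
```

If you need the second behavior, a record cannot provide it, because overriding a record's equals to ignore components defeats the purpose of using a record.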

11.5 Summary of Best Practices

  • Validate in compact constructors: prevents invalid objects from existing
  • Make defensive copies of collections and arrays: ensures true immutability
  • Use "with" methods for modifications: maintains immutability while enabling fluent APIs
  • Prefer records for DTOs and value objects: eliminates boilerplate and guarantees correct equality
  • Use local records for intermediate data: keeps temporary types scoped and clean
  • Keep component count under 7: readability suffers with more; too many fields need a different pattern
  • Use static factory methods: named constructors improve API clarity
  • Implement interfaces for polymorphism: records work well with sealed hierarchies
  • Do not use records for JPA entities: JPA requires no-arg constructors and mutability
  • Use Jackson 2.12+ or Gson 2.10+: older versions do not support records natively
March 1, 2026