Java 25 Migration Guide (21→25)

1. Introduction

Java 25 is the next Long-Term Support (LTS) release after Java 21, expected in September 2025. If your organization runs on Java 21 — which it should if you followed the last migration guide — Java 25 is the natural next upgrade target. Between Java 21 and Java 25, four feature releases shipped (22, 23, 24, 25), each adding language features, API improvements, and runtime enhancements.

The upgrade from 21 to 25 is less disruptive than the jump from 17 to 21. There are no paradigm-shifting features like virtual threads this time around. Instead, Java 25 polishes and finalizes features that were in preview during Java 21, adds new language conveniences, and delivers meaningful performance improvements. Think of it as Java 21 with the rough edges smoothed out.

LTS Support Timelines

Version Release Date Type Oracle Premier Support Until Extended Support Until
Java 17 September 2021 LTS September 2026 September 2029
Java 21 September 2023 LTS September 2028 September 2031
Java 22 March 2024 Non-LTS September 2024 N/A
Java 23 September 2024 Non-LTS March 2025 N/A
Java 24 March 2025 Non-LTS September 2025 N/A
Java 25 September 2025 LTS September 2030 September 2033

Why migrate now? Java 21 premier support runs until September 2028, so there is no rush. But the new features in Java 25 — particularly finalized structured concurrency, scoped values, and the performance improvements — provide real value. Planning your migration now gives you time to test thoroughly and adopt new features incrementally.

2. New Language Features Summary (Java 22-25)

Here is a comprehensive table of every significant feature added between Java 22 and Java 25. Features marked Final are production-ready. Features marked Preview require --enable-preview to use.

Feature JEP Introduced Finalized Status in Java 25
Module Import Declarations 476 Java 23 (preview) Java 25 Final
Implicitly Declared Classes & Instance Main 477 Java 21 (preview as JEP 445) Java 25 Final
Primitive Types in Patterns 488 Java 23 (preview) Java 25 Final
Flexible Constructor Bodies 513 Java 22 (preview as JEP 447) Java 25 Final
Structured Concurrency 499 Java 19 (incubator) Java 25 (expected) Final (expected)
Scoped Values 487 Java 20 (incubator) Java 25 (expected) Final (expected)
Class-File API 484 Java 22 (preview) Java 24 Final
Foreign Function & Memory API 454 Java 14 (incubator) Java 22 Final
Unnamed Patterns and Variables 456 Java 21 (preview) Java 22 Final
Stream Gatherers 485 Java 22 (preview) Java 24 Final
Key Derivation Function API 478 Java 24 Java 24 Final
AOT Class Loading & Linking 483 Java 24 Java 24 Final
Stable Values 502 Java 25 Preview
Compact Object Headers 519 Java 24 (experimental as JEP 450) Java 25 Final (off by default)
Vector API 489 Java 16 (incubator) Incubator

The theme of Java 22-25 is finalization. Many features that were in preview or incubator during Java 21 have graduated to production-ready status. This means you can use them without --enable-preview flags and rely on them in production code with confidence.

3. Breaking Changes

The migration from Java 21 to Java 25 has fewer breaking changes than the 17-to-21 jump, but there are important ones to be aware of:

3.1 Removed Features

Removed Feature Removed In Replacement Action Required
String Templates (STR, FMT processors) Java 23 None yet (may return in different form) Remove any preview usage of STR."..."
sun.misc.Unsafe memory methods (partial) Java 23+ Foreign Function & Memory API (java.lang.foreign) Migrate to MemorySegment and Arena
Windows 32-bit x86 port Java 24 Use 64-bit JDK Switch to 64-bit if running 32-bit Windows

3.2 Behavioral Changes

Change Version Impact What to Do
Integrity by default — stronger module encapsulation Java 24 Illegal reflective access warnings become errors Add --add-opens or fix code to use public APIs
UTF-8 by default (already in Java 18) Java 18+ File I/O uses UTF-8 regardless of system locale Verify encoding assumptions in file operations
Deprecation enforcement Various Previously deprecated methods may be removed in future versions Address deprecation warnings now
Security Manager restrictions Java 24 Cannot install a SecurityManager at all Remove SecurityManager usage entirely

3.3 The Integrity by Default Change (Important)

This is the most impactful behavioral change. Starting in Java 24, the JVM enforces module boundaries more strictly. Libraries that use deep reflection to access internal JDK classes will fail at runtime instead of just warning. This primarily affects:

  • Serialization libraries that access private fields via reflection (older versions of Jackson and Gson)
  • ORM frameworks that generate proxy classes (older versions of Hibernate)
  • Testing frameworks that mock final classes or private methods (older versions of Mockito)
  • Application servers that manipulate classloading

The fix is to upgrade to recent versions of these libraries (which already handle the restrictions properly) or add --add-opens flags as a temporary workaround.
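The failure mode is easy to reproduce. The sketch below (class name is illustrative) attempts the kind of deep reflection such libraries perform -- reading a private field of a JDK class -- and reports whether the JVM blocks it:

```java
import java.lang.reflect.Field;
import java.lang.reflect.InaccessibleObjectException;

public class EncapsulationProbe {

    // Attempt the deep reflection older libraries rely on:
    // opening the private backing field of java.lang.String.
    static String probe() {
        try {
            Field value = String.class.getDeclaredField("value");
            // Throws unless started with --add-opens java.base/java.lang=ALL-UNNAMED
            value.setAccessible(true);
            return "open";
        } catch (InaccessibleObjectException e) {
            return "blocked";
        } catch (NoSuchFieldException e) {
            return "field-missing";
        }
    }

    public static void main(String[] args) {
        System.out.println(probe());
    }
}
```

Run it twice -- once plainly (prints "blocked") and once with `--add-opens java.base/java.lang=ALL-UNNAMED` (prints "open") -- to confirm that a given flag actually reaches your JVM.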

4. Step-by-Step Migration Checklist

Follow these ten steps to migrate from Java 21 to Java 25. Each step should be a separate commit or PR so you can isolate issues.

Step 1: Audit Your Current Setup

// Check your current Java version
// $ java -version
// openjdk version "21.0.x" ...

// Check for deprecation warnings in your build
// $ mvn compile 2>&1 | grep -i "deprecat"
// $ gradle compileJava 2>&1 | grep -i "deprecat"

// List all --add-opens and --add-exports flags you currently use
// $ grep -r "add-opens\|add-exports" pom.xml build.gradle Dockerfile

Step 2: Update Dependencies First

Before changing the JDK version, update your dependencies to versions that support Java 25. This is the step that catches most issues.

// Key dependencies to update (minimum versions for Java 25 support):
//
// Build plugins:
//   maven-compiler-plugin   >= 3.13
//   maven-surefire-plugin   >= 3.3
//   gradle                  >= 8.8
//
// Frameworks:
//   Spring Boot             >= 3.4 (recommended: 3.5+)
//   Spring Framework        >= 6.2 (recommended: 6.3+)
//   Quarkus                 >= 3.15
//   Micronaut               >= 4.7
//
// Libraries:
//   Jackson                 >= 2.17
//   Hibernate               >= 6.5
//   Lombok                  >= 1.18.34
//   Mockito                 >= 5.12
//   JUnit                   >= 5.11
//   Byte Buddy              >= 1.15
//   ASM                     >= 9.7

Step 3: Install JDK 25

// Option 1: SDKMAN (recommended for developers)
// $ sdk install java 25-open
// $ sdk use java 25-open

// Option 2: Eclipse Temurin (recommended for production)
// Download from https://adoptium.net/temurin/releases/

// Option 3: Amazon Corretto
// Download from https://aws.amazon.com/corretto/

// Verify installation
// $ java -version
// openjdk version "25" 2025-09-16

Step 4: Update Build Configuration

// Maven pom.xml
// <properties>
//     <java.version>25</java.version>
//     <maven.compiler.source>25</maven.compiler.source>
//     <maven.compiler.target>25</maven.compiler.target>
//     <maven.compiler.release>25</maven.compiler.release>
// </properties>

// Gradle build.gradle
// java {
//     toolchain {
//         languageVersion = JavaLanguageVersion.of(25)
//     }
// }

// Gradle build.gradle.kts (Kotlin DSL)
// java {
//     toolchain {
//         languageVersion.set(JavaLanguageVersion.of(25))
//     }
// }

Step 5: Compile and Fix Errors

Run a clean build and address compilation errors. Common issues:

Error Cause Fix
cannot access class sun.misc.Unsafe Direct Unsafe usage Migrate to java.lang.foreign API or VarHandle
module X does not export Y Accessing internal APIs Use public API alternatives or --add-exports
class file has wrong version 69.0 Dependency compiled with newer Java than build tool expects Update build tool plugins
Preview feature warnings Using features that were preview in 21 but changed since Update code to use finalized syntax
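For the sun.misc.Unsafe row, the usual destination for field-offset tricks is VarHandle, available since Java 9. A minimal sketch of an atomic counter (names are illustrative):

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class VarHandleCounter {

    private volatile int count;

    // Replaces Unsafe.objectFieldOffset + compareAndSwapInt style code
    private static final VarHandle COUNT;
    static {
        try {
            COUNT = MethodHandles.lookup()
                    .findVarHandle(VarHandleCounter.class, "count", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    int incrementAndGet() {
        // getAndAdd returns the previous value, so add 1 for the new one
        return (int) COUNT.getAndAdd(this, 1) + 1;
    }

    public static void main(String[] args) {
        var counter = new VarHandleCounter();
        counter.incrementAndGet();
        System.out.println(counter.incrementAndGet()); // prints 2
    }
}
```

For off-heap memory access (rather than field access), the Foreign Function & Memory API's MemorySegment and Arena are the replacements, as noted in section 3.1.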

Step 6: Run Tests

// Run the full test suite
// $ mvn test -Dsurefire.useFile=false
// $ gradle test --info

// Pay attention to:
// 1. Reflection-based tests (may fail due to stronger encapsulation)
// 2. Serialization tests (format may differ between JDK versions)
// 3. Tests that depend on internal JDK behavior (GC, classloading order)
// 4. Tests that parse java -version output or check system properties
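Point 4 above deserves special care: instead of parsing `java -version` text, use the structured Runtime.version() API, available since Java 9. A small sketch (class name is illustrative):

```java
public class RuntimeCheck {

    // Feature version is the "25" in "25.0.1" -- no text parsing needed
    static int featureVersion() {
        return Runtime.version().feature();
    }

    public static void main(String[] args) {
        System.out.println("Running on Java " + featureVersion());
        if (featureVersion() < 25) {
            System.err.println("Warning: expected Java 25 or later");
        }
    }
}
```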

Step 7: Check Runtime Behavior

Some issues only appear at runtime. Start your application and verify:

  • Application starts without warnings or errors
  • All reflection-heavy features work (dependency injection, ORM, serialization)
  • Performance is equal to or better than Java 21
  • Memory usage is stable
  • Third-party integrations (databases, message queues, caches) connect successfully

Step 8: Address Deprecation Warnings

// Compile with deprecation warnings visible (-Xlint:deprecation is a javac
// flag, so route it through the build tool):
// $ mvn compile -Dmaven.compiler.showDeprecation=true
// $ gradle compileJava   (with options.compilerArgs << "-Xlint:deprecation" in build.gradle)

// Common deprecations to address:
// - SecurityManager usage -> remove entirely
// - Finalize methods -> use Cleaner or try-with-resources
// - Thread.stop(), Thread.suspend() -> use interrupt-based signaling
// - Old Date/Calendar APIs -> use java.time
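The Date/Calendar item shows up in almost every older codebase. A sketch of the before-and-after, plus the conversion you typically need at the boundary (class name is illustrative):

```java
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.util.Calendar;
import java.util.GregorianCalendar;

public class DateMigration {

    // Boundary helper: convert a legacy Calendar to java.time
    static LocalDate toLocalDate(Calendar legacy) {
        return legacy.toInstant()
                     .atZone(ZoneId.systemDefault())
                     .toLocalDate();
    }

    public static void main(String[] args) {
        // Legacy: mutable, months are 0-based (Calendar.MARCH == 2)
        Calendar legacy = new GregorianCalendar(2026, Calendar.MARCH, 1);

        // Modern: immutable, 1-based months, thread-safe
        LocalDate modern = LocalDate.of(2026, 3, 1);

        System.out.println(modern.format(DateTimeFormatter.ISO_DATE)); // 2026-03-01
        System.out.println(modern.equals(toLocalDate(legacy)));        // true
    }
}
```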

Step 9: Optimize with New Features (Optional)

Once the migration is stable, selectively adopt new features:

Feature Migration Effort Benefit Priority
Module imports Low — IDE refactor Cleaner import blocks Low (cosmetic)
Structured concurrency Medium — refactor concurrent code Safer, more maintainable concurrency High if using concurrency
Scoped values Medium — replace ThreadLocal No memory leaks, better with virtual threads High if using ThreadLocal
Primitive patterns Low — refactor switch/if-else Cleaner pattern matching Medium
AOT class loading Low — deployment config only Faster startup High for microservices
Stream Gatherers Low — new code or refactor Custom stream operations Low to Medium
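The ThreadLocal-to-ScopedValue row is the one that changes day-to-day code the most. A minimal sketch of the pattern, assuming the finalized java.lang.ScopedValue API described in section 2 (class and field names are illustrative):

```java
public class RequestContext {

    // One immutable binding per dynamic scope; unlike ThreadLocal there is
    // no set()/remove() pair to forget, so no leak across pooled threads.
    static final ScopedValue<String> CURRENT_USER = ScopedValue.newInstance();

    static String audit() {
        // Readable anywhere below the binding on the call stack
        return "user=" + CURRENT_USER.get();
    }

    static String withUser(String user) {
        // The binding exists only for the duration of call()
        return ScopedValue.where(CURRENT_USER, user).call(() -> audit());
    }

    public static void main(String[] args) {
        System.out.println(withUser("alice")); // user=alice
    }
}
```

This requires a Java 25 runtime; on Java 21 the equivalent API was still in preview.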

Step 10: Update Deployment Infrastructure

Update your deployment pipeline, Docker images, and CI/CD configuration (covered in detail in sections 8 and 9 below).

5. Build Tool Updates

5.1 Maven Configuration

// pom.xml -- complete Maven configuration for Java 25
// <project>
//     <properties>
//         <maven.compiler.source>25</maven.compiler.source>
//         <maven.compiler.target>25</maven.compiler.target>
//     </properties>
//
//     <build>
//         <plugins>
//             <plugin>
//                 <groupId>org.apache.maven.plugins</groupId>
//                 <artifactId>maven-compiler-plugin</artifactId>
//                 <version>3.13.0</version>
//                 <configuration>
//                     <release>25</release>
//                     <!-- For preview features: -->
//                     <!-- <compilerArgs><arg>--enable-preview</arg></compilerArgs> -->
//                 </configuration>
//             </plugin>
//
//             <plugin>
//                 <groupId>org.apache.maven.plugins</groupId>
//                 <artifactId>maven-surefire-plugin</artifactId>
//                 <version>3.3.1</version>
//                 <configuration>
//                     <!-- For preview features: -->
//                     <!-- <argLine>--enable-preview</argLine> -->
//                 </configuration>
//             </plugin>
//         </plugins>
//     </build>
// </project>

5.2 Gradle Configuration

// build.gradle.kts -- Kotlin DSL
// plugins {
//     java
//     id("org.springframework.boot") version "3.5.0"
// }
//
// java {
//     toolchain {
//         languageVersion.set(JavaLanguageVersion.of(25))
//     }
// }
//
// tasks.withType<JavaCompile> {
//     options.release.set(25)
//     // For preview features:
//     // options.compilerArgs.add("--enable-preview")
// }
//
// tasks.withType<Test> {
//     useJUnitPlatform()
//     // For preview features:
//     // jvmArgs("--enable-preview")
// }

// build.gradle -- Groovy DSL
// java {
//     toolchain {
//         languageVersion = JavaLanguageVersion.of(25)
//     }
// }
//
// compileJava {
//     options.release = 25
// }

6. Framework Compatibility

Here is the compatibility matrix for major Java frameworks with Java 25:

Framework Minimum Version for Java 25 Recommended Version Notes
Spring Boot 3.4.x 3.5.x or 4.0.x 3.5+ expected to have official Java 25 support
Spring Framework 6.2.x 6.3.x or 7.0.x Spring Framework 7 targets Java 25 as baseline
Quarkus 3.15+ 3.17+ or 4.x Quarkus adds Java support quickly after release
Micronaut 4.7+ 4.8+ Good Java version support track record
Jakarta EE 10 11 Jakarta EE 11 aligns with Java 25 features
Hibernate 6.5+ 6.6+ Ensure Byte Buddy version is compatible
Lombok 1.18.34+ Latest Lombok is sensitive to JDK internals — always use latest
MapStruct 1.6+ Latest Annotation processors need compiler compatibility

6.1 Spring Boot Migration Tips

// application.properties updates for Java 25

// Enable virtual threads (already available since Spring Boot 3.2 + Java 21)
spring.threads.virtual.enabled=true

// If using AOT class loading, configure the training profile
// spring.profiles.active=aot-training

// Structured concurrency with Spring -- example service
import java.util.concurrent.StructuredTaskScope;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class UserProfileService {

    @Autowired private UserRepository userRepo;
    @Autowired private OrderRepository orderRepo;

    public UserProfile getProfile(long userId) throws Exception {
        try (var scope = StructuredTaskScope.open()) {
            var userTask = scope.fork(() -> userRepo.findById(userId));
            var ordersTask = scope.fork(() -> orderRepo.findByUserId(userId));
            scope.join();
            return new UserProfile(userTask.get(), ordersTask.get());
        }
    }
}

7. Common Migration Issues

Based on community experience with Java 22-24 migrations, here are the most common issues and their solutions:

# Issue Symptom Solution
1 Illegal reflective access InaccessibleObjectException at runtime Update the library to a version that uses public APIs, or add --add-opens java.base/java.lang=ALL-UNNAMED as a temporary workaround
2 Lombok compilation failure java.lang.IllegalAccessError during annotation processing Update Lombok to >= 1.18.34. Lombok accesses JDK internals and needs frequent updates
3 Byte Buddy / Mockito failure IllegalArgumentException: Unsupported class file version Update Byte Buddy to >= 1.15 and Mockito to >= 5.12
4 Jackson serialization issues Reflection errors on record types or sealed classes Update Jackson to >= 2.17; ensure jackson-module-parameter-names is included
5 SecurityManager removal UnsupportedOperationException: The Security Manager is deprecated Remove all SecurityManager code; it cannot be installed in Java 24+
6 ASM version incompatibility Build tools or plugins fail to parse class files Update ASM to >= 9.7 (via dependency management or plugin updates)
7 Finalizer deprecation warnings Warnings about overriding finalize() Replace with Cleaner or explicit close() methods with try-with-resources
8 Preview feature code from Java 21 Compilation errors for preview syntax that changed Remove String Templates usage (removed in 23); update any preview feature syntax
9 Test framework incompatibility Tests fail to start or mock creation fails Update JUnit to 5.11+ and test dependency versions
10 Docker base image not available No eclipse-temurin:25 image found Wait for Adoptium release (usually within days of GA) or use early-access builds

8. Performance Improvements

Java 25 delivers measurable performance improvements in several areas. Here is what you can expect without changing any application code:

8.1 AOT Class Loading & Linking

The biggest startup improvement. By pre-loading and pre-linking classes, the JVM skips the expensive discovery, verification, and linking phases that dominate startup time. This benefits all applications but is most impactful for large applications with many classes (Spring Boot applications, Jakarta EE servers).

Metric Java 21 Java 25 (without AOT) Java 25 (with AOT cache)
Spring Boot startup ~4.0s ~3.5s ~1.5s
Classes loaded at startup ~12,000 ~12,000 ~12,000 (from cache)
Time to first request ~5.0s ~4.5s ~2.0s

8.2 Compact Object Headers

Reduces every object’s header from 12 bytes to 8 bytes. For applications with millions of small objects (collections-heavy code, graph structures, caches), this translates to 10-20% heap memory savings. Enable with -XX:+UseCompactObjectHeaders (experimental in Java 24, a regular product option in Java 25, but still off by default).
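The arithmetic behind that estimate is easy to sanity-check. A back-of-envelope sketch (8-byte object alignment assumed; real layouts vary, so measure with a tool like JOL before relying on the numbers):

```java
public class HeaderSavings {

    // Round a raw instance size up to the JVM's 8-byte object alignment
    static long aligned(long bytes) {
        return (bytes + 7) & ~7L;
    }

    public static void main(String[] args) {
        long fields = 8; // e.g. an object with two int fields

        long legacy  = aligned(12 + fields); // 12-byte header -> 24 bytes
        long compact = aligned(8 + fields);  //  8-byte header -> 16 bytes

        System.out.println(legacy);  // 24
        System.out.println(compact); // 16
        System.out.printf("saved: %.0f%%%n", 100.0 * (legacy - compact) / legacy);
    }
}
```

For this particular shape the saving is about a third; across a realistic heap mix the 10-20% figure above is the more typical outcome.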

8.3 Generational ZGC Improvements

ZGC’s generational mode (default since Java 23) has been further tuned:

  • Lower pause times: Sub-millisecond pauses are more consistently achieved
  • Better throughput: Young generation collection is more efficient
  • Less tuning needed: Default heap region sizes and collection triggers are smarter
  • Lower memory overhead: ZGC’s own memory usage has been reduced

8.4 JIT Compiler Improvements

The C2 JIT compiler includes better escape analysis, improved loop optimizations, and enhanced auto-vectorization. These improvements benefit all code without any configuration changes. Typical throughput improvement is 2-5% compared to Java 21 for compute-heavy workloads.

9. Docker and CI/CD Updates

9.1 Updated Dockerfile

// Multi-stage Dockerfile for Java 25 with AOT cache
//
// # Build stage
// FROM eclipse-temurin:25-jdk AS builder
// WORKDIR /app
// COPY pom.xml .
// COPY src ./src
// RUN mvn package -DskipTests
//
// # AOT training stage
// FROM eclipse-temurin:25-jre AS aot-trainer
// WORKDIR /app
// COPY --from=builder /app/target/myapp.jar .
// RUN java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
//     -Dspring.context.exit=onRefresh -jar myapp.jar || true
// RUN java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
//     -XX:AOTCache=app.aot -jar myapp.jar
//
// # Production stage
// FROM eclipse-temurin:25-jre
// WORKDIR /app
// COPY --from=builder /app/target/myapp.jar .
// COPY --from=aot-trainer /app/app.aot .
//
// ENV JAVA_OPTS="-XX:AOTCache=/app/app.aot \
//     -XX:+UseZGC \
//     -XX:MaxRAMPercentage=75"
//
// EXPOSE 8080
// ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar myapp.jar"]

9.2 Base Image Options

Image Size Use Case
eclipse-temurin:25-jre ~200MB Standard production image
eclipse-temurin:25-jre-alpine ~100MB Smaller image, Alpine-based
eclipse-temurin:25-jdk ~350MB Build stage or development
amazoncorretto:25 ~220MB AWS-optimized, good for ECS/EKS
azul/zulu-openjdk:25 ~210MB Azul-supported, good for Azure

9.3 CI/CD Pipeline Updates

// GitHub Actions workflow for Java 25
//
// name: Build and Test
// on: [push, pull_request]
//
// jobs:
//   build:
//     runs-on: ubuntu-latest
//     steps:
//       - uses: actions/checkout@v4
//
//       - name: Set up JDK 25
//         uses: actions/setup-java@v4
//         with:
//           java-version: '25'
//           distribution: 'temurin'
//           cache: 'maven'
//
//       - name: Build and test
//         run: mvn verify
//
//       - name: Upload artifact
//         uses: actions/upload-artifact@v4
//         with:
//           name: app-jar
//           path: target/*.jar

10. Best Practices for Adoption

10.1 Adopt Features Incrementally

Do not try to adopt every new feature at once. Here is a recommended order:

  1. Week 1-2: Get the application compiling and tests passing on Java 25 with zero code changes
  2. Week 3-4: Enable AOT class loading for faster startup (deployment config only, no code changes)
  3. Month 2: Start using module imports in new code
  4. Month 2-3: Replace ThreadLocal with ScopedValue where appropriate
  5. Month 3-4: Refactor concurrent code to use structured concurrency
  6. Ongoing: Use primitive patterns, flexible constructors, and other language features in new code

10.2 LTS Stability Considerations

Java 25 as an LTS release will receive updates for years. Here are best practices for stability:

  • Pin your JDK version: Use specific update releases (e.g., 25.0.2) rather than just 25 in production. Update deliberately, not automatically.
  • Avoid preview features in production: Preview features can change between releases. Use them for experimentation and testing, but not in production code that needs to survive JDK updates.
  • Test with the exact JDK you will deploy: Temurin, Corretto, and Oracle JDK can have subtle behavioral differences. Test with the same distribution you run in production.
  • Monitor JDK release notes: Each quarterly update (25.0.1, 25.0.2, etc.) may include behavioral changes. Read the release notes before upgrading.

10.3 Migration Timeline Recommendation

Phase Timeline Goal
Evaluation October 2025 Build and test on Java 25, identify issues
Development November-December 2025 Fix issues, update dependencies, run full test suite
Staging January 2026 Deploy to staging/pre-production, performance testing
Production rollout February-March 2026 Gradual production deployment (canary, then full)
Feature adoption Q2 2026 onward Incrementally adopt new language features and APIs

11. Summary

Migrating from Java 21 to Java 25 is a straightforward upgrade with high reward. The breaking changes are minimal (primarily around module encapsulation enforcement and SecurityManager removal), and the benefits are substantial:

  • Finalized concurrency APIs: Structured concurrency and scoped values are production-ready, giving you safer, more maintainable concurrent code
  • Language improvements: Module imports, primitive patterns, flexible constructors, and simplified source files make Java code more concise and expressive
  • Performance gains: AOT class loading for 2-3x faster startup, compact object headers for 10-20% less memory, improved ZGC and JIT compilation
  • Better developer experience: Simpler programs for teaching and scripting, less boilerplate everywhere

The migration playbook is: update dependencies first, change the JDK version, fix compilation and test failures, then adopt new features incrementally. Most teams should be able to complete the core migration in two to four weeks, with feature adoption continuing over the following months.

Java 25 is a worthy successor to Java 21. It does not introduce another paradigm shift like virtual threads, but it polishes, finalizes, and optimizes everything that Java 21 started. If you are on Java 21, start planning your Java 25 migration now. If you are still on Java 17 or earlier, consider jumping directly to Java 25 — you will get the best of both worlds.

March 1, 2026

Java 25 Flexible Constructor Bodies

1. Introduction

For over 25 years, Java developers have lived with one of the most annoying rules in the language: the first statement in a constructor must be super() or this(). No exceptions. No validation before calling the superclass. No computation to prepare arguments. If your subclass constructor needed to check that an argument was valid before passing it up, you could not do it directly — you had to resort to ugly workarounds like static helper methods, ternary operator abuse, or factory method patterns.

Think of it this way: imagine you are renovating a house, and the building code says you must pour the foundation before you can even inspect the land. You cannot test the soil, check the property lines, or measure the slope first. You just have to hope everything is fine and deal with problems after the concrete is set. That is what Java constructors felt like before this change.

Here is the kind of code that drove developers crazy:

public class PositiveBigInteger extends BigInteger {

    public PositiveBigInteger(long value) {
        // I want to validate that value > 0 BEFORE calling super()
        // But the compiler says: "Call to 'super()' must be first statement"
        super(Long.toString(value));  // Forced to call super first

        // Now I can validate, but the superclass already did all its work
        // with potentially invalid data
        if (value <= 0) {
            throw new IllegalArgumentException("non-positive value");
        }
    }
}

The superclass constructor runs, allocates resources, possibly writes to files or databases -- all with an invalid argument. Then you throw an exception. The damage is done. This pattern is wasteful, error-prone, and fundamentally backwards.

Java 25 fixes this with Flexible Constructor Bodies (JEP 513). You can now write statements before the call to super() or this(). Validation, argument transformation, field initialization -- all of it can happen in a "prologue" section before the constructor delegation. The feature was previewed in Java 22 (JEP 447), Java 23 (JEP 482), and Java 24 (JEP 492), and is finalized as a permanent feature in Java 25 LTS.

2. What Changed

The rule is simple: you can now place statements before the explicit constructor invocation (super(...) or this(...)). These statements form what the JLS calls the prologue of the constructor. The statements after the constructor invocation form the epilogue.

public class Employee extends Person {

    public Employee(String name, int age) {
        // === PROLOGUE (new in Java 25) ===
        // Validate arguments before calling super
        if (name == null || name.isBlank()) {
            throw new IllegalArgumentException("Name cannot be blank");
        }
        if (age < 18 || age > 67) {
            throw new IllegalArgumentException("Age must be between 18 and 67");
        }

        // Compute values to pass to super
        String normalizedName = name.trim().toUpperCase();

        // === CONSTRUCTOR INVOCATION ===
        super(normalizedName, age);

        // === EPILOGUE (same as before) ===
        // Full access to this, fields, methods
        this.startDate = LocalDate.now();
        log("Employee created: " + this.getName());
    }
}

The constructor body is now divided into two phases:

Phase Location What You Can Do What You Cannot Do
Prologue Before super() / this() Validate arguments, compute values, declare local variables, throw exceptions, initialize uninitialized fields Access this (methods, read fields), access super (fields, methods), create inner class instances
Epilogue After super() / this() Everything -- full access to this, fields, methods, superclass members Nothing restricted (same as traditional constructor body)

The key insight is that the object is in an early construction context during the prologue. The superclass has not been initialized yet, so the object is not fully formed. Java protects you from accessing the uninitialized object by restricting what you can do in the prologue. But it gives you enough freedom to validate, compute, and prepare -- which is all you typically need.

3. Validation Before super()

The most common use case for flexible constructor bodies is argument validation. You want to fail fast -- reject bad input before the superclass constructor does any work. This is especially important when the superclass constructor is expensive (allocates resources, opens connections, writes to disk).

Example: Validate Before BigInteger Construction

public class PositiveBigInteger extends BigInteger {

    public PositiveBigInteger(long value) {
        // Validate BEFORE superclass does any work
        if (value <= 0) {
            throw new IllegalArgumentException("Value must be positive: " + value);
        }
        super(Long.toString(value));
    }
}

// Usage:
var num = new PositiveBigInteger(42);   // Works fine
var bad = new PositiveBigInteger(-5);   // Throws immediately, no wasted work

Example: Multiple Validation Checks

public class DatabaseConnection extends AbstractConnection {
    private final String schema;

    public DatabaseConnection(String host, int port, String schema, String user) {
        // Validate all arguments before superclass initializes the connection
        Objects.requireNonNull(host, "Host cannot be null");
        Objects.requireNonNull(schema, "Schema cannot be null");
        Objects.requireNonNull(user, "User cannot be null");

        if (host.isBlank()) {
            throw new IllegalArgumentException("Host cannot be blank");
        }
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException("Port must be between 1 and 65535: " + port);
        }
        if (!schema.matches("[a-zA-Z_][a-zA-Z0-9_]*")) {
            throw new IllegalArgumentException("Invalid schema name: " + schema);
        }

        // All validated -- safe to proceed with connection setup
        super(host, port, user);

        // Now we can initialize our own fields
        this.schema = schema;
    }
}

Example: Validate with Complex Business Rules

public class Order extends BaseEntity {

    public Order(List<OrderItem> items, Customer customer, LocalDate deliveryDate) {
        // Business rule validation in the prologue
        if (items == null || items.isEmpty()) {
            throw new IllegalArgumentException("Order must have at least one item");
        }
        if (customer.isSuspended()) {
            throw new IllegalStateException("Cannot create order for suspended customer: "
                + customer.getId());
        }
        if (deliveryDate.isBefore(LocalDate.now().plusDays(1))) {
            throw new IllegalArgumentException("Delivery date must be at least tomorrow");
        }

        double total = items.stream()
            .mapToDouble(item -> item.price() * item.quantity())
            .sum();

        if (total > customer.getCreditLimit()) {
            throw new IllegalStateException(
                "Order total $%.2f exceeds credit limit $%.2f".formatted(
                    total, customer.getCreditLimit()));
        }

        // All business rules passed -- initialize the entity
        super(UUID.randomUUID(), LocalDateTime.now());
    }
}

4. Computation Before super()

Sometimes the superclass constructor expects arguments that require transformation or computation from the subclass's raw inputs. Before Java 25, this often forced you into contortions with static helper methods or inline ternary expressions.

Example: Transform Arguments for Superclass

public class HttpEndpoint extends Endpoint {

    public HttpEndpoint(String rawUrl) {
        // Parse and normalize the URL before passing to superclass
        String url = rawUrl.trim().toLowerCase();
        if (!url.startsWith("http://") && !url.startsWith("https://")) {
            url = "https://" + url;
        }

        URI uri = URI.create(url);
        String host = uri.getHost();
        int port = uri.getPort() == -1 ? 443 : uri.getPort();
        String path = uri.getPath().isEmpty() ? "/" : uri.getPath();

        super(host, port, path);
    }
}

Example: Compute Derived Values

public class Circle extends Shape {

    public Circle(double radius) {
        if (radius <= 0) {
            throw new IllegalArgumentException("Radius must be positive: " + radius);
        }

        // Compute area and circumference to pass to superclass
        double area = Math.PI * radius * radius;
        double circumference = 2 * Math.PI * radius;

        super("Circle", area, circumference);
    }
}

public class Rectangle extends Shape {

    public Rectangle(double width, double height) {
        if (width <= 0 || height <= 0) {
            throw new IllegalArgumentException(
                "Dimensions must be positive: %s x %s".formatted(width, height));
        }

        double area = width * height;
        double perimeter = 2 * (width + height);

        super("Rectangle", area, perimeter);
    }
}

Example: Read Configuration Before Initializing

public class ConfigurableService extends ManagedService {

    public ConfigurableService(Path configPath) {
        // Read and parse configuration before superclass init
        Properties props;
        try {
            props = new Properties();
            props.load(Files.newInputStream(configPath));
        } catch (IOException e) {
            throw new UncheckedIOException("Cannot read config: " + configPath, e);
        }

        String serviceName = props.getProperty("service.name", "default");
        int threadPoolSize = Integer.parseInt(
            props.getProperty("thread.pool.size", "10"));
        Duration timeout = Duration.parse(
            props.getProperty("timeout", "PT30S"));

        super(serviceName, threadPoolSize, timeout);
    }
}

5. Restrictions

The prologue is not a free-for-all. Java imposes strict rules to prevent you from accessing an uninitialized object, which would lead to subtle bugs and security vulnerabilities. Understanding these restrictions is critical to using the feature correctly.

What You CANNOT Do in the Prologue

class Parent {
    int parentField;
    void parentMethod() { System.out.println("parent"); }
}

class Child extends Parent {
    int childField = 10;
    String name;  // No initializer -- can be assigned in prologue

    class Inner {}  // Member inner class -- instances capture 'this'

    Child(int value) {
        // CANNOT reference 'this' explicitly
        System.out.println(this);           // COMPILE ERROR
        this.hashCode();                    // COMPILE ERROR

        // CANNOT read instance fields (even uninitialized ones)
        int x = childField;                // COMPILE ERROR
        int y = this.childField;           // COMPILE ERROR

        // CANNOT call instance methods
        toString();                         // COMPILE ERROR
        this.someMethod();                  // COMPILE ERROR

        // CANNOT access superclass members
        int z = super.parentField;          // COMPILE ERROR
        super.parentMethod();               // COMPILE ERROR

        // CANNOT instantiate an inner class whose enclosing
        // instance is the object under construction
        new Inner();                        // COMPILE ERROR

        super(value);
    }

    void someMethod() {}
}

What You CAN Do in the Prologue

class Child extends Parent {
    final int x;       // No initializer
    String label;      // No initializer

    Child(int value, String rawLabel) {
        // CAN declare and use local variables
        int computed = value * 2 + 1;
        String normalized = rawLabel.trim().toLowerCase();

        // CAN call static methods
        Objects.requireNonNull(rawLabel);
        int validated = Math.max(0, value);

        // CAN throw exceptions
        if (value < 0) {
            throw new IllegalArgumentException("negative: " + value);
        }

        // CAN use control flow (if/else, switch, try/catch, loops)
        String prefix;
        switch (value) {
            case 0 -> prefix = "ZERO";
            case 1 -> prefix = "ONE";
            default -> prefix = "OTHER";
        }

        // CAN assign to uninitialized fields (no initializer in declaration)
        this.x = computed;
        this.label = prefix + "_" + normalized;

        // CAN access enclosing instance (if this is a nested class)
        // CAN use constructor parameters
        // CAN create objects that do not reference 'this'

        super(validated);
    }
}

The Field Assignment Rule

One of the most interesting aspects is that you can assign to fields in the prologue, but only to fields that have no initializer in their declaration. You cannot read those fields -- only write to them. This enables a critical pattern: setting a field's value before super() runs, so that if the superclass constructor calls an overridable method, the field is already initialized.

class Super {
    Super() {
        // Superclass constructor calls overridable method
        overriddenMethod();
    }
    void overriddenMethod() {}
}

// BEFORE Java 25: Field is 0 when overriddenMethod() is called
class OldSub extends Super {
    final int x;
    OldSub(int x) {
        super();          // Calls overriddenMethod() -- this.x is still 0!
        this.x = x;       // Too late
    }
    @Override
    void overriddenMethod() {
        System.out.println("x = " + x);  // Prints "x = 0" -- uninitialized!
    }
}

// AFTER Java 25: Field is properly set before super() runs
class NewSub extends Super {
    final int x;
    NewSub(int x) {
        this.x = x;       // Set field BEFORE super()
        super();           // Calls overriddenMethod() -- this.x is already set!
    }
    @Override
    void overriddenMethod() {
        System.out.println("x = " + x);  // Prints "x = 42" -- correct!
    }
}

public static void main(String[] args) {
    new OldSub(42);  // Prints: x = 0
    new NewSub(42);  // Prints: x = 42
}

This is not just a convenience -- it fixes a correctness problem that has plagued Java since its inception. Any time a superclass constructor calls an overridable method, subclass fields are in an uninitialized state. With flexible constructor bodies, you can ensure fields are set before the superclass sees them.

Restrictions Summary Table

Action Allowed in Prologue? Notes
Declare local variables Yes Normal local variable rules apply
Use constructor parameters Yes Full access to all parameters
Call static methods Yes No instance needed
Throw exceptions Yes Fail-fast validation
Control flow (if, switch, loops) Yes Full control flow
Assign to this.field (no initializer) Yes Write-only; cannot read back
Read this.field No Object not initialized yet
Call instance methods No Object not initialized yet
Reference this No Except for field assignment
Access super members No Superclass not initialized yet
Create inner class instances No Inner classes capture this
Assign to field with initializer No Only uninitialized fields
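
The last two rows are easiest to see in a minimal sketch (hypothetical Base and Sub classes; requires Java 25, where flexible constructor bodies are final):

```java
class Base {
    Base(int v) {}
}

class Sub extends Base {
    int a = 1;  // Has an initializer -- may NOT be assigned in the prologue
    int b;      // No initializer -- may be assigned (write-only) before super()

    Sub(int v) {
        // this.a = v;        // COMPILE ERROR: 'a' has an initializer
        this.b = v;           // OK: prologue assignment to an uninitialized field
        // int read = this.b; // COMPILE ERROR: reads stay forbidden until super() runs
        super(v);
    }
}
```

After super() returns, the field initializers run, so a becomes 1 while b keeps the value assigned in the prologue.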

6. Before vs After Comparison

Let us look at five real-world patterns and see how they improve with flexible constructor bodies. In each case, the "before" code uses a workaround, and the "after" code uses the new prologue.

Example 1: Input Validation

// BEFORE: Validation AFTER super() -- too late, superclass already ran
class OldEmployee extends Person {
    OldEmployee(String name, int age) {
        super(name, age);  // Person does work with potentially bad values
        if (age < 0 || age > 150) {
            throw new IllegalArgumentException("Invalid age: " + age);
        }
    }
}

// AFTER: Validation BEFORE super() -- fail fast
class NewEmployee extends Person {
    NewEmployee(String name, int age) {
        if (age < 0 || age > 150) {
            throw new IllegalArgumentException("Invalid age: " + age);
        }
        Objects.requireNonNull(name, "Name required");
        super(name, age);
    }
}

Example 2: Argument Transformation

// BEFORE: Static helper method workaround
class OldHttpUrl extends Url {
    OldHttpUrl(String rawUrl) {
        super(normalizeUrl(rawUrl));  // Must use static method
    }

    // Forced to write a static helper just to prepare the argument
    private static String normalizeUrl(String raw) {
        String url = raw.trim().toLowerCase();
        if (!url.startsWith("https://")) {
            url = "https://" + url;
        }
        return url;
    }
}

// AFTER: Inline computation in the prologue
class NewHttpUrl extends Url {
    NewHttpUrl(String rawUrl) {
        String url = rawUrl.trim().toLowerCase();
        if (!url.startsWith("https://")) {
            url = "https://" + url;
        }
        super(url);
    }
}

Example 3: Conditional Super Arguments

// BEFORE: Ternary operator abuse for conditional arguments
class OldRetryPolicy extends Policy {
    OldRetryPolicy(int maxRetries, Duration timeout) {
        super(
            maxRetries <= 0 ? 3 : maxRetries,           // Default if invalid
            timeout == null ? Duration.ofSeconds(30) : timeout,  // Default if null
            maxRetries > 10 ? "aggressive" : "standard"  // Computed strategy
        );
    }
}

// AFTER: Clear, readable prologue
class NewRetryPolicy extends Policy {
    NewRetryPolicy(int maxRetries, Duration timeout) {
        // Normalize with clear variable names
        int retries = maxRetries <= 0 ? 3 : maxRetries;
        Duration actualTimeout = timeout != null ? timeout : Duration.ofSeconds(30);
        String strategy = retries > 10 ? "aggressive" : "standard";

        super(retries, actualTimeout, strategy);
    }
}

Example 4: Multi-Step Argument Preparation

// BEFORE: Multiple static helpers or deeply nested ternaries
class OldSecureConnection extends Connection {
    OldSecureConnection(String host, Map<String, String> config) {
        super(
            extractHost(host),
            extractPort(host, config),
            buildSslContext(config)
        );
    }

    private static String extractHost(String host) {
        return host.contains(":") ? host.split(":")[0] : host;
    }

    private static int extractPort(String host, Map<String, String> config) {
        if (host.contains(":")) {
            return Integer.parseInt(host.split(":")[1]);
        }
        return Integer.parseInt(config.getOrDefault("default.port", "443"));
    }

    private static SSLContext buildSslContext(Map<String, String> config) {
        try {
            SSLContext ctx = SSLContext.getInstance(
                config.getOrDefault("ssl.protocol", "TLSv1.3"));
            ctx.init(null, null, null);
            return ctx;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

// AFTER: Everything inline in the prologue -- clear flow, no scattered helpers
class NewSecureConnection extends Connection {
    NewSecureConnection(String host, Map<String, String> config) {
        // Parse host and port
        String actualHost;
        int port;
        if (host.contains(":")) {
            String[] parts = host.split(":");
            actualHost = parts[0];
            port = Integer.parseInt(parts[1]);
        } else {
            actualHost = host;
            port = Integer.parseInt(config.getOrDefault("default.port", "443"));
        }

        // Build SSL context
        SSLContext sslContext;
        try {
            String protocol = config.getOrDefault("ssl.protocol", "TLSv1.3");
            sslContext = SSLContext.getInstance(protocol);
            sslContext.init(null, null, null);
        } catch (Exception e) {
            throw new RuntimeException("Failed to create SSL context", e);
        }

        super(actualHost, port, sslContext);
    }
}

Example 5: Defensive Copy Before Super

// BEFORE: Cannot make defensive copy before super()
class OldImmutableConfig extends Config {
    private final Map<String, String> properties;

    OldImmutableConfig(Map<String, String> properties) {
        super(properties);  // Passes mutable reference to super!
        // Defensive copy is too late -- super already has the mutable reference
        this.properties = Map.copyOf(properties);
    }
}

// AFTER: Defensive copy in prologue -- super gets the immutable version
class NewImmutableConfig extends Config {
    private final Map<String, String> properties;

    NewImmutableConfig(Map<String, String> properties) {
        // Make defensive copy BEFORE super sees it
        var safeCopy = Map.copyOf(properties);
        super(safeCopy);  // Super gets the immutable copy
        this.properties = safeCopy;
    }
}

7. Old Workarounds Eliminated

Over the years, Java developers devised several workarounds for the "super must be first" restriction. Every one of them can now be replaced with a simple prologue. Let us inventory these anti-patterns and retire them.

Workaround 1: Static Factory Methods

This was the most common approach -- make the constructor private and provide a static method that does the validation/computation:

// OLD: Static factory workaround
class OldPercentage extends Number {
    private final double value;

    // Private constructor -- no validation here
    private OldPercentage(double validated) {
        this.value = validated;
    }

    // Public factory does the validation
    public static OldPercentage of(double value) {
        if (value < 0 || value > 100) {
            throw new IllegalArgumentException("Not a percentage: " + value);
        }
        return new OldPercentage(value);
    }

    // ... Number abstract methods ...
}

// NEW: Direct constructor with prologue
class NewPercentage extends Number {
    private final double value;

    public NewPercentage(double value) {
        if (value < 0 || value > 100) {
            throw new IllegalArgumentException("Not a percentage: " + value);
        }
        super();
        this.value = value;
    }

    // ... Number abstract methods ...
}

Workaround 2: Static Helper Methods

When the superclass constructor needed computed arguments, developers wrote private static methods just to compute them:

// OLD: Static helper method to compute super() arguments
class OldTimestamp extends EpochTime {  // hypothetical EpochTime(long epochSecond, int nano) base -- java.time.Instant is final and cannot be extended
    OldTimestamp(String isoString) {
        super(parseToEpochSecond(isoString), parseToNano(isoString));
    }

    private static long parseToEpochSecond(String s) {
        return Instant.parse(s).getEpochSecond();
    }

    private static int parseToNano(String s) {
        return Instant.parse(s).getNano();  // Parses TWICE!
    }
}

// NEW: Parse once in the prologue
class NewTimestamp extends EpochTime {
    NewTimestamp(String isoString) {
        Instant parsed = Instant.parse(isoString);
        super(parsed.getEpochSecond(), parsed.getNano());  // Parse once, use twice
    }
}

Notice how the old version had to parse the string twice because each static method was independent. The prologue lets you parse once and use the result for multiple super() arguments.

Workaround 3: Ternary Operator Abuse

When the computation was "simple enough," developers crammed it into ternary expressions inside the super() call:

// OLD: Unreadable nested ternaries
class OldCacheConfig extends Config {
    OldCacheConfig(String name, int size, boolean isLocal) {
        super(
            name == null ? "default-cache" : name.trim(),
            size <= 0 ? (isLocal ? 1000 : 10000) : Math.min(size, isLocal ? 5000 : 50000),
            isLocal ? Duration.ofMinutes(5) : Duration.ofHours(1),
            name != null && name.startsWith("temp-") ? EvictionPolicy.LRU : EvictionPolicy.LFU
        );  // Good luck debugging this
    }
}

// NEW: Clear, debuggable prologue
class NewCacheConfig extends Config {
    NewCacheConfig(String name, int size, boolean isLocal) {
        String cacheName = (name == null) ? "default-cache" : name.trim();

        int maxAllowed = isLocal ? 5000 : 50000;
        int defaultSize = isLocal ? 1000 : 10000;
        int cacheSize = (size <= 0) ? defaultSize : Math.min(size, maxAllowed);

        Duration ttl = isLocal ? Duration.ofMinutes(5) : Duration.ofHours(1);

        EvictionPolicy policy = (name != null && name.startsWith("temp-"))
            ? EvictionPolicy.LRU
            : EvictionPolicy.LFU;

        super(cacheName, cacheSize, ttl, policy);
    }
}

8. Use Cases

Beyond validation and argument transformation, flexible constructor bodies open up several important patterns that were previously awkward or impossible.

8.1 Builder-to-Constructor Pattern

When a subclass accepts a builder or configuration object and needs to extract specific values for the superclass:

public class RestClient extends HttpClient {

    public RestClient(RestClientConfig config) {
        // Extract and validate in the prologue
        Objects.requireNonNull(config, "Config required");
        String baseUrl = config.getBaseUrl();
        if (baseUrl == null || baseUrl.isBlank()) {
            throw new IllegalArgumentException("Base URL required");
        }

        Duration connectTimeout = config.getConnectTimeout() != null
            ? config.getConnectTimeout()
            : Duration.ofSeconds(10);

        Duration readTimeout = config.getReadTimeout() != null
            ? config.getReadTimeout()
            : Duration.ofSeconds(30);

        int maxConnections = Math.max(1, config.getMaxConnections());

        // Pass extracted values to superclass
        super(baseUrl, connectTimeout, readTimeout, maxConnections);
    }
}

8.2 Defensive Copies

Making defensive copies before the superclass sees the mutable data is a fundamental security and correctness practice:

public class ImmutableMatrix extends Matrix {

    public ImmutableMatrix(double[][] data) {
        // Deep defensive copy in the prologue
        Objects.requireNonNull(data, "Data required");
        if (data.length == 0) {
            throw new IllegalArgumentException("Matrix must have at least one row");
        }

        int cols = data[0].length;
        double[][] copy = new double[data.length][cols];
        for (int i = 0; i < data.length; i++) {
            if (data[i].length != cols) {
                throw new IllegalArgumentException("Jagged arrays not allowed");
            }
            System.arraycopy(data[i], 0, copy[i], 0, cols);
        }

        // Super receives the defensive copy -- original cannot mutate our state
        super(copy, data.length, cols);
    }
}

8.3 Argument Canonicalization

Converting arguments to a canonical form before construction:

public class EmailAddress extends Address {

    public EmailAddress(String email) {
        Objects.requireNonNull(email, "Email required");

        // Canonicalize: trim, lowercase, validate format
        String canonical = email.trim().toLowerCase(Locale.ROOT);

        if (!canonical.matches("^[a-z0-9._%+-]+@[a-z0-9.-]+\\.[a-z]{2,}$")) {
            throw new IllegalArgumentException("Invalid email format: " + email);
        }

        // Split into parts for the superclass
        int atIndex = canonical.indexOf('@');
        String localPart = canonical.substring(0, atIndex);
        String domain = canonical.substring(atIndex + 1);

        super(localPart, domain);
    }
}

8.4 Logging and Auditing

Recording construction events before the object is fully initialized:

public class AuditedTransaction extends Transaction {
    private static final Logger log = LoggerFactory.getLogger(AuditedTransaction.class);

    public AuditedTransaction(BigDecimal amount, String fromAccount, String toAccount) {
        // Log and audit BEFORE the transaction is constructed
        log.info("Creating transaction: {} from {} to {}",
            amount, fromAccount, toAccount);

        if (amount.compareTo(BigDecimal.ZERO) <= 0) {
            log.warn("Rejected: non-positive amount {}", amount);
            throw new IllegalArgumentException("Amount must be positive");
        }

        if (amount.compareTo(new BigDecimal("1000000")) > 0) {
            log.warn("Large transaction flagged for review: {}", amount);
            AuditService.flagForReview(fromAccount, toAccount, amount);
        }

        // Generate a unique transaction ID
        String txId = UUID.randomUUID().toString();
        log.debug("Assigned transaction ID: {}", txId);

        super(txId, amount, fromAccount, toAccount);
    }
}

9. With Records

Records, introduced in Java 16, have their own constructor conventions, and flexible constructor bodies work with them -- but with specific rules you need to understand.

Record Constructor Types

Constructor Type Description Flexible Bodies?
Canonical constructor Matches all record components No super() or this() to precede -- but can have early field assignments
Compact canonical constructor No parameter list -- parameters are implicit Same as above -- validates/transforms before implicit assignment
Non-canonical constructor Different signature, delegates via this() Yes -- statements before this()

The big win for records is in non-canonical constructors. These must delegate to the canonical constructor via this(), and you can now put validation and computation before that delegation.

Example: Non-Canonical Record Constructor with Prologue

record Point(double x, double y) {

    // Compact canonical constructor -- validates components
    Point {
        if (Double.isNaN(x) || Double.isNaN(y)) {
            throw new IllegalArgumentException("Coordinates cannot be NaN");
        }
    }

    // Non-canonical: construct from polar coordinates
    // (the unused 'polar' flag disambiguates this signature from the canonical (x, y) constructor)
    Point(double radius, double angleRadians, boolean polar) {
        // Prologue: validate and convert polar to cartesian
        if (radius < 0) {
            throw new IllegalArgumentException("Radius cannot be negative: " + radius);
        }
        double cartX = radius * Math.cos(angleRadians);
        double cartY = radius * Math.sin(angleRadians);

        this(cartX, cartY);  // Delegate to canonical constructor
    }

    // Non-canonical: construct from string "x,y"
    Point(String coordinates) {
        // Prologue: parse and validate
        Objects.requireNonNull(coordinates, "Coordinates string required");
        String[] parts = coordinates.split(",");
        if (parts.length != 2) {
            throw new IllegalArgumentException(
                "Expected format 'x,y' but got: " + coordinates);
        }

        double parsedX;
        double parsedY;
        try {
            parsedX = Double.parseDouble(parts[0].trim());
            parsedY = Double.parseDouble(parts[1].trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException(
                "Invalid coordinate numbers: " + coordinates, e);
        }

        this(parsedX, parsedY);
    }
}

Example: Record with Complex Non-Canonical Constructor

record DateRange(LocalDate start, LocalDate end) {

    // Canonical validates ordering
    DateRange {
        if (start.isAfter(end)) {
            throw new IllegalArgumentException(
                "Start date %s is after end date %s".formatted(start, end));
        }
    }

    // Construct from a pair of ISO strings
    DateRange(String startStr, String endStr) {
        LocalDate parsedStart = LocalDate.parse(startStr);
        LocalDate parsedEnd = LocalDate.parse(endStr);

        this(parsedStart, parsedEnd);  // Delegates to canonical
    }

    // Construct a range of N days starting from a date
    DateRange(LocalDate start, int days) {
        if (days <= 0) {
            throw new IllegalArgumentException("Days must be positive: " + days);
        }
        LocalDate computedEnd = start.plusDays(days - 1);

        this(start, computedEnd);
    }

    // Construct for "this week" (Monday to Sunday)
    DateRange(int isoWeekNumber, int year) {
        if (isoWeekNumber < 1 || isoWeekNumber > 53) {
            throw new IllegalArgumentException("Invalid week: " + isoWeekNumber);
        }

        // January 4 is always in ISO week 1, so anchor there before setting the week
        LocalDate monday = LocalDate.of(year, 1, 4)
            .with(java.time.temporal.WeekFields.ISO.weekOfWeekBasedYear(), isoWeekNumber)
            .with(java.time.DayOfWeek.MONDAY);

        LocalDate sunday = monday.plusDays(6);

        this(monday, sunday);
    }
}

Example: Record with Defensive Copy in Non-Canonical Constructor

record ImmutablePair<A, B>(A first, B second) {

    ImmutablePair {
        Objects.requireNonNull(first, "First element cannot be null");
        Objects.requireNonNull(second, "Second element cannot be null");
    }

    // Construct from a Map.Entry with defensive extraction
    ImmutablePair(Map.Entry<A, B> entry) {
        Objects.requireNonNull(entry, "Entry cannot be null");

        // Extract values in prologue -- entry might be modified concurrently
        A extractedFirst = entry.getKey();
        B extractedSecond = entry.getValue();

        this(extractedFirst, extractedSecond);
    }
}

10. Best Practices

Flexible constructor bodies are a welcome improvement, but like any feature, they can be misused. Here are the guidelines I follow for writing clean, maintainable constructor prologues.

10.1 Keep the Prologue Simple

The prologue should be short, focused, and obvious. If your prologue is longer than 10-15 lines, consider whether some of that logic belongs in a separate method or class.

// GOOD: Short, focused prologue
public class ApiClient extends HttpClient {
    public ApiClient(String baseUrl) {
        Objects.requireNonNull(baseUrl, "Base URL required");
        String normalized = baseUrl.endsWith("/")
            ? baseUrl.substring(0, baseUrl.length() - 1)
            : baseUrl;
        super(URI.create(normalized));
    }
}

// BAD: Prologue doing too much work
public class OverEngineered extends Service {
    public OverEngineered(Path configFile) {
        // 50 lines of config parsing, network calls, database lookups...
        // This belongs in a factory method or builder, not a prologue
        Properties props = new Properties();
        // ... 40 more lines ...
        super(/* many args */);
    }
}

10.2 Validation Patterns

Establish consistent validation patterns across your codebase:

// Pattern 1: Fail-fast with descriptive messages
public class Account extends Entity {
    public Account(String id, BigDecimal balance, String currency) {
        Objects.requireNonNull(id, "Account ID cannot be null");
        Objects.requireNonNull(balance, "Balance cannot be null");
        Objects.requireNonNull(currency, "Currency cannot be null");

        if (id.length() != 10) {
            throw new IllegalArgumentException(
                "Account ID must be 10 characters, got: " + id.length());
        }
        if (balance.signum() < 0) {
            throw new IllegalArgumentException(
                "Initial balance cannot be negative: " + balance);
        }
        if (!Set.of("USD", "EUR", "GBP", "JPY").contains(currency)) {
            throw new IllegalArgumentException(
                "Unsupported currency: " + currency);
        }

        super(id, balance, currency);
    }
}

// Pattern 2: Use Preconditions utility (like Guava)
public class Shipment extends Entity {
    public Shipment(String trackingId, double weight, String destination) {
        Preconditions.checkNotNull(trackingId, "Tracking ID required");
        Preconditions.checkArgument(weight > 0, "Weight must be positive: %s", weight);
        Preconditions.checkArgument(
            destination != null && !destination.isBlank(),
            "Destination required");

        super(trackingId);
    }
}

10.3 When to Use Flexible Constructor Bodies

Use the prologue when:

  • Validation before delegation: You need to reject bad input before the superclass does work
  • Argument transformation: You need to compute values to pass to super()
  • Field initialization for overridable methods: The superclass constructor calls overridable methods and your field must be set first
  • Defensive copies: You need to copy mutable arguments before super() sees them

Do NOT use the prologue for:

  • Complex business logic: If the prologue needs 30+ lines, use a builder or factory method instead
  • I/O operations: Reading files, making network calls, or database queries in a constructor prologue is a code smell. Use a factory method.
  • Side effects: Avoid modifying global state, sending emails, or firing events from the prologue. Constructors should construct, not orchestrate.
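
For the I/O case, a static factory keeps the constructor pure. A minimal sketch, assuming a hypothetical Service base class and service.name/pool.size property keys:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

class Service {
    final String name;
    final int poolSize;
    Service(String name, int poolSize) {
        this.name = name;
        this.poolSize = poolSize;
    }
}

class FileBackedService extends Service {
    private FileBackedService(String name, int poolSize) {
        super(name, poolSize);  // No I/O here -- the constructor only constructs
    }

    // The factory performs the file read, then delegates to the plain constructor
    static FileBackedService fromConfig(Path configPath) throws IOException {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(configPath)) {
            props.load(in);
        }
        return new FileBackedService(
            props.getProperty("service.name", "default"),
            Integer.parseInt(props.getProperty("pool.size", "10")));
    }
}
```

This keeps construction deterministic and testable: the factory owns the failure modes (missing file, bad numbers), while the constructor never touches the file system.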

10.4 Migration Strategy

If you have existing code with the old workaround patterns, here is how to migrate:

  1. Identify candidates: Search for private static methods called only from constructors -- these are likely argument-preparation helpers
  2. Inline the logic: Move the static method body into the constructor prologue
  3. Delete the helper: Remove the now-unused static method
  4. Test: Ensure behavior is identical. The change should be purely structural.

// Step 1: Identify the pattern
class Before extends Parent {
    Before(String input) {
        super(validate(input), transform(input));  // Static helpers
    }
    private static String validate(String s) {
        if (s == null || s.isBlank()) throw new IllegalArgumentException("blank");
        return s;
    }
    private static int transform(String s) {
        return s.trim().length();
    }
}

// Step 2 & 3: Inline and delete
class After extends Parent {
    After(String input) {
        if (input == null || input.isBlank()) {
            throw new IllegalArgumentException("blank");
        }
        int length = input.trim().length();
        super(input, length);  // Same arguments as before: original string, trimmed length
    }
    // No more static helpers needed
}

Summary

Flexible Constructor Bodies remove one of Java's oldest and most frustrating restrictions. The ability to write code before super() or this() enables cleaner validation, simpler argument preparation, safer field initialization, and the elimination of static helper method workarounds. The restrictions in the prologue (no this access, no superclass member access) are sensible and prevent the bugs that the old rule was originally trying to avoid. With Java 25, constructor code can finally be written in the order you think about it: validate first, then initialize.

March 1, 2026

Java 25 Other Improvements

1. Introduction

Java 25 is the next Long-Term Support (LTS) release, expected in September 2025. As an LTS release, it is the version that enterprises will standardize on for the next three to five years, making it the natural upgrade target after Java 21. Between Java 22 and Java 25, four releases shipped, each advancing features through preview, incubation, and finalization stages.

While the headline features — module imports and simplified source files — get dedicated coverage in their own tutorial, Java 25 ships with a broad set of improvements that touch the language, the runtime, and the standard library. Some are brand new. Others have been in preview for multiple releases and are finally production-ready.

This post covers every significant Java 25 improvement beyond the module import and simplified source file features. Here is what we will go through:

Feature JEP Status in Java 25 Impact
Primitive Types in Patterns JEP 507 Preview High — pattern matching works with all types, including primitives
Stable Values JEP 502 Preview Medium — lazy initialization done right
Structured Concurrency JEP 505 Preview (fifth preview) High — structured thread management
Scoped Values JEP 506 Final High — ThreadLocal replacement
Class-File API JEP 484 Final Medium — standard bytecode manipulation
Ahead-of-Time Class Loading & Linking JEP 483 Final High — dramatically faster startup
Compact Object Headers JEP 519 Final Medium — reduced memory footprint
Vector API JEP 508 Incubator (tenth incubator) Medium — SIMD operations continue to mature

Let us go through each one in detail.

2. Primitive Types in Patterns (JEP 507)

Pattern matching has been evolving in Java since Java 16. We got instanceof pattern matching, then switch pattern matching, then record patterns. But all of these worked only with reference types — objects, not primitives. If you wanted to match on an int, double, or boolean, you were stuck with old-fashioned if-else chains or traditional switch statements.

JEP 507 closes this gap. In Java 25, pattern matching works with all types, including primitives, as a preview feature (compile and run with --enable-preview). This applies to both instanceof expressions and switch expressions.

2.1 Primitive Patterns in switch

The most useful application is in switch expressions. Before Java 25, you could not use guards or pattern matching syntax with primitive types in switch. Now you can:

// BEFORE: Traditional switch with if-else for range checks
public String getTemperatureCategory(int tempFahrenheit) {
    if (tempFahrenheit < 0) {
        return "Extreme cold";
    } else if (tempFahrenheit < 32) {
        return "Freezing";
    } else if (tempFahrenheit < 60) {
        return "Cold";
    } else if (tempFahrenheit < 80) {
        return "Comfortable";
    } else if (tempFahrenheit < 100) {
        return "Hot";
    } else {
        return "Extreme heat";
    }
}

// AFTER: Primitive patterns with guards in switch (Java 25)
public String getTemperatureCategory(int tempFahrenheit) {
    return switch (tempFahrenheit) {
        case int t when t < 0   -> "Extreme cold";
        case int t when t < 32  -> "Freezing";
        case int t when t < 60  -> "Cold";
        case int t when t < 80  -> "Comfortable";
        case int t when t < 100 -> "Hot";
        default                 -> "Extreme heat";
    };
}

The pattern case int t when t < 32 does two things: it binds the value to a new variable t, and it applies a guard condition. This is the same when guard syntax used with reference type patterns, now extended to primitives.

2.2 Primitive Patterns in instanceof

You can also use primitive patterns with instanceof. This is particularly useful for safe narrowing conversions:

// Safe narrowing conversion with instanceof
public void processNumber(long value) {
    if (value instanceof int i) {
        // Safe: value fits in an int
        System.out.println("Fits in int: " + i);
        processAsInt(i);
    } else {
        // Value is too large for int
        System.out.println("Needs long: " + value);
        processAsLong(value);
    }
}

// Example calls:
processNumber(42L);           // Fits in int: 42
processNumber(3_000_000_000L); // Needs long: 3000000000

Before Java 25, safe narrowing required manual range checking:

// BEFORE: Manual range checking for narrowing
public void processNumber(long value) {
    if (value >= Integer.MIN_VALUE && value <= Integer.MAX_VALUE) {
        int i = (int) value;
        processAsInt(i);
    } else {
        processAsLong(value);
    }
}

// AFTER: Primitive instanceof handles the range check for you
public void processNumber(long value) {
    if (value instanceof int i) {
        processAsInt(i);
    } else {
        processAsLong(value);
    }
}

2.3 Primitive Patterns with Records

Primitive patterns also work with record patterns, enabling deep destructuring that includes primitive components:

record Temperature(double value, String unit) {}

record WeatherReading(Temperature temp, int humidity, long timestamp) {}

public String describeWeather(WeatherReading reading) {
    return switch (reading) {
        case WeatherReading(Temperature(double v, String u), int h, long ts)
            when v > 100.0 && u.equals("F") ->
                "Dangerously hot! Temperature: " + v + "°F, Humidity: " + h + "%";

        case WeatherReading(Temperature(double v, String u), int h, long ts)
            when h > 90 ->
                "Very humid! Humidity at " + h + "%, Temp: " + v + "°" + u;

        case WeatherReading(Temperature t, int h, long ts) ->
                "Normal: " + t.value() + "°" + t.unit() + ", Humidity: " + h + "%";
    };
}

2.4 Exhaustiveness with Primitive Patterns

Java's compiler can now check exhaustiveness for primitive switch expressions. Because primitive types have known ranges, the compiler verifies that all possible values are covered:

// boolean is naturally exhaustive
public String boolSwitch(boolean flag) {
    return switch (flag) {
        case true  -> "Enabled";
        case false -> "Disabled";
        // No default needed -- all boolean values are covered
    };
}

// For int/long/double, you need a default or catch-all pattern
public String intCategory(int value) {
    return switch (value) {
        case 0         -> "Zero";
        case int i when i > 0 -> "Positive: " + i;
        case int i     -> "Negative: " + i;
        // Exhaustive: covers 0, positive, and everything else (negative)
    };
}

Primitive patterns in Java 25 complete the pattern matching story. Every type in Java -- objects, records, sealed types, and now primitives -- can participate in pattern matching. This makes switch expressions a truly universal dispatching mechanism.

3. Stable Values (JEP 502 - Preview)

Lazy initialization is one of the most common patterns in Java. You have a field that is expensive to compute, so you defer its creation until first access. The problem is that doing this correctly in a concurrent environment is surprisingly hard. The classic double-checked locking pattern is notoriously error-prone, and simpler approaches either sacrifice thread safety or performance.

Java 25 introduces the StableValue API (preview) to solve this problem once and for all. A StableValue is a container that holds a value computed lazily on first access, with guaranteed thread safety and optimal performance after initialization.

3.1 The Problem with Manual Lazy Initialization

Here is the classic approach and its pitfalls:

// Approach 1: Not thread-safe
public class ConnectionPool {
    private DataSource dataSource;

    public DataSource getDataSource() {
        if (dataSource == null) {
            // Race condition: two threads can both see null
            // and create two DataSource instances
            dataSource = createDataSource();
        }
        return dataSource;
    }
}

// Approach 2: Thread-safe but slow
public class ConnectionPool {
    private DataSource dataSource;

    public synchronized DataSource getDataSource() {
        if (dataSource == null) {
            dataSource = createDataSource();
        }
        return dataSource;
        // Problem: synchronized on every access, even after initialization
    }
}

// Approach 3: Double-checked locking (correct but complex)
public class ConnectionPool {
    private volatile DataSource dataSource;

    public DataSource getDataSource() {
        DataSource result = dataSource;
        if (result == null) {
            synchronized (this) {
                result = dataSource;
                if (result == null) {
                    dataSource = result = createDataSource();
                }
            }
        }
        return result;
        // Correct, but verbose and easy to get wrong
    }
}

3.2 StableValue: The Clean Solution

StableValue provides a one-liner replacement for all of the above approaches:

// Java 25 Preview: StableValue for lazy initialization
import java.lang.StableValue;

public class ConnectionPool {

    // Lazy, thread-safe, optimal performance after initialization
    // Lazy, thread-safe, optimal performance after initialization
    private final StableValue<DataSource> dataSource = StableValue.of();

    public DataSource getDataSource() {
        return dataSource.orElseSet(this::createDataSource);
    }

    private DataSource createDataSource() {
        System.out.println("Creating DataSource (expensive operation)...");
        // ... setup connection pool
        return new HikariDataSource(config);
    }
}

The StableValue.of() factory method creates an unset stable value. The first call to orElseSet(supplier) executes the supplier and stores the result atomically. Subsequent calls return the cached result with no synchronization overhead -- the JVM can optimize the access to be as fast as reading a final field.

3.3 StableValue for Collections

The API also provides stable lists and maps for cases where you need lazy initialization of individual elements:

// Stable list: each element is lazily initialized independently
private final List<Logger> loggers = StableValue.list(10, i ->
    Logger.getLogger("module-" + i)
);

// Accessing loggers.get(3) only initializes the logger at index 3
// Other loggers remain uninitialized until accessed

// Stable map: each value is lazily initialized by key
private final Map<String, Configuration> configs = StableValue.map(
    Set.of("database", "cache", "messaging"),
    key -> loadConfiguration(key)
);

// Accessing configs.get("database") only loads the database config
// Other configs remain uninitialized until accessed

3.4 Why StableValue Matters

Beyond convenience, StableValue gives the JVM optimization hints that manual lazy initialization cannot. Because the JVM knows that a StableValue will be set exactly once, it can treat the value as a constant after initialization. This means the JIT compiler can inline the value, eliminate null checks, and perform constant folding -- optimizations that are impossible with volatile fields or synchronized blocks.

Think of StableValue as a lazy final field: it behaves like final after initialization but defers the computation until needed.

Note: StableValue is a preview feature in Java 25. Enable it with --enable-preview. It is expected to be finalized in a future release.

4. Structured Concurrency

Structured concurrency arrived as an incubator API in Java 19 and has gone through multiple rounds of preview refinement since Java 21. In Java 25, it is expected to be finalized with StructuredTaskScope as the core API.

The fundamental idea is simple: when you fork concurrent tasks, they should be treated as a unit. If one fails, the others should be cancelled. When the scope exits, all tasks must be complete. No thread leaks, no orphaned tasks, no forgotten futures.

4.1 The Problem with Unstructured Concurrency

Here is what concurrent code looks like without structured concurrency:

// BEFORE: Unstructured concurrency -- error-prone
public UserProfile loadUserProfile(long userId) throws Exception {
    ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();

    Future<User> userFuture = executor.submit(() -> fetchUser(userId));
    Future<List<Order>> ordersFuture = executor.submit(() -> fetchOrders(userId));
    Future<Preferences> prefsFuture = executor.submit(() -> fetchPreferences(userId));

    // Problem 1: If fetchOrders() fails, fetchUser() and fetchPreferences()
    // keep running even though the result is useless

    // Problem 2: If this thread is interrupted, the futures are orphaned

    // Problem 3: Exception handling is scattered and complex
    try {
        User user = userFuture.get(5, TimeUnit.SECONDS);
        List<Order> orders = ordersFuture.get(5, TimeUnit.SECONDS);
        Preferences prefs = prefsFuture.get(5, TimeUnit.SECONDS);
        return new UserProfile(user, orders, prefs);
    } catch (ExecutionException e) {
        // Which task failed? Hard to tell without checking each one.
        throw new RuntimeException("Failed to load profile", e);
    } finally {
        executor.shutdown(); // Easy to forget
    }
}

4.2 Structured Concurrency with StructuredTaskScope

Here is the same code with structured concurrency:

// AFTER: Structured concurrency -- clean and safe
import java.util.concurrent.StructuredTaskScope;

public UserProfile loadUserProfile(long userId) throws Exception {
    try (var scope = StructuredTaskScope.open()) {

        // Fork concurrent tasks within the scope
        var userTask  = scope.fork(() -> fetchUser(userId));
        var ordersTask = scope.fork(() -> fetchOrders(userId));
        var prefsTask  = scope.fork(() -> fetchPreferences(userId));

        // Wait for all tasks to complete
        scope.join();

        // Get results -- all tasks are guaranteed complete
        return new UserProfile(
            userTask.get(),
            ordersTask.get(),
            prefsTask.get()
        );
    }
    // Scope is closed: all tasks are done, no thread leaks
}

Key improvements:

  • Automatic cleanup: The try-with-resources block ensures all tasks are complete or cancelled when the scope exits
  • Cancellation propagation: If the parent thread is interrupted, all child tasks are cancelled
  • No thread leaks: It is impossible for a forked task to outlive its scope
  • Clear ownership: Every thread has a clear parent-child relationship, visible in thread dumps

4.3 Joiner Policies

Structured concurrency offers different joining strategies through Joiner policies that control how the scope behaves when tasks complete or fail:

// Strategy 1: Wait for all tasks, throw on any failure (default)
try (var scope = StructuredTaskScope.open()) {
    var task1 = scope.fork(() -> fetchFromServiceA());
    var task2 = scope.fork(() -> fetchFromServiceB());
    scope.join();
    // Both tasks must succeed
    return combine(task1.get(), task2.get());
}

// Strategy 2: Return first successful result, cancel the rest
try (var scope = StructuredTaskScope.open(
        StructuredTaskScope.Joiner.anySuccessfulResultOrThrow())) {
    scope.fork(() -> fetchFromPrimary());
    scope.fork(() -> fetchFromFallback());
    scope.fork(() -> fetchFromCache());
    // Returns the first successful result; cancels slower tasks
    return scope.join();
}

// Strategy 3: Require every task to succeed, then collect the results
try (var scope = StructuredTaskScope.open(
        StructuredTaskScope.Joiner.allSuccessfulOrThrow())) {
    scope.fork(() -> validateAddress(address));
    scope.fork(() -> checkCreditScore(userId));
    scope.fork(() -> verifyIdentity(userId));
    // join() throws if any task failed; otherwise it yields the completed subtasks
    return scope.join().map(StructuredTaskScope.Subtask::get).toList();
}

Structured concurrency works naturally with virtual threads (finalized in Java 21). Each forked task runs on a virtual thread, meaning you can fork thousands of tasks without exhausting platform threads. The combination of virtual threads and structured concurrency makes Java's concurrency model one of the most powerful in any mainstream language.
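To make the scale argument concrete, here is a minimal Java 21-compatible sketch using a plain virtual-thread executor (no preview APIs; the class and method names are illustrative). A StructuredTaskScope adds cancellation and ownership guarantees on top of this same threading model:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative scale demo: thousands of blocking tasks on virtual threads,
// far more than a platform-thread pool could sustain.
public class VirtualThreadScale {

    static int runTasks(int count) {
        AtomicInteger completed = new AtomicInteger();
        // try-with-resources: close() waits for all submitted tasks to finish
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < count; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(10); // simulated blocking I/O
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        }
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println("Completed: " + runTasks(10_000));
    }
}
```

Each task spends most of its time blocked; virtual threads make that cheap because blocking a virtual thread does not pin an OS thread.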

5. Scoped Values

Scoped values are the modern replacement for ThreadLocal. If you have ever used ThreadLocal to pass context (like a request ID, user identity, or transaction context) through a call chain without explicit parameters, scoped values do the same thing but better, safer, and faster.

5.1 The Problems with ThreadLocal

// ThreadLocal problems:

// 1. Mutable -- can be changed at any time from anywhere
private static final ThreadLocal<String> REQUEST_ID = new ThreadLocal<>();

public void handleRequest(String requestId) {
    REQUEST_ID.set(requestId);
    processRequest();
    REQUEST_ID.remove(); // Easy to forget -> memory leak!
}

// 2. Unbounded lifetime -- lives as long as the thread lives
// With thread pools, values persist across unrelated requests

// 3. Expensive with virtual threads -- each virtual thread
// gets its own copy, and there can be millions of them

// 4. Inheritance is broken -- InheritableThreadLocal copies values
// to child threads, but there is no way to scope the lifetime

5.2 ScopedValue: The Clean Alternative

// Java 25: ScopedValue -- immutable, bounded lifetime, inherited by child threads
import java.lang.ScopedValue;

private static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

public void handleRequest(String requestId) {
    ScopedValue.where(REQUEST_ID, requestId).run(() -> {
        // REQUEST_ID is bound to requestId within this scope
        processRequest();
        // After this block, the binding is automatically removed
        // No cleanup needed, no memory leaks possible
    });
}

private void processRequest() {
    // Access the scoped value anywhere in the call chain
    String id = REQUEST_ID.get();
    System.out.println("Processing request: " + id);
    callDatabaseLayer();
}

private void callDatabaseLayer() {
    // Still accessible -- inherited through the call chain
    String id = REQUEST_ID.get();
    System.out.println("DB query for request: " + id);
}

5.3 ScopedValues with Structured Concurrency

The real power of scoped values shows when combined with structured concurrency. Scoped values are automatically inherited by child tasks in a StructuredTaskScope:

private static final ScopedValue<UserContext> USER_CTX = ScopedValue.newInstance();

public void handleApiRequest(UserContext ctx) {
    ScopedValue.where(USER_CTX, ctx).run(() -> {
        try (var scope = StructuredTaskScope.open()) {
            // Both tasks automatically inherit USER_CTX
            var audit = scope.fork(() -> {
                // USER_CTX.get() works here -- inherited from parent
                logAuditEvent("action started", USER_CTX.get().userId());
                return true;
            });

            var result = scope.fork(() -> {
                // USER_CTX.get() works here too
                return processForUser(USER_CTX.get());
            });

            scope.join();
        }
    });
}

Key advantages of ScopedValue over ThreadLocal:

Aspect ThreadLocal ScopedValue
Mutability Mutable -- can be set/changed anytime Immutable within a scope -- set once
Lifetime Unbounded -- lives with the thread Bounded -- lives within the where(...).run(...) scope
Cleanup Manual -- must call remove() Automatic -- cleaned up when scope exits
Memory leaks Common -- forgotten remove() calls Impossible -- bounded lifetime
Virtual thread cost Expensive -- each thread gets a copy Cheap -- optimized for millions of threads
Child thread inheritance InheritableThreadLocal (copies, expensive) Automatic with StructuredTaskScope (shares, cheap)

6. Class-File API (JEP 484)

The Class-File API provides a standard, JDK-included API for reading, writing, and transforming Java class files. This is a big deal for frameworks and tools that work with bytecode -- think Spring, Hibernate, Mockito, Byte Buddy, and build tools.

6.1 Why a New API?

Until now, the Java ecosystem relied on third-party libraries for bytecode manipulation:

  • ASM -- low-level, fast, but complex visitor-based API
  • Byte Buddy -- higher-level, easier to use, built on ASM
  • Javassist -- source-level API, simpler but slower

The problem is that these libraries must be updated every time a new class file version ships (which is every six months with Java's release cadence). If ASM does not support the latest class file format, frameworks that depend on it break. The JDK's own tools (like javac, jlink, and jar) had their own internal bytecode library that was not available to external users.

The Class-File API makes bytecode manipulation a first-class platform feature that is always in sync with the latest class file format.

6.2 Reading Class Files

import java.lang.classfile.*;

// Read and inspect a class file
public void inspectClass(Path classFile) throws IOException {
    ClassModel cm = ClassFile.of().parse(classFile);

    System.out.println("Class: " + cm.thisClass().asInternalName());
    System.out.println("Version: " + cm.majorVersion() + "." + cm.minorVersion());
    System.out.println("Flags: " + cm.flags().flagsMask());

    // List all methods
    System.out.println("\nMethods:");
    for (MethodModel method : cm.methods()) {
        System.out.printf("  %s %s%n",
            method.methodName().stringValue(),
            method.methodType().stringValue());
    }

    // List all fields
    System.out.println("\nFields:");
    for (FieldModel field : cm.fields()) {
        System.out.printf("  %s %s%n",
            field.fieldName().stringValue(),
            field.fieldType().stringValue());
    }
}

6.3 Transforming Class Files

The API uses a functional transformation model -- you pass a transformation function that can modify, add, or remove elements:

// Add logging to every method entry
public byte[] addMethodLogging(byte[] classBytes) {
    ClassFile cf = ClassFile.of();
    return cf.transformClass(cf.parse(classBytes), (builder, element) -> {
        if (element instanceof MethodModel method) {
            // Transform each method to add entry logging
            builder.transformMethod(method, (mb, me) -> {
                if (me instanceof CodeModel code) {
                    mb.withCode(cb -> {
                        // Add: System.out.println("Entering: " + methodName)
                        cb.getstatic(ClassDesc.of("java.lang.System"), "out",
                            ClassDesc.of("java.io.PrintStream"));
                        cb.ldc("Entering: " + method.methodName().stringValue());
                        cb.invokevirtual(ClassDesc.of("java.io.PrintStream"),
                            "println",
                            MethodTypeDesc.of(ConstantDescs.CD_void,
                                ClassDesc.of("java.lang.String")));

                        // Then include the original code
                        code.forEach(cb::with);
                    });
                } else {
                    mb.with(me);
                }
            });
        } else {
            builder.with(element);
        }
    });
}

The Class-File API is not something most application developers will use directly. But it is critical infrastructure for the frameworks and tools that application developers depend on. With the API in the JDK, frameworks can drop their ASM dependency and use a standard API that is always compatible with the latest Java version.

7. Ahead-of-Time Class Loading & Linking (JEP 483)

Java applications have always been criticized for slow startup times compared to native applications. Every time a Java application starts, the JVM must find, load, verify, and link hundreds or thousands of classes. For a Spring Boot application, this can take several seconds -- an eternity in a containerized, scale-to-zero world.

JEP 483 introduces ahead-of-time (AOT) class loading and linking, which performs these steps once during a training run and caches the results. On subsequent starts, the JVM loads the pre-processed classes from the cache, skipping the expensive discovery and verification steps.

7.1 How It Works

The process has three steps:

  1. Training run: Start your application with a special flag that records which classes are loaded and how they are linked
  2. Cache generation: The JVM generates a cache file containing the pre-loaded, pre-linked classes
  3. Production run: Start your application with the cache -- it loads classes from the cache instead of discovering them at runtime
// Step 1: Training run -- record class loading behavior
// $ java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -jar myapp.jar

// Step 2: Generate the AOT cache
// $ java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -jar myapp.jar

// Step 3: Production run -- use the AOT cache
// $ java -XX:AOTCache=app.aot -jar myapp.jar

// Result: Significantly faster startup time

7.2 Performance Impact

The improvement depends on the size and complexity of the application:

Application Type Typical Startup Without AOT With AOT Cache Improvement
Simple CLI tool ~100ms ~50ms ~2x faster
Spring Boot microservice ~3-5 seconds ~1-2 seconds ~2-3x faster
Large enterprise application ~10-20 seconds ~4-8 seconds ~2-3x faster

This is not a replacement for GraalVM native image -- native image eliminates the JVM entirely and starts in milliseconds. But AOT class loading provides a significant improvement without the native image trade-offs (reflection limitations, longer build times, reduced peak performance). You keep the full JVM with JIT compilation and all runtime capabilities while getting dramatically faster startup.

7.3 Integration with Build Tools

The AOT cache can be generated as part of your build pipeline. For containerized applications, generate the cache in your Docker build stage:

// Dockerfile with AOT cache generation
// FROM eclipse-temurin:25-jre AS builder
// COPY target/myapp.jar /app/myapp.jar
// WORKDIR /app
//
// # Training run (backgrounded, then stopped after a short warm-up)
// RUN java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
//     -jar myapp.jar --spring.profiles.active=aot-training & \
//     sleep 10 && kill $!
//
// # Generate cache
// RUN java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
//     -XX:AOTCache=app.aot -jar myapp.jar
//
// FROM eclipse-temurin:25-jre
// COPY --from=builder /app/myapp.jar /app/myapp.jar
// COPY --from=builder /app/app.aot /app/app.aot
// CMD ["java", "-XX:AOTCache=/app/app.aot", "-jar", "/app/myapp.jar"]

AOT class loading is fully transparent to your application code. You do not need to change any source code, annotations, or configurations. It is purely a deployment-time optimization.

8. Compact Object Headers (Experimental)

Every Java object has a header that contains metadata: the object's class pointer, hash code, lock state, and garbage collector information. In the current JVM, this header is 12-16 bytes on 64-bit systems. For small objects (like a Point record with two int fields), the header can be as large as the payload.

Compact object headers (JEP 450, experimental in Java 25) reduce the header size to 8 bytes by compressing the metadata representation. This saves approximately 10-20% of heap memory for typical applications -- a significant improvement that requires zero code changes.

Object Current Header Compact Header Payload Memory Saved
Integer (wrapper) 12 bytes 8 bytes 4 bytes 25%
Point(int x, int y) 12 bytes 8 bytes 8 bytes 20%
String (empty) 12 bytes 8 bytes 12 bytes 17%
HashMap.Node 12 bytes 8 bytes 32 bytes 9%

Enable compact object headers with -XX:+UseCompactObjectHeaders. This is experimental and may have edge cases, so test thoroughly before using in production.

9. Vector API Progress (Incubator)

The Vector API has been in incubator since Java 16, enabling SIMD (Single Instruction, Multiple Data) operations in Java. SIMD allows a single CPU instruction to process multiple data elements simultaneously -- for example, adding four pairs of floats in a single instruction instead of four separate instructions.

9.1 What the Vector API Does

The API provides types like FloatVector, IntVector, and DoubleVector that map directly to hardware SIMD registers (SSE, AVX, AVX-512 on x86; NEON on ARM):

// Traditional scalar loop -- processes one element at a time
public float[] scalarMultiply(float[] a, float[] b) {
    float[] result = new float[a.length];
    for (int i = 0; i < a.length; i++) {
        result[i] = a[i] * b[i]; // One multiplication per iteration
    }
    return result;
}

// Vector API -- processes multiple elements per iteration
import jdk.incubator.vector.*;

public float[] vectorMultiply(float[] a, float[] b) {
    var species = FloatVector.SPECIES_256; // 256-bit vectors (8 floats)
    float[] result = new float[a.length];

    int i = 0;
    for (; i < species.loopBound(a.length); i += species.length()) {
        var va = FloatVector.fromArray(species, a, i);
        var vb = FloatVector.fromArray(species, b, i);
        var vr = va.mul(vb); // 8 multiplications in one instruction
        vr.intoArray(result, i);
    }
    // Handle remaining elements
    for (; i < a.length; i++) {
        result[i] = a[i] * b[i];
    }
    return result;
}

9.2 Why It Is Still in Incubator

The Vector API's finalization is blocked by Project Valhalla -- specifically, the value types feature. The Vector API's types (like FloatVector) need to be value types to achieve optimal performance (stack allocation, no object headers, no GC pressure). Until value types are available in the language, finalizing the Vector API would lock in a suboptimal design.

In the meantime, the JIT compiler's auto-vectorization has improved significantly. For many common patterns, the JVM automatically uses SIMD instructions without the Vector API. The API remains important for cases where auto-vectorization cannot figure out the optimal strategy, particularly in scientific computing, machine learning, data processing, and cryptography.

10. Additional Improvements

Beyond the major features, Java 25 includes several smaller but notable improvements:

10.1 Flexible Constructor Bodies (JEP 492)

In previous Java versions, statements before super() or this() calls in constructors were forbidden. Java 25 relaxes this restriction, allowing you to validate and compute arguments before delegating to another constructor:

// BEFORE: Had to use static helper methods or factory methods
public class PositiveRange {
    private final int low;
    private final int high;

    public PositiveRange(int low, int high) {
        // Could NOT put validation before super() call
        super(); // Must be first statement
        if (low < 0 || high < 0) throw new IllegalArgumentException("Must be positive");
        if (low > high) throw new IllegalArgumentException("low must be <= high");
        this.low = low;
        this.high = high;
    }
}

// AFTER: Statements allowed before super()/this()
public class PositiveRange {
    private final int low;
    private final int high;

    public PositiveRange(int low, int high) {
        // Validation BEFORE calling super -- now legal in Java 25
        if (low < 0 || high < 0) throw new IllegalArgumentException("Must be positive");
        if (low > high) throw new IllegalArgumentException("low must be <= high");
        super();
        this.low = low;
        this.high = high;
    }
}

10.2 Key Derivation Function API (JEP 478)

A new standard API for Key Derivation Functions (KDFs) like HKDF and Argon2. This is important for applications that implement custom encryption schemes, key management, or password hashing:

import javax.crypto.KDF;
import javax.crypto.SecretKey;
import javax.crypto.spec.HKDFParameterSpec;

// Derive an encryption key using HKDF (extract-then-expand)
KDF hkdf = KDF.getInstance("HKDF-SHA256");
SecretKey derived = hkdf.deriveKey("AES",
    HKDFParameterSpec.ofExtract()
        .addIKM(inputKeyMaterial)
        .addSalt(salt)
        .thenExpand(info, 32));
// Use the derived key for AES encryption

10.3 ZGC Improvements

The Z Garbage Collector continues to improve in Java 25. Generational ZGC (introduced in Java 21) has been further optimized with better young generation sizing, improved concurrent relocation, and reduced pause times. For applications that previously tuned ZGC parameters manually, the defaults are now better and require less tuning.
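For reference, enabling ZGC is a single flag; the sketch below assumes a generic myapp.jar and illustrative heap size. Since JDK 23 the generational mode is the default (and the legacy non-generational mode was removed in JDK 24), so no extra mode flag is needed:

```shell
# Enable ZGC (generational by default on modern JDKs)
java -XX:+UseZGC -Xmx8g -jar myapp.jar

# Optional: log GC activity to verify pause times stay sub-millisecond
java -XX:+UseZGC -Xlog:gc -jar myapp.jar
```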

10.4 Deprecations and Removals

Java 25 continues the cleanup of old APIs:

  • Memory-access methods in sun.misc.Unsafe are further restricted -- use the Foreign Function & Memory API (finalized in Java 22) instead
  • Security Manager continues its deprecation path -- it has been deprecated for removal since Java 17
  • 32-bit x86 ports are deprecated for removal -- only 64-bit x86 and ARM are the future

11. Summary Table: All Java 25 Features

Feature JEP Status Category Description
Module Import Declarations 476 Final Language Import all exported types from a module with one statement
Implicitly Declared Classes & Instance Main 477 Final Language Write Java programs without class declarations
Primitive Types in Patterns 488 Final Language Pattern matching for instanceof and switch with primitives
Flexible Constructor Bodies 492 Final Language Statements before super()/this() in constructors
Structured Concurrency 499 Final (expected) Library Structured thread management with StructuredTaskScope
Scoped Values 487 Final (expected) Library Immutable, scoped ThreadLocal replacement
Class-File API 484 Final Library Standard API for bytecode reading/writing/transformation
Key Derivation Function API 478 Final Library Standard KDF API (HKDF, etc.)
Ahead-of-Time Class Loading 483 Final Runtime Pre-load and pre-link classes for faster startup
Stable Values 502 Preview Library Lazy, thread-safe, optimizable value holders
Compact Object Headers 450 Experimental Runtime Reduced object header size (12 bytes to 8 bytes)
Vector API 489 Incubator Library SIMD operations for data-parallel computation

Java 25 is a substantial release. The finalization of structured concurrency and scoped values, combined with language improvements like primitive patterns and flexible constructors, makes this the most feature-rich LTS release since Java 21. The runtime improvements (AOT class loading, compact headers, ZGC tuning) mean that upgrading delivers immediate performance benefits even before you adopt any new language features.

For production teams, the message is clear: start planning your Java 25 migration now. The combination of new features, performance improvements, and the LTS support window makes Java 25 the version you want to be running by the end of 2025.

March 1, 2026

Java 25 Stream Gatherers

1. Introduction

Since Java 8, the Stream API has been one of the most powerful tools in your toolkit. You can filter, map, flatMap, reduce, and collect your way through most data-processing tasks with clean, declarative code. But if you have spent enough time with streams, you have inevitably hit a wall: there is no way to define your own intermediate operations.

Think about it. Java gives you Collector as an extension point for terminal operations — you can write custom collectors that fold, group, partition, or summarize data in any way you want. But for intermediate operations? You are stuck with what the API provides. If the built-in map(), filter(), flatMap(), distinct(), sorted(), peek(), limit(), skip(), and takeWhile() do not cover your use case, you have to break out of the stream pipeline, materialize into a collection, manipulate it imperatively, and then stream it again. That defeats the entire point.

Consider an analogy: imagine you are building an assembly line in a factory. Java gave you the ability to customize the packaging station at the end (collectors), but it locked down every station in the middle of the line. Want a station that groups items into batches of five? Want one that computes a running average and passes it along? Want one that deduplicates consecutive items? You had to hack around the limitation or abandon the assembly line entirely.

Java 25 changes this with Stream Gatherers (JEP 485). Gatherers are the missing counterpart to collectors — they let you define custom intermediate operations that plug directly into a stream pipeline. A gatherer can transform elements one-to-one, one-to-many, many-to-one, or many-to-many. It can carry state across elements. It can short-circuit to stop processing early. It can even support parallel execution. And just like collectors, Java ships several built-in gatherers that handle common use cases out of the box.

Stream Gatherers were previewed in Java 22 (JEP 461) and Java 23 (JEP 473), and finalized without changes in Java 24 (JEP 485), making them a standard feature in Java 25 LTS. This post covers everything you need to know: the interface anatomy, all five built-in gatherers, how to write your own, and real-world patterns that will change how you think about stream pipelines.

2. The Problem Gatherers Solve

To appreciate why gatherers matter, let us look at things you cannot do cleanly with the existing Stream API — operations that require state, context, or structural transformation between elements.

Problem 1: Sliding Windows

Suppose you have a stream of stock prices and you want to compute a 3-day moving average. You need to look at three consecutive elements at a time, slide one position forward, and produce an average for each window. There is no built-in stream operation for this. Before gatherers, your options were:

// The ugly workaround: materialize, index, and re-stream
List<Double> prices = List.of(100.0, 102.5, 101.0, 105.0, 103.5, 107.0);

List<Double> movingAverages = IntStream.range(0, prices.size() - 2)
    .mapToObj(i -> (prices.get(i) + prices.get(i + 1) + prices.get(i + 2)) / 3.0)
    .toList();
// Requires random access -- cannot work with a true stream

This only works because you materialized the data into a List first. With a true stream (say, reading from a socket or a database cursor), you cannot index into it. You need an intermediate operation that remembers previous elements.

Problem 2: Stateful Deduplication

The built-in distinct() removes all duplicates across the entire stream, but what if you only want to remove consecutive duplicates? For example, turning [1, 1, 2, 2, 2, 3, 1, 1] into [1, 2, 3, 1]. There is no built-in operation for this. You need state — specifically, you need to remember the last element you emitted.
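Before gatherers, the cleanest fix was to leave the pipeline and loop imperatively. A minimal sketch of that workaround (the helper name is ours):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class ConsecutiveDedupLoop {
    // Pre-gatherer workaround: abandon the stream, loop imperatively
    static <T> List<T> distinctConsecutive(List<T> input) {
        List<T> out = new ArrayList<>();
        for (T element : input) {
            // Only emit when different from the previously emitted element
            if (out.isEmpty() || !Objects.equals(out.getLast(), element)) {
                out.add(element);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(distinctConsecutive(List.of(1, 1, 2, 2, 2, 3, 1, 1)));
        // [1, 2, 3, 1]
    }
}
```

It works, but the logic now lives outside the pipeline and cannot be composed with other stream operations.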

Problem 3: Batching / Chunking

You have a stream of records and you want to group them into batches of 100 for bulk database inserts. The stream might have 10,000 elements, and you need to emit 100 lists of 100. This is a many-to-many transformation that requires an accumulator, and the built-in API has nothing for it.

Problem 4: Running Totals / Prefix Sums

You want a running total: given [1, 2, 3, 4], produce [1, 3, 6, 10]. The reduce() operation produces a single value, not a stream of intermediate results. You would have to use an external mutable variable (which violates the stream contract) or fall back to imperative code.
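The tempting workaround is to smuggle an external mutable counter into map(). A sketch showing why it only appears to work:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RunningTotalHack {
    // Pre-gatherer hack: external mutable state referenced inside map().
    // Correct only because this stream happens to run sequentially --
    // a parallel stream would interleave updates and scramble the results.
    static List<Integer> prefixSums(List<Integer> input) {
        AtomicInteger total = new AtomicInteger();
        return input.stream()
            .map(total::addAndGet)
            .toList();
    }

    public static void main(String[] args) {
        System.out.println(prefixSums(List.of(1, 2, 3, 4))); // [1, 3, 6, 10]
    }
}
```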

Why Collectors Cannot Help Here

Collectors are powerful, but they are terminal operations. They consume the entire stream and produce a single result. They cannot sit in the middle of a pipeline and emit elements downstream. Gatherers fill exactly this gap — they are intermediate operations that can carry state, transform structure, and emit zero or more elements per input, all while remaining composable with the rest of the pipeline.

Limitation Before Gatherers With Gatherers
Sliding window Materialize to list, index manually Gatherers.windowSliding(n)
Fixed-size batches Collect all, then partition Gatherers.windowFixed(n)
Running total External mutable variable (unsafe) Gatherers.scan(init, fn)
Fold to single value (intermediate) Not possible in pipeline Gatherers.fold(init, fn)
Concurrent mapping with limit Custom thread pool + futures Gatherers.mapConcurrent(n, fn)
Custom stateful logic Break pipeline, write imperative code stream.gather(myGatherer)

3. The Gatherer Interface

A gatherer is defined by the java.util.stream.Gatherer<T, A, R> interface. If you have worked with Collector<T, A, R>, the shape will feel familiar, but there are important differences. Let us break down the type parameters and the four functions that make up a gatherer.

Type Parameters

Parameter Meaning Collector Equivalent
T Type of input elements consumed from upstream Same — input type
A Type of the mutable state object (private, per-gatherer) Same — accumulator type
R Type of output elements emitted downstream Different — in Collector, R is the final result type, not a stream element type

The critical insight: a Collector’s R is the single result you get back (like a List or a Map). A Gatherer’s R is the element type of the output stream. The gatherer sits in the middle of the pipeline and produces a new stream.
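To see that type change concretely, here is a pipeline whose element type shifts from Integer to List<Integer> at the gather() step (using the built-in windowFixed, covered below):

```java
import java.util.List;
import java.util.stream.Gatherers;

public class GathererOutputType {
    static List<List<Integer>> pairs(List<Integer> input) {
        // T = Integer going in, R = List<Integer> coming out:
        // the gatherer re-types the rest of the pipeline.
        return input.stream()
            .gather(Gatherers.windowFixed(2))  // Stream<Integer> -> Stream<List<Integer>>
            .toList();
    }

    public static void main(String[] args) {
        System.out.println(pairs(List.of(1, 2, 3, 4))); // [[1, 2], [3, 4]]
    }
}
```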

The Four Functions

Every gatherer is composed of four functions. Only one of them, the integrator, is strictly required; the other three are optional:

Function Type Required? Purpose
initializer() Supplier<A> Optional Creates the private mutable state object. Called once per stream evaluation. If omitted, the gatherer is stateless.
integrator() Integrator<A, T, R> Required The core logic. Called once per input element. Receives the state, the current element, and a Downstream handle to push output elements. Returns boolean: true to continue, false to short-circuit.
combiner() BinaryOperator<A> Optional Merges two state objects when running in parallel. Without this, the gatherer runs sequentially even on a parallel stream.
finisher() BiConsumer<A, Downstream<? super R>> Optional Called after all input elements have been processed. Can emit final elements downstream. Useful for flushing buffered state.

The Downstream Interface

The Downstream<R> object is how a gatherer emits elements to the next stage in the pipeline. It has two key methods:

public interface Downstream<R> {
    // Push an element downstream. Returns true if more elements are accepted,
    // false if the downstream is done (e.g., a short-circuiting terminal op).
    boolean push(R element);

    // Check if the downstream is rejecting further elements.
    boolean isRejecting();
}

This is one of the key differences from Collector. A collector’s accumulator is a BiConsumer — it just consumes, with no feedback. A gatherer’s integrator gets feedback from downstream via the return value of push(). This enables short-circuiting: if the downstream says “I am done,” the gatherer can stop processing immediately.
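A small demonstration of that feedback loop: a pass-through gatherer that simply forwards push()'s return value lets a downstream limit() cut off even an infinite source.

```java
import java.util.List;
import java.util.stream.Gatherer;
import java.util.stream.Stream;

public class DownstreamFeedback {
    // A pass-through gatherer that honors downstream feedback: when push()
    // returns false, the integrator returns false too, stopping the stream.
    static <T> Gatherer<T, ?, T> passThrough() {
        return Gatherer.of(Gatherer.Integrator.of(
            (state, element, downstream) -> downstream.push(element)));
    }

    public static void main(String[] args) {
        // Works on an infinite stream because limit(3) rejects further
        // elements and the gatherer propagates that signal upstream.
        List<Integer> first3 = Stream.iterate(1, n -> n + 1)
            .gather(DownstreamFeedback.<Integer>passThrough())
            .limit(3)
            .toList();
        System.out.println(first3); // [1, 2, 3]
    }
}
```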

Integrator Variants

The Integrator interface comes in two flavors:

// Standard integrator -- may short-circuit (return false)
Integrator.of((state, element, downstream) -> {
    // Process element, optionally push to downstream
    // Return false to stop processing early
    return true;
});

// Greedy integrator -- promises never to short-circuit on its own
// The stream runtime can optimize based on this guarantee
Integrator.ofGreedy((state, element, downstream) -> {
    // Still returns a boolean: propagate the downstream's signal
    return downstream.push(transform(element));
});

Use Integrator.ofGreedy() when your gatherer always processes all elements (like a mapping or filtering gatherer). Use Integrator.of() when your gatherer might need to stop early (like a “take first N matching” gatherer).

How a Gatherer Relates to a Collector

Aspect Collector<T, A, R> Gatherer<T, A, R>
Pipeline position Terminal (end) Intermediate (middle)
Output Single result of type R Stream of elements of type R
Accumulator / Integrator BiConsumer<A, T> — no feedback Integrator<A, T, R> — returns boolean, has Downstream
Finisher Function<A, R> — returns the result BiConsumer<A, Downstream> — pushes to stream
Short-circuiting Not supported Supported via integrator return value
Composability Not directly composable Composable via andThen()

4. Built-in Gatherers

Java ships five built-in gatherers in the java.util.stream.Gatherers utility class. These cover the most commonly requested operations that were impossible with the old API. Let us go through each one with concrete examples.

4.1 Gatherers.fold()

fold() is a many-to-one gatherer. It works like reduce(), but as an intermediate operation that emits a single result element into the downstream when all input is consumed. Think of it as “reduce, but keep going with the pipeline.”

Signature:

static <T, R> Gatherer<T, ?, R> fold(
    Supplier<R> initial,
    BiFunction<? super R, ? super T, ? extends R> folder
)

How it works: The initial supplier creates the starting value (the identity). The folder function takes the current accumulated value and the next input element, and returns the new accumulated value. After all input elements are processed, the final accumulated value is emitted downstream as a single element.

Example: Join strings with a semicolon delimiter

String result = Stream.of(1, 2, 3, 4, 5, 6, 7, 8, 9)
    .gather(
        Gatherers.fold(
            () -> "",
            (accumulated, element) -> {
                if (accumulated.isEmpty()) return element.toString();
                return accumulated + ";" + element;
            }
        )
    )
    .findFirst()
    .get();

System.out.println(result);
// Output: 1;2;3;4;5;6;7;8;9

“Wait, can’t I just use Collectors.joining(";")?” Yes, for this specific case. But fold() is an intermediate operation — the result flows into the rest of the pipeline. You could chain more gatherers or operations after it. With a collector, the pipeline ends.

Example: Sum as intermediate operation, then continue processing

// Fold to compute sum, then map the result, then collect
List<String> result = Stream.of(10, 20, 30)
    .gather(Gatherers.fold(() -> 0, Integer::sum))
    .map(sum -> "Total: " + sum)
    .toList();

System.out.println(result);
// Output: [Total: 60]

4.2 Gatherers.scan()

scan() is a one-to-one stateful gatherer. It produces a running accumulation — for each input element, it emits the current accumulated value. If you are familiar with functional programming, this is the classic “prefix scan” or “cumulative fold.”

Signature:

static <T, R> Gatherer<T, ?, R> scan(
    Supplier<R> initial,
    BiFunction<? super R, ? super T, ? extends R> scanner
)

How it works: For each input element, the scanner function is applied to the current state and the element. The result becomes both the new state and the output element pushed downstream.

Example: Running sum (prefix sum)

Stream.of(1, 2, 3, 4, 5)
    .gather(Gatherers.scan(() -> 0, Integer::sum))
    .forEach(System.out::println);

// Output:
// 1
// 3
// 6
// 10
// 15

Notice the output has the same number of elements as the input — that is the one-to-one nature of scan(). Each output is the cumulative result up to that point.

Example: Running sum starting from a seed value

Stream.of(1, 2, 3, 4, 5)
    .gather(Gatherers.scan(() -> 100, (current, next) -> current + next))
    .forEach(System.out::println);

// Output:
// 101
// 103
// 106
// 110
// 115

Example: Running maximum

Stream.of(3, 1, 4, 1, 5, 9, 2, 6)
    .gather(Gatherers.scan(() -> Integer.MIN_VALUE, Integer::max))
    .forEach(System.out::println);

// Output:
// 3
// 3
// 4
// 4
// 5
// 9
// 9
// 9

4.3 Gatherers.windowFixed()

windowFixed() is a many-to-many gatherer. It groups input elements into fixed-size lists (batches). When the window is full, it is emitted downstream as a List. The last window may contain fewer elements if the stream size is not evenly divisible.

Signature:

static <TR> Gatherer<TR, ?, List<TR>> windowFixed(int windowSize)

Example: Batch elements into groups of 3

Stream.of(1, 2, 3, 4, 5, 6, 7, 8)
    .gather(Gatherers.windowFixed(3))
    .forEach(System.out::println);

// Output:
// [1, 2, 3]
// [4, 5, 6]
// [7, 8]

Notice how the last window [7, 8] has only two elements — it emits whatever is left when the stream ends.

Example: Bulk database inserts in batches of 100

records.stream()
    .gather(Gatherers.windowFixed(100))
    .forEach(batch -> {
        jdbcTemplate.batchUpdate(
            "INSERT INTO orders (id, amount) VALUES (?, ?)",
            batch.stream()
                .map(order -> new Object[]{order.id(), order.amount()})
                .toList()
        );
        System.out.println("Inserted batch of " + batch.size());
    });

4.4 Gatherers.windowSliding()

windowSliding() is a many-to-many gatherer that creates overlapping windows. Each window contains windowSize elements, and the window slides forward by one position for each new element. This is exactly what you need for moving averages, n-gram generation, and similar sliding-window algorithms.

Signature:

static <TR> Gatherer<TR, ?, List<TR>> windowSliding(int windowSize)

Example: Sliding windows of size 3

Stream.of(1, 2, 3, 4, 5, 6, 7, 8)
    .gather(Gatherers.windowSliding(3))
    .forEach(System.out::println);

// Output:
// [1, 2, 3]
// [2, 3, 4]
// [3, 4, 5]
// [4, 5, 6]
// [5, 6, 7]
// [6, 7, 8]

Each window overlaps with the previous one by windowSize - 1 elements. The input stream of 8 elements produces 6 windows of size 3.

Example: 3-day moving average of stock prices

List<Double> prices = List.of(100.0, 102.5, 101.0, 105.0, 103.5, 107.0, 106.0);

prices.stream()
    .gather(Gatherers.windowSliding(3))
    .map(window -> window.stream()
        .mapToDouble(Double::doubleValue)
        .average()
        .orElse(0.0))
    .forEach(avg -> System.out.printf("%.2f%n", avg));

// Output:
// 101.17
// 102.83
// 103.17
// 105.17
// 105.50

4.5 Gatherers.mapConcurrent()

mapConcurrent() is a one-to-one gatherer that applies a mapping function concurrently, up to a specified concurrency limit. This is incredibly useful for I/O-bound operations where you want to parallelize the mapping without converting the entire stream to parallel (which would parallelize everything, including non-I/O stages).

Signature:

static <T, R> Gatherer<T, ?, R> mapConcurrent(
    int maxConcurrency,
    Function<? super T, ? extends R> mapper
)

Key properties:

  • Limits concurrency to maxConcurrency simultaneous invocations
  • Preserves encounter order — elements come out in the same order they went in
  • Uses virtual threads under the hood for efficient I/O handling
  • Perfect for rate-limiting API calls, database queries, or file operations

Example: Fetch URLs with concurrency limit of 5

List<String> urls = List.of(
    "https://api.example.com/users/1",
    "https://api.example.com/users/2",
    "https://api.example.com/users/3",
    "https://api.example.com/users/4",
    "https://api.example.com/users/5",
    "https://api.example.com/users/6",
    "https://api.example.com/users/7",
    "https://api.example.com/users/8"
);

List<String> responses = urls.stream()
    .gather(Gatherers.mapConcurrent(5, url -> {
        // This runs concurrently, up to 5 at a time
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(url))
            .build();
        try {
            return client.send(request, HttpResponse.BodyHandlers.ofString())
                         .body();
        } catch (Exception e) {
            return "Error: " + e.getMessage();
        }
    }))
    .toList();

// All 8 URLs are fetched, max 5 at a time, results in original order

Before mapConcurrent(), achieving this required manually managing a thread pool, submitting CompletableFuture tasks, collecting results, and handling ordering yourself. Now it is one line in a stream pipeline.
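For contrast, a sketch of that older recipe (the fetch is faked with a placeholder string so the example is self-contained):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BoundedConcurrencyOldStyle {
    // The pre-mapConcurrent recipe: a fixed pool bounds concurrency to 5,
    // and walking the futures in submission order preserves encounter order.
    static List<String> fetchAll(List<String> urls) {
        ExecutorService pool = Executors.newFixedThreadPool(5);
        try {
            List<Future<String>> futures = urls.stream()
                .map(url -> pool.submit(() -> "response for " + url)) // stand-in for real I/O
                .toList();
            List<String> results = new ArrayList<>();
            for (Future<String> future : futures) {
                try {
                    results.add(future.get()); // blocks; keeps original order
                } catch (InterruptedException | ExecutionException e) {
                    throw new RuntimeException(e);
                }
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(fetchAll(List.of("a", "b", "c")));
    }
}
```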

5. Using stream.gather()

The gather() method is the new intermediate operation on Stream that accepts a Gatherer. It sits alongside map(), filter(), and flatMap() in the pipeline.

Signature:

// On the Stream interface:
<R> Stream<R> gather(Gatherer<? super T, ?, R> gatherer)

You can use gather() anywhere you would use any other intermediate operation. You can chain multiple gather() calls, mix them with map() and filter(), and end with any terminal operation.

Example: Chaining gatherers with standard operations

List<Double> result = Stream.of(10.0, 20.5, 15.3, 30.0, 25.7, 18.2, 22.1, 35.0, 28.4, 19.6)
    .filter(value -> value > 15.0)          // Standard filter
    .gather(Gatherers.windowSliding(3))      // Sliding windows of 3
    .map(window -> window.stream()           // Standard map
        .mapToDouble(Double::doubleValue)
        .average()
        .orElse(0.0))
    .gather(Gatherers.scan(() -> 0.0,        // Running sum of averages
        (sum, avg) -> sum + avg))
    .toList();                               // Terminal operation

System.out.println(result);

Composing Gatherers with andThen()

Gatherers support composition via the andThen() method. This lets you combine two gatherers into a single gatherer, which can be useful for building reusable, composable transformation pipelines.

// Create composed gatherer: scan then fold
Gatherer scanGatherer =
    Gatherers.scan(() -> 0, Integer::sum);

Gatherer foldGatherer =
    Gatherers.fold(
        () -> "",
        (result, element) -> result.isEmpty()
            ? element.toString()
            : result + ";" + element
    );

// Compose them: first scan (running sum), then fold (join into string)
Gatherer composed = scanGatherer.andThen(foldGatherer);

// These are equivalent:
String result1 = Stream.of(1, 2, 3, 4, 5)
    .gather(composed)
    .findFirst().get();

String result2 = Stream.of(1, 2, 3, 4, 5)
    .gather(scanGatherer)
    .gather(foldGatherer)
    .findFirst().get();

System.out.println(result1);
// Output: 1;3;6;10;15

System.out.println(result1.equals(result2));
// Output: true

How stream.gather() Works Under the Hood

When the stream pipeline is executed and encounters a gather() step, the following happens:

  1. A Downstream object is created that forwards elements to the next pipeline stage
  2. The gatherer’s initializer is called to create the state object
  3. The integrator function is retrieved
  4. For each input element, integrator.integrate(state, element, downstream) is called
  5. If the integrator returns false, processing stops immediately (short-circuit)
  6. After all elements (or short-circuit), the finisher is called with the final state and downstream
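Those six steps can be sketched as a toy sequential evaluator (simplified; the JDK's real implementation also handles spliterators and parallel decomposition):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.stream.Gatherer;

public class GatherLoopSketch {
    // Simplified sequential evaluation of one gather() stage.
    static <T, A, R> void evaluate(List<T> input, Gatherer<T, A, R> gatherer, Consumer<R> sink) {
        Gatherer.Downstream<R> downstream = element -> {   // step 1
            sink.accept(element);
            return true;                                   // this toy sink never rejects
        };
        A state = gatherer.initializer().get();            // step 2
        Gatherer.Integrator<A, T, R> integrator = gatherer.integrator(); // step 3
        for (T element : input) {                          // step 4
            if (!integrator.integrate(state, element, downstream)) {
                break;                                     // step 5: short-circuit
            }
        }
        gatherer.finisher().accept(state, downstream);     // step 6
    }

    static List<Integer> demo() {
        // A running-sum gatherer with all parts explicit, so the sketch
        // can call initializer() and finisher() directly.
        Gatherer<Integer, int[], Integer> runningSum = Gatherer.ofSequential(
            () -> new int[]{0},
            Gatherer.Integrator.ofGreedy((state, element, downstream) -> {
                state[0] += element;
                return downstream.push(state[0]);
            }),
            (state, downstream) -> {} // nothing to flush
        );
        List<Integer> out = new ArrayList<>();
        evaluate(List.of(1, 2, 3), runningSum, out::add);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [1, 3, 6]
    }
}
```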

6. Creating Custom Gatherers

The built-in gatherers cover common cases, but the real power of the API is creating your own. There are two approaches: implementing the Gatherer interface directly, or using the factory methods Gatherer.of() and Gatherer.ofSequential().

Approach 1: Implementing the Interface

Let us build a gatherer that removes consecutive duplicate elements — something you cannot do with distinct() (which removes all duplicates globally).

import java.util.*;
import java.util.function.*;
import java.util.stream.Gatherer;

/**
 * A gatherer that removes consecutive duplicates.
 * [1, 1, 2, 2, 2, 3, 1, 1] -> [1, 2, 3, 1]
 */
public class DistinctConsecutive<T> implements Gatherer<T, List<T>, T> {

    @Override
    public Supplier<List<T>> initializer() {
        // State: a single-element list holding the last emitted value
        return () -> new ArrayList<>(1);
    }

    @Override
    public Integrator<List<T>, T, T> integrator() {
        return Integrator.ofGreedy((state, element, downstream) -> {
            if (state.isEmpty() || !Objects.equals(state.getFirst(), element)) {
                state.clear();
                state.add(element);
                return downstream.push(element);
            }
            return true; // same as last emitted -- skip it
        });
    }

    // No combiner -- sequential only
    // No finisher -- nothing to flush
}

Usage:

Stream.of(1, 1, 2, 2, 2, 3, 1, 1, 4, 4)
    .gather(new DistinctConsecutive<>())
    .forEach(System.out::println);

// Output:
// 1
// 2
// 3
// 1
// 4

Approach 2: Using Gatherer.ofSequential()

For simpler gatherers that do not need parallel support, Gatherer.ofSequential() is more concise. Let us build the same consecutive-distinct gatherer using the factory method:

static <T> Gatherer<T, ?, T> distinctConsecutive() {
    return Gatherer.ofSequential(
        // Initializer: mutable container for the last seen element
        () -> new ArrayList<T>(1),

        // Integrator
        Integrator.ofGreedy((state, element, downstream) -> {
            if (state.isEmpty() || !Objects.equals(state.getFirst(), element)) {
                state.clear();
                state.add(element);
                return downstream.push(element);
            }
            return true; // consecutive duplicate -- skip
        })
    );
}

// Usage:
Stream.of("a", "a", "b", "b", "c", "a", "a")
    .gather(distinctConsecutive())
    .toList();
// Result: [a, b, c, a]

Approach 3: Using Gatherer.of() for Parallel Support

When you need parallel execution, use Gatherer.of() which accepts all four functions including a combiner:

static <T> Gatherer<T, ?, T> distinctConsecutiveParallel() {
    return Gatherer.of(
        // Initializer
        () -> new ArrayList<T>(1),

        // Integrator
        Integrator.ofGreedy((state, element, downstream) -> {
            if (state.isEmpty() || !Objects.equals(state.getFirst(), element)) {
                state.clear();
                state.add(element);
                return downstream.push(element);
            }
            return true;
        }),

        // Combiner for parallel execution. Keep the right segment's state:
        // it holds the more recent "last seen" element. (Caveat: a duplicate
        // that straddles a segment boundary can slip through -- truly
        // order-sensitive operations are often better left sequential.)
        (left, right) -> right.isEmpty() ? left : right,

        // Finisher: nothing to flush, so use the no-op default
        Gatherer.defaultFinisher()
    );
}

Factory Methods Comparison

Factory Method Parameters Parallel? Use When
Gatherer.ofSequential(integrator) Integrator only No Stateless, sequential transformation
Gatherer.ofSequential(init, integrator) Initializer + Integrator No Stateful, sequential, no flush needed
Gatherer.ofSequential(init, integrator, finisher) Initializer + Integrator + Finisher No Stateful, sequential, needs final flush
Gatherer.of(integrator) Integrator only Yes Stateless, parallelizable transformation
Gatherer.of(init, integrator, combiner, finisher) All four Yes Full-featured gatherer

7. Stateful Gatherers

Stateful gatherers are where the API truly shines. These are operations that need to remember information across elements — something that was impossible to do correctly in a stream pipeline before gatherers.

7.1 Running Average

A gatherer that computes a running average, emitting the current average after each input element:

static Gatherer<Double, ?, Double> runningAverage() {
    // State: [sum, count]
    return Gatherer.ofSequential(
        () -> new double[]{0.0, 0.0},

        Integrator.ofGreedy((state, element, downstream) -> {
            state[0] += element;  // sum
            state[1] += 1;        // count
            return downstream.push(state[0] / state[1]);
        })
    );
}

// Usage:
Stream.of(10.0, 20.0, 30.0, 40.0, 50.0)
    .gather(runningAverage())
    .forEach(avg -> System.out.printf("%.1f%n", avg));

// Output:
// 10.0
// 15.0
// 20.0
// 25.0
// 30.0

7.2 Deduplication with Memory

Unlike distinct(), which uses a HashSet internally and removes all duplicates, you might want to deduplicate within a time or count window — for example, suppress duplicate log messages within a batch of 100:

static <T> Gatherer<T, ?, T> deduplicateWithinWindow(int windowSize) {
    return Gatherer.ofSequential(
        () -> new Object[]{new LinkedHashSet<T>(), 0},

        Integrator.ofGreedy((state, element, downstream) -> {
            @SuppressWarnings("unchecked")
            LinkedHashSet<T> seen = (LinkedHashSet<T>) state[0];
            int count = (int) state[1];

            // Reset the window when we hit the limit
            if (count > 0 && count % windowSize == 0) {
                seen.clear();
            }

            state[1] = count + 1;

            if (seen.add(element)) {
                return downstream.push(element);
            }
            return true; // duplicate within this window -- skip
        })
    );
}

// Usage: Suppress duplicate log levels within batches of 5
Stream.of("INFO", "WARN", "INFO", "ERROR", "WARN", "INFO", "DEBUG", "INFO", "WARN", "ERROR")
    .gather(deduplicateWithinWindow(5))
    .forEach(System.out::println);

// Output (first window of 5 input elements):
// INFO
// WARN
// ERROR
// (second window of 5 input elements -- seen set is cleared):
// INFO
// DEBUG
// WARN
// ERROR

7.3 Rate Limiting

A gatherer that enforces a maximum throughput by introducing delays when elements arrive too quickly:

static <T> Gatherer<T, ?, T> rateLimited(int maxPerSecond) {
    long intervalNanos = 1_000_000_000L / maxPerSecond;

    return Gatherer.ofSequential(
        // State: last emission time in nanos
        () -> new long[]{0L},

        Integrator.ofGreedy((state, element, downstream) -> {
            long now = System.nanoTime();
            long elapsed = now - state[0];

            if (elapsed < intervalNanos) {
                try {
                    Thread.sleep((intervalNanos - elapsed) / 1_000_000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }

            state[0] = System.nanoTime();
            return downstream.push(element);
        })
    );
}

// Usage: Process API calls at most 10 per second
urls.stream()
    .gather(rateLimited(10))
    .gather(Gatherers.mapConcurrent(5, url -> fetchUrl(url)))
    .forEach(response -> process(response));

7.4 Session Grouping

Group elements into sessions based on a gap threshold. If two consecutive elements are more than a specified distance apart, start a new session:

record TimestampedEvent(long timestamp, String data) {}

static Gatherer<TimestampedEvent, ?, List<TimestampedEvent>> sessionGrouping(long maxGapMillis) {
    return Gatherer.ofSequential(
        // State: current session (list of events)
        ArrayList::new,

        // Integrator: add to current session or start new one
        Integrator.ofGreedy((session, event, downstream) -> {
            if (!session.isEmpty()) {
                long lastTimestamp = session.getLast().timestamp();
                if (event.timestamp() - lastTimestamp > maxGapMillis) {
                    // Gap exceeds threshold -- emit current session, start new one
                    downstream.push(List.copyOf(session));
                    session.clear();
                }
            }
            session.add(event);
            return true;
        }),

        // Finisher: emit the last session
        (session, downstream) -> {
            if (!session.isEmpty()) {
                downstream.push(List.copyOf(session));
            }
        }
    );
}

// Usage: Group click events into sessions (30-second gap threshold)
List<TimestampedEvent> clicks = List.of(
    new TimestampedEvent(1000, "click_home"),
    new TimestampedEvent(3000, "click_products"),
    new TimestampedEvent(5000, "click_item"),
    new TimestampedEvent(60000, "click_home"),       // 55s gap -- new session
    new TimestampedEvent(62000, "click_cart"),
    new TimestampedEvent(63000, "click_checkout")
);

clicks.stream()
    .gather(sessionGrouping(30_000))
    .forEach(session -> System.out.println("Session: " + session));

// Output:
// Session: [TimestampedEvent[timestamp=1000, data=click_home], ...]
// Session: [TimestampedEvent[timestamp=60000, data=click_home], ...]

8. One-to-Many Gatherers

A one-to-many gatherer emits multiple output elements for each input element. You might think “that is what flatMap() does” — and you would be right for the stateless case. But gatherers can do stateful one-to-many transformations, which flatMap() cannot.

Example: Emit Element and Its Cumulative Sum

For each input number, emit both the number itself and the running total so far:

static Gatherer<Integer, ?, String> elementAndRunningTotal() {
    return Gatherer.ofSequential(
        () -> new int[]{0},  // running total

        Integrator.ofGreedy((state, element, downstream) -> {
            state[0] += element;
            downstream.push("Value: " + element);
            return downstream.push("Running total: " + state[0]);
        })
    );
}

Stream.of(10, 20, 30)
    .gather(elementAndRunningTotal())
    .forEach(System.out::println);

// Output:
// Value: 10
// Running total: 10
// Value: 20
// Running total: 30
// Value: 30
// Running total: 60

Example: Expand Ranges into Individual Elements

Given a stream of range objects, expand each into individual integers — but only emit values that are unique across all ranges (stateful dedup during expansion):

record IntRange(int start, int endInclusive) {}

static Gatherer<IntRange, ?, Integer> expandUniqueRanges() {
    return Gatherer.ofSequential(
        HashSet::new,  // track all emitted values globally

        Integrator.ofGreedy((seen, range, downstream) -> {
            for (int i = range.start(); i <= range.endInclusive(); i++) {
                if (seen.add(i)) {
                    downstream.push(i);
                }
            }
            return true;
        })
    );
}

Stream.of(
    new IntRange(1, 5),
    new IntRange(3, 8),   // 3, 4, 5 already seen -- only emit 6, 7, 8
    new IntRange(7, 10)   // 7, 8 already seen -- only emit 9, 10
)
.gather(expandUniqueRanges())
.forEach(System.out::println);

// Output (printed one per line): 1 2 3 4 5 6 7 8 9 10

Example: Tokenizer -- Split Lines into Words

A gatherer that splits each input line into words, maintaining a word count across all lines:

record NumberedWord(int globalIndex, String word) {}

static Gatherer<String, ?, NumberedWord> tokenize() {
    return Gatherer.ofSequential(
        () -> new int[]{0},  // global word counter

        Integrator.ofGreedy((counter, line, downstream) -> {
            String[] words = line.trim().split("\\s+");
            for (String word : words) {
                if (!word.isEmpty()) {
                    counter[0]++;
                    downstream.push(new NumberedWord(counter[0], word));
                }
            }
            return true;
        })
    );
}

Stream.of("hello world", "foo bar baz", "java streams")
    .gather(tokenize())
    .forEach(System.out::println);

// Output:
// NumberedWord[globalIndex=1, word=hello]
// NumberedWord[globalIndex=2, word=world]
// NumberedWord[globalIndex=3, word=foo]
// NumberedWord[globalIndex=4, word=bar]
// NumberedWord[globalIndex=5, word=baz]
// NumberedWord[globalIndex=6, word=java]
// NumberedWord[globalIndex=7, word=streams]

9. Short-Circuiting Gatherers

Short-circuiting is one of the most powerful capabilities of gatherers. By returning false from the integrator, you tell the stream to stop processing immediately. This enables operations like "take while a condition holds, but with state" -- something the built-in takeWhile() cannot do because its predicate sees only the current element.

Example: Take Until Sum Exceeds Threshold

Take elements from the stream until the cumulative sum exceeds a threshold:

static Gatherer<Integer, ?, Integer> takeUntilSumExceeds(int threshold) {
    return Gatherer.ofSequential(
        () -> new int[]{0},  // running sum

        Integrator.of((state, element, downstream) -> {
            state[0] += element;
            if (state[0] > threshold) {
                return false;  // Stop -- sum exceeded threshold
            }
            return downstream.push(element);
        })
    );
}

Stream.of(10, 20, 30, 40, 50, 60)
    .gather(takeUntilSumExceeds(55))
    .forEach(System.out::println);

// Output:
// 10
// 20
// (30 would make sum = 60 > 55, so processing stops)

Example: Take N Distinct Elements

Take elements until you have seen N distinct values. This combines state (a set of seen values) with short-circuiting:

static <T> Gatherer<T, ?, T> takeNDistinct(int n) {
    return Gatherer.ofSequential(
        HashSet::new,

        Integrator.of((seen, element, downstream) -> {
            seen.add(element);
            downstream.push(element);
            return seen.size() < n;  // Stop when we have N distinct values
        })
    );
}

Stream.of(1, 2, 1, 3, 2, 4, 5, 3, 6)
    .gather(takeNDistinct(4))
    .forEach(System.out::println);

// Output: 1, 2, 1, 3, 2, 4
// Stops after seeing 4th distinct value (4), but includes all elements up to that point

Example: Find First Match After N Elements

Skip the first N elements, then take the first one that matches a predicate:

static <T> Gatherer<T, ?, T> firstMatchAfterSkipping(int skip, Predicate<? super T> predicate) {
    return Gatherer.ofSequential(
        () -> new int[]{0},  // element counter

        Integrator.of((counter, element, downstream) -> {
            counter[0]++;
            if (counter[0] > skip && predicate.test(element)) {
                downstream.push(element);
                return false;  // Found it -- stop
            }
            return true;  // Keep looking
        })
    );
}

Stream.of(2, 4, 6, 7, 8, 9, 10, 11)
    .gather(firstMatchAfterSkipping(3, n -> n % 2 != 0))
    .forEach(System.out::println);

// Output: 7
// Skipped first 3 (2, 4, 6), then found first odd number (7)

The key pattern for short-circuiting: use Integrator.of() (not ofGreedy()) and return false when you want to stop. This works even on infinite streams -- the gatherer will terminate the pipeline when the condition is met.
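To see this on an unbounded source, here is the same take-until-sum-exceeds gatherer applied to Stream.iterate, which would otherwise never terminate:

```java
import java.util.List;
import java.util.stream.Gatherer;
import java.util.stream.Stream;

public class InfiniteShortCircuit {
    static Gatherer<Integer, ?, Integer> takeUntilSumExceeds(int threshold) {
        return Gatherer.ofSequential(
            () -> new int[]{0},  // running sum
            Gatherer.Integrator.of((state, element, downstream) -> {
                state[0] += element;
                if (state[0] > threshold) {
                    return false;  // stop -- terminates even an infinite stream
                }
                return downstream.push(element);
            })
        );
    }

    public static void main(String[] args) {
        // Stream.iterate never ends; the gatherer ends the pipeline for it.
        List<Integer> taken = Stream.iterate(1, n -> n + 1)
            .gather(takeUntilSumExceeds(10))
            .toList();
        System.out.println(taken); // [1, 2, 3, 4]
    }
}
```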

10. Gatherers vs Collectors

Since gatherers and collectors share a similar structure, it is important to understand exactly when to use each one. Here is a comprehensive comparison:

Aspect Collector<T, A, R> Gatherer<T, A, R>
Pipeline position Terminal -- ends the pipeline Intermediate -- pipeline continues after it
Output Single result (List, Map, String, etc.) Stream of zero or more elements
Used with stream.collect(collector) stream.gather(gatherer)
Accumulator / Integrator BiConsumer<A, T> -- no return value Integrator<A, T, R> -- returns boolean
Downstream access No -- accumulates into state Yes -- can push elements to next stage
Finisher Function<A, R> -- transforms state to result BiConsumer<A, Downstream> -- pushes to stream
Short-circuiting Not supported Supported via integrator return value
Composability Not composable with other collectors Composable via andThen()
Cardinality Many-to-one (always produces single result) Any: 1-to-1, 1-to-many, many-to-1, many-to-many
Infinite streams Cannot handle (never terminates) Can handle via short-circuiting
Use case Aggregate/summarize data Transform/reshape data flow

Decision Guide

Use a Collector when:

  • You need a single result at the end of the pipeline (a List, Map, sum, average)
  • You are aggregating data into a container
  • You want to group, partition, or summarize
  • The pipeline has no more transformations after the collection

Use a Gatherer when:

  • You need a custom transformation in the middle of the pipeline
  • You need to carry state across elements (running totals, deduplication)
  • You need to change the cardinality (batch N elements into groups, expand one element into many)
  • You need to stop early based on accumulated state
  • You want to compose multiple transformations into a reusable unit

Can a Gatherer replace a Collector? In some cases, yes. Gatherers.fold() is essentially a collector that emits into the stream. But collectors have a richer ecosystem (groupingBy, partitioningBy, teeing, etc.) and produce direct results without needing a terminal operation after them. Use the right tool for the job.
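To make that overlap concrete, here is a small sketch comparing Gatherers.fold() with the classic terminal equivalent for the same sum:

```java
import java.util.stream.Gatherers;
import java.util.stream.Stream;

public class FoldVsReduce {
    public static void main(String[] args) {
        // A fold gatherer emits its single result INTO the stream, so a
        // terminal operation (findFirst) is still needed afterwards:
        int viaGatherer = Stream.of(1, 2, 3, 4)
            .gather(Gatherers.fold(() -> 0, Integer::sum))
            .findFirst()
            .orElseThrow();

        // The classic terminal form produces the result directly:
        int viaReduce = Stream.of(1, 2, 3, 4).reduce(0, Integer::sum);

        System.out.println(viaGatherer + " == " + viaReduce);  // 10 == 10
    }
}
```

The fold version only pays off when you want to keep transforming after the aggregation; as a pure terminal step, `reduce` or a collector is the idiomatic choice.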

11. Practical Examples

Let us look at real-world scenarios where gatherers solve problems that were painful or impossible with the old Stream API.

11.1 Moving Average Calculator

A financial application needs to compute a simple moving average (SMA) over a configurable window of price data points:

static Gatherer<Double, ?, Double> movingAverage(int windowSize) {
    // Use windowSliding + map, composed into a single gatherer
    return Gatherers.<Double>windowSliding(windowSize)
        .andThen(Gatherer.ofSequential(
            Integrator.ofGreedy((Void state, List<Double> window, Gatherer.Downstream<? super Double> downstream) -> {
                double avg = window.stream()
                    .mapToDouble(Double::doubleValue)
                    .average()
                    .orElse(0.0);
                return downstream.push(avg);
            })
        ));
}

// Usage: 5-day SMA
List<Double> closingPrices = List.of(
    150.0, 152.5, 148.0, 155.0, 153.0,
    157.5, 160.0, 158.0, 162.0, 165.0
);

closingPrices.stream()
    .gather(movingAverage(5))
    .forEach(sma -> System.out.printf("SMA: %.2f%n", sma));

// Output:
// SMA: 151.70
// SMA: 153.20
// SMA: 154.70
// SMA: 156.70
// SMA: 158.10
// SMA: 160.50

11.2 Batch Processing with Progress Tracking

Process large datasets in batches, logging progress after each batch:

record BatchResult<T>(int batchNumber, int batchSize, List<T> items) {}

static <T> Gatherer<T, ?, BatchResult<T>> batchWithProgress(int batchSize) {
    return Gatherer.ofSequential(
        () -> new Object[]{new ArrayList<T>(), 0},  // [buffer, batchCount]

        Integrator.ofGreedy((Object[] state, T element, Gatherer.Downstream<? super BatchResult<T>> downstream) -> {
            @SuppressWarnings("unchecked")
            List<T> buffer = (List<T>) state[0];
            buffer.add(element);

            if (buffer.size() >= batchSize) {
                int batchNum = (int) state[1] + 1;
                state[1] = batchNum;
                boolean wantMore = downstream.push(
                    new BatchResult<>(batchNum, buffer.size(), List.copyOf(buffer)));
                buffer.clear();
                return wantMore;
            }
            return true;
        }),

        // Finisher: emit remaining elements as final batch
        (state, downstream) -> {
            @SuppressWarnings("unchecked")
            List<T> buffer = (List<T>) state[0];
            if (!buffer.isEmpty()) {
                int batchNum = (int) state[1] + 1;
                downstream.push(new BatchResult<>(batchNum, buffer.size(), List.copyOf(buffer)));
            }
        }
    );
}

// Usage:
IntStream.rangeClosed(1, 23)
    .boxed()
    .gather(batchWithProgress(5))
    .forEach(batch -> {
        System.out.printf("Processing batch %d (%d items): %s%n",
            batch.batchNumber(), batch.batchSize(), batch.items());
        // Insert into database, send to API, etc.
    });

// Output:
// Processing batch 1 (5 items): [1, 2, 3, 4, 5]
// Processing batch 2 (5 items): [6, 7, 8, 9, 10]
// Processing batch 3 (5 items): [11, 12, 13, 14, 15]
// Processing batch 4 (5 items): [16, 17, 18, 19, 20]
// Processing batch 5 (3 items): [21, 22, 23]

11.3 Chunked HTTP Requests

Send a large list of IDs to an API that accepts a maximum of 50 IDs per request, executing requests concurrently with a limit of 3:

record ApiResponse(int status, String body) {}

List<ApiResponse> responses = userIds.stream()
    // Step 1: Chunk IDs into groups of 50
    .gather(Gatherers.windowFixed(50))

    // Step 2: Convert each chunk to a comma-separated query parameter
    .map(chunk -> chunk.stream()
        .map(String::valueOf)
        .collect(Collectors.joining(",")))

    // Step 3: Make concurrent API calls (max 3 at a time)
    .gather(Gatherers.mapConcurrent(3, ids -> {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://api.example.com/users?ids=" + ids))
            .build();
        try {
            HttpResponse<String> resp = client.send(request,
                HttpResponse.BodyHandlers.ofString());
            return new ApiResponse(resp.statusCode(), resp.body());
        } catch (Exception e) {
            return new ApiResponse(500, e.getMessage());
        }
    }))

    // Step 4: Filter successful responses
    .filter(resp -> resp.status() == 200)
    .toList();

This entire pipeline -- chunking, URL construction, concurrent HTTP calls, response filtering -- is expressed as a single, readable stream pipeline. Before gatherers, you would need a loop, a thread pool, a list of futures, and manual result collection.
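For contrast, here is a sketch of that pre-gatherer shape: manual chunking, an explicit thread pool, and future bookkeeping. The `fetch` method is a stand-in for the HTTP call above, and the 120-element ID list is illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class PreGathererChunking {
    record ApiResponse(int status, String body) {}

    // Stand-in for the real HTTP call in the pipeline above.
    static ApiResponse fetch(String ids) {
        return new ApiResponse(200, "users: " + ids);
    }

    public static void main(String[] args) throws Exception {
        List<Integer> userIds = IntStream.rangeClosed(1, 120).boxed().toList();

        ExecutorService pool = Executors.newFixedThreadPool(3);  // max 3 concurrent
        List<Future<ApiResponse>> futures = new ArrayList<>();
        for (int i = 0; i < userIds.size(); i += 50) {           // chunks of 50
            String ids = userIds.subList(i, Math.min(i + 50, userIds.size())).stream()
                .map(String::valueOf)
                .collect(Collectors.joining(","));
            futures.add(pool.submit(() -> fetch(ids)));
        }

        List<ApiResponse> responses = new ArrayList<>();
        for (Future<ApiResponse> f : futures) {
            ApiResponse r = f.get();                 // blocks until done
            if (r.status() == 200) responses.add(r);
        }
        pool.shutdown();

        System.out.println(responses.size() + " successful batches");  // 3 successful batches
    }
}
```

Every concern the gatherer pipeline expresses declaratively (chunk size, concurrency limit, result filtering) is scattered across imperative plumbing here.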

11.4 Log Session Grouping

Group log entries into sessions based on time gaps, useful for analyzing user behavior or debugging:

record LogEntry(Instant timestamp, String level, String message) {}

record LogSession(int sessionId, Duration duration, List<LogEntry> entries) {
    static LogSession from(int id, List<LogEntry> entries) {
        Duration dur = Duration.between(
            entries.getFirst().timestamp(),
            entries.getLast().timestamp()
        );
        return new LogSession(id, dur, entries);
    }
}

static Gatherer<LogEntry, ?, LogSession> groupIntoSessions(Duration maxGap) {
    return Gatherer.ofSequential(
        () -> new Object[]{new ArrayList<LogEntry>(), 0},

        Integrator.ofGreedy((Object[] state, LogEntry entry, Gatherer.Downstream<? super LogSession> downstream) -> {
            @SuppressWarnings("unchecked")
            List<LogEntry> current = (List<LogEntry>) state[0];

            if (!current.isEmpty()) {
                Instant lastTime = current.getLast().timestamp();
                if (Duration.between(lastTime, entry.timestamp()).compareTo(maxGap) > 0) {
                    // Gap exceeds threshold -- emit session
                    int sessionId = (int) state[1] + 1;
                    state[1] = sessionId;
                    if (!downstream.push(LogSession.from(sessionId, List.copyOf(current)))) {
                        return false;  // Downstream rejected -- stop
                    }
                    current.clear();
                }
            }
            current.add(entry);
            return true;
        }),

        (state, downstream) -> {
            @SuppressWarnings("unchecked")
            List<LogEntry> current = (List<LogEntry>) state[0];
            if (!current.isEmpty()) {
                int sessionId = (int) state[1] + 1;
                downstream.push(LogSession.from(sessionId, List.copyOf(current)));
            }
        }
    );
}

// Usage:
logEntries.stream()
    .sorted(Comparator.comparing(LogEntry::timestamp))
    .gather(groupIntoSessions(Duration.ofMinutes(5)))
    .forEach(session -> System.out.printf(
        "Session %d: %d entries, duration %s%n",
        session.sessionId(), session.entries().size(), session.duration()
    ));

12. Best Practices

Stream Gatherers are a powerful new tool, and with great power comes the opportunity to misuse it. Here are the guidelines I follow when writing production-quality gatherers.

12.1 Thread Safety

The state object returned by the initializer is not shared across threads unless you provide a combiner. Each thread segment gets its own state via a fresh initializer() call. However, you must ensure that:

  • Your state object is mutable but does not reference shared external state
  • The integrator does not capture mutable variables from outside the gatherer
  • If you provide a combiner, the merge logic is correct and does not lose data
// BAD: Shared mutable state outside the gatherer
List<String> sharedList = new ArrayList<>();  // Danger!
Gatherer<String, ?, String> bad = Gatherer.ofSequential(
    Integrator.ofGreedy((Void state, String element, Gatherer.Downstream<? super String> downstream) -> {
        sharedList.add(element);  // Race condition in parallel streams!
        return downstream.push(element);
    })
);

// GOOD: All state is inside the gatherer
Gatherer<String, ?, String> good = Gatherer.ofSequential(
    () -> new ArrayList<String>(),
    Integrator.ofGreedy((List<String> state, String element, Gatherer.Downstream<? super String> downstream) -> {
        state.add(element);  // Safe -- state is per-evaluation
        return downstream.push(element);
    })
);

12.2 Performance Considerations

  • Use Integrator.ofGreedy() when your gatherer never short-circuits. This gives the runtime optimization hints.
  • Minimize state size. In parallel execution, a fresh state object is created for every thread segment, so heavyweight state multiplies.
  • Prefer primitive arrays over boxed types for numerical state (e.g., new int[]{0} instead of new AtomicInteger(0)).
  • Avoid heavy computation in the integrator if possible. The integrator is called once per element -- keep it lightweight and push heavy work downstream.
  • Be cautious with mapConcurrent(). It uses virtual threads, which are great for I/O but provide no benefit for CPU-bound work.
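The first three points can be seen together in a small sketch. The `withIndex` gatherer below is illustrative (not part of the JDK): it is greedy, its integrator does almost no work, and its state is a bare `int[]` counter rather than an `AtomicInteger`:

```java
import java.util.stream.Gatherer;
import java.util.stream.Stream;

public class IndexedGatherer {
    // Greedy (never short-circuits on its own), with a primitive int[] counter
    // as state -- cheaper than AtomicInteger for a sequential gatherer.
    static Gatherer<String, ?, String> withIndex() {
        return Gatherer.ofSequential(
            () -> new int[]{0},
            Gatherer.Integrator.ofGreedy((int[] counter, String element, Gatherer.Downstream<? super String> downstream) ->
                downstream.push(counter[0]++ + ": " + element))
        );
    }

    public static void main(String[] args) {
        Stream.of("a", "b", "c")
            .gather(withIndex())
            .forEach(System.out::println);
        // 0: a
        // 1: b
        // 2: c
    }
}
```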

12.3 Composability

  • Write focused, single-purpose gatherers. Compose them with andThen() rather than building monolithic ones.
  • Return Gatherer from static factory methods for clean API design, just like Collectors.toList().
  • Parameterize your gatherers. A windowFixed(int size) is more reusable than a windowOfFive().
// GOOD: Reusable, composable, parameterized gatherer library
public class CustomGatherers {

    public static <T> Gatherer<T, ?, T> distinctConsecutive() {
        return Gatherer.ofSequential(
            () -> new ArrayList<T>(1),
            Integrator.ofGreedy((List<T> state, T element, Gatherer.Downstream<? super T> downstream) -> {
                if (state.isEmpty() || !Objects.equals(state.getFirst(), element)) {
                    state.clear();
                    state.add(element);
                    return downstream.push(element);
                }
                return true;
            })
        );
    }

    public static <T> Gatherer<T, ?, T> takeUntilSumExceeds(
            int threshold, ToIntFunction<? super T> valueExtractor) {
        return Gatherer.ofSequential(
            () -> new int[]{0},
            Integrator.of((int[] state, T element, Gatherer.Downstream<? super T> downstream) -> {
                state[0] += valueExtractor.applyAsInt(element);
                if (state[0] > threshold) return false;
                return downstream.push(element);
            })
        );
    }

    public static <T> Gatherer<T, ?, List<T>> groupByGap(
            BiPredicate<? super T, ? super T> gapDetector) {
        return Gatherer.ofSequential(
            () -> new ArrayList<T>(),
            Integrator.ofGreedy((List<T> group, T element, Gatherer.Downstream<? super List<T>> downstream) -> {
                if (!group.isEmpty() && gapDetector.test(group.getLast(), element)) {
                    if (!downstream.push(List.copyOf(group))) return false;
                    group.clear();
                }
                group.add(element);
                return true;
            }),
            (group, downstream) -> {
                if (!group.isEmpty()) {
                    downstream.push(List.copyOf(group));
                }
            }
        );
    }
}

// Usage: compose distinct + window + fold
Stream.of(1, 1, 2, 3, 3, 4, 5, 5)
    .gather(CustomGatherers.distinctConsecutive()
        .andThen(Gatherers.windowFixed(2))
        .andThen(Gatherers.fold(
            () -> new ArrayList<List<Integer>>(),
            (acc, window) -> { acc.add(window); return acc; }
        )))
    .forEach(System.out::println);
// Output: [[1, 2], [3, 4], [5]]

12.4 Do Not Forget the Finisher

If your gatherer buffers elements (like windowing or batching), you must provide a finisher to flush the remaining buffer. Without it, the last partial batch is silently lost:

// BAD: Missing finisher -- last partial window is lost
static <T> Gatherer<T, ?, List<T>> brokenWindow(int size) {
    return Gatherer.ofSequential(
        () -> new ArrayList<T>(),
        Integrator.ofGreedy((List<T> buffer, T element, Gatherer.Downstream<? super List<T>> downstream) -> {
            buffer.add(element);
            if (buffer.size() >= size) {
                boolean wantMore = downstream.push(List.copyOf(buffer));
                buffer.clear();
                return wantMore;
            }
            return true;
        })
        // No finisher! Elements [4, 5] are lost if stream has 5 elements and window is 3
    );
}

// GOOD: Finisher flushes remaining elements
static <T> Gatherer<T, ?, List<T>> correctWindow(int size) {
    return Gatherer.ofSequential(
        () -> new ArrayList<T>(),
        Integrator.ofGreedy((List<T> buffer, T element, Gatherer.Downstream<? super List<T>> downstream) -> {
            buffer.add(element);
            if (buffer.size() >= size) {
                boolean wantMore = downstream.push(List.copyOf(buffer));
                buffer.clear();
                return wantMore;
            }
            return true;
        }),
        (buffer, downstream) -> {
            if (!buffer.isEmpty()) {
                downstream.push(List.copyOf(buffer));  // Flush remainder
            }
        }
    );
}

12.5 When NOT to Use Gatherers

Gatherers are not always the right answer:

  • Simple transformations: If map(), filter(), or flatMap() can do the job, use them. They are simpler, more readable, and better optimized by the runtime.
  • Terminal aggregation: If you need a single result at the end, use a Collector. Gatherers.fold() is neat, but Collectors.reducing() or Collectors.groupingBy() are more idiomatic for terminal operations.
  • Pure CPU-bound parallelism: mapConcurrent() uses virtual threads, which shine for I/O. For CPU-bound parallel work, use parallelStream() or a ForkJoinPool.
  • Very simple batching: If you just need to split a known-size list into sublists, List.subList() or Guava's Lists.partition() might be simpler.
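The first point is worth seeing side by side. Below, the same stateless transformation is written both ways; the gatherer version is a correct but needlessly heavy equivalent:

```java
import java.util.List;
import java.util.stream.Gatherer;
import java.util.stream.Stream;

public class KeepItSimple {
    public static void main(String[] args) {
        // Clear and well optimized: plain intermediate operations
        List<Integer> viaMapFilter = Stream.of(1, 2, 3, 4)
            .filter(n -> n % 2 == 0)
            .map(n -> n * n)
            .toList();

        // Same result with a gatherer -- more machinery for no benefit
        List<Integer> viaGatherer = Stream.of(1, 2, 3, 4)
            .gather(Gatherer.of((Void state, Integer n, Gatherer.Downstream<? super Integer> downstream) ->
                n % 2 != 0 || downstream.push(n * n)))
            .toList();

        System.out.println(viaMapFilter + " " + viaGatherer);  // [4, 16] [4, 16]
    }
}
```

Reach for a gatherer only when state, cardinality changes, or short-circuiting are actually involved.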

Summary

Stream Gatherers are the biggest expansion to the Stream API since Java 8. They fill the critical gap of custom intermediate operations, enabling stateful transformations, windowing, concurrent mapping, and short-circuiting that were previously impossible within a stream pipeline. The five built-in gatherers (fold, scan, windowFixed, windowSliding, mapConcurrent) cover the most common needs, and the Gatherer interface gives you the power to build anything else. Welcome to the next era of Java streams.

March 1, 2026

Java 25 Module Imports & Simple Source Files

1. Introduction

If you have ever watched a new developer’s face when they see their first Java program, you know the problem. Before they can print “Hello, World!”, they must understand public, class, static, void, String[], and why the file name must match the class name. Compare that to Python, where the entire program is print("Hello, World!"). That gap has been a legitimate criticism of Java for over two decades.

Java 25 fixes this with two finalized features that, when combined, make Java programs as simple as scripts while retaining the full power of the platform:

Feature JEP Status in Java 25 What It Does
Module Import Declarations JEP 476 Final Import all public types from a module with one statement
Implicitly Declared Classes & Instance Main Methods JEP 477 Final Write Java programs without class declarations or public static void main

These features went through multiple rounds of preview in Java 22, 23, and 24. In Java 25, they are finalized and production-ready. No --enable-preview flag required. This post covers both features in detail — what they are, how they work, where the edge cases are, and when you should (and should not) use them.

Think of it this way: Java has always been a language that favors ceremony — explicit declarations that make large codebases maintainable. These features do not remove that ceremony from production code. Instead, they give you a casual mode for situations where the ceremony gets in the way: teaching, scripting, prototyping, and quick experiments.

2. Module Import Declarations (JEP 476)

Before we get to the simplified programs, we need to tackle the import problem. Java’s import system works well once you know it, but it creates a wall of boilerplate at the top of every file. A typical utility class might start like this:

import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.util.HashMap;
import java.util.Set;
import java.util.HashSet;
import java.util.Optional;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.io.IOException;
import java.io.BufferedReader;
import java.io.FileReader;
import java.nio.file.Path;
import java.nio.file.Files;

public class DataProcessor {
    // ... actual code starts here, 14 lines later
}

That is fourteen import statements before a single line of business logic. IDEs hide them behind a fold, but they still exist, and they are still noise. Star imports (import java.util.*) help, but they only import from a single package, not from related packages. You still need separate star imports for java.util, java.util.stream, java.io, and java.nio.file.

Module import declarations solve this at the module level.

2.1 The import module Syntax

The new syntax imports all public top-level types exported by a module:

import module java.base;

That single line replaces every import you would ever need from the java.base module. It imports types from java.util, java.util.stream, java.io, java.nio.file, java.time, java.math, java.net, java.text, java.util.concurrent, and every other package in the java.base module — roughly 60 packages and over 1,500 types.

Here is the comparison side by side:

// BEFORE: Traditional imports
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.util.HashMap;
import java.util.stream.Collectors;
import java.io.IOException;
import java.io.BufferedReader;
import java.io.FileReader;
import java.nio.file.Path;
import java.nio.file.Files;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DataProcessor {
    public void process() throws IOException {
        Path file = Path.of("data.csv");
        List<String> lines = Files.readAllLines(file);
        Map<String, BigDecimal> totals = new HashMap<>();
        LocalDateTime now = LocalDateTime.now();
        // ...
    }
}

// AFTER: Module import
import module java.base;

public class DataProcessor {
    public void process() throws IOException {
        Path file = Path.of("data.csv");
        List<String> lines = Files.readAllLines(file);
        Map<String, BigDecimal> totals = new HashMap<>();
        LocalDateTime now = LocalDateTime.now();
        // ...
    }
}

Fourteen imports collapsed to one. The code below the imports is identical — you do not change how you use the types, only how you import them.

2.2 Which Modules Are Available

You can use import module with any named module in the Java Platform Module System (JPMS). Here are the most commonly used ones:

Module Key Packages Common Use Case
java.base java.lang, java.util, java.io, java.nio, java.time, java.math, java.net, java.util.concurrent Core language — covers 90% of typical imports
java.sql java.sql, javax.sql JDBC database access
java.logging java.util.logging JDK built-in logging
java.net.http java.net.http HTTP client (HttpClient, HttpRequest, HttpResponse)
java.desktop java.awt, javax.swing GUI applications
java.xml javax.xml, org.xml.sax, org.w3c.dom XML parsing and processing
java.compiler javax.tools, javax.annotation.processing Annotation processing and compilation

For most application code, import module java.base; is all you need. If you do database work, add import module java.sql;. If you use the HTTP client, add import module java.net.http;. You can combine module imports with traditional imports freely.

2.3 How Module Imports Work Under the Hood

A module import is not the same as importing every package with a star import. Here are the key rules:

1. Only public top-level types are imported. Nested classes, package-private classes, and non-exported packages are not imported. This is the same visibility you would get if you wrote individual import statements.

2. Transitive dependencies are included. If module A requires transitive module B, then import module A; also imports all exported types from module B. For example, java.sql requires transitive java.logging and java.xml, so importing java.sql gives you logging and XML types too.

3. No runtime cost. Module imports are purely a compile-time convenience. The compiled bytecode contains exactly the same class references as if you had written individual imports. There is no additional classloading, no additional memory usage, and no performance difference.

3. Ambiguity Resolution

When you import entire modules, you can end up with two types that share a simple name. For example, java.base exports java.util.List, while java.desktop exports java.awt.List — import both modules and the simple name List could refer to either. How does Java handle this?

The resolution follows a clear priority order:

Priority Import Type Example Wins When
1 (highest) Single-type import import java.util.List; Always wins — most specific
2 Package star import import java.util.*; Wins over module imports
3 (lowest) Module import import module java.base; Only if no higher-priority import matches

Here is a practical example:

// Scenario: Two modules export a class with the same simple name
import module java.base;    // exports java.util.List
import module java.desktop; // exports java.awt.List

// This would be ambiguous -- compiler error!
// List<String> names = new ArrayList<>(); // Which List?

// Fix: Add a single-type import to disambiguate
import java.util.List;  // This takes priority over both module imports

List<String> names = new ArrayList<>(); // Resolves to java.util.List

The practical implication is simple: start with import module java.base;, and if the compiler reports an ambiguity, add a single-type import to resolve it. This is the exact same approach you use today when star imports clash — the resolution mechanism is familiar.

Within a single module, the JDK designers have already ensured there are no ambiguous simple names. You will only hit ambiguity when importing multiple modules that happen to export types with the same name, which is relatively rare in practice.

3.1 Combining Import Styles

You can mix all three import styles in the same file:

import module java.base;           // Module import -- all of java.base
import module java.sql;             // Module import -- all of java.sql
import java.util.logging.*;         // Star import -- one specific package
import com.myapp.util.StringUtils;  // Single-type import -- one specific class

public class MyService {
    // All types from java.base, java.sql, java.util.logging,
    // and StringUtils are available here
}

This flexibility means you can adopt module imports gradually. Replace your most common star imports with a single import module java.base; and keep specific imports for third-party libraries that are not modularized.

4. Before vs After: Real-World Import Blocks

To appreciate the difference, here are three real-world examples showing typical import blocks before and after module imports.

4.1 REST API Controller

// BEFORE (22 imports)
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.stream.Collectors;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.nio.file.Path;
import java.nio.file.Files;
import java.util.function.Function;
import java.util.function.Predicate;

public class OrderService {
    // ...
}

// AFTER (2 imports)
import module java.base;
import module java.net.http;

public class OrderService {
    // Exact same code below -- all types resolve correctly
}

4.2 Database Repository Class

// BEFORE (12 imports)
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.List;
import java.util.ArrayList;
import java.util.Optional;
import java.time.LocalDateTime;
import java.time.Instant;
import java.math.BigDecimal;
import java.util.logging.Logger;

public class UserRepository {
    // ...
}

// AFTER (2 imports)
import module java.base;
import module java.sql;

public class UserRepository {
    // java.sql requires transitive java.logging,
    // so Logger is available too
}

4.3 File Processing Utility

// BEFORE (16 imports)
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.Files;
import java.nio.file.StandardOpenOption;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.regex.Pattern;
import java.util.regex.Matcher;

public class FileProcessor {
    // ...
}

// AFTER (1 import)
import module java.base;

public class FileProcessor {
    // Everything is in java.base -- one import covers it all
}

The pattern is clear: for typical application code, import module java.base; eliminates 80-90% of your import statements. Add one or two more module imports for database, HTTP, or desktop work, and you cover virtually everything.

5. Implicitly Declared Classes and Instance Main Methods (JEP 477)

Now we get to the second half of Java’s simplification story. If module imports tackle the import problem, JEP 477 tackles the ceremony problem — the amount of boilerplate code required to write even the simplest Java program.

5.1 Java’s Ceremony Problem

Here is the classic “Hello, World!” in Java, compared to other languages:

// Java (traditional) -- 5 lines of ceremony
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}

// Python -- 1 line
// print("Hello, World!")

// JavaScript -- 1 line
// console.log("Hello, World!");

// Go -- still needs ceremony, but less
// func main() {
//     fmt.Println("Hello, World!")
// }

To write one line of useful code in Java, you need to understand:

  • public — access modifiers (why is the class public? what does public mean?)
  • class — object-oriented programming concepts
  • static — class vs instance methods (one of Java’s most confusing concepts for beginners)
  • void — return types
  • String[] args — arrays and command-line arguments
  • System.out.println — nested static field access on a class

That is a lot of concepts to explain before a new programmer can see their first output. JEP 477 removes most of this ceremony.

5.2 Instance Main Methods

The first simplification: you no longer need public static void main(String[] args). Here are the new options:

// Option 1: Simplest possible main method
class HelloWorld {
    void main() {
        System.out.println("Hello, World!");
    }
}

// Option 2: Main with args (if you need command-line arguments)
class HelloWorld {
    void main(String[] args) {
        System.out.println("Hello, " + args[0] + "!");
    }
}

// Option 3: Static main still works (backward compatible)
class HelloWorld {
    static void main() {
        System.out.println("Hello, World!");
    }
}

Notice what changed:

  • No public required — the main method does not need to be public anymore
  • No static required — it can be an instance method, meaning the JVM will create an instance of the class and call main() on it
  • No String[] args required — if you do not need command-line arguments, leave them out
  • The class does not need to be public either

This is a significant improvement. A beginner can now write a program that uses only concepts they understand: a class has a method, the method does something. No static, no access modifiers, no array parameters.

5.3 Implicitly Declared Classes

The second simplification goes further: you do not need a class declaration at all. If a Java source file contains methods (including main) but no class declaration, the compiler wraps them in an implicitly declared class.

// File: HelloWorld.java
// No class declaration needed!
void main() {
    System.out.println("Hello, World!");
}

That is it. One method, one line of logic. The file compiles and runs like any other Java program:

// Compile and run
// $ javac HelloWorld.java
// $ java HelloWorld
// Output: Hello, World!

// Or use the source-file launcher (no explicit compilation needed)
// $ java HelloWorld.java
// Output: Hello, World!

Behind the scenes, the compiler generates a class with the same name as the file (minus the .java extension). But you never see it, and you never have to think about it.

You can add fields and helper methods to an implicitly declared class, just like a regular class:

// File: Greeting.java
// Fields and helper methods work in implicitly declared classes

String greeting = "Hello";

void main() {
    String name = "Java 25";
    System.out.println(greet(name));
}

String greet(String name) {
    return greeting + ", " + name + "!";
}

// Output: Hello, Java 25!

6. The Launch Protocol

With multiple valid main method signatures now possible, the JVM follows a specific priority order when looking for the entry point. Understanding this protocol is important because it determines which main method runs if you have multiple candidates.

The JVM tries these signatures in order, picking the first one it finds:

Priority Signature Notes
1 static void main(String[] args) Traditional — highest priority for backward compatibility
2 static void main() Static without args — new in Java 25
3 void main(String[] args) Instance method with args — JVM creates instance first
4 void main() Instance method without args — simplest form

Key points:

  • Static methods take priority over instance methods with the same parameter list
  • Methods with String[] args take priority over methods without args
  • Access modifiers do not matterpublic, protected, package-private, and private are all valid (though private only works in implicitly declared classes)
  • Existing programs are unaffected — if you already have public static void main(String[] args), it still runs first
// Example: Which main runs?
class MultipleMain {

    // Priority 1 -- this one wins
    static void main(String[] args) {
        System.out.println("Static main with args");
    }

    // Priority 4 -- never reached
    void main() {
        System.out.println("Instance main without args");
    }
}

// Output: Static main with args

For instance main methods, the JVM creates an instance of the class using the no-argument constructor before calling main(). This means the class must have a no-arg constructor (which every class has by default unless you declare a different constructor).
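To make the instantiation step concrete, here is a rough sketch of what the launcher effectively does for an instance main. `LaunchSketch` and `mainLike` are illustrative names invented for this sketch, and the real selection logic lives inside the JVM launcher, so treat this as a conceptual model rather than the actual mechanism:

```java
// Rough sketch of the launch of an instance main method.
// Illustrative only -- the real protocol is implemented in the JVM launcher.
public class LaunchSketch {

    // Stand-in for an instance main(): no static, no args
    void mainLike() {
        System.out.println("instance entry point ran");
    }

    public static void main(String[] args) {
        // Step 1: instantiate via the no-arg constructor (which must exist)
        LaunchSketch instance = new LaunchSketch();
        // Step 2: invoke the instance entry point on that instance
        instance.mainLike();
    }
}
```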

7. Combined: Simple Source Files

The real power comes when you combine both features: module imports and implicitly declared classes. Together, they give you what the JDK team calls “compact source files” (JEP 512) -- Java programs that are as concise as scripts.

Here is the transformation, step by step:

// Step 1: Traditional Java (Java 8 style)
import java.util.List;
import java.util.ArrayList;
import java.util.stream.Collectors;

public class NameProcessor {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(List.of("alice", "bob", "charlie"));
        List<String> result = names.stream()
            .map(String::toUpperCase)
            .collect(Collectors.toList());
        System.out.println(result);
    }
}

// Step 2: Add module imports (remove import block)
import module java.base;

public class NameProcessor {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(List.of("alice", "bob", "charlie"));
        List<String> result = names.stream()
            .map(String::toUpperCase)
            .collect(Collectors.toList());
        System.out.println(result);
    }
}

// Step 3: Use instance main (remove public, static, String[] args)
import module java.base;

public class NameProcessor {
    void main() {
        List<String> names = new ArrayList<>(List.of("alice", "bob", "charlie"));
        List<String> result = names.stream()
            .map(String::toUpperCase)
            .collect(Collectors.toList());
        System.out.println(result);
    }
}

// Step 4: Implicitly declared class (remove class declaration)
import module java.base;

void main() {
    List<String> names = new ArrayList<>(List.of("alice", "bob", "charlie"));
    List<String> result = names.stream()
        .map(String::toUpperCase)
        .collect(Collectors.toList());
    System.out.println(result);
}

From 14 lines down to 8. From 3 import statements and a class declaration down to a single module import. The actual logic did not change at all — it is the same streams, the same collections, the same method references. But the noise is gone.

7.1 Automatic java.base Import in Implicitly Declared Classes

There is one more convenience: in implicitly declared classes (files without a class declaration), java.base is automatically imported. You do not even need import module java.base;. This means the simplest version of the program above is:

// File: NameProcessor.java
// No imports needed! java.base is automatically imported
// for implicitly declared classes.

void main() {
    var names = new ArrayList<>(List.of("alice", "bob", "charlie"));
    var result = names.stream()
        .map(String::toUpperCase)
        .collect(Collectors.toList());
    System.out.println(result);
}

This is as simple as Java gets. No imports, no class, no public static void main. Just the logic. Run it with java NameProcessor.java and it works.

7.2 More Examples: Simple Source Files in Action

Here are practical examples showing simple source files for common tasks:

// Example 1: Read a file and count words
// File: WordCounter.java

void main() {
    var path = Path.of("document.txt");
    try {
        var lines = Files.readAllLines(path);
        long wordCount = lines.stream()
            .flatMap(line -> Stream.of(line.split("\\s+")))
            .filter(word -> !word.isEmpty())
            .count();
        System.out.println("Word count: " + wordCount);
    } catch (IOException e) {
        System.err.println("Error reading file: " + e.getMessage());
    }
}

// Example 2: Simple HTTP request
// File: QuickFetch.java

import module java.net.http;

void main() throws Exception {
    var client = HttpClient.newHttpClient();
    var request = HttpRequest.newBuilder()
        .uri(URI.create("https://api.github.com/zen"))
        .build();
    var response = client.send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println("Status: " + response.statusCode());
    System.out.println("Body: " + response.body());
}

// Example 3: CSV processor with helper methods
// File: CsvProcessor.java

void main() {
    var data = List.of(
        "Alice,Engineering,95000",
        "Bob,Marketing,78000",
        "Charlie,Engineering,102000",
        "Diana,Marketing,81000"
    );

    var avgByDept = data.stream()
        .map(line -> line.split(","))
        .collect(Collectors.groupingBy(
            parts -> parts[1],
            Collectors.averagingDouble(parts -> Double.parseDouble(parts[2]))
        ));

    avgByDept.forEach((dept, avg) ->
        System.out.printf("%s: $%,.0f%n", dept, avg));
}

// Output:
// Engineering: $98,500
// Marketing: $79,500

8. Use Cases

These features are not intended to replace traditional Java class declarations in production codebases. They serve specific purposes where ceremony reduction matters most:

8.1 Teaching and Education

This is the primary motivation. When teaching Java to beginners, you can now start with:

// Lesson 1: Your first Java program
void main() {
    System.out.println("Hello, World!");
}

// Lesson 2: Variables and types
void main() {
    String name = "Student";
    int age = 20;
    double gpa = 3.8;
    System.out.println(name + " is " + age + " years old with GPA " + gpa);
}

// Lesson 3: Loops
void main() {
    for (int i = 1; i <= 10; i++) {
        System.out.println(i + " x 7 = " + (i * 7));
    }
}

// Introduce classes, static, and access modifiers LATER,
// when students are ready for object-oriented concepts

Educators can now teach procedural programming first, then introduce object-oriented concepts when students have a foundation in variables, loops, and methods. This matches how most programming courses are structured -- Java was the outlier that forced OOP from line one.

8.2 Scripting and Automation

Java has never been a scripting language, but simple source files bring it closer. Quick tasks like file processing, data transformation, or API testing become practical without setting up a full project:

// File: CleanupLogs.java
// Run with: java CleanupLogs.java

void main() throws Exception {
    var logDir = Path.of("/var/log/myapp");
    var cutoff = Instant.now().minus(Duration.ofDays(30));

    try (var files = Files.list(logDir)) {
        // Collect the matches first; deleting inside peek() buries a side
        // effect in the stream, so do the deletion in a plain loop instead
        var oldLogs = files
            .filter(f -> f.toString().endsWith(".log"))
            .filter(f -> {
                try {
                    return Files.getLastModifiedTime(f).toInstant().isBefore(cutoff);
                } catch (IOException e) {
                    return false;
                }
            })
            .toList();

        for (var f : oldLogs) {
            try { Files.delete(f); } catch (IOException e) {
                System.err.println("Failed to delete: " + f);
            }
        }
        System.out.println("Deleted " + oldLogs.size() + " old log files");
    }
}
}

8.3 Prototyping and Experimentation

When you want to quickly test a library feature, validate an algorithm, or experiment with an API, simple source files remove the friction of creating a class, picking a package, and writing the main method signature:

// File: TestRegex.java
// Quick experiment -- does my regex work?

void main() {
    var pattern = Pattern.compile("(\\d{4})-(\\d{2})-(\\d{2})");
    var input = "The release date is 2025-09-16 and EOL is 2033-09-16.";

    var matcher = pattern.matcher(input);
    while (matcher.find()) {
        System.out.printf("Full match: %s | Year: %s | Month: %s | Day: %s%n",
            matcher.group(0), matcher.group(1), matcher.group(2), matcher.group(3));
    }
}

// Output:
// Full match: 2025-09-16 | Year: 2025 | Month: 09 | Day: 16
// Full match: 2033-09-16 | Year: 2033 | Month: 09 | Day: 16

8.4 Competitive Programming

Competitive programmers care about typing speed and code brevity. Removing the class declaration, access modifiers, and static keyword saves time and reduces mistakes under pressure. The auto-import of java.base means all standard library types are available without import statements.
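As an illustration, here is a typical contest-style helper, written as a regular class so it compiles on any recent JDK; under Java 25 the class wrapper, the imports, and the static modifiers could all be dropped. `SumLine` and `sumLine` are hypothetical names for this sketch, and `StringReader` stands in for `System.in` to keep it self-contained:

```java
import java.io.BufferedReader;
import java.io.StringReader;
import java.util.Arrays;

public class SumLine {

    // Sum the whitespace-separated integers on one input line
    static long sumLine(String line) {
        return Arrays.stream(line.trim().split("\\s+"))
                     .mapToLong(Long::parseLong)
                     .sum();
    }

    public static void main(String[] args) throws Exception {
        // StringReader stands in for System.in in this self-contained sketch
        var in = new BufferedReader(new StringReader("3 1 4 1 5 9"));
        System.out.println(sumLine(in.readLine())); // prints 23
    }
}
```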

9. Restrictions and Limitations

Implicitly declared classes are not full replacements for regular classes. They have specific limitations you need to understand:

Restriction Why What to Do Instead
Cannot be referenced by other classes Implicitly declared classes have no accessible name Use a regular class declaration when other classes need to reference it
No constructors The JVM uses the default no-arg constructor Use initialization in field declarations or the main method
No extends or implements No class declaration means no inheritance clause Use a regular class if you need to extend or implement
No type parameters No class declaration means nowhere to declare generics Use a regular class when you need a generic type
No package declaration Implicitly declared classes are always in the unnamed package Use a regular class for packaged code
Not part of a module Lives in the unnamed module Use a regular class for modular applications
One per file Same rule as top-level classes Keep each implicitly declared class in its own file

9.1 What Works in Implicitly Declared Classes

Despite the restrictions, you can still do a lot:

// File: FeatureDemo.java
// All of these work in an implicitly declared class:

// Instance fields
int counter = 0;
String prefix = "Item";

// Instance methods
String format(int num) {
    return prefix + "-" + String.format("%04d", num);
}

void processItems(List<String> items) {
    for (String item : items) {
        counter++;
        System.out.println(format(counter) + ": " + item);
    }
}

// Main entry point
void main() {
    var items = List.of("Keyboard", "Monitor", "Mouse", "Headset");
    processItems(items);
    System.out.println("Total items processed: " + counter);
}

// Output:
// Item-0001: Keyboard
// Item-0002: Monitor
// Item-0003: Mouse
// Item-0004: Headset
// Total items processed: 4

10. Best Practices

Now that you understand both features, here are guidelines for when to use them:

10.1 When to Use Simplified Syntax

Scenario Use Simplified? Why
Teaching beginners Yes Reduces cognitive load, lets students focus on fundamentals
Quick scripts and one-off tools Yes Faster to write, no project setup needed
Prototyping and experiments Yes Get to the interesting code faster
Competitive programming Yes Saves keystrokes and reduces boilerplate errors
Production application code No Use full class declarations for maintainability
Library code No Other classes need to reference your types
Code that needs inheritance No Implicitly declared classes cannot extend or implement
Multi-class applications No Classes need to reference each other by name

10.2 Module Imports in Production Code

Module imports can be valuable even in production code, though there is reasonable debate about whether to use them there:

Arguments for using import module java.base; in production:

  • Eliminates import management busywork
  • No runtime performance cost
  • Reduces merge conflicts in import blocks
  • Makes code shorter and more focused on logic

Arguments against:

  • Traditional imports serve as documentation of dependencies
  • IDEs already manage imports automatically
  • Team conventions may prefer explicit imports
  • Code review tools may flag unused imports differently

Recommendation: For new projects and teams open to change, module imports are a net positive -- use them. For existing projects with established conventions, discuss with your team before switching. The important thing is consistency within a codebase.
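One practical wrinkle to settle in team conventions: if two imported modules export the same simple name -- java.base and java.desktop both export a type called List, for example -- an explicit single-type import resolves the ambiguity, since single-type imports take precedence over module (and on-demand) imports. Here is a sketch of the tie-breaking rule, with the Java 25 module-import lines shown as comments so the file compiles on any recent JDK; `Disambiguate` is a hypothetical name:

```java
// Under Java 25, this combination would be ambiguous:
//
//     import module java.base;      // exports java.util.List
//     import module java.desktop;   // exports java.awt.List -- clash!
//
// An explicit single-type import breaks the tie:
import java.util.List;

public class Disambiguate {
    public static void main(String[] args) {
        // List now unambiguously means java.util.List
        List<String> names = List.of("alice", "bob");
        System.out.println(names); // prints [alice, bob]
    }
}
```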

10.3 Transition Path

If you want to adopt these features, here is a sensible path:

  1. Start with module imports in production code -- they are the lower-risk change
  2. Use implicitly declared classes for scripts, tools, and tests -- not production code
  3. Update team coding standards to document when each style is appropriate
  4. Let IDE support mature -- IntelliJ IDEA and Eclipse are updating their templates and code generation for the new syntax

11. Complete Example: Putting It All Together

Let us build a complete, practical program using both features. This script reads a CSV file of employees, groups them by department, calculates statistics, and outputs a formatted report.

// File: EmployeeReport.java
// Run with: java EmployeeReport.java employees.csv
// No class declaration, no explicit imports -- java.base is auto-imported

void main(String[] args) {
    if (args.length == 0) {
        System.err.println("Usage: java EmployeeReport.java <csv-file>");
        System.exit(1);
    }

    var file = Path.of(args[0]);
    try {
        var lines = Files.readAllLines(file);

        // Skip header, parse each line
        var employees = lines.stream()
            .skip(1)
            .map(line -> line.split(","))
            .filter(parts -> parts.length >= 3)
            .toList();

        // Group by department
        var byDept = employees.stream()
            .collect(Collectors.groupingBy(parts -> parts[1].trim()));

        // Print report
        System.out.println("=".repeat(50));
        System.out.println("EMPLOYEE REPORT - " + LocalDate.now());
        System.out.println("=".repeat(50));

        byDept.forEach((dept, members) -> {
            var avgSalary = members.stream()
                .mapToDouble(p -> Double.parseDouble(p[2].trim()))
                .average()
                .orElse(0);

            var maxSalary = members.stream()
                .mapToDouble(p -> Double.parseDouble(p[2].trim()))
                .max()
                .orElse(0);

            System.out.printf("%n Department: %s%n", dept);
            System.out.printf(" Headcount:  %d%n", members.size());
            System.out.printf(" Avg Salary: $%,.0f%n", avgSalary);
            System.out.printf(" Max Salary: $%,.0f%n", maxSalary);
            System.out.println(" Members:");
            members.forEach(m ->
                System.out.printf("   - %s ($%s)%n", m[0].trim(), m[2].trim()));
        });

        System.out.println("\n" + "=".repeat(50));
        System.out.printf("Total employees: %d%n", employees.size());
        System.out.println("=".repeat(50));

    } catch (IOException e) {
        System.err.println("Error reading file: " + e.getMessage());
        System.exit(1);
    }
}

That is a complete, runnable Java program with file I/O, streams, collectors, date formatting, and string formatting -- all in a single method with no class declaration and no imports. Run it with java EmployeeReport.java employees.csv and it just works.

Compare the first line of this program (void main(String[] args)) to the traditional version (public class EmployeeReport { public static void main(String[] args) {). The reduction in ceremony is dramatic, and the code is easier to read because it focuses entirely on what the program does rather than how Java requires it to be structured.

12. Summary

Java 25 finalizes two features that make the language significantly more approachable for beginners and more convenient for experienced developers writing scripts, prototypes, and small programs:

Feature What It Does Best For
Module Import Declarations (JEP 511, previewed as JEP 476) Import all public types from a module with import module java.base; All Java code -- reduces import boilerplate everywhere
Instance Main Methods (JEP 512, previewed as JEP 477) Write void main() without public, static, or String[] args Simpler entry points, teaching, scripts
Implicitly Declared Classes (JEP 512, previewed as JEP 477) Write methods without a class declaration; the file is the class Scripts, prototypes, teaching, quick experiments
Auto-import of java.base Implicitly declared classes get java.base imported automatically Zero-import Java programs for simple tasks

These features do not change how production Java applications are built. They add a new, simpler way to write Java when the full ceremony is unnecessary. Think of them as Java's casual Friday -- you can still wear the suit when you need to, but now you have the option to dress down when the situation calls for it.

The combination of module imports, instance main methods, and implicitly declared classes means that Java 25 has the simplest on-ramp of any Java version in the language's 30-year history. A new programmer can write their first Java program in one line. That is a significant achievement for a language that has always prioritized explicitness and structure.

March 1, 2026