Java 25 is the next Long-Term Support (LTS) release after Java 21, expected in September 2025. If your organization runs on Java 21 — which it should if you followed the last migration guide — Java 25 is the natural next upgrade target. Between Java 21 and Java 25, four feature releases shipped (22, 23, 24, 25), each adding language features, API improvements, and runtime enhancements.
The upgrade from 21 to 25 is less disruptive than the jump from 17 to 21. There are no paradigm-shifting features like virtual threads this time around. Instead, Java 25 polishes and finalizes features that were in preview during Java 21, adds new language conveniences, and delivers meaningful performance improvements. Think of it as Java 21 with the rough edges smoothed out.
| Version | Release Date | Type | Oracle Premier Support Until | Extended Support Until |
|---|---|---|---|---|
| Java 17 | September 2021 | LTS | September 2026 | September 2029 |
| Java 21 | September 2023 | LTS | September 2028 | September 2031 |
| Java 22 | March 2024 | Non-LTS | September 2024 | N/A |
| Java 23 | September 2024 | Non-LTS | March 2025 | N/A |
| Java 24 | March 2025 | Non-LTS | September 2025 | N/A |
| Java 25 | September 2025 | LTS | September 2030 | September 2033 |
Why migrate now? Java 21 premier support runs until September 2028, so there is no rush. But the new features in Java 25 — particularly scoped values, structured concurrency, and the performance improvements — provide real value. Planning your migration now gives you time to test thoroughly and adopt new features incrementally.
Here is a comprehensive table of every significant feature added between Java 22 and Java 25. Features marked Final are production-ready. Features marked Preview require --enable-preview to use.
| Feature | JEP | Introduced | Finalized | Status in Java 25 |
|---|---|---|---|---|
| Module Import Declarations | 511 | Java 23 (preview as JEP 476) | Java 25 | Final |
| Implicitly Declared Classes & Instance Main | 512 | Java 21 (preview as JEP 445) | Java 25 | Final (as Compact Source Files) |
| Primitive Types in Patterns | 507 | Java 23 (preview as JEP 455) | — | Preview (third preview) |
| Flexible Constructor Bodies | 513 | Java 22 (preview as JEP 447) | Java 25 | Final |
| Structured Concurrency | 505 | Java 19 (incubator) | — | Preview (fifth preview) |
| Scoped Values | 506 | Java 20 (incubator) | Java 25 | Final |
| Class-File API | 484 | Java 22 (preview) | Java 24 | Final |
| Foreign Function & Memory API | 454 | Java 14 (incubator) | Java 22 | Final |
| Unnamed Patterns and Variables | 456 | Java 21 (preview) | Java 22 | Final |
| Stream Gatherers | 485 | Java 22 (preview) | Java 24 | Final |
| Key Derivation Function API | 478 | Java 24 | Java 24 | Final |
| AOT Class Loading & Linking | 483 | Java 24 | Java 24 | Final |
| Stable Values | 502 | Java 25 | — | Preview |
| Compact Object Headers | 519 | Java 24 (experimental as JEP 450) | Java 25 | Final (off by default) |
| Vector API | 489 | Java 16 (incubator) | — | Incubator |
The theme of Java 22-25 is finalization. Many features that were in preview or incubator during Java 21 have graduated to production-ready status. This means you can use them without --enable-preview flags and rely on them in production code with confidence.
The migration from Java 21 to Java 25 has fewer breaking changes than the 17-to-21 jump, but there are important ones to be aware of:
| Removed Feature | Removed In | Replacement | Action Required |
|---|---|---|---|
| String Templates (STR, FMT processors) | Java 23 | None yet (may return in different form) | Remove any preview usage of STR."..." |
| sun.misc.Unsafe memory methods (partial) | Java 23+ | Foreign Function & Memory API (java.lang.foreign) | Migrate to MemorySegment and Arena |
| Windows 32-bit x86 port | Java 24 | Use 64-bit JDK | Switch to 64-bit if running 32-bit Windows |
| Change | Version | Impact | What to Do |
|---|---|---|---|
| Integrity by default — stronger module encapsulation | Java 24 | Illegal reflective access warnings become errors | Add --add-opens or fix code to use public APIs |
| UTF-8 by default (already in Java 18) | Java 18+ | File I/O uses UTF-8 regardless of system locale | Verify encoding assumptions in file operations |
| Deprecation enforcement | Various | Previously deprecated methods may be removed in future versions | Address deprecation warnings now |
| Security Manager restrictions | Java 24 | Cannot install a SecurityManager at all | Remove SecurityManager usage entirely |
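To guard against the UTF-8-by-default change in the table above, the safest habit is to pass the charset explicitly in file I/O instead of relying on defaults. A minimal sketch (the file and class names here are illustrative):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class EncodingCheck {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("demo", ".txt");
        // Passing the charset explicitly makes behavior identical on every
        // JDK and platform, regardless of the default charset or locale.
        Files.writeString(file, "café", StandardCharsets.UTF_8);
        String content = Files.readString(file, StandardCharsets.UTF_8);
        System.out.println(content.equals("café")); // true
        Files.delete(file);
    }
}
```

Code that already uses explicit charsets behaves identically on Java 17, 21, and 25; only code relying on the platform default needs review.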
The integrity-by-default change is the most impactful behavioral one. Starting in Java 24, the JVM enforces module boundaries more strictly: libraries that use deep reflection to access internal JDK classes fail at runtime instead of merely warning. This primarily affects:

- Older versions of Lombok, Byte Buddy, and Mockito, which hook into compiler and JDK internals
- Serialization, mocking, and dependency-injection frameworks that reflect over private JDK fields
- Any application code that calls setAccessible(true) on JDK classes
The fix is to upgrade to recent versions of these libraries (which already handle the restrictions properly) or add --add-opens flags as a temporary workaround.
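As a sketch of the temporary workaround, the flag can be passed on the command line or via the standard JAVA_TOOL_OPTIONS environment variable (the module/package pair shown is the common java.base/java.lang case; use whatever the InaccessibleObjectException stack trace actually names, and the jar name is a placeholder):

```shell
# Temporary workaround: open java.lang to all classpath (unnamed-module) code
java --add-opens java.base/java.lang=ALL-UNNAMED -jar myapp.jar

# Or via environment variable, picked up by every JVM launched in this shell
export JAVA_TOOL_OPTIONS="--add-opens java.base/java.lang=ALL-UNNAMED"
```

Treat these flags as technical debt: record each one next to the library that needs it, and remove the flag once that library is upgraded.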
Follow these ten steps to migrate from Java 21 to Java 25. Each step should be a separate commit or PR so you can isolate issues.
```bash
# Check your current Java version
$ java -version
openjdk version "21.0.x" ...

# Check for deprecation warnings in your build
$ mvn compile 2>&1 | grep -i "deprecat"
$ gradle compileJava 2>&1 | grep -i "deprecat"

# List all --add-opens and --add-exports flags you currently use
$ grep -r "add-opens\|add-exports" pom.xml build.gradle Dockerfile
```
Before changing the JDK version, update your dependencies to versions that support Java 25. This is the step that catches most issues.
```text
Key dependencies to update (minimum versions for Java 25 support):

Build plugins:
  maven-compiler-plugin >= 3.13
  maven-surefire-plugin >= 3.3
  gradle              >= 8.8

Frameworks:
  Spring Boot      >= 3.4 (recommended: 3.5+)
  Spring Framework >= 6.2 (recommended: 6.3+)
  Quarkus          >= 3.15+
  Micronaut        >= 4.7+

Libraries:
  Jackson    >= 2.17
  Hibernate  >= 6.5
  Lombok     >= 1.18.34
  Mockito    >= 5.12
  JUnit      >= 5.11
  Byte Buddy >= 1.15
  ASM        >= 9.7
```
```bash
# Option 1: SDKMAN (recommended for developers)
$ sdk install java 25-open
$ sdk use java 25-open

# Option 2: Eclipse Temurin (recommended for production)
#   Download from https://adoptium.net/temurin/releases/

# Option 3: Amazon Corretto
#   Download from https://aws.amazon.com/corretto/

# Verify installation
$ java -version
openjdk version "25" 2025-09-16
```
```xml
<!-- Maven pom.xml -->
<properties>
    <java.version>25</java.version>
    <maven.compiler.release>25</maven.compiler.release>
</properties>
```

```groovy
// Gradle build.gradle
java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(25)
    }
}
```

```kotlin
// Gradle build.gradle.kts (Kotlin DSL)
java {
    toolchain {
        languageVersion.set(JavaLanguageVersion.of(25))
    }
}
```
Run a clean build and address compilation errors. Common issues:
| Error | Cause | Fix |
|---|---|---|
| `cannot access class sun.misc.Unsafe` | Direct Unsafe usage | Migrate to java.lang.foreign API or VarHandle |
| `module X does not export Y` | Accessing internal APIs | Use public API alternatives or --add-exports |
| `class file has wrong version 69.0` | Dependency compiled with newer Java than build tool expects | Update build tool plugins |
| Preview feature warnings | Using features that were preview in 21 but changed since | Update code to use finalized syntax |
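As an illustration of the Unsafe-to-VarHandle migration mentioned above, here is a minimal sketch (the class and field names are hypothetical) that replaces an Unsafe-based compare-and-swap counter with a VarHandle, which has been standard API since Java 9:

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class Counter {
    private volatile long value;

    // VarHandle replaces Unsafe.objectFieldOffset + compareAndSwapLong
    private static final VarHandle VALUE;
    static {
        try {
            VALUE = MethodHandles.lookup()
                    .findVarHandle(Counter.class, "value", long.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public long incrementAndGet() {
        // Atomic fetch-and-add; no sun.misc.Unsafe required
        return (long) VALUE.getAndAdd(this, 1L) + 1;
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.incrementAndGet();
        c.incrementAndGet();
        System.out.println(c.incrementAndGet()); // prints 3
    }
}
```

For off-heap memory access (the other common Unsafe use), the java.lang.foreign MemorySegment and Arena types are the intended replacement.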
```bash
# Run the full test suite
$ mvn test -Dsurefire.useFile=false
$ gradle test --info

# Pay attention to:
# 1. Reflection-based tests (may fail due to stronger encapsulation)
# 2. Serialization tests (format may differ between JDK versions)
# 3. Tests that depend on internal JDK behavior (GC, classloading order)
# 4. Tests that parse java -version output or check system properties
```
Some issues only appear at runtime. Start your application and verify:
```bash
# Compile with deprecation warnings visible
# Maven (maven-compiler-plugin option):
$ mvn compile -Dmaven.compiler.showDeprecation=true
# Gradle: add "-Xlint:deprecation" to options.compilerArgs in compileJava

# Common deprecations to address:
#   SecurityManager usage            -> remove entirely
#   finalize() overrides             -> use Cleaner or try-with-resources
#   Thread.stop() / Thread.suspend() -> use interrupt-based signaling
#   Legacy Date/Calendar APIs        -> use java.time
```
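To replace a finalize() override, one common pattern registers cleanup with java.lang.ref.Cleaner while also exposing a deterministic close(). A sketch, where the resource type is illustrative:

```java
import java.lang.ref.Cleaner;

public class NativeBuffer implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    // The cleanup state must NOT reference the owning NativeBuffer,
    // or the buffer would never become unreachable.
    static final class State implements Runnable {
        volatile boolean released;
        @Override public void run() {
            released = true; // stand-in for freeing the real native resource
        }
    }

    private final State state = new State();
    private final Cleaner.Cleanable cleanable = CLEANER.register(this, state);

    public boolean isReleased() {
        return state.released;
    }

    @Override
    public void close() {
        cleanable.clean(); // deterministic release; the Cleaner is only a safety net
    }

    public static void main(String[] args) {
        NativeBuffer buf = new NativeBuffer();
        try (buf) {
            // use the buffer
        } // close() runs here, releasing the resource immediately
        System.out.println(buf.isReleased()); // true
    }
}
```

Unlike finalize(), the Cleaner runs cleanup at most once, and the explicit close() path means well-behaved callers never depend on GC timing at all.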
Once the migration is stable, selectively adopt new features:
| Feature | Migration Effort | Benefit | Priority |
|---|---|---|---|
| Module imports | Low — IDE refactor | Cleaner import blocks | Low (cosmetic) |
| Structured concurrency | Medium — refactor concurrent code | Safer, more maintainable concurrency | High if using concurrency |
| Scoped values | Medium — replace ThreadLocal | No memory leaks, better with virtual threads | High if using ThreadLocal |
| Primitive patterns | Low — refactor switch/if-else | Cleaner pattern matching | Medium |
| AOT class loading | Low — deployment config only | Faster startup | High for microservices |
| Stream Gatherers | Low — new code or refactor | Custom stream operations | Low to Medium |
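As a taste of Stream Gatherers from the table above (final since Java 24, JEP 485), here is a fixed-size windowing operation that previously required a hand-rolled collector. Note this needs JDK 24+ to compile and run:

```java
import java.util.List;
import java.util.stream.Gatherers;
import java.util.stream.Stream;

public class GathererDemo {
    public static void main(String[] args) {
        // Group a stream into fixed-size windows using a built-in gatherer
        List<List<Integer>> windows = Stream.of(1, 2, 3, 4, 5)
                .gather(Gatherers.windowFixed(2))
                .toList();
        System.out.println(windows); // [[1, 2], [3, 4], [5]]
    }
}
```

Gatherers slot into pipelines exactly like intermediate operations, so adopting them is usually a local refactor rather than an architectural change.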
Update your deployment pipeline, Docker images, and CI/CD configuration (covered in detail in sections 8 and 9 below).
```xml
<!-- pom.xml: complete Maven configuration for Java 25 -->
<properties>
    <java.version>25</java.version>
    <maven.compiler.release>25</maven.compiler.release>
</properties>

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.13.0</version>
            <configuration>
                <release>25</release>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>3.3.1</version>
        </plugin>
    </plugins>
</build>
```
```kotlin
// build.gradle.kts -- Kotlin DSL
plugins {
    java
    id("org.springframework.boot") version "3.5.0"
}

java {
    toolchain {
        languageVersion.set(JavaLanguageVersion.of(25))
    }
}

tasks.withType<JavaCompile> {
    options.release.set(25)
    // For preview features:
    // options.compilerArgs.add("--enable-preview")
}

tasks.withType<Test> {
    useJUnitPlatform()
    // For preview features:
    // jvmArgs("--enable-preview")
}
```
```groovy
// build.gradle -- Groovy DSL
java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(25)
    }
}

compileJava {
    options.release = 25
}
```
Here is the compatibility matrix for major Java frameworks with Java 25:
| Framework | Minimum Version for Java 25 | Recommended Version | Notes |
|---|---|---|---|
| Spring Boot | 3.4.x | 3.5.x or 4.0.x | 3.5+ expected to have official Java 25 support |
| Spring Framework | 6.2.x | 6.3.x or 7.0.x | Spring Framework 7 targets Java 25 as baseline |
| Quarkus | 3.15+ | 3.17+ or 4.x | Quarkus adds Java support quickly after release |
| Micronaut | 4.7+ | 4.8+ | Good Java version support track record |
| Jakarta EE | 10 | 11 | Jakarta EE 11 aligns with Java 25 features |
| Hibernate | 6.5+ | 6.6+ | Ensure Byte Buddy version is compatible |
| Lombok | 1.18.34+ | Latest | Lombok is sensitive to JDK internals — always use latest |
| MapStruct | 1.6+ | Latest | Annotation processors need compiler compatibility |
```properties
# application.properties updates for Java 25

# Enable virtual threads (already available since Spring Boot 3.2 + Java 21)
spring.threads.virtual.enabled=true

# If using AOT class loading, configure the training profile
# spring.profiles.active=aot-training
```
```java
// Structured concurrency with Spring -- example service
// Note: StructuredTaskScope is still a preview API in Java 25 (JEP 505);
// compile and run with --enable-preview.
import java.util.concurrent.StructuredTaskScope;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class UserProfileService {

    @Autowired private UserRepository userRepo;
    @Autowired private OrderRepository orderRepo;

    public UserProfile getProfile(long userId) throws Exception {
        try (var scope = StructuredTaskScope.open()) {
            var userTask = scope.fork(() -> userRepo.findById(userId));
            var ordersTask = scope.fork(() -> orderRepo.findByUserId(userId));
            scope.join(); // wait for both subtasks; fails fast if either throws
            return new UserProfile(userTask.get(), ordersTask.get());
        }
    }
}
```
Based on community experience with Java 22-24 migrations, here are the most common issues and their solutions:
| # | Issue | Symptom | Solution |
|---|---|---|---|
| 1 | Illegal reflective access | `InaccessibleObjectException` at runtime | Update the library to a version that uses public APIs, or add `--add-opens java.base/java.lang=ALL-UNNAMED` as a temporary workaround |
| 2 | Lombok compilation failure | `java.lang.IllegalAccessError` during annotation processing | Update Lombok to >= 1.18.34; Lombok accesses JDK internals and needs frequent updates |
| 3 | Byte Buddy / Mockito failure | `IllegalArgumentException: Unsupported class file version` | Update Byte Buddy to >= 1.15 and Mockito to >= 5.12 |
| 4 | Jackson serialization issues | Reflection errors on record types or sealed classes | Update Jackson to >= 2.17; ensure jackson-module-parameter-names is included |
| 5 | SecurityManager removal | `UnsupportedOperationException: The Security Manager is deprecated` | Remove all SecurityManager code; it cannot be installed in Java 24+ |
| 6 | ASM version incompatibility | Build tools or plugins fail to parse class files | Update ASM to >= 9.7 (via dependency management or plugin updates) |
| 7 | Finalizer deprecation warnings | Warnings about overriding `finalize()` | Replace with Cleaner or explicit close() methods with try-with-resources |
| 8 | Preview feature code from Java 21 | Compilation errors for preview syntax that changed | Remove String Templates usage (removed in 23); update any preview feature syntax |
| 9 | Test framework incompatibility | Tests fail to start or mock creation fails | Update JUnit to 5.11+ and test dependency versions |
| 10 | Docker base image not available | No eclipse-temurin:25 image found | Wait for Adoptium release (usually within days of GA) or use early-access builds |
Java 25 delivers measurable performance improvements in several areas. Here is what you can expect without changing any application code:
The biggest startup improvement. By pre-loading and pre-linking classes, the JVM skips the expensive discovery, verification, and linking phases that dominate startup time. This benefits all applications but is most impactful for large applications with many classes (Spring Boot applications, Jakarta EE servers).
| Metric | Java 21 | Java 25 (without AOT) | Java 25 (with AOT cache) |
|---|---|---|---|
| Spring Boot startup | ~4.0s | ~3.5s | ~1.5s |
| Classes loaded at startup | ~12,000 | ~12,000 | ~12,000 (from cache) |
| Time to first request | ~5.0s | ~4.5s | ~2.0s |
Reduces every object’s header from 12 bytes to 8 bytes. For applications with millions of small objects (collections-heavy code, graph structures, caches), this translates to 10-20% heap memory savings. Enable with -XX:+UseCompactObjectHeaders (experimental in Java 24; promoted to a regular product option in Java 25 by JEP 519, though still off by default).
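For example, the flag can be enabled at launch (the jar name is a placeholder):

```shell
# Opt in to compact object headers.
# -XX:+UnlockExperimentalVMOptions is required on Java 24, where the flag is
# still experimental, and is harmless on versions where it is a product flag.
java -XX:+UnlockExperimentalVMOptions -XX:+UseCompactObjectHeaders -jar myapp.jar
```

Measure heap usage before and after on a representative workload; the savings depend heavily on average object size.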
ZGC’s generational mode (the default since Java 23) has been further tuned.
The C2 JIT compiler includes better escape analysis, improved loop optimizations, and enhanced auto-vectorization. These improvements benefit all code without any configuration changes. Typical throughput improvement is 2-5% compared to Java 21 for compute-heavy workloads.
```dockerfile
# Multi-stage Dockerfile for Java 25 with AOT cache

# Build stage
# (assumes Maven is available in the image, e.g. via the Maven wrapper)
FROM eclipse-temurin:25-jdk AS builder
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn package -DskipTests

# AOT training stage
FROM eclipse-temurin:25-jre AS aot-trainer
WORKDIR /app
COPY --from=builder /app/target/myapp.jar .
RUN java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
    -Dspring.context.exit=onRefresh -jar myapp.jar || true
RUN java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
    -XX:AOTCache=app.aot -jar myapp.jar

# Production stage
FROM eclipse-temurin:25-jre
WORKDIR /app
COPY --from=builder /app/target/myapp.jar .
COPY --from=aot-trainer /app/app.aot .

ENV JAVA_OPTS="-XX:AOTCache=/app/app.aot \
    -XX:+UseZGC \
    -XX:MaxRAMPercentage=75"

EXPOSE 8080
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar myapp.jar"]
```
| Image | Size | Use Case |
|---|---|---|
| `eclipse-temurin:25-jre` | ~200MB | Standard production image |
| `eclipse-temurin:25-jre-alpine` | ~100MB | Smaller image, Alpine-based |
| `eclipse-temurin:25-jdk` | ~350MB | Build stage or development |
| `amazoncorretto:25` | ~220MB | AWS-optimized, good for ECS/EKS |
| `azul/zulu-openjdk:25` | ~210MB | Azul-supported, good for Azure |
```yaml
# GitHub Actions workflow for Java 25
name: Build and Test
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up JDK 25
        uses: actions/setup-java@v4
        with:
          java-version: '25'
          distribution: 'temurin'
          cache: 'maven'

      - name: Build and test
        run: mvn verify

      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: app-jar
          path: target/*.jar
```
Do not try to adopt every new feature at once. Here is a recommended order:
- Replace ThreadLocal with ScopedValue where appropriate

Java 25 as an LTS release will receive updates for years. Here are best practices for stability:
- Pin to a specific patch version (e.g., 25.0.2) rather than just 25 in production. Update deliberately, not automatically.

| Phase | Timeline | Goal |
|---|---|---|
| Evaluation | October 2025 | Build and test on Java 25, identify issues |
| Development | November-December 2025 | Fix issues, update dependencies, run full test suite |
| Staging | January 2026 | Deploy to staging/pre-production, performance testing |
| Production rollout | February-March 2026 | Gradual production deployment (canary, then full) |
| Feature adoption | Q2 2026 onward | Incrementally adopt new language features and APIs |
Migrating from Java 21 to Java 25 is a straightforward upgrade with high reward. The breaking changes are minimal (primarily around module encapsulation enforcement and SecurityManager removal), and the benefits are substantial: faster startup with AOT class loading, lower memory use with compact object headers and generational ZGC, steady JIT throughput gains, and a wave of finalized language and API features.
The migration playbook is: update dependencies first, change the JDK version, fix compilation and test failures, then adopt new features incrementally. Most teams should be able to complete the core migration in two to four weeks, with feature adoption continuing over the following months.
Java 25 is a worthy successor to Java 21. It does not introduce another paradigm shift like virtual threads, but it polishes, finalizes, and optimizes everything that Java 21 started. If you are on Java 21, start planning your Java 25 migration now. If you are still on Java 17 or earlier, consider jumping directly to Java 25 — you will get the best of both worlds.
For over 25 years, Java developers have lived with one of the most annoying rules in the language: the first statement in a constructor must be super() or this(). No exceptions. No validation before calling the superclass. No computation to prepare arguments. If your subclass constructor needed to check that an argument was valid before passing it up, you could not do it directly — you had to resort to ugly workarounds like static helper methods, ternary operator abuse, or factory method patterns.
Think of it this way: imagine you are renovating a house, and the building code says you must pour the foundation before you can even inspect the land. You cannot test the soil, check the property lines, or measure the slope first. You just have to hope everything is fine and deal with problems after the concrete is set. That is what Java constructors felt like before this change.
Here is the kind of code that drove developers crazy:
```java
public class PositiveBigInteger extends BigInteger {
    public PositiveBigInteger(long value) {
        // I want to validate that value > 0 BEFORE calling super()
        // But the compiler says: "Call to 'super()' must be first statement"
        super(Long.toString(value)); // Forced to call super first
        // Now I can validate, but the superclass already did all its work
        // with potentially invalid data
        if (value <= 0) {
            throw new IllegalArgumentException("non-positive value");
        }
    }
}
```
The superclass constructor runs, allocates resources, possibly writes to files or databases -- all with an invalid argument. Then you throw an exception. The damage is done. This pattern is wasteful, error-prone, and fundamentally backwards.
Java 25 fixes this with Flexible Constructor Bodies (JEP 513). You can now write statements before the call to super() or this(). Validation, argument transformation, field initialization -- all of it can happen in a "prologue" section before the constructor delegation. The feature was previewed in Java 22 (JEP 447), Java 23 (JEP 482), and Java 24 (JEP 492), and is finalized as a permanent feature in Java 25 LTS.
The rule is simple: you can now place statements before the explicit constructor invocation (super(...) or this(...)). These statements form what the JLS calls the prologue of the constructor. The statements after the constructor invocation form the epilogue.
```java
public class Employee extends Person {
    public Employee(String name, int age) {
        // === PROLOGUE (new in Java 25) ===
        // Validate arguments before calling super
        if (name == null || name.isBlank()) {
            throw new IllegalArgumentException("Name cannot be blank");
        }
        if (age < 18 || age > 67) {
            throw new IllegalArgumentException("Age must be between 18 and 67");
        }
        // Compute values to pass to super
        String normalizedName = name.trim().toUpperCase();

        // === CONSTRUCTOR INVOCATION ===
        super(normalizedName, age);

        // === EPILOGUE (same as before) ===
        // Full access to this, fields, methods
        this.startDate = LocalDate.now();
        log("Employee created: " + this.getName());
    }
}
```
The constructor body is now divided into two phases:
| Phase | Location | What You Can Do | What You Cannot Do |
|---|---|---|---|
| Prologue | Before `super()` / `this()` | Validate arguments, compute values, declare local variables, throw exceptions, initialize uninitialized fields | Access `this` (methods, read fields), access `super` (fields, methods), create inner class instances |
| Epilogue | After `super()` / `this()` | Everything -- full access to `this`, fields, methods, superclass members | Nothing restricted (same as traditional constructor body) |
The key insight is that the object is in an early construction context during the prologue. The superclass has not been initialized yet, so the object is not fully formed. Java protects you from accessing the uninitialized object by restricting what you can do in the prologue. But it gives you enough freedom to validate, compute, and prepare -- which is all you typically need.
The most common use case for flexible constructor bodies is argument validation. You want to fail fast -- reject bad input before the superclass constructor does any work. This is especially important when the superclass constructor is expensive (allocates resources, opens connections, writes to disk).
```java
public class PositiveBigInteger extends BigInteger {
    public PositiveBigInteger(long value) {
        // Validate BEFORE superclass does any work
        if (value <= 0) {
            throw new IllegalArgumentException("Value must be positive: " + value);
        }
        super(Long.toString(value));
    }
}

// Usage:
var num = new PositiveBigInteger(42); // Works fine
var bad = new PositiveBigInteger(-5); // Throws immediately, no wasted work
```
```java
public class DatabaseConnection extends AbstractConnection {
    private final String schema;

    public DatabaseConnection(String host, int port, String schema, String user) {
        // Validate all arguments before superclass initializes the connection
        Objects.requireNonNull(host, "Host cannot be null");
        Objects.requireNonNull(schema, "Schema cannot be null");
        Objects.requireNonNull(user, "User cannot be null");
        if (host.isBlank()) {
            throw new IllegalArgumentException("Host cannot be blank");
        }
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException("Port must be between 1 and 65535: " + port);
        }
        if (!schema.matches("[a-zA-Z_][a-zA-Z0-9_]*")) {
            throw new IllegalArgumentException("Invalid schema name: " + schema);
        }
        // All validated -- safe to proceed with connection setup
        super(host, port, user);
        // Now we can initialize our own fields
        this.schema = schema;
    }
}
```
```java
public class Order extends BaseEntity {
    // OrderItem is assumed to be a record with price() and quantity() accessors
    public Order(List<OrderItem> items, Customer customer, LocalDate deliveryDate) {
        // Business rule validation in the prologue
        if (items == null || items.isEmpty()) {
            throw new IllegalArgumentException("Order must have at least one item");
        }
        if (customer.isSuspended()) {
            throw new IllegalStateException("Cannot create order for suspended customer: "
                    + customer.getId());
        }
        if (deliveryDate.isBefore(LocalDate.now().plusDays(1))) {
            throw new IllegalArgumentException("Delivery date must be at least tomorrow");
        }
        double total = items.stream()
                .mapToDouble(item -> item.price() * item.quantity())
                .sum();
        if (total > customer.getCreditLimit()) {
            throw new IllegalStateException(
                    "Order total $%.2f exceeds credit limit $%.2f".formatted(
                            total, customer.getCreditLimit()));
        }
        // All business rules passed -- initialize the entity
        super(UUID.randomUUID(), LocalDateTime.now());
    }
}
```
Sometimes the superclass constructor expects arguments that require transformation or computation from the subclass's raw inputs. Before Java 25, this often forced you into contortions with static helper methods or inline ternary expressions.
```java
public class HttpEndpoint extends Endpoint {
    public HttpEndpoint(String rawUrl) {
        // Parse and normalize the URL before passing to superclass
        String url = rawUrl.trim().toLowerCase();
        if (!url.startsWith("http://") && !url.startsWith("https://")) {
            url = "https://" + url;
        }
        URI uri = URI.create(url);
        String host = uri.getHost();
        int port = uri.getPort() == -1 ? 443 : uri.getPort();
        String path = uri.getPath().isEmpty() ? "/" : uri.getPath();
        super(host, port, path);
    }
}
```
```java
public class Circle extends Shape {
    public Circle(double radius) {
        if (radius <= 0) {
            throw new IllegalArgumentException("Radius must be positive: " + radius);
        }
        // Compute area and circumference to pass to superclass
        double area = Math.PI * radius * radius;
        double circumference = 2 * Math.PI * radius;
        super("Circle", area, circumference);
    }
}

public class Rectangle extends Shape {
    public Rectangle(double width, double height) {
        if (width <= 0 || height <= 0) {
            throw new IllegalArgumentException(
                    "Dimensions must be positive: %s x %s".formatted(width, height));
        }
        double area = width * height;
        double perimeter = 2 * (width + height);
        super("Rectangle", area, perimeter);
    }
}
```
```java
public class ConfigurableService extends ManagedService {
    public ConfigurableService(Path configPath) {
        // Read and parse configuration before superclass init
        Properties props;
        try {
            props = new Properties();
            props.load(Files.newInputStream(configPath));
        } catch (IOException e) {
            throw new UncheckedIOException("Cannot read config: " + configPath, e);
        }
        String serviceName = props.getProperty("service.name", "default");
        int threadPoolSize = Integer.parseInt(
                props.getProperty("thread.pool.size", "10"));
        Duration timeout = Duration.parse(
                props.getProperty("timeout", "PT30S"));
        super(serviceName, threadPoolSize, timeout);
    }
}
```
The prologue is not a free-for-all. Java imposes strict rules to prevent you from accessing an uninitialized object, which would lead to subtle bugs and security vulnerabilities. Understanding these restrictions is critical to using the feature correctly.
```java
class Parent {
    int parentField;
    void parentMethod() { System.out.println("parent"); }
}

class Child extends Parent {
    int childField = 10;
    String name; // No initializer -- can be assigned in prologue

    Child(int value) {
        // CANNOT reference 'this' explicitly
        System.out.println(this);      // COMPILE ERROR
        this.hashCode();               // COMPILE ERROR

        // CANNOT read instance fields (even uninitialized ones)
        int x = childField;            // COMPILE ERROR
        int y = this.childField;       // COMPILE ERROR

        // CANNOT call instance methods
        toString();                    // COMPILE ERROR
        this.someMethod();             // COMPILE ERROR

        // CANNOT access superclass members
        int z = super.parentField;     // COMPILE ERROR
        super.parentMethod();          // COMPILE ERROR

        // CANNOT create inner class instances (they capture 'this')
        class Inner {}
        new Inner();                   // COMPILE ERROR

        super(value);
    }

    void someMethod() {}
}
```
```java
class Child extends Parent {
    final int x;     // No initializer
    String label;    // No initializer

    Child(int value, String rawLabel) {
        // CAN declare and use local variables
        int computed = value * 2 + 1;
        String normalized = rawLabel.trim().toLowerCase();

        // CAN call static methods
        Objects.requireNonNull(rawLabel);
        int validated = Math.max(0, value);

        // CAN throw exceptions
        if (value < 0) {
            throw new IllegalArgumentException("negative: " + value);
        }

        // CAN use control flow (if/else, switch, try/catch, loops)
        String prefix;
        switch (value) {
            case 0 -> prefix = "ZERO";
            case 1 -> prefix = "ONE";
            default -> prefix = "OTHER";
        }

        // CAN assign to uninitialized fields (no initializer in declaration)
        this.x = computed;
        this.label = prefix + "_" + normalized;

        // CAN access enclosing instance (if this is a nested class)
        // CAN use constructor parameters
        // CAN create objects that do not reference 'this'
        super(validated);
    }
}
```
One of the most interesting aspects is that you can assign to fields in the prologue, but only to fields that have no initializer in their declaration. You cannot read those fields -- only write to them. This enables a critical pattern: setting a field's value before super() runs, so that if the superclass constructor calls an overridable method, the field is already initialized.
```java
class Super {
    Super() {
        // Superclass constructor calls overridable method
        overriddenMethod();
    }
    void overriddenMethod() {}
}

// BEFORE Java 25: Field is 0 when overriddenMethod() is called
class OldSub extends Super {
    final int x;
    OldSub(int x) {
        super();     // Calls overriddenMethod() -- this.x is still 0!
        this.x = x;  // Too late
    }
    @Override
    void overriddenMethod() {
        System.out.println("x = " + x); // Prints "x = 0" -- uninitialized!
    }
}

// AFTER Java 25: Field is properly set before super() runs
class NewSub extends Super {
    final int x;
    NewSub(int x) {
        this.x = x;  // Set field BEFORE super()
        super();     // Calls overriddenMethod() -- this.x is already set!
    }
    @Override
    void overriddenMethod() {
        System.out.println("x = " + x); // Prints "x = 42" -- correct!
    }
}

// Demo:
class Demo {
    public static void main(String[] args) {
        new OldSub(42); // Prints: x = 0
        new NewSub(42); // Prints: x = 42
    }
}
```
This is not just a convenience -- it fixes a correctness problem that has plagued Java since its inception. Any time a superclass constructor calls an overridable method, subclass fields are in an uninitialized state. With flexible constructor bodies, you can ensure fields are set before the superclass sees them.
| Action | Allowed in Prologue? | Notes |
|---|---|---|
| Declare local variables | Yes | Normal local variable rules apply |
| Use constructor parameters | Yes | Full access to all parameters |
| Call static methods | Yes | No instance needed |
| Throw exceptions | Yes | Fail-fast validation |
| Control flow (if, switch, loops) | Yes | Full control flow |
| Assign to `this.field` (no initializer) | Yes | Write-only; cannot read back |
| Read `this.field` | No | Object not initialized yet |
| Call instance methods | No | Object not initialized yet |
| Reference `this` | No | Except for field assignment |
| Access `super` members | No | Superclass not initialized yet |
| Create inner class instances | No | Inner classes capture `this` |
| Assign to field with initializer | No | Only uninitialized fields |
Let us look at five real-world patterns and see how they improve with flexible constructor bodies. In each case, the "before" code uses a workaround, and the "after" code uses the new prologue.
```java
// BEFORE: Validation AFTER super() -- too late, superclass already ran
class OldEmployee extends Person {
    OldEmployee(String name, int age) {
        super(name, age); // Person does work with potentially bad values
        if (age < 0 || age > 150) {
            throw new IllegalArgumentException("Invalid age: " + age);
        }
    }
}

// AFTER: Validation BEFORE super() -- fail fast
class NewEmployee extends Person {
    NewEmployee(String name, int age) {
        if (age < 0 || age > 150) {
            throw new IllegalArgumentException("Invalid age: " + age);
        }
        Objects.requireNonNull(name, "Name required");
        super(name, age);
    }
}
```
```java
// BEFORE: Static helper method workaround
class OldHttpUrl extends Url {
    OldHttpUrl(String rawUrl) {
        super(normalizeUrl(rawUrl)); // Must use static method
    }
    // Forced to write a static helper just to prepare the argument
    private static String normalizeUrl(String raw) {
        String url = raw.trim().toLowerCase();
        if (!url.startsWith("https://")) {
            url = "https://" + url;
        }
        return url;
    }
}

// AFTER: Inline computation in the prologue
class NewHttpUrl extends Url {
    NewHttpUrl(String rawUrl) {
        String url = rawUrl.trim().toLowerCase();
        if (!url.startsWith("https://")) {
            url = "https://" + url;
        }
        super(url);
    }
}
```
// BEFORE: Ternary operator abuse for conditional arguments
class OldRetryPolicy extends Policy {
OldRetryPolicy(int maxRetries, Duration timeout) {
super(
maxRetries <= 0 ? 3 : maxRetries, // Default if invalid
timeout == null ? Duration.ofSeconds(30) : timeout, // Default if null
maxRetries > 10 ? "aggressive" : "standard" // Computed strategy
);
}
}
// AFTER: Clear, readable prologue
class NewRetryPolicy extends Policy {
NewRetryPolicy(int maxRetries, Duration timeout) {
// Normalize with clear variable names
int retries = maxRetries <= 0 ? 3 : maxRetries;
Duration actualTimeout = timeout != null ? timeout : Duration.ofSeconds(30);
String strategy = retries > 10 ? "aggressive" : "standard";
super(retries, actualTimeout, strategy);
}
}
// BEFORE: Multiple static helpers or deeply nested ternaries
class OldSecureConnection extends Connection {
OldSecureConnection(String host, Map<String, String> config) {
super(
extractHost(host),
extractPort(host, config),
buildSslContext(config)
);
}
private static String extractHost(String host) {
return host.contains(":") ? host.split(":")[0] : host;
}
private static int extractPort(String host, Map<String, String> config) {
if (host.contains(":")) {
return Integer.parseInt(host.split(":")[1]);
}
return Integer.parseInt(config.getOrDefault("default.port", "443"));
}
private static SSLContext buildSslContext(Map<String, String> config) {
try {
SSLContext ctx = SSLContext.getInstance(
config.getOrDefault("ssl.protocol", "TLSv1.3"));
ctx.init(null, null, null);
return ctx;
} catch (Exception e) {
throw new RuntimeException(e);
}
}
}
// AFTER: Everything inline in the prologue -- clear flow, no scattered helpers
class NewSecureConnection extends Connection {
NewSecureConnection(String host, Map<String, String> config) {
// Parse host and port
String actualHost;
int port;
if (host.contains(":")) {
String[] parts = host.split(":");
actualHost = parts[0];
port = Integer.parseInt(parts[1]);
} else {
actualHost = host;
port = Integer.parseInt(config.getOrDefault("default.port", "443"));
}
// Build SSL context
SSLContext sslContext;
try {
String protocol = config.getOrDefault("ssl.protocol", "TLSv1.3");
sslContext = SSLContext.getInstance(protocol);
sslContext.init(null, null, null);
} catch (Exception e) {
throw new RuntimeException("Failed to create SSL context", e);
}
super(actualHost, port, sslContext);
}
}
// BEFORE: Cannot make defensive copy before super()
class OldImmutableConfig extends Config {
private final Map<String, String> properties;
OldImmutableConfig(Map<String, String> properties) {
super(properties); // Passes mutable reference to super!
// Defensive copy is too late -- super already has the mutable reference
this.properties = Map.copyOf(properties);
}
}
// AFTER: Defensive copy in prologue -- super gets the immutable version
class NewImmutableConfig extends Config {
private final Map<String, String> properties;
NewImmutableConfig(Map<String, String> properties) {
// Make defensive copy BEFORE super sees it
var safeCopy = Map.copyOf(properties);
super(safeCopy); // Super gets the immutable copy
this.properties = safeCopy;
}
}
Over the years, Java developers devised several workarounds for the "super must be first" restriction. Every one of them can now be replaced with a simple prologue. Let us inventory these anti-patterns and retire them.
This was the most common approach -- make the constructor private and provide a static method that does the validation/computation:
// OLD: Static factory workaround
class OldPercentage extends Number {
private final double value;
// Private constructor -- no validation here
private OldPercentage(double validated) {
this.value = validated;
}
// Public factory does the validation
public static OldPercentage of(double value) {
if (value < 0 || value > 100) {
throw new IllegalArgumentException("Not a percentage: " + value);
}
return new OldPercentage(value);
}
// ... Number abstract methods ...
}
// NEW: Direct constructor with prologue
class NewPercentage extends Number {
private final double value;
public NewPercentage(double value) {
if (value < 0 || value > 100) {
throw new IllegalArgumentException("Not a percentage: " + value);
}
super();
this.value = value;
}
// ... Number abstract methods ...
}
When the superclass constructor needed computed arguments, developers wrote private static methods just to compute them:
// OLD: Static helper method to compute super() arguments
class OldTimestamp extends EpochTime { // EpochTime is a hypothetical base class -- java.time.Instant is final and cannot be extended
OldTimestamp(String isoString) {
super(parseToEpochSecond(isoString), parseToNano(isoString));
}
private static long parseToEpochSecond(String s) {
return Instant.parse(s).getEpochSecond();
}
private static int parseToNano(String s) {
return Instant.parse(s).getNano(); // Parses TWICE!
}
}
// NEW: Parse once in the prologue
class NewTimestamp extends EpochTime { // same hypothetical base class (Instant is final)
NewTimestamp(String isoString) {
Instant parsed = Instant.parse(isoString);
super(parsed.getEpochSecond(), parsed.getNano()); // Parse once, use twice
}
}
Notice how the old version had to parse the string twice because each static method was independent. The prologue lets you parse once and use the result for multiple super() arguments.
When the computation was "simple enough," developers crammed it into ternary expressions inside the super() call:
// OLD: Unreadable nested ternaries
class OldCacheConfig extends Config {
OldCacheConfig(String name, int size, boolean isLocal) {
super(
name == null ? "default-cache" : name.trim(),
size <= 0 ? (isLocal ? 1000 : 10000) : Math.min(size, isLocal ? 5000 : 50000),
isLocal ? Duration.ofMinutes(5) : Duration.ofHours(1),
name != null && name.startsWith("temp-") ? EvictionPolicy.LRU : EvictionPolicy.LFU
); // Good luck debugging this
}
}
// NEW: Clear, debuggable prologue
class NewCacheConfig extends Config {
NewCacheConfig(String name, int size, boolean isLocal) {
String cacheName = (name == null) ? "default-cache" : name.trim();
int maxAllowed = isLocal ? 5000 : 50000;
int defaultSize = isLocal ? 1000 : 10000;
int cacheSize = (size <= 0) ? defaultSize : Math.min(size, maxAllowed);
Duration ttl = isLocal ? Duration.ofMinutes(5) : Duration.ofHours(1);
EvictionPolicy policy = (name != null && name.startsWith("temp-"))
? EvictionPolicy.LRU
: EvictionPolicy.LFU;
super(cacheName, cacheSize, ttl, policy);
}
}
Beyond validation and argument transformation, flexible constructor bodies open up several important patterns that were previously awkward or impossible.
When a subclass accepts a builder or configuration object and needs to extract specific values for the superclass:
public class RestClient extends HttpClient {
public RestClient(RestClientConfig config) {
// Extract and validate in the prologue
Objects.requireNonNull(config, "Config required");
String baseUrl = config.getBaseUrl();
if (baseUrl == null || baseUrl.isBlank()) {
throw new IllegalArgumentException("Base URL required");
}
Duration connectTimeout = config.getConnectTimeout() != null
? config.getConnectTimeout()
: Duration.ofSeconds(10);
Duration readTimeout = config.getReadTimeout() != null
? config.getReadTimeout()
: Duration.ofSeconds(30);
int maxConnections = Math.max(1, config.getMaxConnections());
// Pass extracted values to superclass
super(baseUrl, connectTimeout, readTimeout, maxConnections);
}
}
Making defensive copies before the superclass sees the mutable data is a fundamental security and correctness practice:
public class ImmutableMatrix extends Matrix {
public ImmutableMatrix(double[][] data) {
// Deep defensive copy in the prologue
Objects.requireNonNull(data, "Data required");
if (data.length == 0) {
throw new IllegalArgumentException("Matrix must have at least one row");
}
int cols = data[0].length;
double[][] copy = new double[data.length][cols];
for (int i = 0; i < data.length; i++) {
if (data[i].length != cols) {
throw new IllegalArgumentException("Jagged arrays not allowed");
}
System.arraycopy(data[i], 0, copy[i], 0, cols);
}
// Super receives the defensive copy -- original cannot mutate our state
super(copy, data.length, cols);
}
}
Converting arguments to a canonical form before construction:
public class EmailAddress extends Address {
public EmailAddress(String email) {
Objects.requireNonNull(email, "Email required");
// Canonicalize: trim, lowercase, validate format
String canonical = email.trim().toLowerCase(Locale.ROOT);
if (!canonical.matches("^[a-z0-9._%+-]+@[a-z0-9.-]+\\.[a-z]{2,}$")) {
throw new IllegalArgumentException("Invalid email format: " + email);
}
// Split into parts for the superclass
int atIndex = canonical.indexOf('@');
String localPart = canonical.substring(0, atIndex);
String domain = canonical.substring(atIndex + 1);
super(localPart, domain);
}
}
Recording construction events before the object is fully initialized:
public class AuditedTransaction extends Transaction {
private static final Logger log = LoggerFactory.getLogger(AuditedTransaction.class);
public AuditedTransaction(BigDecimal amount, String fromAccount, String toAccount) {
// Log and audit BEFORE the transaction is constructed
log.info("Creating transaction: {} from {} to {}",
amount, fromAccount, toAccount);
if (amount.compareTo(BigDecimal.ZERO) <= 0) {
log.warn("Rejected: non-positive amount {}", amount);
throw new IllegalArgumentException("Amount must be positive");
}
if (amount.compareTo(new BigDecimal("1000000")) > 0) {
log.warn("Large transaction flagged for review: {}", amount);
AuditService.flagForReview(fromAccount, toAccount, amount);
}
// Generate a unique transaction ID
String txId = UUID.randomUUID().toString();
log.debug("Assigned transaction ID: {}", txId);
super(txId, amount, fromAccount, toAccount);
}
}
Records introduced in Java 16 have their own constructor conventions, and flexible constructor bodies work with them -- but with specific rules you need to understand.
| Constructor Type | Description | Flexible Bodies? |
|---|---|---|
| Canonical constructor | Matches all record components | No super() or this() to precede -- but can have early field assignments |
| Compact canonical constructor | No parameter list -- parameters are implicit | Same as above -- validates/transforms before implicit assignment |
| Non-canonical constructor | Different signature, delegates via this() | Yes -- statements before this() |
The big win for records is in non-canonical constructors. These must delegate to the canonical constructor via this(), and you can now put validation and computation before that delegation.
record Point(double x, double y) {
// Compact canonical constructor -- validates components
Point {
if (Double.isNaN(x) || Double.isNaN(y)) {
throw new IllegalArgumentException("Coordinates cannot be NaN");
}
}
// Non-canonical: construct from polar coordinates
Point(double radius, double angleRadians, boolean polar) {
// Prologue: validate and convert polar to cartesian
if (radius < 0) {
throw new IllegalArgumentException("Radius cannot be negative: " + radius);
}
double cartX = radius * Math.cos(angleRadians);
double cartY = radius * Math.sin(angleRadians);
this(cartX, cartY); // Delegate to canonical constructor
}
// Non-canonical: construct from string "x,y"
Point(String coordinates) {
// Prologue: parse and validate
Objects.requireNonNull(coordinates, "Coordinates string required");
String[] parts = coordinates.split(",");
if (parts.length != 2) {
throw new IllegalArgumentException(
"Expected format 'x,y' but got: " + coordinates);
}
double parsedX;
double parsedY;
try {
parsedX = Double.parseDouble(parts[0].trim());
parsedY = Double.parseDouble(parts[1].trim());
} catch (NumberFormatException e) {
throw new IllegalArgumentException(
"Invalid coordinate numbers: " + coordinates, e);
}
this(parsedX, parsedY);
}
}
record DateRange(LocalDate start, LocalDate end) {
// Canonical validates ordering
DateRange {
if (start.isAfter(end)) {
throw new IllegalArgumentException(
"Start date %s is after end date %s".formatted(start, end));
}
}
// Construct from a pair of ISO strings
DateRange(String startStr, String endStr) {
LocalDate parsedStart = LocalDate.parse(startStr);
LocalDate parsedEnd = LocalDate.parse(endStr);
this(parsedStart, parsedEnd); // Delegates to canonical
}
// Construct a range of N days starting from a date
DateRange(LocalDate start, int days) {
if (days <= 0) {
throw new IllegalArgumentException("Days must be positive: " + days);
}
LocalDate computedEnd = start.plusDays(days - 1);
this(start, computedEnd);
}
// Construct for "this week" (Monday to Sunday)
DateRange(int isoWeekNumber, int year) {
if (isoWeekNumber < 1 || isoWeekNumber > 53) {
throw new IllegalArgumentException("Invalid week: " + isoWeekNumber);
}
LocalDate monday = LocalDate.of(year, 1, 1)
.with(java.time.temporal.WeekFields.ISO.weekOfWeekBasedYear(), isoWeekNumber)
.with(java.time.DayOfWeek.MONDAY);
LocalDate sunday = monday.plusDays(6);
this(monday, sunday);
}
}
record ImmutablePair<A, B>(A first, B second) {
    ImmutablePair {
        Objects.requireNonNull(first, "First element cannot be null");
        Objects.requireNonNull(second, "Second element cannot be null");
    }
    // Construct from a Map.Entry with defensive extraction
    ImmutablePair(Map.Entry<? extends A, ? extends B> entry) {
        Objects.requireNonNull(entry, "Entry cannot be null");
        // Extract values in prologue -- entry might be modified concurrently
        A extractedFirst = entry.getKey();
        B extractedSecond = entry.getValue();
        this(extractedFirst, extractedSecond);
    }
}
Flexible constructor bodies are a welcome improvement, but like any feature, they can be misused. Here are the guidelines I follow for writing clean, maintainable constructor prologues.
The prologue should be short, focused, and obvious. If your prologue is longer than 10-15 lines, consider whether some of that logic belongs in a separate method or class.
// GOOD: Short, focused prologue
public class ApiClient extends HttpClient {
public ApiClient(String baseUrl) {
Objects.requireNonNull(baseUrl, "Base URL required");
String normalized = baseUrl.endsWith("/")
? baseUrl.substring(0, baseUrl.length() - 1)
: baseUrl;
super(URI.create(normalized));
}
}
// BAD: Prologue doing too much work
public class OverEngineered extends Service {
public OverEngineered(Path configFile) {
// 50 lines of config parsing, network calls, database lookups...
// This belongs in a factory method or builder, not a prologue
Properties props = new Properties();
// ... 40 more lines ...
super(/* many args */);
}
}
Establish consistent validation patterns across your codebase:
// Pattern 1: Fail-fast with descriptive messages
public class Account extends Entity {
public Account(String id, BigDecimal balance, String currency) {
Objects.requireNonNull(id, "Account ID cannot be null");
Objects.requireNonNull(balance, "Balance cannot be null");
Objects.requireNonNull(currency, "Currency cannot be null");
if (id.length() != 10) {
throw new IllegalArgumentException(
"Account ID must be 10 characters, got: " + id.length());
}
if (balance.signum() < 0) {
throw new IllegalArgumentException(
"Initial balance cannot be negative: " + balance);
}
if (!Set.of("USD", "EUR", "GBP", "JPY").contains(currency)) {
throw new IllegalArgumentException(
"Unsupported currency: " + currency);
}
super(id, balance, currency);
}
}
// Pattern 2: Use Preconditions utility (like Guava)
public class Shipment extends Entity {
public Shipment(String trackingId, double weight, String destination) {
Preconditions.checkNotNull(trackingId, "Tracking ID required");
Preconditions.checkArgument(weight > 0, "Weight must be positive: %s", weight);
Preconditions.checkArgument(
destination != null && !destination.isBlank(),
"Destination required");
super(trackingId);
}
}
Use the prologue when:

- Validating constructor arguments before calling super()
- Computing or transforming the arguments you pass to super()
- Making defensive copies of mutable arguments before super() sees them

Do NOT use the prologue for:

- Heavy computation, I/O, or network calls -- that work belongs in a factory method or builder
- Logic that needs the fully constructed object -- the prologue cannot reference this
If you have existing code with the old workaround patterns, here is how to migrate:
1. Identify private static methods called only from constructors -- these are likely argument-preparation helpers
2. Inline their logic into the constructor prologue
3. Delete the now-unused helpers

// Step 1: Identify the pattern
class Before extends Parent {
Before(String input) {
super(validate(input), transform(input)); // Static helpers
}
private static String validate(String s) {
if (s == null || s.isBlank()) throw new IllegalArgumentException("blank");
return s;
}
private static int transform(String s) {
return s.trim().length();
}
}
// Step 2 & 3: Inline and delete
class After extends Parent {
After(String input) {
if (input == null || input.isBlank()) {
throw new IllegalArgumentException("blank");
}
String trimmed = input.trim();
int length = trimmed.length();
super(trimmed, length);
}
// No more static helpers needed
}
Flexible Constructor Bodies remove one of Java's oldest and most frustrating restrictions. The ability to write code before super() or this() enables cleaner validation, simpler argument preparation, safer field initialization, and the elimination of static helper method workarounds. The restrictions in the prologue (no this access, no superclass member access) are sensible and prevent the bugs that the old rule was originally trying to avoid. With Java 25, constructor code can finally be written in the order you think about it: validate first, then initialize.
Java 25 is the next Long-Term Support (LTS) release, expected in September 2025. As an LTS release, it is the version that enterprises will standardize on for the next three to five years, making it the natural upgrade target after Java 21. Between Java 22 and Java 25, four releases shipped, each advancing features through preview, incubation, and finalization stages.
While the headline features — module imports and simplified source files — get dedicated coverage in their own tutorial, Java 25 ships with a broad set of improvements that touch the language, the runtime, and the standard library. Some are brand new. Others have been in preview for multiple releases and are finally production-ready.
This post covers every significant Java 25 improvement beyond the module import and simplified source file features. Here is what we will go through:
| Feature | JEP | Status in Java 25 | Impact |
|---|---|---|---|
| Primitive Types in Patterns | JEP 488 | Final | High — pattern matching works with all types now |
| Stable Values | JEP 502 | Preview | Medium — lazy initialization done right |
| Structured Concurrency | JEP 499 (expected) | Final (expected) | High — structured thread management for production |
| Scoped Values | JEP 487 (expected) | Final (expected) | High — ThreadLocal replacement |
| Class-File API | JEP 484 | Final | Medium — standard bytecode manipulation |
| Ahead-of-Time Class Loading & Linking | JEP 483 | Final | High — dramatically faster startup |
| Compact Object Headers | JEP 450 | Experimental | Medium — reduced memory footprint |
| Vector API | JEP 489 | Incubator | Medium — SIMD operations continue to mature |
Let us go through each one in detail.
Pattern matching has been evolving in Java since Java 16. We got instanceof pattern matching, then switch pattern matching, then record patterns. But all of these worked only with reference types — objects, not primitives. If you wanted to match on an int, double, or boolean, you were stuck with old-fashioned if-else chains or traditional switch statements.
JEP 488 closes this gap. In Java 25, pattern matching works with all types, including primitives. This applies to both instanceof expressions and switch expressions.
The most useful application is in switch expressions. Before Java 25, you could not use guards or pattern matching syntax with primitive types in switch. Now you can:
// BEFORE: Traditional switch with if-else for range checks
public String getTemperatureCategory(int tempFahrenheit) {
if (tempFahrenheit < 0) {
return "Extreme cold";
} else if (tempFahrenheit < 32) {
return "Freezing";
} else if (tempFahrenheit < 60) {
return "Cold";
} else if (tempFahrenheit < 80) {
return "Comfortable";
} else if (tempFahrenheit < 100) {
return "Hot";
} else {
return "Extreme heat";
}
}
// AFTER: Primitive patterns with guards in switch (Java 25)
public String getTemperatureCategory(int tempFahrenheit) {
return switch (tempFahrenheit) {
case int t when t < 0 -> "Extreme cold";
case int t when t < 32 -> "Freezing";
case int t when t < 60 -> "Cold";
case int t when t < 80 -> "Comfortable";
case int t when t < 100 -> "Hot";
default -> "Extreme heat";
};
}
The pattern case int t when t < 32 does two things: it binds the value to a new variable t, and it applies a guard condition. This is the same when guard syntax used with reference type patterns, now extended to primitives.
You can also use primitive patterns with instanceof. This is particularly useful for safe narrowing conversions:
// Safe narrowing conversion with instanceof
public void processNumber(long value) {
if (value instanceof int i) {
// Safe: value fits in an int
System.out.println("Fits in int: " + i);
processAsInt(i);
} else {
// Value is too large for int
System.out.println("Needs long: " + value);
processAsLong(value);
}
}
// Example calls:
processNumber(42L); // Fits in int: 42
processNumber(3_000_000_000L); // Needs long: 3000000000
Before Java 25, safe narrowing required manual range checking:
// BEFORE: Manual range checking for narrowing
public void processNumber(long value) {
if (value >= Integer.MIN_VALUE && value <= Integer.MAX_VALUE) {
int i = (int) value;
processAsInt(i);
} else {
processAsLong(value);
}
}
// AFTER: Primitive instanceof handles the range check for you
public void processNumber(long value) {
if (value instanceof int i) {
processAsInt(i);
} else {
processAsLong(value);
}
}
Primitive patterns also work with record patterns, enabling deep destructuring that includes primitive components:
record Temperature(double value, String unit) {}
record WeatherReading(Temperature temp, int humidity, long timestamp) {}
public String describeWeather(WeatherReading reading) {
return switch (reading) {
case WeatherReading(Temperature(double v, String u), int h, long ts)
when v > 100.0 && u.equals("F") ->
"Dangerously hot! Temperature: " + v + "°F, Humidity: " + h + "%";
case WeatherReading(Temperature(double v, String u), int h, long ts)
when h > 90 ->
"Very humid! Humidity at " + h + "%, Temp: " + v + "°" + u;
case WeatherReading(Temperature t, int h, long ts) ->
"Normal: " + t.value() + "°" + t.unit() + ", Humidity: " + h + "%";
};
}
Java's compiler can now check exhaustiveness for primitive switch expressions. Because primitive types have known ranges, the compiler verifies that all possible values are covered:
// boolean is naturally exhaustive
public String boolSwitch(boolean flag) {
return switch (flag) {
case true -> "Enabled";
case false -> "Disabled";
// No default needed -- all boolean values are covered
};
}
// For int/long/double, you need a default or catch-all pattern
public String intCategory(int value) {
return switch (value) {
case 0 -> "Zero";
case int i when i > 0 -> "Positive: " + i;
case int i -> "Negative: " + i;
// Exhaustive: covers 0, positive, and everything else (negative)
};
}
Primitive patterns in Java 25 complete the pattern matching story. Every type in Java -- objects, records, sealed types, and now primitives -- can participate in pattern matching. This makes switch expressions a truly universal dispatching mechanism.
Lazy initialization is one of the most common patterns in Java. You have a field that is expensive to compute, so you defer its creation until first access. The problem is that doing this correctly in a concurrent environment is surprisingly hard. The classic double-checked locking pattern is notoriously error-prone, and simpler approaches either sacrifice thread safety or performance.
Java 25 introduces the StableValue API (preview) to solve this problem once and for all. A StableValue is a container that holds a value computed lazily on first access, with guaranteed thread safety and optimal performance after initialization.
Here is the classic approach and its pitfalls:
// Approach 1: Not thread-safe
public class ConnectionPool {
private DataSource dataSource;
public DataSource getDataSource() {
if (dataSource == null) {
// Race condition: two threads can both see null
// and create two DataSource instances
dataSource = createDataSource();
}
return dataSource;
}
}
// Approach 2: Thread-safe but slow
public class ConnectionPool {
private DataSource dataSource;
public synchronized DataSource getDataSource() {
if (dataSource == null) {
dataSource = createDataSource();
}
return dataSource;
// Problem: synchronized on every access, even after initialization
}
}
// Approach 3: Double-checked locking (correct but complex)
public class ConnectionPool {
private volatile DataSource dataSource;
public DataSource getDataSource() {
DataSource result = dataSource;
if (result == null) {
synchronized (this) {
result = dataSource;
if (result == null) {
dataSource = result = createDataSource();
}
}
}
return result;
// Correct, but verbose and easy to get wrong
}
}
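For completeness, there is a fourth pre-25 idiom worth knowing: the initialization-on-demand holder class. It is thread-safe and has zero per-access cost because the JVM guarantees lazy, once-only class initialization, but it only works for static singletons, not per-instance fields. A minimal sketch (the Settings class and its counter are illustrative):

```java
// Approach 4 (pre-25): initialization-on-demand holder idiom
// Thread-safe and fast, but limited to static state
public class Settings {
    private static int creations = 0; // tracks how many times the constructor ran

    private Settings() {
        creations++;
    }

    private static class Holder {
        // JVM class initialization runs exactly once, on first access
        static final Settings INSTANCE = new Settings();
    }

    public static Settings getInstance() {
        return Holder.INSTANCE; // Holder is initialized here, lazily
    }

    public static int creationCount() {
        return creations;
    }
}
```

StableValue generalizes this idiom to instance fields and arbitrary values, without requiring a nested class per lazy field.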
StableValue provides a one-liner replacement for all of the above approaches:
// Java 25 Preview: StableValue for lazy initialization
public class ConnectionPool {
    // Lazy, thread-safe, optimal performance after initialization
    private final StableValue<DataSource> dataSource = StableValue.of();

    public DataSource getDataSource() {
        return dataSource.orElseSet(this::createDataSource);
    }

    private DataSource createDataSource() {
        System.out.println("Creating DataSource (expensive operation)...");
        // ... setup connection pool
        return new HikariDataSource(config);
    }
}

The StableValue.of() factory method creates an unset stable value. The first call to orElseSet(Supplier) executes the supplier and stores the result. Subsequent calls return the cached result with no synchronization overhead -- the JVM can optimize the access to be as fast as reading a final field.
The API also provides stable lists and maps for cases where you need lazy initialization of individual elements:
// Stable list: each element is lazily initialized independently
private final List<Logger> loggers = StableValue.list(10,
    i -> Logger.getLogger("module-" + i));
// Accessing loggers.get(3) only initializes the logger at index 3
// Other loggers remain uninitialized until accessed

// Stable map: each value is lazily initialized by key
private final Map<String, Config> configs = StableValue.map(
    Set.of("database", "cache", "messaging"),
    key -> loadConfiguration(key));
// Accessing configs.get("database") only loads the database config
// Other configs remain uninitialized until accessed
Beyond convenience, StableValue gives the JVM optimization hints that manual lazy initialization cannot. Because the JVM knows that a StableValue will be set exactly once, it can treat the value as a constant after initialization. This means the JIT compiler can inline the value, eliminate null checks, and perform constant folding -- optimizations that are impossible with volatile fields or synchronized blocks.
Think of StableValue as a lazy final field: it behaves like final after initialization but defers the computation until needed.
Note: StableValue is a preview feature in Java 25. Enable it with --enable-preview. It is expected to be finalized in a future release.
Structured concurrency first appeared in Java 19 as an incubator API and has gone through multiple rounds of preview refinement since. In Java 25, it is expected to be finalized with StructuredTaskScope as the core API.
The fundamental idea is simple: when you fork concurrent tasks, they should be treated as a unit. If one fails, the others should be cancelled. When the scope exits, all tasks must be complete. No thread leaks, no orphaned tasks, no forgotten futures.
Here is what concurrent code looks like without structured concurrency:
// BEFORE: Unstructured concurrency -- error-prone
public UserProfile loadUserProfile(long userId) throws Exception {
ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
Future<User> userFuture = executor.submit(() -> fetchUser(userId));
Future<List<Order>> ordersFuture = executor.submit(() -> fetchOrders(userId));
Future<Preferences> prefsFuture = executor.submit(() -> fetchPreferences(userId));
// Problem 1: If fetchOrders() fails, fetchUser() and fetchPreferences()
// keep running even though the result is useless
// Problem 2: If this thread is interrupted, the futures are orphaned
// Problem 3: Exception handling is scattered and complex
try {
User user = userFuture.get(5, TimeUnit.SECONDS);
List<Order> orders = ordersFuture.get(5, TimeUnit.SECONDS);
Preferences prefs = prefsFuture.get(5, TimeUnit.SECONDS);
return new UserProfile(user, orders, prefs);
} catch (ExecutionException e) {
// Which task failed? Hard to tell without checking each one.
throw new RuntimeException("Failed to load profile", e);
} finally {
executor.shutdown(); // Easy to forget
}
}
Here is the same code with structured concurrency:
// AFTER: Structured concurrency -- clean and safe
import java.util.concurrent.StructuredTaskScope;
public UserProfile loadUserProfile(long userId) throws Exception {
try (var scope = StructuredTaskScope.open()) {
// Fork concurrent tasks within the scope
var userTask = scope.fork(() -> fetchUser(userId));
var ordersTask = scope.fork(() -> fetchOrders(userId));
var prefsTask = scope.fork(() -> fetchPreferences(userId));
// Wait for all tasks to complete
scope.join();
// Get results -- all tasks are guaranteed complete
return new UserProfile(
userTask.get(),
ordersTask.get(),
prefsTask.get()
);
}
// Scope is closed: all tasks are done, no thread leaks
}
Key improvements:

- If one task fails, the scope cancels the others -- no wasted work on results that will be discarded
- Tasks cannot outlive the scope -- no thread leaks or orphaned futures
- Errors surface through scope.join() with a clear failure cause, instead of being scattered across individual futures
- The try-with-resources block guarantees cleanup -- there is no executor to forget to shut down
Structured concurrency offers different joining strategies through Joiner policies that control how the scope behaves when tasks complete or fail:
// Strategy 1: Wait for all tasks, throw on any failure (default)
try (var scope = StructuredTaskScope.open()) {
var task1 = scope.fork(() -> fetchFromServiceA());
var task2 = scope.fork(() -> fetchFromServiceB());
scope.join();
// Both tasks must succeed
return combine(task1.get(), task2.get());
}
// Strategy 2: Return first successful result, cancel the rest
try (var scope = StructuredTaskScope.open(
StructuredTaskScope.Joiner.anySuccessfulResultOrThrow())) {
scope.fork(() -> fetchFromPrimary());
scope.fork(() -> fetchFromFallback());
scope.fork(() -> fetchFromCache());
// Returns the first successful result; cancels slower tasks
return scope.join();
}
// Strategy 3: Collect all results (including failures)
try (var scope = StructuredTaskScope.open(
StructuredTaskScope.Joiner.allSuccessfulOrThrow())) {
scope.fork(() -> validateAddress(address));
scope.fork(() -> checkCreditScore(userId));
scope.fork(() -> verifyIdentity(userId));
// All must succeed; returns all results
return scope.join();
}
Structured concurrency works naturally with virtual threads (finalized in Java 21). Each forked task runs on a virtual thread, meaning you can fork thousands of tasks without exhausting platform threads. The combination of virtual threads and structured concurrency makes Java's concurrency model one of the most powerful in any mainstream language.
Scoped values are the modern replacement for ThreadLocal. If you have ever used ThreadLocal to pass context (like a request ID, user identity, or transaction context) through a call chain without explicit parameters, scoped values do the same thing but better, safer, and faster.
// ThreadLocal problems:
// 1. Mutable -- can be changed at any time from anywhere
private static final ThreadLocal<String> REQUEST_ID = new ThreadLocal<>();

public void handleRequest(String requestId) {
    REQUEST_ID.set(requestId);
    processRequest();
    REQUEST_ID.remove(); // Easy to forget -> memory leak!
}

// 2. Unbounded lifetime -- lives as long as the thread lives
//    With thread pools, values persist across unrelated requests
// 3. Expensive with virtual threads -- each virtual thread
//    gets its own copy, and there can be millions of them
// 4. Inheritance is broken -- InheritableThreadLocal copies values
//    to child threads, but there is no way to scope the lifetime
// Java 25: ScopedValue -- immutable, bounded lifetime, inherited by child threads
import java.lang.ScopedValue;

private static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

public void handleRequest(String requestId) {
    ScopedValue.where(REQUEST_ID, requestId).run(() -> {
        // REQUEST_ID is bound to requestId within this scope
        processRequest();
        // After this block, the binding is automatically removed
        // No cleanup needed, no memory leaks possible
    });
}

private void processRequest() {
    // Access the scoped value anywhere in the call chain
    String id = REQUEST_ID.get();
    System.out.println("Processing request: " + id);
    callDatabaseLayer();
}

private void callDatabaseLayer() {
    // Still accessible -- inherited through the call chain
    String id = REQUEST_ID.get();
    System.out.println("DB query for request: " + id);
}
The real power of scoped values shows when combined with structured concurrency. Scoped values are automatically inherited by child tasks in a StructuredTaskScope:
private static final ScopedValue<UserContext> USER_CTX = ScopedValue.newInstance();

public void handleApiRequest(UserContext ctx) {
    ScopedValue.where(USER_CTX, ctx).run(() -> {
        try (var scope = StructuredTaskScope.open()) {
            // Both tasks automatically inherit USER_CTX
            var audit = scope.fork(() -> {
                // USER_CTX.get() works here -- inherited from parent
                logAuditEvent("action started", USER_CTX.get().userId());
                return true;
            });
            var result = scope.fork(() -> {
                // USER_CTX.get() works here too
                return processForUser(USER_CTX.get());
            });
            scope.join();
        }
    });
}
Key advantages of ScopedValue over ThreadLocal:
| Aspect | ThreadLocal | ScopedValue |
|---|---|---|
| Mutability | Mutable -- can be set/changed anytime | Immutable within a scope -- set once |
| Lifetime | Unbounded -- lives with the thread | Bounded -- lives within the where(...).run(...) scope |
| Cleanup | Manual -- must call remove() | Automatic -- cleaned up when scope exits |
| Memory leaks | Common -- forgotten remove() calls | Impossible -- bounded lifetime |
| Virtual thread cost | Expensive -- each thread gets a copy | Cheap -- optimized for millions of threads |
| Child thread inheritance | InheritableThreadLocal (copies, expensive) | Automatic with StructuredTaskScope (shares, cheap) |
The Class-File API provides a standard, JDK-included API for reading, writing, and transforming Java class files. This is a big deal for frameworks and tools that work with bytecode -- think Spring, Hibernate, Mockito, Byte Buddy, and build tools.
Until now, the Java ecosystem relied on third-party libraries for bytecode manipulation:
The problem is that these libraries must be updated every time a new class file version ships (which is every six months with Java's release cadence). If ASM does not support the latest class file format, frameworks that depend on it break. The JDK's own tools (like javac, jlink, and jar) had their own internal bytecode library that was not available to external users.
The Class-File API makes bytecode manipulation a first-class platform feature that is always in sync with the latest class file format.
import java.lang.classfile.*;
// Read and inspect a class file
public void inspectClass(Path classFile) throws IOException {
ClassModel cm = ClassFile.of().parse(classFile);
System.out.println("Class: " + cm.thisClass().asInternalName());
System.out.println("Version: " + cm.majorVersion() + "." + cm.minorVersion());
System.out.println("Flags: " + cm.flags().flagsMask());
// List all methods
System.out.println("\nMethods:");
for (MethodModel method : cm.methods()) {
System.out.printf(" %s %s%n",
method.methodName().stringValue(),
method.methodType().stringValue());
}
// List all fields
System.out.println("\nFields:");
for (FieldModel field : cm.fields()) {
System.out.printf(" %s %s%n",
field.fieldName().stringValue(),
field.fieldType().stringValue());
}
}
The API uses a functional transformation model -- you pass a transformation function that can modify, add, or remove elements:
// Add logging to every method entry
public byte[] addMethodLogging(byte[] classBytes) {
ClassFile cf = ClassFile.of();
return cf.transformClass(cf.parse(classBytes), (builder, element) -> {
if (element instanceof MethodModel method) {
// Transform each method to add entry logging
builder.transformMethod(method, (mb, me) -> {
if (me instanceof CodeModel code) {
mb.withCode(cb -> {
// Add: System.out.println("Entering: " + methodName)
cb.getstatic(ClassDesc.of("java.lang.System"), "out",
ClassDesc.of("java.io.PrintStream"));
cb.ldc("Entering: " + method.methodName().stringValue());
cb.invokevirtual(ClassDesc.of("java.io.PrintStream"),
"println",
MethodTypeDesc.of(ConstantDescs.CD_void, // java.lang.constant.ConstantDescs
ClassDesc.of("java.lang.String")));
// Then include the original code
code.forEach(cb::with);
});
} else {
mb.with(me);
}
});
} else {
builder.with(element);
}
});
}
The Class-File API is not something most application developers will use directly. But it is critical infrastructure for the frameworks and tools that application developers depend on. With the API in the JDK, frameworks can drop their ASM dependency and use a standard API that is always compatible with the latest Java version.
Java applications have always been criticized for slow startup times compared to native applications. Every time a Java application starts, the JVM must find, load, verify, and link hundreds or thousands of classes. For a Spring Boot application, this can take several seconds -- an eternity in a containerized, scale-to-zero world.
JEP 483 introduces ahead-of-time (AOT) class loading and linking, which performs these steps once during a training run and caches the results. On subsequent starts, the JVM loads the pre-processed classes from the cache, skipping the expensive discovery and verification steps.
The process has three steps:
// Step 1: Training run -- record class loading behavior
// $ java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -jar myapp.jar

// Step 2: Generate the AOT cache
// $ java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -jar myapp.jar

// Step 3: Production run -- use the AOT cache
// $ java -XX:AOTCache=app.aot -jar myapp.jar

// Result: Significantly faster startup time
The improvement depends on the size and complexity of the application:
| Application Type | Typical Startup Without AOT | With AOT Cache | Improvement |
|---|---|---|---|
| Simple CLI tool | ~100ms | ~50ms | ~2x faster |
| Spring Boot microservice | ~3-5 seconds | ~1-2 seconds | ~2-3x faster |
| Large enterprise application | ~10-20 seconds | ~4-8 seconds | ~2-3x faster |
This is not a replacement for GraalVM native image -- native image eliminates the JVM entirely and starts in milliseconds. But AOT class loading provides a significant improvement without the native image trade-offs (reflection limitations, longer build times, reduced peak performance). You keep the full JVM with JIT compilation and all runtime capabilities while getting dramatically faster startup.
The AOT cache can be generated as part of your build pipeline. For containerized applications, generate the cache in your Docker build stage:
// Dockerfile with AOT cache generation
// FROM eclipse-temurin:25-jre AS builder
// COPY target/myapp.jar /app/myapp.jar
// WORKDIR /app
//
// # Training run
// RUN java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
//     -jar myapp.jar --spring.profiles.active=aot-training & \
//     sleep 10 && kill %1
//
// # Generate cache
// RUN java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
//     -XX:AOTCache=app.aot -jar myapp.jar
//
// FROM eclipse-temurin:25-jre
// COPY --from=builder /app/myapp.jar /app/myapp.jar
// COPY --from=builder /app/app.aot /app/app.aot
// CMD ["java", "-XX:AOTCache=/app/app.aot", "-jar", "/app/myapp.jar"]
AOT class loading is fully transparent to your application code. You do not need to change any source code, annotations, or configurations. It is purely a deployment-time optimization.
Every Java object has a header that contains metadata: the object's class pointer, hash code, lock state, and garbage collector information. In the current JVM, this header is 12-16 bytes on 64-bit systems. For small objects (like a Point record with two int fields), the header can be as large as the payload.
Compact object headers (JEP 450, experimental in Java 25) reduce the header size to 8 bytes by compressing the metadata representation. This saves approximately 10-20% of heap memory for typical applications -- a significant improvement that requires zero code changes.
| Object | Current Header | Compact Header | Payload | Memory Saved |
|---|---|---|---|---|
| Integer (wrapper) | 12 bytes | 8 bytes | 4 bytes | 25% |
| Point(int x, int y) | 12 bytes | 8 bytes | 8 bytes | 20% |
| String (empty) | 12 bytes | 8 bytes | 12 bytes | 17% |
| HashMap.Node | 12 bytes | 8 bytes | 32 bytes | 9% |
Enable compact object headers with -XX:+UseCompactObjectHeaders. This is experimental and may have edge cases, so test thoroughly before using in production.
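The savings column in the table above follows from simple arithmetic: 4 bytes saved, divided by the old total of header plus payload. A quick sketch that reproduces those figures (it deliberately ignores the JVM's 8-byte object alignment, which can shrink or erase the savings for any individual object):

```java
public class HeaderSavings {

    // Memory saved when the header shrinks from 12 to 8 bytes, as a
    // rounded percentage of the old total (header + payload).
    // Alignment padding is deliberately ignored to match the table.
    static long savedPercent(int payloadBytes) {
        int oldTotal = 12 + payloadBytes;
        int newTotal = 8 + payloadBytes;
        return Math.round(100.0 * (oldTotal - newTotal) / oldTotal);
    }

    public static void main(String[] args) {
        System.out.println(savedPercent(4));  // Integer      -> 25
        System.out.println(savedPercent(8));  // Point        -> 20
        System.out.println(savedPercent(12)); // empty String -> 17
        System.out.println(savedPercent(32)); // HashMap.Node -> 9
    }
}
```

The fixed 4-byte saving matters most for small objects, which is why heap-wide savings depend heavily on your object size distribution.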
The Vector API has been in incubator since Java 16, enabling SIMD (Single Instruction, Multiple Data) operations in Java. SIMD allows a single CPU instruction to process multiple data elements simultaneously -- for example, adding four pairs of floats in a single instruction instead of four separate instructions.
The API provides types like FloatVector, IntVector, and DoubleVector that map directly to hardware SIMD registers (SSE, AVX, AVX-512 on x86; NEON on ARM):
// Traditional scalar loop -- processes one element at a time
public float[] scalarMultiply(float[] a, float[] b) {
float[] result = new float[a.length];
for (int i = 0; i < a.length; i++) {
result[i] = a[i] * b[i]; // One multiplication per iteration
}
return result;
}
// Vector API -- processes multiple elements per iteration
import jdk.incubator.vector.*;
public float[] vectorMultiply(float[] a, float[] b) {
var species = FloatVector.SPECIES_256; // 256-bit vectors (8 floats)
float[] result = new float[a.length];
int i = 0;
for (; i < species.loopBound(a.length); i += species.length()) {
var va = FloatVector.fromArray(species, a, i);
var vb = FloatVector.fromArray(species, b, i);
var vr = va.mul(vb); // 8 multiplications in one instruction
vr.intoArray(result, i);
}
// Handle remaining elements
for (; i < a.length; i++) {
result[i] = a[i] * b[i];
}
return result;
}
The Vector API's finalization is blocked by Project Valhalla -- specifically, the value types feature. The Vector API's types (like FloatVector) need to be value types to achieve optimal performance (stack allocation, no object headers, no GC pressure). Until value types are available in the language, finalizing the Vector API would lock in a suboptimal design.
In the meantime, the JIT compiler's auto-vectorization has improved significantly. For many common patterns, the JVM automatically uses SIMD instructions without the Vector API. The API remains important for cases where auto-vectorization cannot figure out the optimal strategy, particularly in scientific computing, machine learning, data processing, and cryptography.
Beyond the major features, Java 25 includes several smaller but notable improvements:
In previous Java versions, statements before super() or this() calls in constructors were forbidden. Java 25 relaxes this restriction, allowing you to validate and compute arguments before delegating to another constructor:
// BEFORE: Had to use static helper methods or factory methods
public class PositiveRange {
private final int low;
private final int high;
public PositiveRange(int low, int high) {
// Could NOT put validation before super() call
super(); // Must be first statement
if (low < 0 || high < 0) throw new IllegalArgumentException("Must be positive");
if (low > high) throw new IllegalArgumentException("low must be <= high");
this.low = low;
this.high = high;
}
}
// AFTER: Statements allowed before super()/this()
public class PositiveRange {
private final int low;
private final int high;
public PositiveRange(int low, int high) {
// Validation BEFORE calling super -- now legal in Java 25
if (low < 0 || high < 0) throw new IllegalArgumentException("Must be positive");
if (low > high) throw new IllegalArgumentException("low must be <= high");
super();
this.low = low;
this.high = high;
}
}
A new standard API for Key Derivation Functions (KDFs) like HKDF and Argon2. This is important for applications that implement custom encryption schemes, key management, or password hashing:
import javax.crypto.KDF;
import javax.crypto.SecretKey;
import javax.crypto.spec.HKDFParameterSpec;

// Derive an encryption key using HKDF
KDF hkdf = KDF.getInstance("HKDF-SHA256");
SecretKey derived = hkdf.deriveKey("AES",
    HKDFParameterSpec.ofExtract()
        .addIKM(inputKeyMaterial)
        .addSalt(salt)
        .thenExpand(info, 32));
// Use the derived key for AES encryption
The Z Garbage Collector continues to improve in Java 25. Generational ZGC (introduced in Java 21) has been further optimized with better young generation sizing, improved concurrent relocation, and reduced pause times. For applications that previously tuned ZGC parameters manually, the defaults are now better and require less tuning.
Java 25 continues the cleanup of old APIs:
The memory-access methods in sun.misc.Unsafe are further restricted -- use the Foreign Function & Memory API (finalized in Java 22) instead.

| Feature | JEP | Status | Category | Description |
|---|---|---|---|---|
| Module Import Declarations | 476 | Final | Language | Import all exported types from a module with one statement |
| Implicitly Declared Classes & Instance Main | 477 | Final | Language | Write Java programs without class declarations |
| Primitive Types in Patterns | 488 | Final | Language | Pattern matching for instanceof and switch with primitives |
| Flexible Constructor Bodies | 492 | Final | Language | Statements before super()/this() in constructors |
| Structured Concurrency | 499 | Final (expected) | Library | Structured thread management with StructuredTaskScope |
| Scoped Values | 487 | Final (expected) | Library | Immutable, scoped ThreadLocal replacement |
| Class-File API | 484 | Final | Library | Standard API for bytecode reading/writing/transformation |
| Key Derivation Function API | 478 | Final | Library | Standard KDF API (HKDF, etc.) |
| Ahead-of-Time Class Loading | 483 | Final | Runtime | Pre-load and pre-link classes for faster startup |
| Stable Values | 502 | Preview | Library | Lazy, thread-safe, optimizable value holders |
| Compact Object Headers | 450 | Experimental | Runtime | Reduced object header size (12 bytes to 8 bytes) |
| Vector API | 489 | Incubator | Library | SIMD operations for data-parallel computation |
Java 25 is a substantial release. The finalization of structured concurrency and scoped values, combined with language improvements like primitive patterns and flexible constructors, makes this the most feature-rich LTS release since Java 21. The runtime improvements (AOT class loading, compact headers, ZGC tuning) mean that upgrading delivers immediate performance benefits even before you adopt any new language features.
For production teams, the message is clear: start planning your Java 25 migration now. The combination of new features, performance improvements, and the LTS support window makes Java 25 the version you want to be running by the end of 2025.
Since Java 8, the Stream API has been one of the most powerful tools in your toolkit. You can filter, map, flatMap, reduce, and collect your way through most data-processing tasks with clean, declarative code. But if you have spent enough time with streams, you have inevitably hit a wall: there is no way to define your own intermediate operations.
Think about it. Java gives you Collector as an extension point for terminal operations — you can write custom collectors that fold, group, partition, or summarize data in any way you want. But for intermediate operations? You are stuck with what the API provides. If the built-in map(), filter(), flatMap(), distinct(), sorted(), peek(), limit(), skip(), and takeWhile() do not cover your use case, you have to break out of the stream pipeline, materialize into a collection, manipulate it imperatively, and then stream it again. That defeats the entire point.
Consider an analogy: imagine you are building an assembly line in a factory. Java gave you the ability to customize the packaging station at the end (collectors), but it locked down every station in the middle of the line. Want a station that groups items into batches of five? Want one that computes a running average and passes it along? Want one that deduplicates consecutive items? You had to hack around the limitation or abandon the assembly line entirely.
Java 25 changes this with Stream Gatherers (JEP 485). Gatherers are the missing counterpart to collectors — they let you define custom intermediate operations that plug directly into a stream pipeline. A gatherer can transform elements one-to-one, one-to-many, many-to-one, or many-to-many. It can carry state across elements. It can short-circuit to stop processing early. It can even support parallel execution. And just like collectors, Java ships several built-in gatherers that handle common use cases out of the box.
Stream Gatherers were previewed in Java 22 (JEP 461) and Java 23 (JEP 473), and finalized without changes in Java 24 (JEP 485), making them a standard feature in Java 25 LTS. This post covers everything you need to know: the interface anatomy, all five built-in gatherers, how to write your own, and real-world patterns that will change how you think about stream pipelines.
To appreciate why gatherers matter, let us look at things you cannot do cleanly with the existing Stream API — operations that require state, context, or structural transformation between elements.
Suppose you have a stream of stock prices and you want to compute a 3-day moving average. You need to look at three consecutive elements at a time, slide one position forward, and produce an average for each window. There is no built-in stream operation for this. Before gatherers, your options were:
// The ugly workaround: materialize, index, and re-stream
List<Double> prices = List.of(100.0, 102.5, 101.0, 105.0, 103.5, 107.0);

List<Double> movingAverages = IntStream.range(0, prices.size() - 2)
    .mapToObj(i -> (prices.get(i) + prices.get(i + 1) + prices.get(i + 2)) / 3.0)
    .toList();
// Requires random access -- cannot work with a true stream
This only works because you materialized the data into a List first. With a true stream (say, reading from a socket or a database cursor), you cannot index into it. You need an intermediate operation that remembers previous elements.
The built-in distinct() removes all duplicates across the entire stream, but what if you only want to remove consecutive duplicates? For example, turning [1, 1, 2, 2, 2, 3, 1, 1] into [1, 2, 3, 1]. There is no built-in operation for this. You need state — specifically, you need to remember the last element you emitted.
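Before gatherers, the usual fallback was a plain loop over a materialized list. A minimal sketch of that workaround (the dedupConsecutive helper name is our own):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class DedupConsecutive {

    // Pre-gatherers workaround: keep an element only if it differs
    // from the previously emitted one
    static <T> List<T> dedupConsecutive(List<T> input) {
        List<T> out = new ArrayList<>();
        for (T e : input) {
            if (out.isEmpty() || !Objects.equals(out.get(out.size() - 1), e)) {
                out.add(e);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(dedupConsecutive(List.of(1, 1, 2, 2, 2, 3, 1, 1)));
        // [1, 2, 3, 1]
    }
}
```

This works, but it forces you out of the stream pipeline and requires the whole input in memory.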
You have a stream of records and you want to group them into batches of 100 for bulk database inserts. The stream might have 10,000 elements, and you need to emit 100 lists of 100. This is a many-to-many transformation that requires an accumulator, and the built-in API has nothing for it.
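Again, the pre-gatherer workaround forces you to materialize everything first. A sketch of the typical subList-based helper (the partition name is ours):

```java
import java.util.ArrayList;
import java.util.List;

public class Batcher {

    // Pre-gatherers workaround: collect the whole input, then slice it.
    // Requires the full list in memory -- exactly what a streaming
    // batch operation is supposed to avoid.
    static <T> List<List<T>> partition(List<T> source, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < source.size(); i += batchSize) {
            batches.add(source.subList(i, Math.min(i + batchSize, source.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        System.out.println(partition(List.of(1, 2, 3, 4, 5, 6, 7), 3));
        // [[1, 2, 3], [4, 5, 6], [7]]
    }
}
```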
You want a running total: given [1, 2, 3, 4], produce [1, 3, 6, 10]. The reduce() operation produces a single value, not a stream of intermediate results. You would have to use an external mutable variable (which violates the stream contract) or fall back to imperative code.
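A sketch of that imperative fallback (the runningTotals helper is our own name) shows what you are forced to write without a scan operation:

```java
import java.util.ArrayList;
import java.util.List;

public class RunningTotal {

    // Pre-gatherers workaround: accumulate outside any stream pipeline
    static List<Integer> runningTotals(List<Integer> input) {
        List<Integer> out = new ArrayList<>();
        int sum = 0;
        for (int e : input) {
            sum += e;
            out.add(sum);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(runningTotals(List.of(1, 2, 3, 4)));
        // [1, 3, 6, 10]
    }
}
```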
Collectors are powerful, but they are terminal operations. They consume the entire stream and produce a single result. They cannot sit in the middle of a pipeline and emit elements downstream. Gatherers fill exactly this gap — they are intermediate operations that can carry state, transform structure, and emit zero or more elements per input, all while remaining composable with the rest of the pipeline.
| Limitation | Before Gatherers | With Gatherers |
|---|---|---|
| Sliding window | Materialize to list, index manually | Gatherers.windowSliding(n) |
| Fixed-size batches | Collect all, then partition | Gatherers.windowFixed(n) |
| Running total | External mutable variable (unsafe) | Gatherers.scan(init, fn) |
| Fold to single value (intermediate) | Not possible in pipeline | Gatherers.fold(init, fn) |
| Concurrent mapping with limit | Custom thread pool + futures | Gatherers.mapConcurrent(n, fn) |
| Custom stateful logic | Break pipeline, write imperative code | stream.gather(myGatherer) |
A gatherer is defined by the java.util.stream.Gatherer<T, A, R> interface. If you have worked with Collector<T, A, R>, the shape will feel familiar, but there are important differences. Let us break down the type parameters and the four functions that make up a gatherer.
| Parameter | Meaning | Collector Equivalent |
|---|---|---|
| T | Type of input elements consumed from upstream | Same — input type |
| A | Type of the mutable state object (private, per-gatherer) | Same — accumulator type |
| R | Type of output elements emitted downstream | Different — in Collector, R is the final result type, not a stream element type |
The critical insight: a Collector’s R is the single result you get back (like a List or a Map). A Gatherer’s R is the element type of the output stream. The gatherer sits in the middle of the pipeline and produces a new stream.
Every gatherer is composed of four functions. Only one of them, the integrator, is required; the other three are optional:
| Function | Type | Required? | Purpose |
|---|---|---|---|
| initializer() | Supplier<A> | Optional | Creates the private mutable state object. Called once per stream evaluation. If omitted, the gatherer is stateless. |
| integrator() | Integrator<A, T, R> | Required | The core logic. Called once per input element. Receives the state, the current element, and a Downstream handle to push output elements. Returns boolean: true to continue, false to short-circuit. |
| combiner() | BinaryOperator<A> | Optional | Merges two state objects when running in parallel. Without this, the gatherer runs sequentially even on a parallel stream. |
| finisher() | BiConsumer<A, Downstream<? super R>> | Optional | Called after all input elements have been processed. Can emit final elements downstream. Useful for flushing buffered state. |
The Downstream<R> object is how a gatherer emits elements to the next stage in the pipeline. It has two key methods:
public interface Downstream<R> {
    // Push an element downstream. Returns true if more elements are accepted,
    // false if the downstream is done (e.g., a short-circuiting terminal op).
    boolean push(R element);

    // Check if the downstream is rejecting further elements.
    boolean isRejecting();
}
This is one of the key differences from Collector. A collector’s accumulator is a BiConsumer — it just consumes, with no feedback. A gatherer’s integrator gets feedback from downstream via the return value of push(). This enables short-circuiting: if the downstream says “I am done,” the gatherer can stop processing immediately.
The Integrator interface comes in two flavors:
// Standard integrator -- may short-circuit (return false)
Integrator.of((state, element, downstream) -> {
// Process element, optionally push to downstream
// Return false to stop processing early
return true;
});
// Greedy integrator -- promises it will never short-circuit on its own
// The stream runtime can optimize based on this guarantee
Integrator.ofGreedy((state, element, downstream) -> {
    // Still returns a boolean, but it only reflects whether the
    // downstream is willing to accept more -- never its own decision
    return downstream.push(transform(element));
});
Use Integrator.ofGreedy() when your gatherer always processes all elements (like a mapping or filtering gatherer). Use Integrator.of() when your gatherer might need to stop early (like a “take first N matching” gatherer).
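To illustrate the short-circuiting variant, here is a sketch of a "take until match" gatherer built with Integrator.of() (the takeUntil name is ours; this assumes a JDK where the Gatherer API is final, i.e. 24 or later):

```java
import java.util.function.Predicate;
import java.util.stream.Gatherer;
import java.util.stream.Stream;

public class TakeUntil {

    // Emits elements up to and including the first match, then returns
    // false from the integrator to stop upstream processing
    static <T> Gatherer<T, ?, T> takeUntil(Predicate<? super T> stop) {
        return Gatherer.of(Gatherer.Integrator.of((state, element, downstream) -> {
            boolean downstreamWantsMore = downstream.push(element);
            return downstreamWantsMore && !stop.test(element);
        }));
    }

    public static void main(String[] args) {
        Predicate<Integer> stopAtThree = n -> n == 3;
        System.out.println(
            Stream.of(1, 2, 3, 4, 5)
                  .gather(takeUntil(stopAtThree))
                  .toList()
        ); // [1, 2, 3]
    }
}
```

Because the integrator can return false, elements 4 and 5 are never pulled from the source at all.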
| Aspect | Collector<T, A, R> | Gatherer<T, A, R> |
|---|---|---|
| Pipeline position | Terminal (end) | Intermediate (middle) |
| Output | Single result of type R | Stream of elements of type R |
| Accumulator / Integrator | BiConsumer<A, T> — no feedback | Integrator<A, T, R> — returns boolean, has Downstream |
| Finisher | Function<A, R> — returns the result | BiConsumer<A, Downstream> — pushes to stream |
| Short-circuiting | Not supported | Supported via integrator return value |
| Composability | Not directly composable | Composable via andThen() |
Java ships five built-in gatherers in the java.util.stream.Gatherers utility class. These cover the most commonly requested operations that were impossible with the old API. Let us go through each one with concrete examples.
fold() is a many-to-one gatherer. It works like reduce(), but as an intermediate operation that emits a single result element into the downstream when all input is consumed. Think of it as “reduce, but keep going with the pipeline.”
Signature:
static <T, R> Gatherer<T, ?, R> fold(
    Supplier<R> initial,
    BiFunction<? super R, ? super T, ? extends R> folder
)
How it works: The initial supplier creates the starting value (the identity). The folder function takes the current accumulated value and the next input element, and returns the new accumulated value. After all input elements are processed, the final accumulated value is emitted downstream as a single element.
Example: Join strings with a semicolon delimiter
String result = Stream.of(1, 2, 3, 4, 5, 6, 7, 8, 9)
.gather(
Gatherers.fold(
() -> "",
(accumulated, element) -> {
if (accumulated.isEmpty()) return element.toString();
return accumulated + ";" + element;
}
)
)
.findFirst()
.get();
System.out.println(result);
// Output: 1;2;3;4;5;6;7;8;9
“Wait, can’t I just use Collectors.joining(";")?” Yes, for this specific case. But fold() is an intermediate operation — the result flows into the rest of the pipeline. You could chain more gatherers or operations after it. With a collector, the pipeline ends.
Example: Sum as intermediate operation, then continue processing
// Fold to compute sum, then map the result, then collect
List<String> result = Stream.of(10, 20, 30)
    .gather(Gatherers.fold(() -> 0, Integer::sum))
    .map(sum -> "Total: " + sum)
    .toList();

System.out.println(result);
// Output: [Total: 60]
scan() is a one-to-one stateful gatherer. It produces a running accumulation — for each input element, it emits the current accumulated value. If you are familiar with functional programming, this is the classic “prefix scan” or “cumulative fold.”
Signature:
static <T, R> Gatherer<T, ?, R> scan(
    Supplier<R> initial,
    BiFunction<? super R, ? super T, ? extends R> scanner
)
How it works: For each input element, the scanner function is applied to the current state and the element. The result becomes both the new state and the output element pushed downstream.
Example: Running sum (prefix sum)
Stream.of(1, 2, 3, 4, 5)
.gather(Gatherers.scan(() -> 0, Integer::sum))
.forEach(System.out::println);
// Output:
// 1
// 3
// 6
// 10
// 15
Notice the output has the same number of elements as the input — that is the one-to-one nature of scan(). Each output is the cumulative result up to that point.
Example: Running sum starting from a seed value
Stream.of(1, 2, 3, 4, 5)
.gather(Gatherers.scan(() -> 100, (current, next) -> current + next))
.forEach(System.out::println);
// Output:
// 101
// 103
// 106
// 110
// 115
Example: Running maximum
Stream.of(3, 1, 4, 1, 5, 9, 2, 6)
.gather(Gatherers.scan(() -> Integer.MIN_VALUE, Integer::max))
.forEach(System.out::println);
// Output:
// 3
// 3
// 4
// 4
// 5
// 9
// 9
// 9
windowFixed() is a many-to-many gatherer. It groups input elements into fixed-size lists (batches). When the window is full, it is emitted downstream as a List. The last window may contain fewer elements if the stream size is not evenly divisible.
Signature:
static <T> Gatherer<T, ?, List<T>> windowFixed(int windowSize)
Example: Batch elements into groups of 3
Stream.of(1, 2, 3, 4, 5, 6, 7, 8)
.gather(Gatherers.windowFixed(3))
.forEach(System.out::println);
// Output:
// [1, 2, 3]
// [4, 5, 6]
// [7, 8]
Notice how the last window [7, 8] has only two elements — it emits whatever is left when the stream ends.
Example: Bulk database inserts in batches of 100
records.stream()
.gather(Gatherers.windowFixed(100))
.forEach(batch -> {
jdbcTemplate.batchUpdate(
"INSERT INTO orders (id, amount) VALUES (?, ?)",
batch.stream()
.map(order -> new Object[]{order.id(), order.amount()})
.toList()
);
System.out.println("Inserted batch of " + batch.size());
});
windowSliding() is a many-to-many gatherer that creates overlapping windows. Each window contains windowSize elements, and the window slides forward by one position for each new element. This is exactly what you need for moving averages, n-gram generation, and similar sliding-window algorithms.
Signature:
static <T> Gatherer<T, ?, List<T>> windowSliding(int windowSize)
Example: Sliding windows of size 3
Stream.of(1, 2, 3, 4, 5, 6, 7, 8)
.gather(Gatherers.windowSliding(3))
.forEach(System.out::println);
// Output:
// [1, 2, 3]
// [2, 3, 4]
// [3, 4, 5]
// [4, 5, 6]
// [5, 6, 7]
// [6, 7, 8]
Each window overlaps with the previous one by windowSize - 1 elements. The input stream of 8 elements produces 6 windows of size 3.
Example: 3-day moving average of stock prices
List<Double> prices = List.of(100.0, 102.5, 101.0, 105.0, 103.5, 107.0, 106.0);

prices.stream()
    .gather(Gatherers.windowSliding(3))
    .map(window -> window.stream()
        .mapToDouble(Double::doubleValue)
        .average()
        .orElse(0.0))
    .forEach(avg -> System.out.printf("%.2f%n", avg));
// Output:
// 101.17
// 102.83
// 103.17
// 105.17
// 105.50
mapConcurrent() is a one-to-one gatherer that applies a mapping function concurrently, up to a specified concurrency limit. This is incredibly useful for I/O-bound operations where you want to parallelize the mapping without converting the entire stream to parallel (which would parallelize everything, including non-I/O stages).
Signature:
static <T, R> Gatherer<T, ?, R> mapConcurrent(
    int maxConcurrency,
    Function<? super T, ? extends R> mapper
)
Key properties:
Each mapper invocation runs on its own virtual thread, there are at most maxConcurrency simultaneous invocations, and results are emitted downstream in the original encounter order.
Example: Fetch URLs with concurrency limit of 5
List<String> urls = List.of(
    "https://api.example.com/users/1",
    "https://api.example.com/users/2",
    "https://api.example.com/users/3",
    "https://api.example.com/users/4",
    "https://api.example.com/users/5",
    "https://api.example.com/users/6",
    "https://api.example.com/users/7",
    "https://api.example.com/users/8"
);

// One shared client -- creating a client per request is wasteful
HttpClient client = HttpClient.newHttpClient();

List<String> responses = urls.stream()
    .gather(Gatherers.mapConcurrent(5, url -> {
        // This runs concurrently, up to 5 at a time
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(url))
            .build();
        try {
            return client.send(request, HttpResponse.BodyHandlers.ofString())
                .body();
        } catch (Exception e) {
            return "Error: " + e.getMessage();
        }
    }))
    .toList();
// All 8 URLs are fetched, max 5 at a time, results in original order
Before mapConcurrent(), achieving this required manually managing a thread pool, submitting CompletableFuture tasks, collecting results, and handling ordering yourself. Now it is one line in a stream pipeline.
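For comparison, a sketch of that manual recipe (the fetch function here is a hypothetical stand-in for a real HTTP call):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ManualConcurrentMap {

    // Hypothetical stand-in for a real HTTP call
    static String fetch(String url) {
        return "body-of:" + url;
    }

    // The pre-mapConcurrent recipe: a bounded pool caps concurrency,
    // all futures are created before any join() so work overlaps,
    // and joining in stream order preserves the input ordering.
    static List<String> fetchAll(List<String> urls, int maxConcurrency) {
        ExecutorService pool = Executors.newFixedThreadPool(maxConcurrency);
        try {
            List<CompletableFuture<String>> futures = urls.stream()
                .map(url -> CompletableFuture.supplyAsync(() -> fetch(url), pool))
                .toList();
            return futures.stream()
                .map(CompletableFuture::join)
                .toList();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(fetchAll(List.of("u1", "u2", "u3"), 2));
        // [body-of:u1, body-of:u2, body-of:u3]
    }
}
```

Note everything the gatherer version handles for you: pool lifecycle, error propagation, and ordering.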
The gather() method is the new intermediate operation on Stream that accepts a Gatherer. It sits alongside map(), filter(), and flatMap() in the pipeline.
Signature:
// On the Stream<T> interface:
<R> Stream<R> gather(Gatherer<? super T, ?, R> gatherer)
You can use gather() anywhere you would use any other intermediate operation. You can chain multiple gather() calls, mix them with map() and filter(), and end with any terminal operation.
Example: Chaining gatherers with standard operations
List<Double> result = Stream.of(10.0, 20.5, 15.3, 30.0, 25.7, 18.2, 22.1, 35.0, 28.4, 19.6)
    .filter(value -> value > 15.0)           // Standard filter
    .gather(Gatherers.windowSliding(3))      // Sliding windows of 3
    .map(window -> window.stream()           // Standard map
        .mapToDouble(Double::doubleValue)
        .average()
        .orElse(0.0))
    .gather(Gatherers.scan(() -> 0.0,        // Running sum of averages
        (sum, avg) -> sum + avg))
    .toList();                               // Terminal operation

System.out.println(result);
Gatherers support composition via the andThen() method. This lets you combine two gatherers into a single gatherer, which can be useful for building reusable, composable transformation pipelines.
```java
// Create composed gatherer: scan then fold
Gatherer<Integer, ?, Integer> scanGatherer = Gatherers.scan(() -> 0, Integer::sum);
Gatherer<Integer, ?, String> foldGatherer = Gatherers.fold(
    () -> "",
    (result, element) -> result.isEmpty() ? element.toString() : result + ";" + element
);

// Compose them: first scan (running sum), then fold (join into string)
Gatherer<Integer, ?, String> composed = scanGatherer.andThen(foldGatherer);

// These are equivalent:
String result1 = Stream.of(1, 2, 3, 4, 5)
    .gather(composed)
    .findFirst().get();

String result2 = Stream.of(1, 2, 3, 4, 5)
    .gather(scanGatherer)
    .gather(foldGatherer)
    .findFirst().get();

System.out.println(result1);                  // Output: 1;3;6;10;15
System.out.println(result1.equals(result2)); // Output: true
```
When the stream pipeline is executed and encounters a gather() step, the following happens:
1. A `Downstream` object is created that forwards elements to the next pipeline stage
2. The `initializer` is called to create the state object
3. The `integrator` function is retrieved
4. For each input element, `integrator.integrate(state, element, downstream)` is called
5. If the integrator returns `false`, processing stops immediately (short-circuit)
6. When the input is exhausted, the `finisher` is called with the final state and downstream

The built-in gatherers cover common cases, but the real power of the API is creating your own. There are two approaches: implementing the Gatherer interface directly, or using the factory methods Gatherer.of() and Gatherer.ofSequential().
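The evaluation sequence just described can be sketched as a simplified driver loop. This is an illustration only, not the JDK's actual implementation -- real evaluation also handles parallel splitting and rejecting downstreams -- and it assumes the gatherer defines an explicit initializer (built-ins like scan do):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Gatherer;
import java.util.stream.Gatherers;

// Simplified sketch of how gather() evaluates a gatherer sequentially
static <T, A, R> List<R> evaluate(Gatherer<T, A, R> gatherer, List<T> input) {
    List<R> out = new ArrayList<>();
    Gatherer.Downstream<R> downstream = element -> {   // step 1: forwards to "next stage"
        out.add(element);
        return true;                                   // true = keep sending elements
    };
    A state = gatherer.initializer().get();            // step 2: create the state object
    var integrator = gatherer.integrator();            // step 3: retrieve the integrator
    for (T element : input) {                          // step 4: integrate each element
        if (!integrator.integrate(state, element, downstream)) {
            break;                                     // step 5: short-circuit
        }
    }
    gatherer.finisher().accept(state, downstream);     // step 6: final flush
    return out;
}

// Drive a built-in gatherer through the loop:
List<Integer> scanned = evaluate(Gatherers.scan(() -> 0, Integer::sum), List.of(1, 2, 3, 4));
System.out.println(scanned);  // [1, 3, 6, 10]
```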
Let us build a gatherer that removes consecutive duplicate elements — something you cannot do with distinct() (which removes all duplicates globally).
```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.function.Supplier;
import java.util.stream.Gatherer;

/**
 * A gatherer that removes consecutive duplicates.
 * [1, 1, 2, 2, 2, 3, 1, 1] -> [1, 2, 3, 1]
 */
public class DistinctConsecutive<T> implements Gatherer<T, List<T>, T> {

    @Override
    public Supplier<List<T>> initializer() {
        // State: a single-element list holding the last emitted value
        return () -> new ArrayList<>(1);
    }

    @Override
    public Integrator<List<T>, T, T> integrator() {
        return Integrator.ofGreedy((state, element, downstream) -> {
            if (state.isEmpty() || !Objects.equals(state.getFirst(), element)) {
                state.clear();
                state.add(element);
                return downstream.push(element);
            }
            return true; // Same as last element -- skip it
        });
    }

    // No combiner -- sequential only
    // No finisher -- nothing to flush
}
```
Usage:
```java
Stream.of(1, 1, 2, 2, 2, 3, 1, 1, 4, 4)
    .gather(new DistinctConsecutive<>())
    .forEach(System.out::println);
// Output:
// 1
// 2
// 3
// 1
// 4
```
For simpler gatherers that do not need parallel support, Gatherer.ofSequential() is more concise. Let us build the same consecutive-distinct gatherer using the factory method:
```java
static <T> Gatherer<T, ?, T> distinctConsecutive() {
    return Gatherer.ofSequential(
        // Initializer: mutable container for the last seen element
        () -> new ArrayList<T>(1),
        // Integrator
        Integrator.ofGreedy((state, element, downstream) -> {
            if (state.isEmpty() || !Objects.equals(state.getFirst(), element)) {
                state.clear();
                state.add(element);
                return downstream.push(element);
            }
            return true;
        })
    );
}

// Usage:
Stream.of("a", "a", "b", "b", "c", "a", "a")
    .gather(distinctConsecutive())
    .toList();
// Result: [a, b, c, a]
```
When you need parallel execution, use Gatherer.of() which accepts all four functions including a combiner:
```java
static <T> Gatherer<T, ?, T> distinctConsecutiveParallel() {
    return Gatherer.of(
        // Initializer
        () -> new ArrayList<T>(1),
        // Integrator
        Integrator.ofGreedy((state, element, downstream) -> {
            if (state.isEmpty() || !Objects.equals(state.getFirst(), element)) {
                state.clear();
                state.add(element);
                return downstream.push(element);
            }
            return true;
        }),
        // Combiner for parallel execution: when merging parallel segments, keep
        // the state from the right segment because it represents the more recent
        // "last seen" element
        (left, right) -> right.isEmpty() ? left : right,
        // Nothing to flush -- pass the no-op default finisher
        Gatherer.defaultFinisher()
    );
}
```
| Factory Method | Parameters | Parallel? | Use When |
|---|---|---|---|
| `Gatherer.ofSequential(integrator)` | Integrator only | No | Stateless, sequential transformation |
| `Gatherer.ofSequential(init, integrator)` | Initializer + Integrator | No | Stateful, sequential, no flush needed |
| `Gatherer.ofSequential(init, integrator, finisher)` | Initializer + Integrator + Finisher | No | Stateful, sequential, needs final flush |
| `Gatherer.of(init, integrator, combiner, finisher)` | All four | Yes | Full-featured, parallelizable gatherer |

The parallel factory always takes all four functions; when there is nothing to flush, pass the no-op `Gatherer.defaultFinisher()` as the last argument.
Stateful gatherers are where the API truly shines. These are operations that need to remember information across elements — something that was impossible to do correctly in a stream pipeline before gatherers.
A gatherer that computes a running average, emitting the current average after each input element:
```java
static Gatherer<Double, ?, Double> runningAverage() {
    // State: [sum, count]
    return Gatherer.ofSequential(
        () -> new double[]{0.0, 0.0},
        Integrator.ofGreedy((state, element, downstream) -> {
            state[0] += element;  // sum
            state[1] += 1;        // count
            return downstream.push(state[0] / state[1]);
        })
    );
}

// Usage:
Stream.of(10.0, 20.0, 30.0, 40.0, 50.0)
    .gather(runningAverage())
    .forEach(avg -> System.out.printf("%.1f%n", avg));
// Output:
// 10.0
// 15.0
// 20.0
// 25.0
// 30.0
```
The built-in distinct() uses a HashSet internally and removes all duplicates across the entire stream. Sometimes you instead want to deduplicate within a time or count window — for example, suppress duplicate log messages within a batch of 100:
```java
static <T> Gatherer<T, ?, T> deduplicateWithinWindow(int windowSize) {
    return Gatherer.ofSequential(
        () -> new Object[]{new LinkedHashSet<T>(), 0},
        Integrator.ofGreedy((state, element, downstream) -> {
            @SuppressWarnings("unchecked")
            LinkedHashSet<T> seen = (LinkedHashSet<T>) state[0];
            int count = (int) state[1];
            // Reset the window when we hit the limit
            if (count > 0 && count % windowSize == 0) {
                seen.clear();
            }
            boolean keepGoing = true;
            if (seen.add(element)) {
                keepGoing = downstream.push(element);
            }
            state[1] = count + 1;
            return keepGoing;
        })
    );
}

// Usage: Suppress duplicate log levels within batches of 5
Stream.of("INFO", "WARN", "INFO", "ERROR", "WARN",
          "INFO", "DEBUG", "INFO", "WARN", "ERROR")
    .gather(deduplicateWithinWindow(5))
    .forEach(System.out::println);
// Output (first window of 5 input elements):
// INFO
// WARN
// ERROR
// (second window of 5 input elements -- seen set is cleared):
// INFO
// DEBUG
// WARN
// ERROR
```
A gatherer that enforces a maximum throughput by introducing delays when elements arrive too quickly:
```java
static <T> Gatherer<T, ?, T> rateLimited(int maxPerSecond) {
    long intervalNanos = 1_000_000_000L / maxPerSecond;
    return Gatherer.ofSequential(
        // State: last emission time in nanos
        () -> new long[]{0L},
        Integrator.ofGreedy((state, element, downstream) -> {
            long now = System.nanoTime();
            long elapsed = now - state[0];
            if (elapsed < intervalNanos) {
                try {
                    Thread.sleep((intervalNanos - elapsed) / 1_000_000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
            state[0] = System.nanoTime();
            return downstream.push(element);
        })
    );
}

// Usage: Process API calls at most 10 per second
urls.stream()
    .gather(rateLimited(10))
    .gather(Gatherers.mapConcurrent(5, url -> fetchUrl(url)))
    .forEach(response -> process(response));
```
Group elements into sessions based on a gap threshold. If two consecutive elements are more than a specified distance apart, start a new session:
```java
record TimestampedEvent(long timestamp, String data) {}

static Gatherer<TimestampedEvent, ?, List<TimestampedEvent>> sessionGrouping(long maxGapMillis) {
    return Gatherer.ofSequential(
        // State: current session (list of events)
        () -> new ArrayList<TimestampedEvent>(),
        // Integrator: add to current session or start new one
        Integrator.ofGreedy((session, event, downstream) -> {
            if (!session.isEmpty()) {
                long lastTimestamp = session.getLast().timestamp();
                if (event.timestamp() - lastTimestamp > maxGapMillis) {
                    // Gap exceeds threshold -- emit current session, start new one
                    downstream.push(List.copyOf(session));
                    session.clear();
                }
            }
            session.add(event);
            return true;
        }),
        // Finisher: emit the last session
        (session, downstream) -> {
            if (!session.isEmpty()) {
                downstream.push(List.copyOf(session));
            }
        }
    );
}

// Usage: Group click events into sessions (30-second gap threshold)
List<TimestampedEvent> clicks = List.of(
    new TimestampedEvent(1000, "click_home"),
    new TimestampedEvent(3000, "click_products"),
    new TimestampedEvent(5000, "click_item"),
    new TimestampedEvent(60000, "click_home"),   // 55s gap -- new session
    new TimestampedEvent(62000, "click_cart"),
    new TimestampedEvent(63000, "click_checkout")
);

clicks.stream()
    .gather(sessionGrouping(30_000))
    .forEach(session -> System.out.println("Session: " + session));
// Output:
// Session: [TimestampedEvent[timestamp=1000, data=click_home], ...]
// Session: [TimestampedEvent[timestamp=60000, data=click_home], ...]
```
A one-to-many gatherer emits multiple output elements for each input element. You might think “that is what flatMap() does” — and you would be right for the stateless case. But gatherers can do stateful one-to-many transformations, which flatMap() cannot.
For each input number, emit both the number itself and the running total so far:
```java
static Gatherer<Integer, ?, String> elementAndRunningTotal() {
    return Gatherer.ofSequential(
        () -> new int[]{0},  // running total
        Integrator.ofGreedy((state, element, downstream) -> {
            state[0] += element;
            downstream.push("Value: " + element);
            return downstream.push("Running total: " + state[0]);
        })
    );
}

Stream.of(10, 20, 30)
    .gather(elementAndRunningTotal())
    .forEach(System.out::println);
// Output:
// Value: 10
// Running total: 10
// Value: 20
// Running total: 30
// Value: 30
// Running total: 60
```
Given a stream of range objects, expand each into individual integers — but only emit values that are unique across all ranges (stateful dedup during expansion):
```java
record IntRange(int start, int endInclusive) {}

static Gatherer<IntRange, ?, Integer> expandUniqueRanges() {
    return Gatherer.ofSequential(
        () -> new HashSet<Integer>(),  // track all emitted values globally
        Integrator.ofGreedy((seen, range, downstream) -> {
            for (int i = range.start(); i <= range.endInclusive(); i++) {
                if (seen.add(i)) {
                    downstream.push(i);
                }
            }
            return true;
        })
    );
}

Stream.of(
        new IntRange(1, 5),
        new IntRange(3, 8),   // 3, 4, 5 already seen -- only emit 6, 7, 8
        new IntRange(7, 10)   // 7, 8 already seen -- only emit 9, 10
    )
    .gather(expandUniqueRanges())
    .forEach(System.out::println);
// Output: 1 2 3 4 5 6 7 8 9 10
```
A gatherer that splits each input line into words, maintaining a word count across all lines:
```java
record NumberedWord(int globalIndex, String word) {}

static Gatherer<String, ?, NumberedWord> tokenize() {
    return Gatherer.ofSequential(
        () -> new int[]{0},  // global word counter
        Integrator.ofGreedy((counter, line, downstream) -> {
            String[] words = line.trim().split("\\s+");
            for (String word : words) {
                if (!word.isEmpty()) {
                    counter[0]++;
                    downstream.push(new NumberedWord(counter[0], word));
                }
            }
            return true;
        })
    );
}

Stream.of("hello world", "foo bar baz", "java streams")
    .gather(tokenize())
    .forEach(System.out::println);
// Output:
// NumberedWord[globalIndex=1, word=hello]
// NumberedWord[globalIndex=2, word=world]
// NumberedWord[globalIndex=3, word=foo]
// NumberedWord[globalIndex=4, word=bar]
// NumberedWord[globalIndex=5, word=baz]
// NumberedWord[globalIndex=6, word=java]
// NumberedWord[globalIndex=7, word=streams]
```
Short-circuiting is one of the most powerful capabilities of gatherers. By returning false from the integrator, you tell the stream to stop processing immediately. This enables operations like "take while a condition holds, but with state" -- something that built-in takeWhile() cannot do because it is stateless.
Take elements from the stream until the cumulative sum exceeds a threshold:
```java
static Gatherer<Integer, ?, Integer> takeUntilSumExceeds(int threshold) {
    return Gatherer.ofSequential(
        () -> new int[]{0},  // running sum
        Integrator.of((state, element, downstream) -> {
            state[0] += element;
            if (state[0] > threshold) {
                return false;  // Stop -- sum exceeded threshold
            }
            return downstream.push(element);
        })
    );
}

Stream.of(10, 20, 30, 40, 50, 60)
    .gather(takeUntilSumExceeds(55))
    .forEach(System.out::println);
// Output:
// 10
// 20
// (30 would make sum = 60 > 55, so processing stops)
```
Take elements until you have seen N distinct values. This combines state (a set of seen values) with short-circuiting:
```java
static <T> Gatherer<T, ?, T> takeNDistinct(int n) {
    return Gatherer.ofSequential(
        () -> new HashSet<T>(),
        Integrator.of((seen, element, downstream) -> {
            seen.add(element);
            downstream.push(element);
            return seen.size() < n;  // Stop when we have N distinct values
        })
    );
}

Stream.of(1, 2, 1, 3, 2, 4, 5, 3, 6)
    .gather(takeNDistinct(4))
    .forEach(System.out::println);
// Output: 1, 2, 1, 3, 2, 4
// Stops after seeing 4th distinct value (4), but includes all elements up to that point
```
Skip the first N elements, then take the first one that matches a predicate:
```java
static <T> Gatherer<T, ?, T> firstMatchAfterSkipping(int skip, Predicate<T> predicate) {
    return Gatherer.ofSequential(
        () -> new int[]{0},  // element counter
        Integrator.of((counter, element, downstream) -> {
            counter[0]++;
            if (counter[0] > skip && predicate.test(element)) {
                downstream.push(element);
                return false;  // Found it -- stop
            }
            return true;  // Keep looking
        })
    );
}

Stream.of(2, 4, 6, 7, 8, 9, 10, 11)
    .gather(firstMatchAfterSkipping(3, n -> n % 2 != 0))
    .forEach(System.out::println);
// Output: 7
// Skipped first 3 (2, 4, 6), then found first odd number (7)
```
The key pattern for short-circuiting: use Integrator.of() (not ofGreedy()) and return false when you want to stop. This works even on infinite streams -- the gatherer will terminate the pipeline when the condition is met.
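To see the infinite-stream case concretely, here is a sketch that reuses the running-sum idea from above and applies it to an endless Stream.iterate source:

```java
import java.util.List;
import java.util.stream.Gatherer;
import java.util.stream.Stream;

// Short-circuiting gatherer: Integrator.of() (not ofGreedy) signals that
// the integrator may return false to stop the whole pipeline.
static Gatherer<Integer, ?, Integer> takeUntilSumExceeds(int threshold) {
    return Gatherer.ofSequential(
        () -> new int[]{0},  // running sum
        Gatherer.Integrator.of((state, element, downstream) -> {
            state[0] += element;
            if (state[0] > threshold) return false;  // terminate the pipeline
            return downstream.push(element);
        })
    );
}

// Stream.iterate produces 1, 2, 3, ... forever; the gatherer terminates it.
List<Integer> taken = Stream.iterate(1, n -> n + 1)
    .gather(takeUntilSumExceeds(100))
    .toList();

// 1 + 2 + ... + 13 = 91; adding 14 would give 105 > 100, so we stop at 13
System.out.println(taken.getLast());  // 13
```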
Since gatherers and collectors share a similar structure, it is important to understand exactly when to use each one. Here is a comprehensive comparison:
| Aspect | `Collector<T, A, R>` | `Gatherer<T, A, R>` |
|---|---|---|
| Pipeline position | Terminal -- ends the pipeline | Intermediate -- pipeline continues after it |
| Output | Single result (List, Map, String, etc.) | Stream of zero or more elements |
| Used with | `stream.collect(collector)` | `stream.gather(gatherer)` |
| Accumulator / Integrator | `BiConsumer<A, T>` -- no return value | `Integrator<A, T, R>` -- returns boolean |
| Downstream access | No -- accumulates into state | Yes -- can push elements to next stage |
| Finisher | `Function<A, R>` -- transforms state to result | `BiConsumer<A, Downstream>` -- pushes to stream |
| Short-circuiting | Not supported | Supported via integrator return value |
| Composability | Limited (nested downstream collectors only) | Composable via `andThen()` |
| Cardinality | Many-to-one (always produces single result) | Any: 1-to-1, 1-to-many, many-to-1, many-to-many |
| Infinite streams | Cannot handle (never terminates) | Can handle via short-circuiting |
| Use case | Aggregate/summarize data | Transform/reshape data flow |
Use a Collector when:

- You need a single, final result such as a List, Map, String, or summary value
- The operation is a terminal aggregation or grouping
- The rich Collectors library (groupingBy, partitioningBy, joining, teeing) already does what you need

Use a Gatherer when:

- You need a custom intermediate operation and the pipeline should keep flowing afterward
- The transformation is stateful, windowed, or changes cardinality (one-to-many, many-to-many)
- You need short-circuiting, or the source may be infinite
Can a Gatherer replace a Collector? In some cases, yes. Gatherers.fold() is essentially a collector that emits into the stream. But collectors have a richer ecosystem (groupingBy, partitioningBy, teeing, etc.) and produce direct results without needing a terminal operation after them. Use the right tool for the job.
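To make the overlap concrete, here is a small sketch in which a fold gatherer and a plain reduction compute the same value:

```java
import java.util.stream.Gatherers;
import java.util.stream.Stream;

// Gatherers.fold() emits its single result INTO the stream, so a terminal
// operation is still needed to extract it...
int viaGatherer = Stream.of(1, 2, 3, 4)
    .gather(Gatherers.fold(() -> 0, Integer::sum))
    .findFirst()
    .orElseThrow();

// ...whereas reduce() (or collect()) is itself the terminal operation.
int viaReduce = Stream.of(1, 2, 3, 4).reduce(0, Integer::sum);

System.out.println(viaGatherer + " == " + viaReduce);  // 10 == 10
```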
Let us look at real-world scenarios where gatherers solve problems that were painful or impossible with the old Stream API.
A financial application needs to compute a simple moving average (SMA) over a configurable window of price data points:
```java
static Gatherer<Double, ?, Double> movingAverage(int windowSize) {
    // Use windowSliding + map, composed into a single gatherer
    return Gatherers.<Double>windowSliding(windowSize)
        .andThen(Gatherer.ofSequential(
            Integrator.ofGreedy((Void state, List<Double> window,
                                 Gatherer.Downstream<? super Double> downstream) -> {
                double avg = window.stream()
                    .mapToDouble(Double::doubleValue)
                    .average()
                    .orElse(0.0);
                return downstream.push(avg);
            })
        ));
}

// Usage: 5-day SMA
List<Double> closingPrices = List.of(
    150.0, 152.5, 148.0, 155.0, 153.0, 157.5, 160.0, 158.0, 162.0, 165.0
);

closingPrices.stream()
    .gather(movingAverage(5))
    .forEach(sma -> System.out.printf("SMA: %.2f%n", sma));
// Output:
// SMA: 151.70
// SMA: 153.20
// SMA: 154.70
// SMA: 156.70
// SMA: 158.10
// SMA: 160.50
```
Process large datasets in batches, logging progress after each batch:
```java
record BatchResult<T>(int batchNumber, int batchSize, List<T> items) {}

static <T> Gatherer<T, ?, BatchResult<T>> batchWithProgress(int batchSize) {
    return Gatherer.ofSequential(
        () -> new Object[]{new ArrayList<T>(), 0},  // [buffer, batchCount]
        Integrator.ofGreedy((state, element, downstream) -> {
            @SuppressWarnings("unchecked")
            List<T> buffer = (List<T>) state[0];
            buffer.add(element);
            if (buffer.size() >= batchSize) {
                int batchNum = (int) state[1] + 1;
                state[1] = batchNum;
                boolean more = downstream.push(
                    new BatchResult<>(batchNum, buffer.size(), List.copyOf(buffer)));
                buffer.clear();
                return more;
            }
            return true;
        }),
        // Finisher: emit remaining elements as final batch
        (state, downstream) -> {
            @SuppressWarnings("unchecked")
            List<T> buffer = (List<T>) state[0];
            if (!buffer.isEmpty()) {
                int batchNum = (int) state[1] + 1;
                downstream.push(new BatchResult<>(batchNum, buffer.size(), List.copyOf(buffer)));
            }
        }
    );
}

// Usage:
IntStream.rangeClosed(1, 23)
    .boxed()
    .gather(batchWithProgress(5))
    .forEach(batch -> {
        System.out.printf("Processing batch %d (%d items): %s%n",
            batch.batchNumber(), batch.batchSize(), batch.items());
        // Insert into database, send to API, etc.
    });
// Output:
// Processing batch 1 (5 items): [1, 2, 3, 4, 5]
// Processing batch 2 (5 items): [6, 7, 8, 9, 10]
// Processing batch 3 (5 items): [11, 12, 13, 14, 15]
// Processing batch 4 (5 items): [16, 17, 18, 19, 20]
// Processing batch 5 (3 items): [21, 22, 23]
```
Send a large list of IDs to an API that accepts a maximum of 50 IDs per request, executing requests concurrently with a limit of 3:
```java
record ApiResponse(int status, String body) {}

List<ApiResponse> responses = userIds.stream()
    // Step 1: Chunk IDs into groups of 50
    .gather(Gatherers.windowFixed(50))
    // Step 2: Convert each chunk to a comma-separated query parameter
    .map(chunk -> chunk.stream()
        .map(String::valueOf)
        .collect(Collectors.joining(",")))
    // Step 3: Make concurrent API calls (max 3 at a time)
    .gather(Gatherers.mapConcurrent(3, ids -> {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://api.example.com/users?ids=" + ids))
            .build();
        try {
            HttpResponse<String> resp = client.send(request,
                HttpResponse.BodyHandlers.ofString());
            return new ApiResponse(resp.statusCode(), resp.body());
        } catch (Exception e) {
            return new ApiResponse(500, e.getMessage());
        }
    }))
    // Step 4: Filter successful responses
    .filter(resp -> resp.status() == 200)
    .toList();
```
This entire pipeline -- chunking, URL construction, concurrent HTTP calls, response filtering -- is expressed as a single, readable stream pipeline. Before gatherers, you would need a loop, a thread pool, a list of futures, and manual result collection.
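For comparison, here is a sketch of roughly what the pre-gatherers version looks like. It is illustrative only: the hypothetical fetch function stands in for the HTTP call, and a fixed pool caps concurrency at 3.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;
import java.util.stream.Collectors;

// Pre-gatherers equivalent: manual chunking, a bounded pool,
// a list of futures, and explicit in-order result collection.
static List<String> fetchAllChunked(List<Long> userIds, int chunkSize,
                                    Function<String, String> fetch)
        throws InterruptedException, ExecutionException {
    // Step 1: manual chunking into comma-separated ID strings
    List<String> chunks = new ArrayList<>();
    for (int i = 0; i < userIds.size(); i += chunkSize) {
        chunks.add(userIds.subList(i, Math.min(i + chunkSize, userIds.size())).stream()
            .map(String::valueOf)
            .collect(Collectors.joining(",")));
    }
    // Step 2: a bounded pool caps concurrency
    ExecutorService pool = Executors.newFixedThreadPool(3);
    try {
        // Step 3: submit all tasks, then collect results in submission order
        List<Future<String>> futures = new ArrayList<>();
        for (String chunk : chunks) {
            futures.add(pool.submit(() -> fetch.apply(chunk)));
        }
        List<String> results = new ArrayList<>();
        for (Future<String> f : futures) {
            results.add(f.get());
        }
        return results;
    } finally {
        pool.shutdown();
    }
}
```

Every one of these manual steps corresponds to a single `gather()` or `map()` stage in the pipeline above.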
Group log entries into sessions based on time gaps, useful for analyzing user behavior or debugging:
```java
record LogEntry(Instant timestamp, String level, String message) {}

record LogSession(int sessionId, Duration duration, List<LogEntry> entries) {
    static LogSession from(int id, List<LogEntry> entries) {
        Duration dur = Duration.between(
            entries.getFirst().timestamp(),
            entries.getLast().timestamp()
        );
        return new LogSession(id, dur, entries);
    }
}

static Gatherer<LogEntry, ?, LogSession> groupIntoSessions(Duration maxGap) {
    return Gatherer.ofSequential(
        () -> new Object[]{new ArrayList<LogEntry>(), 0},
        Integrator.ofGreedy((state, entry, downstream) -> {
            @SuppressWarnings("unchecked")
            List<LogEntry> current = (List<LogEntry>) state[0];
            if (!current.isEmpty()) {
                Instant lastTime = current.getLast().timestamp();
                if (Duration.between(lastTime, entry.timestamp()).compareTo(maxGap) > 0) {
                    // Gap exceeds threshold -- emit session
                    int sessionId = (int) state[1] + 1;
                    state[1] = sessionId;
                    downstream.push(LogSession.from(sessionId, List.copyOf(current)));
                    current.clear();
                }
            }
            current.add(entry);
            return true;
        }),
        (state, downstream) -> {
            @SuppressWarnings("unchecked")
            List<LogEntry> current = (List<LogEntry>) state[0];
            if (!current.isEmpty()) {
                int sessionId = (int) state[1] + 1;
                downstream.push(LogSession.from(sessionId, List.copyOf(current)));
            }
        }
    );
}

// Usage:
logEntries.stream()
    .sorted(Comparator.comparing(LogEntry::timestamp))
    .gather(groupIntoSessions(Duration.ofMinutes(5)))
    .forEach(session -> System.out.printf(
        "Session %d: %d entries, duration %s%n",
        session.sessionId(), session.entries().size(), session.duration()
    ));
```
Stream Gatherers are a powerful new tool, and with great power comes the opportunity to misuse it. Here are the guidelines I follow when writing production-quality gatherers.
The state object returned by the initializer is not shared across threads unless you provide a combiner. Each thread segment gets its own state via a fresh initializer() call. However, you must ensure two things: the integrator must not touch mutable objects defined outside the gatherer, and the state itself must not escape (for example, by pushing the live state object downstream instead of a copy).
```java
// BAD: Shared mutable state outside the gatherer
List<String> sharedList = new ArrayList<>();  // Danger!

Gatherer<String, ?, String> bad = Gatherer.ofSequential(
    Integrator.ofGreedy((Void state, String element,
                         Gatherer.Downstream<? super String> downstream) -> {
        sharedList.add(element);  // Race condition in parallel streams!
        return downstream.push(element);
    })
);

// GOOD: All state is inside the gatherer
Gatherer<String, ?, String> good = Gatherer.ofSequential(
    () -> new ArrayList<String>(),
    Integrator.ofGreedy((state, element, downstream) -> {
        state.add(element);  // Safe -- state is per-evaluation
        return downstream.push(element);
    })
);
```
- Use `Integrator.ofGreedy()` when your gatherer never short-circuits. This gives the runtime optimization hints.
- Prefer cheap state objects (`new int[]{0}` instead of `new AtomicInteger(0)`) -- gatherer state is confined to a single evaluation, so atomics add overhead for nothing.
- Reserve `mapConcurrent()` for I/O-bound work. It uses virtual threads, which are great for I/O but provide no benefit for CPU-bound work.
- Compose small gatherers with `andThen()` rather than building monolithic ones.
- Return `Gatherer` from static factory methods for clean API design, just like `Collectors.toList()`.
- Parameterize your gatherers: `windowFixed(int size)` is more reusable than a `windowOfFive()`.

```java
// GOOD: Reusable, composable, parameterized gatherer library
public class CustomGatherers {

    public static <T> Gatherer<T, ?, T> distinctConsecutive() {
        return Gatherer.ofSequential(
            () -> new ArrayList<T>(1),
            Integrator.ofGreedy((state, element, downstream) -> {
                if (state.isEmpty() || !Objects.equals(state.getFirst(), element)) {
                    state.clear();
                    state.add(element);
                    return downstream.push(element);
                }
                return true;
            })
        );
    }

    public static <T> Gatherer<T, ?, T> takeUntilSumExceeds(
            int threshold, ToIntFunction<? super T> valueExtractor) {
        return Gatherer.ofSequential(
            () -> new int[]{0},
            Integrator.of((state, element, downstream) -> {
                state[0] += valueExtractor.applyAsInt(element);
                if (state[0] > threshold) return false;
                return downstream.push(element);
            })
        );
    }

    public static <T> Gatherer<T, ?, List<T>> groupByGap(
            BiPredicate<? super T, ? super T> gapDetector) {
        return Gatherer.ofSequential(
            () -> new ArrayList<T>(),
            Integrator.ofGreedy((group, element, downstream) -> {
                if (!group.isEmpty() && gapDetector.test(group.getLast(), element)) {
                    downstream.push(List.copyOf(group));
                    group.clear();
                }
                group.add(element);
                return true;
            }),
            (group, downstream) -> {
                if (!group.isEmpty()) {
                    downstream.push(List.copyOf(group));
                }
            }
        );
    }
}

// Usage: compose distinct + window + fold
Stream.of(1, 1, 2, 3, 3, 4, 5, 5)
    .gather(CustomGatherers.distinctConsecutive()
        .andThen(Gatherers.windowFixed(2))
        .andThen(Gatherers.fold(
            () -> new ArrayList<List<Integer>>(),
            (acc, window) -> { acc.add(window); return acc; }
        )))
    .forEach(System.out::println);
// Output: [[1, 2], [3, 4], [5]]
```
If your gatherer buffers elements (like windowing or batching), you must provide a finisher to flush the remaining buffer. Without it, the last partial batch is silently lost:
```java
// BAD: Missing finisher -- last partial window is lost
static <T> Gatherer<T, ?, List<T>> brokenWindow(int size) {
    return Gatherer.ofSequential(
        () -> new ArrayList<T>(),
        Integrator.ofGreedy((buffer, element, downstream) -> {
            buffer.add(element);
            if (buffer.size() >= size) {
                boolean more = downstream.push(List.copyOf(buffer));
                buffer.clear();
                return more;
            }
            return true;
        })
        // No finisher! Elements [4, 5] are lost if stream has 5 elements and window is 3
    );
}

// GOOD: Finisher flushes remaining elements
static <T> Gatherer<T, ?, List<T>> correctWindow(int size) {
    return Gatherer.ofSequential(
        () -> new ArrayList<T>(),
        Integrator.ofGreedy((buffer, element, downstream) -> {
            buffer.add(element);
            if (buffer.size() >= size) {
                boolean more = downstream.push(List.copyOf(buffer));
                buffer.clear();
                return more;
            }
            return true;
        }),
        (buffer, downstream) -> {
            if (!buffer.isEmpty()) {
                downstream.push(List.copyOf(buffer));  // Flush remainder
            }
        }
    );
}
```
Gatherers are not always the right answer:
- If map(), filter(), or flatMap() can do the job, use them. They are simpler, more readable, and better optimized by the runtime.
- If you are aggregating to a single result, Collectors.reducing() or Collectors.groupingBy() are more idiomatic for terminal operations.
- For CPU-bound parallel work, mapConcurrent() is the wrong tool: it uses virtual threads, which shine for I/O. Use parallelStream() or a ForkJoinPool instead.
- For one-off chunking of a list you already hold in memory, List.subList() or Guava's Lists.partition() might be simpler.

Stream Gatherers are the biggest expansion to the Stream API since Java 8. They fill the critical gap of custom intermediate operations, enabling stateful transformations, windowing, concurrent mapping, and short-circuiting that were previously impossible within a stream pipeline. The five built-in gatherers (fold, scan, windowFixed, windowSliding, mapConcurrent) cover the most common needs, and the Gatherer interface gives you the power to build anything else. Welcome to the next era of Java streams.
If you have ever watched a new developer’s face when they see their first Java program, you know the problem. Before they can print “Hello, World!”, they must understand public, class, static, void, String[], and why the file name must match the class name. Compare that to Python, where the entire program is print("Hello, World!"). That gap has been a legitimate criticism of Java for over two decades.
Java 25 fixes this with two finalized features that, when combined, make Java programs as simple as scripts while retaining the full power of the platform:
| Feature | JEP | Status in Java 25 | What It Does |
|---|---|---|---|
| Module Import Declarations | JEP 476 | Final | Import all public types from a module with one statement |
| Implicitly Declared Classes & Instance Main Methods | JEP 477 | Final | Write Java programs without class declarations or public static void main |
These features went through multiple rounds of preview in Java 22, 23, and 24. In Java 25, they are finalized and production-ready. No --enable-preview flag required. This post covers both features in detail — what they are, how they work, where the edge cases are, and when you should (and should not) use them.
Think of it this way: Java has always been a language that favors ceremony — explicit declarations that make large codebases maintainable. These features do not remove that ceremony from production code. Instead, they give you a casual mode for situations where the ceremony gets in the way: teaching, scripting, prototyping, and quick experiments.
Before we get to the simplified programs, we need to tackle the import problem. Java’s import system works well once you know it, but it creates a wall of boilerplate at the top of every file. A typical utility class might start like this:
```java
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.util.HashMap;
import java.util.Set;
import java.util.HashSet;
import java.util.Optional;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.io.IOException;
import java.io.BufferedReader;
import java.io.FileReader;
import java.nio.file.Path;
import java.nio.file.Files;

public class DataProcessor {
    // ... actual code starts here, 14 lines later
}
```
That is fourteen import statements before a single line of business logic. IDEs hide them behind a fold, but they still exist, and they are still noise. Star imports (import java.util.*) help, but they only import from a single package, not from related packages. You still need separate star imports for java.util, java.util.stream, java.io, and java.nio.file.
Module import declarations solve this at the module level.
The new syntax imports all public top-level types exported by a module:
import module java.base;
That single line replaces every import you would ever need from the java.base module. It imports types from java.util, java.util.stream, java.io, java.nio.file, java.time, java.math, java.net, java.text, java.util.concurrent, and every other package in the java.base module — roughly 60 packages and over 1,500 types.
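If you want to check the actual scope on your own JDK, the module API can enumerate the exports. This is a quick sketch; the exact counts vary by JDK release, so treat the output as a ballpark figure rather than a fixed number:

```java
import java.lang.module.ModuleDescriptor;
import java.lang.module.ModuleFinder;

// Look up java.base in the system module graph and count its
// unqualified exports (the packages a module import makes visible).
ModuleDescriptor base = ModuleFinder.ofSystem()
    .find("java.base")
    .orElseThrow()
    .descriptor();

long exportedPackages = base.exports().stream()
    .filter(e -> !e.isQualified())  // ignore exports targeted at specific modules
    .count();

System.out.println("java.base exports " + exportedPackages + " packages");
```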
Here is the comparison side by side:
```java
// BEFORE: Traditional imports
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.util.HashMap;
import java.util.stream.Collectors;
import java.io.IOException;
import java.io.BufferedReader;
import java.io.FileReader;
import java.nio.file.Path;
import java.nio.file.Files;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DataProcessor {
    public void process() throws IOException {
        Path file = Path.of("data.csv");
        List<String> lines = Files.readAllLines(file);
        Map<String, BigDecimal> totals = new HashMap<>();
        LocalDateTime now = LocalDateTime.now();
        // ...
    }
}
```

```java
// AFTER: Module import
import module java.base;

public class DataProcessor {
    public void process() throws IOException {
        Path file = Path.of("data.csv");
        List<String> lines = Files.readAllLines(file);
        Map<String, BigDecimal> totals = new HashMap<>();
        LocalDateTime now = LocalDateTime.now();
        // ...
    }
}
```
Fourteen imports collapsed to one. The code below the imports is identical — you do not change how you use the types, only how you import them.
You can use import module with any named module in the Java Platform Module System (JPMS). Here are the most commonly used ones:
| Module | Key Packages | Common Use Case |
|---|---|---|
| `java.base` | java.lang, java.util, java.io, java.nio, java.time, java.math, java.net, java.util.concurrent | Core language — covers 90% of typical imports |
| `java.sql` | java.sql, javax.sql | JDBC database access |
| `java.logging` | java.util.logging | JDK built-in logging |
| `java.net.http` | java.net.http | HTTP client (HttpClient, HttpRequest, HttpResponse) |
| `java.desktop` | java.awt, javax.swing | GUI applications |
| `java.xml` | javax.xml, org.xml.sax, org.w3c.dom | XML parsing and processing |
| `java.compiler` | javax.tools, javax.annotation.processing | Annotation processing and compilation |
For most application code, import module java.base; is all you need. If you do database work, add import module java.sql;. If you use the HTTP client, add import module java.net.http;. You can combine module imports with traditional imports freely.
A module import is not the same as importing every package with a star import. Here are the key rules:
1. Only public top-level types are imported. Nested classes, package-private classes, and non-exported packages are not imported. This is the same set of types you would get from a star import of every exported package.
2. Transitive dependencies are included. If module A requires transitive module B, then import module A; also imports all exported types from module B. For example, java.sql requires transitive java.logging and java.xml, so importing java.sql gives you logging and XML types too.
3. No runtime cost. Module imports are purely a compile-time convenience. The compiled bytecode contains exactly the same class references as if you had written individual imports. There is no additional classloading, no additional memory usage, and no performance difference.
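The transitive-closure rule from point 2 can be verified directly with the module API. A sketch (the exact set varies by JDK release, but it is expected to include java.logging and java.xml):

```java
import java.lang.module.ModuleDescriptor;
import java.lang.module.ModuleFinder;
import java.util.List;

// Find java.sql's dependencies declared as 'requires transitive' --
// these are the extra modules whose exported types an
// 'import module java.sql;' also makes visible.
ModuleDescriptor sql = ModuleFinder.ofSystem()
    .find("java.sql")
    .orElseThrow()
    .descriptor();

List<String> transitive = sql.requires().stream()
    .filter(r -> r.modifiers().contains(ModuleDescriptor.Requires.Modifier.TRANSITIVE))
    .map(ModuleDescriptor.Requires::name)
    .sorted()
    .toList();

System.out.println(transitive);  // expected to include java.logging and java.xml
```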
When you import entire modules, you can end up importing types with the same simple name from different packages. For example, java.base exports java.util.List, while java.desktop exports java.awt.List — import both modules and the simple name List refers to two different types. How does Java handle this?
The resolution follows a clear priority order:
| Priority | Import Type | Example | Wins When |
|---|---|---|---|
| 1 (highest) | Single-type import | `import java.util.List;` | Always wins — most specific |
| 2 | Package star import | `import java.util.*;` | Wins over module imports |
| 3 (lowest) | Module import | `import module java.base;` | Only if no higher-priority import matches |
Here is a practical example:
```java
// Scenario: Two modules export a class with the same simple name
import module java.base;     // exports java.util.List
import module java.desktop;  // exports java.awt.List

// This would be ambiguous -- compiler error!
// List<String> names = new ArrayList<>();  // Which List?

// Fix: Add a single-type import to disambiguate
import java.util.List;  // This takes priority over both module imports

List<String> names = new ArrayList<>();  // Resolves to java.util.List
```
The practical implication is simple: start with import module java.base;, and if the compiler reports an ambiguity, add a single-type import to resolve it. This is the exact same approach you use today when star imports clash — the resolution mechanism is familiar.
Within a single module, the JDK designers have already ensured there are no ambiguous simple names. You will only hit ambiguity when importing multiple modules that happen to export types with the same name, which is relatively rare in practice.
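The familiar star-import version of this clash is easy to reproduce on any JDK, no module imports required. In this sketch (the class name Disambiguate is made up for the demo), java.util.* and java.awt.* both supply a List, and the single-type import settles it:

```java
import java.util.*;     // provides java.util.List (and ArrayList)
import java.awt.*;      // also provides java.awt.List -- a simple-name clash
import java.util.List;  // single-type import wins: List below means java.util.List

public class Disambiguate {
    public static void main(String[] args) {
        // Without the single-type import above, this line would fail with
        // "reference to List is ambiguous".
        List<String> names = new ArrayList<>(List.of("alice", "bob"));
        System.out.println(List.class.getName()); // prints java.util.List
        System.out.println(names);                // prints [alice, bob]
    }
}
```

The resolution you see here is the same priority rule the compiler applies to module imports, just one level up the table.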
You can mix all three import styles in the same file:
import module java.base; // Module import -- all of java.base
import module java.sql; // Module import -- all of java.sql
import java.util.logging.*; // Star import -- one specific package
import com.myapp.util.StringUtils; // Single-type import -- one specific class
public class MyService {
// All types from java.base, java.sql, java.util.logging,
// and StringUtils are available here
}
This flexibility means you can adopt module imports gradually. Replace your most common star imports with a single import module java.base; and keep specific imports for third-party libraries that are not modularized.
To appreciate the difference, here are three real-world examples showing typical import blocks before and after module imports.
// BEFORE (22 imports)
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.stream.Collectors;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.nio.file.Path;
import java.nio.file.Files;
import java.util.function.Function;
import java.util.function.Predicate;
public class OrderService {
// ...
}
// AFTER (2 imports)
import module java.base;
import module java.net.http;
public class OrderService {
// Exact same code below -- all types resolve correctly
}
// BEFORE (12 imports)
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.List;
import java.util.ArrayList;
import java.util.Optional;
import java.time.LocalDateTime;
import java.time.Instant;
import java.math.BigDecimal;
import java.util.logging.Logger;
public class UserRepository {
// ...
}
// AFTER (2 imports)
import module java.base;
import module java.sql;
public class UserRepository {
// java.sql requires transitive java.logging,
// so Logger is available too
}
// BEFORE (16 imports)
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.Files;
import java.nio.file.StandardOpenOption;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.regex.Pattern;
import java.util.regex.Matcher;
public class FileProcessor {
// ...
}
// AFTER (1 import)
import module java.base;
public class FileProcessor {
// Everything is in java.base -- one import covers it all
}
The pattern is clear: for typical application code, import module java.base; eliminates 80-90% of your import statements. Add one or two more module imports for database, HTTP, or desktop work, and you cover virtually everything.
Now we get to the second half of Java’s simplification story. If module imports tackle the import problem, JEP 477 tackles the ceremony problem — the amount of boilerplate code required to write even the simplest Java program.
Here is the classic “Hello, World!” in Java, compared to other languages:
// Java (traditional) -- 5 lines of ceremony
public class HelloWorld {
public static void main(String[] args) {
System.out.println("Hello, World!");
}
}
// Python -- 1 line
// print("Hello, World!")
// JavaScript -- 1 line
// console.log("Hello, World!");
// Go -- still needs ceremony, but less
// func main() {
// fmt.Println("Hello, World!")
// }
To write one line of useful code in Java, you need to understand:
- What a class is and why all code must live inside one
- What the public access modifier means
- What static means
- What a void return type is
- What a String[] parameter is and how arrays work

That is a lot of concepts to explain before a new programmer can see their first output. JEP 477 removes most of this ceremony.
The first simplification: you no longer need public static void main(String[] args). Here are the new options:
// Option 1: Simplest possible main method
class HelloWorld {
void main() {
System.out.println("Hello, World!");
}
}
// Option 2: Main with args (if you need command-line arguments)
class HelloWorld {
void main(String[] args) {
System.out.println("Hello, " + args[0] + "!");
}
}
// Option 3: Static main still works (backward compatible)
class HelloWorld {
static void main() {
System.out.println("Hello, World!");
}
}
Notice what changed:
- No public required — the main method does not need to be public anymore
- No static required — it can be an instance method, meaning the JVM will create an instance of the class and call main() on it
- No String[] args required — if you do not need command-line arguments, leave them out

This is a significant improvement. A beginner can now write a program that uses only concepts they understand: a class has a method, the method does something. No static, no access modifiers, no array parameters.
The second simplification goes further: you do not need a class declaration at all. If a Java source file contains methods (including main) but no class declaration, the compiler wraps them in an implicitly declared class.
// File: HelloWorld.java
// No class declaration needed!
void main() {
System.out.println("Hello, World!");
}
That is it. One method, one line of logic. The file compiles and runs like any other Java program:
// Compile and run
// $ javac HelloWorld.java
// $ java HelloWorld
// Output: Hello, World!

// Or use the source-file launcher (no explicit compilation needed)
// $ java HelloWorld.java
// Output: Hello, World!
Behind the scenes, the compiler generates a class with the same name as the file (minus the .java extension). But you never see it, and you never have to think about it.
You can add fields and helper methods to an implicitly declared class, just like a regular class:
// File: Greeting.java
// Fields and helper methods work in implicitly declared classes
String greeting = "Hello";
void main() {
String name = "Java 25";
System.out.println(greet(name));
}
String greet(String name) {
return greeting + ", " + name + "!";
}
// Output: Hello, Java 25!
With multiple valid main method signatures now possible, the JVM follows a specific priority order when looking for the entry point. Understanding this protocol is important because it determines which main method runs if you have multiple candidates.
The JVM tries these signatures in order, picking the first one it finds:
| Priority | Signature | Notes |
|---|---|---|
| 1 | static void main(String[] args) | Traditional — highest priority for backward compatibility |
| 2 | static void main() | Static without args — new in Java 25 |
| 3 | void main(String[] args) | Instance method with args — JVM creates instance first |
| 4 | void main() | Instance method without args — simplest form |
Key points:
- Methods with String[] args take priority over methods without args
- public, protected, package-private, and private access are all valid (though private only works in implicitly declared classes)
- If your existing code declares public static void main(String[] args), it still runs first
// Example: Which main runs?
class MultipleMain {
// Priority 1 -- this one wins
static void main(String[] args) {
System.out.println("Static main with args");
}
// Priority 4 -- never reached
void main() {
System.out.println("Instance main without args");
}
}
// Output: Static main with args
For instance main methods, the JVM creates an instance of the class using the no-argument constructor before calling main(). This means the class must have a no-arg constructor (which every class has by default unless you declare a different constructor).
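The instance-creation step can be sketched portably: an instance method named main compiles on any modern JDK, and the static main below plays the role the Java 25 launcher plays when a class offers only an instance main(). The class name LauncherSketch is invented for this demo:

```java
public class LauncherSketch {
    // Instance state -- available to an instance main because the
    // launcher instantiates the class before making the call.
    int counter = 41;

    // On Java 25, for a class with no static main, the launcher would
    // call this directly (priority 4); here we invoke it manually.
    void main() {
        System.out.println("counter = " + (counter + 1)); // prints counter = 42
    }

    public static void main(String[] args) {
        // Conceptually what the launcher does for an instance main:
        // invoke the no-arg constructor first, then call main().
        new LauncherSketch().main();
    }
}
```

Declaring any constructor with parameters would remove the default no-arg constructor and break this protocol, which is why the no-arg constructor requirement matters.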
The real power comes when you combine both features: module imports and implicitly declared classes. Together, they give you what the JDK team calls “simple source files” — Java programs that are as concise as scripts.
Here is the transformation, step by step:
// Step 1: Traditional Java (Java 8 style)
import java.util.List;
import java.util.ArrayList;
import java.util.stream.Collectors;
public class NameProcessor {
public static void main(String[] args) {
List<String> names = new ArrayList<>(List.of("alice", "bob", "charlie"));
List<String> result = names.stream()
.map(String::toUpperCase)
.collect(Collectors.toList());
System.out.println(result);
}
}
// Step 2: Add module imports (remove import block)
import module java.base;
public class NameProcessor {
public static void main(String[] args) {
List<String> names = new ArrayList<>(List.of("alice", "bob", "charlie"));
List<String> result = names.stream()
.map(String::toUpperCase)
.collect(Collectors.toList());
System.out.println(result);
}
}
// Step 3: Use instance main (remove public, static, String[] args)
import module java.base;
public class NameProcessor {
void main() {
List<String> names = new ArrayList<>(List.of("alice", "bob", "charlie"));
List<String> result = names.stream()
.map(String::toUpperCase)
.collect(Collectors.toList());
System.out.println(result);
}
}
// Step 4: Implicitly declared class (remove class declaration)
import module java.base;
void main() {
List<String> names = new ArrayList<>(List.of("alice", "bob", "charlie"));
List<String> result = names.stream()
.map(String::toUpperCase)
.collect(Collectors.toList());
System.out.println(result);
}
From 14 lines down to 8. From 3 import statements and a class declaration down to a single module import. The actual logic did not change at all — it is the same streams, the same collections, the same method references. But the noise is gone.
There is one more convenience: in implicitly declared classes (files without a class declaration), java.base is automatically imported. You do not even need import module java.base;. This means the simplest version of the program above is:
// File: NameProcessor.java
// No imports needed! java.base is automatically imported
// for implicitly declared classes.
void main() {
var names = new ArrayList<>(List.of("alice", "bob", "charlie"));
var result = names.stream()
.map(String::toUpperCase)
.collect(Collectors.toList());
System.out.println(result);
}
This is as simple as Java gets. No imports, no class, no public static void main. Just the logic. Run it with java NameProcessor.java and it works.
Here are practical examples showing simple source files for common tasks:
// Example 1: Read a file and count words
// File: WordCounter.java
void main() {
var path = Path.of("document.txt");
try {
var lines = Files.readAllLines(path);
long wordCount = lines.stream()
.flatMap(line -> Stream.of(line.split("\\s+")))
.filter(word -> !word.isEmpty())
.count();
System.out.println("Word count: " + wordCount);
} catch (IOException e) {
System.err.println("Error reading file: " + e.getMessage());
}
}
// Example 2: Simple HTTP request
// File: QuickFetch.java
import module java.net.http;
void main() throws Exception {
var client = HttpClient.newHttpClient();
var request = HttpRequest.newBuilder()
.uri(URI.create("https://api.github.com/zen"))
.build();
var response = client.send(request, HttpResponse.BodyHandlers.ofString());
System.out.println("Status: " + response.statusCode());
System.out.println("Body: " + response.body());
}
// Example 3: CSV processor with helper methods
// File: CsvProcessor.java
void main() {
var data = List.of(
"Alice,Engineering,95000",
"Bob,Marketing,78000",
"Charlie,Engineering,102000",
"Diana,Marketing,81000"
);
var avgByDept = data.stream()
.map(line -> line.split(","))
.collect(Collectors.groupingBy(
parts -> parts[1],
Collectors.averagingDouble(parts -> Double.parseDouble(parts[2]))
));
avgByDept.forEach((dept, avg) ->
System.out.printf("%s: $%,.0f%n", dept, avg));
}
// Output:
// Engineering: $98,500
// Marketing: $79,500
These features are not intended to replace traditional Java class declarations in production codebases. They serve specific purposes where ceremony reduction matters most:
Education is the primary motivation. When teaching Java to beginners, you can now start with:
// Lesson 1: Your first Java program
void main() {
System.out.println("Hello, World!");
}
// Lesson 2: Variables and types
void main() {
String name = "Student";
int age = 20;
double gpa = 3.8;
System.out.println(name + " is " + age + " years old with GPA " + gpa);
}
// Lesson 3: Loops
void main() {
for (int i = 1; i <= 10; i++) {
System.out.println(i + " x 7 = " + (i * 7));
}
}
// Introduce classes, static, and access modifiers LATER,
// when students are ready for object-oriented concepts
Educators can now teach procedural programming first, then introduce object-oriented concepts when students have a foundation in variables, loops, and methods. This matches how most programming courses are structured -- Java was the outlier that forced OOP from line one.
Java has never been a scripting language, but simple source files bring it closer. Quick tasks like file processing, data transformation, or API testing become practical without setting up a full project:
// File: CleanupLogs.java
// Run with: java CleanupLogs.java
void main() throws Exception {
var logDir = Path.of("/var/log/myapp");
var cutoff = Instant.now().minus(Duration.ofDays(30));
try (var files = Files.list(logDir)) {
var deleted = files
.filter(f -> f.toString().endsWith(".log"))
.filter(f -> {
try {
return Files.getLastModifiedTime(f).toInstant().isBefore(cutoff);
} catch (IOException e) {
return false;
}
})
.peek(f -> {
try { Files.delete(f); } catch (IOException e) {
System.err.println("Failed to delete: " + f);
}
})
.count();
System.out.println("Deleted " + deleted + " old log files");
}
}
When you want to quickly test a library feature, validate an algorithm, or experiment with an API, simple source files remove the friction of creating a class, picking a package, and writing the main method signature:
// File: TestRegex.java
// Quick experiment -- does my regex work?
void main() {
var pattern = Pattern.compile("(\\d{4})-(\\d{2})-(\\d{2})");
var input = "The release date is 2025-09-16 and EOL is 2033-09-16.";
var matcher = pattern.matcher(input);
while (matcher.find()) {
System.out.printf("Full match: %s | Year: %s | Month: %s | Day: %s%n",
matcher.group(0), matcher.group(1), matcher.group(2), matcher.group(3));
}
}
// Output:
// Full match: 2025-09-16 | Year: 2025 | Month: 09 | Day: 16
// Full match: 2033-09-16 | Year: 2033 | Month: 09 | Day: 16
Competitive programmers care about typing speed and code brevity. Removing the class declaration, access modifiers, and static keyword saves time and reduces mistakes under pressure. The auto-import of java.base means all standard library types are available without import statements.
Implicitly declared classes are not full replacements for regular classes. They have specific limitations you need to understand:
| Restriction | Why | What to Do Instead |
|---|---|---|
| Cannot be referenced by other classes | Implicitly declared classes have no accessible name | Use a regular class declaration when other classes need to reference it |
| No constructors | The JVM uses the default no-arg constructor | Use initialization in field declarations or the main method |
| No extends or implements | No class declaration means no inheritance clause | Use a regular class if you need to extend or implement |
| Cannot define static members (except main) | Instance-oriented by design; static methods in an unnamed class add complexity | Use instance methods, or switch to a regular class |
| No package declaration | Implicitly declared classes are always in the unnamed package | Use a regular class for packaged code |
| Not part of a module | Lives in the unnamed module | Use a regular class for modular applications |
| One per file | Same rule as top-level classes | Keep each implicitly declared class in its own file |
Despite the restrictions, you can still do a lot:
// File: FeatureDemo.java
// All of these work in an implicitly declared class:
// Instance fields
int counter = 0;
String prefix = "Item";
// Instance methods
String format(int num) {
return prefix + "-" + String.format("%04d", num);
}
void processItems(List<String> items) {
for (String item : items) {
counter++;
System.out.println(format(counter) + ": " + item);
}
}
// Main entry point
void main() {
var items = List.of("Keyboard", "Monitor", "Mouse", "Headset");
processItems(items);
System.out.println("Total items processed: " + counter);
}
// Output:
// Item-0001: Keyboard
// Item-0002: Monitor
// Item-0003: Mouse
// Item-0004: Headset
// Total items processed: 4
Now that you understand both features, here are guidelines for when to use them:
| Scenario | Use Simplified? | Why |
|---|---|---|
| Teaching beginners | Yes | Reduces cognitive load, lets students focus on fundamentals |
| Quick scripts and one-off tools | Yes | Faster to write, no project setup needed |
| Prototyping and experiments | Yes | Get to the interesting code faster |
| Competitive programming | Yes | Saves keystrokes and reduces boilerplate errors |
| Production application code | No | Use full class declarations for maintainability |
| Library code | No | Other classes need to reference your types |
| Code that needs inheritance | No | Implicitly declared classes cannot extend or implement |
| Multi-class applications | No | Classes need to reference each other by name |
Module imports are useful even in production code. There is a reasonable debate about whether to use them:
Arguments for using import module java.base; in production:
- Dramatically fewer import lines, so files open on code rather than boilerplate
- Smaller diffs and fewer merge conflicts in the import block
- New JDK types become usable without touching the import list

Arguments against:
- The import list no longer documents exactly which types a file depends on
- Adding another module import can surface new simple-name ambiguities
- Some IDEs, linters, and static-analysis tools may not fully support module imports yet
Recommendation: For new projects and teams open to change, module imports are a net positive -- use them. For existing projects with established conventions, discuss with your team before switching. The important thing is consistency within a codebase.
If you want to adopt these features, here is a sensible path:
1. Start with scripts, prototypes, and experiments, where implicitly declared classes remove the most friction.
2. Try import module java.base; in new source files once your team agrees on the convention.
3. Keep full class declarations and explicit imports for production and library code, per the table above.
Let us build a complete, practical program using both features. This script reads a CSV file of employees, groups them by department, calculates statistics, and outputs a formatted report.
// File: EmployeeReport.java
// Run with: java EmployeeReport.java employees.csv
// No class declaration, no explicit imports -- java.base is auto-imported
void main(String[] args) {
if (args.length == 0) {
System.err.println("Usage: java EmployeeReport.java <csv-file>");
System.exit(1);
}
var file = Path.of(args[0]);
try {
var lines = Files.readAllLines(file);
// Skip header, parse each line
var employees = lines.stream()
.skip(1)
.map(line -> line.split(","))
.filter(parts -> parts.length >= 3)
.toList();
// Group by department
var byDept = employees.stream()
.collect(Collectors.groupingBy(parts -> parts[1].trim()));
// Print report
System.out.println("=".repeat(50));
System.out.println("EMPLOYEE REPORT - " + LocalDate.now());
System.out.println("=".repeat(50));
byDept.forEach((dept, members) -> {
var avgSalary = members.stream()
.mapToDouble(p -> Double.parseDouble(p[2].trim()))
.average()
.orElse(0);
var maxSalary = members.stream()
.mapToDouble(p -> Double.parseDouble(p[2].trim()))
.max()
.orElse(0);
System.out.printf("%n Department: %s%n", dept);
System.out.printf(" Headcount: %d%n", members.size());
System.out.printf(" Avg Salary: $%,.0f%n", avgSalary);
System.out.printf(" Max Salary: $%,.0f%n", maxSalary);
System.out.println(" Members:");
members.forEach(m ->
System.out.printf(" - %s ($%s)%n", m[0].trim(), m[2].trim()));
});
System.out.println("\n" + "=".repeat(50));
System.out.printf("Total employees: %d%n", employees.size());
System.out.println("=".repeat(50));
} catch (IOException e) {
System.err.println("Error reading file: " + e.getMessage());
System.exit(1);
}
}
That is a complete, runnable Java program with file I/O, streams, collectors, date formatting, and string formatting -- all in a single method with no class declaration and no imports. Run it with java EmployeeReport.java employees.csv and it just works.
Compare the first line of this program (void main(String[] args)) to the traditional version (public class EmployeeReport { public static void main(String[] args) {). The reduction in ceremony is dramatic, and the code is easier to read because it focuses entirely on what the program does rather than how Java requires it to be structured.
Java 25 finalizes two features that make the language significantly more approachable for beginners and more convenient for experienced developers writing scripts, prototypes, and small programs:
| Feature | What It Does | Best For |
|---|---|---|
| Module Import Declarations (JEP 476) | Import all public types from a module with import module java.base; | All Java code -- reduces import boilerplate everywhere |
| Instance Main Methods (part of JEP 477) | Write void main() without public, static, or String[] args | Simpler entry points, teaching, scripts |
| Implicitly Declared Classes (part of JEP 477) | Write methods without a class declaration; the file is the class | Scripts, prototypes, teaching, quick experiments |
| Auto-import of java.base | Implicitly declared classes get java.base imported automatically | Zero-import Java programs for simple tasks |
These features do not change how production Java applications are built. They add a new, simpler way to write Java when the full ceremony is unnecessary. Think of them as Java's casual Friday -- you can still wear the suit when you need to, but now you have the option to dress down when the situation calls for it.
The combination of module imports, instance main methods, and implicitly declared classes means that Java 25 has the simplest on-ramp of any Java version in the language's 30-year history. A new programmer can write their first Java program in one line. That is a significant achievement for a language that has always prioritized explicitness and structure.