Your app is slow, and you've been trying to solve the mystery for an hour. The logs show nothing useful. The GC metrics look vaguely suspicious but not alarming enough to actually blame.
JFR doesn't care about your logs. It goes straight to the JVM itself: CPU usage, thread states, lock contention, GC behavior, allocations, I/O. All of that data is written to a binary file you can open and read. It has been part of OpenJDK since Java 11, and it's also included in Liberica JDK 8 if you're still on Java 8, so there's nothing extra to install.
Let's look at what JFR can do, how to use it locally and in containers, and how to analyze its reports in JDK Mission Control.
What JFR Is (and Isn't)
Think of JFR as a flight recorder for your JVM. It captures runtime events with timestamps and writes them to a .jfr binary file. You analyze the results in JDK Mission Control, an open-source GUI that organizes events by category and flags suspicious patterns automatically.
The overhead is low enough for production use. You can start recording at JVM startup or attach to a running process on demand via jcmd, which is also in your JDK.
JFR is the right tool when the question is:
- What is this JVM actually doing?
- Where is it spending time?
- Are threads blocked, are allocations unusually high, is I/O holding things up?
It doesn't profile native libraries, it doesn't reconstruct a request's path across multiple services, and it won't tell you why a specific SQL query is slow — for those you need system profilers, distributed tracing, or database tools. But for JVM-level visibility, there's nothing easier to reach for.
Using JFR Locally
Let's start locally. The simplest way is to start a recording at JVM startup:
java -XX:+UnlockDiagnosticVMOptions \
-XX:+DebugNonSafepoints \
-XX:StartFlightRecording=duration=30s,filename=my-recording.jfr -jar app.jar
-XX:+UnlockDiagnosticVMOptions and -XX:+DebugNonSafepoints are optional but improve accuracy — they allow the JVM to record method samples at more precise points instead of only at safepoints.
If the app is already running, jcmd can start and stop a recording without a restart:
jcmd PID JFR.start
jcmd PID JFR.dump filename=my-recording.jfr
jcmd PID JFR.stop
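To verify that a recording is actually running before you dump it, JFR.check lists the active recordings for the process:
jcmd PID JFR.check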
For long-running apps, continuous recording keeps a rolling in-memory buffer. In JDK 25 the default max size is 250 MiB, which you can override:
-XX:StartFlightRecording=name=background,maxsize=1g
Combine maxsize with maxage=1h to keep the last hour of data, or use dumponexit=true to flush the buffer to disk when the JVM exits. JFR can generate a lot of data, so putting a cap on it keeps overhead manageable.
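Put together, a continuous background recording for a long-running service might look like this (the recording name and dump path are just examples):
java -XX:StartFlightRecording=name=background,maxsize=1g,maxage=1h,dumponexit=true,filename=/var/log/app/exit-dump.jfr -jar app.jar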
JFR in Containers
Profiling a containerized app is a bit more involved, depending on what you have access to.
Starting a recording at container startup
If you control the image build, you can bake the JFR flags directly into the image. Here's a multi-stage Dockerfile using BellSoft Hardened Images, minimal security-hardened images built on Liberica JDK (the runtime recommended by Spring) and lightweight Alpaquita Linux:
FROM bellsoft/hardened-liberica-runtime-container:jdk-25-stream-musl AS builder
WORKDIR /app
ADD my-app /app/my-app
RUN cd my-app && ./mvnw package
FROM bellsoft/hardened-liberica-runtime-container:jre-25-stream-musl
WORKDIR /app
COPY --from=builder /app/my-app/target/*.jar /app/my-app.jar
EXPOSE 8081
ENTRYPOINT ["java", \
"-XX:+UnlockDiagnosticVMOptions", \
"-XX:+DebugNonSafepoints", \
"-XX:StartFlightRecording=duration=30s,filename=/tmp/recording.jfr", \
"-XX:MaxRAMPercentage=80.0", \
"-jar", "/app/my-app.jar"]
Build and run as usual. The recording starts with the JVM.
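For example, assuming the Dockerfile above and placeholder image and container names:
docker build -t my-app .
docker run --name app -p 8081:8081 my-app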
Using buildpacks
If you're building container images with Cloud Native Buildpacks, one environment variable is enough. You can pass it to the pack CLI or set it in the Maven or Gradle plugin configuration. Here's the Maven example:
<BPL_JFR_ENABLED>true</BPL_JFR_ENABLED>
By default, this writes the recording to /tmp/recording.jfr on JVM exit. To control the duration and filename, add BPE_APPEND_JAVA_TOOL_OPTIONS:
<BPL_JFR_ENABLED>true</BPL_JFR_ENABLED>
<BPE_DELIM_JAVA_TOOL_OPTIONS xml:space="preserve"> </BPE_DELIM_JAVA_TOOL_OPTIONS>
<BPE_APPEND_JAVA_TOOL_OPTIONS>-XX:StartFlightRecording=duration=15s,filename=/tmp/recording.jfr</BPE_APPEND_JAVA_TOOL_OPTIONS>
For either approach, you retrieve the recording from the container with docker cp:
docker cp <container-id>:/tmp/recording.jfr .
Attaching to a running container
Both approaches above require the JFR flags to be present at startup. If the app is already running without them, you need jcmd, and if the image is JRE-only, distroless, or has no shell, you can't simply exec in and run it.
Ephemeral containers solve this. They're temporary containers that share the process namespace of another running container, which means jcmd inside the ephemeral container can see the JVM in the app container.
The ephemeral image only needs a JDK:
FROM bellsoft/hardened-liberica-runtime-container:jdk-25-stream-musl
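Build it under the prof-jcmd tag referenced in the attach command below:
docker build -t prof-jcmd .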
Run the app container normally. Then attach the ephemeral container to it:
docker run --cap-add SYS_PTRACE \
--security-opt=apparmor:unconfined \
--name ephemeral \
--pid=container:app \
-it prof-jcmd sh
--pid=container:app puts the ephemeral container in the same PID namespace. SYS_PTRACE lets it read JVM performance data. Once inside the ephemeral container, run jcmd with no arguments to list the running JVMs and find the application's PID, then start and dump a recording:
jcmd <PID> JFR.start
jcmd <PID> JFR.dump filename=/tmp/recording.jfr
jcmd <PID> JFR.stop
When done, copy the recording from the app container (not the ephemeral one):
docker cp <app-container-id>:/tmp/recording.jfr .
Reading the Evidence in Mission Control
Let's look at what a real recording actually tells you. I took a 60-second JFR recording of Neurowatch, a small Spring Boot demo service, and opened it in Liberica Mission Control, a distribution of JDK Mission Control.
Automated Analysis Results tab in Liberica Mission Control
The first thing Mission Control shows you is the Automated Analysis Results page: rule-based findings and tuning hints. Looking at Neurowatch's recording, there's a lot of memory used, long socket read pauses, dozens of exceptions, and GC behavior possibly indicating a memory leak. Wow. So much for a small demo service.
The rule-based findings flag what looks suspicious and suggest where to dig first. Take them as a signal, then go look. The recording was taken during app startup, and that context changes how you interpret everything.
Garbage Collection tab
GC is active, but the pauses are short. A few milliseconds each. The JVM is collecting often enough to show it's allocating, but handling it comfortably. GC is not the problem here.
Memory tab
byte[] accounts for around 60% of all allocations, followed by a Spring Boot class loader, URL, String, and int. That's buffer churn, string processing, and framework initialization: exactly what startup looks like.
Threads tab
The Threads tab shows whether the JVM is actively working or mostly waiting. Zooming into the timeline gives you a detailed breakdown of any thread's state at any moment during the recording window.
Methods tab
From the method profiling view, you can see how the application spent its time down to the method name. BCrypt.encipher appears near the top, which means some password hashing or security setup happened during the recording window. BCrypt is expensive by design, so this is expected if login or security initialization runs at startup.
Exceptions tab
The raw exception count is high. Most trace back to classloading, reflection, and framework internals — the usual noise of startup. Keep an eye on it if you profile during live traffic.
Environment tab
The Environment tab shows the JVM configuration and hardware details from the moment of capture. It's especially useful when you're reading someone else's recording.
The full picture: Neurowatch wasn't in trouble. We captured a Spring Boot app starting up — framework initialization, classpath scanning, security configuration, preload work. That explains the allocations, the short GC pauses, the BCrypt work, and the exception count.
Where to Go Next
Download JDK Mission Control and take a 30-second recording of any Java app you're running. The Automated Analysis page gives you something to look at immediately — even if it turns out the JVM was perfectly fine all along.
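If there's already a running JVM handy, one jcmd command is enough to grab that first snapshot (the filename is just an example):
jcmd <PID> JFR.start duration=30s filename=quick-look.jfr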
When you're ready to go further, BellSoft has several articles covering more advanced Java profiling topics with JFR.