Tips and tricks for analyzing Java virtual machine heap memory dumps
Memory dumps are usually something you would rather not deal with. In a perfect Java application, everything would run fine and memory would never run out. Unfortunately, such a perfect application does not exist, and chances are that you will run into "OutOfMemory" errors at some point or another. Memory dumps are a very useful feature of the JVM for analyzing the contents of memory at any given time, but using them requires some experience. In this post I will share some tips and tricks that I've learned over the years, in the hope that they will be useful to you too.
In a previous blog post, I illustrated how you could use the YourKit Profiler to analyze overall system performance, but I didn’t go into much detail concerning the generation and analysis of memory dumps. Now we will try to explain how you can generate these memory dumps and especially what pitfalls to avoid when transferring and analyzing them.
The dreaded "OutOfMemoryError" is thrown when the application exceeds the maximum allowed heap size, configured on the command line using the "-Xmx" option (or the platform default if no such option is given).
Generating memory dumps
There are different ways of generating memory dumps, and we will quickly list them.
Have them generated automatically when an OutOfMemoryError occurs. This is usually the most useful way of generating memory dumps, although you should be aware that, since dumping the heap is a disk-intensive task, it will pause the JVM for quite a long time, and if the application is under heavy load, it might even crash after (or sometimes even during) the generation of the dump file. You can activate automatic generation on the JVM command line, by using the following option:
-XX:+HeapDumpOnOutOfMemoryError
There are also other options you can use to control automatic memory dump generation, such as -XX:HeapDumpPath=path_to_file, which allows you to override the default file name (usually something like java_pid<pid>.hprof). Depending on the JVM version there might even be newer options, so check the official command line documentation for more information. It is usually acceptable to use this setting on a production server, since it normally shouldn't run out of memory, and if it does, you will want to understand what happened.
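Putting it together, a typical launch command might look like the following (the application jar, heap size and dump path are of course placeholders to adapt to your own setup):
java -Xmx2g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps/myapp.hprof -jar myapp.jar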
Using the jmap command line tool, as in the following example:
jmap -dump:format=b,file=path_to_file java_process_id
Use a JVM profiler such as YourKit and use its built-in memory dump feature.
(There is actually a fourth method, but it is a bit trickier, using jconsole.)
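For the curious, the jconsole approach consists of invoking the dumpHeap operation on the JVM's built-in com.sun.management:type=HotSpotDiagnostic MBean, which you can also call from code. Here is a minimal sketch, assuming a HotSpot-based JVM (the output file name is just an example):

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // Locate the same MBean that jconsole exposes under
        // com.sun.management:type=HotSpotDiagnostic
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // Second argument: true = dump only live (reachable) objects
        diagnostic.dumpHeap("manual-dump.hprof", true);
    }
}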
Once you have generated the dump, depending on the method used, you will find the dump file either at the specified path location or, if you didn't specify a location, in the JVM's working directory. If, for example, you launched Tomcat from the bin/startup.sh script, it will probably be located in the bin/ subdirectory.
Transferring memory dumps
As heap memory dumps can be large files, it is highly recommended that you compress them (using ZIP or TAR.GZ) before transferring them. They usually compress very well: for example, a 4.16GB hprof file can compress down to 525MB, making it 8x smaller! If you intend to send the file to someone else for analysis, you will save a lot of time in transfer.
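For example, on a Unix-like system (the file name is illustrative):
tar czf myapp-dump.hprof.tar.gz myapp-dump.hprof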
Analyzing memory dumps
The following tips and tricks will help your memory dump analysis go smoothly, so I highly recommend you use them.
Reduce the size of your application's heap as much as possible before generating memory dumps. Analyzing heap memory dumps requires a lot of memory itself: despite the tools' best efforts to reduce consumption, you will usually need at least as much free memory as the size of the uncompressed dump file. For a 4GB dump file, that means 4GB of free memory on the machine performing the analysis. Making the JVM heap as small as possible therefore produces smaller dumps, which are easier to analyze. Of course, this is not always possible, but when it is, it proves extremely useful.
Close as many applications as possible on the machine that will analyze the memory dump. Since, as noted in the previous point, analysis tools require lots of free memory, it is usually a good idea to temporarily dedicate the machine to the analysis tool by closing as many running applications (and daemons) as possible. I usually use my OS's task manager to see which applications are using the most memory and close those first. Unfortunately, this often means closing the Java IDE, itself a memory-hungry application, so when possible, use another machine to look at the code at the same time.
Usually, you will want to use some kind of class histogram view, which lists memory consumption by class type. This makes it easier to understand which class is consuming memory, which in turn helps you identify why so much memory is being used. Try to avoid analyzing low-level classes such as Strings or even primitive arrays such as byte[]. Instead, navigate up their references to find the objects that contain them and see which class is actually using the memory.
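Incidentally, if you just need a quick class histogram, the jmap tool can print one directly from a running process without generating a full dump (note that the :live option triggers a full GC first):
jmap -histo:live java_process_id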
Know the difference between shallow and retained sizes. The shallow size is the memory consumed directly by an object instance itself, that is to say, its header and its fields, NOT counting any referenced Java objects. It is usually quite small and not that interesting (except for primitive arrays, whose shallow size includes all their elements), so the retained size is usually more relevant. The retained size also includes the referenced Java objects that the instance keeps alive, which makes it much more expensive to calculate, and some tools defer the calculation because it is quite CPU intensive. The Eclipse Memory Analyzer, for example, shows estimated retained sizes and requires the user to explicitly trigger the calculation of precise ones. In general it is a good idea to calculate at least some of the precise retained sizes, because they can be VERY different from the estimates.
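As a purely hypothetical illustration of the difference:

// Hypothetical example. The shallow size of a Holder instance covers its
// object header plus the "id" field and the "payload" reference slot.
// Its retained size additionally includes the 1MB byte array, assuming
// Holder is the only object keeping that array alive.
class Holder {
    int id;
    byte[] payload = new byte[1024 * 1024];
}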
The Eclipse Memory Analyzer project has a very powerful feature called "group by value", which makes it possible to build an object query and group the instances by a field value. This is useful when you have a lot of instances containing a smaller set of possible values, and you want to see which values are used the most. This has really helped me understand some complex memory dumps, so I recommend you try it out.
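Conceptually, it does for heap objects what the following Java stream sketch does for in-memory objects (the CacheEntry class and its status field are hypothetical stand-ins for whatever you are inspecting):

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupByValueSketch {
    // Hypothetical class standing in for the instances you would inspect in the tool
    record CacheEntry(String status) {}

    public static void main(String[] args) {
        List<CacheEntry> entries = List.of(
                new CacheEntry("ACTIVE"), new CacheEntry("ACTIVE"), new CacheEntry("EXPIRED"));
        // Bucket instances by a field value and count them, so dominant values stand out
        Map<String, Long> countsByValue = entries.stream()
                .collect(Collectors.groupingBy(CacheEntry::status, Collectors.counting()));
        System.out.println(countsByValue); // e.g. {ACTIVE=2, EXPIRED=1} (order may vary)
    }
}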
There is a way to analyze serialized data! Some memory dumps might contain serialized data, for example data that was sent over the network into a buffer object and not yet deserialized. This is especially true for JGroups buffers. If your profiler offers the possibility to export the value of the serialized data to a file (Eclipse Memory Analyzer has this feature in the "Copy" -> "Save Value to file" contextual menu option), you can then use the jdeserialize tool to decode the data. In the case of Eclipse Memory Analyzer exports, you will need to skip the first byte of the file, using something like:
java -jar jdeserialize-1.3.jar -skipfirstbyte 1 test.dump
Knowing that you can actually inspect serialized data can be a lifesaver, because you might otherwise give up when you see a serialized data buffer when you can actually drill down into it (although there is not yet a fancy UI for that :)).
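If you happen to have the relevant classes on your classpath, you don't even need jdeserialize: a plain ObjectInputStream can do the job. A minimal sketch, assuming an Eclipse Memory Analyzer export named test.dump with the extra leading byte mentioned above:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;

public class SerializedPeek {
    public static void main(String[] args) throws Exception {
        // "test.dump" is the value exported from the memory analyzer
        try (InputStream in = new FileInputStream("test.dump")) {
            // Skip the extra first byte that Eclipse Memory Analyzer exports prepend
            if (in.skip(1) != 1) throw new IOException("could not skip header byte");
            try (ObjectInputStream ois = new ObjectInputStream(in)) {
                System.out.println(ois.readObject());
            }
        }
    }
}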
Did you know that JVM 1.6+ memory dumps contain thread dumps? Make sure you have a look at the thread dump, since it might help you understand what the threads were doing at the time of the memory dump. In the case of an OutOfMemoryError, you might even be able to pinpoint the source of the problem by combining the memory snapshot with the thread stacks.
Use temporary Amazon EC2 instances if you need more RAM to analyze memory dumps. If you absolutely need to analyze a large memory dump (8GB or more) and don't have access to hardware with enough physical memory, remember that you can simply run your favorite memory analysis tool on a temporary Amazon EC2 instance. In one case I started a Windows Amazon instance with 32GB of RAM for just an hour, installed the YourKit profiler on it, and instantly had a machine dedicated to memory dump analysis. All of this for less than a dollar :)
By default the Eclipse Memory Analyzer tool runs with a fairly small maximum heap size (1GB), so make sure you increase it before using it to open large dumps. You can find instructions on how to do this in the tool's documentation.
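In essence, you give the tool's own JVM a larger -Xmx. In a standalone Eclipse Memory Analyzer installation this means editing the MemoryAnalyzer.ini file next to the executable and adjusting the -Xmx line that follows -vmargs, for example (the size is illustrative):
-vmargs
-Xmx8g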
Sometimes the largest objects are no longer "live". If the memory dump captures an application that generates a lot of new objects very quickly, the JVM garbage collector might not yet have removed all the dead objects from memory at the time of the dump. So make sure you have a look at the "unreachable objects" size. The rule of thumb is this: if the total size of (live) objects in your analysis tool is much smaller than the size of the memory dump, you are very likely dealing with a dump that contains a lot of unreachable objects. Some tools take them into account automatically (as YourKit does), but in other cases (such as the Eclipse Memory Analyzer), you need to activate the processing of such objects.
The object explorer (or any similar functionality that lets you simply browse objects) might be more useful than you think. Sometimes looking at a few sample objects, even if they are not statistically significant within a large collection, helps build a better picture of how the data is structured in memory. So don't be afraid, even on large object collections, to drill down into a few instances to see whether everything looks all right (or not).
Although it is tempting, avoid using allocation recording as much as possible. Most profilers offer a feature called "record allocations". Most of the time this feature will slow the JVM to a crawl, so unless you have exhausted all other ways of analyzing the contents of the memory and finding the code that generates the objects, I strongly recommend against it. Personally, I tried to use it a lot at first, but now I rarely activate it at all. In general, any feature with a heavy performance impact is of limited use, since it completely changes the behavior of the application.
Wrapping up
I am sure there are still plenty more memory dump analysis tips and tricks out there, and I'd love to learn some from you. Feel free to add your own in the comments below.
Further reading
- Java OutOfMemoryError – A tragedy in seven acts: a very detailed (but not yet completed) series of articles on Java memory issues
- Memory Analysis Part 1 – Obtaining a Java Heapdump: detailed instructions on the different ways to obtain a Java heap memory dump
- Monitoring your server for performance: documentation from the Jahia Academy