Profiling in Java

In any Computer Science lecture that has anything to do with the topic of profiling, the story of how 'awk' got a 25% performance boost just by using gprof never fails to be mentioned. The story ends with the discovery of the offending loop that accounted for most of the wasted computation time.

What makes it such a good story is probably that a quarter of the performance drag hinged on just a single line of code. But before anybody embarks on their own witch hunt for such performance bottlenecks, I'd like to remind you of the other story that CS students are always told, the one related by the 'Grandpa of Computer Science', Donald Knuth, on premature optimization:

‘There is no doubt that the ‘grail’ of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.’

This is important, because the first thing a programmer should look out for is correctness. You can always speculate about where performance bottlenecks are, but unless you're using something as obviously bad as 'Bubblesort', you can't really be sure where the major performance overhead is going to come from.

This is why profilers are handy. They tell you exactly where the performance bottlenecks are, rather than leaving you to make wild guesses before implementation and to write obscure optimizations that will most likely add unnecessary complexity and potentially introduce more bugs.

Hprof

The equivalent of 'gprof' from the GNU/C world is named, unimaginatively, 'hprof' in Java, a natural succession from the letter 'g' to 'h', just as the 'C' language is the successor to 'B'. Gosh, aren't Computer Scientists lacking in imagination!

An example syntax for invoking a profiler looks like:

java -agentlib:hprof[=options] ToBeProfiledClass
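To have something concrete to point the profiler at, here's a minimal sketch of such a class. The name ToBeProfiledClass comes from the command above; the deliberately wasteful string concatenation is my own illustration of the kind of hot spot a profiler surfaces:

```java
// ToBeProfiledClass.java -- a tiny program with a deliberate hot spot,
// handy for trying out hprof's CPU and heap options.
public class ToBeProfiledClass {

    // Naive string concatenation allocates a new String (and its backing
    // char[]) on every iteration -- exactly the kind of allocation churn
    // that climbs to the top of an hprof SITES dump.
    static String concatNaively(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s = s + i;
        }
        return s;
    }

    public static void main(String[] args) {
        String result = concatNaively(10_000);
        System.out.println("length = " + result.length());
    }
}
```

Run it under the profiler and the concatenation loop should dominate both the CPU samples and the allocation sites.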

Hprof allows for the measurement of CPU cycles, memory usage, and thread traces, so it is absolutely helpful in determining the state of the application; it sometimes gives you additional insights for bug tracking from the traces, or helps locate the sources of memory leaks. I won't provide more usage details, but a more comprehensive run-through of hprof can be found here.
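For reference, a few example invocations, one per kind of measurement. The option names are the standard hprof agent's; the class name is the placeholder from the command above:

```shell
# CPU sampling every 10 ms with stack depth 8
java -agentlib:hprof=cpu=samples,interval=10,depth=8 ToBeProfiledClass

# Heap allocation sites -- this is what produces the SITES table below
java -agentlib:hprof=heap=sites ToBeProfiledClass

# Full heap dump in binary format (more on why binary matters later)
java -agentlib:hprof=heap=dump,format=b ToBeProfiledClass
```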

Understanding Hprof’s output

SITES BEGIN (ordered by live bytes) Sat Sep 29 13:46:46 2007
        percent          live          alloc'ed  stack class
rank   self  accum     bytes objs     bytes  objs trace name
  1 18.07% 18.07%    387192 16133    390912 16288 302005 java.lang.String
  2 12.04% 30.11%    258144 16134    258144 16134 302007 char[]
  3  6.14% 36.25%    131512    2    131512     2 301079 char[]
  4  6.12% 42.36%    131088    1    131088     1 301080 byte[]
  5  6.12% 48.48%    131072    1    131072     1 301679 byte[]
  6  3.06% 51.54%     65536    1     65536     1 301187 byte[]
  7  3.06% 54.59%     65536    1     65536     1 301677 byte[]
  8  1.60% 56.19%     34240  428     34800   435 302250 byte[]
  9  1.11% 57.30%     23856  426     23856   426 302248 com.sun.tools.javac.code.Symbol$MethodSymbol

The various columns indicate the following: 'rank' orders the allocation sites by live bytes; 'self' and 'accum' are the percentage of live bytes attributed to that site and the running cumulative total; the 'live' columns count bytes and objects still reachable, while the 'alloc'ed' columns count everything ever allocated at that site; 'stack trace' is the id of the allocating stack trace (listed as a TRACE entry elsewhere in the dump); and 'class name' is the type being allocated.

A profile dump is only meant to be an indicative measurement of the application at a given moment, so when you profile is important as well. You'll most likely want to execute a profile dump when the CPU load is high rather than when it is idle. The magic keystroke that allows you to do a dump is 'ctrl-\' on Linux and 'ctrl-break' on Windows.
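That keystroke simply delivers a signal to the JVM, so on Linux you can trigger the same dump from another terminal without touching the running process's console. The pid below is a placeholder:

```shell
jps                # list running JVM pids (jps ships with the JDK)
kill -QUIT 12345   # ctrl-\ is just SIGQUIT; hprof appends a fresh
                   # dump to its output file each time it is received
```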

Using Hat

Hat (or JHat, as it is now called in Java SE 6) is a profile analysis tool which allows you to make relational queries (like SQL) over memory usage, which is something that's pretty interesting.

( Screenshot of Hat’s Web Application Interface )

While I've known about Hat for quite a while, I gave up on it long ago after failing to get it to work. But recently a new bug report was filed for the error, which explains that the reason for my problem is that Hat is only able to read the binary format output from hprof.

So make sure you have the following parameter for binary format activated:

java -agentlib:hprof=heap=sites,format=b ToBeProfiledClass
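With a binary dump in hand, a typical session looks like the following. The filename is hprof's default for binary output; the port is jhat's default:

```shell
jhat java.hprof    # parses the dump, then starts a local web server
# once it prints "Server is ready.", browse to http://localhost:7000/
# and use the query page for those SQL-like questions about the heap
```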

If you read the bug report, you'll realise I wasn't the only person annoyed that such a trivial bug went unfixed for so long, not to mention the immense confusion it has caused. Perhaps someone should start renaming JHat to 'asshat' instead!