Crash Dump Analysis Patterns (Part 116)

During repeated execution, either on one computer or in parallel on many computers with uniform software and hardware, the VM size of a given process tends to cluster around some value range, for example, 40 - 60 Mb. If we get a collection of user process memory dumps taken from several production servers, say 20 files, we can either employ scripts to process all of them or compare their file sizes and, as a starting point, look for the bigger ones, for example, 85 or 110 Mb. For certain processes, for example, a print spooler, the process size tends to increase after a problem compared to normal execution. For other processes, certain error processing modules might be loaded, increasing VM size, or, in the case of incoming requests to a hung process, certain memory regions like the heap could grow as well, contributing to the increase in dump file size. We call this pattern Fat Process Dump. If we have fat and thin clients, we should also have thin and fat process dumps as well. A case study follows.
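For a quick triage of such a dump collection, even a simple file-size comparison can surface Fat Process Dump candidates before any debugger work. The sketch below is a minimal illustration, assuming the dumps sit in a single folder and that a size roughly 50% above the median deserves a second look; both the folder path and the threshold factor are placeholders, not fixed rules.

```python
import statistics
from pathlib import Path

# Hypothetical location of the collected user process dumps; adjust as needed.
DUMP_DIR = Path(r"C:\Dumps\Production")

# Collect (file name, size in Mb) for every .dmp file in the folder.
dumps = [(p.name, p.stat().st_size / (1024 * 1024))
         for p in DUMP_DIR.glob("*.dmp")]

if dumps:
    sizes = [size for _, size in dumps]
    median = statistics.median(sizes)

    # Flag "fat" dumps: here, anything at least 50% above the median size.
    # The 1.5 factor is an arbitrary starting point, not a fixed rule.
    for name, size in sorted(dumps, key=lambda d: d[1], reverse=True):
        marker = "  <-- candidate Fat Process Dump" if size > 1.5 * median else ""
        print(f"{name}: {size:.1f} Mb{marker}")
```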

- Dmitry Vostokov @ DumpAnalysis.org + TraceAnalysis.org -