Archive for November, 2017

Trace Analysis Patterns (Part 155)

Sunday, November 26th, 2017

When we analyze traces and logs, we work with only a small subset of log messages. We call any such current subset a Working Set, by analogy with working sets in operating system memory paging implementations:
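As a minimal illustration (the log format and the filter predicate below are hypothetical examples, not from any real trace), a working set is simply the subset of messages currently selected from the full trace:

```python
# A minimal sketch of a Working Set: the subset of log messages
# currently under analysis, selected from the full trace. The log
# format and the filter predicate are hypothetical examples.

log = [
    "0001 PID:4672 TID:1b68 calc!Compute start",
    "0002 PID:4672 TID:23a8 calc!Render start",
    "0003 PID:1234 TID:0aa0 svc!Heartbeat",
    "0004 PID:4672 TID:1b68 calc!Compute end",
]

def working_set(messages, predicate):
    """Select the current working set from the full message trace."""
    return [m for m in messages if predicate(m)]

# Current working set: messages from process 4672 only.
ws = working_set(log, lambda m: "PID:4672" in m)
print(len(ws))  # 3
```

Changing the predicate swaps in a different working set, much as paging changes which pages of a process are resident.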

This analysis pattern can also be reconciled with an operadic approach to trace and log analysis by chaining appropriate diagnostic operads from the original traces to the desired working sets:

- Dmitry Vostokov @ DumpAnalysis.org + TraceAnalysis.org -

Crash Dump Analysis Patterns (Part 134b)

Saturday, November 25th, 2017

This is a variant of the Data Correlation (function parameters) analysis pattern where we look at correlations across memory structures. Simple arithmetic ratios may link such structures and correlate the corresponding behavioral processes. Here we look at a recent instance of calc.exe consuming a lot of CPU. Upon discovering that process we were curious and saved its full process memory dump via Task Manager. In the dump we found 4 Spiking Threads:

0:000> !runaway f
User Mode Time
Thread       Time
13:1b68      0 days 1:51:39.906
10:23a8      0 days 1:51:37.796

11:1b98      0 days 0:00:09.890
14:88c       0 days 0:00:09.828

 1:2eb4      0 days 0:00:00.390
18:2a44      0 days 0:00:00.015
19:28f0      0 days 0:00:00.000
17:22c0      0 days 0:00:00.000
16:232c      0 days 0:00:00.000
15:2008      0 days 0:00:00.000
12:2880      0 days 0:00:00.000
 9:2f38      0 days 0:00:00.000
 8:1a98      0 days 0:00:00.000
 7:1dcc      0 days 0:00:00.000
 6:c58       0 days 0:00:00.000
 5:1550      0 days 0:00:00.000
 4:2938      0 days 0:00:00.000
 3:2b64      0 days 0:00:00.000
 2:2f90      0 days 0:00:00.000
 0:dc4       0 days 0:00:00.000
[...]

We see that #10/#13 approx. equals 1, and #11/#14 does too; equivalently, #10/#11 approx. equals #13/#14 in user-mode CPU consumption. If we look at kernel times, we see the same ratios:

[...]
Kernel Mode Time
Thread       Time
10:23a8      0 days 0:10:36.718
13:1b68      0 days 0:10:32.968

14:88c       0 days 0:00:23.859
11:1b98      0 days 0:00:23.812

 1:2eb4      0 days 0:00:00.218
 2:2f90      0 days 0:00:00.015
 0:dc4       0 days 0:00:00.015
19:28f0      0 days 0:00:00.000
18:2a44      0 days 0:00:00.000
17:22c0      0 days 0:00:00.000
16:232c      0 days 0:00:00.000
15:2008      0 days 0:00:00.000
12:2880      0 days 0:00:00.000
 9:2f38      0 days 0:00:00.000
 8:1a98      0 days 0:00:00.000
 7:1dcc      0 days 0:00:00.000
 6:c58       0 days 0:00:00.000
 5:1550      0 days 0:00:00.000
 4:2938      0 days 0:00:00.000
 3:2b64      0 days 0:00:00.000
[...]

Elapsed times are also correlated, and we see that the correlated threads were created in pairs {#10, #11} and {#13, #14}:

[...]
Elapsed Time
Thread       Time
 0:dc4       0 days 18:20:55.778
 1:2eb4      0 days 18:20:55.731
 2:2f90      0 days 18:20:55.725
 3:2b64      0 days 18:20:55.721
 4:2938      0 days 18:20:55.715
 5:1550      0 days 18:20:55.582
 6:c58       0 days 18:20:55.522
 7:1dcc      0 days 18:20:55.522
 8:1a98      0 days 18:20:55.522
 9:2f38      0 days 18:20:55.522
10:23a8      0 days 16:12:52.330
11:1b98      0 days 16:12:52.329
12:2880      0 days 16:12:52.195
13:1b68      0 days 16:11:44.822
14:88c       0 days 16:11:44.821
15:2008      0 days 16:11:44.693
16:232c      0 days 2:09:35.021
17:22c0      0 days 2:05:13.038
18:2a44      0 days 0:23:38.000
19:28f0      0 days 0:00:24.261

This suggests that the threads are related. We call this analysis pattern variant Data Correlation (CPU times). It may also help in finding weakly Coupled Processes.
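The ratio check above can be sketched in a few lines. The thread numbers and times are taken from the !runaway listings; this is only an illustration of the arithmetic, not part of the debugger session:

```python
# A sketch of the Data Correlation (CPU times) check: convert the
# !runaway times of the spiking threads to seconds and compare ratios.
# Thread numbers and times are copied from the listings above.

def seconds(h, m, s):
    return h * 3600 + m * 60 + s

user = {                              # user-mode CPU times
    13: seconds(1, 51, 39.906),
    10: seconds(1, 51, 37.796),
    11: seconds(0, 0, 9.890),
    14: seconds(0, 0, 9.828),
}

kernel = {                            # kernel-mode CPU times
    10: seconds(0, 10, 36.718),
    13: seconds(0, 10, 32.968),
    14: seconds(0, 0, 23.859),
    11: seconds(0, 0, 23.812),
}

# #13/#10 and #11/#14 are both close to 1 in user mode ...
assert abs(user[13] / user[10] - 1) < 0.01
assert abs(user[11] / user[14] - 1) < 0.01
# ... and the same holds for kernel mode, suggesting the
# coupled pairs {#10, #11} and {#13, #14}.
assert abs(kernel[10] / kernel[13] - 1) < 0.01
assert abs(kernel[14] / kernel[11] - 1) < 0.01
print("ratios consistent")
```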

- Dmitry Vostokov @ DumpAnalysis.org + TraceAnalysis.org -

Crash Dump Analysis Patterns (Part 253)

Saturday, November 25th, 2017

Usually threads are either Blocked, waiting for synchronization objects, or Active, running. There is a different category of threads that appear blocked in user space but may in fact be doing a lot of invisible work (not visible from the dump type) in kernel space. We call such an analysis pattern System Call. Compare these thread Stack Traces from a Stack Trace Collection:

# Call Site
00 ntdll!NtWaitForMultipleObjects
01 KERNELBASE!WaitForMultipleObjectsEx
02 user32!MsgWaitForMultipleObjectsEx
03 combase!ASTAWaitContext::KernelWait
04 combase!ASTAWaitContext::Wait
05 combase!CoMsgWaitInProcessEvents
06 Windows_UI!Windows::UI::Core::CDispatcher::WaitAndProcessMessages
07 Windows_UI!Windows::UI::Core::CDispatcher::ProcessEvents
08 Windows_UI_Xaml!CJupiterWindow::RunCoreWindowMessageLoop
09 Windows_UI_Xaml!CJupiterControl::RunMessageLoop
0a Windows_UI_Xaml!DirectUI::DXamlCore::RunMessageLoop
0b twinapi_appcore!Windows::ApplicationModel::Core::CoreApplicationView::Run
[...]
0e kernel32!BaseThreadInitThunk
0f ntdll!RtlUserThreadStart

# Call Site
00 user32!NtUserCallNoParam
01 user32!MsgWaitForMultipleObjectsEx
02 combase!ASTAWaitContext::KernelWait
03 combase!ASTAWaitContext::Wait
04 combase!CoMsgWaitInProcessEvents
05 Windows_UI!Windows::UI::Core::CDispatcher::WaitAndProcessMessages
06 Windows_UI!Windows::UI::Core::CDispatcher::ProcessEvents
07 Windows_UI_Xaml!CJupiterWindow::RunCoreWindowMessageLoop
08 Windows_UI_Xaml!CJupiterControl::RunMessageLoop
09 Windows_UI_Xaml!DirectUI::DXamlCore::RunMessageLoop
0a twinapi_appcore!Windows::ApplicationModel::Core::CoreApplicationView::Run
[...]
0d kernel32!BaseThreadInitThunk
0e ntdll!RtlUserThreadStart

The first is traditionally waiting, but the second was woken up to process an input message and may be either blocked in the kernel or doing some work there. In case of doubt, we can double-check the last executed instruction (from the stack trace perspective):

# Child-SP          RetAddr           Call Site
00 000000b9`358bf068 00007ffd`3de0a3be user32!NtUserCallNoParam+0xa
01 000000b9`358bf070 00007ffd`3dbdc106 user32!MsgWaitForMultipleObjectsEx+0x15e
[...]

0:000> ub user32!NtUserCallNoParam+0xa
user32!NtUserInvalidateRect:
00007ffd`3de1fc90 4c8bd1          mov     r10,rcx
00007ffd`3de1fc93 b807100000      mov     eax,1007h
00007ffd`3de1fc98 0f05            syscall
00007ffd`3de1fc9a c3              ret
00007ffd`3de1fc9b 0f1f440000      nop     dword ptr [rax+rax]
user32!NtUserCallNoParam:
00007ffd`3de1fca0 4c8bd1          mov     r10,rcx
00007ffd`3de1fca3 b808100000      mov     eax,1008h
00007ffd`3de1fca8 0f05            syscall
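A sketch of what such a check automates, assuming the stub bytes have already been read out of the dump (in a real session we would read them with db or u): on x64 the syscall stub is mov r10, rcx; mov eax, &lt;number&gt;; syscall, so if the two bytes immediately before the saved return address are 0F 05, the thread entered the kernel through that stub.

```python
# If the two bytes just before the user-mode return address decode as
# syscall (0F 05), the thread is inside a system call. The byte string
# below mirrors the user32!NtUserCallNoParam stub disassembled above;
# reading the bytes from an actual dump is out of scope for this sketch.

SYSCALL_OPCODE = b"\x0f\x05"

# Bytes of user32!NtUserCallNoParam starting at its entry point:
#   4c 8b d1          mov  r10, rcx
#   b8 08 10 00 00    mov  eax, 1008h
#   0f 05             syscall
stub = bytes.fromhex("4c8bd1b8081000000f05")

ret_offset = 0xA  # user32!NtUserCallNoParam+0xa from the stack trace

# The last executed instruction ends at the return address.
in_kernel = stub[ret_offset - 2:ret_offset] == SYSCALL_OPCODE

# The imm32 of "mov eax, imm32" is the system call number.
syscall_number = int.from_bytes(stub[4:8], "little")

print(in_kernel, hex(syscall_number))  # True 0x1008
```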

System Calls (especially those discovered in unmanaged Execution Residue data and via Past Stack Traces reconstruction from raw stack regions and Historical Information) may aid memory forensics (broadly speaking, the reconstruction of past behaviour) in addition to System Objects.

- Dmitry Vostokov @ DumpAnalysis.org + TraceAnalysis.org -