Archive for the ‘Science of Memory Dump Analysis’ Category

Memory Dump Analysis Walks

Tuesday, March 24th, 2009

One day last week, Dmitry was walking in Malahide Woods, thinking through his dangerous ideas about universal memory dumps and how to reconcile man-made PDB files with empirically discovered science files. Upon resolving the problem, Dmitry sat firmly on the ground and remained there happily for some time.

- Dmitry Vostokov @ DumpAnalysis.org -   

Unique Events and Historical Narratives

Friday, March 20th, 2009

Sometimes a problem like a crash or a hang never happens again: a so-called unique computational event, like the extinction of the dinosaurs if we apply biological metaphors. Biology copes with such events by constructing historical narratives and multiple probabilistic explanations with cross-examination of data. The same is true for memory dump analysis, where we construct possible explanations based on evidence and collected supporting data. As Ernst Mayr pointed out, we try to answer both questions: “How?” and “Why?”. Usually the answer to the first question is simple and straightforward, like a NULL pointer access (proximate, functional causation), and the answer to the second question is provided by testing various possible historical narratives (ultimate or evolutionary causation), possibly involving an animate agent (a human user of the system).

- Dmitry Vostokov @ DumpAnalysis.org -   

Memory Dumps and Philosophy of Science

Friday, March 13th, 2009

During the last week of March I’m planning to take a break to write a mini-treatise explaining my dangerous idea in detail:

Parameterized Science: Universal Memory Dumps and the Grand Unification (ISBN: 978-1906717650)

This small full-color publication should appear in print by the end of April and start an iterative and incremental publishing thread in philosophy.

- Dmitry Vostokov @ DumpAnalysis.org -

My Dangerous Idea: Parameterized Science

Thursday, March 12th, 2009

Today I found this book in a local bookshop but didn’t buy it because I couldn’t find enough dangerous ideas in it: 

What Is Your Dangerous Idea?: Today’s Leading Thinkers on the Unthinkable

So I offer my own dangerous idea in return: in the future, all sciences, engineering and technology will ultimately be fused and concerned with universal memory dumps of empirical data, where an appropriate symbol file will be used for every science as we know it today; these files are called science files. The set of science files can be considered a parameter, hence the name of this idea. In other words, there will be one Science of memory dump analysis and many sciences. All sciences will finally be unified.

Now the question: would it also be possible to discover new sciences by finding a suitable set of science files corresponding to a collected dump of empirical data?

- Dmitry Vostokov @ DumpAnalysis.org -

Is Memory Dump Analysis a Science?

Friday, March 6th, 2009

Based on John Moore’s eight science criteria, we can consider Memory Dump Analysis (MDA) a science:

1. MDA is based on data (memory dumps) collected in the field or in a repro/test environment.

2. Data (memory dumps) is collected to answer troubleshooting, debugging, or forensics and intelligence questions. Observations in memory dumps are made to support or refute answers to these questions.

3. Analysis of data (via memory dump analyzers, debuggers and log analyzers) is done objectively.

4. Troubleshooting, debugging or forensics hypotheses are developed; they are consistent with observations and compatible with the general conceptual framework of computer memory.

5. Troubleshooting, debugging or forensics hypotheses are tested and several comparable competing ones may be developed at any one time.

6. Generalizations are made that are valid universally within the domain of MDA.

7. The facts are confirmed independently.

8. Previously puzzling facts are explained.

It is also interesting to generalize the domain of MDA to empirical data collection via so-called universal memory dumps.

- Dmitry Vostokov @ DumpAnalysis.org -

Bugtation No.85

Saturday, February 28th, 2009

A contribution to Software Resistentialism:

Software objects can be classified scientifically into three major categories: those that don’t work, those that crash and those that hang.

Russell Wayne Baker

- Dmitry Vostokov @ DumpAnalysis.org -

Cantor Operating System (Part 1)

Wednesday, February 25th, 2009

Named after Georg Cantor, CAN.TOR.OS brings computation from the distant future into today: the transfinite worldview and universe of tomorrow into the finite worldview and universe of today. Cantor OS drives transfinite computing and saves transfinite memory dumps. More on this in subsequent parts, as I have to come back to finite memory dumps… One cautious note though: transfinite doesn’t mean absolute infinity, or God-like computation; the latter is the realm of Memory Religion.

(∞) TOR is a new transfinite operation in addition to the finite OR, AND and XOR.

- Dmitry Vostokov @ DumpAnalysis.org -

Transfinite Memory Dumps (Part 1)

Wednesday, February 25th, 2009

These dumps are larger than any finite memory dump and contain all of them inside (see the definition of a transfinite number). Think of them as a variant of the Library of Babel where all possible memory snapshots of your Windows or Linux PC are stored, including Googol dumps. If you have some code, then all possible code defects are there too. An interesting question then arises: if such a dump is collected, what kinds of patterns can we see there? Are these patterns extrapolated infinite versions of finite patterns, or do new ones emerge, specific to transfinite computations? More on this in the next parts.

- Dmitry Vostokov @ DumpAnalysis.org -

The Topos of Debugging

Sunday, February 15th, 2009

An idea struck me today while I was walking in People’s Park near Dun Laoghaire: to formalize various effective intuitive notions in memory dump analysis, debugging and troubleshooting using topos theory. More on this later.

- Dmitry Vostokov @ DumpAnalysis.org -

Geometrical Debugging (Part 1)

Tuesday, February 10th, 2009

Most (if not all) debugging is arithmetical. Here I would like to introduce a new kind of debugging and troubleshooting approach that interprets observables as objects in their own spaces, for example, the space of possible GUI forms. These spaces are not necessarily rational-valued spaces of simulation output or discrete arithmetic spaces of memory locations and values.

This geometrical approach applies modeling and systems theory to debugging and troubleshooting by treating them as mappings (or functions in the case of one-to-one or many-to-one mappings) from the space of all possible software environment states (SE) to the space(s) of observables. Here we have a family of mappings to different spaces:

f_i: SE → SO_i

Some observables can be found to be fixed, like the list of components, and the number of mappings can be reduced (i < j):

f_j: SE_{a,b,c,d,…} → SO_j

In every system and its environment, something is fixed as parameters (a, b, c, d, …): this could be the list of components as a high-level “genotype”, or it could be specific code (a low-level “genotype”), specific data, or a hardware specification. The whole family of mappings becomes parameterized. If we want, we can reduce the mappings even further by treating them as many-valued (one-to-many or many-to-many) if several observables belong to the same kind of space.

Let me illustrate this with an analogy from the modeling of a natural system. The system to be modeled is a falling ball together with its environment (the Earth). The system obviously has some internal structure (an abstract space of states, E), but we don’t know it. Fortunately, we can observe some measurable values, like the ball’s position at any time (Q). So we have these mappings for balls with different masses:

f_m: E → Q

We also find that for any individual ball its mass doesn’t change, so we abstract it as a parameter:

f: E_m → Q

The same modeling approach can be applied to a software system, be it an application or a service running inside an operating system, or a software system running inside hardware. The case of a pure software system abstracted from hardware is simple: the SE space could theoretically be the space of abstract memory dumps. In practice, we deal with the space of observables (universal memory dumps) that approximate SE and with spaces of software “phenotypes”: observable behaviour, like a distorted GUI, for example, or measured values of memory and CPU consumption or disk I/O throughput.
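The mapping family above can be sketched in code. This is a hypothetical illustration (the observables, parameters and their names are invented for this sketch, not part of any real system): fixing the parameters (a, b, c, d, …) turns each mapping f_i of the family into its parameterized form.

```python
from functools import partial

# Each observable is a mapping from (fixed parameters, varying state)
# to its own space SO_i.
def observe_gui(params, state):
    # Hypothetical observable: number of visible GUI forms.
    return state["open_forms"] if params["gui_enabled"] else 0

def observe_cpu(params, state):
    # Hypothetical observable: fraction of busy CPU capacity.
    return state["busy_threads"] / params["cpu_count"]

# The family f_i: SE -> SO_i, one mapping per observable space.
family = {"gui": observe_gui, "cpu": observe_cpu}

# Fixing the parameters (the "genotype": component list, code version,
# hardware spec) parameterizes the whole family: f_i: SE_params -> SO_i.
params = {"gui_enabled": True, "cpu_count": 4}
parameterized = {name: partial(f, params) for name, f in family.items()}

state = {"open_forms": 3, "busy_threads": 2}
print(parameterized["gui"](state))  # 3
print(parameterized["cpu"](state))  # 0.5
```

Here `functools.partial` plays the role of abstracting the fixed parameters, just as the ball’s mass m is abstracted out of f_m above.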

- Dmitry Vostokov @ DumpAnalysis.org -

The Source of Intuition about Infinite

Wednesday, February 4th, 2009

What is the source of our intuition about ∞, or ∞^∞, more powers of ∞, and even an ∞ number of powers? I believe that the underlying structure of our Universe, or at least of a universe as a model of the Universe (Infinite Memory, with perceived processes as limits and the Time Arrow as a bundle of sequences of memory pointers), provides the basis for our intuition about the infinite.

- Dmitry Vostokov @ DumpAnalysis.org

On Extraterrestrial Problem

Monday, January 26th, 2009

What if you are given a universal memory dump and want to find some intelligence artifacts in it? I think the problem is similar to searching for software artifacts in a computer memory dump, out of a quadrimemorillion of them, in the absence of symbol files and a suitable memory dump reader. Perhaps memory visualization techniques provide a direction for solving extraterrestrial problems too. This SETI association probably came to my mind when one of the readers of my memory religion post recalled his job application to the SETI Institute.

- Dmitry Vostokov @ DumpAnalysis.org -

Universal Memory Dump: A Definition

Friday, January 23rd, 2009

Applying a mathematical definition of a memory dump to natural systems, we can introduce:

Universal Memory Dump

    - a snapshot of observables describing the system.

Similar to software memory dump analysis we need a suitable reader and a set of:

Universal Symbol Files

    - semantic mappings or NDB (Nature Data Base) files.

Therefore we have these two categories of universal memory dumps:

  • Natural Memory Dumps
  • Software Memory Dumps
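As a hypothetical illustration only (the class names, fields and the sample observable below are invented for this sketch), the definition could be modeled as a snapshot of observables plus a semantic mapping that a reader resolves against it:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class UniversalMemoryDump:
    observables: dict[str, Any]  # a snapshot of observables describing the system
    kind: str = "natural"        # "natural" or "software"

@dataclass
class UniversalSymbolFile:
    # Semantic mappings (NDB files): observable name -> interpreter.
    mappings: dict[str, Callable[[Any], str]]

    def read(self, dump: UniversalMemoryDump) -> dict[str, str]:
        # A suitable reader resolves raw observables through the symbol file.
        return {name: interpret(dump.observables[name])
                for name, interpret in self.mappings.items()
                if name in dump.observables}

# A natural memory dump with one raw observable, and an NDB file
# that gives it semantics.
dump = UniversalMemoryDump({"temperature_k": 293.15}, kind="natural")
ndb = UniversalSymbolFile({"temperature_k": lambda t: f"{t - 273.15:.1f} C"})
print(ndb.read(dump))  # {'temperature_k': '20.0 C'}
```

The point of the sketch is the separation: the dump stores raw observables, while all interpretation lives in the symbol file, mirroring how PDB files give meaning to raw software memory dumps.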

- Dmitry Vostokov @ DumpAnalysis.org -

Memory Analysis and Debugging Institute

Saturday, December 27th, 2008

It had been my dream ever since I left Moscow State University to be associated with a research institute. Yesterday it became a reality with the announcement of

Memory Analysis & Debugging Institute (MA&DI).

From: http://www.dumpanalysis.org/madinstitute-announcement

- Dmitry Vostokov @ DumpAnalysis.org -

The Measure of Debugging and Memory Dump Analysis Complexity

Tuesday, December 16th, 2008

Recently I was asked how to measure the complexity of technical support cases, especially ones that require memory dump analysis. My first response was that it is a subjective, qualitative measure based mostly on experience and feeling. However, after careful consideration, I realized that nothing has changed over the last 5 years: the nature and causes of system or application hangs and crashes are still the same regardless of OS types and versions. Therefore the complexity measure shifts from a case description and its artifacts to an analyst, a memory dump reader. Here the number of queries (questions asked or commands executed to gather information for analysis) can be a good approximation to the measure of complexity. For example, 5 years ago I started with a few commands like !analyze -v, kv and dd and progressed to an elaborate checklist. The natural logarithm can be used to approximate the measure:

C = ln(N_dc), where N_dc is the number of debugging commands used.

Initially the complexity was ln(3) ≈ 1.1, and now, if someone uses 10 commands on average or asks 10 questions, the complexity is ln(10) ≈ 2.3. The analysis is more than twice as complex as it was.
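The measure is easy to compute; a quick sketch:

```python
import math

def analysis_complexity(n_commands: int) -> float:
    # C = ln(N_dc), where N_dc is the number of debugging
    # commands (or questions) used during analysis.
    return math.log(n_commands)

# Initial workflow: !analyze -v, kv and dd -> three commands.
c_initial = analysis_complexity(3)
# Elaborate checklist: ten commands or questions on average.
c_now = analysis_complexity(10)

print(round(c_initial, 1), round(c_now, 1))  # 1.1 2.3
print(round(c_now / c_initial, 1))           # 2.1, more than twice as complex
```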

- Dmitry Vostokov @ DumpAnalysis.org -

TOC from Dumps, Bugs and Debugging Forensics Book

Tuesday, November 25th, 2008

I’m pleased to announce that OpenTask has submitted the book Dumps, Bugs and Debugging Forensics: The Adventures of Dr. Debugalov for printing, and here is the link to the TOC:

Table of Contents

- Dmitry Vostokov @ DumpAnalysis.org

Breaking the Bug: Debugging as a Natural Phenomenon

Monday, November 24th, 2008

I had been thinking about the universal character of debugging for quite some time, and finally the following bugtation provided the inspiration for a new book title to be published during the Year of Debugging:

Title: Breaking the Bug: Debugging as a Natural Phenomenon
ISBN-13: 978-1906717377

More product details will be announced later.

Actually, I believe in the mystical nature of various debugging numbers and sequences. For example, the ISBN of this book ends in 377, which is the octal equivalent of decimal 255 (0n255 in debugger notation) or 0xFF.
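The conversion is easy to verify, e.g. in Python:

```python
# 377 in octal equals 255 in decimal (written 0n255 in WinDbg
# notation), which is 0xFF in hexadecimal.
assert int("377", 8) == 255 == 0xFF
print(oct(255), hex(255))  # 0o377 0xff
```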

- Dmitry Vostokov @ DumpAnalysis.org

Bugtation No.69

Monday, November 24th, 2008

“Breaking the” Bug: Debugging “as a Natural Phenomenon”

Daniel Clement Dennett, Breaking the Spell: Religion as a Natural Phenomenon

- Dmitry Vostokov @ DumpAnalysis.org -

DLL List Landscape

Sunday, November 23rd, 2008

DLL is also a recursive acronym for DLL List Landscape. OpenTask is soon going to publish the new full-color book:

Title: DLL List Landscape: The Art from Computer Memory Space
ISBN-13: 978-1-906717-36-0

More details will be announced tomorrow.  

- Dmitry Vostokov @ DumpAnalysis.org -

Googol Dump: A Computational SF Novel

Friday, November 7th, 2008

Science fiction books are among my favourites. I used to read lots of them (in Russian) during my school and university years. I also started reading science fiction in English 8 years ago, upon my arrival in Ireland, and one of Asimov’s Foundation books was the first English fiction book I read fully from cover to cover. Now I want to write something fictional related to memory dump analysis, and the notion of 3-dimensional memory dumps is a fascinating idea to exploit:

Googol Dump - a 3D memory dump where the 3rd dimension is a time arrow of computational 2D memory snapshots.

Note: one googol is 10^100, and this number of bits is sufficient to record the full history of memory snapshots for 64-bit, 128-bit, 256-bit and even 1024-bit computers running for thousands of years.
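As a rough sanity check of the 64-bit case (the snapshot rate and duration below are assumed figures for illustration, not from the post):

```python
# How many bits would the full snapshot history of a 64-bit machine need?
googol = 10 ** 100
bits_per_snapshot = (2 ** 64) * 8       # entire 64-bit byte-addressable space
snapshots_per_second = 10 ** 9          # assume one snapshot per nanosecond
seconds = 10_000 * 365 * 24 * 60 * 60   # assume ten thousand years of uptime

total_bits = bits_per_snapshot * snapshots_per_second * seconds
print(total_bits < googol)  # True: a googol of bits is ample here
```

Even under these generous assumptions the total is on the order of 10^40 bits, leaving an enormous margin below a googol.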

The novel is planned to be published next year (ISBN: 978-1906717322) and is written from the perspective of a debugger.

- Dmitry Vostokov @ DumpAnalysis.org -