4.2.Explanation of error messages from Memcheck
Memcheck issues a range of error messages. This section presents a
quick summary of what error messages mean. The precise behaviour of the
error-checking machinery is described in Details of Memcheck's checking machinery.
4.2.1.Illegal read / Illegal write errors
For example:
Invalid read of size 4
at 0x40F6BBCC: (within /usr/lib/libpng.so.2.1.0.9)
by 0x40F6B804: (within /usr/lib/libpng.so.2.1.0.9)
by 0x40B07FF4: read_png_image(QImageIO *) (kernel/qpngio.cpp:326)
by 0x40AC751B: QImageIO::read() (kernel/qimage.cpp:3621)
Address 0xBFFFF0E0 is not stack'd, malloc'd or free'd
This happens when your program reads or writes memory at a place
which Memcheck reckons it shouldn't. In this example, the program did a
4-byte read at address 0xBFFFF0E0, somewhere within the system-supplied
library libpng.so.2.1.0.9, which was called from somewhere else in the
same library, called from line 326 of qpngio.cpp, and so on.
Memcheck tries to establish what the illegal address might relate
to, since that's often useful. So, if it points into a block of memory
which has already been freed, you'll be informed of this, and also where
the block was freed. Likewise, if it should turn out to be just off
the end of a heap block, a common result of off-by-one errors in
array subscripting, you'll be informed of this fact, and also where
the block was allocated. If you use the --read-var-info option,
Memcheck will run more slowly but may give a more detailed description
of any illegal address.
In this example, Memcheck can't identify the address. Actually
the address is on the stack, but, for some reason, this is not a valid
stack address -- it is below the stack pointer and that isn't allowed.
In this particular case it's probably caused by GCC generating invalid
code, a known bug in some ancient versions of GCC.
Note that Memcheck only tells you that your program is about to
access memory at an illegal address. It can't stop the access from
happening. So, if your program makes an access which normally would
result in a segmentation fault, your program will still suffer the same
fate -- but you will get a message from Memcheck immediately prior to
this. In this particular example, reading junk on the stack is
non-fatal, and the program stays alive.
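For reference, the off-by-one heap case mentioned earlier can be
reproduced with a small sketch like the following (hypothetical, and
unrelated to the libpng report above); Memcheck reports the write and
notes that the address lies just past the end of the allocated block:
#include <stdlib.h>

int main(void)
{
   int *a = malloc(10 * sizeof(int));   /* block of 40 bytes */
   a[10] = 0;   /* off-by-one: invalid write of size 4 just past the block */
   free(a);
   return 0;
}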
4.2.2.Use of uninitialised values
For example:
Conditional jump or move depends on uninitialised value(s)
at 0x402DFA94: _IO_vfprintf (_itoa.h:49)
by 0x402E8476: _IO_printf (printf.c:36)
by 0x8048472: main (tests/manuel1.c:8)
An uninitialised-value use error is reported when your program
uses a value which hasn't been initialised -- in other words, is
undefined. Here, the undefined value is used somewhere inside the
printf machinery of the C library. This error was reported when
running the following small program:
#include <stdio.h>

int main()
{
  int x;
  printf ("x = %d\n", x);   /* x is used uninitialised here */
}
It is important to understand that your program can copy around
junk (uninitialised) data as much as it likes. Memcheck observes this
and keeps track of the data, but does not complain. A complaint is
issued only when your program attempts to make use of uninitialised
data in a way that might affect your program's externally-visible behaviour.
In this example, x is uninitialised. Memcheck observes the value being
passed to _IO_printf and thence to _IO_vfprintf, but makes no comment.
However, _IO_vfprintf has to examine the value of x so it can turn it
into the corresponding ASCII string, and it is at this point that
Memcheck complains.
Sources of uninitialised data tend to be:
Local variables in procedures which have not been initialised,
as in the example above.
The contents of heap blocks (allocated with malloc, new, or a similar
function) before you (or a constructor) write something there.
To see information on the sources of uninitialised data in your
program, use the --track-origins=yes
option. This
makes Memcheck run more slowly, but can make it much easier to track down
the root causes of uninitialised value errors.
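For instance, the heap-block case can be reproduced with a sketch like
this (hypothetical; with --track-origins=yes Memcheck also points at
the malloc call that created the undefined data):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
   int *p = malloc(sizeof(int));   /* contents are undefined until written */
   if (*p == 42)                   /* conditional jump depends on uninitialised value */
      printf("the answer\n");
   free(p);
   return 0;
}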
4.2.3.Use of uninitialised or unaddressable values in system calls
Memcheck checks all parameters to system calls:
It checks whether the direct parameters themselves are initialised.
Also, if a system call needs to read from a buffer provided by
your program, Memcheck checks that the entire buffer is addressable
and its contents are initialised.
Also, if the system call needs to write to a user-supplied
buffer, Memcheck checks that the buffer is addressable.
After the system call, Memcheck updates its tracked information to
precisely reflect any changes in memory state caused by the system
call.
Here's an example of two system calls with invalid parameters:
#include <stdlib.h>
#include <unistd.h>
int main( void )
{
char* arr = malloc(10);
int* arr2 = malloc(sizeof(int));
write( 1 /* stdout */, arr, 10 );
exit(arr2[0]);
}
You get these complaints ...
Syscall param write(buf) points to uninitialised byte(s)
at 0x25A48723: __write_nocancel (in /lib/tls/libc-2.3.3.so)
by 0x259AFAD3: __libc_start_main (in /lib/tls/libc-2.3.3.so)
by 0x8048348: (within /auto/homes/njn25/grind/head4/a.out)
Address 0x25AB8028 is 0 bytes inside a block of size 10 alloc'd
at 0x259852B0: malloc (vg_replace_malloc.c:130)
by 0x80483F1: main (a.c:5)
Syscall param exit(error_code) contains uninitialised byte(s)
at 0x25A21B44: __GI__exit (in /lib/tls/libc-2.3.3.so)
by 0x8048426: main (a.c:8)
... because the program has (a) written uninitialised junk
from the heap block to the standard output, and (b) passed an
uninitialised value to exit. Note that the first error refers to the
memory pointed to by buf (not buf itself), but the second error refers
directly to exit's argument arr2[0].
4.2.4.Illegal frees
For example:
Invalid free()
at 0x4004FFDF: free (vg_clientmalloc.c:577)
by 0x80484C7: main (tests/doublefree.c:10)
Address 0x3807F7B4 is 0 bytes inside a block of size 177 free'd
at 0x4004FFDF: free (vg_clientmalloc.c:577)
by 0x80484C7: main (tests/doublefree.c:10)
Memcheck keeps track of the blocks allocated by your program with
malloc/new, so it knows exactly whether or not the argument to
free/delete is legitimate. Here, this test program has freed the same
block twice. As with the illegal read/write errors, Memcheck attempts
to make sense of the address freed. If, as here, the address is one
which has previously been freed, you will be told that -- making
duplicate frees of the same block easy to spot. You will also get this
message if you try to free a pointer that doesn't point to the start
of a heap block.
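A minimal program that triggers this report (a sketch, not the
tests/doublefree.c shown above) is:
#include <stdlib.h>

int main(void)
{
   char *p = malloc(177);
   free(p);
   free(p);     /* second free of the same block: Invalid free() */
   return 0;
}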
4.2.5.When a heap block is freed with an inappropriate deallocation function
In the following example, a block allocated with new[] has wrongly
been deallocated with free:
Mismatched free() / delete / delete []
at 0x40043249: free (vg_clientfuncs.c:171)
by 0x4102BB4E: QGArray::~QGArray(void) (tools/qgarray.cpp:149)
by 0x4C261C41: PptDoc::~PptDoc(void) (include/qmemarray.h:60)
by 0x4C261F0E: PptXml::~PptXml(void) (pptxml.cc:44)
Address 0x4BB292A8 is 0 bytes inside a block of size 64 alloc'd
at 0x4004318C: operator new[](unsigned int) (vg_clientfuncs.c:152)
by 0x4C21BC15: KLaola::readSBStream(int) const (klaola.cc:314)
by 0x4C21C155: KLaola::stream(KLaola::OLENode const *) (klaola.cc:416)
by 0x4C21788F: OLEFilter::convert(QCString const &) (olefilter.cc:272)
In C++ it's important to deallocate memory in a way compatible with
how it was allocated. The deal is:
If allocated with malloc, calloc, realloc, valloc or memalign, you
must deallocate with free.
If allocated with new, you must deallocate with delete.
If allocated with new[], you must deallocate with delete[].
The worst thing is that on Linux apparently it doesn't matter if
you do mix these up, but the same program may then crash on a
different platform, Solaris for example. So it's best to fix it
properly. According to the KDE folks "it's amazing how many C++
programmers don't know this".
The reason behind the requirement is as follows. In some C++
implementations, delete[] must be used for objects allocated by new[]
because the compiler stores the size of the array and the
pointer-to-member to the destructor of the array's content just before
the pointer actually returned. delete doesn't account for this and
will get confused, possibly corrupting the heap.
4.2.6.Overlapping source and destination blocks
The following C library functions copy some data from one
memory block to another (or something similar):
memcpy, strcpy, strncpy, strcat, strncat.
The blocks pointed to by their src and dst pointers aren't allowed to
overlap.
The POSIX standards have wording along the lines "If copying takes place
between objects that overlap, the behavior is undefined." Therefore,
Memcheck checks for this.
For example:
==27492== Source and destination overlap in memcpy(0xbffff294, 0xbffff280, 21)
==27492== at 0x40026CDC: memcpy (mc_replace_strmem.c:71)
==27492== by 0x804865A: main (overlap.c:40)
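Code along the following lines could produce such a report (a
hypothetical sketch, unrelated to the overlap.c from the trace above):
#include <string.h>

int main(void)
{
   char buf[64] = "some text that will be copied over itself";
   memcpy(buf + 4, buf, 21);   /* source and destination ranges overlap */
   return 0;
}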
You don't want the two blocks to overlap because one of them could
get partially overwritten by the copying.
You might think that Memcheck is being overly pedantic reporting this
in the case where dst is less than src. For example, the obvious way
to implement memcpy is by copying from the first byte to the last.
However, the optimisation guides of some architectures recommend
copying from the last byte down to the first. Also, some
implementations of memcpy zero dst before copying, because zeroing the
destination's cache line(s) can improve performance.
The moral of the story is: if you want to write truly portable
code, don't make any assumptions about the language
implementation.
4.2.7.Fishy argument values
All memory allocation functions take an argument specifying the
size of the memory block that should be allocated. Clearly, the requested
size should be a non-negative value and is typically not excessively large.
For instance, it is extremely unlikely that the size of an allocation
request exceeds 2**63 bytes on a 64-bit machine. It is much more likely that
such a value is the result of an erroneous size calculation and is in effect
a negative value (that just happens to appear excessively large because
the bit pattern is interpreted as an unsigned integer).
Such a value is called a "fishy value".
The size argument of the following allocation functions is checked for
being fishy: malloc, calloc, realloc, memalign, new, new[],
__builtin_new, __builtin_vec_new.
For calloc both arguments are checked.
For example:
==32233== Argument 'size' of function malloc has a fishy (possibly negative) value: -3
==32233== at 0x4C2CFA7: malloc (vg_replace_malloc.c:298)
==32233== by 0x400555: foo (fishy.c:15)
==32233== by 0x400583: main (fishy.c:23)
In earlier Valgrind versions those values were referred to as "silly
arguments" and no back-trace was included.
4.2.8.Memory leak detection
Memcheck keeps track of all heap blocks issued in response to calls to
malloc/new et al. So when the program exits, it knows which blocks
have not been freed.
If --leak-check
is set appropriately, for each
remaining block, Memcheck determines if the block is reachable from pointers
within the root-set. The root-set consists of (a) general purpose registers
of all threads, and (b) initialised, aligned, pointer-sized data words in
accessible client memory, including stacks.
There are two ways a block can be reached. The first is with a
"start-pointer", i.e. a pointer to the start of the block. The second is with
an "interior-pointer", i.e. a pointer to the middle of the block. There are
several ways we know of that an interior-pointer can occur:
The pointer might have originally been a start-pointer and have been
moved along deliberately (or not deliberately) by the program. In
particular, this can happen if your program uses tagged pointers, i.e.
if it uses the bottom one, two or three bits of a pointer, which are
normally always zero due to alignment, in order to store extra
information.
It might be a random junk value in memory, entirely unrelated, just
a coincidence.
It might be a pointer to the inner char array of a C++ std::string.
For example, some compilers add 3 words at the beginning of the
std::string to store the length, the capacity and a reference count
before the memory containing the array of characters. They return a
pointer just after these 3 words, pointing at the char array.
Some code might allocate a block of memory, and use the first 8 bytes
to store (block size - 8) as a 64-bit number. sqlite3MemMalloc does
this.
It might be a pointer to an array of C++ objects (which possess
destructors) allocated with new[]. In this case, some compilers store
a "magic cookie" containing the array length at the start of the
allocated block, and return a pointer to just past that magic cookie,
i.e. an interior-pointer.
It might be a pointer to an inner part of a C++ object using
multiple inheritance.
You can optionally activate heuristics to use during the leak search
to detect the interior pointers corresponding to the stdstring,
length64, newarray and multipleinheritance cases. If the heuristic
detects that an interior pointer corresponds to such a case, the block
will be considered as reachable by the interior pointer. In other
words, the interior pointer will be treated as if it were a start
pointer.
With that in mind, consider the nine possible cases described by the
following figure.
       Pointer chain            AAA Leak Case   BBB Leak Case
       -------------            -------------   -------------
  (1)  RRR ------------> BBB                    DR
  (2)  RRR ---> AAA ---> BBB    DR              IR
  (3)  RRR               BBB                    DL
  (4)  RRR      AAA ---> BBB    DL              IL
  (5)  RRR ------?-----> BBB                    (y)DR, (n)DL
  (6)  RRR ---> AAA -?-> BBB    DR              (y)IR, (n)DL
  (7)  RRR -?-> AAA ---> BBB    (y)DR, (n)DL    (y)IR, (n)IL
  (8)  RRR -?-> AAA -?-> BBB    (y)DR, (n)DL    (y,y)IR, (n,y)IL, (_,n)DL
  (9)  RRR      AAA -?-> BBB    DL              (y)IL, (n)DL
Pointer chain legend:
- RRR: a root set node or DR block
- AAA, BBB: heap blocks
- --->: a start-pointer
- -?->: an interior-pointer
Leak Case legend:
- DR: Directly reachable
- IR: Indirectly reachable
- DL: Directly lost
- IL: Indirectly lost
- (y)XY: it's XY if the interior-pointer is a real pointer
- (n)XY: it's XY if the interior-pointer is not a real pointer
- (_)XY: it's XY in either case
Every possible case can be reduced to one of the above nine. Memcheck
merges some of these cases in its output, resulting in the following four
leak kinds.
"Still reachable". This covers cases 1 and 2 (for the BBB blocks)
above. A start-pointer or chain of start-pointers to the block is
found. Since the block is still pointed at, the programmer could, at
least in principle, have freed it before program exit. "Still reachable"
blocks are very common and arguably not a problem. So, by default,
Memcheck won't report such blocks individually.
"Definitely lost". This covers case 3 (for the BBB blocks) above.
This means that no pointer to the block can be found. The block is
classified as "lost", because the programmer could not possibly have
freed it at program exit, since no pointer to it exists. This is likely
a symptom of having lost the pointer at some earlier point in the
program. Such cases should be fixed by the programmer.
"Indirectly lost". This covers cases 4 and 9 (for the BBB blocks)
above. This means that the block is lost, not because there are no
pointers to it, but rather because all the blocks that point to it are
themselves lost. For example, if you have a binary tree and the root
node is lost, all its children nodes will be indirectly lost. Because
the problem will disappear if the definitely lost block that caused the
indirect leak is fixed, Memcheck won't report such blocks individually
by default.
"Possibly lost". This covers cases 5--8 (for the BBB blocks)
above. This means that a chain of one or more pointers to the block has
been found, but at least one of the pointers is an interior-pointer.
This could just be a random value in memory that happens to point into a
block, and so you shouldn't consider this ok unless you know you have
interior-pointers.
(Note: This mapping of the nine possible cases onto four leak kinds is
not necessarily the best way that leaks could be reported; in particular,
interior-pointers are treated inconsistently. It is possible the
categorisation may be improved in the future.)
Furthermore, if a suppression exists for a block, it will be reported
as "suppressed" no matter which of the above four kinds it belongs to.
The following is an example leak summary.
LEAK SUMMARY:
definitely lost: 48 bytes in 3 blocks.
indirectly lost: 32 bytes in 2 blocks.
possibly lost: 96 bytes in 6 blocks.
still reachable: 64 bytes in 4 blocks.
suppressed: 0 bytes in 0 blocks.
If heuristics have been used to consider some blocks as
reachable, the leak summary details the heuristically reachable subset
of 'still reachable:' per heuristic. In the below example, of the 95
bytes still reachable, 87 bytes (56+7+8+16) have been considered
heuristically reachable.
LEAK SUMMARY:
definitely lost: 4 bytes in 1 blocks
indirectly lost: 0 bytes in 0 blocks
possibly lost: 0 bytes in 0 blocks
still reachable: 95 bytes in 6 blocks
of which reachable via heuristic:
stdstring : 56 bytes in 2 blocks
length64 : 16 bytes in 1 blocks
newarray : 7 bytes in 1 blocks
multipleinheritance: 8 bytes in 1 blocks
suppressed: 0 bytes in 0 blocks
If --leak-check=full
is specified,
Memcheck will give details for each definitely lost or possibly lost block,
including where it was allocated. (Actually, it merges results for all
blocks that have the same leak kind and sufficiently similar stack traces
into a single "loss record". The --leak-resolution option lets you
control the meaning of "sufficiently similar".) It cannot tell you
when or how or why
the pointer to a leaked block was lost; you have to work that out for
yourself. In general, you should attempt to ensure your programs do not
have any definitely lost or possibly lost blocks at exit.
For example:
8 bytes in 1 blocks are definitely lost in loss record 1 of 14
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-tree.c:11)
by 0x........: main (leak-tree.c:39)
88 (8 direct, 80 indirect) bytes in 1 blocks are definitely lost in loss record 13 of 14
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-tree.c:11)
by 0x........: main (leak-tree.c:25)
The first message describes a simple case of a single 8 byte block
that has been definitely lost. The second case mentions another 8 byte
block that has been definitely lost; the difference is that a further 80
bytes in other blocks are indirectly lost because of this lost block.
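Code along the following lines gives rise to a "definitely lost"
record (a hypothetical sketch, not the leak-tree.c from the traces
above):
#include <stdlib.h>

int main(void)
{
   char *p = malloc(8);
   p = NULL;    /* the only pointer to the 8-byte block is overwritten */
   return 0;    /* at exit the block is "definitely lost" */
}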
The loss records are not presented in any notable order, so the loss record
numbers aren't particularly meaningful. The loss record numbers can be used
in the Valgrind gdbserver to list the addresses of the leaked blocks and/or give
more details about how a block is still reachable.
The option --show-leak-kinds=<set>
controls the set of leak kinds to show
when --leak-check=full
is specified.
The <set> of leak kinds is specified in one of the following ways:
- a comma separated list of one or more of
definite indirect possible reachable.
- all to specify the complete set (all leak kinds).
- none for the empty set.
The default value for the leak kinds to show is
--show-leak-kinds=definite,possible.
To also show the reachable and indirectly lost blocks in addition to
the definitely and possibly lost blocks, you can use
--show-leak-kinds=all. To only show the reachable and indirectly lost
blocks, use --show-leak-kinds=indirect,reachable. The reachable and
indirectly lost blocks will then be presented as shown in the
following two examples.
64 bytes in 4 blocks are still reachable in loss record 2 of 4
at 0x........: malloc (vg_replace_malloc.c:177)
by 0x........: mk (leak-cases.c:52)
by 0x........: main (leak-cases.c:74)
32 bytes in 2 blocks are indirectly lost in loss record 1 of 4
at 0x........: malloc (vg_replace_malloc.c:177)
by 0x........: mk (leak-cases.c:52)
by 0x........: main (leak-cases.c:80)
Because there are different kinds of leaks with different
severities, an interesting question is: which leaks should be
counted as true "errors" and which should not?
The answer to this question affects the numbers printed in
the ERROR SUMMARY
line, and also the
effect of the --error-exitcode
option. First, a leak
is only counted as a true "error"
if --leak-check=full
is specified. Then, the
option --errors-for-leak-kinds=<set>
controls
the set of leak kinds to consider as errors. The default value
is --errors-for-leak-kinds=definite,possible.
4.4.Writing suppression files
The basic suppression format is described in
Suppressing errors.
The suppression-type (second) line should have the form:
Memcheck:suppression_type
The Memcheck suppression types are as follows:
Value1, Value2, Value4, Value8, Value16,
meaning an uninitialised-value error when using a value of 1, 2, 4, 8
or 16 bytes.
Cond (or its old name, Value0), meaning use of an uninitialised CPU
condition code.
Addr1, Addr2, Addr4, Addr8, Addr16,
meaning an invalid address during a memory access of 1, 2, 4, 8 or 16
bytes respectively.
Jump, meaning a jump to an unaddressable location error.
Param, meaning an invalid system call parameter error.
Free, meaning an invalid or mismatching free.
Overlap, meaning a src/dst overlap in memcpy or a similar function.
Leak, meaning a memory leak.
Param
errors have a mandatory extra
information line at this point, which is the name of the offending
system call parameter.
Leak
errors have an optional
extra information line, with the following format:
match-leak-kinds:<set>
where <set>
specifies which
leak kinds are matched by this suppression entry.
<set> is specified in the same way as with the option
--show-leak-kinds, that is, one of the following:
- a comma separated list of one or more of
definite indirect possible reachable.
- all to specify the complete set (all leak kinds).
- none for the empty set.
If this optional extra line is not present, the suppression
entry will match all leak kinds.
Be aware that leak suppressions that are created using
--gen-suppressions
will contain this optional extra
line, and therefore may match fewer leaks than you expect. You may
want to remove the line before using the generated
suppressions.
The other Memcheck error kinds do not have extra lines.
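Putting the pieces together, a complete leak suppression entry might
look like this (the suppression name and the fun: frames in the
calling context are hypothetical):
{
   hypothetical_leak_suppression
   Memcheck:Leak
   match-leak-kinds: definite
   fun:malloc
   fun:make_node
   fun:main
}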
If you give the -v
option, Valgrind will print
the list of used suppressions at the end of execution.
For a leak suppression, this output gives the number of different
loss records that match the suppression, and the number of bytes
and blocks suppressed by the suppression.
If the run contains multiple leak checks, the number of bytes and blocks
are reset to zero before each new leak check. Note that the number of different
loss records is not reset to zero.
In the example below, in the last leak search, 7 blocks and 96 bytes
have been suppressed by a suppression with the name
some_leak_suppression:
--21041-- used_suppression: 10 some_other_leak_suppression s.supp:14 suppressed: 12,400 bytes in 1 blocks
--21041-- used_suppression: 39 some_leak_suppression s.supp:2 suppressed: 96 bytes in 7 blocks
For ValueN and AddrN errors, the first line of the calling context is
either the name of the function in which the error occurred, or,
failing that, the full path of the .so file or executable containing
the error location. For Free errors, the first line is the name of the
function doing the freeing (eg, free, __builtin_vec_delete, etc). For
Overlap errors, the first line is the name of the function with the
overlapping arguments (eg. memcpy, strcpy, etc).
The last part of any suppression specifies the rest of the
calling context that needs to be matched.
4.5.Details of Memcheck's checking machinery
Read this section if you want to know, in detail, exactly
what and how Memcheck is checking.
4.5.1.Valid-value (V) bits
It is simplest to think of Memcheck implementing a synthetic CPU
which is identical to a real CPU, except for one crucial detail. Every
bit (literally) of data processed, stored and handled by the real CPU
has, in the synthetic CPU, an associated "valid-value" bit, which says
whether or not the accompanying bit has a legitimate value. In the
discussions which follow, this bit is referred to as the V (valid-value)
bit.
Each byte in the system therefore has 8 V bits which follow it
wherever it goes. For example, when the CPU loads a word-size item (4
bytes) from memory, it also loads the corresponding 32 V bits from a
bitmap which stores the V bits for the process' entire address space.
If the CPU should later write the whole or some part of that value to
memory at a different address, the relevant V bits will be stored back
in the V-bit bitmap.
In short, each bit in the system has (conceptually) an associated V
bit, which follows it around everywhere, even inside the CPU. Yes, all the
CPU's registers (integer, floating point, vector and condition registers)
have their own V bit vectors. For this to work, Memcheck uses a great deal
of compression to represent the V bits compactly.
Copying values around does not cause Memcheck to check for, or
report on, errors. However, when a value is used in a way which might
conceivably affect your program's externally-visible behaviour,
the associated V bits are immediately checked. If any of these indicate
that the value is undefined (even partially), an error is reported.
Here's an (admittedly nonsensical) example:
int i, j;
int a[10], b[10];
for ( i = 0; i < 10; i++ ) {
j = a[i];
b[i] = j;
}
Memcheck emits no complaints about this, since it merely copies
uninitialised values from a[] into b[], and doesn't use them in a way
which could affect the behaviour of the program. However, if the loop
is changed to:
for ( i = 0; i < 10; i++ ) {
j += a[i];
}
if ( j == 77 )
printf("hello there\n");
then Memcheck will complain, at the if, that the condition depends on
uninitialised values. Note that it doesn't complain at the
j += a[i];, since at that point the undefinedness is not "observable".
It's only when a decision has to be made as to whether or not to do
the printf -- an observable action of your program -- that Memcheck
complains.
Most low level operations, such as adds, cause Memcheck to use the
V bits for the operands to calculate the V bits for the result. Even if
the result is partially or wholly undefined, it does not
complain.
Checks on definedness only occur in three places: when a value is used
to generate a memory address, when a control flow decision needs to be
made, and when a system call is detected, at which point Memcheck
checks the definedness of parameters as required.
If a check should detect undefinedness, an error message is
issued. The resulting value is subsequently regarded as well-defined.
To do otherwise would give long chains of error messages. In other
words, once Memcheck reports an undefined value error, it tries to
avoid reporting further errors derived from that same undefined
value.
This sounds overcomplicated. Why not just check all reads from
memory, and complain if an undefined value is loaded into a CPU
register? Well, that doesn't work well, because perfectly legitimate C
programs routinely copy uninitialised values around in memory, and we
don't want endless complaints about that. Here's the canonical example.
Consider a struct like this:
struct S { int x; char c; };
struct S s1, s2;
s1.x = 42;
s1.c = 'z';
s2 = s1;
The question to ask is: how large is struct S, in bytes? An int is 4
bytes and a char one byte, so perhaps a struct S occupies 5 bytes?
Wrong. All non-toy compilers we know of will round the size of
struct S up to a whole number of words, in this case 8 bytes. Not
doing this forces compilers to generate truly appalling code for
accessing arrays of struct S's on some architectures.
So s1 occupies 8 bytes, yet only 5 of them will be initialised. For
the assignment s2 = s1, GCC generates code to copy all 8 bytes
wholesale into s2 without regard for their meaning. If Memcheck simply
checked values as they came out of memory, it would yelp every time a
structure assignment like this happened. So the more complicated
behaviour described above is necessary. This allows GCC to copy s1
into s2 any way it likes, and a warning will only be emitted if the
uninitialised values are later used.
4.5.2.Valid-address (A) bits
Notice that the previous subsection describes how the validity of
values is established and maintained without having to say whether the
program does or does not have the right to access any particular memory
location. We now consider the latter question.
As described above, every bit in memory or in the CPU has an
associated valid-value (V) bit. In addition, all bytes in memory, but
not in the CPU, have an associated valid-address (A) bit. This
indicates whether or not the program can legitimately read or write that
location. It does not give any indication of the validity of the data
at that location -- that's the job of the V bits -- only whether or not
the location may be accessed.
Every time your program reads or writes memory, Memcheck checks
the A bits associated with the address. If any of them indicate an
invalid address, an error is emitted. Note that the reads and writes
themselves do not change the A bits, only consult them.
So how do the A bits get set/cleared? Like this:
When the program starts, all the global data areas are
marked as accessible.
When the program does malloc/new, the A bits for exactly the area
allocated, and not a byte more, are marked as accessible. Upon freeing
the area the A bits are changed to indicate inaccessibility.
When the stack pointer register (SP) moves up or down, A bits are set.
The rule is that the area from SP up to the base of the stack is
marked as accessible, and below SP is inaccessible. (If that sounds
illogical, bear in mind that the stack grows down, not up, on almost
all Unix systems, including GNU/Linux.) Tracking SP like this has the
useful side-effect that the section of stack used by a function for
local variables etc is automatically marked accessible on function
entry and inaccessible on exit.
When doing system calls, A bits are changed appropriately.
For example, mmap
magically makes files appear in the process'
address space, so the A bits must be updated if mmap
succeeds.
Optionally, your program can tell Memcheck about such changes
explicitly, using the client request mechanism described
above.
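For instance, a program with a custom allocator can adjust the A (and
V) bits itself using the client requests declared in
valgrind/memcheck.h; the hook functions below are hypothetical and
only illustrate the calls:
#include <stddef.h>
#include <valgrind/memcheck.h>

/* Hypothetical hooks in a custom allocator. */
void my_retire(void *p, size_t len)
{
   /* These bytes may no longer be read or written. */
   VALGRIND_MAKE_MEM_NOACCESS(p, len);
}

void my_reissue(void *p, size_t len)
{
   /* Addressable again, but the contents are undefined. */
   VALGRIND_MAKE_MEM_UNDEFINED(p, len);
}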
4.5.3.Putting it all together
Memcheck's checking machinery can be summarised as
follows:
Each byte in memory has 8 associated V (valid-value) bits,
saying whether or not the byte has a defined value, and a single A
(valid-address) bit, saying whether or not the program currently has
the right to read/write that address. As mentioned above, heavy
use of compression means the overhead is typically around 25%.
When memory is read or written, the relevant A bits are
consulted. If they indicate an invalid address, Memcheck emits an
Invalid read or Invalid write error.
When memory is read into the CPU's registers, the relevant V
bits are fetched from memory and stored in the simulated CPU. They
are not consulted.
When a register is written out to memory, the V bits for that
register are written back to memory too.
When values in CPU registers are used to generate a memory
address, or to determine the outcome of a conditional branch, the V
bits for those values are checked, and an error emitted if any of
them are undefined.
When values in CPU registers are used for any other purpose,
Memcheck computes the V bits for the result, but does not check
them.
Once the V bits for a value in the CPU have been checked, they
are then set to indicate validity. This avoids long chains of
errors.
When values are loaded from memory, Memcheck checks the A bits
for that location and issues an illegal-address warning if needed.
In that case, the V bits loaded are forced to indicate Valid,
despite the location being invalid.
This apparently strange choice reduces the amount of confusing
information presented to the user. It avoids the unpleasant
phenomenon in which memory is read from a place which is both
unaddressable and contains invalid values, and, as a result, you get
not only an invalid-address (read/write) error, but also a
potentially large set of uninitialised-value errors, one for every
time the value is used.
There is a hazy boundary case to do with multi-byte loads from
addresses which are partially valid and partially invalid. See the
description of the --partial-loads-ok option for details.
Memcheck intercepts calls to malloc, calloc, realloc, valloc,
memalign, free, new, new[], delete and delete[]. The behaviour you get
is:
malloc/new/new[]:
the returned memory is marked as addressable but not having valid
values. This means you have to write to it before you can read
it.
calloc: returned memory is marked both addressable and valid, since
calloc clears the area to zero.
realloc: if the new size is larger than the old, the new section is
addressable but invalid, as with malloc. If the new size is smaller,
the dropped-off section is marked as unaddressable. You may only pass
to realloc a pointer previously issued to you by
malloc/calloc/realloc.
free/delete/delete[]:
you may only pass to these functions a pointer previously issued
to you by the corresponding allocation function. Otherwise,
Memcheck complains. If the pointer is indeed valid, Memcheck
marks the entire area it points at as unaddressable, and places
the block in the freed-blocks-queue. The aim is to defer as long
as possible reallocation of this block. Until that happens, all
attempts to access it will elicit an invalid-address error, as you
would hope.
4.8.Memory Pools: describing and working with custom allocators
Some programs use custom memory allocators, often for performance
reasons. Left to itself, Memcheck is unable to understand the
behaviour of custom allocation schemes as well as it understands the
standard allocators, and so may miss errors and leaks in your program. What
this section describes is a way to give Memcheck enough of a description of
your custom allocator that it can make at least some sense of what is
happening.
There are many different sorts of custom allocator, so Memcheck
attempts to reason about them using a loose, abstract model. We
use the following terminology when describing custom allocation
systems:
Custom allocation involves a set of independent "memory pools".
Memcheck's notion of a memory pool consists of a single "anchor
address" and a set of non-overlapping "chunks" associated with the
anchor address.
Typically a pool's anchor address is the address of a book-keeping
"header" structure.
Typically the pool's chunks are drawn from a contiguous "superblock"
acquired through the system malloc or mmap.
Keep in mind that the last two points above say "typically": the
Valgrind mempool client request API is intentionally vague about the
exact structure of a mempool. There is no specific mention made of
headers or superblocks. Nevertheless, the following picture may help
elucidate the intention of the terms in the API:
"pool"
(anchor address)
|
v
+--------+---+
| header | o |
+--------+-|-+
|
v superblock
+------+---+--------------+---+------------------+
| |rzB| allocation |rzB| |
+------+---+--------------+---+------------------+
^ ^
| |
"addr" "addr"+"size"
Note that the header and the superblock may be contiguous or
discontiguous, and there may be multiple superblocks associated with a
single header; such variations are opaque to Memcheck. The API
only requires that your allocation scheme can present sensible values
of "pool", "addr" and "size".
Typically, before making client requests related to mempools, a client
program will have allocated such a header and superblock for their
mempool, and marked the superblock NOACCESS using the
VALGRIND_MAKE_MEM_NOACCESS
client request.
When dealing with mempools, the goal is to maintain a particular
invariant condition: that Memcheck believes the unallocated portions
of the pool's superblock (including redzones) are NOACCESS. To
maintain this invariant, the client program must ensure that the
superblock starts out in that state; Memcheck cannot make it so, since
Memcheck never explicitly learns about the superblock of a pool, only
the allocated chunks within the pool.
Once the header and superblock for a pool are established and properly
marked, there are a number of client requests programs can use to
inform Memcheck about changes to the state of a mempool:
- VALGRIND_CREATE_MEMPOOL(pool, rzB, is_zeroed):
This request registers the address pool as the anchor address for a
memory pool. It also provides a size rzB, specifying how large the
redzones placed around chunks allocated from the pool should be.
Finally, it provides an is_zeroed argument that specifies whether the
pool's chunks are zeroed (more precisely: defined) when allocated.
Upon completion of this request, no chunks are associated with the
pool. The request simply tells Memcheck that the pool exists, so that
subsequent calls can refer to it as a pool.
- VALGRIND_CREATE_MEMPOOL_EXT(pool, rzB, is_zeroed, flags):
Create a memory pool with some flags (that can be OR-ed together)
specifying extended behaviour. When flags is zero, the behaviour is
identical to VALGRIND_CREATE_MEMPOOL.
The flag VALGRIND_MEMPOOL_METAPOOL specifies that the pieces of memory
associated with the pool using VALGRIND_MEMPOOL_ALLOC will be used by
the application as superblocks to dole out MALLOC_LIKE blocks using
VALGRIND_MALLOCLIKE_BLOCK. In other words, a meta pool is a two-level
pool: the first level is the blocks described by
VALGRIND_MEMPOOL_ALLOC; the second level blocks are described using
VALGRIND_MALLOCLIKE_BLOCK. Note that the association between the pool
and the second level blocks is implicit: second level blocks will be
located inside first level blocks. It is necessary to use the
VALGRIND_MEMPOOL_METAPOOL flag for such two-level pools, as otherwise
Valgrind will detect overlapping memory blocks, and will abort
execution (e.g. during leak search).
Such a meta pool can also be marked as an 'auto free' pool using the
flag VALGRIND_MEMPOOL_AUTO_FREE, which must be OR-ed together with
VALGRIND_MEMPOOL_METAPOOL. For an 'auto free' pool,
VALGRIND_MEMPOOL_FREE will automatically free the second level blocks
that are contained inside the first level block freed with
VALGRIND_MEMPOOL_FREE. In other words, calling VALGRIND_MEMPOOL_FREE
will cause implicit calls to VALGRIND_FREELIKE_BLOCK for all the
second level blocks included in the first level block.
Note: it is an error to use the VALGRIND_MEMPOOL_AUTO_FREE flag
without the VALGRIND_MEMPOOL_METAPOOL flag.
VALGRIND_DESTROY_MEMPOOL(pool):
This request tells Memcheck that a pool is being torn down. Memcheck
then removes all records of chunks associated with the pool, as well
as its record of the pool's existence. While destroying its records of
a mempool, Memcheck resets the redzones of any live chunks in the pool
to NOACCESS.
VALGRIND_MEMPOOL_ALLOC(pool, addr, size):
This request informs Memcheck that a size-byte chunk has been
allocated at addr, and associates the chunk with the specified pool.
If the pool was created with nonzero rzB redzones, Memcheck will mark
the rzB bytes before and after the chunk as NOACCESS. If the pool was
created with the is_zeroed argument set, Memcheck will mark the chunk
as DEFINED, otherwise Memcheck will mark the chunk as UNDEFINED.
VALGRIND_MEMPOOL_FREE(pool, addr):
This request informs Memcheck that the chunk at addr
should no longer be considered allocated. Memcheck will mark the chunk
associated with addr
as NOACCESS, and delete its
record of the chunk's existence.
- VALGRIND_MEMPOOL_TRIM(pool, addr, size):
This request trims the chunks associated with pool. The request only
operates on chunks associated with pool. Trimming is formally defined
as:
All chunks entirely inside the range addr..(addr+size-1) are
preserved.
All chunks entirely outside the range addr..(addr+size-1) are
discarded, as though VALGRIND_MEMPOOL_FREE was called on them.
All other chunks must intersect with the range addr..(addr+size-1);
areas outside the intersection are marked as NOACCESS, as though they
had been independently freed with VALGRIND_MEMPOOL_FREE.
This is a somewhat rare request, but can be useful in
implementing the type of mass-free operations common in custom
LIFO allocators.
- VALGRIND_MOVE_MEMPOOL(poolA, poolB):
This request informs Memcheck that the pool previously anchored at
address poolA has moved to anchor address poolB. This is a rare
request, typically only needed if you realloc the header of a mempool.
No memory-status bits are altered by this request.
- VALGRIND_MEMPOOL_CHANGE(pool, addrA, addrB, size):
This request informs Memcheck that the chunk previously allocated at
address addrA within pool has been moved and/or resized, and should be
changed to cover the region addrB..(addrB+size-1). This is a rare
request, typically only needed if you realloc a superblock or wish to
extend a chunk without changing its memory-status bits.
No memory-status bits are altered by this request.
VALGRIND_MEMPOOL_EXISTS(pool):
This request informs the caller whether or not Memcheck is currently
tracking a mempool at anchor address pool. It evaluates to 1 when
there is a mempool associated with that address, 0 otherwise. This is
a rare request, only useful in circumstances when client code might
have lost track of the set of active mempools.
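As a concrete illustration, here is a minimal sketch of a toy bump
allocator annotated with these requests. The Pool structure and the
pool_* functions are hypothetical and exist only to show where the
client requests go:
#include <stdlib.h>
#include <valgrind/valgrind.h>
#include <valgrind/memcheck.h>

typedef struct {
   char  *super;     /* superblock the chunks are carved from */
   size_t used;      /* bump-allocation cursor */
   size_t size;
} Pool;              /* the Pool's address serves as the anchor address */

Pool *pool_create(size_t size)
{
   Pool *p = malloc(sizeof(Pool));
   p->super = malloc(size);
   p->used  = 0;
   p->size  = size;
   /* Establish the invariant: unallocated superblock space is NOACCESS. */
   VALGRIND_MAKE_MEM_NOACCESS(p->super, size);
   VALGRIND_CREATE_MEMPOOL(p, /*rzB=*/0, /*is_zeroed=*/0);
   return p;
}

void *pool_alloc(Pool *p, size_t n)
{
   /* Sketch only: no bounds checking against p->size. */
   void *addr = p->super + p->used;
   p->used += n;
   VALGRIND_MEMPOOL_ALLOC(p, addr, n);   /* chunk becomes addressable, UNDEFINED */
   return addr;
}

void pool_destroy(Pool *p)
{
   VALGRIND_DESTROY_MEMPOOL(p);
   free(p->super);
   free(p);
}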
4.9.Debugging MPI Parallel Programs with Valgrind
Memcheck supports debugging of distributed-memory applications which
use the MPI message passing standard. This support consists of a
library of wrapper functions for the PMPI_* interface. When
incorporated into the application's address space, either by direct
linking or by LD_PRELOAD, the wrappers intercept calls to PMPI_Send,
PMPI_Recv, etc. They then use client requests to inform Memcheck of
memory state changes caused by the function being wrapped. This
reduces the number of false positives that Memcheck otherwise
typically reports for MPI applications.
The wrappers also take the opportunity to carefully check
size and definedness of buffers passed as arguments to MPI functions, hence
detecting errors such as passing undefined data to
PMPI_Send, or receiving data into a buffer which is too small.
Unlike most of the rest of Valgrind, the wrapper library is subject to a
BSD-style license, so you can link it into any code base you like.
See the top of mpi/libmpiwrap.c
for license details.
4.9.1.Building and installing the wrappers
The wrapper library will be built automatically if possible.
Valgrind's configure script will look for a suitable mpicc to build it
with. This must be the same mpicc you use to build the MPI application
you want to debug. By default, Valgrind tries mpicc, but you can
specify a different one by using the configure-time option
--with-mpicc. Currently the wrappers are only buildable with mpiccs
which are based on GNU GCC or Intel's C++ Compiler.
Check that the configure script prints a line like this:
checking for usable MPI2-compliant mpicc and mpi.h... yes, mpicc
If it says ... no, your mpicc has failed to compile and link a test
MPI2 program.
If the configure test succeeds, continue in the usual way with make
and make install. The final install tree should then contain
libmpiwrap-<platform>.so.
Compile up a test MPI program (eg, MPI hello-world) and try
this:
LD_PRELOAD=$prefix/lib/valgrind/libmpiwrap-<platform>.so \
mpirun [args] $prefix/bin/valgrind ./hello
You should see something similar to the following
valgrind MPI wrappers 31901: Active for pid 31901
valgrind MPI wrappers 31901: Try MPIWRAP_DEBUG=help for possible options
repeated for every process in the group. If you do not see these,
there is a build/installation problem of some kind.
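If you do not already have a test program to hand, a minimal MPI
hello-world along these lines will do (a sketch):
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
   int rank;
   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   printf("hello from rank %d\n", rank);
   MPI_Finalize();
   return 0;
}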
The MPI functions to be wrapped are assumed to be in an ELF
shared object with soname matching
libmpi.so*
. This is known to be
correct at least for Open MPI and Quadrics MPI, and can easily be
changed if required.
4.9.2.Getting started
Compile your MPI application as usual, taking care to link it
using the same mpicc
that your
Valgrind build was configured with.
Use the following basic scheme to run your application on Valgrind with
the wrappers engaged:
MPIWRAP_DEBUG=[wrapper-args] \
LD_PRELOAD=$prefix/lib/valgrind/libmpiwrap-<platform>.so \
mpirun [mpirun-args] \
$prefix/bin/valgrind [valgrind-args] \
[application] [app-args]
As an alternative to LD_PRELOADing libmpiwrap-<platform>.so, you can
simply link it to your application if desired. This should not disturb
native behaviour of your application in any way.
4.9.3.Controlling the wrapper library
Environment variable
MPIWRAP_DEBUG
is consulted at
startup. The default behaviour is to print a starting banner
valgrind MPI wrappers 16386: Active for pid 16386
valgrind MPI wrappers 16386: Try MPIWRAP_DEBUG=help for possible options
and then be relatively quiet.
You can give a list of comma-separated options in MPIWRAP_DEBUG.
These are:
verbose: show entries/exits of all wrappers. Also show extra debugging
info, such as the status of outstanding MPI_Requests resulting from
uncompleted MPI_Irecvs.
quiet: opposite of verbose; only print anything when the wrappers want
to report a detected programming error, or in case of catastrophic
failure of the wrappers.
warn: by default, functions which lack proper wrappers are not
commented on, just silently ignored. This causes a warning to be
printed for each unwrapped function used, up to a maximum of three
warnings per function.
strict: print an error message and abort the program if a function
lacking a wrapper is used.
If you want to use Valgrind's XML output facility (--xml=yes), you
should pass quiet in MPIWRAP_DEBUG so as to get rid of any extraneous
printing from the wrappers.
4.9.4.Functions
All MPI2 functions except MPI_Wtick, MPI_Wtime and MPI_Pcontrol have
wrappers. The first two are not wrapped because they return a double,
which Valgrind's function-wrap mechanism cannot handle (but it could
easily be extended to do so). MPI_Pcontrol cannot be wrapped as it has
variable arity:
int MPI_Pcontrol(const int level, ...)
Most functions are wrapped with a default wrapper which does
nothing except complain or abort if it is called, depending on
settings in MPIWRAP_DEBUG
listed
above. The following functions have "real", do-something-useful
wrappers:
PMPI_Send PMPI_Bsend PMPI_Ssend PMPI_Rsend
PMPI_Recv PMPI_Get_count
PMPI_Isend PMPI_Ibsend PMPI_Issend PMPI_Irsend
PMPI_Irecv
PMPI_Wait PMPI_Waitall
PMPI_Test PMPI_Testall
PMPI_Iprobe PMPI_Probe
PMPI_Cancel
PMPI_Sendrecv
PMPI_Type_commit PMPI_Type_free
PMPI_Pack PMPI_Unpack
PMPI_Bcast PMPI_Gather PMPI_Scatter PMPI_Alltoall
PMPI_Reduce PMPI_Allreduce PMPI_Op_create
PMPI_Comm_create PMPI_Comm_dup PMPI_Comm_free PMPI_Comm_rank PMPI_Comm_size
PMPI_Error_string
PMPI_Init PMPI_Initialized PMPI_Finalize
A few functions such as PMPI_Address are listed as HAS_NO_WRAPPER.
They have no wrapper at all as there is nothing worth checking, and
giving a no-op wrapper would reduce performance for no reason.
Note that the wrapper library can itself generate large numbers of
calls to the MPI implementation, especially when walking complex
types. The most common functions called are PMPI_Extent,
PMPI_Type_get_envelope, PMPI_Type_get_contents, and PMPI_Type_free.
4.9.5.Types
MPI-1.1 structured types are supported, and walked exactly.
The currently supported combiners are MPI_COMBINER_NAMED,
MPI_COMBINER_CONTIGUOUS, MPI_COMBINER_VECTOR, MPI_COMBINER_HVECTOR,
MPI_COMBINER_INDEXED, MPI_COMBINER_HINDEXED and MPI_COMBINER_STRUCT.
This should cover all MPI-1.1 types. The mechanism (function
walk_type) should extend easily to cover MPI2 combiners.
MPI defines some named structured types (MPI_FLOAT_INT,
MPI_DOUBLE_INT, MPI_LONG_INT, MPI_2INT, MPI_SHORT_INT,
MPI_LONG_DOUBLE_INT) which are pairs of some basic type and a C int.
Unfortunately the MPI specification makes it impossible to look inside
these types and see where the fields are. Therefore these wrappers
assume the types are laid out as struct { float val; int loc; } (for
MPI_FLOAT_INT), etc, and act accordingly. This appears to be correct
at least for Open MPI 1.0.2 and for Quadrics MPI.
If strict is an option specified in MPIWRAP_DEBUG, the application
will abort if an unhandled type is encountered. Otherwise, the
application will print a warning message and continue.
Some effort is made to mark/check memory ranges corresponding to
arrays of values in a single pass. This is important for performance
since asking Valgrind to mark/check any range, no matter how small,
carries quite a large constant cost. This optimisation is applied to
arrays of primitive types (double, float, int, long, long long,
short, char, and long double on platforms where
sizeof(long double) == 8). For arrays of all other types, the wrappers
handle each element individually and so there can be a very large
performance cost.
4.9.6.Writing new wrappers
For the most part the wrappers are straightforward. The only
significant complexity arises with nonblocking receives.
The issue is that MPI_Irecv specifies the recv buffer and returns
immediately, giving a handle (MPI_Request) for the transaction. Later
the user will have to poll for completion with MPI_Wait etc, and when
the transaction completes successfully, the wrappers have to paint the
recv buffer. But the recv buffer details are not presented to MPI_Wait
-- only the handle is. The library therefore maintains a shadow table
which associates uncompleted MPI_Requests with the corresponding
buffer address/count/type. When an operation completes, the table is
searched for the associated address/count/type info, and memory is
marked accordingly.
Access to the table is guarded by a (POSIX pthreads) lock, so as
to make the library thread-safe.
The table is allocated with malloc and never freed, so it will show up
in leak checks.
Writing new wrappers should be fairly easy. The source file is
mpi/libmpiwrap.c
. If possible,
find an existing wrapper for a function of similar behaviour to the
one you want to wrap, and use it as a starting point. The wrappers
are organised in sections in the same order as the MPI 1.1 spec, to
aid navigation. When adding a wrapper, remember to comment out the
definition of the default wrapper in the long list of defaults at the
bottom of the file (do not remove it, just comment it out).
4.9.7.What to expect when using the wrappers
The wrappers should reduce Memcheck's false-error rate on MPI
applications. Because the wrapping is done at the MPI interface,
there will still potentially be a large number of errors reported in
the MPI implementation below the interface. The best you can do is
try to suppress them.
You may also find that the input-side (buffer
length/definedness) checks find errors in your MPI use, for example
passing too short a buffer to MPI_Recv.
Functions which are not wrapped may increase the false error rate. A
possible approach is to run with MPIWRAP_DEBUG containing warn. This
will show you functions which lack proper wrappers but which are
nevertheless used. You can then write wrappers for them.
A known source of potential false errors is the PMPI_Reduce family of
functions, when using a custom (user-defined) reduction function. In a
reduction operation, each node notionally sends data to a "central
point" which uses the specified reduction function to merge the data
items into a single item. Hence, in general, data is passed between
nodes and fed to the reduction function, but the wrapper library
cannot mark the transferred data as initialised before it is handed to
the reduction function, because all that happens "inside" the
PMPI_Reduce call. As a result you may see false positives reported in
your reduction function.