1 \input texinfo @c -*-texinfo-*-
2 @setfilename gprof.info
7 @c This is a dir.info fragment to support semi-automated addition of
11 * gprof: (gprof). Profiling your program's execution
17 This file documents the gprof profiler of the GNU system.
19 Copyright (C) 1988, 1992 Free Software Foundation, Inc.
21 Permission is granted to make and distribute verbatim copies of
22 this manual provided the copyright notice and this permission notice
23 are preserved on all copies.
Permission is granted to process this file through TeX and print the
27 results, provided the printed document carries copying permission
28 notice identical to this one except for the removal of this paragraph
29 (this paragraph not being relevant to the printed manual).
32 Permission is granted to copy and distribute modified versions of this
33 manual under the conditions for verbatim copying, provided that the entire
34 resulting derived work is distributed under the terms of a permission
35 notice identical to this one.
37 Permission is granted to copy and distribute translations of this manual
38 into another language, under the above conditions for modified versions.
46 @subtitle The @sc{gnu} Profiler
47 @author Jay Fenlason and Richard Stallman
51 This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
52 can use it to determine which parts of a program are taking most of the
53 execution time. We assume that you know how to write, compile, and
54 execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.
56 This manual was edited January 1993 by Jeffrey Osier.
58 @vskip 0pt plus 1filll
59 Copyright @copyright{} 1988, 1992 Free Software Foundation, Inc.
61 Permission is granted to make and distribute verbatim copies of
62 this manual provided the copyright notice and this permission notice
63 are preserved on all copies.
66 Permission is granted to process this file through TeX and print the
67 results, provided the printed document carries copying permission
68 notice identical to this one except for the removal of this paragraph
69 (this paragraph not being relevant to the printed manual).
72 Permission is granted to copy and distribute modified versions of this
73 manual under the conditions for verbatim copying, provided that the entire
74 resulting derived work is distributed under the terms of a permission
75 notice identical to this one.
77 Permission is granted to copy and distribute translations of this manual
78 into another language, under the same conditions as for modified versions.
84 @top Profiling a Program: Where Does It Spend Its Time?
86 This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
87 can use it to determine which parts of a program are taking most of the
88 execution time. We assume that you know how to write, compile, and
89 execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.
91 This manual was updated January 1993.
94 * Why:: What profiling means, and why it is useful.
95 * Compiling:: How to compile your program for profiling.
96 * Executing:: How to execute your program to generate the
97 profile data file @file{gmon.out}.
98 * Invoking:: How to run @code{gprof}, and how to specify
101 * Flat Profile:: The flat profile shows how much time was spent
102 executing directly in each function.
103 * Call Graph:: The call graph shows which functions called which
104 others, and how much time each function used
105 when its subroutine calls are included.
107 * Implementation:: How the profile data is recorded and written.
108 * Sampling Error:: Statistical margins of error.
109 How to accumulate data from several runs
110 to make it more accurate.
112 * Assumptions:: Some of @code{gprof}'s measurements are based
113 on assumptions about your program
114 that could be very wrong.
* Incompatibilities:: Differences between GNU @code{gprof} and Unix @code{gprof}.
123 Profiling allows you to learn where your program spent its time and which
124 functions called which other functions while it was executing. This
125 information can show you which pieces of your program are slower than you
126 expected, and might be candidates for rewriting to make your program
127 execute faster. It can also tell you which functions are being called more
or less often than you expected. This may help you spot bugs that might
otherwise have gone unnoticed.
131 Since the profiler uses information collected during the actual execution
132 of your program, it can be used on programs that are too large or too
133 complex to analyze by reading the source. However, how your program is run
134 will affect the information that shows up in the profile data. If you
135 don't use some feature of your program while it is being profiled, no
136 profile information will be generated for that feature.
138 Profiling has several steps:
142 You must compile and link your program with profiling enabled.
146 You must execute your program to generate a profile data file.
150 You must run @code{gprof} to analyze the profile data.
154 The next three chapters explain these steps in greater detail.
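In outline, and assuming a hypothetical program built from a single source
file @file{myprog.c} (the file and program names here are only
placeholders), the whole procedure looks like this:

@example
cc -o myprog myprog.c -g -pg          # step 1: compile and link with -pg
./myprog                              # step 2: run it; gmon.out is written on exit
gprof myprog gmon.out > myprog.prof   # step 3: analyze the profile data
@end example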
156 The result of the analysis is a file containing two tables, the
157 @dfn{flat profile} and the @dfn{call graph} (plus blurbs which briefly
158 explain the contents of these tables).
160 The flat profile shows how much time your program spent in each function,
161 and how many times that function was called. If you simply want to know
162 which functions burn most of the cycles, it is stated concisely here.
165 The call graph shows, for each function, which functions called it, which
166 other functions it called, and how many times. There is also an estimate
167 of how much time was spent in the subroutines of each function. This can
168 suggest places where you might try to eliminate function calls that use a
169 lot of time. @xref{Call Graph}.
172 @chapter Compiling a Program for Profiling
174 The first step in generating profile information for your program is
175 to compile and link it with profiling enabled.
177 To compile a source file for profiling, specify the @samp{-pg} option when
you run the compiler. (This is in addition to the options you normally
use.)
181 To link the program for profiling, if you use a compiler such as @code{cc}
182 to do the linking, simply specify @samp{-pg} in addition to your usual
183 options. The same option, @samp{-pg}, alters either compilation or linking
184 to do what is necessary for profiling. Here are examples:
187 cc -g -c myprog.c utils.c -pg
188 cc -o myprog myprog.o utils.o -pg
191 The @samp{-pg} option also works with a command that both compiles and links:
194 cc -o myprog myprog.c utils.c -g -pg
197 If you run the linker @code{ld} directly instead of through a compiler
198 such as @code{cc}, you must specify the profiling startup file
199 @file{/lib/gcrt0.o} as the first input file instead of the usual startup
200 file @file{/lib/crt0.o}. In addition, you would probably want to
201 specify the profiling C library, @file{/usr/lib/libc_p.a}, by writing
202 @samp{-lc_p} instead of the usual @samp{-lc}. This is not absolutely
203 necessary, but doing this gives you number-of-calls information for
204 standard library functions such as @code{read} and @code{open}. For
208 ld -o myprog /lib/gcrt0.o myprog.o utils.o -lc_p
211 If you compile only some of the modules of the program with @samp{-pg}, you
212 can still profile the program, but you won't get complete information about
213 the modules that were compiled without @samp{-pg}. The only information
214 you get for the functions in those modules is the total time spent in them;
215 there is no record of how many times they were called, or from where. This
216 will not affect the flat profile (except that the @code{calls} field for
the functions will be blank), but will greatly reduce the usefulness of the
call graph.
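For example, here is a sketch of compiling only @file{myprog.c} with
profiling while leaving @file{utils.c} uninstrumented (the same
hypothetical file names as above):

@example
cc -g -pg -c myprog.c              # compiled with profiling
cc -g -c utils.c                   # compiled without profiling
cc -o myprog myprog.o utils.o -pg
@end example

Functions from @file{utils.o} will then appear in the flat profile with a
blank @code{calls} field, as described above.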
221 @chapter Executing the Program to Generate Profile Data
223 Once the program is compiled for profiling, you must run it in order to
224 generate the information that @code{gprof} needs. Simply run the program
225 as usual, using the normal arguments, file names, etc. The program should
226 run normally, producing the same output as usual. It will, however, run
somewhat slower than normal because of the time spent collecting and
writing the profile data.
230 The way you run the program---the arguments and input that you give
231 it---may have a dramatic effect on what the profile information shows. The
232 profile data will describe the parts of the program that were activated for
233 the particular input you use. For example, if the first command you give
234 to your program is to quit, the profile data will show the time used in
235 initialization and in cleanup, but not much else.
Your program will write the profile data into a file called @file{gmon.out}
238 just before exiting. If there is already a file called @file{gmon.out},
239 its contents are overwritten. There is currently no way to tell the
240 program to write the profile data under a different name, but you can rename
241 the file afterward if you are concerned that it may be overwritten.
243 In order to write the @file{gmon.out} file properly, your program must exit
244 normally: by returning from @code{main} or by calling @code{exit}. Calling
245 the low-level function @code{_exit} does not write the profile data, and
246 neither does abnormal termination due to an unhandled signal.
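As an illustration, here is a minimal C sketch of the distinction (the
@code{do_work} function is hypothetical and stands for whatever your
program does):

@example
#include <stdlib.h>

/* Hypothetical work function, just for illustration.  */
static void
do_work (void)
@{
@}

int
main (void)
@{
  do_work ();

  /* Returning from main or calling exit lets the profiling code
     write gmon.out.  Calling _exit instead would skip it.  */
  exit (0);
@}
@end example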
248 The @file{gmon.out} file is written in the program's @emph{current working
249 directory} at the time it exits. This means that if your program calls
250 @code{chdir}, the @file{gmon.out} file will be left in the last directory
251 your program @code{chdir}'d to. If you don't have permission to write in
252 this directory, the file is not written. You may get a confusing error
253 message if this happens. (We have not yet replaced the part of Unix
responsible for this; when we do, we will make the error message
more obvious.)
258 @chapter @code{gprof} Command Summary
260 After you have a profile data file @file{gmon.out}, you can run @code{gprof}
261 to interpret the information in it. The @code{gprof} program prints a
262 flat profile and a call graph on standard output. Typically you would
263 redirect the output of @code{gprof} into a file with @samp{>}.
265 You run @code{gprof} like this:
268 gprof @var{options} [@var{executable-file} [@var{profile-data-files}@dots{}]] [> @var{outfile}]
272 Here square-brackets indicate optional arguments.
274 If you omit the executable file name, the file @file{a.out} is used. If
275 you give no profile data file name, the file @file{gmon.out} is used. If
276 any file is not in the proper format, or if the profile data file does not
277 appear to belong to the executable file, an error message is printed.
279 You can give more than one profile data file by entering all their names
after the executable file name; then the statistics in all the data files
are summed together.
283 The following options may be used to selectively include or exclude
284 functions in the output:
288 The @samp{-a} option causes @code{gprof} to suppress the printing of
289 statically declared (private) functions. (These are functions whose
290 names are not listed as global, and which are not visible outside the
291 file/function/block where they were defined.) Time spent in these
functions, calls to/from them, etc., will all be attributed to the
function that was loaded directly before each of them in the executable file.
294 @c This is compatible with Unix @code{gprof}, but a bad idea.
295 This option affects both the flat profile and the call graph.
298 The @samp{-D} option causes @code{gprof} to ignore symbols which
299 are not known to be functions. This option will give more accurate
profile data on systems where it is supported (Solaris and HPUX, for
example).
303 @item -e @var{function_name}
304 The @samp{-e @var{function}} option tells @code{gprof} to not print
305 information about the function @var{function_name} (and its
306 children@dots{}) in the call graph. The function will still be listed
307 as a child of any functions that call it, but its index number will be
308 shown as @samp{[not printed]}. More than one @samp{-e} option may be
given; only one @var{function_name} may be indicated with each @samp{-e}
option.
312 @item -E @var{function_name}
The @samp{-E @var{function}} option works like the @samp{-e} option, but
time spent in the function (and children that were not called from
anywhere else) will not be used to compute the percentages-of-time for
316 the call graph. More than one @samp{-E} option may be given; only one
317 @var{function_name} may be indicated with each @samp{-E} option.
319 @item -f @var{function_name}
320 The @samp{-f @var{function}} option causes @code{gprof} to limit the
321 call graph to the function @var{function_name} and its children (and
322 their children@dots{}). More than one @samp{-f} option may be given;
only one @var{function_name} may be indicated with each @samp{-f}
option.
326 @item -F @var{function_name}
The @samp{-F @var{function}} option works like the @samp{-f} option, but
328 only time spent in the function and its children (and their
329 children@dots{}) will be used to determine total-time and
330 percentages-of-time for the call graph. More than one @samp{-F} option
331 may be given; only one @var{function_name} may be indicated with each
332 @samp{-F} option. The @samp{-F} option overrides the @samp{-E} option.
334 @item -k @var{from@dots{}} @var{to@dots{}}
335 The @samp{-k} option allows you to delete from the profile any arcs from
336 routine @var{from} to routine @var{to}.
339 The @samp{-v} flag causes @code{gprof} to print the current version
340 number, and then exit.
343 If you give the @samp{-z} option, @code{gprof} will mention all
344 functions in the flat profile, even those that were never called, and
345 that had no time spent in them. This is useful in conjunction with the
346 @samp{-c} option for discovering which routines were never called.
349 The order of these options does not matter.
351 Note that only one function can be specified with each @code{-e},
352 @code{-E}, @code{-f} or @code{-F} option. To specify more than one
353 function, use multiple options. For example, this command:
356 gprof -e boring -f foo -f bar myprogram > gprof.output
360 lists in the call graph all functions that were reached from either
361 @code{foo} or @code{bar} and were not reachable from @code{boring}.
363 There are a few other useful @code{gprof} options:
367 If the @samp{-b} option is given, @code{gprof} doesn't print the
368 verbose blurbs that try to explain the meaning of all of the fields in
369 the tables. This is useful if you intend to print out the output, or
370 are tired of seeing the blurbs.
373 The @samp{-c} option causes the static call-graph of the program to be
374 discovered by a heuristic which examines the text space of the object
file. Static-only parents or children are indicated with call counts of
@samp{0}.
379 The @samp{-d @var{num}} option specifies debugging options.
383 The @samp{-s} option causes @code{gprof} to summarize the information
384 in the profile data files it read in, and write out a profile data
385 file called @file{gmon.sum}, which contains all the information from
386 the profile data files that @code{gprof} read in. The file @file{gmon.sum}
387 may be one of the specified input files; the effect of this is to
388 merge the data in the other input files into @file{gmon.sum}.
389 @xref{Sampling Error}.
391 Eventually you can run @code{gprof} again without @samp{-s} to analyze the
392 cumulative data in the file @file{gmon.sum}.
395 The @samp{-T} option causes @code{gprof} to print its output in
396 ``traditional'' BSD style.
400 @chapter How to Understand the Flat Profile
403 The @dfn{flat profile} shows the total amount of time your program
404 spent executing each function. Unless the @samp{-z} option is given,
405 functions with no apparent time spent in them, and no apparent calls
406 to them, are not mentioned. Note that if a function was not compiled
407 for profiling, and didn't run long enough to show up on the program
counter histogram, it will be indistinguishable from a function that
was never called.
411 This is part of a flat profile for a small program:
417 Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 33.34      0.02     0.02     7208     0.00     0.00  open
 16.67      0.03     0.01      244     0.04     0.12  offtime
 16.67      0.04     0.01        8     1.25     1.25  memccpy
 16.67      0.05     0.01        7     1.43     1.43  write
 16.67      0.06     0.01                             mcount
  0.00      0.06     0.00      236     0.00     0.00  tzset
  0.00      0.06     0.00      192     0.00     0.00  tolower
  0.00      0.06     0.00       47     0.00     0.00  strlen
  0.00      0.06     0.00       45     0.00     0.00  strchr
  0.00      0.06     0.00        1     0.00    50.00  main
  0.00      0.06     0.00        1     0.00     0.00  memcpy
  0.00      0.06     0.00        1     0.00    10.11  print
  0.00      0.06     0.00        1     0.00     0.00  profil
  0.00      0.06     0.00        1     0.00    50.00  report
439 The functions are sorted by decreasing run-time spent in them. The
440 functions @samp{mcount} and @samp{profil} are part of the profiling
apparatus and appear in every flat profile; their time gives a measure of
442 the amount of overhead due to profiling.
444 The sampling period estimates the margin of error in each of the time
445 figures. A time figure that is not much larger than this is not
446 reliable. In this example, the @samp{self seconds} field for
447 @samp{mcount} might well be @samp{0} or @samp{0.04} in another run.
448 @xref{Sampling Error}, for a complete discussion.
450 Here is what the fields in each line mean:
454 This is the percentage of the total execution time your program spent
455 in this function. These should all add up to 100%.
457 @item cumulative seconds
458 This is the cumulative total number of seconds the computer spent
executing this function, plus the time spent in all the functions
460 above this one in this table.
463 This is the number of seconds accounted for by this function alone.
464 The flat profile listing is sorted first by this number.
467 This is the total number of times the function was called. If the
468 function was never called, or the number of times it was called cannot
469 be determined (probably because the function was not compiled with
470 profiling enabled), the @dfn{calls} field is blank.
473 This represents the average number of milliseconds spent in this
474 function per call, if this function is profiled. Otherwise, this field
475 is blank for this function.
478 This represents the average number of milliseconds spent in this
479 function and its descendants per call, if this function is profiled.
480 Otherwise, this field is blank for this function.
This is the name of the function. Among entries with equal @dfn{self
seconds} values, the flat profile is sorted alphabetically by this field.
488 @chapter How to Read the Call Graph
491 The @dfn{call graph} shows how much time was spent in each function
492 and its children. From this information, you can find functions that,
493 while they themselves may not have used much time, called other
494 functions that did use unusual amounts of time.
Here is a sample call graph from a small program. This call graph came
from the same @code{gprof} run as the flat profile example in the
previous chapter.
502 granularity: each sample hit covers 2 byte(s) for 20.00% of 0.05 seconds
index % time    self  children    called     name
[1]    100.0    0.00      0.05                 start [1]
                0.00      0.05       1/1           main [2]
                0.00      0.00       1/2           on_exit [28]
                0.00      0.00       1/1           exit [59]
-----------------------------------------------
                0.00      0.05       1/1       start [1]
[2]    100.0    0.00      0.05       1         main [2]
                0.00      0.05       1/1           report [3]
-----------------------------------------------
                0.00      0.05       1/1       main [2]
[3]    100.0    0.00      0.05       1         report [3]
                0.00      0.03       8/8           timelocal [6]
                0.00      0.01       1/1           print [9]
                0.00      0.01       9/9           fgets [12]
                0.00      0.00      12/34          strncmp <cycle 1> [40]
                0.00      0.00       8/8           lookup [20]
                0.00      0.00       1/1           fopen [21]
                0.00      0.00       8/8           chewtime [24]
                0.00      0.00       8/16          skipspace [44]
-----------------------------------------------
[4]     59.8    0.01      0.02       8+472     <cycle 2 as a whole> [4]
                0.01      0.02     244+260        offtime <cycle 2> [7]
                0.00      0.00     236+1          tzset <cycle 2> [26]
-----------------------------------------------
533 The lines full of dashes divide this table into @dfn{entries}, one for each
534 function. Each entry has one or more lines.
536 In each entry, the primary line is the one that starts with an index number
537 in square brackets. The end of this line says which function the entry is
538 for. The preceding lines in the entry describe the callers of this
539 function and the following lines describe its subroutines (also called
540 @dfn{children} when we speak of the call graph).
542 The entries are sorted by time spent in the function and its subroutines.
544 The internal profiling function @code{mcount} (@pxref{Flat Profile})
545 is never mentioned in the call graph.
548 * Primary:: Details of the primary line's contents.
549 * Callers:: Details of caller-lines' contents.
550 * Subroutines:: Details of subroutine-lines' contents.
551 * Cycles:: When there are cycles of recursion,
552 such as @code{a} calls @code{b} calls @code{a}@dots{}
556 @section The Primary Line
558 The @dfn{primary line} in a call graph entry is the line that
559 describes the function which the entry is about and gives the overall
560 statistics for this function.
562 For reference, we repeat the primary line from the entry for function
563 @code{report} in our main example, together with the heading line that
564 shows the names of the fields:
index % time    self  children    called     name
[3]    100.0    0.00      0.05       1         report [3]
574 Here is what the fields in the primary line mean:
578 Entries are numbered with consecutive integers. Each function
therefore has an index number, which appears at the beginning of its
entry.
582 Each cross-reference to a function, as a caller or subroutine of
583 another, gives its index number as well as its name. The index number
584 guides you if you wish to look for the entry for that function.
587 This is the percentage of the total time that was spent in this
function, including time spent in subroutines called from this
function.
591 The time spent in this function is counted again for the callers of
592 this function. Therefore, adding up these percentages is meaningless.
595 This is the total amount of time spent in this function. This
596 should be identical to the number printed in the @code{seconds} field
597 for this function in the flat profile.
600 This is the total amount of time spent in the subroutine calls made by
601 this function. This should be equal to the sum of all the @code{self}
and @code{children} entries of the children listed directly below this
function.
606 This is the number of times the function was called.
608 If the function called itself recursively, there are two numbers,
609 separated by a @samp{+}. The first number counts non-recursive calls,
610 and the second counts recursive calls.
In the example above, the function @code{report} was called once from
@code{main}.
This is the name of the current function. The index number is
repeated after it.
619 If the function is part of a cycle of recursion, the cycle number is
620 printed between the function's name and the index number
621 (@pxref{Cycles}). For example, if function @code{gnurr} is part of
cycle number one, and has index number twelve, its primary line would
end with @samp{gnurr <cycle 1> [12]}.
630 @node Callers, Subroutines, Primary, Call Graph
631 @section Lines for a Function's Callers
633 A function's entry has a line for each function it was called by.
634 These lines' fields correspond to the fields of the primary line, but
635 their meanings are different because of the difference in context.
637 For reference, we repeat two lines from the entry for the function
638 @code{report}, the primary line and one caller-line preceding it, together
639 with the heading line that shows the names of the fields:
index % time    self  children    called     name
                0.00      0.05       1/1       main [2]
[3]    100.0    0.00      0.05       1         report [3]
648 Here are the meanings of the fields in the caller-line for @code{report}
649 called from @code{main}:
653 An estimate of the amount of time spent in @code{report} itself when it was
654 called from @code{main}.
657 An estimate of the amount of time spent in subroutines of @code{report}
658 when @code{report} was called from @code{main}.
660 The sum of the @code{self} and @code{children} fields is an estimate
661 of the amount of time spent within calls to @code{report} from @code{main}.
664 Two numbers: the number of times @code{report} was called from @code{main},
followed by the total number of nonrecursive calls to @code{report} from
all its callers.
668 @item name and index number
669 The name of the caller of @code{report} to which this line applies,
670 followed by the caller's index number.
672 Not all functions have entries in the call graph; some
673 options to @code{gprof} request the omission of certain functions.
674 When a caller has no entry of its own, it still has caller-lines
675 in the entries of the functions it calls.
677 If the caller is part of a recursion cycle, the cycle number is
678 printed between the name and the index number.
681 If the identity of the callers of a function cannot be determined, a
682 dummy caller-line is printed which has @samp{<spontaneous>} as the
``caller's name'' and all other fields blank. This can happen for
signal handlers.
685 @c What if some calls have determinable callers' names but not all?
686 @c FIXME - still relevant?
688 @node Subroutines, Cycles, Callers, Call Graph
689 @section Lines for a Function's Subroutines
691 A function's entry has a line for each of its subroutines---in other
692 words, a line for each other function that it called. These lines'
693 fields correspond to the fields of the primary line, but their meanings
694 are different because of the difference in context.
696 For reference, we repeat two lines from the entry for the function
697 @code{main}, the primary line and a line for a subroutine, together
698 with the heading line that shows the names of the fields:
index % time    self  children    called     name
[2]    100.0    0.00      0.05       1         main [2]
                0.00      0.05       1/1           report [3]
707 Here are the meanings of the fields in the subroutine-line for @code{main}
708 calling @code{report}:
712 An estimate of the amount of time spent directly within @code{report}
713 when @code{report} was called from @code{main}.
716 An estimate of the amount of time spent in subroutines of @code{report}
717 when @code{report} was called from @code{main}.
719 The sum of the @code{self} and @code{children} fields is an estimate
720 of the total time spent in calls to @code{report} from @code{main}.
723 Two numbers, the number of calls to @code{report} from @code{main}
724 followed by the total number of nonrecursive calls to @code{report}.
727 The name of the subroutine of @code{main} to which this line applies,
728 followed by the subroutine's index number.
If the subroutine is part of a recursion cycle, the cycle number is
731 printed between the name and the index number.
734 @node Cycles,, Subroutines, Call Graph
735 @section How Mutually Recursive Functions Are Described
737 @cindex recursion cycle
739 The graph may be complicated by the presence of @dfn{cycles of
740 recursion} in the call graph. A cycle exists if a function calls
741 another function that (directly or indirectly) calls (or appears to
742 call) the original function. For example: if @code{a} calls @code{b},
743 and @code{b} calls @code{a}, then @code{a} and @code{b} form a cycle.
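As a concrete illustration, here is a small, entirely hypothetical C
program in which @code{a} and @code{b} form such a cycle:

@example
#include <stdio.h>

static int b (int n);

/* a and b call each other, so gprof reports them as one cycle.  */
static int
a (int n)
@{
  return n <= 0 ? 0 : b (n - 1);
@}

static int
b (int n)
@{
  return n <= 0 ? 1 : a (n - 1);
@}

int
main (void)
@{
  printf ("%d\n", a (10));
  return 0;
@}
@end example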
745 Whenever there are call-paths both ways between a pair of functions, they
746 belong to the same cycle. If @code{a} and @code{b} call each other and
747 @code{b} and @code{c} call each other, all three make one cycle. Note that
even if @code{b} calls @code{a} only when it was not called from @code{a},
@code{gprof} cannot determine this, so @code{a} and @code{b} are still
considered a cycle.
752 The cycles are numbered with consecutive integers. When a function
753 belongs to a cycle, each time the function name appears in the call graph
754 it is followed by @samp{<cycle @var{number}>}.
756 The reason cycles matter is that they make the time values in the call
757 graph paradoxical. The ``time spent in children'' of @code{a} should
758 include the time spent in its subroutine @code{b} and in @code{b}'s
759 subroutines---but one of @code{b}'s subroutines is @code{a}! How much of
760 @code{a}'s time should be included in the children of @code{a}, when
761 @code{a} is indirectly recursive?
763 The way @code{gprof} resolves this paradox is by creating a single entry
764 for the cycle as a whole. The primary line of this entry describes the
765 total time spent directly in the functions of the cycle. The
766 ``subroutines'' of the cycle are the individual functions of the cycle, and
767 all other functions that were called directly by them. The ``callers'' of
the cycle are the functions, outside the cycle, that called functions in
the cycle.
771 Here is an example portion of a call graph which shows a cycle containing
772 functions @code{a} and @code{b}. The cycle was entered by a call to
773 @code{a} from @code{main}; both @code{a} and @code{b} called @code{c}.
index  % time    self  children  called     name
----------------------------------------
[3]     91.71    1.77        0     1+5    <cycle 1 as a whole> [3]
                 1.02        0       3        b <cycle 1> [4]
                 0.75        0       2        a <cycle 1> [5]
----------------------------------------
[4]     52.85    1.02        0     0      b <cycle 1> [4]
----------------------------------------
[5]     38.86    0.75        0     1      a <cycle 1> [5]
----------------------------------------
797 (The entire call graph for this program contains in addition an entry for
798 @code{main}, which calls @code{a}, and an entry for @code{c}, with callers
799 @code{a} and @code{b}.)
index  % time    self  children  called     name
[1]    100.00       0      1.93    0      start [1]
                 0.16      1.77    1/1        main [2]
----------------------------------------
                 0.16      1.77    1/1    start [1]
[2]    100.00    0.16      1.77    1      main [2]
                 1.77         0    1/1        a <cycle 1> [5]
----------------------------------------
[3]     91.71    1.77         0    1+5    <cycle 1 as a whole> [3]
                 1.02         0      3        b <cycle 1> [4]
                 0.75         0      2        a <cycle 1> [5]
----------------------------------------
[4]     52.85    1.02         0    0      b <cycle 1> [4]
----------------------------------------
[5]     38.86    0.75         0    1      a <cycle 1> [5]
----------------------------------------
                    0         0    3/6        b <cycle 1> [4]
                    0         0    3/6        a <cycle 1> [5]
[6]      0.00       0         0    6      c [6]
----------------------------------------
834 The @code{self} field of the cycle's primary line is the total time
835 spent in all the functions of the cycle. It equals the sum of the
@code{self} fields for the individual functions in the cycle, found
on the subroutine lines for these functions in the cycle's entry.
839 The @code{children} fields of the cycle's primary line and subroutine lines
840 count only subroutines outside the cycle. Even though @code{a} calls
841 @code{b}, the time spent in those calls to @code{b} is not counted in
842 @code{a}'s @code{children} time. Thus, we do not encounter the problem of
843 what to do when the time in those calls to @code{b} includes indirect
844 recursive calls back to @code{a}.
The @code{children} field of a caller-line in the cycle's entry estimates
the amount of time spent in the whole cycle, and in its subroutines outside
the cycle, during the calls that caller made to functions in the cycle.
850 The @code{calls} field in the primary line for the cycle has two numbers:
851 first, the number of times functions in the cycle were called by functions
852 outside the cycle; second, the number of times they were called by
853 functions in the cycle (including times when a function in the cycle calls
854 itself). This is a generalization of the usual split into nonrecursive and
857 The @code{calls} field of a subroutine-line for a cycle member in the
cycle's entry says how many times that function was called from functions in
the cycle. The total of all these is the second number in the primary line's
@code{calls} field.
862 In the individual entry for a function in a cycle, the other functions in
863 the same cycle can appear as subroutines and as callers. These lines show
864 how many times each function in the cycle called or was called from each other
865 function in the cycle. The @code{self} and @code{children} fields in these
866 lines are blank because of the difficulty of defining meanings for them
867 when recursion is going on.
869 @node Implementation, Sampling Error, Call Graph, Top
870 @chapter Implementation of Profiling
872 Profiling works by changing how every function in your program is compiled
873 so that when it is called, it will stash away some information about where
874 it was called from. From this, the profiler can figure out what function
875 called it, and can count how many times it was called. This change is made
876 by the compiler when your program is compiled with the @samp{-pg} option.
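As a rough sketch only (the real mechanism is machine-dependent, and the
profiling routine's interface is not an ordinary C call), a function
compiled with @samp{-pg} behaves roughly as if it began like this:

@example
/* Rough sketch: with -pg the compiler arranges for each function to
   enter the profiling routine (often named mcount) when it is called,
   so that the caller/callee pair and the call count can be recorded.
   The stub declaration below is purely illustrative; the real routine
   is supplied by the profiling startup code and finds the return
   addresses on the stack rather than taking explicit arguments.  */

extern void mcount (void);

void
foo (void)
@{
  mcount ();              /* inserted automatically by -pg */
  /* ...the body of foo as you wrote it... */
@}
@end example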
878 Profiling also involves watching your program as it runs, and keeping a
879 histogram of where the program counter happens to be every now and then.
880 Typically the program counter is looked at around 100 times per second of
881 run time, but the exact frequency may vary from system to system.
883 A special startup routine allocates memory for the histogram and sets up
884 a clock signal handler to make entries in it. Use of this special
885 startup routine is one of the effects of using @samp{gcc @dots{} -pg} to
886 link. The startup file also includes an @samp{exit} function which is
887 responsible for writing the file @file{gmon.out}.
889 Number-of-calls information for library routines is collected by using a
890 special version of the C library. The programs in it are the same as in
891 the usual C library, but they were compiled with @samp{-pg}. If you
892 link your program with @samp{gcc @dots{} -pg}, it automatically uses the
893 profiling version of the library.
895 The output from @code{gprof} gives no indication of parts of your program that
896 are limited by I/O or swapping bandwidth. This is because samples of the
897 program counter are taken at fixed intervals of run time. Therefore, the
898 time measurements in @code{gprof} output say nothing about time that your
899 program was not running. For example, a part of the program that creates
900 so much data that it cannot all fit in physical memory at once may run very
901 slowly due to thrashing, but @code{gprof} will say it uses little time. On
902 the other hand, sampling by run time has the advantage that the amount of
903 load due to other users won't directly affect the output you get.
905 @node Sampling Error, Assumptions, Implementation, Top
906 @chapter Statistical Inaccuracy of @code{gprof} Output
908 The run-time figures that @code{gprof} gives you are based on a sampling
909 process, so they are subject to statistical inaccuracy. If a function runs
910 only a small amount of time, so that on the average the sampling process
911 ought to catch that function in the act only once, there is a pretty good
912 chance it will actually find that function zero times, or twice.
914 By contrast, the number-of-calls figures are derived by counting, not
915 sampling. They are completely accurate and will not vary from run to run
916 if your program is deterministic.
918 The @dfn{sampling period} that is printed at the beginning of the flat
919 profile says how often samples are taken. The rule of thumb is that a
run-time figure is accurate if it is considerably bigger than the sampling
period.
923 The actual amount of error is usually more than one sampling period. In
924 fact, if a value is @var{n} times the sampling period, the @emph{expected}
925 error in it is the square-root of @var{n} sampling periods. If the
926 sampling period is 0.01 seconds and @code{foo}'s run-time is 1 second, the
927 expected error in @code{foo}'s run-time is 0.1 seconds. It is likely to
928 vary this much @emph{on the average} from one profiling run to the next.
929 (@emph{Sometimes} it will vary more.)
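Restating that example as arithmetic:

@example
sampling period        = 0.01 seconds
run time of foo        = 1 second  =  100 sampling periods   (n = 100)
expected error         = sqrt(100) periods = 10 periods = 0.1 seconds
@end example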
931 This does not mean that a small run-time figure is devoid of information.
932 If the program's @emph{total} run-time is large, a small run-time for one
933 function does tell you that that function used an insignificant fraction of
934 the whole program's time. Usually this means it is not worth optimizing.
936 One way to get more accuracy is to give your program more (but similar)
937 input data so it will take longer. Another way is to combine the data from
938 several runs, using the @samp{-s} option of @code{gprof}. Here is how:
942 Run your program once.
945 Issue the command @samp{mv gmon.out gmon.sum}.
948 Run your program again, the same as before.
951 Merge the new data in @file{gmon.out} into @file{gmon.sum} with this command:
954 gprof -s @var{executable-file} gmon.out gmon.sum
958 Repeat the last two steps as often as you wish.
961 Analyze the cumulative data using this command:
964 gprof @var{executable-file} gmon.sum > @var{output-file}
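Collected into one place, and again using the hypothetical executable name
@file{myprog}, the whole accumulation procedure might look like this:

@example
./myprog                              # first run; writes gmon.out
mv gmon.out gmon.sum                  # start the cumulative file
./myprog                              # run again
gprof -s myprog gmon.out gmon.sum     # fold gmon.out into gmon.sum
./myprog                              # ...repeat runs and merges as desired
gprof -s myprog gmon.out gmon.sum
gprof myprog gmon.sum > myprog.prof   # analyze the accumulated data
@end example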
968 @node Assumptions, Incompatibilities, Sampling Error, Top
969 @chapter Estimating @code{children} Times Uses an Assumption
971 Some of the figures in the call graph are estimates---for example, the
@code{children} time values and all the time figures in caller and
subroutine lines.
975 There is no direct information about these measurements in the profile
976 data itself. Instead, @code{gprof} estimates them by making an assumption
977 about your program that might or might not be true.
979 The assumption made is that the average time spent in each call to any
980 function @code{foo} is not correlated with who called @code{foo}. If
981 @code{foo} used 5 seconds in all, and 2/5 of the calls to @code{foo} came
982 from @code{a}, then @code{foo} contributes 2 seconds to @code{a}'s
983 @code{children} time, by assumption.
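In terms of that example:

@example
total time spent in foo                   = 5 seconds
fraction of calls to foo that came from a = 2/5
children time charged to a for foo        = 5 * 2/5 = 2 seconds
@end example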
985 This assumption is usually true enough, but for some programs it is far
986 from true. Suppose that @code{foo} returns very quickly when its argument
987 is zero; suppose that @code{a} always passes zero as an argument, while
988 other callers of @code{foo} pass other arguments. In this program, all the
989 time spent in @code{foo} is in the calls from callers other than @code{a}.
990 But @code{gprof} has no way of knowing this; it will blindly and
incorrectly charge 2 seconds of time in @code{foo} to the children of
@code{a}.
994 @c FIXME - has this been fixed?
995 We hope some day to put more complete data into @file{gmon.out}, so that
996 this assumption is no longer needed, if we can figure out how. For the
997 nonce, the estimated figures are usually more useful than misleading.
999 @node Incompatibilities, , Assumptions, Top
1000 @chapter Incompatibilities with Unix @code{gprof}
1002 @sc{gnu} @code{gprof} and Berkeley Unix @code{gprof} use the same data
1003 file @file{gmon.out}, and provide essentially the same information. But
1004 there are a few differences.
1008 For a recursive function, Unix @code{gprof} lists the function as a
1009 parent and as a child, with a @code{calls} field that lists the number
1010 of recursive calls. @sc{gnu} @code{gprof} omits these lines and puts
1011 the number of recursive calls in the primary line.
1014 When a function is suppressed from the call graph with @samp{-e}, @sc{gnu}
1015 @code{gprof} still lists it as a subroutine of functions that call it.
1017 @ignore - it does this now
1019 The function names printed in @sc{gnu} @code{gprof} output do not include
1020 the leading underscores that are added internally to the front of all
1021 C identifiers on many operating systems.
1025 The blurbs, field widths, and output formats are different. @sc{gnu}
1026 @code{gprof} prints blurbs after the tables, so that you can see the
1027 tables without skipping the blurbs.