\input texinfo @c -*-texinfo-*-
@setfilename gprof.info
@settitle GNU gprof
@setchapternewpage odd

@ifinfo
@c This is a dir.info fragment to support semi-automated addition of
@c manuals to an info tree. [email protected] is developing this facility.
@format
START-INFO-DIR-ENTRY
* gprof: (gprof). Profiling your program's execution
END-INFO-DIR-ENTRY
@end format
@end ifinfo

@ifinfo
This file documents the gprof profiler of the GNU system.

Copyright (C) 1988, 1992, 1997, 1998 Free Software Foundation, Inc.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission notice
are preserved on all copies.

@ignore
Permission is granted to process this file through TeX and print the
results, provided the printed document carries copying permission
notice identical to this one except for the removal of this paragraph
(this paragraph not being relevant to the printed manual).

@end ignore
Permission is granted to copy and distribute modified versions of this
manual under the conditions for verbatim copying, provided that the entire
resulting derived work is distributed under the terms of a permission
notice identical to this one.

Permission is granted to copy and distribute translations of this manual
into another language, under the above conditions for modified versions.
@end ifinfo

@finalout
@smallbook

@titlepage
@title GNU gprof
@subtitle The @sc{gnu} Profiler
@author Jay Fenlason and Richard Stallman

@page

This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
can use it to determine which parts of a program are taking most of the
execution time. We assume that you know how to write, compile, and
execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.

This manual was edited January 1993 by Jeffrey Osier
and updated September 1997 by Brent Baccala.

@vskip 0pt plus 1filll
Copyright @copyright{} 1988, 1992, 1997, 1998 Free Software Foundation, Inc.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission notice
are preserved on all copies.

@ignore
Permission is granted to process this file through TeX and print the
results, provided the printed document carries copying permission
notice identical to this one except for the removal of this paragraph
(this paragraph not being relevant to the printed manual).

@end ignore
Permission is granted to copy and distribute modified versions of this
manual under the conditions for verbatim copying, provided that the entire
resulting derived work is distributed under the terms of a permission
notice identical to this one.

Permission is granted to copy and distribute translations of this manual
into another language, under the same conditions as for modified versions.

@end titlepage

@ifinfo
@node Top
@top Profiling a Program: Where Does It Spend Its Time?

This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
can use it to determine which parts of a program are taking most of the
execution time. We assume that you know how to write, compile, and
execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.

This manual was updated August 1997 by Brent Baccala.

@menu
* Introduction::        What profiling means, and why it is useful.

* Compiling::           How to compile your program for profiling.
* Executing::           Executing your program to generate profile data
* Invoking::            How to run @code{gprof}, and its options

* Output::              Interpreting @code{gprof}'s output

* Inaccuracy::          Potential problems you should be aware of
* How do I?::           Answers to common questions
* Incompatibilities::   (between @sc{gnu} @code{gprof} and Unix @code{gprof}.)
* Details::             Details of how profiling is done
@end menu
@end ifinfo

@node Introduction
@chapter Introduction to Profiling

Profiling allows you to learn where your program spent its time and which
functions called which other functions while it was executing. This
information can show you which pieces of your program are slower than you
expected, and might be candidates for rewriting to make your program
execute faster. It can also tell you which functions are being called more
or less often than you expected. This may help you spot bugs that would
otherwise have gone unnoticed.

Since the profiler uses information collected during the actual execution
of your program, it can be used on programs that are too large or too
complex to analyze by reading the source. However, how your program is run
will affect the information that shows up in the profile data. If you
don't use some feature of your program while it is being profiled, no
profile information will be generated for that feature.

Profiling has several steps:

@itemize @bullet
@item
You must compile and link your program with profiling enabled.
@xref{Compiling}.

@item
You must execute your program to generate a profile data file.
@xref{Executing}.

@item
You must run @code{gprof} to analyze the profile data.
@xref{Invoking}.
@end itemize

The next three chapters explain these steps in greater detail.

Several forms of output are available from the analysis.

The @dfn{flat profile} shows how much time your program spent in each function,
and how many times that function was called. If you simply want to know
which functions burn most of the cycles, it is stated concisely here.
@xref{Flat Profile}.

The @dfn{call graph} shows, for each function, which functions called it, which
other functions it called, and how many times. There is also an estimate
of how much time was spent in the subroutines of each function. This can
suggest places where you might try to eliminate function calls that use a
lot of time. @xref{Call Graph}.

The @dfn{annotated source} listing is a copy of the program's
source code, labeled with the number of times each line of the
program was executed. @xref{Annotated Source}.

To better understand how profiling works, you may wish to read
a description of its implementation.
@xref{Implementation}.

@node Compiling
@chapter Compiling a Program for Profiling

The first step in generating profile information for your program is
to compile and link it with profiling enabled.

To compile a source file for profiling, specify the @samp{-pg} option when
you run the compiler. (This is in addition to the options you normally
use.)

To link the program for profiling, if you use a compiler such as @code{cc}
to do the linking, simply specify @samp{-pg} in addition to your usual
options. The same option, @samp{-pg}, alters either compilation or linking
to do what is necessary for profiling. Here are examples:

@example
cc -g -c myprog.c utils.c -pg
cc -o myprog myprog.o utils.o -pg
@end example

The @samp{-pg} option also works with a command that both compiles and links:

@example
cc -o myprog myprog.c utils.c -g -pg
@end example

If you run the linker @code{ld} directly instead of through a compiler
such as @code{cc}, you may have to specify a profiling startup file
@file{gcrt0.o} as the first input file instead of the usual startup
file @file{crt0.o}. In addition, you would probably want to
specify the profiling C library, @file{libc_p.a}, by writing
@samp{-lc_p} instead of the usual @samp{-lc}. This is not absolutely
necessary, but doing this gives you number-of-calls information for
standard library functions such as @code{read} and @code{open}. For
example:

@example
ld -o myprog /lib/gcrt0.o myprog.o utils.o -lc_p
@end example

If you compile only some of the modules of the program with @samp{-pg}, you
can still profile the program, but you won't get complete information about
the modules that were compiled without @samp{-pg}. The only information
you get for the functions in those modules is the total time spent in them;
there is no record of how many times they were called, or from where. This
will not affect the flat profile (except that the @code{calls} field for
the functions will be blank), but will greatly reduce the usefulness of the
call graph.
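
For example, in the following hypothetical session only @file{myprog.c} is
compiled with @samp{-pg}; time spent in the functions of @file{utils.c}
would still appear in the flat profile, but with a blank @code{calls} field:

@example
cc -c -pg myprog.c
cc -c utils.c
cc -o myprog -pg myprog.o utils.o
@end example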

If you wish to perform line-by-line profiling,
you will also need to specify the @samp{-g} option,
instructing the compiler to insert debugging symbols into the program
that match program addresses to source code lines.
@xref{Line-by-line}.

In addition to the @samp{-pg} and @samp{-g} options,
you may also wish to specify the @samp{-a} option when compiling.
This will instrument
the program to perform basic-block counting. As the program runs,
it will count how many times it executed each branch of each @samp{if}
statement, each iteration of each @samp{do} loop, etc. This will
enable @code{gprof} to construct an annotated source code
listing showing how many times each line of code was executed.
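
Putting these together, and assuming your compiler accepts all three
options, a full profiling build might look something like this:

@example
cc -g -pg -a -o myprog myprog.c utils.c
@end example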

@node Executing
@chapter Executing the Program

Once the program is compiled for profiling, you must run it in order to
generate the information that @code{gprof} needs. Simply run the program
as usual, using the normal arguments, file names, etc. The program should
run normally, producing the same output as usual. It will, however, run
somewhat slower than normal because of the time spent collecting and
writing the profile data.

The way you run the program---the arguments and input that you give
it---may have a dramatic effect on what the profile information shows. The
profile data will describe the parts of the program that were activated for
the particular input you use. For example, if the first command you give
to your program is to quit, the profile data will show the time used in
initialization and in cleanup, but not much else.

Your program will write the profile data into a file called @file{gmon.out}
just before exiting. If there is already a file called @file{gmon.out},
its contents are overwritten. There is currently no way to tell the
program to write the profile data under a different name, but you can rename
the file afterward if you are concerned that it may be overwritten.
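
For instance, a profiling session for the @file{myprog} example from the
previous chapter might look like this (everything except the fixed name
@file{gmon.out} is ours):

@smallexample
./myprog input1.txt > run1.log
mv gmon.out run1.out
./myprog input2.txt > run2.log
mv gmon.out run2.out
@end smallexample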

In order to write the @file{gmon.out} file properly, your program must exit
normally: by returning from @code{main} or by calling @code{exit}. Calling
the low-level function @code{_exit} does not write the profile data, and
neither does abnormal termination due to an unhandled signal.

The @file{gmon.out} file is written in the program's @emph{current working
directory} at the time it exits. This means that if your program calls
@code{chdir}, the @file{gmon.out} file will be left in the last directory
your program @code{chdir}'d to. If you don't have permission to write in
this directory, the file is not written, and you will get an error message.

Older versions of the @sc{gnu} profiling library may also write a file
called @file{bb.out}. This file, if present, contains a human-readable
listing of the basic-block execution counts. Unfortunately, the
appearance of a human-readable @file{bb.out} means the basic-block
counts didn't get written into @file{gmon.out}.
The Perl script @code{bbconv.pl}, included with the @code{gprof}
source distribution, will convert a @file{bb.out} file into
a format readable by @code{gprof}.

@node Invoking
@chapter @code{gprof} Command Summary

After you have a profile data file @file{gmon.out}, you can run @code{gprof}
to interpret the information in it. The @code{gprof} program prints a
flat profile and a call graph on standard output. Typically you would
redirect the output of @code{gprof} into a file with @samp{>}.

You run @code{gprof} like this:

@smallexample
gprof @var{options} [@var{executable-file} [@var{profile-data-files}@dots{}]] [> @var{outfile}]
@end smallexample

@noindent
Here square brackets indicate optional arguments.

If you omit the executable file name, the file @file{a.out} is used. If
you give no profile data file name, the file @file{gmon.out} is used. If
any file is not in the proper format, or if the profile data file does not
appear to belong to the executable file, an error message is printed.

You can give more than one profile data file by entering all their names
after the executable file name; then the statistics in all the data files
are summed together.
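
For example, to analyze a single run, and then to combine the statistics
from the two renamed data files shown earlier (output file names are ours):

@smallexample
gprof myprog gmon.out > profile.txt
gprof myprog run1.out run2.out > combined.txt
@end smallexample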

The order of these options does not matter.

@menu
* Output Options::        Controlling @code{gprof}'s output style
* Analysis Options::      Controlling how @code{gprof} analyses its data
* Miscellaneous Options::
* Deprecated Options::    Options you no longer need to use, but which
                          have been retained for compatibility
* Symspecs::              Specifying functions to include or exclude
@end menu

@node Output Options,Analysis Options,,Invoking
@section Output Options

These options specify which of several output formats
@code{gprof} should produce.

Many of these options take an optional @dfn{symspec} to specify
functions to be included or excluded. These options can be
specified multiple times, with different symspecs, to include
or exclude sets of symbols. @xref{Symspecs}.

Specifying any of these options overrides the default (@samp{-p -q}),
which prints a flat profile and call graph analysis
for all functions.

@table @code

@item -A[@var{symspec}]
@itemx --annotated-source[=@var{symspec}]
The @samp{-A} option causes @code{gprof} to print annotated source code.
If @var{symspec} is specified, print output only for matching symbols.
@xref{Annotated Source}.

@item -b
@itemx --brief
If the @samp{-b} option is given, @code{gprof} doesn't print the
verbose blurbs that try to explain the meaning of all of the fields in
the tables. This is useful if you intend to print out the output, or
are tired of seeing the blurbs.

@item -C[@var{symspec}]
@itemx --exec-counts[=@var{symspec}]
The @samp{-C} option causes @code{gprof} to
print a tally of functions and the number of times each was called.
If @var{symspec} is specified, print tally only for matching symbols.

If the profile data file contains basic-block count records, specifying
the @samp{-l} option, along with @samp{-C}, will cause basic-block
execution counts to be tallied and displayed.

@item -i
@itemx --file-info
The @samp{-i} option causes @code{gprof} to display summary information
about the profile data file(s) and then exit. The number of histogram,
call graph, and basic-block count records is displayed.

@item -I @var{dirs}
@itemx --directory-path=@var{dirs}
The @samp{-I} option specifies a list of search directories in
which to find source files. The environment variable @code{GPROF_PATH}
can also be used to convey this information.
Used mostly for annotated source output.

@item -J[@var{symspec}]
@itemx --no-annotated-source[=@var{symspec}]
The @samp{-J} option causes @code{gprof} not to
print annotated source code.
If @var{symspec} is specified, @code{gprof} prints annotated source,
but excludes matching symbols.

@item -L
@itemx --print-path
Normally, source filenames are printed with the path
component suppressed. The @samp{-L} option causes @code{gprof}
to print the full pathname of
source filenames, which is determined
from symbolic debugging information in the image file
and is relative to the directory in which the compiler
was invoked.

@item -p[@var{symspec}]
@itemx --flat-profile[=@var{symspec}]
The @samp{-p} option causes @code{gprof} to print a flat profile.
If @var{symspec} is specified, print flat profile only for matching symbols.
@xref{Flat Profile}.

@item -P[@var{symspec}]
@itemx --no-flat-profile[=@var{symspec}]
The @samp{-P} option causes @code{gprof} to suppress printing a flat profile.
If @var{symspec} is specified, @code{gprof} prints a flat profile,
but excludes matching symbols.

@item -q[@var{symspec}]
@itemx --graph[=@var{symspec}]
The @samp{-q} option causes @code{gprof} to print the call graph analysis.
If @var{symspec} is specified, print call graph only for matching symbols
and their children.
@xref{Call Graph}.

@item -Q[@var{symspec}]
@itemx --no-graph[=@var{symspec}]
The @samp{-Q} option causes @code{gprof} to suppress printing the
call graph.
If @var{symspec} is specified, @code{gprof} prints a call graph,
but excludes matching symbols.

@item -y
@itemx --separate-files
This option affects annotated source output only.
Normally, @code{gprof} prints annotated source files
to standard output. If this option is specified,
annotated source for a file named @file{path/filename}
is generated in the file @file{filename-ann}.

@item -Z[@var{symspec}]
@itemx --no-exec-counts[=@var{symspec}]
The @samp{-Z} option causes @code{gprof} not to
print a tally of functions and the number of times each was called.
If @var{symspec} is specified, print tally, but exclude matching symbols.

@item --function-ordering
The @samp{--function-ordering} option causes @code{gprof} to print a
suggested function ordering for the program based on profiling data.
This option suggests an ordering which may improve paging, TLB, and
cache behavior for the program on systems which support arbitrary
ordering of functions in an executable.

The exact details of how to force the linker to place functions
in a particular order are system dependent and outside the scope of this
manual.

@item --file-ordering @var{map_file}
The @samp{--file-ordering} option causes @code{gprof} to print a
suggested .o link line ordering for the program based on profiling data.
This option suggests an ordering which may improve paging, TLB, and
cache behavior for the program on systems which do not support arbitrary
ordering of functions in an executable.

Use of the @samp{-a} argument is highly recommended with this option.

The @var{map_file} argument is a pathname to a file which provides
function name to object file mappings. The format of the file is similar to
the output of the program @code{nm}.

@smallexample
@group
c-parse.o:00000000 T yyparse
c-parse.o:00000004 C yyerrflag
c-lang.o:00000000 T maybe_objc_method_name
c-lang.o:00000000 T print_lang_statistics
c-lang.o:00000000 T recognize_objc_keyword
c-decl.o:00000000 T print_lang_identifier
c-decl.o:00000000 T print_lang_type
@dots{}

@end group
@end smallexample

GNU @code{nm}'s @samp{--extern-only}, @samp{--defined-only}, @samp{-v},
and @samp{--print-file-name} options can be used to create @var{map_file}.
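
For example, a command along these lines (the program name is ours) should
produce a usable @var{map_file}:

@smallexample
nm --extern-only --defined-only -v --print-file-name myprog > map_file
@end smallexample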

@item -T
@itemx --traditional
The @samp{-T} option causes @code{gprof} to print its output in
``traditional'' BSD style.

@item -w @var{width}
@itemx --width=@var{width}
Sets width of output lines to @var{width}.
Currently only used when printing the function index at the bottom
of the call graph.

@item -x
@itemx --all-lines
This option affects annotated source output only.
By default, only the lines at the beginning of a basic-block
are annotated. If this option is specified, every line in
a basic-block is annotated by repeating the annotation for the
first line. This behavior is similar to @code{tcov}'s @samp{-a}.

@item --demangle
@itemx --no-demangle
These options control whether C++ symbol names should be demangled when
printing output. The default is to demangle symbols. The
@code{--no-demangle} option may be used to turn off demangling.

@end table

@node Analysis Options,Miscellaneous Options,Output Options,Invoking
@section Analysis Options

@table @code

@item -a
@itemx --no-static
The @samp{-a} option causes @code{gprof} to suppress the printing of
statically declared (private) functions. (These are functions whose
names are not listed as global, and which are not visible outside the
file/function/block where they were defined.) Time spent in these
functions, calls to/from them, etc., will all be attributed to the
function that was loaded directly before it in the executable file.
@c This is compatible with Unix @code{gprof}, but a bad idea.
This option affects both the flat profile and the call graph.

@item -c
@itemx --static-call-graph
The @samp{-c} option causes the call graph of the program to be
augmented by a heuristic which examines the text space of the object
file and identifies function calls in the binary machine code.
Since normal call graph records are only generated when functions are
entered, this option identifies children that could have been called,
but never were. Calls to functions that were not compiled with
profiling enabled are also identified, but only if symbol table
entries are present for them.
Calls to dynamic library routines are typically @emph{not} found
by this option.
Parents or children identified via this heuristic
are indicated in the call graph with call counts of @samp{0}.

@item -D
@itemx --ignore-non-functions
The @samp{-D} option causes @code{gprof} to ignore symbols which
are not known to be functions. This option will give more accurate
profile data on systems where it is supported (Solaris and HP-UX, for
example).

@item -k @var{from}/@var{to}
The @samp{-k} option allows you to delete from the call graph any arcs from
symbols matching symspec @var{from} to those matching symspec @var{to}.

@item -l
@itemx --line
The @samp{-l} option enables line-by-line profiling, which causes
histogram hits to be charged to individual source code lines,
instead of functions.
If the program was compiled with basic-block counting enabled,
this option will also identify how many times each line of
code was executed.
While line-by-line profiling can help isolate where in a large function
a program is spending its time, it also significantly increases
the running time of @code{gprof}, and magnifies statistical
inaccuracies.
@xref{Sampling Error}.

@item -m @var{num}
@itemx --min-count=@var{num}
This option affects execution count output only.
Symbols that are executed less than @var{num} times are suppressed.

@item -n[@var{symspec}]
@itemx --time[=@var{symspec}]
The @samp{-n} option causes @code{gprof}, in its call graph analysis,
to only propagate times for symbols matching @var{symspec}.

@item -N[@var{symspec}]
@itemx --no-time[=@var{symspec}]
The @samp{-N} option causes @code{gprof}, in its call graph analysis,
not to propagate times for symbols matching @var{symspec}.

@item -z
@itemx --display-unused-functions
If you give the @samp{-z} option, @code{gprof} will mention all
functions in the flat profile, even those that were never called, and
that had no time spent in them. This is useful in conjunction with the
@samp{-c} option for discovering which routines were never called.

@end table

@node Miscellaneous Options,Deprecated Options,Analysis Options,Invoking
@section Miscellaneous Options

@table @code

@item -d[@var{num}]
@itemx --debug[=@var{num}]
The @samp{-d @var{num}} option specifies debugging options.
If @var{num} is not specified, enable all debugging.
@xref{Debugging}.

@item -O@var{name}
@itemx --file-format=@var{name}
Selects the format of the profile data files.
Recognized formats are @samp{auto} (the default), @samp{bsd}, @samp{magic},
and @samp{prof} (not yet supported).

@item -s
@itemx --sum
The @samp{-s} option causes @code{gprof} to summarize the information
in the profile data files it read in, and write out a profile data
file called @file{gmon.sum}, which contains all the information from
the profile data files that @code{gprof} read in. The file @file{gmon.sum}
may be one of the specified input files; the effect of this is to
merge the data in the other input files into @file{gmon.sum}.

Eventually you can run @code{gprof} again without @samp{-s} to analyze the
cumulative data in the file @file{gmon.sum}. (See the example following
this table.)

@item -v
@itemx --version
The @samp{-v} flag causes @code{gprof} to print the current version
number, and then exit.

@end table
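
As a sketch of the @samp{-s} workflow mentioned above, using the renamed
data files from the Executing chapter (only @file{gmon.sum} is a fixed name):

@smallexample
gprof -s myprog run1.out run2.out
gprof myprog gmon.sum > cumulative.txt
@end smallexample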

@node Deprecated Options,Symspecs,Miscellaneous Options,Invoking
@section Deprecated Options

These options have been replaced with newer versions that use symspecs.

@table @code

@item -e @var{function_name}
The @samp{-e @var{function}} option tells @code{gprof} to not print
information about the function @var{function_name} (and its
children@dots{}) in the call graph. The function will still be listed
as a child of any functions that call it, but its index number will be
shown as @samp{[not printed]}. More than one @samp{-e} option may be
given; only one @var{function_name} may be indicated with each @samp{-e}
option.

@item -E @var{function_name}
The @code{-E @var{function}} option works like the @code{-e} option, but
time spent in the function (and children who were not called from
anywhere else), will not be used to compute the percentages-of-time for
the call graph. More than one @samp{-E} option may be given; only one
@var{function_name} may be indicated with each @samp{-E} option.

@item -f @var{function_name}
The @samp{-f @var{function}} option causes @code{gprof} to limit the
call graph to the function @var{function_name} and its children (and
their children@dots{}). More than one @samp{-f} option may be given;
only one @var{function_name} may be indicated with each @samp{-f}
option.

@item -F @var{function_name}
The @samp{-F @var{function}} option works like the @code{-f} option, but
only time spent in the function and its children (and their
children@dots{}) will be used to determine total-time and
percentages-of-time for the call graph. More than one @samp{-F} option
may be given; only one @var{function_name} may be indicated with each
@samp{-F} option. The @samp{-F} option overrides the @samp{-E} option.

@end table

Note that only one function can be specified with each @code{-e},
@code{-E}, @code{-f} or @code{-F} option. To specify more than one
function, use multiple options. For example, this command:

@example
gprof -e boring -f foo -f bar myprogram > gprof.output
@end example

@noindent
lists in the call graph all functions that were reached from either
@code{foo} or @code{bar} and were not reachable from @code{boring}.

@node Symspecs,,Deprecated Options,Invoking
@section Symspecs

Many of the output options allow functions to be included or excluded
using @dfn{symspecs} (symbol specifications), which observe the
following syntax:

@example
  filename_containing_a_dot
| funcname_not_containing_a_dot
| linenumber
| ( [ any_filename ] `:' ( any_funcname | linenumber ) )
@end example

Here are some sample symspecs:

@table @code
@item main.c
Selects everything in file @file{main.c}---the
dot in the string tells @code{gprof} to interpret
the string as a filename, rather than as
a function name. To select a file whose
name does not contain a dot, a trailing colon
should be specified. For example, @samp{odd:} is
interpreted as the file named @file{odd}.

@item main
Selects all functions named @samp{main}. Notice
that there may be multiple instances of the
same function name because some of the
definitions may be local (i.e., static).
Unless a function name is unique in a program,
you must use the colon notation explained
below to specify a function from a specific
source file. Sometimes, function names contain
dots. In such cases, it is necessary to
add a leading colon to the name. For example,
@samp{:.mul} selects function @samp{.mul}.

@item main.c:main
Selects function @samp{main} in file @file{main.c}.

@item main.c:134
Selects line 134 in file @file{main.c}.
@end table
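
For instance, combining symspecs with the output options described earlier,
an invocation such as the following (program and output names are ours)
restricts the flat profile to @file{main.c} and the call graph to @code{main}:

@smallexample
gprof -pmain.c -qmain myprog gmon.out > profile.txt
@end smallexample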

@node Output
@chapter Interpreting @code{gprof}'s Output

@code{gprof} can produce several different output styles, the
most important of which are described below. The simplest output
styles (file information, execution count, and function and file ordering)
are not described here, but are documented with the respective options
that trigger them.
@xref{Output Options}.

@menu
* Flat Profile::        The flat profile shows how much time was spent
                        executing directly in each function.
* Call Graph::          The call graph shows which functions called which
                        others, and how much time each function used
                        when its subroutine calls are included.
* Line-by-line::        @code{gprof} can analyze individual source code lines
* Annotated Source::    The annotated source listing displays source code
                        labeled with execution counts
@end menu


@node Flat Profile,Call Graph,,Output
@section The Flat Profile
@cindex flat profile

The @dfn{flat profile} shows the total amount of time your program
spent executing each function. Unless the @samp{-z} option is given,
functions with no apparent time spent in them, and no apparent calls
to them, are not mentioned. Note that if a function was not compiled
for profiling, and didn't run long enough to show up on the program
counter histogram, it will be indistinguishable from a function that
was never called.

This is part of a flat profile for a small program:

@smallexample
@group
Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 33.34      0.02     0.02     7208     0.00     0.00  open
 16.67      0.03     0.01      244     0.04     0.12  offtime
 16.67      0.04     0.01        8     1.25     1.25  memccpy
 16.67      0.05     0.01        7     1.43     1.43  write
 16.67      0.06     0.01                             mcount
  0.00      0.06     0.00      236     0.00     0.00  tzset
  0.00      0.06     0.00      192     0.00     0.00  tolower
  0.00      0.06     0.00       47     0.00     0.00  strlen
  0.00      0.06     0.00       45     0.00     0.00  strchr
  0.00      0.06     0.00        1     0.00    50.00  main
  0.00      0.06     0.00        1     0.00     0.00  memcpy
  0.00      0.06     0.00        1     0.00    10.11  print
  0.00      0.06     0.00        1     0.00     0.00  profil
  0.00      0.06     0.00        1     0.00    50.00  report
@dots{}
@end group
@end smallexample

@noindent
The functions are sorted first by decreasing run-time spent in them,
then by decreasing number of calls, then alphabetically by name. The
functions @samp{mcount} and @samp{profil} are part of the profiling
apparatus and appear in every flat profile; their time gives a measure of
the amount of overhead due to profiling.

Just before the column headers, a statement appears indicating
how much time each sample counted as.
This @dfn{sampling period} estimates the margin of error in each of the time
figures. A time figure that is not much larger than this is not
reliable. In this example, each sample counted as 0.01 seconds,
suggesting a 100 Hz sampling rate.
The program's total execution time was 0.06
seconds, as indicated by the @samp{cumulative seconds} field. Since
each sample counted for 0.01 seconds, this means only six samples
were taken during the run. Two of the samples occurred while the
program was in the @samp{open} function, as indicated by the
@samp{self seconds} field. The other four samples
occurred one each in @samp{offtime}, @samp{memccpy}, @samp{write},
and @samp{mcount}.
Since only six samples were taken, none of these values can
be regarded as particularly reliable.
In another run,
the @samp{self seconds} field for
@samp{mcount} might well be @samp{0.00} or @samp{0.02}.
@xref{Sampling Error}, for a complete discussion.

The remaining functions in the listing (those whose
@samp{self seconds} field is @samp{0.00}) didn't appear
in the histogram samples at all. However, the call graph
indicated that they were called, so they are listed,
sorted in decreasing order by the @samp{calls} field.
Clearly some time was spent executing these functions,
but the paucity of histogram samples prevents any
determination of how much time each took.

Here is what the fields in each line mean:

@table @code
@item % time
This is the percentage of the total execution time your program spent
in this function. These should all add up to 100%.

@item cumulative seconds
This is the cumulative total number of seconds the computer spent
executing this function, plus the time spent in all the functions
above this one in this table.

@item self seconds
This is the number of seconds accounted for by this function alone.
The flat profile listing is sorted first by this number.

@item calls
This is the total number of times the function was called. If the
function was never called, or the number of times it was called cannot
be determined (probably because the function was not compiled with
profiling enabled), the @dfn{calls} field is blank.

@item self ms/call
This represents the average number of milliseconds spent in this
function per call, if this function is profiled. Otherwise, this field
is blank for this function.

@item total ms/call
This represents the average number of milliseconds spent in this
function and its descendants per call, if this function is profiled.
Otherwise, this field is blank for this function.
This is the only field in the flat profile that uses call graph analysis.

@item name
This is the name of the function. The flat profile is sorted by this
field alphabetically after the @dfn{self seconds} and @dfn{calls}
fields are sorted.
@end table

@node Call Graph,Line-by-line,Flat Profile,Output
@section The Call Graph
@cindex call graph

The @dfn{call graph} shows how much time was spent in each function
and its children. From this information, you can find functions that,
while they themselves may not have used much time, called other
functions that did use unusual amounts of time.

Here is a sample call graph from a small program. This call graph came
from the same @code{gprof} run as the flat profile example in the
previous section.

@smallexample
@group
granularity: each sample hit covers 2 byte(s) for 20.00% of 0.05 seconds

index % time    self  children    called     name
                                                 <spontaneous>
[1]    100.0    0.00    0.05                 start [1]
                0.00    0.05       1/1           main [2]
                0.00    0.00       1/2           on_exit [28]
                0.00    0.00       1/1           exit [59]
-----------------------------------------------
                0.00    0.05       1/1           start [1]
[2]    100.0    0.00    0.05       1         main [2]
                0.00    0.05       1/1           report [3]
-----------------------------------------------
                0.00    0.05       1/1           main [2]
[3]    100.0    0.00    0.05       1         report [3]
                0.00    0.03       8/8           timelocal [6]
                0.00    0.01       1/1           print [9]
                0.00    0.01       9/9           fgets [12]
                0.00    0.00      12/34          strncmp <cycle 1> [40]
                0.00    0.00       8/8           lookup [20]
                0.00    0.00       1/1           fopen [21]
                0.00    0.00       8/8           chewtime [24]
                0.00    0.00       8/16          skipspace [44]
-----------------------------------------------
[4]     59.8    0.01    0.02       8+472     <cycle 2 as a whole> [4]
                0.01    0.02     244+260         offtime <cycle 2> [7]
                0.00    0.00     236+1           tzset <cycle 2> [26]
-----------------------------------------------
@end group
@end smallexample

The lines full of dashes divide this table into @dfn{entries}, one for each
function. Each entry has one or more lines.

In each entry, the primary line is the one that starts with an index number
in square brackets. The end of this line says which function the entry is
for. The preceding lines in the entry describe the callers of this
function and the following lines describe its subroutines (also called
@dfn{children} when we speak of the call graph).

The entries are sorted by time spent in the function and its subroutines.

The internal profiling function @code{mcount} (@pxref{Flat Profile})
is never mentioned in the call graph.

@menu
* Primary::       Details of the primary line's contents.
* Callers::       Details of caller-lines' contents.
* Subroutines::   Details of subroutine-lines' contents.
* Cycles::        When there are cycles of recursion,
                  such as @code{a} calls @code{b} calls @code{a}@dots{}
@end menu

@node Primary
@subsection The Primary Line

The @dfn{primary line} in a call graph entry is the line that
describes the function which the entry is about and gives the overall
statistics for this function.

For reference, we repeat the primary line from the entry for function
@code{report} in our main example, together with the heading line that
shows the names of the fields:

@smallexample
@group
index % time    self  children    called     name
@dots{}
[3]    100.0    0.00    0.05       1         report [3]
@end group
@end smallexample

Here is what the fields in the primary line mean:

@table @code
@item index
Entries are numbered with consecutive integers. Each function
therefore has an index number, which appears at the beginning of its
primary line.

Each cross-reference to a function, as a caller or subroutine of
another, gives its index number as well as its name. The index number
guides you if you wish to look for the entry for that function.

@item % time
This is the percentage of the total time that was spent in this
function, including time spent in subroutines called from this
function.

The time spent in this function is counted again for the callers of
this function. Therefore, adding up these percentages is meaningless.

@item self
This is the total amount of time spent in this function. This
should be identical to the number printed in the @code{seconds} field
for this function in the flat profile.

@item children
This is the total amount of time spent in the subroutine calls made by
this function. This should be equal to the sum of all the @code{self}
and @code{children} entries of the children listed directly below this
function.

@item called
This is the number of times the function was called.

If the function called itself recursively, there are two numbers,
separated by a @samp{+}. The first number counts non-recursive calls,
and the second counts recursive calls.

In the example above, the function @code{report} was called once from
@code{main}.

@item name
This is the name of the current function. The index number is
repeated after it.

If the function is part of a cycle of recursion, the cycle number is
printed between the function's name and the index number
(@pxref{Cycles}). For example, if function @code{gnurr} is part of
cycle number one, and has index number twelve, its primary line would
end like this:

@example
gnurr <cycle 1> [12]
@end example
@end table

@node Callers, Subroutines, Primary, Call Graph
@subsection Lines for a Function's Callers

A function's entry has a line for each function it was called by.
These lines' fields correspond to the fields of the primary line, but
their meanings are different because of the difference in context.

For reference, we repeat two lines from the entry for the function
@code{report}, the primary line and one caller-line preceding it, together
with the heading line that shows the names of the fields:

@smallexample
index % time    self  children    called     name
@dots{}
                0.00    0.05       1/1           main [2]
[3]    100.0    0.00    0.05       1         report [3]
@end smallexample

Here are the meanings of the fields in the caller-line for @code{report}
called from @code{main}:

@table @code
@item self
An estimate of the amount of time spent in @code{report} itself when it was
called from @code{main}.

@item children
An estimate of the amount of time spent in subroutines of @code{report}
when @code{report} was called from @code{main}.

The sum of the @code{self} and @code{children} fields is an estimate
of the amount of time spent within calls to @code{report} from @code{main}.

@item called
Two numbers: the number of times @code{report} was called from @code{main},
followed by the total number of nonrecursive calls to @code{report} from
all its callers.

@item name and index number
The name of the caller of @code{report} to which this line applies,
followed by the caller's index number.

Not all functions have entries in the call graph; some
options to @code{gprof} request the omission of certain functions.
When a caller has no entry of its own, it still has caller-lines
in the entries of the functions it calls.

If the caller is part of a recursion cycle, the cycle number is
printed between the name and the index number.
@end table

If the identity of the callers of a function cannot be determined, a
dummy caller-line is printed which has @samp{<spontaneous>} as the
``caller's name'' and all other fields blank. This can happen for
signal handlers.
@c What if some calls have determinable callers' names but not all?
@c FIXME - still relevant?

@node Subroutines, Cycles, Callers, Call Graph
@subsection Lines for a Function's Subroutines

A function's entry has a line for each of its subroutines---in other
words, a line for each other function that it called. These lines'
fields correspond to the fields of the primary line, but their meanings
are different because of the difference in context.

For reference, we repeat two lines from the entry for the function
@code{main}, the primary line and a line for a subroutine, together
with the heading line that shows the names of the fields:

@smallexample
index % time    self  children    called     name
@dots{}
[2]    100.0    0.00    0.05       1         main [2]
                0.00    0.05       1/1           report [3]
@end smallexample

Here are the meanings of the fields in the subroutine-line for @code{main}
calling @code{report}:

@table @code
@item self
An estimate of the amount of time spent directly within @code{report}
when @code{report} was called from @code{main}.

@item children
An estimate of the amount of time spent in subroutines of @code{report}
when @code{report} was called from @code{main}.

The sum of the @code{self} and @code{children} fields is an estimate
of the total time spent in calls to @code{report} from @code{main}.

@item called
Two numbers, the number of calls to @code{report} from @code{main}
followed by the total number of nonrecursive calls to @code{report}.
This ratio is used to determine how much of @code{report}'s @code{self}
and @code{children} time gets credited to @code{main}.
@xref{Assumptions}.

@item name
The name of the subroutine of @code{main} to which this line applies,
followed by the subroutine's index number.

If the caller is part of a recursion cycle, the cycle number is
printed between the name and the index number.
@end table

@node Cycles,, Subroutines, Call Graph
@subsection How Mutually Recursive Functions Are Described
@cindex cycle
@cindex recursion cycle

The graph may be complicated by the presence of @dfn{cycles of
recursion} in the call graph. A cycle exists if a function calls
another function that (directly or indirectly) calls (or appears to
call) the original function. For example: if @code{a} calls @code{b},
and @code{b} calls @code{a}, then @code{a} and @code{b} form a cycle.

Whenever there are call paths both ways between a pair of functions, they
belong to the same cycle. If @code{a} and @code{b} call each other and
@code{b} and @code{c} call each other, all three make one cycle. Note that
even if @code{b} only calls @code{a} when it was not itself called from
@code{a}, @code{gprof} cannot determine this, so @code{a} and @code{b} are
still considered a cycle.

The cycles are numbered with consecutive integers. When a function
belongs to a cycle, each time the function name appears in the call graph
it is followed by @samp{<cycle @var{number}>}.

The reason cycles matter is that they make the time values in the call
graph paradoxical. The ``time spent in children'' of @code{a} should
include the time spent in its subroutine @code{b} and in @code{b}'s
subroutines---but one of @code{b}'s subroutines is @code{a}! How much of
@code{a}'s time should be included in the children of @code{a}, when
@code{a} is indirectly recursive?

The way @code{gprof} resolves this paradox is by creating a single entry
for the cycle as a whole. The primary line of this entry describes the
total time spent directly in the functions of the cycle. The
``subroutines'' of the cycle are the individual functions of the cycle, and
all other functions that were called directly by them. The ``callers'' of
the cycle are the functions, outside the cycle, that called functions in
the cycle.

Here is an example portion of a call graph which shows a cycle containing
functions @code{a} and @code{b}. The cycle was entered by a call to
@code{a} from @code{main}; both @code{a} and @code{b} called @code{c}.

@smallexample
index % time    self  children    called     name
----------------------------------------
                1.77        0       1/1         main [2]
[3]     91.71   1.77        0       1+5     <cycle 1 as a whole> [3]
                1.02        0       3           b <cycle 1> [4]
                0.75        0       2           a <cycle 1> [5]
----------------------------------------
                                    3           a <cycle 1> [5]
[4]     52.85   1.02        0       0       b <cycle 1> [4]
                                    2           a <cycle 1> [5]
                   0        0       3/6         c [6]
----------------------------------------
                1.77        0       1/1         main [2]
                                    2           b <cycle 1> [4]
[5]     38.86   0.75        0       1       a <cycle 1> [5]
                                    3           b <cycle 1> [4]
                   0        0       3/6         c [6]
----------------------------------------
@end smallexample

@noindent
(The entire call graph for this program contains in addition an entry for
@code{main}, which calls @code{a}, and an entry for @code{c}, with callers
@code{a} and @code{b}.)

@smallexample
index % time    self  children    called     name
                                                <spontaneous>
[1]    100.00      0     1.93       0       start [1]
                0.16     1.77       1/1         main [2]
----------------------------------------
                0.16     1.77       1/1         start [1]
[2]    100.00   0.16     1.77       1       main [2]
                1.77        0       1/1         a <cycle 1> [5]
----------------------------------------
                1.77        0       1/1         main [2]
[3]     91.71   1.77        0       1+5     <cycle 1 as a whole> [3]
                1.02        0       3           b <cycle 1> [4]
                0.75        0       2           a <cycle 1> [5]
                   0        0       6/6         c [6]
----------------------------------------
                                    3           a <cycle 1> [5]
[4]     52.85   1.02        0       0       b <cycle 1> [4]
                                    2           a <cycle 1> [5]
                   0        0       3/6         c [6]
----------------------------------------
                1.77        0       1/1         main [2]
                                    2           b <cycle 1> [4]
[5]     38.86   0.75        0       1       a <cycle 1> [5]
                                    3           b <cycle 1> [4]
                   0        0       3/6         c [6]
----------------------------------------
                   0        0       3/6         b <cycle 1> [4]
                   0        0       3/6         a <cycle 1> [5]
[6]      0.00      0        0       6       c [6]
----------------------------------------
@end smallexample
1188 | ||
1189 | The @code{self} field of the cycle's primary line is the total time | |
1190 | spent in all the functions of the cycle. It equals the sum of the | |
1191 | @code{self} fields for the individual functions in the cycle, found |
1192 | in the subroutine lines for these functions in the cycle's entry. (In the example above, 1.02 seconds for @code{b} plus 0.75 seconds for @code{a} accounts for the 1.77 seconds on the cycle's primary line.) |
1193 | ||
1194 | The @code{children} fields of the cycle's primary line and subroutine lines | |
1195 | count only subroutines outside the cycle. Even though @code{a} calls | |
1196 | @code{b}, the time spent in those calls to @code{b} is not counted in | |
1197 | @code{a}'s @code{children} time. Thus, we do not encounter the problem of | |
1198 | what to do when the time in those calls to @code{b} includes indirect | |
1199 | recursive calls back to @code{a}. | |
1200 | ||
1201 | The @code{children} field of a caller-line in the cycle's entry estimates |
1202 | the amount of time spent @emph{in the whole cycle}, and in its other |
1203 | subroutines, on the occasions when that caller called a function in the cycle. |
1204 | ||
1205 | The @code{calls} field in the primary line for the cycle has two numbers: | |
1206 | first, the number of times functions in the cycle were called by functions | |
1207 | outside the cycle; second, the number of times they were called by | |
1208 | functions in the cycle (including times when a function in the cycle calls | |
1209 | itself). This is a generalization of the usual split into nonrecursive and | |
1210 | recursive calls. | |
1211 | ||
1212 | The @code{calls} field of a subroutine-line for a cycle member in the | |
1213 | cycle's entry says how many times that function was called from functions in |
1214 | the cycle. The total of all these is the second number in the primary line's |
1215 | @code{calls} field (in the example above, the three calls to @code{b} and the two calls to @code{a} account for the 5 in @samp{1+5}). |
1216 | ||
1217 | In the individual entry for a function in a cycle, the other functions in | |
1218 | the same cycle can appear as subroutines and as callers. These lines show | |
1219 | how many times each function in the cycle called or was called from each other | |
1220 | function in the cycle. The @code{self} and @code{children} fields in these | |
1221 | lines are blank because of the difficulty of defining meanings for them | |
1222 | when recursion is going on. | |
1223 | ||
1224 | @node Line-by-line,Annotated Source,Call Graph,Output |
1225 | @section Line-by-line Profiling | |
be4e1cd5 | 1226 | |
1227 | @code{gprof}'s @samp{-l} option causes the program to perform |
1228 | @dfn{line-by-line} profiling. In this mode, histogram | |
1229 | samples are assigned not to functions, but to individual | |
1230 | lines of source code. The program usually must be compiled | |
1231 | with a @samp{-g} option, in addition to @samp{-pg}, in order | |
1232 | to generate debugging symbols for tracking source code lines. | |
be4e1cd5 | 1233 | |
1234 | The flat profile is the most useful output table |
1235 | in line-by-line mode. | |
1236 | The call graph isn't as useful as normal, since | |
1237 | the current version of @code{gprof} does not propagate | |
1238 | call graph arcs from source code lines to the enclosing function. | |
1239 | The call graph does, however, show each line of code | |
1240 | that called each function, along with a count. | |
be4e1cd5 | 1241 | |
1242 | Here is a section of @code{gprof}'s output, without line-by-line profiling. |
1243 | Note that @code{ct_init} accounted for four histogram hits, and | |
1244 | 13327 calls to @code{init_block}. | |
be4e1cd5 | 1245 | |
1246 | @smallexample |
1247 | Flat profile: | |
be4e1cd5 | 1248 | |
1249 | Each sample counts as 0.01 seconds. |
1250 | % cumulative self self total | |
1251 | time seconds seconds calls us/call us/call name | |
1252 | 30.77 0.13 0.04 6335 6.31 6.31 ct_init | |
be4e1cd5 | 1253 | |
1254 | |
1255 | Call graph (explanation follows) | |
1256 | ||
1257 | ||
1258 | granularity: each sample hit covers 4 byte(s) for 7.69% of 0.13 seconds | |
1259 | ||
1260 | index % time self children called name | |
1261 | ||
1262 | 0.00 0.00 1/13496 name_too_long | |
1263 | 0.00 0.00 40/13496 deflate | |
1264 | 0.00 0.00 128/13496 deflate_fast | |
1265 | 0.00 0.00 13327/13496 ct_init | |
1266 | [7] 0.0 0.00 0.00 13496 init_block | |
1267 | ||
1268 | @end smallexample | |
1269 | ||
1270 | Now let's look at some of @code{gprof}'s output from the same program run, | |
1271 | this time with line-by-line profiling enabled. Note that @code{ct_init}'s | |
1272 | four histogram hits are broken down into four lines of source code - one hit | |
1273 | occurred on each of lines 349, 351, 382 and 385. In the call graph, |
1274 | note how | |
1275 | @code{ct_init}'s 13327 calls to @code{init_block} are broken down | |
1276 | into one call from line 396, 3071 calls from line 384, 3730 calls | |
1277 | from line 385, and 6525 calls from line 387. |
1278 | ||
1279 | @smallexample | |
1280 | Flat profile: | |
1281 | ||
1282 | Each sample counts as 0.01 seconds. | |
1283 | % cumulative self | |
1284 | time seconds seconds calls name | |
1285 | 7.69 0.10 0.01 ct_init (trees.c:349) | |
1286 | 7.69 0.11 0.01 ct_init (trees.c:351) | |
1287 | 7.69 0.12 0.01 ct_init (trees.c:382) | |
1288 | 7.69 0.13 0.01 ct_init (trees.c:385) | |
1289 | ||
1290 | ||
1291 | Call graph (explanation follows) | |
1292 | ||
1293 | ||
1294 | granularity: each sample hit covers 4 byte(s) for 7.69% of 0.13 seconds | |
1295 | ||
1296 | % time self children called name | |
1297 | ||
1298 | 0.00 0.00 1/13496 name_too_long (gzip.c:1440) | |
1299 | 0.00 0.00 1/13496 deflate (deflate.c:763) | |
1300 | 0.00 0.00 1/13496 ct_init (trees.c:396) | |
1301 | 0.00 0.00 2/13496 deflate (deflate.c:727) | |
1302 | 0.00 0.00 4/13496 deflate (deflate.c:686) | |
1303 | 0.00 0.00 5/13496 deflate (deflate.c:675) | |
1304 | 0.00 0.00 12/13496 deflate (deflate.c:679) | |
1305 | 0.00 0.00 16/13496 deflate (deflate.c:730) | |
1306 | 0.00 0.00 128/13496 deflate_fast (deflate.c:654) | |
1307 | 0.00 0.00 3071/13496 ct_init (trees.c:384) | |
1308 | 0.00 0.00 3730/13496 ct_init (trees.c:385) | |
1309 | 0.00 0.00 6525/13496 ct_init (trees.c:387) | |
1310 | [6] 0.0 0.00 0.00 13496 init_block (trees.c:408) | |
1311 | ||
1312 | @end smallexample | |
1313 | ||
1314 | ||
1315 | @node Annotated Source,,Line-by-line,Output | |
1316 | @section The Annotated Source Listing | |
1317 | ||
1318 | @code{gprof}'s @samp{-A} option triggers an annotated source listing, | |
1319 | which lists the program's source code, each function labeled with the | |
1320 | number of times it was called. You may also need to specify the | |
1321 | @samp{-I} option, if @code{gprof} can't find the source code files. | |
1322 | ||
1323 | Compiling with @samp{gcc @dots{} -g -pg -a} augments your program | |
1324 | with basic-block counting code, in addition to function counting code. | |
1325 | This enables @code{gprof} to determine how many times each line | |
1326 | of code was executed. |
1327 | For example, consider the following function, taken from gzip, | |
1328 | with line numbers added: | |
1329 | ||
1330 | @smallexample | |
1331 | 1 ulg updcrc(s, n) | |
1332 | 2 uch *s; | |
1333 | 3 unsigned n; | |
1334 | 4 @{ | |
1335 | 5 register ulg c; | |
1336 | 6 | |
1337 | 7 static ulg crc = (ulg)0xffffffffL; | |
1338 | 8 | |
1339 | 9 if (s == NULL) @{ | |
1340 | 10 c = 0xffffffffL; | |
1341 | 11 @} else @{ | |
1342 | 12 c = crc; | |
1343 | 13 if (n) do @{ | |
1344 | 14 c = crc_32_tab[...]; | |
1345 | 15 @} while (--n); | |
1346 | 16 @} | |
1347 | 17 crc = c; | |
1348 | 18 return c ^ 0xffffffffL; | |
1349 | 19 @} | |
1350 | ||
1351 | @end smallexample | |
1352 | ||
1353 | @code{updcrc} has at least five basic-blocks. | |
1354 | One is the function itself. The | |
1355 | @code{if} statement on line 9 generates two more basic-blocks, one | |
1356 | for each branch of the @code{if}. A fourth basic-block results from | |
1357 | the @code{if} on line 13, and the contents of the @code{do} loop form | |
1358 | the fifth basic-block. The compiler may also generate additional | |
1359 | basic-blocks to handle various special cases. | |
1360 | ||
1361 | A program augmented for basic-block counting can be analyzed with | |
1362 | @code{gprof -l -A}. I also suggest use of the @samp{-x} option, | |
1363 | which ensures that each line of code is labeled at least once. | |
1364 | Here is @code{updcrc}'s | |
1365 | annotated source listing for a sample @code{gzip} run: | |
1366 | ||
1367 | @smallexample | |
1368 | ulg updcrc(s, n) | |
1369 | uch *s; | |
1370 | unsigned n; | |
1371 | 2 ->@{ | |
1372 | register ulg c; | |
1373 | ||
1374 | static ulg crc = (ulg)0xffffffffL; | |
1375 | ||
1376 | 2 -> if (s == NULL) @{ | |
1377 | 1 -> c = 0xffffffffL; | |
1378 | 1 -> @} else @{ | |
1379 | 1 -> c = crc; | |
1380 | 1 -> if (n) do @{ | |
1381 | 26312 -> c = crc_32_tab[...]; | |
1382 | 26312,1,26311 -> @} while (--n); | |
1383 | @} | |
1384 | 2 -> crc = c; | |
1385 | 2 -> return c ^ 0xffffffffL; | |
1386 | 2 ->@} | |
1387 | @end smallexample | |
1388 | ||
1389 | In this example, the function was called twice, passing once through | |
1390 | each branch of the @code{if} statement. The body of the @code{do} | |
1391 | loop was executed a total of 26312 times. Note how the @code{while} | |
1392 | statement is annotated. It began execution 26312 times, once for | |
1393 | each iteration through the loop. One of those times (the last time) | |
1394 | it exited, while it branched back to the beginning of the loop 26311 times. | |
1395 | ||
1396 | @node Inaccuracy | |
1397 | @chapter Inaccuracy of @code{gprof} Output | |
1398 | ||
1399 | @menu | |
1400 | * Sampling Error:: Statistical margins of error | |
1401 | * Assumptions:: Estimating children times | |
1402 | @end menu | |
1403 | ||
1404 | @node Sampling Error,Assumptions,,Inaccuracy | |
1405 | @section Statistical Sampling Error | |
1406 | |
1407 | The run-time figures that @code{gprof} gives you are based on a sampling | |
1408 | process, so they are subject to statistical inaccuracy. If a function runs | |
1409 | only a small amount of time, so that on the average the sampling process | |
1410 | ought to catch that function in the act only once, there is a pretty good | |
1411 | chance it will actually find that function zero times, or twice. | |
1412 | ||
1413 | By contrast, the number-of-calls and basic-block figures |
1414 | are derived by counting, not | |
1415 | sampling. They are completely accurate and will not vary from run to run |
1416 | if your program is deterministic. | |
1417 | ||
1418 | The @dfn{sampling period} that is printed at the beginning of the flat | |
1419 | profile says how often samples are taken. The rule of thumb is that a | |
1420 | run-time figure is accurate if it is considerably bigger than the sampling | |
1421 | period. | |
1422 | ||
1423 | The actual amount of error can be predicted. |
1424 | For @var{n} samples, the @emph{expected} error | |
1425 | is the square-root of @var{n}. For example, | |
1426 | if the sampling period is 0.01 seconds and @code{foo}'s run-time is 1 second, | |
1427 | @var{n} is 100 samples (1 second/0.01 seconds), sqrt(@var{n}) is 10 samples, so | |
1428 | the expected error in @code{foo}'s run-time is 0.1 seconds (10*0.01 seconds), | |
1429 | or ten percent of the observed value. | |
1430 | Again, if the sampling period is 0.01 seconds and @code{bar}'s run-time is | |
1431 | 100 seconds, @var{n} is 10000 samples, sqrt(@var{n}) is 100 samples, so | |
1432 | the expected error in @code{bar}'s run-time is 1 second, | |
1433 | or one percent of the observed value. | |
1434 | It is likely to | |
1435 | vary this much @emph{on the average} from one profiling run to the next. |
1436 | (@emph{Sometimes} it will vary more.) | |
1437 | ||
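The arithmetic in the preceding paragraph can be packaged as a tiny
helper; this is purely illustrative and is not part of @code{gprof}.

@smallexample
#include <math.h>

/* Expected error, in seconds, of a sampled run-time figure:
   n = run_time / period samples are expected, and the expected
   error is sqrt(n) samples, i.e. sqrt(n) * period seconds.  */
double
expected_error (double run_time, double period)
@{
  double n = run_time / period;
  return sqrt (n) * period;
@}

/* expected_error (1.0, 0.01)   == 0.1 seconds (ten percent)
   expected_error (100.0, 0.01) == 1.0 second  (one percent)   */
@end smallexample
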
1438 | This does not mean that a small run-time figure is devoid of information. | |
1439 | If the program's @emph{total} run-time is large, a small run-time for one | |
1440 | function does tell you that that function used an insignificant fraction of | |
1441 | the whole program's time. Usually this means it is not worth optimizing. | |
1442 | ||
1443 | One way to get more accuracy is to give your program more (but similar) | |
1444 | input data so it will take longer. Another way is to combine the data from | |
1445 | several runs, using the @samp{-s} option of @code{gprof}. Here is how: | |
1446 | ||
1447 | @enumerate | |
1448 | @item | |
1449 | Run your program once. | |
1450 | ||
1451 | @item | |
1452 | Issue the command @samp{mv gmon.out gmon.sum}. | |
1453 | ||
1454 | @item | |
1455 | Run your program again, the same as before. | |
1456 | ||
1457 | @item | |
1458 | Merge the new data in @file{gmon.out} into @file{gmon.sum} with this command: | |
1459 | ||
1460 | @example | |
1461 | gprof -s @var{executable-file} gmon.out gmon.sum | |
1462 | @end example | |
1463 | ||
1464 | @item | |
1465 | Repeat the last two steps as often as you wish. | |
1466 | ||
1467 | @item | |
1468 | Analyze the cumulative data using this command: | |
1469 | ||
1470 | @example | |
1471 | gprof @var{executable-file} gmon.sum > @var{output-file} | |
1472 | @end example | |
1473 | @end enumerate | |
1474 | ||
1475 | @node Assumptions,,Sampling Error,Inaccuracy |
1476 | @section Estimating @code{children} Times | |
1477 | |
1478 | Some of the figures in the call graph are estimates---for example, the | |
1479 | @code{children} time values and all the time figures in caller and |
1480 | subroutine lines. | |
1481 | ||
1482 | There is no direct information about these measurements in the profile | |
1483 | data itself. Instead, @code{gprof} estimates them by making an assumption | |
1484 | about your program that might or might not be true. | |
1485 | ||
1486 | The assumption made is that the average time spent in each call to any | |
1487 | function @code{foo} is not correlated with who called @code{foo}. If | |
1488 | @code{foo} used 5 seconds in all, and 2/5 of the calls to @code{foo} came | |
1489 | from @code{a}, then @code{foo} contributes 2 seconds to @code{a}'s | |
1490 | @code{children} time, by assumption. | |
1491 | ||
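The estimate is a simple proportion. The helper below is only a
restatement of the assumption just described; the function and its
parameter names are ours and do not appear in @code{gprof}'s sources.

@smallexample
/* Time charged to one caller's `children' field for its calls to a
   function, assuming every call to that function costs the same on
   average regardless of caller.  */
double
estimated_child_time (double callee_total_time,
                      unsigned long calls_from_this_caller,
                      unsigned long total_calls_to_callee)
@{
  return callee_total_time * (double) calls_from_this_caller
         / (double) total_calls_to_callee;
@}

/* estimated_child_time (5.0, 2, 5) == 2.0, as in the example above.  */
@end smallexample
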
1492 | This assumption is usually true enough, but for some programs it is far | |
1493 | from true. Suppose that @code{foo} returns very quickly when its argument | |
1494 | is zero; suppose that @code{a} always passes zero as an argument, while | |
1495 | other callers of @code{foo} pass other arguments. In this program, all the | |
1496 | time spent in @code{foo} is in the calls from callers other than @code{a}. | |
1497 | But @code{gprof} has no way of knowing this; it will blindly and | |
1498 | incorrectly charge 2 seconds of time in @code{foo} to the children of | |
1499 | @code{a}. | |
1500 | ||
1501 | @c FIXME - has this been fixed? | |
1502 | We hope some day to put more complete data into @file{gmon.out}, so that | |
1503 | this assumption is no longer needed, if we can figure out how. For the | |
1504 | nonce, the estimated figures are usually more useful than misleading. | |
1505 | ||
1506 | @node How do I? |
1507 | @chapter Answers to Common Questions | |
1508 | ||
1509 | @table @asis | |
1510 | @item How do I find which lines in my program were executed the most times? | |
1511 | ||
1512 | Compile your program with basic-block counting enabled, run it, then | |
1513 | use the following pipeline: | |
1514 | ||
1515 | @example | |
1516 | gprof -l -C @var{objfile} | sort -k 3 -n -r | |
1517 | @end example | |
1518 | ||
1519 | This listing will show you the lines in your code executed most often, | |
1520 | but not necessarily those that consumed the most time. | |
1521 | ||
1522 | @item How do I find which lines in my program called a particular function? | |
1523 | ||
1524 | Use @code{gprof -l} and look up the function in the call graph. |
1525 | The callers will be broken down by function and line number. | |
1526 | ||
1527 | @item How do I analyze a program that runs for less than a second? | |
1528 | ||
1529 | Try using a shell script like this one: | |
1530 | ||
1531 | @example | |
1532 | for i in `seq 1 100`; do | |
1533 | fastprog | |
1534 | mv gmon.out gmon.out.$i | |
1535 | done | |
1536 | ||
1537 | gprof -s fastprog gmon.out.* | |
1538 | ||
1539 | gprof fastprog gmon.sum | |
1540 | @end example | |
1541 | ||
1542 | If your program is completely deterministic, all the call counts | |
1543 | will be simple multiples of 100 (i.e. a function called once in | |
1544 | each run will appear with a call count of 100). | |
1545 | ||
1546 | @end table | |
1547 | ||
1548 | @node Incompatibilities | |
1549 | @chapter Incompatibilities with Unix @code{gprof} |
1550 | ||
1551 | @sc{gnu} @code{gprof} and Berkeley Unix @code{gprof} use the same data | |
1552 | file @file{gmon.out}, and provide essentially the same information. But | |
1553 | there are a few differences. | |
1554 | ||
1555 | @itemize @bullet | |
1556 | @item |
1557 | @sc{gnu} @code{gprof} uses a new, generalized file format with support | |
1558 | for basic-block execution counts and non-realtime histograms. A magic | |
1559 | cookie and version number allow @code{gprof} to easily identify |
1560 | new-style files. Old BSD-style files can still be read. |
1561 | @xref{File Format}. | |
1562 | ||
1563 | @item |
1564 | For a recursive function, Unix @code{gprof} lists the function as a | |
1565 | parent and as a child, with a @code{calls} field that lists the number | |
1566 | of recursive calls. @sc{gnu} @code{gprof} omits these lines and puts | |
1567 | the number of recursive calls in the primary line. | |
1568 | ||
1569 | @item | |
1570 | When a function is suppressed from the call graph with @samp{-e}, @sc{gnu} | |
1571 | @code{gprof} still lists it as a subroutine of functions that call it. | |
1572 | ||
1573 | @item |
1574 | @sc{gnu} @code{gprof} accepts the @samp{-k} option with its argument |
1575 | in the form @samp{from/to}, instead of @samp{from to}. | |
1576 | ||
1577 | @item | |
1578 | In the annotated source listing, | |
1579 | if there are multiple basic blocks on the same line, | |
1580 | @sc{gnu} @code{gprof} prints all of their counts, separated by commas. |
1581 | ||
1582 | @ignore - it does this now |
1583 | @item | |
1584 | The function names printed in @sc{gnu} @code{gprof} output do not include | |
1585 | the leading underscores that are added internally to the front of all | |
1586 | C identifiers on many operating systems. | |
1587 | @end ignore | |
1588 | ||
1589 | @item | |
1590 | The blurbs, field widths, and output formats are different. @sc{gnu} | |
1591 | @code{gprof} prints blurbs after the tables, so that you can see the | |
1592 | tables without skipping the blurbs. | |
c142a1f5 | 1593 | @end itemize |
be4e1cd5 | 1594 | |
1595 | @node Details |
1596 | @chapter Details of Profiling | |
be4e1cd5 | 1597 | |
1598 | @menu |
1599 | * Implementation:: How a program collects profiling information |
1600 | * File Format:: Format of @samp{gmon.out} files | |
1601 | * Internals:: @code{gprof}'s internal operation | |
1602 | * Debugging:: Using @code{gprof}'s @samp{-d} option | |
1603 | @end menu | |
1604 | ||
1605 | @node Implementation,File Format,,Details | |
1606 | @section Implementation of Profiling | |
1607 | ||
1608 | Profiling works by changing how every function in your program is compiled | |
1609 | so that when it is called, it will stash away some information about where | |
1610 | it was called from. From this, the profiler can figure out what function | |
1611 | called it, and can count how many times it was called. This change is made | |
1612 | by the compiler when your program is compiled with the @samp{-pg} option, | |
1613 | which causes every function to call @code{mcount} | |
1614 | (or @code{_mcount}, or @code{__mcount}, depending on the OS and compiler) | |
1615 | as one of its first operations. | |
1616 | ||
1617 | The @code{mcount} routine, included in the profiling library, | |
1618 | is responsible for recording in an in-memory call graph table | |
1619 | both its parent routine (the child) and its parent's parent. This is | |
1620 | typically done by examining the stack frame to find both | |
1621 | the address of the child, and the return address in the original parent. | |
1622 | Since this is a very machine-dependent operation, @code{mcount} |
1623 | itself is typically a short assembly-language stub routine | |
1624 | that extracts the required | |
1625 | information, and then calls @code{__mcount_internal} | |
1626 | (a normal C function) with two arguments - @code{frompc} and @code{selfpc}. | |
1627 | @code{__mcount_internal} is responsible for maintaining | |
1628 | the in-memory call graph, which records @code{frompc}, @code{selfpc}, | |
1629 | and the number of times each of these call arcs was traversed. |
1630 | ||
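The following sketch shows the kind of bookkeeping this involves. It is
not the actual library source: the real routine uses a more elaborate
hashing scheme rather than the fixed-size array and linear search used
here for brevity.

@smallexample
/* Simplified sketch of call-graph arc recording.  */
struct arc
@{
  unsigned long frompc;   /* return address in the caller  */
  unsigned long selfpc;   /* entry address of the callee   */
  unsigned long count;    /* times this arc was traversed  */
@};

#define MAX_ARCS 4096
static struct arc arcs[MAX_ARCS];
static int n_arcs;

void
__mcount_internal (unsigned long frompc, unsigned long selfpc)
@{
  int i;

  for (i = 0; i < n_arcs; i++)
    if (arcs[i].frompc == frompc && arcs[i].selfpc == selfpc)
      @{
        arcs[i].count++;
        return;
      @}
  if (n_arcs < MAX_ARCS)
    @{
      arcs[n_arcs].frompc = frompc;
      arcs[n_arcs].selfpc = selfpc;
      arcs[n_arcs].count = 1;
      n_arcs++;
    @}
@}
@end smallexample
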
1631 | GCC Version 2 provides a magical function (@code{__builtin_return_address}), | |
1632 | which allows a generic @code{mcount} function to extract the | |
1633 | required information from the stack frame. However, on some | |
1634 | architectures, most notably the SPARC, using this builtin can be | |
1635 | very computationally expensive, and an assembly language version | |
1636 | of @code{mcount} is used for performance reasons. | |
1637 | ||
1638 | Number-of-calls information for library routines is collected by using a | |
1639 | special version of the C library. The programs in it are the same as in | |
1640 | the usual C library, but they were compiled with @samp{-pg}. If you | |
1641 | link your program with @samp{gcc @dots{} -pg}, it automatically uses the | |
1642 | profiling version of the library. | |
1643 | ||
1644 | Profiling also involves watching your program as it runs, and keeping a | |
1645 | histogram of where the program counter happens to be every now and then. | |
1646 | Typically the program counter is looked at around 100 times per second of | |
1647 | run time, but the exact frequency may vary from system to system. | |
be4e1cd5 | 1648 | |
1649 | This is done in one of two ways. Most UNIX-like operating systems |
1650 | provide a @code{profil()} system call, which registers a memory | |
1651 | array with the kernel, along with a scale | |
1652 | factor that determines how the program's address space maps | |
1653 | into the array. | |
1654 | Typical scaling values cause every 2 to 8 bytes of address space | |
1655 | to map into a single array slot. | |
1656 | On every tick of the system clock | |
1657 | (assuming the profiled program is running), the value of the | |
1658 | program counter is examined and the corresponding slot in | |
1659 | the memory array is incremented. Since this is done in the kernel, | |
1660 | which had to interrupt the process anyway to handle the clock | |
1661 | interrupt, very little additional system overhead is required. | |
1662 | ||
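Here is a minimal sketch of how such a histogram might be registered,
assuming a system that provides @code{profil()}. The text-segment
bounds and the scale value are purely illustrative; real startup code
computes them from the program's actual text size and the buffer it
allocated.

@smallexample
#include <unistd.h>

#define TEXT_LOW   0x1000UL    /* illustrative text-segment range */
#define TEXT_HIGH  0x9000UL

/* Histogram buffer covering the text segment (size illustrative).  */
static unsigned short bins[(TEXT_HIGH - TEXT_LOW) / 4];

static void
start_kernel_sampling (void)
@{
  /* The scale is a 16-bit fixed-point fraction relating pc offsets
     to buffer offsets; 0x4000 is only an example value.  */
  profil (bins, sizeof bins, TEXT_LOW, 0x4000);
@}
@end smallexample
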
1663 | However, some operating systems, most notably Linux 2.0 (and earlier), | |
1664 | do not provide a @code{profil()} system call. On such a system, | |
1665 | arrangements are made for the kernel to periodically deliver | |
1666 | a signal to the process (typically via @code{setitimer()}), | |
1667 | which then performs the same operation of examining the | |
1668 | program counter and incrementing a slot in the memory array. | |
1669 | Since this method requires a signal to be delivered to | |
1670 | user space every time a sample is taken, it incurs considerably |
1671 | more overhead than kernel-based profiling. Also, due to the | |
1672 | added delay required to deliver the signal, this method is | |
1673 | less accurate as well. | |
1674 | ||
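A sketch of the signal-driven variant follows. Extracting the
interrupted program counter from the signal context is machine-dependent,
so it is hidden behind a hypothetical @code{get_sampled_pc()} helper;
everything else uses only standard @code{sigaction()} and
@code{setitimer()} calls.

@smallexample
#include <signal.h>
#include <string.h>
#include <sys/time.h>

#define NBINS 65536
static unsigned short hist[NBINS];
static unsigned long text_low, text_high;  /* set by startup code */

/* Hypothetical, machine-dependent helper: dig the interrupted pc
   out of the ucontext passed to the handler.  */
extern unsigned long get_sampled_pc (void *ucontext);

static void
prof_handler (int sig, siginfo_t *info, void *ucontext)
@{
  unsigned long pc = get_sampled_pc (ucontext);

  (void) sig; (void) info;
  if (pc >= text_low && pc < text_high)
    hist[(pc - text_low) * NBINS / (text_high - text_low)]++;
@}

static void
start_signal_sampling (void)
@{
  struct sigaction sa;
  struct itimerval it;

  memset (&sa, 0, sizeof sa);
  sa.sa_sigaction = prof_handler;
  sa.sa_flags = SA_SIGINFO | SA_RESTART;
  sigemptyset (&sa.sa_mask);
  sigaction (SIGPROF, &sa, NULL);

  it.it_interval.tv_sec = 0;
  it.it_interval.tv_usec = 10000;      /* 100 samples per second */
  it.it_value = it.it_interval;
  setitimer (ITIMER_PROF, &it, NULL);
@}
@end smallexample
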
1675 | A special startup routine allocates memory for the histogram and | |
1676 | either calls @code{profil()} or sets up | |
1677 | a clock signal handler. | |
1678 | This routine (@code{monstartup}) can be invoked in several ways. | |
1679 | On Linux systems, a special profiling startup file @code{gcrt0.o}, | |
1680 | which invokes @code{monstartup} before @code{main}, | |
1681 | is used instead of the default @code{crt0.o}. | |
1682 | Use of this special startup file is one of the effects | |
1683 | of using @samp{gcc @dots{} -pg} to link. | |
1684 | On SPARC systems, no special startup files are used. | |
1685 | Rather, the @code{mcount} routine, when it is invoked for | |
1686 | the first time (typically when @code{main} is called), | |
1687 | calls @code{monstartup}. | |
1688 | ||
1689 | If the compiler's @samp{-a} option was used, basic-block counting | |
1690 | is also enabled. Each object file is then compiled with a static array | |
1691 | of counts, initially zero. | |
1692 | In the executable code, every time a new basic-block begins | |
1693 | (i.e. when an @code{if} statement appears), an extra instruction | |
1694 | is inserted to increment the corresponding count in the array. | |
1695 | At compile time, a paired array is constructed that records |
1696 | the starting address of each basic-block. Taken together, | |
1697 | the two arrays record the starting address of every basic-block, | |
1698 | along with the number of times it was executed. | |
1699 | ||
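Conceptually, the instrumentation amounts to something like the
hand-written example below. The array names are ours for illustration;
the compiler-generated tables differ in detail, and the address table is
filled in by the compiler rather than by user code.

@smallexample
/* One counter per basic-block in this object file, plus a parallel
   table of block start addresses (filled in by the compiler).  */
static long  bb_counts[3];
static void *bb_addrs[3];

int
example (int x)
@{
  bb_counts[0]++;            /* block 0: function entry      */
  if (x > 0)
    @{
      bb_counts[1]++;        /* block 1: the `then' branch   */
      x = -x;
    @}
  bb_counts[2]++;            /* block 2: code after the `if' */
  return x;
@}
@end smallexample
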
1700 | The profiling library also includes a function (@code{mcleanup}) which is | |
1701 | typically registered using @code{atexit()} to be called as the | |
1702 | program exits, and is responsible for writing the file @file{gmon.out}. | |
1703 | Profiling is turned off, various headers are output, and the histogram | |
1704 | is written, followed by the call-graph arcs and the basic-block counts. | |
be4e1cd5 | 1705 | |
1706 | The output from @code{gprof} gives no indication of parts of your program that |
1707 | are limited by I/O or swapping bandwidth. This is because samples of the | |
1708 | program counter are taken at fixed intervals of the program's run time. | |
1709 | Therefore, the | |
1710 | time measurements in @code{gprof} output say nothing about time that your | |
1711 | program was not running. For example, a part of the program that creates | |
1712 | so much data that it cannot all fit in physical memory at once may run very | |
1713 | slowly due to thrashing, but @code{gprof} will say it uses little time. On | |
1714 | the other hand, sampling by run time has the advantage that the amount of | |
1715 | load due to other users won't directly affect the output you get. | |
1716 | ||
1717 | @node File Format,Internals,Implementation,Details | |
1718 | @section Profiling Data File Format | |
1719 | ||
1720 | The old BSD-derived file format used for profile data does not contain a | |
1721 | magic cookie that allows one to check whether a data file really is a |
1722 | gprof file. Furthermore, it does not provide a version number, thus | |
1723 | rendering changes to the file format almost impossible. @sc{gnu} @code{gprof} | |
1724 | uses a new file format that provides these features. For backward | |
1725 | compatibility, @sc{gnu} @code{gprof} continues to support the old BSD-derived | |
1726 | format, but not all features are supported with it. For example, | |
1727 | basic-block execution counts cannot be accommodated by the old file | |
1728 | format. | |
1729 | ||
1730 | The new file format is defined in header file @file{gmon_out.h}. It | |
1731 | consists of a header containing the magic cookie and a version number, | |
1732 | as well as some spare bytes available for future extensions. All data | |
1733 | in a profile data file is in the native format of the host on which | |
1734 | the profile was collected. @sc{gnu} @code{gprof} adapts automatically to the | |
1735 | byte-order in use. | |
1736 | ||
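The header is small; a sketch along the lines of the declaration in
@file{gmon_out.h} is shown below (see that header for the authoritative
layout).

@smallexample
struct gmon_hdr
@{
  char cookie[4];       /* magic cookie, the string "gmon"  */
  char version[4];      /* file format version number       */
  char spare[3 * 4];    /* reserved for future extensions   */
@};
@end smallexample
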
1737 | In the new file format, the header is followed by a sequence of | |
1738 | records. Currently, there are three different record types: histogram | |
1739 | records, call-graph arc records, and basic-block execution count | |
1740 | records. Each file can contain any number of each record type. When | |
1741 | reading a file, @sc{gnu} @code{gprof} will ensure records of the same type are | |
1742 | compatible with each other and compute the union of all records. For | |
1743 | example, for basic-block execution counts, the union is simply the sum | |
1744 | of all execution counts for each basic-block. | |
1745 | ||
1746 | @subsection Histogram Records | |
1747 | ||
1748 | Histogram records consist of a header that is followed by an array of | |
1749 | bins. The header contains the text-segment range that the histogram | |
1750 | spans, the size of the histogram in bytes (unlike in the old BSD | |
1751 | format, this does not include the size of the header), the rate of the | |
1752 | profiling clock, and the physical dimension that the bin counts | |
1753 | represent after being scaled by the profiling clock rate. The | |
1754 | physical dimension is specified in two parts: a long name of up to 15 | |
1755 | characters and a single character abbreviation. For example, a | |
1756 | histogram representing real-time would specify the long name as | |
1757 | "seconds" and the abbreviation as "s". This feature is useful for | |
1758 | architectures that support performance monitor hardware (which, | |
1759 | fortunately, is becoming increasingly common). For example, under DEC | |
1760 | OSF/1, the "uprofile" command can be used to produce a histogram of, | |
1761 | say, instruction cache misses. In this case, the dimension in the | |
1762 | histogram header could be set to "i-cache misses" and the abbreviation | |
1763 | could be set to "1" (because it is simply a count, not a physical | |
1764 | dimension). Also, the profiling rate would have to be set to 1 in | |
1765 | this case. | |
1766 | ||
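The fields just described correspond to a header along the following
lines, adapted from @file{gmon_out.h} (see that header for the exact
declaration).

@smallexample
struct gmon_hist_hdr
@{
  char low_pc[sizeof (char *)];   /* base pc of the sampled range       */
  char high_pc[sizeof (char *)];  /* highest pc of the sampled range    */
  char hist_size[4];              /* histogram size, excluding header   */
  char prof_rate[4];              /* profiling clock rate               */
  char dimen[15];                 /* physical dimension, e.g. "seconds" */
  char dimen_abbrev;              /* abbreviation, e.g. 's'             */
@};
@end smallexample
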
1767 | Histogram bins are 16-bit numbers and each bin represents an equal |
1768 | amount of text-space. For example, if the text-segment is one | |
1769 | thousand bytes long and if there are ten bins in the histogram, each | |
1770 | bin represents one hundred bytes. | |
1771 | ||
1772 | ||
1773 | @subsection Call-Graph Records | |
1774 | ||
1775 | Call-graph records have a format that is identical to the one used in | |
1776 | the BSD-derived file format. It consists of an arc in the call graph | |
1777 | and a count indicating the number of times the arc was traversed | |
1778 | during program execution. Arcs are specified by a pair of addresses: | |
1779 | the first must be within the caller's function and the second must be |
1780 | within the callee's function. When performing profiling at the | |
1781 | function level, these addresses can point anywhere within the | |
1782 | respective function. However, when profiling at the line-level, it is | |
1783 | better if the addresses are as close to the call-site/entry-point as | |
1784 | possible. This will ensure that the line-level call-graph is able to | |
1785 | identify exactly which line of source code performed calls to a | |
1786 | function. | |
1787 | ||
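Each arc record therefore boils down to a pair of addresses and a
count, along the lines of the following (again adapted from
@file{gmon_out.h}).

@smallexample
struct gmon_cg_arc_record
@{
  char from_pc[sizeof (char *)];  /* address within the caller's body */
  char self_pc[sizeof (char *)];  /* address within the callee's body */
  char count[4];                  /* number of times the arc was traversed */
@};
@end smallexample
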
1788 | @subsection Basic-Block Execution Count Records | |
1789 | ||
1790 | Basic-block execution count records consist of a header followed by a | |
1791 | sequence of address/count pairs. The header simply specifies the | |
1792 | length of the sequence. In an address/count pair, the address | |
1793 | identifies a basic-block and the count specifies the number of times | |
1794 | that basic-block was executed. Any address within the basic-block can |
1795 | be used. | |
1796 | ||
1797 | @node Internals,Debugging,File Format,Details | |
1798 | @section @code{gprof}'s Internal Operation | |
1799 | ||
1800 | Like most programs, @code{gprof} begins by processing its options. | |
1801 | During this stage, it may build its symspec list |
1802 | (@code{sym_ids.c:sym_id_add}), if | |
1803 | options are specified which use symspecs. | |
1804 | @code{gprof} maintains a single linked list of symspecs, | |
1805 | which will eventually get turned into 12 symbol tables, | |
1806 | organized into six include/exclude pairs - one | |
1807 | pair each for the flat profile (INCL_FLAT/EXCL_FLAT), | |
1808 | the call graph arcs (INCL_ARCS/EXCL_ARCS), | |
1809 | printing in the call graph (INCL_GRAPH/EXCL_GRAPH), | |
1810 | timing propagation in the call graph (INCL_TIME/EXCL_TIME), | |
1811 | the annotated source listing (INCL_ANNO/EXCL_ANNO), | |
1812 | and the execution count listing (INCL_EXEC/EXCL_EXEC). | |
1813 | ||
1814 | After option processing, @code{gprof} finishes | |
1815 | building the symspec list by adding all the symspecs in | |
1816 | @code{default_excluded_list} to the exclude lists | |
1817 | EXCL_TIME and EXCL_GRAPH, and if line-by-line profiling is specified, | |
1818 | EXCL_FLAT as well. | |
1819 | These default excludes are not added to EXCL_ANNO, EXCL_ARCS, and EXCL_EXEC. | |
1820 | ||
1821 | Next, the BFD library is called to open the object file, | |
1822 | verify that it is an object file, | |
1823 | and read its symbol table (@code{core.c:core_init}), | |
1824 | using @code{bfd_canonicalize_symtab} after mallocing | |
1825 | an appropriately sized array of asymbols. At this point, |
1826 | function mappings are read (if the @samp{--file-ordering} option | |
1827 | has been specified), and the core text space is read into | |
1828 | memory (if the @samp{-c} option was given). | |
1829 | ||
1830 | @code{gprof}'s own symbol table, an array of Sym structures, | |
1831 | is now built. | |
1832 | This is done in one of two ways, by one of two routines, depending | |
1833 | on whether line-by-line profiling (@samp{-l} option) has been | |
1834 | enabled. | |
1835 | For normal profiling, the BFD canonical symbol table is scanned. | |
1836 | For line-by-line profiling, every | |
1837 | text space address is examined, and a new symbol table entry | |
1838 | gets created every time the line number changes. | |
1839 | In either case, two passes are made through the symbol | |
1840 | table - one to count the size of the symbol table required, | |
1841 | and the other to actually read the symbols. In between the | |
1842 | two passes, a single array of type @code{Sym} is created of | |
1843 | the appropriate length. |
1844 | Finally, @code{symtab.c:symtab_finalize} | |
1845 | is called to sort the symbol table and remove duplicate entries | |
1846 | (entries with the same memory address). | |
1847 | ||
1848 | The symbol table must be a contiguous array for two reasons. | |
1849 | First, the @code{qsort} library function (which sorts an array) | |
1850 | will be used to sort the symbol table. | |
1851 | Also, the symbol lookup routine (@code{symtab.c:sym_lookup}), | |
1852 | which finds symbols | |
1853 | based on memory address, uses a binary search algorithm | |
1854 | which requires the symbol table to be a sorted array. | |
1855 | Function symbols are indicated with an @code{is_func} flag. | |
1856 | Line number symbols have no special flags set. | |
1857 | Additionally, a symbol can have an @code{is_static} flag | |
1858 | to indicate that it is a local symbol. | |
1859 | ||
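An address-to-symbol lookup over such a sorted array looks roughly like
the sketch below. The stripped-down @code{Sym} type and the function
name are illustrative only; the real @code{sym_lookup} carries
considerably more state.

@smallexample
typedef struct
@{
  unsigned long addr;             /* start address of the symbol */
  /* ... many other fields ... */
@} Sym;

/* Return the symbol whose address range contains PC, or NULL.
   SYMS must be sorted by ascending address.  */
Sym *
lookup_by_address (Sym *syms, int nsyms, unsigned long pc)
@{
  int lo = 0, hi = nsyms - 1;

  while (lo <= hi)
    @{
      int mid = (lo + hi) / 2;

      if (pc < syms[mid].addr)
        hi = mid - 1;
      else if (mid + 1 < nsyms && pc >= syms[mid + 1].addr)
        lo = mid + 1;
      else
        return &syms[mid];
    @}
  return NULL;                    /* pc precedes the first symbol */
@}
@end smallexample
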
1860 | With the symbol table read, the symspecs can now be translated | |
1861 | into Syms (@code{sym_ids.c:sym_id_parse}). Remember that a single | |
1862 | symspec can match multiple symbols. | |
1863 | An array of symbol tables | |
1864 | (@code{syms}) is created, each entry of which is a symbol table | |
1865 | of Syms to be included or excluded from a particular listing. | |
1866 | The master symbol table and the symspecs are examined by nested | |
1867 | loops, and every symbol that matches a symspec is inserted | |
1868 | into the appropriate syms table. This is done twice, once to | |
1869 | count the size of each required symbol table, and again to build | |
1870 | the tables, which have been malloced between passes. | |
1871 | From now on, to determine whether a symbol is on an include | |
1872 | or exclude symspec list, @code{gprof} simply uses its | |
1873 | standard symbol lookup routine on the appropriate table | |
1874 | in the @code{syms} array. | |
1875 | ||
1876 | Now the profile data file(s) themselves are read | |
1877 | (@code{gmon_io.c:gmon_out_read}), | |
1878 | first by checking for a new-style @samp{gmon.out} header, | |
1879 | then assuming this is an old-style BSD @samp{gmon.out} | |
1880 | if the magic number test failed. | |
1881 | ||
1882 | New-style histogram records are read by @code{hist.c:hist_read_rec}. | |
1883 | For the first histogram record, allocate a memory array to hold | |
1884 | all the bins, and read them in. | |
1885 | When multiple profile data files (or files with multiple histogram | |
1886 | records) are read, the starting address, ending address, number | |
1887 | of bins and sampling rate must match between the various histograms, | |
1888 | or a fatal error will result. | |
1889 | If everything matches, just sum the additional histograms into | |
1890 | the existing in-memory array. | |
1891 | ||
1892 | As each call graph record is read (@code{call_graph.c:cg_read_rec}), | |
1893 | the parent and child addresses | |
1894 | are matched to symbol table entries, and a call graph arc is | |
1895 | created by @code{cg_arcs.c:arc_add}, unless the arc fails a symspec | |
1896 | check against INCL_ARCS/EXCL_ARCS. As each arc is added, | |
1897 | a linked list is maintained of the parent's child arcs, and of the child's | |
1898 | parent arcs. | |
1899 | Both the child's call count and the arc's call count are | |
1900 | incremented by the record's call count. | |
1901 | ||
1902 | Basic-block records are read (@code{basic_blocks.c:bb_read_rec}), | |
1903 | but only if line-by-line profiling has been selected. | |
1904 | Each basic-block address is matched to a corresponding line | |
1905 | symbol in the symbol table, and an entry made in the symbol's | |
1906 | bb_addr and bb_calls arrays. Again, if multiple basic-block | |
1907 | records are present for the same address, the call counts | |
1908 | are cumulative. | |
1909 | ||
1910 | A gmon.sum file is dumped, if requested (@code{gmon_io.c:gmon_out_write}). | |
1911 | ||
1912 | If histograms were present in the data files, assign them to symbols | |
1913 | (@code{hist.c:hist_assign_samples}) by iterating over all the sample | |
1914 | bins and assigning them to symbols. Since the symbol table | |
1915 | is sorted in order of ascending memory addresses, we can | |
1916 | simply follow along in the symbol table as we make our pass |
1917 | over the sample bins. | |
1918 | This step includes a symspec check against INCL_FLAT/EXCL_FLAT. | |
1919 | Depending on the histogram | |
1920 | scale factor, a sample bin may span multiple symbols, | |
1921 | in which case a fraction of the sample count is allocated | |
1922 | to each symbol, proportional to the degree of overlap. | |
1923 | This effect is rare for normal profiling, but overlaps | |
1924 | are more common during line-by-line profiling, and can | |
1925 | cause each of two adjacent lines to be credited with half | |
1926 | a hit, for example. | |
1927 | ||
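The proportional split can be written as a small helper. The function
below is purely illustrative (the names are not @code{gprof}'s); it
computes the fraction of one bin's count that is credited to a symbol
whose address range only partly overlaps the bin.

@smallexample
/* Fraction of the bin [bin_low, bin_high) that overlaps the symbol's
   range [sym_low, sym_high).  The bin's sample count is multiplied
   by this fraction when it is credited to the symbol.  */
double
overlap_fraction (unsigned long bin_low, unsigned long bin_high,
                  unsigned long sym_low, unsigned long sym_high)
@{
  unsigned long lo = bin_low  > sym_low  ? bin_low  : sym_low;
  unsigned long hi = bin_high < sym_high ? bin_high : sym_high;

  if (hi <= lo)
    return 0.0;                   /* no overlap at all */
  return (double) (hi - lo) / (double) (bin_high - bin_low);
@}
@end smallexample
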
1928 | If call graph data is present, @code{cg_arcs.c:cg_assemble} is called. | |
1929 | First, if @samp{-c} was specified, a machine-dependent |
1930 | routine (@code{find_call}) scans through each symbol's machine code, | |
1931 | looking for subroutine call instructions, and adding them | |
1932 | to the call graph with a zero call count. | |
1933 | A topological sort is performed by depth-first numbering | |
1934 | all the symbols (@code{cg_dfn.c:cg_dfn}), so that | |
1935 | children are always numbered less than their parents, | |
1936 | then making an array of pointers into the symbol table and sorting it into |
1937 | numerical order, which is reverse topological | |
1938 | order (children appear before parents). | |
1939 | Cycles are also detected at this point, all members | |
1940 | of which are assigned the same topological number. | |
1941 | Two passes are now made through this sorted array of symbol pointers. | |
1942 | The first pass, from end to beginning (parents to children), | |
1943 | computes the fraction of child time to propagate to each parent |
1944 | and a print flag. | |
1945 | The print flag reflects symspec handling of INCL_GRAPH/EXCL_GRAPH, | |
1946 | with a parent's include or exclude (print or no print) property | |
1947 | being propagated to its children, unless they themselves explicitly appear | |
1948 | in INCL_GRAPH or EXCL_GRAPH. | |
1949 | A second pass, from beginning to end (children to parents) actually | |
1950 | propagates the timings along the call graph, subject |
1951 | to a check against INCL_TIME/EXCL_TIME. | |
1952 | With the print flag, fractions, and timings now stored in the symbol | |
1953 | structures, the topological sort array is now discarded, and a | |
1954 | new array of pointers is assembled, this time sorted by propagated time. | |
1955 | ||
1956 | Finally, print the various outputs the user requested, which is now fairly | |
1957 | straightforward. The call graph (@code{cg_print.c:cg_print}) and | |
1958 | flat profile (@code{hist.c:hist_print}) are regurgitations of values | |
1959 | already computed. The annotated source listing | |
1960 | (@code{basic_blocks.c:print_annotated_source}) uses basic-block | |
1961 | information, if present, to label each line of code with call counts, | |
1962 | otherwise only the function call counts are presented. | |
1963 | ||
1964 | The function ordering code is marginally well documented | |
1965 | in the source code itself (@code{cg_print.c}). Basically, | |
1966 | the functions with the most use and the most parents are | |
1967 | placed first, followed by other functions with the most use, | |
1968 | followed by lower use functions, followed by unused functions | |
1969 | at the end. | |
1970 | ||
1971 | @node Debugging,,Internals,Details | |
1972 | @section Debugging @code{gprof} |
1973 | ||
1974 | If @code{gprof} was compiled with debugging enabled, | |
1975 | the @samp{-d} option triggers debugging output | |
1976 | (to stdout) which can be helpful in understanding its operation. | |
1977 | The debugging number specified is interpreted as a sum of the following | |
1978 | options: | |
1979 | ||
1980 | @table @asis | |
1981 | @item 2 - Topological sort | |
1982 | Monitor depth-first numbering of symbols during call graph analysis | |
1983 | @item 4 - Cycles | |
1984 | Shows symbols as they are identified as cycle heads | |
1985 | @item 16 - Tallying | |
1986 | As the call graph arcs are read, show each arc and how | |
1987 | the total calls to each function are tallied | |
1988 | @item 32 - Call graph arc sorting | |
1989 | Details sorting individual parents/children within each call graph entry | |
1990 | @item 64 - Reading histogram and call graph records | |
1991 | Shows address ranges of histograms as they are read, and each | |
1992 | call graph arc | |
1993 | @item 128 - Symbol table | |
1994 | Reading, classifying, and sorting the symbol table from the object file. | |
1995 | For line-by-line profiling (@samp{-l} option), also shows line numbers | |
1996 | being assigned to memory addresses. | |
1997 | @item 256 - Static call graph | |
1998 | Trace operation of @samp{-c} option | |
1999 | @item 512 - Symbol table and arc table lookups | |
2000 | Detail operation of lookup routines | |
2001 | @item 1024 - Call graph propagation | |
2002 | Shows how function times are propagated along the call graph | |
2003 | @item 2048 - Basic-blocks | |
2004 | Shows basic-block records as they are read from profile data | |
2005 | (only meaningful with @samp{-l} option) | |
2006 | @item 4096 - Symspecs | |
2007 | Shows symspec-to-symbol pattern matching operation | |
2008 | @item 8192 - Annotate source | |
2009 | Tracks operation of @samp{-A} option | |
2010 | @end table | |
be4e1cd5 | 2011 | |
2012 | @contents |
2013 | @bye | |
2014 | ||
2015 | NEEDS AN INDEX | |
2016 | |
2017 | -T - "traditional BSD style": How is it different? Should the | |
2018 | differences be documented? | |
2019 | ||
2020 | example flat file adds up to 100.01%... |
2021 | ||
2022 | note: time estimates now only go out to one decimal place (0.0), where | |
2023 | they used to extend two (78.67). |