- Profile
- HotProfile
- OldProfile
class Profile
#**************************************************************************
# class Profile documentation:
#**************************************************************************
# self.cur is always a tuple. Each such tuple corresponds to a stack
# frame that is currently active (self.cur[-2]). The following are the
# definitions of its members. We use this external "parallel stack" to
# avoid contaminating the program that we are profiling. (The old profiler
# used to write into the frame's local dictionary!) Derived classes
# can change the definition of some entries, as long as they leave
# [-2:] intact.
#
# [ 0] = Time that needs to be charged to the parent frame's function. It is
# used so that a function call will not have to access the timing data
# for the parent's frame.
# [ 1] = Total time spent in this frame's function, excluding time in
# subfunctions
# [ 2] = Cumulative time spent in this frame's function, including time in
# all subfunctions called from this frame.
# [-3] = Name of the function that corresponds to this frame.
# [-2] = Actual frame that we correspond to (used to sync exception handling)
# [-1] = Our parent 6-tuple (corresponds to frame.f_back)
#**************************************************************************
# Timing data for each function is stored as a 5-tuple in the dictionary
# self.timings[]. The index is always the name stored in self.cur[-3].
# The following are the definitions of the members:
#
# [0] = The number of times this function was called, not counting direct
# or indirect recursion.
# [1] = Number of times this function appears on the stack, minus one
# [2] = Total time spent internal to this function
# [3] = Cumulative time that this function was present on the stack. In
# non-recursive functions, this is the total execution time from start
# to finish of each invocation of a function, including time spent in
# all subfunctions.
# [4] = A dictionary indicating, for each function name, the number of times
# it was called by us.
#**************************************************************************
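The difference between the primitive-call count [0] and the total-call count [1] is easiest to see on a recursive function. The sketch below uses today's stdlib profile module, whose snapshot stats keep the same (cc, nc, tt, ct, callers) layout as described above; the helper fact and the variable names are purely illustrative.

```python
import profile

def fact(n):
    """Toy recursive function, so primitive and total call counts differ."""
    return 1 if n <= 1 else n * fact(n - 1)

pr = profile.Profile()
pr.runcall(fact, 5)       # fact is entered 5 times, but only once non-recursively
pr.create_stats()

# pr.stats maps (filename, lineno, funcname) to the 5-tuple described
# above: (primitive calls, total calls, internal time, cumulative time,
# callers dictionary).
cc_fact = nc_fact = None
for (filename, lineno, funcname), (cc, nc, tt, ct, callers) in pr.stats.items():
    if funcname == 'fact':
        cc_fact, nc_fact = cc, nc   # expect 1 primitive call, 5 total calls
```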
- __init__(self, timer=None)
- no doc string
- calibrate(self, m)
- #******************************************************************
- # The following calculates the overhead for using a profiler. The
- # problem is that it takes a fair amount of time for the profiler
- # to stop the stopwatch (from the time it receives an event).
- # Similarly, there is a delay from the time that the profiler
- # re-starts the stopwatch before the user's code really gets to
- # continue. The following code tries to measure the difference on
- # a per-event basis. The result can then be placed in the
- # Profile.dispatch_event() routine for the given platform. Note
- # that this difference is only significant if there are a lot of
- # events, and relatively little user code per event. For example,
- # code with small functions will typically benefit from having the
- # profiler calibrated for the current platform. This *could* be
- # done on the fly during init() time, but it is not worth the
- # effort. Also note that if too large a value is specified, then
- # execution time on some functions will actually appear as a
- # negative number. It is *normal* for some functions (with very
- # low call counts) to have such negative stats, even if the
- # calibration figure is "correct."
- #
- # One alternative to profile-time calibration adjustments (i.e.,
- # adding in the magic little delta during each event) is to track
- # more carefully the number of events (and cumulatively, the number
- # of events during sub functions) that are seen. If this were
- # done, then the arithmetic could be done after the fact (i.e., at
- # display time). Currently, we track only call/return events.
- # These values can be deduced by examining the callees and callers
- # vectors for each function. Hence we *can* almost correct the
- # internal time figure at print time (note that we currently don't
- # track exception event processing counts). Unfortunately, there
- # is currently no similar information for cumulative sub-function
- # time. It would not be hard to "get all this info" at profiler
- # time. Specifically, we would have to extend the tuples to keep
- # counts of this in each frame, and then extend the defs of timing
- # tuples to include the significant two figures. I'm a bit fearful
- # that this additional feature will slow the heavily optimized
- # event/time ratio (i.e., the profiler would run slower, for a very
- # low "value added" feature.)
- #
- # Plugging in the calibration constant doesn't slow down the
- # profiler very much, and the accuracy goes way up.
- #******************************************************************
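The measurement described above survives in the current stdlib profile module as Profile.calibrate(); there the constant no longer has to be pasted into the dispatch routine by hand, but can be handed back via the bias constructor argument. A minimal sketch, assuming that modern API (the argument 1000 is an arbitrary choice of m; larger values give a more stable estimate):

```python
import profile

# Estimate the per-event overhead on this machine/timer combination.
bias = profile.Profile().calibrate(1000)

# Feed the constant back in so it is subtracted on every event
# (the old profiler required hard-coding it in the dispatch routine).
pr = profile.Profile(bias=bias)
```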
- create_stats(self)
- no doc string
- dump_stats(self, file)
- no doc string
- get_time(self)
- no doc string
- get_time_mac(self)
- no doc string
- instrumented(self)
- # simulate a program with call/return event processing
- print_stats(self)
- no doc string
- profiler_simulation(self, x, y, z)
- # simulate an event processing activity (from user's perspective)
- run(self, cmd)
- # The following two methods can be called by clients to use
- # a profiler to profile a statement, given as a string.
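A minimal usage sketch, assuming the current stdlib profile module (the statement string is arbitrary; via the set_cmd() machinery described further down, the command string itself becomes the name of the top-level pseudo-function in the stats):

```python
import profile

pr = profile.Profile()
pr.run('total = sum(range(1000))')   # cmd is executed in __main__'s namespace
pr.create_stats()

# The command string shows up as a pseudo-function name in the stats.
names = {funcname for (_, _, funcname) in pr.stats}
```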
- runcall(self, func, *args)
- # This method is more useful to profile a single function call.
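For example (the helper work is illustrative; runcall passes the arguments through and returns the function's own return value, so the profiled call composes with surrounding code):

```python
import profile

def work(n):
    """Illustrative workload to be profiled in isolation."""
    return sum(i * i for i in range(n))

pr = profile.Profile()
result = pr.runcall(work, 100)   # profiles just this one call
pr.print_stats()                 # conventional report of what was collected
```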
- runctx(self, cmd, globals, locals)
- no doc string
- set_cmd(self, cmd)
- # The next few functions play with self.cmd. By carefully preloading
- # our parallel stack, we can force the profiled result to include
- # an arbitrary string as the name of the calling function.
- # We use self.cmd as that string, and the resulting stats look
- # very nice :-).
- simple(self)
- # simulate a program with no profiler activity
- simulate_call(self, name)
- no doc string
- simulate_cmd_complete(self)
- # collect stats from pending stack, including getting final
- # timings for self.cmd frame.
- snapshot_stats(self)
- no doc string
- trace_dispatch(self, frame, event, arg)
- # Heavily optimized dispatch routine for os.times() timer
- trace_dispatch_call(self, frame, t)
- no doc string
- trace_dispatch_exception(self, frame, t)
- no doc string
- trace_dispatch_i(self, frame, event, arg)
- # Dispatch routine for best timer program (return = scalar integer)
- trace_dispatch_l(self, frame, event, arg)
- # SLOW generic dispatch routine for timer returning lists of numbers
- trace_dispatch_mac(self, frame, event, arg)
- # Dispatch routine for Macintosh (timer returns time in ticks of 1/60th second)
- trace_dispatch_return(self, frame, t)
- no doc string