Creating Profiling Reports
Profiling reports are generated at the end of each profiling session per process (katanaBin versus renderboot, for example), per Runtime instance. Renders that call the Katana procedural on multiple threads, for example, instantiate one Runtime per thread, each one producing a different report. Each report consists of two files:
1. A .dot file that contains the graph with the Op tree at the end of the profiling session.
2. A .csv file containing the recorded cooking times in a table format.
By default, the profile reports are written to the Katana session's temporary directory or, optionally, to the directory specified by the --profiling-dir command-line option. For example:
./katana --profile --profiling-dir=/tmp/katana_profiling
The naming of the reports follows the format:
In which P is the Process ID of the process that produced the report and R is a number that identifies the instance of the Runtime. In a multi-threaded render that instantiates several Runtimes, katanaBin and renderboot each have their own P and, in the latter case, there is one R per Runtime instance created at render time. The date and time at which the profiling session ended is appended to the file name.
The .dot files can be converted, for example, into a PDF document using the following command, which requires Graphviz:
dot [DOT FILE] -Tpdf > [OUTPUT PDF FILE]
Analyzing Profiling Reports
The .csv (comma-separated values) files contain the aggregated cook times and numbers of cooks, and can be read directly into a spreadsheet application or other reporting tool. Each entry (row) contains the following values (columns):
1. OpId - integer ID of the Op instance (or the invoking Op instance if IsExecOp is true).
2. OpType - Op type string (for example, "AttributeSet").
3. IsExecOp - specifies whether or not the entry refers to an Op invoked using execOp() (true or false).
4. Location - the path of the scene graph location that was cooked.
5. TotalTime(usecs) - the total time spent by the Op instance in successfully cooking Location (in microseconds).
6. AbortTime(usecs) - the total time spent by the Op instance in aborted cooks of Location (in microseconds).
7. TotalCount - the number of cooks of the Op at Location.
8. AbortCount - the number of aborted cooks of the Op at Location.
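As a rough illustration of working with these columns, the sketch below reads a report with Python's standard csv module and computes the average cook time per entry. The sample rows and their values are invented for the example, not taken from a real session:

```python
import csv
import io

# Invented sample rows in the report's column layout described above.
SAMPLE = """OpId,OpType,IsExecOp,Location,TotalTime(usecs),AbortTime(usecs),TotalCount,AbortCount
12,AttributeSet,false,/root/world/geo,900,0,3,0
12,StaticSceneCreate,true,/root/world/geo,300,0,1,0
"""

# Average successful cook time per cook, keyed by (OpId, OpType),
# in microseconds. A real script would open the .csv file instead
# of the in-memory sample.
averages = {}
for row in csv.DictReader(io.StringIO(SAMPLE)):
    count = int(row["TotalCount"])
    if count:
        averages[(row["OpId"], row["OpType"])] = (
            float(row["TotalTime(usecs)"]) / count
        )
```

The same division can, of course, be done directly in a spreadsheet; the script form becomes useful once reports grow beyond what a spreadsheet can load.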
An entry for which IsExecOp is true represents the times for an Op that was explicitly cooked by an invoking Op, using a call to execOp(). In this case, the OpId corresponds to the invoking Op instance, while the OpType corresponds to the type of the invoked Op.
Note: Time reported for entries for which IsExecOp is true is also included in the entry for the invoking Op. The total cook time during the session is therefore the sum of the cook time for all entries for which IsExecOp is false.
For example, if Op A (for which IsExecOp is false) calls execOp() on Op B, which in turn calls execOp() on Op C, then TotalTime for Op A is strictly greater than that for Op B, which is strictly greater than that for Op C. These Ops are all reported with the same OpId: that of Op A.
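Because each invoker's TotalTime includes the time of the Ops it invokes, the time spent in an Op's own cook body can be estimated by subtraction. A small sketch of that arithmetic for the A → B → C chain above, using made-up totals:

```python
# Made-up TotalTime values (microseconds) for the nested execOp()
# chain A -> B -> C; each invoker's total includes its callee's total.
total = {"A": 1000, "B": 600, "C": 250}

# Estimated "exclusive" time spent in each Op's own cook body,
# obtained by subtracting the invoked Op's total.
exclusive = {
    "A": total["A"] - total["B"],
    "B": total["B"] - total["C"],
    "C": total["C"],
}
```

Note that the exclusive times sum back to the outermost Op's TotalTime, which is consistent with counting only IsExecOp-false entries when totalling session cook time.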
Note: Abort time is a normal component of scene expansion in Geolib3, but is minimized through good Op-writing practices. For more information, see The Op API.
Currently, the Geolib3 Runtime has no knowledge of the mapping between Ops and their respective project nodes. However, the graph produced by the .dot file shows the Runtime's Op tree structure at the time the profiling session was ended, and can assist in matching Ops to nodes. Note that profiling the cooking of distinct or altered Op trees within a single session is likely to produce less helpful and, perhaps, invalid results, since aggregation may erroneously occur across re-purposed Op instances.
The Op tree is created directly from the node graph, plus all implicit resolvers, Interactive Render Filters, and terminal Ops on different UI elements. Note that some nodes produce multiple Ops. OpId values in the graph, of course, correspond to those in the respective .csv file.
When loading the .csv file in a spreadsheet, the values can easily be filtered, sorted, aggregated, and summed. Average times can be calculated by dividing the times by the number of cooks. It is quite possible to produce extremely large reports (on the order of millions of entries) that may not load completely into common spreadsheet software. For such cases, the .csv format is readily parsed by a script or program that aggregates values, for example, per location or per Op type.
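One possible shape for such an aggregation script is sketched below: it sums TotalTime and TotalCount per Op type, skipping IsExecOp entries so nested times are not double-counted. The sample rows are invented for the example:

```python
import csv
import io
from collections import defaultdict

# Invented sample rows in the report's column layout; a real script
# would read the .csv report file instead.
SAMPLE = """OpId,OpType,IsExecOp,Location,TotalTime(usecs),AbortTime(usecs),TotalCount,AbortCount
1,AttributeSet,false,/root/world/geo/a,500,0,2,0
2,AttributeSet,false,/root/world/geo/b,700,100,3,1
3,OpScript,false,/root/world/geo/a,1200,0,1,0
"""

# Accumulate [total time in usecs, total cook count] per Op type.
# IsExecOp entries are excluded because their time is already
# included in the invoking Op's entry.
totals = defaultdict(lambda: [0.0, 0])
for row in csv.DictReader(io.StringIO(SAMPLE)):
    if row["IsExecOp"] == "true":
        continue
    acc = totals[row["OpType"]]
    acc[0] += float(row["TotalTime(usecs)"])
    acc[1] += int(row["TotalCount"])

for op_type, (time_usecs, count) in sorted(totals.items()):
    print(f"{op_type}: {time_usecs:.0f} usecs over {count} cooks")
```

Grouping by the Location column instead would give a per-location view; the structure of the loop is the same.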