To determine the performance
of the recorded application, you can evaluate the results that are
generated dynamically during a run. After a run completes, you can
also regenerate the results for viewing and analysis.
Comparing results within and among runs
You can create a report that
compares different nodes or time ranges within a single run. You can
also create a report that compares the results of different runs.
Comparing schedule stages
When you run a schedule that contains stages, time
ranges are automatically created for each stage. You can display a
report that compares these stages, and you can also set preferences
to display the report automatically at the end of a staged run.
Viewing stage results in real time
When you run a schedule that contains stages, you can analyze
results for each stage in real time. You can set preferences to control
how stage results are displayed in real time.
Generating functional test reports
You can generate functional test reports,
which summarize the pass or fail verdicts of elements in the test
log. Functional reports are generated from the test run as HTML files
that use predefined report designs.
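Because functional reports are plain HTML files, you can post-process them with any HTML parser. The sketch below is a hypothetical example: it assumes verdicts appear as text such as "Pass" or "Fail" inside elements marked class="verdict", but the actual markup depends on the report design you use.

```python
from html.parser import HTMLParser

class VerdictCounter(HTMLParser):
    """Count pass/fail verdicts in a functional report.

    Assumes (hypothetically) that each verdict is the text content of an
    element with class="verdict"; adjust to match your report design.
    """

    def __init__(self):
        super().__init__()
        self.in_verdict = False
        self.counts = {"pass": 0, "fail": 0}

    def handle_starttag(self, tag, attrs):
        if ("class", "verdict") in attrs:
            self.in_verdict = True

    def handle_endtag(self, tag):
        self.in_verdict = False

    def handle_data(self, data):
        verdict = data.strip().lower()
        if self.in_verdict and verdict in self.counts:
            self.counts[verdict] += 1

# Minimal sample fragment standing in for an exported report
report = ('<td class="verdict">Pass</td>'
          '<td class="verdict">Fail</td>'
          '<td class="verdict">Pass</td>')
counter = VerdictCounter()
counter.feed(report)
print(counter.counts)  # {'pass': 2, 'fail': 1}
```

A script like this can roll up verdicts from many report files when you need an aggregate view across runs.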
Customizing reports
You can customize reports
to investigate a performance problem in more detail than the default
reports provide.
Exporting data from runs
You can export reports in HTML format for others
to view; export an entire run or specific counters to a CSV file for
further analysis; or export report metadata (templates) so that other
users can generate any custom reports that you have created.
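Counters exported to CSV can be analyzed with ordinary tools. The sketch below is a minimal, hypothetical example: the column names and units depend on the report and counters you choose to export, so treat them as placeholders.

```python
import csv
import io

# Hypothetical sample of an exported counters CSV; real column names
# depend on which counters you select when exporting.
sample = """Time,Page Response Time,Transactions Per Second
10,0.42,105
20,0.51,98
30,0.47,110
"""

def average_counter(csv_text, column):
    """Return the mean of a numeric counter column in an exported CSV."""
    reader = csv.DictReader(io.StringIO(csv_text))
    values = [float(row[column]) for row in reader]
    return sum(values) / len(values)

avg = average_counter(sample, "Page Response Time")
print(f"Average response time: {avg:.2f}s")  # Average response time: 0.47s
```

In practice you would read the exported file with open() instead of an inline string; the parsing and aggregation logic stays the same.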