Reports

Read and Configure

Types

What report types does XLT offer?

  • Performance Reports
    • A single test run or manually combined set of runs
  • Comparison Reports
    • Two runs compared
  • Trend Reports
    • Multiple runs compared

Terminology

Some important terms first to avoid confusion

  • A transaction is an executed and completed test case. Each test case consists of one or more actions.
  • An action is part of a transaction and usually consists of one or more requests. When testing web applications, an action resembles a page view or user interaction.
  • A request is the physical execution of an HTTP/HTTPS call.
  • A transaction has at least one action and an action has at least one request.
  • Transaction times consist of all sub-times including think time and code execution.
  • An action's runtime is made up of all its requests; some requests run concurrently, some do not.

Transactions

An executed test case forms a transaction

  • Name: The defined load test name, not the class
  • Count: The execution count total and per period of interest
  • Errors: The error count and percentage
  • Runtime: The runtime per execution including min and max
  • Runtime includes think time and any other processing overhead
  • PXX data: percentiles, for the nerds; mostly not useful at transaction level

Actions

User interactions of any kind

  • Most customers ask for this data
  • A single user activity
  • Might be a page load or just an interaction (e.g. Quickview)
  • Data similar to transactions
  • Server: this is page loading, but not rendering
  • Server: the runtime is the sum of all resource downloads
  • CPT: this is page loading plus rendering
  • No think time included, but processing overhead is
  • CPT: also contains the wait time for elements

Requests Overview

What really happens against the servers

  • Single HTTP(S) activity aka physical request
  • Each line can be a "bucket" of merged requests
  • Data similar to actions plus runtime segmentation (SLA)
  • Times include header parsing, but not DOM building
  • Errors are hard failures, such as response codes >= 500, timeouts, connection errors

Requests Bandwidth

This is bandwidth usage per request bucket

  • Socket transfer statistics
  • Raw data transfer on the network
  • With gzip, the compressed size is measured
  • Includes headers

Requests Network Timing

Socket timing view

  • Time to First Byte: measured until the first byte of the response comes in
  • Time to Last Byte: measured until the last byte comes in
  • Runtime (first tab) includes connection close and header parsing: last byte != connection end

Requests Gems

External Data

Made for you to enrich the results

  • Data collected externally
  • Merged into the report by custom data readers
  • Anything fits, provide the data and merge it in

Anything Custom

Timers and any values can be collected

  • Manually placed timers
  • Manually placed counters or other data reporters
  • This data comes from within the test, i.e. it is collected by your code during execution on an agent
  • Nice for keeping track of things beyond runtimes

Agents

What was your execution environment doing?

  • See agent and agent machine utilization
  • CPU utilization
  • Memory and GC charts
  • Per-agent display: total CPU vs. agent CPU

Create Reports

How to create reports from result data

  • Script bin/create_report.sh
  • Default target is reports
  • Default name is the result download start time
  • Same name unless specified otherwise
  • Might take a while, depending on size
  • Regenerate at any time
  • Does not need the test suite!
bin $ ./create_report.sh ../results/20161126-131200

bin $ ./create_report.sh ../results/20161126-131200 -o ../reports/test01

Time Slider

Limit the view to a certain time period

  • -from: Where to start
  • -to: Where to end
  • -l: Duration/length as alternative to -to
  • All times are in the timezone of the report
  • Base time format: ISO8601 YYYYMMDD-HHMMSS
  • Alternatively +/-<timedefinition>
  • '+1h15m', '+1:15:00', or '-30m'
  • Duration works without +/-
./create_report.sh ../results/20161113-090012 -from +15m -to -45m -timezone=UTC
./create_report.sh ../results/20161113-090012 -from +15m -l 30m
./create_report.sh ../results/20161113-090012 -from 20161113-100000 -l 30m
./create_report.sh ../results/20161113-090012 -from 20161113-100000 -to 20161113-110000
./create_report.sh ../results/20161113-090012 -from +15m -l 30m -o /tmp/test-2
./create_report.sh ../results/20161113-090012 -to -30m -o /tmp/test-2

Report Options

What else can be done?

  • -timezone: Change the default timezone
  • -noCharts: Speed up calculation without chart generation
  • -noRampUp: Exclude the ramp-up period
  • -linkToResults: Link from the report to the error details in the results
  • -pf: Additional property file for overrides
  • -e: Exclude these test cases
  • -i: Include these test cases
  • -o is important to avoid overwriting older versions
  • Test case related rules
    • Named: "TBrowse,TOrder"
    • RegEx: "TBrowse,T.*Order,TOrder[1-5]"
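These options combine freely. A sketch of a typical invocation (the result path, output path, test case names, and the my-overrides.properties file are made up for illustration; flag syntax as shown on the surrounding slides):

```
bin $ ./create_report.sh ../results/20161126-131200 -o ../reports/test01 \
        -noRampUp -timezone=UTC -i "TBrowse,TOrder" -pf my-overrides.properties
```

The -pf file is an ordinary properties file, e.g. containing just an override such as com.xceptance.xlt.reportgenerator.runtimePercentiles = 50, 99.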

SLA and PXX

How to change these settings

## The percentiles to show in runtime data tables. Specify them as a comma-
## separated list of double values in the range (0, 100].
## Defaults to "50, 95, 99, 99.9". If left empty, no percentiles will be shown.
com.xceptance.xlt.reportgenerator.runtimePercentiles = 50, 95, 99, 99.9

## The list of run time values [ms] that mark the boundaries of the run time
## intervals which are used to segment the value range. For example, the values
## 1000 and 5000 segment the value range into three intervals: [0...1000],
## [1001...5000] and [5001...]. These segments are shown separately in the
## report to illustrate the compliance to certain service level agreements.
## If this setting is missing or left empty, no segments will be shown.
com.xceptance.xlt.reportgenerator.runtimeIntervalBoundaries = 1000, 3000, 5000
  • config/reportgenerator.properties
  • You can also pass a custom property file to the report creation, see -pf

Modify Charts

XLT has extensive ways to adjust charts to see more details

  • Capping per chart type
  • Most useful for requests
  • Cap either relative to average or absolute
  • Change the display mode to logarithmic
  • Change the width and height if required
## Sets a capping for run time charts. All run time values greater than the cap
## are not shown. The cap can be defined using two alternative methods. First,
## you may specify the capping value directly. Second, you may specify a factor
## that, when applied to the mean of all run time values, defines the ultimate
## capping value. The factor must be a double greater than 1. Note that capping
## values take precedence over capping factors. By default, there is no capping.
##
## Furthermore, you may configure the capping mode:
## - smart .... cap the chart only if necessary (ie. max > cap) [default]
## - always ... always cap the chart at the capping value
##
## Note that the capping value/factor and the capping mode can be defined
## separately for each chart type, but it is also possible to define a default
## that applies to all chart types.
#com.xceptance.xlt.reportgenerator.charts.cappingValue = 5000
#com.xceptance.xlt.reportgenerator.charts.cappingValue.transactions = 50000
#com.xceptance.xlt.reportgenerator.charts.cappingValue.actions = 10000
com.xceptance.xlt.reportgenerator.charts.cappingValue.requests = 10000
#com.xceptance.xlt.reportgenerator.charts.cappingValue.custom = 1000

#com.xceptance.xlt.reportgenerator.charts.cappingFactor = 5
#com.xceptance.xlt.reportgenerator.charts.cappingFactor.transactions = 5
#com.xceptance.xlt.reportgenerator.charts.cappingFactor.actions = 5
#com.xceptance.xlt.reportgenerator.charts.cappingFactor.requests = 5
#com.xceptance.xlt.reportgenerator.charts.cappingFactor.custom = 5

com.xceptance.xlt.reportgenerator.charts.cappingMode = always

Modified Charts - Examples

Merge Rules

Original Buckets

COLogin.1

https://host.net/s/Foo/cart?dwcont=C1250297253

COLogin.2

https://host.net/on/d.store/Sites-Foo-Site/en_US/COCustomer-Start

COLogin.3

https://host.net/on/d.store/Sites-Foo-Site/en_US/
	COAddress-UpdateShippingMethodList
	 ?address1=&address2=&countryCode=&stateCode=&postalCode=
	  &city=&firstName=Armin&lastName=Warnes&format=ajax

COLogin.4

https://host.net/on/d.store/Sites-Foo-Site/en_US/
	COAddress-UpdateShippingMethodList
	 ?address1=&address2=&countryCode=&stateCode=&postalCode=
	  &city=&firstName=Armin&lastName=Warnes

COLogin.5

https://host.net/on/d.store/Sites-Foo-Site/en_US/COBilling-UpdateSummary

COLogin.6

https://host.net/on/d.store/Sites-Foo-Site/en_US/__Analytics-Tracking
	?url=https%3A%2F%2Fhost.net%2Fon%2Fd.store%2FSites-Foo-Site%2Fen_US%2FCOCustomer-Start
	 &res=1600x1200&cookie=1&cmpn=&java=0&gears=0&fla=0&ag=0&dir=0&pct=0
	 &pdf=0&qt=0&realp=0&tz=US%2FEastern&wma=1&dwac=0.7869769714444649
	 &pcat=new-arrivals&title=Cole+Haan+Checkout&fake=13581407137497

Merge Rule Rules

How to setup rules

  • Goal: a new name, aka a bucket
  • Match certain criteria
  • Fetch data for the name
  • Numbered rules, gaps permitted
  • Ordered execution
  • Open range, positive numbers
  • Applies only to requests
com.xceptance.xlt.reportgenerator.requestMergeRules.1.newName = {n:0} NonJS [{u:1}]

com.xceptance.xlt.reportgenerator.requestMergeRules.1.namePattern = .+
com.xceptance.xlt.reportgenerator.requestMergeRules.1.statusCodePattern = (30[12])
com.xceptance.xlt.reportgenerator.requestMergeRules.1.contentTypePattern.exclude = javascript

com.xceptance.xlt.reportgenerator.requestMergeRules.1.stopOnMatch = false
newName .................. new request name (required)

namePattern [n] .......... reg-ex defining a matching request name
transactionPattern [t] ... reg-ex defining a matching transaction name
agentPattern [a] ......... reg-ex defining a matching agent name
contentTypePattern [c] ... reg-ex defining a matching response content type
statusCodePattern [s] .... reg-ex defining a matching status code
urlPattern [u] ........... reg-ex defining a matching request URL
runTimeRanges [r] ........ list of run time segment boundaries

stopOnMatch .............. whether or not to process the next rule even if
                           the current rule applied (defaults to true)
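Taken together, a rule combining several of these patterns might look like this (a sketch; the rule number, patterns, and boundaries are made up, and per the table above runTimeRanges takes a list of segment boundaries):

```
## Hypothetical rule: group all search requests and segment them by runtime
com.xceptance.xlt.reportgenerator.requestMergeRules.40.newName = Search
com.xceptance.xlt.reportgenerator.requestMergeRules.40.namePattern = Search.*
com.xceptance.xlt.reportgenerator.requestMergeRules.40.runTimeRanges = 0, 1000, 5000
com.xceptance.xlt.reportgenerator.requestMergeRules.40.stopOnMatch = true
```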

Merge Rules

A couple of things before crunching data

  • Know what the request does
  • Decide what details you need
  • Carefully craft the regex
  • Avoid separating good from bad
  • Config is read from the results, not the suite!
  • Split up redirects
  • Sum up identical requests
  • Don't destroy context (the action) unless it is not needed
  • The less data per bucket, the less reliable PXX values and averages become

Example Continued

  • Get rid of the dot
  • Split off __Analytics
  • List the pipeline if known
  • List the response code when different from 200
  • The site is not of interest
  • Languages and protocol are not an issue
  • No parameter changes the runtime
  • Host is the same every time
COLogin.1
https://host.net/s/Foo/cart?dwcont=C1250297253

COLogin.2
https://host.net/on/d.store/Sites-Foo-Site/en_US/COCustomer-Start

COLogin.3
https://host.net/on/d.store/Sites-Foo-Site/en_US/
 COAddress-UpdateShippingMethodList
  ?address1=&address2=&countryCode=&stateCode=&postalCode=
   &city=&firstName=Armin&lastName=Warnes&format=ajax

COLogin.4
https://host.net/on/d.store/Sites-Foo-Site/en_US/
 COAddress-UpdateShippingMethodList
  ?address1=&address2=&countryCode=&stateCode=&postalCode=
   &city=&firstName=Armin&lastName=Warnes

COLogin.5
https://host.net/on/d.store/Sites-Foo-Site/en_US/COBilling-UpdateSummary

COLogin.6
https://host.net/on/d.store/Sites-Foo-Site/en_US/__Analytics-Tracking
 ?url=https%3A%2F%2Fhost.net%2Fon%2Fd.store%2FSites-Foo-Site%2Fen_US%2FCOCustomer-Start
  &res=1600x1200&cookie=1&cmpn=&java=0&gears=0&fla=0&ag=0&dir=0&pct=0
  &pdf=0&qt=0&realp=0&tz=US%2FEastern&wma=1&dwac=0.7869769714444649
  &pcat=new-arrivals&title=Cole+Haan+Checkout&fake=13581407137497

Buckets everywhere - Set 1

    ## Summarize Analytics Tracking
    ...requestMergeRules.10.newName = {u:1}
    ...requestMergeRules.10.urlPattern = /(__Analytics-Tracking)\\?
    ...requestMergeRules.10.stopOnMatch = true
  • Match URLs with __Analytics-Tracking before the '?'
  • Take the name out of the match
  • Stop further processing
    ## First, we eliminate the sub-request naming pattern, because we do not need
    ## that at the moment. This turns all "name.1" or "name.1.1"
    ## and so on into just "name".
    ...requestMergeRules.20.newName = {n:1}
    ...requestMergeRules.20.namePattern = ^([^\\.]*)(\\.[0-9]+)+$
    ...requestMergeRules.20.stopOnMatch = false
  • Bye bye dot!
  • Escaping: \\. - in Java properties, \\ reads as a single \, which then escapes the . for the regex
  • Let's continue!
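Rule 20's pattern can be dry-run outside XLT, for example with grep (a sketch; the doubled backslashes are Java-properties escaping only, on the shell a single backslash is enough):

```shell
# Rule 20's name pattern with the properties-file escaping removed:
# "name.1" and "name.1.1" match, a plain "name" does not.
pattern='^([^.]*)(\.[0-9]+)+$'

echo 'COLogin.1'   | grep -Eq "$pattern" && echo 'COLogin.1 matches'
echo 'COLogin.1.1' | grep -Eq "$pattern" && echo 'COLogin.1.1 matches'
echo 'COLogin'     | grep -Eq "$pattern" || echo 'COLogin does not match'
```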

Buckets everywhere - Set 2

    # Do a split by pipeline name
    ...requestMergeRules.60.newName = {n:0} ({u:1})
    ...requestMergeRules.60.namePattern = [^.]+
    ...requestMergeRules.60.urlPattern = -Site/[^/]+/([^/\\?]+).*
    ...requestMergeRules.60.stopOnMatch = false
  • -Site/locale/Pipeline is the format
  • Make sure we do not capture the ?
    ## Get us the redirect codes into the name
    ...requestMergeRules.80.newName = {n:0} [{s:0}]
    ...requestMergeRules.80.namePattern = .*
    ...requestMergeRules.80.statusCodePattern = (30[0-9])
    ...requestMergeRules.80.stopOnMatch = false
  • Match every response code 300 to 309
  • Capture it for the name
  • Attention: When the call fails with something other than 30[0-9], it ends up in another row, so we might not see all errors or spikes correctly as part of the main row.
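Rule 60's URL pattern can be dry-run the same way; with sed -E, the first capturing group corresponds to {u:1} (a sketch using URLs modeled on the examples above):

```shell
# Rule 60's urlPattern with the properties-file escaping removed; \1 is {u:1}.
extract_pipeline() {
  echo "$1" | sed -E 's|.*-Site/[^/]+/([^/?]+).*|\1|'
}

extract_pipeline 'https://host.net/on/d.store/Sites-Foo-Site/en_US/COCustomer-Start'
# prints: COCustomer-Start

extract_pipeline 'https://host.net/on/d.store/Sites-Foo-Site/en_US/COAddress-UpdateShippingMethodList?format=ajax'
# prints: COAddress-UpdateShippingMethodList  (the '?' and parameters are not captured)
```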

Compare Runs

How to compare two load test runs

  • Compare two reports
  • Reports, not results!
./create_diff_report.sh <report-1> <report-2>

See Trends

Comparing multiple reports forms a trend

  • Draw a trend picture
  • At least three reports needed
  • Reporting in order of start time
  • Reports, not results!
  • Requires constant measurements
  • Relative to baseline and relative to previous run
./create_trend_report.sh <report-1> ... <report-n>

The End

And they tested happily ever after.