Client Performance Testing
Use a Real Browser for Performance Testing
René Schwietzke, Xceptance
Motivation
Why do we need to know how a browser does things
"About 25 years ago Jakob Nielsen wrote a book called Usability Engineering and he offered some advice about response times which had already been in place for at least 25 years before he wrote the book."
- 0.1 second is about the limit for having a visitor feel as though the system is reacting instantaneously.
- 1.0 second is about the limit for a visitor’s flow of thought to stay uninterrupted, even though the visitor will notice the delay.
- 10 seconds is about the limit for keeping the visitor’s attention focused on the task they want to perform.
Perception
Data vs. Feelings
- Performance is a measure of how fast your site is
- Perceived performance is a measure of how fast a visitor thinks your site is
                  | First   | Repeated
TTFB              | 0.8s    | 0.8s
First Paint       | 2.2s    | 1.9s
DOMContentLoaded  | 3.3s    | 2.4s
Load              | 6.6s    | 4.9s
Finished          | 9.2s    | 6.7s
Visual Complete   | 9.0s    | 7.4s
Bytes Transferred | 3,100kB | 187kB
Request Count     | 268/268 | 69/268
[Page load timeline visualization, 0s to 9s]
Timing and Events
Most important events during a page's lifecycle
Events - TTFB/TTLB
When do we get things
- Time to first byte
- Time to last byte
- Interesting because we know how long server processing and networking took
- Interesting because we know how long downloading took
- The following technical details are involved:
- DNS
- TCP handshake
- TLS handshake
- Send time
- Server processing time
- Time to first byte delivered (contains latency)
- Time to last byte downloaded (also with latency)
- Network capacity
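To make this concrete, here is a minimal Selenium-based sketch (not XLT-specific) that reads the relevant Navigation Timing Level 1 fields via JavascriptExecutor; responseStart - navigationStart approximates TTFB and responseEnd - navigationStart the TTLB of the HTML document. The URL is a placeholder.

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class TtfbTtlbExample
{
    public static void main(final String[] args)
    {
        final WebDriver driver = new ChromeDriver();
        try
        {
            driver.get("https://example.org/"); // placeholder URL

            final JavascriptExecutor js = (JavascriptExecutor) driver;

            // Navigation Timing Level 1 timestamps (milliseconds since epoch)
            final long navigationStart = (Long) js.executeScript("return performance.timing.navigationStart;");
            final long responseStart   = (Long) js.executeScript("return performance.timing.responseStart;");
            final long responseEnd     = (Long) js.executeScript("return performance.timing.responseEnd;");

            // TTFB: DNS, TCP/TLS handshakes, send time, server processing, and latency
            System.out.println("TTFB: " + (responseStart - navigationStart) + " ms");
            // TTLB: TTFB plus the download time of the HTML document
            System.out.println("TTLB: " + (responseEnd - navigationStart) + " ms");
        }
        finally
        {
            driver.quit();
        }
    }
}
```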
DOM Events
A series of timestamps and events, measured via the Navigation Timing API
- domLoading: Got the first bytes and started parsing
- domInteractive: Got all HTML, finished parsing, finished async JS, finished blocking JS, starting deferred JS processing
- domContentLoaded: Deferred JS was executed, the DOMContentLoaded event fires and triggers its JS event handlers
- domComplete: All content (images and other subresources) has been loaded and the DOMContentLoaded event was fully processed (attached JS); the onload event fires and its JS starts processing
- loadEventEnd: All JS attached to onload was executed; the dust should have settled
DOM Events still useful?
How to use the measurements despite being of low interest to the user
- domInteractive: How long does parsing take and do I block it with JS?
- domContentLoadedStart - domInteractive: How much deferred JS do I have?
- domContentLoadedEnd - domContentLoadedStart: How long does the DOMContentLoaded JS event handling take, i.e. working against the parsed and mostly ready CSSOM? Do I block the CSSOM by any chance?
- domComplete: When is really everything downloaded?
- loadEventEnd - loadEventStart: How long does the final onload event handling JS take?
- loadEventEnd - domLoading: How long does the full page processing take?
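The same differences can be computed programmatically. A minimal sketch, assuming a Selenium WebDriver that points at a page whose load event has already fired; it pulls the legacy performance.timing object in one call and prints the durations listed above.

```java
import java.util.Map;

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;

public class DomEventMetrics
{
    // Prints the derived durations discussed above; the driver is assumed
    // to point at a page whose load event has already finished.
    @SuppressWarnings("unchecked")
    public static void report(final WebDriver driver)
    {
        // Fetch the whole legacy timing object as a plain map in one round trip
        final Map<String, Long> t = (Map<String, Long>) ((JavascriptExecutor) driver)
            .executeScript("return JSON.parse(JSON.stringify(performance.timing));");

        System.out.println("Parsing incl. blocking JS : " + (t.get("domInteractive") - t.get("domLoading")) + " ms");
        System.out.println("Deferred JS               : " + (t.get("domContentLoadedEventStart") - t.get("domInteractive")) + " ms");
        System.out.println("DOMContentLoaded handlers : " + (t.get("domContentLoadedEventEnd") - t.get("domContentLoadedEventStart")) + " ms");
        System.out.println("Everything downloaded     : " + (t.get("domComplete") - t.get("navigationStart")) + " ms");
        System.out.println("onload handlers           : " + (t.get("loadEventEnd") - t.get("loadEventStart")) + " ms");
        System.out.println("Full page processing      : " + (t.get("loadEventEnd") - t.get("domLoading")) + " ms");
    }
}
```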
How to test it automatically
The Marriage of Automation and Performance Testing
Client vs. Server Testing
Server Side
- No real browser
- No rendering
- Often no static content download
- Certain third parties excluded
- Available data is request centric
- Easier on test resources
- Enables large scale testing
- Complex UIs are hard to automate due to JS-based logic handling
Client Side
- Real browser
- Real loading and rendering
- High resource demand
- Limited API
- Hard to acquire data
- Limited control over events
Marry both concepts
What XLT offers to overcome these problems
- Married test automation to performance testing
- Utilizing special WebDriver setup
- Able to retrieve data from the browser
- Querying Performance and Navigation Timing API
- Storing data alongside other XLT data
- Write your UI tests as usual (assuming XLT style for a moment)
- Use the action concept to mark areas for screenshots and to name the timings
- Use the chrome_clientperformance profile
- Run as a small performance test to measure and sample enough data
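A rough sketch of what such a test could look like. It assumes XLT's XltChromeDriver and the Session.getCurrent().startAction()/stopAction() calls for marking action boundaries; class names, URL, and locator are placeholders, so verify the exact API against your XLT version.

```java
import org.junit.After;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

import com.xceptance.xlt.api.engine.Session;
import com.xceptance.xlt.api.webdriver.XltChromeDriver;

public class TVisitHomepage
{
    // XltChromeDriver starts Chrome with the client-performance extension;
    // in a load test the driver would normally be created by the framework
    // according to the configured profile (chrome_clientperformance).
    private final WebDriver driver = new XltChromeDriver();

    @Test
    public void homepageAndCart()
    {
        // Each startAction opens a named time frame; the browser-side timings
        // pushed by the extension are attributed to the currently open action.
        Session.getCurrent().startAction("Homepage");
        driver.get("https://shop.example.com/"); // placeholder URL

        Session.getCurrent().startAction("OpenMiniCart");
        driver.findElement(By.id("mini-cart")).click(); // placeholder locator

        Session.getCurrent().stopAction();
    }

    @After
    public void tearDown()
    {
        driver.quit();
    }
}
```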
Technical Internals
How this all works
- Instead of using the XLTDriver, a ChromeDriver or FirefoxDriver is used
- The start profile of the browser configures the plugin
- The browser pushes data to XLT via WebSocket
- Actions define time borders to match up names
- Test runs and reporting as usual
- Data will be part of custom timers
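For illustration only: XLT wires this up internally through its own driver classes, but the underlying idea, starting the browser with a prepared profile that carries the measurement extension, can be sketched with plain Selenium (all paths are placeholders).

```java
import java.io.File;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class ProfileStartSketch
{
    public static WebDriver start()
    {
        // Generic Selenium version of the idea: start Chrome with a prepared
        // user profile and a packaged measurement extension.
        final ChromeOptions options = new ChromeOptions();
        options.addArguments("user-data-dir=/path/to/prepared/profile"); // placeholder path
        options.addExtensions(new File("/path/to/clientperformance.crx")); // placeholder path

        return new ChromeDriver(options);
    }
}
```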
Demo
Three examples with different levels of data
BM
- Heterogeneous technologies
- In many aspects async
- Header menu always loads after the page load
- Wait operations needed to make sure we account for async
- Alert screens caused by a browser detection bug break the page load data
- Experimental visual complete detection
SG
- Storefront
- Mostly synchronous operations
- Just a checkout
Columbia
- Real world
- Long runtimes
- Test of visual complete
Important Considerations
What to pay attention to
- Async operations have to be manually identified
- The action time frame has to be kept open until these finish
- Late changes of the page are hard to code against
- Prefer waitForVisible over waitForElement
- Don't use a single number to identify issues or slowness
- Collect enough data to avoid outliers
- Optionally implement a JavaScript ready state of your library
- Can be queried and waited for by WebDriver (see the sketch below)
- Less cumbersome than element wait or screen change wait
- But might still not tell the real story in terms of visual impression
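A minimal sketch of such a ready-state wait with plain Selenium; window.appReady is a hypothetical flag that your own application code would have to set once its async work is done.

```java
import java.time.Duration;

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.WebDriverWait;

public class ReadyStateWait
{
    // Blocks until the application signals readiness via the hypothetical
    // window.appReady flag, or times out after 30 seconds.
    public static void waitForAppReady(final WebDriver driver)
    {
        new WebDriverWait(driver, Duration.ofSeconds(30)).until(
            d -> Boolean.TRUE.equals(
                ((JavascriptExecutor) d).executeScript("return window.appReady === true;")));
    }
}
```

Calling such a wait at the end of an action keeps the action's time frame open until the async work has actually finished.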
Other test setups possible
Three APIs are important
- You can integrate your personal automation flavor
- XLT requires three things to work
- Test suite layout
- Action start and stop
- Enriched WebDriver setup (plugin added)
- Test suite standalone execution will write timer.csv
- Test suite and XLT performance test setup will enable reporting, load distribution, repeated execution, and more
- You can run real large-scale tests if you have the "oomph" for that
- 8 cores handle roughly 8 to 16 browsers at most
Questions
Feel free to ask