When looking to improve application performance, it's easy these days to start by throwing hardware at the problem. Bigger, cheaper, faster servers. New desktops and laptops. Everyone has the latest and greatest tablets and smartphones, right? Well, how about starting by looking at the application or website first?
Source code optimization is one of the natural outcomes of taking a closer look at application performance and really thinking about how to best provide the features and functionality required. Software can quickly become bloatware as new features are added, sometimes with haste. "I'll clean THAT up later..." becomes an easy excuse and it often never happens. As an example, here is a graph of the Thin Air website source code size over time:
The "hack-a-thon" is obvious. While there is much less source code now, the web app actually functions better as per the captured performance metrics shown below!
The combined improvements for each of the application URIs resulted in the following overall application performance improvement:
| HTTP Requests | Data Transfer | DOM Content Loaded |
|---|---|---|
| 56.41% | 68.44% | 44.82% |
For reference, a 50% improvement is the same as 2X faster (for timings) or half the size (for data transferred). This combined average improvement is significant, but there were specific scenarios with amazing results. Read more details below.
HTTP Requests are measured in number of requests and Data Transferred is measured in KB. DOM Content Loaded is measured in milliseconds (ms) and is therefore significantly affected by network performance. All of the measurements provided below were taken in the "best case" scenario - that is, with full broadband network capability. The results are MUCH more dramatic when throttling the network via Google Chrome DevTools. As an example, the sample data on the left is for the Thin Air /index page with full network bandwidth accessed via WiFi, while the data on the right was produced by artificially throttling the network back to GPRS (50 Kbps) speeds.
| | WiFi | GPRS |
|---|---|---|
| v1.0 | 1250 ms | 27.00 s |
| v2.0 | 612 ms | 3.25 s |
| Improvement | 51.04% | 87.96% |
While an 88% reduction in load time [ (27-3)/27 ] for the GPRS test doesn't sound all that great, marketing spin using standard statistics can sound more impressive. The new code is 9X faster [ 27/3 ]. That represents an 800% speed increase [ (27-3)/3 ]. Now that sounds better!
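If the relationship between those three framings isn't obvious, here is a small sketch that computes all of them from the same pair of load times, using the exact values from the table above. This is just the arithmetic spelled out, not part of the measurement tooling:

```typescript
// Three ways to express the same GPRS improvement (27.00 s down to 3.25 s).
// Rounding 3.25 s down to 3 s is what yields the headline 9X / 800% figures.
function speedMetrics(oldMs: number, newMs: number) {
  return {
    reductionPct: ((oldMs - newMs) / oldMs) * 100, // ~88% less load time
    speedupFactor: oldMs / newMs,                  // ~8.3X faster
    increasePct: ((oldMs - newMs) / newMs) * 100,  // ~731% speed increase
  };
}

console.log(speedMetrics(27000, 3250));
// -> { reductionPct: 87.96, speedupFactor: 8.31, increasePct: 730.77 } (approx.)
```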
Regardless of how the results are "spun", users would likely notice (and appreciate) a reduction in page load time from 27 seconds to just over 3 seconds!
One final note - all measurements were taken only once. For a complete and proper assessment, measurements should be taken several times in each scenario and then averaged.
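For anyone who wants to automate that repeat-and-average approach, a minimal sketch using Puppeteer and the Chrome DevTools Protocol might look like the following. The URL is a placeholder and the GPRS-like throughput/latency values are assumptions, not the exact DevTools throttling preset used for the numbers above:

```typescript
import puppeteer from "puppeteer";

const TARGET_URL = "https://example.com/index"; // placeholder for the Thin Air /index page
const RUNS = 5;

async function measureDomContentLoaded(throttle: boolean): Promise<number> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  if (throttle) {
    // Roughly GPRS-class conditions (assumed values): ~50 Kbps down, high latency.
    const cdp = await page.createCDPSession();
    await cdp.send("Network.emulateNetworkConditions", {
      offline: false,
      downloadThroughput: (50 * 1024) / 8, // bytes per second
      uploadThroughput: (20 * 1024) / 8,
      latency: 500, // ms of added round-trip latency
    });
  }

  await page.goto(TARGET_URL, { waitUntil: "domcontentloaded" });

  // Read DOMContentLoaded from the Navigation Timing entry.
  const dcl = await page.evaluate(() => {
    const [nav] = performance.getEntriesByType(
      "navigation"
    ) as PerformanceNavigationTiming[];
    return nav.domContentLoadedEventEnd - nav.startTime;
  });

  await browser.close();
  return dcl;
}

(async () => {
  const runs: number[] = [];
  for (let i = 0; i < RUNS; i++) {
    runs.push(await measureDomContentLoaded(true));
  }
  const avg = runs.reduce((a, b) => a + b, 0) / runs.length;
  console.log(`DOMContentLoaded over ${RUNS} throttled runs: ${avg.toFixed(0)} ms average`);
})();
```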
Performance direction and guidance can be found in the following excellent Google Web Developer resources:
In addition, Google Chrome DevTools were invaluable for capturing performance data.
Number of HTTP requests, data transferred in KB, and the DOMContentLoaded event time in milliseconds were easily measured using a "before and after" approach. Other data, such as the number of render-blocking JS and CSS resources and the overall mobile/desktop speed and UI scores reported by Insights, was captured and used to identify performance opportunities. The data shown here was captured from DevTools.
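The figures in the tables below came from DevTools directly. As a rough, purely illustrative alternative, the same three numbers can be approximated from the browser's Performance API (for example, pasted into the DevTools console before and after a change); transferSize can read as 0 for cross-origin resources, so treat this only as an estimate:

```typescript
// Rough approximation of the three metrics from the browser's Performance API.
// transferSize can be 0 for cross-origin resources without Timing-Allow-Origin,
// so treat these numbers as an estimate of what DevTools reports.
const nav = performance.getEntriesByType("navigation")[0] as PerformanceNavigationTiming;
const resources = performance.getEntriesByType("resource") as PerformanceResourceTiming[];

const httpRequests = resources.length + 1; // fetched resources plus the document itself
const kbTransferred =
  (nav.transferSize + resources.reduce((sum, r) => sum + r.transferSize, 0)) / 1024;
const domContentLoadedMs = nav.domContentLoadedEventEnd - nav.startTime;

console.table({
  "HTTP Requests": httpRequests,
  "Data Transfer (KB)": Math.round(kbTransferred),
  "DOMContentLoaded (ms)": Math.round(domContentLoadedMs),
});
```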
Each URI section below has three rows of data. The first row (v1.0) shows the originally measured performance, the second row (v2.0) shows the measured performance after the rewrite, and the third row shows the percentage improvement for each metric.
URI: /
| | HTTP Requests | Data Transfer | DOM Content Loaded |
|---|---|---|---|
| v1.0 | 16 | 306 KB | 1.45 s |
| v2.0 | 6 | 55 KB | 215 ms |
| Improvement | 62.50% | 82.03% | 85.17% |
URI: /index
| | HTTP Requests | Data Transfer | DOM Content Loaded |
|---|---|---|---|
| v1.0 | 18 | 288 KB | 1250 ms |
| v2.0 | 8 | 51 KB | 612 ms |
| Improvement | 55.56% | 82.29% | 51.04% |
URI: /blog
| | HTTP Requests | Data Transfer | DOM Content Loaded |
|---|---|---|---|
| v1.0 | 23 | 554 KB | 898 ms |
| v2.0 | 9 | 262 KB | 525 ms |
| Improvement | 60.87% | 52.71% | 41.54% |
URI: /stats
| | HTTP Requests | Data Transfer | DOM Content Loaded |
|---|---|---|---|
| v1.0 | 15 | 300 KB | 741 ms |
| v2.0 | 7 | 63 KB | 570 ms |
| Improvement | 53.33% | 79.00% | 23.08% |
URI: /stats?n=yyyy
| | HTTP Requests | Data Transfer | DOM Content Loaded |
|---|---|---|---|
| v1.0 | 16 | 307 KB | 674 ms |
| v2.0 | 6 | 46 KB | 297 ms |
| Improvement | 62.50% | 85.02% | 55.93% |
URI: /statsi
| | HTTP Requests | Data Transfer | DOM Content Loaded |
|---|---|---|---|
| v1.0 | 21 | 564 KB | 2.45 s |
| v2.0 | 12 | 319 KB | 2.00 s |
| Improvement | 42.86% | 43.44% | 18.37% |
Ironically, much of the performance improvement was achieved by eliminating the jQuery Mobile framework (and its related jQuery dependency) from pages that required few (or in some cases, no) functions or classes from the framework. Where JQM functions or classes were needed, alternate JS functions were quite easily written by hand with little effort. In general, the performance benefits vastly outweigh the small amount of effort involved. This is not a knock against JQM - it's a great framework and I've enjoyed using it; it's just not the right tool for this particular project. Also, the ability to quickly prototype with JQM has been immensely valuable. Having said that, choose wisely at the beginning of your project, because I have found that untangling a framework from a project can be very time consuming. Your mileage may vary.
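As an illustration of the kind of small, hand-rolled replacement this involved, a vanilla stand-in for a jQuery Mobile collapsible might look something like the sketch below. Which JQM widgets were actually replaced on the Thin Air pages isn't detailed here, so treat this as purely hypothetical:

```typescript
// Hypothetical example only: a vanilla replacement for a jQuery Mobile
// collapsible (data-role="collapsible") that toggles the content following
// the heading. The actual JQM widgets replaced are not listed above.
document.querySelectorAll<HTMLElement>('[data-role="collapsible"]').forEach((panel) => {
  const heading = panel.querySelector<HTMLElement>("h1, h2, h3, h4, h5, h6");
  if (!heading) return;

  // Everything in the container other than the heading is collapsible content.
  const content = Array.from(panel.children).filter(
    (el): el is HTMLElement => el instanceof HTMLElement && el !== heading
  );

  content.forEach((el) => (el.hidden = true)); // JQM collapsibles start collapsed
  heading.addEventListener("click", () => {
    content.forEach((el) => (el.hidden = !el.hidden));
  });
});
```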