This is a cross-post from @stoyanstefanov's 2018 Performance Calendar; the original can be found here.

Consider the following timing data for a stylesheet request:

If you were staring at a browser that had to wait around for a response for those 236.79 milliseconds, you'd be hard pressed to figure out what was going on. Maybe that time reflects RTT (round-trip time) and my server responded instantly. Or maybe my server had to do a bunch of custom work before handing me back the bytes of my stylesheet.
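This ambiguity is what the Server-Timing response header is meant to remove: the server annotates the response with its own timing breakdown, which the browser then exposes alongside the resource's timing data. A hypothetical response for that stylesheet might look like this (the metric names are made up for illustration; the syntax is the standard name;desc;dur form):

```http
HTTP/1.1 200 OK
Content-Type: text/css
Server-Timing: cdn-cache;desc="MISS", edge;dur=4, origin;dur=230
```

With a breakdown like that, you could tell at a glance that most of the 236.79 ms was spent at the origin, not on the network.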

Server-Timing Compression

For domains that use our RUM product, through the power of the Edge, we now inject up to 3 Server-Timing entries per resource (including the base page). Because we don't sample, and because we beacon back every timer of every resource requested from every same-origin <IFRAME>, we've already been applying custom trie-based compression to save data going out of the browser. For Server-Timing we have a lot of work to do, because the data can get pretty verbose and redundant.
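To give a feel for why a trie helps here: beaconed resource URLs (and repeated Server-Timing strings) share long common prefixes, so you can send each entry as "how much it shares with the previous one, plus the new suffix". The sketch below is a minimal illustration of that idea, not our actual wire format; it computes shared prefixes pairwise over a sorted list, which is what a trie walk would give you.

```javascript
// Prefix-compress a list of URLs: sort them, then store for each entry
// the length of the prefix shared with the previous entry (base 36, one
// character) followed by only the new suffix. Illustration only: a real
// encoder needs a multi-character length field and a safe delimiter,
// since "|" and prefixes longer than 35 chars would break this sketch.
function compress(urls) {
  const sorted = [...urls].sort();
  let prev = "";
  const out = sorted.map((url) => {
    let shared = 0;
    while (shared < Math.min(prev.length, url.length) &&
           prev[shared] === url[shared]) {
      shared++;
    }
    prev = url;
    return shared.toString(36) + url.slice(shared);
  });
  return out.join("|");
}

// Reverse the encoding: rebuild each URL from the previous one.
function decompress(packed) {
  let prev = "";
  return packed.split("|").map((entry) => {
    const shared = parseInt(entry[0], 36);
    const url = prev.slice(0, shared) + entry.slice(1);
    prev = url;
    return url;
  });
}
```

For a page loading many assets from the same origin and path, the shared prefixes dominate, so the savings add up quickly.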


While doing some field research for a talk he gave at the #PerfMatters Conference, my colleague @simonhearne made an interesting discovery:

"👋 @newrelic @newrelicdev! Your injected JS clears the ResourceTiming buffer when it gets full, meaning nothing else can access the data. Please fix, i.e. don't clear the buffer 🙏" — Simon Hearne (@simonhearne), March 23, 2018

Was there really "beef" between two JavaScript analytics agents? How did it resolve?
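The underlying issue is that the ResourceTiming buffer is a page-global, shared resource: if one agent calls clearResourceTimings() when the buffer fills, every other script loses that data. The cooperative alternative is to grow the buffer instead. Below is a sketch of that pattern; `perf` is a tiny stand-in object so the sketch runs outside a browser, whereas in a real page you would use window.performance, its onresourcetimingbufferfull handler, and setResourceTimingBufferSize().

```javascript
// Minimal stand-in for window.performance (browsers keep at least
// 150 resource entries by default).
const perf = {
  _bufferSize: 150,
  _entries: [],
  setResourceTimingBufferSize(n) { this._bufferSize = n; },
  getEntriesByType(type) { return type === "resource" ? [...this._entries] : []; },
};

// Cooperative buffer-full handler: read what we need, then enlarge the
// buffer so later entries (and other agents' reads) are not lost.
// In a real page this would be assigned to
// performance.onresourcetimingbufferfull.
function onBufferFull() {
  const seen = perf.getEntriesByType("resource");
  perf.setResourceTimingBufferSize(perf._bufferSize + 150);
  return seen.length; // beacon `seen` here instead of clearing the buffer
}
```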