Web Performance: Video Optimization

According to HTTPArchive, the average site grew from 2,135 KB to 3,034 KB in the two years from July 1, 2015, to July 1, 2017. Video is a major part of that: the average web site’s video weight grew from 204 KB to 729 KB over the same two-year period. The “low-hanging fruit” of performance used to be optimizing images. Now it’s both images and video.

My optimizing video rules:

  1. If possible, omit videos
  2. Compress all videos
  3. Optimize <source> order
  4. Remove audio from muted heroes

If possible, omit videos

The best way to optimize is to remove unneeded content and unneeded requests.

Do you really need a hero video? Do you really need it on the mobile version of your site?

You can use media queries to avoid downloading the #hero-video on narrow screens.

@media screen and (max-width: 650px) { 
  #hero-video { 
      display: none; 
   } 
}
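
Depending on the browser and the video’s preload behavior, a video hidden with display: none may still be requested, so you may also want to attach the source only on wide viewports with JavaScript. A rough sketch, assuming the markup ships an empty <video id="hero-video"> with the file URL in a data-src attribute (that attribute is an assumption, not something from the markup above):

var heroVideo = document.getElementById('hero-video');

// only attach the source on wide viewports, so narrow screens never request the file
if (heroVideo && window.matchMedia('(min-width: 651px)').matches) {
  var source = document.createElement('source');
  source.src = heroVideo.dataset.src;  // e.g. data-src="video.webm" on the <video> element
  source.type = 'video/webm';
  heroVideo.appendChild(source);
  heroVideo.load();
}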

Compress all videos

Most video compression efforts involve comparing adjacent frames within a video and removing details that are the same in the original and subsequent frame. You want to both compress the video and export it to multiple video formats, including WebM, MPEG-4/H.264 and Ogg/Theora.

The software you used to create your video likely includes the ability to reduce the file size. If not, there are free tools, like FFmpeg, discussed below, that can help encode, decode, convert, and perform other forms of magic.
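
For example, FFmpeg can compress a source file and transcode it into each of the formats above. A rough sketch, where the file names and quality settings (the CRF and -q:v values) are placeholders to tune for your own footage:

# H.264/MP4: a higher CRF means a smaller file and lower quality (18 to 28 is typical)
ffmpeg -i original.mov -c:v libx264 -crf 28 -preset slow -c:a aac -b:a 128k video.mp4

# WebM (VP9) in constant-quality mode
ffmpeg -i original.mov -c:v libvpx-vp9 -crf 35 -b:v 0 -c:a libopus video.webm

# Ogg/Theora
ffmpeg -i original.mov -c:v libtheora -q:v 5 -c:a libvorbis video.ogv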

Optimize <source> order

Order from smallest to largest. For example, given three video compressions at 10 MB, 12 MB, and 13 MB, put the smallest first and the largest last:

<video width="400" height="300" controls="controls">
  <!-- WebM: 10 MB -->
  <source src="video.webm" type="video/webm" />
  <!-- MPEG-4/H.264: 12 MB -->
  <source src="video.mp4" type="video/mp4" />
  <!-- Ogg/Theora: 13 MB -->
  <source src="video.ogv" type="video/ogv" />
</video>

In terms of the order, the browser will download the first video source it understands, so let it hit a smaller one first. In terms of “smallest”, do make sure that your most compressed video still looks good. Some compression settings can make your video look like an animated GIF. While a 128 KB video may seem like a better user experience than making your users download a 10 MB video, putting a grainy GIF-like video behind your content may also negatively impact your brand.

See CanIUse.com for current browser support of video and the various media types.  

Remove audio from muted heroes

Lastly, if you do have a hero video or other video without audio, remove the audio track from your video file.

<video autoplay="" loop="" muted="true" id="hero-video">
  <source src="banner_video.webm" 
          type='video/webm; codecs="vp8, vorbis"'>
  <source src="web_banner.mp4" type="video/mp4">
</video>

This hero video code, common to many conference websites and corporate home pages, includes a video that is auto-playing, looping, and muted. It contains no controls, so there is no way to hear the audio. The audio is often empty, but it is still present. It is still using up bandwidth. There is no reason to serve the audio along with a video that is always muted. Removing the audio can save 20% of the bandwidth, which is 2 MB if your video is 10 MB.

Depending on your video making software, you may be able to remove the audio during export and compression. If not, there is a free tool called FFmpeg that can do it for you with the following command:

ffmpeg -i original.mp4 -an -c:v copy audioFreeVersion.mp4

FFmpeg bills itself as the “complete, cross-platform solution to record, convert and stream audio and video,” which it pretty much is.

Feeding the Diversity Pipeline

Welcoming work environments require multiple components. Most notably, the environment needs to be both diverse and inclusive. A lot of attention has been focused on diversity: on ensuring the pipeline to the workforce has diverse candidates. While it is up to each employer / recruiter to ensure they are reaching out to diverse applicants, many organizations have worked to ensure that, if the recruiter actually looked, they would find diverse candidates.

Below are some organizations doing the work of filling up that pipeline:

National & International “In Person” Groups

Girl Develop It: With locations in 56 cities across 33 US states, Girl Develop It provides affordable programs for adult women interested in learning web and software development in a judgment-free environment.

Black Girls Code: Black Girls Code provides workshops with the goal of addressing the dearth of African-American women in STEM.

Yes We Code: #YesWeCode targets low-opportunity youth, providing them with the resources and tools to become computer programmers.

Code2040: With outposts in SF, Austin, Chicago, and Durham, Code2040 creates access, awareness, and opportunities for Black and Latinx engineers.

Girls Who Code: Geared toward 13- to 17-year-old girls, Girls Who Code pairs instruction and mentorship to “educate, inspire and equip” students to pursue their engineering and tech dreams. With after-school clubs for 6th-12th grade girls and 7-week summer immersion programs for 10th-11th grade girls, it aims to “build the largest pipeline of future female engineers in the United States.”

Write/Speak/Code:  With three chapters in the USA and an annual conference, Write/Speak/Code empowers women and non-binary software developers to become thought leaders, conference speakers, and open source contributors.

Women Who Code: Focusing on women already in tech, the objectives of Women Who Code include providing networking and mentorship opportunities for women in tech around the world.

Lesbians Who Tech: The goal of Lesbians Who Tech is to make lesbians and their allies in tech more visible to each other and to others, to get more women, particularly lesbians, into technology, and to connect lesbians who tech to community LGBTQ and women’s organizations.

Girl Geek Academy: With events in Australia and the USA, Girl Geek Academy initiatives include coding and hackathons, 3D printing and wearables, game development, design, entrepreneurship, and startups. They work with individuals, teachers, schools, corporates, and startups to increase the number of women with professional technical and entrepreneurial skills.

Girls Inc., Operation SMART: With the premise that girls do like and can be good at STEM, the mission of Girls Inc. is to inspire girls to be strong, smart, and bold. With locations in low-income neighborhoods across the United States and Canada, they provide research-based curricula to equip girls to achieve academically, lead healthy and physically active lives, and discover an interest in science, technology, engineering, and math. Operation SMART develops girls’ enthusiasm for and skills in STEM through hands-on activities, training, and mentorship.

AnnieCannons: Helps human trafficking survivors gain web development skills

LadiesThatUX.com: Ladies that UX creates spaces for women at all levels to engage, talk about their experiences, both positive and negative, and get support and inspiration. It has over 54 chapters in over 20 countries, spanning four continents.

Technovation: Entrepreneurs, mentors, and educators teach girls how to become tech entrepreneurs and leaders, from completing a tech curriculum to launching their own mobile app startup.

Regional Efforts

North America

Color Coded: Based in Washington DC, Color Coded is a community for people in tech and people interested in tech.  Their events include workshops, co-working sessions, hackathons, interview preparation, and everything in between.

Native Girls Code: Based in Seattle, Washington, Native Girls Code introduces indigenous teen girls (12 to 18) to opportunities in the field of science, technology, engineering, art, and mathematics (STEAM).

Techqueria!: A community of Latinx professionals in the tech industry in the greater San Francisco Bay Area.

The Last Mile: Based out of San Quentin Prison in the San Francisco Bay Area, The Last Mile teaches inmates CSS, JS, HTML, and Python with the aim of supporting successful reentry and reducing recidivism.

Coder_Girl: Based in St. Louis, Missouri, CoderGirl is a year-long tech training program consisting of two 6-month cycles: a learning cycle and a project cycle. CoderGirl provides a space for women of all skill levels to learn to code: from Web Development & Design, C#/.NET, SQL, iOS, to Java and UX.

Sisters Code: Based in Detroit, Michigan, Sisters Code’s goal is to educate, empower, and entice women ages 25 – 85 to explore the world of coding and technology. Their weekend-long classes teach women to code interactive websites using JavaScript, HTML and CSS for re-careering into tech.

TechTonica: Free tech training, living and childcare stipends, and job placement for local women and non-binary adults with low incomes in the San Francisco Bay Area.

Hack the Hood: Web dev workshops and bootcamps for low-income young people of color in the Bay Area and throughout Northern California, from Modesto to Gilroy to San Francisco and beyond.

Techfest Club: An event series for women in tech in New York City.

Outside the USA

Geekettes: While they have many hubs, the Berlin and Minneapolis hubs currently host tech talks, workshops, and hackathons to teach and refine skills and bring women together to create unique, original products.

She Codes: Provides educational events for female software developers in the hi-tech industry throughout Israel.

CodeBar: Free workshops in London bridging the diversity gap.

BlackGirl.Tech: Provides Black women the opportunity to get into coding in London and Bristol.

Online Resources:

Programming Language Resources

  • RailsBridge: Free workshops from beginner to intermediate in Rails and Ruby, with a focus on increasing diversity and inclusion for all genders, races, experiences, etc.
  • LinuxChix: Linux community providing technical and social support for women Linux users.
  • PyLadies: Mentorship focusing on helping more women become active participants and leaders in the Python open-source community.
  • Django Girls: Free Python and Django workshops and open sourced online tutorials.

Additional Resources

  • Under-represented conferences: Twitter list of programming conferences for or about people of color, women, LGBTQI, etc. in tech, from CallBackWomen
  • Tech Women Programs List: Twitter list of organizations, publications, events and other programs that work to advance technical women via Anita Borg
  • Speaking Advice: Twitter list of resources encouraging people from underrepresented groups to speak at conferences from CallBackWomen
  • Karen Church’s Medium Post: List of 80 Women in STEM SF Bay area resources from 2015.
  • Kira Newman’s list: 30+ Organizations for Women in Technology from 2012
  • Being inclusive in CS: Cynthia Lee’s guide to supporting inclusion in computer science classrooms

Diversity v. Inclusivity

Interviewing diverse candidates will not create a diverse environment. While the above organizations may have filled that diversity pipeline, the pipeline is full of leaks. On its own, diversity recruiting is really only lip service. Work, school, community, and conference environments need to be inclusive. Inclusivity is the sealant that prevents many pipeline leaks. Creating an inclusive work environment is necessary, but it was not the focus of this post.

Let’s Encrypt on cPanel

Here is how I set up Let’s Encrypt for all sites on my hosted virtual private server running cPanel.

1. SSH as root

$ ssh -p 22 root@123.1.23.123

Your port might also be 2200. Ask your VPS hosting provider.

2. Then run the command:

$ /scripts/install_lets_encrypt_autossl_provider

3. Log into your main control panel:

https://123.1.23.123:2087

or however you access it (possibly port 2086 if http://)

You should see the following (or something similar) if successful:

Running transaction
  Installing : cpanel-letsencrypt-2.16-3.2.noarch            1/1
 
  Verifying  : cpanel-letsencrypt-2.16-3.2.noarch            1/1
 
Complete!

4. Under SSL/TLS you’ll find “Manage AutoSSL”
Under “providers”, you’ll see “Let’s Encrypt”. That’s a new option that was created by running the command as root.

Select “Let’s Encrypt”. Then agree to their terms of service and create a new registration with Let’s Encrypt if necessary. Under the “managed users” tab you can enable / disable AutoSSL by account.

5. Now, under the control panel of each account, under SECURITY > SSL/TLS, under “Install and Manage SSL for your site (HTTPS)”, if you select “Manage SSL Sites”, you’ll see the Let’s Encrypt  cert.

Note: If you had a self-signed certificate (which you don’t want), delete the cert in the individual account. Click the “run AutoSSL for all users” button as root under “Manage AutoSSL”. When you refresh the individual user, the correct cert should be there.

6. Yay. All your accounts now have an SSL cert. You still need to redirect all of your http:// traffic to https://. In the .htaccess, add the following:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

Now http://machinelearningworkshop.com/ redirects to https://machinelearningworkshop.com and http://www.machinelearningworkshop.com redirects to https://www.machinelearningworkshop.com/

and one of these days this blog will do the same.

W3C Performance Specifications

Here are some of the W3C’s web performance specifications (a combined usage sketch follows the list):

  • High Resolution Time (Level 3)

    The DOMHighResTimeStamp type, performance.now() method, and performance.timeOrigin attribute of the Performance interface address the shortcomings of Date.now() by providing monotonically increasing time values with sub-millisecond resolution.

    https://w3c.github.io/hr-time

  • Performance Timeline (Level 2)

    Extends the definition of the Performance interface, exposes PerformanceEntry in Web Workers, and adds support for the PerformanceObserver interface.

    https://w3c.github.io/performance-timeline

  • Resource Timing (Level 3)

    Defines the PerformanceResourceTiming interface providing timing information related to resources in a document.

    https://w3c.github.io/resource-timing
    https://w3c.github.io/navigation-timing
    Supported in all browsers except Safari and Opera Mini, starting with IE10.

  • User Timing (Level 2)

    Extends Performance interface with PerformanceMark and PerformanceMeasure.

    https://w3c.github.io/user-timing
    Supported in all browsers except Safari and Opera Mini, starting with IE10.

  • Beacon API

    Defines a beacon API which can “guarantee” asynchronous and non-blocking delivery of data, while minimizing resource contention with other time-critical operations.

    https://w3c.github.io/beacon
    Not supported in IE, Safari, or Opera Mini. Support started with Edge 14.
    navigator.sendBeacon() on MDN

  • Preload

    Defines preload for resources which need to be fetched as early as possible, without being immediately processed and executed. Preloaded resources can be specified via declarative markup, the Link HTTP header, or scheduled with JS.

    https://w3c.github.io/preload

  • Cooperative Scheduling of Background Tasks

    Adds the requestIdleCallback method on the Window object, which enables the browser to schedule a callback when it would otherwise be idle, along with the associated cancelIdleCallback and timeRemaining methods.

    https://w3c.github.io/requestidlecallback
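
Here is a minimal sketch of how several of these APIs can be combined. The endpoint, mark names, file path, and the work being timed are placeholders rather than anything defined by the specs:

// placeholder for some work we want to time
function renderWidget() {
  for (var i = 0; i < 1e6; i++) {}
}

// High Resolution Time + User Timing: mark and measure the work
performance.mark('widget-start');
renderWidget();
performance.mark('widget-end');
performance.measure('widget', 'widget-start', 'widget-end');

// Performance Timeline: observe entries as they are recorded
var observer = new PerformanceObserver(function (list) {
  list.getEntries().forEach(function (entry) {
    console.log(entry.entryType, entry.name, entry.duration);
  });
});
observer.observe({ entryTypes: ['measure', 'resource'] });

// Beacon API: send collected timings without blocking page teardown
window.addEventListener('pagehide', function () {
  var payload = JSON.stringify(performance.getEntriesByType('measure'));
  navigator.sendBeacon('/analytics', payload); // placeholder endpoint
});

// Cooperative Scheduling: run low-priority work while the browser is idle
requestIdleCallback(function (deadline) {
  console.log('idle time available:', deadline.timeRemaining(), 'ms');
});

// Preload is declarative markup rather than script, e.g.:
// <link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>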

Capturing Captions from YouTube Videos

There are many tutorials on how to download the caption files created by YouTube if you own the video, but I was unable to find a way to download the captions of videos I don’t own. There’s probably an easy way to do it, but since I couldn’t find it, writing a JavaScript function to do it via the console took less effort.

Here’s the code (I did it three ways, depending on how you like to write your JS).

Constructor method:

function CaptionCollector () {
  var that = this;
  this.captions = '';
  var nowShowing = '';

  this.collect = function(){
    try {
      var currentCaption = document.getElementsByClassName("captions-text")[0].innerText;
    } catch (e) {
      var currentCaption = null;
    }

    if(currentCaption && nowShowing != currentCaption) {
      nowShowing = currentCaption;
      that.captions += ' ' + nowShowing;
    }

    setTimeout(that.collect, 300);
  }
}

var foo = new CaptionCollector();
foo.collect();

Print the captions with foo.captions. Of course, you can use anything instead of “foo”.

Here’s a version using JS object notation:

var captionCollector = {
    captions : '',
    nowShowing: '',

    collect : function(){
      var currentCaption;
      try {
        currentCaption = document.getElementsByClassName("captions-text")[0].innerText;
      } catch (e) {
        currentCaption = null;
      }

      // "this" is lost when collect is re-invoked via setTimeout, so reference the object directly
      if (currentCaption && captionCollector.nowShowing != currentCaption) {
        captionCollector.nowShowing = currentCaption;
        captionCollector.captions += ' ' + captionCollector.nowShowing;
      }

      setTimeout(captionCollector.collect, 300);
    }
}

captionCollector.collect();

With this version, you print the captions to the console with captionCollector.captions.

Or you can use the anonymous function method with a single global variable:

(function(){
    ___captions = '';
    var ___nowShowing = '';

    function getCaption() {
        try {
          var currentCaption = document.getElementsByClassName("captions-text")[0].innerText;
        } catch (e) {
          var currentCaption = null;
        }

        if(currentCaption && ___nowShowing != currentCaption) {
          ___nowShowing = currentCaption;
          ___captions += ' ' + ___nowShowing;
        }
        setTimeout(getCaption, 300);
    }

    getCaption();
})();

With this version, you print the captions to the console with the global variable ___captions.

Because it uses the class name of the caption box YouTube uses for videos, this only works on YouTube. Alter the class name for other video services.

You do have to play the whole video to capture all the captions. Under the video’s settings, you can play it at twice the speed.

Clear the console when the video ends. Print the transcript to the console. Select all. Copy. You’re good to go.

TWELP: Twitter Help

Sometimes Twitter gets things wrong. Very, very, wrong. A few “features” that I think are bugs include Twitter Moments, promoted tweets, and the “X liked” tweets injected into your stream.

To this end, I created a little bookmarklet called “TWELP”.

<a href="javascript:(function(){function kill(){$('.promoted-tweet, .Icon--heartBadge').closest('li.stream-item').css('display','none');$('.js-moments-tab, .DismissibleModule').css('display','none');setTimeout(kill, 1000);}kill();})();">TWELP</a>.

Let me rewrite that for you in a version that’s easy to read, but won’t work to copy and paste:

(function(){
    function kill() {
         $('.promoted-tweet, .Icon--heartBadge').closest('li.stream-item').css('display','none');
         $('.js-moments-tab, .DismissibleModule').css('display','none');

         setTimeout(kill, 1000);
     }

     kill();
})();

The bookmarklet creates a kill function that:

  1. hides promoted tweets by finding the parent tweet containing a promoted-tweet child class
  2. hides any “liked” tweets that contain the heart icon, including uninteresting tweets in your stream such as the fact that your friend Jane liked a tweet of a picture of her acquaintance Joe, who you are not following, eating an oyster. Seriously, who the fuck cares? It also hides the “people who liked your tweet” feature in your notifications. Not sure if that is a feature or a bug.
  3. hides the “Moments” tab by hiding the tab that has the  js-moments-tab class
  4. hides promoted modules that I hate like “In Case You Missed It” and “Who to follow”
  5. Calls itself once per second so if you scroll, it will continue killing those annoying tweets mentioned above.

TWELP – You can drag this link to your bookmarks bar, and click the TWELP bookmarklet whenever you load Twitter. It kills the “Moments” tab, all ads, and the “X liked” tweets.

Or, you can roll your own.

Speed Perception

TL;DR: Please take the speedPerception challenge to help us confirm results about which web performance metrics best match human perception of speed.

Last summer, I was involved in a study called “SpeedPerception”, a large-scale web performance crowdsourced study focused on the perceived loading performance of above-the-fold content aimed at understanding what “slow” and “fast” mean to users. I am now involved in the second part of this study which aims to confirm (or refute) our findings.

SpeedPerception: the general idea

Traditional web performance metrics, like those defined in the W3C Navigation Timing specification, focus on timing each process along the content delivery pipeline, such as Time to First Byte (TTFB) and Page Load Time. SpeedPerception’s goal is to tackle the web performance measurement challenge by looking at it from a different angle: one that puts user experience into focus by concentrating on the visual perception of the page load process. We show users sample video pairs of websites loading, generated with http://www.webpagetest.org/ (WPT), and ask them which of the pair they perceive as having loaded faster.

In the first phase, we measured only Internet Retailer top-500 (IR500) sites at desktop size. Now we are testing whether those results generalize: in other words, do they only hold for our IR500 sites on desktop? Will we get consistent results when testing Alexa top-1000 (Alexa1000) homepages? Will we see the same results if we test on mobile-size screens with mobile lie-fi performance?

In this second phase, we’re testing both mobile and desktop versions of both IR500  and Alexa1000 website home pages. We’ve also added a way of measuring the user’s time to click so we can compare apples to apples.

The goal is to create a free, open-source, benchmark dataset to advance the systematic study of how human end-users perceive the webpage loading process: the above-the-fold rendering in particular. Our belief (and hope) is that such a benchmark can provide a quantitative basis to compare different algorithms and spur computer scientists to make progress on helping quantify perceived webpage performance.

Take the SpeedPerception challenge!

How was SpeedPerception created?

Videos were created using Patrick Meenan’s open-source WebPagetest (a.k.a. WPT). We made 600+ videos of 2016 IR500 and Alexa-1000 home pages loading. The runs were done in February 2017. Videos were turned into GIFs. Video pairs were grouped using a specific set of rules to help limit bias and randomness. Everything is available on GitHub at https://github.com/pdey/SpeedPerception.

Open Source

Just like we did with the results of phase 1 of SpeedPerception, once the crowd-sourcing component generates a sufficient amount of user data, we will open source the dataset, making it available to the web performance community, along with the analysis of what we discover.

Please help us by taking the SpeedPerception Challenge now. Thanks.

Results from Phase 1

In phase 1, we discovered that a combination of three values, an abbreviated SpeedIndex up to time to click (TTC), an abbreviated Perceptual Speed Index up to TTC, and startRender (or Render), can achieve upwards of 85% accuracy in explaining majority human A/B choices. Does the power of this new combination “model” hold true for all sites, or just our original data set? This is what we’re working on finding out.

If you’re interested in phase 1, here’s some more light reading:

 

Leak without a trace: Anonymous Whistleblowing

Instructions on how to leak data without getting caught

  1. Don’t leave digital traces while copying data.
  2. Write stuff down on a pad which belongs to you, and take it home.
  3. Photograph your screen. Don’t create files with copies of the data you are planning to exfiltrate on your work computer.
  4. If the data is on an internal web server, try to access what you are planning to leak in the form of multiple partial queries over a period of time, instead of as one big query.
  5. If you have large volumes of data, save it on a never-used-before USB drive and bring that home. Don’t ever use that USB drive for anything else again.
  6. Don’t use work-owned equipment to post from. Many employers have monitoring software installed, and will easily be able to see who posted what.
  7. Don’t use equipment where you’ve installed employer-supplied monitoring software to post.
  8. If you just read this, on equipment which might be monitored, wait a while before posting anything sensitive. Don’t give somebody who is watching a chance to correlate your seeing this with a post right after that.
  9. Install Tor. You can get it at http://tor.eff.org. This is a special browser which is slow, but provides strong anonymity. It will prevent anybody at your ISP (if they’re watching) from knowing what sites you visit with it, and it will prevent any sites you visit from knowing the IP address of your computer. This will make it much harder for either of them to identify you.
  10. Use the Tor browser for posting your leak.
  11. You must follow these instructions to make sure tor really works: https://tor.eff.org/download/download-easy.html.en#warning.
  12. The instructions about never opening a downloaded file are vitally important — things like .doc and .pdf files can contain software which will expose who you are.
  13. The instruction about using an https version of a web site instead of the plain old http version is also very important. This is because while tor provides very strong anonymity, it doesn’t provide a secure connection to the web site — the https connection to the site does that.
  14. If you’re posting to a social site like twitter or reddit, or using an email account for it, you’ll need to set up a new account for posting your leak.
  15. If you’ve got an account which you have used when not on tor, then Google, twitter, reddit, etc. can identify the IP address of your computer from previous sessions.
  16. If you’ve ever posted anything on an account, there’s a good chance you’ve leaked information about yourself. Don’t take this risk, and just use a new account exclusively for leaking.
  17. Do NOT post files from standard editors like Word or Excel, or photos from your camera. Most programs and recording equipment embed metadata, like the identity of the creator or the serial number of a camera, in their files, which can be used to identify you. Plain old .txt files are ok. Pretty much anything else risks your identity.
  18. When posting photos, be sure to use a metadata stripping tool like jhead (see the example after this list). Turn off photo syncing (e.g., iCloud).
  19. The secure drop systems used by some news outlets like ProPublica and the New York Times may be able to strip this kind of thing, but ask a journalist about your specific file format first.
  20. After posting the leak, you need to NEVER use that account for any purpose not directly tied to the leak, since you may make the mistake of giving away your identity.
  21. If you want to leak to the press (which may get broader coverage, but may also decide not to publish at all) organizations like ProPublica and the New York Times have special drop boxes set up to allow the posting.
  22. Details here:
    http://www.niemanlab.org/2017/01/how-easy-is-it-to-securely-leak-information-to-some-of-americas-top-news-organizations-this-easy/
    https://www.nytimes.com/newsgraphics/2016/news-tips/#securedrop
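
For the metadata stripping mentioned in step 18, jhead can inspect and strip EXIF data from JPEGs on the command line. A quick sketch (the file name is a placeholder):

# show the EXIF metadata embedded in the photo
jhead photo.jpg

# strip everything that isn't needed to render the image
jhead -purejpg photo.jpg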

Web Performance Stats are Overrated

Note: A more polite version of this post originally appeared on Instart Logic’s blog

Who cares about performance?  Your customers, that’s who!  Speeding up your site by several seconds, or even by hundreds of milliseconds, will make your customers happy. Well, at least they’ll be less irritated.  That’s pure logic. I am not citing statistics. I am using common sense.

For example, a 50% decrease in file size — cutting bloat in half — has a greater impact on someone on 2G mobile than on a T1 line. Common sense dictates the impact is greater on a 5 MB site than a 5 KB one, no matter the network speed, as the performance gains from halving a 5 KB download may not even have a noticeable effect. Similarly, improving a 12s page load to 11s doesn’t have the same impact as improving a site from 3s down to 2s. The impact of a one-second improvement depends on the original experience. This is obvious. It’s common sense.

Conclusions based on common sense are often accurate, but aren’t considered factual like conclusions based on statistics.  Statistics are often used to demonstrate something as fact, but a statistic is really only a fact about a piece of data. Mark Twain is credited with saying “Facts are stubborn things, but statistics are pliable.”

We discuss web performance statistics as if the web were a monolith, as if all web applications were uniform, and as if performance improvement effects were linear. Performance isn’t a simple science: improvements may not be represented by a simple equation, and metric interpretation shouldn’t be assumed to be linear, or even accurate.

The statistic many web speeders quote, that “a one second delay in web page responsiveness leads to a 7% decrease in conversions, an 11% drop in pageviews, and a 16% decrease in customer satisfaction,” is kind of bullshit today. This quote comes from a 2008 study: a study that came out a few months after the iPhone SDK was released and Android Market came into being, before either proliferated. When that study was conducted, the iPad and Android tablets were a few years off. As web speeders, we can’t quote a performance study that basically predates mobile. Yet, that’s what we’re doing.

While the customer satisfaction statistics might be BS, the conclusions are not. The longer the site takes to load, the less happy your customers will be, the more users will abandon your site, the lower your conversion rates, and the less money you’ll make. It’s not even that your customers will be unhappy — it’s that they may not be your customers anymore. That’s common sense, no matter what the actual percentages are.

Definitely give yourself a performance budget and test your site. Test your application throughout the development process. Optimize and right-size your images, setting dimensions. Reduce DNS lookups. Reduce bloat, minimizing request size and gzipping responses. Make fewer HTTP requests or serve content over HTTP/2, caching what you can. Basically, follow as many performance recommendations as you can.

Improve the performance of all of your content, not just your home page. Focus your energies on the critical path of your site. It’s obviously less important to optimize some pages, say the “libraries we use” page linked to from the “about us” page of a political candidate’s website. But do realize users often enter your site through a side door: for example, a product page, while the shopping cart experience is a necessity for every sale on an online store. In the case of eCommerce, optimizing your homepage might create the most “savings”, but it will do little to improve user experience for your actual potential customers.

And never forget: improving the download speed is not enough. Shaving a second off your load time won’t improve customer satisfaction if, once the page loads, users are met with an unresponsive UI.

Or maybe it will.

Let me go back to 2008 to test that out.

animation-iteration-delay hack

While there is no such property as an `animation-iteration-delay`, you can employ the `animation-delay` property, incorporate delays within your keyframe declaration, or use JavaScript to fake it. The best method for ‘faking it’ depends on the number of iterations, performance, and whether the delays are all equal in length.

What is an animation iteration delay? Sometimes you want an animation to occur multiple times, but want to wait a specific amount of time between each iteration.

Let’s say you want your element to grow 3 times, but want to wait 4 seconds between each 1s iteration. You can include the delay within your keyframe definition, and iterate through it 3 times:

.animate3times {
    background-color: red;
    animation: yellow;
    animation-iteration-count: 3;
    animation-duration: 5s;
 }
@keyframes yellow {
 80% {
    transform: scale(1);
    background-color:red;
  }
  80.1% {
    background-color: green; 
    transform: scale(0.5);
  }
  100% {
    background-color: yellow;
    transform: scale(1.5);
  }
 }

Note the first keyframe selector is at the 80% mark and matches the default state. This will animate your element 3 times, staying in the default state for 80% of the 5-second animation, or for 4 seconds, then moving from green to yellow and from small to big over the last 1 second of the animation duration, before iterating again and stopping after 3 iterations.

This method works for infinite repetitions of the animation as well. Unfortunately, it is only a good solution if the iteration delay between each iteration is identical. If you want to change the delay between each iteration, while not changing the duration of the change in size and color, you have to write a new @keyframes definition.

To enable different iteration delays between animations, we could create a single animation and bake in the effect of three different delays:

.animate3times {
    background-color: red;
    animation: yellow;
    animation-iteration-count: 1;
    animation-duration: 15s;
 }
@keyframes yellow {
  0%, 13.32%, 20.01%, 40%, 46.67%, 93.32% {
    transform: scale(1);
    background-color:red;
  }
  13.33%, 40.01%, 93.33% {
    background-color: green; 
    transform: scale(0.5);
  }
  20%, 46.66%, 100% {
    background-color: yellow;
    transform: scale(1.5);
  }
 }

This method may be more difficult to code and maintain. It works for a single cycle of the animation. To change the number of animations or the iteration delay durations, another keyframe declaration would be required.

The animation-iteration-delay hack

There’s a solution that currently works that is not specifically allowed in the animation specification, but it isn’t disallowed, and it’s supported: you can declare an animation multiple times, each with a different animation-delay.

.animate3times {
    animation: yellow, yellow, yellow;
    animation-delay: 0s, 4s, 10s;
    animation-duration: 1s;
 }
@keyframes yellow {
  0% {
    background-color: green; 
    transform: scale(0.5);
  }
  100% {
    background-color: yellow;
    transform: scale(1.5);
  }
 }

See http://codepen.io/estelle/pen/PqRwGj.

We’ve attached the animation three times, each with a different delay. In this case, each animation iteration concludes before the next one begins. If they overlap, then while they’re concurrently animating, the values will be those from the last declared animation.

See http://codepen.io/estelle/pen/gpemWW

Of course, you can also use JavaScript with animationstart, animationiteration, and animationend event listeners that add or remove animation names or classes from your element.
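
Here’s a rough sketch of that approach. It assumes a .grow class whose CSS runs the one-second yellow animation once per application; the class name, delay, and iteration count are placeholders:

var el = document.querySelector('.animate3times');
var iterations = 0;

el.addEventListener('animationend', function () {
  iterations++;
  if (iterations >= 3) { return; }        // stop after three runs
  el.classList.remove('grow');
  setTimeout(function () {
    void el.offsetWidth;                  // force a reflow so the animation can restart
    el.classList.add('grow');
  }, 4000);                               // the "iteration delay"
});

el.classList.add('grow');                 // kick off the first run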