animation-iteration-delay hack

While there is no such property as an `animation-iteration-delay`, you can employ the `animation-delay` property, incorporate delays within your keyframe declaration, or use JavaScript to fake it. The best method for ‘faking it’ depends on the number of iterations, performance, and whether the delays are all equal in length.

What is an animation iteration delay? Sometimes you want an animation to occur multiple times, but want to wait a specific amount of time between each iteration.

Let’s say you want your element to grow 3 times, but want to wait 4 seconds between each 1s iteration. You can include the delay within your keyframe definition, and iterate through it 3 times:

.animate3times {
  background-color: red;
  animation: yellow;
  animation-iteration-count: 3;
  animation-duration: 5s;
}
@keyframes yellow {
  80% {
    transform: scale(1);
    background-color: red;
  }
  80.1% {
    background-color: green;
    transform: scale(0.5);
  }
  100% {
    background-color: yellow;
    transform: scale(1.5);
  }
}

Note the first keyframe selector is at the 80% mark, and matches the default state. This will animate your element 3 times, staying in the default state for 80% of the 5 second animation, or for 4 seconds, and moving from green to yellow and small to big over the last 1 second of the animation duration, before iterating again, stopping after 3 iterations.

This method works for infinite repetitions of the animation as well. Unfortunately, it is only a good solution if the iteration delay between each iteration is identical. If you want to change the delay between each iteration, while not changing the duration of the change in size and color, you have to write a new @keyframes definition.

To enable different iteration delays between animations, we could create a single animation and bake in the effect of three different delays:

.animate3times {
  background-color: red;
  animation: yellow;
  animation-iteration-count: 1;
  animation-duration: 15s;
}
@keyframes yellow {
  0%, 13.32%, 20.01%, 40%, 46.67%, 93.32% {
    transform: scale(1);
    background-color: red;
  }
  13.33%, 40.01%, 93.33% {
    background-color: green;
    transform: scale(0.5);
  }
  20%, 46.66%, 100% {
    background-color: yellow;
    transform: scale(1.5);
  }
}

This method may be more difficult to code and maintain. It works for a single cycle of the animation. To change the number of animations or the iteration delay durations, another keyframe declaration would be required.
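To ease the maintenance burden a little, the percentages can be generated rather than hand-computed. Here is a hypothetical helper (the function name and shape are mine, not part of the original demo) that, given the per-iteration waits and the visible animation duration in seconds, returns the keyframe boundaries used in the declaration above:

```javascript
// Compute keyframe stop percentages for a "baked-in" single-cycle animation:
// each iteration is a wait (delay) followed by a visible run (duration).
function keyframeStops(delays, duration) {
  // total timeline length: every wait plus every visible run
  var total = 0;
  delays.forEach(function (d) { total += d + duration; });

  var stops = [];
  var elapsed = 0;
  delays.forEach(function (delay) {
    var start = elapsed + delay;    // the visible animation begins here
    var end = start + duration;     // ...and ends here
    stops.push({
      start: +(start / total * 100).toFixed(2),
      end: +(end / total * 100).toFixed(2)
    });
    elapsed = end;
  });
  return stops;
}

console.log(keyframeStops([2, 3, 7], 1));
// stops at 13.33%/20%, 40%/46.67% and 93.33%/100% of the 15s timeline
```

With waits of 2s, 3s and 7s around three 1s runs, this reproduces the 13.33%/20%, 40%/46.67% and 93.33%/100% boundaries in the keyframe block above.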

The animation-iteration-delay hack

There’s a solution that currently works that is not specifically allowed in the animation specification, but it isn’t disallowed, and it’s supported: you can declare an animation multiple times, each with a different animation-delay.

.animate3times {
  animation: yellow, yellow, yellow;
  animation-delay: 0s, 4s, 10s;
  animation-duration: 1s;
}
@keyframes yellow {
  0% {
    background-color: green;
    transform: scale(0.5);
  }
  100% {
    background-color: yellow;
    transform: scale(1.5);
  }
}

See http://codepen.io/estelle/pen/PqRwGj.

We’ve attached the animation three times, each with a different delay. In this case, each iteration concludes before the next one begins. If they overlap, the values from the last declared animation win while they’re concurrently animating.

See http://codepen.io/estelle/pen/gpemWW

Of course, you can also use JavaScript with animationstart, animationiteration and animationend event listeners that add or remove animation names or classes from your element.
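For example, here is a minimal sketch of the JavaScript approach (the helper name, element, and delay values are my own assumptions for illustration; in a real page you may also need to force a reflow between clearing and reapplying the animation for the restart to take effect):

```javascript
// Fake an iteration delay: wait for animationend, pause, then restart the
// animation. `delays` lists the waits (in seconds) before each re-run.
function animateWithDelays(el, name, duration, delays) {
  var i = 0;
  el.addEventListener('animationend', function () {
    if (i >= delays.length) { return; }            // all iterations done
    el.style.animation = 'none';                   // clear the finished animation
    setTimeout(function () {
      el.style.animation = name + ' ' + duration + 's';  // restart it
    }, delays[i++] * 1000);
  });
  el.style.animation = name + ' ' + duration + 's';      // first iteration
}

// Usage (hypothetical):
// animateWithDelays(document.querySelector('.animate3times'), 'yellow', 1, [4, 10]);
```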

Lyrinė Tālrunis: “Telephone” with Lyric Translations

Last week I hacked together a mashup of HP IdolOnDemand’s free Speech Recognition API and Google’s fee-for-service Translation API to create a Lyric Translator. Of course, I had to make the site responsive, using VW units for text and Flexbox for the layout. I also used a datalist to provide an optional list of usable media files. Then, tonight, I wrote a blog post explaining these components.

Enjoy!

The app

Lyrinė Tālrunis, or “Lyric Telephone”, does two things:

  1. It captures the text from an audio file using the free Speech Recognition API from HP IdolOnDemand,
  2. It then translates the captured text from the original language into French, German, Spanish, Vietnamese, and Russian, and then back to the original language, enabling you to create new lyrics for — or a more entertaining interpretation of — your favorite songs.

The name is basically Lyrics, as in song lyrics (though you can use any media file), plus ‘Telephone’, in reference to the game of telephone you learned in pre-K: when a story gets passed through too many people (or languages), the original text morphs into something else.

As deployed, the app provides two preloaded options, Rudolph the Red-Nosed Reindeer and Frosty the Snowman, but you can include a link to any .wav or .mp3 file you find online.

I’ll be expanding the application to allow for file uploads, and hope to implement uploading directly from your device’s microphone with:

<input type="file" accept="audio/*;capture=microphone">

For right now, simply find a song online, or even a video file, and include the full URL in the input box.

HP’s IdolOnDemand Speech Recognition API

IDOL OnDemand’s Speech Recognition API creates a transcript of the text in an audio or video file. It supports seven languages, including both US and UK English (yes, it will produce “colour” instead of “color” if you ask it to).

You do first need to register with IdolOnDemand to get an API Key. Then it’s a simple AJAX call.

The values you need to create the request URL for the Speech Recognition API include:

  • The encoded URI of your .wav or other audio or video file.
  • Your API key.
  • The language your file is in (defaults to en-US).

Here is how I created my URL:

  var apikey = YOUR_HP_API_KEY,
      url = document.getElementById('url').value,
      language = document.getElementById('lang').value,
      query = 'https://api.idolondemand.com/1/api/sync/recognizespeech/v1' + '?';
  query += "&url=" + encodeURIComponent(url);
  query += "&apikey=" + apikey;
  query += "&language=" + (language || "en-US");

Where url is the id of the input where the user enters the full path to the media file. The input has 3 default options for you to choose from, but you can enter any text you wish. This is explained in the <datalist> / list attribute section below.

The ‘lang’ is the ID of the <select> drop-down that lists the language of the media file.

<select name="lang" id="lang">
  <option value="en-US">English (US)</option>
  <option value="en-GB">English (UK)</option>
  <option value="de-DE">German</option>
  <option value="es-ES">Español</option>
  <option value="fr-FR">Français</option>
  <option value="it-IT">Italiano</option>
</select>

The speech recognition API works for all the above languages and Chinese. Surprisingly, I can actually read and understand all of the languages above (don’t ask).  As I can’t read Chinese and wouldn’t be able to debug it, I didn’t include it in this app.  If you want to include Chinese as an option, please feel free to fork the app repo and add Chinese back in, but please use your own API key.

IDOL OnDemand’s Speech Recognition API uses the language code and the country subcode, so use the long form like “en-US” and not “en”. Don’t know what I am talking about (or have insomnia)? Read up on language tags.

Depending on the file size, your file can take a while to process, so make sure to let your user know something is happening, and make sure to handle errors in case it times out. For better user experience, when the button gets clicked, calling the API, the content of the button changes to a rotating pipe in the hopes of making a quick and easy spinner. (Check out the CSS file if you want to learn the animation, as I am not covering it here). The animation stops and the button returns to the original text when the text extraction of the media file is returned from the Speech API.

 var app = {
     ...

      init : function () {
        // add eventHandler to button
        document.getElementById('doThis').addEventListener('click', function(){
          app.submitToHP();
          app.changeButton();
        });
      },

      // get the words from the original media file
      // the `data` object contains default values & the `apikey`
      submitToHP : function () {
        data.url = document.getElementById('url').value || data.url;
        var query = data.request_url + '?';
        query += "&url=" + encodeURIComponent(data.url);
        query += "&apikey=" + data.apikey;
        query += "&language=" + (document.getElementById('lang').value || "en-US");

        var request = $.ajax(query)
            .done(function(e) {
                // response received - handle it
                app.acceptResponse(e.document[0].content);
              })
            .fail(function(e) {
                // error - handle it
                app.acceptResponse('Oops, something went wrong.');
              })
            .always(function(e) {
                // finished - stop button animation
                app.revertButton();
            });
      },
      },
      ... // continues

The request returns a JSON object:

{
  "document": [
    {
      "content": "the media speech is here"
    }
  ]
}

So we grab that content with:

e.document[0].content

where e is the response.
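Since the API can return an empty document array (say, for silent audio), a slightly defensive version of that extraction might look like this (the function name and fallback string are my own, not part of the app):

```javascript
// Pull the transcript out of the Speech Recognition API response,
// falling back to a message if the document array is missing or empty.
function extractContent(response) {
  return (response.document &&
          response.document[0] &&
          response.document[0].content) || 'No speech detected.';
}

// extractContent({document: [{content: "hi"}]})  → "hi"
// extractContent({})                             → "No speech detected."
```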

The other functions included, but not described here:

  • app.changeButton() — changes the button to a spinner
  • app.revertButton() — resets the button to its original state
  • app.acceptResponse(e.document[0].content) — writes the text to the page, and initiates the translations, which are done via the Google Translate API

Google Translate API

I tried finding a good, free, intuitive, easy to use, translation API, but came up empty handed. Sorry Microsoft, you have too many steps, and I couldn’t just “dive right in.” I do, however, have the Yandex translation API on my list of things to look up. It looks like it might be a good free alternative to Google’s fee-for-service translation API.

Time is money, and I was already familiar with the Google Translate API, which is the main reason I chose it. Again, fork my repo to try something different.

The request URL for Google Translate API looks something like this:

var request_URL = "https://www.googleapis.com/language/translate/v2?key=" +  
  YOUR_GOOGLE_API_KEY +
  "&source=" + from +
  "&target=" + to +
  "&q=" + encodeURIComponent(text);

Where you use your own Google API key: the ‘from’ is the original language, the ‘to’ is the language you want to translate to, and the text you encode with encodeURIComponent(text) is the return value from the Speech Recognition API, what we captured as e.document[0].content above.

As we are translating from the original language to French, German, Spanish, Vietnamese, and Russian, and then back again to the original language, we are calling the Google API 6 times. The important information is how to create the link for Google’s REST API. You can take a look at the source code to see how I iterate through the various languages and make the AJAX call for each translation.
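As a sketch of that iteration (the function name is mine; in the real app each response’s translated text feeds the next request’s q parameter, while this sketch only shows how the six request URLs are shaped):

```javascript
// Build the chain of Google Translate request URLs for a round trip
// through several languages and back to the original.
function buildTranslateChain(key, original, languages, text) {
  var chain = languages.concat([original]);  // end back at the original
  var urls = [];
  var from = original;
  chain.forEach(function (to) {
    urls.push('https://www.googleapis.com/language/translate/v2?key=' + key +
              '&source=' + from +
              '&target=' + to +
              '&q=' + encodeURIComponent(text));
    from = to;  // the next hop starts where this one ended
  });
  return urls;
}

// buildTranslateChain('KEY', 'en', ['fr', 'de', 'es', 'vi', 'ru'], 'hello')
// returns six URLs: en→fr, fr→de, de→es, es→vi, vi→ru, ru→en.
```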

CSS3 Values

The page is fully responsive. On larger devices, the font is larger. This is done without @media queries. I know. I know. Media queries are all the rage. But they’re not always necessary. CSS3 provides responsive features that enable the creation of responsive content without having to define where your design splits. The browser can do it for you.

In this case, it’s the VW, or viewport width unit, that makes the site naturally responsive. The VW unit is relative to the viewport width: the viewport width is 100vw. If you don’t know all your length units, my 4 year old post needs some updating.

  h1 {
    line-height: 30vh;
    font-size: 8vw;
  }

The above CSS snippet reads: “the line height should be 30% of the viewport height; the font size should be 8% of the viewport width.”

As the viewport narrows, the font-size will shrink. Yes, it would become illegible if the viewport were too narrow, but no phone is narrow enough to make it so: that size is relatively huge, and will even be legible on new watches. As the viewport height shrinks, so does the line height, meaning the h1 will never be taller than 30% of the height of the viewport.
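To make the arithmetic concrete (a throwaway helper, not part of the site’s code):

```javascript
// 1vw is 1% of the viewport width, so converting to pixels is one multiply.
var vwToPx = function (vw, viewportWidth) {
  return vw * viewportWidth / 100;
};

console.log(vwToPx(8, 320));   // 25.6  — still readable on a small phone
console.log(vwToPx(8, 1280));  // 102.4 — headline-sized on a laptop
```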

I used VW and VH for the height of the blue header and containers for the translation content: even when empty, the articles will be 40% of the document height.

The content in the button and the input also grows and shrinks as the viewport grows or shrinks. vh and vw are very well supported, though vmax, the larger of the viewport width and height, and vmin, the lesser of those two values, are not fully supported.

CSS Flexible Box Layout Module

CSS Layout is fun! No. Seriously. It is. Just use flexbox.

body {
  display: flex;
  flex-direction: column;
  flex: 1 0 300px;
}
header, article {
  flex: 3;
  min-height: 40vh;
}
footer, main {
  flex: 1; 
  max-height: 10vh;
}
main, article {
  display: flex;
  justify-content: space-around;
  align-items: center;
}
section {
  flex: 1;
}
@media screen and (max-width: 500px) {
  main, article {
    flex-wrap: wrap;
  }
}

Above is just part of the CSS. I’ve posted the CSS relevant to the flexible layout of the document. You’ll note that the layout has four vertical sections: the header, main section with the buttons, the articles where the translations go, and the footer.

The body CSS code block above makes the body a flex container, and the header, main, article and footer all flex items. The flex-direction is column, so they’re one on top of another instead of row, which would put the parts side by side.

The main and article are not only flex items, but they, in turn, are also flex containers. The default flex-direction is row, so the children of the main and the children of the article will be laid out side by side within their parent. We have two flex items within the article (the original text result from the Speech API and the final result processed through 5 translations). These will be side by side, and will always be of equal height. They will not wrap by default, but if the screen is 500px wide or less, the rows of content can wrap: the translation can land below the text capture, and the input can land below the button, which can fall below the language selection.

This project shows a very simple example of flex layout, and is not meant as a full flexbox tutorial. To learn more about flexbox, I have an open source flexbox tutorial you can play with. There you can see that justify-content: space-around; means the extra space will be evenly distributed around each item, along with all the other flexbox properties, including several not covered in this demonstration.

Datalist element and list attribute

Take a look at the input on the page. You’ll note, in modern browsers, there is a little arrow on the right, which, if clicked, shows an autocomplete list. This effect is achieved using the HTML5 list attribute along with the <datalist> element and that element’s nested <option> elements.

<input type="url" list="urls" id="url" placeholder=" URL of .wav file">
    
<datalist id="urls">
  <option value="http://estelle.github.io/audiotranslator/data/frosty.wav">Frosty</option>
  <option value="http://estelle.github.io/audiotranslator/data/rudolph.wav">Rudolph</option>
  <option value="http://estelle.github.io/audiotranslator/data/tet.wav">Test File</option>
</datalist>

In this example, we have an <input> of type url, with a list attribute. The value of the list attribute is the value of the ID of the datalist element. This associates the urls datalist with the input. If a browser doesn’t support datalist, it will simply not show the datalist. Totally progressive enhancement: the form control is still usable in Netscape 4.7

If the browser does support datalist, a drop-down menu of the options will show when the input has focus. It will only show the values that are still potentially valid: if you enter ‘h’, all the values will still show; any other character, and the options will no longer be valid, so they will no longer be displayed. I have a tutorial on web forms, which demonstrates the inclusion of datalist on text, url, email, number, color, range, date and time input types.

Future of the app

Here are some ideas I have for expanding the application. Please fork and do it for me :D

  • File upload option
  • Drag and Drop from desktop to upload file
  • Audio Capture directly from your device
  • Inclusion of original <audio> for your listening pleasure

px to rem conversion if :root font-size is 16px

If you have

:root {
  font-size: 16px;
}

you can use the following table to convert from PIXELS to REMS

10px 0.625rem
11px 0.6875rem
12px 0.75rem
13px 0.8125rem
14px 0.875rem
15px 0.9375rem
16px 1rem
17px 1.0625rem
18px 1.125rem
19px 1.1875rem
20px 1.25rem
21px 1.3125rem
22px 1.375rem
23px 1.4375rem
24px 1.5rem
25px 1.5625rem
26px 1.625rem
27px 1.6875rem
28px 1.75rem
29px 1.8125rem
30px 1.875rem
31px 1.9375rem
32px 2rem
33px 2.0625rem
34px 2.125rem
35px 2.1875rem
36px 2.25rem
37px 2.3125rem
38px 2.375rem
39px 2.4375rem
40px 2.5rem
41px 2.5625rem
42px 2.625rem
43px 2.6875rem
44px 2.75rem
45px 2.8125rem
46px 2.875rem
47px 2.9375rem
48px 3rem
49px 3.0625rem
50px 3.125rem
51px 3.1875rem
52px 3.25rem
53px 3.3125rem
54px 3.375rem
55px 3.4375rem
56px 3.5rem
57px 3.5625rem
58px 3.625rem
59px 3.6875rem
60px 3.75rem
61px 3.8125rem
62px 3.875rem
63px 3.9375rem
64px 4rem
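The whole table is just px ÷ 16, so a one-liner covers any value (a throwaway helper, not from any post):

```javascript
// Convert a pixel value to rems, given the :root font-size (16px by default).
var pxToRem = function (px, rootSize) {
  return px / (rootSize || 16) + 'rem';
};

console.log(pxToRem(24));  // "1.5rem"
console.log(pxToRem(10));  // "0.625rem"
```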

Responsive Images: Clown Car Technique

Adaptive images are the current hot topic in the Adaptive Design and Responsive Web Design conversations. Why? Because no one likes any of the solutions. New elements and attributes are being discussed as a solution for what is, for most of us, a big headache.

We have foreground and background images. We have large displays and small displays. We have regular resolution and high resolution displays. We have good bandwidth and low bandwidth connections.

Some choose to “waste bandwidth” (and memory) by sending high-resolution images to all devices. Others send regular resolution images to all devices which look less crisp on high resolution displays.

When it comes to background images, we have media queries. This makes it (almost) simple to send the right size and resolution images based on the device pixel ratio, size and / or orientation of the screen.

Proposed solutions with new technology

With content images, it’s a bit more difficult. Most believe that there is no mechanism for the <img> tag to download the right size and resolution image. Polyfills have been created. Services have been formed.

The <picture> element leveraging the semantics of the HTML5 <video> elements, with its support of media queries to swap in different source files was proposed:

<picture alt="responsive image">
     <source src=large.jpg media="(min-width:1600px), (min-resolution: 136dpi) and (min-width:800px) ">
     <source src=medium.jpg media="(min-width:800px), (min-resolution: 136dpi) and (min-width:400px) ">
     <source src=small.jpg>
        <!-- fallback -->
        <img src=small.jpg alt="responsive image">
  </picture>

Another method, using a srcset attribute on the <img> element has also been proposed. The above would be written as:

 <img alt="responsive image"
           src="small.jpg"
           srcset="large.jpg 1600w, large.jpg 800w 1.95x, medium.jpg 800w, medium.jpg 400w 1.95x">

Possible Solutions with Existing Tech: SVG

What many people don’t realize is we already have the technology to serve responsive images. We have had browser support for responsive images for a long, long time! SVG has supported media queries for a long time, and browsers have supported SVG for … well, not quite a long time, but still. Most browsers support media queries in SVG (test your browser). The main issue in terms of mobile is old Android’s lack of support prior to 3.0.

We can use media queries within SVG to serve up the right image. The beauty of the "Clown Car" technique is that all the logic remains in the SVG file. I’ve called it the "Clown Car" technique since we are including (or stuffing) many images (clowns) into a single image file (car).

When you mark up your HTML, you simply add a single call to an SVG file.

<img src="awesomefile.svg" alt="responsive image">

Now isn’t that code simple?

The magic is that SVG supports both media queries and rasterized images.

In our SVG file, using the <image> element, we include all the images that we may need to serve, along with all the media queries.

Here is the code for one of the SVG files:

<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="300" height="329">
  <title>The Clown Car Technique</title>
    <defs>
    <style>
    image {display: none;}
    #small {display: block}
     @media screen and (min-width: 401px) and (max-width: 700px) {
        #medium {display: block}
        #small {display: none}
    }
      @media screen and (min-width: 701px) and (max-width: 1000px) {
        #big {display: block}
        #small {display: none}
    }
     @media screen and (min-width: 1001px)  {
      #huge {display: block}
      #small {display: none;}
    }
    </style>
  </defs>
  <g>
    <image id="small"  height="329" width="300" xlink:href="images/small.png" />
    <image id="medium" height="329" width="300" xlink:href="images/medium.png" />
    <image id="big"    height="329" width="300" xlink:href="images/big.png" />
    <image id="huge"   height="329" width="300" xlink:href="images/huge.png" />
  </g>
  </svg>

Unfortunately, when this file is used, all 4 PNGs are retrieved from the server. To solve this issue, we can use background images instead:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 300 329" preserveAspectRatio="xMidYMid meet">
 <title>Clown Car Technique</title>
 <style> 
  svg {
    background-size: 100% 100%;
    background-repeat: no-repeat;
  } 
 @media screen and (max-width: 400px) {
  svg {background-image: url(images/small.png);}
 } 
 @media screen and (min-width: 401px) and (max-width: 700px) {
  svg {background-image: url(images/medium.png);}
 } 
 @media screen and (min-width: 701px) and (max-width: 1000px) {
  svg {background-image: url(images/big.png);}
 } 
 @media screen and (min-width: 1001px) {
  svg {background-image: url(images/huge.png);}
 }
 </style>
</svg>

This version only downloads the images required, thereby solving the multi-HTTP and waste-of-bandwidth concerns. However, it seems to trigger more Content Security Policy issues than the previous SVG.

The SVG file has its own declared size, but when included in HTML, the media query size is based on the proportions of the DOM node in the HTML: it reflects the space provided to it.

Open the first SVG file or the second SVG file in your browser, then grow and shrink your browser window. As the window grows and shrinks, the image the SVG displays changes. The image size appears to stay the same — because it is the same: in the SVG, all the images have the same dimensions. It’s when we include the SVG in a flexible layout that the magic happens. You’ll note that the first time you load the second one there may be flickers of white as the browser requests the next required PNG.

When you include the SVG in your HTML <img> with a flexible width, such as 70% of viewport, as you grow and shrink the container, the image responds to the changes. The "width" media query in the SVG is based on the element width in which the SVG is contained, not the viewport width.

I have included the first SVG and the second SVG so you can see SVG with both foreground and background images. These foreground images work perfectly in Opera. In Chrome and Safari, I need to open the SVG file first, after which the HTML file containing the foreground SVG image works perfectly*. In Firefox, the SVG works. Firefox supports SVG and supports SVG as background image, but blocks the importing of external raster images due to their content security policy (CSP).

The content security policy does make sense: you don’t want a file pulling in untrustworthy content. The SVG technology is supported; it is just being prevented from pulling in external raster images. Firefox prevents it altogether. Safari and Chrome work if the SVG is preloaded. We can work with the browser vendors to get this CSP restriction lifted.

The browsers do all support the media queries with the SVG. They all support SVG as foreground or content images. They all support SVG as background images. The support just isn’t all the same.

Responsive SVG for foreground images works. It simply works right out of the box. For background images, it works, but I am still tweaking the code to make the background responsive (in Chrome it works if background-size is declared, but Opera and Safari are using the declared SVG size as the image size… still working on combinations that work everywhere.)

The drawback of the first SVG is that the SVG file is making 4 HTTP requests. So, while we are displaying the right image, we are not saving bandwidth. In the second, the raster images are background images; only the required image is downloaded.

Another pro for this technique: similar to how we separate content from presentation from behavior: this method enables us to also separate out images — all the logic is in the SVG image instead of polluting our CSS or HTML.

With the <object> tag: up next

<object> can take care of the Content Security Policy drawback we see with <img>, which disallows the importing of images or scripts into the SVG file. <object> allows both. I am working on that angle now.

The main question is: should we attempt this with <object>, or should we get browser vendors to change their content security policy?

Note:

* Interestingly, the SVG works best when the raster images are pulled cross domain rather than same origin. You would think, with CSP, it would be the exact opposite.

Web Development Tips

My Twitter account, WebDevTips, has been reactivated. Follow @webDevTips to get (almost) daily Web Development tips in your timeline.


details and summary polyfill

var summaryClick = function (elem) {
  // only act if the browser lacks native <details> support
  if (!('open' in document.createElement('details'))) {
    var details = elem.parentNode;
    if (details.hasAttribute('open')) {
      details.removeAttribute('open');
    } else {
      details.setAttribute('open', 'open');
    }
  }
};

Add summaryClick to your <summary> click event listeners.
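For instance, here is a hedged sketch of that wiring (the wireSummaries name is my own; pass document and the summaryClick function above once the DOM is ready):

```javascript
// Attach a click handler to every <summary> inside a root node,
// forwarding the clicked element to the supplied handler.
function wireSummaries(root, handler) {
  var summaries = root.querySelectorAll('summary');
  Array.prototype.forEach.call(summaries, function (summary) {
    summary.addEventListener('click', function () {
      handler(summary);
    });
  });
}

// Usage (hypothetical): wireSummaries(document, summaryClick);
```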

And then in my CSS I put something similar to:

details:not([open]) {
   height: 2em; 
   overflow: hidden;
}
details[open]  {
   height: auto;
}
summary::before {
   content: "▶ ";
}

Only browsers that understand the JS used will understand the selectors used, so this isn’t foolproof, but it’s a nice, quick-and-dirty solution.

SpeciFISHity: CSS Specificity

Some people are confused by CSS Specificity, especially with all of the (not-so) new CSS3 Selectors. The image below may help make sense of CSS Specificity.

X-0-0: The number of ID selectors, represented by Sharks
0-Y-0: The number of class selectors, attributes selectors, and pseudo-classes, represented by Fish
0-0-Z: The number of type selectors and pseudo-elements, represented by Plankton a la Spongebob
*: The universal selector has no value
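Comparing two specificity values works like comparing version numbers: position by position, left to right. A toy illustration (my own, not a real selector parser; the selectors in the comments are hypothetical):

```javascript
// Compare two specificity triples [ids, classes, types].
// Returns 1 if a wins, -1 if b wins, 0 on a tie.
function compareSpecificity(a, b) {
  for (var i = 0; i < 3; i++) {
    if (a[i] !== b[i]) { return a[i] > b[i] ? 1 : -1; }
  }
  return 0;
}

// One shark beats any number of fish:
// "#nav" is [1,0,0]; ".nav .item:hover" is [0,3,0]
console.log(compareSpecificity([1, 0, 0], [0, 3, 0]));  // 1
```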

Download the specificity chart and read the rest of the SpeciFISHity article here

HSLA Color Picker / RGBA Converter

color: hsla(
  328,   /* H */
  100%,  /* S */
  44%,   /* L */
  1.00   /* A */
);
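As a sketch of what the RGBA converter side does, here is the standard HSL-to-RGB math (the function name is mine; alpha passes through unchanged, so it is omitted):

```javascript
// Convert an HSL triple to RGB: h in degrees [0, 360), s and l as percentages.
function hslToRgb(h, s, l) {
  s /= 100; l /= 100;
  var c = (1 - Math.abs(2 * l - 1)) * s;             // chroma
  var x = c * (1 - Math.abs(((h / 60) % 2) - 1));    // intermediate component
  var m = l - c / 2;                                  // lightness offset
  var rgb =
    h < 60  ? [c, x, 0] :
    h < 120 ? [x, c, 0] :
    h < 180 ? [0, c, x] :
    h < 240 ? [0, x, c] :
    h < 300 ? [x, 0, c] : [c, 0, x];
  return rgb.map(function (v) { return Math.round((v + m) * 255); });
}

console.log(hslToRgb(328, 100, 44));  // [224, 0, 120] — the hsla() example above
```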

HSLA Colors