The 6 best new features in npm 5

Last night I attended waffle.js and heard a great talk by Laurie Voss, aka seldo, the CTO of npm, about new features in npm@5. Several things have been upgraded and new features have been added in npm@5, so, if you haven’t upgraded yet, you’re missing out.

Here are a few of the things that were upgraded in npm@5:

1) npm@5 is fast

npm@5 is much faster than previous versions of npm, and as easy to install as:

npm i npm -g

npm@5 is ten times faster than npm@4.
npm@4 is ten times faster than npm@3.
npm@3 is ten times faster than npm@2.
npm@2 is ten times faster than npm@1.

What does this mean?

10 × 10 × 10 × 10 = 10⁴ = 10,000

It means the current version of npm is 10,000 times faster than v1. Yes, I am exaggerating, but just a bit. Believe me, npm@5 is super fast.

2) npm@5 includes package-lock.json by default

Package lock makes npm faster because it doesn’t have to look up all the packages: it knows to use the ones you locked in. In addition, with locked packages, you get the same package versions in production as you do in development. This makes builds more reliable. Gone are the days of trying to figure out a bug only to discover it came from others running different versions of a library.

This does change how updates work. You will no longer automatically get the latest updates. Rather, you now need to explicitly update your packages to semver compatible package versions.
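For illustration, a locked dependency in package-lock.json looks something like this. This is a hand-written sketch, not generated output: the exact fields vary by npm version, and the resolved URL and integrity hash below are placeholders.

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "lockfileVersion": 1,
  "dependencies": {
    "left-pad": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
      "integrity": "sha512-placeholder"
    }
  }
}
```

Because the version, registry URL, and integrity hash are all pinned, every install from this file produces the same tree.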

npx npm-check -u

The above snippet will walk you through each available update, allowing you to accept or decline breaking changes. npx is introduced in #6 below.

3) npm@5 saves by default

The risk of accidentally deleting a package is gone. You no longer have to use --save. If you install a package it will be there. If you don’t want the package anymore, you can delete it.

If you don’t want to save, you can override the default saving with --no-save.

4) Improved cache

Unbeknownst to many, npm has always had a cache — everything downloaded was stored locally — but not a very good one.

The cache improvements in npm@5 include how corrupt caches are handled: cache --clear has been made obsolete because, as of npm@5, a corrupted cache is detected and simply not used.

5) npm@5 works offline

npm@5 auto-detects when you are offline. When offline, npm will install using the local, improved, non-corrupted cache noted in #4 above. You can also force npm to run offline when you are actually online by using --offline. This is handy for less-than-optimal connections, like when you’re whining about airplane, Starbucks, or conference wifi, and for those on metered data plans.

6) npx package runner

The new npx package runner will download, install, execute, and clean up after itself. The command is executed either locally or from a cache, installing any dependencies needed. If a full package specifier is included, npx will use a freshly-installed, temporary version of the package. If no --package option is specified, npx can guess the name of the binary to invoke from the specifier provided.

As an example, if you want to create a React app, simply run:

$ npx create-react-app my-app
$ cd my-app
$ npm start

This will create a working React app.

The npx package runner is probably the most exciting of the six new features. Laurie’s talk covered the syntax for creating working apps for various libraries.

Future feature: cipm

cipm, the continuous integration package manager currently under development, installs npm dependencies from a package lock. Similar in usage to npm install, cipm removes node_modules before beginning the install. It works with npm@5’s lock file. cipm is useful in continuous integration environments that require regular, full installs of apps and that can cache package data in a central cache.

You can take a look at it and play with it, but don’t put it into production. Yet.

Web Performance: Video Optimization

According to HTTPArchive, sites went from an average of 2,135 KB to 3,034 KB in the two years from July 1, 2015, to July 1, 2017. Videos are a major part of that. The average web site’s video weight grew from 204 KB to 729 KB over the same two-year period. The “low-hanging fruit” of performance used to be optimizing images. Now it’s both images and video.

My optimizing video rules:

  1. If possible, omit videos
  2. Compress all videos
  3. Optimize <source> order
  4. Remove audio from muted heroes

If possible, omit videos

The best way to optimize is to remove unneeded content and unneeded requests.

Do you really need a hero video? Do you really need it on the mobile version of your site?

You can use media queries to avoid downloading the #hero-video on narrow screens.

@media screen and (max-width: 650px) { 
  #hero-video { 
      display: none; 
  }
}

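Note that display: none alone does not always stop the browser from requesting the file. A belt-and-suspenders sketch, assuming the same #hero-video id and 650px breakpoint as the CSS above (the file name is a placeholder), is to only attach the video source when the viewport is wide enough:

```javascript
// Mirror the 650px breakpoint from the media query above.
function shouldLoadHeroVideo(viewportWidth) {
  return viewportWidth > 650;
}

// In a browser, attach the hero video's source only on wide screens,
// so narrow screens never request the file at all. The typeof guard
// lets the logic above be exercised outside a page.
if (typeof document !== 'undefined' && shouldLoadHeroVideo(window.innerWidth)) {
  var video = document.getElementById('hero-video');
  var source = document.createElement('source');
  source.src = 'video.webm';
  source.type = 'video/webm';
  video.appendChild(source);
  video.load(); // tell the element to re-scan its sources
}
```

With this approach, narrow screens skip the download entirely instead of merely hiding the element.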
Compress all videos

Most video compression efforts involve comparing adjacent frames within a video and removing details that are the same in the original and subsequent frame. You want to both compress the video and export it to multiple video formats, including WebM, MPEG-4/H.264 and Ogg/Theora.

The software you used to create your video likely includes the ability to optimize the file size. If not, there are several free tools, like FFmpeg, discussed below, that can help encode, decode, convert, and perform other forms of magic.

Optimize <source> order

Order your sources from smallest to largest. For example, given three video compressions at 10 MB, 12 MB, and 13 MB, put the smallest first and the largest last:

<video width="400" height="300" controls="controls">
  <!-- WebM: 10 MB -->
  <source src="video.webm" type="video/webm" />
  <!-- MPEG-4/H.264: 12 MB -->
  <source src="video.mp4" type="video/mp4" />
  <!-- Ogg/Theora: 13 MB -->
  <source src="video.ogv" type="video/ogv" />
</video>

In terms of the order, the browser will download the first video source it understands, so let it hit a smaller one first.  In terms of “smallest”, do make sure that your most compressed video still looks good. There are some compression algorithms that can make your video look like an animated gif. While a 128 Kb video may seem like better user experience than having your users download a 10 MB video, putting a grainy gif-like video behind your content may also negatively impact your brand.

Check current browser support for the video element and the various media types before settling on your formats.

Remove audio from muted heroes

Lastly, if you do have a hero video or other video without audio, remove the audio track from your video file.

<video autoplay="" loop="" muted="true" id="hero-video">
  <source src="banner_video.webm" 
          type='video/webm; codecs="vp8, vorbis"'>
  <source src="web_banner.mp4" type="video/mp4">
</video>

This hero video code, common to many conference websites and corporate home pages, includes a video that is auto-playing, looping, and muted. It contains no controls, so there is no way to hear the audio. The audio is often empty, but it is still present. It is still using up bandwidth. There is no reason to serve the audio along with a video that is always muted. Removing the audio can save 20% of the bandwidth, which is 2 MB if your video is 10 MB.

Depending on your video making software, you may be able to remove the audio during export and compression. If not, there is a free tool called FFmpeg that can do it for you with the following command:

ffmpeg -i original.mp4 -an -c:v copy audioFreeVersion.mp4

FFmpeg bills itself as the “complete, cross-platform solution to record, convert and stream audio and video,” which it pretty much is.

Let’s Encrypt on cPanel

Here is how I set up Let’s Encrypt for all sites on my hosted virtual private server running cPanel.

1. SSH as root

$ ssh -p 22 root@

Your port might also be 2200. Ask your VPS hosting provider.

2. Then run the command:

$ /scripts/install_lets_encrypt_autossl_provider

3. Log into your main control panel, or however you access it (possibly port 2086 if http://).

You should see the following (or something similar) if successful:

Running transaction
  Installing : cpanel-letsencrypt-2.16-3.2.noarch            1/1
  Verifying  : cpanel-letsencrypt-2.16-3.2.noarch            1/1

4. Under SSL/TLS you’ll find “Manage AutoSSL”
Under “providers”, you’ll see “Let’s Encrypt”. That’s a new option that was created by running the command as root.

Select “Let’s Encrypt”. Then agree to their terms of service and create a new registration with Let’s Encrypt if necessary. Under the “managed users” tab you can enable / disable AutoSSL by account.

5. Now, under the control panel of each account, under SECURITY > SSL/TLS, under “Install and Manage SSL for your site (HTTPS)”, if you select “Manage SSL Sites”, you’ll see the Let’s Encrypt  cert.

Note: If you had a self signed certificate (which you don’t want), delete the cert in the individual account. Click the “run AutoSSL for all users” button as root under “Manage Auto SSL”. When you refresh the individual user, the correct cert should be there.

6. Yay. All your accounts now have an SSL Cert. You still need to redirect all of your http:// traffic to https://. In the .htaccess add the following:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

Now all http:// requests to those accounts redirect to their https:// equivalents,

and one of these days this blog will do the same.

W3C Performance Specifications

Here are some of the W3C’s web performance specifications:

  • High Resolution Time (Level 3)

The DOMHighResTimeStamp type, the now() method, and the timeOrigin attribute of the Performance interface resolve issues with monotonically increasing time values with sub-millisecond resolution.

  • Performance Timeline (Level 2)

    Extends definition of the Performance interface, exposes PerformanceEntry in Web Workers and adds support for the PerformanceObserver interface.

  • Resource Timing (Level 3)

    Defines the PerformanceResourceTiming interface providing timing information related to resources in a document.
    Supported in all browsers except Safari and Opera Mini, starting with IE10

  • User Timing (Level 2)

    Extends Performance interface with PerformanceMark and PerformanceMeasure.
    Supported in all browsers except Safari and Opera Mini, starting with IE10

  • Beacon API

    Defines a beacon API which can “guarantee” asynchronous and non-blocking delivery of data, while minimizing resource contention with other time-critical operations.
    Not supported in IE, Safari or Opera Mini. Support started with Edge 14
    navigator.sendBeacon() on MDN

  • Preload

    Defines preload for resources which need to be fetched as early as possible, without being immediately processed and executed. Preloaded resources can be specified via declarative markup, the Link HTTP header, or scheduled with JS.

  • Cooperative Scheduling of Background Tasks

    Adds the requestIdleCallback method on the Window object, which enables the browser to schedule a callback when it would otherwise be idle, along with the associated cancelIdleCallback and timeRemaining methods.

Capturing Captions from Youtube Videos

Note: This works for pre-captioned videos, not videos auto-captioned by Youtube.

There are many tutorials on how to download the caption files created by Youtube if you own the video, but I was unable to find a way to download the captions of videos I don’t own. There’s probably an easy way to do it, but since I couldn’t find it, creating a JavaScript function to do it via the console took less effort.

Here’s the code (I did it three ways depending on how you like to code your JS)

Constructor method:

function CaptionCollector () {
  var that = this;
  this.captions = '';
  var nowShowing = '';

  this.collect = function () {
    var currentCaption;
    try {
      currentCaption = document.getElementsByClassName("captions-text")[0].innerText;
    } catch (e) {
      currentCaption = null;
    }

    if (currentCaption && nowShowing != currentCaption) {
      nowShowing = currentCaption;
      that.captions += ' ' + nowShowing;
    }

    setTimeout(that.collect, 300);
  };

  this.collect();
}

var foo = new CaptionCollector();

Print the caption with foo.captions. Of course you can use anything instead of “foo”.

Here’s a version using JS object notation:

var captionCollector = {
    captions: '',
    nowShowing: '',

    collect: function () {
      var currentCaption;
      try {
        currentCaption = document.getElementsByClassName("captions-text")[0].innerText;
      } catch (e) {
        currentCaption = null;
      }

      // Reference the object by name, since setTimeout calls collect
      // without a `this` binding.
      if (currentCaption && captionCollector.nowShowing != currentCaption) {
        captionCollector.nowShowing = currentCaption;
        captionCollector.captions += ' ' + captionCollector.nowShowing;
      }

      setTimeout(captionCollector.collect, 300);
    }
};

captionCollector.collect();

With this version, you print the collected captions with captionCollector.captions

Or you can use the anonymous function method with a single global variable:

    var ___captions = '';
    var ___nowShowing = '';

    function getCaption() {
        var currentCaption;
        try {
          currentCaption = document.getElementsByClassName("captions-text")[0].innerText;
        } catch (e) {
          currentCaption = null;
        }

        if (currentCaption && ___nowShowing != currentCaption) {
          ___nowShowing = currentCaption;
          ___captions += ' ' + ___nowShowing;
        }

        setTimeout(getCaption, 300);
    }

    getCaption();


With this version, you print the collected captions with the global variable ___captions

Because it uses the classname of the caption box Youtube uses for videos, this only works on Youtube. Alter the classname for other video services.
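If you want to adapt the idea, here is a sketch that factors the duplicate-check out into a small collector whose class name is a parameter. makeCaptionCollector and its method names are my own, not part of the versions above:

```javascript
function makeCaptionCollector(className) {
  var captions = '';
  var nowShowing = '';
  var api = {
    // Record one reading of the caption box; repeats of the caption
    // still on screen are ignored, like the nowShowing checks above.
    read: function (text) {
      if (text && text !== nowShowing) {
        nowShowing = text;
        captions += ' ' + text;
      }
    },
    // Poll the page every 300ms, like the console versions above.
    start: function () {
      var el = document.getElementsByClassName(className)[0];
      api.read(el ? el.innerText : null);
      setTimeout(api.start, 300);
    },
    transcript: function () { return captions.trim(); }
  };
  return api;
}
```

On YouTube you would run makeCaptionCollector('captions-text').start() in the console; for another video service, pass that service’s caption class name instead.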

You do have to play the whole video to capture all the captions. With settings, you can play the video at twice the speed.

Clear the console when the video ends. Print the transcript to the console. Select all. Copy. You’re good to go.

TWELP: Twitter Help

Note: Updated September 15 to add Firefox support.

Sometimes Twitter gets things wrong. Very, very, wrong. A few “features” that I think are bugs include Twitter Moments, promoted tweets, and the “X liked” tweets cluttering your stream.

To this end, I created a little bookmarklet called “TWELP”.

<a href="javascript:(function($){function kill(){$('.promoted-tweet, .Icon--heartBadge').closest('').css('display','none');$('.js-moments-tab, .DismissibleModule').css('display','none');setTimeout(kill, 1000);}kill();})(window.jQuery);">TWELP</a>

Let me rewrite that for you in a version that’s easy to read, but won’t work to copy and paste:

    (function ($) {
        function kill() {
            $('.promoted-tweet, .Icon--heartBadge').closest('').css('display', 'none');
            $('.js-moments-tab, .DismissibleModule').css('display', 'none');

            setTimeout(kill, 1000);
        }
        kill();
    })(window.jQuery);

The bookmarklet creates a kill function that:

  1. hides promoted tweets by finding the parent tweet containing a promoted-tweet child class
  2. hides any “liked” tweets that contain the heart icon, including uninteresting tweets in your stream such as the fact that your friend Jane liked a tweet of a picture of her acquaintance Joe, who you are not following, eating an oyster. Seriously, who the fuck cares? It also hides the “people who liked your tweet” feature in your notifications. Not sure if that is a feature or a bug.
  3. hides the “Moments” tab by hiding the tab that has the  js-moments-tab class
  4. hides promoted modules that I hate like “In Case You Missed It” and “Who to follow”
  5. Calls itself once per second so if you scroll, it will continue killing those annoying tweets mentioned above.
  6. You have to pass window.jQuery to $ because Firefox defines its own $. (Thanks to @Potch for that tidbit)

TWELP – You can drag this link to your bookmarks bar, and click TWELP bookmarklet whenever you load Twitter. It kills the “Moments” tab, all ads, and removes the “X liked” tweets.

or, you can wrap your own.