Enovate (https://www.enovate.co.uk/)
Latest news from Essex based web design company Enovate Design, as well as commentary on responsive web design and other website design related topics.

Enovate accepted into the Craft Partner network
Published: Wed, 22 Aug 2018 23:00:00 +0000
https://www.enovate.co.uk/blog/2018/08/23/enovate-accepted-into-the-craft-partner-network

Craft has been our "go to" CMS since 2013, and in that time we've designed and built over 53 Craft CMS projects. We've employed Craft in a variety of guises, from the standard approach of powering a small brochure-style website right through to bringing Craft's deft content management to a larger web application project via an API integration. More recently, we've utilised the first-party Craft Commerce plugin to build highly bespoke e-commerce experiences.

Craft CMS Service Partner

But where Craft really shines across all of these projects is that, no matter the brief, it is always flexible and adaptable, so we can design and build a solution without feeling like we're bending something to fit the requirements. That's really important: it means we can stand back, completely confident that we can continue to grow and build upon our projects for years to come, rather than feeling like we're trying to keep an array of plates constantly spinning!

We're really proud of the projects we've delivered using Craft CMS since 2013, so being accepted into the Craft Partner network further solidifies our commitment to Craft CMS and helps to demonstrate the depth of our hard-earned experience and knowledge in Craft.

Atomic deployment with Deployer
Published: Mon, 23 Jul 2018 13:30:00 +0000
https://www.enovate.co.uk/blog/2018/07/23/atomic-deployment-with-deployer

With the arrival of the much-awaited Craft CMS 3 in April this year, which is heavily integrated with Composer, it was evident that we needed to update our approach to deploying our projects.

Until now, our approach essentially boiled down to running a git pull on the production server instances, and whilst it wasn't perfect it did serve us well. Granted, a git pull does take a few seconds to complete, during which time the website's files may be inconsistent. This was somewhat mitigated by our use of Varnish Cache, which is cleared after each deployment has succeeded, but even so there was still scope to improve things.

With Composer-based projects such as Craft CMS 3 and Laravel, a “git pull” no longer suffices: we also have to run a “composer install” to pull any new or updated dependencies into the vendor folder, a process that likely takes minutes rather than seconds to complete. Therefore we needed to look at an atomic deployment solution.

I will politely sidestep the argument around whether the vendor folder should in fact be committed into Git. I think that really depends on the project: for some of our larger projects the case may be stronger, but for our smaller projects any benefits seem negligible in my opinion.

Atomic deployment takes its name from database terminology: essentially, either the deployment executes completely or not at all, and a failed deployment does not leave your project in an inconsistent state. In practice this means a different folder structure for your project on the web servers. Rather than updating the currently running files in place, a brand new folder is created where the new files are placed, vendor folders are updated and any other tasks are completed. Only once everything has succeeded is this new folder symlinked to the folder your web server (Nginx, Apache, etc.) is running from. So however long a “composer install” takes to run, the files that the web servers are using are always in a consistent state.
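As a rough illustration of the release-folder-plus-symlink layout described above (the paths and file contents here are purely illustrative, not our actual setup):

```shell
#!/bin/sh
# Minimal sketch of an atomic deploy: build the release in its own
# folder, then swap a single "current" symlink once it is complete.
set -e

DEPLOY_PATH=$(mktemp -d)   # stand-in for e.g. /websites/example.com
RELEASE="$DEPLOY_PATH/releases/$(date +%Y%m%d%H%M%S)"

# Build the new release in isolation (stand-in for the git clone,
# "composer install", asset uploads, etc.)
mkdir -p "$RELEASE"
echo "v2" > "$RELEASE/index.html"

# The swap: updating a symlink is effectively instantaneous, so the
# web server never serves a half-deployed set of files.
ln -sfn "$RELEASE" "$DEPLOY_PATH/current"

cat "$DEPLOY_PATH/current/index.html"
```

The web server's document root points at `current`, so each deploy only ever changes which complete release that link resolves to.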

We wanted to find a solution that we could use across the board for our PHP projects: Craft CMS sites (versions 2 and 3), Laravel web apps, and even WordPress, Magento and any bespoke/legacy sites.

Of course, there are plenty of software-as-a-service tools that fill this void, and I’m sure they do an excellent job, but across a large number of projects the costs would soon become quite eye-watering.

Initially we had a look at a Node.js tool called ShipIt. It looked good, but in the end we selected Deployer as it seemed to have more recipes for the sorts of projects we wanted to deploy. One slight caveat to our production environments is that we use autoscaling at AWS, so at the point a deployment actually happens we need to query AWS to ascertain how many servers are running and their addresses. We already have code that accomplishes this in both PHP and Node.js, but it seemed clearer how that could be achieved in Deployer than in ShipIt, and from my investigations there appeared to be a more active community around Deployer too.

So we set about getting Deployer integrated into a Craft CMS 3 project and a Laravel project, and on the whole this was a very smooth operation. The example recipes it provided were very useful and it didn’t take long until we had parallel deployments working. A parallel deployment means the deployment is executed simultaneously across all server instances, again a plus for our load-balanced production environments.

The only slight hiccup we found related to our Nginx config, which meant that PHP-FPM's OPcache was not cleared when the new deploy folder was symlinked. With a quick change to the FastCGI config settings, as described in this Server Fault post, the issue was soon resolved and PHP-FPM's OPcache was cleared with each deploy.
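For reference, the change in question boils down to handing PHP-FPM the resolved path of the symlinked document root rather than the symlink itself, so each new release is cached under its own real path. A minimal sketch of the relevant FastCGI parameters (your config will differ):

```nginx
# Use the symlink target, not the symlink, when passing the script
# path to PHP-FPM, so a fresh release is seen as fresh files.
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT   $realpath_root;
```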

To help others get started with using Deployer, here are some example configurations for both Craft CMS 3 and Laravel projects:

Deployer recipe for Craft CMS 3 projects

namespace Deployer;

require 'recipe/common.php';

// Project name
set('application', 'enovate.co.uk');

// Project repository
set('repository', 'git@githosting.com:enovatedesign/project.git');

// Shared files/dirs between deploys (values below are typical examples)
set('shared_files', [
    '.env',
]);
set('shared_dirs', [
    'storage',
]);

// Writable dirs by web server
set('writable_dirs', [
    'storage',
]);

// Set the worker process user
set('http_user', 'worker');

// Set the default deploy environment to production
set('default_stage', 'production');

// Disable multiplexing
set('ssh_multiplexing', false);

// Tasks

// Upload build assets
task('upload', function () {
    upload(__DIR__ . "/public/assets/", '{{release_path}}/public/assets/');
    //upload(__DIR__ . "/public/service-worker.js", '{{release_path}}/public/service-worker.js');
});

desc('Execute migrations');
task('craft:migrate', function () {
    run('{{release_path}}/craft migrate/up');
});

// Hosts

// Production Server(s)
host('', '', '')
    ->stage('production')
    ->set('deploy_path', '/websites/{{application}}')
    ->set('branch', 'master');

// Staging Server
host('')
    ->stage('staging')
    ->set('deploy_path', '/websites/{{application}}')
    ->set('branch', 'develop');

// Group tasks

desc('Deploy your project');
task('deploy', [
    // Task list based on Deployer's standard common recipe
    'deploy:info',
    'deploy:prepare',
    'deploy:lock',
    'deploy:release',
    'deploy:update_code',
    'deploy:shared',
    'deploy:writable',
    'deploy:vendors',
    'upload', // Custom task to upload build assets
    'deploy:symlink',
    'deploy:unlock',
    'cleanup',
    'success',
]);

// [Optional] Run migrations
after('deploy:vendors', 'craft:migrate');

// [Optional] If deploy fails automatically unlock
after('deploy:failed', 'deploy:unlock');

// Run with '--parallel'
// dep deploy --parallel

Deployer recipe for Laravel projects

namespace Deployer;

require 'recipe/common.php';

// Project name
set('application', 'enovate.co.uk');

// Project repository
set('repository', 'git@githosting.com:enovatedesign/project.git');

// Shared files/dirs between deploys (values below are typical examples)
set('shared_files', [
    '.env',
]);
set('shared_dirs', [
    'storage',
]);

// Writable dirs by web server
set('writable_dirs', [
    'bootstrap/cache',
    'storage',
]);

// Set Laravel version
set('laravel_version', function () {
    $result = run('{{bin/php}} {{release_path}}/artisan --version');
    preg_match_all('/(\d+\.?)+/', $result, $matches);
    $version = $matches[0][0] ?? 5.5;
    return $version;
});

// Set the worker process user
set('http_user', 'worker');

// Set the default deploy environment to production
set('default_stage', 'production');

// Disable multiplexing
set('ssh_multiplexing', false);

// Helper Tasks

desc('Disable maintenance mode');
task('artisan:up', function () {
    $output = run('if [ -f {{deploy_path}}/current/artisan ]; then {{bin/php}} {{deploy_path}}/current/artisan up; fi');
    writeln('<info>' . $output . '</info>');
});

desc('Enable maintenance mode');
task('artisan:down', function () {
    $output = run('if [ -f {{deploy_path}}/current/artisan ]; then {{bin/php}} {{deploy_path}}/current/artisan down; fi');
    writeln('<info>' . $output . '</info>');
});

desc('Execute artisan migrate');
task('artisan:migrate', function () {
    run('{{bin/php}} {{release_path}}/artisan migrate --force');
});

desc('Execute artisan migrate:fresh');
task('artisan:migrate:fresh', function () {
    run('{{bin/php}} {{release_path}}/artisan migrate:fresh --force');
});

desc('Execute artisan migrate:rollback');
task('artisan:migrate:rollback', function () {
    $output = run('{{bin/php}} {{release_path}}/artisan migrate:rollback --force');
    writeln('<info>' . $output . '</info>');
});

desc('Execute artisan migrate:status');
task('artisan:migrate:status', function () {
    $output = run('{{bin/php}} {{release_path}}/artisan migrate:status');
    writeln('<info>' . $output . '</info>');
});

desc('Execute artisan db:seed');
task('artisan:db:seed', function () {
    $output = run('{{bin/php}} {{release_path}}/artisan db:seed --force');
    writeln('<info>' . $output . '</info>');
});

desc('Execute artisan migrate:fresh --seed');
task('artisan:migrate:fresh:seed', function () {
    $output = run('{{bin/php}} {{release_path}}/artisan migrate:fresh --seed');
    writeln('<info>' . $output . '</info>');
});

desc('Execute artisan cache:clear');
task('artisan:cache:clear', function () {
    run('{{bin/php}} {{release_path}}/artisan cache:clear');
});

desc('Execute artisan config:cache');
task('artisan:config:cache', function () {
    run('{{bin/php}} {{release_path}}/artisan config:cache');
});

desc('Execute artisan route:cache');
task('artisan:route:cache', function () {
    run('{{bin/php}} {{release_path}}/artisan route:cache');
});

desc('Execute artisan view:clear');
task('artisan:view:clear', function () {
    run('{{bin/php}} {{release_path}}/artisan view:clear');
});

desc('Execute artisan optimize');
task('artisan:optimize', function () {
    // "artisan optimize" was deprecated in Laravel 5.5, so only run it
    // on earlier versions.
    $deprecatedVersion = 5.5;
    $currentVersion = get('laravel_version');
    if (version_compare($currentVersion, $deprecatedVersion, '<')) {
        run('{{bin/php}} {{release_path}}/artisan optimize');
    }
});

desc('Execute artisan queue:restart');
task('artisan:queue:restart', function () {
    run('{{bin/php}} {{release_path}}/artisan queue:restart');
});

desc('Execute artisan storage:link');
task('artisan:storage:link', function () {
    // storage:link was introduced in Laravel 5.3.
    $needsVersion = 5.3;
    $currentVersion = get('laravel_version');
    if (version_compare($currentVersion, $needsVersion, '>=')) {
        run('{{bin/php}} {{release_path}}/artisan storage:link');
    }
});

/**
 * Task deploy:public_disk supports the public disk.
 * To run this task automatically, add the line below to your deploy.php file:
 *     before('deploy:symlink', 'deploy:public_disk');
 * @see https://laravel.com/docs/master/filesystem#the-public-disk
 */
desc('Make symlink for public disk');
task('deploy:public_disk', function () {
    // Remove from source.
    run('if [ -d $(echo {{release_path}}/public/storage) ]; then rm -rf {{release_path}}/public/storage; fi');
    // Create shared dir if it does not exist.
    run('mkdir -p {{deploy_path}}/shared/storage/app/public');
    // Symlink shared dir to release dir.
    run('{{bin/symlink}} {{deploy_path}}/shared/storage/app/public {{release_path}}/public/storage');
});

// Tasks

// Upload build assets
task('upload', function () {
    upload(__DIR__ . "/public/js/", '{{release_path}}/public/js/');
    upload(__DIR__ . "/public/css/", '{{release_path}}/public/css/');
    upload(__DIR__ . "/public/mix-manifest.json", '{{release_path}}/public/mix-manifest.json');
    //upload(__DIR__ . "/public/service-worker.js", '{{release_path}}/public/service-worker.js');
});

// Hosts

// Production Server(s)
host('', '', '')
    ->stage('production')
    ->set('deploy_path', '/sites/{{application}}')
    ->set('branch', 'master');

// Development/Staging Server
// Note: Overrides Composer options to include development dependencies
host('')
    ->stage('staging')
    ->set('deploy_path', '/sites/{{application}}')
    ->set('branch', 'develop')
    ->set('composer_options', '{{composer_action}} --verbose --prefer-dist --no-progress --no-interaction --optimize-autoloader');

// Group tasks

desc('Deploy your project');
task('deploy', [
    // Task list based on Deployer's standard Laravel recipe
    'deploy:info',
    'deploy:prepare',
    'deploy:lock',
    'deploy:release',
    'deploy:update_code',
    'deploy:shared',
    'deploy:writable',
    'deploy:vendors',
    'upload', // Custom task to upload build assets
    'artisan:storage:link',
    'artisan:view:clear',
    'artisan:cache:clear',
    'artisan:config:cache',
    'artisan:optimize',
    'deploy:symlink',
    'deploy:unlock',
    'cleanup',
    'success',
]);

// [Optional] Run migrations
after('deploy:vendors', 'artisan:migrate');

// [Optional] If deploy fails automatically unlock
after('deploy:failed', 'deploy:unlock');

// [Optional] Symlink the public disk.
//before('deploy:symlink', 'deploy:public_disk');

// Run with '--parallel'
// dep deploy --parallel

Josh's first week
Published: Thu, 21 Jun 2018 23:00:00 +0000
https://www.enovate.co.uk/blog/2018/06/22/josh-lambs-first-week

It was early in my third year of university when I started looking for a web design agency to join and further develop my skills. I was browsing local web design agencies when I stumbled across a job advertisement for a ‘Junior Front-end Web Developer’ at Enovate. Curious, I extended my browsing to their website, and it was obvious that they shared the same passion and enthusiasm for web design and development as I do. An interview and a team meeting later, much to my excitement, an offer was extended to me.

My first week as a Front-end Web Developer

My starting date was 12th June 2018. After finishing university two weeks prior, I thought I would be looking forward to having a break, but I found myself more excited to start work, which probably seems unusual to most.

On my first day everyone was very warm and welcoming. A plan was drawn up to help me transition into the company which was very helpful. Throughout the day I spoke to every member of the team, listening to their role within the company and getting advice from each one of them. For lunch, the team took me out to a restaurant which was very kind and a great way to get to know one another.

At the start of the week, as I was introduced to each team member, I was also introduced to the technologies the company uses and how they are implemented in the workflow. Coming from university, the company workflow was of great interest to me, as I was keen to know how web development projects are completed on a larger scale in a team environment.

After familiarising myself with the company workflow, which I must admit took a while to get used to, I was introduced to a project and got the opportunity to start working on it. Towards the end of the week I was creating responsive static templates using Bootstrap 4 from the designs passed along by Dan. Throughout the process Jamie gave me advice on how to improve my work, helping me work more efficiently and produce better code.

I thoroughly enjoyed my first week, and I am looking forward to further developing my knowledge of front-end web development and to continuing to enjoy my time here.

Geocurve goes live
Published: Thu, 21 Jun 2018 17:00:00 +0000
https://www.enovate.co.uk/blog/2018/06/21/geocurve-goes-live

Geocurve are a specialist surveying company, combining traditional surveying with modern techniques such as mobile mapping, photogrammetry and Virtual Reality surveying.

Geocurve approached us in dire need of a new website. Their existing site was slow, outdated, and not representative of a company that leads their field; all points that we needed to address with the commission of a new website.

As with all our projects, we began by producing and completing the specification, through which we gained an understanding of any ideas, sketches or designs the client had, including the colour schemes and fonts to be used throughout the site. We then collaborated with the client during the design process, and once the designs had been approved it was time to turn them into a working website.

Our client did not have any special requirements in terms of functionality, so we saw this as a perfect opportunity to launch our very first website built with version 3 of Craft CMS. Craft 3 is reported to be around 3x faster than Craft 2 and includes a complete rebuild of the CMS’s core functionality, so we were very excited to get our teeth stuck into it.

As well as being the first site to leave the shop built using Craft 3, this project is also the first to benefit from our new atomic deployment process. In short, this automatically runs a deployment to the live site when certain criteria have been met, for example once the code changes have been published and approved.

We’re very happy with the final result, so why not take a look for yourself? You can also read more about our atomic deployment process in our recent blog post.

Josh joins Enovate
Published: Mon, 11 Jun 2018 23:00:00 +0000
https://www.enovate.co.uk/blog/2018/06/12/josh-lamb-joins-enovate

Josh has joined Enovate as a Junior Front-end Developer after graduating from Southampton Solent University with a 2:1 honours degree in Web Design and Development.

Josh was attracted to the role at Enovate after researching web design agencies in Essex and felt that Enovate would be an excellent place to begin his career.

It was obvious that Enovate shared the same passion and enthusiasm for web design and development as I do.

We had a record number of applicants for the Junior Front-end Developer position, many of an excellent standard, but we selected Josh for his depth of existing knowledge: he was able to speak comfortably about many web technologies, which assured us of his passion for web design and development, something that is core to our company culture.

Josh will be taken under the wing of Jamie, our current Front-end Developer, creating additional capacity to respond to the growing demand for our website design and development services.

In his final year at University Josh completed a project which explored and implemented technologies such as server-side scripting with PHP and full-text document searching with Elasticsearch. The aim of the project was to enable students to create long lasting friendships by matching students with common interests.

When Josh is out of the office, he loves to catch up with friends, be it over a drink at the pub or other activities such as rock climbing or going to gigs. When he's not catching up with friends he likes to watch movies and mess about with new coding technologies, yes even outside of the office.

My first experiences with Affinity Designer
Published: Tue, 05 Jun 2018 12:00:00 +0000
https://www.enovate.co.uk/blog/2018/06/05/my-first-experiences-with-affinity-designer

As the senior member of staff on the team (in terms of age, not rank), I've been designing websites for a long, long time. Much has changed over that time in terms of what the sites I create look like, but one thing has remained constant: Adobe Photoshop. Every site design I've ever created began life as a blank canvas in Photoshop. This has always struck me as weird.

Photoshop, as the name clearly suggests, is a piece of photo editing software aimed squarely at professional photographers, allowing them to modify, tweak and improve their digital images. Following all the fuss made over doctored images in newspapers and magazines, the term "Photoshopped" even entered the public's consciousness some time ago, making the humble software package something of a pantomime villain wielded by unscrupulous image editors.

So, you might well ask (and I often do), what has Photoshop got to do with web design? Not much, to be honest, other than that by using the tools within it, mostly designed for the photographers it's aimed at, it is possible to create a website design, in the same way it's entirely possible to recreate the Mona Lisa in Microsoft Paint. But why would you? Because there's nothing else... until now.

I've always been frustrated by the lack of a dedicated software package specifically for web designers (please, no-one mention Macromedia/Adobe Fireworks) catering to their unique needs, so I was intrigued and excited when I heard about Affinity Designer by Serif.

Now, Affinity Designer's sister product, Affinity Photo, had been making waves over on Apple devices, winning countless awards and rave reviews from industry critics and users alike, so the pedigree was certainly good, meaning I had high hopes even before I'd made a purchase. Speaking of purchase, another attractive aspect of Affinity Designer is the pricing model. Unlike Adobe's expensive subscription plans, I can buy Designer outright and receive regular software updates with bug fixes and meaningful enhancements, all for the meagre sum of £48.99 (or £38.99 if it's on a 20% off sale, as it often is). So far you may be thinking it all sounds a little too good to be true...

After installing Designer and getting over the shock of change that an old dog like me dreads, I was pleasantly surprised. The overall interface is easy on the eye, and the tools and palettes bear a close resemblance to their Photoshop equivalents, so I didn't feel as though I was learning everything from scratch. The fact it's all vector based means you can achieve super-high levels of detail and control in a design, a nice change from always working with pixels in Photoshop.

I won't detail all the features and functions available in Affinity Designer, you can get all that on the Affinity website, but what I will do is list a few of the headline items that have already made a significant difference to the way I work, and why I won't be going back to Photoshop any time soon.


Symbols

This is so, so good, and probably top of my "features most interested in" list. A symbol can be a single layer or a complex group of many layers, but once converted into a symbol it becomes an object that can be dragged and dropped into any of the other artboards (more on these later) within the file you are working on.

I cannot tell you how useful this is. My website designs often contain cards, panels, patterns, whatever you want to call them, that contain perhaps a background layer that everything sits on and then a heading, some summary text, an image perhaps and maybe a call to action link. Throw in a button or two, some icons and some design and typography to make it all look nice and "on brand" and there you have it.

Now, say I want to repeat this element multiple times on a page, and also elsewhere on the other page designs that make up the site as a whole. This is where Affinity Designer beats Photoshop hands down. I can add symbols wherever I like, and when I edit one, every instance updates to show my changes, which could be many, many instances across lots of artboards.

No more duplicating layer groups over and over or copying them from one file to another. Symbols retain their original properties by being synchronised and if I make a change to one of them that same change is made in all instances of that particular symbol. If I don't want that to happen I simply switch off synchronisation in the version I want to make different to the parent symbol, make my change or changes, and then switch synchronisation back on. It's a genius feature and saves tonnes of time.


Personas

Affinity Designer contains what it calls personas. These allow you to view your artwork in either pixel or vector mode, and you can switch between the two on the same file; you don't have to commit to one or the other from the outset.

Working in the vector persona is a dream come true - everything is razor sharp, images, text, icons, shapes, you name it you can zoom to 1,000,000% on it. It's incredible and it even helps when exporting artwork from Affinity Designer to send to clients because I can create a PDF file containing my artwork which keeps everything in vector and ensures the file looks beautiful when the recipient reviews it.

But let's not forget the humble pixel; it's still useful to occasionally work at pixel level, but for now I'm loving designing in vector.


Artboards

When I worked in Photoshop I'd create a PSD for a project's homepage design, another for the services page, one for the news landing page, one for the blog post page and so on. Without good discipline and organisation it's easy to end up with a mountain of PSD files, which can become confusing not only for me but also for colleagues who might have to jump into the artwork to grab a particular asset when I'm not around.

The solution? Artboards. In Affinity Designer I begin with a single artboard in which I create a page design. When I need to do another page layout I create another artboard within the same file and so on until I have a single file that contains all the necessary designs for a project.

As designs are created and passed to the client for feedback I create additional artboards beneath the original artboard to incorporate their changes until everything has been approved and signed-off. At the end of the process I have a single file that shows how each page has evolved and contains all the master artwork for a project.

As if that wasn't enough, another great thing about artboards is that they allow you to spot those little inconsistencies that can crop up when you work on each page design in isolation: an image with a stroke where no other images have it, a heading that isn't the right text style, a dashed divider when the format is solid, etc. With artboards you can quickly zoom out, take a bird's-eye view of every design in the project and marvel at the accuracy and consistency you've achieved, or make a change where you haven't.


Font Awesome

Like many designers I use Font Awesome a lot. Probably too much, if I'm completely honest, but it's so useful and saves me creating custom icons. And, being a font file, our developers can easily use the icons in my designs when they start writing code, with just a smattering of Sass.

Previously, when designing with Photoshop, I'd have to pick the icons I wanted from the Character Map and add them to my PSD as a text layer, which is both tedious and time consuming. Affinity Designer, however, lets me use the entire Font Awesome collection as an asset within the application itself, and I can even search for icons by name: if I need a social media icon I just type "Facebook" and there it is. No more headache-inducing searching for an obscure icon!

Text Styles

The developers here love this feature because they can see exactly what text settings I'm using in a design and can easily translate them into code during the development process. I love it because it allows me to quickly and easily setup the appearance of the text within a project and apply it consistently across multiple artboards.

By setting up text styles within an Affinity document I can define and manage headings, lead text, body text, links, whatever I want. I can base each style on a base style, where I might set the font face to use and perhaps a master colour, and then make changes to that for each use case.

With text styles, if I make a change to a style it gets applied wherever it's used. Say I've got red H1 headings used across 12 artboards and the client wants to make them blue I just change the colour in the H1 text style and bam! it gets applied to all my artboards. I don't have to open up 12 separate files and update the headings one-by-one.


Alignment

This might seem quite trivial, but lining everything up in a design can be a big time drain in Photoshop: constantly measuring gaps and whitespace with the marquee tool and guides. Affinity Designer gives me the tools I really need as a web designer to get everything just right.

Maybe I've got an area of space that's 1600px wide and I have six service cards to display evenly across that space. In Photoshop I'd have to get the calculator out and start dividing 1600 by 6, allowing for a right-hand margin between each card (but not after the last one! rookie error!) and then position each card appropriately. This is more maths than design, and I got an E grade in AS-level Maths, so it's not why I became a web designer!

Affinity Designer does it differently. Using the alignment tool, I put my six cards (these will be symbols, if you read the symbols section above) anywhere in that 1600px-wide space, make sure the first one is at the start of the area and the last one exactly where I want it, select them all and choose to align horizontally within the selection bounds. The result is six evenly spaced cards across my 1600px-wide space. So easy and so fast.
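To make the saving concrete, here's the sum Affinity's alignment tool is effectively doing for you (the 220px card width is just an assumed figure for illustration):

```shell
# Six cards of equal width spread evenly across a 1600px area:
# the gap is whatever space remains, split across the five gutters.
width=1600
cards=6
card_w=220

gap=$(( (width - cards * card_w) / (cards - 1) ))
echo "gap between cards: ${gap}px"

# Left edge of the last card: five card-plus-gap steps from the start.
echo "last card starts at: $(( (cards - 1) * (card_w + gap) ))px"
```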

So far I've used Affinity Designer on a handful of client projects, and I won't be going back to Photoshop in a hurry. Every time I work with Affinity I discover some new, time-saving feature that makes my life easier and helps me deliver better work, faster. It's what I, and probably many other web designers, have been waiting for, and my New Year's resolution for 2018 is to become something of an Affinity Designer guru, so I'll be sure to update this post with the next batch of treats and treasures I come across.

Progressive Web Apps for Desktop
Published: Tue, 05 Jun 2018 07:23:00 +0000
https://www.enovate.co.uk/blog/2018/06/05/progressive-web-apps-for-desktop

For a quick primer on what a Progressive Web App or PWA is and what all the fuss is about please read my blog post from October 2017 which introduces Progressive Web Apps.

Much of the early conversation around Progressive Web Apps (PWAs) focused on mobile, where PWAs have narrowed the gap between web and native apps. This has been achieved by bringing features such as offline capability, access to device sensors and improved performance to the Web Platform, and now it’s time to deliver these same benefits to that other place where many of us spend plenty of our time: the desktop.

When I first blogged about PWAs back in October 2017 I was excited by their potential not only on mobile but on desktop too. At that time Google barely made any mention of PWAs on desktop; instead it was Microsoft who were already demonstrating how they envisaged PWAs featuring in the Microsoft Store, discovered automatically by Bing as it crawled the web. More recently Microsoft have delivered on that promise, with the first batch of PWAs arriving in the Microsoft Store for those on Windows 10 build 1803 (aka Redstone 4).

It's great news that Microsoft is going all in on PWAs, bringing their capabilities to the vast audience of half a billion devices running Windows 10. For web developers this is an exciting time, as it lowers the bar for entry into the Microsoft Store, and with Microsoft's example hopefully others such as the Google Play Store and Apple's App Store will soon follow suit. It's probably best not to hold your breath regarding PWAs arriving in the App Store, but I live in hope!

It would be quite a revolution to be able to utilise the Web Platform to deliver a native-app-like experience across mobile and desktop devices from a single codebase, in a way that doesn’t require additional dependencies such as Cordova or Electron.

PWAs present a great opportunity for startups and small-to-medium sized businesses, who can utilise a PWA to iterate their product faster and serve their customers on whatever device or platform they prefer to use. With PWAs on desktop, as we’ve seen on mobile, the line between a PWA and a native app will blur and become even less perceptible to users.

PWAs are proving to be a welcome driving force for the ongoing progress of the Web Platform and a catalyst for a new level of integration between operating systems and the Web that presents enormous potential to leverage existing skills in web technologies to deliver a new wave of apps and experiences for users.

Google Assistant can make calls for you https://www.enovate.co.uk/blog/2018/05/11/google-assistant-can-make-calls-for-you Fri, 11 May 2018 08:49:00 +0000 https://www.enovate.co.uk/blog/2018/05/11/google-assistant-can-make-calls-for-you

With the unrelenting progress and advances of technology it can be hard to be impressed these days, when we have robotic dogs that can run and cars that can drive themselves. But Google's demonstration of their Google Assistant holding a telephone call in such a realistic manner as to be indistinguishable from an actual human being was nothing short of remarkable.

In the demonstration the Google Assistant calls a hair salon to book an appointment and is able to handle the call perfectly, with spot-on timing and responses to the member of staff at the salon. The Google Assistant even uses speech disfluencies such as "erm" and "um" to make the call even more realistic.

In the next example the Google Assistant calls a restaurant to book a table, and the call was even more complex as the staff member had a strong accent and the line was poor. I even struggled to follow the conversation at times, but the Google Assistant took it in its stride and was able to "think" quickly and ask if the restaurant would be busy at the time it wished to book, as the restaurant would not take a booking for under 5 guests.

Google CEO Sundar Pichai revealed that the Google Duplex technology has been in development for many years, and that Google intend to roll it out in the coming weeks and months. They see its potential beyond booking your haircut or meal out via Google Assistant: Duplex could also make calls on behalf of Google to small businesses to find out their holiday hours, then update Google's business listings with that information, saving small businesses from repeated calls from customers checking whether they are open or when they are closing.

Despite the outstanding technical accomplishment, Google Duplex has not had an entirely warm reception. Some argue that a technology that allows computers to mimic a real human phone call so realistically is deceptive, and maybe they have a point.

But Google Duplex undeniably has amazing potential. Consider how it could revolutionise call centres, where it might handle many of the more routine calls from customers and speed up call handling by eliminating the operator's human-computer interaction. Perhaps, at least for now, the investment in AI needed to achieve this would be too great for a single company, but no doubt that will change as Google commodifies the technology.

Fanatic Aquatic Design goes live https://www.enovate.co.uk/blog/2018/04/10/fanatic-aquatic-design-goes-live Tue, 10 Apr 2018 14:23:00 +0000 https://www.enovate.co.uk/blog/2018/04/10/fanatic-aquatic-design-goes-live

Enovate have developed a brand new, responsive website for our client Fanatic Aquatic Design, who have over twenty years' experience in the aquatic industry and specialise in the installation and maintenance of tropical and marine aquarium systems.

Fanatic Aquatic came across our website and got in contact, outlining what they wanted their website to look like, the functionality it should contain and how they wanted their new website to portray them as a brand. They also shared the business goals they were hoping to achieve with a modern, responsive website.

The process began with the production of the project specification, from which we gained a solid understanding of the client's design ideas, including the colour schemes and fonts to be used for the content throughout the site.

The design then iterated back and forth between ourselves and the client: we sent over a revised design, received feedback, and repeated the process until final approval was given, at which point the developers started writing the code to bring the designs to life.

Throughout the project, the client had access to the development site, allowing them to watch the project transform over time. This also gave the client the opportunity to provide feedback in the early stages of development, and to test the site on any device, such as mobile phones, tablets, laptops and desktop PCs.

After receiving all the content from the client, we began the process of putting that content into the CMS. As with all our responsive website design projects, our CMS of choice was Craft CMS.

Our clients received a training session, which brought them up-to-speed with the CMS, and allowed them to make their own content adjustments, straight to the live site.

Both Enovate and our client are happy with the responsive website and we hope it serves them well for many years to come, and helps generate a steady flow of business enquiries and new projects.

Why not take a look at it yourself, and if you’re considering a similar project we'd love to hear from you.

Essex Cares Ltd (ECL) goes live https://www.enovate.co.uk/blog/2018/03/29/essex-cares-ltd-ecl-goes-live Thu, 29 Mar 2018 15:30:00 +0000 https://www.enovate.co.uk/blog/2018/03/29/essex-cares-ltd-ecl-goes-live

ECL approached Enovate, and a number of other companies, with a tender opportunity to redevelop their website. The basic requirement was to replace the existing content management system (CMS) with something new and modern but there were also a wide range of additional features and enhancements needed.

We submitted a response to the tender and after meeting the project team at ECL we were delighted to be awarded the contract.

After documenting and confirming the project requirements in a statement of work we began the planning and design stages of the project. We worked very closely with ECL until the draft designs they’d supplied were evolved and enhanced into the final, approved site page designs. During these early stages we also considered SEO best practices, ECL’s existing Google search engine ranking and Google Analytics data, which helped guide us when making decisions related to design and content strategy.

With the page designs all approved, it was now the turn of the development team to get the new site up-and-running. As with all our responsive website design projects, the content management system we used was Craft CMS. Its flexibility and ease-of-use for clients made it the best choice for a project of this scale and complexity.

During the development stage of the project, we gave the ECL team access to a secure development site as soon as we could - the development site is effectively a work-in-progress site that the client can access - which allowed them to follow our progress as the website was being built. This always proves to be a valuable asset for both parties because it allows the client to supply valuable feedback during the build which can be addressed before the new website is launched.

The finished website has a number of custom features, including a fully bespoke Location Finder. This allows website visitors to find ECL locations in their area, providing matches based on the location and service criteria they have supplied. Another interesting feature is the custom Contact Us page. This has been designed and developed in such a way to help visitors find the exact contact information they need to resolve their query quickly rather than just supplying a single, generic email address or telephone number.

Another unique requirement from ECL was to give their Local Business Managers (approximately 30 users) Craft CMS accounts with restricted permissions so that they could only make content changes to certain areas of the website. Thanks to the flexibility of Craft CMS, we were able to accommodate this.
The positive working relationship developed between ourselves and ECL ensured the project ran smoothly and we are elated with the end result, which was delivered on time and within budget. ECL are a fantastic team to work with and we are looking forward to completing phase two of the project and incorporating the new features that will be coming to the site very soon!

Why not take a look at the site yourself, and if you're considering a similar project we'd love to hear from you.

Google Analytics: My experience with their courses https://www.enovate.co.uk/blog/2018/03/15/my-experience-with-google-analytics-courses Thu, 15 Mar 2018 11:00:00 +0000 https://www.enovate.co.uk/blog/2018/03/15/my-experience-with-google-analytics-courses

Google Analytics is vital both for training and for the workplace, and it also helped me work towards my Digital Marketing Level 3 Apprenticeship. There are multiple online courses you can choose from, but Google Analytics for Beginners was the first one I completed out of my list of three.

Second and third on my list were Google AdWords and HootSuite. Both of these are highly relevant to digital marketing, as they give you insights into the business itself and the background knowledge you need when it comes to publishing advertisements and improving income. But for now, let’s look at the Google Analytics courses.

I decided to start gently and chose the beginners option for Google Analytics, hoping to tackle the Advanced Google Analytics course in the future. The units included in the Beginners course are:

  • Unit 1 - Introducing Google Analytics
  • Unit 2 - The Google Analytics Layout
  • Unit 3 - Basic Reporting
  • Unit 4 - Basic Campaign and Conversion Tracking

I worked my way through these units without any problems. Everything was simple, clear and easy to take on board, and so I felt comfortable going forward.

Google Analytics Academy for Beginners and Advanced.

It was then time to tackle Advanced Google Analytics. Looking at the course, I noticed it included the same number of units but each unit contained more lessons than the Beginners course. There was much more information to process, but it was well worth it to gain the qualification in the industry.

The Advanced course units are:

  • Unit 1 - Data Collection and Processing
  • Unit 2 - Setting Up Data Collection and Configuration
  • Unit 3 - Advanced Analysis Tools and Techniques
  • Unit 4 - Advanced Marketing Tools

Both courses have deepened my knowledge of the industry. As an apprentice, you want to increase your skills as much as possible and keep on learning throughout the whole process, and I have to say both these courses helped me achieve that.

Please see the links below for the Google Analytics Beginner and Advanced courses, as well as Hootsuite and the Future Learn Digital Marketing course (not run by Google).

Unfortunately, the Google AdWords course is only accessible to companies or individuals associated with Google Partners, which is useful for digital marketing firms but sadly not for general access. So I have linked below to the free courses that are targeted towards everyone:

How you can learn on the courses

There are two ways you can learn throughout the Google Analytics courses:

1. Video sources

Each lesson is split into a number of videos from Google Analytics experts, with the useful option to change the speed of the video either faster or slower. I found it helpful to watch the videos slightly slower to gain as much knowledge as possible.

2. Transcripts

If you prefer to copy the key parts of the notes down, you have access to the transcripts of the videos, with images relevant to the information included.

I personally did both, watching the videos first and then writing notes from the transcripts. Some people might find it easier to just watch the videos with the option for subtitles, or simply to copy the transcript notes, whichever works best for you.

Luckily for Beginners and Advanced students, Google Analytics doesn’t have a final exam for you to pass. Each unit has an end-of-unit assessment to go over all the knowledge you’ve just learnt - when you have passed the assessments for all the units, you have passed the whole course.

The certification runs out 12 months after your pass date, so you need to retake the course every year to keep your certification current.

So what was next?

After you have completed both courses, it is then time to move on to the Google Analytics IQ Assessment. Unlike the two courses, this is a 90-minute assessment exam with a pass mark of 80%.

The Google Analytics IQ Assessment substitutes for one of the exams in my training for the Digital Marketing Level 3 apprenticeship, as it covers content learnt both in college and in the workplace.

The good news: if you fail, then you are able to retake the exam in 24 hours. To keep your certification up to date, you need to retake and pass the assessment every 18 months.

Thursday 22nd February was the big day I went to college to complete the IQ assessment with my fellow trainees. It’s safe to say it was a nerve-racking experience, but we had a morning packed full of revision and mock tests in Google Analytics Beginner and Google Analytics Advanced to refresh our memories and give us an idea of what the questions would look like in the exam.

It is important to note that with 70 questions in 90 minutes, you have on average roughly 1 min 17 secs to answer each question, and once you have pressed submit for an answer that’s it, you cannot go back. So make sure you read each question carefully and are 100% sure the answer you have chosen is the one you want to commit to.

Any exam is nerve-racking, no matter how much revision you have packed in. However, I felt less nervous about this exam because I use Google Analytics most days to check data such as Bounce Rate, Average Session Duration, Users and Sessions on the site during a particular time period.

It also helped that I had set up Custom Alerts and Custom Reports, not only to give me a brief outlook on the business but also for my own benefit. I was able to really find my way around Google Analytics, so I had a better chance of answering questions correctly on whatever subject appeared. I was over the moon, to say the least, that I passed the exam first time.

Google Analytics IQ Assessment
The page you should see after you have logged in to Google's Academy for Ads

How did I revise?

Some of you might be wondering if I have any tips to pass this exam first time, especially as it is a combination of Google Analytics Beginners, Google Analytics Advanced and other questions you have never seen before. I will say this exam is harder than the unit assessments, so be prepared and revise.

Personally, I revise in the classic way, using note taking. I have notebooks full of notes from all different areas of digital marketing, and I find these handy to look back on when I need to.

There are other ways that might work for you, like mind maps, flashcards, watching videos without taking notes, or simply reading big chunks of text, but none of them work for me. I watch a lot of videos but I always write as many notes as I can. A lot of sites have the option to slow down videos, or you can always pause them, so you can make notes at any pace you want.

Reading my notes over and over again helps the knowledge to stick in my brain, and I have shortened them to only the most vital information, to make it easier to learn. I feel the trick is to not over-revise, otherwise your brain could become burnt out with trying to store too much knowledge at once. Even just one hour a night should help you gain the knowledge effectively whilst not over-cramming, as this might stress you out more.

I also Google searched study guides for the IQ exam and there are definitely some helpful ones out there! Study guides are always good to look at, and will usually have a mixture of information and questions to get your teeth into before the exam.

Here are some links for more information:

Want to know more about what a digital marketing agency does?

Check out our case studies to see how we help people and businesses to connect and grow.

Progressive Web Apps Roadshow (E-commerce Edition) https://www.enovate.co.uk/blog/2018/03/05/progressive-web-apps-roadshow-e-commerce-edition Mon, 05 Mar 2018 16:09:00 +0000 https://www.enovate.co.uk/blog/2018/03/05/progressive-web-apps-roadshow-e-commerce-edition Introduction

Firstly, a confession: I was due to attend the PWA Roadshow at Google’s London offices on March 1st 2018, but the trains from Chelmsford were in utter disarray thanks to a deluge of snow courtesy of a cold weather front known as “The Beast from the East”. So I made the reluctant decision to recreate the experience in the comfort (and warmth) of my own home, sadly without Google’s awesome catering and the other attendees! This was possible thanks to the talks being available on YouTube and the codelabs being available online too. So I can’t comment on what it was like to attend the day, as I wasn’t able to!

But what I was able to do was go through the talks at my own pace and take ample notes, the end result being this “live” blog capturing what I consider to be the most important points that were covered.

Progressive Web Apps: What, Why and How?

  • The number of global users on mobile surpassed desktop in 2014 (source: comScore)
  • Mobile users spend 13% of their time on the mobile web versus 87% in apps
  • Users spend 78% of their time in their top 3 apps
  • The average user installs zero apps per month
  • Mobile users visit over 100 websites every month

What’s so great about native apps?

  • They behave predictably
  • They can be added to the home screen
  • They start quickly
  • They use push notifications to keep users returning
  • They work offline
  • They sync in the background
  • They have access to device sensors such as the camera and microphone

What are the drawbacks of native apps?

  • Limited reach, as a different version is needed for each device platform

Meanwhile, the Web Platform:

  • Is safer
  • Has a permissions model that is more respectful of user privacy
  • Has far greater reach due to frictionless URLs

Progressive Web Apps (PWAs) combine the capabilities of native apps with the broad reach of the Web Platform. In essence, PWAs aim to radically improve the end-to-end user experience.

This is achieved in four key areas, PWAs should be:

  • Fast - PWAs need to be fast as 53% of users abandon a website that doesn’t load within 3 seconds (source: DoubleClick, September 2016).
  • Integrated - Users need to be able to add PWAs to their home screen and PWAs should integrate with the platform/browser through APIs such as the Media Playback API, Payment Request API and Media Session API.
  • Reliable - PWAs should work offline and be able to handle poor connectivity.
  • Engaging - PWAs should use push notifications to drive user engagement and web push to stay up-to-date.

Twitter has had great success with their own PWA, called Twitter Lite, which launched in the latter part of 2017. Twitter Lite now has more active users than any other Twitter client and is the smallest Twitter client by download size:

  • iOS: 214MB!
  • Android: 24MB
  • PWA: 0.6MB

The Twitter Lite PWA was a great success delivering:

  • 65% increase in page views per session
  • 75% increase in Tweets sent

Other companies have had similar success with their own PWAs including: Air Berlin, The Weather Company, Lancôme, Lyft, OLX, Expedia, Air France, Tui, Trivago, Forbes, CNET, CNN, The Washington Post, The Guardian, The Financial Times, The Independent, Nikkei, Nivea, Rakuten, Alibaba, Pinterest, NBA and OLA Cabs to name a few.

PWAs are particularly important for emerging markets where native apps are:

  • Costly to download
  • Not always supported by older devices
  • Slow on older hardware

Integrated Experiences

66% of online purchases on mobile are on the web rather than native apps, but currently mobile conversion is a whopping 66% lower than on desktop devices. This is because checkout forms on mobile are:

  • Very manual
  • Tedious to complete
  • Inconsistent from one site to another
  • Slow

PWAs can change this dramatically by:

  • Using autocomplete attributes, which generally make the checkout process 33% faster.
  • Using the Payment Request API to eliminate checkout forms for users and standardise payment collection.

The Payment Request API allows developers to request payment information and more with a single API call that returns a PaymentResponse object, which is then used to collect the payment. The Payment Request API is supported in Chrome, with Microsoft and Firefox actively working on their implementations.
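
As a rough sketch (with a hypothetical order and made-up amounts), a Payment Request call looks something like this. The method data and details are plain objects built up front, and the request is only attempted where the API actually exists:

```javascript
// Hypothetical basket: method data and payment details are plain objects.
const methodData = [{
  supportedMethods: 'basic-card', // card payments collected via the browser
}];

const details = {
  displayItems: [
    { label: 'Subtotal', amount: { currency: 'GBP', value: '20.00' } },
    { label: 'Delivery', amount: { currency: 'GBP', value: '3.99' } },
  ],
  total: { label: 'Total', amount: { currency: 'GBP', value: '23.99' } },
};

// Only attempt to show the native payment sheet where the API is available.
async function collectPayment() {
  if (typeof PaymentRequest === 'undefined') return null; // unsupported browser
  const request = new PaymentRequest(methodData, details);
  const response = await request.show(); // opens the browser's payment UI
  await response.complete('success');    // close the sheet once processed
  return response;                       // a PaymentResponse object
}
```

In an unsupported browser `collectPayment()` simply returns `null`, so the site can fall back to its regular checkout form.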

Reliable Experiences

Google Chrome's default "No internet connection" dinosaur image
Don't show the Dinosaur!

PWAs should work offline and be able to cope with poor connectivity.

60% of global mobile connections are 2G, so mobile experiences need to take into account slow or non-existent connectivity.

The Service Worker API makes it possible for PWAs to be reliable even when the network isn't.

With a Service Worker, the first request to a PWA, along with the assets for that page, comes from the network as normal. The Service Worker only comes into effect from the second request onwards (with one exception: calling skipWaiting(), which activates the Service Worker immediately).

Service workers do not consume system resources until they are woken up to handle an event e.g. a push notification.

Service Workers act as a proxy between the PWA and the network and enable developers to implement different caching strategies for each request within the app.  These caching strategies include:

  • Cache first, fallback to the network - The Service Worker will attempt to retrieve the request response from its cache first before going to the network
  • Network first, fallback to cache - The Service Worker will attempt to retrieve the request response from the network before falling back to the cache. This can be problematic for slow connections as it can take time for the network request to fail.
  • Generic fallback - The Service Worker would try the cache first, then the network and then fallback to a generic cached response, such as “you are offline so we’re unable to retrieve that article right now, how about reading these articles:”, which would list articles from the user’s cache.
  • Cache only - The Service Worker would only try the user’s cache for the request response.
  • Network only - The Service Worker would only use the network for the request response.
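
As a sketch of how the first two strategies look in code: here the network call (fetchFn) and cache are passed in as parameters so the idea stands on its own, but in a real Service Worker you would call these from a fetch event handler with the Cache API and fetch() (and clone() each Response before putting it in the cache):

```javascript
// Cache first, fallback to the network.
async function cacheFirst(request, cache, fetchFn) {
  const cached = await cache.match(request);
  if (cached) return cached;               // serve from cache when we have a copy
  const response = await fetchFn(request); // otherwise go to the network
  await cache.put(request, response);      // and keep a copy for next time
  return response;
}

// Network first, fallback to cache.
async function networkFirst(request, cache, fetchFn) {
  try {
    const response = await fetchFn(request);
    await cache.put(request, response);    // refresh the cache on success
    return response;
  } catch (err) {
    return cache.match(request);           // offline: fall back to the cache
  }
}
```

Inside a Service Worker the wiring would be roughly `self.addEventListener('fetch', event => event.respondWith(networkFirst(event.request, cache, fetch)))`.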

Engaging Experiences

Developers should browse the Material Design specification and iOS Human Interface Guidelines to increase familiarity with common user-interface components and their interactions.

Push notifications drive user engagement but can be a source of annoyance, so they require considered use; after all, we are interrupting the user.

Push notifications should be:

  • Timely - It matters right now
  • Relevant - Make it personal to your user
  • Precise - Specific information that’s good to know or act upon

A good example of an appropriate scenario for a push notification is notifying a user that an upcoming flight is delayed. The push notification should be relevant, so it’s important to include information such as the flight number, and precise, by including the new departure time.

Sending push notifications from your PWA’s back-end is not that simple, so it’s worth using a Web Push library to accomplish this.

Data properties can be added to the notification options to pass information from the push message through to the notificationclick event, for example to open a particular URL when the notification is clicked by the user. Furthermore, as the push event is handled by a Service Worker, it could also cache the URL a user is likely to access from the push notification before the notification is delivered.
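
A small sketch of that data flow (the helper names below are my own, not part of any API): the URL rides along in the notification's data property and is read back when the notification is clicked.

```javascript
// Build notification options that carry a target URL in their data property.
function buildOptions(body, url) {
  return {
    body,
    data: { url }, // carried through to the notificationclick event
  };
}

// Pull the URL back out of the event in the click handler.
function targetUrl(event) {
  // In a Service Worker, event is the NotificationEvent from 'notificationclick'.
  return event.notification.data.url;
}

// Inside a Service Worker the handler would look roughly like:
// self.addEventListener('notificationclick', (event) => {
//   event.notification.close();
//   event.waitUntil(clients.openWindow(targetUrl(event)));
// });
```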

Secure Experiences

Using HTTPS ensures:

  • Identity - Who you are exchanging information with
  • Confidentiality - Who can read your data
  • Integrity - Who can modify your data

If you need further convincing as to the benefits of swapping to HTTPS, consider that many modern browser APIs are only available over HTTPS such as Service Workers, Push Notifications and Geolocation.

The only drawback of switching to HTTPS is performance, as setting up the secure connection does increase server response times. But this performance impact of HTTPS can be mitigated by introducing some new technologies:

HTTP Strict Transport Security

This is introduced with a simple header added to the server response:

Strict-Transport-Security: max-age=2592000; includeSubDomains

This tells the browser to “only access this website and all its subdomains over HTTPS for the next month”. This improves performance by stopping the browser from first requesting a URL over HTTP only to be redirected to HTTPS, essentially removing a round trip from the negotiation.

Further optimisations such as TLS Session Resumption and TLS False Start help to shave off further round trips during the TLS handshake to set up client-server secure connections.


HTTP/2

HTTP/2 unlocks some dramatic performance improvements for HTTPS.

When Weather.com launched HTTPS there was a 50ms hit for the TLS negotiation, which was more than offset by a ~250ms drop per page view (on supporting devices) when HTTP/2 was launched a month later.

With the introduction of Let’s Encrypt even cost is removed as a factor preventing websites from implementing HTTPS. Other certificate types that are not (yet) available from Let’s Encrypt can be purchased for low cost from providers such as SSLMate.

Tooling for Progressive Web Apps

Google’s Lighthouse tool is a great way to benchmark a PWA, returning scores out of 100 across five key metrics:

  • PWA
  • Performance
  • Accessibility
  • Best Practices
  • SEO (added January 2018 v2.7)

Starting Fast and Staying Fast with AMP and PWAs

For humans:

  • 0.1s feels instant
  • 1s feels natural
  • 10s loses the user’s attention

On the mobile web today:

  • 19s is the average mobile page load time
  • 77% of mobile sites take 10+ seconds to load
  • 214 server requests per mobile web page
  • 50% of requests are ad related

Accelerated Mobile Pages (AMP) aims to solve this by improving page loading times on mobile devices. AMPs are built with three core components:

  • AMP HTML is HTML with some restrictions for reliable performance.
  • The AMP JS library ensures the fast rendering of AMP HTML pages.
  • The Google AMP Cache can be used to serve cached AMP HTML pages.

Combining AMP with PWAs takes advantage of the benefits of both complementary efforts to improve the Web Platform.

Wrap Up

Future Web APIs

The PWA Roadshow only covered a few of the new and upcoming APIs coming to the Web Platform; there are many more and there just isn’t time to cover them all. Here are a few of the more notable ones:

Credentials Management API

This is a standards-based browser API that provides a programmatic interface between a website and the browser for seamless login across devices. It removes friction from user login flows, as users can automatically be signed back in even if their session has expired or if they have saved their credentials on another device. It allows for one-tap sign-in that leverages the native account chooser user interface. The API allows the website or app to store the user’s credentials, either a username and password combination or federated account details, and synchronise them across devices. The API is supported in Google Chrome and Opera today, and Apple has started working on it.
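
As a hedged sketch of the silent sign-in flow via navigator.credentials (the login endpoint mentioned in the comment is hypothetical; the function simply returns null wherever the API is unavailable):

```javascript
// Attempt silent sign-in with a stored credential, if the browser supports it.
async function autoSignIn() {
  const nav = globalThis.navigator;
  if (!nav || !nav.credentials) return null; // unsupported browser/runtime
  const cred = await nav.credentials.get({
    password: true,        // ask for a stored username/password credential
    mediation: 'optional', // sign in without prompting when possible
  });
  if (!cred) return null;  // nothing stored, or the user dismissed the chooser
  // POST cred.id and cred.password to your own /login endpoint here.
  return cred;
}
```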

Web VR

Web VR allows developers to create a fully immersive 3D experience in the browser using a VR headset and a VR capable device. Web VR is supported today in Google Chrome, Firefox, Microsoft Edge and Samsung Browser.

Web Assembly (WASM)

Web Assembly provides a new way to run code such as C++ on the web at near-native speeds. It provides the speed necessary to deliver demanding applications such as an in-browser video editor or running a Unity game at a high frame rate.
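
As a tiny taste, the WebAssembly JavaScript API can instantiate a module directly from bytes. Here is a hand-assembled module (just the classic two-integer add function, not anything you would write by hand in practice) being loaded and called:

```javascript
// A minimal, hand-assembled WebAssembly module exporting add(a, b) -> a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// WebAssembly.instantiate works in browsers and in Node alike.
const ready = WebAssembly.instantiate(bytes)
  .then(({ instance }) => instance.exports.add);
```

In real projects the bytes would come from a compiler toolchain (such as Emscripten for C++) rather than being written out by hand.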


After the talks it’s codelab time, where you get hands-on with some example code to implement some of the ideas and approaches covered in the talks. All of the codelabs are straightforward to follow and produce the desired results without exception.

Completing the codelabs at my own pace rather than within an allotted time allowed me to really dig into the code and do as much background research and experimentation as I wished. From previous experience, the codelabs on the last PWA training day I attended were often more like cut-and-paste races, where the winner was the person who could cut-and-paste the code samples from the codelab into their files fastest, which isn’t conducive to actually learning and understanding what the code is doing.

Here are links to the codelabs along with the source code to download:

Little Acorns to Mighty Oaks goes live https://www.enovate.co.uk/blog/2018/01/12/little-acorns-to-mighty-oaks-goes-live Fri, 12 Jan 2018 00:00:00 +0000 https://www.enovate.co.uk/blog/2018/01/12/little-acorns-to-mighty-oaks-goes-live

As many of our clients do, Little Acorns found our website and made contact with us through it to discuss their requirements, help us better understand the project, and outline the business goals they planned to accomplish by redesigning the site.

Whilst the client had a working website, they felt it was time for an updated, responsive site to increase traffic, conversions and purchases made on the site. The functionality was also to be improved, with an easier checkout system for customers and an emailing system giving clear communication at each step of a transaction.

Once the project specification was complete, we began working on the design of the website, which involved iteration back and forth between ourselves and the client to gain a firm understanding of how they wanted the website to portray their business message and support their business goals.

Designs for the major pages of the site were created and developed, following the client's feedback at each stage until they felt ready to approve them. Once the designs were all approved, the next stage was for the developers to start writing the code to bring them to life.

Because this was an e-commerce build, the client wanted to be heavily involved and take responsibility for adding the product photographs and writing their descriptions. To enable this, the client had a training session on navigating Craft CMS, a flexible content management system (CMS) that we at Enovate use for the vast majority of our projects. Craft CMS allows both us and the client to edit and add content wherever necessary.

During the development of the project, the client had access to our development site. This allowed them to see the project as it was being built and to provide feedback in the early stages. It also allowed testing of the site to be run on any device, such as mobile phones, tablets, laptops and desktop PCs.

After launch, both ourselves and the client were delighted to see the end result, not only the design and functionality but also the number of transactions already being made on the site within the first few hours.

The site generated a month's revenue on launch day.

It was a true pleasure working with the team at Little Acorns to Mighty Oaks and we hope the website continues to grow into a highly successful e-commerce store.

Why not take a look at it yourself, and if you’re considering a similar project we'd love to hear from you.

Web Platform APIs https://www.enovate.co.uk/blog/2017/12/13/web-platform-apis Wed, 13 Dec 2017 22:53:00 +0000 https://www.enovate.co.uk/blog/2017/12/13/web-platform-apis Introduction

The vast and growing array of browser APIs enables web developers to build richer user experiences such as Progressive Web Apps, which are made possible by the collective use of several new browser APIs.

Progressive Web Apps narrow the gap between web and native apps, and establish a set of best practices that can be implemented with universal benefit in almost any web app or website in existence. But in some ways a Progressive Web App is just scratching the surface of what can be achieved on the modern Web today. As an approach that has such broad application there are lots of occasions where web developers can and should go even further.

Depending on the nature of the website or web app, web developers can take a progressive enhancement approach to utilise some of the more niche APIs arriving in modern browsers with every release. For example, for a web app that streams audio and/or video, the Media Session API will allow you to display custom track/artist information and imagery in the mobile device notification tray, on the lock screen and even on any paired wearable devices. Yes, the Web Platform can do that today!

But keeping up with all the new and shiny browser APIs is no mean feat, so in this blog post I'm going to cover various upcoming Web Platform APIs, along with some not so new and lesser known APIs, that bring exciting new capabilities to the Web Platform and create new possibilities for building richer and more effective web apps and websites.

Web Share API

Status: Unofficial
Support: Available in Chrome

The Web Share API enables websites to invoke the native sharing capabilities of the host platform. On an Android device this opens the normal share dialog, which includes sharing via whatever native apps may be installed. This is music to our ears: for too long we've had to deliver such functionality via the performance hit and clumsy UI of a third-party widget or, worse still, spend time rolling our own. The video below demonstrates the Web Share API in action:
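As a sketch of the shape this takes (the function name and fallback behaviour here are our own, not part of the API):

```javascript
// A minimal share button handler. The feature detection means browsers
// without the Web Share API fall through to your existing approach.
async function shareArticle(title, url) {
  if (typeof navigator === 'undefined' || !navigator.share) {
    return false; // unsupported: fall back to a conventional share widget
  }
  try {
    await navigator.share({ title, url });
    return true;
  } catch (err) {
    return false; // the user dismissed the share dialog, or sharing failed
  }
}

// In a supporting browser:
// shareButton.addEventListener('click', () =>
//   shareArticle(document.title, location.href));
```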

Cache API

Status: Editor's Draft (Service Workers)
Support: Available in Chrome, Opera and Firefox. Edge and Safari have marked the API as 'In Development'.

The Cache API is one of the core technologies behind PWAs and is essentially a key-value store, with the keys being HTTP requests and the values being HTTP responses. The Cache API provides the offline functionality of Progressive Web Apps, as a service worker can match requests to cached responses.
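A small sketch of that key-value idea, usable from a page script as well as a service worker ('article-cache-v1' is a hypothetical cache name):

```javascript
// Store a response in a named cache; cache.add() fetches the URL
// and stores the resulting response under that request.
async function cachePage(url) {
  const cache = await caches.open('article-cache-v1');
  await cache.add(url);
}

// Prefer the cached response; fall back to the network when absent.
async function loadPage(url) {
  const cached = await caches.match(url);
  return cached || fetch(url);
}
```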

Fetch API

Status: Living Standard
Support: Available in Chrome, Opera, Firefox, Edge, Safari and polyfill for other browsers

The Fetch API replaces the old and trusty XMLHttpRequest object, allowing web developers to make network requests and handle the responses. The Fetch API uses Promises, providing a far simpler interface and helping to avoid issues such as callback hell. Again, the Fetch API is used heavily within Progressive Web Apps, where a service worker gracefully handles network requests and their responses.
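The Promise-based style looks like this ('/api/posts' is a hypothetical endpoint used for illustration):

```javascript
// A Promise-based request with the Fetch API.
function loadPosts() {
  return fetch('/api/posts').then(response => {
    if (!response.ok) {
      // fetch() only rejects on network failure, so check HTTP status here.
      throw new Error('HTTP error ' + response.status);
    }
    return response.json();
  });
}

// loadPosts().then(posts => renderPosts(posts)).catch(showError);
```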

Web Workers API

Status: Living Standard
Support: Available in Chrome, Firefox, Opera and Safari

Service workers are a type of web worker. A web worker runs a named JavaScript file in its own thread, with a different global context from the current window object. Web workers are useful as they allow web developers to run scripts in another thread, similar to a background process, which keeps the main thread free to respond to user interaction.
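A sketch of handing a heavy calculation to a worker so the UI stays responsive ('heavy-work.js' is a hypothetical worker script that posts back its result):

```javascript
// Run a calculation in a worker thread and resolve with its result.
function runInWorker(input) {
  return new Promise((resolve, reject) => {
    const worker = new Worker('heavy-work.js');
    worker.onmessage = event => {
      resolve(event.data); // the worker's posted result
      worker.terminate();
    };
    worker.onerror = reject;
    worker.postMessage(input); // kicks off the work off the main thread
  });
}
```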

Service Worker API

Status: Editor's Draft
Support: Available in Chrome, Opera and Firefox. Edge and Safari have marked the API as 'In Development'.

Service workers are a type of web worker that run a JavaScript file in the background, without requiring a web page or user interaction, and act as a proxy between a web app, the browser and the network (when available). In Progressive Web Apps, service workers take advantage of several new Web Platform APIs, such as the Fetch API, Cache API, Push API and Background Sync API, to deliver new capabilities to the Web. Service workers are crucial to Progressive Web Apps and are key to features such as offline access, background sync and push notifications.
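Registering one from a page script is a one-liner plus feature detection ('/sw.js' is a hypothetical service worker file served from the site root):

```javascript
// Register a service worker, quietly doing nothing in unsupported browsers.
function registerServiceWorker() {
  if (typeof navigator === 'undefined' || !('serviceWorker' in navigator)) {
    return Promise.resolve(null); // unsupported: no-op
  }
  return navigator.serviceWorker.register('/sw.js').then(registration => {
    console.log('Service worker registered, scope:', registration.scope);
    return registration;
  });
}
```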

Image Capture API

Status: Editor's Draft (MediaStream Image Capture)
Support: Available in Chrome and Opera

The Image Capture API makes it far easier for web developers to control the device's camera to capture a still image or video, and to adjust hardware camera settings such as zoom, brightness, contrast, ISO and white balance, along with choosing between the front and rear facing cameras where applicable.

Payment Request API

Status: Candidate Recommendation
Support: Available in Chrome and Edge

Much like the Web Share API, the Payment Request API seeks to improve user experience on the Web by standardising the e-commerce checkout flow, while potentially saving the collective effort of many web developers in the process. Rather than users having to become accustomed to the intricacies of each and every e-commerce checkout flow, the Payment Request API allows web developers to hand over part of the checkout to supporting web browsers. This not only saves web developers time, but results in an improved user experience for visitors, who can pre-populate forms with details securely saved within their web browser and use a checkout flow with which they are already familiar.
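A sketch of the flow (the payment method and totals are placeholder values, not a complete payment integration):

```javascript
// Ask the browser to collect payment details via its own payment sheet.
function requestPayment(value) {
  const methods = [{ supportedMethods: 'basic-card' }];
  const details = {
    total: {
      label: 'Order total',
      amount: { currency: 'GBP', value } // e.g. '19.99'
    }
  };
  const request = new PaymentRequest(methods, details);
  // show() opens the browser's payment sheet and resolves with the user's
  // chosen details, which you then pass to your payment processor.
  return request.show().then(response =>
    response.complete('success').then(() => response)
  );
}
```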

Network Information API

Status: Living Document
Support: Available in Chrome, Opera, Samsung Internet

The Network Information API allows web developers to detect the connection type (wifi, ethernet, cellular, etc.) of the user's device as well as the effective connection type (slow-2g, 2g, 3g or 4g). This is useful as it allows us to make decisions such as whether to preload videos on page load, or whether a service worker should cache certain assets, depending on the effective connection type.
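That preload decision might look like this (in the browser you would pass navigator.connection; the policy itself is an illustrative choice):

```javascript
// Decide whether to preload heavy assets from the connection information.
function shouldPreloadVideo(connection) {
  if (!connection) return true;          // API unsupported: assume capable
  if (connection.saveData) return false; // respect the user's data saver
  return connection.effectiveType === '4g';
}

// if (shouldPreloadVideo(navigator.connection)) { video.preload = 'auto'; }
```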

Background Sync API

Status: Draft Community Group Report
Support: Available in Chrome

While service workers allow us to provide a web app experience when offline by serving cached content to the user, what happens when the user wants to send data to our web app while offline? That's where the Background Sync API comes in: it allows us to capture the request the user wants to send to the server and send it once the network connection is restored.
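A page-side sketch: register a sync tag so a service worker can replay the work when connectivity returns ('send-outbox' is a hypothetical tag name, as is the sendOutbox function in the comment):

```javascript
// Queue a background sync; the service worker handles the matching tag.
async function queueOutboxSync() {
  const registration = await navigator.serviceWorker.ready;
  await registration.sync.register('send-outbox');
}

// In the service worker, the matching handler would look like:
// self.addEventListener('sync', event => {
//   if (event.tag === 'send-outbox') event.waitUntil(sendOutbox());
// });
```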

Push API

Status: Editor's Draft
Support: Available in Chrome and Firefox

Building upon service workers, the Push API gives web apps the ability to receive messages pushed to them from a server, regardless of whether the web app is in the foreground, or even currently open, on the device. This is already one of the more popular new APIs in usage terms, as web apps are always keen to find new ways to re-engage users, and push notifications are an effective way of communicating with them. In order to send push notifications, web apps have to request permission from their users via the Permissions API.
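A sketch of the subscription step, run against a service worker registration (the applicationServerKey would be your push server's public key; the function name is our own):

```javascript
// Ask for notification permission and subscribe to push messages.
async function subscribeToPush(registration, applicationServerKey) {
  const permission = await Notification.requestPermission();
  if (permission !== 'granted') return null; // the user declined
  return registration.pushManager.subscribe({
    userVisibleOnly: true, // every received push must show a notification
    applicationServerKey
  });
}
```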

Ambient Light API

Status: Editor's Draft
Support: Available in Edge and Firefox

As the name suggests, this API allows web apps to detect changes in ambient light. Much like how most SatNav systems adapt their display contrast for night-time use, this API enables web apps to respond similarly to changes in ambient light.

Broadcast Channel API

Status: Living Standard
Support: Available in Chrome, Firefox and Opera

The Broadcast Channel API allows simple communication between windows, tabs, frames and iframes from the same origin (domain). Use cases include updating the logged-in or logged-out status of a user across multiple tabs when they log in or log out in one tab. The Broadcast Channel API has similarities to the Channel Messaging API, except that the latter is for one script dispatching messages to one other (one-to-one), whereas the Broadcast Channel API is suitable for dispatching messages to many listeners (one-to-many).
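The log-in/log-out use case above might be sketched like this ('auth' is a hypothetical channel name shared by every tab on the origin):

```javascript
// Open a channel and invoke a callback whenever another tab broadcasts.
function createAuthChannel(onChange) {
  const channel = new BroadcastChannel('auth');
  channel.onmessage = event => onChange(event.data);
  return channel;
}

// On logout in one tab, notify all the others:
// authChannel.postMessage({ loggedIn: false });
```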

Web Audio API

Status: Editor's Draft
Support: Available in Chrome, Edge, Firefox, Opera and Safari

The Web Audio API is far from new, but it has been steadily improved more recently and provides web developers with the ability to control audio on the Web, allowing us to choose audio sources, add effects to audio, create audio visualisations, apply spatial effects (e.g. panning) and more.

Web Animations API

Status: Editor's Draft
Support: Available in Chrome, Firefox, Opera and Samsung Internet

The Web Animations API is a great addition that allows web developers to build animations using JavaScript that render with the same performance as declarative CSS animations. The benefits include faster frame-rate with lower power consumption compared to traditional JavaScript animation which translates to a better user experience on all devices, particularly mobile. This is achieved by empowering developers to “build performant compositor threaded animations using JavaScript”. In addition, the API also allows us to inspect and manipulate running CSS animations, making it far easier to update some state once a series of animations have completed or pause running animations.
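The core of the API is element.animate(), which returns an Animation object you can pause, reverse or await ('#banner' and the keyframes here are illustrative):

```javascript
// Animate an element with keyframes and timing options from JavaScript.
function pulse(element) {
  const animation = element.animate(
    [
      { transform: 'scale(1)', opacity: 1 },
      { transform: 'scale(1.05)', opacity: 0.8 },
      { transform: 'scale(1)', opacity: 1 }
    ],
    { duration: 600, iterations: 3, easing: 'ease-in-out' }
  );
  // animation.finished is a Promise that resolves when the animation ends.
  return animation.finished;
}

// pulse(document.querySelector('#banner')).then(() => console.log('done'));
```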

WebUSB API
Status: Editor's Draft
Support: Available in Chrome and Opera

While full browser support may never arrive, the WebUSB API is still worth a mention, as being able to access USB devices on the Web feels like somewhat of a milestone.

Media Source / Media Source Extensions API

Status: Recommendation
Support: Available in Chrome, Firefox, Edge, IE, Safari and Samsung Internet

Another API that's not new but perhaps not well known amongst web developers. The Media Source Extensions API (MSE) provides functionality enabling plugin-free web-based streaming media. Using MSE, the traditional src attribute of a <video> element can be replaced with a MediaSource object, enabling more advanced media delivery and control than is possible with <video> and <audio> elements alone. Use cases include adaptive streaming, i.e. swapping the bitrate of a video stream in response to changing connection speeds. MSE lays the groundwork for adaptive bitrate streaming clients (using DASH or HLS) to be built on top of it, such as the open source Shaka Player.

Media Recorder API

Status: Working Draft
Support: Available in Chrome, Firefox

The Media Recorder API makes it possible for web apps to easily record and instantly use media from a user's input devices: audio, video or both. This gives web developers the ability to easily build web apps that capture audio or video recordings, another great asset to the Web Platform.
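As a sketch, capturing a short microphone recording as a Blob (the function name and five-second default are our own choices):

```javascript
// Record audio from the microphone for a fixed duration.
async function recordAudioClip(durationMs = 5000) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = event => chunks.push(event.data);

  return new Promise(resolve => {
    recorder.onstop = () => {
      stream.getTracks().forEach(track => track.stop()); // release the mic
      resolve(new Blob(chunks, { type: recorder.mimeType }));
    };
    recorder.start();
    setTimeout(() => recorder.stop(), durationMs);
  });
}

// recordAudioClip().then(blob => audioEl.src = URL.createObjectURL(blob));
```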

WebVR API
Status: Editor's Draft
Support: Available in Edge and Firefox (basic support)

The WebVR API provides a means for web apps to communicate with virtual reality devices such as the head-mounted Oculus Rift or HTC Vive. It enables web developers to build apps that receive position and movement information from the VR device and translate it into movement around a 3D scene, opening the door to virtual product tours, interactive training and immersive games.

Generic Sensor API

Status: Working Draft
Support: Not supported

Data from device sensors is used in many native apps, such as games, augmented reality apps and fitness tracking apps, and whilst it's possible to access some sensor data on the Web already, the Web lacks the broad range of sensor data available to native apps. This is what the Generic Sensor API hopes to address, by exposing sensor devices to the Web Platform in a consistent, performant and easy to use way. At the time of writing the Generic Sensor API has just been released as an origin trial in Google Chrome 63, so it's very new. But it's certainly one to watch as support, and the number of sensor interfaces, will hopefully grow.

Visual Viewport API

Status: Draft Community Group Report
Support: Available in Chrome and coming to other browsers soon

This isn't enormously exciting, but it's worth a mention anyway, as it might come as a surprise to many that there's such a thing as a layout viewport and a visual viewport. When a user pinches and zooms on your page, the visual and layout viewports diverge, and this can cause unpredictable results. The new Visual Viewport API makes it possible for web developers to tame this unpredictability in style! The video below shows both the visual viewport (red border) and the layout viewport (green overlay) and how they can diverge when pinching and zooming.

Media Session API

Status: Editor's Draft
Support: Available in Chrome for Android, coming soon to Chrome, Safari and Firefox

If you have an Android phone you are probably familiar with the notification tray, the place where you see banners spanning the width of the screen showing your latest emails, messages, push notifications, etc. When casting or consuming video or audio, you might also be familiar with seeing playback controls here that allow you to skip the track or see the song, album and artist information. The Media Session API brings the ability to customise the notification tray content and playback controls for media on the Web, enabling web developers to provide an experience deeply integrated into the host platform; so much so that this metadata, artwork and the playback controls can appear on the user's lock screen and even a paired wearable device.
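Supplying that metadata is pleasantly declarative (the track details and artwork URL below are hypothetical placeholders):

```javascript
// Provide metadata and playback handlers for the notification tray
// and lock screen, degrading gracefully where unsupported.
function setUpMediaSession(audio) {
  if (typeof navigator === 'undefined' || !('mediaSession' in navigator)) {
    return; // unsupported browser: the player still works without it
  }
  navigator.mediaSession.metadata = new MediaMetadata({
    title: 'Track title',
    artist: 'Artist name',
    album: 'Album name',
    artwork: [{ src: '/img/cover-512.png', sizes: '512x512', type: 'image/png' }]
  });
  navigator.mediaSession.setActionHandler('play', () => audio.play());
  navigator.mediaSession.setActionHandler('pause', () => audio.pause());
}
```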

Web Bluetooth API

Status: Draft Community Group Report
Support: Available in Google Chrome

This is another one of those milestone-type APIs; I don't think many web developers realise that it's possible today in Chrome to connect your web app to a Bluetooth device! Granted, it's probably not for the faint-hearted, and the specification, if you can even call it that, is very much in flux, but it's pretty incredible to know it's available on the Web right now and improving with each browser release.

Device Memory API

Status: Editor's Draft
Support: Available in Google Chrome

The Device Memory API gives web developers a rough benchmark of the performance and capabilities of a user's device. With a plethora of devices able to access the Web, not all can handle everything web developers can build. Utilising the Device Memory API enables us to deliver a "Lite" experience to devices that would struggle with a website's full experience. We can also augment our website statistics by gathering device memory data where possible, better informing our decision making and allowing testing to match the capabilities of real user devices.
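The "Lite" versus full decision can be as simple as this (in the browser you would pass navigator.deviceMemory; the 2 GB cut-off is an illustrative threshold, not a recommendation):

```javascript
// Choose an experience tier from the device's reported memory in GB.
function chooseExperience(deviceMemoryGB) {
  if (deviceMemoryGB === undefined) return 'full'; // API unsupported
  return deviceMemoryGB <= 2 ? 'lite' : 'full';
}

// const mode = chooseExperience(navigator.deviceMemory);
```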

And there you have it: some bleeding-edge APIs, some existing but lesser known APIs, and some that have simply come a long way. Hopefully this blog post has sparked some ideas for how you can use these new APIs to build experiences that you never thought were possible on the Web.

Plus Risk goes live https://www.enovate.co.uk/blog/2017/11/20/plus-risk-goes-live Mon, 20 Nov 2017 00:00:00 +0000 https://www.enovate.co.uk/blog/2017/11/20/plus-risk-goes-live

That was exactly the situation with the website we designed and built for Plus Risk, a brand new insurance company based in Essex striving to provide 'straightforward insurance in a complex world'.

The client found and made contact with us via our own website, as many of our clients do, and from there a meeting was scheduled so that we could meet, discuss and better understand not just the proposed project but the business as well.

Following the production and completion of the project specification, we began work on the website design. This involves close collaboration with the client to gain a firm understanding of how they want their business to be portrayed and what they want the site to do in terms of business goals.

Designs for the major pages of the site were created and evolved until the client was happy to approve them; they could then be passed to the developers as blueprints for the next step of the project: writing the code.

As with the vast majority of our content management system (CMS) projects, we used Craft CMS as the foundation of the site, mainly for the flexibility it gives us in development but also, just as importantly, for its ease of use for our clients, who are responsible for editing and adding content following a successful launch.

Once the development site was up and running, we provided the client with secure access so that they could follow our progress and supply feedback. It also meant they could test the website on any device, seeing how it adapts and responds to different screen sizes, as any modern, responsive website should.

This particular project really was a joint effort between us and the client and we're delighted with the end result and hope it serves Plus Risk and the team behind it for many years to come. Why not take a look at it yourself and if you're considering a similar project we'd love to hear from you.

Boosting Google traffic to your blog https://www.enovate.co.uk/blog/2017/11/08/boosting-google-traffic-to-your-blog Wed, 08 Nov 2017 13:30:00 +0000 https://www.enovate.co.uk/blog/2017/11/08/boosting-google-traffic-to-your-blog

When a blog author has finally published a new blog post and is sitting back basking in the success of their achievement, one thing is guaranteed: it won't be long until they use Google to try and find their newly published prose, and they may well be disappointed with the result. Perhaps they can't find the new blog post at all, or it's a long way from the prime ranking positions.

When thinking about boosting traffic to your blog from Google, inclusion in Google News is a great place to start, as it delivers over 6 billion clicks to publishers every month. That said, your website needs to be approved for inclusion, and Google is fairly selective, so be realistic: if you're not blogging about newsworthy topics on a regular basis, perhaps Google News isn't where you should focus your efforts. If you think you might have a chance at getting included in Google News, great; their quick start guide is the place to begin.

For the rest of you fear not, there’s still lots you can do to boost your traffic from Google:

1. Blog about popular topics. It's true that in journalism timing is everything, so if you're able to keep your finger on the pulse and write blog posts that tap into events or topics of the moment, that's a great strategy for increasing traffic to your blog; there's little point blogging about topics or events that are only of interest to a narrow audience.

A popular blog post on our website is about implementing Brotli compression in Nginx. "What?!" you may say, but in our field it was quite popular around February this year, exactly when I published the post, and it remained our most popular blog post for a number of weeks thereafter.

2. Make sure you're not doing your content a disservice when it comes to the blog post's title and meta description; these should be carefully considered. Don't leave them to an automated tool. They are the most important few characters you will write, so do take your time.

3. Whilst this won't necessarily have a big impact on Google search, I'd still recommend investing some time to compose an eye-catching Open Graph image (example below); this is the image that displays when your blog post is shared on Twitter, Facebook and LinkedIn. If design isn't your thing, consider employing the skills of a designer for this purpose. We've noticed more retweets and shares when we've put more effort into the design of our Open Graph images, so don't neglect this.

The Open Graph image for our blog post "Why we love Craft CMS"

4. Make sure your blog has an XML sitemap and that it has been submitted to Google's Search Console. This XML file gives Google a list of all the pages in your site so it can quickly identify new additions it needs to index.

5. For particularly time-sensitive blog posts, use the "Fetch as Google" tool from the Google Search Console, which provides an option to "Request Indexing". This is effectively a notification asking the search engine to crawl your new page as soon as possible. This may well be functionality built into your chosen CMS, so it's worth checking; if not, doing it manually should mean new blog posts are indexed in a matter of hours instead of days.

6. Boost your reputation as an author by making sure your blog posts are attributed to you. If you're lucky enough to get blog posts published on other leading sites in your field, that's great, as it should boost your reputation in Google's eyes as an author and in turn increase your rankings.

7. Accelerated Mobile Pages (AMP) is an open-source initiative Google launched in October 2015, with the aim of improving page loading times and reducing the data use of web pages on mobile devices. It's been very successful, with many major news publishers quickly adopting the approach. You can often see AMP-enabled web pages listed in a carousel at the top of the search results, so serving your blog posts via AMP is a surefire way to increase their search engine prominence and drive more traffic.

So there you have it: our top strategies and tips for boosting traffic to your blog from Google and achieving faster entry into Google's search results for new blog posts. I hope you've found it useful.

Graphic Design Essex goes live https://www.enovate.co.uk/blog/2017/11/08/graphic-design-essex-goes-live Wed, 08 Nov 2017 00:00:00 +0000 https://www.enovate.co.uk/blog/2017/11/08/graphic-design-essex-goes-live

The existing Graphic Design Essex site had served us very well, providing a home for our range of graphic design services and portfolio of work for almost eight years - a lifetime in web years (very similar to dog years, before you ask) - and proudly holding the top spot in Google for "Graphic Design Essex" for most of that time. Successful SEO performance aside, it was in desperate need of a complete overhaul to introduce a brand new, responsive design and the integration of a modern content management system in the form of Craft CMS.

I began work on the new site design whenever I had some spare time, so, as with most internal projects in a busy web design agency, it took a little while to come together. Eventually, though, the bold colour scheme and clean lines fell into place and a design we were all happy with emerged, meaning the developers could begin coding.

Building the site in our favourite CMS, Craft CMS, was relatively straightforward and didn’t cause too many headaches for us. After all, we’ve built almost 53 Craft CMS client sites now so there’s not much we haven’t seen when it comes to developing code for Craft.

As the development site came together we began the process of writing content, taking photographs of our graphic design work (using a very professional and fancy-looking lighting and camera set-up) and then populating each page with text, images and stock images where necessary.

As with most website projects, the content can become the hardest task, but with a final burst of dedication and determination we've delivered a site that looks great and contains a wealth of information about our graphic design services and examples of our work. Hopefully it'll serve us for another eight years and perform just as strongly in the search engines as its predecessor.

Van Vynck Craft CMS re-build goes live https://www.enovate.co.uk/blog/2017/10/12/van-vynck-craft-cms-re-build-goes-live Wed, 11 Oct 2017 23:00:00 +0000 https://www.enovate.co.uk/blog/2017/10/12/van-vynck-craft-cms-re-build-goes-live

Enovate first began working with Van Vynck over twelve years ago and in that time we have designed, developed and hosted three versions of their corporate website.

This latest version of the site is the first to use Craft CMS. There were two main reasons for switching from MODx to Craft: the need for an improved editing experience for Van Vynck's content authors, and a desire to boost their search engine rankings by completely replacing the underlying code with search engine friendly markup. After discussing the benefits, Van Vynck were more than keen to relaunch their site on Craft CMS.

Van Vynck were, for the most part, happy with the site’s original design, the only exception being the homepage, which was redesigned and redeveloped to include a carousel and to better display the key information and calls to action. Working closely with the client and communicating regularly helped keep the project on track and ensure expectations were met.

Following the completion of the development work we handled client training, teaching the content editors how to use Craft CMS, and then we worked through an automated content import process, bringing all the text and images from the old site into the new.

After adding the final touches and addressing any bugs that came up in testing, the client was happy with the finished site and we were proud to deliver a fast website that is easy to navigate and runs smoothly on any device.

ScanmarQED goes live https://www.enovate.co.uk/blog/2017/10/11/scanmarqed-goes-live Tue, 10 Oct 2017 23:00:00 +0000 https://www.enovate.co.uk/blog/2017/10/11/scanmarqed-goes-live

ScanmarQED approached Enovate with a requirement for a modern, innovative website that would clearly describe the marketing products and services they offer to clients around the world, as well as integrating with a host of third-party systems they rely on.

After a detailed research and discovery phase we created an initial site design based on our understanding of the client’s requirements which they provided feedback on, specifically what they liked, disliked and what they wanted to change. Following this feedback process we revised the design until we reached a version the client was happy to approve.

On completion of the design and development work, we moved onto delivering the client training, which involved teaching the client how to add content and images to the site, create new pages or remove redundant ones and how the various integrations had been implemented within their Craft CMS installation.

Both Enovate and the client are very happy with, and proud of, the finished site. It achieves our goals of delivering a fast, reliable and responsive website, ensuring it is usable on any device, whether desktop or mobile phone. We hope the new site serves the client well for many years, and we look forward to working with them as the site grows and evolves.

Progressive Web Apps https://www.enovate.co.uk/blog/2017/10/02/progressive-web-apps Mon, 02 Oct 2017 14:30:00 +0000 https://www.enovate.co.uk/blog/2017/10/02/progressive-web-apps

"Progressive Web App" is a term first coined by designer Frances Berriman and Google Chrome engineer Alex Russell back in 2015 to describe web apps that narrow the gap between web and native apps. Progressive Web Apps take advantage of modern web technologies such as service workers and web app manifests to provide a user experience nearer to native apps without the chore of an app store download and install.

PWAs are exciting for us as web developers and web designers, as they have the potential to bring greater demand for our skills and a broader canvas for our work to be consumed. For brands large and small, they present a fantastic opportunity to reduce the costly development and maintenance of mobile apps across multiple platforms, along with managing the legacy of older versions of an application still in existence. Instead, brands can continue to invest in their web apps and take advantage of PWAs to deliver a rich, native-like user experience from the "web platform", which is constantly up-to-date.

But what's the difference between a web app and a progressive web app? Progressive Web Apps have the following characteristics:

  • Progressive - Work for every user, regardless of browser choice because they're built with progressive enhancement as a core tenet.
  • Responsive - Fit any form factor: desktop, mobile, tablet, or forms yet to emerge.
  • Connectivity independent - Service workers allow them to work offline, or on low quality networks.
  • App-like - Feel like an app to the user with app-style interactions and navigation.
  • Fresh - Always up-to-date thanks to the service worker update process.
  • Safe - Served via HTTPS to prevent snooping and ensure content hasn't been tampered with.
  • Discoverable - Are identifiable as "applications" thanks to W3C manifests and service worker registration scope allowing search engines to find them.
  • Re-engageable - Make re-engagement easy through features like push notifications.
  • Installable - Allow users to "keep" apps they find most useful on their home screen without the hassle of an app store.
  • Linkable - Easily shared via a URL and do not require complex installation.

Browser support for Progressive Web Apps is excellent in Google Chrome and Opera, followed by good support in Firefox and Samsung Internet, with Microsoft Edge improving fast. Safari, on the other hand, is notably lacking, perhaps because PWAs could be seen as a threat to Apple's reliance on native apps and the App Store.

Whilst most demos of PWAs focus on the mobile experience PWAs are also set to make an impact on desktop devices by being built into Chrome OS and Windows, launching what feels far more like a desktop application rather than a web experience. This makes PWAs even more exciting as they have the potential to not only consolidate development effort across mobile devices but desktops and beyond, using the universal support of web standards and the advances in the web that PWAs are forging.

Microsoft is taking an interesting approach in Windows, where they speak about the "web platform", meaning the triad of cornerstone web technologies (HTML, CSS and JS), with Edge being one application built upon that platform. Their implementation of PWAs in Windows is not another manifestation of Edge but rather a secure, sandboxed environment for a real application with its own identity, ratings and comments in the Windows Store, which can then be installed and pinned to the start menu and taskbar like any native Windows application.

Microsoft has also indicated plans to crawl the web for PWAs and automatically present quality examples in the Windows Store (source), if the idea takes hold I imagine it wouldn't be long until the Google Play Store follows suit.

Some notable examples of PWAs already out in the wild include the Financial Times, Forbes, Twitter Mobile, Paper Planes and more examples can be found at PWA.rocks.

Whilst the concept of PWAs may seem new it's actually been quite a long journey to get here. The first seeds of PWAs were sown as far back as 1999 when HTML Applications, with .hta file extensions were first introduced into Windows. More recently we’ve seen the Electron, Ionic and Cordova platforms package up applications developed in web-based technologies into forms that mimic native applications.

If you are itching to get started and develop a PWA, there are some good places to begin. First off, Google has a great tutorial on building your first PWA. The "Hello World!" of PWAs is a Hacker News reader app, so HNPWA is a great resource for discovering common approaches and architectures for PWAs that serve that purpose. Many of the JavaScript frameworks of the moment provide tooling to scaffold out the groundwork of a PWA; take a look at Preact CLI and Vue.js. But that's not to say a PWA requires a JavaScript framework: Google has released a JavaScript library called Workbox, which helps to build some of the more complex aspects of PWAs such as service worker caching strategies.

It will be interesting to follow the story of PWAs over the coming months and years. With the likes of Google and Microsoft throwing their weight behind PWAs it's likely we'll see more and more brands turning to PWAs as a viable alternative to native apps and reaping the benefits.