Steve Fenton

Notifications for Web Apps Wed, 05 Aug 2020 05:00:10 +0000 Although it has been abused with an enthusiasm that borders on the insane, there are good reasons to use the Notifications API in your web apps. For example, suppose you write a mail client that allows the user to request notifications for key contacts… if they are browsing your web-based app, they should get notifications.

To pop a notification, we need to do two things. First, we ask for permission. Second, if we have permission, we show a message.

In my opinion, the best time to ask for permission is when you want to pop the first message. When you ask way in advance, it feels creepy. For example, if I browse an article on your blog and get asked for permission to pop notifications as soon as the page loads, that’s creepy.

So, we’ll call out to a tryNotify function when we want to pop a notification and it will handle getting permission if it needs to. If it has the green light, it will call the showNotification function.

The key components within the function are:

  1. Checking that the browser supports Notification as a feature
  2. Using Notification.permission to see if permission has been 'granted' (it can also be 'default', which means not yet set, or 'denied')
  3. Calling Notification.requestPermission() to ask nicely for the user’s go-ahead
  4. Creating a new Notification to show the message… we can use this handle to add click events or track the status of the notification later

Here is a hastily written example that works. You can make this cleaner, but I’ve optimised for comprehension within a blog post where you won’t be navigating easily between lots of functions if I split it up.

// This is the function you call when you want to notify a user
function tryNotify(title, message, link) {

    // Does the notification feature exist?
    if (window.Notification) {

        // If we have permission, let's just show the notification
        if (Notification.permission === 'granted') {
            showNotification(title, message, link);
        }

        // If the permissions haven't yet been set, let's ask for the user's consent
        if (Notification.permission === 'default') {
            Notification.requestPermission().then(function (permission) {
                if (permission === 'granted') {
                    showNotification(title, message, link);
                } else {
                    console.log('Notifications have been rejected!', title, message, link);
                }
            }).catch(function (err) {
                console.log('The permission request failed', err);
            });
        }
    }
}

// This function should only be called by the tryNotify function
function showNotification(title, message, link) {

    // Create the notification with a title, body, and icon
    // The options object with body and icon is optional, but recommended (by me)
    const notify = new Notification(title, {
        body: message,
        icon: 'path-to-icon.png'
    });

    // What to do if the user clicks the notification
    if (link) {
        notify.onclick = function () {
  , '_blank');
        };
    }
}

Web Notification

The notifications feature is reasonably well supported on desktop and even has some support on mobile. In any case, you should design robustly, as if it’s not there, in case either (a) you don’t have the user’s consent for notifications, or (b) they have disabled notifications at the operating system level (e.g. you can disable all notifications in Windows).
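Here is a quick sketch of my own (not from the original post) of what that robust design can look like. The fallbackBanner helper is a hypothetical stand-in for whatever in-page messaging your app already has:

```javascript
// Hypothetical in-page stand-in for the Notifications API; a real app
// might update a status area or toast element instead of logging.
function fallbackBanner(title, message) {
    console.log('In-page fallback: ' + title + ' - ' + message);
}

// Use a notification only when the feature exists AND consent was granted;
// otherwise degrade gracefully to the in-page banner.
function notifyOrFallback(title, message) {
    const supported = typeof Notification !== 'undefined';
    const granted = supported && Notification.permission === 'granted';

    if (granted) {
        new Notification(title, { body: message });
    } else {
        fallbackBanner(title, message);
    }

    return granted; // true only when a real notification was shown
}
```

Remember that even with consent, the operating system may still suppress the notification, so the in-page route is worth keeping around anyway.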

IISExpress.exe Exited With Access Violation Fri, 31 Jul 2020 17:18:24 +0000 I was happily typing away at some ASP.NET Core controllers and views, calling a view component to render out some paging links, when this happened…

The program ‘iisexpress.exe’ has exited with code -1073741819 (0xc0000005) ‘Access violation’.

No exception visible in Visual Studio. No indication of where the error was. Nothing. The Internet supplied links to low-level HTTP.SYS issues, but surely this was something to do with my changes? I removed the call to the paging view component and everything worked. Yes. It was my code.

To cut a long story short, this Access Violation error is caused by a type problem. This may come in several flavours, but my particular example was pretty vanilla and easy to understand.

I have a method that generates a dictionary that can be used on a link, thus:

<a asp-action="index" asp-all-route-data="@Model.GetRouteParameters(page)">Link Text</a>

The method hands back a dictionary with all the route parameters needed to render the page, with an adjustment to the page number. Simple, right? So if my current page is /Stuff/my-id?p=1, I can use this to supply a link to /Stuff/my-id?p=2 or whatever.

The model doesn’t do the work, it just calls something more general that does the work…

public IDictionary<string, string> GetRouteParameters(int p)
{
    return Controls.GetRouteParameters(p);
}

Due to introducing some other use cases, it became necessary in that “Controls” class to make the page optional. That meant that my call chain for the existing links involved an int in this model, but the general purpose code beneath accepted an int?. This coercion into the nullable type was causing the issue. Severe.

The fix was simple (far simpler than finding what needed to be fixed). Use the same type throughout…

// ------------------------------------------------------↧
public IDictionary<string, string> GetRouteParameters(int? p)
{
    return Controls.GetRouteParameters(p);
}

This might help someone in the future if they get this Access Violation crash. Probably me.

Reduce Costs by 12x on Azure Tue, 28 Jul 2020 19:00:24 +0000 I’m in the process of writing a little test app that I’d like to run on Azure to keep an eye on a suite of 1,000 websites. It’s a .NET Core app that replaces a test pack written with JMeter that has been manually “push-button” executed in the past. It means the tests can run continuously with alarms if there’s an issue. Removing manual work is worth some money, but there’s no reason to spend more than you have to, so let’s look at a week of gentle optimisation of costs.

The test app is pretty simple. A data store, a user-interface to add more tests to the pack and to review test runs, and a little robot that actually does all the work.

My first guess for the Azure set up was a serverless SQL database for the data store, an app service for the user-interface, and a web job for the robot. With this all set up, what tools are there to find out if this is a good solution from a cost perspective?

Azure Cost Management

The Azure Portal has a really neat area for cost management, which includes something called Cost Analysis. You’ll find that in the menu as shown below.

Azure Cost Analysis Menu

This is the best place to start as it breaks down the cost per resource and provides a forecast of spending. This screen is able to provide reasonable forecasts after a couple of days of normal operation.

Cost Forecast

My first attempt to save money was to write some basic scheduling to switch off the app service using an Azure logic app. The user-interface wasn’t required out-of-hours. This saved a little bit of money, but with the robot working full time the app service was still the most expensive resource. As it was costing more than a basic Virtual Machine, I decided to shift the robot out of a web job and into a small Virtual Machine. This achieved a bigger saving.

A quick aside… this article is not “using a small virtual machine is cheaper than using a web job”! It depends on what you are doing. This article is “here are tools you can use to find what works for you”.

Park My Cloud

The next cost saving tool is Park My Cloud. It works across a number of providers, including Azure, and provides a simple way to create schedules that automatically run. It also looks at your Virtual Machines and suggests right-sizing fixes too. For my purposes, using one of the standard schedules to power-down the machine out of hours removed around half the cost of the Virtual Machine.

Park My Cloud Scheduling

Park My Cloud is like having an accountant for your cloud spend; they basically pay for themselves by saving you money.


The first chart shows how the cost curve changed as each saving measure was applied.

Actual Spend Chart

Taking “today” as day zero, we can compare the long-term costs before and after the cost saving measures.

Comparison of Different Configurations

The result: after spending a few hours thinking about different ways to power the test app on Azure, it costs 12x less to run.

Little Scripts: Checking Web Page Images Tue, 28 Jul 2020 12:43:07 +0000 This is a note-to-future-self as I just threw together a little script to test images on a web page. Specifically, it highlights:

  • Images that are not lazy loaded
  • Images that are much bigger than their display size

As image sizes aren’t reliable until the image is displayed, you will need to re-run it if your page is updated (i.e. hidden images are displayed or background requests add content).
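Before the full script, here is the size check in isolation (a standalone sketch of mine, not part of the console script) to show how the 20% allowance works:

```javascript
// With checkLimit = 20, sizeAllowance becomes 1.2, so an image is flagged
// when its natural size is more than 20% larger than its displayed size.
const checkLimit = 20;
const sizeAllowance = (1 / 100) * (100 + checkLimit);

function isOversized(naturalWidth, naturalHeight, displayWidth, displayHeight) {
    return naturalWidth > (displayWidth * sizeAllowance)
        || naturalHeight > (displayHeight * sizeAllowance);
}

console.log(isOversized(350, 230, 300, 200));  // false: within the 20% allowance
console.log(isOversized(1200, 800, 300, 200)); // true: four times the display size
```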

Just paste this whole thing into your browser console… and then call checkAllImages() each time you want to re-check.

window.checkAllImages = (function () {

    // We're going to look for images more than 20% bigger than their display...
    let checkLimit = 20;

    let borderTimer = null;
    let borderColor = 'orange';
    let sizeAllowance = 1;
    let badImages = [];
    let messages = [];

    function run() {
        window.clearInterval(borderTimer);
        cleanUp();
        calculateSizeAllowance();
        checkImages();
        logSummary();
        drawAttention();
    }

    function blink() {
        borderColor = (borderColor === 'orange') ? 'aqua' : 'orange';
        badImages.forEach(function (img) {
   = borderColor;
        });
    }

    function cleanUp() {
        let warningElements = Array.from(document.getElementsByClassName('dynamic-warning'));
        for (let warningElement of warningElements) {
            warningElement.parentNode.removeChild(warningElement);
        }
    }

    function calculateSizeAllowance() {
        sizeAllowance = (1 / 100) * (100 + checkLimit);
    }

    function checkImages() {
        console.log('Checking Images');
        badImages = [];
        messages = [];
        Array.from(document.querySelectorAll('img')).forEach(checkImage);
    }

    function checkImage(imageElement) {
        const loadingAttr = imageElement.getAttribute('loading');
        const displayHeight = imageElement.height;
        const displayWidth = imageElement.width;
        const naturalHeight = imageElement.naturalHeight;
        const naturalWidth = imageElement.naturalWidth;

        const isNotLazy = loadingAttr !== 'lazy';
        const isTooWide = (naturalWidth > (displayWidth * sizeAllowance));
        const isTooHigh = (naturalHeight > (displayHeight * sizeAllowance));

        if (isNotLazy || isTooWide || isTooHigh) {
            const message = ((isNotLazy) ? 'Not Lazy Loaded <br />' : '') + 'Shown ' + displayWidth + 'x' + displayHeight + '<br />Natural ' + naturalWidth + 'x' + naturalHeight;
            messages.push(message.replace(/<br \/>/g, ''));

            const text = document.createElement('div');
            text.className = 'dynamic-warning';
            text.innerHTML = message;
   = 'absolute';
   = 'red';
   = 'white';
   = '10000000';
            imageElement.parentNode.insertBefore(text, imageElement);

            let borderWidth = 1;

            if (isNotLazy) borderWidth++;
            if (isTooWide) borderWidth++;
            if (isTooHigh) borderWidth++;

   = (borderWidth * 2) + 'px solid orange';
            badImages.push(imageElement);
        }
    }

    function logSummary() {
        console.log(messages.length + ' problem images found');
        messages.forEach(function (message) { console.log(message); });
    }

    function drawAttention() {
        borderTimer = window.setInterval(blink, 1000);
    }

    return run;
})();


Here’s an example of it running against my home page:

Result of Checking Images

Yikes! I need to go fix those images.

Start and Stop an Azure App Service on a Schedule with Azure Logic Apps Wed, 22 Jul 2020 18:00:05 +0000 Although starting and stopping a web app doesn’t in itself save you a great deal of cash, in situations where you have Web Jobs running and a serverless database, you can effectively run a “business hours” app at a lower cost if you stop it outside of business hours.

Start an App Service Each Week Day

To start our App Service, we’ll create an Azure Logic App with a Recurrence scheduler and a Start Web App step.

Let’s add a logic app called “ServiceStartScheduler”.

To trigger the logic app, add a Recurrence trigger. This can be a bit confusing as the only visible options when you start are “Interval” and “Frequency”. We want to trigger our task at a set time each week day, which we will get to shortly. For now, select an interval of 1 and a frequency of “Week”.

Recurrence trigger with a 1 week frequency

To change this recurrence trigger to fire on specific days at a selected time, use the “Add new parameter” section to add the items “On these days”, “At these hours”, and “At these minutes”.

Recurrence trigger with new parameters selected

You can now fill in the parameters to set your schedule. You can set it to run on certain days and, in our case, set a specific single time to trigger the action. It is possible to set the task to run multiple times on the selected days, but that’s probably overkill for what we’re doing here.

A recurrence trigger for each weekday at 7 AM
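For reference, the trigger configured above corresponds roughly to this fragment of the logic app’s workflow definition, visible in the code view (the exact shape may differ slightly in your subscription):

```json
{
    "triggers": {
        "Recurrence": {
            "type": "Recurrence",
            "recurrence": {
                "frequency": "Week",
                "interval": 1,
                "schedule": {
                    "weekDays": [ "Monday", "Tuesday", "Wednesday", "Thursday", "Friday" ],
                    "hours": [ 7 ],
                    "minutes": [ 0 ]
                }
            }
        }
    }
}
```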

Now it’s time to add a new step. Search for “Azure App Service” and select the “Start Web App” action.

Start Azure App Service Action

You might be prompted to sign in for the next step. Once you’ve done that, use the lists to select the specific App Service you want to start.

Select the app service to start

Now save the logic app and you are done with the “Start Azure App Service” logic app.

Stop an App Service Each Week Day

To stop our App Service, we can simply create an Azure Logic App with a Recurrence scheduler and a Stop Web App Service step.

What’s easier, though, is to clone the one we just made and tweak it. If you view “ServiceStartScheduler” and hit “Clone”, you can enter a new name for a “ServiceStopScheduler”.

Once the operation is complete, visit the new resource and edit it.

Adjust the schedule so it runs at 17:00 each week day.

Delete the “Start web app” step and replace it with a “Stop web app” step instead.

Testing the Logic App

You don’t have to wait a whole day to test your logic app. Open the “ServiceStopScheduler” and hit “Run” to trigger the task immediately.

Run Logic App

You can immediately confirm that the app has in fact stopped by visiting it.

Stopped App

You can now repeat the process to test the “ServiceStartScheduler”. The app should now be running.

The logic app keeps a history of runs, so you can check in on it to make sure your schedule is working as expected.

Software Development Process Does Not Matter Wed, 08 Jul 2020 18:03:25 +0000 Process interests me. Refining and improving a process to make work more joyful and productive matters to me. But, what has become clear over the past thirty years is that in software development, process just doesn’t matter. Seriously. It doesn’t. It might just be because I’m becoming a software punk revolutionary, but I suspect not.

The reason it doesn’t matter is this simple fact: Any process used with good intent works better than any process used with bad intent.

There are two general factors that affect intent. The team and the management. You can deploy quaductionism to simplify this as shown below:

Impact of intent on outcome

Team Intent

What does good intent look like at the team level? The team is self-motivated by the desire to deliver working software. This is their primary driver.

They don’t want to stuff their CV full of the latest buzztech. They aren’t striving for individual advancement or recognition. They are not interested in complexity for its own sake or bragging rights with their industry peers.

Here’s an example. A developer opens the admin screen to test their new feature, and they realise that you can’t keyboard-navigate to the fields. That means users are being excluded based on their input device. That’s not good… so they make the fields keyboard navigable. A keyboard user in Brighton signs in and smiles when they manage to complete the task they wanted to achieve without having to wrestle with a peripheral device.

Management Intent

What does good intent look like at the management level? The managers create a strong vision and operate against the whole system to create a healthy environment for the team.

They don’t want to directly control the team’s decisions. They don’t tell the team how to do their job. They don’t need to be asked for permission when the team wants to do the right thing™.

It’s hard to provide great examples here, as few lines in the workplace are thinner. Concrete examples fail to illustrate the craft of this role, because a manager who obtains code coverage metrics may do so with good or bad intent. A manager who gives feedback or course corrections may do so with good or bad intent. It’s not what you see; it’s the intangible swirling of thoughts invisibly behind the action that reveals the intent. Despite this, if you are a team with good intent, you know when you’re being managed by someone with bad intent. Without doubt.

Warning signs might include demands for the asymptotic, checkbox exercises against the intangible, and demands that only sound reasonable if you are insane (for example, “I want 100% of potential test cases executed”).

Intent in General

Where there is bad intent, we need to follow the breadcrumbs. Often, there is a negative spiral at play. If a team appears to be operating with bad intent, what management practices might be encouraging it? For example, it’s common to find individual objectives behind a lack of collaboration; everyone is competing with each other for some scarce reward.

When a team acts with bad intent, it is rarely in isolation of the manager acting with bad intent. The intent on the one side is driving the intent on the other. This can then cycle infinitely as the team under-performs and the manager pulls levers that only result in worse performance.

When a manager acts with bad intent, it is a little more likely they are doing it in isolation of the team. You will find instances where team good intent is stifled by bad manager intent far more often than you’ll find managers with good intent being crushed by a team acting ill. Where you do find this unusual situation, it’s not uncommon to find the team are still stampeding from prior management with bad intent.

In both cases, intent should not be assumed. You need to look behind the curtain, as they did in Oz, to find out the real human story. The use of the word intent here should not be personal or judgemental. The intent does, however, need to be changed.

What This Means

We have four quadrants in play. Only one of them works. The top-right quadrant, the one titled “virtuous circles”, is the only good place to be. The team and the management are all acting with good intent.

On the top-left, a team with good intent is being stifled by the management. On the bottom-right a manager acting with good intent is being defeated by the team.

Finally, in the bottom-left zone we have a feeding frenzy. The team and management are eating each other alive and the users are undoubtedly suffering.

A team in the virtuous circles quadrant will outperform a team in the feeding frenzy quadrant no matter which process they are using. I’m serious. A virtuous circles Waterfall team will beat a feeding frenzy Scrum team. Every time. Insert the name of your favourite process. It still applies. In most cases, the virtuous circles team will beat all the other quadrants. As time goes by, it becomes ever more likely they will beat the other three.

Virtuous circle organisations will change their process over time and it will get better. Feeding frenzy organisations will also change their process over time; and it will get worse.

Process evolution works in this way. It adapts to the goals. The goal of a virtuous circles organisation is to deliver working software that is relevant to their users, whereas in the feeding frenzy everyone is optimising for their individual survival.

This might mean the process converges on a common agile way of working in a virtuous circles organisation; but only if that’s what works. We don’t need to debate that here because we know that process doesn’t matter, in the same way that gravity doesn’t matter to water. It looks to us, as observers, like it matters, but the water is just doing its own thing. It doesn’t need to be told to go downhill, or to flow around obstacles.

Remain Vigilant

One last thing. If you have nailed one of the two requirements of a virtuous circle organisation, it’s easy to fool yourself into thinking the job is done. Don’t be complacent, though, as the two half-way states are temporary. You either work your way up, or you naturally decay. Without effort, you move down and left. Great teams are killed by management with bad intent (either the fire in their eyes dies, or they leave) and ace managers are worn down by teams with bad intent until they leave or become bad managers. It’s thermodynamics. Almost.

So, pay attention to the intent behind the actions of teams and managers and seek the virtuous circles. The process doesn’t matter.

The Software Punk Revolution Sun, 05 Jul 2020 18:30:53 +0000 Let’s be honest. Our planet is churning out supercodemonsters at an alarming rate. These pinnacles of virtuosity are idols to the false gods of development ego. The pattern has always been there, but in the past it was misdirected enthusiasm. On day one you’d learn how to extend prototypes in JavaScript. On day two, everything you needed to do in code seemed to be solved by extending prototypes in JavaScript.

Side-anecdote. Some time ago, I worked on a team that kept seeing strange commits to the codebase that used unusual features of the C# language in obscure ways. It made no sense at all and we were all scratching our heads. That is, until someone realised that the commits chronologically followed the chapter order of C# in Depth. It’s a great book, but care must be taken to avoid over-enthusiastic application by a novice. But this is beside the point. Eagerness is one thing, but what we have right now is pure hubris.

If I were in a forgiving mood, I might consider that there is a signal imbalance in the software development world. There are some really-massive-scale programs out there in really-massive-organisations that solve really-massive-problems. These organisations tend to get a lot of newspaper inches as there is a lot of esteem granted to the individuals who talk about what they are doing. If you aren’t following, consider this… if you keep up-to-date with technology at all you will have read books and articles from people who work for Microsoft, Google, Spotify, Facebook, Twitter, Stack Overflow, and other similar thought-giants. For the problem they are solving, they need to do some strange stuff. However, due to the level of respect and adoration these organisations get, their solutions are being redeployed in damaging ways where the strange stuff is not needed and not beneficial. Software teams end up trying to swat a fly with a Challenger tank. You may or may not kill the fly, but you definitely will blow holes in the walls.

So, many problems might be caused by the availability heuristic. If you are feeling generous. A swarm of articles on microservices might make you think everyone needs to use microservices.

But I’m not feeling generous. The following might offend musicians or music lovers. Don’t @ me.


Back in the latter-half of the sixties, something happened to music; it got pretentious. Under the moniker of art-rock, progressive, or the short-form, “prog”, we lost all sense of the song and scratched the itch of virtuosity. Wow! That massive instrumental section with the technically brilliant solo was such wow. Yawn. It was almost as if the musicians were entirely preoccupied with showing everyone how proficient they were. There’s no denying it – they were amazing musicians. They could do things that you wouldn’t believe at speeds you couldn’t comprehend. It was, however, dreadful.

Flocking in their droves to witness this self-congratulatory brilliance were scores of other musicians. It was less like a gig. More like a masterclass. A musical conference, perhaps. Someone who was good on the drums would watch someone who was stunning on drums and marvel at their skill. The songs, though, were awful. A guitarist who was reasonable might attend to see the guitar played phenomenally. The actual tunes, however, were appalling. You get the idea, repeated across the suite of musical instruments. To further elevate themselves, these artsy bands would – purely for cultural reasons, you understand – show that they had mastered several instruments including some that they had discovered in small huts up a mountain that hadn’t been seen in the Western world before. But the music was bad.

The only way to stamp out this plague of virtuosity was a revolution.

That revolution was punk.


Though it was never as bad as the “they only know two chords” moans of its detractors, punk was undoubtedly not the territory of master musicians. There was no doubt that these bands did not grow up listening to Baroque composers. They were noisy, simple, and necessarily had an attitude of visceral defiance. Looking at their safety-pin encrusted trousers, you might not realise they were here to save us all from double a-side 12″ prog compositions (two songs and forty-four minutes of your life).

By throwing away all claims of mastery, they gave us back the song. No more “Supper’s Ready” (23:06), “Echoes” (23:31), or “2112” (20:33). Instead we had “Suspect Device” (2:40), “Pretty Vacant” (3:18), and “London Calling” (3:19). There might have been the flicker of a solo here or there, but nothing to make another guitarist weep with envy.

That’s not to say there wasn’t an abundance of talent. Look through those bands and listen carefully and you’ll see that there is plenty. However, the talent was never more important than the song.


This is what we need right now in software development. An uprising. A revolution. Punk. Nothing should be more important than the problem at hand. No complex architecture. No amazing framework. No cool new tech. Nothing is more important than providing something useful to your users.

Don’t be one of those pretentious prog bands. Be a punk revolutionary. Make useful software, not keynote-worthy architecture.

Should I Migrate to Hey for Email? Wed, 01 Jul 2020 13:26:06 +0000 Having signed up to the wait list for Hey, I received my invite and subsequently undertook a two-week trial. What happened next has changed how I interact with email forever. Although I may be repeating information that is already out there, I’m going to focus on a small number of features that let you change your relationship with email. These features ultimately convinced me to part with £100 and sign up for my first year. This article is not an “influencer” article. I’m not being paid to write it. There are no kick-backs or discounts. It’s my celebration of a problem solved.

The Really Important Stuff

Here’s the really important stuff. Just like this blog post, all of the really important stuff is right here at the top. In Hey, it’s called your “Imbox”, and you have tools to prevent anything getting into your Imbox unless you really want it there. When you open normal email, your inbox is jammed full of stuff from everyone who you ever gave your mail address to. Plus those they sold it or leaked it to. When you open Hey, you very often get an empty mailbox (which is a new experience for me). Mail you don’t want just never arrives, and stuff that isn’t important can be redirected into The Feed (newsletters and stuff that you can read whenever) or the Paper Trail (order confirmations and receipts).

On top of this, just because an email makes it into your Imbox doesn’t mean you get disturbed. You can limit which contacts will generate a notification and let the rest wait until you are ready to deal with it.

I expand on these points below, comparing my old workflow to my new one.


The Imbox

Before: I open up my inbox and tick everything that is obviously spam. I then mark it as spam so it leaves my inbox. I wonder whether marking this stuff as spam actually does anything because most of the spam has the same title related to property investments. I then scroll past my newsletters. I want to keep these because they are from InfoQ, or Microsoft, or Autumn Christian, or TSConf. They aren’t urgent, but I’ll read them later. I’m scrolling and looking for emails from humans. I select these out and read them. If I’m responding now, I’ll reply. If I need to respond later, or do something with the email later, I mark the email as unread so it will still be bold when I come back later on. For emails I’m done with, I’ll archive them to move them out of my inbox. Repeat.

Hey: I open my Imbox and it’s either empty, or has one or two emails from humans. I read the mail and either reply, or hit the “reply later” button, or hit the “aside” button. Reply later is for stuff I’ll respond to when I have the information I need. Aside is for stuff I need to read later, like the shopping list someone emailed me. If I don’t need to take action, the email drops into my “seen before” pile and I can forget about it.

My inbox was a perpetual collection of 100 emails in various states. My Imbox is one or two emails I’ve not seen before.

The Feed

Before: I leave newsletters unread until I get to them. Most of my unread inbox is newsletters because they aren’t terribly time-sensitive or important. I enjoy reading them, when I’m in the mood to read them. They clog up my mailbox and make it hard to find real email.

Hey: All of my newsletters drop straight into The Feed. I can scroll through this when I want to. It’s a bit like a social feed, I can scroll past stuff that’s less interesting to me. I don’t need to mark them as read, or archive them. They are just there. I can read them, or not.

The Paper Trail

Before: I tend to keep my order confirmations and dispatch notifications as unread emails until I get the thing I ordered. Then I archive them. I rarely need to refer to them, unless someone sends me a pizza with no cheese on it (you know who you are Dominos). Later on I might need to find them.

Hey: My receipts and order emails land in The Paper Trail. It’s the automatic version of what I did before.


Notifications

Before: I get a notification each time an email arrives in my inbox, until I get so frustrated I disable notifications. Then I get no notifications for any emails ever.

Hey: I get a notification each time my wife emails me. The rest of you can wait.


Tracking

Before: People sending me emails tracked when I opened and interacted with the email. While I’m not shocked they do this, they definitely don’t have my consent to record whether I opened their email, what time of day it was, or whether I opened it multiple times.

Hey: I get a little notification telling me that tracking was prevented.

Converting into a Paying Customer

It’s literally economics. An hour a day of email management has been eliminated. That’s probably 300+ hours a year saved. That’s ten weeks a year of unpaid overtime working as a spam and email administrator gone from my life. But it’s more than “is my time worth the money”. Email had become a real drag. Now it’s a joy. The difference now is that email is once again something that connects me to other humans. All the automated trash is gone and only the people remain (er, I mean this respectfully… as I do kinda want the newsletters… just on different terms to how I want to communicate with individuals).

With Hey, I can once again imagine using email to get back in touch with people. The annual cost feels more like a donation to an organisation that will make lives better. Genuinely. If they make money out of it, all the better. A company making money by being on my side isn’t going to be tempted to look inside my emails for advertising opportunities or other nefarious activities.

Join the email revolution over at Hey.

#SleepBedless Nominations Sun, 28 Jun 2020 08:32:33 +0000 This month (June 2020), Depaul UK launched the #SleepBedless campaign. They asked members of the public to sleep without beds for a night to raise awareness of hidden homelessness and to fundraise towards their coronavirus crisis costs.

More than half of young people in hidden homelessness have suffered harm and Depaul offer services to support these people – including their Nightstop initiative that offers emergency accommodation to young people.

This campaign will directly help vulnerable people, so spending one night without a bed and making a donation is a small gesture that might make a big difference. Having spent the night on the floor, I nominate my Geronimo Web friends Dan Horrocks-Burgess, Petr Vinklarek, and Rob Lawson to do the same.

Why not join us?

The #SleepBedless campaign asks you to do three simple actions:

  • SLEEP bedless for one night,
  • DONATE £5 to Depaul UK, and
  • NOMINATE three other people to complete the #SleepBedless Challenge.

Why three people? Because £15 is enough to provide a bed for one night for a vulnerable young person. It makes sense, doesn’t it?

Every pound raised goes a long way to help our young people to avoid the dangers of rough sleeping. But that's not all we do. Our teams also provide thousands of young people with the life skills, education and training they need to lead happy and healthy lives.

The Lockdown Effect Wed, 03 Jun 2020 08:33:42 +0000 As some countries around the world lift restrictions following COVID-19 lockdowns, we can start to see the effects the lockdown will have on different companies and industries. From the data I have seen so far, the lockdown effect seems to divide organisations into three broad categories: winners, losers, and recoverees. Let’s look at what this means.

The Winners

There are a few different kinds of winners. Some businesses that were ready to handle online orders will have gained sales that would otherwise have been made in bricks-and-mortar stores; Amazon, with their online dominance, are likely to be a big winner, taking sales that would have happened on the high street if it were open. This is a transfer of sales from the losers to the winners. Other winners will have benefitted from a short period of increased demand: Ikea will have seen a surge from people kitting out their home offices, which are sales that would not have happened at all. There will also have been knock-on winners in the packaging industry and for courier companies.

Some of the winners can expect to go back to normal when restrictions are eased. Some will find they retain a proportion of the win over a longer term.

The Losers

Most of the losers will have lost sales to winners during the lockdown, or will be in an industry where the sales simply stopped (such as restaurants). They will have a bit of a fight on their hands to bring customers back from the competition if people started to shop elsewhere. Unfortunately, some of the losers won’t be able to recover, either because the runway has already ended, the cost of getting back in the game is prohibitive, or the customers are now shopping elsewhere.

The Recoverees

The recoverees find themselves in industries that provide a necessary product or service that people have had to wait for. If you needed a vehicle service in May, you still need it now. Much of the “lost” business in these industries is actually “delayed” business, and a rush to catch up will bring back much of that revenue. That double-glazing repair you needed will still happen, and if businesses can handle the extra demand there is an opportunity to recover the “lost” sales.

Watch Out in 2021

The following insight is not mine, so all credit to Rebecca, who noticed this. For businesses with short-term memories, 2021 is going to be a mess. If you work in a business where targets are set based on “last year’s sales”, you’ll be fighting the distortion caused by your lockdown-effect category. For example, a courier company that tries to hit similar sales volumes in April 2021 is going to struggle, as they won’t have millions of people stuck at home with delivery as the only shopping option. Equally, recoverees will have a bumper June / July / August in 2020 and might struggle to hit those volumes a year later.


A global pandemic is certainly an accelerator, for good and for bad. Many organisations will realise that working from home can be productive and increase workplace joy. Many businesses that were already struggling may find the end arrives early. Things that were likely to happen eventually will now happen much sooner.