Awesome Microsoft Edge Vertical Tabs
https://www.stevefenton.co.uk/2020/10/awesome-microsoft-edge-vertical-tabs/ Tue, 27 Oct 2020

I’ll be honest, it has taken a day or two to retrain my cerebellum for this one, but it is totally worth it. Modern displays are wider than you need. Horizontal real estate is in surplus and cheap, but vertical space is more valuable. That’s why Microsoft Edge is getting a feature called Vertical Tabs, which is easier shown than told, so let’s take a look.

Microsoft Edge Vertical Tabs

There they are, the beauties. They sit quietly down the side of your screen, giving you a few extra prime vertical pixels for the web page. You can fit quite a lot of tabs down this list, which you can set to expand and collapse. To show it for scale, this is the whole screen with the tabs in minimal mode:

Minimal Vertical Tabs

And this is the screen with them expanded. You can keep them expanded if you prefer.

Expanded Vertical Tabs

Enable Vertical Tabs in Edge

You can enable vertical tabs by visiting edge://settings/ and doing a quick search for “vertical tabs”.

Edge Setting for Vertical Tabs

Summary

Day one of enabling this was painful for me. While I understood the benefits, my brain thinks it knows where to click for tabs, and years of habit have convinced it that they belong at the top of the screen. However, the more I use the feature, the more natural it feels. The brief disruption of the learning curve is entirely worth it. If you aren’t running Microsoft Edge, you can download Edge Browser from the Microsoft website.

This is one occasion where we can all agree that tabs are better when they use less spaces (sic).

Increase Productivity by Quantifying Simpler Tasks
https://www.stevefenton.co.uk/2020/10/increase-productivity-by-quantifying-simpler-tasks/ Wed, 21 Oct 2020

The full title of this article should really be “Increase Productivity by Quantifying Simple Tasks; Protect Complex Task Productivity by Not Quantifying It”. It comes from a study of workers in a garment factory by Aruna Ranganathan, co-authored by Alan Benson, to which I’ve added my opinion because sometimes I’m a narcissist like that.

When workers completing simple tasks have their work quantified, they’re more likely to turn the experience into a personal game, a concept known as “auto-gamification.” They compete against themselves to increase efficiency, even when there’s no reward for doing so and no punishment if they don’t.

In contrast, those who perform complex tasks that require higher levels of artisanship believe quantification to be an imperfect measure of their on-the-job performance and are thus demotivated by such real-time scorekeeping. – Deborah Lynn Blumberg, Stanford Business

Please heed previous warnings on gamification and funification.

My View

The question of whether quantification will improve or hinder performance is not complex. Unlike many simple things that are hard to do, this one is easy. If the task is simpler than the numbers, productivity will increase. If the task is harder than the numbers, productivity will decrease. The measurement acts a bit like a gravitational force.

Let’s take two examples that look the same to see this effect.

Simple task. Typing. I stick up a piece of paper with some text on it and type it into a text editor to practice my touch typing. I set up my editor to collect words per minute. The task is very simple as I’m just copying text from a printed page into a text editor. By measuring my words per minute, I can track my progress pretty accurately and will have an intrinsic desire to improve my count. The simple task gravitates upwards to the measurement.
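
Just to show how simple the measurement is (which is rather the point), here is a rough sketch of the words-per-minute calculation, using the common five-characters-per-word convention. This is my own illustration, not something from the study:

using System;

public static class TypingSpeed
{
    // Conventional WPM: five characters (including spaces) count as one "word".
    public static double WordsPerMinute(int charactersTyped, double minutes)
    {
        if (minutes <= 0)
        {
            throw new ArgumentOutOfRangeException(nameof(minutes));
        }

        return (charactersTyped / 5.0) / minutes;
    }
}

// Example: TypingSpeed.WordsPerMinute(1500, 5) -> 60 WPM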

Complex task. Writing an article. I open a blank text editor and start writing an article. If I try to quantify my progress by words per minute, the measurement is simpler than the task. My productivity diminishes as the gravitational force of the numbers drags it downwards.

Let’s drop them on a picture, with the measurement of “Words per Minute” in the middle, and its gravitational effect on the touch typing task and the article writing task…

Measurement Complexity Creates a Gravitational Force

I suspect that it is, in theory, possible to reverse the gravitational pull by operating on the method of quantification (e.g. if I were to measure my article in a more refined manner). Over time, as sophisticated measurement tools are made available, we might be able to push the quantification upwards and create a stronger pull that works positively on more complex tasks. As my good friend Keith Drew also pointed out, we could break up and simplify a task to bring it below the measurement line. In practice, though, it seems sensible to abandon the numbers game once a certain complexity threshold is reached.

Skipping the Chasm: How a Crisis Accelerates Progress
https://www.stevefenton.co.uk/2020/10/skipping-the-chasm-how-a-crisis-accelerates-progress/ Wed, 14 Oct 2020

Full credit to Geoffrey Moore, whose seminal “Crossing the Chasm” keeps proving to be a useful book thirty years after it was written. Credit also to Hans Baumhardt, who introduced me to the book and who critically shaped my thinking about work and life. What I hope to do, now that the credits are over, is explain how we are all currently being affected by a crisis in a positive way. By no means do I intend to side-line the terrible impact of the pandemic, which has sadly affected hundreds of thousands of people. However, in this piece I’m going to talk exclusively about the upside, where some advantages are to be found by those who can see the opportunity, and how all of that relates to Moore’s Crossing the Chasm.

Crossing the Chasm

In Crossing the Chasm, Geoffrey Moore describes a process (aptly shown on the cover of the third edition) by which technology is adopted. Like all great ideas, it is more broadly applicable than you may realise. Anyway, the stages of the technology adoption lifecycle are…

Crossing the Chasm Book Cover

  1. Innovators
  2. Early Adopters
  3. The Early Majority
  4. The Late Majority
  5. Laggards

But, importantly, there is a big gap between Early Adopters and The Early Majority called The Chasm. This is the leap your product or idea needs to cross in order to make it into the mainstream.

The normal route used to cross the chasm is the spark ignited by the Innovators and the momentum and promotion of the Early Adopters.

Skipping the Chasm

When there is a crisis, that gap between Early Adopters and The Early Majority narrows. That means that adverse conditions can create an environment within which progress is accelerated. Potentially by years.

Prior to the current pandemic, you really needed to be working for one of the well-known tech innovators, such as Octopus Deploy, Basecamp, or GitHub, or at one of The Early Adopters of remote teams (probably inspired by companies such as these) to find situations where distributed teams were normal. Fully remote teams and hybrid distributed teams were the dream, but the reality for the mainstream was probably five years out. Large companies had invested so much in their offices, equipment, and way of working that it was hard to see past the sunk cost. It was inevitable that remote would become the default, but it seemed a way off.

Then the crisis hit, and there was no alternative for companies but to acquire the laptops that they probably should have been investing in some years earlier. They had to make it possible to access systems from outside the concrete walls. They had to adopt video conferencing and other communication technologies. All of the investment of time and money into solving the problems was accelerated from years into weeks.

A crisis narrows the chasm, or even closes it.

What Does This Mean for Business?

The subject of remote working is one that has come up in most companies I have worked at over the past ten years. Over this time, I have built and refined a model that predicts there are certain phases to remote work. There are important implications for organisations that have not yet moved to remote or hybrid teams. I have adjusted the model based on events this year.

The waves, or phases, are:

  1. Redundancy and Furlough Movers
  2. Remote Preferred – Local Salary
  3. Remote Preferred – National Salary
  4. Remote Required

That is to say, we have already had a wave of tech talent being displaced by disruption to their previous employment. They will have moved to a role that was necessarily remote due to lockdown restrictions. They may have been willing to drop some salary to secure an income during the crisis. They will look to recover this later, potentially with another move. There is certainly no surplus of available talent, as smart organisations created roles specifically to snatch people from this temporary pool.

We are now moving into the next phase, which is that people will be looking out for fully remote work. At this stage, they will consider roles at local rates. For example, a developer in Lymington may be happy to work for a company in London on a “Lymington wage”. The local competition for talent in Lymington means that salaries are lower than London rates. Companies willing to be in The Early Majority may secure talent in this phase that will not be available later.

The next phase will be disruptive to The Late Majority and devastating to Laggards. The early advantage will be lost and local salaries will no longer be viable. Latecomers will need to compete with companies in London for the candidate in Lymington, which means offering London benefits packages. Over time, salaries in previously lower-paid areas get more expensive.

Finally, we reach the phase where the best tech talent is only available remotely. If you haven’t adopted by this stage, you’ll be working out the details while others are flying. Any competitive advantage will have been lost and you’ll be spending even more money than The Early Adopters just trying to catch up to them.

Prove It!

The writing is on the wall. Microsoft have invested a ton of effort into Microsoft Teams, researching remote work fatigue and introducing features that reduce it. They’ve quickly made Teams the most compelling tool for remote workforces. On top of this, Microsoft have announced their own move to a hybrid workforce. Those who opt to work remotely for more than half their time lose a permadesk at the office, but can use a pool of office space when they need to visit HQ.

Why?!!

The old script of working hard to fund a retirement has been torn up. Not many people hold out much hope of having a dream lifestyle when they step back from the workforce at the ever-increasing retirement age. People are realising that if you want to live a life, you need to do it while you are working. Once again, the Innovators and The Early Adopters are selling this as part of the package. Basecamp work four-day weeks for the whole summer, offer generous holiday (including a 30-day paid sabbatical every three years), and strive to promote a healthy balance of work and life. Many other companies are moving in a similar direction.

To put it simply, these changes are all coming. Your organisation will be changing, willingly or by market forces. All of the advantages will fall to those who move right now. If the work being done in your organisation doesn’t require people to be on prem, and if you aren’t remote or hybrid when 2021 lands, the potential benefits will be eroding very quickly.

Execute Raw SQL Scripts in Entity Framework Core
https://www.stevefenton.co.uk/2020/10/execute-raw-sql-scripts-in-entity-framework-core/ Tue, 13 Oct 2020

Most of the time, Entity Framework Core will just do the right thing. Every now and then, though, you’ll find that it’s doing something in a bit of a sticky way and you’ll want to take control. Usually it’s when you’re deleting a range on a table with cascading deletes.

Here’s an example of the Entity Framework code that will take a bit longer than you might want:

_context.Checks
    .RemoveRange(_context.Checks.Where(c => c.OrganisationId == model.OrganisationId));

await _context.SaveChangesAsync();

Please be careful here, as there is a method called ExecuteSqlRaw that could end up allowing Bobby Tables to trash your database. The method you are looking for is ExecuteSqlInterpolatedAsync, which will automatically convert an interpolated string into a parameterised query.

await _context.Database
    .ExecuteSqlInterpolatedAsync($"DELETE FROM Checks WHERE OrganisationId = {model.OrganisationId}");

In cases where your Entity Framework version was problematic or slow, this will run at the speed of a plain DELETE. In my case, that’s about 30 seconds faster (the Entity Framework version was taking 30 seconds and the raw DELETE is near-instant).

You can also retrieve your items using a custom SQL statement, in cases where you need to get them from a view or do something outside of the norm. The example below is overly simple, but you’ll see the idea. When you want your proper entities back, you run the SQL from the DbSet level, rather than on _context.Database.

var checks = await _context.Checks
    .FromSqlInterpolated($"SELECT * FROM Checks WHERE OrganisationId = {model.OrganisationId}")
    .ToListAsync();

The interpolated SQL methods are super useful and are a neat shortcut for setting up a command, adding command text, adding parameters, and all that ADO ephemera.
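
For comparison, here’s roughly what that shortcut saves you writing: a hand-rolled ADO.NET version of the same delete. This is a sketch (assuming Microsoft.Data.SqlClient and a connection string), not code from the project above:

using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

public static class CheckCleanup
{
    public static async Task<int> DeleteChecksAsync(string connectionString, int organisationId)
    {
        // Open a connection, build a command, parameterise it, execute it:
        // everything ExecuteSqlInterpolatedAsync wraps up for you.
        using var connection = new SqlConnection(connectionString);
        await connection.OpenAsync();

        using var command = connection.CreateCommand();
        command.CommandText = "DELETE FROM Checks WHERE OrganisationId = @OrganisationId";
        command.Parameters.AddWithValue("@OrganisationId", organisationId);

        return await command.ExecuteNonQueryAsync();
    }
}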

Simple Conditional Updates to Entities in ASP.NET Core MVC
https://www.stevefenton.co.uk/2020/10/simple-conditional-updates-to-entities-in-asp-net-core-mvc/ Sun, 11 Oct 2020

When you accept a view model in your ASP.NET Core MVC application, you can request that only certain fields are bound, like this: [Bind("Title")]. Neat. But when you want to apply the changes to your domain object, you often want to do a similar thing and only update certain fields (and only if they really changed). I use the following code to avoid checking each individual field, which keeps my controller super obvious.

The result should be that only the fields I allow the user to change get pushed for update, and the object is only updated if there is a change. Or in summary:

  • I control what fields can be updated
  • A change is only triggered for real updates

Basically, the controller says what fields to update like this:

currentItem.MapField(nameof(currentItem.Title), replacement);

It doesn’t matter how many fields I have on my type, I’m only sending back the title to the database.
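
In context, a controller action might look something like the sketch below. The names (_context, Organisations, Index) are illustrative, not lifted from a real project:

    [HttpPost]
    [ValidateAntiForgeryToken]
    public async Task<IActionResult> Edit(int id, [Bind("Title")] Organisation replacement)
    {
        // Load the tracked entity, map only the allowed field, and save.
        // EF Core will only issue an UPDATE if a value actually changed.
        var currentItem = await _context.Organisations.FindAsync(id);

        if (currentItem == null)
        {
            return NotFound();
        }

        currentItem.MapField(nameof(currentItem.Title), replacement);
        await _context.SaveChangesAsync();

        return RedirectToAction(nameof(Index));
    }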

To see how it all works, I’m going to use my Entity base class, and an Organisation type that extends Entity.

Let’s follow the code down from my controller. The base class actually does the work. It grabs the property info from the type, grabs the local value and the updated value, compares them, and updates them if there is a change. It won’t update where the values are the same. So, my Organisation actually doesn’t matter too much for this example…

    public class Organisation
        : Entity

And my Entity class has a method that works for any kind of entity.

    // Requires: using System.Reflection;
    public class Entity
    {
        // ... other stuff

        public void MapField<T>(string field, T replacement) where T : Entity
        {
            // Find the named property on the replacement object
            PropertyInfo property = replacement.GetType().GetProperty(field);

            var originalValue = property.GetValue(this, null);
            var replacementValue = property.GetValue(replacement, null);

            // Only write the value if it has really changed
            if (!Equals(originalValue, replacementValue))
            {
                property.SetValue(this, replacementValue, null);
            }
        }
    }

In the full version, when I land inside the property.SetValue condition, I also update some base class stuff, as it’s a neat place to say “something really changed”. I also happen to do some other manipulations, such as sanitising user input. You might find this a useful place to do things like trimming strings or running values through allow lists.

If you don’t like inheriting from an Entity class you could delegate this off, or write an extension method, or whatever floats your boat.

The Fly on the Windshield
https://www.stevefenton.co.uk/2020/10/the-fly-on-the-windshield/ Wed, 07 Oct 2020

Every organisation has to deal with this problem. It has been described in many different ways, from the famous “Urgent vs Important” quadrants to the simplicity of phrases like “firefighting”. Yes, it’s all those tasks that people seem to want now, which are stopping you from doing the real work.

My analogy for this stuff is the fly on the windshield. You’re driving along and a big fly lands right in front of you on the glass. You could ignore it, because it doesn’t stop you from driving safely. However, because it’s right there on your windshield, it looks bigger than the real obstacles that surround you. Its proximity makes it seem more important than it is. Some people crash their cars because they are so intent on getting the fly off their windshield.

The solution falls into the “simple but hard” category. All you have to do is ignore the fly. It will go of its own accord. Just ignore it. You’ve got bigger obstacles ahead that are far more deadly. Of course, this is easier said than done. Not only are we emotional creatures, but in our work life it’s not always obvious which are the flies and which are the heavy trucks. We need a knife to cut through all the noise and give us a starting point for our categorisation. That knife is simple, too. If a task has “just cropped up”, the probability is that it’s a fly. Take a deep breath. You don’t need to panic. Make sure nobody is going to die if you ignore it. Then carry on with the important stuff you should be doing.

Switch Off Rich Link Pasting in Edge
https://www.stevefenton.co.uk/2020/10/switch-off-rich-link-pasting-in-edge/ Tue, 06 Oct 2020

There is a cool new feature in Microsoft Edge that pastes links with rich formatting. If you copy a link from the page or from the address bar, it will paste in a rich format, so instead of seeing https://www.example.com/ you’ll see Example Website (example.com) and it will already be linked to your selected destination. You can choose to “Paste as Plain Text” in applications that give you that option.

However, if you’re old-school, technical, or just plain obstinate like me, you might need to share the actual URLs more often than you want to share nicely formatted links. You can get back to plain old addresses by updating the Share Copy Paste settings (edge://settings/shareCopyPaste) in your browser. Just choose “Plain Text” and you’ll be back to your old ways in no time.

Select "Plain Text" To Switch Off Rich Links in Edge

Equally, if you find you want to upgrade your experience, you can visit that same settings page to select “Link” mode.

What Data is Missing?
https://www.stevefenton.co.uk/2020/09/what-data-is-missing/ Fri, 25 Sep 2020

When you start collecting data at scale, you need to decide when to invest in keeping “all the datas” and when to keep only a sample. When it comes to sampling, you need to ensure that what you keep is a truly representative sample. That means you need to discard at random, so that what you keep “looks like” the total data when scaled up.

Common mistakes in this arena include…

  • Believing numbers are precise
  • Not considering excluded groups

Analytics are not precise (but that’s okay), because data is lost for many reasons, one of which will be deliberate if you are sampling.

Ignoring excluded groups is a far more serious error.

For example, if your analytics software runs with JavaScript, it would be wrong to infer that “100% of our visitors have JavaScript enabled” – those without it aren’t being counted, so you don’t know how many there are.

Let’s look at an example that embodies this principle.

These are the results of the 2016 US Election in terms of votes. Clinton had 1.3 million more votes than Trump (but the electoral college system resulted in Trump being elected).

Column chart shows Clinton has the majority of votes, a small lead over Trump, with a small column for other votes.

You will hear people saying that 48% of voters chose Clinton and 47% chose Trump. This is wrong. When we look at votes, we are excluding specific groups of the population. In particular, we exclude those not eligible to vote and those who didn’t vote (or whose vote was not counted).

That doesn’t matter, right? Can’t we just scale up our population? No, because the people who didn’t vote will not behave the same as the people who did vote. For example, if the right candidate was available, they might vote. If their vote was not counted due to some form of corruption, or a technical fault, or a usability problem with the voting system, their vote could count next time.

In the case of the US Election in 2016, the “no vote” population is massive. In fact, if we look at voters rather than votes we find that “no vote” represents 43% of the voting population, leaving just 27% for Clinton and 26% for Trump.

Chart shows the largest column is no votes, with columns that are three fifths as high for Clinton and Trump and a very small column for others.
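
The arithmetic behind that shift is just a change of denominator. Here’s a quick sketch with deliberately round, illustrative numbers (not the official counts):

// Illustrative figures only; the point is the change of denominator.
double clintonVotes = 65_000_000;
double trumpVotes = 63_000_000;
double otherVotes = 7_000_000;
double eligibleVoters = 240_000_000;

double totalVotes = clintonVotes + trumpVotes + otherVotes;

// Share of votes cast: excludes everyone who did not (or could not) vote.
double shareOfVotes = clintonVotes / totalVotes;                     // ~0.48

// Share of the eligible population: the "no vote" group reappears.
double shareOfEligible = clintonVotes / eligibleVoters;              // ~0.27
double noVoteShare = (eligibleVoters - totalVotes) / eligibleVoters; // ~0.44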

Why does this matter?

Because it changes how you react to the data. Without considering the whole population of voters, your strategy will be to win votes from “the other side”. With the broader picture, you will focus more effort on motivating non-voters to attend. Your web analytics might not be as fundamental to saving the planet from corruption as they will be in the next US election, but the data you might be ignoring could be just as important to your strategy.

Number of votes obtained from The BBC, number of eligible voters from Heavy.

Working With Public Coronavirus Data
https://www.stevefenton.co.uk/2020/09/working-with-public-coronavirus-data/ Mon, 21 Sep 2020

The UK Government provides public datasets that can be used by the media or the public. One such dataset contains information collected during the Coronavirus pandemic about its impact on people living and working in the UK. The problem with this dataset, though, is that we weren’t able to record the data until after the pandemic had got into full swing. Some examples of factors that defeat those trying to understand the data are below…

  • There was no actual testing system in place in March, which likely means a massive under-reporting of cases.
  • The current testing system has reached capacity in some areas, which will mean under-reporting of cases now.

In between, when there was a fully working test system, we would have been receiving reasonable data (the testing process itself is not 100% accurate, but we can expect the numbers to provide a good indication of the state of affairs when the system is in place and working).

Case numbers have been the go-to metric for reporting on Coronavirus, but I think this could be a mistake, given the massive problems with the collection of the data for cases. With this in mind, I have examined the other data that is available and made an effort to construct a model based on a more reliable measurement of how the situation has developed. A reliable metric needs to be one that is likely to have reported reasonable numbers throughout the March to September date range; one that is not affected by time slices with no testing, or limited testing. From this, we can examine the relationship between cases and our new metric to see if it provides a model for predicting cases.

Stable Metric – Hospital Admissions

The measurement I have selected is hospital admissions. This metric is not dependent on self-reporting or the availability of testing. We can theorise that there is a strong relationship between the number of cases and the number of patients admitted to hospital. Using the data from the public dataset, we can construct the following chart.

Original Coronavirus Data

The main suspect area in this chart appears on the left-hand side, where the number of hospital admissions seems high compared to the number of cases. There is a secondary suspect area on the right, where the media reported that the availability of tests was limited.

Building a Model

If we take the data “in the middle”, where we know there was a testing system in place (but before the tests started to run out), we can create the following chart based on a relationship between hospital admissions and cases.

Coronavirus Adjusted Model

The model suggests that the number of cases during the peak of the pandemic may have been in the region of 50,000 cases per day. This is significantly higher than the reported numbers – in fact, so much higher that we need to remain sceptical about the model. Let’s test the model on recent numbers.

Model Prediction August/September

And now let’s look at the reported numbers for the same period.

Coronavirus Reported Number August/September

The model isn’t too far out from the reported numbers. The reality is likely to be in the same zone – in all probability, higher than has been reported over the past week by some fifteen to twenty-five percent.

Early Case Reporting Likely to be Wrong

Based on these numbers, the current case numbers aren’t as worrying as they first appear in official charts. They are still concerning, as they are going up, which we know is not a linear process. The under-reporting in March/April is likely to have resulted in a big hole in the cases dataset. That means our understanding of the spread of Coronavirus, as based on the early data, is likely to be wrong. What we can do is track a more reliable metric, although we also need to understand that it may suffer from more lag than cases (cases are likely to be reported closer to an individual first being ill, with hospitalisation happening days later).

The model that we have scratched together might not be perfect, but what we can infer is that cases were massively under-reported in March… how wide the “error bar” needs to be is unclear, as much of the reporting of ratios is based on the same data we are questioning in this article. We are also working our way up from quite a small number (250 a day in a population of sixty million), which means being one or two out at this level makes a big difference to the number of cases we can infer.

In the US, the case rates were 5x the hospitalisation numbers. The model in this article finds a somewhat larger gap (more like 16x) – but please remember this is just one way to examine the relationship between the numbers. The relationship we are basing this on seems a likely one, but that’s not proven by the above analysis.

And finally, please don’t walk away from this post thinking “oh well, that’s okay then!” The ability of the epidemic to spread will take us by surprise if we don’t keep a careful watch on the spread of the virus. In the UK the rates are doubling each week, which means the graph will “hockey stick” upwards if we can’t get things under control fast. If you can stay away from other people, limit that contact, and stop the spread – you are going to save real lives.

Updates

As of the 22nd of October 2020, the Cases to Admissions ratio seems to be reliably around the 16x mark. The average since the start of August is 16.08 and the median for the same period is 16.73. That means that, on average, around 6% of cases result in hospitalisation. The distribution of daily ratios is shown below. Not quite a normal distribution, but while it’s not a bell, it is jellyfish-like.

Distribution of Cases to Admissions

Based on the median of 16.73x cases to admissions, we can see how the prediction compares to the actual numbers since August. As you’d expect, hospital admissions are a lagging indicator, as people tend to become a case some days before they are admitted to hospital.

Predicted vs Actual Cases Since August
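
The prediction itself is nothing fancy: multiply admissions by the ratio. Here’s a minimal sketch (the daily admissions figures are placeholders, not values from the dataset):

// Predict daily cases from daily hospital admissions using the
// observed median ratio. The admissions figures are placeholders.
const double CasesPerAdmission = 16.73;

int[] dailyAdmissions = { 100, 120, 150 };

foreach (var admissions in dailyAdmissions)
{
    var predictedCases = admissions * CasesPerAdmission;
    Console.WriteLine($"{admissions} admissions -> ~{predictedCases:F0} predicted cases");
}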

We can then look back to the previous periods to predict how many cases there were during the first peak, even though it wasn’t possible to measure the number of cases at the time. Perhaps we should call this “predicted vs recorded”, as the recorded numbers are less likely to be the actual numbers than the predicted ones.

Predicted vs Recorded Cases Since March

We can revisit this 16.73x multiplier with new data as it emerges to confirm the model, but it looks like the uppermost spike was almost 60,000 cases. That’s higher than the original model, which pitched it at around 50,000 – but both models are likely to be more indicative of the true extent of cases than the recorded numbers.

Remove Blank Lines From a File with PowerShell
https://www.stevefenton.co.uk/2020/09/remove-blank-lines-from-a-file-with-powershell/ Fri, 11 Sep 2020

When importing a file full of data into a test system, I discovered that the CSV library I was using to do all the work was stopping when it reached a blank line. That makes sense: it thinks the data has ended. On inspection, I found quite a lot of blank lines, so there was no way I was going to fix them all manually. Instead of spending five minutes manually removing lines, I spent five minutes writing this PowerShell to do it… and I’ve made it run each time the test file is created.

We have a pretty simple command triplet here: Get-Content sends the lines into the Where-Object filter, which only passes lines that aren’t blank (or whitespace-only, thanks to the trim) on to Set-Content, which drops them into the output file.

(Get-Content $inputFile) | Where-Object {$_.trim() -ne "" } | Set-Content $outputFile