Steve Fenton
https://www.stevefenton.co.uk

Advertising Experiment: What are Annoying Adverts Worth
https://www.stevefenton.co.uk/2020/01/advertising-experiment-what-are-annoying-adverts-worth/
Wed, 08 Jan 2020 16:47:22 +0000

Firstly, to list all of the sampling issues with this experiment would mean a near-infinite scrollbar. The intention here is not to say that “this is what you will experience”. The purpose of this article is to show that annoying adverts make more money than subtle advertising; but that you should test how much more and consider whether it is worth the cold hard cash.

There were two variations to look at here. The first is pretty much what you now see on the website… there are two ad placements and they are quite friendly to those of you reading the content. There are no adverts within the content, or that look like content, or that otherwise annoy you. The second (please accept my apologies) I inflicted on a sample of users and it automatically splashed adverts all over the shop. For example, at the bottom of an article there are a number of related posts. The advertising would add a couple of list items to this area that were paid adverts. It would place adverts in the middle of the article, which could be distracting as it becomes hard to visually separate article-images from adverts. It was, essentially, annoying.

Annoying Ads Make More Money

The annoying adverts made more money. Karma doesn’t work here and the nice guys aren’t going to win. Basically, we are sophisticated enough to mentally filter out adverts that aren’t annoying, so it takes an annoying ad to get our attention. To put a number on it, annoying ads bring in 4.6% more revenue (based on revenue per 1,000 sessions).

This number isn’t global. That’s the difference in my experiment, on my site, with my content. It’s probably fair to anticipate that you would find that annoying ads make more money in the vast majority of cases, though the extent to which they make more may be greater or lesser than my 4.6%.
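
For clarity on the metric, here is a minimal sketch of how revenue per 1,000 sessions compares across the two variations. The figures in it are invented placeholders to show the arithmetic, not the real numbers from the experiment.

// Revenue per 1,000 sessions (RPM) for each ad variation.
// The revenue and session figures below are invented placeholders.
type Variation = { name: string; revenue: number; sessions: number };

const rpm = (v: Variation): number => (v.revenue / v.sessions) * 1000;

const subtle: Variation = { name: "subtle", revenue: 100.0, sessions: 50000 };
const annoying: Variation = { name: "annoying", revenue: 104.6, sessions: 50000 };

const uplift = ((rpm(annoying) - rpm(subtle)) / rpm(subtle)) * 100;
console.log(`Annoying ads uplift: ${uplift.toFixed(1)}%`); // 4.6%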

Other Metrics

You might expect me to confirm that annoying ads increased the bounce rate, or decreased the average session duration. As I mentioned before, there is no Karma here. There is no punishment for annoying adverts… except, perhaps, for the people being annoyed.

Lucky Me

I consider myself lucky that I’m not desperate for the ad money. I’m trying to soften my hosting costs without giving up my content to a platform. I believe individuals running their own websites is important for The Web. I can control how my content is used (for example, I can choose not to run annoying ads, I can choose not to paywall, I can choose not to limit how many of my articles you can read this month).

So, obviously, for my website the annoying ads are not an option. Not even for an extra 4.6%.

Half-Donut Charts are Still Pie Charts
https://www.stevefenton.co.uk/2020/01/half-donut-charts-are-still-pie-charts/
Wed, 01 Jan 2020 20:56:17 +0000

It was recently hinted to me that half-donut charts are a better alternative to pie charts. As I really dislike pie charts, I sat down for a while and thought really hard about this. Having approached this chart from a few different perspectives (it seems very attractive when achieving 50% is important) I have realised that these are still just pie charts. The shape of the chart offers no benefit other than implying where the half-way mark is.

If the half-way mark is important, why not make it explicit, rather than imply it?

Let’s look at one of the first examples The Web gives us for a search on Half-Donut charts.

Half-Donut Chart

Meh. This is probably a bad example even of its kind. It doesn’t look like the 50% mark is terribly important; it’s just a comparison of four technologies (three are programming languages, the other is a mysterious stable-mate, as it isn’t a programming language; it’s an operating system). Putting these thoughts aside, let’s re-make the chart without adding an additional visual dimension.

I had to roll out the trusty TypeScript pixel counter to reverse-engineer the relative size of the segments. The output is shown below…

Pixel Counter Output

It gave me a working set of numbers with the values of the four areas being: Javascript 39%, Python 26%, Android 22%, PHP 13%.

Here is a simple stacked bar, which is fundamentally the same except there is no need to curve it. There is an explicit half-way marker (and we can add other markers if needed). I have chosen to order the items by size, but I could have preserved the original order if it was important.

Stacked Bar Chart
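
If you want to build the replacement chart without a charting library, here is a minimal sketch that turns the recovered percentages into a horizontal stacked bar as SVG, with an explicit half-way marker. The colours, dimensions, and console output are my own arbitrary choices for the example, not taken from the original chart.

// Build a horizontal stacked bar as an SVG string, with an explicit 50% marker.
// Values are the percentages recovered with the pixel counter; colours are arbitrary.
const segments = [
  { label: "JavaScript", value: 39, colour: "#f7df1e" },
  { label: "Python", value: 26, colour: "#3572a5" },
  { label: "Android", value: 22, colour: "#3ddc84" },
  { label: "PHP", value: 13, colour: "#777bb3" },
];

const width = 600;
const height = 40;

let x = 0;
const bars = segments.map(s => {
  const w = (s.value / 100) * width;
  const rect = `<rect x="${x}" y="0" width="${w}" height="${height}" fill="${s.colour}"><title>${s.label} ${s.value}%</title></rect>`;
  x += w;
  return rect;
});

// The explicit half-way marker that the half-donut only implies.
const marker = `<line x1="${width / 2}" y1="-5" x2="${width / 2}" y2="${height + 5}" stroke="black" stroke-dasharray="4 2" />`;

const svg = `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 -10 ${width} ${height + 20}">${bars.join("")}${marker}</svg>`;
console.log(svg);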

The fact is, there is no need to add another dimension if it doesn’t add anything to the data. We don’t need to make things 3D if that doesn’t aid understanding. There’s no need to use circles, or curves, or circular sections, as we do with donuts, pies, half-donuts, and the like. They never increase the understanding of the data; they actually detract.

So, I don’t see any reason (other than marketing) to use a half-donut chart. It is arguably better than a pie chart, but still actually worse than a simple column chart, or stacked bar… depending on the data.

Tragic Competition
https://www.stevefenton.co.uk/2020/01/tragic-competition/
Wed, 01 Jan 2020 20:19:37 +0000

Tragic Competition occurs when a service fragments between many service providers, and each charges a similar subscription. For example, you can currently subscribe to a music service provider who will give you “all music” for $10/month… imagine if this was replaced with multiple partial offerings at the same cost, for example each record label offering exclusive access to their artists.

Case Study: No Single Buyer

Let’s take a look at Premier League Football, which became the subject of an Ofcom investigation related to the Competition Act 1998. If you were a hardcore football fan, you used to subscribe to Sky (because they had all of the Premier League live broadcast rights).

The next auction will include a ‘no single buyer’ rule, which means that more than one broadcaster must be awarded rights. At least 42 matches per season will be reserved for a second buyer, of which a minimum of 30 will be available for broadcast at the weekend.

Ofcom

The outcome of this no single buyer rule was that hardcore football fans had to continue paying their Sky subscription, and also buy a subscription from a second service provider, or miss out on live broadcasts of games. Overall, an additional 22 games were broadcast, but if you didn’t buy the second subscription, you made a net loss of 20 games.

What is Competition?

The intention was to increase competition, but it is only real competition if the consumer benefits.

As we see subscription content evolving, and content creators making the tough choices about whether to offer content through other platforms, there is every chance we will see more tragic competition emerge. It’s becoming increasingly common to have Sky, Netflix, Amazon, and Disney subscriptions, which represents a massive consumer spend in the entertainment subscription business.

I hope that the music subscription industry doesn’t become fragmented like this as the end result is highly expensive sets of subscriptions that actually don’t benefit the subscribers.

The Definitive Decadic Reference: Gifting Back the New Decade
https://www.stevefenton.co.uk/2020/01/the-definitive-decadic-reference-gifting-back-the-new-decade/
Wed, 01 Jan 2020 20:04:01 +0000

As we entered 2020, you will have no doubt heard people celebrating the new decade. Just as certainly, you will have heard those declaring that this is not the start of a decade, because we have to wait until 2021. In this post, I rebut this tiresome pseudo-intellectual peacocking and gift back to the humble people of the world the decade that is, to the 21st century, the twenties.

Before we discard the irritating argument used by those who want to position themselves as intellectually superior to the majority of their peers, let’s examine it. This problem was started when we looked upon the first year and named it “1”.

When the de-facto calendar was put into use in 1582, the word “decade” referred to any group of ten things. You could divide your Facescroll friends into decades, where each decade contained ten friends. You could distribute the contents of your coal scuttle into decades of coal, perhaps intending to observe a daily decade heating budget.

Applied to years, a decade must observe one simple rule; it must be a collection of ten years. It would be an abuse of the Greek “deka”, meaning ten, for it to be any other.

So, the argument goes that if we start with year “1”, the first group of ten would be:

1, 2, 3, 4, 5, 6, 7, 8, 9, 10

This would place year “10” in the first decade, meaning the second decade doesn’t begin until year “11”. If you keep taking sequential decades from this starting point, you find 2020 is the final year of the current decade, not the first year of the next.

That’s the argument, but the argument is idiotic.

To start from year “1” is an entirely arbitrary decision, because we all know it wasn’t the first year. More than one calendar is based on a start date calculated to be when Jesus was announced to Mary, for example our Gregorian calendar and the Ethiopian calendar, but these differ by seven to eight years.

Additionally, any arguments about what to do with the errant year “1” can be discarded when asking the real question. We orate dates in groups of one hundred years (“eighteen-twelve”, for example), so in the year 1900 the question would have been: “how shall we divide up the 100 years starting with ‘nineteen’?”

This question would, as always, be answered by grouping 1900-1909, 1910-1919, 1920-1929, and so on.

They are valid decades because they each contain ten years. They are additionally valid because this meaning of decade is so well accepted that it has made it into the Oxford English Dictionary as “a period of ten years beginning with a year ending in 0”.

This definition of a decade also allows us to refer to them colloquially with terms such as “the sixties”, or even “the swinging sixties”.

Those who attempt to redefine a decade’s start have no grounds to do so, as the collectively accepted terminology gives us all a basis for shared understanding. This collective language has more merit than a weaselly self-important “correct” definition.

So, welcome to the twenties! Enjoy this hopeful decade where we hope to improve upon the tensies and feel free to disdainfully shut down those who would confuse the well established convention for reasons of personal vanity.

Manipulating Variables in JMeter
https://www.stevefenton.co.uk/2019/12/manipulating-variables-in-jmeter/
Mon, 16 Dec 2019 09:53:38 +0000

There are many reasons for manipulating variables in JMeter, especially when you are loading data from a CSV data set config element. You might want to trim a JMeter variable, or grab just a substring.

In all of these cases, your existing knowledge of JavaScript can come to the rescue.

Wherever you were about to use a raw variable, such as ${Example}, you can wrap it with a call to the JavaScript processor…

${__javaScript("${Example}".trim())}

The important thing to remember is that ${__javaScript( )} will let you drop into JavaScript wherever you can use a variable, so you can use pretty much any JavaScript that will help you.

You can also store the result back into a different variable, as this example shows – it concatenates two variables, trims the resulting string, and stores it in NewVariableName.

${__javaScript("${Prefix} ${SomeValue}".trim(), NewVariableName)}
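
The same wrapper covers the substring case mentioned earlier. The variable name OrderReference and the eight-character length here are only illustrative; any JMeter variable and any standard JavaScript string method should work the same way.

${__javaScript("${OrderReference}".substring(0, 8))}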

Budgets Not Estimates
https://www.stevefenton.co.uk/2019/11/budgets-not-estimates/
Sat, 30 Nov 2019 19:53:30 +0000

This is an early view of a process we are experimenting with in my organisation: budgets not estimates. It represents a re-ordering of components in the planning process that generates more options and reduces single-option big bets.

There are lots of different terms for how people plan software, but very often it involves someone turning up with a fully formed idea and asking how much it would cost to implement it. It might be called a requirement, a specification, a user story, a feature… it has many names, but it means “a single idea that I want executed”.

Traditional

Most of the components of this planning are reasonable enough, but just occur in the wrong order. No amount of “Start With Why” posters seems to materially improve this situation. There are also all of the well reported dysfunctions around estimates, even though the estimates themselves are innocent enough when handled by responsible folks. We need to resolve these two issues. What two issues?

  1. The problem of planning against a single pre-defined option, when a set of options is more powerful
  2. The problems around estimation that we’ve been talking about for a few years now

Problem Budget

Our experimental solution to this planning issue is problem budgets. You define a problem. You work out how much you want to invest in an attempt to solve the problem. You work on generating a set of options that you think might solve the problem within the budget. You select an option and work on it until the initial budget is gone.

Budget Not Estimate

For example, the easy case of… “Thirty-percent of users drop out when they get to the payment screen, how much money is a solution to this problem worth?” (Not every problem is as easy to quantify.)

Example

So, how does it work?

Start With Why!

You need to focus on the problem you need to solve. Approaching this process with a fixed idea of the solution is a sure way to fail. Look at the problem, find out what a solution to the problem is worth, and decide how much of that you want to risk on an experiment. You’ll collect data as you go that will help you understand your probable success rate (how many times your first idea works, how many times your second idea works, and so on).

So, for example, “it’s kinda hard to update the website navigation”. You now have a problem to solve. Work on the problem until it is crystal clear. Use established techniques to dig below the surface to make sure you are on the bedrock problem, not some loose gravel layer above it.

What’s it Worth?

With a clear idea of the problem, you now work out what it would be worth to fix the problem. This is the money you will draw from to run experiments. You don’t just allocate the whole value to a big bet (you might do this sometimes, but if you’re doing it every time you need to rethink your attitude to risk). Wherever possible, consider allocating the budget in blocks that allow experiments that provide either success, or significant learning that can be fed back through the process.

“Let’s spend two-developer weeks on an idea to simplify updates to the website navigation for our users.”

Great! Now we have a clear problem, and a budget for our first experiment.

Option Generation

There are different ways to approach the next part, but all involve assembling a cross-functional team who are going to spend the budget. Now you run a session to generate options. You might start without the budget constraint and introduce it as a method of eliminating options that seem too expensive. You might begin with the budget and see what options come out of it. You can try different methods at this stage.

You might have multiple options, one option (beware), or no options at all. Depending on where you have ended up, you might adjust your budget, or come up with variations of existing ideas, or generate some new ideas.

In some teams, you might want to undertake this process in two or three short sessions to give people time to ponder between each one. Not everyone likes the cut and thrust of a single session, and it’s more valuable to generate great ideas than it is to crack out a decision in one punch. You need a skilled facilitator either in each cross-functional team, or available for them to use. The facilitator will make sure the sessions work across diverse personality types and protect the divergent stage from premature convergence (convergent stages generally require less protection, but a facilitator will make them more effective, too).

When the experiment is selected, the session can be capped-off with example generation, accompanied by any decomposition or slicing that is required to maintain the laser-sharp focus of the problem/budget.

Make it Happen

Now the team can get to work solving the problem. There are some important notes at this stage.

When the budget is used up, the work stops by default. This is explained well in Shape Up (published by Basecamp). The budget owner might decide to add further funding as an exception, but this doesn’t happen by default as it does in many software development endeavours. If the problem isn’t worth additional developer-weeks, it’s better that we stop and learn from what happened, rather than let the sunk-cost fallacy slowly bleed our annual budget on problems that aren’t valuable enough. Large projects kill companies by budget extensions, so we’re stopping any experiment from being big enough to crush us.

Measure

In respect of the original problem, we need to know if we solved it. Even if we completed our work “on time and on budget”, the users might still struggle to update this darned website navigation. That’s what our DITE cycle is for.

Additionally (and avoiding dysfunction wherever possible) we want to collect some data that will help us make better decisions. If you collect a binary result of each experiment where it finishes within the budget or not (regardless of being extended), you will find out how many times you succeed on “option 1”, “option 2”, “option 3”, and so on. This will generate a probability spread that will help you decide how much budget should be spent on that first attempt. You can also fine-tune your attitude to risk if you find you succeed more than eighty-percent of the time on the first option.

If you decide to allocate a second-round of funding to a “first option”, you treat this as “option two” (the same solution with a different budget is another option, just as a second solution with the same budget would be).
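
As a rough sketch of the record keeping this implies, the tally below counts how often the budget held for each option number. The data shape and the sample records are invented for illustration; they are not from a real team.

// Count how often experiments finish within budget for option 1, option 2, and so on.
// The sample records are invented; a real log would come from your own experiments.
type ExperimentRecord = { optionNumber: number; withinBudget: boolean };

const records: ExperimentRecord[] = [
  { optionNumber: 1, withinBudget: true },
  { optionNumber: 1, withinBudget: false },
  { optionNumber: 2, withinBudget: true },
  { optionNumber: 1, withinBudget: true },
];

const successRate = (option: number): number => {
  const attempts = records.filter(r => r.optionNumber === option);
  if (attempts.length === 0) {
    return 0;
  }
  return attempts.filter(r => r.withinBudget).length / attempts.length;
};

console.log(`Option 1: ${(successRate(1) * 100).toFixed(0)}% within budget`); // 67%
console.log(`Option 2: ${(successRate(2) * 100).toFixed(0)}% within budget`); // 100%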

We always want to have one eye on the sunk-cost fallacy; and we always want to acknowledge that dysfunctions can creep in from all angles. Although we often highlight management dysfunctions around estimates in the upper branches of an organisation, there are also non-management dysfunctions relating to non-disclosure of information that grow insidiously about the roots, sometimes as a result.

Benefits of Stop-Dead Funding

However you run this, you need a mechanism that stops funding an idea that has gone out of control. Here’s a simple ten-Euro version of the principle.

Pivot vs Persevere

Scenario one is the most common mistake in software development; despite being offered the lesson that the idea is not going to work within the budget, the work continues until it is complete. It might cost 2x, 3x, 4x the original budget, but nobody stops spending the money because each time they review progress, they have become more bought-in to making things happen at all costs. You might eventually solve the original problem, but you’ve ignored every opportunity to learn along the way.

Let’s replay it with scenario two. In this alternate reality, we ditch the first attempt to solve the problem because we have proved that it isn’t possible within the budget. We go back to our set of options, possibly adding new options based on what we have recently learned, and we allocate some budget to one of those instead. If we are hitting an 80% success rate, this attempt is likely to be more successful.

Obviously there are more possible outcomes to this puzzle; the second attempt might also fail, perhaps the problem cannot be solved as easily as we thought. Two different ideas failed, so this is going to cost more than we anticipated. We’ve banked twice as much learning, so now we decide if maybe we should solve a different problem. Or perhaps we discovered a new way to solve it by trying the previous ideas. We are generating information and options, so we’re getting something for our money.

In all cases, trying a different idea will bring more value through learning and cancelling work that fails to meet the budget leads to better clarity around the whole process. The biggest threat to this process is allowing every idea to over-burn. When you do this, people stop paying attention to the concept of problem budgets. “Meh, they’ll let us spend longer on this, they always do!”

Each time you use up the whole budget, you need to take a pragmatic decision. If you think you can complete the task given another day, you might decide to continue (but track it as a second option). Keep your eyes peeled for the sunk-cost fallacy and proceed with caution.

As this process matures, I’m sure a great deal of learning will emerge, so I’ll write updates as the insights arrive.

Make HAProxy Strip Spaces From a Request Header
https://www.stevefenton.co.uk/2019/11/make-haproxy-strip-spaces-from-a-request-header/
Tue, 19 Nov 2019 16:14:53 +0000

There is some shared code out in the wild that browser extensions are using to make requests, which might cause problems if you parse the Referer header in your website.

The issue is with the following request header, which you might see in your logs as Referer: http://+www.example.com:

Referer: http:// www.example.com

That space between the scheme and host name causes the problem.

Strip Spaces From Request Header

The following rule goes in your HAProxy backend, and replaces the Referer header with the same value stripped of spaces.

http-request set-header Referer %[req.hdr(Referer),regsub(' ','',g)]

A similar rule could be used for other request headers if necessary.
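
For example, if a hypothetical X-Original-Url header suffered from the same stray space, the same converter chain would apply (the header name here is only illustrative):

http-request set-header X-Original-Url %[req.hdr(X-Original-Url),regsub(' ','',g)]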

The Microservices vs Conway Test
https://www.stevefenton.co.uk/2019/10/the-microservices-vs-conway-test/
Sat, 26 Oct 2019 16:00:54 +0000

Following on from my article on Mescoservices back in 2015, this article expands on an idea I had in September on how monoliths, mescoservices, and microservices fit into organisation design. The microservices vs Conway test encodes a common piece of advice into a first-draft formula for testing your architecture against your organisation.

Microservice Advice

Microservices offer several benefits, and also some cost. In return for increased complexity, you get to mix different technology and scale up the number of autonomous teams working on the platform. A simple way to look at this is to imagine a successful team working on a monolith who need to either broaden the scope of their application, or divide the work between themselves and another team. If they can find a seam that allows them to divide the monolith, each team can work autonomously on each of the new parts that have been created.

Not only does each team get to work how they want, using whatever tech stack they choose; they also get to work at their own pace without tripping over or impacting the other team.

This is just one example of the Inverse Conway Manoeuvre. Whereas Conway’s Law states that any application’s architecture will end up looking a lot like the organisation’s communication structure, the Inverse Conway Manoeuvre utilises organisation design to take advantage of Conway’s Law. Putting it bluntly, you fix the communication structure of the organisation to ensure that when Conway’s Law strikes, it results in the software architecture you intended.

So, it’s pretty common for people to give advice that includes organisation design, and warnings about the complexity trade-off when microservices crop up.

Microservices vs Conway Test

If we use m to represent the number of microservices you have, and t to represent the number of teams you have, we can use the following test to determine how well microservices fit into our organisation by testing the resulting complexity, which we’ll call c.

c = (m − t)²

If things go well, complexity will scale linearly with the number of teams and services, and the test will score zero. If you have too many services compared to teams, or too many teams compared to services, current wisdom says that things will be more complex. A negative difference (more teams than microservices) indicates the complexity of multiple teams tripping over each other as part of the delivery pipeline, for example multiple teams attempting to service a monolith. A positive difference (more microservices than teams) represents complexity that is being introduced without benefit; squaring the difference turns both into the same kind of score.

For example, one team on a monolith will score zero. Three teams working on three services will score zero. This doesn’t mean zero-complexity; it means the complexity and benefit are likely to be balanced.

To look at some negative cases, four teams working on a monolith will score 9, as will one team working on four services.
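
Expressed as code, the test is tiny. Here is a minimal sketch in TypeScript; the function name is my own, and the scores match the examples above.

// Microservices vs Conway test: square of the difference between services and teams.
// A score of zero means the architecture and the organisation design are balanced.
const conwayComplexity = (microservices: number, teams: number): number =>
  (microservices - teams) ** 2;

console.log(conwayComplexity(1, 1)); // 0 - one team working on a monolith
console.log(conwayComplexity(3, 3)); // 0 - three teams working on three services
console.log(conwayComplexity(1, 4)); // 9 - four teams working on a monolith
console.log(conwayComplexity(4, 1)); // 9 - one team working on four services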

The relationships can be described with the following examples.

The relationship for a single team is shown below. Complexity increases as more services are added to a single team. The more you hope to break Conway’s Law, the more the complexity hurts.

Complexity increases as more services are added to a single team

The relationship for a larger number of teams is illustrated below. Where we have five teams, we can survive give-or-take two either way. But if there are too few services, or too many services, we increase the otherwise linear complexity.

For five teams, complexity increases beyond a manageable level when there are too many, or too few services.

The complexity curve follows the assertion that the further you deviate from the team-per-service organisation design, the more complex things will become; no matter whether it is too many teams for the number of services, or vice versa.

Complexity is symmetrical based on deviation from balanced team and service numbers.

Comparing these figures to real-world examples, I would say that single-digit complexity is desirable unless you can take additional action to limit complexity.

Other Complexity Limiting Techniques

I’ll add more examples as they emerge, but Monzo (who have more microservices than most organisations) undertook an exercise to limit the connections. By making connections explicit, they prevented a situation wherein any service could talk to any other. This massively reduces the total complexity. For example, if you have 100 services all able to talk to the others, you have the classic (n x (n – 1)) ÷ 2 problem (4,950), but if you review and limit connections to a maximum of five dependencies, you limit the connections to just 500. If you review and limit connections, you can understand how much complexity a team has based on the exact number of connections a team must manage.
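
The difference between a fully meshed system and a reviewed, capped one is easy to check. Here is a minimal sketch of the two counts used above; the cap of five dependencies is just the figure from the example.

// Fully meshed: every one of n services can talk to every other service.
const fullyMeshedConnections = (n: number): number => (n * (n - 1)) / 2;

// Capped: each service is reviewed and limited to a maximum number of dependencies.
const cappedConnections = (n: number, maxDependencies: number): number => n * maxDependencies;

console.log(fullyMeshedConnections(100)); // 4950
console.log(cappedConnections(100, 5));   // 500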

Summary

Complexity: Microservices vs Conway Test

Having been careful to consider Conway’s Law, I have avoided designing teams and architecture in isolation from each other. I believe this is the only way to ensure the design of both is successful. If you don’t balance the design on both sides (the technology on one side and the people on the other), the complexity is damaging to both.

Why Devs (Should) Understand Estimates
https://www.stevefenton.co.uk/2019/10/why-devs-should-understand-estimates/
Thu, 24 Oct 2019 12:30:11 +0000

Yes, this is a sub-post! A reaction to a post titled “Why Devs (Should) Like Estimates”. I try not to get involved in industry conversations about estimation (or, indeed, #NoEstimates) as it can get very dicey at a general level in ways that simply don’t occur for a specific team or organisation. I’ll briefly qualify this before I continue with why devs should understand estimates.

When you sit down with the people who are spending money in the hope of some return (business), and the people able to take that money and generate something more valuable than the cash itself (technical), you can usually resolve the questions of whether estimates are needed, what form they will take, and how they will be used. It’s rarely a difficult conversation because when you know the purpose of an estimate, you can select a method that matches the need. This is very important, so I’ll do a dramatic inverse pull-quote…

When you don’t understand why an estimate is needed, you won’t solve the problem – even with an estimate.

There are two tools I use in respect of estimates: One is the phase precision premise, which identifies some broad classifications that apply to software development and should affect your decision on whether to estimate. The second is a simple decision flowchart for estimating / not estimating, which helps you decide whether the underlying need can be solved in a different way. This post builds on those tools and offers a critique of the “Why Devs (Should) Like Estimates” article.

Why Now?

I have stayed away from this debate for a long time, but the publication of the article on a website that has a massive influence with developers (the Stack Overflow Blog) has made it impossible to ignore. A great many people will take it for granted that the information is accurate, complete, and correct. It is important to draw attention to areas that you, dear reader, must look at in more depth before you eat the poisoned apple that has been placed in your hands.

In particular, I don’t want Yaakov Ellis to take any of this critique personally. In fact, the advice might work perfectly well within Stack HQ. The problems arise, as I’ve mentioned before, when local success is misinterpreted as general practice. In other words, I have worked on teams that never needed to provide estimates, but my general advice is not “don’t ever provide estimates”.

Brick by Boring Brick

Let’s take some quotes and nitpick them, hopefully constructively.

As a Principal Web Developer at Stack Overflow and a long time Tech Lead, I learnt that accurate estimates were essential in order for a company to be healthy and productive.

This is a generalisation. Many healthy and productive companies do not require estimates, let alone accurate estimates. If you have experience with ten companies and the five healthy-and-productive (HAP) ones used estimates, while the five un-HAP ones didn’t, you have a correlation based on a limited sample. You now need to collect a larger sample and remove other factors to find out whether this correlation is causation. I propose the theory that a representative sample will show that accurate estimates are not strongly correlated to HAP organisations (and are not the cause of health or productivity).

An estimate helps to plan and coordinate product releases, synchronize work with other teams, ensure that resources [and people] are allocated properly to meet the needs of the product

As many have described before (Woody, Neil, Duarte, et al) you can plan, co-ordinate, and undertake work without estimates. It is commonly said that the planning is more important than the plan. If you decomposed all of the activities that contribute towards the successful execution of a feature, it is likely you could remove the “estimation” component without much affecting the benefits described above.

and of course to enable accurate billing of clients when your team has been hired to do a job for an outside company.

Having worked within an agency environment in the past, I feel we need to distinguish between estimation and billing. If you are billing based on the estimate, for example as part of a fixed-price contract, it is important that across a number of projects you remain profitable. The bill will always be accurate in this scenario, because it will be for the exact amount agreed as part of the fixed price contract. You, as an agency, are being paid for the sum total of effort and to assume the risk. Your price should reflect this. If, though, you are working on a shared-risk model, such as an agile contract, you are partners in the endeavour and alternatives to estimation are available. For example, I worked on an agile contract where the client reviewed what had actually been delivered every two weeks, before deciding whether to continue funding the project. The client therefore assumes a capped two-week risk, and the agency is motivated to deliver value for money in each period to gain the renewal. No estimates are required in this case as the risk is capped and the decisions are based on real delivery, not estimated delivery.

as many of us have learned, while it is easy to give someone a number, it is much harder to give a number that is in any way accurate

As Robert C. Martin often points out, it is trivial to give an accurate estimate. It is also trivial to give a precise estimate. The only time it becomes difficult is when both precision and accuracy are needed. Without getting into lots of detail about the cone of uncertainty, or individual estimation methods, I’ll simply state that an “accurate” estimate means one that turns out to be correct (for example, between one and one-million days). A “precise” estimate is one that has a narrow range (for example, between three and five days). The cone of uncertainty shows us how likely we are to achieve both precision and accuracy on average across a number of projects, based on the stage in the project lifecycle. It should be possible to transfer this thinking between projects, products, and features with minor adjustments.

an accurate estimate will enable you to deliver your work with a high level of quality in the least amount of time

An organisation must exhibit a form of dysfunction if the estimates impact quality. To put it bluntly, the organisation has to choose that a date is more important than quality in order for this correlation to emerge. When this happens implicitly, it is best for software professionals to pause and ensure the decision is made explicit.

It will help you to avoid false starts and the pain of having to throw away code unnecessarily

An estimate does not prevent false starts. If the planned solution doesn’t solve the underlying problem you will have a false start, even with an accurate and precise estimate. Learning-based cycles such as DIBBs and DITE are designed to help you avoid investing too much in a false start. Unless you have no feedback loop, you should only ever need to throw away code necessarily.

It will help to minimize scope changes

No it won’t. When the scope inevitably does change, you will need to revise the estimate. Fixed price contracts are usually protected by a clause to ensure this, otherwise bankruptcy beckons.

It will allow you to structure and to plan out your work in the most efficient way that you can

We all seem to prefer the term “effective” rather than “efficient” these days, but in either case the planning is not the estimate. If the only way to ensure an appropriate level of thought is put into a task is to request an estimate, you have uncovered the signs of a dysfunction that could be fixed. Until it is fixed, by all means use the estimate as a way to promote a reasonable level of thought.

Estimation technique is a personal decision that each dev has to make for themselves.

Beware of a situation that may result in every individual using a different method to estimate. In fact, beware of a situation where any estimate is generated solely by an individual. You’ll be more successful if you use group techniques to expose and discuss differences in estimates. The differences are always interesting. In the sources I quote below, more than ninety-percent of group estimates were more accurate than individual estimates and the magnitude of the errors was reduced by half.

An estimate that is made by someone other than the person who will be doing the work is much harder to get correct

This is a near miss. In most modern software development organisations we don’t want to allocate tasks too early, so forcing the estimator to be the individual contributor actually doing the work will cause scheduling problems later on. What is true is that the people doing the work are better placed to provide accurate estimates than people not doing the work. In combination with the previous point, accurate estimates can be obtained from the group of people doing the work using an acknowledged estimation technique that takes into account the cone of uncertainty. Yes, individuals work at different speeds, or know different things; but the range you supply in an estimate can take this into account. Unless you work in total isolation, this shouldn’t make a material difference. The developer could pair with the current expert and complete the task in the anticipated time, thus giving you a second expert.

An estimate that is not based off of a strong understanding of the functional requirements will most likely result in an inaccurate estimate

I won’t pull out the many quotes from the article about refusing to estimate until the specification is final, but I’ll include a commentary on that here. From an industry perspective, estimating at different points in the software development lifecycle is possible and has known industry-level ranges of uncertainty… this is called the cone of uncertainty. I’ve mentioned it a few times already. If you want to be scientific about it, you could gather your own organisation-level cone of uncertainty, but it might not be any more useful than the industry one (as your one will be impacted every time something within your organisation changes, such as the individuals on the team).

Your estimate should end up setting a number of hours to complete the defined tasks

In fact, the granularity of your estimate should be adjusted to ensure you don’t give a false sense of precision. If you told me a task would take between 132 and 141 hours, you are suggesting you know more than you really do. It is better to say it will take between 5 and 6 days, which is materially the same information. If it really does take 132 hours, the 5-6 days estimate is accurate. If it really does take 141 hours, the 5-6 days estimate is accurate.
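
As a rough sketch of that granularity adjustment, the helper below reports an hour range in whole days. Treating the hours as elapsed (24-hour) time matches the 5-6 day range above; that assumption, and the helper itself, are mine rather than anything from the article being critiqued.

// Report an hour-range estimate in whole days so the stated precision matches what you know.
// Assumes elapsed (24-hour) days, which lines up with the 132-141 hours to 5-6 days example.
const hoursPerDay = 24;

const asDayRange = (minHours: number, maxHours: number): string =>
  `${Math.floor(minHours / hoursPerDay)} to ${Math.ceil(maxHours / hoursPerDay)} days`;

console.log(asDayRange(132, 141)); // "5 to 6 days"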

And There’s More

A more general problem I have with the article is that it seems highly skewed to project-thinking. Perhaps for those of us using a data-driven, insights-based approach, where we take small steps and validate our direction often, this kind of “ensure the spec and estimate are updated regularly” thinking simply doesn’t apply. It seems to be more closely married to a Waterfall approach, where things are “nailed down” too early and the feedback loop is too late. This is not my world. Even if it were, the advice does not align to some very well researched advice that is available from the authors I mention below.

Good Advice on Estimation

Steve McConnell wrote the definitive guide to estimation in Software Estimation: Demystifying the Black Art (Microsoft Press). This book is backed by a great deal of well researched information from industry sources. It debunks the aforementioned article in great detail, despite being written more than a decade earlier. Mike Cohn has also written a book I refer to often, Agile Estimating and Planning, which brings solid estimation advice to iterative software development.

Software Estimation Books

Remember, you don’t always have to estimate. The Phase Precision Premise and the Estimation Decision Flowchart may lead you to alternatives. When you do need to estimate, Steve and Mike have got you covered.

Disable Swipe Navigation in Chrome and Edge (Chromium Edition)
https://www.stevefenton.co.uk/2019/10/disable-swipe-navigation-in-chrome-and-edge-chromium-edition/
Wed, 23 Oct 2019 12:08:49 +0000

There’s a feature in Google Chrome and the new Chromium version of Microsoft Edge that navigates back or forward through your browser history when you swipe. It navigates on touch interactions, and also on track-pad interaction. If you use some web-based tools that feature horizontal scrolling (such as online Kanban boards) – this becomes infuriating.

After accidentally navigating for the 100th time today, I went and found the setting that is responsible for this behaviour.

You will find the settings on the flags page, which has a slightly different address depending on whether you are in Chrome or Edge:

Chrome

chrome://flags/#overscroll-history-navigation

Edge

edge://flags/#overscroll-history-navigation

Within the flags page, you’ll find an item titled “Overscroll history navigation”, which you can disable.

Overscroll Navigation History

You might also spot other settings that can be a pain in terms of accidental activation, such as pull-to-refresh.

Change those settings and re-launch your browser for a more enjoyable life!
