Blog and ramblings of a_c_m (Alex McFadyen). Likely topics include Software Development, [Type|Java]Script, React, NodeJS, GraphQL, Python, DevOps, Hardware, Raspberry Pi, Food, Quotes, Gadgets and all sorts of other stuff. All of my media are copyright me, or their respective owners. No reuse without permission. Ask if you want to use/license.
Text
I've been testing a few CMS/blogging solutions for work and personal usage. Ghost (https://ghost.org/) is really impressive; with the latest 4.0 release it's well worth checking out.
Text
Next Gen Note-taking?
Anyone else using Roam Research or similar?
Really interested to learn from people who have adopted a Roam like system, then abandoned it. Does it scale? Do you get an ROI? Why did you abandon it?
I've only recently discovered this way of taking notes. Up until last week, I had kept a daily note/dev diary, to track my work, in a markdown format.
Because of this I've gravitated towards Obsidian.md, which seems fantastic. I really like how the data is just markdown and stored locally. I'm still in the early days, however.
I will probably create a longer blog post (or perhaps series) on this topic - it's becoming quite the obsession.
Obligatory image of my current (very small) network:
Text
#NoEstimates ?
INDEX
TL;DR
Late to the party
Guilty by association
Inputs vs Outcomes
There is no cake
Are you OK?
How big is small?
Be Precise?
Do you trust me?
TL;DR
Just my 0.02 units of currency, but:
Don't confuse estimation with setting a final project delivery date
Consider estimates as an input into a process, not an outcome/performance
Estimates can help the team plan their work and spot bottlenecks/problems before they happen
Estimates can confirm and communicate a shared understanding
Velocity could be the answer - I'm not sure
Don't use estimates to measure output
Trust the team is working hard and the stakeholder isn't out to get you
LATE TO THE PARTY
Yup. But it does look like it's going on all night. The debate still rages, and who doesn't love a heated debate? https://twitter.com/search?q=%23NoEstimates
There are very smart people advocating for no estimates at all. There is a nice post by Malcolm Isaacs, called "The #NoEstimates debate", which seems to hit a lot of the high notes and key players.
As I have come to understand it, the origin of this debate was triggered by a 2012 blog post titled No Estimate Programming Series by Woody Zuill, where he outlined the case for #NoEstimates.
No point me rehashing the debate, it's better if you read the debate summary and origin post above. Then I can dive into my own 0.02 units of currency on the topic.
Why am I sharing? Because I want to test my hypothesis. By sharing my thoughts and take on this, I am actively seeking contrary/other/agreeing points of view, so I can be better informed and make the right choices for myself and my team.
GUILTY BY ASSOCIATION
Some criticisms of estimation, to me, seem to be linked to other processes people associate with estimation.
Using estimates to set hard delivery deadlines at the start of a project is (rightly) very unpopular. Estimation is being associated with that bad practice, but it's not the process of estimation at fault - it's how the estimate is being used.
Waterfall vs Scrum vs Kanban vs ?? That is another debate entirely. While making estimates can be a critical component of wider project management frameworks, I think it's important we look at estimates on their own merits (or failures) and not taint estimation with our feelings about related (and/or dependent) processes.
INPUTS VS OUTCOMES
I consider estimates to be part of the input, not the output.
The Accelerate metrics give us fantastic ways to measure software development team performance (see my Code Metrics post), backed by some impressive research. They measure outcomes, and there are a bunch of tools to help make this easy. None of the Accelerate metrics measure input; the book does talk a lot about inputs, but it doesn't measure them.
If you start using inputs as a measure of outcomes, you can get into an ugly place, fast. It sets up (and rewards) all sorts of bad behaviour, like padding estimates, and/or you see Parkinson's Law kick in.
THERE IS NO CAKE
I admit, the idea of not doing an estimate is appealing.
It's less work, one less thing to do, before we get to the fun part of creating. It is easy to revert to #NoEstimates in times of stress or urgency - we don't have time, it doesn't add value, just get it done. Right?
But. From the Original Post that started it all:
Ask Boss to select one “small” thing that she felt was very important.
Quite a bit to unpack here:
"Small" is an estimate.
So now we are talking about degrees of estimation, not #NoEstimation. Right?
How does the team/pm/dev know what "small" is? Relative to other tasks? Relative to total time of the project?
Why pick a small thing? Why not just the very important thing?
Even in #NoEstimate, we still expect estimates.
Even if you removed the word "small" and used any story without some kind of understanding of size, there is still the question of what the right size for a story is. How do you know when to break a story down more or leave it alone? What becomes our unit of work?
So we do estimate, the question is more about WHAT we do with the estimates.
ARE YOU OK?
If we do have estimates (and we do), what should they be used for? I would argue, as an input, they should be used to make sure the team members are OK.
By that I mean:
Is the team member stuck on their current task?
Does the team/team member have too much work?
Does the team/team member have enough work to be happy/busy?
It's hard to answer the above, without some metric or measure of input.
Looks like Mike has been on the same task for a week - is that OK?
Task was expected to be completed quickly = No, he may need help. We also may need to re-assess similar work in the future, what did we miss?
Task was expected to take a while = OK, keep going, don't interrupt.
5 tasks, no estimates, is that enough work for Jane for the next 2 weeks?
All tasks are small, expected to be completed quickly = Probably not, she may need more - but why does she only have small tasks?
A mix of small/large = Yes, probably
All tasks are huge and complex = More than enough, perhaps we need to see about sharing out the work better?
If we use estimates as a way to keep teams happy and productive (not as a measure of output/outcomes), I think we can keep a lot of the value from estimation. It's when estimates are used as a tool to measure output and/or fix deadlines that we get into trouble.
HOW BIG IS SMALL?
"It depends" → "It doesn't matter"
But then why bother at all?
Because, I would argue, the value comes from an agreed group understanding of complexity/likely time to deliver. It doesn't really matter how you measure it: hours, story points or story count. The value here is in (as a team) exploring and communicating what needs to be done and how hard it might be to do it, before you start doing it.
The process of estimation is high value. Estimating work means dedicating time to understand it, agreeing on that understanding and then communicating that understanding in a standardized way.
The act of discussing, and then estimating, can tease out unexpected or overlooked complexity, misunderstandings and gaps in understanding. If 3 devs agree, but another has a very different estimate, that is something to be discussed.
Small? I've found that once a single task gets over a few days, all bets are off, so we should break it down and/or get more context. For me, an upper bound on tickets should be a few days.
What metric you use is much less important than the act of discussing and agreeing. And, once you have that agreement, that metric can be communicated to others, who can take it into consideration when deciding on the importance of any given task or group of tasks to achieve an outcome.
BE PRECISE?
Precise estimates and other logical fallacies.
Another key criticism of estimates is that they are not accurate, and so are not worth the time. As I discussed above, I don't think this is true. The value is in the process, as much as the output. Refining the accuracy of estimates is useful in as much as it helps the team communicate, coordinate and better manage their own time and input - not output.
They get even less precise if they are used as a measure of output, as they are easy to game, pad or otherwise manipulate.
WHY NOT VELOCITY?
In many #NoEstimate posts, they point to looking at velocity (tasks/stories completed) as an alternative to estimating.
Velocity is not without its drawbacks, it can be manipulated and may incentivize the wrong behaviour depending on how the metric is used.
But, even if you skip estimation and use velocity, you are still actually estimating up front, to slice up the work into "small" story units to work on. And you lose resolution: a 1-line typo fix in a field label vs adding a new database abstraction layer - both could be a single story, but with very different estimates, and they are not equal.
It feels like trying to measure an input with an outcome. And you are going to miss out on the other benefits of estimation if you skip the process. But perhaps it's fine - I'm not sure.
DO YOU TRUST ME?
The final key criticism of estimates I want to consider is fear. Fear that the estimate a team creates will be used as a tool to punish or pressure them to do things that are unsustainable, unreasonable or unachievable, and which then cause undue stress and disappointment.
Yes. This can (and does) happen. It shouldn't, but it does. Will abandoning estimation fix this? I don't think so. It might move the conversation from quantitative to qualitative - which may buy time or give the louder voices a chance to control the narrative a bit. But if the manager/business/client etc. already doesn't trust a team, a number (or the lack of one) isn't going to change that.
Text
Code Metrics?
Index
The 1st Question
The 2nd Question
The Background
The Metrics
The Tools
The Prices
The Conclusion
The 1st Question
"Q: How can we measure software development teams performance?" "A: Metrics!"
But, of course, then :
"Q: Which metrics?" "A: ...?!"
Not so easy.
Lines Of Code: Bad. We all know this. Too many lines = bloat, too few = complexity.
Commits: Ugly. Could be gamed, very simply. Unclear value.
Velocity: Ugly. Could be gamed. Is relative.
Utilization: Ugly. 100% isn't a good thing.
Features: Ugly. Lots of features being released isn't always what you want.
Etc etc. There are lots of bad ways to track software development performance.
Lucky for us, the hard work (and hard research) has already been done by other people.
The 2nd Question
So we can, but...
"Q: Why are we measuring performance?" "A: ...?"
So teams can improve and so we have the tools to communicate the impact of changes and decisions on those teams.
Did the re-org last month help us?
How are we doing this month vs last?
Is code debt catching up to us?
Etc. Without metrics, it becomes a more subjective discussion. We love numbers.
Also, the right metrics can give engineers a way to communicate harder-to-value work: root causes, technical debt, training, innovation and investments can be measured and valued - if we have the tools to do so.
The Background
A book called Accelerate, published in 2018, focuses on this question and answers it.
Accelerate: The Science of Lean Software and Devops: Building and Scaling High Performing Technology Organizations
It says good measures have 2 characteristics:
They focus on the global, not the local - so teams are not misaligned; they instead push towards the same goal.
They focus on outcomes, not output - busy work isn't helpful.
The book has a LOT more than this, well worth reading if you haven't already.
The Metrics
Cycle Time: The time a task takes from the start of development to being live in production.
Change Failure Rate (CFR): The percentage of deployments we have to roll back or apply a hotfix to.
Mean Time To Recovery (MTTR): Hours elapsed from the start of a service failure to system recovery.
Deployment Frequency: How often we deploy to production.
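As a minimal sketch of what these metrics look like in code: the deployment records, field names and numbers below are all made up for illustration. In practice this data would come from your CI/CD and ticketing systems, and MTTR would be driven by incident data rather than deploy records.

```python
from datetime import datetime

# Hypothetical deployment records (illustrative only).
deployments = [
    {"deployed": datetime(2021, 3, 1), "started": datetime(2021, 2, 26),
     "failed": False, "recovered": None},
    {"deployed": datetime(2021, 3, 3), "started": datetime(2021, 3, 1),
     "failed": True, "recovered": datetime(2021, 3, 3, 4)},  # hotfixed 4h later
    {"deployed": datetime(2021, 3, 5), "started": datetime(2021, 3, 4),
     "failed": False, "recovered": None},
]

def accelerate_metrics(deploys, period_days=7):
    """Compute the four Accelerate metrics over a list of deploy records."""
    cycle_times = [(d["deployed"] - d["started"]).days for d in deploys]
    failures = [d for d in deploys if d["failed"]]
    recovery_hours = [
        (d["recovered"] - d["deployed"]).total_seconds() / 3600 for d in failures
    ]
    return {
        "cycle_time_days": sum(cycle_times) / len(cycle_times),
        "change_failure_rate": len(failures) / len(deploys),
        "mttr_hours": sum(recovery_hours) / len(recovery_hours) if recovery_hours else 0.0,
        "deploys_per_week": len(deploys) / (period_days / 7),
    }

print(accelerate_metrics(deployments))
```

The dedicated tools below do essentially this, just with real data extracted from Git and ticketing integrations.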
The Accelerate book goes into a lot of detail about why these are good, how they correlate to high performing teams, and benchmarks for team categorization. That's outside the scope here - step 1 is to track them.
Tracking and optimizing for these metrics can have a profound impact on product delivery. It is also an excellent way to have those harder conversations with stakeholders around things like technical debt, re-platforming or other larger non-feature-led work.
The Tools
OK great. How do we track this? It will depend. Each metric may need its own solution.
Spreadsheet: Can work, but it could be a lot of work.
Ticketing system: To the extent they support it, it's good, but you likely won't get exactly what you need.
Dedicated solution: Will give you the exact data, but you pay for it - is it worth it?
Spreadsheets
Can be an OK place to start, or a stopgap if tracking it any other way is even more time consuming. But generally you want to avoid this - treat it as a last resort and look to automate instead.
Ticketing system
It can get quite close with the data it has. JIRA, for example, offers a range of reports (as do other tools). The closest in JIRA is the "Control Chart".
You get cycle time, which is a lot of the value, plus interesting (if a bit confusing) graphs. The other metrics require additional data (e.g. deployments, failures, etc.) which this chart doesn't show and the tool itself may not have access to.
For some teams, the Control Chart + spreadsheet could be enough.
Dedicated Solutions
Accelerate metrics are popular, so people have built things to automate/improve/display the metrics.
There are quite a few companies which do this; most tie into your ticketing and source control systems to extract the needed data. A quick survey of the marketplace gave me the following services:
LinearB (https://linearb.io/): "We correlate and reconstruct Git, project and release data so you get real-time project insights and team metrics with zero manual updates or developer interruptions."
Haystack (https://www.usehaystack.io/): "Remove bottlenecks, optimize process, and work better together with insights from your Github data."
Pluralsight Flow (https://www.pluralsight.com/product/flow): "Accelerate velocity and release products faster with visibility into your engineering workflow. Flow aggregates historical git data into easy-to-understand insights and reports to help make your engineer teams more successful."
WayDev (https://waydev.co/): "Waydev analyzes your codebase, PRs and tickets to help you bring out the best in your engineers' work."
CodeClimate (https://codeclimate.com/): "Velocity turns data from commits and pull requests into the insights you need to make lasting improvements to your team's productivity."
Plenty of options, they provide very similar services (with their own take/pros/cons).
They all provide some or all of the accelerate metrics, but their ability to fit your team is something you will need to test for yourself.
Luckily, integration with these services seems to be VERY fast: you hook up your version control and, optionally, your ticketing system, and they provide instant value/data.
The Prices
Pricing isn't simple; each SaaS solution has different price breaks and ways of pricing/scaling/segmenting - but all seem to price per dev.
For my needs, I'm thinking about total yearly cost, so I did some quick maths.
The following is based on a tier which includes being linked to a ticketing system to maximize the features/metrics available.
Google sheet of costs
I went with Fibonacci increments of team size, to cover most cases; you can copy the sheet if needed.
As you can see, there is quite a difference in price, with the cheapest options depending on team size.
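The maths behind the sheet is simple enough to sketch. The vendor names and per-dev prices below are made up for illustration (real pricing varies by tier and changes often):

```python
# Hypothetical per-developer monthly prices in USD (illustrative only).
PRICE_PER_DEV_MONTH = {"Vendor A": 15, "Vendor B": 25, "Vendor C": 40}

# Fibonacci-style team sizes, to cover most cases.
TEAM_SIZES = [1, 2, 3, 5, 8, 13, 21, 34]

def yearly_costs(prices, team_sizes):
    """Total yearly cost per vendor, for each team size."""
    return {
        vendor: {size: price * size * 12 for size in team_sizes}
        for vendor, price in prices.items()
    }

costs = yearly_costs(PRICE_PER_DEV_MONTH, TEAM_SIZES)
print(costs["Vendor A"][8])  # 15 * 8 * 12 = 1440
```

Even with made-up numbers, the per-dev, per-month model means costs scale linearly with headcount - which is why the gap between vendors widens fast as teams grow.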
The Conclusion
Performance of software teams can and should be measured
Accelerate Metrics (https://amzn.to/3e9EPcK) are the industry standard
Tools can help, but won't cover every case
Small teams can get services for free, with low adoption time
Larger teams should probably see if they get value: consider the TCO carefully and start with a trial/cheapest option before going all in on a large additional spend. It can add up FAST.
What did I do?
Currently testing LinearB with several teams, which have so far seen good value from the data.
Photo
Mini Eggs + Microwave?
Oh yes. Try it. Melted chocolate with crunchy outer shell - so good.
Steps:
Get a few mini eggs
Resist the urge to eat them right away
Put them in a microwave-safe plate/bowl (out of the packaging)
Microwave for 10-30 seconds (try in 10 second bursts, then taste test)*
Enjoy!**
* Warning: contents may be hot. ** Warning: you may find yourself eating the entire bag.
This culinary innovation was brought to you by my co-workers in the BEN London office, specifically @AbiMieszczak!
Happy easter egg season.
Text
Insightful videos/books
What books or videos have you watched which were aha moments or step changes in your thinking? I'm specifically interested in software/management topics, but would love recommendations from across the spectrum.
For me, in the last year, one of my key aha moments came from this video:
And then the book by the author (https://amzn.to/3kJxEcf) - I found the video long after the original talk was given, but it's still excellent.
Team Topologies focuses on the idea of Conway's law ("Organizations will build systems which are copies of the communication structure of the organization") and on trying to build software that fits in your head (optimizing for cognitive load). It gives practical and actionable advice, and after applying it, I've seen big improvements.
What was your aha of the last 12 months?
Quote
Techno fixes are not sufficient, but they are necessary
Bill Gates, How To Avoid A Climate Disaster
I love this. I find quotes like this an extremely useful way of keeping myself on track - do you have any sayings or quotes that support or direct your day to day?
Bill made this statement in the context of solving the climate crisis, where it makes a lot of sense, but it also (for me) applies to my professional work. Like Bill, I'm a technophile, always looking for a technological solve, but while tech is key, it's not enough on its own. Of course I know this, but a pithy reminder like this helps keep it front of mind.
Got any other good ones?
Link
BEN is hiring again! Javascript full stack, remote position.
Link
Some nice tips in here; I disagree with some (e.g. we don't use SFCs) but the statement "Code is meant to be read by humans" is very true. Optimise for the humans.
Link
Interesting tech, if ambient temp stays over 18°C: making your own cooking gas from waste food.
Link
Drones are getting smaller, smarter and yes... I want one.
Link
Because every now and then... you mess up - luckily git has your back.
Link
Get the facts about how homeopathy works. http://www.howdoeshomeopathywork.com/
Link
We've been using this at work - it's so much more than it seems. Try it. The way you can build components in an isolated way really promotes the best of React.
We've been using it in conjunction with Atomic Design (http://bradfrost.com/blog/post/atomic-web-design/) and it seems to be an excellent marriage.
Link
The DeepMind folks are at it again. Impressive stuff!
Link
Using Slack as a central tool in both triage and resolution of DevOps-related issues. Very cool; it also links to things like "ChatOps" https://www.pagerduty.com/blog/what-is-chatops/ - really good post.