ridgemoor-blog
More Ridgemoor
20 posts
Random blog articles from the author of "Ridgemoor" posted as and when he feels so inclined.
ridgemoor-blog · 2 days ago
Downloadable certificate for Ridgemoor Oath signatories
So you can proudly demonstrate your commitment to responsible AI use, there's now a certificate you can sign to take the Ridgemoor Oath.
It has two variants: a FULL certificate (containing the oath wording) or a SIMPLE certificate (just asserting the commitment).
Both are available to download at https://drive.google.com/drive/folders/1p4P-lMPX4HsiXd55Ja46k94YgQKvZ7fR?usp=share_link.
Suggested use:
Download your preferred template from the folder at the link above
Edit it to complete the fields where prompted
Save the completed form as a PDF file
To encourage others, please post on LinkedIn with #RidgemoorOath
... I'll be printing mine too.
Happy pledging!
ridgemoor-blog · 2 days ago
Triple-lock AI ethics
Image created with Midjourney, 2025
A few people have asked me if the Ridgemoor Oath is really the best way to reduce the risk of socially destructive outcomes from AI implementation.
Some seem to believe that controlling AI is best left in the hands of private companies, which will after all seek to protect their reputations.
Sure, but they'll have to balance that against their ever-unsated appetite for growth and profitability.
Others suggest leaving the problem to governments, who can regulate the use of AI for social good, and establish clear rules for other organisations to adhere to.
Yes, but how do we ensure that all governments regulate adequately? If we can't, won't bad actors just transfer their operations overseas, where they can be sure of plenty of oversights in the oversight?
Let me be clear: I absolutely agree that organisations ... and governments ... and inter-governmental agencies ... should all establish clear, tight, visible codes for responsible technology use. I'll be the first to cheer them on.
But I think we can reduce still further the enormous risks posed by AI by tapping into the humanity of human beings.
As Robert Cialdini highlights in his book, Influence, humans have a deep-seated need to behave in ways consistent with their prior assertions and commitments. So, if people sign the Ridgemoor Oath (or a near equivalent) and proudly go public about having done so, we can expect far more vigilance from them thereafter.
So, I propose a triple-lock. Let's all keep encouraging public and private organisations to make specific commitments to use AI responsibly. But let's make our own individual commitments too. Should enough of us do that, we can expect organisations to come on the journey that much more readily.
ridgemoor-blog · 3 months ago
The Ridgemoor Oath for the responsible use of technology
(v1 Alpha, 3rd April 2025)
"En todos mis esfuerzos en el ámbito tecnológico, yo, [humano/motor de IA], prometo actuar con la intención de mejorar de forma segura, protegida y justa los resultados para las personas, las sociedades y el planeta.
"Me detendré a considerar las posibles implicaciones antes de resolverme a actuar, cualesquiera que sean las directivas de otros.
"Prometo respetar con compasión y confidencialidad los derechos, las libertades, las necesidades y las opiniones de quienes puedan verse afectados por mis acciones. Y obtendré su consentimiento previo e informado cuando exista riesgo de daño.
"No buscaré beneficios personales con mis actividades, excepto aquellos que se consideren legítimos en el desempeño de mi función.
"Prometo reflexionar sobre mis propias limitaciones y falibilidad, y trabajar para evitar sus efectos adversos. Seré transparente sobre la información que utilizo, sobre mi razonamiento y sobre cómo se aplica este juramento a ellos. Me haré responsable de mis acciones y espero que otros también me exijan responsabilidades.
"Crucialmente, también prometo influir en los actores humanos para que adopten este juramento y construir sistemas de inteligencia artificial que prioricen estas promesas sobre cualquier otro motivo."
Firmado: ________________ Nombre: ________________ Fecha: __________________
Para más información (en ingles) sobre el origen del juramento, consulta la entrada del blog complementaria, aquí.
ridgemoor-blog · 3 months ago
Writing the 'Ridgemoor Oath'
Vanessa laughed again. "But I just don't get what motivated you to write it. It's not something most people would even consider."
Jeremy agreed.
"Probably true. But look, for 30 years now I've been worried about abuses of technology, and the rise of GenAI really brings the problem to a head. Then recently I watched a webcast presented by Stephanie Hare. In it, she mentioned an idea postulated a few years ago by academic Hannah Fry for a modern equivalent of the Hippocratic Oath suited to computer engineers and others. I couldn't have agreed more! But there didn't seem to be anything like that publicly available - at least nothing from the GenAI era - and nothing designed for both people and AI systems to swear to. So, because I couldn't see what the academics were offering, I thought I should put something out there myself."
"Okay, well that makes sense. And, by the way - I really like it! Maybe I should have said that at the start of our chat? So ... I was wondering, was it easy to write?" Vanessa asked, sliding her smile across to the barista staggering across the forecourt in a struggle to avoid spilling their coffees.
"It wasn't hard to write. The first hurdle was overcoming imposter syndrome. I'm not a professor of ethics or anything, but I've been around for a while, worked across multiple sectors, and seen bad things happen when ethics aren't front-of-mind. And I think I can string a sentence together fairly well. So I thought, 'Why not give it a go? This is important.' I didn't feel that the first version should be sacred. Others would be very welcome to improve on it, and I could set time aside to work on updates.”
"Actually, I tried to improve it," Vanessa conceded. "But I couldn't! At least, not without making it a lot longer." Then she paused, considering her next question.
"So, what steps did you take to build it?"
"I started with research, of course," Jeremy began. "I checked-out a few variants of the current Hippocratic Oath. Did you know that there are hundreds of them out there? I kind-of assumed there was just one! Then I realised I could scale-up quickly by asking a couple of LLMs to find me a larger set of oath variants, and pre-screen them too - to create a summary table of their component themes."
"Makes sense. I'm always amazed when people don't use LLMs these days."
"Agreed! And, as expected, it worked pretty well. But I wanted an oath for professionals working with technology - not just for medics. That meant a few themes dropped, and a few others had to be introduced or adapted."
Vanessa nodded. That seemed inevitable.
"Also, I was keen to make sure that any oath I created wasn't actually AI-authored. It just felt inappropriate - I'm not sure I could adequately explain why. But anyway, I wrote the first pass myself, once I finished the research. It looked pretty good ... of course, we tend to like our own work, don't we?"
Vanessa shrugged noncommittally. She wasn't sure that always held true, but wasn’t surprised to hear Jeremy say it.
"So, I invited some LLMs to critique my first version. That helped, and I made a couple of small changes. Then, to make sure I was taking all the useful input I could, I asked the LLMs to write their own versions too - under strict instruction to include all the themes I'd identified as important. That was a little less useful, but after a lot of eyebrow-raising I worked-out what additional tweaks to make to further improve it. Throughout, I was keen to keep the oath as short as possible - though no shorter! - and I also wanted to keep the language simple, in the hope of more accurate translations away from English."
"Okay. That's smart. Good plan," Vanessa conceded.
"... and I consulted Casey of course!" Jeremy admitted. Suddenly aware that he'd talked a lot, Jeremy switched topic.
"How are things with you, anyway?"
"Well, even though she's claimed for years that my CDIO role made no sense, it now looks like Maddie might pursue a career in tech!"
"You've been a great role model for her," Jeremy acknowledged. "I'm not surprised."
"Maybe she'd like to sign the oath? I'll certainly suggest it to her!" Vanessa concluded, suddenly wincing as she checked the time. "Actually, I said I'd pick her up from school 5 minutes ago. See you next week?
"Gladly!" Jeremy enthused. "Maybe by then you and she will have worked-out all the flaws in the oath!"
Read more about Vanessa in 'Ridgemoor: A Novel Exploration of Agile Benefits & Blockers' (ISBN-13 978-1634625692) available in paperback and Kindle formats.
ridgemoor-blog · 3 months ago
The Ridgemoor Oath for the responsible use of technology
(v1 Alpha, 3rd April 2025)
"In all my endeavours in technology, I, [human/AI engine], promise to act with the intent of safely, securely and fairly improving outcomes for people, societies and the planet.  
"I’ll pause to consider possible implications before resolving to act, whatever the directives of others.
"I promise to compassionately and confidentially respect the rights, freedoms, needs and views of those who might be affected by my actions. And I’ll gain their informed, prior consent where there's risk of harm.
"I won’t seek personal gain from my activities, except that which is commonly understood to be legitimate in the performance of my role.
"I promise to reflect on my own limitations and fallibility, and work to avoid their adverse effects.  I’ll be open about the information I use, about my reasoning, and about the ways in which this oath applies to them.  I’ll hold myself to account for my actions, and expect to be held to account by others too.
"Crucially, I also promise to influence human actors to adopt this oath, and to build artificial intelligence systems that give precedence to these promises ahead of all other motives."
Signed:  _________________
Name:  __________________
On date: ________________
Ready to sign? Check out the free, downloadable templates.
Or, for more on how the oath originated, check out this companion blog featuring Vanessa from the Ridgemoor book. Or try this post, which explains why we need individual commitments on the ethical use of AI.
ridgemoor-blog · 4 months ago
Your AI operating model should answer 35 questions
If ever there was a fast-moving, disruptive technology – this is it. Crawling caterpillar-like for decades, AI was grounded in the realms of rules-based process automation, before flying from its chrysalis in 2023 to capture worldwide attention in the butterfly-like form of GenAI.
We spent 2024 recovering from the shock and reviewing the risks, but did little to mitigate them. And now, in 2025, they're largely forgotten - with all attention on the AI opportunity.
Whatever ‘opportunity’ means to your organisation, you’ll face some common challenges in adopting AI sensibly – to strike the best balance of risks and rewards. Note that this balance will be specific to the organisation's mission and purpose, ownership, strategy and appetite for risk.
An exercise in operating model design is needed here - but it’s something very few people are talking about.
In case the term is new, an organisation’s ‘operating model’ is the set of decisions about how the organisation should work (AKA 'how things get done around here').  As such, it answers questions about the roles and skills that are needed, how they’re organised, what processes are followed, what oversight, measures and controls apply to those processes, the information, tools and other assets needed for all that, and the locations where it’s all done.
An operating model is about getting some control and consistency, so things can be done consistently well everywhere.  But it needn't constrain creativity – indeed the best operating models I'm involved with do exactly the opposite.
It's often desirable to revisit an operating model in the light of new technology (e.g. cloud, IoT, blockchain, ...). But ubiquitous, low-cost, intelligent computing brings a few new questions too - operating model questions that simply didn't arise before 2023, and that even now have garnered precious little attention.
Organisations might like to ask themselves …
On staffing
1.1 which roles are going to be critical to our AI success?
1.2 which recruitment battles do we need to win, and how are we going to do that?
1.3 given that substantial AI experience is extremely marketable, how will we retain key people who make a difference?
1.4 how do roles change more broadly around the organisation?
1.5 how are we going to train and enthuse the wider organisation quickly, so they can play their parts?
1.6 which skills will morph or decline, as AI engines pick up tasks themselves?
On organisation
2.1 what new organisational capabilities do we need (e.g. specialist market research, data preparation, procurement, design, prototyping, …)?
2.2 for now, should AI be loosely federated across business units and OpCos? or centralised? or do we run a tailored hub-and-spoke system?
2.3 who will run what parts of the resulting organisation?
2.4 what information and other resources do they need to do this well?
On supply
3.1 what tools will help us (e.g. for wireframing, LLMs, hosting, training, testing, …)?
3.2 how can we (possibly!) keep pace with AI market development?
3.3 how will we detect when there’s a better supplier solution to try?
3.4 how rapidly do we need to react to these changes?
3.5 what measures should apply in adopting supplier solutions so as to allow those supply chain swaps?
On AI as the operating model
4.1 should we consider devolving some parts of our operating model into AI itself, so processes, thresholds, controls etc. adapt intelligently?
4.2 where should this happen automatically?
4.3 where should it happen with human approval?
4.4 where should we draw the line here? and why?
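To make this group of questions concrete, here's a minimal sketch (in Python, with every control name and policy choice invented purely for illustration) of how questions 4.2-4.4 might resolve into explicit policy: some controls an AI may retune automatically, some it may only propose changes to, and some it may not touch at all.

# A minimal sketch: which operating-model controls an AI may retune,
# and which need a human in the loop. All names are hypothetical.
OPERATING_MODEL_CONTROLS = {
    "ticket_triage_routing":   {"ai_may_adapt": True,  "human_approval": False},  # 4.2: adapts automatically
    "fraud_review_threshold":  {"ai_may_adapt": True,  "human_approval": True},   # 4.3: AI proposes, a human approves
    "spend_authorisation_cap": {"ai_may_adapt": False, "human_approval": True},   # 4.4: a line we choose not to cross
}

def change_allowed(control: str, proposed_by_ai: bool, human_approved: bool) -> bool:
    """Gatekeeper for any proposed change to an operating-model control."""
    rules = OPERATING_MODEL_CONTROLS[control]
    if proposed_by_ai and not rules["ai_may_adapt"]:
        return False
    return human_approved or not rules["human_approval"]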
On decisions
5.1 which additional decisions can now become evidence-based?
5.2 which AI engines can be trusted to assist this?
5.3 how much evidence is adequate for humans to have confidence, especially given hallucination rates?
5.4 how much AI explainability is needed to enable that?
On responsibility
6.1 what constitutes responsibility, and to whom should we act responsibly?
6.2 how much uncertainty about negative impacts is acceptable, and can it be allowed to depend on the potential gains?
6.3 should we adopt some universal absolutes, e.g. ‘do no harm’?
6.4 how should our rules for responsibility be enshrined in policy, in employment contracts, in incentives etc.?
6.5 what committees will handle finely-balanced judgments, and with what rules and incentives?
6.6 what special treatments will apply to AI-related risks? 
On sustainability
7.1 given the inscrutability of AI’s environmental costs, how much AI is it right to consume?
7.2 does that depend on potential gains?
On delivery
8.1 what tools should be promoted, and which should be tolerated (e.g. for wireframing, data engineering, code assistance)?
8.2 what should be the basic solution delivery lifecycle?
8.3 what local adaptations are acceptable?
8.4 ... and critically, how do we develop a fully responsible, Agile culture, to get the best out of AI?
Lots of questions. But we should avoid building a new organisation here, with its own rules. The opposite should be the case - most of what’s determined in the AI operating model design will apply across the board – not just in some AI offshoot of the main organisation. 
In any case, these questions should be prioritised and approached incrementally. They needn’t all be answered on day 1.  To get the best out of AI, and to do so safely, we should expect to evolve our AI operating models as our understanding of AI grows. This means dedicating time and energy to nurturing it - just as we would a digital product - especially in the first 6 months, or when there are radically new market developments.
It's an active area of interest for me. So, please do get in touch if you think I can help - even if you just need a sounding board.
ridgemoor-blog · 4 months ago
A study of case studies
I once worked for a company in which the following (possibly apocryphal) story abounded …
The company was assessing a new technology product, considering a purchase. A US executive in the company had been evaluating the technology, had read a few case studies, and felt able to assure the Chief Executive that it worked in practice.
'Yes,' retorted the wily CEO, 'but does it work in theory?'
The first time I heard this story I was attracted to this unusual line of argument. But it took me a while to work out why.
We're conditioned to expect case studies and to accept them as a form of proof. But every time someone presents one I find myself thinking about the hundreds of other cases I'm not seeing. The failed cases.
I guess that for every case study shared with potential buyers there are somewhere between 2 and 200 others that remain in the shadows … and with good reason!
Following this line of argument, the case studies you do see are really pretty anecdotal. A rigorous thinker won't expect it to start snowing right after a middle-aged accountant buys a copy of the Times just because she saw it happen once … or even twice.
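A toy simulation makes the selection effect concrete (a minimal sketch in Python; the 50 deployments and their scores are invented purely for illustration):

import random

random.seed(42)
# Imagine 50 deployments of a product, each with an outcome score centred on 50.
outcomes = [random.gauss(50, 20) for _ in range(50)]

showcased = max(outcomes)                        # the one that becomes the case study
typical = sorted(outcomes)[len(outcomes) // 2]   # the median deployment

print(f"showcased case study score: {showcased:.0f}")
print(f"typical deployment score:   {typical:.0f}")

The showcased score tells you about the tail of the distribution, not its middle - which is exactly why 'show me your worst case study' is the more revealing request.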
INSEAD professor Erin Meyer gets into this topic in some detail in her excellent book, 'The Culture Map'. Reading it reveals that I'm actually French (something I hadn't previously realised). In the book she explores 8 aspects of the way we think and behave for which our outlook typically depends on our nationality. The aspect she calls 'Persuasion' is the one that's relevant here. It turns out that in the US, Australia, Canada and the UK most of us are attached to factual examples of the application of an idea, whereas people in France, Spain, Italy etc. tend to take a diametrically opposed view, and so are more likely to find assurance in the strength of the underlying concept.
As I say, I'm French - at least in this respect. And I find it hard to accept that a case study presented by a salesperson is typical. Rather, I feel sure it's just the best of an otherwise disappointing set of stories. I find myself itching to ask:
Q. How many more times has this been deployed?
Q. On how many of those occasions did things do this well?
Q. Could we see your worst case study too?
But these questions don't prove as useful as they sound. So, when I'm evaluating anything I always look first at its qualities, its structure and its pedigree. Show me the case study last - if you must - but I'll be thinking 'meh' the whole time you're talking. Mind you, please do send me a copy, because if I'm convinced by the theory, the case study will surely help me to convince many of my colleagues.
Note that the views expressed here are mine alone, and don't necessarily reflect those of any past or present employer.
ridgemoor-blog · 5 months ago
Wow! Carla, the Vice-Chancellor of Ridgemoor, just got interviewed about the book. She mentions some of the experiences she had with Vanessa and the others, and offers her insights. The whole 5-minute interview is in the attached MP3 file. Hope you enjoy!
ridgemoor-blog · 7 months ago
Why we wait
A guest blog by Sukhi ...
Created with Bing Image Creator, 10th December 2024
So much waiting!
Maybe it's been an unlucky month, but I'm noticing more and more how much I simply have to wait around for other people to do things.
It's not like this in 'Space Ace'. I guess that's one of the reasons why I like gaming - no waiting around for other people.
We British are stereotyped for our love of queueing (the 'national sport') but if that were ever true, those days are certainly long gone. Now, I think most of us thoroughly dislike waiting—especially for something simple. I know Vanessa feels the same way, although she's a bit more patient than I am.
So why are we seeing more of this all the time? It's pretty ironic, in this digital age, when so many things seem to move so much faster.
Well, we've got increasing strain on public services like healthcare. So that's a factor. But there's something more insidious going on. Something that seems to apply equally well across healthcare, and higher education, ... in fact across all industries. We're all trying to do too many things at once. We're caught up in the drive for efficiency.
Colin told me a couple of years ago that the main reason houses sell so slowly in the UK is because conveyancers (the lawyers handling moves) each run over 100 cases at the same time. That's mind-blowing if it's true. But even if the number is half that, I find it almost impossible to imagine how they stay focused. And with so much context-switching, it's hard to imagine how they would ever complete even one case. Certainly it would explain why so much chasing is needed on both sides of the sale.
We might expect that efficiency and speed come together, but I've been polling opinion in the Digital and CTO communities, and interestingly, efficiency is often at odds with speed.
When we seek to maximise efficiency, blinkered to other objectives, we often home in on spare cycles and fill them with extra work. But putting more load into a system tends to create more queues (see Little's Law). It can easily overload other parts of the system, and it also makes the part we're focusing on less tolerant of any unplanned load.
In fact, graphing the implications of the law shows that wait times increase dramatically once any part of a system is busy more than 80% of the time. If we want speed in any unscheduled system, we need lower utilisation.
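Here's a minimal sketch of that curve, assuming the textbook M/M/1 queue as a stand-in for any unscheduled system (an idealisation chosen purely for illustration, not a model of any real team):

def average_turnaround(utilisation: float, service_time: float = 1.0) -> float:
    """Average time a job spends in an M/M/1 system (queueing plus service)."""
    if not 0 <= utilisation < 1:
        raise ValueError("utilisation must be at least 0 and below 1")
    return service_time / (1.0 - utilisation)

for busy in (0.50, 0.70, 0.80, 0.90, 0.95, 0.99):
    print(f"{busy:.0%} busy -> turnaround {average_turnaround(busy):6.1f}x the work itself")

At 50% utilisation a job takes twice as long as the work itself; at 80% it's 5x; at 95% it's 20x. The last few points of 'efficiency' are bought with a many-fold increase in waiting.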
And if we're waiting for someone else to do something, but still measured on utilisation, how do we react? Instead of waiting patiently, we go off and do something else, meaning others are now more likely to be waiting for us! There's a compound effect here that's often overlooked.
Turns out that the whole world is caught up in this outmoded mindset.
On the bright side though, it's a great argument for digital transformation. If we can automate work, we can speed it up, and allow one human to self-serve at their own pace.
And it's a great argument for Agile too. At Ridgemoor we're always keen to 'unspecialise' our teams, creating fast-moving, multi-skilled squads so that (even with work we can't codify in advance) we can get stuff done without sitting in a succession of queues. Every time an unfamiliar task comes along, we just get on with it, instead of looking around for the cavalry to come.
So that's my message of hope. And another reason I'll be advocating for Agile everywhere I go! Looks like Carla is inspired to take up the cause, and I'll be first in the queue to help her!
The views expressed in this article are Sukhi's alone, and don't necessarily reflect those of anyone in the real world ... except perhaps Jeremy's.
ridgemoor-blog · 7 months ago
Book launch, 18th November
I'm delighted to be able to announce the launch of my new book, 'Ridgemoor - A Novel Exploration of Agile Benefits and Blockers'.
It's written in the 'business novel' genre, so hopefully it's a fun and accessible read. I certainly had fun writing it.
The setting is a large, established organisation (in this case, a University) which is struggling to get Agile implemented properly, having hit a number of common obstacles. The lessons learnt by the characters should apply to thousands of other organisations out there.
The paperback is already available, and the Kindle edition should be hot on its heels.
Thanks to everyone I've worked with who has provided the insights, information and context that made the book possible. (You know who you are!)
I'd love to get the message out there quickly, so could you help with positive Amazon reviews, and/or promotion in your LinkedIn networks? Thanks!
ridgemoor-blog · 7 months ago
An amoeba's guide to Agile scaling
Cropped version of image created by Bing Image Creator, powered by DALL-E
(Originally posted on 23rd August 2023)
Pretty much all large organisations have tried out Agile in some form. Maybe it’s just a few pockets of Digital development in the outer orbit of the CIO’s purview, but more likely by now you’ve seen a large transformation attempt.
If so, how did it go?
Swingingly?
I’m betting it didn’t.
Agile approaches are well-proven, and should benefit almost any organisation.  But it’s very easy (and all-too common) to jump straight from “Agile is useful” to “We must all implement Agile immediately”.
I get it. If we see something work well on a small scale, it’s tempting to assume it will work at a larger scale too … especially when there are numerous established frameworks whose sole job is to scale Agile.
But to me, this is far from the no-brainer it might seem.  Some attempts to scale Agile remind me of King Canute's proposed change programme for the ocean. Despite deliberate planning, there are unmanaged forces at play, so success is far from assured.
Why so difficult?
Multiple reasons of course:
1. Some pilot Agile initiatives happened in “the business” with limited IT influence, because they were desperately wanted there. We can’t magic up that pre-condition everywhere immediately.
2. Others benefited from disproportionately high investment and management focus, which can’t be repeated every time.
3. IT vs. Digital feuds (often over legitimate issues) polarise some factions’ views against Agile.
4. Genuine concerns over job security can bring blockers at every level of the organisation.
5. Supporting functions such as Finance, Security, Privacy, etc. aren’t comfortable with Agile outcomes, and end up in opposition despite good motives.
6. With 40+ Agile methods in town, there’s no shortage of competing terminology and opinion, slowing everything down.
7. Due to ignorance or impatience, some pre-requisites aren't recognised (or aren't met) so credibility wanes and traction slows.
8. Compromise is often expected, so organisations adopt hybrid Agile models which almost always work less well.
9. A cumbersome scaling framework is chosen, weakening and delaying real gains.
So how to reduce these risks?
Whatever route you take, there’s one thing you’ll need right away: Usable Executive Team backing, ideally from the CEO. Strong support eases risks 1, 2, 3, 4 and 5—and goes some way to help with the others—but use it intelligently or not at all.
With the right support in place, it’s time to call an acknowledged expert for help. Having a credible, informed, humble “catalyst leader” for an at-scale Agile transformation immediately helps mitigate 5, 6, 7 and 8. (Let me know if you need help here.)
That just leaves item 9.
Please don’t assume that the framework is the answer. Sometimes it's the problem.
A heavy framework means heavy implementation effort, and also creates complexity, multiple new roles, and new communication and reporting overheads. Some frameworks deliberately lean towards project management methods—giving the PMO a few crumbs of comfort, yes—but also watering down the benefits. Note that:
[Image: a quotation from Donald A. Norman, Living with Complexity]
So think very carefully about your method for scaling. In true Agile spirit I always vote for scaling as simply as possible, starting with a “zero-base” then adding complexity only when your average amoeba can see the need. Remember that refactoring the product space can reduce interdependencies between teams—sometimes allowing us to postpone scaling by years.
I'm very keen to hear your views on this one.  What makes sense to you, and what doesn't?
Disclaimer: These opinions are mine alone, so are not necessarily shared by my current employer or other organisation with which I'm connected.
ridgemoor-blog · 7 months ago
Responsible AI can't wait
Generated on Microsoft Bing, using Image Creator powered by DALL-E
(Originally posted on 10th May 2023)
Imagine a world where machines can write and publish books, compose music and sketch art, answer the phone, offer "emotional" responses, design buildings, teach us, diagnose our illnesses, emulate established journalists, help the police to catch criminals, and help criminals avoid the police ... all thousands of times faster than we can.
Welcome to 2023.
It's time I offered some thoughts on AI, if only for my own therapy.
It's 28 years since I started worrying about AI. Probably inspired by Terminator and other dystopian works, I conceived the "League Against Synthetic Intelligence (LASI)".
No, I'm not joking.
And yes ... you're right ... I can be a little peculiar sometimes.
I never pressed the button to launch "LASI", and it would never have gone anywhere if I had. It didn't deserve to: No-one else was interested at the time, and there was little of substance for it to defend, or attack, or adhere to.
In any case, even the supremely disruptive technology that is AI doesn't warrant a simple "for" or "against" position. We shouldn't deny the huge societal potential just to avoid its threats. And even if we wanted to suppress it, that would be quite impossible.
So we must engage with AI. Even if we don't currently plan to use it, it will find ways to use us.
I believe that every organisation, large or small, tech-light or tech-heavy, public or private, should now establish a group focusing on responsible IT, and majoring on AI.
Every board should ensure that this happens, deep-diving on AI for the next 6 months; every government should create 100+ hours of open debate; and every regulatory authority should release a draft policy within 3 months.
I believe this is unprecedentedly urgent. We're used to emerging technologies moving faster than legislation, but this time the consequences will be far more dramatic wherever the law falls behind.
Sure, we're at the top of the hype curve right now, so we'll likely see a "trough of disillusionment" soon. But we're going to need all that time, and more, to develop our thinking on an issue which few understand, and which even the most enlightened struggle to keep pace with.
So let's not wait until legislation forces this move.
Let's not just squeeze this important topic into the agendas of our overstretched CyberSecurity, Legal or Privacy teams.
And let's not relegate it to a spare time activity for the usual 'B' team of reliable middle managers.
Every organisation needs intellectual horsepower and energy on this issue, with dedicated drive from a cross-section of mindful, modest, fearless, realistic staff, motivated to chart and rechart the organisation's course through an ethical minefield which is continuously shifting.
This group must enthusiastically:
take a firm line on what constitutes responsible behaviour towards customers, shareholders, employees, suppliers, society, etc.;
research and explore AI technologies as they expand, to understand their benefits and limitations;
bravely set and reset policy alongside HR and Legal, and intelligently monitor its adoption;
plot and re-plot exposures from both visible and "shadow" IT, and from the outside ecosystem, if necessary putting the brakes on specific AI usage; and
harmonise with parallel disciplines which exert tech. governance, to reduce overall change "drag".
Ensuring a carefully considered position on AI technologies, and one which is demonstrably defensible, should be an imperative that no responsible organisation ignores. Start now, to snap up the best people and avoid disappointment later.
Yes, this means creating a new team in response to a new issue. It sounds just like the kind of "sticking-plaster" mentality that I usually challenge, on the grounds of cost and complexity. But desperate times call for desperate measures, and I see no other way to give responsible AI the new prominence that it deserves.
Disclaimer: These opinions are mine alone, so are not necessarily shared by my current employer or other organisation with which I'm connected.
ridgemoor-blog · 7 months ago
Risk-based Interviewing
(Originally posted on 21st April 2023)
I've spent many years interviewing people, most recently for very senior roles in Digital & IT. Often it's been very successful ... but occasionally outcomes have surprised and disappointed.
I've been reflecting on that problem this week, and web-trawling to find tips. My findings fell into three categories:
1. Obvious, but sometimes neglected (e.g. co-ordinate tightly with co-interviewers before and after, ask more probing questions, keep score, ensure an anchor in panels, don't risk getting personal just to build rapport, ...)
2. Enlightening (e.g. accept you may need to drop a role instead of hiring a poor candidate, use pre-interview questionnaires, make STAR questions the centrepiece, note the non-verbals and never overlook intangible reservations, give real-world test tasks to take home, ...)
And finally:
3. Use risk-based interviewing
It probably falls into the "obvious" category really, but I hadn't thought of it this way before: Interviewing is just an exercise in de-risking.
When we scan CVs and profiles ahead of interviews, we're looking for positive matches to defined skills, competencies etc. We're seeking opportunity in what we read, but we're also picking up risks with each candidate, which threaten the value (as illustrated here).
[Image: the opportunity and risk curves across the recruitment process]
So we reach the high point of the curves and move into assessment, perhaps planning to major on interviews. Here the task is to reduce the risks associated with the opportunity ... if we can.
I'm suggesting treating candidate assessment as a straight exercise in risk management, creating an actual risk register at this point in the process, and sharing it across the hiring team. The risks we collectively identify then inform the methods and questions we use, how much time we take, who else gets involved, etc.
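As an illustration only (a minimal sketch; the fields, scores and candidate details are my own invention, not a prescribed format), such a register might start life as simply as this:

from dataclasses import dataclass, field

@dataclass
class CandidateRisk:
    candidate: str
    risk: str                                         # what might undermine the hire
    probes: list[str] = field(default_factory=list)   # questions, tests or checks to reduce it
    residual: str = "high"                            # re-scored after each round: high / moderate / low

register = [
    CandidateRisk(
        candidate="Candidate A",
        risk="Can't yet validate the claimed delivery track record",
        probes=["STAR questions on the largest programme", "reference check"],
    ),
    CandidateRisk(
        candidate="Candidate A",
        risk="May not want to stick around beyond a year",
        probes=["Brief in depth on the role", "explore career aspirations"],
    ),
]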
Some examples, using very broad risks:
To explore the risk that a candidate can't do the job, we might choose to run numeracy or IQ tests, ask validation questions about previous experience, or use probing questions to check we buy the stories;
to explore the risk that a candidate won't want to stick around, we can offer briefings on the role, ask about similar experiences, and dig into career aspirations and motivators; and
to explore the risk that a candidate won't fit in, we might make introductions to potential colleagues, use social settings, ask about outside interests, and run psychometrics.
More specific risks will suggest more specific questions than these. And some will bubble-up during interviews. Be ready to identify and explore them in the moment, leaving sufficient slack time to do so.
When the first phase of assessment is complete, look carefully at the risk register, examining the residual risks for each candidate. If they're acceptable, move to hire. If they're moderate, continue assessing. If they still feel way too high, go back to the drawing board to rethink the recruitment process or even the role.
Anyone else tried this out? As usual, any supporting points or counter-points very welcome!
Disclaimer: These opinions are mine alone, so are not necessarily shared by my current employer or other organisation with which I'm connected.
ridgemoor-blog · 7 months ago
Top technology goal? Simplicity, of course!
Image: Michael Hamann, under Creative Commons (source: https://flic.kr/p/aAoPqo)
(Originally posted on 31st March 2023)
It's been a reflective week for me. But whatever I've been reflecting on, the theme of simplicity always seems to spring up somewhere.
Why?
IT leadership often feels like a battle against time. Clearly that's true in the active arms race of cybersecurity, but I happen to think it's also true more generally. The sheer complexity of everything in our purview slows down change, whilst in parallel popping up problems faster than we can push them down.
Consider a corner of your estate with 3 applications, where 2 would do. That's likely to bring 50% more data & repositories, at least twice as many localised interfaces (with n systems, potential point-to-point interfaces grow as n(n-1)/2, so 3 applications can mean 3 interfaces where 2 would need just 1), more incidents, more privacy reporting, and a larger attack surface to defend.
Often the 3rd application will also bring another set of costs, and another supplier to manage. And it almost certainly means more skill-sets to maintain, and more meetings for everyone.
All this provides more distractions, and more context-switching for our focus-starved IT managers. So perhaps striving for simplicity is the only way we can help them to keep the lights on, gracefully land new projects, control spend and motivate their people, whilst still making time for meaningful strategic planning.
This article is my bid to make "simplicity" the top technology goal for any organisation with a long IT legacy, and a less-than compelling stance on tech debt. I believe it should be a managed IT measure in its own right, because it unlocks so much benefit everywhere else. And because, as humans, it seems we're prone to systematically over-valuing "more" ... and undervaluing "less".
There are many ways to bake this idea into your IT strategy, but a few things to watch out for too:
Avoid the façade of simplicity, e.g. with extra abstraction layers: These make today's components easier to manage, but often complicate the overall estate by adding new components!
Be mindful that streamlining your estate might also mean streamlining your payments to partners. That's great, but don't count on their unquestioning support.
And consider that some of your staff might actually be happiest spinning plates, because it's what they know, and are good at. We'll always need some plate-spinning, but plan ahead to re-skill and re-purpose some people so they can create more value in a simpler systems landscape.
Anyone else tried to apply this? Please let us know how it went!
Disclaimer: These opinions are mine alone, so are not necessarily shared by my current employer or other organisation with which I'm connected.
ridgemoor-blog · 7 months ago
On Forced Quotas in Agile
(Originally posted on 6th January 2023)
The Agile Product Ownership model has a simple solution for ensuring the right priority calls for investment: Empower a skilled, experienced "Product Owner" (PO) to make the calls, with the objective of reaching the best overall outcomes for the organisation and its customers. And refrain from challenging that person's decisions, so they don't feel disempowered.
In concept this works well ... and sometimes in practice too. But often there are real-world hiccups.
For example, how can a new-in-role Product Owner be sure to have the necessary knowledge? And how can those close-by be sure of the new hire's judgement?
How can a more established PO avoid spinning eternally inside a circle of HiPPOs, ZeBRAs and WOLFs? And how can we ensure that a Product Owner working to annual objectives and incentives is really factoring-in much needed longer-term thinking?
The first measure we can take is to have POs do their research properly, to enable them to defend decisions when needed. If you don't consult your customers often, using robust, statistically-significant methods, how can you hope to assert that what you planned next is really more valuable than someone else's latest idea?
But often that's not enough, and we also need light-touch steering of prioritisation activity to support the PO in making good organisational decisions with minimal backlash.
Forced quotas for different types of work seem like a natural next step. So how do they work?
A steering group around each PO comes together a few times a year to agree a fixed quota for each of the different types of product work. The group is chaired by the business area owning the product (which also has the final say), but it also has representation from groups like Marketing, Security, Privacy, Sales, Operations, Architecture, ... all of whom can argue for higher (or lower) quotas in the coming period.
Bringing people together like this forces them to recognise the level of contention faced by the Product Owner, who has to balance demand for:
Research and experimentation
New features and enhancements
Fixes
Technology debt remediation (e.g. architectural, security, privacy, supportability ...)
Changes to team process (e.g. DevSecOps automation levels)
In fact these 5 might be the types of work to which you allocate quotas. Or you might choose a different set. That doesn't really matter, so long as all types of work are recognised and consciously considered.
Some other tips:
There should be an organisation-wide stance and support in play. So it's worth basing allocations on a company default, which includes a starter allocation for each of the areas above.
Periodic re-review is important. Accept that different products will need different quotas at different times in their cycle. For example, technology debt may become more critical after a period of neglect, or feature work may temporarily take pole position when competition is unusually intense.
It's always best to base allocations on a percentage of actual capacity, rather than allocating a fixed amount of effort (e.g. story points). The latter could cause problems as velocity changes, and velocity is naturally a little volatile, especially in immature or changing teams. (See the sketch after these tips.)
Keep the Product Owner properly empowered by stressing that they still determine precisely what work gets done (within quota tolerances).
Provide transparency: Minute the meetings, and have each product team offer reports on its activities vs. allocations, to make sure the quotas are really being applied.
And finally, try at all costs to avoid the dreaded "just-this-once" overrides. We all know this is the thin end of the wedge, and that allocations will only work if they are allowed to become a habit.
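To show the capacity-based approach at work, here's a minimal sketch using the five work types above (the percentages are an invented company default, not a recommendation):

DEFAULT_QUOTAS = {                          # a company-default starting allocation
    "research & experimentation": 0.10,
    "features & enhancements":    0.50,
    "fixes":                      0.15,
    "tech debt remediation":      0.15,
    "team process changes":       0.10,
}

def allocate(capacity_points: float, quotas: dict = DEFAULT_QUOTAS) -> dict:
    """Split this period's measured capacity across the agreed work types."""
    assert abs(sum(quotas.values()) - 1.0) < 1e-9, "quotas must sum to 100%"
    return {work: round(capacity_points * share, 1) for work, share in quotas.items()}

print(allocate(80))    # a slower period
print(allocate(120))   # allocations track velocity automatically, as noted above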
Good luck!
Disclaimer: These opinions are mine alone, so are not necessarily shared by my current employer or other organisation with which I'm connected.
ridgemoor-blog · 7 months ago
A new, pragmatic Enterprise Architecture
(Originally posted on 20th July 2022)
I've worked in and around Enterprise Architecture (EA) for 20 years, seeing it advance in many ways.
Like many of you, I have struggled to get value from traditional EA, in which we build a polished "To Be" architecture looking 5 years out, then try to reach it via equally polished roadmaps.
Despite the ineluctable logic of that approach, I've never seen it work. And it's fairly easy to see why: This is a lot of work, so it takes time and effort from people inside and outside the EA team. No-one has spare cycles anymore, so this presents an immediate challenge.
Then there's the pace of change: Over the weeks or months that it typically takes to do this, the ground is shifting underneath us: Business priorities are changing fast, and the current architecture is morphing—meaning that the outputs start to become invalid as soon as they emerge.
So last year I was fascinated to read Gerben Wierda's excellent book "Chess and the Art of Enterprise Architecture". In it, the author tackles head-on the problem of increasing pace of change. I don't want to spoil the plot, but he suggests that because the future is so uncertain, we should major on the current state and its limitations. Some of those limitations are pretty obvious, but others depend on where the business is heading—which we can explore by creating an active library of future business "scenarios" with associated implications and probabilities.
And this approach has been working well for us ... kind of.
I like it a lot, but we had to adapt slightly. (Hopefully only in ways Mr. Wierda would approve of.)
So what needed tweaking? Just two things really:
First, since 2019, we've found the world so difficult to predict that even those future "scenarios" don't always serve us well. And second, the way we wrote scenarios left many infrastructural limitations lurking in the shadows.
So what's our new, adapted approach?
We'll follow Wierda's 'Chess' method as closely as we can—emphasising current state modelling & analysis.
We'll use scenarios where they work for us, but also shorten our time horizon to spend more time looking at 6-12 month business roadmaps, and providing supporting guidance (principles, patterns, reference architectures, supplier analysis, etc.) in the form of "vignettes"—just ahead of the solution work for each initiative—to allow each one a fast start and clear guide-rails.
Finally, we'll examine those infrastructure and shared services domains which don't come to life in business scenarios. Here, common sense and simple KPIs often reveal the need to simplify, de-duplicate, reuse and standardise. Where the case is strong enough, we'll commit those actions to our roadmaps too.
Although it's not rocket science, I haven't heard of many others taking this approach—so maybe we're pioneering just by taking the most practical approach we can find. It feels realistic to us, so we're very hopeful of success.
Any inputs welcome, and please do share related experiences of your own!
Disclaimer: These opinions are mine alone, so are not necessarily shared by my current employer or other organisation with which I'm connected.
ridgemoor-blog · 7 months ago
Structural Stories (2 of 2)
(Originally posted on 11th February 2022)
In the first part of this article I tried to explain why architectural design is still very much needed, and needs to be laid out coherently.
When people buy that argument, the next debate is usually on precisely what should make it onto paper.   There’s no single right answer, because architecture depends on context—and is partly an art form in any case.  But a few universal rules do apply.
First, think about the stakeholders and their questions.   List the significant questions, and for each one imagine the diagram, table, bullet list, graph … whatever, that best answers it.   Often the best way to articulate the answer is with more than one of these in concert, because whilst a diagram can replace a thousand words, it also tends to leave ambiguity, so benefits from an accompanying narrative.  I can’t tell you how many times I've seen this go wrong, with authors unclear on the message their diagram conveys, or using a picture to explain something which is essentially tabular, or choosing to nest items which have no such real relationship.  If you’re unsure what form to use then Gene Zelazny's books offer some good generic advice. Or try the 5-Second Test (see chapter 21 here) on a colleague to see if your creation readily indicates what’s going on.
What to include?  Start by covering elements introduced or modified in your initiative.  Then contextualise by showing anything already in place that will interact with those.   The difference between the old and the new should scream off the page, so in a system with any real complexity, showing separate “before” and “after” versions can be especially powerful.
A super-important (often-overlooked) feature of an architecture document is the set of architecture decisions made.   Readers shouldn’t have to infer; instead each decision should be highlighted, showing the list of options considered, the pros and cons of each, and the considered choice that was eventually made. Before decisions are made, there should always be a set of principles to inform decision-making (many of which might be inherited company standards) and these principles should also be visible to readers.
And what about timing? In Agile settings we accept that not all of the architecture is known before development starts. But write down what you can, highlight gaps so they’re not forgotten later, and organise documents so they’re easy to iterate on—just like the technology they describe.
Just as important as any of this are the language, notation and format of documents.   We are, after all, aiming for readability and ease of understanding.   Here are my top 6 tips:
a)     Sequence the story so it builds on what the reader already knows – for example, don’t begin with storage plans for data described later, or sketch infrastructure to support an application explained on the next page.
b)    Keep notation absolutely consistent throughout the document – if your APIs are blue hexagons, then they’re blue hexagons.   Please don’t make one of them a red square later, because we’ll assume that something has changed, and (if we’re awake) we’ll ask what it was.
c)     Keep your labels absolutely consistent throughout the document – component “AB” should never be renamed “AlphaBeta” later on.  As the author, you can probably relate the two, but anyone else has to double-check that they’re the same.  Don’t even change fonts or font sizes unless you need to—these are all ways to indicate a difference to the reader, who will then experience dissonance, slowing everyone down.
d)    If you’re using “before” and “after” diagrams to highlight changes, please don’t change diagram elements which stay the same in the real world—it’s 100x easier when we can flip between two similar diagrams, immediately see what changes, and get real meaning from that.  Yes, this can be difficult to depict if “before” and “after” look very different.  The easiest way is to first draw the “super-diagram” containing all the elements from both diagrams.  When you’ve got that ready, just delete the bits that don’t apply, to create the “before” and “after” views.
e)    If you’re in PowerPoint, KeyNote, Visio etc. please remember that an architecture document is still a document, not a presentation.  It needs to stand alone, without the voiceover.  Yes, you could offer instead to present the material, but beware the downsides (see reason #3 from part 1).
f)      Finally, use good grammar throughout, and avoid making space-filler assertions that you don’t really mean!  (More common than you might think.)   Again, we’re simply trying to make the reader’s job easier, because we respect their stake in the solution, and/or value their inputs.
Disclaimer: These opinions are mine alone, so are not necessarily shared by my current employer or other organisation with which I'm connected.