Agile fantasies and harsh business realities


  • Author Pierluca Riminucci
  • Published April 20, 2022
  • Word count 1,399

Chronicles from the trenches.

Nowadays, digital transformation programmes are invariably associated with Agile. Organizations want to become Agile because they want to increase speed, efficiency, effectiveness and so on, and Agile promises to deliver all of that.

However, what typically happens on the ground is the quick adoption of a number of Agile ceremonies and tools (the likes of crowds standing in some open-space corner, sticky-note Kanban boards, the ever-present Jira, etc.) and the sweeping away of any deeper organizational analysis, let alone alignment.

In that sense Agile is remarkably similar to religions. Most religions have an exterior part (their prescriptive side: attend services on specific days, do not eat certain foods, kneel at certain moments of the service, etc.) and a substantive or moral part (behave in a certain way, and so on).

The exterior part is far easier to adopt and “implement”, and it usually ends up being identified with the religion itself.

Similarly, from the IT trenches you can easily see that most organizations cannot even get near the substantive part of Agile: they just adopt its choreography.

The overall result often materializes in a number of anti-patterns that recur with remarkable consistency across organizations and invariably introduce even more inefficiency and waste.

The purpose of this brief article is to provide some insight into why this happens and what the watch-outs are.

Far from being an outright dismissal of Agile, this article simply aims to point out its many fake implementations, and in doing so it genuinely follows one of Agile's key principles, namely the one that preaches paying attention to lessons learned and incorporating remediation actions going forward.

Below is the relevant principle quoted verbatim: “At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly”.

So why not reflect on how Agile is typically deployed in organizations in the real world?

The first and foremost observation is that a so-called digital transformation almost invariably involves the construction of a distributed solution, or a set of solutions.

That is, a solution made of a number of components that are required to interoperate seamlessly, some to be built from scratch, some already existing and to be modified or enhanced.

In essence, there will be a need to build a UI layer (native mobile, hybrid or responsive web) that connects to an ecosystem of APIs (also typically still largely to be built), which in turn provides access to a layer of legacy systems of record, typically via a number of different types of connectors.
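
To make this layering concrete, here is a minimal, purely illustrative sketch in TypeScript; every name in it (CustomerApi, LegacyCoreConnector, and so on) is hypothetical and not taken from any specific programme.

    // The UI layer talks only to the API contract; the API layer hides the legacy connector behind it.
    interface CustomerSummary {
      customerId: string;
      fullName: string;
      segment: "retail" | "premium";
    }

    interface CustomerApi {
      getCustomer(customerId: string): Promise<CustomerSummary>;
    }

    // Connector to a legacy system of record (MQ, file transfer, DB link, etc. under the hood).
    interface LegacyCoreConnector {
      fetchCustomerRecord(id: string): Promise<{ name: string; tier: number }>;
    }

    class CustomerApiService implements CustomerApi {
      constructor(private legacy: LegacyCoreConnector) {}

      async getCustomer(customerId: string): Promise<CustomerSummary> {
        // Translate the legacy record into the shape the UI needs.
        const record = await this.legacy.fetchCustomerRecord(customerId);
        return {
          customerId,
          fullName: record.name,
          segment: record.tier === 1 ? "premium" : "retail",
        };
      }
    }

The point is not the code itself, but that three distinct layers, each with its own contract, have to be specified and coordinated, and that is exactly what strains a single small team.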

So far there is no rocket science. However, the scope of what needs to be done is typically far larger than what a pizza-sized team can crack in a self-contained and self-organized mode.

And that alone implies that a couple of cornerstone Agile principles break down from the very start.

Further to that, such a big and complex scope requires, as a means of technical coordination, an upfront elaboration of the architecture (i.e. an engineering-focused solution design), which in turn requires some clarity about requirements. And the devil is always in the details: requirements are no exception.

Here, on the requirements elaboration front, is where the first and most evident failure typically happens.

Agile is almost invariably associated with user stories as the format for collecting requirements. And most of the time what gets produced is a set of hollow, high-level statements, almost never mutually exclusive or collectively exhaustive, that nobody is really able to make sense of, least of all the architects and developers, who almost never bother to read these small sentences in their hundreds.

And indeed, in fairness, Agile preaches that user stories are not meant to describe functionality, but rather to serve as placeholders for the self-contained, self-organized team to pick up and elaborate further (via team communication), producing all the required details while progressing through implementation. Something that might work only in a small, self-contained team.

So the first failure is requirements. I have seen or heard of programmes that, after many months, have not yet managed to produce any satisfactory description (i.e. with a meaningful level of detail) of the WHAT, in other words of the end-to-end functional requirements.

Meanwhile the architecture teams, sometimes organized into separate work-streams, each focused on a specific ‘view’ of the whole distributed architecture (data, security, integration, etc.), engage in rather lengthy and unfocused debates on potentially needed new capabilities, yet fail to converge on a real solution design that the engineering team can read, understand and implement.

As implied by its subtitle, this brief article is meant to report experiences collected from the trenches; hence it is not my intention here to delve into a detailed analysis of the various remediations I have put forward myself or seen adopted with success.

Its goal is rather to highlight recurring anti-patterns and raise awareness of the pitfalls, especially among the non-technical stakeholders who are often the decision makers, so as to help them recognize with confidence whether their programme has fallen into the same anti-pattern.

And believe you me, the level of noise typically coming up from anyone's organization is such that it is very difficult to understand what is really going on at ground level, despite the many reports that are usually produced, all looking very professional, polished and often really impressive from a graphical point of view.

So what is the main symptom of this not-so-uncommon anti-pattern? In other words: how do you recognise that your programme has got stuck at the rather basic level of requirements gathering and elaboration?

What do you need to probe to assess whether that is really the case?

Naturally enough, you would need to check the artefact(s) used to document the agreed requirements and read them to assess whether they describe, with enough clarity and comprehensiveness, what the solution will do for its specified category of user. The latter used to be called an actor, and still is within the UML formalism.

In more mundane terms, and assuming, as is typically the case, that your distributed solution starts with a UI (either mobile or web), you should check that the requirements clearly describe the sequence of screens and, for each screen, its input fields together with their validation rules, its combo boxes together with their associated choices, and its action buttons.
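
As a minimal illustration of what “clearly describes” means here, such a screen description could be captured in a structured form along the following lines; the screen, field and call names are invented for the example.

    // A possible shape for documenting one screen of the UI, in business terms.
    type ScreenSpec = {
      screenName: string;
      inputFields: { name: string; validationRule: string }[];  // validation rules stated in business language
      comboBoxes: { name: string; choices: string[] }[];         // each combo box with its allowed choices
      actionButtons: { label: string; triggersCall?: string }[]; // buttons and the server-side call they trigger, if any
    };

    // Hypothetical example: a "New Payment" screen.
    const newPaymentScreen: ScreenSpec = {
      screenName: "New Payment",
      inputFields: [
        { name: "beneficiaryIban", validationRule: "mandatory, must be a valid IBAN" },
        { name: "amount", validationRule: "mandatory, positive, at most two decimals" },
      ],
      comboBoxes: [{ name: "currency", choices: ["EUR", "GBP", "USD"] }],
      actionButtons: [{ label: "Review", triggersCall: "ValidatePayment" }, { label: "Cancel" }],
    };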

Also, very importantly, check whether the relevant server-side calls are described for every event or user action that can occur on each specific screen.

A server-side call should be documented with a logical name, together with its input and output parameters (again documented at a logical level) and a brief description of what it is required to do from a UI (or front-end) perspective, again in business terms.
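
Again purely as an illustration (the call and parameter names are hypothetical), such a call could be documented like this, consistently with the screen sketch above:

    // A possible shape for documenting one server-side call at a logical, business level.
    type ServerCallSpec = {
      logicalName: string;
      description: string;                          // what it does, in business terms, from the UI perspective
      inputs: { name: string; meaning: string }[];  // parameters described logically, not as a technical payload
      outputs: { name: string; meaning: string }[];
    };

    const validatePayment: ServerCallSpec = {
      logicalName: "ValidatePayment",
      description: "Checks that the payment can be executed and returns the fee to display on the review screen.",
      inputs: [
        { name: "beneficiaryIban", meaning: "the account to be credited" },
        { name: "amount", meaning: "the payment amount in the selected currency" },
      ],
      outputs: [
        { name: "isValid", meaning: "whether the payment passes the business checks" },
        { name: "feeAmount", meaning: "the fee to be shown to the user before confirmation" },
      ],
    };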

There is nothing technical in here. A requirement is about describing what the UI should do from the user's point of view, both in terms of screen interactions and of the calls it (i.e. the UI) generates to perform business operations.

The reading should be sequentially friendly. That is, the document should lend itself to being read sequentially, from the first page to the last, and, page after page, the reader should become ever more convinced that the ground is covered comprehensively and exhaustively, with no gaps or vagueness, and that it all makes sense.

This suggestion might look trivial and old-fashioned. However, if after months there is no such description available (with the quality attributes I have just alluded to), it means requirements gathering is still all over the place and your programme has fallen into that not-so-uncommon anti-pattern.

Better still: the same exercise can be organized by leveraging independent judges with clearly defined instructions on how to perform the assessment, so as to ensure as much objectivity as possible.

And if, beforehand, a requirements template has been defined with extreme care and attention (as I often recommend), this kind of programme progress measurement becomes far easier to execute reliably.
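
To show how such a template can turn progress measurement into a simple count rather than a debate, here is one more hypothetical sketch, reusing the ScreenSpec and ServerCallSpec shapes from the earlier examples:

    // Reuses the hypothetical ScreenSpec and ServerCallSpec types sketched earlier.
    type RequirementStatus = "draft" | "reviewed" | "signed-off";

    type RequirementPack = {
      actor: string; // the category of user the solution serves
      screens: (ScreenSpec & { status: RequirementStatus })[];
      serverCalls: (ServerCallSpec & { status: RequirementStatus })[];
    };

    // Progress becomes an objective number that independent judges can verify.
    function percentSignedOff(pack: RequirementPack): number {
      const items = [...pack.screens, ...pack.serverCalls];
      if (items.length === 0) return 0;
      return Math.round((100 * items.filter((i) => i.status === "signed-off").length) / items.length);
    }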

Assessing whether the requirements elaboration process has produced the expected results is important, though often overlooked.

Requirements are often where most digital transformation programmes risk being wrecked, though that is not widely recognised.

Pierluca Riminucci currently works for Infosys as Chief Technology Officer, where he helps his customer CxOs shape and realize their strategic, delivery and transformation objectives. Previously, he served as Chief Digital Architect for HSBC and as Group CTO for Prada.

Article source: https://articlebiz.com