One measurable approach for sizing technical capability from business objectives.
- Author Pierluca Riminucci
- Published October 4, 2022
- Word count 1,675
= = = = INTRODUCTION = = = =
Business organizations are moving their sales operations more and more onto digital channels (web, mobile, marketplaces, social media, etc.), and some of them have outright ambitious expansion plans.
One of the unique challenges that digital channels pose is their utter dependence on technology. No product can be sold on digital channels without the underlying technology. Indeed, there are no fallback manual processes that could bridge, at least for a while, an IT outage.
More importantly, failing to live up to consumers’ expectations typically carries a much higher price on a digital platform. A greater number of consumers are negatively impacted by a digital channel outage, and the amplification effect of social media can scale up frustration and diminish the perceived brand value.
It is therefore a common challenge for organizations and their leaders to make sure that their enabling technology is fit for purpose and is aligned with their business goals.
However, reality teaches that many organizations are, more often than not, in fire-fighting mode. They muddle along by solving technical problems as they arise and, worse, often only once the problems have already become apparent to an unknown but typically large number of customers, or after internal business staff have noticed the issue and, in turn, alerted IT.
= = = = SCOPING THE PROBLEM = = = =
So, an important question CIOs need to address is how to make sure that their digital architecture, and its realization into an ever more complex ecosystem of solutions, platforms, and infrastructure both on premises and in the cloud, is reliably fit to sustain current and future business objectives.
Or in more casual language: do I have a ticking bomb under my table which might blow my career off course or seriously damage it?
Should I trust the various reassurances I have received (alas, often caveated with obscure and very specific technical details), or should I robustly pursue a more disciplined, fact-based approach?
= = = = PROPOSED APPROACH = = = =
From a technical point of view, the problem amounts to evaluating the current architecture (or, more realistically, what has been realised of it over the years) from the viewpoint of the performance and availability quality attributes.
There are a couple of good academic books that clearly set the broader context; for those who are interested, they are referenced at the end of this article.
Here, the focus of this brief paper is to provide a quick orientation into the topic, helping the reader identify the key conceptual steps that need to be executed and relating them back to the relevant architecture body of knowledge.
As in every architecture evaluation, the place to start is to identify the relevant stimuli, their sources and the architecture artefacts that are impacted.
In the case of the performance quality attribute, the stimuli are typically the incoming http(s) calls.
Not all stimuli are equally relevant. It is also important to organize them into a meaningful hierarchy using an appropriate aggregative taxonomy, to better manage complexity, as will be explained later.
Regarding the architecture artefacts, I personally find it useful to initially generalize the entire digital infrastructure as though it were a single component, even though, inside it, each call is typically resolved by orchestrating a number of ‘realization calls’ to internal or external systems.
Finally, with reference to stimuli sources, what is really important at this stage is to properly identify the significant ‘entry channels’ that make up the entire digital sales funnel in scope. This lets us follow the ‘server side calls’ generated by the various consuming applications, e.g. web, apps, kiosks, marketplaces, etc.
Once this structure is created, we analyse the traffic loads in three iterations as listed below.
- We relate the total traffic load to the business objectives via a predictive statistical model. Here, we are answering the question “how much traffic does each business objective drive?”
- We finalize a reliable statistical distribution of the total traffic load. This lets us divide up the traffic across the various stimuli. In considering a new business objective, we can take this model and use it to predict the related fine-grained traffic loads onto the system.
- We slice the whole digital architecture following the chain of artefacts (i.e. nodes, applications, network connections, etc.) involved in fulfilling each stimulus (or a set of them). This lets us identify key sub-components and their related partial traffic loads.
In the remainder of this article, I will briefly describe each of these steps.
First and foremost, the sales funnel’s cumulative traffic load needs to be linked to the business objectives. The latter are likely to be expressed in revenues, potentially aggregated on a yearly or quarterly time frame. By contrast, to be useful in this context, the predicted traffic load needs to be expressed as a throughput, i.e. the number of incoming http(s) calls per second or per minute.
There are a few ways to meaningfully establish this link, depending on the situation at hand. Broadly speaking, this step involves identifying a representative value for daily revenue given the yearly or quarterly revenue figure.
At the simplest level, this can be achieved by calculating each month’s percentage contribution to the yearly revenue from historical data and then choosing a suitable value, for instance the maximum. Similarly, within each month, each day’s percentage contribution is evaluated and a suitably representative value is chosen.
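As a minimal sketch of this idea (all percentages and the revenue target below are invented purely for illustration):

```python
# Illustrative sketch: derive a representative daily revenue from a yearly
# revenue figure using historical contribution percentages.
# All figures below are assumptions made up for this example.

# Each month's fraction of the yearly revenue (from historical data; sums to 1.0).
monthly_share = [0.06, 0.05, 0.07, 0.07, 0.08, 0.09,
                 0.08, 0.07, 0.08, 0.09, 0.12, 0.14]

# Fraction of a month's revenue taken by its busiest day (assumed).
peak_day_share = 0.055

yearly_revenue_target = 120_000_000  # hypothetical business objective

# Choose the maximum monthly contribution as the representative value,
# then apply the representative daily share within that month.
peak_month_revenue = yearly_revenue_target * max(monthly_share)
representative_daily_revenue = peak_month_revenue * peak_day_share

print(round(representative_daily_revenue))
```

Choosing the maximum is a deliberately conservative policy; an organization might instead pick a high percentile of the historical distribution.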
To complete this first step, we perform a regression analysis on the historical data set, here assumed to consist of daily revenues vs. total daily traffic load.
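A minimal regression sketch, assuming we hold a small historical data set of daily revenues vs. total daily traffic (all the numbers below are fabricated):

```python
import numpy as np

# Hypothetical history: daily revenue vs. total daily http(s) calls.
daily_revenue = np.array([200_000, 350_000, 500_000, 650_000, 800_000])
daily_calls = np.array([1_100_000, 1_800_000, 2_600_000, 3_300_000, 4_100_000])

# Least-squares linear fit: calls ~= slope * revenue + intercept.
slope, intercept = np.polyfit(daily_revenue, daily_calls, 1)

def predict_daily_calls(revenue):
    """Predict the total daily traffic load for a given daily revenue."""
    return slope * revenue + intercept

# Predict the traffic driven by a representative daily revenue figure.
print(f"{predict_daily_calls(924_000):,.0f} calls/day")
```

A linear fit is only the simplest choice; real data may warrant a more sophisticated model.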
However, total daily traffic is too coarse a figure. It is important to be able to work out how this value breaks down against the aggregative taxonomy of stimuli mentioned earlier.
Hence the second step: from the available historical traffic data, it should not be too difficult to derive a representative statistical distribution down to each previously identified stimulus.
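In its simplest form, this distribution can be a table of fractions over the stimulus taxonomy; in the sketch below the stimulus names, shares, and total are all assumed:

```python
# Assumed shares of total traffic per stimulus, derived from historical data.
stimuli_share = {
    "browse/search": 0.55,
    "browse/product": 0.25,
    "checkout/cart": 0.12,
    "checkout/payment": 0.08,
}
assert abs(sum(stimuli_share.values()) - 1.0) < 1e-9  # shares must sum to 1

total_daily_calls = 4_700_000  # hypothetical predicted total daily traffic

# Break the total down to a per-stimulus daily load.
per_stimulus = {name: share * total_daily_calls
                for name, share in stimuli_share.items()}

for name, calls in per_stimulus.items():
    print(f"{name}: {calls:,.0f} calls/day")
```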
We can now ‘predict’ the throughput (though only on a daily timeframe) for each stimulus against any given value of yearly/quarterly revenue.
To complete this step, we need to translate the daily figure into a real throughput, i.e. per second or per minute. That can be achieved in various ways. The simplest, just to provide an illustrative example, is again to calculate each hour’s percentage contribution to the total daily traffic from historical data and choose either an average or a suitably representative value.
This gives the hourly peak traffic load for a given value of yearly revenue. Transforming it into a throughput per minute or per second is then just a matter of dividing it by 60 or 3,600 respectively.
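The arithmetic of this final conversion is trivial; a sketch with assumed figures:

```python
total_daily_calls = 4_700_000  # hypothetical daily traffic at target revenue
peak_hour_share = 0.11         # assumed: the busiest hour carries 11% of daily traffic

# Peak-hour volume, then down to per-minute and per-second throughput.
peak_hour_calls = total_daily_calls * peak_hour_share
calls_per_minute = peak_hour_calls / 60
calls_per_second = peak_hour_calls / 3600

print(f"peak: {calls_per_minute:,.0f} calls/min, {calls_per_second:,.1f} calls/s")
```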
So far so good; however, it is also essential to understand how the traffic distributes itself across the various internal and external sub-components comprising the whole digital architecture.
Here it is a matter of identifying the right trade-off in terms of abstraction level, which, at the end of the day, is the real essence of any architectural work: too much detail quickly becomes unmanageable and unusable; too little is simply irrelevant.
It is a black art that comes with experience and more importantly perhaps, with a strong sense of purpose.
The final outcome could be an Excel model that, given an estimated future yearly revenue, calculates the expected peak throughput for each stimulus and the various intermediate partial sums, leveraging the underlying aggregative taxonomy.
In addition, it also consistently calculates the partial loads against each identified sub-component.
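One way to sketch this calculation (the stimuli, sub-component names, fan-out multipliers, and throughputs below are all hypothetical) is a simple ‘call multiplier’ map that propagates each stimulus load onto the sub-components that serve it:

```python
# Assumed fan-out: internal calls issued per incoming stimulus call.
fanout = {
    "checkout/payment": {"inventory-svc": 1, "payment-gateway": 1, "fraud-svc": 2},
    "browse/product": {"catalog-svc": 1, "pricing-svc": 1},
}

# Assumed peak throughput per stimulus, in calls per second.
stimulus_throughput = {
    "checkout/payment": 11.5,
    "browse/product": 35.9,
}

# Accumulate the partial load each sub-component must sustain.
component_load: dict[str, float] = {}
for stimulus, targets in fanout.items():
    for component, multiplier in targets.items():
        component_load[component] = (component_load.get(component, 0.0)
                                     + stimulus_throughput[stimulus] * multiplier)

for component, load in sorted(component_load.items()):
    print(f"{component}: {load:.1f} calls/s")
```

These per-component figures are exactly the partial loads the Excel model would carry alongside the stimulus-level totals.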
These latter values are extremely important for bringing the required level of objectivity to defining each sub-component’s SLA, and also for designing focused performance tests and assessing the meaningfulness of their results.
= = = = EXTENSIBILITY OF THE APPROACH TOWARDS INNOVATIVE DIRECTIONS = = = =
It is important to recognise that it is difficult, in practical terms, to design and run an end-to-end performance test against the whole digital architecture of any company, even one of modest size.
Indeed, a meaningful performance test implies a replication of the entire digital infrastructure (both on premises and in the cloud), given that the live environment cannot be used for obvious reasons.
And that is costly, time consuming and, more often than not, impractical.
However, a promising alternative approach can be considered instead.
I would call it a “non-invasive approach to performance testing”. It relies on the marvels that statistical analysis can achieve with properly collected operational data.
For instance, let’s assume we want to determine how response times change under increasing loads, to the point of identifying the likely breaking points of our digital infrastructure. Instead of executing a real, invasive performance test, we could simply use historical data.
Indeed, collecting stimulus throughputs vs. response times allows us to perform an ongoing, self-correcting correlation analysis and determine a suitable regression equation, on a multidimensional space, that represents the performance of our whole digital architecture under different load conditions.
In this way, we have opened the door to (mathematically) determining singularities in performance degradation that indicate the existence of load limits.
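As a toy illustration of the idea (the measurements, the quadratic model, and the 1 ms-per-call/s degradation threshold are all assumptions; a production version would use the richer multidimensional regression described above):

```python
import numpy as np

# Fabricated operational data: observed throughput vs. average response time.
throughput = np.array([50, 100, 150, 200, 250, 300, 350])   # calls/s
resp_ms = np.array([120, 125, 133, 150, 185, 260, 420])     # milliseconds

# Fit a simple quadratic model of response time as a function of load.
coeffs = np.polyfit(throughput, resp_ms, 2)
model = np.poly1d(coeffs)
slope = model.deriv()  # extra ms of latency per extra call/s

# Flag the load at which latency starts growing faster than the threshold.
grid = np.linspace(throughput.min(), throughput.max(), 301)
degraded = grid[slope(grid) > 1.0]

if degraded.size:
    print(f"degradation onset near {degraded[0]:.0f} calls/s")
else:
    print("no degradation onset detected in the observed range")
```

The onset found this way is the ‘singularity’ mentioned above: the load beyond which response times deteriorate disproportionately.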
We will also have the extra advantage of collecting benchmarking data nearly for free as part of the process. This is valuable data that most organizations struggle to collect.
= = = = CONCLUSIONS = = = =
This article provides a high-level outline of a comprehensive approach to assessing whether an existing digital architecture is fit for purpose and able to sustain present and future business objectives. The business criticality of this assessment should not be overlooked.
Often, this critical topic is tackled in a piecemeal and fragmented way, impacting the reliability of any conclusions we could draw from it. Without reliable information, the potential for adverse impacts is huge.
As illustrated here, reliable information on expected performance can be inferred by using statistical methods.
= = = = REFERENCES = = = =
Clements, P.; Kazman, R.; Klein, M. Evaluating Software Architectures: Methods and Case Studies. Addison-Wesley, 2002.
Bass, L.; Clements, P.; Kazman, R. Software Architecture in Practice. Addison-Wesley, 2013.
= = = = ABOUT THE AUTHOR = = = =
Pierluca Riminucci currently works for Infosys as Account Chief Technology Officer, where he helps his customers’ CxOs shape and realize their strategic, delivery, and transformation objectives. Previously, he served as Chief Digital Architect for HSBC and as Group CTO for Prada.
Article source: https://articlebiz.com