Pushing back the pendulum

A look back over time shows how the cost risk pendulum has swung back and forth between clients and service providers in technology services, and why new commercial thinking is now needed to meet the challenges of 21st century technology delivery

If you look back over how technology services have been delivered, you can see the evolution of risk, with the pendulum of doom swinging back and forth between clients and suppliers. In the very early years, most organisations did everything themselves, but eventually realized they didn’t need to. So did suppliers, who set themselves up to take advantage of the growing market, and of the unwary clients…

1. Evolution of IT cost risk over the years

Before the “noughties”, it was something of a “wild frontier”: clients had little experience of outsourcing technology services and suppliers made their fortunes selling their wares. The deals back then were often poorly constructed and frequently client-unfriendly (even “gouging”). Often all of IT was thrown to the suppliers, leaving a very thin retained organization, with most of the “brain” outsourced as well as the arms and legs. Clients were at a significant disadvantage, and suppliers could be like a fox in the chicken coop, writing business cases for their own new services to be signed off by client managers who had no bandwidth to challenge them! Being virgin territory, services were also poorly defined and relatively unstructured, accompanied by opaque or just plain bad commercial models (e.g., pure T&M or lop-sided ARC/RRC models) that dumped the risk firmly in the clients’ lap – the commercial nadir…

In the 2010s, the pendulum started to swing back in the clients’ favour, as they took back control of strategy and architecture, business case development and other key aspects of the “intelligent client” model (think of the previous model as the “hostage client”!). Other developments in service and commercial structures also helped as lessons were learnt from the first-generation experience. For example, the service “tower” model became better established, along with the beginnings of Service Integration/SIAM disciplines to manage multi-vendor setups. Pricing models improved with the introduction of transparent PxQ “utility” pricing, aligned and integrated with well-defined performance management and incentives. This was probably the (first) zenith of the art of technology service outsourcing.
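As a rough illustration of the PxQ mechanics (the rates, quantities and the function name here are all invented for illustration, not drawn from any real agreement), a transparent baseline-plus-ARC/RRC charge might be sketched as:

```python
# Illustrative PxQ ("price x quantity") utility charge with ARC/RRC bands.
# All figures below are invented for illustration only.

def monthly_charge(baseline_qty, actual_qty, unit_price,
                   arc_rate=0.9, rrc_rate=0.8):
    """Charge = baseline quantity at unit price, plus ARCs (additional
    resource charges) for consumption above baseline, or minus RRCs
    (reduced resource credits) for consumption below it."""
    base = baseline_qty * unit_price
    delta = actual_qty - baseline_qty
    if delta >= 0:
        return base + delta * unit_price * arc_rate   # extra units, at ARC rate
    return base + delta * unit_price * rrc_rate       # credit back, at RRC rate

# e.g. 1,000 managed servers baselined at 50 per month each
print(monthly_charge(1000, 1100, 50.0))  # above baseline -> ARCs: 54500.0
print(monthly_charge(1000, 900, 50.0))   # below baseline -> RRCs: 46000.0
```

The "lop-sided" models mentioned earlier are, in these terms, ones where the RRC rate is set far below the ARC rate, so the client pays full freight going up but gets little back coming down.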

As we move through the 2020s, you can see things falling apart from that peak of perfection, as client “digital” demands and the technology landscape change, and technology teams look for new ways of working to meet them. Probably the most significant trend is client organisations bringing the management of technology services, and indeed some execution and delivery, back in-house. There are no doubt many reasons for this drive, including disappointment with the inflexibility of previous outsourced arrangements and a perception that direct control is needed to increase agility. To be honest, the problem doesn’t always sit with the service providers, and reorganizing doesn’t often solve systemic problems, but there you go.

Apart from the swinging “in/out” door, there are other drivers. In particular, SaaS and IaaS are eroding “traditional” infrastructure and application management services, reducing the scope for outside services (you can read more about that here). This erosion significantly thins out the service management layer required on top of “Cloud” services compared to old-style hosting, which impacts the implicit business case. Automation also changes the landscape, increasing the reach and operational leverage of the people in the driving seat; a fully automated DevOps/SRE-type mode needs no IT Ops people (well, that’s the philosophy). Demand Management (including FinOps) is now a key skill, with cloud bloat replacing the VM bloat of yesteryear. Demand management is logically a client-side function, in terms of the link to the business, although you can outsource / delegate the supply-side matching.

Looking to the future, we can expect the incursion of GenAI to erode the traditional opportunities for outsourcing further, and to confuse the clear and simple lines even more.

2. What does the future hold?

The actual effect of GenAI in terms of cost risk is probably one for the crystal ball just now. GenAI and its siblings might improve the profile as automation replaces yet more labour costs, but flakey implementation of expensive systems with unclear benefits and increasing supplier lock-in can easily drive the pendulum the wrong way. For suppliers, it drives a further shift of revenue from people to technology; the more fleet of foot will probably win either way.

Coming back to the here and now, as client organisations swing back to a more in-sourced model, you can see repercussions, for example, in sub-optimal (re)sourcing when using third parties. Resourcing often reverts to unstructured staff augmentation and body-shopping to fill specific gaps in the in-house teams, forgoing the benefits of a more coordinated approach to third parties. Whilst previous models may have had carefully crafted off-shore resourcing to benefit from labour cost arbitrage, the in-house models carry the hidden costs of expensive on-shore day-rate contractors back-filling vacancies.

3. Buying “Bodies” with exacting specifications is hard

The “bodies” are often requested in an ad-hoc manner and to an exacting specification that significantly constrains what can be provided, artificially limiting supply and so driving up cost and risk. For example, the role to be filled in the patchwork structure will have a very tight skill specification, must work in a specific location and can only be quite senior/experienced (the logic being “we’re not paying to train junior supplier people on our projects”). In this model, the client organization carries all the risk on cost, productivity & quality.

So the unintended consequence of the well-meaning strategic changes in the service resourcing model is the loss of some of the good stuff that went before. As well as fragmented supplier deployment and inefficient resourcing, another major impact is the resultant loss of clear service structure and definition, and unclear or limited performance management. The result is that the cost risk pendulum is swinging significantly against clients, who are once again bearing the risk.

4. Reintroducing third party service structure

The way out of the quandary is to reintroduce some sort of structure with a clearer and somewhat wider scope that can be resourced more flexibly as a service responsibility (even a small one), including the innate ability to refresh and improve itself. One of the challenges is that suppliers have, quite understandably, aimed to limit their risk by walling themselves inside tightly negotiated scope, with exceptions for anything that goes wrong outside it.

As a concept, it is one that lawyers and commercial managers like, and in which they can wrap themselves up warmly when they go to bed at night. However, it is somewhat dinosaur thinking, and is unsustainable in the modern world where business-technology performance is better and more meaningfully measured by end-to-end experience service levels (e.g., the much touted XLAs – “experience level agreements”) or even more holistic business outcome metrics that are actually linked to the performance of the client business. Employee bonuses are commonly linked to such “One Team” metrics; supplier agreements rarely are.

Some of that lack of performance-benefit linkage is historically because previous attempts to connect end-to-end have often failed dismally, be it due to poor definition, poor data, or somehow the grand ideas of partnership formed with a handshake on the golf course just didn’t pan out in the cold light of implementation (golf courses are really not good venues for making major strategic decisions!).

Some of the challenge however is also down to old-fashioned thinking about how the boundaries of responsibility are defined and how the benefits of supporting a winning business can be shared (or, of course, losses taken). You probably can’t blame the lawyers as they still think using Latin in contracts is smart, are mired in centuries of historic case law and outdated legislation and just don’t think along commercial service lines!

5. Service structure design decision

Without attempting to solve the entire conundrum of aligning client and supplier incentives and risk-reward sharing, we can push back against the reversion to bad (re)sourcing behaviours by imposing some structure on the service requirements. That could mean moving back to the comfort blanket of the older “tower” models; better, perhaps, to move forward to a micro-sourcing model, with small service components plugged together to fit the client need, framed with more innovative commercial and performance management structures with a “One Team” twist. There will be nay-sayers who declare “but it’s not market standard”, however the appropriate response to that is “stasis is death” and “average is for losers”, so I say: think up, think harder and imagine the commercial possibilities!!!


When “Smart” is Stupid

As a technology descriptor, “Smart” waxes and wanes in the various cycles of hype and marketing overreach, but looking at “Smart Home”, some of the real life examples are pretty Stupid…

If you look at the definition of Smart technology in general, you will see the sort of key attributes that look like this…

1. Defining “Smart” technology – key attributes

Those being:

  • Connected, by Wi-Fi or whatever, with some Cloud services and remote access to your devices
  • Friendly UI, typically connected to a mobile app
  • Automation of some sort, often by linking to Alexa (or similar, other virtual home assistants are available), or IFTTT
  • Using data analytics and processing to drive some level of intelligent behaviour
  • Being part of an eco-system of devices

In the home, central heating is one of the more mature uses of Smart technology, with the possibility of managing your home heating efficiently and effectively, using a mix of distributed WiFi/Zigbee thermostatic valves on the radiators, controlled centrally and accessible by voice command through your home assistant. It’s a beefing up of the older analogue thermostatic controls.

2. Smart Central Heating – a good application of Smart technology

Look in the kitchen or utility room, however, and you will see another story: the sorry tale of woe that is “Smart” home laundry.

3. Home Laundry – not Smart Technology

Yes, the big names will sell you a “Smart” Wi-Fi enabled washing machine or tumble dryer, but the marketing doesn’t match the reality. For example:

  • Cloud connected. Generally yes, the remote diagnostic feature can be useful…
  • Friendly UI. I suppose having a cute mobile app to programme the machine might be nice, but you need to be standing in front of the machine to load it, so just twiddle the knobs…Pointless.
  • Remote access. What is the point of being able to programme your washing machine when you are 20, 30, 50 or 10,000 miles away – you are not there to put the washing in. If you forget to put it in before you go out, then you are royally stuffed. (It’s not like a Wi-Fi enabled cooker that you could use to check if you left the gas on, which would be very helpful, especially if you could command it to shut the gas off.) Also pointless.
  • Automation. Having an alert when your washing cycle is finished might be useful if your house is sooo big that you can’t hear the washing machine when it finishes, but if the house is that big, you are probably in the demographic that doesn’t do their own washing anyway. Again, no use if you are far away: the washing will just have to stay in the machine and get wrinkled and fusty smelling. More pointlessness.
  • Analytics. OK, this part might work in some way, in that the machine can work out how much you put in and then adjust the water level and wash and spin cycles to match. Otherwise it’s not going to give you much useful advice like “last time you turned me on at this time you used the cotton cycle programme, would you like to do that again“, or “55,345 people are watching this cycle now“, or “your friends are currently relaxing and watching TV whilst you are doing the washing“. Annoying.
  • Ecosystem. This is the killer issue. The home laundry process is almost entirely manual and labour-intensive, and so there is no automated continuous flow of washing passing through to which to apply Smart technology. Showstopper!

So, there you have it; it’s Stupid.

You can envisage some ways of changing the Home Laundry paradigm:

  1. Don’t wash your clothes – the Null solution, but loses you friends very quickly
  2. Outsource: send your clothes to a central laundry, which is continuous flow, maybe picked up and delivered by a Johnny-cab auto-taxi
  3. Truncate the whole process with self cleaning clothes – these sort out the bio-stink with little copper wires in the fabric, but I am sure they would still need to have the dirt washed off, or maybe you just recycle them. You could try the HercLeon Apollo Self Cleaning T-Shirt which, to quote the sales blurb, “can be comfortably worn for days, weeks, and even months without having to be washed with soap” [my bold]
  4. Build the Laundry Jet laundry collection systems into your house
  5. Buy a Panasonic-backed Laundroid laundry robot if it ever launches (apparently some $60m was invested in it)

Time will tell what innovations will arise…

You can consider some other examples of Stupid and work out what differentiates Smart from Stupid like this, to the left of the donkey…

4. When “Smart” Technology is Stupid

Stupid technology shares a lot of attributes with Z-list celebrities: a showroom dummy with a pretty face and the intelligence of pond life, which needs a handler and has nothing to say worth listening to.

  • Coffee machine. A coffee machine would only be smart if it was part of a caffeine delivery flow system: ordering fresh capsules, a cup management robot, free-flowing water and liquid waste pipework, and a disposal / recycling system for the capsules and grounds. But they aren’t: just a pretty UI and a pointless Wi-Fi connection, like the Delonghi Primadonna Soul Bean-to-Cup Coffee Machine, a snip at £1299
  • Toaster. The crew of Red Dwarf had issues with the Talkie Toaster, so maybe a full continuous flow toast-making eco-system could be an issue, but the current generation of Smart Toasters are just a pretty face, like the Revolution InstaGLO R180B Touchscreen Smart Toaster, yours for £366 on Amazon, and it doesn’t even have Wi-Fi
  • Pressure Washer. As my family would tell you, I have a love-hate relationship with pressure washers. Even allowing for that bias, in my humble opinion, the use case for Smart pressure washers is pretty well non-existent. I suspect, however, the purpose is actually a dark, spooky objective to gather customer data (somehow). The Karcher K7 Premium Full Control Pressure Washer has a Bluetooth connection linking to the Karcher mobile app – why? I installed the mobile app, couldn’t see the point and deleted it…

The key insight from the analysis above is that, to be properly Smart, technology has to be part of a largely automated continuous flow system; otherwise it is just lip-stick on a pig.

So we can revise the first chart at the top of this article, and add that continuous flow requirement to the key attributes, thus…

5. Defining Smart Technology – Key Attributes (Revised)

So there you have it: now we can spot when Smart is Stupid, and we also have a signpost on the road to making things properly Smart


Hunting for Buried Treasure

There’s hidden treasure in tracking and capturing business benefits from change but it is not always easy and often neglected

I’ve tried to resist the temptation to start this piece with a gruff and throaty “Aye, me hearties!”, the topic being about searching for hidden treasure, or rather more prosaically, benefits realization.

You would have thought that, after all the hard work of running a project and implementing change, people would be keen to make sure the promised benefits are actually banked. Sadly, no: this is often neglected. There are probably a number of reasons:

  • it’s not in the “plan” / budget, and nobody is on the hook for it…
  • tracking benefits can be hard and requires discipline to stick with it as things play out over a longer game…
  • implementing actual change means changing some ingrained behaviours, and it is easier not to look…
  • there was more promise than reality in the proposition and nobody wants to get called out…
  • nobody wants their pet project to be starved of funding so don’t want anybody looking too closely…
  • tidying up is boring and not glamorous…
  • “not my job”…
  • the project objectives were to deliver a load of features (the what), and the benefits (the why) weren’t clearly defined…
  • can’t measure / see the effect of the changes made…

To set the context and do some setup for the topic, we can define three levels of business, like this…

1. Business levels – strategy to change to operations

The Strategy level guides the direction of the business; Business Change actually makes changes to the business; and the BAU operations layer gets on with actually doing the business, in between being guided and changed. To see where the delivery of benefits actually needs to get tracked, we need to project that out over the “Think-Build-Run” lifecycle of change and operation, thus…

2. “X” marks the spot – finding the treasure

You can see that “X” marks the spot for where this should happen in the top right hand corner in the intersection where Strategy meets Run, bubbling up from the monitoring of impact in the business change level. Depending on how thin the oxygen is at that exalted level, Strategy types might think they do not have to get their hands dirty, but in business and battle planning terms, it makes a lot of sense to check out whether you are actually winning or just staggering along from one non-event to the next.

In an ideal world, the benefits tracking would have a gimlet-eyed, CSI-like forensic analysis of the results of the strategy as it plays out, like this…

3. If only it was like this…

…and, indeed, sometimes it can be like that, where the change is intimately linked to the business performance. Things like company or product revenue, or operating costs, are usually good things to look at, although sometimes you need proxy measures and targets for the effect of the change itself (like conversion rates), which do then track into actual business performance changes.

Just a quick anecdote from times past on when that link is strong: I had a discussion with an EVP of a global financial services company during a technology investment portfolio prioritization programme. We were discussing him providing a proforma business case and NPV for a project to futz with the FX rates on some of their cards. He just said "I'm not doing that, I know we'll make sh*tloads of money", so that was that... the project featured high on the priority list

But, as often as not, tracking the impact is more like looking into a murky fog through the wrong end of a telescope…

4. …the reality of tracking benefits

…and seeing meaningless confusion (or a donkey).

This is a frequent problem for parts of an organization that are not well connected to customers and revenue, or where the effect is unclear, e.g., developing enabling infrastructure, where direct business benefits cannot be linked to the change. (Hint: try to get the business front-end on the hook for some actual business benefit for changes like that; it makes the NPV positive, and the whole change becomes more meaningful from a business perspective.)

The challenge often derives from the measurability of the impact on performance brought about by the change (if that was actually defined – a different challenge, of course). To measure that, you need to look at the difference in some metric that should be impacted, which is relatively easy when you have a clear baseline before and can measure the performance after the change, or when you can test the effect in the present by parallel testing of systems with and without the feature, like A/B testing for conversion rates with different customer journeys / click-flows on a web-site or mobile app. Those are the first two cases like this…

5. Measuring the performance impact of change – laughter and tears

…however the breakdown occurs in the second two cases where, either:

  • there is no “before” baseline – obviously not good, but potentially fixable, or you can reference external experience to determine the likely impact; or,
  • worse, the comparison would be between either of two possible future paths. This final case is the most difficult as you would (in theory) have to compare performance between:
    • A – How we would have performed in the “Path Not taken”; compared with
    • B – How we actually perform on the path we have taken

The case of the “Path Not Taken” is quite typical of commercial changes to the technology development process itself, something of a meta-topic perhaps (changing the change process…). Technology development has a huge discretionary element and there are many ways to waste money with it, so an important and essential question like “Are we doing technology development more effectively and efficiently now?” requires discerning analysis and thought.
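For the measurable cases, the parallel (A/B) comparison described above can be sketched with a simple pooled two-proportion z-statistic; the conversion counts below are hypothetical, and the function name is mine:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic comparing conversion rates of two parallel variants
    (an A/B test). Positive z means variant B converts better than A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error of difference
    return (p_b - p_a) / se

# Hypothetical click-flow experiment: 10,000 visitors per arm
z = two_proportion_z(conv_a=480, n_a=10000, conv_b=560, n_b=10000)
print(round(z, 2))  # |z| > 1.96 suggests a real difference at the 5% level
```

The before/after baseline case works the same way, except the "arms" are separated in time rather than run in parallel, which is exactly why a missing baseline is so damaging.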

However, to start with the basics across the broad spectrum of change, you need to set things up for success. One of the foundations is to start thinking in terms of outcomes, handing out the investment funds against outcomes rather than budgets by department/cost center or whatever. Delivering outcomes is typically multi-functional and cuts across organizational lines. There are a number of key elements to the recipe, which you can see below.

6. Starting with the basics – thinking about outcomes

Having measurable outcomes is fundamental to success in having real benefits to track, and the level at which they are defined sets the scope and breadth of their impact across the business. The higher up the performance hierarchy the more likely they will directly impact the fundamental performance of the business…

7. Business performance hierarchy

Outcomes need to be specified properly at the start of any change journey, something like this:

  • Aim is to define the achievement of a specific improvement brought about by a series of coordinated actions in near term scope, say, up to 3 months out;
  • Outcomes should generally be SMART (Specific, Measurable, Actionable, Relevant, Timebound), or whatever your version of this acronym happens to be; “measurable” is not negotiable though. They are specific, reasonably sized/feasible incremental beneficial results we are looking to achieve which contribute to the higher-level business goals;
  • They need to be focused on specific improvement, so wording needs to be stricter in definition, and should be in this syntax: “<Improvement verb> <some Attribute(s)> of <some Thing(s)> by <some Target Measure(s)>”, using
    • <Improvement verbs> like: Improve, Streamline, Optimise, Tune, Reduce, Accelerate (but not Do, Create, Assess, Evaluate, Analyse, Synthesise, Perform, Enumerate, Distribute, Communicate &c – these are “doing” action words which are part of the “How”)
    • <Attributes> like: Quality, Accuracy, Effectiveness, Awareness, Speed, Timeliness, “Fit”, Cost, Usability, Accessibility, Reliability…
    • <Things> are whatever entities of which we need to improve the attributes
    • Specific <Target measure(s)> of success (percentage, absolute value, etc.) of the metric (or metrics) meaningful for the improvement of the Attributes 
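The syntax above lends itself to a crude automated check. This sketch (the word lists are abridged from the bullets above; the function name and messages are mine) flags outcome statements that do not fit the pattern, or that lead with a "doing" word:

```python
import re

# Abridged from the improvement/doing verb lists above.
IMPROVEMENT_VERBS = {"improve", "streamline", "optimise", "tune",
                     "reduce", "accelerate"}
DOING_VERBS = {"do", "create", "assess", "evaluate", "analyse",
               "perform", "communicate"}  # part of the "how", not outcomes

# "<Improvement verb> <Attribute(s)> of <Thing(s)> by <Target Measure(s)>"
PATTERN = re.compile(r"^(\w+)\s+(.+?)\s+of\s+(.+?)\s+by\s+(.+)$", re.I)

def check_outcome(statement):
    m = PATTERN.match(statement.strip())
    if not m:
        return "Reword: expected '<Verb> <Attribute> of <Thing> by <Target>'"
    verb = m.group(1).lower()
    if verb in DOING_VERBS:
        return f"'{verb}' is a doing word - state the improvement, not the task"
    if verb not in IMPROVEMENT_VERBS:
        return f"'{verb}' is not a recognised improvement verb - check intent"
    return "OK"

print(check_outcome("Reduce cost of invoice processing by 15%"))    # OK
print(check_outcome("Create a dashboard of KPIs by next quarter"))  # doing word
```

A regex is obviously no substitute for judgement on whether the target measure is meaningful, but it catches the common failure of writing tasks instead of outcomes.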

There are many ways to define improvements, here are a few examples…

8. Illustrative business outcomes

When it comes to setting up the processes to track benefits, then we can slightly redraw the business level-lifecycle continuum from figure 2, and pick out the feedback loops…

9. Key investment and change feedback loops

The outermost Strategy & Investments loop (loop 1) is all about investing in the right things and typically runs on a quarterly cycle. The Business Change inner loops are all about prioritizing work to deliver the right features (loop 2) and delivering quality code (loop 3), which run at bi-weekly and greater-than-daily frequency respectively (or multi-quotidian, if you want to look that up in the dictionary). The Operations continuous improvement loop (loop 4) feeds upwards into the higher levels.

You can translate that conceptual model into an actual time-scape which integrates the loops, so that you have a rational approach to developing changes needed with quality delivery and proper tracking of the benefits to support regular investment review to accelerate or throttle funding…

10. Agile investment governance – tracking benefits iteratively

The features of this particular time-scape are:

  • Regular major releases of functionality, with content prioritized according to business need, maybe from outputs from a number of sprints
  • Periodic feedback from the “market” (however that is defined), both on features and benefits delivery, into investment review
  • Opportunity to reprioritize deliveries to market to focus on higher value features/service elements
  • Opportunity to stop spending on an individual stream at any point

Obviously you can roll your own version, and also rework it for more generic business applicability. So…

(gruff and throaty) Aye me hearties, there’s treasure to be had!


Earnback’s a botch

Earnback of service credits in technology service agreements is an absurd concept that should be rooted out of contracts

Service credits are commonly used in technology service agreements to compensate clients for a service provider’s failure to meet a committed level of service. Whilst service credit mechanisms range from simple to byzantine, earnback of service credits is one of the more egregious pieces of nonsense you can find in a service level agreement.

To take a step back, there is a social concept that you can somehow repay your bad actions to society by doing good works. It is a form of utilitarian philosophical thinking where the greater good can outweigh the bad, perhaps industrialised in the sale of indulgences back in Medieval times. Conceptually, a murderer can go to prison for many years to “pay their debt to society”, but they still killed somebody nevertheless. So the concept is based on some possibly dubious principles and does not always survive robust scrutiny. Indeed, as is often the case with socio-philosophical inventions like this, there are conditions under which the simple equations fail, and the answer they produce is nonsense.

As an aside, and I hesitate to bring it up, but if you want to test boundary conditions and how a rule, social or otherwise, works or doesn't work in extremis, then try applying the Jimmy Savile Test (OK, yuck). If you don't know, Jimmy Savile was a uniformly horrible person who curried favour with rich and famous people in the high strata of society, using his apparent "good works" as cover for his obscene behaviour.
The test is this: if somebody proposes a generic rule, then you think to yourself "what about Jimmy Savile?", and if the answer is "Euw!", "No way!", "Disgusting!", then the rule is probably a bust.

Back to earnback, which in the context of a commercial service agreement, means the service provider doing some good works to overcome below par performance at some point in time. Those good works then exempt them from paying compensation to the client which would otherwise be payable for the poor performance. Timing-wise, the performance bump can be ex-ante where previous good works put money in the bank to credit against claims, or ex-post where the good works occur after the default. In the generality, it is all about the push and shove of risk transfer between client and service provider; earnback pushes back risk to the client.

As a principle, earnback of sorts could make sense for a development activity or a manufacturing piecework process creating widgets, where a poor rate of production in one month can be offset by increased output in later months, so that the same expected pile of widgets is created over the overall period, meeting the expected weighted-average production rate.

However it makes little sense in an operating service where the service delivered is of its moment and the experience is transient. In the service world, a dog, as they say, is only as good as its last trick – not an average of its good and not so good tricks. Sometimes it makes even less sense…

Consider this typical earnback clause that you might find in a service agreement…

Typical earnback clause in a service agreement

This says what it says which is that if the service provider behaves for three months following a service level default then they can play their “get out of jail free” card and not pay the service credit for their default. Graphically, it looks like this…

The absurdity of earnback – thanks for nothing!

The story is: the service provider commits to deliver to a service level in the contract. They then default on that service level in Month N, and proceed to meet the service level in Months N+1 to N+3.

The earnback clause magically absolves them of paying for the default…by doing the job they are already being paid to do. This is patently absurd: why does doing that job same as any other time exempt them from paying the compensation? Thanks for nothing!
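To put numbers on the absurdity (the £10,000 credit, the month-by-month results and the function name are all invented for illustration), here is a sketch of the credit ledger with and without a 3-month earnback window of the kind just described:

```python
# Illustrative service-credit ledger showing the earnback effect.
# Month-by-month SLA results: True = service level met, False = defaulted.
# Assumed terms (invented): a credit of 10,000 per default, earned back if
# the following 3 months are all met; a default whose earnback window is
# still incomplete is treated here as payable.

def credits_payable(results, credit=10_000, earnback_window=3):
    payable = 0
    for i, met in enumerate(results):
        if met:
            continue
        window = results[i + 1:i + 1 + earnback_window]
        earned_back = len(window) == earnback_window and all(window)
        if not earned_back:
            payable += credit
    return payable

results = [True, False, True, True, True, False]   # defaults in months 2 and 6
print(credits_payable(results))                    # with earnback: 10000
print(sum(10_000 for met in results if not met))   # without earnback: 20000
```

Month 2's default is wiped out simply because the provider did the contracted job for the next three months; only month 6's default (no complete window after it) still pays. The client absorbs half the compensation for nothing in return.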

In the fetid atmosphere of the negotiating rooms when these types of clause were first invented (maybe the 1980s?) it probably all seemed to be a very clever manipulation of risk and incentives. You can see other examples of “cleverness” where complex mechanisms have been designed that don’t make sense in the cold light of today (e.g., service level escalator ratchets which don’t work in a multi-vendor SIAM setup).

In the hierarchy of performance metrics many of the older concepts like earnback of service credits fiddle in the bottom tier of technical performance measures whilst user experience burns: the client suffering an “All-Green Hell” SLA dashboard for a service that users hate passionately. Better to focus design efforts on performance regimes further up the pyramid…

Level of user interest and understanding of key performance metrics

So, root out earnback from your service agreements, let the service provider make its penance at the time of default and move on!


Beware the Low-Code IPR Iceberg

Low-code app development is very popular now, but you need to ensure the commercial benefits of your titanic new app do not founder on the low-code intellectual property rights iceberg

Low-code and No-code are very popular, with a range of promises around rapid development of engaging apps with little effort and the “democratization” of app development from “Big IT” into the hands of citizen developers.

Of course, low-code is not new at all, and was largely pioneered in the fourth-generation languages (4GLs) which peaked in the 1980s and 90s, with GUI (and even sometimes WYSIWYG) systems like PowerBuilder, Pro-IV, Omnis, StaffWare, and even Microsoft Access.

As it happens, for a period in the 80s I ran the team that supported PRO-IV after its acquisition by McDonnell Douglas Information Systems.  Being written in C, PRO-IV was ported to more or less anything that had a C compiler.  Not always a good thing as it happens, especially for the IBM AS/400, which had a toy C compiler that produced code that was incredibly slow (for the technical amongst you that was due to every function call being generated as an inter-segment CALL which is about 1000 times slower than an intra-segment CALL and so a baaaad thing to do)

As I may have mentioned before, I am a big fan of GUI-based WYSIWYG visual software design, with a simple philosophy…

Visual good, green screen bad (with apologies to George Orwell)

As most low-code designers are generally visual in nature, from a philosophical perspective they come under the “good” category. Commercially speaking, however, there are some potentially significant pitfalls of which you need to take account.

Before digging into that, we first need to have a look at the main components that make up an “app”. Whilst it is a rather corny cliché, the iceberg motif is quite apt here, as there is more of an app that you don’t see than you do, especially for low-code. The parts under the waves beneath “The App”, are the data and meta-data, supporting environment, execution engine and infrastructure on which it all runs, thus…

There’s more IPR hidden below the waterline in the Low-Code IPR iceberg

Not surprisingly, there are some significant commercial attributes to all those parts, not least who owns which parts. This raises the concern that whilst you might own some intellectual property rights (depending on your lawyers), due to the complexities of the other parts of the app construction, you may not own anything that you could actually take away as a useful asset.

Indeed, there are a number of factors to think about when asking yourself the question – who really owns my app?

Who really owns my app?

Considering those factors:

  • IPR ownership. This is the obvious one that the lawyers focus on covering business trade secrets, copyright, patents and so on.
    • How much of the total app IPR do you own?
  • Portability. A key part of switching is taking your toys away and playing your game somewhere else for your own advantage.
    • Can you extract any useful description and source code for the app, preferably in a commonly recognised format?
  • Third Party Access. This covers the scope of who can touch your app for development, support and general usage. (This is a historical trap for outsourcing IT services)
    • Can third parties modify and support your app code, and other parties actually use the app?
  • Licensing. This covers how the various parts that you don’t actually own are licensed to you and your associates and affiliates, and for how long that license actually lasts.
    • What is the scope, time period and other attributes of the licence given?
  • Run-time costs. This covers the costs associated with deploying and using your app which may include or exclude the infrastructure costs depending on the application and low-code service construction.
    • What is the on-going pricing for deployment and use of the app, and what happens when you stop paying?
  • Supplier Continuity. This covers the longevity of the supplier running your app and what happens if/when they go bust. In the past that was handled by a simple escrow clause, but that is becoming a much less tenable proposition in the SaaS world. In the worst case a supplier will cease to exist, their servers go offline, your app is gone, and any useful IP becomes “bona vacantia” owned by the Crown (in the UK, at least; other bad outcomes are available).
    • What happens to your app in the event of supplier failure?
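Those factors can be turned into a rough due-diligence scorecard when comparing low-code platforms. The sketch below is purely illustrative: the factor list comes from above, but the 0-5 scale, the equal weighting and the example scores are my own assumptions, not a standard model.

```python
# Hypothetical lock-in scorecard over the six factors above.
# Scale and example scores are illustrative assumptions only.
FACTORS = [
    "IPR ownership",
    "Portability",
    "Third party access",
    "Licensing",
    "Run-time costs",
    "Supplier continuity",
]

def lock_in_score(scores: dict) -> float:
    """Mean score across all factors: 0 = fully open, 5 = fully locked in."""
    missing = [f for f in FACTORS if f not in scores]
    if missing:
        raise ValueError(f"unscored factors: {missing}")
    return sum(scores[f] for f in FACTORS) / len(FACTORS)

# A hypothetical supplier-hosted, SaaS-like low-code platform
example = dict(zip(FACTORS, [4, 5, 3, 4, 4, 5]))
print(f"Lock-in score: {lock_in_score(example):.1f} / 5")  # prints 4.2 / 5
```

In practice you would weight the factors to taste (supplier continuity may matter far more than run-time costs for a business-critical app), but even this crude average forces the question to be asked per factor rather than waved through.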

Putting those together, you might own the whole kit and caboodle of your app, which is less than likely for low-code, although it may still be the case for older 3GL-based apps; or you may really own nothing useful at all, just some scraps that somebody generously lets you use in exchange for some of your hard-earned cash.

You can map out the extremes of the commercial lock-in created by these sorts of considerations against the handy iceberg layers, thus…

Commercial lock-in potential

Most low-code systems exhibit some of the features on the left-hand side of the iceberg, and you can do some rough clustering of current low-code systems accordingly…

Examples of different low-code systems by degree of commercial lock-in
  • SaaS Platforms with low-code extensibility. These might not be considered “low-code” by purists in the fullest sense, but often exhibit some of the technical features. They typically have the highest level of lock-in, as you are wedded to the mass of application functionality and adding customization around the side. The system is hosted by the supplier, and you generally pay for most/all the users, with some significant restrictions around the licensing and usage. The app “code” is not portable, and when the supplier dies or you stop paying for access your app dies too.
  • SaaS-like Low-Code systems, hosted by supplier. These provide low-code features but are locked to the supplier’s systems and infrastructure with restrictive “take it or leave it” licensing, and again you pay for most/all users who touch the app. Again the app “code” is not portable, and when the supplier dies or you stop paying for access your app dies too.
  • Enterprise Low-Code systems, with choice of infrastructure deployment. Whilst these are like the previous group just above, they start to open the sealed world by giving options to deploy your app on different underlying cloud IaaS, or even on-premise. They may also use some open-source components for the deployment tools and landing zones (e.g., Docker, Kubernetes, etc.). However, the apps themselves have relatively low portability, even if they run in a more open environment. These types of systems are often targeted at Enterprise clients who have multi-cloud strategies. No surprise, they also therefore carry an “Enterprise” price tag and may still have supplier-imposed time-based limitations on use, access and so on.
  • Code-generator Low-Code systems, deploy anywhere. The last group have the lowest level of lock-in and typically generate standard 3GL code, like Java or PHP. In their freest form, you only pay for the development tools, and the run-time is royalty-free with no application usage run-time costs. Since they generate code, the apps are relatively portable, although the generated code may not be pretty in a human sense. They also effectively have perpetual life with no supplier-imposed time limits, unlike the other three categories. More locked-in versions will have run-time charges.

Low-code is definitely a “good thing”, but you do need to go into it with open eyes and understand how the shiny promise of speedy development with high investment efficiency can be eroded if you don’t take the commercial realities into account…

The promise of low-code can be seriously eroded by the commercial realities

…and your app founders on the low-code IPR iceberg and its commercial case sinks below the waves.


Category Error

Whilst categories are a good organising principle for procurement, in technology at least they can lead to siloed thinking that misses bigger transformational opportunities...

One of the challenges of functionally aligned organisation structures is that sometimes you get different parts of the business working in different ways, aligned to different objectives and speaking in different languages. This is quite common between Technology (be it Digital, IT, OT or whatever) and Procurement. Each group can be suspicious of, or lacking respect for, the other, and the schism can be so bad that Technology has its own procurement team because Group Procurement “just don’t get it”. Equally, Procurement feel aggrieved because they are engaged too late, handed the commercials in a hospital pass hamstrung by technical lock-in, and otherwise treated as low-status order-placing “water boys”.

A classic area of speaking in different tongues is how you group the things you buy and deploy. Technology people may think of Services, which are built of a multiplicity of components of different types: software, hardware, Cloudy parts, people, external services and so on. (Aside: the people element is quite special here, as it can be actual internal headcount rather than a third-party component, and so invisible in traditional AP-based spend cubes.)

All those Service parts have different buying characteristics, whereas Procurement think about Categories, which are things that have similar buying characteristics and approaches.

Whilst that can undoubtedly make sense for commodity items, it starts to fall apart the more specialised the entities in the categories become. Software, for example, is a hard category as it is not particularly susceptible to procurement amalgamation, even if there are some common processes like license management. Ten different specialist software products from different authors in a high-level category cannot be amalgamated, only switched/substituted or eliminated – which are business/technology decisions, not procurement decisions as such. So the high-level category is really still ten little categories…

The upshot is you can literally have people thinking and talking at cross-purposes – Services vs Categories, thus…

Talking at cross-purposes – Services vs Categories

There are perfectly valid reasons for both views but sometimes you need to lead with one rather than the other. This is particularly true when considering how to optimise costs, which can fall into three paths…

Cardboard Images: Andrea Crisante, Koya79 | Dreamstime.com; chwatson | Free3D

Business as usual Category Management won’t change the pile of cardboard boxes of your category, maybe just organise the contents a bit better; Category Sourcing might tidy up the boxes a bit. However, both are fundamentally limiting to the scope of opportunity that unfolds. Therefore, to unlock the bigger ticket opportunities and build the castle of your dreams you have to look across categories…

Examples of cross-category change and transformation

The cross-category opportunities don’t have to be the mega-sized reshaping of Digitalisation, BPO, technology outsourcing or even the technology switching wizardry of Cloud migration and the like; they can be quite mundane. A good example is that of Technology contractors working in staff augmentation roles on daily rates. These people are often managed in an HR category where somebody has thoughtfully negotiated a single-source deal with a contract resource management company and their fancy resource management platform (you know the names).

However, these typically expensive on-shore contractor creatures should be factored away into managed services run by technology partners delivered from cost-effective locations wherever on the globe that may be.

But whilst the headcount is locked into the HR category, with spend stuck in the wrong bucket and savings counted against “their” savings target, that doesn’t happen, and the bad behaviour of buying expensive unstructured resource is institutionalized and systematized. That is indeed a “category error”!

The necessary solution is to allow opportunity assessments and following commercial stages to break out of the category strait-jacket and think holistically about the business, its technology underpinnings and how it can be transformed for the better (and lower unit cost).

The starter for ten on that is to align the complementary roles of Technology and Procurement across the business service lifecycle to provide mutual support and grasp the larger opportunities, like this…

Aligning complementary Technology and Procurement roles across the business service lifecycle

And so it goes…


Digital, Phygital, Fiddlesticks

Digital is a rather abused term that has been round the block a few times, and now we have “Phygital”, which is a load of bull…

I was prompted to think about the meaning of “Digital” recently by the unlikely conjunction of two disparate events, viz:

The first is a great step forward for a brand that has up to now been firmly “bricks and mortar”, and the second is apparently something “phygital” with the incursion of technology into actual clothing for reasons.

I get the commercial, consumer-driven logic of the first, but the second is somewhat more puzzling and perplexing. However, I don’t really care about clothing and fashion, so it is a market logic that I would have to work hard to understand; we’ll see how that business model succeeds over time.

Anyway, it set me thinking about words…

Digital has been around for many years, but “phygital” is a much more recently coined term, attributed to Chris Weil, Chairman of Momentum Worldwide, in 2007 (Thanks, Chris), picking up momentum c.2017. You can look at the frequency of some key technology terms in Google NGram Viewer…

NGram frequency of key technology terms by year

PCs were obviously quite a thing back in 1985, and gave mainframes a little bump at the same time too. I tried “minicomputer”, but that barely features at this scaling, so apparently it was not something that people talked about so much back then. Whilst departmental computing was a big wave of change versus mainframe in the 1970s and 80s, it was only in the business domain, and so general awareness and interest were lower, I suppose.

Web and Internet were clearly also big talking points in 2000-ish, and beat down the Microcomputer Revolution in volume. But throughout you can see “Digital” growing steadily until it has actually overtaken what were the leaders, “Web” and “Internet”, with Web taking a sudden down-turn.

Most of the other newer terms like AI, “blockchain” and “metaverse” still bumble around at the bottom of awareness at this scale, so are not registering by the current 2019 end date of the NGram corpora. “Fintech” is also a relatively low scorer, even though it has now spawned a constellation of new digital “<ANYthing>Tech” neologisms (like “InsureTech”, “PropTech”, “FemTech”, “EdTech”, “LegalTech”, “FoodTech”, “AgriTech” and so on). These are also probably more business-vertical specific than broad-based, so don’t get the volume of attention.

And don’t bother looking for “phygital” which also dribbles along the bottom of the chart if you add it to the query.

Before around 2015, “Digital” used to mean stuff related to computers generally. However, from then onwards it started to acquire jazzy new meanings related to exciting things like customer experience, digital marketing, mobile apps and otherwise being a “Digital” business, and with “digitalisation”, the process of becoming that thing. McKinsey had a go at defining it which you can read at your leisure.

What got lost is that many businesses have been digital for years, and that technology rubbed up against the real world in many places, often not so glamorous: in manufacturing, supply chain, vending machines, door locks in hotels, the kitchen systems at KFC…

To get to grips with this you can draw up a simple gameboard that maps out business typology against its manifestation.

Business classification – Typology vs manifestation

The business typology separates the places (“venues”) where people interact (e.g., actually trade, or just get together and interact to do people stuff, like throwing sheep) from the actual trading businesses themselves, i.e., those that generally exchange some value for some thing or benefit. These can be actual products, services and money, but also, in the wider context, social kudos, environmental benefit or other non-monetary value. For these purposes, broker-type businesses fit in the “trading” slot as they facilitate other peoples’ trading.

By the way, for the bankers reading this, we shall deliberately ignore where the trading transactions (financial, social, emotional, environmental, or otherwise) are cleared and "payments" handled, let's keep things simple for the purpose of this treatise.  

The manifestation dimension separates the real from the non-real. Physical covers what you expect (to be construed according to context, as the lawyers say): buildings made of straw, sticks and bricks with actual geographic locations, or cars, or books made of paper. Virtual covers everything that isn’t that, a nicely mutually exclusive definition. So it can include virtual assets like photos, videos, software and financial products, and virtual businesses that provide places for people to connect and trade.

You can map out some businesses onto the landscape to see how the Pickup Sticks fall.

Digital business classification – some examples

What you can see (obviously) is that those which fall into the Virtual column are heavily technology based (indeed, by selection, since we have excluded ectoplasmic spirit-world businesses, wyverns, harpies, vampires, magic wand shops and other virtual manifestations of a more mystical sort). Whilst some of the virtual venues like Facebook support virtual interactions, a virtual platform like Uber facilitates real-world transactions between car drivers and their passengers. And Utility Warehouse is a virtual business that, loosely speaking, brokers people-energy trading.

In this classification, the Metaverse is just another venue, and it could yet be a three-star Michelin restaurant experience or just a greasy spoon, as we shall see. But like the financial exchanges of today, the venues (exchanges) make a dribble of money in comparison with the eye-watering value that flows in the trades they facilitate. It’s largely what you do that makes the money, rather than where you do it (whether you have Meta-legs or not…).

The caveat to that is that a business with a captive supply base, and monopolistic channel control, like the Apple App store, can make shed-loads of money at its 30% transaction tax. Similarly, Facebook as a venue makes lots of money by selling access to its users for advertisers compared to the unfathomable value of the social interactions that take place upon it.

The key point here is that the businesses in the right-hand Physical column also use technology, often extensively, although it is not so visible to the untuned eye. Even the Louth Livestock Market, a very physical place with real farm animals and open outcry selling round the ring, also has a website and online auction trading. In other words, they are Digital businesses too.

So Digital is embedded in both Physical and Virtual manifestations and forms a solid and critical substrate on which almost all businesses run today. Like a seam of gold running through quartz…

Digital substrate embedded in most businesses

What does a “Digital” business actually look like these days? Well, it would undoubtedly include, internally, solid chunks of systems for Customer, Product & Operations and Performance & Control, and externally, multiple channels, non-linear supply chains and the like. But that is a story for another day.

We used to see businesses sprout silo’d business units separate from the mainstream and built on electronic channels (oh yes, Digital channels) back in the early 2000s. This is less xenogenesis to birth something new and quite unlike its parent, than it is temporary firewalling to incubate a new way of doing things in the same business. Consequently, these offshoots have long been absorbed back into mainstream business models as they matured.

Many businesses have been omni-channel for years; it is no longer a rocket-scientist-level insight to suggest that, for example, you should have common stock management between an online store and a physical shop. However, the wave of reworked “Digital” businesses in the last 5-7 years regurgitated the concept as something new, when indeed it is not.

The upshot of all this is that the newer Virtual businesses were called Digital by their over-enthusiastic and imprecise evangelists, in thrall to a form of cognitive bias, and so Virtual has been confused with Digital. This created the misbegotten conflation of two terms to describe an omni-channel experience across Physical and Virtual.

So we got “Phygital”. However, Digital embraces Virtual and Physical, so “Phygital” should really be “Phyrtual”, or “Virtical”, or some other bull.

Digital is perfectly good…we don’t need Phygital, let it wither and die, like the eCommerce business units of old


30 Year Affair with Pen Computing

I’ve had a 30-year affair with pen computing technology, although good handwriting recognition has always eluded me. And now another generation of device comes along to tempt me…

I have always had a keen interest in pen computing (or even a passion, perhaps, in the modern way), all the way from the heady days of the first release of Windows for Pen Computing, 30 years ago. I’ve indulged myself over the years with various devices and the timeline looks like this…

30 years of my life in pen computing devices

You can see some patterns in the evolution of the technology across the years, with the pace of new device availability increasing in the past 5 or so years:

  • the long development of Windows PC-based pen technology, from the first steps with the TriGem Pen386SX designed by Eden Group, through XP Tablet edition, to Windows 10 / 11, which actually mostly works as an integrated experience
  • a cluster of “write-only” pen devices, the “write-only” characteristic making them mostly rubbish, although the Anoto-paper version of Filofax that came with the Nokia SU-1B was a lovely bit of leather
  • some pen-enabled screens which offer the joy of scribbling on a virtual Whiteboard or shared PowerPoint whilst on a Teams call at my desk
  • various Android tablets and E-Ink e-reader crossovers, always rather disappointing that they don’t do either job very well
  • the nirvana of dedicated writing tablets, exemplified by the Remarkable Tablet and the now discontinued Sony Digital Paper

I actually visited Eden Group in their chapel in Rainow, Cheshire, back in ’92 and developed a small Visual Basic demo app (a doctor’s Ward Round) for their pen computer, which looked like this (courtesy of an archived edition of Byte Magazine):

Eden Group designed TriGem Pen386SX

It was pretty slow by modern standards, with its mighty 20MHz Intel 386SX processor, 4MB of RAM, 4MB of flash memory, and some PCMCIA slots (hands up who remembers those), but it did work and ran the demo, which looked cool (of course).

The thing that has always eluded me, however, is fulfillment of the promise of scrawling great thoughts with my pen; then having that transcribed into a perfect machine-readable digital rendering that you can then also file and search in some useful way.

The path to that destination has been rocky and unsatisfying. For example, the Nokia SU-1B had a transcription service that came with it. It was very poor: you wrote notes in the leather Filofax diary and the software turned those into complete garbage that it carefully wrote into the corresponding slots in your Outlook calendar. So sad.

Even the Remarkable, which is indeed remarkable, does a pretty average job of transcribing my handwriting. Although it is probably more of a case of “no, it’s not you, it’s me” due to my outstandingly bad calligraphy and poor penmanship. They tried to make me write better at school with handwriting lessons and a big fountain pen, but all to no avail, as it still looks like this…

The Remarkable makes a fist of transcribing that and comes out with this pithy screed…

Here is an example of my herd wily converted to hold the Remarkable tablet is vey goal as a paper replaced but is defeated my hard way when it comes to conversion to tend

As transcribed by Remarkable

To add some spice as I am writing this post, and as always to learn something new, I briefly tested the accuracy of some transcription systems using my handwriting sample, measuring the Word Error Rate (using Amberscript). The results are not encouraging…

System                                                      Word Error Rate (%)
Google Lens                                                 24%
Windows 11 / Office                                         26%
Remarkable Tablet                                           30%
Pen to Print (Android App)                                  36%
Transkribus Lite (“Where AI meets historical documents”)    76%

Handwriting Transcription Word Error Rates (WER) by system

Whilst Microsoft and Google managed to get about 75% of the words right, with Remarkable coming in third, the other solutions are just worse. So, for me, automated handwriting transcription is largely a pipedream.
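As an aside, WER is simple to compute yourself: it is the word-level edit distance between the reference text and the hypothesis (the transcription), divided by the number of words in the reference. A minimal sketch in Python, my own illustration rather than the method any of the tools above use:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# "sat" -> "sit" (substitution) plus a dropped "the" (deletion): 2 errors / 6 words
print(round(wer("the cat sat on the mat", "the cat sit on mat"), 2))  # prints 0.33
```

Note that WER can exceed 100% if the transcriber hallucinates extra words, which, given my handwriting, is entirely plausible.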

In fact, by far the best system ever for recognizing my handwriting is Roger Hill V1.0. Over the years, Roger has painstakingly transcribed my rocket surgeon chicken scratch to create great looking PowerPoint pages, with a WER of probably 1%!

This is Roger, from his LinkedIn profile picture which has not updated since about 1997, I think…Kudos, Roger!

Roger Hill V1.0 – the best handwriting transcription system in the world

And so, now, to a taster of the new generation of colour e-reader/pen devices, the one in my hand is the Boox Nova Air C. The idea of a colour eReader / handwriting device is a major move on from the previous generation of monochrome renderings.

Obviously, you can get colour handwriting and drawing on a mainstream Android tablet like the Samsung Galaxy Tab, but the screen is too smooth and slippery, so the writing/drawing experience is not good. The Boox Nova Air C, like the other devices from the same maker, combines a Wacom stylus, an Android 11 system and a Kaleido Plus colour screen (actually the colour is a layer on top of the monochrome eInk).

Boox Nova Air C

Sadly, the rather pastel colours are just underwhelming and really seem just a gimmick. Also, whilst the primary handwriting-optimized apps give a good experience, the standard Android apps (like Office, etc.) have a very laggy response to the pen, which makes them unusable, and video is not a good idea on eInk. It could replace my Kindle Oasis, which is losing its battery life, but it would not replace either the Remarkable or the Samsung Galaxy Tab.

Roll on Remarkable 3?


SaaS deployment creates cloudy cost conundrum

Technology cost optimisation has never been easy, but now there are devils lurking in the decisions driven by SaaS deployments…

Back in the day, the traditional layered structure was one of the organising principles for IT architecture, and was probably alright in the 90s…

Traditional layered information technology architecture

That manifested in an IT application and hosting infrastructure mirroring the layering with data centres full of servers to host the apps and a veritable throng of IT Operations people to run it all…

How Enterprise IT used to look

In terms of cost optimisation, a key strategic enterprise consideration was the tension between vertical specialization and horizontal standardization and cost consolidation, with some simple tradeoffs…

Old view of technology architecture and cost tradeoffs – a simple dilemma of horizontal and vertical

In this simple world, the dilemma was whether you allow business units to have their heads and anything they want in the application space, or standardize and consolidate applications and infrastructure to get the volume and standardisation benefits. Of course, the decision varied depending on the application type. For example, with specialised customer-facing apps the business might be beholden to the requirements needed to deliver the best customer experience and competitive advantage. Contrast that with back-office applications, where there is no sense in every business unit having a different finance or HR system; that complexity just creates non-value-adding cost, so standardization wins out. The “tin & iron” infrastructure was simple too, as it all ran in the company’s data centres or, in more exotic cases, in outsourced suppliers’ locations.

So a simple life, with easy decisions…yeah…

But then SaaS came along and just busted out of the joint, breaking up the simple layered model and disturbing the simple inward-looking contemplations of the various IT decisions…

SaaS busts out!

The consequence of the bust-out is a substantive reshaping of the enterprise technology landscape for many companies – turning the old layers sideways and abolishing some treasured assumptions. In doing so, the new enduring technology organisations that run that new shape (should) also get smaller, since they no longer need the people to manage the “tin” (and, for those following the story above, the “iron”)…

SaaS “hollows out” traditional Enterprise technology environment

The “hollowing-out” of the traditional IT infrastructure with separate SaaS services fragments the overall technology cost base, for example by the separation and vertical integration of hosting costs into the individual SaaS towers. This pushes against some of the more traditional levers of infrastructure standardization and creates some new tradeoffs to consider.

Architecturally speaking, under the covers the actual composition of the SaaS towers is a heterogeneous mixture of parts, albeit with some dominant patterns of deployment. SaaS vendors may host their own services, but often enough they will use commodity IaaS cloud services, e.g., Genesys Cloud and Workday both run on AWS, as does Salesforce, which also has self-hosted services. Equally, in reverse, Microsoft, who do host their own SaaS services, offer an on-premise Office Online Server for those organisations who need to hold the data themselves, maybe for data residency/sovereignty/security reasons, but also want the reflection of Microsoft 365 online services in their lives.

This admixture of potential solutions creates a trilemma with many more tradeoffs to balance between the different dimensions of the service and cost optimization equation, thus…

Technology architecture and cost tradeoffs – the new trilemma

So, it is a multi-dimensional problem to consider, with the application architecture decisions having a considerable impact on the TCO of the resulting technology environment. Compared to the simple days of moving lots of ancient “tin” (and, of course, “iron”) to a shiny virtualized cloud infrastructure, moving to cloud (whether SaaS or IaaS/PaaS) is not these days a slam-dunk guarantee of optimal costs; it depends very much on the transformation journey: where you are coming from, where you are heading to, and how tightly managed that new place is.

To be sure, moving from an on-premises landscape with, for example, a multitudinous miscellany of disparate finance systems to a harmonised online SaaS ERP system should generate some operational cost savings, as well as business benefits from process standardisation and shared services. There may or may not be associated benefits in the hosting infrastructure, depending on the existing level of consolidation and virtualisation of the physical assets and support services. On a like-for-like basis, whilst the internal IT Operations labour costs should go down, it is possible for the (non-transparent) embedded hardware and related costs to go up. So you still need to “follow the money” by looking at the wider TCO benefits or disbenefits, as well as just getting excited about the shiny new toys.
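The “follow the money” point can be made concrete with a toy like-for-like comparison. All the numbers below are hypothetical, purely to illustrate how a fall in IT Operations labour can be partly offset by the hosting costs embedded in a subscription:

```python
# Hypothetical annual cost lines (in £k) for one workload.
# Purely illustrative assumptions, not benchmark data.
def tco(costs: dict) -> int:
    """Total cost of ownership: the simple sum of all visible cost lines."""
    return sum(costs.values())

on_prem = {"licences": 400, "it_ops_labour": 900, "hosting_hardware": 350, "support": 250}
saas = {"subscription": 1100, "it_ops_labour": 300, "hosting_hardware": 0, "support": 150}

print(f"on-prem TCO: {tco(on_prem)}")  # prints 1900
print(f"SaaS TCO:    {tco(saas)}")     # prints 1550
```

Even in this toy example the outcome hinges on how much embedded hosting cost the subscription carries: had the hypothetical subscription been 1,500 rather than 1,100, the move would be TCO-negative despite the labour reduction, which is exactly the devil lurking in the decision.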

Sourcing the right solution, both software and implementation partner together, is crucial, with a guiding commercial-technical design considering the target operating model, business and technology architecture, commercial and TCO levers, and beyond that the potential for business operating model transformation to deliver the goods.

Beware the devils…


In furtherance of technology understanding, I used the DALL-E AI to generate the featured image at the head of this post. Here are some of the other ideas it came up with…


Non-linear what?

It is fashionable to concoct neologisms to try and make other people think the neologisers are somehow just so much smarter. Sometimes the new terms created are actually meaningful, and sometimes they are just Deepak Chopra-style nonsense B*S. Non-linearity is one of those tag phrases that has been dragged kicking and screaming out of the world of mathematics and physics, with mixed results.

Non-linearity has a sort of smart feel about it: linear is simple, straightforward and actually only quite a small part of the universe; non-linear is quirky, eccentric, a bit edgy, and pretty well most of the universe.

You can google for quite a few things that you might expect to be non-linear, like…

  • “non-linear algebra”
  • “non-linear dynamics”
  • “non-linear control theory”

…and a whole lot more things that you wouldn’t necessarily…

  • “non-linear thinking”
  • “non-linear innovation”
  • “non-linear people”
  • “non-linear social network”
  • “non-linear politics”
  • “non-linear justice”
  • “non-linear economy”
  • “non-linear clothes”

Apparently non-linearity is a thing in physical architecture now…

The so-called nonlinear architectural design is the thing that using the essence of architectural complex as a starting point we get multiple factors affecting buildings through analyzing, which we organize through the parametric model by reasonable logic in designing, and finally use the computer to create complex forms according with the requirement of architectural complexity.

The Realization of Nonlinear Architectural on the Parametric Model – MinWu, Zhiliang Ma, 2012

No, I don’t know what it means either: apparently part of the architectural process these days is to take photos of stuff or maybe existing artwork, stick that in Photoshop and trace out something that then looks like a Dali-esque bad dream or a bad acid trip.

You can see it in buildings like the Bella Sky Hotel in Copenhagen, which is quite dramatic…

Bella Sky Hotel, Copenhagen
lglazier618, CC BY 2.0 https://creativecommons.org/licenses/by/2.0, via Wikimedia Commons

In the Bella Sky there is a distinct sacrifice of function to fashion in the large amount of steel that you have to walk around in the rooms. It is, however, very democratically configured in diagonal form, so anybody of any height can bash their brains out…

Bella Sky hotel room – with metal stanchions…
Just as an aside, the restaurant "Basalt" in the Bella Sky is a total hipster joint, offering "food from the bonfire", basically a small selection of hard nodular charred black vegetables, with a bit of sooty meat and some flavoured smoke 🤯

Back to something more like the real world of the supply chain, the old-world view is that of the linear supply chain like this…

Linear Supply Chain

However, non-linearity has caught up with that too: google “non-linear supply chain” and the most obvious examples are current visualizations of the circular economy, where the Worm Ouroboros eats its own tail through recycling.

Circular Supply Chain

Deloitte made a rather gruesome rendering of the future state in their “Supply Chains and Value Webs” story where the world ends up as an entropic collection of blobs in a bunch of value webs, which looks a bit like this…

Supply Chain Mesh
Consultants are often very good at coming up with ideas so hi'falutin' and analytically dense that they cannot be handled by mere mortals. A world built of that value web complexity obviously needs a load of consultants to help make sense of it.
Also on the topic of complexity, I recall once seeing an IT systems architectural concept of "white spaces", which manifested as a spreadsheet mapping out the intersections between different systems. It was ultimately a very sparse matrix of, guess what, mostly interstitial white space, and didn't really do anything to simplify the situation.
“White Spaces” – more gaps than glue…

From an engineering perspective, and also for the sanity of us mere mortals, the sort of tangled mesh represented by “value webs” is not sustainable and a more rational structure is essential. So you have to draw the threads together and create some level of coordination, an organising principle which supports the non-linear structure but glues it all together, thus…

Non-linear Supply Chain
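As a purely illustrative aside, the three shapes above can be sketched as tiny adjacency lists; a minimal sketch where all the node names are made up, there only to show how the shapes differ structurally:

```python
# Purely illustrative graph shapes for the supply-chain structures discussed
# above, as adjacency lists (node -> list of downstream nodes).

# Old-world linear chain: a straight line from supplier to consumer.
linear = {
    "supplier": ["manufacturer"],
    "manufacturer": ["distributor"],
    "distributor": ["retailer"],
    "retailer": ["consumer"],
    "consumer": [],
}

# Circular economy: the consumer feeds a recycler, which feeds the supplier --
# the Worm Ouroboros eating its own tail.
circular = dict(linear, consumer=["recycler"], recycler=["supplier"])

# Non-linear with an organising principle: every party connects through a
# central digital eco-system hub rather than an entropic mesh of blobs.
parties = ["supplier", "manufacturer", "distributor", "retailer", "consumer"]
hub = {"eco_system": parties, **{p: ["eco_system"] for p in parties}}

def edge_count(graph: dict) -> int:
    """Number of directed links in the structure."""
    return sum(len(targets) for targets in graph.values())

print(edge_count(linear), edge_count(circular), edge_count(hub))
```

The point of the hub shape in graph terms: a full mesh of n parties needs n×(n−1) directed links (20 for the five parties here), while routing everything through a coordinating hub needs only 2n (10), which is the “organising principle” doing its gluing.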

These are the types of “non-linear supply chain” model that are currently tagged with “Digital Supply Chain” or “Industry 4.0” monikers to freshen them up and make them seem modern. However, there is nothing new under the sun, of course, and these are actually retreads of ideas from 2000 and before (e.g., the CPFR concept of the 90s). I still have a slide on this topic that I used in 2001!

In software terms, and back to the start of this story, that structure with the central “digital eco-system” might be classed as a “non-linear software architecture”. Oddly enough, when I googled that term, I came up with no results at all!

“Non-linear Software Architecture” was there none…

So maybe I can claim to have coined that term myself? Whichever way, when people start talking about “non-linear software architecture”, remember you heard it here first!
