Earnback’s a botch

Earnback of service credits in technology service agreements is an absurd concept that should be rooted out of contracts

Service credits are commonly used in technology service agreements to compensate clients for a service provider’s failure to meet a committed level of service. Whilst service credit mechanisms can range from simple to byzantine, earnback of service credits is one of the more egregious pieces of nonsense you will find in a service level agreement.

To take a step back, there is a social concept that you can somehow repay your bad actions to society by doing good works. It is a form of utilitarian philosophical thinking where the greater good can outweigh the bad; perhaps industrialised in the sale of indulgences back in Medieval times. Conceptually, a murderer can go to prison for many years to “pay their debt to society”, but they still killed somebody, nevertheless. So the concept rests on some possibly dubious principles and does not always survive robust scrutiny. Indeed, as is often the case with socio-philosophical inventions like this, there are conditions under which the simple equations fail, and the answer they produce is nonsense.

As an aside, and I hesitate to bring it up, but if you want to test boundary conditions and how a rule, social or otherwise, works or doesn't work in extremis, then try applying the Jimmy Savile Test (OK, yuck). If you don't know, Jimmy Savile was a uniformly horrible person who curried favour with rich and famous people in the high strata of society, with his apparent "good works" as a cover for his obscene behaviour.
The test is this:  if somebody proposes a generic rule then you think to yourself "what about Jimmy Savile?", and if the answer is "Euw!", "No way!", "Disgusting!", then the rule is probably a bust.

Back to earnback, which, in the context of a commercial service agreement, means the service provider doing some good works to overcome below-par performance at some point in time. Those good works then exempt them from paying compensation to the client which would otherwise be payable for the poor performance. Timing-wise, the performance bump can be ex-ante, where previous good works put money in the bank to credit against claims, or ex-post, where the good works occur after the default. In general, it is all about the push and shove of risk transfer between client and service provider; earnback pushes risk back to the client.

As a principle, earnback of sorts could make sense for a development activity or a manufacturing piecework process creating widgets. There, a poor rate of production in one month can be offset by increased output in later months, so that the same expected pile of widgets is created over the whole period, meeting the expected weighted average production rate.
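To put rough numbers on that (my own illustrative figures, not from any real contract), only the cumulative shortfall at the end of the period would attract a penalty:

```python
# Toy illustration of the widget-factory logic: a monthly production
# target where a shortfall in one month can be offset by over-production
# in later months, so only the period-end cumulative shortfall matters.
def cumulative_shortfall(monthly_target, actual_output):
    """Units still owed at the end of the period (never negative)."""
    return max(0, monthly_target * len(actual_output) - sum(actual_output))

# Under-produced in month 1, made it up later: nothing owed.
print(cumulative_shortfall(100, [80, 110, 110]))  # -> 0

# Under-produced and never caught up: 20 widgets short.
print(cumulative_shortfall(100, [80, 100, 100]))  # -> 20
```

In a piecework world this averaging is defensible, because the widgets pile up and are interchangeable; the point of the rest of this piece is that service delivery has no such pile.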

However, it makes little sense in an operating service where the service delivered is of its moment and the experience is transient. In the service world, a dog, as they say, is only as good as its last trick – not an average of its good and not-so-good tricks. Sometimes it makes even less sense…

Consider this typical earnback clause that you might find in a service agreement…

Typical earnback clause in a service agreement

This says what it says which is that if the service provider behaves for three months following a service level default then they can play their “get out of jail free” card and not pay the service credit for their default. Graphically, it looks like this…

The absurdity of earnback – thanks for nothing!

The story is: the service provider commits to deliver to a service level in the contract. They then default on that service level in Month N, and proceed to meet the service level in Months N+1 to N+3.

The earnback clause magically absolves them of paying for the default…by doing the job they are already being paid to do. This is patently absurd: why does doing that job, the same as at any other time, exempt them from paying the compensation? Thanks for nothing!
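As a sketch of the mechanics (the credit amount and the three-month earnback window here are my illustrative assumptions, not from any particular contract):

```python
# Sketch of a typical earnback clause: a defaulted month's service
# credit is waived ("earned back") if the service level is then met in
# each of the following `earnback_window` months.
def credits_payable(sla_met, credit=10_000, earnback_window=3):
    """Return (month_index, credit) pairs still payable after earnback.

    sla_met: list of booleans, one per month; True = service level met.
    """
    payable = []
    for month, met in enumerate(sla_met):
        if met:
            continue
        window = sla_met[month + 1:month + 1 + earnback_window]
        earned_back = len(window) == earnback_window and all(window)
        if not earned_back:
            payable.append((month, credit))
    return payable

# The month-1 default is earned back by months 2-4; the month-5 and
# month-6 defaults are not, so two credits remain payable.
months = [True, False, True, True, True, False, False]
print(credits_payable(months))  # -> [(5, 10000), (6, 10000)]
```

Note the perverse incentive the sketch makes plain: the first default costs the provider nothing provided they simply do their job for the next three months.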

In the fetid atmosphere of the negotiating rooms when these types of clause were first invented (maybe the 1980s?) it probably all seemed to be a very clever manipulation of risk and incentives. You can see other examples of “cleverness” where complex mechanisms have been designed that don’t make sense in the cold light of today (e.g., service level escalator ratchets which don’t work in a multi-vendor SIAM setup).

In the hierarchy of performance metrics many of the older concepts like earnback of service credits fiddle in the bottom tier of technical performance measures whilst user experience burns: the client suffering an “All-Green Hell” SLA dashboard for a service that users hate passionately. Better to focus design efforts on performance regimes further up the pyramid…

Level of user interest and understanding of key performance metrics

So, root out earnback from your service agreements, let the service provider make its penance at the time of default and move on!

Beware the Low-Code IPR Iceberg

Low-code app development is very popular now, but you need to ensure the commercial benefits of your titanic new app do not founder on the low-code intellectual property rights iceberg

Low-code and no-code are very popular, with a range of promises around rapid development of engaging apps with little effort and the “democratization” of app development out of “Big IT” and into the hands of citizen developers.

Of course, low-code is not new at all, and was largely pioneered in the fourth-generation languages (4GLs) which peaked in the 1980s and 90s, with GUI (and even sometimes WYSIWYG) systems like PowerBuilder, Pro-IV, Omnis, StaffWare, and even Microsoft Access.

As it happens, for a period in the 80s I ran the team that supported PRO-IV after its acquisition by McDonnell Douglas Information Systems. Being written in C, PRO-IV was ported to more or less anything that had a C compiler. That was not always a good thing, especially for the IBM AS/400, which had a toy C compiler that produced incredibly slow code (for the technical amongst you, that was because every function call was generated as an inter-segment CALL, which is about 1,000 times slower than an intra-segment CALL and so a baaaad thing to do).

As I may have mentioned before I am a big fan of GUI-based WYSIWYG visual software design, with a simple philosophy…

Visual good, green screen bad (with apologies to George Orwell)

As most low-code designers are generally visual in nature, from a philosophical perspective they come under the “good” category. However, there are some potentially significant commercial pitfalls of which you need to take account.

Before digging into that, we first need to look at the main components that make up an “app”. Whilst it is a rather corny cliché, the iceberg motif is quite apt here, as there is more of an app that you don’t see than you do, especially for low-code. The parts under the waves beneath “The App” are the data and meta-data, supporting environment, execution engine and the infrastructure on which it all runs, thus…

There’s more IPR hidden below the waterline in the Low-Code IPR iceberg

Not surprisingly, there are some significant commercial attributes to all those parts, not least who owns which parts. This raises the concern that whilst you might own some intellectual property rights (depending on your lawyers), due to the complexities of the other parts of the app construction, you may not own anything that you could actually take away as a useful asset.

Indeed, there are a number of factors to think about when asking yourself the question – who really owns my app?

Who really owns my app?

Considering those factors:

  • IPR ownership. This is the obvious one that the lawyers focus on covering business trade secrets, copyright, patents and so on.
    • How much of the total app IPR do you own?
  • Portability. A key part of switching is taking your toys away and playing your game somewhere else for your own advantage.
    • Can you extract any useful description and source code for the app, preferably in a commonly recognised format?
  • Third Party Access. This covers the scope of who can touch your app for development, support and general usage. (This is a historical trap for outsourcing IT services)
    • Can third parties modify and support your app code, and other parties actually use the app?
  • Licensing. This covers how the various parts that you don’t actually own are licensed to you and your associates and affiliates, and for how long that licence actually lasts.
    • What is the scope, time period and other attributes of the licence given?
  • Run-time costs. This covers the costs associated with deploying and using your app which may include or exclude the infrastructure costs depending on the application and low-code service construction.
    • What is the on-going pricing for deployment and use of the app, and what happens when you stop paying?
  • Supplier Continuity. This covers the longevity of the supplier running your app and what happens if/when they go bust. In the past that was handled by a simple escrow clause, but that is becoming a much less tenable proposition in the SaaS world. In the worst case a supplier will cease to exist, their servers go offline, your app is gone, and any useful IP becomes “bona vacantia” owned by the Crown (in the UK, at least; other bad outcomes are available).
    • What happens to your app in the event of supplier failure?

Putting those together, you might own the whole kit and caboodle of your app, which is less than likely for low-code, although it may still be the case for older 3GL-based apps; or you might really own nothing useful at all, just some scraps that somebody generously lets you use in exchange for some of your hard-earned cash.

You can map out the extremes of the commercial lock-in that is created by these sorts of considerations against the handy iceberg layers, thus…

Commercial lock-in potential

Most low-code systems exhibit some of the features on the left-hand side of the iceberg, and you can do some rough clustering of current low-code systems accordingly…

Examples of different low-code systems by degree of commercial lock-in
  • SaaS Platforms with low-code extensibility. These might not be considered “low-code” by purists in the fullest sense, but often exhibit some of the technical features. They typically have the highest level of lock-in, as you are wedded to the mass of application functionality and adding customization around the side. The system is hosted by the supplier, and you generally pay for most/all of the users, with some significant restrictions around the licensing and usage. The app “code” is not portable, and when the supplier dies or you stop paying for access, your app dies too.
  • SaaS-like Low-Code systems, hosted by supplier. These provide low-code features but are locked to the supplier’s systems and infrastructure, with restrictive “take it or leave it” licensing, and again you pay for most/all users who touch the app. Again the app “code” is not portable, and when the supplier dies or you stop paying for access, your app dies too.
  • Enterprise Low-Code systems, with choice of infrastructure deployment. Whilst these are like the previous group, they start to open up the sealed world by giving options to deploy your app on different underlying cloud IaaS, or even on-premise. They may also use some open-source components for the deployment tools and landing zones (e.g., Docker, Kubernetes, etc.). However, the apps themselves have relatively low portability, even if they run in a more open environment. These types of systems are often targeted at Enterprise clients who have multi-cloud strategies. No surprise, they also therefore carry an “Enterprise” price tag and may still have supplier-imposed time-based limitations on use, access and so on.
  • Code-generator Low-Code systems, deploy anywhere. The last group has the lowest level of lock-in and typically generates standard 3GL code, like Java or PHP. In their freest form, you only pay for the development tools, and the run-time is royalty-free with no application usage run-time costs. Since they generate code, the apps are relatively portable, although the generated code may not be pretty in a human sense. They also effectively have perpetual life, with no supplier-imposed time limits, unlike the other three categories. More locked-in versions will have run-time charges.

Low-code is definitely a “good thing”, but you do need to go into it with open eyes and understand how the shiny promise of speedy development with high investment efficiency can be eroded if you don’t take the commercial realities into account…

The promise of low-code can be seriously eroded by the commercial realities

…and your app founders on the low-code IPR iceberg and its commercial case sinks below the waves.

Category Error

Whilst categories are a good organising principle for procurement, for technology, at least, it can lead to siloed thinking that misses bigger transformational opportunities...

One of the challenges of functionally aligned organisation structures is that you sometimes get different parts of the business working in different ways, aligned to different objectives and speaking different languages. This is quite common between Technology (be it Digital, IT, OT or whatever) and Procurement. Each group can be suspicious of, or lacking respect for, the other, and the schism can be so bad that Technology has its own procurement team because Group Procurement “just don’t get it”. Equally, Procurement feel aggrieved because they are engaged too late, handed the commercials in a hospital pass hamstrung by technical lock-in, and otherwise treated as low-status order-placing “water boys”.

A classic area of speaking in different tongues is how you group the things you buy and deploy. Technology people may think of Services, which are built from a multiplicity of components of different types: software, hardware, Cloudy parts, people, external services and so on. (Aside: the people element is quite special here, as it can be actual internal headcount rather than a third-party component, and so invisible in traditional AP-based spend cubes.)

All those Service parts have different buying characteristics, whereas Procurement think about Categories, which are things that have similar buying characteristics and approaches.

Whilst that can undoubtedly make sense for commodity items, it starts to fall apart the more specialised the entities in the categories become. Software, for example, is a hard category as it is not particularly susceptible to procurement amalgamation, even if there are some common processes like licence management. Ten different specialist software products from different authors in a high-level category cannot be amalgamated, only switched/substituted or eliminated – which are business/technology decisions, not procurement ones, as such. So the high-level category is really still ten little categories…

The upshot is you can literally have people thinking and talking at cross-purposes – Services vs Categories, thus…

Talking at cross-purposes – Services vs Categories

There are perfectly valid reasons for both views but sometimes you need to lead with one rather than the other. This is particularly true when considering how to optimise costs, which can fall into three paths…

Cardboard Images: Andrea Crisante, Koya79 | Dreamstime.com; chwatson | Free3D

Business-as-usual Category Management won’t change the pile of cardboard boxes of your category, maybe just organise the contents a bit better; Category Sourcing might tidy up the boxes a bit. However, both fundamentally limit the scope of opportunity that unfolds. Therefore, to unlock the bigger-ticket opportunities and build the castle of your dreams, you have to look across categories…

Examples of cross-category change and transformation

The cross-category opportunities don’t have to be the mega-sized reshaping of Digitalisation, BPO, technology outsourcing or even the technology-switching wizardry of Cloud migration and the like; they can be quite mundane. A good example is that of Technology contractors working in staff augmentation roles on daily rates. These people are often managed in an HR category where somebody has thoughtfully negotiated a single-source deal with a contract resource management company and their fancy resource management platform (you know the names).

However, these typically expensive on-shore contractor creatures should be factored away into managed services run by technology partners, delivered from cost-effective locations wherever on the globe those may be.

But whilst the headcount is locked into the HR category, with spend stuck in the wrong bucket and savings counted against “their” savings target, that doesn’t happen, and the bad behaviour of buying expensive unstructured resource is institutionalized and systematized. That is indeed a “category error”!

The necessary solution is to allow opportunity assessments, and the commercial stages that follow, to break out of the category strait-jacket and think holistically about the business, its technology underpinnings and how it can be transformed for the better (and for lower unit cost).

The starter for ten on that is to align the complementary roles of Technology and Procurement across the business service lifecycle to provide mutual support and grasp the larger opportunities, like this…

Aligning complementary Technology and Procurement roles across the business service lifecycle

And so it goes…

Digital, Phygital, Fiddlesticks

Digital is a rather abused term that has been round the block a few times, and now we have “Phygital”, which is a load of bull…

I was prompted to think about the meaning of “Digital” recently by the unlikely conjunction of two disparate events, viz:

The first is a great step forward for a brand that has up to now been firmly “bricks and mortar”, and the second is apparently something “phygital” with the incursion of technology into actual clothing for reasons.

I get the commercial consumer-driven logic of the first, but the second is somewhat more puzzling and perplexing. However, I don’t really care about clothing and fashion, so it is a market logic that I would have to work hard to understand; we’ll see how that business model succeeds over time.

Anyway, it set me thinking about words…

Digital has been around for many years, but “phygital” is a much more recently coined term, attributed to Chris Weil, Chairman of Momentum Worldwide, in 2007 (Thanks, Chris), picking up momentum c.2017. You can look at the frequency of some key technology terms in Google NGram Viewer…

NGram frequency of key technology terms by year

PCs were obviously quite a thing back in 1985 and also gave mainframes a little bump at the same time too. I tried “minicomputer”, but that barely features in this scaling, so apparently was not something that people talked about so much back then. Whilst departmental computing was a big wave of change versus mainframe in the 1970s and 80s, it was only in the business domain and so general awareness and interest was lower, I suppose.

Web and Internet were clearly also big talking points in 2000-ish, and beat down the Microcomputer Revolution in volume. But throughout you can see “Digital” growing steadily until it has actually overtaken what were the leaders, “Web” and “Internet”, with Web taking a sudden down-turn.

Most of the other newer terms like “AI”, “blockchain” and “metaverse” still bumble around at the bottom of awareness at this scale, so were not hitting it by the current 2019 end date of the NGram corpora. “Fintech” is also a relatively low scorer, even though it has now spawned a constellation of new digital “<ANYthing>Tech” neologisms (like “InsureTech”, “PropTech”, “FemTech”, “EdTech”, “LegalTech”, “FoodTech”, “AgriTech” and so on). These are also probably more business-vertical specific than broad-based, so don’t get the volume of attention.

And don’t bother looking for “phygital” which also dribbles along the bottom of the chart if you add it to the query.

Before around 2015, “Digital” used to mean stuff related to computers generally. However, from then onwards it started to acquire jazzy new meanings related to exciting things like customer experience, digital marketing, mobile apps and otherwise being a “Digital” business, and with “digitalisation”, the process of becoming that thing. McKinsey had a go at defining it which you can read at your leisure.

What got lost is that many businesses have been digital for years and that technology rubbed up against the real world in many places, often not so glamorous: in manufacturing, supply chains, vending machines, door locks in hotels, the kitchen systems at KFC…

To get to grips with this you can draw up a simple gameboard that maps out business typology against its manifestation.

Business classification – Typology vs manifestation

The business typology separates the places (“venues”) where people interact (e.g., actually trade, or just get together and interact to do people stuff, like throwing sheep) from the actual trading businesses themselves, i.e., those that generally exchange some value for some thing or benefit. These can be actual products, services and money, but in the wider context could also be social kudos, environmental benefit or other non-monetary value. For these purposes, broker-type businesses fit in the “trading” slot as they facilitate other peoples’ trading.

By the way, for the bankers reading this, we shall deliberately ignore where the trading transactions (financial, social, emotional, environmental, or otherwise) are cleared and "payments" handled, let's keep things simple for the purpose of this treatise.  

The manifestation dimension separates the real from the non-real. Physical covers what you expect (to be construed according to context, as the lawyers say): buildings made of straw, sticks and bricks in actual geographic locations, or cars, or books made of paper. The virtual covers everything that isn’t that – a nicely mutually exclusive definition. So it can include virtual assets like photos, videos, software and financial products, and virtual businesses that provide places for people to connect and trade.

You can map out some businesses onto the landscape to see how the Pickup Sticks fall.

Digital business classification – some examples

What you can see (obviously) is that those which fall into the virtual column are heavily technology-based (indeed, by construction, since we have selected this to exclude ectoplasmic spirit-world businesses, wyverns, harpies, vampires, magic wand shops and other virtual manifestations of a more mystical sort). Whilst some of the virtual venues like Facebook support virtual interactions, a virtual platform like Uber facilitates real-world transactions between car drivers and their passengers. And Utility Warehouse is a virtual business that, loosely speaking, brokers people-energy trading.
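For what it’s worth, the gameboard reduces to a tiny lookup of typology against manifestation; the placements below are simply my own reading of the examples just discussed:

```python
# The 2x2 gameboard as a lookup: typology ("venue" or "trading") against
# manifestation ("physical" or "virtual"). Placements are illustrative
# readings of the examples in the text, not definitive classifications.
gameboard = {
    "Facebook":               ("venue",   "virtual"),   # virtual interactions
    "Uber":                   ("trading", "virtual"),   # brokers real-world trips
    "Utility Warehouse":      ("trading", "virtual"),   # brokers people-energy trading
    "Louth Livestock Market": ("venue",   "physical"),  # real animals, open outcry
}

# The virtual column is where the heavily technology-based players sit.
virtual_players = sorted(name for name, (_, m) in gameboard.items() if m == "virtual")
print(virtual_players)
```

The point of the lookup is that typology and manifestation are independent axes: being a venue tells you nothing about being virtual, and vice versa.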

In this classification, the Metaverse is just another venue, and it could yet be a three-star Michelin restaurant experience or just a greasy spoon, as we shall see. But like the financial exchanges of today, the venues (exchanges) make a dribble of money in comparison with the eye-watering value that flows in the trades they facilitate. It’s largely what you do that makes the money, rather than where you do it (whether you have Meta-legs or not…).

The caveat to that is that a business with a captive supply base, and monopolistic channel control, like the Apple App store, can make shed-loads of money at its 30% transaction tax. Similarly, Facebook as a venue makes lots of money by selling access to its users for advertisers compared to the unfathomable value of the social interactions that take place upon it.

The key point here is that the businesses in the right-hand Physical columns also use technology, and often extensively, although not so visible to the untuned eye. Even the Louth Livestock Market, a very physical place with real farm animals and open outcry selling round the ring, also has a website and online auction trading. In other words, they are Digital businesses too.

So Digital is embedded in both Physical and Virtual manifestations and forms a solid and critical substrate on which almost all businesses run today. Like a seam of gold running through quartz…

Digital substrate embedded in most businesses

What does a “Digital” business actually look like these days? Well, it would undoubtedly include, internally, solid chunks of systems for Customer, Product & Operations and Performance & Control, and externally, multiple channels, non-linear supply chains and the like. But that is a story for another day.

We used to see businesses sprout silo’d business units separate from the mainstream and built on electronic channels (oh yes, Digital channels) back in the early 2000s. This is less xenogenesis to birth something new and quite unlike its parent, than it is temporary firewalling to incubate a new way of doing things in the same business. Consequently, these offshoots have long been absorbed back into mainstream business models as they matured.

Many businesses have been omni-channel for years; it is no longer a rocket-scientist-level insight to suggest that, for example, you should have common stock management between an online store and a physical shop. However, the wave of reworked “Digital” businesses in the last 5-7 years regurgitated the concept as something new, when indeed it is not.

The upshot of all this is that the newer Virtual businesses were called Digital by their over-enthusiastic and imprecise evangelists, in thrall to a form of cognitive bias, and so Virtual has been confused with Digital. This created the misbegotten conflation of two terms to describe an omni-channel experience across Physical and Virtual.

So we got “Phygital”. However, Digital embraces both Virtual and Physical, so “Phygital” should really be “Phyrtual”, or “Virtical”, or some other bull.

Digital is perfectly good…we don’t need Phygital, let it wither and die, like the eCommerce business units of old

30 Year Affair with Pen Computing

I’ve had a 30-year affair with pen computing technology, although good handwriting recognition has always eluded me. And now another generation of device comes along to tempt me…

I have always had a keen interest in pen computing (or even a passion, perhaps, in the modern way), all the way from the heady days of the first release of Windows for Pen Computing, 30 years ago. I’ve indulged myself over the years with various devices and the timeline looks like this…

30 years of my life in pen computing devices

You can see some patterns in the evolution of the technology across the years, with the pace of new device availability increasing in the past 5 or so years:

  • the long development of Windows PC-based pen technology, from the first steps with the TriGem Pen386SX designed by Eden Group, through XP Tablet edition, to Windows 10 / 11, which actually mostly works as an integrated experience
  • a cluster of “write-only” pen devices, the “write-only” characteristic making them mostly rubbish, although the Anoto-paper version of Filofax that came with the Nokia SU-1B was a lovely bit of leather
  • some pen-enabled screens which offer the joy of scribbling on a virtual Whiteboard or shared PowerPoint whilst on a Teams call at my desk
  • various Android tablets and E-Ink e-reader crossovers, always rather disappointing that they don’t do either job very well
  • the nirvana of dedicated writing tablets, exemplified by the Remarkable Tablet and the now discontinued Sony Digital Paper

I actually visited Eden Group in their chapel in Rainow, Cheshire, back in ’92 and developed a small Visual Basic demo app (a doctor’s Ward Round) for their pen computer, which looked like this (courtesy of an archived edition of Byte Magazine):

Eden Group designed TriGem Pen386SX

It was pretty slow by modern standards, with its mighty 20MHz Intel 386SX processor, 4MB of RAM, 4MB of flash memory and some PCMCIA slots (hands up who remembers those), but it did work and ran the demo, which looked cool (of course).

The thing that has always eluded me, however, is fulfillment of the promise of scrawling great thoughts with my pen; then having that transcribed into a perfect machine-readable digital rendering that you can then also file and search in some useful way.

The path to that destination has been rocky and unsatisfying. For example, the Nokia SU-1B had a transcription service that came with it. It was very poor: you wrote notes in the leather Filofax diary and the software turned those into complete garbage that it carefully wrote into the corresponding slots in your Outlook calendar. So sad.

Even the Remarkable, which is indeed remarkable, does a pretty average job of transcribing my handwriting. Although it is probably more a case of “no, it’s not you, it’s me”, due to my outstandingly bad calligraphy and poor penmanship. They tried to make me write better at school with handwriting lessons and a big fountain pen, but all to no avail, as it still looks like this…

The Remarkable makes a fist of transcribing that and comes out with this pithy screed…

Here is an example of my herd wily converted to hold the Remarkable tablet is vey goal as a paper replaced but is defeated my hard way when it comes to conversion to tend

As transcribed by Remarkable

To add some spice as I am writing this post, and as always to learn something new, I briefly tested the accuracy of some transcription systems using my handwriting sample, measuring the Word Error Rate (using Amberscript). The results are not encouraging…

System                                                      Word Error Rate (%)
Google Lens                                                 24
Windows 11 / Office                                         26
Remarkable Tablet                                           30
Pen to Print (Android App)                                  36
Transkribus Lite (“Where AI meets historical documents”)    76

Handwriting Transcription Word Error Rates (WER) by system

Whilst Microsoft and Google managed to get about 75% of the words right, with Remarkable coming in third, the other solutions are just worse. So, for me, automated handwriting transcription is largely a pipedream.
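For the curious, Word Error Rate is just word-level edit distance divided by the reference word count; a minimal sketch of the metric (not Amberscript’s actual implementation) looks like this:

```python
# Word Error Rate: Levenshtein edit distance computed over words
# (substitutions, insertions, deletions), divided by the number of
# words in the reference transcript.
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(r)][len(h)] / len(r)

# One substitution ("sat" -> "sit") and one deletion ("the"):
# 2 errors over 6 reference words, i.e. about 33% WER.
print(wer("the cat sat on the mat", "the cat sit on mat"))
```

Note that WER can exceed 100% when the hypothesis contains many spurious insertions, which is presumably how Transkribus Lite earned its 76% on my scrawl.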

In fact, by far the best system ever for recognizing my handwriting is Roger Hill V1.0. Over the years, Roger has painstakingly transcribed my rocket surgeon chicken scratch to create great looking PowerPoint pages, with a WER of probably 1%!

This is Roger, from his LinkedIn profile picture, which has not been updated since about 1997, I think… Kudos, Roger!

Roger Hill V1.0 – the best handwriting transcription system in the world

And so, now, to a taster of the new generation of colour e-reader/pen devices; the one in my hand is the Boox Nova Air C. The idea of a colour e-reader/handwriting device is a major move on from the previous generation of monochrome renderings.

Obviously, you can get colour handwriting and drawing on a mainstream Android tablet like the Samsung Galaxy Tab, but the screen is too smooth and slippery, so the writing/drawing experience is not good. The Boox Nova Air C, like the other devices from the same maker, combines a Wacom stylus, an Android 11 system and a Kaleido Plus colour screen (actually the colour is a layer on top of the monochrome eInk).

Boox Nova Air C

Sadly, the rather pastel colours are just underwhelming and really seem a gimmick. Also, whilst the primary handwriting-optimized apps give a good experience, the standard Android apps (like Office, etc.) have a very laggy response to the pen, which makes them unusable, and video is not a good idea on eInk. It could replace my Kindle Oasis, which is losing its battery life, but it would not replace either the Remarkable or the Samsung Galaxy Tab.

Roll on Remarkable 3?

SaaS deployment creates cloudy cost conundrum

Technology cost optimisation has never been easy, but now there are devils lurking in the decisions driven by SaaS deployments…

Back in the day, the traditional layered structure was one of the organising principles for IT architecture, and was probably alright in the 90s…

Traditional layered information technology architecture

That manifested in an IT application and hosting infrastructure mirroring the layering with data centres full of servers to host the apps and a veritable throng of IT Operations people to run it all…

How Enterprise IT used to look

In terms of cost optimisation, a key strategic enterprise consideration was in the tension between vertical specialization and horizontal standardization and cost consolidation, with some simple tradeoffs…

Old view of technology architecture and cost tradeoffs – a simple dilemma of horizontal and vertical

In this simple world, the dilemma was whether you allow business units to have their heads and anything they want in the application space, or standardize and consolidate applications and infrastructure to get the volume and standardisation benefits. Of course, the decision varied depending on the application types. For specialised customer-facing apps, the business might be beholden to the requirements needed to deliver the best customer experience and competitive advantage. Contrast that with the back-office applications, where there is no sense in every business unit having a different finance or HR system; that complexity just creates non-value-adding cost, so standardization wins out. The “tin & iron” infrastructure was simple too, as it all ran in the company’s data-centres or, perhaps in more exotic cases, in outsourced suppliers’ locations.

So a simple life, with easy decisions…yeah…

But then SaaS came along and just busted out of the joint, breaking up the simple layered model and disturbing the simple inward-looking contemplations of IT decision-making…

SaaS busts out!

The consequence of the bust-out is a substantive reshaping of the enterprise technology landscape for many companies – turning the old layers sideways and abolishing some treasured assumptions. In doing so, the new enduring technology organisations that run that new shape (should) also get smaller, since they no longer need the people to manage the “tin” (and, for those following the story above, the “iron”)…

SaaS “hollows out” traditional Enterprise technology environment

The “hollowing-out” of the traditional IT infrastructure with separate SaaS services fragments the overall technology cost base, for example, by the separation and vertical integration of hosting costs into the individual SaaS towers. This pushes against some of the more traditional levers of infrastructure standardization and creates some new tradeoffs to consider.

Architecturally speaking, under the covers, the actual composition of the SaaS towers is a heterogeneous mixture of parts, albeit with some dominant patterns of deployment. SaaS vendors may host their own services, but often enough, they will use commodity IaaS cloud services, e.g., Genesys Cloud and Workday both run on AWS, as does Salesforce, which also has self-hosted services. Equally in reverse, Microsoft, who do host their own SaaS services, offer an on-premise Office Online Server for those organisations who need to embrace the data themselves, maybe for data residency/sovereignty/security reasons, but also want the reflection of Microsoft 365 online services in their lives.

This admixture of potential solutions creates a trilemma with many more tradeoffs to balance between the different dimensions of the service and cost optimization equation, thus…

Technology architecture and cost tradeoffs – the new trilemma

So, it is a multi-dimensional problem to consider, with the application architecture decisions having a considerable impact on the TCO of the resulting technology environment. Compared to the simple days of moving lots of ancient “tin” (and, of course, “iron”) to a shiny virtualized cloud infrastructure, moving to cloud (whether SaaS or IaaS/PaaS) is not these days a slam-dunk guarantee of optimal costs; it depends very much on the transformation journey, where you are coming from and heading to, and how tightly managed that new place is.

To be sure, moving from an on-premises landscape with, for example, a multitudinous miscellany of disparate finance systems to a harmonised online SaaS ERP system should generate some operational cost savings, as well as business benefits from process standardisation and shared services. There may or may not be associated benefits in the hosting infrastructure, depending on the existing level of consolidation and virtualisation of the physical assets and support services. On a like-for-like basis, whilst the internal IT Operations labour costs should go down, it is possible for the (non-transparent) embedded hardware and related costs to go up. So you still need to “follow the money” by looking at the wider TCO benefits or disbenefits, as well as just getting excited about the shiny new toys.
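As a sketch of that “follow the money” arithmetic, here is a deliberately toy TCO comparison. Every figure (licence costs, hosting, FTE rates, per-user fees) is hypothetical and only illustrates that the answer flips depending on the parameters, not what anything actually costs:

```python
# Toy annual TCO comparison (all figures hypothetical): N disparate
# on-premises finance systems vs one harmonised SaaS ERP.

def on_prem_tco(systems: int, licence=50_000, hosting=30_000,
                ops_fte=1.5, fte_cost=80_000) -> float:
    """Each disparate system carries its own licence, hosting and ops labour."""
    return systems * (licence + hosting + ops_fte * fte_cost)

def saas_tco(users: int, fee_per_user=600,
             retained_ops_fte=2.0, fte_cost=80_000) -> float:
    """The subscription absorbs hosting, but some ops labour is retained."""
    return users * fee_per_user + retained_ops_fte * fte_cost

print(on_prem_tco(systems=6))   # 1,200,000
print(saas_tco(users=1_500))    # 1,060,000
```

With these made-up parameters the SaaS route wins, but double the user count and the subscription line alone overtakes the consolidated on-premises figure; the per-user fee scales with the business while the embedded hardware savings do not.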

Sourcing the right solution – both software and implementation partner together – is crucial, with a guiding commercial-technical design considering the target operating model, business and technology architecture, commercial and TCO levers, and, beyond that, the potential for business operating model transformation to deliver the goods.

Beware the devils…


In furtherance of technology understanding, I used the DALL-E AI to generate the featured image at the head of this post. Here are some of the other ideas it came up with…

Non-linear what?

It is fashionable to concoct neologisms to try and make other people think the neologisers are somehow just so much smarter. Sometimes the new terms created are actually meaningful, and sometimes they are just Deepak Chopra-style nonsense B*S. Non-linearity is one of those sorts of tag phrases that has been dragged kicking and screaming out of the world of mathematics and physics, with mixed results.

Non-linearity has a sort of smart feel about it: linear = simple, straightforward, and actually only quite a small part of the universe; non-linear = quirky, eccentric, and a bit edgy, and pretty well most of the universe.

You can google for quite a few things that you might expect to be non-linear, like…

  • “non-linear algebra”
  • “non-linear dynamics”
  • “non-linear control theory”

…and a whole lot more things that you wouldn’t necessarily…

  • “non-linear thinking”
  • “non-linear innovation”
  • “non-linear people”
  • “non-linear social network”
  • “non-linear politics”
  • “non-linear justice”
  • “non-linear economy”
  • “non-linear clothes”

Apparently non-linearity is a thing in physical architecture now…

The so-called nonlinear architectural design is the thing that using the essence of architectural complex as a starting point we get multiple factors affecting buildings through analyzing, which we organize through the parametric model by reasonable logic in designing, and finally use the computer to create complex forms according with the requirement of architectural complexity.

The Realization of Nonlinear Architectural on the Parametric Model – Min Wu, Zhiliang Ma, 2012

No, I don’t know what it means either: apparently part of the architectural process these days is to take photos of stuff or maybe existing artwork, stick that in Photoshop and trace out something that then looks like a Dali-esque bad dream or a bad acid trip.

You can see it in buildings like the Bella Sky Hotel in Copenhagen, which is quite dramatic…

Bella Sky Hotel, Copenhagen
lglazier618, CC BY 2.0 https://creativecommons.org/licenses/by/2.0, via Wikimedia Commons

In the Bella Sky, there is a distinct sacrifice of function to fashion in the large amount of steel that you have to walk around in the rooms. It is, however, very democratically configured in diagonal form, so anybody of any height can bash their brains out…

Bella Sky hotel room – with metal stanchions…
Just as an aside, the restaurant "Basalt" in the Bella Sky is a total hipster joint, offering "food from the bonfire", basically a small selection of hard nodular charred black vegetables, with a bit of sooty meat and some flavoured smoke 🤯

Back to something more like the real world of the supply chain, the old-world view is that of the linear supply chain like this…

Linear Supply Chain

However, that too has been tagged with non-linearity: if you google “non-linear supply chain”, the most obvious examples are current visualizations of the circular economy, where the Worm Ouroboros eats its own tail with recycling.

Circular Supply Chain

Deloitte made a rather gruesome rendering of the future state in their “Supply Chains and Value Webs” story where the world ends up as an entropic collection of blobs in a bunch of value webs, which looks a bit like this…

Supply Chain Mesh
Consultants are often very good at coming up with ideas that are so hi'falutin' and analytically dense that they cannot be handled by mere mortals. A world built of that value web complexity obviously needs a load of consultants to help make sense of it.
Also on the topic of complexity, I recall once seeing an IT systems architectural concept of "white spaces", which manifested as a spreadsheet that mapped out the intersections between different systems, but was ultimately a very sparse matrix of, guess what, mostly interstitial white space, which didn't really do anything to simplify the situation.
“White Spaces” – more gaps than glue…

From an engineering perspective and also for the sanity of us mere mortals, the sort of tangled mesh represented by “value webs” is not sustainable and a more rational structure is essential. So you have to draw the threads together and create some level of coordination and organising principle which supports the non-linear structure but glues it all together, thus…

Non-linear Supply Chain

These are the types of “non-linear supply chain” model that are currently tagged with “Digital Supply Chain” or “Industry 4.0” monikers to freshen them up and make them seem modern. However, there is nothing new under the sun, of course, and these are actually retreads of ideas from 2000 and before (e.g., the CPFR concept of the 90s). (I have a slide on this topic that I used in 2001.)

In software terms, and back to the start of this story, that structure with the central “digital eco-system” might be classed as a “non-linear software architecture”; oddly enough, when I googled that term I came up with no results at all!

“Non-linear Software Architecture” was there none…

So maybe I can claim to have coined that term myself? Whichever way, when people start talking about “non-linear software architecture”, remember you heard it here first!

The End of Paper as we know it?

Last year I got very excited when I read the news about the launch of the Remarkable Tablet – offering to replace paper in my life – a momentous event, much more impactful than replacing incandescent bulbs with LEDs in the house.  But how and why you may ask?

Early in my work life, I discovered that my short term memory is actually fairly poor, so I used to keep a little notebook and propelling pencil in my back pocket and scribble aide-memoires whenever.

I learnt this lesson after having meetings with one particular manager who used to say things in his office which all seemed to make sense just then, but once I left the room made no sense at all.  Big lesson: never leave a room without a) understanding what somebody said, and b) writing some notes so that it would still make sense later (though maybe it didn’t, because it never made sense in the first place)…

Whilst we are on a brief tangent, apparently the memory thing is supposed to be something to do with the neuro-chemistry of being an introvert – same as the annoying unavailable “tip of the tongue” words, and pithy comments that come to mind 10 minutes after the event…

So in my working life, I write a lot of notes in a lot of meetings and get through lots and lots of faithful yellow notebooks (now getting very difficult to source).

Yellow pads

I have been a long time fan of pen-based computing right back to the early 90s, and all my laptops since about 2000 have been tablet PCs.  But the experience of writing notes on a PC has never worked for me, due in part to my appalling handwriting, which defies most recognition systems (although Microsoft have produced an outstanding handwriting capability in recent versions of Windows). Ultimately, it just doesn’t feel like paper!

I need that slightly resistive feel of the paper – shiny screens with slidey styli just don’t cut it – try a Samsung S-Pen on a Note something.something and no, it just isn’t right (and Android apps are also really not that interested in eInk, either).

So I ordered my Remarkable Tablet, and waited, and waited, and waited a bit longer, and Hurrah, it turned up!

Remarkable Tablet

Then I saw the Sony Digital Paper device, and had to try that too…

Sony DPT-RP1

…and the Onyx Boox Max 2, and had to try that too…

Onyx Boox Max 2

…although I rather wish now I could put that one back in the box and send it back whence it came – sadly a real disappointment.

To satisfy my analytical itch, I worked up an evaluation of all three devices, and you can read the three tables at the tail of this post.

Suffice it to say, the roll of honour is like this:

  • the Remarkable Tablet is my choice of companion device to come with me to meetings;
  • the Sony Digital Paper is a lovely device and my favourite for drafting presentations, but tied to my office due to the very limited connectivity;
  • and the Boox Max 2, well, that’s just going to gather dust until the software gets way, way better, or maybe even way, way, way better (and even software may not fix it unless the eInk display and pen can interact with other Android apps)

And to those people who carp about the fact these devices have almost no functionality compared to their AndroiPad, well, so does a piece of paper, and these are way more functional than that!

And as a tailpiece, one of my family said, Tiny Tim style, his little face looking up at me:

“Dear Papa, does that mean there will be no more shredding?” – Well, yes, that may just be!

(And no, the family don’t really talk to me like that)

Evaluation of Remarkable Tablet, Sony DPT-RP1 and Onyx Boox Max 2

Remarkable Tablet

Design quality


Pen

  • Passive Wacom pen, which is OK; however, you can choose others.
  • The Staedtler Noris is my current favourite
  • Any of them could use a click button to erase.


Writing feel

  • Great tracking for the ink
  • The feel varies by the pen – the standard pen is a bit slidey for me; I prefer the greater resistance of the Noris stylus, which is as much due to my tendency to scribble unreadable chicken scratch otherwise

Note taking experience

  • Good. Notebooks are easy to create and write in, and you can delete pages, but not add them, which is a slight annoyance, although not a serious issue, as notes tend to be a stream of words and pages don’t really matter
  • Some limitations in the current software require a very disciplined work flow to save files to PC (e.g., lack of desktop sync)

Creating / marking presentations (PDFs)

  • Annotating PDF docs is OK, but cannot add or delete pages which is annoying (more of an issue than for notes).
  • Multi-layer capability also means you can edit the original page images, mainly erasing, or just annotate in a separate layer (which you do have to remember to add).
  • There are other various bugs/feature deficiencies in the software at the moment that fixing would improve the usability to another level

Share/print to device

  • Android share to Remarkable is very useful, although it does not work for password-protected Word documents (a Microsoft issue rather than a Remarkable one)

Desktop Sync

  • Indirectly by Cloud service, which then requires manual step to save the PDF to PC file structure
  • Needs automatic sync to desktop folder

Cloud Sync

  • Native support for sync via Wi-Fi to the cloud service is good
  • Will link to my home or phone Wi-Fi hotspot.
  • Have not attempted to link to Wi-Fi that requires login validation via web page

Other doc formats

  • No, but Android app sharing helps with a level of integration


Security

  • PIN for device, one-time code to link device to cloud account.
  • Appears to use SSL-protected connections, but otherwise the security of the cloud service is unknown.
  • SSH access to device allows secure file access
  • The device appears as a CD drive in Windows Explorer, but files are not accessible through that

Boot up time

  • 22s to PIN entry then immediately ready to use
  • OK if just asleep. Otherwise need to plan ahead to make sure it is ready for meetings!

Other functionality

  • No, but sharing on Android does help

Software Updates

  • One so far…hoping for more!

Overall summary

  • This is the best companion device and the one I take to meetings

Sony DPT-RP1

Design quality

  • The nicest looking and most satisfying, tactile experience (no surprise from Sony)


Pen

  • Proprietary, active pen that needs charging, but is generally quite usable and has a (customisable) click button for erase. A slight pain to have to charge it.
  • Also losing it is quite possible due to the very weak magnetic holder, and they are not cheap to replace
  • Two types of nib: felt tip and plastic


Writing feel

  • Tracking is good.
  • The feel of the felt nibs is nicely resistive, just right for me, although they wear away quickly.
  • The plastic (POM) nib is quite slidey, not my preference

Note taking experience

  • Generally OK. The notepads are just a PDF file with a particular doc-type set by keyword in the PDF properties

Creating / marking presentations (PDFs)

  • Annotating PDFs seems quite a natural thing, and they are automatically added in a separate layer. However, cannot edit/rubout the underlying image as the background is not accessible.
  • You can add/delete pages in a notebook but cannot do that for a PDF document as such, but can work around that by adding the right keyword in Acrobat
  • However, when you change the doc type, you need to add a blank page at the front as the software uses the first page image as the background, which is annoying!

Share/print to device

  • Print to Digital Paper on PC is useful – when it works; the driver does not always initialise properly if the device is switched off when you attempt the print, and just gives errors

Desktop Sync

  • Uses the quirky Digital Paper app – Sync feature is generally good and allows you to keep your docs in order on device and PC. Also relatively non-intrusive when working
  • But does not sync empty folders which is annoying if you have a carefully created folder structure you have not yet used all of

Cloud Sync

  • Not out of the box as a native feature
  • They suggest you should use a 3rd party service on your PC which is not the same at all
  • Lack of cloud sync is quite inhibiting for broader use away from my office, as it is tied to the machine, even though I could also pair it to my laptop

Other doc formats

  • No, but Print To Digital Paper feature helps


Security

  • PIN for device
  • Device sync uses some sort of self-signed certificate generated when pairing
  • Not visible as a USB device; files can only be accessed from the desktop app

Boot up time

  • Varies – 11-24s to PIN entry, then another 20s or so to fully awake for use

Other functionality

  • No

Software Updates

  • None seen since I received it (about 2 months).
  • By reputation, Sony are laggardly in providing updates which is a concern

Overall summary

  • A lovely device, but with weird limited desktop experience
  • Seems to be the most secure, but at the sacrifice of usability
  • It is my preferred device for creating presentations (being A4 helps!)


Onyx Boox Max 2

Design quality

  • Does not look too bad, although slightly industrial, and rather a monster in comparison to the other two devices. The metal back can be very cold to the touch


Pen

  • Passive Wacom pen, so alternatives can be used


Writing feel

  • Tracking is OK
  • Like the Remarkable, the feel varies by the pen used – I prefer the Noris

Note taking experience

  • Not great. Notes are a separate doc-type, and do not seem to be easy to file in useful ways
  • Export goes to a fixed location which is not easy to find, nor connect to sync software.

Creating / marking presentations (PDFs)

  • Annotating PDFs is restricted and is not instantly on when opening a doc; you have to click on the Notes menu item, which is a couple of UI steps away
  • Exporting notes from a marked up PDF does not seem to be reliable, esp. if you vary the rotation of pages, it gets confused

Share/print to device

  • No

Desktop Sync

  • Not out of the box, except by Windows standard USB device functionality

Cloud Sync

  • Not out of the box.
  • Need to install 3rd party software. OneDrive does not work properly / crashes.
  • Resilio Sync seems to work OK, although finding any exported notes is a trial

Other doc formats

  • No. Lack of success with other apps and no share inhibits this route


Security

  • Device PIN can be set (if you can find the Android setting, which is hidden), but it does not actually work, as it is bypassed in the device start-up and not respected by the Onyx software setup. V.poor.
  • Unclear if it is possible to encrypt the device (I didn’t risk it!)
  • Otherwise, it looks like a USB storage device, with no password protection
  • Cloud security depends on whatever software you install

Boot up time

  • 38s. The slowest of the three, goes straight to eReader interface (bypassing PIN)

Other functionality

  • Can (in theory) add other Android apps, but in practice, they do not always work properly, crash/hang, or, if they do load, are difficult to read on the eInk screen
  • Cannot successfully install Google Play Services (not pre-installed), which also limits apps that can run
  • Pen based apps tried so far (e.g., OneNote, Inkredible) do not display/scribe properly with the Max 2 pen input so not usable
  • Also other apps do not seem to respond to touch input, just the pen
  • The use of the Max 2 as an external PC monitor is just a gimmick and adds no value

Software Updates

  • Have not seen any yet

Overall summary

  • A disappointment compared to the “Android” promise
  • Really just an eReader, not a useful note taking / paper replacement
  • Insufficiently secure to use for note taking


Tormenting AIs…

Quite a slew of “Robots will take our jobs” articles and AI death-of-civilisation apocalypse FUD recently, so I thought I would undertake a quick investigation of the IBM Watson Developer Cloud, which gives access to their cognitive computing APIs.

Watson’s Cookery book was an interesting read and his Jeopardy game show appearance was a success… so good to see he is now branching out into more serious domains.

Here below is the output from an entirely lightweight test of the Natural Language Classifier.

We created something like this for Garlik to categorise the links relating to people spidered on the WWW into different topic areas.  It is not easy to make this work in a useful way…

The sample classifier is trained on weather, so let’s try a weather question:

Q1 – “Is it raining outside?”

Output…Natural Language Classifier is 98% confident that the question submitted is talking about ‘conditions’.

Yup that seems OK

Let’s try something not about the weather, to see if it discriminates out-of-context topics properly…

Q2 – “Peter Piper picked a peck of pickled peppers; A peck of pickled peppers Peter Piper picked; If Peter Piper picked a peck of pickled peppers,  Where’s the peck of pickled peppers Peter Piper picked?”

Output…Natural Language Classifier is 82% confident that the question submitted is talking about ‘temperature’.

Oh dear, 82% is pretty much a false positive.  Ah well, never mind; still interesting, but I am not sure I would pay for that yet!
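Watson’s internals are not public, but this is exactly what you would expect from any closed-set classifier whose output is a softmax: all the confidence must be shared among the trained classes, so even a nonsense input scores highly somewhere. A minimal sketch (the class names match the weather demo; the raw scores are entirely made up):

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

classes = ["temperature", "conditions"]
# Hypothetical raw scores: the classifier produces *some* score for every
# input, in-domain or not, and softmax forces the probabilities to sum to 1.
queries = {
    "Is it raining outside?": [0.5, 4.2],
    "Peter Piper picked a peck of pickled peppers...": [2.0, 0.4],
}
for q, raw in queries.items():
    probs = softmax(raw)
    best = max(range(len(classes)), key=lambda i: probs[i])
    print(f"{q!r} -> {classes[best]} ({probs[best]:.0%} confident)")
```

With those made-up scores, the tongue-twister comes out “temperature, 83% confident”: the model has no “none of the above” class, so high confidence alone cannot tell you the question was in-domain.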

There are plenty more API examples to play with and, well, I could go on tormenting the AI with more daft questions all day, but real work beckons…

What the Bell?

I was rather interested to see a post on LinkedIn recently about “The Myth of the Bell Curve”, which was saying that (relatively) recent research had shown that human performance follows more of a Power law distribution than a Normal distribution.

The consequence of this is that a cherished HR sacred cow needs slaughtering.  Anyway, you can read the post yourself; however, what tickled my interest is what the two distributions would look like when laid next to each other.

There is an image in the publicity material that attempts to show this…


…but that must be mathematically wrong, surely!

Nurse, bring the oxygen!

The Normal distribution and the Power law are both types of probability density function. However, as far as I can see from the published links, they have different axes:

  • Normal Distribution:  X = performance metric, Y = probability of that performance metric
  • Power Law:  X = some indicator of population; Y = performance metric of some sort

The problem with comparing these two is that you need to rework the data to get both on the same axes.  So let us make the hypothesis that the x-axis of the Power Law is the performance rank of an individual – like a Zipf curve equivalent.

So X is not the size of the population, ‘cos that is just absurd: the curve would otherwise show that any population of 1 is really brilliant, whereas the bigger it gets, the more stupid it is…mmmm, weelllll, depends on who is counting themselves as the One, and how many of the rest read the Daily Mail/Mirror/Express/Sun/Star…

So if you work the data on that basis (modelling an arbitrary population size of 100 people) then the curves actually look like this…

Power law

…so they are curves with quite different shapes.  And if you re-plot them the other way round, then they look like this…


…which might superficially look like the picture at the top, but is actually showing the population of the long tail as the tall spike, not the top performers.

Still a rather scary picture, as it indeed suggests that most of the people in the “team” are rather serious under-performers, hanging on the coat-tails of the much smaller number of high-flyers!
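The rank-based rework described above can be sketched in a few lines. The population of 100, the Normal parameters and the power-law exponent are all arbitrary choices for illustration:

```python
import random

N = 100  # arbitrary population size, as in the text
random.seed(1)

# Normal hypothesis: sample performance from N(100, 15) and sort into rank order.
normal_perf = sorted((random.gauss(100, 15) for _ in range(N)), reverse=True)

# Power-law (Zipf-like) hypothesis: performance at rank r falls as 1 / r^alpha,
# scaled (purely for comparison) so the best performer matches the Normal sample.
alpha = 1.0
top = max(normal_perf)
power_perf = [top / (r ** alpha) for r in range(1, N + 1)]

for r in (1, 10, 50, 100):
    print(f"rank {r:3d}: normal {normal_perf[r - 1]:6.1f}   power {power_perf[r - 1]:6.1f}")
```

The Normal curve declines gently across the ranks, while the power law collapses within the first few: by rank 10 the power-law performer is already at a tenth of the top score, which is the coat-tails picture in numbers.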

This may be somewhat a figment of the example data, and, taking a probably unsubstantiated analytical leap, we can readjust the power law chart to align the median figures of performance and come up with a chart like this…

Normal (power adjusted)

…which still suggests that there are a load of sub-middle slackers sitting on their hands, and they should really get moving and DO SOMETHING!

My general theory is that if, when leaving the house on the way to work in the morning, you harbour the thought “today, I will not make a difference”, you should go back indoors and get back under the duvet.

So I have scratched my itch; I am not sure it was so much fun for you, so here is another useful framework to help guide thinking and action, which considers the destination of projects…

Thinking is… a wasted opportunity swirling round the Plug-Hole of Life, by way of a path of good intentions.

Implementation is…