Technology in Business

Beware the Low-Code IPR Iceberg

Low-code app development is very popular now, but you need to ensure the commercial benefits of your titanic new app do not founder on the low-code intellectual property rights iceberg

Low-code and no-code are very popular, with a range of promises around rapid development of engaging apps with little effort and the “democratization” of app development, taking it out of the hands of “Big IT” and into the hands of citizen developers.

Of course, low-code is not new at all, and was largely pioneered in the fourth-generation languages (4GLs) which peaked in the 1980s and 90s, with GUI (and even sometimes WYSIWYG) systems like PowerBuilder, Pro-IV, Omnis, StaffWare, and even Microsoft Access.

As it happens, for a period in the 80s I ran the team that supported PRO-IV after its acquisition by McDonnell Douglas Information Systems. Being written in C, PRO-IV was ported to more or less anything that had a C compiler. That was not always a good thing, especially for the IBM AS/400, which had a toy C compiler that produced incredibly slow code (for the technical amongst you, that was due to every function call being generated as an inter-segment CALL, which is about 1,000 times slower than an intra-segment CALL and so a baaaad thing to do).

As I may have mentioned before I am a big fan of GUI-based WYSIWYG visual software design, with a simple philosophy…

Visual good, green screen bad (with apologies to George Orwell)

As most low-code designers are generally visual in nature, from a philosophical perspective they come under the “good” category. Commercially speaking, however, there are some potentially significant pitfalls of which you need to take account.

Before digging into that, we first need to look at the main components that make up an “app”. Whilst it is a rather corny cliché, the iceberg motif is quite apt here, as there is more to an app that you don’t see than that you do, especially for low-code. The parts under the waves, beneath “The App”, are the data and metadata, supporting environment, execution engine and the infrastructure on which it all runs, thus…

There’s more IPR hidden below the waterline in the Low-Code IPR iceberg

Not surprisingly, there are some significant commercial attributes to all those parts, not least who owns which parts. This raises the concern that whilst you might own some intellectual property rights (depending on your lawyers), due to the complexities of the other parts of the app construction, you may not own anything that you could actually take away as a useful asset.

Indeed, there are a number of factors to think about when asking yourself the question – who really owns my app?

Who really owns my app?

Considering those factors:

  • IPR ownership. This is the obvious one that the lawyers focus on covering business trade secrets, copyright, patents and so on.
    • How much of the total app IPR do you own?
  • Portability. A key part of switching is taking your toys away and playing your game somewhere else for your own advantage.
    • Can you extract any useful description and source code for the app, preferably in a commonly recognised format?
  • Third Party Access. This covers the scope of who can touch your app for development, support and general usage. (This is a historical trap for outsourcing IT services)
    • Can third parties modify and support your app code, and can other parties actually use the app?
  • Licensing. This covers how the various parts that you don’t actually own are licensed to you and your associates and affiliates, and for how long that license actually lasts.
    • What is the scope, time period and other attributes of the licence given?
  • Run-time costs. This covers the costs associated with deploying and using your app which may include or exclude the infrastructure costs depending on the application and low-code service construction.
    • What is the on-going pricing for deployment and use of the app, and what happens when you stop paying?
  • Supplier Continuity. This covers the longevity of the supplier running your app and what happens if/when they go bust. In the past that was handled by a simple escrow clause, but that is becoming a much less tenable proposition in the SaaS world. In the worst case a supplier will cease to exist, their servers go offline, your app is gone, and any useful IP becomes “bona vacantia”, owned by the Crown (in the UK, at least; other bad outcomes are available).
    • What happens to your app in the event of supplier failure?

Putting those together, you might own the whole kit and caboodle of your app, which is less than likely for low-code, although it may still be the case for older 3GL-based apps; or you may really own nothing useful at all, just some scraps that somebody generously lets you use in exchange for some of your hard-earned cash.

You can map out the extremes of the commercial lock-in created by these sorts of considerations against the handy iceberg layers, thus…

Commercial lock-in potential

Most low-code systems exhibit some of the features on the left-hand side of the iceberg, and you can do some rough clustering of current low-code systems accordingly…

Examples of different low-code systems by degree of commercial lock-in
  • SaaS Platforms with low-code extensibility. These might not be considered “low-code” by purists in the fullest sense, but they often exhibit some of the technical features. They typically have the highest level of lock-in, as you are wedded to the mass of application functionality, adding customization around the side. The system is hosted by the supplier, and you generally pay for most/all of the users, with some significant restrictions around licensing and usage. The app “code” is not portable, and when the supplier dies or you stop paying for access, your app dies too.
  • SaaS-like Low-Code systems, hosted by supplier. These provide low-code features but are locked to the supplier’s systems and infrastructure, with restrictive “take it or leave it” licensing, and again you pay for most/all users who touch the app. Again, the app “code” is not portable, and when the supplier dies or you stop paying for access, your app dies too.
  • Enterprise Low-Code systems, with choice of infrastructure deployment. Whilst these are like the previous group, they start to open up the sealed world by giving options to deploy your app on different underlying cloud IaaS, or even on-premise. They may also use some open-source components for the deployment tools and landing zones (e.g., Docker, Kubernetes, etc.). However, the apps themselves have relatively low portability, even if they run in a more open environment. These types of systems are often targeted at Enterprise clients who have multi-cloud strategies. No surprise, they also therefore carry an “Enterprise” price tag and may still have supplier-imposed, time-based limitations on use, access and so on.
  • Code-generator Low-Code systems, deploy anywhere. This last group has the lowest level of lock-in and typically generates standard 3GL code, like Java or PHP. In their freest form, you only pay for the development tools, and the run-time is royalty-free with no application usage run-time costs. Since they generate code, the apps are relatively portable, although the generated code may not be pretty in a human sense. They also effectively have perpetual life, with no supplier-imposed time limits, unlike the other three categories. More locked-in versions will have run-time charges.

Low-code is definitely a “good thing”, but you do need to go into it with open eyes and understand how the shiny promise of speedy development with high investment efficiency can be eroded if you don’t take the commercial realities into account…

The promise of low-code can be seriously eroded by the commercial realities

…and your app founders on the low-code IPR iceberg and its commercial case sinks below the waves.

Category Error

Whilst categories are a good organising principle for procurement, for technology at least they can lead to siloed thinking that misses bigger transformational opportunities...

One of the challenges of functionally aligned organisation structures is that sometimes you get different parts of the business working in different ways, aligned to different objectives and speaking in different languages. This is quite common between Technology (be it Digital, IT, OT or whatever) and Procurement. Each group can be suspicious of, or lacking respect for, the other, and the schism can be so bad that Technology has its own procurement team because Group Procurement “just don’t get it”. Equally, Procurement feels aggrieved because they are engaged too late, handed the commercials in a hospital pass hamstrung by technical lock-in, and otherwise treated as low-status, order-placing “water boys”.

A classic area of speaking in different tongues is how you group the things you buy and deploy. Technology people may think of Services, which are built of a multiplicity of components of different types: software, hardware, Cloudy parts, people, external services and so on. (Aside: the people element is quite special here, as it can be actual internal headcount rather than a third-party component, and so invisible in traditional AP-based spend cubes.)

All those Service parts have different buying characteristics, whereas Procurement think about Categories, which are things that have similar buying characteristics and approaches.

Whilst that can undoubtedly make sense for commodity items, it starts to fall apart the more specialised the entities in the categories become. Software, for example, is a hard category as it is not particularly susceptible to procurement amalgamation, even if there are some common processes like license management. Ten different specialist softwares from different authors in a high-level category cannot be amalgamated, only switched/substituted or eliminated – which are business/technology decisions, not procurement ones, as such. So the high-level category is really still ten little categories…

The upshot is that you can literally have people thinking and talking at cross-purposes – Services vs Categories, thus…

Talking at cross-purposes – Services vs Categories

There are perfectly valid reasons for both views but sometimes you need to lead with one rather than the other. This is particularly true when considering how to optimise costs, which can fall into three paths…

Cardboard images: Andrea Crisante, Koya79, chwatson | Free3D

Business-as-usual Category Management won’t change the pile of cardboard boxes of your category, maybe just organise the contents a bit better; Category Sourcing might tidy up the boxes a bit. However, both fundamentally limit the scope of the opportunity that unfolds. Therefore, to unlock the bigger-ticket opportunities and build the castle of your dreams, you have to look across categories…

Examples of cross-category change and transformation

The cross-category opportunities don’t have to be the mega-sized reshaping of Digitalisation, BPO, technology outsourcing or even the technology-switching wizardry of Cloud migration and the like; they can be quite mundane. A good example is that of Technology contractors working in staff augmentation roles on daily rates. These people are often managed in an HR category where somebody has thoughtfully negotiated a single-source deal with a contract resource management company and their fancy resource management platform (you know the names).

However, these typically expensive on-shore contractor creatures should be factored away into managed services run by technology partners delivered from cost-effective locations wherever on the globe that may be.

But whilst the headcount is locked into the HR category, with spend stuck in the wrong bucket and savings counted against “their” savings target, that doesn’t happen, and the bad behaviour of buying expensive unstructured resource is institutionalized and systematized. That is indeed a “category error”!

The necessary solution is to allow opportunity assessments and the following commercial stages to break out of the category strait-jacket and think holistically about the business, its technology underpinnings, and how it can be transformed for the better (and lower unit cost).

The starter for ten on that is to align the complementary roles of Technology and Procurement across the business service lifecycle, providing mutual support and grasping the larger opportunities, like this…

Aligning complementary Technology and Procurement roles across the business service lifecycle

And so it goes…

Digital, Phygital, Fiddlesticks

Digital is a rather abused term that has been round the block a few times, and now we have “Phygital”, which is a load of bull…

I was prompted to think about the meaning of “Digital” recently by the unlikely conjunction of two disparate events, viz:

The first is a great step forward for a brand that has up to now been firmly “bricks and mortar”, and the second is apparently something “phygital” with the incursion of technology into actual clothing for reasons.

I get the commercial, consumer-driven logic of the first, but the second is somewhat more puzzling and perplexing. However, I don’t really care about clothing and fashion, so it is a market logic that I would have to work hard to understand; we’ll see how that business model succeeds over time.

Anyway, it set me thinking about words…

Digital has been around for many years, but “phygital” is a much more recently coined term, attributed to Chris Weil, Chairman of Momentum Worldwide, in 2007 (Thanks, Chris), picking up momentum c.2017. You can look at the frequency of some key technology terms in Google NGram Viewer…

NGram frequency of key technology terms by year

PCs were obviously quite a thing back in 1985 and also gave mainframes a little bump at the same time too. I tried “minicomputer”, but that barely features in this scaling, so apparently was not something that people talked about so much back then. Whilst departmental computing was a big wave of change versus mainframe in the 1970s and 80s, it was only in the business domain and so general awareness and interest was lower, I suppose.

Web and Internet were clearly also big talking points in 2000-ish, and beat down the Microcomputer Revolution in volume. But throughout you can see “Digital” growing steadily until it has actually overtaken what were the leaders, “Web” and “Internet”, with Web taking a sudden down-turn.

Most of the other newer terms like “AI”, “blockchain” and “metaverse” still bumble around at the bottom of awareness at this scale, not yet registering by the current 2019 end date of the Ngram corpora. “Fintech” is also a relatively low scorer, even though it has now spawned a constellation of new digital “<ANYthing>Tech” neologisms, like “InsureTech”, “PropTech”, “FemTech”, “EdTech”, “LegalTech”, “FoodTech”, “AgriTech” and so on. These are also probably more business-vertical specific than broad-based, so don’t get the same volume of attention.

And don’t bother looking for “phygital” which also dribbles along the bottom of the chart if you add it to the query.

Before around 2015, “Digital” used to mean stuff related to computers generally. However, from then onwards it started to acquire jazzy new meanings related to exciting things like customer experience, digital marketing, mobile apps and otherwise being a “Digital” business, and with “digitalisation”, the process of becoming that thing. McKinsey had a go at defining it which you can read at your leisure.

What got lost is that many businesses have been digital for years, and that technology rubbed up against the real world in many places, often not so glamorous: in manufacturing, supply chains, vending machines, door locks in hotels, the kitchen systems at KFC…

To get to grips with this you can draw up a simple gameboard that maps out business typology against its manifestation.

Business classification – Typology vs manifestation

The business typology separates the places (“venues”) where people interact (e.g., actually trade, or just get together and interact to do people stuff, like throwing sheep) from the actual trading businesses themselves, i.e., those that generally exchange some value for some thing or benefit. These can be actual products, services and money, but in the wider context could also be social kudos, environmental benefit or other non-monetary value. For these purposes, broker-type businesses fit in the “trading” slot, as they facilitate other people’s trading.

By the way, for the bankers reading this, we shall deliberately ignore where the trading transactions (financial, social, emotional, environmental, or otherwise) are cleared and "payments" handled; let's keep things simple for the purpose of this treatise.

The manifestation dimension separates the real from the non-real. Physical covers what you expect (to be construed according to context, as the lawyers say): buildings made of straw, sticks and bricks at actual geographic locations, or cars, or books made of paper. Virtual covers everything that isn’t that – a nicely mutually exclusive definition. So it can include virtual assets like photos, videos, software and financial products, and virtual businesses that provide places for people to connect and trade.

You can map out some businesses onto the landscape to see how the Pickup Sticks fall.

Digital business classification – some examples

What you can see (obviously) is that those which fall into the virtual column are heavily technology-based (indeed, by construction, since we have selected this to exclude ectoplasmic spirit-world businesses, wyverns, harpies, vampires, magic wand shops and other virtual manifestations of a more mystical sort). Whilst some of the virtual venues like Facebook support virtual interactions, a virtual platform like Uber facilitates real-world transactions between car drivers and their passengers. And Utility Warehouse is a virtual business that, loosely speaking, brokers people-energy trading.

In this classification, the Metaverse is just another venue, and it could yet be a three-star Michelin restaurant experience or just a greasy spoon, as we shall see. But like the financial exchanges of today, the venues (exchanges) make a dribble of money in comparison with the eye-watering value that flows in the trades they facilitate. It’s largely what you do that makes the money, rather than where you do it (whether you have Meta-legs or not…).

The caveat to that is that a business with a captive supply base, and monopolistic channel control, like the Apple App store, can make shed-loads of money at its 30% transaction tax. Similarly, Facebook as a venue makes lots of money by selling access to its users for advertisers compared to the unfathomable value of the social interactions that take place upon it.

The key point here is that the businesses in the right-hand Physical columns also use technology, and often extensively, although not so visible to the untuned eye. Even the Louth Livestock Market, a very physical place with real farm animals and open outcry selling round the ring, also has a website and online auction trading. In other words, they are Digital businesses too.

So Digital is embedded in both Physical and Virtual manifestations and forms a solid and critical substrate on which almost all businesses run today. Like a seam of gold running through quartz…

Digital substrate embedded in most businesses

What does a “Digital” business actually look like these days? Well, it would undoubtedly include, internally, solid chunks of systems for Customer, Product & Operations and Performance & Control, and externally, multiple channels, non-linear supply chains and the like. But that is a story for another day.

We used to see businesses sprout siloed business units, separate from the mainstream and built on electronic channels (oh yes, Digital channels), back in the early 2000s. This is less xenogenesis, birthing something new and quite unlike its parent, than temporary firewalling to incubate a new way of doing things in the same business. Consequently, these offshoots have long been absorbed back into mainstream business models as they matured.

Many businesses have been omni-channel for years; it is no longer a rocket-scientist-level insight to suggest, for example, that you should have common stock management between an online store and a physical shop. However, the wave of reworked “Digital” businesses in the last 5-7 years regurgitated the concept as something new, when indeed it is not.

The upshot of all this is that the newer Virtual businesses were called Digital by their over-enthusiastic and imprecise evangelists, in thrall to a form of cognitive bias, and so Virtual has been confused with Digital. This created the misbegotten conflation of two terms to describe an omni-channel experience across Physical and Virtual.

So we got “Phygital”. However, Digital embraces both Virtual and Physical, so “Phygital” should really be “Phyrtual”, or “Virtical”, or some other bull.

Digital is perfectly good…we don’t need Phygital, let it wither and die, like the eCommerce business units of old

Beware of BS Benchmarks & Krap KPIs

Recently our esteemed Green Knight, Sir Jonathon Porritt, was credited with saying “Overweight people are ‘damaging the planet’”. Of course it turns out that he said something like this in about 2007, in fact building on a comment by the then Secretary of State for Health, Alan Johnson. But somebody else unearthed it again for some typically twisted reason – nothing can be more topical than mixing global warming with a bit of “fatty slapping”.

The hypothesis behind the hype is that fat people use more resources because they eat more food; but why not then include teenage boys (unfillable, as empty fridges around the country can testify), people with very high metabolic rates, and other such big eaters? Ah well, the logic goes that fat people also drive everywhere and so contribute more CO2 than thin people who, of course, walk or cycle everywhere. Well, maybe that applies in towns, but it is certainly not true in the countryside. So, drawing a different intersection in the Venn diagram I am sketching out here in hyperspace, maybe the headline should have read “Teenage boys and country people with very high metabolic rates are ‘damaging the planet’” – not quite so catchy, or right-on, eh?

But, of course, there is a secondary thesis which is that obese people can be “cured”, especially if they all got out of their cars, walked and cycled, and stopped scarfing all the pies, whence their weight would magically drop away and they would join all the normal people in the happy mean.

When you look at whole populations analytically, then of course you usually see some sort of distribution (Normal or otherwise) of whatever factor (weight, in this case) you might be measuring. So the theory is that by thinning down the fatties, the shape of the distribution will be changed. However, there are flies in this particular ointment: if you look around you can find suggestions that obesity is actually a structural feature of a/the/any human population, that everybody has got fatter, and that you need to treat the population as a whole, not just focus on the upper tail.

All in all, an example of woolly, loose thinking gussied up to serve a political agenda.

BMI is one of the weapons in the “fatty slapping” armoury, a metric with some very well documented shortcomings, yet standard (mis-)guidance would label people like Lawrence Dallaglio, Jonah Lomu and Mel Gibson as overweight or obese. Whilst BMI might have some trivial diagnostic uses, some lard-brained fat-heads try to use it as a decision-making metric, vide “‘Too fat’ to donate bone marrow” – the 18-stone, 5’10” sports teacher with a technical BMI of 36.1 who was ejected from the National Bone Marrow Register. To make a proper health assessment, you need a more detailed look at structural features, like waist size, percentage of body fat and so on, before pronouncing.

Just pausing a moment to dissect BMI further: it has units of kg/m², which is not unlike the metric used to define paper weight.

Many organisations these days use 80gsm printer paper, which is more environmentally friendly than the more sumptuous 100gsm paper of old, and even less rich-feeling than the 120gsm paper that Tier 1 consultants use to create a table-thumping report – the dollars are in the loudness of the thump.

As Marshall McLuhan told us, the medium is indeed the message, thickness = quality, and just feel that silky china clay high white finish. Oooohhh…

Sorry, started to get rather indented there, must coach self, control tangents…

So a person who has a BMI of, say, yeah, like 25 is like a piece of 25,000gsm paper, no really… equally, a piece of A4 paper might have a BMI of about 0.08…
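The unit games above can be checked with a few lines of arithmetic. A throwaway sketch, with an entirely made-up 80 kg, 1.79 m person as the example:

```python
def bmi(mass_kg: float, height_m: float) -> float:
    """Body Mass Index in kg/m^2 -- dimensionally the same as paper grammage."""
    return mass_kg / height_m ** 2

def bmi_as_gsm(bmi_value: float) -> float:
    """Convert a BMI (kg/m^2) into the equivalent paper weight in g/m^2 (gsm)."""
    return bmi_value * 1000

# A hypothetical 80 kg person, 1.79 m tall -- right on the magic 25:
print(round(bmi(80, 1.79), 1))   # -> 25.0
print(bmi_as_gsm(25))            # -> 25000 "gsm"

# And 80gsm printer paper read back as a "BMI":
print(80 / 1000)                 # -> 0.08 kg/m^2
```

Same units, factor of a thousand in the prefix; the absurdity of the comparison does the rest.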


Thus BMI is a prime example of a benchmark ratio or KPI that is NOT a good basis for making decisions, as it fails to take account of significant structural factors.

This parable provides an important lesson for practitioners in the world of Information Technology Economics, where many a ratio is measured and analysed by pundits, including Gartner et al; a classic being “IT Costs as a percentage of Revenue”, one of their IT Key Metrics.

It is defined quite simply as:

IT Costs as % of Revenue = (Total IT Costs ÷ Total Revenue) × 100
If you dig into the typical drivers of the top and bottom parts of this formula as below, say,

MicroEconomic Drivers – Typical Examples

IT Costs:
  • Business configuration, e.g., Channel/Distribution infrastructure
  • Organisation structure (e.g., headcount)
  • IT Governance & Policies (e.g., Group standardisation)
  • IS architecture and legacy (complexity)
  • IT Service definitions and service levels
  • Development methods & productivity
  • Sourcing/procurement strategy & execution
  • Supplier market diversity

Revenue:
  • Market Structure
  • Competitive environment
  • Market share
  • Product design
  • Consumer behaviour
  • Sales & Marketing performance
  • Customer Service (retention)

then you might surmise that the Revenue denominator has significant elements that are certainly outside the direct control of the IT organisation, and indeed outside the control of the company, whereas the IT Costs numerator is defined largely by the structure of the organisation, its distribution channels, and internal policies and practices. Revenue, the business top line, is also, I conjecture, more volatile than IT Costs, and being mostly outside the control of IT, is a very unfair stick to beat the IT donkey with. So, in qualitative, logical terms, this metric certainly appears to be a very poor ‘apples and oranges’ comparator.

If you stretch the analysis further, you can ask the question “what does it mean?” Is the ratio intended to show the importance of IT? Or IT leverage/gearing (bang for the buck)?

Well, if it is some level of importance we are trying to assess, then we should analyse the relationship between this benchmark ratio and true measures of business value, such as Operating Margin. Looking across a range of industries, the curve looks like this:


OK, it is a deliberately silly chart, just to make the point that this is clearly a wobbly relationship.
If you do a linear regression analysis of the relationship between Operating Margin % and the IT Cost/Revenue ratio, and its sibling ratio “IT Cost as a %age of Total Operating Costs” (or “Systems Intensity” to its friends), then you get these results for R²:


IT Costs as %age of Revenue vs Operating Margin%


IT Costs as %age of Op. Costs vs Operating Margin%


What this shows is that there is no particularly significant linear relationship between these two key metrics and Operating Margin; quantitatively, the ratios do not really tell you anything about how IT costs/investment drive overall business performance at all.

Even within an industry, ratio comparisons are fairly meaningless. For example, in the past UK Banks had an average Systems Intensity of around 20%. If you were to calculate the Systems Intensity for Egg, the Internet bank, at its height, you would come out with a number ranging from about 17% to 25%, depending on how you treat the IT cost component of outsourced product processing and some other structural factors. And I do recall a conversation with one Investment Bank CIO who declared, “Yes, of course we spend 20% of our operating costs on IT; it’s how we set the budget!”

The whole averaging process loses information too. Look at the four distributions below: they all have the same mean (i.e., average) but are wildly different in shape.


Without more detail on their parameters than just the mean value of the curves, you cannot make a sensible comparison at all.
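To see just how much the averaging throws away, here is a quick illustrative sketch (the shapes and parameters are my own invention, not the curves from the chart): four very different distributions engineered to share a mean of about 10.

```python
import random
import statistics

random.seed(1)
N = 10_000

samples = {
    "normal":  [random.gauss(10, 2) for _ in range(N)],
    "uniform": [random.uniform(5, 15) for _ in range(N)],
    "skewed":  [random.expovariate(1 / 10) for _ in range(N)],   # long right tail
    "bimodal": [random.gauss(5, 1) if random.random() < 0.5 else random.gauss(15, 1)
                for _ in range(N)],
}

# All four means sit close to 10, but the shapes (visible here in the
# spreads) differ wildly -- the mean alone cannot tell them apart.
for name, xs in samples.items():
    print(f"{name:8s} mean={statistics.mean(xs):5.2f} stdev={statistics.stdev(xs):5.2f}")
```

Report only the means and all four populations look identical; the exponential and bimodal cases would lead you to completely different conclusions on inspection.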

So all these ratios give is some rather weak macro illumination of the differing levels of IT spending between industries, like saying to a Bank “Did you know that, on average, Banks spend 7.3 times more on IT than Energy companies” to which the appropriate response is “YEAH, SO WHAT?”…

…oh, and maybe some vague diagnostic indication that there may (or may not) be something worth looking at with a more detailed structural review. So why not just go straight there, and dig out the real gold!

And so the morals of this story, O Best Beloved, are that just because you can divide two numbers, it doesn’t mean that you should; and that you should be prepared to dig into the detail to truly understand how cost and performance could be improved.

Just so.

Phosphenes & Palimpsests…

About a year ago, I went through one of those rare moments when I thought my normal powers of memory had somehow deserted me. It was not really anything important I couldn’t remember, just the word that describes the lights you see when you squeeze your eyes tight shut. Like this…

So not very significant in the scheme of things: not one of the words I actually use very often in conversation or in Powerpoint presentations. Just annoying, because the word was just lurking on the edge of my perception, out of reach. But something that you can get a bit obsessed about when information normally falls to hand or mind quickly…

So I Googled and Wiki’d and did all those searching jobs that normally count as work, and kept finding Tom, Nicole and Stanley and their film, and other flotsam and jetsam on the endless waves of Web surf.

But, eventually, I created a mega-whiz, sharp-as-a-scalpel, spot-on search string that gave me that Eureka moment…Ding!

The word I was looking for was “Phosphene”.

Mind you, the Eureka moment was over quickly, as I came to the odd feeling that I had never known the word at all – so how could I have semi-forgotten or demi-remembered it? But let us not confuse the story with such technical plot twists and devices.

Palimpsest is another word a bit like Phosphene, but in reverse: I know what the letters say, but the meaning slips my mind (a reused piece of parchment, in fact). It is, however, a word that I have read many times but never ever had the need to write down – until today. It is definitely a clever Stephen Fry sort of a word, or maybe a Will Self word.

I wrote “normal powers of memory” at the top of this piece, though we Jungian I-types “enjoy” the physical aspects of memory that are imposed by our brain chemistry, being the dominant long acetylcholine pathway, compared to the short dopamine pathway of the E-types out there.

If you looked inside my head, it might look something like this…

…but brighter and probably in colour.

So I worked out many years ago that I should not waste my time remembering stuff, when a notebook works much better.

And so on into the Wonderful World of the Web, I have always found it useful to clip bits out and paste them into my digital scrapbook for longevity and to act as my long-term cyber memory. I gave up on browser Favourites early, as they quickly became useless signposts to where information was no more.

In my Adobe period, I printed bits of the Web to PDF files and stored them in a byzantine filing structure. But, eventually, I settled on Onfolio and paid some brass for a real product…and then Microsoft bought it and gave me back my money because they were giving it away free in the Windows Live toolbar…whereupon it became a zombie, twilight product. The death knell came when they switched off the licensing servers last September.

RIP, Onfolio, you served me well

So I had to indulge in one of those distress-driven searches to find a new digital brain. I tried Ultra-Recall, which can import Onfolio collections, but has the user experience of a broken lift. I tried TopicScape, but that felt like I was in Castle Wolfenstein or Jurassic Park (the “I know this, it’s UNIX” moment, staring at a mad graphical computerscape), and a host of other paraphernalia and arcana.

So I have ended up with MacroPool’s Web Research, which feels a bit like Onfolio…but German…so hopefully it will be most efficient. We’ll see…

The Rule of 7

Being of a fairly rational turn of mind, I don’t have much truck with Numerology and similar horoscopological mumbo-jumbo, but I have, over the years, observed that product development tends to hit difficulties around the 7th major version of a piece of software – the antithesis of the “lucky number 7”. This is not a rigorously tested rule (it could be 5 or 6 or 7 or 8), but something more of an intuition with some empirical basis: rule or not, if it comes to pass for Microsoft, it does not bode well for Windows 7.

…well, not according to the entrails of this goat that I have been using to forecast the future of the global banking system, anyway…

A more robust, analytical explanation is that these difficulties are some manifestation of James Utterback’s theories about the dynamics of innovation; of product and process innovation and dominant designs…


… maybe mixed with a bit of boredom, laziness, hubris, and less rational, human things (lemma  here)

Windows is moving from Vista (6) to version 7, and so maybe it has already had its bad moment.  However, it is difficult to see how much more development can go into the product as it is, at 28 years old, quite far down the right-hand end of the innovation curve, beyond the flush of youth (worrying about its pension, and oooh, it is so chilly, let’s turn the fire up, and what are we having for lunch, I’ve lost my teeth…)

Exercise for the reader: try plotting where you think Windows 1.0, 3.1, 95, XP and Vista fit on the curve.

Many of the other core information technologies we hold dear today are also really quite ancient: RDBMS, word processors, spreadsheets, all dating from the 1970s–80s.  So what’s new in the world? Multi-touch, then, the much-touted new technology for Win 7 – who needs it on a desktop, I ask you?

Don’t get me going about Tom Cruise and Minority Report – although I do still keep half an eye on developments in data gloves…

There is a lot of talk of Cloud Computing and other exciting things, but apart from the fact that it is, in the main, new applications that will drive up usage, not base technologies, there is an interesting trend about where computing stuff actually happens, and more of it is likely to be happening in non-human places, and between consenting machines…
If these population estimates above are anywhere near true, then only about 8% of connected devices are human-type information appliances; the other 92% are machines or devices that do things useful or mysterious – the balance tilted to the machines by the 50 billion cockroaches in the basement, analogous to the rat statistic: you are never more than six feet from one, but you may not know it…

If you take this Machine-to-Machine (M2M) intelligent-device view of the world and mash it up with the Semantic Web & RDF, you get machine-readable data on the web and maybe, as a by-product, a lingua franca so that machine can talk unto machine.
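To make the “machine talking unto machine” idea concrete, here is a toy, purely illustrative sketch – plain Python tuples standing in for real RDF triples, with the device URNs invented for the example:

```python
# Toy "triples" in the Semantic Web / RDF style: (subject, predicate, object).
# Plain tuples stand in for a real RDF store; all identifiers are made up.
triples = [
    ("urn:device:washer-42", "rdf:type", "urn:class:Appliance"),
    ("urn:device:washer-42", "urn:prop:status", "spin-cycle"),
    ("urn:device:fridge-7", "urn:prop:status", "defrosting"),
]

# Another machine can query this shared vocabulary with no human in the loop...
status = next(obj for subj, pred, obj in triples
              if subj == "urn:device:washer-42" and pred == "urn:prop:status")
print(status)  # spin-cycle
```

A real system would use proper RDF vocabularies and a triple store, of course, but the essential trick is just this: data published in a form that any machine can parse and query.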

So, if the washing machine says, “I’ll be back”, get the h*ll out, Judgement Day is coming!


Of Washing Machines and Software Errors

The human brain is constructed so that it is very good at seeing patterns (even where there are none), and so “coincidentally” after my previous spat on the same topic, I have been suffering my own version of Call Centre hell this week – just trying to book an engineer to come and mend our ailing tumble-drier.

Last year, just about this time, I fell for the pitch of Domestic & General who sold me a three-in-one policy for kitchen equipment breakdowns. And so it came to pass that the Tumble Drier started thrashing itself to pieces, just after the renewal letter came through.

All should have been smooth: “direct debit”, “you need to do nothing”, “renew automatically” were the comforting phrases in the letter. Tchah!

To cut a long story short, I lost a few precious hours of my life listening to on-hold music and all that other stuff, and then when I got through it was “that’s fine, just call blah on this number… oh, that’s strange, the policy has been renewed but the equipment shows it has lapsed, let me put you on hold”…

So clearly the renewal process had gone all agley, creating an insurance curate’s egg, in fact.

But the cherry on the cake was when I received a very cheerful automated email from D&G below…


Renew for a penny? Hmmm, aha, the light dawns: something has gone wrong with their arithmetic. Last year’s price was £119.88, nicely divisible by 36 (3 boxes x 12 months); this year’s price (up, of course) is £131.88 – oh dear, divide that by 36 and you get lots of 33333333333333333333333333s on the end. Add in a bit of truncation and you have a nice little problem building up. If they had charged me £131.76, maybe things would have turned out differently.
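We can only guess at what D&G’s billing system actually does, but the suspect arithmetic can be sketched with Python’s Decimal type (the prices are the ones from my renewal letter; the truncation to whole pennies is my assumption):

```python
from decimal import Decimal, ROUND_DOWN

months = 36  # 3 boxes x 12 months

# Last year: divides exactly, so no residue anywhere
last_year = Decimal("119.88")
assert last_year / months == Decimal("3.33")

# This year: 131.88 / 36 = 3.663333..., so a naive truncation to
# whole pennies quietly loses money over the course of the year
this_year = Decimal("131.88")
monthly = (this_year / months).quantize(Decimal("0.01"), rounding=ROUND_DOWN)
collected = monthly * months
print(monthly, collected, this_year - collected)  # 3.66 131.76 0.12
```

A shortfall of 12p a year is hardly bank-breaking, but it is exactly the sort of residue that can leave a renewal system convinced you still owe it something.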

Nothing as serious as the Patriot missile failure, or the crash of the Mars Climate Orbiter (Imperial/Metric unit confusion), or others of that ilk, but my very own personalised, computerised, automated rounding error.

Aside: Spreading the cost of the £0.01 by direct debit. I laughed so much I nearly died….

I clicked on the button, of course – I had to, in for a penny, in for a penny, so to speak – to see if I could make the whole problem go away, but a “technical error” on the web-site prevented me from completing the transaction!

So here is a little sign for the D&G development team to hang from their office wall to act as a reminder as they ply their daily toil…


Beating the Cost Crunch

We’ve all been through it and know how it goes…

Welcome to British Tap.  Please listen carefully as the following options have changed,

Subtext: You’ve not called us before so you wouldn’t know that and it makes no difference to you, and all we are doing is wasting your time whilst the call routing system finds somebody who might be able to answer your call

Please note that calls may be recorded for training and quality purposes

Subtext: Because we really don’t trust our agents or our customers for that matter and we need to be able to go back and find out what you said and then tell you that you were wrong and that you didn’t say what you know you said.

Please select from the following options.
Please dial 1 to buy a new widget, please dial 2 if you would like us to try and cross-sell you some insurance for the widget you bought from us last week, please dial 3 to report a fault with your widget

Your call is being held in a queue and will be answered shortly.
Our agents are busy answering other customers’ calls

Subtext: The people in front of you who are more important than you and were given a better phone number to call.

Your call is being held in a queue and will be answered shortly.
We value your call and would love to answer it as soon as we can

Subtext: but we have not staffed our call centre properly and things are getting a bit out of hand because we are operating on the cheap.

Your call is being held in a queue and (click)


कॉल करने के लिए धन्यवाद British Tap . मेरा नाम Nigel है . मैं आज की मदद से आप कैसे हो सकता है ?

Eh? Oh? My boiler is leaking.


Sorry, what did you say?



Yes, of course, that old cost-cutting gambit of the offshore call-centre.

Indeed, whilst the overseas call centre pendulum has been swinging back on-shore in the last year or so, now, with cost crunch following credit crunch, we can expect that trend to reverse somewhat.

Call centres, and the customer experience that goes with them, are not the only things to get crunched when belts are tightened; discretionary project spending is one of the first things to be reduced, with projects either being deferred or cancelled.  Whilst this is a very handy tap to close, by turning off spending on projects willy-nilly, as well as snuffing out boring, run-of-the-mill sustaining projects, genuinely innovative activities also usually get the chop.

Conceptually, the unfettered application of cost-cutting measures looks something like this…


… with good stuff getting damaged at the same time as cutting off all the bad, spendthrift behaviours.

In particular, undiscriminating simplistic cost-cutting can be quite short-sighted and have unforeseen effects down the line.  Indeed,  this process of cost-cutting can accelerate an overall competitive cycle of pain…


…where declining profitability is met by efforts to increase efficiency through cutting costs, implementing new technology or whatever, which drives greater competition, which leads to, oh dear, declining profitability, ad absurdum (see “The Banking Revolution: Salvation or Slaughter?”).

However, innovation is one of the primary decelerators of this cycle…


So the conundrum is how to go about reducing costs without killing the good stuff, thus…


…taking out cost while building competitive advantage through innovation and better customer experience.

The solution generally lies in being more analytical about the cost-cutting process rather than simple “slash and burn”, such as:

  • Careful prioritisation of projects, for example, choosing to favour genuine innovation efforts over the projects that just sustain the existing business;
  • Taking a system-level view, e.g., over the customer life cycle, and using joined-up thinking to ensure that simplistic, functional cost-cutting does not cut across and destroy customer experience, or, in the IT software development arena, taking the whole productivity equation into account (rather than focusing solely on daily rates);
  • Keeping a focus on profitability, rather than just the bottom-line, so that the overall financial health of the enterprise is improved
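On the “whole productivity equation” point in the list above: with some purely hypothetical numbers (both the rates and the productivity figures are invented for illustration), a lower daily rate can easily turn out dearer per unit of work actually delivered:

```python
# Hypothetical illustration only: the cheaper daily rate is not the
# cheaper option once relative productivity is taken into account.
def cost_per_unit(daily_rate, units_per_day):
    """Effective cost of one delivered unit of work."""
    return daily_rate / units_per_day

onshore = cost_per_unit(daily_rate=600.0, units_per_day=1.0)    # 600.0 per unit
offshore = cost_per_unit(daily_rate=250.0, units_per_day=0.35)  # ~714.3 per unit
print(onshore, offshore)
```

Less than half the daily rate, yet more expensive per delivered unit – which is exactly why focusing solely on rates is such a seductive trap.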

Meanwhile, back to the phone…


Благодарим ви за свикване на British Tap бойлер гореща линия. Казвам се Tony. Как мога да ви помогне да днес?

Oh, !$R£W”Q^%$£&^%$£&%$£”%^)*&%)(*^%$%^£!!!!

Middle Management: Muscle or Gristle?

Last year, I came across a couple of surveys about Middle Management that piqued my interest. The first said:

Middle managers emerge as a neglected, disillusioned and frustrated breed in research…a third say they are kept in the dark about company plans, almost two-thirds confess they are at a loss to understand their role   — jobs.telegraph, “Middle Managers are left in the dark”

And if you read the underlying report, you see that an astonishing 48% of middle managers do not think that communicating with their team is a key part of their job.

The second said:

…under performing middle managers are costing British business £220 billion a year in lost productivity.  Over half (54 per cent) of senior managers felt that middle managers were uncommitted to strategic goals, and 62 per cent criticized lack of management and leadership skills. — Hay Group,  “Alarming Performance Gap at Middle Management Level”

Whilst this is clearly a puff piece by Hay to sell all sorts of warm and fuzzy HR services, linking the two together, you can see why the senior managers and directors might hold those views.

Middle Management is possibly an endangered species these days, but does still seem to be hanging on in little niches, according to these surveys, despite hating the job and apparently failing in the eyes of their seniors – so you wonder why they stick it out?!

Wikipedia makes an amusingly naive attempt to define away the problem…

Middle management is a layer of management…whose primary job responsibility is to monitor activities of subordinates while reporting to upper management.  In pre-computer times < “What? Jurassic, maybe?”, dripping with sarcasm>, middle management would collect information from junior management and reassemble it for senior management.  With the advent of inexpensive PCs <“har, har”, choking on spittle>  this function has been taken over by e-business systems .  During the 1980s and 1990s thousands of middle managers were made redundant for this reason <“So simple?”>

…taking a Tom Peters-like knife to the whole layer, thus:

…with the backbone provided by those amazing “inexpensive PCs” and fantabulous “e-business systems”.  However, as a saving grace, the entry does at least refer to communication as a key job function.

I went through an epiphany on this topic some many years ago, when working as a development manager in a computer manufacturer.  I was sitting in a daily “War Room” session held during the torrid Beta trials of a piece of probably under-cooked software.  In the room were the luminaries of Technical and International Sales & Service divisions and assorted lackeys, acolytes, water carriers and coat holders.  In particular, on the Technical Division side was this management line:

  • The Technical Director
  • The Development Director
  • The Development Manager (Me)
  • The Project Manager

The Beta trials were displaying all the dysfunction of a classic “waterfall” software development project going to b*ggery, hampered also by a functionally aligned organisation, and all the attendant politics.  So we spent many a fractious morning in the cut and thrust of departmental politics, whilst attempting to alleviate the pain of the early Beta customers.

Outside that bun fight, the job of a middle manager was supposed to be to “put yourself about”, (be seen to) sniff out issues, especially the opposition’s dirty laundry, and inform on the organisation to the Directors in your line, in short – a communication role, pure and simple in concept, hellish in reality.

The War Room was, however, one shining light in the risk management firmament – and something that still features many years later in Agile development methods (e.g, as the daily stand-up).  The concept is cribbed directly from military usage and is all about shortening communication lines to improve responsiveness and to win battles.

And in this gladiatorial “circus”, whose job was mainly about communication?  Well, mine.

The fun started when we were discussing the approach to some issue and it came down to fixing some malfunctioning product feature, and the bullets started heading my way.

It was a frustrating, no-win situation:

  • I could, for example, just nod the question over to the Project Manager and be seen as weak, but then, why have a dog and bark myself?
  • I could have taken on the Project Manager’s role in the meetings to control the information flow, but that would have made a nonsense of the whole War Room, and would have been a recipe for being blamed for everything wrong with the project (which was woven into its very fabric);
  • or other strategies which were all equally flawed, within the oxymoronic constraints of the project and the organisation, and most vitally, defied sanity and common sense!

Then, ding, the light went on!  This job is pointless!

Moving back to the current day, and elaborating on the analogy of “organisation as anatomy”, you can start to think that there are, at the very simplest, two types of job:

  • useful, creative, purposeful roles that move stuff forward, onwards, upwards – like Muscle
  • other roles that are like the connective tissues, insulation, piping for insanitary fluids and other ugly bits that get left on the side of the plate of life, yes, Gristle

Visually, then the pure Middle Management communication role has to be seen in this light:


I made my decision on this years ago, but for anybody who is still uncertain, I offer this handy little decision-making 2×2 matrix:


Middle Managers’ Career Game Board

                          Want to be…
                        Gristle     Muscle
Treated as…  Gristle    Stay        Move!
             Muscle     Retire      Enjoy

20-20 Hindsight: who needs it?

I have recently been reading “Plundering the Public Sector” by David Craig and Richard Brooks, and now halfway through have been getting more and more irritated with the adversarial tone of the book, and its tendency to shower blame everywhere in unequal amounts.

UK Public Sector projects are usually particularly large (Connecting for Health is quoted as being the largest civilian IT project ever), and inevitably have all the challenges you might expect, and more of them after that.

When discussing the risk profile of projects, I usually use a 2D chart that expresses the two primary dimensions of Work Complexity and Business/Organisational Complexity, a framework drawn from my experience of programme management in large organisations.

The usual chart looks like this:


The Work Complexity dimension registers risks like complex technology, logistical scale, dynamic market environment, whereas the Organisation/Business risk dimension registers such factors as poor communication across fragmented, stove-piped structures and populations, divided loyalties, parochial viewpoints and so on, that arise in any large organisation (driven in the main by human nature in all its forms).

However, for monster public sector projects, I would recast it like this:


The black area represents the terra incognita where overall risk is extremely high due to the sheer size and people complexity, and other factors which have rarely been experienced before.

Blame-shifting and adversarial attitudes are not helpful in the context of programme management, especially when exercised with 20:20 hindsight.

However, agile development methods show the way things can be if they are done right. These methods are rooted in the early insights of people like Barry Boehm, a god of software engineering who brought us this…


and this…


The iterative, risk-managed approach embedded in these methods can also be applied to business projects as well as pure development.

Maybe the book will get better and more evenly balanced as I read further, and maybe even propose some solutions, but, for now, having incurred my ire, it has been relegated to the bottom of the pile in the throne room where