Tuesday 28 August 2018

Going Nowhere

Paying to have your exercise taken

Turning Service Management into a Cargo Cult

The case for Service Governance & VeriSM


I enjoyed a most excellent lunch this past weekend, during which I was chatting to a senior manager in a retail organisation. I was struck by her comments on the 'Service Management people'. I thought her words summed the problem up well. To paraphrase them:

"
I'm not sure what the point of the service management people is. When they come to see us, they either want to tell us what to do, or they want us to take lots of measurements of metrics that don't seem to make much sense. The app development people seem to understand things much better, they talk to the business, and understand what they want.

I have known this organisation, on and off, for a couple of decades, so this wasn't a surprise to me. I even taught a tailored ITIL Foundation course to a team from the organisation a few years back, to help out a friend who was consulting to them but didn't have an ITIL qualification. Even then, I was disappointed that they wanted to do everything on the cheap: those attending were mainly very junior, inexperienced employees, all of them in IT.
As far as I can see, the organisation should call it a day and close down its service management section. It is a tribute to the people in that section that they have survived so long, because the organisation has a habit of carrying out Stalinist purges, in the form of reorganisations, every two or three years, conducted with such sadistic secrecy and slowness that the whole organisation is paralysed for months, everybody gossiping about the cuts and hoping the axe will fall on another person or department.

More than that, nobody should try using service management there again for a long time. The whole idea has been poisoned, so it now seems that it is the thing itself, rather than poor execution, that is no good.

To have worked so hard, for so long, surviving these purges, should count for something, at least for the people themselves. However, the result of the decade (or more) of effort is nothing. They are a cargo cult, going through the motions as if the organisation had adopted service management when, in fact, as my lunchtime conversation demonstrates, the organisation, like many, may use service management techniques, but it has no understanding of service management at all.

They are like a person who pays somebody else to do their exercise for them. No matter how good the exerciser, and no matter how hard he works, the benefit is not going to accrue to the person who pays for it, but does no exercise himself.

The reason for the failure is simple: Trying to do Service Management bottom up does not work. It is deeply frustrating, difficult, and futile.

Service management is not a useful end in itself. It is only useful as a tool to help organisations produce value. It might be useful to have a group looking after some of the specifics, but service management is not carried out by one little team; it is carried out by the whole organisation, or not at all.

Unless the governing body of an organisation recognises what service management brings to the business, and decides to adopt it across the organisation, it is usually better not to try introducing it. Yes, you pilot a part of service management to make a business case to the board, but not more than that.

Service Governance and VeriSM recognise this, and are aimed at governing organisations through the service metaphor. They gain traction by using governance to set the policy for management restructuring of the positive sort, aimed specifically at those things required to produce organisational value.


Thursday 7 June 2018

Conjecture: Any set of rules rich enough to be useful can be gamed - The Internet of Things (IoT)

Rules are becoming very important. Robots rely on rules. Self-driving cars will rely on rules. Already people have been killed because the rules have been inadequate to deal with quotidian reality. The Internet of Things (IoT) is busy working on all sorts of rule-based entities that will become part of the fabric of our lives.

A lot of work is being carried out in robotics, artificial intelligence, and other areas to deal with the problems that exist when you create rule-based systems that interact with the real world, and human beings.

Some work has been done to deal with the problems of inconsistent programming logic. Ada, particularly its SPARK subset, was designed to allow formal proof, or verification, that programs do what they are intended to do.

The problem, though, is deeper than that. There are some mathematical and programming theories that have a bearing on rule-based systems and give some insight into how they will behave. Game theory allows some conclusions to be drawn about different agents making choices. Queuing theory allows some conclusions to be drawn about how long it might take for decisions to be made, or services to be delivered. And there is Prolog, a language designed to work with logical propositions, which allows rules to be expressed, and tested, at a logical level.

What there does not seem to be is a stand alone theory of rule-based systems.

As human beings, we are familiar with working with rules. We know that even very carefully written rules, such as legal statutes, are open to interpretation and can be 'gamed'.

'Gaming' rules means, in essence, taking a set of rules that are intended to fulfil one set of objectives, and finding a behaviour, or set of behaviours, that obey the rules, but accomplish a quite different set of objectives, often a contradictory set.

As a simple example, the rule might be that a help desk person (I'll not say 'agent', because that may be confused with robotic agents) must minimise the time spent on calls. The objective of this rule is that the organisation will serve as many callers as possible, as well as possible. It soon becomes apparent that, if a caller has a complex requirement that will take some time, it is possible to make it appear to fit within the rules by closing the call when it gets near the permitted maximum time, and opening a new one. This is inconvenient for the caller, and gives the organisation a distorted picture of how long calls actually take, but it allows the help desk person to comply with the rules.
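To make the example concrete, here is a minimal sketch in Python of how such a rule can be satisfied while its objective is defeated. The ten-minute limit, the 35-minute enquiry and the split_call helper are hypothetical illustrations, not taken from any real help desk system.

```python
# Hypothetical illustration: a "maximum handling time" rule that can be
# gamed by splitting one long enquiry into several short tickets.

MAX_CALL_MINUTES = 10  # assumed rule: no single call may exceed 10 minutes


def complies(call_minutes):
    """The rule as measured: every individual call is within the limit."""
    return all(m <= MAX_CALL_MINUTES for m in call_minutes)


def split_call(total_minutes, limit=MAX_CALL_MINUTES):
    """Game the rule: close the call just before the limit and reopen it."""
    calls = []
    remaining = total_minutes
    while remaining > 0:
        calls.append(min(limit, remaining))
        remaining -= calls[-1]
    return calls


# One genuinely complex enquiry needing 35 minutes of work.
honest = [35]             # breaks the rule, but serves the caller in one go
gamed = split_call(35)    # [10, 10, 10, 5] - every individual call "complies"

print(complies(honest))   # False
print(complies(gamed))    # True
print(sum(gamed))         # 35 - the real effort is unchanged, but the metric
                          # now reports four short, "compliant" calls
```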

It is quite difficult to put this into a formal language and show how the rule is being gamed.

So, in this article, I am proposing a conjecture:

"
Any set of rules rich enough to be useful can be gamed.
"

I believe it to be true, but realise that, as things stand, it cannot be proven, or even demonstrated, to be true. What we need is some sort of formal system that will (a toy sketch of how these definitions might look in code follows the list):

1. Define a 'set of rules'
2. Define 'rich enough'
3. Define 'useful'
4. Define 'gamed'
5. Allow theorems to be produced, so the above conjecture can be stated formally
6. All the above clearly require the use of fuzzy logic and, perhaps, modal logic and dialetheism as well; these would provide the basic set of tools on which the new rule-language would be based.
7. The aim would be that any rules so produced would be strictly provable, as Ada programs are provable.
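As a very rough illustration of what such definitions might look like, here is a toy sketch in Python. The types, the objective and the 'gamed' test are my own assumptions about how the terms could be formalised; a real treatment would need the fuzzy and modal machinery mentioned above.

```python
# Toy formalisation sketch (assumed, illustrative only): a rule set is a set
# of predicates over behaviours, and it is "gamed" when some behaviour
# satisfies every rule yet defeats the objective the rules were meant to serve.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Behaviour:
    """An abstract behaviour; in practice this would be far richer."""
    description: str
    calls: List[int]          # e.g. handling times of the calls it generates


Rule = Callable[[Behaviour], bool]       # a rule judges a behaviour
Objective = Callable[[Behaviour], bool]  # so does the underlying objective


def complies(b: Behaviour, rules: List[Rule]) -> bool:
    return all(rule(b) for rule in rules)


def gamed(b: Behaviour, rules: List[Rule], objective: Objective) -> bool:
    """The conjecture's sense of 'gamed': compliant, yet defeating the objective."""
    return complies(b, rules) and not objective(b)


# The help desk example again, in these terms.
rules = [lambda b: all(m <= 10 for m in b.calls)]   # max 10 minutes per call
objective = lambda b: len(b.calls) == 1             # serve the caller in one contact

honest = Behaviour("one 35-minute call", [35])
split = Behaviour("four short calls", [10, 10, 10, 5])

print(gamed(honest, rules, objective))   # False - it breaks the rule openly
print(gamed(split, rules, objective))    # True  - compliant, objective defeated
```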

All that is far too much work simply to confirm, or refute, my conjecture. However, if such a formal system did exist, it would be extremely useful for defining rules that satisfy real-world requirements, such as human safety, in a formal manner that could then be translated into a working Ada or Prolog program, which could in turn operate a self-driving car, or an IoT device, with a high degree of certainty that the rules will behave as required.

There are a number of existing notations and formalisms, such as graph theory, the Business Process Modeling Language (BPML), Ada, Prolog, and various ontology languages such as OWL, that could usefully be brought to bear on this problem.

I think it would be useful to get funding for a competition to settle this conjecture. That would provide an incentive for mathematicians, logicians, process engineers, robotics experts and others to take part. The competition would provide a loose framework along the lines above, and require those taking part to show how it could be tightened until it was unambiguous and strong enough to settle the conjecture.

The rule-sets so developed could then be tested against real-world problems. For example, take a self-driving car with a universal top-level rule (rules would need to have defined scope) that it must not hit people. The rule set could then be tested against:

- Someone kicking the car (would it count that as a 'fail'?)
- A person landing on the car from a hang-glider or paraglider
- A cyclist
- A pedestrian with metal crutches (if a sensor recognises metal as non-human)
- A wheelchair user
- A skateboarder
- A gorilla

And, of course, all of these under different terrain and lighting conditions. The question isn't really how good the sensors are, but how the rules interpret their results, so that none of these edge cases (gaming the system, in a sense, even if not intentionally) breaks the cardinal rules. This is just an example; many more test cases could be established. For the competition, something like a turtle world would do, because complex sensors aren't part of the puzzle - just the integration of the fuzzy logic from whatever sensors there are into high- and low-level rules, consistently and, in the formal sense above, 'usefully'.
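To make the turtle-world idea a little more concrete, here is a minimal sketch in Python of such a test harness. The 'metal means not a person' heuristic, the detections and the edge cases are all hypothetical assumptions invented for illustration; they are not drawn from any real vehicle or sensor stack.

```python
# Hypothetical turtle-world style harness: check a cardinal rule against
# edge cases that 'game' a naive interpretation of the sensor data.
from dataclasses import dataclass


@dataclass
class Detection:
    """What the (assumed) sensors report about an object near the car."""
    label: str
    contains_metal: bool
    moving_fast: bool


def naive_is_person(d: Detection) -> bool:
    # Naive interpretation rule: metal objects are treated as non-human.
    return not d.contains_metal


def decide_to_brake(d: Detection, is_person) -> bool:
    # Cardinal rule: never hit a person; brake whenever a person is detected.
    return is_person(d)


EDGE_CASES = [
    Detection("pedestrian", contains_metal=False, moving_fast=False),
    Detection("pedestrian on metal crutches", contains_metal=True, moving_fast=False),
    Detection("wheelchair user", contains_metal=True, moving_fast=False),
    Detection("cyclist", contains_metal=True, moving_fast=True),
    Detection("skateboarder", contains_metal=False, moving_fast=True),
]

for case in EDGE_CASES:
    if not decide_to_brake(case, naive_is_person):
        print(f"FAIL: cardinal rule violated for: {case.label}")
# The crutches, wheelchair and cyclist cases all defeat the naive rule, even
# though every individual rule was followed to the letter.
```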

Finally, an example to illustrate what 'useful' and 'gamed' might mean. The rules of chess can be written down quite simply, but on their own they don't enable you to produce a chess robot: they are not rich enough, so not 'useful' in the automation sense. Early chess-playing programs could be gamed quite easily. One method was to move your king forward, ideally to the other side of the board. This made the program behave erratically and much easier to beat. The reason was that it made decisions based partly on giving each square a static positional value; moving the king to where it was not expected to be upended those values, so, when moves against the king were evaluated, they were not scored appropriately. A more sophisticated rule-set would value squares relative to the actual position of the king(s), not statically, just as a human player would update tactics if the king moved. The question the conjecture poses is whether such improvements to the rule-set can ever prevent the rules being gamed.
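Here is a minimal sketch, in Python, of the difference between the two kinds of rule. The tiny coordinate scheme, the bonus values and the example squares are invented for illustration; real chess engines use far richer evaluation, but the contrast between a static table and a king-relative one is the point.

```python
# Illustrative only: why a static square-value table can be gamed by walking
# the king somewhere unexpected, and how a king-relative rule resists it.

# Files a-h are mapped to 0-7 and ranks 1-8 to 0-7.
EXPECTED_BLACK_KING = (4, 7)   # e8, where the static table assumes the king sits


def static_attack_value(square):
    """Static rule: attacking squares near e8 is always assumed valuable."""
    fx, fy = square
    kx, ky = EXPECTED_BLACK_KING
    distance = max(abs(fx - kx), abs(fy - ky))
    return max(0, 3 - distance)          # bonus fades with distance from e8


def relative_attack_value(square, actual_king):
    """King-relative rule: value attacks by distance to the king's real square."""
    fx, fy = square
    kx, ky = actual_king
    distance = max(abs(fx - kx), abs(fy - ky))
    return max(0, 3 - distance)


attack_square = (6, 2)          # a white piece eyeing g3
king_marched_forward = (6, 1)   # the black king has walked all the way to g2

print(static_attack_value(attack_square))                         # 0 - the static
# table sees no value in attacking g3, because it still 'thinks' the king is on e8
print(relative_attack_value(attack_square, king_marched_forward)) # 2 - the
# king-relative rule correctly rewards pressure on the king's actual position
```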

If you are reading this, and are aware of work being done in this specific field, please get in touch, or leave a comment to this article.

If you would be interested in contributing to research in this area, likewise.

Hashtags:

#Rule #Game #Theory #Logic #AI #Maths #Philosophy #GameTheory #RuleTheory #Robotics #Automation #Psychology #Behaviour #Perversity #Conjecture #Knowledge #Rules #Learning #Gaming #API #IoT #DeepLearning #DigitalMaking #DataScience #DigitalTransformation #Infosec #CyberSecurity #Ada #BPML #Prolog #OWL #Ontology #GraphTheory #Graphs #MachineLearning #KnowledgeManagement #Governance #ServiceGovernance #Safety #HealthAndSafety #Robot #SelfDriving #Car #FuzzyLogic


Sunday 8 May 2016

The gods are against me - confirmation bias and capacity management

Usually the feeling that the gods and the forces of nature habitually conspire against us is a product of confirmation bias - we forget all the times that woes come as single spies, because the times they come in battalions are so much more memorable.

It's important to be aware that this is not always the case. You are not paranoid when the bastards really are out to get you.

In particular, in many circumstances, maybe even in the case of a washing machine, the underlying problem can be one of capacity. Capacity problems are difficult to detect because they are intermittent at first, and then, finally and spectacularly, catastrophic.

There are, in fact, a few cognitive biases involved in producing such things as 'Murphy's Law' and 'Sod's Law'. We find things more important if they happen to us. We like to have a reason for things happening, and though the theory that the world is against us is an unlikely one, it is, at least, a theory, so we prefer it to accepting that mere happenstance usually accounts for coincidences.


We are also very poor at judging the probability of things happening. Often, what seems a very unlikely event is, when you consider the size of the population and the time over which it could happen, actually something that's almost certain to happen somewhere at least a few times a decade.

How can we then distinguish those events that signal a preventable catastrophe from those that are merely isolated events?

Unfortunately, the simple answer is that we can't. The reason that our brains are so inclined to so many fallacies is that we live in an uncertain world, and a collection of heuristics that works fairly well, most of the time, is worth having and using, even though it also leads us into such errors.

The more complicated answer is that events connected to one, or a small number of, related causes, themselves a consequence of a mismatch between demand and capacity, have some characteristics that allow you to spot them against the camouflage of background noise.

These are that capacity-related problems cause events that are:

- Intermittent
- Apparently unrelated, but often coincident with a specific time of day, week or month
- Progressive: strange things happen once or twice a week, then more often, once or twice a day
- Responsive to intervention: you may try to fix a symptom, and find the events go away for a while
- More serious over time: before the final catastrophe, you'll have one or two events more serious than usual

You'll notice that these characteristics fit a number of naturally occurring events, avalanches, earthquakes and volcanoes being examples. That's no accident: these events are also capacity-related. Stresses build up over time, with minor event cascades along the way (there are often a series of small earthquakes before a volcanic eruption, for example).

What can we do about this unpredictability?

When you see the relationship with natural events, you'll see what we actually do. Firstly, we need to anticipate where such a problem might occur, then see how serious it is (we're less concerned with volcanoes under the sea, far from any land, than volcanoes near towns, for example), and then put monitoring in place.

We need to design the monitoring carefully, to make sure that the metrics we use make sense, are connected with the likely capacity problem, and are measuring the system itself.

Then we need to measure the trends. Not just trends that are obviously leading to a catastrophe, but all trends. Then we need to correlate these trends with each other, project where they are tending towards, and find out what is causing the trends. Then we can put measures in place to reverse the trend, or, if that isn't possible, increase the capacity we have to deal with it, or, if that isn't possible, find a way to mitigate the risk of a meltdown.

Measuring trends is a more subtle matter than it might seem. It's often not the most obvious trend, in the main demand, that's the danger. Smaller deviations at periods of quiet demand, or on the shoulders of a demand peak, are often the warnings.

The analysis required to detect such off-peak trends isn't that difficult from a mathematical point of view, but it does mean that you need to design your thresholds in a more sophisticated way than a simple maximum or minimum based on a percentage of historical demand.
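As a minimal sketch of what 'more sophisticated than a single threshold' might mean, here is some Python that builds a separate baseline for each hour of the day from an earlier training window and then counts excursions in the quiet hours. The synthetic data, the choice of quiet hours and the two-sigma rule are all assumptions made purely for illustration.

```python
# Illustrative sketch: per-hour-of-day baselines instead of one global threshold,
# so small but growing deviations in quiet periods aren't drowned out by the peak.
from collections import defaultdict
from statistics import mean, pstdev

# Synthetic history of (day, hour, utilisation %): a slow leak grows at 03:00 only.
history = [(d, h, 40 + 30 * (9 <= h <= 17) + (2 if h == 3 else 0) * d)
           for d in range(30) for h in range(24)]

QUIET_HOURS = range(0, 6)   # assumed off-peak window we care about
TRAIN_DAYS = 20             # baselines built from the first 20 days only

by_hour = defaultdict(list)
for day, hour, util in history:
    if day < TRAIN_DAYS:
        by_hour[hour].append(util)

baseline = {h: (mean(vals), pstdev(vals)) for h, vals in by_hour.items()}

# Count recent quiet-hour samples that sit well above their own hour's baseline;
# a single global threshold set off the daytime peak would never notice them.
excursions_by_day = defaultdict(int)
for day, hour, util in history:
    if day >= TRAIN_DAYS and hour in QUIET_HOURS:
        mu, sigma = baseline[hour]
        if util > mu + 2 * max(sigma, 1.0):   # guard against zero spread
            excursions_by_day[day] += 1

print(sum(excursions_by_day.values()), "off-peak excursions in the last 10 days")
# The 03:00 trend is flagged here long before it threatens any peak-hour threshold.
```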


Tuesday 26 April 2016

Open Source Project Proposal: Ada Transport Level Security (TLS) module [Draft]

The Problem

The security of web-sites underpins much of the world's on-line economy. Breaches of it, and potential flaws in implementations of it, are a substantial risk to many organisations, in many countries.

There has already been one famous vulnerability, Heartbleed, found and repaired in the OpenSSL implementation, but there are many closed-source implementations that may still have similar, or more severe, vulnerabilities, or that may be compromised in other ways.

One of the reasons for these vulnerabilities has been the implementation of TLS in languages such as C, which is an inherently insecure language, and one in which it is difficult to prove, verify or correct a program.

The Proposal

To establish a team of Ada and security experts to produce a TLS solution, written in Ada, for servers in the first instance. This solution would provide an API that could be used with, for example, Apache.

Once this solution had been tested, proved and deployed successfully, it would be extended to the client side, so that browsers such as Firefox could use it.

Funding

The proposal depends on the team being paid for the work, and on enhancements also being paid for.

Ideally this would first come from a grant, or grants. Bodies that might wish to provide funds for such grants could include the OpenSSL project (www.openssl.org), the EU (https://www.enisa.europa.eu), the D5 group of digital governments (UK, South Korea, Estonia, Israel and New Zealand, https://www.gov.uk/government/topical-events/d5-london-2014-leading-digital-governments), the British Bankers' Association (BBA), and many others.

Long-term funding would come from income. The product would be dual-licensed: free and open source to individuals and to open source projects, such as Mozilla, but commercially licensed to organisations such as Apple.

The Requirements

The project, to be a success, must comply with these requirements:

  • Satisfy TLS 1.2 and 1.3
  • Be designed to provide general transport layer security
  • Be compatible with existing TLS APIs
  • Ensure a highly secure design
  • Establish a method to verify that a server is running a particular version
  • Ensure the code is easy to maintain
  • Use Ada not just as the language, but as an example of good, secure, reliable and fast open source Ada

Provisional timeline

Funding applications: May-August 2016
Team Recruitment: September 2016
Design: September-October 2016
Coding: November 2016 - January 2017
Testing: February-March 2017
Beta with customers: April-May 2017
Full Release: September 2017


Next Steps

Please comment on this blog if you have any suggestions for improvements to this draft, or write to peter.brooks@service-governance.org


Wednesday 16 March 2016

Good corporate citizenship - and 'The Myth of Maximizing Shareholder Value' - and Service Governance

Here's an important article on governance, 'The Myth of Maximizing Shareholder Value'. Unfortunately the page doesn't allow replies, so I've put my points in this short blog entry.

Governance thinking, even in the US, is moving. When we provide consultancy to organisations, we need to be aware of this shift and, as discussed in 'Collaborative Consultancy', be able to make judgements about our ethical accountability to the organisation, its stakeholders, and ourselves.

Some of the ideas, being based on US law, are not directly applicable everywhere, but the overall argument is, and it's crucial to the future.

The article stops short of a full description of the solution - which is fair enough, as it's seeking to illustrate the problem.

Outside the US, governance thinking has understood this for some time. The law in the UK, South Africa, and other places that have accepted the thinking found in the Cadbury Report and the King Reports, is that corporations are required to be good corporate citizens. Their duty is indeed not to maximise profit for shareholders; rather, their duty is to deliver value to all their stakeholders (and, of course, shareholders are stakeholders, and returns are important to them).

Corporate governance that requires corporations to deliver value to their stakeholders is a powerful principle, particularly when enforced through a 'comply or explain' method (rather than by ticking boxes on a pro-forma 'have you complied with X' sheet).

What it means is that corporations have to understand who their stakeholders are - the inhabitants of Bhopal were stakeholders in Union Carbide, as they found out, most horribly. If Union Carbide had recognised that they were stakeholders, and that it had a corporate duty to make sure there was no negligence at that site that could lead to such a disaster, then history would have been very different.

They then have to understand how their vision, mission and charter can deliver value appropriately to all their stakeholders.

Part of the difficulty, particularly for those who have only been aware of profit as a value, is understanding what stakeholder value is, and how to govern it.

A method, Service Governance, using existing best practice frameworks as a basis, exists to help identify stakeholder value, and govern that value, using the paradigm of a 'service' and governing the organisation through a service portfolio, optimising the value / cost ratio, for stakeholder value.

There's more on Service Governance here:

Adopting Service Governance - Governing Portfolio Value for Sound Corporate Citizenship

There is an example of Service Governance working, a short video, on the web-site www.service-governance.org

Adopting Service Governance - a short introduction (Video)

There are also blogs, discussing service governance here:

AXELOS Blog: Making Service Governance Work - The ITIL Advantage

Corporate Governance issues & Service Governance

Organisational value through Service Governance




Wednesday 24 February 2016

Centaurs: Organisational Change Management and horse riding.



Part of the problem with organisational change is perception. People see it as something you do, like driving a car, or riding a bicycle. It isn't, though, like that, it's more like riding a horse. 

If the horse wants to make a dash for home, or throw you into the ditch, that's what it'll do. 

You have to help the horse see things your way, and agree to go where you want it to go, and you have to be aware that horses get tired, and need feeding, because, if you don't feed them, rest them, and give them time to play, they become sullen, resentful, uncooperative and, eventually, die.

It's also best not to walk behind a horse - with organisations it isn't always obvious where the behind is. [though you might guess]

If you wish to be good at organisational change, you need the equivalent of riding lessons - and, if you've learned to ride a horse, you'll know that riding lessons involve lots, and lots of practice.

You also learn that you can't ride a horse on autopilot. You have to be on the horse and aware of its every twitch and mood. You have to be fully engaged with the horse - with top riders, the horse and the rider seem to be one creature, with one mind.

Some believe that this is where the myth of the centaur came from: horses ridden so well that horse and rider looked like one creature, part horse and part man.

That's the aim. To be like that, when you work to change an organisation.


Sunday 15 February 2015

Dishonesty, sharp practice and good Corporate Citizenship

One important part of service governance, and of most modern thinking about governance under 'comply or explain', is that a company should work to be a good 'corporate citizen'.

Is sharp practice against good corporate citizenship?

Clearly much 'sharp practice' is not illegal, but legality is not the only test of being a good corporate citizen.

Think of this example. You've probably encountered it. A supplier of a fairly intangible thing such as air time or data download sells it in bundles.

So far so good. There's nothing wrong with bundling things up and selling them in bundles for convenience. It'd be really painful if spaghetti wasn't sold in bundles.

But you pay for spaghetti in arrears - you get the bundle of spaghetti, then you pay for it.

With one of these intangibles you have to pay in advance.

Also, spaghetti takes a long time, several years, to go off, so, if you buy too much, it just takes longer to use it.

Bundles of things like air time, though, never have to 'go off'. In fact, they can't 'go off'. If you had bought a minute of air time in 1995, it would have been a lot more expensive than it is now, but there's no reason why the company you bought it from, if it still existed, shouldn't honour your purchase and give you the minute today.

But these bundles are given an artificial 'expiry date' after which they won't be honoured - often as short a time as a month.

If you run out of a bundle, then you can buy another one.

This is where the problem starts. Not all bundles are equal. Sometimes there's a minimum bundle size and small bundles cost more than big ones. Sometimes there's a much more expensive flat rate that you have to pay if your bundle runs out.

What is going on here is the basis of the sharp practice. Companies that sell like this are not trying to sell the commodity fairly. They actually want to cheat their customers out of either money or the commodity (which is money). It works like this:

1. If you're a small user, you buy the smallest bundle, but you don't use all of it, so it 'expires'. That is, the company steals the remainder from you. That's theft in the common-law sense, but because you signed a contract agreeing that they could 'expire' it on you, it isn't counted as theft: you signed up to be robbed.

2. If you are a large user, you buy a big bundle to get the discount, but, if you go over that usage, because you have to buy in advance, you have to pay the flat rate - which is often many times higher than the lower rate.

The selling company is actually gambling with you. It is hoping that either you won't consume all of what you've bought, so they can steal it, or that you will consume more than you estimate, so they can charge you a punitive rate.

The companies would say, as a lawyer would, that this is not dishonest, because the contract allows for exactly this form of cheating.

It is, though, sharp practice. Instead of making money by selling the commodity, the company is making money from our inability to predict our consumption accurately.

Is it fair to penalise people for having uneven patterns of consumption?

Should a company that's a good corporate citizen be ripping off its customers because they have difficulty predicting their usage?

The argument that companies bring to continue the practice is that 'everybody else does it'. Is that a good argument against acting responsibly towards your customers?

Let's say that one company broke ranks and said: you can buy the commodity from us for one single price, say X per unit. It doesn't matter whether you consume 10 units or 1,000 units, it's the same price.

That price would be easy for it to work out. It would simply take the total income today and divide it by the total units.
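A minimal sketch of that arithmetic in Python. All the prices, bundle sizes and usage figures below are invented purely for illustration; the point is only to show how the single price is derived and how the two customers fare under each scheme.

```python
# Hypothetical numbers only: a bundle scheme (prepaid, expiring, with a punitive
# out-of-bundle rate) compared with one single price worked out exactly as the
# article describes: total income today divided by total units consumed.

BUNDLE_UNITS = 100      # units in one bundle
BUNDLE_PRICE = 50.0     # paid in advance; unused units 'expire'
PUNITIVE_RATE = 2.0     # the expensive out-of-bundle rate


def bundle_cost(units_used):
    cost = BUNDLE_PRICE
    if units_used > BUNDLE_UNITS:
        cost += (units_used - BUNDLE_UNITS) * PUNITIVE_RATE
    return cost


usage = {"small user": 20, "large user": 160}   # units actually consumed

total_income = sum(bundle_cost(u) for u in usage.values())   # 50 + 170 = 220
total_units = sum(usage.values())                            # 180
single_price = total_income / total_units                    # ~1.22 per unit

print(f"single price: {single_price:.2f} per unit (punitive rate was {PUNITIVE_RATE:.2f})")
for name, used in usage.items():
    print(name, "bundles:", bundle_cost(used), "single price:", round(single_price * used, 2))

# small user: 50 under bundles (80 unused units 'expired') versus about 24.4
# large user: 170 under bundles versus about 195.6 in this particular month, but
# the known 1.22 rate removes the gamble of being pushed onto the 2.00 punitive rate
```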

Would that company do better or worse?

It would attract more small users - they'd have less to pay.

Would it attract more big users? I think it would. As a big user, you'd prefer to pay a known rate, perhaps a bit higher than the apparently tempting bundle price, but far less than the punitive out-of-bundle rate.

If you, as a customer, had a choice between an honest flat-rate company, and one that used sharp practice against its customers, wouldn't you choose the good corporate citizen yourself?

Is there any big company out there prepared to try this and make the results known?

If so, please try it. Maybe it could usher in a new era of good corporate citizenship.

If not - wouldn't that let us know just how much is made from the penalising of customers for not predicting the future? Might that not then be a good reason to press for legislation to make this rip-off illegal?