
Things: how we think of them, what that means for building systems

I've been asked to write down something about a topic I covered in a talk at the Manchester Java Community a couple of years ago. This is that. It goes quickly and leaves much out, just as a short talk does. Probably I've got some of it wrong. Feel free to invoke Cunningham's Law in comments on Twitter.

 

My central point is that there is something about the way folks with a traditional, Western, technocratic, STEM–style education, very much including software developers, are trained to think which is a poor fit for the way that people without that training tend to think. This perhaps explains some of the difficulties that software developers have in communicating with users, or rather, the difficulties that both groups can have in coming to a shared understanding about what is to be built, and why. But this same mismatch, I think, also points towards some solutions, and explains both why certain tools and techniques seem very effective at helping form that shared understanding and why many software folks are unhappy with them.


Things

Modern Physics—that is, the Physics of almost exactly the last hundred years, as I write in 2018—has, loosely speaking, come to view the fundamental properties of the universe and the stuff in it in terms of fields and processes, distributed and dynamic. But that's not how it looks to us, day–to–day. We see, and touch, things. And we organise our understanding of and thoughts about things with concepts, or categories. How do we do that?


How we think we should think about them

This has been a topic of discussion since at least as long ago as the time of Plato [c. 425 BCE—c. 348 BCE] who, in his dialogue Politikós, “The Statesman”, describes a process of taking larger groups of things and successively finding some rule, based on similarity and dissimilarity, to divide them into two smaller, and ideally roughly equal–sized, groups, and so to refine the groups and eventually determine the true nature of things and the right way to deal with them. It is in this context that he notoriously finds that humankind are, amongst all animals, the featherless bipeds. From which he appears to think we should derive some political wisdom. But Plato thought that actual things in the world are but pale imitations of their corresponding perfect "form", to which we have no direct access. His student Aristotle [384 BCE—322 BCE] considered that all knowledge begins with perception and may be checked empirically, leading Thomas Aquinas centuries later to coin the so–called Peripatetic Axiom: "Nothing is in the intellect that was not first in the senses". Aristotle taught his students in the walkways, the peripatoi, of the Lyceum, that is “wolf temple”, in Athens.


And Aristotle decided that our understanding is based on kinds, or categories, and in his book Kategoriai—meaning literally “that-which-is-said-in-public-assembly”, or perhaps, “public knowledge”—he says what the ten most general categories are [with my gloss]:

  • [primary] substance [ie actual particular things or kinds not further divisible]

  • [absolute] quantity

  • qualification [ie a descriptor]

  • relation [ie a comparison to something else]

  • location in space

  • instant in, or period of, time

  • disposition [being in a position or state as a consequence of an action or process]

  • having [something in some relation]

  • doing [an action]

  • being affected [by some action]

What is categorised are all the things which can be the predicate—which via Latin has the same roots as does category: a public statement—of a proposition, telling us something about the subject. Aristotle refines Plato's idea about distinguishing between parts of sets of things and gives us the mechanism of the genus, a large group of things, and the differentia which allows a smaller group of things, the species, to be distinguished. A small species will be part of a larger genus, which is itself a smaller species of another, yet larger, genus, until you get back to those ten basic categories, I suppose. And what is true of a particular genus is true of all its species, giving an inheritance of characteristics: Brownie is a horse, is a mammal, is an animal... What can be predicated of a species is more than its definition, but its definition is made of some of those things which can be predicated of it. Other things which can be predicated of a species are its peculiar properties, which are not true of any other species, and its accidental properties which may be true of other species. So, a species has:

  • a genus, a larger and more general grouping it is part of, and

  • a definition, which differentiates it from other species in the genus, and

  • some peculiar properties, which no other species has, but which are not part of the definition, and

  • other accidental properties shared with other species.

It is easy to see in this the structure, as built up by Mediaeval thinkers like Aquinas, of classical "natural philosophy" and later "science". Consider zoology. You are a human, your species is Homo sapiens, of the genus Homo. In zoology, both a species and a genus are taxons, and taxons may be organised into a hierarchy, a taxonomy. Actually, several. Here is one, based on certain shared characteristics:

  • your species [in the zoölogical sense] is Homo sapiens

  • your genus [ditto] is Homo, a part of the “tribe” taxon Hominini (including chimps)

  • Hominini is a part of subfamily Homininae (including gorillas)

  • Homininae is a part of family Hominidae (including orangutans)

  • Hominidae is a part of suborder Haplorrhini (including eg, tarsiers and marmosets)

  • Haplorrhini is a part of order Primates (including, eg lemurs and lorises)

  • Primates is a part of superorder Euarchontoglires (including, eg rats and rabbits)

  • Euarchontoglires is a part of subclass Theria (including, eg koalas)

  • Theria is a part of class Mammalia (including, eg echidnas)

  • Mammalia is a part of subphylum Vertebrata (including, eg sharks, frogs and emus)

  • Vertebrata is a part of phylum Chordata (including, eg sea squirts)

  • Chordata is a part of kingdom Animalia (including, eg spiders, jellyfish and octopuses)

  • Animalia is a part of domain Eukarya (including eg trees, mushrooms, and protozoans).

  • Domain Eukarya is a part of Cellular Life (including bacteria and archaea)

The Eukarya, by the way, are a minority of the species of cellular life known on Earth. Better even than that, eukaryotic cells—your Homo sapiens species cells—are a small minority in number, although not of volume or mass, of all the cells inside that dynamic configuration of matter which you mistakenly think of as “your body”.


From the time of Plato and Aristotle until really quite recently, this sort of thing was, more or less implicitly, considered to be just true, and to reveal the seams by which nature is naturally divided up. There is some justification for thinking this way. For example, all of the Haplorrhini—which means "single-" or "simple-nosed", considered "dry"—have, in distinction from other Primates, lost the ability to make their own vitamin C, but gained a dry, flexible, expressive bit of anatomy between nostril and lip—compare yours to the "wet" nose of a nearby cat, dog, horse, or, amongst primates, lemur—and a big brain to give it something to express. The taxonomy is not driven by capricious, whimsical distinctions, although the distinctions do change and some are recent: Homininae was split into Hominini and Gorillini only in the 1980s, life in general was split into Archaea, Bacteria and Eukarya only in the 1990s, and the matter remains unsettled. Massive revisions of traditional taxonomies, which dated back to Linnaeus' work in the 18th century, came about when genetic evidence became available to show with a high degree of confidence which species were descended from which others, and therefore what their shared ancestors were. Linnaeus, and the botanists and zoologists following him, had used gross anatomy to start with, and then more detailed morphological criteria, and refined from there. They weren't wrong, that scheme is what it is and does what it does, but there were oddities and the classification was less useful than it might have been. In the 1950s the new field of cladistics introduced new ways to find taxa based on hypotheses about most recent common evolutionary ancestors, which genetics has since been able to confirm. This was hugely more useful, but not without controversy. These are vast and sophisticated mechanisms for organising our thoughts about the (living things of the) world. They aren't finished, and they still aren't wrong, but they also aren't what people naturally do.


How we think we should think, generally

People with a Western, scientific education are trained to think roughly this way, unless pushed quite hard into something else:

  • the world is made of things, and

  • each thing is of one kind, and

  • each kind has one definition made of individually necessary and jointly sufficient conditions for a thing to be a member of the kind

  • these conditions are made out of true statements about the world

There's a fairly clear route back to Aristotle from this. And it is also fairly clearly baked into IT, from relational database table columns to the member variables in a class definition, via XML Schemas and validating web forms in CRUD workflows. This is very convenient. Aristotle was trying to pin down and systematise both what there is, and how to think about it. He liked crisp, unambiguous, non–overlapping, precise, comprehensive rules. Aristotle would have made an excellent Enterprise Data Architect.
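To see how deeply this is baked in, here is a small sketch in Java; the class, its fields and its rules are invented for illustration, not taken from any real system. It shows a kind defined in the Aristotelian style: one definition, made of individually necessary and jointly sufficient conditions, so that a thing either satisfies them all and is a Customer, or is not one at all.

    import java.time.LocalDate;

    // A hypothetical "Customer" in the Aristotelian style: the class is the kind,
    // the fields are the definition, and the constructor enforces individually
    // necessary and jointly sufficient conditions for membership of the kind.
    public final class Customer {

        private final String name;            // necessary: every Customer has a name
        private final String accountId;       // necessary: every Customer has exactly one account
        private final LocalDate dateOfBirth;  // necessary: every Customer is a natural person

        public Customer(String name, String accountId, LocalDate dateOfBirth) {
            // Jointly sufficient: satisfy every condition and you are a Customer;
            // fail any one of them and you are not, to any degree at all.
            if (name == null || name.isBlank()) {
                throw new IllegalArgumentException("a Customer must have a name");
            }
            if (accountId == null || !accountId.matches("[A-Z]{2}\\d{8}")) {
                throw new IllegalArgumentException("a Customer must have a well-formed account id");
            }
            if (dateOfBirth == null || dateOfBirth.isAfter(LocalDate.now())) {
                throw new IllegalArgumentException("a Customer must have a plausible date of birth");
            }
            this.name = name;
            this.accountId = accountId;
            this.dateOfBirth = dateOfBirth;
        }

        public String name() { return name; }
        public String accountId() { return accountId; }
        public LocalDate dateOfBirth() { return dateOfBirth; }
    }

Note how hard the edges already are: a thing either is a Customer, completely, or it is not one at all, and every Customer is one in exactly the same way. We will come back to why that is a problem.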


However, this is not, it turns out, how people who aren't trained to do it generally think about the world. What about just thinking?


Aristotle's logic is based on analysis of the syllogism, arguments like:

  1. All men are mortal, and

  2. Socrates is a man, therefore

  3. Socrates is mortal

You can understand this as working by species: Socrates' species is "man", species "man" has the quality of mortality, being distinguished by that, I guess, both from the Gods and from things which never are alive, so therefore Socrates has the quality of mortality. The species "man" is the common term between the two premises, appearing as the subject in (1) and the predicate in (2), which is eliminated, leading to the conclusion (3). The word “syllogism” comes via Latin from a Greek expression meaning something like "the very best kind of talking sense", Aristotle not being a shy man. The analysis—another of Aristotle's coinages, meaning "letting loose"—of syllogisms formed the apparently unshakable basis of logic, in Western thought at least, until well into the 19th century CE; George Boole invented his algebra, in 1847, in part to re–express Aristotle's logic. It was downhill for classical logic from there, but he didn't know that. The attractive thing about this kind of deductive reasoning, to a certain kind of highly trained tidy mind, is that if the argument is both valid, having the correct logical form, and sound, being based on true [whatever that means] statements about the world, then no reasonable person can honestly disagree with it. It compels agreement, and cannot admit of error. That there are useful ideas and correct arguments that it cannot possibly express was disturbing news for the philosophers and mathematicians of the late 19th and early 20th centuries. Something that contemporary "Logic Bros" would do well to bear in mind.
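For the programmer-minded, the mechanism can be rendered as a toy in Java; the sets and their contents are my own illustration, not anything from Aristotle. Premise (1) becomes the containment of the species within a larger group, premise (2) places the individual in the species, and the common term then drops out of the conclusion.

    import java.util.HashSet;
    import java.util.Set;

    // A toy rendering of the syllogism as species membership, invented for illustration.
    public class Syllogism {
        public static void main(String[] args) {
            Set<String> men = Set.of("Socrates", "Plato", "Aristotle");  // premise (2): Socrates is a man
            Set<String> mortals = new HashSet<>(men);                    // premise (1): all men are mortal
            mortals.add("Brownie the horse");                            // mortals need not be men

            // conclusion (3): Socrates is mortal; no mention of "man" remains
            System.out.println(mortals.contains("Socrates"));            // prints: true
        }
    }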


However, in day–to–day life this is not generally how people reason, and many very reasonable thoughts cannot be expressed as syllogisms.


So, what do people do? They do inference over examples. The exact details of how are still under investigation. But that's what they do. It's messy and unreliable, it lacks certainty, it does not compel agreement, it allows error, and is enormously effective.


Examples and Exemplars

However much recent developments in Physics and Chemistry tell us that it is illusion, we really do experience the world in terms of things. We experience our own body as a largely fixed thing, with a defined boundary. We experience other things, which our body-thing cannot occupy the same space as at the same time. And things are more or less similar to each other, as we experience them, and we build categories, and concepts of categories, out of those similarities. How?


Beginning in the 1970s, psychologists and what were beginning to be called “Cognitive Scientists” began to do experimental work to try to understand how people really organise the objects of their experience into categories, and how they really form concepts attached to those categories. In 1978, Eleanor Rosch wrote Principles of Categorisation, summarising work she had begun publishing in 1973, on the Prototype Model of concepts. In this model, a concept is held in the mind by memory of a prototype of the category the concept is of. This prototype might not actually exist as an instance; it might be an idea of what a most typical member of a category would be like, bearing properties that act as a summary. Or it might be an actual thing. When a new thing, a new stimulus, is perceived, it is associated with the category whose prototype it most closely resembles. There are various ways this is theorised to happen, either through features of the prototype shared more or less strongly by the other instances in the category, or by a network of relationships between the instances, somehow centred or anchored on the prototype. An alternative is the Exemplar Model, also introduced in the late 1970s, by Medin and colleagues, in which a small, perhaps singular, collection of actual instances of a category act as the most central, or anchoring, exemplars. Experimental evidence has been variously strong for both models over time, and it is not clear which will end up being the better theory. I'm no expert, but currently it seems that prototype–based models are the best confirmed. Which is not to say that exemplars play no part. What is clear is that experiments done to test how it is that people naturally form categories to organise their understanding of the world show that this primarily works by constructing categories “bottom up” from examples. The experimental evidence shows that people do not naturally create categories top down from definitions.
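As a very rough sketch of the prototype idea, and nothing at all like the real psychological models, here is a toy in Java; the categories, the features and the scoring are all invented for illustration. A new stimulus is assigned to the category whose prototype, here just a bag of typical features, it shares the most features with.

    import java.util.Map;
    import java.util.Set;

    // A deliberately crude illustration of prototype-based categorisation:
    // each category is summarised by a prototype, and a new stimulus joins the
    // category whose prototype it most resembles. Real models are weighted,
    // graded and far subtler; this only shows the "bottom up from resemblance" shape.
    public class PrototypeToy {

        static final Map<String, Set<String>> PROTOTYPES = Map.of(
            "bird", Set.of("feathers", "beak", "flies", "sings", "small"),
            "mammal", Set.of("fur", "four-legs", "live-young", "large")
        );

        static String categorise(Set<String> stimulus) {
            String best = null;
            int bestScore = -1;
            for (Map.Entry<String, Set<String>> entry : PROTOTYPES.entrySet()) {
                // similarity = number of shared features; crude, but enough for the sketch
                int score = (int) entry.getValue().stream().filter(stimulus::contains).count();
                if (score > bestScore) {
                    bestScore = score;
                    best = entry.getKey();
                }
            }
            return best;
        }

        public static void main(String[] args) {
            // a pigeon resembles the bird prototype strongly...
            System.out.println(categorise(Set.of("feathers", "beak", "flies", "small")));  // bird
            // ...a penguin less strongly, but still more than it resembles the mammal prototype
            System.out.println(categorise(Set.of("feathers", "beak", "swims", "large")));  // bird
        }
    }

Notice that membership comes in degrees here: the penguin is a weaker match to the bird prototype than the pigeon is. That graded structure is exactly what the Aristotelian picture cannot express, and it matters for the bird example below.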


Imagine that you had to explain to someone new to the Earth what a “bird” is. You could start by explaining that birds are the living descendants of the Theropod dinosaurs, with three–toed limbs and hollow bones, whose teeth have fused into beaks, and so on and so on, or you could show them a bird. But what bird, to begin with? What bird could serve as a prototype, or exemplar, of the category of birds?

Southern Cassowary
A cassowary?

Couldn't be much more obviously a dinosaur, but maybe not very useful for us. How about…

Emperor Penguin
A penguin?

Also not great, for different reasons. Maybe…

Feral Pigeon
A pigeon?

That seems more like it. All are birds, by the definition, but are very differently useful in first getting to grips with the idea of “bird–iness” if you aren't already familiar with birds.


This is not to say that no one ever uses definitions and rules, of course they do. Particularly when learning a new thing. But even then, over time the importance of rules fades, and prototype–based, and perhaps exemplar–based, mechanisms seem to take over. I have come to the view that this goes some way towards explaining some of the difficulties that arise in getting to a good shared understanding between IT folks and the people who will use the systems we build.


The crux of the problem

How can IT folks better work in alignment with this apparent fact about people?


Here are some intrinsic problems with Aristotle's style of category. Such categories are:

  • hard–edged: a thing is in the category completely or not at all

  • unstructured: all things in the category are in the category to the same degree, in the same way, until and unless you use a differentia to distinguish out a new category

  • universal: there's exactly one set of categories that everyone is assumed to agree on—or at least, that everyone will agree on once the categories are shown to them

But, in the messy, ad–hoc, somewhat arbitrary human world of doing business, the concepts that we have to deal with are very often not like this.


Consider the largest scale. Notoriously, many banks and similar big IT–bound businesses occasionally launch an attempt to build One Data Model to Rule Them All, with exactly one Account entity, exactly one Customer entity, exactly one Calendar, exactly one…representation of each kind of concept the business deals with, used uniformly by every system in every division and department. Aristotle would indeed fit into such an endeavour seamlessly. They do not succeed. What happens is that, as it turns out, different parts of the same bank—or whatever—have, for very good reasons, very different ideas about what a Customer is, how they interact with the bank and what information to record. The single category, concept and definition of Customer then grows all kinds of optional accretions, to allow it to serve all parts of the Bank equally badly. This is because the concept, and category, of Customer that each division uses is actually quite different, for very good reasons. And the Bank–wide category, or concept, of “Customer” has a rich internal structure which the Aristotelian model of category cannot reflect.
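Here is a sketch of the shape of that problem, with entirely invented names and fields: each division's own idea of a Customer is small and sensible, and the enterprise-wide one ends up as the union of everything, mostly optional.

    import java.time.LocalDate;
    import java.util.Optional;

    // Invented for illustration: two divisions' perfectly reasonable,
    // quite different, ideas of a "Customer"...
    record RetailCustomer(String name, LocalDate dateOfBirth, String homeBranch) {}
    record MarketsCustomer(String legalEntityId, String jurisdiction, String creditRating) {}

    // ...and the One Model to Rule Them All, the union of everyone's needs,
    // where almost everything has to be optional so that it can serve every
    // division equally badly.
    record EnterpriseCustomer(
            Optional<String> name,
            Optional<LocalDate> dateOfBirth,
            Optional<String> homeBranch,
            Optional<String> legalEntityId,
            Optional<String> jurisdiction,
            Optional<String> creditRating) {}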


And consider the smallest scale. A field is needed in a form, perhaps a web form. A common approach is to ask a domain expert of some sort what the “definition” of the field is, what “validation rules” should apply to it. And domain experts, smart, experienced, cogent people, often struggle to express such things in terms crisp enough for a programmer to turn into code.
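What the programmer is asking for, in the end, is something like this; the field and its format are made up for illustration. It is a crisp, hard-edged predicate, true or false of any input, and a concept built up over years from examples resists being flattened into one.

    import java.util.function.Predicate;

    // An invented example of the kind of rule a programmer needs in order to
    // "validate" a form field: unambiguously true or false, no shades of grey.
    public class OrderReferenceRule {

        static final Predicate<String> VALID =
                s -> s != null && s.matches("ORD-\\d{4}-\\d{6}");

        public static void main(String[] args) {
            System.out.println(VALID.test("ORD-2018-000123"));  // true
            System.out.println(VALID.test("order 123"));        // false, though a human might well accept it
        }
    }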


Examples

So, rules and definitions for membership of hard–edged categories, while useful sometimes and absolutely baked into our tools for building systems, might not be the best vehicles for conversations about what our users want systems to do for them. We must always remember that building an IT system, one that people are paying for, anyway, is not an intellectual adventure to discover the truth about the domain, but is about building a tool that's useful enough for enough users enough of the time. If the cognitive scientists are right about people's natural tendency to organise their understanding of their world in terms of prototypes and exemplars holding a central position amongst the variously strong members of fuzzy–edged, overlapping categories, then IT professionals should talk to users that way.


This means asking for examples. The examples which come first to a user's mind are probably the central examples that we would want to use as exemplars or (the foundation of) prototypes. These can be examples of things, or of actions. By collecting and recording many examples we can begin to build a shared understanding of what is needed, and use this to figure out what to build.


Through tooling, these examples can become automatically checked examples. These are not tests, as such, although for historical reasons the tools used to run them often use that term. Nevertheless, people skilled in testing—that is, genuine exploratory testing—are often very good at extracting useful examples from users. Don't fall in love with the tools, though. The thinking and the communication are the valuable part.
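To make the idea concrete, here is one common shape such checked examples can take, sketched as a JUnit 5 parameterized test rather than as one of the document-instrumenting tools mentioned below; the delivery-charge domain, the figures and the DeliveryCharges class are all invented for illustration. The rows are the valuable artefact, collected in conversation with users; the runner just keeps them honest.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;

    // Checked examples, written down more or less as a user might state them.
    class DeliveryChargeExamples {

        @ParameterizedTest(name = "an order of £{0} to zone {1} costs £{2} to deliver")
        @CsvSource({
            // orderValue, zone, expectedCharge; figures invented for illustration
            "10.00, LOCAL,    3.50",
            "10.00, NATIONAL, 5.00",
            "75.00, LOCAL,    0.00",   // free local delivery over £50, say
            "75.00, NATIONAL, 5.00"
        })
        void deliveryChargeMatchesTheAgreedExamples(double orderValue, String zone, double expected) {
            assertEquals(expected, DeliveryCharges.chargeFor(orderValue, zone), 0.001);
        }
    }

    // A minimal implementation induced from the examples above, also invented.
    class DeliveryCharges {
        static double chargeFor(double orderValue, String zone) {
            if ("LOCAL".equals(zone)) {
                return orderValue >= 50.00 ? 0.00 : 3.50;  // free local delivery over £50
            }
            return 5.00;                                   // flat national rate
        }
    }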


Datapoints with a simple linear regression and an over-fitted polynomial
The blue curve fits the datapoints exactly, but is a terrible model of them for almost any purpose. The black line fits them less well but is a more useful model.

There are lots of ways to convince yourself that specification by example, as it has come to be known, is deeply unsound. Folks well versed in numerical modelling, statistics, or machine learning, start to fret about the “overfitting” that can occur when a function is threaded through lots of data points and ends up wriggling around in worrying ways. They also worry about the traps which lie in extrapolating general behaviour from a small number of points.


Other folks, more familiar with formal specification, worry that the examples will not be comprehensive, that cases will be overlooked, and that implementing just the cases given may leave an incomplete implementation.


The over–fitting issues, I think, may be dispensed with since the functions—broadly understood—that we implement in computer programs are not, in general, the continuous mappings between topological spaces that, say, the 10th–order polynomial shown in blue in that illustration is an example of. That's the kind of function that is used throughout the natural sciences and engineering, but that's not the kind that IT systems generally implement. So our functions don't need to make the kind of wild and misleading excursions that such over–fitted curves do in order to get lined up for the next data point.

The concern about lack of generality of specification by example is better grounded, but can be addressed by consideration of an engineering tradeoff. If all the checks on a system, all the checked examples, come up green, is the system complete and correct? We don't know. Does that matter? Not usually. The cost of creating a system that definitely is correct, and provably so, becomes very large very fast as the complexity of the system grows only very little. It turns out that few users, clients, project sponsors, or whatever it may be, are prepared to pay for such absolute confidence in correctness. So, an engineering tradeoff can, and should, be made. Can we assemble a large enough set of examples to check, to give us high enough confidence that our system is correct enough, at a price that the client will pay? It turns out that, yes, we can.


Added to which, in testing terms a function implemented this way, by induction over examples, is a white box: we know how it works. And we are allowed to be smart about how we do that.


Checked examples do become test cases, in regression. As with any kind of testing, that isn't able to guarantee correctness, and other kinds of verification and validation need to be used. But! By building the system by induction over examples in the first place we can get to a better shared understanding of what is required more quickly than with other techniques. Not least because some of the tools let us take artefacts, such as spreadsheets or even Word documents, and instrument them.

A lot of business workflows use such documents anyway, so there are lots of such examples around.


There are other tools, too, for the Acceptance–test Driven Development style of working with checked examples.

These are not just tools for running tests, that is, checking examples, but also media of communication and venues for collaboration. If you aren't using such already, consider giving them a try. You might be surprised at the benefits.

 

References

 

Credits:

  • Cassowary: Summerdrought [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], from Wikimedia Commons https://commons.wikimedia.org/wiki/File:Southern_Cassowary_7071.jpg

  • Penguin: Samuel Blanc [CC By-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)] https://commons.wikimedia.org/wiki/File:Emperor_Penguin_Manchot_empereur.jpg

  • Pigeon: Jon Ascton [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)] https://en.wikipedia.org/wiki/Columbidae#/media/File:Pigeon_on_high_tension_cable.png

  • Over–fitted curve: Ghiles [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)] https://commons.wikimedia.org/wiki/File:Overfitted_Data.png






