When Technology and Humanity Collide

Shawn Hamman
9 min read · Jan 1, 2019


I’ve found that some of the most interesting ideas form around the messy, rapidly evolving blend of technology and humanity.

One such techno/human fusion is Twitter, a place where slow-changing society and culture come up hard against fast-changing technology, creating inevitable emotional turbulence and much uncertainty while furiously generating and spreading new ideas and concepts. That’s the one real utility Twitter has for me; it’s why I follow the varied set of people I do, in an attempt to avoid an echo chamber and stay open to diverse and contrasting views, as annoying as that sometimes is.

I came across a tweet recently that tied together some fascinating threads of messy techno/human confluences for me — concepts where human nature, technology and ethics are colliding in an interesting and unexpected way: luck, self-driving cars and Chinese social credit scores.

It suggestively asked how awful it would be if self-driving cars in China were to use the reportedly planned Chinese “social credit system” to choose who lives and who dies in accident situations. Presumably those with a “better” score would be spared over those with a worse one, the implied proposition being that this would be awful. Perhaps it would be; if I were a betting man, I’d put money on most people finding it at least some flavour of unpleasant.

I wondered, would it actually be awful? And if so, why would it be awful?

Self-Driving Cars

A particularly tricky problem to solve in a self-driving car system is, for all intents and purposes, the real-world manifestation of the classic ethical Trolley Problem thought experiment. Given an anticipated accident scenario where death or injury to somebody is unavoidable, how should a self-driving car’s software ethically decide who the victim should be out of a set of options?

For example, a car’s software might determine that an accident is inevitable, that there are two categories of object in front of it, and that the only option available is to avoid one category at the expense of the other. Should it, the software, choose to drive into one child instead of a group of adults? Should it pick a couple of old people to die over a single young person? Should the car always save the driver over any other people?
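
To make the difficulty concrete, here’s a minimal, entirely hypothetical sketch of what hand-codifying such a rule could look like. The categories, priorities and names below are invented for illustration; no real self-driving system is known to work this way.

```python
# A deliberately naive, hypothetical hand-codified "who to spare" rule.
# Every category, priority and name here is invented for illustration.
from dataclasses import dataclass
from typing import List

# Arbitrary priority ordering: lower number = spared first. Merely
# writing this table down is already a contentious ethical claim.
CATEGORY_PRIORITY = {"child": 0, "adult": 1, "elderly": 2}

@dataclass
class DetectedGroup:
    category: str  # e.g. "child", "adult", "elderly"
    count: int     # how many people were detected in the group

def group_to_spare(groups: List[DetectedGroup]) -> DetectedGroup:
    """Rank by category first, then by group size, and return the
    group the car should try hardest to avoid hitting."""
    return min(groups, key=lambda g: (CATEGORY_PRIORITY[g.category], -g.count))

# One child versus four adults: this rule spares the child.
print(group_to_spare([DetectedGroup("child", 1), DetectedGroup("adult", 4)]))
```

Even this toy version is forced to bake in a contentious judgement (category trumps headcount, so one child is spared over four adults); writing the rule down makes the ethics impossible to dodge.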

This is not a trivial problem to overcome, and no doubt there will be fierce disagreement on how it eventually gets solved. It has become a very real challenge for the legion of programmers working on what will likely be one of the most profound changes to society to date (and, in my opinion, one of the most interesting software challenges in the world to be working on right now).

Is it even possible to have an acceptable way of rank ordering the value of people when it comes to making life and death decisions?

The Implications of Automation and Decision Delegation

Most people never need to think through ethical conundrums like the Trolley Problem; throughout history, this kind of thing has generally been left to philosophers and clergy to contemplate.

We, as a species, are now rapidly moving past the information age and into the age of automation: the Fourth Industrial Revolution, the confluence of technologies blurring the lines between the physical and the digital. Breakthroughs in a number of fields are driving this revolution forward: robotics, machine learning, artificial intelligence, hyper-connectedness through high-bandwidth networks, and the (industrial) internet of things.

What perhaps isn’t so obvious is that using artificial intelligence, machine learning and automation technologies (especially complex automation like self-driving cars) effectively delegates and outsources serious, complex and increasingly ethical decision-making to machines.

By its nature, solving a problem with software typically requires programmers to formally codify the decision-making or problem-solving process in excruciating detail. For complex decisions this is extremely challenging, and sometimes impossible. This difficulty, along with improvements in computer hardware and access to large amounts of data, has driven the fields of machine learning and artificial intelligence forward.

One of the issues with artificial intelligence, neural networks and machine learning in general is that it is difficult (if not impossible) to explain exactly how a decision was made or a result obtained, both because of how the technology solves problems and because of the nature of the problem spaces to which it is applied. This is a real impediment when trying to determine accountability, particularly in high-consequence scenarios.

Ironically, this means that delegating decision-making in something like the automation of driving inherently requires us (programmers, people) either to address and formally answer complex ethical challenges like the Trolley Problem for all possible scenarios by hand, or to employ nearly opaque, black-box technology that makes it difficult or impossible to explain how a decision was made.

Addressing the Trolley Problem inherently means having to, at some point, rank people (or categories of people) in order of preference: who to kill and who not to kill.

Will it be important to us as a society and as individuals to know exactly how and why a machine decided that one person lived while another died?

Would it be better if the decision were made taking into account as much data about the individuals as possible? Or will a simpler, impersonal, categorical choice be acceptable, perhaps even preferred?
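
As a thought experiment, those two alternatives can be sketched side by side. The Person type, the score field and both rules below are invented for illustration; the point is how little code separates the two philosophies.

```python
# Two hypothetical decision styles side by side; all names are invented.
from dataclasses import dataclass

@dataclass
class Person:
    person_id: str
    category: str        # coarse, impersonal label, e.g. "adult"
    social_score: float  # hyper-personal data, e.g. a social credit score

def categorical_choice(a: Person, b: Person) -> Person:
    """Impersonal: decide on coarse categories alone, ignoring identity."""
    priority = {"child": 0, "adult": 1, "elderly": 2}
    return min((a, b), key=lambda p: priority[p.category])

def personal_choice(a: Person, b: Person) -> Person:
    """Hyper-personal: spare whoever carries the higher individual score."""
    return max((a, b), key=lambda p: p.social_score)
```

The two functions are nearly identical in shape; the entire difference lies in what data they are permitted to consume, which is exactly the question being asked.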

Social Credit

Widely reported this year (Business Insider; Bloomberg), the Chinese central government, ever the practical and efficient authoritarian, is said to be leveraging Big Data technology and ubiquitous surveillance to implement an explicit “social credit scoring” system, effectively standardising the assessment of each citizen’s business, economic and social reputation.

If one thinks about it, various social media platforms in the West, combined with credit scores, already can (and effectively do) function as an implicit social credit score, whether intentionally or not.

Whether planned or haphazard, both implicit and explicit scoring mechanisms seem to have similar and largely punitive outcomes: say or do the wrong things where your score is measured and you could lose your job or access to opportunities; accumulate a bad credit score and access to financial or social services could become more challenging.
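
As a back-of-the-envelope illustration of how such an implicit Western “score” could emerge, here’s a hypothetical aggregation of signals that already exist about most of us. The signal names and weights are invented; no platform is known to publish such a formula.

```python
# Hypothetical aggregation of existing signals into an implicit
# "social credit" number; the signals and weights are invented.
from typing import Dict

def implicit_social_score(signals: Dict[str, float]) -> float:
    """Combine normalised (0..1) signals into a single 0..100 score."""
    weights = {
        "credit_score": 0.4,            # traditional financial reputation
        "employment_history": 0.2,
        "social_media_sentiment": 0.2,  # what you say, and how it lands
        "network_reputation": 0.2,      # who you are connected to
    }
    return 100 * sum(weights[k] * signals.get(k, 0.0) for k in weights)

print(implicit_social_score({
    "credit_score": 0.8,
    "employment_history": 0.9,
    "social_media_sentiment": 0.4,
    "network_reputation": 0.6,
}))  # ~70.0
```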

Interestingly, the very idea of a planned social scoring system seems to be universally reviled in the West. Most people I’ve come across with an opinion on the topic, like the author of the tweet that started this off, flinch at the thought of what the Chinese government might be up to. It seems extremely dark and dystopian, with what we imagine to be terrible consequences. The very concept seems to provoke a natural revulsion, and I’ve seen it described in many negative terms: “awful”, “terrible”, “shuddering at the thought”, “worrying”. US Vice President Mike Pence described it as “an Orwellian system premised on controlling virtually every facet of human life.”

The push-back against the pervasive, already implemented Western platforms of Google, Facebook, Twitter and YouTube doesn’t seem to be quite so vociferous, however.

I can’t help but wonder if it’s the more explicit central-planning approach that resonates too strongly with Communism, or just a bit of a cultural blind spot at work, where much is forgiven or overlooked in the name of capitalism and profit.

On Trust

I think there are two primary reasons for people (in the West at least) to be deeply skeptical of the planned, explicit Chinese social credit scoring system.

The first and obvious one lies in the judicious granting of trust to those who would define, operate and apply such a scoring system. Can the people implementing this system be trusted to do so fairly, securely and impartially, and to keep it from being abused? Even if the system initially meets those conditions, can it be guaranteed to stay that way over time? I think the easy answer is no: it’s unlikely, or at the very least extremely difficult, to get right. And the consequences of getting it wrong seem like they could be dire.

An interesting counterfactual to contemplate: if, for the sake of argument, a social credit scoring system could be implemented and operated by a completely trustworthy, fair, impartial and benevolent super AGI, able to take every aspect of a person into account when generating the score, would there still be an objection to the concept?

Is there utility in having a “social credit score”, implementation and operation aside?

The second reason for Western skepticism is that the concept seems to boil down to another class system, albeit a significantly more complex and granular one. The way society seems to be trying to reduce competition in schools, and the prevailing social-justice insistence on equality and equity in all things, don’t align well with the concept: instead of the two or three classes that already exist, a social credit score can effectively create an infinite number of individual classes, and it presents them much more starkly. The very concept tries to quantify the worth of one individual over another, which isn’t a particularly palatable approach, particularly in the West.

That, in itself, is worth considering: why is quantifying the worth of individuals and ranking them so off-putting?

On Luck

A third and perhaps less obvious reason is that what a “social credit score” measures is in fact not value or reputation exactly, but rather an aggregate of a subset of an individual’s luck across an arbitrarily chosen number of measurable dimensions.

Most if not all things about a person come down to luck. It is the extremely unpalatable but entirely factual way of the universe. The genes you’re born with, the family you’re born into, the city, state and country you happen to be born in, the environment you grow up in, the air you breathe, the food you eat, the people who surround you, the culture you exist in: none of it is your doing. You chose none of your circumstances, nor any of the things that you physically are. The choices you think you’ve made were made with mental machinery you had no hand in creating, machinery entirely determined by what came before you, by the genes you were conceived with and by where you happened to come into being. By chance. Luck: good, bad or ugly.

Everything about you and me and everybody else is a measure of luck along some dimension. The difficulty with a social credit score then is that it is an imperfect measure of how lucky or unlucky a person is.

The trouble is that nearly all societies take a punitive approach to non-conforming and non-performing individuals. Once you realise that a person’s disposition is effectively due to luck and their actions too are due to luck — good or bad — punishing bad luck and rewarding good luck seems somewhat cruel.

Quantifying how lucky a person is in order to inform decisions about them in the context of society, even if that quantification is useful in predicting future behaviour and a net benefit to society as a whole, seems intuitively unfair without a commensurate shift in society from a punitive approach to a supportive and restorative one with regard to misbehaviour.

Conclusion

Even if cultural differences (between East and West, perhaps) cause the concept of a social credit score to be viewed differently, the tsunami of big data technology, fanatical metrics collection, pervasive social media and the hyper-connectedness of everything means that people are being quantified and scored whether they know it and like it or not. It is happening: for the moment, purportedly explicitly in the East, and definitely implicitly in the West. Whether this scoring will turn out to be a net benefit to society remains to be seen.

That same technology wave is also forcing progress in automation, which will inevitably mean machines making serious ethical decisions on our behalf. It is not yet clear whether we will prefer those decisions to be made with hyper-personal data, something like social credit scores, or whether we’d be more comfortable with semi-anonymous, categorical decisions.

The near future will surely be some of the most interesting years in human experience.

