There are plenty of arguments against AI – we are all doomed, it will want to kill us, it has no feelings, it doesn’t think like we do.

These charges are levelled at very crude AI. One target is Artificial Neural Networks (ANNs), which are just directed resistor networks, crudely programmed to badly handle simple cases and totally mishandle dynamic cases like autonomous vehicles, with no trace of intelligence – though they have their fans ("I love programming deep neural nets"). Another is Large Language Models (LLMs), which, prompted by a few words, can spit out hundreds of words, but have no idea what the words mean, other than that the few words appear in the hundreds of words. It is left to chance whether the words are relevant. It would be quite reasonable to say that such things should not play an important role, or make life-and-death decisions. Their role can be that of Santa's little helper, but no more than that.


The world is a complex place, growing more complex by the day, partly due to our lack of forward thinking, or denial of what was forecast. We haven’t changed much – particularly the strong limit on how much we can consciously think about at once. There are already systems in place that stop rash actions – many commercial airliners will override the pilot’s action if a simple analysis says it is dangerous. Yes, sometimes it overreacts, but on balance it saves lives. Similarly, rescue by helicopter over a raging sea requires very rapid decision-making in a dangerous environment – the procedures are automated, because humans aren’t reliable enough in such situations.


Defence adds an extra edge – we can’t rely on ML systems programmed years in advance, or crawlers scraping bits of text from the internet over months or years. If hostilities break out, we need to know what the situation is today – data from before today is out of date. The ability to recast military systems using natural language is a very strong argument for AGI.


The Sort of Problems That AGI Might Be Used to Solve

People can use their natural language to describe far more complex situations than mathematical symbols can, so it would seem reasonable to create a natural language interface for a machine. Said glibly, this sounds easy – a child can do it. The child can do it for itself, but it mostly doesn’t know what it is doing – its Unconscious Mind is doing the work. And it can’t be a team effort – someone learns all about nouns, someone else all about verbs, another about adjectives – that is not going to work. It has to be a single mind that learns about it all, while other people can only nudge it in the right direction. Creating a language interface for a machine is similarly a Single Mind Problem – a complex problem with many strongly interwoven elements.


How good does the AGI natural language interface have to be? Very good indeed – it has to be able to create a richer and more complex abstract environment than a human can manage. Partly, that’s easy, because it does not have the Conscious Mind’s Four Pieces Limit. Partly, that’s hard, because we do not understand how the Unconscious Mind goes about its business, and it is not going to tell us, so all we can do is polish the facets we do understand.

Implementing Morality in AGI

If you are still terrified by the thought of machines making important decisions, think about Collaborations – AGI’s ability to support humans at a higher level than they can manage by themselves. This is where we see an obvious path for introducing AGI.