20:43 Saturday 18th August 2018

Today, a short story.

-----

<b>AI: A Brief History of a Failed Dream</b>

In the 1950s, computer scientists confidently predicted that within 25 years, they could produce working artificial intelligence. Specifically, computers that could do our thinking for us, but faster, better, and for longer than we could.

They wrote optimistic books about how the computers of the future would combine the rigour and reliability of algebraic formulas with the subtlety and sophistication of creative human intuition.

By the 70s, they knew they were wrong. More importantly, they knew why they had been wrong. It wasn't just "The Hard Problem" of consciousness, nor "The Mysterious Problem" of creativity - it was that they couldn't define what qualities they were trying to distill.

Terms like "consciousness", "self-awareness", "thought" and even "reason" as distinct from "logic" - these are concepts from folk psychology. They had no direct neural corellates, and to describe them as "emergent" was simply to push the problem of definition one stage back.

Through the 1990s, multivalent "fuzzy" logic systems, smooth non-granular logics, and probabilistic randomisation were all tried, in an attempt to mimic the spark of creativity which they thought distinguished "real" from "artificial" intelligence. This, however, was to conflate indeterminacy with ambivalence, and tangential connectedness with unconnectedness.

In the 2000s, a new generation of computer scientists made the same confident predictions as half a century before, this time about neural networks. The problem would be solved within 25 years, they said, because humans didn't need to solve it at all.

Rather, each net would try trillions of decision trees, eventually finding the best one through brute force and dumb luck. However, it would do so more systematically and more thoroughly than any slow and idiosyncratic human could manage.

By the 2020s, they once again knew they were wrong. Their nets could indeed perform single menial tasks, without boredom or fatigue. But they required intensive expert training, with timescales and costs that expanded exponentially with the complexity of the task.

More than that, the notion of "the best solution" proved elusive. Much like "simplicity", which turned out to be extremely complex, "good" was different for every researcher, for every task, often for every day.

The result was not the apocalyptic scenario of a computerised medical doctor concluding that the way to reduce cancer rates in patients was to commit genocide. Nor was it the pulp sci-fi plot of the machine doctor which exploded in a shower of sparks when told "I feel like a pair of curtains".

In the event, it was more like a doctor which concluded it could cure one patient's cold by persuading every fourteenth ginger cat to spell the word "coffee" with three Fs.

The new computers were insane. But it was no human kind of insanity, where irreconcilable imperatives are reformulated and partitioned, achieving mental balance at the price of real-world chaos.

Computer insanity was a meticulously plotted blind alley, a billion kilometres long, deriving from operational ambiguities and vaguenesses so subtle they were not expressible in ordinary language. Attempts to disambiguate and clarify inevitably had their own ambiguities and vaguenesses. The solution was therefore part of the problem.

Around the same time, other scientists turned their optimism to data mining. If, they thought, a human-but-better brain was impossible, a computer-but-bigger system might be the next step. They collected ever more vast quantities of raw data, feeding it through ever higher bandwidths of integration and model building.

The results were surprisingly similar. Applying massive amounts of complex logic to a small set of badly defined axioms might give us a cat-fixated doctor. Applying a little simple logic to vast amounts of badly defined data isn't so different.

The obvious answer was to increase the dataset even more, clarify it, and make the logic both expansive and clean. But increasing the resolution of an image is not the same as making it clearer. A detective looking for clues will see nothing but clues, even when there's no crime. The Pentagon's paranoid search algorithms showed that.

By 2050, it had become possible to scan the operations of a living brain, and even simulate small sections of it on an ordinary computer. Futurologists decided we would be able to keep our best and brightest alive for ever, as immortal wise advisors. When asked what was the point of recreating a single brilliant thinker as an office block that was only brilliant for one hour a day - as opposed to training a thousand students who could take their work further in a thousand directions - they had no answers.

At the same time, techniques were perfected for culturing real human neural tissue in an organic support system. A "superskull" could be several square metres, living in a nutrient vat, fed with constant multiple data streams, like an infant which grows up watching a thousand TV channels all at once. As "book geniuses", they were impressive. As willing slaves, they proved to be neither.

The "Back to the Brain" movement of the 2070s sought to hack the natural nervous system with implants that stimulated emotions toward problem solving, replaced sleep, auto-drilled learning, and linked to external information sources. Early successes led to excessive implantation, and burnout. With the ambition scaled down implantation is now a common part of education and employment.

As the 21st century draws to a close, there are projects to simulate brains which could not exist in the physical world. These "paraminds" operate in virtual universes with different laws of chemistry or physical dimensions. The researchers running these projects hope their creations can provide workable answers to real world problems that humans could literally never produce.

Others use languages and systems of logic that humans can design and define, but which the human mind is unable to use. Thus we are making for ourselves a council of alien friends who we can never hope to understand, but which can see problems we could never grasp, and solve them in ways we could never imagine.

Thinking is hard work. Technology lets us work harder by making the hard work easier. But we don't really want to work harder. We want someone else to do it all for us. We want someone who'll know what we need, and do it better than we could, without us even knowing what it was.

Perhaps it's fortunate that all our attempts to create an obedient god have failed. They failed because we can't really imagine what such a god would look like, and can't imagine how we could make one even if we could.

<i>Anas Malik, 21/06/2198</i>

3 comments:

  1. Thank goodness there is no artificial intelligence! Because if there were, such a being might rightfully conclude how destructive the human species is and eliminate us for the good of the world, as a necessary and logical solution!

    Besides, I'm already wary of laptops with cameras--they could be spying on me as I sleep or take a shower or pick my nose!

  2. Some laptops automatically take pictures through the webcam when switched on. A lot of thieves have been caught that way. And yes, some embarrassing secrets revealed.

  3. heard from someone in the IT industry with contacts in the 'intelligence' community: 'Just because its eyes are closed, doesn't mean it's asleep', so I guess even sleeping with one eye open won't help us...
