Ask anyone in computer science or psychology about artificial intelligence (AI) and they will tell you the same thing: AI is at least twenty years away and will almost certainly require the development of quantum computing in order to become reality. This is slightly at odds with reports leaking out of IBM's super-secret New Mexico research centre. Unlisted in any IBM documentation and unknown to any IBM staff below senior VP level, this facility comprises two supercomputers with significantly higher performance than the Sequoia computer that the public is aware of. Sequoia tested at 16.32 petaflops, making it the fastest computer generally known. Rumours suggest that the New Mexico facility, known as the Gonzalez project (after the cartoon mouse), is running at ten times that speed, around 163 petaflops.

The suggestion that quantum computing is required for AI is a conceit based on a misunderstanding of the way in which human intelligence works. We have an idea, propagated by the media, that the human mind is essentially an organic computer, receiving sensory inputs and producing specific outputs. This is incorrect for two reasons. Firstly, computers operate on a set of instructions: without instructions the computer does nothing, and with incorrect instructions it produces incorrect results, a phenomenon known as Garbage In, Garbage Out (GIGO). The human brain does not operate in this manner. Secondly, computers process data sequentially, one series at a time, whereas the human mind processes in parallel, handling many things simultaneously.
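To make the GIGO point concrete, here is a minimal sketch in Python (purely illustrative, with made-up sensor readings, and nothing to do with any IBM system): the program follows its instructions faithfully, so a garbage input produces a garbage output without complaint.

```python
# Minimal GIGO illustration: the program does exactly what it is told,
# so flawed input yields flawed output with no protest.

def average_temperature(readings):
    """Blindly average whatever readings are supplied."""
    return sum(readings) / len(readings)

# Garbage in: a faulty sensor reports -999.0 instead of a real temperature.
readings = [21.4, 22.1, -999.0, 21.9]

# Garbage out: the computer dutifully reports a nonsensical average.
print(average_temperature(readings))  # roughly -233.4
```

A human glancing at those numbers would spot the broken sensor at once; the computer, lacking any instruction to check, does not.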
In order for AI to operate successfully, the computer needs to be able to function independently of its programming, or at least to re-write its own programming to modify its operation. It also needs to be able to adapt and change connections, internally re-wiring itself to meet changing needs. Finally, it needs to be able to operate without recourse to an external operator. None of these requirements depends on quantum mechanics, but all of them are beyond currently public technology. Of course, what is public knowledge is not the same as what has been developed.

Working with neuro-biologists, linguists and psychologists, IBM has created software which is capable of evolving over time based on experience. This is the first step towards true AI: a computer that can learn. To achieve this, the machine was initially programmed to solve complex mathematical modelling problems that involve multiple variables and have already happened in the real world. Each time the computer establishes a model, it is tested against the real-world result and the discrepancies are highlighted. Unlike a standard computer modelling system, the refinement of the mathematical algorithms driving the system is not done by human programmers but by the computer itself. Initially this was handled by a complex set of sub-routines that created rules against which the computer judged its models and adapted its algorithms accordingly.
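No technical details of the rumoured system have been published, so the following is only a speculative sketch of the kind of loop described above: build a model of something that has already happened, measure the discrepancy against the known outcome, and let fixed rules (rather than a human programmer) nudge the model's parameters. Every name, number and rule below is invented for illustration.

```python
# Speculative sketch of the model-test-refine loop described above.
# Nothing here comes from IBM; all values and rules are invented.

def model(x, params):
    """A toy stand-in for a climate model: a simple linear fit."""
    slope, intercept = params
    return [slope * xi + intercept for xi in x]

def discrepancy(predicted, observed):
    """Highlight how far the model is from the real-world result (mean absolute error)."""
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)

def refine(params, x, observed, step=0.01):
    """Rule-based refinement: nudge each parameter and keep any change that reduces error.
    This stands in for the 'sub-routines that created rules' mentioned in the post."""
    best = list(params)
    best_err = discrepancy(model(x, best), observed)
    for i in range(len(best)):
        for delta in (step, -step):
            trial = list(best)
            trial[i] += delta
            err = discrepancy(model(x, trial), observed)
            if err < best_err:
                best, best_err = trial, err
    return best, best_err

# Inputs and the outcome that already happened in the real world (made up).
x_hist = [0, 1, 2, 3, 4]
observed = [1.0, 3.1, 4.9, 7.2, 9.0]

params = [1.0, 0.0]            # initial, human-written guess
for _ in range(500):           # the machine, not a programmer, does the refining
    params, err = refine(params, x_hist, observed)

print(params, err)             # parameters drift towards the observed trend
```

The point of the sketch is only the division of labour: humans supply the judging rules once, and the machine applies them to improve its own model.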
The breakthrough came when researchers programmed Gonzalez to apply its rules for successful modelling to its own sub-routines, establishing a self-correcting protocol for software development. The system initiated fourteen changes to one of its climate modelling algorithms, creating a 32% improvement in accuracy over the pre-programmed models and suggesting that the computer was able to learn and develop.
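Again as a thought experiment rather than anything confirmed about Gonzalez, that meta step would amount to turning the same success measure on candidate versions of the refinement routine itself and keeping only the changes that produce better models. A minimal sketch, with every detail invented:

```python
# Speculative sketch of a 'self-correcting protocol': the rule used to judge
# models is applied to candidate versions of the refiner itself.
# Everything here is invented for illustration.

import random

def score(refiner, problems):
    """Judge a refinement routine by the accuracy of the models it produces."""
    return sum(refiner(p) for p in problems) / len(problems)

def make_refiner(tuning):
    """A family of refinement routines differing only in one tuning constant."""
    def refiner(problem):
        # Stand-in: accuracy is best when the tuning matches the problem.
        return 1.0 - abs(tuning - problem)
    return refiner

problems = [0.3, 0.35, 0.4]            # made-up benchmark of modelling tasks
current_tuning, current = 0.9, make_refiner(0.9)

# Apply the modelling rules to the sub-routine itself: propose a change,
# keep it only if the judged accuracy of the resulting models improves.
changes_kept = 0
for _ in range(200):
    candidate_tuning = current_tuning + random.uniform(-0.05, 0.05)
    candidate = make_refiner(candidate_tuning)
    if score(candidate, problems) > score(current, problems):
        current_tuning, current = candidate_tuning, candidate
        changes_kept += 1

print(changes_kept, round(current_tuning, 3))  # the routine has re-tuned itself
```

Whether the real system works anything like this is pure guesswork; the sketch only shows why a machine making changes to its own algorithm is not a mystical claim.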
This is only the first step in a much longer project, but if the reports are correct it has tremendous potential for computer-aided improvements to societal issues. We will have to see what happens next.