We should have a chat about Artificial Intelligence

As we adjust to the advent of civilisation-threatening climate change, a new threat is emerging: the unregulated development of artificial intelligence.

ChatGPT has upended the development of artificial intelligence and left its larger, better-resourced competitors such as Google languishing.

Suddenly there is this ability to generate superficially coherent prose.  As an example, I used it to write a history of The Standard, which was incorrect in parts but not bad for something that took milliseconds to generate.  The program is still in development and I am confident that its performance will improve with further iterations.

But the worry is that the improvement is happening too quickly.

This week the Council of Trade Unions stated that the latest release of ChatGPT is a “wake-up call”, and that regulation is needed to make sure workers do not get a raw deal.

The implications are clear.  As artificial intelligence evolves, more and more jobs will be lost as the need for human labour lessens.

This is not necessarily a bad thing.  I am sure we would all relish working less.

But the problem is that it appears almost inevitable that the result will be increased wealth for the few and greater poverty for the many.

And ChatGPT already has the ability to write code.  These may be simple snippets for basic tasks, but what happens when it can revise and rewrite its own code?  How will it develop when it becomes self-aware?
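For a sense of where things stand today, the sort of “simple snippet” it can already produce on request looks something like the following.  This is a hypothetical illustration written for this post, not actual ChatGPT output:

```python
# Hypothetical illustration of the kind of simple snippet ChatGPT can already
# produce on request: count how often each word appears in a piece of text.
from collections import Counter


def word_frequencies(text: str) -> Counter:
    """Return a count of each lower-cased word in the text."""
    return Counter(text.lower().split())


if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog the end"
    print(word_frequencies(sample).most_common(3))
```

Nothing in that is sophisticated, which is rather the point: the open question is what happens when the same tool can usefully revise and extend much larger programs, including its own.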

This raises the prospect of requiring Isaac Asimov’s three laws to be hard-coded into all AI programs, including ChatGPT.

The laws state:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The problem is that there is no mechanism to require developers to build these constraints into their systems.
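Part of the difficulty is that it is not even clear what such a requirement would look like in code.  The sketch below is entirely hypothetical and deliberately naive, not drawn from any real system: the wrapper is trivial, but nobody knows how to implement a predicate like would_harm_human() for a general-purpose AI, so there is no concrete check a regulator could simply order developers to insert.

```python
# A deliberately naive, hypothetical sketch of Asimov's First Law as a
# pre-action filter (the Second and Third Laws would sit below it in
# priority). The wrapper is trivial; the obstacle is the predicate.

def would_harm_human(action) -> bool:
    """First Law predicate: would this action injure a human being?"""
    # There is no general, reliable way to compute this for an arbitrary action.
    raise NotImplementedError


def first_law_permits(action) -> bool:
    """Block any action that would harm a human; allow everything else."""
    return not would_harm_human(action)
```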

The CTU’s request has some unusual supporters.  Elon Musk, Steve Wozniak and a group of other prominent computer scientists and tech industry notables this week released a joint letter calling for a pause in the development of artificial intelligence.  From AP News:

The letter warns that AI systems with “human-competitive intelligence can pose profound risks to society and humanity” — from flooding the internet with disinformation and automating away jobs to more catastrophic future risks out of the realms of science fiction.

It says “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter says. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Governments are busily trying to work out what to do.  The problem is that the situation is changing so quickly that our leaders may be too late.

There was a fascinating interview yesterday on Radio New Zealand between Kim Hill and Brian Christian, author of the book The Alignment Problem.

His analysis is complex, but he is essentially calling for AI to be much more transparent so that we can understand how it makes decisions.  Otherwise we may be allowing decisions to be made that entrench discrimination on the basis of race or sex without our knowledge.

Hopefully this issue can be addressed properly before AI becomes self-aware.  Otherwise we may be in for an interesting time.
