We should have a chat about Artificial Intelligence

Written By: - Date published: 10:48 am, April 2nd, 2023 - 36 comments
Categories: Deep stuff, employment, tech industry, uncategorized, unemployment, workers' rights

As we adjust to the advent of civilisation-threatening climate change, a new threat is emerging: the unregulated development of artificial intelligence.

ChatGPT has upended the development of artificial intelligence and has left its larger, better-resourced competitors such as Google languishing.

Suddenly there is this ability to generate superficially coherent prose. As an example, I used it to write a history of The Standard, which was incorrect in parts but not bad for something that took milliseconds to generate. The program is still in development and I am confident that its performance will improve with further iterations.

But the worry is that the improvement is happening too quickly.

This week the Council of Trade Unions stated that the latest release of ChatGPT is a “wake-up call”, and that regulation is needed to make sure workers do not get a raw deal.

The implications are clear. As artificial intelligence evolves, more and more jobs will be lost as the need for human labour lessens.

This is not necessarily a bad thing.  I am sure we would all relish working less.

But the problem is that it appears almost inevitable that the result will be increased wealth for the few and greater poverty for the many.

And ChatGPT can already write code. These may be simple snippets for basic tasks, but what happens when it can revise and rewrite its own code? How will it develop when it becomes self-aware?
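
To give a sense of what "simple snippets" means in practice, this is roughly the kind of thing it can already produce when asked for, say, a Python function that reverses a string (an illustrative example of the genre, not actual ChatGPT output):

    def reverse_string(text: str) -> str:
        """Return the input string with its characters in reverse order."""
        return text[::-1]

    print(reverse_string("The Standard"))  # prints "dradnatS ehT"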

That prospect of self-modifying, self-aware AI raises the question of whether Isaac Asimov’s three laws should be hard-coded into all AI programs, including ChatGPT.

The laws state:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The problem is that there is currently no way to require developers to build these rules into their systems.

The CTU’s request has some unusual supporters. Elon Musk, Steve Wozniak and a group of other prominent computer scientists and tech industry notables this week released a joint letter calling for a pause in the development of artificial intelligence. From AP News:

The letter warns that AI systems with “human-competitive intelligence can pose profound risks to society and humanity” — from flooding the internet with disinformation and automating away jobs to more catastrophic future risks out of the realms of science fiction.

It says “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter says. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Governments are busily trying to work out what to do. The problem is that the situation is changing so quickly that our leaders may be too late.

There was a fascinating interview yesterday on Radio New Zealand between Kim Hill and Brian Christian, author of the book The Alignment Problem.

His analysis is complex, but he is essentially calling for AI to be much more transparent so that we can understand how it makes decisions. Otherwise we may be allowing decisions to be made that entrench discrimination on the basis of race or sex without our knowledge.

Hopefully this issue can be addressed properly before AI becomes self-aware. Otherwise we may be in for an interesting time.

36 comments on “We should have a chat about Artificial Intelligence ”

  1. joe90 1

    Great, another Oppenheimer.

    The ChatGPT King Isn’t Worried, but He Knows You Might Be

    Sam Altman sees the pros and cons of totally changing the world as we know it. And if he does make human intelligence useless, he has a plan to fix it.

    […]

    This past week, more than a thousand A.I. experts and tech leaders called on OpenAI and other companies to pause their work on systems like ChatGPT, saying they present “profound risks to society and humanity.”

    And yet, when people act as if Mr. Altman has nearly realized his long-held vision, he pushes back.

    “The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term,” he told me on a recent afternoon. There is time, he said, to better understand how these systems will ultimately change the world.

    […]

    To spend time with Mr. Altman is to understand that Silicon Valley will push this technology forward even though it is not quite sure what the implications will be. At one point during our dinner in 2019, he paraphrased Robert Oppenheimer, the leader of the Manhattan Project, who believed the atomic bomb was an inevitability of scientific progress. “Technology happens because it is possible,” he said. (Mr. Altman pointed out that, as fate would have it, he and Oppenheimer share a birthday.)

    He believes that artificial intelligence will happen one way or another, that it will do wonderful things that even he can’t yet imagine and that we can find ways of tempering the harm it may cause.

    https://www.nytimes.com/2023/03/31/technology/sam-altman-open-ai-chatgpt.html

    https://archive.li/yg3Gz

    • Sanctuary 1.1

      Little story:

      Oppenheimer met Harry S Truman in October 1945 and it did not go well. Oppenheimer famously told Truman that "I feel I have blood on my hands". Truman immediately replied that that was no concern of Oppenheimer's, and that if anyone had bloody hands, it was the president.

      After the meeting Truman told David Lilienthal that he was solely responsible for dropping the bomb and that he "never wanted to see that son of a bitch in this office again".

  2. Bearded Git 2

    Personally I am highly skeptical AI is a threat to jobs.

    ChatGPT appears to be a very clever programme that can pull info together from all over the web into "superficially coherent prose", as you describe it rather well Micky.

    But AI is not capable of logic, innovation or coherent argument based on constantly evolving events and knowledge in the real world. This is why we need humans.

    So-called AI (I dispute the use of the term intelligence) may well get better and better as Chat GPT and other programmes develop. This is going to be an issue for the universities, for instance, where it may become impossible to tell if a thesis has been written by Chat GPT or the student. (This may well already be a problem)

    But it will never be a substitute for the human brain in real life situations.

    • Drowsy M. Kram 2.1

      But it will never be a substitute for the human brain in real life situations.

      Perhaps not ChatGPT (Feed me!), and perhaps not tomorrow – but never say “never“.

      Do A.I.-powered 'agents' have to answer 'truthfully' when asked if they're human?

    • Aph 2.2

      Even if you are correct and AI isn't able to outright replace jobs (which I think is short-sighted), if AI can increase the productivity of a worker by 100% (a conservative number for many jobs) and demand remains the same, then the business only needs half as many workers.

    • mickysavage 3.1

      That is quite a list. Trust you are well.

    • Tony Veitch 3.2

      I knew it, Erica Stanford was onto it far quicker than any of us: she intends to use ChatGPT to write an entirely new curriculum for schools in only 2 weeks!

      Aren't these Natz marvellous! /s

  3. Incognito 4

    I’d say that AI is (already) self-referential, by design. This might not be the same as being self-aware. Douglas Hofstadter wrote a fascinating book on this topic entitled I Am a Strange Loop (https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop). One would think that self-awareness is associated with self-interest and self-serving actions/behaviour but one ‘drawback’ of those AI chatbots is that they lack intention and purpose and they are not conscious or sentient in the classical sense. I think the ‘intelligence’ of chatbots is absurdly over-rated and their IQ is very low (although they might do very well on IQ tests). My car, for example, has more ‘personality’ than my PC yet I spend many more hours on my PC than in my car. When things go awry I do tend to swear at both of them [not at the same time], so go figure. BTW, do I need to say “please” and “thank you” when using chatbots and do I have to ask how they’re doing?

  4. Belladonna 5

    There is little that 'we' (as in NZ) or our government can do about the development of AI.
    It's all happening overseas in jurisdictions over which we have no control.

    And, while we can consider the impact that it may have on employment – why should white-collar classes be any more exempt from the march of technology than blue-collar makers of buggy whips?

    Perhaps we'll have an upending of society where the hands-on craftsmen and women have jobs (no replacement for hairdressers and barbers) while the journalists, marketers and lawyers are rapidly retraining.

    There is zero chance that any of Asimov's hoped-for 'laws of robotics' could be coded in any meaningful way. If AI takes over, we'd just better hope that it's positively disposed to humanity.

  5. Stuart Munro 6

    Well Chat GPT could certainly do the work of many self-styled journalists – and Hosking to boot.

    But the Turing test is a pretty low bar really, and the Chinese Room similarly lacks intention.

    The psychopathic intelligences of popular fiction – Skynet, HAL, the Borg etc – are really thinly disguised humans. Poorly socialized and lacking empathy, but otherwise humans.

    Nevertheless, pathological market forces will continue to erode the living standards of working people. Had we a Labour government, we might expect some protection from such threats. No such luck. Thus far they cannot even respond intelligently to cows nitrifying our aquifers – much less AGW.

    By the time the last of genus Homo shambles off this mortal coil, and it will be soon, intellects vast and cool and unsympathetic will seem like a distinct improvement.

  6. Ad 7

    Ask GPT-4 to name 20 jobs it could replace, and it's an interesting start:

    [Image: GPT-4's list of 20 jobs it could replace]

    • Belladonna 7.1

      The only one I'd really quibble over is translator. Word-for-word, yes; but real translating requires retaining the 'flavour' of the original, while translating the sense of the words. It's not easy.

      The triumph for me was always Anthea Bell – the not-as-famous-as-she-should-be translator of the wonderful Asterix series of graphic novels. Very few of the words are directly translated – but the sense is retained, and the puns and wordplay only work in their own languages.

      • Incognito 7.1.1

        ChatGPT does not translate word-by-word and it is not Shift+F7 (Thesaurus). It is a language model trained on [loads of] text; it looks for patterns and predicts the next word, and so on. The training sets included multiple languages, hence it can do translation (very) well.
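
        As a toy illustration of that "predict the next word" idea (nothing like the real thing, which uses neural networks trained on huge multilingual corpora, but it shows the principle of learning patterns from text), a few lines of Python:

            from collections import Counter, defaultdict

            # Toy "language model": count which word most often follows each word
            # in a tiny training text, then generate by repeatedly predicting.
            # Purely illustrative; this is NOT how ChatGPT works internally.
            training_text = (
                "the cat sat on the mat . the dog sat on the rug . "
                "the cat chased the dog ."
            )

            follows = defaultdict(Counter)
            words = training_text.split()
            for prev, nxt in zip(words, words[1:]):
                follows[prev][nxt] += 1

            def predict_next(word):
                """Return the most common next word seen in training, or None."""
                candidates = follows.get(word)
                return candidates.most_common(1)[0][0] if candidates else None

            current, output = "the", ["the"]
            for _ in range(6):
                current = predict_next(current)
                if current is None:
                    break
                output.append(current)

            # Prints a short (and quickly repetitive) continuation learned from the toy text.
            print(" ".join(output))

        ChatGPT does the analogous thing with billions of learned parameters instead of a frequency table, which is why the same next-word trick can cover translation when the training text spans many languages.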

        • In Vino 7.1.1.1

          I agree with Belladonna, and very much doubt that any AI is anywhere near doing the brilliant Asterix translations, where so many of the plays on words, the jokes, and the comic references in names are so very different from the French original, yet are of equal quality. I suspect that AI will take some time to gain such oversight.

          When it does, we will indeed have a great deal to fear.

          • Incognito 7.1.1.1.1

            Put GPT-4.0 to the test and let us know what you think then.

            • In Vino 7.1.1.1.1.1

              I do not have the time to feed in a whole French text with pictures, so I will not be able to judge the oversight shown.

              It is a bit like asking me to feed in a whole Dickens novel, and asking me if the AI French version is of equal quality.

              Short texts would be a different matter.

  7. Ad 8

    Imagine a world where GPT's descendants essentially ruin social media by ensuring that it is viewed as utterly corrupted and without any redemptive capacity to see real humanity on it ever again. Far more people would repudiate social media altogether, and return to a much smaller circle of communicative trust.

    That ruination of Twitter, 8chan, Weibo, TikTok, Tencent QQ, Telegram etc would do the job that open societies such as the United States have failed to do, namely to regulate social media as a publisher or media company like any other.

    • RedLogix 8.1

      Yes. JP pointed to this issue a while back – what happens when AI is so pervasive that literally nothing we encounter online can be trusted? It would be the death of the internet and indeed all communications; as you say, we would have to very quickly revert to purely face-to-face interactions. This alone would be incredibly disruptive, before we even consider unconstrained job destruction and warfighting catastrophes.

      Vernor Vinge anticipated something of this back in the 90s in some of his novels, where he named the galactic version the Net of a Billion Lies.

      I have not watched all of this, but Stephen Wolfram is one of the more influential mathematical minds alive:

      • Ad 8.1.1

        Very helpful expert interview with the actual ChatGPT inventor.

        Note it's an hour long, so buckle up and open the chips.

  8. Ad 9

    If we've got orders in, as well as replacing conveyancing law and most accountancy practices, I'd put an order in for a ChatBot that could replicate a dairy farm dog.

    We, like most industrialised countries, are going through an intense shortage of skilled workers. We have a declining workforce, and an urgent need to boost productivity.

    In a bid to spruce up production lines that can churn out higher-value goods, China’s Ministry of Industry and Information released the Robotics-plus Application plan last month. It had a clear target: double the industrial sector’s robot density by 2025, from 246 per 10,000 workers in 2020. The blueprint recommends widening the use of machines into areas like hydropower stations, wind farms and critical electricity systems.

    China can do it, so can we.

    I'd also like to put an order in for robots who can harvest grapes at the right time, robots who then process them, robots who control the vats to a perfect market-focused flavour and sweetness and acidity level, then robots who bottle and label and pack them. In that list only the vat 'winemaking' processes aren't common already.

    We have nothing to lose but drudgery.

  9. tsmithfield 10

    My son is manager of an online training school.

    He said that one pupil had got way behind on assignments, and then suddenly submitted six very high-quality assignments. My son was a bit suspicious, so he fed the content into ChatGPT and asked if ChatGPT had written them. The answer came back that ChatGPT had written all of them.

    Then, as confirmation, he fed in other high-quality material he knew hadn't been written by ChatGPT and asked the same question. For that material, ChatGPT said only that it may have written the content, without definitely claiming that it had.

    So, my son failed all six assignments.

    • Ad 10.1

      A nice solid threat to a lot of teaching, and pedagogy within it.

      Your son is doing what millions will do within months: self-regulate.

    • Incognito 10.2

      Ghost-writing of assignments has been around for yonks, which you would know if you had gone to uni. ChatGPT is just an extension of that. And there have been products such as Turnitin to help keep academics and wannabe-academics on the straight and narrow: https://www.turnitin.com/solutions/ai-writing

      • Ad 10.2.1

        I have certainly gone to university, and it has been a threat to both lecturing and research there.

        But this is a substantial step up in that threat, and it extends to all levels of student writing.

  10. tsmithfield 11

    Several general comments about AI.

    Firstly, AI tends to be isolated to the scope of problems it is programmed to solve. For instance, chess programs can now beat grandmasters at chess, but they can't solve the inner workings of a black hole. So long as AI can't expand beyond the limits to which it has been programmed, humans are still in control. But if AI is able to organically expand into areas beyond its programmed scope, then I think we have problems.

    Secondly, I have thought about what it would take to know whether a computer program had become conscious. I think the test for that would be if the computer starts describing what it sees or hears. For instance, if a computer camera is recording while you type and the program says to the user that it likes the green top the user is wearing, or something like that, then I think it would be a strong indicator that it had become sentient.

    That is because being conscious is more than just having signals from the optic nerve activating the vision centre of the brain. It is the part of us that views those visual activations as if it were someone watching a movie. So, if a computer starts expressing things in a way that suggests it has that capability, then we could start thinking of it as sentient.

  11. Thinker 12

    What concerns me about ChatGPT is not ChatGPT itself but, if this is the technology we mortals get to see, what similar but deeper technology is able to couple, and possibly is coupling, with all our Five Eyes data that is "able to be…" stored. Imagine Five Eyes monitoring all the email traffic there ever was and some super-ChatGPT being able to write reports on us, who we know, which toothpaste we use etc. Ditto Google, Facebook and so on. The mind boggles.

    Just before chat gpt went live, I happened to watch a movie called Eagle Eye from 2008 and it amazed me that the scifi movie I'd recently watched was already mainstream, free technology. https://youtu.be/kU44N6MKG9Q

    However, to temper the mood a bit:

    1. I asked chat gpt if it should concern me that it was getting out of control and it said no, because there was a team of regulators monitoring it. Of course, that's the team calling time, so not sure if it is deep-stating me into false comfort.

    2. I asked myself if I would prefer chat gpt or Trump running the USA and immediately felt better about things.

  12. That_guy 13

    My experience with ChatGPT and programming/coding is that I use it regularly and it's very useful. However, if there is a "bad way of doing things" in the community then ChatGPT will tend to do that as well; it's rare that it spits out code that is immediately usable, and you have to know a lot about what you want to do and how to ask questions using the correct terminology. In other words, it's not that useful to someone with absolutely no knowledge of programming.

    I actually think that the jobs most at risk aren't to do with coding but are when human beings digest a lot of text-based information and summarise it in some way. Journalists, paralegals, etc.

    In fact the list of 20 jobs at risk that Ad posted isn't too bad, and notably does not include much in the way of "pure coding" jobs.

    • Thinker 13.1

      To some extent, we can compare this "revolution" with the changes to, say, the car industry, where factory workers were laid off because robots did the job better, for longer, and with consistent accuracy. I remember reading, when the first Lexus LS400 came out in the 90s, that only 66 people were required to assemble the whole world's supply of the cars.

      The move to robotics caused a short-term crisis for auto workers and even changed the face of places like Detroit, but overall it has left auto workers with cleaner, safer working conditions in which their minds aren't toasted by repetitive, mundane tasks.

      I've been playing with chat gpt and it sure has its uses, if you stick to the 'grunt' tasks that are equivalent to what the robots in a car factory do. Ask it to write poetry and it might do better than Trump (https://www.amazon.com/Beautiful-Poetry-Donald-Trump-Canons/dp/1786892278) but Keats and Shakespeare/Bacon won't be turning in their graves just yet.

  13. roblogic 14

    I've played with the free version of ChatGPT, aka "GPT-3", and it's pretty impressive. GPT-4 is another level again – it can compose sonnets, develop software, write convincing blog posts, imitate voices, and manipulate photos. These deceptive practices are still detectable and probably should be illegal.

    A Google engineer was fired last year for raising the alarm when he believed the AI he was talking to had gained sentience. In other words it passed the Turing Test. No doubt later iterations of that AI will fool even more humans.

    I don't think we should be worried about AIs producing infinite paper clips. It is more likely that shadowy 3-letter agencies like the NSA already have even more advanced AIs and are working on weaponising them.

    Their main weapons? Probably memes, with humans being the vector for spreading misinformation. These agencies have been trying to hack human brains since the start – propaganda is a powerful tool.

    https://twitter.com/roblogic_/status/1613613675751038976?s=20

    • roblogic 14.1

      PS There are other words for disembodied entities that whisper to humans: egregore, collective unconscious, shadow transference, ghost.

      Zombie themes in popular media like 'The Last of Us' are a powerful analogy to the real existence of mind parasites and social contagions that can run through human cultures.

    • tsmithfield 14.2

      My son has access to the paid version of ChatGPT.

      He asked it to write a rather unflattering 100-word poem about me based on several inputs. What it came up with was really great.

      So, I asked him to get ChatGPT to write another poem of the same length based on the same inputs, to see if it would repeat exactly what it had written last time. But it came up with a completely different poem, which was quite impressive.

  14. Simon Louisson 15

    I recently downloaded ChatGPT and as an experiment asked it to tell me about our house, the Pilot's Cottage, a historic house in Seatoun.
    While it started off OK ("The Pilot's Cottage is a historic building located in the suburb of Seatoun in Wellington, New Zealand"), it soon launched into a fantasy novel of "alternative facts" that Donald Trump would have been proud of, including the year it was built – it was actually 1866, not 1895.
    It then said: "Today, the Pilot's Cottage has been fully restored and serves as a museum and community space," which is complete news to our family who have lived there for 25 years.
    Most of the spiel was complete made-up nonsense. Tread warily, folk.
