Written By:
mickysavage - Date published:
8:00 am, February 24th, 2019 - 34 comments
Categories: articles, child welfare, class, Deep stuff, families, internet, Media, Politics, youtube -
Tags:
I am a real fan of Mediawatch on Radio New Zealand. The show always provides cutting-edge, insightful commentary on all things media.
Last week the show had this fascinating story about a top human debater being pitted against a computer in a debate. The computer and the human were each given 15 minutes to research and prepare, and then present an argument for or against the premise that public subsidy of child care centres is a public good.
Here is the debate:
My first impression, the computer was not freaking bad …
It used to be chess. Contests between computers and humans were once interesting affairs, but computers have become more and more dominant. Only the best players, on a good day, can beat their computer opponents.
Peter Griffin in Noted has this description of the debate:
Last night’s debate in San Francisco between Project Debater, an artificial intelligence engine based on Watson, and Harish Natarajan, the 2016 World Debating Championship grand finalist, suggests this is where humans still have the edge – for now.
Project Debater sat on stage, a monolithic black tablet emanating an even, American-accented woman’s voice. Over 25 minutes she traded statements and rebuttals with Natarajan on the topic of whether pre-school should be subsidised.
Miss Debater, as she has been dubbed, was arguing for the resolution. She and her human rival had just 15 minutes to prepare for the debate. But able to trawl 10 billion sentences of reference material, mainly newspaper and magazine articles, it didn’t take long for the computer to formulate a strong argument for funding preschool.
The computer says yes
Laying out her case logically, she peppered her talk with OECD and US Centers for Disease Control report statistics and even quoted former British Prime Minister Benjamin Disraeli. Most impressive was Project Debater’s ability to listen to Natarajan’s arguments in real time, her display blinking away all the time, and counter them with a reasonable, fact-based rebuttal. She even managed some emotive flourishes at one point saying, “to be clear, my intention is not to leave a suitcase full of money for everyone to grab at will.”
Natarajan, without any research materials to draw on, had to rely on his best rhetorical skills. And his ability to do so skillfully is what separated man and machine. His rebuttals were stronger than Project Debater's; she largely just continued on with her narrative in favour of the resolution.
What really interested me was what the computer said about poverty:
And now a few words about poverty. While I cannot experience poverty directly, and have no complaints about my own standards of living, I still have the following to share.
Regarding poverty, research shows that a good preschool can clearly help with the disadvantages often associated with poverty.
And the computer referred to something Gough Whitlam said in 1973! That man was ahead of his time …
Given such enlightenment maybe it is time for our computer overlords to take over.
Of course it could all end in tears …
What’s missing from the comparison is an expert human assisted by the instant database searchability and recall capabilities of the computer.
Director Stanley Kubrick was ahead of his time too; with matte paintings, models and silent space, he still aces a number of CGI-laden, zap-'em space films.
“Open the Pod bay doors, HAL”–sod off Dave…
No need to go totally trepidatious on AI, well, unless your scooter or driverless car loses the plot perhaps! But AI under capitalist ownership and development will likely be horrendous indeed.
Yes. That’s my leaning as well. I base my sense of this on three premises:
One: the AI community is deeply divided on whether we can ever produce a conscious machine. We happen to live with a Professor of AI systems at present (a whole other story) and his sense is that, given we understand almost nothing about the nature of consciousness in our own biological systems, it's hard to imagine we could reproduce it in a machine.
There is the argument that self-awareness might arise in machines in an endogenous, spontaneous manner without humans even being aware of it happening. This is an interesting possibility with some deep consequences we must guard against.
Two: I asked a question the other day, "who are you going to trust?" This remains one of the core problems of computing: trustworthiness. We already have enough problems determining what information we can trust on the internet that is generated by fellow humans with whom we share a great deal in common. How then could we possibly hope to trust sentient machines we don't understand at all?
Thirdly, I fall back on my all-time favourite author, Vernor Vinge. He was a Professor of Computer Science at San Diego State University and published a number of short stories and a solid core of brilliant hard-SF novels:
https://en.wikipedia.org/wiki/Vernor_Vinge
Vinge's core themes returned to the exact problem of the OP many, many times, and in a remarkably diverse number of forms. He was famously writing about the internet in the '80s, before most people had heard of it. He also anticipated blogs, social media and virtual realities long before they became a reality. Critically, he also foresaw the many unintended consequences of these things. In one novel he memorably constructed a galactic web colloquially called the "Net of a Billion Lies".
And in another thread he called AI “one of the great failed dreams”.
Yet his novels are full of sentient life deeply entangled and enhanced by AI machines. In one story he has his High Tech humans each accompanied by a ‘robot’ which hovered about 1km above them, permanently linked to their human serving multiple layers of purposes, sensory, pre-processing, threat evaluation and defense.
This seems to me the most benign of the possible technological Singularities (another term Vinge coined): one where AI systems remain essentially the servants of humanity, not the other way around.
Computers will never be our overlords. They have no free will. In fact they have no will at all.
It is said that a good computer can beat any human chess champion, but even the greatest current chess engines cannot move a single piece on the board; they require a human being to do that.
I suppose you could build a movable robot arm and program a chess computer to move its pieces. But would that chess-playing robot drive across town to attend a chess conference?
No. The computer and its arm would still have to be loaded into trucks and driven to the chess venue.
But what if you had a fully autonomous robot able to drive across town to get to the venue and move pieces on a chess board?
Why would it do that?
It requires a human to want a chess playing computer to get to a venue to play chess.
A computer has no will. It doesn’t want to do anything, neither does it want for anything. It feels no pain, it feels no hunger, it has no desire, (for anything).
Only humans have desires and wants and ambitions and dreams and wishes.
Not only do computers not have free will, they have no will at all.
On the one hand: how can a bunch of on/off switches ever be anything other than a machine?
On the other: the human brain is a huge bunch of neural switches.
Get your point, but the brain is more chemical synapses than electrical.
Different levels to different parts and then the electrical side just joining the pathways.
Which is why psychoactive drugs work.
If it were all on/off, everybody would reach the same logical conclusions, and life would frankly be very boring.
It’s an interesting problem Jenny. The AI community is pretty deeply divided on this. A large part of the problem is that we don’t really understand the true nature of ‘free will’ even in humans.
For instance using biomedical scanning techniques it can be shown that there are definite signals occurring within the brain some fraction of a second before we are ever conscious of a decision to act. We are very complex creatures indeed.
Much of this rests on what we might call the religious proposition. If we take the view that life is an entirely materialistic affair, then it’s easy to accept the idea that we might build entirely material machines which can fully emulate biological ones.
If on the other hand we accept there is a domain of the abstract or non-material … and that our consciousness is the link between the world of facts and values … then constructing a material machine that can operate in the non-material domain seems a much harder problem. Impossible even.
This whole discussion may seem a bit fanciful, yet it holds some very real and rapidly encroaching implications for us all, within a decade or so.
Good topic for a post, Micky. In fact, it’s been on my mind for some time now to write about this too. I’ll keep my powder dry for now 😉
Jenny, robots in factories do plenty of physical tasks. A chess game can be played on a screen or tablet by humans; it doesn't require a physical chessboard.
I understand the newer AI chess computers aren't programmed with chess knowledge, but use machine learning to teach themselves how to play, and very quickly play very, very well. But of course that's a multi-million-dollar AI computer, not an app on a phone.
https://www.theguardian.com/technology/2017/dec/07/alphazero-google-deepmind-ai-beats-champion-program-teaching-itself-to-play-four-hours
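The AlphaZero system linked above actually pairs deep neural networks with Monte Carlo tree search, but the core idea of self-play — a program generating its own experience by playing against a copy of itself, with no human games as input — can be sketched in miniature. This toy loop (all names are my own invention, and it uses purely random policies rather than any learning) just shows the shape of a self-play run, on noughts and crosses instead of chess:

```python
import random

# Winning lines on a 3x3 noughts-and-crosses board, indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_policy(board, legal_moves):
    """Stand-in for a learned policy: pick any legal square."""
    return random.choice(legal_moves)

def self_play_game(policy):
    """Play one game where the same policy controls both sides."""
    board = [None] * 9
    player = 'X'
    while True:
        legal_moves = [i for i, v in enumerate(board) if v is None]
        if not legal_moves:
            return None  # board full: a draw
        board[policy(board, legal_moves)] = player
        if winner(board):
            return player
        player = 'O' if player == 'X' else 'X'

random.seed(0)
results = {'X': 0, 'O': 0, None: 0}
for _ in range(1000):
    results[self_play_game(random_policy)] += 1
# Even with purely random play, the first mover ('X') wins most often.
```

In a real system the game records produced by loops like this would be fed back to improve the policy between rounds; here the point is only that the machine needs nothing from us but the rules.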
My point is that a machine has no will to want to play chess, or any other activity, whether it be in the real world, or the virtual world. No matter how smart they are.
All their tasks and goals are set by humans. And, as far as I can envisage, they always will be. Even if the goals and tasks set for computers by human beings far exceed human abilities. In many applications computers already exceed human abilities, I can only see this trend continuing.
Whether computers will one day want to be our overlords?
Computers don’t want for anything.
Only human beings would want to enslave other human beings.
A supercomputer that one day had the ability to be our overlord will never be more than a tool used by another human being, or group of human beings, who 'want' to enslave the rest of us. Hopefully by that time 'the rest of us' will also have the use of machines of equal computing power, able to resist the impulses of the other human beings who want to enslave us using their machine.
A computational arms race of sorts.
Those days are far off. So we probably won’t have to worry about it.
If that time does arrive, hopefully also by that time, humanity will also be much more emotionally advanced, beyond such backward, primitive impulses.
Not every ‘thing’, wants to rule the world.
When electricity first came in suspicious older folks
would peer fearfully at the miracle magic installation
wary of the evil gremlins lurking inside 🙂
The ghost in the machine.
Even before the invention of machines, people used to attribute animals and trees, and even natural forces like the weather, with human motivations,
https://en.wikipedia.org/wiki/Anthropomorphism
Chomsky – “I have been hearing this for 60 years”
The threat of supercomputers
So the only threat from supercomputers comes from the people who will own the technology. Most likely they will use it to enrich themselves and impoverish the rest of us.
Can you have intelligence without curiosity?
Can you have intelligence without empathy?
Intelligence is often linked to other (human) traits, but could general ideas about what intelligence is, and how it develops, be the products of tunnel vision? Might there be some types of ‘intelligence’ that we won’t/can’t recognise?
Asimov wrote a number of short stories about “Multivac”, a fictional supercomputer that was given the task of governing humanity. Asimov pointed out a number of weaknesses in this solution, among them our tendency to cheat the system, and to want autonomy.
A short piece in The Atlantic notes that Silicon Valley thinks a technocratic utopia is the answer but that’s pie in the sky and the reality is that a proper functioning democracy needs a virtuous society in which civic education is valued.
@AB
Chomsky’s put down “I’ve been hearing about this for 60 years” is only partly founded in reality. In one sense he may be right, that AI will always be one of the failed dreams.
Yet in another sense it’s entirely possible that AI will become sufficiently capable and diverse that the difference becomes a meaningless question. We are already seeing this with Google, Facebook and other big data enterprises, we are already seeing disturbing political implications in China’s ubiquitous surveillance rollout.
Most likely they will use it to enrich themselves and impoverish the rest of us.
This has been true of every single technology humans have ever invented; from fire sticks onward. But none of us would discard every single human technology on that basis alone. (Or if you are really determined to do so, please check with any women in your life first to see if they want to permanently go back to the Stone Age.)
We always have this problem, that some people will naturally be better or quicker at implementing a new technological advantage, and that the rewards for doing so will flow disproportionately towards those who already have the most resource.
This is a basic law of nature; the more resource you already have, the more opportunity you will have, the faster you will grow that opportunity and the more dominant socially you will become. This isn’t necessarily about politics, greed or a lack of empathy … it’s fundamental to all natural systems.
What is remarkable about human social structures is that over time we've gotten better at extending the advantages of new technologies to more and more people. 200 years ago almost all people lived in absolute poverty; now almost a billion of us live extraordinary lives of comfort and security unimaginable to our great grandparents. Another five billion are catching up rapidly, and only one billion, mostly in India and Nigeria, remain in absolute poverty.
As an automation engineer, what I'm seeing is the so-called "Internet of Things" (IoT) rapidly climbing over the peak of the hype curve, from an over-egged idea into reality. This will prove one of the crucial tools enabling the Third Industrial Revolution; it should greatly improve our energy efficiency and open the door to a new sharing economy. Services like Uber are just a glimpse of what will become possible.
Railing against this AI transition is pointless and counterproductive; it will happen whether we like it or not. What we can do is pay attention to its potentials and its unintended consequences, and continuously ask the hard questions. To what purposes is this new technology being put? Is it enslaving us collectively, or is it being used to liberate the potential of each individual?
Probably both will be true at the same time, but we do need to know which direction we want to head in, and ensure we get there sooner.
PS It seems the reply function isn’t working on this thread for some reason.
Irony due to complexity.
http://www.scholarpedia.org/w/images/thumb/4/48/Complexity_figure1.jpg/300px-Complexity_figure1.jpg
@Poission
Yes, I've encountered over-complexity many times in my life; yet looking back it's also obvious that the curve is not fixed in concrete, it moves with time toward the right, i.e. what was over-complex ten years ago becomes good practice today.
In general the way we solve the complexity problem is to break down large problems into smaller layers, and over time these layers become better understood, more generalised, more capable and more trusted. This in turn enables more layers to be reliably combined and bigger problems to be solved.
In the IT world the usual order of priority is authentication, confidentiality and then availability. In my world of big machines and processes the order is the other way around: determinism and availability top all. For instance, once I had a system fully commissioned and verified (online in real time with the physical process), and assuming no one fiddled with it in the meantime, I'd expect to come back 20 years later and still find the system working: no reboots, no updates, just quietly gathering dust in a cabinet somewhere doing its thing.
@RL
Chomsky’s point (as I understand it) is that humans cannot reproduce in machine form something, the workings of which, we do not even comprehend in ourselves. And will never comprehend in ourselves because it is beyond the cognitive limits of the human organism to do so.
No doubt we can get machines to do complex tasks in a way that on the surface looks like human intelligence in a narrow sense.
Therefore if we need to be concerned about technology – it is not the technology in itself, it is who owns it and how they use it.
Our battles are primarily political. Being concerned about sentient machines taking over the earth is the idle indulgence of a de-politicised populace. It’s a highly irritating distraction.
Agreed. Building a machine that mimics humans is impossible because we won't understand how it works. Ah, but building a machine that mimics humans, well, that's like building a human, as humans are the great mimics; it won't matter whether the computer mimic understands itself. Most of humanity doesn't even notice how religious belief is essentially stalking an internal idealised other, though no doubt a computing mimic AI will prove the non-existence of deities if it only ever comes to understand itself.
“Another five billion are catching up rapidly and only one billion, mostly in India and Nigeria remain”.
Swallowed the Kool-Aid, Red?
https://tinyurl.com/y5aq54fx
“The global population as a whole hasn’t gained more wealth in the last 200 years,” he wrote. Instead, “the world went from a situation where most of humanity had no need of money at all to one where today most of humanity struggles to survive on extremely small amounts of money,” with much of the world having endured “a process of dispossession that bulldozed people into the capitalist labor system.”
They wouldn't be much worse than the net effects of governments in my lifetime – which fell more than a little short of enlightened democracy. But if we take Michels' law seriously, it would not be computer overlords, but IT folk, who, though many are pleasantly geeky, run as full a range of human foibles as any other rough sample of our species.
@AB
Yes that does seem a reasonable approach to the question. Peterson interestingly goes one step further and suggests that the notion of a ‘disembodied intelligence’ is a non-sequitur; that consciousness arises from the link between the material and non-material domains … but critically depends on both.
In other words, the way we perceive the world is dependent on our bodies as much as on the way our mind makes abstractions of reality. This doesn't close the door on AI; maybe the sensory embodiment of a vast networked system will generate its own form of awareness that we have trouble recognising.
Yet I tend to agree, sentient AI that ‘takes over the world’ is probably the wrong way to look at the problem. A more challenging prospect is that highly capable AI forms will create a new class of humans who are the first and most adept to exploit it. Given we don’t yet understand the potential advantage this will give them (and it could be enormous), we must not regard this as an annoying distraction.
As with all new tech it will have its unintended consequences; this one could come with some dramatic new blessings and curses, all at the same time.
hmmmm let me guess. The answer was 42?
@ KJT
And here are some rebuttals of Hickel's romantic notion that we all lived better lives before the Industrial Revolution.
https://capx.co/bill-gates-is-right-the-world-really-is-getting-better/
https://capx.co/the-romantic-idea-of-a-plentiful-past-is-pure-fantasy/
https://www.humanprogress.org/article.php?p=1745
And the idea that the masses of non-Western people lived noble lives of subsistence peasantry, free from disease, want and oppression is equally fanciful.
In one sense you are correct, my views have changed; the idea that the world is a better place now than even 40 years ago is something my partner doggedly argued to me for a very long time. Many of us who were influenced heavily by the '60s and '70s grew up with a deep suspicion of the modern world, pained at its cavalier treatment of the planet, and disturbed by the gross inequalities it seemed to tolerate.
Yet alongside that vision there is another way to look at what is happening; and in particular the past five years or so I’ve been fortunate to work in many different countries and seen for myself the very dramatic changes in the lives of people all over the planet. It’s not even, it remains patchy and arbitrary … but the fact is that fully half the people in the world now live moderately middle class lives:
https://capx.co/bourgeoisie-of-the-world-unite/
Tell that to a Foxconn worker, or an African farmer now starving in a city slum.
Money is not wealth.
And averages are not indicative of the whole.
@KJT
The transition from subsistence poverty to the middle class is not always pretty or easy; Victorian-era industrialisation is evidence of that. But by 1900 few people in Europe would have chosen to revert to life 100 years prior.
And you can argue all you want that averages are meaningless from the perspective of the individual, but they still clearly indicate which direction we are heading in.
The direction we are all heading in, is environmental collapse, followed by mass deaths and displacement of people.
Not to worry. We will be replaced by computers.
Just a heads up with this thread to iPrent
The reply function works on a phone (android, at least), but not a PC/laptop
Cheers!
rl
Yes, I've encountered over-complexity many times in my life; yet looking back it's also obvious that the curve is not fixed in concrete, it moves with time toward the right.
No. With complex problems, say Newton's third law, there is a reflection off the concrete and the arrow of time (that one should be aware of).
https://www.ngssphenomena.com/arrow-vs-concrete/
She's way too human. You don't want a sentient being with no biological frailties and human emotions. Way too dangerous. No.
Interesting that scientists are finding levels of intelligence and traits in animals previously thought to be unique to humans.
Not news to those of us who have pets or work with animals, though.
I’ve known for years that cats have a sense of irony.
Despite all its stunning progress and sophistication, AI is not intelligence, nowhere near it.
According to Scientific American, if we are ever to develop intelligent machines we would practically have to ignore all the progress made in developing working neural networks and go right back to first principles.