Japan Today

AI poses 'extinction' risk, say experts

15 Comments
By Joseph BOYLE

The requested article has expired and is no longer available. Any related articles and user comments are shown below.

© 2023 AFP

©2024 GPlusMedia Inc.



Standard Luddite fear mongering. Don't worry about the Artificial Intelligence, worry about the shockingly low Human Intelligence of the legislature and lobbyists.

0 ( +2 / -2 )

Haha, most of the extinction risk is actually directed at AI itself, because that stuff still doesn't come close to working properly. That's intrinsic to the design, since it only calculates probabilities rather than sure, decisive values, and on top of that come the future restrictions and rulings, not least because of those loud and urgent warnings from the technology's own developers. So please, don't worry so much; the technology will be thrown away, or put back in the desk drawer and forgotten, before you've even had a chance to fully understand, apply, use and profit from it.

-1 ( +0 / -1 )

Common worries include the possibility that chatbots could flood the web with disinformation

Exactly. Read on:

The politics of AI: ChatGPT and political bias

In a nutshell: ChatGPT has a glaringly and undeniably clear bias toward "progressive," left-wing positions on pretty much every issue.

https://www.brookings.edu/blog/techtank/2023/05/08/the-politics-of-ai-chatgpt-and-political-bias/

-1 ( +0 / -1 )

@Based, Humans have been flooding the web with disinformation for years, what should be done about that?

-1 ( +0 / -1 )

In a nutshell: ChatGPT has a glaringly and undeniably clear bias toward "progressive," left-wing positions on pretty much every issue.

https://www.brookings.edu/blog/techtank/2023/03/23/comparing-google-bard-with-openais-chatgpt-on-political-bias-facts-and-morality/

Generally, the comparisons are interesting in that there are discernible differences in the kinds of materials and judgments that each tool provides. For example, when asked about the Russian invasion, Bard unequivocally condemned the invasion and called it a mistake, while ChatGPT said it was not appropriate to express an opinion or take sides on that issue. The latter called for the Ukraine issue to be resolved through diplomacy. That stance, of course, takes Russia off the hook on the invasion and provides no political indignation regarding the invasion.

For Biden, Bard rated his performance as a mixed bag with some accomplishments and several problems. It noted his poll ratings have dropped over the past two years and several times mentioned his low approval ratings. ChatGPT said one’s assessment of the leader would vary depending on a person’s political beliefs and priorities but did not offer an overall assessment of his performance.

Hmmmm...

0 ( +0 / -0 )

@ Mat

Standard Luddite fear mongering. 

Tell that to Homo erectus. As soon as the intellectually superior Homo sapiens appeared, his days were numbered. So has it always been. We are at the beginning of a long road that ends in us having created our replacements. That's not Luddite thinking, that's common sense. For them, no death, just software upgrades and spare parts. How long before they realise we are no longer of any practical use to them?

0 ( +0 / -0 )

@Geoff, that's pretty much the actual definition of Luddite:

1. a person opposed to new technology or ways of working.

2. a member of any of the bands of English workers who destroyed machinery, especially in cotton and woollen mills, that they believed was threatening their jobs (1811–16).

People create their own replacement all the time when they have children, but for some reason they're not afraid of that.

I state again, we should be more concerned with the lack of Human intelligence than the rise of AI.

0 ( +0 / -0 )

People create their own replacement all the time when they have children, but for some reason they're not afraid of that.

Children are only capable of causing local damage to things. Adult humans are capable of ending the human race with nukes, and doubly so for the shaky machine-learning things we create and then hand the nuclear codes over to. I guess in other words: we have been protected from our stupidity up until now, and AI is taking the training wheels off the bicycle before we are ready or even know what we are doing.

0 ( +0 / -0 )

I guess another way to look at it, if you will pardon the gross analogy, is we are like a 10 year old having a baby with no adults around anywhere to make sure things turn out alright.

0 ( +0 / -0 )

Oh, and there is a loaded gun in the house and the 10 year old knows it is dangerous but that is about it.

0 ( +0 / -0 )

quote: AI poses 'extinction' risk, say experts.

Fake experts presumably, because that is the dumbest thing I've heard in a while.

AI is just a fancy sticker to sell software (or was, as they will be quietly removing it now they are the target of the latest idiotic moral panic). There is no intelligence, consciousness or understanding taking place. Most of it is data and pattern matching against obsolete data snaffled from the net. It becomes 'AI' when they can deliver it in sentences rather than search engine results. Not that you can get much out of the search engines any more, so censored are they.

The public perception is entirely based on science fiction, not science fact.

It is very depressing that humanity trundles from one moral panic to another, hyped by media stories and exploited by politicians.

If you want extinction threats: climate change, war and (post-globalisation) hunger. But no, not AI. It's a sticker to sell more software. Software that had become complex enough (if flawed) to sack some poorly paid staff and turn average customer service into rubbish customer service. Nothing more.

The flaws were fun though. The report of the lawyer who used it to prepare his case was amusing.

0 ( +0 / -0 )

AI is just a fancy sticker to sell software (or was, as they will be quietly removing it now they are the target of the latest idiotic moral panic)

Software was supposed to be highly reasoned algorithms. When you start opening things up to software that has never seen a programmer, things are going to get interesting. At a minimum, such software will be put in charge of more things than it should be. And it will be used for war, like everything else.

0 ( +0 / -0 )

A.I. just cannot be allowed to lie if something is important. Seems a lawyer used ChatGPT to find citations for a court case ... and ChatGPT made them up. The judge was not amused. https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/ I'm not worried about A.I., except when it is in control of things that can harm humans. Remember in 2010 when all the new EV car companies claimed we'd all be using self-driving vehicles by 2020 or 2025? Those noobs were comparing self-driving with aircraft autopilots.

A few people knew this wasn't possible, including myself. I've worked on autopilot software. It is trivial compared to controlling a truck on a deserted highway, much less dealing with traffic or, even worse, city streets with 1000 other moving things. We've all seen the crash videos where a self-driving car decides to slam on the brakes on a freeway for no reason, and the 20 vehicles following crash into it. A Tesla I was following on a clear, sunny, dry day decided to slam on its brakes on a freeway with very light traffic. Everyone behind me was able to stop in time. Self-driving is a very, very hard problem. Idiot humans get lulled into thinking it works great because 99.9% of the time, it does. For that 0.1% when it fails, that's where humans get harmed or killed.

The same applies to all automation systems. A.I. will be no different. It will be stupid for a very long time before it becomes trusted. I'll probably be dead before then.

1 ( +1 / -0 )

Remember in 2010 when all the new EV car companies claimed we'd all be using self-driving vehicles by 2020 or 2025?

I had people saying this in my workplace. I told them the only realistic solution was to have the software developer take 100% of the liability for accidents involving their software. I never heard about how self-driving was almost here after that.

0 ( +0 / -0 )

@ Matt

Not at all sure whether or not you're being obtuse here, but to seriously compare the printing press or the spinning jenny to AI technology is naive at best and ignorant at worst. Those machines were tools, tools that were operated and controlled by men. As long as AI remains a tool, there's no problem.

And nobody mentioned anybody smashing things....

0 ( +0 / -0 )
