
Music Banter (https://www.musicbanter.com/)
-   Current Events, Philosophy, & Religion (https://www.musicbanter.com/current-events-philosophy-religion/)
-   -   The Wow I Can't Believe That News Story Thread (https://www.musicbanter.com/current-events-philosophy-religion/30710-wow-i-cant-believe-news-story-thread.html)

Frownland 08-04-2019 09:35 PM

Quote:

Originally Posted by jwb (Post 2070358)
Why? Give me some reasoning besides stating it as fact.

https://futurism.com/1-evergreen-mak...earning-and-ai

It's like assigning sexiness to your computer monitor for showing you Pornhub videos.

jwb 08-04-2019 09:41 PM

Quote:

Originally Posted by Frownland (Post 2070360)
https://futurism.com/1-evergreen-mak...earning-and-ai

It's like assigning sexiness to your computer monitor for showing you Pornhub videos.

Please answer in your own words.

I'm scanning through the article and it's talking about AI in general, the kind of basic machine learning algorithms that currently exist, which aren't remotely close to actual AGI.

AGI stands for artificial general intelligence and in a nutshell means building AI that is capable of cognition comparable to human beings. So how will it be based on human intelligence, match or exceed us in terms of thought, yet lack basic autonomy/self-determination?

Frownland 08-04-2019 09:49 PM

So we're working with different definitions. I thought you were just referring to AI, which doesn't apply to your arguments given the basics of how it functions.

Having met humans, human intelligence in a box is definitely not a threat. It's how intelligent the box thinks it is that's the threat. That said, you don't think we can develop degrees of separation where one AGI system will determine that x and y are the right choices, while humans are the ones who decide to actually implement x and y? What would AGI self-interests look like that would make that control necessary?

jwb 08-04-2019 09:55 PM

Quote:

Originally Posted by Frownland (Post 2070362)
So we're working with different definitions. I thought you were just referring to AI, which doesn't apply to your arguments given the basics of how it functions.

Having met humans, human intelligence in a box is definitely not a threat. It's how intelligent the box thinks it is that's the threat. That said, you don't think we can develop degrees of separation where one AGI system will determine that x and y are the right choices, while humans are the ones who decide to actually implement x and y?

I did make sure to specify AGI from the beginning, which has the specific definition I'm using, as even your article notes.

And no, I don't have any faith they will contain it. Given that virtually everything is networked these days and they can't even really properly secure the systems we use right now, I believe our advances in technology develop at a faster rate than our ability to control said technology.

Frownland 08-04-2019 09:57 PM

Quote:

Originally Posted by jwb (Post 2070363)
I did make sure to specify AGI from the beginning, which has the specific definition I'm using, as even your article notes.

And no, I don't have any faith they will contain it. Given that virtually everything is networked these days and they can't even really properly secure the systems we use right now, I believe our advances in technology develop at a faster rate than our ability to control said technology.

What self-interests would AGI use its autonomy to preserve?

OccultHawk 08-04-2019 10:08 PM

Quote:

Originally Posted by Frownland (Post 2070364)
What self-interests would AGI use its autonomy to preserve?

We don’t know, but we do know that everything we understand to be sentient is also inclined to survive and propagate.

jwb 08-05-2019 06:38 AM

@ OH

Exactly. It's a complete gamble. And technology proliferates in a decentralized, ad hoc sort of way, so even if there is a specific way to harness AI without it ever threatening us, it's hard to ensure that nobody is going to go outside those parameters and create something that could pose a threat.

There's an inherent incentive to create AGI because of the inherent utility of intelligence, and since AGI would match and eventually surpass human intelligence in every domain, that would include creating new, even more effective forms of AGI.

So with each successive generation of AGI, the engineers creating the next generation grow smarter and smarter, because at that point the engineers themselves are robots.

This creates a feedback loop that allows for exponential growth in AI that humans will not be able to keep up with, because we are working within inherent biological constraints.
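
A toy sketch of that loop, for the sake of argument (every number in it is a hypothetical assumption, not a prediction): if each generation's design skill scales with its own intelligence, capability compounds geometrically while the human baseline stays flat.

Code:

# Toy model of the recursive self-improvement argument.
# All parameters are made-up assumptions for illustration.
human_level = 1.0          # fixed biological baseline
agi = 1.0                  # generation 0: roughly human-level
gain = 0.5                 # assumed design improvement per generation

for generation in range(1, 11):
    agi *= 1 + gain        # smarter engineers build smarter successors
    print(f"gen {generation}: {agi:.1f}x human level")

# Capability compounds (1.5x, 2.2x, ... ~57.7x by generation 10)
# while the human baseline never moves off 1.0x.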

@ elph

There's no reason they shouldn't be able to, if we are ourselves biochemical machines. Without invoking magic to explain human intelligence, it's hard to imagine that it's impossible.

But yes, this entire argument is predicated on the assumption that AGI gets developed. If it doesn't then obviously it doesn't pose any threat.

OccultHawk 08-05-2019 07:05 AM

Quote:

Originally Posted by elphenor (Post 2070376)
We don't even know yet that AI can ever mimic human intelligence

To merely mimic would just mean to pass the Turing test.

No two intelligences on earth are really comparable, be it plant, insect, microbial, or animal (including human). Whether a machine’s intelligence is even comprehensible within the parameters of our consciousness is likely irrelevant to the amount of harm it may inflict.

We have a very speciesist definition of what intelligence is. We sure think we’re smarter than mosquitos with our books and buildings and all, but if Kafka turned you into one you’d be the dumbest mosquito in town. And boy, they sure can kill some folks.

jwb 08-05-2019 07:17 AM

@ OH

That's a good point as well. Certain insect colonies, like those of ants and bees, have a collective form of intelligence that makes them capable of making complex decisions as a group without any individual ant or bee being conscious.

They're all basically robots following very simple rules, which collectively results in complex decision-making as a group. We have a very hard time controlling or containing the spread of these insects as it stands, even with the supposedly limited intelligence they have.

That's a system that was developed ad hoc, by trial and error, via evolution. It's easy to imagine that with direct engineering we could develop systems far more advanced and potentially menacing.
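
A minimal sketch of that kind of emergence (the sites, qualities, and rates are all made up): each scout follows one dumb local rule, yet the swarm reliably converges on the better nest site without any individual ever comparing the two options.

Code:

import random

# Hypothetical toy model: 200 scouts choose between two nest sites.
# Each scout's only rule: the worse your current site, the more likely
# you are to give up and copy a random neighbour's choice.
SITE_QUALITY = {"A": 0.9, "B": 0.4}   # assumed qualities; A is better
scouts = [random.choice("AB") for _ in range(200)]

for step in range(100):
    for i, site in enumerate(scouts):
        if random.random() > SITE_QUALITY[site]:
            scouts[i] = random.choice(scouts)

print({s: scouts.count(s) for s in "AB"})
# Typical output: {'A': 200, 'B': 0} -- a group-level decision emerges
# from individuals that never weighed A against B.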

@ elph

We won't "know" until it happens. But the general trend is that AI is advancing and already surpassing humans in specific domains of intelligence, such as chess, Go, etc.

OccultHawk 08-05-2019 07:23 AM

Quote:

we don't totally understand how human intelligence works,

We don’t understand how any intelligence works.

Quote:

capable of making complex decisions as a group without any individual ant or bee being conscious.

We don’t know if they’re conscious. I personally think they are.

