Quote:
Originally Posted by Frownland
(Post 2166744)
It's stupid because it's less realistic and shows a general misunderstanding of how AI functions. It's just as much hubris to think we can create something so powerful too.
It's more likely that our overreliance on technology would turn fatal and destructive in a Carrington event scenario. Or maybe we'll just die in good ole fashioned heat waves and famines that we're taking care of without some dorky ass computers.
Less realistic than what, exactly? If AGI is a distinct possibility and people are actively pursuing the goal of creating it, then I don't think it's stupid to look at that and have some level of apprehension about the potential for it to backfire. Especially when, in the same vein, people are pontificating about some sort of futuristic post-scarcity utopia where the technology we built does everything for us. That's no more grounded in demonstrable reality than the Terminator scenario.
In terms of risk assessment, we routinely consider potential threats that are far more likely not to happen than to happen: biological warfare, terrorism, nuclear annihilation, etc. Saying something is less likely is not a good argument for not worrying about it at all.
As for understanding how AI functions, let's be perfectly clear here. There are experts who know far better than you or I do how current AI functions, and they fall on both sides of this debate; they disagree with one another about the potential dangers at hand. I would recommend Jaan Tallinn's lecture "The Intelligence Stairway" for a sober case for why this could be a potential existential threat.
Current AI is nowhere near the level of AGI, and we might indeed never get to AGI, in which case, yeah, it's not a threat, the same way nukes wouldn't be a threat if we had never built them. But it does give me pause that so many scientists and engineers are optimistically dedicated to the cause of bringing it into existence.