Quote:
Originally Posted by OccultHawk
I think mankind should try to apply Asimov’s laws to AI starting now. Do you have any thoughts on that?
Yep. They should think about those things now, but they won't. Even the smartest decision makers at the highest levels of government right now can't discern basic misinformation they see on Twitter. How will they handle autonomous, intelligent machines that far exceed our own mental capabilities?
What's legitimately scary about today's world is that 99.9% of the population doesn't understand the technology that drives the economy, our supply chains, manufacturing, etc., much less the basic, limited AIs that already power social media.
So if some AI in some industry crosses the wrong threshold and gains both agency and actual awareness, that would probably spell the end of society within a few decades. You wouldn't even need to wait for an environmental catastrophe, and the rich wouldn't know what to do about it either. Buying power can save you from a mob trying to break into your mansion, but it won't save you from something that could theoretically hack any system remotely and cause every airplane in the world to crash simultaneously.