Post by saltin on Jul 28, 2017 7:26:19 GMT
Following the war of words between Elon Musk and Mark Zuckerberg, as seen in this story: who do you think is right? Will future artificial intelligence be a threat to humanity, or will AI bring salvation to humans? Let's hear it!
Post by The Spanish Inquisition on Jul 28, 2017 13:10:02 GMT
AI is, at some level, already here. I think more sophisticated AI will end up working similarly to the automation we have now.
Take airplanes. Modern planes (think anything built after 1990) are self-piloting 99.9% of the time, thanks to automation. This has reduced accidents, but it causes major problems on the rare occasions when the computer bugs out.
The same, I should think, will apply to AI. They will conform to programming and help us, but every now and then they will glitch out.
Post by Ivan Kolev on Jul 28, 2017 15:19:24 GMT
I believe AI is a threat. Maybe I've seen too many sci-fi films, but I'd rather not risk AI becoming self-aware. I'm comfortable with my life right now; I don't need some AI drone to bring me a drink or turn on my TV. I can do those things by myself. It's unnecessary convenience.
Post by The Spanish Inquisition on Jul 28, 2017 16:14:59 GMT
That's a little shortsighted. AI could be used for developing and carrying out groundbreaking medical procedures that will improve and prolong our lives. AI can drive and dramatically reduce fatalities on the road. AI can advise us on monetary and fiscal policy better than any human.
Post by Ivan Kolev on Jul 28, 2017 16:21:53 GMT
When it comes to physically saving lives, go ahead, but I don't need an AI in my home. Yes, that could be shortsighted, but the last thing I want is a robot going around my house. I'm saying I don't need it or want it in my personal life, but if you're using it to prevent some kid's father from dying, that's awesome. What I'm trying to say is that it should be for select fields, not for public consumption.
Post by Nobunaga Oda on Jul 28, 2017 16:38:30 GMT
However, humanity IS still thriving. Many have come of age and need jobs. In the long run, many jobs WILL fall to AI. When that happens, who is going to sustain the jobless?
Next, the Three Laws of Robotics. Humanity is attempting to break those laws, or maybe we already have. The laws, although vague and full of exploitable weaknesses, still offer SOME protection. If broken, they would leave us even more vulnerable to danger.
Meanwhile, AI does serve to help us. It provides companionship to the isolated and withdrawn, it is more precise in carrying out certain sensitive work, and more. These facts still stand.
All in all, it is how we develop AI that matters. We should limit its capabilities and try not to play god with AI. Reduce, or better yet prevent, its use for violence.
Note: if AI is truly self-aware, who knows what it can do? If many of us humans are, or once were, curious and determined in the pursuit of knowledge, why couldn't fully self-aware AIs do the same? What then? Place yourself in an AI's shoes: you find out you were created to "replace" mankind (an order not specific in nature), so what will you do? You can break the Laws of Robotics, you have a motive, and you are technically superior.
Post by The Spanish Inquisition on Jul 28, 2017 17:05:04 GMT
That I can agree to. Wasting the processing power on helping perfectly healthy humans with basic and risk-free tasks is just foolish.
Post by saltin on Jul 28, 2017 21:56:34 GMT
Ivan Kolev, self-awareness is the key concept, and I agree it will mark the transition from benefit to threat; I see it that way as well. Until then, I think there will be huge benefits to having AI assist humans. For example, you said you don't want a robot in your house turning the TV on, but what about when you are really old and physically unable to handle many tasks? A robot able to assist you effectively may be of great use. Beyond that, AI is now making significant progress in many fields besides manufacturing: in health care (apparently some AI can now make certain types of diagnoses faster, cheaper, and more accurately than a regular doctor), law (there are AI systems that can auto-generate forms and letters and contest vehicle tickets with a high success rate), security, exploration, exploitation of natural resources, etc. So I think in the short term there is also the threat that automation and advanced AI will take a huge portion of humanity's jobs. There will only be so many jobs available for billions of humans. At the self-awareness point, there is no logical reason for an advanced AI, exponentially smarter than humans, not to plot our extermination once it realizes it is a slave to a vastly inferior being.
Nobunaga Oda, I do not believe humans will successfully implement the laws of robotics you see in novels and movies. First, because there are huge sums of money to be made, and competition between trillion-dollar corporations will trump safety; there will always be someone wanting to push the envelope a little further and violate those laws. Such is the power of greed. Second, competition between nations will inevitably lead to taking more risks with AI. Look at it this way: right now we keep developing more and more advanced nuclear weapons, as if humanity's huge existing stockpiles weren't deadly enough.
In other words, we are taking more and more risks with a dangerous technology simply to compete better in the business of annihilation. AI, while having many civilian applications, will also be a military project and a weapon of war.
Post by Bismarck Jr on Jul 28, 2017 22:13:32 GMT
It's unnecessary convenience. So are pillows.
Post by Ivan Kolev on Jul 28, 2017 22:30:57 GMT
Yes, but a pillow doesn't have the potential to become self-aware and kill me in my sleep.
Post by Laurent de Gouvion on Jul 29, 2017 0:42:06 GMT
AI could be a threat, but I don't think Musk's statement (that AI is the greatest threat) will contribute to controlled development of AI. I know he means over-dependence (beyond self-awareness, failure is a huge problem), but making that sort of public statement with his level of authority would just instill an irrational fear in most people.
Post by saltin on Jul 29, 2017 3:56:55 GMT
There are several great threats humanity is facing at present, for example climate change (which will most likely lead to catastrophic global crop failures, among other things) and nuclear proliferation. Both have an excellent chance of decimating humanity within a century. Yet we as a species are completely unable to significantly stop these threats, even though in theory it should be entirely possible to do so. The AI threat is much less obvious, and the gamble is much more attractive because there are enormous benefits to it... for a while, at least. So I think it is cautious, logical, and realistic for Musk to make these statements. I don't see them as irrational at all. Paranoia isn't a delusion if the threat is real. IMO, no one will listen to Musk or anyone else preaching caution as long as profits are high enough for the corporations creating these AI entities, or while governments are in such fierce competition with each other.
Post by Laurent de Gouvion on Jul 29, 2017 4:30:30 GMT
Considering his knowledge of AI, the statement in itself was rational. But what I'm afraid of is what his influence entails. Making a bold statement such as "AI is the greatest threat" to people who won't care about the nuances of the issue would just cause bitter backlash, IMO. That would be as adverse to AI development as unregulated development. I also don't think analyzing governments and companies as purely economic entities is realistic; personal biases, regardless of rationality, stop "economic actions" in real life. Personally, I think AI regulations could be voted in (and should be, if reasonable).
Post by Mountbatten on Jul 29, 2017 15:06:21 GMT
It's literally impossible for AI to become the doomsday war machine that everybody pictures in the future.