Fears and uncertainty about the future of artificial intelligence are groundless, Salesforce's chief scientist said this week.
Responding to Elon Musk's warning that AI poses a "fundamental risk to human civilization," Salesforce's Richard Socher explicitly disagreed.
"What some folks like Elon Musk are worried about is an existential threat that AI might pose, and that is really unfounded because we don't really have a credible research path right now towards artificial general intelligence that will set its own goals," the scientist told CNBC's Dan Murphy at the annual Innovfest Unbound tech conference in Singapore.
The current state of AI, he said, is "not even close" to human-level understanding, since computers do not yet have the "transfer learning capabilities" of the brain. The algorithms behind AI today, he added, can focus only on a "particular domain and a specific problem."
Socher’s research group at Salesforce is presently working on multiple-domain and multi-task algorithms as the next frontier of AI, hoping to be a step closer to artificial general intelligence.
He pointed out that in tasks such as doing math, summarizing text and translating between languages, AI is on par with, or even superior to, human capabilities.
“It is just faster and more efficient and maybe not as accurate on some of the subtle details, but it is still very cheap, of course, and very fast,” he said.
Despite its many positive applications, Socher noted that AI is "only a tool" and "it is only as good as the people who use it, and the training data that we give it."
“We have to be very careful about how it is impacting people, where the training data sets are coming from, and how do we make AI have, in some cases, better ethics than we ourselves have,” he said.
As for whether an ethical framework should be incorporated into the technological development of AI, Socher said "it makes sense to regulate it when it is applied to something specific."