Let’s take a hard look at the value AI delivers, and let’s safeguard our own intelligence, creativity and critical thinking; they will remain our most powerful tools for a very long time.
Very well balanced article, Marco.
However, I think one of the key things that will differentiate AI from us is the very grey area of conscience, ethics and variable value ranking. This is such a huge, largely uncharted ocean that even trying to decipher it ourselves has not brought us much closer to understanding it. Getting an AI onto this path would be truly difficult, at least for those who care. There are, of course, unscrupulous people in the world for whom this may not be a real issue!
Bobby
Thank you, Bobby. Indeed, ethics is a key area of risk here. However, I will disagree with you and argue that AI is not that different from us on this. I will make three points:
First: different human cultures and people have different ethics and values; indeed some of the worst horrors in history have been perpetrated (and continue to be) in the name of ethical principles, often under the label of religion. Getting AI on the ethical path would not be as easy as agreeing on Asimov's three laws of robotics;
Second: we cannot rule out that an AI would eventually develop its own ethics and values, outside of our control; we might like them or not, but again that is true for ethics and values of other humans too;
Third: the difference then lies not in the ethics but in the power: would an AI be able to enslave us or wipe us out if its ethics leaned in that direction? And here I believe the answer is: not for a very, very long time, if ever.
True. Ethics vary based on point of view, as do values and how we rank them for ourselves in different situations. However, our ethics and values develop both from our environment (family, education, peer group etc.) and from our own 'right' and 'wrong' meter. My worry is that an AI might be exposed to the mindset of its creator without the benefit of its own meter. Its learning environment would, of course, be the sum of all the knowledge it is exposed to as it learns.