WHAT DO WE NEED TO DO NOW?
As AI becomes an ever more central part of people's lives, people should be able to trust it. Trustworthiness is a responsibility of every company and professional pushing and promoting AI: the goal should not be to find use cases merely for the sake of using the technology. As Andrew Ng put it, instead of starting AI-first, you should start Mission-first. And even before that, we should adopt an Ethics-first approach, because AI is only a tool, and the problem is never the tool itself but the specific use we make of it. A hammer is not good or evil in itself; what is good or evil is the purpose of the human who wields it, whether to hang a picture on a wall or to harm another living being.
As a society, we need a strong attachment to our values and the rule of law, and we must draw on our proven capacity to build safe, reliable and sophisticated products and services that enhance our lives.
While AI can do much good, including by making products and processes safer, it can also do harm. This harm might be material (to the safety and health of individuals, including loss of life or damage to property) or immaterial (loss of privacy, limitations on the right to freedom of expression, harm to human dignity, discrimination, for instance in access to employment), and it can relate to a wide variety of risks. Our society as a whole, our governments and, most of all, the technical community pushing AI forward need to concentrate on minimizing these risks of potential harm, in particular the most significant ones.
The main risks related to the use of AI concern the application of rules designed to protect fundamental rights (including personal data and privacy protection and non-discrimination), as well as safety and liability-related issues.
The use of AI must support and help enhance the rights to freedom of expression and freedom of assembly, human dignity, and non-discrimination based on sex, racial or ethnic origin, religion or belief, disability, age or sexual orientation. Risks to these rights might result from flaws, poor design or even the intentional misuse of AI.
Given the major impact that AI can have on our society and the need to build trust, it is vital that the technical community and governments remain grounded in our values and fundamental rights, such as human dignity and privacy protection.
We invite you to reflect on this urgency and to join our quest to have society, governments and the technical community align, respect one another, and build trust through a legal framework and best practices for the correct and ethical use of AI.