Thierry Spanjaard

Ethical and Responsible AI?

Last week, there were numerous reactions to a declaration by a Google engineer saying “the company’s Artificial Intelligence (AI) was starting to get conscious.” More specifically, Blake Lemoine, a Google engineer, claimed “he had profound discussions with the tech company's artificial intelligence system LaMDA (Language Model for Dialogue Applications),” according to MSNBC. Google management promptly reacted by putting Blake Lemoine on “paid leave” for violating Google’s confidentiality policy, they said.

Already, in the last century, Arthur C. Clarke declared, “Any sufficiently advanced technology is indistinguishable from magic.” Has AI become a sufficiently advanced technology? This could be a subject for the philosophy exam that concludes secondary studies in France…

There is no controversy about the value and progress of AI. In the payment industry, machine learning is used to evaluate in real time the risk level associated with a transaction, letting merchants and issuers decide whether to accept or decline a payment. Machine learning is generally defined as a subset of AI: “With machine learning, we are able to give a computer a large amount of information and it can learn how to make decisions about the data, similar to a way that a human does. Machine learning is a set of methods and techniques that let computers recognize patterns and trends and generate predictions based on those.” In practice, fraud detection is usually a combination of machine learning and rules, to reduce fraud while avoiding the decline of legitimate transactions.
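To make this combination concrete, here is a minimal sketch of the ML-plus-rules pattern in Python; the model choice, the feature layout, the block-list, and the RISK_THRESHOLD value are all illustrative assumptions, not any particular vendor’s fraud engine.

```python
# Minimal sketch: a learned risk score combined with hard business rules.
# Every name below (RISK_THRESHOLD, the block-list, the feature layout)
# is a made-up assumption for illustration.
from sklearn.ensemble import GradientBoostingClassifier

RISK_THRESHOLD = 0.8  # assumed cut-off balancing fraud losses vs. false declines
BLOCKED_COUNTRIES = {"XX", "YY"}  # placeholder rule data

def train_risk_model(features, labels):
    """Fit a classifier that scores the fraud probability of a transaction."""
    model = GradientBoostingClassifier()
    model.fit(features, labels)
    return model

def decide(model, txn_features, txn_country):
    """Apply rules first, then the machine-learning risk score."""
    # Rule layer: hard declines that never reach the model.
    if txn_country in BLOCKED_COUNTRIES:
        return "decline"
    # ML layer: estimated probability that the transaction is fraudulent.
    risk = model.predict_proba([txn_features])[0][1]
    return "decline" if risk > RISK_THRESHOLD else "accept"
```

The point of the two layers is exactly the trade-off described above: rules catch known-bad patterns deterministically, while the learned score handles the grey zone where blunt rules would decline too many legitimate payments.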


Machine learning and AI are also used for many functions that we now take for granted: SIM card registration and verification, ID verification, credit scoring, chatbots, spam email identification, remote management of connected IoT objects, KYC and AML procedures, and a lot more! In our industry and elsewhere, machine learning and artificial intelligence are becoming ubiquitous.
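To illustrate just one item on that list, spam email identification, here is a toy sketch using a bag-of-words classifier; the four-message dataset is invented for the example.

```python
# Toy sketch of spam identification with a bag-of-words Naive Bayes model.
# The tiny dataset is made up purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now",
    "Meeting rescheduled to Monday",
    "Claim your reward today",
    "Quarterly report attached",
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = legitimate

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(emails, labels)

print(classifier.predict(["Free reward, claim now"]))  # expected: [1] (spam)
```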

Consequently, it is long overdue that ethics be investigated and rules be set up to oversee the spread of the technology. Various initiatives are popping up to regulate the use of AI and establish a path towards so-called “Responsible AI.” As usual, most of the initiative comes from the private sector in the US, while European authorities are also trying to set up rules that provide a legal framework for the use of AI.


Microsoft, which, with its Azure Machine Learning and Azure Cognitive Services, is a major player in the AI field, has set up a methodology for Responsible AI. It starts with Responsible Assessment, including an AI fairness checklist and error management principles. Then comes Responsible Development, which encompasses multiple topics such as AI Security Guidance, conversational AI guidelines, SmartNoise, etc. The final part is Responsible Deployment, which includes a methodology for datasets and confidentiality rules based on a Trusted Execution Environment (TEE) or encryption.
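The fairness checklist itself is a process document, but Microsoft’s Responsible AI effort also includes open-source tooling such as Fairlearn; the sketch below shows what a basic fairness assessment can look like, with made-up labels and a hypothetical sensitive attribute.

```python
# Minimal sketch of a fairness check with Fairlearn, an open-source
# toolkit that originated at Microsoft. All data here is invented.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # ground-truth labels (made up)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # model predictions (made up)
group = ["A", "A", "A", "B", "B", "B", "B", "A"]  # hypothetical sensitive attribute

# Compare accuracy and selection rate across the two groups.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(frame.by_group)      # per-group metrics
print(frame.difference())  # largest gap between groups, per metric
```

A gap in selection rate between groups is exactly the kind of signal a fairness checklist asks teams to look for before deployment.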

On the European side, the European Commission is engaged in a process to set up an “EU AI Act” that will “safeguard the functioning of markets and the public sector, and people’s safety and fundamental rights.” Typically, the EU would classify AI systems into three risk categories: unacceptable-risk, high-risk, and limited- and minimal-risk AI systems. The proposed regulation will then define “conformity assessments” and transparency guidelines leading to explainable algorithms, and include cyber-risk management. In 2022, the European Council proposed amendments to the original European Commission proposal, and analysts expect a provisional AI Act to be put before the European Parliament by the end of this year.


So are we trying to regulate magic? Or will technology developments always be ahead of regulation?
