Principles within AI

How developing AI systems can have social implications.

In an urgent letter, a group of Dutch scientists urged the Prime Minister of the Netherlands to tread carefully with the introduction of a contact tracing application during the coronavirus crisis. The application might be effective in tracking possibly infected citizens, but deploying such an app puts several human rights at risk. Does the expected effectiveness of contact tracing applications outweigh the cost of human rights being at stake? Such conflicts between principles often occur when deploying new technologies, and this clearly applies to Artificial Intelligence (AI) solutions as well.

In one of our previous blog articles, ‘Is your AI racist?’, the importance of fair AI was emphasized. Several examples of AI systems deployed within society showed that awareness of the possibility of bias in AI systems is not self-evident, which can result in unintentionally racist behavior. We strive to be aware of possible ethical issues within AI, which is why this blog continues to spread awareness on the topic.

Photo by Romain V on Unsplash

In this follow-up blog, ethics within AI is addressed from another perspective: the effect of principles. As a company that creates data-driven solutions for all sorts of clients in several fields, we find that the ultimate objectives and expectations of deliverables differ for each project. In general, a set of key principles has been pursued within AI over the last few years (Whittlestone et al., 2019). These principles include that AI should not do harm and should serve the common good, that it should not undermine human rights, and that it should be explainable and transparent. These are somewhat self-evident principles that could apply to any new technology, and although they are important, they do not have a direct mitigating effect on the deployment of new technologies. Why not?

First, principles can be interpreted differently by different groups. For example, the idea of fair treatment might appeal to everyone, but different groups have different ideas of what fair treatment means: it is not specified in the principle itself. Second, because these principles are quite general, applying them in practice is difficult. Finally, principles will often conflict with each other in some way. In the following section, two examples are discussed that relate to these issues.

“[...] AI-based technologies should be used for the common good, should not be used to harm people or undermine their rights, and should respect widely-held values such as fairness, privacy, and autonomy.”

The System of Risk Indication (SyRI) of the Dutch government was created to detect fraud. Within the municipality of Rotterdam, this system was deployed to detect fraud within the social assistance benefit system. Detecting fraud would prevent the government from making payments to people who do not actually need the money, which would serve the common good. On the other hand, the workings of the system could not be explained: otherwise, it would be easier for fraudsters to circumvent the risk indication. For citizens, the system was therefore a black box. Moreover, it appeared that SyRI was mainly marking citizens who lived in more deprived neighborhoods of Rotterdam as a risk, suggesting that merely living in a certain area increased a person's chance of being flagged as a fraudster. This affects the human right not to be discriminated against. If we connect this to the key principles mentioned above, the system does not abide by the principle of explainability, and in this case it discriminates against citizens based on their residence.
The trade-off in cases like SyRI is a common one, but here the benefit of detecting fraud does not outweigh the cost of invading citizens’ right to a private life. The SyRI project is therefore not (yet) suited for application within society.
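
To illustrate the mechanism rather than SyRI itself (whose internals were never disclosed), the sketch below shows a deliberately simplified, hypothetical model in which neighborhood is used as an input feature. Two otherwise identical citizens then receive different risk scores purely because of where they live, which is exactly the kind of residence-based discrimination described above. All data, features, and labels are invented for illustration only.

```python
# Hypothetical illustration (not SyRI's actual model, which was never disclosed):
# using neighborhood as a feature makes the risk score depend on where someone lives.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features: [income_in_k_euro, neighborhood_code], where code 1 marks a deprived area.
X = np.array([[20, 1], [22, 1], [45, 0], [50, 0], [21, 1], [48, 0]])
y = np.array([1, 1, 0, 0, 1, 0])  # toy "fraud" labels, correlated with the neighborhood

model = LogisticRegression().fit(X, y)

# Two citizens with the same income but different neighborhoods:
citizen_a = [30, 1]  # lives in the deprived area
citizen_b = [30, 0]  # lives elsewhere
for name, features in [("A", citizen_a), ("B", citizen_b)]:
    risk = model.predict_proba([features])[0, 1]
    print(f"Citizen {name}: risk score {risk:.2f}")
# Citizen A receives a higher score purely because of residence: the feature acts
# as a proxy, and the model reproduces the discrimination present in the data.
```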

The municipality of Amsterdam addresses a similar problem in a way that seems more successful. Their goal is to detect the illegal rental of apartments to tourists. The approach differs from SyRI’s: only openly available data is used, and the system provides no more than a probability score. The decision of whether a property should be investigated is entirely human; the system merely performs analytical work, indicating how likely it is that a property is being rented out illegally, work that would take a lot of time when performed manually. In this case, the method of detecting possibly fraudulent rental practices is entirely transparent, and the data on which it is based is publicly available. Therefore, it does not violate anyone’s right to privacy.
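
To make the difference concrete, the sketch below shows, in a deliberately simplified and hypothetical form, what such a human-in-the-loop setup could look like: the model only produces a probability score from openly available features, and every flagged case is handed to a human investigator who makes the actual decision. The feature names, threshold, and data are assumptions for illustration, not the municipality's actual system.

```python
# Hypothetical sketch of a human-in-the-loop scoring step (not Amsterdam's actual system).
# The model only produces a probability score; a human investigator makes every decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative, openly available features per listing: [number_of_adverts, review_count].
X_train = np.array([[1, 5], [2, 40], [8, 200], [12, 350]])
y_train = np.array([0, 0, 1, 1])  # 1 = previously confirmed illegal rental (toy labels)

model = LogisticRegression().fit(X_train, y_train)

def score_listing(features, review_threshold=0.7):
    """Return a probability score and whether the case should be queued for human review."""
    prob = model.predict_proba([features])[0, 1]
    return prob, prob >= review_threshold

prob, needs_review = score_listing([10, 280])
if needs_review:
    # The system never decides on its own: it only prioritizes the manual investigation queue.
    print(f"Queue for human investigation (score: {prob:.2f})")
else:
    print(f"No action suggested (score: {prob:.2f})")
```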

Comparing the values of all stakeholders is an important aspect of creating data-driven solutions. The conflicts that arose during the development of the SyRI project should have been addressed before deployment. In this article, the tensions that often arise between principles before deploying an AI system are discussed thoroughly. It also states that these tensions, as well as other negative aspects of an AI system, should be mapped before and during development, which forces developers to think about the societal and ethical effects of deploying a new system.

Taking the principles of all stakeholders involved in a data-driven system into account is important for the effective implementation of such systems. The two examples discussed in this blog post show that some organizations already master the art of creating socially acceptable AI solutions that affect large groups of people, while others do not yet. Discussing matters like these during the brainstorming and development phases can prevent surprises from (negatively) affected stakeholders during deployment, and leads to better solutions for everyone.

Gyver aims to create fair and thoughtful solutions with the use of machine learning. To discuss this topic further, or for questions and suggestions, do not hesitate to contact us!

Let's get in touch