The Global AI Ethics Institute is an international not-for-profit think tank that promotes cultural diversity in the field of ethics applied to AI (EA2AI).
Through publications, conferences, and courses, the Institute opens a window onto the broad spectrum of ethical perspectives that exist around the world. Thanks to its wide international network, the Institute benefits from the expertise and knowledge of a large number of people and institutions.
Currently, the Global AI Ethics Institute is involved in several projects with the Kingdom of Morocco, the Indian government, Brazil, and numerous organizations all over the globe.
Its members operate in many countries, delivering lectures and talks, participating in expert meetings, publishing papers, books, and book chapters, and appearing in the media.
We interviewed Aco Momčilović and Emmanuel Goffi, Co-Directors and Co-Founders of the Global AI Ethics Institute, about the threats and opportunities of Artificial Intelligence.
Our era increasingly relies on AI: why is there a need for ethics for something so pervasive in our lives?

Aco: Discussions about ethics are not new in the field of AI. If we think about AI in the context of general-purpose technology, it is obvious that a technology this significant will be shaped by the values and norms of its creators. But new ethical issues are also emerging with the new applications of AI systems and products, issues not seen before, at least not in this particular form. After more than a decade in which "use cases" have highlighted many aspects of ethics and cosm-ethics in this field, we are, in my opinion, seeing new layers of questions.
For example, even broader ethics-related questions are connected with AI nationalism and AI colonialism, which we may see in even greater measure than today. Precisely because AI will soon be pervasive in the everyday lives of billions of people, we must pay special attention to this subject. That is one need we intuitively understand and agree on. But the problem we are seeing now is that the organizations developing AI, mainly corporations at this point, have different "sets of motivations" regarding it.
Their short-term, profit-driven orientation could push them into what is called a race to the bottom, in which "abstract" topics like ethics are surrendered to the goals of competitiveness, efficiency, and strategic advantage over competitors.
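The race to the bottom Aco describes has the structure of a prisoner's dilemma. The minimal sketch below, with payoff numbers invented purely for illustration (not drawn from any real study), shows why cutting corners on ethics can dominate as an individual strategy even though it leaves everyone worse off:

```python
# Toy payoff sketch (invented numbers) of the "race to the bottom":
# two competing labs each choose to invest in ethics/safety or to cut corners.

payoffs = {
    # (lab1_choice, lab2_choice): (lab1_payoff, lab2_payoff)
    ("ethics", "ethics"): (3, 3),   # both careful: shared, sustainable benefit
    ("ethics", "cut"):    (0, 5),   # the careful lab loses the market race
    ("cut",    "ethics"): (5, 0),
    ("cut",    "cut"):    (1, 1),   # both cut corners: fast, risky, worse for all
}

for my_choice in ("ethics", "cut"):
    # Whatever the rival does, "cut" pays more for me individually...
    vs_ethics = payoffs[(my_choice, "ethics")][0]
    vs_cut = payoffs[(my_choice, "cut")][0]
    print(f"{my_choice:6s}: {vs_ethics} vs ethics, {vs_cut} vs cut")
# ...so individually rational competitors converge on ("cut", "cut"),
# the collectively worst outcome: the race to the bottom.
```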

Emmanuel: AI is a pervasive set of technologies that most of us use without even being aware of it. That poses a first ethical question about our individual responsibility in fostering technologies we don't understand and whose consequences are beyond our grasp. Paradoxically, while being totally blind to the potential consequences of these technologies, we assert that we remain in control and that there is, and will continue to be, some level of human supervision. We have actually lost control over AI. We have already become mere cogs in a technical system we do not understand.
The second reason why ethics is essential is that some actors are already using, and will keep using, our lack of awareness and understanding to manipulate a large part of the population. That might come from private companies for economic purposes, from public institutions seeking to control behaviors, or even from smaller stakeholders pursuing specific goals. This is nothing new. Marketing does the same, as does rhetoric in politics. The French philosopher Michel Foucault labeled this "governmentality", namely the techniques used to control behaviors without using force. AI is just a force multiplier: it allows a larger share of people to be controlled faster and more efficiently.
Third, and related to the previous concern, is the possibility that the Western world will use these technologies to spread its culture and values without respect for cultural diversity. This could lead to international issues. It is what Aco and I stressed in our latest article for the Revista Misión Jurídica.
What steps can we take to ensure that the use of AI benefits everyone, regardless of their background or circumstances? How do we make sure that the benefits of AI, such as increased efficiency and productivity, do not create more inequalities?
Aco: That is an excellent question, without a final answer at the moment. In my opinion, for AI to fully reach different populations, its designers must be aware of those populations' values and cultural backgrounds. The second point is that much will depend on strategic decisions made at the country level, by governments, and on the national AI strategy documents they may or may not create.
Already we can see that countries differ in their level of awareness and in how seriously they approach the topic. My estimate is that some countries will lag behind and gain few benefits, or, even worse, suffer disadvantages and even greater disparities than exist now. This is a topic I have covered in many international lectures connecting AI and its socio-economic consequences, and it is the subject of my Ph.D. thesis.
Emmanuel: I am a bit cynical on this. I do not think that it is feasible to ensure that the use of AI will be beneficial for everyone. Two reasons for that. First, AI is not a single technology. It is a wide spectrum of technologies, used for a wide spectrum of purposes, through a wide spectrum of tools. Each one of these subsegments of AI should be addressed specifically to make sure they are not doing harm. Given the number of stakeholders and variety of interests, it is naïve to believe we will control the whole spectrum. Second, “everyone” does not mean that much.
To make AI technologies beneficial to everyone, that "everyone" would first have to agree on what is beneficial and what is not. Believing that the whole of humanity could reach an agreement on what is desirable and what is not when it comes to AI technologies is also naïve. In some places where people struggle to survive, the development of AI technologies is seen as a way to escape poverty. For some of these people, the ethical horizon is survival, not pleasing others. The idea that AI should be beneficial is a spoiled person's perspective. In the Western world, we have time to think about ethics and about what makes us happy or unhappy, while in some parts of the world, many parts I would say, people don't even have time to live.
The problem here is threefold: reductionism, techno-solutionism, and universalism. Reductionism, in the sense that we present AI as a single object and "people" as homogeneous in their expectations. Techno-solutionism, because we strongly believe that happiness is rooted in progress, which lies in technology. Universalism, since in the Western world we assert, arbitrarily and without a single piece of strong evidence, that there are universal values on which some kind of universal ethics applicable to AI technologies can be built. These are biases we must question even before trying to mitigate so-called AI biases.
Regarding the last part of the question, I would just open the debate. First, are increased efficiency and productivity really beneficial? What are the grounds for this assertion? Second, what exactly do we mean by inequalities? Third, what makes us believe that inequalities are always bad? These are philosophical questions that ethicists can address, but for which there are certainly no definitive global answers.
How can we ensure that the development and deployment of AI are guided by a shared set of values and principles, rather than the interests of a select few? What ethical considerations should be kept in mind when creating AI systems, particularly when it comes to the impact on marginalized communities or developing countries?
Aco: First, we should be aware that there is no unified set of values shared by everyone. There needs to be "individualization" and an understanding of many different groups that don't have a seat at the table right now. On the other hand, we can look at some guiding values proposed at the level of the European Union, which are meant to cover many areas.
For example, the EU proposes human-centric AI development, with seven key requirements:
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination, and fairness
- Societal and environmental well-being
- Accountability
Still, the question is how differently those general ideas are interpreted. For us at the Institute, it is extremely important to address this issue and its impact on marginalized communities and developing countries. The one caveat is that, unlike now, where we have at least some kind of normal distribution, it might happen that ALL countries EXCEPT the 11 or 12 leading the way in AI development end up marginalized. And that is a fight worth fighting, even a crucial one.
Emmanuel: That is a complex question. To start with, I would stress that it is phrased in a very misleading way, as is often the case with this topic. The assumption that AI should be "guided by a shared set of values and principles" is biased. As I already mentioned, it stems from the belief that there are universal values out there that could guide the governance of AI, which is wrong. The same goes for the underlying assumption that "the development and deployment of AI are guided by (…) the interests of a select few".
First of all, there is no reason to think that these interests are unrelated to values and principles. Secondly, there is no evidence that these interests are not beneficial to a huge number of people. When a company develops an AI tool, it benefits a wide ecosystem of stakeholders: employees, clients, investors, providers, and even governments through taxes, to cite but a few examples.
Ethics is a mediator between humans and their environment, making sure that some kind of balance is maintained. It is highly subjective and contextual. Consequently, the ethical acceptability of the development of AI technologies should always be addressed in a specific context with a specific purpose in mind. That is why, with Aco and our team at the Global AI Ethics Institute, we promote the inclusion of cultural perspectives in the debate over ethics applied to AI. We think that principles or values arbitrarily labeled universal cannot be imposed on the whole world. Values are culture-dependent, and so are the principles inferred from them. I think we have demonstrated that point many times through our publications and the many talks and lectures we deliver around the world.
How can we ensure that AI is developed and used to contribute to sustainable development and improve the quality of life for all?
Aco: The answer depends on WHO is developing AI, and on their underlying motives. In the research I did three years ago on National AI Capital, it was clear that development is mainly funded and organized by corporations, partly by academia, and currently only to an insignificant degree by governments and state-connected institutions around the world. That fact alone explains a lot. Where is it written that the main goal should, or will, be quality of life for all? Can we fully apply that expectation to other current or emerging technologies?
One of the questions we are trying to explore at the Global AI Ethics Institute is AI governance at the global level: how to organize it, who should be responsible, and who the main stakeholders are. On the other hand, part of the responsibility lies with individuals: to reach certain levels of education, and to become familiar with and able to use the new AI tools that will emerge at an ever faster pace. We have seen initiatives, like Elements of AI, working on exactly that. I created one of the first education programs for non-tech people, AI for Business, Economics, and Social Science Professionals, for exactly that purpose: to educate people in time and give equal opportunities to many segments of the population.
Emmanuel: Again, the question is biased. As long as we keep seeing the world as homogenous in terms of people’s expectations, values, and aspirations, we will be unable to answer such a question. It is impossible to improve everyone’s quality of life. It suffices to travel the world to understand that.
We have to break the question down and apply it to specific situations with clear objectives. Then we might be able to "improve" (once "improve" has been defined in the given context) the "quality of life" (once "quality of life" has been defined in the given context) for some people, which is already a huge challenge and would be a great achievement if we succeed.
On the other hand, could AI applications make more rational decisions, even in ethical and political terms, than human groups?
Aco: Humans are notoriously non-rational beings, despite what we like to believe about ourselves. Emotions, differing perceptual sets, and the logical mistakes that come naturally to us make our decision process far from perfect, and far from rational. How this will be affected by a totally rational algorithm is a bit of a philosophical question, I would say. Of course, the perceived rationality of AI systems, helping us in our decision process, or perhaps one day taking it over completely, depends greatly on the data we feed into the models.
An obvious problem lies there. We can already see that our data sets are not perfect, that they are biased in many ways. So how will that help make our automated decisions more rational? The worst case would be that we make bad decisions but have a perfect alibi for them, because seemingly sensible and logical AI systems proposed them. And that is without even going into the dangers of manipulating and rigging those systems, as an extension of cybersecurity risks. Human oversight will have to remain for however long the transition period lasts. And then: quis custodiet ipsos custodes?
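To make Aco's point concrete, here is a minimal sketch of how biased historical data can be laundered into a seemingly "rational" automated decision. The hiring scenario, the data, and the threshold rule are all invented for illustration; no real system works this crudely, but the mechanism is the same:

```python
# Illustrative sketch only: data and rule are invented to show how a
# "rational" model can reproduce a bias already present in its training data.

# Hypothetical historical hiring records: (years_experience, group, hired).
# Group B candidates were hired less often at equal experience -- a past bias.
records = [
    (2, "A", 0), (4, "A", 1), (6, "A", 1), (8, "A", 1),
    (2, "B", 0), (4, "B", 0), (6, "B", 0), (8, "B", 1),
]

def learn_threshold(rows):
    """Pick, per group, the lowest experience level that was ever hired."""
    thresholds = {}
    for years, group, hired in rows:
        if hired:
            thresholds[group] = min(thresholds.get(group, years), years)
    return thresholds

model = learn_threshold(records)
print(model)  # {'A': 4, 'B': 8} -- the model has "learned" the old bias

# A new candidate from group B with 6 years of experience is rejected,
# and the decision looks perfectly consistent and "rational".
candidate = (6, "B")
decision = candidate[0] >= model[candidate[1]]
print("hire" if decision else "reject")  # -> reject
```

The model is internally consistent, which is exactly what makes it a "perfect alibi": the bias lives in the records, not in the rule.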
Emmanuel: I am not sure what "more rational decisions" refers to exactly. I think it is a mistake to believe that rationality guides the world. Emotions are important, even in support of our rationality. The Cartesian call for rationality is not a universally shared philosophical perspective. No need to travel abroad: in our own cultural settings, we all know people who react more on the basis of their emotions than of their reason. Ethics is not only about reason. Emotions matter greatly, as Carol Gilligan convincingly demonstrates with her ethics of care.
AI technologies can already act in highly rational ways, for they are devoid of emotions. I am not sure I would like to live in a world where emotions are set aside and reason has become an intellectual tyrant.
The point with ethics is that you cannot rely on reason alone. That is true of continental philosophy, and it is also true when you dive into Animist wisdoms such as Ubuntu, Aboriginal beliefs in Australia, or Maori and Native American spiritualities; or into Shinto, Buddhist, Vedic, or Islamic philosophies. Once again, all of this must be tackled at a fine level of granularity.
What role should government and private sector play in the development of AI, and how can we ensure that their actions align with the public interest? What if states do not care about governing the use of AI? Would there be regulatory inequalities between the desires of private individuals – managers, investors, etc. – and those of the state?
Aco: That may be the key question. The private sector is regulated the way it is: by the market, by shareholders, and by some of the norms of its employees. In many cases it will remain pragmatic and adapt to the race between competitors. Who is on the other side? Hard to say. Traditionally it would be governments, with the regulations they propose or impose, and perhaps academic institutions, the universities, as a third party.
It will be interesting to see whether any future startups will be strong enough to disrupt the current biggest players; many of them are recognized and backed by corporate investment even in their earliest phase. My fear is that many governments, through lack of education and because of short-term political goals, in many countries set to a four-year horizon of survival, will neglect this very important topic and fail to prepare their countries for the disruptive times that may be coming for everyone. Those wise enough will think in the mid and long term about the development of potential general-purpose technologies, which will once again change the world, but this time in a much shorter period, and will do what they can to prepare the people they represent.
Emmanuel: Governments, along with private actors and the general public, should take the subject seriously. There is a clear lack of understanding and skills in public institutions today, and as a result there is no decent debate on the subject. What we mostly see are polarized stances about the potential benefits and harms of AI, with no deep, thorough reflection on the subject.
Governments are providing sets of principles, made of ill-defined words such as transparency, accountability, and human supervision, expecting people to buy them and abide by them. It is a narrative, what we call cosm-ethics, namely the use of the words of ethics without doing ethics. It is a strategy, pretty astutely developed and used by European governments, and intriguingly adopted by other governments around the world. That cannot work in the long run.
Governments need to listen to their citizens. They need, and are expected, to consider the needs of their people. Obviously, in some instances, governments are not that interested in what their people want and need. Here again, contextual reflection is required.
Without deep government involvement in the development and governance of AI, some governments may soon become followers of others, some will simply miss the AI train, and others will leave the reins of AI to private actors. There is no universal rule here. At a minimum, governments need to reflect on the subject and take a clear stance on it.
One of the themes that contemporary philosophy discusses with great fervor is the possibility of a basic income, given the loss of human agency, especially with respect to work. Do you believe it is a workable measure?
Aco: Basic income is one of the usual topics. The short answer is yes: I believe it will be introduced at some point and in some form. But the devil is in the details. The condition to be met is a society of abundance, with a surplus of different resources; I can imagine a social structure like the one in Star Trek. If we were able to create a utopian AI future in which consumers consume all the products but are not necessarily required to work to create them, they would still need some income in order to spend it.
So in this very simplified economic framework, in which human agency is not necessary to produce something but is necessary for something to be bought, and for companies to earn revenues, some redistribution of resources will have to happen. Whether it takes the form of basic income, some advanced version of it, or a different socio-economic setup we cannot predict right now is hard for me to say, since I am not an expert in those fields. Let's just hope that these are the topics we will have to deal with, and not dystopian scenarios in which AI is used for military purposes and for the domination of a small segment of people over others.
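Aco's "very simplified economic framework" can be put into a toy calculation. The numbers below are invented purely for illustration (a 60% wage share, output worth 100); the sketch only shows the circular-flow point that if automation removes wages, some transfer is needed for consumption, and hence firm revenue, to continue:

```python
# Toy circular-flow sketch (illustrative numbers only): if automation removes
# wages, household income collapses, and with it the demand firms rely on,
# unless some transfer (e.g. a basic income) recycles revenue back.

def simulate(wage_share: float, basic_income: float, output_value: float = 100.0):
    """One stylized period: firms produce, households earn and spend."""
    household_income = output_value * wage_share + basic_income
    demand = min(household_income, output_value)  # can't buy more than you have
    firm_revenue = demand
    return household_income, firm_revenue

# Pre-automation economy: workers are paid 60% of output value.
print(simulate(wage_share=0.6, basic_income=0.0))   # (60.0, 60.0)

# Fully automated, no transfers: nobody earns, so nobody buys.
print(simulate(wage_share=0.0, basic_income=0.0))   # (0.0, 0.0)

# Fully automated with a transfer funded from the surplus:
# demand, and hence firm revenue, is restored.
print(simulate(wage_share=0.0, basic_income=60.0))  # (60.0, 60.0)
```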
Emmanuel: I do not think it is either feasible or desirable. We have moved from the idea that all humans are equal, in that they are entitled to equal access to opportunities, to the idea that all humans should have access to the same results. Equality is not about being provided with a basic income to counter some loss of agency; it is about starting life with the same chances. Then each individual is responsible for doing what is needed to succeed. Obviously, in some cases people face difficulties, but not all of them do. That is where the synecdochic tropism is misleading: generalizing from particular cases blurs the purpose of the project. It is why some are reluctant about basic income.
Each time we try to apply general rules to context-rooted situations, we either fail or offer unenforceable solutions. Most people, at least in the Western world, still believe that you get what you deserve, even if this belief is questionable and far from always true. So, for those who struggled to succeed, it can sound odd to hear that everyone is entitled to a basic income. The same goes for the loss of agency. Agency can obviously be lost to exogenous conditions such as the development of AI, but it can also stem from intellectual laziness or a mere unwillingness to leave one's comfort zone. Here again, the causes of the loss of agency are so diverse that it seems impossible to address them with a one-size-fits-all solution.
This interview is part of our dossier on Artificial Intelligence.