Big Data and Artificial Intelligence: Ethical and Societal Implications


Capco recently brought together experts from academia and finance at our Frankfurt office to discuss the ethical and social dimensions of the ongoing rise of artificial intelligence (AI) and Big Data.

The discussion focused on the manifold opportunities that AI potentially offers across a range of applications and scenarios, while also acknowledging the existence of ethical challenges. Not everything that is technically possible is ethically justifiable.

In his welcome remarks, Bodo Schaefer, CEO of Capco Austria, Germany and Slovakia, highlighted the central question from the perspective of financial institutions: “How can real value be added with the help of Big Data and AI? How can Big Data and AI be used in a meaningful but also ethically justifiable way for the benefit of the customer?”

Gerhard Längst, Partner at Capco in Frankfurt, then offered an overview from another perspective. “Big Data and AI are major global trends that will be decisive for competition in many industries in the near future, with a significant impact on society and each of us,” he said. “The race around AI and Big Data is currently taking place between the USA and China in particular, followed by Great Britain and Israel.” He noted projections that foresee AI contributing up to 14 percent of global GDP by 2030: “Given this predicted rapid global growth, the question arises: is the heavily regulated EU market a competitive advantage for European companies, or does it potentially hamper innovation compared to competitors based outside the EU?”

The evening’s keynote speaker, Professor Roberto V. Zicari, founder of the Big Data Lab at the Goethe University in Frankfurt, then addressed the audience. Professor Zicari’s extensive experience in the field of AI includes work at the University of California. In his view, the ethical and social implications of AI are not always immediately obvious: “You can choose whether you want to be compliant or do something meaningful. AI is reality, but who is actually responsible for the effects of technology?” he noted. “Per se, AI has enormous economic potential, but it also has ethical implications that we need to address in order not to harm people.”

Professor Zicari’s central argument is that Europe in particular, with its great ethical tradition, must proactively devote itself to this topic, while at the same time ensuring that it does not fall behind in the international technology competition. “In Europe, political decision-makers speak of trust, fairness and risk minimisation,” he added. “But most of the controversial aspects of the rise of AI are not of a technical nature. They come down to issues that need to be resolved between individuals.”

Professor Zicari and his team are developing an ethical framework to identify the moral hazards of AI. The challenge is how to trust something that is difficult to explain. In his view, there is a clear gap between the engineers who drive the technology forward and the rest of society, for whom AI should ultimately be available as a technology that both adds value and is ethically sound. His research aims to establish trust that AI can be used responsibly; to achieve this, the gap that currently exists between thinkers, society, politics and technologists must be closed.

Within the framework of the so-called ‘Z-Inspection’ process, the team uses real-life scenarios to explore whether a particular AI is justifiable from an ethical point of view. Each scenario is examined from a technical, legal and ethical perspective, taking into account aspects such as society, values and the wider human ecosystem. Professor Zicari illustrated the process by citing how heart attacks can be predicted by AI. Although desirable from the point of view of the patient, that predictive approach raises ethical challenges, such as data protection, that need to be taken into account. The key question is whether to use a less-than-perfect technology to save lives, or to hold out for a perfect technology, with the problems that entails.

The keynote was followed by a panel discussion that focused on the relevance of this topic for the financial sector. Setting the scene, Gerhard Längst said: “The loss of confidence in the markets and among investors triggered by the financial crisis shows that the discussion about ethics and society in our industry, and our EU markets, is also of great importance for the future viability of new technologies.”

The subsequent discussion found a high degree of consensus across the panel, alongside some clear individual perspectives. Data analysis expert Dr. Shivaji Dasgupta, Head of Data & AI, Private Bank, Deutsche Bank, emphasised that ethics is not some monolithic block: “What are the different social and legal norms in different countries? Deutsche Bank operates in 60 countries - is there a common ethical line, or do we have to define different ethical lines for different countries? It makes no sense to use these terms without defining them.” Dr. Dasgupta highlighted the importance of institutions such as the UK’s Financial Conduct Authority, which allows some room for manoeuvre when developing new answers to complex questions, such as risk assessment for borrowers.

Dr. Kerem Tomak, Executive VP and Head of Big Data & Advanced Analytics (BDAA) at Commerzbank, highlighted the issue of data quality. “You must ensure that the data used is of the highest quality before looking to derive anything from it. If you start with bad data in the algorithm, you get a bad result,” he said.

Closing the discussion, Gerhard Längst voiced his appreciation both for the diversity of views expressed and for the shared basic understanding of the topic and its relevance: “Ethics and society are particularly important topics in our industry, and today’s discussion has shown that a broader use of Big Data and AI raises numerous practical questions to be answered.”