Artificial Intelligence (AI) is a field that has seen tremendous growth and evolution over recent years, sparking a mixture of excitement, curiosity, and concern across various sectors of society. AI’s potential applications seem limitless, yet its implications are still widely debated. What exactly is AI, and what are its capabilities and limitations? Can we even talk about it as a unified concept, considering the different understandings and implementations of AI across various countries?
In this interview, Mario Verdicchio – Professor of Computer Science at the Department of Foreign Languages, Literatures and Cultures at the University of Bergamo – explores the nature of AI, its strengths and weaknesses, and the different perspectives shaping its development and global deployment. He discusses how AI is changing tasks, its interplay with ethics and its impact on innovation.
NAX (Netzwerk Architekturexport): What is Artificial Intelligence, and what are its capabilities and limitations? Can we even talk about it as a unified AI, considering the different understandings of AI across various countries?
Verdicchio: After years of research and debates, I have concluded that the best definition of AI was given by Elaine Rich, an American computer scientist, in the 1980s: “Artificial Intelligence is the study of how to make computers do things at which, at the moment, people are better”. This neatly captures both the strength and the weakness of AI: it sits at the cutting edge of Computer Science, but as soon as it reaches a breakthrough result (in programming computers to do something better than humans), the achievement becomes ordinary Computer Science, and AI must move on to something else, more advanced, more difficult to achieve. For instance, doing math was an exclusively human activity a century ago, whereas today it is mostly done on machines and is considered a menial task that has nothing to do with AI. AI moves on from task to task, transforming each into something compatible with how computers work. Now it is the turn of language processing and image generation. Anything that can be described in terms of numbers, like characters or pixels, can be processed by computers and, hence, by AI. By contrast, what is not compatible with a numerical description, like smells and tastes, is out of reach. Despite enormous cultural and sociopolitical differences between countries, the theory and the physical realization of AI are based on the universal languages of arithmetic and electronics.
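To make this concrete, here is a minimal Python sketch, purely illustrative, of the point that characters and pixels reduce to numbers a processor can crunch, while smells and tastes have no comparable encoding:

```python
# Characters and pixels are just numbers to a computer,
# which is what makes them reachable by AI.

text = "AI"
codes = [ord(ch) for ch in text]        # characters -> Unicode code points
print(codes)                            # [65, 73]

# A tiny 2x2 grayscale "image": each pixel is an integer between 0 and 255.
image = [
    [0, 128],
    [255, 64],
]
brightness = sum(sum(row) for row in image) / 4   # plain arithmetic on pixels
print(brightness)                                  # 111.75

# A smell or a taste has no agreed-upon numerical encoding like this,
# which is the sense in which it stays out of reach for today's AI.
```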
NAX: What particularly fascinates you about the interplay/tension between AI and ethics?
Verdicchio: Responsibility is at the very core of the discourse on AI and ethics. AI aims at automating more and more tasks that are traditionally carried out by a person. When we substitute a human with a machine, the fundamental question is: who is responsible when something goes wrong? Tackling this issue is challenging from at least two perspectives: one technological, one cultural.
From a technological point of view, Machine Learning, the subfield of AI that has now become mainstream through neural networks used for recognition, classification, and generative tasks, makes the ascription of responsibility much harder because of the very way it works. In a complex network of mathematical functions whose millions of parameters are automatically calibrated on data gathered online, no team of human programmers can keep track of which particular data have shaped which particular characteristic of the system, or of how the numerous computational components of the network have contributed to its final output. So, who is responsible when a chatbot that has learned how to write tweets online starts spewing out racist jokes?
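To give a sense of the problem's scale, here is a deliberately tiny, purely illustrative Python sketch of a neural network with random weights standing in for a trained model: even in this toy, every output depends on every parameter at once, and production systems have millions or billions of them.

```python
# Toy two-layer network: random weights stand in for parameters that a real
# system would calibrate automatically from data gathered online.

import random

random.seed(0)

n_inputs, n_hidden, n_outputs = 4, 8, 2

w1 = [[random.uniform(-1, 1) for _ in range(n_inputs)] for _ in range(n_hidden)]
w2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_outputs)]

def forward(x):
    # Hidden layer with ReLU activation, then a linear output layer.
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w2]

x = [0.5, -1.0, 2.0, 0.1]
print(forward(x))

# Every output value is a blend of all 48 weights; scale that up to millions
# of parameters shaped by scraped text, and "which datum caused which
# behavior?" has no tractable answer.
print("parameter count:", n_inputs * n_hidden + n_hidden * n_outputs)
```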
When we substitute a human with a machine, the fundamental question is: who is responsible when something goes wrong?
Mario Verdicchio – Professor of Computer Science at the Department of Foreign Languages, Literatures and Cultures at the University of Bergamo
The cultural perspective on this issue then kicks in: despite the common ground of computational arithmetic and miniaturized electronics in computers, each person reacts in a different way, with a kaleidoscopic variety of emotions, habits, concepts, philosophies, and goals. We range from profit-oriented strategies for exploiting the technology to anxiety and fear of a machine take-over of humanity. In this sense, responsibilities grow: not only do we have to try to understand what caused a malfunction in the AI, but we need to keep our eyes and minds open to the consequences of deploying such technology, even when, or especially when, it works correctly, in terms of its impact on our social, economic, and intellectual lives.
NAX: Does AI promote innovation – or does it rather hinder it? Is AI a catalyst for architecture or even a creativity killer?
Verdicchio: It all comes down to what you mean by “innovation”. In the pedestrian sense of doing something new, AI is definitely an innovative endeavor: we delegate to machines activities that have traditionally been carried out by people, and this is without a doubt a new way of doing things. We must not forget, however, that AI is essentially based on the inner workings of computers, which means that any kind of delegation must necessarily go through a description of the task in terms of numbers that the processors can crunch. This shows that the innovation of AI, as with any other computer-based endeavor, is a constrained one: the new way of doing things must be a way of doing things with numbers. In this sense, AI innovation is cut off from any development that is not amenable to a numerical description: all the great discoveries of the past that were generated by empirical accidents, like the misuse of a reading lens leading to the invention of the telescope or the accidental growth of mold on a forgotten Petri dish enabling the discovery of antibiotics, would not have been possible within the rigid and abstract framework of arithmetic rules running on a chip.
All the great discoveries of the past that were generated by empirical accidents would not have been possible within the rigid and abstract framework of arithmetic rules running on a chip.
Mario Verdicchio
Architecture is a very particular endeavor, because it mixes numerical rules, necessary for the solidity and safety of constructions, with non-numerical practices related to aesthetics, style, and how people inhabit spaces, to name a few. It has always been up to the architect to navigate dexterously between these two realms to turn their projects into successful realities. The use of AI adds to this work: the architect will have to be clever enough to distinguish which quantitative activities they can delegate to the machine and which qualitative decisions they have to keep for themselves. The golden ratio is an excellent example: it is meant to express aesthetic criteria by means of a number, but one can only do so much with it. Calculations can be (and are) automated; the autonomy with which an architect makes decisions is something else.
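A minimal Python sketch of the golden-ratio point: the arithmetic is trivially automated, while the decision of whether to use it at all stays with the architect (the 12-metre façade is a hypothetical example, not from the interview):

```python
import math

PHI = (1 + math.sqrt(5)) / 2          # the golden ratio, ~1.618

def golden_width(height_m: float) -> float:
    """Width that puts a facade of the given height in golden proportion."""
    return height_m * PHI

print(round(PHI, 3))                  # 1.618
print(round(golden_width(12.0), 2))   # 19.42 m for a hypothetical 12 m facade

# The machine can compute this instantly; whether a golden-ratio facade is
# the right aesthetic choice for a given site is not a calculation at all.
```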
NAX: How sustainable is AI?
Verdicchio: Simply put, not very. The AI that is mostly used today is, as mentioned, Machine Learning, which is a data-heavy activity. This means that AI is now in the hands of companies that build, operate, and maintain gigantic data centers that consume enormous amounts of energy, not only to run the computers but also to cool them down, since electronic computation may be invisible to the naked eye but produces a lot of heat that would eventually destroy the very chips that perform it.
Considering that the materials necessary to build the electronic circuitry that sustains AI can only be found in specific areas of the world (e.g. Brazil, Central Africa), and that the know-how for the design and construction of the most advanced processors is well guarded in even more localized regions (e.g. Taiwan), it becomes evident that the development of AI is only worsening the darker sides of worldwide commerce and adding fuel to existing political and economic tensions.
This is not only an AI problem, obviously: it is about computer technology in general. One could hope that some well-programmed AI system might come up with a good solution to this (in terms of a new combination of molecules as an energy source, perhaps), but we must not forget that numbers can only do so much, and the rest is up to us.
NAX: Where do you see AI and AI research in 20 years?
Verdicchio: Making predictions about the future is usually a futile exercise, because the most decisive factors are always the unforeseeable ones. Think of 20 years ago: in 2004 we had no idea of the disruption that the iPhone and Facebook would bring a few years later. In particular, Facebook is an interesting case: I am positive that Mark Zuckerberg had no plans of building “echo chambers” where people would get radicalized in their ideology of choice by means of electronic social networks. Even this metaphorical use of the concept of an echo chamber did not exist when Facebook was launched. This is an example of a technology whose sociological effects are radically more significant than the technology itself and very distant from the expertise of the people who created that technology in the first place. I think this gap is where AI research should and will focus in the upcoming years, at least in Europe: the AI Act, whose final draft was recently approved by the European Parliament, is full of well-meaning directives that remain at a very abstract level, which means rather distant from the practicalities of everyday use of AI. A dialogue between lawmakers and technologists is fundamental for such an initiative to reach its goals. The building of a common vocabulary is the next challenge AI people should turn their attention to.
NAX: Thank You!