PIEK regularly talks to leading experts in the field of the Digital Society. One of our recent interviewees was Dr Katleen Gabriels. She is a moral philosopher at Maastricht University (UM), where she is also the director of the Digital Society study programme.
The first question about her personal story gets her talking straight away.
‘I studied German and philosophy. After my PhD research into the morals of the virtual world Second Life, I researched the ethical aspects of the Internet of Things during my postdoc. These days I am looking into Artificial Intelligence (AI); currently I am specifically trying to find out to what extent morals can be programmed into a machine. Before starting at Maastricht University, I worked at TU Eindhoven. I also wrote a book about the ethical aspects of AI, Conscientious AI. Machines Learning Morals.’
Can morality be programmed?
‘There have been great breakthroughs in medical assessment: researchers have, for instance, trained algorithms on 130,000 photographs of skin cancers and melanomas. Their study showed that these algorithms are just as accurate as trained dermatologists. Similar results have been achieved for oesophageal cancer and breast cancer. Some hospitals already use AI to trace suspicious breast tissue. Naturally, however, making moral assessments is a far more complex challenge. There is no ready-made, unambiguous data set with which to train algorithms. Morality, therefore, cannot simply be programmed, no.’
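The supervised-learning idea behind such medical classifiers can be sketched in miniature. This is an illustrative toy, not the researchers' actual pipeline: real systems learn from around 130,000 photographs, while here each "image" is reduced to a single hypothetical feature value, and the labels and numbers are invented for the example.

```python
# Toy sketch of training a classifier on labelled examples (hypothetical data).
def centroid(values):
    return sum(values) / len(values)

# Labelled training data: one made-up feature value per "image"
benign = [0.1, 0.2, 0.15, 0.3]      # features of benign lesions
malignant = [0.8, 0.9, 0.75, 0.85]  # features of malignant lesions

benign_centre = centroid(benign)
malignant_centre = centroid(malignant)

def classify(feature: float) -> str:
    """Assign the label of the nearest class centre."""
    if abs(feature - benign_centre) <= abs(feature - malignant_centre):
        return "benign"
    return "malignant"

print(classify(0.2))   # close to the benign examples
print(classify(0.88))  # close to the malignant examples
```

The point the interview makes follows directly from this shape: the method works only because each training example carries an unambiguous label, which is exactly what a "moral" data set lacks.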
At Maastricht University you are head of a whole new study programme. Can you tell us a bit more about the programme, and can adults also join?
‘This new study programme started in 2019 and has an interdisciplinary structure. It is a cross between the humanities, social science and data science. Students tackle statistics, but also ethics, history, political science, research methodology, and so on. All courses focus on the digital society. Our aim is to produce graduates who can build bridges, with knowledge of technology as well as of the ethical and social aspects of the digital society. Our internship experience tells us that people who can build such bridges are much in demand on the labour market. We do not have one type of student: some opt for a more technical focus, while others go for more ethical themes, such as privacy challenges in a smart city. The interdisciplinary nature of the studies is important: students have to have a wide frame of reference, and philosophy is also on the menu. At the moment there is only a full-time programme, not a part-time one.’
You just mentioned ‘smart city’. What more can you tell us about this?
‘We clearly see that more and more objects are interconnected through the Internet. Think of Tesla cars and smartwatches. Much traffic information, such as data on traffic jams, is deduced from mobile phones. The term smart city means that public, urban life becomes more intertwined with the Internet. This also means that large amounts of data are collected and searched for patterns. A specific example: instead of garbage containers being emptied at set times, a container sends a ‘nearly full’ signal to request emptying. Or think of parking apps: when you drive into town, the app guides you straight to a free parking spot. Or consider logistics: suppliers to shops in the inner city can consult an app to see the best delivery time, so that delivery vans do not get in each other’s way. Real-time information is also collected on how busy streets are. If movement sensors indicate that many people are heading in the same direction, this could mean unrest. After football matches you can analyse crowd movements and predict whether there is going to be trouble. The municipality of Maastricht also wants to become a ‘smart city’, but is not as far advanced as Eindhoven. The future might look like this: your digital diary knows when your first appointment is. The diary communicates with your alarm clock, which wakes you up in time. It also sends a signal to your coffee maker, so that your coffee is ready on time. The alarm clock also takes traffic information into account: if there is heavy traffic, it wakes you up earlier. And the coffee is ground earlier too.’
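The garbage-container example boils down to a simple threshold rule. A minimal sketch, assuming a hypothetical sensor reading between 0 and 1; the class, threshold and identifiers are all invented for illustration:

```python
# Illustrative sketch: a smart container that requests emptying
# once its fill level crosses a threshold (all names hypothetical).
FILL_THRESHOLD = 0.9  # request a pickup at 90% full

class SmartContainer:
    def __init__(self, container_id: str):
        self.container_id = container_id
        self.fill_level = 0.0        # 0.0 (empty) .. 1.0 (full)
        self.pickup_requested = False

    def update_fill_level(self, level: float) -> None:
        """Called with each new sensor reading."""
        self.fill_level = level
        if self.fill_level >= FILL_THRESHOLD and not self.pickup_requested:
            self.request_pickup()

    def request_pickup(self) -> None:
        # In a real deployment this would notify the waste-collection system.
        self.pickup_requested = True
        print(f"Container {self.container_id}: nearly full, pickup requested")

container = SmartContainer("MST-042")
container.update_fill_level(0.5)   # below threshold, no action
container.update_fill_level(0.93)  # triggers a pickup request
```

The same event-driven pattern underlies the other examples in the answer: a sensor reading crosses a condition and a message is pushed to whoever needs to act on it.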
This brings us to ethics: if machines dictate when the alarm clock rings, are we as humans not being programmed ourselves?
‘You do become more dependent on technology. Look at healthcare. Our society is greying, and the pressure on healthcare workers is rising while their numbers are actually shrinking. This increases the demand for smart technology. Think of ‘ageing in place’, meaning that the elderly live independently in their own homes for as long as possible. Sensors are placed on certain objects, and patterns can be deduced from the data they produce. For example: the bedroom door is opened at 7.30 am, and around 7.45 am the coffee machine is switched on. If there is a major deviation from this ordinary pattern or time sequence, the caregivers are warned. There are also smart pill boxes that send a signal reminding the patient to take their medication. Caregivers or children can monitor the intake from a distance. Furthermore, movement sensors can register whether a person is moving around the house. Should there be no movement for a longer period, action can be taken, and cameras can be activated to inspect the home or room of a patient.’
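The "deviation from the ordinary pattern" check described above can be sketched as comparing an observed event time against a learned routine. The routine times and the tolerance below are hypothetical, chosen only to mirror the 7.30/7.45 example from the interview:

```python
# Illustrative sketch: warn caregivers when a daily event deviates too far
# from the resident's usual routine (times and tolerance are hypothetical).
from datetime import time, timedelta

# Learned "ordinary pattern": event -> usual clock time
usual_pattern = {
    "bedroom_door_opened": time(7, 30),
    "coffee_machine_on": time(7, 45),
}

TOLERANCE = timedelta(minutes=45)  # deviation beyond this triggers a warning

def minutes_since_midnight(t: time) -> int:
    return t.hour * 60 + t.minute

def check_event(event: str, observed: time) -> bool:
    """Return True if the observed time deviates enough to warn caregivers."""
    usual = usual_pattern[event]
    deviation = abs(minutes_since_midnight(observed) - minutes_since_midnight(usual))
    return deviation > TOLERANCE.total_seconds() / 60

# Door opened at 7.40: within the normal range, no alert.
print(check_event("bedroom_door_opened", time(7, 40)))  # False
# Coffee machine still not on at 9.30: caregivers would be warned.
print(check_event("coffee_machine_on", time(9, 30)))    # True
```

A real system would learn the pattern from historical sensor data rather than hard-coding it, but the ethical trade-off the interview raises is already visible here: the alert only works because the routine itself is being recorded.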
In other words, we can age well at home and sensors and AI safeguard us, but I do get a Big-Brother-is-watching-you feeling. How about privacy?
‘We have the new privacy legislation, but is that enough? The big problem lies with the companies that have your data. How well have they secured it? If they are hacked, your data are out in the open. With corona we have now seen how difficult it is to make a Europe-wide decision on an app showing whether you have been vaccinated or not. The ethical question and consideration will always be: how much of your privacy are you willing to give up for your health? Still, we need a clear legal framework, also for companies.’
When I look at the people trained by PIEK, I see people working for internationally active organisations that produce devices applied everywhere. What should engineers working with robots and AI be mindful of these days, in your view as a philosopher?
‘The key word is interdisciplinarity. You pose ethical questions in advance, not in hindsight. Digitisation has such a massive impact that things can quickly go wrong. With the retweet button, Twitter has given people a voice to share messages, but it has also contributed to polarisation and online witch hunts. With an interdisciplinary mindset you have to look at risks and consequences.’
You see that at companies like Google and Facebook employees sometimes quit, because they disagree with their company’s policy. Will this spread?
‘There are many inconsistencies among the major tech companies. Google wanted to gain a foothold in China, so it went along with censorship. This meant that, for example, websites about human rights were not visible on Google for its Chinese users, even though Google’s own slogan was ‘Don’t Be Evil’. Later, Google employees left the company over Project Maven, the use of machine learning for military purposes. Many developers did not want their technology to be used by the military. Their conscience objected.’
The bottom line is that ethics will become more and more important in our society.
PIEK thanks Dr Katleen Gabriels for her insight into the realm of ethics in the technical interconnect industry.