
Time to deal with our ethical debt - Part II: Some things are hard to fix once broken


Written by

Annika Madejska
Senior Designer

Annika Madejska is a Senior Designer at Nitor who has a thing for tackling complex design problems and who loves to push herself outside of the comfort zone. She carries a torch for ethical technology, especially AI driven services. In her spare time, she kicks back with some gardening, painting or textile arts.


June 1, 2023 · 9 min read time

We all know that AI has the potential to do amazing things for us, but we also know that there are unresolved ethical issues in the field of tech and AI. The good news, though? We really only have to remember one thing while we’re creating new cutting-edge AI services.

This is part two in a two-article series on the topic of ethical debt in the tech industry. Part one is available here.

Controlling what is developed and how it is developed is power.

To a large degree, this power is currently held by big tech companies and venture capitalists. Over the years, the CEOs of the big tech companies have all ended up in front of the U.S. Congress to explain the damage their products have done.

The pitfalls of speed over caution

Take, for example, the testimony of Twitter’s Jack Dorsey, Facebook’s Mark Zuckerberg, and Google’s Sundar Pichai at the misinformation hearing held after the January 6th mob attack on the U.S. Capitol. Dorsey was the only one who even partially admitted to allowing false narratives of a stolen election to fester for months, while Zuckerberg chose to place the blame fully on the people who broke the law and participated in the insurrection.

Frank Pallone, the Democratic Representative from New Jersey, finally stated:

It is now painfully clear that neither the market nor social pressure will force these companies to take the aggressive action they need to take to eliminate disinformation and extremism from their platforms. Your business model itself has become the problem, and the time for self-regulation is over.

These company leaders probably had good intentions and were excited to lead companies inventing the next big thing. They also saw a nice increase to the company’s bottom line and felt good about their investment decisions paying off. We know, however, that they failed to act when the first warning signs appeared – because there were indeed signs – and that they failed to listen when employees sounded the alarm.

Right now, Microsoft and Google have both chosen speed over caution in the race to grab a larger slice of the generative AI pie. In an email obtained by the New York Times, Sam Schillace, CVP and deputy CTO at Microsoft, wrote that it would be an “absolutely fatal error in this moment to worry about things that can be fixed later”. In other words, a business leader giving his blessing to accumulate both technical and ethical debt.

But what if the things you break can’t be fixed? What if the things that break are fundamental to our society – like democracy, for instance? Right now, there is widespread mistrust of traditional media and journalists, yet free, pluralistic and independent media are vital to democracy.

If it sounds like a duck… is it really a duck, though?

On World Press Freedom Day, 2 May 2023, five high-profile organizations working for freedom of expression, including the United Nations (UN) Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, published a joint declaration. One of its paragraphs raises deep concern about a general lack of understanding of the media’s role as a fundamental pillar of democracy.

The declaration outlines some of the issues identified with large online platforms, such as the lack of transparency and accountability, and the disproportionate power and control that a few media corporations and internet intermediaries – that is, social media platforms – exercise over public debate.

It is not the technology itself that is the threat, but how we manage and control it.
It is about building sustainable and trustworthy services and setting a foundation for the future. What kind of foundation are we building if we can no longer trust the services we use as part of our work and daily lives?

Image manipulation has existed for as long as photography itself. With the help of generative AI, however, it has become increasingly easy to produce fake news, fake images of public figures, and fake recordings and videos of politicians that pass as real and believable.

At the end of April, media outlets around the world published articles about the death of a Canadian actor who reportedly died during a plastic surgery operation to look more Korean. The story was delivered to media outlets as a press release, and The Daily Mail seems to have been the first to publish the news. Just a few days later, it was uncovered that the whole story was an AI-generated hoax.

News media might have learnt their lesson and will implement better practices for fact-checking, but the general public will be flooded with stories, audio files and videos, and disinformation will spread like wildfire while we’re left questioning whether we can believe anything reported by news outlets anymore. The technology is here, and we have to find the best ways of mitigating the damage it might do.

How do we measure success?

Right now, the tech industry is busy putting out fire after fire from decades of ethical debt. The AI Incident Database, launched in 2020, logs and indexes harmful or near-harmful incidents caused by AI systems that are reported to it, with the aim that we can learn from them and do better. As this is written, the database is closing in on 3,000 logged reports.

Will it truly be a fatal error to spend the present worrying about things we can fix later, as Sam Schillace said? Or is the truly fatal move to step on the gas and just hope for the best? It hurts to give up privileges enjoyed for decades, such as moving fast and considering the fallout later, but as the industry has done little to repay its ethical debt in the past, regulators are now stepping in.

The EU is expected to pass the AI Act this year, which will divide AI services into different categories depending on their risk level. Some types of AI, like systems used for social scoring, will be banned. Others will be subjected to careful monitoring.  

In a recent round of amendments, several changes were made to the AI Act and ethical principles were embedded in the proposed regulation.

One of the main criticisms of the proposed AI Act is that it is too technology-specific: progress and innovation in the field move fast, while lawmaking is by nature slow, so the law risks becoming outdated quickly. Critics suggest that legislation should focus on possible consequences rather than trying to mitigate the risks associated with a given technology. Others have argued that the law is too lenient, as the manipulative traits of chatbots came to light in the wake of a Belgian man’s suicide.

Even if the EU has taken the lead in regulating technology services, as it did with GDPR, the U.S. plans to follow. A Blueprint for an AI Bill of Rights can be read on the White House website. On top of this, the EU is also updating the Product Liability Directive (PLD) to include software products.

This means that the tech industry will be held accountable.

Slowing down will not stifle innovation. Many disruptive technologies throughout history were released into the world, only to be regulated once it became clear that they were harmful unless controlled. Take cars, for instance. Car technology is still developing, even though it now has to follow regulations, such as requirements to limit CO2 emissions. It wasn’t until lawmakers forced the industry to address environmental issues that manufacturers started rethinking how vehicles could be powered with less harmful exhaust.

We will still make progress, even if we decide to dial down the pace of development. It will give us a chance to build with thought, and to repay previously accumulated ethical debt without endlessly building up a new tally. Whether we acknowledge it or not, we are currently deciding how AI services will function and impact us for years to come.

We don’t have the luxury of ignoring ethical discussions simply because they are hard and don’t come prepackaged with out-of-the-box solutions.

Every single product we release also changes the world in one way or another. This has an impact on everything around us, as our products don’t exist in a vacuum. Each company and each product has its own context and ethical challenges, and there are no checklists for trustworthy AI. We need to find other ways to measure success than the bottom line, and make trustworthy AI part of the company process and strategy. We need to find other incentives than monetary rewards that increase with sales numbers and profit margins. That is, if we care about humans, humanity and our well-being.

It is easy to forget that at the receiving end of an AI service stands a human. That human might one day be you. Or your aging parents. Or your child. If we take responsibility for what we release into the world and also expect to be held accountable, chances are that we will heavily scrutinize what we allow to pass through our gate of approval into the hands of the users.

We all know what happens if the bill for our technical debt isn’t settled. If we think it will be any different with repaying our ethical debt, we’re fooling ourselves.

It’s high time to start considering how to tackle the ethical issues in the tech industry. I bet we will all wish we had made it part of our development cycles earlier. If we start by creating diverse teams and bringing vastly varying viewpoints into the development of AI services, we might uncover biases and inequitable fallout from the service before they become actual real-world problems. And importantly, before the problems become bigger than bias and inequity.

If we ask ourselves what harmful outcomes our AI service could contribute to, and imagine that someone we love and care for would be impacted, we might spend some more time on the product to mitigate the ramifications. If we make our services transparent and diligently audit their outcomes, we are more likely to catch the early warning signs of harm. And if we keep watch for those early signs, we might avoid the cycle of irreparable harm to companies and brands caused by loss of trust, as people and whole communities suffer the consequences of thoughtlessly implemented AI.

Sure, progress has to happen – but at what cost? Who are we willing to walk over in the name of innovation and advancement? Is it progress if it hurts individuals or communities?

We who work in the IT industry already have to reconcile ourselves with the fact that we will play a part in enormous disruption in many people's lives – even our own. We have to maintain our humanity, our empathy and our human-centered perspective – even when we get excited about all these imaginable and unimaginable new possibilities.

AI can do amazing things for us, but we have to remember that we’re creating tools and services for humans. For humanity.

That is the only north star we really need.

