Time to deal with our ethical debt - Part I: Behold the disruption

Published in Design, Business

Written by

Annika Madejska
Senior Designer

Annika Madejska is a Senior Designer at Nitor who has a thing for tackling complex design problems and who loves to push herself outside of the comfort zone. She carries a torch for ethical technology, especially AI driven services. In her spare time, she kicks back with some gardening, painting or textile arts.

Article

May 29, 2023 · 6 min read time

We’re currently witnessing a technological disruption that will have an unimaginable impact on our society. The only problem is that we haven’t dealt with the ethical debt the tech industry has racked up, and now we’re building the future on a shaky foundation. In this article, you will find out why caring about ethics is risk management and brand protection.

This is part one in a two-article series on the topic of ethical debt in the tech industry. Read part two here.

With recent developments in generative AI, it is becoming evident that we’re standing in front of big changes to life and society. The future where AI would take over more and more of our tasks was still decades away on November 29th, 2022. But with the release of ChatGPT the very next day, that future had suddenly arrived.

Suddenly, we went from a lax attitude towards ethical concerns about AI to a much more alarmed state. There are now even concerns that significant structures of our society could break due to things like fabricated videos, voice recordings, and photos that blur the line between fact and fiction.

Some feel overwhelmed by how much ethical debt we have – others might be bothered by the ethicists trying to kill the joy over all the amazing possibilities. 

During 2023, AI technology has continued to take big leaps forward showing that the potential disruption to life as we know it is no longer just a product of our imagination. The disruption is here and unfolding right before our eyes. 

Up until now, most AI services have been narrow AIs, meaning they have been very specific and only really great at performing one task. But the release of ChatGPT radically changed the playing field.

For example, Microsoft's researchers are currently proposing to combine several of these narrow AIs, using ChatGPT to control them. Suddenly, old barriers are broken and new opportunities are created.

Sure, this is still not the all-knowing, conscious entity of sci-fi movies, but it does push us much closer to a one-stop shop for many tasks we can imagine an AI performing on our behalf – and quite a few we still can’t even imagine it will be able to execute.

The double-edged sword of absence

In the software industry, it is well known that technical debt left to accumulate will end up having dire consequences for the team, the product, and the cost of development.

You reap what you sow.

The release of ChatGPT has highlighted many of the unresolved issues regarding training data, privacy, bias, and the potential harm of utilizing AI services. Frankly, these issues don’t apply only to AI, but to the tech industry at large.

Yet we do not talk about our ethical debt, even though we have knowingly accumulated it for decades. It shouldn’t surprise anyone that there are structural issues within the IT industry, such as a lack of diversity or equal opportunity.

The development of AI services is currently in the hands of a very small part of the world’s population, mainly from the industrialized part of the world with a high standard of living. A kind of colonization of the digital world, if you want to put it bluntly. This all impacts what services get developed, how they get developed, and for whom – and who gets left out.

The ethical concerns are not only about what kind of data is included in training an algorithm or how the data has been obtained. It is also about what is left unrepresented in that data, because not all societies are highly digitized.

This means the art of those ignored is not part of the training data for image generating AI – which is both a blessing and a curse. They haven’t had their copyrights breached or their style appropriated. By not existing in the digital world they have effectively opted out. But it also means that in the new era of AI generated art, there is no trace of their culture, their stories or their existence.

Calls for regulation are nothing new

Sam Altman, the CEO of OpenAI, the company behind ChatGPT, has been a refreshing contrast to other tech CEOs, who have mostly tried to deflect and avoid assuming responsibility in congressional hearings.

Altman testified in front of the U.S. Senate on the 16th of May 2023, acknowledging the potential harm technology can do. In his testimony, he was very candid about the fact that many jobs might be lost, but also that new jobs will be created. Altman urged lawmakers to regulate the technology and find ways to ease the career transitions many individuals will be facing.

However, on May 24th, while touring Europe to talk about the benefits of AI, Altman made it clear that he isn’t really all that interested in regulation after all, saying that OpenAI might cease operations in Europe because the company can’t comply with the planned AI Act.

The proposed legislation would force creators of foundation models to disclose how the AI model was trained and provide a summary of copyrighted data used for training.

Altman now states that he thinks regulation should apply to future, more powerful AI, not to current algorithmic capabilities.

Senator Richard Blumenthal, Democrat of Connecticut and chair of the Senate panel at the Altman hearing, acknowledged that politicians have failed to keep up with new innovations in technology in the past. He also noted that it is imperative not to repeat the mistakes made with social media.

But let's face it. We already know that legislators failed to act when the first warning signs appeared. 

Cathy O’Neil, an academic mathematician who became a Wall Street quant and then a data scientist, wrote the book Weapons of Math Destruction in 2016, which included numerous examples of harmful algorithmic outcomes. Elon Musk – the loss of his star quality over the last few years notwithstanding – called for regulation in 2017 when speaking to U.S. governors at the National Governors Association, specifically spelling out that AI calls for proactive intervention.

A global body for AI regulation was suggested by scientists from New Zealand in 2017, proposing that such a body should “…advocate for social justice and equity on the global stage; and advocate for ethics around AI and protection of rights, privacy and safety, so we are not forced to follow the lead of others.”

Cathy O’Neil writes in the aforementioned Weapons of Math Destruction that algorithms are opinions embedded in code. This means that all of our collective ethical debt becomes embedded in AI services and is amplified as systems are launched at global scale.

Tech companies, and especially designers in tech, are adamant about using a user-centric approach. It’s all about empathizing with the users and creating services that people love because they bring them value. But are we truly empathetic and user-centric if we don’t respect the users and don’t feel responsible for how our services affect them? Are we truly user-friendly and bringing value if the service is biased and flawed in ways that cause users harm?

The cost of not repaying our ethical debt is erosion of trust, and people will not use services or software they do not trust.

Eventually, even if people get locked into your ecosystem of products or feel forced to use your platform due to its popularity, that lost trust will start to have an impact on your company’s bottom line. 

Once trust is lost, it is exceptionally hard to regain. This will have an impact on any other products the company develops and launches. Dealing with ethical debt is to protect your brand. It is risk management.

It has to become a part of your business model.
