April 29, 2025 · 5 min read

What is the blueprint for trustworthy, responsible AI usage for companies operating in the tech landscape – now and tomorrow? Get ready for some sobering truths as we tackle this multifaceted issue with Nitor's Senior Designer and Digital Ethicist Annika Madejska and Sustainability Director Ari Koli.
There is an inherent fallacy in how many businesses approach AI implementation: the sentiment is that if you're not diving headfirst into AI, you're falling behind. Taking stock of present challenges and desired outcomes often comes only after a vaguely outlined motive for introducing AI has already been settled on. Like forging a key for a lock that hasn't been built yet.
"This applies to product leadership as well. Companies aren’t generally primed to operate effectively in the discovery phase of a product, which is essential in creating valuable solutions for customers and driving product-market fit," Koli begins.
So, what does that have to do with a company's approach to responsible usage of technology and AI, adhering to regulations, and conducting long-term business in a sustainable way? Everything, really.
"It's at the core of how to make things go wrong in technology in general. Being in a hurry and adopting the mindset that ‘we'll fix it down the line’," Madejska notes.
"Everything from meeting legislative requirements to keeping a company's workforce sufficiently informed and educated to ensure mindful data utilisation must be built on a foundation of understanding. With the current hurried state of AI implementation, everyone is simply far too busy being in a hurry to see if they're focusing on the right things with AI, and how its inclusion will affect existing structures."
The responsibility of responsibility
When understanding is limited, so is awareness of how AI is utilised. Responsible use of AI starts with equipping employees with the understanding, knowledge, and skills needed to work with AI. Without a strong foundation of internal AI literacy, companies will struggle to apply AI responsibly and strategically.
Once employees understand AI, a company can confidently communicate its use to customers. Helping customers navigate AI means offering clear, accessible information about where and how AI is used in services that affect them – especially when personal data is involved. Building trust in AI isn’t just about communication; it’s grounded in internal clarity, capability, and confidence.
What we're talking about here is responsibility and accountability. It must be embedded across the organisation through shared understanding, transparent practices, and a culture that sees AI not just as a technical solution, but as a series of human choices requiring care, reflection and oversight. A culture that allows, and even expects, the raising of ethical concerns, and where everyone expects to be held accountable.
Learn to deal with uncertainty
Many companies face AI-centric questions without tangible answers. Are employees utilising AI in their daily operations responsibly, and how is this monitored? Are the outcomes of LLMs unbiased and fair, and how is that validated? If a chatbot in charge of customer service dishes out anything from misinformation to biased remarks, who at the company is primed to remedy the situation? Understanding is the keystone not only of refinement, but of course correction.
Put another way: ships don't have lifeboats for the times when the ocean is calm.
"The lack of accountability is already adding to the pile of ethical debt the technology sector is amassing. Eventually everyone will need to write solid governance models on how AI is utilised within a company, but many are struggling. The road ahead will only get harder the longer companies wait to get underway," Koli emphasises.
"That road will also become more unpredictable over time, which is further underscored by the sense of speed we talked about earlier. We are not only accumulating ethical debt like before, but at an increased rate thanks to AI. The pile of debt is rising at hyper-speed levels. That makes it increasingly difficult to foresee what lies in wait for us in the future," Madejska continues.
Misconceptions around regulations
Limited expertise, and the fact that responsibility for it often goes unassigned, will become a powder keg for businesses as new EU technology regulations arrive – a prime example being the AI Act. Few companies even in tech-savvy Finland have prepared to tackle its implementation, but in a global setting, its biggest issue is the framing: legislation is seen as a barrier rather than guard rails. Limitations rather than beacons of ethics and values. A wall in the way rather than a bridge across.
"The spirit of invention will be framed differently here due to EU regulation, but I do not believe for a second that Europe is falling behind the curve in a global sense. Constraints are very good, often necessary for innovation," Madejska notes.
"There are also misconceptions that the AI Act is overly difficult to comprehend and apply. It isn't. A company simply needs to take the time to draw up a solid governance model on how they use AI, and the AI Act serves to provide a legislative framework for that process," Koli adds.
AI governance is not a one-size-fits-all template or solution – each organisation has to tailor it to its context and ways of working. Merely utilising AI may call for a fairly low-key governance model, but if your company is actively developing and deploying AI, the governance model needs to cover significantly more ground. The question isn't whether companies and organisations should have AI governance in place – they should, and so should you – but rather how broad it needs to be.
On the trail of railroad barons
It is important to note that we're still in a very early phase of AI implementation, with many governance structures and information security details still in flux. This is something Nitor's experts aim to help remedy. Koli continues:
"Governance models make a company more trustworthy in the eyes of the customers, but they are also essential in helping employees understand how these new tools can be implemented effectively and operated with accountability. Effective governance models aren't black magic, but they require AI literate and knowledgeable employees, so responsibility can be taken and upheld organisation wide. That entails a structured approach to AI implementation."
Madejska concludes our discussion with a sobering thought:
"Industrialisation had an enormous impact on humanity. The ongoing technological revolution is even bigger, as we have the capacity to inflict widespread global change with unprecedented speed. However, even the railroad barons of the golden age of industrialisation were eventually held responsible for their actions like exploitation of labour, and corrupt business practices – if anyone in the tech industry expects to never be held accountable, they are fooling themselves."
Are you looking to strengthen your company's digital business to stand the test of time? Read more about our services!