February 3, 2025 · 4 min read

Analytics is the cornerstone of modern business. It steers company strategy, guides internal processes, and helps identify development needs and critical pain points. While large language models simplify data utilisation, the growing use of AI also underscores the value and importance of skilled analysts.
Machine learning and AI algorithms tailored to tackle specific problems have long been integral to data analytics. However, new large language models have revolutionised applications in natural language processing, allowing problem-solving without specialised training data. Yet despite the polish of their output, the expertise and logical reasoning capabilities of these models fall short of the standards of professional analysts.
This is well understood by Nitor’s Eero Lihavainen, joined here by Tuomo Kakkonen, who recently departed Nitor to seek new adventures. Kakkonen has also served as an external expert for the European Commission, contributing to the selection processes for new EU-funded AI projects.
Our interviewees note that in text analytics, the AI revolution has primarily meant redesigning many systems to leverage large language models. While traditional methods still work well, large language models benefit text analytics in several ways: they can reduce the manual labour and maintenance associated with older text analytics tools.
Many companies are now contemplating how to integrate language models into existing systems, and how long to rely on older, proven systems and methods. Lihavainen is quick to note that AI has been part of data analytics for a long time. Meaningful data analytics still requires the ability to discern, in practice, which tasks can be automated and which still need expert input.
Data quality is emphasised in the era of AI
According to Lihavainen, common data analytics challenges are still tackled most effectively with traditional statistical and machine learning methods. However, language models have become a part of analysts’ daily lives, acting as “digital colleagues” that accelerate programming and tasks such as writing database queries. They can also handle simple data processing tasks that would otherwise be time-consuming.
Large language models can also serve as tools for automation. For applications involving reading or generating natural language – such as transforming unstructured data into structured formats for further analysis or application – language models are incredibly flexible. However, this flexibility comes with inherent risks: it’s easy to build a prototype that works as a flashy demo, but far more challenging to create a solution with minimal errors. Assessing how well a solution performs in practice requires data analytics expertise.
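To make the unstructured-to-structured pattern concrete, here is a minimal sketch in Python. It assumes the OpenAI Python SDK and an API key in the environment; the model name, the invoice domain, and the target fields are illustrative assumptions, not a description of any particular production system.

```python
# Minimal sketch: extracting structured fields from free text with an LLM.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# environment; the model name and target fields are illustrative only.
import json
from openai import OpenAI

client = OpenAI()

def extract_invoice_fields(text: str) -> dict:
    """Ask the model to return a fixed JSON structure for downstream analysis."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        response_format={"type": "json_object"},  # constrain output to JSON
        messages=[
            {"role": "system",
             "content": "Extract the fields 'vendor', 'date' (ISO 8601) and "
                        "'total' (number) from the invoice text. "
                        "Reply with a JSON object only."},
            {"role": "user", "content": text},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(extract_invoice_fields(
    "Invoice from Acme Oy, 12 March 2025, total 1,250.00 EUR"))
```

Even a sketch like this shows the risk the interviewees describe: the prototype runs in minutes, but nothing here guarantees the extracted fields are correct, which is where analytics expertise comes in.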
“Measurement is one of the most critical aspects of data analytics work, and as the use of large language models becomes more widespread, validating results takes on even greater importance. AI models are notorious for the fact that their output can change significantly when a new version is rolled out or even when the wording of a query is altered,” Lihavainen explains.
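The validation point can be illustrated with a small evaluation harness. The sketch below is hypothetical: it compares a model’s outputs against a hand-labelled reference set and reports an agreement rate, so a model-version or prompt change can be checked before rollout. The `classify` function is a stand-in for whatever LLM call a real system would make.

```python
# Minimal sketch of validating an LLM-backed classifier against a labelled
# reference set. `classify` is a hypothetical stand-in for the real model call.
from typing import Callable

def agreement_rate(classify: Callable[[str], str],
                   labelled: list[tuple[str, str]]) -> float:
    """Fraction of reference examples where the model matches the label."""
    hits = sum(1 for text, label in labelled if classify(text) == label)
    return hits / len(labelled)

reference_set = [
    ("The delivery arrived two weeks late.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
]

# Re-run this check whenever the model version or prompt wording changes;
# a drop in agreement flags a regression before it reaches production.
score = agreement_rate(lambda t: "negative" if "late" in t else "positive",
                       reference_set)
print(f"agreement: {score:.0%}")
```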
As companies now utilise AI to process text data, data quality becomes even more important. If a document fed to the model is incomprehensible to a human, the model likely won’t understand it either. On the other hand, language models can assist in writing and improving the quality of text content, which is one of their most popular applications today.
Measurement and understanding form the most valuable equation
Measurement is key to deriving meaningful data that can guide a company’s actions effectively. Analysis and evaluation require a deep understanding of business components and their interrelations. In this equation, a knowledgeable, responsible, and ethically aware data analytics professional outperforms generative AI capabilities.
As data volumes continue to grow, so do the challenges associated with management and quality assurance. Our interviewees stress the need for increased automation to strengthen monitoring and validation processes.
“We urgently need new types of systems specifically designed to oversee AI systems. While GDPR compliance is mandatory in the EU and the upcoming AI Act will establish stricter regulations, the cornerstone of reliable analytics is understanding the building blocks of valuable, ethical, and secure data and AI models,” Kakkonen emphasises.
In other words, processes that produce valuable and insightful analytics have not become inherently easier with the influx of AI models. While natural language processing and large-scale data management have become more efficient, the perception of speed and simplicity can misguide businesses if they lack a thorough understanding of their goals, their methods, and whether their chosen direction rests on robust data. This is the understanding that experts such as Nitor’s Digital Engineers will continue to uphold and cultivate.
Nitor helps your company leverage data more strategically. Learn more about our analytics services!