Seminar arranged by the Finnish Society of Sciences and Letters on February 13th 2025 at ‘Tieteiden talo’ (the House of Science and Letters) in Helsinki.
Speakers
Minna Ruckenstein – The Inconsistencies of AI Futures
Pii Telakivi – The Hybrid Thinker: How AI Shapes Reasoning and Decision-Making
Carl-Gustav Lindén – AI and the Disinformation Landscape
Johan Lundin – Could AI Provide Global and Equitable Access to Medical Diagnostics?
Linda Mannila – AI Literacy
Bo Gustafsson – The Challenge of Restoring the Baltic Sea: Learning From the Past to Find Solutions for the Future
Veronika Laippala – AI in the Humanities and Social Sciences: An Efficient but Unreliable Research Assistant?
Sasu Tarkoma – Deep Learning Software
Mikael Collan – Things to Take into Consideration When Using AI in Research
The aim of the seminar was to bring our sections together around a highly relevant topic, namely artificial intelligence (AI). All four sections, i.e., Mathematics and Physics, Biosciences, Humanities, and Social Sciences, were represented by excellent speakers from the University of Helsinki, Lappeenranta University of Technology, University of Turku, Stockholm University, and the University of Bergen.
The term “artificial intelligence” was first used in 1956 during a seminal event known as the Dartmouth Conference, officially titled the “Dartmouth Summer Research Project on Artificial Intelligence.” The main organizer of this conference was John McCarthy, a mathematician and computer scientist who spent most of his career at Stanford University. The conference is often regarded as the birthplace of AI as a recognized field of study, as it brought together researchers to explore the potential of creating machines that could simulate aspects of human intelligence.
AI is becoming increasingly important in today’s world due to its ability to process large amounts of data, recognize patterns, and make decisions much faster than humans. It has the potential to transform various sectors, including healthcare, education, transportation, and finance, by improving efficiency, increasing accuracy, and driving innovation. AI tools can enhance everyday life by automating routine tasks, providing real-time insights, and solving complex problems that were previously beyond reach. During the last five years there has also been a tremendous increase in the utilization of AI in the environmental field, especially in the monitoring and handling of big data.
As AI technologies continue to evolve, they hold the promise of creating smarter solutions that can address some of the world’s most pressing challenges, making AI a critical component of future progress and development. The main advantages of AI lie in increased efficiency and productivity, improved accuracy and decision making, and the creation of new job opportunities. The main disadvantages and challenges, however, relate to job displacement, security risks, ethical considerations, economic inequality, energy use, accessibility, and lack of transparency. The European Commission has been active on this topic, and on the 2nd of February 2025 the first rules under the Artificial Intelligence Act (AI Act) started to apply. This is the first-ever legal framework on AI; it addresses the risks of AI and positions Europe to play a leading role globally. It is noteworthy that there is no comprehensive federal legislation or regulation in the USA that governs the development of AI or specifically prohibits or restricts its use.
Some of the most pressing issues and exciting possibilities of AI in various domains were presented. Minna Ruckenstein, Professor of Emerging Technologies in Society at the Consumer Society Research Centre at the University of Helsinki, highlighted AI from a multidimensional societal and cultural perspective, in which the ethical dimension is central: advanced data technology requires large amounts of energy, and this development clashes with efforts to counteract human impact on the climate. Her talk addressed the disparity between the promises of AI and its actual implementations, questioning who shapes the future of AI and advocating a more inclusive dialogue. The presentation offered three visions for the future and outlined areas that require greater attention in the context of AI. Topics included the materiality of AI, the human-powered nature of AI automation, and the revaluation of expertise. The talk clearly demonstrated that there are many ways to take advantage of AI technologies, and we need to be more imaginative about how they can benefit humanity and the planet. To achieve this, we must develop a better understanding of possible futures. Despite its many merits, AI can also be seen as part of the complex problem humanity faces.
Pii Telakivi, postdoctoral researcher in Philosophy at the University of Turku, raised the question of AI as an “extended mind” and whether humans risk losing the final say in its development. She noted that AI at its best can expand our cognitive capacities, but at the same time risks eroding individual cognitive ability (“cognitive atrophy”) and the right to privacy. She discussed how AI systems and other smart personalized technologies affect us as cognitive agents, critical thinkers, and more specifically, as researchers. In her talk she proposed a distinction between types of transparency when using AI technology. The “disproportionality of transparencies” (too much phenomenological transparency, too little reflective transparency) is linked to many risks associated with AI extenders. If an AI tool is highly phenomenologically transparent-in-use and seamlessly integrated into our cognitive system, it can create blind spots in our moral decision-making and may give the misleading impression that we are in control, even though our original intentions may have been altered.
Carl-Gustav Lindén, Professor of Journalism Studies at the University of Bergen in Norway, pointed out that AI should not replace “the human in the loop”; AI is not smarter than humans, but it can be a central tool in information analysis. The final critical assessment cannot be left to AI. His talk emphasized the importance of fact-checking and of media and AI literacy to combat the information disorder and mitigate the risk that people stop believing anything because of disinformation. The information disorder was discussed in terms of disinformation and misinformation, fake news, propaganda, and manipulation techniques. Ethical problems related to AI were highlighted, particularly those concerning the accuracy of media content; AI models can create entirely false images and articles. Addressing these ethical issues requires a multifaceted approach, including developing technologies to detect and counteract fake content, implementing robust ethical guidelines, enhancing AI literacy among the public, and creating comprehensive legal and regulatory frameworks to govern the responsible and transparent use of AI in media.
Johan Lundin, Professor and Research Director at FIMM (Institute for Molecular Medicine Finland), demonstrated with examples from medical diagnostics how care in specific cases can be revolutionized by machine learning, in this case through advanced image analysis, where huge image banks can be quickly analyzed to support diagnostics remotely on a global scale. In his talk, he pointed out that access to accurate and timely medical diagnostics remains a major global health challenge, particularly in low-resource settings where shortages of trained professionals and laboratory infrastructure hinder effective healthcare delivery. Advances in AI offer the potential to bridge this gap by enabling automated and cost-effective diagnostic solutions. AI-powered image analysis can assist in cancer and infectious disease screening and diagnostics, reducing dependency on scarce human expertise. However, significant challenges remain, including access to data, regulatory hurdles, and the need for validation in diverse populations. Additionally, ethical considerations around AI deployment, infrastructure requirements, and sustainability must be addressed to ensure equitable implementation. The talk explored both the opportunities and limitations of AI-based diagnostics, emphasizing the importance of global collaboration, robust validation, and context-specific solutions. AI should be “an excellent assistant” based on “outcome-based supervised learning” (a central aspect of the reasoning).
Reflections on how individuals should be able to critically and insightfully evaluate the use and benefits of AI were provided by Linda Mannila, Associate Professor at the Department of Computer Science at the University of Helsinki. As AI becomes an integral part of research workflows, the importance of AI literacy for researchers across disciplines grows. AI systems and tools assist in decision making, data analysis, literature reviews, and writing, but their use also raises critical questions about accuracy, bias, transparency, and ethical responsibility. In this case too, “privacy issues” were seen as central. AI has no built-in ethical dimension and risks contributing to “metacognitive laziness.” Machine learning is not equivalent to intelligence, as it has no capacity for critical thinking and risks outsourcing our thinking. “Intellectual property rights” and “confidentiality issues” should also be considered. Mannila discussed the key aspects of AI literacy and what these can mean in a research context, focusing on understanding AI’s capabilities and limitations, critically evaluating AI results, and promoting responsible and mindful use while maintaining scientific integrity and rigor.
Bo Gustafsson, Director of the Baltic Nest Institute at the Stockholm University Baltic Sea Centre, illustrated with examples from environmental research how large databases with a variety of parameters can be used with modeling tools to illustrate, explain, and understand large-scale processes in a large and complex context such as the entire Baltic Sea from a long-term perspective. His presentation highlighted how modeling helps explain historical trends and set effective nutrient management targets. Numerical modeling tools are necessary and useful for managing the Baltic Sea. While the achieved nutrient input reductions will likely lead to improvement, climate change will pose new challenges to the Baltic Sea ecosystem. Thus, modeling tools have become a support for continued research and for societal decisions on major environmental issues. The need for “responsible experts” was emphasized, i.e., humans must retain a comprehensive overview. He highlighted that AI provides insights and alternatives regarding issues that cannot be intuitively understood and helps identify unpredictable outcomes.
Veronika Laippala, Professor of Digital Language Studies at the University of Turku, illustrated how AI (actually “machine learning,” which we choose to call “intelligence”) can help us understand languages. Over the past years, AI has opened new possibilities for analyzing textual data, offering exciting potential for use in many fields. In her talk, she discussed perspectives and experiences on using AI in diverse research settings involving large-scale textual data spanning the humanities, social sciences, natural language processing, and data analytics. Key questions such as “What kinds of tasks is AI particularly well-suited for?”, “Where does it fall short?”, and “Can it function as a research assistant, perhaps even replacing human researchers?” were discussed. Laippala presented large language models as tools for analyzing language and language development. She pointed out the imbalance in databases, where major languages already have reliable translation tools, while smaller languages have inadequate databases or are entirely absent. The development is rapid, and the tools’ areas of application are expanding. AI needs to be used properly to be useful, and we need fully open, high-quality models. Testing the models on both humans and AI is of key importance.
Sasu Tarkoma, Professor of Computer Science at the University of Helsinki and Dean of the Faculty of Science, explored in his talk the rapid evolution of AI from the perspective of modern software systems. Advances in computational power, vast training datasets, and innovative methodologies have propelled AI into a new era. Large language model techniques are driving the emergence of agentic software: systems that operate with increasing levels of autonomy and are built on novel software stacks and frameworks. This evolution is transforming software development and deployment, raising critical questions about efficiency, trustworthiness, and safety. AI is based on numerical “probabilistic tools,” which is central when different outcomes of AI-generated information are evaluated and utilized. He discussed how these emerging AI capabilities are reshaping development practices and enabling distributed AI applications, and he described the role of AI software within academic environments, with examples of recent projects and initiatives from the University of Helsinki. He discussed AI from a broad technological and societal perspective and provided an insightful overview of the development and prospects of software development within AI and the need for regulatory tools regarding AI, both regionally and globally.
Risks and merits of using AI in the service of research, seen from a publication perspective, were highlighted by Mikael Collan, Professor in Business Studies at LUT Business School, Lappeenranta. The use and misuse of generative AI (GenAI) in research were discussed. GenAI can be a great tool for helping researchers perform tasks faster and for making the popularization of science easier and more convenient, while incorrect or fraudulent use of the technology is a real problem in reviewing and document generation. Guidelines for correct use must be put in place, and ways to control misuse and misusers must be found. While GenAI is transforming the world of research, the change itself and how it happens offer an interesting and relevant research topic. He called for risk analysis and “good practices” from ethical and practical perspectives (“confidentiality ethics” and “intellectual property rights”). AI-generated texts cannot create new thoughts or ideas, nor replace the “originality of thought.” The need for shared rules to avoid large-scale fraud that erodes the credibility of serious scientific publishing was pointed out.
The symposium concluded with a concise summary and subsequent discussion. AI does not replace experts, but the work of experts is being redefined. As technology continues to evolve, it is crucial for scientists, ethicists, and leaders to work together to ensure that AI benefits everyone fairly.
Organizing committee:
Susanne Wiedmer, Chair of the Mathematics and Physics Section
Dan Lindholm, Chair of the Biosciences Section
Erik Bonsdorff, Vice-chair of the Biosciences Section
Pauline von Bonsdorff, Chair of the Humanities Section
Peter Söderlund, Chair of the Social Sciences Section