WAAS Talks on Science for Human Security: Artificial Intelligence

Online on October 17, 2024 from 1:30 to 3:00 pm CET

Nebojša Nešković
Vice President, World Academy of Art and Science (WAAS); President, Serbian Chapter of The Club of Rome.
Introduction

In August 2023, the UN General Assembly proclaimed the International Decade of Sciences for Sustainable Development, running from 2024 to 2033, and tasked UNESCO with leading the preparation and implementation of the activities within the Decade. On April 16, 2024, The Earth-Humanity Coalition was founded as an association of global, regional, and national scientific organizations charged with preparing and implementing, in close cooperation with UNESCO, various initiatives within the overall program of the Decade. WAAS was among the founding members of the Coalition. It had initiated the WAAS Program of Sciences for Sustainable Development, which became a specific initiative of the Coalition. WAAS Talks on Science for Human Security: Artificial Intelligence is the fourth webinar within the Program. The reports on the previous webinars can be found on the Events page of the WAAS website.

Moderator
Smiljana Antonijević
Illinois Institute of Technology, Chicago, USA; Member, Serbian Chapter of The Club of Rome.
Summary

This panel explored the evolving relationship between artificial intelligence (AI) and human security, as part of the WAAS series of talks addressing key global challenges of our time. In alignment with the United Nations’ designation of the current decade as the International Decade of Sciences for Sustainable Development, the discussion emphasized the urgency of tackling global challenges in which AI plays a pivotal role – either safeguarding or potentially undermining human security. The panel featured four distinguished speakers, who brought expert perspectives from diverse but interconnected fields.

Sally Wyatt explored the role of AI in medicine, contrasting recurring and often deterministic claims about its capabilities with the credibility crisis facing AI systems in healthcare. She highlighted often-overlooked issues, including the intricate networks of policy and practice, the importance of local socio-political contexts, the economic and environmental costs of AI, and the evolving dynamics of trust and responsibility between patients, medical professionals, and the tech companies increasingly involved in healthcare.

Daniel Erasmus argued that climate change, rather than AI, represents the primary existential risk of our time. He explored how tools like ClimateGPT (https://climategpt.ai) could support decision-making related to climate action, enabling socio-political, economic, and technical systems to adapt faster than the climate itself is changing. He characterized ClimateGPT as a move toward social intelligence rather than artificial intelligence – one intended to increase our sense-making and sense-breaking, and our organizational capacity to address these challenges.

Debora Lanzeni highlighted the crucial role of the social sciences in addressing AI-related societal issues. She discussed labor relations influenced by AI, criticized both deterministic views of the technology and a social science that primarily reacts to hype, and called for social scientists to engage with AI at every stage, from its development to the formulation of media policies.

Charlotte Tschider tackled the legal and ethical dimensions of AI. She discussed AI’s impact on legal practice, regulatory challenges across industries where AI misuse could lead to harm, and the balance between individual rights and collective interests in data protection. She also examined the trade-offs between privacy and safety, and the debate around regulating AI proactively versus responding to issues after they arise.

The discussion highlighted that while AI is undeniably shaping the future, it brings complex challenges that demand thoughtful consideration and interdisciplinary collaboration. One key area of focus was training data, where several concerns emerged: the difficulties in gathering, preparing, and contextualizing data; the deep-seated but often unexamined biases present in datasets; and the critical need to ensure that the hundreds of millions of people who, in the coming decades, will receive much of their education from AI are using systems trained on the best materials and knowledge humanity has ever produced. The panel concluded that a shared commitment to safeguarding human security and upholding ethical standards is essential for harnessing AI’s benefits while mitigating its risks.

Talks

Sally Wyatt
Professor, Science, Technology and Society Research Programme, Maastricht University, The Netherlands.
Artificial Intelligence and Medicine

Statements

Promissory statements about an impending artificial intelligence (AI) revolution are everywhere in policy and media discourses, and healthcare is currently AI’s biggest investment space. The promises include efficiency gains from reducing the time and workload of expensive professional labor, especially by having machines take over boring and repetitive tasks. There are also benefits for patients: the reduced workload means healthcare professionals can spend more time with them. Further promises concern improved diagnosis, and new research methods delivering results about the causes and treatment of disease. These promises are similar to earlier claims about related healthcare innovations such as big data and computer-assisted diagnosis.

More recently, evidence has begun to emerge about the harmful consequences of AI-supported tools in healthcare, such as deskilling of professionals, faulty diagnoses, incorrect treatments, unjust disparities in care outcomes, and data leaks threatening patient privacy. There are also concerns about the implications for care when clinicians and healthcare practitioners themselves do not understand how AI works. The growing role of privately owned tech companies in the provision of administrative, treatment, and diagnostic tools raises worries about the ownership of data, the autonomy of healthcare professionals, the financial costs, and the provision of what should be a public good, namely healthcare. There are also potential harms to the physical environment, given the enormous energy and water needs of AI.


Daniel Erasmus
Chief Executive Officer, Erasmus.AI, The Netherlands.
Artificial Intelligence and Climate

Statements

The ClimateGPT (https://climategpt.ai) family of models, built on the Club of Rome’s breakthrough Earth4All work, is open source and publicly and freely available to researchers at ClimateGPT.AI. Daniel Erasmus conceived and led the team that built ClimateGPT – the world’s first foundational artificial intelligence (AI) model on climate change – building on more than a decade of running planetary-scale datasets as Chief Executive Officer of Erasmus.AI. The Erasmus.AI platform was built to augment, inform, and reveal hidden connections in the world’s news.

Debora Lanzeni
Research Fellow, Emerging Technologies Lab, Monash University, Melbourne, Australia.
Artificial Intelligence and Social Sciences

Statements

Artificial Intelligence (AI) as anticipatory infrastructure (Pink, 2023) gives rise to an intense discussion around future ethics in the social sciences, particularly in its regulatory and philosophical aspects. Now that we live with Generative AI (GenAI), the scenario is entirely different from what we predicted. Remarkably, after two years, no successful business models have emerged to deliver the much-anticipated impact and growth. The promises of the AI revolution are still in beta, and we face a myriad of challenges in almost all areas of social life, from living with fake news to the almost inevitable redesign of all professions and workplaces. What AI is and could be is still disputed in the symbolic, economic, and social realms, despite what dystopian and utopian discourses present as established fact and as predestined for the near future.

The social sciences have proven highly effective in critically engaging with AI (and co-existing technologies) concerning its harmful potential: surveillance and control, worker displacement, bias and profiling, market regulation, model collapse, and self-regulating systems, for instance. This is crucial work to ensure that AI and subsequent emerging technologies are ethically aligned with humans, diversity, and the planet, so as to achieve a relative power balance. However, we also need social sciences that are oriented towards the future and engaged with the possible and desirable impacts of AI. This engagement should involve developers and other stakeholders in a generative manner. For example, we could ask ourselves: should we actively engage in the design and training of Large Language Models (LLMs) while considering the implications of those models for people’s everyday lives? Another crucial open question is: should we embrace AI in our professional practice, particularly in the research and communication of our findings?

Charlotte Tschider
Associate Professor, School of Law, Loyola University Chicago, IL, USA.
Artificial Intelligence and Law

Statements

Artificial Intelligence (AI) is part of nearly every sphere of modern society. AI is integrated into critical infrastructure, agriculture, transportation, manufacturing, healthcare, finance, and consumer goods. While only thirty-three percent of consumers think they are using AI platforms, at least seventy-seven percent are actually using them, and at least seventy-seven percent of devices currently feature some form of AI. Today, thirty-five percent of businesses have adopted a version of AI. AI can tell you when you are going to have a migraine, when it is time to invest in a new stock, or when you are about to get into a car accident. AI is designed to assess, diagnose, recommend, alert, and automate physical functions – it is positioned to leverage complex decisional systems to overcome human problems and improve a human world. It is AI’s artificiality, its distinction from human decision-making, that powers solutions to intractable human problems.

AI is not completely artificial, though. Human data scientists, at least initially, design, train, and test AI. Humans, through their interactions with technology, produce the data used to train AI. But like any other type of technology designed by humans, AI can fail to perform as expected due to human mistakes and failures to anticipate its function. AI may injure humans or cause property damage. It could compromise individual privacy or perpetuate and entrench discrimination. AI could influence human knowledge, attitudes, and behavior. AI’s black-box nature often means that the decisions it makes may not be readily intelligible, even when examined by the AI’s designers. Overall, AI will not function effectively without humans involved in its design and operation. However, despite the value of human contributions in creating and using AI, caution is warranted when expecting humans to challenge, interrupt, or supervise it.

Recording of the talks and discussion