WAAS Talks on Science for Human Security: Artificial Intelligence

Online | October 17, 2024, from 1:30 to 3:00 pm CET

Register here for the event

Opening (5 minutes):

  • Nebojša Nešković, Vice President, World Academy of Art and Science (WAAS); President, Serbian Chapter of The Club of Rome

Introduction (5 minutes) and moderation:

  • Smiljana Antonijević, Illinois Institute of Technology, Chicago, USA; Member, Serbian Chapter of The Club of Rome

Statements of the speakers* (4 × 5 minutes):

  • Daniel Erasmus, Chief Executive Officer, Erasmus.AI, The Netherlands: Artificial intelligence and climate
  • Sally Wyatt, Professor, Science, Technology and Society Research Programme, University of Maastricht, The Netherlands: Artificial intelligence and medicine
  • Debora Lanzeni, Research Fellow, Emerging Technologies Lab, Monash University, Melbourne, Australia: Artificial intelligence and social sciences
  • Charlotte Tschider, Associate Professor, School of Law, Loyola University Chicago, IL, USA: Artificial intelligence and law

Discussion among the speakers** (30 minutes)

Discussion of the speakers with the audience*** (20 minutes)

Conclusions of the moderator (10 minutes)

*Each speaker will give a 5-minute statement on artificial intelligence and human security from his or her perspective, prepared and sent to the moderator and the other speakers before the webinar.
**The discussion will be based on questions prepared by the moderator and on the statements delivered by the speakers.
***The discussion will include comments and questions from the audience and responses from the speakers and the moderator.

Nebojša Nešković
Vice President, World Academy of Art and Science (WAAS); President, Serbian Chapter of The Club of Rome.
Opening

In August 2023, the UN General Assembly proclaimed the International Decade of Sciences for Sustainable Development, from 2024 to 2033. UNESCO was given the task of leading the preparation and implementation of the activities within the Decade. On April 16, 2024, The Earth-Humanity Coalition was founded as an association of global, regional, and national scientific organizations tasked with preparing and implementing, in close cooperation with UNESCO, various initiatives within the overall program of the Decade. WAAS was among the founding members of the Coalition. It had initiated the WAAS Program of Sciences for Sustainable Development, which became a specific initiative of the Coalition. WAAS Talks on Science for Human Security: Artificial Intelligence is the fourth webinar within the Program. Reports on the previous webinars can be found on the Events page of the WAAS website.

Smiljana Antonijević
Illinois Institute of Technology, Chicago, USA; Member, Serbian Chapter of The Club of Rome.
Introduction and Moderation

As a digital anthropologist, Antonijević explores the intersection of culture, technology, and transformation design through research and teaching in the USA and Europe. She is also actively engaged in applied research, focusing on socio-cultural aspects of Artificial Intelligence and Machine Learning (AI/ML).

Daniel Erasmus
Chief Executive Officer, Erasmus.AI, The Netherlands.
Artificial Intelligence and Climate

The ClimateGPT (https://climategpt.ai) family of models, built on the Club of Rome's breakthrough Earth4All work, is open source and freely available to researchers at ClimateGPT.AI. Daniel Erasmus conceived and led the team that built ClimateGPT, the world's first foundational artificial intelligence (AI) model on climate change, drawing on more than a decade of running planetary-scale datasets as the Chief Executive Officer of Erasmus.AI. The Erasmus.AI platform was built to augment, inform, and reveal hidden connections in the world's news.

Sally Wyatt
Professor, Science, Technology and Society Research Programme, University of Maastricht, The Netherlands.
Artificial Intelligence and Medicine

Promissory statements about an impending artificial intelligence (AI) revolution are everywhere in policy and media discourses, and healthcare is currently AI's biggest investment domain. The promises include greater efficiency, with machines reducing the time and workload of expensive professional labor by taking over boring and repetitive tasks. There are also promised benefits for patients: a reduced workload means healthcare professionals can spend more time with them. Further promises concern improved diagnosis, and new research methods delivering results about the causes and treatment of disease. These promises resemble earlier claims about related healthcare innovations such as big data and computer-assisted diagnosis.

More recently, evidence has begun to emerge about the harmful consequences of AI-supported tools in healthcare, such as the deskilling of professionals, faulty diagnoses, incorrect treatments, unjust disparities in care outcomes, and data leaks threatening patient privacy. There are also concerns about the implications for care when clinicians and healthcare practitioners themselves do not understand how AI works. The growing role of privately owned tech companies in the provision of administrative, treatment, and diagnostic tools raises worries about the ownership of data, the autonomy of healthcare professionals, the financial costs, and the provision of what should be a public good, namely healthcare. There are also potential harms to the physical environment, given AI's enormous energy and water needs.

Debora Lanzeni
Research Fellow, Emerging Technologies Lab, Monash University, Melbourne, Australia.
Artificial Intelligence and Social Sciences

Artificial Intelligence (AI) as anticipatory infrastructure (Pink, 2023) gives rise to an intense discussion around future ethics in the social sciences, particularly in its regulatory and philosophical aspects. Now that we live with Generative AI (GenAI), the scenario is entirely different from what we predicted. Remarkably, after two years, no successful business models have emerged to deliver the much-anticipated impact and growth. The promises of the AI revolution are still in beta, and we face a myriad of challenges in almost all areas of social life, from living with fake news to the almost inevitable redesign of all professions and workplaces. What AI is and could be remains disputed in the symbolic, economic, and social realms, despite dystopian and utopian discourses that present it as an established fact and a predestined near future.

The social sciences have proven highly effective in critically engaging with AI (and co-existing technologies) concerning, for instance, its harmful potential, surveillance and control, worker displacement, bias and profiling, market regulation, model collapse, and self-regulating systems. This is crucial work to ensure that AI and subsequent emerging technologies are ethically aligned with humans, diversity, and the planet, so as to achieve a relative balance of power. However, we also need social sciences that are oriented towards the future and engaged with the possible and desirable impacts of AI. This engagement should involve developers and other stakeholders in a generative manner. For example, we could ask ourselves: should we actively engage in the design and training of Large Language Models (LLMs) while considering the implications of those models for people's everyday lives? Another crucial component and open question is: should we embrace AI in our professional practice, particularly in the research and communication of our findings?

Charlotte Tschider
Associate Professor, School of Law, Loyola University Chicago, IL, USA.
Artificial Intelligence and Law

Artificial Intelligence (AI) is part of nearly every sphere of modern society. AI is integrated into critical infrastructure, agriculture, transportation, manufacturing, healthcare, finance, and consumer goods. While only thirty-three percent of consumers think they are using AI platforms, at least seventy-seven percent are actually using them, and at least seventy-seven percent of devices currently feature some form of AI. Today, thirty-five percent of businesses have adopted some version of AI. AI can tell you when you are going to have a migraine, when it is time to invest in a new stock, or when you are about to get into a car accident. AI is designed to assess, diagnose, recommend, alert, and automate physical functions; it is positioned to leverage complex decisional systems to overcome human problems and improve a human world. It is AI's artificiality, its distinction from human decision-making, that powers solutions to intractable human problems.

AI is not completely artificial, though. Human data scientists, at least initially, design, train, and test AI. Humans, through their interactions with technology, produce the data used to train AI. But like any other type of technology designed by humans, AI can fail to perform as expected due to human mistakes and failures to anticipate its function. AI may injure humans or cause property damage. It could compromise individual privacy or perpetuate and entrench discrimination. AI could influence human knowledge, attitudes, and behavior. AI's black-box nature often means that the decisions it makes may not be readily intelligible, even when examined by its designers. Overall, AI will not function effectively without humans involved in its design and operation. However, despite the effectiveness of human contributions in creating and using AI, caution should be taken when expecting humans to challenge, interrupt, or supervise it.