[Event] Invitation to the Conference // Critical AI: Rethinking Intelligence, Bias, and Control, 19.11.25
12.50-13.00: Ramón Reichert (Vienna): Welcome and Opening Remarks
13.00-14.00: Federica Frabetti (University of Roehampton, London): Conjunctural AI: Performativity, Authoritarianism, and the Crisis of Algorithmic Control
I argue that contemporary AI must be understood through a conjunctural analysis, rooted in Stuart Hall's work, which addresses the current geopolitical crisis as a moment of both intense peril and opportunity. I propose that AI's expansion is mutually constituted with rising authoritarianism and the erosion of democratic norms.
My methodology uses a feminist performativity framework, which I developed with Eleanor Drage, asserting that algorithmic systems are active, constitutive forces that produce and enact structural inequalities.
I draw on our published case studies of predictive governance: 1) AI-powered Event Detection in policing, where systems performatively create a racialized protest; and 2) Biometric Bordering Technologies, which function as 'copies without an original' to actively generate categories of exclusion.
By analysing performativity within this unstable conjuncture, I illuminate how AI is a powerful enactor of political control. I conclude by arguing that only a conjunctural analysis can fully capture the profound destabilization of power currently underway, offering a necessary, urgent direction for Critical AI studies.
14.00-15.00: Neda Atanasoski (University of Maryland, Baltimore): Artificial General Intelligence and the Reproduction of Power: Feminist Interventions in the Politics of Life
This talk examines seemingly opposed perspectives surrounding Artificial General Intelligence (AGI): its framing as a "New Manhattan Project" driven by geopolitical competition and fears of annihilation, and its reinterpretation by some as an expansion of the definition of life itself. The presentation argues that both narratives, despite their apparent opposition, are deeply intertwined with and perpetuate gendered, racial capitalist and colonial relations. The talk suggests that the push for AGI, whether for global supremacy or a redefinition of life, obscures ongoing exploitation and reinforces existing power structures, underscoring the need for feminist understandings of life and living.
15.00-15.15: Coffee Break
15.15-16.45: Iyo Bisseck (Dreaming Beyond AI, Paris) & Segal Hussein (University of Applied Sciences, Vienna): Technoaffection Against Control: Abolitionist Futures Beyond TESCREAL (Workshop)
In this workshop, we explore how contemporary visions of artificial intelligence are haunted by colonial logics of knowledge, extraction, and control. Drawing on the TESCREAL¹ constellation of interlinked ideologies, including Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism, we examine how these ideological frameworks reproduce hierarchies of intelligence and value under the guise of neutrality and progress.
Through the lens of Big Siblings and Dreaming Beyond AI, we propose "technoaffection"² as a practice of relational design that centers situated knowledge and embodied accountability.
We ask how designers can intervene critically and affectionately to reimagine tools, systems, and infrastructures beyond domination.
By weaving theory and practice, this session invites designers to rethink their complicity and potential in shaping technological futures, opening space for affect, resistance, and collective reconfiguration.
¹ The term TESCREAL was coined by Timnit Gebru and Émile P. Torres and is an acronym for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. They describe these ideologies as an interconnected movement prevalent in Silicon Valley that uses the specter of human extinction to justify costly or harmful AI-related projects.
² The concept Technoaffection draws from Tecnoafecciones, a project co-developed by Paola Ricaurte Quijano with the feminist digital rights organization Sursiendo in Mexico. It emphasizes the relational and affective nature of technology, showing that technological artifacts are deeply intertwined with human emotions, social relationships, and lived experiences.
16.45-17.45: Leonie Bossert (University of Vienna): Speciesist bias in AI – How AI impacts human-animal relations and what to do about it
Massive efforts are made to reduce biases in both data and algorithms in order to render AI applications fair. However, the AI fairness field, as well as the AI4Good discussion, still suffers from a blind spot: its insensitivity to discrimination against animals. This presentation critically discusses how AI technologies substantially impact both individual animals and the human-animal relation, a discussion that remains somewhat neglected in AI ethics.
The talk will first delve into the premises behind the claim that animals matter morally and that discriminating against them a) is happening and b) is unethical. It will then highlight the various AI applications that impact nonhuman animals, providing examples of direct and indirect, intended and unintended impacts, at both the individual and societal levels, and for farmed, companion, and wild animals. Among these, speciesist biases will be discussed: such biases are solidified by many mainstream AI applications, especially in computer vision and natural language processing.
AI technologies therefore currently play a significant role in perpetuating and normalizing an ethically problematic treatment of animals. Arguments are made to demonstrate that these problematic treatments are linked to power structures and conflict with attempts to use AI in a truly just manner, or truly "for good". The talk closes with thoughts on, and arguments for, how AI technologies can instead be used to benefit animals and to create (more) respectful human-animal relations.
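To make the natural-language-processing point concrete, associations of this kind can be surfaced with a simple probe of a masked language model. The following minimal sketch (Python with the Hugging Face transformers library; the prompts and the choice of bert-base-uncased are illustrative assumptions, not material from the talk) compares the completions a model proposes for different species:

# Minimal, illustrative sketch: probing a masked language model for
# species-dependent associations. Prompts and model choice are assumptions.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    "The pig was bred for [MASK].",
    "The dog was bred for [MASK].",
]

for prompt in prompts:
    print(prompt)
    # Top-5 completions with model scores; systematic differences across
    # species (e.g. slaughter-related vs. companionship-related words)
    # would hint at associations absorbed from the training corpus.
    for pred in unmasker(prompt, top_k=5):
        print(f"  {pred['token_str']:>12}  (score={pred['score']:.3f})")

Such probes do not prove bias on their own, but they illustrate one way speciesist patterns in language models can be made visible.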
17.45-18.00: Coffee Break
18.00-19.30: Mira Reisinger (Leiwand.AI, Vienna) & Janine Vallaster (University of Vienna): Algorithmic Bias: Why "AI" is not for everyone (Workshop)
What does unwanted bias mean in the context of machine learning? In this workshop we will address the challenges of potential discrimination through AI systems from both a technical and a social viewpoint. The aim is to gain a better understanding of what can happen to whom, how and why. We will look at various “points of entry” for bias in the AI system life cycle – including training data, decision-making, and product-team composition.
We will see how inequalities are often already encoded in the training data, why questions such as “Is AI needed for this?” and “What kind of model makes sense here?” matter, and who is included and who is missing (in terms of representation, knowledge, and decision-making power) in the process of building AI systems. After identifying together where things can go wrong, we will look into actionable strategies for countermeasures, providing you with insights into fairness assessment, bias detection, and mitigation strategies.
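To give a flavour of what such a fairness assessment can look like in code, the following minimal sketch (Python with NumPy; the data are synthetic and the decision threshold is an assumption chosen purely for illustration) computes two common group-fairness measures, the demographic-parity difference and the true-positive-rate gap:

# Minimal, illustrative group-fairness check on synthetic data.
# All numbers are made up; nothing here reflects a real system.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)        # protected attribute (0 or 1)
y_true = rng.integers(0, 2, n)       # actual outcomes
# A deliberately skewed "model": slightly higher scores for group 1.
scores = rng.random(n) + 0.1 * group
y_pred = (scores > 0.55).astype(int)

g0, g1 = group == 0, group == 1

# Demographic parity: are positive predictions given at similar rates?
dp_diff = y_pred[g1].mean() - y_pred[g0].mean()

# Equal opportunity: among truly positive cases, are both groups
# recognized equally often?
tpr_g0 = y_pred[g0 & (y_true == 1)].mean()
tpr_g1 = y_pred[g1 & (y_true == 1)].mean()

print(f"demographic parity difference: {dp_diff:+.3f}")
print(f"true positive rate gap:        {tpr_g1 - tpr_g0:+.3f}")

A value near zero on both measures is necessary but not sufficient for fairness; which measure is appropriate depends on the context, which is exactly the kind of question the workshop takes up.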
Attachment 1: https://uni-ak.at/accounts/anhang/IKKK_2025_10_30_11_51_Konferenz_Poster.pdf