Léa Steinacker is a journalist, moderator, and the Co-Founder and Chief Operations Officer of ada, an education tech start-up that equips workers with the mindset, knowledge, and skills for the future. She oversees strategy, develops live experiences, and shapes content curation for the platform. Previously, Léa served as the Chief Innovation Officer of WirtschaftsWoche, Germany's leading business magazine, where she covered the future of work and socio-technological change.
Prior to joining Handelsblatt Media Group, Léa worked with social justice NGOs in Bosnia-Hercegovina, Rwanda, and the Democratic Republic of Congo. She was selected as a Forbes 30 Under 30 leader, one of Medium Magazine’s Top 30 Under 30 journalists, an Atlantik Bruecke Young Leader, and a Leader of Tomorrow by the St. Gallen Symposium. In 2011, she was awarded the Henry Richardson Labouisse Prize.
Léa holds degrees from Princeton University and the Harvard Kennedy School of Government. She is a PhD candidate at the University of St. Gallen under the supervision of Prof. Dr. Damian Borth (Artificial Intelligence & Machine Learning) and Prof. Dr. Veronica Barassi (Digital Anthropology, Media & Communication).
With rapid advances in machine learning (ML), more of this technology is being deployed into the real world, interacting with us and our environment. One of the most widely deployed applications of ML is facial recognition, which runs on millions of devices. While useful to some people, others perceive it as a threat when used by public authorities. This discrepancy, together with the lack of policy, increases uncertainty in the ML community about the future direction of facial recognition research and development. In this paper we present results from a cross-national survey on public acceptance of, and privacy and discrimination concerns about, the use of facial recognition technology (FRT) in public. The study provides insights into opinions on FRT in China, Germany, the United Kingdom (UK), and the United States (US), which can serve as input for policy makers and legal regulators.
How does the public perceive facial recognition technology, and how far do they accept it in different political contexts? Based on online surveys representative of the Internet-connected populations of China, Germany, the United Kingdom, and the United States, our study finds that facial recognition technology enjoys the highest acceptance among respondents in China, the lowest in Germany, with the United Kingdom and the United States in between. A closer examination through the lens of an integrated technology acceptance model reveals notable variations across the four countries based on, among other things, socio-demographic factors as well as the perceived consequences, usefulness, and reliability of facial recognition technology. While previous research has pointed out that facial recognition technology is an instrument of state surveillance and control, this study shows that surveillance and control are not foremost on the minds of citizens in China, Germany, the United Kingdom, and the United States; rather, notions of convenience and improved security are.
In late 2019, the gravest pandemic in a century began spreading across the world. A state of uncertainty related to what has become known as SARS-CoV-2 has since fueled conspiracy narratives on social media about the origin and transmission of, as well as the medical treatment of and vaccination against, the resulting disease, COVID-19. Using social media intelligence to monitor and understand the proliferation of conspiracy narratives is one way to analyze the spread of misinformation about the pandemic. We analyzed more than 9.5M German-language tweets about COVID-19. The results show that only about 0.6% of all those tweets deal with conspiracy narratives. We also found that the political orientation of users correlates with the volume of content they contribute to the dissemination of conspiracy narratives, implying that partisan communicators have a higher motivation to take part in conspiratorial discussions on Twitter. Finally, we show that, contrary to other studies, automated accounts do not significantly influence the spread of misinformation in the German-speaking Twittersphere; they represent only about 1.31% of all conspiracy-related activities in our database.
This article examines China’s Social Credit System to illustrate how information and communication technologies bring forth new forms of interaction between the state and citizens. In particular, it asks how the transparency generated by the Social Credit System enables new forms of social control, trust, and self-regulation. For this purpose, the study frames the analysis with Foucault’s model of disciplinary power and provides an empirical description of the Social Credit System that focuses on the system’s basic design elements and on the political intentions behind its implementation. The article then suggests a heuristic framework of transparency as an analytical structure to chart the emerging configuration of government-citizen relations. The study finds that the system increases the control of the single-party state over society, likely diminishes trust, and reduces the freedom to act. However, compared to the clientelism and arbitrary decision-making of previous decades, the precise and depersonalized standards of the Social Credit System can be seen as an improvement that strengthens the capacity of individual actors to self-regulate. This theoretical and analytical approach thus adds to the debate about how government through algorithms rearranges practices of state power and control.