Researchers Introduce Socio-Technical Approach to AI Threat Modeling

By Casey Moffitt

A new, comprehensive framework that assesses artificial intelligence systems across technical, ethical, and legal dimensions has been published by a team that includes two Illinois Tech researchers.

Illinois Tech faculty members Ann Rangarajan, assistant professor of information technology and management, and Saran Ghatak, professor and chair of the Department of Humanities, Arts, and Social Sciences, worked with external collaborators to publish "STRIFE: A Socio-Technical Framework for Threat Modeling of Artificial Intelligence Systems" in the International Journal of Intelligent Information Technologies. The framework should help developers, engineers, and users preemptively identify unintended consequences of AI systems.

The paper introduces a methodology for proactively and systematically identifying and addressing AI threats throughout a system's lifecycle, including biased outcomes, privacy violations, psychological harm, facilitation of mass surveillance, and the creation of environmental hazards.

"AI systems operate within complex social, organizational, and cultural contexts that fundamentally shape how risks emerge," Ghatak says. "STRIFE recognizes that threats to AI systems often originate not from technical failures alone, but from the broader ecosystem of human users, institutional policies, and societal expectations."

These complex ecosystems create threats that extend well beyond technical vulnerabilities such as algorithmic bias and adversarial attacks. Ethical concerns, such as threats to sustainability and inclusion, and legal challenges around intellectual property and reasonableness standards, also emerge from the dynamics of human-AI interaction.

STRIFE provides the first comprehensive threat modeling approach specifically designed for the characteristics and risks of AI systems.

"The real innovation of STRIFE lies in its systematic approach to addressing AI threats through domain-specific terminology, which speaks directly to different professional communities," Rangarajan says. "This comprehensive approach ensures that AI threat modeling becomes as fundamental to AI system development as traditional threat modeling has become for conventional software systems."

The framework deliberately enables interdisciplinary research collaboration by bringing together computer scientists, social scientists, ethicists, and legal scholars to investigate these risks from multiple perspectives, as AI systems operate at the intersection of technology, society, and regulation.

The STRIFE acronym itself takes on a different expansion in each dimension. In technical contexts, STRIFE represents safety, transparency, reliability, interpretability, fairness, and explainability. From an ethical perspective, it encompasses sustainability, trust, reproducibility, inclusion, freedom, and equity. In legal contexts, STRIFE addresses sovereignty of technology, tort law, reasonableness, intellectual property, future of the legal profession, and extra-contractual liability.
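As an illustration only, the dimension-specific expansions described above can be pictured as a simple lookup structure. The Python sketch below is a hypothetical rendering of that mapping drawn from the terms listed in this article; it is not part of the published framework or any accompanying tooling.

```python
# Hypothetical sketch: the STRIFE acronym expanded per dimension,
# using the terms listed in the article. Illustrative only.
STRIFE_DIMENSIONS = {
    "technical": ["safety", "transparency", "reliability",
                  "interpretability", "fairness", "explainability"],
    "ethical": ["sustainability", "trust", "reproducibility",
                "inclusion", "freedom", "equity"],
    "legal": ["sovereignty of technology", "tort law", "reasonableness",
              "intellectual property", "future of the legal profession",
              "extra-contractual liability"],
}

# Example use: enumerate the threat categories a review team might walk through.
for dimension, categories in STRIFE_DIMENSIONS.items():
    print(f"{dimension}: {', '.join(categories)}")
```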

The research demonstrates how this comprehensive threat modeling approach can significantly enhance the effectiveness of existing risk management standards. The framework strategically integrates with the four core functions outlined in the National Institute of Standards and Technology (NIST) AI Risk Management Framework: govern, map, measure, and manage. 

STRIFE adds a fifth function: mediate, which serves as a foundational threat modeling extension. This integration allows organizations to systematically operationalize NIST’s core functions while addressing the unique socio-technical nature of AI threats.
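To make that integration concrete, one could imagine an assessment checklist that pairs NIST's four core functions with the added mediate step. The sketch below is purely illustrative; the ordering (mediate first, as the threat modeling foundation) is an assumption for the example and is not taken from either framework's official guidance.

```python
# Hypothetical sketch: NIST AI RMF core functions plus STRIFE's "mediate"
# extension, modeled as an ordered checklist an organization might walk through.
NIST_AI_RMF_FUNCTIONS = ["govern", "map", "measure", "manage"]
STRIFE_EXTENSION = ["mediate"]  # foundational threat modeling step added by STRIFE


def assessment_plan():
    """Yield the combined sequence of functions, with mediate placed first
    as the threat modeling foundation (an assumption for illustration)."""
    yield from STRIFE_EXTENSION + NIST_AI_RMF_FUNCTIONS


if __name__ == "__main__":
    for step in assessment_plan():
        print(f"- {step}")
```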

"While the NIST AI Risk Management Framework provides essential guidance for trustworthy AI, practitioners often struggle with how to implement such principles in specific contexts," Rangarajan says. "Our framework systematically guides threat identification across technical dimensions such as safety and transparency, ethical considerations including trust and inclusion, and legal factors such as reasonableness and intellectual property, because AI risks emerge from the complex interactions between technology, human behavior, and societal structures."