Student Research Lagniappe
1.23.2026
11:30 AM – 1:30 PM | PFT 1246
Unmasking Privacy Pitfalls: How LLMs Trace Data Flows in Mobile Apps
Abstract
As the mobile application ecosystem continues to grow rapidly, safeguarding user privacy has become more important than ever. Mobile applications frequently collect and share sensitive data such as location, audio, and contact information. Traditional data-flow analysis methods rely on complex rule-based systems, which limits their flexibility and scalability.
To address these challenges, we introduce an AI-driven approach that integrates large language models (LLMs) to trace user data flows through reasoning, generate data-flow call graphs, and determine whether data leakage occurs. This approach enhances both the accuracy and interpretability of privacy analysis in the mobile application domain, offering a more scalable and adaptable solution for identifying privacy risks.
Mst Eshita Khatun
Louisiana State University
Two Decades of DFRWS: A Longitudinal Analysis of Digital Forensics Research (2002–2025) Using LLMs
Abstract
The Digital Forensic Research Workshop (DFRWS) is the longest-running venue dedicated exclusively to digital forensics, providing a unique vantage point for observing how the field has evolved. While previous bibliometric studies have focused on isolated regions or subdomains, this work presents the first longitudinal analysis encompassing the full DFRWS ecosystem across the USA, EU, and APAC venues (2002–2025). Using a hybrid pipeline that combines Large Language Model (LLM)-assisted metadata extraction with manual validation, we analyze N=527 papers to map research evolution and tool practices. Our findings show that approximately 98% of studies are investigator-supportive, with an approximately 52:1 ratio of digital forensic to anti-forensic research, and that DFRWS authors created 396 of the 414 newly created tools (95.7%), establishing DFRWS as a hub for community-driven software. Trend analysis highlights IoT forensics as a late-emerging area, while AI remains largely applied as a supporting instrument rather than as a primary forensic target. This longitudinal study offers researchers a structured baseline for situating future work, provides practitioners and developers with insights into tool adoption and research gaps, and supports policymakers and educators in aligning training and funding priorities with the evolving trajectory of digital forensic science.
Roohana Karim
Louisiana State University
Evaluating Users’ (Un)Confirmed Privacy Expectations in Smart Homes Through Expectation Confirmation Theory
Abstract
This presentation examines how Expectation-Confirmation Theory (ECT) explains users’ attitudes and behaviors toward smart home devices when their privacy expectations are either confirmed or violated. Using data from a real-world, in-situ study involving a network-monitoring tool that reveals actual data transmissions of smart devices, the research explores how users react upon learning about their devices’ data-sharing practices.
According to ECT, when a device’s data practices align with users’ initial trust expectations, users experience higher satisfaction and a stronger intention to continue using the device. Conversely, when expectations are violated, users exhibit lower satisfaction and adopt coping behaviors, such as a desire to block certain data disclosures.
The findings demonstrate that trust confirmation indirectly shapes behavioral intentions through satisfaction, highlighting satisfaction as the key mediating factor between trust and continued use of smart home technologies.
Tania Khatun
Louisiana State University