A Synthetic Dataset for Personal Attribute Inference
OPEN ACCESS
Date
2024
Publication Type
Conference Paper
ETH Bibliography
yes
Abstract
Recently, powerful Large Language Models (LLMs) have become easily accessible to hundreds of millions of users worldwide. However, their strong capabilities and vast world knowledge do not come without associated privacy risks. In this work, we focus on the emerging privacy threat LLMs pose -- the ability to accurately infer personal information from online texts. Despite the growing importance of LLM-based author profiling, research in this area has been hampered by a lack of suitable public datasets, largely due to ethical and privacy concerns associated with real personal data. We take two steps to address this problem: (i) we construct a simulation framework for the popular social media platform Reddit using LLM agents seeded with synthetic personal profiles; (ii) using this framework, we generate SynthPAI, a diverse synthetic dataset of over 7,800 comments manually labeled for personal attributes. We validate our dataset with a human study showing that humans barely outperform random guessing on the task of distinguishing our synthetic comments from real ones. Further, we verify that our dataset enables meaningful personal attribute inference research by showing across 18 state-of-the-art LLMs that our synthetic comments allow us to draw the same conclusions as real-world data. Combined, our experimental results, dataset and pipeline form a strong basis for future privacy-preserving research geared towards understanding and mitigating the inference-based privacy threats that LLMs pose.
Publication status
published
Book title
Advances in Neural Information Processing Systems 37
Pages / Article No.
120735 - 120779
Publisher
Curran
Event
38th Annual Conference on Neural Information Processing Systems (NeurIPS 2024)
Subject
large language models; privacy; personal attribute inference; synthetic data
Organisational unit
03948 - Vechev, Martin
Notes
Poster presentation on December 12, 2024. Datasets and Benchmarks Track.
Funding
MB22.00088 - SafeAI: Certified Safe, Fair and Robust Artificial Intelligence (SBFI)
Related publications and datasets
Is new version of: 10.48550/arXiv.2406.07217
Is new version of: https://openreview.net/forum?id=1nqfIQIQBf