Saliency Inference Attack

Description: Recent studies demonstrate that Machine Learning (ML) models are vulnerable to information-stealing attacks. Such attacks first query a target ML model with a dataset and collect its responses. The resulting query-response pairs are then used to train an attack model whose goal is to infer information about the target ML model.
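
To make the query phase of this pipeline concrete, here is a minimal PyTorch sketch; `target_model` and `query_loader` are hypothetical placeholders for the (black-box) target model and the querying dataset, not part of any referenced implementation.

```python
import torch

@torch.no_grad()
def collect_query_response_pairs(target_model, query_loader, device="cpu"):
    """Query the target model and store (query, response) pairs,
    which later serve as training data for the attack model."""
    target_model.eval().to(device)
    pairs = []
    for images, _ in query_loader:  # assumed to yield (input, label) batches
        responses = target_model(images.to(device))
        pairs.append((images.cpu(), responses.cpu()))
    queries = torch.cat([q for q, _ in pairs])
    responses = torch.cat([r for _, r in pairs])
    return queries, responses
```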

Goal:
Perform a new side-channel attack for model information stealing: given a trained model, infer information about its SalChartQA training dataset, such as saliency maps, visualisation type, questions, and answers.
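
As a rough illustration of what such an attack model could look like, the sketch below trains a small PyTorch classifier on collected target-model responses to predict one dataset attribute (here, the visualisation type). All names, dimensions, and the single-attribute head are illustrative assumptions, not a prescribed design.

```python
import torch
import torch.nn as nn

class AttackModel(nn.Module):
    """Maps a flattened target-model response vector to an inferred attribute."""
    def __init__(self, response_dim, num_vis_types):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(response_dim, 256), nn.ReLU(),
            nn.Linear(256, num_vis_types),  # one head per inferred attribute
        )

    def forward(self, responses):
        return self.net(responses)

def train_attack_model(model, responses, labels, epochs=10, lr=1e-3):
    # Full-batch training on the collected query-response pairs.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(responses), labels)
        loss.backward()
        opt.step()
    return model
```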

Supervisor: Mayar Elfares and Yao Wang

Distribution: 20% literature, 60% implementation, 20% analysis and discussion

Requirements: Strong Python programming skills, preferably with PyTorch or TensorFlow.

Literature:

Zhang et al. A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots. USENIX Security, 2023.

Masry et al. ChartQA: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning. Findings of the Association for Computational Linguistics: ACL, 2022.

Wang et al. SalChartQA: Question-driven Saliency on Information Visualisations. ACM CHI, 2024.