
【Paper Reading】Knockoff Nets: Stealing Functionality of Black-Box Models (2019)

Source: https://blog.csdn.net/Glass_Gun/article/details/141252395


Abstract

Machine Learning (ML) models are increasingly deployed in the wild to perform a wide range of tasks.
In this work, we ask to what extent an adversary can steal the functionality of such "victim" models based solely on blackbox interactions: image in, predictions out.
In contrast to prior work, we study complex victim blackbox models, and an adversary lacking knowledge of the train/test data used by the model, its internals, and the semantics over model outputs.
We formulate model functionality stealing as a two-step approach: (i) querying a set of input images to the blackbox model to obtain predictions; and (ii) training a "knockoff" with the queried image-prediction pairs.
We make multiple remarkable observations: (a) querying random images from a different distribution than that of the blackbox training data results in a well-performing knockoff.
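
To make the two-step formulation concrete, below is a minimal PyTorch sketch of the query-then-train pipeline. It is an illustration under stated assumptions, not the authors' implementation: `blackbox_predict`, the random transfer set, the ResNet-18 student, and every hyperparameter (`NUM_CLASSES`, `QUERY_BUDGET`, the optimizer settings) are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

NUM_CLASSES = 10      # assumed size of the victim's output space
QUERY_BUDGET = 1000   # assumed number of blackbox queries

def blackbox_predict(images: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for the victim API: image in, predictions out.
    # Replace with real queries to the deployed blackbox model.
    return torch.softmax(torch.randn(images.size(0), NUM_CLASSES), dim=1)

# Step (i): query the blackbox with images that need not come from the
# victim's training distribution (observation (a) in the abstract).
query_images = torch.rand(QUERY_BUDGET, 3, 224, 224)  # placeholder transfer set
with torch.no_grad():
    soft_labels = blackbox_predict(query_images)      # (QUERY_BUDGET, NUM_CLASSES)

# Step (ii): train the knockoff on the queried image-prediction pairs.
# The student architecture may differ from the victim's.
knockoff = models.resnet18(num_classes=NUM_CLASSES)
optimizer = torch.optim.SGD(knockoff.parameters(), lr=0.01, momentum=0.9)
loader = DataLoader(TensorDataset(query_images, soft_labels),
                    batch_size=64, shuffle=True)

knockoff.train()
for epoch in range(10):
    for images, targets in loader:
        logits = knockoff(images)
        # Cross-entropy against the victim's soft predictions,
        # i.e. distillation on the queried image-prediction pairs.
        loss = -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Note that the sketch trains against the victim's full prediction vector (soft labels) rather than the argmax class alone, so each query conveys more information to the knockoff, in the spirit of knowledge distillation.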
