# PIRE: Adversarial Queries for Blocking Content-based Image Retrieval (CBIR)

This repository releases the PyTorch implementation of PIRE from our paper "Who's Afraid of Adversarial Queries? The Impact of Image Modifications on Content-based Image Retrieval".

PIRE generates adversarial examples that block neural-feature-based CBIR.

PIRE has currently been tested on:

  1. The state-of-the-art CNN-based CBIR method GeM [1], using the pre-trained ResNet-101-GeM model and the feature extraction code provided by cnnimageretrieval-pytorch.

  2. An off-the-shelf ResNet-101 pre-trained on ImageNet, with the original `AvgPool2d` replaced by `AdaptiveAvgPool2d` to allow arbitrary input image sizes, and an additional `L2N` layer added for feature normalization.

The code for retrieval performance evaluation (the implementation of the CBIR system itself) is not included.

To generate adversarial queries for different models, specify the parameter `cnnmodel` when running the main file `gen_pire.py`.

## PyTorch implementation of PIRE

### Prerequisites

- Python 3
- PyTorch 1.0.0

Both CPU and GPU are supported. (Code tested with Python 3.6.6 on Ubuntu 16.04.)

## How to use the code

- Clone the repository and put your own image queries in the folder `./img_input/`:

```
git clone https://github.com/liuzrcc/PIRE.git
cd PIRE
```

- To generate adversarial queries with the best-performing PIRE setting in our paper (T = 500), run:

```
python3 gen_pire.py -T "500" -gpu_id "0" -cnnmodel "gem" -in_dir "./img_input/" -out_dir "./img_output/" -p True
```

- To generate adversarial queries for ImageNet pre-trained ResNet-101, run:

```
python3 gen_pire.py -T "500" -gpu_id "0" -cnnmodel "imagenet-res101" -in_dir "./img_input/" -out_dir "./img_output/" -p True
```

- A detailed explanation of PIRE's parameters is available via:

```
python3 gen_pire.py -h
```
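Conceptually, `gen_pire.py` optimizes a perturbation over T iterations so that the perturbed query's neural features move away from the original query's features. The following is a self-contained sketch of that idea only, not the repository's implementation: the tiny random network stands in for the actual GeM/ResNet-101 extractor, and the function name `pire_like` is illustrative:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in feature extractor (PIRE itself uses GeM / ResNet-101 features).
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
net.eval()

def features(x):
    f = net(x)
    return f / f.norm(p=2, dim=1, keepdim=True)  # L2-normalized descriptor

def pire_like(img, T=100, lr=0.05):
    """Gradient-ascent sketch: over T iterations, push the perturbed image's
    features away from the clean image's features, so that nearest-neighbor
    retrieval on those features breaks."""
    target = features(img).detach()
    # Small random init so the distance gradient is nonzero at the start.
    delta = (0.01 * torch.randn_like(img)).requires_grad_()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(T):
        opt.zero_grad()
        loss = -((features(img + delta) - target) ** 2).sum()  # maximize distance
        loss.backward()
        opt.step()
    return (img + delta).detach()

img = torch.rand(1, 3, 32, 32)
adv = pire_like(img)
d = (features(adv) - features(img)).norm().item()
print(f"feature distance after attack: {d:.3f}")
```

A larger T gives the optimizer more steps to push the features apart, which mirrors the role of the `-T` parameter above.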

## Experimental results

Examples of generated adversarial images are shown in the figure below:

*(figure: examples of generated adversarial images)*

The impact of PIRE (with two iteration counts, T = 200 and T = 500) on the retrieval performance (mAP) of GeM [1] on the Oxford5k and Paris6k data sets is shown below. BB (Bounding Box) means only a pre-defined bounding box is used as the query, following the standard evaluation protocol of these two data sets; WI (Whole Image) means the entire image is used as the query.

|                  | Oxford5k (BB/WI) | Paris6k (BB/WI) |
|------------------|------------------|-----------------|
| Original queries | 78.39/74.42      | 87.27/87.26     |
| PIRE (T = 200)   | 22.98/18.00      | 34.49/26.53     |
| PIRE (T = 500)   | 3.93/2.31        | 10.53/7.18      |

The choice of T controls the trade-off between PIRE's adversarial effect and its visual impact on the original query image.
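One generic way to quantify the visual impact side of that trade-off is the peak signal-to-noise ratio (PSNR) between the original and perturbed query; this standalone sketch is not part of the repository, and the two noise levels merely imitate weaker and stronger perturbations:

```python
import torch

torch.manual_seed(0)

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = torch.mean((x - y) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

img = torch.rand(1, 3, 64, 64)
weak = (img + 0.01 * torch.randn_like(img)).clamp(0, 1)    # mimics a small T
strong = (img + 0.05 * torch.randn_like(img)).clamp(0, 1)  # mimics a large T
print(psnr(img, weak).item(), psnr(img, strong).item())    # higher PSNR = less visible
```

The weaker perturbation yields the higher PSNR, i.e. less visible distortion, at the cost of a weaker adversarial effect.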

Examples of ranked lists with their corresponding average precision (AP) are shown below. For each example, both the original result (top row) and the adversarial result (bottom row) are shown:

*(figure: ranked lists before and after PIRE)*

Please cite the following paper if you use PIRE in your research:

```
@inproceedings{pire2019,
  author       = {Zhuoran Liu and Zhengyu Zhao and Martha Larson},
  title        = {Who's Afraid of Adversarial Queries? The Impact of Image Modifications on Content-based Image Retrieval},
  booktitle    = {ACM International Conference on Multimedia Retrieval (ICMR)},
  year         = {2019},
  organization = {ACM}
}
```

The copyright of all the images belongs to the image owners.

## References

[1] Radenović, Filip, Giorgos Tolias, and Ondrej Chum. "Fine-tuning CNN image retrieval with no human annotation." IEEE Transactions on Pattern Analysis and Machine Intelligence (2018).
