Performance overview

| Method | VIPP | IMD2020 | DSO-1 | OpenForensics | FaceSwap | Coverage | NC16 | Columbia | CASIA |
|---|---|---|---|---|---|---|---|---|---|
| ADQ1 | 0.50 | 0.29 | 0.42 | 0.48 | 0.28 | 0.21 | 0.21 | 0.40 | 0.49 |
| ADQ2 | 0.57 | 0.45 | 0.53 | 0.68 | 0.43 | - | - | - | - |
| BLK | 0.43 | 0.26 | 0.46 | 0.26 | 0.11 | 0.24 | 0.23 | - | - |
| CAGI | 0.44 | 0.30 | 0.51 | 0.29 | 0.18 | 0.30 | 0.29 | - | - |
| DCT | 0.43 | 0.31 | 0.35 | 0.42 | 0.19 | 0.22 | 0.18 | - | - |
| Comprint | 0.50 | 0.30 | 0.76 | 0.63 | 0.35 | 0.35 | 0.40 | - | - |
| Noiseprint | 0.56 | 0.40 | 0.81 | 0.67 | 0.35 | 0.33 | 0.41 | 0.84 | 0.21 |
| Comprint+Noiseprint | 0.58 | 0.44 | 0.81 | 0.71 | 0.41 | 0.37 | 0.44 | - | - |
| CAT-Net | 0.72 | 0.85 | 0.68 | 0.95 | 0.45 | 0.57 | 0.49 | 0.92 | 0.85 |
| TruFor | 0.75 | - | 0.97 | 0.90 | - | 0.74 | 0.47 | 0.91 | 0.82 |
| FusionIDLab | 0.73 | - | 0.75 | - | - | 0.54 | 0.51 | - | - |

The table reports performance as the F1 score, which balances precision and recall. A higher score is better, and a perfect score would be 1.0.
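
The F1 score is the harmonic mean of precision and recall. The snippet below (with illustrative values, not taken from the table) shows how it penalizes an imbalance between the two:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values: high recall cannot compensate for low precision.
print(f1_score(0.9, 0.5))  # ~0.64
print(f1_score(0.7, 0.7))  # 0.70
```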

In general, newer AI-based methods (at the bottom of the table, e.g., CAT-Net, TruFor & FusionIDLab) demonstrate better performance than older methods (at the top of the table, e.g., BLK, CAGI & DCT). However, there are large performance variations across datasets. Thus, in practice, no method is perfect, and there is still a lot of room for improvement.

Source: Comprint, TruFor, FusionIDLab


An explanation and illustrative examples of how to interpret the methods' output can be found here.

ADQ1

ADQ1 (2009) is based on Aligned Double Quantization detection, using the DCT coefficient distribution.
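
As a rough illustration of the underlying idea (not the released pyIFD implementation, which produces a per-block probability map): double quantization leaves a periodic pattern in the histogram of a DCT subband, which shows up as a strong peak in the histogram's Fourier spectrum.

```python
import numpy as np
from scipy.fft import dctn

def dq_periodicity_score(gray: np.ndarray, subband=(0, 1)) -> float:
    """Crude global double-quantization score for one DCT subband.

    gray: 2-D float array (grayscale image) with values in [0, 255].
    """
    h, w = gray.shape
    coeffs = []
    for y in range(0, h - 7, 8):            # walk the 8x8 JPEG grid
        for x in range(0, w - 7, 8):
            block = dctn(gray[y:y+8, x:x+8] - 128.0, norm='ortho')
            coeffs.append(block[subband])
    hist, _ = np.histogram(coeffs, bins=np.arange(-50.5, 51.5, 1.0))
    spectrum = np.abs(np.fft.rfft(hist - hist.mean()))
    # A pronounced non-DC peak suggests a periodic (double-quantized) histogram.
    return float(spectrum[1:].max() / (spectrum[1:].mean() + 1e-9))
```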

Paper: Fast, automatic and fine-grained tampered JPEG image detection via DCT coefficient analysis
Code: pyIFD GitHub & MKLab-ITI GitHub
Authors: Zhouchen Lin 1, Junfeng He 2, Xiaoou Tang 1, Chi-Keung Tang 2
1 Microsoft Research Asia, Beijing, China
2 The Hong Kong University of Science and Technology, Hong Kong, China

ADQ2

ADQ2 (2011) is based on Aligned Double Quantization detection, and first estimates the quantization table of the previous compression. Works for JPEG files only.

Paper: Improved DCT coefficient analysis for forgery localization in JPEG images
Code: pyIFD GitHub & MKLab-ITI GitHub
Authors: Tiziano Bianchi, Alessia De Rosa, Alessandro Piva
University of Florence, Department of Electronics and Telecommunications, Florence, Italy

ADQ3

ADQ3 (2014) is based on Aligned Double Quantization detection, and uses an SVM trained on the first-digit distribution of DCT coefficients to distinguish single from double compression. Works for JPEG files only.
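
A minimal sketch of that idea, using simplified single-pass features (the paper combines first-digit histograms from several DCT subbands); the training data below is a placeholder for illustration only:

```python
import numpy as np
from sklearn.svm import SVC

def first_digit_features(dct_coeffs: np.ndarray) -> np.ndarray:
    """9-bin histogram of the leading digits (1-9) of non-zero DCT coefficients."""
    nz = np.abs(dct_coeffs[dct_coeffs != 0]).astype(np.float64)
    leading = (nz / 10 ** np.floor(np.log10(nz))).astype(int)
    hist = np.bincount(leading, minlength=10)[1:10]
    return hist / max(hist.sum(), 1)

# Placeholder data: in practice X would hold features from singly (y=0) and
# doubly (y=1) compressed JPEG patches.
rng = np.random.default_rng(0)
X = np.vstack([first_digit_features(rng.integers(-80, 80, 4096)) for _ in range(200)])
y = rng.integers(0, 2, 200)
clf = SVC(kernel='rbf').fit(X, y)
```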

Paper: Splicing forgeries localization through the use of first digit features
Code: pyIFD GitHub & MKLab-ITI GitHub
Authors: Irene Amerini 1, Rudy Becarelli 1, Roberto Caldelli 1, 2, Andrea Del Mastio 1
1 Media Integration and Communication Center (MICC), University of Florence, Florence, Italy
2 National Interuniversity Consortium for Telecommunications (CNIT), Florence, Italy

BLK

BLK (2008) looks for mismatches in the block artifact grid of JPEG compression.
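
A simplified sketch of the block artifact grid idea, assuming that pixel differences across 8x8 block boundaries are slightly stronger than inside blocks; a pasted region whose grid is shifted then shows a different local phase:

```python
import numpy as np

def grid_phase(gray: np.ndarray) -> tuple[int, int]:
    """Estimate the (row, column) phase of the 8x8 JPEG blocking grid."""
    dy = np.abs(np.diff(gray.astype(float), axis=0)).sum(axis=1)  # row boundary strength
    dx = np.abs(np.diff(gray.astype(float), axis=1)).sum(axis=0)  # column boundary strength
    row_phase = int(np.argmax([dy[i::8].mean() for i in range(8)]))
    col_phase = int(np.argmax([dx[i::8].mean() for i in range(8)]))
    return row_phase, col_phase

# Comparing grid_phase() on local windows with the global phase can expose a
# spliced patch whose blocking grid is misaligned with the host image.
```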

Paper: Passive detection of doctored JPEG image via block artifact grid extraction
Code: pyIFD GitHub & MKLab-ITI GitHub
Authors: Weihai Li 1, Yuan Yuan 2, Nenghai Yu 1
1 MOE-Microsoft Key Laboratory of Multimedia Computing and Communication, University of Science and Technology of China, Hefei, Anhui, China
2 School of Engineering and Applied Science, Aston University, Birmingham, UK

CAGI

CAGI (2018) stands for Content-Aware detection of Grid Inconsistencies. It looks for mismatches in the blocking artifact grid of JPEG compression and applies content-aware filtering to suppress false activations.
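
The sketch below only illustrates the "content-aware" intuition (grid evidence is unreliable in highly textured areas); the texture weighting here is a made-up example, not the content characterization used in the paper:

```python
import numpy as np

def content_weighted(grid_evidence: np.ndarray, gray: np.ndarray) -> np.ndarray:
    """Down-weight grid-inconsistency evidence where local texture is strong."""
    gy, gx = np.gradient(gray.astype(float))
    texture = np.hypot(gx, gy)
    weight = 1.0 / (1.0 + texture / (texture.mean() + 1e-9))
    return grid_evidence * weight   # assumes both maps have the same shape
```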

Paper: Content-aware detection of JPEG grid inconsistencies for intuitive image forensics
Code: pyIFD GitHub & MKLab-ITI GitHub
Authors: Chryssanthi Iakovidou, Markos Zampoglou, Symeon Papadopoulos, Yiannis Kompatsiaris
Information Technologies Institute (ITI), Centre for Research and Technology Hellas (CERTH), Thessaloniki, Greece

DCT

DCT (2007) looks for inconsistencies of JPEG blocking artifacts, using a quantization table estimated from the power spectrum of the DCT coefficient histogram.
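
A rough sketch of the quantization-step estimation the method builds on: a subband quantized with step q produces histogram peaks every q values, i.e. a peak near frequency 1/q in the histogram's power spectrum (the full method estimates an entire table and then scores per-block blocking-artifact inconsistencies):

```python
import numpy as np

def estimate_q_step(subband_coeffs: np.ndarray, max_q: int = 30) -> int:
    """Estimate the quantization step of one DCT subband from its histogram."""
    hist, _ = np.histogram(subband_coeffs, bins=np.arange(-100.5, 101.5, 1.0))
    power = np.abs(np.fft.rfft(hist - hist.mean())) ** 2
    freqs = np.fft.rfftfreq(hist.size, d=1.0)
    # Pick the candidate step whose corresponding spectral peak is strongest.
    candidates = [(power[np.argmin(np.abs(freqs - 1.0 / q))], q)
                  for q in range(2, max_q + 1)]
    return max(candidates)[1]
```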

Paper: Detecting Digital Image Forgeries by Measuring Inconsistencies of Blocking Artifact
Code: pyIFD GitHub & MKLab-ITI GitHub
Authors: Shuiming Ye 1, 2, Qibin Sun 1, Ee-Chien Chang 2
1 Institute for Infocomm Research, Singapore
2 School of Computing, National University of Singapore, Singapore

Comprint

Comprint (2022) is an image manipulation detection and localization method that uses the comprint, a compression fingerprint representing the JPEG compression artifacts.
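
A minimal, hypothetical post-processing sketch: once the comprint (a residual image) has been extracted by the CNN, regions whose local fingerprint statistics deviate from the rest of the image can be highlighted. The localization step in the paper is more elaborate; this only illustrates the "inconsistent fingerprint statistics" idea.

```python
import numpy as np

def anomaly_heatmap(fingerprint: np.ndarray, block: int = 32) -> np.ndarray:
    """Z-score of local fingerprint variance, computed block by block."""
    h, w = fingerprint.shape
    feats = np.array([[fingerprint[y:y+block, x:x+block].var()
                       for x in range(0, w - block + 1, block)]
                      for y in range(0, h - block + 1, block)])
    return np.abs(feats - feats.mean()) / (feats.std() + 1e-9)
```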

Paper: Comprint: Image Forgery Detection and Localization using Compression Fingerprints
Code: Comprint GitHub
Website: Comprint
Authors: Hannes Mareen 1, Dante Vanden Bussche 1, Fabrizio Guillaro 2, Davide Cozzolino 2, Glenn Van Wallendael 1, Peter Lambert 1, Luisa Verdoliva 2
1 IDLab, Ghent University - imec, Belgium
2 Image Processing Research Group (GRIP), University Federico II of Naples, Italy

Noiseprint

Noiseprint (2019) is an image manipulation detection and localization method that uses the noiseprint, a camera model fingerprint representing the image acquisition artifacts.

Paper: Noiseprint: a CNN-based camera model fingerprint
Code: Noiseprint GitHub
Authors: Davide Cozzolino, Luisa Verdoliva
University Federico II of Naples, Italy

Comprint+Noiseprint

Comprint+Noiseprint (2022) combines the fingerprints of Comprint and Noiseprint (see above), and generates a combined heatmap.
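
For illustration only: the fusion in the paper is not necessarily this simple, but the easiest way to see why combining helps is at the heatmap level, where evidence that both fingerprints agree on is reinforced.

```python
import numpy as np

def fuse_heatmaps(h1: np.ndarray, h2: np.ndarray) -> np.ndarray:
    """Average two heatmaps after min-max normalization to [0, 1]."""
    def norm(h):
        return (h - h.min()) / (h.max() - h.min() + 1e-9)
    return 0.5 * (norm(h1) + norm(h2))
```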

Paper: Comprint: Image Forgery Detection and Localization using Compression Fingerprints
Code: Comprint GitHub
Website: Comprint
Authors: Hannes Mareen 1, Dante Vanden Bussche 1, Fabrizio Guillaro 2, Davide Cozzolino 2, Glenn Van Wallendael 1, Peter Lambert 1, Luisa Verdoliva 2
1 IDLab, Ghent University - imec, Belgium
2 Image Processing Research Group (GRIP), University Federico II of Naples, Italy

CAT-Net

CAT-Net (v2, 2022) is an image manipulation detection and localization method that jointly uses image acquisition artifacts and compression artifacts. CAT-Net stands for Compression Artifact Tracing Network. It significantly outperforms traditional and deep neural network-based methods in detecting and localizing tampered regions.

Paper: Learning JPEG Compression Artifacts for Image Manipulation Detection and Localization
Code: CATNet GitHub
Authors: Myung-Joon Kwon 1, Seung-Hun Nam 2, In-Jae Yu 3, Heung-Kyu Lee 4, Changick Kim 1
1 School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
2 NAVER WEBTOON AI, Seongnam, South Korea
3 Visual Display Business, Samsung Electronics Co., Ltd., Suwon, South Korea
4 School of Computing, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea

TruFor

TruFor (2023) is a forensic framework that can be applied to a large variety of image manipulation methods, from classic cheapfakes to more recent manipulations based on deep learning. It is based on both high-level (RGB) and low-level (Noiseprint++) features.

Noiseprint++ is a learned noise residual that improves on the authors' earlier Noiseprint (see above). It is a fingerprint that captures traces related to both the camera model and the editing history of the image. Inconsistencies between authentic and tampered regions may become visible in the Noiseprint++.

To reduce the impact of false alarms, TruFor additionally estimates a confidence map. Errors in the anomaly map are corrected by the confidence map, drastically improving the final detection score. In the confidence map, dark areas signify low confidence and bright areas signify high confidence.
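
As a sketch of the mechanism only (the pooling in TruFor itself is learned, so this weighted average is purely illustrative), a confidence map can temper the anomaly map when pooling it into an image-level score:

```python
import numpy as np

def weighted_detection_score(anomaly: np.ndarray, confidence: np.ndarray) -> float:
    """Down-weight anomalies in low-confidence (dark) regions before pooling."""
    return float((anomaly * confidence).sum() / (confidence.sum() + 1e-9))
```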

Paper: TruFor: Leveraging all-round clues for trustworthy image forgery detection and localization
Code: TruFor GitHub
Website: TruFor
Authors: Fabrizio Guillaro 1, Davide Cozzolino 1, Avneesh Sud 2, Nicholas Dufour 2, Luisa Verdoliva 1
1 University Federico II of Naples, Italy
2 Google Research

FOCAL

FOCAL (2023) stands for FOrensic ContrAstive cLustering. Specifically, FOCAL extracts features from an image (using extractors trained with contrastive learning) and then clusters them within that image in an unsupervised way, thereby avoiding bias from the training set. Additionally, the detection performance is boosted by fusing two versions of FOCAL (i.e., combining a ViT and an HRNet feature extractor). FOCAL demonstrated significantly better performance than state-of-the-art methods in 2023.
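
A sketch of the clustering step only, assuming per-pixel features from the contrastively trained extractors are already available, and using scikit-learn's k-means as a stand-in for the clustering algorithm used in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_forgery_mask(features: np.ndarray) -> np.ndarray:
    """features: (H, W, C) per-pixel features -> boolean forgery mask (H, W)."""
    h, w, c = features.shape
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(features.reshape(-1, c))
    forged = np.argmin(np.bincount(labels))      # minority cluster = forged
    return (labels == forged).reshape(h, w)
```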

Note: the FOCAL method may give incorrect results when a bug on the server disables the GPU. In that case, a warning appears in the log ("Warning: FOCAL is run on CPU, which may lead to incorrect results."), which you can see by clicking 'Show more info' at the top of a result page. Incorrect FOCAL heatmaps are recognizable by the right and bottom borders of the image being highlighted in red, whereas the rest of the heatmap is blue.

Paper: Rethinking Image Forgery Detection via Contrastive Learning and Unsupervised Clustering (preprint)
Code: FOCAL GitHub
Authors: Haiwei Wu, Yiming Chen, Jiantao Zhou
State Key Laboratory of Internet of Things for Smart City, Department of Computer and Information Science, University of Macau, Macau, China

FusionIDLab

FusionIDLab (2023) combines the outputs of ADQ1, BLK, CAGI, DCT, Comprint, Noiseprint, Comprint+Noiseprint, and CAT-Net. By fusing these methods into a single heatmap, it may become easier to draw conclusions. The fusion is learned with a machine-learning approach based on the Pix2Pix architecture.
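
For illustration only (the generator itself is not shown): the individual heatmaps can be stacked into one multi-channel input image for a Pix2Pix-style network that outputs the fused heatmap. The normalization below is an assumption for the sketch, not taken from the paper.

```python
import numpy as np

def stack_heatmaps(heatmaps: list[np.ndarray]) -> np.ndarray:
    """Normalize each heatmap to [0, 1] and stack along the channel axis."""
    norm = [(h - h.min()) / (h.max() - h.min() + 1e-9) for h in heatmaps]
    return np.stack(norm, axis=-1)    # shape: (H, W, number_of_methods)
```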

Paper: Harmonizing Image Forgery Detection & Localization: Fusion of Complementary Approaches
Code: FusionIDLab GitHub
Authors: Hannes Mareen, Louis De Neve, Peter Lambert, Glenn Van Wallendael
IDLab, Ghent University - imec, Belgium