
Information about an image's source camera model is important in many forensic investigations. In this paper we propose a system that compares two image patches to determine whether they were captured by the same camera model. To do this, we first train a CNN-based feature extractor to output generic, high-level features that encode information about the source camera model of an image patch. Then, we learn a similarity measure that maps pairs of these features to a score indicating whether the two image patches were captured by the same or different camera models.
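The pairwise-comparison pipeline described above can be sketched as follows. This is a toy illustration only: the trained CNN is replaced by a hypothetical hand-crafted statistics extractor, and the learned similarity measure by plain cosine similarity, neither of which is the paper's actual method.

```python
import numpy as np

def extract_features(patch):
    """Stand-in for the trained CNN feature extractor (hypothetical).

    Returns a small vector of patch statistics in place of the
    high-level, model-specific features a learned CNN would produce.
    """
    flat = patch.ravel().astype(np.float64)
    return np.array([flat.mean(), flat.std(), np.abs(np.diff(flat)).mean()])

def same_model_score(f1, f2):
    """Placeholder similarity measure: cosine similarity of two feature
    vectors (the paper learns this mapping from data instead)."""
    denom = np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12
    return float(f1 @ f2) / denom
```

In the real system the score would be thresholded (or calibrated) to decide "same camera model" versus "different camera models" for a patch pair.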


Over the years, the forensic community has developed a series of very accurate camera attribution algorithms that can detect, with outstanding results, which device was used to acquire an image. Many of these methods are based on photo-response non-uniformity (PRNU), which allows tracing a picture back to the camera used to shoot it. However, when privacy is required, it would be desirable to anonymize photos by unlinking them from their specific device. This paper investigates a new, alternative approach to the image anonymization task.
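As a rough illustration of the PRNU attribution idea this abstract builds on, the sketch below estimates a camera fingerprint by averaging noise residuals over several images and links a query image to it via normalized cross-correlation. The mean-filter denoiser and synthetic data are simplifying assumptions; real PRNU pipelines use wavelet-based denoising and more careful normalization.

```python
import numpy as np

def noise_residual(img):
    # Crude denoiser stand-in: 3x3 mean filter (real pipelines use
    # wavelet denoising). The residual carries the sensor noise pattern.
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    smooth = sum(
        padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)
    ) / 9.0
    return img - smooth

def estimate_fingerprint(images):
    # Average residuals over many images from the same camera so the
    # shared PRNU pattern survives while scene content averages out.
    return np.mean([noise_residual(im) for im in images], axis=0)

def ncc(a, b):
    # Normalized cross-correlation used to link a residual to a fingerprint.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Anonymization, the goal of the paper, amounts to suppressing this detectable pattern so the correlation with the true camera's fingerprint drops to chance level.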


Given a query image or video, or a known camera fingerprint, existing tools lack the capability to quickly identify, from a large repository of images and videos, the media that match the query fingerprint.
This work introduces a new approach that improves the computational efficiency of pairwise camera-fingerprint matching and incorporates group testing to make the search more effective.
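A minimal sketch of how group testing can accelerate fingerprint search, under the assumed scheme of first correlating the query residual against composite (summed) group fingerprints and only then testing the members of promising groups; the paper's actual grouping and scoring strategy may differ.

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two flattened residuals.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def group_test_search(query_res, fingerprints, group_size=4, tau=0.2):
    """Two-stage search (illustrative parameters):
    pass 1 correlates the query against the SUM of each group's
    fingerprints (one correlation per group instead of per camera);
    pass 2 exhaustively tests only groups that exceed the threshold."""
    hits = []
    for start in range(0, len(fingerprints), group_size):
        group = fingerprints[start:start + group_size]
        composite = np.sum(group, axis=0)
        if ncc(query_res, composite) > tau:
            for i, fp in enumerate(group, start=start):
                if ncc(query_res, fp) > tau:
                    hits.append(i)
    return hits
```

When matches are rare, most groups fail the composite test, so the number of correlations drops from one per camera toward one per group plus a few follow-ups.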


Scientific interest in automated abandoned-object detection algorithms using visual information is high, and many related systems have been published in recent years. However, most evaluation techniques rely only on statistical evaluation at the object level.


Copy-Move Forgery Detection (CMFD) is a well-studied image forensics problem. However, CMFD with Similar but Genuine Objects (SGO) has received comparatively little attention. Recently, it has been found that current state-of-the-art CMFD techniques are mostly inadequate for satisfactorily solving this important problem variant. In this paper, we have addressed this issue by using Rotated Local Binary Pattern (RLBP) based rotation-invariant texture features, followed by Generalized Two Nearest Neighbourhood (g2NN) based
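The g2NN matching step mentioned above can be sketched as follows, assuming the standard formulation in which, for each descriptor, distances to all other descriptors are sorted and neighbours are accepted while consecutive distance ratios stay below a threshold; the feature vectors and threshold here are illustrative, not the paper's RLBP setup.

```python
import numpy as np

def g2nn_matches(descriptors, ratio=0.5):
    """Generalized 2NN test (sketch): for descriptor i, accept its k-th
    nearest neighbour as a match while d_k / d_{k+1} < ratio, stopping
    at the first ratio that fails. Duplicated regions produce unusually
    small leading distances, so their pairs pass the test."""
    matches = []
    n = len(descriptors)
    for i in range(n):
        d = np.linalg.norm(descriptors - descriptors[i], axis=1)
        order = np.argsort(d)[1:]  # skip the descriptor itself (distance 0)
        dists = d[order]
        for k in range(len(dists) - 1):
            if dists[k + 1] <= 1e-12 or dists[k] / dists[k + 1] >= ratio:
                break
            matches.append((i, int(order[k])))
    return matches
```

Matched pairs would then be filtered geometrically (e.g. by estimating an affine transform) to localize the cloned region.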


Research on video phylogeny, i.e., the joint analysis of correlated video sequences, has shown the possibility of developing interesting forensic applications. As an example, it is possible to study the provenance of near-duplicate (ND) video sequences, that is, videos generated from the same original through content-preserving transformations. To perform this kind of analysis, accurate detection of ND videos is paramount. In this paper, we propose an algorithm for ND video detection and clustering in a challenging setup.