
Master of Science in Computer Science January 2021

A structured approach to JPEG tampering detection using enhanced fusion algorithm

Om Sai Teja Chennupati

Faculty of Computing, Blekinge Institute of Technology, 371 79 Karlskrona, Sweden


This thesis is submitted to the Faculty of Computing at Blekinge Institute of Technology in partial fulfilment of the requirements for the degree of Master of Science in Computer Science.

The thesis is equivalent to 20 weeks of full time studies.

The authors declare that they are the sole authors of this thesis and that they have not used any sources other than those listed in the bibliography and identified as references. They further declare that they have not submitted this thesis at any other institution to obtain a degree.

Contact Information:

Author:

Om Sai Teja Chennupati

E-mail: omch18@student.bth.se

University advisor:

Dr. Abbas Cheddad (Senior lecturer / Associate professor)
Department of Computer Science
Faculty of Computing
Blekinge Institute of Technology
SE-371 79 Karlskrona, Sweden

Internet: www.bth.se
Phone: +46 455 38 50 00
Fax: +46 455 38 50 57


Abstract

Background: The new technologies of this decade have created many digital platforms, such as social media and other internet applications, making it easier for every age group in society to share data over the internet. This technological leap has also led to a massive escalation in related technologies such as data manipulation tools. Images have become the primary type of data circulated and shared over the internet, and JPEG is the predominant image format, which makes JPEG images vulnerable to tampering. A reliable study of tampering detection is therefore needed to authenticate JPEG images. This study focuses on providing an enhanced feature fusion algorithm for JPEG image tampering detection.

Objectives: This study provides a way to enhance JPEG image tampering detection even when the tampered image does not originate from a JPEG image. This is done by fusing image features to achieve a better detection rate and by providing a performance comparison of different existing techniques.

Methods: Two research methods are used in this study. A literature review is used to procure knowledge of the different feature fusion techniques that exist for detecting tampering in an image. The second is experimentation: enhancing an existing algorithm to improve JPEG image tampering detection performance using feature fusion, and evaluating the performance of three different feature fusion techniques for JPEG tampering detection.

Results: The results of the three algorithms are compared for performance. Algorithm 3 (the enhanced fusion algorithm) showed the best performance, with the highest accuracy of 94.68%, while the former Algorithm 2 without enhancement gave an accuracy of 85.49%. The area under the ROC curve (AUC) was also highest for Algorithm 3, at 0.946.

Conclusions: The statistics provided by the experimentation are analysed for the three algorithms, and it is concluded that Algorithm 3 provides the best performance, followed by Algorithm 2 and Algorithm 1. We also observed that Algorithm 3 improved on Algorithm 2 in detecting pre-processed tampered images.

Keywords: Image forensics, Feature fusion, Image tampering, JPEG tamper detection, PCA and SVM.


Acknowledgments

Writing the thesis acknowledgements is one item on my bucket list that I get to strike off while writing this section. Studying for a master's degree has been one of the most beautiful experiences I have had in recent times. This year we had our fair share of ups and downs caused by the pandemic, but we have strived to come back even stronger!

I would like to express my most profound appreciation to my thesis supervisor, Dr Abbas Cheddad, for continuously supporting my research and patiently answering my questions while I was writing this thesis. This thesis would not have been possible without his valuable inputs and suggestions. I am forever grateful for this wonderful opportunity.

I would like to extend my gratitude to my family for their constant support in my studies and in life. There have been phases of my life when my decisions made my family uncomfortable, but they supported me through every one of them, never burdened me, and gave me room to grow.

Most people say that, after parents, friends are among the most important people in our lives. I believe this as well. Without these people, I would not have written this document: countless sleepless nights spent reviewing each other's documents.

Nevertheless, this thesis was nothing less than a mini-adventure. When I was in a pinch, they gave me the moral support that I needed. Love you all!


Contents

Abstract
Acknowledgments
1 Introduction
  1.1 Problem Statement
  1.2 Research Questions
  1.3 Aim and Objectives
  1.4 Outline
2 Background
  2.1 Image Tampering
    2.1.1 Copy-move
    2.1.2 Splicing
    2.1.3 Post Processing Techniques
  2.2 JPEG Compression
  2.3 Machine Learning
3 Related Work
4 Method
  4.1 Literature Review
    4.1.1 Formulation of Search String
    4.1.2 Inclusion and Exclusion Criteria
  4.2 Experimentation
    4.2.1 Proposed Feature Fusion Method
    4.2.2 Data Collection
  4.3 Performance Metrics and Analysis
    4.3.1 Performance Metrics
    4.3.2 Statistical Significance
    4.3.3 Independent and Dependent Variables
    4.3.4 Tools and Environment setup
5 Results and Analysis
  5.1 Literature review Results
  5.2 Experiment Results
    5.2.1 Algorithm 1
    5.2.2 Algorithm 2
    5.2.3 Algorithm 3
  5.3 Experiment Analysis
6 Discussion
  6.1 Answering Research Question 1
  6.2 Answering Research Question 2
  6.3 Validity Threats
    6.3.1 Internal Validity
    6.3.2 External Validity
    6.3.3 Conclusion Validity
7 Conclusions and Future Work
  7.1 Conclusion
  7.2 Future works
References


List of Figures

2.1 Copy-move tamper: (a) Authentic Image (b) Tampered Image (c) Ground Truth
2.2 Splicing tamper: (a) Authentic Image 1 (b) Authentic Image 2 (c) Tampered Image (d) Ground Truth
4.1 Flow chart of proposed feature fusion method
5.1 Average performance metrics comparison of algorithms
5.2 ROC curve for Algorithm 1
5.3 ROC curve for Algorithm 2
5.4 ROC curve for Algorithm 3


List of Tables

4.1 Hardware Environment
5.1 Performance metrics of Algorithm 1 with stratified 5-fold cross-validation
5.2 Performance metrics of Algorithm 2 with stratified 5-fold cross-validation
5.3 Performance metrics of Algorithm 3 with stratified 5-fold cross-validation
5.4 Average performance metrics of the algorithms


Nomenclature

AUC Area Under Curve.

DCT Discrete Cosine Transform.

DWT Discrete Wavelet Transform.

IQM Image Quality Metrics.

JPEG Joint Photographic Experts Group.

LBP Local Binary Pattern.

PCA Principal Component Analysis.

ROC Receiver Operating Characteristic.

SVD Singular Value Decomposition.

SVM Support Vector Machine.


Chapter 1

Introduction

The world is developing at a rapid pace, and so is technology. In this age of information, new technologies are making the internet more convenient for sharing data, and they have led to a rapid escalation in the usage of social media such as Instagram, Snapchat and WhatsApp. An immense flow of data circulates on the internet every day, in all forms and sizes: audio, video, images and documents. Digital platforms have reached people of every age and have become ever more accessible and effortless to use, putting the sharing of data at everyone's fingertips.

With the enormous usage of data comes an enormous demand for manipulating it. Over the past decade this has given rise to many data manipulation tools, built for different purposes and also open to misuse. Data is manipulated out of need or to falsify information, and its authenticity is lost in the process, making it harder to distinguish authentic from tampered data.

Images have become one of the most widely shared types of data on the internet and social media, and this heavy usage makes them vulnerable to falsification. Image manipulation tools make it convenient to tamper with an image for different purposes (e.g., shading a selfie for pleasant visual effects).

Image forgery is the process of intentionally faking facts in an image to falsify the data in the original image. Image tampering is a type of image forgery that alters the graphical content of one or more parts of an image [1]. It has become one of the greatest threats because, once the graphical data has been tampered with, it is difficult for the human eye to recognise the change. Many technologies have surged in the recent decade that allow even a novice user to tamper with an image for aesthetic purposes or personal gain. These new-era technologies make it subtle for forgers to achieve their goals and challenging to identify tampering in an image. Many studies have shown that it is difficult for humans to recognise forgery in an image [2, 3].

Image manipulation has benefits, such as altering the graphical content of an image or video to serve the human imagination in movies and advertisements. However, it also has downsides, such as being the cause of legal crises [4] in some parts of the world, which makes it significantly important to detect tampering in photos. Reliable forensic analysis has become essential for authenticating images used in legal contexts.


Image tampering detection is a pressing issue in this modern era of images, making it important to verify an image's authenticity. Many detection techniques have been proposed that detect a single type of tampering in an image but fail to address other types of tampering applied to the same image, and detecting multiple types of tampering in one image remains difficult. To make detection more reliable across tampering types, only a few works have proposed fusing features extracted from the image, and some of the proposed fusion algorithms still focus on a single type of tampering under various manipulations.

JPEG (Joint Photographic Experts Group) is a widely used image format on the internet. JPEG compression compresses an image according to the JPEG standard [5] and is mostly used to reduce storage by storing the image in JPEG format. After a JPEG image is tampered with, the result is usually stored as JPEG again, which causes double compression. Tampering can be detected in a doubly compressed JPEG image by measuring the double quantisation effect [6]. JPEG images have characteristic features, such as DCT coefficients and double quantisation effects, that can be extracted from a tampered image for authentication. However, these features fail to address tampering when the tampered image does not originate from the JPEG format. To overcome this issue, feature fusion is used to make tampering detection in JPEG images more reliable by relying on features that are invariant to JPEG compression.

This thesis addresses the drawbacks of JPEG tampering detection. It fuses statistical measures of a JPEG image that are invariant to compression, namely IQM measures [7], the double blurring correlation [8], SVD features [9] and LBP histograms [10], to detect tampering in a JPEG image.

1.1 Problem Statement

A limitation of existing JPEG image tampering detection methods is that they fail to detect tampering when the original image is not a JPEG image or when image processing operations are introduced between the two compression rounds [1], which makes detecting tampering in JPEG images more challenging.

JPEG image tampering detection becomes harder still because JPEG-specific features, such as Markovian features of discrete cosine transforms [11] and double quantization effects, are impractical for detecting a tampered image that does not originate from the JPEG format and for which there is no prior knowledge of the original image. Detecting the various types of image tampering is also difficult to address when most research focuses on a single type of tampering. This motivates the use of feature fusion techniques to overcome the limitations of JPEG image tampering detection.


1.2 Research Questions

RQ1

What types of image-based feature fusion techniques exist in the literature?

Motivation: This research question is answered by conducting a literature review of the existing feature fusion techniques in order to gain knowledge of them and use them for further experimentation.

RQ2

Which feature fusion algorithm can detect JPEG tampering even when the tampered image does not originate from another JPEG image?

Motivation: This research question is answered by enhancing a feature fusion algorithm to detect JPEG image tampering even when the tampered image does not originate from the JPEG format, and by providing a performance evaluation of the relevant existing techniques and the enhanced technique.

1.3 Aim and Objectives

Aim: The thesis aims to overcome the limitations of JPEG tampering detection without prior knowledge of the image, even when the tampered image does not originate from the JPEG image format.

Objectives:

The objectives of the thesis are as follows.

1. To gain insights into the currently existing feature fusion techniques to detect image tampering.

2. To enhance a feature fusion algorithm to detect the JPEG image tampering even when the tampered image does not originate from JPEG format.

3. To evaluate the performance metrics of different feature fusion techniques.

1.4 Outline

The organization of the thesis is as follows. Chapter 1 introduces image tampering and the importance of tampering detection in JPEG images, together with the research gap, motivations, aim and objectives. Chapter 2 gives brief insights into the background and a general overview of the related topics. Chapter 3 outlines the related work of this study. Chapter 4 describes the literature review and experimentation used for the research. Chapter 5 presents the results and analysis of the experiments. Chapter 6 discusses how the results and analysis contribute to current research. Finally, the conclusions and future work of this study are discussed in Chapter 7.


Chapter 2

Background

This chapter provides a concise summary of the fields of study related to this research. The sections cover Image Tampering, JPEG Compression and Machine Learning.

2.1 Image Tampering

Image tampering is a part of image forensics. It is the manipulation of the graphical content of an image that undermines the image's authenticity. Image tampering detection has become pivotal in the recent decade with the advent of the latest image manipulation technologies. Farid et al. [12] reviewed image tampering tools and classified them as pixel-based, format-based, geometry-based and based on physical features extracted from the image. Image tampering is mainly done using copy-move and splicing (cut-paste) methods, which are explained in the subsections below. After an image has undergone one of these methods, forgers rely on post-processing techniques to hide the clues of digital forgery, which makes image tampering detection much more challenging.

2.1.1 Copy-move

Copy-move is the process of altering the content of an image by copying some portion of the content and moving it within the same image [13]. Detection of this type of tampering mainly relies on key-point-based and block-based methods; the most commonly used key-point-based methods are SIFT and SURF [14], which are scale-invariant. An example of copy-move tampering, taken from the CASIA V2.0 dataset [15], is shown in Fig 2.1.

Figure 2.1: Copy-move tamper: (a) Authentic Image (b) Tampered Image (c) Ground Truth


2.1.2 Splicing

Splicing is the process of altering the data in an image using portions of data from another source or from multiple images [16]. The technique is also known as cut-paste [17]. Splicing detection mainly relies on edge anomalies and region anomalies. An example of splicing tampering, taken from the CASIA V2.0 dataset [15], is shown in Fig 2.2.

Figure 2.2: Splicing tamper (a) Authentic Image 1 (b) Authentic Image 2 (c) Tam- pered Image (d) Ground Truth

2.1.3 Post Processing Techniques

Image tampering detection becomes more complicated in forensic analysis due to the use of post-processing operations. Two types of post-processing operations are used to support tampering. One is active post-processing, used to improve the tampering effect, such as blurring, re-sampling, brightness changes and contrast adjustments [18]. The other is passive post-processing, involuntary modifications introduced during data transmission, such as JPEG compression, added noise and colour reduction [19]. Post-processing hides content in an image that would be useful for tampering detection; detection methods and the features they use should therefore be invariant to post-processing techniques in order to detect tampering more reliably.

2.2 JPEG Compression:

JPEG compression is a lossy image compression technique used to reduce the storage size of an image, which involves a loss of data and detail. The quality scaling decides the image quality and the amount of data lost in JPEG compression [19]. JPEG compression is a passive post-processing operation in image tampering and shows the characteristics of regional anomalies [20]. This step reduces the image data and helps conceal the falsification of an image after it has been tampered with.
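As a small illustration of how the double compression mentioned in Chapter 1 arises, the sketch below saves an image as JPEG, reopens it (where tampering would take place) and saves it again at a different quality factor; the file names and quality factors are illustrative assumptions, not values from the thesis.

```python
from PIL import Image

# A minimal sketch of double JPEG compression: the image is quantised once on
# the first save and again on the second save, so the final file carries the
# traces of two compression rounds.
original = Image.open("original.png").convert("RGB")
original.save("first_save.jpg", quality=90)    # first compression round

edited = Image.open("first_save.jpg")          # tampering would be applied here
edited.save("second_save.jpg", quality=75)     # second round -> double compression
```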

2.3 Machine Learning

Machine learning is a sub-field of computer science that deals with algorithms that improve from experience rather than having every step programmed by a human. A programmer cannot write a separate program for every instance of the same kind of problem; instead, machine learning solves such problems by improving itself from the experience gained while learning the same phenomena, and through knowledge discovery, to achieve better performance and results. Much computation and manual effort is saved with the advent of this field of study. Machine learning has provided insights into real-world problems and solutions to problems that had long resisted other approaches, making it a rising subject in the recent era. There are four types of machine learning: supervised learning, unsupervised learning, semi-supervised learning and reinforcement learning [21].

Supervised Learning

Supervised learning is learning from training data that contains labelled examples of the problem and their classifications. The algorithm learns from the provided data and improves at classifying unknown instances of the same phenomena. For instance, an object can be identified in an image by providing images containing the same object as training data, so that the algorithm can better classify the object in an unseen image [22].

Unsupervised Learning

Unsupervised learning is learning from unlabelled data: a model is trained on the data's own characteristics to produce a value or vector that achieves the goal. Clustering and data reduction are typical uses of unsupervised learning [22].

Semi-supervised Learning

Semi-supervised learning is learning from both labelled and unlabelled data, either to improve classification or to build a model that solves problems of the same phenomena [22].

Reinforcement Learning

Reinforcement learning involves a goal state within an environment, with actions that can be taken to reach it and a reward associated with each action. This type of machine learning must learn a policy for the problem, choosing optimal actions that maximize the rewards on the way to the goal state [22].


Chapter 3

Related Work

The related works were studied comprehensively and chosen by narrowing down to the scope of this research. Numerous research papers, journals and surveys [1, 18] were reviewed, and those providing insights relevant to this research were selected. These papers provided the background on various image features, feature extraction methods and feature fusion techniques, and this extensive review shaped the scope of this research and the algorithms used to solve the problem.

For JPEG image tampering detection, automated detection performs better [1] when using discrete cosine transforms (DCT) [23, 24] and discrete wavelet transforms (DWT) [25, 26], or when analyzing image residuals [27, 28, 29], which also yields good tampering detection results. The major drawback of JPEG-based methods is that they fail when the tampered image is not originally in JPEG format or when other image processing operations, such as re-sampling, have been inserted between the two compression rounds. To overcome these drawbacks and still detect JPEG image tampering, we need to rely on methods that are invariant to them, i.e. methods that detect JPEG tampering even when the original image is of another format.

PCA is a robust technique for dimensionality reduction of the features used in the data, and it has proven effective for fusing image tampering features in recent work [30]. Singular value decomposition is a matrix factorization that decomposes a general matrix into its component parts. Kang et al. [9] showed that SVD applied to image features, with its algebraic and geometric invariants, produces better tampering detection results on images that have undergone post-processing such as Gaussian noise and JPEG compression. SVD combined with PCA has generated good analyses and results [31] on image features for detecting tampering in an image [30], which helps overcome the drawbacks of JPEG image tampering detection.

Wang et al. [8] proposed a correlation scheme that detects tampering by blurring the given (tampered) image, generating a double-blurring mechanism. The double-blurring feature vectors are extracted, their correlations are computed and a mapping space is generated. Matrices of permutations of the correlations in this mapping space are used to depict the segregation of the tampered regions. This method is invariant to Gaussian blurring and JPEG compression, which helps overcome the drawbacks of JPEG image tampering detection.

Zhou et al. [7] proposed image tampering detection based on image quality metrics and variances. Prediction error arrays and their moments are generated, with the noise variances subjected to wavelet transforms. The IQM features are selected using the statistical tool ANOVA, which keeps only those significant for tampering detection. An SVM is trained on the IQM features selected by ANOVA, and the trained SVM is used to classify tampering in an image. The proposed technique has been shown to generate better average accuracy than existing algorithms and is invariant to post-processing techniques, which helps overcome the drawbacks of JPEG image tampering detection.

Yang et al. [30] proposed a fusion of features based on PCA dimension reduction. The initial features are linear correlation features with high invariance to blurring operations, extracted using SVD. Features extracted from the double-blurring correlations form a second feature category, followed by the IQM features, i.e. image prediction errors, mean absolute errors, content structure correlations, the Minkowsky measure and normalised cross-correlations. These feature categories are fused in three variants of the proposed technique. The first variant applies PCA fusion to all feature types combined. In the second variant, PCA fusion is applied to each feature category individually. The third variant additionally assigns weights to the feature categories on top of the second variant. All variants are trained individually on an SVM. The last variant provided the best performance, even on tampered images involving post-processing techniques such as JPEG compression and Gaussian blur, overcoming the drawbacks of JPEG image tampering detection.

Local binary patterns have proven useful in detecting tampering, especially splicing, in an image. Zhang et al. [32] proposed a local binary pattern technique that generates feature vectors from discrete cosine transforms of different block sizes. The features, reduced with PCA using different kernel sizes, are fed into the LIBSVM classifier [33], and splicing detection is performed with the trained SVM, producing better results than the Markovian model and the Hu moments model.

The illumination reflectance model proposed by Chen et al. [34] provides a way to separate the high-frequency reflectance from the low-frequency luminance of an image, which is extracted for better detection. Niyishaka et al. [10] applied this to image splicing detection by converting the image into the YCbCr colour space and extracting the luminance from it. The extracted illumination undergoes min and max filtering. Local binary patterns are then extracted from the resulting illumination space and from the Cr and Cb channels, and the histograms of these patterns are fed into different classifiers. The proposed model was tested on the CASIA V2.0 dataset and produced better classification performance for image tampering detection.

Most of the feature fusion algorithms’ previous works defined on only one type of image tampering detection rather than both. It is either copy-move or splicing.

This research overcomes this limitation and provides a method to detect both. The previous related works are explained below.

Ferreira et al. [35] proposed a model with different resizing windows of the im- age, and the images are trained for generating the behavioural knowledge space for different scaling. Each BKS contains the tables of each pixel’s probability is defined based on the neighbour pixels analysis. The trained BKS with local threshold and random forest classifier generated better performance on the training data.

Tatkare et al. [36] proposed feature fusion for copy-move detection, in which the block-based Hu moments are fused with key-point-based SIFT features. Initially, the images are resized and converted to greyscale, and the SIFT keypoints are weighed based on the number of matched key points. The algorithm is trained with the weights generated from the number of matched key points and the Hu moments, producing the weights for the Euclidean distance; these weights are then used on the training data to classify the tampered image blocks. An accuracy of 92.48 per cent is achieved on the MICC F220 dataset [37], giving better detection accuracy than the individual feature techniques, but splicing detection is not possible with this method.

Jing et al. [38] proposed a feature fusion model based on DCT features and Markovian features. The features are extracted from the image, and feature vectors of different sizes are tested with different combinations of Markovian and DCT features. The fused features are trained with the LIBSVM classifier [33]. The Markovian feature with the Cst feature of DCT (with calibration) and the Markovian feature with the H feature of DCT (with calibration) achieved an accuracy of 91.5 per cent.

Kaur et al. [39] proposed a fusion algorithm in which copy-move features based on intensity gradients (HOG) and splicing features based on noise variance estimation are fused, generating better results than the individual methods. The proposed feature fusion algorithm produced 74.3 per cent accuracy on CASIA V2.0.

The related works gave insights into feature fusion algorithms and the various features extracted for fusion. The feature fusion algorithm proposed by Yang et al. [30] lends itself to enhancement with the LBP features of [10], as both algorithms produced robust results for image tampering detection.


Chapter 4

Method

This chapter describes in detail the research methods used to answer the research questions. Research question 1 is addressed with a literature review that maps out the feature fusion techniques present in the literature. The second research question is answered by conducting experimentation on the proposed enhanced feature fusion algorithm and by providing a performance evaluation of different feature fusion techniques.

4.1 Literature Review

A literature review is used to answer research question 1. It helps in gaining knowledge of the existing and past literature relevant to the research. The literature review is aimed at discovering knowledge rather than developing the existing techniques; it helps maintain the overall scope of the research topic and narrow it down to a specific research problem.

4.1.1 Formulation of Search String

A literature review and the collection of relevant knowledge are mostly based on a search string, and the keywords must be placed precisely in the search string to obtain good search results. The search strings were formulated based on keyword specificity in research libraries such as IEEEXplore, ACM, ScienceDirect and Scopus; other research information providers include Elsevier and Springer. All research papers were substantially reviewed and selected based on the inclusion and exclusion criteria to narrow down the knowledge transfer and analysis.

Search String: (feature fusion) AND (JPEG OR image) AND (detect* OR tamper* OR splicing* OR forgery*) NOT ("Video") NOT ("audio")

The search string helps to find the relevant research domain and to narrow down the search. The relevant papers found in this way point to further relevant references, and the process is repeated while following the inclusion and exclusion criteria. This process, known as snowballing [40], helps in finding related research.


4.1.2 Inclusion and Exclusion Criteria

Providing the inclusion and exclusion criteria helps manage the search for literature relevant to the research. The irrelevant literature corresponding to the research can be excluded.

Inclusion Criteria

• The research articles analyzed and selected range from 2009 to 2020.

• The research articles are available from journals, conferences, books and magazines.

• The research articles are relevant to the area of research.

• The research articles are published in English.

Exclusion Criteria

• Research articles not published in English are excluded.

• Presentations and articles under review are excluded.

• Research articles on video tampering, facial forensics and medical images are excluded.

• Research articles published only as an abstract, without full text, are excluded.

The literature review is based on this search string and the inclusion and exclusion criteria, which narrow the literature down to what is adequate for answering the problem statement. The following articles were reviewed with respect to research question 1, providing in-depth sightings of the existing feature fusion techniques.

Li et al. [41] proposed a feature fusion technique that fuses two types of tampering detection features: a statistical feature-based approach and a copy-move detection-based approach. The spatial and colour-rich model (SCRM) features are extracted as statistical features with a step size of 16 and a block size of 64 x 64 pixels. To check whether a pixel is pristine, an ensemble classifier with linear discriminant analysis (LDA) based learners is used to generate a possibility map. The patch-match technique is used for copy-move detection, and a second possibility map is generated. These tampering possibility maps are fused for better performance; the results on the Image Forensics Challenge corpus [42] achieve a state-of-the-art F-measure of 0.4925 compared with other fusion-based approaches, but the method is slightly sensitive to scaling and rotation.

Cao et al. [43] proposed a fusion boost for detecting image tampering by using demosaicing formulas to extract derivative correlation features. A probabilistic support vector machine (PSVM) is trained on a small subset of features containing demosaic and feature types to achieve a low error rate in a linear classification model called an ensemble learner, yielding a fusion boost over the individual detectors, each of which has excellent potential for identifying its own type of manipulation. The experimentation was done on Canon Ixus I, Nikon D70, Sony P73 and Olympus Master 2 images, achieving error rates as low as 2 to 4.3 per cent.

Jing et al. [38] experimented on the fusion of Markovian features and DCT features of the image for splicing detection; the feature set of DCT (with calibration) and Markovian features achieved an accuracy of 91.5 per cent with a feature vector of 123 dimensions. The fusion of the H feature of DCT (without calibration) with the Markovian features achieved the same accuracy. The results also achieved full detection accuracy across different quantizations of JPEG image tampering.

Zhang et al. [44] made use of camera-based features such as colour features, image quality features, wavelet features and bi-coherence features of an image. The features are fed into one classifier with varying feature vector dimensions to match the camera's pattern, and test images with a low number of matching image blocks are classified as tampered. The experimentation was done on images from Canon Por 1, Nikon E 5700 and Sony F 828 cameras with a 75 per cent JPEG quality factor, showing no change in accuracy beyond 87 feature dimensions out of the set of 247. Confined to these 87 features, the method achieved a mean tampering detection accuracy of 89 per cent, even with post-processing techniques and JPEG compression. It fails, however, when the splicing is done from different images taken by the same camera.

The authors in [36] proposed a fusion of key-point-based and block-based features for copy-move tampering in an image. Any two features with a high match at the manipulated area are treated as a cue. Hu moment features are extracted for global use, and SIFT features are extracted for local use; these features are fused to form a hybrid tampering detector. The experiment conducted on MICC F220 [37] gave a better accuracy of 92.48 per cent compared with the individual accuracies of the Hu moments (70 per cent) and SIFT (67.8 per cent).

Korus et al. [45] proposed a window-based analysis for tampering detection using mode-based first-digit features of candidate maps. The authors propose multiscale fusion strategies, including a Bayesian approach called fusion by energy minimization. The bottom-up fusion approach starts with small-scale estimates of the candidate maps and develops them into large-scale estimates; top-down fusion starts from large-scale estimates and generalizes down to small-scale estimates. A performance analysis comparing the fusion strategies with multi-vote decision fusion, an averaging strategy, a classification trade-off and a supervised learning strategy showed that the bottom-up and energy minimization strategies gave the best results, with 87 per cent average accuracy. The results show that window-based detectors yield better accuracy by using both small- and large-scale estimates.

Ferreira et al. [35] proposed a behavioural-knowledge-based approach for copy-move detection. This approach generates a table of the classifiers' decisions for different scales of the image, called a behaviour knowledge space (BKS). Using machine learning, the BKS produces the probabilities of pixels being tampered for unknown neighbour combinations. General machine learning classifiers such as SVM and random forests are used to update the BKS. The generated probability map is compared with a threshold to verify the authenticity of each pixel. To handle cases where the probability and threshold coincide for a combination, neighbourhood agreement and local variable threshold techniques are used to classify a pixel based on its neighbours. The experimentation was done on a dataset made by the authors. The BKS-RF-LVT variant (behavioural knowledge space with random forest and local variable threshold) performed best among all classifiers on compressed, non-compressed and post-processed images.

The author in [30] proposed three fusion techniques for blurring detection. The first considers three types of features: the singular values of the grey image matrix, correlation coefficients related to the blurred pixels, and image quality factors. The obtained features are fused using principal component analysis and fed into an SVM classifier for training; the trained SVM classifies whether an image was blurred by Gaussian blurring or by JPEG compression. The second technique follows the same scheme, but each type of feature undergoes PCA fusion separately rather than as a whole, and the third follows the second but introduces weighting before the SVM classifier. The proposed third scheme outperformed the individual features and the other proposed fusion techniques.

The literature review revealed the state of the art of feature fusion techniques compared with individual feature techniques. However, many techniques are confined to a specific type of image tampering, either copy-move or splicing. There is therefore a clear need to fill the research gap by providing reliable image tampering detection using feature fusion for different types of tampering in an image.

4.2 Experimentation

Many techniques, image features and methods were reviewed to select suitable image features for the fusion algorithm. The experimentation is confined to adequate data collections that are publicly available and free to use for research. The literature review and related works provided insightful information for selecting the methods and an appropriate enhancement of the fusion algorithm. The data undergoes preprocessing according to the features to be extracted, making it usable for the experimentation; the features are then fused by PCA dimension reduction and passed to the classifier for classification. The enhanced algorithm is analyzed and compared with the existing algorithms on predictive performance counters, and the results are reported. The goal of the experiment is to present a feature fusion algorithm that detects tampering in an image with higher performance.

4.2.1 Proposed Feature Fusion Method

The proposed approach starts with preprocessing of the image. The new approach involves four categories of features that are fused: initially, the three singular value decomposition features are extracted from the image, then the double blurring correlation, followed by the five image quality factors.

The algorithm is enhanced by fusing in the LBP histogram feature category. This and the other three feature categories, each with its own preprocessing steps, are fused using PCA dimension reduction and trained on the SVM classifier. The goal of the enhanced feature fusion algorithm is to detect splicing and copy-move tampering in a JPEG image even when the image does not originate from a JPEG-format image. The image is denoted I, with dimensions m x n, and its degenerate version is denoted I'. The flow chart of the proposed algorithm is shown in Fig 4.1.

Figure 4.1: Flow chart of proposed feature fusion method

Singular Value Decomposition

The singular value decomposition technique extracts features from the greyscale image. An image that has undergone blurring operations such as JPEG compression shows a diminishing of the distinctive high-value indexes in its singular value spectrum, and these indexes tend to change significantly after the image has been tampered with [30]. The singular value decomposition is given below: in Equation 4.1, X is a unitary matrix of order m x m, Σ is the non-negative diagonal matrix of order m x n, and V is a unitary matrix. The variables J and K represent the image's dimensions, with j and k limiting the features in Equation 4.2. The function p in Equation 4.3 gives the probability of the diagonal entry at a particular order, and the function u in Equation 4.4 is a limit function. The features extracted from the greyscale image are defined as C1, C2 and C3, as follows.

$I = X \Sigma V^{T}, \qquad F = \operatorname{diag}(\Sigma)$   (4.1)

$j = \min(J, K), \qquad k = \lfloor j/2 \rfloor$   (4.2)

$p(n) = F(n) \Big/ \sum_{m=1}^{k} F(m), \qquad n = 1, 2, \ldots, k$   (4.3)

$u(i) = \begin{cases} 1, & i < 2 \\ 0, & i \geq 2 \end{cases}$   (4.4)

$C_1 = \frac{1}{k} \sum_{j=k+1}^{l} p(j)$   (4.5)

$C_2 = \frac{1}{k} \sum_{j=1}^{l} u(F(j))$   (4.6)

$C_3 = \sum_{j=k+1}^{l} F(j)^{2}$   (4.7)
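A rough NumPy sketch of this feature category is given below; the normalisation and the threshold in u(.) are read off the reconstructed Equations 4.1-4.7 above, so they are assumptions rather than the exact thesis implementation.

```python
import numpy as np

def svd_features(gray):
    """Sketch of the three SVD features C1-C3 from a greyscale image."""
    F = np.linalg.svd(gray.astype(float), compute_uv=False)  # singular values F = diag(Sigma)
    l = F.size                 # number of singular values, min(J, K)
    k = l // 2                 # Eq. 4.2
    p = F / F[:k].sum()        # normalised singular values, Eq. 4.3
    C1 = p[k:].sum() / k       # tail mass of the normalised spectrum, Eq. 4.5
    C2 = (F < 2).sum() / k     # share of near-zero singular values via u(.), Eq. 4.6
    C3 = (F[k:] ** 2).sum()    # energy in the small singular values, Eq. 4.7
    return C1, C2, C3
```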

Double Blurring Correlation

Images stored in JPEG format after being tampered with exhibit blurring effects, and double-blurring correlations are extracted by blurring the tampered image again [8]. The doubly estimated blurring correlation coefficients form a matching space, and this feature is used for classification of the tampered image. The feature is defined as C4 in Equation 4.8, where ρ_corr is the correlation between the original image and its blurred version and S is the Fourier transform.

$C_4 = \rho_{\mathrm{corr}}\big(\,|\ln(S(I))|, \; |\ln(S(I'))|\,\big)$   (4.8)
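One possible reading of this feature in code is sketched below; the Gaussian blur used to produce the degenerate version I' and its parameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def double_blur_correlation(gray, sigma=1.5):
    """Sketch of C4 (Eq. 4.8): correlation between the log Fourier spectra
    of the image and of a re-blurred copy."""
    blurred = gaussian_filter(gray.astype(float), sigma=sigma)       # degenerate version I'
    eps = 1e-8                                                       # avoid log(0)
    log_spec = np.abs(np.log(np.abs(np.fft.fft2(gray)) + eps))       # |ln(S(I))|
    log_spec_b = np.abs(np.log(np.abs(np.fft.fft2(blurred)) + eps))  # |ln(S(I'))|
    return np.corrcoef(log_spec.ravel(), log_spec_b.ravel())[0, 1]   # Pearson correlation
```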

IQM

The image quality measures [7] are used as inputs to the tampering detection model. The image prediction error statistics, namely the mean absolute error, the mean of squared errors, the content structure correlation, the Minkowsky measure and the normalized cross-correlation, are taken as IQM and defined as C5, C6, C7, C8 and C9 respectively. They are given below for an image of dimensions I x J, where I(m, n) is the original image and I'(m, n) its degenerate version.

$C_5 = \frac{1}{IJ} \sum_{m=1}^{I} \sum_{n=1}^{J} |I(m,n) - I'(m,n)|$   (4.9)

$C_6 = \left( \frac{1}{IJ} \sum_{m=1}^{I} \sum_{n=1}^{J} |I(m,n) - I'(m,n)|^{2} \right)^{\frac{1}{2}}$   (4.10)

$C_7 = \max_{m,n} \, |I(m,n) - I'(m,n)|$   (4.11)

$C_8 = \frac{\sum_{m=1}^{I} \sum_{n=1}^{J} I(m,n)^{2}}{\sum_{m=1}^{I} \sum_{n=1}^{J} I'(m,n)^{2}}$   (4.12)

$C_9 = \frac{\sum_{m=1}^{I} \sum_{n=1}^{J} I(m,n)\, I'(m,n)}{\sum_{m=1}^{I} \sum_{n=1}^{J} I(m,n)^{2}}$   (4.13)
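The sketch below is a direct transcription of the reconstructed Equations 4.9-4.13 between an image and its degraded version; it is illustrative only.

```python
import numpy as np

def iqm_features(image, degraded):
    """Sketch of the five IQM features C5-C9 between an image and its
    degenerate (e.g. blurred) version."""
    I, Ip = image.astype(float), degraded.astype(float)
    diff = I - Ip
    C5 = np.mean(np.abs(diff))             # mean absolute error, Eq. 4.9
    C6 = np.sqrt(np.mean(diff ** 2))       # root of the mean squared error, Eq. 4.10
    C7 = np.max(np.abs(diff))              # maximum absolute difference, Eq. 4.11
    C8 = (I ** 2).sum() / (Ip ** 2).sum()  # ratio of image energies, Eq. 4.12
    C9 = (I * Ip).sum() / (I ** 2).sum()   # normalised cross-correlation, Eq. 4.13
    return C5, C6, C7, C8, C9
```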

LBP Histograms

Local binary patterns are among the most significant image features applied to real-world problems. The illumination reflectance decomposition [34] of the Y channel in the YCrCb colour space is given in Equation 4.14, with R as reflectance and L as luminance. The luminance of the Y channel undergoes min and max filtering, and the new luminance is calculated as shown in Equation 4.15, where u is a constant taken as 1.1 and t a constant taken as 0.05 to avoid null values. The reflectance R, taken as the new colour space as shown in Equation 4.16, is used for LBP feature extraction. The LBP features are defined by the spatial arrangement of pixel intensities, and normalized histograms are generated from the LBP. The LBP is given in Equations 4.17 and 4.18, where p_c is the central pixel value, p_j is a neighbouring pixel value, P is the number of neighbourhood pixels within the radius r, and S is the limit function on the pixel values.

$I(x) = R(x) \cdot L(x)$   (4.14)

$L'(x) = u\,\big(L(x) + t\big)$   (4.15)

$R(x) = \frac{I(x)}{L'(x)}$   (4.16)

$\mathrm{LBP}_{P,r} = \sum_{j=0}^{P-1} S(p_j - p_c) \cdot 2^{j}$   (4.17)

$S(p_j - p_c) = \begin{cases} 1, & p_j - p_c \geq 0 \\ 0, & p_j - p_c < 0 \end{cases}$   (4.18)
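A sketch of this feature category is given below. The min/max window size and the constants u and t follow the text; the LBP parameters, the choice of channels and the histogram binning are assumptions, so the resulting dimensionality differs from the 356-dimensional set reported later.

```python
import cv2
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter
from skimage.feature import local_binary_pattern

def lbp_histogram_features(bgr, P=8, r=1, u=1.1, t=0.05, win=9):
    """Sketch of the LBP histogram category built on the illumination-
    reflectance preprocessing of Eqs. 4.14-4.16 (one possible reading)."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(float)
    Y, Cr, Cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]
    L = maximum_filter(minimum_filter(Y, size=win), size=win)  # low-frequency luminance
    R = Y / (u * (L + t))                                      # reflectance, Eqs. 4.15-4.16
    histograms = []
    for channel in (R, Cr, Cb):
        lbp = local_binary_pattern(channel, P, r, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        histograms.append(hist)
    return np.concatenate(histograms)  # normalised LBP histograms of R, Cr and Cb
```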

Fusion Process

The SVD, double blurring correlation and IQM feature categories are extracted first from the greyscale image. The feature fusion algorithm is enhanced by fusing the LBP histogram features into the algorithm via PCA.

PCA is a dimensionality reduction technique used here to fuse the different features into low-dimensional features. PCA uses the correlations between the feature vectors, and the principal component vectors are generated in the directions of highest variance in the features [22]. PCA makes feature fusion much easier by reducing the feature vector space and merging features to generate new ones, while preserving as much information as possible during the fusion.
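A minimal sketch of this fusion step is shown below, assuming one PCA per feature category followed by concatenation; the retained-variance threshold is an illustrative choice, not a value from the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA

def fuse_feature_categories(categories, variance=0.95):
    """Sketch of PCA-based fusion: each feature category (an n_samples x
    n_features array) is reduced with its own PCA, and the reduced blocks
    are concatenated into one fused vector per image."""
    reduced = [PCA(n_components=variance).fit_transform(X) for X in categories]
    return np.hstack(reduced)
```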

Machine Learning Classifier

SVM is selected as the machine learning classifier because it has shown promising results in the existing feature fusion techniques [30, 38, 43].

The support vector machine is a supervised machine learning algorithm mostly used for binary classification and later extended to multiclass classification. Classification is done using hyperplanes that separate the classes in n-dimensional space; SVMs were initially developed for linearly separable problems and later extended to non-linear hyperplanes. The hyperparameters that define the SVM classifier are the kernel, the regularisation and gamma [21]. The kernel transforms the problem into a related linear algebraic function of a given degree, which makes kernel selection time-consuming when tuning for better results. The regularisation value C is a parameter adjusted during training to best fit the classification problem. Gamma determines how far from the hyperplane the classification reaches: a low gamma reaches far from the hyperplane, and vice versa. The hyperplane learning in a linear SVM is done by transforming the problem using linear algebra, making it more robust to outliers [22]. A linear kernel with a degree of three is set, as it best increases classification performance over multiple runs of the classifier [21], and gamma is set to auto for better classification results.
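The classifier configuration described above can be sketched as follows; random data stands in for the fused feature matrix so the snippet runs on its own, and note that scikit-learn ignores the degree and gamma settings for a linear kernel, so they are kept only to mirror the text.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_fused = rng.normal(size=(200, 40))    # placeholder fused features
y = rng.integers(0, 2, size=200)        # placeholder labels: 0 = authentic, 1 = tampered

clf = SVC(kernel="linear", degree=3, gamma="auto")
scores = cross_val_score(clf, X_fused, y, cv=StratifiedKFold(n_splits=5), scoring="accuracy")
print(f"mean accuracy over 5 folds: {scores.mean():.3f}")
```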

4.2.2 Data Collection

Data-set

The dataset plays a vital role in validating the tamper detection algorithm. The chosen dataset should contain all types of tampering, and CASIA V2.0, created by Dong et al. [15], is such a dataset. It was created to provide a significant number of tampered images generated from authentic images using popular tampering methods. With the recent increase in the exploitation of image tampering, there is a growing need for a dataset that can reliably validate image tampering algorithms. Many natural colour images were collected for this dataset: it contains 12,323 colour images, of which 7,200 are tampered images generated from 5,123 authentic images. The database is an extension of CASIA V1.0, which contains copy-move tampered images.

In the CASIA V2.0 dataset, the edges around the tampered region are blurred, and in some cases general blurring is introduced in the original image before the tampering is done. The dataset contains images of various sizes, ranging from 320x240 to 800x600, in JPEG, BMP and TIFF formats. The tampered images have undergone many post-processing techniques, such as combinations of rotation, distortion and resizing. Other types of preprocessing involve blurring the spliced region or other regions, and blurring the image before tampering is also applied. The authentic images in the database come from the COREL image database [46], from websites, and from the authors' own captured images. The images are classified into nine groups: characters, structures, scenes, animals, plants, indoor, nature, texture and articles.

This dataset has extensive image tampering types that make it unique to validate the image tampering detection algorithms.

Pre-processing

Data preprocessing prepares the data for use in different ways. It helps acquire the vital information required for extracting the features and enables the algorithm to improve its classification performance.

Each feature category undergoes specific preprocessing steps. The colour image is converted to greyscale for extracting the SVD feature category. For the double blurring variance category, the RGB image in any format is converted to greyscale and blurred again to make it doubly blurred. The image quality feature category also uses the greyscale image, so the images are converted from RGB to greyscale for the IQM features. For the LBP feature category, the image is converted from RGB to the YCrCb colour space, where Y is the luminance; Y undergoes the min and max filters [34] as a preprocessing step for low-frequency extraction.
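The per-category preprocessing can be summarised in a short sketch; the Gaussian kernel size and the min/max window size follow the values quoted in Chapter 5, while everything else is illustrative.

```python
import cv2
from scipy.ndimage import minimum_filter, maximum_filter

def preprocess(bgr, blur_ksize=9, win=9):
    """Sketch of the per-category preprocessing for one BGR input image."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)                    # input for SVD and IQM features
    blurred = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)   # re-blurred copy for C4
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)                  # colour space for LBP features
    Y = ycrcb[..., 0].astype(float)
    L = maximum_filter(minimum_filter(Y, size=win), size=win)       # low-frequency luminance
    return gray, blurred, ycrcb, L
```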

Feature Extraction

The features of each category are extracted from the preprocessed image as described in Section 4.2.1. The extracted features of each category are fed into principal component analysis, which fuses them through dimension reduction and merges them into new features. The resulting features are fed into the SVM classifier for classification.

4.3 Performance Metrics and Analysis

4.3.1 Performance Metrics

Performance metrics play a vital role in validating the proposed algorithm and are selected according to the chosen classifier. The classifications made by the machine learning classifier are validated for performance using accuracy, F-measure, precision and recall [22], while the ROC is used to determine the classifier's discriminative ability [21]. The performance metrics are defined from the elementary level as follows.

True Positive (TP)

It is defined as the number of correct classifications by the classifier as a positive class.

False Positive (FP)

It is defined as the number of incorrect classifications by the classifier as a positive class.

False Negative (FN)

It is defined as the number of incorrect classifications by the classifier as a negative class.

True Negative (TN)

It is defined as the number of correct classifications by the classifier as a negative class.

True Positive Rate

It is defined as the proportion of correct classifications of the positive class to the total number of positive instances. It is defined in Equation 4.19.

$\mathrm{TPR} = \frac{TP}{TP + FN}$   (4.19)

False Positive Rate

It is defined as the proportion of incorrect classifications of the negative class to the total number of negative instances. It is defined in Equation 4.20.

$\mathrm{FPR} = \frac{FP}{TN + FP}$   (4.20)

Precision

It is defined as the proportion of correct classifications of the positive class to the total number of positive classifications made by the classifier. It is defined in Equation 4.21.

$P = \frac{TP}{TP + FP}$   (4.21)

Recall

It is defined as the proportion of correct classifications of the positive class made by the classifier to the total number of positive instances in the data. It is defined in Equation 4.22.

$R = \frac{TP}{TP + FN}$   (4.22)

Accuracy

It is defined as the proportion of correct classifications of both the positive and negative class to the total number of instances classified in the data. It is defined in Equation 4.23.

$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + FN + TN}$   (4.23)

F1 measure

The F-score is a statistical measure calculated from precision and recall in binary classification; the higher the F-score, the better the classifier's performance. It is defined in Equation 4.24.

$F_1 = \frac{2 \times P \times R}{P + R}$   (4.24)

ROC

The area under the receiver operating characteristic curve is a widely used performance measure of an algorithm's capability: the higher the area under the curve, the better the algorithm separates the classes, and vice versa. The true positive rate is plotted on the Y-axis and the false positive rate on the X-axis; the ROC curve is obtained by plotting TPR against FPR, and the area under the curve quantifies the algorithm's performance.
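For reference, the sketch below shows how these metrics are typically computed with scikit-learn; the labels and decision scores are placeholders.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, roc_curve)

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])    # placeholder ground truth
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])    # placeholder predictions
y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.1, 0.7, 0.6, 0.3])  # placeholder decision scores

print(precision_score(y_true, y_pred))   # Eq. 4.21
print(recall_score(y_true, y_pred))      # Eq. 4.22
print(accuracy_score(y_true, y_pred))    # Eq. 4.23
print(f1_score(y_true, y_pred))          # Eq. 4.24
fpr, tpr, _ = roc_curve(y_true, y_score) # points of the ROC curve
print(roc_auc_score(y_true, y_score))    # area under the ROC curve
```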

4.3.2 Statistical Significance

An experiment based on a machine learning classifier has to be validated for its classification ability using cross-validation techniques; stratified 5-fold cross-validation [21] is therefore used in this experiment. The stratification helps avoid data anomalies. The Friedman test is applied to the stratified k-fold cross-validation results to compare the different performance metrics [21]. The p-value obtained for the stratified 5-fold cross-validation is compared with a five per cent significance level, and the results are reported.

The null hypothesis H0 is "For given performance metric there is no difference in the performance of the algorithms".

The alternative hypothesis H1 is "For given performance metric there is a signif- icant difference in the performance of the algorithms".
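A sketch of this test is given below; the per-fold accuracy lists are placeholders, not the measured results of the three algorithms.

```python
from scipy.stats import friedmanchisquare

# Per-fold accuracies of the three algorithms (placeholder values only).
folds_algo1 = [0.84, 0.89, 0.86, 0.83, 0.85]
folds_algo2 = [0.86, 0.90, 0.88, 0.85, 0.87]
folds_algo3 = [0.93, 0.95, 0.94, 0.94, 0.95]

stat, p_value = friedmanchisquare(folds_algo1, folds_algo2, folds_algo3)
reject_h0 = p_value < 0.05   # True -> a significant difference between the algorithms
print(stat, p_value, reject_h0)
```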

4.3.3 Independent and Dependent Variables

Independent Variable

The independent variables in our experiment are the PCA and the SVM classifier.

Dependent Variable

The dependent variables in our experiment are the performance metrics, i.e. precision, recall, F-score and ROC.


4.3.4 Tools and Environment setup

The environment used for the experiment affects its outcome, so it is important to set up the right environment to obtain good results.

Hardware Environment

The hardware environment of the experiment is mentioned in the Table 4.1.

Processor: Intel(R) Core(TM) i7-5500U CPU @ 2.40GHz
Installed RAM: 8.00 GB
Operating System: Windows 10 Home
System type: 64-bit operating system, x64-based processor

Table 4.1: Hardware Environment

Software Environment

Python is used as the programming language for the experiment, and the code is written in Jupyter notebooks. Python has proven reliable and effective in the machine learning domain thanks to the vast availability of open-source libraries.

Python Libraries

• Numpy: NumPy is one of the most popular Python libraries for multidimensional arrays; matrices and linear algebra are among its most used features [47].

• Scipy: Scipy is an open-source library that deals with complex computations, optimization procedures and many more tasks for scientific purposes [48].

• Matplotlib: A Python library providing visualization for plotting statistics and other types of data. It builds on NumPy [49].

• Pillow: It is a python library with the image processing potential and handles various file formats of the images. It has efficient ways of handling the processed image data [50].

• Scikit-image: A Python image processing library extensively used for colour space transformations, segmentation, geometrical manipulations and many other image processing techniques, working together with the NumPy and SciPy libraries [51].

• Sklearn: Scikit-learn is the most popular open-source library implementing many machine learning algorithms and methods. It includes many popular classifiers, such as SVM and linear regression, and works together with the NumPy and SciPy libraries [52].


• OpenCV: OpenCV is a Python library offering numerous computer vision operations. It provides many built-in computer vision methods focused on solving real-world problems [53].


Chapter 5

Results and Analysis

5.1 Literature Review Results

The literature review, using the inclusion and exclusion criteria to answer RQ1, identified the fusion algorithms proposed in the research domain. The fusion techniques proposed in the literature [30, 35, 36, 38, 41, 43, 44, 45] have shown state-of-the-art results in image tampering detection compared to algorithms based on individual features, and they offer more scope for improving the classification performance of image tampering detection. The SVM machine learning classifier provided promising results on large feature domains and on feature-centred algorithms. However, many of the fusion algorithms classify only one kind of tampering, either copy-move or splicing, which creates the need for a reliable fusion algorithm that detects both types of tampering.

5.2 Experiment Results

The enhanced feature fusion algorithm is validated experimentally on the CASIA V2.0 dataset. Performance metrics such as precision, recall, accuracy and F-score, together with the time taken for image preprocessing and feature extraction, are measured for each algorithm. Comparing the algorithms is the principal task for verifying their validity and effectiveness.

The algorithms' results are reported and compared in terms of their image tamper classification performance. The quantitative performance measures are calculated from the tamper detection classification by passing each image as a parameter. The performance metrics for the comparison, as defined in Section 4.3.1, are discussed and the results are shown.

5.2.1 Algorithm 1

The existing feature fusion algorithm [30] converts the RGB image into a grey-scale image, and the features discussed in Section 4.2.2 are extracted in three categories: SVD features, IQM features and double blurring correlation features. The image pixel correlations are computed from the image, and the pixels are classified based on the ground truth of the tampered image as discussed in Section 4.2.1.

The average time taken to extract the SVD features is 0.83 seconds, and the average time taken to extract the IQM feature category is 0.56 seconds. The average time taken to extract the double blurring correlation from each image is 0.27 seconds.

The SVM used for training the algorithm has a kernel width of two, and a Gaussian blur of kernel size nine is used. The dimensionality of the feature categories is reduced by fusing the features with the PCA technique. The performance metrics of the classification on CASIA V2.0 [15] with stratified five-fold cross-validation are shown in Table 5.1.

          Precision   Recall   F-score   Accuracy (%)
Fold 1    0.788       0.819    0.803     84.31
Fold 2    0.825       0.871    0.847     88.65
Fold 3    0.843       0.834    0.862     86.35
Fold 4    0.763       0.816    0.805     82.83
Fold 5    0.807       0.879    0.832     84.73
Average   0.805       0.861    0.830     85.49

Table 5.1: Performance metrics of Algorithm 1 with stratified 5-fold cross-validation.
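For illustration, the following sketch outlines two of the feature categories of algorithm 1 described above, block-wise SVD features and a double-blurring correlation, using OpenCV and NumPy. The block size, helper names and synthetic input image are assumptions for the example, the IQM category is omitted for brevity, and this is not the exact implementation of [30].

```python
import numpy as np
import cv2

def svd_block_features(gray, block=8):
    """Illustrative SVD features: largest singular value of each non-overlapping block."""
    h, w = gray.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            s = np.linalg.svd(gray[i:i + block, j:j + block].astype(np.float64),
                              compute_uv=False)
            feats.append(s[0])
    return np.asarray(feats)

def double_blur_correlation(gray, ksize=9):
    """Illustrative double-blurring correlation: correlate once- and twice-blurred images."""
    b1 = cv2.GaussianBlur(gray, (ksize, ksize), 0)
    b2 = cv2.GaussianBlur(b1, (ksize, ksize), 0)
    return np.corrcoef(b1.ravel(), b2.ravel())[0, 1]

# Random BGR image as a stand-in for a JPEG read with cv2.imread().
img = np.random.default_rng(0).integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

features = np.concatenate([svd_block_features(gray),
                           [double_blur_correlation(gray)]])
print(features.shape)
```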

5.2.2 Algorithm 2

The local binary pattern (LBP) histograms are generated using the illumination reflectance [10]. The RGB colour space is converted into the YCrCb colour space, and the Y channel undergoes min and max filters with a window size of nine. After this pre-processing, the LBP features of the Y, Cr and Cb channels are extracted, and the LBP histograms are generated. The average time taken to extract the LBP histogram features from each image is 0.76 seconds. The extracted 356-dimensional feature set undergoes dimensionality reduction by PCA and is fed into an SVM classifier with a linear kernel, the degree set to three and gamma set to auto for better performance. The performance metrics of the classification on CASIA V2.0 [15] with stratified five-fold cross-validation are shown in Table 5.2.

          Precision   Recall   F-score   Accuracy (%)
Fold 1    0.897       0.902    0.899     88.71
Fold 2    0.938       0.946    0.941     95.62
Fold 3    0.959       0.912    0.934     89.11
Fold 4    0.902       0.881    0.891     92.24
Fold 5    0.928       0.923    0.925     90.30
Average   0.924       0.912    0.918     91.19

Table 5.2: Performance metrics of Algorithm 2 with stratified 5-fold cross-validation.
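A minimal sketch of the LBP histogram extraction assumed in algorithm 2 is shown below, using OpenCV, SciPy and scikit-image. The way the min- and max-filtered Y channel is combined with the Cr and Cb channels, as well as the LBP parameters, are illustrative assumptions and do not reproduce the exact 356-dimensional feature set.

```python
import numpy as np
import cv2
from scipy.ndimage import minimum_filter, maximum_filter
from skimage.feature import local_binary_pattern

def lbp_histogram(channel, points=8, radius=1):
    """LBP codes of a channel, summarised as a normalised histogram."""
    lbp = local_binary_pattern(channel, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

# Random BGR image as a stand-in for a JPEG read with cv2.imread().
img = np.random.default_rng(0).integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)

# Illumination-style pre-processing of the Y channel with 9x9 min and max filters.
y_min = minimum_filter(y, size=9)
y_max = maximum_filter(y, size=9)

# Concatenate the per-channel LBP histograms into one feature vector.
feature_vector = np.concatenate([lbp_histogram(c) for c in (y_min, y_max, cr, cb)])
print(feature_vector.shape)
```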


5.2.3 Algorithm 3

The proposed enhanced fusion algorithm extracts all the feature categories used in the existing algorithm 1. The LBP histogram feature set is added as an enhancement and is fused with the other feature categories using PCA. Initially, the dataset of authentic and tampered images is randomised, and 30 per cent of the dataset is held out for testing while the remainder is used for training. The SVD, IQM, double blurring correlation and LBP histogram feature sets are extracted after their respective pre-processing steps. All the feature categories are fused using PCA, since the merged feature vector space is significantly large and requires dimensionality reduction. The fused features are fed into the SVM classifier with a linear kernel, gamma set to auto and a kernel degree of nine. The performance metrics of the classification on CASIA V2.0 [15] with stratified five-fold cross-validation are shown in Table 5.3.

          Precision   Recall   F-score   Accuracy (%)
Fold 1    0.917       0.935    0.925     91.25
Fold 2    0.973       0.989    0.980     97.74
Fold 3    0.926       0.943    0.934     94.25
Fold 4    0.954       0.976    0.964     96.84
Fold 5    0.938       0.959    0.948     93.95
Average   0.941       0.962    0.950     94.65

Table 5.3: Performance metrics of Algorithm 3 with stratified 5-fold cross-validation.
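The feature-level fusion step of algorithm 3 can be sketched as follows with scikit-learn: the individual feature categories are concatenated, reduced with PCA and passed to a linear SVM. The feature dimensions, the number of PCA components and the random placeholder data are assumptions for the example, not the values used in the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Placeholder per-image feature categories (dimensions are illustrative).
n_images = 200
rng = np.random.default_rng(1)
svd_feats = rng.normal(size=(n_images, 64))
iqm_feats = rng.normal(size=(n_images, 10))
blur_feats = rng.normal(size=(n_images, 1))
lbp_feats = rng.normal(size=(n_images, 40))
labels = rng.integers(0, 2, size=n_images)        # 1 = tampered, 0 = authentic

# Feature-level fusion: concatenate the categories, then reduce with PCA.
fused = np.hstack([svd_feats, iqm_feats, blur_feats, lbp_feats])
reduced = PCA(n_components=20).fit_transform(fused)

# Train a linear SVM on roughly 70 per cent and test on the remaining 30 per cent.
clf = SVC(kernel="linear", gamma="auto")
clf.fit(reduced[:140], labels[:140])
print("Test accuracy:", clf.score(reduced[140:], labels[140:]))
```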

The averages of the performance metrics shown in Table 5.4 indicate that algorithm 3 performs better than the other algorithms.

The Friedman test shows that the p-value is less than the significance level of 0.05 for all the performance metrics. Hence, the null hypothesis proposed in Section 4.3.2 is rejected for every performance metric, which shows that there is a significant difference between the algorithms.

              Precision   Recall   F-score   Accuracy (%)
Algorithm 1   0.81        0.86     0.83      85.49
Algorithm 2   0.92        0.91     0.91      91.19
Algorithm 3   0.94        0.96     0.95      94.65

Table 5.4: Average performance metrics of the algorithms.
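The Friedman test reported above can be reproduced on the per-fold accuracies from Tables 5.1–5.3 with SciPy, as sketched below; the same call applies to the other performance metrics.

```python
from scipy.stats import friedmanchisquare

# Per-fold accuracies taken from Tables 5.1-5.3.
acc_alg1 = [84.31, 88.65, 86.35, 82.83, 84.73]
acc_alg2 = [88.71, 95.62, 89.11, 92.24, 90.30]
acc_alg3 = [91.25, 97.74, 94.25, 96.84, 93.95]

stat, p_value = friedmanchisquare(acc_alg1, acc_alg2, acc_alg3)
print(f"Friedman statistic = {stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the algorithms differ significantly on this metric.")
```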


5.3 Experiment Analysis

The comparison between the algorithms is analysed based on their individual performance results. The results show a significant difference between the algorithms under the same test conditions. The performance metrics precision, recall, F-score and accuracy for the three algorithms are shown in Fig. 5.1.

Figure 5.1: Average performance metrics comparison of algorithms

The results of the three algorithms show that the enhanced feature fusion algorithm 3 performs better than the others. The precision increases from 0.81 for algorithm 1 to 0.94 for algorithm 3. The recall of algorithm 3 is the best at 0.96. The F-score reflects the algorithm's overall classification ability, and algorithm 3 shows the best F-score of 0.95. Finally, algorithm 3 achieves the highest accuracy of 94.65%. The accuracy difference between algorithm 1 and algorithm 3 is 9.16 percentage points, and between algorithm 2 and algorithm 3 it is 3.46 percentage points. These results show that the enhanced feature fusion algorithm achieves state-of-the-art performance for detecting tampering in a JPEG image.
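A comparison plot such as Figure 5.1 can be reproduced from the averages in Table 5.4 with Matplotlib, as in the following sketch; the accuracy values are rescaled to the [0, 1] range so that all four metrics share one axis.

```python
import numpy as np
import matplotlib.pyplot as plt

# Averages from Table 5.4 (accuracy divided by 100 for a common axis).
metrics = ["Precision", "Recall", "F-score", "Accuracy"]
alg1 = [0.81, 0.86, 0.83, 0.8549]
alg2 = [0.92, 0.91, 0.91, 0.9119]
alg3 = [0.94, 0.96, 0.95, 0.9465]

x = np.arange(len(metrics))
width = 0.25
plt.bar(x - width, alg1, width, label="Algorithm 1")
plt.bar(x, alg2, width, label="Algorithm 2")
plt.bar(x + width, alg3, width, label="Algorithm 3")
plt.xticks(x, metrics)
plt.ylabel("Score")
plt.legend()
plt.show()
```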

The receiver operating characteristic curve provides a way to analyse the algorithms over varying thresholds of the true positive rate, also called sensitivity, and the false positive rate.

The analysis of algorithm 1 over varying thresholds of the true positive rate (sensitivity) and the false positive rate is shown in Fig. 5.2. Algorithm 1 achieved an AUC (area under the curve) of 0.8562, showing its ability to classify the tampered JPEG images.


Figure 5.2: ROC curve for Algorithm 1

The analysis of algorithm 2 over varying thresholds of the true positive rate (sensitivity) and the false positive rate is shown in Fig. 5.3. Algorithm 2 achieved an AUC (area under the curve) of 0.9175, showing its ability to classify the tampered JPEG images.

Figure 5.3: ROC curve for Algorithm 2


The analysis of algorithm 3 over varying thresholds of the true positive rate (sensitivity) and the false positive rate is shown in Fig. 5.4. Algorithm 3 achieved an AUC (area under the curve) of 0.9496, showing its ability to classify the tampered JPEG images. The proposed enhanced feature fusion algorithm achieves the highest AUC in the ROC curve analysis.

Figure 5.4: ROC curve for Algorithm 3

References
