
Mälardalen University Press Licentiate Theses No. 247

FIXED POINTS, FRACTALS, ITERATED FUNCTION SYSTEMS

AND GENERALIZED SUPPORT VECTOR MACHINES

Xiaomin Qi

2016

School of Education, Culture and Communication



Copyright © Xiaomin Qi, 2016
ISBN 978-91-7485-302-5
ISSN 1651-9256

Printed by Mälardalen University, Västerås, Sweden

Abstract

In this thesis, fixed point theory is used to construct fractal type sets and to solve a data classification problem. The fixed point method, which is a beautiful mixture of analysis, topology, and geometry, has been revealed as a very powerful and important tool in the study of nonlinear phenomena. The existence of fixed points is therefore of paramount importance in several areas of mathematics and other sciences. In particular, fixed point techniques have been applied in such diverse fields as biology, chemistry, economics, engineering, game theory and physics.

In Chapter 2 of this thesis it is demonstrated how to define and construct fractal type sets with the help of iterations of a finite family of generalized F-contraction mappings, a class of mappings more general than contraction mappings, defined in the context of b-metric spaces. This leads to a variety of results for iterated function systems satisfying different sets of contractive conditions. The results unify, generalize and extend various results in the existing literature. In Chapter 3, the theory of support vector machines for linear and nonlinear classification of data and the notion of the generalized support vector machine are considered. It is also shown that the problem of the generalized support vector machine can be considered in the framework of generalized variational inequalities, and results on the existence of solutions are established.



Acknowledgements

First and foremost, I would like to thank my supervisor Professor Sergei Silvestrov who accepted to take me up as his student under the Erasmus Mundus project FUSION. Thank you Sergei for introducing me to this area of research that I have come to love and for the wonderful discussions that we had. Thank you for your patient guidance, enthusiastic encouragement and useful critiques during the development of this work. I would like to express my great appreciation to Dr. Talat Nazir for his valuable and constructive suggestions during the numerous academic discussions we had. I learned a lot during these discussions that I will take with me wherever I go. My grateful thanks are also extended to my supervisor from China, Professor Dongyun Wang, who also helped me during the period when I was studying in Sweden.

I would like to express my deep gratitude to my family for the support they have offered me. It has been comforting to know that I could count on your support throughout all this time. I would also like to thank, in a special way, my parents who gave me my life and raised me up. I thank my brother and sister for the encouragement.

I would like to express great appreciation to the Erasmus Mundus project FUSION for the financial support of my travels and stay in Sweden.

I would like to thank the staff at the School of Education, Culture and Communication, Mälardalen University, for providing a wonderful academic and research environment in Mathematics and Applied Mathematics. Special thanks to various people who have in one way or the other made my stay in Sweden quite memorable: Dr. Ying Ni, Asgher Ali, Prashant G Metri, Emanuel Guariglia and other students in Mathematics and Applied Mathematics, Mälardalen University, for the nice moments we shared.

Västerås, December, 2016



List of Papers

Chapters 2 and 3 of this thesis are based on the following papers:

Paper A. T. Nazir, S. Silvestrov, X. Qi, Fractals of Generalized F-Hutchinson Operator in b-Metric Spaces, Journal of Operators, vol. 2016, Article ID 5250394, 9 pages, 2016. doi:10.1155/2016/5250394

Paper B. T. Nazir, X. Qi, S. Silvestrov, Linear Classification of data with Support Vector Machines and Generalized Support Vector Machines, In: Engineering Mathematics II: Algebraic, Stochastic and Analysis Structures for Networks, Data Classification and Optimization, Springer Proceedings in Mathematics and Statistics, 179, Springer, 2016.

Paper C. T. Nazir, X. Qi, S. Silvestrov, Linear and Nonlinear Classifiers of Data with Support Vector Machines and Generalized Support Vector Machines, In: Engineering Mathematics II: Algebraic, Stochastic and Analysis Structures for Networks, Data Classification and Optimization, Springer Proceedings in Mathematics and Statistics, 179, Springer, 2016.

Paper D. X. Qi, S. Silvestrov, T. Nazir, Data classification with Support Vector Machines and Generalized Support Vector Machines, To appear in Proceedings of INCPAA 2016, 11th International Conference on Mathematical Problems in Engineering, Aerospace, and Sciences, La Rochelle, France, 05-08 July 2016.



Contents

1 Introduction
1.1 Fixed point method and F-contractions
1.1.1 Fixed point theory in b-metric spaces
1.1.2 Fractals construction based on fixed point theory in b-metric space
1.1.3 Support vector machines and generalized support vector machines
1.2 Summary of the thesis
1.2.1 Fractals of generalized F-Hutchinson operator in b-metric spaces
1.2.2 Data classification with Support Vector Machines and Generalized Support Vector Machines

2 Fractals of generalized F-Hutchinson operator in b-metric spaces
2.1 Introduction
2.2 Definitions and preliminaries
2.3 Main Results


3 Data classification with Support Vector Machines and Generalized Support Vector Machines
3.1 Introduction
3.2 Support vector machine
3.2.1 Linear classification
3.2.2 Non-linear classification
3.3 Generalized Support Vector Machines
3.4 Examples and consequence
3.5 Conclusion
3.6 Acknowledgement

Chapter 1

Introduction

The main content of this thesis is the description and application of fixed point theory. First we present applications of fixed point theory for the construction of fractal type sets using iterated function systems on a b-metric space. Next, we consider the problem of the generalized support vector machine, which is used to classify both linearly and non-linearly separable data, and the solution of the generalized variational inequality problem, where fixed point theory also plays a vital role. We give the details in the following sections and the corresponding chapters.

1.1 Fixed point method and F-contractions

A point is often called a fixed point when it remains invariant, irrespective of the type of transformation it undergoes. For a function f that has a set X as both domain and range, a fixed point is a point x ∈ X for which f(x) = x. The Banach contraction principle [5] is of paramount importance in metrical fixed point theory, with a wide range of applications, including iterative methods for solving linear, non-linear, differential, integral, and difference equations.

Definition 1.1.1. Let X be a non-empty set. A mapping d : X × X → R is said to be a metric (or distance function) if and only if the following conditions hold:

(M1) d(x, y) ≥ 0 for all x, y ∈ X;
(M2) d(x, y) = 0 if and only if x = y;
(M3) d(x, y) = d(y, x) for all x, y ∈ X;
(M4) d(x, y) ≤ d(x, z) + d(z, y) for all x, y, z ∈ X.


If d is a metric for X, then the ordered pair (X, d) is called a metric space and d(x, y) is called the distance between x and y.

Let (X, d) be a metric space and let A be a non-empty subset of X. Then the diameter of A, denoted by δ(A), is defined by δ(A) = sup{d(x, y) : x, y ∈ A}, that is, the diameter of A is the supremum of the set of all distances between points of A. The distance between a point p ∈ X and a subset A of the metric space X is defined by d(p, A) = inf{d(p, x) : x ∈ A}. It is evident that d(p, A) = 0 if p ∈ A. The distance between two non-empty subsets A and B of a metric space X is denoted and defined as d(A, B) = inf{d(x, y) : x ∈ A, y ∈ B}.

Let (X, d) be a metric space and A be any subset of X. A point x ∈ X is an interior point of A if there exists r > 0 such that x ∈ Sr(x) ⊂ A, where Sr(x) = {p ∈ X : d(p, x) < r} is an open ball around x of radius r. A point x ∈ X is an exterior point of A if there exists an open ball Sr(x) such that Sr(x) ⊂ Ac, that is, Sr(x) ∩ A = ∅. A point x ∈ X is said to be a boundary point of A if x is neither an interior point of A nor an exterior point of A. The boundary of A is denoted by ∂A.

A sequence of elements x1, x2, ..., xn, ... in a metric space X is said to converge to an element x ∈ X if the sequence of real numbers d(xn, x) converges to zero as n → ∞. A sequence of points pn in X is said to be a Cauchy sequence in X if and only if for every ε > 0 there exists a positive integer n(ε) such that m, n ≥ n(ε) implies d(pm, pn) < ε. It is clear that every convergent sequence in a metric space is a Cauchy sequence, but the converse does not need to be true. A metric space (X, d) is said to be complete if and only if every Cauchy sequence in X converges to a point in X.

Definition 1.1.2. A self-mapping f of a metric space (X, d) is said to be a Lipschitz mapping if for all x, y ∈ X and some α ≥ 0, it holds that
\[
d(f(x), f(y)) \le \alpha\, d(x, y). \tag{1.1}
\]
The mapping f is said to be a contraction if (1.1) holds for some α ∈ [0, 1), and non-expansive if α = 1 in (1.1). A contraction mapping is always continuous. Banach's fixed point theorem provides sufficient conditions for the existence and uniqueness of the fixed point.

Theorem 1.1.1. Let X be a complete metric space with metric d and f : X → X be a contraction, that is, there exists α < 1 such that
\[
d(f(x), f(y)) \le \alpha\, d(x, y), \quad \text{for all } x, y \in X. \tag{1.2}
\]
Then f has a fixed point, in fact exactly one.
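The proof of Theorem 1.1.1 is constructive: the fixed point is obtained as the limit of the Picard iterates x, f(x), f²(x), .... A minimal numerical sketch of this iteration follows; the example map f(x) = x/2 + 1 is our own choice for illustration, not taken from the thesis.

def banach_iterate(f, x0, tol=1e-12, max_iter=1000):
    # Picard iteration: repeatedly apply f until successive iterates are closer than tol.
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# f(x) = x/2 + 1 is a contraction on R with alpha = 1/2; its unique fixed point is x = 2.
print(banach_iterate(lambda x: x / 2 + 1, x0=0.0))  # prints approximately 2.0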


Wardowski [39] introduced a new contraction called F-contraction and proved a fixed point result as an interesting generalization of the Banach contraction principle. Wardowski also proved that in a complete metric space (X, d), every F-contractive self-map has a unique fixed point in X, and for every x0 ∈ X the sequence of iterates {x0, f x0, f² x0, ...} converges to the fixed point of f. Consistent with [39], the following definition and examples are needed.

Let ℱ be the collection of all continuous mappings F : R+ → R that satisfy the following conditions:

(F1) F is strictly increasing, that is, for all α, β ∈ R+, α < β implies F(α) < F(β);

(F2) for each sequence {αn} of positive real numbers, lim_{n→∞} αn = 0 and lim_{n→∞} F(αn) = −∞ are equivalent;

(F3) there exists k ∈ (0, 1) such that lim_{α→0+} α^k F(α) = 0.

Definition 1.1.3. [39] Let (X, d) be a metric space. A self-mapping f on X is called an F-contraction if there exist F ∈ ℱ and τ > 0 such that for any x, y ∈ X,
\[
\tau + F(d(fx, fy)) \le F(d(x, y)), \tag{1.3}
\]
whenever d(fx, fy) > 0.

From (F1) and (1.3), we conclude that
\[
d(fx, fy) < d(x, y), \quad \text{for all } x, y \in X \text{ with } fx \ne fy.
\]
Indeed, from (1.3), for all x, y ∈ X with d(fx, fy) > 0, we have F(d(fx, fy)) < F(d(x, y)). Since F is strictly increasing (F1), it follows that
\[
d(fx, fy) < d(x, y), \quad \text{for all } x, y \in X \text{ whenever } fx \ne fy.
\]

Thus, every F-contraction mapping is nonexpansive, and in particular, every F-contraction mapping is continuous.

The following examples show that there is a variety of contractive conditions corresponding to different choices of elements in ℱ.


Example 1.1.1. Let F : R+ → R be defined by F(λ) = ln(λ) for λ > 0. Then F satisfies (F1) to (F3). A mapping f : X → X satisfying (1.3) is a contraction with contractive factor e^{−τ}, that is,
\[
d(fx, fy) \le e^{-\tau} d(x, y), \quad \text{for all } x, y \in X,\ fx \ne fy. \tag{1.4}
\]
It is clear that for x, y ∈ X such that fx = fy the inequality d(fx, fy) ≤ e^{−τ} d(x, y) also holds.
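For clarity, condition (1.4) follows directly from (1.3) with F = ln: for d(fx, fy) > 0,
\[
\tau + \ln d(fx, fy) \le \ln d(x, y) \iff \ln\frac{d(fx, fy)}{d(x, y)} \le -\tau \iff d(fx, fy) \le e^{-\tau} d(x, y).
\]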

Example 1.1.2. If we take F(λ) = ln(λ) + λ, λ > 0, then F satisfies (F1) to (F3) and (1.3) takes the form
\[
\frac{d(fx, fy)}{d(x, y)}\, e^{\,d(fx, fy) - d(x, y)} \le e^{-\tau}, \quad \text{for all } x, y \in X,\ fx \ne fy. \tag{1.5}
\]

Example 1.1.3. Consider F(λ) = −1/√λ for λ > 0; then F ∈ ℱ. In this case, an F-contraction mapping f satisfies
\[
d(fx, fy) \le \frac{1}{\big(1 + \tau\sqrt{d(x, y)}\big)^{2}}\, d(x, y), \quad \text{for all } x, y \in X,\ fx \ne fy. \tag{1.6}
\]
Note that the above is a special case of a non-linear contraction of the type d(fx, fy) ≤ ψ(d(x, y)) d(x, y) for all x, y ∈ X, fx ≠ fy. For details, see [20, 34].

Example 1.1.4. Let F(λ) = ln(λ² + λ), λ > 0. Then F satisfies (F1)-(F3) and the mapping f satisfies the condition
\[
\frac{d(fx, fy)\,\big(d(fx, fy) + 1\big)}{d(x, y)\,\big(d(x, y) + 1\big)} \le e^{-\tau}, \quad \text{for all } x, y \in X,\ fx \ne fy. \tag{1.7}
\]
In all of the above examples, conditions (1.4)-(1.7) are satisfied for any x, y ∈ X with fx = fy.

Theorem 1.1.2. [39] Let (X, d) be a complete metric space and f : X → X an F-contraction mapping. Then f has a unique fixed point in X, and for every x0 ∈ X the sequence of iterates {x0, f x0, f² x0, ...} converges to the fixed point of f.
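As a concrete instance (our own illustration, not taken from the thesis), consider X = R with the usual metric and f(x) = x/2. Then d(fx, fy) = |x − y|/2, so with F = ln and τ = ln 2 condition (1.3) reads
\[
\ln 2 + \ln\frac{|x - y|}{2} \le \ln |x - y|,
\]
which holds with equality; hence f is an F-contraction, and Theorem 1.1.2 yields its unique fixed point x = 0.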

1.1.1 Fixed point theory in b-metric spaces

The Banach contraction principle has been extended either by generalizing the domain of the mappings [7, 8, 22, 24, 25, 27, 38] or by extending the contractive condition on the mappings [20, 21, 29, 32, 34]. Nadler [33] was the first to combine the ideas of multivalued mappings and contractions, and hence initiated the study of metric fixed point theory of multivalued operators; see also [6, 9, 37]. The concept of a metric has been generalized further in more than one way. The concept of a b-metric space was introduced by Czerwik in [16]. Since then, several papers have been published on the fixed point theory of various classes of single-valued and multi-valued operators in b-metric spaces [10, 11, 12, 13, 14, 15, 16, 17, 18, 31, 35].

Definition 1.1.4. Let X be a non-empty set and b ≥ 1 a given real number. A function d : X × X → R+ is said to be a b-metric if for any x, y, z ∈ X, the following conditions hold:

(b1) d(x, y) = 0 if and only if x = y;
(b2) d(x, y) = d(y, x);
(b3) d(x, y) ≤ b (d(x, z) + d(z, y)).

The pair (X, d) is called a b-metric space with parameter b ≥ 1. If b = 1, then a b-metric space is a metric space, but the converse does not hold in general [10, 14, 16].

The class of b-metric spaces is more general than the class of metric spaces. Some known examples of b-metrics, which show that a b-metric space is a real generalization of a metric space, are the following.

Example 1.1.5. [28] The space $\ell_p$ ($0 < p < 1$),
\[
\ell_p = \Big\{ (x_n) \subset \mathbb{R} : \sum_{n=1}^{\infty} |x_n|^p < \infty \Big\},
\]
together with the function $d : \ell_p \times \ell_p \to \mathbb{R}$ given by
\[
d(x, y) = \Big( \sum_{n=1}^{\infty} |x_n - y_n|^p \Big)^{1/p},
\]
where $x = (x_n), y = (y_n) \in \ell_p$, is a b-metric space with parameter $b = 2^{1/p} > 1$.

Example 1.1.6. [28] The space $L_p$ ($0 < p < 1$) of all real functions $x(t)$, $t \in [0, 1]$, such that $\int_0^1 |x(t)|^p\,dt < \infty$,


is a b-metric space if we take
\[
d(x, y) = \Big( \int_0^1 |x(t) - y(t)|^p \, dt \Big)^{1/p}
\]
for each $x, y \in L_p$.

Example 1.1.7. [36] The set of real numbers together with the functional
\[
d(x, y) = |x - y|^2, \quad \text{for all } x, y \in \mathbb{R},
\]
is a b-metric space with constant b = 2. Note also that d is not a metric on R.
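The relaxed triangle inequality (b3) with b = 2 follows from the elementary bound (a + b)² ≤ 2a² + 2b²: for all x, y, z ∈ R,
\[
|x - y|^2 \le \big(|x - z| + |z - y|\big)^2 \le 2|x - z|^2 + 2|z - y|^2 = 2\,\big(d(x, z) + d(z, y)\big),
\]
while the ordinary triangle inequality fails, for example, for x = 0, y = 2, z = 1: d(0, 2) = 4 > d(0, 1) + d(1, 2) = 2.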

Definition 1.1.5. [13] Let (X, d) be a b-metric space. Then a subset C ⊂ X is called

(i) closed if and only if for each sequence {xn} in C which converges to an element x, we have x ∈ C (that is, C̄ = C);

(ii) compact if and only if for every sequence of elements of C there exists a subsequence that converges to an element of C;

(iii) bounded if and only if δ(C) := sup{d(x, y) : x, y ∈ C} < ∞.

Let $\mathcal{H}(X)$ denote the set of all non-empty compact subsets of X. For $A, B \in \mathcal{H}(X)$, let
\[
H(A, B) = \max\Big\{ \sup_{b \in B} d(b, A),\ \sup_{a \in A} d(a, B) \Big\},
\]
where d(x, B) = inf{d(x, y) : y ∈ B} is the distance of a point x from the set B. The mapping H is said to be the Pompeiu-Hausdorff metric induced by d. If (X, d) is a complete b-metric space, then $(\mathcal{H}(X), H)$ is also a complete b-metric space.

For the sake of completeness, we state the following lemmas, which hold in b-metric spaces [26].

Lemma 1.1.1. Let (X, d) be a b-metric space. For all $A, B, C, D \in \mathcal{H}(X)$, the following properties hold:

(i) if $B \subseteq C$, then $\sup_{a \in A} d(a, C) \le \sup_{a \in A} d(a, B)$;

(ii) $\sup_{x \in A \cup B} d(x, C) = \max\big\{ \sup_{a \in A} d(a, C),\ \sup_{b \in B} d(b, C) \big\}$;


(iii) $H(A \cup B, C \cup D) \le \max\{H(A, C), H(B, D)\}$.

Lemma 1.1.2. Let (X, d) be a b-metric space and CB(X) denote the set of all non-empty closed and bounded subsets of X. For $x, y \in X$ and $A, B \in CB(X)$, the following statements hold:

1. $(CB(X), H)$ is a b-metric space;
2. $d(x, B) \le H(A, B)$ for all $x \in A$;
3. $d(x, A) \le b\,(d(x, y) + d(y, A))$;
4. for $h > 1$ and $\acute{a} \in A$, there is a $\acute{b} \in B$ such that $d(\acute{a}, \acute{b}) \le h\,H(A, B)$;
5. for every $h > 0$ and $\acute{a} \in A$, there is a $\acute{b} \in B$ such that $d(\acute{a}, \acute{b}) \le H(A, B) + h$;
6. for every $\lambda > 0$ and $\tilde{a} \in A$, there is a $\tilde{b} \in B$ such that $d(\tilde{a}, \tilde{b}) \le \lambda$;
7. for every $\lambda > 0$ and $\tilde{a} \in A$, there is a $\tilde{b} \in B$ such that $d(\tilde{a}, \tilde{b}) \le \lambda$ implies $H(A, B) \le \lambda$;
8. $d(x, A) = 0$ if and only if $x \in \bar{A} = A$;
9. for $\{x_n\} \subseteq X$,
\[
d(x_0, x_n) \le b\,d(x_0, x_1) + \cdots + b^{n-1} d(x_{n-2}, x_{n-1}) + b^{n-1} d(x_{n-1}, x_n).
\]

1.1.2 Fractals construction based on fixed point theory in b-metric space

Iterated function systems provide a method of constructing fractals and are based on the mathematical foundations laid by Hutchinson [23]. He showed that the Hutchinson operator constructed with the help of a finite system of contraction mappings defined on the Euclidean space $\mathbb{R}^n$ has a closed and bounded subset of $\mathbb{R}^n$ as its fixed point, called the attractor of the iterated function system (see also [19]).

Let (X, d) be a b-metric space and {fn : n = 1, 2, ..., N} a finite family of generalized F-contraction self-mappings on X. Define $T : \mathcal{H}(X) \to \mathcal{H}(X)$ by
\[
T(A) = f_1(A) \cup f_2(A) \cup \cdots \cup f_N(A) = \bigcup_{n=1}^{N} f_n(A), \quad \text{for each } A \in \mathcal{H}(X).
\]
Then T is a generalized F-contraction on $\mathcal{H}(X)$.
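To illustrate the iteration behind this construction numerically, the following minimal sketch applies a classical Hutchinson operator built from three ordinary contractions of the plane; the affine maps below are our own illustrative choice (their attractor is the Sierpinski triangle) and are not taken from the thesis.

import numpy as np

# Three contractions f1, f2, f3 of the plane; their attractor is the Sierpinski triangle.
maps = [lambda p, c=np.array(c): 0.5 * p + c
        for c in [(0.0, 0.0), (0.5, 0.0), (0.25, 0.5)]]

def hutchinson(points):
    # One application of T(A) = f1(A) U f2(A) U f3(A) to a finite set of points.
    return np.concatenate([f(points) for f in maps])

# Starting from a single point, the iterates T^k(A0) approximate the attractor.
A = np.array([[0.0, 0.0]])
for _ in range(8):
    A = hutchinson(A)
print(A.shape)  # (3**8, 2) points approximating the Sierpinski triangle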


Let X be a complete b-metric space. If $f_n : X \to X$, n = 1, 2, ..., N, are generalized F-contraction mappings, then $(X; f_1, f_2, ..., f_N)$ is called a generalized F-contractive iterated function system (IFS). Thus a generalized F-contractive iterated function system consists of a complete b-metric space and a finite family of generalized F-contraction mappings on X. Hence we get the following result.

Let (X, d) be a complete b-metric space and $\{X; f_n, n = 1, 2, \ldots, k\}$ a generalized F-contractive iterated function system. Then the following properties hold:

(a) The mapping $T : \mathcal{H}(X) \to \mathcal{H}(X)$ defined by
\[
T(A) = \bigcup_{n=1}^{k} f_n(A), \quad \text{for all } A \in \mathcal{H}(X),
\]
is a Ćirić type generalized F-Hutchinson operator.

(b) The operator T has a unique fixed point $U \in \mathcal{H}(X)$, that is, $U = T(U) = \bigcup_{n=1}^{k} f_n(U)$.

(c) For any initial set $A_0 \in \mathcal{H}(X)$, the sequence of compact sets $A_0, T(A_0), T^2(A_0), \ldots$ converges to a fixed point of T.

1.1.3 Support vector machines and generalized support vector machines

Proposed by Vapnik et al. (1995), support vector machines (SVM) [40, 41, 42, 43, 44] provide, besides multilayer perceptrons and radial-basis function networks, another approach to machine learning settings such as pattern classification, object recognition, text classification or regression estimation. Ongoing research continuously reveals how SVM are able to outperform established machine learning techniques such as neural networks, decision trees or k-nearest neighbours, since they construct models that are complex enough to deal with real-world applications while remaining simple enough to be analysed mathematically.

Let $D = \{(x_i, y_i)\}$, $i = 1, \ldots, l$, $x_i = (x_1, \ldots, x_n)$, $y_i \in \{-1, +1\}$, be a linearly separable training set. Then there exists a hyperplane of the form
\[
w \cdot x + b = 0 \tag{1.8}
\]
separating the positive from the negative training examples, such that
\[
w \cdot x_i + b \ge 0 \ \text{ for } y_i = +1, \qquad w \cdot x_i + b \le 0 \ \text{ for } y_i = -1,
\]


where w is the normal to the hyperplane and b determines the perpendicular distance of the hyperplane from the origin. A binary classification is frequently performed by using a function $f : \mathbb{R}^n \to \mathbb{R}$ in the following way:
\[
f(x) = w \cdot x + b = \sum_{i=1}^{n} w_i x_i + b.
\]
The decision rule is given by sgn(f(x)): the input x is assigned to the positive class if f(x) ≥ 0 and otherwise to the negative class.

To solve this, let $d_+$ ($d_-$) be the shortest distance from the separating hyperplane to a positive (negative) training example. The maximal margin algorithm simply looks for the hyperplane with the largest separating margin. This can be formulated by the following constraints for all $x \in D$:
\[
y_k (w \cdot x_k + b) \ge 1, \quad k = 1, \ldots, l. \tag{1.9}
\]
Thus we require the distance of every data point from the hyperplane to be greater than a certain value, and we take this value to be +1 in terms of the unit vector.

Now consider all data points $x \in D$ for which equality in (1.9) holds. The positive such points lie on the hyperplane $H_1 : w \cdot x + b = +1$ with normal w and perpendicular distance from the origin $|1 - b|/\|w\|$. Similarly, the negative such points lie on the hyperplane $H_2 : w \cdot x + b = -1$ with normal w and perpendicular distance from the origin $|-1 - b|/\|w\|$. Hence, $d_+ = d_- = 1/\|w\|$, implying a margin of $2/\|w\|$. Figure 1 visualises these findings.

Maximising the margin $2/\|w\|$ subject to the constraints of (1.9) would yield the solution for the optimal separating hyperplane and would provide the maximum possible separation between positive and negative training examples. To solve the maximisation problem, we transform it into a minimisation problem, namely a quadratic programming problem with inequality constraints:
\[
\min \frac{1}{2}\|w\|^2 \quad \text{such that} \quad y_k (w \cdot x_k + b) \ge 1, \ k = 1, \ldots, l,
\]
for all $x \in D$.

By using the Lagrange optimisation method, w is obtained as
\[
w = \sum_i \alpha_i y_i x_i, \tag{1.10}
\]
where w is a linear combination of the support vectors in the training data with Lagrange multipliers $\alpha_i > 0$. Thus, the decision function becomes
\[
f(x) = \sum_i \alpha_i y_i \, (x_i \cdot x) + b. \tag{1.11}
\]
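As a concrete illustration (our own sketch, not part of the thesis), the quadratic program above can be solved with an off-the-shelf SVM solver; the hard-margin case is approximated by a large penalty parameter C, and the toy data set below is an assumption chosen to be linearly separable.

import numpy as np
from sklearn.svm import SVC

# A small linearly separable training set D = {(x_i, y_i)} with labels in {-1, +1}.
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 3.5],
              [0.0, 0.0], [1.0, 0.5], [0.5, 1.5]])
y = np.array([+1, +1, +1, -1, -1, -1])

# A very large C approximates the hard-margin problem min (1/2)||w||^2 s.t. y_k (w . x_k + b) >= 1.
clf = SVC(kernel="linear", C=1e6).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

print("w =", w, ", b =", b)
print("margin 2/||w|| =", 2.0 / np.linalg.norm(w))
print("decision rule sgn(f(x)):", np.sign(clf.decision_function(X)))  # matches y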


Figure 1: Data separating hyperplane with maximum margin

For non-linear data, a mapping function Φ is used to transform data in the input space X to data in a feature space, in which the linear machine can be used. So the decision function is
\[
f(x) = \langle w, \Phi(x) \rangle + b. \tag{1.12}
\]
The vector w is a linear combination of the support vectors in the feature space and can be written as

\[
w = \sum_i \alpha_i y_i \Phi(x_i), \tag{1.13}
\]

where each $\alpha_i$ is a Lagrange multiplier of the support vectors. Thus, the decision function (1.12) becomes
\[
f(x) = \operatorname{sgn}\Big( \sum_i \alpha_i y_i K(x_i, x) + b \Big), \tag{1.14}
\]
where $K(x_i, x)$ is the kernel function defined as
\[
K(x_i, x) = \Phi(x_i) \cdot \Phi(x),
\]
which directly calculates the value of the dot product of the mapped data points in some feature space. The advantage of such a kernel function is that the complexity of the optimization problem remains dependent only on the dimension of the


input space and not of the feature space. Therefore, it is possible to operate in a theoretical feature space of infinite dimension.
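As an illustration (our own sketch, not part of the thesis), the kernel trick can be exercised with a radial-basis-function kernel, which corresponds to a feature map Φ into an infinite-dimensional space that is never computed explicitly; the synthetic data set below is an assumption chosen to be non-linearly separable.

import numpy as np
from sklearn.svm import SVC

# Synthetic data that is not linearly separable: class +1 inside the unit circle, class -1 outside.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(200, 2))
y = np.where(np.linalg.norm(X, axis=1) < 1.0, 1, -1)

# The RBF kernel K(x_i, x) = exp(-gamma ||x_i - x||^2) plays the role of Phi(x_i) . Phi(x) in (1.14).
clf = SVC(kernel="rbf", gamma=1.0, C=10.0).fit(X, y)

print("training accuracy:", clf.score(X, y))
print("prediction at the origin:", clf.predict([[0.0, 0.0]]))  # expected: [1]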

Let us consider a new control function $F : \mathbb{R}^p \to \mathbb{R}^p$ defined as
\[
F(x) = W \Phi(x) + B, \tag{1.15}
\]
where $W \in \mathbb{R}^{p \times p}$ and $B \in \mathbb{R}^p$ are parameters and p is the dimension of the feature space. In addition, W contains the $w_i$ as rows, where each $w_i$ is a linear combination of the support vectors in the feature space and can be written as
\[
w_i = \sum_j \alpha_j^{(i)} \Phi(x_j), \tag{1.16}
\]

where Φ is a linear or non-linear mapping that transforms the data in the input space X to data in the feature space.

Here we define a map $G : \mathbb{R}^p \to \mathbb{R}^p_+$ by
\[
G(w_i) = (\|w_i\|, \|w_i\|, \ldots, \|w_i\|) \quad \text{for } i = 1, 2, \ldots, p, \tag{1.17}
\]
where $w_i$ is the i-th row of $W \in \mathbb{R}^{p \times p}$, $i = 1, 2, \ldots, p$. Now, the programming problem is to find $w_i \in \mathbb{R}^p$ that satisfies
\[
\min_{w_i \in W} G(w_i) \quad \text{such that} \quad \eta \ge 1, \tag{1.18}
\]
where η is the constraint
\[
\eta = y_k \big( W \Phi(x_j) + B \big), \tag{1.19}
\]
where $y_k \in \{(-1, -1, \ldots, -1), (1, 1, \ldots, 1)\}$ is a p-dimensional vector. We call this problem the Generalized Support Vector Machine (GSVM). We provide sufficient conditions under which a solution of the GSVM exists.

1.2 Summary of the thesis

Chapter 2 corresponds to [1] and Chapter 3 to [2], [3] and [4]; their contents are summarized below.

1.2.1 Fractals of generalized F-Hutchinson operator in b-metric spaces

We construct the fractal set of an iterated function system, a certain finite collection of mappings defined on a b-metric space, which induces a compact-valued mapping defined on the family of compact subsets of the b-metric space.


We prove that the Hutchinson operator defined with the help of a finite family of generalized F-contraction mappings on a complete b-metric space is itself a generalized F-contraction mapping on the family of compact subsets of X.

Consequently, we obtain a variety of results for iterated function systems satisfying different sets of contractive conditions.

Let (X, d) be a complete b-metric space and $\{X; f_n, n = 1, 2, \ldots, k\}$ a generalized F-contractive iterated function system. Then the following statements hold:

(a) The mapping $T : \mathcal{H}(X) \to \mathcal{H}(X)$ defined by
\[
T(A) = \bigcup_{n=1}^{k} f_n(A), \quad \text{for all } A \in \mathcal{H}(X),
\]
is a Ćirić type generalized F-Hutchinson operator.

(b) The operator T has a unique fixed point $U \in \mathcal{H}(X)$, that is, $U = T(U) = \bigcup_{n=1}^{k} f_n(U)$.

(c) For any initial set $A_0 \in \mathcal{H}(X)$, the sequence of compact sets $A_0, T(A_0), T^2(A_0), \ldots$ converges to a fixed point of T.

1.2.2 Data classification with Support Vector Machines and Generalized Support Vector Machines

Here we introduce the foundation of SVM, which are known as maximum margin classifiers and are popular for problems of classification and regression. To maximise the margin between the two classes of data, the objective function in mathematical form can be written as
\[
\max \frac{1}{\|w\|}.
\]
Of course, there are some constraints for the optimization problem. According to the definition of the margin, we have $y_k (w \cdot x_k + b) \ge 1$, $k = 1, \ldots, l$. We rewrite the equivalent formulation of the objective function with the constraints as
\[
\min \frac{1}{2}\|w\|^2 \quad \text{such that} \quad y_k (w \cdot x_k + b) \ge 1, \ k = 1, \ldots, l.
\]
So, solving the SVM problem is equivalent to solving a quadratic programming problem with inequality constraints.

For the non-linearly separable problem, in order to use SVM, a kernel function is considered. The kernel K has an associated feature mapping Φ, and it takes two inputs and gives their similarity in the feature space, that is, $K : X \times X \to \mathbb{R}$ is defined as
\[
K(x_i, x) = \Phi(x_i) \cdot \Phi(x).
\]
We also introduce the concept of GSVM, defined as

\[
\min_{w_i \in W} G(w_i) \quad \text{such that} \quad \eta \ge 1,
\]
where η is the constraint
\[
\eta = y_k \big( W \Phi(x_j) + B \big),
\]
where $y_k \in \{(-1, -1, \ldots, -1), (1, 1, \ldots, 1)\}$ is a p-dimensional vector. The problem of GSVM is equivalent to
\[
\text{find } w_i \in W : \ \big\langle G(w_i), v - w_i \big\rangle \ge 0 \ \text{ for all } v \in \mathbb{R}^p \text{ with } \eta \ge 1,
\]
or
\[
\text{find } w_i \in W : \ \big\langle \eta\, G(w_i), v - w_i \big\rangle \ge 0 \ \text{ for all } v \in \mathbb{R}^p.
\]
We also give sufficient conditions for the existence of a solution of the GSVM problem.

References

[1] T. Nazir, S. Silvestrov, X. Qi, Fractals of generalized F-Hutchinson operator in b-metric spaces, Journal of Operators, vol. 2016, Article ID 5250394, 9 pages, 2016. doi:10.1155/2016/5250394.

[2] T. Nazir, X. Qi, S. Silvestrov, Linear Classification of data with Support Vector Machines and Generalized Support Vector Machines, In: Engineering Mathematics II: Algebraic, Stochastic and Analysis Structures for Networks, Data Classification and Optimization, Springer Proceedings in Mathematics and Statistics, 179, Springer, 2016.

[3] T. Nazir, X. Qi, S. Silvestrov, Linear and Nonlinear Classifiers of Data with Support Vector Machines and Generalized Support Vector Machines, In: Engineering Mathematics II: Algebraic, Stochastic and Analysis Structures for Networks, Data Classification and Optimization, Springer Proceedings in Mathematics and Statistics, 179, Springer, 2016.


[4] X. Qi, S. Silvestrov, T. Nazir, Data classification with Support Vector Machines and Generalized Support Vector Machines, To appear in Proceedings of INCPAA 2016, 11th International Conference on Mathematical Problems in Engineering, Aerospace, and Sciences, La Rochelle, France, 05-08 July 2016.

[5] S. Banach, Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales, Fund. Math., vol. 3, pp. 133-181, 1922.

[6] M. Abbas, M. R. Alfuraidan and T. Nazir, Common fixed points of multivalued F-contractions on metric spaces with a directed graph, Carpathian J. Math., vol. 32, no. 1, pp. 1-12, 2016.

[7] T. Abdeljawad, Fixed points for generalized weakly contractive mappings in partial metric spaces, Math. Comput. Modelling, vol. 54, pp. 2923-2927, 2011.

[8] I. Arandjelović, Z. Kadelburg and S. Radenović, Boyd-Wong-type common fixed point results in cone metric spaces, Appl. Math. Comput., vol. 217, pp. 7167-7171, 2011.

[9] N. A. Assad and W. A. Kirk, Fixed point theorems for setvalued mappings of contractive type, Pacific J. Math., vol. 43, pp. 533-562, 1972.

[10] T. V. An, L. Q. Tuyen and N. V. Dung, Stone-type theorem on b-metric spaces and applications, Topology Appl., 185-186, pp. 50-64, 2015.

[11] H. Aydi, M. F. Bota, E. Karapınar and S. Mitrović, A fixed point theorem for set-valued quasicontractions in b-metric spaces, Fixed Point Theory Appl., vol. 2012, Article ID 88, 2012.

[12] M. Boriceanu, A. Petruşel and I. A. Rus, Fixed point theorems for some multivalued generalized contractions in b-metric spaces, Internat. J. Math. Statistics, vol. 6, pp. 65-76, 2010.

[13] M. Boriceanu, M. Bota and A. Petruşel, Multivalued fractals in b-metric spaces, Cent. Eur. J. Math., vol. 8, no. 2, pp. 367-377, 2010.

[14] Lj. Ćirić, M. Abbas, M. Rajović and B. Ali, Suzuki type fixed point theorems for generalized multi-valued mappings on a set endowed with two b-metrics, Appl. Math. Comput., vol. 219, pp. 1712-1723, 2012.

[15] C. Chifu and G. Petruşel, Fixed points for multivalued contractions in b-metric spaces with applications to fractals, Taiwan. J. Math., vol. 18, no. 5, pp. 1365-1375, 2014.

[16] S. Czerwik, Contraction mappings in b-metric spaces, Acta Math. Inform. Univ. Ostrav., vol. 1, pp. 5-11, 1993.

[17] S. Czerwik, K. Dlutek and S. L. Singh, Round-off stability of iteration procedures for operators in b-metric spaces, J. Natur. Phys. Sci., vol. 11, pp. 87-94, 1997.

[18] S. Czerwik, Nonlinear set-valued contraction mappings in b-metric spaces, Atti Semin. Mat. Fis. Univ. Modena, vol. 46, pp. 263-276, 1998.

[19] M. F. Barnsley, Fractals Everywhere, 2nd ed., Academic Press, San Diego, CA, 1993.

[20] D. W. Boyd and J. S. W. Wong, On nonlinear contractions, Proc. Amer. Math. Soc., vol. 20, pp. 458-464, 1969.

[21] M. Edelstein, An extension of Banach's contraction principle, Proc. Amer. Math. Soc., vol. 12, pp. 7-10, 1961.

[22] L. G. Huang and X. Zhang, Cone metric spaces and fixed point theorems of contractive maps, J. Math. Anal. Appl., vol. 332, pp. 1467-1475, 2007.

[23] J. Hutchinson, Fractals and self-similarity, Indiana Univ. J. Math., vol. 30, no. 5, pp. 713-747, 1981.

[24] D. Ilić, M. Abbas and T. Nazir, Iterative approximation of fixed points of Prešić operators on partial metric spaces, Mathematische Nachrichten, DOI 10.1002/mana.201400235, pp. 1-13, 2015.

[25] A. Jeribi and B. Krichen, Nonlinear functional analysis in Banach spaces and Banach algebras. Fixed point theory under weak topology for nonlinear operators and block operator matrices with applications, Monographs and Research Notes in Mathematics, Boca Raton, FL: CRC Press, 2016.

[26] T. Nazir, S. Silvestrov and M. Abbas, Fractals of generalized F-Hutchinson operator, Waves, Wavelets and Fractals Advanced Analysis, vol. 2, pp. 29-40, 2016.

[27] Z. Kadelburg, S. Radenović and V. Rakočević, Remarks on quasi-contraction on a cone metric space, Appl. Math. Lett., vol. 22, pp. 1674-1679, 2009.

[28] M. Kir and H. Kiziltunç, On some well known fixed point theorems in b-metric spaces, Turkish J. of Analysis and Number Theory, 1(1), pp. 13-16,
