
On pairs of generalized and hypergeneralized projections on a Hilbert space

Sonja Radosavljević and Dragan S. Djordjević

February 16, 2012

Abstract

In this paper, we characterize generalized and hypergeneralized projections (bounded linear operators which satisfy the conditions $A^2 = A^*$ and $A^2 = A^\dagger$). We give their matrix representations and examine under what conditions the product, difference and sum of these operators are operators of the same class.

Keywords: Generalized projections, hypergeneralized projections.

1 Introduction

Let $H$ be a separable Hilbert space and $\mathcal{L}(H)$ the space of all bounded linear operators on $H$. The symbols $\mathcal{R}(A)$, $\mathcal{N}(A)$ and $A^*$ denote the range, the null space and the adjoint of an operator $A \in \mathcal{L}(H)$. An operator $A \in \mathcal{L}(H)$ is a projection (idempotent) if $A^2 = A$, while it is an orthogonal projection if $A^* = A = A^2$. An operator is Hermitian (self-adjoint) if $A = A^*$, normal if $AA^* = A^*A$, and unitary if $AA^* = A^*A = I$. All these operators have been extensively studied and there are plenty of characterizations both of these operators and of their linear combinations ([5]).

The Moore-Penrose inverse of $A \in \mathcal{L}(H)$, denoted by $A^\dagger$, is the unique solution to the equations
$$AA^\dagger A = A, \quad A^\dagger A A^\dagger = A^\dagger, \quad (AA^\dagger)^* = AA^\dagger, \quad (A^\dagger A)^* = A^\dagger A.$$
Notice that $A^\dagger$ exists if and only if $\mathcal{R}(A)$ is closed. Then $AA^\dagger$ is the orthogonal projection onto $\mathcal{R}(A)$ parallel to $\mathcal{N}(A^*)$, and $A^\dagger A$ is the orthogonal projection onto $\mathcal{R}(A^*)$ parallel to $\mathcal{N}(A)$. Consequently, $I - AA^\dagger$ is the orthogonal projection onto $\mathcal{N}(A^*)$ and $I - A^\dagger A$ is the orthogonal projection onto $\mathcal{N}(A)$.

For $A \in \mathcal{L}(H)$, an element $B \in \mathcal{L}(H)$ is the Drazin inverse of $A$ if the following hold:
$$BAB = B, \quad BA = AB, \quad A^{n+1}B = A^n,$$
for some non-negative integer $n$. The smallest such $n$ is called the Drazin index of $A$. By $A^D$ we denote the Drazin inverse of $A$ and by $\mathrm{ind}(A)$ we denote the Drazin index of $A$. If such an $n$ does not exist, then $\mathrm{ind}(A) = \infty$ and the operator $A$ is generalized Drazin invertible; its generalized Drazin inverse is denoted by $A^d$.

An operator $A$ is invertible if and only if $\mathrm{ind}(A) = 0$. If $\mathrm{ind}(A) \le 1$, the operator $A$ is group invertible and $A^D$ is its group inverse, usually denoted by $A^\#$.

Notice that if the Drazin inverse exists, it is unique. The Drazin inverse exists if $\mathcal{R}(A^n)$ is closed for some non-negative integer $n$.

An operator $A \in \mathcal{L}(H)$ is a partial isometry if $AA^*A = A$ or, equivalently, if $A^\dagger = A^*$. An operator $A \in \mathcal{L}(H)$ is EP if $AA^\dagger = A^\dagger A$ or, in other words, if $A^\dagger = A^D = A^\#$. The set of all EP operators on $H$ will be denoted by $\mathcal{EP}(H)$. Self-adjoint and normal operators with closed range form an important subset of the set of all EP operators. However, the converse is not true, even in the finite dimensional case.
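As a quick numerical illustration of the last remark (an example of my own, not taken from the paper): every invertible matrix is EP, since then $AA^\dagger = I = A^\dagger A$, yet it need not be normal.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])       # invertible, hence EP, but not self-adjoint or normal
Ap = np.linalg.pinv(A)           # here A† coincides with the ordinary inverse

print(np.allclose(A @ Ap, Ap @ A))                    # True: A is EP
print(np.allclose(A @ A.conj().T, A.conj().T @ A))    # False: A is not normal
```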

In this paper we consider pairs of generalized and hypergeneralized projections on a Hilbert space, a concept introduced for matrices $A \in \mathbb{C}^{m \times n}$ by J. Gross and G. Trenkler in [6]. These operators extend the idea of orthogonal projections by deleting the idempotency requirement. Namely, we have the following definition:

Definition 1.1. An operator $A \in \mathcal{L}(H)$ is

(a) a generalized projection if $A^2 = A^*$;

(b) a hypergeneralized projection if $A^2 = A^\dagger$.

The set of all generalized projections on $H$ is denoted by $\mathcal{GP}(H)$ and the set of all hypergeneralized projections is denoted by $\mathcal{HGP}(H)$.
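For concreteness, here is a small numpy sketch (the matrix is an illustrative choice of mine, not from the paper) checking both defining identities for a diagonal operator whose spectrum lies in $\{0, 1, e^{2\pi i/3}\}$.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)                     # primitive cube root of unity
A = np.diag([w, 1.0 + 0j, 0.0 + 0j])           # diagonal matrix with spectrum {w, 1, 0}

A2 = A @ A
print(np.allclose(A2, A.conj().T))             # True: A^2 = A*, so A is a generalized projection
print(np.allclose(A2, np.linalg.pinv(A)))      # True: A^2 = A†, so A is also a hypergeneralized projection
```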

2 Characterization of generalized and hypergeneralized projections

We begin this section by giving characterizations of generalized and hypergeneralized projections. Similarly to Theorem 2 in [4] and Theorem 1 in [6], we have the following result:

Theorem 2.1. Let $A \in \mathcal{L}(H)$. Then the following conditions are equivalent:

(a) $A$ is a generalized projection;
(b) $A$ is a normal operator and $A^4 = A$;
(c) $A$ is a partial isometry and $A^4 = A$.

Proof. (a ⇒ b) Since
$$AA^* = AA^2 = A^3 = A^2A = A^*A, \qquad A^4 = (A^2)^2 = (A^*)^2 = (A^2)^* = (A^*)^* = A,$$
the implication is obvious.

(b ⇒ a) If $AA^* = A^*A$, recall that there exists a unique spectral measure $E$ on the Borel subsets of $\sigma(A)$ such that $E(\Delta)$ is an orthogonal projection for every Borel set $\Delta \subseteq \sigma(A)$, $E(\emptyset) = 0$, $E(\sigma(A)) = I$, and $E(\cup_i \Delta_i) = \sum_i E(\Delta_i)$ whenever $\Delta_i \cap \Delta_j = \emptyset$ for $i \ne j$. Moreover, $A$ has the spectral representation
$$A = \int_{\sigma(A)} \lambda \, dE_\lambda,$$
where $E_\lambda = E(\lambda)$ is the spectral projection associated with the point $\lambda \in \sigma(A)$.

From $A^4 = A$ we conclude $A^3|_{\mathcal{R}(A)} = I_{\mathcal{R}(A)}$ and $\lambda^3 = 1$ for every $\lambda \in \sigma(A) \setminus \{0\}$, or, equivalently, $\sigma(A) \subseteq \{0, 1, e^{2\pi i/3}, e^{-2\pi i/3}\}$. Now,
$$A = 0E(0) \oplus 1E(1) \oplus e^{2\pi i/3}E(e^{2\pi i/3}) \oplus e^{-2\pi i/3}E(e^{-2\pi i/3}),$$
where $E(\lambda)$ is the spectral projection of $A$ associated with the point $\lambda$, with $E(\lambda) \ne 0$ if $\lambda \in \sigma(A)$, $E(\lambda) = 0$ if $\lambda \in \{0, 1, e^{2\pi i/3}, e^{-2\pi i/3}\} \setminus \sigma(A)$, and $E(0) \oplus E(1) \oplus E(e^{2\pi i/3}) \oplus E(e^{-2\pi i/3}) = I$. From the fact that $\sigma(A^2) = \sigma(A^*)$, that $\lambda^2 = \overline{\lambda}$ for every admissible $\lambda$, and from the uniqueness of the spectral representation, we get $A^2 = A^*$.

(a ⇒ c) If $A^* = A^2$, then $A^4 = (A^2)^2 = (A^*)^2 = (A^2)^* = A$, and hence $AA^*A = AA^2A = A^4 = A$. Multiplying from the left (from the right) by $A^*$, we get $A^*AA^*A = A^*A$ (respectively $AA^*AA^* = AA^*$), which proves that $A^*A$ ($AA^*$) is the orthogonal projection onto $\mathcal{R}(A^*A) = \mathcal{R}(A^*) = \mathcal{N}(A)^\perp$ (onto $\mathcal{R}(AA^*) = \mathcal{R}(A) = \mathcal{N}(A^*)^\perp$), i.e. $A$ ($A^*$) is a partial isometry.

(c ⇒ a) If $A$ is a partial isometry, we know that $AA^*$ is the orthogonal projection onto $\mathcal{R}(AA^*) = \mathcal{R}(A)$. Thus, $AA^*A = P_{\mathcal{R}(A)}A = A$ and $A^4 = AA^2A = A$ implies $A^2 = A^*$.
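The spectral description used in the proof is easy to visualize numerically. The sketch below (a finite-dimensional toy of my own: a random unitary change of basis applied to the admissible eigenvalues) builds $A = 0E(0) \oplus 1E(1) \oplus e^{2\pi i/3}E(e^{2\pi i/3}) \oplus e^{-2\pi i/3}E(e^{-2\pi i/3})$ and checks the three equivalent conditions of Theorem 2.1.

```python
import numpy as np

rng = np.random.default_rng(0)
# columns of U are orthonormal eigenvectors; each E(lambda) projects onto the span of one column
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

w = np.exp(2j * np.pi / 3)
A = U @ np.diag([0.0, 1.0, w, np.conj(w)]) @ U.conj().T

A4 = np.linalg.matrix_power(A, 4)
print(np.allclose(A @ A, A.conj().T))                                    # (a) A^2 = A*
print(np.allclose(A @ A.conj().T, A.conj().T @ A), np.allclose(A4, A))   # (b) normal and A^4 = A
print(np.allclose(A @ A.conj().T @ A, A), np.allclose(A4, A))            # (c) partial isometry and A^4 = A
```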

We can give matrix representations of generalized projections based upon the previous characterizations.

Theorem 2.2. Let $A \in \mathcal{L}(H)$ be a generalized projection. Then $A$ is a closed range operator and $A^3$ is an orthogonal projection onto $\mathcal{R}(A)$. Moreover, $H$ has the decomposition
$$H = \mathcal{R}(A) \oplus \mathcal{N}(A)$$
and $A$ has the following matrix representation:
$$A = \begin{bmatrix} A_1 & 0 \\ 0 & 0 \end{bmatrix} : \begin{bmatrix} \mathcal{R}(A) \\ \mathcal{N}(A) \end{bmatrix} \to \begin{bmatrix} \mathcal{R}(A) \\ \mathcal{N}(A) \end{bmatrix},$$
where the restriction $A_1 = A|_{\mathcal{R}(A)}$ is unitary on $\mathcal{R}(A)$.

Proof. If $A$ is a generalized projection, then $AA^*A = A^4 = A$ and $A$ is a partial isometry, implying that
$$A^3 = AA^* = P_{\mathcal{R}(A)}, \qquad A^3 = A^*A = P_{\mathcal{N}(A)^\perp}.$$
Thus, $A^3$ is an orthogonal projection onto $\mathcal{R}(A) = \mathcal{N}(A)^\perp = \mathcal{R}(A^*)$, so $A$ is an EP operator with closed range on a Hilbert space. From Lemma 1.2 in [5] we get the following decomposition of the space:
$$H = \mathcal{R}(A^*) \oplus \mathcal{N}(A) = \mathcal{R}(A) \oplus \mathcal{N}(A).$$
Now, $A$ has the following matrix representation in accordance with this decomposition:
$$A = \begin{bmatrix} A_1 & 0 \\ 0 & 0 \end{bmatrix} : \begin{bmatrix} \mathcal{R}(A) \\ \mathcal{N}(A) \end{bmatrix} \to \begin{bmatrix} \mathcal{R}(A) \\ \mathcal{N}(A) \end{bmatrix},$$
where $A_1^2 = A_1^*$, $A_1^4 = A_1$ and $A_1A_1^* = A_1^*A_1 = A_1^3 = I_{\mathcal{R}(A)}$.

Similarly to Theorem 2 in [6], we have:

Theorem 2.3. Let $A \in \mathcal{L}(H)$. Then the following conditions are equivalent:

(a) $A$ is a hypergeneralized projection;

(b) $A^3$ is an orthogonal projection onto $\mathcal{R}(A)$;

(c) $A$ is an EP operator and $A^4 = A$.

Proof. (a ⇒ b) If $A^2 = A^\dagger$, then the conclusion follows from $A^3 = AA^\dagger = P_{\mathcal{R}(A)}$.

(b ⇒ a) If $A^3 = P_{\mathcal{R}(A)}$, a direct verification of the Moore-Penrose equations shows that $A^2 = A^\dagger$.

(a ⇒ c) Since
$$AA^\dagger = AA^2 = A^3 = A^2A = A^\dagger A,$$
we conclude that $A$ is an EP operator, $A^\dagger = A^\#$, $(A^\dagger)^n = (A^n)^\dagger$ and
$$A^4 = (A^2)^2 = (A^\dagger)^2 = (A^2)^\dagger = (A^\dagger)^\dagger = A.$$

(c ⇒ a) If $A$ is an EP operator, then $A^\dagger = A^\#$ and $\mathrm{ind}(A) = 1$ or, equivalently, $A^2A^\dagger = A$. Since $A^4 = A^2A^2 = A$, from the uniqueness of $A^\dagger$ it follows that $A^2 = A^\dagger$.

Theorem 2.4. Let $A \in \mathcal{L}(H)$ be a hypergeneralized projection. Then $A$ is a closed range operator and $H$ has the decomposition $H = \mathcal{R}(A) \oplus \mathcal{N}(A)$.

Also, $A$ has the following matrix representation with respect to this decomposition of the space:
$$A = \begin{bmatrix} A_1 & 0 \\ 0 & 0 \end{bmatrix} : \begin{bmatrix} \mathcal{R}(A) \\ \mathcal{N}(A) \end{bmatrix} \to \begin{bmatrix} \mathcal{R}(A) \\ \mathcal{N}(A) \end{bmatrix},$$
where the restriction $A_1 = A|_{\mathcal{R}(A)}$ satisfies $A_1^3 = I_{\mathcal{R}(A)}$.

Proof. If $A$ is a hypergeneralized projection, then $A$ is an EP operator and, using Lemma 1.2 in [5], we get the following decomposition of the space: $H = \mathcal{R}(A) \oplus \mathcal{N}(A)$. The stated matrix representation, with $A_1^3 = I_{\mathcal{R}(A)}$, then follows as in the proof of Theorem 2.2.

Notice that since $\mathcal{R}(A)$ is closed for both generalized and hypergeneralized projections, these operators have the Moore-Penrose and the Drazin inverse. Besides, they are EP operators, which implies that $A^\dagger = A^D = A^\# = A^2$. For generalized projections we can be more precise:
$$A^\dagger = A^D = A^\# = A^2 = A^*.$$
We can also write
$$\mathcal{GP}(H) \subseteq \mathcal{HGP}(H) \subseteq \mathcal{EP}(H).$$
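These identities are easy to verify numerically for the diagonal generalized projection used in the earlier sketch (again, an illustrative matrix of mine). For EP matrices the Drazin and group inverses coincide with the Moore-Penrose inverse, so checking $A^\dagger$ suffices.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
A = np.diag([w, 1.0 + 0j, 0.0 + 0j])     # generalized projection

Ap = np.linalg.pinv(A)                    # A†; for an EP operator this also equals A^D = A^#
print(np.allclose(Ap, A @ A))             # A† = A^2
print(np.allclose(Ap, A.conj().T))        # A† = A*
```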

Parts (b) and (c) of the following two theorems are known for matrices $A \in \mathbb{C}^{m \times n}$; see [2], [3]. Unlike their proofs, which are based on the representation of matrices, our proofs rely only on properties of generalized and hypergeneralized projections and on basic properties of the Moore-Penrose and the group inverse.

Theorem 2.5. Let $A \in \mathcal{L}(H)$. Then the following holds:

(a) $A \in \mathcal{GP}(H)$ if and only if $A^* \in \mathcal{GP}(H)$;
(b) $A \in \mathcal{GP}(H)$ if and only if $A^\dagger \in \mathcal{GP}(H)$;
(c) If $\mathrm{ind}(A) \le 1$, then $A \in \mathcal{GP}(H)$ if and only if $A^\# \in \mathcal{GP}(H)$.

Proof. (a) If $A \in \mathcal{GP}(H)$, then
$$(A^*)^* = A = A^4 = (A^2)^2 = (A^*)^2,$$
meaning that $A^* \in \mathcal{GP}(H)$. Conversely, if $A^* \in \mathcal{GP}(H)$, then $(A^*)^4 = A^*$ and $(A^*)^2 = (A^*)^* = A$, implying $A^2 = ((A^*)^2)^2 = (A^*)^4 = A^*$ and $A \in \mathcal{GP}(H)$.

(b) If $A \in \mathcal{GP}(H)$, then $A^\dagger = A^* = A^2$ and $(A^\dagger)^2 = (A^2)^2 = A = (A^*)^* = (A^\dagger)^*$, implying $A^\dagger \in \mathcal{GP}(H)$.

If $A^\dagger \in \mathcal{GP}(H)$, then $(A^\dagger)^2 = (A^\dagger)^* = (A^*)^\dagger$ and $(A^\dagger)^4 = A^\dagger$. Since $\mathcal{R}(A)$ is closed, $(A^\dagger)^\dagger = A$, so
$$A^2 = ((A^\dagger)^\dagger)^2 = ((A^\dagger)^2)^\dagger = ((A^\dagger)^*)^\dagger = ((A^\dagger)^\dagger)^* = A^*$$
and $A \in \mathcal{GP}(H)$.

(c) If $A \in \mathcal{GP}(H)$, then $A^\dagger = A^\#$ and part (b) of this theorem implies that $A^\# \in \mathcal{GP}(H)$.

To prove the converse, it is enough to see that $A^\# \in \mathcal{GP}(H)$ implies $(A^\#)^2 = (A^\#)^*$, $(A^\#)^\# = A$ and $(A^\#)^4 = A^\#$, and then $A \in \mathcal{GP}(H)$ follows in the same way as in part (b).


Remark 2.1. Let us mention an alternative proof of the previous theorem. If $A^\dagger \in \mathcal{GP}(H)$, then $A$ is normal, $\mathcal{R}(A)$ is closed, and $A$, $A^\dagger$ have the representations
$$A = \begin{bmatrix} A_1 & A_2 \\ 0 & 0 \end{bmatrix} : \begin{bmatrix} \mathcal{R}(A) \\ \mathcal{N}(A^*) \end{bmatrix} \to \begin{bmatrix} \mathcal{R}(A) \\ \mathcal{N}(A^*) \end{bmatrix}, \qquad A^\dagger = \begin{bmatrix} A_1^*B & 0 \\ A_2^*B & 0 \end{bmatrix},$$
where $B = (A_1A_1^* + A_2A_2^*)^{-1}$. From $(A^\dagger)^2 = (A^\dagger)^*$, we get
$$\begin{bmatrix} A_1^*BA_1^*B & 0 \\ A_2^*BA_1^*B & 0 \end{bmatrix} = \begin{bmatrix} BA_1 & BA_2 \\ 0 & 0 \end{bmatrix},$$
which implies $A_2 = 0$, $A_2^* = 0$ and $B = (A_1A_1^*)^{-1}$, and
$$A = \begin{bmatrix} A_1 & 0 \\ 0 & 0 \end{bmatrix}, \qquad A^\dagger = \begin{bmatrix} A_1^{-1} & 0 \\ 0 & 0 \end{bmatrix}.$$
Since $(A_1^{-1})^2 = (A_1^{-1})^*$, the same equality also holds for $A_1$, and $A \in \mathcal{GP}(H)$.

Similarly, to prove that $A^\# \in \mathcal{GP}(H)$ implies $A \in \mathcal{GP}(H)$, assume that $H = \mathcal{R}(A) \oplus \mathcal{N}(A^*)$. Then
$$A = \begin{bmatrix} A_1 & A_2 \\ 0 & 0 \end{bmatrix}, \qquad A^\# = \begin{bmatrix} A_1^\# & (A_1^\#)^2A_2 \\ 0 & 0 \end{bmatrix}.$$
Since $(A^\#)^2 = (A^\#)^*$, we get $A_2 = 0$ and $(A_1^\#)^2 = (A_1^\#)^*$. From the fact that $A_1$ is surjective on $\mathcal{R}(A)$ and $\mathcal{R}(A_1) \cap \mathcal{N}(A_1) = \{0\}$, we have $A_1^\# = A_1^{-1}$. Consequently, $(A_1^{-1})^2 = (A_1^{-1})^*$ and $A_1^2 = A_1^*$, which proves that $A \in \mathcal{GP}(H)$.

Theorem 2.6. Let $A \in \mathcal{L}(H)$. Then the following holds:

(a) $A \in \mathcal{HGP}(H)$ if and only if $A^* \in \mathcal{HGP}(H)$;
(b) $A \in \mathcal{HGP}(H)$ if and only if $A^\dagger \in \mathcal{HGP}(H)$;
(c) If $\mathrm{ind}(A) \le 1$, then $A \in \mathcal{HGP}(H)$ if and only if $A^\# \in \mathcal{HGP}(H)$.

Proof. The proofs of (a) and (b) are similar to the proofs of parts (a) and (b) of Theorem 2.5.

(c) We only need to prove that $A^\# \in \mathcal{HGP}(H)$ implies $A \in \mathcal{HGP}(H)$, since the implication "⇒" is analogous to the same part of Theorem 2.5.

Let $H = \mathcal{R}(A) \oplus \mathcal{N}(A^*)$ and $\mathrm{ind}(A) \le 1$. Then
$$A = \begin{bmatrix} A_1 & A_2 \\ 0 & 0 \end{bmatrix}, \qquad A^\# = \begin{bmatrix} A_1^{-1} & (A_1^{-1})^2A_2 \\ 0 & 0 \end{bmatrix}, \qquad (A^\#)^\dagger = \begin{bmatrix} (A_1^{-1})^*B & 0 \\ (A_1^{-2}A_2)^*B & 0 \end{bmatrix},$$
where $B = \bigl(A_1^{-1}(A_1^{-1})^* + A_1^{-2}A_2(A_1^{-2}A_2)^*\bigr)^{-1}$. From $(A^\#)^\dagger = (A^\#)^2$, we get $A_2 = 0$ and $A_1 = A_1^{-2}$. Multiplying by $A_1^2$, the last equation becomes $A_1^3 = I_{\mathcal{R}(A)}$, which proves that $A \in \mathcal{HGP}(H)$.

3 Properties of products, differences, and sums of generalized projections

In this section we study properties of products, differences, and sums of two generalized projections, or of one orthogonal and one generalized projection. We begin with two very useful theorems which give matrix representations of such pairs of operators. We also obtain basic properties of generalized projections which easily follow from their canonical representations.

Theorem 3.1. Let $A, B \in \mathcal{GP}(H)$ and $H = \mathcal{R}(A) \oplus \mathcal{N}(A)$. Then $A$ and $B$ have the following representations with respect to this decomposition of the space:
$$A = \begin{bmatrix} A_1 & 0 \\ 0 & 0 \end{bmatrix} : \begin{bmatrix} \mathcal{R}(A) \\ \mathcal{N}(A) \end{bmatrix} \to \begin{bmatrix} \mathcal{R}(A) \\ \mathcal{N}(A) \end{bmatrix}, \qquad B = \begin{bmatrix} B_1 & B_2 \\ B_3 & B_4 \end{bmatrix} : \begin{bmatrix} \mathcal{R}(A) \\ \mathcal{N}(A) \end{bmatrix} \to \begin{bmatrix} \mathcal{R}(A) \\ \mathcal{N}(A) \end{bmatrix},$$
where
$$B_1^* = B_1^2 + B_2B_3, \quad B_2^* = B_3B_1 + B_4B_3, \quad B_3^* = B_1B_2 + B_2B_4, \quad B_4^* = B_3B_2 + B_4^2.$$
Furthermore, $B_2 = 0$ if and only if $B_3 = 0$.

Proof. Let $H = \mathcal{R}(A) \oplus \mathcal{N}(A)$. The representation of $A$ follows from Theorem 2.2, and let $B$ have the representation
$$B = \begin{bmatrix} B_1 & B_2 \\ B_3 & B_4 \end{bmatrix}.$$
Then, from
$$B^2 = \begin{bmatrix} B_1^2 + B_2B_3 & B_1B_2 + B_2B_4 \\ B_3B_1 + B_4B_3 & B_3B_2 + B_4^2 \end{bmatrix} = \begin{bmatrix} B_1^* & B_3^* \\ B_2^* & B_4^* \end{bmatrix} = B^*,$$
the conclusion follows directly.

If $B_2 = 0$, then $B_3^* = B_1B_2 + B_2B_4 = 0$ and $B_3 = 0$. Analogously, $B_3 = 0$ implies $B_2 = 0$.

Corollary 3.1. Under the assumptions of the previous theorem, $B \in \mathcal{GP}(H)$ has one of the following matrix representations:
$$B = \begin{bmatrix} B_1 & B_2 \\ B_3 & B_4 \end{bmatrix} \qquad \text{or} \qquad B = \begin{bmatrix} B_1 & 0 \\ 0 & B_4 \end{bmatrix}.$$


Theorem 3.2. Let $P \in \mathcal{B}(H)$ be an orthogonal projection, $\mathcal{R}(P) = L$ and $H = L \oplus L^\perp$. If $A \in \mathcal{GP}(H)$, then $P$ and $A$ have the following matrix representations with respect to this decomposition of the space:
$$P = \begin{bmatrix} I_L & 0 \\ 0 & 0 \end{bmatrix} : \begin{bmatrix} L \\ L^\perp \end{bmatrix} \to \begin{bmatrix} L \\ L^\perp \end{bmatrix}, \qquad A = \begin{bmatrix} A_1 & A_2 \\ A_3 & A_4 \end{bmatrix} : \begin{bmatrix} L \\ L^\perp \end{bmatrix} \to \begin{bmatrix} L \\ L^\perp \end{bmatrix},$$
where
$$A_1^* = A_1^2 + A_2A_3, \quad A_2^* = A_3A_1 + A_4A_3, \quad A_3^* = A_1A_2 + A_2A_4, \quad A_4^* = A_3A_2 + A_4^2.$$
Moreover, $A_1 = PAP|_L$, $A_2 = PA(I-P)|_{L^\perp}$, $A_3 = (I-P)AP|_L$, $A_4 = (I-P)A(I-P)|_{L^\perp}$.

Then $A_2 = 0$ if and only if $A_3 = 0$, or, equivalently, $PA(I-P)|_{L^\perp} = 0$ if and only if $(I-P)AP|_L = 0$.

The operators $P$ and $A$ commute if and only if either $A_2 = 0$ or $A_3 = 0$, or, equivalently, if and only if either $PA(I-P)|_{L^\perp} = 0$ or $(I-P)AP|_L = 0$.

Proof. The matrix representation of $A \in \mathcal{GP}(H)$ and the properties of $A_i$, $i = 1, \dots, 4$, can be obtained as in the proof of Theorem 3.1. Using matrix multiplication, it is easy to see that
$$PAP = \begin{bmatrix} A_1 & 0 \\ 0 & 0 \end{bmatrix}$$
and $PAP|_L = A_1$. The rest of the equalities are obtained in an analogous way.

If $PA = AP$, again using matrix multiplication, we get $A_2 = A_3 = 0$, which is equivalent to $PA(I-P)|_{L^\perp} = 0$ or $(I-P)AP|_L = 0$.

Corollary 3.2. Under the assumptions of the previous theorem, $A \in \mathcal{GP}(H)$ has the matrix representation
$$A = \begin{bmatrix} A_1 & A_2 \\ A_3 & A_4 \end{bmatrix}$$
when $P$ and $A$ do not commute, or
$$A = \begin{bmatrix} A_1 & 0 \\ 0 & A_4 \end{bmatrix}$$
when $P$ and $A$ commute.


As we know, if $A$ is an orthogonal projection, then $I - A$ is also an orthogonal projection. It is of interest to examine whether generalized projections keep the same property.

Example 3.1. If $H = \mathbb{C}^2$ and
$$A = \begin{bmatrix} e^{2\pi i/3} & 0 \\ 0 & 0 \end{bmatrix},$$
then $A^2 = A^*$, but
$$I - A = \begin{bmatrix} 1 - e^{2\pi i/3} & 0 \\ 0 & 1 \end{bmatrix}$$
and, clearly, $I - A \ne (I - A)^4$, implying that $I - A$ is not a generalized projection.
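This example can be confirmed with a few lines of numpy (a sketch, not part of the paper):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
A = np.diag([w, 0.0 + 0j])
B = np.eye(2) - A

print(np.allclose(A @ A, A.conj().T))                 # True: A is a generalized projection
print(np.allclose(np.linalg.matrix_power(B, 4), B))   # False: (I - A)^4 != I - A
print(np.allclose(B @ B, B.conj().T))                 # False: I - A is not a generalized projection
```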

Thus, we have the following theorem.

Theorem 3.3. (Theorem 6 in [2]) Let $A \in \mathcal{L}(H)$ be a generalized projection. Then $I - A$ is a normal operator. Moreover, $I - A$ is a generalized projection if and only if $A$ is an orthogonal projection.

If $I - A$ is a generalized projection, then $A$ is a normal operator and $A$ is a generalized projection if and only if $I - A$ is an orthogonal projection.

Proof. If $A$ is a generalized projection, then $A$ is normal and $A^4 = A$, which implies
$$A = 0E(0) \oplus 1E(1) \oplus e^{2\pi i/3}E(e^{2\pi i/3}) \oplus e^{-2\pi i/3}E(e^{-2\pi i/3}),$$
where $E(\lambda)$ is the orthogonal projection such that $E(\lambda) \ne 0$ if $\lambda \in \sigma(A)$, $E(\lambda) = 0$ if $\lambda \in \{0, 1, e^{2\pi i/3}, e^{-2\pi i/3}\} \setminus \sigma(A)$, and $E(0) \oplus E(1) \oplus E(e^{2\pi i/3}) \oplus E(e^{-2\pi i/3}) = I$. Thus,
$$I - A = (1-0)E(0) \oplus (1-1)E(1) \oplus (1-e^{2\pi i/3})E(e^{2\pi i/3}) \oplus (1-e^{-2\pi i/3})E(e^{-2\pi i/3})$$
and
$$(I-A)^2 = E(0) \oplus (1-e^{2\pi i/3})^2E(e^{2\pi i/3}) \oplus (1-e^{-2\pi i/3})^2E(e^{-2\pi i/3}),$$
$$(I-A)^* = E(0) \oplus (1-e^{2\pi i/3})^*E(e^{2\pi i/3}) \oplus (1-e^{-2\pi i/3})^*E(e^{-2\pi i/3}).$$
Hence $(I-A)^2 = (I-A)^*$ if and only if
$$(1-e^{2\pi i/3})^2E(e^{2\pi i/3}) = (1-e^{2\pi i/3})^*E(e^{2\pi i/3}) \quad \text{and} \quad (1-e^{-2\pi i/3})^2E(e^{-2\pi i/3}) = (1-e^{-2\pi i/3})^*E(e^{-2\pi i/3}).$$
This is true if and only if $E(e^{2\pi i/3}) = 0$ and $E(e^{-2\pi i/3}) = 0$, which is equivalent to $\sigma(A) \subseteq \{0, 1\}$, i.e. to $A$ being an orthogonal projection.


Remark 3.1. We can give another proof of this theorem. Let $H = \mathcal{R}(A) \oplus \mathcal{N}(A)$; according to Theorem 3.1, the generalized projection $A$ has the representation
$$A = \begin{bmatrix} A_1 & 0 \\ 0 & 0 \end{bmatrix} : \begin{bmatrix} \mathcal{R}(A) \\ \mathcal{N}(A) \end{bmatrix} \to \begin{bmatrix} \mathcal{R}(A) \\ \mathcal{N}(A) \end{bmatrix}.$$
Then
$$I - A = \begin{bmatrix} I_{\mathcal{R}(A)} - A_1 & 0 \\ 0 & I_{\mathcal{N}(A)} \end{bmatrix}$$
and it is obvious that normality of $A$ implies normality of $I - A$. Also,
$$(I-A)^2 = \begin{bmatrix} (I_{\mathcal{R}(A)} - A_1)^2 & 0 \\ 0 & I_{\mathcal{N}(A)} \end{bmatrix} = \begin{bmatrix} (I_{\mathcal{R}(A)} - A_1)^* & 0 \\ 0 & I_{\mathcal{N}(A)} \end{bmatrix} = (I-A)^*$$
holds if and only if $(I_{\mathcal{R}(A)} - A_1)^2 = (I_{\mathcal{R}(A)} - A_1)^*$. Since $A_1^2 = A_1^*$, this condition becomes
$$I_{\mathcal{R}(A)} - 2A_1 + A_1^2 = I_{\mathcal{R}(A)} - 2A_1 + A_1^* = I_{\mathcal{R}(A)} - A_1^*,$$
which is satisfied if and only if $A_1 = A_1^*$. Hence, $A = A^* = A^2$.

Theorem 3.4. If $P$ is an orthogonal projection and $A$ is a generalized projection, then $AP$ is a generalized projection if and only if either $PA(I-P) = 0$ or $(I-P)AP = 0$.

Proof. Let $\mathcal{R}(P) = L$ and $H = L \oplus L^\perp$. Then
$$P = \begin{bmatrix} I_L & 0 \\ 0 & 0 \end{bmatrix}, \qquad A = \begin{bmatrix} A_1 & A_2 \\ A_3 & A_4 \end{bmatrix}.$$
From
$$AP = \begin{bmatrix} A_1 & 0 \\ A_3 & 0 \end{bmatrix}, \qquad (AP)^2 = \begin{bmatrix} A_1^2 & 0 \\ A_3A_1 & 0 \end{bmatrix}, \qquad (AP)^* = \begin{bmatrix} A_1^* & A_3^* \\ 0 & 0 \end{bmatrix},$$
we conclude that $(AP)^2 = (AP)^*$ if and only if $A_3 = 0$, which is equivalent to $(I-P)AP = 0$.

Theorem 3.2 provides us with the condition that $A_3 = 0$ if and only if $A_2 = 0$, or, in other words, $(I-P)AP = 0$ if and only if $PA(I-P) = 0$.

Theorem 3.5. Let $P$ be an orthogonal projection and let $A$ be a generalized projection. Then $P - A$ is a generalized projection if and only if $A$ is an orthogonal projection commuting with $P$.

Furthermore, if $P$ is an orthogonal projection and $A$ is a generalized projection such that $PAP$ is an orthogonal projection and either $PA(I-P) = 0$ or $(I-P)AP = 0$, then $P - A$ is a generalized projection.

Proof. Obviously, $(P-A)^2 = P - PA - AP + A^2 = P - A^* = (P-A)^*$ holds if and only if $PA = AP = A^*$. Since $A^2 = A^*$, we conclude that $A$ is an orthogonal projection commuting with $P$.

To prove the second part of the theorem, let $\mathcal{R}(P) = L$ and $H = L \oplus L^\perp$. If $PA(I-P) = 0$ or $(I-P)AP = 0$, then
$$P = \begin{bmatrix} I_L & 0 \\ 0 & 0 \end{bmatrix}, \qquad A = \begin{bmatrix} A_1 & 0 \\ 0 & A_4 \end{bmatrix}, \qquad P - A = \begin{bmatrix} I_L - A_1 & 0 \\ 0 & -A_4 \end{bmatrix}.$$
From the orthogonality of $PAP$ it follows that $A_1 = A_1^2 = A_1^*$, and
$$(P-A)^2 = \begin{bmatrix} (I_L - A_1)^2 & 0 \\ 0 & A_4^2 \end{bmatrix} = \begin{bmatrix} (I_L - A_1)^* & 0 \\ 0 & A_4^* \end{bmatrix} = (P-A)^*.$$

Theorem 3.6. Let $P$ be an orthogonal projection and let $A$ be a generalized projection. Then $P + A$ is a generalized projection if $PAP = 0$. Moreover, if $P + A$ is a generalized projection, then $PAP = 0$, $PA(I-P) = 0$ and $(I-P)AP = 0$.

Proof. If $P + A$ is a generalized projection, then from $(P+A)^2 = P + PA + AP + A^2 = P + A^* = (P+A)^*$ we conclude that $PA + AP = 0$. Since
$$PA = \begin{bmatrix} A_1 & A_2 \\ 0 & 0 \end{bmatrix}, \qquad AP = \begin{bmatrix} A_1 & 0 \\ A_3 & 0 \end{bmatrix},$$
this holds if and only if $A_1 = A_2 = A_3 = 0$, i.e. $AP = PA = 0$. Thus, $PAP = 0$, $PA(I-P) = 0$ and $(I-P)AP = 0$.

Conversely, if $PAP = 0$, then $A_1 = 0$ and, by Theorem 3.2, $A_2 = 0$ and $A_3 = 0$. Clearly,
$$P + A = \begin{bmatrix} I_L & 0 \\ 0 & A_4 \end{bmatrix}$$
is a generalized projection.

Theorem 3.7. If $P$ is an orthogonal projection and $A$ is a generalized projection, then $AP - PA$ is a generalized projection if and only if $PA(I-P) = 0$ or $(I-P)AP = 0$.

Proof. The matrix representations of the operators $A$, $P$ and $AP$ imply that
$$PA - AP = \begin{bmatrix} 0 & A_2 \\ -A_3 & 0 \end{bmatrix},$$
and it is clear that
$$(PA - AP)^2 = \begin{bmatrix} -A_2A_3 & 0 \\ 0 & -A_3A_2 \end{bmatrix} = \begin{bmatrix} 0 & -A_3^* \\ A_2^* & 0 \end{bmatrix} = (PA - AP)^*$$
if and only if $A_2 = A_3 = 0$, which is equivalent to $PA(I-P) = 0$ or $(I-P)AP = 0$.

The next theorem is not new: it was proved for matrices $A \in \mathbb{C}^{m \times n}$ by J. Gross and G. Trenkler in [6], and it appears again in [2].

Theorem 3.8. Let $A, B \in \mathcal{GP}(H)$. Then $AB \in \mathcal{GP}(H)$ if $AB = BA$.

Proof. If
$$AB = \begin{bmatrix} A_1B_1 & A_1B_2 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} B_1A_1 & 0 \\ B_3A_1 & 0 \end{bmatrix} = BA,$$
it is clear that $A_1B_1 = B_1A_1$, $B_2 = 0$ and $B_3 = 0$. From Theorem 3.1 we conclude that $B_1^* = B_1^2$ and $B_4^* = B_4^2$, and
$$(AB)^2 = \begin{bmatrix} (A_1B_1)^2 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} (A_1B_1)^* & 0 \\ 0 & 0 \end{bmatrix} = (AB)^*.$$
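The commuting case is easy to illustrate numerically (with two diagonal generalized projections of my own choosing, not from the paper):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
A = np.diag([w, 1.0 + 0j, 0.0 + 0j])               # generalized projection
B = np.diag([np.conj(w), np.conj(w), 1.0 + 0j])    # another generalized projection; diagonal, so AB = BA

AB = A @ B
print(np.allclose(A @ A, A.conj().T), np.allclose(B @ B, B.conj().T))   # True True
print(np.allclose(A @ B, B @ A))                                        # True: A and B commute
print(np.allclose(AB @ AB, AB.conj().T))                                # True: AB is a generalized projection
```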

Theorem 3.9. Let $A, B \in \mathcal{GP}(H)$. Then $A + B \in \mathcal{GP}(H)$ if and only if $AB = BA = 0$.

Proof. If $A, B \in \mathcal{GP}(H)$ have the representations given in Theorem 3.1, then
$$A + B = \begin{bmatrix} A_1 + B_1 & B_2 \\ B_3 & B_4 \end{bmatrix},$$
and if
$$(A+B)^2 = \begin{bmatrix} (A_1+B_1)^2 + B_2B_3 & (A_1+B_1)B_2 + B_2B_4 \\ B_3(A_1+B_1) + B_4B_3 & B_3B_2 + B_4^2 \end{bmatrix} = \begin{bmatrix} (A_1+B_1)^* & B_3^* \\ B_2^* & B_4^* \end{bmatrix} = (A+B)^*,$$
it is clear that
$$(A_1+B_1)^2 + B_2B_3 = A_1^2 + A_1B_1 + B_1A_1 + B_1^2 + B_2B_3 = (A_1+B_1)^*,$$
and since $B_1^* = B_1^2 + B_2B_3$, we get $A_1B_1 + B_1A_1 = 0$. Thus, $B_1 = 0$, which implies $B_2 = B_3 = 0$ and $B_4^2 = B_4^*$. In this case we obtain $AB = BA = 0$.

Conversely, if $AB = BA = 0$, then $A_1B_1 = B_1A_1 = 0$, implying $B_1 = B_2 = B_3 = 0$, $B_4^2 = B_4^*$ and, obviously, $(A+B)^2 = (A+B)^*$.

The next theorem answers the question of when the difference of two generalized projections is itself a generalized projection. It can be proved using the partial ordering on $\mathcal{GP}(H)$, as in [6] or [2]. We prefer to use only basic properties of generalized projections and their matrix representations.

Theorem 3.10. Let $A, B \in \mathcal{GP}(H)$. Then $A - B \in \mathcal{GP}(H)$ if and only if $AB = BA = B^*$.

Proof. If $A, B$ have the representations given in Theorem 3.1, then
$$A - B = \begin{bmatrix} A_1 - B_1 & -B_2 \\ -B_3 & -B_4 \end{bmatrix}.$$
From
$$(A-B)^2 = \begin{bmatrix} (A_1-B_1)^2 + B_2B_3 & -(A_1-B_1)B_2 + B_2B_4 \\ -B_3(A_1-B_1) + B_4B_3 & B_3B_2 + B_4^2 \end{bmatrix} = \begin{bmatrix} (A_1-B_1)^* & -B_3^* \\ -B_2^* & -B_4^* \end{bmatrix} = (A-B)^*,$$
we get $B_2 = 0$, $B_3 = 0$, $B_4^2 = -B_4^*$, while from Theorem 3.1 we have $B_4^2 = B_4^*$. Now $B_4 = 0$ and
$$(A_1 - B_1)^2 = A_1^2 - A_1B_1 - B_1A_1 + B_1^2 = A_1^* - B_1^*$$
follows. This is true if and only if $A_1B_1 = B_1A_1 = B_1^*$, and in that case $AB = BA = B^*$.

Theorem 3.11. Let $A$ and $B$ be two commuting generalized projections. Then $A(I-B) \in \mathcal{GP}(H)$ if and only if $ABA = (AB)^*$.

Proof. Since $AB = BA$, we know that $AB$ is a generalized projection. Now,
$$(A(I-B))^2 = (A - AB)^2 = A^2 - 2ABA + (AB)^2 = A^* - 2ABA + (AB)^*,$$
and this equals $(A(I-B))^* = A^* - (AB)^*$ if and only if $ABA = (AB)^*$.

Theorem 3.12. If $A \in \mathcal{GP}(H)$ and $\alpha \in \{1, e^{2\pi i/3}, e^{-2\pi i/3}\}$, then $\alpha A \in \mathcal{GP}(H)$.

Proof. Since $(\alpha A)^3 = \alpha^3 A^3 = A^3$, we have $(\alpha A)^3|_{\mathcal{R}(A)} = I_{\mathcal{R}(A)}$, and $\alpha A$ is a normal operator, which completes the proof.
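Numerically, with the diagonal toy generalized projection used before (an illustration of mine):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
A = np.diag([w, 1.0 + 0j, 0.0 + 0j])         # generalized projection

for alpha in (1, w, np.conj(w)):
    B = alpha * A
    print(np.allclose(B @ B, B.conj().T))     # True for each alpha: alpha * A stays a generalized projection
```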

4 Properties of products, differences, and sums of hypergeneralized projections

For hypergeneralized projections we have results similar to those for generalized projections. In some theorems we are not able to establish an equivalence like the one we established for generalized projections, because we need additional conditions to ensure that $(A+B)^\dagger = A^\dagger + B^\dagger$ and $(A-B)^\dagger = A^\dagger - B^\dagger$.

We start with properties of a pair consisting of one orthogonal and one hypergeneralized projection.

Theorem 4.1. Let $P \in \mathcal{B}(H)$ be an orthogonal projection and let $A \in \mathcal{HGP}(H)$. Then $AP$ is a hypergeneralized projection if and only if $(I-P)AP = 0$. Similarly, $PA$ is a hypergeneralized projection if and only if $PA(I-P) = 0$.


Proof. Let $H = L \oplus L^\perp$, where $\mathcal{R}(P) = L$. Then
$$A = \begin{bmatrix} A_1 & A_2 \\ A_3 & A_4 \end{bmatrix}, \qquad AP = \begin{bmatrix} A_1 & 0 \\ A_3 & 0 \end{bmatrix}.$$
If
$$(AP)^2 = \begin{bmatrix} A_1^2 & 0 \\ A_3A_1 & 0 \end{bmatrix} = \begin{bmatrix} D^\dagger A_1^* & D^\dagger A_3^* \\ 0 & 0 \end{bmatrix} = (AP)^\dagger,$$
where $D = A_1^*A_1 + A_3^*A_3$, then $A_3 = 0$, which is equivalent to $(I-P)AP = 0$.

Conversely, if $(I-P)AP = 0$, i.e. $A_3 = 0$, then $A$ has the matrix form
$$A = \begin{bmatrix} A_1 & A_2 \\ 0 & A_4 \end{bmatrix}, \qquad A^2 = \begin{bmatrix} A_1^2 & A_1A_2 + A_2A_4 \\ 0 & A_4^2 \end{bmatrix},$$
and it is easy to see that
$$A^\dagger = \begin{bmatrix} A_1^\dagger & (2I - A_1)^\dagger(I - A_1)^\dagger A_2A_4^\dagger \\ 0 & A_4^\dagger \end{bmatrix}.$$
Since $A$ is a hypergeneralized projection, it is clear that $A_1^2 = A_1^\dagger$ and consequently
$$(AP)^2 = \begin{bmatrix} A_1^2 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} A_1^\dagger & 0 \\ 0 & 0 \end{bmatrix} = (AP)^\dagger.$$

The following example shows that Theorem 3.3 does not hold for hypergeneralized projections.

Example 4.1. Let $H = \mathbb{C}^2$ and
$$A = \begin{bmatrix} 1 & 1 \\ 0 & e^{2\pi i/3} \end{bmatrix}.$$
Then
$$A^2 = \begin{bmatrix} 1 & 1 + e^{2\pi i/3} \\ 0 & e^{-2\pi i/3} \end{bmatrix}, \qquad A^3 = I_{\mathcal{R}(A)}, \qquad A^4 = A,$$
and $A$ is a hypergeneralized projection. However,
$$I - A = \begin{bmatrix} 0 & -1 \\ 0 & 1 - e^{2\pi i/3} \end{bmatrix}$$
is not normal.
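The example is easily verified numerically (a quick numpy sketch):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
A = np.array([[1.0, 1.0],
              [0.0, w]])

print(np.allclose(A @ A, np.linalg.pinv(A)))                  # True: A^2 = A†, A is hypergeneralized
print(np.allclose(np.linalg.matrix_power(A, 3), np.eye(2)))   # True: A^3 = I
print(np.allclose(A @ A, A.conj().T))                         # False: A is not a generalized projection
B = np.eye(2) - A
print(np.allclose(B @ B.conj().T, B.conj().T @ B))            # False: I - A is not normal
```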

Theorem 4.2. Let $A, B \in \mathcal{HGP}(H)$. If $AB = BA$, then $AB \in \mathcal{HGP}(H)$.

Proof. Let $H = \mathcal{R}(A) \oplus \mathcal{N}(A)$ and let $A, B \in \mathcal{HGP}(H)$ have the representations
$$A = \begin{bmatrix} A_1 & 0 \\ 0 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} B_1 & B_2 \\ B_3 & B_4 \end{bmatrix}.$$
Then
$$AB = \begin{bmatrix} A_1B_1 & A_1B_2 \\ 0 & 0 \end{bmatrix}, \qquad (AB)^2 = \begin{bmatrix} A_1B_1A_1B_1 & A_1B_1A_1B_2 \\ 0 & 0 \end{bmatrix}.$$
A straightforward calculation using the formula $A^\dagger = A^*(AA^*)^\dagger$ shows that
$$(AB)^\dagger = \begin{bmatrix} (A_1B_1)^*D^{-1} & 0 \\ (A_1B_2)^*D^{-1} & 0 \end{bmatrix},$$
where $D = A_1B_1(A_1B_1)^* + A_1B_2(A_1B_2)^* > 0$ is invertible. Assume that the hypergeneralized projections $A$ and $B$ commute, i.e. that
$$AB = \begin{bmatrix} A_1B_1 & A_1B_2 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} B_1A_1 & 0 \\ B_3A_1 & 0 \end{bmatrix} = BA.$$
This implies $B_2 = 0$, $B_3 = 0$ and $A_1B_1 = B_1A_1$, and it is easy to see that $(AB)^2 = (AB)^\dagger$.

Theorem 4.3. Let $A, B \in \mathcal{HGP}(H)$. If $AB = BA = 0$, then $A + B \in \mathcal{HGP}(H)$.

Proof. Let $H = \mathcal{R}(A) \oplus \mathcal{N}(A)$. Then
$$A = \begin{bmatrix} A_1 & 0 \\ 0 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} B_1 & B_2 \\ B_3 & B_4 \end{bmatrix}.$$
From these matrix representations it is easy to see that $AB = BA = 0$ implies $B_1 = B_2 = B_3 = 0$ and $B_4^2 = B_4^\dagger$. Now,
$$(A+B)^2 = A^2 + B^2 = A^\dagger + B^\dagger = (A+B)^\dagger.$$

Theorem 4.4. Let $A, B \in \mathcal{HGP}(H)$. If $AB = BA = B^2$, then $A - B \in \mathcal{HGP}(H)$.

Proof. Let $H = \mathcal{R}(A) \oplus \mathcal{N}(A)$. Then
$$A = \begin{bmatrix} A_1 & 0 \\ 0 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} B_1 & B_2 \\ B_3 & B_4 \end{bmatrix}.$$
From the condition $AB = BA = B^2$ we get $A_1B_1 = B_1A_1 = B_1^2$, $B_2 = B_3 = 0$ and $B_4^2 = B_4^\dagger$, which implies $(A-B)^\dagger = A^\dagger - B^\dagger$. Hence
$$(A-B)^2 = A^2 - AB - BA + B^2 = A^2 - B^2 = A^\dagger - B^\dagger = (A-B)^\dagger.$$

References

[1] J. K. Baksalary and X. Liu, An alternative characterization of generalized projectors, Linear Algebra and its Applications 388 (2004) 61-65.

[2] J. K. Baksalary, O. M. Baksalary, X. Liu and G. Trenkler, Further results on generalized and hypergeneralized projectors, Linear Algebra and its Applications 429 (2008) 1038-1050.

[3] J. K. Baksalary, O. M. Baksalary and X. Liu, Further properties of generalized and hypergeneralized projectors, Linear Algebra and its Applications.

[4] H. Du and Y. Li, The spectral characterization of generalized projections, Linear Algebra and its Applications 400 (2005) 313-318.

[5] D. S. Djordjević and J. Koliha, Characterizing Hermitian, normal and EP operators, Filomat 21:1 (2007) 39-54.

[6] J. Gross and G. Trenkler, Generalized and hypergeneralized projectors, Linear Algebra and its Applications 264 (1997) 463-474.

Address:

Sonja Radosavljević
MAI, Linköping University, 58351, Sweden
sonja.radosavljevic@liu.se

Dragan Djordjević
Faculty of Sciences and Mathematics, University of Niš, P.O. Box 224, 18000 Niš, Serbia
