
Hierarchical Decomposition of 2D/3D Images, Based on SVD 2×2

Roumen Kountchev (1), Roumiana Kountcheva (2)

(1) TU-Sofia, (2) T&K Engineering, Bulgaria

Abstract

The famous Singular Value Decomposition (SVD) is very efficient in the processing of multidimensional images when efficient compression, and reduction of the features used for object recognition, are needed. The basic obstacle to the wide use of the SVD is its high computational complexity. To solve this problem, a new approach is offered here for hierarchical image decomposition through an SVD (2×2)-based algorithm. In this case, the multidimensional image is represented as a 3D tensor, divided into sub-tensors of size 2×2×2, called kernels. Each such kernel is then decomposed through Hierarchical SVD (HSVD), based on the SVD for a matrix of size 2×2. The paper gives the HSVD algorithms for a 4×4 matrix and for a tensor of size 4×4×4. The tensor decomposition is generalized for a tensor of size N×N×N, where N=2^n. The computational complexity of the HSVD is evaluated and compared to that of the iterative SVD algorithm for 2D matrices and 3D tensors. The basic advantages of the new approach for the decomposition of multidimensional images are its low computational complexity and its tree-like structure, which permits low-energy branches to be cut off through threshold selection. The algorithm is suitable for parallel processing of multidimensional images on multiprocessor systems, where the basic cell processor executes the decomposition of the kernel tensor of size 2×2×2.

1. Introduction

The SVD is a statistical decomposition for the processing, coding and analysis of images, widely used in computer vision systems. This decomposition has been the object of vast research, presented in many monographs [1,2] and papers [3-14]. The decomposition is optimal in the sense that it concentrates a significant part of the image energy in a minimum number of components, and the restored image (after reduction of the low-energy components) has minimum mean square error.

One of the basic problems which limits the use of the "classic" SVD [1] to some degree is its high computational complexity, which grows together with the size of the image matrix. To overcome this problem, several new approaches have been offered. The first approach is based on SVD calculation through iterative methods, which do not require the characteristic polynomial of the matrix to be defined. In this case, the SVD is executed in two stages: in the first, each matrix is transformed into triangular form with the QR decomposition [1], and then into bidiagonal form through Householder transforms. In the second stage, an iterative method is applied to the bidiagonal matrix, whose iterations stop when the needed accuracy is achieved. For this, the iterative method of Jacobi can be used, where the calculation of the SVD for the bidiagonal matrix needs the execution of a sequence of orthogonal transforms with matrices which differ from the unit matrix only by the elements of a rotation matrix of size 2×2. The second approach is based on the relation between the SVD and Principal Component Analysis (PCA) [3]. It can be executed through neural networks [2] of the generalized Hebbian kind, or through multilayer perceptron networks, which use iterative learning algorithms. The third approach is based on the algorithm known as Sequential KL/SVD [4]. The basic idea here is as follows: the image matrix is divided into blocks of small size, and to each is applied the SVD, based on the QR decomposition. At first, the SVD is calculated for the first block of the original image (the upper left, for example), and then an iterative SVD calculation is executed for each of the remaining blocks by using the transform matrices calculated for the first block (an updating process). In the course of the iterations, the SVD components which correspond to very small eigenvalues are deleted.

For the acceleration of the SVD calculation, several methods have been developed [6-8]. The first is based on the algorithm called Randomized SVD [6], where some number of matrix rows (or columns) is chosen at random. After scaling, they are used to build a small matrix, for which the SVD is calculated; it is later used as an approximation of the original matrix. In [7] the algorithm QUIC-SVD is offered, suitable for matrices of very large size. Through this algorithm, a fast sample-based SVD approximation is achieved, with automatic relative error control. This approach is based on a sampling mechanism called the cosine tree, through which a best-rank approximation is achieved. The experimental investigation of QUIC-SVD in [8] presents better results than those for the MATLAB SVD and the Tygert SVD. The obtained 6-7 times acceleration compared to the SVD depends on the pre-selected value of the parameter ε, which defines the upper limit of the approximation error, with probability (1−ε). Several SVD-based methods have been developed, aimed at

International Journal Multimedia and Image Processing (IJMIP), Volume 5, Issues 3/4, September/December 2015

Copyright © 2015, Infonomics Society 286


the enhancement of the image compression efficiency [9-12]. One of them, called Multiresolution SVD [9], comprises three steps: image transform through 9/7 biorthogonal two-level wavelets, decomposition of the SVD-transformed image using blocks of size 2×2 up to level six, and finally the use of the algorithms SPIHT and gzip. In [10] a hybrid KLT-SVD algorithm for efficient image compression is offered. The method K-SVD [11] for facial image compression is a generalization of the K-means clustering method, and is used for the iterative learning of overcomplete dictionaries for sparse coding. In line with the combined compression algorithm, in [12] an SVD-based sub-band decomposition and multi-resolution representation of digital color images is proposed.

One new approach for the hierarchical decomposition of matrix images, based on multiple SVD applications on blocks of size 2×2, is proposed in [13]. This decomposition with high computational efficiency, called Hierarchical SVD (HSVD), has the tree-like structure of a binary tree [14].

2. Related Works

In recent years, interest in the processing of multidimensional images, represented as third-order tensors, has increased significantly. As a result, many scientific papers related to the theory of numerical tensor calculus have been published [15-24]. In the paper of De Lathauwer [15] the multilinear generalization of the matrix SVD, called Higher-Order SVD (HOSVD), was introduced, through which the matrix SVD is transformed so that it can be applied to tensors. The higher-order tensor is decomposed into blocks of smaller size, where the size is characterized by a set of mode-n ranks. The survey [16] gives an overview of higher-order tensor decompositions and their applications. The basic methods for tensor decomposition are represented as higher-order extensions of the matrix SVD: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of Principal Component Analysis (PCA). Other approaches for tensor decomposition, derived from these basic methods, are also described: INDSCAL, CANDELINC, DEDICOM, and PARATUCK2. In [17] the two basic algorithms for tensor decomposition (the Tucker/HOSVD and the CP decompositions) are discussed, together with the software used for their implementation, including MATLAB toolboxes.

For higher calculation efficiency of higher-order tensor decompositions, several hierarchical approaches are offered in [18-20]. In [18] the PARATREE tensor model with a sequential unfolding SVD algorithm is introduced, for which the decomposition order (rank) can be adaptively adjusted. The low-rank PARATREE approximation is used to reduce the complexity of data compression. In [19] the hierarchical SVD for tensors of order d > 2 is developed, and in [20] the so-called tensor-train decomposition is presented - a simple non-recursive form of tensor decomposition in d dimensions, based on low-rank approximation of auxiliary unfolding matrices. In [21] the basic methods for tensor decomposition are generalized through the basic Canonical Polyadic and Tucker models. Some applications of tensor decompositions are also shown, for the aims of canonical signal correlation and separation, linear regression, feature extraction and classification, and the analysis of big datasets. In [22] an adaptive data approximation technique based on a hierarchical tensor-based transformation is developed. The experimental results indicate that the new technique can achieve higher compression ratios than the existing methods, including wavelet transforms and single-level tensor approximation.

This paper presents a new approach for the decomposition of multidimensional images through a Hierarchical SVD (HSVD) algorithm, based on the radix-(2×2) SVD for a matrix of size 2×2. Section 3 gives the HSVD algorithm for a square matrix of size N×N for N=2^n; Section 4 introduces the evaluation of the computational complexity of the algorithm, and Section 5 its representation through a binary tree-like structure. Sections 6, 7 and 8 describe the HSVD algorithm for images represented as tensors of size 2×2×2, 4×4×4 and N×N×N (N=2^n).

3. Hierarchical 2D SVD for the matrix [X] of size N×N

The direct SVD for the image, represented by the matrix [X] of size N×N, is defined by the relation [5]:

[X] = [U][Λ_s][V]^t,  (1)

where the matrices [U] = [U_1, U_2, ..., U_N] and [V] = [V_1, V_2, ..., V_N] are composed of the left singular vectors U_s and the right singular vectors V_s for s = 1, 2, ..., N, respectively, of the matrices [Y] = [X][X]^t and [Z] = [X]^t[X]; [Λ_s] = diag[σ_1, σ_2, ..., σ_N] is a diagonal matrix which consists of the singular values σ_s = √λ_s of the two symmetric matrices [Y] and [Z] (λ_s are the eigenvalues of these matrices). From Eq. (1) it follows that the description of the decomposition of a matrix of size N×N needs

International Journal Multimedia and Image Processing (IJMIP), Volume 5, Issues 3/4, September/December 2015

Copyright © 2015, Infonomics Society 287

Page 3: Hierarchical Decomposition of 2D/3D Images, Based on SVD 2×2infonomics-society.org/wp-content/uploads/ijmip/published-papers/... · SVD for a matrix of size 2 2. In the paper are

N×(2N+1) parameters in total, i.e., in the general case the SVD is a decomposition of the "overcomplete" kind. In particular, the direct SVD of the matrix [X] of size 2×2 (N=2), denoted here as SVD2×2, is described by the relation:

[X] = [a, b; c, d] = [u_11, u_12; u_21, u_22] [σ_1, 0; 0, σ_2] [v_11, v_21; v_12, v_22]
    = σ_1 U_1 V_1^t + σ_2 U_2 V_2^t = σ_1 [u_11; u_21][v_11, v_21] + σ_2 [u_12; u_22][v_12, v_22],  (2)

or

[X] = [a, b; c, d] = [U][Λ_s][V]^t = σ_1[L_1] + σ_2[L_2] = [C_1] + [C_2].  (3)

Here a, b, c, d are the pixels of the image [X];

[C_1] = σ_1[L_1], [C_2] = σ_2[L_2], [U] = [U_1, U_2], [V] = [V_1, V_2], [Λ_s] = diag[σ_1, σ_2],  (4)

σ_1 = √λ_1, σ_2 = √λ_2, [L_1] = U_1 V_1^t, [L_2] = U_2 V_2^t,  (5)

U_1 = [u_11, u_21]^t = (1/√(2A)) [√(A+η), √(A−η)]^t,  U_2 = [u_12, u_22]^t = (1/√(2A)) [√(A−η), −√(A+η)]^t,  (6)

V_1 = [v_11, v_21]^t = (1/√(2A)) [√(A+ν), √(A−ν)]^t,  V_2 = [v_12, v_22]^t = (1/√(2A)) [√(A−ν), −√(A+ν)]^t,  (7)

ν = a²+c²−b²−d²;  η = a²+b²−c²−d²;  μ = ab+cd;  δ = ac+bd;  ω = a²+b²+c²+d²,  (8)

A = √(ν²+4μ²) = √(η²+4δ²);  λ_1 = (ω+A)/2;  λ_2 = (ω−A)/2  (9)

(ν²+4μ² = η²+4δ² ≥ 0);

U_1 and U_2 are the eigenvectors of the matrix [Y] = [X][X]^t, for which [Y]U_s = λ_s U_s for s = 1, 2; V_1 and V_2 are the eigenvectors of the matrix [Z] = [X]^t[X], for which [Z]V_s = λ_s V_s for s = 1, 2; σ_1 and σ_2 are the singular values, corresponding to the common eigenvalues λ_1 and λ_2 of the symmetric matrices [Y] and [Z]; [C_1] and [C_2] are the eigen images of the matrix [X].
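To make Eqs. (3)-(9) concrete, the kernel can be sketched in a few lines of Python (an illustrative sketch, not from the paper). One caveat: the closed forms (6)-(7) fix U_s and V_s only up to sign, so this sketch derives each V_s from its paired U_s as V_s = [X]^t U_s / σ_s; that choice keeps the reconstruction [C_1] + [C_2] = [X] exact for any block:

```python
import math

def svd2x2(a, b, c, d):
    """Decompose [X] = [[a, b], [c, d]] into its two eigen images [C1], [C2]."""
    eta   = a*a + b*b - c*c - d*d        # eta, Eq. (8)
    delta = a*c + b*d                    # delta, Eq. (8)
    omega = a*a + b*b + c*c + d*d        # omega, Eq. (8)
    A = math.sqrt(eta*eta + 4*delta*delta)           # Eq. (9)
    sigma = (math.sqrt((omega + A) / 2),             # sigma1 = sqrt(lambda1)
             math.sqrt(max(omega - A, 0.0) / 2))     # sigma2 = sqrt(lambda2)
    t1 = 0.5 * math.atan2(2*delta, eta)              # theta1, so U1 = [cos t1, sin t1]^t
    U = ((math.cos(t1), math.sin(t1)),
         (math.sin(t1), -math.cos(t1)))
    comps = []
    for s, u in zip(sigma, U):
        if s > 1e-12:                                # V_s = [X]^t U_s / sigma_s
            v = ((a*u[0] + c*u[1]) / s, (b*u[0] + d*u[1]) / s)
        else:
            v = (0.0, 0.0)                           # sigma_s = 0 -> zero component
        comps.append([[s*u[0]*v[0], s*u[0]*v[1]],
                      [s*u[1]*v[0], s*u[1]*v[1]]])
    return comps                                     # [C1], [C2]

C1, C2 = svd2x2(4.0, 2.0, 1.0, 3.0)
# [C1] + [C2] restores [X] exactly
assert all(abs(C1[i][j] + C2[i][j] - [[4.0, 2.0], [1.0, 3.0]][i][j]) < 1e-9
           for i in range(2) for j in range(2))
```

Deriving V_s from U_s rather than from the θ_2 closed form avoids the sign bookkeeping that blocks with a negative determinant would otherwise require.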

The inverse SVD for the matrix [X] of size 2×2 (ISVD2×2) is defined as given below:

[Λ_s] = [σ_1, 0; 0, σ_2] = [U]^t [X] [V],  (10)

[U]^t = [U_1, U_2]^t = (1/√(2A)) [√(A+η), √(A−η); √(A−η), −√(A+η)] = [cosθ_1, sinθ_1; sinθ_1, −cosθ_1],  (11)

[V] = [V_1, V_2] = (1/√(2A)) [√(A+ν), √(A−ν); √(A−ν), −√(A+ν)] = [cosθ_2, sinθ_2; sinθ_2, −cosθ_2].  (12)

Here θ_1 = (1/2) arctg(2δ/η) and θ_2 = (1/2) arctg(2μ/ν) are the rotation angles, in the clockwise and counterclockwise directions, of the coordinate system (i,j) in which the matrix [X] is defined. In this case, the direct SVD of size 2×2 (SVD2×2) is represented through rotation matrices which contain the angles θ_1 and θ_2, as follows:

[X] = [a, b; c, d] = σ_1 [cosθ_1 cosθ_2, cosθ_1 sinθ_2; sinθ_1 cosθ_2, sinθ_1 sinθ_2] + σ_2 [sinθ_1 sinθ_2, −sinθ_1 cosθ_2; −cosθ_1 sinθ_2, cosθ_1 cosθ_2].  (13)

The inverse SVD of size 2×2 (ISVD2×2) is defined by the relation (14):

[σ_1, 0; 0, σ_2] = [cosθ_1, sinθ_1; sinθ_1, −cosθ_1] [a, b; c, d] [cosθ_2, sinθ_2; sinθ_2, −cosθ_2].  (14)
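The rotational form of Eqs. (10)-(14) can be checked numerically. In this illustrative sketch (with arbitrary test pixels, not from the paper), the product [U]^t[X][V] of Eq. (10) comes out diagonal; the sum of squares of its diagonal equals ω, and the product of the diagonal elements equals det[X] (for blocks with a negative determinant one diagonal element carries a minus sign, absorbed by the sign conventions of the singular vectors):

```python
import math

def rot(t):
    # [cos t, sin t; sin t, -cos t] -- the matrix form of Eqs. (11)-(12)
    return [[math.cos(t), math.sin(t)],
            [math.sin(t), -math.cos(t)]]

def mul(P, Q):
    # 2x2 matrix product
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b, c, d = 4.0, 2.0, 1.0, 3.0                  # arbitrary test pixels
eta, nu = a*a + b*b - c*c - d*d, a*a + c*c - b*b - d*d
mu, delta = a*b + c*d, a*c + b*d
t1 = 0.5 * math.atan2(2*delta, eta)              # theta1
t2 = 0.5 * math.atan2(2*mu, nu)                  # theta2
U, V = rot(t1), rot(t2)                          # both symmetric, so [U]^t = [U]
S = mul(mul(U, [[a, b], [c, d]]), V)             # Eq. (10): [U]^t [X] [V]
assert abs(S[0][1]) < 1e-9 and abs(S[1][0]) < 1e-9               # off-diagonal vanishes
assert abs(S[0][0]**2 + S[1][1]**2 - (a*a+b*b+c*c+d*d)) < 1e-9   # diagonal energy = omega
assert abs(S[0][0]*S[1][1] - (a*d - b*c)) < 1e-9                 # product = det [X]
```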

Figure 1 shows the algorithm, based on the trigonometric form (13), for the calculation of the matrices [C_1] and [C_2] by using the direct SVD of the matrix [X] of size 2×2. This algorithm is the basic building element - the kernel - used to create the HSVD algorithm. As follows from Eq. (13), the number of parameters needed for its description is 4, i.e., the SVD2×2 is not "overcomplete". The energies of the image [X] and of each eigen image [C_1], [C_2] are, respectively:

International Journal Multimedia and Image Processing (IJMIP), Volume 5, Issues 3/4, September/December 2015

Copyright © 2015, Infonomics Society 288

Page 4: Hierarchical Decomposition of 2D/3D Images, Based on SVD 2×2infonomics-society.org/wp-content/uploads/ijmip/published-papers/... · SVD for a matrix of size 2 2. In the paper are

E_X = Σ_{i=1}^{2} Σ_{j=1}^{2} x²(i,j) = a²+b²+c²+d² = ω,  (15)

E_C1 = Σ_{i=1}^{2} Σ_{j=1}^{2} c_1²(i,j) = λ_1 = (ω+A)/2,  E_C2 = Σ_{i=1}^{2} Σ_{j=1}^{2} c_2²(i,j) = λ_2 = (ω−A)/2.  (16)

From the Parseval condition for energy preservation (E_X = E_C1 + E_C2) and from Eqs. (15) and (16) it follows that E_C1 ≥ E_C2, i.e., the energy E_X of the matrix [X] is concentrated mainly in the first SVD2×2 component. The concentration degree is defined by the relation:

E_C1 / (E_C1 + E_C2) = (1/2)[1 + (A/ω)].  (17)

Figure 1. The algorithm for the direct SVD of the matrix [X] of size 2×2 (SVD2×2): from the pixels a, b, c, d the parameters ν, η, μ, δ, ω, A, the angles θ_1, θ_2 and the singular values σ_1, σ_2 are calculated, from which the elements of the eigen images [C_1] and [C_2] are obtained

In the particular case when the matrix [X] has equal elements (a=b=c=d), from Eqs. (13), (15), (16) and (17) one obtains E_X = E_C1 = 4a², E_C2 = 0, and a concentration degree equal to 1. Hence, the total energy of the matrix [X] is concentrated in the first SVD component only.

The hierarchical n-level SVD (HSVD) for the image matrix [X(N)] of size 2^n×2^n pixels (N=2^n) is executed through multiple applications of the SVD2×2 on the image sub-blocks (sub-matrices) of size 2×2, followed by rearrangement of the so-calculated components. In particular, when the image matrix [X(4)] is of size 2²×2² (N=2²=4), the number of hierarchical levels of the HSVD is n=2. The flow graph which represents the HSVD calculation is shown in Figure 2.

In the first level (r=1) of the HSVD, the matrix [X(4)] is divided into four sub-matrices of size 2×2, as shown in the left part of Figure 2. There, the elements of the sub-matrices on which the SVD2×2 is applied in the first hierarchical level are colored in the same color (yellow, green, blue, and red). The elements of the sub-matrices are:

[X(4)] = [ [X_1(2)], [X_2(2)]; [X_3(2)], [X_4(2)] ],  where [X_k(2)] = [a_k, b_k; c_k, d_k] for k=1,2,3,4.  (18)

To each sub-matrix [X_k(2)] of size 2×2 (k=1,2,3,4) the SVD2×2 is applied, in accordance with Eq. (3). As a result, it is decomposed into two components:

[X_k(2)] = σ_{1,k}[L_{1,k}(2)] + σ_{2,k}[L_{2,k}(2)] = [C_{1,k}(2)] + [C_{2,k}(2)] for k=1,2,3,4,  (19)

where σ_{1,k} = √((ω_k+A_k)/2), σ_{2,k} = √((ω_k−A_k)/2), [L_{1,k}(2)] = U_{1,k}V_{1,k}^t, [L_{2,k}(2)] = U_{2,k}V_{2,k}^t.

Using the matrices [C_{m,k}(2)] of size 2×2 for k=1,2,3,4 and m=1,2, the matrices [C_m(4)] of size 4×4 are composed:

[C_m(4)] = [ [C_{m,1}(2)], [C_{m,2}(2)]; [C_{m,3}(2)], [C_{m,4}(2)] ],  where the elements of [C_{m,k}(2)] are c_{ij}(m,k) for i,j = 1,2,  (20)

for m=1,2.

Hence, the SVD decomposition of the matrix [X] in the first level is represented by two components:

[X(4)] = [C_1(4)] + [C_2(4)] = [ [C_{1,1}(2)], [C_{1,2}(2)]; [C_{1,3}(2)], [C_{1,4}(2)] ] + [ [C_{2,1}(2)], [C_{2,2}(2)]; [C_{2,3}(2)], [C_{2,4}(2)] ].  (21)

In the second level (r=2) of the HSVD, the SVD2×2 is applied four times to each matrix [C_m(4)] of size 4×4. Unlike the transform in the previous level, in the second level the SVD2×2 is applied to sub-matrices [C_{m,k}(2)] of size 2×2 whose elements are mutually interlaced, defined in accordance with the scheme given in the upper part of Figure 2. The elements of the sub-matrices on which the SVD2×2 is applied in the second hierarchical level are colored in the same color (yellow, green, blue, and red). As seen from the figure, the elements of the 2×2 sub-matrices in the second level are not neighbors, but are placed one element apart in the horizontal and vertical directions. As a result, each matrix [C_m(4)] is decomposed into two components:

[C_m(4)] = [C_{m,1}(4)] + [C_{m,2}(4)] for m=1,2.  (22)

Then the full decomposition of the matrix [X] is represented by the relation:

[X(4)] = [C_{1,1}(4)] + [C_{1,2}(4)] + [C_{2,1}(4)] + [C_{2,2}(4)] = Σ_{m=1}^{2} Σ_{s=1}^{2} [C_{m,s}(4)].  (23)
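The two-level scheme of Eqs. (18)-(23) can be sketched end to end in Python (an illustrative sketch, not the authors' code; the kernel derives each right singular vector from its paired left one, which fixes the sign ambiguity of the closed forms). Level 1 processes the four adjacent 2×2 quadrants; level 2 processes the interlaced 2×2 sub-matrices whose elements lie one position apart. Because each SVD2×2 is exact, the four components [C_{m,s}(4)] sum back to [X(4)]:

```python
import math

def svd2x2(a, b, c, d):
    # SVD2x2 kernel: [X] -> [C1] + [C2], Eqs. (3)-(9)
    eta, delta = a*a + b*b - c*c - d*d, a*c + b*d
    omega = a*a + b*b + c*c + d*d
    A = math.sqrt(eta*eta + 4*delta*delta)
    sigma = (math.sqrt((omega + A)/2), math.sqrt(max(omega - A, 0.0)/2))
    t1 = 0.5 * math.atan2(2*delta, eta)
    U = ((math.cos(t1), math.sin(t1)), (math.sin(t1), -math.cos(t1)))
    out = []
    for s, u in zip(sigma, U):
        v = ((a*u[0] + c*u[1])/s, (b*u[0] + d*u[1])/s) if s > 1e-12 else (0.0, 0.0)
        out.append([s*u[0]*v[0], s*u[0]*v[1], s*u[1]*v[0], s*u[1]*v[1]])
    return out                      # each component flattened as [c11, c12, c21, c22]

def split(M, groups):
    # apply the kernel to every 4-element group; return the two component matrices
    A_, B_ = [[0.0]*4 for _ in range(4)], [[0.0]*4 for _ in range(4)]
    for g in groups:
        c1, c2 = svd2x2(*(M[i][j] for i, j in g))
        for (i, j), x, y in zip(g, c1, c2):
            A_[i][j], B_[i][j] = x, y
    return A_, B_

adjacent   = [[(i, j), (i, j+1), (i+1, j), (i+1, j+1)] for i in (0, 2) for j in (0, 2)]
interlaced = [[(i, j), (i, j+2), (i+2, j), (i+2, j+2)] for i in (0, 1) for j in (0, 1)]

def hsvd4(X):
    comps = split(X, adjacent)      # level 1: Eqs. (18)-(21)
    return [Cms for Cm in comps for Cms in split(Cm, interlaced)]  # level 2: Eqs. (22)-(23)

X = [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0],
     [9.0, 1.0, 2.0, 3.0], [4.0, 5.0, 6.0, 7.0]]
comps = hsvd4(X)
assert len(comps) == 4
assert all(abs(sum(C[i][j] for C in comps) - X[i][j]) < 1e-9
           for i in range(4) for j in range(4))
```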


Figure 2. Flow graph of the HSVD algorithm represented through the vector-radix (2×2) for a matrix of size 4×4

Hence, the decomposition of an image of size 4×4 comprises four components. The matrix [X(8)] is of size 2³×2³ (N=2³=8 for n=3), and in this case the HSVD is executed through multiple calculations of


the SVD2×2 on blocks of size 2×2 in all levels (the total number of decomposition components is eight). In the first and second levels, the SVD2×2 is executed in accordance with the scheme shown in Figure 1. In the third level, the SVD2×2 is applied to sub-matrices of size 2×2 whose elements are defined in a similar way, but the elements of the same color (i.e., which belong to the same sub-matrix) are moved three elements apart in the horizontal and vertical directions.

The so-described HSVD algorithm can be generalized for the cases when the image [X(2^n)] is of size 2^n×2^n pixels. Then the relation (23) becomes:

[X(2^n)] = Σ_{p_1=1}^{2} Σ_{p_2=1}^{2} ... Σ_{p_n=1}^{2} [C_{p_1,p_2,...,p_n}(2^n)].  (24)

The maximum number of HSVD decomposition levels is n, the maximum number of decomposition components (24) is 2^n, and the distance in the horizontal and vertical directions between the elements of the 2×2 blocks in level r is (2^{r−1}−1) elements, for r = 1, 2, ..., n.

4. Computational complexity of the Hierarchical 2D SVD

4.1. Computational complexity of SVD2×2

The computational complexity can be defined by using Eq. (3), taking into account the multiplication and addition operations needed for the preliminary calculation of the components ν, η, μ, δ, ω, A, λ_1, λ_2, σ_1, σ_2, defined by Eqs. (4)-(9). Then:

- the number of multiplications needed for the calculation of Eq. (3) is Σ_m = 39;
- the number of additions needed for the calculation of Eq. (3) is Σ_s = 15.

The total number of algebraic operations, executed in floating point, for the SVD of size 2×2 is then:

(25)

O_SVD(2×2) = Σ_m + Σ_s = 54.  (25)

4.2. Computational complexity of the HSVD of size 2^n×2^n

The computational complexity is defined on the basis of the SVD2×2. In this case, the number M of sub-matrices of size 2×2 which comprise the image of size 2^n×2^n is 2^{n−1}×2^{n−1} = 4^{n−1}, and the number of decomposition levels is n. Then:

- the number of SVD2×2 kernels in the first level is M_1 = M = 4^{n−1};
- the number of SVD2×2 kernels in the second level is M_2 = 2M = 2×4^{n−1};
- the number of SVD2×2 kernels in level n is M_n = 2^{n−1}M = 2^{n−1}×4^{n−1}.

The total number of SVD2×2 kernels is correspondingly M_Σ = M(1+2+...+2^{n−1}) = 4^{n−1}(2^n−1) = 2^{2n−2}(2^n−1). Then the total number of algebraic operations for the HSVD of size 2^n×2^n is:

O_HSVD(2^n×2^n) = M_Σ × O_SVD(2×2) = 54×2^{2n−2}(2^n−1) = 27×2^{2n−1}(2^n−1).  (26)
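The counting behind Eq. (26) is easy to cross-check by direct summation (a small illustrative check): adding the kernels level by level reproduces the closed form M_Σ = 4^{n−1}(2^n−1), and multiplying by the 54 operations of one kernel gives Eq. (26):

```python
for n in range(1, 11):
    M = 4 ** (n - 1)                         # 2x2 sub-matrices in a 2^n x 2^n image
    kernels = sum(2 ** (r - 1) * M for r in range(1, n + 1))   # M1 + M2 + ... + Mn
    assert kernels == 4 ** (n - 1) * (2 ** n - 1)              # closed form of M_sigma
    assert 54 * kernels == 27 * 2 ** (2*n - 1) * (2 ** n - 1)  # Eq. (26)
```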

4.3. Computational complexity of the SVD of size 2^n×2^n

The calculation of the matrices [Y(N)] and [Z(N)] of size N×N for N=2^n needs Σ_m = 2^{2n} multiplications and Σ_s = 2^{n+1}(2^n−1) additions in total. The global number of operations is:

O_{Y,Z}(N) = 2^{2n} + 2^{n+1}(2^n−1) = 2^{n+1}(3×2^{n−1}−1).  (27)

In accordance with [21], the number of operations for the iterative calculation of all N eigenvalues and of the N-component eigenvectors of a matrix of size N×N for N=2^n, with L iterations, is correspondingly:

O_val(N) = (1/6)N(81N²+17N+42) = (1/6)2^n(81×2^{2n}+17×2^n+42),  (28)

O_vec(N) = N[2N(NL+L+1)−1] = 2^n[2^{n+1}(2^nL+L+1)−1].  (29)

The number of operations needed for the calculation of the eigenvectors U_s and V_s in accordance with Eq. (29) should be doubled. From the analysis of Eq. (1) it follows that:

- the number of multiplications needed for all components is Σ_m = 2×2^{3n};
- the number of additions needed for all components is Σ_s = 2^{2n}(2^n−1).

Then the total number of operations needed for the calculation of Eq. (1) is:

O_D(N) = 2^{3n+1} + 2^{2n}(2^n−1) = 2^{2n}(3×2^n−1).  (30)

Hence, the total number of algebraic operations needed for the execution of the SVD of size 2^n×2^n is:

O_SVD(2^n×2^n) = O_{Y,Z}(2^n) + O_val(2^n) + 2O_vec(2^n) + O_D(2^n).  (31)

4.4. Relative computational complexity of the HSVD

The relative computational complexity of the HSVD can be calculated on the basis of Eqs. (31) and (26), using the relation below:


ψ(n,L) = O_SVD(2^n×2^n) / O_HSVD(2^n×2^n).  (32)

For n = 2, 3, 4, 5 (i.e., for image blocks of size 4×4, 8×8, 16×16 and 32×32 pixels), the values of ψ(n,L) for L=10 are given in Table 1.

Table 1. Coefficient ψ(n,L) of the relative reduction of the computational complexity of the HSVD towards the SVD as a function of n, for L=10

n        2     3     4     5
ψ(n,10)  5.94  4.21  3.67  3.44

For big values of n the ratio ψ(n,L) does not depend on n and tends towards:

ψ(n,L) ≈ 0.1(3L+1) for n → ∞.  (33)

Hence, for big values of n, when the number of iterations L ≥ 6, the ratio ψ(n,L) > 1 and the computational complexity of the HSVD is lower than that of the SVD. In practice, the value of L is significantly higher than 4. For big values of n and L=10 the coefficient is ψ(n,10) = 3.1, and the computational complexity of the HSVD is about three times lower than that of the SVD.

5. Representation of the 2D HSVD algorithm through a tree-like structure

The tree-like structure of the HSVD algorithm with n=2 levels, shown in Figure 3, is built on the basis of Eq. (23) for an image block of size 4×4. As can be seen, this is a binary tree. For a block of size 8×8, this binary tree would have 3 levels (n=3).


Figure 3. Binary tree, representing the HSVD algorithm for the image matrix [X] of size 4×4

Each tree branch has a corresponding eigenvalue λ_{s,k} (resp. singular value σ_{s,k} = √λ_{s,k}) for level 1, and λ_{s,k}(m) (resp. σ_{s,k}(m)) for level 2 (m = 1, 2). The total number of tree branches shown in Figure 3 is equal to six. It is possible to cut off some branches when σ_{s,k} ≈ 0 or σ_{s,k}(m) ≈ 0, i.e., when the corresponding singular values are equal to zero, or are smaller than a small threshold δ_{s,k} (resp. δ_{s,k}(m)). To cut off one HSVD component [C_i] in a given level, it is necessary for all singular values which participate in this component to be equal to zero, or very close to it. As a result, the decomposition in the corresponding branch can be stopped before the last level n. It follows that the HSVD algorithm is adaptive with respect to the contents of each image block, and can easily be adjusted to the requirements of each particular application.
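The branch cut-off described above can be sketched as a simple truncation rule (an illustration only: `delta` is a hypothetical threshold name, and NumPy's general-purpose SVD stands in for the SVD2×2 used in the paper):

```python
import numpy as np

def truncated_components(X, delta):
    """Keep only the rank-1 SVD components of X whose singular value
    exceeds the threshold delta (low-energy branches are cut off)."""
    U, s, Vt = np.linalg.svd(X)
    return [s[k] * np.outer(U[:, k], Vt[k, :])
            for k in range(len(s)) if s[k] > delta]

# A nearly rank-1 block: its second singular value is tiny,
# so that branch of the decomposition tree is pruned.
X = np.array([[2.0, 2.0],
              [2.0, 2.000001]])
kept = truncated_components(X, delta=0.01)
print(len(kept))   # prints 1: only the high-energy component survives
```

The retained component alone already approximates the block with an error equal to the discarded singular value, which is exactly the adaptivity mechanism described above.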

6. Decomposition of a tensor of size 2×2×2

The 3rd-order tensor [T] with rank 2 and size 2×2×2 is denoted here as [T_{2×2×2}], and is the kernel of the decomposition of the tensor [T] of size N×N×N (a 3rd-order tensor with rank N, for N = 2^n). After unfolding the tensor [T_{2×2×2}] in the vertical direction, one obtains:

unfold[T_{2×2×2}] = [[X_1] [X_2]], where [X_1] = [a b; c d] and [X_2] = [e f; g h].  (34)
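For concreteness, the vertical unfolding of Eq. (34) can be sketched with NumPy (an illustrative layout only: it is assumed here that axis 0 of the array indexes the two frontal 2×2 slices of the kernel):

```python
import numpy as np

# A 2x2x2 kernel tensor: axis 0 selects the two frontal slices,
# so T[0] = [[a, b], [c, d]] and T[1] = [[e, f], [g, h]].
T = np.array([[[1, 2],
               [3, 4]],
              [[5, 6],
               [7, 8]]], dtype=float)

# Unfolding of Eq. (34): place the two slices side by side
# to obtain the couple of 2x2 matrices [X1], [X2].
X1, X2 = T[0], T[1]
unfolded = np.hstack([X1, X2])   # 2x4 matrix [X1 | X2]
print(unfolded)
```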

The HSVD algorithm for the tensor [T] of size 2×2×2, based on the SVD2×2, is shown in Fig. 4. In the first level of the HSVD for the tensor [T_{2×2×2}], the SVD2×2 is applied to each of the matrices [X_1] and [X_2], which gives:

[X_1] = [a b; c d] = [C_{11}] + [C_{12}],   [X_2] = [e f; g h] = [C_{21}] + [C_{22}].  (35)
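The SVD2×2 step of Eq. (35) can be sketched as follows; for simplicity the sketch uses NumPy's general-purpose SVD instead of the closed-form 2×2 formulas of the paper:

```python
import numpy as np

def svd2x2(X):
    """Split a 2x2 matrix into its two rank-1 SVD components [C1], [C2]."""
    U, s, Vt = np.linalg.svd(X)
    C1 = s[0] * np.outer(U[:, 0], Vt[0, :])   # high-energy component
    C2 = s[1] * np.outer(U[:, 1], Vt[1, :])   # low-energy component
    return C1, C2

X1 = np.array([[1.0, 2.0],
               [3.0, 4.0]])
C11, C12 = svd2x2(X1)
# The two components restore [X1] exactly: [X1] = [C11] + [C12].
assert np.allclose(C11 + C12, X1)
```

Because NumPy returns the singular values in descending order, [C11] always carries at least as much energy as [C12], matching the ordering assumed in the text.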

The obtained matrices [C_{i,j}] of size 2×2 for i, j = 1, 2 are rearranged into new couples in accordance with the decrease of their energies σ²_{i,j}. After the rearrangement, the first couple of matrices [C_{11}] and [C_{21}], which have high energy, defines the tensor [T_{1(2×2×2)}], and the second couple [C_{12}] and [C_{22}], which have lower energy, defines the tensor [T_{2(2×2×2)}]. Then:

[T_{2×2×2}] = [T_{1(2×2×2)}] + [T_{2(2×2×2)}].  (36)

After unfolding both tensors in the horizontal direction, one obtains:


unfold[T_{1(2×2×2)}] = [[X_{11}] [X_{21}]],   unfold[T_{2(2×2×2)}] = [[X_{12}] [X_{22}]].  (37)

In the second level of the HSVD, the SVD2×2 is applied to each matrix [X_{i,j}] of size 2×2 in Eq. (37), which gives:

[X_{11}] = [C_{111}] + [C_{112}],  [X_{12}] = [C_{121}] + [C_{122}],  [X_{21}] = [C_{211}] + [C_{212}],  [X_{22}] = [C_{221}] + [C_{222}].  (38)

The obtained matrices [C_{i,j,k}] of size 2×2 for i, j, k = 1, 2 are rearranged into four new couples of similar energy σ²_{i,j,k}, in the order defined by its decrease. Each of these four couples of matrices defines a corresponding tensor of size 2×2×2. After their unfolding, one obtains:

unfold{[T_{1(2×2×2)}(1)] + [T_{1(2×2×2)}(2)] + [T_{2(2×2×2)}(1)] + [T_{2(2×2×2)}(2)]} = [C_{111}] + [C_{112}] + [C_{121}] + [C_{122}] + [C_{211}] + [C_{212}] + [C_{221}] + [C_{222}].  (39)

As a result of the execution of the two levels of the HSVD2×2×2, the tensor [T_{2×2×2}] is represented as:

[T_{2×2×2}] = [T_{1(2×2×2)}(1)] + [T_{1(2×2×2)}(2)] + [T_{2(2×2×2)}(1)] + [T_{2(2×2×2)}(2)] = Σ_{i=1}^{2} Σ_{j=1}^{2} [T_{i(2×2×2)}(j)].  (40)

Here the tensors [T_{i(2×2×2)}(j)] of size 2×2×2 are arranged in accordance with the decrease of the singular values σ_{i,j,k} of the matrices [C_{i,j,k}]. The flow graph of the 2-level HSVD algorithm for the tensor [T] of size 2×2×2 is shown in Fig. 5. From Eq. (25) it follows that the total number of algebraic operations for the HSVD of the tensor [T_{2×2×2}] is:

O_HSVD(2×2×2) = 6·O_SVD(2×2) = 6×54 = 324.  (41)
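The two decomposition levels of Eqs. (35)-(40) can be sketched end to end; this illustrative version again relies on NumPy's SVD for each 2×2 block and omits the energy-based rearrangement, which does not change the sum of the components:

```python
import numpy as np

def svd2x2(X):
    # Split a 2x2 matrix into its two rank-1 components, ordered by energy.
    U, s, Vt = np.linalg.svd(X)
    return [s[k] * np.outer(U[:, k], Vt[k, :]) for k in (0, 1)]

def hsvd_2x2x2(T):
    """Two-level HSVD of a 2x2x2 tensor: returns 4 tensors whose sum is T."""
    X1, X2 = T[0], T[1]                       # vertical unfolding, Eq. (34)
    C11, C12 = svd2x2(X1)                     # level 1, Eq. (35)
    C21, C22 = svd2x2(X2)
    components = []
    for A, B in [(C11, C21), (C12, C22)]:     # couples forming [T1], [T2]
        A1, A2 = svd2x2(A)                    # level 2, Eq. (38)
        B1, B2 = svd2x2(B)
        components.append(np.stack([A1, B1]))
        components.append(np.stack([A2, B2]))
    return components

T = np.arange(8, dtype=float).reshape(2, 2, 2)
parts = hsvd_2x2x2(T)
assert np.allclose(sum(parts), T)            # Eq. (40): T is fully restored
```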


Figure 4. HSVD algorithm for the tensor [T] of size 2×2×2, based on the SVD2×2, and binary 2-level tree



Figure 5. The flow graph of the 2-level HSVD for the tensor [T] of size 2×2×2, based on the SVD2×2

7. The 3D HSVD algorithm for an image of size 4×4×4

In this case, the tensor [T_{4×4×4}] (for N = 4) is divided into 8 sub-tensors [T_{2×2×2}], as shown in Fig. 6. The pixels which belong to one kernel are colored in the same color: yellow, red, green, blue, white, purple, light blue, and orange. The HSVD4×4×4 algorithm is executed in two levels. In the first level of the HSVD4×4×4, on each kernel [T_{2×2×2}(i)] of the tensor [T_{4×4×4}], where i = 1, 2, ..., 8, is applied the HSVD2×2×2 algorithm shown in Fig. 4. As a result, each kernel is represented as a sum of four components:

[T_{2×2×2}(i)] = [T_{i,1}(1)] + [T_{i,1}(2)] + [T_{i,2}(1)] + [T_{i,2}(2)] for i = 1, 2, ..., 8.  (42)
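The division into eight kernels can be sketched as follows; since the exact pixel grouping is defined only by the colors in Fig. 6, the sketch assumes the simplest variant, the eight spatial octants of the tensor:

```python
import numpy as np

def split_into_kernels(T):
    """Divide a 4x4x4 tensor into eight 2x2x2 sub-tensors (kernels)."""
    return [T[i:i + 2, j:j + 2, k:k + 2]
            for i in (0, 2) for j in (0, 2) for k in (0, 2)]

T = np.arange(64, dtype=float).reshape(4, 4, 4)
kernels = split_into_kernels(T)
print(len(kernels), kernels[0].shape)   # 8 kernels, each of shape (2, 2, 2)
```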


Figure 6. Division of the tensors [T_{i(4×4×4)}(j)] into the kernels [T_{j(2×2×2)}(i)] in the second HSVD4×4×4 level, where the HSVD2×2×2 is applied on each group of pixels of the same color (32 in total)

The decomposition of all kernels leads to the representation of the tensor [T_{4×4×4}] as a sum of four components as well:

[T_{4×4×4}] = Σ_{i=1}^{2} Σ_{j=1}^{2} [T_{i(4×4×4)}(j)].  (43)

Each of the four tensors [T_{i(4×4×4)}(j)] consists of eight kernels [T_{i,k(2×2×2)}(j)] for i = 1, 2, k = 1, 2, j = 1, 2, and hence the total number of kernels is 32. The tensors [T_{i(4×4×4)}(j)] in Eq. (43) are rearranged in accordance with the energy decrease of the kernels [T_{i,k(2×2×2)}(j)] which build them.

In the second level of the HSVD4×4×4, each of the four tensors [T_{i(4×4×4)}(j)] from Eq. (43) is divided into 8 sub-tensors (kernels) [T_{i,k(2×2×2)}(j)] for i = 1, 2, j = 1, 2 and k = 1, 2, in the way defined by the spatial sets of pixels of the same color, shown in Fig. 6. The color of the pixels for each kernel corresponds to that of the first level of the HSVD4×4×4 algorithm: yellow, red, green, blue, white, purple, light blue, and orange.


On each kernel the two-level HSVD2×2×2 is applied again, in accordance with Figure 4. After the execution of the second decomposition level, the tensor [T_{4×4×4}] is represented as a sum of 16 components:

[T_{4×4×4}] = Σ_{i=1}^{4} Σ_{j=1}^{4} [T_{i(4×4×4)}(j)].  (44)

The so-obtained 16 tensors [T_{i(4×4×4)}(j)] are arranged in accordance with the decreasing singular values (energies) of the kernels [T_{i,k(2×2×2)}(j)] which compose them, for i = 1, 2, j = 1, 2 and k = 1, 2. When the HSVD4×4×4 is represented through the calculation graph from Fig. 5, the total number of decomposition levels is four. The corresponding binary tree structure with 16 leaves is shown in Figure 7.

As can be seen from Figures 5, 6 and 7, in the first level of the HSVD4×4×4 the decomposition HSVD2×2×2 is applied eight times, and in the second level, 32 times.


Figure 7. Structure of the calculation graph for the HSVD4×4×4: a full binary tree with 4 levels and 16 leaves

Then, taking into account Eq. (41), the total number of algebraic operations for the HSVD4×4×4 of the tensor [T_{4×4×4}] is:

O_HSVD(4×4×4) = 8(1+4)·O_HSVD(2×2×2) = 40×324 = 12960.  (45)

As an example, the decomposition elements (tensors of size 4×4×4) which could be reduced through threshold selection (if this condition is satisfied for the second component in the first level of the HSVD4×4×4) are colored in blue in Fig. 7.
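The operation counts of Eqs. (41) and (45) are easy to tabulate; the helper below follows the paper's accounting (54 operations per SVD2×2, 6 SVD2×2 calls per HSVD2×2×2, and 8·4^(i−1) kernel decompositions in level i), which holds for n ≥ 2:

```python
# Operation counts from Eqs. (41) and (45).
O_SVD_2x2 = 54                         # algebraic operations per SVD2x2
O_HSVD_2x2x2 = 6 * O_SVD_2x2           # Eq. (41): 6 x 54 = 324

def hsvd_ops(n):
    """Operations for the HSVD of a 2^n x 2^n x 2^n tensor (n >= 2):
    decomposition level i applies the HSVD2x2x2 on 8 * 4^(i-1) kernels.
    A single 2x2x2 kernel (n = 1) costs 324 operations directly."""
    calls = sum(8 * 4 ** (i - 1) for i in range(1, n + 1))
    return calls * O_HSVD_2x2x2

print(hsvd_ops(2))   # 40 kernel decompositions -> 12960, matching Eq. (45)
print(hsvd_ops(3))   # 168 kernel decompositions -> 54432
```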

8. 3D HSVD for a tensor-represented image of size N×N×N

The decomposition of the tensor [T_{4×4×4}] in correspondence with Eq. (44) can be generalized for the case when the tensor [T_{N×N×N}] is of size N×N×N for N = 2^n. As a result of the use of the radix-(2×2) HSVD_{N×N×N} algorithm, the tensor [T_{2^n×2^n×2^n}] is represented as a sum of N² = 2^{2n} eigen tensors:

[T_{2^n×2^n×2^n}] = Σ_{i=1}^{2^n} Σ_{j=1}^{2^n} [T_{i(2^n×2^n×2^n)}(j)].  (46)

The eigen tensors [T_{i(2^n×2^n×2^n)}(j)] of size 2^n×2^n×2^n for i, j = 1, 2, ..., 2^n are arranged in accordance with the decreasing energies of the kernels [T_{i,k(2×2×2)}(j)] which build them. The computational complexity of the algorithm for the decomposition of the tensor [T_{2^n×2^n×2^n}] can be defined taking into account that the execution of the HSVD_{2^n×2^n×2^n} needs n levels, where the HSVD2×2×2 is applied on each level. Then, from Eqs. (41) and (45) follows:

O_HSVD(2^n×2^n×2^n) = 8·Σ_{i=1}^{n} 4^{i−1}·O_HSVD(2×2×2) = (8/3)(4^n − 1)·324 = 54(2^{2(n+2)} − 2^4).  (47)

In the relation above, 54 represents the number of algebraic operations for the SVD2×2, which is applied 16(N² − 1) times on the tensor [T_{N×N×N}]. Since the SVD2×2 is the basic operation of the decomposition, executed repeatedly, the new algorithm is called radix-(2×2) HSVD. In accordance with Grasedyck [19], the complexity to compute the hierarchical SVD for H-Tucker tensors of rank 2^n, mode size 2^n and order 3 is O(3×2^{3n} + 3×2^{4n}). The comparison of this computational complexity of O(2^{4n}) with that of the HSVD_{2^n×2^n×2^n} shows the advantage of the latter.

The relative computational complexity of the HSVD_{2^n×2^n×2^n} for a 3D image is:

η(n) = O_{H-Tucker}(2^n×2^n×2^n) / O_HSVD(2^n×2^n×2^n) = 3(2^{3n} + 2^{4n}) / [54(2^{2(n+2)} − 2^4)] = 2^{3n} / [288(2^n − 1)].  (48)
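With the Grasedyck bound 3×2^{3n} + 3×2^{4n} and the operation count from Eq. (47), the growth of the ratio η(n) can be tabulated (the small-n values depend on the exact reconstructed constants and are indicative only):

```python
# Complexity comparison of Eq. (48): H-Tucker (Grasedyck) vs. radix-(2x2) HSVD.
def o_htucker(n):
    return 3 * 2 ** (3 * n) + 3 * 2 ** (4 * n)   # O(3*2^3n + 3*2^4n)

def o_hsvd(n):
    return 864 * (4 ** n - 1)                    # (8/3)(4^n - 1) * 324, Eq. (47)

# Under these counts the ratio grows roughly as 2^{2n}/288,
# so the HSVD gains with the tensor size.
for n in (2, 5, 8):
    print(n, o_htucker(n) / o_hsvd(n))
```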

When the number of decomposition components in Eq. (46) is limited through threshold selection of the low-energy components, the computational complexity can be reduced further, as follows from Eq. (47). Besides, the mean square approximation error of the tensor [T_{N×N×N}], obtained as a result of the components reduction, is minimal. This follows from the fact that each kernel is defined through the HSVD2×2×2 on the basis of the "classic" SVD2×2, which minimizes the mean square approximation error.

9. Conclusions

With respect to the famous H-Tucker decomposition, the presented 3D HSVD algorithm is quasi-optimal. The basic difference from the Tucker algorithm is that each decomposition component for


a tensor of size N×N×N is represented as an outer product of 3 mutually orthogonal N-dimensional vectors, while for the 3D HSVD the component is defined by the kernels of size 2×2×2 contained in it. The basic advantages of the new decomposition for multi-dimensional images are the low computational complexity and the tree-like structure, which permits cutting off the low-energy branches through threshold selection. This makes the algorithm suitable for parallel processing on multi-processor systems, where the basic processor executes the kernel decomposition algorithm for a tensor of size 2×2×2. The 3D HSVD algorithm opens new possibilities for fast image processing in various application areas, such as image filtration, segmentation, merging, digital watermarking, extraction of a reduced number of features for pattern recognition, etc.

References

[1] Fieguth, P. (2011). Statistical image processing and

multidimensional modeling, Springer.

[2] Diamantaras, K., Kung, S. (1996). Principal

Component Neural Networks, Wiley, New York.

[3] Orfanidis, S. (2007). SVD, PCA, KLT, CCA, and All

That, Rutgers University Electrical & Computer

Engineering Department, Optimum Signal Processing, pp.

1-77.

[4] Levy, A., Lindenbaum, M. (2000). Sequential

Karhunen-Loeve Basis Extraction and its Application to

Images, IEEE Trans. on Image Processing, Vol. 9, No. 8,

pp. 1371-1374.

[5] Sadek, R. (2012). SVD Based Image Processing

Applications: State of The Art, Intern. J. of Advanced

Computer Science and Applications, 3(7), pp. 26-34.

[6] Drinea, E., Drineas P., Huggins, P. (2001). A

Randomized SVD Algorithm for Image Processing Appl.,

Proc. of the 8th Panhellenic Conf. on Informatics, Y.

Manolopoulos, S. Evripidou (Eds), Nicosia, Cyprus, pp.

278-288.

[7] Holmes, M., Gray, A., Isbell, C. (2008). QUIC-SVD:

Fast SVD using Cosine trees, Proc. of NIPS, pp. 673-680.

[8] Foster, B., Mahadevan, S., Wang, R. (2012). A GPU-

based Approximate SVD Algorithm, 9th Intern. Conf. on

Parallel Processing and Applied Mathematics, 2011,

LNCS, Vol. 7203, pp. 569-578.

[9] Yoshikawa, M., Gong, Y., Ashino, R., Vaillancourt, R.

(2005). Case study on SVD multiresolution analysis,

CRM-3179, pp. 1-18.

[10] Waldemar, P., Ramstad, T. (1997). Hybrid KLT-SVD

image compression, IEEE Intern. Conf. on Acoustics,

Speech, and Signal Processing, IEEE Comput. Soc. Press,

Los Alamitos, pp. 2713-2716.

[11] Aharon, M., Elad, M., Bruckstein, A. (2006). The K-

SVD: an algorithm for designing of overcomplete

dictionaries for sparse representation, IEEE Trans. on

Signal Processing, 54, pp. 4311-4322.

[12] Singh, S., Kumar, S. (2011). SVD Based Sub-band

Decomposition and Multiresolution Representation of

Digital Colour Images, Pertanika J. of Science &

Technology, 19 (2), pp. 229-235.

[13] Kountchev, R., Kountcheva, R. (2015). Hierarchical

SVD for Halftone Images. The 7th International Conference

on Information Technology Big Data, Al Zayatoonah

University, Amman, Jordan, May 12-15, pp. 50-58.

[14] Kountchev, R., Kountcheva, R. (2015). Hierarchical

SVD-based Image Decomposition with Tree Structure,

Intern. J. of Reasoning-Based Intelligent Systems,

7(1/2), pp. 114-129.

[15] De Lathauwer, L. (2008). Decompositions of a

higher-order tensor in block terms - Part I and II, SIAM J.

Matrix Anal. Appl., Vol. 30, pp. 1022-1066.

[16] Kolda, T., Bader, B. (2009). Tensor decompositions

and applications, SIAM Review, 51(3), pp. 455-500.

[17] Bergqvist, G., Larsson, E. (2010). The Higher-Order

SVD: Theory and an Application, IEEE Signal Processing

Magazine, 27(3), pp. 151-154.

[18] Salmi, J., Richter, A., Koivunen, V. (2009).

Sequential Unfolding SVD for Tensors with Applications in

Array Signal Processing, IEEE Trans. on Signal

Processing, 57 (12), pp. 4719-4733.

[19] Grasedyck, L. (2010). Hierarchical SVD of Tensors,

SIAM J. on Matrix Analysis and Applications, 31(4), pp.

2029-2054.

[20] Oseledets, I. (2011). Tensor-train decomposition,

SIAM Journal on Scientific Computing, 33(5), pp. 2295-

2317.

[21] Cichocki, A., Mandic, D., Phan, A.-H., Caiafa, C., Zhou, G., Zhao, Q., De Lathauwer, L. (2015). Tensor

Decompositions for Signal Processing Applications, IEEE

Signal Processing Magazine, 32(2), pp. 145-163.

[22] Wu, Q., Xia, T., Yu, Y. (2007). Hierarchical Tensor

Approximation of Multidimensional Images, IEEE Intern.

Conf. on Image Processing (ICIP’07), San Antonio, TX,

Vol. 6, pp. IV-49, IV-52.
