Acta Electrotechnica et Informatica
No. 2, Vol. 4, 2004
Košice, Slovak Republic
ISSN 1335-8243
C O N T E N T S

TURÁN, J., OVSENÍK, Ľ., TURÁN, J. Jr.
Invariant Pattern Recognition System Using RT and GMDH ..... 5

MENTLÍK, V.
The Aspects and Perspective Views of the Diagnostics of Electric Devices ..... 11

MACEKOVÁ, Ľ., MARCHEVSKÝ, S.
A New Image and Video Quality Criterion ..... 15

PERIC, H. Z., BOGOSAVLJEVIC, M. S.
Asymptotic Analysis of Optimal Unrestricted Polar Quantization ..... 20

KOLLÁR, J.
Process Functional Properties and Aspect Language ..... 25

DRUTAROVSKÝ, M., ŠIMKA, M.
Custom FPGA Cryptographic Blocks for Reconfigurable Embedded Nios Processor ..... 33

HEJTMÁNKOVÁ, P., ŠKORPIL, J.
Voltage in Electric Power System with High Photo Voltaic Cells Penetration ..... 40

CVEJN, J.
Effective Way of Overriding C++ Operators for Matrix Operations ..... 45

TESAŘOVÁ, M.
Using Voltage-Dip Matrices for Counting of Voltage Dips in Power Systems ..... 51

PLEVA, M., JUHÁR, J., ČIŽMÁR, A.
About Development and Evaluation of Multilingual Database for Automatic Broadcast News Transcription Systems ..... 56

KARPIŠ, O.
Partial Suppressing of Disturbing Signals Using Modified Notch Filter ..... 60

HIČÁR, M.
Crane Uplifting with Burden Weight Observer ..... 66

Instructions for Authors of Contributions to Acta Electrotechnica et Informatica Journal (in Slovak) ..... 71
Instructions for Authors of Contributions to Acta Electrotechnica et Informatica Journal (in English) ..... 73
INVARIANT PATTERN RECOGNITION SYSTEM USING RT AND GMDH

Ján TURÁN, Ľuboš OVSENÍK, Ján TURÁN Jr.

Department of Electronics and Multimedia Communications, Faculty of Electrical Engineering and Informatics,
Technical University of Košice, Letná 9, 042 00 Košice, Slovak Republic, tel. 055602 2943, E-mail:
3D People gmbh, Kaiser Passage 6, D-72766 Reutlingen, Germany
SUMMARY
The paper gives the results of development work related to the design of a pattern recognition system based on the application of the fast translation-invariant Rapid Transform (RT) and GMDH. The system was implemented as a software package on a PC and tested on the identification of classes of real objects. Experimental results are given for applying the proposed invariant pattern recognition system to the recognition of Nativity Symbols, Informative Symbols and Cuneiform Writings corrupted by noise.
Keywords: GMDH, Rapid Transform (RT), Modified Rapid Transform (MRT), pattern recognition, invariant feature extraction, information symbol classification
1 INTRODUCTION
Transformation methods can be used to obtain alternative descriptions of signals. These alternative descriptions have many uses, such as classification, redundancy reduction, coding, etc., because some of these tasks can be performed better in the transform domain [1, 2].
Various transformations have been suggested as a solution to the problem of high dimensionality of the feature vector and long computation time. Among them are the RT and the modified RT (MRT), which are fast translation invariant transforms from the class CT [1-4]. We apply the RT in the feature extraction stage of the recognition process.
Whereas conventional empirical modelling techniques require an assumed model structure, new procedures have been developed which generate the model structure as well as the model coefficients from a database [1, 5-7]. One of these procedures is the GMDH (Group Method of Data Handling)
algorithm, usually used for creating polynomial networks with active units. GMDH is a useful data analysis technique for the modelling of non-linear complex systems [5-7]. We apply the GMDH algorithm as an intelligent network classifier in the proposed new invariant pattern recognition system.

[Fig. 1: Signal flow graph of the one-dimensional RT for N = 8: inputs x(0), ..., x(7) pass through butterfly stages computing x(i) + x(j) and |x(i) - x(j)|, giving outputs x~(0), ..., x~(7).]

[Fig. 2: Structure of the multilayer GMDH network: the input training set feeds the units of layer 1, 2, 3 and the next layers; at each layer selected units are kept and unused units are discarded, down to the output.]

[Fig. 3: Splitting of the input data into the training set A, the selection set B and the test set C, each containing a chosen percentage of the data.]
2 RAPID TRANSFORM
In the field of pattern recognition and scene analysis, the class of fast translation invariant transforms known as Certain Transforms (CT) [1, 3, 4] is well known. These are based on the original rapid transform (RT) [3], but with other choices of pairs of simple commutative operators. The RT results from a minor modification of the Walsh-Hadamard transform (WHT): the signal flow graph of the RT is identical to that of the WHT, except that the absolute value of the output of each stage of the iteration is taken before feeding it to the next stage. The signal flow graph of the one-dimensional RT is shown in Fig. 1. The RT is not an orthogonal transform, as no direct inverse exists. With the help of additional data, however, the signal can be recovered from the transform sequence, i.e. an invertible rapid transform (IRT) can be defined [1, 8]. The RT has some interesting properties, such as invariance to cyclic shift and reflection of the data sequence, and to slight rotation of a two-dimensional pattern. It is applicable to both binary and analogue inputs, and it can be extended to multiple dimensions [1].
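The stage-wise structure just described can be sketched in a few lines of Python (a minimal sketch; the exact butterfly ordering varies between RT formulations in the literature, and the variant below pairs elements at distance N/2, N/4, ..., 1):

```python
def rapid_transform(x):
    """One-dimensional Rapid Transform: a Walsh-Hadamard-style butterfly
    in which the absolute value is taken after every stage.
    The input length must be a power of two."""
    n = len(x)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    x = list(x)
    h = n // 2
    while h >= 1:
        for i in range(0, n, 2 * h):        # blocks of size 2h
            for j in range(i, i + h):       # butterfly on the pair (j, j+h)
                a, b = x[j], x[j + h]
                x[j], x[j + h] = abs(a + b), abs(a - b)
        h //= 2
    return x

# The output is invariant to cyclic shift and reflection of the input:
print(rapid_transform([1, 2, 3, 4]))  # [10, 2, 4, 0]
print(rapid_transform([4, 1, 2, 3]))  # [10, 2, 4, 0]
print(rapid_transform([4, 3, 2, 1]))  # [10, 2, 4, 0]
```

Running the same transform on a shifted or reflected sequence reproduces the invariance property the text attributes to the RT.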
3 GMDH ALGORITHM DESCRIPTION
The idea of GMDH (Group Method of Data Handling) is the following: we try to build an analytical function (called a model) which behaves in such a way that the estimated value of the output is as close as possible to its actual value [5]. For many applications such an analytical model is much more convenient than the distributed knowledge representation that is typical of the neural network approach [6, 7, 11].
The most common way to deal with such a problem is the linear regression approach. In this approach, first of all we must introduce a set of basis functions; the answer is then sought as a linear combination of the basis functions [5]. For example, powers of the input variables along with their double and triple cross products may be chosen as basis functions. To obtain the best solution, we should try all possible combinations of terms and choose those that give the best prediction. The decision about the quality of each model must be made using some numeric criterion. To reduce computational expense, one should reduce the number of basis functions (and the number of input variables) used to build the tested models. To do that, one must change from a one-stage procedure of model selection to a multistage procedure.
GMDH is based on a sorting-out procedure, i.e. successive testing of models selected from a set of candidate models according to a specified criterion [6]. Most GMDH algorithms use polynomial support functions. A general connection between input and output variables can be found in the form of a functional Volterra series, whose discrete analogue is known as the Kolmogorov-Gabor polynomial [5, 7]:
y = a_0 + \sum_{i=1}^{n} a_i x_i + \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} x_i x_j + \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{k=1}^{n} a_{ijk} x_i x_j x_k + \ldots    (1)
where X = (x_1, x_2, ..., x_n) is the vector of input variables and A = (a_i, a_ij, a_ijk, ...) is the vector of the summand coefficients. Components of the input vector X can be independent variables, functional forms or finite difference terms [5]. The method allows finding simultaneously the structure of the model and the dependence of the modelled system output on the values of the most significant inputs of the system.
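As a small illustration (not from the paper), a second-order truncation of the Kolmogorov-Gabor polynomial (1) can be evaluated directly; all coefficient values below are hypothetical:

```python
def kg_polynomial(x, a0, a_lin, a_quad):
    """Evaluate a Kolmogorov-Gabor polynomial truncated after the
    second-order term: y = a0 + sum_i a_i x_i + sum_ij a_ij x_i x_j."""
    n = len(x)
    y = a0
    y += sum(a_lin[i] * x[i] for i in range(n))                          # linear terms
    y += sum(a_quad[i][j] * x[i] * x[j] for i in range(n) for j in range(n))  # pair terms
    return y

# Hypothetical model y = 1 + 2*x1 + x1*x2, evaluated at x = (3, 4)
print(kg_polynomial([3, 4], 1.0,
                    [2.0, 0.0],
                    [[0.0, 1.0], [0.0, 0.0]]))  # 19.0
```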
The multilayer GMDH algorithm makes it possible to construct the Kolmogorov-Gabor polynomial as a composition of lower-order polynomials (partial functions) of the form [5, 11]:
y = a_0 + a_1 x_i + a_2 x_j + a_3 x_i x_j + a_4 x_i^2 + a_5 x_j^2    (2)
where i, j = 1, 2, \ldots, m; i \neq j.
To find these polynomials (i.e. their coefficients) it is sufficient to have only six data points at our disposal. Repeated composition of the quadratic polynomial (2) enables the construction of the complete polynomial (1) of any complexity.
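To make this step concrete, the sketch below fits the six coefficients of a partial polynomial of the form (2) by ordinary least squares on a handful of hypothetical data points (a pure-Python normal-equations solve, chosen only to keep the example self-contained):

```python
def fit_partial_polynomial(points):
    """Least-squares fit of y = a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2
    (a partial function of form (2)) via the normal equations G a = b."""
    X = [[1.0, xi, xj, xi * xj, xi * xi, xj * xj] for xi, xj, _ in points]
    y = [yv for _, _, yv in points]
    m = 6
    # Normal equations: G = X^T X, b = X^T y
    G = [[sum(row[r] * row[c] for row in X) for c in range(m)] for r in range(m)]
    b = [sum(X[l][r] * y[l] for l in range(len(X))) for r in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = G[r][col] / G[col][col]
            for c in range(col, m):
                G[r][c] -= f * G[col][c]
            b[r] -= f * b[col]
    # Back substitution
    a = [0.0] * m
    for r in range(m - 1, -1, -1):
        a[r] = (b[r] - sum(G[r][c] * a[c] for c in range(r + 1, m))) / G[r][r]
    return a

# Hypothetical data generated from y = 1 + 2*xi + 3*xj (expressible by (2))
pts = [(0, 0, 1), (1, 0, 3), (0, 1, 4), (1, 1, 6),
       (2, 1, 8), (1, 2, 9), (2, 2, 11), (3, 1, 10)]
a = fit_partial_polynomial(pts)
print([round(v, 6) for v in a])  # ≈ [1, 2, 3, 0, 0, 0]
```

With data from an exactly representable function the fit recovers the generating coefficients, which is the behaviour each GMDH processing element relies on.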
The input data of m input variables x are fed randomly; for example, if they are fed in pairs to each unit (node, or processing element, PE), then a total of
C_m^2 = m(m - 1)/2

partial functions (PEs) of the form below are generated at the first layer (Fig. 2):
\hat{y} = f(x)    (3)
where f(x) is a partial function as in (2) and \hat{y} is its estimated output.
Then outputs of F1 (
4 IMPLEMENTATION OF THE GMDH ALGORITHM
The data can be normalized in advance by
\tilde{x}_i = x_i / x_{i,\max}, \quad \tilde{y} = y / y_{\max}, \quad \tilde{x}_i, \tilde{y} \in [0, 1]    (4)
Most of the selection criteria require the division of the data into two or more sets. Suppose we have a sample set of N data points (x_1, y_1), (x_2, y_2), ..., (x_N, y_N). The first step is to split the data set into three sets: the training data set A, the selection data set B (W = A \cup B) and the test data set C (Fig. 3).
The first two sets are used to construct the network and the test data set is used to obtain a measure of its performance (to find the optimal model or models)
The data splitting can be performed in several ways, depending on the application. In general, the data can be ordered (according to their variance, time, etc.) or unordered, and the proportions of the split can be 40 %, 25 % and 35 %, or 50 %, 25 % and 25 %, or other values for the sets A, B and C, respectively (Fig. 3). When the data are arranged according to their variance, the data with higher variance belong to the training set.
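As a minimal sketch of one of the splitting schemes mentioned above (the 50/25/25 ordered split; the proportions and the list-based interface are illustrative assumptions, not the authors' implementation):

```python
def split_dataset(samples, fractions=(0.5, 0.25, 0.25)):
    """Split an (already ordered) list of samples into the training set A,
    the selection set B and the test set C with the given proportions."""
    n = len(samples)
    n_a = int(fractions[0] * n)
    n_b = int(fractions[1] * n)
    A = samples[:n_a]                # used to fit partial-function weights
    B = samples[n_a:n_a + n_b]       # used to select units between layers
    C = samples[n_a + n_b:]          # held out for the external criterion
    return A, B, C

data = list(range(20))               # stand-in for 20 data points
A, B, C = split_dataset(data)
print(len(A), len(B), len(C))        # 10 5 5
```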
In our experiments each processing element receives three input variables x_i, x_j, x_k (i \neq j, i \neq k, j \neq k) and generates its output using a linear or a polynomial activation function, respectively:
\hat{y}_l = a_0 + a_1 x_{l,i} + a_2 x_{l,j} + a_3 x_{l,k}, \quad l = 1, \ldots, N_A    (5)
\hat{y}_l = a_0 + a_1 x_{l,i} + a_2 x_{l,j} + a_3 x_{l,k} + a_4 x_{l,i}^2 + a_5 x_{l,j}^2 + a_6 x_{l,k}^2, \quad l = 1, \ldots, N_A    (6)
The weights a = [a_0, a_1, ..., a_3] and a = [a_0, a_1, ..., a_6] are computed by the least squares technique:
a = (X_A^T X_A)^{-1} X_A^T y_A    (7)
where X_A is the matrix whose l-th row contains the regressor terms of (5) or (6) evaluated at the l-th training point, and y_A = [y_1, y_2, ..., y_{N_A}]^T is the vector of the corresponding training outputs.
All partial functions are evaluated by the following external criterion:
\Delta^2 = \sum_{p \in C} (y_p - \hat{y}_p)^2 \Big/ \sum_{p \in W} y_p^2    (8)
where W = A \cup B and C is the test data set.
The algorithm will stop when:
- a maximum number of layers has been reached (k = k_max), or
- the performance of the best-fitted node on each layer has reached a minimum.
5 INVARIANT PATTERN RECOGNITION SYSTEM
The block scheme of the invariant pattern recognition system based on the RT (or MRT) transform and the GMDH algorithm is shown in Fig. 4. A digital pattern enters the Image Transformation module, where it is transformed using the RT or MRT. The amount of data is reduced in the Feature Reduction module. The features that form the feature vector are selected during the teaching process and stored in the "Memory of Features" module.
In the GMDH classification system (Fig. 4) each independent category of patterns (images) has its own model, computed in the teaching process. These models are stored in the Memory of GMDH Models module. The output of each model is 1 if the input pattern corresponds to the class of that model, and 0 otherwise.
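The one-model-per-class decision rule described above can be sketched as follows (the `models` mapping and its callable interface are hypothetical illustrations, not the authors' implementation):

```python
def classify(feature_vector, models):
    """One-model-per-class GMDH-style classification: each stored model
    outputs ~1 for its own class and ~0 otherwise; the class whose model
    responds most strongly wins."""
    scores = {name: model(feature_vector) for name, model in models.items()}
    return max(scores, key=scores.get)

# Toy stand-in models for two symbol classes
models = {
    "N1": lambda f: 1.0 if f[0] > 0.5 else 0.0,
    "N2": lambda f: 1.0 if f[0] <= 0.5 else 0.0,
}
print(classify([0.9], models))  # N1
print(classify([0.2], models))  # N2
```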
6 EXPERIMENTAL RESULTS
The proposed new invariant pattern (image) recognition system was tested on the recognition of a set of 120 independent classes of Nativity Symbols (Fig. 5), Informative Symbols (Fig. 6) and Cuneiform Writings (Fig. 7). We implemented feature extraction with the RT. As teaching sets, we used sets containing 60 to 252 symbols for each class of symbols. As recognition sets, we used eight sets of 120 noised symbols with noise rates 1, 2, ..., 8. The results of experiments with the RT using a simple Euclidean classifier and a polynomial (linear and non-linear) GMDH classifier are given in Tab. 1 for Nativity Symbols, Tab. 2 for Informative Symbols and Tab. 3 for Cuneiform Writings. As can be seen, the recognition system with the GMDH classifier gives better performance. The recognition efficiency increases if teaching sets with noised patterns are used. The best performance is obtained by the system based on the combination of the RT and the GMDH algorithm.
7 CONCLUSION
The paper gives the results of development work related to the design of a new invariant pattern recognition system based on the combination of the RT and the GMDH algorithm. The proposed system was realised as a software tool on a PC and tested in experiments with the recognition of noised Nativity Symbols, Informative Symbols and Cuneiform Writings. The obtained recognition efficiency is satisfactory: up to 69-97 % for Nativity Symbols, up to 85-98 % for Informative Symbols and up to 77-97 % for Cuneiform Writings.
ACKNOWLEDGEMENTS
The authors gratefully acknowledge financial support from the COST 276 and COST 292 grants and VEGA grant No. 1038103.
REFERENCES
BIOGRAPHY
Ján Turán (Prof., Ing., RNDr., DrSc.) was born in Šahy, Slovakia. He received the Ing. (MSc.) degree in physical engineering with honours from the Czech Technical University, Prague, Czech Republic, in 1974, and the RNDr. (MSc.) degree in experimental physics with honours from Charles University, Prague, Czech Republic, in 1980. He received the CSc. (PhD.) and DrSc. degrees in radioelectronics from the University of Technology, Košice, Slovakia, in 1983 and 1992, respectively. Since March 1979 he has been at the University of Technology, Košice, as Professor of electronics and information technology. His research interests include digital signal processing and fiber optic communications and sensing.
Ľuboš Ovseník (Ing., PhD.) was born in Považská Bystrica, Slovakia, in 1965. He received his Ing. (MSc.) degree in 1990 from the Faculty of Electrical Engineering and Informatics of the University of Technology in Košice. He received the PhD. degree in electronics from the University of Technology, Košice, Slovakia, in 2002. Since February 1997 he has been at the University of Technology, Košice, as Assistant Professor of electronics and information technology. His general research interests include optoelectronics, digital signal processing, photonics, fiber optic communications and fiber optic sensors.
Ján Turán Jr. (Ing.) was born in Košice, Slovakia. He received his Ing. (MSc.) degree in computer engineering in 1999 from the Faculty of Electrical Engineering and Informatics of the University of Technology in Košice. He works at 3D People gmbh as a research manager. His research interests include digital signal and image processing and computer game design.
N1 N2 N3 N4 N5 N6
N7 N8 N9 N10 N11 N12

Fig. 5 Nativity Symbols used in experiments
N1 N2 N3 N4 N5 N6
N7 N8 N9 N10 N11
Fig 6 Informative Symbols used in experiments
eg ef eé ed ecs ec eb
a á ely el ek ak ej i í
eh egy es er ep ö ő o ó
eny en em ezs ez ev ü ű
u ú ety et esz

Fig. 7 Cuneiform Writings used in experiments
THE ASPECTS AND PERSPECTIVE VIEWS OF THE DIAGNOSTICS
OF ELECTRIC DEVICES

Václav MENTLÍK

Department of Technology and Measurement, Faculty of Electrical Engineering,
University of West Bohemia, Univerzitní 8, 306 14 Plzeň, Czech Republic, tel. +420 377 634 513,
E-mail: mentlik@ket.zcu.cz
SUMMARY
Diagnostics is an indispensable part of all stages of the electrical engineering industry. Diagnostics is a source of information which also accompanies a product during its exploitation; based on failure analysis, this information feeds back into the construction. Diagnostics, together with the results of running checks, gives information about the diagnosed object's properties and provides the basis for predictive data. ON-LINE diagnostics, which monitors the object continuously during its work, is essential for important and expensive objects. It is necessary to construct diagnostic systems (diagnostic tools) with respect to their informative value and economic demands. The structural approach to the solved problems is very promising, because it has greater informative value and provides more complex information than the current phenomenological approach.
Keywords: diagnosis, observer, fault, rotor, intensity, simulation
1 INTRODUCTION
We cannot imagine electrical engineering without sufficient information, and diagnostics plays an irreplaceable role here. The gained pieces of information are essential at the level of elements, at the level of subsystems and at the level of electric devices. Diagnostics is becoming a connecting element among the other branches which take part in the production of electrical machines in the electrical engineering industry. Material engineering provides the needful elements for the specific purpose: material selection, or modification of the fundamental material so that it can fulfil the expected function; information is needed about the parameters and their development. At the level where an element enters further processing, further information is needed about whether all material properties are within the required limits. All this is a top-priority task for electrical engineering technological diagnostics, because here diagnostics comes into direct contact with production.
2 Diagnostics and production
Diagnostics is also important in the technological process area, the "know-how" area, where diagnostic examinations matter at several levels at once. First, in-process control has a large economic influence, because this check can prevent a faulty product from further processing in time. The check-out, a test of the finished product performed by the producer in his own factory, is the next area where diagnostics helps effectively. This check-out diagnostics again has a big economic effect, because guarantee repairs are reduced to a minimum or eliminated altogether. In this aspect we can see the massive power of diagnostics, with visible economic effects.

It is necessary to see the impacts of diagnostics in a wider context, especially in failure analysis. As was said, failures are recorded, sorted and archived in a database, and many facts and much information can be gained from fault source analysis. These pieces of information are of enormous worth. For example, they inform designs aimed at changes of a device's construction; diagnostics then brings improvements aimed at eliminating the elements which are frequent fault sources. It is also possible to use the results of failure analysis for a treatment of the working environment. We do this when the working environment affects the devices badly and frequent failures show that the devices are overloaded because of bad working environment conditions; diagnostics helps to eliminate this negative factor.
When a failure is detected, diagnostics can suggest the fastest method of eliminating it. This means that diagnostics not only localizes the place of the failure but also gives operative instructions for maintenance and sets the optimal sequence of operations leading to elimination of the failure. This leads to a quick and direct repair without useless delay and operations.
If we see diagnostics as a connecting link and an inseparable element of material engineering and technological processes, it is also hugely important in the monitoring of technical devices. In this area, what matters is not only trend monitoring of the device's parameters, but also data recording and the creation of valuable databases describing the system's own behaviour trends. Based on such information, it is possible to create a prediction of the further behaviour of the system in the future. Electrical engineering technological prognosis stands at the top of diagnostics.
We have shown the importance of diagnostics in electrical engineering practice; now let us pay attention to what diagnostics needs to fulfil these expectations.
3 The apparatus of the diagnostics
The apparatus of the diagnostics is concentrated in the diagnostic system This system includes
- The necessary instrumental equipment for diagnostics: measuring instruments with suitable converters (devices which convert the diagnostic signals into recordable signals) and the necessary sensors, because diagnostics should already be provided for in the device design.
- A mathematical model of the diagnosed object. This model is able to simulate error-free situations and also all failure situations of the diagnosed object, with all possibilities which can occur. To create the mathematical model we have to collect all necessary characteristics and mathematical expressions of the parameter processing.
- A choice of the diagnostic process (the setting of the diagnostics: off-line or on-line diagnostics).
- A choice of the approach to the solution of the diagnostic problem: phenomenological (we are only interested in the reactions of the diagnosed object to the input signals) or structural (we are interested in what happens in the structure of the diagnosed object). The structural approach gives more information and has a smaller variance of values, but it requires more expensive equipment and a specially trained operator. The phenomenological approach is simpler: there is a lot of experience with it since it has been used for a long time, it does not need a specialist for operation, and it is naturally less expensive since no special instruments are necessary; but it has a wider variance of values and its informative value is not so good.
- Knowledge and empirical potential, i.e. workers who have relevant experience and knowledge at the required level (this aspect seems very important for the possibility of realizing the diagnostics at an adequate level).
- A methodology assessment, i.e. the process of diagnostics: optimisation of diagnostic activities and assessment of the particular steps of diagnosis, with economic aspects taken into account in general. The depth of examination and exactness of diagnosis bear very closely on the price of the diagnosed device and its importance in the working process.
4 Connections in the diagnostics
Connections in the diagnostics of electrical devices are clearly shown in Fig. 1. We can see there that diagnostics (as just mentioned) intervenes in both existing stages: manufacturing and operating. Technical diagnostics goes through the preparative phase and then through the processing phase, the phase of diagnostic inquiry. The acquisitions and impacts of the results of diagnostics were mentioned above.
It is understandable that in the diagnostics of important electrical devices (e.g. high or low speed alternators of main power stations, transformers of important switching stations) there exists a higher form of connection between machines and their operators (especially in on-line diagnostics): expert systems which use fuzzy logic and all the eventualities situated in this area.
We have mentioned connections in diagnostics and possibilities of building the diagnostic system; we must also notice another very important point of view: the tactics of the right choice of the diagnostic problem. The most important thing is to find the key places which are significant for the operation and correct function of the monitored devices. We have to pay attention to the subsystems or components which are most prone to developing defects, since these defects can cause risk to life or malfunction of the device. In the diagnostics of electrical devices, attention is paid to insulation systems, which certainly belong to these very sensitive parts or subsystems. We can see the electrical device as a serial reliability system with a very sensitive part, the insulation system just mentioned. It is also evident that very exposed mechanical parts, e.g. bearings, can be fault sources. We have to choose the diagnostic process so that we get the maximum of information about these monitored parts or subsystems.
Closely associated with this point of view is the informative value of the chosen method. The main approach here is the structural one. For the research of this problem (the study of properties), methods which allow the description of the enthalpy of materials [1], for example, seem optimal because of their direct view of the momentary state of the material. If we monitor the trend of this quantity, we obtain a quality basis for the required prognostic propositions.
5 On-line diagnostics
The next thing we must monitor is the demand for on-line examination. This very sought-after area is especially difficult for diagnostic examination: only some methods can be used, and the whole system has to be connected to direct data storage. The most modern way of diagnostics [2] is the application of expert systems together with other techniques such as fuzzy logic and neural networks. This trend, based on direct use of these new methods of technical diagnostics, will need more and more research and effort. In addition, because of its difficulty, we must assume that such diagnostics will be applied where it is really important and well founded, e.g. for important electrical devices like high and low speed alternators in big power stations or transformers in switching stations.
We also have to mention the perspective of technical diagnostics, since there is no doubt about its increasing importance, especially at present. Quality is the priority programme in many companies; the necessity of meeting the quality standards ISO 9000 and 14000 confirms its large importance.
The importance of the structural approach is still increasing in the area of diagnostic methods. Other methods may follow, especially methods which do not need extra expensive devices, for example thermal analysis methods, with whose application our department has good experience [3-8]. It is also necessary to maintain full detachment and the economy of the methods used.
In the area of insulation systems of transformers (the oil-paper system), it seems promising to monitor the trend of characteristics of the solid part of the insulation system. But we are not able to take test samples directly during the operation of transformers, so for the detection of the state of the cellulose-based material we must use indirect methods. One possible methodology is the detection of the quantity of furan compounds, degradation products of cellulose, which are readily soluble and identifiable in the insulating oil of transformers. Furan compounds, especially furfural and hydroxymethylfurfural, are identifiers of the ageing level of the paper. The best parameter for the ageing evaluation of transformer insulation systems is the degree of polymerisation of the cellulose paper in transformers during operating conditions. We are able to determine this degree by liquid chromatography, HPLC (High Performance Liquid Chromatography) [9].
For big rotating electrical machines, monitoring of the following indicators seems very useful: measurement of vibration based on analysis of deviations from the standard state and their size; measurement of the level of acoustic power (noise), which indicates imbalance and the level of operating quality; analysis of the thermal state of machines (monitoring of temperature at selected places); analysis of the coolant (ozone concentration in the machine, tests of degradation products); and analysis of discharge activity. Additionally, the application of a slot capacity tester for partial discharge measurement and analysis of a leakage thermal record with relevant analysis can be used.
6 Conclusion
Diagnostics is a very wide and complex discipline which is formed from many fields of activity and is constantly developing. Its continuous development demonstrates the dynamics of its major ideas.
REFERENCES
BIOGRAPHY
Prof. Ing. Václav Mentlík, CSc. was born in 1939. He defended his CSc. in the field of electrotechnology at the Czech Technical University in Prague in 1985, became Doc. in the field of electrotechnology at the University of West Bohemia in Plzeň in 1990 and Prof. in the field of electrotechnology at the University of West Bohemia in Plzeň in 1998.
Since 1962 he has been working as a tutor in the Section of Electrotechnology of the Department of Technology and Measurements (formerly the Department of Electrical Machines). His scientific research focuses on diagnostics of electrical systems and the physics and technology of dielectrics.
A New Image and Video Quality Criterion

Ľudmila MACEKOVÁ, Stanislav MARCHEVSKÝ

Department of Electronics and Multimedia Communications, Faculty of Electrical Engineering and Informatics,
Technical University of Košice, Park Komenského 13, 041 20 Košice, Slovak Republic, tel. 055/602 2853,
E-mail: stanislav.marchevsky@tuke.sk
SUMMARY
The well-known quality criteria for images and video, such as MSE or MAE, do not correspond sufficiently to the quality perceived by the human visual system (HVS). The HVS is mostly sensitive to the structural character of images, and to structural errors too. The new quality criterion respects this aspect and can also be considered universal: its value does not exceed one, which corresponds to the best quality (identity, in fact), while lower values represent worse quality.
Keywords: image quality criterion, video quality, perceived by human visual system (HVS)
1 INTRODUCTION
In various areas of application it is important to appraise the quality of images or image sequences by a mathematical criterion. The mean absolute error (MAE), mean squared error (MSE), signal-to-noise ratio (SNR) or its modifications [e.g. 4] are already well known and often used. Their advantage is their independence of viewing conditions, in contrast to subjective appraisal of quality: the subjective measurement of image or video quality can yield as many different values as there are viewing conditions. On the other side, however, the values of the numerical criteria mentioned above often do not correspond to the quality perceived by the human visual system (HVS).
A good example illustrating this problem is presented in Fig. 1. There are noticeable differences between images with approximately equal MSE values. The first one is the original Lena; the others are an image with increased contrast and images degraded by blurring and by JPEG compression, respectively. The last three have an MSE of about 225. It is therefore necessary to find a numerical criterion which better reflects the actual quality and which approximates the quality perceived by the HVS.
The photos in Fig. 1 suggest that our visual system is sensitive to texture in an image, which is for us the main carrier of image information. Therefore we are mainly sensitive to texture distortion too. This fact is the basic idea behind the derivation of the new, structural criterion of image quality.
This article presents a new criterion of image and image sequence quality based on the structural features of an image or video. The second part describes the mathematical derivation of the criterion for a static image, the third part contains the derivation and application of the new criterion for image sequences, the fourth part deals with experiments and their results, and the last one is the conclusion.
2 THE DEFINITION OF THE STRUCTURAL SIMILARITY INDEX (SSIM)
If we have two digitized images x, y being compared (or just small corresponding parts of them), we can describe them by the values $x_i$, $y_i$, $i = 1, \ldots, n$. Their means $\mu_x$, $\mu_y$, variances $\sigma_x^2$, $\sigma_y^2$ and covariance $\sigma_{xy}$ are as follows:
$\mu_x = \frac{1}{n}\sum_{i=1}^{n} x_i\,, \qquad \mu_y = \frac{1}{n}\sum_{i=1}^{n} y_i$   (1)
$\sigma_x^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \mu_x)^2\,, \qquad \sigma_y^2 = \frac{1}{n-1}\sum_{i=1}^{n} (y_i - \mu_y)^2$   (2)
$\sigma_{xy} = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \mu_x)(y_i - \mu_y)$   (3)
The mean and the standard deviation (square root of the variance) roughly correspond to the luminance and the contrast of the signal, respectively. The covariance reflects the linear correlation between x and y.
Measures for luminance, contrast and structure comparison (l, c, s) of two image patches can be defined as [6]:
$l(x,y) = \frac{2\mu_x\mu_y}{\mu_x^2 + \mu_y^2}\,, \qquad c(x,y) = \frac{2\sigma_x\sigma_y}{\sigma_x^2 + \sigma_y^2}\,, \qquad s(x,y) = \frac{\sigma_{xy}}{\sigma_x\sigma_y}$   (4)
The value s expresses a different kind of similarity than the luminance or contrast similarity. It reflects the structural similarity of two images: it equals one only if the structures of both compared images are exactly the same.
Then the overall similarity index S(x, y) for comparing two similar image fragments can be expressed as the product of l, c and s:
$S(x,y) = l(x,y)\,c(x,y)\,s(x,y) = \frac{4\sigma_{xy}\,\mu_x\mu_y}{(\mu_x^2 + \mu_y^2)(\sigma_x^2 + \sigma_y^2)}$   (5)
When the term $(\mu_x^2 + \mu_y^2)(\sigma_x^2 + \sigma_y^2)$ is close to zero (in very dark or very smooth image areas), the resulting value becomes unstable. This problem is eliminated by a modification of (5), i.e. by the definition of a new image comparison measure named the Structural SIMilarity (SSIM) index:
$SSIM(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$   (6)
where $C_1 = (K_1 L)^2$, $C_2 = (K_2 L)^2$   (7)
In (6) and (7) three constants are established which depend on the character of the image or sequence. L is the dynamic range of the pixel values (for 8 bits per pixel in gray-scale images, L = 255). $K_1$ and $K_2$ are set low enough that $C_1$ and $C_2$ take effect only when $(\mu_x^2 + \mu_y^2)$ or $(\sigma_x^2 + \sigma_y^2)$ is very low. In the experiments $K_1 = 0.01$ and $K_2 = 0.03$ were used.
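As an illustration, the SSIM index of Eq. (6) can be sketched in a few lines. The function below is a sketch under stated assumptions (patches passed as flat lists of gray levels, constants $K_1 = 0.01$, $K_2 = 0.03$, $L = 255$ as in the text), not the authors' implementation:

```python
# Sketch of the SSIM index of Eq. (6) for two equally-sized gray-scale
# patches x and y, given as flat lists of pixel values.
def ssim(x, y, K1=0.01, K2=0.03, L=255):
    n = len(x)
    mx = sum(x) / n                      # mu_x of Eq. (1)
    my = sum(y) / n                      # mu_y of Eq. (1)
    # Sample variances and covariance with the 1/(n-1) factor of Eqs. (2)-(3)
    vx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    vy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    cxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    C1 = (K1 * L) ** 2
    C2 = (K2 * L) ** 2
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

Calling `ssim(p, p)` on any patch returns exactly 1, in line with property 3 below, and the function is symmetric in its two arguments.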
The SSIM index has the following properties
1. SSIM(x, y) = SSIM(y, x)
2. SSIM(x, y) ≤ 1
3. SSIM(x, y) = 1 if and only if x = y (for discrete signals there must be $x_i = y_i$ for $i = 1, 2, \ldots, N$)
Thus, by the definition and the properties of SSIM, it is simple to evaluate the quality of a distorted image if it is compared with an original image of perfect quality. The more the SSIM index value differs from 1, the worse the image quality.
In practice, the application of the SSIM criterion to an image is not performed in one step for the whole image. First, the criterion values are evaluated at each position of an 8x8 sample window (in comparison with the corresponding window in the original image). The sample window slides across the whole image pixel by pixel. In this way we gain a so-called quality map of the image. Subsequently, the mean SSIM (MSSIM) index Q is evaluated as an overall image quality measure:
$Q = \frac{1}{N}\sum_{i=1}^{N} SSIM_i$   (8)
where N is the number of image pixels (horizontal dimension multiplied by vertical one)
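The sliding-window procedure above can be sketched as follows. This is an assumption-laden sketch (8x8 window as stated in the text, images given as 2D lists of gray levels, and the average taken over all window positions of the quality map), not the authors' code:

```python
# Sketch of the mean SSIM (MSSIM) of Eq. (8): an 8x8 window slides over
# the image pixel by pixel, a local SSIM value (Eq. (6)) is computed at
# each position, and the values are averaged into one quality index Q.
def local_ssim(x, y, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((v - mx) ** 2 for v in x) / (n - 1)
    vy = sum((v - my) ** 2 for v in y) / (n - 1)
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx * mx + my * my + C1) * (vx + vy + C2))

def mssim(orig, dist, w=8):
    rows, cols = len(orig), len(orig[0])
    vals = []
    for r in range(rows - w + 1):
        for c in range(cols - w + 1):
            x = [orig[r + i][c + j] for i in range(w) for j in range(w)]
            y = [dist[r + i][c + j] for i in range(w) for j in range(w)]
            vals.append(local_ssim(x, y))   # one point of the quality map
    return sum(vals) / len(vals)            # overall index Q of Eq. (8)
```

An undistorted copy of the image yields Q = 1; any distortion pulls Q below 1.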
2.1 SSIM index for color images
In the case of a color image one must compute the local $SSIM_i$ index for all color components independently. For example, for the Y, Cr, Cb components there will be $SSIM_{iY}$, $SSIM_{iCr}$, $SSIM_{iCb}$, respectively. The overall index, with respect to the particular component weights, is then [7]:
$SSIM_i = W_Y\, SSIM_{iY} + W_{Cb}\, SSIM_{iCb} + W_{Cr}\, SSIM_{iCr}$   (9)
In the experiments the weights were fixed at $W_Y = 0.8$, $W_{Cb} = 0.1$, $W_{Cr} = 0.1$.
2.2 Video quality assessment
It would be simple to calculate the video sequence quality by the MSSIM index of each frame, followed by the mean value over the whole sequence. But this involves a huge volume of calculations. The next task therefore is to find possibilities for their elimination.
First, one can reduce the calculation by restricting the number of sample windows: only a fixed, smaller count of local windows is chosen, at random positions in each frame.
The second problem is that the overall mean SSIM index is not optimal: it does not correspond to the quality perceived by the HVS. Because not all areas of a frame are equally important to the human eye, the sample windows cannot have the same weight in term (8) for the frame quality index. The HVS perceives dark frame areas less than light ones; this phenomenon is crucial for the specification of each local weight in this work. The darker the area, the smaller its weight. The basis for the choice of threshold can be, e.g., a mean local luminance of about 40 (for 255 gray levels).
Likewise, the third reason to reform the overall video quality criterion is that not all frames in a sequence have the same importance for the HVS. In the case of a great amount of motion in the scene, or of a fast-moving camera, frame quality is not as important as in the case of quiet frames or small motion, where, for example, blurring is usually a very disagreeable type of distortion. Hence, in the quality assessment process only the frames with no or small motion get non-zero weights.
All the above-mentioned aspects lead to the following comparative video quality assessment technique:
· The local windows (e.g. 8x8) are randomly drawn from both the original and the inquired video frames (at the same positions). The $SSIM_{ij}$ of each local window is calculated using (6) and (9), where the window index is $i = 1, \ldots, R_S$, $R_S$ is the count of windows and j denotes the frame.
· For each random i-th sampling window (in the j-th distorted frame) the mean luminance $\mu_{ij}$ is evaluated by term (1) (in the case of color frames it is the mean of the Y component), and the local weight is adjusted by the following rule:
$w_{ij} = \begin{cases} 0 & \text{for } \mu_{ij} \le 40 \\ (\mu_{ij} - 40)/10 & \text{for } 40 < \mu_{ij} \le 50 \\ 1 & \text{for } \mu_{ij} > 50 \end{cases}$   (10)
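The weighting rule of Eq. (10) translates directly into code. This is a minimal sketch using the thresholds 40 and 50 from the text:

```python
# Sketch of the luminance weighting rule of Eq. (10): sample windows in
# dark areas (mean luminance at most 40 on a 0-255 scale) get zero weight,
# bright windows (above 50) get full weight, with a linear ramp in between.
def window_weight(mean_luminance):
    if mean_luminance <= 40:
        return 0.0
    if mean_luminance <= 50:
        return (mean_luminance - 40) / 10.0
    return 1.0
```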
· Now one can evaluate the SSIM index $Q_j$ of each frame by weighted summing of the sample-window quality index values:
$Q_j = \frac{\sum_{i=1}^{R_S} w_{ij}\, SSIM_{ij}}{\sum_{i=1}^{R_S} w_{ij}}$   (11)
· Assigning the weight $W_j$ to each frame can be done after studying the amount of motion. A method of block-based motion estimation can be employed for each i-th sample window by comparing the actual and the next frame [7]. This step results in a per-frame set of local motion vector lengths $m_{ij}$. The frame motion level $M_j$ is then
$M_j = \frac{1}{R_S}\sum_{i=1}^{R_S} m_{ij}$   (12)
and the weight $W_j$ of the j-th frame is designated by comparing $M_j$ with the motion level threshold $t_M$:
$W_j = \begin{cases} \sum_{i=1}^{R_S} w_{ij} & \text{for } M_j \le t_M \\ 0 & \text{for } M_j > t_M \end{cases}$   (13)
The threshold can be set to 16. As with the sampling-window weights, the frame weights can be further fine-tuned [7].
· Finally, the last step of the algorithm is the calculation of the video quality $Q_v$:
$Q_v = \frac{\sum_{j=1}^{F} W_j\, Q_j}{\sum_{j=1}^{F} W_j}$   (14)
3 EXPERIMENTS AND RESULTS
The first goal of our experiments was to compare the values of the new quality index with subjective evaluations for several types of distortion of the Lena image which have approximately equal MSE. These observations gave the most marked results and are therefore presented in this paper.
The standard test image Lena was distorted by blurring, contrast stretching, impulsive salt-and-pepper noise, multiplicative noise and JPEG compression, respectively (see Fig. 1 or Fig. 2).
All distortion types caused an MSE value of around 225. The new numerical quality index Q was evaluated for each distorted image by means of the sliding 8x8 window method and by using terms (6)-(8). On the other hand, in the subjective experiment ten people who were not acquainted with the image processing area compared these five images (and the original one) and assigned quality ranks from 1 (original image) to 6.
The results of the above-mentioned experiments are documented in Table 1 and confirm our assumption. The subjective rank ordering is similar to the Q index ordering: the best subjective rank was given to the contrast-stretched Lena and the worst one to the multiplicative-noise image. The contrast-stretched image also obtained the highest index (near the value 1) and the multiplicative-noise Lena got the lowest one.
Many other calculations were performed to gain Q values of the black-and-white (BW) Lena and Bridge images damaged by several types and levels of noise and filtered by several filters as well. The results led to conclusions similar to those mentioned above.
Color images (the color Lena and Mandrill) disturbed by impulsive correlated noise of volumes 10 and 20 and filtered by median filters with a few square window sizes were also evaluated. Some representatives of this area are presented in Table 2. A more detailed description of these experiments can be found in work [2].
Tab. 2 Demonstration of the use of the Q criterion for measuring filtering efficiency
The new numerical quality index of the small gray-scale standard video Salesman (50 frames, 256x256 pixels, 255 gray levels) was investigated last [2]. The decomposed image sequence was artificially damaged by BW spots (1 % of all frame pixels) and subsequently filtered by several modifications of the median filter [1]. One-step and two-step filtering were realized, both with and without spot detection. The quality of the resulting sequence was then calculated. A few of the results are introduced in Table 3, although the enumeration was made by a simplified procedure, with all sample window weights and frame weights equal to 1.
It is known from many previous works of various authors [e.g. 3 and 5] that noise or blotch filters work better with distortion detectors, and the two-stage median filtering with a blotch detector (MMF2+detector) shows the best visual results [1]. The highest quality rank of this filter type in Table 3 corresponds with this fact.
Tab. 3 Results of experiments in the form of MSE and SSIM index for the image sequence Salesman filtered by several filter types
4 CONCLUSION
The new image quality criterion recently proposed in [6] and improved in [7] seems very useful and comprehensible for the purpose of quality assessment close to human visual perception. It takes into account the change of the structural properties of the distorted image or video, because of their priority for human eyes.
We have tested the new quality criterion on many standard noised and filtered images and image sequences which were examined previously by MAE and MSE. Based on all the results of our objective and subjective experiments, one can state that this criterion is really effective and correlates better with the quality perceived by the human visual system than the MSE criterion or its derivatives. Because its value stays below one, it is also more practical for the purpose of image and video quality assessment. Its use would be appropriate in future image processing research. Of course, there are areas for further improvement, such as consideration of motion or of a perfect video with damaged parts [7], etc.
REFERENCES
BIOGRAPHY
Ľudmila Maceková graduated (MSc equivalent degree) in radioelectronics from the Technical University of Košice in 1983. Since 1991 she has been with the Department of Electronics and Multimedia Communications of the Faculty of Electrical Engineering and Informatics of this university, as an assistant professor and nowadays as a research assistant. She works on projects in the area of image processing. Image and image sequence filtering is also the topic of her PhD work.
Stanislav Marchevský received the MSc in electrical engineering from the Faculty of Electrical Engineering, Czech Technical University in Prague, in 1976, and the PhD degree in radioelectronics from the Technical University of Košice in 1985. Currently he is a Professor at the Department of Electronics and Multimedia Communications of the Faculty of Electrical Engineering and Informatics of the Technical University of Košice. His teaching interests include switching theory, digital television technology and satellite communications. His research interests include nonlinear image filtering, neural networks, genetic algorithms, multiuser detection, space-time communication, diversity communications over fading channels, and power- and bandwidth-efficient multiuser communications.
ASYMPTOTIC ANALYSIS OF OPTIMAL UNRESTRICTED
POLAR QUANTIZATION
Zoran H Peric and Srdjan M Bogosavljevic
Faculty of Electronic Engineering University of Nis Beogradska 14 18000 Nis Serbia
ldquoTelecom Serbiardquo Nis Vozdova 13 a 18000 Nis Serbia
SUMMARY
The motivation for this work is maintaining the high accuracy of phase information that is required for applications such as interferometry and polarimetry, and polar quantization techniques with their applications in areas such as computer holography, discrete Fourier transform encoding and image processing. In this paper a simple and complete asymptotic analysis is given for a nonuniform polar quantizer with respect to the mean-square error (MSE), i.e. the granular distortion ($D_g$). The granular (support) region of a quantizer is the interval where quantization errors are small or at least bounded; that is why it is a greater challenge to include the overload distortion in the estimation procedure of a quantizer [1]. The support region for scalar quantizers has been found in [1] by minimization of the total distortion D, which is a combination of the granular ($D_g$) and overload ($D_o$) distortions:
$D = D_g + D_o$
Swaszek and Ku [2] didn't consider the problem of finding the optimal maximal amplitude, the so-called support region. The goal of this paper is to solve the quantization problem in the case of a nonuniform polar quantizer and to find the corresponding support region. We also give the conditions for the optimum of the polar quantizer and the optimal compressor function. The equation for $D_g^{opt}$ is given in closed form. The construction procedure is given for an i.i.d. Gaussian source.
Keywords: phase divisions, number of levels, optimal granular distortion, asymptotic analysis, unrestricted polar quantization
1 INTRODUCTION
Polar quantization techniques, as well as their applications in areas such as computer holography, discrete Fourier transform encoding, image processing and communications, have been studied extensively in the literature. Synthetic Aperture Radar (SAR) images can be represented in the polar format (i.e. magnitude and phase components) [3]. In the case of MSE quantization of a symmetric two-dimensional source, polar quantization gives the best result in the field of implementation [3]. The motivation behind this work is to maintain the high accuracy of phase information that is required for applications such as interferometry and polarimetry, without losing massive amounts of magnitude information [3].
Some of the most important results in polar quantization were given by Swaszek and Ku, who derived the asymptotically Unrestricted Polar Quantization (UPQ) [2]. Swaszek and Ku gave an asymptotic solution for this problem without a mathematical proof of the optimum, sometimes using quite crude approximations which limit the application. Polar quantization consists of separate magnitude and phase quantization with N levels in total, so that the rectangular coordinates of the source (x, y) are transformed into polar coordinates in the following form: $r = (x^2 + y^2)^{1/2}$, where r represents the magnitude and $\phi$ the phase:
$\phi = \begin{cases} \tan^{-1}(y/x) & \text{I quadrant} \\ \pi + \tan^{-1}(y/x) & \text{II quadrant} \\ \pi + \tan^{-1}(y/x) & \text{III quadrant} \\ 2\pi + \tan^{-1}(y/x) & \text{IV quadrant} \end{cases}$
The asymptotic optimal quantization problem, even for the simplest case of uniform scalar quantization, is still topical nowadays [5]. In [1] the analysis of scalar quantization is carried out in order to determine the optimal maximal amplitude.
Swaszek and Ku [2] didn't consider the problem of finding the optimal maximal amplitude, the so-called support region.
The support region for scalar quantizers has been found in [1] by minimization of the total distortion D, which is a combination of the granular ($D_g$) and overload ($D_o$) distortions:
$D = D_g + D_o$
The goal of this paper is to solve the quantization problem in the case of a nonuniform polar quantizer and to find the corresponding support region. This is done by analytical optimization of the granular distortion and numerical optimization of the total distortion.
In the paper by Peric and Stefanovic [6] an analysis is given for optimal asymptotic uniform polar quantization. An analysis of optimal polar quantization for moderate and smaller values of N is given in [7]. In this paper a simple and complete asymptotic analysis (for large values of N) is given for a nonuniform polar quantizer with respect to the mean-square error (MSE), i.e. the granular distortion ($D_g$). We consider D as a function of the vector $P = (P_i)_{1 \le i \le L}$, whose elements are the numbers of phase quantization levels at each magnitude level. In other words, each concentric ring in the quantization pattern is allowed to have a different number of phase partitions ($P_i$) when r falls in the i-th magnitude ring. Optimal Unrestricted Polar Quantization (OUPQ) must satisfy the constraint $\sum_{i=1}^{L} P_i = N$ in order to use all N regions for the quantization. We prove the existence of one minimum and derive the expression for evaluating $P_{opt}(r, m)$ for fixed values of the reconstruction levels $m = (m_i)_{1 \le i \le L}$, the decision levels $r = (r_i)_{1 \le i \le L+1}$ and the number of levels L. We also give the conditions for the optimum of the polar quantizer, the optimal compressor function and the optimal numbers of levels. We derive $D_g^{opt}$ in closed form.
We also give an example of quantizer construction for a Gaussian source. This case is important because, by using a Gaussian quantizer on an arbitrary source, we can take advantage of the central limit theorem and of the known structure of an optimal scalar quantizer for a Gaussian random variable: a general process is encoded by first filtering it to produce an approximately Gaussian density, scalar-quantizing the result, and then inverse-filtering to recover the original [8].
2 CONDITIONS FOR OPTIMALITY AND DESIGN OF THE UNRESTRICTED POLAR QUANTIZER
For this analysis we assume that the input is a continuously valued, circularly symmetric source with unit variance, rectangular-coordinate marginals and bivariate density function $f(x, y) = p(x^2 + y^2)$. Transforming to polar coordinates, the phase is uniformly distributed on $[0, 2\pi)$ and the magnitude is distributed on $[0, \infty)$ with density function $f(r) = 2\pi r\, p(r^2)$.
Note that magnitude and phase are independent random variables. The transformed probability density function for the Gaussian source is $f(r, \phi) = \frac{1}{2\pi} f(r) = \frac{r}{2\pi\sigma^2}\, e^{-r^2/(2\sigma^2)}$. Without losing generality we assume that the variance is $\sigma^2 = 1$.
We consider a nonuniform polar quantizer with L magnitude levels and $P_i$ phase reconstruction points at magnitude reconstruction level $m_i$, $1 \le i \le L$. In order to minimize the distortion we proceed as follows.
First we partition the magnitude range $[0, r_{L+1}]$ into magnitude rings by L+1 decision levels (see Fig. 1), $r = (r_1, \ldots, r_{L+1})$, with $0 = r_1 < r_2 < \cdots < r_L < r_{L+1} = r_{max}$.
The magnitude reconstruction levels (see Fig. 1), $m = (m_1, \ldots, m_L)$, obviously satisfy $0 < m_1 < m_2 < \cdots < m_L$. Next we partition each magnitude ring into $P_i$ phase subdivisions. Let $\phi_{ij}$ and $\phi_{i,j+1}$ be two phase decision levels and let $\psi_{ij}$ be the j-th phase reconstruction level of the i-th magnitude ring, $1 \le j \le P_i$. Then $\phi_{ij} = \frac{2\pi(j-1)}{P_i}$, $j = 1, \ldots, P_i + 1$, and $\psi_{ij} = \frac{(2j-1)\pi}{P_i}$ (see Fig. 1).
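The phase grid just described can be generated directly. The helper below is a sketch (not from the paper) that returns, for one ring with $P_i$ phase cells, the $P_i + 1$ decision levels and the $P_i$ reconstruction levels, each reconstruction level being the midpoint of its cell:

```python
import math

# Sketch of the phase levels of one magnitude ring: decision levels
# phi_j = 2*pi*(j-1)/P for j = 1..P+1 and reconstruction levels
# psi_j = (2*j-1)*pi/P for j = 1..P, as defined in the text.
def phase_levels(P):
    decisions = [2 * math.pi * j / P for j in range(P + 1)]
    recon = [(2 * j - 1) * math.pi / P for j in range(1, P + 1)]
    return decisions, recon
```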
Fig. 1 The UPQ and the j-th cell on the i-th level
The distortion D for UPQ ($r_{L+1} = \infty$) is [6]

$D = \sum_{i=1}^{L}\sum_{j=1}^{P_i} \int_{r_i}^{r_{i+1}} \int_{\phi_{ij}}^{\phi_{i,j+1}} \left[ r^2 + m_i^2 - 2 r m_i \cos(\phi - \psi_{ij}) \right] \frac{f(r)}{2\pi}\, d\phi\, dr$   (1)
The total distortion D for OUPQ ($r_{L+1} = r_{max}$) is a combination of granular and overload distortions, $D = D_g + D_o$:
$D = \sum_{i=1}^{L}\sum_{j=1}^{P_i} \int_{r_i}^{r_{i+1}} \int_{\phi_{ij}}^{\phi_{i,j+1}} \left[ r^2 + m_i^2 - 2 r m_i \cos(\phi - \psi_{ij}) \right] \frac{f(r)}{2\pi}\, d\phi\, dr + \sum_{j=1}^{P_L} \int_{r_{max}}^{\infty} \int_{\phi_{Lj}}^{\phi_{L,j+1}} \left[ r^2 + m_L^2 - 2 r m_L \cos(\phi - \psi_{Lj}) \right] \frac{f(r)}{2\pi}\, d\phi\, dr$   (2)
We integrate (2) over $\phi$ and get the equation for the granular distortion

$D_g(P, L) = \sum_{i=1}^{L} \int_{r_i}^{r_{i+1}} \left[ r^2 + m_i^2 - 2 r m_i\, \mathrm{sinc}\!\left(\frac{\pi}{P_i}\right) \right] f(r)\, dr$   (3)
where $\mathrm{sinc}(x) = \sin(x)/x$. In (3) we use the approximation $\frac{\sin x}{x} = 1 - \frac{x^2}{6} + \varepsilon(x)$ and obtain
$D_g \approx \sum_{i=1}^{L} \int_{r_i}^{r_{i+1}} \left[ (r - m_i)^2 + \frac{\pi^2 r m_i}{3 P_i^2} \right] f(r)\, dr$   (4)
From $\frac{\partial D_g}{\partial m_i} = 0$ we can find $m_i$ as

$m_i = \frac{r_i + r_{i+1}}{2}\left(1 - \frac{\pi^2}{6 P_i^2}\right)$   (5)
As a final result we find the approximation for $m_i$ as

$m_i = \frac{r_i + r_{i+1}}{2}$   (6)
From high-resolution theory [1] we can obtain that high values of the rate R ($R = \log_2 N$) and the corresponding values of $P_i$ satisfy the given approximation.
The equation for $D_g$ is obtained by using high-resolution theory [6]:

$D_g = \sum_{i=1}^{L} \frac{f(m_i)\, \Delta_i^3}{24} + \frac{\pi^2}{6} \sum_{i=1}^{L} \frac{m_i^2\, \Delta_i\, f(m_i)}{P_i^2}$   (7)
where $\Delta_i = r_{i+1} - r_i$.
We prove that the problem of minimizing $D_g(P)$ is a convex programming problem. The function $D_g(P)$ is convex if its Hessian matrix is positive semidefinite [4]:

$\frac{\partial D_g}{\partial P_i} = -\frac{\pi^2 m_i^2\, \Delta_i\, f(m_i)}{3 P_i^3}$

$\frac{\partial^2 D_g}{\partial P_i\, \partial P_j} = \begin{cases} \dfrac{\pi^2 m_i^2\, \Delta_i\, f(m_i)}{P_i^4} & i = j \\ 0 & i \ne j \end{cases} \quad \Rightarrow \quad \frac{\partial^2 D_g}{\partial P_i\, \partial P_j} \ge 0$   (8)

It follows that $D_g(P)$ is a convex function of P.
The minimization of the function $D_g(P)$ for a fixed number of magnitude levels L, constrained by the total number of reconstruction points N, is formulated in this way: minimize $D_g(P)$ under the constraint $\sum_{i=1}^{L} P_i = N$.
We use the Lagrangian $J = D_g + \lambda \sum_{i=1}^{L} P_i$, where $\lambda$ represents the Lagrange multiplier. From $\frac{\partial J}{\partial P_i} = 0$ we obtain

$\frac{\partial J}{\partial P_i} = -\frac{\pi^2 m_i^2\, \Delta_i\, f(m_i)}{3 P_i^3} + \lambda = 0$
and finally
$P_{i,opt} = N\, \frac{\left( m_i^2\, \Delta_i\, f(m_i) \right)^{1/3}}{\sum_{j=1}^{L} \left( m_j^2\, \Delta_j\, f(m_j) \right)^{1/3}}\,, \qquad 1 \le i \le L$   (9)
Formula (9) is similar to the formula in paper [7] (i.e. it would be obtained utilizing the approximation $\int_{r_i}^{r_{i+1}} r f(r)\, dr \approx m_i\, \Delta_i\, f(m_i)$).
The approximation given by Swaszek and Ku for the asymptotically Unrestricted Polar Quantization (UPQ) [2],

$r_{L+1} - m_L \approx m_L - r_L = \frac{1}{2 L\, g'(m_L)}$   (10)
is not correct for Unrestricted Polar Quantization, because $r_{L+1} - m_L \to \infty$. That is the elementary reason for introducing the support region ($r_{max}$), where $r_{max}$ is restricted; the scalar quantization analysis is based on using a compressor function g.
We replace $\Delta_i = \frac{r_{max}}{L\, g'(m_i)}$, where g is the compressor function, approximate the sums by integrals ($\Delta_i \approx dr$), and get $P_i$ as

$P_i \approx \frac{N\, r_{max} \left( m_i^2\, f(m_i) / g'(m_i) \right)^{1/3}}{L \int_0^{r_{max}} \left( r^2 f(r) (g'(r))^2 \right)^{1/3} dr}$   (11)
As a final result we find the equation for the granular distortion

$D_g = \frac{r_{max}^2}{24 L^2} \int_0^{r_{max}} \frac{f(r)}{(g'(r))^2}\, dr + \frac{\pi^2 L^2}{6 N^2 r_{max}^2} \left( \int_0^{r_{max}} \left( r^2 f(r) (g'(r))^2 \right)^{1/3} dr \right)^{3} = \frac{r_{max}^2\, I_0}{24 L^2} + \frac{\pi^2 L^2\, I_1^3}{6 N^2 r_{max}^2}$   (12)

where $I_0 = \int_0^{r_{max}} \frac{f(r)}{(g'(r))^2}\, dr$ and $I_1 = \int_0^{r_{max}} \left( r^2 f(r) (g'(r))^2 \right)^{1/3} dr$.
The function $D_g(L)$ is convex in L because

$\frac{\partial^2 D_g}{\partial L^2} = \frac{r_{max}^2\, I_0}{4 L^4} + \frac{\pi^2 I_1^3}{3 N^2 r_{max}^2} > 0$
The optimal-number-of-levels problem can be solved analytically only in the asymptotic analysis. From the condition $\frac{\partial D_g}{\partial L} = 0$ we arrive at the optimal solution for $L_{opt}$:

$L_{opt} = r_{max}\, \sqrt[4]{\frac{N^2 I_0}{4 \pi^2 I_1^3}}$   (13)
The optimal granular distortion is

$D_g^{opt} = \frac{\pi}{6N} \sqrt{I_0\, I_1^3}$   (14)
We can obtain g(r), as in [2], by using Hölder's inequality:

$g(r) = r_{max}\, \frac{\int_0^{r} \left( f(s)/s \right)^{1/4} ds}{\int_0^{r_{max}} \left( f(s)/s \right)^{1/4} ds}$   (15)
and

$D_g^{opt} = \frac{\pi}{6N} \left( \int_0^{r_{max}} \sqrt{r\, f(r)}\, dr \right)^{2}$   (16)
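Equations (12)-(16) can be checked numerically. The sketch below does this for the Gaussian source, whose magnitude density is $f(r) = r\,e^{-r^2/2}$; the plain Riemann-sum integration and the step count are implementation choices made here for illustration, not part of the paper:

```python
import math

# Numerical sketch of Eqs. (12)-(14) for the Gaussian source.
# I0 and I1 are the two integrals of Eq. (12); the compressor slope
# g'(r) proportional to (f(r)/r)^(1/4) follows Eq. (15), normalized so
# that g maps [0, r_max] onto [0, r_max].
def design(N, r_max, steps=20000):
    f = lambda r: r * math.exp(-r * r / 2)
    gp = lambda r: (f(r) / r) ** 0.25 if r > 0 else 1.0
    h = r_max / steps
    rs = [(k + 0.5) * h for k in range(steps)]        # midpoint grid
    norm = sum(gp(r) for r in rs) * h
    gprime = lambda r: gp(r) * r_max / norm           # g(r_max) = r_max
    I0 = sum(f(r) / gprime(r) ** 2 for r in rs) * h
    I1 = sum((r * r * f(r) * gprime(r) ** 2) ** (1.0 / 3.0) for r in rs) * h
    L_opt = r_max * (N * N * I0 / (4 * math.pi ** 2 * I1 ** 3)) ** 0.25
    Dg_opt = math.pi / (6 * N) * math.sqrt(I0 * I1 ** 3)
    return L_opt, Dg_opt
```

For N = 221 and a support region of a few standard deviations this yields $L_{opt}$ close to $\sqrt{N/2} \approx 10.5$, consistent with the example discussed below for the Gaussian source.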
Example: We compared results for the Gaussian source. The numbers of magnitude levels, reconstruction points and decision levels are calculated using (for the Gaussian source [2]):
$L = \sqrt{N/2}$

$P_i = \sqrt{\pi N}\, m_i\, e^{-m_i^2/8}$

$r_i = g^{-1}\!\left[ \frac{i-1}{L} \right], \quad 1 \le i \le L+1, \quad r_{L+1} = \infty$

$m_i = g^{-1}\!\left[ \frac{2i-1}{2L} \right], \quad 1 \le i \le L$

where g(r) is a compressor function given by

$g(r) = \frac{\int_0^{r} \left( f(s)/s \right)^{1/4} ds}{\int_0^{\infty} \left( f(s)/s \right)^{1/4} ds}$
The method presented in paper [2] cannot be applied for some values of N and numbers of levels L. For a number of levels L, the total number of points is in the range $\lceil N_1 \rceil \le N \le \lfloor N_2 \rfloor$, where $N_1 = 2(\mathrm{round}(L) - 0.5)^2$ and $N_2 = 2(\mathrm{round}(L) + 0.5)^2$.
This follows from the fact that r and m are equal for any N in the range $\lceil N_1 \rceil \le N \le \lfloor N_2 \rfloor$, and since $P_{opt}$ depends on m, N and the introduced approximations, the constraint $\sum_{i=1}^{L} P_i = N$ will not be satisfied. In addition, for some values of N from the former range we cannot reach $\sum_{i=1}^{L} P_i = N$ at all.
To estimate roughly how much the number of points calculated by the method from paper [2] deviates from the proposed number of points N, we make the following approximate analysis. For the estimation of $\sum_{i=1}^{L} P_i$ we found the total number of points [2] as

$\sum_{i=1}^{L} P_i = \sqrt{\pi N} \sum_{i=1}^{L} m_i\, e^{-m_i^2/8} \approx \mathrm{round}(L)\sqrt{2N} \sum_{i=1}^{L} \frac{m_i}{2}\, e^{-m_i^2/4}\, \Delta_i \approx \mathrm{round}(L)\sqrt{2N} \int_0^{\infty} \frac{r}{2}\, e^{-r^2/4}\, dr = \mathrm{round}(L)\sqrt{2N} = M$
We considered the most critical values, $N = M_1 = \lceil N_1 \rceil$ and $N = M_2 = \lfloor N_2 \rfloor$, where $\delta_i = M_i - N_i$ (see Table 1).
Table 1
An exact analysis, i.e. the deviation of the calculated number of points from the proposed number of points, is given for L = 11 and N = 221 (see Table 2).
By Swaszek and Ku [2], for each L = const, m and r are the same. For $N = \lceil N_1 \rceil = 221$ (L = 11), $\sum_{i=1}^{L} P_i = 232.84$ and $\delta_1 = 11.84$ (approximately $\delta_1 = 10.26$ from Table 1).
For $P_i = \mathrm{round}(P_i)$ we cannot satisfy the constraint: $\sum_{i=1}^{L} P_i = 233 \ne 221 = N$. We get 11 values of $P_i$ by rounding, but 9 of them differ from the values in [2].
Table 2
For a fixed number N we determine $(P_i, L)$ as follows.

Step 1) Compute

$L_{opt} = r_{max}\, \sqrt[4]{\frac{N^2 I_0}{4 \pi^2 I_1^3}}$

where g(r) is the compressor function given by

$g(r) = r_{max}\, \frac{\int_0^{r} \left( f(s)/s \right)^{1/4} ds}{\int_0^{r_{max}} \left( f(s)/s \right)^{1/4} ds}$

Step 2) Compute

$P_{i,opt} = N\, \frac{\left( m_i^2\, \Delta_i\, f(m_i) \right)^{1/3}}{\sum_{j=1}^{L} \left( m_j^2\, \Delta_j\, f(m_j) \right)^{1/3}}\,, \quad 1 \le i \le L$

Step 3) The exact optimal value of $r_{max}$ is obtained by repeating our optimization method for different values of $r_{max}$ and choosing the one for which $D = D_g + D_o$ is minimal.
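Step 3 amounts to a one-dimensional search over the support region. The sketch below illustrates this; `total_distortion` is a hypothetical stand-in for the evaluation of $D = D_g + D_o$ (Steps 1 and 2 plus the overload term of Eq. (2)), so any callable of that shape will do:

```python
# Sketch of Step 3: for each candidate r_max, Steps 1 and 2 fix L_opt and
# the P_i, the total distortion D = Dg + Do is evaluated, and the r_max
# with the smallest D is kept.
def optimize_support_region(total_distortion, candidates):
    best_r, best_d = None, float("inf")
    for r_max in candidates:
        d = total_distortion(r_max)
        if d < best_d:
            best_r, best_d = r_max, d
    return best_r, best_d
```

Since $D_g$ falls and $D_o$ grows with $r_{max}$, the total distortion has a single minimum over a reasonable candidate grid, so a plain scan is adequate here.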
3 CONCLUSION
The solution given by Swaszek and Ku [2] is the best one found so far, but only for large N: Swaszek and Ku gave an asymptotic solution for unrestricted nonuniform polar quantization without a mathematical proof of the optimum, using at times quite crude approximations which limit its application. We gave elementary reasons for considering the support region of polar quantization. In this paper a simple and complete asymptotically optimal analysis is given for constructing a nonuniform unrestricted polar quantizer. We also gave the conditions for optimality of the nonuniform polar quantizer, an equation for the optimal number of points at the different levels, and the optimal number of levels (these equations always satisfy the constraint $\sum_{i=1}^{L} P_{i,opt} = N$). The equation for $D_g^{opt}$ is given in closed form. Applying our algorithm, the incompleteness of [2] is eliminated.
REFERENCES
[1] S. Na, D. L. Neuhoff: "On the Support of MSE-Optimal Fixed-Rate Scalar Quantizers", IEEE Transactions on Information Theory, vol. 47, pp. 2972-2982, November 2001.
[2] P. F. Swaszek, T. W. Ku: "Asymptotic Performance of Unrestricted Polar Quantizers", IEEE Transactions on Information Theory, vol. 32, pp. 330-333, 1986.
[3] F. T. Arslan: "Adaptive Bit Rate Allocation in Compression of SAR Images with JPEG2000", The University of Arizona, USA, 2001.
[4] P. Venkataraman: Applied Optimization with Matlab Programming, John Wiley, New York, USA, 2002.
[5] D. Hui, D. L. Neuhoff: "Asymptotic Analysis of Optimal Fixed-Rate Uniform Scalar Quantization", IEEE Transactions on Information Theory, vol. 47, pp. 957-977, March 2001.
[6] Z. H. Peric, M. C. Stefanovic: "Asymptotic Analysis of Optimal Uniform Polar Quantization", International Journal of Electronics and Communications, vol. 56, pp. 345-347, 2002.
[7] Z. H. Peric, S. M. Bogosavljevic: "An algorithm for construction of optimal polar quantizers", Journal of Electrical Engineering, vol. 4, No. 1, pp. 73-78, 2004.
[8] K. Popat, K. Zeger: "Robust quantization of memoryless sources using dispersive FIR filters", IEEE Transactions on Communications, vol. 40, pp. 1670-1674, November 1992.
BIOGRAPHY
Zoran H. Peric was born in Nis, Serbia, in 1964. He received the B.Sc. degree in electronics and telecommunications from the Faculty of Electronic Engineering, Nis, Serbia, Yugoslavia, in 1989, the M.Sc. degree in telecommunications from the University of Nis in 1994, and the Ph.D. degree from the University of Nis in 1999. He is currently a Professor at the Department of Telecommunications, University of Nis, Yugoslavia. His current research interests include information theory, source and channel coding and signal processing. He is particularly working on scalar and vector quantization techniques in image compression. He has authored and coauthored over 60 scientific papers. Dr. Zoran Peric has been a reviewer for IEEE Transactions on Information Theory.
Srdjan M. Bogosavljevic was born in Nis, Serbia, in 1967. He received the B.Sc. degree in electronics and telecommunications from the Faculty of Electronic Engineering, Nis, Serbia, in 1992, and the M.Sc. degree in telecommunications from the University of Nis in 1999. He has authored and coauthored 22 scientific papers. His current interests include information theory, source coding and polar quantization.
PROCESS FUNCTIONAL PROPERTIES AND ASPECT LANGUAGE
Jaacuten KOLLAacuteR
Department of Computers and Informatics Faculty of Electrical Engineering and Informatics
Technical University of Košice Letnaacute 9 042 00 Košice Slovak Republic
tel.: +421 55 602 2577, E-mail: Jan.Kollar@tuke.sk
SUMMARY
In this paper we present the essential characteristics of the aspect-oriented approach to programming, as provided in aspect programming languages. Then we de-modularize the program structure of a process functional sample into a type definition module and a definition module proper, using the purely functional case. By adding environment variables to the type definition module, we show that the process functional paradigm offers possible resources for computational reflection in a well-defined variable environment. We also identify the weaknesses and possible directions in the further development of an object-oriented process functional language, to extend it to an aspect-oriented language.
Keywords: programming paradigms, process functional programming, aspect-oriented programming, computational reflection, programming environments
1 INTRODUCTION
Aspect-oriented programming evolves from the fact that there exist crosscutting concerns in systems that cannot be well modularized using traditional structured, object- or component-based software development methodologies. There is no formal proof, but a great deal of evidence, that the combination of different concerns of computation in complex software systems yields scattered and tangled code which is inappropriate for maintenance [2,3,4]. Sometimes an appropriate modularization can still be reached, but the price is too high: the run-time efficiency is decreased.
The other source of tangled code is adding a new concern of computation after a system has been developed. The situation when manifold source code modifications are needed for the purpose of efficiency is a nightmare for programmers. Scattering code manually clearly decreases the reliability of the system and its maintainability.
AspectJ [7,8] is a programming language which provides the programmer the opportunity to describe crosscutting concerns modularly via aspect declarations. The aspect declaration, similar to a class declaration, is a modular unit which in addition to a class declaration contains:
· pointcut – the definition of a collection of join points – well-defined points of computation at which advice is applied, and
· advice – a part of code which is applied at the join points defined by a pointcut designator.
The AspectJ approach has evolved from Java, which is an inherently object-oriented imperative language. Therefore it may seem that aspect languages are applicable just to the object-oriented paradigm, but this is not true [1,16,35]. Crosscutting concerns can be taken into account also at the procedural level, excluding the object paradigm, or at the functional level, excluding the imperative paradigm. On the other hand, the crucial question is the usefulness of separated programming paradigms for the development of large systems. In our view, the better direction is to integrate them.
For example, the object paradigm is without doubt the best-balanced basis for applying crosscutting concerns across classes, because of the complexity of systems and their imperative nature.
However, the limits of the AspectJ language are already known [9]. Their substance is as follows. Sometimes there is too strong an interference between the function of computation and an aspect (specifically when parallel concerns are considered), and then the benefits of the aspect approach are not as high as expected. The reason may lie in the strong binding of AspectJ to Java byte code. It may be noticed that AspectJ pointcut designators have their origins in the Java language implementation, since AspectJ is an extension of Java.
In this paper we present our approach to the possible incorporation of the aspect programming paradigm into PFL – a process functional programming language that is based on the application of processes rather than statement sequences [10,11,12,13,14]. Although at the present time we have object PFL implemented [15,29,30,31,32] with both Haskell [22] and Java target code, it is not our aim to provide just a new programming language. The aim is to exploit the uniform and simple multi-paradigmatic structure of PFL, integrating the functional, imperative [5,34] and object-oriented paradigm [15] with the aspect paradigm. We have found this useful during experiments with profiling process functional programs [23,24,25] and mobile agent programming [20]. In the following sections we present the essence of the aspect-oriented conception, and then, using a simple tracing example, we show the properties of the process functional paradigm with respect to the requirements of aspect extensions. Finally we discuss the current state and possible directions of further research.
2 ASPECT ORIENTED CONCEPTION
Let us introduce the essential conception of the aspect approach to system development according to Fig. 2.1. For the purpose of simplicity, let us consider the incremental development of a system, considering first the functional aspect of computation and after that some tracing aspect. Let the functionality of the system be defined by the structure of two modules, as illustrated by gray rectangles in stage 1 of Fig. 2.1.
Figure 2.1 Aspect-oriented conception
Omitting the detailed function, the system of two modules can be compiled and executed. Suppose we need to include some tracing actions in the modules. Instead of doing it manually, in the aspect approach we write (in stage 2) the ASPECT module. This module consists of the pointcut and the advice. The pointcut is a collection of points in the original modules that are the subject of interest (the subject of tracing in our case). Such points are called join points. The pointcut is defined by the pointcut designator, i.e. a formula that identifies a collection of join points, marked by small dots in the modules in Fig. 2.1. In this manner join points are just identified, but the original modules are not affected.
The second part of the aspect is the advice – a part of code which we want to place at the join points. The pointcut is used in the definition of the advice. Stage 2 is finished.
Stage 3 in Fig. 2.1 illustrates weaving, which is an automated process of transforming the original modules and the defined aspect module, producing two modules into which the tracing actions are woven.
The result is a new system consisting of two modules in which the advice is applied, see stage 4 in Fig. 2.1. As can be seen, this new system has the tracing code scattered across the original modules.
There are two main benefits of the aspect approach. First, a programmer need not scatter the advised tracing code manually, and second, whenever needed, the tracing aspect may be "removed" by re-compilation of the original system, to obtain the system with the functionality it had before its aspectizing.
Although the tracing example yields scattered code, there is a great deal of evidence that combining other aspects can yield even tangled code, and this does not depend on whether the system is developed incrementally or not.
The tracing above is based on a pointcut which defines static join points that are the subject of compile-time weaving. As opposed to static join points, dynamic join points are those defined in the dynamic context of a program, i.e. during execution. An example is the cflow pointcut designator in AspectJ, which is used to define join points occurring in all methods called from a given method of a class.
Then, instead of static weaving, dynamic (i.e. run-time) weaving must be used to perform crosscutting at dynamic join points.
The complication coming out of the dynamic context of a program is as follows. The events during execution belong to different abstraction levels, from the input values of the computation up to architecture resources. The commonly accepted mechanism which allows run-time crosscutting to be identified is computational reflection [26].
Computational reflection is the capability of a computational system to reason about itself, act upon itself, and adjust to changing conditions. The computational domain of a reflective system is the structure and the computations of the system itself. A reflective system incorporates data representing its own static and dynamic aspects; this activity is called reification. This self-representation makes it possible for the system to answer questions about itself and support actions on itself.
Thus the crucial task associated with dynamic context reasoning is to incorporate reflection data into a system, extracting them from the original computation. In particular, we will show in this paper how this can be solved using the process functional program structure.
In the next section we present a possible modularization of a purely functional program, starting with a simple purely functional case and obtaining a separate function type definition module and a function definition module. In section 4 we use the type module aspectized by a variable environment.
3 TYPE AND DEFINITION MODULE
The process functional paradigm is based on the evaluation of processes that affect memory cells by their applications. PFL – an experimental process functional language – comes out of pure functional languages, including imperative programming environments [15]. PFL environments are manipulated neither in a monadic manner [34] nor in an assignment-based manner. Instead, the source form of a process functional program strongly separates the visible sets of environment variables (in type definitions) from the invisible side-effect operations (in definitions). In this section we consider just (pure) functions f and g (not processes) and the main expression main, as introduced in Fig. 3.1.
f :: Int -> Int
f x = 2*x
g :: Int -> Int -> Int
g x y = f x + f y
main :: Int
main = g 2 3
Figure 3.1 Purely functional program P
The PFL form of the purely functional program P is identical to that in Haskell, using currying in the application of functions, for example (g 2 3) instead of g(2,3) – the form usual in imperative languages. The evaluation of program P proceeds by reduction as follows:
main = g 2 3
     ⇒ f 2 + f 3
     ⇒ 2*2 + 2*3
     ⇒ 10                  (3.1)
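Since the purely functional core of PFL coincides with Haskell here, program P can be checked directly; the following is a minimal sketch using the names of Fig. 3.1:

```haskell
-- Program P of Fig. 3.1, written as plain Haskell.
f :: Int -> Int
f x = 2 * x

g :: Int -> Int -> Int
g x y = f x + f y

-- g 2 3 reduces to f 2 + f 3 = 4 + 6 = 10, as in (3.1)
main :: IO ()
main = print (g 2 3)
```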
The evaluation is the same if the program is written without function type definitions, see Fig. 3.2, because the types are derivable from the definitions in the Milner type system. Let us designate this function definition module D. Then the semantics of P and D is the same, i.e.
[[P]] = [[D]]                  (3.2)
f x = 2*x
g x y = f x + f y
main = g 2 3
Figure 3.2 Function definition module D
Since the mutual position of the type definition and the definition of a function in a program is not significant, we may write all type definitions in a separate type definition module TM, illustrated in Fig. 3.3.
f :: Int -> Int
g :: Int -> Int -> Int
main :: Int
Figure 3.3 Function type definition module TM
If, applying the composition W to the modules TM and D, the composed program W(TM, D) is the source program in Fig. 3.4, then the semantics of P is the same as that of W(TM, D):
[[P]] = [[W(TM, D)]]                  (3.3)
f :: Int -> Int
g :: Int -> Int -> Int
main :: Int
f x = 2*x
g x y = f x + f y
main = g 2 3
Figure 3.4 Composed program W(TM, D)
If D is an original module and TM is an advice which is added at the join point before the first definition in D by default, then in terms of aspect programming W is a trivial weaver. This weaver is an identity, since, as follows from (3.2) and (3.3), it holds that
[[W(TM, D)]] = [[D]]                  (3.4)
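As a purely hypothetical illustration (not PFL's actual implementation, which works on syntax trees), the trivial weaver W can be sketched as plain concatenation of module sources, the advice TM being placed before the first definition of D:

```haskell
-- A module modelled as a list of source lines (an assumption made
-- only for this sketch).
type Module = [String]

-- Trivial weaver: the advice (type module) is inserted before the
-- first definition of the original module by default.
w :: Module -> Module -> Module
w tm d = tm ++ d

tmMod, dMod :: Module
tmMod = ["f :: Int -> Int", "g :: Int -> Int -> Int", "main :: Int"]
dMod  = ["f x = 2*x", "g x y = f x + f y", "main = g 2 3"]

main :: IO ()
main = mapM_ putStrLn (w tmMod dMod)  -- prints the composed program of Fig. 3.4
```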
Let us consider the polymorphic function type definitions in a separate module in Fig. 3.5. Instead of the type constants Int, type variables are used.
f :: a -> a
g :: a -> a -> a
main :: a
Figure 3.5 Polymorphic type module TP
The same weaver W is used to compose TP and D, obtaining the woven program W(TP, D) according to Fig. 3.6.
f :: a -> a
g :: a -> a -> a
main :: a
f x = 2*x
g x y = f x + f y
main = g 2 3
Figure 3.6 Composed program W(TP, D)
Since during the type-checking phase the monomorphic types for all functions are derived as in P, we may conclude, as for the monomorphic case, that it holds that
[[W(TP, D)]] = [[D]]                  (3.5)
Informally, including the 'aspect' in a purely functional definition module in the form of function type definitions (both monomorphic and polymorphic) does not affect the evaluation at all, since it is the same as introduced in (3.1).
It may be noticed that functional programming style is not our interest here (clearly the form in Fig. 3.1 is the most appropriate form from this viewpoint). We are interested in separating concerns in PFL with respect to the aspect programming paradigm.
The importance of separating concerns into different modules grows when considering additional aspects of computation. As shown in the next section, we are able to slightly modify the type module, without any change to the definition module, and then weave them, changing the semantics of program P, i.e. of the definition D. This fact is crucial in aspect programming.
4 STATE ASPECT
Suppose now a "small" change of the type definition module TP according to Fig. 4.1, where u, v and w are environment variables.
f :: u a -> a
g :: v a -> w a -> a
main :: a
Figure 4.1 State aspect TS
In this way we have defined the state aspect of computation, since by TS we require two things:
1. For all applications of f in D, before f is applied to an argument e, assign e to u and then use e as the argument. This follows from (u a) in the type definition for f.
2. For all applications of g in D, before g is applied to the first argument e1, assign e1 to v and then use e1 as the first argument of g; and before (g e1) is applied to the argument e2, assign e2 to w and then use e2 as the second argument of g. This follows from the type definition for g.
For example, (f 2) will perform the assignment u:=2 (using Pascal notation) and then (f 2) will be evaluated as in the purely functional case. Considering (g 2 3), it is guaranteed that the assignments v:=2 and w:=3 are performed before (g 2 3) is evaluated, continuing with the evaluation of f 2 + f 3.
This means that in addition to the purely functional evaluation according to reduction (3.1), side-effect actions (assignments) are performed. Or, from another viewpoint, the argument values of functions f and g are traced using three environment variables u, v and w.
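PFL hides these side effects in the type definitions, but the behaviour of the woven program PS can be mimicked in Haskell with mutable cells. The following monadic sketch is our own approximation, not PFL output; the names runPS, lastU, lastV, lastW are introduced here for illustration:

```haskell
import Data.IORef

-- Environment cells uc, vc, wc modelled as IORefs.  Each environment
-- variable stores its argument and passes it on unchanged, so the
-- function of computation is preserved while the arguments are traced.
runPS :: IO (Int, Int, Int, Int)
runPS = do
  uc <- newIORef 0; vc <- newIORef 0; wc <- newIORef 0
  let u e = writeIORef uc e >> return e   -- the assignment aspect: u := e
      v e = writeIORef vc e >> return e
      w e = writeIORef wc e >> return e
      f x = (2 *) <$> u x                 -- f (u x) of the woven program
      g x y = (+) <$> f x <*> f y         -- g applied to traced arguments
  r <- do { a <- v 2; b <- w 3; g a b }   -- main = g (v 2) (w 3)
  lastU <- readIORef uc
  lastV <- readIORef vc
  lastW <- readIORef wc
  return (r, lastU, lastV, lastW)

main :: IO ()
main = runPS >>= print  -- (10,3,2,3): the result and the last stored cell values
```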
However, the selection of join points is weak. Our pointcut designator can be expressed just informally, as follows:
Join points are all arguments of functions defined by a user (i.e. excluding built-in operations).
Our join points are identified with a very low flexibility, since there are no designators able to use quantifiers and/or logical operations in PFL.
In this paper we concentrate on advice as "parts of code" applied at join points. In this matter it is substantial to understand the weaving
W(TS, D)                  (4.1)
which, using the same weaver W and the same definitions D as above, produces the program PS, which evaluates differently from program P. Hence the new aspect TS affects the semantics, i.e. it holds that
[[W(TS, D)]] ≠ [[D]]                  (4.2)
The woven form of program PS is in Fig. 4.2.
According to Fig. 4.2, we have introduced three environment variables in an (imperative) environment, we have defined three functions in a class Env, and we apply them to each argument of the user-defined functions. Let us first consider these applications informally.
env
  uc :: a
  vc :: a
  wc :: a
class (Env b a) where
  u :: b -> a
  v :: b -> a
  w :: b -> a
instance (Env a a) where
  u x = let uc = x in uc
  v x = let vc = x in vc
  w x = let wc = x in wc
instance (Env () a) where
  u x = uc
  v x = vc
  w x = wc
f :: a -> a
g :: a -> a -> a
main :: a
f x = 2*x
g x y = f (u x) + f (u y)
main = g (v 2) (w 3)
Figure 4.2 Program PS = W(TS, D)
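The shape of the generated class Env can be reproduced in Haskell with multi-parameter type classes. In this hedged sketch the state aspect is elided: the data-type instance is a pure identity, and the ()-instance returns a fixed stand-in value instead of reading the cell uc as PFL would:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- Env b a unifies the data and control types only in the argument
-- type b, as noted in the conclusion of the paper.
class Env b a where
  u :: b -> a

instance Env a a where
  u x = x        -- data argument: identity (in PFL, also a store to uc)

instance Env () Int where
  u () = 0       -- control argument: in PFL, a read of the cell uc;
                 -- 0 is a stand-in value for this sketch

main :: IO ()
main = print (u (5 :: Int) :: Int, u () :: Int)  -- prints (5,0)
```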
Corresponding to our requirements for all applications of f and g, defined by our informal pointcut above, we require the result of the evaluation to be the same as in (3.1). The function of computation is preserved if it holds that
u e = e,  v e = e,  w e = e
for all expressions e of a data type. It means that environment variables in PFL are not just memory cells, but are identities if their arguments are of a data type.
Next, before an environment variable is applied to an argument e, the argument e is stored into the variable (since the environment variable is not just an identity but also a memory cell). This state aspect corresponds to the assignments
uc = e,  vc = e,  wc = e
for all expressions e of a data type, where variables as cells are marked by c to distinguish them from variables as functions. Hence an application such as (v e) evaluates in two subsequent steps s and e, which we express by a pair
(s, e)
where s may be an assignment or an empty action, i.e. a state action, and e is an expression which defines the (functional) value of the application.
Then the complete definition of a variable v in terms of the two aspects is as follows:
v x = (vc=x, x)    if x ≠ ()
v x = (ε, vc)      if x = ()
A semantically equivalent definition to that above is as follows.
Definition 4.1 Informal definition of an environment variable
v x = (vc=x, vc)    if x ≠ ()
v x = (ε, vc)       if x = ()
The latter better expresses the argument data flow through the variable. The second equation is not used in our examples, since here we work just with data values. But notice that if an argument of a function were the control value, designated by (), then the state would not be affected (since the state action is empty) and the application v () would yield the data value previously stored in cell vc.
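Both equations of the definition can be modelled in Haskell with a single mutable cell; this is a hedged sketch of our own (the names demo, vData, vCtrl are introduced for illustration), not PFL syntax:

```haskell
import Data.IORef

-- A model of one PFL environment variable v: applied to a data value
-- it stores and returns it; applied to the control value () it returns
-- the value last stored, leaving the state unchanged.
demo :: IO (Int, Int)
demo = do
  vc <- newIORef (0 :: Int)
  let vData e  = writeIORef vc e >> return e  -- v x = (vc=x, x)
      vCtrl () = readIORef vc                 -- v x = (empty action, vc)
  x <- vData 7
  y <- vCtrl ()
  return (x, y)

main :: IO ()
main = demo >>= print  -- (7,7): v () yields the value stored in cell vc
```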
The definition of v above is informal, since the value of the application is not the pair on the right-hand side but just its second item; we use imperative sequencing and imperative assignment in the pair on the right-hand side of the informal definition. But looking at Fig. 4.2, it is easy to see that it holds that
(vc=x, vc) = let vc=x in vc
(ε, vc) = vc
Using the informal definition of an environment variable, the program PS is evaluated as follows:
main = g (v=2, 2) (w=3, 3)
     ⇒ f (u=2, 2) + f (u=3, 3)
     ⇒ 2*2 + 2*3
     ⇒ 10                  (4.3)
To simplify the notation, we designate the cells by u, v and w, not using uc, vc and wc anymore. In addition to the function of computation being evaluated (the value of (v=2, 2) is 2, the value of (w=3, 3) is 3, etc.), program PS traces all argument values used in applications of user-defined functions, storing them into variables – external memory cells that belong to the variable environment env of the computation.
Since these functions affect the variable environment, they are processes rather than functions. That is why we call this paradigm process functional. However, in the framework of this paper it is more substantial that by weaving the modules TS and D, the semantics of the original module D changes, according to (4.2).
Notice that our "weaver" W performs a compile-time transformation when producing W(TS, D). But the same W acts as an identity when producing W(D). In each case the type checking is performed after weaving.
Further, as follows from the evaluation of W(TS, D), we can say that the arguments of user-defined functions are reflected in the variable environment, performing the following sequence of assignments:
v=2; w=3; u=2; u=3
The sequence above holds if all arguments are evaluated in the leftmost order and + is a left-associative operation. Some comments on this and other problems associated with maintaining reflective information are introduced in the following section.
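The assignment order can be observed concretely by logging each reflected assignment; this sketch (with the illustrative names trace and var, and assuming leftmost evaluation, which the IO monad guarantees) reproduces the sequence above:

```haskell
import Data.IORef

-- Observing the order of reflected assignments: every environment
-- variable appends (name, value) to a log before passing its argument on.
trace :: IO [(String, Int)]
trace = do
  logRef <- newIORef []
  let var name e = modifyIORef logRef (++ [(name, e)]) >> return e
      u = var "u"; v = var "v"; w = var "w"
      f x = (2 *) <$> u x
      g x y = (+) <$> f x <*> f y
  _ <- do { a <- v 2; b <- w 3; g a b }
  readIORef logRef

main :: IO ()
main = trace >>= print  -- [("v",2),("w",3),("u",2),("u",3)] with leftmost order
```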
5 DISCUSSION
In this section we identify some problems coming out of the current state of the process functional programming language, which is intended to be adapted into an aspect programming language.
Currently we have developed a compiler from object-oriented PFL to both the Haskell and Java languages. The purpose of the PFL project was to provide a programming language which gives the user an open view of the variable environment, as in imperative languages, and at the same time preserves the approach of purely functional languages, in which evaluation is defined by the application of processes and functions, excluding sequences of statements. As a result, PFL is a simple and expressive language, and still more relaxed than Haskell, since the function of computation can be affected by the evaluation order.
The weaknesses of the PFL language and its perspectives from the viewpoint of the aspect programming paradigm are as follows:
· The order of evaluation is fixed and is supposed to be known to the programmer. Hence the aspect of evaluation order, which is associated with parallelism, cannot be defined separately. Since this aspect is highly dependent on the target architecture, sometimes even at the level of built-in operations [6,33], it must be expressible explicitly.
· Nothing has been said in this paper about the use of reflected values. But PFL is capable of defining multi-threaded programs, and the mechanism for accessing the values in environments is defined by the application of an environment variable to the control value. The updates can be performed in one thread and the accesses in another.
· Using control values is possible, but it is bad programming praxis. One possible solution to the "tearing" of purely functional programs is the monadic approach. This is well disciplined, but still just a programming methodology, so including control values as a new control aspect seems to be more promising.
· In this paper the mechanism of application of environment variables is used just to reflect the values of arguments. But it may be noticed that the mechanism is very strong, because we may reflect not just values coming from the computation but also values from an external environment, such as architecture resources.
· It is also possible to use a single variable for many points of a program. If we use v instead of both u and w in TS, we obtain the following trace:
v=2; v=3; v=2; v=3
· Although PFL arrays are beyond the scope of this paper, the process functional paradigm can be applied in the backward direction. This means that it is possible to generate an application of a newly generated variable to each expression, instead of that expression, and then compose the set of variables into an array whose "application" to a type substitutes this type in a function type definition. Then we would obtain something like this:
v=2; w=3; u0=2; u1=3
· Using PFL, the reflection interface is still not flexible enough, since just environment variables in type definitions are used. Extensions are the subject of our current research.
· At this time there is a strong feeling that a fixed number of abstraction levels is not sufficient to provide a general-purpose aspect language open to new aspects that may arise in the future.
· Currently no pointcuts can be defined in PFL. It is however clear that pointcuts must be defined over abstraction levels rather than according to user requirements. Providing the appropriate syntax and semantics of pointcuts is a crucial task, since they affect compile-time pre-weaving and are related to reflection information when performing run-time weaving.
6 CONCLUSION
In this paper we used the principle of composing multiple modules into a target program by source-to-source transformation. Using a simple tracing example, we have shown the principle of the reflection of values in a purely functional evaluation into an external variable environment.
We also briefly discussed the use of values coming from external environment variables. It may be noticed that our type system unifies data and control types just for the arguments of environment variables (the types are unified just in the type variable b of a generated class Env b a, otherwise not). This is the difference between PFL and Haskell.
In contrast to specification approaches oriented towards the correctness of programs [17,18,19] or specialized tools for time-critical systems [27,28], our approach supports the computational environments of systems in a more open way. We take into account different levels of abstraction, still working at the programming language level and at the same time at the level of the programming paradigm.
Considering that aspects are crosscutting concerns of computation, pointcut designators must specify the lexical, syntactic and semantic levels of an aspect language, the environmental properties, and the run-time events of computation. But this is still not sufficient, since it is necessary to prevent the situation when adding a new aspect fails because of language restrictions.
The openness to dynamic aspects is a crucial property of an aspect language. In this paper we have presented the systematic manipulation of environments provided by the process functional paradigm as a proposition for the development of an aspect process functional language considering computational reflection.
REFERENCES
BIOGRAPHY
Ján Kollár was born in 1954. He received his MSc summa cum laude in 1978 and his PhD in Computing Science in 1991. In 1978-1981 he was with the Institute of Electrical Machines in Košice. In 1982-1991 he was with the Institute of Computer Science at the University of P. J. Šafárik in Košice. Since 1992 he has been with the Department of Computers and Informatics at the Technical University of Košice. In 1985 he spent 3 months at the Joint Institute of Nuclear Research in Dubna, Soviet Union. In 1990 he spent 2 months at the Department of Computer Science at Reading University, Great Britain. He has been involved in research projects dealing with real-time systems, the design of (micro)programming languages, image processing and remote sensing, dataflow systems, educational systems, and the implementation of functional programming languages. Currently the subject of his research is the implementation of multi-paradigmatic languages.
Fig. 1 Signal flow graph for 1D RT
Fig. 2 Functional flow graph of multilayer GMDH algorithm
Fig. 3 Data splitting
Fig. 4 Block scheme of pattern recognition system based on RT and MRT transforms and GMDH algorithm
Tab. 3 The efficiency of the recognition process for cuneiform writings (%)

Teaching set                                  RT       RT + linear GMDH   RT + non-linear GMDH
Without noise and 1 % of noise                78.549   86.736             77.222
Without noise, 1 and 2 % of noise             80.452   90.973             84.167
Without noise, 1, 2 and 3 % of noise          82.778   94.271             87.361
Without noise, 1, 2, 3 and 4 % of noise       84.028   95.209             91.875
Without noise, 1, 2, 3, 4 and 5 % of noise    86.354   96.320             95.174
Tab. 2 The efficiency of the recognition process for informative symbols (%)

Teaching set                                  RT       RT + linear GMDH   RT + non-linear GMDH
Without noise and 1 % of noise                89.091   85.859             93.131
Without noise, 1 and 2 % of noise             90.101   91.313             93.838
Without noise, 1, 2 and 3 % of noise          91.111   94.646             94.141
Without noise, 1, 2, 3 and 4 % of noise       92.828   95.253             97.576
Without noise, 1, 2, 3, 4 and 5 % of noise    94.545   95.960             97.576
Tab. 1 The efficiency of the recognition process for nativity symbols (%)

Teaching set                                  RT       RT + linear GMDH   RT + non-linear GMDH
Without noise and 1 % of noise                80.274   77.592             69.722
Without noise, 1 and 2 % of noise             84.167   82.407             68.333
Without noise, 1, 2 and 3 % of noise          88.333   87.315             84.537
Without noise, 1, 2, 3 and 4 % of noise       88.981   95.092             89.722
Without noise, 1, 2, 3, 4 and 5 % of noise    90.833   96.111             94.074
Fig. 1 Time behaviour in the middle area of fault
Fig. 1 The standard gray image Lena 256 x 256: a) original, b) image changed by increasing contrast, c) image distorted by blurring, d) image after JPEG compression. The b, c, d images have MSE about 225 in comparison with the original.
Fig. 2 The Lena image distorted by salt-and-pepper noise (a) and by multiplicative noise (b), respectively. Both noised images have an MSE value close to 225 in comparison to the original image.
Tab. 1 Comparison of subjective, MSE and structural similarity index (Q) ranking of damaged Lena image versions
This work was supported by VEGA Grant No. 1/1065/04 Specification and Implementation of Aspects in Programming.