Conference Proceedings

22nd International Conference MEKON 2020

February 6th, 2020

Faculty of Economics, VSB – TU Ostrava


VSB – Technical University of Ostrava

Faculty of Economics

Proceedings of the 22nd International Conference

MEKON 2020

February 6th, 2020

Ostrava, Czech Republic


The conference is organised by:

VSB – Technical University of Ostrava,

Faculty of Economics

Proceedings of the 22nd International Conference MEKON 2020

Publisher: VSB – Technical University of Ostrava

Sokolská třída 33, 702 00 Ostrava 1, Czech Republic

Editors: Jiří Branžovský, Jakub Pavelek

Cover: Alžběta Gregorová

ISBN 978-80-248-4410-7

Copyright © 2020 by VSB – Technical University of Ostrava

Copyright © 2020 by authors of the papers

The publication has been supported by the Karel Englis Endowment Fund. The publication has not undergone a language check. All papers passed a review process.


SCIENTIFIC COMMITTEE

doc. Ing. Vojtěch Spáčil, CSc.

Dean of the Faculty of Economics, VSB-TU Ostrava

doc. Ing. Lenka Kauerová, CSc.

Vice-dean for study affairs, Faculty of Economics, VSB-TU Ostrava

prof. Ing. Jana Hančlová, CSc.

Vice-dean for science, research and doctoral studies, Faculty of Economics, VSB-TU Ostrava

Ing. Karel Hlaváček, Ph.D.

Vice-dean for foreign affairs, Faculty of Economics, VSB-TU Ostrava

Ing. Aleš Lokaj, Ph.D.

Vice-dean for development, Faculty of Economics, VSB-TU Ostrava

doc. Ing. Lenka Fojtíková, Ph.D.

Department of European Integration, Faculty of Economics, VSB-TU Ostrava

doc. Ing. Petra Horváthová, Ph.D.

Department of Management, Faculty of Economics, VSB-TU Ostrava

doc. Ing. Igor Ivan, Ph.D.

Vice-rector for commercialization and cooperation with industry, VSB-TU Ostrava

Ing. Kateřina Kashi, Ph.D.

Chairman of the Academic Senate, Faculty of Economics, VSB-TU Ostrava

doc. Ing. Aleš Kresta, Ph.D.

Department of Finance, Faculty of Economics, VSB-TU Ostrava

prof. Ing. Martin Macháček, Ph.D. et Ph.D.

Department of Economics, Faculty of Economics, VSB-TU Ostrava

prof. Ing. Darja Noskievičová, CSc.

Department of Quality Management,

Faculty of Materials Science and Technology, VSB-TU Ostrava

prof. JUDr. Naděžda Rozehnalová, CSc.

Department of Law, Faculty of Economics, VSB-TU Ostrava

prof. Ing. Jan Sucháček, Ph.D.

Department of Regional and Environmental Economics, Faculty of Economics, VSB-TU Ostrava

prof. RNDr. Dana Šalounová, Ph.D.

Department of Mathematical Methods in Economics, Faculty of Economics, VSB-TU Ostrava

prof. Ing. Tomáš Tichý, Ph.D.

Department of Finance, Faculty of Economics, VSB-TU Ostrava


CONFERENCE GUARANTEE

prof. Ing. Jana Hančlová, CSc.

Vice-dean for science, research and doctoral studies, Faculty of Economics, VSB-TU Ostrava

CONFERENCE ORGANISING GUARANTEE

Ing. Jiří Branžovský

Department of Finance, Faculty of Economics, VSB-TU Ostrava

ORGANISING COMMITTEE

Ing. Jiří Branžovský

Department of Finance, Faculty of Economics, VSB-TU Ostrava

Ing. Jakub Pavelek

Department of Economics, Faculty of Economics, VSB-TU Ostrava

Suggested citation:

Author, A. 2020. Title of the paper. In Branžovský, J. and J. Pavelek (eds.). Proceedings of the 22nd

International Conference MEKON 2020. Ostrava: VSB – Technical University of Ostrava, pp. xxx-xxx.

ISBN 978-80-248-4410-7


Contents

VENIAMIN BOLDYREV et al. – INFORMATION-COMPUTING SYSTEM FOR DESIGNING AND CONSTRUCTION OF INDUSTRIAL PAINTING LINES ............ 1

JIŘÍ BRANŽOVSKÝ – THE STOCK MARKETS BEHAVIOR NEAR THE OFFICIAL FEDERAL RESERVE SYSTEM'S MEETINGS ............ 10

VLADIMÍR BULKO – APPLICATION OF MEAN-REVERSION BINOMIAL LATTICE APPROACH TO VALUATION OF MORTGAGE IMPLICIT OPTION IN THE CZECH MARKET ............ 21

IVANA ČERMÁKOVÁ – USING GEOINFORMATION IN PUBLIC ADMINISTRATION, CASE STUDY: MORAVSKOSLEZKÝ REGION ............ 32

ALINA CZAPLA – INTER-ORGANIZATIONAL KNOWLEDGE SHARING AND GAME THEORY ............ 38

KATARZYNA CZERNÁ – COMPARISON OF EVALUATION OF INNOVATIVE ACTIVITIES IN INNOVATIVE COMPANIES WITHIN THE V4 COUNTRIES ............ 47

PETRA DOLEŽELOVÁ – IMPACT OF UNILATERAL PREFERENTIAL MEASURES OF THE EUROPEAN UNION, THE UNITED STATES AND CHINA ON EXPORTS OF THE LEAST DEVELOPED COUNTRIES ............ 56

MERI DUDUCI – IMPLEMENTATION OF INDUSTRY 4.0: A RESEARCH BASED ON THE EFFECTIVE TRAINING OF HRM ............ 66

IZABELA ERTINGEROVÁ – EVALUATION OF THE EFFICIENCY OF THE SYSTEM OF SELECTED RESIDENTIAL SOCIAL SERVICES FOR SENIORS IN THE CZECH REPUBLIC ............ 75

LUN GAO – ANALYSIS OF THE SPILLOVER EFFECT OF STOCK MARKET RISK: BASED ON EVT-COPULA-CVAR MODEL ............ 85

DANIELA KHARROUBI – THE IDENTIFICATION OF FACTORS INFLUENCING HUMAN RESOURCES MANAGEMENT AND THE EVALUATION OF THEIR INTENSITY: A CASE STUDY ON HUMAN RESOURCES MANAGEMENT (HRM) ............ 94

NATÁLIE KONEČNÁ – EVALUATION EFFICIENT PRICE OF COMPENSATION OF SELECTED PUBLIC TRANSPORT IN OLOMOUC REGION AND MORAVIAN – SILESIAN REGION ............ 105

FRANTIŠEK KONEČNÝ – EVALUATION OF CSR DISCLOSURE OF THE BIGGEST COMPANIES IN CZECH REPUBLIC WITH MCDM METHODS ............ 116

FILIP LESSL – MEASURING THE FINANCIAL PERFORMANCE OF A COMPANY BASED ON SELECTED APPROACH ............ 123

ONDŘEJ MIKULEC – IDENTIFYING FACTORS OF EMPLOYEE TURNOVER WITH MULTIPLE CORRESPONDENCE ANALYSIS ............ 133


DAVID NEDĚLA – DATA ANALYSIS AND TESTING WITH RESPECT OF PORTFOLIO SELECTION PROBLEM ............ 141

MICHAELA PETROVÁ – INSURABLE AND UNINSURABLE RISKS AND THEIR CLASSIFICATION FROM THE PERSPECTIVE OF A CZECH EXPORTER ............ 154

LEE SABRINA – GOVERNANCE STRUCTURES OF MUNICIPAL ENTERPRISES – EMPIRICAL STUDY OF EFFICIENCY OF HOSPITALS ............ 162

ADÉLA ŠPAČKOVÁ – GENERALIZED LINEAR MODELS IN A MOTOR HULL INSURANCE PORTFOLIO ............ 175

ADRIÁN ŠPERKA et al. – OPTIMALIZATION OF DIRECT COSTS OF THE RAILWAYS OF THE SLOVAK REPUBLIC ............ 184

TRAN VAN HAI TRIEU – DIGITAL TRANSFORMATION AND BUSINESS PROCESS MANAGEMENT IN CREATIVE INDUSTRIES: THE CASE OF FILM PRODUCTION PROCESS ............ 195

RIJAD TRUMIC – AVOIDANCE OF COST INCREASES DURING CHANGE MANAGEMENT ............ 206

SUSANN WIECZOREK – BUSINESS STUDIES IN TIMES OF CHANGE (INDUSTRY 4.0) ............ 215

XIAOJUAN WU – RESEARCH ON THE IMPACT OF CHARACTERISTICS OF THE BOARD OF DIRECTORS OF CHINESE APPLIANCE LISTED COMPANIES ON CORPORATE SOCIAL RESPONSIBILITY ............ 223

MARTINA ŽWAKOVÁ – MULTI-CRITERIA DECISION MAKING USING THE ENTROPHY METHOD APPLIED ON SELECTED VARIABLES FROM THE AREA OF DIGITALIZATION AND DEVELOPMENT IN THE CENTRAL EUROPE TERRITORY ............ 232


INFORMATION-COMPUTING SYSTEM FOR DESIGNING AND CONSTRUCTION OF INDUSTRIAL PAINTING LINES

Boris Bogomolov1, Veniamin Boldyrev2, Valeria Elistratkina1, Vladimir Menshikov1, Yana Seina2, Andrei Zubarev1

1Department of Innovative Materials and Corrosion Protection, D. Mendeleev University of Chemical Technology of Russia, Miusskaya sqr. 9, Moscow 125047, Russian Federation

e-mail: vm_uti@muctr.ru

2Department of Chemistry, Bauman Moscow State Technical University, 2nd Baumanskaya str. 5/1, Moscow 105005, Russian Federation

e-mail: [email protected]

Abstract

This article describes an example of creating an information and computing system for the design and construction of industrial painting lines. A methodology of infological modelling of databases based on queries is applied. The main technological units of the system were calculated, and each of the three subsystems can also be calculated autonomously. All source information and calculation results are stored in a result file recorded on electronic media. The array of initial data for a specific drying chamber is also saved as a file, so the initial information does not have to be re-entered when the chamber is recalculated. The information-computing system for the design and construction of industrial painting lines was tested on several variants of painting lines.

Keywords

painting line, computer-aided design, infological modelling, information support, information-computing system

JEL Classification

L86; L74; O32; O21

Basics of applying information-computing systems

The computer system for the design and construction of industrial paint lines includes an information subsystem and three interconnected software blocks:

• an expert-system unit for designing the technological unit for surface preparation,

• a unit for calculating the chamber for applying powder paints,

• a unit for calculating the chamber for radiation and convective drying of painted surfaces.

The information and computing system makes it possible to:

− provide an integrated approach to the procedure for the automated design of paint lines;

− reduce design time by automating standard calculations and by rapid exchange of information between program blocks;

− improve the quality of design by eliminating technical and design errors;

− search for a rational design solution, both through the application of optimization procedures and through the possibility of quickly analysing several alternative technological solutions.

Figure 1 shows the functional information structure of an information-computing system.

The information subsystem is used to store information and exchange the data necessary for solving design and engineering problems, and includes: information about the processed product; a working database containing all the information necessary for the operation of the software package; information on the results of the calculation of individual technological units of industrial painting lines, transmitted to the


working database and output files of the system; reference and regulatory data necessary for design and

engineering calculations.

Figure 1. Functional information structure of the information and computing system for the design and construction of industrial painting lines

Modern software applications in the chemical industries are complex information and computing

systems consisting of autonomous interconnected software blocks. In this case, special attention should

be paid to their information support, which includes three main groups of information arrays:

- source data files;

- files for storing and exchanging data within the software package;

- files of the results of work.

All these information arrays require procedures for their creation, filling, organization and interconnection, which in turn require the development of special software blocks, some included in the software package and some working autonomously. The composition and structure of the information support are determined by the features of the applied problem and should be designed by analogy with databases.

Expert system for painting lines

The technology of expert systems is one of the areas of a comparatively new field of research known as Artificial Intelligence (AI). Research in this area is focused on the development and implementation of computer programs that can emulate (imitate, reproduce) those areas of human activity that require thinking, a certain skill and accumulated experience. Decision-making tasks in the design and construction of industrial painting lines belong to this class.

In the course of evaluating the designed paint line, the scope of available options is determined. This provides a reliable basis for the preparation and implementation of long-term activities aimed at a quick return on investment.

Expert assessment in the design of industrial painting lines allows us to solve a number of urgent

problems and anticipate their appearance, for example:

• efficient use of energy and materials;

• compliance of the product with the specified standards and customer requirements;

• adaptation of production capacities;


• optimal compliance with legal requirements.

In addition to all of the above, reducing costs remains a principal goal of production management.

The expert system for designing the technological unit for surface preparation is formed on the basis

of the algorithmic and informational support of the design calculation of the technological unit for

surface preparation in paint and varnish production.

The expert system includes the following main steps:

• input of initial data using intelligent interface procedures;

• search for suitable technological schemes for surface preparation;

• the formation of the suspension from parts subjected to processing depending on the material and

overall dimensions;

• carrying out design calculations of processes included in the scheme;

• selection of rational options for the scheme and preparation of reporting documentation.

The programs are implemented on the principles of object-oriented programming and are accompanied by a friendly intelligent interface, which ensures the versatility and effectiveness of the software. Some features of the implementation of its main stages are considered below.

The initial information of the expert system contains the characteristics of the processed parts,

including data on the material of the products, their dimensions and the nature of surface contamination,

and information on the available production facilities. For the expert system to work correctly, the

analyzed text information is entered using the menu constructed in accordance with the lists of typical

attribute values obtained by logical analysis of the domain information. All data is stored in a sequential

file of the project database with a unique name specified by the user for the designed technological unit.

When the project is called up again, all the information written to the file is read out and can be easily

adjusted.

At the next stage of the system’s work, the specified characteristics of the parts are compared with

the information in the database of typical technological schemes for surface preparation formed on the

basis of the standard. At the user's request, reference information on the composition and characteristics

of the technological stages is provided for each circuit. The user selects the schemes proposed by the

expert system or sets the number of any scheme recorded in the database.

For the selected technological scheme, the expert system performs the following sequential operations: choice of the suspension configuration; calculation of the dimensions of the sections of the preparation unit; technological calculation of the sections. If the quality of the technological decision being made is unsatisfactory at any stage of the algorithm, it is possible to return to the previous stage of calculation, correct the initial information and repeat the design procedure.

Application information algorithm

For the design of information support for applied problems, the methodology of infological modeling

of databases based on queries was applied [1-3]. This technique was applied in the development of an

information and computer complex for the design and construction of industrial painting lines. The

following main stages of infological modeling of information support of the applied problem are

determined.

1. Determining the structure of a system-wide file, including an array of source data for solving the

problem. The file includes the technical task of the project, the characteristics of substances and

materials, the parameters of standard equipment, etc. The file structure is clearly defined, since the

information contained in it is necessary for all program blocks of the complex and is determined by the

format of the data read by the programs. The system-wide file is built on the basis of the "data mart" methodology [1-4] and is filled only with the information necessary to solve the applied problem.

2. Development of a procedure for filling data marts and information sources. These sources include:

- databases of normative indicators, standards and reference data;

- files - the results of other software systems;


- data files filled in before the software starts using specialized dialog procedures (for example, entering

information about the technical task of the project). When filling a data mart, additional operations are required for syntactic and semantic verification of the data, analysis of dimensions and parameter values, and data validation. These operations are developed on the basis of an analysis of the features of the applied problem and, within the domain, can be transferred from one software package to another.

3. Determining the composition of the source data files and the results of individual software modules

of the complex. The main goal of this data group is:

- preparation of information for repeated re-entry of parameters and their correction;

- preparation of these results for transmission to the document generation program;

- creating a file for the exchange of information between the programs of the complex. The peculiarity

of this block is that all information is created inside the corresponding program modules, but the data

generation procedure itself is determined by the features of the applied problem and is designed at the

stage of development of the infological model.

4. Formation of a library of standard documents, taking into account the metadata of the subject area, of

the results of the work of the program package.

The design of these blocks is based on the infological model for generating queries in the database and

the procedure for filling out standard documents (forms and database reports).

The use of infological modeling when creating information support for applied tasks allows you to

create file libraries, modules of typical information processing procedures, and document templates.

The first step in the design of a painting line is the formation of a surface preparation scheme for the

part before painting.

The scheme of the technological unit is chosen from the list of schemes presented in the standard, in accordance with the characteristics of the machined parts [3-6]. At the user's request, reference information on the composition and characteristics of the technological stages is provided for each scheme. The user selects from the schemes proposed by the expert system or specifies the number of any scheme recorded in the database.

For the selected technological scheme in the information-computer system, the following sequential

operations are performed:

- the choice of configuration and suspension;

- calculation of the dimensions of the sections of the preparation unit;

- technological calculation of sections.

If the quality of the technological decision being made is unsatisfactory at any stage of the algorithm, it is possible to return to the previous stage of calculation, correct the initial information and repeat the design procedure.

One of the most time-consuming and routine procedures in designing a surface preparation scheme is determining the suspension configuration, i.e. the set of products that are processed simultaneously. The limiting dimensions of parts of one material are determined automatically by the length, width and height of the suspension, which ensures that any of the processed parts can be placed within its volume.

The procedure for forming a set of suspensions is performed separately for each part material. First of all, the total volume of each product is calculated from the dimensions of the part and the number of parts. This is done so that, when a suspension is assembled, identical parts end up within the same kit. If the total volume of a part type is greater than the volume of the suspension, the total number of parts of that type is divided into several suspensions of the same composition.
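As an illustration only, the following Python sketch shows one way the suspension-forming step described above could be organised; the function and field names (part volume, count, suspension volume) are hypothetical and not taken from the described system.

```python
import math
from dataclasses import dataclass

@dataclass
class PartType:
    name: str
    volume: float   # volume of a single part, m^3
    count: int      # number of identical parts to process

def form_suspensions(parts: list[PartType], suspension_volume: float) -> list[list[tuple[str, int]]]:
    """Group identical parts into suspensions (kits) of equal composition.

    Identical parts stay within the same kit; if their total volume exceeds
    the suspension volume, they are split across several identical kits.
    """
    suspensions = []
    for part in parts:
        total_volume = part.volume * part.count
        # number of identical suspensions needed for this part type
        n_kits = max(1, math.ceil(total_volume / suspension_volume))
        per_kit = math.ceil(part.count / n_kits)
        remaining = part.count
        for _ in range(n_kits):
            in_kit = min(per_kit, remaining)
            suspensions.append([(part.name, in_kit)])
            remaining -= in_kit
    return suspensions

# Example: bracket parts whose total volume exceeds one suspension
print(form_suspensions([PartType("bracket", 0.004, 600)], suspension_volume=1.0))
```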

At the next stage, an approximate calculation of the dimensions of the scheme sections is performed. The same dimensions are accepted for all sections, and the width and height of a section are calculated on the basis of a typical configuration scheme [4-7]. The section length is calculated from the given conveyor speed and product processing time, taking into account an additional drain zone whose length is 1.5-2 times the length of the workpiece.
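A minimal sketch of the section-length estimate implied by the paragraph above, assuming the length is simply conveyor travel during processing plus the drain zone of 1.5-2 workpiece lengths; the variable names and units are illustrative.

```python
def section_length(conveyor_speed_m_min: float,
                   processing_time_min: float,
                   workpiece_length_m: float,
                   drain_factor: float = 1.5) -> float:
    """Approximate section length: conveyor travel during processing plus drain zone."""
    travel = conveyor_speed_m_min * processing_time_min
    drain_zone = drain_factor * workpiece_length_m   # 1.5-2 x workpiece length
    return travel + drain_zone

# Example: 2 m/min conveyor, 4 min processing time, 1.2 m workpiece
print(section_length(2.0, 4.0, 1.2))   # 9.8 m
```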

The technological calculation of the surface preparation unit is performed sequentially for all sections

included in the scheme. The main results of the calculation are the exact dimensions of the section, the


technological conditions for processing parts, and the characteristics of the pumping line of the flood

circuit. The computer system has several automatic sketching procedures that explain the design process and, as a rule, display a structural solution. Figure 2, for example, shows an automatically generated sketch that allows the user to evaluate the obtained section configuration and, if necessary, change it by correcting the data in the control form of the program block.

Figure 2. Sketch of the configuration of the section in accordance with the results of the

calculation of dimensions

The calculation results are presented in the form of a text file, which for a comprehensive description

of the circuit includes both the source information and the results of the work.

The parts are then transferred to the powder coating chamber.

The procedures of the program unit are designed to solve the following main design problems [6-10]:

• calculation of the aerodynamic operating conditions of the chamber, including the speed and flow

rate of air flows in the chamber, hydraulic resistance of all technological elements of the chamber,

with the given dimensions and design of the chamber;

• calculation of pressure drops in the ducts of the process unit, flow velocities and flow rates, and pipe fitting resistances, for a known duct configuration and technologically specified air flow conditions;

• calculation of the characteristics of the cyclone and filter of the recovery unit, including the

determination of the speeds and hydraulic resistances of the devices;

• the calculation of their overall dimensions and characteristics of the equipment elements for a given

type of cyclone and filter;

• calculation of the operational characteristics of the fan and the selection of a suitable fan from

models manufactured by the industry;

• analysis of the mutual arrangement of the elements of the technological unit, performed according

to the sketch of the placement of circuit elements, in order to clarify the elevations of the installation

of filters and cyclones and assess the correctness of the given length of the duct sections.

The calculation unit of the technological unit of the chamber for applying powder paints, like the other program modules of the system, is implemented using the principles of object-oriented programming. Each element of the program complex is an independent program or information module, which allows the complex to be adapted to various technological problems and extended by introducing new program objects. The software package is built in the form of software modules loaded from the "calculation control unit". Information is exchanged: using an array of global variables; via the working database of the program complex; using sequential files of source data and calculation results; and from databases of typical equipment (fans, cyclones, filters).

The developed software unit provides a fairly complete engineering calculation of the technological unit of the chamber for applying powder paints in an intelligent dialogue with the user. The information can be corrected and the scheme recalculated repeatedly.

The calculation results of the chamber are recorded in the result files and partially sent to the working database of the information-computing system for the design and construction of industrial painting lines.


The next step in the design of the painting line is the calculation of convective-radiation drying

processes, providing: - design calculation of convective-radiation drying; - search for a rational design

solution; - obtaining a design calculation protocol containing all the stages and results of solving the

design problem [7-9].

Figure 3. Sketch of the placement of the elements of the technological unit of the powder coating

chamber

Figure 4 shows a block diagram of the algorithm for calculating a radiation-convective drying chamber, on the basis of which the complex of programs was developed.

At the first stage of the algorithm, the initial information for calculating the chamber is entered. It is advisable to divide this information into groups: chamber characteristics; characteristics of the part; characteristics of the emitters and air flow; parameters of the drying process. This separation makes it possible to vary the parameters in order to determine rational design decisions. For example, for an existing process, the number of emitters can be selected for the required temperature of the paint layer and the kinetics of heating the part can be studied.

To enter the textual information used for routing the calculations, a menu system is used that excludes spelling and semantic errors and, as a result, errors in the calculation procedures of the program complex. To enter regulatory information (information from directories), relational database tables are used. The drying parameters are then additionally calculated in accordance with the initial data.

To check the correctness of the entered source information, a chamber sketch is used, generated automatically in accordance with the entered numerical values. In this way, a possible mismatch between the specified dimensions of the chamber and the dimensions of the part, or a clearly unsuitable number of emitters for the process, can be seen immediately. In that case, the necessary changes are made to the source data array even before the calculation. Then the calculation of the drying process is performed in order to determine the dynamics of the temperature changes of the air, the part and the paint layer, as well as the degree of curing of the paint or enamel layer. The algorithm provides a constant sequence of steps: calculation of the emitters (this stage is skipped automatically when only convective drying is used); calculation of the characteristics of the air flow of the chamber and the main heat transfer coefficients; calculation of the dynamics of temperature changes in the chamber for the air, the part and the paint layer; and calculation of the thermal balance of the chamber, against which the correctness of the calculations is verified.

After completing one stage of the program block, the next step of the algorithm becomes available. As in the other blocks of the information-computer complex, the design results are recorded in the output files and in the working database, the information from which is needed for repeated design calculations and for preparing the set of technical documentation.

After the calculation is completed, the user returns to the input window. At this point it is possible either to terminate the program or to correct the initial data with subsequent repetition of the


calculation. The number of such repetitions is not limited. All initial information and calculation results are stored in a result file recorded on electronic media. The array of initial data for a specific drying chamber is also saved as a file, so the initial information does not have to be re-entered when the chamber is recalculated.

Figure 4. Functional and informational structure of a program unit for designing a radiation-

convective drying chamber

Results

Testing of the information-computing system for the design and construction of industrial painting lines was carried out on several variants of painting lines while managing painting production projects at NPO «Lakokraspokrytie», Khot`kovo, Moscow region. The system is used by the scientific and design-technological department, the design bureau, the engineering plant and the commercial unit that form part of the organization. The system was applied to solve practical problems of developing the basic business


processes of the life cycle of complex chemical process systems of painting industries (planning production and choosing a rational technology; managing the design business process).

The developed information-computer system was used in practice when fulfilling the agreements of NPO «Lakokraspokrytie» on the development of compositions and technology for applying nanomodified environmentally friendly hydrophobic paints and coatings, as well as the development of compositions and technology for applying universal anti-corrosion paintwork to protect large-sized metal structures and equipment.

The system was also used in the development of a universal automated complex for painting containers with radioactive waste, using an automatic remote control unit and elements of robotics. This unique complex was installed and launched at the Voronezh NPP.

The system was used by NPO «Lakokraspokrytie», together with «Tagiltransmashproekt» and the Czech company «GALATEK a.s.», for the development of a detailed design for a high-tech painting line for freight cars of «Uralvagonzavod», which operates in Nizhny Tagil. The new painting line developed using the system has made it possible to paint up to 16 thousand freight cars of various modifications per year. In addition, in this project an optimal ventilation system for the spray booths was developed, together with a cleaning unit for the air contaminated by the solvent vapour removed from the spray dryer. The designed high-tech painting line is equipped with a gas purification unit operating on the principle of reversible capture of organic substances in rotary adsorbers with subsequent desorption by hot air and thermal afterburning. The developed installation reduces the concentration of pollutants to below the maximum permissible values. The developed information-computer system for painting industries has been applied to increase the effectiveness of NPO «Lakokraspokrytie», whose income increased fivefold between 2010 and 2019.

References

[1] Bogomolov B.B., Bykov E.D., Men'shikov V.V., Zubarev A.M. (2017). Organizational and technological modeling of chemical process systems. Theoretical Foundations of Chemical Engineering, 51 (2), pp. 238–246.

[2] Bogomolov B.B., Boldyrev V.S., Zubarev A.M., Meshalkin V.P., Men'shikov V.V. (2019). Intelligent logical information algorithm for choosing energy- and resource-efficient chemical technologies. Theoretical Foundations of Chemical Engineering, 53 (5), pp. 709–718.

[3] Date C.J. (1999). An Introduction to Database Systems. Reading, Mass.: Addison-Wesley.

[4] Bogomolov B.B., Men'shikov V.V., Bogoslovskii K.G., Bykov E.D., Shumova V.S. (2012). Managing the design and operation of paint lines using business modelling. The Technology of Paint and Varnish Coatings: A Collection of Scientific Works, Transactions of the Lakokraspokrytie Research and Production Association, Moscow: Paint-Media, pp. 40.

[5] Bogomolov B.B., Men'shikov V.V., Bykov E.D., Bogoslovskii K.G. (2013). Modelling of chemical process systems using the organizational and technological models of business processes. The Technology of Paint and Varnish Coatings: A Collection of Scientific Works, Transactions of the Lakokraspokrytie Research and Production Association, Moscow: Paint-Media, pp. 4.

[6] Averina Yu.M., Kalyakina G.E., Menshikov V.V., et al. (2019). Neutralisation process design for electroplating industry wastewater containing chromium and cyanides. Herald of the Bauman Moscow State Technical University, Series Natural Sciences, 3, pp. 70–80.

[7] Omelchenko I.N., Lyakhovich D.G., Dobryakova K.V. (2019). The method of forming an innovative project portfolio in a project-oriented organization. Herald of the Bauman Moscow State Technical University, Series Mechanical Engineering, 1, pp. 84–89.

[8] Omelchenko I.N., Lyakhovich D.G., Dobryakova K.V. (2019). Algorithm for innovative development management of a project-oriented organization. Herald of the Bauman Moscow State Technical University, Series Instrument Engineering, 1, pp. 129–134.


[9] Korobets B.N. (2016). Models for technology programs within an intellectual property management system. Herald of the Bauman Moscow State Technical University, Series Natural Sciences, 6, pp. 135–142.

[10] Bessarabov A.M., Kvasyuk A.V., Zaremba G.A., Kulov N.N. (2016). System studies of innovation development in the business sector of chemical science. Theoretical Foundations of Chemical Engineering, 50 (6), pp. 1001–1014.


THE STOCK MARKETS BEHAVIOR NEAR THE OFFICIAL FEDERAL RESERVE

SYSTEM'S MEETINGS

Jiří Branžovský1

1Department of Finance, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

The objective of this paper is to analyse, through an event study, the US stock markets around the official meetings of the Federal Reserve System (Fed) at which changes in short-term interest rates are announced by the Federal Open Market Committee (FOMC). The main focus is on the performance of the stock index around the FOMC meetings: the returns, volatility and trading volumes ahead of the monetary policy meetings (ex ante, in the estimation window), the stock returns in the event window, and later (ex post) in the post-event window. The quantitative research is carried out on nearly 50 years of daily data in order to identify changes in behaviour over time. The time series are observed against the background of open-market and open-mouth operations during the transparent and non-transparent periods of the Fed, over different business cycles and particular periods of the Fed's monetary goals. The hypothesis is that the results differ when the Fed lowered or increased its federal funds rate, or made no change. The paper deals with rational expectations and the informational efficiency of the US equity markets, tracked by the S&P 500 price index, and US monetary policy, and with how the two affect each other.

Keywords

Federal Reserve System, interest rates, stock returns

JEL Classification

C01, C32, E44, G10

Introduction

The Federal Reserve System has been the monetary authority of the USA since its establishment in 1913, conducting the national monetary policy so as to promote maximum employment, stable prices and moderate long-term interest rates.

The Fed was non-transparent until February 1994, when all monetary policy (MP) was conducted through open-market operations of which the markets were not aware. Since 1994 the Fed has been transparent and credible, communicating clearly through "open-mouth operations" (e.g. official public announcements, forward guidance, ...) that support the decision-making of consumers and corporations and reduce economic uncertainty, while in parallel increasing the effectiveness of the monetary policy itself. Official and public fed funds rate targeting has been in place since August 1997.

It is believed that in the long run financial markets are influenced by GDP and unemployment (as noted by Taylor, 1995), while short-term volatility is driven by changes in interest rates, yields to maturity, trading volumes or the market risk premium.

Literature Review

Central banks should be fully transparent; several research papers show that transparency lowers stock market and foreign exchange volatility.

The Fed is considered to be the global monetary policy authority, too big to fail, influencing the whole globe.

Tessaromatis (1991) confirmed a negative relationship between money supply shocks and stock returns in the 1980s on the official announcement days as well as on the following day.


Rudebusch (1995) studied the non-transparent period of the Fed and identified a one-to-two-day response of the EFFR to the target fed funds rate.

Greenspan (1997) announced that the pain of the over-inflated economy of the 1970s was relieved by moving the fed funds rate more often, a practice generally followed since 1982.

Robertson & Thornton (1997) provided empirical evidence that fed funds target rates are more difficult to predict than the effective inter-bank EFFR.

Bernanke & Mihov (1998) considered fed fund rates to be the most important monetary tool of Fed.

Kuttner (2001) analysed monetary shocks as inter-day changes in 3M EURIBOR futures one day ahead

and following FOMC date.

Selling (2001) considers financial markets efficient hence only monetary shocks can influence stock

returns.

Thornton (2004) was comparing fed fund target rates with market inter-bank FFR and identified their

closer relationship during transparent period of the Fed.

Bernanke & Kuttner (2005) found that stock markets had generally been affected by unanticipated MP (monetary shocks). They studied two regressors, the fed funds rate and its 30-day futures; the latter represented an alternative for market participants' expectations.

Ross (2012) observed that approximately 80 % of the monetary “shock” and stock return is generated

one day ahead of FOMC announcement days.

Kontonikas & MacDonald & Saggu (2013) identified stronger stock market reactions to MP during crisis periods and when economic conditions worsened. Moreover, there was a surprising positive correlation between policy rates and stock returns during the initial phase of the Great Recession. Similar results were obtained by Sirucek (2011), focusing on money supply only.

Unalmis (2015) evidenced a hike in stock market volatility on the FOMC dates specifically.

Haitsma & Unalmis & Haan (2016), in their event study, found that the ultra-loose expansionary MP during the Great Recession led, counterintuitively, to declines in European and British stocks.

Methodology and Data

This research analyses not only how particular interest rate policies of the Fed affect stock index returns, but also distinguishes between specific periods, e.g. how transparent the Fed's MP was to the public, or what the impact of its MP on stocks is with regard to economic conditions, measured by business cycles announced ex post by the National Bureau of Economic Research (hereafter "NBER").

The proposed transmission channel from interest rates to stock prices and their returns is as follows: lower rates and expansionary forward guidance result in greater corporate profits due to cheaper credit and looser credit activity of the banks, leading to greater investment capacity and household credit consumption (interest rate and credit channels). Lower rates also mean a cheaper discount rate, increasing the present values of future cash flows (asset price channel).
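As a purely illustrative aside (not from the paper), the asset price channel can be seen in a simple perpetuity valuation:

PV = CF / r; with CF = 100, PV = 100 / 0.05 = 2,000 at r = 5 %, but PV = 100 / 0.04 = 2,500 at r = 4 %,

so a one-percentage-point rate cut raises the present value of the same cash-flow stream by 25 %.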

The author has selected an empirical approach via an event study (MacKinlay, 1997), defining the period before the FOMC as the estimation window, the official announcement day as the event window, and the following period as the post-event window.

The author recognises the importance of unanticipated monetary shocks that influence the US economy, and hence classifies each monetary measure according to whether it carries an expansionary or a restrictive impulse.

Granger causality (1969) is a statistical predictive concept for stochastic linear measurement of whether the lagged values of a stationary variable X do or do not improve the explanation of another stationary variable Y. It does not imply cause-and-effect causality. In fact, one first sees what proportion of Y is


explained by its own time lags and then whether adding lagged values of X improves the relationship. If Y is Granger-caused by X, there is information in variable X that helps predict Y.

x_t = c_1 + Σ_{i=1}^{p} A_i · x_{t−i} + Σ_{i=1}^{p} B_i · y_{t−i} + e_{1t}        (1)

y_t = c_2 + Σ_{i=1}^{p} A_i · x_{t−i} + Σ_{i=1}^{p} B_i · y_{t−i} + e_{2t}
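A minimal sketch (not from the paper) of a pairwise Granger-causality test of this bivariate form, using the grangercausalitytests function from statsmodels; the two series here are synthetic placeholders standing in for, e.g., DL_SPX and one of the monetary variables.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Placeholder stationary series (in the paper: DL_SPX and the monetary variables)
rng = np.random.default_rng(0)
x = rng.normal(size=1000)                          # e.g. D_EFFR
y = 0.3 * np.roll(x, 1) + rng.normal(size=1000)    # e.g. DL_SPX, partly driven by lagged x

# Null hypothesis of each test: "x does not Granger-cause y"
data = pd.DataFrame({"y": y, "x": x})
results = grangercausalitytests(data[["y", "x"]], maxlag=2)
```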

Two methods of monetary shock decomposition were selected; both take into consideration changes in the target federal funds rate ("FFR") and in the effective/market FFR ("EFFR").

MP shock_{1,t} = (FFR_t − FFR_{t−1}) − (EFFR_t − FFR_t)        (2)

MP shock_{2,t} = (FFR_t − FFR_{t−1}) − (EFFR_t − EFFR_{t−1})        (3)

The first monetary shock identification is based on the logic of what proportion of the target FFR change was predicted one day ahead (the difference between the target and the effective FFR). The second shock is measured as the difference between the day-over-day changes in the FFR and the changes in the EFFR.

If the MP shock is negative, it is considered expansionary, as market participants lowered their interest rate expectations significantly, more than the Fed actually did, supporting their economic decisions to spend and invest more. Conversely, once positive, the shock has restrictive power, slowing the economy down.
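The two shock definitions in equations (2) and (3) can be computed directly from the daily series; the following sketch assumes pandas Series of the target and effective rates indexed by date, with illustrative column names.

```python
import pandas as pd

def mp_shocks(ffr: pd.Series, effr: pd.Series) -> pd.DataFrame:
    """Monetary policy shocks per equations (2) and (3).

    ffr  : target federal funds rate (FFR), daily
    effr : effective federal funds rate (EFFR), daily
    """
    shock1 = (ffr - ffr.shift(1)) - (effr - ffr)              # eq. (2)
    shock2 = (ffr - ffr.shift(1)) - (effr - effr.shift(1))    # eq. (3)
    return pd.DataFrame({"MP_SHOCK_I": shock1, "MP_SHOCK_II": shock2})

# Example with a tiny made-up sample
idx = pd.date_range("2020-01-01", periods=4, freq="B")
ffr = pd.Series([1.75, 1.75, 1.50, 1.50], index=idx)
effr = pd.Series([1.74, 1.76, 1.55, 1.51], index=idx)
print(mp_shocks(ffr, effr))
```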

Besides the traditional standard variance, a semi-variance model was used as well, since upside deviations are not always considered true risks.

s_t = √( (1/p) Σ_{i=1}^{p} [min(R_i − E(R_i), 0)]² )        (4)
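A minimal sketch of the downside semi-deviation in equation (4), assuming R is a sequence of returns and the sample mean is used for E(R); not the paper's actual implementation.

```python
import numpy as np

def semi_deviation(returns) -> float:
    """Downside semi-deviation per equation (4): only returns below the mean contribute."""
    r = np.asarray(returns, dtype=float)
    downside = np.minimum(r - r.mean(), 0.0)
    return float(np.sqrt(np.mean(downside ** 2)))

print(semi_deviation([0.01, -0.02, 0.005, -0.015, 0.02]))
```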

Data

The observation covers the long period of 12,203 working days from 4 January 1971 to 17 May 2019, of which 7,251 observations were in the sample near the FOMC official announcement dates. The data were downloaded from the Fed's official website, the FRED database and the Bloomberg financial terminal.

All financial series X were found to be trending, i.e. non-stationary with a unit root; hence the first differences of natural logarithms (DL_X) were used to obtain stationarity, i.e. integration of order one, I(1). ADF and KPSS tests were performed to identify this issue at the 10% significance level.
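A minimal sketch of the log-differencing and the ADF and KPSS tests using statsmodels; the price series below is synthetic, standing in for a series such as the S&P 500 close, and is not the paper's data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, kpss

# Synthetic price level standing in for, e.g., the S&P 500 close
rng = np.random.default_rng(1)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 2000))))

dl_x = np.log(prices).diff().dropna()   # first difference of natural logs, DL_X

adf_stat, adf_p, *_ = adfuller(dl_x)                               # H0: unit root
kpss_stat, kpss_p, *_ = kpss(dl_x, regression="c", nlags="auto")   # H0: stationarity
print(f"ADF p-value: {adf_p:.4f}, KPSS p-value: {kpss_p:.4f}")
```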

The main endogenous variable is the widely known and observed US stock market S&P 500 price index return (DL_SPX), covering the 500 largest publicly traded corporations in the USA.

The regressors used include the effective fed funds rate D_EFFR, ten-year US Treasury yields D_TEN_YEAR_USTREASURIES, the lower-bound fed funds target rate D_FFR_LOWER, narrow money supply DL_M1, daily trading volumes of the S&P 500 index DL_VOLUME, a dummy variable based on the NBER's ex-post dating of business booms and crises DUMMY_NBER, and finally, in order to extract a greater proportion of the monetary residuum, monetary shocks impacting market participants' expectations of monetary policy had to be constructed.

The research was processed on the daily closing prices of the market-capitalisation-weighted, sector-wide S&P 500 index. The effective fed funds rate is a relevant indicator of market participants' expectations of future MP and can be compared to the fed funds target rate set by the FOMC, ordinarily eight times a year in the Fed's transparent period. The fed funds target rate directly influences the effective fed funds rate, which is the seasonally non-adjusted short-term nominal interest rate at which depository institutions trade federal funds (balances held at Fed banks) with each other overnight (FRED, 2020), and indirectly influences long-term rates.


Chart 1: Raw time series (1971-2019)

Source: own calculations [Excel]

Chart 1 includes three time series from 4 January 1971 to 17 May 2019. The effective fed funds rate and the lower-bound fed funds target rate are both expressed in per cent (left axis), while the S&P 500 stock price index is charted on the right axis in points.

We went through all FOMC announcement days (T0, labelled "10"), including the nine days ahead of them (T-9, labelled "1", to T-1, labelled "9") and the five following days (T+1, "11", to T+5, "15"). When two FOMC meetings occurred close to each other (so that some days would fall into both time ranges, ex post the previous FOMC and ex ante the following FOMC), the later meeting was preferred so that each official meeting had at least five days ahead observed (unless that would mean zero ex-post days; in that case one following day was kept). The direction of the fed funds rate change and the stock index information (returns, downside volatility, trading volumes) were observed. Unofficial changes of the fed funds rate were not planned ahead, and the days around these dates of rate changes were not studied. One of the observations is that the market rate, represented by the effective FFR, was at some particular times even lower than the target FFR set by the Fed. The fed funds target rate has been modified in multiples of 25 bp since 1989.
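A minimal sketch of how trading days could be labelled 1-15 relative to FOMC announcement dates (announcement day = 10), assuming lists of trading days and FOMC dates; later meetings simply overwrite earlier labels on overlaps, which only loosely mirrors the rule described above.

```python
import pandas as pd

def label_event_windows(trading_days: pd.DatetimeIndex,
                        fomc_dates: pd.DatetimeIndex) -> pd.Series:
    """Label each trading day 1..15 relative to an FOMC date
    (9 days before = 1..9, announcement day = 10, 5 days after = 11..15)."""
    labels = pd.Series(index=trading_days, dtype="float")
    positions = {d: i for i, d in enumerate(trading_days)}
    for fomc in fomc_dates:
        if fomc not in positions:
            continue
        t0 = positions[fomc]
        for offset in range(-9, 6):
            i = t0 + offset
            if 0 <= i < len(trading_days):
                labels.iloc[i] = offset + 10   # later meetings overwrite earlier ones
    return labels

days = pd.bdate_range("2019-01-02", "2019-03-29")
fomc = pd.DatetimeIndex(["2019-01-30", "2019-03-20"])
print(label_event_windows(days, fomc).dropna().head())
```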

Chart 2: Observations around FOMC announcement days (mentioned as number 10)

Source: statistical software eViews

Chart 2 shows the number of observations of the dates around the FOMC official announcement dates ("10"), approximately two weeks ahead and one week afterwards. There were 530 observed FOMC dates, and their total number decreases on both sides as the chance of another (next or



previous) FOMC meeting falling within these ranges increases. There had been in total 55 out-of-sample changes in the FFR, all of them in the non-transparent period. The NBER evaluated the observed period of 12,202 days ex post as follows: 10,695 boom days (86.4 % of the time) and 1,507 "rainy" days (13.6 %).

From the boxplots (Chart 3) it is apparent that the largest volatility of returns is observed around expansionary FOMC meetings and the lowest, conversely, around restrictive meetings, skewed more to the upside than the downside.

Chart 3: Boxplot of S&P 500 daily returns under different MP (further observations excluded)

Notes: Each column represents the average mean return of the stock price index over the observed time period, based on the change in the target FFR. The inner period includes days around FOMC official meetings, while outer period changes are those outside.

Source: own calculations [Excel]

Results

ADF and KPSS tests were performed in order to address the issue of unit roots at the 10% significance level1. The stationary time series were generally not found to be cross-correlated, except for MP shock II with the effective FFR differences (91 %).

The Granger causality table reveals that the monetary variables do Granger-cause stock returns, but not inversely. Obviously, different interest rates influence each other on the money markets, as well as on the fixed-income markets.

Table 1: Granger causality among monetary time series

Null Hypothesis                                                 Obs     F-Statistic   Prob.
MP_SHOCK_I does not Granger Cause DL_SPX                        12200   1.51226       0.2205
DL_SPX does not Granger Cause MP_SHOCK_I                                1.42612       0.2403
MP_SHOCK_II does not Granger Cause DL_SPX                       12200   7.13903       0.0008***
DL_SPX does not Granger Cause MP_SHOCK_II                               2.02327       0.1323
DUMMY_NBER does not Granger Cause DL_SPX                        12200   3.37517       0.0342**
DL_SPX does not Granger Cause DUMMY_NBER                                0.25811       0.7725
D_EFFR does not Granger Cause DL_SPX                            12200   11.3472       1.E-05***
DL_SPX does not Granger Cause D_EFFR                                    1.55798       0.2106
D_TEN_YEAR_USTREASURIES does not Granger Cause DL_SPX           12200   16.6479       6.E-08***
DL_SPX does not Granger Cause D_TEN_YEAR_USTREASURIES                   1.27210       0.2803
D_FFR_LOWER does not Granger Cause DL_SPX                       12196   0.98226       0.3745
DL_SPX does not Granger Cause D_FFR_LOWER                               0.09294       0.9112
DL_M1 does not Granger Cause DL_SPX                             11189   3.00225       0.0497**
DL_SPX does not Granger Cause DL_M1                                     2.54529       0.0785
DL_VOLUME does not Granger Cause DL_SPX                         12196   0.62615       0.5347
DL_SPX does not Granger Cause DL_VOLUME                                 6.74925       0.0012***
D_TEN_YEAR_USTREASURIES does not Granger Cause D_EFFR           12200   3.52096       0.0296**
D_EFFR does not Granger Cause D_TEN_YEAR_USTREASURIES                   32.1849       1.E-14***
D_FFR_LOWER does not Granger Cause D_EFFR                       12196   12.6457       3.E-06***
D_EFFR does not Granger Cause D_FFR_LOWER                               4.69516       0.0092***

1 This paper works with * at 10%, ** at 5% and *** at 1% statistical significance level.


Sources: statistical software eViews

In 426 cases, the Fed made no change in its policy rate, in 48 cases it decreased the FFR, and in 56 cases it increased it (once even during the economic crisis, on 24 Sept 1982). Surprisingly, the window study over 530 official FOMC meetings revealed, contrary to economic theory, that expansionary MP actually caused a tumble in stock prices and restrictive MP led to an increase in stock returns; this pattern held only during economic downturns, while during economic booms it was vice versa.

Table 2: Change in MP based on business cycle and different MP

Change in FFR        Boom: FOMC mtgs   Boom: Avg SPX return   Crisis: FOMC mtgs   Crisis: Avg SPX return   Total: FOMC mtgs   Total: Avg SPX return
Down                 33                -0.202%                15                  0.104%                   48                 -0.106%
Unchanged (inner)    367               0.174%                 59                  -0.060%                  426                0.142%
Up                   55                0.202%                 1                   -0.397%                  56                 0.191%
Total                455               0.150%                 75                  -0.032%                  530                0.124%

Source: own calculations [Excel]

In terms of mean stock returns, if the Fed took a restrictive step, the stock markets were already producing positive returns two days ahead and even on the announcement day, while during the three days afterwards they handed back the previous profits. This could imply that market participants have no negative anticipation ex ante, but a restrictive surprise then prompts investors to sell the stocks right afterwards.

Table 3: S&P 500 in-sample daily mean returns near FOMC dates under different MP

Source: own calculations [Excel]

Curiously, mean stock returns, if rates went up, have generally exceeded those returns if rates fell down,

three days ahead up to the official announcement dates, but this relation has inversed ex-post the FOMC

announcement when truely lowering the rate leads to higher returns ex-post and oppositely.

The average daily return between FOMC meetings during expansionary policy, outside of our observed windows T-9 to T+5, is 0.342 %, significantly higher than at any other time, compared to the population expansionary average return of 0.030 % or the in-sample expansionary return of 0.018 %. This indicates that unanticipated MP actions outside official meetings had significant power to influence the financial markets.
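A sketch of the underlying event-window computation (hypothetical pandas code, not the author's Excel workbook; `returns` is assumed to be the daily S&P 500 log-return series indexed by trading date and `fomc_dates` the list of announcement days):

import pandas as pd

def event_window_means(returns: pd.Series, event_dates, pre=9, post=5):
    # mean daily return at each trading day from T-pre to T+post around the events
    idx = returns.index
    rows = []
    for d in pd.to_datetime(event_dates):
        pos = idx.searchsorted(d)            # event day (or the next trading day)
        if pos - pre < 0 or pos + post >= len(idx):
            continue                         # skip events too close to the sample edge
        rows.append(returns.iloc[pos - pre: pos + post + 1].to_numpy())
    return pd.DataFrame(rows, columns=range(-pre, post + 1)).mean()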




Chart 4: S&P 500 daily mean returns around FOMC dates under different MP

Source: own calculations [Excel]

The daily volatility of stock returns, measured by the standard deviation (Table 4), spiked especially on the official FOMC announcement days, regardless of the policy rate change. Somewhat surprisingly, the variance during expansionary periods was significantly higher than during periods of no change or even contractionary periods. In particular, when the Fed lowered the rates, stock volatility jumped enormously one day ahead of and on the official date.

Table 4: S&P 500 daily volatility around FOMC dates

Source: own calculations [Excel]

In contrast to the results of Table 2, Chart 5 shows that, in the sample of days around FOMC meetings, any change in the target FFR led to a cumulative decrease in stock returns during economic booms, while both directions of change produced higher cumulative returns during crises.

Chart 5: S&P 500 in-sample daily mean cumulative returns under different business cycles and MP

Source: own calculations [Excel]

In-sample stock volatility was clearly higher during economic crises, especially in periods when the Fed made no changes to the target FFR.


Chart 6: S&P 500 in-sample daily returns volatility in different business cycles and MP

Source: own calculations [Excel]

Chart 7: S&P 500 downside volatility of daily returns under different MP

Source: own calculations [Excel]

A semi-variance model was also selected for volatility measurement. The shape of the semi-variance curves closely follows that of the widely used variance model, although the results indicate smaller risk. The findings reveal that semi-variance spikes on the announcement day when the Fed lowered the rates, and the semi-volatility of returns is higher under expansionary MP of the Fed.
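For reference, one common way to compute such a downside semi-deviation is sketched below (the paper does not state its exact target and divisor convention, so this particular definition is an assumption):

import numpy as np

def downside_semivolatility(returns, target=0.0):
    # only returns below the target contribute to the dispersion measure
    r = np.asarray(returns, dtype=float)
    downside = np.minimum(r - target, 0.0)
    return np.sqrt(np.mean(downside ** 2))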

Daily trading volumes are abnormally high on the official announcement days, even more so when the FOMC decided to implement looser monetary tools. The charts also suggest a day-of-the-week effect, with Wednesdays showing more closed trades than other weekdays.

Chart 8: S&P 500 daily mean trading volumes around FOMC dates under different MP

Source: own calculations [Excel]

The histogram below plots both monetary shocks (left axis) over the observed period, together with daily stock returns (right axis); further observations are excluded. The magnitude of the shocks reflects the fed funds rate environment of the time, as well as the less transparent period when the Fed did not focus as much on forward guidance and the qualitative speeches of governors and other officials.


Chart 9: MP shocks and stock returns

Sources: statistical software eViews

Conclusions

In his previous research papers the author studied and analysed factors influencing stock markets via regressions and VAR and VEC models. The main conclusion was that it is mostly real macroeconomic variables that play a statistically significant role, while monetary policy tools matter less, even though the Granger causality table in this paper shows statistically significant one-way causal relations from the monetary variables to stock returns. In this paper, however, the author focused on the relationship between the interest rates set by the Federal Open Market Committee and stock returns, which appears to vary across different periods of time; this is the key research question of the paper. Moreover, stock returns are significantly higher between FOMC meetings than during the observed FOMC windows of nine days ahead and five days afterwards.

The findings here focused mostly on the event study of the Fed windows under different MP and business cycles. There are many more alternatives to track, e.g. periods of Fed transparency or structural changes in the primary MP targets, which were not feasible to cover in this paper and are left for future research. Approximately three fifths of the time was considered in-sample, i.e. close to official FOMC meetings. A day-of-the-week effect was recognised in the in-sample daily trading volumes: Wednesdays are the weekdays with the highest trading activity.

The largest volatility of returns is observed around expansionary FOMC meetings and, conversely, the lowest around restrictive meetings, with the volatility skewed more to the upside than the downside. The findings reveal that the semi-variance of stock returns spikes on the announcement day when the Fed lowered the rates. Surprisingly, the window study over 530 official FOMC meetings suggests, contrary to standard economic theory, that expansionary MP was associated with a tumble in stock prices and restrictive MP with an increase in stock returns – but only during economic downturns, while during economic booms the relation was reversed. Additionally, in the sample around FOMC meetings, any change in the target FFR led to a cumulative decrease in stock returns during economic booms, while both directions of change produced higher cumulative returns during crises. When the Fed lowered the rates, stock volatility jumped enormously one day ahead of and on the official date.

Acknowledgement

This paper has been elaborated within the framework of the grant programme "Support for Science and Research in the Moravia-Silesia Region 2018" (RRC/10/2018), financed from the budget of the Moravian-Silesian Region.

The author also gratefully acknowledges financial support from the VSB – Technical University of Ostrava SGS grant project no. SP2020/116 ("Economic policy challenges in developed countries").


References

[1] Bernanke, B. and K. N. Kuttner, 2005. What Explains the Stock Market's Reaction to Federal

Reserve Policy? The Journal of Finance [online]. 60(3), 1221-1257 [cit. 2020-01-02]. DOI:

10.1111/j.1540-6261.2005.00760.x. ISSN 00221082. Available from:

http://doi.wiley.com/10.1111/j.1540-6261.2005.00760.x

[2] Bernanke, B. and I. Mihov, 1998. Measuring Monetary Policy. The Quarterly Journal of

Economics [online]. 113(3), 869-902 [cit. 2020-01-22]. DOI: 10.1162/003355398555775. ISSN

0033-5533. Available from: https://academic.oup.com/qje/article-

lookup/doi/10.1162/003355398555775


[4] Board of Governors of the Federal Reserve System: About the Fed [online], 2019. Washington,

D.C., USA [cit. 2019-11-29]. Available from: https://www.federalreserve.gov/aboutthefed.htm

[5] FRED. Federal reserve economic data – FRED – St. Louis Fed. [online]. Federal reserve bank of

St. Louis, 2019. [cit. 2019-06-20]. Available from www:

<https://fred.stlouisfed.org/series/fedfunds>.

[6] Granger, C. W. J., 1969. Investigating Causal Relations by Econometric Models and Cross-

spectral Methods. Econometrica [online]. 37(3) [cit. 2020-01-22]. DOI: 10.2307/1912791. ISSN

00129682. Available from: https://www.jstor.org/stable/1912791?origin=crossref

[7] Greenspan, A., 1997. Remarks by Chairman Greenspan at the 15th Anniversary Conference of the

Center for Economic Policy Research at Stanford University, September 5.

[8] Haitsma, R., D. Unalmis and J. de Haan, 2016. The impact of the ECB's conventional and

unconventional monetary policies on stock markets. Journal of Macroeconomics [online]. 48, 101-

116 [cit. 2020-01-02]. DOI: 10.1016/j.jmacro.2016.02.004. ISSN 01640704. Available from:

https://linkinghub.elsevier.com/retrieve/pii/S0164070416000276

[9] Kuttner, K. N, 2001. Monetary policy surprises and interest rates: Evidence from the Fed funds

futures market. Journal of Monetary Economics [online]. 47(3), 523-544 [cit. 2020-01-02]. DOI:

10.1016/S0304-3932(01)00055-1. ISSN 03043932. Available from:

https://linkinghub.elsevier.com/retrieve/pii/S0304393201000551

[10] Kontonikas, A., R. MacDonald and A. Saggu, 2013. Stock market reaction to fed funds rate

surprises: State dependence and the financial crisis. Journal of Banking & Finance [online]. 37(11),

4025-4037 [cit. 2020-01-02]. DOI: 10.1016/j.jbankfin.2013.06.010. ISSN 03784266. Available

from: https://linkinghub.elsevier.com/retrieve/pii/S0378426613002987

[11] Robertson, J. and D. Thornton, 1997. Using federal funds futures rates to predict Federal Reserve actions,

Review from Federal Reserve Bank of St. Louis, Nov. 45-53

[12] Rudebusch, G. D., 1995. Federal Reserve interest rate targeting, rational expectations, and the

term structure. Journal of Monetary Economics [online]. 35(2), 245-274 [cit. 2020-01-02]. DOI:

10.1016/0304-3932(95)01190-Y. ISSN 03043932. Available from:

https://linkinghub.elsevier.com/retrieve/pii/030439329501190Y

[13] Sellin, P, 2002. Monetary Policy and the Stock Market: Theory and Empirical Evidence. Journal

of Economic Surveys [online]. 15(4), 491-541 [cit. 2020-01-02]. DOI: 10.1111/1467-6419.00147.

ISSN 0950-0804. Available from: https://onlinelibrary.wiley.com/doi/abs/10.1111/1467-

6419.00147

[14] Sirucek, M, 2011. Impact of monetary policy on US stock market. Trends economics and

management, Vol. V, No. 09 (September 02, 2011): 53-60.


[15] Tessaromatis, N. P., 1991. Money supply announcements and stock prices: the UK evidence. ISSN:

1105-8919, Vol. 41, Edition: 4, Page: 408-419

[16] Thornton, D. L., 2014. Monetary policy: Why money matters (and interest rates don’t). Journal of

Macroeconomics [online]. 40, 202-213 [cit. 2020-01-02]. DOI: 10.1016/j.jmacro.2013.12.005.

ISSN 01640704. Available from:

https://linkinghub.elsevier.com/retrieve/pii/S0164070414000044

[17] Unalmis, D. and I. Unalmis, 2015. The Effects of Conventional and Unconventional Monetary Policy

Surprises on Asset Markets in the United States [online]. Munich [cit. 2020-01-04]. Available

from: https://mpra.ub.uni-muenchen.de/62585/


APPLICATION OF MEAN-REVERSION BINOMIAL LATTICE APPROACH TO

VALUATION OF MORTGAGE IMPLICIT OPTION IN THE CZECH MARKET

Vladimír Bulko1

1Department of Finance, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

At the moment of mortgage approval a Czech bank effectively writes an implicit option to the client, who either exercises it by signing the mortgage contract or lets it expire worthless. The described implicit option is valued by means of a binomial lattice framework utilizing the mean-reversion approach to valuing interest-rate based derivatives. The valuation method is applied to mortgage data from one of the Czech banks. The paper introduces the methodology and model framework, with the main focus on the application to the Czech data and the analysis of the results. The goal of the paper is to apply the mean-reversion binomial lattice approach to the valuation of interest-rate based options to the mortgage origination implicit option in the Czech market. The paper concludes with a recommendation to Czech banks on whether the used methodology yields meaningful results for this particular implicit option.

Keywords

Mortgages, Options, Binomial lattice, Mean reversion

JEL Classification

C58, C63, D81, G12, G21

1 Introduction

The paper aims to apply standard financial methods to value an implicit option written to clients when

their mortgage is originated. The Czech mortgage data are used as an empirical background. The

profitability of mortgages is a complex problem to solve and this paper focuses on only a short part of

the mortgage's life – the origination. Origination of the mortgage is defined here as the time between the approval of the mortgage with a fixed interest rate (IR) and the signing of the mortgage contract. This time gap creates a risk for the bank: when the contract is finally signed, market rates might be very far from the interest rate fixed in the contract. The risk can be propagated into the mortgage's profitability

via two channels: cost of funding (increased market rate increases cost of funding the mortgage and so

decreases profit margin) and opportunity cost (bank could have allocated its capital to current mortgages

with higher rates rather than yesterday’s, so increasing implied cost of capital and again decreasing profit

margin). To include this risk in the profitability, it must be properly valued.

Relevant literature on mortgage-backed options valuation is scarce. The authors focus either on the valuation of a whole mortgage and its derived securities, represented by Kau et al. (1987) and Calvo-Garrido and Vázquez (2017), or on more technical aspects of valuing the mortgage prepayment and default options, such as Hürlimann (2011).

As no literature directly related to the valuation of the risk described above was found, the paper aims to fill the gap by describing a possible approach and its outcomes. The risk is valued by constructing the specific implicit option and valuing it by means of mean-reversion modelling and a binomial tree approach.

The first part of the paper presents the investigated implicit option, the mean-reversion framework used to model the underlying asset's price, the binomial lattice approach to the valuation of the option and, at the end, a brief description of the data used. The second part focuses on the estimation of the necessary parameters and the valuation


of the implicit option. The concluding part focuses on a discussion of the results and recommendations for further investigation.

2 Methodology and data

2.1 Mortgage origination implicit option

The investigated implicit option is written by the bank to a client at the moment of mortgage approval. The implicit option gives the client the right to buy one unit of the mortgage contract, hence it is a CALL option; it expires in 30 days and is of the American type, because a client can exercise the option by signing the mortgage contract at any time after writing (mortgage approval). The strike price (the IR to be signed) of the implicit option is determined before the mortgage approval2. A rational client refrains from exercising the option only if he or she is offered a better rate by a competing bank. There are also non-interest-based costs (such as opening fees, discounts for current account usage, long-term loyalty benefits, etc.) attributed to signing a mortgage contract at a specific bank which a rational client has to consider; however, from the mortgage life-time point of view these costs are negligible and therefore this paper omits them from the decision rule. To decide whether the option is in-the-money, data on all offers for all clients at all times would have to be available; since they are not, we rely on the average mortgage market interest rate (AMMIR).

2.2 Mean-reversion approach to model AMMIR

The decision rule is directly based on the dynamics of the AMMIR over time, and therefore, before embarking on valuing the implicit option, the AMMIR must be modelled first. The AMMIR is an IR, a price that is known to behave differently from, for example, stock prices, in that its trajectory tends to revert to some theoretical mean. The intuition behind this is that if the IR could rise infinitely (as stock prices can), most current economic activity would stop, because today's consumption would become very expensive relative to tomorrow's consumption and all surpluses would be invested. In other words, the IR is a price that intermediates the inter-temporal relationship between consumption today and tomorrow. This intuition, coupled with hard data, leads economists to believe that interest rates in the long run tend to hover around some theoretical steady state (mean) which ensures inter-temporal general equilibrium.

The family of so-called mean-reversion models can be expressed by the differential equation

δr = κ(µ − r)δt + σ·r^γ·δz                                                     (1)

where r stands for the modelled IR (the AMMIR in our case), κ is the rate of mean reversion, µ is the theoretical mean to which the IR tends to return, σ controls the magnitude of randomness entering the system, γ expresses the elasticity between the IR change and the level of the IR, and δz is a standard Brownian motion. Equation (1) can be reformulated into a form better suited for estimation

r_{t+1} − r_t = α + β·r_t + ε_{r,t+1}                                          (2)

E_t[ε_{r,t+1}²] = σ²·r_t^{2γ}                                                  (3)

where α = κ·µ·δt and β = −κ·δt, hence µ = −α/β. Based on the restrictions placed on different

parameters of the equation (1) we can distinguish number of well-known mean-reversion models. In this

2 The strike price of the implicit option is determined at the moment the mortgage IR is offered by a banker to a client. The offer tends to happen 2 to 4 weeks before the mortgage is approved, so there is already some time for the underlying asset's price to deviate upward from the offered fixed IR. As a result, at the moment of writing the implicit option can already be deep in the money (a negative open position for the bank). It is important to mention that the offered fixed IR is not subject to approval and the bank has absolutely no means of adjusting it upward after the offer is given.


paper, however, only the Vasicek (1977) model is investigated, due to its frequent usage and the parameter restriction γ = 0, which removes the variance endogeneity from the model and hence yields σ equal to the standard deviation of the i.i.d. residuals of the econometric interpretation (2).
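A minimal sketch of this estimation step (assuming numpy/statsmodels and monthly data, so δt is taken as 1/12 of a year; the HC1 covariance choice is an assumption, since the paper only states that heteroscedasticity-consistent errors are used):

import numpy as np
import statsmodels.api as sm

def estimate_vasicek(r, dt=1.0 / 12):
    # discretised Vasicek model (gamma = 0): r_{t+1} - r_t = alpha + beta * r_t + eps
    r = np.asarray(r, dtype=float)
    dr = np.diff(r)
    X = sm.add_constant(r[:-1])
    ols = sm.OLS(dr, X).fit(cov_type='HC1')   # heteroscedasticity-consistent errors
    alpha, beta = ols.params
    kappa = -beta / dt                        # from beta = -kappa * dt
    mu = -alpha / beta                        # from alpha = kappa * mu * dt
    sigma = ols.resid.std(ddof=1)             # residual std. dev. plays the role of sigma
    return kappa, mu, sigma, ols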

2.3 Binomial model framework

As the implicit option is of the American type, we cannot use the canonical Black-Scholes-Merton (BSM) differential equation for valuation, and the more robust binomial model framework is utilized instead. This paper uses the description and notation of the binomial method from Higham (2002); it is ultimately based on the original work of Cox et al. (1979).

Let S be the price of the underlying asset at time t = 0 when the option is written. The holder may exercise the option by buying the asset at any time until expiration at time T. Between the writing of the option and its expiration, discrete time is assumed with a step δt. Any point in time can be described as t = (i − 1)δt, i ∈ [1, M + 1], and therefore T = Mδt. The crucial building block of the binomial method is the assumption that between successive points in time the asset's price can move in only two directions – up or down. The probability of an upward movement is p and of a downward movement 1 − p. Letting u denote the magnitude of the upward movement, with the condition u > 1, and d the magnitude of the downward movement, with the condition d ∈ [0, 1), we can derive the price at time t = δt as either uS or dS. Similarly, the asset's price at time t = 2δt is derived to get three possible prices u²S, udS or d²S. Generally, for any time t = (i − 1)δt there are i possible asset prices, denoted as

S_n^i = u^(i−n) · d^(n−1) · S,   n ∈ [1, i], i ∈ [1, M + 1],                   (4)

which form the recombining binomial tree.

Let E be the strike price of the option, i.e. the price at which the option holder can buy the underlying asset at any time until expiration at time T. The expiration time t = t_{M+1} = T is special in the sense that at this moment the option must either be exercised, with profit S_n^{M+1} − E, or expire worthless. Hence the value of the option at expiration can be derived as

V_n^{M+1} = max(S_n^{M+1} − E, 0).                                             (5)

The goal of the binomial method is to find the value of the option at t = 0, and this is achieved by recursive weighting of the option values by probability and time. We know the possible option values at expiration V_n^{M+1} from (5), and financial modelling theory states that we can find any V_n^i with the recurrence equation

V_n^i = e^(−ρδt) · (p·V_{n+1}^{i+1} + (1 − p)·V_n^{i+1}),   n ∈ [1, i], i ∈ [1, M],   (6)

where ρ denotes a risk-free interest rate representing the opportunity cost of holding the option. Since the option priced here is of the American type, it is possible to exercise it at any point in time, therefore equation (6) is not enough. A rational holder of an American-type option must consider both the expected price of the underlying asset at expiration and, importantly, also its current price. Such a holder considers at each point in time either making an instant profit, if S_n^i > E, or waiting another fraction of time in expectation of making a profit later. This intuition leads us to the recurrence equation representing our desired decision rule

V_n^i = max( max(S_n^i − E, 0),  e^(−ρδt) · (p·V_{n+1}^{i+1} + (1 − p)·V_n^{i+1}) ),   n ∈ [1, i], i ∈ [1, M].   (7)
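Recurrences (4)-(7) translate into a short backward-induction routine. The sketch below uses the common Cox-Ross-Rubinstein constants u = e^(σ√δt), d = 1/u and p = (e^(ρδt) − d)/(u − d) as placeholders; the paper itself replaces these fixed parameters with the mean-reversion formulation of the next subsection, so this is only the generic building block:

import numpy as np

def american_call_binomial(S, E, rho, sigma, T, M):
    dt = T / M
    u = np.exp(sigma * np.sqrt(dt))            # CRR up move (placeholder choice)
    d = 1.0 / u                                # CRR down move
    p = (np.exp(rho * dt) - d) / (u - d)       # risk-neutral up probability
    disc = np.exp(-rho * dt)
    n = np.arange(M + 1)
    V = np.maximum(S * u ** (M - n) * d ** n - E, 0.0)          # eq. (5) at expiration
    for i in range(M, 0, -1):                                   # backward induction
        S_i = S * u ** (i - 1 - np.arange(i)) * d ** np.arange(i)
        cont = disc * (p * V[:i] + (1.0 - p) * V[1:i + 1])      # eq. (6)
        V = np.maximum(S_i - E, cont)                           # eq. (7): early exercise
    return V[0]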

2.4 Binomial model parameters

To utilize the Binomial method for the problem stated in this paper we need to explicitly determine the

parameters p, u, d, 𝜌, M, E and S.

The price of the underlying asset at the writing of the option, S, is defined in this paper as the AMMIR in the month of the implicit option writing (the month of mortgage approval) and is modelled using the mean-reversion approach described above. The strike price E is obtained from the data on real mortgage offers described in the next subchapter.

The technical parameter M must be chosen high enough for the binomial method to converge sufficiently close to the BSM results, yet low enough for the method to be numerically stable and computationally reasonable. The convergence is tested by changing the nature of the option to the European type and comparing the results of the binomial method with the results of the BSM model for this test option. As the implicit option has an expiration of 30 days, multiples of 30 are tested as proposed values for M.
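Such a convergence check can be sketched as follows (hypothetical inputs; the European lattice value is compared with the closed-form BSM price for increasing M):

import numpy as np
from scipy.stats import norm

def european_call_binomial(S, E, rho, sigma, T, M):
    # plain CRR lattice without early exercise, eq. (6) applied backwards
    dt = T / M
    u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
    p = (np.exp(rho * dt) - d) / (u - d)
    disc = np.exp(-rho * dt)
    n = np.arange(M + 1)
    V = np.maximum(S * u ** (M - n) * d ** n - E, 0.0)
    for i in range(M, 0, -1):
        V = disc * (p * V[:i] + (1.0 - p) * V[1:i + 1])
    return V[0]

def bsm_call(S, E, rho, sigma, T):
    d1 = (np.log(S / E) + (rho + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    return S * norm.cdf(d1) - E * np.exp(-rho * T) * norm.cdf(d1 - sigma * np.sqrt(T))

# hypothetical 30-day test option; M tested as multiples of 30
for M in (30, 60, 300, 1200):
    print(M, european_call_binomial(2.5, 2.4, 0.02, 0.3, 30 / 365, M),
          bsm_call(2.5, 2.4, 0.02, 0.3, 30 / 365))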


The opportunity cost of holding the mortgage origination implicit option is negligible for a client because, except for the time spent fetching the offer, the client does not yet directly allocate any capital to the transaction. This is underlined by the common practice of clients collecting a number of mortgage rate offers from different banks, comparing them and choosing the one with the lowest offered rate – a practice that costs them literally nothing in either direct costs or invested capital. On the other hand, the writer of the implicit option (the bank) invests capital in the transaction from the very beginning, for instance by allocating the time of the banker and the back-office specialists to prepare the mortgage or, maybe even more importantly, by taking on the risk of the rates moving in the undesired direction. The bank could instead have put its capital into an account at the Czech National Bank (CNB) and earned risk-free interest just by keeping the capital there. Essentially, the bank is deciding between writing the implicit option and depositing its capital at the CNB, and consequently the CNB risk-free rate should be priced into the value of the written option. Therefore, the risk-free interest rate ρ is, for the purpose of this paper, chosen to be the average 2W REPO rate announced by the CNB in the month of the implicit option writing.

The mean-reversion binomial model differs from the standard binomial model in that in each node the probability and/or magnitude of the upward and downward movements must be adjusted to capture the mean-reversion setting of the underlying asset, as opposed to standard parameters fixed throughout all nodes. This paper determines the parameters u and d implicitly, based on the work of Bastian-Pinto (2015), such that the price of the underlying asset in each node is formulated as

S_n^i = µ + (S_1^1 − µ)·e^(−κ(i−1)δt) + (i − 2n + 1)·σ·√δt,                    (8)

where the first term µ is the steady-state level of the underlying asset, the second term represents the mean-reversion setting and the third term brings time-dependent volatility into the system, with (i − 2n + 1) representing the difference between the number of upward and downward movements. The probability of the upward movement p follows the standardly described behaviour

p = (e^(κδt) − d) / (u − d),   p ∈ [0, 1].                                     (9)
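A sketch of the price grid implied by equation (8) is shown below (the per-node u, d and the probabilities of equation (9) follow Bastian-Pinto (2015) and are left out here, so this only illustrates the lattice of underlying prices on which the recurrence (7) would then be applied):

import numpy as np

def mean_reversion_lattice(S0, mu, kappa, sigma, dt, M):
    # underlying prices S_n^i from equation (8), for i = 1..M+1 and n = 1..i
    layers = []
    for i in range(1, M + 2):
        n = np.arange(1, i + 1)
        layers.append(mu
                      + (S0 - mu) * np.exp(-kappa * (i - 1) * dt)   # mean-reversion term
                      + (i - 2 * n + 1) * sigma * np.sqrt(dt))      # net up/down moves
    return layers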

2.5 Data description

The data are obtained either from public sources (such as the 2W REPO and PRIBOR rates) or are the courtesy of one of the Czech banks (such as the AMMIR, IRS and mortgage contract data).

AMMIR (known as "FINCENTRUM HYPOINDEX") is constructed by NEWVALUES (2020) as a weighted average interest rate, pooling the average rates of different mortgage fixations and loan-to-value measures and weighting them by originated volumes. Throughout the Czech banking market this AMMIR is considered the best available proxy for the average mortgage market rate. The CNB also publishes its own average mortgage market rate; however, this rate does not coincide with the banking industry definition: on the one hand the CNB rate omits non-purpose mortgages (and treats them as consumer loans) and on the other hand it includes all building savings loans, which have considerably higher rates than standard bank mortgages. This creates an upward bias in the CNB metric, which is why we decided to use the widely acknowledged but paid metric rather than the freely published but biased one.

The source of the publicly published 2W REPO and PRIBOR rates is CNB (2020). The monthly representation of the 2W REPO is created by taking the last-day-of-month value in order to keep the sharp transitions between months when a CNB monetary policy change occurs. PRIBOR is used as the daily average of the month provided by the CNB.


Figure 1. Czech mortgage market important interest rates (CNB 2W REPO, AMMIR, PRIBOR 1M, IRS 5Y; monthly, 2009/01–2019/09)

(source: Author)

The data used to determine the strike prices E are anonymized real data on 12 941 mortgage contracts approved over two consecutive years. These data are the courtesy of one of the bigger mortgage providers on the Czech market. An interesting fact from the data is that on average 8% of mortgage origination implicit options do not result in a signed contract, and the median time to sign the contract (exercise) is 20 days. Furthermore, if we compare the approved IR (strike E) with the AMMIR of the same month (the underlying asset's price at t = 0), we can distinguish three standard types of implicit options written together with a mortgage contract: in-the-money (ITM), at-the-money (ATM) and out-of-the-money (OTM). Figure (2) shows the share of ITM options in all implicit options by month (the few ATM options are included in OTM in Figure (2)). Throughout the data period, 36% of the options written were ITM.



Figure 2. Share of options by value at writing

(Source: Author)

3 Vasicek model parameters estimation

Due to the linearity of the econometric interpretation of the Vasicek model (with γ = 0) in equations (2) and (3), the standard ordinary least squares (OLS) estimator is utilized. The data have monthly frequency and the model is estimated on the periods 2009/01:2019/11 and 2014/01:2019/11. The latter period is expected to result in more reasonable parameters for the implicit option valuation, given the trajectory of the AMMIR in Figure (1).

3.1 Estimation results

The F-statistic (p-value 0.028**) rejects the hypothesis that the full period model is insignificant; however, the insignificance of the restricted period model is not rejected (p-value 0.14). Neither the White nor the Breusch-Pagan test ruled out heteroscedasticity of the standard OLS residuals, therefore heteroscedasticity-consistent standard errors are used. Furthermore, based on the Breusch-Godfrey test, residuals at lag (−2) are identified as significantly correlated with the dependent variable, pointing to an autocorrelation problem. The autocorrelation problem could be solved by a better specification of the model, i.e. by adding higher-order lags of the dependent variable, although this would considerably alter the Vasicek (1977) model and is therefore only noted in this paper. Lastly, the hypothesis of normality of the residuals is also rejected.



Table 1. Vasicek model estimated parameters

                               calculated parameters          estimated parameters
estimation period   metric     κ             µ                α              β             σ
2009/01:2019/11     value      0.212936      0.017365         0.000308138    -0.0177447    0.000666
                    lower      0.402497      -0.005687        -0.000190743   -0.0335414    not relevant
                    upper      0.023376      0.414289         0.000807019    -0.00194796   not relevant
                    p-value    not relevant  not relevant     0.2239         0.028**       not relevant
2014/01:2019/11     value      0.448261      0.020726         0.000774215    -0.0373551    0.00056774
                    lower      1.046719      -0.003726        -0.00032504    -0.0872266    not relevant
                    upper      -0.150196     -0.149682        0.00187347     0.0125163     not relevant
                    p-value    not relevant  not relevant     0.1645         0.1397        not relevant

(source: Author)

Lower and upper terms in Table (1) refer to values of the lower and upper bounds of the 95% confidence

interval of the parameters’ estimation. Number of * with the p-value represents statistical significance

of a parameter based on standard t-test or F-test (* >90%, ** >95%, *** >99%).

3.2 Estimated parameters discussion

The presented results show that the Vasicek model can only very weakly explain the behaviour of the AMMIR in the Czech market. Although the estimated parameters α and β yield reasonable parameters κ and µ, once the 95% bands are taken into consideration the parameters can lie anywhere, including negative values.

The κ estimate from the full period model is the more reasonable one, as it can hover between a 2% and 40% rate of reversion to the mean within one month. The restricted period model, on the other hand, yields a κ which can even become negative 15%, which would violate the mean-reversion setting and would lead to exponential behaviour of the AMMIR.

The results for the long-term mean µ follow suit in having a reasonable estimated value but unreasonable upper and lower bounds. The full period model estimates this parameter at 1.74% and the restricted period model at 2.07%. This is plausible because the AMMIR hovered close to 2% in the past 5 years and trended towards the 2% level in the 5 years before that (Figure 1). The upper and lower bounds of µ for both models, however, show the very weak ability of the Vasicek model to describe the Czech AMMIR. For the full period model the long-term mean AMMIR could be anywhere between −0.57% and +41.43%, and for the restricted period model the estimated value is not even included inside the bounds.

Taking into consideration the results and their significance, the paper uses the estimated parameters from the full period model, despite the expectation expressed at the beginning of this chapter.

4 The implicit option valuation

The results of the valuation of the mortgage origination implicit options are the weighted average of the values of each implicit option from the data. The contracted volume of the mortgage is used as a weight, making the option value for a bigger-volume mortgage more relevant in the results. Two metrics represent the results; a small worked example follows the list:

• relative price for a client – the option value in the form of a per annum (p.a.) IR; a natural result of the calculations because the underlying asset is an IR,

• absolute price for a client – the value of the option in the Czech currency (CZK), calculated as the one-month (30-day maturity option) interest income that would be paid to the bank for buying the option for a concrete mortgage, in other words the mortgage volume multiplied by the relative price for the client per month.
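For illustration only (hypothetical round numbers, not drawn from the data set): an option valued at 6 bps p.a. written together with a CZK 2,000,000 mortgage would translate into an absolute price of roughly 2,000,000 × 0.0006 / 12 ≈ CZK 100 for the 30-day option.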


Because the share of ITM options at origination is high compared to OTM, the results mostly consist of the intrinsic value of the options. Therefore, in order to demonstrate the ability of the binomial method to yield reasonable results, the author also shows the results of the computation as if all the options had been originated ATM.

For computational efficiency the parameter M was chosen to be 60, as it yields results close to those of high multiples of 30, such as M = 1200.

4.1 Results with Vasicek model estimated parameters

With the Vasicek full period model estimated parameters (κ = 21.3%; µ = 1.74%; σ = 0.0666%), the average value of the implicit options written during the investigated period was 0.0592% (5.92 basis points, bps) and the absolute price which a client should pay to buy the option is CZK 258. Figure (3) shows the distribution of the option values; the majority of them lie below 4 bps.

Under the assumption of all options being ATM at writing, the average weighted value of the option is 0.95 bps and the average price for the client is CZK 136. Most of the difference is the intrinsic value of the options originated ITM. On the other hand, some options which were deep OTM also became ATM and may have gained positive value at origination, bringing more mortgages into the weighted average, which is why the difference between the absolute prices and the relative prices does not correspond exactly.

Figure 3. Options value distribution under Vasicek full period model parameters

(source: Author)

4.2 Parameter sensitivity analysis

Because the Vasicek model produced statistically insignificant results, it is necessary to investigate the sensitivity of the results to different parameter settings. For this analysis the author has chosen the parameters κ, µ and E, or more precisely the parameter κ and the differences between the price of the underlying asset at the origination of the option, S, and µ and E. The rest of the parameters are fixed as follows: S = 2.5%; ρ = 2%; σ = 0.0666%.


Table 2. Value of implicit options under different parameter assumptions

κ = 10%                       S − µ
S − E          1%          0.50%       0%          -0.50%
 0.50%         50.33 bps   50.33 bps   50.33 bps   50.33 bps
 0.20%         20.38 bps   20.38 bps   20.38 bps   20.38 bps
 0%            0.99 bps    0.99 bps    0.99 bps    0.99 bps
-0.20%         0 bps       0 bps       0 bps       0 bps

κ = 50%                       S − µ
S − E          1%          0.50%       0%          -0.50%
 0.50%         50.33 bps   50.33 bps   50.33 bps   50.33 bps
 0.20%         20.38 bps   20.38 bps   20.38 bps   20.38 bps
 0%            0.96 bps    0.98 bps    0.99 bps    0.99 bps
-0.20%         0 bps       0 bps       0 bps       0 bps

κ = 100%                      S − µ
S − E          1%          0.50%       0%          -0.50%
 0.50%         50.33 bps   50.33 bps   50.33 bps   50.33 bps
 0.20%         20.38 bps   20.38 bps   20.38 bps   20.38 bps
 0%            0.87 bps    0.96 bps    0.99 bps    0.97 bps
-0.20%         0 bps       0 bps       0 bps       0 bps

κ = 500%                      S − µ
S − E          1%          0.50%       0%          -0.50%
 0.50%         50 bps      50 bps      50.33 bps   52.27 bps
 0.20%         20 bps      20 bps      20.38 bps   22.32 bps
 0%            0 bps       0 bps       0.99 bps    2.35 bps
-0.20%         0 bps       0 bps       0 bps       0 bps

(Source: Author)

Regarding the rate of mean-reversion parameter κ, it is obvious from Table (2) that its impact on the implicit option value is negligible unless it becomes high or the investigated option was ATM at its writing. Despite having a small impact on the value of the option, the expected trajectory of the underlying is strongly affected by the magnitude of κ (Figure 4). The same can be said about the sensitivity to S − µ. These two parameters create the mean-reversion setting for the AMMIR, which is very weak in the model set-up presented in this paper.

The value of the option is, on the other hand, mostly sensitive to S − E, as this relationship shows whether the option is ITM, ATM or OTM at the moment of origination, hence determining its intrinsic value. Extrinsic value comprises only a small part of an ITM option's value, although it is pronounced for ATM options.


Figure 4. Binomial trees under different sets of parameters

(Source: Author)

5 Conclusions

In this paper we applied a mean-reversion binomial approach to the valuation of the implicit option written when a mortgage is approved for a client, using Czech market data. To capture the mean-reversion setting of the underlying asset, the renowned Vasicek (1977) model was used; its parameters were estimated on 11 years of monthly data using standard econometric techniques. The estimates were, however, statistically insignificant, and we have to conclude that the Vasicek model in its original set-up is unsuitable for application to the Czech market. Despite the statistical insignificance of the parameters, their values were reasonable and were therefore used to value the implicit option via the binomial model. The application of the binomial approach was successful in yielding a reasonable result. Finally, the sensitivity of the option values to changes in the parameters was investigated, and we came to the conclusion that the binomial model is overall robust except for the impact of the option's strike versus the underlying asset's price at its writing. In conclusion, the binomial approach is a strong tool for valuing these implicit options; however, other mean-reversion models have to be investigated to improve the ability of the overall model to capture the mean-reversion setting of IRs in the Czech market.

References [1] AHMADI, Z. et al. (2020). A lattice-based approach to option and bond valuation under mean-

reverting regime-switching diffusion processes. Journal of Computational and Applied

Mathematics, 363, pp. 156-170.

[2] BASTIAN-PINTO, Carlos et al. (2010). A Non-Censored Binomial Model for Mean Reverting

Stochastic Processes. Proceedings 14. Annual international conference on real options.

[3] BASTIAN-PINTO, Carlos de Lamare. (2015). Modeling Generic Mean Reversion Processes with

a Symmetrical Binomial Lattice - Applications to Real Options. Procedia Computer Science, 55,

pp. 764-773.


[4] CALVO-GARRIDO, Maria and Carlos VÁZQUEZ. (2018). Mathematical analysis of obstacle

problems for pricing fixed-rate mortgages with prepayment and default options. Nonlinear

Analysis: Real World Applications, 39, pp. 157-165.

[5] CNB. (2020). PRIBOR rates - monthly and yearly averages. [online database]. Czech Republic:

Czech national Bank. Available at: <https://www.cnb.cz/en/financial-markets/money-

market/pribor/fixing-of-interest-rates-on-interbank-deposits-pribor/averages_form.html>.

[6] COX, J. C., S. A. Ross and M. Rubinstein. (1979). Option pricing: A simplified approach. Journal

of Financial Economics, 7, pp. 229-263.

[7] HAHN, Warren J. and James S. DYER. (2008). Discrete time modeling of mean-reverting

stochastic processes for real option valuation. European Journal of Operational Research, 184(2),

pp. 534-548.

[8] HIGHAM, Desmond J. (2002). Nine Ways to Implement the Binomial Method for Option

Valuation in MATLAB. SIAM Review, 44(4), pp. 661–677.

[9] HULL, John. (2018). Options, futures, and other derivatives. 10th ed. Upper Saddle River:

Pearson Prentice Hall. Prentice Hall series in finance. ISBN 978-9-35-286659-5.

[10] HÜRLIMANN, Werner. (2012). Valuation of fixed and variable rate mortgages: binomial tree

versus analytical approximations. Decisions in Economics and Finance, 35(2), pp. 171-202.

[11] KAU, James B. et al. (1987). The valuation and securitization of commercial and multifamily

mortgages. Journal of Banking and Finance, 11, pp. 525-546.

[12] KHRAMOV, Vadim. (2013). Estimating Parameters of Short-Term Real Interest Rate Models.

IMF Working Papers. WP/13/212.

[13] MUNNIK, Jeroen F. J. de. (1996). The valuation of interest rate derivative securities. New York:

Routledge. ISBN 0-415-13727-6.

[14] NEWVALUES. (2020). STATISTIKY HYPOEXPERT. [online database]. Czech Republic:

NEWVALUES s.r.o.. Available at: <https://new-values.com/hypoexpert/>.

[15] VAŠÍČEK, Oldřich. (1977). An equilibrium characterization of the term structure. Journal of

Financial Economics, 5, pp. 177-188.


USING GEOINFORMATION IN PUBLIC ADMINISTRATION, CASE STUDY:

MORAVSKOSLEZSKÝ REGION

Ivana Čermáková1

1Department of Applied Informatics, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

Providing information to citizens through web services is an increasing trend in public administration. One of the key areas of information sharing is spatial data. These data can be used for urban planning decision making, passports (e.g. vegetation inventories), traffic monitoring and so on. Nowadays data can be provided immediately, thanks to UAVs and satellites. The availability of spatial datasets widens the possibilities of public administration. The article is conceived as a case study; the area of interest is the Moravskoslezský Region in the Czech Republic. The use of geoinformation in public administration, problems with its implementation and possibilities for further development are also covered.

Keywords

Geoinformation, Moravskoslezský Region, Spatial Data, Public Administration

JEL Classification

H70, H83, O18, O38

Introduction

Information is a key part of modern society and nowadays has an immeasurable value. With technological development it is much easier to observe geoinformation – information regarding the landscape and its changes. Society wants to be informed about the landscape, its changes and the possibilities of such changes. Using Geographic Information Systems (GIS) and geoinformation saves time and money. These are some of the reasons why public administration started using geoinformation. Globally provided data are the most commonly used source in public administration, but their availability and focus were not sufficient for the needs of the regions, so most regions and big cities created their own sources of spatial data. The use of this information varies widely. The Moravskoslezský region has therefore been chosen as the area of interest for this paper. The paper focuses on the sources and use of spatial data at the regional level, namely in the Moravskoslezský region in the Czech Republic. The problems of geoinformation and its use at the regional level are covered, as well as the possibilities of using geoinformation in the future.

Literature Review

The term geoinformation can be conceptualized in a number of ways. One of the most used definitions, by Shaytura (2018), explains geoinformation as information used for geographic services, geoinformatics, spatial positioning and spatial monitoring. Geoinformation can also be explained as information that supports the disciplines of photogrammetry and remote sensing, as by Lazaridou and Patmios (2012). Geoinformation can therefore be understood as information which has some spatial context and can be used in information systems that work with spatial data.

Geoinformation at regional level

The area of interest is the Moravskoslezský region in the Czech Republic, so the regional providers of data are surveyed first. Sometimes the providers are the capital cities of the regions, sometimes the regions themselves. Prague, the capital city of the Czech Republic, provides geoinformation via different webpages. The first is the geoportal. The geoportal puts the most used


services on the first page, e.g. downloading documents or maps, importing geodetic documentation, providing geodata, e-import, e-export and open data. The maps on the first page are available only in Java. The geoportal is further divided into seven parts, but from the geoinformation point of view only these parts are important: maps, data and services. The maps section contains a base online map, an archive map, a traffic map, a topographic map and maps focused on important areas. The data section contains data, metadata and open data search. The services section provides browsing, querying and searching services; the querying and searching services are currently unavailable, according to the Prague Institute of Planning and Development (2013).

The map of passports (green vegetation) is another Prague web page providing geoinformation. The web page allows users to see the passports in an area, and users can choose other spatial information to display in the map, e.g. bicycle stands, areas where people can buy organic products, no-smoking areas or baby-friendly areas. 1 792 places in Prague are currently listed, according to Automat (2010). Prague also has a geoportal focused on crisis management. The geoinformation is distributed via the portals of each city district; e.g. District no. 8 provides a flood map and an integrated rescue system map, according to MČ Praha 8 (2012).

Hradec Králové, the capital city of the Královéhradecký region, also provides a passport geoportal. The map allows users to view the base map or orthophoto maps from 2011 and 2019. Users can add their own notes to the map, measure distances or export the map. The map is very detailed; for example, each tree has an identification number and basic notes, according to T-Mapy (2017). Hradec Králové also has a crisis management portal by T-Mapy (2017). The map includes information about the positions of sirens, offices of the integrated rescue system and other objects that need to be known in case of danger. Hradec Králové also provides a ternary map, an urban planning map, a public administration forests map, an environmental map, a social and business map and a barrier-free map. The city of Hradec Králové tries to use the whole potential of geoinformation in public administration, which is why the scope of this geoportal is so wide and offers more topics and possibilities than that of the capital city of the Czech Republic.

The Středočeský region, the region surrounding Prague, has a crisis portal which informs about the current situation and potential hazards through a map, according to Středočeský kraj (2015). The Středočeský region provides a traffic map, an environmental map, a distortion of various substances map, an urban planning map, a sports map, a gap donation map, an investment map and a library map. The maps only inform about the topics; creating analyses or finding relationships is not possible in the geoportal.

The Jihomoravský region presents the brownfield agenda as a map of brownfields, where citizens can download the documentation for each project, according to RRAJM (2019). The city of Brno, the capital of the Jihomoravský region, presents the brownfield issue through a portal containing a base map. The base map can be switched to an aerial map, and cadastral and other information become visible after clicking on a brownfield, according to Statutární město Brno (2019). Statutární město Brno also provides a historical orthophoto map, a barrier-free map, a map of closures, a nursery school catchment map, an environmental map, cemetery maps, a projects map, a companies map and a map of recycling bins on the geoportal. Some maps allow measuring and creating analyses. The geoportal is often used for the needs of public administration, e.g. urban planning or monitoring the development of industrial zones, according to Statutární město Brno (2019).

The Plzeňský region has a geoportal which provides an orthophoto map, a cadastral map, a digital technical map, an urban planning map and basic information about the maps, according to Geoportál Plzeňského kraje (2014). Java is needed to display each map. Finding an address or cadastral information, drawing a simple path or displaying different layers is available. However, the possibilities of the geoportal are not wide, and it is apparent that the geoportal is not often used for the needs of public administration, as the data are from 2014 and only static information is provided.

The Karlovarský region also provides geoinformation through a geoportal, according to Geoportal (2014), but it is currently unavailable. The geoportal of the Liberecký region has likewise been unavailable since October 2019, according to Geoportál Libereckého kraje (2014).

The Ústecký region provides geoinformation through a digital maps portal, according to Geoportál Ústeckého kraje (2014). The concept is similar to that of the Plzeňský region; both were created by the same company. While static information is used in the Plzeňský region, up-to-date information is used in the Ústecký region, e.g. a map of the winter maintenance plan for the 2019-2020 season is available. The classic topics: urban


planning map, environmental map, traffic map and so on are also included. A watchtowers map, a culture map, a family silver map, a beer and wine map and a shipment map are important for visitors.

The Pardubický region provides geoinformation through the region's public administration webpage (Pardubický kraj – Mapy, 2018). Java is required to display the maps. The webpage provides online maps (available on the region's servers) and Web Map Services (WMS). WMS are datasets that can be used in GIS as base data or for analysis through a connection to the provider's server; a connection to the Internet is necessary throughout the data processing, otherwise the data cannot be displayed. A base map, an administrative map, an urban planning map, an environmental map, a traffic map, a culture map and a map of public administration objects are available via the Pardubický region WMS. The online maps on the server display basic geographic information about the region.
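To illustrate how such a WMS can be consumed in practice, the following minimal sketch uses the Python library OWSLib to list the layers a server advertises and to download one of them as an image; the service URL, layer name and bounding box are placeholders invented for this example, not the actual endpoints of any regional portal.

```python
# Minimal sketch of reading a layer from a Web Map Service (WMS) with OWSLib.
# The URL, layer name, bounding box and CRS below are illustrative placeholders.
from owslib.wms import WebMapService

wms = WebMapService("https://example-region.cz/geoserver/wms", version="1.3.0")

# List the layers the server advertises in its capabilities document.
for name, layer in wms.contents.items():
    print(name, "-", layer.title)

# Request one layer as a PNG image for a chosen extent (WGS84 coordinates here).
img = wms.getmap(
    layers=["base_map"],            # hypothetical layer name
    srs="EPSG:4326",
    bbox=(15.5, 49.6, 16.8, 50.2),  # (min lon, min lat, max lon, max lat)
    size=(800, 600),
    format="image/png",
)
with open("base_map.png", "wb") as f:
    f.write(img.read())
```

Such a connection keeps the data on the provider's server, which is why an Internet connection is required during the whole processing, as noted above.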

The Jihočeský region provides geoinformation through a digital map portal (Geoportál Jihočeského kraje, 2014). The concept is similar to the Plzeňský and Ústecký regions and was created by the T-Mapy company. A cadastral map, an urban planning map, an education and social map, a traffic map, a reference map, an environmental map and other maps are included. The portal is used for the needs of public administration and the information it contains is up to date (the last update was made on 2 January 2020). Immediately shared data presented through maps (e.g. locations of available shared bicycles or free parking places) are not used.

The Vysočina region provides geoinformation through a geoportal (Geoportal, 2002). The geoportal contains maps, applications, map services, data supply, metadata and datasets. The most important parts are the map services and the applications. The map services contain basic maps, e.g. the cadastral map. The applications are maps containing immediate data. Some maps are created by the Vysočina region, some are connected with specialized servers (e.g. the traffic map showing the current traffic situation in the region is connected with dopravniinfo.cz). The applications include a traffic map, a culture map, a floods map, a development plan map and a tourism map. The tourism map can be important for visitors because it offers tips for potential sightseeing and informs about current events in the region.

The Zlínský region provides a map services portal (DMVS-ZK, 2014). The portal is divided into the following sections: maps, documents and background information, and metadata. The most important section is maps, which is divided into: brownfields, floods, anti-flood restrictions, water supply development plan, canalization development plan, orthophoto, manufacturing areas, land use according to INSPIRE and administrative agency. All maps are available on dedicated webpages or can be distributed via WMS. All the data are up to date, but immediate data are not provided.

The Olomoucký region provides geoinformation through maps on its public administration webpages (GIS mapy, 2011). The maps are available as PDF documents. They focus on the following topics: education, ethnic groups, dependency on various things and substances, and socially excluded areas.

Geoinformation in Moravskoslezský region

The public administration of the Moravskoslezský region provides geoinformation through various webpages. One of the important topics these days is the use of brownfields. The Moravskoslezský region has a geoportal where citizens can see the map of brownfields; regenerated objects are visible as well. Citizens can join the effort to reconstruct the brownfields or make a proposal for how the objects could be reconstructed and which function they should have afterwards (Moravskoslezský kraj, 2019).

Most geoinformation is distributed via the Moravskoslezský region webpages in the section Maps. The section Maps is divided into six parts: Actual Information, Basic Maps, Urban Planning, Investment and Property, Environment and Tourism (Mapy, 2019). The section Actual Information contains a new map application regarding urban planning, data sources (WMS, datasets available for downloading and so on), links to other public administration map servers and GIS (only basic information about GIS and about which products the regional public administration uses). The section Basic Maps contains the zoning division of the area, aerial photos, historical maps and war graves. The section Urban Planning is divided into the territorial plans of municipalities, the urban development policy, analytical background materials, analytical background materials of municipalities, the territorial competence of building authorities, the development of settled areas, cemeteries and traffic intensity counts. The section Environment contains the case study


of ecological stability, small water management measures, the development plan of water pipes and canalization, flood areas, accident prevention and geology. The section Investment and Property is divided into real estate and building areas. The section Tourism contains open churches, skiing areas, areas suitable for swimming and other water bodies, watchtowers, auto camps, culture and breweries.

Ostrava, the capital city of the Moravskoslezský region, provides its own geoportal (Mapový portál města Ostravy, 1999). The geoportal is divided into three sections: Most Used Maps, Urban Planning, and Notices and Ordinances. The section Most Used Maps contains the cadastral map, open data, polling places, cycling routes, historical maps, an education map, a barrier-free map, WMS and an environmental map. The section Urban Planning contains the urban plan, a utility report, analytical background materials and the building of the year. The section Notices and Ordinances contains the price map, unmaintained roads, parking zones, separated waste, restrictions on drinking alcoholic beverages, fees for the use of public space, areas where dogs may run free and restrictions on the distribution of advertisements. Immediate information, such as available parking places in a chosen location or the nearest available shared bicycle, is not contained in the geoportal, but the companies providing these services publish this information on their webpages.

Comparison of the use of geoinformation at the regional level

This paper focuses on the use of geoinformation by public administration in the Moravskoslezský region. This information should therefore be compared with the other regions in the same state, which is why all regions of the Czech Republic are listed in the previous chapter.

The use of geoinformation can be divided into two parts: Static Information and Actual Information. Static Information means that static information is distributed via webpages to citizens and visitors; static information describes features which are constant. A typical representative of this group is the administrative division of municipalities in the region. The second part, Actual Information, contains up-to-date information. Actual information does not mean immediate information in this case; it means that people are informed about current topics and have up-to-date data. In some cases, the public administration only created the geoportal because it had to be created, and now it is not used and the data are not kept up to date. Typical representatives of this group are the canalization development plan or the development of industrial zones.

The Moravskoslezský region provides both of the listed parts. When the Moravskoslezský region is compared with the other regions, it is clear that the region belongs to the better half, because some regions do not provide any data these days (or provide only the data available when the portal or webpage was established). The Moravskoslezský region provides basic and up-to-date data, which many regions do not, but it does not use immediate data. So it is clear that the region is in the better half, although, because of the different focus of maps and topics, a more concrete comparison is not possible.

Discussion

The increasing interest in spatial information requires the public administration to provide such data as well. While some regions and municipalities follow the new trend and use geoinformation not only for distributing static data but also for planning and immediate sharing of data, some regions ignore the trend completely. They only provided the basic data because they had to while the European Union (EU) INSPIRE project was running, and have ignored it since. It is a question whether this is because they do not have GIS specialists, do not have enough money for a sustainable working system and innovation, or simply do not want it. In the case of small municipalities, the reason is clearly mainly financial, given their limited budgets. But what is the problem for the big municipalities? It has been found that using geoinformation saves time and money, and geoinformation can also be used for decision making, e.g. in urban planning. It is certain that the trend of geoinformation sharing will continue and that the main aim will be the sharing of immediate information.


Conclusion

The use of geoinformation is increasing these days. It was first used by the private sector, but nowadays it is used by public administration as well, because citizens want spatial information and public administration can use this information not only for sharing information about a given topic but also for its own needs, which saves time and money. Geoinformation can reveal information that would otherwise remain undetectable; a typical example is deciding where to build a watchtower to get the best view.

This paper focuses on the regional level of the use of geoinformation in public administration. The case study focuses on the Moravskoslezský region in the Czech Republic; all regions are listed for comparison. The paper examines which geoinformation is distributed to the citizens and visitors of the region and how. Two large groups of distributed geoinformation were identified: Static Information and Actual Information. The group Static Information represents portals or webpages where only static information is presented, e.g. the administrative division of the municipalities in the regions; this is information which is not changed often. The group Actual Information cannot be equated with the term immediate information; in this case it means that the citizens and visitors have information about current topics and their state. A typical representative is the canalization development plan. It was found that the Moravskoslezský region provides both types of geoinformation. The Moravskoslezský region is placed in the better half of the regions because not many regions provide both types of geoinformation. The Moravskoslezský region distributes geoinformation through three webpages. The first is a webpage regarding brownfields. The second, the most important for citizens, is the Moravskoslezský region webpage in the section Maps, which contains the following areas of interest: Actual Information, Basic Maps, Urban Planning, Investment and Property, Environment and Tourism. The last webpage is the geoportal of the city of Ostrava, the capital of the Moravskoslezský region, where important up-to-date information for citizens of the city can be found. Immediate geoinformation is the only kind not provided by public administration in the Moravskoslezský region. The geoinformation exists, e.g. for shared bicycles, but it is currently provided by private companies, even though this is the information citizens most want to know. There is therefore the highest potential for the development of the use of geoinformation in the Moravskoslezský region.

Acknowledgement

This research was financially supported by the VSB – Technical University of Ostrava.

References

[1] Automat. (2010). Zelená mapa Prahy. [Online]. Available at: <http://zelenamapa.cz/>.
[2] DMVS-ZK. (2014). Portál mapových služeb. [Online]. Available at: <https://gis.kr-zlinsky.cz>.
[3] Geoportal. (2002). Geoportál DMVS Kraje Vysočina. [Online]. Available at: <http://geoportal.kr-vysocina.cz/web/>.
[4] Geoportal. (2014). Geoportál DMV Karlovarského kraje. [Online]. Available at: <https://geoportal.kr-karlovarsky.cz/web/>.
[5] Geoportál Jihočeského kraje. (2014). Portál digitální mapy veřejné správy Jihočeského kraje. [Online]. Available at: <https://geoportal.kraj-jihocesky.gov.cz/>.
[6] Geoportál Libereckého kraje. (2014). Geoportal. [Online]. Available at: <https://geoportal.kraj-lbc.cz/>.
[7] Geoportál Plzeňského kraje. (2014). Portál digitální mapy veřejné správy Plzeňského kraje. [Online]. Available at: <https://geoportal.plzensky-kraj.cz/gs/>.
[8] Geoportál Ústeckého kraje. (2014). Portál digitální mapy veřejné správy Ústeckého kraje. [Online]. Available at: <https://geoportal.ustecky-kraj.cz/gs/>.


[9] GIS Mapy. (2011). Olomoucký kraj. [Online]. Available at: <https://www.olkraj.cz/gis-mapy-aktuality-211.html>.
[10] Lazaridou, M. A. and E. N. Patmios. (2012). Photogrammetry-remote sensing and geoinformation. 22nd Congress of the International Society for Photogrammetry and Remote Sensing, ISPRS 2012, Australia, pp. 69-71.
[11] Mapový portál města Ostravy. (1999). Mapový portál. [Online]. Available at: <https://mapy.ostrava.cz/>.
[12] Mapy. (2019). Moravskoslezský kraj. [Online]. Available at: <https://www.msk.cz/mapy/>.
[13] MČ Praha 8. (2012). Městská část Praha 8. [Online]. Available at: <https://m.praha8.cz/Mapy-krizoveho-rizeni.html>.
[14] Moravskoslezský kraj. (2019).
[15] Pardubický kraj – Mapy. (2018). Pardubický kraj. [Online]. Available at: <https://www.pardubickykraj.cz/gis/>.
[16] Prague Institute of Planning and Development. (2013). Geoportal Praha. [Online]. Available at: <http://www.geoportalpraha.cz/>.
[17] RRAJM. (2019). Brownfieldy. [Online]. Available at: <https://www.brownfieldy-jmk.cz/>.
[18] Shaytura, S. V. et al. (2018). Geoinformation services in spatial economy. International Journal of Civil Engineering and Technology, 9(2), pp. 829-841.
[19] Statutární město Brno. (2019). Mapový portál Brno. [Online]. Available at: <http://gis.brno.cz/portal/>.
[20] Středočeský kraj. (2015). Středočeský kraj: Mapové aplikace. [Online]. Available at: <https://gis.kr-stredocesky.cz/JS/MAPY/>.
[21] T-Mapy. (2017). Mapové aplikace. [Online]. Available at: <http://geoportal.mmhk.cz/portal/>.


INTER-ORGANIZATIONAL KNOWLEDGE SHARING

AND GAME THEORY

Alina Czapla1

1Department of Organizational Relationship Management, University of Economics in Katowice,

1 Maja 50, 40-287 Katowice, Poland,

e-mail: [email protected]

Abstract

Knowledge management is at the center of attention of many researchers, and knowledge sharing (KS) is currently the subject of many scientific studies. However, the vast majority of them refer to sharing knowledge within the organization. This study focuses on inter-organizational knowledge sharing. There are many similarities between inter-organizational knowledge sharing (IKS) and a strategic game, so the exchange of knowledge between organizations has been analysed within the framework of game theory (GT). The analysis showed that game theory is a useful tool for describing KS and can help managers in making decisions. However, treating IKS as a strategic game is not always beneficial; sometimes it is better to establish mutual cooperation.

Keywords

Knowledge sharing, Inter-organizational knowledge sharing, Game theory.

JEL Classification

D80, C570.

Introduction

The importance of knowledge and knowledge management has been constantly emphasized in recent

decades. The benefits of sharing knowledge have been highlighted. Increased competitiveness and

innovativeness have been considered to be particularly important. Although KS was mainly analysed as

an activity within the organization, inter-organizational knowledge sharing is currently gaining

importance. The possibility of obtaining knowledge from external sources can bring many benefits to

an organization. KS also carries risks, so the decision “share knowledge” or “not share knowledge” with

another organization should be made carefully.

A decision support tool for such situations comes from mathematics: game theory not only helps to model real situations, but it is also useful for finding the right strategy. This theory has already found applications in many areas, but its usefulness in economics and management seems to be particularly important. That is why IKS is modelled in this study using game theory.

A methodology of using non-zero-sum games to improve decision making in choosing courses of action was proposed for making strategic decisions. The decision to "share knowledge" or "not share knowledge" with another organization was treated as the solution of a strategic game. Game theory rules and the payoff matrix were used to solve this game. The solutions were examined and discussed.

From these considerations it follows that the possibility of cooperating in the field of inter-organizational knowledge sharing should be considered before applying the game theory approach. In some situations cooperation instead of competition will allow organizations to achieve optimal benefits. If this is not possible, then treating IKS as a strategic game can help managers in making decisions. By applying mathematical rules we can find the game solution, i.e. the right strategic decision. However, the concept of using game theory to make strategic decisions regarding IKS also has some disadvantages.


Framework

Organizational Knowledge

Literature reveals many different definitions and perspectives on knowledge (Small and Sage,

2005/2006, p. 153). What is knowledge? There were many answers and many arguments used in

supporting them, but none of those theories has been accepted so far as being fully satisfactory (Bolisani

and Bratianu, 2018, p. 2). Researchers and practitioners have failed to agree on a definition of what

constitutes knowledge (Biggam, 2001, p. 6). Although scientists have not managed to develop a clear

definition of knowledge to this day, they agree that this is more than just data or information.

Also organizational knowledge is much talked about but little understood (Tsoukas and Vladimirou,

2001, p. 973). It is defined in various ways, for example: organizational knowledge is the collection of

knowledge, which exists in the organization that has been derived from current and past employees

(Jones and Leonard, 2009, p. 29); organizational knowledge is the set of collective understandings

embedded in a firm, which enable it to put its resources to particular uses (Tsoukas and Vladimirou,

2001, p. 981; Penrose, 1959); organizational knowledge is a dynamic process, of an essentially and

inherently social and interactive nature, which demands active and committed participation and

involvement by people (Cardoso et al., 2012, p. 267).

Organizational knowledge is much more than the sum of knowledge of its individual members.

Knowledge is organizational simply by its being generated, developed and transmitted by individuals

within organizations. Furthermore, knowledge becomes organizational when individuals operate

according to the general rules developed by the organization (Tsoukas and Vladimirou, 2001, p. 979).

Organizations are the sites of cultural knowledge that provides an organized system with a distinct identity and enables its members to act in coordinated ways (Tsoukas, 2011, p. 13).

Both scientists and practitioners agree that organizational knowledge is a valuable resource. Knowledge

represents the most important resource in creating the competitive advantage (Bratianu, 2015, p. 131).

Organizational knowledge is identified as one of the contributing factors to organizational

competitiveness (Pangil and Nasurddin, 2013, p. 349). It is perceived as the primary source of the

creation of value (Shaheen, 2017, p. 24). While the know-that or know-what knowledge is visible and can be easily imitated by other competitors, the know-how knowledge is invisible and can be considered the backbone of the organizational knowledge (Bratianu, 2015, p. 131).

Knowledge Sharing

Knowledge management and especially knowledge sharing are currently the subject of many scientific

studies. The vast majority of them focus on sharing knowledge within the organization. Knowledge

sharing is basically the act of making knowledge available to others (Ipe, 2003, p. 341). If we consider

the internal exchange of knowledge, it concerns the exchange of information and know-how between

employees, teams or departments. Such exchange is considered to be very beneficial for an organization,

although individuals may sometimes suffer losses. Nevertheless, the quality of knowledge sharing is the

major factor that facilitates individual creativity (Lee, 2018, p. 10).

Knowledge sharing is affected by multi-level factors: organizational level, team level and individual

level factors; some will promote knowledge sharing, and some will have a negative impact (Zheng,

2017, p. 51). There are four enablers for KS: technology that supports KS, culture that influences the

attitude towards KS, organizational structure that affects KS style and motivation that determines KS

strategy (Xu et al., 2014, p. 14). KS depends on the nature of knowledge, the motivation to share, the

opportunities to share, and the culture of the work environment (Ipe, 2003, p. 351). The significant

drivers of KS are: enjoy helping others, monetary rewards, management support, change of knowledge

sharing behavior and recognition. The significant identified barriers to knowledge sharing are: change

of behavior, lack of trust and lack of time (Razmerita et al., 2016, p. 1).

KS does not only mean reorganization and effective transfer of knowledge, skills, and information, but

it also indicates the creation of new knowledge and innovative ideas (Lee, 2018, p. 3). Knowledge


sharing has various positive effects on organizations. In a knowledge economy, effective sharing of

knowledge makes businesses function more effectively (Safari and Soufi, 2014, p. 13). KS increases the

effectiveness and quality of work to improve performance for the benefit of the organizations (Mohajan,

2019, p. 57). It influences the creativity of the team (Men et al., 2017, p. 1). Knowledge sharing

orientation significantly and positively impacts the business performance (Vij and Farooq, 2014, p. 17).

Inter-organizational Knowledge Sharing

Inter-organizational knowledge sharing confronts firms with a paradox of dealing with contradictory

requirements. On the one hand, KS can give firms new business opportunities, on the other, partners can

lose the uniqueness of their companies' knowledge. We can observe the competitive paradox of inter-

organizational knowledge sharing: how to reap the benefits of cooperating without losing one’s own

advantage (Loebbecke et al., 2016, p. 5).

IKS and achieving innovation through IKS are critical for organizational survival (Tsai, 2016, p. 1402).

IKS can bring many benefits to the organization. Enhancement of effectiveness and efficiency by

spreading good ideas and practices are main advantages of knowledge sharing between

companies (Safari and Soufi, 2014, p. 21). Other benefits of inter-organizational knowledge

sharing are access to competitive knowledge, increased company’s competitive advantage,

synergy effects in the creation of know-how. Joint knowledge resources foster innovation, learning,

and knowledge creation (Ilvonen and Vuori, 2013).

IKS can prove to be worthwhile only when it is a joint activity between partners in which every party

attempts to create more value together than what they would be able to create individually. However,

inter-organizational knowledge sharing can be not only fruitful but also threatening (Safari and Soufi,

2014, pp. 13-14). IKS carries risks, for example knowledge spill-over, opportunistic behavior, conflicts with partners, and the risk of a lack of balance between competition and co-operation (Ilvonen and Vuori, 2013).

Game Theory

Game theory is a young branch of mathematics that often supports decision making. It is often used in

optimization techniques. Games that are the focus of this theory illustrate various real decision-making

situations. In a simplified mathematical model, with the help of logical reasoning and mathematical rules, GT allows one to find a game solution. That is why game theory helps to solve strategic problems.

Game theory is used by practitioners from various fields, including biology, psychology, international

relations and philosophy (Watson, 2002, p. 2). It is helpful in the development of computer science

(Halpern, 2007), cybernetics (Kazimierczak, 1973) or artificial intelligence (Tennenholz, 2002). It is

used in geology (Krzak, 2013), politics (McCarty and Meirowitz, 2007), jurisprudence (Załuski, 2013),

sociology (Burns et al., 2017) and military sciences (Fox, 2016). These are just examples of areas where

GT supports practitioners or forms the basis of some research.

However, the use of game theory in economics and management deserves special attention. Game theory

models are used in finance (Allen and Morris, 2014), accounting (Kanodia, 2014), marketing (Moorthy,

2014), management (Li and Whang, 2014). Watson (2002) describes the application of game theory in

the organization of markets, trade and negotiations. Drabik (2009, p. 28) underlines that GT models

many economic processes such as production, transport, distribution of goods, economic growth as well

as competition and cooperation.

Methodology

A methodology of using non-zero-sum games to improve decision making in choosing courses of action was proposed for making strategic decisions. The decision to "share knowledge" or "not share knowledge" with another organization was treated as the solution of a strategic game. Game theory rules and the payoff matrix were used to solve this game. The solutions were examined and discussed.


The concept of using game theory to make strategic decisions regarding IKS has undergone critical

analysis. Weaknesses of this approach have been identified. The advantages of this approach have also

been pointed out.

Results

Inter-Organizational Knowledge Sharing as a Strategic Game

For an organization, the decision to "share knowledge" or "not share knowledge" with another one is a part of its strategy. Inter-organizational knowledge sharing can be very beneficial, but it is also risky. There are many similarities between knowledge sharing and strategic games: they involve two or more persons, organizations or players; each player or organization chooses one of the possible strategies; and the strategy leading to the highest payoff is chosen (Chua, 2003, p. 120). In this paper KS between two organizations is considered. This economic situation is presented as a two-player non-zero-sum game. Each organization (player) chooses one of two strategies. The payoff matrix is presented in Table 1.

Table 1. Payoff matrix of knowledge sharing between two organizations

                                                Player/organization 2
                                        Share knowledge      Not share knowledge
Player/organization 1  Share knowledge      (a_1, a_2)            (b_1, c_2)
                       Not share knowledge  (c_1, b_2)            (0, 0)

Source: Own elaboration

It was assumed that if both organizations decide to choose the "not share knowledge" (NSK) strategy, their payoffs will be 0. Because knowledge is a valuable resource, it was also assumed that if only one organization decides to "share knowledge" (SK), then the payoff of the second one will be positive:

c_i > 0, where i ∈ {1,2}.   (1)

For the same reason:

a_i > b_i, where i ∈ {1,2}.   (2)

This means that the payoff of an organization that shares knowledge is larger when the second player also shares knowledge. Generally speaking, we assume that acquiring knowledge from outside is always beneficial for the organization. Importantly, payoffs are not only financial profits but also all other benefits or losses. They determine the utility of each strategy for a given player.
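As an optional illustration of the model, the payoff matrix of Table 1 can be written down directly as data. The minimal Python sketch below encodes the matrix and checks assumptions (1) and (2); the numeric values of a_i, b_i and c_i are hypothetical and serve only as an example.

```python
# Payoff matrix of the IKS game from Table 1, indexed by the two strategies.
# The numeric values of a_i, b_i, c_i are hypothetical; only the structure and
# the assumptions c_i > 0 and a_i > b_i come from the text.
SK, NSK = "share knowledge", "not share knowledge"

a1, b1, c1 = 4, -2, 3   # player 1: payoffs in (SK,SK), (SK,NSK), (NSK,SK)
a2, b2, c2 = 5, -1, 2   # player 2: analogous payoffs

payoff = {
    (SK, SK):   (a1, a2),
    (SK, NSK):  (b1, c2),
    (NSK, SK):  (c1, b2),
    (NSK, NSK): (0, 0),
}

# Assumption (1): acquiring knowledge while giving nothing has a positive payoff.
assert c1 > 0 and c2 > 0
# Assumption (2): sharing pays off more when the other side shares as well.
assert a1 > b1 and a2 > b2
```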

Game Solution

To find a solution to the game, the concept of a dominant strategy is helpful. A strategy is called dominant when it always produces a higher payoff, regardless of which strategies are used by the other players (Harrington, 2009, p. 56). A weakly dominant strategy is defined analogously, but only one payoff has to be strictly higher; the others can be greater than or equal. Game theory assumes the rationality of all players, which in practice means that if a given player has a dominant or weakly dominant strategy, then he will apply it. Additionally, common knowledge about the rationality of all players is assumed: all the players are rational; all the players believe that all the players are rational; all the players believe that all the players believe that all the players are rational; and so on (Heifetz, 2012, p. 49).

If none of the players has a dominant or weakly dominant strategy, then the game's solution is the most favorable Nash equilibrium (named after John Nash, who first described it). Such an equilibrium is a set of strategies in which each player's strategy maximizes his payoff, given the strategies used by the other players (Harrington, 2009, p. 90). In other words, no player has anything to gain by changing only his own strategy.

The rational player chooses the strategy for which his payoff is the largest. The solution of the game will therefore depend on the relationship between the payoffs of the i-th player. To find a solution to the game, all possible cases should be considered. The solution to the game titled "Inter-organizational


knowledge sharing", obtained by searching for dominant strategies and Nash equilibria, is presented in Table 2.

Table 2. Game solution

Payoff values                                              Game solution
b_i ≥ 0, i ∈ {1,2}                                         (SK, SK)
b_i < 0 and a_i > c_i, i ∈ {1,2}                           (SK, SK)
b_i < 0, i ∈ {1,2}, and (a_1 ≤ c_1 or a_2 ≤ c_2)           (NSK, NSK)
b_1 ≥ 0 and b_2 < 0, a_2 > c_2                             (SK, SK)
b_1 ≥ 0 and b_2 < 0, a_2 < c_2                             (SK, NSK)
b_1 ≥ 0 and b_2 < 0, a_2 = c_2                             (SK, SK or NSK)
b_2 ≥ 0 and b_1 < 0, a_1 > c_1                             (SK, SK)
b_2 ≥ 0 and b_1 < 0, a_1 < c_1                             (NSK, SK)
b_2 ≥ 0 and b_1 < 0, a_1 = c_1                             (SK or NSK, SK)

Source: Own elaboration

To show an example of the reasoning used, let us consider the following situation:

b_i < 0 and a_i ≤ c_i, where i ∈ {1,2}.   (3)

Then in the payoff matrices of both players we can find a dominant or weakly dominant strategy (NSK for each player):

[ a_1  b_1 ]        [ a_2  c_2 ]
[ c_1    0 ]  and   [ b_2    0 ].   (4)

So the solution of the game is (NSK, NSK). This does not mean, however, that this solution is the best for the players. If

a_i > 0, where i ∈ {1,2},   (5)

then the strategy profile (SK, SK) would be better for both organizations. Although the reasoning is correct, the result is not always optimal. The reason is that there is a game going on between the players: we have not considered in the model that cooperation can be more beneficial for the organizations.
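A minimal computational sketch of this reasoning is given below: it enumerates the strategy profiles of the 2×2 game, finds the pure-strategy Nash equilibria by checking unilateral deviations, and reproduces the case just discussed, where (NSK, NSK) is the game solution even though (SK, SK) would pay both players more. The function name and the numeric payoffs are illustrative assumptions, not part of the original model.

```python
# Sketch: pure-strategy Nash equilibria of the 2x2 IKS game, showing that the
# equilibrium need not be Pareto-optimal. Payoff numbers are hypothetical.
from itertools import product

SK, NSK = "SK", "NSK"

def nash_equilibria(payoff, strategies):
    """Return strategy profiles from which no player gains by deviating alone."""
    equilibria = []
    for s1, s2 in product(strategies, strategies):
        u1, u2 = payoff[(s1, s2)]
        best1 = all(u1 >= payoff[(t1, s2)][0] for t1 in strategies)
        best2 = all(u2 >= payoff[(s1, t2)][1] for t2 in strategies)
        if best1 and best2:
            equilibria.append((s1, s2))
    return equilibria

# Case (3): b_i < 0 and a_i <= c_i for both players (hypothetical numbers that
# also satisfy assumptions (1) and (2)).
a1, b1, c1 = 2, -1, 3
a2, b2, c2 = 1, -2, 4
payoff = {(SK, SK): (a1, a2), (SK, NSK): (b1, c2),
          (NSK, SK): (c1, b2), (NSK, NSK): (0, 0)}

print(nash_equilibria(payoff, strategies=(SK, NSK)))  # [('NSK', 'NSK')]
print(payoff[(SK, SK)])  # (2, 1): (SK, SK) would pay both players more than (0, 0)
```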

Prisoner’s Dilemma

Game theory explains this specific strategic situation through a simple story called the prisoner's dilemma, which is often described in the literature (for instance Heifetz, 2012; Geckil and Anderson, 2010; Harrington, 2009). Two criminals are suspected of having committed a crime together. The police caught them, but evidence is lacking. Each suspect is promised that if he confesses, he will go free, while his colleague will receive a five-year prison sentence. If both confess, each will spend three years in prison. If neither confesses, each will spend one year in prison. Table 3 shows the payoff matrix.


Table 3. Payoff matrix in Prisoner's Dilemma

                                      Prisoner/player 2
                                  Not confess       Confess
Prisoner/player 1  Not confess      (-1, -1)        (-5, 0)
                   Confess          (0, -5)         (-3, -3)

Source: Own elaboration (based on Heifetz, 2012, p. 28).

In the payoff matrices of the players we can find dominant strategies:

[ -1  -5 ]        [ -1   0 ]
[  0  -3 ]  and   [ -5  -3 ].   (6)

Because

0 > -1 and -3 > -5,   (7)

the strategy "confess" dominates the strategy "not confess". If the suspect confesses and his colleague does not, he will go free (which is better than not confessing and spending a year in prison). If the suspect confesses and his colleague confesses as well, he will spend three years in prison instead of five. The solution to the game is therefore the decision "confess" by each prisoner.

Both prisoners are aware that they would be better off saying nothing, but they cannot communicate and therefore cannot rely on agreements between them. It is a situation in which individuals receive less than they would in the case of cooperation.
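For completeness, inequality (7) can be verified with a short self-contained check; the code below is a hypothetical illustration using the payoffs of Table 3.

```python
# Self-contained check that "confess" dominates "not confess" for player 1 (Table 3).
NC, C = "not confess", "confess"

# Player 1's payoff for each (own strategy, other's strategy) pair.
payoff1 = {(NC, NC): -1, (NC, C): -5, (C, NC): 0, (C, C): -3}

dominates = all(payoff1[(C, other)] > payoff1[(NC, other)] for other in (NC, C))
print(dominates)  # True: 0 > -1 and -3 > -5, mirroring inequality (7)
```

By symmetry the same holds for player 2, so (confess, confess) is the solution of the game.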

Discussion

The above example shows that using game theory to make strategic decisions about inter-organizational knowledge sharing is not always beneficial for the organizations. As with the prisoner's dilemma, it is recommended to use game theory to make decisions about IKS when the organizations cannot communicate with each other. Otherwise, it is always best to consider cooperation first. The modified approach to IKS is shown in Figure 1.

Figure 1. Decision path for IKS

Source: Own elaboration

The concept of using game theory to make strategic decisions regarding IKS also has other disadvantages. The first problem is the assumption that we know the players' payoffs; in reality, such data are not known or are known only approximately. The second assumption, that players are rational, is also often not fulfilled in reality. Managers often do not know the basics of game theory and act according to their knowledge, but not always rationally. The same applies to common knowledge about rationality: from people unfamiliar with the basics of game theory, one cannot expect common knowledge of rationality. Therefore, modeling IKS as a strategic game should be used with caution. Additionally, a model is always a simplification; it does not fully describe reality.

[Figure 1 shows the decision path for finding the best strategy for knowledge sharing: if it is possible to set the terms of cooperation, set mutually beneficial terms of IKS; if cooperation is not possible, use game theory and decide to share knowledge when it is profitable, otherwise decide not to share knowledge.]


On the other hand, game theory allows one to assess the situation rationally, analyse the possible payoffs and choose the right strategy. It helps to understand decision-making mechanisms. Knowledge of the basics of this theory simplifies management and leads to better results.

The solution of the "inter-organizational knowledge sharing" game is clearly indicated in this study. This can be helpful for practitioners. Knowledge, as a valuable resource, should be obtained from external sources more often.

Conclusions

The decision whether to share knowledge with other organizations or not is a typical strategic situation

that can be described using game theory. Game theory can be helpful in analyzing the possible benefits

and losses resulting from knowledge sharing. Nevertheless, one should be aware that game theory

illustrates a simplified model of reality, so it can only help managers make decisions. Nevertheless, the

possibility of cooperating in the field of inter-organizational knowledge sharing should be considered

before applying the game theory approach. In some situations cooperation instead of competition will

allow organizations to achieve optimal benefits.

This study has limitations. Mainly, IKS between two organizations was considered. The analysis of

knowledge sharing by more organizations should be the subject of future research. Additionally, the

theoretical approach was proposed in this paper. Case studies of real situations could verify the

correctness of the conducted considerations.

Despite the fact that the results can’t be uncritically applied in practice, knowledge of the basics of game

theory can help managers to find a rational IKS strategy.

References

[1] Allen F. and Morris S. (2014). Game Theory Models in Finance, [in:] Chatterjee K. and Samuelson

W. (Eds.), Game Theory and Business Applications. Springer, pp. 17-42.

[2] Biggam J. (2001). Defining Knowledge: an Epistemological Foundation for Knowledge

Management. Proceedings of the 34th Hawaii International Conference on System Sciences, pp.

1-7.

[3] Bolisani E. and Bratianu C. (2018). The Elusive Definition of Knowledge, [in:] Bolisani E. and

Bratianu C. (Eds.), Emergent knowledge strategies: Strategic thinking in knowledge management.

Cham: Springer International Publishing, pp. 1-22.

[4] Bratianu C. (2015). Organizational Knowledge Dynamics: Managing Knowledge Creation,

Acquisition, Sharing, and Transformation. Hershey: IGI Global.

[5] Burns T. R., Roszkowska E., Corte U. and Machado N. (2017). Sociological Game Theory:

Agency, Social Structures and Interaction Processes. Optimum. Studia Ekonomiczne, 5(89), pp.

187-199.

[6] Cardoso L., Meireles A. and Peralta C. F. (2012). Knowledge management and its critical factors

in social economy organizations. Journal of Knowledge Management, 16(2), pp. 267-284.

[7] Chua A. (2003). Knowledge sharing: A game people play. Aslib Proceedings, 55(3), pp. 117-129.

[8] Drabik E. (2009). Kilka uwag o formalnych zasadach matematycznego modelowania zjawisk

ekonomicznych i interakcji społecznych. Ekonomika i Organizacja Gospodarki Żywnościowej,

79, pp. 23-37.

[9] Fox W. P. (2016). Applied Game Theory to Improve Strategic and Tactical Military Decisions.

Journal of Defense Management, 6(2), pp. 1-7.

[10] Geckil I. K. and Anderson P. L. (2010). Applied Game Theory and Strategic Behavior. USA: CRC

Press.


[11] Halpern J. Y. (2007). Computer Science and Game Theory: A Brief Survey. Available at:

<https://arxiv.org/pdf/cs/0703148.pdf>.

[12] Harrington J. E. (2009). Games, strategies, and decision making. New York: Worth Publishers.

[13] Heifetz A. (2012). Game Theory. Interactive Strategies in Economics and Management. New

York: Cambridge University Press.

[14] Ilvonen I. and Vuori V. (2013). Risks and benefits of knowledge sharing in co-opetitive knowledge

networks. International Journal of Networking and Virtual Organisations, 13(3), pp. 209 – 223.

[15] Ipe M. (2003). Knowledge Sharing in Organizations: a Conceptual Framework. Human Resource

Development Review, 2(4), pp. 337–359.

[16] Jones K. and Leonard L. N. K. (2009). From Tacit Knowledge to Organizational Knowledge for

Successful KM, [in:] King W. R. (Ed.), Knowledge Management and Organizational Learning.

Springer Science+Business Media, LLC, pp. 27-39.

[17] Kanodia C. (2014). Game Theory Models in Accounting, [in:] Chatterjee K. and Samuelson W.

(Eds.), Game Theory and Business Applications. Springer, pp. 43-80.

[18] Kazimierczak J. (1973). Teoria gier w cybernetyce. Warszawa: Wiedza Powszechna.

[19] Krzak M. (2013). Teoria gier w geologii gospodarczej. Kraków: Wydawnictwa AGH.

[20] Lee J. (2018). The Effects of Knowledge Sharing on Individual Creativity in Higher Education

Institutions: Socio-Technical View. Administrative Sciences, 8(21), pp. 1-16.

[21] Li L. and Whang S. (2014). Applications of Game Theory in Operation Management and

Information Systems, [in:] Chatterjee K. and Samuelson W. (Eds.), Game Theory and Business

Applications. Springer, pp. 103-136.

[22] Loebbecke C., van Fenema P. C. and Powell P. (2016), Managing Inter-Organizational Knowledge

Sharing. The Journal of Strategic Information Systems, 25 (1), pp. 4-14.

[23] McCarty N. and Meirowitz A. (2007). Political Game Theory: An Introduction. Cambridge

University Press.

[24] Men C., Fong P. S. W., Luo J., Zhong J. and Huo W. (2017). When and how knowledge sharing

benefits team creativity: The importance of cognitive team diversity. Journal of Management &

Organization, pp. 1-18.

[25] Mohajan H. K. (2019). Knowledge Sharing among Employees in Organizations. Journal of

Economic Development, Environment and People, 8(1), pp. 52-61.

[26] Moorthy S. (2014). Marketing Applications of Game Theory, [in:] Chatterjee K. and Samuelson

W. (Eds.), Game Theory and Business Applications. Springer, pp. 81-102.

[27] Pangil F. and Nasurddin A. M. (2013). Knowledge and the Importance of Knowledge Sharing in

Organizations. Conference on Business Management Research Universiti Utara Malaysia, pp.

349-361.

[28] Penrose E. (1959). The Theory of the Growth of the Firm. New York: Wiley.

[29] Razmerita L., Kirchner K. and Nielsen P. (2016). What Factors Influence Knowledge Sharing in

Organizations? : A Social Dilemma Perspective of Social Media Communication. Journal of

Knowledge Management, 20(6), pp. 1-31.

[30] Safari H. and Soufi M. (2014). A Game Theory Approach for Solving the Knowledge Sharing

Problem in Supply Chain. International Journal of Applied Operational Research, 4(3), pp. 13-

24.

[31] Shaheen O. (2017). Knowledge and knowledge management in organization: identifying the

critical role of IT in the knowledge management process. World Scientific News, 87, pp. 24-48.


[32] Small C. T. and Sage A. P. (2005/2006). Knowledge management and knowledge sharing: A

review. Information Knowledge Systems Management, 5, pp. 153-169.

[33] Tennenholz M. (2002). Game theory and artificial intelligence, [in:] d’Inverno M., Luck M., Fisher

M. and Preist C. (Eds.), Foundations and applications of multi-agent systems. Springer, pp. 49-

58.

[34] Tsai A. (2016). The effects of innovation by inter-organizational knowledge management.

Information Development, 32(5), pp. 1402-1416.

[35] Tsoukas H. and Vladimirou E. (2001). What is organizational Knowledge. Journal of Management

Studies, 38(7), pp. 973-993.

[36] Tsoukas H. (2011). Representation, Signification, Improvisation – A Three-Dimensional View of

Organizational Knowledge, [in:] Canary H. E. and McPhee R. D. (Eds.), Communication and

Organizational Knowledge. New York and London: Routledge, pp. 10-19.

[37] Vij S. and Farooq R. (2014). Knowledge Sharing Orientation and Its Relationship with Business

Performance: A Structural Equation Modelling Approach, The IUP Journal of Knowledge

Management, 12(3), pp.17-41.

[38] Watson J. (2002). Strategia. Wprowadzenie do teorii gier. Warszawa: Wydawnictwo Naukowo-

Techniczne.

[39] Xu J., Quaddus M. and Gao X. (2014). Towards a Knowledge Sharing Model for Small

Businesses. The International Technology Management Review, 4(1), pp. 12-26.

[40] Załuski W. (2013). Game theory in jurisprudence. Kraków: Copernicus Center Press.

[41] Zheng T. (2017). A Literature Review on Knowledge Sharing. Scientific Research Publishing, 5,

pp. 51-58.


COMPARISON OF EVALUATION OF INNOVATIVE ACTIVITIES IN INNOVATIVE

COMPANIES WITHIN THE V4 COUNTRIES

Katarzyna Czerná1

1Department of Business Administration, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

The conditions of the current competitive environment urge management towards an innovation strategy, a key to obtaining a significant market position. For the efficient implementation of innovations, evaluation of best practice and its inclusion in entrepreneurship are crucial. The selection of countries for the evaluation of innovation activities is based on the assumption of similar economic and cultural developments and a mutual connection based on international cooperation within the V4. Even though the innovation performance of the selected countries is evaluated within the EU28 countries according to Eurostat data and the standardized CIS 2016 Innovation Questionnaire is used for collecting data on the innovation activities of enterprises, there are quite significant differences caused primarily by mandatory and optional questions. The paper aims to compare the way innovative activities are evaluated in innovative companies within the V4 countries. At the same time, the evaluation of innovative activities through data on innovative enterprises available from statistical offices will be carried out. The evaluation of innovation activities will primarily focus on their subject, the costs of implementation and the obstacles to the implementation of innovation activities. A hierarchical clustering analysis, Ward's approach with Euclidean distance, and regression analysis will be used to evaluate and compare the results for innovative enterprises in the selected countries.

Keywords

innovation activities, innovative companies, manufacturing industry, regression analysis, V4.

JEL Classification

M10, O31, O32

Introduction

In today's highly competitive business environment, innovation activities are essential for expansion and subsequent business development. Their correct implementation can secure the market position, economic growth and a long-term competitive advantage, which many managers already try to achieve by acting as an innovator on the market (Vance, 2015).

The key question is how this can be done. One way is to follow the leaders in the industry, analyse their working practices, and then apply best practice examples in one's own environment, with the necessity of finding one's own way of transferring the innovative idea into practice and consequently avoiding copying others (Peterková, Ludvík, 2015). According to Peterková and Wozniaková (2015), innovation should not be an end in itself but should lead to higher performance, better products and greater capability. Tidd and Bessant (2016) add that it is the wellspring of national industrial growth.

At the same time, however, businesses have to face the progress of globalization, the increasing frequency of change, and its disruptive nature. These elements also bring advantages, which include, in

particular, the possibility of comparing their own innovative activities with foreign principles of

operation, of course, in the case of countries with similar economic and social development. The

Community Innovation Survey (CIS, 2016) is used within the EU to facilitate comparison, carried out

with two years' frequency by EU member states and a number of ESS member countries. The CIS stands

for a harmonized survey of innovation activity in enterprises, created to provide information on the

innovativeness of sectors by type of enterprises, on different types of innovation and on various aspects

of the development of innovation, such as objectives, the sources of information, public funding, the

innovation expenditures, etc. The CIS provides statistics broken down by countries, types of innovators,

economic activities and size classes.


Even though the innovation performance is evaluated according to the standardized CIS, there are quite significant differences caused primarily by mandatory and optional questions. The paper aims to compare the way innovative activities are evaluated in innovative companies within the V4 countries. These countries were chosen not only for their similar economic and social developments but also for their strong interconnection and possible cooperation.

The Visegrád Group, Visegrád Four, or V4, is a cultural and political alliance of four Central European states, namely the Czech Republic, Hungary, Poland and Slovakia, that are members of the European Union (EU) and NATO. Its purpose is to advance military, cultural, economic and energy cooperation among its members while furthering their integration in the EU (Visegrad Group, 2019).

At the same time, the evaluation of innovative activities through data on innovative enterprises available from statistical offices will be carried out. The evaluation of innovation activities will primarily focus on their subject, the costs of implementation and the obstacles to the implementation of innovation activities. A hierarchical clustering analysis, Ward's approach with Euclidean distance, and regression analysis will be used to evaluate and compare the results for innovative enterprises in the selected countries. The research results are analysed using the IBM SPSS Statistics 21 statistical program.
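The paper performs the analysis in IBM SPSS Statistics 21; purely as an illustration of the method, the sketch below shows Ward's hierarchical clustering with Euclidean distance on a small synthetic matrix of innovation indicators using Python's SciPy. The data and variable meanings are invented for the example and are not CIS 2016 figures.

```python
# Illustration only: Ward's hierarchical clustering with Euclidean distance,
# as used conceptually in the paper (the paper itself uses IBM SPSS Statistics 21).
# The indicator values below are synthetic placeholders, not CIS 2016 data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: hypothetical enterprise groups; columns: share of product, process,
# marketing and organizational innovators (illustrative numbers).
X = np.array([
    [0.32, 0.28, 0.18, 0.22],
    [0.35, 0.30, 0.20, 0.25],
    [0.15, 0.12, 0.08, 0.10],
    [0.14, 0.13, 0.09, 0.11],
    [0.45, 0.40, 0.30, 0.35],
])

# Ward's method minimises the within-cluster variance; SciPy computes Euclidean
# distances on the raw observation matrix when method="ward".
Z = linkage(X, method="ward")

# Cut the dendrogram into two clusters and print the cluster label of each row.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```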

Literature Review

In domestic and foreign literature there are distinct ways of defining the key concept of innovation, although authors usually emphasize change towards something new. Schumpeter (1987), in his time, recognized only absolute innovation, by which he meant the launch of a new product or of an existing product with new features, the introduction of a new production process, the opening of a new market, the use of new raw materials or the creation of a new production organization. For practical reasons, however, the understanding of innovation has to be broadened to changes of all kinds which are new internally, for a single business. This concept is reflected in relative innovations and in the distance of a new product or other new factor from the original position before the innovation. According to Valenta (2001), there are nine orders of innovation divided into three groups: rationalization, qualitative innovation, and technological breakthrough.

The OECD definition (2018) describes the introduction of a new or significantly improved product and the use of a new or significantly improved process within the production company, which Tidd and Bessant (2016) complement with the need to develop these into practical use. Concerning the originality of the implemented changes, Kuratko (2009) characterizes four types of innovation: invention, extension, duplication, and synthesis. Innovations range from brand new products, services or processes to the combination of existing concepts into new forms of use.

Bessant and Tidd (2014), Green (2005) and Ireland et al. (2011) see the innovation spectrum as ranging from minor incremental improvements (incremental innovations) to radical changes (radical innovations) that change the way we think and use them.

For benchmarking within EU countries, innovation activities are monitored through a statistical sample survey, which is governed by the international Oslo Manual (OECD, 2018). A basic categorization into technical and non-technical innovations can be found there. The first group includes product innovation (the introduction of new or significantly improved products or services) and process innovation (the introduction of new or significantly improved production or delivery methods). Non-technical innovations include marketing innovation (introducing a new marketing method that involves significant changes in product design or packaging, product placement, product promotion or pricing) and organizational innovation (introducing a new organizational method in business practices, workplace organization or external relations). In the Eurostat conception (2014), an enterprise is considered to be an innovative company when it introduced at least one of the foregoing innovations, i.e. product, process, marketing or organizational, during the reference period. The evaluation of the innovation activities of business units in EU countries was carried out through a statistical sample survey based on the international Oslo Manual 2005 prepared on the initiative of the OECD and in line with Commission Implementing


Regulation (EU) No 995/2012 of 26 October 2012. The goal of the statistical survey is to present

globally comparable data on innovation environments and innovative business activities. The

harmonized model questionnaire of Eurostat for the EU Innovation Survey CIS 2016 (Community

Innovation Survey 2016) for the 2014-2016 reference period is used to collect innovation data on

enterprises.

The structure of the CIS 2016 questionnaires is consistently built and contains 15 research areas, with

questions that are mandatory or voluntary in each area. These are 1. General information about the

enterprise (4 questions); 2. Product innovation / good or service / (4 questions); 3. Process innovation

(2 questions); 4. Ongoing or abandoned innovation activities for product or process innovations (1

question); 5. Innovation activities and expenditures for product and process innovations (2 questions);

6. Public financial support for product and process innovation activities (1 question); 7. Sources of

information and co-operation for product and process innovations (3 questions); 8. Organizational

innovation (1 question); 9. Marketing innovation (1 question); 10. Factors hampering innovation

activities (1 question); 11. Effect of legislation and regulations on innovation activities (2 questions); 12.

Non-innovators (3 questions); 13. Intellectual property rights (1 question); 14. Innovations in logistics

(5 questions); 15. Basic economic information on business (4 questions).

Based on the results of the questionnaire, the basic innovation indicators and their components can be identified. These are major economic indicators, the position of the enterprise in international markets, the number of innovating enterprises according to the type of innovation, the costs of technical innovation activities according to the type of costs, revenues from innovative products and services

according to the degree of innovation, cooperation on innovation activities, technical innovations, results

of innovative activities, obstacles to the implementation of innovation activities. The innovation

indicators are monitored according to the ownership of the company (domestic and foreign control), size

of the enterprise (small, medium, large), prevailing economic activity (CZ-NACE classification) and

regional classification (CZ-NUTS 2 classification).

For measuring innovation performance at the international level, both single and composite indicators are used. One of the single indicators is the knowledge intensity indicator, which serves to determine innovation performance and is calculated as the ratio of total R&D expenditure (GERD) to gross domestic product (GDP). The composite indicators include the Summary Innovation Index (SII), the Global Innovation Index (GII) and the Innovation Output Indicator (IOI). The Summary Innovation Index allows a comparison of the innovation performance of EU member states and selected third countries. It consists of four indicator areas - Framework Conditions, Investments, Innovation Activities, and Impacts. These areas contain ten innovation sub-groups comprising 27 differently weighted indicators. According to the achieved value of the SII, the rated countries are divided into four groups - Innovation Leaders (score more than 20% above the EU average), Strong Innovators (score between 90% and 120% of the EU average), Moderate Innovators (score between 50% and 90% of the EU average), and Modest Innovators (score below 50% of the EU average).
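To make these two measures concrete, the short Python sketch below computes the knowledge intensity ratio and assigns a country to one of the four EIS groups. The GERD and GDP figures are invented for illustration only; the SII values and group thresholds are those quoted in this paper.

# Knowledge intensity: total R&D expenditure (GERD) relative to GDP (illustrative figures only).
gerd = 1.8e9                          # hypothetical GERD
gdp = 95.0e9                          # hypothetical GDP
print(f"knowledge intensity: {gerd / gdp:.2%}")

def sii_group(sii: float) -> str:
    """Assign the EIS group from an SII value expressed relative to the EU average (= 100)."""
    if sii > 120:
        return "Innovation Leader"    # more than 20 % above the EU average
    if sii >= 90:
        return "Strong Innovator"     # 90-120 % of the EU average
    if sii >= 50:
        return "Moderate Innovator"   # 50-90 % of the EU average
    return "Modest Innovator"         # below 50 % of the EU average

for country, sii in {"Czechia": 89.4, "Slovakia": 69.1, "Hungary": 69.0, "Poland": 61.1}.items():
    print(country, sii_group(sii))    # all four V4 countries fall into the moderate group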

In 2003, Chesbrough, later supported by Herzog (2011) and Pitra (2006), noticed that companies from different high-tech sectors have shifted how innovation is realized. These companies have moved their efforts from a closed innovation model based on their own corporate research to an open innovation model using their own as well as external ideas and technologies. Bessant et al. (2012) further highlight the value of building an innovation network for cooperation, which can provide a way of getting access to different resources through a shared exchange process. Collective learning also offers an exchange of practices and challenges established models; partners stimulate each other with new insights and ideas that lead to regular experimentation. Collective risk-taking likewise plays a key role in innovation networking. Collaboration can take place with other businesses within a group, with suppliers and customers, but also with competitors from the same industry. As Birkinshaw (2007) highlights, consultants in the field of research and development, universities, public research institutions or private research organizations can also act as partners.

It follows from the above that a single, generally valid concept of innovation cannot be defined. Innovation is broadly understood as a change in any area of social life. When it comes to innovation in business practice, it is advisable to prefer a narrower definition of the concept, with innovation seen not just as a


change in products and services but also as a change in the circumstances and ways in which they reach and stay on the market.

Methodology and Data

The necessary data for evaluating the innovation activity of enterprises in the V4 countries were obtained from the national statistical offices of each country, namely the Czech Statistical Office, the Polish Statistical Office (GUS), the Statistical Office of the Slovak Republic and the Hungarian Central Statistical Office, which, at two-year intervals, conduct a statistical survey on the innovation activities of enterprises respecting the OECD methodological principles of the Oslo Manual. The evaluation of innovation activities is based on the differences and similarities in the CIS data broken down by enterprise size.

At the same time, this research aimed to estimate the realization of innovative activities in the

manufacturing industry in small and medium-sized and large enterprises based in the selected countries.

Based on the literature, the aim of the paper and the author's previous research, the following hypotheses

were defined:

• Hypothesis 1: There is a dependency between the size of the company and the type of realized

innovation.

• Hypothesis 2: Increasing market share as a result of the introduction of innovation is related to the

relative novelty of the product (new for the enterprise only).

The defined hypotheses were tested in IBM SPSS Statistics software using regression analysis. The results were examined separately for small and medium-sized enterprises with up to 249 employees and large enterprises with 250 and more employees. The fundamental data source was the questionnaire covering the principal innovation characteristics: data on the innovations carried out according to innovation type, the novelty of the product innovations, and cooperation on innovation activities.
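The analysis itself was run in IBM SPSS; purely as an illustration of the type of test described here, the Python sketch below sets up an analogous regression with the multicollinearity and error-independence diagnostics reported later in the results. The file name and column names are hypothetical placeholders, not the actual CIS variable names.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

# Hypothetical export of the CIS microdata: one row per enterprise with a
# numerically coded innovation type, a size-class code and the country.
df = pd.read_csv("cis_v4_enterprises.csv")

predictors = pd.get_dummies(df[["size_class", "country"]], drop_first=True).astype(float)
X = sm.add_constant(predictors)
y = df["innovation_type"]            # dependent variable: type of realized innovation

model = sm.OLS(y, X).fit()
print(model.params, model.pvalues)   # regression coefficients (beta) and p-values

# Diagnostics of the kind reported in the paper: multicollinearity (VIF) and error independence (Durbin-Watson).
vif = {col: variance_inflation_factor(X.values, i) for i, col in enumerate(X.columns)}
print("VIF:", vif)
print("Durbin-Watson:", durbin_watson(model.resid))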

Hierarchical cluster analysis, namely the Ward technique with Euclidean distance, is used for a basic comparison of the results for innovating enterprises in the selected countries. Cluster analysis refers to a set of methods intended to analyse multidimensional data and classify a plurality of objects into several relatively homogeneous subsets, known as clusters. Objects within clusters are as close as possible, and objects belonging to different clusters are as different as possible. Conventional clustering methods include the Ward method: at each step, for each pair of clusters, the increment of the sum of squared deviations resulting from their merger is computed, and the clusters corresponding to the minimum value of this increment are then combined. The clustering produced by this method can be represented by a binary tree, a dendrogram. Ward's method is suitable for working with objects that are described by the same set of variables. Ward's minimum variance method is the most commonly used in management (Charry, Coussement, Demoulin and Heuvinck, 2016).
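A minimal sketch of this clustering step using SciPy rather than SPSS is given below; the input file and its layout (one row per compared enterprise category, one column per innovation indicator) are assumptions made for illustration.

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

# Hypothetical data matrix: rows are the compared enterprise categories,
# columns are the innovation indicators entering the comparison.
X = np.loadtxt("v4_innovation_indicators.txt")

Z = linkage(X, method="ward", metric="euclidean")   # Ward linkage on Euclidean distances
dendrogram(Z)                                       # binary tree of the merging steps
plt.show()

# Cutting the tree at a chosen level yields the final grouping, e.g. three clusters.
print(fcluster(Z, t=3, criterion="maxclust"))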

Empirical Results

Based on the European Innovation Scoreboard 2017, the achieved values of SII in the selected countries

are visible in Figure 1.


Figure 1. Summary Innovation Index and components in V4 countries

Source: own research based on European Commission (2018)

Slovakia is perceived as a moderate innovator with an SII of 69.1. Sales impacts and Employment impacts are the strongest innovation dimensions, and Slovakia scores particularly well on Sales of new-to-market and new-to-firm product innovations, Employment in fast-growing enterprises of innovative sectors, and Medium and high-tech product exports. Finance and support, Intellectual assets and Attractive research systems are the weakest innovation dimensions. Overall, Slovakia's lowest indicator scores comprise Venture capital expenditures, PCT patent applications, and Lifelong learning.

Hungary also belongs to the group of moderate innovators (SII 69), with Employment impacts and the Innovation-friendly environment as the strongest dimensions. The highest performance is visible in Employment in fast-growing enterprises of innovative sectors, Medium and high-tech product exports, and Broadband penetration. On the other hand, Innovators, Intellectual assets and Finance and support are the weakest innovation dimensions. Hungary's lowest indicator scores remain in Design applications, SMEs innovating in-house, and SMEs with marketing or organizational innovations.

The Czech Republic (SII 89.4) and Poland (SII 61.1) are also ranked in this most numerous group of countries. The strength of the innovation system in Poland is based on sales impacts, employment impacts, and an innovation-friendly environment. Shortcomings are perceived in finance and support, innovators, intellectual assets and linkages. The strengths of the innovation system in the Czech Republic lie in corporate investment, employment impacts and sales impacts. Weaknesses are in intellectual assets, linkages, and innovators. It is clear that the Czech Republic has the highest SII of the monitored

countries, although it lags behind the EU average by 19.4 points. At the same time, all four countries

were found to be lagging behind innovation leaders, including Sweden (SII 148), Finland (SII 146),

Denmark (SII 141) or the Netherlands (SII 135).

In the research, the author of the paper further focused on comparing data from 2016, as these are the latest data available in all countries for the same period (Poland has recently switched to publishing data annually, i.e. two-year results cannot be obtained, and in Hungary the last available data are from 2016). Based

on a comparison of the structure of the CIS questionnaire in individual V4 countries, it was found that

the available information has a more detailed character in the Czech Republic, Poland, and Slovakia.

Within the Hungarian data in the English-language version, tables are available in the Internet interface

regarding the share of innovative enterprises by staff categories and by NACE, enterprises with

technological innovation by developer and type of innovation by staff categories, enterprises by product



and / or process innovation by type of innovative organization, enterprises with technological innovation

by developer and type of innovation by NACE, distribution of turnover of product innovator enterprises,

enterprises by type of product and / or process innovation, type of product and / or process innovation

by staff categories, share of innovative enterprises by cooperative partners and staff categories and

NACE. A separate category in Hungarian statistics is data related to the use of patents in innovative

business or financing of innovation by the state and the European Union.

On the other hand, in other V4 countries, it is possible to find information concerning the costs of

technical innovation including their structure and sales of innovative products, at the same time divided

by individual NACE branches but also by individual territorial division. It is also possible to find

detailed data concerning only technical (product and process) or non-technical innovations, again with

respect to individual fields of NACE. Attention is paid to technical innovations, their development or

results. To a certain extent, Poland stands outside this group of three countries, as there are no statistical data on how the innovative products placed on the market were developed. Also, the Polish Statistical

Office does not disclose how enterprises in the manufacturing industry cooperate with other innovation

entities according to the country of the cooperating partner.

Another difference is the very nature of the data. In the Czech Republic and Slovakia, in addition to the

percentage calculations of the individual metrics, absolute figures, i.e. the number of enterprises in each category, are reported. In Poland and Hungary, only the relative figures, i.e. the percentages of the

individual categories, can be found on the official website of the statistical offices.

The Slovak and Hungarian institutions work mainly with dissemination databases and summary tables (STADAT in Hungary, STATdat. in Slovakia). The data are presented in the form of reports in annual time series in territorial structures for the Slovak Republic, its regions and districts. Report output can be exported to the data formats PDF, XML, XLSX, XLS and CSV. STATdat. is a public database built on IBM Cognos BI technology. In the Czech Republic and Poland, Excel files are published directly on the statistical offices' websites and must be downloaded and edited according to the user's needs.

Verification of hypothesis 1

During the analysis of relationships in the environment of innovating companies in the V4 countries the

question about the relation between the size of the company and the type of realized innovation arose.

Because of that, hypothesis 1 assumes that this dependence is visible across countries. To determine the

dependence of these variables, statistical testing was performed using hierarchical regression analysis.

As a dependent variable, the type of implemented innovation was chosen, the independent variable is

the size of the company. Multicollinearity in variables was investigated using the Pearson correlation

coefficient, the tolerance values and mean values of VIF. Multicollinearity was not confirmed. Based

on the Durbin-Watson test value (2.974), the assumption of error independence was confirmed.

Dependence was confirmed by β = 1.218; p (0.048) <0.05. Large and medium-sized companies favour

the implementation of technical innovations, while small companies prefer non-technical innovations

due to limited resources.

Verification of hypothesis 2

Moreover, deliberations were directed to the question of the absolute novelty of the product, i.e. a product that is new for the whole market, not just for the organization, and its relationship with the way in which the product innovation is developed, especially the self-developed type. Hypothesis 2 implies that this dependency

exists. Again, hierarchical regression analysis was used to determine the dependence of these two variables, with the absolute novelty of the product as the dependent variable. Based on the result, we can state that no multicollinearity was detected through the VIF values. Based on the Durbin-Watson test value (2.974), the assumption of error independence was confirmed. By performing statistical testing

by regression analysis, it was found that this dependence exists, p (0.044) <0.05.


Cluster analysis

Four variables have been involved in the cluster analysis (the type of innovation, product novelty, type

of collaboration and developer of innovation).

The proximity between profiles was calculated using the Euclidean metric, and objects were combined using the Ward method. Ward's minimum variance method creates clusters that minimize the variance within each cluster. For each cluster, the mean is calculated, and the observations within the cluster are compared to this mean for each variable. Observations and/or clusters are combined so that the variance

within the final cluster solution is minimized. Ward's minimum variance method is the most commonly

used in management. Individual business entities are identified by numbers, namely 1 - small Czech

enterprises, 2 - small Polish enterprises, 3 - small Slovak enterprises, 4 - small Hungarian enterprises, 5

- medium Czech enterprises, 6 - medium Polish enterprises, 7 - medium Slovak enterprises, 8 - medium

Hungarian enterprises, 9 - large Czech enterprises, 10 - large Polish enterprises, 11 - large Slovak

enterprises, 12 - large Hungarian enterprises. The entire clustering process is displayed in the dendrogram in Figure 2.

Figure 2. Dendrogram – cluster analysis

Source: own research based on IBM SPSS Software

It can be said that, using the Ward method, three clusters of enterprises were created. The first cluster consists only of enterprises based in Poland, and all size classes of enterprises are represented in it. All these businesses, regardless of their size, share the fact that the most significant issue is the lack of their own decision-making about the development of new products, as well as the strongest cooperation with suppliers, i.e., an external source. The second cluster consists only of enterprises with headquarters in the Czech Republic and Slovakia, and all size classes of enterprises are represented in it. All of these businesses, regardless of size, have in common the launch of products new for the enterprise only, and they mainly realize product innovations. The first cluster also joins the second cluster at a higher level of the tree, reflecting the similarity of large enterprises, which implement the most innovative products and services, both new to the market and new only for the enterprise. The last cluster is represented only by Hungarian companies and is based on their co-operation arrangements on innovation activities in product or process innovative enterprises.

Conclusion

The first part of the paper was based on a comparison of individual V4 countries based on the

organization of the European CIS. This is a harmonized questionnaire aimed at collecting and analysing

data within the European Union on innovative business activities. Although this questionnaire is

harmonized, not all questions are prescribed exhaustively, and each country has some freedom. This step


aimed to allow each country in the questionnaire to cover the specific features of its economy.

Unfortunately, in this way, a thorough cross-country analysis within the EU is impossible to some extent.

In the Czech Republic and Slovakia, the structure of data is almost identical; data on obstacles or

funding are available, both in relative and absolute terms. The absolute expression is missing in the case

of Polish and Hungarian statistics. At the same time, it is not possible to trace detailed data in Poland in

terms of how technical innovation or cooperation is developed. While Hungary provides interactive

tables on its official statistical website on the internet interface, their number in English is limited to

basic metrics related to innovative activities.

Another part of the paper focused on testing the tentative hypotheses, which were defined not only on the basis of the literature review but also on the basis of the author's previous research, as follows: increasing market share as a result of the introduction of innovation is related to the relative novelty of the product (new for the enterprise only). Both hypotheses were confirmed using regression analysis, implying first of all that large and medium-sized companies favour the implementation of technical innovation, while small companies prefer non-technical innovation due to limited resources, independently of the analysed country. At the same time,

it has been proven that in all V4 countries increasing market shares is the result of implementing a

relative product change. This solution is safer and, in the case of radical novelty, companies are afraid

of initial distrust of customers and hence a decline in market share. Although this share can be increased

after the first phase has been overcome, companies prefer constant growth over a steep rise in market

shares.

In the last part of the paper, the comparison of all V4 countries was carried out by performing a cluster analysis with four variables, namely the type of innovation, product novelty, type of collaboration and developer of innovation. These variables were selected because they are available in all countries under review. Based on the analysis, it was found that three clusters had been created with respect to the countries: Poland (1); Slovakia and the Czech Republic (2); Hungary (3). It is clear that the individual economies retain certain characteristics, and the grouping of Slovakia with the Czech Republic is a logical consequence of their long-term interconnection in all respects. The clusters thus did not form according to the size of companies, as one might perhaps have expected.

Research has been limited by the nature of available data, which is very sensitive. The result of the paper

also implies the need for greater unification of the CIS structure, the creation of an exhaustive corpus,

which would include both absolute and relative values, which would allow a more in-depth examination

of innovative business activities not only in academia and science but especially in business. In this way,

one could provide some insight into the best practices that work abroad and that would be applicable to

the domestic economy.

References

[1] Bessant, J., A. Alexander, et al. (2012). Developing innovation capability through learning

networks. Journal of Economic Geography 12, pp. 1087-1112.

[2] Birkinshaw, J., J. Bessant, and R. Delbridge. (2007). Finding, forming, and performing: Creating networks for discontinuous innovation. California Management Review, 49(3), pp. 67-83.

[3] Charry, K., Coussement, K., Demoulin, N., Heuvinck, N. (2016). Marketing Research with IBM

SPSS Statistics. New York: Routledge.

[4] ČSÚ. (2018). Inovační aktivity podniků 2014-2016. [online database]. Praha: Český statistický

úřad, Available at: <https://www.czso.cz/csu/czso/inovacni-aktivity-podniku-2014-2016>.

[5] Drucker, P. (1985). Innovation and Entrepreneurship. New York: Harper & Row.

[6] European Commission. (2018). European Innovation Scoreboard. [online database]. Available at:

https://interactivetool.eu/EIS/EIS_2.html#a.

[7] Eurostat. (2014). Innovation statistics. [online database]. Luxembourg: European Commission. Available at: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Innovation_statistics#Further_Eurostat_information.


[8] GUS (2018). Działalność innowacyjna przedsiębiorstw 2014-2016. [online database]. Warszawa: Główny Urząd Statystyczny. Available at: https://stat.gov.pl/obszary-tematyczne/nauka-i-technika-spoleczenstwo-informacyjne/nauka-i-technika/dzialalnosc-innowacyjna-przedsiebiorstw-w-polsce-w-latach-2014-2016,14,6.html.

[9] Herzog, P. (2011). Open and Closed Innovation. Wiesbaden: Gabler Verlag.

[10] HUNGARIAN CENTRAL STATISTICAL OFFICE (2019). Science and technology. [online].

Budapest: HCSO. Available at: http://statinfo.ksh.hu/Statinfo/index.jsp.

[11] Kuratko, D. F. (2009). Entrepreneurship: Theory, process, practice. Mason: South-Western

Cengage Learning.

[12] OECD. (2018). Oslo Manual 2018, 4th Edition. Available at: <https://read.oecd-ilibrary.org/science-and-technology/oslo-manual-2018_9789264304604-en#page1>.

[13] Peterková, J. and L. Ludvík. (2015). Řízení inovací v průmyslovém podniku. SAEI, vol. 42.

Ostrava: VŠB-TU Ostrava.

[14] Peterková, J. and Z. Wozniaková. (2015). The Czech innovative enterprise. Journal of Applied

Economic, 10(2), pp. 243-252.

[15] Pitra, Z. (2006). Management inovačních aktivit. Praha: Professional Publishing.

[16] Schumpeter, J. A. (1987). Theory of Economic Development. The New Palgrave.

[17] Štatistický úrad Slovenskej republiky (2019). Databázy. [online]. Bratislava: ŠÚS. Available at:

https://slovak.statistics.sk/wps/portal/ext/Databases.

[18] Tidd, J. and J. Bessant (2016). Managing Innovation: Integrating Technological, Market and

Organizational Change. Chichester: Wiley.

[19] Valenta, F. (2001). Inovace v manažerské praxi. Praha: Velryba.

[20] Vance, A. (2015). Elon Musk: How the Billionaire CEO of SpaceX and Tesla is Shaping our

Future. New York: HarperCollins Publishers.

[21] Veber, J. et al. (2016). Management inovací. Managing Innovation. Praha: Management Press.

[22] Visegrad Group (2019). About the Visegrad Group. [online]. Available at:

http://www.visegradgroup.eu/about.


IMPACT OF UNILATERAL PREFERENTIAL MEASURES OF THE EUROPEAN UNION,

THE UNITED STATES AND CHINA ON EXPORTS OF THE LEAST DEVELOPED

COUNTRIES

Petra Doleželová1

1Department of European Integration, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

The main aim of the paper is to find out how the exports of the least developed countries (LDCs) have

evolved in terms of the commodity and geographical structure since the introduction of the main

preferential schemes for LDCs - Everything but Arms (EBA) of the European Union (EU) and African

Growth and Opportunity Act (AGOA) of the United States (US) and later Chinese duty-free, quota-free

program. To identify the changes, we carried out two different cluster analyses based on the data from

2000 and 2018 in which the LDCs were sorted into groups based on the similarity of their exports. The

changes in the position of individual LDCs within these groups indicate the changes in their export

structure. The results do not suggest that preferential schemes have contributed to a greater

diversification of LDCs exports or an increase in the proportion of processing-intensive products in

them. However, there have been significant changes in the geographical focus of LDCs exports within

the European Union, the United States and China.

Keywords

China, European Union, Least developed countries, Nonreciprocal trade preference, United States.

JEL Classification

F10, F15, F40, O10.

Introduction

Poverty is one of the most complex and widespread global problems humanity has ever faced. Over the years, trade has proven to be one of the most effective instruments for eradicating poverty. Therefore,

rich countries help developing countries to market their products on the world market and benefit from

engaging in international trade. For this purpose, developed and some developing countries grant

unilateral trade preferences to the poor countries. Although there are currently around fifty countries

providing nonreciprocal trade preferences, this paper focuses on the preferential schemes of the three

main global actors, i.e. the European Union, the United States and China. The first part of this paper

provides a brief overview of preference schemes for LDCs provided by these economies. Although these

preferences have been granted for a long time, to this point, it is not clear whether they achieve their

purpose of promoting exports of beneficiary countries. Selected studies dealing with the impacts of

preferential schemes are discussed in more detail in the literature review.

Trade preferences are usually provided with the two main goals: to increase export volumes for

developing countries and thereby boost their export earnings, and to facilitate export diversification

(Persson, 2013). However, the majority of authors, including those mentioned in the literature review,

who sought to measure the impact of these preferences, focused only on the impacts on the trade volume

of the beneficiary countries. Compared to these studies, the main goal of this paper is not to assess the

impact on the volume of exports of the LDCs, but to map the changes in the structure of these exports

that have occurred since the introduction of the preferential schemes.

The main aim of the paper is to find out how the exports of the least developed countries have evolved

in terms of the commodity and geographical structure since the introduction of the main preferential

schemes for LDCs - Everything but Arms of the European Union and African Growth and Opportunity

Act of the United States and later Chinese duty-free, quota-free program. We also try to find out whether


the geographical focus of exports of the least developed countries within the three selected economies,

i.e. the European Union, the United States, and China, has changed over the years especially after the

introduction of the preferential system of China in 2010.

The paper will, therefore, provide answers to two research questions. The first research question is whether the beneficiary LDCs' exports have evolved over time towards less concentrated exports and exports of higher value-added products.

The second research question is whether the introduction of a program for the LDCs by China in 2010

caused some LDCs to shift part of their exports from the European Union and the United States to China.

In order to determine how the structure of the LDCs' exports has changed following the introduction

of the preferential schemes in question, several groups were created within the LDCs based on the export

data from 2000 and 2018. Groups of the LDCs were created based on the similarity of their exports. In

each group, all the countries included share the same main characteristics and therefore are similar in

terms of the export structure and its geographical focus. Conversely, countries belonging to different

groups are very different in their export structure and geographic focus.

Background

Empirical evidence supports the idea that the expansion of trade is one of the most proven means to

boost the growth and development of developing countries (Grossman, Helpman, 2015). Therefore, all

countries even the least developed ones should have a chance to engage in international trade and benefit

from it. But being successful in competing with others in the world market can be difficult for some

countries, especially the least developed ones. The idea that developing countries should receive “special

and differential treatment” in the trade area originated from the General Agreement on Tariffs and Trade

(GATT) in the early 1970s. This special treatment can take several different forms, although its most

well-known form is the Generalized System of Preferences (GSP). Under this scheme, developed

countries apply concessional measures towards developing countries in the form of unilateral trade

preferences (Pareja et al., 2016). As the word unilateral implies, these preferences are provided by a

preference-granting country to a developing country without any reciprocal preferences for the donor’s

exports. The expected result of these measures is an increase in exports of beneficiary countries towards

the preference-giving country. These preferences may take the form of duty-free access to the donor’s

market or substantially lower than the normal Most-favoured-nation tariffs. The list of affected products

varies from several dozens to thousands of items for different preferential schemes.

Unilateral preferences have been applied since the early 1970s and are currently part of the trade policies

of all developed countries. Most of these countries have also introduced more privileged preference

programs that can be targeted either at developing countries located in a particular region or countries

with a high degree of underdevelopment.

One of the longest applied and most comprehensive preference schemes is the Generalized System of

Preferences of the European Union. The first GSP scheme of the European Community was applied in

an initial phase between 1971–1981 and has been subsequently renewed several times. At each renewal,

the GSP was also revised in terms of the range of products covered, quotas and ceilings as well as the

lists of beneficiaries and conditions for export of agricultural products (Aiello, 2010). The Generalized System of Preferences of the European Union is one of the most studied preferential schemes, especially its Everything but Arms initiative. However, as we mention in more detail in the literature review, the prevalent empirical literature claims that the European Union's GSP fails to achieve its

objectives in terms of enhancing the trade flows of beneficiaries towards EU markets (Cipollina and

Salvatici, 2007).

The Everything but Arms initiative became part of the European Union's preferential scheme on 5 March 2001. Everything but Arms is specifically targeted at the least developed countries and, compared to other preferences under the GSP, has an unlimited period of implementation. Under Everything but Arms, all products from the least developed countries, except for arms and munitions, have duty-free access, without any quantitative restrictions, to the


market of the European Union. Expectations of the EBA were high from the beginning, even though, as Brenton (2003) showed, the vast majority of imports from the least developed countries had already entered the EU without duties and quotas before the EBA's implementation. Currently, 7,200 products, covering also agricultural products including sensitive ones, are eligible under the EBA initiative.

The second most frequently studied preferential scheme is the African Growth and Opportunity Act, which came into force in 2001 with the intention of strengthening trade and investment of sub-Saharan African countries in the United States. This cooperation is supposed to stimulate economic growth and help the countries of sub-Saharan Africa to integrate into the world economy (AGOA, 2018). However, like the EBA, the African Growth and Opportunity Act is not entirely flawless and is criticized for several reasons.

For example, Fayissa and Tadesse (2008) point to the fact that exports from African countries are mainly

dominated by petroleum products with relatively low value-added.

The developed countries are no longer the only ones that provide unilateral preferences. Recently,

several developing countries have also introduced their own preferential schemes. One such country is China, which started in 2001 to grant duty-free treatment to developing countries that have good diplomatic

relations with China. Since then, China has been gradually working towards an increase in the product

coverage of its LDC scheme. The Chinese duty-free, quota-free market access program for LDCs

entered into force in 2010 covering 95 percent of China’s total tariff lines.

Literature Review

Although unilateral preferences have been applied by developed countries for a long time, the evidence

of their effectiveness is inconsistent. We can divide studies on the impact of trade preferences into two

main groups: studies confirming the effect of trade preferences and studies denying that these

preferences somehow affect the trade of developing countries.

Aiello (2010) finds positive effects of preferences on LDCs' exports to OECD countries on three different

levels of data aggregation: total exports, total agricultural exports, and export flow for ten groups of

agricultural products at 2-digit level. In line with Aiello´s findings, Thelle (2015) finds that GSP

preferences have contributed to an export increase of covered products by up to 5%, compared to the

pre-preference export level. Thelle also points out that preferences under the Everything but Arms

scheme have generated higher export responses than preferences under the GSP General Arrangement

or GSP+ scheme.

Ornelas (2018) acknowledges the positive effect of preferences on trade but with some limitations. He

claims that nonreciprocal preferences boost the exports of the least developed countries, but only if these

countries are members of the World Trade Organization (WTO). However, non-reciprocal preferences

help non-LDCs promote foreign sales only if they are not members of the WTO.

As mentioned above, the EU preferential system is often the subject of studies assessing the

effectiveness of preferences. Cernat (2004) in his study focuses solely on the impacts of the Everything

but Arms on third developing countries and the LDCs. The study shows moderate trade gains from the

EBA initiative, with the largest gains being recorded for sub-Saharan Africa. Only a minor impact of the EU's GSP on the trade of beneficiary countries is also found by Cipollina et al. (2013), with preferences having

a significant impact only in some sectors such as ceramics and glassware, textiles and footwear and for

specific exporters. So far, studies showing the significant impact of EU preferences on LDCs exports

are very scarce.

The effectiveness of the Generalized System of Preferences is questioned also by Herz and Wagner

(2011) who in their study draw attention to the short duration of effects. They state that the GSP tends

to foster developing countries' exports in the short-run but hampers them in the long-run. They also point

to the fact that the GSP granting countries are initially able to promote their exports, since the GSP

recipients import inputs mainly from the GSP granting country.

Gradeva and Martínez-Zarzoso (2010) on the example of the African, Caribbean and Pacific LDCs show

that eligibility for the EU´s Everything but Arms scheme alone does not contribute to the increase of the


exports of these countries, since no substantial improvements in their export performance can be found. They also address the issue of replacing development assistance with non-reciprocal

preferences, which they consider to be highly questionable.

Following the introduction of the AGOA by the United States in 2001, the attention of experts shifted

also in this direction. Unfortunately, in this case, too, there is prevalent empirical evidence suggesting

little or no significant impact. But of course, studies in favour of the AGOA can also be found. The

findings in the study of Kassa (2019) show that most of the eligible countries registered gains in exports

due to the African Growth and Opportunity Act. However, the gains were relatively unevenly

distributed, with exports of oil and other minerals making up the largest part of the growth in exports.

The study by Wamisho (2015) indicates that the AGOA trade preferences do not have a statistically

significant impact on sub-Saharan Africa´s agricultural exports. Fernandes (2018) finds the positive

impact of the AGOA on the exports of the least developed countries in Africa. He shows, on a sample of African countries' exports to the US at the HS 6-digit level over 26 years, that the biggest boost from the AGOA to African countries' exports was for apparel products.

Since China's preferential system has been provided for the shortest time of the three preferential

systems in question, there are very few studies examining it. Here we can mention the study of Minson

(2007) who examines its potential and weaknesses.

There are studies in which the preferential systems are not only evaluated but also compared to each

other. Coulibaly (2017) examines the impacts of the AGOA and the EBA on the LDCs located in Africa

over the period 2001-2015. Although he finds positive impacts of these preferential schemes, he also

states that not all African countries have benefited from them, such as some West African countries.

Klasen´s (2016) study assesses the impact of specific preference regimes of different economies on the

exports of LDCs. Out of the nine different preferential systems examined, a positive and significant

impact on exports has been proven only in the case of GSP granted by Canada, Australia, and the

European Union.

Methodology and Data

The final groups of the LDCs were formed based on the results of two different cluster analyses. Cluster analysis is a multivariate method whose purpose, as explained by Bijnen (1973), is "to group and distinguish comparable units, and separate them from differing units." Cluster analysis aims to classify objects based on given variables into several groups or, as Sinharay (2010) put it, to group similar observations into a number of clusters based on the observed values of several variables for each individual. The resulting clusters are defined through an analysis of the given data, where the similarity of the cases within clusters and the dissimilarity between groups are maximized.

The methods of cluster analysis can be divided into two main groups: hierarchical methods and non-

hierarchical methods. The algorithms in hierarchical clustering are based on joining the two most similar objects in a cluster. Such a process is computationally demanding because all objects must be compared

before every clustering step. In contrast to non-hierarchical methods, the hierarchical clustering creates

a hierarchy of clusters and does not require specifying the number of clusters before carrying out the

analysis. In our case, we had not determined the exact number of clusters we required, therefore we

could work with hierarchical methods. Furthermore, the results of hierarchical clustering can be easily

visualized by a two-dimensional graph called a dendrogram.

Hierarchical methods can be further classified as agglomerative or divisive methods. In this paper, we

use agglomerative hierarchical clustering. Although we can find a number of different agglomerative

hierarchical clustering techniques, they are all based on one single approach. Agglomerative hierarchical clustering is an iterative, multi-step classification method. At the beginning of each agglomerative hierarchical analysis, all objects in the analysis begin as separate clusters. In the first step, the dissimilarity between the N objects is calculated. Based on the rule of minimization of the agglomeration criterion, the first two objects are clustered together, thus creating a class comprising these two objects. Then


again using the agglomeration criterion, the dissimilarity between this cluster and the other, now N - 2, objects is calculated. The two objects or classes of objects for which the agglomeration criterion is minimal when merged are then clustered together. This process is then repeated, reducing the number of clusters in

every iteration. At the end of the process, we have only one cluster left in which all objects are included.

As mentioned above, the graphical output of hierarchical clustering is a dendrogram. A dendrogram is

a tree-shaped diagram displaying the clusters formed at each step of the algorithm together with their

similarity levels. With the help of a dendrogram, the optimal number of clusters is selected from all

possible cluster solutions. The dendrogram allows determining the level at which to cut the tree diagram to

generate a suitable number of groups.
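As an illustration of this iterative procedure (not the authors' implementation), the toy Python sketch below starts from singleton clusters and repeatedly merges the pair with the smallest merging cost, using the Ward criterion that is formalised further below; the data are random numbers used only to make the loop runnable.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))              # six toy objects described by two variables

def merging_cost(a, b):
    """Ward criterion: increase in the within-cluster sum of squares caused by merging a and b."""
    ma, mb = X[a].mean(axis=0), X[b].mean(axis=0)
    return len(a) * len(b) / (len(a) + len(b)) * np.sum((ma - mb) ** 2)

clusters = [[i] for i in range(len(X))]  # every object starts as its own cluster
while len(clusters) > 1:
    # find the pair of clusters whose merger minimises the agglomeration criterion
    pairs = [(i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))]
    i, j = min(pairs, key=lambda p: merging_cost(clusters[p[0]], clusters[p[1]]))
    print("merging", clusters[i], "and", clusters[j])
    clusters[i] = clusters[i] + clusters[j]
    del clusters[j]                      # one cluster fewer after every iteration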

To identify the changes in LDCs exports to three preference-granting economies in a given period 2000

- 2018, it was necessary to generate groups of countries in two different years, at the beginning and the

end of the period. The objects researched in our analysis are 41 least developed countries; the rest of the LDCs were excluded from the analysis due to a lack of up-to-date data.

The variables based on which the countries were divided into individual groups are the volumes of exports of LDCs to the EU, the United States, and China within individual product categories. The different

number of groups, different characteristics and mainly the change in the position of individual LDCs

within these groups allowed us to identify how the patterns of LDCs´ exports have changed since 2000.

These product categories are based on the Standard International Trade Classification (SITC) at the one-digit level.

These categories are:

• food, drinks and tobacco,

• raw materials,

• energy products,

• chemicals,

• manufactured goods classified chiefly by material,

• machinery and transport equipment,

• other manufactured goods.

The category of energy products was excluded from the analysis due to a lack of up-to-date data; therefore, we worked with six product categories. Three main export flows of LDCs were used in the analysis: exports

to the European Union, the United States of America and China. Each export flow was divided into six

product categories. This means that we had 18 variables based on which individual clusters of countries

were created. The first six variables are the volume of exports in each product category to the EU, the

next six variables are the volume of exports in each category to the USA and the last six variables are

the volume in each category exported to China. All export-related data were taken from UNCTADstat

– the statistical database of the United Nations Conference on Trade and Development.
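A minimal sketch of how such an observation matrix could be assembled is shown below; the input file layout and column names are assumptions made for illustration, since UNCTADstat exports can be arranged in several ways.

import pandas as pd

# Hypothetical long-format export from UNCTADstat: one row per
# (LDC, destination, product category, year) with the export value.
exports = pd.read_csv("unctadstat_ldc_exports.csv")

year = 2000                                        # repeated with 2018 for the second analysis
subset = exports[(exports["year"] == year) &
                 (exports["destination"].isin(["EU", "USA", "China"]))]

# 41 rows (LDCs) x 18 columns (3 destinations x 6 product categories).
matrix = subset.pivot_table(index="ldc",
                            columns=["destination", "product_category"],
                            values="export_value",
                            aggfunc="sum").fillna(0.0)
print(matrix.shape)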

As a linkage method for evaluating the similarity between clusters, we used Ward's method since this

method is most appropriate for quantitative variables. Ward´s method seeks to join the two clusters

whose merger leads to the smallest within-cluster sum of squares (Moral, 1980). Field (2000) describes

Ward´s method as follows: “The difference between each case within a cluster and that average

similarity is calculated and squared. The sum of squared deviations is used as a measure of error within

a cluster. A case is selected to enter the cluster if it is the case whose inclusion in the cluster produces

the least increase in the error.” Ward's method is calculated as

\Delta(A,B) = \sum_{i \in A \cup B} \lVert \vec{x}_i - \vec{m}_{A \cup B} \rVert^2 - \sum_{i \in A} \lVert \vec{x}_i - \vec{m}_A \rVert^2 - \sum_{i \in B} \lVert \vec{x}_i - \vec{m}_B \rVert^2 \qquad (1)

= \frac{n_A n_B}{n_A + n_B} \lVert \vec{m}_A - \vec{m}_B \rVert^2 \qquad (2)

where m_j denotes the centre of cluster j, n_j is the number of points in it, and Δ is the merging cost of combining clusters A and B.

As a distance measure, we used the squared Euclidean distance, which is the measure proposed for Ward's method and also the most common measure used in cluster analysis when working with interval data.

According to Sakthivel (2015) squared Euclidean distance is the sum of the squared differences between

scores for two cases on all variables calculated as

d(i,j) = \sum_{k=1}^{n} (X_{ik} - X_{jk})^2 \qquad (3)

where i = (X_{i1}, \dots, X_{in}) and j = (X_{j1}, \dots, X_{jn}) are two n-dimensional data objects.
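As a quick numerical illustration (not part of the original analysis), the following Python snippet checks on random toy data that expressions (1) and (2) above give the same merging cost:

import numpy as np

rng = np.random.default_rng(1)
A, B = rng.normal(size=(4, 3)), rng.normal(size=(6, 3))     # two toy clusters of points

def sse(points):
    """Sum of squared distances of the points from their own centroid."""
    return np.sum((points - points.mean(axis=0)) ** 2)

delta_1 = sse(np.vstack([A, B])) - sse(A) - sse(B)          # expression (1)
delta_2 = len(A) * len(B) / (len(A) + len(B)) * np.sum((A.mean(axis=0) - B.mean(axis=0)) ** 2)  # expression (2)
print(np.isclose(delta_1, delta_2))                         # True: both give the same cost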

Empirical Results

In the first cluster analysis, based on the export data of the least developed countries in 2000, six groups of countries were generated. Then, according to the results of the second cluster analysis based

on data from 2018, the countries were again divided into groups. This time, however, nine groups were

identified as the optimal number.

As can be seen in Table 1, five of the groups in 2018 had the same characteristics as groups from the year 2000. The individual pairs of these groups shared the same product category that contributed most to their exports and the same destination of these exports.

We can also see that in 2000 the European Union was the most important export market for all the LDCs except six countries whose exports were mainly focused on other manufactured goods going to the USA.

The exports of the LDCs in the 1st groups were strongly concentrated in both years. More than 50% of the total EU-US-China exports of these countries were made up of raw materials exported to the European Union. Moreover, when taking into account all product categories, almost 90% of the total exports of these

LDCs went to the European Union.

Both 2nd groups, in 2000 and 2018, comprise countries whose EU-US-China exports were dominated

mainly by machinery and transport equipment and miscellaneous manufactured articles directed to the

European Union. Therefore, we can say that these countries concentrated mainly on exports of high

value-added products.

Countries in the 3rd groups in 2000 and 2018 also focused on exports of products with higher added

value. The vast majority of their exports were made up of manufactured goods classified chiefly by

material and were directed to the European Union.

Both 4th groups included countries which exported predominantly food, drinks and tobacco to the

European Union.

Countries in the 5th groups were also strongly oriented towards export to the European Union. These

countries exported mainly products from two product categories, i.e. food, drinks and tobacco, and raw materials, most of which were exported to the European Union.

As we can see, the structure of these groups has changed significantly, and only eight countries remained in the same group in both years, i.e. Burkina Faso, Chad, Botswana, Burundi, Malawi, Senegal, Uganda, and Yemen, which means that their export structure in 2000 and 2018 was similar.

The 6th and last group in 2000 consists of countries whose EU-US-China exports were composed of more than fifty percent of miscellaneous manufactured articles destined for the USA. Based on the data from 2018, no similar group was generated. This means that in 2018 the largest share of

these countries' exports was made up of other products than miscellaneous manufactured articles or the

largest share of exports went to the EU or China instead of the United States. For example, in

Bangladesh and Cambodia, the largest share of exports in 2018 was again made up of miscellaneous

manufactured articles but exported to the European Union instead of the US. Therefore, we can say that

these two countries have shifted part of their exports to the European Union from the United States.


The other groups created in 2018 have no equivalent among the groups in 2000 and have very different

main features. This means that some countries have changed the focus of their exports and thus separated from their original groups from 2000, creating completely new ones in 2018.

These were mainly groups of countries whose largest share of exports went to China in 2018. This was

particularly the case for the eighth group in 2018, which included seven countries for which raw materials exported to China accounted for the largest share of exports. The same goes for the sixth group, which included countries whose EU-US-China exports were largely made up of manufactured goods from the sixth product category heading to China. It can be seen in Table 1 that the 6th group actually originated

from countries that shifted a significant part of their exports of manufactured goods from the European

Union to China. This means that in 2018 these countries concentrated their exports more to China instead

of the European Union.

The 7th group contained countries whose EU-US-China exports in 2018 consisted of more than 60% of

raw materials going to China. This group was of medium size and contained five countries. Although

this group of countries had the same product and geographic focus of the largest share of exports as the

8th group, these two groups are, in fact, different. When considering all product categories, the 8th group

exported in total most to the European Union but the 7th group exported most to China.

The last 9th group created in 2018 included countries that in 2000 originally exported the largest part of

their exports consisting mainly of food, beverages, tobacco and raw materials to the European Union.

In 2018, however, the largest share of these countries' exports was directed to the United States.

Table 1: Groups of LDCs created based on data from 2000 and 2018

Source: own creation

The group labels in Table 1 indicate the product group and the destination of the largest share of the LDCs' exports (0+1 food, drinks and tobacco; 2+4 raw materials; 6 manufactured goods classified chiefly by material; 8 other manufactured goods). For example, the label EU 2+4 marks LDCs whose largest share of exports were raw materials destined for the European Union.

2000 (six groups):
Group 1 (EU 2+4): Afghanistan, Benin, Burkina Faso, Chad, Guinea, Mali, Mauritania, Niger, Vanuatu
Group 2 (EU 8): Bhutan, Djibouti, Lao People's Dem. Rep., Madagascar, Sierra Leone
Group 3 (EU 6): Botswana, Central African Republic, Dem. Rep. of the Congo, Gambia, Zambia
Group 4 (EU 0+1): Burundi, Ethiopia, Malawi, Mozambique, Rwanda, Senegal, Timor-Leste, Uganda, United Republic of Tanzania
Group 5 (EU 0+1, 2+4): Eritrea, Kiribati, Samoa, Solomon Islands, Somalia, Togo, Yemen
Group 6 (USA 8): Bangladesh, Cambodia, Lesotho, Maldives, Myanmar, Nepal

2018 (nine groups):
Group 1 (EU 2+4): Burkina Faso, Chad, Somalia
Group 2 (EU 8): Bangladesh, Cambodia
Group 3 (EU 6): Bhutan, Botswana, Mozambique
Group 4 (EU 0+1): Burundi, Djibouti, Malawi, Maldives, Senegal, Uganda
Group 5 (EU 0+1, 2+4): Afghanistan, Ethiopia, Lesotho, Madagascar, Myanmar, Nepal, Rwanda, United Republic of Tanzania, Yemen
Group 6 (China 6): Dem. Rep. of the Congo, Zambia
Group 7 (China 2+4; total exports mostly to China): Eritrea, Gambia, Guinea, Lao People's Dem. Rep., Solomon Islands
Group 8 (China 2+4; total exports mostly to the EU): Benin, Central African Republic, Mali, Mauritania, Niger, Sierra Leone, Togo
Group 9 (USA 0+1): Kiribati, Samoa, Timor-Leste, Vanuatu

Conclusion

The paper aimed to identify how the introduction of preferential systems for the least developed countries by the European Union, the United States and later China influenced the structure and geographical focus of LDCs exports to these economies.


Countries have not seen any shift towards increasing the share of products with higher added value in

their exports. On the contrary, the number of countries whose largest part of exports consisted of these

products decreased compared to 2000.

The preferential schemes should help LDCs to better assert themselves on the world market and gradually expand their product portfolio so that they are not dependent on exports of primary commodities. Exporting high value-added products brings much more to LDC economies than exporting primary commodities. Nevertheless, LDC exports have not evolved in terms of product structure, and it is still mostly raw materials, agricultural products, food and beverages that are exported to the preference-granting economies. Therefore, we can say that the results do not suggest that the preferential schemes of the European Union, the United States and China for LDCs have contributed to a greater diversification of LDCs' exports or to an increase in the proportion of processing-intensive products in them.

However, there have been significant changes in the geographical focus of LDCs' exports to the European Union, the United States and China. In 2000, of the three preference-granting economies, the European Union was clearly the largest export market for LDCs. More precisely, the European Union took the largest share of the exports of 34 LDCs. In 2018, however, the EU held the largest share of exports in only 24 least developed countries. This finding is consistent with the fact that the European Union's share in world trade is gradually decreasing. The share of the United States in LDCs' exports has also decreased since 2000, but not as significantly as that of the European Union. While in 2000 China was not the largest export market for any LDC within EU-US-China exports, in 2018 twelve countries sent the largest share of these exports to China. Therefore, we can say that China's share in LDCs' exports has increased at the expense of the EU and the US.

However, in the light of the results, it is necessary to realize that China was already experiencing a period of rapid economic growth at the time of the introduction of its preferential scheme. It therefore cannot be ruled out that China has overtaken the EU and the US as the main export market for LDCs partly because of its increasing domestic demand, or because LDCs have begun to see it as a more promising trading partner for the years to come.

In conclusion, it should be noted that although the effects of the preferential systems of the European Union, the United States and China failed to meet the expectations of changes in the pattern of LDCs' exports, it cannot be excluded that they largely affected the volume of these exports. This, however, will be the subject of further research.

References

[1] AGOA. (2019). About African Growth and Opportunity Act. TRALAC. Retrieved from:

https://agoa.info/about-agoa.html

[2] Aiello, F. (2010). Evaluating the impact of nonreciprocal trade preferences using gravity models.

Applied Economics. Taylor & Francis Journals, 42(29), pp. 3745-3760.

[3] Aiello, F. (2010). Do Trade Preferential Agreements Enhance The Exports Of Developing

Countries? Evidence From The EU GSP, Working Papers 201002, Università della Calabria,

Dipartimento di Economia.

[4] Bijnen, E. J. (1973). Cluster Analysis: Survey and Evaluation of Techniques. ISBN 978-94-011-

6782-6

[5] Brenton, P. (2003). Integrating the Least Developed Countries into the World Trade System: The

Current Impact of EU Preferences Under Everything But Arms. Journal of World Trade, 37(3),

pp. 623-46.

[6] Cernat L. (2004). The EU Everything But Arms Initiative and the LDCs. In: Guha-Khasnobis B.

(eds) The WTO, Developing Countries and the Doha Development Agenda. Studies in

Development Economics and Policy. Palgrave Macmillan, London


[7] Cipollina, M. R. (2013). Do Preferential Trade Policies (Actually) Increase Exports? An analysis

of EU trade policies. Agricultural and Applied Economics Association.

[8] Coulibaly, S. (2017). Differentiated Impact of AGOA and EBA on Western African Countries.

Africa Chief Economist Office, the World Bank.

[9] Eurostat. (2019). Glossary: Standard international trade classification (SITC). Europa. Retrieved from:

https://ec.europa.eu/eurostat/statisticsexplained/index.php/Glossary:Standard_international_trade

_classification_(SITC)

[10] Fayissa, B. and Tadesse, B. (2008). The impact of African Growth and Opportunity Act

(AGOA) on U.S. imports from Sub-Saharan Africa. Journal of International Development. 20. pp.

920-941.

[11] Fernandes, A. M. (2019). Are Trade Preferences a Panacea? The African Growth and

Opportunity Act and African Exports. CESifo Working Paper No. 7672. Available at SSRN:

https://ssrn.com/abstract=3422254

[12] Field, A. (2000). Cluster Analysis. Aims and Objectives. Postgraduate Statistics: Cluster

Analysis. Retrieved from: http://www.discoveringstatistics.com/docs/cluster.pdf

[13] Gil-Pareja S., Llorca-Vivero R., Martínez-Serrano JA. (2019). Reciprocal vs nonreciprocal trade

agreements: Which have been best to promote exports? PLoS ONE 14(2): e0210446.

https://doi.org/10.1371/journal.pone.0210446

[14] Gradeva, K. and Martínez-Zarzoso, I. (2010). The Role of the Everything But Arms Trade

Preferences Regime in the EU Development Strategy. Research Committee Development

Economics, Proceedings of the German Development Economics Conference, Hannover 2010.

[15] Grossman, G. M. and Helpman, E. (2015). Globalization and growth. American Economic

Review, 105(5):100– 104.

[16] Herz, B. and Wagner, M. (2011). The Dark Side of the Generalized System of Preferences.

Review of International Economics, 19. pp. 763-775. doi:10.1111/j.1467-9396.2011.00980.x

[17] Kassa, W. and Coulibaly, S. (2019). Revisiting the Trade Impact of the African Growth and

Opportunity Act: A Synthetic Control Approach. World Bank Working Paper. 1.

[18] Klasen, S. (2016). Trade preferences for least developed countries. are they effective?

preliminary econometric evidence. United Nations. CDP Policy Review No. 4

[19] Minson A. (2007). Will Chinese Trade Preferences Aid African LDCs? Trade Policy Report

No. 19. Johannesburg: South African Institute of International Affairs.

[20] Moral Del, R. (1980). On Selecting Indirect Ordination Methods. Plant Ecology - PLANT

ECOL (Vegetatio). 42. pp.75-84. 10.1007/BF00048873.

[21] Ornelas, E. and Ritel, M. (2018), The not-so-generalized effects of the Generalized System of

Preferences, CEPR Discussion Paper 13208.

[22] Persson, M. and Wilhelmsson, F. (2013). EU Trade Preferences and Export Diversification. Working Paper Series 991, Research Institute of Industrial Economics.

[23] Sakthivel, E. (2015). Clustering Algorithms using Different Distance Measures. Retrieved from:

https://shodhganga.inflibnet.ac.in/bitstream/10603/90817/12/12_chapter8.pdf

[24] Sinharay, S. (2010) An Overview of Statistics in Education. In: Peterson, P., et al., Eds.,

International Encyclopedia of Education, 3rd Edition, Elsevier Ltd., Amsterdam, pp.1-11.

[25] Thelle, M. (2015). Assessment of Economic Benefits Generated by the EU Trade Regimes

Towards Developing Countries. European Union, 2015. In: Belgium. ISBN: 978-92-79-48088-1


[26] Wamisho, K. (2015). The impact of the african growth and opportunity act (AGOA): An

empirical analysis of sub-saharan african agricultural exports to the United States. Journal of

International Agricultural Trade and Development. 9 (2).


IMPLEMENTATION OF INDUSTRY 4.0: A RESEARCH BASED ON THE EFFECTIVE

TRAINING OF HRM

Meri Duduci1

1Faculty of Management and Economics – Tomas Bata University in Zlin

Masaryka 5555, 760 01, Zlin Czech Republic

[email protected]

Abstract

Industry 4.0 is considered the new innovation and way of doing business, driven by the digitization of the manufacturing sector. It has brought several changes to a company's processes, and these changes are tightly connected to the Human Resources department. This research aims to identify the importance of effective training by analysing the factors that have a crucial impact on the implementation of Industry 4.0. The research follows a quantitative methodological approach, using an online questionnaire as the mechanism to gather data. The results are valuable for HRM and company managers in understanding the relevance of employee training for successfully implementing Industry 4.0 in a company.

Keywords

Industry 4.0, employees, training, influential factors, questionnaire, HRM.

JEL Classification

M21, M54, N30, O31.

Introduction

Industry 4.0 has brought numerous changes to the way of doing business, especially when it comes to the internal management approach. Training has been one of the crucial points of this innovation. Training means not only having employees with the prerequisite and fundamental skills to perform a job, but also advancing the set of skills required in times of change (Armstrong, 2006).

In Industry 4.0 it is not only about employees gaining new skills in order to stay up to date in their working positions, but especially about preparing themselves professionally to be able to hold their ground against the hard competition they face from robotics (Arnold, 2016).

With the digitalization and automation of working processes, it is crucial for the current labor force of an enterprise to equip itself with a set of abilities that allow it to sustain and withstand in its current working position. It is critical and very important for the Human Resources Department to select the labor force cautiously. Once this set of skills is recognised by HR, it becomes attainable, through effective direction, to teach the workers to follow their path in this advanced and contemporary working system.

It is the Human Resources Department that plays the essential and decisive role of designing the training towards which the company will be oriented (Armstrong, 2006). The main objective of the whole training process is to guide the workforce in their everyday work and to achieve maximum production in a favourable environment, by eliminating the apprehension and nervousness that might affect them (Arnold, 2016).

The training process selected by the HRM should match the strategic objectives of the company. It is crucial for all enterprises, SMEs and big companies alike, to focus on and concentrate upon the correct type of training in order for Industry 4.0 to be


implemented smoothly, avoiding in this way the complications and different disputes that might surface during this process.

Historical Background

The Industrial Revolutions are regarded as the ground of modernization, alteration and revolution in the economic world. The economic revolution that commenced in England in 1760 (Arnold, 2016) expanded to other European countries. Prior to the Industrial Revolution, the economic base was focused on agriculture and animal husbandry. The invention of the steam engine led to the mechanization of agricultural production (Baena et al., 2016).

The mechanization process induced the growth of production. The capital in use increased production through the usage of new machinery and the creation of big companies in the economic environment of countries (Baena et al., 2016). As a chain effect, this process caused the creation of new working possibilities and job positions, together with a noticeable growth of population. As a final result, the living standard improved through the growth and recovery of the economic situation of the countries (Armstrong, 2006).

The First Industrial Revolution spans from the beginning of the 1760s through the 1830s. In this revolution, production developed from physical strength to machinery usage. The main shift in this process was the usage of coal and steam in place of wood, which increased the power of the machines. This mechanization process caused the formation of big factories, displacing the small family companies and small enterprises (Armstrong, 2006).

The First Industrial Revolution commenced and first set its roots in England, spreading rapidly throughout Europe and America (Arnold, 2016). The usage of steam, iron and coal was considered crucial as a result of railway development. This set the path to a whole new innovation and modernization of economic life thanks to the facilitation of movement, not only of people but of trade as well, establishing and developing the economic situation of the countries involved. The changes brought by the revolutions affected not only the economic sector but also, to a very high degree, the social environment. The lifespan was extended and the population increased. The quality of everyday life improved owing to the alleviated circumstances of mechanization (Baena et al., 2016).

The Second Industrial Revolution covers the period between 1840 and 1870. Its foundations started with Henry Ford's mass production. The Second Industrial Revolution emerged with changes in basic raw materials and energy sources, using steel, petroleum and also chemical elements in the production process (Armstrong, 2006). The usage of different raw materials played a key role in the production system and, as in the First Industrial Revolution, railway transport was deeply improved. An easier transportation system, allowing products to reach more distant markets, and the convergence of communication systems shaped the transmission process, together with the development of electric technology (Faller, 2015). In the Second Industrial Revolution, electric technology was developed and started to be used in production lines (Baena et al., 2016). This made it possible to develop the machines, increase the amount of production and arrive at the concept of mass production (Armstrong, 2006). The main actors of the Second Industrial Revolution were England, Germany, the USA and Japan. This revolution is defined as the massification of production.

During the first half of the 20th century, the innovation and modernization of technology abated and was downsized due to the political issues happening in the world. The economic crisis was also deeply felt in the alteration and novelty of industrialization. Around the year 1970 (Faller, 2015), the Third Industrial Revolution was initiated by programmable technologies, switching in this way to the well-known and major turn towards digital technology. The production process was thus highly affected by computers, new machines and the colossal innovation of using renewable energy (Arnold, 2016). This led to a powerful alteration of the production and manufacturing processes of the time.

And finally, the last industrial revolution is the fourth one, also known as "The Internet of Things". This innovation is considered the most major one because the whole production system is covered by the digitization of the working system and data analysis (Baena et al., 2016). The leading edge of this new revolution is the usage and functioning of machines without the need for human force. The first steps into 4.0 were taken in Germany in 2011 (Faller, 2015), and during the whole process


Germany was the leading country in spreading and extending this new way of doing business throughout the whole world.

This colossal change will cause the adjustment and development of different working departments of an enterprise. This is why, even though it has been widely embraced by different countries of the world, there are still adjustments to be made, and as of 2020 investments are still being made in this new industrial revolution. The creation of Industry 4.0 signifies the establishment of new businesses and new working positions. With the change of the production system, the possibility to generate new products and working processes was also created. The transformations have affected not only products but also the economic, technological and social spheres (Faller, 2015).

Literature Review

The preeminent component that differentiates a successful company from a less successful one is the Human Resource Department (Arnold, 2016). The investment devoted to the preparation of the labor force is crucial in order to boost and stimulate the human capital to accomplish the objectives and goals of an enterprise. The management should make sure that the training and preparation program is correctly put into action in a prosperous way, so as to obtain the maximum return from it.

The implementation of Industry 4.0 has caused numerous changes in the working processes of the business world because of the different technology used to process and manufacture products. The change of technology, with robotics comprising and involving more processes, has caused a change of the working system overall (Çekmecelioğlu, 2013). The activities related to the labor force have been broadened, but are still considered slow compared to the overall development that the working mechanism has undergone.

As mentioned previously in this research paper, this revolution has been considered the most impactful one in the economic world (Arnold, 2016). The high level of automation has reduced the demand for a large workforce. Companies throughout the world are facing the problem of adapting employees to the advanced and contemporary developments. It is crucial to state that this is not a time to withdraw the employees and replace them with technology; on the contrary, the managerial approach has changed in a way that prioritises the workforce by preparing it for the change (Faller, 2015). Conversely, this is a change that employees should recognise and appraise as a favourable opportunity to develop and enhance themselves professionally.

According to a study by Rhisiart (Rhisiart et al., 2014), the business environment has been changed and shaped by three main factors: the first is artificial intelligence and robotics, the second is the continuous and repeated updating and improvement of internet services, and the third is the time and location where the training takes place.

Working positions are changing in a direction where around 65 % of children now in primary school will in the future work in positions that have not yet been created (WEF, 2016). Several factors affect the evolution of working positions, such as cloud-based technology, big data analysis, advanced robotics, artificial intelligence and many other developments in technology that have deeply changed the way the economy works. These alterations deeply affect everyday life, in different parts of the economic and social spheres, through the growth and progress of enterprises but also through the creation of new categories and varieties of jobs (Rhisiart et al., 2014).

The creation of new working positions will lead to the need for new sets of skills, in new working positions as well as in existing ones. The management approach will also be crucial for coordinating the employees with the new methods and techniques of Industry 4.0.

Industry 4.0 will deeply alter the everyday life of organizations and especially societies, by modifying the manners and habits of economic and business life as well. This modernization will have to be balanced and sustained alongside societal changes and the natures of companies. It is imperative to keep pace with the changes of technology and all the innovations of Industry 4.0, because enterprises that cannot catch up with the change will face the risk of economically vanishing and fading with time.


These numerous changes, which have come into existence in the cultures of different countries, enterprises and employees, will affect human resources above all, and in radical ways. Industry 4.0 will alter and innovate the whole production process and will also profoundly affect the mechanisms of marketing and distribution. At the center of all these innovations will be the preparation of employees with the correct methods, developing their abilities in step with the modernization brought by the new revolution.

Analysis of the influential factors

There are different factors that influence the correct and successful implementation of Industry 4.0 in enterprises. Several challenges affect not only small and medium enterprises but also big companies when a new way of doing business is implemented, as it signifies a new approach and new methods of management.

In a study conducted by Mrugalska and Wyrwicka (2017), it emerged that factors such as the "Working Environment" deeply affect the correct implementation of Industry 4.0, not only in SMEs but also in big companies. According to their study, the high level of technology will strongly affect the working method, which will be driven more towards individual processes. A new approach to working hours is expected in the near future, changing and developing towards a more productive one (Mrugalska and Wyrwicka, 2017).

The research goes on to stress another important factor, considered one of the most influential for the fast and precise adaptation of business culture to the new and innovative processes of 4.0: the correct training of employees and management (Mrugalska and Wyrwicka, 2017). As one of the most decisive and imperative elements, it is tightly connected with having a qualified and prepared workforce that can be taught how to use the machines. The education system will also face an updated version of itself in order to prepare the new labor force at an earlier stage, making possible a faster adaptation to the new requirements of the various enterprises updated to 4.0.

Another study, conducted by Torna and Vaneker (2019), whose main aim is to analyse the crucial factors that determine the profitable and prosperous implementation of Industry 4.0 in enterprises, identifies Data-Based Management as the key factor (Torna and Vaneker, 2019).

Data-Based Management is the most decisive and essential decision-making process undertaken by enterprises in relation to employees (Torna and Vaneker, 2019). It is crucial to take into account the data and important information on the labor force that is necessary for the company's decision making. These data are meaningful and imperative for future decisions and for any type of future organization and development of the labor force system (Lee, 2014). Feedback from the labor force is crucial, as the real force and strong point of each company lies in its employees, and the updates and improvements made by them are considered the weightiest ones (Torna and Vaneker, 2019).

In another study, Meissner (2017) analyzes in depth Performance Management as another key factor that influences effective training towards the correct implementation of Industry 4.0. The approach that the management embraces will deeply shape the business life and environment of an enterprise. The management approach is switching from the classical method to an approach based on a young labor force motivated by rewards.

The key to success in an economy that is moving fast towards change is the manager's role, which should be that of an empowering and inspiring leader (Meissner, 2017): a manager who faces the future and embraces change and innovation. Nowadays it is crucial that the manager and the whole human resources department are trained through a digital system in order to keep track of the data, so as to develop quickly and preserve a leading role in the market.

Based on the literature review, the following hypotheses will be analyzed in this paper using a quantitative method:

H1: The working environment is highly connected with the correct implementation of Industry 4.0 in a company.
H2: Training of employees is positively correlated with the optimal implementation of Industry 4.0 in a company.
H3: Implementing appropriate Data-Based Management in a company leads to a successful implementation of Industry 4.0 in the company's processes.
H4: A well-coordinated performance management affirmatively complements the correct and effective implementation of Industry 4.0 in an enterprise.


Methodology and Data

In this paper the methodology used is a mix of approaches, combining a qualitative and a quantitative approach to verify the hypotheses covered by this research. The qualitative approach involves a comprehensive analysis of the literature regarding Industry 4.0. The analysis of previous research is crucial in order to examine the hypotheses and objectives of this research in the light of previous studies of the topic. The literature sources used come both from industry and from the academic field. In the academic sphere, the articles used are from journals indexed in Web of Science and Scopus, while the analyses from industry relate to companies that are leaders in the market, such as Siemens, Amazon, etc. All articles were selected by attentively analysing key words similar to those of this research. The literature analysis was essential for developing the concept of training related to the implementation of Industry 4.0.

The other approach used to analyse and confirm the hypotheses is quantitative research. The data were collected over a two-month period through an online questionnaire created in Google Forms. The questionnaire was sent to enterprises based in different countries. The respondents were at the managerial level, in order to have full knowledge of the Industry 4.0 concept. A total of 200 answers was collected, all in English, which was not considered a barrier due to the previous study of, and communication with, the companies. The data were analysed through a confirmatory factor analysis to study whether the proposed factors were consistent with the model created. This analysis is crucial for showing, through a ranking, the importance of the factors that affect the implementation of Industry 4.0, and where the training factor stands in relation to the level of importance that the literature review suggests it has.
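Before a confirmatory factor analysis is run, the questionnaire items for each construct are usually checked for internal consistency. The following is a minimal sketch of such a check, assuming hypothetical 5-point Likert items train_1 to train_3 for the Training of Employees construct; the item names and generated responses are illustrative only and are not the study's data or processing pipeline.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of questionnaire items (one column per item)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses for three "Training of Employees" items.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=200)
df = pd.DataFrame({f"train_{i}": np.clip(base + rng.integers(-1, 2, size=200), 1, 5)
                   for i in (1, 2, 3)})

print(round(cronbach_alpha(df), 3))   # values above roughly 0.7 are usually considered acceptable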

Empirical Results

The four hypotheses were analyzed through Confirmatory Factor Analysis of all the data gathered by the questionnaire. This analysis makes it possible to rank them by level of importance.

(H1) Working Environment - (env)
(H2) Training of Employees - (train)
(H3) Data Based Management - (dbm)
(H4) Performance Management - (perf)


Figure 1: Result of statistical analysis of the proposed conceptual framework on Industry 4.0 (path diagram linking the constructs env, train, dbm and perf to Industry 4.0 implementation)

After the statistical analysis, the results obtained make it possible to assess the validity of the hypotheses and to determine which of the factors has the greatest impact on the successful implementation of Industry 4.0. In the literature review the main factor was considered to be the training of employees, and it is now possible to check this statistically.

Of all the factors that were researched and analysed, only one factor, or construct in statistical terms, meets the criteria: Training of Employees. The elevated relevance and preponderance of the employee training process in the correct and successful implementation of Industry 4.0 is thus confirmed and statistically demonstrated through both the quantitative and the qualitative approach.

Below is the ranking of the factors, and thus of the hypotheses, from the most to the least important, obtained through the statistical review:

H2: Training of Employees (train)
H3: Data Based Management (dbm)
H4: Performance Management (perf)
H1: Working Environment (env)

Table 1: Quality Criteria

Construct   AVE      Composite Reliability   R Square   Cronbach's Alpha   Communality   Redundancy
Env         0.9152   0.9864                  0          0.9186             0.9142        0
Train       0.9096   0.9697                  0          0.9615             0.9124        0
Dbm         0.8908   0.9542                  0          0.9684             0.7541        0
Perf        0.886    0.9621                  0          0.9514             0.862         0
4.0 imp     0.954    0.99845                 0.8112     0.9845             0.962         0.1358

With regard to this table of quality criteria, it is possible to examine the results in more detail. All factors show their importance for the successful implementation of Industry 4.0.
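The AVE and composite reliability columns of Table 1 follow the usual formulas based on standardized factor loadings. A minimal sketch with hypothetical loadings for the Training construct (the actual loadings are not reported here, so the numbers below are assumptions for illustration):

import numpy as np

def ave(loadings):
    """Average Variance Extracted: mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    """Composite reliability: (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    errors = 1.0 - lam ** 2
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum()))

# Hypothetical standardized loadings of three Training items on their construct.
train_loadings = [0.95, 0.96, 0.95]
print(round(ave(train_loadings), 4), round(composite_reliability(train_loadings), 4))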



With reference to the goodness-of-fit R-square, only the Implementation of Industry 4.0 construct shows a valid value, which expresses how this construct depends on the hypothesised factors.

The Cronbach's alpha results make it evident that all the factors and hypotheses are internally consistent. Regarding the Communality results, it is crucial that they be higher than 0.5, as a demonstration of the weight the factors carry in the overall outcome of implementing the 4.0 processes. The communality values of the variables are all higher than 0.5, which indicates that the validity of the model is very good and that there is no problem with this specific test.

The Redundancy test, as with the R-square results, confirms that only the 4.0 implementation factor is significant, compared to the other constructs, which show a value of 0.

Table 2: Total Effect

Path               Original Sample   Sample Mean   Standard Deviation   Standard Error   T Statistics
env -> 4.0 imp     0.087             0.02418       0.155                0.164            0.1524
train -> 4.0 imp   0.5841            0.5879        0.1405               0.1421           4.5018
dbm -> 4.0 imp     0.1124            0.11          0.1254               0.1876           0.7652
perf -> 4.0 imp    0.1198            0.1012        0.1248               0.1431           0.7233

Table 2 shows the generalized total effects. The different tests conducted in this table once again confirm that the training of employees is more relevant than the other factors. The tests show a low standard error and a normal value of the T statistics; among the four factors there is a low standard error and an average T statistic, and the factors show a low correlation and dependency on one another.
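The columns of Table 2 (original-sample coefficient, bootstrap mean, standard deviation/error and T statistic) correspond to the usual bootstrap assessment of path coefficients. Below is a minimal sketch of that logic on synthetic construct scores, using ordinary least squares as a stand-in for the estimation actually used in the paper; all numbers are made up for illustration only.

import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical standardized construct scores (env, train, dbm, perf) and an outcome score
# for Industry 4.0 implementation; the relationship below is invented for illustration.
X = rng.normal(size=(n, 4))
y = 0.58 * X[:, 1] + 0.11 * X[:, 2] + rng.normal(scale=0.8, size=n)

def path_coefs(X, y):
    """OLS path coefficients of the outcome on the four construct scores (intercept dropped)."""
    Xc = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(Xc, y, rcond=None)[0][1:]

original = path_coefs(X, y)
idx_sets = (rng.integers(0, n, size=n) for _ in range(500))            # bootstrap resamples
boot = np.array([path_coefs(X[idx], y[idx]) for idx in idx_sets])

se = boot.std(axis=0, ddof=1)
t_stats = original / se        # the "train" path should stand out, as it does in Table 2
print(np.round(original, 3), np.round(t_stats, 2))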

Conclusion

Through this research paper it was possible to analyse, with different methods and a complex analysis, the factors that have a high impact and a crucial importance for the correct implementation of Industry 4.0 in a company.

Because Industry 4.0 is still a relatively new term in the economic (Vaneker, 2019), technological and especially the social world, it still requires a lot of research to fill gaps in knowledge, and previous research and papers are very helpful in enriching our understanding of this new way of doing business.

The literature review made it possible to understand that the process of training employees is the key factor for a successful implementation of Industry 4.0, and the quantitative part of the research further confirmed the same result. Through the questionnaire it was possible to identify the rank and importance of the different factors in the implementation of the new industry (Vaneker, 2019). The fact that the questionnaire was distributed to managerial departments in different countries of the world helped to gather more, and better qualified, data when analyzing Industry 4.0 and the factors that have affected it over time.

Through the Confirmatory Factor Analysis, carried out with a statistical program, it was possible to analyze the data, and the results confirmed the literature review: the most important factor when switching to this new innovation is the training process of employees.

Companies should attentively analyze the importance of the training process and meticulously enrich it with all the adequate elements, in order to prepare the labor force to adapt and profitably switch to this new way of doing business, the innovative Industry 4.0 (Lee, 2014).


The labor force has always been considered the crucial point of all enterprises, and this new revolution has proved it once again: even though it is a revolution based on robotics, a prepared labor force is still crucial to keeping the company on its path to success.

Acknowledgement

This research was supported by Tomas Bata University.

References

[1] Armstrong, M. (2006). A handbook of human resource management practice (10th ed). London,

Philadelphia: Kogan Page.

[2] Arnold, C., Kiel, D., & Voigt, K. I. (2016). How the industrial internet of things changes business

models in different manufacturing industries. International Journal of Innovation Management, 20(08),

1640015. https://doi.org/10.1142/S1363919616400156

[3] Baena, F., Guarin, A., Mora, J., Sauza, J., & Retat, S. (2017). Learning Factory: The Path to Industry

4.0. Procedia Manufacturing, 9, 73–80. https://doi.org/10.1016/j.promfg.2017.04.022

[4] Çekmecelioğlu, H. G., & Günsel, A. (2013). The effects of individual creativity and organizational

climate on firm innovativeness. Procedia - Social and Behavioral Sciences, 99, 257-264.

https://doi.org/10.1016/j.sbspro.2013.10.493

[5] Faller, C., & Feldmúller, D. (2015). Industry 4.0 learning factory for regional SMEs. In Procedia

CIRP (Vol. 32, pp. 88–91). https://doi.org/10.1016/j.procir.2015.02.117

[6] Hecklau, F., Galeitzke, M., Flachs, S., & Kohl, H. (2016). Holistic Approach for Human Resource

Management in Industry 4.0. Procedia CIRP, 54, 1-6. https://doi.org/10.1016/j.procir.2016.05.102

[7] Lee, J., Kao, H. A., & Yang, S. (2014). Service innovation and smart analytics for Industry 4.0 and

big data environment. In Procedia CIRP (Vol. 16, pp. 3–8). https://doi.org/10.1016/j.procir.2014.02.001

[8] Meissner, H., Ilsen, R., & Aurich, J. C. (2017). ScienceDirect Analysis of control architectures in

the context of Industry 4.0. Procedia CIRP, 62, 165–169. https://doi.org/10.1016/j.procir.2016.06.113

[9] Mrugalska, B., & Wyrwicka, M. K. (2017). Towards Lean Production in Industry 4.0. In Procedia

Engineering (Vol. 182, pp. 466–473). https://doi.org/10.1016/j.proeng.2017.03.135

[10] PWC (2016). Industry 4.0: Building the Digital Enterprise, retrieved 27.06.2018, from https://www.pwc.com/gx/en/industries/industries-4.0/landing-page/industry-4.0-building-your-digital-enterprise-april-2016.pdf.

[11] Qin, J., Liu, Y., & Grosvenor, R. (2016). A categorical framework of manufacturing for Industry

4.0 and beyond. Procedia CIRP, 52, 173-178. https://doi.org/10.1016/j.procir.2016.08.005

[12] Rennung, F., Luminosu, C. T., & Draghici, A. (2016). Service provision in the framework of

Industry 4.0. Procedia - Social and Behavioral Sciences, 221, 372-377.

https://doi.org/10.1016/j.sbspro.2016.05.127

[13] Rhisiart M., Glover P., Beck H., (2014). The Future of Work Jobs and Skills in 2030, retrieved

23.05.2017, from www.ukces.org.uk/thefutureofwork

[14] Schwab, K. (2015). The Fourth Industrial Revolution: What It Means and How to Respond, retrieved 23.05.2017, from https://www.foreignaffairs.com/articles/2015-12-12/fourthindustrialrevolution.

[15] Torna, I.A.R., Vaneker, T.H.J., (2019). Mass Personalization with Industry 4.0 by SMEs: a Concept

for Collaborative Networks. International Conference on Changeable, Agile, Reconfigurable and Virtual

Production, Procedia Manufacturing 28, 135–141, DOI: 10.1016/j.promfg.2018.12.022


[16] WEF (2016). The Future of Jobs. World Economic Forum. Geneva, 2016.

[17] www.innovarobotik.com: Yatay, dikey entegrasyon [Horizontal, vertical integration], retrieved 05.24.2017, from https://www.innovarobotik.com/yatay-dikey-entegrasyon.


EVALUATION OF THE EFFICIENCY OF THE SYSTEM OF SELECTED RESIDENTIAL

SOCIAL SERVICES FOR SENIORS IN THE CZECH REPUBLIC

Izabela Ertingerová1

1Department of Public Economics, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

The contribution deals with the evaluation of the efficiency of the system of selected residential social services for seniors in the Czech Republic for the period 2006-2018. The efficiency of residential social services for seniors is evaluated by means of input- and output-oriented basic CCR models of the Data Envelopment Analysis method, from the perspective of seven selected key aggregated annual parameters. The input parameters are the total number of employees in selected residential social facilities, the number of employees in direct care, the size of bed capacity, the number of residential social facilities and the total expenditure. The output parameters are the total income and the number of clients using selected residential social services. The monitored parameters were analyzed within three models: Model A, Model B and Model C. The efficiency results from the input- and output-oriented basic CCR models show that the system of supply of residential social services for seniors cannot be considered 100 % efficient. In particular, ensuring the availability of bed capacity and direct care workers (Model B) is considerably inefficient in relation to total income.

Keywords

Residential social services, seniors, efficiency, Data Envelopment Analysis, Czech Republic.

JEL Classification

C67, J14

Introduction

The need for long-term care has in the last decade been seen, not only in the Czech Republic but also in other European countries, as a new social risk that needs to be addressed. All member states of the European Union strive to ensure access to social care for seniors in all forms - outpatient, field and residential services (Greve, 2016). Generally, the largest volume of social care in the Czech Republic is provided to persons of post-productive age in their home social environment in the form of nursing services. However, due to the increasing number of people with dementia and the minimal willingness of families to look after their relatives, there has been a significant increase in demand for seniors' social services, or more precisely for placement in residential social facilities, in recent years. These are mainly residential social services in the form of homes for the elderly or homes for the elderly with a special regime (Hrozenská, Dvořáčková, 2013).

According to the available data from the Register of Social Service Providers (2019), the number of social services facilities for the elderly has increased since 2006 from 541 to 866 (as of 2018), i.e. by 325 facilities. Given the demographic aging of the population and the published projections of demographic trends over the coming decades, the area of social services for the elderly faces high pressure to ensure universal access to social services of the required level and quality.

The requirement for providing a certain minimum level of quality of social care and the development of

a system of residential social services for seniors is determined by the economic and legal environment.

An important role here is played not only by assessing and evaluating the efficiency of the activities and

processes of individual residential facilities, but also by managing financial resources, the majority of

which is paid to individual providers of residential social services for seniors from public budgets. At

the same time, individual providers of residential social services for seniors emphasize the need to cope

with rising operating costs, which are growing every year. Increased efficiency can be achieved by


reducing individual costs, which has a positive impact on the activities of all residential social facilities

in the country. (Víšek, Průša, 2012; Greve, 2016; Horecký, 2010)

The aim of the paper is to evaluate the efficiency of the system of selected residential social services for

seniors (retirement homes and homes for people with special regime) in the Czech Republic and to

describe its trend for the period 2006-2018.

Two hypotheses have been defined in relation to the stated objective:

- H1: “In all monitored years, the system of supply of selected residential social services for seniors reaches a value of at least 0.85 within the three models”;
- H2: “The year 2018 shows, in Models A, B and C, both input- and output-oriented, the full value of efficiency, i.e. 1”.

The mathematical method of Data Envelopment Analysis was chosen to achieve this goal, on which the

subsequent comparative analysis of each Model A, B and C was based. Furthermore, the Super

Efficiency DEA models were used, which can then be used to organize a set of resulting efficient units

and determine which one is the best.

Literature Review

A significant transformation of the social services system in the Czech Republic took place in 1989 as a reaction to the new conditions of the changing social order. The reform of the social services system, which was based primarily on the concept of a safety net, began in the early 1990s and lasted for several years. The key moment came only on January 1, 2007, when the new legislative regulation of the social services sector came into force - the Social Services Act No. 108/2006 Coll., as amended, which comprehensively and independently regulates the entire system of social services (Krebs et al., 2015).

At the same time, significant economic, social and demographic changes have occurred since 2007, which have had a direct impact not only on the development of the system of supplying social services for seniors, but also on their overall cost. These included, for example, the development of civilization diseases, the worsened health of seniors, increased life expectancy, improvement of the level of medical diagnostics, etc. (Matoušek, 2011; Horecký, 2010).

The system of financing social services also underwent a change in 2007, in the form of a greater emphasis on multi-source funding and the introduction of a new social benefit - the care allowance, which was transformed from the social security benefits paid until 2006. The aim was to strengthen the ability to ensure the optimal form of meeting the needs of people in difficult life situations and to significantly increase the emphasis on the efficiency of the whole social system. The requirement to assess efficiency stems from the need to make appropriate use of the available resources, which are limited. It was assumed that the introduction of the new funding system, similarly to foreign experience, would reduce the demand for placement in residential social facilities, due to the increasing use of field services (Průša, 2008).

The topic of residential social services for seniors, especially addressing the accessibility of these

services, is a very topical theoretical problem, which is given a significant space in both domestic and

foreign studies. Cornea (2017) analyzed the different types of social services for the elderly and points

out the responsibility of public authorities to ensure the availability of these services. Proenca, Proenca

and Costa (2018) define the main factors - social service providers, sources of funding and activities

that have an impact on the emergence and development of a system of social services that is provided

not only by public but also by private, for-profit entities. Langhamrová, Šimková and Sixta (2018) examined the national economic costs of meeting the need to provide social services in nursing homes.

The research shows that the system of offering social services for seniors with a residential character is

different in individual regions of the Czech Republic. At the same time, they state that the amount of

public funds paid to ensure the activities of individual providers of residential social services for seniors

is insufficient.


Scientific approaches that assess the efficiency of social services are applied in the Czech Republic only sporadically, for example by Průša (2007), Průša (2008) and Horecký (2012). Matoušek (2011) also points out

that the Standards of Quality of Social Services contained in the implementing decree to the Act on

Social Services do not deal with the issue of evaluating the effect of the service.

The Data Envelopment Analysis (DEA) method is considered a suitable non-parametric method for

evaluating efficiency because of its ability to handle multiple input and output variables. Modeling and

evaluation of the efficiency of the system of residential social facilities for seniors according to the DEA

method is of interest to many authors, especially from abroad. From American studies, mention may be

made of publications from the 1990s, such as Nyman et al. (1990), Fizel, Nunnikhoven (1992) or

Kleinsorge, Karney (1992), whose analyses were aimed at comparing the efficiency of the system between for-profit and non-profit organizations providing residential social services for seniors. Ozcan et al. (1998) used the DEA method to measure the efficiency of the offer of services for seniors, more precisely of registered US residential facilities. The analysis was carried out on a representative

sample of 10 % of the total of 324 registered residential facilities. The results have shown that the legal

form of providers of residential services (private, public sector) significantly affects the resulting level

of productivity. However, according to Garavaglia, Lettieri, Agasisti, Lopez (2011) and Chang, Cheng

(2013), it can be anticipated that competition between the providers of residential services will positively

lead to a gradual increase in efficiency. The level of efficiency is also closely linked to the occupancy

rate of accommodation facilities, which is also identified by Christensen (2003) in his expert study.

Methodology and Data

Analyzing and evaluating the efficiency of production units and identifying the sources of their

inefficiency is an important prerequisite for improving the behavior, activities and processes of these

units across the market. Čechura (2009) states that, among the available methods, those based on an estimation of the production function stand out when evaluating efficiency. These include, for example, the Data Envelopment Analysis (DEA) method, whose basic models are among the most commonly applied to assess efficiency.

Given that the essence of the basic models of the DEA method is only to identify efficient and inefficient units in the monitored set, a number of approaches have been developed that deal with the subsequent ranking of the set of efficient units. Among the approaches for ranking efficient units are, for example, cross-efficiency, optimistic and pessimistic efficiency, the AHP model or the Super-efficiency models (Jablonský, Dlouhý, 2015). Super-efficiency models will be used to rank the evaluated units, or more precisely the efficient units resulting from the modeling.

DEA CCR basic model with input and output orientation

The CCR model belongs to a group of basic models and is the first ever DEA model (created in 1978).

The input-oriented primary CCR DEA model is based on the assumption of constant returns to scale (CRS).

The primary CCR model of the DEA method maximizes the efficiency rate of the evaluated unit Uq, expressed as the ratio of the weighted outputs to the weighted inputs, subject to the basic conditions that (i) the weights must not be negative, and (ii) the efficiency rates of all other production units are less than or equal to one, i.e. z ≤ 1. For each monitored production unit, the input weights $v_j$, $j = 1,2,\dots,m$, and the output weights $u_i$, $i = 1,2,\dots,r$, are obtained, defining a virtual input and a virtual output:

virtual input $= v_1 x_{1q} + v_2 x_{2q} + \dots + v_m x_{mq}$,
virtual output $= u_1 y_{1q} + u_2 y_{2q} + \dots + u_r y_{rq}$.

The entire input-oriented model can be converted from a linear fractional program into a standard linear programming problem for the unit Uq by the Charnes-Cooper transformation:

maximize $z = \sum_{i=1}^{r} u_i y_{iq}$,    (1)

subject to $\sum_{i=1}^{r} u_i y_{ik} \le \sum_{j=1}^{m} v_j x_{jk}$,  $k = 1,2,\dots,n$,
$\sum_{j=1}^{m} v_j x_{jq} = 1$,
$u_i \ge \varepsilon$,  $i = 1,2,\dots,r$,
$v_j \ge \varepsilon$,  $j = 1,2,\dots,m$.

If the resulting value of the coefficient z is equal to one, the production unit Uq is evaluated as efficient. For inefficient units, the efficiency is less than one, i.e. z < 1. The value of the coefficient signals the amount of input reduction needed to make the unit efficient (Fiala et al., 2010; Cooper, Seiford, Tone, 2007).
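As an illustration, the multiplier form (1) can be solved as a small linear program, for example with scipy.optimize.linprog. The sketch below is one possible implementation under that assumption, not the software actually used in the paper, and the input/output matrices at the bottom are dummy values only.

import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, q, eps=1e-6):
    """Input-oriented CCR efficiency of unit q, multiplier form (1).

    X: (n, m) inputs, Y: (n, r) outputs, one row per evaluated unit (year)."""
    n, m = X.shape
    r = Y.shape[1]
    c = np.concatenate([-Y[q], np.zeros(m)])             # maximize u'y_q  <=>  minimize -u'y_q
    A_eq = np.concatenate([np.zeros(r), X[q]])[None, :]  # v'x_q = 1 (Charnes-Cooper normalisation)
    A_ub = np.hstack([Y, -X])                            # u'y_k - v'x_k <= 0 for every unit k
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(eps, None)] * (r + m), method="highs")
    return -res.fun                                      # efficient units score 1, inefficient < 1

# Dummy example: 3 years, 2 inputs, 1 output (illustrative numbers only).
X = np.array([[5.0, 3.0], [4.0, 2.0], [6.0, 4.0]])
Y = np.array([[10.0], [10.0], [9.0]])
print([round(ccr_input_efficiency(X, Y, q), 3) for q in range(3)])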

The output-oriented primary CCR DEA model also assumes constant returns to scale and is based on the same assumptions as the input-oriented model above. The value of the technical efficiency coefficient is given by the ratio of the weighted sum of inputs to the weighted sum of outputs; however, compared to the previous model, the weights are sought so that the value of the efficiency coefficient is greater than or equal to one, i.e. g ≥ 1.

The primary output-oriented CCR model can be formulated as follows:

minimize $g = \sum_{j=1}^{m} v_j x_{jq}$,    (2)

subject to $\sum_{i=1}^{r} u_i y_{ik} \le \sum_{j=1}^{m} v_j x_{jk}$,  $k = 1,2,\dots,n$,
$\sum_{i=1}^{r} u_i y_{iq} = 1$,
$u_i \ge \varepsilon$,  $i = 1,2,\dots,r$,
$v_j \ge \varepsilon$,  $j = 1,2,\dots,m$.

If the efficiency coefficient g is equal to one, the production unit of interest is considered efficient; if higher values of the coefficient are found, the unit can be described as inefficient. Under constant returns to scale, the output-oriented score g is the reciprocal of the input-oriented score z. The output-oriented CCR model (2) makes it possible to determine the increase in outputs needed to make an inefficient unit efficient (Cooper, Seiford, Tone, 2007; Jablonský, Dlouhý, 2015).

Super-efficiency models

The essence of super-efficiency models is that, when calculating the super-efficiency rate, the weight of the originally efficient unit is set equal to zero (the evaluated unit is thus removed from the set of monitored production units), which changes the original efficient frontier. The model then measures the distance of the inputs and outputs of the rated unit from the new efficient frontier (Fiala et al., 2010; Jablonský, Dlouhý 2015).

In input-oriented DEA models, the originally efficient units receive a super-efficiency score greater than one (less than one in output-oriented models). This makes it possible to rank the monitored efficient production units and determine which unit is the most efficient in the given set.

The first model in the category of super-efficiency models, published in 1993, was the Andersen and Petersen model (hereinafter the AP model). The mathematical formulation of the input-oriented model, assuming constant returns to scale (CRS), can be expressed as:


minimize θ_q^AP

under conditions Σ_{j=1}^{n} x_ij λ_j + s_i^− = θ_q^AP x_iq, i = 1, 2, …, m, (3)

Σ_{j=1}^{n} y_kj λ_j − s_k^+ = y_kq, k = 1, 2, …, r,

λ_j ≥ 0, j = 1, 2, …, n, j ≠ q,

λ_q = 0.

The higher the super-efficiency score, the more stable the efficiency of the evaluated unit and the higher its position in the overall ranking.

The output-oriented AP model can be formulated analogously as follows:

maximize φ_q^AP

under conditions Σ_{j=1}^{n} x_ij λ_j + s_i^− = x_iq, i = 1, 2, …, m, (4)

Σ_{j=1}^{n} y_kj λ_j − s_k^+ = φ_q^AP y_kq, k = 1, 2, …, r,

λ_j ≥ 0, j = 1, 2, …, n, j ≠ q,

λ_q = 0.
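The AP model (3) can likewise be written as a small linear program in its envelopment form. The sketch below is a minimal illustration under the same assumptions as the previous snippet (Python with scipy, DMUs stored in columns); the slack variables are left implicit because the inequalities are handled directly, and the function name is hypothetical.

import numpy as np
from scipy.optimize import linprog

def ap_super_efficiency(X, Y, q):
    """Envelopment form of the input-oriented Andersen-Petersen model (3).

    Unit q is excluded from the reference set (lambda_q = 0), so efficient
    units can obtain scores greater than one.  X: (m, n) inputs,
    Y: (r, n) outputs; returns theta_q^AP.
    """
    m, n = X.shape
    r, _ = Y.shape
    # Variables: theta, lambda_1..lambda_n (lambda_q is fixed to zero below).
    c = np.concatenate([[1.0], np.zeros(n)])            # minimize theta
    # sum_j x_ij * lambda_j - theta * x_iq <= 0          (input constraints)
    A_in = np.hstack([-X[:, [q]], X])
    # -sum_j y_kj * lambda_j <= -y_kq                    (output constraints)
    A_out = np.hstack([np.zeros((r, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, q]])
    bounds = [(None, None)] + [(0, None)] * n
    bounds[1 + q] = (0, 0)                               # lambda_q = 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun                                       # theta_q^AP

Efficient units obtain θ_q^AP greater than one, which is exactly the property used below to rank the efficient years.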

Data

The subject of the efficiency evaluation according to the mathematical formulations (1), (2) of the basic CCR DEA model and the super-efficiency models (3), (4) is the system of selected residential social services for seniors in the Czech Republic. The analysis was carried out for the period 2006-2018. Five input variables (x1 - x5) and two output variables (y1, y2) were selected for modeling the efficiency of the system of selected social services for seniors and were combined into three models: Model A (x1; y1, y2), Model B (x2, x3; y2) and Model C (x4, x5; y2). All three models are considered under both the input-oriented (IO) and the output-oriented (OO) assumption. Table 1 shows the scheme of the monitored models.

Table 1. Structure Model A, Model B and Model C

Parameters                                   Model A   Model B   Model C
Total employees (x1)                            ✔
Employees in direct care (x2)                              ✔
Number of beds (x3)                                        ✔
Total costs (x4)                                                      ✔
Number of accommodation facilities (x5)                               ✔
Number of clients (y1)                          ✔
Total revenue (y2)                              ✔          ✔          ✔

Source: Own calculation.

For the input-oriented model (IO), the observed unit (year) can be considered efficient if its efficiency value is equal to 1 and inefficient if it is less than 1. In the output-oriented model (OO) the monitored unit is likewise efficient if the efficiency value is equal to 1, while inefficient units have values greater than 1.

The data for efficiency modeling were drawn from statistical yearbooks of the Ministry of Labor and

Social Affairs of the Czech Republic. The system of selected residential social services for seniors was


evaluated for the period 2006-2018, i.e. for the individual years; for the purposes of the research, the monitored years were designated DMU_2006 - DMU_2018. Efficiency modeling was carried out according to the selected methodology (see 3.1, 3.2) using the DEAFrontier Add-In for Microsoft Excel.

Empirical Results

Results of the evaluation of efficiency according to the basic CCR DEA model

The results of modeling the efficiency of the system of selected residential social services for seniors combine and compare the input and output variables (parameters) from the perspective of the individual models (A, B, C), in total for the individual years 2006-2018 (n = 13). The observed efficiency results by model are shown in Table 2. They show that the number of efficient and inefficient homogeneous production units (DMUs - years) is completely identical for the input- and output-oriented basic CCR DEA models.

It is clear from the table that the best results were achieved in Model A and Model C, each with 3 efficient units (years). The average values and standard deviations show that the assessed system of selected residential social services for seniors in the Czech Republic is less efficient in Model B than in Models A and C; for the latter two models the efficiency rate improves over time, and the minimum values of the efficiency measure show that the monitored DMUs lie relatively close to the efficient value of 1. The standard deviation also confirms the differences between the model values.

Table 2. Efficiency results according to models (A - C)

                               Model A; n = 13       Model B; n = 13       Model C; n = 13
                               input      output     input      output     input      output
Number of efficient DMUs         3          3          2          2          3          3
Number of inefficient DMUs      10         10         11         11         10         10
Minimum efficiency rate        0.8734     1.1449     0.6818     1.4666     0.9072     1.1023
Average efficiency rate        0.9301     1.0783     0.8336     1.2305     0.9434     1.0612
Standard deviation             0.0511     0.0583     0.1323     0.1962     0.0329     0.0360

Source: Own calculation.

When the results of the efficiency evaluation are plotted in the network graphs, the spread of the values can be observed, see Figure 3. The 100 % efficient values (years) (θ(Uq) = 1 and φ(Uq) = 1) lie on the outer circle, while units (years) showing increasing inefficiency lie closer to the centre. Because of the different scale of values in the output-oriented models, these graphs were adapted to the format of the input-oriented graphs for comparability. Identical values of efficient units across the models are only coincidental.


Figure 3. Efficiency results according to models assuming input (IO) and output orientation

(OO)

Source: Own processing.

The figure shows that Model C achieved better results in the efficiency evaluation. Although Model A reached the same number of efficient and inefficient units as Model C, its efficiency scores were slightly worse; overall, however, these were very good results. The largest fluctuation of the results was recorded for Model B, with the most significant fluctuations in 2006-2011, especially for the output-oriented model (OO). The units (years) in this model were close to 0.6 (60 %). In these years, the total revenue paid from public budgets was insufficient (inefficient) in relation to the bed capacity and the number of direct-care workers. Altogether, the number of workers providing direct social care in the individual years was not sufficient to serve all beds (i.e. all clients placed in residential social facilities) effectively and with adequate quality.

For homogeneous production units (years) that proved inefficient in the monitored models under the input-oriented assumption, a reduction of inputs is recommended: in Model A this means reducing the number of employees in residential social facilities (x1), in Model B the number of employees in direct care (x2) and the bed capacity (x3), and in Model C the total costs (x4) and the number of residential social facilities (x5). Conversely, units that are inefficient in the output-oriented models are recommended to increase the output variables, ie the number of clients (y1) and total revenue (y2). A possible solution is also to make changes on both sides at the same time, reducing inputs and increasing outputs accordingly. Nevertheless, reducing the number of employees can have a negative impact on the quality of the provided care, and increasing the number of clients is not very flexible in the social services system.

In terms of efficient units (years), the system of selected residential social services for seniors in the

Czech Republic was efficient in Model A in 2006, 2015 and 2018, in the case of Model B in 2014 and

2018 and in Model C in 2011, 2017 and 2018. On the contrary, the worst results were achieved in 2008


and 2009 in both Model A and Model B, while in the case of Model C the years 2006 and 2008 were

concerned.

Results of the evaluation of efficiency according to the Super-efficiency models

Using the super-efficiency models, it is possible to rank the evaluated efficient units and thus determine their resulting order; the models classify efficient units by assigning them a value greater than 1.

Table 4 shows the results of the super-efficiency analysis for the evaluation of the system of selected residential social services for seniors in the Czech Republic in the monitored period 2006-2018. The results are limited to the units (years) in which the system of selected residential care for seniors in the Czech Republic was fully efficient in Models A, B and C, ie the efficiency rate was equal to 1.

Table 4. Results of Super efficiency analysis

         Model A                     Model B                     Model C
Rank   DMU        Score      Rank   DMU        Score      Rank   DMU        Score
1.     DMU_2006   1.1069     1.     DMU_2018   1.0655     1.     DMU_2018   1.0576
2.     DMU_2018   1.0445     2.     DMU_2014   1.0187     2.     DMU_2017   1.0356
3.     DMU_2015   1.0270     3.     -          -          3.     DMU_2011   1.0153

Source: Own calculation.

From the table it is clear that, across the individual models and the total number of efficient units (years), the year 2018 ranks as the most efficient unit, except in Model A, where it occupies second place. In that year, the workers providing direct social care served the bed capacity most effectively in relation to the total revenue managed by providers of residential social services. At the same time, in 2018 expenditure was spent most efficiently on securing activities in relation to the total number of residential social facilities in the Czech Republic. In Model A the first place was taken by the year 2006, a year in which the social services sector had not yet been comprehensively addressed by legislation. Nevertheless, in that year the system showed the most efficient staffing in relation to the total number of clients and the funds received (revenue).

Although these results point to the observed efficiency, it must be taken into account that this is the efficiency of the whole system of residential social care for seniors as defined by the selected input and output variables. The units are therefore (in)efficient only for the given combination of inputs and outputs. The specific selection of parameters, ie the respective inputs, must be supported by relevant arguments relating to the outputs, and vice versa. The results of the efficiency modeling may differ significantly in the monitored years when individual residential social facilities for seniors are examined.

Conclusion

The paper focuses on the evaluation of the efficiency of the system of selected residential social services for seniors in the Czech Republic for the period 2006–2018 from the perspective of aggregated parameters, i.e. annual results such as staffing, financial results, bed capacity and the number of facilities in the system of residential social services for seniors. The system of residential social services for seniors consists of state, regional, municipal and non-profit (other) facilities, i.e. registered providers.

The Data Envelopment Analysis (DEA) method, more precisely the CCR model under both the input and the output orientation, was chosen as the key method for evaluating the efficiency of the system of selected residential social services. The efficiency evaluation is complemented by a ranking of the efficient units in each model using the super-efficiency method.


Although the new law on social services, which is in line with a number of contemporary European systems and principles, came into effect in 2007, the previous problems have not been completely eliminated and many others have paradoxically deepened. One of the problem areas is the provision of care for the elderly in residential facilities. (Hrozenská, Dvořáčková, 2013; Průša, 2008)

Based on the findings of the evaluation of the efficiency of the system of social services, which provides

residential social care for seniors in the Czech Republic and in the context of the research hypotheses

H1 and H2, it can be stated that the system can be considered inefficient. However, efficiency is

improving.

Verification of hypothesis H1, formulated as: "In all monitored years, the system of supply of selected residential social services for seniors reaches an efficiency value of at least 0.85 in all three models", showed that within the selected input and output parameters the system achieved, in Model B, efficiency levels between 0.6 and 0.7 in 2006–2011 for the input-oriented model and between 0.5 and 0.6 for the output-oriented model. This situation was caused by the lack of granted and paid funds from public budgets needed to ensure the operation of residential facilities. Although the total amount of money spent on residential social services grows every year, many funds do not return to the system, and providers of residential social services are increasingly dependent on subsidies from the state budget and on payments from health insurance for nursing care. At the same time, there was a significant shortage of workers in direct nursing care in the given years, while the total bed capacity increased in response to the growing demand for placement of seniors in residential facilities. The results of the efficiency measure for Model A and Model C were above the minimum threshold of 0.85. Hypothesis H1 is therefore rejected.

The results confirm the second research hypothesis H2, stating that in 2018 the system reaches, for both the input and the output orientation in Models A, B and C, the full value of efficiency, ie 1, so that this year ranks among the most efficient. It is obvious that, in terms of the monitored basic annual parameters (input and output variables), the system was at optimal values in the given year, which led to efficiency in the economic and allocative provision of social care. In view of the expected demographic growth in the number of people of post-productive age, it is advisable to recommend that the offer of residential social services for seniors continue to develop in the coming years, not only in terms of productivity but also in terms of scale.

However, the results on efficiency or inefficiency, whether from a technical, economic or other point of view, do not constitute the sole or final basis for deciding on the further functioning of the whole system or of individual social facilities. Efficient does not always mean socially desirable. An important role is also played by the added value created by social services focused on the elderly: for example, improving the quality of life not only of seniors themselves but also of their families, social inclusion and other services that are difficult for society to replace.

References

[1] Cooper, W. W., L. M. Seiford and K. Tone. (2007). Data Envelopment Analysis: A Comprehensive

Text with Models, Applications, References and DEA-Solver Software. New York: Springer.

[2] Cornea, V. (2017). Institutional and Administrative Answers to the Phenomenon of Demographic

Aging: (Re)configuration of the Social Services Infrastructure. Public Administration and

Regional Studies, 19 (1), pp. 71-84.

[3] Čechura, L. (2009). Zdroje a limity růstu agrárního sektoru: analýza efektivnosti a produktivity českého agrárního sektoru – aplikace SFA. Praha: Wolters Kluwer.

[4] Fiala, P. et al. (2010). Operační výzkum – nové trendy. Praha: Professional Publishing.

[5] Fizel, L. J. and T. S. Nunnikhoven. (1992). Technical efficiency of for-profit and non-profit

nursing homes. Managerial Decision Economics, 13(5), pp. 429–439.

[6] Garavaglia, G., E. Lettieri, T. Agasisti and S. Lopez. (2011). Efficiency and quality of care in

nursing homes: an Italian case study. Health Care Management Science, 14(1), pp. 22–35.


[7] Greve, B. (2016). Long-Term Care for the Elderly in Europe: Development and Prospects. New

York: Routledge.

[8] Horecký, J. (2012). Efektivní financování sociálních služeb v České republice. Praha: Vysoká

škola finanční a správní.

[9] Horecký, J. (2010). Kapacita, dostupnost, struktura a kvalita sociálních služeb. Tábor: Sociální

služby.

[10] Hrozenská, M. and D. Dvořáčková. (2013). Sociální péče o seniory. Praha: Grada Publishing.

[11] Chang, S. J. and M. Cheng. (2013). The impact of nursing quality on nursing home efficiency:

evidence from Taiwan. Review of Accounting and Finance, Emerald Group Publishing, 12(4), pp.

369-386.

[12] Christensen, E. W. (2003). Scale and Scope Economies in Nursing Homes: A Quantile Regression

Approach. Health Economics, 13(4), pp. 363-377.

[13] Jablonský, J. and M. Dlouhý. (2015). Modely hodnocení efektivnosti a alokace zdrojů. Praha:

Professional Publishing.

[14] Kleinsorge, K. I. and D. F. Karney. (1992). Management of nursing homes using data envelopment

analysis. Socio-Economic Planning Science, 26(1), pp. 57–71.

[15] Krebs, V. et al. (2015). Sociální politika. Praha: Wolters Kluwer.

[16] Langhamrová, J., M. Šimková and J. Sixta. (2018). Makroekonomické dopady rozšiřování

sociálních služeb pro stárnoucí populaci České republiky. Politická ekonomie, 66 (2), pp. 240-259.

[17] Matoušek, O. (2011). Sociální služby: Legislativa, ekonomika, plánování, hodnocení. Praha:

Portál.

[18] Nyman, J. A., D. L. Bricker and D. Link. (1990). Technical efficiency in nursing homes. Medical

Care, 28(6), pp. 541–551.

[19] Ozcan, Y. A., S. E. Wogen and L. W. Mau. (1998). Efficiency evaluation of skilled nursing

facilities. Journal of Medical Systems, 22(4), pp. 211–224.

[20] Proenca, T., J. Proenca and C. Costa. (2018). Enabling factors for developing a social services

network. The Service Industries Journal, 38 (5-6), pp. 321-342.

[21] Průša, L. (2008). Efektivnost financování sociálních služeb pro seniory. Praha: VÚPSV, v.v.i.

[22] Průša, L. (2007). Efektivnost sociálních služeb: vybrané prvky a aspekty. Praha: VÚPSV, v.v.i.

[23] Register of Social Service Providers. (2019). Selected residential social services for seniors.

[online database]. Prague Ministry of Labour and Social Affairs. Available at:

<http://iregistr.mpsv.cz/socreg/hledani_sluzby.do?SUBSESSION_ID=1579012964422_1>.

[24] Víšek, P. and L. Průša. (2012). Optimalizace sociálních služeb. Praha: VÚPSV, v.v.i.


ANALYSIS OF THE SPILLOVER EFFECT OF STOCK MARKET RISK: BASED ON EVT-

COPULA-CVAR MODEL

Lun Gao1

1Department of Finance, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

This paper integrates the analytical characteristics of the EVT-Copula model and the CVaR model and constructs an EVT-Copula-CVaR model to study the risk spillover effect of the American stock market on the U.K. stock market. The results show that the U.S. stock market generates a significant risk spillover to the U.K. market. Model diagnostics and ex-post testing show that the model can effectively measure the risk spillover of a single financial institution (or financial market), which helps financial regulatory authorities track changes in systemic risk in a timely manner.

Keywords

Conditional Value at Risk, Extreme Value Theory, Copula, Spillover Effect, systemic risk.

JEL Classification

G24, G28

Introduction

Financial globalization has made the economic relations between different countries and regions much closer. As markets open up and transaction efficiency improves, risk effects are no longer limited to the domestic financial market but are transmitted between markets. The risk of a single financial institution (or financial market) can spread to other market systems through open market channels, resulting in risk spillovers, systemic risk and financial crises. In the 2008 financial crisis, countries around the world initially underestimated the level of risk spillover from the US financial market. The traditional value at risk (VaR) method lacks an effective estimation and measurement of the risk spillover between institutions and markets, which reveals its limitations.

In order to effectively prevent the occurrence of a systemic crisis, it is urgent to consider financial risk spillovers in extreme market situations, the economic interdependence of financial markets in different countries and regions under open-economy conditions, and the potential losses they can cause. Against this background, this paper uses extreme value theory (EVT), copulas and conditional value at risk (CVaR) to study the risk spillover effect, and comprehensively measures the risk contagion and spillover between different markets together with the conditional value at risk.

The structure of this paper is as follows: the second part is literature review; the third part introduces

related models; the fourth part is data selection and empirical analysis; the fifth part is the conclusion of

this paper.

Literature Review

McAleer and da Veiga (2005) systematically used the VaR method to study market volatility spillover effects and found that it underestimated them. In order to address this undervaluation, Adrian and Brunnermeier (2008) put forward the CVaR method. This method can fully capture the dynamic change of systemic risk in the financial market and also effectively improves the prediction of financial market risk, so it is widely used in research on risk spillovers (Gideon and Paul, 2017). Gropp et al. (2009) studied the cross-border risk spillover effect among European banks; Bee and Miorelli (2010) analyzed market risk during the financial crisis using the POT (peaks over threshold) method of extreme value theory and a dynamic


VaR method, and found that this approach was very effective in measuring risk spillovers in high-risk periods; Girardi and Ergün (2010) used a GARCH method to analyze the dependence of CVaR between international financial markets, and found that when the vulnerability of financial markets increased, the risk spillover effect amplified risk contagion; Adams et al. (2010) quantitatively analyzed the scale and duration of risk spillovers using a Copula-VaR model and found that investment banks and arbitrage funds play a leading role in the transmission of risk spillovers. Most of the above studies use a single GARCH-family model, the CVaR method or EVT to study stock market spillover effects, and most of them focus on a single domestic market. The financial crisis triggered by the United States clearly shows that although normal market fluctuations can be managed effectively through day-to-day institutional arrangements, extreme market movements, even though their probability is low, often lead to great market risk. In an open economy this risk spreads further through the interconnection of different economies; because the risk spillover effect under extreme market conditions was not considered comprehensively, it eventually led to a serious financial crisis. Therefore, under open-economy conditions, we should not only measure the risk spillover between different stock markets but also pay attention to possible extreme market situations.

Because financial assets generally exhibit a "leverage effect", their returns often present an asymmetric, leptokurtic ("peak and fat tail") distribution. Traditional statistical analysis can neither fit the distribution of the return series well nor predict the behaviour of financial assets in extreme cases. Extreme value theory, by contrast, fits the tail of the return distribution well and does not need to model the whole distribution; it overcomes the shortcomings of other measurement methods in dealing with fat-tailed distributions and is an effective way to measure market risk in extreme situations. The copula method can flexibly select the specific form of the marginal distributions, treat the marginal distributions and their dependence structure separately, and capture nonlinear and asymmetric dependence between variables. When the profit and loss distribution is non-normal, the traditional VaR method cannot satisfy all the properties of a coherent risk measure; as a result, a local optimum is not necessarily a global optimum, and extreme price changes in financial markets cannot be handled in a timely and effective manner. CVaR satisfies all the properties, including convexity, of a coherent risk measure. It measures the average tail loss when the loss exceeds VaR, i.e. the average level of excess loss, and can therefore measure tail losses adequately.

Based on the analytical advantages of the above methods and the practical need to measure stock market risk spillover effects, this paper constructs an EVT-Copula-CVaR model to analyze the risk spillover effect between the U.S. and U.K. stock markets.

Methodology

This part first introduces the definition and principle of CVaR and then describes the specific process of calculating CVaR using the EVT-Copula model. There are two key steps in measuring the dependence structure of financial markets with the copula method. The first step is to select an appropriate marginal distribution for each series; considering that financial time series generally exhibit peak and fat-tail characteristics, this paper uses a semi-parametric method, fitting the upper and lower tails of each series with the generalized Pareto distribution (GPD) of extreme value theory and the middle part with the empirical distribution. After the marginal distributions are established, the best-fitting function is selected from the common copula families to describe the dependence.

Conditional Value at Risk (CVaR)

Due to non-linear characteristics such as fat tails and asymmetry, the volatility of financial asset returns generally cannot be examined effectively with traditional VaR. Scenarios of huge losses with low probability but extreme price changes (such as stock market crashes and financial crises) are often underestimated, leading to an inadequate measurement of tail losses by VaR. CVaR satisfies all the properties, including convexity, of coherent risk measures and reflects the average


tail loss beyond VaR. Through methods such as sample quantile estimation, a sufficient measure of tail loss can be achieved without relying on VaR calculations. According to Adrian and Brunnermeier (2008), the mathematical expression of CVaR from the perspective of risk spillover effects is:

Pr(X_i ≤ CVaR_q^{ij} | X_j = VaR_q^j) = q (1)

Here X represents the loss level, q is the significance level, and i and j denote financial institutions (or financial markets). CVaR_q^{ij} indicates the risk level faced by i when j is in an extremely unfavourable condition; it is the conditional risk value of i with respect to j, which includes both the unconditional risk value and the spillover risk value. The risk spillover effect of j on i is described by the numerical relationship between CVaR_q^{ij} and CVaR_q^j. In order to reflect the risk spillover from a risk event of j to i more accurately, we define the spillover risk value ΔCVaR_q^{ij} as:

ΔCVaR_q^{ij} = CVaR_q^{ij} − VaR_q^i (2)

Due to the large differences in the scale of risk spillovers between different markets and financial institutions, a further standardization is required to facilitate comparison:

%CVaR_q^{ij} = (ΔCVaR_q^{ij} / VaR_q^i) × 100 % (3)

%CVaR_q^{ij} removes the influence of dimension, so it reflects more accurately the degree of risk spillover to i when a risk event occurs in j and allows changes in systemic risk to be detected in a timely manner. The ΔCVaR_q^{ij} approach combines the risk spillover effect with traditional VaR and thus reflects the true level of risk more accurately, which is of great significance to regulators concerned with the risk of the entire financial system. Supervisory authorities can therefore accurately and effectively identify the contribution of individual financial institutions (or financial markets) to systemic risk and quickly take targeted regulatory measures to preserve the stability of the whole financial system.
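As a worked illustration of formulas (2) and (3), take the values reported later in Table 3 for the FTSE100: with VaR_q^i = 2.5559 and CVaR_q^{ij} = 3.3081, formula (2) gives ΔCVaR_q^{ij} = 3.3081 − 2.5559 = 0.7522, and formula (3) gives %CVaR_q^{ij} = (0.7522 / 2.5559) × 100 % ≈ 29 %.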

Modeling the marginal distribution using EVT

Extreme value theory mainly deals with extreme cases of risk. It has the ability to estimate beyond

sample data and can accurately describe the tail distribution. Extreme value theory mainly includes two

types of models, the traditional BMM (block maxima method) model and the POT model developed in

recent years. The BMM model often requires a large amount of sample data to model the maximum

value after blocking. This method is difficult to apply in practice due to the limited availability of tail

data. The POT model sets a threshold in advance and models all sample data that exceeds the threshold.

It overcomes the statistical problem of insufficient tail data to a certain extent. It has a clear advantage

over BMM when dealing with tail data in extreme cases. According to the definition of Viviana (2003),

the mathematical expression of EVT is as follows:

Let X_i, i = 1, …, n, be independent and identically distributed random variables with common distribution F(x) = Pr(X_i ≤ x), and let X be an arbitrary X_i. Choose a threshold u. The conditional probability distribution of the excess y over the threshold u is defined as:

F_u(y) = Pr(X − u ≤ y | X > u) = (F(y + u) − F(u)) / (1 − F(u)), y > 0 (4)

F_u(y) is the over-threshold (excess) distribution. Since the generalized Pareto distribution (GPD) fits the tails of return series well, this article uses the GPD to model the upper and lower tails of the return series and the empirical distribution to fit the middle part of the series. The marginal distribution of the return series X is then:

F(x) = (N_uL / N) (1 − ξ (x − u) / β(u))^(−1/ξ), for x < u_L,

F(x) = Ecdf(x), for u_L ≤ x ≤ u_R,

F(x) = 1 − (N_uR / N) (1 − ξ (x − u) / β(u))^(−1/ξ), for x > u_R. (5)


Here Ecdf(x) is the empirical distribution function on the interval u_L ≤ x ≤ u_R of the return, β(u) is a positive scale parameter depending on the threshold u, ξ ∈ R is the shape parameter of the distribution, u_L is the lower-tail threshold, u_R is the upper-tail threshold, and N_u denotes the number of sample observations beyond the corresponding threshold u. The determination of the threshold u is a prerequisite for a correct estimation of ξ and β(u). An excessively high threshold leaves too little excess data, so the variance of the estimated parameters becomes large, while a too low threshold produces biased estimates. In practice the threshold is usually assessed with the mean excess plot and the Hill plot, but there is no universally accepted method for selecting u. Most papers currently use the principle proposed by Du Mouchel (1983), i.e. the threshold is chosen so that the number of exceedances accounts for 10 % of the total number of observations. This article uses this principle to determine the upper- and lower-tail thresholds of the return series.
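As an illustrative sketch only, the semi-parametric marginal in (5) combined with the Du Mouchel 10 % rule can be assembled from scipy's generalized Pareto distribution. The function below is our own assumption (the name, the argument layout and the use of genpareto.fit are not taken from the paper) and is meant to show the construction, not the author's actual code.

import numpy as np
from scipy.stats import genpareto

def fit_semiparametric_marginal(x, tail_fraction=0.10):
    """Semi-parametric marginal CDF in the spirit of equation (5).

    The upper and lower 10 % of the sample (Du Mouchel rule) are fitted with
    a generalized Pareto distribution, the middle part with the empirical
    CDF.  Returns a function F(v) evaluating the fitted marginal.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    k = int(np.floor(tail_fraction * n))             # observations per tail
    u_low, u_high = x[k], x[n - k - 1]                # tail thresholds
    # GPD fitted to the exceedances over each threshold (location fixed at 0).
    xi_hi, _, beta_hi = genpareto.fit(x[x > u_high] - u_high, floc=0)
    xi_lo, _, beta_lo = genpareto.fit(u_low - x[x < u_low], floc=0)

    def F(v):
        v = np.atleast_1d(np.asarray(v, dtype=float))
        out = np.searchsorted(x, v, side="right") / n           # empirical CDF
        lo, hi = v < u_low, v > u_high
        out[lo] = (k / n) * genpareto.sf(u_low - v[lo], xi_lo, scale=beta_lo)
        out[hi] = 1 - (k / n) * genpareto.sf(v[hi] - u_high, xi_hi, scale=beta_hi)
        return out
    return F

The returned function F can then be applied to each return series to obtain the pseudo-observations (uniform marginals) required by the copula step that follows.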

Choose the appropriate Copula function

A copula is a class of functions that connect a joint distribution function with its marginal distribution functions. It was first proposed by Sklar (1959) and, with the development of information technology, began to be used in finance in the late 1990s. This article chooses the copula function according to its fit to the actual return series. According to the definition of Di Clemente (2003), an N-dimensional copula is a function C: [0,1]^N → [0,1] satisfying the following properties:

1. Dom C = I^N = [0,1]^N (Dom C denotes the domain of C);

2. C is an N-increasing and grounded function;

3. The marginal distribution C_n of C satisfies C_n(u) = C(1, …, 1, u, 1, …, 1) = u, where u ∈ [0,1].

According to this definition, the copula function is a connection function that associates a multi-dimensional joint distribution with its one-dimensional marginal distributions; in fact, it is a multivariate distribution function with uniform [0,1] marginals on the N-dimensional space [0,1]^N.

If F_1, …, F_N are univariate distribution functions, then C(F_1(x_1), …, F_N(x_N)) is a multivariate distribution function with marginal distributions F_1, …, F_N. According to Sklar's theorem, if F is an N-dimensional joint distribution function with marginal distributions F_1, …, F_N, there exists a copula function C: [0,1]^N → [0,1] such that:

C(u_1, …, u_N) = F(F_1^(−1)(u_1), …, F_N^(−1)(u_N)) (7)

The above formula illustrates that the copula function captures the relationship between the marginal distributions and the joint distribution, containing all the dependence information between the variables. The dependence structure of a multivariate distribution can therefore be obtained easily via the copula function, and the marginal distribution functions F_1, …, F_N need not have the same form. From Sklar's theorem, the density function of the joint distribution F is:

f(x_1, …, x_N) = c(F_1(x_1), …, F_N(x_N)) Π_{n=1}^{N} f_n(x_n) (8)

where c(u_1, …, u_N) = ∂^N C(u_1, …, u_N) / (∂u_1 … ∂u_N) is the density of the copula function and f_n(x_n) is the density of the marginal distribution F_n(x_n). Equation (8) shows that a joint distribution function can be split into univariate marginal distributions and a dependence structure represented by the copula function. This provides a way to analyse the dependence structure of a multivariate distribution without considering the marginal distributions and makes the construction of multivariate joint distributions more convenient.

In this article, we use the t-copula for the calculations. That is, we consider two risk factors (X_1, X_2)^T whose joint distribution F is unknown; for some copula C, their marginal distributions F_1 and F_2 satisfy:

F(x_1, x_2) = C(F_1(x_1), F_2(x_2))
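A rough way to obtain the t-copula parameters from the pseudo-observations produced by the marginals above is a two-step fit: invert Kendall's τ for the correlation and profile the degrees of freedom by maximum likelihood. The sketch below is only a minimal illustration of this idea with scipy; the function name, the grid for the degrees of freedom and the two-step shortcut are assumptions and not the estimation procedure reported in the paper.

import numpy as np
from scipy import stats

def t_copula_fit_sketch(u1, u2, dof_grid=range(2, 31)):
    """Rough two-step fit of a bivariate t-copula to pseudo-observations.

    u1, u2 are uniforms obtained from the semi-parametric marginals above.
    The correlation is inverted from Kendall's tau (rho = sin(pi*tau/2));
    the degrees of freedom are chosen by profile likelihood over a grid.
    """
    tau, _ = stats.kendalltau(u1, u2)
    rho = np.sin(np.pi * tau / 2.0)

    def loglik(nu):
        z = np.column_stack([stats.t.ppf(u1, nu), stats.t.ppf(u2, nu)])
        cov = np.array([[1.0, rho], [rho, 1.0]])
        joint = stats.multivariate_t(loc=[0.0, 0.0], shape=cov, df=nu).logpdf(z)
        marg = stats.t.logpdf(z, nu).sum(axis=1)
        return np.sum(joint - marg)                   # log copula density

    nu_hat = max(dof_grid, key=loglik)
    return rho, nu_hat, tau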


Calculation of CVaR

According to the definition of Gropp et al. (2009), for the return series X_i and X_j it is assumed that their joint density function and marginal density functions are f(x_i, x_j), f_i(x_i) and f_j(x_j). The conditional density function of the series X_i given X_j is:

f_{i|j}(x_i | x_j) = f(x_i, x_j) / f_j(x_j) (9)

Combining this with formula (8) for the copula function, we obtain f_{i|j}(x_i | x_j) = c(F_i(x_i), F_j(x_j)) f_i(x_i). Therefore, the conditional distribution function of the return series X_i given X_j can be obtained from:

F_{i|j}(x_i | x_j) = ∫_{−∞}^{x_i} c(F_i(s), F_j(x_j)) f_i(s) ds (10)

Here F_i and F_j are the marginal distributions entering the copula function, obtained from the extreme value theory introduced earlier; the derivative of F_i is f_i, and c is the density of the selected optimal copula function. According to its definition, CVaR_q^{ij} is the value at risk of X_i given X_j = VaR_q^j:

CVaR_q^{ij} = F_{i|j}^{−1}(q | VaR_q^j) (11)

Here F_{i|j}^{−1} is the inverse of F_{i|j}, i.e. the conditional quantile function. It is sometimes difficult to obtain an explicit expression for F_{i|j}^{−1}; in practice we therefore solve equation (12), whose solution x_i is CVaR_q^{ij}:

∫_{−∞}^{x_i} c(F_i(s), F_j(VaR_q^j)) f_i(s) ds = q (12)
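For the bivariate t-copula chosen above, the conditional distribution in (10) has a known closed form (the so-called h-function), so equation (12) can be solved by one-dimensional root finding instead of numerical integration. The following sketch assumes this t-copula shortcut; F_i_inv (the quantile function of the marginal of market i), the parameter names and the use of brentq are illustrative assumptions.

import numpy as np
from scipy import stats, optimize

def covar_from_t_copula(F_i_inv, rho, nu, var_j_u, q=0.05):
    """Solve equation (12) for CVaR_q^{ij} under a bivariate t-copula.

    F_i_inv : quantile function of the marginal of market i (assumed to be
              the inverse of the semi-parametric CDF built earlier).
    var_j_u : F_j(VaR_q^j), i.e. the marginal probability level of market j's
              VaR (for a lower-tail VaR this is simply q).
    """
    t_j = stats.t.ppf(var_j_u, nu)

    def conditional_cdf(u_i):
        # P(U_i <= u_i | U_j = var_j_u) for the t-copula: the conditional
        # distribution of one component given the other is again Student t.
        t_i = stats.t.ppf(u_i, nu)
        scale = np.sqrt((nu + t_j ** 2) * (1 - rho ** 2) / (nu + 1))
        return stats.t.cdf((t_i - rho * t_j) / scale, nu + 1)

    # Find the marginal level u* whose conditional probability equals q,
    # then map it back through the marginal quantile function of market i.
    u_star = optimize.brentq(lambda u: conditional_cdf(u) - q, 1e-8, 1 - 1e-8)
    return F_i_inv(u_star)

With var_j_u = q = 0.05 this returns CVaR_q^{ij} in the units of the return series of market i.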

Data selection and empirical analysis

As the world's largest economy, the United States has a dominant financial market in the global financial system, and turbulence in the US financial market can easily spill over to the financial markets of other countries (regions) through various channels. Therefore, taking the stock market as representative of the financial market, studying the risk spillover effects of the US market on other major financial markets has great practical significance. This article selects the daily closing prices of the Standard & Poor's index (S&P500) and the London index (FTSE100) as raw data. Considering the time difference between the U.S. stock market and the stock markets of other countries, in the analysis t − 1 is used as the US trading day and t as the corresponding British trading day. The daily index return is calculated as the logarithmic first difference of the index closing price; to reduce calculation error, all returns are multiplied by 100.
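The construction of the return series and the statistics reported in Table 1 can be reproduced with a few lines of Python; the snippet below is a sketch under the assumption that the closing prices are available as a one-dimensional array, since the exact data source and sample period are not restated here.

import numpy as np
from scipy import stats

def daily_log_returns(close, scale=100.0):
    """Log first differences of closing prices, scaled by 100 as in the text.

    `close` is assumed to be a one-dimensional array of daily closing prices;
    the data source and sample period are not restated here.
    """
    close = np.asarray(close, dtype=float)
    r = scale * np.diff(np.log(close))
    jb_stat, jb_p = stats.jarque_bera(r)
    return r, {
        "mean": r.mean(), "max": r.max(), "min": r.min(), "std": r.std(ddof=1),
        "skewness": stats.skew(r),
        "kurtosis": stats.kurtosis(r, fisher=False),  # normal benchmark = 3, as in Table 1
        "Jarque-Bera p-value": jb_p,
    }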

The basic statistical description in Table 1 shows that although the skewness of the stock index returns is close to the value of 0 corresponding to the normal distribution, the kurtosis exceeds the normal value of 3, and the probability value of the Jarque-Bera test is 0. That is, at the 1 % significance level, each stock index return series differs significantly from the normal distribution, so it can be concluded that the return series are not normally distributed.


Table 1 Basic statistical description of data

Mean Max Min S.D Skewness Kurtosis J-B

S&P500 0.040 4.840 -6.896 0.938 -0.491 4.525 0.0

FTSE100 0.011 5.032 -6.199 0.936 -0.279 3.124 0.0

In order to further confirm the non-normality of each stock index return series, we make a Q-Q plot

corresponding to each series. Taking the FTSE100 yield sequence as an example, the Q-Q plot is shown

in Figure 1. It can be seen that the upper and lower tails of the FTSE100 yield series deviate significantly

from the normal distribution, and have significant fat tail characteristics.

Figure 1 Q-Q plot of S&P500 and FTSE100

The Q-Q plots of the other stock index return series lead to the same conclusion, so each return series shows a significant "peak and fat tail" phenomenon. Since the GPD distribution in

the extreme value theory can fit the tail data of the yield series well, we use the GPD distribution to fit

the upper and lower tails of each stock index yield series. The data in the middle of the upper and lower

tails of the stock index return series is fitted using an empirical distribution.

After determining the upper and lower tail thresholds of each stock index return series according to the Du Mouchel 10 % principle, the maximum likelihood method is used to estimate the scale parameter β(u) and the shape parameter ξ of the GPD. Based on the estimates, a GPD fitting diagnostic chart is produced for each series; taking the FTSE return series as an example, the upper-tail diagnostic chart is shown in Figure 2. In the diagnostic graph all points are concentrated near the distribution curves (both the over-threshold distribution curve and the tail distribution curve), so the fit is good, and the test of


the fitting effect for the S&P500 series gives the same conclusion. Substituting the parameter estimates into formula (5) gives the marginal distribution function of each stock index return. After establishing the marginal distributions, we use the t-copula dependence structure to capture the correlation structure between the FTSE100 and S&P500 return series; the results are shown in Table 2, reporting the Kendall τ correlation coefficient and the upper and lower tail dependence coefficients (with the main focus on the lower tail). From the perspective of risk spillovers, because the lower-tail dependence coefficients are positive, the S&P500 has a positive risk spillover effect on the other stock index: when the S&P500 return is at its risk level, the probability of potential losses in the other index returns increases.

Figure 2 Upper-tail GPD fitting of the FTSE100

Table 2 Copula parameter estimates and tail dependence (FTSE100 vs. S&P500)

            Parameter estimates        Kendall τ       Lower tail    Upper tail
FTSE100     θ = 1.153, δ = 0.244       0.39327202      0.138         0.086

So far, we have established the marginal distribution function of each stock index return series and the copula function linking the S&P500 and UK stock index return series. To examine the strength of the spillover effect of US stock market risk on the British stock market, we use the method described above to calculate the CVaR, ΔCVaR and %CVaR of the UK index return series conditional on the S&P500 being at its VaR level. The results are shown in Table 3.


Table 3 VaR, CVaR and risk spillover measures for the FTSE100

            VaR         CVaR        ΔCVaR       %CVaR (%)
FTSE100     2.5559      3.3081      0.7522      29

It can be seen that, at the 5 % significance level, the conditional value at risk (CVaR) of the UK stock index return series is greater than the corresponding unconditional value at risk (VaR); that is, an S&P500 risk event has a positive spillover effect on the UK index, and the risk spillover intensity (expressed as %CVaR) is 29 %. The analysis shows that the EVT-Copula-CVaR model fits the dependence structure between different stock markets well; financial institutions and regulators can apply the model to evaluate effectively the direction and intensity of risk spillovers when other financial institutions (or financial markets) experience risk events, and thereby improve their risk management and decision-making capabilities.

Conclusion

The definition of CVaR is based on VaR; CVaR measures the average of tail losses exceeding VaR and represents the average level of excess losses.

CVaR can effectively measure the tail loss and overcome the inadequacy of traditional VaR for tail loss

measurement. The EVT-Copula model can effectively fit the relevant structure between financial

markets under extreme market conditions. This paper builds the EVT-Copula-CVaR model by

combining the analysis characteristics of the two models. The generalized Pareto distribution is used to

fit the upper and lower tails of each stock index yield series, and the data in the middle of the upper and

lower tails of the stock index return series is fitted using an empirical distribution. According to the Kendall τ correlation coefficient and the upper and lower tail dependence coefficients (mainly the lower tail), the risk spillover effect of the S&P500 on the other stock index is qualitatively analyzed.

Analysis based on this model shows that the US stock market has a strong positive risk spillover effect

on the UK. The intensity of risk spillover of the US stock market to other stock markets is also related

to the size of the US stock market's own risk: the greater the risk of the US stock market, the higher the risk spillover to other stock markets. This model method can effectively measure the risk spillover of a

single financial institution (or financial market), and it is helpful for financial regulatory authorities to

track the changes in systemic risks in a timely manner. The financial supervision department can carry

out differentiated management according to the contribution of various financial institutions to the

system risk ΔCVaR, focusing on strengthening the supervision of financial institutions with relatively

high ΔCVaR values.


References

[1] Adams, Z., R. Füss and R. Gropp. (2010). Modeling Spillover Effects among Financial Institutions: A State-Dependent Sensitivity Value-at-Risk (SDSVaR) Approach. European Business School (EBS) working paper, (5).

[2] Adrian, T. and M. Brunnermeier. (2008). CoVaR. Federal Reserve Bank of New York Staff Reports, no. 348.

[3] Bee, M. and F. Miorelli. (2010). Dynamic VaR Models and the Peaks over Threshold Method for Market Risk Measurement: an Empirical Investigation during a Financial Crisis. Elenco dei working paper.

[4] Di Clemente, A. and C. Romano. (2003). A Copula-Extreme Value Theory Approach for Modelling Operational Risk. Available at: <http://www.gloriamundi.org>.

[5] Girardi, G. and A. T. Ergün. (2013). Systemic Risk Measurement: Multivariate GARCH Estimation of CVaR. Journal of Banking & Finance, 37(8).


[6] Hamao, Y., R. W. Masulis and V. Ng. (1990). Correlations in Price Changes and Volatility across International Stock Markets. Review of Financial Studies, 3(2), pp. 281-307.

[7] Hartman, P., S. Straetmans and C. G. de Vries. (2004). Asset Market Linkages in Crisis Periods. Review of Financial Studies, 86(1), pp. 313-326.

[8] McAleer, M. and B. da Veiga. (2005). Spillover Effects in Forecasting Volatility and VaR. School of Economics and Commerce, University of Western Australia.

[9] Paul, S. and P. Sharma. (2017). Improved VaR Forecasts Using Extreme Value Theory with the Realized GARCH Model. Studies in Economics and Finance, 34(2).

[10] Sklar, A. (1959). Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Université de Paris, 8, pp. 229-231.


THE IDENTIFICATION OF FACTORS INFLUENCING HUMAN RESOURCES

MANAGEMENT AND THE EVALUATION OF THEIR INTENSITY: A CASE STUDY ON

HUMAN RESOURCES MANAGEMENT (HRM)

Daniela Kharroubi1

1Department of Management, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

All experts agree that human resources management is the most important part of any business structure. A company may have the best technology, output capacity and equipment, yet fail to achieve the required profits because its staff is poorly managed; when human resources are managed in the best possible way, positive results are achieved. For this reason, HRM must take into account the factors that influence its efficiency and learn how to deal with them. The main objective of this case study is to identify the main factors that influence HRM in the workplace and to determine whether these factors have a significant impact on HRM. The research tool used in this paper was a structured, undisguised questionnaire administered to 25 employees in the HRM field, 18 women and 7 men. Two groups of independent factors, external and internal, were assessed. Descriptive analysis and analysis of variance (ANOVA) were carried out to derive conclusions about the features of these factors. In order to measure the internal consistency of the survey and the reliability of the scale, Cronbach's alpha coefficients were calculated. The value of Cronbach's alpha for each of the independent factors was above 0.9, with an overall alpha of 0.943 for the internal factors and 0.991 for the external factors, showing the consistency of the acquired data. In addition, analysis of variance was used to examine the relationship between the independent factors and HRM. The results of the hypothesis testing showed that more than 50 % of the mentioned internal and external factors have an influence on HRM.

Keywords

Human Resources Management, internal factors, external factors, Cronbach’s alpha, analysis of Variance.

JEL Classification

M12 Personnel Management, O15 Human Resources, C46 Specific Statistics, C12 Hypothesis Testing: General.

Introduction

Human resources management has changed a lot over the past years. A century ago, most people worked in manufacturing companies and were watched by supervisors. Companies then started thinking of ways to improve the productivity and efficiency of employees, which became the approach of scientific management. Suddenly, companies began to study performance standards, i.e. how much is produced in a certain time, job satisfaction, human relations, financial rewards, etc. This was the beginning of a new approach to gaining an advantage over competitors and achieving economies of scale.

As a result, it is widely recognized that human resources are the only living factor of production, the one that controls the other factors. Indeed, leading companies with impressive buildings and lofty offices but without talented employees would certainly collapse (Dessler, 2008).

For this reason, there are many aspects that affect the implementation of human resources management practices. For instance, Budhwar and Baruch (2003) studied HR practices in developing countries and found that they are associated with organizational and social aspects. In this regard, Oinas Paivi and Van Gils (2001) identified contextual resources that enhance HR competencies; these aspects included elements of both the external and the internal environment of the company.

This research study is therefore focused on exploring the factors (external and internal) that influence the performance of HRM in the Czech Republic:


To identify the main factors that influence HRM in the workplace

To identify whether the mentioned factors have a significant influence on HRM

Literature Review

In today's business environment, a company's workforce is in a continual state of flux: skill sets, job requirements and the regulatory environment change at such a rapid pace that staffing needs have changed significantly.

In the past decades, the role of the HR manager has evolved considerably; the previous functional approach has been replaced by a strategic one (Wakely & Point, 2003). Human resource management is now mostly concentrated on the leadership ("getting ready for tomorrow") agenda and thoroughly integrated with the business (Mooney, 2001). The HR manager now has a much deeper understanding of key organizational challenges and plays a proactive and strategic role rather than being condemned to a reactive and administrative one (Nasiriour, Afshar Kazemi & Izadi, 2012). Ulrich (1995) even goes so far as to suggest that HR departments should be purged if they fail to become more strategic. For this reason, HRM is the fundamental strength of organizations in facing the challenges and factors of business today.

Several definitions of Human Resources Management can be given. “Human Resources Management is regarded as a philosophy about the ways in which people are managed at work that is underpinned by a number of theories relating to the behavior of people and organizations” (Armstrong & Taylor, 2020). “Human Resources Management is the aspect of managing people in the broad areas of resourcing – varieties of recruitment and selection, rewarding – forms of pay, developing – forms of training and assessment, and the building and sustaining of relationships, primarily here, employment relations” (Rowley & Jackson, 2010). “Human Resource Management is the function within an organization that focuses on recruitment of, management of, and providing direction for the people who work in the organization” (Maalderink, 2014).

When analyzing the role of HRM, many challenges exist, both internal and external, which adversely affect its delivery of quality services. In developed countries, HR managers have identified the challenges they face and have developed different strategies to overcome them. The question thus arises: which internal and external factors impact the role of HR in an organization, and how do these factors affect it? In today's intense competition and global marketplace, there are many internal and external factors that affect the role of the HR department.

External factors

External factors (Pitra, 2008) have an impact on the internal environment of the organization. They create an environment in which opportunities and threats arise for the realization of an organization's business plans. In general, they can be divided into economic factors, which affect the economic conditions of organizations; political factors, which are the source of legislation and restrictions; social factors, which characterize the lifestyle of society; and environmental and technological factors, which determine the possibility of applying certain technologies in the organization's activities. From the external perspective, the following factors are particularly important for the management of human resources:
• the situation of the workforce in the labor market, such as the level and type of human resources qualifications, average incomes, labor movements, etc.;
• labor law (legislation), which affects activities related to the conclusion and termination of employment, social security, remuneration, etc.;
• the socio-cultural environment, such as the average time spent commuting to work, labor norms in a given region or country, interpersonal relationships, life values and cultural traditions;
• competitors (Porter, 2008), which affect the behavior of the organization; organizations focus on capabilities that the customer appreciates, gaining a competitive advantage;
• state regulations, which influence the organization's capabilities through legal norms imposing various obligations to reduce externalities, e.g. the obligation to build a sewage treatment plant, to fit catalytic converters in vehicles, or to declare a certain area protected;
• demographics, which affect the overall state of the workforce and thus the overall level of labor supply; demographic changes may result in a shortage of people with the necessary professional qualifications, changes in the proportion of the working population, the age structure and the quality of the workforce;
• globalization, a global process based on the


internationalization of the economy, i.e. the interconnection of the markets of different countries

through the trade of goods and services and the free movement of capital. Companies operating in the

world markets are merging to form multinational corporations, aiming to dominate as much of the world

market as possible.

The mentioned external factors are summarized in the table (2.1) below.

Table 2.1: External Factors
Legislative regulations for business activities
Provided support in selected areas of business
Amount of tax burden and method of payment of taxes
Labor law
State regulation through legal regulations imposing various obligations to reduce externalities
Development of population employment
Lifestyle and consumer habits of population groups
Demographic composition
Average income and savings rate
Gross domestic product
Climate conditions
Infrastructure of transport, energy and telecommunication networks
The level of production facilities and the state of development of science and technology
Globalization (linking markets of different countries through trade in goods and services and free movement of capital)
Source: own elaboration.

Internal factors

Internal factors that significantly influence the organization's management concept and human resource management goals are the following (Koubek, 2006):
• The size of the organization, where the number of employees is most often used as an indicator of size. In a smaller organization, communication works more easily in the context of direct personal relationships, the organizational structure is clear, with fewer hierarchical levels, and decision-making is attributable to specific people. In a larger organization it is necessary to create mechanisms for communication within the organization, the organizational structure has more hierarchical levels, and it is necessary to formalize decision-making processes and delegate powers.
• The technologies used, mainly information and communication systems. New technologies increase productivity and speed up communication. They bring changes in working practices and a restructuring of jobs in which staff retraining is needed, an increased need for new types of training, and demand for different skills and abilities.
• The organizational structure; when choosing it, criteria such as geographical location, functionality or market segment must be taken into account. The chosen structure affects the number and specialization of recruited employees, types of training, the remuneration system, motivation, job planning and the need to delegate decision-making powers.
• The corporate culture, which results from the previous development of the organization and affects its innovative capabilities and internal relationships. It enables employees to share values, standards and goals.
• Economic outcomes, where an organization can stimulate the performance of its employees by giving them a direct share in profits or the possibility of preferentially purchasing shares whose value increases, etc. As a result, employees more easily share and identify with common values, standards and objectives.
The mentioned internal factors are summarized in table (2.2).


Table 2.2: Internal Factors
Proper and effective organizational structure affects human resource planning
Size of the organization
Human resources management policy and strategic goals of the organization
Training
Economic outcome can stimulate the performance of its employees by directly contributing to profits
Devotion of employees
Evaluation systems in the area of human resources management
Organizational culture
Wage policy
HRM style – leadership
Used technologies, especially information and communication systems
Organizational climate (rigid/flexible, friendly/hostile climate)
Process of adaptation of new employees
Motivation
Source: own elaboration.

Methodology and Data

Research design

The research adopted a descriptive research design. Descriptive research describes current phenomena just as they are, since the researcher has no control over the variables. This research type has been established as ideal when data are collected to describe persons and organizations (Creswell, 1994).

Research Instrument

The research instrument used for the survey was a structured disguised questionnaire, which served as the primary tool for data collection. The secondary sources used to decide which factors to include in the survey were the literature, websites and annual reports.

The questionnaire contained two sections. The first section collected background information and personal details of the respondents. The second part of the questionnaire identified various factors that could have an impact on HRM and evaluated the intensity of this impact; it contained 31 questions related to the external and internal factors described earlier. The respondents of the study were HR professionals or their equivalent; the author preferred these respondents since they are directly involved in HRM. The respondents were contacted by e-mail and asked to choose the most appropriate answer for each question. A five-point Likert-type scale was used, where 1 represented a very small impact, 2 a small impact, 3 a slight impact, 4 a big impact and 5 a very considerable one.

Reliability Test of The Questionnaire

To measure the internal consistency and the reliability of the survey, we used Cronbach's alpha. Cronbach's alpha (Andrew, Pedersen & McEvoy, 2011) measures how well a set of variables measures a single, unidimensional latent construct. It is essentially a measure of the correlation among the item responses in a questionnaire, assuming the statistic is applied to a group of items intended to measure the same construct. Cronbach's alpha values are high when the correlations between the respective questionnaire items are high. Alpha values range from 0 to 1, and values above 0.7 are desirable.


The formula (Andrew, Pedersen & McEvoy, 2011) for Cronbach's alpha is:

\alpha = \frac{N \cdot \bar{c}}{\bar{v} + (N - 1) \cdot \bar{c}}    (1)

where
• N = the number of items,
• c̄ = the average covariance between item pairs,
• v̄ = the average variance.
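For readers who want to reproduce the reliability computation, the short sketch below evaluates formula (1) for a respondent-by-item matrix of Likert scores. It is only an illustration: the function name and the sample matrix are hypothetical, the authors' actual calculation was performed in Excel, and NumPy is assumed to be available.

# A minimal sketch (not the authors' workflow) of formula (1): Cronbach's alpha
# computed from a respondent-by-item matrix of Likert scores.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = questionnaire items."""
    n_items = scores.shape[1]
    cov = np.cov(scores, rowvar=False)                      # item covariance matrix
    v_bar = np.diag(cov).mean()                             # average item variance
    c_bar = (cov.sum() - np.trace(cov)) / (n_items * (n_items - 1))  # average covariance
    return (n_items * c_bar) / (v_bar + (n_items - 1) * c_bar)

# Hypothetical responses of 5 respondents to 4 items on a 1-5 scale
example = np.array([[4, 5, 4, 5],
                    [3, 3, 4, 3],
                    [5, 5, 5, 4],
                    [2, 3, 2, 3],
                    [4, 4, 5, 4]])
print(round(cronbach_alpha(example), 3))                    # values above 0.7 are desirable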

Data analysis

For the empirical investigation, the researcher applied statistical techniques to analyze the data collected from the survey. Descriptive statistics were used to describe the basic features of the data. Likert-scale responses are usually classified as ordinal data. However, Pecáková (2011) argues that the total score obtained by adding the point values of individual stimuli represents a one-dimensional scale evaluation of the observed phenomenon, so that, based on the procedure used, the resulting scale can be treated as cardinal. For this reason, the basic features reported in this study are the mean and the standard deviation. The mean, x̄, is the sum of the values in a data set divided by the number of values (Jacques, 2013):

\bar{x} = \frac{1}{n}\sum x    (2)

The standard deviation is the square root of the variance, which measures the spread of the data about the mean (Jacques, 2013):

St.dev = \sqrt{\frac{1}{n}\sum (x - \bar{x})^{2}}    (3)

Also, to investigate the research hypotheses, parametric hypothesis testing was employed: one-way ANOVA, the analysis of variance. ANOVA models (Sahai & Ageel, 2000) have become one of the most widely used tools of modern statistics for analyzing multifactor data. They provide versatile statistical tools for studying the relationship between a dependent variable and one or more independent variables. The test statistic is based on two components: the sum of squares between groups and the sum of squares within groups.

The between-groups sum of squares examines the differences among the group means by calculating the variation of each group mean Ȳ_j around the grand mean Ȳ (PDX.edu):

SS_{A} = n \sum_{j} (\bar{Y}_{j} - \bar{Y})^{2}    (4)

The within-groups sum of squares (PDX.edu) examines error variation, i.e. the variation of individual scores around their group mean. This is variation in the scores that is not due to the treatment (or independent variable):

SS_{S/A} = \sum_{i}\sum_{j} (Y_{ij} - \bar{Y}_{j})^{2}    (5)
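As an illustration of formulas (4) and (5), the sketch below computes a one-way ANOVA for a few hypothetical groups of Likert scores and cross-checks the result against SciPy's built-in test; it is not the Excel procedure used in the paper, and the group data are invented for demonstration only.

# A minimal sketch of a one-way ANOVA built from formulas (4) and (5).
# The groups are hypothetical Likert-score samples with equal sizes, as assumed in (4).
import numpy as np
from scipy import stats

groups = [np.array([3, 4, 4, 5, 3]),      # e.g. scores given to factor A
          np.array([2, 3, 3, 4, 3]),      # factor B
          np.array([4, 5, 4, 4, 5])]      # factor C

n = len(groups[0])                        # observations per group
grand_mean = np.concatenate(groups).mean()

ss_between = n * sum((g.mean() - grand_mean) ** 2 for g in groups)     # formula (4)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)           # formula (5)

df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)

F = (ss_between / df_between) / (ss_within / df_within)
p_value = stats.f.sf(F, df_between, df_within)    # right-tail probability of the F statistic

print(F, p_value)
print(stats.f_oneway(*groups))            # cross-check with SciPy's built-in test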

Empirical Results

For the data collection, a sample of 25 HR employees was selected, and the socio-demographic characteristics of the respondents are presented in the tables below (tables 4.1, 4.2, 4.3 & 4.4).


Table 4.1 Gender of the Respondents

Gender Frequency Percentage

Female 18 72%

Male 7 28%

Total 25 100%

Source: own elaboration.

The results obtained show that 18 respondents were women and 7 were men, so women clearly predominate in the sample. Sands (2019) noted in his study that more than 86% of HR generalists in the US are women.

Table 4.2 Operating Period

Operating years Frequency Percentage

1 year 5 20%

1 – 5 years 11 44%

6 – 9 years 4 16%

10 years 5 20%

Source: own elaboration.

Most of the HR employees (44%) had been working in their organization for one to five years.

Table 4.3 Number of Employees in the Company

No. of employees Frequency Percentage

1 – 10 3 12%

11 – 50 3 12%

51 – 250 11 44%

250 8 32%

Source: own elaboration.

The majority of the respondents work for companies with 51–250 employees.

Table 4.4 Business Category

Business Category Frequency Percentage
Manufacturing 11 44%
Providing services 11 44%
Others 3 12%

Source: own elaboration.

The share of respondents was the same in the manufacturing category as in the services category – 44%. The basic features of the data collected are presented in the tables below, and the meaningful features for each factor (external and internal) are highlighted in grey (tables 4.5 & 4.6).

Table 4.5 Basic features of the external factors
Factor 1 2 3 4 5 Mean St.dev
Legislative regulations 1 2 11 8 3 3.40 0.191
Provided support 2 3 11 7 2 3.16 0.205
Amount of tax burden 2 2 9 7 5 3.44 0.231
Labor law 1 4 2 6 12 3.96 0.254
State regulations 1 7 5 9 3 3.24 0.225
Employment 2 3 6 5 9 3.64 0.263
Lifestyle and consumer habits 1 - 8 10 6 3.80 0.191
Demographic composition 1 2 7 7 8 3.76 0.225


Income & saving rate - 5 7 10 3 3.44 0.192
GDP 3 5 14 3 - 2.68 0.170
Climate conditions 5 3 8 7 2 2.92 0.251
Infrastructure 3 1 3 12 6 3.68 0.249
Level of production 1 4 7 8 5 3.48 0.224
Globalization 1 2 10 7 5 3.52 0.209
Source: own elaboration.

Table 4.6 Basic features of the internal factors
Factor 1 2 3 4 5 Mean St.dev
Organizational structure - 1 3 17 4 3.96 0.135
Size of the organization 2 4 6 10 3 3.32 0.228
HRM policy & strategic goals 1 4 1 13 6 3.76 0.226
Training 1 2 4 9 9 3.92 0.223
Economic outcome 4 6 4 7 4 3.04 0.273
Devotion of employees - 2 5 8 10 4.04 0.195
Evaluation systems 1 4 5 6 9 3.72 0.248
Organizational culture - 5 4 9 7 3.72 0.220
Wage policy 1 1 4 8 11 4.08 0.215
HRM style management 1 3 2 13 6 3.80 0.216
Communication systems & IS 1 1 7 11 5 3.72 0.195
Organizational climate - 2 6 8 9 3.96 0.195
Adaptation 2 4 3 7 9 3.68 0.269
Motivation - 2 3 4 16 4.36 0.198
Source: own elaboration.

The results show that, for example, the researched factor legislative regulations was evaluated with a mean of 3.4 and a standard deviation of 0.191. The basic features of the internal factors are described in table 4.6, from which we can see that, for example, the factor Training was evaluated with a mean of 3.92 and a standard deviation of 0.223. In both tables the means range between 3 and 4 and the standard deviations are below 1, which indicates that the opinions are not strongly heterogeneous. In most cases the responses are also approximately normally distributed around the mean.


In order to test the internal consistency of the survey and the reliability of the scale, Cronbach's alphas were calculated in Excel. The values of Cronbach's alpha for the two groups of independent factors are presented in table 4.7.

Table 4.7 Cronbach’s alphas

Cronbach’s alpha

External Factors 0.991

Internal Factors 0.943

Source: own elaboration.

The value of Cronbach's alpha for each group of factors was above 0.9: 0.943 for the internal factors and 0.991 for the external factors, indicating consistency of the acquired data.

For further analysis of the researched data, the analysis of variance was applied to evaluate the mentioned external and internal factors that have an impact on HRM. For the hypothesis testing, the significance level of the study was set at α = 0.05, and the external and internal factors were treated as two independent groups. To reach the partial objective of this research, two hypotheses were formulated as follows:

H0: Each evaluated factor within the group has no impact on HRM.

H1: Each evaluated factor within the group has an impact on HRM.

Based on the hypotheses formed, the mentioned external and internal factors were tested separately. After performing the calculations in Excel, the following results were obtained (tables 4.8 & 4.9).

Table 4.8 Impact of external factors on HRM
ANOVA
Source of Variation SS df MS F P-value F crit
Between Groups 41.0714286 13 3.15934066 3.07808681 0.00025347 1.7506351
Within Groups 330.5 322 1.02639752
Total 371.571429 335
Source: own elaboration.

Table 4.9 Impact of internal factors on HRM
ANOVA
Source of Variation SS df MS F P-value F crit
Between Groups 35.1085714 13 2.70065934 2.23458811 0.00818186 1.74935988
Within Groups 406.08 336 1.20857143
Total 441.188571 349
Source: own elaboration.

For the validation of the hypotheses, we compared the p-values with the significance level. The p-values in tables 4.8 & 4.9 are lower than 0.05, which means that there is a statistically significant difference between groups. Therefore, the H0 hypothesis is rejected at the 95% level of confidence, and the mentioned external and internal factors are considered to have an impact on HRM.
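The decision rule can be cross-checked from the reported F statistics and degrees of freedom alone. The short sketch below does so under the assumption that SciPy is available and that the first ANOVA block corresponds to table 4.8 and the second to table 4.9, as presented above; it is not part of the authors' Excel workflow.

# Reproducing the decision rule of tables 4.8 and 4.9 from the reported F statistics
# and degrees of freedom (cross-check sketch only).
from scipy import stats

ALPHA = 0.05
reported = {
    "external factors (table 4.8)": (3.07808681, 13, 322),
    "internal factors (table 4.9)": (2.23458811, 13, 336),
}

for name, (f_stat, df_between, df_within) in reported.items():
    p_value = stats.f.sf(f_stat, df_between, df_within)      # right-tail probability
    f_crit = stats.f.ppf(1 - ALPHA, df_between, df_within)   # critical value at alpha
    decision = "reject H0" if p_value < ALPHA else "fail to reject H0"
    print(f"{name}: p = {p_value:.5f}, F crit = {f_crit:.4f} -> {decision}")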


Discussion

In figure 4.1, we can see the external factors that influence the organization as a whole. It illustrates

various types of external factor and displays Labor law as the biggest influencer. Changes of legislations

enacted by governments will continue to have a dramatic impact on the HRM. In fact, these changes

complicate the life of HR employees by levying detailed and demanding liabilities on employers. Also,

the macroeconomic indicator, employment plays an important role in HRM and expresses the current

situation on the labor market. Meanwhile, the development of indicators like inflation rate or the average

salary have an influence on wage policy, because it must be taken into attention while working on the

yearly salary tariffs. Economic situation such as the fluctuation of GDP levels affect the number of

employees necessitated. In addition, it is essential for the HR manager to know the demographical

development of the country. For instance, the age structure of men and women, the percentage of women

and men in productive age. Demographics can also refer to workforce diversity, where various

promoting programs are to be applied to motivate older and even women workforce. For this reason,

policies and practices of the HR must be adapted to embrace the diversity of the workforce.

Technology’s impact on today’s HR can’t be ignored. Thus, the organizations’ structures have been

redesigned and new programs were instituted for the selection and the evaluation of the employees.

Figure 4.1 External factors influencing HRM

Source: own elaboration.

On the other hand, the internal factors (Figure 4.2) affect the running of the company as a whole. Developing employees' abilities through training can increase their performance and thus help in reaching the organization's strategic goals. Motivation also plays an important role in stimulating individual performance, for example through profit sharing, benefit systems and other motivating programs. Interpersonal relationships in the workplace have a great influence on employees' well-being; building friendly relationships can improve job satisfaction and hence lead to better outcomes. In addition, the management style that the HR department adopts through its leadership role can have a big impact on day-to-day operations, so an organizational climate with a friendly and flexible environment is a necessity in the workplace. When hiring new employees, HR managers should keep in mind the adaptation process of the new workforce; adaptation plays a big role in employee devotion. A devoted employee is usually loyal to the organization and works wholeheartedly, which brings excellent results. The HR strategic policy should be transparent, understood by the employees, and directed toward reaching and fulfilling the organizational goals. When HR plans its procedures, it has to take into consideration the size of the company: the bigger the company, the more sophisticated the processes. For this reason, periodic monitoring of the processes can help identify obstacles and work on their improvement. HR management should also have clearly defined performance measures of the processes and should update these Key Performance Indicators to suit the circumstances. Observing the outcomes of these KPIs reveals shortcomings and helps optimize the daily procedures of HRM.



Figure 4.2 Internal factors influencing HRM

Source: own elaboration.

Conclusion

The main objective of this paper was to identify the external and internal factors that influence HRM and to determine whether they have a significant impact on it.

To identify the external and internal factors affecting daily HR procedures, a structured undisguised questionnaire was sent to a number of HR employees. The factors mentioned in the questionnaire were generated on the basis of the literature, websites and annual reports. HR employees were asked to evaluate the given factors on a Likert scale from 1 to 5, where 1 reflects a very small impact, 2 a small impact, 3 a slight impact, 4 a big impact and 5 a very considerable one.

After a thorough comparative analysis, the researcher identified the most influential factors through descriptive statistics, presented in tables 4.5 & 4.6. The external factors included legislative regulations, labor law, demographic characteristics, employment rates, the amount of the tax burden and the way it is paid, lifestyle and consumer habits, macroeconomic indicators, technological advances, the infrastructure of transport, energy and telecommunication networks, and climate changes. The internal factors included the organizational structure, culture and climate, HR management style or leadership, organizational size, training, the process of adaptation of new employees, motivation, evaluation systems, HR policies and strategic goals, the devotion of employees, wage policies, and economic outcomes where the employee has a share in the profits. Based on the descriptive analysis, the most influential external factors are labor laws, legislative regulations, demographic changes, tax rates, macroeconomic indicators and technological changes; the most influential internal factors are training, wage policies, motivation, evaluation systems, organizational climate and the devotion of employees. To measure the internal consistency of the survey and the reliability of the scale, Cronbach's alphas were calculated. The value of Cronbach's alpha for each group of factors was above 0.9: 0.943 for the internal factors and 0.991 for the external factors, indicating consistency of the acquired data. In addition, analysis of variance was applied to examine the relationship between the independent factors and HRM. The results of the hypothesis testing showed that more than 50% of the mentioned internal and external factors have an influence on HRM.

Taking all the above-mentioned external and internal factors into consideration, an organization can support its development in the desired direction and stimulate its performance measures.



Acknowledgement

This research was financially supported within the SGS 2020/33 project.

References

[1] ANDREW, D., PEDERSEN, P., MCEVOY, C. (2011). Research Methods and Design in Sport Management. Illinois (USA): Human Kinetics. ISBN 9780736073851.
[2] ARMSTRONG, M., TAYLOR, S. (2020). Armstrong's Handbook of Human Resource Management Practice. UK: Kogan Page Publishers. ISBN 9780749498283.
[3] BUDHWAR, P., BARUCH, Y. (2003). Career Management Practices: An Empirical Study. International Journal of Manpower, 24.
[4] CRESWELL, J. W. (1994). Research Design: Qualitative and Quantitative Approaches. USA: Sage Publishing. ISBN 9780803952546.
[5] DESSLER, G. (2008). Human Resources Management. New Jersey: Pearson Prentice Hall. ISBN 978-0131746176.
[6] JACQUES, I. (2013). Quantitative Methods. UK: Pearson Education Limited. ISBN 978-0273776161.
[7] KOUBEK, J. (2006). Řízení lidských zdrojů: Základy moderní personalistiky. 3. vyd. Praha: Management Press. ISBN 8072610333.
[8] MAALDERINK, Y. (2005). Human Resource Management: Functions, Applications, Skill Development. USA: CreateSpace Independent Publishing Platform. ISBN 9781503300118.
[9] MOONEY, P. (2001). Turbo-Charging the HR Function. UK: CIPD Publishing. ISBN 9780852928967.
[10] NASIRIPOUR, A., AFSHAR KAZEMI, M., IZADI, A. (2012). Effect of Different HRM Policies on Potential of Employee Productivity. Research Journal of Recent Sciences, 1(16), pp. 45-54.
[11] OINAS, P., VAN GILS, H. (2001). Identifying Contexts of Learning in Firms and Regions. In: Paivi Oinas and Hein van Gils, The Organizational and Industrial Space. ISBN 978-13151868496.
[12] PDX.edu. Notation and Computation of One-Way ANOVA [online]. [cit. 2020-01-11]. Available at: <http://web.pdx.edu/~newsomj/uvclass/ho_ANOVA.pdf>.
[13] PECÁKOVÁ, I. (2011). Statistika v terénních průzkumech. Praha: Professional Publishing. ISBN 978-8074310393.
[14] PITRA, Z. (2008). Dovednosti a image managera. 2. vyd. Praha: BIVS. ISBN 978-8072651306.
[15] PORTER, M. (2008). On Competition. USA: Harvard Business Press. ISBN 9781422126967.
[16] ROWLEY, C., JACKSON, K. (2010). Human Resource Management: The Key Concepts. UK: Routledge. ISBN 9781136901355.
[17] SAND, B. Why is the HR Profession Dominated by Women [online]. [cit. 2020-01-11]. Available at: <https://study.com/blog/why-is-the-hr-profession-dominated-by-women.html>.


EVALUATION OF THE EFFICIENT PRICE OF COMPENSATION OF SELECTED PUBLIC
TRANSPORT IN THE OLOMOUC REGION AND THE MORAVIAN–SILESIAN REGION

Natálie Konečná1

1Department of Public Economics, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

This paper deals with an evaluation of the efficiency of the compensation price of selected public transport, specifically suburban bus transport, in 24 selected areas of the Moravian-Silesian Region and the Olomouc Region. All data used for the Moravian-Silesian Region are accessible in the National Electronic Tool (NEN); data for the Olomouc Region (OR) come from the Tenderarena electronic marketplace. The tenders were opened in 2018–2019, and contracts were concluded with the chosen carriers in these areas. To estimate technical efficiency, output-oriented and input-oriented DEA models were chosen with constant and variable returns to scale, working with one input variable (competitive price for 1 vehicle-kilometer) and two output variables (number of connections and number of vehicle-kilometers). Based on the performed analysis, it was found that more than 50% of the contracted compensations were inefficient in all models. The degree of inefficiency is very dispersed in all models (output-oriented and input-oriented, with constant and variable returns to scale). Only the outputs that have been used in the model can be objectively evaluated.

Keywords

bus transport, Data Envelopment Analysis, efficiency, compensation

JEL Classification

C21, C67, R48

Introduction

The paper focuses on the issue of securing transport services in the regions in the context of the efficiency of public expenditure in the form of price compensation for public services. From the point of view of goods, public transport belongs to mixed public goods, both in terms of provision and financing and in terms of consumption. Both public administration (the state, regions and municipalities) and the private sector, which is usually the service provider, are involved in securing public transport. Specifically, the article focuses on suburban bus transport in the Moravian-Silesian and Olomouc regions.

The obligation to provide transport services at the regional level results from Act No. 194/2010 Coll., on public passenger transport services and on amendments to other acts. Transport service is defined according to § 2 of this Act: "Transport service means ensuring transport on all days of the week, primarily to schools and school facilities, to public authorities, to work, to health establishments providing basic health care and to meeting cultural, recreational and social needs, including transport back, contributing to sustainable territorial district development."

The regions are divided into individual areas for the purpose of ensuring transport services. For each

given area, the contracting authority, in this case the region, announces a public contract for the provision

of transport services, thus selecting a specific public service provider. Contracts are concluded for a

period of 10 years, mainly because the longer term of these contracts brings some stability to public

transport and also allows carriers to invest more in the fleet. (Transport plan of the MSK territory for

the period 2017–2021, Transport plan of the Olomouc region)

The regions and municipalities have the right to determine the scope of this service, whether in terms of the required number of connections or the number of vehicle kilometers traveled, and also whether the service will be provided by public rail passenger transport, public regular (bus) transport, or a combination of the two. In this way they fulfill the condition under the Public Transport Services Act.


Furthermore, the regions are obliged to create a road transport service plan for at least 5 years, which should in particular map the provision of public passenger transport services in the given territory, the extent of the expected compensation and the related issue of public service contracts. Regions may provide the public service on their own or conclude public service contracts for the carriage of passengers with carriers. If the region, i.e. the contracting authority, opts for a tendering procedure on the basis of which the contract will be concluded, then, in addition to the aforementioned Act No. 194/2010 Coll., on public passenger transport services and amending other acts, it is governed by Act No. 134/2016 Coll., on public procurement, which further defines the award procedure. The contract with the winning bidder must be concluded in accordance with these laws and with European Union legislation. This process is supervised by the Office for the Protection of Competition.

Within the framework of the concluded contracts, the range of transport services in the individual parts of the region is set so as to meet the condition of transport service provision pursuant to Act No. 194/2010 Coll., on public services. Thanks to the competitive market environment, the price of transport performance should be pushed down, and financial savings should thus be achieved throughout the region. The paper focuses on 24 areas from the Moravian-Silesian and Olomouc regions where the tendering procedure was initiated and, at the same time, the public service contract was concluded in the period 2018–2019. For the purposes of the efficiency evaluation, the individual areas are called DMUs. Specifically, these areas are, in the MSK: Karvinsko, Orlovsko, Frydlant Region, Novojicin East, Novojicin West, Krnov, Bruntal, Rymarov, Opavsko, Vitkov, Frydek-Mistek and Bilovecko. From the Olomouc Region the areas are: Olomouc Northeast, Olomouc Southwest, Prerov North and Lipnicko, Hranicko, Sternberg and Uničovsko, Prerov South, Litovelsko, Prostejov Northwest, Sumperne North, Sumperne South, Mohelnicko and Zabreh. (Transport plan of the MSK territory for the period 2017–2021, Transport plan of the Olomouc region)

The aim of the paper is to evaluate the technical efficiency of 10-year compensation of suburban bus

transport in 24 selected service areas of the Moravian-Silesian and Olomouc regions according to

selected inputs and outputs.

Technical efficiency is estimated by input-oriented and output-oriented Data Envelopment Analysis (DEA) models.

RQ1: Is more than 50% of contracted compensation effective in selected regions?

RQ2: Do contracts in the Moravian-Silesian and Olomouc regions achieve comparable average

efficiency values?

Literature Review

Public transport has been the subject of evaluation and examination in a number of works that have received considerable attention in recent years. The problems of transport, especially the efficiency of spending and its utilization, are dealt with by authors in both European and non-European countries.

The price of compensation is a public expenditure; in this case it is an expenditure from the regional budget. The assessment of technical efficiency in bus transport is also addressed by Hanauerová (2019), who likewise uses a DEA model to assess efficiency. Beck and Walter (2013) deal with the factors that influence the bid price in Germany. Dementiev (2018) also deals with adequate price compensation in his study. Rosell (2017) addresses cost-effectiveness using the example of municipalities in the province of Barcelona, concluding that the smaller the municipality that has to provide transport services, the less efficient it is. Vigren (2018) deals with factors influencing those interested in bidding for public transport services; the study uses a Poisson model and concludes that in Sweden the technical safety requirements, in particular the bus requirements, are a limiting factor. Mathisen (2016) examines, using the example of Norway, whether it is really necessary to select carriers on the basis of public contracts, which involves some uncertainty for tenderers with regard to the outcome and conclusion of the contract.

There is also the question of how long public transport will be provided to the extent required by law. The increasing number of cars in practice means less use of this service, and with it increasing pressure on the efficiency,


or rather the utilization, of public transport. In their study, Zhang et al. (2019) examine, using the example of six cities in China, whether a policy restricting the purchase and use of cars will affect the development of public transport. Migliore et al. (2013) point out in their study the importance of the availability of public transport, which would increase the efficiency and effectiveness of this public service.

Compensation in bus transport

The selection of the carrier by the contracting authority is governed by Act No. 194/2010 Coll., on public passenger transport services and amending other laws. This law allows a contract to be concluded either by tender or by direct award. When choosing the second option, the conditions arising from Regulation (EC) No 1370/2007 of the European Parliament and of the Council must be fulfilled. In the Czech Republic, this regulation is complemented by Decree No. 296/2010 Coll., on the procedures for establishing the financial model and determining the maximum amount of compensation, which regulates the method of construction of the financial model (IODA, 2015).

Regulation (EC) No 1370/2007 of the European Parliament and of the Council on public passenger transport services by rail and by road and repealing Council Regulations (EEC) Nos 1191/69 and 1107/70 states that public service compensation means:

"Any advantage, in particular financial, granted directly or indirectly by the competent authority from public sources during or in respect of the period of implementation of the public service obligation".

The region, i.e. the client and contracting authority, commits itself in the contract to a certain price compensation, i.e. a payment for ensuring the transport serviceability of the territory by the selected carrier. This is a financial cost that the contracting authority (the region) has to pay from its budget to the carrier for the provision of this public service on the basis of the concluded contract. The price specified in the contract arises as the bid price of a particular carrier in the public contract: in the Tender Documentation, the client determines in advance the number of connections and the number of vehicle kilometers required for the given location, and by concluding the contract the selected tenderer undertakes to perform within the given scope for the tendered price. The winner is the carrier whose price is the lowest, but not unreasonably low; in that case the participant would be excluded. The selected carrier provides the public service through its technical equipment and its staff. By fulfilling the contracting authority's requirements, i.e. the required number of connections and the number of vehicle kilometers traveled, the area is served by public transport. In this way the region guarantees transport accessibility in the given locality and thus fulfills the obligation arising from the law. In general, price compensation can be understood as a subsidy from public budgets, i.e. from the contracting authority (Hanauerova, 2018).

Act No. 194/2010 Coll., On public passenger transport services and amending other acts, stipulates that

the amount of compensation must be reasonable, otherwise the client may not conclude the contract.

Should such a contract still be concluded, the contract shall be null and void. Prior to the conclusion of

the contract, in the case of direct award, the selected carrier is required to submit a financial model of

costs, revenues and net income. Similarly, the selected carrier shall submit the financial model for the

tender before signing the contract, unless otherwise specified in the tender dossier. (IODA, 2015)

Methodology and Data

The essence of the DEA method lies in dividing the surveyed objects into efficient and inefficient ones according to the size of the consumed resources and the amount of outputs (production). The solution of DEA models defines an empirical production function. Jablonský and Dlouhý (2015) identify models that maximize the value of outputs while maintaining the value of inputs as output-oriented models. In the case of minimizing the value of inputs, again while maintaining the value of the outputs, we talk about input-oriented models. The combination of these two approaches creates additive, slack-based models.

In output-oriented models, output variables can be used to determine efficiency. Such models calculate

the technical efficiency coefficient, which is determined by the ratio of the weighted sum of inputs to

the weighted sum of outputs, but weights are sought so that the value of the coefficient g is greater than


or equal to one. Thus, for an efficient unit Uq the coefficient is g = 1, and for an inefficient unit g > 1. In order to make the inefficient units efficient, it is necessary to increase some or all of the outputs (Klieštik, 2009).

In the input-oriented models, the efficient units within the comparison group are those with a coefficient value equal to one (g = 1). Within this homogeneous group, units with a coefficient of less than one (g < 1) are inefficient. This value then provides feedback on how to improve the inputs so that an inefficient unit becomes efficient (Klieštik, 2009).

Reaching the efficiency threshold in these models is possible in the following ways:

• increasing the value of outputs produced while maintaining current input levels – output-oriented models;

• reducing the value of inputs consumed while maintaining current output levels – input-oriented models;

• a combination of both approaches – additive, slack-based models (Jablonsky, Dlouhy, 2015; Vrabkova, Vankova, 2015).
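As a purely illustrative reading of these coefficients (the numbers below are hypothetical and are not results of this paper), the frontier targets can be written as

y^{target} = g \cdot y^{current} (output orientation), e.g. g = 1.25 means the outputs would have to grow by about 25 % at an unchanged input;

x^{target} = g \cdot x^{current} (input orientation), e.g. g = 0.80 means the input – here the contracted price per vehicle-kilometer – would have to fall by about 20 % at unchanged outputs.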

The estimation of the technical efficiency of contracted compensation in suburban bus transport in the 24 areas of the Moravian-Silesian and Olomouc regions was carried out according to the following procedure:

- definition of one input and two outputs for estimating technical efficiency, and a statistical description of the selected variables (see Table 1);

- calculation of an output-oriented DEA efficiency model with constant returns to scale (CCR), according to formula (1);

- calculation of an output-oriented DEA efficiency model with variable returns to scale (BCC), according to formula (2);

- calculation of an input-oriented DEA efficiency model with constant returns to scale (CCR), according to formula (3);

- calculation of an input-oriented DEA efficiency model with variable returns to scale (BCC), according to formula (4).

The output-oriented CCR model can be formulated as follows:

Minimize: g = \sum_{j=1}^{m} v_j x_{jq}    (1)

subject to: \sum_{i=1}^{r} u_i y_{ik} \le \sum_{j=1}^{m} v_j x_{jk}, k = 1, 2, …, n,
\sum_{i=1}^{r} u_i y_{iq} = 1,
u_i \ge \varepsilon, i = 1, 2, …, r,
v_j \ge \varepsilon, j = 1, 2, …, m.

The output-oriented BCC model can be formulated as follows:

Minimize: g = \sum_{j=1}^{m} v_j x_{jq} + v    (2)

subject to: \sum_{i=1}^{r} u_i y_{ik} \le \sum_{j=1}^{m} v_j x_{jk} + v, k = 1, 2, …, n,
\sum_{i=1}^{r} u_i y_{iq} = 1,
u_i \ge \varepsilon, i = 1, 2, …, r,
v_j \ge \varepsilon, j = 1, 2, …, m,
v free.


The input-oriented CCR model can be formulated as follows:

Maximize: z = \sum_{i=1}^{r} u_i y_{iq}    (3)

subject to: \sum_{i=1}^{r} u_i y_{ik} \le \sum_{j=1}^{m} v_j x_{jk}, k = 1, 2, …, n,
\sum_{j=1}^{m} v_j x_{jq} = 1,
u_i \ge \varepsilon, i = 1, 2, …, r,
v_j \ge \varepsilon, j = 1, 2, …, m.

The input-oriented BCC model can be formulated as follows:

Maximize: z = \sum_{i=1}^{r} u_i y_{iq} + \mu    (4)

subject to: \sum_{i=1}^{r} u_i y_{ik} + \mu \le \sum_{j=1}^{m} v_j x_{jk}, k = 1, 2, …, n,
\sum_{j=1}^{m} v_j x_{jq} = 1,
u_i \ge \varepsilon, i = 1, 2, …, r,
v_j \ge \varepsilon, j = 1, 2, …, m,
\mu free.
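For illustration, the input-oriented CCR model (3) can be solved as a small linear program. The sketch below uses scipy.optimize.linprog with three invented DMUs (one input, two outputs); it is not the software used for the results reported in this paper, and the data are hypothetical.

# A minimal sketch of the input-oriented CCR model (formula (3)) in multiplier form,
# solved with scipy.optimize.linprog. The DMUs below are hypothetical, not the 24
# contracted areas analysed in the paper.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs; X: one input (price per vkm), Y: two outputs (connections, vkm)
X = np.array([[35.0], [30.5], [38.1]])
Y = np.array([[18.0, 18.0], [43.0, 39.4], [12.0, 8.4]])
EPS = 1e-6                                   # lower bound on the weights (epsilon)

def ccr_input_oriented(q: int) -> float:
    """Efficiency score z of DMU q; z = 1 means efficient, z < 1 inefficient."""
    n, m = X.shape                            # n DMUs, m inputs
    _, r = Y.shape                            # r outputs
    # decision variables: [u_1..u_r, v_1..v_m]
    c = np.concatenate([-Y[q], np.zeros(m)])              # maximize sum_i u_i * y_iq
    A_ub = np.hstack([Y, -X])                              # sum_i u_i y_ik - sum_j v_j x_jk <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(r), X[q]]).reshape(1, -1)   # sum_j v_j x_jq = 1
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(EPS, None)] * (r + m), method="highs")
    return -res.fun

for q in range(len(X)):
    print(f"DMU {q}: efficiency = {ccr_input_oriented(q):.3f}")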

In order to achieve the goal, input- and output-oriented DEA models are used in the paper. In both orientations, the competitive price is chosen as the input variable, and the output side is formed by two variables, namely the number of connections and the estimated number of vehicle kilometers traveled.

Input

X1 – competitive price (CZK/vkm). The competitive price is the price compensation paid to carriers per vehicle-kilometer covered under the contract. These data are available in the National Electronic Tool (NEN) for MSK procurement. The Olomouc Region publishes its public tenders in the electronic marketplace TENDERARENA – electronic tool eGordion.

Outputs

Y1 – number of connections requested by the contracting authority (the Moravian-Silesian and Olomouc regions). Each contracting authority determines a more detailed specification of connections for the tendered area itself. The requirements for the number of connections are specified in the Tender Documentation, which is always available in the electronic tools of the contracting authorities.

Y2 – the estimated number of vehicle kilometers traveled in 10 years in the given location. This is the anticipated number of vehicle kilometers traveled that is needed to fulfill the obligation to serve the given territory. The contracting authority also specifies this requirement in the Tender Documentation for a specific area. The documents are available in the electronic public procurement tools of the respective regions.

Table 1. Statistical characteristics of input and output
        X1 – competitive price (CZK/vkm)   Y1 – number of connections   Y2 – vkm/10 years
Min.    30.46                               9                            8 350 812
Max.    38.14                               43                           39 383 743
Mean    35.37                               18.08                        18 061 082
Median  35.55                               16                           15 851 699
SD      1.95                                8                            7 900 933.6
Source: Custom processing.

From Tab. 1 it is evident that the lowest competitive price in the compared regions, CZK 30.46/vkm, was contracted for the Sternberg and Uničov area of the Olomouc Region. On the other hand, the contract


with the highest price (CZK 38.14/vkm) was concluded in the MSK for the Frydlant Region. The average value of the contracted price across the 24 areas in these regions is CZK 35.37/vkm. The median value of the input variable is CZK 35.55/vkm. The standard deviation of the input is 1.95.

On the output side, two variables are selected, namely the number of connections and the estimated number of vehicle kilometers traveled within a time horizon of 10 years. Tab. 1 shows that the lowest number of required connections is 9, and this number is the same for both regions: in the Moravian-Silesian Region it is the Karviná area, in the Olomouc Region it is the Přerov South area. The highest number of connections (43) is requested by the contracting authority for Novojičínsko East in the Moravian-Silesian Region. The average number of connections is 18.08 per area, the median of this output is 16, and the standard deviation of the output (Y1) is 8.

The second selected output is defined as the number of vehicle kilometers required by the client in the given location. The lowest number of vehicle kilometers is required for the Frydlant Region (8 350 thousand vehicle-kilometers per 10 years) in the Moravian-Silesian Region. The average mileage per area is 18 061 thousand vkm per 10 years, and the median is 15 851 thousand vkm per 10 years. The standard deviation of the second output variable (Y2) is 7 900 thousand vkm per 10 years.

Empirical Results

Results of output-oriented models

The results of the efficiency calculation for the output-oriented constant returns to scale (CRS) model show that, out of the 24 DMUs, only one procurement is efficient – Novojičínsko East (DMU15). The contract concluded between the carrier ČSAD Vsetín, a.s. and the region guarantees the provision of 43 connections and 39 383 thousand vehicle-kilometers per 10 years with an agreed price compensation of CZK 37.86/vkm.

In terms of efficiency, the contract for the Opava area (DMU18) also performs well; it contractually secures 34 connections and 35 559 thousand vehicle-kilometers per 10 years with a compensation of CZK 37.01/vkm, again with the carrier ČSAD Vsetín, a.s.

On the other hand, the public contract for the Frydlant Region (DMU22) comes out as very inefficient in the model: 12 connections and 8 350 thousand vehicle-kilometers per 10 years are needed to serve the area, while the price compensation amounts to CZK 38.14/vkm. This contract was re-announced because the price per vehicle-kilometer was exceeded in the first tender for passenger transport in this locality. This may be one of the reasons why the public service contract in this area was awarded with the highest price compensation despite requiring the smallest number of vehicle kilometers and the second smallest number of connections (12). In both cases, only one participant (ČSAD Frýdek-Místek a.s.) entered the public contract and became the winning carrier for the given territory.

In the field of inefficiency, in addition to the already mentioned Frydlant Region, where the tender was announced repeatedly, there are also the contracts for Olomouc Southwest (DMU2), Litovelsko (DMU7) and Šumperk South (DMU10) in the Olomouc Region and Krnovsko (DMU19) in the MSK. Vojtila Trans s.r.o. won the public contract for the Olomouc Southwest area and committed itself to serving the territory with 12 connections and 10 369 thousand vehicle-kilometers per 10 years for the price of CZK 35.76/vkm. Three companies entered the tender for the Litovel area; the winner was the carrier ARRIVA MORAVA a.s. In this locality there are 13 connections and 10 783 thousand driven kilometers per 10 years at an agreed price of CZK 36.20/vkm. As already mentioned, for the operation of the Šumperk South area the Olomouc Region had to announce the public contract repeatedly. Three carriers entered this tender and the winning bidder was ARRIVA MORAVA a.s. The carrier serves the territory on the basis of the Tender Documentation requirements of 13 connections and 11 278 thousand vehicle-kilometers per 10 years for the agreed price of CZK 34.70/vkm (Figure 1).


Figure 1. Results of the output-oriented model with constant and variable returns to scale

Source: own elaboration.

A comparative view of the output-oriented CRS and VRS models is shown in Tab. 2. According to the output-oriented model with constant returns to scale, only the public service contract for Novojičínsko East (DMU15) is efficiently awarded. In contrast, in the output-oriented model with variable returns to scale, three public contracts are efficiently awarded, namely the transport service orders for Šternberk and Uničovsko (DMU5) and Prostějov Northwest (DMU8) from the Olomouc Region, and again Novojičínsko East (DMU15), which achieves efficiency in both selected models. On the other side of the scale are those orders that were awarded inefficiently. Based on the results of both output-oriented models, the public tenders for the provision of transport services in Olomouc Southwest (DMU2) and the Frydlant Region (DMU22) are awarded inefficiently. According to the output-oriented model with constant returns to scale, the public contracts for the provision of transport services in the Litovelsko (DMU7) and Šumperk South (DMU10) areas of the Olomouc Region are also inefficient.

Table 2. Summarized results of efficiency modeling of the output-oriented model with CRS and VRS

Range of g         CRS (number – DMUs)                        VRS (number – DMUs)
[1]                1 – D15                                    3 – D5, D8, D15
[1.001 – 1.499]    3 – D8, D18, D23                           3 – D14, D18, D23
[1.500 – 1.999]    5 – D5, D12, D14, D16, D24                 8 – D1, D3, D4, D6, D12, D13, D16, D24
[2.000 – 2.499]    8 – D1, D3, D4, D6, D9, D13, D20, D21      5 – D9, D11, D17, D20, D21
[2.500 – 2.999]    2 – D11, D17                               3 – D7, D10, D19
[3.000 +]          5 – D2, D7, D10, D19, D22                  2 – D2, D22

Source: own elaboration.

Results of input-oriented models

Of all 24 compared units, only DMU15 (Novojičínsko East) is efficient in the input-oriented model with constant returns to scale. This area was also the most efficient in the output-oriented model, with both constant and variable returns to scale. The other units achieve different degrees of inefficiency. The public tenders for the Frýdlant Region (DMU22) in the Moravian-Silesian Region and for the Olomouc Southwest (DMU2) in the Olomouc Region are inefficient. These units also showed inefficiency in the previous output-oriented



models (CRS and VRS). The average efficiency of the input-oriented model with constant returns to scale is 0.515; the standard deviation is 0.189.

The results of the input-oriented model with variable returns to scale can be described similarly. In this model, as in the output-oriented VRS model, DMU5 – Šternberk and Uničovsko, DMU8 – Prostějov Northwest and DMU15 – Novojičínsko East are the efficient units. It can be stated that this model shows efficiency scores that are less dispersed than in the input-oriented CRS model. The least efficient unit is again DMU22 – the Frýdlant Region, which comes out inefficient in all the mentioned models. The average efficiency of units in the input-oriented model with variable returns to scale is 0.892, with a standard deviation of 0.057 (Figure 2).

Figure 2. Results of the input-oriented model with constant and variable returns to scale

Source: own elaboration.

The comparison of the input-oriented models with constant and variable returns to scale shows that in the constant-returns model the efficiency of individual units is significantly more dispersed. The largest number of units, 13 in total, falls in the CRS model into the range 0.30 – 0.49, and only DMU15 – the Novojičínsko East area – comes out efficient. In contrast, in the variable-returns model the largest number of units (20) lies in the range 0.80 – 0.99 (see Table 3).



Table 3. Summarized results of efficiency modeling of the input-oriented model with CRS and VRS

Range of g        CRS (number – DMUs)                                              VRS (number – DMUs)
[1]               1 – D15                                                          3 – D5, D8, D15
[0.99 – 0.80]     2 – D18, D23                                                     20 – D1, D2, D3, D4, D6, D7, D9, D10, D11, D12, D13, D14, D16, D17, D18, D19, D20, D21, D23, D24
[0.79 – 0.70]     1 – D8                                                           1 – D22
[0.69 – 0.50]     5 – D5, D12, D14, D16, D24                                       0 – none
[0.49 – 0.30]     13 – D1, D3, D4, D6, D7, D9, D10, D11, D13, D17, D19, D20, D21   0 – none
[0.29 – 0]        2 – D2, D22                                                      0 – none

Source: own elaboration.

Conclusion

It is worth noting that the number of passenger cars is increasing and people use this mode of transport more often than in the past. This is one of the reasons why interest in public transport is declining. Nevertheless, public transport still has its justification.

For the region as a contracting authority, Act No. 194/2010 Coll., on public passenger transport services and amending other laws, requires the provision of basic transport accessibility; an equally important factor is how efficiently public funds are spent on providing this public service.

The aim of the paper was to evaluate the technical efficiency of 10-year compensation of suburban bus

transport in 24 selected service areas of the Moravian-Silesian and Olomouc regions according to

selected inputs and outputs.

The Data Envelopment Analysis model was used to achieve this goal. For research question RQ1: "Is more than 50% of contracted compensations effective in the selected regions?", the efficiency calculation showed that in both the input- and output-oriented CRS and VRS models, more than 50% of the contracted compensations are inefficient. For research question RQ2: "Do contracts in the Moravian-Silesian and Olomouc regions achieve comparable average efficiency values?", the efficiency calculation in the output-oriented DEA model, with both constant and variable returns to scale, shows that contracts concluded in the Moravian-Silesian Region achieve higher efficiency. The results of the input-oriented model with constant returns to scale show that also in this case contracts in MSK have higher efficiency. In contrast, in the input-oriented model with variable returns to scale, the efficiency in both regions is comparable.

Objectively, it is not recommended to increase the number of connections or to require more vehicle-kilometers,

as this could be inefficient in terms of unused line capacity. For example, the Frydlant Region itself is

a mountainous area with a large number of small municipalities where the inhabitants are forced to

commute to work in larger neighboring towns. In order to make inefficient public procurement effective,

the output would have to be increased. The required number of connections and the number of vehicle

kilometers is determined by the client. This implies that the contracting authority itself should know

how many connections and vehicle kilometers are needed to ensure the serviceability of the territory.

Thus, increasing outputs could be inefficient in terms of the use of this public service by citizens and it

would be completely unnecessary for such an increase to occur. The price also reflects the mountainous


terrain, the population (i.e. potential passengers), the distance between individual stops, as well as the size of the serviced area.

The selection of a particular carrier is carried out in accordance with Act No. 134/2016 Coll., On Public

Procurement, based on which a contract is concluded with the selected (winning) tenderer. The contract is concluded with the carrier who submitted the most advantageous offer from the perspective of the region, i.e. the contracting authority. The best bid is the one that brings the lowest price compensation for the requirements set by the contracting authority in the Tender Documentation; a so-called competition for the lowest price therefore occurs. As the payment for the public service comes from the region's budget, a price compensation can be considered effective if it is as low as possible for servicing the region's territory, yet not unreasonably low. Should the region nevertheless conclude a contract with a carrier whose bid is unreasonably low, Act No. 134/2016 Coll., on public procurement, would be violated, and the Office for the Protection of Competition (ÚOHS) would invalidate the public tender and the concluded contract.

The DEA model was used to estimate technical efficiency. The input side consists of one variable – the competitive price; on the output side, two variables are selected – the required number of connections and the expected number of vehicle-kilometers traveled. The estimation of technical efficiency was made on the basis of 24 homogeneous units, which represent the individual areas forming the territory of the two regions. At these locations, a carrier has recently been selected for the upcoming 10-year period. Each of the two regions contributes the same number of 12 localities where a contract was concluded and the service obligation assumed.
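For illustration, the envelopment form of the input-oriented CRS (CCR) model can be solved as a small linear programme. The sketch below is a minimal, hypothetical example with three made-up units and the input/output structure described above (one price input, two outputs); it is not the paper's dataset or code.

```python
# Minimal sketch (illustrative data only): input-oriented CRS (CCR) DEA model
# solved as a linear programme with one input (price compensation per vkm)
# and two outputs (connections, thousand vehicle-kilometres per 10 years).
import numpy as np
from scipy.optimize import linprog

X = np.array([[37.01], [38.14], [34.70]])             # inputs, one row per DMU
Y = np.array([[34, 35559], [12, 8350], [13, 11278]])  # outputs, one row per DMU
n = X.shape[0]                                        # number of DMUs

def ccr_input_efficiency(o):
    """min theta  s.t.  sum_j lam_j x_j <= theta * x_o,  sum_j lam_j y_j >= y_o,  lam >= 0."""
    c = np.r_[1.0, np.zeros(n)]                       # minimise theta; variables = [theta, lam_1..lam_n]
    A_in = np.c_[-X[o].reshape(-1, 1), X.T]           # sum_j lam_j x_j - theta * x_o <= 0
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]    # -sum_j lam_j y_j <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
    bounds = [(None, None)] + [(0, None)] * n         # theta free, lambdas non-negative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

for o in range(n):
    print(f"DMU{o + 1}: theta = {ccr_input_efficiency(o):.3f}")
```

The variable-returns (VRS, BCC) variant would add the convexity constraint that the lambdas sum to one to the same programme.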

When estimating technical efficiency in both the input-oriented and output-oriented models with constant returns to scale, only the public contract for the Novojičínsko East (DMU15) area proves efficient. In both variable-returns-to-scale models, in addition to the already mentioned DMU15, the tenders for Šternberk and Uničovsko (DMU5) and Prostějov Northwest (DMU8) were also estimated as efficient. In the output-oriented CRS and VRS models the estimated efficiency is very scattered, and in the input-oriented constant-returns (CRS) model the efficiency values are also very dispersed. In contrast, in the input-oriented variable-returns model, 20 units lie on the g scale in the range 0.80 – 0.99; the least efficient unit of this model (DMU22 – Frýdlant Region II) scores 0.799. Overall, the public contract for the Frýdlant II area is inefficient in all models. The reason for concluding a contract with such price compensation for the given requirements may be the fact that the contract was awarded only on the second attempt.

All efficiency results obtained through the DEA model are limited both by the selection of inputs and outputs and by the number of compared units (DMUs). Furthermore, the results are relative: which units appear efficient or inefficient depends on the other units included in the model.

References

[1] Act No. 194/2010 Coll. on public passenger transport service, as amended.

[2] Act No. 111/1994 Coll., on road transport.

[3] Act No. 134/2016 Coll., on public procurement, as amended.

[4] Beck, A., Walter, M. (2013). Factors affecting tender prices in local bus transport: Evidence from

Germany. Journal of Transport Economics and Policy, 47 (PART2), pp. 265-278.

[5] Coelli, T. J., Prasada Rao, D. S., O'Donnell, C. J., Battese, G. E. (2005). An Introduction to

Efficiency and Productivity Analysis. New York: Springer Science.

[6] Cooper, W. W., Seiford, L. M., Tone, K. (2007). Data Envelopment Analysis. New York: Springer.

[7] Dementiev, A. (2018). Contracting out public transport services to vertical partnerships. Research

in Transportation Economics, Vol. 69, pp. 126-134.


[8] Hanauerová, Eliška. (2018). Optimalizace kritérií veřejných soutěží v hromadné dopravě osob.

Disertační práce. 2018. MASARYKOVA UNIVERZITA, Ekonomicko-správní fakulta, Brno.

[9] Hanauerová, E. (2019), Assessing the technical efficiency of public procurements in the bus

transportation sector in the Czech Republic. Socio-Economic Planning Sciences.Vol. 66, pp. 105-

111.

[10] IODA (2015). Veřejná doprava v České republice. IODA Informace pro dopravní analýzy.

Fakulta dopravní ČVUT.

[11] Ivan, I. (2010). Advantage of carpooling in comparison with individual and public transport: case

study of the Czech Republic. Geographia Technica [online]. 2010, 9(1), pp. 36-46 [cit. 2019-12-

20]. ISSN 18425135.

[12] Jablonský, J., Dlouhý, M. (2015). Modely hodnocení efektivnosti a alokace zdrojů. Praha:

Professional Publishing.

[13] Kliestik, T. (2009). Kvantifikácia efektivity činností dopravných podnikov pomocou data

envelopment analysis. E a M: Ekonomie a Management. 12. 133-145.

[14] Kleprlík, J. (2013). Analýza plánů dopravní obslužnosti krajů a návrhy změn plánů dopravní

obslužnosti kraje. Perner´s contacts.

[15] Mathisen, T.A. (2016). Competitive tendering and cross-shareholding in public passenger

transport. Transport Policy, 48, pp. 45-48.

[16] Migliore, M., Lo Burgio, A., Maritano, L., Catalano, M., Zangara, A. (2013). Modelling the

accessibility to public local transport to increase the efficiency and effectiveness of the service:

The case study of the Roccella area in Palermo. WIT Transactions on the Built Environment, Vol.

130, pp. 163-173.

[17] MINISTRY OF TRANSPORT. Yearbook 2017. [online]. [cit.2019-12-15]. Available from

https://www.sydos.cz/cs/rocenka-2017/index.html

[18] MORAVSKOSLEZSKÝ KRAJ (2019). MSR Plan of transport services for 2017 – 2021

[online].[cit.2019-07-05]. Available from https://www.msk.cz/cz/doprava/plan-dopravni-

obsluznosti-uzemi-moravskoslezskeho-kraje-40792/

[19] NATIONAL ELECTRONIC TOOL. Moravskoslezský kraj – Seznam uzavřených zadávacích

postupů. [online]. [cit.2019-12-20]. Available from

https://nen.nipez.cz/SeznamPlatnychProfiluZadavatelu/MultiprofilZakladniUdajeOZadavateliM-

20523824/SeznamUzavrenychZadavacichPostupu-20523824

[20] OLOMOUCKÝ KRAJ (2019). Olomouc Region Plan of transport services. [online].[cit.2019-12-

05]. https://www.kr-olomoucky.cz/plan-dopravni-obsluznosti-uzemi-olomouckeho-kraje-

aktuality-632.html

[21] Rosell, J. (2017). Urban bus contractual regimes in small- and medium-sized municipalities:

Competitive tendering or negotiation? Transport Policy, 60, pp. 54-62.

[22] TENDERARENA. Olomoucký kraj. [online]. [cit.2019-12-15]. Available from

https://www.tenderarena.cz/profil/detail.jsf?identifikator=Olomouckykraj

[23] Vigren, A. (2018). How many want to drive the bus? Analyzing the number of bids for public

transport bus contracts. Transport Policy, 72, pp. 138-147

[24] Vrabková, I., Vaňková, I. (2015). Evaluation Models of Efficiency and Quality of Bed Care in

Hospitals. Ostrava: VŠB-TUO.

[25] Zhang, L., Long, R., Chen, H. (2019). Do car restriction policies effectively promote the

development of public transport? World Development, 119, pp. 100-110.


EVALUATION OF CSR DISCLOSURE OF THE BIGGEST COMPANIES IN CZECH

REPUBLIC WITH MCDM METHODS

František Konečný1

1Department of Economics, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

Corporate Social Responsibility (CSR) is a concept focused on the future of a sustainable business environment. Sustainability is a very popular keyword, and CSR is a theory that falls under this umbrella term. As a practical concept, however, CSR is limited in several ways. The predominantly qualitative

conception of the topic makes it difficult to comprehensively assess the level of CSR in a specific

company. The aim of this paper is to evaluate the social responsibility of selected companies using

multi-criteria decision-making methods AHP and TOPSIS. In addition, the evaluation results are used

for comparing selected companies and ranking.

Keywords

Corporate social responsibility, Corporate social performance, Analytic Hierarchy Process, TOPSIS method

JEL Classification

M14, M40, Q56, C38

Introduction

In the last two decades, sustainability or sustainable development has been a frequently used term in

various fields in the academic environment, but also in the private business sector, both nationally and

transnationally. In 2019, the European Commission explicitly set out its strategies, objectives and

guidelines for sustainable future development. Although sustainability and corporate social

responsibility (CSR) are not necessarily the same, both approaches are strongly forward-looking. There

are several studies that contribute to understanding the long-term benefits of CSR. Although CSR is

partly a current trend, the term itself has undergone a long evolution since its major expansion in the

1960s and 1970s. Later, the focus shifted from defining CSR to more practical approaches such as

corporate social performance (CSP) and corporate financial performance, stakeholder theory, business

ethics or other alternative frameworks (Carroll, 1999; Carroll, Schmidt, Rynes, 2016).

The aim of this work is a comprehensive comparison of the social responsibility activities of the five

largest companies in the Czech Republic by turnover. Corporate Social Responsibility (CSR) reports or

other available information from the official sources of selected companies are used for this evaluation.

The methods used for the evaluation of individual CSR programs and for the subsequent comparison are the multi-criteria decision-making methods AHP (Analytic Hierarchy Process) and TOPSIS.

Modern Corporate Social Responsibility

Recent developments in the theory of CSR are taking place especially in the political field. CSR

standards for businesses are not only created by governments and intergovernmental organizations (EU),

but companies themselves are more involved in political processes by providing public goods. This

makes companies the political actors who shape their institutional environment (Rasche, 2015, Scherer

and Palazzo, 2011). Involvement in CSR activities is essential in multinational corporations with respect

to supply chains that can spread across many countries. These companies usually include standards,

codes and norms with appropriate auditing, while experts are still striving for large corporations to take

a more proactive approach by involving their stakeholders and partners (Quarshie et al., 2015). On the


other hand, small and medium-sized enterprises (SMEs) make up a large part of the world economy, but

their involvement in CSR activities is currently not well understood (Scherer et al., 2016).

Campbell et al. (2007) claim that businesses are inclined to integrate CSR activities when certain

economic conditions of the company are met. Simply put, companies will act socially responsibly when

their financial and economic health is in good shape. However, there is a need for an institutional

environment that promotes socially responsible behaviour.

Multicriteria decision making methods in CSR

The following part describes the selected multi-criteria evaluation methods, which are used in the practical part for the analysis and evaluation of companies according to their social responsibility (performance). Both methods are described by Saaty (2013), Triantaphyllou (2000) and Hwang and Yoon

(1981).

Analytical hierarchy process - AHP

The procedure of this method is based on decomposing a multi-criteria problem into a system of levels. The problem is therefore solved at multiple levels, creating a hierarchy. The main step of AHP is working with an m × n matrix (where m is the number of alternatives and n is the number of criteria), which is created from the relative importance of the variants for each criterion. For each i, the vector (a_i1, a_i2, a_i3, …, a_in) is the eigenvector of the n × n reciprocal matrix determined by pairwise comparison of the impact of the m variants on the i-th criterion. An important feature of this method is therefore also the use of the Saaty pairwise comparison method for the determination of criteria weights.
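As an illustration of the Saaty procedure just described, the following sketch derives priority weights from a small reciprocal pairwise-comparison matrix via its principal eigenvector and checks the consistency ratio. The 3 × 3 matrix is purely illustrative; it is not the judgement matrix used in this paper.

```python
# Sketch: priority weights from a Saaty reciprocal pairwise-comparison matrix
# via the principal eigenvector (illustrative 3x3 matrix only).
import numpy as np

A = np.array([[1.0, 3.0, 1/5],
              [1/3, 1.0, 1/7],
              [5.0, 7.0, 1.0]])        # a_ij = relative importance of item i over item j

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # index of the principal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                        # normalised priority weights

# Consistency check: CI = (lambda_max - n) / (n - 1); RI = 0.58 is Saaty's random index for n = 3.
CI = (eigvals.real[k] - len(A)) / (len(A) - 1)
CR = CI / 0.58
print("weights:", np.round(w, 3), "consistency ratio:", round(CR, 3))
```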

The AHP hierarchy for the specific case of five factors and five chosen companies is shown in Figure 3 below.

Figure 3 Corporate social responsibility in AHP model

Source: Own creation

TOPSIS

TOPSIS was developed by Hwang and Yoon (1981) as an alternative to other multi-criteria decision-making methods. The basic concept of this method is that the chosen alternative should have the shortest distance from the ideal solution and the greatest distance from the negative-ideal (basal) solution. The TOPSIS method assumes that the utility of each criterion increases or decreases monotonically. For this reason, it is easy to define the ideal and the negative-ideal solution. The Euclidean distance is used to evaluate the relative distance of the variants from the ideal solution; the order of preference is thus given by a comparison of these relative distances.
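A minimal sketch of the TOPSIS steps just described is given below. It assumes a small, illustrative score matrix on the 0–4 scale (not the paper's actual evaluations) together with the criteria weights reported later in Table 2, and computes the relative closeness to the ideal solution with plain NumPy.

```python
# Sketch of TOPSIS: normalise the decision matrix, weight it, find the ideal and
# anti-ideal solutions and rank the variants by relative closeness.
import numpy as np

scores = np.array([[3, 3, 2, 3, 2],    # rows = variants, columns = criteria K1..K5 (illustrative)
                   [4, 4, 4, 3, 4],
                   [1, 2, 0, 1, 0]], dtype=float)
weights = np.array([0.133, 0.133, 0.354, 0.062, 0.319])   # Saaty weights from Table 2

norm = scores / np.linalg.norm(scores, axis=0)   # vector (Euclidean) normalisation per criterion
v = norm * weights                               # weighted normalised matrix
ideal, anti = v.max(axis=0), v.min(axis=0)       # all criteria are benefit-type here
d_plus = np.linalg.norm(v - ideal, axis=1)       # distance to the ideal solution
d_minus = np.linalg.norm(v - anti, axis=1)       # distance to the anti-ideal solution
closeness = d_minus / (d_plus + d_minus)         # relative closeness in [0, 1]
print("preference:", np.round(closeness, 3))
print("ranking (best first):", closeness.argsort()[::-1] + 1)
```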


Evaluation method

To objectively assess Corporate Social Performance with MCDM methods, it was necessary to code the qualitative factors into a transparent quantitative form. The companies' CSR programmes were evaluated according to five factors (criteria) and scored on a scale from 0 (worst) to 4 (best). Table 1 below explains the factors used.

Table 1 Evaluation criteria for the AHP and TOPSIS methods

Structure (K1): There is a coherent report on corporate social responsibility activities. The report contains a clear and logical summary of all parts of the entire report (executive summary), the overall report is logically organized and contains relevant information. The structure of the report does not lack essential data and important parts (e.g. introductory word, overview, contents, three pillars of activities, GRI methodology, quantitative data and indicators, or explanatory notes and comments). The report is clear and readable by any interested party, with graphical or other elements facilitating legibility.

Stakeholders (K2): The report covers CSR activities towards all stakeholders. There should be activities towards employees, suppliers, customers, local communities or the wider environment, the state, or the transnational community. The data are expressed in a clear way and the documented data can be largely verified (e.g. thanks to quantitative data).

Quantitative indicators (K3): The report contains a large amount of quantitative data and indicators measuring specific CSR activities. The quantitative data are organized into a clear and simple structure. The report includes specific social performance activities with measurable data and indicators of their measurement, as well as an explanation of the calculation procedure. The data can be verified or traced to a documented source. The data were also verified by an independent evaluator and commented on.

Triple bottom line (K4): The report is structured and contains information on the economic, social and environmental pillars of corporate social responsibility. The report should not omit activities towards stakeholders in any pillar. There is a large amount of data and information on all three pillars. All categories are clearly and logically structured for any stakeholder group.

GRI methodology (K5): The report is prepared according to the GRI methodology and contains all its significant parts.

Source: Own creation

Selected companies for comparison

Within the practical application of the AHP and TOPSIS methods, five largest companies were selected

according to their turnover in 2018 in the Czech Republic. Companies are: ŠKODA, ČEZ, EPH,

AGROFERT, UNIPETROL. A thorough analysis of CSR reports (sometimes also referred to as

sustainability report) was carried out for these companies. If a separate CSR report was not found for

the selected company, information from the company's website or annual report was used for the

evaluation. More detailed information about the selected companies, the evaluation of the individual reports, the subsequent determination of weights by the Saaty method and the overall ranking of the quality of the analyzed reports by the AHP and TOPSIS methods are given below.


Results of MCDM methods

In this section, the specific results of the previously described process are presented. First, the computed priority weights of the selected criteria are given.

Computed criteria weights

According to the Saaty method, weights of the selected criteria were determined. A scale from 1 (least

important) to 3 (most important) was used to assess the importance of specific criteria for the resulting

weights. After the evaluation of the criteria, the first criterion Structure (K1) was rated grade 2,

Stakeholders (K2) also grade 2. Quantitative indicators (K3) were awarded the highest grade 3, Triple

bottom line (K4) was rated 1 and GRI methodology (K5) 3 points. After the pairwise comparison of preferences with respect to the selected degrees of importance, the following table was created, which contains the resulting criteria weights.

Table 2 Computed criteria weights

Criteria                          Priority weight (in %)
Structure (K1)                    13.3
Stakeholders (K2)                 13.3
Quantitative indicators (K3)      35.4
Triple bottom line (K4)            6.2
GRI methodology (K5)              31.9

Source: Own creation

The K4 (Triple bottom line) criterion came out as the least important from a paired comparison with a

weight of 6.2%. The first (Structure) and the second criterion (Stakeholders) are divided equally with

13.3% importance. The K5 criterion (GRI methodology), according to the results of the Saaty method,

is the second most important criterion with 31.9%. The highest weight is assigned to criterion K3

(Quantitative indicators) with a value of 35.4%.

Reporting quality evaluation according to the chosen methods

After the weights were determined, the CSR reports of selected companies or other available information

related to reporting on CSR (e.g. partial reports, information from the websites of companies) were

evaluated according to the criteria. Two methods of multi-criteria decision making, AHP and TOPSIS

were used for a comprehensive evaluation of this information. The application of the methods, together with the partial and final results, is presented in this chapter.

Analytical hierarchy process - AHP

CSR reports of selected companies and other additional information were evaluated for each company

separately by the corresponding quality level from the 0 to 4 scale. Each criterion is presented separately

with partial results, then the overall result of the used method and the resulting order of variants

(companies) are analyzed. Five criteria (designated K1, K2, ..., K5) and 5 variants (designated as V1,

V2, ..., V5) were chosen for the hierarchical system of the AHP method.


All five variants were scored in each of the selected criteria. Following a comprehensive evaluation of the quality of the CSR reports, the resulting order of the variants is presented in Table 3 below.

Table 3 Final results of the AHP method

Rank   Company            Final score (in %)
1.     ČEZ (V2)           33.8
2.     ŠKODA (V1)         28.8
3.     EPH (V4)           27.1
4.     AGROFERT (V3)       5.4
5.     UNIPETROL (V5)      4.9

Source: Own creation

According to the analytic hierarchy process, variant V2 (ČEZ) came out first in the final ranking, with a final value of 33.8%. With a minimal difference of 1.7 percentage points between them, V1 (ŠKODA) with 28.8% and V4 (EPH) with 27.1% rank second and third. All three variants met the important quality requirements in the individual criteria, and only minor differences determined their resulting order. The remaining variants, V3 (AGROFERT) and V5 (UNIPETROL), showed significant shortcomings in several criteria and stay behind overall, with 5.4% placing V3 fourth and 4.9% placing V5 last.

TOPSIS

The overall evaluation of the quality of the reporting activities of the selected companies was also carried out using the TOPSIS method. The TOPSIS multi-criteria decision-making method looks for the solution closest to the ideal variant. The same starting weights from the Saaty method were used so that the two methods are comparable. The criteria are the same as in the previous method (labeled K1, K2, ..., K5), as are the 5 variants (labeled V1, V2, ..., V5). The values based on the calculation are given in Table 4.

Table 4 Final results of the TOPSIS method

Rank   Company            Preference (in %)
1.     ČEZ (V2)           92.4
2.     ŠKODA (V1)         80.3
3.     EPH (V4)           80.0
4.     AGROFERT (V3)      14.3
5.     UNIPETROL (V5)      0.0

Source: Own creation

The ČEZ Group (V2) ranked first with a relative preference of 92.4%, thanks to its high-quality sustainability report in all areas under review. Similarly good reports were produced by ŠKODA (V1) and EPH (V4); with a difference of only three tenths of a percentage point, ŠKODA finished second and EPH third. AGROFERT (V3) fell to the penultimate, fourth place in the relative evaluation, mainly due to low values in criteria K3 (Quantitative indicators) and K5 (GRI methodology). These criteria were also problematic in the reporting of the CSR activities of UNIPETROL (V5), whose resulting evaluation was further reduced by the fact that it publishes no single CSR report. UNIPETROL therefore ranks 5th in the overall ranking.


Comparison of the results

By comparing both applied methods of multi-criteria decision-making it is possible to determine the

definite order of the specified variants. Table 6 below was created for easier display and comparison of

results from AHP and TOPSIS methods.

Table 6 Comparison of results from both selected methods

Company Rank with AHP method Rank with TOPSIS method

ČEZ (V2) 1. 1.

ŠKODA (V1) 2. 2.

EPH (V4) 3. 3.

AGROFERT (V3) 4. 4.

UNIPETROL (V5) 5. 5.

Source: Own creation

The order of the variants does not differ between the two methods. Variant V1 (ŠKODA) finished second in both cases, while the difference between V1 and the third-placed V4 (EPH) was minimal. The ČEZ Group (V2) ranked highest with both methods. The penultimate place belongs to the AGROFERT Group (V3), and UNIPETROL (V5) is last in both the AHP and TOPSIS evaluations.

From the point of view of the two selected methods no significant difference was found, and it can be

stated that the choice of the method did not have a significant influence on the final ranking.

Conclusion

A comprehensive evaluation of the social responsibility activities of one or more companies is very

difficult due to often qualitative data and insufficient information. The evaluation should include not

only traditional and audited data from annual reports, but also the activities towards all stakeholders undertaken within the CSR concept. Although voluntariness is an important feature of corporate social responsibility and only a small percentage of companies are obliged to issue a CSR report, it is expected that, given the sphere of influence of the largest and most important companies, these activities are recorded, measured and subsequently communicated to stakeholders in a single report. For

this reason, a deeper analysis of CSR reports and other information regarding CSR activities of the five

largest companies in the Czech Republic according to their turnover for 2018 was carried out.

The aim of this work was a comprehensive evaluation of these data and a comparison of the individual companies using the mathematical multi-criteria decision-making methods AHP and TOPSIS. The qualitative

data was categorized into five cumulative quality factors for a given CSR report and evaluated on a five-

step scale (0 - worst; 4 - best). Based on this evaluation, it was possible to use the AHP and TOPSIS

methodology. There was no significant difference between the two methods, which would influence the

final order of the selected variants. ČEZ, ŠKODA and EPH report their corporate social responsibility

activities using high-quality reports, consistent with international standards. Minor differences in the

assessment of individual factors, however, determined ČEZ in the first, ŠKODA in the second and EPH

in third place in the relative order. AGROFERT's report complied with only some of the selected criteria and, due to the lack of quantitative data and a missing GRI methodology, finished only in fourth place. Unipetrol placed last; its low rating was influenced mainly by the absence of a consolidated CSR report, so information about its CSR activities was very difficult or impossible to find in official sources.

Both mathematical methods fulfilled the basic prerequisite, i.e. the determination of the order based on

the chosen evaluation and comparison. They can therefore be used for a comprehensive evaluation of a


company's social responsibility activities and subsequent comparison, but it is also necessary to mention

the considerable limitations of this procedure. The methodology of the chosen evaluation in this work

is not standardized and factors for quality measurement can be determined differently by different

authors and with different weights. It should also be noted that the evaluation is relative and therefore

only relevant to the options chosen. To include other companies, it would be necessary to expand the

computational matrix, which complicates the calculations in a larger number of variants.

Finally, it is necessary to state that the data are obtained from reports which in many cases are voluntary, and the data therefore need not be audited or otherwise checked. The resulting ranking of companies is thus only valid to the extent that the published data of the selected companies are authentic and factually correct.

Acknowledgement

This article was prepared as a part of the SGS project at the Faculty of Economics, VŠB-TU Ostrava,

project number: SP2019 / 7.

References

[1] CAMPBELL, John L., Jeremy MOON and Sara L. RYNES, 2007. Why would corporations

behave in socially responsible ways? an institutional theory of corporate social responsibility:

A Conceptual Framework for a Comparative Understanding of Corporate Social

Responsibility. Academy of Management Review. 32(3), 946-967. DOI:

10.5465/amr.2007.25275684. ISSN 0363-7425.

[2] CARROLL, Archie, 1999. Corporate social responsibility: Evolution of a definitional

construct [online]. Business and Society

[3] CARROLL, Marc, Frank L. SCHMIDT and Sara L. RYNES, 2016. Corporate Social and

Financial Performance: A Meta-Analysis. Organization Studies. 24(3), 403-441. DOI:

10.1177/0170840603024003910. ISSN 0170-8406.

[4] Data from selected companies (ŠKODA, ČEZ, EPH, UNIPETROL, AGROFERT) were acquired

from their CSR reports, Annual reports and other openly accessible sources (official websites)

[5] HWANG, Ching-Lai and Kwangsun YOON, 1981. Methods for Multiple Attribute Decision

Making. Multiple Attribute Decision Making. Berlin, Heidelberg: Springer Berlin Heidelberg,

1981, 58-191. Lecture Notes in Economics and Mathematical Systems. DOI: 10.1007/978-3-

642-48318-9_3. ISBN 978-3-540-10558-9.

[6] QUARSHIE, Anne M., Asta SALMI and Rudolf LEUSCHNER, 2016. Sustainability and

corporate social responsibility in supply chains: The state of research in supply chain

management and business ethics journals. Journal of Purchasing and Supply Management.

22(2), 82-97. DOI: 10.1016/j.pursup.2015.11.001. ISSN 14784092

[7] RASCHE, Andreas, The Corporation as a Political Actor – European and North American

Perspectives (January 23, 2015). Rasche, A. (2015). The Corporation as a Political Actor:

European and North American Perspectives, European Management Journal, 33(1): 4-8.

[8] SAATY, Thomas L., 2013. The Modern Science of Multicriteria Decision Making and Its

Practical Applications: The AHP/ANP Approach. Operations Research [online]. 61(5), 1101-

1118. DOI: 10.1287/opre.2013.1197. ISSN 0030-364X

[9] SCHERER, A. G., & PALAZZO, G. (2011). The new political role of business in a globalized

world: A review of a new perspective on CSR and its implications for the firm, governance,

and democracy. Journal of Management Studies, 48(4), 899–931.

[10] TRIANTAPHYLLOU, Evangelos, Multi-criteria Decision-Making Methods: A Comparative

Study. 2000. DOI: 10.1007/978-1-4757-3157-6.


MEASURING THE FINANCIAL PERFORMANCE OF A COMPANY BASED ON

SELECTED APPROACH

Filip Lessl1

1Department of Finance, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

The paper focuses on the financial performance of a company evaluated on the basis of selected

approaches. At present, the problem of the impact of International Financial Reporting Standards (IFRS)

on the reported financial position of a particular enterprise, in particular of joint-stock companies

listed on the stock exchange, is at the forefront. The problem arises when choosing the right criteria for

measuring financial performance with respect to international reporting. The starting point is an

understanding of the elementary differences between international accounting standards and Czech

standards. This differentiation distorts the input data for the calculations of individual financial

indicators and thus the overall financial performance. The article will address one of the most widely

used and most comprehensive indicators, economic value added (EVA™). A comparison of financial

performance results for input data according to Czech Accounting Standards and IFRS will be

performed.

Keywords

Economic Value Added (EVA), financial performance, Czech Accounting Standards (CAS), International

Financial Reporting Standards (IFRS), Cost of capital, Risk premium, The Fama French Model (TFF)

JEL Classification

G20, G30, G00.

Introduction

Financial performance is an important and key concept in the financial management of an enterprise and

increasing it is generally considered to be the main objective of financial management of each business.

The term performance is most often defined as the ability to evaluate the capital invested by individual

owners compared to the possibilities of alternative use of that capital for other purposes. However, it is

important to note that the performance of an enterprise can be viewed from a variety of perspectives,

from the point of view of the owners or managers, since the two groups are pursuing slightly different

interests and goals. The world economy has been erasing national borders for decades. In Europe,

together with economic globalization, political unification takes place within the European Union. As a

consequence of these processes, there is a growing need for accounting harmonization. Accounting

information is necessary not only for the implementation of qualified business decisions, but also for

the provision of subsidies, etc. There are currently three significant lines of international accounting

harmonization: International Financial Reporting Standards (IFRS), the Accounting Directives of the European Union, and US GAAP (Generally Accepted Accounting Principles). This paper will only take into account IFRS and, of course, Czech Accounting Standards.

The aim of this paper is to determine the cost of capital and to calculate the economic value added based

on the Value Spread.

Literature Review

There are currently many approaches and methods evaluating financial performance. This issue is

addressed by a number of authors, such as Young (2001) and Ehrbar (1998). Some approach the measurement of financial performance from a management point of view, for example with the Balanced Scorecard,


Damodaran (2011). Economic value added (EVA) is based on the concept of economic profit, which has long been part of financial theory. EVA is a measure of a firm's performance created with the aim of motivating managers to increase shareholder value. EVA is a brand of Stern Stewart & Co., which popularised this approach to measuring financial performance in the United States, where the method has been implemented in the management systems of many firms. Its role is growing both in transition economies and, above all, in market economies. According to Dluhošová (2010), the indicator shows a strong correlation with the development of companies' share prices, which is important especially for shareholders.

EVA is a comprehensive instrument in business management. EVA can be used to evaluate an enterprise

or as a tool for managing and motivating workers, especially managers, etc. This way, performance can

be characterized by the 4M (measurement, management, motivation and mindset) rule, as Young (2001)

states. The indicator is relatively new and is becoming increasingly used in the area of performance

measurement indicators. The resulting value indicates whether the value for owners has increased or

decreased. When considering financial performance, there are problems with the quality of input data,

variations in methodological approaches, the risk and uncertainty of future financial flows, etc.

Moreover, another problem arises in the difference between Czech accounting regulations and

International Financial Reporting Standards (IFRS). Unlike national accounting systems, IFRS do not provide guidance on accounting procedures.

Definition of International Financial Reporting Standards

The International Financial Reporting Standards are a summary of the best accounting procedures, of the experience of the accounting profession and of users' requirements regarding the scope of publicly disclosed information. Their purpose is to increase the comparability of reporting on the financial performance and financial position of different companies operating under different national conditions. IFRS, originally named International Accounting Standards (IAS), are currently one of the three basic regulations in the context of international accounting harmonization and are used mainly in Europe. The priority of the published standards is not methodological accounting procedure; the main emphasis is placed on the interpretation of accounting data in the form of financial statements. The financial statements prepared according to these standards provide high-quality, transparent and comparable information, which can help users make economic decisions (Procházka, 2015).

Czech companies and IFRS

As Dvořáková (2017) states, since 2005 the IFRS application has been compulsory for companies

operating on EU regulated markets. According to the Regulation No. 1606/2002 of the European

Parliament and the Council of 19 July 2002 on the IAS, the accounting entities that are trading

companies and that are issuers of securities registered at a regulated securities market in the EU member

countries, have to apply the IAS, adapted by the European Union law, for accounting and drawing a

financial statement. A key problem of accounting based on IFRS is the tax basis which is obtained from

the accounting profit in the Czech Republic. For this reason, the accounting entities which account and

report according to IFRS by law have, for the purposes of calculation of the profit tax payable, to

transform the business result to such a result which they would have if they accounted and reported

according to the Czech regulations.

For income tax purposes, these companies have to rely on the economic result expressed in accordance

with Czech accounting regulations. To solve this situation, there is a two-fold approach:

• to create a high-quality bridge for the operations that are displayed differently in the two accounting systems, and then to convert the IFRS-based result to the Czech regulations, or

• to account and prepare financial statements in two accounting systems, i.e. according

to IFRS, and according to Czech accounting regulations.


The application of IFRS in Czech companies places high demands on the professional knowledge of

accountants and all other employees.

Basic differences between CAS and IFRS

It should be noted that IFRS do not prescribe a specific form of financial statements and do not require any chart of accounts. IFRS also do not define standard account balances, as is the case in Czech accounting. Instead, IFRS define the minimum amount of information that an enterprise must publish. The primary goal of IFRS financial statements is to provide high-quality information for decision-makers. Conversely, Czech accounting is very closely linked to tax laws. IFRS thus require transactions to be reported consistently according to their economic substance and not according to their legal form (Máče, 2013).

Calculation of Economic Value Added

The general concept of EVA, as a measure of financial performance, expresses the difference between profit and the cost of capital, which reflects the minimal rate of return on invested capital. The calculation of EVA is determined by the input data, the way the cost of capital is calculated, variations in methodological approaches, and the risk and uncertainty of future financial flows. Moreover, it also matters whether we want to calculate an absolute or a relative value. According to Dluhošová (2004), there are two basic concepts of calculation: the operating profit concept and the value spread concept. The EVA calculation on the basis of operating profit is generally defined as:

EVA = NOPAT – WACC · C, (1)

where NOPAT is net operating profit after taxation, WACC is weighted cost of capital and C is value of

total capital invested.

NOPAT is subject to the same adjustment principles as the corrected economic result for DCF. It thus

includes only those revenues and expenses related to the core business of the company. C consists of

assets that are used for operating activities or for the main operation of the company (in the EVA concept

it is replaced by the term NOA, the so-called net operating assets). The value C comprises the assets that are tied to generating operating profit. Thus, there must be some symmetry, where NOPAT

should include those revenues and costs related to the assets that are part of the NOA. The calculation

of economic value added can be calculated in two ways, using the cost-of-capital formula or the Value

Spread. As stated in Mařík (2018), the first calculation method described above looks like this:

𝐸𝑉𝐴𝑡 = 𝑁𝑂𝑃𝐴𝑇𝑡 − 𝑁𝑂𝐴𝑡−1 ∙ 𝑊𝐴𝐶𝐶𝑡. (2)

The second of the aforementioned approaches is used in this paper, namely the Value Spread concept.

Specifically, this is EVA based on a narrowed concept of value spread, which is defined as follows:

𝐸𝑉𝐴 = (𝑅𝑂𝐸 − 𝑅𝐸) ⋅ 𝐸, (3)

where RE is the market cost of equity, E is equity and ROE is the return on equity. For the owner, it is important that the spread (ROE – RE) is as large as possible, or at least positive; only in that case does the investment in the firm bring more than an alternative investment.
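As a minimal numeric sketch of equation (3): the 2018 values of EAT and equity from Table 1 below are used for illustration, together with an assumed cost of equity of 10% (the actual RE is derived later via the INFA model).

```python
# Sketch of equation (3): EVA = (ROE - RE) * E.
# EAT and equity are the 2018 figures from Table 1 (thousand CZK);
# re = 0.10 is an assumed, illustrative cost of equity.
def eva_value_spread(eat: float, equity: float, re: float) -> float:
    roe = eat / equity                  # return on equity
    return (roe - re) * equity          # value spread multiplied by equity

print(eva_value_spread(eat=7456.0, equity=76567.0, re=0.10))  # approx. -200.7 thousand CZK
```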

Another decisive factor in the EVA calculation is the cost of capital, which is one of the key issues because EVA is highly sensitive to it. As Dluhošová (2010) states, the cost of capital represents the minimal rate of return which a firm must achieve so as not to decrease the wealth of its investors. There are three basic types of capital

costs. The first type is Weighted Average Cost of Capital (WACC), which is a combination of different

forms of capital:

WACC = (RD · D + RE · E) / (D + E), (4)

where D is debt, RE is cost of equity, E is equity, RD is cost of debt, D + E is total capital invested.


The second type is the cost of equity. Generally, the cost of equity can be calculated using capital asset pricing models or build-up (construction) models.

Calculation Cost of capital using rating model (INFA)

In this paper, the cost of equity was determined using the construction model, specifically a rating model

(INFA) used by the Ministry of Industry and Trade of the Czech Republic. Cost of equity can be

expressed as a sum of return of a risk-free assets and risk premiums. The INFA model calculation is as

follows:

𝑊𝐴𝐶𝐶𝑈  =  𝑅𝐸𝑈  =  𝑅𝐹 + 𝑅𝑒𝑛𝑡𝑟𝑒𝑝𝑟𝑒𝑛𝑒𝑢𝑟𝑖𝑎𝑙 + 𝑅𝑓𝑖𝑛𝑠𝑡𝑎𝑏 + 𝑅𝑠𝑖𝑧𝑒, (5)

where Rsize is the risk premium for size (share liquidity), Rentrepreneurial is the risk premium for business risk, RF is the risk-free rate, Rfinstab is the risk premium for financial stability and WACCU is the weighted average cost of capital of an unlevered (non-indebted) entity.

Because EBIT · CZ/Z = WACCU · UZ, the cost of equity can be determined as:

RE = [WACCU · (UZ/A) − (CZ/Z) · UM · (UZ/A − VK/A)] / (VK/A), (6)

where UZ are the financial (paid) resources, A are total assets, VK is equity, Z is gross profit, CZ is net profit, UM is the interest rate, UM = I / (B + O), where B are bank credits, O are bonds and I is interest expense.

The cost of equity can then be determined using these risk premiums as follows:

RE = WACCU + Rfinstr = RF + Rentrepreneurial + Rfinstab + Rsize + Rfinstr, (7)

where Rfinstr is the risk premium for the capital structure and can be expressed as Rfinstr = RE − WACCU. If RE = WACCU, then Rfinstr = 0%. If RE − WACCU > 10%, then Rfinstr = 10%.

The risk-free rate RF corresponds to the yield of government bonds with a time to maturity of five to ten years, most often with a maturity of 10 years.

The risk premium characterizing enterprise size, Rsize, is a function of the size of the firm's financial (paid) resources UZ. If UZ > CZK 3 bn, then Rsize = 0%. If UZ < CZK 0.1 bn, then Rsize = 5.0%. If CZK 0.1 bn < UZ < CZK 3 bn, then Rsize = (3 bn CZK − UZ)² / 168.2.

Rentrepreneurial is the risk premium reflecting the production power of the enterprise. This risk premium depends on the indicator EBIT/A, which is compared with the indicator X1 expressing the replacement of externally paid capital by equity. The indicator is calculated as X1 = (UZ/A) · UM. Consequently, if EBIT/A > X1, then Rentrepreneurial = min Rentrepreneurial (the minimum value for the industry). If EBIT/A < 0, then Rentrepreneurial = 10.0%. If 0 ≤ EBIT/A ≤ X1, then Rentrepreneurial = ((X1 − EBIT/A) / X1)² · 0.1.

The risk premium Rfinstab is a function of current liquidity L3 = current assets / (short-term liabilities + short-term bank loans). If L3 > XL2, then Rfinstab = 0%. If L3 ≤ XL1, then Rfinstab = 10%. Finally, if XL1 < L3 < XL2, then Rfinstab = ((XL2 − L3) / (XL2 − XL1))² · 0.1. XL1 and XL2 are the recommended liquidity limits in the industry, here XL1 = 1 and XL2 = 2.5.
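A compact sketch of the piecewise INFA rules above is given below. It follows equations (5) and (6) and the premium thresholds as just described; the minimum entrepreneurial premium, the gross profit, the interest rate and the liquidity value in the sample call are assumptions for illustration only, not the company's audited figures.

```python
# Sketch of the INFA rating model (equations (5) and (6) plus the piecewise premiums).
# Monetary amounts in CZK, rates as decimals. r_entr_min is an assumed industry minimum.
def infa_cost_of_equity(rf, ebit, assets, uz, vk, cz, z, um,
                        l3, xl1=1.0, xl2=2.5, r_entr_min=0.02):
    # R_size: function of paid resources UZ (thresholds CZK 0.1 bn and CZK 3 bn)
    if uz > 3e9:
        r_size = 0.0
    elif uz < 0.1e9:
        r_size = 0.05
    else:
        r_size = (3e9 - uz) ** 2 / 168.2e18       # (3 bn - UZ)^2 / 168.2 with UZ in bn CZK

    # R_entrepreneurial: compares EBIT/A with X1 = (UZ/A) * UM
    x1 = (uz / assets) * um
    prod_power = ebit / assets
    if prod_power > x1:
        r_entr = r_entr_min
    elif prod_power < 0:
        r_entr = 0.10
    else:
        r_entr = ((x1 - prod_power) / x1) ** 2 * 0.1

    # R_finstab: function of current liquidity L3 against the limits XL1 and XL2
    if l3 > xl2:
        r_finstab = 0.0
    elif l3 <= xl1:
        r_finstab = 0.10
    else:
        r_finstab = ((xl2 - l3) / (xl2 - xl1)) ** 2 * 0.1

    wacc_u = rf + r_entr + r_finstab + r_size                 # equation (5)
    # equation (6): adjustment from the unlevered WACC towards the cost of equity
    re = (wacc_u * uz / assets - (cz / z) * um * (uz / assets - vk / assets)) / (vk / assets)
    return min(max(re, wacc_u), wacc_u + 0.10)                # R_finstr bounded between 0 and 10 p.p.

# Illustrative call, loosely based on the 2018 values in Table 1 below; gross profit,
# interest rate and liquidity are assumed values.
print(infa_cost_of_equity(rf=0.02, ebit=7.761e6, assets=106.328e6, uz=88.0e6,
                          vk=76.567e6, cz=7.456e6, z=7.9e6, um=0.03, l3=2.9))
```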

Calculation Cost of capital using The Fama-French Model

Specifically, a Three-Factor Fama-French model (FF3F) was chosen for the purpose of this paper. The

FF3F model is an asset pricing model developed by University of Chicago professors Eugene Fama and

Kenneth French as a reaction to the poor results of the Sharpe–Lintner CAPM in explaining average


cross-sectional stock returns in the U.S. stock market (Fama and French, 1992). The

Fama-French Model is an extension of the Capital Asset Pricing Model (CAPM), as stated in the paper

Fama and French (1995; 1996a; 2012). Specifically, this model extends the CAPM with a size factor and a book-to-market factor in order to capture the cross-sectional variation in average returns, which would otherwise be viewed as an anomaly in the CAPM. According to Zmeškal

(2018), the formula is as follows:

E(Ri) = RF + βi,M · (E(RM) − RF) + βi,SMB · E(RSMB) + βi,HML · E(RHML), (8)

where RF is the risk-free rate of return, β are the factor coefficients, E(Ri) is the portfolio's expected rate of return, (E(RM) − RF) is the market risk premium and RM is the return of the market portfolio.

Factors of The Fama-French model:

• Market risk premium – compensates the investor for returns above the risk-free rate.

• Small Minus Big (SMB) – a size effect based on the market capitalization of a company. It reflects the difference between the average returns of portfolios of small and large firms that have a similar book-to-market ratio. Specifically, it is taken as the difference between the returns of the decile of the smallest shares and the decile of the largest shares by market capitalization. This factor's coefficient is estimated by linear regression and can take both negative and positive values.

• High Minus Low (HML) – reflects the spread between the average returns of portfolios with high and low book-to-market ratios. Specifically, it is again taken as the difference between the returns of the portfolio containing the shares with the highest book-to-market ratio (9th decile) and that containing the shares with the lowest ratio (1st decile). As with the previous coefficient, this risk factor loading can be estimated using linear regression and can take both positive and negative values.

These parameters βi can be expressed using matrix notation as:

βi = Var(f)⁻¹ Cov(f, Ri − RF), (9)

where f is a vector of risk factors, Var(f) is the variance-covariance matrix of f and Cov(f, Ri − RF) is a vector containing the covariances of the risk factors with the excess asset return.
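As an illustration of formula (9), the following short Python sketch (not from the paper; the data arrays and names are hypothetical) estimates the three factor loadings from excess returns and a matrix of factor realizations:

```python
import numpy as np

def ff3f_betas(excess_returns, factors):
    """Estimate beta_i = Var(f)^{-1} Cov(f, R_i - R_F), formula (9).

    excess_returns : (T,) array of R_i - R_F
    factors        : (T, 3) array with columns MKT-RF, SMB, HML
    """
    f_centered = factors - factors.mean(axis=0)
    r_centered = excess_returns - excess_returns.mean()
    var_f = f_centered.T @ f_centered / (len(factors) - 1)    # Var(f)
    cov_fr = f_centered.T @ r_centered / (len(factors) - 1)   # Cov(f, R - RF)
    return np.linalg.solve(var_f, cov_fr)                     # solves Var(f) * beta = Cov

# hypothetical toy data: 250 daily observations
rng = np.random.default_rng(0)
factors = rng.normal(size=(250, 3)) * 0.01
excess = factors @ np.array([1.1, 0.4, -0.2]) + rng.normal(size=250) * 0.01
print(ff3f_betas(excess, factors))   # roughly recovers [1.1, 0.4, -0.2]
```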

Empirical Results

The following key part of this paper assesses the effect of IFRS on the financial performance of the selected enterprise, measured on the basis of the EVA indicator. Table 1 lists the input data for the calculation of the individual risk premiums, for the calculation of the cost of capital determined on the basis of the INFA (build-up) model, and for calculating EVA. It is necessary to recall that the calculation of EVA will be based on the accounting model, because the INFA model is based on the accounting principle.

The input data for the purposes of this article come from company XY; the name XY is used to protect business secrets. XY is a joint-stock company specializing in the production of heavy castings. The history of the company dates back to the 1960s. The company is focused on the aerospace industry, where it holds a number of certifications and supplies subcontractors of world-class aircraft brands. In addition, XY is also active in power engineering and electrical engineering. Exports account for 60% of the company's turnover. According to CZ-NACE, the company falls into section 25 – manufacture of metal structures and fabricated metal products. The input data were obtained from the company's annual reports.


Table 1. Input data for the calculation of the risk premium, RE and EVA

(in thousand CZK)

CAS 2014 2015 2016 2017 2018

Total assets 100 316 107 052 105 727 108 747 106 328

Property, plant and equipment 45 431 46 756 42 732 51 836 43 432

Current assets 54 416 60 072 62 720 56 203 62 391

Total liabilities and equity 100 316 107 052 105 727 108 747 106 328

Equity 44 816 53 513 61 126 69 111 76 567

Non-current liabilities 27 722 25 481 12 841 11 533 8 714

Bank credits and loans 22 722 20 481 7 841 6 533 3 714

Current liabilities 27 778 28 058 31 760 28 103 21 047

Trade and other liabilities 8 000 5 000 5 000 5 000 5 000

Bank credits and loans 11 150 13 676 19 502 16 288 7 219

EBIT 8924 9350 8067 8340 7761

EAT 7373 8319 7613 7985 7456

Interest expense 863 706 206 153 99

IFRS 2014 2015 2016 2017 2018

Total assets 179 339 182 454 174 933 171 865 160 626

Property, plant and equipment 125 342 122 743 112 466 115 465 98 282

Current assets 53 997 59 711 62 467 56 400 62 344

Total liabilities and equity 179 339 182 454 174 933 171 865 160 626

Equity 75 435 83 341 83 462 89 230 92 353

Non-current liabilities 73 025 68 009 57 005 51 448 45 056

Bank credits and loans 22 722 20 481 7 841 6 533 3 714

Current liabilities 30 879 31 104 34 466 31 187 23 217

Trade and other liabilities 18 024 21 636 28 286 23 191 15 379

Bank credits and loans 8 000 5 000 2 000 3 168 2 770

EBIT 7158 8000 4560 7533 6890

EAT 2442 6961 3127 5646 5148

Interest expense 5 505 4 136 2 272 2 882 1 923

Source: Own creation

In the calculation of risk premiums, the risk-free rate was first established. The risk-free rates RF were derived from data published on the Czech National Bank website. These values ranged from 2.26% in 2014 to 0.98% in 2018. Subsequently, it was necessary to calculate the value of the financial (paid) resources for the calculation of the risk premium characterizing the size of the enterprise, Rsize. In the case of XY, the financial resources are the sum of equity and bank loans. Considering that the value of the financial resources was less than CZK 100 million over the whole period, this risk premium was set at 5% for the whole period. These are CAS values. Next, the value of the risk premium Rentrepreneurial, which characterizes the production power, was determined. First, it was necessary to calculate the value of the indicator X1 and the value of the interest rate. The interest rate was calculated as UM = I/(B + O), and the values of the X1 indicator were then determined as X1 = (UZ/A) · UM. This value was then compared with the return on assets. Given that the return on assets is greater than the X1 indicator, Rentrepreneurial is equal to the minimum Rentrepreneurial for the sector, available in the financial analyses of the Ministry of Industry and Trade published on www.mpo.cz. In the first four analysed periods, overall liquidity L3 lay between XL1 and XL2, so the risk premium Rfinstab was calculated using the formula given above; in the last year, L3 exceeded XL2, so the value of this risk premium was 0%.


Of course, the values of the individual risk premiums differed under IFRS. Based on the calculation of the individual risk premiums, WACCU and subsequently RE were then calculated: WACCU according to formula (1), and RE according to formulas (5) and (6), respectively.

Table 2. Risk premium, cost of equity and value of spread (%)

CAS 2014 2015 2016 2017 2018

RF 2,26% 1,58% 0,58% 0,48% 0,98%

Rentepre.min. 4,05% 3,50% 3,00% 3,49% 3,76%

Rsize 5,00% 5,00% 5,00% 5,00% 5,00%

Rfinstab 1,30% 0,57% 1,23% 1,11% 0%

WACCU 12,61% 10,65% 9,81% 10,08% 9,74%

RE 14,86% 14,37% 12,14% 11,34% 9,61%

Rfinstr 2,25% 3,72% 2,33% 1,26% -0,13%

ROE 16,45% 15,55% 12,45% 11,55% 9,74%

Value of spread 1,59% 1,17% 0,32% 0,21% 0,12%

IFRS 2014 2015 2016 2017 2018

Rentepre.min. 3,89% 2,99% 6,21% 5,53% 5,85%

Rsize 4,98% 4,97% 5,00% 5,00% 5,00%

Rfinstab 2,51% 1,50% 2,10% 2,13% 0,00%

RF 2,26% 1,58% 0,58% 0,48% 0,98%

WACCU 13,64% 11,04% 13,90% 13,14% 11,83%

RE 16,70% 10,10% 13,67% 12,14% 11,11%

Rfinstr 3,06% 0,94% 0,23% 0,99% 0,73%

ROE 3,24% 8,35% 3,75% 6,33% 5,57%

Value of spread 2,49% 4,32% 1,87% 2,42% 1,56%

Source: Own creation

All variables needed to determine the EVA indicator are now known. EVA was calculated on the basis of the narrow (equity-based) approach according to formula (2.7). Figure 1 shows the EVA values under IFRS and CAS for the period from 2014 to 2018.
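For orientation, a worked example (my own illustration; the narrow, equity-based EVA is assumed here to take the usual spread form EVA = (ROE − RE) · E, as defined by formula (2.7) earlier in the paper): with the 2014 CAS inputs from Tables 1 and 2,

EVA_2014(CAS) ≈ (16.45% − 14.86%) · 44 816 = 0.0159 · 44 816 ≈ 713 thousand CZK,

which matches the first CAS value shown in Figure 1 up to the rounding of the percentages.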


Figure 1. Evolution of EVA over the period from 2014 to 2018

Source: Own creation

As can be seen from the chart, the resulting values of EVA are quite different. It is clear from the analysis that no positive EVA based on IFRS was achieved in any year. These different results are due to the fact that EVA based on IFRS data has a significantly negative value of the spread (ROE − RE) compared with EVA based on CAS. The negative value of the spread is mainly due to the lower return on equity. Under IFRS, profit was lower in particular because of higher depreciation of newly recognized assets, namely leased machinery and equipment and leased premises. The revaluation of fixed assets also reduced the return on equity.

The following section will explain and compare the impact of the difference between IFRS and CAS on

EVA. As can be seen, the value of assets, liabilities and profits under IFRS adjustments has changed

significantly. Of course, the names and structure of the individual financial statements have changed.

However, the objective of this paper is not to describe differences in the structure of individual

indicators, but to assess the difference and impact of IFRS financial statements on total financial

performance. The first item that caused a substantial difference in the resulting value of EVA is the rental of production premises. According to IFRS, the lease of such premises must be capitalized in the value of property, because the company uses the premises for its economic activity, and at the same time a corresponding liability must be recognized. Under CAS, this rental appears only in off-balance sheet records. Therefore, for a realistic picture of reality, the item 'long-term rentals' must be capitalized in the balance sheet. An expert estimate put the net book value of the leased premises at 1 January 2014 at CZK 52.4 million, assuming that the property will be used for 25 years. This figure is important for determining the additional costs that arise from the capitalization of the property (the rented space). The annual depreciation was calculated as the ratio of the net book value to the economic life of the building: CZK 52.4 million / 25 = CZK 2,096 thousand. The amount of the annual depreciation is a short-term liability, with the rest of the amortized amount being part of long-term payables. Under IFRS, the original amount of the lease payment (CZK 4,400 thousand) is broken down into annual depreciation, interest expense, and maintenance and administration. The rental is considered a form of debt capital, so it is necessary to quantify interest. Interest was determined on the basis of the PRIBOR interest rate and selected risk margins. The second item that caused a substantial difference in the resulting value of EVA is finance leasing. The company acquires primarily machinery and equipment in the form of finance leases. According to IFRS, the present value of the lease payments must be capitalized in fixed assets. This increases the value of property, plant and equipment. At the same time, non-current and current liabilities increase as finance lease liabilities. In addition, the recognition of these assets is also reflected in the statement of profit and loss, in particular through an increase in financial costs (interest) and, of course, depreciation. Compared to IFRS, the treatment of finance leases is quite different under CAS: financial leasing is reflected only in the form of operating costs, and its value is recorded only in off-balance sheet accounts.

[Figure 1 data labels, 2014–2018, in thousand CZK – EVA (CAS): 713.01; 628.15; 194.41; 146.49; 95.11. EVA (IFRS): −10,157.32; −1,453.33; −8,279.89; −5,189.86; −5,109.05.]


Another very significant difference from CAS is the revaluation of fixed assets. Under IFRS, tangible and intangible assets may be revalued at fair value at the balance sheet date. In this way the reported situation of the company corresponds more closely to reality, because the property is valued at fair value. At the same time, the revaluation is shown in equity, and the revalued amount becomes the basis for determining the new depreciation charge. A precise revaluation of long-term assets could not be fully carried out, because an external investor lacks the necessary internal information.

Conclusion

The objective of this paper was to assess the impact of IFRS on the financial performance of an enterprise. Comparing the accounting treatment and reporting of certain items according to the Czech Accounting Standards and according to IFRS leads to differences in the reported accounting data. According to one reporting framework a company can show a profit, while according to the other it can show a loss. Total balance sheet sums, asset values and the values of other items of property or liabilities can also differ substantially, and financial analysis indicators and comprehensive conclusions about performance could differ considerably. Compiling the financial statements according to IFRS will change the assessment of financial stability and financial performance, in both a positive and a negative direction. In the analysed case, no positive EVA was achieved under IFRS, whereas according to CAS, XY created economic value added of CZK 0.713 million. Consequently, it can be concluded from the results that the impact of IFRS on financial performance is well founded and more relevant to reality, because assets and liabilities are measured at fair value and all transactions related to the economic activity of the enterprise are recorded in the financial statements. It can be argued that EVA according to IFRS is thus closer to the market-based method and is not a mere approximation.

References

[1] DAMODARAN, Aswath. Study guide for Damodaran on valuation, security analysis for

investment and corporate finance. New York: Wiley, 1994. 220 s. ISBN 0-471-10897-9.

[2] DAMODARAN, Aswath. Applied corporate finance. 3rd ed. Hoboken: John Wiley & Sons, 2011.

ISBN 978-0-470-38464-0.

[3] DLUHOŠOVÁ, Dana. Finanční řízení a rozhodování podniku. 3. vyd. Praha: Ekopress, 2010. 225

s. ISBN 978-80-86929-68-2.

[4] DLUHOŠOVÁ, Dana a kolektiv. Nové přístupy a finanční nástroje ve finančním rozhodování. 1.

vyd. Ostrava: 2004. 640 s. ISBN 80-248-0669-X.

[5] DVOŘÁKOVÁ, Dana. Finanční účetnictví a výkaznictví podle mezinárodních standardů IFRS. 5.

aktualizované a přepracované vydání. Brno: BizBooks, 2017. 368 s. ISBN 978-80-265-0692-8.

[6] DVOŘÁKOVÁ, Dana. Finanční účetnictví a výkaznictví podle mezinárodních standardů IFRS.

Aktualiz. a rozš. vyd., 4. vyd. Brno: BizBooks, 2014. Daně a účetnictví (BizBooks). ISBN 978-

80-265-0149-7.

[7] EHRBAR, Al. EVA: the real key to creating wealth. New York: Wiley, c1998. ISBN 0-471-

29860-3.

[8] MÁČE, Miroslav. Účetnictví a finanční řízení. 1. vyd. Praha: Grada, 2013, 552 s. ISBN 978-80-

247-4574-9.

[9] MAŘÍK, Miloš a Pavla MAŘÍKOVÁ. Moderní metody hodnocení výkonnosti a oceňování

podniku: ekonomická přidaná hodnota, tržní přidaná hodnota, CF ROI. Přeprac. a rozš. vyd. Praha:

Ekopress, 2005. ISBN 80-86119-61-0.

[10] MAŘÍK, Miloš. Metody oceňování podniku: proces ocenění, základní metody a postupy. 4. vyd.

Praha: Ekopress, 2018. ISBN 978-80-87865-38-5.

[11] PROCHÁZKA, David. Ekonomické dopady implementace IFRS v Evropě. Praha: Oeconomica,

nakladatelství VŠE, 2015. 153 s. ISBN 978-80-245-2097-1.


[12] VERNIMMEN, Pierre. Corporate finance: theory and practice. Chichester: Wiley, c2005. ISBN

0-470-09225-4.

[13] YOUNG, David a Stepfen O'BYRNE. EVA and value-based management: a practical guide to

implementation. New York: McGraw-Hill, c2001. ISBN 0-07-136439-0.

[14] ZMEŠKAL, Zdeněk, Dana DLUHOŠOVÁ a Tomáš TICHÝ. Finanční modely: koncepty, metody,

aplikace. 3. přeprac. a rozš. vyd. Praha: Ekopress, 2013, 267 s. ISBN 978-80-86929-91-0.

[15] ZMEŠKAL, Zdeněk, Miroslav ČULÍK a Tomáš TICHÝ, Finanční řízení a rozhodování: sbírka

řešených příkladů. 5. uprav. a rozš. vyd. Ostrava: VŠB-TU Ostrava, 2018, 204 s. ISBN 978-80-

248-4218-9.

[16] MINISTERSTVO PRŮMYSLU A OBCHODU ČR. Finanční analýza podnikové sféry[online].

MPO. [30.12.2018]. Dostupné z: http://www.mpo.cz/cz/ministr-a-ministerstvo/analyticke-

materialy/#category238

[17] INTERNATIONAL ACCOUNTING STANDARDS BOARD. (2005) International Financial

Reporting Standards (IFRS), London; IASC, 2005. ISBN 1-904230-44-X.

[18] FAMA, Eugene and Kenneth FRENCH. "The Cross-Section of Expected Stock Returns". The Journal of Finance, 1992, 47(2): 427. doi:10.1111/j.1540-6261.1992.tb04398.

[19] FAMA, Eugene and Kenneth FRENCH. "Size and Book-to-Market Factors in Earnings and Returns". The Journal of Finance, 1995, 50(1), pp. 131-155. doi:10.1111/j.1540-6261.1995.tb05169.x.

[20] FAMA, Eugene and Kenneth FRENCH. "Multifactor Explanations of Asset Pricing Anomalies". The Journal of Finance, 1996a, 51(1), pp. 55-84. doi:10.1111/j.1540-6261.1996.tb05202.x.

[21] FAMA, Eugene and Kenneth FRENCH. "Size, value, and momentum in international stock returns". Journal of Financial Economics, 2012, 105(3): 457. doi:10.1016/j.jfineco.2012.05.011.


IDENTIFYING FACTORS OF EMPLOYEE TURNOVER WITH MULTIPLE

CORRESPONDENCE ANALYSIS

Ondřej Mikulec1

1Department of Management, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

Multiple correspondence analysis of factors of employee turnover represents a practical implementation of statistical analysis and HR analytics in the area of human resources management. It is a tool for analysing relationships between the individual categories of all categorical variables and for displaying these associations graphically in the form of a positional map of row and column profiles. From this perspective, it shows which categories of variables are similar or related to each other, which makes multiple correspondence analysis a quick overview tool for data, individual categories and their associations. This paper aims to present a new approach to human resource management data visualization and to display associations among factors of employee turnover based on real data from a large production company from the Moravian-Silesian region.

Keywords

HR Analytics, Employee Turnover, Multiple Correspondence Analysis, Data Visualization.

JEL Classification

C53, M1, M54.

Introduction

The success of organizations in today's world depends largely on people, and therefore their

management can decide not only whether an organization is successful, but also whether it can survive

in today's competitive market environment (Horváthová et al., 2014). This work deals with the

quantitative approach of HR analytics to solving the problem of human resource management with

employee turnover using multivariate statistical analysis. This study uses multiple correspondence analysis as a further tool of multivariate statistical analysis with high potential for use in the field of human resources management. MCA is a strong visualization technique that detects and represents underlying structures in a data set by expressing each category of the categorical variables as a point in a low-dimensional Euclidean space. Nowadays it is possible to process enormous amounts of data and to use visual pattern recognition as the basis of exploratory analysis. Visualization plays an important role in human resource management because of the need to express the right things the right way, as described in Few (2015) or Sinar (2018). Interdependences and associations among the individual factors significant for undesirable employee turnover will be described and displayed on the basis of multiple correspondence analysis. The output of the application of MCA will be a position map of row and column profiles visualizing, in graphical form, all categories associated with undesirable employee turnover.

Methodology and Data

Correspondence analysis (CA) represents a graphical and numerical tool expressing hidden inner dependence among observed variables. According to Benzécri (1992) and Greenacre (1993), as summarized in Mikulec (2017), correspondence analysis focuses on a two-dimensional table of frequencies, usually called a contingency table, and shows its inner associations. Contingency tables are defined by n row categories and m column categories. The diagram of correspondence analysis is expressed as a "subjective map" with two groups of points: a group of n points corresponding to the row categories and a group of m points corresponding to the column categories, as in Mikulec (2017).


Principles of correspondence analysis

Principles of correspondence analysis are based on contingency tables with n rows and m columns. We can define an n × m matrix U with elements U_{ij} corresponding to the elements of the contingency table. We define row sums N_{i+}, column sums N_{+j} and the total sum N_T as follows:

N_T = \sum_{i=1}^{n} N_{i+} = \sum_{j=1}^{m} N_{+j}, (1)

where N_{i+} = \sum_{j=1}^{m} U_{ij} and N_{+j} = \sum_{i=1}^{n} U_{ij}.

The chi-square statistic \chi^2, which tests the null hypothesis of no association between rows and columns, is calculated as

\chi^2 = N_T \sum_{i=1}^{n}\sum_{j=1}^{m} (p_{ij} - r_i c_j)^2 / (r_i c_j), (2)

where r_i = N_{i+}/N_T, c_j = N_{+j}/N_T and p_{ij} = U_{ij}/N_T represent the elements of the frequency matrix P.
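As a quick illustration of formula (2), the following Python sketch (illustrative only; the toy contingency table is invented) computes the statistic directly and checks it against scipy:

```python
import numpy as np
from scipy.stats import chi2_contingency

U = np.array([[30, 10],      # toy contingency table: rows = categories of one variable,
              [20, 40]])     # columns = categories of another
N_T = U.sum()
P = U / N_T                               # frequency matrix
r = P.sum(axis=1)                         # row masses r_i
c = P.sum(axis=0)                         # column masses c_j
expected = np.outer(r, c)                 # r_i * c_j
chi2 = N_T * ((P - expected) ** 2 / expected).sum()    # formula (2)
print(chi2, chi2_contingency(U, correction=False)[0])  # both give the same value
```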

The Pearson mean quadratic contingency coefficient t = \chi^2 / N_T measures whether the statistical properties of any one part of the overall dataset are the same as those of any other part; in other words, homogeneity is characterized by a low t value and heterogeneity by a large t value. The t value can be expressed as

t = \sum_{i=1}^{n} r_i \sum_{j=1}^{m} \left[ (p_{ij}/r_i - c_j)^2 / c_j \right], (3)

which is equal to the weighted Euclidean distance between the vectors of relative frequencies (row profiles) and the average row profile.

Let us denote r = P I and c = P^T I, where I are vectors containing only ones. Then we can define a matrix J with elements proportional to the standardized residuals of the contingency table U. Matrix J can be defined as

J = D_r^{-1/2} (P - r c^T) D_c^{-1/2}. (4)

Generally, a rectangular matrix E can be decomposed by the singular value decomposition into three matrices, E = U S V^T, where S is the diagonal matrix of singular values and U and V contain the left and right singular vectors, respectively. Row profile components f_i of the contingency table are the rows of the matrix

F = D_r^{-1/2} U S, (5)

and column profile components g_i of the contingency table are the rows of the matrix

G = D_c^{-1/2} V S. (6)

Pairs of row and column components f_i, g_i are elements of an orthogonal decomposition of the residuals, ordered hierarchically according to importance. This decomposition is referred to as correspondence analysis. Components f_i and g_i are uncorrelated, have zero mean values and are connected by the following linkages:

G = D_c^{-1/2} P^T F S^{-1}, (7)

F = D_r^{-1/2} P G S^{-1}. (8)
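A compact Python sketch of formulas (4)–(6) (my own illustration, not the author's code; the contingency table is a toy example) shows how the row and column principal coordinates are obtained from the SVD of the standardized residual matrix:

```python
import numpy as np

def correspondence_analysis(U):
    """Return row (F) and column (G) principal coordinates of a contingency table U."""
    P = U / U.sum()                       # frequency matrix
    r = P.sum(axis=1)                     # row masses
    c = P.sum(axis=0)                     # column masses
    Dr_isqrt = np.diag(1 / np.sqrt(r))
    Dc_isqrt = np.diag(1 / np.sqrt(c))
    J = Dr_isqrt @ (P - np.outer(r, c)) @ Dc_isqrt      # formula (4)
    Usvd, s, Vt = np.linalg.svd(J, full_matrices=False)
    S = np.diag(s)
    F = Dr_isqrt @ Usvd @ S               # row principal coordinates, formula (5)
    G = Dc_isqrt @ Vt.T @ S               # column principal coordinates, formula (6)
    return F, G, s ** 2                   # squared singular values = principal inertias

U = np.array([[30, 10, 5],
              [20, 40, 15],
              [10, 20, 35]], dtype=float)
F, G, inertias = correspondence_analysis(U)
print(inertias)    # contribution of each dimension to the total inertia
```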


The squared diagonal elements of the matrix S (the squared singular values) are identified as the principal inertias. A biplot chart is used as the display method for both profiles as well as for the principal components. The main goal is to identify sources of heterogeneity in contingency tables. Correspondence analysis allows a decomposition of the \chi^2 statistic to assess structures in the matrix U.

Multiple correspondence analysis and its eigenvalue correction

MCA is an extension of correspondence analysis which allows one to analyse the pattern of relationships

of several categorical dependent variables. MCA codes data by creating several binary columns for each

variable with the constraint that one and only one of the columns gets the value 1 as discussed in Abdi,

Valentin (2007). This coding schema creates artificial additional dimensions because one categorical

variable is coded with several columns. Therefore, the inertia of the solution space is artificially inflated

and therefore the percentage of inertia explained by the first dimension is severely underestimated.

The correction formula according to Greenacre (1993) can be expressed as

\omega_i^{corr} = \left[ \frac{K}{K-1} \left( \omega_i - \frac{1}{K} \right) \right]^2 if \omega_i > 1/K, and \omega_i^{corr} = 0 if \omega_i \le 1/K, (9)

where K is the number of nominal variables and \omega_i represents the proportionally redistributed eigenvalues for each pattern of relationship. Using this formula gives a better estimate of the inertia extracted by each eigenvalue.
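A short Python sketch of formula (9) (illustrative; the eigenvalues and K are made up) shows how raw MCA eigenvalues are adjusted:

```python
import numpy as np

def greenacre_correction(eigenvalues, K):
    """Adjust MCA eigenvalues per formula (9); K = number of nominal variables."""
    lam = np.asarray(eigenvalues, dtype=float)
    adj = np.where(lam > 1.0 / K,
                   (K / (K - 1) * (lam - 1.0 / K)) ** 2,
                   0.0)
    # percentages here are relative to the sum of the adjusted eigenvalues (a common simplification)
    return adj, 100 * adj / adj.sum()

adj, pct = greenacre_correction([0.45, 0.18, 0.12, 0.09], K=9)   # hypothetical values
print(pct.round(2))
```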

MCA applies the CA algorithm to the set of categorical variables arranged in a Burt table, which is an analogue of the contingency table for more than two displayed variables. The Burt table is the symmetric matrix of all two-way cross-tabulations between the categorical variables and has one row and one column for each level of each categorical variable. It is analogous to the covariance matrix of continuous variables. Analysing the Burt table is a more natural generalization of simple correspondence analysis.

Correspondence and multiple correspondence analysis procedure

According to Meloun M., Militký J. and Hill M. (2017) we define the procedure of correspondence

analysis in six following steps:

1) define the objectives of correspondence analysis to assess associations among row and column

categories and rows and columns themselves,

2) task formulation and creation of squared non-negative data matrix,

3) fulfilling assumptions of compositional techniques (completeness of the input characters),

4) presentation of row, column or both categories in a common chart, where we look for a suitable number of chart dimensions,

5) interpretation of the results by defining associated categories and comparing row and column

categories,

6) verifying the results.

Data description

The MCA model for undesirable employee turnover consists of fluctuation factors, i.e. variables which were evaluated as significant by the multiple logistic regression model of employee turnover prediction described in Mikulec (2019) and ongoing research. The categories included in the MCA to identify associations with undesirable employee turnover are Organization, Category, Type of contract, Average performance, Wage tariff, Age, Gender, Education and Distance to work, based on Vnoučková (2013), Horváthová et al. (2014), Rubenstein, Eberly, Lee and Mitchell (2017), Bednář (2018) and the analysed company's management.


The MCA is based on data from 5,041 employees, including 399 leavers, each individually described with the elementary categories associated with employee turnover. Stayers are the company's own employees active in the analysed production company as of September 2019, and Leavers are employees who left the company through undesirable turnover between 2015 and 2018.

Organization (ORG) identifies an employee's job in a particular part of the organization: primary production, secondary/final production, service and maintenance, engineering production, and the organization headquarters with related activities (finance, marketing, human resources, etc.); the last two were combined due to their inseparable impact on employee turnover, although they are two distinct parts of the organization.

Category (CAT) corresponds to the categorization of occupations into blue-collar (BC, mostly manual labour) and white-collar (WC, mostly administrative labour). The unlimited/limited contract (U/L) corresponds to the type of contract between the employee and the organization.

Average performance (PERF) corresponds to an average employee performance rating for years 2015-

2018. Employee performance rating ranges from 1 to 5, with 1 being a very low performance rating and

5 a very high performance rating.

Wage tariff (TARIFF) corresponds to the classification of an employee to a certain tariff level. The

organization uses a twelve-degree scale to rank employees with a minimum of 1 and a maximum of 12.

The tariff component of wages is only one of several components of the wage of employees of an

organization and the tariff level is used rather as an indicator of the classification of an employee into a

certain group of professions with similar tariff evaluation.

Age (AGE) variable is a continuous variable that represents the employee's age, with a minimum value

of 19 and a maximum value of 66, which are empirical values that correspond to the actual values under

the conditions of the analysed company.

Gender (GEND) corresponds to the employee's gender as male or female. Major part of analysed

employees are males due to the nature of production company.

Education (EDUC) determines an employee's highest educational level. For the purposes of the model, and in view of the frequency and possible forms of educational attainment, employees are divided into three categories: elementary education; high school education, including grammar schools and secondary vocational schools; and university education, including tertiary professional schools and bachelor's, master's and doctoral degrees.

Distance to Work (DIST) corresponds to the estimated distance to work. For the sake of simplicity, two groups were created depending on whether the employee resides in the same municipality as the organization (Short) or does not, in which case a longer commuting time to work is expected (Long).

For continuous variables that were not naturally binary or categorical – namely Average performance and Age – categories were created so that they could be included in the multiple correspondence analysis. A total of three categories were created for average performance: Low_perf for employees with a low average rating (Perf < 2.5), Avg_perf for an average rating (2.5 ≤ Perf < 3.5), and High_perf for a high rating (Perf ≥ 3.5). Age is distributed into the regularly used age groups: up to 29, 30-39, 40-49, 50-59, and 60+. The Tariff variable was, for the sake of simplicity, reduced from levels 1 to 12 to three groups: unqualified professions (Tariff ≤ 6), qualified professions (7 ≤ Tariff ≤ 9) and specialized professions and sub-management (Tariff ≥ 10).
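The discretization described above can be reproduced with a few lines of pandas (a hypothetical illustration; column names such as 'Perf', 'Age' and 'Tariff' are assumed here and are not taken from the company's data files):

```python
import pandas as pd

df = pd.DataFrame({"Perf": [2.1, 3.0, 4.2],      # toy rows standing in for employee records
                   "Age": [24, 37, 58],
                   "Tariff": [5, 8, 11]})

df["PERF"] = pd.cut(df["Perf"], bins=[-float("inf"), 2.5, 3.5, float("inf")],
                    right=False, labels=["Low_perf", "Avg_perf", "High_perf"])
df["AGE"] = pd.cut(df["Age"], bins=[0, 30, 40, 50, 60, 120],
                   right=False, labels=["to 29", "30-39", "40-49", "50-59", "60+"])
df["TARIFF"] = pd.cut(df["Tariff"], bins=[0, 7, 10, 13],
                      right=False, labels=["UnQualified", "Qualified", "Specialist"])
print(df[["PERF", "AGE", "TARIFF"]])
```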

Multiple correspondence analysis of employee turnover

Multiple correspondence analysis of fluctuation factors represents a practical implementation of

statistical analysis and HR analytics in the area of human resources management. It is a tool for analysing

relationships between individual categories of all categorical variables and allows to display these

associations graphically in the form of positional map of row and column profiles. Based on this

approach, it is possible to see which categories of variables are similar or related to each other (Sucháček,


Sed'a, Friedrich, Koutský, 2014), which makes multiple correspondence analysis an ideal tool for getting

a quick overview of data, individual categories and their associations.

By means of multiple correspondence analysis, a position map is compiled in accordance with the

mutual associations of the individual employee turnover factors described as an important visualization

tool for quick orientation and insight into the fluctuation issue as in Mori, Kuroda, Makino (2016).

Table 1. Outputs of multiple correspondence model of fluctuation factors

dimension    % inertia    cumulative % inertia

1 59,32 59,32

2 15,28 74,60

3 2,93 77,53

4 0,23 77,76

5 0,10 77,86

6 0,05 77,91

7 0,01 77,92

8 0 77,92

Source: Own research

Inertia is the total amount of information displayed in a given number of dimensions. The total inertia captured by the multiple correspondence analysis model is shown in Tab. 1, which contains the aggregate model indicators; it reaches 77.92%, of which 59.32% falls on the first dimension, demonstrating the appropriateness of using the method to explain the internal associations in the data.

Table 2. Outputs of multiple correspondence model of fluctuation factors

Factor Category Mass Quality Inertia % Coord x Coord y

LEAVER Stayer 0,092 0,883 0,3% 0,162 0,284

Leaver 0,008 0,883 3,4% -1,887 -3,306

ORG Primary p. 0,022 0,862 1,0% 0,637 -0,654

Secondary p. 0,013 0,557 1,9% 0,821 0,25

Service 0,039 0,655 0,9% 1,031 -1,052

HQ+Engin. 0,027 0,955 4,8% 0,355 0,706

CAT BC 0,071 0,758 6,4% 1,056 0,325

WC 0,029 0,758 15,8% -2,618 0,807

U/L Unlimited 0,082 0,689 1,5% 0,191 0,835

Limited 0,018 0,689 6,9% 0,865 -3,784

PERF Low_perf 0,003 0,949 1,5% 0,063 -5,396

Avg_perf 0,087 0,067 0,2% 0,01 0,038

High_perf 0,009 0,552 0,6% 0,138 1,532

TARIFF UnQualified 0,020 0,512 3,9% 0,92 -1,778

Qualified 0,056 0,694 4,0% 0,897 0,307

Specialist 0,023 0,753 16,6% -2,979 0,816

AGE to 29 0,009 0,657 5,0% 0,408 -4,958

30-39 0,013 0,740 1,9% -1,023 -1,729

40-49 0,031 0,637 0,4% 0,137 0,642

50-59 0,040 0,608 1,1% 0,206 0,952

60+ 0,007 0,525 0,4% 0,541 0,828

GEND Male 0,088 0,638 0,4% 0,209 0,077


Female 0,012 0,638 2,7% -1,53 0,561

EDUC Elementary 0,004 0,461 1,3% 1,292 -2,047

High school 0,081 0,817 2,3% 0,622 0,065

University 0,015 0,829 14,4% -3,648 0,156

DIST Short 0,043 0,706 0,3% 0,276 0,134

Long 0,057 0,706 0,2% 0,208 0,101

Source: Own research

Tab. 2 presents the results of the multiple correspondence analysis for the factors associated with employee turnover. The Mass indicator denotes the row weight as a percentage of the information from all factors: the higher the frequency of observations in a given category, the higher this indicator for that category and for the whole. The Quality indicator is the sum of the qualitative scores of the extracted dimensions. Inertia % expresses what percentage of the total variability is explained by the model for a given variable. For each category, the coordinate values were calculated, the x-axis (Coord x) based on the first dimension and the y-axis (Coord y) based on the second dimension, so that the categories could be depicted graphically to show their degree of proximity and association.

Figure 1. Position map of row and column profiles of the multiple correspondence analysis

Source: Own research

Substituting the coordinate values from Tab. 2, we obtain the position map of row and column profiles for all categories important for employee turnover in Fig. 1. The position map shows the distribution of the points representing the individual categories in space. It can be seen from the chart that the monitored Leaver category, representing employees with undesirable turnover, is relatively far from all other points, which reflects the fact that it covers less than 10% of the entire sample. However, some points are closer than others and therefore have a higher association with the Leaver category, and the position map visualizes these categories in a very clear way. The Leaver category is closest to the 30-39 age category, while the second closest age category is under 29,



which is consistent with the lower likelihood of unwanted turnover at higher employee ages indicated in Tab. 2. Another point relatively close to the Leaver category is the WC point, representing white-collar employees with administrative and technical occupations. White-collar employees are relatively closely associated with a higher tariff level (specialists and sub-management), higher education and female gender, which are all also relatively close to the Leaver category. On the right side of the position map there are further points in relative proximity to the Leaver category, representing employees with fixed-term contracts and those with low average performance scores. The other points are visibly clustered around the Stayer category, indicating low association with the Leaver category.

Conclusion

People analytics or HR analytics, as a data-driven approach to managing people at work, is based on the implementation of quantitative models into human resource management decision processes and their influence on the company. Over the past decade, big data analytics and people analytics have been revolutionizing the way many companies do business. Common uses of these techniques are described for recruitment, talent management, people retention and the like, as well as for market performance, as in Edwards and Edwards (2016) or Kapoor and Kabra (2016). This study is oriented on the interdependencies and associations of personnel data with a focus on undesirable employee turnover and on the visualization of the results using an MCA position map of row and column profiles.

Based on the results of multiple correspondence analysis of factors associated with undesirable

employee turnover, it is possible to visually determine which categories are more associated with it and

who fall into those categories and thus who is at greater risk of undesirable turnover. Based on the

results, recommendations can be made to optimize human resource management policy settings to

reduce risk in the affected categories. Specific measures include, for example, higher payroll progression at higher tariff levels so that these employees are more financially motivated to stay with the organization, support programmes such as training for employees with lower performance ratings, social events or benefits targeting employees in lower age categories, or similar approaches.

It is concluded that MCA represents a suitable statistical tool to uncover interdependencies and associations in human resource management data and can support decisions based on the results of the analysis, which can be applied to various HR problems.

Acknowledgement

This research was financially supported by the Czech Scientific Foundation (Grant SGS SP2019/7).

References

[1] Abdi H., Valentin D. (2007). Multiple Correspondence Analysis. In: Encyclopedia of

Measurement and Statistics. Thousand Oaks (CA): Sage. Available at: < http://www.utdallas.

edu/~herve/Abdi-MCA2007-pretty.pdf >

[2] Bednář, V. (2018). Jak omezit fluktuaci a udržet si zaměstnance manažerskými nástroji. Grada

Publishing, 1st ed.

[3] Benzécri J. P. (1992). Correspondence Analysis Handbook, New York: CRC Press.

[4] Edwards, R. Martin and K. Edwards. (2016). Predictive HR Analytics: mastering the HR metric.

London; Philadelphia: Kogan Page.

[5] Few, S. (2015) Data Visualization for Human Perception. The Encyclopedia of Human-Computer

Interaction, 2nd ed.

[6] Greenacre, M. J. (1993a). Correspondence analysis in practice. London: Academic Press.

[7] Greenacre, M. J. (1993b). Biplots in correspondence analysis. Journal of Applied Statistics, 20(2),

pp. 251-269.


[8] Horváthová, P. et al. (2014). Řízení lidských zdrojů pro pokročilé. SOET, vol. 12. Ostrava: VSB-

TU Ostrava.

[9] Kapoor, B. and Y. Kabra. Current and Future Trends in Human Resources Analytics Adoption.

2016. Journal of Cases on Information Technology. 16(1).

[10] Meloun M., Militký J., Hill M. (2017). Statistická analýza vícerozměrných dat v příkladech, Praha:

Academia, 2 ed.

[11] Mikulec, O. (2017). Use of multiple correspondence analysis to identify influence of risk attitude

on trading behavior. Ekonomická revue: Central European Review of Economic Issues. VŠB – TU

Ostrava, vol XX, nr. 2, pp: 53-61.

[12] Mikulec, O. (2019) Predictive HR Analytics: Case of Employee Turnover. Financial Management

of Firms and Financial Institutions, 12th International Scientific Conference Proceedings. pp:

144-152.

[13] Mori, I., Kuroda, M., Makino, N. (2016). Multiple Correspondence Analysis. SpringerBriefs in

Statistics: Nonlinear Principal Component Analysis and Its Applications. pp. 21-28.

[14] Rubenstein, A. L., Eberly, M. B., Lee, T. W., Mitchell, T. R. (2017) Surveying the forest: A meta‐analysis, moderator investigation, and future‐oriented discussion of the antecedents of voluntary

employee turnover. Personnel Psychology. 71(1).

[15] Sinar, E. F. (2018) Data Visualization: Get Visual to Drive HR’s Impact and Influence. Society

for Human Resource Management and Society for Industrial and Organizational Psychology.

Available at: < http://www.siop.org/SIOP-SHRM/2018_03_SHRM-

SIOP_Data_Visualization.pdf>

[16] Sucháček, J., P. Seďa, V. Friedrich, J. Koutský. (2014). Media portrayals of regions in the Czech

Republic: selected issues. E & M ekonomie a management, Vol 17, Iss. 4.

[17] Vnoučková, L. (2013). Fluktuace a retence zaměstnanců. 3. vyd. Adart s.r.o.: VŠEM Praha.


DATA ANALYSIS AND TESTING WITH RESPECT OF PORTFOLIO SELECTION

PROBLEM

David Neděla1

1Department of Economics, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

In recent years, the problem of asset selection for creating an optimal portfolio has been a much-discussed topic, not only for economists and scientists, but also for private and institutional investors. It is advisable to analyze the rate of return over a time period before investing funds. The main goal of this paper is the statistical analysis and testing of stock return time series in relation to the portfolio optimization problem. This is motivated by the different assumptions that portfolio models place on the time series. Therefore, in this paper basic data characteristics are defined and analyzed, together with different kinds of probability distributions and the relationships between the time series, their lags and stationarity. Subsequently, various methods are used to estimate the parameters of a suitable distribution. Finally, various types of tests, graphical as well as statistical, are provided for verifying the correctness of the hypotheses. In conclusion, many results confirm that many assumptions of the basic optimization models are simplifications and that it is appropriate to use a better-fitted model.

Keywords

Stationarity, Autocorrelation, Probability distribution, Normal distribution, Variance Gamma

distribution, Stable distribution

JEL Classification

C13, G22

Introduction

During recent years, the problem of choosing assets for creating an optimal portfolio has been a favourite and much-debated topic for economists, scientists and both private and institutional investors. Before investing funds, it is advisable to analyze the evolution of the time series of prices, and particularly of returns. One possible analysis uses statistical methods, fitting a model to the data and subsequently predicting future evolution. From a financial point of view, it is also necessary to examine the relationships between assets for effective diversification. For these reasons, the main goal of this paper is to statistically analyze and test time series of stock returns with regard to the portfolio optimization problem.

In the literature, we can find many portfolio optimization models and methods which include various assumptions about and limitations of the applied data; e.g. the Markowitz model contains the assumption of a static probability belief, meaning that the time series of returns used should be normally distributed, see Markowitz (1952). The same assumption is contained in the CVaR portfolio optimization model. As already mentioned, another reason to provide data analysis is to predict the future evolution of the various investment instruments included in the asset portfolio through the estimated parameters of distribution functions. A further reason for analyzing return time series is the appropriateness of the portfolio diversification that many investors require; therefore, a dependency analysis is carried out. In this paper, only different types of correlation are characterized.

The paper is organized as follows. The first section is a brief introduction. The second section contains a characterization of the data used, followed by the description and formulation of the mathematical methodology, including descriptive statistics, probability distributions and their tests, and the definitions of stationarity and correlation. The third section presents the data characteristics and the main empirical findings. The fourth section concludes.


Description and Formulation of Mathematical Methodology and Data

This section describes the mathematical methodology, specifically general descriptive statistics, different types of probability distribution, distribution tests, estimation methods, stationarity, and linear and nonlinear types of correlation and autocorrelation coefficients. For the empirical analysis, daily adjusted close prices of the stocks included in the Dow Jones Industrial Average index are used. The time period is from Monday January 3, 2006 to Friday June 28, 2018. The main source of the individual stock prices is the Yahoo Finance website.

Probability distribution

The probability distribution of a dataset is an important characteristic if one wants to fit the parameters of a statistical (financial) model to data, or if one uses a specific model with a probability distribution assumption. Almost all data used in finance have underlying stochasticity (randomness) which feeds into uncertainties in the fitted statistical model parameters. That is the reason why it is important to find out what kind of probability distributions typically underlie the stochasticity in the data set.

A traditional assumption in financial studies is the normal distribution of the dataset, meaning the simple returns Rt are independently and identically distributed (i.i.d.) with fixed mean and variance. Under this assumption, the statistical properties of asset returns become tractable. In practice, it is not as clear as it seems. The problems are that the simple return cannot be lower than −1, whereas the normal distribution may assume any value on the real line with no limiting interval, and "If Rit is normally distributed, then the multiperiod simple return Rit[k] is not normally distributed because it is a product of one-period returns", Tsay (2010). Many empirical results have confirmed the conclusion that the normality assumption is rejected for asset returns. The general form of the normal probability density function is defined as

as

𝑓(𝑥) =

1

𝜎√2𝜋𝑒−

12(𝑧−𝜇2)2

, (1)

where 𝜇 is the mean of the distribution and also median and 𝜎 is standard deviation.

The class of stable distributions, which includes the Gaussian, Cauchy and Lévy distributions, see Lévy (1924), is a class that allows skewness and heavy tails. The general definition of a stable distribution is described by four parameters: the index of stability α, the skewness parameter β, the scale parameter γ and the location parameter δ. The reason for using stable distributions is derived from the Generalized Central Limit Theorem, which states that the only possible nontrivial limit of normalized sums of i.i.d. terms is stable. Empirical observation has confirmed that large data sets exhibit heavy tails and skewness. The most used parametrization of this distribution is the Zolotarev parametrization, defined as X ~ S(α, β, γ, δ0; 0) with characteristic function:

E\,\exp(itX) =
\begin{cases}
\exp\!\left(-\gamma^{\alpha}|t|^{\alpha}\left[1 - i\beta\tan\tfrac{\pi\alpha}{2}\,(\mathrm{sign}\,t)\left((\gamma|t|)^{1-\alpha}-1\right)\right] + i\delta_0 t\right) & \alpha \neq 1 \\[4pt]
\exp\!\left(-\gamma|t|\left[1 + i\beta\tfrac{2}{\pi}\,(\mathrm{sign}\,t)\ln(\gamma|t|)\right] + i\delta_0 t\right) & \alpha = 1.
\end{cases} \quad (2)

The parameters lie in the intervals 0 < α ≤ 2, −1 ≤ β ≤ 1, γ > 0 and δ0 ∈ R. A special case of the stable distribution with α = 1 and β = 0 is called the Cauchy distribution. Several methods for estimating the parameters have been developed. One of the most popular, and empirically confirmed as reliable, is maximum likelihood estimation. The log likelihood function of the i.i.d. variables Xi is defined as

\ell(\theta) = \sum_{i=1}^{n} \log f(X_i \mid \theta). (3)


The main problem with this formula is that a general closed-form stable density does not exist. Zolotarev (1986) dealt with and described in detail the topic of the stable density. The parameters α, μ and σ can therefore be estimated from the observations x1, …, xn by maximizing the log likelihood function

\sum_{i=1}^{n}\log f_{\alpha}(z_i) = n\log\alpha - n\log\left[(\alpha-1)\pi\right] + \sum_{i=1}^{n}\frac{\log z_i}{\alpha-1} + \sum_{i=1}^{n}\log\int_{0}^{\pi/2} U_{\alpha}(\gamma,0)\, e^{-z_i^{\alpha/(\alpha-1)}\, U_{\alpha}(\gamma,0)}\, d\gamma, (4)

where zi = |xi − μ| / σ. To preclude the discontinuity and nondifferentiability of the symmetric α-stable density function at α = 1, α is restricted to be greater than one. A disadvantage of this method is that it leads to a highly nonlinear optimization problem for which no general initialization and convergence results are available.
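The mechanics of numerically maximizing a log likelihood such as (3)–(4) can be sketched in a few lines of Python (my illustration, not the author's code). Because the stable density has no closed form, the sketch uses Student's t as a stand-in density; for the stable case the log-density would be evaluated numerically instead (e.g. via scipy.stats.levy_stable.logpdf, which is comparatively slow):

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def neg_loglik(params, x):
    """Negative log likelihood of a location-scale Student t, cf. formula (3)."""
    nu, mu, sigma = params
    if nu <= 1 or sigma <= 0:
        return np.inf                       # keep the search in the feasible region
    return -np.sum(stats.t.logpdf(x, df=nu, loc=mu, scale=sigma))

rng = np.random.default_rng(1)
x = stats.t.rvs(df=4, loc=0.0005, scale=0.01, size=2000, random_state=rng)

res = minimize(neg_loglik, x0=[5.0, 0.0, 0.02], args=(x,), method="Nelder-Mead")
print(res.x)    # estimated (nu, mu, sigma), close to (4, 0.0005, 0.01)
```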

Another admissible method is the McCulloch method. McCulloch (1986) generalized and improved the Fama-Roll method from the family of sample quantile methods, Fama (1971). First, a statistic that is independent of both γ and δ is defined,

v_\alpha = \frac{x_{0.95} - x_{0.05}}{x_{0.75} - x_{0.25}}, (5)

where \hat{v}_\alpha is the corresponding sample value and a consistent estimator of v_\alpha. Similarly, v_\beta is calculated as

v_\beta = \frac{x_{0.95} + x_{0.05} - 2x_{0.50}}{x_{0.95} - x_{0.05}}, (6)

where \hat{v}_\beta is also the corresponding sample value and a consistent estimator of v_\beta. Both v_\alpha and v_\beta are functions of α and β. The parameters α and β may therefore be expressed as functions of v_\alpha and v_\beta,

\alpha = \psi_1(v_\alpha, v_\beta), \qquad \beta = \psi_2(v_\alpha, v_\beta). (7)

Consequently, the scale parameter is given by

\hat{\gamma} = \frac{x_{0.75} - x_{0.25}}{\psi_3(\hat{\alpha}, \hat{\beta})}, (8)

where \psi_3(\hat{\alpha}, \hat{\beta}) is given by a table in McCulloch (1986). McCulloch also gives an estimate of the location parameter δ; however, this procedure is much more complicated.
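A minimal sketch of the first step of the McCulloch quantile method (my illustration; the mapping functions ψ1–ψ3 are lookup tables in McCulloch (1986) and are not reproduced here) computes the sample statistics (5) and (6) from return data:

```python
import numpy as np

def mcculloch_statistics(x):
    """Sample quantile statistics v_alpha and v_beta, formulas (5) and (6)."""
    q05, q25, q50, q75, q95 = np.percentile(x, [5, 25, 50, 75, 95])
    v_alpha = (q95 - q05) / (q75 - q25)
    v_beta = (q95 + q05 - 2 * q50) / (q95 - q05)
    # alpha-hat and beta-hat then follow from the psi_1 / psi_2 tables in McCulloch (1986)
    return v_alpha, v_beta

rng = np.random.default_rng(2)
returns = rng.standard_t(df=3, size=5000) * 0.01     # toy heavy-tailed returns
print(mcculloch_statistics(returns))
```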

The generalized Pareto distribution (GPD) is often used to model the tails of another known

distribution. The probability density function of the generalized Pareto distribution is defined as:

f(x) = \frac{1}{\sigma}\left(1 + k\,\frac{x-\theta}{\sigma}\right)^{-1-\frac{1}{k}}, (9)

where k is the shape parameter, σ is the scale parameter and θ is the threshold parameter. If k = 0 and θ = 0, the GPD is the same as the exponential distribution. The GPD has three basic forms, each corresponding to a limiting distribution of exceedance data from a different class of underlying distributions. Distributions whose tails decrease exponentially lead to a shape parameter of zero; distributions whose tails decrease as a polynomial lead to a positive shape parameter; and distributions whose tails are finite lead to a negative shape parameter.
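In practice the GPD is fitted to exceedances over a high threshold; a hedged sketch with scipy (illustrative only, the threshold choice and data are hypothetical) could look as follows:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
losses = -rng.standard_t(df=3, size=5000) * 0.01     # toy daily losses (negative returns)

u = np.quantile(losses, 0.95)                        # high threshold
exceedances = losses[losses > u] - u                 # peaks over threshold
k, loc, sigma = genpareto.fit(exceedances, floc=0)   # fix location at 0, estimate shape and scale
print(f"shape k = {k:.3f}, scale sigma = {sigma:.4f}")
# positive k indicates a polynomially decaying (heavy) tail, cf. the classification above
```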

Crisis periods such as the 1987 stock market crash or the collapse of Lehman Brothers have changed the

view that extreme events in the financial markets have negligible probability. The use of the extreme

value (EV) distribution is then adequate. The standardized generalized extreme value distribution,

(GEV) in the defined by a location parameter µ, a scale parameter σ, and a tail shape parameter ξ. The

whole equation is defined as:


F_{(\xi,\mu,\sigma)}(x) = \exp\!\left(-\left(1 + \xi\,\frac{x-\mu}{\sigma}\right)^{-1/\xi}\right), (10)

where

1 + \xi\,\frac{x-\mu}{\sigma} > 0, \quad \xi \neq 0. (11)

The corresponding probability density function is:

f_{(\xi,\mu,\sigma)}(x) = \frac{1}{\sigma}\left(1 + \xi\,\frac{x-\mu}{\sigma}\right)^{-1-1/\xi}. (12)

If ξ = 0, then the GEV distribution belongs to the Gumbel class, which includes the normal, exponential and gamma distributions. The class of GEV distributions is very flexible thanks to the tail shape parameter ξ. The tail index is defined as α = ξ⁻¹. If ξ > 0, then the distribution is called Fréchet and covers fat-tailed distributions such as the Pareto or Cauchy distribution.

The gamma distribution is one of the general types of statistical distribution. It is a continuous, positive-only, unimodal distribution related to the exponential and normal distributions. A random gamma-distributed variable x with shape parameter k and scale parameter θ has the probability density function

f(x) = \frac{x^{k-1} e^{-x/\theta}}{\theta^{k}\,\Gamma(k)}, (13)

where x, k, θ > 0 and Γ(k) is the gamma function evaluated at k.

The formula for the cumulative distribution function of the gamma distribution is defined as

F(x) = \int_{0}^{x} f(u; k, \theta)\, du = \frac{\gamma\!\left(k, \frac{x}{\theta}\right)}{\Gamma(k)}, (14)

where \gamma\!\left(k, \frac{x}{\theta}\right) is the lower incomplete gamma function, denoted by

\Gamma_x(a) = \int_{0}^{x} t^{a-1} e^{-t}\, dt. (15)

The variance gamma distribution is a continuous probability distribution defined as a normal variance-mean mixture in which the mixing density is the gamma distribution. Its probability density function is unimodal and heavy-tailed and is given by

$f_X(x) = \dfrac{(\alpha^2 - \beta^2)^{\lambda}\,|x-\mu|^{\lambda - 1/2}\,K_{\lambda - 1/2}(\alpha|x-\mu|)}{\sqrt{\pi}\,\Gamma(\lambda)\,(2\alpha)^{\lambda - 1/2}}\,\exp\left(\beta(x-\mu)\right)$, (16)

where $K$ denotes a modified Bessel function of the second kind and $\Gamma$ is the gamma function. The parameter $\mu$ determines location, $\alpha$ the tail behaviour, $\beta$ the asymmetry and $\lambda$ the scale. The variance gamma distribution is often used to model daily asset returns, since the skewness and kurtosis of the data may fit this distribution better than a normal distribution.

Many methods can be used to estimate the individual parameters of this type of distribution, e.g. methods based on the Nelder-Mead algorithm, see Nelder (1965). This is a commonly used numerical direct search method, based on function comparison, for finding the minimum or maximum of an objective function in a multidimensional space. The Nelder-Mead algorithm usually requires only one


or two function evaluations at each step, while many other direct search methods use n or more function evaluations per step.
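A hedged sketch of such a Nelder-Mead based maximum-likelihood fit is shown below. To keep the example self-contained it maximizes a Student-t likelihood instead of the variance-gamma likelihood (a variance-gamma density would be substituted in practice), and the return series is synthetic.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

returns = np.random.default_rng(2).standard_t(df=5, size=3396) * 0.01   # placeholder returns

def neg_log_likelihood(params, x):
    df, loc, scale = params
    if df <= 0 or scale <= 0:      # keep the simplex in the admissible region
        return np.inf
    return -np.sum(stats.t.logpdf(x, df, loc=loc, scale=scale))

res = minimize(neg_log_likelihood, x0=[4.0, 0.0, 0.01],
               args=(returns,), method="Nelder-Mead")
df_hat, loc_hat, scale_hat = res.x
```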

The normal inverse Gaussian distribution (NIG) is defined as a variance-mean mixture of a normal distribution with the inverse Gaussian as the mixing distribution. The NIG distribution is also often used in finance to construct stochastic processes for statistical modelling purposes, see Barndorff-Nielsen (1997). It is defined on the whole real line and has the density function

$g(x; \alpha, \beta, \mu, \delta) = a(\alpha, \beta, \mu, \delta)\, q\left(\dfrac{x-\mu}{\delta}\right)^{-1} K_1\left\{\delta\alpha\, q\left(\dfrac{x-\mu}{\delta}\right)\right\} \exp(\beta x)$, (17)

where

$a(\alpha, \beta, \mu, \delta) = \pi^{-1}\alpha \exp\left(\delta\sqrt{\alpha^2 - \beta^2} - \beta\mu\right)$, (18)

and

$q(x) = \sqrt{1 + x^2}$, (19)

where $K_1$ is the modified Bessel function of the third kind with index 1. The parameter $\alpha$ controls tail heaviness, $\beta$ is an asymmetry parameter satisfying $0 \le |\beta| \le \alpha$, $\mu \in \mathbb{R}$ defines the location and $\delta > 0$ the scale. For $\beta = 0$ the distribution is symmetric around the value $\mu$.
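For completeness, a small SciPy sketch of fitting the NIG distribution is given below. It assumes that SciPy's shape parameters a and b relate to the notation above through a = αδ and b = βδ, with loc = μ and scale = δ, and it uses synthetic data.

```python
import numpy as np
from scipy import stats

returns = np.random.default_rng(3).standard_t(df=5, size=3396) * 0.01   # placeholder returns

a, b, loc, scale = stats.norminvgauss.fit(returns)
alpha_hat, beta_hat = a / scale, b / scale    # back to (alpha, beta), assumed mapping
mu_hat, delta_hat = loc, scale
```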

Distribution tests

This group of tests is used to examine whether the data follow a supposed distribution. Generally, the null hypothesis states that the data set follows the given distribution, while the alternative hypothesis states that it does not, see e.g. Tsay (2010).

The first frequently applied test is the Kolmogorov-Smirnov test (K-S test), which examines whether a random variable is likely to follow a given distribution. The given distribution is usually assumed to be the normal distribution, but other kinds of probability distribution can also be tested. The K-S test is based on the distance between the empirical distribution function and the given cumulative distribution function, and its test statistic is defined as

$D = \max_{1 \le i \le N}\left( F(R_i) - \dfrac{i-1}{N},\; \dfrac{i}{N} - F(R_i) \right)$, (20)

where 𝐹 is the theoretical cumulative distribution of the distribution being tested.

The K-S test has several important limitations: it applies only to continuous distributions, it is more sensitive near the centre of the distribution than at the tails, and the distribution must be fully specified.
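A minimal example of the K-S statistic of eq. (20) in Python is sketched below; the return series is synthetic and, strictly speaking, estimating the normal parameters from the same sample violates the "fully specified" requirement just mentioned (the Lilliefors variant addresses this).

```python
import numpy as np
from scipy import stats

returns = np.random.default_rng(4).standard_t(df=4, size=3396) * 0.01   # placeholder returns
stat, p_value = stats.kstest(returns, "norm",
                             args=(returns.mean(), returns.std(ddof=1)))
# a small p-value leads to rejecting the null that the returns follow the given distribution
```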

The Jarque-Bera test can also be used to test the hypothesis that the data follow a given distribution, typically the normal distribution. It is based on a different testing principle than the K-S test; its test statistic is defined as

$JB = \dfrac{n}{6}\left[S^2 + \dfrac{(K-3)^2}{4}\right]$, (21)

where n is the number of observations, S is the sample skewness and K is the sample kurtosis. If the data come from a normal distribution, the JB statistic asymptotically follows a chi-squared distribution with two degrees of freedom.
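Eq. (21) can be written out directly and compared with SciPy's implementation, as in the following sketch on synthetic data (small differences may appear because of finite-sample corrections):

```python
import numpy as np
from scipy import stats

r = np.random.default_rng(5).normal(size=3396)      # placeholder sample
n = r.size
S = stats.skew(r)
K = stats.kurtosis(r, fisher=False)                 # raw kurtosis, equal to 3 for the normal
jb_manual = n / 6 * (S**2 + (K - 3)**2 / 4)         # eq. (21)
jb_stat, jb_pvalue = stats.jarque_bera(r)
```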

The Shapiro–Wilk test is primarily used for data sets with N < 30 observations. The statistic is defined as

$W = \dfrac{\left(\sum_{i=1}^{n} a_i x_i\right)^2}{\sum_{i=1}^{n} (x_i - \bar{x})^2}$, (22)


where $x_i$ are the ordered sample values and $a_i$ are constants generated from the means, variances and covariances of the order statistics of a normally distributed sample of size n, see Shapiro (1965).

The Lilliefors test is also used to test the hypothesis that the data come from a normally distributed population (or, in another variant, from an exponentially distributed population). The Lilliefors test uses the same statistic (20) as the K-S test, but the critical values are compared with the Lilliefors test table. Since the critical values in this table are smaller, the Lilliefors test is less likely to show that the data are normally distributed.

Finally, the Anderson-Darling test also belongs to the group of distribution tests. It is again based on the K-S test but gives more weight to the tails. The Anderson-Darling test makes use of the specific distribution under test when calculating critical values. The Anderson-Darling test statistic is defined as

$A^2 = -N - S$, (23)

where

$S = \sum_{i=1}^{N} \dfrac{2i - 1}{N}\left[\ln F(Y_i) + \ln\left(1 - F(Y_{N+1-i})\right)\right]$, (24)

where F is the cumulative distribution function of the specified distribution and $Y_i$ are the ordered data. The test is one-sided: if the test statistic $A^2$ is greater than the critical value, the null hypothesis is rejected.
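The remaining normality tests discussed above are available in SciPy and statsmodels; a hedged sketch applying them to one synthetic return series follows (the Lilliefors function is assumed to come from statsmodels.stats.diagnostic).

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

r = np.random.default_rng(6).standard_t(df=4, size=500) * 0.01   # placeholder returns

sw_stat, sw_p = stats.shapiro(r)             # Shapiro-Wilk, eq. (22)
lf_stat, lf_p = lilliefors(r, dist="norm")   # Lilliefors variant of the K-S statistic (20)
ad = stats.anderson(r, dist="norm")          # Anderson-Darling, eqs. (23)-(24)
# reject normality at the 5 % level if the statistic exceeds the tabulated critical value
reject_ad = ad.statistic > ad.critical_values[list(ad.significance_level).index(5.0)]
```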

Stationarity

In many studies, financial series (interest rates, foreign exchange rates or price series) tend to be nonstationary. For example, price time series are mainly nonstationary since there is no fixed value to which prices should return. Such a nonstationary series is usually called a unit-root nonstationary time series, as mentioned by Tsay (2013).

Stationarity means that the statistical characteristics of a time series remain almost the same over the time period. Stationarity is a very important property because many statistical tests and many models (including financial models) rely on it. If the joint distribution of the time series $(r_{t_1}, r_{t_2}, \dots, r_{t_k})$ is invariant under time shifts, the series is called strictly stationary. We assume weak stationarity, which means that the covariance between $r_t$ and its lagged values, called the autocovariance, exists and depends only on the lag. Stationarity can be assessed either graphically or with statistical tests: the graphical approach consists of analysing the autocorrelation and partial autocorrelation plots, while the statistical approach uses tests such as the Dickey-Fuller test.

Stationarity and autocorrelation tests

The Dickey-Fuller test is used to test the null hypothesis that a unit root is present in an autoregressive model of a given time series, and that the process is thus not stationary. The original test treats the case of a simple lag-1 AR model. Before performing the test it is necessary to decide, usually visually, whether the mean of the time series differs from zero and whether the trend is linear or quadratic, see Tsay (2010, 2013). The Dickey-Fuller test examines whether $\phi = 0$ in the model of the data

$\Delta y_t = \alpha + \beta t + \phi y_{t-1} + e_t$, (25)

where $y_t$ is the analysed data set. If $\phi = 0$, the process is a random walk.
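In Python, the (augmented) version of this test is available in statsmodels; the sketch below applies it to an illustrative random-walk price series and to its log returns.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(7)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 3396)))   # placeholder random-walk prices
returns = np.diff(np.log(prices))

adf_prices = adfuller(prices, regression="ct")    # constant and trend, as in eq. (25)
adf_returns = adfuller(returns, regression="c")
# element [0] is the test statistic, element [1] the p-value
```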

For the Ljung-Box test, the null hypothesis is that the time series does not show statistically significant dependency structures up to order m. The Ljung-Box test statistic is calculated as

$Q(m) = T(T+2)\sum_{i=1}^{m}\dfrac{r_i^2}{T - i}$, (26)


where $r_i$ is the sample autocorrelation at lag i, T is the number of observations and m is the tested time lag.
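A short statsmodels sketch of eq. (26), testing the first m = 15 lags of a synthetic return series, might look as follows (statsmodels ≥ 0.13 returns the result as a DataFrame):

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

r = np.random.default_rng(8).normal(0, 0.01, 3396)   # placeholder returns
lb = acorr_ljungbox(r, lags=[15])                    # DataFrame with lb_stat and lb_pvalue
q_stat = float(lb["lb_stat"].iloc[0])
p_value = float(lb["lb_pvalue"].iloc[0])
```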

Another acceptable test among many others is the Kwiatkowski–Phillips–Schmidt–Shin (KPSS) test, whose null hypothesis is that the series is stationary. The test is based on a similar principle to the Dickey-Fuller test, and its main idea is to split the time series into three parts:

$x_t = r_t + \beta t + \varepsilon_t$, (27)

where $r_t$ is a random walk, $\beta t$ is a deterministic trend and $\varepsilon_t$ is a stationary error; the random walk innovations $u_t$ are i.i.d. with mean zero and variance $\sigma^2$. Based on this decomposition, the null hypothesis is defined as $H_0\!: \sigma^2 = 0$.

The KPSS test statistic is

$K = \dfrac{\sum_{n=1}^{N} S_n^2}{N^2 s^2}$, (28)

where N is the sample size, $s^2$ is the Newey-West estimate of the long-run variance and $S_n = e_1 + \dots + e_n$ is the partial sum of the residuals.

The last test characterized here is the Phillips–Perron test, used for testing the null hypothesis that a time series is integrated of order 1. It builds on the previously mentioned Dickey–Fuller test.
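Both tests are available in Python; the sketch below uses the KPSS implementation from statsmodels and, as an assumption, the Phillips-Perron implementation from the arch package, again on synthetic returns.

```python
import numpy as np
from statsmodels.tsa.stattools import kpss
from arch.unitroot import PhillipsPerron   # assumes the arch package is installed

r = np.random.default_rng(9).normal(0, 0.01, 3396)   # placeholder returns

kpss_stat, kpss_p, _, _ = kpss(r, regression="c", nlags="auto")   # H0: level stationarity
pp = PhillipsPerron(r)                                            # H0: unit root (I(1))
pp_stat, pp_p = pp.stat, pp.pvalue
```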

Correlation and autocorrelation

The dependency structure of the random sources is an essential input in portfolio theory and in other financial optimization problems. The first and most widely used indicator is the Pearson correlation coefficient, which captures linear dependency, see Tsay (2010). The Pearson correlation coefficient between two random variables X and Y is defined as

$\rho_{x,y} = \dfrac{\mathrm{Cov}(X,Y)}{\sigma(X)\,\sigma(Y)} = \dfrac{\sum_{t=1}^{T}[x_t - E(x)][y_t - E(y)]}{\sqrt{\sum_{t=1}^{T}[x_t - E(x)]^2 \cdot \sum_{t=1}^{T}[y_t - E(y)]^2}}$, (29)

where $E(x)$ and $E(y)$ are the sample means of X and Y, respectively, and it is assumed that the variances exist. It works well only with normally distributed data, which can be a disadvantage: for financial market returns the normal (Gaussian) distribution is rejected in most cases, see Fama (1965).

Another possibility for measuring the correlation between random variables is the Spearman correlation coefficient, which is a nonparametric statistic. The Spearman correlation between two variables is equal to the Pearson correlation between the rank values of those two random variables. It is defined as

$\rho_{S;x,y} = \dfrac{\mathrm{Cov}(r_X, r_Y)}{\sigma(r_X)\,\sigma(r_Y)}$, (30)

where $r_X$ is the rank of variable X, $r_Y$ is the rank of variable Y, and $\sigma(r_X)$ and $\sigma(r_Y)$ are the standard deviations of the rank variables.

The third measure worth mentioning is the Kendall correlation coefficient. The Kendall correlation is also a non-parametric statistic based on the ranks of two random variables. Kendall's tau usually takes smaller values than Spearman's correlation. The Kendall rank correlation is defined as

$\tau = \dfrac{n_c - n_d}{\frac{1}{2}\,n(n-1)}$, (31)

where $n_c$ is the number of concordant pairs and $n_d$ is the number of discordant pairs.
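All three dependence measures of eqs. (29)-(31) are available in SciPy, as the following sketch on two synthetic return series illustrates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
x = rng.normal(0, 0.01, 3396)                   # placeholder returns
y = 0.5 * x + rng.normal(0, 0.01, 3396)         # a correlated placeholder series

pearson_r, pearson_p = stats.pearsonr(x, y)     # eq. (29)
spearman_r, spearman_p = stats.spearmanr(x, y)  # eq. (30)
kendall_tau, kendall_p = stats.kendalltau(x, y) # eq. (31)
```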

When the linear dependence between rt and its past values rt−i is of interest, the concept of correlation

is generalized to autocorrelation. The correlation coefficient between rt and 𝑟𝑡−𝑖 is called the lag-i


autocorrelation of $r_t$ and is commonly denoted by $\rho_i$, which under the weak stationarity assumption is a function of i only. The sample lag-i autocorrelation of $r_t$ is given by

$\hat{\rho}_i = \dfrac{\sum_{t=i+1}^{T}[r_t - E(r)][r_{t-i} - E(r)]}{\sum_{t=1}^{T}[r_t - E(r)]^2}$, (32)

where $0 \le i < T-1$. If $r_t$ has a statistically significant lag-1 autocorrelation, the value $r_{t-1}$ might be useful for predicting $r_t$. The corresponding simple linear regression model is referred to as an autoregressive model of order 1, or AR(1) model. The definition of the AR(1) model can be generalized to the general AR(p) model, which is often used in finance, see Tsay (2010).
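A minimal statsmodels sketch of the sample autocorrelations of eq. (32) and of a simple AR(1) fit is given below, again on a synthetic return series.

```python
import numpy as np
from statsmodels.tsa.stattools import acf
from statsmodels.tsa.ar_model import AutoReg

r = np.random.default_rng(11).normal(0, 0.01, 3396)   # placeholder returns

rho = acf(r, nlags=15)          # rho[i] is the lag-i sample autocorrelation, rho[0] = 1
ar1 = AutoReg(r, lags=1).fit()  # r_t = c + phi * r_{t-1} + e_t
phi_hat = ar1.params[1]
```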

Empirical Results

In this section, the methodology specified in the previous sections is used to analyse and test stock return time series applicable to portfolio optimization problems. In the first step, continuously compounded returns were calculated from the 3 396 daily prices of each stock, because price time series are usually not stationary. From the daily returns, the basic descriptive statistics (mean, minimum, maximum, standard deviation, skewness and kurtosis) were calculated.
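A hedged pandas sketch of this preparation step is shown below; the price table is generated synthetically here and only stands in for the 30 daily price series actually analysed.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(12)
dates = pd.bdate_range("2006-06-01", periods=3396)
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0.0003, 0.015, (3396, 3)), axis=0)),
    index=dates, columns=["AAPL", "JPM", "XOM"])          # placeholder prices only

returns = np.log(prices / prices.shift(1)).dropna()       # continuously compounded returns
summary = returns.agg(["mean", "min", "max", "std", "skew", "kurt"]).T
```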

All daily returns lie in the interval (−0.2323; 0.2983). The absolute values of the interval boundary points are roughly similar. In many cases, the difference between the absolute values of the limits of the interval in which the returns occur is smaller than 2 percentage points. In some cases, when the absolute values of the minimum and maximum are compared, the possible maximum return is higher than the possible maximum loss. This may be a sign of larger positive extreme values or, less likely, of a right tail heavier than the left tail. The means of the daily returns are very small, of the order of ten-thousandths. When comparing the mean value with the maximum and minimum values, there is a large gap in which extreme and outlying values will certainly be present. The highest average daily return belongs to stock V, ahead of the AAPL and NKE stocks. Conversely, the least profitable shares are the DOW and GS stocks. From these findings, we cannot confirm that one industry is more profitable than another. According to financial theory, stocks with higher returns should also show a higher standard deviation, but this assumption does not hold in our data set: e.g. the average return of GS is the second smallest, while its standard deviation is one of the highest, which contradicts basic economic theory. The same situation can be seen for DOW, whereas the opposite situation, a relatively higher return combined with a smaller standard deviation, occurs for the MCD stock. A zero skewness value indicates that the values of the random variable are distributed symmetrically to the left and right of the mean value. For example, the skewness of BA is the nearest to zero, which indicates a certain symmetry. The most skewed data sets are those of the MMM and WBA stocks; in both cases they are left-skewed because of the negative statistic value, which means a higher probability of achieving a below-average return or a negative yield. The UNH data set is the most right-skewed, indicating a higher probability of above-average daily returns. Kurtosis values higher than 3 are called leptokurtic and indicate fatter tails. In our data set, only for DOW is leptokurtosis apparently rejected, because of the value 0.738. We can also see extreme kurtosis (higher than 25) in two asset returns (TRV, UNH), and for this reason it is advisable to consider a distribution other than the normal distribution. This confirms the hypothesis that stock returns do not follow a normal distribution.

Correlation and autocorrelation

In the Pearson correlation matrix, values lower than 0.3 can be found only in a few correlation coefficients involving DOW, and in many cases these are not significant at the 5 % significance level. In the remaining time series, the correlation coefficients mostly lie in the interval (0.30, 0.65). Just as there are low values, there are also exceptions with higher values, apart from the value 1 on the diagonal, which is the correlation of each series with itself. The most linearly dependent pairs are CVX and XOM, with a correlation coefficient of 0.860, and AXP and JPM, where the coefficient is 0.712. The first pair of companies is from the energy industry and the second pair from the financial services industry, which supports the assumption that returns evolve similarly within individual industries. The values of the Spearman correlation coefficients are lower compared to the Pearson correlation.


Similarly, the most dependent pairs of return time series are again CVX and XOM and AXP and JPM, and the least dependent returns are again between UNH and VZ or WMT, in reverse order. The Kendall correlation values were the lowest, as expected.

As can be seen from the autocorrelation coefficients, in all cases the hypothesis that the time series shows statistically significant dependencies up to lag 15 is rejected.

Normality tests

In recent models, the return normality assumption is abandoned because the normality of asset returns has been rejected in many studies. To confirm these findings, the normal distribution of the data is tested with statistical tests. The results of the tests are shown in Table 1.

The Shapiro-Wilk test is appropriate for data sets smaller than 30 elements. In this case, with 3 395 observations for each asset, the Shapiro-Wilk test is not very suitable. In all cases except DOW the p-value does not reach the 5 % significance limit, so the null hypothesis can be rejected and we conclude that the daily return series do not come from a normal distribution. As for DOW, according to the Shapiro-Wilk and Jarque-Bera tests, the hypothesis of normally distributed returns cannot be rejected at the 95 % confidence level. Looking at the results of the Anderson-Darling test, the null hypothesis is rejected for DOW as well, but compared to the rest there is a visible difference in the test statistic, which is markedly lower than for the other assets. The same difference is visible in the case of the Jarque-Bera test.

Stationarity and autocorrelation tests

Looking at the Ljung-Box test (Table 2), only in 10 time series (e.g. AAPL, BA, CAT, VZ) does the test fail to reject the null hypothesis of no first-order dependency; in other words, behaviour similar to white noise is confirmed in 10 time series. After running the Augmented Dickey-Fuller and Phillips-Perron tests, we can see that the significance levels of the statistics are uniformly smaller than 0.05 in all cases, which means that the null hypothesis of a unit root is rejected and the return series are stationary. Finally, the KPSS test for level stationarity is performed, and the main attention is again paid to the significance level. In contrast to the previous tests, all p-values are higher than 0.05, so the null hypothesis of level stationarity cannot be rejected.

Parameters estimation and testing of particular distribution

The next step is to estimate the individual parameters defining the other types of distribution. Firstly, the parameter estimates of the stable distribution obtained by maximum likelihood and by the McCulloch method are given in Table 3.

Table 1. Results of particular tests of normality

Kolmogorov-Smirnov Shapiro-Wilk Jarque Bera Anderson-Darling test

Statistic Sig. Statistic Sig. Statistic Sig. Statistic Sig.

AAPL .469 .000 .940 .000 6123.7 .000 39.125 .000

AXP .464 .000 .836 .000 25315 .000 124.09 .000

BA .472 .000 .945 .000 3201.1 .000 38.064 .000

CAT .470 .000 .937 .000 4473.8 .000 42.741 .000

CSCO .471 .000 .888 .000 19942 .000 64.822 .000

CVX .476 .000 .905 .000 24308 .000 44.559 .000

DIS .474 .000 .902 .000 11920 .000 58.439 .000

DOW .476 .000 .975 .174 1.094 .579 .796 .037

GS .467 .000 .856 .000 37323 .000 79.152 .000

HD .474 .000 .935 .000 4576.9 .000 47.115 .000

IBM .478 .000 .927 .000 5487.8 .000 46.978 .000

INTC .471 .000 .939 .000 4223.7 .000 37.646 .000

JNJ .483 .000 .904 .000 25671 .000 49.043 .000

JPM .463 .000 .815 .000 42219 .000 129.71 .000

KO .482 .000 .903 .000 25370 .000 49.465 .000

MCD .480 .000 .939 .000 4756 .000 34.76 .000

MMM .478 .000 .905 .000 11441 .000 63.704 .000

MRK .475 .000 .893 .000 19055 .000 56.446 .000

MSFT .473 .000 .907 .000 13686 .000 55.397 .000

NKE .079 .000 .907 .000 9348 .000 54.223 .000

PFE .074 .000 .932 .000 7229 .000 40.136 .000

PG .481 .000 .917 .000 9285 .000 51.99 .000


TRV .473 .000 .816 .000 100420 .000 108.25 .000

UNH .469 .000 .852 .000 98849 .000 73.984 .000

UTX .477 .000 .928 .000 7189 .000 45.729 .000

V .471 .000 .909 .000 8621.6 .000 48.723 .000

VZ .479 .000 .933 .000 9359 .000 33.505 .000

WBA .474 .000 .911 .000 18076 .000 43.292 .000

WMT .480 .000 .905 .000 18037 .000 46.067 .000

XOM .477 .000 .891 .000 29847 .000 51.242 .000

Source: Neděla (2020)

Table 2. Results of particular stationarity tests

Ljung-Box test Dickey-Fuller test KPSS test Phillips-Perron test

Statistic Sig. Statistic Sig. Statistic Sig. Statistic Sig.

AAPL .037 .847 -42.263 .010 .060 .100 -15.944 .010

AXP 26.770 .000 -45.127 .010 .127 .100 -63.906 .010

BA .262 .609 -41.971 .010 .278 .100 -57.764 .010

CAT .328 .567 -41.254 .010 .050 .100 -57.670 .010

CSCO 4.632 .031 -43.828 .010 .151 .100 -60.686 .010

CVX 22.233 .000 -45.875 .010 .028 .100 -63.487 .010

DIS 11.669 .001 -45.356 .010 .067 .100 -62.100 .010

DOW .015 .904 -5.466 .010 .165 .100 -8.070 .010

GS 5.696 .015 -42.995 .010 .044 .100 -61.054 .010

HD .439 .508 -43.053 .010 .307 .100 -57.708 .010

IBM .911 .340 -41.775 .010 .170 .100 -59.362 .010

INTC 17.673 .000 -43.648 .010 .119 .100 -62.763 .010

JNJ .019 .003 -45.384 .010 .090 .100 -61.519 .010

JPM 29.692 .000 -44.410 .010 .072 .100 -65.417 .010

KO 10.196 .001 -44.646 .010 .076 .100 -62.086 .010

MCD 11.382 .001 -45.848 .010 .056 .100 -62.473 .010

MMM 7.413 .006 -43.573 .010 .089 .100 -61.144 .010

MRK 1.877 .171 -43.237 .010 .067 .100 -59.831 .010

MSFT 16.021 .000 -45.304 .010 .364 .093 -63.370 .010

NKE 9.373 .002 -43.993 .010 .032 .100 -62.133 .010

PFE 7.698 .006 -44.363 .010 .144 .100 -61.534 .010

PG 21.600 .000 -45.884 .010 .077 .100 -63.556 .010

TRV 114.890 .000 -48.591 .010 .052 .100 -71.905 .010

UNH 1.366 .242 -42.180 .010 .391 .081 -59.469 .010

UTX 12.992 .000 -44.229 .010 .030 .100 -62.150 .010

V 25.467 .000 -42.462 .010 .039 .100 -59.864 .010

VZ .384 .535 -44.926 .010 .040 .100 -59.148 .010

WBA 3.030 .082 -42.674 .010 .125 .100 -60.006 .010

WMT 12.150 .000 -44.945 .010 .070 .100 -62.385 .010

XOM 49.594 .000 -48.367 .010 .088 .100 -66.954 .010

Source: Neděla (2020)


Table 3. Parameter estimation of stable distribution

Maximum Likelihood Estimation McCulloch Estimation
α β γ δ α β γ δ

AAPL 1.6198 -.0439 .0108 .0012 1.5100 .0300 .0101 .0008

AXP 1.3796 -.0832 .0085 .0010 1.3260 -.0880 .0083 .0008

BA 1.6199 -.1073 .0094 .0012 1.5450 -.1040 .0093 .0011

CAT 1.6082 -.0953 .0106 .0001 1.4670 -.0490 .0098 .0007

CSCO 1.5748 -.0886 .0087 .0009 1.4920 -.0610 .0082 .0007

CVX 1.6533 -.1709 .0086 .0010 1.5560 -.1810 .0082 .0012

DIS 1.5878 -.0502 .0080 .0008 1.4770 -.0670 .0076 .0009

DOW 1.9990 -.1964 .0158 .0001 1.4240 -.2460 .0115 .0035

GS 1.5654 -.0765 .0106 .0007 1.4750 -.0910 .0102 .0007

HD 1.5649 -.0356 .0082 .0007 1.3900 -.0130 .0078 .0005

IBM 1.6972 -.1501 .0073 .0007 1.5200 -.0950 .0068 .0006

INTC 1.6437 -.1158 .0098 .0009 1.5430 -.0870 .0094 .0009

JNJ 1.6363 -.0921 .0053 .0007 1.5160 -.0480 .0050 .0004

JPM 1.4031 -.0185 .0093 .0004 1.3420 -.0800 .0090 .0005

KO 1.6237 -.0944 .0057 .0007 1.5260 -.0770 .0054 .0007

MCD 1.6735 -.1104 .0064 .0009 1.5540 -.0920 .0060 .0010

MMM 1.5429 -.1281 .0066 .0011 1.4530 -.0860 .0063 .0009

MRK 1.6328 -.0248 .0077 .0006 1.5870 -.0090 .0075 .0004

MSFT 1.5805 .0013 .0084 .0006 1.4960 -.0240 .0080 .0005

NKE 1.6082 -.0406 .0085 .0008 1.4610 -.0250 .0082 .0008

PFE 1.6339 .0002 .0072 .0004 1.5020 .0036 .0068 .0001

PG 1.5844 -.0163 .0054 .0004 1.4640 -.0050 .0052 .0003

TRV 1.4923 -.0701 .0070 .0008 1.4280 -.0570 .0070 .0008

UNH 1.5755 -.0706 .0093 .0008 1.5250 -.0240 .0091 .0006

UTX 1.6025 -.0670 .0075 .0007 1.5000 -.0600 .0072 .0006

V 1.5538 -.0775 .0091 .0013 1.4720 -.0610 .0086 .0014

VZ 1.6917 -.0129 .0072 .0008 1.6320 -.0880 .0071 .0008

WBA 1.6593 -.0427 .0089 .0004 1.5970 -.0340 .0088 .0003

WMT 1.6564 -.1030 .0064 .0006 1.5550 -.0750 .0062 .0007

XOM 1.6500 -.1015 .0076 .0006 1.5550 -.0690 .0073 .0004

Source: Neděla (2020)

In the first estimation method, the average value of about 1.6 for the stability parameter α suggests the stronger presence of heavy tails, while the skewness values are mostly negative, with an average of about −0.07, meaning that more values lie to the left of the mean. With the second method the estimated parameters are slightly different, especially the first two parameters.

The estimates of the variance gamma distribution parameters obtained by the method based on the Nelder-Mead algorithm follow in Table 4.

Table 4. Parameter estimation of variance gamma distribution

AAPL .0000 .0427 .0017 2.4830
AXP -.0002 .0254 .0004 2.0709
BA .0027 .0165 -.0021 .2005
CAT .0009 .0407 -.0002 2.5064
CSCO .0017 .0159 -.0013 .2949
CVX .0016 .0155 -.0012 .8344
DIS .0009 .0153 -.0003 .9987
DOW .0088 .0234 -.0080 1.2710
GS .0012 .0318 -.0007 2.1312
HD .0004 .0156 .0002 .9957
IBM -.0008 .0240 .0013 2.6452
INTC .0014 .0177 -.0011 .8408
JNJ .0000 .0161 .0006 2.5460
JPM -.0006 .0513 .0087 2.8245
KO .0009 .0104 -.0005 .8835
MCD .0012 .0144 -.0006 1.7215
MMM .0005 .0197 -.0004 2.3707
MRK .0004 .0145 .0001 .9082
MSFT .0009 .0153 -.0004 .2020
NKE .0000 -.0154 .0008 .2019
PFE .0005 .0126 -.0002 .2269
PG .0004 .0103 -.0001 .9396
TRV .0012 .0140 -.0007 .3073
UNH .0000 .0383 .0007 2.7150
UTX .0015 .0134 -.0012 .2075
V .0016 .0170 -.0006 .1704
VZ .0014 .0214 -.0007 2.4843
WBA .0018 .0154 -.0017 .2137
WMT .0007 .0225 -.0005 2.5202
XOM .0017 .0135 -.0016 .1334

Source: Neděla (2020)

Table 4 shows large differences between the estimated values of the γ parameter, which is the scale parameter, and a smaller disparity also in the tail index; the remaining parameters do not show any significant differences.

Further parameter estimation is carried out for the generalized extreme value distribution using the maximum likelihood method. This distribution is chosen because some financial time series contain many extreme values.

The estimated location parameter of the GEV distribution ranges from −0.100 to −0.0041 and the scale from 0.0131 to 0.0325. The tail index can be calculated from the estimated shape parameters, and it is advisable to perform statistical tests of whether the estimated distribution fits the data set. The results of the tests in Table 5 lead to a clear conclusion: in all cases the hypothesis that the time series follows the given distribution is rejected.

Table 5. Particular tests for estimated distributions

K-S test for stable distribution Lilliefors test for generalized extreme value distribution
Statistic Sig. Statistic Sig.
AAPL .479 .000 .149 .001
AXP .478 .000 .241 .001
BA .485 .000 .189 .001
CAT .481 .000 .166 .001
CSCO .483 .000 .210 .001
CVX .491 .000 .253 .001
DIS .482 .000 .224 .001
DOW .483 .000 .163 .001
GS .478 .000 .261 .001
HD .483 .000 .195 .001
IBM .489 .000 .167 .001
INTC .484 .000 .155 .001
JNJ .491 .000 .255 .001
JPM .472 .000 .261 .001
KO .491 .000 .262 .001
MCD .491 .000 .174 .001
MMM .489 .000 .174 .001
MRK .480 .000 .210 .001
MSFT .479 .000 .237 .001
NKE .482 .000 .197 .001
PFE .483 .000 .167 .001
PG .486 .000 .219 .001
TRV .483 .000 .309 .001
UNH .481 .000 .311 .001
UTX .484 .000 .212 .001
V .483 .000 .200 .001
VZ .484 .000 .227 .001
WBA .481 .000 .212 .001
WMT .471 .000 .221 .001
XOM .488 .000 .251 .001

Source: Neděla (2020)

Conclusion

The main goal of this paper was to analyse and test time series of daily stock returns during a chosen time period for use in portfolio optimization models and strategies.

It was found that the distances between the mean values and the maximum or minimum values were large, indicating the presence of extreme values. When the density curves and their logarithms were created, fluctuations indicating heavy tails of the data set, caused by the detected extreme values, could be seen. After testing the normal distribution with four statistical tests, it was found that a distribution with heavier tails should be considered. In this paper, the Pearson, Spearman and Kendall correlations were calculated and compared. Based on the autocorrelation results, in all cases the hypothesis that the time series is statistically significantly dependent up to lag 15 was rejected. Using the Ljung-Box autocorrelation test, the null hypothesis of no first-order dependency could not be rejected in 10 time series. According to the Augmented Dickey-Fuller and Phillips-Perron tests the null hypothesis of a unit root was rejected, while by the KPSS test the null hypothesis of level stationarity failed to be rejected.


At the end of the application part, the parameters of the stable, variance gamma and generalized extreme value distribution functions were estimated using several methods. The parameters of the different distribution functions were then tested. The K-S test for the stable distribution and the Lilliefors test for the generalized extreme value distribution identically rejected the null hypotheses. However, if a Q-Q plot is compiled for AXP or HD, the variance gamma distribution may be recommended.

References

[1] BARNDORFF-NIELSEN, Ole E. Normal Inverse Gaussian Distributions and Stochastic Volatility Modelling. Scandinavian Journal of Statistics. 1997, 24(1), 1–13.

[2] FAMA, Eugene. The Behavior of Stock-Market Prices. Journal of Business. 1965, 38(1), 34–105.

[3] FAMA, E. and ROLL, R. Parameter Estimates for Symmetric Stable Distributions. Journal of the American Statistical Association. 1971, 66(334), 331–338.

[4] LÉVY, Paul. Théorie des erreurs. La loi de Gauss et les lois exceptionnelles. Bulletin de la Société Mathématique de France. 1924, 52, 49–85. ISSN 0037-9484.

[5] MARKOSE, Sheri and Amadeo ALENTORN. The generalized extreme value distribution, implied tail index, and option pricing. The Journal of Derivatives. 2011, 18(3), 35–60.

[6] MARKOWITZ, Harry. Portfolio Selection. The Journal of Finance. 1952, 7(1), 77–91.

[7] MCCULLOCH, John. Simple consistent estimators of stable distribution parameters. Communications in Statistics – Simulation and Computation. 1986, 15(4), 1109–1136.

[8] NELDER, John A. and Roger MEAD. A Simplex Method for Function Minimization. The Computer Journal. 1965, 7(4), 308–313.

[9] SHAPIRO, S. S. and M. B. WILK. An Analysis of Variance Test for Normality (Complete Samples). Biometrika. 1965, 52(3/4), 591–611.

[10] TSAY, Ruey S. Analysis of financial time series. 3rd ed. Cambridge, Mass.: Wiley, 2010. ISBN 978-0-470-41435-4.

[11] TSAY, Ruey S. An introduction to analysis of financial data with R. Hoboken, N.J.: Wiley, 2013. ISBN 978-0-470-89081-3.


INSURABLE AND UNINSURABLE RISKS AND THEIR CLASSIFICATION FROM THE

PERSPECTIVE OF A CZECH EXPORTER

Michaela Petrová1

1Department of Law, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

Nowadays, not only Czech manufacturers are trying to export their products abroad. This is probably

due to the trend of trade liberalization. However, the implementation of international business

transactions involves a higher level of risk compared to domestic transactions. This paper deals with the

issue of risks that may arise in international trade and their division. The aim of the article is to define

and identify risks from the perspective of a Czech exporter based on a summary of domestic theoretical

knowledge related to risks in the international trade. Risks are also associated with insurance. Against

individual risks, exporters can take advantage of insurance offers from commercial insurance companies

or use insurance with state support. However, not all risks can be insured, so individual risks must be

viewed from the perspective of insurability and uninsurability on the Czech market.

Keywords

Risks, Insurance, International Trade, Insurable and Uninsurable Risks, Export

JEL Classification

F23, G22

Introduction

Every exporter is always exposed to some risk when exporting his products. Generally, risk is defined

as a result that differs somehow from the expected result due to random events. For example, according

to Smejkal (2010), risk is defined as the volatility of a financial variable around the expected value due

to some parameter changes.9 Each area of business is associated with risks, and business on foreign

markets is associated with specific risks. The concept of risk can be viewed from different perspectives.

From the perspective of an ordinary person, this is the possibility that something unfavourable can

happen. From this perspective, risk is perceived as an illness or personal loss of something valuable such

as property, loved ones, employment, etc. From a business perspective, the risk is an economic loss –

financial loss, bankruptcy, loss of market position, loss of credibility, etc. From an insurance

perspective, risk is seen as an event that may adversely affect the ability to achieve our goal. In some

cases, the term risk is confused with the word uncertainty. However, this substitution is not correct.

There is a significant difference between risk and uncertainty: unlike risk, uncertainty cannot be measured or estimated. Therefore, we can say that risk is uncertainty, but we have to add that it is uncertainty that can be measured.2

This paper deals with the division of risks from the perspective of a Czech exporter, so the article focuses

on risks in the area of international trade. The paper also deals with the use of insurance as one of the possibilities of protection against risks. Insurance transfers the impact of the risk from the policyholder to the insurer, thus supporting the exporter in carrying out his business.

The main aim of the article is to evaluate selected risks in terms of their insurability and uninsurability

on the Czech market. This objective is preceded by the need to define and identify risks from the

perspective of the Czech exporter on the basis of a summary of domestic theoretical knowledge relating

to risks in the area of international trade.

The methods of analysis, synthesis and description are used in this article. In order to write the paper, it

was necessary to search the domestic literature related to risks in the field of international trade. For


assessing insurability of selected risks, it was necessary to study the insurance conditions of insurance

companies.

Classification of the risks in international trade

Risk needs to be viewed more specifically and has to be characterized. Each author divides

risks differently according to various criteria, such as measurability, the effect of risk on the outcome,

causes of origin, areas, etc. There is no consistent nomenclature, but in some areas the individual authors

agree with each other. In the area of international trade, according to Černohlávková (2014), the main

types of risks are market risks, commercial risks, transport risks, territorial risks, currency risks, liability

risks and others.6 Janatka (2011) divides risks into commercial, manufacturing and market risks, risks

from non-fulfillment of obligations by the contracting parties, risks arising from errors in the negotiation

of a commercial operation, transport risks, risks associated with non-payment for delivered goods and

services, risk of liability for damage caused by a product defect.5

Böhm (2009) classifies the risks even more narrowly. He declares that in the area of foreign trade practice it is appropriate to divide the risks into traditional and modern. The traditional risks include business risks, production risks and risks from the erroneous conclusion of business operations. This group of traditional risks is complemented by other risks, such as those connected with the selection of business partners, with tariff and non-tariff barriers, and with patent and trademark protection of goods, and furthermore by risks arising from the nature of the goods, transport risks, and liability and legal risks. Modern risks include payment risks and credit and investment risks.1

Other authors classify the risks according to the areas in which they occur. For example, Cipra (2015)

characterizes individual types of risks from the perspective of banks and insurance companies.

According to him, these are the following: market risks, credit risks, liquidity risks, model risks,

operational risks and insurance risks.2

No matter how the risks are classified by individual authors, there are links and connections between

these types of risks. Some types may occur together or complement each other.5 In addition, there is a need to look at risks in a comprehensive way, because restricting one risk may increase the possibility of the occurrence of another risk.5

For insurance and insurability, it makes sense to divide risks into commercial and territorial, as Böhm

(2009) states. In general, commercial risks are normally insurable, while territorial risks are more difficult to insure or cannot be insured at all. Commercial and territorial risks are among the financial risks of insurance

companies, in terms of credit insurance.1 In terms of credit risks, Petrusheva (2016) divides risks into

commercial and so-called non-commercial, which includes political and natural disaster risks.7

Commercial risks

In general, commercial risks are those resulting from the nature of the goods. Pursuant to the Convention on Contracts for the International Sale of Goods, “the seller must deliver the goods, hand over any documents relating to them and transfer the property in the goods as required by the contract and this Convention”.10 This provision is also enshrined in the Czech Civil Code. The seller has a statutory obligation to deliver the

agreed goods and all documents. In order to minimize commercial risks, the contract needs to define the

characteristics of the goods as accurately as possible. However commercial risks are not only due to the

nature of the goods but can be classified according to various criteria.5

In a broader context, we perceive commercial risks as risks that may arise anywhere from the preparation of production, through production financing, production itself, the closing of sales, delivery of goods, takeover and payment, to potential claims. Specifically, as stated by Janatka (2011), these are risks related

to the production and nature of goods, risks related to the sale and delivery of goods, risks arising from

errors and deficiencies in the negotiation of sales contracts, business partner risks, territorial risks, risks

connecting with transportation, handling and storage, payment and exchange risks, product liability risks

and other specific (unforeseeable) risks.5


In terms of credit insurance, commercial risks are related to the economic and financial situation of a

foreign entity - the buyer. These are risks that can be influenced by the buyer - mainly insolvency and

protracted default.1 Insolvency generally means that the debtor is unable to pay its obligations. On the

other hand, protracted default means that the debtor is in breach of the contract by failing to pay the

claim for no legal reason. It is the unwillingness to pay. Černohlávková (2014) adds to the insolvency

and protracted default other forms of commercial risks, such as the withdrawal of a business partner

from the contract, failure to fulfil the contract or defective performance of the contract by the supplier

and unjustified non-acceptance by the customer.6

Territorial risks

Territorial risks constitute the second group of risks in international trade. In the literature we also find

the terms like non-commercial risks or political risks. Territorial risks are associated with the political,

economic, financial and legal environment in the exporting country – the debtor country. They arise

from political and economic events and measures, unlike commercial risks, which arise from the debtor's economic or financial situation. Janatka (2011) lists the individual causes that lead to territorial risk.

These are various political events in the country of the debtor, such as war, revolution, rebellion, civil

unrest, strikes. Then it is legislative and political measures or administrative decisions, such as

withdrawal of import or export license, embargo, restriction of the movement of goods, etc. It may also

be caused by situations of expropriation – in the form of nationalization or confiscation. In this list of

causes must be also included the natural disasters.5

These are the most serious types of risk because they are unpredictable. They are so-called force majeure events and are therefore difficult to quantify in advance. They arise from the uncertainty of the political and macroeconomic development of individual countries. Černohlávková (2014) claims that these risks have

a negative impact on the results of individual business transactions and on the implementation of future

business plans. In her view, political risks are the most serious, because they can lead to a reduction or

even severance of relations with the country and thus to great damage.6

Risk insurability and insurability criteria

A variety of instruments are used for risk prevention and for mitigating the effects of risks. In the event of a loss, the entity may use its own resources – so-called self-insurance – or special financial institutions. The second option is used more often. Risks are therefore closely connected to the concept of insurance.

Insurance means the transfer of risk to the insurer. The main purpose of the insurance is to provide

compensation for loss incurred as a result of accidental events.6 In addition, it is an optimal possibility

of obtaining funds in the event of damage.4 The insurer in this legal relationship is the insurance

company with which the policyholder concludes the insurance contract. The insurance company is entitled to a reward for taking over the risk, which is provided in the form of insurance premiums.

The essence of insurance is risk management, taking risks from clients and working with risks.11

Probably everyone knows that not all risks can be insured. In order to consider whether the risk can be

insured, it must meet the requirements of insurability. Therefore, there are criteria to determine whether

the risk is insurable. These are the so-called insurability criteria and the individual criteria are the

contingency criterion, the uniqueness criterion, the estimability criterion, the independence criterion, the

criterion of the size and moral principles. The insurability criteria are summarized and explained in the

following table (Table 1). These are the conditions under which insurance companies determine the

characteristics of a given risk and assist them in deciding whether to take the risk into insurance.3


Table 1. Insurability criteria

Criterion Explanation

Contingency criterion A random event; uncertain, uncontrollable, unpredictable

and unexpected risk

Uniqueness criterion Clearly descriptive, demonstrable and unmistakable risk

Estimability criterion Identifiable and measurable probability of realization of

risk, valuability of damage

Independence criterion Independence of individual risks*

Criterion of the size The insurer's ability to bear the risk**

Moral principles Do not help to avoid punishment for damages caused by

acting in circumstances that are not considered moral

Notes: * Dependent risks are cumulative risks, contagion risks, risks with fluctuating underlying probability such as storm or hail

** The size of the risk is determined by the amount of damage that may occur in the realization of the risk

Source: Ducháčková (2009)

If any of the above criteria is not met, it is an uninsurable risk.

Insurability is further investigated by the insurer in three terms:3

– in terms of the accidental nature of insured events – the probability of the insured event

not being too high (certainty that the insured event will occur) and the occurrence of

the event must not be influenced by the policyholder,

– in terms of the size of assurance benefit in case of realization of the risk – the

possibility of causing too much damage,

– in terms of the achievement of insurance protection – insurability options in terms of

surface and temporal distribution of risks (objective risk assessment).

Risks in international trade are different from domestic risks. They are difficult to predict and are harder

to remove. According to Černohlávková (2014), the most frequently used types of insurance in

international trade are transport risk insurance, credit and investment risk insurance including payment

instruments, liability insurance and insurance for fairs and exhibitions.6

Discussion – Insurance of selected risks on the Czech market

The division of risks into commercial and territorial is important from the insurance point of view, namely for deciding whether the risk can be insured with a regular commercial insurance company or whether it is necessary to use insurance with state support. Directives issued by the European Council and the European Parliament are important for the scope and use of state-supported insurance. According to these directives,

the risks are divided according to whether they are marketable. Marketable risks are commercial risks.

On the other hand, territorial risks are not marketable due to insufficient reinsurance capacity of

commercial insurance companies. Marketable risks include:5

– arbitrary failure to recognize a contract by a foreign debtor,

– arbitrary refusal to take over the goods by the debtor,

– insolvency of the foreign debtor or its guarantor,

– long-term default on obligation of a foreign debtor.

The above risks can be covered by commercial credit insurance; for all the others, state-supported insurance can be used. The purpose of state aid is to assume a certain risk and to provide a guarantee, especially

where the guarantees of commercial insurance companies cannot cover the exporter to such an extent


or are unavailable to the exporter.8 Insurance companies themselves decide about insurability. Insurance

companies therefore decide which risks they will insure and at what price.

In this article I present selected risks that a Czech exporter may encounter when exporting his goods or services. The risks are viewed in terms of their insurability on the Czech market. To assess the insurability of the selected risks, the individual insurance conditions of insurance companies offering their products on the Czech market were examined.

Defective performance

This type of risk can be easily insured with commercial insurance companies on the Czech market. A

product defect is a condition where the product does not exhibit the specified, notified or agreed characteristics, or characteristics which can reasonably be expected of it having regard to all the circumstances, in particular the intended purpose for which the product is to be used (also taking into account when the product was launched).

In most cases, insurance is offered under liability insurance. Most often it is the liability insurance for

damage caused by a product defect. The aim of insurance is to minimize the damage that a defective

product can cause to the end consumer. The purpose of the insurance is therefore to cover the damage

to health or property of third parties, which occurred in connection with the use of a product which did

not guarantee the safety characteristics that could reasonably be expected, and which the insured person is obliged to compensate. The idea is not to relieve the policyholder of the liability for the damage caused,

but to limit and assume the negative economic impacts. Liability for damage caused by product defects

is usually agreed separately in the insurance contract.

This is an insurable risk, but there may also be situations where a normally insurable risk becomes uninsurable. These situations are contained in the exclusions. The insurance does not cover, for example, damage caused intentionally, damage caused by an insufficiently pre-tested product, a product that does not reach the agreed functional parameters, etc.

Insolvency and protracted default

As part of commercial insurance, entrepreneurs have the possibility to insure against situations such as

insolvency and protracted default. This is again an insurable risk. In practice, receivables insurance is available, under which it is possible to insure the risk of non-payment of invoices for goods due to, for example, insolvency or protracted default. Insolvency occurs when the insolvency of the customer is

detected by the competent authority under the insolvency law. Protracted default is defined in the

insurance terms as non-payment of a debt by the due date for reasons other than insolvency.

Even within this insurance, there are situations that the insurance does not cover. Insurance companies will not insure claims in the event of a fault caused by the policyholder, invalidity of the

contract, breach of the relevant legal regulations or due to breach of the terms of the contract by the

insured person, etc.

State-supported insurance offered by the Export Guarantee and Insurance Company (EGAP) in the Czech Republic can also be used against this risk. EGAP offers, for example, the possibility of

insolvency insurance under insurance against the inability to fulfil an export contract. The inability to

fulfil the contract here means the partial or total impossibility of performance, even the economic

impossibility, when the insured person cannot be reasonably required from the insured point of view,

even if the obligation is legally and effectively fulfilled. According to EGAP, the inability to fulfil may

be firstly due to the insolvency of the importer - a bankruptcy decision or the rejection of an insolvency

petition for lack of property. Secondly, breach of the export contract by the importer - refusal of

performance or inaction.

Withdrawal from the contract

The above mentioned risk of inability to fulfil the contract may lead to the suspension or even

cancellation of the export contract. The reasons, in addition to insolvency and protracted default, may


be the refusal to take delivery of the goods supplied, or may be due to the political, financial or

macroeconomic situation in the importer's country. The risk of withdrawal is a commercial risk, so it is an insurable risk.

Situations such as the existence of a dispute over the fulfilment of the export contract, or a breach of the norms or customs of international law by the insured person, may make the risk difficult to insure or even uninsurable.

On the Czech market, this type of risk can be insured through the EGAP insurance company, which offers insurance against cancellation or interruption of the contract during production.

Natural disasters

In the Czech Republic there is a possibility of insurance against natural disasters. The best way to reduce the financial consequences of disasters is to use commercial insurance. Insurance against

natural disasters does not meet the criterion of independence. Natural disasters affect a high percentage

of the population, causing high damage to property in similar locations and at the same time. The

insurance company solves this problem by grouping several risks into one insurance contract. Another

criterion that is not met is the contingency criterion. The problem is the low level of predictability of

disaster risk. Insurance companies are not able to obtain sufficient information about the economic

impacts, and this uncertainty prevents insurance companies from correctly assessing the risks they undertake and from estimating the probabilities of disasters of varying severity.

Nevertheless, these risks are at least partially insured on the Czech market. Modern risk estimation

methods, evaluation tools and risk maps have been developed primarily for earthquakes, tropical

cyclones, hurricanes and floods. In the Czech Republic, insurance companies use flood maps, but on the

other hand, for example, for the risk of a windstorm there is no geographical information system.

Catastrophic risks appear in commercial insurance as part of life insurance, which covers the risk of

death in connection with a catastrophic event and then in property insurance, primarily to cover damage

caused by catastrophic events. Among the natural risks, insurance companies include fire, explosion, lightning, windstorm, flood, hail, frost, earthquake, falling trees and poles, land subsidence, landslides or avalanches, heavy snow and glaze ice. From the point of view of insurability, the area to which the insurance will apply is important for the insurance company. In the event of a recurrent natural

disaster in the area, this risk may be uninsurable. In practice, insurance companies do not like to insure

natural disasters, such as earthquakes, tsunamis, volcano eruptions, etc., mainly because of concerns

about serious impacts.

Increasing damage from natural disasters often exceeds the capacity of insurance markets and thus creates further barriers to the insurability of catastrophic risks; therefore, the state is also involved in covering damage caused by natural disasters.

Terrorist attacks

Nowadays terrorism is on the rise. Recent events all over the world heighten uncertainty and are only some examples of terrorist acts and violence caused by terrorism. From the policyholder's point of view, this is an event against which he or she needs protection, and this insurance option is therefore important. For the exporter, it provides the confidence to expand abroad, including into areas where terrorism is a real possibility. In recent years, it has been shown that attacks can occur even in a quiet region.

In insurance, a terrorist attack is defined as a violent act or series of acts committed by any person or group of persons, acting alone or in connection with any organization, for political, religious or ideological reasons in order to influence any government and/or spread public fear.

This type of insurance is quite new in practice, and it can be said that the risk is insurable only with difficulty. It is a threat that is very difficult to predict and quantify. The insurance does not usually cover damage caused by war, invasion, rebellion, uprising, nuclear energy or nuclear radiation, or by deliberate actions of the policyholder or the insured person. The high level of risk and uncertainty is reflected in


very high premiums or even in the unwillingness of the private insurance sector to cover these risks.

Most insurance companies today at least partially cover the consequences of terrorist attacks. However, the payment of insurance benefits is frequently limited if the terrorist attack occurs in a place to which the Ministry of Foreign Affairs has declared that it does not recommend citizens to travel.

A summary table with an overview of risks and insurability rates can be seen below (Table 2). We can

see that common commercial risks are insurable on the Czech market without a problem. Almost all

commercial insurance companies offer insurance against these risks. Insuring against territorial risks is more difficult in practice, but these risks can still be at least partially insured. The only exceptions are war, rebellion, uprising and other war events, against which it is not possible to insure.

Table 2. Overview of selected risks and their rate of insurability

Risk | Type of risk | Insurability
Defective performance | commercial | insurable
Insolvency | commercial | insurable
Protracted default | commercial | insurable
Withdrawal from the contract | commercial | insurable
Natural disasters (in general) | territorial | partially insurable (uninsurable)
Terrorist attacks | territorial | insurable with difficulty
War | territorial | uninsurable

Source: own processing

For some risks it is advantageous to use ART (alternative risk transfer) methods, i.e. to transfer the risk in a different way than through conventional insurance, either because of uninsurability or because of a high premium.

Conclusion

The subject of the paper was to define, identify and allocate risks from the perspective of a Czech

exporter in the field of international trade. The aim of the article was to evaluate selected risks in terms

of insurability and uninsurability on the Czech market. Insurance companies decide on insurability based

on the fulfilment of insurability criteria, namely the criteria of contingency, uniqueness, estimability, independence, size and moral principles.

The research shows that commercial risks are insurable without problems. These are risks that can be

influenced by the buyer and are primarily insolvency and protracted default. On the other hand,

insurance companies are generally not able to insure deliberate actions, unlawful actions, too high

expected damage and too high and unquantifiable risk. Commercial risks are insurable on the Czech

market by regular commercial insurance companies. For some risks it is advisable to use state-supported

insurance; in some cases it may even be the only insurance option. We can also say that today there are few risks against which one cannot be at least partially insured. However, everything is subsequently reflected in the amount of the premium and the amount of the participation. An exception to insurability is represented by various war events, such as war, rebellion, uprising, etc., against which it is not possible to insure.

Insurance companies also report absolute exclusions that cannot be included in any type of insurance.

These are mainly damages caused by deliberate actions of the policyholder or the insured person or by

another person on their orders. Furthermore, the insurance company does not provide insurance benefits in cases of deliberate negligence, breach of the relevant legal regulations or breach of the terms of the contract, etc.


However, ART methods, such as captive insurance, can be used to cover otherwise uninsurable risks. The use of ART methods instead of conventional insurance may be a subject of further investigation. Today, insurers are looking for new approaches to risk insurance, so that the risk remains insurable while not endangering the existence of the insurance company itself. Most often, insurance companies increase premiums and change the amount of participation and the upper limit of insurance benefits.

In extreme cases, the insurance company has to withdraw its insurance product and the risk becomes

uninsurable. An example is flood insurance in risk areas.

Acknowledgement

This paper was financially supported within the VŠB–Technical University SGS grant project No.

SP2020/77 (Risk Assessment in International Trade in Selected OECD Countries and Risk Minimization

in the Context of the Czech Exporter).

References

[1] BÖHM, Arnošt. Pojištění pohledávek v mezinárodním obchodě. Praha: Professional Publishing,

2009. ISBN 978-80-7431-004-1.

[2] CIPRA, Tomáš. Riziko ve financích a pojišťovnictví: Basel III a Solvency II. Vydání I. Praha:

Ekopress, 2015. ISBN 978-80-87865-24-8.

[3] DUCHÁČKOVÁ, Eva. Principy pojištění a pojišťovnictví. 3. aktualiz. vyd. Praha: Ekopress,

c2009. ISBN 978-80-86929-51-4.

[4] JANATA, Jiří. Pojištění a management majetkových podnikatelských rizik. Praha: Professional

Publishing, 2004. ISBN 80-86419-64-9.

[5] JANATKA, František. Rizika v komerční praxi. Praha: Wolters Kluwer Česká republika, 2011.

ISBN 978-80-7357-632-5.

[6] MACHKOVÁ, Hana, Eva ČERNOHLÁVKOVÁ a Alexej SATO. Mezinárodní obchodní

operace. 6. aktualiz. a dopl. vyd. Praha: Grada Publishing, 2014. ISBN 978-80-247-4874-0.

[7] Petrusheva, N. (2016). Management of Financial Risks in International Trade Financing. Časopis

za ekonomiju i tržišne komunikacije, Vol. 6, Issue 1. pp. 81-92

[8] ROZEHNALOVÁ, N. (2010). Právo mezinárodního obchodu. 3. Vydání. Praha: Wolters Kluwer,

ČR, a. s.

[9] SMEJKAL, V., RAIS, K. Řízení rizik ve firmách a jiných organizacích. 3. vydání. Praha: Grada Publishing, 2010. ISBN 978-80-247-3051-6

[10] Úmluva OSN o smlouvách o mezinárodní koupi zboží

[11] VÁVROVÁ, Eva. Finanční řízení komerčních pojišťoven. Praha: Grada Publishing, 2014. Expert.

ISBN 978-80-247-4662-3


GOVERNANCE STRUCTURES OF MUNICIPAL ENTERPRISES – EMPIRICAL STUDY

OF EFFICIENCY OF HOSPITALS

Sabrina Lee1

1Department of Economics, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

This paper deals with the topic of efficiency measurement of municipal hospitals in Baden-Württemberg, Germany. The municipal economy, and in particular the assurance of public health services, has a very high economic and socio-political importance. The aim is to make a scientifically

solid and empirically proven contribution to the efficiency measurement of hospitals and the influence

of their governance structures. The core question is which essential input and output factors exist in

municipal hospitals for measuring efficiency and how these can be supported with regard to responsible

public governance. To answer this question, a modelling was performed using Data Envelopment

Analysis (DEA) and Free Disposable Hull (FDH). It was found that smaller hospitals with facultative

supervisory boards have considerable potential. Larger hospitals could be filtered out as inefficient.

Thus, this work provides a contribution to the science and practice of municipal hospitals.

Keywords

Health services, hospitals, hospitals in public ownership, efficiency, Data Envelopment Analysis

model (DEA), Free Disposable Hull (FDH).


JEL Classification

C10, C14, I10, I18

Introduction

The municipal economy is an important part of the overall economy and directly affects all citizens. The

performance of public tasks by different public enterprises is also clearly visible to citizens. Whether in

local public transport, health care, real estate and housing, water supply, energy (electricity and gas) or

swimming pools, museums and theatres, the practical relevance of public enterprises is significant

(Westermann and Cronauge, 2006; Fabry and Appel, 2011). As a representative of the performance of

public tasks, the German health care system in particular has for many years been the subject of

increasing political and economic discussion. Almost in alternation, the efficiency of the health care

system, the quality of the services offered and the level of expenditure and costs are the subject of dispute

(Kuchinke et al., 2004; Helmig, 2005; Augurzky and Schmitz, 2010; Kalb, 2010). This political and

economic discussion on the efficiency and quality of the public health care system has been taking place

for many years, not only in Germany but also at the international level (see, for example, Vrabková and Vaňková (2015); Kalb (2010)).

In the discussion on cost-saving potentials among the various groups of service providers, such as physicians, pharmacies and hospitals and their respective associations, concentrating on the hospital sector appears to be the most appropriate, especially since the hospital sector is regularly mentioned first when it comes to cost-saving potentials and the question of the cost explosion in the health care system

(Helmig, 2005; Taube, 1988). At first glance, this seems understandable, since in absolute figures,

expenditure on inpatient hospital services makes up the largest share of costs in the German health care

system. As a result, hospitals are subject to particularly strong public observation and are under great

pressure to justify their corporate policy and financial activities. Publicly owned hospitals are especially

in the spotlight because they are considered comparatively inefficient and are financed by public funds

(Helmig, 2005). There are great differences between hospitals in this respect. While some have already


taken steps to increase efficiency years ago and are now in a good economic position, there are also

hospitals that could not survive without the financial support of their owners (Augurzky and Schmitz,

2010). In this paper, efficiency is understood to mean technical efficiency. This is defined as the ability

to produce a certain amount of output with the least possible amount of input (Augurzky and Schmitz,

2010; Kuchinke et al., 2004). In contrast, inefficiency exists if the production costs are higher than the

costs that can be achieved with the given state of the technology (Kuchinke et al., 2004).

The present study therefore continues at this point. Using econometric methods, it investigates and

measures the efficiency and quality of German hospitals owned by local authorities. For this purpose,

the most recent data from official hospital statistics are used. A special focus is on the differences

between the different management structures in various legal forms of municipal hospitals.

The objective of the paper is to evaluate the technical efficiency of 89 German public hospitals in the

year 2018 based on selected inputs and outputs as well as the input-oriented DEA and FDH.

Relevance of previous research

Also in the literature on efficiency in the public sector, the health care sector has received most attention

in recent decades. There are a large number of studies that examine the technical or cost efficiency and

the respective influencing factors of health care facilities (such as hospitals or even nursing homes) in

different countries. However, the vast majority of studies refer to the United States of America (Kalb, 2010; Helmig, 2005).

Early studies go back to Banker et al. (1986), Wilson and Jadlow (1982), Grosskopf and Valdmanis

(1987), Nyman and Bricker (1989) and Valdmanis (1992). These studies use linear programming

techniques to evaluate different aspects of the technical efficiency of hospitals and nursing homes in the

United States of America. The last four studies also examine the influence of the form of ownership on

technical efficiency (Kalb, 2010).

Hospitals are an essential part of the health care system. In the Federal Republic of Germany, the

assurance of health care and thus also of the hospital system is regarded as a public task (Helmig, 2005).

The declared aim of the legislature is to have a variety of providers and operators. In particular - in

addition to the public institutions - the economic sustainability of non-profit and private hospitals must

be guaranteed (Deutscher Bundestag, 2019a).

Only just one in three hospitals is still in public ownership. Between 1991 and 2017, the share of public

hospitals fell from 46.0 % to 28.8 %. In the same period, the share of privately owned hospitals increased

steadily from 14.8 % to 37.1 %. This shows that privatization in the hospital sector is also continuing to

make strong progress. The share of non-profit hospitals, on the other hand, decreased only slightly from

39.1 % to 34.1 %. Figure 1 (appendix) shows the hospitals by ownership and the number of beds by

ownership in 2017. Around three fifths (59.8 %) of public hospitals are organized under private law (e.g. limited liability companies, in German "GmbH"). The share of private legal forms has thus roughly doubled within a very short time compared with 2002, when it was 28.3 %. By way of comparison, the share of public hospitals that are operated as legally dependent institutions, such as own operations ("Eigenbetrieb") and controlled operations ("Regiebetrieb"), was 15 % in 2017. In 2002, their share was 56.9 % (Statistisches Bundesamt, 2018).

Therefore, the ownership and legal form of hospitals is of enormous importance. Against this

background, special challenges arise with regard to the governance, management and supervision of

public municipal hospitals. On the one hand, they are situated in the area of conflict between market

economy and economic efficiency, but on the other hand, as institutions of the public sector, they have

to perform tasks of public services of general interest at the municipal level and are therefore subject to

a dual system of objectives (Ruter et al., 2005; Schaefer et al., 2008; Hilb et al., 2013). In order to make

clear the connection between the efficiency measurement of public hospitals and the differences between

the respective legal forms and the management structures derived from them, a corresponding study is


necessary. From the studies on efficiency measurement mentioned so far, it can be deduced that in some cases they take into account only the ownership as an exogenous influencing factor. Only Helmig (2005),

Frohloff (2007), Herr (2008), Herr et al. (2009, 2011), Tiemann and Schreyögg (2011) consider the

differences between private, non-profit and public sponsorship. Public sponsorship with its different

legal forms (legally independent, legally dependent and under private law) is not examined in any of

these studies.

Theoretical approach to analyze the efficiency of municipal hospitals and their

governance structures

The present paper refers to the theoretical approach of the "New Institutional Economics". This looks at

the enterprise as an institution for the combination of production factors and analyses the process of the

production of goods from a legal and economic point of view. The focus is not on production factors

from a technical-economic perspective and their ownership, but on property rights over a factor or a

specific good (Wöhe et al., 2016; Roßberg, 2007). While the external factors of enterprises cannot be

influenced for the most part, internal factors of enterprises, such as structures and processes, can be

designed in many ways (Wöhe et al., 2016).

The essential three sub-areas of the New Institutional Economics are in particular

− The "Property Rights Theory", which forms the theoretical foundation of the new institutional

economics and deals with the rights of disposition of resources and their transactions.

− The "principal-agent theory", which deals with the problems of optimal contract design within

a contractual relationship between principal and agent on the basis of incomplete information

or information asymmetries.

− The "transaction cost theory", which deals with the costs of transferring rights of disposition and with their minimization (Roßberg, 2007).

Applied to the field of municipal hospitals and to Public Corporate Governance, this means that aspects

of corporate governance can be explained and solved using the principal-agent approach.

Research contribution and aim of this paper

The main objective of this work is to make a scientifically solid, empirically proven research

contribution to the evaluation of the efficiency of municipal hospitals in Germany and to propose

improvements where necessary. The contribution is based on already existing economic findings and

own empirical studies. The aim of this work is to construct and evaluate the benefits and limitations of

the most widely accepted efficiency model DEA under the conditions existing in Germany. In particular,

the paper focuses on the different legal and governance structures of public hospitals to ensure

responsible Public Corporate Governance. This research aspect represents a novelty, which has not yet

been further considered in any existing study on efficiency measurement of public hospitals.

Research design: methods used and solution process

The methods of multiple-criteria decision-making (MCDM) are among the most frequently used

methods in health care economics today. The models are based on applied efficiency formation and

evaluation. Efficiency is generally achieved when the expenditure/costs of ensuring certain processes

(inputs) do not exceed the profits achieved at the end of the process (outputs) (Vrabková and Vaňková,

2015). Fried et al. (2008) state that efficiency is a component of productivity. They refer to the

comparison between the actual and the optimal amounts of inputs and outputs (Vrabková and Vaňková,

2015). As already mentioned, technical efficiency is defined as the production of goods of a predefined

quality at the lowest possible cost (Kuchinke et al., 2004; Augurzky and Schmitz, 2010). In order to

measure the technical efficiency of hospitals, which are called decision-making units (DMUs, such as

schools, public administrations, waste management enterprises, etc.), it is first important to define a

reasonable set of input and output combinations. The inputs and outputs are then used to construct a


best-practice frontier - a frontier which then contains the most efficient decision-making units.

Consequently, DMUs that are below this line are not efficient (Kalb, 2010).

Data Envelopment Analysis Model (DEA)

Among the non-parametric approaches, one of the best-known methods proposed for the construction

of a best-practice frontier is Data Envelopment Analysis (hereinafter DEA) (Kalb, 2010; Helmig, 2005).

With regard to the majority of studies on hospital care and other health services, the DEA method is also

used predominantly (Vrabková and Vaňková, 2015; Helmig, 2005). Originally this approach goes back

to the research of Farrell (1957) and Charnes et al. (1978) (Kalb, 2010). Farrel's model for measuring

the efficiency of units with one input and one output therefore represents the original starting point. The

model was subsequently extended in 1978 by Charnes, Cooper and Rhodes to CCR (both input-oriented

and output-oriented models) and by Banker, Charnes and Cooper to BCC (modified CCR extended by

variable returns to scale) (Vrabková and Vaňková, 2015). Due to its limited scope, this paper focuses

on the CCR model.

The DEA method is used to assess technical efficiency and aims to measure the relationship between

the inputs and outputs of homogeneous units (Vrabková and Vaňková, 2015). In concrete terms, the

efficiency of the decision unit is measured by how efficiently it succeeds in transforming input factors

into output factors in a production process (Helmig, 2005). Since there can be many different types of

inputs and outputs, DEA models belong to the methods of multiple-criteria decision-making (MCDM)

(Vrabková and Vaňková, 2015).

In the case of multiple inputs consumed in the production of multiple outputs, the relative measurement

of the degree of efficiency Uq is used. The latter can be expressed as follows:

Efficiency ratio (U_q) = weighted sum of outputs / weighted sum of inputs = (Σ_i u_i y_iq) / (Σ_j v_j x_jq)

Formula 1: Efficiency ratio
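In practice, the ratio in Formula 1 is not maximised directly but converted into an equivalent linear programme. The following is a minimal sketch of the input-oriented CCR envelopment problem (min θ subject to Yλ ≥ y_q, Xλ ≤ θ·x_q, λ ≥ 0) in Python with numpy and scipy; neither the tooling nor the toy numbers come from the paper, and real hospital data (beds, physicians, nursing staff as inputs, cases per year as output) would be substituted for the illustrative matrices.

import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, q):
    """Return the CRS (CCR) input-oriented efficiency theta of DMU q."""
    m, n = X.shape            # m inputs, n DMUs (columns)
    s = Y.shape[0]            # s outputs
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                # minimise theta
    A_ub, b_ub = [], []
    # input constraints: sum_j lambda_j * x_ij - theta * x_iq <= 0
    for i in range(m):
        A_ub.append(np.concatenate(([-X[i, q]], X[i, :])))
        b_ub.append(0.0)
    # output constraints: sum_j lambda_j * y_rj >= y_rq  <=>  -sum <= -y_rq
    for r in range(s):
        A_ub.append(np.concatenate(([0.0], -Y[r, :])))
        b_ub.append(-Y[r, q])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# toy data: 2 inputs, 1 output, 4 DMUs (columns)
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 2.0, 4.0, 6.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
print([round(ccr_input_efficiency(X, Y, q), 3) for q in range(4)])

Adding the convexity constraint Σ_j λ_j = 1 to the same programme would yield the BCC variant with variable returns to scale mentioned below.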

There are different variants of DEA, which concern on the one hand the behavioural objective of the decision units and on the other hand the returns to scale. If the behavioural goals of the decision units

are considered first, technical efficiency can be identified either as a proportional reduction in input

consumption for a given output or, conversely, as a proportional increase in output production for a

given set of inputs. In the following, efficiency indices that are calculated according to the method of

reducing input consumption are referred to as input-oriented technical efficiency measures, while those calculated by increasing the output quantity are called output-oriented efficiency measures. In contrast,

returns to scale deal with the question of how output changes when all inputs increase proportionally

(rate of increase, in production theory called return to scale). In general, DEA can be based on one of

the following three assumptions:

1. constant returns to scale (CRS),

2. variable returns to scale (VRS); or

3. non-increasing returns to scale (NIRS).

The differences between the three DEA frontiers and the differences between input and output

orientation are shown in Figure 3 (appendix) for the special case of one input and one output (Kalb,

2010).

Figure 3 (appendix) illustrates that at constant returns to scale, only decision making unit B is considered

efficient. However, assuming non-increasing returns to scale, the best-practice frontier runs through

points B, C and D. Finally, the assumption of variable returns to scale also includes decision-making

unit A as an efficient point. Only decision-making unit P is regarded as inefficient in all three cases,

since it is always below the frontier (Kalb, 2010).

As for inputs, two basic categories are considered: labour and capital. Labour input is usually measured in terms of the number of employees (nurses, physicians, specialists, administrative staff and support staff). Capital inputs usually include the number of beds, the number of operating rooms, bed occupancy in days and the average length of stay (Vrabková and Vaňková, 2015).


Alternative models for evaluating efficiency

The Free Disposable Hull Model and the Malmquist Productivity Index are other widely accepted

models of multiple-criteria decision-making (MCDM) used in health care to evaluate efficiency and

productivity. Another special category is the Health Technology Assessment Model (often also HTA),

which evaluates the efficiency of health care. In the case of health care institutions, this model focuses

on the efficiency assessment and the evaluation of the technical and technological coverage of the

examined institutions (Vrabková and Vaňková, 2015).

Due to the limited scope of this work, the Free Disposable Hull model is described in more detail below

for comparison with the DEA model outlined above.

Free Disposable Hull Model

The Free Disposable Hull Model (FDH) is one of the so-called dynamic models of the production

function. This FDH model was formulated by Deprins et al. (1984). The basic characteristic of the FDH

model is the non-convex characteristic of the production possibility sets. In contrast to the DEA models,

the production unit can only be evaluated in relation to other existing units and not in their convex

combinations. The advantage of the FDH is the fact that the economies of scale are not affected by any

assumptions. The FDH models analyse both input and output-oriented approaches. The input and output

matrices 𝑋 and 𝑌 represent the structural coefficients of an application. The variables of the model are

𝜆, 𝑠+, 𝑠− vectors, the 𝜃 variable (in the case of the input-oriented model) and the Φ variable (in the case of the

output-oriented model). To determine the efficiency of all units, it is necessary to solve for each unit

independently, i.e. n times. The value of the target function measures the distance of the unit from the

frontier of production possibility. Depending on the type of model orientation (input- or output-

oriented), it indicates the amount by which outputs must be increased or inputs reduced in order for the

production unit to be judged as efficient (Vrabková and Vaňková, 2015).
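As a complement to the verbal description, a minimal sketch of the input-oriented FDH score follows (Python with numpy; the notation and toy numbers are assumptions for illustration, not the authors' implementation). Each DMU q is compared only with observed units that produce at least as much of every output; among those, the score is the smallest value of max_i(x_ij / x_iq), i.e. the largest proportional input contraction for which q is still dominated by an observed unit.

import numpy as np

def fdh_input_efficiency(X, Y, q):
    """X: (m x n) inputs, Y: (s x n) outputs, q: index of the evaluated DMU."""
    n = X.shape[1]
    scores = []
    for j in range(n):
        if np.all(Y[:, j] >= Y[:, q]):          # j produces at least as much of every output
            scores.append(np.max(X[:, j] / X[:, q]))
    return min(scores)                          # q itself is in the set, so the score is <= 1

# toy data: 2 inputs, 1 output, 3 DMUs (columns)
X = np.array([[2.0, 4.0, 3.0],
              [3.0, 2.0, 5.0]])
Y = np.array([[1.0, 1.0, 1.0]])
print([round(fdh_input_efficiency(X, Y, q), 3) for q in range(3)])

Because only observed units, and not their convex combinations, enter the comparison, the resulting frontier has the staircase shape referred to below.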

Figure 4 (appendix) graphically shows the difference between DEA CRS, DEA VRS and FDH (again

for the input and output case). While the best practice frontier in the case of DEA CRS and DEA VRS

is represented by a straight line or convex curve, the FDH model generates a best practice frontier in the

form of a staircase (Kalb, 2010).

Against this background, the choice of the DEA method in this paper is justified in particular by

the fact that the DEA method is clearly the most frequently used in studies on the efficiency of health

care facilities and hospitals (Vrabková and Vaňková, 2015; Kalb, 2010; Helmig, 2005; Vera, 2010).

In order to address the research problem of measuring the technical efficiency of public hospitals that is

being considered here, it is - as already mentioned - first important to define an appropriate set of input

and output combinations. In the further procedure, these are then used to construct a best-practice

frontier.

The most common input parameters used in health care are those that can be reduced rationally and

sensibly for reasons of efficiency (with regard to maintaining the availability and quality of health care).

For output parameters, those are preferred for which rational and sensible growth is desired (Vrabková

and Vaňková, 2015).

Typically, the input and output parameters listed in Table 2 (appendix) are used in the literature for

measuring the efficiency of health care institutions.

Research application and results

The public hospitals examined in this study (DMU n = 89) have specialized clinics in basic medical

fields such as anesthesia, surgery, gynecology and obstetrics, ear, nose and throat medicine and internal

medicine. The selected data set consists of all public hospitals of the state of Baden-Württemberg in

Germany. According to official statistics for 2018, there are 267 hospitals in Baden-Württemberg, of which 89 are in public ownership. The proportion of roughly every third hospital in Baden-Württemberg being publicly owned is thereby representative of the nationwide distribution (see section 1.2).


In order to ensure an unbiased measurement of efficiency, independent psychiatric centers or clinics and university hospitals are excluded from the sample. These differ too much in their characteristics

from general hospitals (e.g. the number of beds of 0 in many psychiatric day hospitals) and could

therefore influence the result by inhomogeneity of the units.

Three classic input parameters (number of beds, number of physicians and number of nursing staff) and

one classic output parameter (case numbers per year) were defined for the analysis. The models take

into account the approach of input orientation as well as constant and variable returns to scale (CRS and

VRS).

The data material was taken from official statistics (Statistisches Bundesamt, 2018), the German

Hospital Register (Deutsches Krankenhaus Verzeichnis) and the respective annual reports or quality

reports of the hospitals, which the hospitals are legally obliged to provide according to Section 137 I of

the Social Security Code V (Deutscher Bundestag, 2019b).

Distinction by size class

The arithmetic mean of the number of beds in the public hospitals examined is 323.6. Figure 5 (appendix) gives an overview of the distribution of DMUs into size classes by number of beds. Most hospitals provide only 101 to 250 beds. It can be observed that the majority of hospitals belong to the small and medium size classes.

DEA model M1

The model M1 represents the input-oriented efficiency with constant returns to scale (CRS). The optimal

result of the efficiency measure calculated with the DEA model is equal to one (𝑧 = 1). The results of

the efficiency modelling are expressed in percent, i.e. an efficient decision unit has a value of 100%,

while the value of inefficient decision units is correspondingly less than 100%.

Figure 6 (appendix) shows the percentage distribution of DMUs based on their efficiency measures. Of

the total number of 89 DMUs, only 6 DMUs use their input variables efficiently, i.e. better than the remaining 83 DMUs. Most hospitals (n = 83) use their input variables inefficiently (𝑧 < 1).

It can be seen that the percentage input-oriented efficiency of most public hospitals ranges between 70 % and 79 %. The efficient hospitals are: H15 Hegau-Bodensee-Klinikum Stühlingen, H49 Krankenhaus

Herrenberg, H18 Hohenloher Krankenhaus Öhringen, H54 Krankenhaus Stockach, H75 Rems-Murr-

Klinikum Winnenden, H44 Krankenhaus 14 Nothelfer GmbH.

The following 3 DMUs are the most inefficient: H38 Klinikum Schloß Winnenden, H43 Klinikum am

Weissenhof, H3 Alb-Donau Klinikum Standort Langenau.

The aggregated results of the M1 model are shown in Table 3 (appendix). More detailed results are given

in the appendix.

DEA model M2

The M2 model represents input-oriented efficiency with variable returns to scale (VRS) in all 89 DMUs.

The optimal result of the efficiency measure calculated with the DEA model is also equal to one (𝑧 = 1). The results of the efficiency modelling are expressed in percent, i.e. an efficient decision unit has a

value of 100%, while the value of the inefficient decision units is correspondingly less than 100%.

Figure 7 (appendix) shows the percentage distribution of DMUs based on their efficiency measures. Of

the total number of 89 DMUs, 12 DMUs use their input variables efficiently, i.e. better than the remaining

77 DMUs. Most hospitals (n = 77) use their input variables inefficiently (𝑧 < 1).

It can be seen that the percentage input-oriented efficiency of most public hospitals ranges between 70 % and 79 %. The efficient hospitals are: H44 Krankenhaus 14 Nothelfer GmbH, H15 Hegau-Bodensee-

Klinikum Stühlingen, H34 Klinikum Mittelbaden Baden-Baden Ebersteinburg, H72 Ostalb-Klinikum

Aalen, H18 Hohenloher Krankenhaus Öhringen, H54 Krankenhaus Stockach, H75 Rems-Murr-

Klinikum Winnenden, H48 Krankenhaus Hardheim, H39 Klinikum Stuttgart – Katharinenhospital und


Olgahospital / Frauenklinik, H41 Klinikum am Gesundbrunnen, H83 Städtisches Klinikum Karlsruhe,

H49 Krankenhaus Herrenberg.

The following 2 DMUs are the most inefficient: H38 Klinikum Schloß Winnenden and H43 Klinikum am Steinenberg.

The aggregated results of the M2 model are shown in Table 4 (appendix). More detailed results are

given in the appendix.

FDH Model

The FDH is one of the independent models of a production function. In contrast to the DEA models, a production unit can only be evaluated in relation to other existing units and not in relation to their convex combinations. The advantage of the FDH is that it is not restricted by any prerequisites regarding the character of the returns to scale. The FDH model also determines for each production unit a suitable production unit for benchmarking. The FDH model tests whether a DMU is non-dominated, i.e. Pareto

efficient. Units suitable for benchmarking are units that are efficient in the given variant of the model

(Vrabková and Vaňková, 2015).

Figure 8 (appendix) shows the input-oriented efficiency of the 89 evaluated DMUs. The optimum level of the efficiency measure equals one (Φ = 1). The results are given in percent. An efficient unit thus has a value of 100 %.

According to the FDH calculation in model IM1, 53 DMUs are efficient, which means they cannot improve one production factor without worsening another and have reached the limits of their production possibilities. The 53 efficient DMUs shown in Table 5 (appendix) are therefore suitable for benchmarking.

All efficient units serve as a benchmark primarily for themselves. Table 6 (appendix) shows the benchmarks of the efficient DMUs. The units H22 and H52 serve as benchmarks most frequently in the input-oriented model IM1.

Discussion on results

It is noticeable that the hospitals that turn out to be efficient fall either into the category of the smallest and small size classes, with an average number of beds of only 93.85 (7 DMUs), or into the large to largest size classes (1 DMU with 399 beds and 4 DMUs with an average number of beds of 1110.5).

Although the majority of DMUs are in the small to mid-size category based on the number of beds,

many of these DMUs are not efficient as a result.

Of the 12 efficient hospitals, the following 9 are organised in the private law form of a limited liability company (GmbH) with a supervisory board and, in most cases, additionally in a cooperative of several hospitals: H15 Hegau-Bodensee-Klinikum Stühlingen, H49 Krankenhaus Herrenberg, H18 Hohenloher Krankenhaus Öhringen, H75 Rems-Murr-Klinikum Winnenden, H44 Krankenhaus 14 Nothelfer GmbH, H34 Klinikum Mittelbaden Baden-Baden Ebersteinburg, H41 Klinikum am Gesundbrunnen. Only H83 Karlsruhe Municipal Hospital and H54 Stockach Hospital are not part of any network. 8 of the 9 efficient hospitals under private law have a mandatory supervisory board; H54 has an optional supervisory board.

3 of the 12 efficient hospitals are nevertheless organised under public law. H39 and H72 are organised

as large units in the legal form of a "joint municipal institution under public law" (gkAöR) and H48 as

a very small unit in the form of a public law "Zweckverband". All 3 have a supervisory board as a

controlling organ.

Against this background, it could be deduced that hospitals in associations are much more efficient than

hospitals acting alone. Furthermore, it is important to note that due to the considerable spread of the

private law legal form of the GmbH, it cannot be deduced that these are more efficient than public law

legal forms. Hospitals are also efficient in a public law legal form in combination as an association, in


this case the legal form of the "joint municipal institution" (gkAöR) or the "Zweckverband". Another

efficiency criterion is the existence of a supervisory board.

It is also interesting to note that all three of the most inefficient hospitals - H38 Klinikum Schloß Winnenden with 572 beds, H43 Klinikum am Steinenberg with 578 beds and H3 Alb-Donau Klinikum Standort Langenau with 375 beds - lie outside the range of the efficient poles of small and very large hospitals.

The One-Third Participation Act (Drittelbeteiligungsgesetz) in Germany requires a mandatory

supervisory board for companies in the private law form of a GmbH with more than 500 employees.

Below this number, the supervisory board is considered optional. The management structures of the

efficient hospitals evaluated all have a supervisory board.

Thus, it can be deduced from these results that a clinic group contributes to efficiency, especially for

very small and very large hospitals. Governance structures have a particularly positive effect on

efficiency when the management is additionally controlled by an optional or mandatory supervisory

board.

Conclusion

In this paper, the technical efficiency of hospital services was evaluated using the example of 89 selected

regional hospitals under public ownership in 2018 in Germany. Technical efficiency was evaluated with input and output indicators using the DEA model in its input-oriented variant with constant and variable returns to scale. It should be added that the efficiency of the units examined in the DEA model could also be evaluated on the basis of other rational economic indicators, such as one day of stay or one hospitalized patient.

One input-oriented model was constructed with the FDH approach. The FDH model is not only suitable for the evaluation of input and output variables, like the DEA model, but additionally for determining basic benchmarking data.

However, the application transfer of these derivations also has limitations. In addition, the exact

structures, rights and obligations, communication channels, and instruction rights of the governing

actors of the different hospitals in their respective legal forms must be researched. Only in a subsequent step can concrete recommendations for the design of a governance structure that affects the efficiency of hospitals be formulated.

Research on the effects of the legal forms with their respective governance structures on the efficiency

of hospitals, which follows on from these results, must also be verified by means of qualitative research,

such as expert interviews. The extent to which hospital networks can increase efficiency must also be

questioned. The results of this study show that unconnected, middle-sized hospitals are inefficient.

References

[1] Augurzky, B. and Schmitz, H. (2010), Effizienz von Krankenhäusern in Deutschland im

Zeitvergleich: Endbericht - November 2010, RWI Projektberichte, Rheinisch-Westfälisches

Institut für Wirtschaftsforschung (RWI), Essen.

[2] Banker, R.D., Conrad, R.F. and Strauss, R. (1986), “A Comparative Application of Data

Envelopment Analysis and Translog Methods. An Illustrative Study of Hospital Production”,

Management Science, Vol. 32 No. 1, pp. 30–44.

[3] Bremeier, W., Brinckmann, H. and Killian, W. (2006), Public Governance kommunaler

Unternehmen: Vorschläge zur politischen Steuerung ausgegliederter Aufgaben auf der

Grundlage einer empirischen Erhebung, Edition der Hans-Böckler-Stiftung, Vol. 173, Hans-

Böckler-Stiftung, Düsseldorf.

[4] Budäus, D. (Ed.) (2008), Corporate Governance in der öffentlichen Wirtschaft: Referate eines

Symposiums der Gesellschaft für öffentliche Wirtschaft, … am 23./24. November 2006 in Berlin,

Beiträge zur öffentlichen Wirtschaft, Vol. 27, Ges. für öffentliche Wirtschaft, Berlin.


[5] Charnes, A., Cooper, W.W. and Rhodes, E. (1978), “Measuring the efficiency of decision making

units”, European Journal of Operational Research, Vol. 2 No. 6, pp. 429–444.

[6] Deprins, D., Simar, L. and Tulkens, H. (1984), Measuring labor-efficiency in post offices.

[7] Deutscher Bundestag (2019a), Gesetz zur wirtschaftlichen Sicherung der Krankenhäuser und zur Regelung der Krankenhauspflegesätze (Krankenhausfinanzierungsgesetz - KHG): Krankenhausfinanzierungsgesetz.

[8] Deutscher Bundestag (2019b), Sozialgesetzbuch (SGB) Fünftes Buch (V): SGB V.

[9] Deutsches Krankenhaus Verzeichnis, “Deutsches Krankenhaus Verzeichnis”.

[10] Dittrich, G., Karmann, A., Steinmann, L. and Zweifel, P. (2005), “Effiziente Krankenhäuser?

Ein Vergleich sächsischer und schweizerischer Krankenhäuser”, ifo Dresden berichtet, Vol. 12

No. 2, pp. 9–18.

[11] Fabry, B. and Appel, M. (Eds.) (2011), Unternehmen der öffentlichen Hand: Handbuch, Nomos-

Praxis, 2. Aufl., Nomos-Verl.-Ges, Baden-Baden.

[12] Farrell, M.J. (1957), “The Measurement of Productive Efficiency”, Journal of the Royal

Statistical Society. Series A (General), Vol. 120 No. 3, p. 253.

[13] Fried, H.O., Lovell, C.A.K. and Schmidt, S.S. (Eds.) (2008), The measurement of productive

efficiency and productivity growth, Oxford Univ. Press, Oxford.

[14] Frohloff, A. (2007), Cost and Technical Efficiency of German Hospitals – A Stochastic Frontier

Analysis.

[15] Grosskopf, S. and Valdmanis, V. (1987), “Measuring hospital performance. A non-parametric

approach”, Journal of Health Economics, Vol. 6 No. 2, pp. 89–107.

[16] Hammerschmid, G. (Ed.) (2010), Zukunft der öffentlichen Wirtschaft: Referate einer vom

Wissenschaftlichen Beirat des Bundesverbandes Öffentliche Dienstleistungen am 25./26.

Februar 2009 in Eppstein (Taunus) veranstalteten Tagung, Beiträge zur öffentlichen Wirtschaft,

Vol. 31, Berlin.

[17] Helmig, B. (2005), Ökonomischer Erfolg in öffentlichen Krankenhäusern, Teilw. zugl.: Freiburg,

Univ., Habil.-Schr., 2001, Schriften zur öffentlichen Verwaltung und öffentlichen Wirtschaft,

Vol. 185, 1. Aufl., BWV Berliner Wiss.-Verl., Berlin.

[18] Herr, A. (2008), “Cost and technical efficiency of German hospitals. Does ownership matter?”,

Health economics, Vol. 17 No. 9, pp. 1057–1071.

[19] Herr, A., Schmitz, H. and Augurzky, B. (2009), Does higher cost inefficiency imply higher profit

inefficiency?: Evidence on inefficiency and ownership of German hospitals, Ruhr economic

papers, Vol. 132, RUB Dep. of Economics, Bochum.

[20] Herr, A., Schmitz, H. and Augurzky, B. (2011), “Profit efficiency and ownership of German

hospitals”, Health economics, Vol. 20 No. 6, pp. 660–674.

[21] Herwartz, H. and Strumann, C. (2011), On the effect of prospective payment system on hospital

efficiency and competition for patients in Germany, Economics Working Paper, Kiel University,

Department of Economics, Kiel.

[22] Hilb, M., Hösly, B. and Müller, R. (2013), Wirksame Führung und Aufsicht von Öffentlichen

Unternehmen, VR- und GL-Praxis, 1. Aufl., Haupt, Bern.

[23] Kalb, A. (2010), Public Sector Efficiency: Applications to Local Governments in Germany,

Gabler Research, Gabler Verlag / GWV Fachverlage GmbH Wiesbaden, Wiesbaden.

[24] Kuchinke, B., Kallfass, H.H., Schneider, H. and Kirn, S. (2004), Krankenhausdienstleistungen

und Effizienz in Deutschland: Eine industrieökonomische Analyse, Zugl.: Ilmenau, Techn. Univ.,

Diss., 2004, Gesundheitsökonomische Beiträge, Vol. 43, 1. Aufl., Nomos Verl.-Ges, Baden-

Baden.

[25] Meyer, M. and Wohlmannstetter, V. (1985), “Effizienzmessung in Krankenhäusern”, Zeitschrift

für Betriebswirtschaftslehre, No. Band 55 Nummer 3, pp. 262–281.

[26] Nyman, J. and Bricker, D.L. (1989), “Profit Incentives and Technical Efficiency in the

Production of Nursing Home Care”, The Review of Economics and Statistics, Vol. 71 No. 4, pp.

586–594.

[27] Papenfuß, U. (2013), Verantwortungsvolle Steuerung und Leitung öffentlicher Unternehmen:

Empirische Analyse und Handlungsempfehlungen zur Public Corporate Governance, Zugl.:

Hamburg, Helmut-Schmidt-Univ., Diss., 2012, Research, Springer Gabler, Wiesbaden.


[28] Papenfuß, U. and Reichard, C. (Eds.) (2016), Gemischtwirtschaftliche Unternehmen:

Bestandsaufnahme und Perspektiven für Forschung und Praxis, 1. Auflage, Nomos

Verlagsgesellschaft mbH & Co. KG, Baden-Baden.

[29] Roßberg, I. (2007), Die marktorientierte Umstrukturierung kommunaler Kultureinrichtungen:

Besonderheiten und Lösungsansätze, Zugl.: Ostrava, Techn. Univ., Diss., 2006, Tectum-Verl.,

Marburg.

[30] Ruter, R.X., Sahr, K. and Waldersee, G. (2005), Public Corporate Governance: Ein Kodex für

öffentliche Unternehmen, Gabler Verlag, Wiesbaden, s.l.

[31] Schaefer, C., Reichard, C., Schulz-Nieswandt, F., Püttner, G., Grossi, G., Reck, H.-J.,

Mühlenkamp, H., Harms, J., Theuvsen, L., Röber, M., Frentrup, M., Rottmann, O., Lenk, T. and

Papenfuß, U. (2008), Public Corporate Governance: Bestandsaufnahme und Perspektiven, 1.

Auflage, Nomos Verlagsgesellschaft mbH & Co. KG, Baden-Baden.

[32] Schedler, K., Müller, R. and Sonderegger, R.W. (2011), Public corporate governance:

Handbuch für die Praxis, Public Management (PM), 1. Aufl., Haupt, Bern.

[33] Staat, M. (2006), “Efficiency of hospitals in Germany. A DEA-bootstrap approach”, Applied

Economics, Vol. 38 No. 19, pp. 2255–2263.

[34] Štastná, L. and Votápková, J. (2014), Efficiency of hospitals in the Czech Republic: Conditional

efficiency approach, IES Working Paper, Charles University in Prague, Institute of Economic

Studies (IES), Prague.

[35] Statistisches Bundesamt (2018), Grunddaten der Krankenhäuser: Fachserie 12 Reihe 6.1.1 -

2017.

[36] Steinmann, L., Dittrich, G., Karmann, A. and Zweifel, P. (2003), Measuring and comparing the

(in)efficiency of German and Swiss hospitals, Dresden discussion paper in economics, Techn.

Univ., Fak. Wirtschaftswiss, Dresden.

[37] Taube, R. (1988), Möglichkeiten der Effizienzmessung von öffentlichen Verwaltungen: Eine

ökonometrische Untersuchung am Beispiel von Krankenhäusern der Bundesrepublik

Deutschland, Zugl.: Münster, Univ., Diss., 1988, Volkswirtschaftliche Schriften, Vol. 383,

Duncker & Humblot, Berlin.

[38] Tiemann, O. and Schreyögg, J. (2011), Changes in hospital efficiency after privatization, HCHE

Research Paper, University of Hamburg, Hamburg Center for Health Economics (HCHE),

Hamburg.

[39] Vahs, D. and Schäfer-Kunz, J. (2012), Einführung in die Betriebswirtschaftslehre, 6. Aufl.,

Schäffer-Poeschel Verlag, s.l.

[40] Valdmanis, V. (1992), “Sensitivity analysis for DEA models. An empirical example using public

vs. NFP hospitals”, Journal of Public Economics, Vol. 48 No. 2, pp. 185–205.

[41] Vera, A. (2010), Krankenhausmanagement in einem wettbewerbsorientierten Umfeld, Zugl.:

Köln, Univ., Habil.-Schr., 2007, 1. Aufl., Eul, Lohmar.

[42] Vrabková, I. and Vaňková, I. (2015), Evaluation models of efficiency and quality of bed care in

hospitals, Series on advanced economic issues, 2015, vol. 36, 1. vydání, VŠB-TU Ostrava,

Ostrava.

[43] Westermann, G. and Cronauge, U. (2006), Kommunale Unternehmen: Eigenbetriebe -

Kapitalgesellschaften - Zweckverbände, Finanzwesen der Gemeinden, Vol. 3, 5., überarb. Aufl.,

Schmidt, Berlin.

[44] Wilson, G.W. and Jadlow, J.M. (1982), “Competition, Profit Incentives, and Technical

Efficiency in the Provision of Nuclear Medicine Services”, The Bell Journal of Economics,

Vol. 13 No. 2, p. 472.

[45] Wöhe, G., Döring, U. and Brösel, G. (2016), Einführung in die allgemeine

Betriebswirtschaftslehre, Vahlens Handbücher der Wirtschafts- und Sozialwissenschaften, 26.,

überarbeitete und aktualisierte Auflage, Verlag Franz Vahlen, München.


Appendix

Table 3: Overview of empirical studies on the efficiency of hospitals in Germany

Short title of study | Author | Year | Method | Basis of analysis | Inputs and outputs / main findings
Efficiency measurement in hospitals | Meyer and Wohlmannstetter (1985) | 1985 | DEA | 20 German hospitals | -
Possibilities for measuring the efficiency of public administrations | Taube (1988) | 1988 | Regression analysis, SFA | 613 German hospitals | Outputs: patients in different departments; Inputs: costs
Economic success in public hospitals | Helmig (2005) | 2005 | DEA | 418 German hospitals | Inputs: number of beds, treatment cases per year, sponsorship
Efficient hospitals? A comparison | Dittrich et al. (2005) | 2005 | DEA | 105 Saxon and 251 Swiss hospitals | Inputs: number of staff, costs, days of care; Main findings: Swiss hospitals are less efficient than German hospitals
Efficiency of hospitals in Germany: a DEA-bootstrap approach | Staat (2006) | 2006 | DEA | 160 German hospitals | Inputs: daily rates, number of beds; Outputs: treatment cases per year, length of stay
Cost and Technical Efficiency of German Hospitals | Frohloff (2007) | 2007 | SFA | 1500 German general hospitals | Inputs: e.g. ownership; Main findings: private and non-profit hospitals are on average less efficient than public hospitals
Cost and Technical Efficiency of German Hospitals: Does Ownership Matter? | Herr (2008) | 2008 | SFA | 1500 German hospitals | Inputs: number of beds, treatment cases per year, sponsorship; Main findings: private and non-profit hospitals are less cost-effective and technically less efficient than publicly owned hospitals
Does Higher Cost Inefficiency Imply Higher Profit Inefficiency? - Evidence on Inefficiency and Ownership of German Hospitals | Herr et al. (2009) | 2009 | SFA | 374 German hospitals | Inputs: number of beds, treatment cases per year, ownership; Main findings: private (for-profit) and (private) non-profit hospitals are less cost-efficient but more profitable than publicly owned hospitals
On the effect of prospective payment system on hospital efficiency and competition for patients in Germany | Herwartz and Strumann (2011) | 2011 | DEA | 1500 German general hospitals | Inputs: material costs, personnel, number of beds as non-discretionary input; Outputs: treatment cases per year, number of trainees; Main findings: improvement of overall efficiency after DRG introduction
Profit Efficiency and Ownership of German Hospitals | Herr et al. (2011) | 2011 | SFA | 541 German hospitals | Main findings: higher profit efficiency of private hospitals compared to public hospitals
Changes in hospital efficiency after privatization | Tiemann and Schreyögg (2011) | 2011 | DEA | 1878 German acute hospitals | Main findings: conversion from public to private ownership resulted in increased efficiency

Source: Own illustration.


Figure 5: Percentage of hospitals by ownership 2017. Source: (Statistisches Bundesamt, 2018).

Figure 6: DEA frontiers due to economies of scale. Source: Own illustration based on (Kalb, 2010; Vrabková and Vaňková, 2015).

Figure 7: Overview of frontiers with input and output orientation. Source: Own illustration based on (Kalb, 2010; Vrabková and Vaňková, 2015).

Table 4: Typically used inputs and outputs for measuring the efficiency of health care institutions. Source: Own illustration based on (Vrabková and Vaňková, 2015; Vera, 2010; Helmig, 2005; Kalb, 2010).


Figure 8: DMU distribution into size classes according to number of beds

Figure 9: Percentage distribution of efficiency measurement according to DEA model M1

Table 5: Aggregated results of modelling M1

Figure 10: Percentage distribution DEA model M2

Table 6: Aggregated results of modelling M2

Figure 11: FDH model IM1

Table 7: Aggregated results of modelling IM1

Table 8: Benchmark of efficiency DMU in input-oriented model


GENERALIZED LINEAR MODELS IN A MOTOR HULL INSURANCE PORTFOLIO

Adéla Špačková1

1Department of Economics, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

Actuaries in insurance companies try to find the best model for estimating the insurance premium. The premium depends on many risk factors, e.g. the car characteristics and the profile of the driver. In this paper, a motor hull insurance portfolio is analysed using a generalized linear model (GLM). Our aim is to model the dependence of the claim frequency on given risk factors. Models with different predictor variables are compared by the likelihood ratio test, analysis of deviance, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Based on this comparison, the model giving the best estimate of the annual claim frequency is chosen. All statistical calculations are computed in the STATA environment.

Keywords

Generalized Linear Models, Claim Frequency, Motor Hull Insurance Portfolio, Risk Factors

JEL Classification

C13, G22

Introduction

The need for internal models is still growing. Internal models are important for the determination and management of risks in every insurance company. Since risk management is often complex, it is important to establish a model for estimating the claim frequency and severity, both of which are important for the calculation of insurance premiums. Our aim is to model the dependence of the claim frequency on given risk factors. In this paper the claim frequency is estimated and the selected models are compared using the likelihood ratio (LR) test, the deviance, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC).

In statistics, the likelihood-ratio test assesses the goodness of fit of two competing statistical models

based on the ratio of their likelihoods, specifically one found by maximization over the entire parameter

space and another found after imposing some constraint. If the constraint (i.e., the null hypothesis) is

supported by the observed data, the two likelihoods should not differ by more than sampling error. Thus

the likelihood-ratio test tests whether this ratio is significantly different from one, or equivalently

whether its natural logarithm is significantly different from zero.

Deviance measures the discrepancy between the current model and the full model. The full model is the

model that has n parameters, one parameter per observation. The Akaike information criterion (AIC) is

an estimator of out-of-sample prediction error and thereby relative quality of statistical models for a

given set of data. In statistics, the Bayesian information criterion (BIC) is a criterion for model selection

among a finite set of models; the model with the lowest BIC is preferred. It is based, in part, on the

likelihood function and it is closely related to the Akaike information criterion (AIC).

Literature Review

Generalized linear modelling is a methodology for describing possible relationships between variables. The methodology was introduced by Nelder and Wedderburn in 1972. Nowadays these models are treated by many authors, such as Gray and Pitts (2012), Hardin and Hilbe (2012) and Long and Freese (2008). A practical example using the negative binomial distribution is demonstrated in Hilbe (2011). The above-mentioned titles are aimed more generally. The work focused


on the pricing process using non-life insurance data is well described in Ohlsson and Johansson (2010). A general treatment of insurance risk can be found in Cipra (2006 and 2012). A good application-oriented paper is Valecký (2015).

Methodology and Data

This section describes the methodology of generalized linear regression models: first generalized linear models in general, then the estimation of the individual parameters. At the end of the methodology the tools for comparing the selected models are described, namely the deviance and the information criteria.

The data file is a random sample of vehicle accident insurance contracts collected in 2005-2010 in the territory of the Czech Republic.

Generalized linear models

Generalized linear models were formulated by John Nelder and Robert Wedderburn as a generalization of ordinary regression models, including the linear model, that allows the response variable to follow a distribution other than the normal one. These models extend the Gaussian linear predictor to responses from the exponential family. Their main purpose is to estimate a random response variable (denoted y) depending on a certain number of explanatory variables (X_i).

Generally, a GLM includes three main components:
• a probability distribution from an exponential family,
• a linear predictor $\eta = x'\beta$,
• a link function $g(\mu/n) = x'\beta$,
where g is the link function, $\mu$ is the mean and n is called the exposure.

The link function can be diverse, but for the purposes of this paper the logarithmic link function is selected (Jong and Heller, 2008; Gray and Pitts, 2012).

Thus, with the logarithmic link function g the model becomes

$$\ln\left(\frac{\mu}{n}\right) = x'\beta \quad\Leftrightarrow\quad \ln\mu = \ln n + x'\beta, \qquad (1)$$

where $\ln n$ is called an "offset". The offset is an additional explanatory variable whose coefficient is fixed to one. Offsets are usually used to correct for differing exposure, i.e. differing time periods of observation.
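The paper's calculations are carried out in Stata; purely as an illustration of the same specification, the following hedged Python sketch (using the statsmodels library) fits a negative binomial GLM with the log link and a log-exposure offset. The file name, the columns "claims" and "exposure" and the overdispersion value alpha are hypothetical assumptions, while the predictor names follow Table 2 below.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical portfolio data: one row per contract, 'claims' is the observed
# claim count and 'exposure' is the insured period in years (both assumed names).
df = pd.read_csv("portfolio.csv")

# Predictors as in Table 2; categorical factors are dummy-coded.
X = pd.get_dummies(df[["agecar", "gender", "volumkw", "fuel",
                       "ageman", "price", "district"]],
                   columns=["gender", "fuel", "district"], drop_first=True)
X = sm.add_constant(X.astype(float))

# Negative binomial family with log link; ln(exposure) enters as the offset
# whose coefficient is fixed to one, as in equation (1).
model = sm.GLM(df["claims"], X,
               family=sm.families.NegativeBinomial(alpha=1.0),  # alpha assumed
               offset=np.log(df["exposure"]))
result = model.fit()
print(result.summary())
```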

All probability distributions of the exponential family can be described in the general form

$$f(y) = \exp\left\{\frac{y\theta - a(\theta)}{\phi} + c(y,\phi)\right\}, \qquad (2)$$

where $\theta$ is the canonical parameter and $\phi$ is called the dispersion parameter. $a(\theta)$ and $c(y,\phi)$ are functions determining the actual probability function, such as the normal, gamma, binomial etc.

For the purposes of this paper the negative binomial distribution is chosen. The description of its exponential family parameters is shown in Table 1.


Table 1. Distribution parameters in the exponential family framework

Distribution: Negative binomial$(\mu,\kappa)$
Canonical parameter: $\theta = \ln\dfrac{\kappa\mu}{1+\kappa\mu}$
$a(\theta) = -\kappa^{-1}\ln(1-e^{\theta})$
$E(y) = \mu$
$\operatorname{Var}(y) = \phi\,V(\mu) = \mu(1+\kappa\mu)$

Source: Jong and Heller (2008)

Maximum likelihood estimation

The standard method of parameter estimation is maximum likelihood. The method selects the parameter estimates that maximize the likelihood of the observed sample:

$$\ell(\theta,\phi) = \sum_{i=1}^{n}\ln f(y_i;\theta_i,\phi), \qquad (3)$$

where $f(y_i)$ is a probability function depending on the canonical parameter $\theta$ and the dispersion parameter $\phi$. For the exponential family probability function the log-likelihood is

$$\ell(\theta,\phi) = \sum_{i=1}^{n}\ln f(y_i;\theta_i,\phi) = \sum_{i=1}^{n}\left[\frac{y_i\theta_i - a(\theta_i)}{\phi} + c(y_i,\phi)\right]. \qquad (4)$$

The log-likelihood is maximized by differentiating it with respect to $\beta_j$:

$$\frac{\partial\ell}{\partial\beta_j} = \sum_{i=1}^{n}\frac{\partial\ell_i}{\partial\theta_i}\,\frac{\partial\theta_i}{\partial\beta_j}, \qquad (5)$$

where the components are the following:

$$\frac{\partial\ell_i}{\partial\theta_i} = \frac{y_i - a'(\theta_i)}{\phi} = \frac{y_i - \mu_i}{\phi}, \qquad (6)$$

$$\frac{\partial\theta_i}{\partial\beta_j} = \frac{\partial\theta_i}{\partial\mu_i}\,\frac{\partial\mu_i}{\partial\beta_j} = \frac{\partial\theta_i}{\partial\mu_i}\,\frac{\partial\mu_i}{\partial\eta_i}\,x_{ij}, \qquad (7)$$

where $x_{ij}$ is component $i$ of $x_j$. Setting equation (5) equal to zero yields the estimating equations for $\beta$:

$$\sum_{i=1}^{n} x_{ij}\,d_i\,(y_i-\mu_i) = 0 \quad\Leftrightarrow\quad X'D(y-\mu) = 0, \qquad (8)$$

where the weights $d_i$ collect the derivative terms from (6) and (7) and $D$ is the corresponding diagonal matrix. From equation (8) it is clear that the parameter $\beta$ enters only implicitly, working through $\mu$ and $D$.

Generalized linear models are estimated using the Newton-Raphson method or the IRLS method (iteratively reweighted least squares). The Newton-Raphson algorithm yields the observed information matrix (OIM), whereas the IRLS method yields the expected information matrix (EIM); see Gray and Pitts (2012).
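Purely as an illustrative sketch of the IRLS idea mentioned above (not the authors' Stata estimation), the following Python fragment fits a Poisson GLM with log link; for the canonical log link the working weights equal the fitted means. All data in the example are synthetic.

```python
import numpy as np

def irls_poisson(X, y, n_iter=25, tol=1e-8):
    """Minimal IRLS for a Poisson GLM with log link (illustrative sketch only)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta                 # linear predictor
        mu = np.exp(eta)               # inverse log link
        W = mu                         # working weights (= mu for the log link)
        z = eta + (y - mu) / mu        # working response
        XtW = X.T * W                  # equivalent to X' diag(W)
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Tiny synthetic example (hypothetical data, not the paper's portfolio)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = rng.poisson(np.exp(0.1 + 0.3 * X[:, 1]))
print(irls_poisson(X, y))              # estimates should be close to [0.1, 0.3]
```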


Claim frequency models

In a claim frequency model the random dependent variable (the observed number of claims) is discrete and conditioned by a vector of explanatory variables (risk characteristics based on the individual characteristics of the policyholders). For the purposes of this paper the negative binomial and the Poisson distribution are selected. The negative binomial probability of a random variable Y, written in the exponential family framework (2), is given by

$$\ln f(y) = \frac{y\theta - a(\theta)}{\phi} + c(y,\kappa) = y\ln\frac{\kappa\mu}{1+\kappa\mu} - \frac{1}{\kappa}\ln(1+\kappa\mu) + c(y,\kappa), \qquad (9)$$

where the dispersion parameter is $\phi = 1$ and the canonical parameter is $\theta = \ln\dfrac{\kappa\mu}{1+\kappa\mu}$.

The mean and variance functions are

$$E(y) = a'(\theta) = \frac{e^{\theta}}{\kappa(1-e^{\theta})} = \mu, \qquad (10)$$

$$\operatorname{Var}(y) = \phi\,a''(\theta) = \frac{e^{\theta}}{\kappa(1-e^{\theta})^{2}} = \mu(1+\kappa\mu), \qquad (11)$$

where $a'(\theta)$ and $a''(\theta)$ are the first and second derivatives of $a(\theta)$ with respect to $\theta$.

The Poisson probability of a random variable Y, written in the exponential family framework (2), is given by

$$\ln f(y) = \frac{y\theta - a(\theta)}{\phi} - \ln y! = y\ln\mu - \mu - \ln y!, \qquad (12)$$

where the dispersion parameter is $\phi = 1$ and $a(\theta) = e^{\theta} = \mu$ (see the table of parameters and Jong and Heller, 2008).

The mean and variance functions are

$$E(y) = a'(\theta) = e^{\theta} = \mu, \qquad \operatorname{Var}(y) = \phi\,a''(\theta) = e^{\theta} = \mu, \qquad (13)$$

where $a'(\theta)$ and $a''(\theta)$ are the first and second derivatives of $a(\theta)$ with respect to $\theta$.

Models´ goodness of fit

Assessing how well a model fits the data is a natural question that arises with every statistical modelling exercise. The literature presents many statistical tools that can be used to select count data models and to assess their performance. As discussed in Jong and Heller (2008), one way to assess the fit of a given model is to compare it with the model with the best possible fit.

Likelihood Ratio

The likelihood ratio (LR) is very often used for the comparison of models. For this test it is necessary to calculate the parameter estimates both for the full model and for the reduced model (estimated to best describe the reality). The test statistic is then given by the expression

$$LR = -2\,(\log L_I - \log L_F), \qquad (14)$$

where $L_I$ is the likelihood of the reduced model and $L_F$ the likelihood of the full model.

In large samples the likelihood ratio statistic follows a Pearson $\chi^2$ distribution with degrees of freedom equal to the difference in the number of parameters of the tested models, i.e.


$$LR \sim \chi^{2}_{k_F - k_I}(1-\alpha).$$

The value of the LR test is high if the explanatory variables (linear regressors) in the model affect the explained variable, see Hardin and Hilbe (2012). The likelihood ratio test is an alternative to the F-test in the linear regression model and is suitable when deciding whether to add other explanatory variables into the model.
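As an illustration only (the paper's tests are run in Stata), the LR statistic reported later in Table 5 can be checked against the χ² distribution in Python; the statistic and the 16 degrees of freedom are the values from that table.

```python
from scipy.stats import chi2

lr_stat = 46.1   # LR test statistic as reported later in Table 5
df = 16          # difference in the number of estimated parameters (Table 5)

p_value = chi2.sf(lr_stat, df)     # survival function: P(chi2_df > LR)
print(f"p-value = {p_value:.4f}")  # should be close to the 0.0001 reported in Table 5
```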

Akaike and Bayesian information criteria

These criteria compare models among themselves; the most suitable model is the one with the lowest value of AIC or BIC. The Akaike information criterion is

$$AIC = 2k - 2\log(L), \qquad (15)$$

where k is the number of predictors of the model including the constant and log(L) is the log-likelihood of the model. The Bayesian information criterion is

$$BIC = -2\log(L) + k\log(n), \qquad (16)$$

where n is the number of observations.
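A minimal sketch of formulas (15) and (16) in Python; the log-likelihood and the number of parameters below are hypothetical figures, while n = 18 111 is the number of contracts in the paper's data file.

```python
import math

def aic(log_l: float, k: int) -> float:
    # Akaike information criterion, equation (15)
    return 2 * k - 2 * log_l

def bic(log_l: float, k: int, n: int) -> float:
    # Bayesian information criterion, equation (16)
    return -2 * log_l + k * math.log(n)

# Hypothetical example: log-likelihood -2700 with 20 parameters, 18 111 observations
print(aic(-2700.0, 20), bic(-2700.0, 20, 18111))
```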

Deviance residuals

Residual analysis provides information about the suitability of the model. The deviance residuals can be used for assessing the quality of the model, e.g. for detecting remote observations and verifying the assumption about the variance. The general form of the deviance residual is

$$r_i^{D} = \operatorname{sign}(y_i - \hat\mu_i)\,\sqrt{d(y_i,\hat\mu_i)}, \qquad (17)$$

where $d(y_i,\hat\mu_i)$ denotes the distance (unit deviance) function, which measures how far the observed value lies from the estimated mean.
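Purely as an illustration of equation (17) for the negative binomial case, a hedged Python sketch follows; the overdispersion parameter kappa and the fitted means in the example call are assumed values, not the paper's estimates.

```python
import numpy as np

def nb_deviance_residuals(y, mu, kappa):
    """Deviance residuals for a negative binomial model with Var(y) = mu*(1 + kappa*mu)."""
    y = np.asarray(y, dtype=float)
    mu = np.asarray(mu, dtype=float)
    # y*ln(y/mu) is taken as 0 when y == 0
    term1 = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0) / mu), 0.0)
    term2 = (y + 1.0 / kappa) * np.log((1.0 + kappa * y) / (1.0 + kappa * mu))
    d = 2.0 * (term1 - term2)                       # unit deviance
    return np.sign(y - mu) * np.sqrt(np.maximum(d, 0.0))

# Hypothetical check on a few fitted values
print(nb_deviance_residuals([0, 1, 2], [0.05, 0.07, 0.10], kappa=1.5))
```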

Empirical Results

Every person applying for vehicle insurance is assigned to a class in which the risk is homogeneous. One of the criteria used for assigning an individual to a certain class is the number of claims, so modelling the number of claims in a given insurance portfolio is a very important task for insurance companies.

Our aim is to model the dependence of the claim frequency on given risk factors. For the purposes of this paper a random selection of real vehicle insurance data collected during the years 2005-2010 in the territory of the Czech Republic is used. The file contains 18 111 contracts.


Figure 12. Summary of the count variable

The mean of the count variable is 0.064 and its variance is 0.070, slightly more than the mean, which suggests that the data are overdispersed. Negative binomial regression can be used for overdispersed count data, that is, when the conditional variance exceeds the conditional mean.
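As a rough illustration only (not part of the authors' estimation), matching the reported sample moments to the negative binomial variance function Var(y) = μ(1 + κμ) gives a crude moment estimate of the overdispersion parameter:

```python
mean_count = 0.064   # sample mean reported above
var_count = 0.070    # sample variance reported above

# Solve var = mu * (1 + kappa * mu) for kappa (method-of-moments sketch)
kappa_hat = (var_count / mean_count - 1.0) / mean_count
print(f"kappa_hat = {kappa_hat:.2f}")   # roughly 1.5
```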

The model is estimated with the following seven predictors, because each vehicle insurance contract records these individual characteristics of the policyholder:

Table 2. Variable description

variable | description
agecar | Age of car
gender | Driver's gender
volumkw | Engine power
fuel | Type of fuel
ageman | Age of driver
price | Vehicle price
district | District area

Source: Špačková (2020)

The following figure shows the histogram of the empirical claim frequency:

Figure 2. Empirical claim frequency
Source: Špačková (2020)

The histogram shows an obvious positive skewness. The following table compares the observed claim frequency with the frequency predicted by the negative binomial distribution.


Table 3. Observed vs. predicted frequency

Claim frequency | Observed | Negative binomial
0 | 17 045 | 17 104.88
1 | 984 | 897.14
2 | 76 | 75.21
3 | 6 | 2.66

Source: Špačková (2020)

According to the table, 17 045 contracts suffered no claim, one claim occurred in 984 cases, two claims in 76 cases and three claims in 6 cases. The table shows that the negative binomial distribution fits our insurance data well. The following table reports the β parameters estimated by the maximum likelihood method.

Table 4. Analysis of parameters (negative binomial model)

variable | parameter | St. error | p-value
agecar | -0.129 | 0.012 | 0.000
gender | 0.267 | 0.089 | 0.000
volumkw | 0.006 | 0.004 | 0.000
fuel 2 | 0.398 | 0.089 | 0.017
fuel 3 | -10.303 | 890.294 | 0.986
fuel 4 | 0.560 | 1.05 | 0.575
ageman | -0.013 | 0.002 | 0.000
price | 5.81e-07 | 1.95e-07 | 0.000
district 2 | -0.421 | 0.319 | 0.200
district 3 | -0.428 | 0.319 | 0.180
district 4 | 0.192 | 0.2347 | 0.411
district 5 | -0.296 | 0.239 | 0.188
district 6 | 0.365 | 0.246 | 0.133
district 7 | -0.440 | 0.320 | 0.215
district 8 | -0.328 | 0.295 | 0.219
district 9 | -0.303 | 0.327 | 0.389
district 10 | -0.188 | 0.221 | 0.430
district 11 | 0.156 | 0.192 | 0.556
district 12 | -1.082 | 0.382 | 0.005


district 13 | -0.011 | 0.263 | 0.961
district 14 | -0.113 | 0.204 | 0.532

Source: Špačková (2020)

The p-values of the variables fuel and district (for most of their levels) are higher than 0.05, which suggests that they are irrelevant for the variable count. According to the LR test procedure it is necessary to estimate a second model. The next step therefore contains a second model estimated without the variables fuel and district. This model, called model 2, is nested in model 1 and can subsequently be tested by the LR test. The results of the likelihood ratio test are given in Table 5.

Table 5. Likelihood ratio test

LR test χ²(16) | prob. > χ²
46.1 | 0.0001

Source: Špačková (2020)

According to the results it is clear that the more accurate model is model 1, which includes all variables. The following table shows the criteria for assessing the goodness of fit of the full and the restricted model.

Table 6. Criteria for assessing goodness of fit

 | Full model | Restricted model
Log likelihood | -2704.994906 | -2686.450867
AIC | 0.4317 | 0.4482
BIC | -114970.7 | -115264.6

Source: Špačková (2020)

The preferred model is still the full model because its log-likelihood is higher. According to the results of AIC and BIC the better model is again the full model, because its AIC and BIC values are lower. The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) provide a method for assessing the quality of a model through comparison of related models; both are based on the deviance but penalize additional model complexity. Eventually, the full model proved to be the better one.

The models can also be compared by the deviance. The figure below compares the deviance of the counts predicted by each model.

Figure 3. Predicted deviance by the full (left) and the restricted model


Figure 3 clearly shows that the estimate of the mean value of the variable count by model 1 is more accurate, since the residuals of the full model are not as scattered. The restricted model contains a larger number of extreme values compared to the full model. Based on the deviance residuals, the full model was again evaluated as more appropriate for modelling the number of claims.

Based on the comparison of the two models by all of the essential characteristics, i.e. the likelihood ratio test, the information criteria and the deviance residuals, the full model appeared to be more appropriate for modelling the number of claims.

Conclusion

Our aim was to model the dependence of the claim frequency, from which the premium is derived, on given risk factors. It was found that the claim frequency depends on many risk factors: agecar, gender, volumkw, fuel, ageman, price and district. In this paper the claim frequency was estimated and the selected models were compared using the LR test, the deviance, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC).

The models were estimated within the framework of generalized linear models, which are used for the estimation of the claim frequency in this paper. In the theoretical part the GLMs were introduced, including the definition of the link function and the specification of the exponential family of probability density functions. The standard method of parameter estimation is maximum likelihood, which selects the parameter estimates that maximize the likelihood of the observed sample.

The main part of the paper is the GLM application in vehicle insurance. A random selection of real vehicle insurance data collected during the years 2005-2010 in the territory of the Czech Republic was used; the file contained 18 111 contracts. The drivers were divided into groups on the basis of risk factors. The negative binomial distribution with the log link function was used. Two different models with various sets of variables were considered and, based on the goodness of fit, the best model was chosen: the full model, which includes all of the above-mentioned variables.

References

[1] CIPRA, Tomáš. Finanční a pojistné vzorce. Praha: Grada Publishing, 2006. ISBN 80-247-1633-

X.

[2] CIPRA, Tomáš. Pojistná matematika: teorie a praxe. 2. aktualiz. vyd. Praha: Ekopress, c2006.

ISBN 80-86929-11-6.

[3] CIPRA, Tomáš. Riziko ve financích a pojišťovnictví: Basel III a Solvency II. Vydání I. Praha:

Ekopress, 2015. ISBN 978-80-87865-24-8.

[4] GRAY, Roger J. a Susan M. PITTS. Risk modelling in general insurance: from principles to

practice. Cambridge: Cambridge University Press, 2012. ISBN 978-0-521-86394-0.

[5] HARDIN, James W. a Joseph HILBE. Generalized linear models and extensions. 3rd ed.

College Station: Stata Press, 2012. ISBN 978-1-59718-105-1.

[6] HILBE, Joseph. Negative binomial regression. 2nd ed. Cambridge: Cambridge University Press,

2011. ISBN 978-0-521-19815-8.

[7] JONG, Piet de a Gillian Z. HELLER. Generalized linear models for insurance data. Cambridge:

Cambridge University Press, 2008. ISBN 978-0-521-87914-9.

[8] LONG, J. Scott a Jeremy FREESE. Regression models for categorical dependent variables using

Stata. Third edition. College Station: Stata Press Publication, 2014. ISBN 978-1-59718-111-2

[9] OHLSSON, Esbjörn a Björn JOHANSSON. Non-life insurance pricing with generalized linear

models. Berlin: Springer, c2010. ISBN 978-3-642-10790-0.

[10] VALECKÝ, Jiří. Modelling claim frequency in vehicle insurance. Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis. 2015. 10 p. ISSN 1211-8516.


OPTIMIZATION OF DIRECT COSTS OF THE RAILWAYS OF THE SLOVAK REPUBLIC

Adrián Kuka1, Adrián Šperka2, Michal Petr Hranický3, Jozef Gašparík4

1Department of Railway transport, ŽU – University of Žilina,
Univerzitná 1, Žilina, Slovak Republic
e-mail: [email protected]
2Department of Railway transport, ŽU – University of Žilina,
Univerzitná 1, Žilina, Slovak Republic
e-mail: [email protected]
3Department of Railway transport, ŽU – University of Žilina,
Univerzitná 1, Žilina, Slovak Republic
e-mail: [email protected]
4Department of Railway transport, ŽU – University of Žilina,
Univerzitná 1, Žilina, Slovak Republic
e-mail: [email protected]

Abstract

The article deals with the issue of direct costs in the railway transport environment, specifically in the environment of the infrastructure manager. It is important for the infrastructure manager to gradually reduce direct costs and thus seize the opportunity to save money. One of the largest items of direct costs is staff costs. This can be seen in the operating professions, where there is a trend to reduce staff and implement dispatching centralization in railway transport operation. The issue of reducing direct costs through the introduction of dispatching centralization is currently topical also because of the lack of employees caused by the high demands and low attractiveness of operating professions in the railway sector.

Keywords

remote control track, infrastructure manager, costs, human labor

JEL Classification

R4

Introduction

Rail transport is one of the fastest growing transport branches. It is essential that it responds to new challenges from passenger and freight carriers. A dynamic modernization process, which brings many positive changes, also facilitates the correct response. Everything from the switches to the locomotives is being modernized.

Modernization and reconstruction measures also generate direct costs in rail transport. The infrastructure manager and the individual carriers strive to eliminate these costs. One of the popular and currently inevitable ways to solve this problem is to control railway lines remotely. This reduces not only direct personnel costs but also the number of operating staff. The other side of this measure is the question of where to place the workers who lose their jobs as a result.

Rail transport costs

Almost all human needs are connected with movement, a change of place. The higher the level of transport, the better the social division of labour and cooperation, the distribution of means of production and consumer goods, and the exchange of activities and goods can develop (Řezníček, 1982).

The development of transport enables (Řezníček, 1982):
• closer social relations,


• stronger material and cultural connections between nations,
• a much richer life for people and its better protection, including the participation of transport in strengthening state defence.

But every development costs something. The transport sector is one of the most cost-intensive sectors. Its costs depend on various factors and also differ between modes of transport.

Classification of costs

The classification of costs serves mainly for a more detailed breakdown of costs in the company, thus it

organizes individual cost items into groups, with each group having its own characteristic feature.

(Dolinayová & Nedeliaková, 2015).

In railway transport, it is necessary to monitor costs from various points of view and their classification

is most often according to (Dolinayová & Nedeliaková, 2015):

• cost types - economic division of costs,

• relation to the production process - purpose-specific cost division,

• calculation formula items,

• cost dependency,

• responsibility for their creation,

• decision making - managerial understanding of costs.

The classification of costs is mainly used for (Dolinayová & Nedeliaková, 2015):

• getting more accurate information,

• finding the level of management of the company and its individual organizational units,

• more precise allocation of costs to individual activities and processes in the company and its

organizational units,

• cost planning.

The division of costs by cost type is the basic one used by each undertaking, given that these costs are

recorded in the cost accounts. It is a value expression of consumption of individual types of production

factors at the input to the production process (Dolinayová & Nedeliaková, 2015).

Costs according to cost types can be divided, e.g., as follows (Míka, 2005):

1. Material and energy costs

2. External services costs:

• analysis,

• mediation,

• consulting,

• service, reparation.

3. Personnel costs:

• wages,

• rewards,

• educational costs.

4. Taxes and fees

5. Depreciation and provisions

6. Income taxes

The division shows that employees' wages are included in personnel costs. A very important aspect in

rail transport is also the training of employees due to the demanding nature of operating professions.

A calculation formula allows the respective part of each cost type to be assigned to individual products (Kupkovič, 1999).

In terms of calculations, we divide costs into two groups (Kupkovič, 1999):

• direct costs - costs that can be directly identified and calculated by the cost bearer,

• indirect costs - they are incurred jointly by several cost carriers and are allocated to cost carriers

using various methods, most often by using a mark-up surcharge.


The general calculation formula is designed to calculate the full cost (Kupkovič, 1999). The type calculation formula of the infrastructure manager (hereinafter referred to as ŽSR) is based on the purpose-specific cost classification. It serves as a basis for calculating the performance of the individual company units, which may add to it other items related to the calculation of the cost of their services, as shown in Table 1.

Table 1. Calculation formula of the Railways of the Slovak Republic

1. DIRECT MATERIAL
2. DIRECT WAGES
3. OTHER PERSONNEL COSTS
   3.1 Legal social insurance
   3.2 Other social security
   3.3 Legal and other social costs
4. OTHER DIRECT COSTS
   4.1 Other services
   4.2 Direct depreciation
   4.3 Other direct costs
1.–4. Direct costs
5. OPERATING OVERHEAD
1.–5. Own costs of operation
6. ADMINISTRATIVE OVERHEAD
1.–6. Total own cost of performance
7. PROFIT OR LOSS
1.–7. Price without VAT
8. VAT
9. Other price components within the meaning of the Price Act

Source: (Železnice Slovenskej republiky, 2012)

The table shows that the infrastructure manager's labour costs are direct costs. All items of direct costs must be related to the valued performance, and the cost bearer (in our case the employee) is determined directly or by technical conversion (Dolinayová & Nedeliaková, 2015).

Wage costs

Labour costs are one of the largest items that a company seeks to minimize. The company must be able to control and direct them so that both the interests of the employees and the economic objectives of the company are preserved (Štetka, 2014).

The employer's labour costs per employee (the price of labour) consist of the following parts (Šutyová, 2019):
1. gross wage of the employee,
2. payments to the tax office,
3. contributions to the Social Insurance Agency,
4. payments to the health insurance company.

Employer's contributions amount to 35.2% of the employee's gross salary according to the items listed above. Formula (1) gives a sample calculation of labour costs (Šutyová, 2019):

Labour cost = gross wage of employee + employer's contributions (1)

It holds that the more employees a company has, the greater its total labour costs are.
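A minimal illustrative calculation of formula (1) in Python; the gross wage of 1 000 € is a hypothetical figure, while the 35.2% contribution rate is the one quoted above.

```python
gross_wage = 1000.0        # hypothetical monthly gross wage in EUR
contribution_rate = 0.352  # employer's contributions quoted above

labour_cost = gross_wage * (1 + contribution_rate)    # formula (1)
print(f"Labour cost = {labour_cost:.2f} EUR")          # 1352.00 EUR
```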


Dispatching centralization on ŽSR network

The fast pace of introduction of new technologies in railway traffic management requires a review of ŽSR's approach in this area. Before the actual implementation of traffic management centers and remote-controlled lines, it was necessary to decide on strategic issues and define the requirements for their implementation on the ŽSR network.

Main milestones of implementation of dispatching centralization on ŽSR network (Odbor stratégie a

vonkajších vzťahov GR ŽSR, 2019):

• analysis of the current situation,

• a proposal for the future situation,

• implementation of the proposal.

Proposal of the strategic distribution of traffic management centers on the ŽSR network
The ŽSR network has a total length of 3,627 km, of which 1,586 km are electrified. Even for such a relatively short network, it is necessary that the traffic management centers (CRDs) be located according to certain principles and conditions respecting the individual parameters of the lines and stations.

The proposal for the implementation of the CRD stems from the following principles and conditions

(Odbor stratégie a vonkajších vzťahov GR ŽSR, 2019):

• coherence of transport - linking traffic management to the direction of traffic flows,

• link to the modernization of lines according to sections,

• composition of the lines - significant and minor routes, corridor and sub-routes,

• transparency in traffic management,

• traffic intensity and number of dispatchers,

• the location of CRDs at major transport hubs, following the availability of local controlled

workplaces in the event of emergencies,

• acceptance of the current state of long-distance traffic management at the site of ŽSR,

• anticipation of the development of demand and the structure of transport on individual lines,

• possibility of integration into integrated transport systems,

• size of controlled stations and lines,

• the expected number of controlled elements in each station,

• shift making rules,

• the Labour Code and other laws,

• the average wage and employment in the region.

The following are excluded from the draft CRD layout concept (Odbor stratégie a vonkajších vzťahov

GR ŽSR, 2019):

1. Large marshalling yards, which will be controlled locally, with only the entrances and exits of trains controlled remotely, i.e.:

• Bratislava východ,

• Žilina-Teplička,

• Čierna nad Tisou,

• Košice nákladná stanica.

2. Large border crossing points on wide-gauge lines to be operated locally:

• Čierna nad Tisou,

• Maťovce.

3. Lines with simplified traffic management and lines with low traffic frequency.

Figure 1 is a map background showing the planned CRD network on the ŽSR network.


Figure 1. Traffic management centers on ŽSR network

Source: (Odbor stratégie a vonkajších vzťahov GR ŽSR, 2019)

Table 2 shows the number of individual CRDs, associated local CRDs, and km of tracks to be controlled

from local CRDs.

Table 2. CRD locations on the ŽSR network

CRD | Local CRD | Length of controlled lines in km
Bratislava | Jablonica | 69
Bratislava | Kúty | 93
Bratislava | Bratislava-Nové Mesto | 36
Bratislava | Dunajská Streda | 95
Bratislava | Nové Zámky | 131
Bratislava | Galanta | 94
Bratislava | Lužianky | 76
Bratislava | Prievidza | 105
Bratislava | Trnava | 145
Žilina | Púchov | 138
Žilina | Žilina | 70
Žilina | Kraľovany | 139
Zvolen | Lučenec | 198
Zvolen | Zvolen | 145
Zvolen | Brezno | 179
Zvolen | Banská Bystrica | 80
Zvolen | Levice | 55
Košice | Poprad | 94
Košice | Kysak | 27
Košice | Košice | 36
Košice | Michaľany | 105
Košice | Moldava nad Bodvou | 66
Košice | Plaveč | 104
Košice | Prešov | 117
Košice | Humenné | 91
Košice | Trebišov | 100
Košice | Haniska pri Košiciach | 105

Source: (Odbor stratégie a vonkajších vzťahov GR ŽSR, 2019)


A total of 27 local CRDs should be built on the territory of ŽSR, and they will fall under the 4 CRDs. No local CRD will control lines longer than 200 km. On the one hand, this is a relatively large number of CRDs for such a small railway network; on the other hand, considering the individual catchment areas, operating staff may not have to move directly to the CRD construction sites but may commute.

In the Czech Republic, where there are two CRDs (Prague and Přerov), there is a problem with relocating operational employees to these locations. As the housing issue is not being solved on a long-term basis, especially in Prague, the operational employees (dispatchers) of the infrastructure manager are not satisfied.

Reduction of operational staff due to dispatching centralization

Under the ŽSR conditions, mainly railway dispatchers, signalmen and switchmen are involved in the

management of railway transport. Collectively, they can be called traffic management employees.

The work of the switchman comprises (Majerčák, et al., 2015):

• operation of manually and locally adjusted switches for train movements and for shunting,

• controlling whether the track is free and thus ready for a train movement,

• controlling the train movements,

• inspection and operational service of switches and derails

• operation of railway crossings’ signalling systems, as long as they are within their responsibility.

Contents of the dispatcher's work (Majerčák, et al., 2015):

• control, coordination and management of train traffic within the allocated perimeter of the

railway station, track or remotely operated track sections,

• shunting management and control,

• management and control of train-forming activities,

• managing and coordinating the work of the relevant staff at the railway station and train staff

while in the railway station and adjacent interstation section and in remotely controlled stations.

Due to the introduction of CRD on the ŽSR network, the number of traffic management employees will

decrease. Figure 2 shows the estimated number of employees in each CRD.

Figure 2. Number of current traffic management staff and their reduction after the introduction of CRD
Source: (Odbor stratégie a vonkajších vzťahov GR ŽSR, 2019)

The graph makes clear that at present a total of 3,846 employees are involved in traffic management. After the CRDs are built there will be only 1,915, which is a saving of 1,931 employees.

(Figure 2 data, current / proposed traffic management staff: Bratislava 1,258 / 667; Žilina 675 / 314; Zvolen 850 / 396; Košice 1,063 / 538.)


Dispatching centralization of the Banská Bystrica – Zvolen track and its economic impact

According to the facts from Chapter 3, one of the sections included in the dispatching centralization is the section Banská Bystrica - Zvolen mesto. It is a single-track line 20 km long, which should be controlled from the CRD Banská Bystrica after the remote control is completed. For charging purposes, the track falls into category A (Dopravný úrad, 2019).

The section includes the following railway stations:

• Banská Bystrica,

• Radvaň,

• Vlkanová,

• Sliač kúpele.

The following train stops are located on the track section:

• Banská Bystrica mesto,

• Hronsek,

• Veľká Lúka,

• Zvolen mesto.

The number of transport employees (dispatchers and switchmen) depends mainly on the size of the

railway station and the type of signalling system. Figure 3 shows the number of tracks at individual

railway stations on the section in question.

Figure 3. Number and type of tracks

Source: (VLAKY.NET, 2004)

It is clear from the graph that the most rail tracks are in the Banská Bystrica railway station, therefore

the operational need of transport employees will be the greatest at this point of transport. Other railway

stations are intermediate stations with a smaller scope of operational work.

Staff costs in the current state

The costs of employees in the operation of railway transport depend mainly on the number of employees. Traffic on the section Banská Bystrica - Zvolen (exclusive) is controlled by dispatchers at each station on the section. Since the signalling system does not require the intervention of switchmen, the switches are operated directly by the dispatcher, and therefore the importance of switchmen at individual stations decreases from year to year. In 2019 ŽSR proceeded to abolish the switchmen on this track section. Table 3 shows the number of dispatchers at each transport site.

(Figure 3 compares the numbers of traffic tracks, handling tracks and other tracks at the stations Banská Bystrica, Radvaň, Vlkanová and Sliač kúpele.)


Table 3. Operational need for transport staff at individual stations

Railway station | Number of dispatchers per shift | Total need of dispatchers
Banská Bystrica | 2 | 8
Radvaň | 1 | 4
Vlkanová | 1 | 4
Sliač kúpele | 1 | 4
Odb. Zvolen mesto | - | -

Source: Authors

The wage of each employee depends on several factors, especially on the labour intensity of the individual transport operations. ŽSR uses a tariff system of remuneration in which individual stations are grouped according to their operational indicators, and tariff classes for operational employees are derived from these groups. For the purposes of this article we consider a uniform hourly rate for dispatchers and switchmen, as shown in Table 4.

Table 4. Data for direct cost calculation

Railway station | Working hours of dispatchers | Hourly rate of dispatchers | Annual rate of dispatchers
Banská Bystrica | 12 | 17 € | 148 920 €
Radvaň | 12 | 17 € | 148 920 €
Vlkanová | 12 | 17 € | 148 920 €
Sliač kúpele | 12 | 17 € | 148 920 €
Odb. Zvolen mesto | - | - | -

Source: Authors

Each of the aforementioned transport points is in operation 24 hours a day and 7 days a week. The dispatchers work 12-hour shifts and then take turns. The annual rate of dispatchers is calculated according to formula (2):

Annual rate = hourly rate * 24 hours * 365 days (2)

The annual cost of dispatchers is calculated from the number of dispatchers and their annual rate. In our case we do not take into account holidays, sick days, overtime, or days when dispatchers have time off between shifts. The annual cost of transport service staff is calculated according to formula (3):

Annual cost = annual rate * shift need (number of dispatchers) (3)
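Purely as a check of formulas (2) and (3) with the figures used in this section (17 €/h, a total need of 8 dispatchers in Banská Bystrica and 4 in each of the other staffed stations), a short Python sketch reproduces the annual costs reported in Table 5:

```python
HOURS_PER_DAY = 24
DAYS_PER_YEAR = 365

def annual_rate(hourly_rate: float) -> float:
    # formula (2): annual rate = hourly rate * 24 hours * 365 days
    return hourly_rate * HOURS_PER_DAY * DAYS_PER_YEAR

def annual_cost(hourly_rate: float, total_need: int) -> float:
    # formula (3): annual cost = annual rate * total need of dispatchers
    return annual_rate(hourly_rate) * total_need

stations = {"Banská Bystrica": 8, "Radvaň": 4, "Vlkanová": 4, "Sliač kúpele": 4}
costs = {name: annual_cost(17.0, need) for name, need in stations.items()}
print(costs)                # 1 191 360 € for Banská Bystrica, 595 680 € for each of the others
print(sum(costs.values()))  # 2 978 400 € in total, as in Table 5
```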

Table 5 shows the annual costs of dispatchers at each railway station on the line.

Table 5. Current status of annual costs of transport service employees

Railway station | Annual costs for dispatchers
Banská Bystrica | 1 191 360 €
Radvaň | 595 680 €
Vlkanová | 595 680 €
Sliač kúpele | 595 680 €
Σ | 2 978 400 €

Source: Authors


The highest costs for dispatchers are at the Banská Bystrica railway station, due to its double staffing compared with the other railway stations on the track section.

Staff costs following the introduction of a remote-controlled line

A fundamental change lies in the staffing of the individual transport points. While previously every station was staffed by at least one dispatcher, after centralization only the centralized station, in this case Banská Bystrica, is staffed by dispatchers.

Following the introduction of centralized dispatching, railway transport will be organized as follows:

• station Banská Bystrica, station Kostiviarska, station Uľanka – 1 dispatcher,

• station Radvaň, station Vlkanová, station Sliač kúpele – 1 dispatcher.

Emergency dispatchers (paid at the switchman wage rate) will be available 24 hours a day at the other stations, taking turns every twelve hours. They will be on hand at the railway stations in case of a failure of the station systems or the line signalling equipment. Table 6 shows the staffing of dispatchers and the shift need after the introduction of dispatching centralization.

Table 6. Operational need for transport staff at dispatch control

Railway station | Number of dispatchers per shift | Total need of dispatchers
Banská Bystrica | 2 | 8
Radvaň | 0 | 0
Vlkanová | 0 | 0
Sliač kúpele | 0 | 0
Odb. Zvolen mesto | - | -

Source: Authors

At the Radvaň, Vlkanová and Sliač kúpele stations there will always be one emergency dispatcher on duty, so the total need will be four emergency dispatchers per station. Table 7 provides the basis for the cost calculation.

Table 7. Data for direct cost calculation after optimization

Railway station | Working hours of dispatchers | Hourly rate of dispatchers | Annual rate of dispatchers
Banská Bystrica | 12 | 17 € | 148 920 €
Radvaň | 12 | 14 € | 122 640 €
Vlkanová | 12 | 14 € | 122 640 €
Sliač kúpele | 12 | 14 € | 122 640 €
Odb. Zvolen mesto | - | - | -

Source: Authors

The calculations are carried out according to the same formulas, but with different rates for remote line

dispatchers and emergency dispatchers. Table 8 calculates staff costs after optimization.


Table 8. Annual costs of transport service employees after optimization

Railway station | Annual costs for dispatchers
Banská Bystrica | 1 191 360 €
Radvaň | 490 560 €
Vlkanová | 490 560 €
Sliač kúpele | 490 560 €
Σ | 2 663 040 €

Source: Authors

At first glance it is obvious that the costs are lower due to the different station occupancy. Figure 4

shows the amount of savings achieved by cost reduction due to dispatching centralization.

Figure 4. Cost savings

Source: Authors

After dispatching centralization the total annual saving in labour costs for operating employees is 315 360 € on the 20 km long track. As the signalling equipment for remote line control is already prepared at the Banská Bystrica railway station, the additional costs of building the dispatching apparatus will not be extensive.

Conclusion

Dispatching centralization is the path followed by all developed railway administrations worldwide. In addition to increasing throughput and safety, the project also has an economic side: dispatching centralization can greatly help reduce operating personnel costs, thus saving considerable money for railway administrations. The aim of the article was to analyse the current state of dispatching centralization on the ŽSR network with a link to human resources and to point out possible reductions in personnel costs. It can be stated that this goal has been met.

References

[1] Dolinayová, A. & Nedeliaková, E., 2015. Controlling v železničnej doprave. Prvý ed. Bratislava:

DOLIS s. r. o..

[2] Dopravný úrad, 2019. Podmienky používania železničnej siete 2020. [Online] Available at:

https://www.zsr.sk/files/dopravcovia/zeleznicna-infrastruktura/podmienky-pouzivania-zel-

infrastruktury/podmienky-pouzivania-zel-siete-2020/priloha3_3_1_1-

kategoriatrati_pre_ucely_spoplatnovania-01_01_2019.pdf [Accessed 24 December 2019].

[3] Kupkovič, M. 1999. Náklady podniku, komplexný pohľad na náklady. Prvý ed. Bratislava:

Sprint.



[4] Majerčák, J. Gašparík, J. & Blaho, P., 2015. Železničná dopravná predvázka: Technológia

železničních stanic. Prvý ed. Žilina: EDIS – Vydavateľstvo Žilinskej univerzity v Žiline.

[5] Míka, V., 2005. Mikroekomómia: Vybrané přednášky z mikroekonómie a podnikovej ekonomiky

pre študentov bezpečnostního manažmentu. [Online] Available at:

http://fsi.uniza.sk/kkm/files/publikacie/mie/mie_07.pdf [Accessed 18 December 2019].

[6] Odbor stratégie a vonkajších vzťahov GR ŽSR, 2019. Postup zavádzania centier riadenia

dopravy a diaľkovo ovládaných tratí na sieti ŽSR. [Online] Available at:

http://www.betamont.sk/userfiles/editor/files/08_ZSR.pdf [Accessed 19 December 2019].

[7] Řezníček, B., 1982. Ekonomika železničnej dopravy. Prvý ed. Bratislava: Vydavateľstvo

technickej a ekonomickej literatúry ALFA.

[8] Štetka, P., 2014. Mzdové náklady. [Online] Available at:

https://peterstetka.wordpress.com/2014/10/03/mzdove-naklady/ [Accessed 18 December 2019].

[9] Šutyová, Z., 2019. Hrubá a čistá mzda od 1. 1. 2020. [Online] Available at:

https://www.podnikajte.sk/socialne-a-zdravotne-odvody/hruba-cista-minimalna-mzda-2020-dan-

odvody [Accessed 18 December 2019].

[10] VLAKY.NET, 2004. ŽSR 170: Vrútky – Zvolen. [Online] Available at:

https://www.vlaky.net/trate/47/zsr-170-vrutky-zvolen/ [Accessed 24 December 2019].

[11] Železnice Slovenskej republiky, 2012. Kalkulácia nákladov a tvorba cien ŽSR. Bratislava:

Odbor 330 GR ŽSR.


DIGITAL TRANSFORMATION AND BUSINESS PROCESS MANAGEMENT

IN CREATIVE INDUSTRIES: THE CASE OF FILM PRODUCTION PROCESS

Tran Van Hai Trieu1

1Faculty of Management and Economics, Tomas Bata University in Zlín

nám. TGM 5555, 760 01 Zlín, Czech Republic

e-mail: [email protected]

Abstract

Business process management is an approach to modelling, analysing and improving business processes with the aim of performance enhancement, cost reduction and risk management. Some business process management systems, such as total quality management and business process reengineering, are management tools that help organizations increase business competitiveness; moreover, they help to achieve better system performance, for instance higher profit, quicker response and better service for customers using the services or products. In particular, digitalization and digital transformation, driven by the Industry 4.0 trend, have contributed to changes in all aspects of life, culture, the economy and society. For this reason, the purpose of the paper is to analyse business process management in the case of the film production process, as well as to study how the Industry 4.0 trend and digital transformation affect business process management in the creative industry.

Keywords

Industry 4.0, audiovisual, digital transformation, business process management, creative industry.

JEL Classification

M10, O30

Introduction

The explosion of Industry 4.0 in the 21st century brings many technological innovations, such as Artificial Intelligence (AI), Big Data, the Internet of Things (IoT), cloud computing and blockchain (Imran et al., 2018), and it plays an important role as the backbone network that integrates physical objects, human actors, production lines, intelligent machines and processes across organizational boundaries (Schumacher et al., 2016). One of the most important implications of Industry 4.0 is the need to apply new technologies for digitalization and digital transformation in any field, including the creative industry, with applications such as social media, mobile devices, analytics and embedded devices, in order to improve business performance through customer experience enhancement, improved business activities and the creation of new business models (Krizanic et al., 2019). Moreover, from the enterprise perspective, digital transformation is considered an organizational transformation towards technology platforms, for instance data analytics, cloud, mobile and social media platforms (Nwankpa and Roumani, 2016). Besides, business process management is a systematic approach to identify, map, document, design, implement, measure and control business processes, and it embraces increasing IT support to improve, innovate and manage processes thoroughly, determining business results, creating customer value and thus achieving the business goals with greater flexibility (Hitpass and Astudillo, 2019). Therefore, business process management in the creative industry, such as film production, is very important, and it needs to digitize and automate business process workflows to support the transparent interoperation of product or service vendors, thereby achieving business results, creating customer value and reaching the business goals with greater flexibility.

Theoretical background

Industry 4.0

The phrase "Revolution of industry 4.0" is taking place in many countries around the world and it is

discussed in many social networks. The study of Imran et al. (2018) showed that there were four stages


of the industrial revolution: the first took place in the 18th century with steam-based machines, the second in the 19th and 20th centuries with electrical energy-based mass production, and the third at the end of the 20th century with computer- and internet-based knowledge. The fourth, in the 21st century, has taken place in the era of Artificial Intelligence (AI), Big Data, the Internet of Things (IoT), cloud computing and blockchain. Schumacher et al. (2016) likewise described Industry 4.0 as the internet and assistive technologies, such as embedded systems, playing an important role as the backbone network that integrates physical objects, human actors, production lines, intelligent machines and processes across organizational boundaries to form a new kind of intelligent, networked and agile value chain. Besides, Haseeb et al. (2019) identified Industry 4.0 features that include cloud computing, augmented reality, multilevel customer interaction, advanced algorithms with big data, smart sensors, mobile devices, IoT platforms, location detection, advanced human-machine interfaces, and 3D printing.

Digital Transformation

Along with the development of Industry 4.0, the term digital transformation is mentioned frequently and many related terms and concepts exist. Nwankpa and Roumani (2016) described it as change and transformation driven by technology platforms; from the enterprise perspective, digital transformation is considered an organizational transformation towards technology platforms such as data analytics, cloud, mobile and social media platforms. Krizanic et al. (2019) defined digital transformation as the application of digital technologies, namely social media, mobile devices, analytics or embedded devices, to improve business performance through customer experience enhancement, improved business activities and the creation of new business models. Schallmo et al. (2017) proposed a definition of digital transformation that includes business and customer-related elements across all value chain segments and the application of new technologies for the extraction, exchange and analysis of data and the conversion of information for use, evaluation and decision making in business operations. Besides, digital transformation is relevant to business models, processes, relationships and products, and helps to increase performance and the scope of company operations.

Creative Industry

In this era, creativity is considered a key factor in the knowledge economy; it drives creative and technological change and thereby contributes to competitive advantage for businesses and countries, and the transformation of creative ideas has increased the output of tangible products and intangible services. Martinaitytė and Kregždaitė (2015) stated that the creative industry plays an extremely important role in economic development and social security, affecting macroeconomic results through GDP, technological performance, employment, personal income, unemployment, interest rates, and related welfare programs.


Figure 1: Classification of creative industries. Source: UNCTAD (2008, p.13)

In particular, economic growth was influenced by the creative industry through economic, social, technological, and scientific channels, and the creative industry itself was shaped by factors related to art, social values, culture, and local economic welfare. The creative industry is in fact relatively new, emerging in the 20th century. UNCTAD (2008, p.13) classified it into four broad groups, namely heritage, arts, media, and functional creations, which are divided into nine subgroups: traditional cultural expressions (arts and crafts, festivals and celebrations), cultural sites (archaeological sites, museums, libraries, exhibitions), visual arts (painting, sculpture, photography, and antiques), performing arts (live music, theatre, dance, opera, circus, puppetry), publishing and printed media (books, press and other publications), audiovisuals (film, television, radio and other broadcasting), design (interior, graphic, fashion, jewelry, toys), creative services (architectural, advertising, cultural and recreational, creative research and development, digital and other related creative services), and new media (software, video games, and digitalized creative content).

In another view, Müller et al. (2009) separated the creative industry into six core sectors: content (film, games, journalism, authors, music, performing arts, photography and sound studios), design (arts and crafts, design and fashion, graphic design, engineering design, and web design), software (programming and computer services, excluding web design and computer games), architecture (architecture including landscaping and urban planning), advertising (planning, creating and running advertising campaigns, public relations management, market research, advertising services), and publishing (publishing of books, newspapers and other printed matter, including printing services). From the perspective of Mangematin et al. (2014), music, movies and videos, publishing, video games, and television within the creative industry have been transformed by digitization. Digital technology has affected not only the dissemination, circulation, and storage of content but has also changed the way viewers choose what to watch. The success of musicians and artists is now gauged by the number of views on social video platforms such as YouTube and other channels that allow users to participate heavily in content development before publication.

Business Process Management

Business process management was described by Paschek et al. (2017) as a management concept for controlling, adapting, and optimizing business processes. Drawing on the definition of the European Association of Business Process Management (EAPM), they defined it as "a systematic approach, to capture, shape, execute, document, measure, monitor and steering automatic and non-automatic processes to reach coordinated and sustainable company targets", and they analysed how digital transformation affects business process management when methods such as machine learning or artificial intelligence are used. Business process management emerged as the successor to the total quality management (TQM) concept of the 1980s and business process reengineering


(BPR) of the 1990s; following BPR, many IT systems such as enterprise resource planning (ERP) and customer relationship management (CRM) gained organizational focus (Brocke and Sinnl, 2011). Utama and Ratnapuri (2018, p.1002) set out seven rules of business process management: "1) major activities should be mapped and well-documented, 2) there are horizontal linkages between key activities, 3) Business process management should rely on documented procedure and system, 4) Business process management should measure an activity to assess the performance, 5) based on continuous approach, 6) Business process management has to represent the best practice, 7) Business process management is an approach for culture change in organization". Hitpass and Astudillo (2019) referred to business process management as a systematic approach to identify, map, document, design, implement, measure, and control business processes; it also embraces increasing IT support to improve, innovate, and manage processes end to end, thereby determining business results, creating customer value, and achieving business goals with greater flexibility.

Business process management of film production in the creative industry

In managing creativity-intensive processes, two main perspectives are distinguished: task-level (or activity-level) analysis and process-level analysis. The task-level perspective pertains to how pockets of creativity are characterized and how they can be supported, while the process-level perspective takes a view of the overall business process; consequently, the existence of creative tasks within a business process significantly affects the process as a whole (Seidel and Rosemann, 2008). An example is the visual production process model that emerged from the data analysis of Becker et al. (2011), shown in Figure 2:

Figure 2: The format production process. Source: Becker et al. (2011, p.4)

In this process, the first phase is idea generation for future visual formats, carried out by producers, scriptwriters, or directors. Sometimes the commissioning broadcaster develops a new format and requires the production companies to implement the new concept for the visual format, for example a game show or a serial; the output of this phase is generally an exposé describing a rough cut of the idea in a few sentences. Once the broadcast network accepts the exposé, producers and scriptwriters develop a more detailed concept in the form of a script and screencast, including budget planning for the format production, and resubmit it until a second approval of the script and calculation is received from the broadcasting network. The visual production is then conducted by an established production team; this phase has to be strictly organized so that the right staff and cast, with the right equipment, are at the right place at the right time as planned in pre-production, the actual shooting of footage takes place in the production phase, and the footage is modified according to the directors' and producers' expectations in post-production. After the visual production is finished, the broadcasting network receives the format and carries out market research, TV program planning, marketing activities, and the actual broadcasting of the format (Becker et al., 2011). In addition, Seidel et al. (2007) referred to the prepared film for edit process model, which has a predetermined structure, shown in Figure 3:


Figure 3: The prepared film for edit process. Source: Seidel et al. (2007, p.524)

Although the prepared film for edit process model is an example of a relatively simple static model, there are still many points in the process where decisions or choices must be made about which branch of the model to execute based on the conditions of a particular instance; that is, any flexibility has to be incorporated into the process control flow as explicit conditional branches, mixing control flow with business process logic (Seidel et al., 2007).
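To make the idea of explicit conditional branches in a digitized process control flow more concrete, the following is a minimal, purely illustrative Python sketch of a simplified prepared-film-for-edit workflow; the step names, conditions, and instance attributes are invented assumptions and are not taken from Seidel et al. (2007).

```python
# Illustrative sketch only: a simplified, digitized control flow with explicit
# conditional branches, loosely inspired by the static process model discussed above.
# All step names and instance attributes are hypothetical.
def prepare_film_for_edit(instance):
    steps = ["ingest footage"]
    # Explicit conditional branch: which path is executed depends on the
    # conditions of the particular process instance.
    if instance.get("source_format") == "film":
        steps.append("scan film reels")
    else:
        steps.append("transfer digital files")
    if instance.get("needs_sound_sync"):
        steps.append("synchronize sound and picture")
    steps.append("create edit-ready copies")
    return steps

if __name__ == "__main__":
    # One concrete process instance with hypothetical attributes.
    example = {"source_format": "film", "needs_sound_sync": True}
    for step in prepare_film_for_edit(example):
        print(step)
```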

Finally, the two examples above related to the business process management of film production show that many challenges arise from the existence of creative tasks within business processes, namely allocating resources (task-level, process-level), enhancing creativity (task-level), managing creative risks (task-level, process-level), and enhancing process performance (process-level). Regarding resource allocation, creative tasks are resource- and time-intensive, and the process owner has to decide what budget and equipment, and which creative individuals, should be allocated to which task. Regarding enhancing creativity, the process owner wants to improve the quality of the creative output as the core output of the task, through generating new ideas, evaluating alternative proposals, or running a selection process. Managing creative risks matters because creative tasks are inherently connected with a high variance of possible outcomes: creative work is original and produces novel ideas and solutions, which can lead to unwanted consequences such as losing control of the process (losing control of time and budget), low product quality (leading to customer dissatisfaction), and a lack of external compliance (which could lead to a loss of reputation or even to lawsuits). This is especially relevant in the film industry, where the customer is often unable to specify the requirements and a visual effects studio, for instance, provides a set of iterative solutions to get closer to the actual requirements. The company also has to control time and budget and comply with external requirements such as governmental policies and legal requirements; hence the identification of creative tasks and their attributes within a process is the prerequisite for successfully implementing risk management strategies. Regarding process performance, creativity-intensive processes are characterized by a high demand for flexibility, so conventional process automation approaches such as workflow management, or even more sophisticated approaches such as exception handling or evolutionary workflow solutions, are not appropriate. In practice, processes include both well-structured parts and pockets of creativity with no obvious structure at all; identifying and better understanding these pockets of creativity allows an IT solution to be designed that provides a maximum level of automation where it is suitable (Seidel and Rosemann, 2008).


How digital transformation impacts business process management in creative industries

Digital transformation and the 4th wave of business process management

Each organization has its own business processes that can be specified in terms of the goals it wants to achieve, but because process efficiency, quality, and agility are keys to business success, every organization has to invest in the quality of business process management. Digital transformation relies on the creation of integrated, stable, and reliable processes and structured data, which are further drivers of flexibility and agility, as well as on the integration of various technological innovations into the business and on automation and software-managed processes within business process management. Beyond the technological impact on process changes, two other impacts can be considered: (1) the impact of data and (2) the impact of human factors. During the digital transformation of organizations, these two impacts on business processes also have to be taken into consideration, since they generate three main areas of business process management trends in digitalization: (i) business process management influenced by data, (ii) business process management influenced by social factors, and (iii) business process management based on process cases (Vugec et al., 2018). In addition, Pihir (2019) analyzed three waves of process evolution and added a fourth wave, the digital transformation wave of business process management (refer to Table 1).

Table 1: Business process management and digital transformation across time:

Digital transformation as a new business process management wave.

Industrial Age (1750 - 1960s)
- Focus: Specialization of Labour; Task Productivity; Cost Reduction
- Business: Functional Hierarchies; Command and Control; Assembly Line
- Technology: Mechanization; Standardization; Record-keeping
- Tools/Enablers: Scientific Management; PDCA Improvement Cycle; Financial Modeling

Informational Age

1st Wave - Process Improvement (1970s - 1980s)
- Focus: Quality Management; Continuous Flow; Task Efficiency
- Business: Multi-Industry Enterprises; Line of Business; Organization Mergers & Acquisitions
- Technology: Computerized Automation; Management Information Systems; MRP
- Tools/Enablers: TQM; Statistical Process Control; Process Improvement Methods

2nd Wave - Process Reengineering (1990s)
- Focus: Process Innovation; "Best Practices"; Better, Faster, Cheaper
- Business: Business via the Internet; Flat Organization; End-to-end Processes; Value Propositions (Speed to Market, Customer Intimacy, Operational Excellence)
- Technology: Enterprise Architecture; ERP; CRM; Supply Chain Management
- Tools/Enablers: Activity Based Costing; Six Sigma; Buy vs. Build; Process Redesign/Reengineering Methods


3rd Wave - Business Process Management (2000 - 2015)
- Focus: Assessment, Adaptability & Agility; 24x7 Global Business; Continual Transformation
- Business: Networked Organization; Hyper Competition; Market Growth Driven; Process Effectiveness over Resource Efficiency; Organizational Effectiveness over Operation
- Technology: Enterprise Application Integration; Service Oriented Architecture; Performance Management Software; BPM Systems
- Tools/Enablers: Balanced Scorecard; Self Service & Personalization; Outsourcing, Co-Sourcing, In-Sourcing; BPM Methods

4th Wave - Digital Transformation (2015 +)
- Focus: Process/Product Innovation by Creative Use of New Technology; Using Disruptions as New Possibilities, not Problems
- Business: Added Value to Old Customers/Products; New Value through New Business Models; Radical Change Driven by Technology and a Shift in Mindset
- Technology: AI; Big Data; Cloud Computing; Data Analytics; Implantable Technologies; IoT; Smart Cities; 3D Printing; Driverless Cars; Robotics; Blockchain; Sharing Economy
- Tools/Enablers: Process Oriented Applications (POA); Intelligent BPMS Systems; Software as a Service; New Digital Transformation Methods; New Digital Transformation Tools

Source: Pihir (2019, p.358)

Business process management capabilities in the digital age

Digitalization refers to the penetration of the economy and society by digital technologies, a global phenomenon leading to an opportunity-rich, hyper-connected, fast-moving, and highly competitive environment. For instance, social and mobile technologies (e.g., social media and social collaboration platforms) help people communicate and emancipate work from time and location, and by equipping physical objects with sensors, actuators, computing power, and connectivity, the internet of things boosts the fusion of the physical and digital worlds. When combined with the potential of blockchain-empowered solutions, it enables novel value exchanges among individuals, businesses, and smart things, reduces the distance between customers and companies, and grants access to previously unexplored data sources. Further, data analytics, including the latest advances in cognitive technologies, enables capitalizing on data in a diagnostic, predictive, and prescriptive manner, building the foundation for data-driven business models, the automation of unstructured tasks, and natural interaction between humans and machines (e.g., social robotics), while 3D/4D printing disrupts supply chains and value networks by enabling highly decentralized, delayed production. Because digital technologies enable previously unimaginable business processes, digitalization is a true game-changer for business process management, posing manifold challenges and opportunities. In an environment characterized by advanced process automation capabilities and new digital process design opportunities, a purely reactive and problem-driven approach to business process management is no longer sufficient. Instead, business process management needs to become ambidextrous, i.e., it must leverage digital technologies for both streamlining and innovating business processes. With the uptake of digitalization, business process management also has to continue emphasizing the people's perspective to ensure an optimized augmentation for employees and customers in the future of work and consumption (Kerpedzhiev et al., 2017).


Furthermore, Kerpedzhiev et al. (2017) presented 30 business process management capabilities for the digital age, structured along the established core elements of strategic alignment, governance, methods and IT, people, and culture (refer to Table 2). Strategic alignment refers to the continual alignment of organizational priorities and processes, enabling the achievement of business goals; governance establishes relevant and transparent accountability and decision-making processes to align rewards and guide actions; methods are the approaches and techniques supporting and enabling consistent process actions and outcomes, applied through information technology such as software, hardware, and information systems that enable and support business processes; people are the individuals and groups who continually enhance and apply their process-related expertise and knowledge; and culture comprises the collective values and beliefs that shape process-related attitudes and behaviors.

Table 2: Framework of business process management capabilities in the digital age.

- Strategic Alignment: Strategic Business Process Management Alignment; Strategic Process Alignment; Process Positioning; Process Customer and Stakeholder Alignment; Process Portfolio Management
- Governance: Contextual Business Process Management Governance; Contextual Process Governance; Process Architecture Governance; Process Data Governance; Roles and Responsibilities
- Methods/Information Technology: Process Context Management; Multi-purpose Process Design; Process Compliance Management; Advanced Process Automation; Process Architecture Management; Adaptive Process Automation; Process Data Analytics; Agile Process Improvement; Business Process Management Platform Integration; Transformational Process Improvement
- People: Business Process Management and Process Literacy; Data Literacy; Innovation Literacy; Customer Literacy; Digital Literacy
- Culture: Process Centricity; Evidence Centricity; Change Centricity; Customer Centricity; Employee Centricity

Source: Kerpedzhiev et al. (2017, p.2)

In the digital age, strategic alignment has to ensure transparency and benefits associated with business processes and business process management in line with the expectations of digitally savvy customers and other stakeholders. Business process management and process governance have to be highly contextual, with methods and tools chosen and customized to the organizational context, and process design has to fit multiple purposes, such as customer-centric, risk-aware, flexibility-aware, and mass-personalized processes; the same applies to process data analytics and business process management platform integration. Process automation is pushed to tackle unstructured tasks and to enable new forms of human-machine interaction by leveraging the opportunities of digital technologies such as cognitive automation, social robotics, and smart devices. With respect to people, knowledge of digital technology becomes essential, for instance data analytics, data privacy, data security techniques, innovation techniques, the digital economy, and digital business models, along with substantial knowledge of business process management methods and tools. Moreover, new process values and beliefs are required: business process management has to actively involve people in process decisions, forecast the effects of these decisions on their work lives, take customer feedback seriously, and grant people the sovereignty to make self-dependent decisions even when they encounter unprecedented challenges.


Conclusion

This paper reviews the overall issues of Industry 4.0 and the related concepts of artificial intelligence, big data, the internet of things, blockchain, cloud computing, augmented reality, multilevel customer interaction, advanced algorithms with big data, smart sensors, mobile devices, IoT platforms, location detection, advanced human-machine interfaces, and 3D printing, as well as the role of digital technology. Digital transformation affects business models, business processes, and business process management, helping to increase performance and the scope of company operations, especially for companies in the creative industry. The development of business process management is traced through concepts such as total quality management, business process reengineering, and enterprise resource planning, towards a systematic method to identify, map, document, design, implement, measure, and control business processes that embraces increasing IT support to improve, innovate, and manage processes end to end, thereby determining business results, creating customer value, and achieving business goals with greater flexibility.

Moreover, this paper analyzes the management of creativity-intensive processes from two main perspectives, task-level (activity-level) analysis and process-level analysis, illustrated by two examples from the business process management of film production: the visual production process model and the prepared film for edit process. Many challenges arise from the existence of creative tasks within business processes, such as allocating resources (task-level, process-level), enhancing creativity (task-level), managing creative risks (task-level, process-level), and enhancing process performance (process-level). Since every organization that wants to achieve process efficiency, quality, and agility for business success must invest in the quality of business process management through digital technology and digital transformation, the four waves of process evolution, with the fourth wave representing the digital transformation wave of business process management, and the 30 business process management capabilities structured along the established core elements of strategic alignment, governance, methods and IT, people, and culture in the digital age have been reviewed and analyzed. In particular, digital technology plays a key role in business process management through methods and tools that fit organizational contexts, and process automation is pushed to tackle unstructured tasks and enable new forms of human-machine interaction by leveraging the opportunities of digital technologies such as cognitive automation, social robotics, and smart devices.

To sum up, the development of the economy and society based on digital technologies, together with the globalization trend, has created an opportunity-rich, hyper-connected, fast-moving, and highly competitive environment driven by social and mobile technologies and by equipping physical objects with sensors, actuators, computing power, and connectivity. People play a central role in the digital age and are required to have a good knowledge of digital technology, such as data analytics, data privacy, data security techniques, innovation techniques, the digital economy, and digital business models, along with substantial knowledge of business process management.

References

[1] Becker, J., Bergener, K., Schwehm, M. O., and Voigt, M. (2011). CONFIRMING BPM THEORY

IN CREATIVE INDUSTRY CONTEXT – A CASE STUDY IN THE GERMAN TV INDUSTRY. 19th

European Conference on Information Systems, ECIS 2011.

[2] Brocke, J. vom, and Sinnl, T. (2011). Culture in business process management: a literature review.

Business Process Management Journal, 17(2), pp. 357–377. Available at <https://doi.org/10.1108/14637151111122383>.

[3] Haseeb, M., Hussain, H. I., Ślusarczyk, B., and Jermsittiparsert, K. (2019). Industry 4.0: A

solution towards technology challenges of sustainable business performance. Social Sciences,

8(5). Available at <https://doi.org/10.3390/socsci8050154>.

[4] Hitpass, B., and Astudillo, H. (2019). Editorial: Industry 4.0 Challenges for Business Process

Management and Electronic-Commerce. Journal of Theoretical and Applied Electronic


Commerce Research, 14(1), pp. I–III. Available at <https://doi.org/10.4067/S0718-

18762019000100101>.

[5] Imran, M., ul Hameed, W., and ul Haque, A. (2018). Influence of Industry 4.0 on the production

and service sectors in Pakistan: Evidence from textile and logistics industries. Social Sciences,

7(12), pp. 0–21. Available at <https://doi.org/10.3390/socsci7120246>.

[6] Kerpedzhiev, G., König, U., Röglinger, M., and Rosemann, M. (2017). Business Process

Management in the Digital Age. BPTrends, (July), pp. 1–6. Available at

<https://doi.org/10.13140/RG.2.2.12087.42408>.

[7] Krizanic, S., Sestanj-peric, T., and Tomicic-pupek, K. (2019). THE CHANGING ROLE OF ERP

AND CRM IN DIGITAL TRANSFORMATION. (May), pp. 23–24.

[8] Mangematin, V., Sapsed, J., and Schüßler, E. (2014). Disassembly and reassembly: An

introduction to the Special Issue on digital technology and creative industries. Technological

Forecasting and Social Change, 83(1), pp. 1–9. Available at

<https://doi.org/10.1016/j.techfore.2014.01.002>.

[9] Martinaitytė, E., and Kregždaitė, R. (2015). THE FACTORS OF CREATIVE INDUSTRIES

DEVELOPMENT IN NOWADAYS STAGE. Economics and Sociology, 8(1), pp. 55–70.

[10] Müller, K., Rammer, C., and Trüby, J. (2009). The role of creative industries in industrial

innovation. Innovation: Management, Policy and Practice, 11(2), pp. 148–168. Available at

<https://doi.org/10.5172/impp.11.2.148>.

[11] Nwankpa, J. K., and Roumani, Y. (2016). IT capability and digital transformation: A firm

performance perspective. 2016 International Conference on Information Systems, ICIS 2016, pp.

1–16.

[12] Paschek, D., Luminosu, C. T., and Draghici, A. (2017). Automated business process management-

in times of digital transformation using machine learning or artificial intelligence. MATEC Web

of Conferences, 121, pp. 1–8. Available at <https://doi.org/10.1051/matecconf/201712104007>.

[13] Pihir, I. (2019). BUSINESS PROCESS MANAGEMENT AND DIGITAL TRANSFORMATION.

41st International Scientific Conference on Economic and Social Development, (May), pp. 353–

360.

[14] Schallmo, D., Williams, C. A., and Boardman, L. (2017). Digital transformation of business

models-best practice, enablers, and roadmap. International Journal of Innovation Management,

21(8), pp. 1–17. Available at <https://doi.org/10.1142/S136391961740014X>.

[15] Schumacher, A., Erol, S., and Sihn, W. (2016). A Maturity Model for Assessing Industry 4.0

Readiness and Maturity of Manufacturing Enterprises. Procedia CIRP, 52, pp. 161–166. Available

at <https://doi.org/10.1016/j.procir.2016.07.040>.

[16] Seidel, S., Adams, M., ter Hofstede, A., and Rosemann, M. (2007). MODELLING AND

SUPPORTING PROCESSES IN CREATIVE ENVIRONMENTS. Proceedings 15th European

Conference on Information Systems, pp. 516–527.

[17] Seidel, S., and Rosemann, M. (2008). Creativity Management–The New Challenge for BPM.

BPTrends, (May), pp. 1–8.

[18] UNCTAD. (2008). Creative Economy Report 2008. The Challenge of Assessing the Creative

Economy: towards Informed Policy-making. In Harvard Business Review (Vol. 8).

[19] Utama, I. D., and Ratnapuri, C. I. (2018). Comparing the Business Process in Creative Industry

at Bandung. Proceedings of the International Conference on Industrial Engineering and Operations

Management, 2018-March, pp. 999–1004.


[20] Vugec, D. S., Stjepić, A., and Vidović, D. I. (2018). The Role of Business Process Management

in Driving Digital Transformation: Insurance Company Case Study. International Scholarly and

Scientific Research & Innovation, 12(9), pp. 730–736.

Contact information

Trieu Tran Van Hai, Ph.D student

University: Tomas Bata University in Zlín

Faculty: Management and Economics

E-mail: [email protected] or [email protected]

ORCID: 0000-0002-2532-8016


AVOIDANCE OF COST INCREASES DURING CHANGE MANAGEMENT

Rijad Trumic1

1Department of Economics, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

This work deals with the problem of cost increases in purchasing after a supplier nomination. The first

part of the paper explains the problem and the AHP method used to find a solution for this issue.

Furthermore, the TOPSIS method is performed to substantiate the AHP result. The expectation is that

the subject of overhead is ranked as the first priority and the catalogue of changes is ranked second

because it promises a large output with a manageable time schedule. In the last part, a summarized conclusion and an outlook for the dissertation are given.

Keywords

Analytical Hierarchy Process (AHP), TOPSIS, change catalogue, overhead, technical requirements.

Introduction

Achieving the best costs and best prices from the nomination of a supplier, over the serial production run through to aftersales, is one of our goals.

Unfortunately, in reality, good nomination results are often forfeited because of changes in the

component development process until the "start of production" (SOP).

Often, we commit ourselves to a supplier over a period of up to 25 years at an early stage and with relatively low maturity of the component. It is therefore undisputed that the nomination has the largest impact on costs during the whole process. The changes after the nomination, however, are also a significant cost driver for OEMs, and they often have to be negotiated in a single-source environment. Change requests are never entirely avoidable in a vehicle development process, but the efforts in concept plausibility must certainly be intensified and the quality of the specifications improved.

All car manufacturers nominate suppliers for vehicle components many years before the SOP. This means that the technical status of these components at procurement is not the same as at the SOP, because the components undergo technical changes during the development phase after the nomination. However, since the suppliers have been defined in this phase and the supply relationship has been contractually fixed, the changes are in many cases implemented and followed by higher prices from the suppliers. Car manufacturers very often have no option to negotiate real costs or to withdraw from the contract, as the development has already been performed and security of supply is the first priority. In the end, the high costs are very often accepted without any negotiating leverage.

Nevertheless, changes are an integral part of the development process: they are needed to continuously improve vehicles to market maturity, to develop them further, and to keep them competitive, while remaining competitive on the cost side despite the changes. The objective of this paper is to prioritize and focus cost reduction measures; it is a sub-aspect and integral part of the dissertation, which also deals with the tools derived from the measures found.

The main goal, cost savings in change management, is defined at level one. The cost savings are examined against six further aspects and decision criteria at level two, which are briefly explained below. The speed of implementation is a very important factor in the selection of the tools: it matters how fast each topic can be implemented in practice, how complex the topics are to prepare, and how much capacity must be used in terms of manpower and time.


Furthermore, it is also very important whether premises can be set for the respective topic. For example,

if premises are kept too coarse and generous in a change catalogue, the costs cannot be precisely defined.

A precise and detailed definition of the premises also enables a detailed statement of costs for a specific

measure.

It is also important to ask whether the know-how is available internally. The employees and their experience are essential. Employees from development and purchasing can bring topics from lessons learned into the tools; these topics then have to be evaluated by the supplier.

Finally, output is the last and also a very important criterion. It may be that everything can be implemented very fast, with low capacity and high know-how, but if the output is small or brings little savings, the focus is usually placed on another topic. In summary, the decision criteria are speed of implementation, complexity, capacity effort, setting of premises, internal know-how, and output.

In the context of this paper, three alternative tools were selected and evaluated qualitatively against the decision criteria. These three subject areas are: (1) a change catalogue, i.e. the pre-negotiation of possible future changes; (2) improvement of the technical requirements and specifications; and (3) a decrease of the overhead and profit surcharge, which raises the question of whether the "surcharge calculation" used by many OEMs is future-oriented.

The use of a cost catalogue after a nomination can be useful in many cases, also in order to negotiate changes better and more effectively. Increasing product requirements are an important part of a nomination. A good change catalogue is developed in close coordination between purchasing and development colleagues. In the second step, the contribution of a cost calculator is of course very important in order to calculate the measures or changes requested in the change catalogue. This leads to better negotiation results.

The aim of the dissertation is to develop a consistent method so that purchasing can pre-negotiate, before a supplier nomination, possible changes that may occur in the future. If these changes occur in reality, the conditions are already established and therefore valid. The conditions agreed with a supplier prior to the award are significantly better than conditions that can be achieved after a supplier's nomination; the cost differences in a product line are in the six-digit range. The method should also represent the relationship between different component families. To a certain extent, changes are pre-negotiable. A change catalogue contains technical increases and reductions as well as commercial issues, such as changes in the volume of the components during the project, premises for the nomination, relocations, raw materials, or currency issues. A detailed elaboration of the change catalogue will be developed and presented in the context of the dissertation.

Improving the quality of the specifications can avoid many changes in the series development, so the

catalogue of changes can be superfluous or greatly reduced. However, this improvement is very often

not possible in the early phase or at the time of nomination because the requirements for the component

and the development goals are often unknown. For example, the requirements and premises are often

kept very general at the time of nomination, so they are not defined more precisely. The concept and

development maturity is simply not given at the time of nomination, so changes have to occur in most

cases. On the other hand, changes are also deliberately decided so that OEMs stay competitive or to get

the innovation and technological leadership. The requirement specifications describe the requirements

for the component in a minimal form, which is required as a function in the vehicle (in order to also

achieve the lowest costs during a nomination) and the change catalogue describes the possible extensions

with the additional costs. Too generously designed specifications lead to higher costs during the

nomination. A reduction of the technical requirements in the specifications after a nomination does not

lead to the desired material cost savings, in contrast to a reduction made before a nomination. This means that a mature specification sheet in combination with a good and detailed change catalogue is the optimal solution.

The third main aspect in avoiding costs in series development, and even more importantly in series delivery, is a decrease of overhead and profit surcharges. Many OEMs use a surcharge calculation as a calculation base. The calculation uses the bottom-up approach to calculate the cost components and then adds the overhead and profit surcharges as a percentage of the material and production costs.


This is primarily determined during the nomination and agreed with the supplier. This means that the

supplier confirms that his business case, with a certain volume, a certain price and the stated overhead

and profit, has been accepted. This also results in the profit and overhead as a sum in EUR (not in %)

over the duration of a project. However, there are often volume changes in the project duration compared

to the nomination status. Nevertheless, the percentage surcharges are still added to every single

component that the OEM buys, even though the supplier agreed to the lower sum x during nomination.

This gives the supplier additional pure profit, which he did not expect and which a supplier does not

have to make any additional effort for (especially since most OEMs pay the investment separately). It

is therefore sufficient if the supplier gets an additional profit by the increase of volume, which may also

finance an additional investment required to produce the higher volume. Currently, this approach is

given little attention and it is quite easy and fast to implement. This type of surcharge has to be frozen

during the nomination. This means the supplier needs to point out the overhead and profit in their offer

as an absolute number in EUR which is then paid over the nominated quantity as a surcharge. As soon

as the volume changes, the term of payment of the apportionments also changes, in other words, the

apportionment is canceled earlier than at the EOP (End of Production). Instead, many suppliers are

overpaid till the EOP and very often the OEMs have not even noticed this phenomenon. Figure 2 shows

the problem of costs increasing hierarchically. The goal is noted on the top level. The second level lists

the criteria that are important. The third level shows the individual alternatives or tools that can be

considered. The AHP is also well suited to solving the problem because the assessment is quantitative

in nature.
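Before turning to the method, the overhead and profit effect described above can be illustrated with a small numeric sketch. All figures below are hypothetical and only show the difference between a percentage surcharge paid on every part actually bought and an absolute overhead and profit sum frozen at nomination; they are not data from this work.

```python
# Hypothetical example: unit cost, surcharge rate, and volumes are invented for illustration.
unit_cost = 100.0          # material + production cost per part (EUR)
surcharge_rate = 0.10      # overhead + profit surcharge agreed at nomination (10 %)
nominated_volume = 1_000_000
actual_volume = 1_300_000  # volume grows during the project

# Percentage logic: the surcharge is paid on every part actually bought.
percentage_overhead = unit_cost * surcharge_rate * actual_volume

# Frozen logic: the absolute sum accepted at nomination is all the supplier receives.
frozen_overhead = unit_cost * surcharge_rate * nominated_volume

extra_profit = percentage_overhead - frozen_overhead
print(f"Overhead/profit with percentage surcharge: {percentage_overhead:,.0f} EUR")
print(f"Overhead/profit frozen at nomination:      {frozen_overhead:,.0f} EUR")
print(f"Unplanned extra profit for the supplier:   {extra_profit:,.0f} EUR")
```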

Methodology and Data

The Analytical Hierarchy Process (AHP) presented by SAATY is a method for solving multi-criteria

decision problems3. In this method, a problem is broken down hierarchically into sub-problems, thus

reducing complexity. The sub-problems are solved step by step.

In the AHP procedure, the alternatives are evaluated with regard to their importance for a superordinate element or criterion and are compared in pairs4. The criteria are compared at one level in the hierarchy of the problem5. If a problem is divided into several levels, the pair comparisons are carried out first on the criteria level and then successively for the other criteria levels6. The results of all comparisons are presented in an evaluation matrix. The priorities are calculated through an iterative process with the so-called eigenvalue method, in three steps. In the first step, the evaluation matrix is squared7. In the second step, the matrix is normalized and the local weights are determined. In the third step, this process of squaring and normalization is repeated iteratively until the calculated weights deviate only marginally from the values determined from the previously calculated matrix8.

$$A = \begin{pmatrix} a_{11} & \cdots & a_{1j} & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{i1} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & a_{nj} & \cdots & a_{nn} \end{pmatrix}$$

with

$$\forall i = 1,\dots,n \;\; \forall j = 1,\dots,n: \; a_{ij} > 0, \qquad \forall i = j: \; a_{ij} = 1, \qquad \forall i = 1,\dots,n \;\; \forall j = 1,\dots,n: \; a_{ij} = a_{ji}^{-1}$$

3 See Saaty (2000); Saaty (2001)
4 See Saaty (1994b), p. 22; Saaty (2000), p. 105
5 See Saaty (1994b), p. 22; Saaty (2000), p. 105
6 See Saaty (1994b), p. 22; Saaty (2000), p. 105
7 Dolan JG, Isselhardt BJ, Cappuccio JD, Med Decis Mak. 1989;9(1):40–50
8 Saaty TL. The analytic hierarchy process: planning, priority setting, resource allocation. 2. Aufl. New York: McGraw-Hill; 1980. S. XIII, 287.

Each element of the comparison matrix shows how much more significant one element is than another with respect to the element of the overlying level9.

Table 1: Relative meaning of the criteria for the superior element

AHP Scale of Importance for comparison pair (aij) | Numeric Rating | Reciprocal (decimal)
Extreme Importance | 9 | 1/9 (0,111)
Very strong to extremely | 8 | 1/8 (0,125)
Very strong Importance | 7 | 1/7 (0,143)
Strongly to very strong | 6 | 1/6 (0,167)
Strong Importance | 5 | 1/5 (0,2)
Moderately to Strong | 4 | 1/4 (0,25)
Moderate Importance | 3 | 1/3 (0,333)
Equally to Moderately | 2 | 1/2 (0,5)
Equal Importance | 1 | 1 (1,0)

In order to determine the inconsistency between the pairs, SAATY developed a consistency index (C.I. = Consistency Index) and a consistency ratio (C.R. = Consistency Ratio)10. For perfectly consistent pairwise comparisons, the largest eigenvalue 𝜆𝑚𝑎𝑥 of the evaluation matrix is equal to the dimension n of the matrix; the consistency of the matrix can therefore be checked using 𝜆𝑚𝑎𝑥.

$$C.I. = \frac{\lambda_{max} - n}{n - 1}$$

The necessity of revising the evaluation matrix can be checked with the consistency ratio (C.R.). A C.R. value < 0,1 is permissible. SAATY indicates that for C.R. ≥ 0,1 a revision of the pair comparisons in the evaluation matrix should be made11.

$$C.R. = \frac{C.I.}{R.I.}$$

The Random Index (R.I.) is an average consistency index generated from random reciprocal matrices. The values determined by SAATY are shown in Table 2.

Table 2: Random Index12

n    | 2    | 3    | 4    | 5    | 6    | 7    | 8    | 9    | 10   | 11   | 12   | 13   | 14   | 15
R.I. | 0,00 | 0,52 | 0,89 | 1,11 | 1,25 | 1,35 | 1,40 | 1,45 | 1,49 | 1,51 | 1,54 | 1,56 | 1,57 | 1,58

9 See Saaty (1994b), p. 23
10 See Saaty (2000), p. 47; Saaty (2001), p. 80
11 See Saaty (1994b), p. 27; Saaty (2000), p. 84 f.; Saaty/Vargas (2001), p. 9.
12 See Saaty (2000), p. 65 and 84.
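The iterative procedure described above, squaring and normalizing the evaluation matrix and then checking C.I. and C.R. against the Random Index, can be sketched in a few lines of Python. This is only an illustrative reconstruction under the stated assumptions (numpy, the function names, and the convergence tolerance are my own choices), not code used in this work.

```python
# Minimal AHP sketch: iterative squaring/normalization of a pairwise comparison
# matrix, plus the consistency measures C.I. and C.R. (assumes n >= 3).
import numpy as np

# Random Index values from Table 2 (Saaty), indexed by the matrix dimension n.
RANDOM_INDEX = {3: 0.52, 4: 0.89, 5: 1.11, 6: 1.25, 7: 1.35, 8: 1.40, 9: 1.45,
                10: 1.49, 11: 1.51, 12: 1.54, 13: 1.56, 14: 1.57, 15: 1.58}

def ahp_weights(matrix, tol=1e-6, max_iter=50):
    """Approximate the priority vector by repeatedly squaring and normalizing."""
    a = np.array(matrix, dtype=float)
    weights = a.sum(axis=1) / a.sum()
    for _ in range(max_iter):
        a = a @ a                    # square the evaluation matrix
        a = a / a.sum()              # rescale to avoid numerical overflow
        new_weights = a.sum(axis=1)  # normalized row sums = local weights
        if np.abs(new_weights - weights).max() < tol:
            return new_weights
        weights = new_weights
    return weights

def consistency(matrix, weights):
    """Return lambda_max, C.I. and C.R. for a pairwise comparison matrix."""
    a = np.array(matrix, dtype=float)
    n = a.shape[0]
    lambda_max = float(np.mean((a @ weights) / weights))  # estimate of the largest eigenvalue
    ci = (lambda_max - n) / (n - 1)
    cr = ci / RANDOM_INDEX[n]
    return lambda_max, ci, cr
```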


Application of the AHP method

Input data and model quantification of criteria

Table 3: Evaluation matrix of all criteria in the second decision level

           | Speed | Complexity | Capacity | Premises | Know How | Output
Speed      | 1     | 0,2        | 0,2      | 0,125    | 0,111    | 0,111
Complexity | 5     | 1          | 3        | 0,167    | 0,111    | 0,111
Capacity   | 5     | 0,333      | 1        | 0,143    | 0,2      | 0,2
Premises   | 8     | 6          | 7        | 1        | 1        | 1
Know How   | 9     | 9          | 5        | 1        | 1        | 3
Output     | 9     | 9          | 5        | 1        | 0,333    | 1

The result for the evaluation matrix in table 3 is 𝜆𝑚𝑎𝑥 = 6,35.

Consistency-Index C.I. (N=6)

$$C.I. = \frac{6,35 - 6}{6 - 1} = 0,07$$

Calculation of the consistency rate C.R. (Random-Index R.I.= 1,25)

$$C.R. = \frac{0,07}{1,25} = 0,056 < 0,1$$

Since the C.R. value is 0,056, it is not necessary to revise the evaluation matrix.
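Applied to the criteria matrix from Table 3, a sketch like the one in the methodology section should reproduce values close to those reported here (𝜆𝑚𝑎𝑥 ≈ 6,35, C.R. ≈ 0,06); small deviations are possible depending on rounding and on how the eigenvalue is estimated. The snippet below assumes the ahp_weights and consistency helpers defined earlier are in scope.

```python
# Criteria matrix from Table 3 (rows/columns: Speed, Complexity, Capacity,
# Premises, Know How, Output); reuses ahp_weights/consistency from the sketch above.
criteria = [
    [1,   0.2,   0.2, 0.125, 0.111, 0.111],
    [5,   1,     3,   0.167, 0.111, 0.111],
    [5,   0.333, 1,   0.143, 0.2,   0.2  ],
    [8,   6,     7,   1,     1,     1    ],
    [9,   9,     5,   1,     1,     3    ],
    [9,   9,     5,   1,     0.333, 1    ],
]

w = ahp_weights(criteria)
lambda_max, ci, cr = consistency(criteria, w)
print("criteria weights:", np.round(w, 3))
print(f"lambda_max = {lambda_max:.2f}, C.I. = {ci:.3f}, C.R. = {cr:.3f}")
```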

Calculation of alternatives in terms of speed

Table 4: Evaluation matrix of all alternatives related to the speed of implementation

Speed                  | Change catalogue | Technical Requirements | Overhead
Change catalogue       | 1    | 4 | 0,142
Technical Requirements | 0,25 | 1 | 0,142
Overhead               | 7    | 7 | 1

The result for the evaluation matrix in Table 4 is 𝜆𝑚𝑎𝑥 = 3,086.

Consistency-Index C.I. (N=3)

$$C.I. = \frac{3,086 - 3}{3 - 1} = 0,043$$

Calculation of the consistency rate C.R. (Random-Index R.I.= 0,52)

$$C.R. = \frac{0,043}{0,52} = 0,082 < 0,1$$

Since the C.R. value is 0,082, it is not necessary to revise the evaluation matrix.


Calculation of alternatives in terms of complexity

Table 5: Evaluation matrix of all alternatives based on the complexity of the implementation

Complexity             | Change catalogue | Technical Requirements | Overhead
Change catalogue       | 1     | 0,25  | 3
Technical Requirements | 4     | 1     | 3
Overhead               | 0,333 | 0,333 | 1

The result for the evaluation matrix in Table 5 is 𝜆𝑚𝑎𝑥 = 3,015.

Consistency-Index C.I. (N=3)

$$C.I. = \frac{3,015 - 3}{3 - 1} = 0,0078$$

Calculation of the consistency rate C.R. (Random-Index R.I.= 0,52)

$$C.R. = \frac{0,0078}{0,52} = 0,0088 < 0,1$$

Since the C.R. value is 0,0088, it is not necessary to revise the evaluation matrix.

Calculation of alternatives in terms of capacity

Table 6: Evaluation matrix of all alternatives based on the capacity during implementation

Capacity               | Change catalogue | Technical Requirements | Overhead
Change catalogue       | 1   | 5 | 0,167
Technical Requirements | 0,2 | 1 | 0,167
Overhead               | 6   | 6 | 1

The result for the evaluation matrix in Table 6 is 𝜆𝑚𝑎𝑥 = 3,122.

Consistency-Index C.I. (N=3)

$$C.I. = \frac{3,122 - 3}{3 - 1} = 0,061$$

Calculation of the consistency rate C.R. (Random-Index R.I.= 0,52)

$$C.R. = \frac{0,061}{0,52} = 0,069 < 0,1$$

Since the C.R. value is 0,069, it is not necessary to revise the evaluation matrix.


Calculation of alternatives in terms of premises

Table 7: Evaluation matrix of all alternatives based on the premises during implementation

Premises               | Change catalogue | Technical Requirements | Overhead
Change catalogue       | 1    | 4 | 0,333
Technical Requirements | 0,25 | 1 | 0,167
Overhead               | 3    | 6 | 1

The result for the evaluation matrix in Table 7 is 𝜆𝑚𝑎𝑥 = 3,027.

Consistency-Index C.I. (N=3)

$$C.I. = \frac{3,027 - 3}{3 - 1} = 0,0139$$

Calculation of the consistency rate C.R. (Random-Index R.I.= 0,52)

$$C.R. = \frac{0,0139}{0,52} = 0,0156 < 0,1$$

Since the C.R. value is 0,0156, it is not necessary to revise the evaluation matrix.

Calculation of alternatives in terms of know-how

Table 8: Evaluation matrix of all alternatives related to implementation know-how

Know-how               | Change catalogue | Technical Requirements | Overhead
Change catalogue       | 1     | 3 | 0,333
Technical Requirements | 0,333 | 1 | 0,167
Overhead               | 3     | 6 | 1

The result for the evaluation matrix in Table 8 is 𝜆𝑚𝑎𝑥 = 3,009.

Consistency-Index C.I. (N=3)

$$C.I. = \frac{3,009 - 3}{3 - 1} = 0,0048$$

Calculation of the consistency rate C.R. (Random-Index R.I.= 0,52)

$$C.R. = \frac{0,0048}{0,52} = 0,0054 < 0,1$$

Since the C.R. value is 0,0054, it is not necessary to revise the evaluation matrix.


Calculation of alternatives in terms of output

Table 9: Evaluation matrix of all alternatives related to the output during implementation

Output                 | Change catalogue | Technical Requirements | Overhead
Change catalogue       | 1     | 3 | 0,5
Technical Requirements | 0,333 | 1 | 0,5
Overhead               | 2     | 2 | 1

The result for the evaluation matrix in Table 9 is 𝜆𝑚𝑎𝑥 = 3,11.

Consistency-Index C.I. (N=3)

$$C.I. = \frac{3,11 - 3}{3 - 1} = 0,055$$

Calculation of the consistency rate C.R. (Random-Index R.I.= 0,52)

$$C.R. = \frac{0,055}{0,52} = 0,0622 < 0,1$$

Since the C.R. value is 0,0622, it is not necessary to revise the evaluation matrix.

Summary of results

Table 10: Ranking of results

Alternative            | Weights | Ranking
Change catalogue       | 0,286   | 2
Technical Requirements | 0,111   | 3
Overhead               | 0,601   | 1

In this summary, the importance or priorities are clearly visible. The top priority is overhead, followed by the change catalogue. In contrast, the subject of specifications is in last place, which also reflects reality.
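For completeness, the synthesis step that leads from the individual matrices to the global weights in Table 10 can be sketched as follows. This again reuses the ahp_weights helper and the criteria matrix defined earlier and is only an illustrative reconstruction; the resulting numbers may deviate slightly from the reported 0,286 / 0,111 / 0,601 depending on the eigenvector approximation used.

```python
# Local matrices from Tables 4-9, one per criterion (order: Speed, Complexity,
# Capacity, Premises, Know How, Output); alternative order in each matrix:
# change catalogue, technical requirements, overhead.
alternative_matrices = [
    [[1, 4, 0.142], [0.25, 1, 0.142], [7, 7, 1]],       # speed
    [[1, 0.25, 3], [4, 1, 3], [0.333, 0.333, 1]],        # complexity
    [[1, 5, 0.167], [0.2, 1, 0.167], [6, 6, 1]],         # capacity
    [[1, 4, 0.333], [0.25, 1, 0.167], [3, 6, 1]],        # premises
    [[1, 3, 0.333], [0.333, 1, 0.167], [3, 6, 1]],       # know-how
    [[1, 3, 0.5], [0.333, 1, 0.5], [2, 2, 1]],           # output
]

criteria_weights = ahp_weights(criteria)                  # weights from Table 3
local = np.array([ahp_weights(m) for m in alternative_matrices])
global_weights = criteria_weights @ local                 # weighted aggregation over criteria
for name, g in zip(["Change catalogue", "Technical Requirements", "Overhead"], global_weights):
    print(f"{name}: {g:.3f}")
```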

Conclusion

From experience, there was an expectation at the beginning of this work that this sequence would also be reflected beyond the qualitative assessment. The same priorities were obtained and confirmed using the AHP and TOPSIS methods. The top priority is clearly the overhead issue, followed by the cost catalogue in second place. Even after a sensitivity analysis, the overall ranking does not change, which indicates a very stable result. These two topics will be treated in more detail in the dissertation.
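The TOPSIS calculation itself is not reproduced in this paper, so the following is only a generic, hypothetical sketch of how a TOPSIS ranking of the same three alternatives against the six criteria could be set up; the decision matrix and the weights below are invented placeholders, not the study's data.

```python
# Generic TOPSIS sketch with invented placeholder data (not the study's figures).
import numpy as np

# Rows: change catalogue, technical requirements, overhead.
# Columns: speed, complexity, capacity, premises, know-how, output.
# All criteria are treated as benefit criteria in this toy example; in a real
# application, cost-type criteria would use the column minimum as the ideal.
decision_matrix = np.array([
    [6.0, 7.0, 6.0, 7.0, 7.0, 7.0],
    [3.0, 4.0, 3.0, 4.0, 4.0, 5.0],
    [8.0, 5.0, 8.0, 9.0, 9.0, 8.0],
])
weights = np.array([0.02, 0.06, 0.05, 0.28, 0.35, 0.24])  # placeholder criteria weights

norm = decision_matrix / np.sqrt((decision_matrix ** 2).sum(axis=0))  # vector normalization
weighted = norm * weights
ideal, anti_ideal = weighted.max(axis=0), weighted.min(axis=0)
d_plus = np.sqrt(((weighted - ideal) ** 2).sum(axis=1))
d_minus = np.sqrt(((weighted - anti_ideal) ** 2).sum(axis=1))
closeness = d_minus / (d_plus + d_minus)

for name, c in zip(["Change catalogue", "Technical requirements", "Overhead"], closeness):
    print(f"{name}: closeness = {c:.3f}")
```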

It will be interesting to see whether this result will be confirmed after the survey in the various purchasing departments; the tendency can already be derived from this work. However, a limitation in the application must also be considered. The methods can readily be implemented for a new project. In the case of series projects, where contracts have already been signed with the suppliers, implementation and negotiation become much more difficult because the framework conditions for the project have already been defined. Subsequent implementation is much harder, so the timing of applying the methods is essential.


As a perspective, it can be stated that the result of the dissertation should take the form of a contract: the premises and specifications are included in the inquiry documents sent to the suppliers, and their content must be confirmed by the supplier.

Furthermore, it can be investigated which content, for example from the overhead, is variable and which is fixed. The fixed portion is independent of the volume of parts and should not be paid during change management, whereas the variable part of the costs should be paid. In this work no such distinction is made; the whole overhead is treated as fixed.

Implementation in a change catalogue is only partly possible for a new component (innovation parts), because there is no technical basis or experience of possible future changes. Only costs regarding volume effects can be determined.



BUSINESS STUDIES IN TIMES OF CHANGE (INDUSTRY 4.0)

Susann Wieczorek1

1Department of Business Sciences, Westsächsische Hochschule Zwickau,

Dr.-Friedrichs-Ring 2, Zwickau 08056, Germany

e-mail: [email protected]

Abstract

The fourth industrial revolution, or Industry 4.0 for short, brings changes for politics, society and companies. At the same time, digitisation does not stop short of the education sector, especially the universities. For this reason, the business studies programme is also undergoing a change of a kind never seen before. Two research phases (I. document analysis and II. expert interviews) made it possible to record the first innovations for the business studies programme. With the help of two sequentially conducted online surveys of companies and universities, the findings from the first two research phases could be substantiated. As a future-oriented requirement, digital competence was identified as a novelty, alongside professional competence, methodological competence and personal/social competence.

Keywords

Tertiary education, higher education, further development of business studies, business administration,

teaching, science, digital literacy

JEL Classification

A23, M20, M21

Introduction

Changing market requirements, caused by rapidly growing challenges induced by technology and automation, promote competition between companies (Casper-Hehne/Reiffenrath, 2017). A key driver for companies to be able to hold their own in the market is their innovative ability. This involves production improvements, new products and networked systems to meet customer needs. Suitable personnel are necessary for this innovative ability. This high level of market dynamics and flexibility requires managers who recognize opportunities and use their powers of judgment to find solutions (Kirsch/Picot, 2013). Social and economic debates also show that with the increasing use of digital technologies, the world of work is undergoing and will continue to undergo change (Hirsch-Kreinsen, 2017). Education is thus becoming an even more decisive driver of innovation.

The inevitable consequence of Industry 4.0 is the elimination of traditional occupations, but also the

emergence of new activities. For this reason, we have to deal with these changes already today. When

teaching in tertiary education, it is of course important to know what skills and abilities are expected of

working people in order to be prepared for the future. A large proportion of companies believe that universities do not sufficiently prepare students for working life, which is why the issue of employability is constantly viewed critically (Haberfellner/Sturm, 2012). It is precisely for this reason that universities must undergo an adaptation induced by Industry 4.0.

The aim of this paper is to identify the competencies that will become increasingly important and

necessary for business studies in the future, especially for science, teaching and practice, and to derive

recommendations for action.

Methodology

Industry 4.0 brings ground-breaking changes. Using the example of business administration studies,

essential requirements with regard to competences were examined more closely. Based on this research

focus, the present paper examines the research objective with regard to Industry 4.0: the requirements for university graduates, especially in business administration, in order to derive a new type of study curriculum.


With the data collected during the qualitative and quantitative research it was possible to use a mixed-methods research approach. Due to the limited literature available, an exploratory research approach was chosen. A document analysis (phase I) was launched (Mayring, 2016), which examined three studies on the subject of expectations of graduates. The following three studies were used. The Association of German Chambers of Industry and Commerce e. V. (DIHK) had already addressed the requirements for Bachelor and Master graduates in 2014. Around 2,000 companies took part in the survey, including companies from the manufacturing industry, other services and trade (Bauer, 2016). In autumn 2014, Graz University of Technology launched a two-stage survey with the aim of making an initial assessment of the current employment situation of university graduates. At this point it is assumed that the requirements of Austrian and German companies and graduates do not differ (Bauer/Sadei, 2015). In the study carried out by Lödermann/Scharrer at the University of Augsburg in summer 2009, a total of 1,789 companies from the Augsburg/Swabia region were surveyed. The response rate was 14 %, i.e. only 249 questionnaires could be evaluated. The survey focused on employability (Lödermann/Scharrer, 2010).

The findings were put into a category system. Based on these findings, semi-structured expert interviews

(phase II) were conducted with experts from companies and universities in Germany. With the help of

qualitative content analysis according to Mayring and Strauss' Grounded Theory (Mayring, 2016), these

interviews were evaluated. Subsequently, the findings from the document analysis and the expert

interviews contributed to the creation of two online questionnaires which were considered separately.

The two online questionnaires (phase III) were pre-tested after their preparation. The online

questionnaires were sent to German universities and companies. IBM SPSS Statistics Version 25 was

used to verify or falsify the hypotheses made.

Research phases I - III

After completion of the three research phases mentioned above, the following results could be stated. Using the example of business administration studies, essential aspects of digitisation were examined more closely. Two measures in particular are effective: stronger interdisciplinary cooperation across degree programme borders and the introduction of digitalisation modules, taking into account the level of competence and knowledge transfer.

Research phase I

Within the framework of the document analysis, the three studies presented in Section 2 (DIHK Business

Survey, Graz University of Technology and Augsburg Study), which form the basis for the category

system, are to be used. In this phase it should be possible to create an overview of the three studies.

Research phase II

The findings of the first research phase are the starting point for the semi-structured expert interviews.

The individual competencies of the graduates were examined more closely and transferred to the

business studies program. The results are to be confirmed or rejected with the help of ten interviews

each with experts from companies and universities. The interview guidelines were divided into five

topics:

1. Personal details: position, work experience, number of employees, industry
2. Industry 4.0: terms, future strategies, qualification measures, digitalisation specialists, previous changes in the company
3. Business graduate: relevance of competences, interest in digital focal points, recommendations for action, theses
4. CDO and disruption: relevance and influence on business studies
5. Participation in the working group


Research phase III

The findings of research phases I and II were to be transferred into a sequential online survey for

companies and universities, so that a fully structured written survey could be conducted. The

questionnaire was to contain the following points:

1. Introduction: topic incl. contact details of the author, target group
2. Sociodemographic data: activity, professional experience, sectors, degree of digitisation as well as university name, type of university, name of degree programme incl. degrees, etc.
3. Thematic catalogue of questions: Industry 4.0, skills of the business studies graduate, potential for improvement for business studies in the digital age, etc.
4. Closing: next steps, farewell

The questionnaire was created via the online platform www.umfrageonline.com and distributed to the two groups of participants, companies and universities. The questionnaire mainly asked closed questions. The online survey was designed to be answerable within ten minutes in order to keep the dropout rate low. If necessary, participants were to be reminded to complete the questionnaire in a follow-up action. The linchpin of this scientific work was to find out what the requirements for business studies are and what measures can be derived from them. In order to test the functionality of the online survey for the companies, it was made available to a test person in advance (pretest). As the survey changed only slightly for the universities, apart from socio-demographic information such as course of study and position in the university, no further pretest was conducted.

Results of research phases

On the basis of the expert interviews conducted in research phase II, the following hypotheses for the

online survey of enterprises could be established.

H1: The larger the company, the more important digital literacy is. (t-test and Spearman rank correlation analysis)
H2: The degree of digitalisation varies with company size. (One-way analysis of variance, ANOVA)
H3: The more personnel are involved in digitisation, the more important digitisation is for a company. (Spearman rank correlation analysis)
H4: The basic knowledge of a business master's degree is more important for professional life than technical specialisation. (Spearman rank correlation analysis)
H5: Comparison of the importance of methodological, technical, social and digital competence:
H5a: The methodological competence of business studies graduates is more important to employers than professional competence. (Spearman rank correlation analysis)
H5b: The methodological competence of business studies graduates is more important to employers than social competence. (Spearman rank correlation analysis)
H5c: The methodological competence of business administration graduates is more important to employers than digital competence. (Spearman rank correlation analysis)
H6: The degree of digitalisation of a company has an impact on the business requirements profile. (Spearman rank correlation analysis)
H6a: There is a correlation between the degree of digitisation and the requirement profile of bachelor's degree graduates. (Spearman rank correlation analysis)
H6b: There is a correlation between the degree of digitisation and the requirements profile of master's degree graduates. (Spearman rank correlation analysis)
H6c: There is a correlation between the degree of digitisation and professional competence. (Spearman rank correlation analysis)
H6d: There is a correlation between the degree of digitisation and methodological competence. (Spearman rank correlation analysis)
H6e: There is a correlation between the degree of digitisation and social competence. (Spearman rank correlation analysis)
H6f: There is a correlation between the level of digitisation and digital literacy. (Spearman rank correlation analysis)
H7: The higher the level of digitisation in a company, the greater the interest in exchanging information on digitisation with representatives of educational institutions. (Spearman rank correlation analysis)
H8: For companies, modular further training (modular studies, certificates, part-time studies) of employees is more important than attendance at a university. (Frequencies)

Each hypothesis was tested and either supported or rejected. From the respective results, the following statements could be derived for this scientific elaboration.
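As an illustration only (the original analysis was carried out in IBM SPSS Statistics 25, and the survey data are not public), the statistical methods listed above could be applied as follows in Python; the file name and column names are hypothetical assumptions.

import pandas as pd
from scipy import stats

# Hypothetical export of the company survey; column names are assumptions.
df = pd.read_csv("company_survey.csv")

# H1: company size vs. importance of digital literacy (Spearman rank correlation)
rho, p = stats.spearmanr(df["company_size"], df["digital_literacy_importance"])
print("H1: rho =", rho, "p =", p)

# H2: degree of digitalisation across company-size classes (one-way ANOVA)
groups = [g["digitalisation_degree"].values for _, g in df.groupby("size_class")]
f_stat, p_anova = stats.f_oneway(*groups)
print("H2: F =", f_stat, "p =", p_anova)

# H8: frequencies of preferred further-training formats
print(df["preferred_training_format"].value_counts())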

Science

The author refers to a triangular relationship in science, which is characterized by key figures,

methodology and basic IT understanding.

Key figures play an important role in the daily work of the business administration graduate. To

be able to explain the key figures correctly, the necessary expertise must be available.

In addition, the business administration graduate must have a basic understanding of IT. A sound

knowledge of the technologies used in the company is essential. Questions that business administration graduates should be able to answer are, for example, what a technology offers and what its opportunities and risks are. In this way, it can be rationally decided in the course of negotiations whether or not the technology presented will bring added value to the company.

In order to be able to handle the mentioned key figures and the basic understanding of IT

correctly, methodological competence is required. In the context of this work, methodological

competence is understood as the use of cause-effect chains, as well as the ability to solve

problems and to think economically.

The research approaches and developments of Industry 4.0 and digitisation mentioned in this paper are

difficult to foresee from today's perspective. The reasons for this lie in fast-moving technological change. Nevertheless, it remains to be noted that the trend is increasingly towards harmonising the

corporate business areas. In other words, all corporate divisions are now subject to digitisation. This is

supported by the use of technologies to make detailed statements for the present and future.

Business administration covers all the original tasks of a company, so that a wide range of research areas

are created. For this reason, it is important to research creative ideas and thus always be one step ahead.

At this point, suggestions for further research studies are mentioned, for example:

Does science, in terms of research, unite with business?

How will university professors do their research in 2030?

Can the Internet of Services (Facebook, Instagram, Twitter, YouTube) replace university

professors in the future?

Teaching

In this section, the contents to be taught in the business studies programme, learning formats and

methods, etc. are discussed. Following on from this, model curricula for business studies, for the degrees

'Bachelor of Science' and 'Master of Science', are used to provide a forward-looking orientation.

When designing the content of the course of study, great importance was attached to supplementing the

classic business administration modules with digitisation modules. The unchanged business

administration modules include: 'Controlling', 'Financing', 'Cost and Performance Accounting', 'Micro

and Macro Economics', 'Personnel', 'Organization' and 'Taxes'. The surveys helped to identify issues that


were considered particularly important. Thus, relevant digitisation modules or focal points for business

studies could be worked out: business analytics, AI, data mining, IT law & IT security, IT data protection

& ethics, AR & VR, agile projects, blockchain.

It would be conceivable to successively restructure an existing business studies course, i.e. to include

elective modules in the first step. Subsequently, these modules would be combined to form digitisation

priorities, and, in the final instance, a new course of study adapted to the above modules could be

designed and introduced. If a business studies programme does not yet exist at a university, such a course could be introduced directly in this form.

The desire for greater interdisciplinarity across the boundaries of study programmes was expressed by

all experts in universities and companies. The challenges posed by Industry 4.0 are so diverse and

complex that they can usually only be solved in an interdisciplinary manner and not just within a specific

discipline. This also broadens the knowledge horizon of all those involved.

In order to incorporate the digitisation modules into teaching, it is essential to break up the prevailing silo thinking created by departments or faculties. This means that complexes of topics or questions should be posed in an interdisciplinary way and thus lead to success. In this way,

many different perspectives can be incorporated. The students should be given the freedom to find their

own solutions. At the same time, the ability to communicate and work in a team is also strengthened in

this way. Knowledge labs offer a platform for new and agile topics. In order to strengthen or intensify

the handling of new technological developments, such teaching factories, in which several universities

join forces, can be a possible instrument. The testing of new technologies is the core objective in order

to implement a holistic way of thinking. Such cross-cutting modules allow students from other faculties

to work together on entrepreneurial and technological solutions. In this way, a close bond between

universities and companies is created and consolidated. The author envisages the introduction of knowledge labs especially at universities of applied sciences, as the focus there is more on practical application in companies.

With a revision of the previously known learning formats, such as classroom or frontal teaching, the

business studies programme can be made more sustainable. This is relevant and important both for the

continuing education sector from a business perspective and for the up-and-coming generation, the

digital natives. Hybrid learning formats for teachers and learners are conceivable, regardless of the time

and place component, to enable more flexible learning. Learning nuggets could be a good possibility in

the organisation of the course. These are small, concise learning units, which are generally intended to

impart knowledge in no longer than five minutes. Learning nuggets are associated with great learning success. The interviewees see great opportunities for universities to position themselves better,

especially in the modular further education possibilities, such as modular studies, part-time studies,

certificate courses. This continuing education programme follows the mindset of precise knowledge

transfer in small learning units, so that continuing education in enterprises can be further expanded and

thus the attractiveness of universities increased. These findings are accompanied by the answer to

another research question.

With a new curriculum 'BWL - digital management (B. Sc.)', a modern business administration course

of studies is to be offered. The aim of the course is to impart digital skills in addition to business

management skills. The digital influence mentioned at the beginning is new to this course of study. For

this reason, topics such as Business Intelligence, Data Management as well as Artificial Intelligence etc.

should not be missing. These modules are supported by security-related and legal aspects that a business economist must be familiar with. Using scientific and modern methods, it is possible to record and evaluate data correctly and to derive results logically. The vertical introduction of these digital modules makes it possible to build a solid foundation in new technologies.


Table 1. Business studies - digital management (B. Sc.)

Business modules | Digitization modules | Bachelor thesis
Basics of business administration, Mathematics I, Personnel management, External accounting | Basics of digitization, IT management and digital business models |
Taxation, Mathematics II, Macroeconomics, Internal accounting | IT security, Digital marketing and social media |
Controlling, Statistics I, Microeconomics, Cost and performance accounting | IT law and data protection, Enterprise resource planning, Business intelligence |
Financing, Statistics II, Economic policy, Corporate governance | Data management, Artificial intelligence, Modern methods |

Source: own elaboration

In addition, the business studies programme covers all important business management modules, including controlling, financing, human resources, taxation, etc. Furthermore, the university education will provide a higher level of detail in the modules mathematics, micro- and macroeconomics, internal and external accounting and statistics, as shown in Table 1. The acquisition of additional qualifications, e.g. MS Office and soft skills, should be possible at any time during the course of study. A compulsory semester abroad could round off the course of study.

The Master's degree programme 'Digital Management - Accounting, Controlling, Finance (M. Sc.)'

provides in-depth knowledge in the areas of 'Accounting', 'Controlling', 'Finance'. A major role is given

to statistical methods, as these will become increasingly important in the future. The standard repertoire

also includes 'Controlling', 'Management', 'Accounting', 'Corporate Governance' and 'Business Law' etc.

In addition, the selection consists of a pool of focus modules to choose from. Possible focus modules

could be, for example: 'international accounting standards', 'financial controlling', 'risk and compliance

management' and 'auditing'. During the course of studies, the focus is on methodological competence,

so that the subject areas of qualitative and quantitative research, such as 'Multivariate Statistics' and

'Predictive Analysis', can also be taught.

Table 2. Digital management – Accounting, Controlling, Finance (M. Sc.)

Business modules | Digitization modules | Master thesis
Multivariate statistics, Controlling systems, Strategic management, Business law | Business intelligence, Enterprise resource planning |
Accounting, Project controlling, Corporate management and corporate governance, Technology and innovation management | Business analytics, Data mining |
Scientific work | Multivariate statistics, predictive modeling |

Source: own elaboration


The two study profiles presented here have been expanded to include the field of activity of the Chief

Digital Officer (CDO), which is understood to mean the planning and control of digital processes and

their transformation. The author succeeded in fusing business management know-how and IT

understanding. With the presented sample study plans further answers to the research questions could

be found.

Practice

This paper has shown that Industry 4.0 is changing all our lives and has a significant impact on the economy, from government and business to science and education. One aim of this work was to increase

employability. This means enabling graduates to carry out the required activities in their professional

life more quickly. Conversely, the newly required course contents and the newly created sample

curricula based on new trends and current topics are intended to increase the graduates' know-how for

entrepreneurial use in practice and thus reduce the training period of the newly graduated.

The digital modules help graduates to gain a better understanding of Industry 4.0 and to solve modern problems. Business intelligence and artificial intelligence are particularly important for German and Slovakian SMEs, as they provide considerable added value over competitors. The reason is that companies store more and more data and want to understand and interpret it. The view is directed towards the

future. On the part of the universities, knowledge labs have been integrated into the model curricula,

which lead to better cooperation between practice and universities. Entrepreneurial topics can thus be

identified more quickly and taken into account in teaching. As a result, universities are becoming more

agile, which was a point of criticism from the experts. In addition, the study profile of business

administration studies was extended to the field of activity of CDOs.

With more flexible and modular further training options, it is possible to increase the range of further training on offer, from which companies also benefit, because companies are constantly trying to find new and interesting further training offers for their staff in order to gain a knowledge advantage over their competitors. It is important that employees can continue to perform their work despite further training. For this reason, part-time studies, distance learning, modular studies or certificates are suitable for the workforce. External lecturers can promote cooperation between practice and universities and provide new impulses.

Conclusion

The elaboration has shown that other authors have already devoted themselves to this topic. Universities are faced with the decision of what the right way to teach business administration is. This paper aims to provide guidelines and contribute to the discussion. However, the author's clear message is that change is necessary and important.

With research phases I to III it could be established that the four competences of technical, methodological, social and digital competence are of central importance; they serve as the starting point for the results. In the age of digitalisation, politics is increasingly challenged. Education is the pioneer of our future.

There is ample evidence that it is not yet possible to foresee what final changes Industry 4.0 will bring.

But one thing can be said for sure: today and tomorrow it is important to develop an understanding in

order to solve problems, no matter how good the expertise is. In this way, the required problem and

solution orientation, or rather the cause-and-effect principle, is achieved. Knowledge labs are suitable

as a possible tool or as a platform to try out unknown things. These are small laboratories in which new

ideas or solutions are developed with the help of IT; a kind of creative workshop.

Creativity opens up new avenues, so such a skill is essential in the digital age. Personal and social skills,

which include intuition, help in the decision-making process. The results also flow into the design of the

study programme in order to prepare students as well as possible for the labour market.

A central task is to ensure that universities are sensitised to bring about change as quickly as possible.

But there must also be a rethink in politics, so that new positions for professors with a changed profile

can be advertised.


References

[1] BAUER, U. 2016. Employability: Welche Kompetenzen fordern Unternehmen von TU-

Absolventen? 2016. DOI: 10.1007/978-3-658-12097-9_1.

[2] BAUER, U. - SADEI, C. 2015. Studie zur Employability der TU Graz Absolvent/inn/ en - 2015:

Ergebnisse einer Primärerhebung unter Arbeitgeber/inne/n und TU Graz Absolvent/inn/en.

Research Report. Technische Universität Graz. Graz 2015. DOI: 10.3217/978-3-85125-386-3.

[3] Casper-Hehne, H. - Reiffenrath, T. (2017): Higher education in global and local contexts: The Göttingen Model of Curricular Internationalisation. pp. 98. www.ssoar.info/ssoar/bitstream/handle/document/55106/ssoar-interculturej-2017-27/28-casper-hehne_et_al-Hochschulbildung_in_globalen_und_lokalen.pdf?sequence=1, [accessed 15.05.2019].

[4] Haberfellner, R. - Sturm, R. (2012) "Job opportunities in higher education": Longer-term employment trends of university graduates. www.econstor.eu/bitstream/10419/97929/1/787185000.pdf, [accessed 30.07.2019].

[5] Hirsch-Kreinsen, H. (2017). Arbeiten 4.0 – Qualifikationsentwicklung und Gestaltungsoptionen.

pp. 473. www.wiwi.tu-dortmund.de/wiwi/ts/de/forschung/veroeff/soz_arbeitspapiere/AP-SOZ-

38.pdf, [accessed 13.05.2019].

[6] Kirsch, W. - Picot, A. (2013). Business administration in the field of tension between

generalisation and specialisation: Wiesbaden: Gabler Publishing 2013. ISBN 3-663-0690-95.

[7] LÖDERMANN, A.-M. - SCHARRER, K. 2010. Beschäftigungsfähigkeit von

Universitätsabsolventen – Anforderungen und Kompetenzen aus Unternehmenssicht. Universität

Augsburg. Augsburg 2010. ISSN: 0171-645X.

[8] Mayring, P. (2016). Einführung in die qualitative Sozialforschung Eine Anleitung zu qualitativem

Denken. ISBN 978-3-407-25734-5.


RESEARCH ON THE IMPACT OF CHARACTERISTICS OF THE BOARD OF

DIRECTORS OF CHINESE APPLIANCE LISTED COMPANIES ON CORPORATE SOCIAL

RESPONSIBILITY

XIAOJUAN WU1

1Department of Economics, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail:[email protected]

Abstract

With increasingly serious environmental and social problems, more and more Chinese companies voluntarily balance, or are required to balance, social and environmental interests while pursuing economic benefits. Sustainable development has been reflected in all aspects of China's economic activity as the current goal of the Chinese economy. Corporate social responsibility (CSR) is a key indicator for measuring sustainable development and has become one of the most important references when investing in companies. Previous empirical research based on data collected from global companies found that the composition of the board of directors has a significant impact on sustainable development, but few articles have studied the situation of Chinese companies. This article takes Chinese home appliance listed companies as a sample to study the impact of board composition (independence, diversity and CEO duality) on CSR. The linear regression results show that only the diversity of board members has a statistically significant and negative impact on CSR, and there is no significant relationship between board independence or CEO duality and CSR.

Keywords

Corporate social responsibility, Corporate governance, Board of directors, Sustainable development

goals.

JEL Classification

M210 Business Economics.

Introduction

With the growing environmental problems, the Chinese government proposed an economic goal for

scientific development in 2003. In 2006, the Shenzhen Stock Exchange issued the “Guidelines for Social

Responsibility of Listed Companies", encouraging listed companies to establish corresponding social responsibility institutions in accordance with the guidelines, and advocating that listed companies also disclose their self-

assessed social responsibility reports and financial statements. In 2008, the Shanghai Stock Exchange

released the "Guidelines for Environmental Information Disclosure of Listed Companies", which

stipulates that listed companies should promptly disclose incidents related to environmental protection

that may affect stock prices. After more than ten years of development, the overall implementation of

Chinese corporate social responsibility (CSR) has shifted from the initial bystander stage to the starter

stage (Chinese Academy of Social Sciences, 2018). But the general situation is still not ideal. Exploring

which factors may affect the implementation of CSR has become one of the focuses of scholars. Recent

research has found that the characteristics of the board of directors (BOD) have a significant impact on

CSR (Chams and García-Blandon 2019; Naciti 2019). China's home appliance industry is one of the few

internationally competitive industries in China. What the current status of CSR of Chinese home appliance companies is, and which factors can affect it, are issues worth studying. To the best of the author's knowledge, few articles have examined the relationship between the characteristics of the board of directors of Chinese household electrical appliance companies and CSR. Therefore, this study attempts to

provide evidence supporting the nexus between BOD characteristics and implementation of CSR of

Chinese home appliance companies. With the help of regression techniques, the research question arises:

do the characteristics of BOD have any effect on CSR of Chinese home appliance companies?


The objective of the article is to analyze the impact of specific characteristics of board composition

(board independence, board diversity and CEO duality) on CSR of the Chinese home appliance

companies. Our research provides descriptive knowledge that links the implementation of CSR with

corporate governance.

The remainder of the study is presented as follows. The next section reviews the existing literature about

CSR (or sustainable development) and characteristics of BOD and develops the hypotheses. In the third

section, we present the sample, the data, and the applied methodology. The fourth section lays out the

empirical results. In the last section, we present our conclusion.

Literature Review

In 2014, the UN Member States followed a decision made at the Rio+20 Conference to launch a process

to develop a set of Sustainable Development Goals (SDGs) that will build upon the Millennium

Development Goals (MDGs). The SDGs recognize that companies play a crucial and decisive role as

they are the chief motivators of sustainable development. When companies face such new development

goals, they should satisfy the interests of stakeholders by maintaining balanced economic, social and

environmental development rather than just pursuing economic interests to satisfy the interests of

shareholders. The board of directors, as the highest decision-maker, plays a decisive role in setting and

implementing goals. For this reason, it is important to reconfigure the governance system that has been

tasked with defining and implementing Corporate Sustainability (CS) policies and strategies (Van

Marrewijk, 2003). The following section presents the recent literature about the relationship between

CSR (or sustainability) and three characteristics of the board -- board independence, board diversity and

CEO duality, which are our main research objects.

Board Independence

Based on the Stakeholders Theory, the independence of the BOD is expected to be positively connected with CSR, since independent directors are less subject to shareholder pressures. A board with a large portion of

independent directors can provide supervision for management and protect all the stakeholders’

interests. Hussain et al. (2018) found that “board with a higher proportion of independent directors

positively impacts environmental and social performance” by researching the US-based companies. As

Nicola Cucari et al. (2018) stated: “that firms with more independent directors lead to an increase in the

information related to ESG disclosure”. Pucheta‐Martínez et al. (2018) used a sample of financial

companies listed in Spain between 2004 and 2015 and found evidence that the presence of independent

directors encouraged financial entities to report CSR matters, which demonstrated the effectiveness of

the corporate governance mechanism. Hence, it is expected that BODs with more independent directors exhibit better CSR.

Hypothesis 1. A high number of independent directors on a BOD is positively and significantly

associated with CSR.

Board Diversity

The diversity of BOD has been explained in several ways. Gender and nationality are the main

characteristics of a diverse board. In the Stakeholders Theory framework, the presence of women on a

BOD is expected to be positively connected with CSR. Female directors have certain personality traits,

such as low-risk aversion, transparency, responsiveness, and recognition of social and environmental

issues, which improves sustainable performance (Boulouta, 2013). Hussain et al. (2018) find “that board

diversity enhances the social dimension of sustainability”. Naciti (2019) stated that firms with more

diversity on the board showed higher sustainability performance based on data from 362 large industrial

firms on the 2013—2016 Fortune Global 500 list. Accordingly, the hypothesis is formulated to test the

link between BOD diversity and CSR:

Hypothesis 2. A high proportion of board diversity, in terms of gender, is positively and significantly

associated with CSR.


CEO Duality

CEO duality occurs when the CEO is the company's president and chairman of the board. According to

agency theory, separation of board chairman and CEO roles increases board oversight of management.

Therefore, if the CEO also serves as chairman, it can have a significant impact on board decision-making

processes and company operations. The previous empirical literature has not provided consensus results

on the relationship between CEO duality and CSR. Naciti (2019) found firms with a separation between

chair and CEO roles exhibit higher sustainability performance. Pucheta‐Martínez (2018) found that

CEO duality is negatively correlated with CSR. Chams and García-Blandon (2019) did not find a

significant relationship between CEO duality and sustainability. Thus, this study predicts that the dual

nature of the CEO will have a negative impact on the implementation of CSR.

Hypothesis 3. CEO duality is negatively and significantly associated with CSR.

Methodology and Data

Sample and Data

This study aims to explore the impact of characteristics of BOD of Chinese appliance firms on the level

of CSR. Thus, the study analyzed data from Chinese home appliance listed companies in 2018. The

sample list is from the wencai database and includes 26 companies listed on the mainboard, 25

companies listed on the Small and Medium Enterprise board, and 8 companies listed on the Growth

Enterprise Market (GEM). The Chinese rating agency Rankins CSR Ratings (RKS) releases the CSR level of listed companies based on their CSR reports. However, not every listed company is willing to disclose its CSR status, so only a few listed companies are included in its report. This paper adopted

the concept of CSR advocated by Elkington [13] – the so-called triple bottom line principle, which assumes that if an enterprise forms an economic and social system, then its development objectives should rest on a triple foundation relating to profit, the people associated with the company, and care for the planet. Therefore, according to the triple bottom line principle, the author calculates the CSR score of appliance listed companies from three aspects: economics (shareholder, consumer, employee, supply chain and government), environment and society, as the measurement of the dependent variable CSR. Which criteria are included and how the specific weights are assigned to each criterion can be seen in the author's earlier paper (Wu, 2019). In order to obtain the most recent results, this paper uses the data of 2018; the criteria and corresponding weights are the same as in Wu (2019), only the data are updated to 2018. All sample data are collected from the wencai database, financial statements and CSR reports (if any).
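The scoring itself is a weighted aggregation. A minimal sketch of the idea follows; the criterion names and weights below are illustrative placeholders, not the actual criteria and AHP-derived weights defined in Wu (2019).

# Triple-bottom-line CSR score as a weighted sum of normalised criterion scores.
# Criterion names and weights are placeholders, not the values of Wu (2019).
WEIGHTS = {
    "shareholder": 0.15, "consumer": 0.15, "employee": 0.15,
    "supply_chain": 0.10, "government": 0.10,
    "environment": 0.20, "society": 0.15,
}

def csr_score(scores: dict) -> float:
    """Weighted CSR score; `scores` maps each criterion to a value scaled to [0, 1]."""
    return sum(WEIGHTS[criterion] * scores.get(criterion, 0.0) for criterion in WEIGHTS)

example_company = {"shareholder": 0.6, "consumer": 0.5, "employee": 0.4,
                   "supply_chain": 0.3, "government": 0.5,
                   "environment": 0.2, "society": 0.3}
print(round(csr_score(example_company), 4))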

Methodology and Applicability Analysis of OLS

When data only has a cross-sectional dimension, a multiple linear regression model is proposed to test

the hypotheses formulated in the former section. This method is considered an appropriate analytical

tool when the outcome variables are measurable and continuous. Thus, the empirical model is provided

as

CSR = β0 + β1 BINDEP + β2 BDIVER + β3 CEODUAL + β4 ROS + β5 FRISK + β6 SIZE + ε   (1)

where βi represents the partial regression coefficients and ε the error term; all the variables included in the study are described in detail in Table 1.

[13] J. Elkington: Cannibals with Forks: The Triple Bottom Line of 21st Century Business. Capstone Publishing Limited, Oxford 1997.
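As a minimal sketch of how equation (1) could be estimated outside SPSS (the file name and the use of statsmodels are assumptions; column names follow the abbreviations in Table 1):

import pandas as pd
import statsmodels.formula.api as smf

# One row per listed company (2018 data); the file name is a placeholder.
data = pd.read_csv("csr_data.csv")

# Equation (1): CSR regressed on board characteristics and control variables
model = smf.ols("CSR ~ BINDEP + BDIVER + CEODUAL + ROS + FRISK + SIZE", data=data).fit()

print(model.summary())   # coefficients, p-values, R-squared, Durbin-Watson, etc.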


Table 1. Summary of Variables and Their Measurements in the Study

Name of variable | Abbreviation | Measurement
CSR | CSR | CSR score calculated by the AHP method

Independent variables
Board independence | BINDEP | Proportion of independent directors on the board
Board diversity | BDIVER | Proportion of female directors on the board
CEO duality | CEODUAL | Value of 1 if the CEO is both president and chairman of the BOD; 0 otherwise

Control variables
Return on sales | ROS | Operating income to total revenue
Financial risk | FRISK | Total liabilities divided by total assets
Company size | SIZE | Logarithm of total assets

Note: Table 1 presents the description and abbreviation of the variables included in this study.

Table 2 shows the descriptive statistics of input data in columns 4 and 5, while minimum and maximum

scores are shown in columns 2 and 3.

Table 2. Descriptive Statistics of Given Variables

Variable           Minimum    Maximum    Mean        Std. Deviation
CSR                0.0585     0.5913     0.349622    0.1229467
BOD independence   0.3000     0.6000     0.377711    0.0593242
BOD diversity      0.0000     0.5556     0.165026    0.1476641
CEO duality        0.0000     1.0000     0.322034    0.4712667
ROS                -74.8300   40.0600    2.178475    20.9121352
Financial risk     12.2800    195.1700   48.699831   27.0375150
Size               8.5617     11.4211    9.682264    0.6092080

Source: SPSS

Testing the data with SPSS software, we found that the data meet the assumptions of the ordinary least squares (OLS) method, so this paper uses OLS to estimate the model parameters. The test results are shown in Figures 1 to 4 and Tables 3 to 5.


Figure 1. The Linear Relationship between Independent Variables of BOD Independence and

BOD Diversity and Dependent Variable of CSR

Source: SPSS

Figure 1 shows the linear relationship between the independent variables of BOD independence and BOD

diversity and the dependent variable of CSR. Because the CEO duality is a categorical variable, it is not

necessary to examine its linear relationship with the dependent variable of CSR.

Figure 2. Relationship between Independent Variables of BOD Independence and BOD

Diversity and Unstandardized Residuals

Source: SPSS

We can observe the relationship between the independent variables of BOD independence and BOD diversity and the unstandardized residuals through Figure 2 and find that the independent variables are not

correlated with the residuals. Because the CEO duality is a categorical variable, it is not necessary to

examine its relationship with the residuals.


Figure 3. Histogram and P-P plot of Regression Standardized Residual

Source: SPSS

Figure 4. Scatter Plot of Regression Standardized Predicted Value and Regression Standardized

Residual

Source: SPSS

From Figure 3 we can conclude that the residuals are normally distributed and that their mean value is zero. From Figure 4, we can see that the spread of the standardized residuals is basically stable and does not change with the standardized predicted values. It can therefore be considered that the homogeneity of variance is basically satisfied.
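The same residual checks can also be expressed as formal tests rather than visual inspection. A sketch under the assumption that `model` is the fitted OLS model from the earlier sketch (this is not the SPSS procedure used in the paper):

import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson

resid = model.resid

print("mean of residuals:", np.mean(resid))                    # should be close to zero
print("Jarque-Bera normality test:", stats.jarque_bera(resid))
print("Breusch-Pagan heteroscedasticity test:", het_breuschpagan(resid, model.model.exog))
print("Durbin-Watson statistic:", durbin_watson(resid))        # close to 2 -> independent residuals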


Table 3. Model Summary

Model   R       R Square   Adjusted R Square   Std. Error of the Estimate   Durbin-Watson
1       0.595   0.354      0.279               0.1044000                    2.098

Source: SPSS

From Table 3, we know the value of Durbin-Watson is 2.098, which is very close to 2. Thus, we can

infer that there is no obvious correlation between the residuals; that is, the residuals are independent.

Table 4. Correlations

Pearson correlation
                    CSR      BOD ind.   BOD div.   CEO dual.   ROS      Fin. risk   Size
CSR                 1
BOD independence    -0.132   1
BOD diversity       -0.284   -0.049     1
CEO duality         0.012    0.288      0.122      1
ROS                 0.146    0.109      0.05       0.08        1
Financial risk      0.158    -0.049     0.024      -0.111      -0.323   1
Size                0.459    0.021      -0.017     0.193       0.023    0.226       1

Sig. (1-tailed)
BOD independence    0.16
BOD diversity       0.015    0.356
CEO duality         0.465    0.014      0.179
ROS                 0.135    0.205      0.355      0.274
Financial risk      0.117    0.355      0.428      0.201       0.006
Size                0        0.437      0.45       0.072       0.431    0.042

Source: SPSS

Table 4 displays the Pearson correlation coefficients between all pairs of variables and their corresponding p-values. The results show that all the correlation coefficients between the continuous independent variables are less than 0.5, and the p-values are larger than 0.05, indicating that the correlation between the independent variables is weak, so it can be considered that there is no collinearity. Because the independent variable CEO duality is a categorical variable, it is not appropriate to use the Pearson correlation coefficient to test it. The same conclusion follows from the collinearity statistics (tolerance and VIF) in Table 5: for each variable, the tolerance is larger than 0.2 and the VIF is less than 10, indicating that there is no collinearity between the variables.
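For reference, tolerance and VIF values like those reported in Table 5 could be computed as below, assuming `data` is the DataFrame from the OLS sketch above (a sketch, not the SPSS output):

import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

predictors = ["BINDEP", "BDIVER", "CEODUAL", "ROS", "FRISK", "SIZE"]
X = sm.add_constant(data[predictors])

for i, name in enumerate(X.columns):
    if name == "const":
        continue                         # skip the intercept column
    vif = variance_inflation_factor(X.values, i)
    print(f"{name}: VIF = {vif:.3f}, tolerance = {1 / vif:.3f}")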


Table 5. Coefficients Estimated by OLS

Model              Unstandardized B   Std. Error   Standardized Beta   t        Sig.    Tolerance   VIF
(Constant)         -0.324             0.246                            -1.315   0.194
BOD independence   -0.367             0.244        -0.177              -1.505   0.138   0.9         1.111
BOD diversity      -0.251             0.094        -0.302              -2.663   0.010   0.968       1.033
CEO duality        0.004              0.032        0.016               0.132    0.895   0.841       1.188
ROS                0.001              0.001        0.212               1.775    0.082   0.873       1.145
Financial risk     0.001              0.001        0.131               1.065    0.292   0.819       1.222
Size               0.085              0.024        0.42                3.546    0.001   0.887       1.127

Source: SPSS

Empirical Results and Discussion

Empirical Results

As previously mentioned, the sample comprises 59 companies in the Chinese appliance industry. Table 4 shows the Pearson correlation results. We find that CSR has a negative and significant correlation with BOD diversity: the correlation coefficient between CSR and BDIVER is -0.284 at the 5% significance level. We also discover that, at a significance level of 5%, CSR is not associated with BINDEP and CEODUAL. In Table 4, we notice that company size is positively correlated with CSR at the 1% significance level.

From Table 3, we find the Coefficient of Determination R2 is 0.354, which means 35.4 per cent of the

total variation in CSR can be explained by the multiple regression model provided in this study. Thus,

the goodness of fit of this multiple regression model is acceptable.

In Table 5, we report the results after testing the hypotheses with OLS; that is, the estimated determinants of CSR. The results show that the partial regression coefficient of BOD independence is -0.367, with a p-value of 0.138, which is greater than the 10% significance level. This indicates only a weak linear relationship between CSR and board independence and does not support hypothesis 1. BOD diversity is significantly but negatively associated with CSR at the 5% level. Specifically, the standardized coefficient of -0.302 implies that greater board diversity is associated with a lower CSR score, which is contrary to our expectation. The p-value of CEO duality is 0.895, which is much greater than the 5% significance level. This demonstrates that CEO duality is not associated with CSR at the 5% significance level, which does not support hypothesis 3. In addition, ROS is positively correlated with CSR at the 10% significance level, and company size is positively associated with CSR at the 1% significance level.

Results Analysis

Possible explanations to our result about the negative relationship between board diversity and CSR can

be found in Cucari et al. (2018), where it was shown that “being a woman director does not necessarily

mean having a different outlook. Therefore, it is not the gender that determines a positive level of ESG

disclosure”. According to Jain and Jamali (2016) and Rodríguez-Ariza et al. (2017), the relationship

between directors’ gender and CSR behaviour is very complex, because a female director has many

characteristics, except for her genders, such as a woman with specific expertise or experience as an

executive or non-executive director.


Another empirical result, concerning the relationship between board independence and CSR, is the same as that found by Lau et al. (2016), who explained that a small number of outside directors is not sufficient to bring about change in the board. The finding on the connection between CSR and CEO duality is the same as that of Chams and García-Blandón (2019). The reason may be that most of the sample companies are private companies (45 of 59), where the actual controller is also the chairman and CEO in most cases, so agency theory is not appropriate here. Thus, there is no relationship between CSR and CEO duality.

Conclusion

This paper examines the impact of the characteristics of the board of directors on CSR. We measure CSR as a weighted score over seven aspects, using AHP to obtain the corresponding weights. Based on a sample of 59 listed Chinese appliance companies and applying empirical tests, we find that only the relationship between board diversity and CSR is significant, and negative; board independence and CEO duality show no statistical link with CSR.

References

[1] Boulouta, I. (2013). Hidden Connections: The Link between Board Gender Diversity and Corporate Social Performance. Journal of Business Ethics, 113(2), pp. 185–197.

[2] Chinese Academy of Social Sciences. (2018). China Corporate Social Responsibility Research Report. Social Science Literature Press.

[3] Chams, N. and J. García-Blandón. (2019). Sustainable or Not Sustainable? The Role of the Board of Directors. Journal of Cleaner Production, 226, pp. 1067–1081.

[4] Cucari, N., S. E. De Falco and B. Orlando. (2018). Diversity of Board of Directors and Environmental Social Governance: Evidence from Italian Listed Companies. Corporate Social Responsibility and Environmental Management, 25(3), pp. 250–266.

[5] Hussain, N., U. Rigoni and R. P. Orij. (2018). Corporate Governance and Sustainability Performance: Analysis of Triple Bottom Line Performance. Journal of Business Ethics, 149(2), pp. 411–432.

[6] Jain, T. and D. Jamali. (2016). Looking Inside the Black Box: The Effect of Corporate Governance on Corporate Social Responsibility. Corporate Governance: An International Review, 24(3), pp. 253–273.

[7] Lau, C. M., Y. Lu and Q. Liang. (2016). Corporate Social Responsibility in China: A Corporate Governance Approach. Journal of Business Ethics, 136(1), pp. 73–87.

[8] Naciti, V. (2019). Corporate Governance and Board of Directors: The Effect of a Board Composition on Firm Sustainability Performance. Journal of Cleaner Production, 237, 117727. https://doi.org/10.1016/j.jclepro.2019.117727.

[9] Pucheta-Martínez, M. C., I. Bel-Oms and M. Nekhili. (2019). The Contribution of Financial Entities to the Sustainable Development through the Reporting of Corporate Social Responsibility Information. Sustainable Development, 27(3), pp. 388–400.

[10] Rodríguez-Ariza, L., B. Cuadrado-Ballesteros, J. Martínez-Ferrero and I. M. García-Sánchez. (2017). The Role of Female Directors in Promoting CSR Practices: An International Comparison between Family and Non-Family Businesses. Business Ethics, 26(2), pp. 162–174.

[11] Van Marrewijk, M. (2003). Concepts and Definitions of CSR and Corporate Sustainability: Between Agency and Communion. Journal of Business Ethics, 44, pp. 95–105.

[12] Wu, X. (2019). Research on the Implementation of CSR of Listed Companies in the Chinese Household Appliance Industry. 21st International Conference MEKON 2019, pp. 255–262.


MULTI-CRITERIA DECISION MAKING USING THE ENTROPY METHOD APPLIED TO SELECTED VARIABLES FROM THE AREA OF DIGITALIZATION AND DEVELOPMENT IN THE CENTRAL EUROPE TERRITORY

Martina Žwaková1

1Department of Economics, VŠB – Technical University of Ostrava,

Sokolská tř. 33, Ostrava 70200, Czech Republic

e-mail: [email protected]

Abstract

A comparison of regions can be considered a complex procedure in which multiple criteria need to be evaluated. The aim of this paper is to rank selected regions based on variables from the field of technology, its use and development, and to assess the selected regions of Central Europe at the NUTS 2 level. In total, 30 NUTS 2 regions are assessed. Variables related to human resources, the use of digital technologies, information and communication technology and science are incorporated in the evaluation in order to obtain more comprehensive results. The variables were also chosen with regard to the availability of statistical data. The Entropy method is used to calculate the weights of the variables, identifying the importance of each variable for the overall ranking of the selected group of regions. Results from the analysis are presented, comparing the ranking of the regions and describing their main characteristics and potential strengths and weaknesses in terms of the digitalization and development area defined by the selected variables.

Keywords

Entropy method, NUTS 2, multi-criteria decision making, digitalization, Central Europe

JEL Classification

D80, O30, O33, R11

1 Introduction

This paper focuses on the application of the Entropy method to the assessment of NUTS 2 regions in Central Europe from the perspective of variables related to areas with a possible influence on digitalization in the regions. The aim of the paper is to rank the selected regions based on the indicators, considering their importance for the evaluation as expressed by the calculated weights.

The motivation for this ranking was to obtain an overview of the position of the regions of the Czech Republic in a wider geographical context, based on variables related to the trend of digitalization. This comparison was intended to contribute to a description of the business environment of the regions.

The variables have been selected based on their availability, the consistency of the data in terms of missing values, and an appropriate time range: the restriction was set not to use data older than five years, so as to work with the most current data possible. Seventeen variables from 2016 were selected for the evaluation; this year was chosen as a balance between data availability and a suitable distance from the current period. The variables can be divided into three separate groups based on the areas they cover. The first group of indicators comes from the field of human resources in technological branches and branches with a potential importance for development. The second group contains indicators of the use of digital technologies. The indicators related to the value added of selected branches form the third group of variables.

The values of the 17 variables have been collected for 30 regions from the Central Europe area and Slovenia, covering five countries in total: the Czech Republic, Slovakia, Austria, Slovenia and Hungary. Due to a change in the methodology of NUTS 2 regions, it was necessary to re-calculate data for two separate regions in Hungary, Budapest and Pest, which were originally merged in the Közép-Magyarország area and, according to the later methodological changes, divided into the Budapest and Pest regions. As the data for the use-of-technologies variables were available only for the merged region, it was necessary to unify the regional breakdown, and the older division has been used.


To identify the importance of the selected variables for the overall ranking, an objective weighting method has been used, so that the results do not depend on an evaluator but are based on the statistical data. The Entropy method has been selected for this purpose because of its suitability for detecting the most varied indicators (Deng et al., 2000). The TOPSIS approach has then been used to combine the results from the seventeen variables. The TOPSIS method has been selected because it considers the position of a variant relative to the best and to the worst possible result (Brožová et al., 2014, p. 37), so the evaluation is performed in context.

The outcomes of the evaluation are briefly described, focusing on the results of the regions with the highest and the lowest rankings. The results in specific areas are also compared and the main findings are presented.

2 Brief overview of the topic

As multi-criteria decision making using the Entropy and TOPSIS methods has been applied to the assessment of the selected NUTS 2 regions from a perspective related to digital technologies, a concise outline of schemes used for evaluation in terms of digitalization is given here. In this paper the variables have been selected mainly based on their accessibility at the regional level, and they focus more on the usage of technologies, the value created and human resources in knowledge-intensive areas.

As the term digitalization is used, it is outlined first as presented by Vogelsang (2010, p. 3), who emphasizes the technical aspects of the phenomenon, namely the use of a binary form of information, which creates the background for digitalization. Vogelsang (2010, p. 3) also emphasizes the relation to the wide use of the internet and its worldwide consequences.

Several structures of indexes are described in the literature, comprising complex sets of areas indicating the level of digitalization or of indicators with a high impact on this area.

Indexes such as the one used by the World Economic Forum are composed of a wide range of variables from several areas, from technological to social (Baller, Dutta and Lanvin, 2016). This index is not the only one used for the assessment of digitalization; other examples include the methodology used by Roland Berger consultants, as explained by Soldatos et al. (2016, p. 166), or the DESI indicator (European Commission, 2019). Another index with a wide range of variables is the Networked Readiness Index, described for example by Vogelsang (2010, p. 12) or Xu (2014, pp. 11–12).

Although variables from similar areas are used in these indexes, the scope of variables in this paper is narrower and focuses on three main areas only. The importance of factors such as research and development, and of human resources in the context of digitalization, is described for example by Mařík et al. (2016).

The topic of digitalization is the subject of research by several authors, such as Vogelsang (2010) and Xu (2014); among Czech authors, Mařík et al. (2016) focus on Industry 4.0 and Veber et al. (2018) on the impacts of digitalization and its implications for economic branches including industry. The topic is also monitored by institutions such as the World Economic Forum, which issues the Global Information Technology Report (Baller, Dutta and Lanvin, 2016).

The area of weighting methods and multi-criteria decision making is described, for example, by Zardari et al. (2015) or Brožová et al. (2014).

3 Methodology and Data

The variables taken into the assessment are first weighted. The approach using different weights has been preferred over equal weighting for all variables in order to emphasize the importance of the variables that are suitable for differentiating the regions; the principle of preferring variables with varied results is also described by Zardari et al. (2015, p. 65). This approach also helps to identify the areas in which different levels of results are achieved.

The Entropy method has been selected for the calculation of the weights. It has been used because it is based on the statistical data and follows an objective approach that excludes the influence of the evaluator. The method focuses on the identification of uncertainty, and the more varied variables are assigned a higher weight.

After the weights have been calculated, the TOPSIS method described in Section 3.2 is applied to calculate the overall result for each region, based on which the regions are ranked. In the methodology part of this article, the symbols in the equations have been unified, as multiple literature sources have been used.

3.1 Methods used for assessment of the NUTS2 regions

First, a data matrix Aij is constructed from the particular values yij, where j = 1, 2, 3 … n, with n representing the seventeen variables, and i = 1, 2, 3 … m, with m representing the thirty assessed regions (Bao et al., 2012).

The values first need to be normalized, as different measures are used, applying equation (1):

x_{ij} = \frac{y_{ij}}{\sum_{i=1}^{M} y_{ij}}    (1)

where x_{ij} is the normalized value and y_{ij} is the original value of the jth variable for the ith region out of M regions. Normalization by the overall sum is described by Zardari et al. (2015, p. 33) or Brožová et al. (2014, p. 13).

The assessment of the importance of the criteria continues with the calculation of the information entropy E_j based on equation (2), as presented by Zardari et al. (2015, p. 33) or Brožová et al. (2014, p. 13):

E_j = -\frac{1}{\ln M} \sum_{i=1}^{M} x_{ij} \ln x_{ij}    (2)

where x_{ij} represents the normalized values and M represents the number of variants of values for each variable, in this paper the number of the NUTS 2 regions. Finally, the weight of the jth variable, w_j, is calculated using equation (3):

w_j = \frac{1 - E_j}{\sum_{j=1}^{N} \left(1 - E_j\right)}    (3)

where E_j is the information entropy of the jth variable and N is the number of variables included in the assessment (Zardari et al., 2015, p. 33; Brožová et al., 2014, p. 13).
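For illustration only, equations (1)–(3) translate into a short Python sketch along the following lines, assuming the data are held in a pandas DataFrame with one row per region and one column per variable; the file name and column layout are placeholders, not the actual Eurostat extract:

```python
import numpy as np
import pandas as pd

def entropy_weights(data: pd.DataFrame) -> pd.Series:
    """Objective weights via the Entropy method, eqs. (1)-(3)."""
    M = len(data)                                  # number of regions
    x = data / data.sum(axis=0)                    # eq. (1): column-wise sum normalization
    # eq. (2): information entropy per variable (0 * ln 0 treated as 0)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(x > 0, x * np.log(x), 0.0)
    E = -terms.sum(axis=0) / np.log(M)
    w = (1 - E) / (1 - E).sum()                    # eq. (3): weights sum to 1
    return pd.Series(w, index=data.columns)

# Example usage (placeholder file name):
# data = pd.read_csv("nuts2_indicators_2016.csv", index_col="region")
# weights = entropy_weights(data).sort_values(ascending=False)
```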

The second method used for the assessment is the statistical variance method. Before the method is applied, the values are normalized using equation (1) to eliminate the influence of the different scales of the variables. The variances are then calculated using equation (4):

V_j = \frac{1}{M} \sum_{i=1}^{M} \left( x_{ij} - \bar{x}_j \right)^2    (4)

where M is the number of assessed regions, as in equation (2), and \bar{x}_j is the mean of the normalized values of the jth variable (Zardari et al., 2015, p. 35).

The weights w_j are then calculated by dividing each variance by the sum of the variances, as in equation (5), described by Zardari et al. (2015, p. 33):


w_j = \frac{V_j}{\sum_{j=1}^{N} V_j}    (5)
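Under the same assumptions as the previous sketch (regions in rows, indicators in columns), the statistical variance weights of equations (4)–(5) could be computed as follows:

```python
import pandas as pd

def variance_weights(data: pd.DataFrame) -> pd.Series:
    """Objective weights via the statistical variance method, eqs. (4)-(5)."""
    x = data / data.sum(axis=0)        # eq. (1): the same sum normalization as before
    V = x.var(axis=0, ddof=0)          # eq. (4): population variance per variable
    return V / V.sum()                 # eq. (5): normalize variances to weights
```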

3.2 TOPSIS method

The TOPSIS method has been selected for the calculation of the overall ranking. First, the original data are normalized using equation (6):

r_{ij} = \frac{y_{ij}}{\sqrt{\sum_{i=1}^{M} y_{ij}^2}}    (6)

where r_{ij} is the normalized value of the jth variable for the ith region (Brožová et al., 2014, p. 36). The normalized values are then multiplied by the weights already calculated, which yields the values v_{ij}. Different approaches to normalization are also used; for example, Bao et al. (2012, p. 112) normalize by dividing each value by the highest value of the variable, whereas Chan and Wang (2013, p. 116) use the same procedure as in equation (6).

For each variable, the most preferred and the least preferred value is selected, and the distances of the regions' values from the most and from the least preferred value are then calculated. For the distance from the most preferred value, equation (7) is used:

d_i^+ = \sqrt{\sum_{j=1}^{N} \left( v_{ij} - h_j \right)^2}    (7)

where d_i^+ represents the overall distance from the best variant, v_{ij} is the normalized and weighted value of the jth variable for the ith region, and h_j is the best variant of the jth variable. A similar procedure is applied for the calculation of the overall distance from the least preferred variant, d_i^-, shown in equation (8) as described by Brožová et al. (2014, p. 37):

d_i^- = \sqrt{\sum_{j=1}^{N} \left( v_{ij} - l_j \right)^2}    (8)

where l_j represents the least preferred value of the jth variable. The overall score c_i is obtained by applying equation (9), described by Brožová et al. (2014, p. 37):

c_i = \frac{d_i^-}{d_i^+ + d_i^-}    (9)
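Putting equations (6)–(9) together, a minimal TOPSIS sketch, again assuming regions in rows, indicators in columns, all criteria maximizing and weights taken from the Entropy calculation above, might look like this:

```python
import numpy as np
import pandas as pd

def topsis_scores(data: pd.DataFrame, weights: pd.Series) -> pd.Series:
    """Relative closeness c_i of each region to the ideal variant, eqs. (6)-(9)."""
    r = data / np.sqrt((data ** 2).sum(axis=0))    # eq. (6): vector normalization
    v = r * weights                                # weighted normalized values v_ij
    h = v.max(axis=0)                              # best value per (maximizing) variable
    l = v.min(axis=0)                              # worst value per variable
    d_plus = np.sqrt(((v - h) ** 2).sum(axis=1))   # eq. (7): distance from the best variant
    d_minus = np.sqrt(((v - l) ** 2).sum(axis=1))  # eq. (8): distance from the worst variant
    return d_minus / (d_plus + d_minus)            # eq. (9): higher score = better rank

# Example usage, reusing the placeholder names from the earlier sketches:
# data = pd.read_csv("nuts2_indicators_2016.csv", index_col="region")
# scores = topsis_scores(data, entropy_weights(data)).sort_values(ascending=False)
```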

3.3 Data preparation

As one of the main aims of the comparison was to base the assessment on an objective comparison method, the data were collected from a reliable source, Eurostat (2019)14. Variables from three areas identified as potentially influencing the level or spread of digitalization were selected. These three categories have been created by grouping the single indicators according to the similarity of the areas they cover. The three main areas are human resources, the relationship of users to digital technologies, and value adding in the selected areas. The structure of the used indicators is shown in Diagram 1.

14 DISCLAIMER: The opinions expressed in this publication are those of the individual author alone and do not necessarily reflect the position of the European Commission.


Diagram 1. Main groups of indicators used for the assessment

Source: Author’s processing

Regarding the indicators from the area of human resources, the structural view of employment is described by Mařík et al. (2016, pp. 159–162), who highlight the importance of employment in knowledge-intensive manufacturing and services, especially in information and communication technologies and in science and research; variables from such areas were therefore included in the comparison.

According to Mařík et al. (2016, pp. 162–165), computer usage skills are also highly important, not only on the side of employees but also of end users, and partial areas such as internet access and use of the internet for different purposes, including communication with government, are mentioned; such areas are also covered by the DESI indicator (European Commission, 2019). This can be considered a supporting argument for using the indicators related to the use of technologies in the comparison.

The level of value added in the sector is also mentioned by Mařík et al. (2016, p. 162) in the context of the share of employment in information and communication technology areas, where it is noted that not only the share itself but also its structure, from a value-added point of view, is important.

The specific variables, divided into the three categories presented above, together with an overview of the measures used, are listed in Table 1.



Table 1. An overview of the used variables

Indicator | Measure
Value added ICT to overall | % from overall value added
Value added professional and scientific activities to overall | % from overall value added
Gross value added mil. EUR for ICT | per 1 employee in a branch
Gross value added mil. EUR of profess., scientific and technical activities | per 1 employee in a branch
Persons employed - computer programming | % from economically actives15
Persons employed - information service activities | % from economically actives
Persons employed - manufacturing of computers | % from economically actives
Persons employed in High and medium high manufacturing | % from economically actives
Persons employed in knowledge intensive services | % from economically actives
Persons employed - scientists and engineers | % from economically actives
Households with broadband access | % of population
Individuals who ordered goods/ services over the internet for private use in last 3 months | % of population
Individuals who used the internet for interaction with public authorities (in last 12 months) | % of population
Individuals who used the internet for interaction with public authorities – submitting forms (in last 12 months) | % of population
Individuals who used the internet, frequency of use and activities – daily use of internet | % of population
Individuals who used the internet, frequency of use and activities – internet banking | % of population
Individuals who used the internet, frequency of use and activities – sell of goods | % of population

15 Aged 15–65 years.

Source: Author’s processing based on names and measures from Eurostat (2019)

The data are gathered for 30 NUTS 2 regions from five countries: the Czech Republic, Slovakia, Hungary, Slovenia and Austria. These countries were selected on a geographical basis, with the focus on Central Europe, in order to compare the situation of the Czech Republic with geographically close regions and to identify the position of each NUTS 2 region within the area. The perspective of data availability also had to be taken into consideration: for this reason the NUTS 2 regions of Poland had to be excluded, as the values of the variables from the group of indicators of the use of technologies were not available at the NUTS 2 level, and Slovenia was added to the comparison instead.

The NUTS 2 level was chosen in order not to compare countries as wholes but to use a more detailed view. Due to data availability, a finer level of detail than NUTS 2 could not be used.

The year 2016 was selected as a suitable time period for the assessment, balancing data availability for all regions against the requirement for up-to-date data. The year 2016 also meets the requirement not to use data older than five years from the current year.

To obtain data in a comparable format, additional calculations were needed, except for the variables on the usage of digital technologies among the population of a region, which were already available as percentages. The other variables were re-calculated either to a percentage of the overall value (value added or employed persons, depending on the particular area) or per one unit; the latter approach has been used in the case of the value added related variables.

Adjustments were also needed due to an inconsistency in the methodology of data presentation on Eurostat's side. Data for two separate regions in Hungary, Budapest and Pest, were originally merged in the Közép-Magyarország area. For the variables related to human resources and for the value added related variables, the data were available according to the newer methodology, divided into separate regions, but the data for the area of usage of digital technologies were available only for the merged Közép-Magyarország region. To unify the methodology, the older version with the merged region was applied also in the case of the variables for the areas of human resources and value added: the values for the Budapest and Pest regions were aggregated to obtain the value for the former Közép-Magyarország area, and this aggregated value has been used in the calculations. The methodology of dividing the regions is described by Eurostat (2019).
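As an illustration of this preparation step, a small pandas sketch, with hypothetical column and region labels rather than the exact Eurostat extract layout, could look like this:

```python
import pandas as pd

# Hypothetical raw extract: one row per NUTS 2 region, absolute values per variable
# (column names are placeholders for the Eurostat variables described above).
raw = pd.read_csv("eurostat_raw_2016.csv", index_col="region")

# Unify the Hungarian breakdown first: aggregate the absolute values for Budapest
# and Pest back into the former Közép-Magyarország region, so that all variable
# groups use the same list of regions.
raw.loc["Közép-Magyarország"] = raw.loc[["Budapest", "Pest"]].sum()
raw = raw.drop(index=["Budapest", "Pest"])

# Then re-calculate absolute figures into comparable shares / per-unit values.
raw["va_ict_share"] = raw["va_ict"] / raw["va_total"] * 100          # % of overall value added
raw["gva_ict_per_employee"] = raw["gva_ict"] / raw["employees_ict"]  # per 1 employee in the branch
```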

3.4 Comparison of empirical results from the assessment

The variables weighted by the Entropy method are used in combination with the TOPSIS method. The weight calculation using the Entropy method is also compared with the weighting obtained by the statistical variance approach, to verify whether there is a significant difference in the weights obtained.

After the calculation of the overall values for each NUTS 2 region, the overall ranking is completed. The focus is on the regions with the highest and the lowest overall scores; these regions are then described in more detail.

The individual variables are also analysed by assessing the regions not from the multi-criteria point of view but by each variable separately, using fundamental descriptive statistics and graphical tools. These are also used for the description of the most and the least successful regions in the ranking.

4 Empirical Results

In this section of the paper, the main findings from the application of the described methods are presented. The focus is first on the calculation of the weights and the comparison of results obtained using the two objective calculation methods. After the description of the weight calculation results, the results from the application of the TOPSIS decision making method are described; the weights used in this calculation are those obtained from the Entropy method. The presented results are based on processing of data from Eurostat (2019).

4.1 Weights calculated by Entropy method and Statistical variance

As the weights are calculated by objective methods, there is no influence of assessors and the calculations are based only on statistical parameters. For this reason, it is not possible to draw conclusions about the importance of the selected variables for digitalization itself, as not all interdependencies are included. The weights represent the importance of the variables for sorting the regions, giving higher preference to the variables by which the regions can be distinguished more effectively; in other words, the variables which have a higher measure of uncertainty (Entropy method) or a higher average squared deviation from the variable mean (statistical variance method).

First, the results of the Entropy method weight calculation are presented in Table 2, where the variables are sorted according to the achieved weights.

Table 2. Weights calculated using Entropy method

Variable Weight Rank

Persons employed - information service activities 0.215756 1

Persons employed - computer programming 0.181908 2

Value added ICT to overall 0.127692 3

Persons employed - manufacturing of computers 0.104176 4

Gross value added mil. EUR of profess. sci-tech. activities per 1 econ. active person 0.087879 5

Gross value added mil. EUR for ICT per 1 economically active person 0.050905 6

Persons employed in High and medium high manufacturing 0.048423 7

Persons employed - scientists and engineers 0.038730 8

Individuals who used the internet for interaction with public authorities – submitting forms 0.038157 9

Value added professional and scientific activities to overall 0.037554 10

Individuals who used the internet, frequency of use and activities – sell of goods 0.020088 11

Individuals who ordered goods/ services over the internet for private use 0.018620 12


Individuals who used the internet, frequency of use and activities – internet banking 0.011476 13

Individuals who used the internet for interaction with public authorities 0.009051 14

Persons employed in knowledge intensive services 0.007325 15

Individuals who used the internet, frequency of use and activities – daily use of internet 0.001454 16

Households with broadband access 0.000806 17

Source: Author’s processing based on data from Eurostat (2019)

It can be seen that, in general, the values of the variables related to users' behaviour are less varied and take similar values in all assessed regions. The three variables with the biggest impact on the ranking are Persons employed - information service activities and Persons employed - computer programming, which have significantly higher weights, and Value added ICT to overall, whose weight is close to that of the fourth variable, Persons employed - manufacturing of computers.

The indicator with the lowest assigned importance is Households with broadband access, where the numbers in all regions are very similar.

The second ranking, displayed in Table 3, is calculated using the statistical variance procedure. The overall ranking varies very little from the results of the Entropy method; only the ranks of Individuals who used the internet for interaction with public authorities – submitting forms and Value added professional and scientific activities to overall are switched.

Table 3. Weights calculated using Statistical variance method

Variable Weight Rank

Persons employed - information service activities 0.251532 1

Persons employed - computer programming 0.207787 2

Value added ICT to overall 0.136985 3

Persons employed - manufacturing of computers 0.089487 4

Gross value added mil. EUR of profess, sci-tech. activities per 1 econ. active person 0.078581 5

Gross value added mil. EUR for ICT per 1 economically active person 0.042422 6

Persons employed in High and medium high manufacturing 0.039308 7

Persons employed - scientists and engineers 0.034667 8

Value added professional and scientific activities to overall 0.033404 9

Individuals who used the internet for interaction with public authorities – submitting forms 0.029891 10

Individuals who used the internet, frequency of use and activities – sell of goods 0.016474 11

Individuals who ordered goods/ services over the internet for private use 0.015235 12

Individuals who used the internet, frequency of use and activities – internet banking 0.008822 13

Individuals who used the internet for interaction with public authorities 0.007190 14

Persons employed in knowledge intensive services 0.006354 15

Individuals who used the internet, frequency of use and activities – daily use of internet 0.001203 16

Households with broadband access 0.000660 17

Source: Author’s processing based on data from Eurostat (2019)

The results differ more in the distribution of the weight values when the two ends of the ranking are reviewed: the importance of the first three indicators is higher than in the case of the Entropy method, and there is a larger difference between the weights in the 3rd and 4th positions.

Based on both applied methods, it is visible that the regions differ mostly in the variables related to human resources in IT branches and in the ratio of the value added of these branches to the overall value added in the region.


4.2 Results from application of TOPSIS method

The weights calculated using the Entropy method were combined with the TOPSIS multi-criteria decision-making procedure to obtain an overall score between 0 and 1, based on the distance from the most preferred variant. As all the variables can be considered maximizing criteria, no transformation of criteria type was needed. The original values were transformed to the same scale as described in the methodology part of the paper.

The final ranking of the regions is presented in Table 4, where it can be seen that the first four regions, with scores above 0.5, are Praha, Bratislavský kraj, Wien and Közép-Magyarország; the highest overall values are thus achieved in the capital regions of the countries, where there is higher usage of technologies and a concentration of human resources and value added.

Table 4. Final ranking of the NUTS2 regions combining the Entropy and TOPSIS methods

Region | Score | Rank
Praha | 0.782087 | 1
Bratislavský kraj | 0.690954 | 2
Wien | 0.623235 | 3
Közép-Magyarország | 0.549858 | 4
Zahodna Slovenija | 0.304428 | 5
Jihovýchod | 0.285439 | 6
Salzburg | 0.247281 | 7
Kärnten | 0.220348 | 8
Oberösterreich | 0.220028 | 9
Észak-Magyarország | 0.216701 | 10
Steiermark | 0.215584 | 11
Közép-Dunántúl | 0.201964 | 12
Severovýchod | 0.198683 | 13
Vorarlberg | 0.184004 | 14
Východné Slovensko | 0.174372 | 15
Tirol | 0.171391 | 16
Západné Slovensko | 0.161791 | 17
Nyugat-Dunántúl | 0.151063 | 18
Moravskoslezsko | 0.149815 | 19
Stredné Slovensko | 0.148306 | 20
Niederösterreich | 0.144430 | 21
Střední Morava | 0.142244 | 22
Burgenland | 0.127493 | 23
Jihozápad | 0.125833 | 24
Dél-Dunántúl | 0.117726 | 25
Střední Čechy | 0.107704 | 26
Észak-Alföld | 0.106295 | 27
Vzhodna Slovenija | 0.095132 | 28
Dél-Alföld | 0.067743 | 29
Severozápad | 0.061491 | 30

Source: Author’s processing based on data from Eurostat (2019)

The results for the regions ranked with the highest scores are described in more detail. For this comparison, the normalized data used for the weight calculation are also used, as this increases the readability of the graphs presented in Appendix 1, which are scaled to the same average (0.033). The description focuses on the three defined sub-areas.

Praha – In the case of the value added related variables, the values are above the averages for these variables. For Value added ICT to overall, the highest value of the whole group of regions is achieved; for Value added professional and scientific activities to overall, the third highest value is reached. Except for the variable Gross value added mil. EUR of professional and sci-tech. activities per 1 employee, high values are achieved in general in this area.

In the area of human resources, high values are also achieved: for Persons employed - computer programming, Persons employed in knowledge intensive services and Persons employed - scientists and engineers, the highest values of all regions are reached. The exceptions are the variables Persons employed - manufacturing of computers and Persons employed in High and medium high manufacturing, where the values are even below the averages for these variables.

When the group of variables related to the approach to the use of technologies is assessed, the results are more varied. Apart from the variables Individuals who ordered goods/ services over the internet for private use, Individuals who used the internet for interaction with public authorities and Individuals who used the internet for interaction with public authorities - submitting forms, the achieved values were above the averages for the variables. For the variable Households with broadband access, the value was the highest of all regions.

Based on these findings it can be concluded that the strengths of this region in terms of the assessed variables are mainly human resources, except for manufacturing, and the area of value adding. Good results are also achieved in the approach to digital technologies.


Bratislavský kraj – In the area of the value added related variables, high values are achieved for the ratios of the value added of ICT and of professional and scientific activities to the overall value added. Values lower than the averages are reached for the variables of value added per one employee in the ICT and in the professional and scientific activities branches.

In the area of human resources, the values of the selected variables are mostly above the averages. For the variable Persons employed - information service activities, the highest value of all regions is achieved, and for the variable Persons employed - computer programming, the second highest value is identified. For the variables related to manufacturing the values are lower: for Persons employed - manufacturing of computers the value is above the average, and for Persons employed in High and medium high manufacturing the value is close to the average.

When the variables from the area of the approach to digital technologies are assessed, the values are above or close to the averages, except for Individuals who used the internet for interaction with public authorities - submitting forms, which is below the average.

For the Bratislavský kraj area, the results in the area of human resources, except for those related to manufacturing, can be considered a strength. The ratios of ICT and professional and scientific GVA to the overall GVA are also considered high.

Wien – For Wien, above-average values of the value added related variables are identified. For the variables Value added professional and scientific activities to overall and Gross value added mil. EUR for ICT per 1 employee, the highest values of all regions are achieved.

In the area of human resources related variables, values above the overall averages are achieved, except for the variables Persons employed - manufacturing of computers and Persons employed in High and medium high manufacturing, where the values are relatively low. For the variables Persons employed in knowledge intensive services and Persons employed - scientists and engineers, the second highest values among all assessed regions are achieved.

The variables in the area of the attitude to the usage of digital technologies are above or close to the averages, except for the variable Individuals who used the internet, frequency of use and activities – sell of goods, which is slightly below the average.

For Wien, the values can be considered balanced and mainly above the averages for the variables, with the exception of the variables related to human resources in the manufacturing areas included in this assessment.

Közép-Magyarország – In this region, the results for the value added related variables follow a similar pattern to that of the Praha area. High values are achieved for the variables Value added ICT to overall and Value added professional and scientific activities to overall, while for the variables Gross value added mil. EUR for ICT per 1 employee and Gross value added mil. EUR of professional and sci-tech. activities per 1 employee the values are below the averages.

In the area of human resources, below-average values are also identified for the variables Persons employed - manufacturing of computers and Persons employed in High and medium high manufacturing. For the variable Persons employed - computer programming the third highest value is achieved, and for Persons employed - information service activities the fourth highest value of all regions is identified.

In the third assessed area, the values of the variables are slightly above or close to the averages, except for the variable Individuals who used the internet for interaction with public authorities - submitting forms and the variable Individuals who ordered goods/ services over the internet for private use, where the achieved values are below the averages.

Generally, it can be considered that Közép-Magyarország follows similar patterns to Praha in the assessed areas, with lower values and some exceptions such as Persons employed - manufacturing of computers.


The regions with the lowest overall rankings are the Dél-Alföld and Severozápad regions. A value under 0.1 is also achieved in the Vzhodna Slovenija region, but this value does not differ substantially from those of the regions around the 26th position. More detailed characteristics of the regions with the lowest rankings are also described.

Dél-Alföld – All values from the area of value adding are below the averages for these variables. The highest value among these variables for the region is achieved for Value added professional and scientific activities to overall, the lowest for Gross value added mil. EUR of professional and sci-tech. activities per 1 employee, but the values are not highly varied.

In the area of human resources, the values are also relatively low, except for Persons employed in knowledge intensive services, where the score is close to the average. The lowest value among the variables for this region is identified for the variable Persons employed - information service activities.

In the case of the variables related to the attitude to the usage of digital technologies, three values are significantly below the average: Individuals who ordered goods/ services over the internet for private use, Individuals who used the internet, frequency of use and activities - internet banking and Individuals who used the internet, frequency of use and activities - sell of goods. The variable Individuals who used the internet for interaction with public authorities - submitting forms is identified as above average, and the remaining ones are below the average.

Severozápad region – In this region, the values related to the GVA are generally below the average, except for the variable Gross value added mil. EUR for ICT per 1 employee. The lowest value for the region in this area is identified for the variable Gross value added mil. EUR of professional and sci-tech. activities per 1 employee.

In the area of human resources, the values are relatively low, with the exception of the variable Persons employed in High and medium high manufacturing, where the score is above the average, and the value of Persons employed in knowledge intensive services, which is higher than the remaining scores in this group of variables.

In the area of the usage of digital technologies, four values significantly lower than the average were identified: Individuals who ordered goods/ services over the internet for private use, Individuals who used the internet for interaction with public authorities, Individuals who used the internet for interaction with public authorities - submitting forms and Individuals who used the internet, frequency of use and activities - sell of goods. The remaining values are closer to the average.

There are also groups of regions with small differences among them, one with scores around 0.2 and another around 0.15. The biggest differences are between the lowest and the highest values and the rest of the assessed regions.

5 Conclusion

Based on the statistical data gathered for the seventeen selected variables for the thirty regions from the Central Europe area, the weights of these variables have been calculated using two objective weighting methods, the Entropy method and the statistical variance method. The variables with the highest and the lowest importance for the ranking of the regions were identified, and it was found that the variables from the area of the usage of digital technologies have the lowest impact on the multi-criteria decision making.

The results from both methods have also been compared and the differences identified. There were no significant changes in the ranking of the impact of the variables, but the weights were distributed differently: in the case of the Entropy method, the difference between the lowest and the highest weight was smaller than in the case of the statistical variance method. The weights obtained from the Entropy method were then used in combination with the TOPSIS method, the overall scores of the regions were obtained, and the final ranking of the regions based on the selected variables was presented.


Four regions were identified with scores above 0.5: Praha, Bratislavský kraj, Wien and Közép-Magyarország. The results for these four regions were described in more detail by summarizing the main findings in each of the three groups of variables. The same procedure was applied to the regions with a low score, under 0.1, of which three were identified: Dél-Alföld, the Severozápad region and Vzhodna Slovenija. The descriptions of the achieved results were given for the regions Dél-Alföld and Severozápad, as they achieved similar scores, while the score of Vzhodna Slovenija is relatively close to that of the region in the 27th position of the ranking, which is slightly above 0.1.

When the application of the selected weighting methods, mainly the Entropy method, is evaluated, the main advantage of the method is that it eliminates the need for a subjective evaluation of the importance of the variables, so the results do not depend on a specific evaluator. The method was well applicable to data of various measures, assigning high importance to the variables which are suitable for distinguishing the regions, as also mentioned by Zardari et al. (2015, p. 65).

The scope of the research in this paper was significantly predetermined by the data available at the level of more detailed geographical areas. This aspect influenced the range of variables used for the assessment as well as the selection of the regions.

There are also several topics for further research in the area of variable selection and areas of interest (within the limits of data availability), and also in terms of the relations between the variables, namely the aspect of correlation. It would also be beneficial to compare the results within a time series to evaluate the development of the positions of the regions over time.

References

[1] Baller, S., Dutta, S. and B. Lanvin. (2016). The Global Information Technology Report 2016. [online]. Geneva: World Economic Forum and INSEAD. Available at: <www.weforum.org/reports/the-global-information-technology-report-2016> (cit. 12.11.2019).

[2] Bao, Q. et al. (2012). In: Kahraman, C. (ed.). Computational Intelligence Systems in Industrial Engineering: With Recent Theory and Applications. Paris: Atlantis Press, pp. 109–130.

[3] Brožová, H. et al. (2014). Modely pro vícekriteriální rozhodování. Praha: Credit.

[4] Deng, H., Yeh, C. H. and R. J. Willis. (2000). Inter-company Comparison Using Modified TOPSIS with Objective Weights. Computers and Operations Research, 27(10), pp. 963–973.

[5] European Commission. (2019). Digital Single Market: DESI by Components. [online]. European Commission. Available at: <https://digital-agenda-data.eu/charts/desi-components#chart={%22indicator%22:%22DESI%22,%22breakdown-group%22:%22DESI%22,%22unit-measure%22:%22pc_DESI%22,%22time-period%22:%222018%22}> (cit. 28.11.2019).

[6] Eurostat. (2019). Regional Statistics by NUTS Classification. [online database]. Luxembourg: European Commission. Available at: <https://ec.europa.eu/eurostat/data/database> (cit. 25.5.2019).

[7] Chan, H. K. and X. Wang. (2013). Fuzzy Hierarchical Model for Risk Assessment. New York: Springer.

[8] Mařík, V. et al. (2016). Průmysl 4.0: výzva pro Českou republiku. Praha: Management Press.

[9] Soldatos, J., S. Gusmeroli, P. Malo and G. Di Orio. (2016). In: Friess, P. and O. Vermesan (eds.). Digitising the Industry – Internet of Things Connecting the Physical, Digital and Virtual Worlds. Delft: River Publishers, pp. 153–181.

[10] Veber, J. et al. (2018). Digitalizace ekonomiky a společnosti: výhody, rizika, příležitosti. Praha: Management Press.

[11] Vogelsang, M. (2010). Digitalization in Open Economies – Theory and Policy Implications. Heidelberg: Physica-Verlag.


[12] Xu, J. (2014). Managing Digital Enterprise: Ten Essential Topics. Paris: Atlantis Press.

[13] Zardari, N. et al. (2015). Weighting Methods and their Effects on Multi-Criteria Decision Making Model Outcomes in Water Resources Management. Cham: Springer International Publishing.


Appendix 1

Table 1 – Variables related to gross value added

Source: Author’s processing based on data from Eurostat (2019)


Table 2 – Variables related to human resources

Source: Author’s processing based on data from Eurostat (2019)


Table 3 – Variables related to approach to digital technologies usage

Source: Author’s processing based on data from Eurostat (2019)