© CESNET, Association of Legal Entities
Zikova 4, 160 00 Prague 6 / www.cesnet.cz / ISBN 978-80-906308-0-2
Text: Pavel Satrapa, CESNET, a. l. e.
Graphic design: Petr Stupka, Radical Design, s. r. o.
TABLE OF CONTENTS

From History to the Present: 20 Years of the CESNET Association

Present
» CESNET 2016
» CESNET National e-infrastructure
» Communication infrastructure
» MetaCentrum
» Data storage
» Collaborative environment
» Security
» Support for communities
» Roadmap of Large Infrastructures for Research, Experimental Development and Innovation of the Czech Republic for the years 2016–2022

History
» Timeline
» Before the Association
» TEN-34 CZ
» High-Speed National Research Network and Its New Applications
» Optical National Research Network and Its New Applications
» CESNET Large Infrastructure & eIGer
» International Projects
» Timeline of projects
» Cooperation
CESNET 2016
It has been twenty years since representatives of all Czech universities and colleges and the Czech Academy of Sciences signed CESNET’s memorandum of association on 6 March 1996. Throughout this time, we have striven to advance and improve the quality of the infrastructure we operate and the services it provides.
Today, there is a 100 Gb/s network core, most of the nodes are connected by multiple 10 Gb/s links, and only the smallest ones have to make do with “just” 100 Mb/s. The bulk of the backbone network uses Dense Wavelength-Division Multiplexing (DWDM), which allows transmitting dozens of independent signals over a single optical fibre. Thanks to that, we can offer demanding applications their own infrastructure, separated from standard traffic.
But most significantly, we have expanded our activities beyond networks, although high-performance communication infrastructure still remains the basis for our activities today. We now offer grid and cloud environments for high-performance computing, high-capacity data storage, collaboration tools for distributed teams, support for user mobility, and easy user access to network services. All of this is connected with a number of mechanisms hidden “under the hood” which make sure the entire infrastructure runs smoothly and remains usable. These include system and component monitoring, the activities of our security team, and various authentication infrastructures.
Naturally, our activities require in-house research. We are at the cutting edge of today’s technology and we try to push it even farther. We have moved from software development to the development of our own hardware. We hold nine Czech, two European, and five US patents and have won several awards in the Czech Republic and abroad. The components and technologies we developed are used in several manufacturers’ products.
Looking back at our twenty-year history, we believe we can be proud of what we have achieved. And we intend to go on. In 2016, we are at the beginning of another big project aiming to further expand and enhance the parameters and services of our e-infrastructure, included in the Roadmap of Large Infrastructures for Research, Experimental Development and Innovation of the Czech Republic for the years 2016–2022. We hope that we will be as progressive and as valuable for our users in the coming years as we have been so far.

In 1996, we started with a network whose fastest links had a bit rate of 2 Mb/s.
CESNET NATIONAL E-INFRASTRUCTURE
It provides a complex of advanced information and communication services to research, development, and education organizations throughout the Czech Republic. It is based on state-of-the-art technologies and their innovative combinations.
Our e-infrastructure services are used by 94% of the scientific and research community in the Czech Republic. They can be accessed by all 54 institutes of the Czech Academy of Sciences, all 28 public universities and colleges, eight private universities and colleges, and almost 300 other organizations. Overall, there are some 450,000 individual users with access to our services.
Map of today’s infrastructure
SERVICES

Network services
» Internet protocol (IPv4 and IPv6) connectivity
» Dedicated circuits and subnetworks (VPN, lambda circuits, photonic circuits)
» Support for network applications (backup mail and DNS servers, antispam gateway)

Computation and development environments
» High-performance computing environment (MetaCentrum)
» Cloud services (MetaCloud, VMware)
» Development and testing environment (PlanetLab)

Data storage and backup
» Data storage
» User services (FileSender, ownCloud)

Collaboration support and multimedia
» Videoconferencing and web conferencing
» IP telephony
» Special video transmissions
» Streaming and multimedia archiving

Security services
» Security incident resolution
» Security risk monitoring
» Forensic laboratory

Identity management
» Federated access to services (eduID.cz)
» User roaming infrastructure (eduroam)
» User and server certificates
» User and access right management system (Perun)

Monitoring and measurement
» Monitoring of network traffic and qualitative parameters
» Time services

Consulting and training
» Expert seminars
» Security training
» CESNET Days
» Technical consulting
» Cisco Academy
NUMBER OF SERVICES
1995 → 1
1996 → 2
1997 → 3
1998 → 5
1999 → 8
2000 → 14
2001 → 17
2002 → 17
2003 → 17
2004 → 22
2005 → 24
2006 → 24
2007 → 24
2008 → 24
2009 → 27
2010 → 27
2011 → 27
2012 → 31
2013 → 34
2014 → 36
2015 → 38
COMMUNICATION INFRASTRUCTURE
Building and developing a state-of-the-art backbone network were the reasons why the Association was formed, and they have remained the core of its activities. We strive to keep up with what is happening in communication technology and to offer our connected organizations a communication infrastructure with parameters that are not commonly available on the market.
Bit rates have increased by several orders of magnitude and today’s backbone network capabilities are something completely different. The second half of the 1990s was dominated by ATM; backbones offered 34 Mb/s and 155 Mb/s. The turn of the new century marked the start of an era of gigabit transmission rates. Packet over SONET and especially various variants of Ethernet completely dominated local area as well as wide area networks. The core of our current communication infrastructure works at 100 Gb/s; most other links offer 10 Gb/s. Only those nodes whose users have lower demands on data transmission volumes are connected at lower bit rates.
A major technological breakthrough came with Dense Wavelength-Division Multiplexing (DWDM), a technology that we started to deploy in 2004 and that can currently be found in an overwhelming majority of our backbone links. By transmitting multiple independent signals over a single optical fibre, it literally multiplies the capacity of an optical infrastructure. This allows us to separate experimental data transmissions from routine traffic and increase the reliability of the whole network. Most importantly, we can offer dedicated links and entire networks for special applications with extraordinary transmission demands.
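The capacity gain from DWDM is simple arithmetic: each wavelength carries an independent signal, so the aggregate capacity of one fibre scales with the channel count. A minimal sketch (the channel count and per-channel rate below are illustrative, not CESNET’s actual configuration):

```python
def dwdm_capacity_gbps(channels: int, rate_per_channel_gbps: float) -> float:
    """Aggregate capacity of one fibre carrying `channels` independent wavelengths."""
    return channels * rate_per_channel_gbps

# e.g. 40 wavelengths of 10 Gb/s sharing a single fibre:
print(dwdm_capacity_gbps(40, 10))  # 400
```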
Jan Radil and Josef Vojtěch (both CESNET) and Miroslav Karásek (Institute of Photonics and Electronics, Czech Academy of Sciences) received the education minister’s award for research on 26 November 2007.
Naturally, technology has developed considerably over the twenty years of our history.
We also provide appropriate external connectivity. We are connected to the standard Internet (6 Gb/s), the NIX.CZ peering centre (2×20 Gb/s), and, more importantly, to the GÉANT pan-European backbone academic network (100 Gb/s), the academic networks of Slovakia, Poland, and Austria (10 Gb/s each), as well as the GLIF experimental optical infrastructure (10 Gb/s). Since our connection to the GÉANT network also implements DWDM, dedicated links and networks can be created internationally.
We conduct our own research and development to support network development. We build CzechLight, a line of our own optical elements that are deployed in many backbones. We also develop software and hardware for network traffic monitoring and analysis.
VOLUME OF INTERNATIONAL TRAFFIC (chart: inbound and outbound traffic, 2005–2014)
METACENTRUM
Our high-performance computing environment is named MetaCentrum. Its history goes back to the mid-1990s, to a project that involved building five independent and interconnected university computing centres.
metacentrum.cz
It began to be managed by CESNET in 1998, and since 2009 it has been officially recognized as the National Grid Infrastructure (NGI) of the Czech Republic, which it represents in the European Grid Infrastructure (EGI). The concept behind MetaCentrum is interconnecting needs and existing capacities. Its interconnected resources represent a wide range of technologies, from big computers with shared memory to clusters of many identical, relatively standard-sized nodes. For every user and their task, the optimum set of resources is always sought among all available resources. Overall, the interconnected capacity consists of more than 12,000 computing cores and 2 PB of disk space.
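The idea of seeking a suitable set of resources for each task can be sketched as follows. This is a deliberately simplified first-fit policy with hypothetical node names; MetaCentrum’s real scheduler weighs many more criteria (queues, fairness, data locality):

```python
def pick_resources(job_cores, job_mem_gb, nodes):
    """Return the name of the first node satisfying the job's requirements.

    `nodes` is a list of (name, free-capacity) pairs. A real grid
    scheduler does far more; this only illustrates the matching step.
    """
    for name, free in nodes:
        if free["cores"] >= job_cores and free["mem_gb"] >= job_mem_gb:
            return name
    return None  # no node fits; the job waits in the queue

# Hypothetical cluster: a small node and a large shared-memory machine.
cluster = [
    ("small-node", {"cores": 8, "mem_gb": 32}),
    ("smp-machine", {"cores": 128, "mem_gb": 2048}),
]
print(pick_resources(16, 256, cluster))  # smp-machine
```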
MetaCentrum integrates heterogeneous computing resources owned by several organizations, creating an umbrella environment for them. CESNET guarantees and conceptually develops the shared environment’s services and technologies and owns about one half of the computing capacity. The rest is provided by the institutions involved, with which we cooperate closely. A large share of MetaCentrum’s resources consists of the CERIT-SC large infrastructure; additional computing capacities are provided by the Institute of Physics, CEITEC, the University of West Bohemia, the University of South Bohemia, and others.

It was conceived with the idea of interconnecting various owners’ resources in order to create a uniform environment for the entire academic community.
COMPUTING CORES (chart: growth from 262 cores in 2004 to 12,256 in 2015)
MetaCentrum offers several methods for accessing the resources, from standard task execution through cloud services that allow creating a customized virtual environment to specialized and experimental environments. Examples include GP-GPU computing or using the MapReduce model in a Hadoop environment.

Software is an important complement to the hardware. It covers a wide range of scientific fields and computing methods. Users can take advantage of ready-made applications or use the available tools for developing and optimizing their own applications. This also applies to computation specification and run control, with the command line and web interface complemented by an integrated and modular scientific workflow environment, called a portal (Galaxy).
SOFTWARE

There are over 250 programs available to users. Some examples of them are:

Bioinformatics
» CS-Rosetta
» Galaxy
» Chipster
» MrBayes
» PhyloBayes
» PhyML

DNA sequencing and analysis
» Blast
» Bowtie
» BWA
» CLCbio Genomics Workbench
» Cufflinks
» Geneious
» RepeatExplorer
» RepeatMasker
» SAMtools
» SOAPdenovo
» Stacks
» TopHat
» Trinity
» Velvet

Computational chemistry
» Amber
» Gaussian/GaussView
» Gromacs
» MolPro
» Turbomole

Mathematics
» Grid-Mathematica
» Maple
» Mathematica
» MATLAB
» Octave
» R

Engineering and material simulations
» Ansys (Fluent + CFX + Mechanical + HPC)
» OpenFOAM

Development tools
» Allinea DDT
» Intel CDK
» Numpy
» PGI CDK
» Scipy
» TotalView
DATA STORAGE
Data storage as a standalone service is the newest item in our offer – we put our first large-capacity data storage facility into operation in 2012. However, the popularity and utilization of our storage facilities have been growing rapidly.
filesender.cesnet.cz
owncloud.cesnet.cz
du.cesnet.cz
All of them are based on the Hierarchical Storage Management (HSM) concept, under which each storage facility has several data tiers with increasing capacity and decreasing speed and energy demands, from fast disk arrays to tape libraries. The storage management system automatically moves data that has not been used for some time to the slower tiers while keeping currently accessed data in the faster tiers. This allows building a sufficiently fast and large storage facility at a significantly lower cost in comparison to keeping all data in standard disk arrays.
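The tiering decision described above can be sketched roughly like this — an illustrative threshold-based rule with an assumed 30-day “hot” window, not the actual policy of our storage systems:

```python
import time

def tier_for(record, now, hot_window_s=30 * 24 * 3600):
    """Decide which tier a file belongs to under a simple HSM policy:
    recently accessed data stays on fast disk, the rest migrates to tape.
    (Illustrative only; production HSM systems use far richer policies.)
    """
    idle = now - record["last_access"]
    return "disk" if idle < hot_window_s else "tape"

now = time.time()
files = {
    "results.dat": {"last_access": now - 3600},            # touched an hour ago
    "archive.tar": {"last_access": now - 90 * 24 * 3600},  # idle for 90 days
}
for name, rec in files.items():
    print(name, "->", tier_for(rec, now))
# results.dat -> disk, archive.tar -> tape
```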
We now operate three storage facilities – in Plzeň, Jihlava, and Brno – with a total physical capacity of over 21 PB.
There are also interesting options for team collaboration – users can use the storage facilities to transfer large amounts of data or even share their data with others directly, allowing team members to participate in data processing. A number of communication protocols for access to stored data are available for these purposes: NFSv4, rsync, SCP, FTP, the Globus system, and others. The storage facilities are also directly accessible from the MetaCentrum environment.
We also offer specialized services such as FileSender for sharing large amounts of data, allowing a user to upload a file to the storage using a simple web interface and send a download link to the recipient via e-mail.

Considerable popularity has been gained by our ownCloud data syncing and sharing service. There is a special client program available that allows users to synchronize data between the server and a local computer or mobile device. Another option for working with their data is using a web interface. Data can be easily shared with other users or made publicly available in a controlled manner.
The primary use is for backing up and archiving.
FILES STORED IN OWNCLOUD (chart: number of files, 2014/06–2016/02)
NETWORK TRAFFIC OF THE PLZEŇ STORAGE (chart: inbound and outbound traffic in Mb/s, 2013/07–2015/12)
VOLUME OF DATA STORED IN THE PLZEŇ STORAGE (chart: data in PiB on tapes, on MAID, and combined, 2013/01–2015/12)
COLLABORATIVE ENVIRONMENT
The team that was building and developing the backbone network had been dispersed throughout the Czech Republic since the very beginning. That is why we paid a lot of attention to resources enabling remote collaboration. We were then able to offer our experience to a wide range of users.
Originally, its backbone was an IP telephony infrastructure, but videoconferencing and web conferencing tools have prevailed over time. Videoconferences rely on specialized software or hardware that allows users to communicate. Our multipoint units make such communication possible among a larger group of users and can record it if needed. Altogether, they save the participants a great amount of time and money.

We have progressed from our initial attempts at transmitting voice and video over a computer network, through building specialized videoconferencing rooms and conducting our own research into appropriate tools, to today’s extensive infrastructure that offers many possibilities.
The laboratory environment gives birth to new applications and allows us to participate in teaching and find new colleagues.
Web conferences offer a lower quality of the transmitted video signal, but they can make do with a common web browser and usually include additional options for collaboration, such as document exchange, shared “whiteboards”, and more. The whole environment is complemented by a reservation system that allows user coordination.
We specialize in transmitting high-quality video – HD, 4K and higher resolutions, stereoscopic transmissions, or minimum-latency transmissions, including transmissions among multiple places at a time. To this end, we have been developing UltraGrid, our own software solution enabling video compression on graphics adapters, and MVTP (Modular Video Transmission Platform), a hardware system achieving extra-low latency. UltraGrid has won an international award and MVTP has been patented; each of them resulted in the formation of a spin-off (Comprimato and Infivision, respectively).
We have found interesting applications in medicine (e.g. live streaming of surgeries for medical conferences), the film industry, culture, and sports broadcasting. For example, the top-level parameters allow organizing a joint concert with musicians located in several countries. Crucial for development in the field of special transmissions and visualization on ultra-high-resolution tiled walls are SAGElab and Sitola, laboratories that evolved from our collaboration with universities.
SECURITY
csirt.cesnet.cz
warden.cesnet.cz
mentat.cesnet.cz
flab.cesnet.cz
We place great emphasis on the security of our infrastructure and its services.
We officially founded our security team, CESNET-CERTS, at the beginning of 2004, when it was recognized as a CSIRT-type team by the global security infrastructure and appeared in the global directories of security teams worldwide. Its core activities involve detecting and responding to security incidents, usually in cooperation with the security teams and network administrators of associated organizations. An important part of its activities is cooperation with similar teams in the Czech Republic and abroad.
In 2008, CESNET-CERTS became the first team in the Czech Republic to receive international accreditation under the Trusted Introducer activity of TERENA (now GÉANT). We have participated in a number of international exercises verifying mutual cooperation and the ability to respond to large-scale incidents.
Security issues are specific in that they virtually cannot be handled individually. The source and the target of an attack often come from different organizations or even different countries and continents. Attack prevention and response require extensive collaboration at many levels.
We created CSIRT.CZ using the tools and procedures of our CESNET-CERTS team and performed its tasks until 2010. Since 2011, its operations have been carried out by CZ.NIC.
Besides responding to incidents that have occurred, prevention is also very important in computer security. We have been running and developing several systems that identify weaknesses in the security of infrastructure and services and send notifications to their administrators. Since 2013, we have been running FLAB, our forensic laboratory, which offers more extensive security testing (penetration and load tests of networks and services) and security incident analysis.
To be able to successfully operate and develop the CESNET e-infrastructure, we need detailed information about its behavior, status, and usage, and particularly a complex environment supporting the security teams. Over the years, we have developed and operated various technologies and tools for automatic network monitoring, detection of anomalies, and providing the information administrators need to investigate and eliminate problems.
CESNET has long-term experience in packet-based analysis of network traffic. The first step is the measurement of network flows, performed by COMBO hardware accelerators developed in-house. Storage and processing of the results are ensured by FTAS, a system for continuous monitoring of IP traffic in large network infrastructures. Its functionality is based on advanced processing of IP network flow data (NetFlow). To monitor the status and behavior of large, powerful infrastructures based on SNMP, we developed the G3 system.
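The first-pass processing of flow data can be illustrated with a small sketch: aggregate NetFlow-style records per source address and flag the sources whose total volume exceeds a threshold. The record format and threshold are hypothetical; FTAS itself performs far richer analysis:

```python
from collections import defaultdict

def flag_heavy_sources(flows, threshold_bytes):
    """Sum bytes per source address and return the sources exceeding
    `threshold_bytes` - a schematic first pass over flow records."""
    totals = defaultdict(int)
    for flow in flows:
        totals[flow["src"]] += flow["bytes"]
    return sorted(src for src, total in totals.items() if total > threshold_bytes)

# Hypothetical flow records (documentation addresses from RFC 5737):
flows = [
    {"src": "198.51.100.7", "dst": "203.0.113.2", "bytes": 9_000_000},
    {"src": "198.51.100.7", "dst": "203.0.113.9", "bytes": 4_000_000},
    {"src": "192.0.2.10",  "dst": "203.0.113.2", "bytes": 1_200},
]
print(flag_heavy_sources(flows, threshold_bytes=10_000_000))  # ['198.51.100.7']
```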
Experience and practical results in the area of large-network monitoring were transferred into the INVEA-TECH company, currently Flowmon Networks, a spin-off of Masaryk University and Brno University of Technology.
We put a lot of effort into security research and development. We have participated in a number of projects focusing on new methods for detection, defence, and information sharing. We have been developing Warden and Mentat, systems that play a crucial role in this. We have the ambition to turn Mentat into a powerful SIEM tool.
We should also mention our involvement in FENIX, a project organized by NIX.CZ. It originated in 2014 in reaction to large-scale DDoS attacks on important Czech servers and allows its members to shut off external traffic while preserving mutual communication. CESNET is one of the founding members.
Important aspects of defence include awareness, cooperation, and exchange of experience. Therefore, we have organized a number of security seminars and training courses, intended both for network, computer, and service administrators and for end users. In addition, we are a member of several task forces – TF-CSIRT, an international task force associated with GÉANT; the CSIRT.CZ working group organized by the Czech National CSIRT; eCrime, a Jihlava-based working group associated with the Vysočina Region; etc. – where we often present our experience.
We took advantage of our experience with the CSIRT foundation and operations in a project named Cyber Threats from the Perspective of the Czech Republic’s Security Interests, which gave birth to the Czech Republic’s national security team, CSIRT.CZ.
INTERNATIONAL SECURITY EXERCISES

We participated in international security exercises organized by ENISA (European Union Agency for Network and Information Security):
» Cyber Europe 2010
» Cyber Atlantic 2011
» Cyber Europe 2012
» Cyber Europe 2014
SUPPORT FOR COMMUNITIES
One of the key requirements for the e-infrastructure we operate is its ability to meet special demands. Let’s take a look at some cases where we supported scientific communities by building environments “tailored” to their needs.
PARTICLE PHYSICS

Particle physics, or high-energy physics, has a long history of generating large amounts of data and pushing the limits of what computer technology can do. This reached whole new dimensions when the Large Hadron Collider (LHC) was built at CERN. Putting it into operation was expected to increase the volume of data by several orders of magnitude, and it was obvious that the existing model of centralized processing at CERN’s computing centre would not suffice.
Therefore, the construction of a distributed infrastructure to process the data started in 2000. The infrastructure combined three fundamental aspects: storage capacity, computing resources, and an interconnecting network. All of them had to offer state-of-the-art parameters to be able to handle the expected amount of data.
The European foundations of that infrastructure were laid by the DataGrid project, which was followed by other projects. CESNET joined those projects with its MetaCentrum grid environment and collaborated in the formation of a European particle physics grid. We built a network of dedicated links for extreme data transmissions, interconnecting the individual centres nationally and internationally.
This allowed our community of particle physicists fully-fledged involvement in LHC experiments. One of the acknowledgements of the qualities of our infrastructure was CHEP 2009 (Computing in High Energy and Nuclear Physics), an important conference held in Prague and co-organized by CESNET.
BIOINFORMATICS

A more recent example of our support for the scientific community is the ELIXIR project, started in 2013. Its goal is to build an efficient and economical system to store, retrieve, and process data from molecular biology research.
This environment is more heterogeneous than the one built for particle physics. Its data will not originate from a single source but comprise the results of a number of different laboratories. Therefore, the integration and interoperability of data from various sources will be a major task to solve.
The system under construction has a hierarchical structure with its centre in Hinxton, UK, which is where the European Bioinformatics Institute (EBI), coordinating the entire project, is located. This is complemented by national nodes with a uniform architecture in 13 participating countries. The technical facilities are similar to the infrastructure mentioned above – storage capacity, computing capacity, and communication infrastructure. This is the basis on which unique software is being developed to provide the bioinformatics community with the necessary tools.
CESNET is one of 12 ELIXIR participants in the Czech Republic. We take care of the technical aspects of the national node’s operation and development – we provided the hardware needed for its launch and have been collaborating on the development of its software.

Photographs
1 CERN data processing (source: CERN)
2 ELIXIR national node (source: IOCB AS CR)
3 Particle accelerator at CERN (source: CERN)
ROADMAP OF LARGE INFRASTRUCTURES FOR RESEARCH, EXPERIMENTAL DEVELOPMENT AND INNOVATION OF THE CZECH REPUBLIC FOR THE YEARS 2016–2022
Drawn up by the Ministry of Education, Youth, and Sports, this strategic document defines the model of support for large infrastructures for the purposes of science, research, development, and innovation.
This is the second generation of this strategic document. Its predecessor was approved by a government resolution on 15 March 2010 and ensured support for the large infrastructures from 2011 to 2015. The CESNET Large Infrastructure was already in the first generation of the Czech Roadmap and, thanks to its excellent results, it has been included in the current generation, too.
The Czech Roadmap covers a range of natural sciences as well as humanities. Overall, it contains 58 large infrastructures: 22 under physical science, eight under the energy sector, seven under environmental science, ten under medicine, eight under social science and humanities, and three under information and communication technology. Naturally, CESNET is included in the last-mentioned category, together with the IT4Innovations national supercomputer centre and the CERIT Scientific Cloud infrastructure. We cooperate closely with these two large infrastructures, taking mutual advantage of the services offered.
It builds on the ESFRI Roadmap, which describes research infrastructures at the European level. The Government of the Czech Republic took cognizance of the document on 30 September 2015.
TIMELINE 1996–2006

» CESNET founded | MetaCentrum created (as a separate project)
» 34 Mb/s backbone network; Prague–Brno 155 Mb/s | Academic networks transferred under TEN-34 CZ | MetaCentrum transferred under CESNET; national grid formed
» Experimental IPv6 backbone; Prague–Brno 2.5 Gb/s | Pilot IP telephony project
» 2.5 Gb/s backbone network | IP telephony in routine operation | Development of NetFlow analyser started | Centralized authentication and authorization system created | First intrusion detection systems installed | NTP servers installed
» DWDM with 10 Gb/s channels on the Prague–Brno link | eduroam launched | CESNET-CERTS security team accepted by global infrastructure and Trusted Introducer | One of the first cards for 10 Gb/s Ethernet developed | First production version of the FTAS network monitoring system
» Redundant backbone network | Backbone completely switched to fibre optics | Streaming of Czech Radio’s broadcasts started | Development of CzechLight devices started | Development of COMBO cards started
» Network core upgraded to DWDM with 10 Gb/s channels | First multipoint conferencing unit (MCU) put into operation | Intercontinental multipoint transmission of uncompressed Full HD video | We joined UltraGrid development
» IPv6 in production operation | CESNET certification authority started its operations | Experimental CzechLight network established | First single-fibre transmissions | Sitola laboratory opened (in collaboration with Masaryk University)
» Dark-fibre interconnection of the CESNET, ACOnet (Austria), and SANET (Slovakia) networks
TIMELINE 2007–2016

» MetaCentrum offers virtual environments | First HD MCU put into operation | Adobe Connect web conferencing system put into operation | eduID.cz identity federation created | CESNET-CERTS internationally accredited | CSIRT.CZ operations started | Timestamp authority launched
» INVEA-TECH, a. s. spin-off founded to put the results of our research & development into practice | Network backbone upgraded to DWDM with 10 Gb/s channels | First 4K (UHD) video transmissions
» MetaCentrum officially represents the Czech Republic as the National Grid Infrastructure | MVTP created
» Backbone DWDM system rebuilt | MetaCentrum offers GP-GPU computing graphics cards | Founding member of European FedCloud | Development of Warden and Mentat systems started | CSIRT.CZ handed over to CZ.NIC | Optical transmission of time between Prague & Vienna | IPv6 Lab opened (in collaboration with Czech Technical University)
» Roadmap of Large Infrastructures for Research, Experimental Development and Innovation of the Czech Republic approved | CSIRT.CZ team declared the Czech Republic’s National CSIRT
» First 100 Gb/s links | MetaCloud – cloud access to MetaCentrum | Plzeň data storage facility opened | FileSender launched | First version of Warden launched | DNSSEC deployed | 8K video transmission | UltraGrid won the ACM Multimedia 2012 Best Open-Source Software Award | Adobe Connect capacity greatly upgraded
» Founding member of ELIXIR CZ | Comprimato Systems spin-off founded | 100 Gb/s network core | Brno and Jihlava data storage facilities opened | FLAB opened | HD MCU capacity upgraded | Videoconferencing reservation portal launched | SAGElab founded (in collaboration with Czech Technical University)
» Infivision spin-off founded | ownCloud launched | Founding member of FENIX | COMBO-100G, the world’s first PCI-E adapter for 100 Gb/s Ethernet, developed (in collaboration between CESNET and INVEA-TECH)
» Hadoop cluster for big data processing launched | First version of Mentat launched
BEFORE THE ASSOCIATION
The Czech Republic’s science, research, and education computer network is older than CESNET. The first possibilities of access to international communications opened up in the early 1990s.
In 1990, an IBM mainframe owned by the Czech Technical University in Prague was connected to the European Academic and Research Network (EARN), the European offshoot of BITNET. In 1991, a number of universities gained access to UUnet, which allowed e-mail transmissions over phone lines.
Those pioneering times of networking were characterized by highly limited options and huge enthusiasm among the people involved. Methods for providing universities, the Academy of Sciences’ institutes, and other organizations with full-scale access to the Internet were explored intensively.
The turning point came
in February 1992, when
the Internet was officially
launched in what was then
Czechoslovakia. In the
same year, the Ministry of
Education, Youth, and Sports
supported a project for
a backbone network named
FESNET (Federal Education
and Scientific NETwork),
which was to interconnect
domestic academic
institutions.
Because the country split
up, the network was not put
into operation until early 1993,
in a somewhat limited form,
and with its name changed
to CESNET (Czech Education
and Scientific NETwork). It
interconnected eight cities at
a bit rate of 19.2 kb/s; only the
most important link between
Prague and Brno operated
at 64 kb/s. The network
was built collaboratively by
a number of universities and
administered by the Czech
Technical University’s Regional
Computer Centre.
The network was extremely
popular. Link capacities grew
rapidly and the number of
newly connected towns and
organizations kept increasing.
To tap into a subsidiary source
of income, the network began
providing Internet access
to commercial customers.
However, there was the
notorious problem with
insufficient bandwidth, with
upgrades being very costly.
A whole new dimension
was then opened up by the
TEN-34 CZ project and the
formation of the Association,
which resulted in the
separation of academic and
commercial networks.
The Association operated the
commercial CESNET network
until 2000, when it was sold
to Contactel. Since that time,
the CESNET Association
has focused solely on
e-infrastructure for science,
research, and education.
Photographs
1 This panel delivered the first Internet connectivity in the Czech Republic
2 Initial topology of the CESNET network
TEN-34 CZ
Telecommunication monopolies continued to exist in many European countries in the first half of the 1990s. This resulted in high prices of telecommunication services and transmission rates lagging behind the USA. Efforts to improve the situation for the scientific and research community gave birth to a project named TEN-34 (Trans-European Network at 34 Mbps), which aimed to build a backbone interconnecting academic networks in European countries using high-speed lines.
1996–1998
The Czech Republic was
the only country from the
former Eastern Bloc to join
TEN-34, announcing its
TEN-34 CZ programme.
Universities, colleges,
and the Czech Academy
of Sciences reached the
conclusion that the best way
of joining the programme
would be through a special
independent organization
created for that purpose.
The result was the formation
of the CESNET Association,
which subsequently became
responsible for the project
named TEN-34 CZ Network
Implementation.
It allowed us to build
a backbone network with
state-of-the-art parameters –
the basic bit rate was 34 Mb/s
and the Prague–Brno link was
even upgraded later to 155
Mb/s. Due to lack of available
transmission services, some
links were implemented by
radio relay, which was a highly
unusual engineering solution.
During the project, scientific,
research, and academic
institutions moved from the
original CESNET network to the
newly built TEN-34 CZ, which
resulted in a split between the
commercial network and the
academic network.
Participation in the project was
conditional on building an analogous
national infrastructure.
Key results
» Construction of a 34–155 Mb/s backbone network
» Initiation of international cooperation
» Strict separation of academic and commercial traffic
We focused mostly on
building and developing
the backbone network and
on directly related activities
(network management,
network development
planning, etc.). However,
we were already concerned
with some application fields.
These included our first steps
in videoconferencing and IP
telephony or the construction
of a sophisticated
infrastructure of WWW caches
that were intended to reduce
network load.
Our participation in the TEN-34
project was also the beginning of
a long series of international projects
we have been and still are involved in.
Thanks to those projects, CESNET has
gained the position of a respected partner
on the international scene.
Network topology at the end of the project
HIGH-SPEED NATIONAL RESEARCH NETWORK AND ITS NEW APPLICATIONS
In 1999, the Ministry of Education changed its research funding strategy and initiated an era of large, specifically funded research projects. Under that programme, we obtained funding for a research project named High-Speed National Research Network and Its New Applications, which aimed to further develop the national communication infrastructure for science, research, and education. It had a somewhat specific position, because in reality it was more of an infrastructure-focused activity that was primarily intended to support the research of connected organizations.
1999–2003
From the perspective of
the backbone network, the
research project brought
about literally revolutionary
changes. We abandoned
the ATM technology used
in the previous generations
of the backbone network
and switched to PoS and
especially various versions
of Ethernet. This resulted
in upgraded bit rates on
the order of gigabits – we
deployed 2.5 Gb/s on the
Prague–Brno link in 2000
and on other crucial links
a year later.
We stopped leasing
transmission services on
intercity links and started
leasing fibre optics fitted with
our own technology, which
gave us far more options
for future development. We
found this concept, called
Customer Empowered Fibre
(CEF) Network, to be very
useful and became one of
its pioneers worldwide. We
also significantly changed the
topology of the backbone
network, which was previously
built as a tree. We turned it
into a multi-ring network
during the research project,
which provided major nodes
with redundant connection.
Our research activities
focused mainly on network
technology. Among other
things, we began to develop
our own hardware – at first
with the aim of developing
a hardware-accelerated router
(Liberouter), later on focusing
more on monitoring tools and
optical transmission elements.
The research project also
initiated the expansion
of our services beyond
communication infrastructure
because it involved a range
of application fields. Most
importantly, we developed
MetaCentrum, our grid
environment for high-
performance computing,
which was originally created
as a separate project. We
offered users IP telephony
and videoconferencing
services, our own certification
authority, and many other
services.
Nevertheless, its categorization as
a research project meant that we had to
step up our own research activities.
Key results
» Gigabit backbone network
» Redundant connection of nodes
» Switch to fibre optics and CEF approach
» Development of our own hardware
Network topology at the end of the project
OPTICAL NATIONAL RESEARCH NETWORK AND ITS NEW APPLICATIONS
Our successful execution of the first large research project allowed us to follow up with another large project, which was the core of our activities from 2004 to 2010. Its name suggests the greater role played by optical technology in backbone network development. However, there were additional fields of our activities that were quickly evolving, too.
From the perspective of communication
infrastructure, the crucial change was
converting the backbone network to
DWDM, which allows transmitting several
independent channels over a single fibre.
2004–2010
We boosted the development
of our own hardware. We
developed a family of optical
transmission elements named
CzechLight, several cards and
devices for network traffic
monitoring and processing
(COMBO, FlowMon, NIFIC,
MTPP), or special devices
for top-quality video signal
transmissions (MVTP). We
have successfully used
these results in international
projects and some of them
are even manufactured and
commercially available.
Our MetaCentrum was
virtualized during the second
large research project to offer
its users greater capabilities.
Its development involved
progressively increasing its
computing capacity (it offered
about 250 computing cores at
the beginning of the research
project and almost two
thousand cores, naturally with
higher performance, at its end)
and expanding its range of
available software.
For users, the most noticeable
benefit was the creation of a remote
collaborative environment
(video and webconferencing)
and the establishment of eduroam
and eduID.cz. The European
roaming infrastructure eduroam
supports mobility of academic
users, who can use it to easily
connect to the Internet when
visiting another organization.
The eduID.cz system facilitates
access to network services
via single sign-on in the user’s
home organization.
Our security team, CESNET-CERTS,
was the first team in
the Czech Republic to receive
international accreditation.
It built several systems for
security monitoring and
warning against potential risks
during the research project.
It then made good use of its
experience when collaborating
on the formation of CSIRT.CZ,
the national security centre.
Gradually, we worked
more and more with several
user communities with
extraordinary demands on
communications. Typical
examples are physics
(especially high energy
physics and the processing
of data obtained from CERN
experiments) and medicine.
The software (UltraGrid)
and hardware (MVTP) we
developed have been used
in a number of unique
transmissions.
Together with the deployment of a 10 Gb/s bit rate, this
resulted in a dramatic increase in backbone throughput.
In addition, DWDM enabled parallel and mutually
uninfluenced transmission of standard traffic, experimental
signals, and communications reserved for special applications.
Key results
» Backbone network with DWDM and a bit rate of n × 10 Gb/s
» Creation of eduroam and eduID.cz
» Creation of a video and webconferencing environment
» Virtualization of MetaCentrum
» A number of devices and components developed, some manufactured commercially
» Accredited security team and collaboration on CSIRT.CZ formation
Network topology at the end of the project
CESNET LARGE INFRASTRUCTURE & eIGeR
2011–2015
The period from 2011 to 2015 was dominated by two large, mutually complementary projects: CESNET Large Infrastructure (2011–2015) and Extension of National R&D Information Infrastructure in Regions (eIGeR, 2011–2013).
Their shared objective was to rebuild the national
research network into a large infrastructure comprising
all information and communication e-infrastructures
necessary for the Czech Republic’s inclusion in
the European Research Area and connection to
the e-infrastructures described in the ESFRI Roadmap.
The projects brought the
communication infrastructure
to a new qualitative level. The
DWDM infrastructure was
completely renewed to offer
a higher number of channels
and greater bit rates. This was
followed by switching the
core to 100 Gb/s.
Other areas experienced no
less important changes. We
installed several new clusters
and developed software to
extend the capabilities of the
MetaCentrum computing
infrastructure, which was
reflected by a more than
fivefold increase in its
utilization. We launched
a completely new data storage
service; our first data storage
facility was put into operation
in Plzeň in early 2012 and two
others – in Brno and Jihlava
– followed in 2013. Altogether,
we prepared a storage
capacity exceeding 21 PB for
our users. Our collaborative
environment also underwent
major development – the
fundamentally improved capacity
of the webconferencing system
and the installation of new
videoconferencing units with
recording capabilities made it
possible to handle the requirements
of an ever-increasing number
of users.
The final assessment of
the results of the CESNET
Large Infrastructure
project, carried out by an
international committee,
gave us the highest rating
of A1. The quality of our
work is also evidenced by an
honourable mention in the
Innovation of the Year 2014
competition, organized by
the Association of Innovative
Entrepreneurship of the
Czech Republic.
Honourable mention in the Innovation of the Year and its presentation on 5 December 2014
Key results
» Upgrading the network core to 100 Gb/s
» Increasing the number of DWDM channels up to 80
» Raising the number of MetaCentrum cores from 2,000 to 10,000
» Building data storage facilities
» Installing robust web conferencing systems
» Establishing several laboratories (SAGElab, FLAB)
INTERNATIONAL PROJECTS
International collaboration is a crucial part of our activities. After all, the Czech Republic’s involvement in the European TEN-34 project played an important role in the Association’s formation.
Throughout our existence, we have been joining projects aimed at building science, research, and education infrastructures and at research into new technologies or their applications. We have participated in the following projects to date:
» AARC (Authentication and Authorization for Research and Collaboration): 2015–2017
› Developing federated identity infrastructure for the authentication and authorization of education users
» BEBA (Behavioural-Based Forwarding): 2015–2017
› Developing new approaches for software-defined networks
» COMPLETE (Communication Platform for Tenders of Novel Transport Networks): 2015–2017
› Sharing knowledge to optimize resource utilization in the creation of state-of-the-art networks
» DataGrid: 2001–2003
› Creating an extensive computing and data infrastructure for the evaluation of CERN experiments
» Digital Restoration of Czech Film Heritage: 2014–2016
› Digitizing and restoring old Czech films using new network technologies
» EGEE (Enabling Grids for E-sciencE): 2004–2006
› EGEE II: 2006–2008
› EGEE III: 2008–2010
› A series of related projects that built and developed a European grid connected to analogous infrastructures outside Europe (USA, Japan, Korea)
» EGI_DS (European Grid Initiative – Design Study): 2007–2009
› Designing and taking first steps to implement a sustainable pan-European grid infrastructure
» EGI-Engage: 2015–2017
› Developing the European backbone infrastructure for data processing and storage, involving large user communities
» EGI InSPIRE (Integrated Sustainable Pan-European Infrastructure for Researchers in Europe): 2010–2014
› Developing the European grid infrastructure built by the EGEE series of projects
» ELIXIR-EXCELERATE: 2015–2019
› Creating a European bioinformatics infrastructure with unique tools for the bioinformatics scientific community
» EMI (European Middleware Initiative): 2010–2013
› Developing middleware components for the EGI grid and other distributed computing infrastructures
» EUAsiaGRID: 2008–2010
› Promoting the use of the gLite middleware developed for the EGEE grid in Asian grids
» EuroCareCF (European Coordination Action for Research in Cystic Fibrosis): 2007
› Coordinating basic and clinical research into cystic fibrosis and related diseases
» FEDERICA (Federated E-infrastructure Dedicated to European Researchers Innovating in Computing network Architectures): 2008–2010
› Creating a European virtual infrastructure for testing innovative communication architectures
» GÉANT: 2000–2004
› GN2: 2004–2009
› GN3: 2009–2013
› GN3plus: 2013–2015
› GÉANT 2020: 2015–2016
› A series of related projects that created and developed the GÉANT pan-European backbone gigabit communication infrastructure for science, research, and education. This network is used by over 40 million users in 38 countries today.
» CHAIN (Co-ordination & Harmonisation of Advanced e-INfrastructures): 2010–2012
› Interconnecting regional grid infrastructures in Asia, Latin America, and Africa with the EGI grid
» CHAIN-REDS (Co-ordination and Harmonisation of Advanced e-Infrastructures for Research and Education Data Sharing): 2012–2015
› Intercontinental support for the technological and scientific cooperation of various e-infrastructures
» INDIGO-DataCloud (INtegrating Distributed data Infrastructures for Global ExplOitation): 2015–2017
› Developing software components for cloud and grid infrastructures
» Ithanet: 2007–2008
› Using information and communication technology for research on thalassaemia and related haemoglobinopathies in the Mediterranean region
» LOBSTER: 2004–2007
› Creating a large-scale monitoring infrastructure using tools developed by the SCAMPI project
» MAGIC (Middleware for collaborative Applications and Global virtual Communities): 2015–2017
› Developing middleware tools to support collaboration among research communities
» NEAT-FT (Network for European Accurate Time and Frequency Transfer): 2012–2015
› Using fibre optics for improving the accuracy and stability of time transfers
» ORIENT: 2006–2009
› Building a high-quality interconnection between European and Chinese research, development, and education networks
» ORIENTplus: 2011–2014
› Follow-up project developing infrastructure for scientific collaboration between Europe and China
» OSIRIS (Towards an Open and Sustainable ICT Research Infrastructure Strategy): 2010–2011
› Coordinating the development of research infrastructures in information and communication technology
» PHOSPHORUS: 2006–2009
› Creating an architecture for the utilization of network resources and services in a heterogeneous environment
» Porta Optica Study: 2006–2007
› Developing high-speed optical networks for research and education in Eastern Europe, the Baltic states, and the Southern Caucasus
» QUANTUM (QUAlity Network Technology for User Oriented Multi-Media): 1998–2000
› Building TEN-155, a 155 Mb/s European backbone science, research, and education network
» SCAMPI (A Scaleable Monitoring Platform for the Internet): 2002–2005
› Developing a network monitoring adapter for bit rates of up to 10 Gb/s and related tools for monitoring attacks and security incidents
» SEEFIRE (South-East Europe Fibre Infrastructure for Research and Education): 2005–2006
› Developing science, research, and education networks in Southeastern Europe
» TEN-34: 1996–1998
› Creating a backbone network for European science, research, and education with parameters comparable to those of the US NSFNET network
» VINI (Virtual Network Infrastructure): 2007–2009
› Creating a virtual infrastructure for the verification of protocols and services in a large network
» XIFI (eXperimental Infrastructures for Future Internet): 2014–2015
› Building a European platform for testing the outputs of the Future Internet programme
» 6NET: 2002–2004
› Creating a large-scale international network using the IPv6 network protocol
PROJECT LEADERSHIP
Our role in international
projects is illustrated
by our staff being elected
to project management
functions:
» Jan Gruntorád was
elected to the GÉANT
Executive Committee for
three consecutive terms
(2004–2012). He was the
chairman of the Executive
Committee in 2011.
» Luděk Matyska was
elected as a member of the
EGEE steering committee
in 2005, becoming its
chairman in 2006. He was
elected chairman of the
EGI-InSPIRE steering
committee in 2011.
» Helmut Sverenyák was
elected to the TERENA
Executive Committee
as its vice-president for
conferences (2012–2014).
TIMELINE OF PROJECTS
TEN-34 (1996–1998)
TEN-34 CZ Network Implementation (1996–1998)
QUANTUM (1998–2000)
High-Speed National Research Network and Its New Applications (1999–2003)
GÉANT (2000–2004)
DataGrid (2001–2003)
6NET (2002–2004)
SCAMPI (2002–2005)
EGEE (2004–2006)
GN2 (2004–2009)
LOBSTER (2004–2007)
SEEFIRE (2005–2006)
56�——�57
Optical National Research Network and Its New Applications (2004–2010)
ORIENT (2006–2009)
PHOSPHORUS (2006–2009)
Porta Optica Study (2006–2007)
EGEE II (2006–2008)
EuroCareCF (2007)
EGI_DS (2007–2009)
Ithanet (2007–2008)
VINI (2007–2009)
EUAsiaGRID (2008–2010)
EGEE III (2008–2010)
FEDERICA (2008–2010)
GN3 (2009–2013)
CHAIN (2010–2012)
EGI InSPIRE (2010–2014)
EMI (2010–2013)
OSIRIS (2010–2011)
CESNET Large Infrastructure (2011–2015)
eIGeR (2011–2018)
ORIENTplus (2011–2014)
CHAIN-REDS (2012–2015)
NEAT-FT (2012–2015)
GN3plus (2013–2015)
Digital Restoration of Czech Film Heritage (2014–2016)
XIFI (2014–2015)
AARC (2015–2017)
BEBA (2015–2017)
COMPLETE (2015–2017)
EGI-Engage (2015–2017)
ELIXIR-EXCELERATE (2015–2019)
GÉANT 2020 (2015–2016)
INDIGO-DataCloud (2015–2017)
MAGIC (2015–2017)
CESNET e-infrastructure (2016–2019)
COOPERATION
Involvement in international projects strengthens our relations with foreign and domestic experts and organizations engaged in information and communication technology.
Exchanging experience with them means a great contribution to our work – we get inspiration for our further development as well as feedback.
Conferences, seminars, workshops, and working meetings provide an excellent opportunity for exchanging information; we have organized and co-organized a great number of them during the twenty years of our existence. Here are the most important ones (in alphabetical order):
Photo captions
1 TERENA Networking Conference, 2011
2 EU Commissioner Viviane Reding and rector Vlastimil Růžička
opening the Future of the Internet conference, 2009
3 Campus Network Monitoring Workshop, 2012
4 Vint Cerf and Jan Gruntorád, 2007
5 CEF Networks Workshop, 2012
CONFERENCES
» Annual Global
LambdaGrid Workshop
› A conference for GLIF
members and experts
on optical networks.
We organized the
workshops in 2007
and 2015.
» Campus Network
Monitoring Workshop
› A seminar focusing on
traffic monitoring in local
area networks, 2011,
2012, and 2014
» CEF Networks
Workshop
› A working meeting
we initiated for specialists
in customer-operated
optical networks.
Eight workshops have
been organized so far:
in 2004, 2005, 2006,
2007, 2009, 2010, 2012,
and 2014.
» CESNET Conference
› A conference focusing
on optical networks,
middleware, virtualization,
and security; 2006 and
2008
» EGI Technical Forum
› A conference on grid
computing; 2012
» Future of the Internet
› A conference on Internet
trends and development;
2009
» The Networking
Conference (TNC)
› A large conference
on computer networks;
2011 and 2016
» 10th Anniversary
of the Internet in
the Czech Republic
› A national conference
with international
guests; 2002
WORKSHOPS
» CESNET Community
Forum
› Meeting of national
e-infrastructure users
with CESNET specialists,
2014, 2015
» CESNET Days
› Informal meetings
of CESNET specialists
with representatives
of member organisations,
2013, 2014, 2015, 2016
» Security of networks
and services
› A large workshop on
securing ICT infrastructures,
2014, 2015, 2016
MEETINGS
There have also been
a number of individual
meetings with various
partners. The most
important ones
included:
» On 9 June 2008,
we met with Eugene
Yeh and other
representatives
of Taiwan’s National
Center for
Supercomputing.
» On 22 September 2008,
CESNET was visited by
Vint Cerf.
» On 28–30 September
2008, CESNET director
Jan Gruntorád was invited
to a White House advisory
forum meeting.
» On 7 May 2015,
CESNET was visited
by Japan’s ambassador
Tetsuo Yamakawa and
representatives of the
Internet Initiative Japan.
THANK YOU FOR YOUR FAVOUR