Scanned by CamScanner, 2019/11/07

Q1. a. How Will you estimate the cost of a plan? Explain.

Answer: When a query is submitted to the DBMS, it is parsed and translated into relational algebra, and verified for validity and correctness. Once it passes this stage, different ways of evaluating the query are generated, checked against various factors, and an execution plan is produced. The plan may be based on the cost of the query or on equivalence rules. Once the cost-based and rule-based execution plans are generated, the optimizer has to decide which plan to select for evaluation. This is the most important step in processing a query. A cost-based or heuristic execution plan may not always be effective for every table with the same type of query; these are general guidelines for evaluating a query, and many factors affect its performance. The evaluation plan tells the system exactly which algorithm is to be used for evaluation, which index is to be used, and so on.

Consider below example on EMP and DEPT tables.

SELECT EMP_ID, DEPT_NAME
FROM EMP, DEPT
WHERE EMP.DEPT_ID = DEPT.DEPT_ID
AND EMP.DEPT_ID = 10
AND EMP.EMP_LAST_NAME = 'JOSEPH';

The above query selects EMP_ID and DEPT_NAME from the EMP and DEPT tables for DEPT_ID = 10 where the employee's last name is 'Joseph'. When it is given to the DBMS, the DBMS divides the query into tokens and sees how they can be put together so that performance will be better. This is the duty of the query optimizer. Altering the order of tokens in the query must not change the result: either way it should give the same result (the order of records may change, which is immaterial). Such queries are called equivalent queries, and the set of rules for rearranging the tokens is called the equivalence rules.

The above query can be broken down by the DBMS in any of the ways below:

1. Select the records of EMP with DEPT_ID = 10 and EMP_LAST_NAME = 'Joseph' first, then join them with the DEPT table to get all matching records of EMP and DEPT, and then project only the columns EMP_ID and DEPT_NAME to display the result.

2. Select all matching records from EMP and DEPT, then filter on DEPT_ID = 10 and EMP_LAST_NAME = 'Joseph', and then project only EMP_ID and DEPT_NAME to display.

3. Select all matching records from EMP and DEPT, project only EMP_ID, DEPT_NAME (and DEPT_ID), and then filter on DEPT_ID = 10 and EMP_LAST_NAME = 'Joseph'.

All the plans above produce the same result irrespective of how they are performed; hence they are called equivalent queries. These are written not in SQL but using relational algebra, or as a tree.

Π EMP_ID, DEPT_NAME (σ DEPT_ID=10 AND EMP_LAST_NAME='JOSEPH' (EMP) ⋈ DEPT)

OR

Π EMP_ID, DEPT_NAME (σ DEPT_ID=10 AND EMP_LAST_NAME='JOSEPH' (EMP ⋈ DEPT))

OR

σ DEPT_ID=10 AND EMP_LAST_NAME='JOSEPH' (Π EMP_ID, DEPT_NAME, DEPT_ID (EMP ⋈ DEPT))

The optimizer can produce relational expressions and trees in the above three formats. According to the evaluation rules, the first query seems to be the best one, but considering other factors of the tables, the other plans may also be efficient. Let us see what other factors affect the performance and how an ideal execution plan is determined.
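The behaviour described above can be sketched with SQLite from Python. This is an illustrative stand-in for a full RDBMS optimizer; the EMP/DEPT schema and sample rows below are assumptions, not from the original:

```python
import sqlite3

# In-memory database with a minimal, hypothetical EMP/DEPT schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE DEPT (DEPT_ID INTEGER PRIMARY KEY, DEPT_NAME TEXT);
CREATE TABLE EMP  (EMP_ID INTEGER PRIMARY KEY, EMP_LAST_NAME TEXT, DEPT_ID INTEGER);
INSERT INTO DEPT VALUES (10, 'SALES'), (20, 'HR');
INSERT INTO EMP  VALUES (1, 'JOSEPH', 10), (2, 'SMITH', 20), (3, 'JOSEPH', 20);
""")

# Two syntactically different but equivalent formulations of the same query.
q1 = """SELECT EMP_ID, DEPT_NAME FROM EMP, DEPT
        WHERE EMP.DEPT_ID = DEPT.DEPT_ID
          AND EMP.DEPT_ID = 10 AND EMP.EMP_LAST_NAME = 'JOSEPH'"""
q2 = """SELECT EMP_ID, DEPT_NAME FROM DEPT JOIN EMP USING (DEPT_ID)
        WHERE EMP_LAST_NAME = 'JOSEPH' AND DEPT_ID = 10"""

r1 = con.execute(q1).fetchall()
r2 = con.execute(q2).fetchall()
print(r1, r1 == r2)   # equivalent queries return the same rows

# The plan the optimizer chose can be inspected (output varies by SQLite version).
for row in con.execute("EXPLAIN QUERY PLAN " + q1):
    print(row)
```

Whatever ordering the DBMS picks internally, the result rows are identical; only the chosen plan (join order, index usage) differs.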

Q1. b. What is meant by enumeration of alternative plans? Explain.

Answer:


OR

Q1. a. How will you translate a SQL query into relational algebra? Explain with example.

Answer: In practice, SQL is the query language that is used in most commercial RDBMSs. An

SQL query is first translated into an equivalent extended relational algebra expression—

represented as a query tree data structure—that is then optimized. Typically, SQL queries are

decomposed into query blocks, which form the basic units that can be translated into the

algebraic operators and optimized. A query block contains a single SELECT-FROM-WHERE

expression, as well as GROUP BY and HAVING clauses if these are part of the block. Hence, nested queries within

a query are identified as separate query blocks. Because SQL includes aggregate operators—

such as MAX, MIN, SUM, and COUNT—these operators must also be included in the

extended algebra.

Consider the following SQL query on the EMPLOYEE relation in Figure

SELECT Lname, Fname

FROM EMPLOYEE

WHERE Salary > ( SELECT MAX (Salary)

FROM EMPLOYEE

WHERE Dno=5 );

This query retrieves the names of employees (from any department in the company) who earn a

salary that is greater than the highest salary in department 5. The query includes a nested

subquery and hence would be decomposed into two blocks. The inner block is:

SELECT MAX (Salary)
FROM EMPLOYEE
WHERE Dno=5

This retrieves the highest salary in department 5. The outer query block is:

SELECT Lname, Fname


FROM EMPLOYEE

WHERE Salary > c

where c represents the result returned from the inner block. The inner block could be translated

into the following extended relational algebra expression:

ℑMAX Salary(σDno=5(EMPLOYEE))

and the outer block into the expression:

πLname,Fname(σSalary>c(EMPLOYEE))

The query optimizer would then choose an execution plan for each query block. Notice that in

the above example, the inner block needs to be evaluated only once to produce the maximum

salary of employees in department 5, which is then used—as the constant c—by the outer block.

We called this a nested query (without correlation with the outer query) in Section 5.1.2. It is

much harder to optimize the more complex correlated nested queries (see Section 5.1.3), where

a tuple variable from the outer query block appears in the WHERE-clause of the inner query

block.
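The two-block evaluation can be sketched in Python with SQLite: the inner block runs once and produces the constant c, which the outer block then uses. The EMPLOYEE rows below are invented sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE EMPLOYEE (Fname TEXT, Lname TEXT, Salary REAL, Dno INTEGER);
INSERT INTO EMPLOYEE VALUES
  ('John',     'Smith',   30000, 5),
  ('Franklin', 'Wong',    40000, 5),
  ('Jennifer', 'Wallace', 43000, 4),
  ('James',    'Borg',    55000, 1);
""")

# Inner block: evaluated only once, yielding the constant c.
c = con.execute("SELECT MAX(Salary) FROM EMPLOYEE WHERE Dno = 5").fetchone()[0]

# Outer block: uses c as a constant in its WHERE clause.
rows = con.execute(
    "SELECT Lname, Fname FROM EMPLOYEE WHERE Salary > ?", (c,)
).fetchall()
print(c, rows)   # employees earning more than the highest salary in department 5
```

With these sample rows, c is 40000 and the outer block returns Wallace and Borg. A correlated subquery could not be hoisted out this way, which is why it is harder to optimize.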

Q1. b. What is System Catalog in a Relational DBMS? Explain.

Answer: As we create lots of tables, views, constraints, indexes etc. in a database, it is difficult for any

developer or user to remember all of them. If there is any change request to the table structure or

if there is any issue with the existing design then the developer might not remember which table

is for which purpose or which table contains which data, what is the domain of the columns or if

there is any specific meaning to the value that has been entered to the column etc. Also, if there

is any new designer handling the database, then it is necessary for him to understand the system

thoroughly.

Hence, every database stores information about every one of its objects. This information can be an object's structure, definition, purpose, storage, number of columns and records, dependencies, access rights, owner, etc. Basically, this is useful information about the data in the database, also called metadata. Metadata, too, is stored as rows and columns of tables.


Collection of these metadata is stored in the data dictionary or system catalog. Usually, system

catalogs are accessed by the DBMS to perform various transactions and data dictionary has the

user accessible views that are accessed by the developers/ designers/ users.

It is a database about the database objects. It can exist in the same database or be a completely separate database. If it is a separate database, then there should be a process to update the tables in the system catalog as and when objects of the database change; it should stay in sync with the database objects. This helps the developer/user get quick information about the database objects.

The system catalog also contains various tables and views. Since it contains the information about the database objects, users are not allowed to modify the details in it; it is automatically inserted/updated/deleted by the RDBMS, and users have no access rights on those tables. But users/developers will need to see the details in them to understand the objects. Sometimes a developer wants to know the exact column name and the table in which the records he is fetching exist. In these cases, he has to look in the data dictionary or system catalog. For this purpose, views on the tables in the data dictionary are created and users are given SELECT rights on them. Data dictionary tables are created in the SYS schema and carry no access privileges for the user; only the RDBMS accesses them. The views on these tables are created in the SYSTEM schema and users are given only SELECT rights; they cannot modify these views.

Depending on the access rights of the user we have 3 types of views – USER, ALL and DBA.

The user view lists only those tables and views which are created by the current user/schema. It

does not list the tables and views of other schemas nor the ones to which it has access. It displays

only those objects that are created by it.

The ALL view lists all the tables and views that are owned by the current user as well as those

tables and views to which it has access. That means it lists the tables and views that are created

by it, and those tables and views of other user/schema to which it has access.

DBA view will have access to all the tables and views of all the users/schema. But these views

will be accessible by only those who have DBA privileges.


One of the views, DICTIONARY, in the SYSTEM schema provides the details of all the objects in the database system. It has two columns: TABLE_NAME, which gives the name of each object in the database, and COMMENTS, which gives details about it.

SELECT * FROM DICTIONARY;

There are lots of views in SYSTEM schema with table details, column details, constraints, index,

etc. Each view has its set of columns and details.

Some of the views of data dictionary are listed below.


1. ALL_COL_COMMENTS — Columns: OWNER, TABLE_NAME, COLUMN_NAME, COMMENTS. Holds the comments the user has provided for each column of a table.

2. ALL_CONSTRAINTS — Columns: OWNER, CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, etc. Holds the constraint definitions of the tables.

3. ALL_CONS_COLUMNS — Columns: OWNER, CONSTRAINT_NAME, TABLE_NAME, COLUMN_NAME, POSITION. Lists all the columns that are involved in a particular constraint.

4. ALL_INDEXES — Columns: OWNER, INDEX_NAME, TABLE_OWNER, TABLE_NAME, TABLE_TYPE, UNIQUENESS. Lists all the indexes defined on the tables.

5. ALL_IND_COLUMNS — Columns: OWNER, INDEX_NAME, TABLE_OWNER, TABLE_NAME, COLUMN_NAME, COLUMN_POSITION. Lists all the columns that are involved in a particular index.

6. ALL_OBJECTS — Columns: OWNER, OBJECT_NAME, OBJECT_TYPE, CREATED, STATUS. Lists all the objects (tables, views, constraints, indexes, synonyms, etc.) in the database.

7. DUAL — Column: DUMMY. A dummy table usually used in a query to return a single row; it has a single column whose value is always 'X'.
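The same idea appears in other systems: SQLite, for instance, exposes its catalog as an ordinary queryable table, sqlite_master, which can be read much like the dictionary views above. A sketch (the table and index names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (emp_id INTEGER PRIMARY KEY, last_name TEXT)")
con.execute("CREATE INDEX idx_emp_name ON emp(last_name)")

# The system catalog is itself just a table of metadata rows:
# one row per object, with its type, name, and owning table.
catalog = con.execute(
    "SELECT type, name, tbl_name FROM sqlite_master ORDER BY name"
).fetchall()
for row in catalog:
    print(row)   # e.g. ('table', 'emp', 'emp') and ('index', 'idx_emp_name', 'emp')
```

The catalog rows are maintained by the engine automatically as objects are created and dropped; the user only reads them, mirroring the SELECT-only access described above.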

Q2. a. Explain Encapsulation in Object Databases.

Answer: Encapsulation is a basic concept for all object-oriented technologies. It was created

for making a clear distinction between the specification and the implementation of an operation

and in this way for enabling modularity. Software that exceeds a certain size needs some sort of

modular architecture for enabling design, implementation and maintenance to be executed by a

larger group of programmers.

As far as programming languages are concerned, encapsulation derives from the notion of

abstract data types, including the distinction of interfaces and implementation. An interface

specifies the operations that can be invoked on an object. This is the only part of the object that is

visible from the outside. An implementation is an object's data, representing the object's state,

and the implementation of the object's methods. Implementation details are not visible from

outside the object.


In principle, the concept of encapsulation in object-oriented databases is the same. The only difference is that it is not clearly defined whether the object's data structure is part of the interface. In programming languages, the data structure is certainly part of the implementation.

Methods defined by an object constitute the object's behaviour. The purpose of such a method can be to change some attribute values, or to calculate a certain value depending on the object's current state. Every object may define an arbitrary number of methods. Encapsulation conceptually makes the maintenance of essential routines a task of the data level.
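The interface/implementation split described above can be sketched in Python; the class and attribute names are invented for illustration:

```python
class Account:
    """Interface: deposit() and balance(). Implementation: the hidden _balance."""

    def __init__(self) -> None:
        self._balance = 0          # implementation detail, not part of the interface

    def deposit(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("amount must be positive")
        self._balance += amount    # method that changes the object's state

    def balance(self) -> int:
        return self._balance       # value calculated from the object's state

acct = Account()
acct.deposit(100)
print(acct.balance())   # callers invoke only the operations the interface specifies
```

Callers see only deposit() and balance(); the stored representation can change without affecting them, which is exactly the modularity argument made above.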

Q2. b. What are the different Operations applied on structured data in Object

Database?

Answer: The major advantage of using objects is the ability to define new data types (Abstract

Data Types). In ORDBMS, the RDBMS extends the usage of objects that can be defined and

stored as part of database. Like a CLASS declaration in C++ language, a new type can be

defined in an ORDBMS as follows; (the reserved words/keywords are given in UPPERCASE

hereafter)

CREATE TYPE type_name AS

(Attribute1_name data_type(size),

Attribute2_name data_type(size),

Attribute3_name data_type(size),

…….

AttributeN_name data_type(size));

Here, data_type can be any of the following;

It can be one of the valid data types like CHAR, VARCHAR, NUMBER, INTEGER, etc. Or

It can be another User Defined Type.

We call this kind of new User Defined Types as Structured Types / Abstract Datatypes.


For example, Structured types can be declared and used in SQL:1999 as follows;

CREATE TYPE phone AS

(Country_code NUMBER(4),

STD_Code NUMBER(5),

Phone_Number NUMBER(10))

This type can be used in other TYPE definition or TABLE definition as follows;

CREATE TABLE contact

(Contact_name VARCHAR(25),

Street VARCHAR(25),

City VARCHAR(25),

Ph PHONE);

In this TABLE definition, PHONE is the structured type that we have defined through previous

example.

Structured Types in Oracle

Let us see some examples of defining and manipulating Structured types in Oracle.

CREATE TYPE Address AS OBJECT

(Street VARCHAR(35),

City VARCHAR(30),

State VARCHAR(30),

Pincode NUMBER(10));

Execution of the above statement will create a new ABSTRACT datatype named ADDRESS and

store the definition as part of the database.


This new type can be used to define an attribute in any TABLEs or TYPEs as follows;

CREATE TABLE Person

(Person_name VARCHAR(25),

Addr ADDRESS,

Phone NUMBER(10));

This table Person will consist of 3 columns, where the first and third are of the regular datatypes VARCHAR and NUMBER respectively, and the second is of the abstract type ADDRESS.
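Outside the database, a structured type plays the same role as a record type in a host language. A Python sketch of the ADDRESS/PERSON shapes (a loose analogue for illustration, not Oracle syntax; the sample values are invented):

```python
from dataclasses import dataclass

@dataclass
class Address:                 # analogue of CREATE TYPE Address AS OBJECT (...)
    street: str
    city: str
    state: str
    pincode: int

@dataclass
class Person:                  # analogue of CREATE TABLE Person (..., Addr ADDRESS, ...)
    person_name: str
    addr: Address
    phone: int

p = Person("Joseph", Address("MG Road", "Bengaluru", "Karnataka", 560001), 9876543210)
print(p.addr.city)             # attributes of the nested type are reached by dot notation
```

Just as Person.addr nests an Address here, the Addr column of the PERSON table holds a whole ADDRESS value whose attributes are accessed with dot notation in queries.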


OR

Q2. a. Describe Database Design of an ORDBMS.

Answer: An object-oriented database management system (OODBMS), sometimes shortened

to ODBMS for object database management system, is a database management system (DBMS)

that supports the modelling and creation of data as objects. This includes some kind of support

for classes of objects and the inheritance of class properties and methods by subclasses and their

objects.

There is no widely agreed-upon standard for what constitutes an OODBMS, although the Object

Data Management Group (ODMG) published The Object Data Standard ODMG 3.0 in 2001

which describes an object model as well as standards for defining and querying objects. The

group has since disbanded.

Currently, Not Only SQL (NoSQL) document database systems are a popular alternative to the

object database. Although they lack all the capabilities of a true ODBMS, NoSQL document

databases provide key-based access to semi-structured data as documents, typically using

JavaScript Object Notation (JSON).

Features of an ODBMS

In their influential paper, The Object-Oriented Database Manifesto, Malcolm Atkinson and

others define an OODBMS as follows:

An object-oriented database system must satisfy two criteria: it should be a DBMS, and it should

be an object-oriented system, i.e., to the extent possible, it should be consistent with the current

crop of object-oriented programming languages.

The first criterion translates into five features: persistence, secondary storage management,

concurrency, recovery and an ad hoc query facility.


The second one translates into eight features: complex objects, object identity, encapsulation,

types or classes, inheritance, overriding combined with late binding, extensibility and

computational completeness.

Q3. a. Explain how a distributed Catalog is maintained?

Answer: A distributed database management system (DDBMS) is a software system that

allows you to manage and interconnect distributed databases as if they were a single

centralized database on a single computer. This service is responsible for synchronizing all data

and checking it is updated frequently so that content in different locations is homogeneous. Thus,

a change made by a user in a database will be immediately reflected in all other databases

connected to the distributed database management system.

Catalogue Management tools will be your best partners in this phase of data management,

especially if you are a retailer or eCommerce business that handles multiple product

management databases distributed among suppliers, distributors, departments and geographic

locations. But pay attention, because adapting Catalog Management to a distributed database can have its own ins and outs. Catalog Management is responsible for gathering all the product data of a company to facilitate the management, creation and updating of catalogs, both physical and online. When the company has a single repository, it is easy to connect the product management database to the Catalog Management software. Having more databases shouldn't mean you have to suffer more.

We give you some tips so that the process is just as easy if you have a distributed database

management system.

One or more distributed database management systems: Do you prefer a single

DBMS that manages all locations, or a location-specific service? Take into account the

costs of hiring and maintenance according to the number of databases, and what type of

CM tool you are going to use.

Homogeneous or heterogeneous? Depending on whether your databases contain data

that is very different from each other or they contain same tables to be synchronized and

updated in real time, you may need a single, comprehensive Catalog Management service

or an individual management tool for each location.

Cloud-based: Consider the option of contracting a catalog Management service that

offers software in the cloud, as a CM on-premise would involve installing the program in

all those computers and locations where there are databases. This greatly delays

installation time and increases costs, compared to a cloud-based CM that ensures

universal access from any computer, Internet connection and location.


Several management controls: Data type and indexes, registered users, user rights,

access history, module dependency...

Replica tracking: The product management database must replicate data between

different databases to ensure information can be retrieved in the event of a system failure

or damage.

Strong security: Being composed of diverse bases and locations, the task of ensuring the

security of information and access controls requires greater caution and vigilance. Choose

Catalog Management tools that guarantee confidentiality and appropriate backup and

replication systems.

Q3. b. What is the need for distributed recovery? Explain.

Answer: In order to recuperate from database failure, database management systems resort to

a number of recovery management techniques. In this chapter, we will study the different

approaches for database recovery.

The typical strategies for database recovery are −

In case of soft failures that result in inconsistency of database, recovery strategy includes

transaction undo or rollback. However, sometimes, transaction redo may also be adopted

to recover to a consistent state of the transaction.

In case of hard failures resulting in extensive damage to database, recovery strategies

encompass restoring a past copy of the database from archival backup. A more current

state of the database is obtained through redoing operations of committed transactions

from transaction log.

Recovery from Power Failure

Power failure causes loss of information in the non-persistent memory. When power is restored,

the operating system and the database management system restart. Recovery manager initiates

recovery from the transaction logs.

In case of immediate update mode, the recovery manager takes the following actions −

Transactions which are in active list and failed list are undone and written on the abort

list.

Transactions which are in before-commit list are redone.

No action is taken for transactions in commit or abort lists.

In case of deferred update mode, the recovery manager takes the following actions −

Transactions which are in the active list and failed list are written onto the abort list. No

undo operations are required since the changes have not been written to the disk yet.


Transactions which are in before-commit list are redone.

No action is taken for transactions in commit or abort lists.
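The power-failure rules above can be summarized as a small lookup; this is a sketch of the stated rules only, and the list names are taken directly from the text:

```python
# Recovery actions by transaction list for power failure, per the rules above.
def recovery_actions(mode):
    assert mode in ("immediate", "deferred")
    return {
        # Active/failed: undo needed only if changes may already be on disk.
        "active":        "undo" if mode == "immediate" else "abort-list only",
        "failed":        "undo" if mode == "immediate" else "abort-list only",
        "before-commit": "redo",        # redone in both modes
        "commit":        "no action",
        "abort":         "no action",
    }

print(recovery_actions("immediate")["active"])
print(recovery_actions("deferred")["active"])
```

The only difference between the two modes is whether active and failed transactions need an undo pass, since deferred update never writes uncommitted changes to disk.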

Recovery from Disk Failure

A disk failure or hard crash causes a total database loss. To recover from this hard crash, a new

disk is prepared, then the operating system is restored, and finally the database is recovered

using the database backup and transaction log. The recovery method is same for both immediate

and deferred update modes.

The recovery manager takes the following actions −

The transactions in the commit list and before-commit list are redone and written onto

the commit list in the transaction log.

The transactions in the active list and failed list are undone and written onto the abort list

in the transaction log.

Checkpointing

Checkpoint is a point of time at which a record is written onto the database from the buffers.

As a consequence, in case of a system crash, the recovery manager does not have to redo the

transactions that have been committed before checkpoint. Periodical checkpointing shortens the

recovery process.

The two types of checkpointing techniques are −

Consistent checkpointing

Fuzzy checkpointing

Consistent Checkpointing

Consistent checkpointing creates a consistent image of the database at checkpoint. During

recovery, only those transactions which are on the right side of the last checkpoint are undone

or redone. The transactions to the left side of the last consistent checkpoint are already

committed and needn’t be processed again. The actions taken for checkpointing are −

The active transactions are suspended temporarily.

All changes in main-memory buffers are written onto the disk.

A “checkpoint” record is written in the transaction log.

The transaction log is written to the disk.

The suspended transactions are resumed.

If in step 4, the transaction log is archived as well, then this checkpointing aids in recovery from

disk failures and power failures, otherwise it aids recovery from only power failures.


Fuzzy Checkpointing

In fuzzy checkpointing, at the time of checkpoint, all the active transactions are written in the

log. In case of power failure, the recovery manager processes only those transactions that were

active during checkpoint and later. The transactions that have been committed before

checkpoint are written to the disk and hence need not be redone.

Example of Checkpointing

Let us consider that in system the time of checkpointing is tcheck and the time of system crash

is tfail. Let there be four transactions Ta, Tb, Tc and Td such that −

Ta commits before checkpoint.

Tb starts before checkpoint and commits before system crash.

Tc starts after checkpoint and commits before system crash.

Td starts after checkpoint and was active at the time of the system crash.
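Under consistent checkpointing, the actions the recovery manager takes for Ta through Td can be sketched as a small classifier. The timestamps are invented; the rule applied is the one stated above ("redo transactions committed after the checkpoint; undo those active at the crash"):

```python
# Each transaction: (name, start_time, commit_time or None if active at the crash).
t_check, t_fail = 10, 20
txns = [
    ("Ta", 2, 8),      # commits before checkpoint
    ("Tb", 5, 15),     # starts before checkpoint, commits before crash
    ("Tc", 12, 18),    # starts after checkpoint, commits before crash
    ("Td", 14, None),  # active at the time of the crash
]

def recovery_action(start, commit):
    if commit is not None and commit <= t_check:
        return "no action"   # changes were already flushed at the checkpoint
    if commit is not None:
        return "redo"        # committed after checkpoint: replay from the log
    return "undo"            # uncommitted at crash: roll back

actions = {name: recovery_action(s, c) for name, s, c in txns}
print(actions)   # {'Ta': 'no action', 'Tb': 'redo', 'Tc': 'redo', 'Td': 'undo'}
```

Ta needs no processing because it committed before the checkpoint; Tb and Tc are redone; Td, still active at the crash, is undone.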

Q4. a. What are Covert Channels? Explain.

Answer:A covert channel is any method of communication that is used to illicitly transfer

information, thus breaking the security policy of the system. Any shared resource can potentially

be used as a covert channel. There are many threats that modern network security must take into

account. From brute-force password attacks to port scanning, the issues that system engineers and administrators must worry about increase at a faster-than-normal pace. However, one of the issues that many in the field have not paid enough

attention to is covert channels. In this paper, we present an overview of covert channels with

examples. We explore the two types of covert channels: storage channels and timing channels.

Storage channels are more commonly used than timing channels because they are easier to

implement. Both constitute major security threats. Covert channels can be designed for either a

single system or a network. It is important for system engineers to understand covert channels so

that they will be able to preemptively overcome certain security problems. We also explore the

implementation problems associated with covert channels, for example noise.

Types of Covert Channels

In this section, we discuss the basics of covert storage channels and covert timing channels, with

examples of each. We also present the problem of noise in covert channels.

2.1 Storage Channels

Covert storage channels are methods of communication that “include all vehicles that would

allow the direct or indirect writing of a storage location by one process and the direct or indirect

reading of it by another” [2]. In other words, one process writes to a shared resource, while another process reads from it. Storage channels can be used between processes within a single

computer or between multiple computers across a network [3]. A good example of a storage

channel is a printing queue. The process with higher security privileges, the sending process,

either fills up the printer queue to signal a 1 or leaves it as it is to signal a 0. The process with

lower security privileges, the receiving process, polls the printer queue to see whether or not it is

full and determines the value accordingly.
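The printer-queue example can be simulated with a shared list standing in for the queue. This is a toy model with no real privilege boundary; the capacity and message are invented:

```python
# Toy covert storage channel: the sender fills or empties a shared "printer
# queue" to signal one bit per time slot; the receiver polls its fullness.
QUEUE_CAPACITY = 4
queue = []                      # the shared resource

def send_bit(bit):
    queue.clear()
    if bit:                     # fill the queue to signal 1, leave it empty for 0
        queue.extend(["job"] * QUEUE_CAPACITY)

def receive_bit():
    return 1 if len(queue) == QUEUE_CAPACITY else 0

message = [1, 0, 1, 1, 0]
received = []
for b in message:               # one bit per agreed-upon polling interval
    send_bit(b)
    received.append(receive_bit())
print(received == message)
```

No data is ever passed directly; the receiver infers each bit purely from the observable state of a resource both processes may legitimately touch.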

2.2 Timing Channels

Covert timing channels are methods of communication that “include all vehicles that would

allow one process to signal information to another process by modulating its own use of system

resources in such a way that the change in response time observed by the second process would

provide information” [2]. In other words, it is essentially any method that uses a clock or

measurement of time to signal the value being sent over channel. Similarly to storage channels,

timing channels can exist both in a single-computer setting and a network setting. However,

they are less practical in a network setting [4]. An example of a timing channel can be found in a

movable head I/O device, such as a hard disk. One process with higher security privileges, the

sending process, has access to the entire device while another process with lower security

privileges, the receiving process, only has access to a small portion of the device. Requests to

the device are processed serially. To signal a 1, the sending process makes a read request far

away from the section that the receiving process has access to. To signal a 0, it does nothing.

The receiving process makes a read request within its own section and uses the time it takes for

the head to travel to the section and finish the read request to determine the value accordingly

[2]. Papers by the National Computer Security Center [5] and Van Vleck [6] provide more

examples of timing channels.
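The disk-head example can be sketched with a simulated clock rather than real timing (the cost constants are arbitrary): the sender's optional far-away seek inflates the response time the receiver observes for its own request.

```python
# Toy covert timing channel: the sender's extra "disk seek" is visible to the
# receiver as a longer response time for its own serial read request.
SEEK_COST, READ_COST = 5, 1     # arbitrary simulated time units

def observed_response_time(sender_bit):
    # To signal 1, the sender first issues a read far away (adding seek time);
    # to signal 0, it does nothing, so only the receiver's read cost remains.
    return (SEEK_COST if sender_bit else 0) + READ_COST

message = [1, 1, 0, 1, 0]
received = [1 if observed_response_time(b) > READ_COST else 0 for b in message]
print(received == message)
```

The channel carries information even though the receiver only ever reads its own permitted section; the signal lives entirely in the timing of the shared device.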

2.3. Noise in Covert Channels

One of the major problems in a successful implementation of a covert channel is noise. “A noisy

channel intentionally or accidentally corrupts the data signal with errors so that the information

rate is slower than the data rate”. Because noise is a primary obstacle to implementing covert channels, it is

also one of the best defenses against them. If enough noise is introduced into a covert

channel, it can hinder the use of that channel. The receiving process would have to request the

same data continuously and wait for a very high level of redundancy before knowing that it is

reliable. This process may take many hours for a request that would only take a few clock cycles

if the channel were noise-free. Although this does not halt the covert channel, it can considerably

decrease its speed.
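The redundancy strategy described above can be sketched as repetition coding with a majority vote. The "noise" here is a deterministic stand-in (every third transmission is flipped) so the behavior is easy to follow; real noise would be random:

```python
def flaky_channel(bit, i):
    """Deterministic stand-in for noise: every third transmission is flipped."""
    return bit ^ (1 if i % 3 == 0 else 0)

def send_redundant(bit, repeats=5):
    """Sender repeats the bit; receiver decodes by majority vote."""
    votes = [flaky_channel(bit, i) for i in range(repeats)]
    return 1 if sum(votes) > repeats // 2 else 0

message = [1, 0, 1, 1, 0, 0, 1]
print([send_redundant(b) for b in message])   # the vote recovers every bit
```

With 5 repetitions, 2 of the 5 transmissions are corrupted but the majority is still correct; the cost is that the channel now needs 5 transmissions per bit, which is exactly the slowdown the text describes.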

2.4 Steganography

Steganography is a method of covert communication that relates well to covert channels. It is

any method used to conceal a secret object within another public object [8]. When employing

any steganographic techniques, a cover-file is used to hide the information. It can be a text-file,


audio, image, video, piece of software, or others [9]. A common example of steganography is

known as least significant bit (LSB) insertion. In a 24-bit bitmap image, the colors red, green,

and blue have one byte each that represent their intensities. A steganographer could replace the

least significant bits with the hidden message. If done correctly, this should not alter the image

enough to be noticeable to the human eye. However, the more data stored in the picture, the

more the picture is altered.
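LSB insertion can be sketched on a list of raw intensity bytes. The cover values are invented for illustration; a real implementation would read and write actual bitmap pixel data:

```python
def embed(pixels, message_bits):
    """Hide one message bit in the least significant bit of each byte."""
    assert len(message_bits) <= len(pixels)
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & 0xFE) | bit   # clear the LSB, then set it to the bit
    return out

def extract(pixels, n_bits):
    """Recover the hidden bits by reading each byte's LSB."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [200, 13, 64, 99, 150, 7]   # hypothetical red/green/blue intensity bytes
stego = embed(cover, [1, 0, 1, 1])
print(stego)              # [201, 12, 65, 99, 150, 7]
print(extract(stego, 4))  # [1, 0, 1, 1]
```

Note that each byte changes by at most 1 in intensity, which is why the alteration is invisible to the human eye.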

Q4.b. Explain mandatory access control with an example?

Answer: MAC: Mandatory Access Control

– Definition: A system-wide policy decrees who is allowed to have access; individual user

cannot alter that access.

– Relies on the system to control access.

– Examples: The law allows a court to access driving records without the owners’ permission.

– Traditional MAC mechanisms have been tightly coupled to a few security models.

– Recently, systems supporting flexible security models have started to appear (e.g., SELinux, Trusted

Solaris, TrustedBSD, etc.)

• Security Policy Model – A security policy model is a succinct statement of the protection

properties that a system, or generic type of system, must have.

Multilevel Security

Security Levels

• People and information are classified into different levels of trust and sensitivity.

• These levels represent the well-known security classifications: Unclassified ⇒ Confidential

⇒ Secret ⇒ Top Secret.

• Clearance level indicates the level of trust given to a person with a security clearance, or a

computer that processes classified information, or an area that has been physically secured for

storing classified information. The level indicates the highest level of classified information to be

stored or handled by the person, device, or location.

• Classification level indicates the level of sensitivity associated with some information, like that

in a document or a computer file. The level is supposed to indicate the degree of damage the

country could suffer if the information is disclosed to an enemy.

• Security level is a generic term for either a clearance level or a classification level.


The Bell-LaPadula Security Policy Model

• Proposed by David Bell and Len LaPadula in 1973, in response to U.S. Air Force concerns over

the security of time-sharing mainframe systems.

• This model is the most widely recognized MLS model.

• The model deals with confidentiality only.

• Two properties: no read up and no write down.

– Simple security property: Subject A is allowed to read object O only if class(O) ≤ class(A).

– *-property: Subject A is allowed to write object O only if class(A) ≤ class(O).

• The *-property was Bell and LaPadula’s critical innovation. It was driven by the fear that a

user with “Secret” clearance might be “tricked” by attackers (e.g., through Trojan horse

programs or software vulnerabilities) into copying the information down to an “Unclassified” area

where the attackers can read it.
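The two Bell-LaPadula properties above amount to two comparisons. A minimal sketch, with the numeric encoding of the levels assumed for illustration:

```python
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_level, object_level):
    """Simple security property (no read up): class(O) <= class(A)."""
    return LEVELS[object_level] <= LEVELS[subject_level]

def can_write(subject_level, object_level):
    """*-property (no write down): class(A) <= class(O)."""
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("Secret", "Confidential"))   # True: reading down is allowed
print(can_write("Secret", "Unclassified"))  # False: the *-property blocks write-down
```

The second call shows exactly the Trojan-horse scenario the *-property was designed to prevent: a "Secret" subject cannot copy data into an "Unclassified" object.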

MAC Implementation in Windows Vista

• It is called Mandatory Integrity Control (MIC) in Windows Vista. MIC implements a form of

the Biba model, which ensures integrity by controlling writes and deletions.

• Label on Subjects: When a user logs on, Windows Vista assigns an integrity SID to the user’s

access token. Included in the SID is an integrity label that determines the level of access the

token (and thus the user) can achieve.

• Label on Objects: Objects, such as files, pipes, processes, threads, registry keys, services,

printers, etc., are also assigned an integrity SID, which is stored in the system access control list

(SACL) of the object’s security descriptor. The label in the SID specifies the integrity level of the

object.

• Access Control Policy: To write to or delete an object, the integrity level of the subject must

be equal to or greater than the object’s level.

• Relationship to DAC: Vista checks MAC first, if passed, it then checks DAC (e.g. access

control list). Therefore, MAC provides a layer of access control in addition to DAC; it does not

override DAC.

• Integrity levels: Windows Vista defines six integrity levels (IL); the most commonly

encountered are Low, Medium, High, and System.
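The MIC write rule above is the Biba "no write up" check in miniature. A minimal sketch using a subset of the named levels (the numeric ordering is an assumption for illustration):

```python
INTEGRITY = {"Low": 1, "Medium": 2, "High": 3, "System": 4}  # subset of Vista's ILs

def mic_allows_write(subject_il, object_il):
    """Vista's MIC write rule: subject integrity must be >= object integrity."""
    return INTEGRITY[subject_il] >= INTEGRITY[object_il]

print(mic_allows_write("Medium", "Low"))   # True: writing down-level is allowed
print(mic_allows_write("Low", "Medium"))   # False: a Low-IL process (e.g. a sandboxed
                                           # browser) cannot modify Medium-IL user files
```

Remember from the text that this check runs before DAC: even if an ACL would grant write access, a failed integrity comparison denies it first.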

Q4. a. What do you understand by Discretionary access control? Explain.

Answer: Based on the concept of access rights or privileges for objects (tables and views), and

mechanisms for giving users privileges (and revoking privileges).


– Creator of a table or a view automatically gets all privileges on it.

– The DBMS keeps track of who subsequently gains and loses privileges, and ensures that only

requests from users who have the necessary privileges (at the time the request is issued) are allowed.

GRANT Command

GRANT privileges ON object TO users [WITH GRANT OPTION]

The following privileges can be specified:

SELECT: Can read all columns (including those added later via the ALTER TABLE command).

INSERT (col-name): Can insert tuples with non-null or non-default values in this column.

INSERT means the same right with respect to all columns.

DELETE: Can delete tuples.

REFERENCES (col-name): Can define foreign keys (in other tables) that refer to this column.

If a user has a privilege WITH GRANT OPTION, he or she can pass the privilege on to other users

(with or without passing on the GRANT OPTION).

GRANT and REVOKE of Privileges

GRANT INSERT, SELECT ON Sailors TO Horatio

– Horatio can query Sailors or insert tuples into it.

GRANT DELETE ON Sailors TO Yuppy WITH GRANT OPTION

– Yuppy can delete tuples, and also authorize others to do so.

GRANT UPDATE (rating) ON Sailors TO Dustin

– Dustin can update (only) the rating field of Sailors tuples.

GRANT SELECT ON ActiveSailors TO Guppy, Yuppy

– This does NOT allow the ‘uppies to query Sailors directly!

REVOKE: When a privilege is revoked from X, it is also revoked from all users who got it

solely from X

Only owner can execute CREATE, ALTER, and DROP
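The cascading REVOKE rule above can be sketched as reachability in a grant graph: a user holds the privilege only while some chain of grants connects them back to the owner. The names and the owner/edge model below are simplifications for illustration, not how a real DBMS stores its catalog:

```python
def holders(owner, grants):
    """Users holding the privilege: everyone reachable from the owner via grant edges."""
    held, frontier = {owner}, [owner]
    while frontier:
        user = frontier.pop()
        for grantor, grantee in grants:
            if grantor == user and grantee not in held:
                held.add(grantee)
                frontier.append(grantee)
    return held

def revoke(owner, grants, grantor, grantee):
    """Drop one grant, then discard grants made by users who no longer hold the privilege."""
    remaining = [e for e in grants if e != (grantor, grantee)]
    still_held = holders(owner, remaining)
    return [e for e in remaining if e[0] in still_held]

grants = [("owner", "Joe"), ("Joe", "Art"), ("owner", "Cal"), ("Cal", "Art")]
after = revoke("owner", grants, "owner", "Joe")
print(holders("owner", after))  # Art keeps the privilege via Cal; Joe loses it
```

Joe loses the privilege, but Art keeps it because Art also received it from Cal: the privilege is revoked only from users who got it *solely* through the revoked chain.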

GRANT/REVOKE on Views


1. If the creator of a view loses the SELECT privilege on an underlying table, the view is

dropped!

2. If the creator of a view loses a privilege held with the grant option on an underlying table,

(s)he loses the privilege on the view as well; so do users who were granted that privilege on the

view!

Q5. a. What is meant by SQL extensions? Explain.

Answer: Structured Query Language (SQL), pronounced as "S-Q-L" or sometimes as "See-

Quel", is actually the standard language for dealing with relational databases.

SQL programming can be effectively used to insert, search, update, and delete database records.

That doesn't mean SQL cannot do things beyond that.

In fact, it can do a lot of things including, but not limited to, the optimization and maintenance of

databases.

Relational databases like MySQL, Oracle, Microsoft SQL Server, Sybase, etc. all use SQL.

How are SQL syntaxes used?

The SQL syntaxes used in these databases are almost identical, except that some use a few

different, or even proprietary, syntaxes.

SQL Example

SELECT * FROM Members WHERE Age > 30

MySQL Extensions to Standard SQL

MySQL Server supports some extensions that you probably will not find in other SQL DBMSs.

Be warned that if you use them, your code will not be portable to other SQL servers. In some

cases, you can write code that includes MySQL extensions, but is still portable, by using

comments of the following form:

/*! MySQL-specific code */

In this case, MySQL Server parses and executes the code within the comment as it would any

other SQL statement, but other SQL servers will ignore the extensions. For example, MySQL

Server recognizes the STRAIGHT_JOIN keyword in the following statement, but other servers

will not:

SELECT /*! STRAIGHT_JOIN */ col1 FROM table1, table2 WHERE ...


If you add a version number after the ! character, the syntax within the comment is executed only

if the MySQL version is greater than or equal to the specified version number.

The KEY_BLOCK_SIZE clause in the following comment is executed only by servers from

MySQL 5.1.10 or higher:

CREATE TABLE t1 (a INT, KEY (a)) /*!50110 KEY_BLOCK_SIZE=1024 */;

The following descriptions list MySQL extensions, organized by category.

Organization of data on disk

MySQL Server maps each database to a directory under the MySQL data directory, and

maps tables within a database to file names in the database directory. Consequently,

database and table names are case-sensitive in MySQL Server on operating systems that

have case-sensitive file names (such as most Unix systems).

General language syntax

o By default, strings can be enclosed by " as well as '. If the ANSI_QUOTES SQL mode is

enabled, strings can be enclosed only by ' and the server interprets strings enclosed by " as

identifiers.

o \ is the escape character in strings.

o In SQL statements, you can access tables from different databases with

the db_name.tbl_name syntax. Some SQL servers provide the same functionality but call

this User space. MySQL Server doesn't support tablespaces such as used in statements like

this: CREATE TABLE ralph.my_table ... IN my_tablespace.

SQL statement syntax

o The ANALYZE TABLE, CHECK TABLE, OPTIMIZE TABLE, and REPAIR

TABLE statements.

o The CREATE DATABASE, DROP DATABASE, and ALTER DATABASE statements.

o The DO statement.

o EXPLAIN SELECT to obtain a description of how tables are processed by the query

optimizer.

o The FLUSH and RESET statements.

o The SET statement.

o Use of RENAME TABLE.

o Use of REPLACE instead of DELETE plus INSERT.

o Use of CHANGE col_name, DROP col_name, or DROP

INDEX, IGNORE or RENAME in ALTER TABLE statements.

o Use of multiple ADD, ALTER, DROP, or CHANGE clauses in an ALTER

TABLE statement.

o Use of index names, indexes on a prefix of a column, and use

of INDEX or KEY in CREATE TABLE statements.


Q5.b. What are XML applications? Explain.

Answer: XML Application

XML applications are software programs that process and manipulate data using XML

technologies including XML, XSLT, XQuery, XML Schema, XPath, Web services, etc. Stylus

Studio already provides many intuitive tools for working with all of the above — and now

using XML pipeline you can design a complete XML application from start to finish! For

example, you can visually specify the order in which different XML processing steps should

occur, and can even debug the entire application and deploy it to your production environment in

just minutes. In this tutorial, we'll cover how to build an example XML application using XML

pipeline.

XML Application

In the following sample XML application, we'll be building an order report. This will involve some

XML processing, for example, applying various XML operations (converting, parsing,

validating, transforming and publishing XML) on several data sources. The order report XML

application is displayed here:

The steps involved in creating this XML application include:

1. Getting a catalog of books from a Text File

2. Getting an Order from an EDIFACT file

3. Using XQuery to extract the order information


4. Using XSLT to publish an HTML order report

5. Using XQuery to generate an XSL:FO stylesheet

6. Using XSL:FO to publish a PDF order report
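The extraction and publishing steps above can be sketched in miniature. This uses Python's built-in ElementTree as a rough stand-in for XQuery/XSLT, and the order document and its element names are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical order document, standing in for the XML produced from the EDIFACT file.
ORDER_XML = """
<order id="7001">
  <line><isbn>0-13-110362-8</isbn><qty>2</qty></line>
  <line><isbn>0-201-63361-2</isbn><qty>1</qty></line>
</order>
"""

def order_report_html(xml_text):
    """Stand-in for steps 3-4: extract each order line, then publish an HTML report."""
    order = ET.fromstring(xml_text)
    rows = "".join(
        "<tr><td>%s</td><td>%s</td></tr>" % (line.findtext("isbn"), line.findtext("qty"))
        for line in order.findall("line")
    )
    return "<table>%s</table>" % rows

print(order_report_html(ORDER_XML))
```

In the actual Stylus Studio pipeline each of these steps is a separate node (converter, XQuery, XSLT, XSL:FO), with the output of one feeding the input of the next; the function above just makes the data flow concrete.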

Building XML Applications

The easiest way to build this XML application is to use Stylus Studio, and to follow along with

this tutorial, we recommend downloading the free trial version of Stylus Studio which includes

this sample application and all files in the example project. If you're just looking to get an

overview of Stylus Studio's XML Pipeline tools for XML application development, simply read on.

Stylus Studio provides many helpful tools and utilities to implement steps 1-6 listed previously.

For example:

1. Using Convert to XML, you could convert the text file into an XML format

2. and also get your EDI to XML

3. then integrate the required information using XQuery tools

4. and use XSLT tools to publish the HTML order report to the Web

5. then generate an XSL:FO stylesheet with XML Publisher

Q5.a. Explain storage and indexing in POSTGRES.

Answer: Indexes are special lookup tables that the database search engine can use to speed up

data retrieval. Simply put, an index is a pointer to data in a table. An index in a database is very

similar to an index in the back of a book.

For example, if you want to reference all pages in a book that discusses a certain topic, you have

to first refer to the index, which lists all topics alphabetically and then refer to one or more

specific page numbers.

An index helps to speed up SELECT queries with WHERE clauses; however, it slows down data

modification with UPDATE and INSERT statements. Indexes can be created or dropped with no effect

on the data.

Creating an index involves the CREATE INDEX statement, which allows you to name the

index, to specify the table and which column or columns to index, and to indicate whether the

index is in ascending or descending order.

Indexes can also be unique, similar to the UNIQUE constraint, in that the index prevents

duplicate entries in the column or combination of columns on which there's an index.


The CREATE INDEX Command

The basic syntax of CREATE INDEX is as follows −

CREATE INDEX index_name ON table_name (column_name, ...);

Index Types

PostgreSQL provides several index types: B-tree, Hash, GiST, SP-GiST and GIN. Each Index

type uses a different algorithm that is best suited to different types of queries. By default, the

CREATE INDEX command creates B-tree indexes, which fit the most common situations.

Single-Column Indexes

A single-column index is one that is created based on only one table column. The basic syntax

is as follows −

CREATE INDEX index_name

ON table_name (column_name);

Multicolumn Indexes

A multicolumn index is defined on more than one column of a table. The basic syntax is as

follows −

CREATE INDEX index_name

ON table_name (column1_name, column2_name);

Whether to create a single-column index or a multicolumn index, take into consideration the

column(s) that you may use very frequently in a query's WHERE clause as filter conditions.

Should there be only one column used, a single-column index should be the choice. Should

there be two or more columns that are frequently used in the WHERE clause as filters, the

multicolumn index would be the best choice.

Unique Indexes

Unique indexes are used not only for performance, but also for data integrity. A unique index

does not allow any duplicate values to be inserted into the table. The basic syntax is as follows

CREATE UNIQUE INDEX index_name

on table_name (column_name);

Partial Indexes

A partial index is an index built over a subset of a table; the subset is defined by a conditional

expression (called the predicate of the partial index). The index contains entries only for those

table rows that satisfy the predicate. The basic syntax is as follows −

CREATE INDEX index_name

ON table_name (column_name) WHERE conditional_expression;


Implicit Indexes

Implicit indexes are indexes that are automatically created by the database server when an

object is created. Indexes are automatically created for primary key constraints and unique

constraints.

Example

The following is an example where we will create an index on COMPANY table for salary

column −

# CREATE INDEX salary_index ON COMPANY (salary);

Now, let us list all the indices available on the COMPANY table using the \d

company command.

# \d company

This will produce the following result, where company_pkey is an implicit index, which got

created when the table was created.

Table "public.company"
 Column  | Type          | Modifiers
---------+---------------+-----------
 id      | integer       | not null
 name    | text          | not null
 age     | integer       | not null
 address | character(50) |
 salary  | real          |
Indexes:
    "company_pkey" PRIMARY KEY, btree (id)
    "salary_index" btree (salary)

You can list all indexes database-wide using the \di command −

The DROP INDEX Command

An index can be dropped using the PostgreSQL DROP INDEX command. Care should be taken when

dropping an index because performance may be slowed or improved.

The basic syntax is as follows −

DROP INDEX index_name;

You can use the following statement to delete the previously created index −

# DROP INDEX salary_index;
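The whole create/inspect/drop cycle above can be exercised end to end. This sketch uses SQLite (via Python's sqlite3) as a stand-in for PostgreSQL, since the CREATE INDEX and DROP INDEX syntax is the same; the plan-inspection command and its output format are SQLite's, not psql's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE company (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")
conn.executemany("INSERT INTO company (name, salary) VALUES (?, ?)",
                 [("emp%d" % i, 1000.0 + i) for i in range(100)])
conn.execute("CREATE INDEX salary_index ON company (salary)")

# Ask the planner how it would run a salary lookup: it should use salary_index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM company WHERE salary = 1042.0"
).fetchall()
print(plan)

conn.execute("DROP INDEX salary_index")   # after this, the same query scans the table
```

The plan's detail text mentions `salary_index`, confirming that the WHERE-clause lookup is served by the index rather than a full table scan.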

a. Describe the structures of XML data?


Answer: An XML database is used to store a huge amount of information in the XML format. As

the use of XML is increasing in every field, it is required to have a secured place to store the

XML documents. The data stored in the database can be queried using XQuery, serialized, and

exported into a desired format.

XML Database Types

There are two major types of XML databases −

XML- enabled

Native XML (NXD)

XML - Enabled Database

An XML-enabled database is a relational database extended to support the storage and conversion

of XML documents. Data is stored in tables consisting of rows and

columns. The tables contain sets of records, which in turn consist of fields.

XML Usage

A short list of XML usage says it all −

XML can work behind the scene to simplify the creation of HTML documents for large

web sites.

XML can be used to exchange the information between organizations and systems.

XML can be used for offloading and reloading of databases.

XML can be used to store and arrange the data, which can customize your data handling

needs.

XML can easily be merged with style sheets to create almost any desired output.

Virtually, any type of data can be expressed as an XML document.

<?xml version="1.0"?>

<contact-info>

<name>Tanmay Patil</name>

<company>TutorialsPoint</company>

<phone>(011) 123-4567</phone>

</contact-info>
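The sample document above can be read with any standard XML library; for instance, using Python's built-in ElementTree:

```python
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0"?>
<contact-info>
  <name>Tanmay Patil</name>
  <company>TutorialsPoint</company>
  <phone>(011) 123-4567</phone>
</contact-info>"""

root = ET.fromstring(doc)          # parse the document into an element tree
print(root.tag)                    # contact-info
print(root.findtext("name"))       # Tanmay Patil
print(root.findtext("phone"))      # (011) 123-4567
```

This illustrates the tree structure of XML data: a single root element (`contact-info`) containing child elements that hold the text values.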


XML Declaration

The XML document can optionally have an XML declaration. It is written as follows −

<?xml version="1.0" encoding="UTF-8"?>

Where version is the XML version and encoding specifies the character encoding used in the

document.