
NETWORK MANAGEMENT AND SECURITY

Submitted by: PARMVIR SINGH (A20405210047), B.Tech-CSE, III Semester

Under the Guidance of (Ms. BHAGYASHREE NARUKA)

Amity School of Engineering and Technology

AMITY UNIVERSITY RAJASTHAN

CERTIFICATE

This is to certify that the report entitled NETWORK MANAGEMENT AND SECURITY done by PARMVIR SINGH, Roll No. AUR1023045, student of B.TECH CSE, is an authentic work carried out by him/her at Amity School of Engineering and Technology, Amity University Rajasthan under my guidance. The matter embodied in this work has not been submitted earlier to the best of my knowledge.

PARMVIR SINGH Signature:

Ms. Bhagyashree Naruka Signature:
Designation: Date:

CONTENTS

Abstract
1. Introduction
1.1. ISO Network Management Forum divided Management
1.2. The Key Security Issues for the E-Business Economy
2. HISTORY
2.1. 1920s: Network management begins
2.2. 1960s: Network Control Center
2.3. 1970s: Network Operations Center
2.4. 1980s: Updating the NOC
2.5. Global Network Operations Center
2.6. HISTORY OF NETWORK SECURITY
3. PRESENT STATE OF NETWORK MANAGEMENT AND SECURITY
4. ARCHITECTURAL MODEL OF NMS
5. Implementation
6. Expert System Prototype
6.1. Simple Network Management Protocol (SNMP)
6.2. Internet Protocol Layers
7. PAST AND FUTURE
8. How SNMP Works
9. INTERNET ARCHITECTURE AND VULNERABLE SECURITY ASPECTS
9.1. IPv4 Architecture
9.2. IPv6 Architecture
10. NETWORK MANAGEMENT APPLICATION
10.1. A strategy for application performance management
11. Attacks through the Current Internet Protocol IP
11.1. Common Internet Attack Methods
11.1.1. Eavesdropping
11.1.2. Viruses
11.1.3. Worms
11.1.4. Trojans
11.1.5. Phishing
11.2. IP Spoofing Attacks
11.3. Denial of Service
12. Technology for Internet Security
12.1. Cryptographic systems
12.2. Firewall
12.3. Intrusion Detection Systems
13. SECURITY IN DIFFERENT NETWORKS
13.1. CURRENT DEVELOPMENTS IN NETWORK SECURITY
13.1.1. Hardware Developments
13.1.2. Software Developments
14. MARKET ACCEPTANCE
15. FUTURE OF NETWORK MANAGEMENT AND SECURITY
16. SCOPES
17. Conclusions

Abstract: This paper describes work in our project, funded by the DARPA Dynamic Coalitions program, to design, develop, and demonstrate a system for automatically managing security policies in dynamic networks. Specifically, we aim to reduce human involvement in network management by building a practical network reconfiguration system so that simple security policies stated as positive and negative invariants are upheld as the network changes. The focus of this project is a practical tool that lets systems administrators verifiably enforce simple multi-layer network security policies. Our key design considerations are the computational cost of policy validation and the power of the enforcement primitives. The central component is a policy engine populated by models of network elements and services that validates policies and computes new configuration settings for network elements when they are violated. We instantiate our policy enforcement tool using a monitoring and instrumentation layer that reports network changes as they occur and implements configuration changes computed by the policy engine.

Network Management and Security

1. INTRODUCTION:

Any complex system requires monitoring and control; this includes autonomous systems and computer networks. Network management involves the deployment, integration and coordination of devices to monitor, test, poll, configure, analyze, evaluate, and control the network and its components. The objective of network management is to meet the requirements of a network, including availability, real-time behavior, operational performance, and Quality of Service, at a reasonable cost. Networks are heterogeneous, however, so devices need standards to interoperate. Security and automated network management are necessary features for managers and administrators of large networks; platforms such as Windows 2000, for example, provide features for network security, bandwidth management, and automated client management. Network management work is commonly described in terms of operation, administration, maintenance, and provisioning. Operation deals with keeping the network (and the services that the network provides) up and running smoothly. It includes monitoring the network to spot problems as soon as possible, ideally before users are affected. Administration deals with keeping track of resources in the network and how they are assigned. It includes all the "housekeeping" that is necessary to keep the network under control. Maintenance is concerned with performing repairs and upgrades: for example, when equipment must be replaced, when a router needs a patch for an operating system image, or when a new switch is added to a network. Maintenance also involves corrective and preventive measures to make the managed network run "better", such as adjusting device configuration parameters.

Provisioning is concerned with configuring resources in the network to support a given service. For example, this might include setting up the network so that a new customer can receive voice services. Network and Security Manager (NSM) is highly scalable and flexible: enterprise customers can leverage NSM globally to scale from branch to data center, and service providers can use this network security management solution for carrier-class deployments. NSM can be deployed as software on a server or as dedicated appliances to scale to large enterprise and service provider environments.

Network management is the process of controlling a complex data network to maximize its efficiency and productivity. The overall goal of network management is to help with the complexity of a data network and to ensure that data can travel across it with maximum efficiency and transparency to the users.

1.1. The International Organization for Standardization (ISO) Network Management Forum divided management into functional areas, including:
Fault Management
Security Management
Performance Management
Accounting Management

Fault management is the process of locating problems, or faults, on the data network. It involves the following steps:
Discover the problem
Isolate the problem
Fix the problem (if possible)

1.2. An Introduction to the Key Security Issues for the E-Business Economy

With the explosion of the public Internet and e-commerce, private computers and computer networks, if not adequately secured, are increasingly vulnerable to damaging attacks. Hackers, viruses, vindictive employees and even human error all represent clear and present dangers to networks. And all computer users, from the most casual Internet surfers to large enterprises, could be affected by network security breaches. However, security breaches can often be easily prevented. How? This guide provides a general overview of the most common network security threats and the steps you and your organization can take to protect yourselves from threats and ensure that the data traveling across your networks is safe.

In today's information age, the need for continuous internet connectivity cannot be denied. As an individual user or as an employee in your organization, blended threats are waiting to attack YOU by identifying one vulnerable moment when your defenses are low. These can be viruses, malware, spam, Trojans and insider attacks such as data theft and leakage. Securing YOU, the user, thus becomes critical. How do you ensure continuous security against sophisticated IT security threats? Cyberoam's identity-based security solutions can secure your every move at work, at home and while you travel, from the network gateway to the endpoints. It binds security to your identity and works as your private security guard, even when you are away from work or at home. Its endpoint security protects your sensitive data by securing your endpoints and storage devices and controlling applications.

The world is becoming more interconnected with the advent of the Internet and new networking technology. There is a large amount of personal, commercial, military, and government information on networking infrastructures worldwide. Network security is becoming of great importance because of intellectual property that can be easily acquired through the internet. There are currently two fundamentally different networks: data networks and synchronous networks comprised of switches. The internet is considered a data network. Since the current data network consists of computer-based routers, information can be obtained by special programs, such as Trojan horses, planted in the routers. The synchronous network that consists of switches does not buffer data and is therefore not threatened by attackers. That is why security is emphasized in data networks, such as the internet, and other networks that link to the internet. The vast topic of network security is analyzed by researching the following:
History of security in networks
Internet architecture and vulnerable security aspects of the Internet
Types of internet attacks and security methods
Security for networks with internet access
Current development in network security hardware and software

Based on this research, the future of network security is forecasted. New trends that are emerging will also be considered to understand where network security is heading. In the field of networking, the area of network security consists of the provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and network-accessible resources. Network security involves the authorization of access to data in a network, which is controlled by the network administrator. Users choose or are assigned an ID and password or other authenticating information that allows them access to information and programs within their authority. Network security covers a variety of computer networks, both public and private, that are used in everyday jobs conducting transactions and communications among businesses, government agencies and individuals. Networks can be private, such as within a company, while others might be open to public access. Network security is involved in organizations, enterprises, and other types of institutions. It does as its title explains: it secures the network, as well as protecting and overseeing the operations being done.

2. HISTORY

For more than a century, AT&T people have managed the AT&T network to provide superior, reliable service. In the early years, as AT&T extended its network, "management" meant deciding which new routes to build and which existing routes needed additional circuits. By 1900, AT&T had developed general statistical methods to predict future demand for service between any two points on the network. These methods provided guidance for network managers, who needed to balance the cost of construction with the risk of a subscriber reaching a busy circuit. There was little active day-to-day traffic management, nor was any really needed in an era when subscribers expected to be called back by an operator when their distant party was on the line. To an extent, these operators themselves managed the network, one call at a time, as they sought routings through other switchboards that would get the customer's call through.

2.1. 1920s: Network management begins By the 1920s, AT&T had designed its network to meet the demands for quick, efficient service at the peak periods of a normal business day. But unusual events, such as holidays and natural disasters, could cause delays. Handling these events required active, coordinated management of the network as a whole.

Long Distance operators, Kansas City, Missouri, 1920. The supervisor is on roller skates so she can get around the large room more quickly to assist the operators.

Active network management began in the mid-1920s with the establishment of regional Traffic Control Bureaus in Chicago, Cleveland and New York. These bureaus served as clearinghouses for all information affecting traffic over their portion of the network. The bureaus stayed in contact with each other and with important switching centers in their regions via dedicated teletype (printing telegraph) systems. Their staffs implemented plans for coping with unusual calling patterns, weather, damaged lines or other emergency situations. Switching centers might be instructed to reroute calls. Circuits might be temporarily reassigned. Large, manually operated wall displays provided a visual depiction of the condition of major network routes. The bureaus closed in the late 1950s.

2.2. 1960s: Network Control Center : AT&T opened a Network Control Center in New York in 1962. By now, most customers dialed their own long distance calls. Switch and route information flowed in real time from the most important toll switches to status boards in New York. Similar data from the rest of the switches flowed into three new regional centers in Chicago, Rockdale, Ga., and White Plains, N.Y. Network managers could respond more quickly to unusual situations, and instruct the relevant switching centers to take steps, such as heading off calls with little chance of completion, or sending calls over indirect alternate routes with available circuits. The Cuban Missile crisis of October 1962 provided an early test for the new centers. As President Kennedy addressed the nation, AT&T network managers placed controls throughout the network to prevent the volume of Miami-bound call attempts from overwhelming switches and circuits throughout the Southeast.

2.3. 1970s: Network Operations Center :-

AT&T Network Operations Center, Bedminster, N.J., 1987.

In 1977, AT&T replaced the Control Center with a Network Operations Center (NOC) in Bedminster, N.J. The new center included domestic and international status boards, which automatically updated every 12 seconds, and computer databases to instantly provide managers with the information needed to reroute calls.

Many changes followed over the next decade. Dramatically increased computer intelligence became available, both in the network itself and in auxiliary functions. Common channel signaling put call-set-up on a digital network separate from the circuits that carried the calls. The network moved from analog toward digital technologies. One effect of these changes was an increased ability to actively manage the network both automatically and by management intervention.

2.4. 1980s: Updating the NOC : AT&T revamped and modernized the NOC in 1987, adding a 75-screen video wall where computer-driven support systems provided information on multiple layers and categories of network activity. Managers used computer systems and terminals to find detailed information on any switch or route in the network. They then used those same systems to issue instructions to any place in the network.

2.5. Global Network Operations Center : AT&T's system had become a Worldwide Intelligent Network. Two regional control centers, in Denver and Conyers, Ga., opened in 1991 and assumed the task of monitoring and managing the flow of traffic onto and off of the network. In 1999, AT&T replaced the NOC with a new Global Network Operations Center, to better meet the needs of the 21st century.

2.6. HISTORY OF NETWORK SECURITY :-

Recent interest in security was fueled by the crime committed by Kevin Mitnick. Kevin Mitnick committed the largest computer-related crime in U.S. history [3]. The losses were eighty million dollars in U.S. intellectual property and source code from a variety of companies [3]. Since then, information security came into the spotlight. Public networks are being relied upon to deliver financial and personal information. Due to the evolution of information that is made available through the internet, information security is also required to evolve. Due to Kevin Mitnick's offense, companies are emphasizing security for their intellectual property. The Internet has been a driving force for data security improvement. Internet protocols in the past were not developed to secure themselves. Within the TCP/IP communication stack, security protocols are not implemented. This leaves the internet open to attacks. Modern developments in the internet architecture have made communication more secure. As the internet came to be, security was low profile and on the back burner for most corporations. Connectivity was a primary concern for Information Technology professionals. With this beginning, malicious users began to infiltrate and modify systems and data. Sending out viruses and hacking through weak, unprotected networks, these users became an immediate threat to legitimate businesses that wanted to expand and grow globally. Many Chief Information Officers state that the ever-growing concern of security is one of the biggest tasks facing the information technology field today. With spyware/malware, worms, viruses, internal threats and hackers, companies today face their most challenging time for e-commerce growth. With customers all over the globe, the protection of local assets as well as customers' account information is of the utmost importance. The historical events that have caused such a concern with computers began with the simple hacking of phones by Captain Crunch and the adding of boot sector viruses to floppy disks. The growth of these malicious activities can now affect millions of users within a matter of minutes.

3. PRESENT STATE OF NETWORK MANAGEMENT AND SECURITY :-

As early as 2003, the American Management Association conducted a survey and found that 75% of the surveyed companies had monitored the online behavior of their staff. According to a survey by the United States Association of Electronic Policy this year, 26% of enterprises use network monitoring systems to monitor their staff, and 2% of employees who used instant messaging systems improperly were dismissed. 50% of employees' online activities have nothing to do with work; for this alone, U.S. companies pay billions of dollars annually. "Korea Daily" reported that in South Korea, 60% of large enterprises and 30% of public enterprises check the e-mails of staff. A well-known site in Thailand made a survey last year showing that 40% of employees are under Internet surveillance at their companies. A manager of Motorola disclosed to a reporter: "For large enterprises or companies, they use information filtering technology to shield work-unrelated websites. Of course, if you want to view the content of employee e-mail and chat records, the technical department can do it, but generally they would not investigate because it is a waste of human resources." Some business executives said that they are reluctant to monitor their staff but they have to. However, in the face of staff resistance and public pressure, business managers also feel embarrassed. The director of the personnel department in one company admitted that if employees know that they are monitored, they will feel upset or even make complaints against the company; however, the company really needs control over the company's business. If laws and regulations in this respect were sound, organizations could take necessary measures according to law. Now, they just sign conservative trade-secret documents with each employee, but whether that works, they cannot tell. Many managers know that chatting online during working hours or sending private e-mail leads to low efficiency, wasted network bandwidth, and information leakage, so, though reluctant, they have to monitor their staff. But most managers cannot balance the conflicts between the enterprise and its staff brought about by monitoring employees. Moreover, due to the lack of laws in this area, it is difficult to handle this problem and it is also difficult to control its extent. "Monitoring" brings great pressure on corporate managers and also brings challenges to the entire enterprise monitoring and management system.

The digital economy stands out conspicuously today. The most obvious change is that the most significant gains in recent years can be directly attributed to Internet connection. For most companies, the Internet has become as indispensable as the telephone and the copy machine. However, along with the benefits also comes the pain. In addition to the high cost of access to e-commerce and online information, the Internet has quietly turned into a casino for employees. More and more employees are trading stocks online, downloading music, gambling, playing games, buying books, watching sports news, sending e-cards and visiting pornographic websites, all during working time. Some people send chain letters and jokes around the company, wasting already strained bandwidth and mental resources. If employers and human resource practitioners can recognize that there will be more and more problems with applications of the Internet, they will be able to find ways to improve productivity and help the company succeed like a duck in water. With applications of the Internet and other new technologies, such as wireless web browsers on telephones and personal digital assistants, a new and more elusive "wisdom struggle" will appear. The key is to maintain a reasonable balance between staff and productivity.

Survey: Monitoring IM (instant messaging software) has become an important IT function for most banks on Wall Street

Employee monitoring software is becoming more attractive; prices are more moderate and the tools are simpler to use. Companies realize the value of monitoring software, which helps them improve security and efficiency, reduce improper activities, prevent leakage of company secrets and lower the risk of legal liability. Many analysts point out that the best way to protect the interests of the company is to use monitoring software. Michael Gartenberg, research director of Jupiter Media Metrix, said that if companies choose to use monitoring software, they must tell employees what they are monitoring and why. Gartenberg said, "Employees must understand that the employer has the right to protect the company business communication tools from being abused, including situations that would bring the company risk of liability or trouble." "Employers also need to understand that they need to set and achieve expectations and strike the right balance between trust and distrust of employees." Today's IT administrators face infinitely greater challenges than they did in previous years, said Adam Powers, CTO of Lancope and a speaker on the panel. Traditional network management and security tools are no longer enough to maintain secure, high-performance networks in the current technology environment. Fortunately, in recent years, the industry has developed several network and security innovations that fill in the gaps where traditional tools leave off, empowering IT teams to effectively address current and future challenges. One such innovation, Lancope's StealthWatch system, delivers comprehensive network visibility across physical and virtual environments by leveraging NetFlow and other flow data from existing routers and switches. By unifying security, network and application performance monitoring in a single platform, StealthWatch eliminates network blind spots, cuts network and security management costs and dramatically reduces the time from problem onset to resolution. StealthWatch uncovers not only the sophisticated, externally launched attacks that often bypass perimeter defenses, but also internal threats such as policy violations, network misuse, misconfigurations and data leakage that traditional security solutions are not designed to detect. Alongside flow-based monitoring and anomaly detection, the system features advanced capabilities such as application and identity awareness to further expedite troubleshooting, assist in forensic investigations and support compliance initiatives. As Lancope's panel speaker, Powers is a leading innovator in the development of next-generation network behavior anomaly detection solutions. He has a decade of operational and engineering experience in enterprise IP security technologies.

4. ARCHITECTURAL MODEL OF NMS :-

(Figure: architectural model of the NMS.)

5. Implementation :-

We have developed a rule-based expert system prototype, with rules for load monitoring and traffic routing. The Network Manager Interface has been implemented using web technology. Using the web-based interface we can perform basic network monitoring services. The network interface has been implemented using the IBM NetView/6000 for AIX Network Management System. The NetView/6000 is an SNMP-based networking tool primarily meant for data collection in IP networks. It can also monitor various network parameters against previously defined thresholds and generate traps for network operators. We can use these traps to trigger other programs, such as the monitor.

6. Expert System Prototype :-

The prototype of the rule-based ExNet was tested on a hypothetical network consisting of eight hosts and four gateways. Our aim is to simulate conditions for a real network and to route data between two given hosts. The system was built using CLIPS [GIAR 89]. The main tasks performed in this implementation were 1) monitoring congestion in the network by periodically gathering information on system status, 2) deciding whether the congestion has exceeded thresholds and which nodes are more congested than others, 3) choosing a new route for the data to be routed between the two hosts, and 4) suggesting a corrective action to the human manager. The database containing the network information is provided by generating text files using IBM NetView/6000 for AIX. The monitor decides whether the problem with the current routing is related to performance or to some system fault.

Performance-related problems include high latency, congestion or high system load. System faults include a crashed gateway or host. To determine a system fault, the monitor module pings the host or gateway to inquire about its status. The expert system analyzes the problem, requests more information from the network interface and, after considering the alternatives, decides on the new route to be implemented between the two given hosts. If there is a problem on an intermediate host/gateway, such as high load or too many users, the expert system suggests a set of corrective actions to be taken. In this implementation example, we have used four main rule groups: 1) Check-CPU-State, 2) Check-Routing-Response-Time, 3) Diagnose-Network-Status, and 4) Determine-New-Route. The first set of rules checks the load on each computer to determine its load status. Heavily loaded nodes, once identified, are excluded from future path selections. The second set of rules evaluates whether the response time associated with the current network route meets the desired performance requirements. If the latency or delay is excessive and does not meet the user requirement, the current route needs to be changed. The third set of rules shows how the system can prevent nodes from sending new requests to highly loaded machines or network devices. The last set of rules computes a new path from the source to the destination. It first determines which gateways/hosts are still available for path selection. From the available gateways, the path with the least delay is selected.
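The rule groups just described can be pictured in ordinary code. The sketch below is a hypothetical Python rendering of the four rule groups (load check, response-time check, congestion diagnosis, route recomputation); the thresholds, node names and latency figures are invented for illustration and are not taken from the CLIPS prototype.

```python
# Hypothetical sketch of the four ExNet rule groups; thresholds and the
# network model are invented for illustration, not taken from the prototype.
LOAD_THRESHOLD = 0.8      # rule 1: Check-CPU-State
LATENCY_THRESHOLD = 200   # ms, rule 2: Check-Routing-Response-Time

def overloaded_nodes(load_by_node):
    """Rule 1: mark heavily loaded hosts/gateways for exclusion."""
    return {n for n, load in load_by_node.items() if load > LOAD_THRESHOLD}

def route_is_acceptable(route, latency_by_link):
    """Rule 2: does the current route meet the response-time requirement?"""
    total = sum(latency_by_link[link] for link in zip(route, route[1:]))
    return total <= LATENCY_THRESHOLD

def choose_new_route(routes, excluded, latency_by_link):
    """Rules 3 and 4: skip congested nodes, pick the lowest-delay path."""
    usable = [r for r in routes if not excluded.intersection(r)]
    if not usable:
        return None  # suggest a corrective action to the human manager instead
    return min(usable, key=lambda r: sum(latency_by_link[l] for l in zip(r, r[1:])))

# Example: two candidate routes between host A and host B through gateways g1, g2.
load = {"A": 0.3, "g1": 0.9, "g2": 0.4, "B": 0.2}
latency = {("A", "g1"): 50, ("g1", "B"): 60, ("A", "g2"): 80, ("g2", "B"): 90}
routes = [["A", "g1", "B"], ["A", "g2", "B"]]

print(route_is_acceptable(["A", "g1", "B"], latency))             # True (110 ms <= 200 ms)
print(choose_new_route(routes, overloaded_nodes(load), latency))  # ['A', 'g2', 'B'], g1 is overloaded
```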

6.1. Simple Network Management Protocol (SNMP) :-

We have got a good-size network with hundreds of users in several locations, connected by routers, hubs, bridges, switches, dial-up modems, Web servers, application servers, you name it. When everything's fine, then everything's fine. But what happens if a section of the network starts experiencing dropouts, outages, reduced throughput or other network-based errors? How do you know that something has gone wrong, discover where the source of the problem is and then fix it?

This isn't a trivial issue. In today's economy, businesses are linked to one another and to their customers by networks that must be kept running around the clock. To do that, you need to know when there's a network problem, and you need to know now. The most common mechanism for keeping tabs on network health is a standard called Simple Network Management Protocol (SNMP). Any device (which in this case can refer to software as well as hardware) that can be managed by SNMP contains a monitoring program, called an agent, that gathers information on that device's network activity. This information is in the form of messages called protocol data units (PDU) and is stored in an onboard database called a management information base (MIB). At the network administrator's console, there's usually some type of monitoring application, often called a network management station, such as IBM's Tivoli NetView or Hewlett-Packard Co.'s OpenView. From this point, the administrator (or an automated or scheduled process) polls all or some of the network nodes, asking for whatever information has been collected. At the device being monitored, another piece of software, called the master agent, looks at what's been stored in the MIB and sends it back up the chain to the network management station, where it can be collated and processed with information from other nodes to determine what's happening on the network. At this point, SNMP can also be used by the network administrator to reconfigure specific devices. SNMP agents can also be set up to automatically notify the network management station if certain predefined conditions or events occur. These alerts are called traps.
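To make the agent/MIB/manager interaction above concrete, the following is a minimal conceptual sketch in Python: an agent holds a tiny MIB, the management station polls it, and the agent raises a trap when a threshold is crossed. It models the ideas only; it is not a real SNMP implementation, and the error-count threshold is an assumption made for the example.

```python
# Conceptual sketch of SNMP-style polling and traps; the values and the
# trap threshold are invented for illustration, not a real SNMP stack.
class Agent:
    def __init__(self, name):
        self.name = name
        # The MIB: a store of (OID -> value) pairs kept on the device.
        self.mib = {"1.3.6.1.2.1.1.5.0": name,     # sysName
                    "1.3.6.1.2.1.2.2.1.14.1": 0}   # ifInErrors for interface 1

    def get(self, oid):
        """Answer a manager's GET request (one protocol data unit)."""
        return self.mib.get(oid)

    def record_error(self, notify):
        """Update the MIB and send a trap if errors exceed a threshold."""
        self.mib["1.3.6.1.2.1.2.2.1.14.1"] += 1
        if self.mib["1.3.6.1.2.1.2.2.1.14.1"] > 10:
            notify(self.name, "input-error threshold exceeded")

class ManagementStation:
    def __init__(self, agents):
        self.agents = agents

    def poll(self, oid):
        """Poll every node for one variable and collate the answers."""
        return {a.name: a.get(oid) for a in self.agents}

    def trap(self, source, message):
        print(f"TRAP from {source}: {message}")

nms = ManagementStation([Agent("router-1"), Agent("switch-2")])
print(nms.poll("1.3.6.1.2.1.2.2.1.14.1"))   # scheduled poll of the error counter
for _ in range(11):
    nms.agents[0].record_error(nms.trap)     # the 11th error fires the trap
```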

7. PAST AND FUTURE : When networks were first created, problems could be solved only by network gurus using relatively primitive tools such as Internet Control Message Protocol and ping. As networks grew, however, these simple tools no longer sufficed for monitoring every device on a network.

The first specific network management tool was the Simple Gateway Monitoring Protocol (SGMP), which debuted in 1987. SGMP could monitor gateways but still wasn't a general-purpose tool. SNMP came along a year later, but only for TCP/IP networks. In 1993, SNMP was extended to use two other network transport systems, AppleTalk and Novell Inc.'s IPX protocols. A more recent offshoot of SNMP is Rmon, a remote monitoring capability that gives a network manager the ability to monitor subnetworks as a whole, rather than just individual devices. The more powerful and secure Common Management Information Protocol (CMIP), developed in the mid-1990s, was expected to replace SNMP. However, the fact that CMIP uses 10 times the network overhead has meant that SNMP is still the major player in the industry. Despite its innocent-sounding name, SNMP isn't simple. It's a highly complex protocol that can be difficult to implement. Also, SNMP isn't very efficient. It wastes considerable bandwidth relaying unnecessary information, such as the version number, which is included in every message. But one thing that sets SNMP apart from so many other standards is that it's not a mere paper specification but is widely available and interoperable among a variety of network components

8. How SNMP Works :-

9. INTERNET ARCHITECTURE AND VULNERABLE SECURITY ASPECTS :-

Fear of security breaches on the Internet is causing organizations to use protected private networks or intranets [4]. The Internet Engineering Task Force (IETF) has introduced security mechanisms at various layers of the Internet Protocol Suite [4]. These security mechanisms allow for the logical protection of data units that are transferred across the network. The security architecture of the internet protocol, known as IP Security, is a standardization of internet security. IP Security, IPsec, covers the new generation of IP (IPv6) as well as the current version (IPv4). Although new techniques, such as IPsec, have been developed to overcome the internet's best-known deficiencies, they seem to be insufficient [5]. Figure 2 shows a visual representation of how IPsec is implemented to provide secure communications. IPsec is a point-to-point protocol: one side encrypts, the other decrypts, and both sides share a key or keys. IPsec can be used in two modes, namely transport mode and tunnel mode.

Figure 2. IPsec contains a gateway and a tunnel in order to secure communications.

The current version and the new version of the Internet Protocol are analyzed to determine the security implications. Although security may exist within the protocol, certain attacks cannot be guarded against. These attacks are analyzed to determine other security mechanisms that may be necessary. The internet protocol's design is vast and cannot be covered fully here; the main parts of the architecture relating to security are discussed in detail.

9.1. IPv4 Architecture :-

The protocol contains a couple of aspects that cause problems with its use. These problems do not all relate to security; they are mentioned to give a comprehensive understanding of the internet protocol and its shortcomings. The causes of problems with the protocol are:
Address Space
Routing
Configuration
Security
Quality of Service

The IPv4 architecture has an address that is 32 bits wide. This limits the maximum number of computers that can be connected to the internet: the 32-bit address provides for at most 2^32 (roughly four billion) addresses. The problem of exceeding that number was not foreseen when the protocol was created. The small address space of IPv4 also facilitates malicious code distribution. Routing is a problem for this protocol because the routing tables are constantly increasing in size. The maximum theoretical size of the global routing tables was 2.1 million entries. Methods have been adopted to reduce the number of entries in the routing table. This is helpful for a short period of time, but a drastic change is needed to address this problem. The TCP/IP-based networking of IPv4 requires that the user supply some data in order to configure a network. Some of the information required is the IP address, routing gateway address, subnet mask, and DNS server. Simplicity of configuration is not evident in the IPv4 protocol. The user can request appropriate network configuration from a central server; this eases configuration hassles for the user but not for the network's administrators. The lack of embedded security within the IPv4 protocol has led to the many attacks seen today. Mechanisms to secure IPv4 do exist, but there are no requirements for their use. IPsec is a specific mechanism used to secure the protocol. IPsec secures the packet payloads by means of cryptography. IPsec provides the services of confidentiality, integrity, and authentication. This form of protection does not account for the skilled hacker who may be able to break the encryption method and obtain the key. When the internet was created, quality of service (QoS) was standardized according to the information that was transferred across the network. The original transfer of information was mostly text based. As the internet expanded and technology evolved, other forms of communication began to be transmitted across the internet. The quality of service for streaming video and music is much different from that for standard text. The protocol does not have the functionality of dynamic QoS that changes based on the type of data being communicated.

9.2. IPv6 Architecture :-

When IPv6 was being developed, emphasis was placed on aspects of the IPv4 protocol that needed to be improved. The development efforts were placed in the following areas:
Routing and addressing
Multiprotocol architecture
Security architecture
Traffic control

The IPv6 protocol's address space was extended by supporting 128-bit addresses. With 128-bit addresses, the protocol can support up to about 3.4 x 10^38 machines. The address bits are used less efficiently in this protocol because doing so simplifies addressing configuration.
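The address-space figures quoted for IPv4 and IPv6 follow directly from the address widths: 32 bits give 2^32 (about 4.3 billion) addresses, and 128 bits give 2^128 (about 3.4 x 10^38). A two-line check:

```python
# Address space of IPv4 (32-bit) versus IPv6 (128-bit) addresses.
print(f"IPv4: {2**32:,} addresses")      # 4,294,967,296  (~4.3 billion)
print(f"IPv6: {2**128:.2e} addresses")   # ~3.40e+38
```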

10. NETWORK MANAGEMENT APPLICATION :-

Networked applications are nothing new. What is new is the shift in the management focus from availability and event management to application performance management. Network availability/reliability is no longer the pre-eminent problem it once was. The topics of interest now are the service levels experienced with the increasing number of networked applications (VoIP, MPLS, etc.) appearing in the enterprise. In general, short of a physical break from whatever cause, the network is pretty much available and accessible 24/7, 365 days a year. The problem with all these networked applications, however, is maintaining, in the face of escalating demand, acceptable levels of performance, whatever the service level metric (such as the performance of an individual service application or overall access for a class of critical end users). IT is spending more and more of its time analyzing and tracking what is happening across the network, at endpoints and in between, to keep service performance at acceptable levels.

10.1. A strategy for application performance management :-

The solution is to work from a coherent strategy to understand what needs to be done, assess what you have today that can help, and then determine what you need to add or change to accomplish the task. Only rarely is such decision making and planning done from a completely clean slate without existing tools or with a blank check to rip and replace existing tools with a totally new solution set. Therefore, the next best starting point is to plan on using as many of the existing monitoring and management tools as possible.

11. Attacks through the Current Internet Protocol (IPv4) :-

There are four main computer security attributes. They were mentioned before in a slightly different form, but are restated for convenience and emphasis. These security attributes are confidentiality, integrity, privacy, and availability. Confidentiality and integrity still hold to the same definition. Availability means the computer assets can be accessed by authorized people. Privacy is the right to protect personal secrets. Various attack methods relate to these four security attributes. Table 1 shows the attack methods and solutions.

11.1. Common Internet Attack Methods :-

Common internet attack methods are broken down into categories. Some attacks gain system knowledge or personal information, such as eavesdropping and phishing. Attacks can also interfere with the system's intended function, such as viruses, worms and trojans. The other form of attack is when the system's resources are consumed uselessly; this can be caused by a denial of service (DoS) attack. Other forms of network intrusion also exist, such as land attacks, smurf attacks, and teardrop attacks. These attacks are not as well known as DoS attacks, but they are used in some form or another even if they aren't mentioned by name.

11.1.1. Eavesdropping :-

Interception of communications by an unauthorized party is called eavesdropping. Passive eavesdropping is when the person only secretly listens to the networked messages. On the other hand, active eavesdropping is when the intruder listens and inserts something into the communication stream. This can lead to the messages being distorted. Sensitive information can be stolen this way.

11.1.2. Viruses :- Viruses are self-replicating programs that use files to infect and propagate [8]. Once a file is opened, the virus will activate within the system.

11.1.3. Worms :-

A worm is similar to a virus because they are both self-replicating, but the worm does not require a file to allow it to propagate [8]. There are two main types of worms: mass-mailing worms and network-aware worms. Mass-mailing worms use email as a means to infect other computers. Network-aware worms are a major problem for the Internet. A network-aware worm selects a target and, once the worm accesses the target host, it can infect it by means of a Trojan or otherwise.

11.1.4. Trojans :-

Trojans appear to be benign programs to the user, but will actually have some malicious purpose. Trojans usually carry some payload such as a virus.

11.1.5. Phishing :- Phishing is an attempt to obtain confidential information from an individual, group, or organization. Phishers trick users into disclosing personal data, such as credit card numbers, online banking credentials, and other sensitive information.

11.2. IP Spoofing Attacks :-

Spoofing means having the address of a computer mirror the address of a trusted computer in order to gain access to other computers. The identity of the intruder is hidden by different means, making detection and prevention difficult. With the current IP protocol technology, IP-spoofed packets cannot be eliminated.

11.3. Denial of Service :-

Denial of Service is an attack in which a system receiving too many requests cannot return communication with the requestors [9]. The system then consumes resources waiting for the handshake to complete. Eventually, the system cannot respond to any more requests, rendering it without service.
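One crude way to picture this resource exhaustion is to count half-open (incomplete-handshake) requests per source and flag any source that exceeds a limit, as in the hypothetical sketch below; the threshold and addresses are invented, and real defenses (SYN cookies, upstream rate limiting) are considerably more involved.

```python
# Simplified illustration of flagging a possible DoS source by counting
# half-open requests; the threshold and traffic are invented for the example.
from collections import Counter

HALF_OPEN_LIMIT = 100          # allowed incomplete handshakes per source

half_open = Counter()
flagged = set()

def on_syn(source_ip):
    """A connection request arrived but the handshake is not yet complete."""
    half_open[source_ip] += 1
    if half_open[source_ip] > HALF_OPEN_LIMIT and source_ip not in flagged:
        flagged.add(source_ip)
        print(f"possible DoS: {source_ip} exceeded {HALF_OPEN_LIMIT} half-open connections")

def on_handshake_complete(source_ip):
    """The handshake finished, so the pending entry is released."""
    if half_open[source_ip] > 0:
        half_open[source_ip] -= 1

# A legitimate client completes its handshakes; the attacker never does.
for _ in range(150):
    on_syn("10.0.0.5")
    on_handshake_complete("10.0.0.5")
    on_syn("203.0.113.9")      # flagged once it passes the limit
```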

12. Technology for Internet Security :-

Internet threats will continue to be a major issue in the global world as long as information is accessible and transferred across the Internet. Different defense and detection mechanisms were developed to deal with these attacks.

12.1. Cryptographic systems :- Cryptography is a useful and widely used tool in security engineering today. It involves the use of codes and ciphers to transform information into unintelligible data.
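As a small illustration of transforming data into unintelligible form and back, the sketch below uses symmetric (shared-key) encryption via the third-party Python cryptography package's Fernet recipe. This is just one convenient example of a cryptographic system, assuming that package is installed; it is not a scheme prescribed by this report.

```python
# Symmetric-key encryption/decryption sketch using the "cryptography" package
# (pip install cryptography); one illustrative example of a cryptographic system.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the shared secret both parties must hold
cipher = Fernet(key)

token = cipher.encrypt(b"transfer $100 to account 42")   # unintelligible on the wire
print(token)                          # e.g. b'gAAAAAB...'
print(cipher.decrypt(token))          # b'transfer $100 to account 42'
```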

12.2. Firewall :-

A firewall is a typical border control mechanism or perimeter defense. The purpose of a firewall is to block traffic from the outside, but it can also be used to block traffic from the inside. A firewall is the front-line defense mechanism against intruders. It is a system designed to prevent unauthorized access to or from a private network. Firewalls can be implemented in hardware, in software, or in a combination of both.
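The border-control idea can be pictured as an ordered list of allow/deny rules applied to each packet, first match wins. The fragment below is a toy packet-filter sketch with made-up rules; production firewalls add stateful inspection, NAT and much more.

```python
# Toy packet-filter sketch: ordered allow/deny rules, first match wins.
# The rule set and packets are invented for illustration.
RULES = [
    {"action": "allow", "direction": "inbound",  "dst_port": 443},   # HTTPS to the web server
    {"action": "allow", "direction": "outbound", "dst_port": None},  # any outbound traffic
    {"action": "deny",  "direction": "inbound",  "dst_port": None},  # default: block inbound
]

def filter_packet(packet):
    for rule in RULES:
        if rule["direction"] != packet["direction"]:
            continue
        if rule["dst_port"] is not None and rule["dst_port"] != packet["dst_port"]:
            continue
        return rule["action"]
    return "deny"  # implicit default if no rule matches

print(filter_packet({"direction": "inbound",  "dst_port": 443}))   # allow
print(filter_packet({"direction": "inbound",  "dst_port": 23}))    # deny (telnet blocked)
print(filter_packet({"direction": "outbound", "dst_port": 80}))    # allow
```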

12.3. Intrusion Detection Systems :-

An Intrusion Detection System (IDS) is an additional protection measure that helps ward off computer intrusions. IDS systems can be software or hardware devices used to detect an attack. IDS products are used to monitor connections and determine whether attacks are being launched. Some IDS systems just monitor and alert of an attack, whereas others try to block the attack.
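A minimal picture of the "monitor and alert" behaviour is signature matching against observed traffic, sketched below; the signatures and sample requests are invented, and real IDS products combine signatures with protocol analysis and anomaly detection.

```python
# Minimal signature-based intrusion detection sketch; the signatures and
# traffic samples are invented for illustration.
import re

SIGNATURES = {
    "SQL injection attempt": re.compile(r"('|%27)\s*or\s+1=1", re.IGNORECASE),
    "Path traversal":        re.compile(r"\.\./\.\./"),
}

def inspect(payload, alert):
    """Check one observed payload against every known signature."""
    for name, pattern in SIGNATURES.items():
        if pattern.search(payload):
            alert(name, payload)

def alert(name, payload):
    # A monitoring-only IDS would log/alert here; an inline system would also drop.
    print(f"ALERT [{name}]: {payload!r}")

inspect("GET /index.php?id=1' OR 1=1 --", alert)     # triggers the SQL injection signature
inspect("GET /../../etc/passwd", alert)              # triggers the path traversal signature
inspect("GET /home", alert)                          # no alert
```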

13. SECURITY IN DIFFERENT NETWORKS :-

Businesses today use combinations of firewalls, encryption, and authentication mechanisms to create intranets that are connected to the internet but protected from it at the same time. An intranet is a private computer network that uses internet protocols. Intranets differ from "extranets" in that the former are generally restricted to employees of the organization, while extranets can generally be accessed by customers, suppliers, or other approved parties. There does not necessarily have to be any access from the organization's internal network to the Internet itself. When such access is provided, it is usually through a gateway with a firewall, along with user authentication and encryption of messages, and it often makes use of virtual private networks (VPNs). Although intranets can be set up quickly to share data in a controlled environment, that data is still at risk unless there is tight security. The disadvantage of a closed intranet is that vital data might not get into the hands of those who need it. Intranets have a place within agencies. But for broader data sharing, it might be better to keep the networks open, with these safeguards:
Firewalls that detect and report intrusion attempts
Sophisticated virus checking at the firewall
Enforced rules for employee opening of email attachments
Encryption for all connections and data transfers
Authentication by synchronized, timed passwords or security certificates

It was mentioned that if the intranet wants access to the internet, virtual private networks are often used. Intranets that exist across multiple locations generally run over separate leased lines, or the newer approach of a VPN can be utilized. A VPN is a private network that uses a public network (usually the Internet) to connect remote sites or users together. Instead of using a dedicated, real-world connection such as a leased line, a VPN uses "virtual" connections routed through the Internet from the company's private network to the remote site or employee. The figure is a graphical representation of an organization and a VPN network.

A typical VPN might have a main LAN at the corporate headquarters of a company, other LANs at remote offices or facilities, and individual users connecting from out in the field.

13.1. CURRENT DEVELOPMENTS IN NETWORK SECURITY :-

The network security field is continuing down the same route. The same methodologies are being used, with the addition of biometric identification. Biometrics provides a better method of authentication than passwords. This might greatly reduce unauthorized access to secure systems. New technology such as the smart card is surfacing in research on network security. The software aspect of network security is very dynamic: new firewalls and encryption schemes are constantly being implemented. The research being performed assists in understanding current developments and projecting the future developments of the field.

13.1.1. Hardware Developments :-

Hardware developments are not developing rapidly. Biometric systems and smart cards are the only new hardware technologies that are widely impacting security. The most obvious use of biometrics for network security is for secure workstation logons for a workstation connected to a network. Each workstation requires some software support for biometric identification of the user as well as, depending on the biometric being used, some hardware device. The cost of hardware devices is one thing that may lead to the widespread use of voice biometric security identification, especially among companies and organizations on a low budget. Hardware devices such as computer mice with built-in thumbprint readers would be the next step up. These devices would be more expensive to implement on several computers, as each machine would require its own hardware device. A biometric mouse, with the software to support it, is available from around $120 in the U.S. The advantage of voice recognition software is that it can be centralized, thus reducing the cost of implementation per machine. At the top of the range, a centralized voice biometric package can cost up to $50,000 but may be able to manage the secure login of up to 5,000 machines. The main use of biometric network security will be to replace the current password system. Maintaining password security can be a major task for even a small organization. Passwords have to be changed every few months, and people forget their password or lock themselves out of the system by incorrectly entering it repeatedly. Very often people write their password down and keep it near their computer. This of course completely undermines any effort at network security. Biometrics can replace this security identification method. The use of biometric identification stops this problem, and while it may be expensive to set up at first, these devices save on administration and user assistance costs. Smart cards are usually credit-card-sized digital electronic media. The card itself is designed to store encryption keys and other information used in authentication and other identification processes. The main idea behind smart cards is to provide undeniable proof of a user's identity. Smart cards can be used for everything from logging in to the network to providing secure Web communications and secure email transactions. It may seem that smart cards are nothing more than a repository for storing passwords. Obviously, someone can easily steal a smart card from someone else. Fortunately, there are safety features built into smart cards to prevent someone from using a stolen card. Smart cards require anyone who is using them to enter a personal identification number (PIN) before they'll be granted any level of access into the system. The PIN is similar to the PIN used by ATM machines. When a user inserts the smart card into the card reader, the smart card prompts the user for a PIN. This PIN was assigned to the user by the administrator at the time the administrator issued the card to the user. Because the PIN is short and purely numeric, the user should have no trouble remembering it and therefore would be unlikely to write the PIN down. But the interesting thing is what happens when the user inputs the PIN. The PIN is verified from inside the smart card. Because the PIN is never transmitted across the network, there's absolutely no danger of it being intercepted. The main benefit, though, is that the PIN is useless without the smart card, and the smart card is useless without the PIN. There are other security issues with the smart card. The smart card is cost effective but not as secure as biometric identification devices.
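The point that the PIN is checked inside the card and never crosses the network can be modelled conceptually as follows. This is an invented illustration (the PIN, key and challenge-response scheme are assumptions for the example), not an implementation of any actual smart card standard.

```python
# Conceptual model of on-card PIN verification: the PIN never leaves the card,
# and the workstation only ever sees a yes/no answer plus a signed challenge.
# Values and the "signature" scheme are invented for illustration.
import hashlib, hmac, os

class SmartCard:
    def __init__(self, pin, secret_key):
        self._pin = pin                # stored only inside the card
        self._key = secret_key         # credential usable only after PIN unlock
        self._unlocked = False

    def verify_pin(self, entered_pin):
        """The comparison happens on the card; only True/False comes back."""
        self._unlocked = hmac.compare_digest(entered_pin, self._pin)
        return self._unlocked

    def sign_challenge(self, challenge):
        """Prove possession of the card's key, but only after PIN unlock."""
        if not self._unlocked:
            raise PermissionError("PIN required")
        return hmac.new(self._key, challenge, hashlib.sha256).hexdigest()

card = SmartCard(pin=b"4821", secret_key=b"card-private-secret")
challenge = os.urandom(16)             # sent by the login server
print(card.verify_pin(b"0000"))        # False: wrong PIN, card stays locked
print(card.verify_pin(b"4821"))        # True: card unlocked locally
print(card.sign_challenge(challenge))  # the only response the network actually sees
```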

13.1.2. Software Developments :-

The software aspect of network security is very vast. It includes firewalls, antivirus, VPN, intrusion detection, and much more. The research and development of all security software is not feasible to study at this point. The goal is to obtain a view of where security software is heading based on the emphasis being placed on it now. The improvement of standard security software still proceeds in the same way: when new viruses emerge, the antivirus is updated to be able to guard against those threats. This process is the same for firewalls and intrusion detection systems. Many of the research papers reviewed were based on analyzing attack patterns in order to create smarter security software. As security hardware transitions to biometrics, the software also needs to be able to use the information appropriately. Current research is being performed on security software using neural networks. The objective of the research is to use neural networks for facial recognition software. Many small and complex devices can be connected to the internet. Most of the current security algorithms are computationally intensive and require substantial processing power. This power, however, is not available in small devices like sensors. Therefore, there is a need for designing lightweight security algorithms. Research in this area is currently being performed.

14. MARKET ACCEPTANCE :-

As a result of challenges with integration and user acceptance, true two-factor authentication is not yet widespread, although it can be found in certain sectors requiring additional security (e.g. banking, military). Faced with regulatory two-factor authentication guidelines in 2005, numerous U.S. financial institutions instead deployed additional knowledge-based authentication methods, such as shared secrets or challenge questions, only to discover later that such methods do not satisfy the regulatory definition of "true multifactor authentication". Supplemental regulatory guidelines and stricter enforcement are now beginning to force the abandonment of knowledge-based methods in favor of "true multifactor authentication". A 2007 study published by the Credit Union Journal and co-sponsored by BearingPoint reported that 94% of the authentication solutions implemented by U.S. financial institutions fail to meet the regulatory definition of true multi-factor authentication. An increasing number of recent undesired disclosures of governmentally protected data [7] [8] or private data [9] [10] is likely to contribute to new TF-A requirements, especially in the European Union.

o FUTURE OF NETWORK MANAGEMENT AND SECURITY :

Next Generation Firewalls (NGFW) and Unified Threat Management (UTM) systems have advanced in sophistication since achieving market acceptance during the first decade of the 2000s. A UTM device is a comprehensive network security product, used as a primary gateway networking and defense solution for organizations. UTM is the evolution of the traditional firewall into an all-inclusive security product able to perform multiple security functions in one single appliance: network firewalling, network intrusion prevention, anti-spyware protection, gateway antivirus (GAV), gateway anti-spam, VPN, website content filtering, load balancing and on-appliance reporting. NGFW technology builds upon UTM, adding application awareness to detect application-specific attacks and enforce application-specific, granular security policies, both inbound and outbound. Security policies can be used to increase or decrease the priority of certain traffic flows, to block traffic, and to record its use for auditing.

A key differentiation between the traditional firewall and the new breed of UTM and NGFW devices is the advent of deep packet inspection (DPI). DPI identifies and halts flows of network traffic containing malware and classifies traffic flows into application-specific categories. Examples of applications are Skype, peer-to-peer (P2P), web surfing, database transfers, streaming video, etc. The newer generation of firewall technology implementing DPI must examine every bit of every byte of every packet in order to classify application traffic effectively and detect malware attempting to enter the protected network. This results in a significant increase in processor load. All network device vendors provide data sheets stating the performance and capabilities of their products; however, these data sheets commonly reflect best-case, ideal scenarios under which the vendors derive their performance numbers. Because of the increased processing required to perform DPI on each network packet, a substantial performance impact can be experienced: testing has shown up to a 90 percent decrease in traffic throughput, owing to the requirement of examining all 1,514 bytes of a packet rather than the traditional ten bytes. For example, a device that can pass traffic at 100 Mbps while performing traditional firewall operations can slow to approximately 10 Mbps when DPI is enabled.

To address the requirements associated with DPI performance testing and to "level the playing field," EEMBC, a leading benchmark consortium, and an expanding group of companies recently formed a DPIBench Working Group to collaborate on and formalize the DPIBench testing methodologies. Companies expected to benefit from this association include Cisco, Juniper, IBM, Check Point, McAfee, Alcatel-Lucent and others. Members have yet to finalize test conditions, and the Working Group is encouraging other voices to be heard in developing this industry-standard benchmark suite. What is needed for further DPIBench development is consensus on the final issues of test setup and agreement on common test and certification procedures. The end goal is to provide consumers of networking security technologies with objective test data so they can make an informed decision when selecting a solution from the myriad of vendor offerings.
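To make the DPI idea concrete, the following is a simplified, illustrative Python sketch that inspects a packet payload for a few well-known application signatures. The signature list and function names are our own simplifications; real DPI engines use far richer signature sets, stream reassembly, and stateful flow analysis.

```python
# Toy deep packet inspection: classify a payload by matching byte-level
# application signatures. Real DPI engines track whole flows and use
# thousands of signatures; this sketch only shows the basic idea.

SIGNATURES = [
    (b"GET ",                 "http"),           # start of an HTTP GET request
    (b"POST ",                "http"),           # start of an HTTP POST request
    (b"\x16\x03",             "tls-handshake"),  # TLS record type 22 (handshake)
    (b"BitTorrent protocol",  "p2p-bittorrent"),
    (b"SSH-2.0",              "ssh"),
]

def classify_payload(payload: bytes) -> str:
    """Return an application label for a single packet payload."""
    for magic, label in SIGNATURES:
        if payload.startswith(magic) or magic in payload[:64]:
            return label
    return "unknown"

# Example usage with a fabricated payload:
print(classify_payload(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"))
# -> "http"
```

Because every payload byte may have to be touched, even this toy loop hints at why enabling DPI at line rate is so much more expensive than header-only filtering.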

The next meeting of the DPIBench Working Group will be hosted by Cavium Networks in Mountain View on August 19th; anyone who finds network security benchmark testing interesting may want to participate in this industry-shaping discussion.

Machine-to-machine (M2M) applications are one of the key business drivers for sensors. M2M covers a wide range of applications, but in a typical scenario a sensor detects an anomaly and sends an alert to a monitoring middleware, which in turn generates a command to an actuator, informs a business operation software application, and eventually sends an alarm to an operator outside the automated loop. Scalability is critical for M2M: soon, billions of sensor devices might be deployed to provide remote monitoring of houses, buildings, patient vital signs, physical security, and a number of new applications enabled by the plummeting price of the technology. The Internet Protocol (IP) provides well-known and proven technology to reach the sensors from anywhere, at any point in time, even on the move. IP-based sensors can connect to IP-based networks without the need for intermediate entities such as translation gateways or proxies.
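A minimal sketch of that typical M2M flow is shown below. The thresholds, message shapes, and component names are invented for illustration and do not come from any particular middleware product.

```python
# Illustrative M2M event flow: sensor reading -> monitoring middleware ->
# actuator command + operator alarm. All names and thresholds are made up.
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor_id: str
    temperature_c: float

class MonitoringMiddleware:
    def __init__(self, alarm_threshold_c: float = 80.0):
        self.alarm_threshold_c = alarm_threshold_c

    def handle(self, reading: SensorReading) -> None:
        """Check a reading for anomalies and fan out the reactions."""
        if reading.temperature_c > self.alarm_threshold_c:
            self.command_actuator(reading.sensor_id, "open_cooling_valve")
            self.notify_operator(
                f"{reading.sensor_id}: {reading.temperature_c} C exceeds "
                f"{self.alarm_threshold_c} C"
            )

    def command_actuator(self, sensor_id: str, action: str) -> None:
        print(f"[actuator] {action} near {sensor_id}")   # stand-in for a real command

    def notify_operator(self, message: str) -> None:
        print(f"[alarm] {message}")                      # stand-in for a real alert

# Example usage:
MonitoringMiddleware().handle(SensorReading("boiler-7", 93.5))
```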


o SCOPES :

By now, most of us in security probably realize that full compliance with the Payment Card Industry Data Security Standard (PCI DSS) represents a significant technical and operational challenge. Because of the expense and effort required to achieve and validate compliance, many merchants and service providers have sought to limit their scope of PCI DSS compliance by restricting the degree to which they actually store, process, or transmit cardholder data in-house. Common strategies include handing cardholder data off directly to acquirers (thereby minimizing the degree to which the data is handled in the merchant environment) and leveraging tokenization technology (replacing the primary account number with a surrogate token).

The process of deploying network access control (NAC) can be arduous, particularly considering the complexity of today's networks and the sheer number of devices connecting to them. Thus, when a company undertakes a NAC implementation, it is important to know that the software will be around long enough to justify the up-front expense, will have a sufficient number of policies to scan for many potential security flaws, and will actually enforce the policies it puts in place. This year's gold medal winner, Cisco's NAC Appliance, scored highly across the board, with a few areas where it received particularly high marks. Scalability was one of the categories in which the tool was most highly rated, with over two-thirds of respondents giving the product a score of four or five (on a scale of 1 to 5); this bodes well for enterprises looking to grow or expand their NAC coverage over time. Cisco's NAC product also scored highly for the range of policy checks it can perform and for its enforcement options when devices do not meet those policies.

In the midst of investigating and cleaning up a data breach affecting millions of users of its PlayStation Network, Sony is now announcing that the attack also affected servers of its Online Entertainment division. The Sony attack, which took place April 16 and 17, brought the PlayStation Network to a halt for more than a week, disrupting 77 million account holders. The Sony security breach exposed credit card information of about 10 million subscribers. In new details released Tuesday, Sony said the attack also exposed the data of an additional 24.6 million Sony Online Entertainment account holders.

The company said an affected database includes more than 12,000 non-U.S. credit and debit card numbers. Whether Sony's bad practices are an act of hubris or simply gross incompetence is hard to discern.
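Returning to the scope-reduction strategies mentioned at the start of this section, the sketch below illustrates the basic idea behind tokenization: the merchant keeps only a random surrogate token while the real card number lives in a separately secured vault. The vault interface and names here are our own simplification, not any vendor's API.

```python
# Toy tokenization vault: swap a card number (PAN) for a random surrogate
# token so the merchant's own systems never store the real PAN.
# Illustrative only -- a real vault adds encryption at rest, access control,
# auditing, and format-preserving token schemes.
import secrets

class TokenVault:
    def __init__(self):
        self._token_to_pan = {}   # token -> real card number (kept in the vault)

    def tokenize(self, pan: str) -> str:
        token = secrets.token_urlsafe(16)   # unguessable surrogate value
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_pan[token]    # only the vault can map back

# Example: the merchant stores only the token; the PAN stays in the vault.
vault = TokenVault()
token = vault.tokenize("4111111111111111")   # well-known test card number
print("merchant stores:", token)
```

Because the token is useless outside the vault, systems that hold only tokens can often be argued out of PCI DSS scope, which is precisely the attraction of this approach.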

17. Conclusions :

Cisco provides the technology and the reference designs that meet industrial requirements in security, availability, Quality of Service and flow isolation, as well as pervasive support for IPv6 within its product lines. With these comprehensive offerings, Cisco enables the integration of WSNs within the enterprise network today and prepares for the next generation of sensor networking and of IP networking in general. With an emphasis on reliability and high levels of security, Emerson Process Management provides standards-based, self-organizing industrial WSNs. This technology enables the wireless networking of a wide variety of Emerson's field monitoring devices, such as temperature, pressure, flow and corrosion sensors. With solid, proven standards and the commitment and leadership of companies such as Emerson and Cisco, industrial wireless sensors are a reality. The technologies and strategies to mitigate coexistence and security issues and to maximize network availability while maintaining a defined QoS are well documented and specified in the existing standards.

This work has also presented a framework for designing a hybrid model based on a secure mobile agent protocol and SNMP strategies. The work gives network administrators the flexibility of using either approach to exploit mobile agent technology in network management. The results show that, as the number of managed nodes increases, the proposed technique performs better than the centralized approach. On this note, this paper has demonstrated that it is possible to develop a secure mobile agent network management system using Java components and cryptography. To this end, the paper has presented reasonable detail at the design level.
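The hybrid idea described above can be illustrated with a small, purely conceptual sketch: below a configurable node-count threshold the manager polls devices in the usual centralized SNMP style, and above it the work is delegated to mobile agents that visit the nodes and return aggregated results. The threshold, class names, and stub bodies are assumptions for illustration, not the actual implementation described in the paper.

```python
# Conceptual sketch of a hybrid management strategy: centralized SNMP-style
# polling for small networks, mobile-agent delegation for large ones.
# All names, thresholds, and stub bodies are illustrative placeholders.

AGENT_THRESHOLD = 50   # assumed node count above which agents pay off

def poll_node_snmp(node: str) -> dict:
    """Stub for a per-node SNMP GET of a few standard objects."""
    return {"node": node, "sysUpTime": 0, "ifInOctets": 0}   # placeholder values

def dispatch_mobile_agent(nodes: list[str]) -> list[dict]:
    """Stub: an agent visits each node locally and carries back a summary."""
    return [poll_node_snmp(n) for n in nodes]                # stands in for agent travel

def collect_status(nodes: list[str]) -> list[dict]:
    if len(nodes) <= AGENT_THRESHOLD:
        # Centralized polling: one manager-to-node exchange per node.
        return [poll_node_snmp(n) for n in nodes]
    # Mobile-agent mode: ship the collection logic to the nodes instead,
    # trading per-node round trips for a single agent itinerary.
    return dispatch_mobile_agent(nodes)

# Example usage:
print(len(collect_status([f"10.0.0.{i}" for i in range(1, 11)])))
```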

18. REFERENCES :

Technet.microsoft.com
www.blackhat.com
www.share.org
www.findamasters.com
www.arubanetworks.com