IT security testing, a practical guide — Part 8: System scoring techniques


Transcript of IT security testing, a practical guide — Part 8: System scoring techniques


LIST USER1. There are masking options to select categories of user.

This enquiry facility also lets you do simple selections, e.g. list everyone with the SECURITY attribute, but not in a particularly convenient form. I usually find the SL report from the reporting facility, which I describe below, more convenient.

How do I know what users have been doing?

VM writes no event log like SMF, but ACF2 writes the equivalent of SMF records for key events (such as changes to the rule-sets) and for all security violations. When you write rules, you can also specify that certain events should be logged by ACF2.

The reporting utility will read the selection of SMF records you ask for and provide some standard reports. These fall into the following categories:

• changes to the ACF2 database entries in the period covered by the SMF records (logonids, rules, resources etc.);

• all requests for service which were denied by ACF2;

• any events which are being specially logged (by entries in the rules);

• failed attempts to access the system (password violations).

As I mentioned above, you can also (on the SL report) list selections of logonids from the ACF2 database. There are also cross-reference reports showing which user can access what and which disk areas can be accessed by whom. The easiest way to specify which report and what report parameters you want is to use the online reporter. If, at the CMS level, you enter the command ACFFS, you can choose the ACF2 reporting screen and specify the options you want. You can also direct the output to your terminal rather than straight to the print queue, which is a quick way of testing that what you've asked for is what you really want.

ACF2 therefore provides plenty of help to the auditor, and plenty of ways to secure a VM system. The next article looks in detail at which areas we might want to secure, how we can use ACF2 to do it and where, in practice, the loopholes are likely to be.

Alison Webb is an independent computer audit specialist. She divides her time between advising on security for specific applications, mainframe control and security reviews, general reviews of computer installations, and file interrogation. She also lectures and writes on computer audit topics.

IT SECURITY TESTING, A PRACTICAL GUIDE --- PART 8

SYSTEM SCORING TECHNIQUES

Bernard Robertson and David Pullen, PA Consulting Group

IT security testing sometimes involves comparing similar entities across the same or different sites or functional areas (e.g. reviewing the operation of an application at different computer centres). It is very difficult to compare the different areas consistently, especially if there are different reviewers, without developing a framework within which all the reviewers can operate. A scoring mechanism can provide a quantitative measure of performance and so facilitate a consistent review process.

This article introduces the concept of system scoring for IT security testing and suggests ways in which the scoring mechanism may be developed and the results presented.

Objectives and purpose

The prime objective of using a system scoring technique is to enable the results of security testing to be displayed clearly, consistently and concisely. System scoring techniques may also have the secondary objectives of facilitating:

• the decision making process;

• the development of a prioritized action plan;

• compliance measurement (e.g. compliance with standards);

• comparisons with other organizations within the same sector.

Furthermore, scoring techniques may:

• help to clarify thinking about the system processes or functions;

• help provide the best value for money by clearly showing the effect on the overall security level of implementing individual countermeasures (a sketch illustrating this follows the list);

• help to motivate people to act by clearly demonstrating the magnitude of the problems.
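To make the value-for-money point concrete, here is a minimal Python sketch, not taken from the article; every countermeasure name, score gain and cost figure is invented. It ranks candidate countermeasures by the improvement in overall weighted score per unit of cost:

```python
# Hypothetical countermeasures: (name, improvement in overall weighted
# score if implemented, cost). All figures are invented for illustration.
countermeasures = [
    ("Fire-resistant safe for envelopes", 9, 3000),
    ("5% check of sealed envelopes",      4,  500),
    ("Automated cheque reconciliation",  15, 2000),
]

# Rank by score gain per unit cost, i.e. best value for money first.
for name, gain, cost in sorted(countermeasures,
                               key=lambda c: c[1] / c[2], reverse=True):
    print(f"{name:34s} gain={gain:2d} cost={cost:4d} "
          f"gain/cost={gain / cost:.4f}")
```

Sorting on gain per unit cost surfaces cheap, high-impact controls first, which is precisely the 'best value for money' argument made above.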

Scope

Scoring techniques may be used for the following comparisons:

• the same process across different sites or functional areas;

• different processes within the same functional area;

• the improvement in security with the cost of implementation;

• the actual security level with that normally found in the sector;

• the actual security level with the expected/required level of security.

The factors which may be taken into account in the development of a scoring system for security testing include:

• the cost of implementation;

• the level of risk;

• the level of threat/potential loss;

• the level of user inconvenience;

• the level of potential embarrassment to the organization;

• the security sensitivity of a function or process;

• the level of staff awareness of security;

• the time required to implement countermeasures.

The scoring process

The scoring process involves first gaining a thorough understanding of the system(s) to be scored and then:

• identifying the factors that should be scored;

• weighting and scoring the factors;

• specifying the data to be collected (data requirements);

• collecting the data;

• undertaking the comparison;

• displaying the results.

Each of these phases is described in detail in the following sections.
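The phases can be strung together as a small pipeline. The following Python sketch is ours, not the authors'; the factor names, weights and site responses are invented purely to show the shape of such a system:

```python
# Factors to be scored, each with a weighting (scale 1-5); all invented.
FACTORS = {
    "delivery records": 2,
    "cheque reconciliation": 5,
}

def collect_data(site):
    # In practice this comes from questionnaires or interviews; here it
    # is a hypothetical lookup of scores (0/1 for absent/present).
    responses = {
        "Site A": {"delivery records": 3, "cheque reconciliation": 1},
        "Site B": {"delivery records": 1, "cheque reconciliation": 0},
    }
    return responses[site]

def overall_score(site):
    # Weight each factor's score and sum: the comparison step.
    data = collect_data(site)
    return sum(weight * data[name] for name, weight in FACTORS.items())

# Display the results (here, the crudest possible display).
for site in ("Site A", "Site B"):
    print(site, overall_score(site))
```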

Identification of factors to be scored

Identifying the factors that need to be scored is not an easy task and relies to a great extent on the experience of the testers. The first task is to identify the different functions/processes within the system and then to determine the controls and procedures which secure the system against identified threats. Experience indicates that this process is best accomplished by using a combination of paper analysis and brainstorming.



[Figure 1: Main processes in the example payment system. The diagram shows: Delivery, Main Store, Immediate Store, Batch Job, Output Handling/Print, Shredding and Post.]

The process is illustrated here by referring to the testing of a payments system. Figure 1 shows the functions/processes involved in producing cheques from the system. For the purposes of this example, the system has been limited to the section of the process from the point at which the payment batch job begins through to delivery to the post van.

The process consists of two main parts:

• delivery and storage of the blank cheques;

• running of the job which produces the printed cheques, which are then collated and sent to post.

There are a number of points in the overall process where security controls and procedures may exist, including the following:

• records of delivery (i.e. number of cheques);

• management check of the main secure stationery store (i.e. a check that the records of stock in the main store are correct);

• records of movement of cheques to the immediate store (which is used to store the cheques ready for printing);

• records of the number of items used in the print run;



• records of the number of items wasted in the print run;

• the number of items wasted in output hand- ling;

• a 5% check of all the sealed envelopes after output handling;

• a count of the number of envelopes delivered to post;

• management checks that the two items above have been correctly carried out;

• securing of the envelopes awaiting the post van;

• management check that waste is correctly recorded;

• records of the number of items shredded;

• management check that shredding has been correctly completed;

• reconciliation of all the cheques, i.e. Number in stores = Old number in stores + Number delivered - Number sent out - Number destroyed.
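The reconciliation in the final bullet is a simple identity, so it lends itself to an automated check. A minimal sketch, with invented figures and our own variable names:

```python
def cheques_reconcile(old_stock, delivered, sent_out, destroyed, counted):
    """True if: counted = old stock + delivered - sent out - destroyed."""
    return counted == old_stock + delivered - sent_out - destroyed

# Invented example: 10 000 held, 5 000 delivered, 4 200 issued, 60
# shredded; a physical count of 10 740 therefore reconciles.
print(cheques_reconcile(10_000, 5_000, 4_200, 60, 10_740))  # True
```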

Weighting the factors

Each of the identified factors is then weighted in terms of its importance relative to the whole process. In this example, the reconciliation of all the cheques was given a weighting of 5 because it was considered to be the most important procedure. This reconciliation process would identify if any cheques had been mislaid or stolen anywhere in the entire process. All the other controls and procedures were given weightings of one or two depending on their relative importance. For example, the count of envelopes delivered to post was given a weighting of one. Although the count of envelopes is a useful check, there is no guarantee that it is being done accurately.

Scoring the factors

Factors may be scored as being present or absent or, in some cases, on their level of effectiveness. For example, the secure storage of the envelopes in the case of the payment system could be scored according to the level of security afforded by the store. A fire-resistant safe might score 3 (on a scale of 1 to 3) whereas a locked cupboard may score 1.
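Combining a factor's weighting (1 to 5) with its score (0/1 for absent/present, or 1 to 3 for effectiveness) gives its contribution as weight times score. A small sketch; apart from the reconciliation weighting of 5, the envelope-count weighting of one, and the safe/cupboard scores taken from the text, the values are invented:

```python
# (factor, weighting 1-5, score). Present/absent factors score 0 or 1;
# effectiveness-rated factors score 1-3. Entries not given in the text
# are invented for illustration.
factors = [
    ("cheque reconciliation",      5, 1),  # present
    ("secure envelope store",      2, 3),  # fire-resistant safe scores 3
    ("count of envelopes to post", 1, 1),  # present, weighting of one
]

total = sum(weight * score for _, weight, score in factors)
print(f"overall weighted score: {total}")  # 5*1 + 2*3 + 1*1 = 12
```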

Specifying the data to be collected

To obtain consistent and accurate data input, the framework for data collection should be formulated explicitly across the different areas to be examined. In most cases it will be necessary to develop a questionnaire, which may be either sent to the appropriate individuals for completion or used as the basis for an interview session. It is essential that the wording of the questions is unambiguous.
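One way to keep the wording unambiguous and the data consistent is to tie every question to the factor it scores and to an explicit answer scale. The structure below is our sketch; the wording, field names and scales are all assumptions:

```python
# Each questionnaire entry maps one factor to one unambiguous question
# and a fixed answer-to-score table; all wording here is invented.
QUESTIONNAIRE = [
    {
        "factor": "secure envelope store",
        "question": "Where are sealed envelopes held while awaiting "
                    "the post van?",
        "answers": {"fire-resistant safe": 3, "locked room": 2,
                    "locked cupboard": 1, "open shelving": 0},
    },
    {
        "factor": "cheque reconciliation",
        "question": "Is a full reconciliation of cheque stock performed "
                    "after every print run?",
        "answers": {"yes": 1, "no": 0},
    },
]

def score_response(entry, answer):
    # An answer outside the listed scale raises an error rather than
    # being silently scored.
    return entry["answers"][answer.lower()]

print(score_response(QUESTIONNAIRE[0], "locked cupboard"))  # 1
```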

Collecting the data

When the questions are asked in an interview session the reviewer/tester should ensure that the respondent clearly understands the question and should confirm the response. Wherever possible the reviewer/tester should check that the responses actually agree with what happens in practice. The data collection phase depends on asking the right questions of the correct people. It is therefore important to ensure that these people are available.

During the data collection phase it will often emerge that there are some factors that had not been considered in the original analysis. Factors which arise in this way should be incorporated into the scoring system as soon as possible. The data collection requirements should be updated and distributed to all the testers.

Undertaking the comparison

Once all the data has been collected it should be input into the scoring system and the overall scores calculated.



It may be necessary to make some adjustments to the scoring system and the various factors involved if the results do not appear to reflect reality accurately. The scores will indicate the areas which perform best; however, a better understanding of the reasons for the final scores will be obtained once the results have been analysed and displayed.

Displaying the results

The raw output from a scoring system will be a set of results which can be displayed in many different ways. Typical display methods are: bar charts; matrices; pie charts; and tables.

The results may be displayed:

• as end results only;

• relative to the average;

• relative to a sector/market norm;

• by process/functional area or system;

• or in any other way that aids understanding.

Graphical displays such as bar charts tend to provide a good overall comparison and help identify areas which are particularly good or bad. Where possible colour should be used to aid comprehension. More detailed information (which will usually be in a tabular form) may be used to back up the overall picture or help identify the cause of any identified security weaknesses.
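As a concrete illustration of the bar-chart idea, this sketch prints a crude text bar chart of overall scores by site and flags anything below the average; the sites and percentages are invented:

```python
# Invented overall scores per site, as a percentage of the maximum.
results = {"Site A": 82, "Site B": 64, "Site C": 91, "Site D": 47}
average = sum(results.values()) / len(results)

for site, score in sorted(results.items(), key=lambda r: -r[1]):
    bar = "#" * (score // 2)   # one '#' per two percentage points
    flag = "  <- below average" if score < average else ""
    print(f"{site:8s} {score:3d}% {bar}{flag}")
```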

Hints

The use of scoring techniques in security testing is not without its problems. By its very nature there must be some subjectivity in a scoring system which is very dependent upon the experience of the testers. The level of subjectivity may be reduced by involving several experienced testers in the scoring process and by continually asking the question 'what does each factor really contribute to the overall level of security?'.

A common mistake in developing scoring systems is to develop complex systems which are difficult to understand, difficult to score and difficult to modify. Where possible, weighting factors should be kept to a scale of 1 to 5 and scores should be kept to a scale of 1 to 3.

Wherever possible, the scoring system should be automated to minimize the onerous task of calculating the overall scores. Experience indicates that a simple spreadsheet, which can be modified quickly to cope with changes in the scoring system, is the best mechanism.
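The spreadsheet advice translates directly to a short script over a tabular file: the weights and scores live in the data, so changing the scoring system means editing the file, not the code. A sketch, assuming a CSV export with site, factor, weight and score columns (the file name and layout are our assumptions):

```python
import csv
from collections import defaultdict

# Assumed layout of scores.csv (one row per factor per site):
#   site,factor,weight,score
totals = defaultdict(int)
with open("scores.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["site"]] += int(row["weight"]) * int(row["score"])

# The comparison: overall weighted score per site.
for site, total in sorted(totals.items()):
    print(f"{site}: {total}")
```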

The last article in this series describes the activities to be performed after the final results report of an IT security testing programme has been delivered.

Bernard Robertson is a principal consultant in the security consulting practice of PA Consulting Group. He has extensive experience in performing a range of security testing programmes for public and financial sector clients. Bernard is a regular speaker on IT security issues and holds degrees in economics and business administration. David Pullen is also a principal consultant within the same security consulting practice. Over the last five years he has conducted several security testing projects, including one lasting two years with a team of 15 security testers. David is a physics graduate and has produced security testing educational material.

NEWS

Digital, Logica and Barclays still switched on to FM

Despite their unsuccessful bid for the Inland Revenue's £250 million-a-year outsourcing contract, Digital, Barclays Bank and Logica are planning to make their consortium, Digital Alliance, an independent FM supplier. According to a report in Computing, talks between the three companies are likely to continue for a month. The
