
Identifying and Visualizing the Malicious Insider Threat Using Bipartite Graphs

Kara Nance Department of Computer Science University of Alaska Fairbanks

[email protected]

Raffael Marty Loggly, Inc.

[email protected]

Abstract

Government agencies and organizations are just beginning to harness the powerful capabilities of visualization to aid in the prevention, detection, and mitigation of security threats. Most advances in this area have focused on protecting an agency or organization from malicious outsiders. While not a new threat, the malicious insider has recently earned increased focus. This paper investigates methods of classifying and visualizing insider behavior to establish a pattern of acceptable actions based on workgroup role classifications. It then discusses actions as related to identified precursors of malicious activities and provides a simplified example of how visualization can be used to help detect this threat. When visualized using bipartite mappings, behaviors outside the norm can be easily identified and provide an important step in the process of highlighting areas and individuals for further investigation.

1. Introduction

There is a significant body of research in behavioral theory studying the insider threat and attempting to quantify and qualify the characteristics of the wide range of malicious insiders as well as their motivations. Whether their objectives be espionage, sabotage, terrorism, embezzlement, extortion, bribery, or corruption [1], detecting the activities of the malicious insider is critical since this threat is considered more difficult to mitigate than external threats [2]. In a study related to electronic crime involving 500 security and law enforcement executives, participants identified “current or former employees and contractors as the second greatest cyber security threat, preceded only by hackers.” [3]

What motivates an insider to operate outside of her traditional role? The old FBI adage lists money, ideology, compromise/coercion, and ego as potential motivating factors. In addition to characteristics associated with overt malicious insiders, there are also insider threats resulting from ignorance and apathy. Regardless of the contributing factors, the associated impact can be significant. New methods for identifying and mitigating the insider threat must be developed. Visualization can provide a means to present a plethora of log and audit data in a single digestible format. It provides a palatable means to sip from the fire hose of information that comprises an organization’s digital footprint.

A number of research approaches focus on profiling the insider threat from a theoretical standpoint, or alternatively, from an applied standpoint. Theoretical profiling can be used to develop an understanding of the characteristics and behaviors of groups and individuals, whether positive or negative. Applied profiling can be used to identify specific individuals or groups who are acting outside of an associated theoretical norm. The more thorough the underlying theoretical profiling foundation is, the more accurate the applied profiling techniques that rely on this foundation will be.

The following simplified example walks through a visualization technique that uses applied profiling. The intent is to present a description of an IT approach to be used within an organizational context where a precursor set particular to that organization can be identified and used to visualize potential insider threats. The approach is based on simplified use cases identified through a theoretical approach to describing users, use cases, and associated misuse cases within an organization. In order to develop our simplified use case diagrams, we need a means to group users based on their respective roles within an organization. Since government and corporate management structures vary greatly, from functional to matrix to project-based, we will use the term workgroup roles to group employees within an organization based on the behavioral characteristics expected for their work responsibilities.


2. Workgroup roles

While not always easy to delineate, the definition of user roles and workgroup roles can be viewed as a type of theoretical profiling. Within an organization, users take on roles. Each role exhibits a specific set of normal, or expected, behaviors. Deviations from these roles can indicate a situation that merits further investigation. Consider a simple example in which a user’s department defines that user’s role. For this example, consider four departments that define the user groups: legal, sales, engineering, and marketing. In addition, consider the pool of resources that users can potentially interact with during the course of their work week. There is a wide range of resources available within the organization. Accessing a particular resource, or using a resource in a particular way, may deviate from the expected behavior associated with an individual user’s role.

2.1 Visualizing workgroup roles

The ability to classify and reclassify users into workgroup roles provides a means to identify and determine normal or expected behavior for workgroups. An initial classification system is easy to derive from a method based on use case diagrams. While originally developed for defining, verifying, and reaching agreement on product scope [4], use case diagrams are easily extended to reflect actors and actions within a system. Consider an organization with the defined user roles of Legal, Engineering, Marketing, and Sales. These users might all share common resources with each other, including email servers, printers, customer databases, scanners, copiers, facsimile machines, web servers, cell phones, routers, firewalls, patent databases, and wireless access points, as well as many others. In addition, individuals act within as well as outside of the organizational boundary. Building on this example, a simplified use case diagram for the Engineering and Legal workgroups is shown in Figure 1.

Figure 1: Example use case diagram identifying actions for Legal and Engineering roles.


Figure 1 demonstrates some actions that are considered within the norm for users classified within the Engineering and Legal workgroups. The actions can then be mapped to associated resources and logging systems. In order to identify users that are acting outside of their workgroup roles, the logging systems need to be configured to record user interactions and activities. For each identified activity, the associated resources required to accomplish the task need to be identified, as well as methods to log interactions with those resources, as shown in Table 1. Of course, the information obtained is entirely dependent on the logging capabilities and the configurations that the organization has set up for logging. In some cases, the desired visibility will require the reconfiguration of data sources to log corresponding activities.
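As a rough illustration of how such a mapping might be encoded for automated comparison (the role, action, and log-source names below are illustrative assumptions, not a reproduction of Table 1), a workgroup's expected actions and the log sources that would record them can be kept in simple lookup tables:

```python
# Minimal sketch: map workgroup roles to expected use case actions, and actions
# to the resources/log sources that would record them. All names are
# illustrative assumptions, not the paper's Table 1.
EXPECTED_ACTIONS = {
    "Legal": {"access patent database", "print document", "send email"},
    "Engineering": {"access source code", "print document", "send email"},
    "Marketing": {"access customer database", "print document", "send email"},
    "Sales": {"access customer database", "print document", "send email"},
}

LOG_SOURCES = {  # where each action could be observed (assumed examples)
    "access source code": ["version control log"],
    "access patent database": ["database audit log"],
    "access customer database": ["database audit log"],
    "print document": ["printer log", "print spooler log"],
    "send email": ["mail server log"],
}

def is_within_norm(workgroup: str, action: str) -> bool:
    """Return True if the action is expected behavior for the given workgroup role."""
    return action in EXPECTED_ACTIONS.get(workgroup, set())

print(is_within_norm("Legal", "access source code"))  # False -> merits review
print(LOG_SOURCES["print document"])                  # logs to check for this action
```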

2.2 Log files

Many applications and systems have logging capabilities that can potentially be set up to record activities with various degrees of granularity, ranging from an entire organization, to a department, down to an individual user. Log file conventions and capabilities vary greatly. They provide the potential to track important characteristics including timestamps, source, destination, users, actions, etc. Log files can be viewed individually or correlated to provide specific information about an event or a set of events being investigated. In addition to individual log entries, aggregate entries can provide meaningful information, especially with respect to time trends and data volumes, to further detect deviations from normal behavior. While the focus of this paper is on visualizing this information, it is important to note that the individual and correlated log entries can be part of a feedback loop that further refines the definition of normal and acceptable behavior within workgroup definitions and also for individuals within workgroups.
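As a minimal sketch of the kind of aggregation described above, assuming log records have already been parsed and correlated into a common form (the field names and sample entries are hypothetical), per-user daily volumes can be tallied as follows:

```python
# Sketch: aggregate correlated log entries by user and calendar day to expose
# volume trends. The entry format and sample data are hypothetical.
from collections import Counter
from datetime import datetime

entries = [
    {"user": "engineer3", "action": "print", "timestamp": datetime(2011, 3, 10, 22, 15)},
    {"user": "engineer3", "action": "print", "timestamp": datetime(2011, 3, 10, 22, 47)},
    {"user": "engineer1", "action": "print", "timestamp": datetime(2011, 3, 10, 14, 5)},
]

# Count events per (user, day); unusually large counts can feed the feedback loop.
daily_volume = Counter((e["user"], e["timestamp"].date()) for e in entries)
for (user, day), count in sorted(daily_volume.items()):
    print(f"{user} {day}: {count} events")
```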

Repeated anomalous behaviors that are investigated and found to be consistent false positives may stimulate the definition of a new workgroup class or subclass into which a particular user or group of users falls.


3. Precursors

Use case diagrams provide us with a means to represent workgroup roles and to identify seemingly acceptable behaviors. There is another class of behaviors that can be viewed as precursory to a potential malicious action by an insider. These so-called precursors can be used to identify potential misuse cases. Precursors are generally more specific than use case actions, as they may reflect an unusual access pattern, application of a use case action from another workgroup, or inappropriate application of a use case action, among others. They can be assigned values based on potential severity. Listings of precursors with associated ratings and detection methods have been identified [5] and can be easily adapted to individual corporate environments. Maloof and Stephens [6] call these precursors “detectors”. They built a system of 76 detectors in conjunction with a Bayesian network to assign an overall threat score to an organization. Our visualization approach could extend such a system by augmenting its detection capabilities and incorporating them into a decision support system guided by an interactive human analyst, instead of simply relying on the Bayesian network.

To identify precursors for a particular environment, it helps to identify motivators that might trigger malicious insider activity. Harrison identified three motivators: greed, political motivation, and anger. When evaluating greed as a motivator, it is helpful to identify the artifacts and corporate assets that can be used to enrich users. These would include sellable items such as credit card and patent information, as well as other frequent targets of industrial espionage. Political motivations are more difficult to tie to artifacts, but contacts and communication patterns, including telephone records, email, and websites visited, may be indicators of this sort of activity. Anger can potentially lead to sabotage activities. In this case it is easiest to identify associated triggers which may create or magnify anger. These might include termination, demotion, excessively increased or decreased responsibility, or significant lack of management support [7].

Defining the insider perimeter is a challenging problem that merits the determination of the degree of “insiderness” associated with an individual, rather than the binary definition that would distinguish a clear insider/outsider boundary. [8] In addition to classifying employees into workgroups, we need to find a method to classify former employees into this construct. There are many ways that they can be incorporated into the proposed visualization method.

One approach would be to classify all terminated employees into a separate workgroup from which any interaction with system resources is likely a misuse case. Activities of former employees generally should not be seen after they have completed the transition from insider to outsider. Further special cases could be developed to handle employees who have switched jobs within the organization, transferred to other locations, retired, been fired, died, etc., in order to deal with such circumstances as they occur.

3.1 Introduction to precursors

There is a significant body of research in behavioral theory studying the insider threat and attempting to quantify and qualify the characteristics of the wide range of malicious insiders. Let’s extend the use case actions defined in Figure 1 and use them as a basis for developing visualizations that help identify potential misuse cases using precursors. For this example, we will identify twelve specific precursors and visualize them using a bipartite mapping.

While precursors can vary greatly across and within specific domains, some examples might include anomalous printing activity including 1) printing resume, 2) printing during off hours, and 3) excessive printing; accessing assets, including 4) source code, 5) patent information, and 6) financial records; interesting email trends including 7) mass emails, and 8) large outbound emails; using protection such as 9) information encryption; unusual web activity such as 10) visiting job web sites or 11) social networking sites; and excessive uses of 12) network reconnaissance, such as ping. (Excessive ping can be a sign of an employee trying to gather information about local machines on the network that she can then further fingerprint and exploit.)
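To make the precursor set concrete, these twelve examples could be kept in a simple catalogue with severity weights, as discussed above; the sketch below uses illustrative numeric values that are assumptions, not ratings taken from [5]:

```python
# Sketch: the twelve example precursors as a catalogue with illustrative
# severity weights (the numeric values are assumptions).
PRECURSORS = {
    "printing resume": 3,
    "printing off hours": 2,
    "excessive printing": 2,
    "accessing source code": 1,
    "accessing patent information": 3,
    "accessing financial records": 3,
    "mass emails": 1,
    "large outbound emails": 2,
    "information encryption": 2,
    "visiting job web sites": 2,
    "visiting social networking sites": 1,
    "network reconnaissance (ping)": 3,
}

def score(triggered):
    """Sum the severity of the precursors a user has triggered."""
    return sum(PRECURSORS.get(p, 0) for p in triggered)

print(score({"printing off hours", "printing resume", "visiting job web sites"}))  # 7
```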

3.2 Precursor analysis

Figure 2 shows a bipartite mapping of all of the workgroups and the precursors that they triggered for the example scenario. Because we have defined normal use case actions, we can color the edges that are inconsistent with normal workgroup behavior by using a lookup table to determine if an activity is within identified norms. We can further use color to associate each edge with a particular workgroup by assigning unique colors to workgroups. Viewing the entire graph provides an indication of areas where additional investigation might be appropriate. Note that the black lines, which indicate behavior within established norms, can be displayed (as in Figures 2 and 3) or hidden. Hiding them clarifies the potential misuse behavior, while including them provides the normal baseline for quick visual comparative analysis.

Figure 2: Visualization of individuals, workgroups and precursors showing normal and precursor behavior.
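A minimal sketch of how such a colored bipartite graph could be assembled, assuming the networkx library, an illustrative norm lookup table, and (user, workgroup, precursor) tuples extracted from the logs, is shown below; it illustrates the idea rather than reproducing the authors' implementation:

```python
# Sketch: build a bipartite graph of users and the precursors they triggered,
# coloring edges that fall outside the workgroup's expected behavior.
# Requires the networkx library; node names and the norm table are illustrative.
import networkx as nx

NORMAL = {  # precursors considered acceptable for each workgroup (assumption)
    "Engineering": {"accessing source code"},
    "HR": {"printing resume"},
}

observed = [  # (user, workgroup, precursor) tuples derived from log analysis
    ("Engineer 3", "Engineering", "accessing source code"),
    ("Engineer 3", "Engineering", "printing off hours"),
    ("Engineer 3", "Engineering", "visiting job web sites"),
]

G = nx.Graph()
for user, group, precursor in observed:
    G.add_node(user, bipartite=0)
    G.add_node(precursor, bipartite=1)
    # Black edges mark in-norm behavior; red edges mark candidates for review.
    color = "black" if precursor in NORMAL.get(group, set()) else "red"
    G.add_edge(user, precursor, color=color)

for u, v, data in G.edges(data=True):
    print(u, "->", v, data["color"])

# For plotting, the per-edge colors can be passed to nx.draw_networkx via
# edge_color=[d["color"] for _, _, d in G.edges(data=True)].
```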


Figure 3: Activity of the Engineering workgroup along with identified precursor activity.

Consider just the Engineering workgroup actions as shown in Figure 3. Examination of this visualization indicates that while all Engineering workgroup members access the source code, and two are involved in mass mailings, Engineer 3 has demonstrated three additional unique precursors that merit further investigation.

While this information exists in the logs and is part of Engineer 3’s digital footprint, visualization provides a one-step method to quickly identify potential misuse cases for further investigation. Since the focus of this paper is to demonstrate this method of visualizing precursors as a decision support system tool, rather than the associated digital forensics, we will limit our discussion of Engineer 3’s precursor activities to one of these observed precursors. Assume that the link from Engineer 3 to Printing Off Hours was added to the graph visualization file based on an automated search of the printer log file shown in Figure 4, looking for print jobs that originated after 5pm and before 8am. (Note that some log file fields have been deleted from Figure 4 for clarity.) Assume that the use case diagram (or another means, such as a graph clustering algorithm) was used to create a table that contains information about employees, workgroups, workgroup color codes, and acceptable workgroup behavior with respect to precursors. This foundational information can be compared with the behaviors observed in the log files to generate a visualization as shown in Figure 2. Note that printer logs are a great starting point as they are relatively low volume, and thus dictionary searches for strings within the job name can be easily accomplished without a noticeable performance hit.

Figure 4: Example extract of a printer log.
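A minimal sketch of the automated off-hours search described above follows; it assumes a simplified comma-separated extract of the printer log (record ID, timestamp, owner, job name), with record IDs echoing those discussed in the text but timestamps and job names chosen purely for illustration:

```python
# Sketch: scan a simplified printer-log extract for jobs submitted after 5pm or
# before 8am. The CSV layout and sample rows are assumptions standing in for
# the organization's actual printer log format.
import csv
from datetime import datetime
from io import StringIO

LOG = StringIO("""\
6675,2011-03-10 21:42,engineer3,customers.xls
6677,2011-03-10 21:55,engineer3,parser.c
6682,2011-03-10 22:10,engineer3,resume.doc
""")

def off_hours(ts: datetime, start_hour: int = 8, end_hour: int = 17) -> bool:
    """True if the timestamp falls outside the 8am-5pm working window."""
    return ts.hour >= end_hour or ts.hour < start_hour

for rec_id, ts, owner, job in csv.reader(LOG):
    when = datetime.strptime(ts, "%Y-%m-%d %H:%M")
    if off_hours(when):
        print(f"Printing Off Hours precursor: record {rec_id}, {owner}, '{job}'")
```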


Note that the printer log shown in Figure 4 includes the triggering record (ID 6675) for Printing Off Hours as well as neighboring records. Using these log entries, we can make some additional observations about the Printing Off Hours precursor that appears in the visualization associated with Engineer 3.

Observations of the printer log file indicate that Engineer 3 has indeed been printing outside of normal working hours, which are 8am to 5pm. This segment of the log file shows that numerous documents were printed beginning in the evening of March 10. Examination of the Job Names field shows that a number of the files printed were source code files (ID 6677–6680). In addition, Engineer 3 has printed a file containing the string “resume” three times (ID 6682), followed by three unique printings of a file called “coverletter.doc” (ID 6683–6685), as well as a customer file (ID 6675). While the visualization is a helpful starting point, it does not provide an inherent mechanism to attach enough meaning to the graph to determine if the insider activity is malicious or benign. This is where investigation of the raw log files or use of ancillary visualization tools can provide more context and information.

3.3 Graph scalability

While bipartite graphs are a great way of visualizing relationships between workgroups and precursors, there are many visualization techniques that can be applied in order to meet the needs of various user groups. For example, in a large organization with more than 50 workgroups and/or precursors to be visualized, the graph may appear overloaded and the visualization could be difficult to understand. In order to address these scalability issues, there are various approaches that can be taken:

• Aggregation can be applied to the nodes in the graph. Instead of showing each individual within the workgroups, the individuals are aggregated into a single workgroup node. For example, instead of showing each Engineer in the Engineering workgroup, only a single Engineering node is shown.

• Aggregation can also be applied to the precursor nodes. Instead of showing each individual precursor, groups of precursors are aggregated into a single node.

• Displays can be limited to anomalous behavior rather than showing all of the behavior in the same graph.

In addition to static displays, there are interactive visualization techniques that allow users to control the scope of the visualization and collapse and expand individual nodes into their aggregated and unaggregated views. That way, the user can quickly pivot between aggregated nodes, which help unclutter the visualization, and the individual nodes, which provide the detail necessary to identify anomalous behavior. Hertzog has implemented a similar aggregation for parallel coordinates [9], where he groups individual values into larger groups. His approach is consistent with the bipartite graph aggregation discussed in this paper.
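A minimal sketch of the node aggregation described above, assuming the same kind of (user, workgroup, precursor) records used earlier (all names illustrative), collapses individuals into one node per workgroup:

```python
# Sketch: collapse individual users into a single node per workgroup so large
# graphs stay readable. Input records and aggregation logic are illustrative.
from collections import defaultdict

observed = [
    ("Engineer 1", "Engineering", "accessing source code"),
    ("Engineer 2", "Engineering", "mass emails"),
    ("Engineer 3", "Engineering", "printing off hours"),
    ("Legal 1", "Legal", "accessing patent information"),
]

aggregated = defaultdict(set)
for user, group, precursor in observed:
    aggregated[group].add(precursor)  # one node per workgroup, union of its edges

for group, precursors in sorted(aggregated.items()):
    print(group, "->", sorted(precursors))

# An interactive view could expand a workgroup node back into its individual
# users on demand by filtering `observed` on the selected group.
```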

In addition to potential enhancements to the bipartite graph approach demonstrated here, there are many other visualization techniques that could be used to represent the described information, including treemaps [10], which are particularly well-suited for visualizing a large number of elements in a compact way. A customizable dashboard approach that meets the needs of the human users would allow the most flexibility in the design of the decision support system interface.

3.4 Precursor investigation

Visualizing precursors, as with most objective evaluations from log files, cannot reflect intent. Considering insider actions and their consequences in isolation, without considering intent, might prevent effective and appropriate responses [11]. There is a level of uncertainty introduced by the limitation of the information provided to the defenders of the organization, and this uncertainty affects the judgments of the individuals analyzing the situation [12]. The previous printer log records could correspond to a wide range of circumstances, including each of the following:

• Scenario 1: Engineer 3 is a malicious foreign agent stockpiling source code files and current client information. She has now obtained the list of targeted resources from this agency and is seeking a new job with another agency to continue her exploits. She has applied for three active positions, which identify her future targets.

• Scenario 2: Engineer 3 is an employee who is printing source code after hours to work on it while she prints resumes and cover letters for her adult child to help him find a job so he won’t move back in with her.

• Scenario 3: Engineer 3 is an employee who has printed the customer list to identify inactive clients. She has then reviewed source code she created for them and has sent cover letters and information to them with directions about how to “resume” their relationship with her corporation.

• Scenario 4: Engineer 3 is a dedicated employee who is working extended hours on her source code project while applying for three open positions within the agency to which she has a long-term commitment and in which she plans to advance. In order to advance, she needs to land a new client and is reviewing current client profiles in order to understand the demographic and identify new potential clients.

• Scenario 5: Engineer 3 left her workstation without logging out and the janitor is using her workstation for malicious purposes.

Further, consider some additional observations based on Figure 4:

1. Observation 1: Record 6676 shows files with the string “resume” being printed, which triggers two precursors: Printing Off Hours and Printing Resume.

2. Observation 2: Printer Record 6681 is printing from a job website, which triggers another precursor: Job Web Site.

3. Observation 3: There are missing values in the print job sequence number between Records 6672 and 6675.

The previous observations could be examples of false positives and false negatives. More detailed information about the actual circumstances can be used to tune the system so that it provides new visualizations that more accurately represent the malicious insider threat. Observation 1 could be a false positive. In Record 6676, the printer log owner field indicates that the job owner is part of the Human Resources (HR) workgroup. This false positive can be used to tune the visualization system to refine the trigger, based on the observation that the HR workgroup is expected to print resumes as part of acceptable workgroup behavior. For example, printing files containing the string “resume” in the title could be considered a precursor unless the owner is a member of the HR workgroup.
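A minimal sketch of such a refined trigger, with the workgroup name and keyword test as illustrative assumptions, might look like this:

```python
# Sketch: refine the "Printing Resume" trigger so that members of the HR
# workgroup do not raise the precursor. Workgroup membership and the keyword
# test are illustrative assumptions.
def printing_resume_precursor(job_name: str, owner_workgroup: str) -> bool:
    """Trigger only when a non-HR user prints a file whose name contains 'resume'."""
    return "resume" in job_name.lower() and owner_workgroup != "HR"

print(printing_resume_precursor("resume.doc", "HR"))           # False: expected HR behavior
print(printing_resume_precursor("resume.doc", "Engineering"))  # True: precursor triggered
```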

Observation 2 is a similar situation, triggering the precursor Job Web Site (although the event would likely be detected through examination of web browsing history rather than from the printer log; this example shows, however, that accessing a web site that triggers the Job Web Site precursor can also be detected in the printer logs). As with the previous observation, the fact that this action originated within the HR realm is likely to mean that it is a false positive. The associated trigger within the web browsing log analysis could be refined to exclude HR personnel from triggering the precursor.

Observation 3 could be a false negative, i.e., a threat that has not been identified in the visualization system and that requires a more thorough digital forensics analysis in order to tune the visualization. The missing log records could potentially be a sign of log tampering or a logging misconfiguration, or could be a benign result of a power failure. In any case, this does not trigger an identified precursor link directly, yet it could potentially be a very real insider threat.

Printing a document, or attempting to print one, can be recorded in many potential logs within a system. The previous example is limited to looking at the printer log file. In addition to the printer log file, information could be recorded in operating system log files, temp files, spooling directories, as well as within various network traffic logs throughout the environment. While it may not be efficient to regularly scan the wide variety of log files associated with printing, missing record numbers may merit further investigation. Abnormalities in log files that are expected to have sequentially ordered records, such as many printer logs, could be used as a trigger to search for associated information about the missing records. This automated or manual investigation, as part of decision support system development, could then result in a precursor link being identified through alternative means. The technological knowledge and capabilities of malicious insiders will vary greatly. It is unlikely that the average user will understand the extent of the digital footprint associated with an individual task, even one as innocuous as printing a document (or deleting entries from a printer log). As such, even if she tampers with the evidence, such tampering provides new evidence that can be coupled with existing digital evidence to recreate events and trigger precursors.
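A minimal sketch of such a sequence-gap check, using illustrative record IDs that echo the gap discussed in Observation 3, could look like this:

```python
# Sketch: flag gaps in a log whose records are expected to carry sequential IDs
# (as with many printer logs). The ID list is illustrative; a gap such as the
# one between records 6672 and 6675 would be reported for follow-up.
def find_gaps(record_ids):
    """Yield (previous_id, next_id) pairs where the sequence skips numbers."""
    ordered = sorted(record_ids)
    for prev, nxt in zip(ordered, ordered[1:]):
        if nxt - prev > 1:
            yield prev, nxt

ids = [6670, 6671, 6672, 6675, 6676, 6677]
for prev, nxt in find_gaps(ids):
    print(f"Missing records between {prev} and {nxt}: possible tampering or misconfiguration")
```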

In addition to refining the system and identifying new triggers for precursor links in the visualization, the identification of false positives can also be used to refine the set of workgroups or minimize similar false positives in the future. For example, if an individual consistently undertakes bridging work between two departments, and thus triggers precursors that are associated with both groups, a new workgroup could be created for this individual (or individuals) to minimize associated false positives in the future and to more accurately represent the organizational structure from a functional, rather than purely organizational, perspective.


Another refinement mechanism is to trigger (or eliminate) precursors that demonstrate deviations from (or are consistent with) trends over time. A worker who suddenly starts printing off hours is more likely to be engaging in malicious activity than a worker who regularly prints off hours due to her working habits.
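A minimal sketch of such a trend-aware refinement, with the baseline window, threshold factor, and counts as illustrative assumptions, might compare a user's current off-hours printing against her own history:

```python
# Sketch: trigger the off-hours precursor only when a user's off-hours printing
# deviates from her own history, rather than on every off-hours job. The
# history counts and thresholds are illustrative assumptions.
def sudden_off_hours(history_weeks, current_week, factor=3, minimum=5):
    """Flag the current week if off-hours jobs jump well above the weekly baseline."""
    baseline = sum(history_weeks) / max(len(history_weeks), 1)
    return current_week >= minimum and current_week > factor * max(baseline, 1)

print(sudden_off_hours([0, 1, 0, 1], current_week=12))    # True: sudden spike
print(sudden_off_hours([10, 12, 9, 11], current_week=12)) # False: consistent habit
```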

4. Conclusions

While advances have been made in detecting and mitigating the malicious insider problem, much work remains to be done. As the security perimeters for government and industry continue to become more fluid, the challenges associated with detecting and identifying malicious insider behavior become increasingly difficult. This paper discusses a technical approach to visualizing the malicious insider threat, but research indicates that successful mitigation of these threats will depend on both technical and behavioral solutions [13]. Advancements in the technological visualization realm, and their continued incorporation into decision support systems, will include the development of new tools and the adaptation of existing tools to facilitate early identification of potential malicious insider activity with fewer false positives and false negatives.

As the field continues to advance, new issues are being identified for focused research and development. Methods need to be developed to protect the decision support system, visualization tools, and underlying data from malicious insiders. With respect to workgroups, there remains much work to be done in order to develop visualizations that more accurately reflect the wide range of organizational structures and workgroup roles that can exist. Specific issues include defining exceptions for workgroups, extending workgroup roles to account for time trends such as seasonal variations in responsibilities, and developing methods to better define the baseline actions that establish norms. Finally, the increased development and use of decision support systems to help identify and investigate malicious insider threats (as well as other threats) will contribute to a better understanding and optimization of the associated tools. All of these will contribute to a more effective means to use visualization to determine when a deviation from normal expected behavior is an indication of malicious insider activity and merits further investigation.

5. References

[1] Frank L. Greitzer, Andrew P. Moore, Dawn M. Cappelli, Dee H. Andrews, Lynn A. Carroll, Thomas D. Hull, "Combating the Insider Cyber Threat," IEEE Security and Privacy, vol. 6, no. 1, pp. 61-64, Jan./Feb. 2008, doi:10.1109/MSP.2008.8

[2] Spitzner, Lance (2003) Honeypots: Catching the Insider Threat. www.acsac.org. Retrieved March 1, 2009 from http://www.acsac.org/2003/papers/spitzner.pdf

[3] Keeney, Michelle, et al. Insider Threat Study: Computer Systems Sabotage of Critical Infrastructure Sectors. http://www.cert.org/archive/pdf/insidercross051105.pdf

[4] Jacobson, I., M. Christerson, P. Jonsson, and G. Overgaard, 1992. Object-oriented Software Engineering: A Use-Case Driven Approach. Reading, MA: Addison-Wesley.

[5] Raffael Marty, Applied Security Visualization, Boston, MA: Addison-Wesley, August 2008.

[6] Marcus Maloof and Gregory Stephens, ELICIT: A System for Detecting Insiders Who Violate Need-to-Know, RAID 2007.

[7] Warren Harrison, "The Saboteur Within," IEEE Software, vol. 22, no. 4, pp. 5-7, July/Aug. 2005, doi:10.1109/MS.2005.109

[8] Bishop, M. and Gates, C. 2008. Defining the insider threat. In Proceedings of the 4th Annual Workshop on Cyber Security and Information Intelligence Research: Developing Strategies To Meet the Cyber Security and Information Intelligence Challenges Ahead (Oak Ridge, Tennessee, May 12 - 14, 2008). F. Sheldon, A. Krings, R. Abercrombie, and A. Mili, Eds. CSIIRW '08, vol. 288. ACM, New York, NY, 1-3. DOI= http://doi.acm.org/10.1145/1413140.1413158

[9] Hertzog, P., Visualizations to improve reactivity towards security incidents inside corporate networks, VizSEC 2006.

[10] Shneiderman, B., Tree visualization with treemaps: a 2-d space-filling approach, ACM Transactions on Graphics, vol. 11, 1 (Jan. 1992) 92-99.

[11] Joel Predd, Shari Lawrence Pfleeger, Jeffrey Hunker, Carla Bulford, "Insiders Behaving Badly," IEEE Security and Privacy, vol. 6, no. 4, pp. 66-70, July/Aug. 2008, doi:10.1109/MSP.2008.87

[12] Martinez-Moyano, I. J., Rich, E., Conrad, S., Andersen, D. F., and Stewart, T. R. 2008. A behavioral theory of insider-threat risks: A system dynamics approach. ACM Trans. Model. Comput. Simul. 18, 2 (Apr. 2008), 1-27. DOI= http://doi.acm.org/10.1145/1346325.1346328

[13] Andersen, D.F., Cappelli, D., Gonzalez, J.J., Mojtahedzadeh, M., Moore, A.P., Rich, E., Sarriegui, J.M., Shimeall, T.J., Stanton, J.M., Weaver, E., and Zagonel, A. Preliminary System Dynamics Maps of the Insider Cyber-threat Problem. Proceedings of the 22nd International Conference of the System Dynamics Society. Oxford, England. July 2004.
