DEMYSTIFYING ARTIFICIAL INTELLIGENCE IN INFORMATION SECURITY

AI MOVES INTO THE MAINSTREAM, BUT REMAINS A MYSTERY

AI has long been associated with niche applications such as gaming – whether beating world champion players at chess1 or Go,2 or powering characters in video games – but is now finding many more mainstream business use cases. Organisations use variations of AI to support processes in areas including customer service (using automated chatbots to communicate with customers online), human resources (vetting CVs during recruitment) and bank fraud detection (analysing payments to determine those which are likely to be fraudulent). AI is becoming big business, with the global market for AI-related products forecast to be worth $191bn by 2024.3

The amount of hype around the topic means that IT and security leaders are also taking notice of the opportunities provided by AI: 63% of IT decision makers plan to leverage AI technology to automate their security processes.4 However, the hype also leads to confusion and scepticism over what AI actually is and what it really means for business and security. It is difficult to separate wishful thinking from reality.

ABOUT THIS BRIEFING PAPER

This briefing paper removes the confusion, demystifying AI in information security. It introduces the topic, helping business and security leaders to:

‒ understand what AI is
‒ identify the information risks posed by AI, and how to mitigate them
‒ explore opportunities around using AI in defence.

AI is creating a new frontier in information security. Systems that independently learn, reason and act will increasingly replicate human behaviour – and like humans they will be flawed, but also capable of achieving great things. AI poses new information risks and makes some existing ones more dangerous. But it can also be used for good and should become a key part of every organisation’s defensive arsenal. Business and information security leaders alike must understand both the risks and opportunities before embracing technologies that will soon become a critically important part of everyday business.

Artificial Intelligence (AI) inspires intrigue, fear and confusion in equal measures. For many people the topic is shrouded in mystery, as visions of AI ushering in a brave new world clash with dystopian science fiction nightmares depicting human beings enslaved by super-intelligent machines. A dose of reality is required, especially regarding the impact of AI on business and information security. Neither wildest dreams nor worst nightmares are likely to come true any time soon; but AI already poses risks to information assets, as well as the potential to significantly improve cyber defences.

1 M. R. Anderson, “Twenty years on from Deep Blue vs Kasparov: how a chess match started the big data revolution”, The Conversation, 11 May 2017, http://theconversation.com/twenty-years-on-from-deep-blue-vs-kasparov-how-a-chess-match-started-the-big-data-revolution-76882

2 T. Revell, “AlphaGo’s AI upgrade gets round the need for human input”, New Scientist, 18 October 2017, https://www.newscientist.com/article/mg23631484-000-alphagos-ai-upgrade-gets-round-the-need-for-human-input/

3 “Artificial Intelligence Market is estimated to be worth US$ 191 Billion By 2024”, MarketWatch, 23 January 2019, https://www.marketwatch.com/press-release/artificial-intelligence-market-is-estimated-to-be-worth-us-191-billion-by-2024-2019-01-23

4 “Trend Micro Survey Confirms Organizations Struggle With a Lack of Security Talent and Tidal Waves of Threat Alerts”, Trend Micro, 18 March 2019, https://newsroom.trendmicro.com/press-release/commercial/trend-micro-survey-confirms-organizations-struggle-lack-security-talent-and



1 What is AI?

There is often confusion over what the term ‘AI’ refers to because it is used in so many different contexts. Is it a concept? A technology? Or simply a marketing buzzword? In reality it can be – and often is – used in any and all of these contexts. Those curious enough to look for a definition will find a different version in every paper, presentation or article they come across.

The following is not just another definition, but a summary of:

‒ what AI means as a concept (broadly – computer systems that learn, reason and act independently)
‒ the different types of technology that can be described as AI (and how they enable systems to learn, reason and act).

COMPUTER SYSTEMS THAT INDEPENDENTLY LEARN, REASON AND ACT

The simplest way of thinking about AI is to compare it to human intelligence. If intelligence is the ability to acquire and apply knowledge and skills, artificially intelligent computer systems meet that definition by being able to independently learn, reason and act.5 AI systems learn from their own experience – rather than acting purely on programmers’ instructions – and are able to influence or manipulate their environment, whether physical or digital.

The concept of AI has been around since the early days of computer science. In 1950, computing pioneer Alan Turing developed a test to determine whether a computer is capable of thinking like a human being. To pass the Turing test, a computer system must be able to hold a conversation with a human without the person realising they are talking to a computer. But it is only recently that AI systems have begun to pass the test:6 the growth of big data has given AI systems a vast wealth of learning material to hone their capabilities, and ever-increasing processing power makes it possible to analyse large datasets.

While current AI systems are highly sophisticated, there are none that can learn, reason and act exactly like a human – a ‘general AI’ does not yet exist. Instead, AI systems have ‘narrow’ intelligence: they complete specific tasks that a human could perform, emulating small slices of human behaviour.7 However intelligent it seems, an AI system is still only programmed to perform a very specific function and operates within the boundaries of that programming. It would not be able to complete a different task without a redesign.

“The AI system that beats world champions at Go is terrible at chess and even worse at tennis.” – ISF Member

Examples of how AI systems learn, reason and act independently in everyday real-world scenarios and in information security are shown in Figure 1. While smart speakers and network monitoring systems each demonstrate intelligence in performing their specific functions, one would not be able to do the job of the other as they require different inputs, learn from different datasets and operate in different environments.

5 K. Hao, “What is AI? We drew you a flowchart to work it out”, MIT Technology Review, 10 November 2018, https://www.technologyreview.com/s/612404/is-this-ai-we-drew-you-a-flowchart-to-work-it-out/

6 J. Hruska, “Did Google Fake Its Duplex AI Demo?”, Extreme Tech, 18 May 2018, https://www.extremetech.com/computing/269497-did-google-fake-its-google-duplex-ai-demo

7 T. D. Jajal, “Distinguishing between Narrow AI, General AI and Super AI”, Medium, 21 May 2018, https://medium.com/@tjajal/distinguishing-between-narrow-ai-general-ai-and-super-ai-a4bc44172e22


Figure 1: Examples of systems that independently learn, reason and act, in the real world and in information security

Real-world example: smart speakers

‒ Learn: The system continuously monitors activity and listens for activation words. The smart speaker’s owner asks it to play a song that they like, but they have forgotten the title and the name of the band: “Alexa, play the funny song that says the humans are dead”.
‒ Reason: The system understands human speech, identifies what user requests refer to and decides how to respond. The smart speaker identifies that the user is talking to it; that they want it to play an audio file; that they have a specific file in mind; that the file is categorised as humorous; and that it contains the lyric “the humans are dead”.
‒ Act: The system independently initiates a response to the request. The smart speaker locates the audio file that best fits the description (‘Robots’ by Flight of the Conchords) on its server and plays it through a music streaming application.

Information security example: network monitoring

‒ Learn: The system monitors activity across the network and learns what ‘normal’ looks like. The network monitoring system analyses activity including server access, device connections, data volumes and credential use.
‒ Reason: The system identifies unusual activity and decides how to respond. A ransomware attack hits the network after a user clicks on a link in a phishing email. The network monitoring system does not recognise the ransomware file, but identifies the way in which infected devices scan the network as abnormal.
‒ Act: The system independently initiates a response to the threat. The network monitoring system sends an alert to the organisation’s Security Orchestration, Automation and Response (SOAR) system, instructing it to immediately cut the connection to infected devices.

In order to learn, reason and act, AI systems such as those represented above require multiple inputs, including large datasets (e.g. the data constantly produced by network activity) and/or sensors to perceive and make sense of a physical environment (e.g. microphones in smart speakers). These sensors may not be intelligent themselves, but they provide vital context for the AI system to be able to learn and make decisions.


8 M. Mueller, “Medical Applications Expose Current Limits of AI”, Spiegel, 3 August 2018, https://www.spiegel.de/international/world/playing-doctor-with-watson-medical-applications-expose-current-limits-of-ai-a-1221543.html

AI TECHNOLOGIES

In practice, AI systems are usually a collection of multiple systems and technologies. AI is often used as an umbrella term for a range of technologies, which can be broadly separated into three categories:

‒ Machine learning
‒ Perception systems
‒ Intelligent robotics

These different technologies often work together to enable systems to perceive their environment and then learn, reason and act. There is plenty of overlap between these technologies and their functions. For example, perception systems can be the beginning of a learning process but often do more than just perceive; they may also be able to reason and act to influence that environment. These perception systems may also rely on machine learning (e.g. to classify different types of image or understand human language).

Machine learning

When most people talk about AI, they are primarily focused on machine learning. Machine learning refers to systems that improve their performance on a specified task over time, learning from experience. Machine learning systems are well placed to make the most of big data, ingesting vast amounts of structured and unstructured data in order to make decisions.

These systems use three different learning methods:

‒ Supervised learning, i.e. training the system to learn from labelled training data
‒ Unsupervised learning, i.e. training the system to learn from unlabelled and unclassified information
‒ Reinforcement learning, i.e. allowing the system to learn as it performs a task repeatedly
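To make the distinction concrete, the following minimal sketch shows the first two methods side by side; the use of scikit-learn, the synthetic data and all variable names are illustrative assumptions rather than anything prescribed by this paper.

```python
# A minimal sketch of supervised vs unsupervised learning, assuming
# scikit-learn is available; data and parameters are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))              # feature vectors

# Supervised: the system learns from labelled training data.
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # labels supplied by a human
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print("supervised predictions:", clf.predict(X[:5]))

# Unsupervised: the system finds structure in unlabelled data.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("unsupervised clusters:", clusters[:5])

# Reinforcement learning (not shown) would instead reward or penalise
# the system as it repeatedly attempts a task.
```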

Subsets of machine learning include cognitive computing (systems that mimic human thought processes, e.g. neural networks), decision trees (systems that use a tree-like graph to model decisions and their possible outcomes) and deep learning (systems that learn in successive steps, requiring several decision-making layers between input and output).

Machine learning systems are being increasingly trialled across multiple industries. One well-known potential application is in healthcare, helping to improve patient diagnosis. Machine learning systems can ingest unstructured data at a far greater speed than humans, including medical research and journals – so in theory they can build up a greater knowledge of medical conditions based on the latest research and make accurate diagnoses more quickly than any doctor (see Figure 2). While this technology is still maturing and sometimes runs into practical issues in terms of the data it uses, it demonstrates what may be possible in the future.8

Figure 2: Machine learning in patient diagnosis – medical research feeds a machine learning system, which supports patient diagnosis.

Machine learning techniques underpin many perception systems and intelligent robotics, enabling them to make specific decisions.


9 A. Ha, “Facebook confirms that it’s acquiring Bloomsbury AI”, TechCrunch, 3 July 2018, https://techcrunch.com/2018/07/03/facebook-confirms-bloomsbury-ai-acquisition/

10 A. Norman, “Chinese Police Add Facial Recognition Glasses to Their Surveillance Arsenal”, Futurism, 8 February 2018, https://futurism.com/chinese-police-facial-recognition-glasses-surveillance-arsenal

Perception systems

Perception systems observe and understand real-world activities and react in a manner that can be understood by humans. These types of system include natural language processing (NLP) and computer vision.

NLP systems can understand, interpret and manipulate human language, whether spoken or written (speech, text and handwriting recognition are subsets of NLP). Applications for NLP include identifying and removing fake news on social media9 and using voice-controlled personal assistants on smartphones or smart speakers.

Computer vision systems can acquire, process and analyse digital images or videos (essentially replicating and enhancing the human visual system). Applications include helping autonomous vehicles to recognise their surroundings and providing facial recognition in law enforcement. Chinese police already use augmented reality glasses equipped with facial recognition technology to identify criminals or persons of interest in real time: the glasses are connected to a database of images of known criminals and overlay relevant information on a police officer’s view of the real world as they go on patrol (see Figure 3).10

Figure 3: Computer vision powering augmented reality glasses for surveillance

Perception systems form the basis for a range of applications where digital and physical interaction is required – and therefore tend to be an important component of intelligent robotics.


11 A. Owen-Hill, “What’s the Difference Between Robotics and Artificial Intelligence?”, ROBOTIQ, 19 July 2019, https://blog.robotiq.com/whats-the-difference-between-robotics-and-artificial-intelligence

12 J. Walker, “Machine Learning in Manufacturing – Present and Future Use-Cases”, Emerj, 13 August 2019, https://emerj.com/ai-sector-overviews/machine-learning-in-manufacturing/

13 R. Savaram, “30 Robotic Process Automation Examples”, MindMajix, 17 May 2017, https://mindmajix.com/30-rpa-examples

14 “Gartner Says Worldwide Robotic Process Automation Software Market Grew 63% in 2018”, Gartner, 24 June 2019, https://www.gartner.com/en/newsroom/press-releases/2019-06-24-gartner-says-worldwide-robotic-process-automation-sof

Intelligent robotics

Robots do not all take a humanoid shape and speak with a clipped metallic accent. They take many shapes and sizes and have different methods of perceiving and interacting with their environment – automated vacuum cleaners and vehicles are robots, as are moving toys or exoskeletons used for physical rehabilitation. While not all robots can be classed as intelligent (most act only on relatively simple, pre-programmed commands), intelligent robotics can be seen as the application of AI in the real world.11 Intelligent robots require the ability to perceive their environment and act based on reasoned decisions.

One example of an area in which intelligent robots might soon outperform their less intelligent counterparts is in manufacturing. Robots that can independently learn to perform tasks can retrain themselves to produce new products – meaning that a manufacturer could produce multiple products in the same factory without significant re-programming overheads. Multiple robots learning how to perform the same tasks simultaneously through trial and error will be able to perfect new techniques much more quickly than a single robot could (see Figure 4).12

Figure 4: Intelligent robots training themselves through reinforcement learning

‒ Multiple robots train themselves to conduct new tasks using reinforcement learning (i.e. through trial and error).
‒ Once one robot has perfected a task, all the others will learn to perform the task in the same way.
‒ The robots can train themselves to produce multiple products in the same manner.

Robotic Process Automation

‘Robotics’ do not only exist in the physical world – the term can also apply to software. Robotic Process Automation (RPA) refers to technology that enables easy configuration of software that emulates the interactions of a human user to execute a business process. An evolution of the screen scraping concept, RPA ‘robots’ deployed on a particular application interpret and communicate with other systems, automating multiple tasks.13 RPA is the fastest-growing segment of the global enterprise software market, worth an estimated $846m in 2018.14
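As a rough illustration of the concept, the sketch below has a software ‘robot’ repeat the same data-entry steps a human would perform across a batch of records. The CSV sample and the UI-action functions are hypothetical stand-ins; real RPA platforms provide equivalents via recorded user interactions.

```python
# A minimal sketch of an RPA-style software robot; the UI-action functions
# below are hypothetical stand-ins for a real RPA platform's capabilities.
import csv
import io

SAMPLE_EXPORT = "supplier,amount\nAcme Ltd,120.00\nGlobex,75.50\n"

def open_invoice_screen():           # hypothetical UI action
    print("robot: opened invoice entry screen")

def type_field(name, value):         # hypothetical UI action
    print(f"robot: typed {value!r} into field {name!r}")

def click_submit():                  # hypothetical UI action
    print("robot: clicked Submit")

# The robot repeats the steps a human user would perform, once per record.
for row in csv.DictReader(io.StringIO(SAMPLE_EXPORT)):
    open_invoice_screen()
    type_field("supplier", row["supplier"])
    type_field("amount", row["amount"])
    click_submit()
```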


15 K. Wiggers, “MIT’s AI makes autonomous cars drive more like humans”, VentureBeat, 23 May 2019, https://venturebeat.com/2019/05/23/mits-ai-makes-autonomous-cars-drive-more-like-humans/

How AI technologies work together

Automated vehicles provide a good demonstration of how different AI technologies work together to perform a single narrow function, i.e. driving. As shown in Figure 5, automated vehicles require:

‒ inputs from computer vision sensors, allowing them to identify objects (detecting how far away they are and whether they are moving or stationary) and to read and follow directions given by road signs15
‒ decision trees to help them make real-time decisions
‒ robotics to control movement.

Figure 5: AI technologies at work in an automated vehicle

‒ Learn: The vehicle must learn what all the objects around it are, what they are doing and what they mean. The objects in front are other automobiles that are moving slowly; there is nothing behind the vehicle. AI technology – computer vision: image processing, trained and supported by neural networks.
‒ Reason: The vehicle must make reasoned decisions about what actions to take, based on what it knows about its surroundings. The automobiles in front are dangerous obstacles, so the vehicle must either overtake or slow down. It is safe to overtake because there is nothing behind or to the left of the vehicle. AI technology – decision trees: logical choices based on known data (i.e. inputs from computer vision) result in decisions over required actions.
‒ Act: The vehicle must act on the decisions it has taken. Indicate left, move into the left-hand lane and accelerate in order to overtake the automobiles in front. AI technology – robotics: instantaneous, continuous control of all functions (e.g. accelerating, braking, steering).

Automated vehicles also require programming – potentially via machine learning – to understand and obey the rules of the road. For example, in a country where vehicles drive on the left-hand side of the road, the vehicle in Figure 5 would have to move to the right to overtake.
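The reasoning step in Figure 5 can be written out as an explicit decision tree. The sketch below is an illustrative simplification (the condition names and the drive_on_left parameter are assumptions), showing how the overtaking decision and the rules-of-the-road point above reduce to branching logic.

```python
# A minimal sketch of the overtaking decision in Figure 5 as explicit
# branching logic; condition names and parameters are illustrative.
def decide(obstacle_ahead, ahead_is_slow, passing_lane_clear, behind_clear,
           drive_on_left=False):
    if not (obstacle_ahead and ahead_is_slow):
        return "maintain speed and lane"
    if passing_lane_clear and behind_clear:
        side = "right" if drive_on_left else "left"
        return f"indicate {side}, change lane and accelerate to overtake"
    return "slow down and keep a safe distance"

print(decide(True, True, True, True))                      # overtake on the left
print(decide(True, True, True, True, drive_on_left=True))  # overtake on the right
print(decide(True, True, False, True))                     # slow down instead
```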

The inter-relationship between AI technologies means that they are all susceptible to a range of threats and pose a variety of information risks, explored in Section 2.


2 Information risks posed by AI

As AI systems are adopted by organisations, they will become increasingly critical to day-to-day business operations. Some organisations already have, or will have, business models entirely dependent on AI technology. No matter the function for which an organisation uses AI, such systems and the information that supports them have inherent vulnerabilities and are at risk from both accidental and adversarial threats. Compromised AI systems make poor decisions and produce unexpected outcomes.

Simultaneously, organisations are beginning to face sophisticated AI-enabled attacks – which have the potential to compromise information and cause severe business impact at a greater speed and scale than ever before (see Figure 6). Taking steps both to secure internal AI systems and defend against external AI-enabled threats will become vitally important in reducing information risk.

Figure 6: How AI poses information risk – threats to AI systems and AI-enabled attacks

‒ Threats to the AI systems used by the organisation result in poor decisions and unexpected outcomes.
‒ AI-enabled attacks on traditional systems used by the organisation result in compromised information and resultant business impact.

AI SYSTEMS FACE ACCIDENTAL AND ADVERSARIAL THREATS

The same things that go wrong on more conventional systems can go wrong with AI. The crucial difference is that the impacts of incidents involving AI systems are highly amplified in terms of scale and speed. AI systems operate on large amounts of data and make decisions extremely quickly, often using decision-making processes that their human programmers do not fully understand. This makes it challenging for operators to notice when systems make errors or unexpected decisions – which they are liable to do. It can take a long time before mistakes are recognised, by which time any poor decisions may have had an adverse business impact.

“There is a great risk in giving machine learning systems total autonomy, as they make decisions we wouldn’t expect, even after a long test phase.” – ISF Member

Unexpected decisions carry business risk: if an AI system is trusted to run a critical or high-value process, such as a trading algorithm in the financial sector, a poor decision can have a high cost. Unexpected decisions also carry regulatory risk: if an AI system processes sensitive data, and the organisation does not understand how it makes decisions based on that data, the organisation risks running afoul of privacy regulations such as the EU General Data Protection Regulation (GDPR).

Unexpected decisions can be the result either of accidents or of malicious actors deliberately trying to compromise the AI system. Some of the most common threats – and tips on emerging practice to mitigate them – are highlighted below.


Bias (accidental)

Bias can be introduced into an AI system either in the dataset used to train it, or by the programmer(s) who set the parameters for how it learns. Examples include facial recognition systems with a bias against specific ethnic groups16 and recruitment tools that discriminate by race or gender.17 Bias in the decision-making process inevitably leads to poor decisions – and in some cases such discrimination may lead to legal action against the organisation using the offending system.

All humans have a natural set of biases that can contribute to poor decision making around information security (see the ISF briefing paper Human-Centred Security). When working with a team of humans, there is a chance that each person’s biases may be counteracted or balanced out by others. However, that mitigation will not work for a single AI system working independently.

Information attribute affected: Integrity

Emerging practice to mitigate the threat: The risk of AI systems exhibiting bias creates a strong requirement for human oversight. The potential for bias should be checked when designing the system and setting initial parameters. Training those who operate the systems to constantly check outputs (e.g. via data sampling) should help to identify and correct obvious biases. If a recruitment tool appears to highly favour men named Jared who have previously played lacrosse, for example, it is a sign that the system is not working properly.18
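As a simple illustration of the output-sampling idea, the sketch below compares a model’s selection rates across two groups in a sample of its decisions; the records, field names and the notion of a rate gap are illustrative assumptions.

```python
# A minimal sketch of checking sampled outputs for bias; the sample data
# and the 'selection rate' comparison are illustrative assumptions.
decisions = [
    {"group": "A", "selected": True},  {"group": "A", "selected": True},
    {"group": "A", "selected": False}, {"group": "B", "selected": False},
    {"group": "B", "selected": False}, {"group": "B", "selected": True},
]

rates = {}
for d in decisions:
    hits, total = rates.get(d["group"], (0, 0))
    rates[d["group"]] = (hits + d["selected"], total + 1)

for group, (hits, total) in rates.items():
    print(f"group {group}: selection rate {hits / total:.2f}")
# A large gap between groups flags the system for human review.
```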

Overfitting or underfitting (accidental)

Bad design and/or implementation can result in AI systems making bad decisions by overfitting or underfitting the decision-making model to the data analysed. Modelling too closely on the available data leads to overfitting; not matching closely enough leads to underfitting. Either way, the system reaches decisions that are too specific or too general, rather than striking the right balance between certainty and doubt.

Information attribute affected: Integrity

Emerging practice to mitigate the threat: Organisations should test AI systems thoroughly to check for overfitting or underfitting before deploying them in an operational environment. It is also important to continue testing and monitoring the systems after they have been deployed, as they continue to learn and make inferences about the data they are fed. Organisations should not take a ‘fire and forget’ approach to AI systems – they need to be sure that the systems are still making good decisions in the months and years following deployment.
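One common form of the pre-deployment test is to hold back part of the data and compare performance on seen versus unseen records; a model that scores far better on its training data is overfitting. A minimal sketch, with synthetic data and an illustrative threshold:

```python
# A minimal sketch of testing for overfitting, assuming scikit-learn;
# the data, noise level and warning threshold are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = (X[:, 0] > 0).astype(int)
y = np.where(rng.random(300) < 0.15, 1 - y, y)    # noisy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = DecisionTreeClassifier().fit(X_tr, y_tr)  # unconstrained: memorises noise

gap = model.score(X_tr, y_tr) - model.score(X_te, y_te)
print(f"train/test accuracy gap: {gap:.2f}")
if gap > 0.1:                                     # illustrative threshold
    print("warning: likely overfitting - retune before deployment")
```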

16 J. Buolamwini, “Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It”, Time, 7 February 2019, https://time.com/5520558/artificial-intelligence-racial-gender-bias/

17 J. Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women”, Reuters, 10 October 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

18 A. Patel, T. Hatzakis, K. Macnish, M. Ryan, A. Kirichenko, “Security Issues, Dangers and Implications of Smart Information Systems”, SHERPA, 29 March 2019, https://dmu.figshare.com/articles/D1_3_Cyberthreats_and_countermeasures/7951292


Poisoning (adversarial) / Accidental errors (accidental)

Attackers with access to the datasets used to train AI systems can influence the learning process by tampering with the data or the parameters used by the system. Poisoning attacks involve gradually introducing carefully designed samples or ‘perturbations’ that avoid triggering alerts but eventually fool the system into seeing the changes as natural rather than abnormal. This affects its decision-making capability and leads to misleading results. One infamous example of poisoning is the way in which Microsoft’s Twitter chatbot, Tay, became a racist Nazi sympathiser in the space of a day – based entirely on the information it was being fed by other Twitter users.19

Similarly, errors introduced by accident into the dataset used to train an AI system can lead to wrong or unexpected outcomes or decisions.

Information attribute affected: Integrity

Emerging practice to mitigate the threat: Given the amount of data used by AI systems, this can be a thorny issue – manually checking for accuracy will usually be an impossible task. Organisations should introduce processes designed to avoid introducing errors, identify errors that do creep into the systems, and then correct them.

Using data provided by trusted sources and adopting training models that factor occasional anomalies or errors into their decision-making process should help to mitigate the threat.

AI systems should also be isolated from non-related systems (i.e. anything not required to help make decisions) and have fail-safe mechanisms built in so that they can be switched off if they create erroneous outputs.
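One way to apply the ‘trusted sources’ principle is to screen incoming training samples against a model of trusted data and quarantine anomalies before the system learns from them. A minimal sketch, with IsolationForest as an illustrative anomaly detector and synthetic data standing in for real feeds:

```python
# A minimal sketch of screening new training data for poisoning; the
# detector choice (IsolationForest) and synthetic data are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
trusted = rng.normal(0, 1, size=(500, 4))                # data from trusted sources
incoming = np.vstack([rng.normal(0, 1, size=(95, 4)),
                      rng.normal(6, 0.5, size=(5, 4))])  # last 5 look poisoned

screen = IsolationForest(random_state=2).fit(trusted)
ok = screen.predict(incoming) == 1                       # 1 = consistent with trusted data
print(f"accepted {ok.sum()} samples, quarantined {(~ok).sum()} for review")
```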

Evasion (adversarial)

Attackers without access to the datasets used to train AI systems can tamper with inputs to force the system into making mistakes. Evasion attacks modify input data so that the system cannot correctly identify the input, misclassifying data and effectively rendering the system unavailable. One well-cited example involves fooling image processing systems into incorrectly identifying images (e.g. a picture of a panda is identified as a gibbon, or – more seriously – a traffic stop sign is read as a speed limit sign). This is covered in more detail in the ISF report Threat Horizon 2021.

Information attribute affected: Availability

Emerging practice to mitigate the threat: Training methods involving the creation and inclusion of adversarial samples in the training set can help the AI system learn to identify malicious inputs.

Placing safeguard mechanisms between the public interface to the model and the model itself can help to detect and clean adversarial inputs.
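A minimal sketch of the adversarial-training idea: perturbed copies of training inputs, still carrying their correct labels, are added to the training set so the model learns to resist small malicious changes. Random sign perturbation stands in here for a real attack-generation method such as FGSM; everything below is an illustrative assumption.

```python
# A minimal sketch of adversarial training; the perturbation method,
# budget and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 8))
y = (X[:, :2].sum(axis=1) > 0).astype(int)

eps = 0.3                                            # perturbation budget
X_adv = X + eps * np.sign(rng.normal(size=X.shape))  # crude adversarial copies

model = RandomForestClassifier(random_state=3)
model.fit(np.vstack([X, X_adv]), np.concatenate([y, y]))  # correct labels kept
print("trained on", len(X) + len(X_adv), "samples (half adversarial)")
```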

Inference (adversarial)

Attackers may aim to reverse-engineer AI systems to expose the data used to train them. This could result in confidential data being compromised: in medical diagnostics, for example, the sensitive information required to make particular decisions may be inferred by reverse-engineering how the decision was made.20

Reverse-engineering can also enable attackers to replicate an AI system, undermining the competitive advantage of the organisation that developed it.

Information attribute affected: Confidentiality

Emerging practice to mitigate the threat: Organisations that deploy AI systems which make decisions based on sensitive data should conduct regular black box penetration testing (i.e. assuming the attacker has no direct access to the system) to identify whether confidential data can be extracted from the model.
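One concrete probe such a penetration test might include is a confidence-based membership inference check: if the model is markedly more confident on records it was trained on than on fresh records, an attacker could infer who was in a sensitive training set. A minimal sketch under those assumptions:

```python
# A minimal sketch of a membership inference probe; the data and the
# confidence-gap heuristic are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
X_train = rng.normal(size=(200, 6))
y_train = (X_train[:, 0] > 0).astype(int)
X_fresh = rng.normal(size=(200, 6))        # records never seen in training

model = RandomForestClassifier(random_state=4).fit(X_train, y_train)

conf_member = model.predict_proba(X_train).max(axis=1).mean()
conf_fresh = model.predict_proba(X_fresh).max(axis=1).mean()
print(f"mean confidence - members: {conf_member:.2f}, non-members: {conf_fresh:.2f}")
# A large gap suggests the model leaks information about its training data.
```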

19 J. Vincent, “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day”, The Verge, 24 March 2016, https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist?source=post_page

20 J. Vaidya, B. Shafiq, X. Jiang, L. Ohno-Machado, “Identifying inference attacks against healthcare data repositories”, PMC, U.S. National Institutes of Health’s National Library of Medicine (NIH/NLM), 18 March 2013, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3845790/


AI-ENABLED ATTACKS PUT ORGANISATIONAL INFORMATION AT RISK

While AI systems adopted by organisations present a tempting target, adversarial attackers are also beginning to use AI for their own purposes. AI is a powerful tool that can be used to enhance attack techniques, or even create entirely new ones. Organisations must be ready to adapt their defences in order to cope with the scale and sophistication of AI-enabled cyber attacks.

“AI doesn’t get tired or give up, so can clearly pose big threats to anyone’s data and resources.” – Sophie Hackford, Co-founder, 1715 Labs

Examples of the types of attack technique that AI is already enhancing are listed below – along with examples of emerging practice in countering these attacks.

Social engineering

AI is supercharging the scale and sophistication of social engineering attacks, particularly phishing, spear phishing and whaling. Combining the ability to learn about patterns of behaviour and communication styles with natural language processing capabilities, voice replication and deepfake videos, AI systems are able to write convincing phishing emails – and even follow up with automated phone calls that replicate the voice of another person (‘vishing’). There are already signs of attacks that impersonate the voice of senior staff members.21 As AI systems become increasingly adept at passing the Turing test, social engineering techniques will become even more convincing and therefore more dangerous.

Emerging practice to mitigate the threat: Monitoring network access and activity can help to identify where communications originate and flag suspicious or anomalous behaviour. This may be one area where AI defences are required to protect against AI threats: enhanced network monitoring and access management is one of the key areas in which AI can be used to protect information (see Section 3).

Vulnerability identification

Attackers are using AI tools to identify vulnerabilities in networks and applications. Working 24/7 and at a much faster rate than humans, AI systems are proving far more efficient at such tasks. While AI systems do not yet match skilled human hackers in exploiting vulnerabilities, they speed up the process for human attackers by doing the first part of the job for them.

Emerging practice to mitigate the threat: Identifying and patching vulnerabilities is a critical aspect of information security, and the emergence of AI systems that can speed up the process of identifying vulnerabilities is both a threat and an opportunity. Organisations can use similar tools to identify weak spots on their own networks and patch them accordingly (see Section 3).

AI-enabled attack techniques of the future: Threat Horizon reports

There is great potential for AI to be used further for malicious purposes in the future, as the related technologies develop and become less costly and more accessible. The ISF’s Threat Horizon report series aims to identify new threats that may emerge over the coming years. Examples of potential new AI-enabled threats include the:

‒ spread of commercially damaging misinformation by intelligent chatbots (see Threat Horizon 2019)
‒ emergence of intelligent malware that can independently exploit vulnerabilities (see Threat Horizon 2020)
‒ manipulation of machine learning powered computer vision systems in automated vehicles, leading to danger on the roads (see Threat Horizon 2021).

21 “Israel sees cyber attacks by voice impersonating of senior staff”, Outlook India, 10 July 2019, https://www.outlookindia.com/newsscroll/israel-sees-cyber-attacks-by-voice-impersonating-of-senior-staff/1571982


3 Defensive opportunities provided by AI

Security practitioners are always fighting to keep up with the methods used by attackers, and AI systems can provide at least a short-term boost by significantly enhancing a variety of defensive mechanisms. AI can automate numerous tasks, helping understaffed security departments to bridge the specialist skills gap and improve the efficiency of their human practitioners. Protecting against many existing threats, AI can put defenders a step ahead. However, as explored in Section 2, adversaries are not standing still – as AI-enabled threats become more sophisticated, security practitioners will need to use AI-supported defences simply to keep up.

“Traditional tools that are programmed to spot known threats are no longer sufficient.” – Doug Topalovic, VP of IT, Heritage Education Fund22

AI-SUPPORTED DEFENSIVE MECHANISMS: DETECT, PREVENT AND RESPOND

Different AI systems can help organisations to detect, prevent and respond to cyber attacks. This section identifies the ways in which AI can enhance defensive mechanisms. These mechanisms are often interlinked: preventative and responsive measures would not be possible without detection measures in place.

Detect

Examples of AI-enhanced defensive mechanisms that can help detect cyber attacks are described below.

Network monitoring and analytics

This is perhaps the area where AI is most effective as a defence mechanism. Various vendors already offer AI-based solutions that monitor network activity, starting with no awareness of how the network operates and using machine learning to build up a picture of what normal activity looks like. Examples of the type of activity analysed include server access, data volumes, timings of events and credential use. Abnormal activity can then be identified and acted upon, helping to detect and prevent intrusions or signs of attacks such as DDoS.
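A minimal sketch of the approach: fit an anomaly detector to a baseline of normal activity, then score new activity against it. The feature choices and the use of IsolationForest are illustrative assumptions, not a description of any vendor’s product.

```python
# A minimal sketch of learning 'normal' network activity and flagging
# deviations; features, data and detector choice are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(5)
# columns: logins/hour, MB transferred, distinct hosts contacted
normal = np.column_stack([rng.poisson(5, 1000),
                          rng.normal(200, 40, 1000),
                          rng.poisson(8, 1000)])

monitor = IsolationForest(contamination=0.01, random_state=5).fit(normal)

todays_activity = np.array([[6.0, 210.0, 9.0],       # ordinary behaviour
                            [40.0, 5000.0, 300.0]])  # ransomware-style scanning
for row, verdict in zip(todays_activity, monitor.predict(todays_activity)):
    print(row, "-> ALERT" if verdict == -1 else "-> normal")
```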

Intrusion detection/prevention

This control is enabled by AI-supported network monitoring. Once abnormal network behaviour is identified, the organisation has various options for dealing with it. Operators could be sent notifications, or the system could be given the ability to respond autonomously (e.g. by closing ports or connections). Because AI-supported network monitoring systems do not require knowledge of past threats, they can identify and prevent intrusions through new or zero-day exploits, or provide an extra layer of defence if vulnerabilities have not been patched. For example, organisations with such AI systems installed were typically unaffected by the WannaCry malware attack in 2017: although the systems did not recognise the malware, they very quickly identified the way infected devices scanned the network as abnormal behaviour. The systems then took steps to disconnect compromised devices before they could infect the rest of the network.23

User and entity behaviour analytics (UEBA)

Similar to network monitoring and analytics, UEBA programmes track individual users to build up an understanding of normal behaviour, sending notifications or alerts when they identify abnormal behaviour. These systems are a useful tool for protecting against the insider threat. Machine learning can enhance UEBA by creating more accurate pictures of normal behaviour, reducing the number of alerts and false positives sent to human operators to investigate.24

Vulnerability identification

AI systems can identify vulnerabilities and assess the threats most likely to be able to exploit them.25 While attackers can also make use of this capability (see Section 2), this presents a great opportunity for organisations to improve their vulnerability identification and patch management processes.

22 “Cyber Defense AI Solution Overview”, Darktrace, 2019

23 A. Tschonev, “WannaCry: Darktrace’s response to the global ransomware campaign”, Darktrace, 17 May 2017, https://www.darktrace.com/en/blog/wanna-cry-darktraces-response-to-the-global-ransomware-campaign/

24 J. Graves, “Machine Learning and UEBA (User Entity Behavior Analytics)”, Fortinet, 1 March 2019, https://www.fortinet.com/blog/industry-trends/machine-learning-and-ueba--user-entity-behavior-analytics-.html

25 I. Fadelli, “Using machine learning to detect software vulnerabilities”, Tech Xplore, 24 July 2018, https://techxplore.com/news/2018-07-machine-software-vulnerabilities.html


26 L. Musthaler, “Forget signatures for malware detection. SparkCognition says AI is 99% effective”, Network World, 21 April 2017, https://www.networkworld.com/article/3191551/forget-signatures-for-malware-detection-sparkcognition-says-ai-is-99-effective.html

27 J. Ghanchi, “How AI Powered Tools Are Bringing Revolution to Software Development?”, OpenMind, 12 April 2019, https://www.bbvaopenmind.com/en/technology/artificial-intelligence/how-ai-powered-tools-are-bringing-revolution-to-software-development/

28 R. Lemos, “Threat Intelligence Firms Look to AI, but Still Require Humans”, Dark Reading, 30 April 2019, https://www.darkreading.com/risk/threat-intelligence-firms-look-to-ai-but-still-require-humans/d/d-id/1334570

29 B. Barrett, “IBM’s Watson has a new project: fighting cybercrime”, Wired, 10 May 2016, https://www.wired.com/2016/05/ibm-watson-cybercrime/

Malware detection and analysis

Hundreds of thousands of new malware files are released every week, with most simply replicating existing files but making tiny alterations to the code. Such modifications completely change the hash-based signatures applied by anti-virus and anti-malware providers to identify malware. The task of applying a signature to each new file is becoming impossible to perform manually – but machine learning systems can match the scale of the problem, learning about the characteristics of files to identify whether they are malicious or benign and classifying them accordingly.26
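A minimal sketch of characteristics-based classification: a model trained on static file features, rather than hash signatures, can still score a never-seen-before variant. The features and synthetic training data are illustrative assumptions.

```python
# A minimal sketch of feature-based malware classification; features
# (size, entropy, import count, packed flag) and data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
benign = np.column_stack([rng.normal(300, 80, 500),   # size (KB)
                          rng.normal(5.0, 0.6, 500),  # byte entropy
                          rng.normal(120, 30, 500),   # imported functions
                          np.zeros(500)])             # packed? (0 = no)
malicious = np.column_stack([rng.normal(150, 60, 500),
                             rng.normal(7.4, 0.3, 500),
                             rng.normal(25, 10, 500),
                             np.ones(500)])

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)
clf = RandomForestClassifier(random_state=6).fit(X, y)

new_variant = [[160, 7.5, 20, 1]]   # a file whose hash has never been seen
print("malicious probability:", clf.predict_proba(new_variant)[0][1])
```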

Prevent

Examples of AI-enhanced defensive mechanisms that can help prevent cyber attacks are described below.

Secure software development

Using AI in software development can make the process more efficient, automating different tasks (to the extent of autocompleting fully functional lines of code). It can also improve code security, for example by learning from a large volume of coding rules, identifying bugs and helping developers to fix them.27 Vulnerability identification capabilities can also be used in the testing phase, before software is rolled out.

Threat intelligence/threat classification

AI systems can improve threat intelligence platforms, for example by automatically identifying and classifying incoming threats and enabling appropriate responses. Threat intelligence vendors are also starting to use AI systems to map the dark web, helping human analysts to discover new threats in previously hidden locations or reveal a cyber criminal’s true identity.28 See the ISF report Threat Intelligence: React and prepare for more information.

Asset identification and management

The ability of machine learning systems to ingest and learn from large amounts of data, including unstructured data, offers the potential to help organisations identify and classify information assets. By applying similar concepts to those used to ingest information and provide diagnoses in the medical industry, AI systems can learn to define the sensitivity of different types of data, automatically classify them and suggest or enforce necessary access restrictions.
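A minimal sketch of the idea: a text classifier trained on labelled examples assigns a sensitivity label to a new document, which can then drive access restrictions. The labels, sample documents and model choice are illustrative assumptions.

```python
# A minimal sketch of learning-based data classification; labels, sample
# documents and the scikit-learn pipeline are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["quarterly revenue forecast and board minutes",
        "employee salary and bank account details",
        "public press release about product launch",
        "cafeteria menu for next week"]
labels = ["confidential", "confidential", "public", "public"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(docs, labels)

# The predicted label can then suggest or enforce access restrictions.
print(classifier.predict(["draft merger agreement and payment details"]))
```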

Identify latest best practice in information security

The same techniques used for asset identification and management (above) can be used to learn about the latest recommendations and techniques for good information security practice (e.g. by identifying new types of control or understanding the root cause and impact of recent cyber attacks on similar organisations). No human security practitioner can be expected to read and apply all the research material on information security that is produced – but an AI system can.29


Quantitative risk assessments

Organisations that use a quantitative information risk management model can use AI systems to provide input and analysis for risk assessments. By analysing and learning from incident logs, AI systems can accurately and continuously measure the frequency of security incidents and any associated losses. They can build up a picture of the difference between events, incidents and loss events, building profiles of each and helping risk practitioners to understand how and when an incident is likely to turn into a loss event.
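A minimal sketch of that quantitative input: frequency and severity parameters estimated from incident logs feed a Monte Carlo simulation of annual loss. All figures and distribution choices below are illustrative assumptions.

```python
# A minimal sketch of Monte Carlo loss estimation from incident-log
# parameters; frequencies, severities and distributions are illustrative.
import numpy as np

rng = np.random.default_rng(7)
incidents_per_year = 12           # mean frequency estimated from logs
loss_mu, loss_sigma = 10.5, 1.2   # lognormal severity fitted to past losses

counts = rng.poisson(incidents_per_year, size=10_000)   # simulated years
annual_loss = np.array([rng.lognormal(loss_mu, loss_sigma, n).sum()
                        for n in counts])

print(f"median annual loss: ${np.median(annual_loss):,.0f}")
print(f"95th percentile:    ${np.percentile(annual_loss, 95):,.0f}")
```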

Quantitative Techniques in Information Risk Analysis

For more information on quantitative risk assessments, see the ISF report Quantitative Techniques in Information Risk Analysis and the accompanying Quantitative Information Risk Analysis (QIRA) accelerator tool, which enable practitioners to more accurately forecast risk.


Respond

Examples of AI-enhanced defensive mechanisms that can help contain and respond to cyber attacks are described below.

SOAR

Security Orchestration, Automation and Response (SOAR) platforms help define, prioritise and automate incident response functions. They can use AI tools to identify attacks and initiate a response, pushing information and instructions to other security platforms (e.g. endpoint security) used by the organisation.30

Force-drop connections

Having noticed anomalous activity on a specific connection, some defensive AI systems are able to forcibly drop the connection – stopping an attack while allowing the rest of the network to operate as normal. Enabling such active measures requires a strong level of trust that the AI system will only intervene when it is absolutely necessary.
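A minimal sketch of the control flow: the monitoring model’s verdict gates a targeted block on one connection, leaving everything else untouched. block_connection() is a hypothetical stand-in for a real firewall or SDN API, and the threshold rule stands in for the AI verdict.

```python
# A minimal sketch of a force-drop response; block_connection() and the
# threshold rule are hypothetical stand-ins for real components.
def block_connection(src, dst):
    # In practice this would push a rule to a firewall or SDN controller.
    print(f"blocked {src} -> {dst}")

def respond(connections, is_anomalous):
    for conn in connections:
        if is_anomalous(conn):            # verdict from the monitoring model
            block_connection(conn["src"], conn["dst"])
        # all other connections continue to operate as normal

respond(
    [{"src": "10.0.0.4", "dst": "files01", "mb_per_min": 2},
     {"src": "10.0.0.9", "dst": "files01", "mb_per_min": 900}],
    is_anomalous=lambda c: c["mb_per_min"] > 100,
)
```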

The benefit of AI in terms of response to threats is that it can act independently, taking responsive measures without the need for human oversight and at a much greater speed than a human could. Given the presence of malware that can compromise whole systems almost instantaneously, this is a highly valuable capability.

MANAGING DEFENSIVE AI

The number of ways in which defensive mechanisms can be significantly enhanced by AI provides grounds for optimism, but as with any new type of technology it is not a miracle cure. Security practitioners should be aware of the practical challenges involved in deploying defensive AI.

Questions and considerations before deploying defensive AI

AI systems have narrow intelligence and are designed to fulfil one type of task. They require sufficient data and inputs in order to complete that task. No single defensive AI system will be able to enhance all the defensive mechanisms outlined previously – an organisation is likely to adopt multiple systems. Before purchasing and deploying defensive AI, security leaders should consider whether an AI system is required to solve the problem, or whether more conventional options would do a similar or better job.

Questions to ask include:

‒ Is the problem bounded? (i.e. can it be addressed with one dataset or type of input, or does it require a deep understanding of context, which humans are usually better at providing?)
‒ Does the organisation have the data required to run and optimise the AI system?

Security leaders also need to consider issues of governance around defensive AI, such as:

‒ How do defensive AI systems fit into organisational security governance structures?
‒ How can the organisation provide security assurance for defensive AI systems?
‒ How can defensive AI systems be maintained, backed up, tested and patched?
‒ Does the organisation have sufficiently skilled people to provide oversight for defensive AI systems?

Organisations that adopt defensive AI should not use it as an excuse to neglect more traditional defensive mechanisms such as regular patching and system updates. AI systems are often immature and cannot be expected to protect against every type of threat.

30 C. Brooks, “SOAR Cybersecurity: Reviewing Security Orchestration, Automation and Response”, AlienVault, 27 November 2018, https://www.alienvault.com/blogs/security-essentials/security-orchestration-automation-and-response-soar-the-pinnacle-for-cognitive-cybersecurity


Balancing human oversight with AI autonomy

Until a system has demonstrated maturity and trustworthiness, organisations are rightly unwilling to give it a high level of autonomy and responsive capability – and even when the system has proven itself by consistently making good decisions, most organisations will still require some level of human oversight. The risk of AI systems making bad decisions means that organisations are likely to always require the presence of a human who can take control and press the off switch when necessary.

“The problem with most AI today is it performs really well in a very narrow sector. Humans are very good at understanding context; machines, today, are unable to. All of our customers, in a way, say: ‘We want the human to have the capacity to override the machine.’ Even when the decision is made so fast the human is not in the loop, he still has to exercise his responsibility.” – Marko Erman, CTO, Thales31

The desire to keep humans in the loop creates its own challenges. Placing too much emphasis on the need for human oversight can reduce the effectiveness of the AI system, leading to a deluge of notifications and alerts rather than letting the AI take automatic responsive measures. In a survey of 410 security researchers, 74% said that AI-driven security solutions are flawed – citing too much reliance on humans to make security decisions, slower security operations and high false positive rates.32

Making the most of defensive AI requires a deft touch. AI will not replace the need for skilled security practitioners with technical expertise and an intuitive nose for risk. These security practitioners need to balance the need for human oversight with the confidence to allow AI-supported controls to act autonomously and effectively. Such confidence will take time to develop, especially as stories continue to emerge of AI proving unreliable or making poor or unexpected decisions. AI systems will make mistakes – a beneficial aspect of human oversight is that human practitioners can provide feedback when things go wrong and incorporate it into the AI’s decision-making process. Of course, humans make mistakes too – organisations that adopt defensive AI need to devote time, training and support to help security practitioners learn to work with intelligent systems.

Given time to develop and learn together, the combination of human and artificial intelligence should become a valuable component of an organisation’s cyber defences.

31 A. Batey, “New Thales Concept Aims to Build Trust in Artificial Intelligence”, Aviation Week, 16 June 2019, http://m.aviationweek.com/paris-airshow-2019/new-thales-concept-aims-build-trust-artificial-intelligence

32 “Beyond the hype”, Carbon Black, 2017, https://www.carbonblack.com/resource/beyond-hype-artificial-intelligence-machine-learning-non-malware-attacks-research-report/


4 Preparing for an arms race

Computer systems that can independently learn, reason and act herald a new technological era, full of both risk and opportunity. The advances already on display are only the tip of the iceberg – there is a lot more to come from AI. The speed and scale at which AI systems ‘think’ will be increased by growing access to big data, greater computing power and continuous refinement of programming techniques. Such power will have the potential to both make and destroy a business.

As early adopters of defensive AI get to grips with a new way of working, they are seeing the benefits in terms of the ability to more easily counter existing threats. However, an arms race is developing. AI tools and techniques that can be used in defence are also available to malicious actors including criminals, hacktivists and state-sponsored groups. Sooner rather than later these adversaries will find ways to use AI to create completely new threats such as intelligent malware – and at that point, defensive AI will not just be a ‘nice to have’. It will be a necessity. Security practitioners using traditional controls will not be able to cope with the speed, volume and sophistication of attacks.

“The battleground of the future is digital, and AI is the undisputed weapon of choice. There is no silver bullet to the generational challenge of cybersecurity, but one thing is clear: only AI can play AI at its own game.” – William Dixon, World Economic Forum33

To thrive in the new era, organisations need to reduce the risks posed by AI and make the most of the opportunities it offers. That means securing their own intelligent systems and deploying their own intelligent defences. AI is no longer a vision of the distant future: the time to start preparing is now.

WHERE NEXT?

The ISF encourages collaboration on its research and tools. ISF Members are invited to join the Artificial Intelligence community on ISF Live to share experiences.

33 W. Dixon, “3 ways AI will change the nature of cyber attacks”, World Economic Forum, 19 June 2019, https://www.weforum.org/agenda/2019/06/ai-is-powering-a-new-generation-of-cyberattack-its-also-our-best-defence/


ABOUT THE ISF

Founded in 1989, the Information Security Forum (ISF) is an independent, not-for-profit association of leading organisations from around the world. It is dedicated to investigating, clarifying and resolving key issues in cyber, information security and risk management and developing best practice methodologies, processes and solutions that meet the business needs of its Members.

WARNING

This document is confidential and is intended for the attention of and use by either organisations that are Members of the Information Security Forum (ISF) or by persons who have purchased it from the ISF direct. If you are not a Member of the ISF or have received this document in error, please destroy it or contact the ISF on [email protected]. Any storage or use of this document by organisations which are not Members of the ISF or who have not validly acquired the report directly from the ISF is not permitted and strictly prohibited. This document has been produced with care and to the best of our ability. However, both the Information Security Forum and the Information Security Forum Limited accept no responsibility for any problems or incidents arising from its use.

CLASSIFICATION

Restricted to ISF Members, ISF Service Providers and non-Members who have acquired the report from the ISF.

CONTACT

For further information contact:

Steve Durbin, Managing Director US Tel: +1 (347) 767 6772 UK Tel: +44 (0)20 3289 5884 UK Mobile: +44 (0)7785 953 800 [email protected] securityforum.org

REFERENCE: ISF 19 08 01 ©2019 Information Security Forum Limited. All rights reserved.

Demystifying Artificial Intelligence in Information Security
August 2019

AUTHOR
Richard Absalom