
Trustworthy Attestation of Untrusted Sensors

Critical infrastructures are monitored and controlled by a wide variety of sensors and controllers. Unfortunately, most of these devices interacting with the physical world are fragile to security incidents.

One particular technology that can help us improve their trustworthiness is attestation—a protocol where a verifier sends a random challenge to a device and the device replies with a response to prove its integrity (Fig. 1). However, the need to modify device software or the lack of hardware support (e.g., a TPM) has limited the use of attestation on existing devices.

Our contribution: we propose a new attestation protocol to detect replay attacks on sensors that continuously monitor a physical area. The breakthrough aspect of our approach is that we do not send the attestation challenge to the device itself; instead, we modify the physical area the device is sensing and verify that the desired changes are reflected in the sensor readings. We evaluate our approach for IP cameras used in surveillance applications.

Abstract  

Introduction

Proposed Attestation for Sensing Devices:
o The verifier sends a random challenge as input to the prover by modifying the physical area the prover is sensing (Fig. 3)
o The response will show up in the output of the prover even if the prover does not know anything about the challenge sent

o If the correct response does not show up in the prover output, then we know the prover is not reporting correct readings (a minimal sketch of this check follows below)
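As a minimal sketch (not the poster's implementation) of the check described above, the round below assumes three hypothetical helpers for injecting the challenge into the physical area, reading the prover's reported output, and looking for the challenge in that output:

```python
import secrets

def attest_once(inject_challenge, read_prover_output, challenge_visible_in):
    """One hypothetical round of attestation through the physical environment.

    inject_challenge(c)         -- modify the physical area the prover senses
    read_prover_output()        -- fetch the prover's latest reported readings
    challenge_visible_in(c, r)  -- check that the injected change appears in r
    """
    challenge = secrets.token_urlsafe(24)  # fresh, unpredictable challenge
    inject_challenge(challenge)            # e.g., render it on a display
    readings = read_prover_output()        # the prover never sees the protocol
    # If the challenge is absent, the prover is not reporting fresh, correct data.
    return challenge_visible_in(challenge, readings)
```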

Proposed Solution Overview

System Design: Our goal is to detect a compromised camera in the presence of an attacker, i.e., a camera reporting incorrect or old data.

We add two new devices: a display and a verifier.

Implementation Details

Our results show that displaying a continuous stream of visual challenges on a digital signage is an effective way to communicate verifiable evidence between the verifier and the camera.

In our upcoming ACSAC’15 paper [5], we propose attack detection statistics for each visual challenge, and study their performance under normal conditions (without attack) and against a variety of adversaries.

References
[1] D. Challener, K. Yoder, R. Catherman, D. Safford, and L. Van Doorn. A Practical Guide to Trusted Computing. IBM Press, 2008.
[2] G. Friedman. The trustworthy digital camera: restoring credibility to the photographic image. IEEE Trans. Consum. Electron., 39(4):905–910, 1993.
[3] R. Hummen, M. Henze, D. Catrein, and K. Wehrle. A cloud design for user-controlled storage and processing of sensor data. In CloudCom, 2012.
[4] Tesseract Open Source OCR Engine. https://github.com/tesseract-ocr/.
[5] J. Valente and A. Cárdenas. IOT: Using Visual Challenges to Verify the Integrity of Security Cameras. In Proceedings of the Annual Computer Security Applications Conference (ACSAC'15), 2015 (forthcoming).
[6] T. Winkler and B. Rinner. Securing embedded smart cameras with trusted computing. EURASIP Journal on Wireless Comm. & Netw., 2011.
[7] ZBar bar code reader. http://zbar.sourceforge.net/.

Problem Description:
o Sensing devices that interact with the physical world are extremely fragile to security incidents
o Once secret keys of a device are compromised, the attacker can bypass traditional integrity mechanisms

Attestation to Improve Device Trustworthiness:
- Attestation helps detect unauthorized changes to devices
- Drawbacks have limited its application in the field

Figure 1: Generic attestation protocol.

Junia Valente · Email: [email protected]

Figure 8: The only trusted entities in our system are the verifier, which sends visual challenges to the display, and the database that stores the history of sent challenges for offline forensics purposes.

Junia Valente – October 2015

[Figure 1 diagram: the verifier sends a challenge to the device (which has a TPM); the device returns a response.]

Approach and Uniqueness

[Figure 2 image: video frames date-stamped 05/28/14, 05/27/14, and 05/27/14.]

Figure 2: An intuitive illustration of our approach.

[Figure 3 diagram: the verifier injects a challenge into the physical environment; the device senses it and its response is returned to the verifier.]

Our Proposed Solution is motivated by attestation; however, the novel aspect is that we do not send the attestation challenge to the device itself. Instead, we modify the physical area the device is sensing and verify that the desired changes are reflected in the sensor readings.

Figure 3: The verifier modifies the physical environment that the sensor (labeled as the ‘device’) is sensing. The verifier then verifies that the sensor readings (labeled as ‘response’) include the injected ‘challenge’.

Approach Evaluation:
o We evaluate our approach for surveillance cameras
o Security cameras are used in sensitive settings and are becoming attractive to attackers

Adversary Model: The attacker can launch replay attacks by using old video footage. Our goal is to minimize the chances that the attacker can use previously recorded footage and present it as new.

Research Challenges:
- What is the practicability of our proposed attestation?
- How can we inject verifiable evidence in the camera video feed for the purpose of attestation?
- How can we continuously modify the physical world in a random way so the modifications can be communicated to the verifier?

[Figure 4 diagram: the verifier, a display showing the example visual challenge pzVnU6GVJoJ7YVXQtt8QXYNvmSvIUEqs, the camera, and the returned video feed, with numbered arrows (1)–(4).]

Figure 4: (1) A visual challenge is sent to a display, (2) the camera captures an image which includes the display, (3) the video feed is sent to the verifier, (4) if the verifier confirms the challenge was captured in the video just received, it gains confidence that the camera is transmitting fresh footage.

[Figure 8 diagram: the camera senses a physical environment containing a digital signage; a control station hosts the verifier and a DB of sent challenges; the camera's video feed returns to the control station; only the verifier and the DB are marked trusted.]

We Propose Two Visual Challenges with enough randomness to prevent replay attacks on cameras: plain text (e.g., Fig. 4) and QR code (e.g., Fig. 5).

o Plain text can be recognized with optical character recognition (OCR) such as Tesseract [4]
o QR codes can be decoded via popular barcode readers such as ZBar [7] (see the decoding sketch below)
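A minimal decoding sketch follows. The poster only names the Tesseract [4] and ZBar [7] projects; the Python wrappers used here (pytesseract, pyzbar, Pillow) are assumptions for illustration:

```python
# Decoding sketch; pytesseract/pyzbar/Pillow are assumed wrappers, not named on the poster.
from PIL import Image
import pytesseract                  # wrapper around the Tesseract OCR engine [4]
from pyzbar.pyzbar import decode    # wrapper around the ZBar barcode reader [7]

def extract_plaintext_challenge(frame_path: str) -> str:
    """Read the displayed random string from a captured frame via OCR."""
    text = pytesseract.image_to_string(Image.open(frame_path))
    return "".join(text.split())    # drop whitespace/line breaks added by OCR

def extract_qr_challenge(frame_path: str) -> str | None:
    """Read the displayed QR code from a captured frame, if one is found."""
    symbols = decode(Image.open(frame_path))
    return symbols[0].data.decode("utf-8") if symbols else None
```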

[Figure 5 diagram: camera sensing the physical environment, verifier, and two example challenge images labeled ‘visual challenge – Plain Text’ and ‘visual challenge – QR code’.] The verifier: 1) generates a random string; 2) embeds the string into a visual challenge image to be shown in front of the camera; 3) extracts the visual challenge from the current frame; 4) verifies whether the extracted random string is the correct one.

Figure 5: The verifier generates a random string and either encodes it into a QR code or displays it as-is on the monitor. The verifier uses a QR reader or an OCR engine to extract the random string from the image before verification.
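The four steps above can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the `qrcode` library, the `show_on_display` and `capture_frame` helpers, and the challenge length are assumptions:

```python
import secrets

import qrcode                      # assumed QR-generation library (not named on the poster)
from pyzbar.pyzbar import decode   # Python wrapper around the ZBar reader [7]

def run_qr_round(show_on_display, capture_frame) -> bool:
    """One verification round for the QR-code visual challenge (illustrative sketch).

    show_on_display(img) -- hypothetical: render the challenge image on the signage
    capture_frame()      -- hypothetical: return the latest camera frame as a PIL image
    """
    # 1) Generate a random string and 2) embed it into a visual challenge image.
    expected = secrets.token_urlsafe(24)
    show_on_display(qrcode.make(expected))
    # 3) Extract the visual challenge (QR code) from the current frame.
    symbols = decode(capture_frame())
    observed = symbols[0].data.decode("utf-8") if symbols else None
    # 4) Verify the extracted string; a mismatch or missing code means the camera
    #    may not be transmitting fresh footage of the monitored area.
    return observed == expected
```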

[Figure 6 plot: challenge size (pixel², up to about 7×10⁵) vs. key length (50–300 characters) for QR vs. OCR, i.e., characters per pixel. Example layout on a 1920 x 1200 pixel display: a 195 x 195 pixel QR code vs. a 1772 x 113 pixel plain-text string, e.g., tHpy1aCkZPS7um5dXZfgBHPtmTh50f0H8mLSHrm3l0XzaCwA9VyWl04QwFQhTtVUamgTRmy7YpoiikRwJbxnNtfVTKbbhBi3qDGc]

Figure 6: QR code vs. Plain Text.

Verification Process:

Acknowledgements  

Results and Conclusions

[Figure 9 plots, naive attack against OCR: (a) error rate per challenge (error 0–100 at each time step) and (b) cumulative error rate (cumulative error 0–1000 over time steps 0–100), each comparing normal operation with a replay attack.]

[Figure 10 diagram: detection state machine with states ‘normal’, ‘1 E’, ‘2 consecutive E’, ‘3 consecutive E’, ‘incorrect decoding’, and ‘alarm’; transitions labeled C, A, E, and ‘A or E’.]

Figure 9: We found our approach to be secure against replay attacks even when the camera footage might not look suspicious to security guards [5]. Fig. 9 (a): anomaly score per image. Fig. 9 (b): cumulative anomaly score over time (i.e., the CUSUM statistic).
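The cumulative statistic mentioned in the caption can be sketched as a standard nonparametric CUSUM; the drift and threshold values below are illustrative placeholders, not the parameters studied in [5]:

```python
def cusum_alarm(errors, drift=5.0, threshold=200.0):
    """Nonparametric CUSUM over per-challenge error scores (illustrative sketch).

    drift and threshold are placeholder values, not the parameters used in [5].
    Returns the time step at which the cumulative statistic crosses the
    threshold, or None if it never does.
    """
    s = 0.0
    for t, e in enumerate(errors):
        s = max(0.0, s + e - drift)   # accumulate only above-normal error
        if s > threshold:
            return t                  # raise an alarm (likely replay attack)
    return None
```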

Figure 10: Anomaly detection for QR code visual challenges. Legend: C = correct decoding, A = decoding of a value different to the challenge, and E = cannot decode QR code in the current frame.
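One plausible reading of the Figure 10 state machine, expressed as a small detector over the C/A/E events; the exact transition rules on the poster may differ, so treat this as an assumed sketch:

```python
def qr_detector(events, max_consecutive_misses=3):
    """Sketch of a Figure-10-style detector over per-frame QR decoding events.

    events: iterable of 'C' (correct decoding), 'A' (decoded value differs from
    the challenge), or 'E' (no QR code decodable in the frame). Assumed reading:
    an 'A' raises an alarm immediately, while 'E' raises one only after several
    consecutive misses, so transient decoding failures are tolerated.
    """
    consecutive_e = 0
    for t, ev in enumerate(events):
        if ev == 'C':
            consecutive_e = 0                    # back to the normal state
        elif ev == 'A':
            return ('alarm', t)                  # incorrect decoding: wrong value
        elif ev == 'E':
            consecutive_e += 1
            if consecutive_e >= max_consecutive_misses:
                return ('alarm', t)              # 3 consecutive E -> alarm
    return ('normal', None)
```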

This research was conducted under the supervision of Dr. Alvaro Cárdenas. This work was supported in part by NIST under award 70NANB14H236 from the U.S. Department of Commerce.

Figure 7: Lab Setup for Experiments.

Related Works

Trust Model:

Prior work has focused on ensuring that camera systems secure video streams in the cloud and in transmission (by signing [2] or encrypting [3] the stream before sending it to storage). However, these approaches assume keys have not been compromised.

Research on sensor security also includes work on trusted sensors that aim to ensure trustworthy readings. Proposed approaches [6] make it difficult for a sensor to ‘lie’ or fabricate malicious readings, but they assume the presence of trusted computing technology [1].