Android Security Presentation @ JUG (NCR) 02-22-2017


Android Security Internals – "The Grand Tour"

JUG Edinburgh, UK – 23rd February 2017

Ed Austin – Android Consultant
+7 925 871 9411 / +44 7726 05 0000
ed@ryer.ru

Disclaimer!

• This is Draft #1.0!

• Liable to have some(?) technical inaccuracies – however feel free to correct me

• I am still getting to grips with ARM

• I am far from being an expert


A Reminder…

• There is only one end goal!

PRIVILEGE ESCALATION! (full or partial)


$ whoami

• 2014+: Android Developer/Consultant

• Before this:

• Worked for USG – DTRA
  • Global Architect – SE Asia (based out of Moscow)

• Worked for UNISYS EMEA
  • Security Architect if.com
  • Firewall Engineer BoA
  • Firewall Engineer BT IS

• Worked for MCI Vancouver
  • UNIX Systems Engineer

• Graduated Liverpool – Computer Science

• Interests: Core Java, Smalltalk, OOD


Scope of the Presentation


• Start by looking at device history 2008–present
• Examine how Android enforces security immediately from power-on
• Quick look at the Baseband as an Attack Surface
• The role of the TEE in Android
• Enumerate the various OS technologies used to protect the environment
• Brief look from an Application Developer perspective
• List possible Attack Surfaces and Vectors
• Finish by looking at useful tools for forensic analysis of the platform

Assumptions

• Thousands of Hardware and Software Builds in the wild – perhaps tens of thousands; nobody knows for sure, and likely only Google has a good idea

• We assume a typical device in terms of HW (and firmware) for 2017 (realistically any mid-range device from 2012 onwards) – usually an LG Nexus 5 (fast QC/2GB/TEE).

• Assume the ARM SoC manufacturer is Qualcomm (they effectively control the market); we briefly look at MediaTek

• Also assume running “Modern” AOSP Build – say KK/4.4+ as before this many SW (and chronologically HW) features were not available

• AOSP = Android Open Source Project = Android (interchangeable)


I. Evolution of the AOSP Security Model (2008–2017)

Google ‘Sooner’ May 2007


• Pre-release Hardware
• ARM11 – 64MB RAM
• Same base Android as HTC G1
• 'Alpha' Android – Linux 2.6 kernel
• Supported GMail, Maps, GTalk
• May 2007
• Manufactured by HTC

The HTC Dream /T-Mobile G1 circa 2008



• Launched with 1.5 (Cupcake) (based on 2.6.27)
• Officially upgradeable to 1.6 (Donut)
• ARM11 – 528MHz Qualcomm / 256MB RAM / SDHC

• AOSP 1.5 supported:
  • gcc -fstack-protector (stack buffer overrun protection), aka ProPolice
  • safe_iop (integer overflow handling for bionic)
  • Some malloc extensions (security improvements)

• Locked Bootloader!

• But effectively "naked" compared to today's fortified phones.

The HTC Desire (2010) – the first “usable” phone


• 2.1 (Éclair) – 2.2 (Froyo), 2.6.x
• ARM 1GHz single-core CPU / 576MB

• Basic Protections
  • Hardware NX (ARM XN)
  • Rudimentary ASLR (stack was randomized)
  • Kernel enhancements (pointer dereferencing)

• Play Market
  • No real PHA screening

2011-2017 Major AOSP Security enhancements



• Major changes really started at ICS 4.0 (kernel 3.0.1) with a big security push
• Each release since 4.0 significantly enhances security
• Features now include:
  • ASLR (4.0)
  • PIE (4.1)
  • Library Load Order Randomization (7.0) new!
  • SELinux (SEAndroid) + thorough policies (4.3–7.0) – secures the Sandbox, IPC etc.
  • dm-verity (4.4)
  • Runtime Permission model (6.0)
  • gcc FORTIFY_SOURCE -O2 (stack check macros) (4.x)
  • Significantly hardened Kernel via NX and R/O protections (7.0) new!
  • File-level transparent crypto (7.0) using TrustZone support new!
  • Mandatory Verified Boot (7.0) new!

• Play – Google Bouncer for PHA screening

• In hardware: octa-core CPUs, 4G basebands, NX support, TEE SoC, loads of sensors (NFC etc.), enhanced GPUs more powerful than the original devices

2017 Modified AOSP Builds dedicated to Security



• Example: BlackPhone/BlackPhone 2
  • SilentOS 3.0 (based on AOSP source 6.0.1)
  • Modified Source/Firmware? (MITM call downgrade etc.)
  • Virtual OS instances
  • End-to-end crypto for Voice/Data (to another BlackPhone)
  • Almost immediate bug-fix pushes (faster than Nexus)
  • Closed… expensive… £500+ per unit for mid-range HW

• Other: Copperhead (Open Source Secure AOSP)
  • Open-sourced improvements to technologies such as ASLR
  • Runs on recent Nexus devices only (5/6)
  • F-Droid (no Play…)

Future? Development of Custom devices… for specific markets

Problem with these? How do they (and we) trust the Chipset?

II. ANDROID BOOT SEQUENCE (From power-on to Kernel Boot)

The Boot Process of a Simple Android Device



IBL/PBL (ROM) – Initial Boot Loader – immutable

SBL – Secondary Boot Loader – initializes subsystems: TZ, Baseband (GSM/3G/LTE etc.) – architectures may have multiple SBLs with differing responsibilities

aboot – the Android Bootloader – boots the Kernel, usually based on QC LK

kernel – the Android Kernel

Problem with this? How can we trust all this critical code?

Trust! Boot Verification #1 – “Chain-of-trust”


Signature based verification of Images

Ensures the integrity of the booted OS as image tampering will result in failed verification (and boot failure)

Ensures we boot into a safe environment

Mandatory as of Nougat (7.0). Some services such as Google Pay will not function without it (checked via SafetyNet APIs at a higher level) – signed by the OEM…

First Stage: The IBL/PBL/BootROM (KK 4.4+)


Step 1: Validate that our root certificate is the real OEM certificate
The hard-coded OEM public X.509 (2048-bit) certificate is hashed using SHA256 and compared with the OEM_PK_HASH value burned into the device

Step 2: Get the stored hash of the SBL from the SBL
The SBL image signature (signed with the OEM private key and concatenated to the SBL) is RSA-decrypted using the public key above, producing a (plaintext) hash

Step 3: Hash the SBL and compare against our retrieved hash
The SBL image is hashed (SHA256) and compared against the extracted hash above (if equal, all is OK to move on) = the start of the chain-of-trust


Signing an Android Boot Image – Crypto 101


[Diagram: the boot loader image is hashed with SHA256, the hash is signed with the OEM X.509 key, and the resulting signature is concatenated to the image to form the signed boot loader – which is later split back into image + signature for verification. A code sketch of this flow follows.]
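A minimal Java sketch of the sign-then-verify flow described above (illustrative only – the real PBL/SBL code is native and OEM-specific, and key handling, padding and image layout are simplified assumptions):

    import java.security.*;

    public class BootImageCrypto {
        // At build time the OEM signs SHA-256(image) with its RSA private key;
        // the signature is concatenated to the image ("split into 2" above).
        static byte[] sign(byte[] image, PrivateKey oemPrivateKey) throws GeneralSecurityException {
            Signature signer = Signature.getInstance("SHA256withRSA"); // hash + RSA sign in one step
            signer.initSign(oemPrivateKey);
            signer.update(image);
            return signer.sign();
        }

        // At boot time the loader re-hashes the image and checks the signature with the
        // OEM public key (whose own SHA-256 was first compared against the fused OEM_PK_HASH).
        static boolean verify(byte[] image, byte[] signature, PublicKey oemPublicKey)
                throws GeneralSecurityException {
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(oemPublicKey);
            verifier.update(image);
            return verifier.verify(signature); // false => broken chain-of-trust, refuse to boot
        }
    }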

Stage 2: The Secondary Boot Loaders


Boots the various (HW) subsystems of the device: GPU, TZ, Radio (baseband), Sensors

Often divided into multiple loaders with differing responsibilities (SBL1, SBL2, SBL3)

Vendor specific – but much of it written by Qualcomm and OEM modified

The last SBL usually verifies the aboot image (using chain-of-trust)

Stage 3: Kernel Boot (aboot – Android Bootloader)


Boots the Kernel and mounts /system, /vendor and all the other booty stuff - i.e. the “Android Bootloader”

Also controls whether a device is "locked" (possibly by checking QFuses / using the TZ subsystem); if locked, it verifies the Kernel image (using its embedded signature)

Functionality to allow system maintenance (flashing/recovery etc.)

Vendor specific but usually based on LK (simple to check)

After the verified boot sequence…


• dm-verity – ensures the integrity of the file-system through a device-mapper target

• Block-level verification of ext4 filesystems (/system, /vendor)

• Uses a hash tree with SHA256-verified leaves (the root hash is signed with a certificate stored in the boot image ramdisk)

• Some software ECC support (side effect: it can undo small changes!)

• /system is RO (remounting R/W modifies the superblock, which is detected)

• L+ supports the full implementation
  • Add the "verify" flag to the relevant partition line in the fstab (see the example below)
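An illustrative fstab entry (partition path and mount options vary per OEM – the "verify" fs_mgr flag at the end is the relevant bit):

    /dev/block/platform/msm_sdcc.1/by-name/system   /system   ext4   ro,barrier=1   wait,verify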

How does all this map to a real device?

The Samsung SIII boot sequence (MSM8960 Snapdragon – Krait ARMv7)


Newer ARMs follow a similar sequence

Partition physical layout on flash (Nexus 5 – CM N 7.1)

• eMMC layout of partitions
• Mapped to flash blocks

• boot
• sbl1 (only one SBL!)
• aboot
• system
• DDR
• modem (radio)
• ssd
• rpm
• tz
• sdi (rex/amss?)
• laf (lots of qcom glue)

• xxx"b" = backup


So what happens if we modify these Boot blocks?

Breaking the Chain-of-Trust!

Signing issue in SBL notification (Nexus 5)


RED: SBL has been modified (signing issue)!

Unlocked Bootloader notification (Nexus 5)


ORANGE: Bootloader is unlocked!

Signed “But not by OEM” notification (Nexus 5)


YELLOW: Booting something not signed by the OEM!

Moral of the Boot Process Story…

Even with UID 0 ("root" access) the boot path cannot be modified without detection, as we implement a chain-of-trust as well as dm-verity

In addition, if aboot is unlocked, manufacturers force a /data format – effectively wiping all installed programs and user data


Boot process from power-on/reset right up to Kernel is protected cryptographically – mandatory from API 24 (Nougat/7.0)

Looking at the Radio Layer (Telephony) of a device

As of 2017 there has not been a single “Android side” exploit of a baseband subsystem published

What is the baseband/Radio Layer?

• HW Telephony module in the device
• Handles the GSM (2G), UMTS (3G), 4G etc. mobile phone stack
• Separate ARM CPU (actually a dual-CPU SoC) ARM9+ARM11?
• Follows its own IBL→SBL chain-of-trust just as in the normal flow
  • Any other forms of verification?
• Runs its own R/T OS (AMSS with a REX kernel)
  • Or a DSP with the custom 'Hexagon' architecture – depends on the smartphone
• Completely closed architecture
  • No source, no API, no documents (even high level)

• Has not been compromised in any known device exploit

Why? Tools needed… complexity of code… awareness


Radio Layer Attack Surface

• Qualcomm use their own proprietary protocol to connect to the Radio – this can be seen as a possible baseband attack surface

• Protections: signed IBL/SBL + APPSBL + AMSS, plus dm-verity over /vendor (the glue)!

• QDSP (Baseband) Firmware loaded by HLOS

• Baseband runs in ARM supervisor (Kernel) mode – has access to AOSP Userspace?

• A very attractive and tempting attack surface!


Radio Layer Perspectives – Attack Surfaces
Look at both ends of the data flow – examination of a single function: "Powering up the Radio"

Reverse Engineer Radio Layer of a Nexus 5 #1 (look for radio owned processes)


ps | grep radio

Looking at the running process list for processes owned by radio (GID), the qmuxd daemon looks interesting here… being a Qualcomm daemon it is probably very low level (unlike rild). (We need to be root to do this.)

Reverse Engineer Radio Layer of a Nexus 5 #2 (view open files from daemons)


lsof | grep qmuxd | grep sock

Listing the sockets opened by the qmuxd daemon by piping lsof through grep, we can see /dev/socket/qmux_radio/qmux_connect_socket, which we assume handles the radio IPC… and is listening.

Reverse Engineer Radio Layer of a Nexus 5 #3 (identify supporting libraries)


From the previous lsof we see the socket /dev/socket/qmux_radio/qmux_connect_socket (it looks interesting), so we poke around the /vendor partition to determine which (Qualcomm) libraries use it – because it may be an entry point into the baseband itself, and therefore an attack vector.

ls /vendor/lib

We can see libqmi_client_qmux.so (an ELF library) and this is likely our candidate, so we adb pull* the library over to our local Linux machine for a closer look (see the command below).

*adb = Android Debug Bridge (USB connection to the device)
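For example (assuming the library lives under /vendor/lib as seen above):

    adb pull /vendor/lib/libqmi_client_qmux.so .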

Reverse Engineer Radio Layer of a Nexus 5 #4 (ARMC set_pwr_state)


check libqmi_client_qmux.so

binwalk -E libqmi_client_qmux.so

A quick binwalk (firmware analysis) check to see what type of file it is and determine whether there are any encrypted components (entropy graph): no flat line pegged at high entropy = indicative of a lack of crypto in the image.

Reverse Engineer Radio Layer of a Nexus 5 #5 (ARMC set_pwr_state)


Disassemble libqmi_client_qmux.so – we can see (gcc) stack canaries!

Reverse Engineer Radio Layer of a Nexus 5 #6 (set_pwr_state call flow)


qmi_qmux_if_set_pwr_state

Reverse Engineer Radio Layer of a Nexus 5 #7 (Higher Level Perspective)


User applications need zero JNI – calls are made from the Java framework to the (closed) rild:

Intent intent = new Intent(Intent.SET_RADIO_POWER, ON);
startActivity(intent);

<uses-permission android:name="android.permission.RADIO_POWER" />

frameworks/base/telephony/java/android/telephony/TelephonyManager.java

User level Java applications set up BroadcastRx and intents to handle receiving callbacks and sending data (assuming the app has the permissions set in the Manifest) – these constructs use binder to connect to the telephony API framework code transparently (we don’t call API directly) – note above example is illustrative – not valid!

Java framework links using AIDL generated stub/proxy to the RIL daemon (rild) C++ Binder service through the /dev/binder device
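As a concrete (and, unlike the illustrative SET_RADIO_POWER snippet above, valid) sketch of the framework-level end of this path – an app listening for call state via TelephonyManager, assuming READ_PHONE_STATE is declared in its Manifest; the binder/AIDL plumbing down to rild happens transparently:

    import android.content.Context;
    import android.telephony.PhoneStateListener;
    import android.telephony.TelephonyManager;
    import android.util.Log;

    public class CallStateMonitor {
        public static void start(Context context) {
            TelephonyManager tm =
                    (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
            tm.listen(new PhoneStateListener() {
                @Override
                public void onCallStateChanged(int state, String incomingNumber) {
                    // Delivered via the framework/binder – no JNI in the app itself
                    if (state == TelephonyManager.CALL_STATE_RINGING) {
                        Log.d("CallStateMonitor", "Ringing: " + incomingNumber);
                    }
                }
            }, PhoneStateListener.LISTEN_CALL_STATE);
        }
    }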

Reverse Engineer Radio Layer of a Nexus 5 #8 (Architecture Perspective)


An Android app has specific Manifest permissions (such as READ_PHONE_STATE), registers Broadcast Receivers to listen for CALL_STATE_RINGING… and uses intents to send stuff like an SMS…

[Architecture diagram – the "AT Command Set" path from app to baseband:
Android App (Java/HLL + Manifest permissions) → Broadcast Rx/Intents etc. (binder abstraction) → Framework API → binder (AIDL) → rild / RIL HAL (C++) → libqmi.so → libqmi_client_qmux.so (qmi_qmux_if_set_pwr_state(…)) → /dev/socket/qmux_radio/qmux_connect_socket → qmuxd → Hexagon QDSP6 Baseband → BTS (RF, modem serial)]

Zero JNI… above deduced from ARM call graph

Remember: the further down the chain we exploit, the better the likely reward – an exploit from the SMS application might yield little; an exploit from writing over the socket to qmuxd infinitely more (along with the complexity)

Reverse Engineer Radio Layer of a Nexus 5 #8 (Summary)


Summary:

We looked at data flow from a user app to the baseband

Need to examine all attack surfaces (Framework to Baseband) to determine likely vectors – this involves Java, C and ARM.

Attacker may possibly have access to the firmware source (a HUGE advantage) – HLL firmware between chipsets mostly identical.

Stack canaries show the baseband has some (default gcc) protection

Utilizing these surfaces however requires initial privileges (such as “radio” GID) – a successful attack would likely chain escalations

We considered an attack locally to the Radio Layer of a device!

That is.. exploiting possible chipset weaknesses from the Android surface

CVE-2015-8546 – Little publicized but likely one of the most damaging attacks so far


Persistent OTA baseband ‘rootkits’ – completely transparent!

Another attack would be OpenBTS on a (for example) Ettus SDR as a portable BTS with modified code to attack the baseband from the ‘opposite’ direction. This could be transported in a small rucksack. Perhaps with a 3G/4G jammer.

And also we might not even need to Reverse Engineer the Radio Layer !

Attackers may already have access to the Qualcomm Confidential code!

The underlying proprietary Firmware code leaked (2015) Qualcomm Chipsets used in Android Devices!


Extremely useful for low-level analysis of Attack Vectors within the Android device itself.

Over 50GB of source/docs were leaked from China, incl. entire device git repos.

However a lot of the source “glues” into native ARM code to perform low level operations

ARM expertise still required although reverse engineering tools may help.

III. THE HEART OF ANDROID SECURITY

The TEE Trusted Execution Environment

"He who has root owns the phone!" – anon

completely untrue!

What is the Trusted Execution Environment (TEE)?

• The TEE is a separate secure execution environment

• Shares the CPU with the main device but can time-slice (has its own MMU)

• Available on 99% of 2017 Android devices as part of the ARM SoC (and now mandatory 7+)

• Use is to run code securely outside of the normal (AOSP) environment

• Compromise of the main OS (even "rooted") still leaves the TEE secure
  • The reverse is not necessarily true

• Not widely known about yet pretty critical


How does the TEE fit into our (SIII) boot sequence?


Factoid: The Samsung SIII was the first Android device shipped with a TEE back in 2012

The TEE – A Tale of Two Worlds

• Android devices have two Worlds

• Known as Normal World (NWd) and Secure World (SWd)

• They are completely isolated from one another

• Monitor acts as a “gatekeeper”

• Usually we use an API call to communicate from NWd→SWd


AOSP (NWd) TEE/TrustZone/SWd

What can the TEE/TrustZone be used for?

• Assist Secure Boot/Verified Boot (hold keys/flags in QFuses)

• SoC Crypto Accelerator – hidden crypto work

• R/T Kernel Integrity checks (Samsung KNOX TIMA)

• Carrier locks on the baseband

• Programs – known in TrustZone as “Trustlets”

• Private key/data storage

• DRM

• Basically anything you want hidden from the NWd


AOSP (NWd) TEE/TrustZone/SWd

How do we determine which “World” we are in?

The “Normal” AOSP World or the Secure TEE TrustZone environment

ARM Processor Modes

• A "Normal" CPU has User and Supervisor modes (Rings 0/3)
• ARM actually has 4 modes (Exception Levels – i.e. changed during entry to/return from an exception):
  • EL0 Normal User (Android User / Secure User)
  • EL1 Kernel (Android Kernel / Secure Kernel)
  • EL2 Hypervisor (not used)
  • EL3 TZ (default power-up/reset state)

These states indicate the ARM CPU privilege mode.


"Classic" CPU Mode register – User/Supervisor

• Current Program Status Register (CPSR) = the "Classic Mode" register
• 32-bit register – amongst other things it indicates the current mode
• Determines User/Supervisor mode
  • user: b10000, svc: b10011 (need to be in SVC mode for TEE operations)


How does ARM differentiate between NWd and SWd?

• Secure Configuration Register (SCR)
• 32-bit register using the Non-Secure (NS) bit
• The NS bit determines Secure/Normal World
• To use it we must be in (NWd) Kernel mode!
• Handled by libraries when we are calling from higher-level abstractions – really only interesting at ARM level


TEE Permissions Context – Processor Mode


[Diagram: ARM conceptual view – NWd (Normal World): EL0, EL1 | EL3 (TZ) | SWd (Secure World): Secure EL1, Secure EL0]

How do Programs interact between the Worlds?

Connecting “Trustlets” to the Normal World (NWd) environment

This thing called Trustlets

• Company called Trustonic (jointly owned by ARM) www.trustonic.com

• Dominant by far in the Android/Qualcomm TEE universe (Google have “trusty OS” nowhere to be seen)

• Software environment is known as TrustZone

• TrustZone runs on the TEE (as we saw loaded by the SBL)

• TrustZone can run programs known as “Trustlets” (.tlbin) – analogous to AOSP executables


Where are Trustlets held?

• On the device, in the Android filesystem in /system/vendor/firmware


Widevine (DRM) on 2 Billion devices?

Reconstituted as .tlb?

How do we build Trustlets?

• We can’t use Java – there are no bindings (unlikely in near future)

• The supported languages are C (and C++)

• This can be via an autonomous executable on the machine or via the NDK (we simply use JNI to glue the C)

• We call the TrustZone via qseecom (Qualcomm Secure Execution Environment) SCM device driver

• Trustonic have an SDK (time limited and bound by your PGP key) available – NDA required as IP issues – and export licence (to RU)!!!

• >=3.10 kernels have built-in TZ support
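Since there are no Java bindings, an app that wants to reach a trustlet typically hides the C qseecom client behind JNI. A minimal, hypothetical sketch of that glue (the native library name and method below are illustrative, not a real API):

    public class TrustletClient {
        static {
            System.loadLibrary("trustlet_client"); // hypothetical NDK library wrapping the libQSEECom calls
        }

        // Implemented in C: opens the qseecom device, starts the named trustlet and
        // returns 0 on success or a negative errno-style value on failure.
        private static native int nativeStartTrustlet(String path, String name, int bufferSize);

        public static boolean startTrustlet(String path, String name) {
            return nativeStartTrustlet(path, name, 4096) == 0;
        }
    }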

Simple Call to the TEE from C – Start a Trustlet


Client handle (unsigned char buffer*), pointer to path and filename (of the Trustlet in Android file space), buffer size to allocate for Trustlet I/O

Returns 0 on success, negative on error (errno).

* Handle to the qseecom device for kernel clients

QSEECom_start_app call from the libQSEECom.so library

The Path between AOSP NWd → SWd "Trustlet" #1

• Very few Android processes have access to the qseecom driver (used to access the Secure World) – and they are secured by a GID:
  • mediaserver (indexes device media)
  • keystore (handles crypto keys) → keymaster trustlet
  • drmserver (manages DRM) → widevine trustlet
  • surfaceflinger (handles buffers for screen writes)

• The one successful TZ exploit used mediaserver – exploiting the path to the widevine Trustlet, crafting an attack based on poor API design, and proxying into the Normal World (the trustlet can see all of NWd).

• However… it needed privileges (group level) initially to get driver access!


Secure World Overview

Keymaster

• Allows storage of crypto material securely through a hardware-backed device (the TEE)

• Keystore provides digital signing and verification operations, plus generation and import of asymmetric signing key pairs.

• No sensitive user space operations – all must be done on the TEE (km trustlet)

• Access is via an OEM-provided, dynamically-loadable library used by the Keystore service (using framework services and the Keystore daemon), accessible from Java (see the sketch below)
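From the Java side this is just the AndroidKeyStore provider – a minimal sketch (API 23+) generating a hardware-backed signing key pair; the alias is arbitrary:

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import android.security.keystore.KeyGenParameterSpec;
    import android.security.keystore.KeyProperties;

    public class KeymasterDemo {
        // The private key lives in the TEE-backed keystore and is never exported to the Normal World.
        public static KeyPair generateSigningKey() throws Exception {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance(
                    KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore");
            kpg.initialize(new KeyGenParameterSpec.Builder(
                    "demo_signing_key",
                    KeyProperties.PURPOSE_SIGN | KeyProperties.PURPOSE_VERIFY)
                    .setDigests(KeyProperties.DIGEST_SHA256)
                    .build());
            return kpg.generateKeyPair();
        }
    }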


TEE Known exploits are currently few


bits-please:
• CVE-2015-6639 – widevine
• CVE-2015-6647 – weak qseecom call

Google:
• CVE-2016-0825 – kernel access to sensitive TZ info

Nexus HW-specific issues:
• CVE-2016-2431 – N5/6/7 (2013) privilege subversion from TZ
• CVE-2016-2432 – N6 privilege subversion via TZ

CVE = Common Vulnerabilities and Exposures

CVE-2015-6639 / 6647 (Critical)


• Control was taken of the mediaserver (has a GID … allowing qseecom call access) – we need those permissions!

• Trustlet exploited in SecureWorld via a malformed qseecom API call (from mediaserver)

• Trustlets have full Normal World Access!

• Trustlet acted as a “proxy” to gain full device control

mediaserver path to a “Trustlet” from Android

TEE Summary

• The TEE is a separate execution environment sharing the same CPU but dividing the device into two partitions: Normal World (NWd) – AOSP – and Secure World (SWd) – TEE

• The most widely used TEE implementation is known as “TrustZone” (TZ) running programs (written in C) known as “Trustlets”

• Trustlets are stored on the device with a .tlbin extension and are loaded at boot into the TZ environment

• The TZ can only be accessed (at higher level) via C calls to “qseecom”

• QFuses in the TZ provide all sorts of useful functionality (many hidden)

What are QFuses and how are they used?

Qualcomm implementation of eFuses

Qualcomm Qfuses (QFPROM)

• Fundamental to the Qualcomm Security Architecture (and most others)

• Once ‘blown’ impossible to unblow – hardware level one-way

• Allow Crypto stuff and Device settings to be established – can be seen as a PROM of sorts

• Usually stored (2017) in the TEE of most devices – Most SoC internally utilize a 16kb bank of one time programmable fuses

• (Undocumented) API Calls to operate on Qfuses in the TEE

• OEM’s can blow fuses using a Python script during manufacture


Examples of Android Qfuses

• FORCE_TRUSTED_BOOT QFUSE forces crypto verification of the boot-chain using the chain-of-trust by forcing SBL authentication from the IBL Cert (as we know in Masked ROM) hashed. (Also checked by Radio device boot up). • Force verified boot?

• OEM_PK_HASH hash is ‘fused’ into the QFUSES and the SHA256 hash of the root (public) certificate is compared with this to verify the IBL certificate and initiate the chain-of-trust (recall this is held in IBL ROM!). • Is Qfuse hash same as IBL hash?

• Google Pay checks whether the Verified Boot Path QFuse has been blown, via the SafetyNet API (likely using TZ_HLOS_IMG_TAMPER_FUSE* in the TEE)??
  • "CTS profile match: false."

*Qualcomm TZ Internal – Reverse Engineered from ARM code


Samsung use of Qfuses

• KNOX – Samsung TEE implementation* + Security Suite from KK 4.4 (NSA certified for USG use) – implemented on all Samsung current devices (everything from SIII onwards)

• Samsung KNOX TEE uses “KNOX warranty bit” – not just for warranty but to blow fuses so that Containers (“Trustlets”) become inaccessible irreversibly if a compromise detected - such as an untrusted boot path

• So to compromise the TEE requires a crypto certified boot before any type of attack – making things more difficult

• Can be fused by user (may have unintended consequences) – possible security fuses – such as Carrier Lock after incorrect Porting code

*Uses Trustonic SW


IV. ANDROID SECURITY – THE SUPPORTING CAST

The Usual Suspects

Android Kernel

• Currently based upon mainline 3.x
  • Some changes were upstreamed
  • Doesn't use a mainstream kernel

• Android adds a "paranoid network" option to the Linux kernel, network restrictions dependent on GID of caller.

• No gnu libc (uses own “bionic” C library)

• Uses Binder for IPC

• Some peculiarities like ashmem, pmem (sharing mem between processes)

• No specific security enhancements (other than binder and GID checking?)


Classic *nix UID/GID protections – API 1 (1.0)

• Same as UNIX from the 70’s

• For apps UID/GID assigned during install via the PM

• Standard UID/GID permissions (DAC) and SELinux (MAC)
  • Move away from the DAC to the MAC model with substantive CTS-enforced policies

• UID 0 is privileged ("root")
  • Losing influence… SELinux now handles most stuff

• Components can SetUID/SetGID during execution (or used to be able to…)

• Devices accessed as files (standard *nix)

• Can share data via a common UID (sign with the same cert – see the manifest snippet below)
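For example (illustrative package name and ID – both APKs must declare the same sharedUserId and be signed with the same certificate):

    <manifest xmlns:android="http://schemas.android.com/apk/res/android"
        package="com.example.appone"
        android:sharedUserId="com.example.shareduid">
        ...
    </manifest>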


Android reserves UID/GID for internal use #1 (UID)


Can be seen in /system/etc/permissions/platform.xml

Android reserves UID/GID for internal use #2 (GID)


Can be seen in /system/etc/permissions/platform.xml

Android Application Sandbox

• Was mostly a completely fancy term for simple UID protection enforcement – typical Google hype

• Installed APK assigned a UID

• Runs with this UID with obviously no means of modification (certainly not from build environment) and the sandbox extends to any NDK code

• Now the DAC is strengthened with an SEAndroid policy

• AV Vendors provide snake oil… since they can’t provide true AV protection (meaningless) they hype up “malware”


MAC on top of DAC (SELinux) API xx (x.x)

• The UID/GID model too broad and possibly not enforced (DAC)

• SELinux complements/replaces the old UID/GID DAC model

• Based on whitelist concept - everything denied by default and requires explicit permission via the policy (using policy files)

• More granular than the old model – we can set permissions on calls (vector)???

• Also provides additional Kernel and Sandbox security

• CTS requires strict Policy files…
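Policy rules are essentially allow statements over (source domain, target type, object class, permissions) – an illustrative rule in the sepolicy syntax (not copied from the real AOSP policy):

    # allow the mediaserver domain to talk to the qseecom character device
    allow mediaserver tee_device:chr_file { open read write ioctl };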


IPC - tightened access to the servicemanager

• All IPC in Android is abstracted on top of Binder
  • That includes all the Android dev stuff like intents (services/broadcast receivers)

• /dev/binder

• Top-level name service – the Binder Context Manager
  • "servicemanager" on Android

• Now uses SELinux to check requests for add (L) /find/list available (M) services

• SELinux prevents arbitrarily looking up services and using.


ASLR Address Space Layout Randomization API xx (4.0)

• Introduced at API 14 (ICS 4.0) from the Linux tree – before this only stack randomization
• Implements randomization of the address space of executables/data (Stack/Heap/Libs/Exec/ld.so) – protects against many exploits such as a ROP chain
• Executables have to be PIE (Position Independent), like a dynamic library (so modules are required to be GCC-compiled with -pie -fPIE)
• Post 6.0 all of AOSP has to be PIE compliant?
• Remember Java is sandboxed in a VM (ART) and the language has no pointers (unlike C), so this type of attack must come from a JNI call with some C
• 90%+ of devices now implement ASLR


Example of ASLR Randomization in Android

• ASLR example – show mapping of the stack for vold (volume mounter daemon)


Note that the stack has been relocated on the #2 invocation of vold! (Also note this is a very bad example…)

ASLR through the ages…

• In (<=) GB only the stack was randomized

• (>=) ICS most things randomized in memory (heap/linker/daemons/bionic)


ASLR – Not (currently) truly random in the Sandbox!


Zygote is responsible for forking new user processes – these are identical copies of the original zygote process + the app – shared libraries however do not relocate per user process – each user process receives the same Zygote template with references to the shared entities (libc.so, libart.so etc).

Reference: Blender – Self-randomizing Address Space Layout for Android Apps (Mingshen et al, Chinese Uni of HK, 2016)

Summary of attack: we can find the relocated address of boot.oat (ELF-ish), which holds plenty of great functionality (framework API); we pre-find the offset of some routine like sendTextMessage using oatdump, then ret2art (to this offset) via a ROP chain of "gadgets" in the shared memory libraries (found with a ROP finder tool) plus some call setup code.

PIE – Position Independent Executables API xx (4.1)

• Use gcc with –fPIE –pie options to produce executables that can be relocated at runtime.

• Works in conjunction with ASLR

• Makes it harder to calculate the runtime addresses of code, stacks, heaps and other things – mitigates attacks such as stack overflows

• Uses ASLR (however ASLR predates PIE)

• Set in Android.mk (gcc uses these flags):
  LOCAL_CFLAGS += -fPIE
  LOCAL_LDFLAGS = -fPIE -pie


Android Keystore

• Store cryptographic material in a secure container, i.e. private keys.

• Can be used for crypto operations without being exportable.

• As we have previously seen the TrustZone is an ideal place for such storage (and mandatory after 7.0 for Direct Boot)
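A small Java sketch (API 23+) of asking Keymaster whether a given AndroidKeyStore private key actually lives inside secure hardware (the TEE):

    import java.security.KeyFactory;
    import java.security.PrivateKey;
    import android.security.keystore.KeyInfo;

    public class KeyBackingCheck {
        // True if the key material is held in secure hardware (TEE/TrustZone)
        // rather than a software-only keystore.
        public static boolean isHardwareBacked(PrivateKey key) throws Exception {
            KeyFactory factory = KeyFactory.getInstance(key.getAlgorithm(), "AndroidKeyStore");
            KeyInfo info = factory.getKeySpec(key, KeyInfo.class);
            return info.isInsideSecureHardware();
        }
    }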


File Based Encryption

• Move from Disk based to File based

• Credential Encrypted (CE) storage, which is the default storage location and only available after the user has unlocked the device.

• Device Encrypted (DE) storage, which is a storage location available both during Direct Boot mode and after the user has unlocked the device.

• Keymaster/Keystore and Gatekeeper must be implemented in a Trusted Execution Environment (TEE) for DE storage.

• Applications can use it via the Direct Boot API (see the sketch below).
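A small Java sketch of the Direct Boot API side – creating a device-protected (DE) storage context whose files are available before the user unlocks (API 24+); anything sensitive should stay in CE storage:

    import android.content.Context;
    import android.content.SharedPreferences;

    public class DirectBootPrefs {
        public static SharedPreferences getBootPrefs(Context context) {
            // DE storage: usable during Direct Boot, encrypted with a device-bound key
            Context deContext = context.createDeviceProtectedStorageContext();
            return deContext.getSharedPreferences("boot_prefs", Context.MODE_PRIVATE);
        }
    }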


ARM HW based XN (W^X)

• NX (ARM XN) bit on memory = memory location cannot be executed

• Ideal to mitigate attacks such as SO

• Complementary to ASLR and PIE

• Causes a HW Exception if you attempt to execute location

• Protects against Stack overflows as you can’t execute any shellcode!

• Used in the AOSP build – where?


Android OTA Security Updates

• Pushed monthly by Google

• Can be checked in: Settings → About Phone → Android Security Patch Level

• Vanilla Nexus devices are the first to receive (immediately)

• Manufacturers see little incentive in pushing updates to something they have already sold

• Motorola's last 6.0.1 security update was August 2016! (-6M)
  • Many simply don't bother at all
  • Same goes for FOTA updates – why bother?

• Most phones simply aren’t updated!


Google Bouncer / PHA Analysis


• Bouncer is used to determine PHA when an app is uploaded to Play
• Bouncer is Google's PHA analysis environment

• Uses static/dynamic analysis and other trickery to determine the functionality of a program and whether it is 'bad'

• <1% of uploaded apps detected as PHA

• Outside of Play it is on by default (do you wish Google to check…) when an application is sideloaded or comes from another market [gives Google the dubious benefit of also seeing non-Play apps]

Google CTS – compatibility test suite

• Verifies a correctly configured and secure build

• $ANDROID_SOURCE/cts/tests/
• Includes looking at the DAC and MAC implementation/rules

• Checks ASLR (and other security mechanisms)
  • /cts/tests/aslr/src/AslrMallocTest.cpp
  • Tests randomization of malloc calls

• Android CTS Verifier (runs on device) checks UI components (audio/touchscreen/camera etc).

• Can be used by OEM’s to verify they conform to Google requirements

• Some manufacturers have a bad reputation for failing


Before we leave low level….“MediaTek” issues

The MediaTek Chipset "enhancements"

• MediaTek (MTK) are a Taiwanese chipset manufacturer
• Used firmware from "Shanghai Adups Technology Co. Ltd", a "firmware provisioning" company
• They supply Lenovo, Blu, Archos, Barnes & Noble (43 vendors)
  • Including the BLU R1 HD, a big seller on Amazon US (until it was pulled by Amazon)!
• Shipped a chipset that implemented a backdoor via a FOTA update
  • Sent geo data / IP / call data / text messages / contact lists back to a Chinese server every 24-72 hours and allowed remote device execution (without the user's knowledge)
• Google were informed and blocked the backdoor in CTS (for future devices)
  • Adups simply changed the exploit's name!


Low level Chipset code is beyond the reach of AOSP and beyond the reach of any CTS testing! OEM needs to be trusted (as does Mobile provider!)

Possible Attack Surfaces/Vectors


Surfaces
• Bugs in AOSP Supervisor/Trusted Code (such as some 3rd-party helper)
• TZ/TEE subsystem
• Baseband processor / any other processor / sensor code / libs
• Sandbox
• Kernel
• System boot images – PBL→SBL→ABOOT (not likely)
• Poorly implemented SELinux policies (not likely)
• OEMs circumventing CTS (not likely)

Vectors
• Weak API checking – examine the call graph – can we change things further down the call (side effects) with our args?
• Poor checking of buffers/stacks
• Pointer dereferencing to privileged areas
• Chaining gadgets (ROP) and returning to privileged code
• TBC

$^D


Any Questions?

From a Developer's Perspective – General Guidelines

Mostly common-sense

• Java programmers do not need to worry about device integrity – the APK is boxed in – this includes all the AV snake oil by the large vendors

• Sandbox provides on-device security executable/data

• Obfuscate and compress using ProGuard/DexGuard
• Use the TEE (but significant entry barriers)
• Use Local Broadcasts (see the sketch below)
• Bouncy Castle for data (on top of AOSP crypto?)
• JNI some crypto code to support the app (and obfuscate the C using LLVM???)
• Application signing
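A sketch of the "use Local Broadcasts" advice – LocalBroadcastManager (support library, as of 2017) keeps the broadcast inside your own process, so other apps can neither receive nor spoof it; the action string is illustrative:

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;
    import android.support.v4.content.LocalBroadcastManager;

    public class LocalEvents {
        public static final String ACTION_DATA_READY = "com.example.DATA_READY";

        public static void register(Context ctx, BroadcastReceiver receiver) {
            LocalBroadcastManager.getInstance(ctx)
                    .registerReceiver(receiver, new IntentFilter(ACTION_DATA_READY));
        }

        public static void publish(Context ctx) {
            LocalBroadcastManager.getInstance(ctx)
                    .sendBroadcast(new Intent(ACTION_DATA_READY));
        }
    }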

The first real serious attack – STAGEFRIGHT

• Heap overflow exploit
• Integer overflow caused more bytes to be read into the buffer than the allocated size, due to poor argument checking

• 95% of Android devices exposed (2.2+) – however in reality probably not many were truly unprotected

• Need to bypass ASLR to determine the heap!
• Other protections such as stack canaries, NX, FORTIFY_SOURCE…

CVE-2015-1538 – poor checking of 3GPP metadata (heap overflow)
CVE-2015-1539 – integer underflow (signed)
CVE-2015-3824 – integer overflow
CVE-2015-3826 – integer overflow
CVE-2015-3827 – integer underflow (signed)