June 1, 2016

Last Updated on January 19, 2024

In Part 1 of this post, I provided an overview of mobile app attack vectors and penetration testing approaches. In Part 2, I covered tools, techniques, and issues related to the two types of mobile applications: browser-based and native.
In this Part 3, I’ll discuss penetration testing of mobile client apps in general, plus I’ll talk about ways to uncover vulnerabilities specific to Apple’s iOS during pen testing.

Penetration testing of client applications

Most mobile apps are architected such that client software is installed locally on the mobile device. Users download these apps from marketplaces like Apple's App Store or Google Play (formerly the Android Market).
To penetration-test these apps, you need a rooted Android device or jailbroken iOS device, or an emulator. It's always better to conduct penetration testing using an actual (rooted or jailbroken) mobile device, if available. Examples of emulators and simulators for popular mobile platforms include the Android Emulator, MobiOne, the iOS Simulator (bundled with Xcode) and the BlackBerry Simulator.
Besides an emulator or root-accessible mobile device, mobile app pen testing also requires a decompiler so you can decompile the binary application files. During black-box engagements, decompilation is essential in order to gain a complete understanding of the app's internals. Decompilers and disassembly tools for mobile apps include .NET Reflector for Windows Mobile, otool and class-dump-x for iPhone, dex2jar and JD-GUI for Android, and Coddec for BlackBerry.
Once you've successfully decompiled the application, consider using a static code analysis tool to identify vulnerabilities in the source code. Tools for this purpose include Klocwork Solo, Flawfinder and the Clang Static Analyzer.
When performing penetration testing in these environments, check for the presence of controls that mitigate vulnerabilities related to the following (a sketch for checking on-device file protection appears after the list):

  • Files (temporary, cached, configuration, databases, etc.) on the local file system
  • File permissions
  • Application authentication and authorization
  • Error handling and session management
  • Business logic flaws
  • Decompiling, analyzing and modifying the installation package
  • Client-side injections
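For the first two items, a minimal Swift sketch like the following (the file path is hypothetical) shows what inspecting an on-device file might look like; during a pen test you would verify that sensitive files actually carry a restrictive data-protection class and sane permissions.

import Foundation

// Hypothetical file the app under test might have written.
let path = NSHomeDirectory() + "/Documents/account-cache.db"

do {
    // Inspect the file's attributes, including its iOS data-protection class.
    let attrs = try FileManager.default.attributesOfItem(atPath: path)
    let permissions = attrs[.posixPermissions] ?? "unknown"
    let protection = attrs[.protectionKey] as? String ?? "no protection class set"

    // Sensitive files should report NSFileProtectionComplete (or
    // NSFileProtectionCompleteUnlessOpen), not NSFileProtectionNone.
    print("POSIX permissions: \(permissions)")
    print("Data protection:   \(protection)")
} catch {
    print("Could not read attributes for \(path): \(error)")
}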

iOS Application Security Issues

Eliminating security vulnerabilities in iOS apps is especially critical, not only because iOS usage is so widespread, but also because many users—and maybe even some IT decision-makers—think the platform is invulnerable to hackers.
Here are some of the security concerns to watch out for on iOS:
Privacy issues
Every iOS device has a Unique Device Identifier (UDID), which functions somewhat like a serial number. Mobile apps can collect this identifier through an API, or attackers can sniff it from network traffic. Because the UDID is constant, it can be used to correlate a user's browsing patterns and geolocation data across apps and services. Apple and others make use of this data; fortunately, it is not linked directly to users' identities.
Application data storage
Applications installed on mobile devices use device storage to hold their data; by one estimate, about 75% of apps do so. On-device storage is usually used to improve performance or to support offline usage. However, according to one source, roughly 10% of apps store passwords in clear text on the device.
On iOS, apps run in a “sandbox” with “mobile” privileges. Each app gets a private area of the file system. iOS app data is mainly stored in these locations: property list (plist) files, the keychain, logs, screenshots, and the app’s home directory. An example home directory is: /var/mobile/Applications/[GUID].
Below the home directory are several subdirectories:

  • AppName.app: contains the application code and static data
  • Documents/: data that may be shared with a desktop application through iTunes
  • Library/: application support files
  • Library/Preferences/: app-specific preferences
  • Library/Caches/: data that should persist across successive launches of the application but doesn't need to be backed up
  • tmp/: temporary files that do not need to persist across successive launches of the application

Plist files
Plist files are primarily used to store an application's properties and user preferences; e.g.: /var/mobile/Applications/[appid]/Library/Preferences/. Apps store key/value pairs in binary format, and these can be easily extracted and modified with a property list editor such as plutil. Never store clear-text sensitive data in plist files. During a pen test, look for usernames, passwords, cookies, etc. within plist files. (Some apps base authentication/authorization decisions on plist values; e.g., admin=1, timeout=10.)
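To illustrate why this matters, here is a hedged Swift sketch (the keys are hypothetical and shown only as an anti-pattern): anything written through UserDefaults lands in a clear-text plist under Library/Preferences/ in the app sandbox, where a tester can read and edit it with plutil on a jailbroken device.

import Foundation

// Anything written to UserDefaults is persisted in clear text to
// Library/Preferences/<bundle-id>.plist inside the app sandbox.
let defaults = UserDefaults.standard

// Hypothetical keys, for illustration only: a pen tester can flip these
// values with a plist editor such as plutil on a jailbroken device.
defaults.set(true, forKey: "isAdmin")
defaults.set("s3cretPassw0rd", forKey: "cachedPassword") // never do this

// Reading the values back shows nothing protects them at rest.
print(defaults.bool(forKey: "isAdmin"))
print(defaults.string(forKey: "cachedPassword") ?? "none")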
Keychain
The iOS keychain is an SQLite database used for storing sensitive data. It has four tables (genp, inet, cert and keys) and is located at /var/Keychains/Keychain-2.db. iOS encrypts keychain data using a hardware encryption key together with the user's passcode, depending on the accessibility (data protection) class assigned to the keychain entry.
Developers are meant to leverage the keychain for secure data storage. Normally an app can only access its own keychain items, but on a jailbroken device that safeguard can be bypassed; the Keychain Dumper tool shows which keychain items are accessible to an attacker once a device is jailbroken. The best way to keep data stored in the keychain secure is to use the data protection API, i.e., assign each item an appropriately restrictive accessibility class.
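As a minimal sketch of that recommendation (the service and account names are hypothetical), a keychain item can be stored with a restrictive accessibility class so it is only available while the device is unlocked and never leaves the device:

import Foundation
import Security

// Store a token in the keychain, bound to this device and only
// available while the device is unlocked.
let token = Data("example-session-token".utf8)

let query: [String: Any] = [
    kSecClass as String: kSecClassGenericPassword,
    kSecAttrService as String: "com.example.app",   // hypothetical service name
    kSecAttrAccount as String: "session-token",
    kSecValueData as String: token,
    // Data-protection class: not synced, not in backups, unlocked-only.
    kSecAttrAccessible as String: kSecAttrAccessibleWhenUnlockedThisDeviceOnly
]

// Remove any stale copy first, then add the new item.
SecItemDelete(query as CFDictionary)
let status = SecItemAdd(query as CFDictionary, nil)
print(status == errSecSuccess ? "stored" : "keychain error: \(status)")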
Error logs
Apps may write sensitive data to logs for debugging or troubleshooting, including full requests and responses. Logs can be found at /private/var/log/syslog; you can view them with a console app on the device or with the Console utility on a Mac connected to the device.
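One mitigation to look for, sketched below with Apple's unified logging API (iOS 14+; the subsystem, category and token are hypothetical), is compiling verbose logging out of release builds and letting the logger redact interpolated values:

import os

let log = Logger(subsystem: "com.example.app", category: "auth") // hypothetical identifiers
let sessionToken = "abc123" // hypothetical secret

#if DEBUG
// Debug-only logging; stripped from release builds.
log.debug("auth flow started")
#endif

// Interpolated values marked .private are redacted in the system log;
// never mark secrets .public.
log.info("received session token \(sessionToken, privacy: .private)")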
Keyboard cache
To support auto-correction, iOS apps can populate a local keyboard cache (located at Library/Keyboard/en_GB-dynamic-text.dat) on the device. The problem is that it records everything that the user types in text fields. During pen testing, check whether the app is caching sensitive data by clearing the existing cache and then entering data in text fields for analysis.
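The usual countermeasures, sketched below, are marking secret fields as secure (secure fields are excluded from the cache) and disabling autocorrection on other sensitive fields:

import UIKit

// Fields holding secrets should never feed the keyboard cache.
let passwordField = UITextField()
passwordField.isSecureTextEntry = true        // secure fields are not cached

// For other sensitive fields (account numbers, one-time codes, etc.),
// disabling autocorrection keeps them out of the dynamic-text dictionary.
let accountField = UITextField()
accountField.autocorrectionType = .no
accountField.autocapitalizationType = .none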
File cache
iOS apps cache files in various formats, such as PDF, XLS and TXT, when those files are viewed from within the app; for example, when a user opens an attachment from an email, it gets cached on the device. For optimal security, apps that store temporary files on the device should clear them on logout or close.
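A cleanup routine along these lines is one thing to look for during testing; this is only a sketch, and the point at which it is called (logout, app close) is assumed:

import Foundation

// Remove everything the app has written to its tmp/ directory,
// e.g. cached document previews, when the user logs out.
func clearTemporaryFiles() {
    let fm = FileManager.default
    let tmpDir = NSTemporaryDirectory()
    if let items = try? fm.contentsOfDirectory(atPath: tmpDir) {
        for item in items {
            try? fm.removeItem(atPath: tmpDir + item)
        }
    }
}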
Screenshots
When you press the Home button on an iOS device, the open app shrinks away with a smooth visual effect. iOS takes a screenshot of the app to create that effect, and that screenshot is stored on the device, so any sensitive data shown on screen can end up cached. The solution is for the app to remove sensitive data or change the screen before the applicationDidEnterBackground() function returns. Alternatively, instead of hiding or removing sensitive data, you can prevent backgrounding altogether by setting the "Application does not run in background" property (UIApplicationExitsOnSuspend) in the application's Info.plist file.
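One common mitigation, sketched below for a UIKit app delegate, is to cover sensitive content before the snapshot is taken and restore it on return to the foreground; the plain cover view here is just an illustration, not the only approach:

import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?
    private let coverView = UIView()

    // Called just before iOS snapshots the screen for the app switcher.
    func applicationDidEnterBackground(_ application: UIApplication) {
        coverView.frame = window?.bounds ?? .zero
        coverView.backgroundColor = .systemBackground
        window?.addSubview(coverView)   // hide sensitive content from the snapshot
    }

    func applicationWillEnterForeground(_ application: UIApplication) {
        coverView.removeFromSuperview() // restore the UI when the app returns
    }
}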
Home directory
Apps can also store data directly in their home directory, sometimes protected with a custom encryption mechanism. During pen testing, you can use reverse-engineering techniques to recover the encryption key and write tools to break the custom encryption.
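For illustration, such a custom scheme might look like the CryptoKit sketch below; the hard-coded key is deliberately included as the anti-pattern a pen tester hunts for, since a real app should pull its key from the keychain instead:

import Foundation
import CryptoKit

// WARNING: a hard-coded key like this is exactly the kind of secret a pen
// tester recovers by reverse engineering; keys belong in the keychain.
let hardCodedKey = SymmetricKey(data: Data(repeating: 0x41, count: 32))

func encryptToHomeDirectory(_ plaintext: Data, fileName: String) throws {
    let sealed = try AES.GCM.seal(plaintext, using: hardCodedKey)
    let url = URL(fileURLWithPath: NSHomeDirectory())
        .appendingPathComponent("Documents")
        .appendingPathComponent(fileName)
    // combined is non-nil when the default nonce is used.
    try sealed.combined!.write(to: url, options: .completeFileProtection)
}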
Reverse engineering
In general, iOS apps downloaded from the App Store are encrypted. However, it is possible to decrypt any application on a jailbroken device. For example, you can make use of Crackulous, which decrypts apps on the device, or Installous-type apps to install decrypted apps on a device. Most self-distributed apps are not encrypted. During pen testing, look for hard-coded passwords and encryption keys.
As an example, a loophole in iOS 6.1 allowed attackers to bypass the device's passcode and then access the Phone app, listen to voicemails and place calls.
URL scheme
iOS apps can register custom URL schemes that specify how they can be launched from a web browser or from other apps. During pen testing, you can view the app's Info.plist to see which schemes are supported. For example:

> plutil Facebook.app/Info.plist
CFBundleURLName = "com.facebook";
CFBundleURLSchemes = (fbauth, fb);

If the app does not perform proper validation of these parameters, then bad inputs can crash it; for example:

mailto:[email protected]
twitter://post?message=visit%20abc.com

You can also decrypt the app and dump its strings to find the URLs and parameters it handles; e.g.:

  • >strings Facebook.app/Facebook | grep 'fb:'
  • fb://online#offline
  • fb://birthdays/(initWithMonth: )/(year: )
  • fb://userset
  • fb://nearby
  • fb://place/(initWithPageId: )

Attackers can also perform remote attacks using weaknesses in URL schemes, such as editing or deleting data without the user's permission. An example is the "Skype URL Handler Dial Arbitrary Number" vulnerability: <iframe src="skype://140877777777?call"></iframe>
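The corresponding mitigation to check for is strict validation of incoming URLs before the app acts on them. A minimal sketch (the scheme and allow-list are hypothetical), implemented as the standard UIApplicationDelegate callback:

import UIKit

// Implemented on the app's UIApplicationDelegate; shown standalone for brevity.
func application(_ app: UIApplication, open url: URL,
                 options: [UIApplication.OpenURLOptionsKey: Any] = [:]) -> Bool {
    // Hypothetical allow-list of actions the app is willing to handle.
    let allowedHosts: Set<String> = ["profile", "settings"]

    guard url.scheme == "exampleapp",               // hypothetical scheme
          let host = url.host,
          allowedHosts.contains(host) else {
        return false                                // reject anything unexpected
    }

    // Never dial, delete, or post on the user's behalf based solely on URL
    // parameters; require explicit confirmation for sensitive actions.
    return true
}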
Push notifications
App vendors use this service to push notifications to a user's device even when the app is not active. For example, iMessage alerts you when you have a new message even if you're using another app. Because notifications are relayed through Apple's servers, Apple can read them, so it is recommended not to send confidential data in notifications. Also, during pen testing, check whether the app allows push notification payloads to modify app data.
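When testing this, confirm the app treats notification payloads as untrusted input. The sketch below (the payload key is hypothetical) shows a delegate callback that validates the payload and re-fetches state from the server rather than trusting the notification:

import UserNotifications

class NotificationHandler: NSObject, UNUserNotificationCenterDelegate {
    // Called when the user taps a delivered notification.
    func userNotificationCenter(_ center: UNUserNotificationCenter,
                                didReceive response: UNNotificationResponse,
                                withCompletionHandler completionHandler: @escaping () -> Void) {
        let payload = response.notification.request.content.userInfo

        // Treat the payload as untrusted: validate it and re-fetch state from
        // the server rather than letting the notification modify app data directly.
        if let messageId = payload["messageId"] as? String, !messageId.isEmpty {
            print("Will fetch message \(messageId) from the server")
        }
        completionHandler()
    }
}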
Next time, in the fourth and final part of this post, I’ll cover penetration testing of the mobile app’s communication channel.
To discuss penetration testing services for your business-critical mobile applications, contact Pivot Point Security.
