Written by Mustafa Najoom
CEO at Gaper.io. Building the future of technical hiring. Mustafa has spent over a decade working with engineering teams that ship secure, production-grade mobile applications across healthcare, fintech, and enterprise.
Android now powers over 3.3 billion active devices worldwide. That is not a typo. More than 3 billion phones, tablets, wearables, TVs, and automotive systems run some version of Android. Every one of those devices is a potential entry point for attackers, and the numbers show they know it: roughly 60% of all mobile malware and attack campaigns target the Android ecosystem.
The reason is straightforward. Android’s open ecosystem, which makes it a great platform for developers, also makes it a larger attack surface than iOS. Users can sideload APKs from outside the Play Store. OEM fragmentation means millions of devices run outdated OS versions with known vulnerabilities. And the sheer volume of apps on the Play Store (over 3.5 million at last count) makes comprehensive security review impossible at scale.
Google has not been standing still. Google Play Protect now scans over 125 billion apps per day and has gotten significantly better at catching polymorphic malware that changes its signature to evade detection. The introduction of real-time code scanning in Play Protect (rolling out broadly through 2025 and 2026) means that even apps downloaded from sideloaded sources get analyzed for malicious behavior.
Android 15 brought several security features that developers should understand and leverage. Credential Manager is now the recommended unified API for passkeys, passwords, and federated sign-in. The new partial screen sharing feature limits what screen recording and casting can capture, protecting apps that display sensitive data. File integrity checks are stronger, with improved APK signature verification that makes tampering easier to detect.
But platform improvements alone do not secure your app. The responsibility sits squarely with the development team. According to the NowSecure 2025 Mobile Security Report, 85% of Android apps contain at least one security vulnerability that could be exploited. The most common issues are insecure data storage, improper certificate validation, and missing binary protections. All of these are preventable with the right engineering practices.
The cost of getting security wrong is escalating. In 2025, the average cost of a mobile data breach exceeded $4.5 million for mid-size companies. Regulatory fines under GDPR, HIPAA, and PCI DSS add to that total, and in healthcare and finance, the reputational damage can be terminal. Nobody wants to be the app that made headlines for leaking patient records or payment data.
85% of Android apps contain at least one exploitable security vulnerability.
Source: NowSecure 2025 Mobile Security Report
This guide covers the 15 most effective security practices for Android apps in 2026. Each one includes what it does, how to implement it with real code, and the mistakes developers commonly make. These are not theoretical suggestions. They are the same techniques used by banking apps, healthcare platforms, and enterprise software that handle millions of sensitive transactions daily.
The Open Worldwide Application Security Project (OWASP) maintains the definitive list of mobile security risks. The Mobile Top 10 is the starting point for any Android security strategy because it reflects the vulnerabilities that are most frequently exploited in the real world. Here is the current list with Android-specific context for each entry.
| # | Risk | Android-Specific Context |
|---|---|---|
| M1 | Improper Credential Usage | Hardcoded API keys in source, storing tokens in plain-text SharedPreferences, or embedding secrets in BuildConfig fields that get compiled into the APK. Use Android Keystore instead. |
| M2 | Inadequate Supply Chain Security | Third-party Gradle dependencies with known CVEs, unverified Maven artifacts, or compromised SDK integrations. Android’s dependency tree can pull in hundreds of transitive libraries you never directly chose. |
| M3 | Insecure Authentication/Authorization | Client-side auth checks that can be bypassed with a patched APK. Weak biometric implementation that falls back to PIN without proper CryptoObject binding. Missing server-side validation of user roles. |
| M4 | Insufficient Input/Output Validation | SQL injection through ContentProviders, path traversal via Intent URIs, and JavaScript injection in WebViews. Android’s IPC mechanism (Intents) introduces validation challenges that do not exist on other platforms. |
| M5 | Insecure Communication | Apps that allow cleartext HTTP, lack certificate pinning, or accept invalid TLS certificates. Android’s Network Security Config makes it easy to enforce HTTPS, but many apps still ship with misconfigured settings. |
| M6 | Inadequate Privacy Controls | Collecting excessive permissions, not deleting user data on account deletion, or leaking PII through logs and crash reports. Android 15’s granular permissions model raises the bar here. |
| M7 | Insufficient Binary Protections | APKs that ship without code obfuscation, debug symbols left in release builds, or missing tamper detection. Android APKs can be trivially decompiled with jadx or apktool if unprotected. |
| M8 | Security Misconfiguration | Exported Activities/Services/Receivers that should be private, world-readable file permissions, or debuggable=true left in production manifests. Android’s component model creates unique misconfiguration risks. |
| M9 | Insecure Data Storage | Sensitive data in plain-text SharedPreferences, unencrypted SQLite databases, or data written to external storage that any app can read. This is the most common Android vulnerability by volume. |
| M10 | Insufficient Cryptography | Using deprecated algorithms (MD5, SHA-1, DES), weak key generation, or custom crypto implementations. Android’s Keystore and Jetpack Security libraries provide the right primitives. Use them. |
The 15 best practices in the next section map directly to these OWASP categories. If you address all 15, you will have meaningful coverage against every item on this list.
Each practice below includes what it does, a working code example, and the mistakes developers commonly make when implementing it. These are ordered roughly by impact and implementation priority.
What it does: Certificate pinning ensures your app only communicates with servers presenting a specific, known TLS certificate or public key. Without it, an attacker with access to a trusted Certificate Authority (or a compromised one) can perform man-in-the-middle attacks, intercepting and modifying all traffic between your app and your API.
Network Security Config (res/xml/network_security_config.xml):
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <domain-config cleartextTrafficPermitted="false">
        <domain includeSubdomains="true">api.yourapp.com</domain>
        <pin-set expiration="2027-01-01">
            <!-- Primary pin (current certificate) -->
            <pin digest="SHA-256">AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=</pin>
            <!-- Backup pin (next certificate rotation) -->
            <pin digest="SHA-256">BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=</pin>
        </pin-set>
    </domain-config>
</network-security-config>
OkHttp Certificate Pinner (for programmatic control):
val certificatePinner = CertificatePinner.Builder()
    .add("api.yourapp.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .add("api.yourapp.com", "sha256/BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=")
    .build()

val client = OkHttpClient.Builder()
    .certificatePinner(certificatePinner)
    .build()
Common mistakes: Only pinning a single certificate with no backup (the app breaks when the cert rotates). Pinning to a leaf certificate instead of the public key (requires app update on every renewal). Not setting an expiration date on the pin set, which means a bad pin can brick the app permanently. Forgetting to disable pinning in debug builds, making local development painful.
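The "pin the public key, not the leaf certificate" advice above is concrete: an OkHttp-style pin is simply the Base64 of the SHA-256 digest of the certificate's DER-encoded SubjectPublicKeyInfo. Here is a minimal plain-JVM Java sketch of that computation; the freshly generated key pair is a stand-in for your server's real public key, which you would normally extract from its certificate.

```java
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.PublicKey;
import java.util.Base64;

public class SpkiPin {

    // HPKP/OkHttp-style pin: "sha256/" + base64(SHA-256(SubjectPublicKeyInfo))
    public static String pinOf(PublicKey key) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        // getEncoded() returns the X.509 SPKI bytes for public keys
        byte[] digest = md.digest(key.getEncoded());
        return "sha256/" + Base64.getEncoder().encodeToString(digest);
    }

    public static void main(String[] args) throws Exception {
        // Stand-in key pair for illustration only
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256);
        PublicKey pub = kpg.generateKeyPair().getPublic();
        System.out.println(pinOf(pub));
    }
}
```

Because the pin covers the public key rather than the whole certificate, the server can renew its certificate with the same key pair and the pin keeps working, which is exactly why leaf-certificate pinning forces an app update on every renewal.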
What it does: Encrypts all data stored on the device, including SharedPreferences, SQLite databases, and files. If a device is lost, stolen, or compromised by malware, the attacker cannot read the data without the encryption key, which is stored securely in the Android Keystore.
EncryptedSharedPreferences (Jetpack Security):
// build.gradle
implementation "androidx.security:security-crypto:1.1.0-alpha06"

// Kotlin
val masterKey = MasterKey.Builder(context)
    .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
    .build()

val encryptedPrefs = EncryptedSharedPreferences.create(
    context,
    "secure_prefs",
    masterKey,
    EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
    EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
)

// Use exactly like regular SharedPreferences
encryptedPrefs.edit()
    .putString("auth_token", token)
    .apply()
Common mistakes: Encrypting values but not keys (the key names themselves can reveal sensitive information, like “user_ssn” or “credit_card_number”). Storing the encryption key in the APK or in plain-text SharedPreferences instead of the Keystore. Using ECB mode instead of GCM for AES encryption. Not encrypting database files when using Room or raw SQLite.
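To make the ECB-versus-GCM point concrete, here is a minimal plain-JVM Java sketch of AES-256-GCM with a fresh random IV per message. On Android the key would come from the Keystore rather than a software KeyGenerator; the class and method names here are illustrative.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class GcmBox {
    private static final int IV_LEN = 12;    // 96-bit IV, the recommended size for GCM
    private static final int TAG_BITS = 128; // full-length authentication tag

    // Returns iv || ciphertext so the IV travels with the message
    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv); // fresh IV per message - never reuse with the same key
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[IV_LEN + ct.length];
        System.arraycopy(iv, 0, out, 0, IV_LEN);
        System.arraycopy(ct, 0, out, IV_LEN, ct.length);
        return out;
    }

    public static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
                new GCMParameterSpec(TAG_BITS, Arrays.copyOfRange(blob, 0, IV_LEN)));
        // doFinal verifies the auth tag and throws if the ciphertext was tampered with
        return c.doFinal(Arrays.copyOfRange(blob, IV_LEN, blob.length));
    }

    public static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = newKey();
        byte[] blob = encrypt(key, "auth_token=abc123".getBytes());
        System.out.println(new String(decrypt(key, blob))); // prints auth_token=abc123
    }
}
```

Unlike ECB, GCM both hides patterns in the plaintext and authenticates the ciphertext, so any modification is detected at decrypt time instead of silently producing garbage.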
What it does: R8 (the modern replacement for ProGuard) shrinks, optimizes, and obfuscates your compiled code. Class names become single letters, method names are randomized, and unused code is removed. This makes reverse engineering your APK significantly harder and reduces the binary size as a bonus.
// build.gradle (app module)
android {
    buildTypes {
        release {
            minifyEnabled true
            shrinkResources true
            proguardFiles getDefaultProguardFile(
                'proguard-android-optimize.txt'
            ), 'proguard-rules.pro'
        }
    }
}

// proguard-rules.pro
# Keep security-critical classes from being obfuscated
-keep class com.yourapp.security.** { *; }

# Keep model classes used for API serialization
-keepclassmembers class com.yourapp.models.** {
    <fields>;
}

# Remove logging in release builds
-assumenosideeffects class android.util.Log {
    public static int d(...);
    public static int v(...);
    public static int i(...);
}
Common mistakes: Leaving minifyEnabled as false in release builds because “it breaks stuff.” Not testing the release build until right before launch and then scrambling to fix ProGuard rules. Keeping debug log statements in production (the assumenosideeffects rule above strips them). Not uploading the mapping.txt file to your crash reporting tool, making stack traces unreadable.
What it does: Detects whether the device has been rooted, which gives any app (including malware) unrestricted access to the entire filesystem, other apps’ data, and system-level operations. For apps handling financial data, healthcare records, or enterprise secrets, running on a rooted device is an unacceptable risk.
// Using Google Play Integrity API (recommended over SafetyNet)
val integrityManager = IntegrityManagerFactory.create(context)

val integrityTokenRequest = IntegrityTokenRequest.builder()
    .setNonce(generateNonce())
    .build()

integrityManager.requestIntegrityToken(integrityTokenRequest)
    .addOnSuccessListener { response ->
        // Send token to your server for verification
        verifyIntegrityOnServer(response.token())
    }
    .addOnFailureListener { e ->
        // Handle failure - consider blocking sensitive ops
        restrictSensitiveFeatures()
    }

// Server-side: decode and check deviceIntegrity
// MEETS_DEVICE_INTEGRITY = not rooted, passes CTS
// MEETS_BASIC_INTEGRITY = may be rooted but not actively attacked
// No label = device is compromised
Common mistakes: Relying only on client-side root checks (they can be patched out of the APK). Using the deprecated SafetyNet API instead of the Play Integrity API. Hard-blocking all rooted devices instead of degrading gracefully (hide sensitive features but let the app run). Not checking for Magisk Hide or other root-cloaking tools that specifically target detection libraries.
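The generateNonce() helper referenced in the snippet above is not shown; a plausible plain-Java sketch is below. The 24-byte length is an assumption for illustration; the API expects a URL-safe Base64 value, and in production the nonce should be generated server-side and round-tripped so the server can bind the integrity verdict to a request it actually issued.

```java
import java.security.SecureRandom;
import java.util.Base64;

public class NonceFactory {
    private static final SecureRandom RNG = new SecureRandom();

    // 24 random bytes -> 32-character URL-safe Base64 string, no padding
    public static String generateNonce() {
        byte[] bytes = new byte[24];
        RNG.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        System.out.println(generateNonce());
    }
}
```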
What it does: Protects the communication layer between your Android app and backend services. This goes beyond just using HTTPS. It includes token management, request signing, rate limiting enforcement, and payload encryption for sensitive data.
// OkHttp Interceptor for request signing
class ApiSecurityInterceptor(
    private val keyStore: KeyStoreManager
) : Interceptor {

    override fun intercept(chain: Interceptor.Chain): Response {
        val original = chain.request()
        val timestamp = System.currentTimeMillis().toString()
        val nonce = UUID.randomUUID().toString()

        // Create HMAC signature of request body + timestamp + nonce
        val bodyBytes = original.body?.let { body ->
            val buffer = Buffer()
            body.writeTo(buffer)
            buffer.readByteArray()
        } ?: ByteArray(0)

        val signaturePayload = "${original.url}|$timestamp|$nonce|${
            bodyBytes.toBase64()
        }"
        val signature = keyStore.signWithHmac(signaturePayload)

        val secured = original.newBuilder()
            .header("X-Timestamp", timestamp)
            .header("X-Nonce", nonce)
            .header("X-Signature", signature)
            .header("Authorization", "Bearer ${getShortLivedToken()}")
            .build()

        return chain.proceed(secured)
    }
}
Common mistakes: Using long-lived tokens (use short-lived access tokens with refresh tokens). Sending API keys directly in headers instead of using HMAC-signed requests. Not implementing replay protection (timestamps + nonces prevent request replay attacks). Logging full request/response bodies including auth headers in debug builds.
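The signWithHmac() call in the interceptor above is a hypothetical helper; its core is standard HMAC-SHA256. Here is a self-contained Java sketch of that signing primitive plus the timestamp freshness check a server would use for replay protection (class name and window size are illustrative).

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class RequestSigner {

    // HMAC-SHA256 over the signature payload, hex-encoded
    public static String sign(byte[] secret, String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] raw = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder(raw.length * 2);
        for (byte b : raw) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    // Server-side replay protection: reject requests outside the freshness window
    public static boolean isFresh(long requestTimestampMs, long nowMs, long windowMs) {
        return Math.abs(nowMs - requestTimestampMs) <= windowMs;
    }

    public static void main(String[] args) throws Exception {
        String payload = "https://api.yourapp.com/v1/pay|1700000000000|nonce123|Zm9v";
        System.out.println(sign("demo-secret".getBytes(StandardCharsets.UTF_8), payload));
    }
}
```

Combined with a server-side cache of recently seen nonces, the timestamp check means a captured request cannot simply be resent: the signature covers both values, so neither can be changed without invalidating it.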
What it does: The Android Keystore system provides hardware-backed (on supported devices) storage for cryptographic keys. Keys stored in the Keystore cannot be extracted from the device. Even with root access, the private key material never leaves the secure hardware enclave (TEE or StrongBox).
// Generate a key pair in Android Keystore
val keyPairGenerator = KeyPairGenerator.getInstance(
    KeyProperties.KEY_ALGORITHM_EC,
    "AndroidKeyStore"
)

val parameterSpec = KeyGenParameterSpec.Builder(
    "my_secure_key",
    KeyProperties.PURPOSE_SIGN or KeyProperties.PURPOSE_VERIFY
).apply {
    setDigests(KeyProperties.DIGEST_SHA256)
    setUserAuthenticationRequired(true)
    setUserAuthenticationParameters(
        300, // timeout in seconds
        KeyProperties.AUTH_BIOMETRIC_STRONG
    )
    // Use StrongBox if available (dedicated security chip)
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
        setIsStrongBoxBacked(true)
    }
}.build()

keyPairGenerator.initialize(parameterSpec)
val keyPair = keyPairGenerator.generateKeyPair()

// Sign data with the Keystore key
val signature = Signature.getInstance("SHA256withECDSA")
signature.initSign(keyPair.private)
signature.update(dataToSign)
val signedBytes = signature.sign()
Common mistakes: Not checking whether StrongBox is available before requiring it (crashes on devices without the hardware). Using RSA when EC (Elliptic Curve) is more efficient and equally secure for signing operations. Setting user authentication timeout too long (or not requiring it at all). Not handling KeyPermanentlyInvalidatedException when the user changes their biometric enrollment.
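Because Keystore-backed keys only exist on-device, a useful way to understand the flow is the same Signature API with a software key pair, which runs on any JVM. This sketch mirrors the sign step above and adds the matching verify; on Android, the private key would be a non-exportable handle into the TEE or StrongBox rather than in-memory key material.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class EcdsaDemo {

    // Same "SHA256withECDSA" Signature API used with Keystore keys
    public static byte[] sign(KeyPair kp, byte[] data) throws Exception {
        Signature s = Signature.getInstance("SHA256withECDSA");
        s.initSign(kp.getPrivate());
        s.update(data);
        return s.sign();
    }

    public static boolean verify(KeyPair kp, byte[] data, byte[] sig) throws Exception {
        Signature s = Signature.getInstance("SHA256withECDSA");
        s.initVerify(kp.getPublic());
        s.update(data);
        return s.verify(sig);
    }

    public static KeyPair newKeyPair() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256); // P-256 curve
        return kpg.generateKeyPair();
    }

    public static void main(String[] args) throws Exception {
        KeyPair kp = newKeyPair();
        byte[] sig = sign(kp, "hello".getBytes());
        System.out.println("verified=" + verify(kp, "hello".getBytes(), sig));
    }
}
```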
What it does: Uses the device’s fingerprint sensor, face recognition, or iris scanner to authenticate the user for sensitive operations. When implemented correctly with a CryptoObject, the biometric check is cryptographically bound to a Keystore key, meaning it cannot be bypassed by hooking the callback.
// Secure biometric auth with CryptoObject binding
val biometricPrompt = BiometricPrompt(
    this, // FragmentActivity
    executor,
    object : BiometricPrompt.AuthenticationCallback() {
        override fun onAuthenticationSucceeded(
            result: BiometricPrompt.AuthenticationResult
        ) {
            // The cipher is now unlocked - use it to decrypt
            val cipher = result.cryptoObject?.cipher
            cipher?.let {
                val decryptedData = it.doFinal(encryptedPayload)
                processSecureData(decryptedData)
            }
        }

        override fun onAuthenticationError(
            errorCode: Int, errString: CharSequence
        ) {
            // Handle error - do NOT fall back to weak auth
            showAuthenticationError(errString)
        }
    }
)

// Create a CryptoObject tied to a Keystore key
val cipher = getCipherFromKeystore("biometric_key")
val cryptoObject = BiometricPrompt.CryptoObject(cipher)

val promptInfo = BiometricPrompt.PromptInfo.Builder()
    .setTitle("Authenticate")
    .setSubtitle("Verify your identity to continue")
    .setAllowedAuthenticators(BiometricManager.Authenticators.BIOMETRIC_STRONG)
    .setNegativeButtonText("Cancel")
    .build()

biometricPrompt.authenticate(promptInfo, cryptoObject)
Common mistakes: Not using a CryptoObject (the biometric check becomes purely UI-level and can be bypassed with Frida). Falling back to device PIN/pattern when biometric fails (defeats the purpose for high-security scenarios). Not handling the case where biometrics are not enrolled. Using BIOMETRIC_WEAK which includes less secure methods like 2D face recognition.
What it does: Prevents the system from capturing screenshots or screen recordings of activities that display sensitive data. This blocks the recent apps thumbnail, screen recording tools, screen sharing in video calls, and screenshot attempts. Banking apps, password managers, and healthcare apps use this extensively.
// In your Activity's onCreate
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    // FLAG_SECURE prevents screenshots and screen recording
    window.setFlags(
        WindowManager.LayoutParams.FLAG_SECURE,
        WindowManager.LayoutParams.FLAG_SECURE
    )
    setContentView(R.layout.activity_sensitive_data)
}

// For Jetpack Compose, set it at the composable level
@Composable
fun SecureScreen() {
    val context = LocalContext.current
    DisposableEffect(Unit) {
        val window = (context as Activity).window
        window.addFlags(WindowManager.LayoutParams.FLAG_SECURE)
        onDispose {
            window.clearFlags(WindowManager.LayoutParams.FLAG_SECURE)
        }
    }
    // Your sensitive UI content here
}
Common mistakes: Applying FLAG_SECURE globally to the entire app instead of just sensitive screens (users get frustrated when they cannot screenshot non-sensitive content). Not clearing the flag when navigating away from the sensitive screen in single-Activity architectures. Forgetting that the recent apps preview still shows a snapshot unless the flag is set before the view renders.
What it does: Validates, sanitizes, and constrains all user input before processing it. On Android, input validation extends beyond form fields to include Intent data, ContentProvider queries, deep link URIs, and clipboard content. Every piece of data that crosses a trust boundary must be validated.
// Input validation utility
object InputValidator {

    // Sanitize and validate Intent extras
    fun sanitizeIntentData(intent: Intent): Map<String, Any>? {
        return try {
            val data = mutableMapOf<String, Any>()
            intent.extras?.keySet()?.forEach { key ->
                val value = intent.extras?.getString(key)
                if (value != null && isCleanInput(value)) {
                    data[key] = value.take(MAX_INPUT_LENGTH)
                }
            }
            data.ifEmpty { null }
        } catch (e: Exception) {
            null // Reject malformed intents entirely
        }
    }

    // Validate deep link URIs
    fun validateDeepLink(uri: Uri): Boolean {
        val allowedHosts = setOf("yourapp.com", "www.yourapp.com")
        val allowedSchemes = setOf("https", "yourapp")
        return uri.scheme in allowedSchemes
            && uri.host in allowedHosts
            && !uri.path.orEmpty().contains("..")
            && uri.queryParameterNames.all { isCleanInput(it) }
    }

    private fun isCleanInput(input: String): Boolean {
        // Reject SQL injection, XSS, and path traversal
        val dangerousPatterns = listOf(
            "('|\"|;|--)", // SQL injection
            "(<script|javascript:)", // XSS
            "(\\.\\.[\\\\/])", // Path traversal
        )
        return dangerousPatterns.none { pattern ->
            Regex(pattern, RegexOption.IGNORE_CASE).containsMatchIn(input)
        }
    }

    private const val MAX_INPUT_LENGTH = 1000
}
Common mistakes: Only validating user-facing form inputs while ignoring Intent extras, deep links, and ContentProvider queries. Using a blocklist approach (blocking known bad patterns) instead of an allowlist approach (allowing only known good patterns). Performing validation only on the client side. Not handling exceptions from malformed data, which can crash the app or create unexpected behavior.
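The allowlist approach recommended above can be this small: define the exact shape each field may take and reject everything else, rather than enumerating bad patterns. A plain-Java sketch; the field names and regexes are illustrative placeholders, not a complete policy.

```java
import java.util.regex.Pattern;

public class AllowlistValidator {
    // Accept only characters and shapes you expect; everything else is rejected.
    // Tighten these patterns to your actual field formats.
    private static final Pattern USERNAME = Pattern.compile("^[a-zA-Z0-9_.-]{3,32}$");
    private static final Pattern ORDER_ID = Pattern.compile("^[A-Z]{2}-\\d{6,10}$");

    public static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }

    public static boolean isValidOrderId(String input) {
        return input != null && ORDER_ID.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidUsername("alice_01"));          // well-formed
        System.out.println(isValidUsername("'; DROP TABLE--"));   // rejected by shape alone
    }
}
```

Note that the injection string fails not because a blocklist recognized it, but because it simply does not match the allowed shape. That is the structural advantage of allowlisting: novel attack payloads are rejected by default.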
What it does: Enforces TLS encryption on all network traffic leaving your app. Android 9+ blocks cleartext HTTP by default, but older devices and misconfigured apps can still leak data over unencrypted connections. Proper configuration ensures TLS 1.3 is used wherever supported and that no fallback to cleartext exists.
<!-- AndroidManifest.xml -->
<application
    android:networkSecurityConfig="@xml/network_security_config"
    android:usesCleartextTraffic="false">
    <!-- ... -->
</application>

<!-- res/xml/network_security_config.xml -->
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <!-- Block all cleartext traffic globally -->
    <base-config cleartextTrafficPermitted="false">
        <trust-anchors>
            <certificates src="system" />
        </trust-anchors>
    </base-config>
    <!-- Debug overrides for local development -->
    <debug-overrides>
        <trust-anchors>
            <certificates src="user" />
        </trust-anchors>
    </debug-overrides>
</network-security-config>
Common mistakes: Adding cleartext exceptions for specific domains during development and forgetting to remove them. Not enforcing TLS 1.3 when the server supports it. Allowing user-installed CA certificates in production builds (this lets proxy tools like Charles intercept traffic). Not testing on older devices where the default cleartext behavior differs.
What it does: RASP monitors the app’s runtime environment for signs of attack or tampering. Unlike static protections (obfuscation, certificate pinning) that can be patched out of a repackaged APK, RASP detects debuggers, hooking frameworks (Frida, Xposed), memory injection, and dynamic code modifications in real time.
// Basic RASP checks (for production, use a commercial SDK)
object RuntimeProtection {

    fun performSecurityChecks(context: Context): SecurityStatus {
        val threats = mutableListOf<String>()
        if (isDebuggerAttached()) threats.add("debugger")
        if (isFridaDetected()) threats.add("frida")
        if (isRunningOnEmulator()) threats.add("emulator")
        if (isAppTampered(context)) threats.add("tampered")
        return SecurityStatus(threats)
    }

    private fun isDebuggerAttached(): Boolean {
        return Debug.isDebuggerConnected()
            || Debug.waitingForDebugger()
    }

    // Opens a socket, so call from a background thread. On the main thread
    // this throws NetworkOnMainThreadException, which the catch below would
    // silently turn into a false negative.
    private fun isFridaDetected(): Boolean {
        // Check for Frida's default listening port
        return try {
            val socket = java.net.Socket()
            socket.connect(
                java.net.InetSocketAddress("127.0.0.1", 27042), 100
            )
            socket.close()
            true // Frida is likely running
        } catch (e: Exception) {
            false
        }
    }

    private fun isRunningOnEmulator(): Boolean {
        return (Build.FINGERPRINT.startsWith("generic")
            || Build.FINGERPRINT.startsWith("unknown")
            || Build.MODEL.contains("Emulator")
            || Build.MANUFACTURER.contains("Genymotion")
            || Build.BRAND.startsWith("generic")
            || Build.PRODUCT.startsWith("sdk"))
    }

    private fun isAppTampered(context: Context): Boolean {
        val validSignature = "YOUR_RELEASE_SIGNATURE_HASH"
        val currentSignature = getAppSignature(context)
        return currentSignature != validSignature
    }
}
Common mistakes: Implementing RASP checks only in Java/Kotlin where they can be easily hooked. Not running checks continuously (doing a single check at startup is useless if Frida attaches after launch). Hard-killing the app on detection instead of silently reporting to your backend and degrading functionality. Not combining multiple detection techniques, since skilled attackers can bypass individual checks.
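The "run checks continuously" point can be sketched as a periodic scheduler that re-evaluates a threat check and reports instead of crashing. This is a plain-JVM illustration with made-up class names; a production RASP SDK would perform the checks in native code and randomize the cadence to make them harder to hook.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class ContinuousChecks {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "rasp-checks");
                t.setDaemon(true); // don't keep the process alive for checks
                return t;
            });
    public final AtomicInteger detections = new AtomicInteger();

    // Re-runs the supplied check on a fixed cadence rather than once at startup.
    // onThreat should report to the backend and degrade features, not kill the app.
    public void start(Supplier<Boolean> threatCheck, Runnable onThreat, long periodMs) {
        scheduler.scheduleAtFixedRate(() -> {
            if (threatCheck.get()) {
                detections.incrementAndGet();
                onThreat.run();
            }
        }, 0, periodMs, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }

    public static void main(String[] args) {
        ContinuousChecks cc = new ContinuousChecks();
        cc.start(() -> false, () -> System.out.println("threat detected"), 50);
        cc.stop();
    }
}
```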
What it does: Locks down WebView to prevent JavaScript injection, file system access, and cross-origin attacks. WebView is essentially an embedded browser, and its default configuration is far too permissive for production use. Every WebView-based feature needs explicit security hardening.
// Secure WebView configuration
val webView = WebView(context).apply {
    settings.apply {
        // Disable file access entirely
        allowFileAccess = false
        allowFileAccessFromFileURLs = false
        allowUniversalAccessFromFileURLs = false
        allowContentAccess = false

        // Enable JavaScript only if you need it
        javaScriptEnabled = true // set to false if not needed

        // Disable potentially dangerous features
        setSupportZoom(false)
        saveFormData = false
        databaseEnabled = false
        domStorageEnabled = true // only if needed

        // Force HTTPS
        mixedContentMode = WebSettings.MIXED_CONTENT_NEVER_ALLOW
    }

    // Restrict navigation to allowed domains
    webViewClient = object : WebViewClient() {
        override fun shouldOverrideUrlLoading(
            view: WebView, request: WebResourceRequest
        ): Boolean {
            val allowedHosts = setOf(
                "yourapp.com", "api.yourapp.com"
            )
            return if (request.url.host !in allowedHosts) {
                true // Block navigation to unknown domains
            } else {
                false // Allow
            }
        }
    }
}

// NEVER do this in production:
// webView.addJavascriptInterface(obj, "Android")
// unless you absolutely need it and have validated all inputs
Common mistakes: Enabling addJavascriptInterface without strict input validation (this was the vector behind the notorious addJavascriptInterface remote code execution exploits that hit apps targeting pre-4.2 Android). Allowing mixed content (HTTP resources loaded in HTTPS pages). Not restricting URL navigation, which lets injected JavaScript redirect to phishing pages. Leaving file access enabled, which can expose internal app files to loaded web content.
What it does: Android 15 continues the platform’s shift toward minimal, just-in-time permissions. Apps should request only the permissions they need, at the moment they need them, and handle denial gracefully. The photo picker, which eliminates the need for broad storage permissions, and the foreground service type declarations are the most impactful changes.
// Modern permission handling with Activity Result API
class SecureActivity : AppCompatActivity() {

    // Use Photo Picker instead of READ_EXTERNAL_STORAGE
    private val pickMedia = registerForActivityResult(
        ActivityResultContracts.PickVisualMedia()
    ) { uri ->
        uri?.let { processSelectedImage(it) }
    }

    // Request permissions with proper rationale
    private val requestPermission = registerForActivityResult(
        ActivityResultContracts.RequestPermission()
    ) { isGranted ->
        if (isGranted) {
            enableFeature()
        } else {
            showFeatureDegradedUI()
        }
    }

    fun selectProfilePhoto() {
        // No permission needed - Photo Picker handles it
        pickMedia.launch(
            PickVisualMediaRequest(
                ActivityResultContracts.PickVisualMedia.ImageOnly
            )
        )
    }

    fun requestLocationForDelivery() {
        when {
            ContextCompat.checkSelfPermission(
                this, Manifest.permission.ACCESS_FINE_LOCATION
            ) == PackageManager.PERMISSION_GRANTED -> {
                enableFeature()
            }
            shouldShowRequestPermissionRationale(
                Manifest.permission.ACCESS_FINE_LOCATION
            ) -> {
                showLocationRationaleDialog {
                    requestPermission.launch(
                        Manifest.permission.ACCESS_FINE_LOCATION
                    )
                }
            }
            else -> {
                requestPermission.launch(
                    Manifest.permission.ACCESS_FINE_LOCATION
                )
            }
        }
    }
}
Common mistakes: Requesting all permissions at app launch instead of when the feature is actually needed. Not providing rationale before re-requesting a denied permission. Using READ_EXTERNAL_STORAGE when the Photo Picker would work without any permission. Not declaring foreground service types in the manifest (required on Android 14+), causing crashes.
What it does: Verifies that the APK has not been modified, repackaged, or resigned after release. Attackers frequently decompile APKs, remove security checks, inject malicious code, and redistribute the modified app. Tamper detection catches this by verifying the APK’s signature and checksum at runtime.
object TamperDetection {

    // Verify APK signature matches expected value
    fun verifySignature(context: Context): Boolean {
        return try {
            val packageInfo = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
                context.packageManager.getPackageInfo(
                    context.packageName,
                    PackageManager.GET_SIGNING_CERTIFICATES
                )
            } else {
                @Suppress("DEPRECATION")
                context.packageManager.getPackageInfo(
                    context.packageName,
                    PackageManager.GET_SIGNATURES
                )
            }

            val signatures = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
                packageInfo.signingInfo.apkContentsSigners
            } else {
                @Suppress("DEPRECATION")
                packageInfo.signatures
            }

            val currentHash = signatures.firstOrNull()?.let { sig ->
                val md = MessageDigest.getInstance("SHA-256")
                md.digest(sig.toByteArray())
                    .joinToString("") { "%02x".format(it) }
            }
            currentHash == EXPECTED_SIGNATURE_HASH
        } catch (e: Exception) {
            false
        }
    }

    // Verify installation source
    fun isInstalledFromPlayStore(context: Context): Boolean {
        val installer = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
            context.packageManager
                .getInstallSourceInfo(context.packageName)
                .installingPackageName
        } else {
            @Suppress("DEPRECATION")
            context.packageManager
                .getInstallerPackageName(context.packageName)
        }
        return installer == "com.android.vending"
    }

    private const val EXPECTED_SIGNATURE_HASH = "your_sha256_hash_here"
}
Common mistakes: Hardcoding the expected signature hash as a plain string that is easily found and patched. Performing the check only once at startup instead of periodically. Not checking the installation source (legitimate apps come from Google Play, not sideloaded). Crashing immediately on tamper detection instead of reporting to the server first (the crash can be patched out, but server-side logging persists).
What it does: Integrates security scanning directly into your build pipeline so that vulnerabilities are caught before they reach production. This includes static analysis (SAST), dependency scanning, and optionally dynamic analysis (DAST) running against debug builds.
# .github/workflows/security-scan.yml
name: Android Security Scan

on:
  pull_request:
    branches: [main, develop]
  push:
    branches: [main]

jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Dependency vulnerability scan
      - name: Dependency Check
        uses: dependency-check/Dependency-Check_Action@main
        with:
          project: 'android-app'
          path: '.'
          format: 'HTML'

      # Static analysis with MobSF
      - name: Build Debug APK
        run: ./gradlew assembleDebug

      - name: MobSF Static Analysis
        run: |
          docker run --rm \
            -v "$PWD/app/build/outputs/apk/debug:/input" \
            opensecurity/mobile-security-framework-mobsf \
            mobsf --scan /input/app-debug.apk \
            --output /input/report.json

      # Lint security rules
      - name: Android Lint Security Check
        run: ./gradlew lint
        # Ensure lintOptions in build.gradle has:
        #   warningsAsErrors true
        #   checkDependencies true

      # Secrets detection
      - name: Scan for Hardcoded Secrets
        uses: trufflesecurity/trufflehog@main
        with:
          path: .
          extra_args: --only-verified
Common mistakes: Running security scans only in nightly builds instead of on every pull request. Not failing the build on high-severity findings (if the scan passes regardless, nobody looks at it). Ignoring dependency vulnerabilities because updating the library “might break something.” Not scanning for hardcoded secrets, which is the single most common security issue in Android codebases.
No single security measure is sufficient. The 15 practices above form concentric defense layers. If an attacker bypasses one, the next layer stops them. The following diagram shows how these layers stack from the outermost (network) to the innermost (hardware-backed key storage).
Choosing the right security testing tools depends on your team’s size, budget, and where you are in the development lifecycle. Here is a comparison of the five most commonly used tools for Android security testing, with honest assessments of what each does well and where it falls short.
| Tool | Type | Cost | Best For |
|---|---|---|---|
| MobSF | SAST + DAST | Free (open source) | All-in-one scanning for teams without a dedicated security budget. Scans APKs for hardcoded secrets, insecure configurations, dangerous permissions, and known vulnerability patterns. Easy to integrate into CI/CD with Docker. |
| QARK | SAST | Free (open source) | Source code and APK analysis focused on Android-specific vulnerabilities. Particularly good at detecting exported components, Intent vulnerabilities, and WebView misconfigurations. Generates proof-of-concept exploit APKs for findings. |
| Drozer | DAST (IPC) | Free (open source) | Testing Android IPC attack surfaces. Enumerates and tests exported Activities, ContentProviders, BroadcastReceivers, and Services. The go-to tool for finding Intent-based vulnerabilities. Requires a physical or emulated device. |
| Frida | Dynamic Analysis | Free (open source) | Runtime instrumentation and hooking. Intercepts function calls, modifies return values, bypasses security checks, and inspects encrypted traffic at the application layer. Essential for pen testing certificate pinning, biometric auth, and root detection implementations. |
| Burp Suite | Network Proxy | Free (Community) / $449/yr (Pro) | API security testing. Intercepts and modifies HTTP/S traffic between the app and server. Tests for authentication bypasses, injection vulnerabilities, and insecure API behaviors. The Pro version adds automated scanning and is worth it for teams doing regular security assessments. |
For most teams, the recommended combination is MobSF in CI/CD for automated scanning on every build, Burp Suite for manual API testing during development sprints, and Frida for targeted penetration testing before major releases. Drozer is valuable if your app has a complex IPC surface (multiple exported components, ContentProviders, or deep links). QARK is a good supplement to MobSF but has some overlap in coverage.
Teams that integrate security scanning into CI/CD catch 73% of vulnerabilities before code reaches production.
Source: Synopsys 2025 Software Security Report
The 15 best practices above form a universal baseline. But if your Android app operates in healthcare, fintech, or government, you have additional compliance requirements that go beyond general security hygiene. Here is what each major framework demands and how it maps to Android-specific implementation.
HIPAA’s Security Rule requires that any app handling Protected Health Information (PHI) implement administrative, physical, and technical safeguards. For Android apps, the technical safeguards are where the development team has direct control.
- **Encryption at rest** is non-negotiable. Every piece of PHI stored on the device must be encrypted with AES-256; EncryptedSharedPreferences and SQLCipher cover this.
- **Encryption in transit** requires TLS 1.2 minimum (TLS 1.3 preferred) with certificate pinning.
- **Access controls** mean biometric or strong authentication before displaying PHI.
- **Audit logging** requires tracking every access to PHI with timestamps and user identity, and those logs must be stored securely and transmitted to a HIPAA-compliant backend.
- **Automatic session timeout** must lock the app after a period of inactivity (typically 15 minutes or less).
- **Remote wipe capability** should be built in so that if a device is lost, PHI can be destroyed remotely.
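The session-timeout requirement is easy to get wrong when it is tangled up with Activity lifecycles. One approach, sketched below with a hypothetical `SessionTimeoutTracker` class (not from any Android API), keeps the expiry logic as pure Kotlin with an injectable clock so it can be unit-tested without the framework; in the app you would call `recordActivity()` from touch/interaction callbacks and check `isExpired()` in `onResume()`.

```kotlin
// Hypothetical inactivity tracker. HIPAA guidance typically means locking
// after ~15 minutes of inactivity; the timeout and clock are injectable so
// the logic is testable off-device.
class SessionTimeoutTracker(
    private val timeoutMillis: Long = 15 * 60 * 1000L,
    private val clock: () -> Long = System::currentTimeMillis
) {
    private var lastActivity: Long = clock()

    // Call from user-interaction callbacks (e.g., onUserInteraction()).
    fun recordActivity() {
        lastActivity = clock()
    }

    // Check in onResume(); if true, clear PHI from the UI and re-authenticate.
    fun isExpired(): Boolean = clock() - lastActivity >= timeoutMillis
}
```

Keeping this logic framework-free also makes the "15 minutes or less" policy a single constant that an auditor can verify.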
The screenshot prevention flag (FLAG_SECURE) is practically mandatory for any screen displaying PHI. HIPAA does not explicitly require it, but a screenshot of patient data stored in the device’s photo gallery is a breach waiting to happen.
PCI DSS v4.0 replaced v3.2.1 as the mandatory standard in March 2024, and its remaining future-dated requirements became enforceable in March 2025. It applies to any app that processes, stores, or transmits cardholder data. For Android apps, this typically means payment screens, wallet features, or any functionality that touches card numbers, CVVs, or account data.
PCI DSS Requirement 4 mandates strong cryptography for cardholder data in transit. Requirement 3 requires encryption at rest with proper key management (Android Keystore satisfies this when configured correctly). Requirement 6 requires secure development practices, including code reviews, vulnerability scanning, and change control. Requirement 11 requires regular penetration testing. The new v4.0 additions include requirements for automated technical solutions to detect and prevent web-based attacks (relevant for apps with WebView-based payment flows) and enhanced monitoring of payment page scripts and headers.
For most Android fintech apps, the practical advice is: never store full card numbers on the device (use tokenization), implement RASP to detect tampering of payment screens, use the Android Keystore for any cryptographic operations involving financial data, and run both SAST and DAST scans as part of your CI/CD pipeline.
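The "never store full card numbers" rule usually reduces to two moves in the client: persist only the server-issued token, and show the user nothing more than the last four digits. The helper below is a hypothetical illustration (the function name and masking format are ours, not from any PCI library):

```kotlin
// Hypothetical display helper: the full PAN is sent to the tokenization
// endpoint and discarded; only the masked form is ever rendered or stored.
fun maskPan(pan: String): String {
    val digits = pan.filter(Char::isDigit)
    // Card numbers are 12–19 digits per ISO/IEC 7812.
    require(digits.length in 12..19) { "unexpected PAN length" }
    return "**** ${digits.takeLast(4)}"
}
```

Anything beyond this (the token exchange itself, key management) belongs on the server or in the Keystore, not in app-level code.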
FedRAMP (Federal Risk and Authorization Management Program) is the US government’s framework for authorizing cloud services. If your Android app is used by federal agencies or connects to FedRAMP-authorized cloud services, the app itself becomes part of the security boundary that must meet FedRAMP requirements.
FedRAMP builds on NIST 800-53 controls. For Android apps, the most impactful controls are SC-8 (Transmission Confidentiality) requiring FIPS 140-2 validated cryptography for data in transit, SC-28 (Protection of Information at Rest) requiring encryption of all stored data with FIPS-validated modules, IA-2 (Identification and Authentication) requiring multi-factor authentication, and AU-2 (Audit Events) requiring comprehensive logging of security-relevant events.
The Android Keystore on devices with StrongBox can meet FIPS 140-2 requirements when the hardware module itself is FIPS-certified (check with the device manufacturer). For the TLS layer, you may need to configure specific cipher suites that are FIPS-approved. The continuous monitoring requirement means your app needs to report security events to a SIEM system in near real time, which is a significant engineering effort beyond what most consumer apps implement.
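Restricting the TLS layer to specific cipher suites can be expressed through the standard `javax.net.ssl` API. The sketch below is a JVM-level illustration, not a FedRAMP-certified configuration: the two suite names are commonly cited as FIPS-approved, but the actual approved set must be confirmed against your authorizing official and the device's validated crypto module.

```kotlin
import javax.net.ssl.SSLContext
import javax.net.ssl.SSLParameters

// Build SSLParameters restricted to TLS 1.2 and a narrowed cipher-suite
// list. Apply these parameters to the SSLEngine/SSLSocket your HTTP stack
// exposes. Suite names here are illustrative assumptions.
fun fipsParameters(): SSLParameters {
    val ctx = SSLContext.getInstance("TLSv1.2").apply { init(null, null, null) }
    val params = ctx.defaultSSLParameters
    params.protocols = arrayOf("TLSv1.2")
    params.cipherSuites = arrayOf(
        "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
        "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
    )
    return params
}
```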
Most of the 15 practices in this guide can be implemented by experienced Android developers who take the time to understand the security model. Certificate pinning, encrypted storage, obfuscation, and permissions handling are well-documented and have mature library support. A good senior Android developer can implement all of these.
But there are situations where a dedicated security engineer becomes necessary. If your app handles regulated data (HIPAA, PCI DSS, FedRAMP), you need someone who understands the compliance framework deeply enough to know what “good enough” actually means for your specific use case. If you have been breached or suspect a vulnerability is being actively exploited, incident response requires specialized skills and tools. If you are preparing for a third-party security audit or penetration test, having an internal security engineer who can pre-audit and fix issues before the external team arrives will save you weeks and significant cost.
Custom RASP implementations, advanced tamper detection in native code, and cryptographic protocol design are areas where security specialization pays off. These are not tasks where “mostly right” is acceptable, because subtle implementation errors can create vulnerabilities that are worse than having no protection at all (they give a false sense of security).
Gaper’s network includes Android security engineers who have built secure mobile applications for healthcare systems, banking platforms, and government contractors. If your team needs to add security expertise without the overhead of a full-time hire, working with a vetted specialist on a project basis is often the most practical path.
Need a Security Audit for Your Android App?
Our engineers have built HIPAA-compliant and PCI-DSS-compliant mobile apps. From security assessment to remediation.
Encrypted local storage and HTTPS enforcement. These two measures protect user data at rest and in transit, which covers the two most common attack vectors. Start with EncryptedSharedPreferences from Jetpack Security for stored data and a Network Security Config that blocks all cleartext traffic. Once those are in place, add certificate pinning and then work through the remaining 13 practices based on your app’s risk profile. Apps handling sensitive data (health, financial, authentication) should prioritize Android Keystore integration and biometric authentication as their next steps.
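Blocking cleartext traffic is a few lines of configuration. A minimal Network Security Config looks like this (the file name is the conventional one; reference it from the manifest's `<application>` element via `android:networkSecurityConfig="@xml/network_security_config"`):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/network_security_config.xml -->
<network-security-config>
    <!-- Refuse all http:// connections app-wide. -->
    <base-config cleartextTrafficPermitted="false" />
</network-security-config>
```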
Yes, and arguably more necessary than ever. Certificate Transparency (CT) logs have made certificate misissuance more detectable, but they have not eliminated man-in-the-middle attacks. Corporate proxy environments, compromised CA certificates, and state-level attackers can all intercept TLS traffic without triggering CT alerts. Certificate pinning is the only defense that verifies you are actually talking to your server and not a proxy presenting a valid but wrong certificate. The implementation has gotten easier with Android’s Network Security Config, and the operational burden has decreased now that most teams use public key pinning with backup pins and expiration dates.
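In Network Security Config terms, pinning with a backup pin and an expiration date looks like the following sketch. The domain and the two Base64 SPKI hashes are placeholders; substitute your API host and the SHA-256 hashes of your leaf (or intermediate) public keys.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/network_security_config.xml -->
<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">api.example.com</domain>
        <!-- Primary pin plus a backup pin for the next key rotation.
             The expiration date keeps a stale pin set from permanently
             locking users out. -->
        <pin-set expiration="2027-06-01">
            <pin digest="SHA-256">AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=</pin>
            <pin digest="SHA-256">BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=</pin>
        </pin-set>
    </domain-config>
</network-security-config>
```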
Start with MobSF (Mobile Security Framework), which is free, open-source, and runs in Docker. Upload your APK and it will scan for issues mapped to the OWASP Mobile Top 10 categories automatically. For more thorough testing, combine MobSF with manual testing using Burp Suite (for API traffic inspection) and Frida (for runtime behavior analysis). The OWASP MASTG (Mobile Application Security Testing Guide) provides step-by-step testing procedures for each risk category. For a comprehensive assessment, run Drozer to enumerate and test your app’s IPC attack surface, especially if you have exported ContentProviders or custom URL schemes.
Android Keystore is a low-level system for generating and storing cryptographic keys in hardware-backed secure storage (TEE or StrongBox). The keys never leave the secure hardware, and you use them to perform cryptographic operations like signing or encryption. EncryptedSharedPreferences is a higher-level library that uses Android Keystore internally to encrypt key-value data stored in SharedPreferences files. Think of Keystore as the vault where your keys live, and EncryptedSharedPreferences as a convenient tool built on top of that vault. For most apps, EncryptedSharedPreferences is the right choice for storing configuration data and tokens. Use Keystore directly when you need custom cryptographic operations, biometric-bound keys, or hardware attestation.
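To make the vault analogy concrete, here is the AES-GCM round trip that sits underneath both APIs, written as a JVM-only sketch with an in-memory key so it runs anywhere. On a device, the only change in shape is where the key comes from: you would obtain it via `KeyGenerator.getInstance("AES", "AndroidKeyStore")` with a `KeyGenParameterSpec`, so the key material never leaves the secure hardware.

```kotlin
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.spec.GCMParameterSpec

// Encrypt then decrypt a payload with AES-256-GCM. The in-memory key is a
// stand-in for a hardware-backed Keystore key; the cipher transformation
// and IV handling are identical on-device.
fun roundTrip(plaintext: ByteArray): ByteArray {
    val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

    val enc = Cipher.getInstance("AES/GCM/NoPadding")
    enc.init(Cipher.ENCRYPT_MODE, key)
    val iv = enc.iv            // GCM needs a unique IV per encryption
    val ciphertext = enc.doFinal(plaintext)

    val dec = Cipher.getInstance("AES/GCM/NoPadding")
    dec.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
    return dec.doFinal(ciphertext)
}
```

EncryptedSharedPreferences does exactly this bookkeeping (key lookup, IV storage, authentication tag) for you, which is why it is the right default.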
It raises the bar significantly but does not stop a determined attacker. R8 obfuscation makes reverse engineering slower and more expensive by renaming classes and methods to meaningless characters, removing unused code, and optimizing bytecode. A skilled reverse engineer using jadx or JEB can still understand the logic, but it takes hours or days instead of minutes. The real value of R8 is that it stops casual attackers and automated scanning tools from quickly finding hardcoded secrets, API endpoints, and security logic. For higher assurance, combine R8 with a commercial obfuscator such as Guardsquare's DexGuard, which adds string encryption, control flow obfuscation, and native library protection.
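Enabling R8 is a small Gradle change. A minimal sketch of the release build type (file names are the AGP defaults; keep-rules for reflection-heavy libraries go in `proguard-rules.pro`):

```groovy
android {
    buildTypes {
        release {
            // R8 shrinking + obfuscation
            minifyEnabled true
            shrinkResources true
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'),
                          'proguard-rules.pro'
        }
    }
}
```

Always test the release APK, not just debug builds: missing keep rules surface as runtime crashes only after R8 runs.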
Android fragmentation is a real challenge for security. Devices running Android 8 or 9 do not have the same Keystore capabilities, permission models, or network security defaults as Android 14 and 15. The practical approach is to set a minimum SDK version that supports your required security features (API 26/Android 8.0 is a reasonable floor in 2026), use Jetpack Security libraries that provide backward-compatible implementations, and degrade gracefully on older devices. For apps handling highly sensitive data, consider blocking devices below a certain OS version entirely. Google’s own data shows that over 85% of active Android devices now run Android 10 or later, so raising the minimum version is less painful than it used to be. The alternative is maintaining parallel security implementations, which doubles your testing surface and increases the chance of bugs.
Ship Secure Mobile Apps, Faster
Vetted Android engineers with security expertise. OWASP-aligned. HIPAA and PCI-DSS experience.
8,200+ vetted engineers. 14 verified Clutch reviews. Backed by Harvard and Stanford alumni.
Top quality ensured or we work for free
