
Reversing Akamai BMP 3.2.4 on Android

March 15, 2026

martinsummers

Breaking down Akamai's mobile bot management SDK on Android — the encryption scheme, payload structure, and what it takes to generate valid sensor data at scale.

Originally published at https://antibot.blog/posts/1773605197805

After solving Castle, I wanted to try a mobile antibot that was harder. Akamai BMP felt like a good entry point for mobile reverse engineering - it's widely deployed, the encryption is real but not insane, and the SDK is obfuscated enough to be interesting without being impenetrable.

This post covers the Android side - how the X-Acf-Sensor-Data header is built, encrypted, and what actually matters when you try to generate it yourself. My implementation is on GitHub, and links to all tools used are at the bottom of this article.

Big shout-out to xvertile's akamai-bmp-generator - it was a really helpful reference whenever I got lost throughout this process.

Intro

Before getting into it, here's what you'll need to follow along: a rooted Android device (I used a Pixel 7a running Android 16), Frida for runtime hooking, JADX for decompiling the APK, and Burp Suite for proxying traffic. All tools are linked at the end of this article.

If you need a more detailed walkthrough on setting up the intercept environment (rooting, installing Frida server, configuring Burp's proxy), check this out.

The goal: understand how the X-Acf-Sensor-Data header is generated. This header gets sent with API requests to prove the client is a legitimate mobile app.

Setup

Grab the target APK and set up Burp as a proxy. The first wall is SSL pinning - a generic TrustManager hook handles it:

Java.perform(function() {
    var TrustManagerImpl = Java.use('com.android.org.conscrypt.TrustManagerImpl');
    TrustManagerImpl.verifyChain.implementation = function() {
        return Java.use('java.util.ArrayList').$new();
    };
});

One tip here: try to find sites using BMP that don't have sophisticated Frida detection. Some apps will detect the Frida server running on the device and refuse to work, and bypassing that alone can become a real headache. For your first target, pick something that lets you hook without a fight - you want to focus on understanding the sensor, not fighting the instrumentation layer.

With traffic flowing through Burp, the X-Acf-Sensor-Data header showed up on API calls.

First Look at the Payload

The header value looks something like this:

2,a,BM79GnKFX1J5NxN9p2lZhh...=,CRLsTODUZHrDs1Z/vak...=$SssjP1/LzqrdoxUwtFn0mxO6...=$1000,0,1000$$

Breaking this down: it starts with 2,a (a version/type identifier), then two base64 blobs separated by a comma, a dollar-sign delimiter, another big base64 blob, another dollar sign, some numbers, and a double dollar sign at the end.

Those first two base64 blocks are 172 characters each - exactly what RSA-1024 produces (128 bytes -> 172 base64 chars with padding). The third block varies in size, probably the actual encrypted payload.

Finding the SDK

Threw the APK into JADX and searched for "sensor". That led to com.akamai.botman - about 30 obfuscated classes with single-letter names.

The entry point was easy to find: com.cyberfend.cyfsecurity.CYFMonitor. This is the public API the app calls:

public static synchronized String getSensorData() {
    return f9382a.a();
}

Where f9382a is an instance of class i. So class i is the main target.

Hooking the Plaintext

Before going deeper into static analysis, I wanted to see the plaintext before encryption. Hooking Cipher.doFinal() with Frida does the trick:

var Cipher = Java.use('javax.crypto.Cipher');
Cipher.doFinal.overload('[B').implementation = function(input) {
    var inputStr = Java.use('java.lang.String').$new(input);
    if (inputStr.indexOf('-94,-100') !== -1) {
        console.log('=== PLAINTEXT SENSOR DATA ===');
        console.log(inputStr);
    }
    return this.doFinal(input);
};

Bingo:

3.2.4-1,2,-94,-100,-1,uaend,-1,2219,1080,1,100,1,en,16,0,Pixel%207a,
lynx-16.3-13642542,lynx,-1,com.example.app,-1,-1,
b8b691969187b067,-1,0,1,REL,14331693,36,Google,lynx,release-keys,
user,android-build,BP3A.251005.004.B2,lynx,google,lynx,
google/lynx/lynx:16/BP3A.251005.004.B2/14331693:user/release-keys,
a997691c6787,BP3A.251005.004.B2...

The format uses -1,2,-94,<number> as section delimiters. Each number identifies a different type of data.
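To make the delimiter concrete, here's a toy splitter - the helper name and the trimmed-down sample string are mine, not the SDK's:

```go
package main

import (
	"fmt"
	"strings"
)

// splitSections breaks a decrypted sensor string into a map of
// section ID -> body, using the "-1,2,-94," delimiter observed above.
func splitSections(plaintext string) map[string]string {
	sections := make(map[string]string)
	parts := strings.Split(plaintext, "-1,2,-94,")
	for _, p := range parts[1:] { // parts[0] is the leading version prefix
		id, body, _ := strings.Cut(p, ",") // first comma separates ID from body
		sections[id] = body
	}
	return sections
}

func main() {
	sample := "3.2.4-1,2,-94,-100,uaend,2219,1080-1,2,-94,-101,do_en,dm_en,t_en"
	for id, body := range splitSections(sample) {
		fmt.Printf("%s => %q\n", id, body)
	}
}
```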

The Encryption Scheme

Back to JADX. Class af turned out to be the crypto handler. The key generation:

public final synchronized void b() {
    KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
    keyGenerator.init(128);
    this.f7048a = keyGenerator.generateKey();

    KeyGenerator keyGenerator2 = KeyGenerator.getInstance("HmacSHA256");
    keyGenerator2.init(256);
    this.f7049b = keyGenerator2.generateKey();

    RSAPublicKey rSAPublicKey = ai.a("MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC4sA7vA7N...");
    this.f7050c = Base64.encodeToString(ai.a(this.f7048a.getEncoded(), rSAPublicKey), 2);
    this.f7051d = Base64.encodeToString(ai.a(this.f7049b.getEncoded(), rSAPublicKey), 2);
}

The flow: generate a random AES-128 key, generate a random HMAC-SHA256 key, RSA encrypt both keys with Akamai's hardcoded public key, and store the base64-encoded encrypted keys.

Then encryption itself - AES-CBC with PKCS5 padding, HMAC-SHA256 signature appended:

public final synchronized String a(String str) {
    Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
    cipher.init(1, this.f7048a);
    byte[] ciphertext = cipher.doFinal(str.getBytes());
    byte[] iv = cipher.getIV();

    // Combine: IV + Ciphertext
    byte[] combined = new byte[iv.length + ciphertext.length];
    System.arraycopy(iv, 0, combined, 0, iv.length);
    System.arraycopy(ciphertext, 0, combined, iv.length, ciphertext.length);

    // HMAC sign
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(this.f7049b.getEncoded(), "HmacSHA256"));
    byte[] hmac = mac.doFinal(combined);

    // Final: IV + Ciphertext + HMAC
    byte[] finalPayload = new byte[combined.length + hmac.length];
    System.arraycopy(combined, 0, finalPayload, 0, combined.length);
    System.arraycopy(hmac, 0, finalPayload, combined.length, hmac.length);

    return "2,a," + this.f7050c + "," + this.f7051d + "$" +
           Base64.encodeToString(finalPayload, 2) + "$" + timingMetrics + "$$";
}

So the final payload structure: 2,a (version/type), then the first base64 block which is RSA(AES key), second base64 block which is RSA(HMAC key), third base64 block which is IV (16 bytes) + AES ciphertext + HMAC (32 bytes), and timing metrics at the end.

The server has Akamai's RSA private key, so it decrypts the AES/HMAC keys, then uses those to decrypt and verify the payload. Clever scheme.

Mapping the Sections

Now for the tedious part - figuring out what each section contains. The complete plaintext structure, with the section IDs I found, looks like:

3.2.4-rc3
-1,2,-94,-100,{35+ field device fingerprint}
-1,2,-94,-101,do_en,dm_en,t_en
-1,2,-94,-102,{empty}
-1,2,-94,-108,{empty}
-1,2,-94,-117,{touch events}
-1,2,-94,-144,{orientation timestamp checksum - RLE encoded}
-1,2,-94,-142,{orientation value checksums - RLE encoded}
-1,2,-94,-145,{motion timestamp checksum - RLE encoded}
-1,2,-94,-143,{motion value checksums - RLE encoded}
-1,2,-94,-115,{16 field stats section}
-1,2,-94,-70,{empty}
-1,2,-94,-80,{empty}
-1,2,-94,-120,{empty unless PoW required}
-1,2,-94,-112,{performance benchmarks}
-1,2,-94,-121,{empty}
-1,2,-94,-103,{background events}
-1,2,-94,-150,1,0

Let me walk through the important ones.

Device Fingerprint (Section -100)

This is the big one. Tracing CYFMonitor.b(), you can see exactly how this string gets built - it's a StringBuilder appending 35+ fields, mostly from android.os.Build.

Worth noting: they check adb_enabled, which could flag debugging environments.

Sensor Checksums (Sections -144, -142, -145, -143)

These had me confused at first. Turns out they're encoding accelerometer/gyroscope data, with optional DCT compression. Looking at class ak, the process is: normalize the sensor data to a min/max range, quantize to character buckets (A-}, 60 buckets), run-length encode (AAABBC -> 3A2BC), then CRC-32 checksum.

Output format: 2;{min};{max};{checksum};{data}

The 2; prefix means raw encoding (as opposed to 1; for DCT-compressed). For the small sample sizes on mobile (~8-16 readings), raw is almost always smaller.

Performance Benchmarks (Section -112)

This section turned out to be way more important than I initially thought. Nine values from five CPU micro-benchmarks:

17,1297,59,1666,126700,1275,33400,333,64354

Traced the generation in class g method h():

public static String h() {
    long start = SystemClock.uptimeMillis();
    int hits1 = 0, iters1 = 0;

    // Benchmark 1: Integer modulo operations
    for (int i = 1; i < 1000000; i++) {
        if (((4508713 % i) * 11) % i == 0) hits1++;
        iters1++;
        if (i % 100 == 0 && SystemClock.uptimeMillis() - start > 2) break;
    }

    // Benchmark 2: Float operations
    // Benchmark 3: Square root
    // Benchmark 4: Trigonometry
    // Benchmark 5: Simple loop

    return hits1 + "," + iters1 + "," + hits2 + "," + iters2 + ...;
}

Five different micro-benchmarks measuring how many operations complete in a fixed time window. The results are deterministic for a given CPU - a Snapdragon 855 produces consistent results across runs, but completely different from an Exynos 9810.

Keep this in mind. It matters a lot later.

Building the Generator

At this point I had the format mapped out and the encryption understood. Built a generator in Go, ran it against a protected endpoint...

403 Forbidden

A 403 from Akamai tells you absolutely nothing about what's wrong. Could be the sensor format, encryption, TLS fingerprinting, header order, cookies - anything.

I'll spare you the full debugging spiral, but here's the short version: I spent time chasing TLS fingerprinting (wasn't the issue), header ordering (wasn't the issue), and comparing plaintexts side-by-side with Frida hooks (this was useful - my -115 section had 9 fields when the real thing had 16).

The thing that finally got single requests working? Removing cookies. I'd been passing ak_bmsc and bm_sz cookies from an earlier request, which seemed logical. But the real app sends protected requests on a fresh client with no cookie jar. Dropped the cookies, got a 200.

The Scale Problem

Single requests passing is nice. Scaling is where things actually get hard.

Built a concurrent worker pool, fired off 100 requests, and watched the pass rate crater in real time:

[6s]  Pass:8  | Blocked:3  | Rate:1.8/s
[12s] Pass:12 | Blocked:15 | Rate:1.5/s
[18s] Pass:14 | Blocked:31 | Rate:1.2/s
...
[45s] Pass:18 | Blocked:82 | Rate:0/s

First 8-15 requests pass, then 403s start stacking, and after ~30 requests everything's blocked. The format was correct - the problem was behavioral. Same device fingerprint making dozens of requests in rapid succession looks exactly like automation.

First obvious fix: rotate devices between requests. That helped a bit (45% -> 52%), but some devices kept getting banned while others worked fine. The pattern? Budget phones and older models had way higher ban rates. Meanwhile flagship devices were mostly fine.

The Device Data Breakthrough

This is where it clicked. I'd been generating device fingerprints from a dataset I built myself - model, manufacturer, screen dimensions, the basics. But I was missing fields and faking values I shouldn't have been.

Found a repository on GitHub with a devices.json that had complete device profiles, including something I'd overlooked entirely:

{
  "BUILD": {
    "MANUFACTURER": "samsung",
    "MODEL": "SM-G965F",
    "FINGERPRINT": "samsung/star2ltexx/star2lte:10/QP1A.190711.020/G965FXXSGHWB1:user/release-keys",
    "HARDWARE": "samsungexynos9810",
    "BOARD": "exynos9810",
    "BOOTLOADER": "G965FXXSGHWB1"
  },
  "PERF_BENCH": [
    "16,338,59,115,134000,1348,80600,805,9535",
    "17,1043,59,1679,84500,853,51900,518,2059"
  ]
}

PERF_BENCH - performance benchmarks. Nine comma-separated values. The exact same format as section -112.

I'd been generating random values for -112. But these aren't random - they're CPU benchmark results from real devices. Akamai correlates the performance benchmarks with the claimed device model. A "Samsung Galaxy S9" reporting budget-phone benchmark numbers gets flagged immediately.

That's why the budget phones were failing. Incomplete fingerprints + fake benchmark data = obvious bot.

Final Rotation Strategy

With real device data (complete BUILD fields + real PERF_BENCH), I implemented smarter rotation:

func worker(jobs <-chan Request, devices []Device) {
    var currentDevice Device
    var generator *Generator
    requestCount := 0

    rotateDevice := func() {
        currentDevice = devices[rand.Intn(len(devices))]
        generator = NewGenerator(currentDevice)
    }

    rotateDevice()

    for req := range jobs {
        if requestCount > 0 && requestCount % (rand.Intn(3)+3) == 0 {
            rotateDevice()
        }
        requestCount++

        result := sendRequest(req, generator)

        if result.StatusCode == 428 {
            rotateDevice() // Sensor rejected = device burned
        }

        time.Sleep(randomDelay(2000, 5000))
    }
}

Scale test with 20 workers, 1,000 requests:

Total Processed:  1001
Passed:           416  (sensor accepted)
Blocked:          559  (sensor rejected/rate limited)
Errors:           26   (connection issues)
Duration:         5m26s

Sensor Pass Rate: 42.7% (416/975)

42.7% at 3.1 req/s. Not amazing, but functional. The block rate is partly from stale device data (~2 years old) and partly from the inherent challenge of sourcing real mobile fingerprints. Filtering out the problem devices should push it above 60%.

Takeaways

Format correctness != behavioral correctness. My sensor format was right from the start. Single requests passed. But Akamai isn't just checking format - it's correlating device claims with expected behavior.

Performance benchmarks are CPU fingerprints. Section -112 correlates to specific chipsets. Random values are an instant flag.

Device data quality matters more than quantity. Missing fields like hardware, board, bootloader, and real benchmark data caused most of my blocks. Incomplete profiles get caught.

Compare plaintexts early. I wasted hours on TLS and headers when a side-by-side diff of the plaintext would've shown the format issues immediately.

Don't assume you need cookies. My assumption that session cookies were needed was wrong. Fresh requests worked better.

Next Steps

The main limitation right now is stale device data. The next step would be building a collector script that runs inside a real app to harvest fresh device fingerprints and benchmark data from physical devices, then continuing to test with up-to-date profiles. That alone should push the pass rate significantly higher.

My implementation: github.com/B9ph0met/akamai-bmp

Reference implementation: github.com/xvertile/akamai-bmp-generator

Frida: frida.re

JADX: github.com/skylot/jadx

Burp Suite: portswigger.net/burp

Setup guide: antibot.blog - PerimeterX SDK setup

Akamai BMP version: 3.2.4

