
Photo.sh - Analysing The Locked iPhone - Apple Photos Shared Albums

Recently, I’ve been using my iPhone X as my main device in order to gather some real usage data, rather than just downloading an app and quickly logging in.

I wanted a source that would represent a real user, as there are of course differences and patterns you’ll only notice when you have access to historical, real-world data.

After using my iPhone X for around a week, I dumped the Root Filesystem (saving the entire file structure and contents of the device) in the BFU State, using my iPhone rootFS Tool utility.

There are also some other great free resources for dumping the filesystem including Magnet Acquire, or tar’ing the filesystem manually via SSH (make sure you don’t run out of device storage space doing this!).
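
As an aside, one neat way to avoid eating device storage is to stream the tar straight over the SSH session rather than writing it to the device first. Here’s a minimal sketch of that streaming pattern - the SSH target and paths in the comment are illustrative assumptions, not a verified setup:

```shell
# tar's '-f -' writes the archive to stdout, so over SSH the archive is
# captured host-side and never touches the device's own storage, e.g.:
#   ssh root@iphone 'tar -cf - /private/var' > rootfs_dump.tar
# Local simulation of the same pipe:
mkdir -p demo/var
echo "hello" > demo/var/file.txt
tar -C demo -cf - var > rootfs_dump.tar   # archive streams through stdout
tar -tf rootfs_dump.tar                   # verify the contents made it
```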

Dumping a device in the BFU state (Before First Unlock - i.e. locked after a reboot) will pull less data, but I personally find it much more interesting, as it means we can reproduce it on other devices without a valid passcode.

A Root Filesystem dump is very large, and sifting through that much data manually takes an extremely long time. So I executed SPIDER, my research automation software.

It works by having you input your own personal data; SPIDER then ingests all of the files within the rootFS and detects/matches the files where your personal data is present!

This will in turn give you some pointers for which databases and files you might find useful.
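
SPIDER itself isn’t public, but the core idea can be sketched in a few lines of shell - the config format, file names and directory layout below are my own invention for illustration:

```shell
# Simplified sketch of the SPIDER concept (not the actual tool): flag every
# file under the dump that contains any term from a personal-data config.
mkdir -p rootfs SpiderOUT
printf 'contact: alice@example.com\n' > rootfs/notes.txt
printf 'nothing of interest\n' > rootfs/other.txt
printf 'alice@example.com\n+441234567890\n' > spider.cfg   # one term per line
grep -rlFf spider.cfg rootfs > SpiderOUT/matches.txt       # -l: filenames only
cat SpiderOUT/matches.txt   # only rootfs/notes.txt matches
```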

I’ve pasted a picture of my SPIDER configuration file for demonstration purposes.

After executing SPIDER, I checked out the data which was generated and exported to the SpiderOUT directory.

The MatchDB and MatchSQLITE files are generally-speaking the first two SPIDER-generated files I check, as a database known to contain one element of your personal information probably contains more!

I decided to check out Model.sqlite and have a browse around the Database using ‘Base’ Database Viewer.

Upon checking the ‘Comments’ table, it wasn’t immediately obvious that there was any information of interest.

Yes, maybe a timestamp, but no context for the information.

I did, however, notice by chance that upon clicking the ‘obj’ cell for each of the records, there was a snippet of the ‘bplist’ header. Awesome, I thought! 🎉

I decided to execute a simple SQL query, ‘SELECT obj FROM Comments’ - this was the result…

The problem is that you can’t simply copy the ASCII representation into a new file and parse it. I had no idea how to actually ‘dump’ the information to a readable form.

If I could dump the data ‘blob’ as HEX, this would be much more useful as we can pull the entire contents. With that in mind, and a few Google searches later, I made a little alteration to the SQL Query.

Namely, instead of ‘SELECT obj’, we can use ‘SELECT hex(obj)’ to pull the HEX representation. Woohoo! We’re getting somewhere!! - We now have the full hex representation of each bplist in our output.
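
The trick is easy to reproduce against a throwaway database - the table and column names below mirror the article, but the blob is just the eight-byte ‘bplist00’ magic rather than a real record:

```shell
# Build a demo table holding a binary blob, then pull it back out as hex.
sqlite3 demo.sqlite "CREATE TABLE Comments (obj BLOB);
INSERT INTO Comments VALUES (x'62706C6973743030');"
sqlite3 demo.sqlite -line 'SELECT hex(obj) FROM Comments'
# hex(obj) = 62706C6973743030   ('bplist00' in ASCII)
```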

Now that we have our HEX representation, we want to parse this HEX data as a bplist and pull the data. The problem is that our output is full of comments from many different photographs; each block of HEX represents a different bplist, and we cannot parse them all as one.

You’ll also notice that at the start of each block, we have ‘hex(obj) =‘ which will break our bplist file entirely, even if we can export each block to its own file.

To fix this issue, we can use the wonderful ‘cut’ Unix utility to our advantage. Here’s how it looks in the query so far:

sqlite3 Model.sqlite -line 'select hex(obj) from Comments' | cut -f2 -d'='

We’re using the ‘|’ symbol here to ‘pipe’ the output of our SQL query into another command - ‘cut’, in this case.

Cut will take the output of our SQL query as it’s generated, do its thing, and then we’ll see only the output of cut within our terminal prompt.

We use ‘-f2’ as we would like to pick up the data AFTER the delimiter. The delimiter is ‘=’ as it appears only once per block, sitting between ‘hex(obj)’ and the bplist HEX data. We define the delimiter in cut with -d’=’.
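
On a single row, the effect looks like this - note that cut leaves the leading space in place, which is harmless since the hex-to-binary step later on ignores whitespace:

```shell
# cut keeps field 2: everything after the first '=' on the line.
echo 'hex(obj) = 62706C6973743030' | cut -f2 -d'='
# -> ' 62706C6973743030' (with a leading space)
```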

Our output should now look something like this:

Technically, each block of hex is on its own line, but as my laptop screen isn’t 120” or something crazy like that, we can’t see it all on one line in our terminal prompt.

Now that each HEX ‘bplist’ is on its own line, we know that the newline is what’s separating each block of information - we can use this in a moment.
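
One way to act on that - sketched here with xxd, which the article doesn’t name but which converts plain hex back into bytes - is to read the output line by line and write each block to its own file:

```shell
# Write each line of hex out as its own binary plist file.
printf '62706C6973743030\n62706C6973743030\n' > hexlines.txt   # stand-in output
i=0
while IFS= read -r line; do
  i=$((i + 1))
  printf '%s' "$line" | xxd -r -p > "comment_$i.bplist"   # hex -> raw bytes
done < hexlines.txt
head -c 8 comment_1.bplist   # -> bplist00
echo
```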

We can then use PlistBuddy to start parsing our new binary plist files. There’s one more problem, though...

Each file has a little more data than we need.

As of now, we’re just looking to output the comment, rather than everything around it (the bplist structure).
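
PlistBuddy lives at /usr/libexec/PlistBuddy and is macOS-only, so here’s the same ‘everything around the comment’ problem demonstrated with Python’s plistlib as a cross-platform stand-in - the MSASComment key name comes from the article, but the record structure below is invented for illustration:

```shell
# Build a fake binary plist and dump it whole, the way
# '/usr/libexec/PlistBuddy -c Print file.bplist' would on macOS.
python3 - <<'EOF'
import plistlib

# Invented structure - the real Shared Albums records differ.
blob = plistlib.dumps({"class": "MSASComment", "content": "Nice photo!"},
                      fmt=plistlib.FMT_BINARY)
with open("demo_comment.bplist", "wb") as f:
    f.write(blob)

# Parsing hands back the comment *plus* everything around it.
print(plistlib.loads(blob))
EOF
```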

As there’s no ‘Key’ for accessing the comment directly, we’re not able to simply filter for the specific key. We’re going to have to use good old ‘grep’. My favourite!

The only definitive value I could rely on is that wherever the text ‘MSASComment’ was printed within the terminal prompt, the plaintext comment would be precisely 7 lines above it. Interesting…

We can use the ‘-B7’ flag in grep to match not only the ‘MSASComment’ string, but also the previous 7 lines... see where we’re going? We then output the ‘grep’ of all the parsed bplists to a file named ‘bplistvals’.

This results in a nice clean output, but still has 7 irrelevant lines per output. Unfortunately, when calling the sh script from an external application, piping the data through ‘head -1’ to pull the first line of each entry wouldn’t suffice as it would pull the first line of the entire output, not per file.

With this in mind, we ‘cat’ the ‘bplistvals’ file and grep for the ‘-‘ symbol which very kindly appears above each plaintext comment.
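
Putting those last two steps together on some fake parsed output - the comments and filler lines below are invented, but the shape (comment exactly 7 lines above ‘MSASComment’, with grep’s ‘--’ separator landing above each subsequent comment) follows the article; grabbing the very first group with head is my own addition, since no separator precedes it:

```shell
# Fake "parsed bplist" output: each comment sits exactly 7 lines above
# its MSASComment marker.
cat > parsed.txt <<'EOF'
First comment!
f1
f2
f3
f4
f5
f6
MSASComment
noise between records
Second comment!
f1
f2
f3
f4
f5
f6
MSASComment
EOF

grep -B7 'MSASComment' parsed.txt > bplistvals

# Each -B7 group starts with the comment; grep separates groups with '--',
# which is the '-' line the article keys off. Take the line after each
# separator, plus the very first line of the file:
head -n 1 bplistvals
grep -A1 -x -e '--' bplistvals | grep -v -x -e '--'
```

This prints ‘First comment!’ and ‘Second comment!’ on their own lines - exactly the clean output we’re after.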

We’ll then be able to retrieve lovely, clean output from our sh script.

Thank you!

This article was definitely a little more long-winded than the majority of my other articles, but I felt it would give a good insight into the entire process: from checking out our Root Filesystem dump to creating a bash script to extract said data in a clean form!

If you have any questions or feel that clarification in some areas would be beneficial, please let me know and I’ll happily add some extra information!