note: To preserve @MichaelClark's substantial answer below, I've volunteered to leave this question here and allow it to be closed, instead of just deleting it (see discussion below the answer). That means I'm volunteering to eat the downvotes, so please take a moment to consider it before casting one. Thanks!
Is it possible to reconstruct an approximation of the original raw sensor data from downloaded iPhone 6 images by interpreting the metadata and mathematically undoing whatever processing was done in the phone?
I understand this is not the best way to proceed, but I would like to keep my phone app-less, and I believe that at this time I cannot access the raw data on the phone without a third-party app.
I generally use Python for everything I do, but if there is free software I could consider it. If the math is out there, I can write my own script as well. I'm looking to understand what I'm doing more than to find a quick solution.
If I see something interesting or useful with this, I'll bring in a DSLR later.
What I'm planning to do:
I would like to look at relatively small color differences between two regions of an image, and see whether that shift increases or decreases from one photo to the next. For example, in each image I'll define two rectangles, then calculate r = R/(R+G+B), where R, G, and B are each summed within the rectangle. So r1 and r2 might be 0.31 and 0.32, and I'd call that a shift of 0.01 (a rough sketch of this calculation is shown below the plan).
The images are fairly flat: white paper reflecting ambient light. I'll maintain the same distance and angle. (Note: I'm only asking here about the data-processing step, so I'm only loosely describing this to give a better idea of how I'll use the data, not how I'll interpret it.)
I'll do the same for the next photo, and see whether the shift is larger or smaller. This is "science stuff," but not serious science stuff, and I understand this is not the best way to proceed.
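For concreteness, here is a minimal Python sketch of the ratio calculation described above, assuming standard 8-bit JPEGs read with Pillow and NumPy. The file names and rectangle coordinates are placeholders, and note that this operates on the processed, gamma-encoded JPEG values, not the raw sensor data that the question is about.

    import numpy as np
    from PIL import Image

    def channel_ratio(image_path, box):
        """Return r = R / (R + G + B), with each channel summed over `box`.

        `box` is (left, upper, right, lower) in pixel coordinates,
        as used by Image.crop().
        """
        img = Image.open(image_path).convert("RGB")
        region = np.asarray(img.crop(box), dtype=np.float64)
        sums = region.reshape(-1, 3).sum(axis=0)  # total R, G, B over the rectangle
        return sums[0] / sums.sum()

    # Example: two rectangles in one photo; repeat for the next photo
    # and compare the shifts. Coordinates here are hypothetical.
    r1 = channel_ratio("photo_1.jpg", (100, 100, 200, 200))
    r2 = channel_ratio("photo_1.jpg", (300, 100, 400, 200))
    print("shift in photo 1:", r2 - r1)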