j. b. crawford

devops consultant. computer curmudgeon. author of Computers Are Bad.

  • 0 Posts
  • 1 Comment
Joined 1 year ago
Cake day: May 31st, 2023

  • There’s an interesting aspect of this issue that I think the post summary really dismisses. Photos coming from phones these days sort of are AI, and in an annoyingly pervasive way.

    I’ve actually gone back from my phone to a proper camera over the last year or so because I’m so irritated by the amount of ML-based post-processing my phone does. It makes a lot of photos look bad, and there’s no easy way to bypass it besides setting the phone to save RAW, which defeats much of the point of using the phone in the first place (going straight from taking the photo to posting it, all on the device). A really common situation for me: I take a photo that’s blurry from bad focus, shake, low light, or some combination, and the phone applies really aggressive ML “sharpening” that makes the image look extremely artificial and, frankly, a lot worse than if the post-processing had been omitted entirely. I’ve had whole sets of photos ruined by this kind of “helpfulness.”

    It’s a tricky issue: there absolutely are benefits to cameras using the best technology available to create the best photograph possible. I don’t mean to appeal to some sense of artistic integrity or “real photography” here; I just hate the lack of control over the product. I used to be really into photography as a hobby, had a lot of opinions about lenses, and mostly set up exposures manually. Nowadays I use my Sony Alpha with the kit lens and rarely take it off its “smart” auto mode, which does have some ML-driven features like subject detection. But it still feels like I have far more control over the output than I do with my phone, because the Sony doesn’t run the image through ten layers of AI processing, not a whole lot better than the state of the art in Instagram filters, before saving it. If I don’t hold the camera steady, the shot just comes out motion blurred, not looking like someone new to Photoshop has just discovered the posterize button.

    As I understand it, Apple is better about this kind of thing than most of the Android vendors, and the iPhone processing probably produces better output, but it’s still frustrating to feel like photos are changing from “capturing the scene” to “recreating the scene.” I did graduate work on forensics of digital images and learned a lot of theory and methods for analyzing and reversing in-camera processing. I did some research on the “auto HDR” feature that was starting to appear in Android devices at the time and whether it defeated known forensic methods for device fingerprinting (mostly, but not totally). But that was the tip of the iceberg… it used to be that cameras did only a little processing, debayering for example: the kind of thing that genuinely has to be done to turn sensor data into a usable image, because of the properties of the sensor and readout pipeline. But phones, the dominant photographic tool today, take it to a whole new level, applying what would once have been very complex post-processing to every image as it’s taken.
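    To be concrete about the kind of processing that genuinely has to happen: a Bayer sensor records only one color per photosite, so the camera has to interpolate the other two channels at every pixel. Here’s a toy bilinear debayer for an RGGB mosaic, purely illustrative and nothing like any vendor’s actual pipeline (assumes numpy):

```python
import numpy as np

def box_sum3(a: np.ndarray) -> np.ndarray:
    """Sum over each pixel's 3x3 neighborhood, zero-padded at the edges."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def debayer_bilinear(raw: np.ndarray) -> np.ndarray:
    """Toy bilinear demosaic: RGGB Bayer mosaic -> H x W x 3 RGB image.

    Each photosite recorded only one channel; the two missing channels
    at each pixel are filled in by averaging the nearest known samples.
    """
    h, w = raw.shape
    masks = [np.zeros((h, w), dtype=bool) for _ in range(3)]
    masks[0][0::2, 0::2] = True   # R on even rows, even cols
    masks[1][0::2, 1::2] = True   # G on even rows, odd cols...
    masks[1][1::2, 0::2] = True   # ...and on odd rows, even cols
    masks[2][1::2, 1::2] = True   # B on odd rows, odd cols

    rgb = np.empty((h, w, 3), dtype=np.float64)
    for ch, mask in enumerate(masks):
        known = np.where(mask, raw, 0.0)
        sums = box_sum3(known)                 # sum of known neighbors
        counts = box_sum3(mask.astype(float))  # how many neighbors were known
        interp = sums / np.maximum(counts, 1.0)
        rgb[..., ch] = np.where(mask, raw, interp)
    return rgb
```

    Real camera pipelines use much fancier edge-aware demosaicing than this, but the point stands: this step exists because of the sensor’s physical layout, which is a very different animal from ML resynthesis of image content.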

    As with so many things, I guess it’s good when it works and endlessly frustrating when it doesn’t. At least it feels like the phone vendors are doing their part to preserve “traditional” photographic technology, if that’s what you’d call a Sony mirrorless, by really nerfing phones as tools for people who want much control over the result. I do understand there are third-party camera apps for iPhone that expose a lot more user control, but it seems like they’re also limited in how much of the camera stack they can control or bypass.