Saturday 30 June 2018

low light - Do you have tips on what kit and techniques would give the best results for photographing concerts and small gigs?


I want to photograph some local gigs to build up a portfolio. Does anyone have any tips on what kit and techniques would give the best results? The gigs would be mostly rock bands.


Here's my kit:




  • Nikon D70S

  • AFS Nikkor DX 18-70mm 3.5-4.5

  • Tamron 70-300mm 4-5.6


Any help most appreciated,


Jonesy



Answer



Given the equipment you have listed, I would say an entire kit replacement might be in order. The Nikon D70s looks like a decent DSLR; however, it does not have the greatest high-ISO performance. Higher ISO capability with low noise would be a huge boon for photographing concerts (which tend to be quite dark). Something that can handle ISO 1600, possibly even ISO 3200, without a lot of noise would be a LOT better for photographing dark scenes like concerts.


Neither of your lenses is very fast, with the fastest aperture being f/3.5 on the 18-70mm. I would highly recommend f/2.8 at the very least, with f/1.8 or f/1.4 being far preferable. A 50mm, 35mm, or 24mm lens at f/1.8 at least, or f/1.4 if possible, would be fast enough for you to capture some decent concert shots. If you keep your existing camera body, I would say shoot for f/1.4 or f/1.2 if you can, as that should allow you to use ISO 400-800 (however, your shots are still going to be pretty dark).
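
To put rough numbers on how much a faster lens and a higher usable ISO buy you, here is a small back-of-the-envelope sketch (plain Python; the particular lens and ISO values are just illustrative assumptions):

```python
import math

def stops_from_aperture(f_old, f_new):
    """Stops of light gained by opening up from f_old to f_new.
    Light gathered scales with the square of the aperture ratio."""
    return math.log2((f_old / f_new) ** 2)

def stops_from_iso(iso_old, iso_new):
    """Stops gained by raising ISO (each doubling is one stop)."""
    return math.log2(iso_new / iso_old)

# Example: the 18-70mm wide open at f/3.5 vs. a hypothetical 50mm f/1.4 prime
aperture_gain = stops_from_aperture(3.5, 1.4)   # ~2.6 stops
iso_gain = stops_from_iso(400, 1600)            # 2.0 stops

print(f"aperture gain: {aperture_gain:.1f} stops")
print(f"ISO gain:      {iso_gain:.1f} stops")
print(f"total:         {aperture_gain + iso_gain:.1f} stops faster shutter speed")
```

About 2.6 stops from the aperture plus 2 stops from ISO is roughly 4.5 stops in total, i.e. a shutter speed somewhere around 20 times faster for the same scene.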



If you intend to photograph concert musicians up close, 24mm, 35mm, or 50mm lenses should suffice. If you think you might be photographing from farther away, or want to get closer shots with decent bokeh, 85mm, 100mm, or 135mm lenses would be better. Again, the fastest lens you can get would be ideal. At 85mm you should be able to get as wide as f/1.2; for 100mm and 135mm you might be able to find lenses as fast as f/2, though f/2.8 should work.


Lenses with wider apertures tend to have thinner DOF, which might pose a small challenge for getting perfect focus. A camera body with better ISO performance, up to ISO 3200, would greatly help in this area, as you could stop down the lens to increase your DOF. On the flip side, thinner DOF results in smoother background blur (bokeh). Not sure that bokeh will really be an issue for concerts...backgrounds will probably be all dark or black anyway.
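
If you want a feel for how quickly the DOF shrinks as you open up, here is a minimal sketch using the standard hyperfocal-distance approximation (the 0.02 mm circle of confusion is an assumed value, roughly appropriate for an APS-C body like the D70s):

```python
def depth_of_field(focal_mm, f_number, distance_mm, coc_mm=0.02):
    """Approximate near/far limits of acceptable focus using the
    standard hyperfocal-distance formulas."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * distance_mm / (hyperfocal + (distance_mm - focal_mm))
    if hyperfocal > (distance_mm - focal_mm):
        far = hyperfocal * distance_mm / (hyperfocal - (distance_mm - focal_mm))
    else:
        far = float("inf")
    return near, far

# A 50mm lens focused at 3 m (3000 mm), wide open vs. stopped down
for f_number in (1.4, 2.8, 5.6):
    near, far = depth_of_field(50, f_number, 3000)
    print(f"f/{f_number}: in focus from {near / 1000:.2f} m to {far / 1000:.2f} m "
          f"({(far - near) / 1000:.2f} m deep)")
```

Roughly, each two stops of aperture doubles the usable depth of field at the same distance, which is why better high-ISO performance lets you buy back focusing margin.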


Why is my camera limited to a shutter speed of 1/250th when the flash is up?


Sometimes in bright backlight and with a large aperture (f/2.8) I want to use flash at high shutter speeds. But when I pop up the flash on my D7000, it caps the shutter speed at 1/250th, which overexposes the shots and makes everything look like a nuclear bomb just went off.



What's the logic behind the limiting of the shutter speed when using flash? Is there anything I can do to get flash + faster shutter speeds?



Answer



The limitation has to do with synchronizing the length of the exposure with the length of the flash burst. The flash does not go off immediately...it occurs a fraction of a moment after the shutter has opened, and the burst only lasts a fraction of the time the shutter is open. This is necessary to produce a proper exposure when using a full-powered flash due to the way the shutter itself works. The maximum shutter speed at which this can be achieved is 1/200th or 1/250th of a second most of the time. With more precise logic and shutter timing, you can achieve 1/500th of a second flash sync, however that's more difficult (and therefore expensive) to do, which is why it's relegated to only top-of-the-line pro-grade cameras. This ensures that the front shutter curtain is fully open before the flash pulse is set off, and that it stays open long enough for the effects of flash to properly light the scene and allow correct exposure before the second curtain closes.


There is also an alternative approach to flash, called high-speed flash sync. This allows flash to be used at any shutter speed. The difference between high speed flash sync and normal flash sync is that at shutter speeds above the camera's maximum sync speed, the second curtain starts to close before the first curtain is fully open...a shutter "gap" transitions across the sensor. High speed flash sync uses a lower-powered flash pulse set off multiple times in rapid succession to ensure that the scene is properly lit for each part of the sensor as that shutter gap moves across it. High speed flash sync is generally less efficient than standard flash (the effective flash power is reduced), and in most cases it should not be needed...but in a pinch it can do the job.
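
To illustrate the travelling-slit idea with numbers, here is a rough sketch (plain Python). The 1/250 s curtain travel time is an illustrative assumption rather than the spec of any particular camera:

```python
def fully_open(exposure_s, curtain_travel_s=1 / 250):
    """True if the first curtain clears the frame before the second curtain
    starts moving, so a single flash pulse can expose the whole sensor
    (normal sync). The 1/250 s travel time is an assumed example value."""
    return exposure_s >= curtain_travel_s

def slit_fraction(exposure_s, curtain_travel_s=1 / 250):
    """Approximate fraction of the frame height uncovered at any instant."""
    return min(1.0, exposure_s / curtain_travel_s)

for speed in (1 / 60, 1 / 250, 1 / 1000, 1 / 4000):
    print(f"1/{round(1 / speed)} s: single flash pulse OK = {fully_open(speed)}, "
          f"open slit ~ {slit_fraction(speed):.0%} of the frame")
```

At 1/1000 s, only about a quarter of the frame is uncovered at any instant, which is why a single full-power pulse would leave most of the image dark and the flash has to pulse repeatedly instead.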


depth of field - Why is my far away background in focus even with a low aperture number?


I took a picture of my son in an elementary school cafeteria. The room was very well lit and the parameters of the shot were Canon T3, 18mm, F/3.5, 1/125, ISO 100. I was close to him and his head basically filled the frame. He was about 20 feet from the wall and against the wall there were tables with books on them.


When I took the picture, he was totally in focus but the wall, table, and books were also in focus, though not as sharp as he was. I expected to get a very blurred background and was surprised that I could make out everything 20 ft behind him in such detail.


Does anyone know why the shot ended up that way? Thanks!



Answer



The small degree of defocus in the background is due to the focal length used being very short (18mm).


The amount of background blur depends on the size of the entrance pupil, not the f-number. The entrance pupil size is the focal length divided by f number, so in this case it would be about 5mm. This is quite small. A 100mm lens at f/3.5 would have an entrance pupil of size 29mm.


For this reason you will get more out-of-focus backgrounds at the other end of the kit zoom range: at 55mm and f/5.6 the entrance pupil will be about 10mm, twice the size.
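
If you want to check the arithmetic yourself, it is a one-line calculation (a small Python sketch of the numbers quoted above):

```python
def entrance_pupil_mm(focal_length_mm, f_number):
    """Entrance pupil diameter = focal length / f-number."""
    return focal_length_mm / f_number

# The shot in the question, the long end of the kit zoom, and a 100mm lens
for focal, f_num in ((18, 3.5), (55, 5.6), (100, 3.5)):
    print(f"{focal}mm at f/{f_num}: entrance pupil ~ "
          f"{entrance_pupil_mm(focal, f_num):.1f} mm")
```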



Friday 29 June 2018

ethics - Is composition after capture against any traditional photography rules?


I use my mobile phone for photography. Normally when I notice anything interesting, I take a picture with the point of interest somewhere near the region I want it to be. But I do this with the clear assumption that the final composition after I edit it using Snapseed might be completely different. I do this because I feel that I get a greater level of freedom and convenience when I compose offline, when I am sitting somewhere comfortably.


My question is: is this a common practice among photographers? Or do traditional photographers do the composition when they capture?


More specifically, is offline composing considered as cheating or something?


Updates:


Example of my offline composition


[ORIGINAL image] [FINAL image]



It wasn't me who added the "ethics" tag. And honestly, I wasn't thinking of ethics when I used the word cheating. What I meant is taking shortcuts. Technology has made it very easy to take good pictures. A better phrasing of the question would be: is offline composition frowned upon by traditional photographers?


I take photos as an outlet of my creativity. I don't intend to make money with it or use it for promoting anything. I just upload it to 500px.


I don't go to places to take photos. I take pictures of interesting stuff I find in places that life takes me. Being an introvert, I am not comfortable carrying a big camera, tripod, etc. in crowded places and attracting attention. So I prefer a phone with good camera specs (LG G6) for now. And I prefer taking pictures fast and not sticking around. That's why I prefer to compose later. Of course, I do a minimum of composition when I capture.


Even though I got the answers I need, I am finding it very hard to select the most appropriate answer here. Should I wait for a few days and select the answer with the highest votes?



Answer



To paraphrase a passage from the ancient Hebrew Book of Ecclesiastes:


For everything there is a season.


A time to be born, a time to die
A time to break down, a time to build up
A time to keep, a time to cast away

A time to use 'Auto Exposure', a time to use 'Manual Exposure'
A time to use 'Single Point AF-S', a time to use 'Area AF-C'
A time to crop in post, a time to carefully compose the final image when shooting.


As with many things in photography, there are techniques that can be very useful when learning how to do photography but that ultimately get in the way of doing the best kinds of photography. The trick is to recognize when something that was once useful has become something that is holding one back from becoming an even better photographer.


That's not to say that cropping is always in the former category and never cropping is always in the latter. There are also times and places when cropping after the fact will produce a better image than what the best efforts of the photographer, constrained by factors out of their control, can do in camera.


The oft-revered Henri Cartier-Bresson had this to say about cropping:



If you start cutting or cropping a good photograph, it means death to the geometrically correct interplay of proportions. Besides, it very rarely happens that a photograph which was feebly composed can be saved by reconstruction of its composition under the darkroom’s enlarger; the integrity of vision is no longer there.



Yet even H.C.B. used cropping (and considerable dodging and burning in the darkroom, likely performed by someone else) when it was the only way to get the shot he wanted.



[photo: Henri Cartier-Bresson, behind the Gare Saint-Lazare]



There was a plank fence around some repairs behind the Gare Saint Lazare train station. I happened to be peeking through a gap in the fence with my camera at the moment the man jumped. The space between the planks was not entirely wide enough for my lens, which is the reason why the picture is cut off on the left.



But in general, cropping as a means of saving or improving a less than ideally composed image is probably more useful as a learning tool in a photographer's development than as a primary tool in a mature photographer's bag of tricks. By cropping after the fact, the developing¹ photographer can self-critique their own compositions and consider ways in which they could have improved the photo by composing differently at the time the image was captured.


Take, for example, the original image included in the OP.


[the OP's original image]


Compare it to the OP's crop:


[the OP's crop]


This crop allows distractions to remain on three of the four edges of the frame in order to preserve, or even improve a bit, the (implied) lines between the man's head, the sun, and the sun's reflections in the water that form a triangle as a dominant feature of the composition. When we look at the photo, we can't help but notice it.



In many ways xiota's crop of the original in his answer is an improvement over the in-camera composition and over the OP's crop of it with regard to eliminating the peripheral distractions. But it also introduces a new problem: the position of the sun and its reflection on the water is now too close to the left edge. The implied lines between the man's head, the sun, and the sun's reflections are no longer the dominant compositional features in the central part of the image. The much less interesting vertical line in the center of the frame takes over.


[xiota's crop]


If the photographer had more carefully looked for a final composition in camera, they could have seen that and moved a bit to the right, while keeping the two vertical elements on the edges, to move the sun further inside the edges. Putting the man's head and the sun on opposite sides of the frame equidistant from the edge of their respective sides, framed by the strong vertical elements, would have vastly strengthened the overall composition.


It would have looked something more like this:


[example of the suggested composition]


This could only have been done by more carefully considering the final composition and adjusting the shooting position at the time the image was captured.


Ideally, the photographer would have perfectly positioned the camera to get the strongest composition prior to taking the shot. Back in the medium format days, when pressing the shutter button cost about $1 a pop in film and developing cost, that's the way a lot of folks did it. If the people moved before that point, the shot would not have been taken at all. In the real world today, we would probably take the shot as the OP did, as a safety shot in case the people started moving. If they remained stationary long enough, though, we might have also repositioned the camera to get the more ideal composition and used that frame as our selected image.


¹ See what I did there?


Why are DSLR lenses measured in F stops instead of T stops?


I understand the difference between f-stop and t-stop, but why are cinema lenses measured in one, and dSLR lenses measured in another?




Thursday 28 June 2018

reverse engineering - How to determine camera location from an existing photograph?


I have old photos of a town and landscape around it. I'd like to take these photos again, from the same locations. In some cases I am struggling with finding the original location of the camera.



I can identify objects in the photo, and I know their position on the map and in some cases even their dimensions, but I don't know anything about the camera or lens. Are there any techniques (or even better, any ready-made software) to calculate the position of the camera from the photo?



Answer



The key is to find areas of the image with a lot of parallax, such as a foreground building and a background tree. Try to pick a point as close to one edge of frame as possible. Now walk left/right (green) to find the correct point of intersection from the old photograph.


[diagram: matching the parallax intersection (green) to establish the camera axis (red)]


Now that you've done that, you've established a straight line to move along (red).


Pick a different parallax intersection on the other edge of frame. Instead of walking left/right, walk along the red axis you established earlier. Once you've matched that parallax, without spoiling the first match, you've found the position of the camera.


Once you're in the same position, matching the lens is easy. You can just look through the camera and adjust until the framing matches, or measure the angle of view.
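
If you would rather estimate the lens numerically than by eye, the horizontal angle of view follows directly from the focal length and the sensor (or film) width. A small sketch in Python; the 23.6 mm APS-C sensor width is just an example assumption:

```python
import math

def angle_of_view_deg(focal_length_mm, sensor_width_mm=23.6):
    """Horizontal angle of view of a rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

for focal in (18, 35, 50, 100):
    print(f"{focal}mm: {angle_of_view_deg(focal):.1f} degrees horizontal")
```

Measure the angular separation between two identifiable landmarks at the edges of the old photograph and compare it to the values this gives for your candidate focal lengths.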




There is software that can calculate the position of the camera, but generally you need a 3D model of the scene as a basis.


Wednesday 27 June 2018

night - What settings should I be using to shoot "Supermoon"?


This weekend is a "Supermoon" (the largest full moon of the year, in perigee, the closest to Earth). Would someone please advise how I could make the most out of my Canon PowerShot to photograph this phenomenon?



  • What settings should I use?

  • What times are the best with respect to the exact times of moonrise and moonset provided by the U.S. Naval Observatory? In other words, is it better 30 min before it crosses the horizon? Or just at the crossing?

  • Are there any tips and tricks for a beginner to benefit from?


  • Are moonrises or moonsets generally better/easier to capture?

  • Are there different techniques for different phases of the moon?


I read that full moon tends to come out flat, but I also want to benefit from the perigee.




How exactly is the deeper bit-depth of RAW mapped onto JPEG and the display?


I am trying to understand RAW better.


I have a Canon EOS 20D, and shoot in RAW+Jpeg mode. According to the specs in the manual, the RAWs of the 20D are 12 Bit. I understand that this means each pixel contains 36 bit of information. A Jpeg only has 3*8=24 bit of information.



In RAW+Jpeg mode, the 20D actually generates two Jpegs: one in full resolution (3504x2336), and one in down scaled resolution (1536x1024) that is embedded in the RAW file for preview purposes.


Sorry, have to post a whole battery of questions, don't know how to summarize my question, so here it goes:


How exactly are the 36 bit of the RAW mapped to the 24 bit of the full resolution Jpeg? Does it just take the 24 bit in the middle of the 36 bits, or at the beginning, at the end or what? Or is there a more sophisticated mapping going on?


Is the mapping the same for the separate full resolution Jpeg and the embedded preview Jpeg?


When I open a RAW in Raw Therapee, it again needs to be mapped down to 24 bit to be displayed at the screen. Is this again the same mapping or a different one?


Also, the RAW images always look very flat and drab, with very dim colors. (Only with Raw Therapee can I bring out the pop and vibrancy which I love from film.) Is the fact that the RAWs and derived Jpegs always look so drab without post-processing related to the bit reduction mapping, or does it have different reasons?



Answer



First, you are making a common mistake in thinking it is 36 bit. I made the same mistake for a while. In reality, RAW data is monochrome and thus only 12 bit in your case, since each pixel doesn't have any color information without looking at neighboring pixels.


Beyond that, it depends on the software being used. Color, as mentioned, is derived from the color of the filter on that pixel and the value of neighboring pixels of other colors, but the pattern used can vary.


Similarly, the reduction in bit depth varies even more. It could be a linear map that brings the darkest to darkest and brightest to brightest. It could just grab the middle. It could try to make processing judgements about where the black point and the white point should be and adjust according to that. It really depends on how the software decides to do it and then how you adjust the mapping during development.
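
To make the "it depends on the mapping" point concrete, here is a small sketch (Python with NumPy) of two of the possibilities mentioned above: a straight linear scaling, and a gamma-style curve that lifts the shadows roughly the way a typical JPEG tone curve does. Neither is literally what the 20D or Raw Therapee does; they just show how different the 8-bit results can be for the same 12-bit data.

```python
import numpy as np

raw = np.arange(4096)  # all possible 12-bit values, 0..4095

# Option 1: straight linear scaling from 12-bit to 8-bit
linear_8bit = np.round(raw / 4095 * 255).astype(np.uint8)

# Option 2: a gamma-style curve (gamma 2.2 here) that lifts the shadows;
# the exact curve a real converter applies is camera- and software-specific
gamma_8bit = np.round((raw / 4095) ** (1 / 2.2) * 255).astype(np.uint8)

# Compare what the same 12-bit value becomes under each mapping
for value in (64, 512, 2048, 4095):
    print(f"12-bit {value:4d} -> linear {int(linear_8bit[value]):3d}, "
          f"gamma {int(gamma_8bit[value]):3d}")
```

The linear map crushes the bottom of the 12-bit range into a handful of dark 8-bit values, which is a large part of why a straight linear rendering looks flat and dark next to a tone-curved JPEG.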



And that's really the point of RAW. It's designed to allow you to make selections about how to do that mapping as the photographer. If you just want an automatic process to form an 8 bit file for you, simply shoot JPEG. Using RAW is a waste of space. The point of RAW is that it lets you control the process of converting it to an 8 bit space by hand, and thus ensures you get the information you want out of it.


As for why it seems drab initially, it is probably just a stylistic thing for how the logic works. With Lightroom, it tries to make choices to make it look much more like a JPEG would by default, but adjustments are still needed in either case. That initial adjustment is going to vary from software to software and camera to camera and even photo to photo potentially.


history - What was the Group f.64, and why were they important?


I've heard of the "Group f.64", and know that famous photographers Ansel Adams and Edward Weston were members. What was this group and why was it significant? What about members of the group other than those commonly-recognized names?


Beyond print sales for college dorm rooms, what is their impact today? Are there tenets of or lessons from this group that are important or useful for photography today even in the modern digital world?




Tuesday 26 June 2018

equipment recommendation - Which prime lens for Pentax APS-C?


My girlfriend is buying a Pentax K-5 (I already have a Pentax DSLR so we thought it made sense for us to be able to share lenses), and she's not sure about which lens to get with it, so I've been charged with offering her my advice/research.



We left it last night that I'd ask here for advice on which superzoom to get (e.g. 18-200mm Sigma) as they would be very convenient, but, as usually happens when I start reading and researching, my targeted level of quality took a significant bump and I'm now thinking that if she's spending that kind of money on a prosumer body she may as well stick to the kinds of lenses that will help her get the best she can out of her camera. She's often examining the sharpness of her photos and is a bit of a perfectionist, so I think she'll enjoy her photography more overall if she knows she's not being held back by her lens, even though she'd have to swap over to my SMC Pentax-FA 80-320mm f4.5-5.6 when she wants telephoto.


Photography is a hobby of hers (neither of us are pros or anything like that). Normal usage will be landscapes, nature, friends/family shots, holidays.


As this prime would function as her general walkabout lens, I'm looking at focal lengths of around 35mm. One which has caught my eye is the SMC Pentax-DA 40mm F2.8 Limited.


From what I've read, the quality is good and having a pancake lens would be relatively portable and pretty damn cool. :)


Possible points for considering others:



  • She sometimes likes to get in close with flowers, etc., and it's not a macro lens.

  • I gather f/2.8 is decent but that larger apertures on primes are readily available.


It doesn't have to be Pentax as long as it fits her K-5. (She's German so I'm sure she'd be up for a Carl Zeiss, but I expect they're way more expensive.)



Her budget seems to be around €450ish for the lens, but I think she's open to moving slightly either way depending on value.


With that in mind, do you have any advice, recommendations of specific lenses or general guidance?



Answer



There's so many nice Pentax primes — why pick just one when you can collect a whole set? That's why we have Lens Buying Addiction, after all!


In seriousness, based on what you've said, I think the smc PENTAX DA 35mm F2.8 Macro Limited may be the place you should start. But, I've used quite a few Pentax primes, so let me give you the whole tour, as it were.


Series


Pentax makes two major series of prime lenses, and a number of prime lenses outside of those lineups as well.


FA Limited is the premier prime lineup from the last days of Pentax as a film camera company. They're relatively modern designs (dating to the late 90s and early 00s), but definitely made for film. Manual focus on these lenses is a pleasure, and in exchange autofocus is a bit slow. In general, they have gorgeous rendition, and are relatively fast and small. Presumably the lens coatings are not designed for digital sensors, but I really haven't seen anyone complaining. The lenses have full-frame coverage, and there are persistent rumors that Pentax is going to cancel them any day now — but they haven't yet. (For a while, they weren't even shown, but they're back on the Pentax lens roadmap now.)


DA Limited is the designed-for-digital premier prime series. The design focus is somewhat different: small, jewel-like "pancake" designs are favored, and even the models which aren't technically pancakes are quite compact. This is at the expense of a few stops in max aperture, the thinking apparently being that the amazing high-ISO capabilities of modern sensors make this less important. They do generally have nice bokeh, although maybe a little busier than that of the FA series. And a key point is that they're quite good wide open — you can get peak performance by stopping down a bit, but you don't need to. Autofocus is very fast (another advantage of keeping the weight down), and while they feature solid metal construction, the focus rings have less throw, which, combined with the slower optics, makes manual focus not as enjoyable as with the FA series. Overall, the optical qualities are excellent, and while the FA lenses have legendary status, the DA lineup is by no means a step down.


In 2013, Pentax announced a refresh of the five lenses in the DA Limited series, adding curved aperture blades and the improved "HD" optical coating. These versions have a red ring instead of the green ring found on the earlier models. The previous SMC coating was well-regarded, but theoretically these should have higher transmittance and even better control of flare and ghosting, and of course more attractive bokeh when stopped down. It's a pity they didn't add weather resistance, but that probably would have meant a significant redesign of the lens body.



DA ★ (or DA Star) is Pentax's "high grade" designation. Lenses in this series are weather- and dust-sealed, and feature ultrasonic focus motors built into the lens. There's a few long telephoto lenses with this designation, and one portrait prime.


Then, there are a few other lenses in production within the DA, FA, and D FA series. In general, the ones with a D are designed with digital in mind, and the FA means full-frame coverage. Up until recently, you could count on the D FA models having a manual aperture ring, but Pentax doesn't seem to hold that sacred.


There are also a lot of out-of-production lenses that are worth looking into, from the FA series and older. But in the interest of this-is-a-long-enough-post-already, I'm going to concentrate on only current models.


So, on to specifics:


Ultra-Wide


smc DA 14mm F2.8 This is one of the older DA designs. I've only used it briefly in a store; my main impression was that it's relatively large and heavy. It's, basically, what you might expect from a fast ultrawide prime. I haven't really heard anyone raving about this lens one way or another.


smc DA 15mm F4 ED AL Limited / HD DA 15mm F4 ED AL Limited I just got this, and, apart from struggling with wide-angle composition in general I really enjoy it. It's inevitably heavier and a bit bigger than the other DA Limited lenses, but still feels compact. The old-school distance scale on the lens is a nice touch, and the built-in lens hood is great. I don't really miss faster apertures at this focal length; generally I want more in focus, not less, and I tend to use it where there's plenty of light. Perspective distortion is inherent with ultra-wide, but barrel distortion is impressively controlled, and there's no other strange distortions or any optical problems to speak of. And for a wide-angle lens, it does an amazing job of handling flare — you can point this thing right at the sun and get nice results. Recommended, but probably not as your first starting point.


Wide


smc DA 21mm F3.2 AL Limited / HD DA 21mm F3.2 AL Limited My friend from whom I could borrow this moved to the other coast a couple of years ago, so I'm going from memory. My main impression was: yeah, that's fine. It's not dramatic enough to really feel wide, but it takes in enough of the scene that it doesn't feel intimate. However, the field of view on a 1.5× crop-factor dSLR like the K-5 is roughly equivalent to the traditional 35mm, which I know some photographers really like. So if that's your thing, this may be a good choice for you. (Having used the iPhone camera extensively, I think I have a little more appreciation for the versatility of a moderate wide angle lens like this than I did; I should give it another try.) As with the 15mm, I don't think the speed is generally a concern, not with good high ISO from the camera.


Normal



smc FA 31mm F1.8 Limited I have not used this lens, but it is certainly legendary among Pentax photographers. Since I don't have direct experience, I won't go on, but people do love it. This is the third-most expensive lens Pentax has on the market, which, at less than $1000, is really another way of saying that Pentax's great lenses are really quite affordable, comparatively speaking.


smc DA 35mm F2.4 AL This is Pentax's new affordable normal prime, their answer to Nikon's AF-S DX Nikkor 35mm f/1.8G. It's a decent lens, and rumor has it that it's basically an update of the generally-respected smc FA 35mm F2 AL, which was recently discontinued. Plastic construction, but solid. There's a few other cost-cutting measures — for example, no "quick shift" to disengage the focus system when AF isn't active, which I think is a nice feature to have. Image quality doesn't stand out particularly, either positively or negatively. I think it's a decent value at the price, but I also think it's probably not what you're looking for. (It's a better match for the K-r than the K-5.)


smc DA 35mm F2.8 Macro Limited / HD DA 35mm F2.8 Macro Limited The main reason I don't have this is because I have and love the 40mm Limited, and I decided that they're too close to justify it. I came close, though! I've seen some technical reviews which conclude that the numbers aren't as good as those for the other Limited lenses, but the actual results I've seen are stellar. Lens connoisseur Mike Johnston puts it through some hands-on testing and ends up calling it "an optical paragon", which is really better than taking my word for it. Compared to the DA 40mm, the slightly wider field of view makes it closer to a true normal lens on the Pentax DSLRs (although even wider would be ideal — around 29mm). One downside is that since it's designed for macro focusing, the AF is somewhat slower than on the DA 40mm. The K-7 or K-5 can still focus it quite quickly, though.


smc DA 40mm F2.8 Limited / HD DA 40mm F2.8 Limited This is my most-used lens. (I posted a bunch of sample family snapshots in the photo chat a few weeks ago, and there's one in the official Pentax Photo Gallery, which I'm kind of proud of because that site is very unfriendly to posting pictures of your kids.) From a technical standpoint, it produces gorgeous results basically effortlessly. And I'm pretty sure it's the fastest autofocusing of any Pentax lens. The focal length is a bit odd on a 1.5×-crop camera, but it basically works out like a normal lens with a built-in crop that makes you make tighter compositions without thinking about it. It also has a close-focusing distance of about 13", which is not macro by any means but lets you get comfortably close to things if need be. Anyway, this stays on my camera 80% of the time. One tip: stow away the screw on lens cap and replace it with a clip-on one. The metal lens cap is classy, but too fiddly for field work. (If you have a Pentax Auto110 by any chance, the cap from the 35mm lens fits perfectly, and still says Pentax. Heh.)


smc FA 43mm F1.9 Limited A friend of mine got this at the same time I got the DA 40mm, so I've been able to compare pretty well. Stopped down, the results are pretty similar; of course, the FA 43mm is slightly more than a stop faster, and the bokeh definitely is nice. At f/2.8, the DA 40mm lens is sharper across the frame, while the FA 43mm is distinctly sharper in the center. The FA is much slower to autofocus, to the point where I'd sometimes miss things I know I wouldn't have with the DA 40mm.


Portrait Telephoto


smc FA 50mm F1.4 This is the classic film-era nifty-fifty from Pentax. The FA design comes directly from the earlier models with F, A, and even M designations, and probably even before that. It stacks up just fine against similar lenses from other brands, and again, Mike Johnston likes it. However, it's a bit, I dunno, pedestrian compared to many other Pentax primes. At the current pricing, I really don't see why one would go for it over one of the Limiteds, unless f/1.4 was really, really important to you.


smc DA 50mm F1.8 Pentax's newest lens, but from the optical specifications, probably a refresh of the classic 50mm f/1.7 — which is no bad thing, because that's a wonderful lens (also on Mike Johnston's top 10). I have the old version, and it's my favorite lens on my K1000 film camera. The new version adds modern conveniences like autofocus and newer optical coatings, and features rounded aperture blades for smooth bokeh — just like the much more expensive DA ★ 55mm below. Like the 35mm f/2.4, this is positioned as a budget prime, but as with that lens, the skimping seems to be on things like a plastic lens mount rather than metal — not on image quality.


smc D FA 50mm F2.8 Macro Another older design, with solid but plasticky construction. It does have the quick-shift mechanism like the DA lenses, so you can manually focus even with AF engaged. Like most macro primes, it features excellent optical quality, but sometimes people complain about the octagonal highlights when stopped down. Also, on APS-C, this becomes a kind of weird length for macro. The wider 35mm works better as a general-purpose lens (or for document reproduction), and one of the other choices is probably better for portraits.


smc DA ★ 55mm F1.4 SDM As far as I can tell, people fall into two camps on this lens: people who have read technical reviews and hate it in theory, and people who have used it for portraits and love it in reality. This is a special-purpose lens with a modern design specially made for portraits. It's got Pentax's quiet but slow USM focusing system, a non-rotating front element, and all the high-end touches one would expect of a ★-class lens (including dust/weather sealing). The aperture blades are rounded for smooth bokeh, which is nice because while it's designed to be used wide open, that gives flexibility to stop down a bit as well. And, it's one of only a very few Pentax lenses to use a nanocoating, which they call Aero Bright. It's also heavy and quite a lot more expensive than the FA 50mm F1.4. If you're really serious about making photographs of people and don't give a second thought to the weight, definitely consider this.



smc DA 70mm F2.4 Limited / HD DA 70mm F2.4 Limited I have this lens and use it often. It pretty much never goes wrong. It's not as tiny as the DA 40mm, but is still incredibly small and light. Despite the diminutive size, there are basically no technical flaws or problems which impact image quality. Like the DA 15mm, it's got a handy built-in pull-out lens hood. The focal length is just right for close-in portraits, but I've also used it for kids' soccer games. Because of the small size and weight, this is one of the fastest focusing options.


smc FA 77mm F1.8 Limited As with the DA 40mm / FA 43mm Limited, I've had some time to compare against a friend's copy. The FA 77mm is a little bit faster in terms of aperture, and it's reasonable to argue that the bokeh has a more dreamy quality. The 70mm is a bit sharper across the frame, although for the kind of work one would usually put it to, I don't think that's a serious complaint against the 77mm. Also, as with the 40mm and 43mm, the newer DA lens is decidedly snappier to autofocus. The FA lens is quite a bit more expensive, and overall the results are quite similar — I don't think most people could tell the resulting images apart in a double-blind study.


Long Telephoto


smc D FA 100mm F2.8 Macro See the next entry.


smc D FA 100mm F2.8 WR Macro I have to insert the caveat that while I've at least handled most of the lenses above (and used a number of them quite extensively), I've never even touched the 100mm macros. So, I'm going to restrict myself to more general comments. These are basically the same lens; the WR version is newer and features a Limited-style metal body with weather sealing. Optically, they're the same, and quite well regarded. There's some hope that the WR designation will find its way to a newer version of the 35mm Macro as well, since a weather-sealed normal is conspicuously missing from this list.


smc DA ★ 200mm F2.8 ED (IF) SDM I was able to use one of these for a couple of weeks as part of the Gear Grant Program. This is a great lens with beautiful rendering. Construction is top-notch, and like all ★ lenses, it's weather-sealed, so it could be part of a nice wildlife kit. However, wow, it is large. It is literally larger and heavier than all my DA Limited lenses put together. Personally, I'm not sure I'd go Pentax in the first place if this sort of lens was my attraction, as the big two camera brands provide even more options in this range — and up. But if you're invested in Pentax for other reasons and this is something you need, it's good that there's the option.


smc DA ★ 300mm F4 ED(IF) SDM Basically a twin to the 200mm DA ★, announced at the same time and with similar construction. The 300mm has the distinction of being the second most expensive current Pentax lens, bested only by the smc DA ★ 60-250mm F4 ED (IF) SDM weather-sealed zoom. I haven't used this one but by all reports if this is the kind of lens you're looking for you won't be disappointed.


Monday 25 June 2018

cameraphones - Where can I find detailed camera, sensor and lens specs for cameras in smartphones?


For pretty much any dedicated digital camera out there I can easily find detailed specs on sites such as dpreview.com.


For cameras in smartphones however, these details are not easily found, or maybe I don't know where to look. Examples of items I'm looking for:



  • sensor size

  • lens focal length

  • image size

  • lens f-stop

  • ISO range

  • shutter speeds


  • closest focus distance

  • etc.


Is there a database or review site out there that reports these camera specs for all the various mobile devices?




Sunday 24 June 2018

autofocus - How can I effectively use the focus points (of a Canon DSLR) to get accurate focus on a small subject?


I'm shooting birds with my Canon 1200D and 55-250mm lens.


I know this is not the ideal lens and focal length for bird photography. But I am new to this field, and before investing in expensive equipment, I would like to be more thorough with the craft.


So, to the scenario now. Often when I'm shooting birds with my 55-250mm, they are so far away that they appear as tiny dots in my viewfinder, often little larger than the focus points.


Now my question is, in my frame if the bird is only slightly bigger than one of the focus points (and the rest of frame is say trees and forest), can I use that focus point to accurately focus on the bird and get a reasonably sharp image of the bird?


I have obviously faced this in the field, and the results are not so impressive. So that led me to wonder - if these focus points are designed to focus on that area of the frame - then technically I should be able to get a sharp focus on that object.



But I don't. Is it because I need a lot more practice, or is it fundamentally not possible?



Answer



You're going to have a very tough time shooting far-away birds with that setup. Capturing birds-in-flight is one of the most challenging forms of photography there is in terms of gear limitations and photographer skill. In addition to photographic skill the photographer must also practice excellent fieldcraft to get as close as possible to the subjects.


You might be better served to find other subject matter that will be more within the capability of the gear you have and learn how to use it on subjects that will allow you to learn progressively and see more of the differences in your capabilities as you improve. Depending on where you live, go to a park and shoot the animals that will allow you to get closer to them. Hang out near an airport and photograph the planes taking off and landing. Shoot your friends' and family's pets and children at get togethers.


With pretty much any modern AF system the areas of actual sensitivity are larger than the little markers for each AF point that you see in your viewfinder. The good news is that each one covers a larger area than you think. The bad news is that each one covers a larger area than you think. If your target is very small but there is an area of even greater contrast within the area of sensitivity, the camera will almost certainly focus on the area of greater contrast. For a look at how this works out practically when shooting, see this entry from Andre's Blog. For a look at how AF accuracy can vary from shot to shot, see this entry from Roger Cicala's blog at lensrentals.com.


The EOS Rebel T5/1200D has a very basic AF system. For action you're probably going to be limited to the center cross-type point only, as it will perform faster and more accurately than the others. The EF-S 55-250mm f/4-5.6 in its various versions is a fairly slow lens - both in terms of the maximum aperture that limits what shutter speeds you can use in less than optimal light, and in terms of AF speed.


The AF systems of cameras more optimized for action shooting are so much more sophisticated and configurable that moving from a basic AF system like the 1200D's to one like the 80D's or the 7D Mark II's is going to be a little like learning to drive in a Hyundai Accent with an automatic transmission and then moving to a Corvette with 6 speed manual. Yes, the skills you learn on the 1200D will be needed with the advanced camera. But a lot of other knowledge and skill that the 1200D does not require you to have (or allow you to use if you already have it) will also be needed to control the more advanced tool set.


Can I edit photos in Lightroom using only previews?


All my photos are stored on a NAS. I edit them on one of my two laptops. The catalog is synchronized between the laptops and I only use one laptop at a time to edit photos. This model works very well as long as I am on my home network.


But can I take one of my laptops on a trip and still be able to edit photos that physically sit on the home network? What if I pre-render high res previews of all my photos? Will I be able to edit photos using only previews?



Answer



No, you can't. LR will not open a file in the Develop module if the image is not there. You can, however, change keywords and other attributes.


Saturday 23 June 2018

lens - Are there any Canon EF-S prime lenses or tele-zooms?


Canon is supposed to have introduced the EF-S system for the APS-C cameras to make smaller/lighter lenses and keep the costs low. However, most of their EF-S lenses are general-purpose zooms in the 15/17/18mm to 55/85/135/200mm range. There is only one telephoto zoom lens (55-250mm), and one prime* (60mm macro). In fact, most of the Canon non-L prime lenses seem to be a decade or two old and for the EF system.


While EF lenses are supported on APS-C bodies, they are much heavier and costlier. So, if one were to build a photography kit based on the EF-S system (primary constraints being cost & overall equipment weight), there are very few options to do so. The only high quality lenses are the 15-85mm & 17-55mm ones.


Are there any EF-S lenses (Canon or third party) that fill this gap for telephoto focal lengths and prime lenses? Is there any reason why Canon has not addressed this segment?


* Corrected the "no prime" mention based on the answers.



Answer



The EF-S mount allows the rear element of lenses to sit closer to the sensor. This makes wide angle lenses slightly easier to design. The format size (of APS-C) allows lenses to be made lighter as the image circle the lens projects can be smaller.


EF-S doesn't really make sense for telephoto lenses, as the rear element sits quite far from the mount anyway. You do save a little weight, but not much since the size of the front element is dictated by the aperture, regardless of the size of the image circle. For wider lenses the size of the front element is dictated by the angle of view more than the maximum aperture.



As for primes, Canon make one EF-S prime, the 60mm macro.


The story goes that Canon introduced the EF-S lenses to allow them to scale down existing lens designs as the basis for new lenses, as opposed to creating a new optical formula from scratch. The EF-S 60mm macro is thus a scaled-down version of the 100mm macro. I don't really buy this however, as if you look at the block diagrams there are some significant differences.


The relative dearth of primes for the format is probably based more around the target market: Canon sell a lot more APS-C bodies, and their users are more likely to prefer zooms.


The fact that they recently celebrated the 50 millionth camera and 70 millionth lens paints a stark picture: the vast majority of users have just one lens, which is likely to be a standard zoom.


Friday 22 June 2018

Pushing film while stand developing?



Recently, I came across an experiment in stand developing Kodak Tri-X. The photographer shoots the same scene at box speed (ISO 400), 800, 1600, and 3200. The entire roll is developed in Rodinal diluted to 1:100 for an hour—he only agitates for 30 seconds at the beginning and for a second after 30 minutes. I noticed that the images, for the most part, look the same. There are differences in grain and shadow detail, of course, but the exposure looks the same.


So: do you have to change development times when you stand develop at different speeds?



Answer



A great deal here depends on the developer you're using. In this case, there are three key ingredients. First is the fact that it's Rodinal. Rodinal is a one-use developer, meaning that you use it once and then throw it away, because the chemicals get "used up" in the course of developing one batch of film. Second is the high dilution, to keep the amount of chemical available in any one part of the film to a minimum. Third is the minimal agitation -- basically, just enough to give a reasonable assurance against air bubbles forming on the film.


With a typical developer, you control the amount of development by the time you let the film sit in the developer. The developer is concentrated enough that the longer you let it sit, the more it develops.


With Rodinal at 1:100, you're basically just letting it develop until the developer is used up. If you were to let the film sit in the developer for another hour or two, it wouldn't make much difference -- the developer is pretty well used up by then, so it wouldn't change much more. The lack of agitation means that (for the most part) as the developer gets used up, it stays close to the area of the film that used it up. Areas that had different exposure use up the developer at different rates, so you (mostly) want to keep it in place, to guard against less-used developer getting redistributed to places where it would continue development for too long.


A different developer (e.g., D-76) would change the situation entirely though -- most other developers contain enough chemical to develop quite a bit of film. Even if you minimize agitation, if you left film in D-76 for an hour, it would be so over-developed it probably wouldn't be usable.


Summary: in the case of Rodinal, the development time isn't critical because the developer itself is basically self-limiting. With most other developers, however, the amount of development is controlled by the time the film is in the developer, so the time and temperature are critical to control the total development of the film.


What do I need in order to mount my zoom lens onto a tripod?


I just purchased a cheap Opteka zoom lens and it is super long -- over two feet when fully elongated. It has a built-in tripod mount and I have no idea how to use it.



I currently own a Vanguard tripod with a three-axis head. How does this fit in with my new lens? It seems like the ideal setup would be for the lens tripod mount to somehow connect to my existing head, so that the camera and the lens move in perfect synchronization with one another. Is this right? If so, what parts am I looking for to complete this setup? (And if not, how is the tripod mount on the lens intended to be used?)



Answer



You're assuming that the camera would be mounted to the tripod, and that there ought to be a way to mount the lens as well. That's not the case. When you're using a large, heavy lens with its own tripod mount/collar, you only mount the lens to the tripod. The camera simply hangs off the lens. Don't worry -- the lens mount will be able to support the camera.


There are cases with extremely long lenses (long as in focal length as well as long physically) where you might want to use two tripods to make the setup as stable as possible. When the focal length is a meter (1000mm) or more, it doesn't take a whole lot of camera motion to create a great deal of blur in the image. As you've intuited, that's a fussy, awkward undertaking, so it's not something you'd do for the heck of it, or if you need to be able to move the point of aim; it's much more common to use the photographer as the secondary anchor point if the shutter speed can remain reasonably high (that is, you'd mount the lens to your tripod/stand but still hold the camera).


geotagging - Where, oh where, are the GPS Point-and-shoot cameras?


"Geotagging" has been around for a number of years now. Why haven't the major camera producers (Canon, Nikon, etc.) gotten around to installing internal GPS systems in their point-and-shoot cameras to automatically geotag photos? Even more so, why not in SLRs which don't have the same size constraints?




metering - How well do smart phone light meter apps work?


There are a number of smart phone apps which use the phone's camera as a light meter. How well do these work?


The metering built into most modern cameras is very powerful and accurate, but in some cases it's nice to have a detached device. A smartphone is something I already have with me, and the apps are only a few bucks, compared to $40 for a very cheap analog meter — or hundreds for a nice one.


Do these apps really work, or are they gimmicks? Can they get the same information from a scene that a real device can? Can they be used as incident-light meters without additional physical attachments like a diffuser dome? Are they accurate? How accurate, compared to the various stand-alone devices? Do the phone apps have any advantages?



Answer




Disclosure: I'm the guy behind Cine Meter and Cine Meter II, so take what I say with a grain of salt, grin.



Do these apps really work, or are they gimmicks?



They really work, within the limits of what the built-in camera allows. They may not be able to measure really dim light, for example.



Can they get the same information from a scene that a real device can?



Yes.




Can they be used as incident-light meters without additional physical attachments like a diffuser dome?



No. You either need a diffuser dome like Luxi or an add-on incident meter like Lumu.



Are they accurate? How accurate, compared to the various stand-alone devices?



To within 1/10 stop (the limits of my measuring capability) if done correctly.


Many apps use the camera's own exposure setting, so can vary by nearly a stop at times from what a meter might see (smartphone cameras often "expose to the right" a bit to maximize SNR). iOS devices provide a "brightness value" in their EXIF data stream, and that value appears to track external meters pretty much exactly. Apps that use the brightness value, or that do image analysis on captured pix to compensate for the camera's ETTR behavior, should track an external meter to within a tenth of a stop.
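
For reference, the arithmetic a reflected-light metering app (or any meter) boils down to is the standard exposure value definition. A small sketch in plain Python, not any particular app's code:

```python
import math

def exposure_value(f_number, shutter_s, iso=100):
    """EV referenced to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# Two equivalent exposures of the same scene should report the same EV
print(round(exposure_value(5.6, 1 / 125), 1))   # ~11.9
print(round(exposure_value(2.8, 1 / 500), 1))   # ~11.9
```

Comparing an app against a standalone meter is then just a matter of comparing the EV each one reports for the same scene.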



Do the phone apps have any advantages?





  1. "The best lightmeter is the one you have with you."

  2. Cost: if you already have the phone, a metering app is a cheap add-on. If you don't have a phone, an iPod touch works fine, and it's cheaper than a standalone meter.

  3. Features: an app can add things like a false-color display or a waveform monitor to help in visualizing how the light falls on a scene and to look at contrast ratios. If they enable the front-facing camera, you can use 'em for reflected-reading "lightmeter selfies", using yourself as a model before your talent arrives. Also, when taking reflected readings, they show you exactly what the "meter" is seeing.


storage - What devices transfer photos from one flash media to another?



I recall seeing years ago a device which you would attach two USB drives to, and it would copy files from one USB device to the other. Usually two flash drives, but it would also work with SD-to-USB adapters and USB hard drives.


I can't find it again, and I want to be able to backup my photos by essentially duplicating my flash drives when they are full, and backing up to a larger USB drive.


Further, for a variety of reasons, I want to do this without using a computer. (speed, power consumption, target of theft, etc)


Any recommendations or tips?



Answer



Maybe you recall this one: http://reviews.cnet.com/home-entertainment/belkin-usb-anywhere/4505-6449_7-31490968.html, although I cannot find it on Belkin's site anymore.


lighting - Best fluorescent bulb color temperature for shooting people and interviews?


I have a small, ultra white matte 13x10 room that I need to photograph and film people in. Something like this. The most important thing to me is accurately depicting a person's skin color.


Yesterday I purchased some clamp work lights and a few 6500K CFL bulbs. However, after taking some test photos with these lights, I noticed that the skin color of my subjects looked very white (and even green where veins were). It seems that this lighting is a bit much for people.


What is the correct temperature for shooting people in an indoor room? I also have some nasty, yellow soft-white (2300k?) and halogen lights.



Answer



The problem with fluorescent lighting isn't the color temperature, exactly. You can generally adjust white balance to account for that. If there's a green tint, that can usually be compensated for with manual white balance. But the poor color rendering is harder.


The problem is that by their nature fluorescent tubes only produce light in narrow ranges of wavelengths (depending on the composition of the gasses and phosphors used). Since colors in objects are in a sense only actually there if the matching wavelength of light can be reflected back into your eyes or camera, this means fluorescent lighting flattens color in weird ways.



This is one of those cases where the human vision system's magical qualities run us into trouble. Your brain adjusts for this so quickly that you don't really notice unless you've got a reference light source to compare to. (There's a cool little exhibit on this at the Museum of Science in Boston, if you're ever in my area.)


"Full spectrum" bulbs use a combination of gasses to cover more spectrum. But even then, it tends to be spiky and weird, not the wide incandescence of, say, the sun (or a traditional light bulb). Many fluorescent bulbs list something called CRI, or "Color Rendering Index". This isn't perfect — I don't think it's regulated, and it appears to be determined by each manufacturer, not independently. And the process / standard could stand to be updated to an approach using more rigorous scientific understanding. But, it's what we've got.


So, you want to look for bulbs advertising a CRI at least in the high 80s — 100 is perfect on the scale. There's lots more detail in the Wikipedia article on CRI.


Of course, you could avoid the problem by using incandescent (including halogen) light sources.


Thursday 21 June 2018

education - What do you learn in a BA in Photography that can't be self taught?


What do students learn in a photography degree that they can't learn by themselves? Most amateur photographers (and many, many pros) are self-taught. A bit of reading, experimentation, practice, practice, practice, and reflection goes a long way in photography education. Does a photography degree just fast-track you through the initial stages of the learning curve, or do you get something extra?



I guess that access to expert criticism is a major part of photo school, which certainly can't be self-taught, but what else does the formally educated photographer learn that is difficult to pick up on your own?


I hope this doesn't sound like I am in any way dismissive of going to school; I'm just curious.



Answer



A fine art degree and a degree in a field such as Computer Science (CS) are not really the same. CS is a field where earning a BA is, to a large degree, a technical exercise. The technical aspects of photography are relatively limited, and you spend a lot more time learning to express yourself, which is a lot harder than it seems.


One thing that frustrates a lot of technically minded people about photography is the lack of correct answers. You can't benchmark ideas or run unit tests against expression. There is a complete and total absence of metrics. Thus, what a good school gives you is:



  • Completely removes you from your comfort zone. Suddenly, nobody who sees your photography has any interest in making you feel better about yourself.

  • An education in art in general, and in older and newer art photography.

  • The tools you need to define and express your vision.

  • How to talk about your work, accept criticism and criticize the work of others. The peer review aspect is absolutely critical. And, in the end, you learn how to defend your ideas and concepts.


  • Varying viewpoints from people who are absolutely not afraid to share them: your professors.

  • A community of photographers. This kicks your ass in a way no online community ever will because, after the first year, everybody is at least good and everybody knows how to talk about photography. Being able to discuss your projects with people who are able to understand them is absolutely amazing.

  • It forces you to take pictures that you would never take otherwise. I can't stress how important this is.


The internet suffers from what I describe as the photography echo chamber. People get waaaaaaay too caught up in gear, technical minutiae, and post-processing tutorials. Few of your forum peers will have looked at an album or a photography show that wasn't on the internet. For many of them, the best place to go for photography is flickr. Flickr's fine for what it is, but it's like learning about music by listening to local cover bands.


Besides, people like to be nice on the internet, because if they're not nice, they are often branded as trolls. For most of the photography I see online, there's no way to be nice without lying. Most pictures, even the ones people ask opinions on, are snapshots in the most derogatory sense of the word. You learn that fixing the technical problems with an image usually does nothing for the image, because most problems aren't technical, they're ones of vision (or rather, a lack of it.)


I spent a good couple of years "learning about photography" online. Then I went to art school and 3 months in, I realized that I'd really spent 4 years learning about cameras and lenses. The two years I spent in school taught me so much, I recommend some sort of formal training to everybody and anybody serious about expressing themselves through photography.


color - Why are red objects coming out unnaturally in my photographs?


I have observed lately during my shoots that when I try to capture a red object with hardly anything in the background, the color doesn't come out exactly as it is in reality. With flash or without flash, either way it is too saturated or a bit desaturated. Is this normal behaviour, and is it common for all colors?



I have seen this behaviour in natural light only; I have never shot under studio lights, so I have no idea about that.



Answer



It is difficult to answer the question accurately without knowing what you really photographed. But what you have reported is very similar to what many photographers experience when photographing red flowers.


This has a twofold cause.
First, the CMOS sensor in the camera has an extended spectral response reaching into the near infrared. See the diagram below, taken from this Kodak publication.


A near infrared filter is normally fitted to limit the response to 700 nm, corresponding to the human eye. In practice the sensor is still responsive to some light in the near infrared range. See Thom Hogan, Shooting Infrared With Digital Cameras.


Secondly, many red flowers have a strong spectral reflectivity extending into the near infrared. See the spectral response plot for the poppy below (www.reflectance.co.uk, a database of flower reflectance). See also this paper, FReD: The Floral Reflectance Database.


It is the additive combination of strong reflectance extending into the near infrared and the slight sensitivity of the sensor to the near infrared that results in saturated, overexposed red flowers.
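To see how that addition pushes the red channel over the top, here is a tiny numeric sketch in Python; the reflectance and leakage numbers are invented purely for illustration, not measured sensor or flower data:

# Sketch: near-IR leakage adding to the red channel until it clips.
# The numbers are made up purely to illustrate the additive effect.
visible_red = 0.80   # normalized red-channel signal from visible red reflectance
nir_leak = 0.35      # extra signal from near-IR reflectance the filter doesn't fully block
red_channel = min(visible_red + nir_leak, 1.0)
print(red_channel)   # 1.0 -> the channel clips and the petal loses texture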


[Diagram: CMOS sensor spectral response, from the Kodak publication]


[Plot: spectral reflectance of a poppy, from www.reflectance.co.uk]



The way to deal with this problem is to spot meter on the brightly reflecting red object and treat it as if it were a highlight (which it is). Then increase your exposure by about 2 EV. The exact value will depend on the lighting, and you will need to do some tests to determine the correction value.


photography basics - Photo Editing guide for gimp



Can I get some guidelines related to editing images with GIMP?


It's not easy to learn without detailed guidance.



How can I sharpen my gimping skills?




Wednesday 20 June 2018

jpeg - Should I use Active D-Lighting when shooting Raw?


I understand (ref) that I can do my own post processing to get the same effect that Active D-Lighting gives.


But does the in-camera ADL get applied to the raw image data, or is it only applied to the accompanying JPEG (when shooting RAW+JPEG)? It might be nice to see the preview image with that effect, knowing that the raw image is still unmodified.


If it only affects the JPEG, what does ADL do when shooting RAW only?



Answer



No and yes, mostly no though :)


ADL does not affect the RAW data directly. However, it sometimes affects exposure, which gives you a different RAW file than you would get under the same circumstances with ADL turned off.


Trying this on a Nikon Coolpix A with ADL off, on a given scene I get 1/320s at f/2.8 @ ISO 800, but as I increase ADL from Low to Extra High, the shutter speed goes up incrementally, seemingly by 1/3 stop at each step: 1/400s, 1/500s, 1/640s, 1/800s. This says that ADL is trying to preserve more detail in the highlights, which always comes at the expense of shadow noise.



However, this depends on the scene and results in a less predictable camera experience. It may give you a better exposure, but I strongly suggest you get to know the metering system and use Exposure Compensation (EC) as needed, which keeps things in your control.


canon - What causes this sort of RAW file corruption?


I came home from a recent trip to Gettysburg with almost 1500 photos. Among them were a handful that appear to have been corrupted, as shown below. It was hot during the hikes -- ambient temperature was near or above 100 Fahrenheit, so I suspect the heat contributed to the problem.


These files exhibit some interesting behavior:



  • When I view them in Lightroom or Windows Explorer, the files show normally for a split-second, then are over-painted with the light area you see below.

  • If I try to open the file in Canon's Digital Photo Professional, the thumbnail view shows the complete (uncorrupted) file, but DPP refuses to open the files for processing, saying that they're corrupt.


I haven't tried to figure out if all the files came from the same card, but all cards were formatted in-camera prior to the trip.


I don't believe I've lost anything important at this point, as I shot multiple frames of most subjects, but I'd really like to know what caused this problem, and what, if anything, can be done to resurrect these files.


corrupt raw files




Answer



I can only answer your second question - i.e. if anything can be done to resurrect these files. As others have mentioned, that uncorrupted version you're seeing briefly as you import them into Lightroom is the preview JPEG generated in-camera when you shoot RAW. There are plenty of tools that will recover those previews for you, hopefully even if the RAW file itself is corrupted. (The fact that you're seeing the previews in other software is an encouraging sign.)


Check this accepted answer for a list of software. I've tried dcraw and it worked a treat.


Good luck!


Where do non-standard shutter speeds come from?


When I google "shutter speed" and explore the first few hits, only the standard shutter speeds are listed - 1/1000, 1/500, 1/250, 1/125, 1/60, 1/30, 1/15, etc.


But when I go looking at photos on Flickr, I sometimes see photos taken with shutter speeds like 1/320 or 1/80 and similar oddities.


Where do these odd speeds come from?


I know we are talking about electronic devices here, so naturally we are not tied to the old mechanical camera limitations. So, can a shutter speed of 1/320 be manually chosen on a modern digital camera, or is it a product of the camera being set to Auto mode?




Answer



Those listed are full stops. Most cameras allow you to increment shutter speed and aperture in half-stops or one-third stops, and you can select intermediate values manually.



  • If you have the camera set to half-stops, then you'll have 1/350 between 1/250 and 1/500.

  • If you have 1/3 stop increments set, you'll have 1/320 and 1/400


To work these out: a full stop is double the light. A half stop is then a factor of the square root of 2, or about 1.4 (so if you go up a half stop, then another half stop, you multiply by 1.4 twice, and 1.4 * 1.4 = 2, which is your full stop).



  • So 250 times 1.4 = 350

  • and 350 * 1.4 = 500



For 1/3 stops, it's the cube root of 2, or 1.26x



  • 250 * 1.26 = 315 (rounded to 320)

  • 315 * 1.26 = 396 (rounded to 400)

  • 396 * 1.26 = 500


Note that numbers are rounded, considerably in some cases, for convenience. The actual shutter speeds the camera produces are probably more precise values than these.


1/2 stops   1/3 stops
1000        1000
750         800
500         640
350         500
250         400
180         320
125         250
90          200
60          160
45          125
30          100
23          80
15          60
11          50
8           40
6           30
4           25
3           20
2           15
1.5         13
1           10
            8
            6
            5
            4
            3
            2.5
            2
            1.6
            1.3
            1
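If you want to generate these yourself, here is a quick Python sketch using the square-root-of-2 and cube-root-of-2 factors described above; the printed values are then rounded for display, just as cameras do:

# Sketch: derive half-stop and third-stop shutter speeds from 1/250 s.
# Purely illustrative arithmetic; cameras round the displayed values.
base = 250                 # denominator of the starting speed, i.e. 1/250 s
half_stop = 2 ** 0.5       # ~1.414
third_stop = 2 ** (1 / 3)  # ~1.260

print([round(base * half_stop ** i) for i in range(3)])
# -> [250, 354, 500]  (displayed as 1/250, 1/350, 1/500)

print([round(base * third_stop ** i) for i in range(4)])
# -> [250, 315, 397, 500]  (displayed as 1/250, 1/320, 1/400, 1/500)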


post processing - How can I reverse-engineer the RAW conversion settings used in a JPEG image?



I lost my Lightroom catalog during an OS migration. I have a couple of JPEGs developed from RAWs and would like to produce a few more JPEGs from other RAWs in a similar mood. Is there a way to reverse engineer the development settings, given the RAWs and the JPEGs developed from them?




What is the difference in purpose between a focal plane shutter and a leaf shutter on a camera?


I inherited my Grandfather's beautiful Graflex Crown Graphic which he used for his entire career as a portrait photographer. (It's the picture on my avatar if you want to see it.)



Doing research on Graflex cameras I came across the Speed Graphic and the Crown Graphic. The main difference being the inclusion of a focal plane shutter in the SG.


My Crown Graphic has a leaf shutter in the lens itself. If my research is correct, I believe the Speed Graphic has both a leaf shutter and focal plane shutter. Why have two shutters on the same camera? What difference does this make?


And, in general what is the difference between a leaf shutter and focal plane shutter in function? I obviously know that the location is different! It also, clearly, makes the lens production less complicated because the lens no longer needs to contain a shutter mechanism. But, is there a photographic difference between the two?



Answer



The biggest functional difference between a leaf shutter and a focal plane shutter is that a focal plane shutter exposes the frame to the entire field of light collected at the front of the lens for precisely the same amount of time, and that it makes faster shutter speeds practical.


Due to the fact that leaf shutters are open in the center longer than at the edges, the light coming through the center of the lens falls on the image plane for slightly longer periods than the light coming from the edges of the lens. This wasn't such a big issue when photography first got started and the emulsions were so low in sensitivity that typical exposure times were in minutes, rather than hundredths or even thousandths of a second! In fact, the first "shutters" were lens caps or plugs that were removed and replaced on the front of the lens by hand.


As photographic emulsions became more sensitive to light and the desired exposure times got shorter and shorter, the limitations of the leaf shutter became a more significant issue. Even so, there are still new digital cameras produced today that use leaf shutters. The designers feel, and the marketplace seems to agree, that the tradeoffs are worth it in some cases.


A focal plane shutter can be designed to begin exposure on one side of the frame and end it on the other side of the frame. This allows all parts of the frame to receive light from all parts of the lens for the same amount of time. The earliest single curtain focal plane shutters, such as those used in the Speed Graphic, had a fixed slit that passed across the focal plane. By allowing the user to select different slit widths and spring tensions for the mechanism that drove the slit across the focal plane, shutter speeds ranging from 1/10 second to 1/1000 second were possible using most of the various models of the Speed Graphic.
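As a rough sketch of how a travelling slit turns into a shutter speed (Python; the slit widths and curtain speeds below are invented for illustration and are not actual Speed Graphic specifications):

# Sketch: effective exposure of a fixed-slit focal plane shutter.
# Each point on the film is exposed for roughly slit width / curtain speed.
# All numbers here are invented for illustration.
def exposure_time(slit_width_mm, curtain_speed_mm_per_s):
    return slit_width_mm / curtain_speed_mm_per_s

print(1 / exposure_time(3, 3000))   # ~1000 -> about 1/1000 s (narrow slit, high spring tension)
print(1 / exposure_time(30, 300))   # ~10   -> about 1/10 s   (wide slit, low spring tension)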


Why would the Speed Graphic have both a focal plane and a leaf shutter? It doesn't necessarily need a leaf shutter as well: barrel lenses without a leaf shutter can be used on a Speed Graphic. The focal plane shutter is there for speed, specifically faster shutter speeds, thus the name Speed Graphic. But the camera was certainly not speedy in terms of shot-to-shot intervals; manually resetting the FP curtain between shots took longer than cocking a leaf shutter in the lens. This may be one reason many users preferred having both options. The lineup of leaf-shutter lenses offered by lens makers could also be used across the Speed Graphic, Crown Graphic, and Century Graphic models. (The lack of a focal plane shutter allowed the Crown Graphic to be made slightly thinner, which allowed the use of some wider angle lenses than could be used with the Speed Graphic.)





Though not exactly applicable to your specific model, here is a link to the instructions for a c.1925 Top Handle Speed Graphic.


Tuesday 19 June 2018

technique - How to keep flash from disrupting the scene?


My friends and I have observed that the act of using a flash in informal settings (for example, pictures of a Christmas party or a busy toddler) tends to draw attention to the photographer and thus disrupt the scene we were trying to capture. Because of this, I tend to avoid the flash (external or on-camera) and just try to get the most light I can. But these are exactly the sort of poorly lit, dynamic situations that you'd want a flash for! How do I resolve this Heisenberg-ian paradox?


(Edit: When I said "off-camera", I really meant "not the built-in flash"; I have a Canon Speedlite 430EX II. Thank you for the answers which cover both types!)




Answer




  1. Bounce flash is much better, since it's less directly (and literally) in-your-face.

  2. Off-camera bounce flash is even better. If you're using a wireless radio system, that's probably best of all, but I actually have pretty good results using my camera's built-in optical TTL wireless (I suppose since it's less powerful than a full flash burst), particularly when combined with:

  3. Start shooting early and do it often; people will get used to it and start ignoring you.


lens - How does a DX-format sensor support FX lenses?


I have a Nikon D3200 camera. I want to buy a prime lens. I read some reviews online and understand that the "Nikon AF-S NIKKOR 50mm F/1.8G" would be a good lens to buy, but this is an FX lens.


Can someone please explain to me how my camera, which has a DX-format sensor, can support FX lenses?



Answer



It's quite simple:


A lens projects what is called an "image circle" onto the sensor. A DX-only lens is designed to project an image circle just big enough to cover that sensor. An FX lens is designed to project an image circle big enough to cover a FX frame.


Therefore an FX lens will work on a DX sensor properly.



(But it will have an effective "crop factor" of 1.5x as you are only using the inner section of the image circle, effectively "zooming" the lens in 1.5x. That said, on a DX body the crop factor applies to all lenses, both DX and FX.)
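If it helps to see the numbers, here is a small Python sketch of what that 1.5x factor does to the angle of view; the sensor widths are the commonly quoted approximations of about 36mm for FX and about 23.5mm for Nikon DX:

# Sketch: horizontal angle of view of a 50mm lens on FX vs DX.
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm):
    # Angle of view for a simple rectilinear lens focused at infinity.
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(horizontal_fov_deg(50, 36.0))   # ~39.6 degrees on an FX body
print(horizontal_fov_deg(50, 23.5))   # ~26.4 degrees on a DX body
print(horizontal_fov_deg(75, 36.0))   # ~27.0 degrees -- roughly the same view as 50mm on DX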


lens - Is there an IQ gain from Canon's 18-135 to 24-105L?


Just got a Canon 40D body, and was about to get Canon's 24-105L lens. I hear people raving about the lens, and it all made sense until I saw a Cameralabs' review of the lens.




So don’t buy the EF 24-105mm expecting an upgrade in optical quality alone or you may be disappointed. Where this lens really scores over general-purpose EF-S lenses is in terms of build and mechanical quality ...



Now I'm contemplating getting a 18-135 for a 3rd of the price.


I'd be interested to hear from somebody who had some experience with the lenses, and not just read some reviews (like I did).



Answer



I have a 24-105 L and it is an excellent lens. The build quality is superb, which makes it heavy, but L series lenses become an asset as a result of their build - they will last.


The image quality is also excellent. Check out the digital picture review linked to above. Whilst Andy says some will say he is biased (for a long time I think he has only reviewed Canon gear), the bias is somewhat irrelevant when you're comparing Canon to Canon.


The 18-135 covers a longer range; typically this isn't a good thing IQ-wise in zoom lenses (with some exceptions). I haven't used that lens, though.


Check out this link which contains comparison images of a lens test chart with both lenses. Set the focal lengths and apertures to comparable values. To me, at a quick comparison, the 24-105 looks to have significantly improved IQ:


http://www.the-digital-picture.com/Reviews/ISO-12233-Sample-Crops.aspx?Lens=355&Camera=453&Sample=0&FLI=0&API=0&LensComp=678&CameraComp=474&SampleComp=0&FLIComp=1&APIComp=0



My advice? Invest in the best quality lenses you can afford. If you buy L, or at least don't buy EF-S, they will last forever and transcend any bodies you own, be they crop or full frame.


Friday 15 June 2018

equipment recommendation - Kenko extension tubes or no name tubes?


I'm planning to buy extension tubes for my Canon 50mm f/1.8 lens. I found a Kenko set for about $150, but there are others going for about $10 or less that don't mention a brand name or anything. Is there any particular difference, or any disadvantage to the lens or camera, in using the no-name brand? It seems pointless to spend a lot because I know there are no optics involved, so image quality should not be a problem.



Answer



The more expensive Kenko tubes have contacts that allow the lens to pass metering and aperture information to the camera, and the necessary mechanics to work the aperture, so you can use the lens as normal.


The cheap ones lack these, so you have to meter manually and your aperture will be fixed at its smallest diameter - ideally you need a lens with a manual aperture ring, which I believe is not the case with your Canon.


lighting - How do you set the power for a flash in manual mode?


How do you calculate the flash power fraction, assuming you want to keep constant the other variables of ISO, aperture, and distance from subject to flash?


The Guide Number formula for a flash (strobe) is GN = distance * f-stop


Let's say the Guide Number is 174 at ISO 200. You want to shoot at f/8. This gives a distance of d = 174/8 = 21.75 feet


Now, let's say you're shooting in a room that doesn't have that much room to play with.


But you do know you can move the flash 10 feet away.


What fraction of power do you use?


Is it linear, e.g. 21.75/10=2.175, so use 1/2? Or something else?


The numbers in this example are for the Nikon SB-800 and a Nikon D90 camera, but the principle is likely the same.



Answer




Light falls off according to the inverse-square law. Basically it boils down to this equation:


I ∝ 1/r²


where I is the intensity and r is the radius (which is the flash-to-subject distance for us), and ∝ means 'proportional to'. Anyway, a couple of good articles on the subject can be found at Cambridge in Colour and at Portrait Lighting.
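The articles cover the theory; as a sketch of the arithmetic for the numbers in the question (assuming direct, unmodified flash, and that guide number scales with the square root of flash power):

# Sketch: manual flash power fraction for the SB-800 example in the question.
import math

guide_number = 174.0     # feet, at ISO 200
f_stop = 8.0
actual_distance = 10.0   # feet

full_power_distance = guide_number / f_stop              # 21.75 ft at full power
power_fraction = (actual_distance / full_power_distance) ** 2
print(power_fraction)                                    # ~0.21 -> set roughly 1/4 power
print(-math.log2(power_fraction))                        # ~2.2 stops below full power

So no, it isn't linear: the required power goes with the square of the distance ratio, which is why 10 feet needs only about a quarter of full power rather than half.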


Thursday 14 June 2018

lens - Why do zoom lenses and compact cameras have varied maximum aperture across the zoom range?


Why does a camera's maximum (allowed) aperture get smaller when you increase its zoom?



Answer



The short answer is that it is cheaper to manufacture such lenses. The longer the lens and the wider the aperture, the larger the optical elements in the lens - and thus the larger the expense to produce them.


A lens like 70-200/2.8 must have a front optical element of 200mm/2.8=72mm, which is quite a chunk of glass. On the other hand, the 70-300/4-5.6 needs to be 300mm/5.6=54mm wide. If it were f/4 through its full range, the optical element would need to be 75mm wide - even larger than the much more expensive 70-200/2.8.



In your question, you say "the camera's maximum aperture". The camera does not have an aperture - the lens does. Minor but important difference, especially for SLRs - once you remove the lens you see that the camera is just a light bucket with a big hole in the front.


DETAILS:


The aperture is the ratio of the focal length of the lens to the size of the front optical element. Essentially


aperture = focal length / optical element size


For example, a 50mm f/1.8 lens has a 28 mm (50/1.8) element size.


If you're wondering why the f-stop numbers don't seem to be linear (they're not), it is because the amount of light collected by the lens is proportional to the square of the entrance pupil diameter, i.e. (focal length / f-number)². Because of this power of 2, f/4 collects twice as much light as f/5.6, since 5.6/4 = sqrt(2).
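A minimal sketch of that arithmetic in Python, using the lens examples from above:

# Sketch: entrance pupil ("front element") diameters and relative light gathering.
def pupil_diameter_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

print(pupil_diameter_mm(200, 2.8))   # ~71mm for a 70-200/2.8 at the long end
print(pupil_diameter_mm(300, 5.6))   # ~54mm for a 70-300/4-5.6 at the long end
print(pupil_diameter_mm(300, 4.0))   # 75mm -- what a constant-f/4 300mm would need

print((5.6 / 4.0) ** 2)              # ~2.0 -> f/4 gathers twice the light of f/5.6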


Why are Red, Green, and Blue the primary colors of light?


Colors don't have to be a mixture of red, green, and blue, because visible light can be any wavelength in the 390nm-700nm range. Do primary colors really exist in the real world? Or did we select red, green, and blue because those are the colors that human eyes' cones respond to?



Answer



TL:DR



Do primary colors really exist in the real world?



No.


There are no primary colors of light, in fact there is no color intrinsic in light at all (or any other wavelength of electromagnetic radiation). There are only colors in the perception of certain wavelengths of EMR by our eye/brain systems.




Or did we select red, green, and blue because those are the colors that human eyes' cones respond to?



We use three-color reproduction systems because the human vision system is trichromatic, but the primary colors we use in our three-color reproduction systems do not match each of the three colors, respectively, to which each of the three types of cones in the human retina are most responsive.




Short Answer


There's no such thing as "color" in nature. Light only has wavelengths. Electromagnetic radiation sources on either end of the visible spectrum also have wavelengths. The only difference between visible light and other forms of electromagnetic radiation, such as radio waves, is that our eyes chemically react to certain wavelengths of electromagnetic radiation and do not react to other wavelengths. Beyond that there is nothing substantially different between "light" and "radio waves" or "X-rays". Nothing.


Our retinas are made up of three different types of cones that are each most responsive to a different wavelength of electromagnetic radiation. In the case of our "red" and "green" cones, there is very little difference in their response to most wavelengths of light. But by comparing that difference, and which of the two responds more strongly, our brains can interpolate how far, and in which direction (towards red or towards blue), the light source is strongest.


Color is a construct of our eye brain system that compares the relative response of the three different types of cones in our retinas and creates a perception of "color" based on the different amounts each set of cones responds to the same light. There are many colors humans perceive that can not be created by a single wavelength of light. "Magenta", for instance, is what our brains create when we are simultaneously exposed to red light on one end of the visible spectrum and blue light on the other end of the visible spectrum.


Color reproduction systems have colors that are chosen to serve as primary colors, but the specific colors vary from one system to the next, and such colors do not necessarily correspond to the peak sensitivities of the three types of cones in the human retina. "Blue" and "Green" are fairly close to the peak response of human S-cones and M-cones, but "Red" is nowhere near the peak response of our L-cones.





Extended Answer


The spectral response of color filters on Bayer masked sensors closely mimics the response of the three different types of cones in the human retina. In fact, our eyes have more "overlap" between red and green than most digital cameras do.


The 'response curves' of the three different types of cones in our eyes:
[Chart: response curves of the human S, M, and L cones] Note: The "red" L-line peaks at about 570nm, which is what we call 'yellow-green', rather than at 640-650nm, which is the color we call "Red."


A typical response curve of a modern digital camera:
[Chart: Bayer filter spectral response of a typical modern digital camera]
Note: The "red" filtered part of the sensor peaks at 600nm, which is what we call "orange", rather than 640nm, which is the color we call "Red."


The IR and UV wavelengths are filtered by elements in the stack in front of the sensor in most digital cameras. Almost all of that light has already been removed before the light reaches the Bayer mask. Generally, those other filters in the stack in front of the sensor are not present and IR and UV light are not removed when sensors are tested for spectral response. Unless those filters are removed from a camera when it is used to take photographs, the response of the pixels under each color filter to, say, 870nm is irrelevant because virtually no 800nm or longer wavelength signal is being allowed to reach the Bayer mask.




  • Without the 'overlap' between red, green and blue (or more precisely, without the overlapping way the sensitivity curves of the three different types of cones in our retinas are shaped to light with peak sensitivity centered on 565nm, 540nm, and 445nm) it would not be possible to reproduce colors in the way that we perceive many of them.

  • Our eye/brain vision system creates colors out of combinations and mixtures of different wavelengths of light as well as out of single wavelengths of light.

  • There is no color that is intrinsic to a particular wavelength of visible light. There is only the color that our eye/brain assigns to a particular wavelength or combination of wavelengths of light.

  • Many of the distinct colors we perceive can not be created by a singular wavelength of light.

  • On the other hand, the response of human vision to any particular single wavelength of light that results in the perception of a certain color can also be reproduced by combining the proper ratio of other wavelengths of light to produce the same biological response in our retinas.

  • The reason we use RGB to reproduce color is not because the colors 'Red', 'Green', and 'Blue' are somehow intrinsic to the nature of light. They aren't. We use RGB because trichromatism¹ is intrinsic to the way our eye/brain systems respond to light.


The Myth of our "Red" cones and the Myth of "Red" filters on our Bayer masks.


Where a lot of folks' understanding of 'RGB' as being intrinsic to the human vision system runs off the rails is in the idea that L-cones are most sensitive to red light somewhere around 640nm. They are not. (Neither are the filters in front of the "red" pixels on most of our Bayer masks. We'll come back to that below.)


Our S-cones ('S' denotes most sensitive to 'short wavelengths', not 'smaller in size') are most sensitive to about 445nm, which is the wavelength of light most of us perceive as a purple leaning more towards blue than towards red.



Our M-cones ('medium wavelength') are most sensitive to about 540nm, which is the wavelength of light most of us perceive as a slightly blue-tinted green.


Our L-cones ('long wavelength') are most sensitive to about 565nm, which is the wavelength of light most of us perceive as yellow-green with a bit more green than yellow. Our L-cones are nowhere near as sensitive to 640nm "Red" light as they are to 565nm "Yellow-Green" light!


As the simplified first graph above illustrates, there's not that much difference between our M-cones and L-cones. But our brains use that difference to perceive "color."


From comments by another user to a different answer:



Imagine an extraterrestrial alien who has yellow as a primary color. She would find our color prints and screens lacking. She would think we would be partially color blind not seeing the difference between the world she perceives and our color prints and screens.



That's actually a more accurate description of the sensitivities of our cones that are most sensitive to around 565nm than describing the peak sensitivity of L-cones as "red" when 565nm is on the 'green' side of 'yellow'. The color we call "Red" is centered on about 640nm, which is on the other side of "orange" from "yellow."


Why we use three colors in our color reproduction systems


To recap what we've covered up to this point:



There are no primary colors of light.


It is the trichromatic nature of human vision that allows tri-color reproduction systems to more or less accurately mimic the way we see the world with our own eyes. We perceive a large number of colors.


What we call "primary" colors are not the three colors we perceive for the three wavelengths of light to which each type of cone is most sensitive.


Color reproduction systems have colors that are chosen to serve as primary colors, but the specific colors vary from one system to the next, and such colors do not directly correspond to the peak sensitivities of the three types of cones in the human retina.


The three colors, whatever they might be, used by reproduction systems do not match the three wavelengths of light to which each type of cone in the human retina is most sensitive.


If, for example, we wanted to create a camera system that would provide 'color accurate' images for dogs, we would need to create a sensor that is masked to mimic the response of the cones in dogs' retinas, rather than one that mimics the cones in human retinas. Because dogs have only two types of cones in their retinas, they see the "visible spectrum" differently than we do and can differentiate far less between similar wavelengths of light than we can. Our color reproduction system for dogs would only need to be based on two, rather than three, different filters on our sensor masks.


[Chart: spectral response of the two types of cones in dog retinas]


The chart above explains why we think our dog is dumb for running right past that brand new shiny bright red toy we just threw out in the yard: he can barely see the wavelengths of light that we call "red." It looks to a dog like a very dim brown looks to humans. That, combined with the fact dogs don't have the ability to focus at close distances the way humans do - they use their powerful sense of smell for that - leaves him at a distinct disadvantage since he's never smelled the new toy you just pulled out of the packaging it came in.


Back to humans.


The Myth of "only" red, "only" green, and "only" blue



If we could create a sensor so that the "blue" filtered pixels were sensitive to only 445nm light, the "green" filtered pixels were sensitive to only 540nm light, and the "red" filtered pixels were sensitive to only 565nm light, it would not produce an image that our eyes would recognize as anything resembling the world as we perceive it. To begin with, almost all of the energy of "white light" would be blocked from ever reaching the sensor, so it would be far less sensitive to light than our current cameras are. Any source of light that didn't emit or reflect light at one of the exact wavelengths listed above would not be measurable at all. It would also be impossible to differentiate objects that reflect a LOT of light at, say, 490nm and none at 615nm from objects that reflect a LOT of 615nm light but none at 490nm, if they both reflected the same amounts of light at 540nm and 565nm. It would be impossible to tell apart many of the distinct colors we perceive.


Even if we created a sensor so that the "blue" filtered pixels were only sensitive to light below about 480nm, the "green" filtered pixels were only sensitive to light between 480nm and 550nm, and the "red" filtered pixels were only sensitive to light above 550nm, we would not be able to capture and reproduce an image that resembles what we see with our eyes. Although it would be more efficient than the sensor described above as sensitive to only 445nm, only 540nm, and only 565nm light, it would still be much less sensitive than the overlapping sensitivities provided by a Bayer masked sensor. The overlapping nature of the sensitivities of the cones in the human retina is what gives the brain the ability to perceive color from the differences in the responses of each type of cone to the same light. Without such overlapping sensitivities in a camera's sensor, we wouldn't be able to mimic the brain's response to the signals from our retinas. We would not be able to, for instance, discriminate at all between something reflecting 490nm light and something reflecting 540nm light. In much the same way that a monochromatic camera cannot distinguish between any wavelengths of light, but only between intensities of light, we would not be able to discriminate the colors of anything that is emitting or reflecting only wavelengths that all fall within only one of the three color channels.


Think of how it is when we are seeing under very limited spectrum red lighting. It is impossible to tell the difference between a red shirt and a white one. They both appear the same color to our eyes. Similarly, under limited spectrum red light anything that is blue in color will look very much like it is black because it isn't reflecting any of the red light shining on it and there is no blue light shining on it to be reflected.


The whole idea that red, green, and blue would be measured discretely by a "perfect" color sensor is based on oft-repeated misconceptions about how Bayer masked cameras reproduce color (the green filter only allows green light to pass, the red filter only allows red light to pass, etc.). It is also based on a misconception of what 'color' is.


How Bayer Masked Cameras Reproduce Color


Raw files don't really store any colors per pixel. They only store a single brightness value per pixel.


It is true that with a Bayer mask over each pixel the light is filtered with either a "Red", "Green", or "Blue" filter over each pixel well. But there's no hard cutoff where only green light gets through to a green filtered pixel or only red light gets through to a red filtered pixel. There's a lot of overlap.² A lot of red light and some blue light gets through the green filter. A lot of green light and even a bit of blue light makes it through the red filter, and some red and green light is recorded by the pixels that are filtered with blue. Since a raw file is a set of single luminance values for each pixel on the sensor there is no actual color information to a raw file. Color is derived by comparing adjoining pixels that are filtered for one of three colors with a Bayer mask.


Each photon vibrating at the corresponding frequency for a 'red' wavelength that makes it past the green filter is counted just the same as each photon vibrating at a frequency for a 'green' wavelength that makes it into the same pixel well.³


It is just like putting a red filter in front of the lens when shooting black and white film. It didn't result in a monochromatic red photo. It also doesn't result in a B&W photo where only red objects have any brightness at all. Rather, when photographed in B&W through a red filter, red objects appear a brighter shade of grey than green or blue objects that are the same brightness in the scene as the red object.


The Bayer mask in front of monochromatic pixels doesn't create color either. What it does is change the tonal value (how bright or how dark the luminance value of a particular wavelength of light is recorded) of various wavelengths by differing amounts. When the tonal values (gray intensities) of adjoining pixels filtered with the three different color filters used in the Bayer mask are compared then colors may be interpolated from that information. This is the process we refer to as demosaicing.
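For anyone who wants to see that idea in code, here is a minimal bilinear demosaicing sketch in Python/NumPy. It assumes an RGGB Bayer layout and uses plain 3x3 averaging; real raw converters are far more sophisticated, so treat this only as an illustration of interpolating color from neighbouring single-channel samples:

# Sketch: bilinear demosaicing of an RGGB Bayer mosaic (illustrative only).
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """raw: 2-D array of per-photosite luminance values under an RGGB mask."""
    h, w = raw.shape
    rows, cols = np.indices((h, w))

    # Which color filter sits over each photosite in an RGGB tiling.
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    kernel = np.ones((3, 3))
    rgb = np.zeros((h, w, 3))
    for channel, mask in enumerate((r_mask, g_mask, b_mask)):
        values = np.where(mask, raw, 0.0)
        # Normalized convolution: average whatever samples of this color
        # exist in each pixel's 3x3 neighbourhood.
        summed = convolve(values, kernel, mode="mirror")
        counts = convolve(mask.astype(float), kernel, mode="mirror")
        rgb[..., channel] = summed / counts
    return rgb

# A tiny fake 4x4 mosaic, just to show the shapes involved.
print(demosaic_bilinear(np.arange(16, dtype=float).reshape(4, 4)).shape)  # (4, 4, 3)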



What Is 'Color'?


Equating certain wavelengths of light to the "color" humans perceive that specific wavelength is a bit of a false assumption. "Color" is very much a construct of the eye/brain system that perceives it and doesn't really exist at all in the portion of the range of electromagnetic radiation that we call "visible light." While it is the case that light that is only a discrete single wavelength may be perceived by us as a certain color, it is equally true that some of the colors we perceive are not possible to produce by light that contains only a single wavelength.


The only difference between "visible" light and other forms of EMR that our eyes don't see is that our eyes are chemically responsive to certain wavelengths of EMR while not being chemically responsive to other wavelengths. Bayer masked cameras work because their sensors mimic the trichromatic way our retinas respond to visible wavelengths of light and when they process the raw data from the sensor into a viewable image they also mimic the way our brains process the information gained from our retinas. But our color reproduction systems rarely, if ever, use three primary colors that match the three respective wavelengths of light to which the three types of cones in the human retina are most responsive.


¹ There are a very few rare humans, almost all of them female, who are tetrachromats with an additional type of cone that is most sensitive to light at wavelengths between the peaks of the 'green' (540nm) and 'red' (565nm) cones. Most such individuals are functional trichromats. Only one such person has been positively identified as a functional tetrachromat. The subject could identify more colors (in terms of finer distinctions between very similar colors - the range at both ends of the 'visible spectrum' was not extended) than other humans with normal trichromatic vision.


² Keep in mind that the "red" filters are usually actually a yellow-orange color that is closer to "red" than the greenish-blue "green" filters, but they are not actually "Red." That's why a camera sensor looks blue-green when we examine it. Half the Bayer mask is a slightly blue-tinted green, one quarter is a blue-tinted purple, and one-quarter is a yellow-orange color. There is no filter on a Bayer mask that is actually the color we call "Red", all of the drawings on the internet that use "Red" to depict them notwithstanding.


³ There are very minor differences in the amount of energy a photon carries based on the wavelength at which it is vibrating. But each sensel (pixel well) only measures energy; it doesn't discriminate between photons that have slightly more or slightly less energy, it just accumulates whatever energy all of the photons that strike it release when they fall on the silicon wafer within that sensel.


Why is the front element of a telephoto lens larger than a wide angle lens?

A wide angle lens has a wide angle of view, therefore it would make sense that the front of the lens would also be wide. A telephoto lens ha...