Thursday, 30 April 2015

reverse engineering - How to reproduce a photo with silhouette and bokeh?


I recently found a photo with a silhouette and bokeh at DSLR vs Mirrorless: Wedding Photographer's review. The subjects are as dark as the background; there is only a rim light shaping their heads. I know that the yellow circles are small, out-of-focus light bulbs. I am interested in how the silhouette was made and how the shot was set up with the light bulbs. Could it be a composite?


silhouette photo



Answer



There are two ways to go about accomplishing this - in camera and in post.


Both techniques will rely on shooting a rim-lit subject.


Put a flash behind the subject. In my image, I actually had the flash cranked up WAY too much, so I'm getting additional light acting as fill (bouncing off the couch and back toward the front of the subject):


The set-up shot is below. The flash is a 430EX set to 105mm, full power, with 3 MagMod grids attached, which limit the light to a 15-degree beam. I'm shooting directly opposite the flash, putting Yoshi right between me and the flash.


[image]



And the resultant image:


[image]


Technique 1: in post


Now, some rough clean up work to darken everything around my couple:


[image]


And finally, top it with a stock bokeh shot. The bokeh was blended using the Add method at slightly reduced opacity. Bokeh from https://www.pexels.com/photo/time-lapse-photo-of-lights-220118/ under CC license.


[image]


Assuming you take the time to do this right, I'm sure you could do a lot better. But, to recap:



  • Have a point light source firing from behind; this is called a rim light


  • Darken any areas that you want gone in post

  • Add a stock bokeh shot on top, whether yours or purchased. (Creating your own stock bokeh photos is simple and fun. All you really need are some Christmas lights. Here's a good how-to)
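
For anyone who wants to script that last step, here is a minimal sketch of the "Add" blend with Pillow and NumPy; the file names and the 0.8 opacity are placeholders, not the exact values used in the example images above.

    import numpy as np
    from PIL import Image

    # Load the darkened base image and the stock bokeh shot (file names
    # are hypothetical), resizing the bokeh layer to match the base.
    base_img = Image.open("couple_darkened.jpg").convert("RGB")
    bokeh_img = Image.open("stock_bokeh.jpg").convert("RGB").resize(base_img.size)

    base = np.asarray(base_img, dtype=np.float32)
    bokeh = np.asarray(bokeh_img, dtype=np.float32)

    # "Add" blend at slightly reduced opacity: sum the layers and clip.
    opacity = 0.8
    composite = np.clip(base + opacity * bokeh, 0, 255).astype(np.uint8)
    Image.fromarray(composite).save("silhouette_bokeh.jpg")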


Technique 2: in camera


And here's the shot redone, holding a string light in front of the lens. It's a bit bright where I am right now, and I didn't darken the background. Please excuse that for this example...


Photo of the lights:


[image]


Photo of the shot:


[image]


Having a big handful of Christmas lights would be better than this simple string. Also, it's terribly hard to use string lights when a cat is in the room.





To restate: these are two very different techniques for accomplishing the same thing. Many photographers are in-camera purists and will despise the post-processing technique. If you have the time, by all means go for in-camera. If you are shooting this on location with a non-model couple, then I would advise getting a shot without the foreground bokeh, just in case you need it later, because, as you now know, you can add this in post quite easily.


Which Canon prime and teleconverters should I bring to a safari?


Currently I have the 70-300mm L lens from Canon.


I would like to go on a safari trip and was wondering: does it make sense to pick up the Canon EF 400mm f/2.8L IS II with the 1.4X and 2X teleconverters, which effectively lets me reach 560mm (losing one stop) and 800mm (losing two stops), or should I just pick up the 600mm or 800mm prime lens from the beginning?


I know I am losing a couple of stops, and some image quality, with the teleconverters, but they give me somewhat of a zoom function in a smaller package.


Thoughts?



Answer




Go for the long lenses if you can. On safari, you will be taken out when the animals are most active, which is at dawn and around dusk. Given the lack of artificial light, it will be dimmer than at those times in the city, meaning you will be shooting wide open, and as wide as possible, to get shutter speeds fast enough to freeze the animals.


Otherwise, the 400mm f/2.8L will still do, and it will be a very useful and worthy upgrade to the lens you already have. A 1.4X extender will get you close enough to pretty much all the mammals you will see on safari.
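
For reference, the reach/aperture trade-off in the question is simple arithmetic: the f-number scales with the extender factor, so a 1.4x costs about one stop and a 2x costs two. A quick sketch, using the 400mm f/2.8 from the question as an illustrative case:

    import math

    def with_extender(focal_mm, f_number, factor):
        """Effective focal length, f-number, and stops of light lost."""
        return focal_mm * factor, f_number * factor, 2 * math.log2(factor)

    for tc in (1.0, 1.4, 2.0):
        focal, f, stops = with_extender(400, 2.8, tc)
        print(f"{tc}x: {focal:.0f}mm f/{f:.1f}, {stops:.1f} stops lost")
    # 1.0x: 400mm f/2.8, 0.0 stops lost
    # 1.4x: 560mm f/3.9, 1.0 stops lost
    # 2.0x: 800mm f/5.6, 2.0 stops lost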


aperture - Can a smaller sensor's "crop factor" be used to calculate the exact increase in depth of field?


If APS-C and similar crop-sensor digital cameras have a focal-length-multiplying effect such that a 50mm lens has an apparent field of view closer to that of an 80mm lens on a full-frame camera, and yet at the same time the depth of field of the smaller-sensor camera is more like the depth of field a 50mm lens would produce on a full-frame camera (using the same aperture), then this would seem to suggest the concept of an "aperture dividing effect."


In other words, a 50mm f/1.8 lens on an APS-C camera would act more like an 80mm f/2.8 (approx. 1.8 × 1.6) lens in 35mm-equivalent terms (for depth of field, not considering exposure).


Can someone with a better understanding of the physics involved clarify this for me? I've never seen this concept explicitly mentioned anywhere, so I am a bit suspicious of it.



Answer



This answer to another question goes into detail on the math behind this. And there's a Wikipedia article with a section specifically about getting the "same picture" with different camera formats. In short, it is approximately true that adjusting both the focal length and aperture by the ratio of the format sizes (the crop factor) will give you the same picture. ¹


But this breaks down if the subject is within the macro range of the larger-format camera (focusing really close). In this case, magnification (and therefore actual sensor size) becomes crucial to the DoF equation, messing up the equivalence.


And, the Wikipedia article casually mentions but does not elaborate on another important point. The assumption is that for the same print size, the acceptable circle of confusion (roughly, the acceptable blur level still considered in focus) will scale exactly with format size. That might not actually hold true, and you might hope (for example) to get greater actual resolution from your full-frame sensor. In that case, the equivalence also isn't valid, but fortunately in a constant way. (You simply have to multiply in your pickiness factor.) ²


You mention "not considering exposure", and now you might be thinking (as I did): wait, hold on. If cropping+enlarging applies to "effective" aperture for depth of field, why doesn't it apply to exposure? It's well known that the basic exposure parameters are universal for all formats, from tiny point and shoots to DSLRs all the way up to large format. If ISO 100, f/5.6, ¹⁄₁₀₀th second gives correct exposure on one camera, it will on any other as well. ³ So, what's going on here?


The secret is: it's because we "cheat" when enlarging. Of course, in all cases the exposure for a given f-number on any area of a sensor is the same. It doesn't matter if you crop or just have a small sensor to start with. But when we enlarge (so that we have, for example, 8×10 prints from that point and shoot to match the large format), we keep the exposure the same, even though the light actually recorded per area is "stretched". This also has the same correspondence: if you have a 2× crop factor, you have to enlarge 2× in each dimension, which means each pixel takes 4× the area of the original, or two stops less actual light recorded. But we don't render it two stops darker, of course.⁴
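
To put rough numbers on all of this, here is a small sketch of the equivalence and the enlargement penalty; the 50mm f/1.8 on a 1.6x body is just an illustrative case:

    import math

    crop = 1.6                      # APS-C crop factor
    focal_mm, f_number = 50.0, 1.8  # lens on the crop body

    # Scale focal length and f-number by the crop factor to get the
    # full-frame-equivalent picture (field of view and depth of field).
    print("equivalent focal length:", focal_mm * crop)          # 80.0 mm
    print("equivalent f-number:", round(f_number * crop, 1))    # ~f/2.9
    # The absolute aperture diameter is what stays constant (footnote 1).
    print("aperture diameter:", round(focal_mm / f_number, 1), "mm")
    # Enlarging by the crop factor spreads the recorded light over crop^2
    # times the area: that many stops less light per output pixel.
    print("stops 'stretched' by enlargement:", round(2 * math.log2(crop), 2))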





Footnotes:


[1]: In fact, by changing the f/number, what you are doing is holding the absolute aperture of the lens constant, since the f/number is the focal length over absolute aperture diameter.


[2]: This factor breaks down too, as you approach the hyperfocal distance, because once the smaller format reaches infinity, infinity divided by anything is still infinity.


[3]: Assuming the exact same scene, and minor variations from real-world factors like lens transmission aside.


[4]: Basically, there ain't no such thing as a free lunch. This has the effect of making noise more obvious, and it's a reasonable approximation to say that this increase effectively means the crop factor also applies to noise apparent from ISO amplification.


composition - What is the Rule of Odds?


In furthering my reading on photographic composition I came across a compositional technique called the "Rule of Odds."



  1. What is the "Rule of Odds"?

  2. Why is it important?

  3. How do I apply it to my photography?




Answer





  1. The Rule of Odds states that having an odd number of objects in an image is more interesting, and therefore more pleasing. With an even number of objects, your brain has an easy time "organizing" the objects into pairs, bringing in symmetry and dullness.


    If you have one main object, accompany it with two supporting objects, not one. This way, one of them will be in the middle.


    We can find a parallel in the art of writing, where the rule of three holds that lists of three examples are the most effective at carrying the presented idea forward.




  2. The human eye tends to wander to the center of a group. With an even number of objects, the eye will end up at the negative space in the center.


    The rule becomes important when trying to achieve a visually pleasing composition of several objects. A common usage is having three objects in the frame; they always form either a line or a triangle, both of which are considered pleasing shapes.



    The rule will not matter with larger groups, though; few people will feel any different whether there are 36 or 37 fish in the sea. The amount translates to "plenty" in the brain either way.




  3. You should strive to apply the rule when including a "group of" objects as an important element of your photo. E.g. five flowers in a vase will be more pleasing than four or six.


    The rule also implies that you should use an even number of objects if a paired relationship or dullness is what you want to express (for example, a shot of students sitting in pairs would carry the idea of a dull, long lesson, while adding a teacher would turn it into a photo of educational interaction).


    By the way, both the question and this answer serve as examples of using the Rule of Odds.




Wednesday, 29 April 2015

How can I get the most possible depth of field out of close-up/macro shots?



I own a Nikon D40X and just the kit lens and I'm considering buying either a macro lens or some extension tubes. I don't necessarily want to do true 1:1 macro, just some close ups of bugs, fungi and flowers.



Answer



Use focus stacking for static scenes, and a small aperture with a matching combination of ISO/lighting for scenes with moving elements. Focus stacking is more time-consuming, but preferred, because small apertures will introduce diffraction softening.


When photographing flowers outside, you might want to build a temporary wall to block the wind so your flower stays still.


Using a shorter focal length also helps, but the closer working distance makes lighting more difficult and may scare your model.
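
If you go the focus-stacking route, the merge step is easy to prototype. Below is a rough sketch using OpenCV and NumPy that assumes the frames are already aligned (real stacks usually need an alignment pass first, and dedicated tools do considerably more); the file names are hypothetical:

    import cv2
    import numpy as np

    paths = ["stack_01.jpg", "stack_02.jpg", "stack_03.jpg"]  # hypothetical
    frames = [cv2.imread(p) for p in paths]

    def sharpness(img):
        """Per-pixel sharpness: absolute Laplacian of the grayscale frame."""
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float64)
        return np.abs(cv2.Laplacian(gray, cv2.CV_64F))

    # For every pixel, pick the frame in which that pixel is sharpest.
    scores = np.stack([sharpness(f) for f in frames])  # (n, h, w)
    best = np.argmax(scores, axis=0)
    h, w = best.shape
    stack = np.stack(frames)                           # (n, h, w, 3)
    result = stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
    cv2.imwrite("stacked.jpg", result)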


Tuesday, 28 April 2015

web - What's the best website for showcasing my work to the public?



I'm wondering which website is most popular for posting your own photography and maybe getting some feedback from the community; in other words, a way to carry photography to the mass public. Facebook is not the right place.



Answer



Flickr is definitely a good fit for what you're after:



  1. It's public by default (unlike Facebook which tends to be private).

  2. It's huge: it definitely ticks your "mass public" box.

  3. It has excellent social features that make it very easy to interact with new users (people you don't already know) and get your photos seen.

  4. It doesn't over-compress your images (like Facebook does) so it's a great place for a high-quality showcase.



In response to Kiril Kirov's comment ("I created a flickr profile, but not a single view for about a week :D"), it sounds like you're not using Flickr to its full potential. If you want other people to notice your work, try the following:



  1. Give your photos meaningful titles, descriptions and tags to ensure they appear in search results.

  2. Add your photos to relevant groups and join in the group discussions.

  3. Find other users whose work you like and add them as a contact. It's not obligatory to reciprocate but many people will. At the very least, they'll usually check out your stream to see who you are.

  4. Look for other photos you like and comment/favourite them. Again, there's no guarantee people will reciprocate but many do.

  5. Join a Flickr group local to your area and go along to a meet-up. (If they don't seem to have them, try arranging one! If there's no group, create one!)


Even though Flickr is public by default, you do have to invest some time in it before you'll come to other people's attention. But the same is true of any other photo site. Plenty of people choose other sites (such as 500px, SmugMug or Zenfolio), often because they have a more professional "portfolio" look, but for social features Flickr just can't be beaten.


lens - What's the difference between vignetting and pinhole?


Sometimes people say that a lens has bad vignetting but it looks very similar to the effect of pinhole (darkened edges).


What's the difference?


If someone asked for a vignetting effect (in post-processing) and someone else asked for a pinhole effect, what would you do differently?


Also, if you look at a photo, can you tell if it was due to pinhole or vignetting?



Is it simply that vignetting happens at open apertures and pinhole vignetting happens at closed apertures (thus resulting in quite different shutter speeds)?


EDIT: Great, thanks for your answers, guys! But I'd like to add something. If I were to draw a graph of vignetting versus aperture (in general terms), would it look like a reversed bell curve?


eg:


Rough graph


If so, why?


Example:


[image: vignetting example, from Lenstip]



Answer



There are multiple causes for the light falloff which we call vignetting. In lenses, the primary causes are:




  • Poor response to light rays hitting the sensor at a strong angle. That's normal for wide angle lenses, and is fundamentally more of a problem at wider apertures. It's also significantly more problematic in digital sensors than in film, which is why the Four Thirds standard emphasizes telecentric lens design.

  • Physical obstructions like lens hoods, filter rings, or even internal lens elements. This reduces the effective image circle size, and again is more problematic at wide angles and wide apertures. (Sometimes people make a distinction between external and internal obstructions, but it really comes down to the same thing.)


As you stop down or zoom in, you get to the point where these factors have zero impact inside the area captured by the sensor.


The vignetting associated with pinhole lenses occurs for a different reason: it happens when the size of the aperture approaches the thickness of the aperture material itself. That doesn't happen in normal lenses because: A) the aperture blades are quite thin and B) even the smallest apertures are much bigger than the f/200 and smaller common to pinhole lenses.


(This may be related to Do rounded edges on aperture blades improve image sharpness, and how?, but I can't make any promises on that score.)


It's also really a special case of physical obstruction — rays from too wide of an angle hit the side of the aperture material — but one which occurs in a different case from what we see in typical lenses.


So, for your chart, we'd be looking at a less symmetrical curve, where the effects of the first causes disappear at the smaller apertures used in normal lenses, and the pinhole cause doesn't appear until much later. (I've entirely made up the numbers and the particular curve here; they're for illustration and don't take into account any actual lens or aperture material.)


[image: illustrative chart]
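
If you want to regenerate such a chart yourself, a few lines of matplotlib will do; the curve shapes and numbers below are just as invented as the ones in the chart above:

    import numpy as np
    import matplotlib.pyplot as plt

    f_number = np.logspace(np.log10(1.4), np.log10(400), 300)

    # Optical/mechanical vignetting: worst wide open, gone by mid apertures.
    optical = 2.0 * np.exp(-(f_number - 1.4) / 4.0)
    # Pinhole-style vignetting: only appears at extremely small apertures.
    pinhole = 1.5 / (1.0 + np.exp(-(f_number - 250.0) / 30.0))

    plt.semilogx(f_number, optical + pinhole)
    plt.xlabel("f-number")
    plt.ylabel("corner falloff (stops, invented)")
    plt.title("Vignetting vs. aperture (illustration only)")
    plt.show()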



How to nail focus for DSLR astrophotography?



I've been doing a bit of astrophotography and the problem that I almost always have is nailing focus. I'll manually focus on a bright star or the moon (which hopefully isn't an option!) by magnifying on live view, leave the lens on manual focus, and then rotate the camera to the target.


My problems happen two fold:




  • Even greatly magnified on the live view screen, there's very little difference between in focus and slightly out of focus for a bright point light source. It's extremely difficult to tell in camera if it's right, but painfully obvious on the computer after the exposure.




  • I think sometimes the focus ring gets bumped while moving the camera to the target. So my focus ends up way off.





What are techniques or methods to help here? I've seen some reference to some sort of diffraction mask that could help, but don't know anything about it. My particular DSLR doesn't tether to a laptop, so computer aided focus is out for me (but it may be worth mentioning in general for a wider audience with similar issues).



Answer



A very simple, yet effective method to achieve almost perfect focus is to use a Bahtinov mask. I believe that this is the "mask" that you were referring to. It is a diffraction mask that is placed on the aperture of the telescope, creating three diffraction spikes. When the image is in focus, the three spikes line up perfectly. If it is even slightly out of focus, it is very easy to tell. There are online generators that you can use to make your own, or you can buy more well-made ones from commercial suppliers.


An earlier, and less effective device is the Hartmann mask. It relies on similar principles of diffraction, but it is generally thought that a Bahtinov mask is more precise and easier to use. Besides these, there are other methods of focusing, as you know, but for simple, amateur astrophotography, I'd use a Bahtinov mask.


Just don't forget to take it off before you actually start taking pictures!


Does "long exposure noise reduction" option make any difference when shooting RAW?



I can configure my camera (5D) to use long exposure noise reduction (dark frame is exposed and subtracted), but is this method really effective when shooting RAW? Or is the actual subtraction only done when shooting JPEG?



Answer



It wasn't really intended this way (and I didn't have this information when I asked the question), but my experience shows different results. I made a 556-second exposure at ISO 400 with the lens cap on, long exposure noise reduction switched on, and RAW+JPEG configured. The results are 100% crops with no additional processing applied.


In-camera JPEG: [image]


Canon DPP (noise reduction disabled): [image]


Adobe Camera RAW (noise reduction disabled): [image]


Adobe Camera RAW + Topaz Denoise (RAW-moderate setting): [image]


My conclusion is that there is no point shooting RAW-only with long exposure noise reduction on (expose and subtract a dark frame), as it doubles the exposure time and its effect is questionable unless the processing is also done in camera (as with the in-camera JPEG above).
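
For completeness, the subtraction itself can be approximated in post on the raw data, assuming you shot a separate dark frame. A rough sketch using the rawpy (LibRaw) package; the file names are hypothetical, this ignores black-level handling, and a careful workflow would average several dark frames:

    import numpy as np
    import rawpy

    light = rawpy.imread("exposure.CR2")   # the long exposure
    dark = rawpy.imread("darkframe.CR2")   # same duration/ISO, lens cap on

    # Subtract the dark frame's fixed-pattern noise from the raw Bayer
    # data, clamping at zero so hot-pixel subtraction can't go negative.
    cleaned = np.clip(
        light.raw_image.astype(np.int32) - dark.raw_image.astype(np.int32),
        0, None,
    ).astype(light.raw_image.dtype)
    light.raw_image[:] = cleaned           # write back before demosaicing
    rgb = light.postprocess()              # demosaiced, dark-subtracted image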


nikon - Focusing issue with kit lens


I have been using my camera (Nikon D5100) for around 5 years. Recently I have been facing an issue when I use the kit lens. If I take a group photo (of a group of people), standing not very far from them, the autofocus works well, but if I examine the image afterwards by zooming in, none of the people in the photo are in focus. It keeps happening.


But if I zoom in on anyone and then shoot, it works fine, even though I maintain the same distance from the subject.


So I decided to zoom in first and, once I had proper focus, switch to manual focus (hoping the focus would remain locked), zoom out, and take the picture. Unfortunately, it goes wrong again, as in the first case.


What could be the issue here? Does anyone have any input?



Answer



Your lens is not parfocal. If you had a parfocal lens, you would know it. Mainly because the prices of true parfocal zoom lenses ($10K and up) mean only those who really need them and know why they need them buy them.


A parfocal lens is a zoom lens that maintains the same focus distance as it is zoomed in and out.



Most lenses used by anyone other than professional videographers are not parfocal. When you focus a non-parfocal zoom lens at one end and then zoom it, the focus distance will change and must be adjusted again after zooming to be correct. There are some fairly cheap lenses that are effectively parfocal - that is, they lack sharp enough resolution or a wide enough maximum aperture to be able to tell the difference between focused and slightly out of focus.


That being the case, the answer to the second part of your question is that your lens is not made to be able to do that with any degree of accuracy.


The answer to your first part is a little tougher. It may be that you are just now noticing something that has always been the case. It is not uncommon for zoom lenses to be better on one end than the other, although it is usually the wider end that is a little better. Or it may be that your lens is drifting (or has been knocked) out of alignment and needs to be adjusted. If the lens is a cheaper one that does not cost as much as it would take to get it properly aligned, then it might be time to think about a replacement or upgrade lens.


Monday, 27 April 2015

lens - What causes blurred/non-sharp images taken of stable objects?


I know about two important causes, which are camera/hand shake and dust or dirt on the lens, but if you have ruled those out and still get blurred/non-sharp results, what can the reason be?


I am talking about Canon DSLR cameras and lenses.



Answer



Maybe an earthquake? (Really, it may be!)


You see, I'd like to know how old your camera is, and whether you dropped it or crashed it at some point.


I'm asking because, if you are one hundred percent sure that your tripod is stable, the sensor may have become a bit shaky after such an impact. No matter what shutter speed and stability you set, the photo can get blurred if the sensor inside your camera shakes (even if the shake is only a micrometre, if the sensor is loosely held by the camera); it may even shake due to tiny vibrations like the mirror flip.


This is the only reason I could come up with from your description, so I would suggest taking your beloved camera to an authorized service centre and giving it a full body check-up.


Happy clicking!


Sunday, 26 April 2015

equipment recommendation - Should I buy a camera with kit lens, or body plus lens separately?



I am planning to buy a DSLR soon.


Before I zero in on the model, I want to know whether it is advisable to buy a kit (body + 18-55 lens) or not? OR should I go for the camera-body and the lens separately?



Answer



The answer is, unfortunately: it depends.


First, what kit are we talking about? The 5D Mark II kit comes with the 24-105mm f/4 IS L lens. That lens is roughly $1000 new, so selling it immediately gives you a discount on the body itself, and it absolutely makes sense to get the kit (unless you don't want the hassle of selling the lens). I bought the 5D Mark II kit even though I already had a great copy of the 24-105 (I kept my original and sold the new one). However, the 24-105mm f/4 IS L is a great general-purpose zoom, so I would think twice before selling it. In the kit you're looking at, the 18-55mm zoom lens isn't all that great. It works, but there are a number of drawbacks. The lens costs $170 new, but you will have a harder time selling it for that price.


Next, what is your budget? If you can afford a better lens, then you should buy a better lens. Lenses are far more important than camera bodies. However, having no lens is way, way worse than having a "bad" lens. So if you need the 18-55mm range, and can't afford the step-up alternatives (or even the 3rd-party alternatives), then get the kit lens.


Finally, what other goodies are you getting with the kit? If you search, you can find kits that also come with bags, memory cards, a filter, spare batteries, cleaners, etc. If you don't have these items already, they will add up quickly. Having them thrown into the kit will generally reduce your total cost.


Image stabilization with monopod: on or off?


With most lenses that feature image stabilization, you're supposed to turn IS (or VR) off when mounting the camera on a tripod. But what about monopods?


For a DSLR mounted on a monopod, should IS be turned on or off?


A monopod will go a long way toward stabilizing the camera (that's the point, after all), but it doesn't completely eliminate motion so it seems like using a monopod and IS together might be a good idea. If IS should be on, which mode should be used?




nikon - How important is lens selection for getting a retro, film look?


So basically I'm planning on buying my first DSLR camera. I'm familiar with cameras; it's just that I've never had my own DSLR. I will probably get an entry-level second-hand Nikon body with a kit lens, since I don't want to spend money on the body at the moment. Probably something like a D5100 or D3300. I will upgrade the body later.


The main question is choosing the second lens, but that is where all of my knowledge about lenses ends (kind of). I really love old retro looks: grainy, nostalgic-feeling photos. I'm not sure how much the lens adds to that, but I would like to maximise my ability to take photos like these with as little post-production as possible. My total budget for accessories, including the lens, is around $350. What lens specs should I consider?


Akabane, Tokyo


Yakitori and Tachinomi


Kichijoji, Tokyo



Love hotel (Uguisudani, Tokyo)


Photos by "xperiane" on Flickr.


P.S. Advice on accessories like filters and hoods is very much appreciated.




Saturday, 25 April 2015

Is the MacBook Pro display well calibrated?


I own a Macbook Pro with Retina display. Assuming I'm only interested in displaying photos on the screen, is the Retina display calibrated well enough for colors? If not, which method would you recommend to use?



Answer



In general, every monitor has subtle differences, so if you really need a high-standard calibration, you should calibrate your own equipment.



There are many software packages you can use to calibrate your monitor. For example, take a look at SpyderElite, for about $170.


How should I learn to understand Canon's metering?


Spot metering is my preferred metering mode on the Canon 50D. This allows me to nail the exposure first time -- 99% of the time -- by using AE-lock on my subject and applying the desired amount of exposure compensation. However, I find this too slow for some situations, so would like to learn to use one of the "full frame" metering modes (i.e. Evaluative or CWA).


I have tried using both of these modes. On the whole, I found Evaluative to be unpredictable. I found myself having to second-guess when and by how much the camera had automatically added exposure compensation. On the other hand, I found CWA to be much more predictable, but slower as it was still necessary to AE-lock and recompose for off-centre subjects.


It seems that Canon designed Evaluative metering largely as a replacement for CWA, and that by not using this, I would somehow be missing out on much of the ability of my camera.


How would I be better investing my time?


a) Using Evaluative metering and learning to recognise how it responds in certain situations so that I can second-guess it.


b) Using the much simpler CWA metering and learning to read the scene to determine the required amount of exposure compensation.


If only Canon had an "active focus point weighted average" mode, it would be perfect!



Answer



Metering will always be trial and error, because the camera assumes everything you're shooting reflects 18% of the incoming light back at the camera. It has no way of knowing whether your subject is white or grey, or even what part of your scene is the intended subject!
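
To see what that 18% assumption implies: a subject that actually reflects a fraction R of the light needs roughly log2(R / 0.18) stops of compensation to render at its true brightness. A small sketch with illustrative reflectance values:

    import math

    subjects = {"white dress": 0.90, "gray card": 0.18, "black suit": 0.04}
    for name, reflectance in subjects.items():
        # Stops of exposure compensation versus what the meter suggests.
        ev = math.log2(reflectance / 0.18)
        print(f"{name}: {ev:+.1f} EV compensation")
    # white dress: +2.3 EV, gray card: +0.0 EV, black suit: -2.2 EV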



The closest the camera can get to knowing the latter is by looking at the currently selected focus point, as you suggest. In fact some Canon bodies do offer spot metering linked to the active focus point. I suspect it's only on the more expensive models. Edit: yes, it seems to be just the 1D/1Ds models. And it's only activated when you're using a subset of the 45 AF points, which is disappointing (you'd think it could use the closest spot metering point when you have all 45 activated).


Contrary to ElendilTheTall, I use Evaluative mode almost exclusively (no, I don't use the camera on auto ;). The reason is that I don't want to be constantly recomposing to meter. Also, whilst Evaluative might not get it right as often as spot metering, when spot metering gets it wrong, it can get it wrong by a huge margin. For example, if you happen to have a deep shadow in the centre of the scene, you'll massively overexpose.


At the end of the day metering is fixable in post as long as you're close, unlike for example, focus. So when shooting quickly I concentrate on nailing the focus, even if it means the metering is off. Evaluative allows me to do this. If I'm not shooting quickly I'll bracket exposure or use manual to get the exposure exactly how I want it.


Colored powder in a lens: will it need a cleaning?


I was taking photos of people playing Holi. Some of the powdered colour landed on the lens of my Canon Powershot S95, and in the groove of the flash housing. By lens, I mean the cylindrical part that retracts, not the glass.


Does the housing of the lens have some protection that wipes the lens as it retracts (i.e, a self-cleaning mechanism)? Or will I need to give the camera to a professional repair shop?



Answer



Hopefully you wiped the stuff off before turning the camera off; otherwise there is probably some of it inside.


There is nothing self-cleaning here, and neither is this camera sealed against dust and particles entering. It's a great camera you have, so I would bring it to Canon for a cleaning. They will take the camera apart and clean it. Here, this service costs a bit over $100 CDN.



If you intend to expose your camera to elements like this, it will quickly become expensive, so I recommend investing in a weather-sealed or waterproof camera for those occasions. Unfortunately, there are very few such cameras with manual controls other than DSLRs, which also require weather-sealed lenses to be fully sealed.


Friday, 24 April 2015

Is the image processor relevant in a camera when shooting RAW?


When cameras come out the company often states that the image processor is upgraded.


Does such an upgrade matter when one only shoots in RAW?


When shooting RAW the image is taken directly from the sensor. The processing happens off-camera on the computer.


Does the on-camera image processor process the RAW image or is it just used when the camera outputs JPEG?


If it comes into play when shooting RAW, what does it exactly do?




Answer



Short answer: Yes.


Because it isn't just an "image processor"; it is the camera's CPU (assuming we aren't speaking about beasts like the Canon 1D X, which has three processors).


It matters for:




  • How many sustained frames per second you get; in other words, how quickly it moves the images into the buffer and how quickly it empties the buffer onto the card. This also requires processing: creating the thumbnail, writing the EXIF data, and applying the image-processing options which are applied to RAW, for example Highlight Tone Priority (in Canon terms; google it; Nikon has a similar feature).




  • AF engine management: speed, two-way communication with the lenses, etc.




  • Metering management

  • Digital push/pull for certain ISO values.

  • Lens corrections (vignetting, some CA, etc.)


What apertures are required to enable autofocus, including cross-type or high-precision focusing, on Canon DSLR cameras?


This is a follow-up to Why does Canon and Nikon limit or disable autofocus beyond certain f-numbers? I'm doing this to separate the AF aperture requirements from the linked question above. My goal here is to create a canonical answer for Canon AF aperture limits, with complete information for every Canon EOS model since the EOS D30.


What apertures are required for Canon EOS DSLR cameras to autofocus? What apertures are needed to enable cross-type and/or high-precision autofocus?









Answer



To increase readability and avoid exceeding the answer length limit, this answer has been split across two posts. General information and APS-C cameras are covered in this post; full-frame and APS-H cameras are covered in a separate post below.






TL;DR answer


In general, Canon DSLRs require a maximum aperture of at least f/5.6 or wider to autofocus, although EOS-1 series cameras and certain newer models (including the EOS 5D Mark III, EOS 7D Mark II, and EOS 80D; essentially, any model with an AF sensor that has 45 or more points) are capable of focusing at f/8 with the center focus point. Depending on the camera model, and with some exceptions, a maximum aperture at least f/2.8 or f/4 or wider enables cross-type and/or high-precision focusing.
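
As a toy summary in code (a sketch of the rule of thumb above, not Canon firmware logic; the per-model and per-point exceptions are what the rest of this answer catalogues):

    def can_phase_detect_af(max_f_number, f8_capable_body=False):
        """Rough rule of thumb for Canon DSLR viewfinder autofocus."""
        if max_f_number <= 5.6:
            return True            # all points, per the body's AF layout
        if max_f_number <= 8.0:
            # Only EOS-1 bodies and newer 45+ point bodies (5D Mark III,
            # 7D Mark II, 80D, ...), and typically only some points.
            return f8_capable_body
        return False

    # e.g. an f/5.6 lens with a 1.4x extender gives f/8:
    print(can_phase_detect_af(8.0, f8_capable_body=True))    # True
    print(can_phase_detect_af(8.0, f8_capable_body=False))   # False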





Definitions



  • A horizontal-sensitive point or vertical-sensitive point is an autofocus point that can detect horizontal or vertical lines, respectively. A horizontal-sensitive point cannot detect vertical lines, and a vertical-sensitive point cannot detect horizontal lines.

  • A single-line point is a horizontal-sensitive point or a vertical-sensitive point.

  • A high-precision sensor is an autofocus sensor that is capable of focusing within 1/3 of the depth of field of the lens, instead of simply within the depth of field. These sensors require faster maximum apertures, typically f/2.8 or wider, in order for them to work.

  • A cross-type point is an autofocus point that can detect both horizontal and vertical lines simultaneously. Cross-type points are actually two single-line points superimposed over one another at a 90 degree angle. Depending on the camera model, the maximum aperture may need to be at least as wide as f/2.8 or f/4 for a point to be cross-type, because the vertical-sensitive part of the AF point may be a high-precision sensor.

  • A dual cross-type point is an autofocus point that can detect diagonal lines as well as horizontal and vertical lines. Dual cross-type points are a cross-type point superimposed over another cross type point at a 45 degree angle. The diagonal sensors require an f/2.8 or wider lens in order to function because they are high-precision sensors.


Important notes


Where "autofocus does not function" is noted, phase-detection autofocus is normally impossible, either when shooting through the viewfinder or with the Quick AF mode in Live View, but it is generally possible to autofocus using the contrast detection-based Live AF and Face Detection Live AF modes in Live View.



In addition, when an extender (teleconverter) is being used and the resulting aperture is less than what is normally required for AF, it may be possible to enable AF by taping the three extender-specific pins (which are opposite of the raised pins) on the lens side of the extender to prevent it from communicating extender information with the lens and camera, so that the camera sees the lens's aperture as though no extender is attached. This is not guaranteed to succeed as the AF system is not designed to operate under these conditions, and the Exif data will not reflect the use of the extender. Some third-party extenders may not communicate extender information with the lens and camera at all, with the same effect as the aforementioned "tape trick".


The limitations discussed hereafter generally do not apply to third-party zoom lenses that have maximum apertures of f/6.3 at the long end of the zoom range, as they actually report a maximum aperture of f/5.6 to the camera. Note, though, that faster third-party lenses should be able to autofocus as expected for their aperture, including any high-precision focusing capability normally supported by the lens and body.


All images in this answer should be attributed to Canon Inc.









Canon EOS Digital Rebel (original, 300D), Digital Rebel XT (350D), Rebel XS (1000D), Rebel T3 (1100D), D30, D60, 10D


All points function to f/5.6; the center point is normal-precision cross-type (it is both vertical- and horizontal-sensitive to f/5.6 and is not high precision at f/2.8). All other points are single-line (EOS D30 and D60: vertical sensitive only). Autofocus does not function below f/5.6.




  • EOS D30 and D60: 3 AF points available.

  • EOS 10D, Digital Rebel (300D), Digital Rebel XT (350D), Rebel XS (1000D): 7 AF points available.

  • EOS Rebel T3 (1100D): 9 AF points available.


[image: Canon EOS D30/D60 AF array]


[image: Canon EOS 1000D AF array]


Note that the EOS Rebel T3 (1100D) has essentially the same AF point layout as those in the next group of cameras, only that the center point is not high precision at f/2.8.





Canon EOS Digital Rebel XTi (400D), Rebel XSi (450D), Rebel T1i (500D), Rebel T2i (550D), Rebel T3i (600D), Rebel SL1 (100D), Rebel T5 (1200D), Rebel T6 (1300D), 20D, 30D


All points function to f/5.6. The center point is cross-type, and all other points are single-line. The center point is high-precision dual cross-type with f/2.8 or faster lenses. Autofocus does not function with lenses narrower than f/5.6.


[image: Canon EOS Rebel T3i (600D) AF array]




Canon EOS Rebel T4i (650D), Rebel T5i (700D), 40D, 50D, 60D


All points function as cross type to f/5.6. The center point is high-precision dual cross-type with f/2.8 or faster lenses. Autofocus does not function with lenses narrower than f/5.6. With the following lenses, the center point is normal-precision single cross-type, as if they were f/5.6 lenses:



  • EF 28-80mm f/2.8-4L USM

  • EF 50mm f/2.5 Compact Macro



With the following lenses, the outside points of the AF array are single-line, not cross-type:



  • EF 35-80mm f/4-5.6 (all variants)

  • EF 35-105mm f/4-5.6 (original and original USM variants only)

  • EF 80-200mm f/4.5-5.6 (II and USM variants only)


(EOS Rebel T4i (650D): Though not included in the above link, this limitation is specified in the manual for this camera, page 99.)


[image: Canon EOS 60D AF array]





Canon EOS Rebel T6i (750D) and T6s (760D), 70D, 7D


The same aperture limits as for the EOS Rebel T4i (650D), 40D, 50D, and 60D apply to the 7D, except that the 7D has 19 AF points instead of 9.


EOS 70D: The above applies to focusing when using the viewfinder. In Live View, with Dual Pixel CMOS AF, phase-detection autofocus with apertures as small as f/11 is possible across about 65% of the total frame area (80% height by 80% width). Dual Pixel CMOS AF is fully supported with more than 100 EF lenses, including all EF lenses currently in production. Other lenses will autofocus in a hybrid phase-detection/contrast-detection mode when in Live View.


Rebel T6i and T6s: Sensor-plane phase detection AF is available but only in a hybrid mode.


[image: Canon EOS 7D AF array]




Canon EOS Rebel T7i (800D), 77D, 80D


TODO: Expand with lens group information



45 points are available at all apertures f/5.6 or wider, 27 at f/8. With certain exceptions, the following apply:



  • If f/2.8 or wider, all 45 points are cross-type; additionally, the center point is high-precision dual cross-type.

  • If f/5.6 or wider, all 45 points remain cross-type but the center point is standard-precision single cross-type.

  • If f/8 or wider, 27 points remain available, 9 of which are cross-type.

  • Autofocus does not function with lenses narrower than f/8.




Canon EOS 7D Mark II


[image: Canon EOS 7D Mark II AF array]


Autofocus capabilities depend on which of the seven groups the lens in use falls into (see the EOS 7D Mark II manual):




  • Group A: Most f/2.8 and faster lenses and lens/extender combinations, with the exception of those listed in other groups below, support 65 cross-type points, including one high-precision dual cross-type point in the center of the array. All 65 AF points and AF area selection modes are available.


    Group A AF array: 65 points, 65 cross-type, 1 dual cross-type




  • Group B: Most f/5.6 and faster lenses and lens/extender combinations, with the exception of those listed in other groups below, support 65 cross-type points. The center point is normal-precision single cross-type. All 65 AF points and AF area selection modes are available. This group also includes the following lenses:




    • EF-S 60mm f/2.8 Macro (with neither IS nor USM)

    • EF 50mm f/2.5 Compact Macro (with or without Life-Size Converter)

    • EF 100mm f/2.8L Macro IS USM


    Group B AF array: 65 points, 65 cross-type




  • Group C: These lenses support 45 cross-type points; the outer ten points at each side of the AF array are horizontal-sensitive. All 65 AF points and AF area selection modes are available.



    • EF-S 10-22mm f/3.5-4.5 USM


    • EF-S 18-55mm f/3.5-5.6 (all versions)

    • EF 20-35mm f/3.5-4.5 USM

    • EF 35-135mm f/4-5.6 USM

    • EF 75-300mm f/4-5.6 USM (this version only; all other versions are in Group B)

    • EF 100-300mm f/4.5-5.6 USM


    Group C AF array: 65 points, 45 cross-type




  • Group D: These lenses support 25 cross-type points in the center; the outer points on the left and right side of the array are all horizontal-sensitive. All 65 AF points and AF area selection modes are available.




    • EF 24-85mm f/3.5-4.5 USM

    • EF 35-350mm f/3.5-5.6L USM

    • EF 55-200mm f/4.5-5.6 USM (both versions)

    • EF 80-200mm f/4.5-5.6 (original version, without USM)

    • EF 90-300mm f/4.5-5.6 (with and without USM)


    Group D AF array: 65 points, 25 cross-type





  • Group E: These lenses support 25 cross-type points in the center, but a total of 45 points are available. The outer ten points on each side of the AF array are disabled, and the remaining ten points on each side are horizontal-sensitive. All AF area selection modes are available.



    • EF-S 10-18mm f/4.5-5.6 IS STM

    • EF 28-70mm f/3.5-4.5 (both versions)

    • EF 28-80mm f/3.5-5.6 (all versions)

    • EF 35-70mm f/3.5-4.5

    • EF 35-70mm f/3.5-4.5A (autofocus-only)

    • EF 35-80mm f/4-5.6 (II and PZ power zoom versions only)

    • EF 80-200mm f/4.5-5.6 (USM and II versions)

    • EF 100mm f/2.8 Macro USM (non-L version)


    • EF 800mm f/5.6L IS USM

    • EF 1200mm f/5.6L USM


    Group E AF array: 45 points, 25 cross-type




  • Group F: These lenses and lens/extender combinations support 15 cross-type points in the center, with a total of 45 points available. The outer ten points on each side of the AF array are disabled, the remaining ten points on each side are horizontal-sensitive, and the top and bottom five points in the center are vertical-sensitive. All AF area selection modes are available.



    • EF 22-55mm f/4-5.6 USM

    • EF 28-105mm f/4-5.6 (both versions)


    • EF 35-80mm f/4-5.6 (original, USM, and III versions)

    • EF 180mm f/3.5L Macro USM with Extender EF 1.4x


    Group F AF array: 45 points, 15 cross-type




  • Group G: Most f/8 lens/extender combinations, the sole exception being EF 180mm f/3.5L Macro USM with Extender EF 2x (with which autofocus does not function), support autofocus with the center point only, which is cross-type (all other points are disabled). AF point expansion can be used (although only the center point can be selected), with four points adjacent to the center point acting as AF assist points. The points above and below the center point are vertical sensitive, while the points to the left and right of the center point are horizontal sensitive. Autofocus does not function with lenses narrower than f/8. This group also includes the following lenses:



    • EF 35-105mm f/4.5-5.6 (with and without USM)



    Group G AF array: One cross-type point, plus four AF assist points when AF point expansion is enabled




Dual Pixel CMOS AF, as in the EOS 70D, is also available.


Thursday, 23 April 2015

equipment recommendation - What Cheap (less than USD50) Piece of Kit Would Make a Good Gift for a Photographer?



Christmas is Coming


... Well, it's a few months away, but family are going to start asking for gift ideas soon, so I'm putting together my list.



I'm sure you have some great suggestions for affordable little bits of kit that make a big difference. Please share your ideas!


Thank you :)




dslr - Why is my Canon 70D's liveview autofocus much worse than the viewfinder's?


First of all I came across these questions already:
My viewfinder won't autofocus but live view focuses well
Canon 600D Autofocus not working when using view finder but working in live-view
Why is my camera focusing fine in liveview but getting it wrong with the viewfinder?


And they really surprise me, because it's basically the opposite of what I experience. I've tried the Yongnuo EF 35mm and the Canon EF-S 10-18mm IS STM, and with both I experience the same thing: using the optical viewfinder, I always have blazing fast, accurate AF. However, using liveview, it is most of the time impossible to get a focus point at all, forcing me to move around and whatnot.


The camera is a Canon 70D, which is supposed to have a good AF system.


In bright light conditions both work perfectly fine, so I doubt it's a defect, but in lower light situations I can never get a focus point with LiveView at all, no matter which AF method I select. This surprises me, as the answers to the questions I've linked suggest that in low light, the LiveView AF should work even better. Does my camera and its dual-ISO make a difference? If anything, I'd assume it'd make it better.


Especially the video AF (which I assume always uses LiveView's CDAF?) for the 70D is praised as outstanding compared to its previous models. Again: I can't confirm. I haven't tried video in bright light (I'd assume that'd make it better, though), but in low light (it's not that low, by the way; it's a single, but bright and close-to-subject, lightbulb spotlight) the AF in video is just as bad.



Any explanations/insights/tips/tricks for a DSLR newbie like me?


Additional info: when focusing manually, Magic Lantern's Focus Peak is able to draw in where the focus is, and it works by detecting contrast too. Unless I'm missing something, doesn't that mean that Canon's CDAF should be able to find a focus point too?


Here is a screenshot of the scene: [image] It's really not that little light!


Here is me putting the aperture up to f/2 for testing: [image]


Here is my f/11 setting, manually focused: [image]


As you can see, ML has no issue drawing the focus peak in the right spot instantly. In addition: I'm sorry I didn't notice this earlier, but the camera actually does drive up the ISO while focusing in LiveView. I couldn't get a screenshot of that, but the subject was clearly visible (and in focus for a split second) during the camera's AF attempt in LiveView. It just went past the focus point, and back again, every time I tried to focus. The distance to the subject is ~0.35m and my lens can focus down to 0.25m (even though with the viewfinder I can sometimes focus as close as 0.15m). Just to make sure, I've tried a longer (~0.5m) distance. Same results.


I've also just tried to AF on my desk keyboard in absolute low light (on the LCD, I see nothing but pitch black). The camera was able to focus in LiveView properly, and it turned the ISO up to the max during the focus process.


Here's a screenshot made during the AF attempt; the camera sets the ISO to an estimated ~4000 during the failed attempt:


[image]


Conclusion: It's not the lowlight environment. What else could be the cause?



Some more hints that it's not low light: [image] The camera is perfectly capable of focusing on this in LiveView.


[image]


I couldn't get the screenshot timed right, but this is during a focus attempt. The camera was able to find focus on the first try, even though the light situation is a lot worse.



Answer




It's not the lowlight environment. What else could be the cause?



In the case of your example photo, you're pointing at a spot on the lens cap that is black and pretty much featureless: no texture, no light/shadow, no contrast. In other words: nothing for the AF system to work with.


For more please see:


What could be causing focus problems in low light?

Why can't my SLR autofocus on certain parts of a scene?
How do I diagnose the source of focus problem in a camera?
Why doesn't auto-focus work with an all-white subject (like a wall)?


From the comments:



But how come the viewfinder's AF is able to focus within a split second on the exact same spot? I mean, it's a Canon lens cap. There's the "Canon" logo on it which the viewfinder's PDAF is able to focus on instantly.



What AF mode are you using with PDAF through the viewfinder? How many AF points are enabled? Are you using single point AF, expanded point AF, area AF, etc.? The 'Canon' logo is not in the center of the frame where it appears you have instructed the Live View AF to focus.



I'm using single point AF with only the center point enabled, but I've tried all the modes and I get consistent results with every mode. The only mode that differs is "Quick AF" for LiveView, which works just fine, but since it disables and re-enables LiveView to get its focus, I'm sure that puts more wear on the shutter, so I'd rather not use it.




The actual areas of sensitivity for each system can be different, even when you think you have the same focus "point"¹ selected with each independent AF system. When using the dedicated PDAF sensor via the viewfinder, the area of sensitivity, especially in the vertical direction, for the center AF 'point' is much larger than the little square you see in the viewfinder.


It also seems to me, based on years of using the 7D and later similar Canon PDAF systems (5D Mark III and 7D Mark II), that even when a single point is selected with no 'assist' points enabled, if there is absolutely nothing the camera can find to focus on in the designated area for the selected AF 'point' then the camera will expand the search to find something, anything, that gives it enough contrast on which it can focus. When using Dual Pixel CMOS AF, the actual area of sensitivity doesn't seem to be quite as large, nor does the camera tend to expand the search until it finds something it can latch onto.



Also please see updated question, the camera is actually able to focus on a similar (white font on black background) subject in near-pitch-black, so I really doubt low-light being the cause.



In the last images you added, you have high contrast between the white lines and the black background. That's an entirely different situation than pointing at a featureless uniform surface.


Low light will always reduce the capacity of an AF system to focus on the same thing. It may not reduce it enough to disable it, but AF always works better in brighter light than dimmer light.



I'm thinking you're right about the contrast situation, but then again judging from other answers/comments the 70D should use PDAF in LiveView too, which is supposed to work (like it does in the viewfinder)? I mean I understand all the troubles of low-light/-contrast and such, but what I really don't get is how the viewfinder's AF is doing such an amazing job with the exact same scene then.




The 70D uses a hybrid of phase detection and CDAF when in Live View and using Dual Pixel CMOS AF. But it is not using the dedicated PDAF sensor that is used when you aren't using LV. The areas of sensitivity for each system can be different, even when you think you have the same focus "point" selected. The performance of each system will also be different because they are each using very different hardware from the other.


The photosites (pixels) in the dedicated PDAF sensor are 'huge' compared to the 'half-pixels' on the imaging sensor used by Dual Pixel CMOS AF. This makes them more sensitive in low light. Although I don't know it for a fact, my strong hunch² is that the sampling period for the dedicated PDAF sensor used when shooting via the viewfinder, particularly in 'One Shot' rather than 'AI Servo' mode, is significantly longer than the sampling period for sensor-based Dual Pixel CMOS AF.


¹ No 'AF point' is actually a "point", it is an area for which the camera will (attempt to) find the greatest amount of contrast. This is true of dedicated PDAF sensors. It is also true of sensor based CDAF as well as sensor based hybrid systems. Both PDAF and CDAF are based on detecting contrast and using that information to focus the lens. How each type of AF system uses that information is different, but they're all based on detecting contrast. If they can't detect any contrast, they don't work.
² Based on several different things I've read Chuck Westfall, long time technical advisor for Canon USA, say over the years regarding PDAF and Dual Pixel CMOS AF combined with the requirement for the sensor to refresh fast enough to provide what amounts to a video feed for Live View.


What is the best way to reduce high ISO noise?



Even on a 5D, high ISO noise is a challenge. I use Photoshop's Camera Raw and Adobe Bridge. I want to minimise loss of image quality.




Wednesday, 22 April 2015

software - Apple Aperture or Adobe Lightroom: which is better for post processing RAW photos?


This is probably an old chestnut, but I'm trying to decide between purchasing Adobe Lightroom or Aperture for Mac, and would appreciate any pointers that would help me decide.


I think I've outgrown iPhoto, and would like to spend a little more time post processing my RAW files to get the best out of them. I guess in the future, I'd like to try some HDR stuff too (if that has any relevance).



Answer



Personally I much prefer Lightroom, though I suspect you'll find this argument goes on as long as the Mac vs PC or Canon vs Nikon debates.


Lightroom is more expensive but has far more features, and (surprisingly) seems easier to use, but that may just be because I'm more used to it. The main advantage of Aperture is that it integrates with your other iLife apps, and has (I believe) built-in print ordering options, much like iPhoto does.


Aperture doesn't have as many image adjustment tools as Lightroom, such as split-toning (where you colour the highlights and shadows with different tints) and the graduated filter, and the noise reduction in the latest version of Lightroom is mind-blowingly good. Neither of them supports HDR images as far as I know, but Lightroom integrates more tightly with Photoshop (as you'd expect), which is where you may be doing most of your HDR work.


You can download trial versions of both so it's worth giving them a try and seeing what you prefer working with.


(One other consideration - Aperture requires an Intel processor, not sure what you have but if it's G5 or earlier then you can't use Aperture)


equipment recommendation - Is there a remote shutter release for the Panasonic DMC-FZ40?


A friend has the DMC-FZ40. Discussing light painting, he was interested in buying a wireless control. Looking around for one, we found out that none exists, and after researching Itai's neocamera and dpreview (for the FZ30 and FZ50), we concluded that the FZ40 is not designed for remote shooting.


Has anyone overcome this problem? Any alternative (mod?) for remote - wired or wireless?




Tuesday, 21 April 2015

optics - Why are objects far away inverted through a lens but not through the viewfinder?


When I look through a lens, the images of objects far away are inverted, but when looking through the viewfinder on my camera they are not. Why is this?


I am having a hard time understanding why objects far away are inverted in the first place.


Could anyone provide an explanation or ray diagrams (preferably using a point source on an object and including the lens of the human eye)?


EDIT: Thanks everyone I now understand why objects far away from a lens appear inverted. But can anyone now explain how the camera elements make far inverted objects appear right way up without also making close normal objects appear upside down?


EDIT 2: I can't provide an image right now because I am at school, but you know how, when you look through a magnifying glass, far objects will be inverted and blurry but close objects will be sharp and erect (normal)?


That is what is happening when I look through my camera lenses while they are not attached to the camera, but when they are attached to the camera and I look through the viewfinder (or at processed film) the objects in the image produced are all of the same orientation.


Does this mean the lens doesn't actually produce images the way a magnifying glass does, since the objects in the images produced on film are all of the same orientation? Or does it mean a magnifying glass doesn't actually produce objects with different orientations? If a magnifying glass doesn't, then why does it look like it does, and are the convex-lens diagrams wrong (they show an upright virtual image for close objects and real, upside-down images for far objects)? Isn't a magnifying glass just a convex lens?


It DOES look like a magnifying glass when I look through the lens. That's why I thought the lens was producing objects with different orientations. This also goes with the convex-lens diagrams below that show objects with different orientations.


So does the lens produce objects with different orientations or doesn't it? If not, why does it look like it does when I look through the lens (and based on the convex-lens diagrams it seems like it should)? If it doesn't, how do the other elements in a camera lens correct the convex lens? And if it does, why do the film and the viewfinder show objects with the same orientation?



Sorry for asking so much. This is just so confusing!


EDIT 3: This is how I thought a camera lens would work: Lenses


I forgot to mention in EDIT 2 that it seems that close objects shouldn't even appear on film based on the diagrams.


I still don't understand... =(


EDIT 4: So objects really close to the camera lens should not appear on film, correct?


So... why do all objects in the viewfinder appear upright??? Since my eye is receiving both the light rays from close objects (virtual upright images) and from far objects (real inverted images), shouldn't really close objects and objects farther away have different orientations, just like when looking through the lens directly? How does the viewfinder change anything?


EDIT 5: Thanks so much everyone. Thanks for the help.


"Anything close enough to form a virtual image is not focused onto the focusing screen"


So let's say I put a pen right in front of the lens and look through it directly. The image I see is upright, so this means it is a virtual image. Now let's say I attach the lens to the camera and look through the viewfinder. I can still see the pen, but it is blurry (because the focal length is longer, right?). The lens forms a virtual image of the pen, but I can still see it in the viewfinder. Why is this? If the viewfinder shows me exactly what would be on the film, it should not show the pen at all (based on the diagrams in the image above), should it?


EDIT 6: Maybe it should form a blurry image, like a pinhole camera or something. In any case, thanks for all the help everyone. I know it can be frustrating trying to teach me; I can be pretty dense sometimes.
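(For anyone equally confused, a quick numeric check with the thin-lens equation, offered here as a sketch assuming a simple 50 mm convex lens, captures the split the diagrams show: objects beyond the focal length form real, inverted images, while objects inside it form virtual, upright ones.)

```python
# Thin-lens equation: 1/f = 1/d_o + 1/d_i, so d_i = 1/(1/f - 1/d_o).
# Positive d_i -> real, inverted image (what lands on film);
# negative d_i -> virtual, upright image (what a magnifying glass shows).
def image_distance_mm(f_mm, object_distance_mm):
    return 1.0 / (1.0 / f_mm - 1.0 / object_distance_mm)

f = 50.0  # assumed focal length
print(image_distance_mm(f, 2000.0))  # ~51.3: real and inverted (far object)
print(image_distance_mm(f, 30.0))    # -75.0: virtual and upright (close object)
```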





lens - How can I get headshot portraits with pleasing, natural perspective if I'm constrained to a short distance?


I know that in a portrait photo we choose the distance between the camera and the subject so as to create a pleasing and hopefully natural-looking perspective. I understand that roughly 1 to 1.5 meters gives the best results for a portrait.


I'm a computer engineer and I have a distance constraint: the person (subject) and the camera must be 50 cm apart.


Is there any camera, lens, or hardware filter that I can use to get the best possible head-shot and prevent perspective distortion given this distance limitation?
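For intuition about why 50 cm is so unflattering, here is a rough pinhole-model sketch; the 5 cm nose-to-ear depth is an assumed figure for illustration:

```python
# In a pinhole model, apparent size scales as 1/distance, so the nose
# (closer to the camera) renders larger relative to the ears (farther away).
def nose_to_ear_size_ratio(camera_to_ears_cm, nose_offset_cm=5.0):
    return camera_to_ears_cm / (camera_to_ears_cm - nose_offset_cm)

print(nose_to_ear_size_ratio(50))   # ~1.11: the "big nose" look at 50 cm
print(nose_to_ear_size_ratio(150))  # ~1.03: close to natural at 1.5 m
```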


Update 1: here are my captured photos, but I think none of them are good (the perspective is distorted):



headshot 1 headshot 2 headshot 3




Does the camera white balance setting affect the raw image at all?


Does a DSLR's white balance setting (whether a preset like cloudy or a custom WB setting) affect the raw file at all, or does that setting only determine the WB of the JPG that the camera generates?


A related way to ask this is whether this procedure makes sense:

  1. Shoot in RAW mode only.
  2. Set the white balance using a gray card, but not by taking a picture of the gray card.


If the WB setting only affects the JPGs and not the RAW files, and you want to shoot RAW only, then you must have a photo with the gray card to use in post processing.



Answer



The white balance setting doesn't affect the image data in the RAW file, but the setting is recorded in the metadata in the file, so you can still use it to process the RAW image if you like.
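As a quick way to verify this yourself, here is a minimal sketch using the rawpy library (the file name is hypothetical): the raw pixel data comes out the same either way, while the as-shot white balance travels along as metadata you can apply or override.

```python
import rawpy  # assumes the rawpy package (LibRaw bindings) is installed

with rawpy.imread("IMG_0001.CR2") as raw:  # hypothetical raw file
    # The as-shot WB multipliers live in metadata, not in the pixel data.
    print(raw.camera_whitebalance)
    # Develop once honoring the recorded setting...
    as_shot = raw.postprocess(use_camera_wb=True)
    # ...and once with arbitrary multipliers, e.g. from a gray-card reading.
    custom = raw.postprocess(user_wb=[2.0, 1.0, 1.5, 1.0])
```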


Monday, 20 April 2015

lens - What is CLA service?


I've seen the term used in used lens listings, e.g "recently CLA'd".



I have a few questions, but I think they're related enough to only warrant one post:



  • What does CLA stand for, and what is it?

  • How much does it generally cost?

  • How do you decide whether a lens might need CLA?



Answer



"Clean, Lubricate, Adjust", as Maxwell said. It is usually encountered when discussing Leica camera bodies, those things need a checkup every two decades or so. But, yes, lenses can be CLA'd too.


For lenses, it basically means dismantling the thing, cleaning dust and other deposits from the lens surfaces, replacing the goop that lubricates the focus and aperture mechanisms, and (hopefully) putting it all back together in working order.


If I had an old lens where I could see fogging on the internal glass surfaces, where the focus mechanism was uneven, too stiff or too loose, or ditto the aperture mechanism, I'd consider having it CLA'd. I have no hard data on what it'd cost for a lens though; for what it is worth I did pay a couple hundred Euro for CLA on a Leica M3 camera body recently.



lighting - Canon 5D Mark III - problems with fluorescent light


This is a brand new Canon 5D Mark III. There is green banding when taking photos or video in environments with fluorescent light. The first Canon 5D Mark III was returned to the shop, and this is the second one, but it still has the same problem.


So, I think this is a general issue with fluorescent light. I recorded a video clip of this problem here http://www.youtube.com/watch?v=MOer6WPnPDM


There is also a yellow banding here http://www.learningdslrvideo.com/yellow-banding-high-iso/


Is there any way to fix this problem? Should Canon release anything to fix it?



Answer



Fluorescent lights flicker: they change both intensity and color on every half-cycle of the mains supply, 100 or 120 times per second (depending on where in the world you are).


This produces inconsistent colors and banding at high shutter speeds, and it confuses the auto white balance feature.


But the solution is simple: just make sure to capture only complete cycles during the exposure, since the cumulative light of a complete cycle is consistent. You do this by setting your shutter speed according to the mains frequency of your country.


In the US the frequency is 60 Hz, so a 1/60 shutter speed will catch one complete cycle, 1/30 will take two complete cycles, 1/15 will cover four cycles, etc.



In most of Europe the frequency is 50 Hz, so 1/50 for one cycle, 1/25 for two, 1/13 for four, etc.
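If it helps, here is a small sketch of that whole-cycle rule; it simply enumerates shutter speeds that are integer multiples of the mains period:

```python
# Exposures lasting a whole number of mains cycles collect the same
# cumulative light, so banding and colour shifts average out.
def flicker_safe_speeds(mains_hz, max_cycles=4):
    return [f"1/{mains_hz / n:.0f} s ({n} cycle{'s' if n > 1 else ''})"
            for n in range(1, max_cycles + 1)]

print(flicker_safe_speeds(60))  # US: 1/60, 1/30, 1/20, 1/15
print(flicker_safe_speeds(50))  # Europe: 1/50, 1/25, 1/17, 1/12
                                # (the camera's nearest marked speed is 1/13)
```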


I've written a longer version of this answer with sample pictures on my blog at this post: photography under florescent light


digital - Where to start with photography?



I am new to photography, and need a little advice. I currently have a Samsung L83t.


To get rolling, where do I go from here? Does the camera matter? Is this camera enough to start with?




Sunday, 19 April 2015

color correction - How can I desaturate reds in post-processing using Rawtherapee (or Gimp)?



I took a few artwork pictures in sunlight. I used RawTherapee's pipette to find a white balance I'm happy with, but the reds are overbright.


How can I desaturate only the reds?


I had some success decreasing contrast, but obviously it affects the whole picture.


Here is an example. On the left is the original picture, where the red is too bright. On the right is the corrected picture, here obtained with contrast set to -20 (in Gimp): the red is correct, but not the other colors.


Overbright red



Answer



enter image description here


On the left is the result with the HSV equalizer's S-value decreased; on the right is the original. This is only a very quick demonstration and I did not bother to fine-tune it, so please forgive the reddish edges of the blossom, especially in the background.


Probably the easiest way is to use RawTherapee's HSV ("Hue/Saturation/Value") Equalizer:




  • Enable it

  • Create a curve for the S-parameter

  • On the right side of the curve there is a tool button that looks like a curve with a cross. Click it, then Ctrl+left-click and drag in your picture over the color that you want to (de)saturate.

  • Repeat the click-and-drag until you are satisfied.


If you want the red to be darker/brighter, you need to change the V-parameter. If you want to change the hue of the red (e.g. making it more orange or magenta), then use the H-parameter.
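If you would rather script the same hue-targeted desaturation outside RawTherapee, here is a minimal sketch with Pillow and NumPy; the file names, hue window, and 40% strength are all assumptions to adjust:

```python
import numpy as np
from PIL import Image

img = Image.open("artwork.jpg").convert("HSV")  # hypothetical input file
h, s, v = [np.asarray(c, dtype=np.float32) for c in img.split()]

# Pillow maps hue onto 0..255, and reds sit near both ends of that scale.
red_mask = (h < 15) | (h > 240)
s[red_mask] *= 0.6  # pull red saturation down by 40%

hsv = np.stack([h, s, v], axis=-1).astype(np.uint8)
Image.fromarray(hsv, "HSV").convert("RGB").save("artwork_fixed.jpg")
```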


equipment recommendation - Besides several Pentax DSLRs, what cameras are rated for operation below 0 °C (32 °F)?


Many Pentax DSLRs are rated for operation down to -10 °C (14 °F). This is a selling point of these cameras, and they are specifically tested to operate reliably at this temperature (with a note on reduced battery life, recommending that the operator have extra batteries in an inside pocket).


I can't find any camera from Canon, Nikon, Sony, or Olympus rated to below 0 °C (to my surprise, neither the Canon EOS-1D X nor the Nikon D4 is rated to below 0 °C). Are there any other interchangeable-lens cameras designed to operate at below-freezing temperatures?



Answer



Phase One IQ series digital backs for medium format cameras are rated from -10°C to 50°C (but not the older P+ series). These could be used with, for example, a Phase One 645DF body.



The manual of the Nikon F6 does not list a range of operating temperatures, but it does list how many rolls of 36-exposure film you can expect to shoot at -10°C with fresh batteries, so that is supposedly a supported temperature.


focal length - Is crop-factor a bad thing?


It seems to me that there is a preference for full-frame sensors rather than cropped sensors, and I'm curious as to why. It seems to me that a cropped sensor means I get more bang for my buck with zoom lenses. True, I suppose it means I would need a shorter lens to get the same wide-angle effect on the short end, but it seems like wide-angle lenses are (generally) cheaper than telephoto lenses. Am I missing something?



Answer



No, it is not a bad thing. It is not really "good" or "bad" in any sense. It's simply a different format than full-frame, which is different than medium format, etc. There are pros and cons to each. The smaller APS-C style "cropped" sensors do affect a lens's effective field of view, and that can be beneficial or detrimental, depending on how you choose to see it. Here are some facts about sensors:




  1. Cropped Sensor Formats (APS-C)

    • These are smaller sensors

      • They have higher manufacturing "yield" than larger sensors

      • As such, they are generally much cheaper



    • Photosites are generally smaller and more densely packed


      • This generally results in a lower signal-to-noise ratio and noisier pictures

      • This also means the maximum dynamic range (contrast ratio) of cropped sensors is lower (less light-gathering power per photosite)



    • They have a narrower field of view compared to larger sensors

    • Their narrower FOV has the effect of multiplying the focal length of any lens

      • This may be beneficial if you need super telephoto lengths (i.e. 400mm on FF ~= 640mm on APS-C, effectively)

      • This may be detrimental if you need ultra wide angle lengths (i.e. 16mm on FF ~= 26mm on APS-C, effectively)




    • The additional "effective magnification" offered by a cropped sensor is only illusory, and is not actual magnification

      • Given a large enough sensor with enough megapixels, the same exact "crop" provided by a cropped sensor can be achieved with a full-frame or medium format sensor (however, the larger sensor would need some SERIOUS megapixels to achieve this; see the sketch after this list)

        • The 1.6x crop sensor of a Canon 450D would require a full-Frame sensor with 31mp to achieve the same crop

        • The 1.6x crop sensor of a Canon 550D would require a full-frame sensor with 46mp to achieve the same crop








  2. Full-Frame Sensor Formats

    • These sensors provide the same "usable" pixel area as 35mm film

    • These sensors are larger, and have lower manufacturing yield

      • This generally means they are more expensive




    • The photosites are larger and often less densely packed

      • This results in better signal-to-noise ratio, less noisy pictures

      • Dynamic range is generally higher with larger photosites.

        • (The new Canon 1Ds IV with a 30mp+ sensor is touted as having full 16bit RAW capability, which offers much greater dynamic range than the general 12bit RAW of cropped sensors)






    • Their field of view is "normal" from the perspective of the bulk of the photography community and equipment

    • A lens's focal length is as stated when used on a full-frame body



  3. Medium Format Sensors

    • These sensors are often much larger than full-frame (up to 57mm or larger)


      • They have extremely low yield, and thus their cost is extremely high



    • They have high density, but large photosites

      • This results in some of the best dynamic range possible in a digital sensor

      • Leica and Hasselblad's latest medium-format sensors tout 24bit RAW



    • They may have a much wider field of view than normal 35mm for a given focal length


      • A lens with a normal focal length on 35mm behaves as a shorter (wider) lens on medium format, providing an even greater field of view

      • As with cropped sensors, the effect is illusory, and only useful when describing things at a technical level






(Note that the effect of sensor size on effective focal length, or apparent magnification, assumes a common lens system. Medium format cameras tend to be rather specialized, so a direct comparison there is difficult. For the sake of discussion, assume a similar lens system and focal length; the effect then holds throughout the range of sensor sizes.)
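As referenced in the cropped-sensor list above, here is a small sketch of the "equivalent megapixels" arithmetic (the sensor resolutions in the comments are approximate):

```python
# Cropping a full-frame image down to APS-C framing keeps only
# 1/crop_factor**2 of the area, so matching a crop body's pixel count
# requires crop_factor**2 times the megapixels.
def full_frame_mp_needed(crop_mp, crop_factor=1.6):
    return crop_mp * crop_factor ** 2

print(full_frame_mp_needed(12.2))  # Canon 450D (~12.2 MP) -> ~31 MP full frame
print(full_frame_mp_needed(18.0))  # Canon 550D (~18 MP)  -> ~46 MP full frame
print(400 * 1.6)                   # a 400mm lens frames like ~640mm on APS-C
```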


optics - What could cause this visible artifact which seems to be a glowing inverse of something outside of the frame overlaid on this photograph?


An acquaintance posted an image to FB from his recent trip to the JFK Historic Site, asking for an explanation of a visible artifact that he states was not visible to the naked eye but was visible through the EVF of his camera and in the final image.


Image 1


The artifact is a series of bright, nearly-vertical bars and a longer, almost horizontal bar superimposed over the lower part of the door frame and the radiator.


I found a very similar artifact in another image of the same room from a blogger which isn't as compressed:


Image 2


In both cases the artifact seems to be a vertically inverted image of the post and rail outside the saturated window. From looking at other images of the room available online, I know there is a spherical glass ornament topping the table lamp, whose shape is visible at the left of the photo.


One response to the OP's question is as follows:




RK: It is lens flare. So, even on an iPhone, the camera lens is made up of multiple lenses. Each designed to focus the image and make sure it isn't distorted. Remember a lens bends light. Sometimes, based on angle and intensity of the incoming light, some stray light bounces back and forth while traveling through the various elements that make up what we think of as a camera lens (some of mine have 16 elements inside a single lens.) Even the best glass is capable of reflecting some light instead of passing it through the lens elements to the sensor. This is what you are seeing. The railing is overexposed, and some stray light is bouncing around a bit inside the lens.



My response:



DJ: I don't think it's lens flare or ghosting. Those usually form an image of starbursts, rings, or circles in a row across the image or in the shape of the aperture, or a veiling glare or haze. This is forming an image of the light outside minus objects in its path. It is more likely a direct reflection/projection off a mirror or the glass in a picture frame elsewhere in the room onto the opposite wall and the radiator. There are two other clues which seem to confirm this. First, the image has a graduated appearance: it is less distinct in the relatively bright area on the door frame and brighter in the relatively darker areas behind and on the radiator. You would expect lens flare to be more evenly specular. Second, the image representing the light from below the horizontal rail changes angle on the different faces of the door frame. Unlike a projection, lens flare is an image created within the lens; the physical shapes of the objects it is superimposed on do not change its shape.


OP: If that were the case, would I have been able to see said reflection in the room without the camera? There was no image I could see with the naked eye. I noticed the image on my phone before I took the pic and did a double, triple, and quadruple take before I took it, so I'm sure.


DJ: It would have been visible at the moment the image was taken, but it may not have appeared nearly as bright and distinct because your eyes were accommodating for the bright light coming in the window. The projection could have easily been obscured by you or anyone around you changing position or moving through the path of the projected reflection. Also, it's been caught by other people... (see Image 2) Notice that, in the linked example, the projection is again distorted by the surface it falls on.



Anyone else want to chime in with an explanation?




software - Are Gimp's "color space" blend modes the same as those in Photoshop?


The Wikipedia article on Blend Modes describes three of them in particular like this:



Photoshop’s hue, saturation, color, and luminosity blend modes are based on a color space with dimensions that the article HSL and HSV calls hue, chroma, and luma. Note that this space is different from both HSL and HSV, and only the hue dimension is shared between the three; see that article for details.




...



Because these blend modes are based on a color space which is much closer than RGB to perceptually relevant dimensions, it can be used to correct the color of an image without altering perceived lightness, and to manipulate lightness contrast without changing the hue or chroma. The Luminosity mode is commonly used for image sharpening, because human vision is much more sensitive to fine-scale lightness contrast than color contrast. See Contrast (vision).



and, without references, adds:



Few editors other than Photoshop implement this same color space for their analogs of these blend modes. Instead, they typically base their blend modes on HSV (aka HSB) or HSL. Blend modes based on HSV are typically labeled hue, saturation, and brightness. Using HSL or HSV has the advantage that most operations become invertible (at least in theory), but the disadvantage that the dimensions of HSL and HSV are not as perceptually relevant as the dimensions of the space Photoshop uses.



In my version of Gimp, those options are Hue, Saturation, Color, and Value. This isn't exactly the same as the terminology used in Photoshop (value replaces luminosity), but it's also not "hue, saturation, and brightness" as in the paragraph above.



As the Wikipedia article explains it, the color space Photoshop uses here would appear to be an advantage for photography, since it's more perceptually-relevant. Which does Gimp use?



Answer



I guess technically speaking I would call HSL and HSV a "color model", same as RGB or CMYK, as they are tools for modeling and describing color. A "color space" is a tool for calculating color adjustments or comparing colors, such as XYZ or Lab. Either way, I am not sure color spaces or color models really matter for the question at hand...


Based on the references you have linked, and the terminology used, I can offer two possibilities in regards to how Gimp's blending modes work (which are not really color space or color model related...they are simply layer blending operations that work on different channels, and should work the same regardless of what working color space your image is in, or what color model your image is using).


Option 1


The first option is that Gimp and Photoshop behave the same for those four blending modes. This is based on the assumption that the terms "Luminosity" and "Value" refer to the same thing: the luma axis, the facet of color that determines whether a color is light or dark, irrespective of its chromaticity (hue and saturation). If this is the case, you can assume that applying the "Value" blend mode will preserve the luminosity of the top layer, and keep the hue and saturation/chroma of the bottom layer.


Option 2


The second option is that Gimp uses "value" in the sense of the HSV color model's Value component. In HSV, value is a bit different from luminosity/brightness, in that at maximum value you achieve maximum color purity for a given saturation: if saturation is zero you get pure white, whereas if saturation is 1.0 (100%) you get the pure hue. This is in contrast with the HSL/B color model, wherein luminosity/brightness is an agnostic component: at maximum brightness, regardless of hue and saturation, you get white, while at 0.5 (50%) brightness you achieve maximum color purity for a given saturation (if saturation is zero you get a middle gray, whereas if saturation is 1.0 you get the pure hue). Photoshop uses luminosity according to HSL/B rather than value when you look at things this way.


If Gimp uses Value in the same way as HSV, I cannot say for sure exactly what the outcome of the "Value" blend mode would be. The most logical way to think about it would be that it keeps the numeric value (say 0.5) of the top layer and applies it to the hue and saturation values of the bottom layer. If the top layer has a red hue at 100% saturation and 50% value, while the bottom layer has a green hue at 50% saturation and 100% value, I would assume the final outcome is a green hue at 50% saturation and 50% value. In other words, a soft, semi-desaturated dark (but not blackish) green.
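To make the difference concrete, here is a minimal sketch using Python's standard colorsys module, blending a mid-gray "top" pixel over a pure-green "bottom" pixel. The sample colors are arbitrary, and this only approximates what either editor does per pixel:

```python
import colorsys

top = (0.5, 0.5, 0.5)     # mid-gray top layer pixel (R, G, B in 0..1)
bottom = (0.0, 1.0, 0.0)  # pure-green bottom layer pixel

# HSL-style "Luminosity" blend: keep bottom's hue/sat, take top's lightness.
bh, bl, bs = colorsys.rgb_to_hls(*bottom)
top_lightness = colorsys.rgb_to_hls(*top)[1]
print(colorsys.hls_to_rgb(bh, top_lightness, bs))  # (0.0, 1.0, 0.0): unchanged

# HSV-style "Value" blend: keep bottom's hue/sat, take top's value.
bh, bs, bv = colorsys.rgb_to_hsv(*bottom)
top_value = colorsys.rgb_to_hsv(*top)[2]
print(colorsys.hsv_to_rgb(bh, bs, top_value))      # (0.0, 0.5, 0.0): darker green
```

Pure green already sits at 50% HSL lightness, so the luminosity-style blend leaves it alone, while the value-style blend darkens it; that asymmetry is consistent with the harsher results from GIMP shown below.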


UPDATE:



Here are some sample images from both Photoshop and GIMP that demonstrate each mode. The differences between the Luminosity blend in Photoshop and the Value blend in GIMP are pretty clear: it does appear as though GIMP treats the Value blend according to the rules that govern Value in HSV. GIMP also seems to apply some stronger curves during processing, or simply has a slightly different approach to blending the two colors for each pixel than Photoshop, producing slightly harsher results. Photoshop on the left, GIMP on the right:


Photoshop HueGIMP Hue
Photoshop SatGIMP Sat
Photoshop ColorGIMP Color
Photoshop LumGIMP Value

