Tuesday 28 June 2016

exposure - Why doesn't the picture become darker the more you zoom in?


As your lens's focal length gets longer, fewer photons pass through the lens to hit the mirror/sensor.



Why don't you see the view darken in the viewfinder as you zoom in with a zoom lens, and brighten as you zoom out?


Why don't telephoto lenses need longer shutter times than wide-angle lenses?



Answer



The answer revolves around how zoom lenses function, because your observation is correct: as you zoom to higher and higher magnifications, the image dims unless some compensation is applied. Suppose you zoom from 25mm to 50mm. If the working diameter of the aperture remained unchanged, image brightness would drop to a quarter of its former intensity. Stated differently, each doubling of the focal length dims the image to just 25% of what it was before the zoom. So how is this light loss prevented?
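To put rough numbers on that claim, here is a minimal sketch of the arithmetic; the 12.5mm aperture diameter is just an assumed illustrative value, and image brightness is proportional to 1/(f-number)²:

    # Relative image brightness when focal length changes but the physical
    # aperture diameter stays fixed. Brightness scales with 1/(f-number)^2.

    def f_number(focal_length_mm, aperture_diameter_mm):
        return focal_length_mm / aperture_diameter_mm

    diameter = 12.5  # mm, assumed working aperture diameter

    n_wide = f_number(25, diameter)   # f/2
    n_tele = f_number(50, diameter)   # f/4

    relative_brightness = (n_wide / n_tele) ** 2
    print(n_wide, n_tele, relative_brightness)  # 2.0 4.0 0.25 -> one quarter of the light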


The amount of light energy that can enter the lens is governed by the working diameter of the iris diaphragm (aperture): the larger the working diameter, the greater the surface area and the more light the lens can gather.



The modern zoom lens has a trick up its sleeve that keeps image brightness the same through most of the zoom range; some high-end zooms hold it constant throughout. Here is how it works: the aperture, as seen when looking into the lens from the front, appears larger than it actually is. That is because the front group of lens elements magnifies it, so the diameter of this entrance circle appears larger than its physical size.


Further, as you zoom, the distance between the front lens group and the iris diaphragm also changes. This induces an apparent change in diameter. That the change is apparent rather than physical is unimportant: from the outside looking in, the change is real, and it allows more and more light energy to enter as you zoom.


As I said earlier, some high-end zooms maintain this through the entire zoom range; these are called constant-aperture zooms. Lower-priced zooms hold a constant aperture through only part of the zoom range; beyond that point they fail to compensate and suffer the light loss you are asking about.


equipment recommendation - Buying a used camera



Based on the discussions here earlier, I decided to get a Nikon D90. I am looking at sites for used cameras. I contacted one seller and here is the response I received:



All gear is in mint condition with fewer than 2500 shutter actuations, and the D90 body is just over 2 years old. The lens is just over a year old. Technically there are still 4 years left on the Nikon warranty for the lens, but apparently warranties are non-transferable. This is only a package deal (D90 with 55-200mm AF-S lens). Price CAD $1000.



But I have no idea if the deal is worth the money I would be paying for it, since there is no warranty. Is it worth taking this path, or is it better to go with a Nikon D5100 bought new from a store, with a full warranty, for almost the same price?


Thanks again for all the help!




post processing - How does one create the Dave Hill effect for portraits/action shots?



Inspired by the question here by sebastian.b and the response to the subsequent meta question, I would like to ask how to achieve a specific photo effect.


There may be more photographers that pull this off, but I know of two specific ones. The first is the wonderful photographer, Dave Hill. I know that he does lots of post processing, and his talent goes beyond a short list of tips that I can learn from a website, but I think there is something that I can hope to understand.



What is the Dave Hill effect? I have heard people talk about it, but what is it actually and how does one achieve it?



and



On his webpage, the "Nerd ad", "Allen Robinson", and "Jon Heder" shots give the subjects a specific skin tone/saturation. In general, his portrait shots are amazing. The detail and texture mystifies me. How can I pull this eerie feeling off?



Now the second photographer is skylove from Flickr. His shots are also extremely impressive.




Again, he is using some black magic to make skin tone amazing. His subjects look beyond real, and I love it. Three specific examples are energy, discovering, king to defeat. Looking at energy, beyond the absolutely phenomenal idea for the shot and execution, and just focusing on the subject, he has this plaster/grey look, and his skin looks wonderful. Similar things are true for those other shots I linked.



Some ideas I have had that might contribute to this are HDR or possibly just phenomenal lighting setups. I have watched the videos on Dave Hill's website, but they just make me think even more that he sold his soul to Mephistopheles.


I would appreciate any suggestions on how to achieve these results, or even some pointers on where to start.


Thanks so much!



Answer



It's all about the lighting!




Most people seem to think it's all done in 'shop but a lot of work goes into the actual setup. Many of the supposed "Dave Hill look" imitations have just used HDR style tonemapping, and the results are nothing like the work of Dave Hill. Compare this:



http://www.flickr.com/search/?q=dave%20hill%20look&w=all


with this:


http://www.davehillphoto.com/


No similarity at all! Yes there is a lot of post production in Dave's work, but you need to get close with the lighting in order to get the same look. Here is an example, showing what is possible with lighting alone. The first image is straight out of camera:




...the second image is the final image after Photoshop: mostly saturation, contrast and a little high pass filter in an overlay blending layer. I should point out that I didn't set out to get the Dave Hill look; I just wanted to play around and see what worked with the subject (the singer in a ska band). I could have done more in post to get slightly closer to Dave's images.


The key to the look is sculpted light. This means directing large light sources at oblique angles to the subject. I used two square softboxes right up close and a bare hotshoe strobe for the hair light. Here's how the lights were placed:



The softboxes were brought in really close to maximise the softness of the light, and angled toward the camera to increase the sculpting effect and make the light fall off nicely across the face. Two "gobos" were used to stop flare from interfering with the picture (the black lines in the diagram). This is absolutely essential when angling the strobes toward the camera! The hairlight at the back was bare (i.e. no modifier) to really penetrate Aaron's hair and make it glow. As less power is required for an unmodified light, I substituted a hotshoe flash for the big monolight.



You can replicate this look on the cheap with a couple of hotshoe flashes but softboxes are important. Something like an umbrellabox is a good option if you're using small battery flashes. You just don't get the same effect with umbrellas as they spill too much light out the sides, and are in general harder to fine tune as you don't get a crisp edge to the light. The curved profile of the umbrella also makes it hard to get close. Getting close basically makes your softboxes bigger. Dave would have used huge octoboxes further away to give the model more space but he gets paid more than I do ;)


You can actually see Dave's setups in his behind the scenes videos (which I've just noticed you mentioned in your question!). And yup there are some big octoboxes! http://www.davehillphoto.com/bts/


Here's another example of the same look in a group shot I did for the band Asking Alexandria. This one required a bit more post production (as lighting groups is much harder) so I'll take it step by step to show you what sort of post production you can do to push the look to its limits.



As before, lighting is still very important. I used a very similar setup to the previous example but with the softboxes much more symmetrical. The models stood in a triangle pattern so the light hit everybody. The hairlight is also much higher this time as we're lighting the top of the head, not shining through anyone's hair (which would have required three hairlights in this example). You can see the lightstand for the hair light in the straight out of camera shot, behind the guy on the left.



Here's the image after Raw conversion in Adobe Camera Raw, where I added a little of the clarity slider, brought up the black point and used a lot of the fill light slider. I wasn't going for any specific look, just to get an image with detail and a good tonal range to work with. The background clutter has been removed by a manual selection. Also note I've made the image taller by adding more black. I planned to do this so I shot the original very tightly cropped to make the most of my megapixels (I didn't do this with Aaron as some of the shots we took involved a lot of movement).



Here I've faded a little of the ground into the image with a gradient mask in order to ground the models and make them look a little less like they're floating in space!




Next I went mental with the Photomatix tonemapping plug-in! With any sort of HDR work, taste and subtlety are important. I find the easiest way to achieve this is to push it as far as I can and then fade it into the original at a very low opacity to retain some realism.



Here, it is reduced to 30% opacity. I actually lowered it to 15% in the final version; I just wanted to show the effect of bringing out the details in the legs and shoes that were lost because I concentrated on lighting the faces (and didn't have any striplights to hand). It's also done a nice job with the hair.



Now here's the pièce de résistance of the post processing. I've duplicated the original and applied a large-radius high pass filter, which suppresses the broad tonal gradients and keeps the local contrast and edge detail. The layer is blended back in at 30% opacity with the blending mode set to overlay. The overlay mode is very important, as it amplifies the high pass by lightening the lights and darkening the darks. It's starting to look how I want it now.
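For anyone curious what that layer stack is doing mathematically, here is a rough sketch of a large-radius high pass plus a 30% overlay blend in Python with OpenCV and numpy; the radius and file names are placeholders, not values from the actual edit:

    import cv2
    import numpy as np

    img = cv2.imread("band.jpg").astype(np.float32) / 255.0  # placeholder file name

    # Large-radius high pass: original minus a heavy blur, re-centred on mid grey.
    radius = 50  # pixels; a large radius targets local contrast rather than fine edges
    blurred = cv2.GaussianBlur(img, (0, 0), radius)
    high_pass = np.clip(img - blurred + 0.5, 0.0, 1.0)

    # Overlay blend: darkens where the high pass is below mid grey, lightens above it.
    overlay = np.where(img < 0.5,
                       2.0 * img * high_pass,
                       1.0 - 2.0 * (1.0 - img) * (1.0 - high_pass))

    # Fade the effect back in at 30% opacity, as described above.
    result = 0.7 * img + 0.3 * overlay
    cv2.imwrite("band_highpass_overlay.jpg", (result * 255).astype(np.uint8))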



Here's the final image after some fiddling; dodging and burning here and there, and some minor colour correction (I moved the jeans away from cyan and desaturated them for a more pleasing look). Desaturated skin tones make this lighting technique look better in almost all cases. Finally, the image is resized and sharpening applied, leaving an image which I think captures the look you are going for based on the examples posted.


It's worth noting that you need a dark place in order to pull this off; otherwise you lose control over the light and you will have difficulty getting the deep shadows required for maximum contrast. The first image of Aaron was shot in a black-walled studio, but if you don't have access to a studio you can get the same effect in any large space. The size is important because the light that gets reflected back from other objects falls off with the square of the distance (double the space, 4x less light bouncing off walls), up to the point where it is totally overpowered by the light on the subject and you get a nice black background. If you're looking for a large space, the great outdoors comes in very handy!
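A tiny sketch of that inverse-square falloff, with illustrative distances:

    # Inverse-square falloff of light bouncing back from the walls.
    # Doubling the distance to the walls cuts the reflected light to a quarter.

    def relative_bounce(distance_m, reference_m=1.0):
        return (reference_m / distance_m) ** 2

    for d in (1.0, 2.0, 4.0, 8.0):
        print(f"walls at {d} m -> {relative_bounce(d):.4f} of the reflected light")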


Monday 27 June 2016

photo editing - How does the exposure slider work in Adobe Camera Raw/Lightroom?


How does the exposure slider work in Adobe Camera Raw/Lightroom?


Naively I would expect the exposure slider to be a simple linear scaling (in the input color space, not in the working color space), but it's obviously not that. It's applying some sort of compression, because you can increase it a lot without blowing out your highlights. Is this compression curve documented somewhere?


Related to that, do the numbers in the exposure slider have any absolute meaning? E.g. I have seen people claim -1.0 is one stop less exposure, but I don't think that's right. When I use graduated neutral density filters I use 0.6 and 0.9 grads (2 and 3 stops), but when I achieve the same effect in Adobe Camera Raw I rarely need to do more than -1.



Related questions:


What do all the settings do in Lightroom?


Does the Exposure Slider in Adobe Camera Raw Have Same Effect as in camera Exposure Compensation?




canon - macro lens for artwork with the T2i


This is a follow up to my first query about photographing artwork. There was a consensus that I go with a macro lens: Is Canon T2i and kit lens good for shooting (2D) artwork?



I think I'm finally ready to purchase. I found this lens and I wonder if anyone has an opinion on it as a macro lens: Sigma 70-300mm f/4-5.6 DG Macro Telephoto Zoom Lens


It has excellent reviews and is inexpensive. Or should I just get a standard non-zooming 50mm or 60mm macro lens?


And if you have any other thoughts in regard to photographing artwork, please let me know.



Answer



Like any other zoom with the word "macro" in its name, the Sigma is quite simply lying. No ifs, no buts. Misleading marketing-speak.


When a "macro" lens was recommended to you, what was implicitly meant was a prime lens that can focus to 1:1 magnification, or at least 1:2. Canon has these in 60mm, 100mm (two flavours) and 180mm focal lengths, plus a 1:2 50mm "compact macro" that requires an extra doo-dad to reach 1:1, while Tamron, Sigma and Tokina will be more than happy to sell you one in 60, 90, 105 or 150mm varieties. At least, those are the ones I remember off the top of my head. All of the above are quite splendid optically, focal length being the main difference. The Canon ones equipped with USM focus motors are probably a bit better than the third-party ones for all-round use simply because they focus faster. While macro lenses are purpose-built for macro photography, they also tend to be extremely well-behaved in other ways, such as minimal distortion, good sharpness and a very flat field of focus, which makes them excellent for reproduction and other non-macro purposes as well.


Sunday 26 June 2016

f stop - Does using an extender actually change the aperture of the lens?


I just bought a Canon 2X extender and I always believed that it would only result in 2 stops' loss of light and not change the aperture.


However, when I tested this extender on a Canon 135mm f/2 lens, the maximum aperture I could set went down to f/4. So it kind of confused me: if I stop down the aperture to f/5.6, is it actually setting the aperture to f/5.6, or to f/2.8 (i.e., 1 stop below f/2)?



Answer



This is really simple when you think about it. The additional element changes the focal length of the lens, without changing the apparent size of the aperture. That means that the relative size of the aperture decreases, so the f number does in fact actually change. (If this is unclear to you, see the bit about f numbers in this other answer.)
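As a worked sketch with the numbers from the question (135mm f/2 plus a 2x extender), assuming an ideal converter:

    # A teleconverter multiplies focal length while leaving the entrance pupil
    # (the apparent aperture) the same size, so the f-number is multiplied too.

    focal_length = 135.0          # mm
    max_f_number = 2.0            # native maximum aperture
    extender = 2.0                # 2x extender

    pupil_diameter = focal_length / max_f_number          # 67.5 mm, unchanged by the extender
    effective_focal = focal_length * extender              # 270 mm
    effective_f_number = effective_focal / pupil_diameter  # 4.0 -> two stops slower

    print(pupil_diameter, effective_focal, effective_f_number)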


This is also why rear wide-angle converters can go the other way, effectively increasing the aperture. (See How can a speedbooster improve the light performance of a lens? for more.)


Some converters communicate intelligently with the camera body, so the aperture displayed will be correct. This is the case with the Canon extender you have, but might not be with third-party ones. This explains the part you were confused about: the camera is aware of the change already and the numbers it is showing you are what you will actually get. When you set the aperture on the camera to f/5.6, the aperture on the lens is set to the same position that would be f/2.8 without the extender (but which genuinely is f/5.6 with it).



Note that tele-side converters and wide-angle converters which go on the front of the lens also change the effective aperture (the entrance pupil) along with the focal length (see What's the difference between real and effective aperture?), so they don't change the f-number. (They are usually lower quality, however, and can introduce vignetting and other artifacts.)


history - What historic reasons are there for common aspect ratios?


The most common width-to-height ratio in the "good old" days of paper photography seems to be 3:2, which was adopted by today's DSLRs. Early (non-professional) digital cameras adopted the 4:3 aspect ratio, which was the industry standard for computer monitors and consumer TV sets. Prints are often 5:4 (as in 4"×5" or 8"×10"). Wide-format monitors are 16:9. Does anyone have any idea where and why these aspect ratio conventions were adopted in the first place?



Answer




Originally, film formats were arbitrary and specific to each camera model. For example, Kodak started making "pocket" cameras in 1895, but each new design used a different format. By 1908, they decided to simplify the confusion with a numbering scheme, calling that first format "101" and continuing the numbering up from there. In fact, the "120" medium-format film still in use today is part of this sequence. ("135" for 35mm film came later, and it appears they skipped ahead to make it match.) 101 happened to be square; 102 was 3:4, 103 and 104 were 4:5 and 5:4; 105 is 9:13, and basically arbitrary weirdness continues down the line. By 1916, Kodak had thirty different lines with a dozen different aspect ratios, all in production.


And of course, it wasn't just Kodak — everyone did their own thing, possibly partly to avoid Thomas Edison's patents (as he was famously litigious). Eventually (possibly as that fear receded), some standards did emerge, but even within those, there's a whole lot to choose from. Here are some of the more common formats today, and a bit about their histories.




4:3


4:3 is the most common ratio for compact digital cameras, including point and shoot models. It matches the standard ratio of computer monitors in the 1990s as digital cameras were first developed, and that came from TV, which got it from cinema.


Thomas Edison's lab chose a 4:3 ratio for silent film, and it became the standard. No one knows exactly why this particular ratio was chosen, but there's plenty of speculation. One story suggests that engineer William Kennedy-Laurie Dickson asked Thomas Edison what shape he wanted each frame to be, and Edison held his fingers in approximately a 4:3 shape, saying "about like this".


When soundtracks were added to motion pictures, the space required changed the standard slightly, but 4:3 was still the foundation. That translated to television sets, and then to computer monitors, and therefore was a natural choice for early digital cameras, and of course continues to today.


However arbitrary (or inspired?) Edison and Dickson's choice might have been, there's precedent in visual arts — analysis of several different datasets generally shows that the most common aspect ratio for paintings is something close to 4:3, with 5:4 also popular.


This is also roughly the proportion of a "full plate" (or "whole plate") used in Daguerreotypes or tintypes, from before cinema. This format is 6½"×8½", which is roughly 4:3, give or take the oddness of the half inch. Cutting this into various fractions was also common, and although the resulting sizes were not consistent, the smaller sizes usually stayed close to the approximately-4:3 aspect ratio.


Wider formats eventually came to cinema largely as a way to distinguish the attraction of theaters from home viewing. See this for more, or search for "Academy ratio" and you'll get lots of information. This comes back around to photography when we get to the 16:9 aspect ratio discussed below.



It's worth observing that 4:3 and 3:2 are geometric cousins, since halving or doubling a 4:3 frame (in the sensible dimension) yields a 3:2 frame, and halving or doubling 3:2 yields 4:3.
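A tiny sketch of that halving/doubling relationship, using arbitrary whole-number frame dimensions:

    from fractions import Fraction

    def landscape_ratio(w, h):
        """Aspect ratio expressed with the longer side first."""
        return Fraction(max(w, h), min(w, h))

    print(landscape_ratio(4, 3))  # 4/3 -- a 4:3 frame, e.g. 4 x 3
    print(landscape_ratio(2, 3))  # 3/2 -- halve the long side of 4 x 3 to get 2 x 3
    print(landscape_ratio(3, 4))  # 4/3 -- double the short side of 3 x 2 to get 3 x 4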


110 film, an obsolete cartridge format for mass-consumer-level cameras, uses a 13mm×17mm frame, which is close enough to 3:4 in spirit — although oddly the standard prints are 3½"×5", or 10:7, a "weird" ratio partly between this and 3:2.




3:2


3:2 is the format of 35mm film and the de facto standard for digital SLRs. Oskar Barnack of Leitz invented a small camera using cinema film rolls, and chose to use a double frame — and a double-4:3 frame is 4:6 — which is to say, 3:2 when you turn it 90°. This is the origin of the 35mm film format, and here we are today.


(Beware when searching for more on this; there's an oft-repeated article out there full of unwarranted golden-ratio mysticism. Not only is 3:2 not even close to the golden ratio, but, as noted under 1:1 below, historically artists have shown a preference for more-square formats which are even further from the golden ratio.)


Japanese camera makers Nikon and Minolta used a 4:3 format in their first 35mm film cameras, but then switched to 3:2 along with everyone else — possibly for political reasons, but possibly just for convenience.


When the Advanced Photo System standard was invented, "APS-C" was defined to follow this Classic aspect ratio (in a smaller size). APS also defined APS-P (a 3:1 panorama), which didn't really catch on; and APS-H, which is close to but not exactly 16:9 (but probably chosen for its similarity).




1:1



1:1 is, of course, a square. Squares are obvious, and nice to compose in. There's no concern about "portrait" or "landscape" orientation. The inherent symmetry can be used for strong formal composition. So, conceptually, this is pretty straightforward.


However, it appears that the various non-square rectangles were more common for photography — perhaps following preferences in painting, where off-square rectangles are historically predominant.


Square wasn't really a hit until Rollei's twin-lens cameras came along in 1929. These use a waist-level finder you look down into, and it'd be inconvenient to have to tip the camera for different orientations. Hasselblad followed suit with their waist-level SLR, again using square format. It seems that despite what appears at first to be obviousness for composition, square photos were first a matter of technical practicality rather than aesthetics.


Also on a technical note, new photographers often wonder why square sensors aren't used to capture more of the image circle projected by camera lenses — after all, a square is the greatest-area rectangle that can fit in a circle. But it turns out this is only efficient if you want a square image in the end, and of course not everyone wants that.


I should mention popular smartphone apps here, too. Hipstamatic for the iPhone was the first to take off, and now Instagram boasts 60 million square-format photos per day. As these are shared on social media to be viewed on "tall" smartphones and "wide" computer screens alike, the appeal is no surprise.




5:4


5:4 is a common large format aspect ratio, both as 4"×5" and 8"×10", and that's where the popular 8"×10" print comes from. I'm not sure why exactly it was chosen, but I wouldn't be surprised if it simply fits with the historical preference for almost-square frames noted above. It certainly goes back to at least the 1850s — see the bit on cartes de visite below.


Mostly, I imagine the history here roughly mirrors the history of standardized sizes for letter paper.





5:7


5:7 is another aspect ratio one commonly sees available for prints and in pre-made picture frames. It was a moderately-popular large-format option which seems to have mostly fallen out of favor, perhaps because it's "too in-between" — inconveniently large to enlarge, smaller than people might prefer printed directly. I found a couple of interesting articles on the format (here and here), but I haven't found any particular reason for the aspect ratio; it seems to just have been an acceptable arbitrary choice between the other common sizes of 4"×5" and 8"×10".




5:8


Since 8"×10" can be cut in quarters for 4"×5", it seems logical that half-sized film would also have been common, and indeed cameras using the 5"×8" format also exist/existed, but for whatever reason never got to be as popular as 5×7.


This is particularly interesting because 5:8 is a very close approximation of the golden ratio, and maybe this is an argument against people's natural attraction to that. (See this 1891 article, where the author says: "I would recommend the 6½×8½ in preference to the 5×8, since for most work the latter is not so well proportioned.")


One common format with this aspect ratio was popular in the 1860s — the carte de visite, a 2.5"×4" "business card". There was a technique for taking eight such photographs on a single 8"×10" plate, which explains the aspect ratio choice, although especially given the timing it may well be that the golden ratio played some part. This format, though, was supplanted in a few decades' time by larger 4:3-ratio cabinet cards.




6:7 — but actually, not


6x7 is a common medium-format film format, but this isn't really the aspect ratio used. Unlike typical large format film, this is measured in metric instead of inches, so 6x7 is actually a smaller format than (for example) 4×5, although that's only tangentially relevant to the aspect ratio discussion. The important thing is that the usable portion of 120-format roll film is 56mm wide, so 70mm gives a 4:5 (8:10) aspect ratio. That means you can create 8"×10" prints without cropping, and for this reason, this was marketed as "ideal format".



There are other common ways of dividing up that same roll, giving different aspect ratios, and most of these are already discussed: 6×6 is 1:1 (56mm×56mm frame size), 6×4.5 is 4:3 (56mm×42mm), 6×9 is 3:2 (56mm×84mm). And 6×17 (almost 3:1) is used for panoramas.
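A quick sketch confirming how those nominal frame sizes reduce to the ratios listed (real frames vary by a millimetre or two):

    from fractions import Fraction

    # Nominal 120-film frame sizes (mm) and the aspect ratios they reduce to.
    frames = {"6x4.5": (56, 42), "6x6": (56, 56), "6x7": (70, 56),
              "6x9": (84, 56), "6x17": (168, 56)}

    for name, (w, h) in frames.items():
        print(f"{name:>5}: {w}mm x {h}mm -> {Fraction(w, h)}")
    # 6x4.5 -> 4/3, 6x6 -> 1, 6x7 -> 5/4, 6x9 -> 3/2, 6x17 -> 3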




16:9


16:9 is the standard for HDTV, of course, and it was simply selected as a compromise format by the committee designing that standard. Hooray for committees! This forum thread goes into the background of the decision, but really, committee compromise sums it up — it's not ideal for either the classic ratio or common widescreen formats, but sits in the middle — either awkwardly or conveniently, depending on your biases. Many computer monitors, laptop screens, and even phones use this aspect ratio today, and it's no surprise that digital cameras often offer it as an in-camera output choice to match.


It'll be interesting to see how long this one lasts in the grand scheme of things.


shutter speed - How can I get long exposure for star photography with a Nikon Coolpix P900?


I own a Nikon Coolpix P900. I'm still fairly new to photography but I'm interested in learning about photographing the stars. I've read that the best settings to do this are to have an aperture of f/2.8, a shutter speed of at least 20", and an ISO of between 800 and 1200.


I'm having some trouble getting my camera to these settings. At ISO 800, the longest shutter speed my camera will allow is 2 seconds. How can I change this?





terminology - What's the difference between exposure and shutter speed?


Is there a difference between exposure and shutter speed, or are the terms interchangeable? I read that "If you use a quick shutter speed, you can just raise the exposure to compensate." Is this statement invalid, or is there a difference?




technique - Why do it "in-camera" rather than in post-processing?


There's a very common attitude among photographers that the appearance of a photo (ex: exposure) should be created using the features of the camera (aperture, shutter, etc; does not include the "retouching" features built into newer cameras' software) rather than post-processing (Photoshop and the like).


Obviously, before the digital age, this was largely a matter of practicality. Now we have more tools at our disposal.


What is the reasoning for doing things in-camera instead of in post-processing, given the current technology available?



Answer



No amount of processing can add detail that isn't there to begin with. If you greatly overexpose your picture, you cannot rescue the lost highlight detail. The same goes for significantly underexposing your picture. Additionally, attempting to fix some perspective problems will make the picture look unnatural and sometimes even cartoonish.


Getting it right in the camera is still a matter of pragmatism. It's a question of whether you want to spend several hours in front of a computer retouching the picture, or spend a couple of minutes getting your camera settings right.


Some things might be better done in post processing because you have more control, such as multiple exposures. However, this class of post processing has more to do with special effects rather than proper exposure.



I'm also of the opinion that you should never use "I'll fix it in post" to do a mediocre job taking your picture the first time. An extra minute or two at the time of exposure is well worth saving hours in front of a computer. As my college professor once said, "No matter how much you polish a turd, it's still a turd."


Saturday 25 June 2016

terminology - What is the difference between depth of field and depth of focus?


Reading this answer I realized that I didn't know the difference between the two depths (of field and focus). Browsing related questions didn't reduce the blur (!) between the two...


Wikipedia provides a discussion which is helpful but I think that it could be useful to clarify the distinction between the two here in more detailed terms (e.g. physically they are not the same effect but does this really affect the final result? Is the confusion really misleading?)



Answer



Depth of field is the range of distances from the sensor/film within which your subject can move in the scene and still remain acceptably in focus.


Depth of focus is used in two slightly different senses. It is the range of distances from the lens over which your sensor or film can move so that



  • the same object plane would still remain in focus;

  • or the same subject would still remain in focus (this depends on the placement of the subject; it is greater toward the edge of the image).



Depth of focus is usually much less than a millimeter (a rough numeric sketch follows the list below) and becomes relevant when



  • your camera does not correctly tension the film against the back wall of the camera,

  • or you are using bellows between the lens and the film/sensor,

  • or you are freelensing,

  • or you are using a lens which has bellows as part of its construction, such as the Lensbaby Muse.
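To give a feel for the difference in scale between the two, here is a rough numeric sketch using common thin-lens approximations; the focal length, f-number, subject distance and circle of confusion are illustrative, and the formulas assume moderate distances and a pupil magnification of 1:

    # Rough comparison of depth of field (subject side) and depth of focus
    # (sensor side) using common approximations.

    f = 50.0          # focal length, mm
    N = 2.8           # f-number
    c = 0.03          # circle of confusion for full frame, mm
    u = 2000.0        # subject distance, mm (2 m)

    m = f / (u - f)                           # magnification for a subject at distance u

    depth_of_field = 2 * N * c * u**2 / f**2  # approximate total DoF, mm
    depth_of_focus = 2 * N * c * (1 + m)      # approximate total depth of focus, mm

    print(f"depth of field ~ {depth_of_field:.0f} mm")   # on the order of hundreds of mm
    print(f"depth of focus ~ {depth_of_focus:.3f} mm")   # a fraction of a millimetre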


sharing - What are the best sites to share photo galleries with friends?



Which sites do you use for sharing photos with family and friends?




lens - Which piece of equipment is holding me back?


I am primarily an amateur photographer but I do take small jobs from time to time. I'm looking to upgrade my equipment but I'm not sure if I should be spending money on my body, glass, or both. I have two common issues that I'd like to improve.




  • I shoot a lot at ice rinks and find that even in servo mode I get a lot of shots that are slightly out of focus. I try to keep the AF target on face masks but many shots will come out focused an inch or two off (ears, hands, etc). I'm shooting F/1.8-F/2.8 to get action stopping shutter speeds so you can really tell when the focus is even a hair off. I don't know if this is a limitation of my camera, lens, or the human operator.





  • Most of my paid work is indoors shooting candids where a flash is inappropriate. I feel like I need to use a higher F-number to get more subject in focus but I usually can't do it without upping to ISO 3200 unless I'm using a flash. By the time I clean up the high ISO noise in Camera Raw I end up with an image lacking detail. So far this hasn't been a big problem because the photos are typically used in magazine prints only an inch or two wide but it's still driving me nuts.




Here is the equipment currently in my bag:



  • Canon Rebel T2i

  • Canon EF-S 18-55mm f/3.5-5.6 II (no IS)


    • Collecting dust, came with my old Rebel XTi.



  • Canon EF-S 18-135mm f/3.5-5.6 IS

    • Purchased with my T2i. This is my workhorse lens but I'm disappointed with the soft images it produces at higher focal lengths. Stopping down makes the images acceptable but that's not always an option. My eyes want more contrast and color but considering how inexpensive this lens is I can't really complain.



  • Canon EF 70-300mm f/4-5.6 IS USM


    • This was purchased with an older camera and doesn't get used much anymore. The images are too soft beyond 200mm and the AF seems sluggish.



  • Canon EF 50mm f/1.4 USM

    • I am extremely happy with the images I get from this lens. I use it indoors all the time and also paired with extension tubes for some macro work. Some of my best ice hockey images have been from this lens even though I didn't have sports in mind for this lens.




My initial plan was to upgrade to a 60D or 7D body but after doing some reading it occurred to me that I might invest in better glass first. I'm unsure of what will help improve focusing, image quality, high ISO shooting, etc. Maybe it's a new camera, new glass, photography lessons, or a little of everything?



I'm looking to keep my next round of purchases under $3,000 and am willing to trade in some of my used equipment to offset costs. Would anyone care to make a recommendation? I'm looking at the EOS 60D and 7D bodies. Not sure if the 7D is overkill for my needs. I'm also looking at the EF 24-70mm f/2.8L USM lens to be my new workhorse but I'm concerned that it may not be wide enough on an APS-C body and the lack of IS might be an issue. There's also the EF 70-200mm f/2.8L IS II USM but I'm thinking it wouldn't be wise to spend all my budget on that lens right now (or maybe not?).



Answer



I would gravitate to lens choices over the body. There are a couple of reasons...




  1. The lenses will be useful in the future when another opportunity to purchase arises.




  2. Fast glass, such as f/2.8 zooms, is very helpful in low light. These are often pro-grade lenses as well, so that helps sharpness.





In the end, you'll have these lenses for years; the bodies will come and go. So that investment will pay long-term dividends starting now. The other thing to keep in mind is that cameras like the 7D will start to appear on the used market, so you may find you have the budget when that happens, and you'll have nice lenses ready for it when you do.


Now, having said that, the big upside to a 7D now is the speed of the camera. The frame rate is high, and that helps a lot in the situations you shoot in. A common technique in these environments is to shoot a bunch of frames of the same scene; it ups the odds that one is sharp.


Still, I'd go lenses first if you have a cap on spending.


Friday 24 June 2016

sensor size - How does the smaller mirror in APS-C cameras offer these advantages?


As Wikipedia says here



The smaller mirror used in APS-C cameras also allows optical elements to protrude further into the camera body, which enhances the possibilities for wide angle and very wide angle lenses, enabling them to be made smaller, lighter (containing less glass), faster (larger aperture) and less expensive.



How does a smaller mirror enhance the possibilities for wide-angle and very wide-angle lenses, enabling them to be made with less glass and larger apertures?





How Long Does a Rechargeable Lithium-Ion Battery for a Digital Camera Last?


There is much discussion about battery-life in use and a standard (CIPA) for measuring it but there is not much information on how long batteries are supposed to last.


How long do rechargeable lithium-ion batteries for digital cameras last?


Is this measured chronologically (in years, for example) or in cycles? And, if so, how does one know how many cycles a battery has been through?



Answer




Is this measured chronologically? (In years, for example) or in cycles?



Both, I think. At least they're both factors, but I don't think it's possible to reliably predict the useful lifetime. Batteries age whether you use them or just let them sit on the shelf, but they age faster if you put them through a lot of charge/discharge cycles. I'm no expert, but I suspect other factors like temperature and charge during storage have an impact on a lithium ion battery's lifetime.


All these variables combine to create a range of possible lifetimes that's probably too large for battery manufacturers to provide a useful estimate. If they guess too low (in either time or cycles), people will accuse them of giving a short lifetime to encourage unnecessarily frequent battery replacement. If they guess too high, people will complain when their batteries don't meet the estimate.



The best thing to do, of course, is to monitor the battery's performance. Does it recharge to the same voltage? Does it provide useful capacity? It can be hard to really know when the performance has dropped if you're not paying close attention. To that end, some cameras (e.g. Canon 6D and recent 5D variants) keep track of each battery's performance so that you can see when a battery might need to be replaced.


exposure - How do I use the different shutter speeds my camera offers?


If I use a shutter speed below 1/30 on my Nikon P100, I get extremely dark images which are completely unusable. If I use the flash, it comes out fairly bright but just not natural.


My camera supports shutter speeds as fast as 1/2000 and as slow as 8 seconds. How do I use them correctly? What should I take into account before shooting, what else do I have to configure or use?




Answer



First off, using any on-camera popup flash is probably not going to give you a "natural" look. You'll need to either use natural light or move your flash off camera (which I don't believe your camera supports). The popup flash (I'm assuming that's what you mean by just "flash") is on the same axis as your lens and generally doesn't produce "natural" pictures (largely because our eyes don't typically see the world lit by a bright light shining from our forehead).


Your shutter speed issue sounds like an exposure issue. With a fixed amount of light, the brightness or darkness (exposure) of your pictures will be determined by three things: shutter speed, aperture, and ISO (see the exposure triangle). (Mattdm points out in a comment below that this may be better visualized as a rectangular prism. See his comment; if you can visualize that, it's even more useful.)
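If it helps to see that trade-off numerically, here is a small sketch of relative exposure in stops; the settings are illustrative, not specific to the P100:

    import math

    def exposure_stops(shutter_s, f_number, iso):
        """Relative exposure in stops; higher means a brighter image.
        Each +1 stop doubles the light reaching the sensor (or the sensor gain)."""
        return math.log2(shutter_s) - 2 * math.log2(f_number) + math.log2(iso / 100)

    base = exposure_stops(1/30, 5.6, 400)
    faster = exposure_stops(1/125, 5.6, 400)        # shorter shutter, nothing else changed
    compensated = exposure_stops(1/125, 2.8, 400)   # opening up the aperture 2 stops

    print(round(faster - base, 2))       # about -2.06: roughly two stops darker
    print(round(compensated - base, 2))  # about -0.06: essentially the same exposure
                                         # (nominal shutter steps aren't exact doublings)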


By going below 1/30 on your P100 in whatever situation you have, your aperture isn't large enough and/or your ISO isn't high enough to compensate. You'll need to try to open your aperture larger (lower f number) or increase your ISO (this is the light sensitivity of your sensor). These should be settings on your camera.


It's difficult to go into all the exposure details here, but there are several excellent books on the subject and many, many online sites. Try Understanding Exposure for an excellent reference.


Thursday 23 June 2016

lens - How do I decide between the Canon 55-250mm or 70-300mm + 50mm?


I just bought a Canon Rebel T2i (EOS550D/Kissx4) with 18-55mm kit-lens and I have an additional $300 USD to spend on a lens. I had decided to buy a Canon EF-S 75-300mm f/4-5.6 (without IS) and a Canon EF 50mm f/1.8 and these are just $150 and $110 respectively. However, my friend just got a new camera with an 18-135mm lens so he's selling his current Canon EFs 55-250mm IS for $250.


If I choose Canon EF-S 70-300mm f4-5.6 & 50mm I can get 2 lenses for the same price. If I choose the Canon EF-S 55-250mm IS, I will just get only one.


Advice on which one to choose, please?



Answer




If your purpose is for the outdoors then go with your first option of 2 lenses. Personally I have used both lenses and I feel the quality is more or less the same. Again it depends on your style of shooting, but the 50mm f1.8 is a low light lens which you could use in case of bad light.


The IS in the 55-250mm is nothing great, and since you have the ISO advantage it's worth taking the risk on the 70-300mm; you would have more range too. Don't worry about the lost 55-75mm range. Trust me, it does not matter, as you always have your feet to compensate for that. Just walk toward your subject or away from it and you get more or less the same result.


My suggestion would be to pick the first option, since you'll have the 50mm. Once you use that lens there is no turning back, especially for its bokeh and sharpness at that price.


How to show a "real" color to someone over the internet?



I have an iPad Air with its camera. I need to show a color of my shoes to a person who will dye shoelaces for them. So I need to reliably capture and transfer this color somehow. I assume that simple photo is not enough because of monitor color profiles and calibration differences. I thought of using some "colorimeter" app for iPad and extracting a color hex code. Is it a feasible idea? What else can I do in my case without professional equipment?



Answer



iPads [in fact most mobile devices] tend to be a bit 'over contrasty' unless you actually calibrate the screen with a hardware colorimeter, which probably means that even on your own screen the colour will be wrong. It will also vary depending on backlight brightness and surrounding lighting conditions.


Sending that value to someone else, who also has a non-calibrated & potentially over-contrasty screen, in an uncontrolled backlight/ambient light environment, just multiplies the potential for error.


Your only real viable solution, so you both know you are seeing exactly the same thing, would be to print varying samples of the colour until you can clearly see, in good light [cloudy daylight may be the closest you can both get to the same conditions], that it is a true match... then post it, snail mail.


Alternatively, both of you would need either professional Pantone swatches... or at a push, find a paint swatch at a local DIY shop, if you both can source the same paint manufacturer locally [or, again, post it].


moire - Pattern shown in LED screen background used as backdrop on a stage



I'm trying to figure out an oddity in photographing a large LED background. I'm using a Canon 5D Mark II with a 100-400 lens although it occurs with any lens and at any focal length. The camera is picking up a pattern that isn't visible with the naked eye.


It looks like aliasing that you might get when photographing a LED monitor except here the screen is the size of a billboard. I'm not sure I would call it a moire pattern though.




The shutter speed must be 1/250 minimum, usually 1/320 or 1/400, because the action on stage requires it; otherwise the action is blurry. Lighting is a bit low, so ISO 3200 has to be used and the aperture is wide open, either f/2.8 or f/4.0 depending on the lens.


Obtaining a shallow depth of field can help by blurring the background a bit, but the pattern is still there. I haven't noticed the shooting angle making any difference, and a straight-on shot is what is needed in most cases to capture the scene. A much slower shutter like 1/160 starts reducing the effect, but it is too slow for the action in the scene.


Using my iPhone 7 Plus to take the same photo, all the lines are completely gone! The picture is 100% solid. That's with an automatic photo (no settings). The only time the iPhone 7 Plus does not do well is when the action is fast; then it may be a bit blurry.


The only time I have been able to obtain a good photo with the Canon 5D Mark II (I also tried a Sony A6000) and multiple lenses is when using a flash, but firing a flash isn't really acceptable during a show. With the flash firing the camera indicates a shutter of 1/200 (which is usually too slow for action), but because of how flash works I believe it is only illuminating the scene for a fraction of that 1/200, even though I have the camera manually set for a faster shutter. The action is frozen and the screen is solid. Shooting at 1/200 without the flash yields the same lines in the screen, so I'm not exactly sure how the flash makes the LED screen appear solid, but it works. The iPhone 7 Plus (no flash) works too.


So how do I shoot in order to remove the lines in the photo of the LED screen?


Thanks... Gary




post processing - Astrophotography picture with too much noise. How do I correct this in postprocessing?


This is the Milky Way above Kakadu National Park, Australia, and my first try at astrophotography.




Thirty second exposure, f/4.0 at 17mm with ISO 12800 (!!) which in hindsight probably was way too high and resulted in a lot of noise. I also used Picasa to increase contrast, but I probably went a bit overboard there as well.


This is the original RAW file (.CR2), I am looking for input on how to better postprocess this picture to reduce noise.



Answer




Firstly, had you lowered the ISO while staying at 30s f/4, you wouldn't have ended up with any less noise.


There's probably nothing you could have done to prevent the noise: I presume f/4.0 was the maximum aperture, and if you went any longer than 30 seconds you would get star trails. You might even get less noise if you raise the ISO, but that's another story.


However, there's plenty you could do to rescue the image; the main thing is reducing chroma (colour) noise. Most noise reduction plug-ins, as well as RAW converters, give you the option of reducing only colour noise.


Here is the image with some brute-force chroma noise reduction (converted to LAB in GIMP, then a Gaussian blur of 250 applied to the A and B channels):



The noise reduction has also fixed the magenta cast caused by noise in the red channel. A dedicated noise reduction plugin could do much better than this. A little luminance noise reduction would help too, but not too much in case it mistakes the stars for noise.
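For anyone who prefers a scriptable version of that brute-force approach, here is a rough sketch using OpenCV rather than GIMP; the blur sigma and file names are placeholders:

    import cv2

    img = cv2.imread("milkyway.jpg")  # placeholder file name

    # Convert to LAB: L holds luminance, A and B hold the colour information.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    L, A, B = cv2.split(lab)

    # Heavily blur only the colour channels; luminance (and the stars) is untouched.
    sigma = 50  # very aggressive, in the spirit of the "Gaussian blur of 250" in GIMP
    A = cv2.GaussianBlur(A, (0, 0), sigma)
    B = cv2.GaussianBlur(B, (0, 0), sigma)

    result = cv2.cvtColor(cv2.merge([L, A, B]), cv2.COLOR_LAB2BGR)
    cv2.imwrite("milkyway_chroma_nr.jpg", result)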


Wednesday 22 June 2016

Is it possible to use a smartphone/tablet's 3G/4G network to transfer Eyefi photos?



I know that Eyefi cards can use Wi-Fi networks to transfer photos. I also know that there's some sort of smartphone/tablet app that you can download to do various things with your Eyefi card and photos, but in the case that a Wi-Fi connection is not available, can the Eyefi card somehow link to a smartphone/tablet and use that smartphone/tablet's 3G/4G network to transfer the photos to a predesignated computer? Thank you.



Answer



You could turn your device into a Wi-Fi hotspot, and then the Eyefi card can connect to the internet via your phone (or tablet).


I don't have any experience with Eyefi specifically, but presumably it's programmed to scan for a list of known Wi-Fi hotspots. I would imagine you could program it to find your phone consistently, and then, once your device is enabled as a hotspot and as long as your phone has internet access to the target machine (i.e. the target machine needs to be exposed to the web as a server of some sort), you'd be off to the races.


Of course enabling your device as a wifi hotspot will eat up data usage from your cell plan, so make sure you've covered that angle. If your plan doesn't already include data it can be a pretty sizable hit to the wallet.


A quick Google search turned up these pages:



What is tone mapping? How does it relate to HDR?


Whenever I read / hear about HDR (High Dynamic Range) photography, someone usually says something along the lines of "Actually, you're talking about tone mapping, not HDR."


Please can someone explain what Tone Mapping is, and how it relates to HDR?



Answer



An HDR image has a high dynamic range, which means a very large ratio between the brightest and darkest parts of the image. An HDR image on a normal (low dynamic range) monitor will actually look very flat:



This is because that huge range of brightnesses has to be compressed to fit into a much smaller range of brightnesses. This results in an overall lack of contrast, hence the flatness.


This image has a split personality: the skies are very bright and the subject much dimmer. If we could use all the monitor's brightness range for the sky, it would look pretty good:



But we'd totally lose our subject. Likewise, if we used all the monitor's brightness range for the subject, it would also look good, but we'd totally lose the sky:




It would be great if we could combine them in some way, or carefully ration out the brightness range we have to work with so we make most use of it. This is where tonemapping comes in.


Instead of mapping the whole image into the monitor's brightness range in one go, tonemapping adjusts the contrast locally so that each region of the image uses the whole range for maximum contrast (there's a bit more going on than that; it depends on the tonemapping algorithm used). Here is the same image tonemapped:
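To make the idea concrete in code rather than pictures, here is a minimal sketch of a simple global tone-mapping curve (the Reinhard operator) applied to linear HDR values with numpy; real tonemappers such as Photomatix add local adaptation on top of something like this:

    import numpy as np

    def reinhard_tonemap(hdr, exposure=1.0):
        """Compress linear HDR radiance values into [0, 1) for display.

        hdr: float array of linear radiance values (can span many stops).
        The curve L / (1 + L) squeezes arbitrarily bright values below 1,
        which is why highlights don't clip the way a straight linear scale would.
        """
        scaled = hdr * exposure
        return scaled / (1.0 + scaled)

    # Example: radiance values spanning roughly 13 stops.
    hdr = np.array([0.001, 0.01, 0.1, 1.0, 10.0])
    print(reinhard_tonemap(hdr))  # all values end up between 0 and 1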



The reason that tonemapping is not HDR is that you can tonemap a single low dynamic range image in order to make it more contrasty. The result looks similar but with much more noise in the shadows:



Disclaimer: for the love of God don't do this to your images!


This demonstrates that dynamic range and noise are opposites; in fact, dynamic range is usually defined in terms of the noise floor of an image. There is a point at which any tonal differences in an image get lost in noise, and this defines the darkest thing you can image; together with the brightest thing, defined by the point at which the signal clips, it determines the dynamic range.
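That definition maps directly onto how dynamic range is usually quoted in stops; a minimal sketch with illustrative sensor numbers (not from any particular camera):

    import math

    def dynamic_range_stops(clipping_level, noise_floor):
        """Dynamic range = ratio of the brightest recordable signal to the noise floor,
        expressed in stops (factors of two)."""
        return math.log2(clipping_level / noise_floor)

    # Illustrative numbers: full-well capacity in electrons vs read-noise floor.
    print(dynamic_range_stops(60000, 15))   # ~12 stops
    print(dynamic_range_stops(60000, 60))   # a noisier sensor: ~10 stops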


Tuesday 21 June 2016

macintosh - Why does Spyder4Pro calibration of Seiki Monitor look odd?


I just added an external 4K Seiki SE39UY04 3840x2160 monitor to an early 2013 MacBook Pro Retina laptop and got the DataColor Spyder4Pro calibration tool. After running a full calibration, I see some oddities (although things do look better now). In particular, when I view a text editor over Real VNC Viewer showing bluish text on a light gray background, each of the bluish letters appears to have a white halo effect, which is particularly pronounced on capital S characters. Also, when viewing white on a Chrome web page, such as this one, the white looks a tiny bit yellowish in hue. Based upon the calibration, the screen itself is set to Normal temperature, 50% Contrast, and 7% Brightness.


Am I just seeing limitations of this monitor or have I got a bad configuration? Are there any steps I can take to mitigate these symptoms?



Answer



I resolved the issue by lowering the backlight setting that was added in the firmware update from August 28, 2013. However, Seiki's directions on that download page are incorrect; instead, this is how you can update the firmware:



  1. Download the .zip file and extract the install.img file to a freshly formatted (FAT32) USB thumb drive. (On a Mac, be sure the OS has not overzealously expanded the install.img file as well.)

  2. Put the thumb drive into USB port 1 (the port on the back, not the side of the monitor).

  3. Turn the TV on.

  4. On your remote control press the menu button.


  5. Then on your remote press 0 four times, that will take you into the service menu.

  6. In the service menu choose the software upgrade option.

  7. Screen will display upgrade animation.

  8. Wait for the upgrade to complete.

  9. Turn off the TV when finished.

  10. Disconnect the thumb drive.


Once updated, I had the option to change the backlight setting in the options menu, which had previously been invisibly set to 100%; that is why I had needed 7% brightness to get proper light levels. I changed the backlight to 71%, and after calibration found the brightness only needed to be 42%. With that, the display looks vastly superior and actually has proper colors after calibration using the Spyder4Pro.


To fix the halo effect I was seeing, I had to lower the sharpness, which defaulted to 100%, down to 0%; with that change, the halo effect is completely gone.


Note that the firmware upgrade also fixed the monitor's flakiness in syncing with the HDMI signal coming from the laptop, which had shown up as a "Not Support" error message and the need to disconnect and reconnect several times before it would finally sync.



All in all, I am very happy with the monitor after the firmware upgrade, as it was very inexpensive and for the price offers what I consider to be very adequate color reproduction for my needs. The Spyder4Pro report states it covers 100% of sRGB and 77% of Adobe RGB with a contrast ratio of 2120:1. Here is the not-very-good tone response report relative to gamma 1.8 or 2.2:


Spyder4Pro report of Tone Response relative to gamma


micro four thirds - Is it possible to adapt a Canon XL system lens to a mirrorless body?


I know it's been asked whether it's possible to mount an XL lens on a DSLR, but M43 bodies have a flange focal distance of 19.25 mm, so it seems more feasible there, although I haven't found a manufacturer that produces such an adapter.



Answer



TL;DR: even though the shorter FFD of mirrorless cameras solves the infinity-focus problem, the small image circle of XL lenses presents the same problems described in the answer to the referenced question about mounting XL lenses on DSLRs.




From a flange focal distance (FFD) standpoint, yes, it is possible. But the most likely reason you haven't seen an XL-to-MFT adapter is that the Micro 4/3 sensor is much larger than the 1/3" sensor Canon XL lenses were designed for.



MFT's crop factor is 2; 1/3" sensor crop factor is 7.21. Thus the XL-to-MFT crop factor is 7.21 / 2 = 3.6. The 1/3" sensor of the XL cameras is nearly one quarter the diagonal size of a Micro 4/3 sensor.
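A quick sketch of that arithmetic from the sensor diagonals; the 1/3" figure is the commonly quoted nominal diagonal, not an exact measurement:

    import math

    def diagonal(width_mm, height_mm):
        return math.hypot(width_mm, height_mm)

    full_frame = diagonal(36.0, 24.0)     # ~43.3 mm reference
    mft        = diagonal(17.3, 13.0)     # Micro 4/3, ~21.6 mm
    third_inch = 6.0                      # nominal 1/3" sensor diagonal, ~6 mm

    print(full_frame / mft)          # ~2.0 -> MFT crop factor
    print(full_frame / third_inch)   # ~7.2 -> 1/3" crop factor
    print(mft / third_inch)          # ~3.6 -> how much bigger MFT is than the XL's sensor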


Canon XL lenses were designed to project an image circle that would cover a 1/3" sensor with minimal or acceptable vignetting (soft vignetting). But the hard cutoff of the image circle would be clearly seen/recorded on a much larger MFT sensor. Because of this, Canon XL lenses are not generally useful on sensors larger than the 1/3" sensor used for the XL.


A likely secondary reason you can't find such an adapter is because there are relatively few Canon XL lenses on the used market, compared to the huge number and low price of DSLR lenses on the market, such as Canon EF or EF-S lenses, Nikon F-mount, not to mention the various manual focus lenses of all sorts of mounts (M42, Minolta, Pentax, etc.).


Monday 20 June 2016

Is it safe to perform a firmware upgrade on a Canon 60D camera?


How safe is it to perform a firmware upgrade on a camera? I know what a firmware upgrade is and what advantages/possible disadvantages it brings; I'm just specifically asking how safe or unsafe it is to perform one.


Can something go wrong? What should I double-check before performing an upgrade?



Answer



I trust Canon is sane enough to test all official firmware upgrades so they won't brick your camera. As long as you follow the guide that comes with the firmware (instructions like "don't remove the battery during the upgrade"), you should be completely safe.


Sunday 19 June 2016

software - What are the differences between Picasa and Lightroom, other than price?


I've heard about Adobe Lightroom before, but never really understood what's so great about it, or why you'd pay for it when you can get Picasa for free.


I know Lightroom probably has more sophisticated editing options, but what are they, exactly, and how do they compare to Picasa? Why would you use one over the other? Is Lightroom better for streamlining editing workflow, while Picasa is better for organizing and tagging photos? And would you ever use both programs?



Basically, what are the differences between Picasa and Lightroom?



Answer



There are several features that I think are just awesome in one or the other. Depending on your needs, one of these features (usually combined with some other things) will push you towards Picasa or Lightroom.


Lightroom:



  • Integration with other Adobe products (Photoshop, InDesign, etc.)

  • More sophistication in editing (somewhere in between Picasa & Photoshop)

    • Color correction, CA correction

    • NR & sharpening


    • Exposure & WB adjustments



  • Can do batch processing


Picasa:



  • Arguably faster (much faster for me)

  • Integration with Google products (Blogger, etc.)

  • Simpler editing tools



So, if your workflow involves Photoshop or other Adobe software, Lightroom is the way to go. If you need more editing tools but don't want Photoshop, also go with Lightroom. Otherwise, do as I did and go with Picasa.


repair - AF mirror not flipping on Canon 350D




My Canon 350D is no longer raising the AF mirror when it takes a photo. Instead, it looks like this when the mirror flips up. not swinging


This results in a photo that is half black due to the sensor being obscured.


This camera is 9 years old and has never been dropped, mishandled or subjected to harsh temperature or humidity. Is there any common maintenance I can do to get this thing swinging properly again?




Friday 17 June 2016

Which Canon focusing screens fit on the Canon 6D


I'm having trouble finding the available focusing screens for the Canon 6D.


Is the Canon 6D compatible with the 5D Mk II focusing screens? Is there a microprism focusing screen for the 6D?



Answer



According to Canon USA's online support page (see the note below), the only focusing screen compatible with the Canon EOS 6D is the one supplied with it, the Eg-A II. The same page lists the focusing screens available for the Canon EOS 5D II as compatible only with the 5D II.


However, according to page 312 of the EOS 6D Instruction Manual, the Eg-D (Precision Matte w/Grid) and Eg-S (Super Precision Matte) are compatible. They are also listed as available accessories on page 354. If you install one of these focusing screens, be sure to set the corresponding focusing screen Custom Function (C.Fn) to match the installed screen so that the exposure metering system can compensate for the different amount of light each screen passes to the light meter in the pentaprism.


The last micro-prism screens I'm aware of from Canon are the Ec-A (matte, without split image), which is listed as compatible with all 1-series bodies including the 1D IV and 1D X, and the Ec-B (matte, with split image), which fits 1-series bodies through the 1D III and the 1Ds III, but not the 1D IV.


The newer bodies from the 7D onward use a transmissive LCD screen to project most, if not all, of the viewfinder information onto the focusing screen. Before the 7D the markings were etched onto the focusing screen and LED lights were used to illuminate active focus points, etc.


http://usa.canon.com/cusa/sna/consumer (click 'Camera Accessories --> Focusing Screens' and then scroll through the list.) In general, the Eg series is for the 5D II, the Ec series is for the 1-series bodies, and the others are for even older cameras. Notice that there are no replacement screens listed for the 7D and 5D III on Canon's official web site: the focusing screens on these models are not considered user-removable, and Canon does not provide Custom Function settings to compensate the exposure metering for third-party focusing screens.



hotshoe - What's this device that is attached to the hot shoe?


In this youtube video (High-Tech Photography of Nature in Japan), at this point, what is that device attached to that photographer's camera?


[Still frame from the video showing the device mounted on the camera's hot shoe]




technique - How do I get adequate depth of field in macro photography?


I have been messing around with some macro photography using the techniques (and a reversal ring) described in a blog post by @ElendilTheTall. My main issue is that I am having a hard time getting the subject into focus: sometimes the center is blurred and the edges are in focus, and other times the center is in focus and the edges are not. What are some ways to get as much of the subject as possible in focus?


I am using a Nikon D5100. I have an 18-55mm kit lens, but I find the 55-200mm lens easier to use for this kind of shooting.



Answer



That is part of the art and difficulty of macro photography. As with all lenses, only one plane is in perfect focus and everything closer and further will be blurry.


The main thing you can do to maximize depth of field in a single shot is to pick a small aperture. It is generally recommended to stop down no further than the diffraction limit of your camera, which should be around f/16; beyond that, the whole frame starts to soften.


Once you have a certain depth of field, you should take advantage of it by placing the plane of focus somewhere in the middle (as measured in distance from the sensor) of what you want to appear sharp. The common rule of thumb is that depth of field extends 1/3 in front of and 2/3 behind the plane of focus, although at true macro distances the split is closer to even.
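
To get a feel for just how thin the sharp zone is, here is a minimal sketch assuming the common close-up approximation DOF ≈ 2·N·c·(m+1)/m², where N is the f-number, c the circle of confusion and m the magnification, with an assumed 0.02 mm circle of confusion for an APS-C body like the D5100. The formula and numbers are illustrative assumptions, not values from the answer above.

    # Minimal sketch: approximate total depth of field at close-up distances.
    # Assumes DOF ~= 2 * N * c * (m + 1) / m**2 and c = 0.02 mm (APS-C).
    def macro_dof_mm(f_number, magnification, coc_mm=0.02):
        return 2 * f_number * coc_mm * (magnification + 1) / magnification ** 2

    for f_number in (5.6, 8, 11, 16):
        dof = macro_dof_mm(f_number, magnification=1.0)  # 1:1 reproduction
        print(f"f/{f_number}: total DOF ~{dof:.2f} mm at 1:1")

Even at f/16 the total depth of field at 1:1 works out to only a millimetre or so, which is why stacking (below) is so often needed.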



The other option is to take multiple shots and merge them together using a technique called focus stacking. Each shot should be taken at the same aperture but with a different point in focus. There is specialized software to do this (just search for the term), but exposure fusion software can also do it (because of how it works).


terminology - What do the measurements for colour depth mean?


Colour depth is often referred to as being X bits. What does this mean, and how does it affect a photograph? What scale is used, i.e. is it linear, exponential, logarithmic, etc.?



Answer



What is a bit?


Computers store values as binary numbers. Each digit of a binary number is called a bit. 2^N, where N is the number of bits, is the number of distinct values that binary number can represent.


Example Please



A black and white image (no gray here, just black and white) can be represented with a color depth of 1 bit. 2^1 = 2. Those two colors are black and white.


Back on older Mac computers you could set the color depth: 16 colors, 256 colors, thousands of colors, millions of colors. These options correspond to bit depths of 4, 8, 16, and 24 bits. Bit depth on computer monitors refers to the sum of the red, green, and blue channels' bit depths. If the sum is not divisible by 3, green usually gets the extra bit, since your eye is most sensitive to green.
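
As a quick sanity check on those numbers, here is a minimal sketch; the labels simply mirror the old Mac options mentioned above.

    # Number of representable values for each of the old color-depth options.
    for bits, label in [(4, "16 colors"), (8, "256 colors"),
                        (16, "thousands of colors"), (24, "millions of colors")]:
        print(f"{bits:2d} bits -> {2 ** bits:,} values ({label})")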


What are some real world numbers?


Nikon D7000: 14 bits per pixel.


Most computer monitors display color with 8 bits per color for a total of 24 bits per pixel.


Scale


Image sensors are linear, which means that half the values represent the brightest stop of light, the next quarter represent the next stop down, and so forth. Dark tones therefore get compressed into a small number of possible values. The higher the bit depth, the more values remain to describe the dark pixels.
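
Here is a minimal sketch of that linear split, using the 14-bit figure quoted above; the six-stop range shown is just an arbitrary illustration.

    # How a linear 14-bit encoding spreads its 2**14 = 16384 levels over stops.
    bits = 14
    levels = 2 ** bits
    for stop in range(1, 7):
        print(f"stop {stop} below clipping: ~{levels // 2 ** stop} levels")
    # The brightest stop gets ~8192 levels; six stops down only ~256 remain,
    # which is why shadow tones have so few values to work with.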


How does it affect a photograph?


More bits mean more data, and that can't be faked. More bits may also mean more latitude when processing the images.


Higher values are not always better, though. Designing ADCs (analog-to-digital converters) with high bit depth is very difficult. This is because the noise level of the converter must stay below V/2^N, where V is the full-scale input voltage and N is the bit depth. This voltage, V/2^N, is called the least significant bit voltage (often just 'one LSB'); it is the voltage represented by one step of the lowest bit. If the noise level is greater than one LSB, that bit is not storing useful data and might as well be removed.



Example: A 5 Volt signal is being digitized by a 10 bit ADC. Under what voltage should noise be kept?


Using the equation for LSB voltage: 5 / 2^10 = 5/1024 V ≈ 4.88 mV.
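
The same arithmetic as a minimal sketch, so you can plug in other voltages or bit depths:

    # One LSB = full-scale voltage / 2**bits.
    def lsb_voltage(full_scale_volts, bits):
        return full_scale_volts / 2 ** bits

    print(f"5 V, 10-bit ADC: one LSB = {lsb_voltage(5.0, 10) * 1000:.2f} mV")  # ~4.88 mV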


film - What is the difference between an SLR and a DSLR?



What are the differences between SLRs and DSLRs? (I know that DSLRs are Digital SLRs but are there any actual differences?)


Also, what are the advantages/disadvantages of both?



Answer



Nowadays, both terms are used interchangeably, because the vast majority of SLRs in production are digital and there has not been a new model of any other type of SLR in years.


SLR refers to a camera with a Single Lens and a Reflex mirror to bend the light path to the optical viewfinder for framing. A DSLR is a Digital SLR, meaning it has a digital sensor to record images.


Digital SLRs have many advantages compared to their film counterparts. You get a digital workflow with instant previews and low running costs, and taking thousands of images is easy. With film, on the other hand, each roll costs money, takes up space, and must be properly stored until developed (and afterwards, too).


Film SLRs have their own advantages: they can be far more durable and resistant to extreme environments, they require much less care, and their batteries last for years. Some do not even need batteries to operate, although you obviously lose metering and autofocus. They are also not prone to sensor dust, because a fresh frame of film is used for each shot.


Advances mean that image quality now greatly favors digital cameras, which offer over 14 stops of dynamic range and reach stellar ISO sensitivities as high as 204,800. With resolutions reaching 36 MP too, they can capture a tremendous amount of detail.


When DSLRs had around 6 MP, people would argue over which medium captures more detail, but I have not seen anyone argue about it anymore. Of course, with a film camera it depends on the film used, and the resolution is not a uniform grid, so highlights get more resolution and shadows less.


Wednesday 15 June 2016

digital - Is an entry-level DSLR good enough for portrait photography?


I'm looking into starting a small at-home portrait photography business, mostly as a hobby for now, and I'm wondering if the camera I have now (a Nikon D3000) is good enough for portrait photography.


I've heard from a few friends who do portraits, weddings, etc. that most of the "photo magic" really comes from Photoshop and from how the camera is actually used, so the camera body is not the only thing to think about.


Is there a flash or some other add-on I should purchase that will help? I just have the on-board flash now. I've signed up for a few classes at our local college; however, they won't start for a month, so I want to get a head start on what I should look for in equipment.


Is this camera simply sub-par for this sort of work? Is there a better entry level camera I should look into?



Answer



Yes, it's "good enough" for someone getting started in portrait photography. Almost everything you'll need to learn is camera-agnostic. The one technical limitation that comes to mind with the D3000 is that the body doesn't have an autofocus motor, so you'll need to be using lenses that have AF motors in order to get autofocus.



From a lighting perspective, most folks doing portrait photography use an external lighting source of some sort such as a speedlight. You'll want to use this off-camera, which means either a flash sync cord or a radio trigger of some sort. That said, it's also possible to create good portraits using only natural light, it's just harder since you have less control.


There are cameras that would offer slightly easier controls, or flash triggering built into the camera, but they're a couple steps above entry level which is what you specify in your question. You'll be able to learn the basics of portraiture with the D3000.


camera bag - What can I use as a 'holster' to keep DSLR handy but secure while hiking?


Is anyone aware of a holster-type case, or anything that is designed to keep a camera easily accessible while hiking, but prevent it from swinging freely (like it does when it hangs from the neck strap)?


I have a pack with a chest strap, so anything that can attach to that might make sense. I am envisioning something that I can slide/drop the camera into that is attached to my chest, and then just lift it out when I come across something photo-worthy.


Does anything like this exist commercially? Any ideas on a DIY solution?



Answer



Lowepro makes a good chest harness that is similar to the Cotton Carrier, but provides more protection.


I have no connection with Lowepro—I simply recommend their products because I've used them and because The Digital Picture speaks highly of them.



Tuesday 14 June 2016

portrait - Does street photography including children introduce any specific concern?


If one were to follow a path similar to Henri Cartier-Bresson's and shoot candid street photography today, should they be concerned about current laws or other realistic norms? I am asking specifically about the US, speaking broadly of all states. I am not looking for legal advice, just anything that one photographer would caution another about as potentially being a grey area or obviously illegal.


Similar previous question: Can I publish photos taken in public legally?




lighting - What non-studio lights are best for photography?


I don't have a studio, nor do I want studio lights. I am interested in knowing which "kinds" of light "bulbs" I should purchase for my "table lamp" for indoor photography.


Are there some particular "colours" of lights that one should prefer? Is it advisable to purchase a white umbrella too? How and when is that used for table lamps?



Answer



Regarding the type of bulb: getting a good lighting effect is seldom a function of the bulb itself (color aside). At any reasonable working distance, all bulbs are effectively point sources (*). When doing portraits, it is usually desirable to create a large light source, which gives "soft" light (note: soft, not flat). The way to achieve this is with some kind of diffuser or softbox; an easy setup is a shoot-through umbrella.



(*) An exception is a long fluorescent tube, but even that is essentially a one-dimensional source and can still create harder shadows in the direction perpendicular to the tube.


Update: Summary of the discussion in the comments, to OP's request:


What umbrella can I use?


A "shoot through" umbrella is the semitransparent white one. A reflector umbrella can be used as well. The idea is to enlarge the effective light source. The advantage of shoot-through is that the "source" can be located much closer to the subject (technically, to 0 distance) than the reflector (where you are limited by the shaft length). This means that compared to the subject, the relative source is bigger. The disadvantage is that being semitransparent, you reduce the power of the light source.


Can I use the lamp's "shade" instead?


I'd say that a "shade" is similar in effect (just smaller). You do want a white one, though.


Where can I find such an umbrella?


Here's one: dealextreme.com. You can easily find more on eBay or Amazon.


What should be the power of the bulb?


Generally, the more power the better, but be careful of overheating. A dimmer-controlled light is best for dialing in the desired effect. Weaker light can be compensated for with longer exposures; you want stronger light mainly so you can use shorter exposures and minimize motion blur. Also, the stronger your source, the more dominant it is, which gives you better control of your lighting setup. A dimmer, however, lets you reduce the power to what you actually need and basically allows you to get closer to the subject.



That said, a cheap off-camera flash with manual mode is far more flexible than a fixed light fixture. You can find some at the same websites I pointed to above, together with optional optical slaves (to be triggered by your on-camera flash). Some flashes have an integrated slave; for others, use a standalone slave.


This and this can be used as fixtures for (a) bulb(s) with umbrella.


OK, I am almost convinced to try an external flash. Can you recommend an affordable one?


YONGNUO manufactures a series of affordable flashes, some of which have manual mode and a built-in slave. As an example, the YN460 is a basic one with nice features. (Don't stop at this one; do further research on their other models and on other brands. I recall the Strobist also recommends a cheap manual flash, designed with his input, so it should be fairly suitable for a home setup.)


Sunday 12 June 2016

troubleshooting - Nikon SB-600 flash will not fire; what can I do to fix it?


I have a Nikon SB-600 that was working just fine during a wedding and then stopped firing. Suspecting the off-camera shoe cord might be damaged, I removed the flash from it and attached it directly to the camera, but that did not work either. I had a backup, so I set this unit aside to finish the wedding, but now I am trying to figure out what is wrong with it. It powers on but will not fire, not even when I press the test button. It has never been dropped, and I have older units that have seen more use and still work. Is there a way to replace the flash tube in the unit, or could something else be wrong?




Answer



If you are confident it's the flash tube that's bad, and you are experienced in electronics, you can purchase a flash tube online and replace it. But you will need the right knowledge, tools, and experience; this is not as simple as unplugging something and swapping in a replacement. Desoldering and soldering are involved.


Safely discharging the capacitor in the flash is a must; the voltages are very high and can potentially be lethal. A service manual will help you a lot. OEM flashes typically have a contact point that you can bridge to the foot plate (ground) through a resistor to safely discharge the capacitor. Using a voltmeter to verify the voltage has dropped low enough to be safe is a really good idea. Also, speedlight capacitors have been known to hold over 200 V even with the batteries out for 24 hours, so don't assume battery removal alone is sufficient.
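
As an illustration of the kind of math involved, here is a minimal sketch with assumed values: the 330 V and 1000 µF figures are generic speedlight ballpark numbers, not measured SB-600 specifications, and the 30 V "safe" threshold is likewise an assumption.

    # Minimal sketch: bleed-resistor estimate for discharging a flash capacitor.
    import math

    v0 = 330.0       # assumed initial capacitor voltage (V)
    cap = 1000e-6    # assumed capacitance (F), i.e. 1000 uF
    r = 10_000.0     # candidate discharge resistor (ohms)
    v_safe = 30.0    # assumed voltage low enough to be considered safe

    energy_j = 0.5 * cap * v0 ** 2                 # stored energy the resistor absorbs
    peak_w = v0 ** 2 / r                           # instantaneous dissipation at the start
    t_safe_s = r * cap * math.log(v0 / v_safe)     # RC decay time down to v_safe

    print(f"Stored energy: ~{energy_j:.0f} J")
    print(f"Peak dissipation in resistor: ~{peak_w:.1f} W (only momentary)")
    print(f"Time to drop below {v_safe:.0f} V: ~{t_safe_s:.0f} s")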


If you are not someone who services their own car or tinkers with their own computer, or you can't do the math to figure out how big a resistor you need, this may not be the path for you. The following YouTube video demonstrates the steps involved in replacing the tube.


https://www.youtube.com/watch?v=ljA0rPV8hCI


(The "s" in fresnel is silent :-), but it's still a decent video).


Hopefully, that should give you an idea of what's involved and whether or not you feel confident in tackling the task, or it's completely worth it to pay a service center to do it for you.


See also: http://strobist.blogspot.com/2013/09/super-cheap-replacement-tubes-for-your.html


software - Can the Nik Collection be used without Photoshop or Lightroom?


So the Nik Collection is now available to download free of charge from Google. But can it be used standalone, i.e. without buying Photoshop or Lightroom?


I know they are usually described as 'plugins' or 'filters' for Photoshop etc., but can the collection be used on its own (on Windows)? Or can it be used alongside other (free) software, e.g. GIMP or RawTherapee?



Would it actually be useful as standalone software? Are there any limitations to doing this?




technique - How to estimate depth of field?




Possible Duplicate:
Is there a 'rule of thumb' that I can use to estimate depth of field while shooting?



How do you estimate the aperture needed to produce sufficient depth of field for a given subject?



For example:
Suppose I am facing a bicycle from the front and use whatever combination of focal length and subject distance is necessary to have the bicycle completely fill the frame. I focus on the handle bars as I would like this to be the sharpest point of focus. How can I best estimate the aperture needed to ensure that entire length of the bicycle is within acceptable focus?


I have tried using my camera's DOF preview button, but the resulting image is too dark in the viewfinder to determine the depth of field.


I could use a DOF calculator, but this would take too long.


I could take a few shots at varying apertures, but again, this would take too long.


What methods do you use to visualise depth of field in situations like this?




canon - How do I evaluate photo accessories bundled with a camera?




So I am somewhat of a beginner photographer, and this will be my very first DSLR purchase. I took photography classes in which I used much older DSLR models, but that was about 10 years ago, before my camera (which had been given to me) was stolen. So I feel like I am starting from square one since it has been so long. I like event, landscape, abstract, and candid photography. I am also interested in mics, because I plan to start learning videography and dabbling in it a bit.


I plan to purchase the Canon 80D. What I am struggling with is the accessories that come with the bundle. My most important questions are:



  • Am I getting the best bang for my buck?

  • Will this last me a relatively long time or will it break in like a year?

  • Am I completely off and just worried about the wrong things for the kind of photography I will be doing?


Lastly, I plan to get the 50mm/f1.8 lens separately ($125).


This kit is the most expensive but includes a Rode VideoMic. I am not sure if it's worth it since it looks to be for professionals and I am a beginner. Should I just get a lower cost mic and play around with that first? Or should I see this as an investment that would be worth it?



This kit is $150 cheaper, but it does not come with many of the other accessories, and it has a Kodak MIC-711 shotgun microphone instead. I can buy a pretty cheap cleaning kit, but I'm more concerned about it not having the wide-angle and telephoto lenses, or the 67mm 3-piece filter kit. Will these be important later on?




Saturday 11 June 2016

equipment recommendation - How is the quality of the Sigma 18-300 DC Macro lens?


I'm considering the less expensive Sigma 18-300 DC Macro lens vs. the (way) more expensive Nikkor 18-300. Would I be losing the benefit of buying a 24 MP Nikon D3200 if I went with the Sigma?




Why does the appearance of RAW files change when switching from "lighttable" to "darkroom" in Darktable?


In Darktable, when I import the RAW and JPG from the camera and view them in lighttable mode, both look identical. Does this mean that they both contain the post-processing done by the camera's processor?


Then, when I take the RAW file from lighttable into darkroom mode, some properties of the picture change significantly (the picture looks quite different). Is this because darkroom mode removes the in-camera post-processing?


Is there a way to get, as a starting point in darkroom, the RAW file looking just like it does in lighttable?



Answer



This is essentially the same issue as "Why does my Lightroom preview change after loading?". The RAW file contains a JPEG preview, which reflects the camera's settings and will generally be the same as an in-camera JPEG (although usually at lower quality to save space). That's what Darktable is showing you initially.


When you go to process the image, Darktable is working from the RAW itself. It's not removing in-camera processing -- it's just that that processing wasn't really there in any helpful way in the first place. (Clues to the processing may be included in the file's metadata, but usually as manufacturer-specific proprietary information.)


Darktable doesn't have access to the exact algorithms and settings used for the internal processing, so the basic answer to that part of your question is "sorry, no". Take a look at How can I reproduce the camera-internal postprocessing? for more on this.


Why is the front element of a telephoto lens larger than a wide angle lens?

A wide angle lens has a wide angle of view, therefore it would make sense that the front of the lens would also be wide. A telephoto lens ha...