Tuesday 30 June 2015

Why is the battery required to manually focus through the viewfinder?



Is it normal that I can't manually focus any of my lenses on my 5D Mk III without a battery inserted? I can focus on an object and, while looking through the viewfinder, remove the battery; instantly the image gets darker and blurrier.


Is this normal behavior? If so, what's going on here?



Answer



Yes, it's normal behavior. The reason you're having problems is that the 5DMkIII has an LCD overlay in the viewfinder. This overlay is used to give you grid lines you can turn on and off and different AF point displays. Without power, the LCD becomes opaque. This behavior is identical in Nikon cameras with an LCD overlay in the viewfinder, and has long been noted.


From the Canon Learning Center's article on the 1DX's & 5DMkIII's intelligent viewfinder display:



The LCD overlay does require a tiny amount of electrical power to operate. This is obviously no concern when the camera is turned on, but if the battery is removed the transmissive LCD suddenly loses a lot of brightness and contrast. This is perfectly normal and will return to full brightness once a battery is reinstalled in the camera (the camera doesn't have to be turned on; it only requires a functioning battery pack to draw power for proper viewfinder operation).



old lenses - Canon EF 75-300mm f/4-5.6 III vs EF 100-300mm f/4.5-5.6 USM


I'm using a Canon 75-300 III, the non-USM version, that came in a "kit" along with my Rebel T6. I like its reach, but the optical quality isn't great, even if you stop it down a bit, and it takes forever to focus.


Then I came across a used 100-300mm lens, a (very) old lens, but maybe optically better and faster to focus than my 75-300.


I have two specific and closely related questions in this context:




  1. How would the two lenses compare?

  2. What could be the issues of buying such an old lens, even considering that the store gave it an 8+ rating, meaning "Used very little, but obviously used. No major marring of the finish or brassing. Optics perfect. Mechanics perfect."



Answer



The 100-300mm f/4.5-5.6 USM is a better lens than the 75-300 III. It is sharper, has better build, and most importantly has very fast Ring Type USM Auto Focus. It also has a non-rotating front element which is great when using a polarizer.


An 8+ rating from a reputable store means you should feel safe buying it and could always return it if there is a problem.


Still, I would NOT advise you to buy the 100-300, mainly because of its lack of sharpness and lack of image stabilization.


A MUCH better option is the EF-S 55-250mm STM. It will cost you $199 at that same seller, but it will be well worth it. It is much sharper, has great STM AF, and most importantly, it has Image Stabilization. It also has a non-rotating front element.


It is so sharp you can crop the 250mm image to match a 300mm field of view and you will still have a better image.





Monday 29 June 2015

Properly Using Mirror Lock With Canon EOS 60D And RC-6 IR Remote


I am trying to use my Canon EOS 60D on a tripod, with mirror lock enabled and shutter release via the RC-6 infrared remote, for minimizing vibration.



When using the shutter release on the camera, everything works as expected, first press locks the mirror in the "up" position, second press shoots.


But when I try to do the same with the remote, I fail. What I'm trying to accomplish is a pause longer than 2 seconds.


From reading this question it seems I am stuck with the 2s as long as I use my RC-6.


If I use another remote (wired, for example), would that give me the desired effect?



Answer



The wired remote will function just like the shutter button on the camera: A half press will activate metering and/or focus (depending on how your custom functions are set), the first full press will lock up the mirror. The second full press will then open the shutter to expose the image.


Just as with the shutter button on the camera, if you press once to lock up the mirror and don't press again for 30 seconds, the mirror will unlock and return to the normal position.


The ability to do a half press, the ability to activate the shutter from behind or beside the camera, and the ability to lock the remote to hold the shutter open in bulb mode are why I prefer my wired remote over the RC-6. It has probably been over a year since I've touched the RC-6; I use my wired remote on a near weekly basis.


Sunday 28 June 2015

exposure - Why will low-end Nikons not meter with old lenses?


The following sites claim that Nikon bodies such as the D90, D5200, D3200 and so on will not meter with AI lenses:


http://www.aiconversions.com/compatibilitytable.htm



http://www.kenrockwell.com/nikon/compatibility-lens.htm


From my understanding, there must be some kind of sensor inside the camera that measures the amount of light coming in (a kind of photoresistor). If that were true, all Nikons would meter with basically any lens, so what is different that makes it not work?


And why do AI lenses work with high-end models like the D7100, D200, etc.?



Answer



You can divide Nikon lenses into basically three categories (as far as metering goes):



  • pre-AI: no metering functionality; mounting them on a modern camera that has the mechanical metering coupling for old lenses might damage that coupling. Interestingly, they can usually be mounted without risk of damage on low-end cameras, precisely because those lack the metering coupling in the first place (bear in mind that Nikon officially denies this, however).

  • AI, AI-S: they communicate the selected aperture, maximum available aperture, etc. using mechanical prongs, and they require matching mechanical prongs on the camera side to pass this information. High-end models have these prongs.

  • AI-P, AF: they communicate the selected aperture, maximum available aperture and potentially more data using electronic contacts. This is the only interface supported on low-end models.



What is the difference between digital high ISO noise and film grain?


What is the difference between digital high ISO noise and film grain? Why does one "eat detail" and the other does not?



Answer






  • The size of the grains in the film varies depending on the film sensitivity. The more sensitive the film, the larger the grains. Digital noise is always the size of a pixel, regardless of the ISO setting.




  • Film grain is color neutral, as it consists mostly of luminance differences. Digital noise consists of both luminance and color differences, and is most visible in the blue color channel.




  • In the more recent digital cameras the digital noise is quite even. In earlier models the noise had more banding and patterns. The film grain doesn't have any banding or patterns, so it's seen as pure noise. If the digital noise has any banding or pattern, the brain can easily pick that up, and that is more disturbing than pure noise.




  • Neither grain nor noise eats detail. It's noise reduction that eats detail, as it can't tell the difference between small details and noise. Noise reduction is used on digital noise, but it can also be used to reduce film grain.





There is an example below. On the left is the film grain from a Kodak Gold ISO 200 film. On the right is the digital noise of a Canon EOS 5D Mark II @ ISO 3200. Notice the blue noise in the dark areas in the right image.


grains vs noise


Saturday 27 June 2015

How does a camera in an automatic mode choose what exposure, aperture and ISO to use?



For example:





  1. Does the camera detect if the scene contains fast-moving objects and if so use a shorter shutter speed?




  2. Does it detect if the entire scene is beyond the hyperfocal distance, and there's plenty of light, and automatically use an aperture at which the lens is the sharpest? Put differently, does it use the sharpest aperture unless there's a reason not to?




  3. Alternatively, does the camera detect if I'm shooting a portrait and switch to the widest aperture to de-focus the background?





Is there any intelligence at all here? Can I rely on the camera to pick the right aperture the vast majority of time, like I can with metering, and with focus when there's enough light?


Obviously, when I want to use a particular setting for a certain creative effect, I specify that, and not leave it to the camera. For example, if I'm shooting a train under low-light, I might choose a several second exposure to make the train appear milky, or a 1/4 - 1/2 second exposure to create some blur while still retaining distinct objects to some extent, or an even shorter exposure to freeze the motion. Obviously, I can't expect the camera to guess what I have have in my mind. This question is not about these cases. It's about cases where there's an obvious default that's right the majority of time. Can I rely on the camera to pick these?


In case the answer depends on the camera, this is with reference to the Sony NEX-5R.


When I say "an automatic mode", I mean automatic, program, shutter-speed priority and aperture priority modes. In the latter two modes, the camera still has to choose between ISO and aperture / shutter speed.



Answer



Camera manufacturers use their own proprietary algorithms for exposure metering. I doubt anyone could describe the exact exposure and scene-selection algorithms used by Sony NEX cameras without risking being sued by Sony, but one can make educated guesses, so I will make such an attempt.


Sony states that in Intelligent Auto mode it performs Scene Recognition:



Scene Recognition operates in [Intelligent Auto] mode. This function lets the camera automatically recognize the shooting conditions and shoot the image




Based on that statement, it is evident that some kind of image analysis is done to determine the scene before the photo is taken. The detected scene will then determine whether to use aperture priority (for macro and portraits) or shutter priority (for scenes like sports).


Not knowing the exact algorithm, but coming from an AI background, I'd guess that it analyses the scene and attempts to detect certain parameters to use for scene selection, such as object detection, object separation, rate of camera movement, and so on. Those parameters, in turn, most likely serve as inputs to some kind of fuzzy-logic scene-selection algorithm. That algorithm is most likely trained on the analysis of thousands of shots to approximate ideal scene selection, and its quality (assuming they use one) would depend on the quality, quantity, and diversity of the photos used in training. Normally, many photos are analyzed whose input parameters, resulting scene, and settings have already been labeled manually. For simplicity, let's take an example of learning from 2 shots:



  1. Inputs: detected face, separation from background, little to no camera movement. Output: portrait mode, aperture priority, wide aperture

  2. Inputs: fast camera movement. Output: sports mode, shutter priority


Now, when the camera detects a face it will most likely choose portrait mode; if it detects fast camera movement, it will choose the settings from #2. The more images the algorithm is trained on, and the better the quality of the inputs, the more accurate the result will be.
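To make the idea concrete, a toy rule-based selector built from those two training examples might look like the sketch below. This is entirely hypothetical (the function and its inputs are my own invention); real firmware logic is proprietary and far more elaborate:

```python
# Toy rule-based scene selector distilled from the two training
# examples above. Purely illustrative, not any camera's real logic.
def select_scene(face_detected: bool, camera_motion: str):
    if camera_motion == "fast":
        # Example 2: fast movement -> sports mode, shutter priority
        return ("sports", "shutter priority")
    if face_detected:
        # Example 1: face with background separation -> portrait mode
        return ("portrait", "aperture priority")
    # No rule matched: fall back to generic program auto
    return ("auto", "program")

print(select_scene(True, "slow"))   # ('portrait', 'aperture priority')
print(select_scene(False, "fast"))  # ('sports', 'shutter priority')
```

A real system would weight many more fuzzy inputs (histogram, subject distance, focal length) rather than firing the first matching rule.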


Normally, algorithms like these are trained before the camera hits the stores. I'm not sure if there are cameras with adaptive algorithms that learn from your own usage what kind of shots you prefer and then skew the scene selection toward your preferences. I doubt self-learning mechanisms are used nowadays, but it would certainly be sweet.


Now, the accuracy of the scene selection will depend on many factors:




  • The ability to detect the inputs needed (such as object detection, face detection, object separation, camera movement, histogram, etc)

  • The quality of the learned algorithm

  • The time the scene is analyzed

  • And many others depending on the exact algorithm used


It is safe to say that there is a lot of room for error no matter how intelligent the scene selection is. Assuming the scene is detected properly, there will be an additional algorithm to determine the proper exposure. The scene selection will most likely set limit preferences, such as aperture between X and Y for portraits, shutter speed between A and B for sports scenes, and so on. The exposure algorithm will then take those preferences into account and complete the exposure triangle based on the light meter.


Now, no matter how good and sophisticated the algorithms are, there is no guarantee they will choose the exact settings you would have preferred. That is why photographers usually shoot in manual modes: to have complete control over the image they envisioned.


I guess it all depends on what kind of shots you want. If you want quick snapshots, use auto and be subject to the AI algorithms. If you want creative control, use the more manual modes.


If you want automatic control but want to help the algorithms, then I'd suggest using scene selection to guide them: sports, macro, portrait, etc. If you have controls to limit ISO, use those as well, depending on how much noise you are willing to tolerate.


lightroom - Verifying the integrity of your library


After reorganizing my image collection I moved a bunch of photos between different hard drives. While copying, I did get error messages. Fortunately, I had made multiple copies before doing the reorganization, so I was able to create my new image collection.


But since I got error messages while copying, I'm getting nervous and would really like to verify the integrity of my library, i.e. check that all raw files exist and that none of them has become corrupted.


Is there a way to check this, or a tool that can check raw files for corruption? (In my case, Canon raw files; the images were taken with 500D and 7D cameras.)


My image collection is split between a Lightroom catalog and an Aperture catalog. The latter uses referenced images, i.e. the raw files are not stored inside the catalog.




Friday 26 June 2015

viewfinder - What's the difference between a rangefinder and SLR?


I've seen rangefinder cameras around and idly wondered what the difference was between them and SLR cameras. Are there any advantages rangefinders offer that cannot be reproduced using an SLR?



Answer



An SLR camera allows you to look through the lens; it was designed to achieve WYSIWYG (What You See Is What You Get). It has a mirror box inside and, as a result, is much larger. Other developments include splitting the beam for auto-focus, etc.


A rangefinder camera has a rangefinder mechanism: a device that measures subject distance. Through this device you see two images; when you move the dial until the two images coincide, the correct distance is displayed. On older cameras this was a separate device and you had to transfer the reading to the lens; now it is built into the viewfinder. You need different viewfinder frame lines for different focal lengths (zoom lenses are difficult as a result).


Advantages:




  • body size/weight

  • discreetness

  • no mirror blackout, mirror sound, mirror induced vibrations

  • shorter registration distance: smaller/lighter lenses, potentially higher quality wide angle lenses

  • ease of both-eye-open photography and awareness


Disadvantages:



  • lack of autofocus (some digital rangefinders offer contrast-detect AF, but not phase-detect)

  • parallax effect, pronounced at close distances


  • no depth-of-field preview, exact framing, and other WYSIWYG things

  • switching viewfinders


equipment recommendation - Which prime lens to get after the 18-55mm & 55-250mm?


I started out with the 18-55mm IS kit lens on my Canon EOS 550D over a year ago, and added the 55-250mm IS a couple of months back to augment my telephoto reach (based on the recommendations in one of my earlier questions). After shooting almost 5000 photos with the former and 1000+ with the latter, the biggest limitation I find is their low-light shooting ability, and I've ended up shooting a lot of images at ISO 3200 and/or slow shutter speeds, resulting in subject blur. AF performance has also been a bit iffy in these situations. On the subjective front, I've generally preferred to shoot portraits.


To this end, I did some analysis on a selected subset of my photos using Exposureplot and exiftool+Excel to quantify my results (also below), and found that 55mm is my most-used focal length, followed by 18mm; both correspond to the range limits of the lenses (so the figures are likely skewed). The usual portrait range (80-110mm) also features reasonably, given that this range was added only recently.


Aperture vs ISO plot


I also checked out the Canon lens lineup and have settled on the following shortlist based on my above analysis (given my budget of around $500):



Lens (length + Av)    Macro   USM   IS    L-series
50mm f/1.8            No      No    No    No
35mm f/2              No      No    No    No
50mm f/2.5            Yes     No    No    No
28mm f/2.8            No      No    No    No
24mm f/2.8            No      No    No    No
50mm f/1.4            No      Yes   No    No
100mm f/2.8           Yes     No    No    No
135mm f/2.8           No      No    No    No
60mm f/2.8            Yes     Yes   No    No
85mm f/1.8            No      Yes   No    No
100mm f/2             No      Yes   No    No
28mm f/1.8            No      Yes   No    No
100mm f/2.8           Yes     Yes   No    No

The EF-S 17-55mm f/2.8 would serve my requirements but is beyond my budget at present; I do plan to get it eventually. I also doubt I'll be upgrading to a full-frame DSLR, so EF-S lenses are fine for me.


To summarize, I need the following from the next lens:



  1. Good low-light performance

  2. Suitable for portraits (I haven't played around with depth of field much so far, as I end up shooting wide open most of the time anyway)

  3. Macro ability would be a plus (does it make sense to get a non-macro and a macro lens of similar focal length?)

  4. Better AF performance (should be a given for these lenses, as they are f/2.8 or wider and many are USM)

  5. Canon lenses preferred, as I'm not sure of the reliability & service/warranty options of 3rd-party lenses in India



So, given this scenario, which would be the recommended lens to get?



Answer



Canon EF-S 60mm f/2.8 Macro USM


Suits (1), (2) and (3) perfectly, and it is not expensive.


While it can make sense to own both a non-macro and a macro lens of similar focal length, I see that you are on a budget, so that wouldn't be your best choice.


I did not suggest the 50mm f/1.4 because 50mm is slightly short for portraits, and you cannot do macro with it at all; even the kit lens offers a better magnification ratio.


Lastly, allow me to point out that you learn very little from merely shooting 6000 photos. You will, however, learn a lot if you try to delete 5900 of the 6000 you shot.


If you constantly find yourself shooting in low-light conditions, getting a flash may not be a bad idea at all, provided you learn to use it right.


I once thought my photography would improve if I bought a better lens. So I did, and I saw less noise but no improvement. Then I experimented with different ways of shooting: I shot EVERYTHING, in all kinds of situations, and no longer limited myself to shooting indoors. I tried everything I could.


Then my photography improved, and I am now able to create much, much better photos using the exact same lenses that I once called limiting.



legal - Is this type of photo manipulation still violating copyright law?


I am a photo student and I am doing a photo series on contemporary art. I plan to take pictures of contemporary art from printed books (photo of a published photo of an art work). And I will be cutting, cropping, and putting the art work in a completely different setting. This setting will be something I stage or photograph myself.


This series will not be sold; it will be used to criticize the flow of contemporary visual art. I plan to credit all the photographs and the books I obtained them from as part of the series. I just want to know whether this is OK under copyright law.


I reside in the US.




Thursday 25 June 2015

macro - What does "magnification" mean?



When speaking about lenses, especially for macro-photography, I often hear about magnification. Sometimes it appears in the form of a ratio (1:1, 1:2), sometimes as a single number (0.5x).


Sometimes magnification is also used to describe extension tubes or macro/"close-up" filters.


What does magnification mean for a lens? For an extension tube? For a filter?



Answer



At the most basic level, magnification means the size of an actual object in front of the camera compared to the size of the image of that object as projected by the lens onto the imaging plane.


If an object is 24mm tall and is projected by the lens as 12mm tall on the sensor, the lens has a 1:2 magnification ratio. This is exactly the same as a 0.5X maximum magnification or 50% magnification. The three ways of expressing magnification tell us the same thing. 1:2 = 1/2 = 0.5 = 50%.
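The arithmetic can be sketched in a couple of lines (the function name is my own, for illustration):

```python
def magnification(object_size_mm: float, image_size_mm: float) -> float:
    """Magnification = projected image size / actual object size."""
    return image_size_mm / object_size_mm

print(magnification(24, 12))  # 0.5 -> 1:2, i.e. 50% of life size
print(magnification(48, 12))  # 0.25 -> 1:4, i.e. 25%
```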


If the lens can focus closely enough to project a life sized image onto the sensor, it has 1:1 / 1.0X / 100% magnification. If the lens can only project a 48mm tall object onto the sensor as a 12mm tall image, it has a 1:4 /0.25X / 25% magnification.


Some macro lenses can project larger than life sized images. The Canon MP-E 65mm 1-5X Macro can focus closely enough to give a 5:1 / 5.0X / 500% magnification. That means a 10mm object would be projected at 50mm which would not even fit on the diagonal of a 36x24mm full frame sensor!


Extension tubes merely extend the lens away from the sensor and in so doing allow it to focus closer objects than would be the case without the extension tube. Extension tubes are not usually described in terms of magnification. Rather, they are described in terms of distance. This is because the same 12mm extension tube would increase magnification by different amounts for different lenses. A 12mm extension tube simply adds 12mm of distance between the lens and the camera. The focal length and minimum focus distance of the lens itself will determine how much additional magnification will be given by an extension tube of a particular size.


"Macro filters" are usually described in terms of diopters, the same as with reading glasses. A +2 diopter magnifying lens does not necessarily equate to increasing a lens' maximum magnification by a factor of two. Rather, by dividing 1000mm by the diopter power of a lens, one can get the distance at which objects will be in focus if the host lens is focused at infinity (before the diopter lens is attached). A +2 diopter lens attached to a lens focused on infinity will reduce the focus distance so that objects 500mm away are in focus. A +3 diopter lens will reduce the focus distance to 333mm, and so on. But since the maximum magnification of a lens is measured when it is at that lens' minimum focus distance and not at infinity, there's no easy way to translate diopters into magnification.
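The divide-1000-by-diopters rule above can be sketched as follows (the helper name is my own):

```python
def close_up_focus_distance_mm(diopters: float) -> float:
    """Focus distance when the host lens is set to infinity,
    with a close-up (diopter) filter of the given power attached."""
    return 1000.0 / diopters

print(close_up_focus_distance_mm(2))  # 500.0 mm, as in the +2 example
print(close_up_focus_distance_mm(3))  # ~333.3 mm, as in the +3 example
```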



For more about extension tubes and screw on "close-up filters", and when each might be preferable, please see: What's the difference between a diopter and an extension tube?


But we rarely, if ever, look at images at the size they are projected onto a digital image sensor. We once did, and sometimes still do, make contact prints from medium and large format film cameras - an 8x10" negative was used to make an 8x10" print without the need for an enlarger. So were 5x7" or 4x5" or 6x4.5cm negatives used to make 5x7", 4x5", or 6x4.5cm contact prints. We would often create a "contact sheet" for a roll of 135 film in which each frame was 36x24mm on a larger sheet that had the entire roll placed in several rows after the negatives had been developed and trimmed.


To get the total magnification, we need to also include the enlargement ratio used to display an image.


When we view an image from a 36x24mm FF sensor at a size of, say, 12x8 inches, we have enlarged the image by a factor of about 8.47X. When we view an image from a 24x16mm APS-C sensor, we must enlarge by a factor of 12.7X to view it at the same 12x8 inches display size.


If we used a 1:1 Macro lens with a FF camera and then display the resulting image at our 12x8 inch viewing size, we have a photo of an object that is 8.47X the size of the actual object. (1 x 8.47 = 8.47)


On the other hand, if we used a lens with a magnification ratio of 1:7 (typical for many telephoto lenses) on an APS-C camera and displayed the resulting image at 24x16 inches, the photo would show objects at 3.63 times their actual size (0.1429 x 25.4 = 3.63).


Keep in mind that lenses only achieve their maximum magnification ratio (MM) at their minimum focus distance (MFD). If you have a lens with a MM of 1:2 at a 20 inch MFD and you focused on an object 40 inches away, the resulting magnification for that object in the image would only be about 1:4.


If we use a 600mm lens with an MFD of 4.5 meters (which gives a 0.15X MM) and focus on a person 100 meters away the magnification of that person is only 0.00675X (or about 1:148). If we then enlarge that image from 36x24mm to view it at 12x8 inches the total magnification is 0.057X, or about 1/17 of life size. A 6 foot tall person would be a little over 4 inches tall on our displayed image.
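These figures can be reproduced with a rough calculation. The sketch below assumes magnification falls off inversely with focus distance, which is only an approximation, and the function is my own invention:

```python
def total_magnification(mm_at_mfd, mfd_m, subject_m, sensor_w_mm, display_w_in):
    """Approximate on-display magnification of a distant subject."""
    # Optical magnification scales roughly inversely with distance
    optical = mm_at_mfd * (mfd_m / subject_m)
    # Enlargement from sensor width to displayed width
    enlargement = display_w_in * 25.4 / sensor_w_mm
    return optical * enlargement

# 600mm lens: 0.15X at its 4.5m MFD, subject at 100m,
# full-frame (36mm wide) image displayed 12 inches wide
print(round(total_magnification(0.15, 4.5, 100, 36, 12), 3))  # ~0.057
```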


terminology - What is the correct pronunciation of "bokeh"?


I've had a lot of trouble working out how to pronounce "bokeh".


Amusingly enough, I've thus far been unable to get any of the small handful of photographer friends I have to ever say it out loud, despite setting up a number of lead-ins like "how do I get that nice blurred background... what's that called again?"



So yeah, please, tell me! I already sound like enough of a spud when I'm trying to talk jargon with photographers without dropping a big fat "bock-ee" in the middle of an otherwise serious sentence.


Thanks!




flash - Why don't these external flashes fire when the camera shoots in Live View mode?


I had the opportunity to use some professional flashes yesterday and I found out that they didn't fire when I was trying to shoot with the camera in Live View mode.


The flashes were two Elinchrom FX400, one wirelessly synchronized with the camera and the other was activated automatically when the first one fired (With a photocell or something).


Using the camera normally fired both flashes at the same time with no issues, but Live View mode deactivated both. Why is Live View mode incompatible? Am I missing something?


By the way, I tried this with two cameras, a Canon 1000D and a 550D.



Answer



It looks like, when using a non-Canon flash, you can't fire the flash in Live View with a Silent Shooting mode enabled. The manual says:



  • If you use flash, the [Disable] operation will take effect even if you had set it to [Mode 1] or [Mode 2].


  • When using a non-Canon flash unit, set it to [Disable]. (The flash will not fire if [Mode 1] or [Mode 2] is set.)


from http://martybugs.net/blog/blog.cgi/gear/lights/Triggering-Flashes-While-Using-LiveView.html


depth of field - How do I keep both the background and foreground in the image in focus at the same time?


How do you have both the background and foreground in the image in focus at the same time?


Last week I was attempting a shot where I had the back of a hat in focus and, in the distance, you could see a mountain. I wanted both in focus but could focus on either the mountain or the hat, not both. I ended up taking in-focus shots of each and merging them in Photoshop. While that isn't difficult, I would prefer to get the shot as accurate as possible in camera. From what I have read, aperture is important to depth of field, but what are the techniques for keeping both the background and foreground in focus?



I am using a Nikon D5100 with a 18-55mm f/3.5-5.6 (standard kit lens) and a 55-200mm f/4.5-5.6.


This question almost answers my question, but I am a bit confused. Can someone explain this a bit differently? Thanks!



Answer



I think this is mostly answered, but I'll add some examples.


When a lens is focused at a particular distance, the image will appear sharp from some distance in front of the focus point to some distance behind it. The range from "nearest sharp point" to "most distant sharp point" is called depth of field (DOF).


Here's a Wikipedia example. The focus point is the "depth of field" text, and the DOF extends approximately from the line below to the line above:


DOF example


DOF depends on three factors. (That is, three factors you can easily change. DOF also depends on the size of the camera sensor, how large you want to print or display the picture, how closely you look and how critical you are, but I'll assume that these are constant.)


The factors you can easily change are





  • Aperture: Smaller aperture gives more DOF.
    Example: (All examples for Nikon D5100 or other APS-C 1.5x crop camera.)
    55mm f/5.6, focus at 10 feet: DOF range from 9 to 11 feet (2 feet total)
    55mm f/22, focus at 10 feet: DOF range from 7 to 18 feet (11 feet total)




  • Distance: More distance from camera to focus point gives more DOF.
    Example:
    55mm f/5.6, focus at 10 feet: DOF range 9 - 11 feet (2 feet total)

    55mm f/5.6, focus at 20 feet: DOF range 16 - 26 feet (10 feet total)




  • Focal length: Shorter focal length gives more DOF.
    Example:
    55mm f/5.6, focus at 10 feet: DOF range 9 - 11 feet
    18mm f/5.6, focus at 10 feet: DOF range 5 feet to infinity




So to maximize the depth of field, the basics are




  • Shortest possible focal length

  • Smallest possible aperture (although at very small apertures you run into diffraction blur instead)

  • If possible, increase distance between camera and foreground

  • Focus slightly behind the foreground subject, to exploit the fact that DOF extends some distance in front of the focus point.


Examples:



  • 18mm f/22, focus at 2 feet: DOF range from 1 to 12 feet

  • 18mm f/22, focus at 10 feet: DOF range from 2 feet to infinity


  • 18mm f/22, focus at infinity (the mountains): DOF range from ~2.5 feet to infinity.


So you could have everything from 2 feet to infinity in focus, by focusing at 10 feet.


If you want to go advanced, you could improve DOF slightly by focusing at what's called the hyperfocal distance. (Although you would need a table or calculator to look it up.)


Focusing at the hyperfocal distance gives the maximum possible depth of field, in this case that's



  • 18mm f/32, focus at 1.7 feet (hyperfocal distance): DOF range from 0.9 feet to infinity


That's the best optics can give you. If you need more than that, you'll have to fake it by taking multiple shots and merging them.
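For reference, the hyperfocal distance itself comes from a simple formula; this sketch again assumes a 0.02mm circle of confusion for APS-C:

```python
def hyperfocal_mm(f_mm, f_stop, coc_mm=0.02):
    """Hyperfocal distance: focus here and the DOF runs from
    roughly half this distance out to infinity."""
    return f_mm**2 / (f_stop * coc_mm) + f_mm

h = hyperfocal_mm(18, 32)         # 18mm at f/32
print(round(h / 304.8, 1))        # ~1.7 ft, matching the example above
print(round(h / 2 / 304.8, 1))    # near limit ~0.9 ft
```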





You can find these values from an online DOF calculator like the one ElendilTheTall linked to. Just select your camera (or another camera with the same size sensor, like D5000 in your case) and fill in focal length, f-stop and focus distance.


Note that these DOF values are only approximate: The DOF range is calculated using a definition of "acceptable sharpness" - subjects at the edges of the DOF range will be slightly out of focus, but we might still call them "acceptably sharp" - and what you consider acceptable will probably differ from the standard definition.
You will have to experiment to find your own limits, but a DOF calculator is a good starting point.


Wednesday 24 June 2015

long exposure - Is astrophotography basically pointless with a moon in the sky?


I want to take some astrophotography shots of objects in the sky while not keeping the moon in the shot. I have read in a few places that you need essentially a moonless sky to do so. Does that mean that I have to wait for a new moon phase or for the moon to set? Or does it mean that I just don't want the moon in my frame? I'm traveling to an area with very little light pollution, so I would like to take some star trail shots, but am unsure what I can do since the moon will be at around 1/2 phase.



Answer



It will actually depend on the humidity! As the air gets more humid, the extra water vapor scatters the bright moonlight, brightening what should be a dark black sky.



However, to answer your question, if you wait until the moon is well set you should be fine. You may find it easier to wake up early rather than stay up late.


Dew can be a real problem. You are pointing a piece of glass at the sky as the temperature drops. A homemade cardboard dew shield is quite effective. Just wrap a cylinder around the lens. The height of the cylinder should be about the width of the lens. Of course, if you are shooting wide angle then this could block parts of the image. But if you dew up, well, all the image will be blocked.


Keep the lens pointing down at the ground when you take a break. Once you dew up it is difficult to remove. A portable hairdryer plugged into the car's lighter socket can do it, but preventing it is much easier.



A cable release would be awesome.


And practice BEFORE you go to your dark site.


Oh, and wear warm clothing.


equipment recommendation - How can I get to 400mm+ for wildlife photography on a budget of around £1000 with a Canon DSLR?


I am asking a question that might have been asked many times in the past. However, I would like to hear some advice for my particular situation.



Basically, I am looking to invest some money in my photography for the possibility of shooting wildlife. It's just a hobby and I haven't done much wildlife in the past. In terms of subjects, I don't really know yet. What can I find easily here in the UK? I am going on a puffin trip next month. I would love to shoot birds and all kinds of animals I can find. I also plan to go to Spain for a wildlife trip in the summer.


My budget is quite limited though. I am willing to spend about £1000 for this.


My kit: my camera body is a 5D III, and my longest tele lens is the 70-200mm f/2.8 IS II. Although I said it is a hobby, I want to get the best possible results and the best possible range. What are my options?


Here are a few options I can think of to start with. Feel free to suggest new options and also comment on what I have listed.




  • A new 7D + 2x Extender (or 1.4x Extender). That would give me 448mm with the 1.4x or 640mm with the 2x. The 7D would give me a 1.6x crop, but I am not sure about it. If I just use my 5DIII and crop by 1.6x, would it be the same? The 5DIII has 23MP, so 23MP/1.6 = 14.4MP? The 7D is 18MP, so I would lose about 4MP? Does it work like that? I don't really know; I am guessing.




  • I have heard about the greatness of the Sigma 50-500. The range sounds amazing. I don't mind having f/6.3, as with a telephoto lens I should still get nice bokeh at f/6.3. But how's the quality? I normally try to avoid non-Canon equipment as much as I can (apart from a battery grip).





  • The Canon 400mm f/5.6? That would cost me about 1k, but alternatively I could just buy a 2x Extender for my 70-200. That would also get me 400mm f/5.6, and keep IS. But which would give me better quality and AF speed?




Any other lens or options that I should consider?
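A note on the megapixel arithmetic in the first option: cropping a full-frame image to a 1.6x field of view reduces the pixel count by the square of the crop factor, not by the factor itself. A quick sketch (the 5D III is roughly 22.3 MP):

```python
def cropped_megapixels(full_mp, crop_factor):
    # Cropping narrows the frame linearly in both width and height,
    # so the remaining pixel count falls by crop_factor squared.
    return full_mp / crop_factor ** 2

cropped_megapixels(22.3, 1.6)  # ~8.7 MP left, versus the 7D's 18 MP
```

So a 7D keeps roughly twice the pixels on the subject compared with cropping the 5D III to the same framing.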



Answer



The 70-200 f/2.8L IS II works fine with a 2.0x teleconverter; that's my standard birding and critter lens these days. It's sharper than a 300 f/4 + 1.4x (my previous go-to lens), and MUCH sharper than a Canon 100-400 @ 400mm (my initial birding lens). All are acceptable, but the 70-200 + 2.0x is incredibly sharp and I'm really impressed with that lens combo. I use it on the 7D.


If you want to look at images of the 7D/70-200/2x combo, try some of these:


http://www.chuqui.com/2013/04/house-wren/ -- which is actually a weak image: heavily backlit, heavily processed, and a massive crop, yet still not bad for a blog posting.



http://www.chuqui.com/2013/01/canon-70-200-f2-8l-is-vs-is-ii-plus-bonus-on-600mm-f8-option/ some of my test shots when I was evaluating this combo, including some 100% pixel peeping views, so you can see the sharpness difference.


Sigma 50-500: I haven't tested it. Folks I know who have say it's usable, but AF is slow and it softens at 500mm. Whether it's too soft is something you'll have to test and see for yourself. I've seen some nice images from it online.


Canon 400: buy the 2.0x instead. Or the 300 + 1.4x. The 300 has slightly faster AF and can be used with or without the teleconverter, so you have a bit more flexibility for about the same price. But that 70-200 is a killer lens; the 400 won't get you better images.


The 7D + 70-200 + 2.0x is (IMHO) the best overall bird/critter setup for Canon these days. I recommend the 100-400 as the entry level because it's half the cost, but you already own the 70-200, so the big hunk of money is already spent. Before you consider going longer than 400mm, work with that combo and figure out how much you can crop -- it's a lot cheaper to buy a good modern body with enough megapixels to crop than it is to buy a 600mm bazooka to get that extra reach. This combo has convinced me not to buy a 500mm, since I can crop effectively into that distance.


Rent a 7D. Rent a 2.0x tele III. Try it out. You'll probably end up buying that as your upgrade.


Another opinion: it's what Art Morris uses. If it's good enough for him...


http://www.birdsasart-blog.com/2011/03/11/canon-70-200-f2-8l-is-ii-gear-questions-from-the-non-believers/


Monday 22 June 2015

What equipment is needed for a basic product shoot studio?


Our college is setting up a photo studio on campus. Mostly we do product shoots. We have some old strobes and softboxes to start with. We need suggestions on the equipment listed below.



  1. Which backdrop screen colors are commonly used, or are must-haves?


  2. Suggested tripod and head?

  3. Colors for reflectors?



Answer





  1. I'd use a black, a white and a mid-gray (18% reflection). Theoretically, with the right lighting, you'll be able to make the white background look black, and vice versa, but it limits the lighting options for your subject.




  2. I'm using a Manfrotto tripod and a head with separate controls for the three directions, though many people prefer a ballhead, as it moves more intuitively. Make sure to have a sturdy tripod. They're usually a bit heavier, but I guess for a studio setup this is not much of a problem. A head with built-in spirit levels might be handy.

    edit You may also want quick-release camera plates, which allow you to quickly change cameras on the tripod, without having to screw it on and off again.




  3. Personally I'm not too wild about color reflectors. You can make neutral (white) reflector panels of styrofoam, which, thanks to their light weight, are easy to handle. I'd use Lightroom to adjust colours (if you shoot digital).




What are the main things to avoid in photographic composition?


Consider only the frame and its content, putting aside the issues of color rendering, sharpness, exposure, brightness, contrast, optical aberration, and unpleasant flash/noise.


I think of:




  • Inappropriate depth of field

  • Inappropriate focal point

  • Tilted horizon without interesting perspective

  • Neglected background/foreground

  • Distracting reflections

  • Out of control lens flare

  • The subject is cut off

  • An interesting part/element of the image is cut off

  • Something (finger/strap) is in front of the lens

  • ...



I'm trying to have an overall view.



Answer



You need to avoid not thinking ;)


Jay Maisel says, "Everything in your frame either helps you or hurts you."


In other words, to compose well you must make sure that everything in your image is part of what you want to show or say and that nothing in your image should distract from that.


Jay also says not to include letters in your frames unless you want them to be read, otherwise they distract from your subject. There are no absolutes, even lens flare can be used creatively to reinforce a harsh environment.


Mount old lens on Canon Rebel T3i using Canon FD/FL Lens to Canon EOS EF Body Mount Adapter


Can I use any of the following lenses:


this or this or this or this with this adapter for my Canon Rebel T3i?


Please explain if any pitfalls that I may encounter such as infinity focus.


Also suggestions for cheap old high range zoom lens (such as 70-300mm) are welcome even if they are using another adapters



Answer



FD lenses are designed to sit closer to the film/sensor than EF lenses - so there is no adapter that can just place the lens in the correct location (because the correct location is inside the camera where the mirror is).


This leaves us with two options:





  1. An adapter that places the lens farther from the sensor than it's supposed to be -- this has the same effect as placing the lens on a macro extension tube: you lose the ability to focus to infinity, and from what I've heard you are likely to lose the ability to focus beyond macro range.




  2. An adapter with a lens that corrects the distance difference (like the adapter you linked to) -- so you take an old lens (which is not as good optically as new ones) and run all the light through a second cheap lens, degrading image quality even more.




So, if you mount an FD lens on an EOS camera you have a choice: an impractical focus range or bad image quality -- and as an added bonus you also lose autofocus and the other niceties of your camera.


If you want to use cheap old lenses, you should choose lenses for a mount that has an adapter that can place the lens at the correct distance without optical corrections -- basically, any mount with a flange distance greater than what EF uses (you can use this list). But you still lose a few decades of advances in camera and lens technology; don't expect old cheap lenses to be nearly as good as new expensive ones.
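The "distance" in question is the flange focal distance (lens mount to film/sensor plane). A glassless adapter only works when the lens mount's register is longer than the body's, since the adapter itself takes up the difference. A sketch using published values:

```python
# Published flange focal distances, in mm
FLANGE_MM = {
    "Canon EF": 44.0,
    "Canon FD": 42.0,   # shorter than EF -- hence the problem in this answer
    "M42": 45.46,
    "Nikon F": 46.5,
    "Olympus OM": 46.0,
}

def glassless_adapter_works(lens_mount, body_mount):
    # The lens mount's register must exceed the body's for the adapter
    # to place the lens at its designed distance.
    return FLANGE_MM[lens_mount] > FLANGE_MM[body_mount]

glassless_adapter_works("Nikon F", "Canon EF")   # True: adapts cleanly
glassless_adapter_works("Canon FD", "Canon EF")  # False: needs corrective optics
```

This is why Nikon F, M42, and OM glass adapts easily to EOS bodies while FD does not.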


Sunday 21 June 2015

Wide Angle Lens recommendation for canon 70D (cropped sensor camera)



I am interested in landscape photography and have a 40mm f/2.8 pancake lens, but I find it does not capture enough, so I would like to purchase a wide-angle lens.


I find the prices for the Canons too high and have started looking at third-party options (Samyang, Tokina, Sigma, etc.).


Can someone recommend a decent lens?


the ones I have started looking at are


Rokinon 14mm Ultra Wide-Angle f/2.8 IF ED UMC



Samyang 14mm Ultra Wide-Angle f/2.8 IF ED UMC


Tokina 11-16mm f/2.8 AT-X 116 Pro DX


I know Rokinon and Samyang come out of the same factory but are they identical lenses?


If anyone has any sample images of landscapes taken with a cropped sensor and any 3rd party wide angle lens I would love to see them


I am at some stage planning to go to a full frame camera and would like lenses that are compatible with both



Answer



There is actually an OEM Canon ultrawide zoom lens that costs less than all the lenses you're looking at. That is the EF-S 10-18 f/4.5-5.6 IS STM, which is the low-price alternative to the Canon 10-22.


Yes, the Samyang and Rokinon (and Vivitar, Pro-Optic, Opteka, Bower, Phoenix, Walimex, etc. etc.) are the same lens, and are optically identical, although not identical in outward appearance. However, Samyang's lenses are manual-only and do not perform electronic communication with the camera body. You have to manually focus, and manually set the aperture with the lens's aperture ring. You can only use M/Av modes because the camera body can't control the aperture. You have to use stop-down metering (i.e., the view through the viewfinder goes darker the more you stop down). The EXIF won't have any lens information (aperture used, lens name, etc.) in it. What you save in dollars, you will pay back in inconvenience vs. a lens that can communicate with your camera body.


You also won't have much luck in getting a lens that's ultrawide on both crop and full-frame because of the crop factor. While 14mm is ultrawide on full frame, it's the full-frame equivalent of a 21mm lens on a crop body. This is the single category of lenses where you would probably be much better off getting a crop lens, and then reselling it when you move to full-frame, because none of the ultrawide offerings for full frame are ultrawide on a crop (ok, maybe the Sigma 12-24), none of the crop ultrawides cover a full-frame sensor even if they still mount, and the crop versions cost quite a bit less.


The Tokina 11-16, btw, is a crop lens and will vignette on full frame up to about 15mm. That's why Tokina also makes the 16-28/2.8 for full frame.



Saturday 20 June 2015

digital - Are there any free but comparable alternatives to Lightroom?




Does anyone have any recommendations for free software that provides the key features of Lightroom in one package? The main things I am looking for are:



  • RAW photo editing

  • Library management


  • Multiple catalogs

  • Rich keyword/tag support


Thanks!




exposure - With the same camera settings, will a photo always be exposed correctly across different cameras?


I want to do a photo shoot using film, but due to not being able to see the shots I've taken I would first like to take a digital photo, then recreate it using film.


If I used the same focal length, ISO, aperture and shutter speed, would the photo taken digitally be exposed the same as the film photo? Might crop vs non-crop sensor affect things?


If not, what can I do to achieve similar levels of exposure? (I don't want to rely solely on the film camera's light meter.)




Friday 19 June 2015

lens - Why do people recommend 50mm or other prime lenses as starting lenses for learning photography?


I've seen quite a few people recommend the 50mm prime lenses, in particular the sub-$100 50mm/f1.8, as a starting lens for photographers (especially because they're likely to be using cameras with cropped sensors). From my experiences with the 18-55 mm kit lens (on a Canon EOS 550D), it seems that 50mm is not really very suited for indoor flashless group photographs where the wide aperture would be helpful. It is probably a good lens for portraits of one or two persons, and some low light photographs.


So, what are the advantages it offers over the kit zoom lenses (apart from the wider aperture), under what scenarios is it more useful, and why would you recommend it as a starting lens?



Answer



What people have generally suggested is to start with a normal lens. On a full-frame 35mm camera, that role was generally filled by the 50mm lens. On a modern crop-frame DSLR, it would be closer to 30mm (for Canon APS-C 1.6x sensors) or 35mm (Nikon, Pentax, Sony) or 25mm (Olympus and Panasonic).


The 50mm is usually suggested these days as the first good supplement to the kit lens. It doesn't really matter whose 50mm lens you're looking at, the design for the f/1.8 (or f/1.7) version has been around forever. They're all sharp to very sharp, lightweight and (most of all) cheap. And as you pointed out, they'll function well as a shortish version of the traditional medium telephoto portrait lens. But as nice as the "nifty fifty" is, it's not a normal lens on a crop-sensor camera.


A normal prime lens is very versatile. You can step back a couple of feet and get a fairly wide image field. Step forward and you can fill the frame with a single subject of interest. Neither picture will be quite what you'd get using a wide angle lens or a short telephoto, but you can get a reasonably good picture either way. The field of view subjectively matches what you tend to think you're seeing in real life, so there are no major surprises or unintentional special effects.



Working with a prime lens, though, helps you become a better photographer. It forces you to change your point of view to find the best image rather than just standing in the easiest spot and turning a ring. You might get acceptable results using a zoom exclusively, but it's unlikely you'll get a spectacular result until you've forced yourself to take the rocky road for a while. You may decide to stick with primes (I have always had zooms available, and shot an average of ten rolls of film a day -- or the equivalent -- for a couple of decades, yet I bet I took fewer than two hundred shots with a zoom lens in all that time), but even if you use a zoom lens most of the time, you'll never use it in quite the same way after working exclusively with prime lenses long enough to change your habits.


dslr - Wondering if my Nikon lens is broken


I have a Nikon D50 (not sure what lens right now but could check if necessary). I recently took it traveling with me, and I'm not sure if I damaged the lens or if a setting is off. It's making all of my photos really dark and/or blue tinted. I was wondering if anyone else has experienced this or knows what it means. The setting is already on auto.


Here are two example pictures (taken indoors) which show the issues I'm having:


image 1 image 2


Photos taken at night seem to be OK.


night




Thursday 18 June 2015

optics - Is there any physical reason a mirror lens could not have a variable aperture?


I understand that mirror lenses already have high maximum apertures and that often, the last thing one wants to do is let in less light, but is there a physical reason that one could not build a working mirror lens that did use a variable aperture? Is it impossible to put the aperture in the right place? Is there something about the optical pathway that would ruin the functionality of an aperture. The advantage would be that one could get greater depth of field.




Wednesday 17 June 2015

prints - Is there a trick to separate an old photo moisture-fused to glass?


We have lived in some moist environments and a favorite photo, while it otherwise looks good, has large parts of it that appear to have bonded to the glass of its frame. Presumably this is due to small amounts of condensation getting between the photo and the glass.



Does anyone know a trick I can use to help me separate them cleanly? The photo is about 16 years old.



Answer



Water does not harm photographic paper. After all, it is soaked in various water-based chemicals during developing and washed in clean water at the end. So you'll be quite safe removing the glass along with the photo from the frame and sinking them in good clean lukewarm water with a couple of drops of liquid soap. Do not try to pry the photo off the glass by force. Gently rubbing the back of the photo should help water get in between the photo and the glass. Expect results in a few minutes.


If the photo does not come free of the glass within 20 minutes, it most likely is stuck forever. In that case, take the photo and glass out of the water, wipe away all loose water, and take the suggested (in comments) photograph of the photo through the glass, now that you have just washed the glass clean and the photo looks good while still wet.


Disclaimer: this is a safe method for photos printed (developed) on normal photographic paper. If the photo in question is not of that material, soaking in water may in some cases do great harm and destroy the photo irreversibly. You should take a photograph of the photo through the glass before attempting any other operations. Advice on how best to make a photographic copy may be found elsewhere on this site.


lens - Why does the max aperture change when use Nikon 60mm AF-D Micro on D90 camera?


I have found that my Nikon 60mm Micro AF-D changes aperture from 2.8 to 3.2 when used with a D90. I have tried both A and M mode. Is it by design? Why?


I intend to buy the Nikon 50mm 1.4 AF-D, but it also has a switch on the lens, and it will lock the maximum aperture at 2.0 (my guess) instead of 2.8. So should I buy it or not, or would it be better to buy the newer 50mm 1.4 AF-S version?



Answer



I think you must have noticed something slightly different: the maximum aperture is a function of the lens and should be unaffected by the camera body.


The Nikon 60mm Micro AF-D changes its maximum aperture as a function of focus distance. As it focuses closer, the reported maximum aperture drops; by life size (1:1), it's down to f/5.
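This drop follows the usual bellows-factor approximation, N_eff ≈ N × (1 + m), where m is the magnification. This is a simplification that ignores pupil magnification, which may be why the body reports f/5 rather than exactly f/5.6, but it shows the mechanism:

```python
def effective_f_number(nominal_f, magnification):
    # Bellows-factor approximation: the lens extension needed for close
    # focus costs light, raising the effective f-number.
    return nominal_f * (1 + magnification)

effective_f_number(2.8, 1.0)   # 5.6 at life size (1:1)
effective_f_number(2.8, 0.15)  # ~3.2 at a modest close-focus distance
```

So a reported change from f/2.8 to f/3.2 simply means the lens was focused somewhat closer than infinity.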



There is apparently a switch on the 50mm AF-D that will lock it at its minimum aperture. This is a property of lenses with manual aperture rings: modern cameras control the aperture from the camera body, and setting this switch locks the ring at the minimum aperture so that the camera can control it automatically. Since your camera has an in-body focus motor, you're able to use either the AF-D or the AF-S; you just need to decide whether the difference is worth the price to you. They'll both take great pictures; the AF-S will mainly focus faster and more quietly.


lens - Why is my Nikon 50mm f/1.8 giving me an fEE error?


I have an AF Nikkor 50mm f/1.8. When the aperture ring on the lens is set to 22 I can take photos. But when I change the ring to anything other than 22 (the smallest aperture), I get a flashing fEE message, with r09 where the number of exposures left should be.


I don't have the manual for this lens but am just wondering if anyone knows what the issue is here.


My camera is a Nikon D70S.



Answer




Lenses with an aperture ring were originally designed for older SLRs, which did not control aperture via the camera body.


Newer SLRs and DSLRs control aperture via the body, so these older lenses must be stopped all the way down in order to be used. Your lens is working as it should and is not defective.


To select the aperture using your camera, ensure the aperture ring is set at 22 and shoot in either aperture priority (A) or manual mode, using the command dial and LCD screen to select the aperture you desire.


pixels - What is "Resolution" and how is it related to "Printing"?


What do we mean by "Image Resolution", and how is it related to printing on paper?


Is resolution related to displaying the image on computer screens too?


What is the meaning of "High Resolution"? "High" relative to what?




Tuesday 16 June 2015

equipment recommendation - What types of filters are there and what's their use?


I have a basic idea of what a filter is and what it is used for. I would like to know what different types of filters are available, and if possible would love to have the following questions answered for each:




  1. Mostly used for 'X' photography

  2. Price range

  3. Utility factor (out of 5)



Answer



Wow... Well, there are a lot of filters out there that are used for a whole variety of purposes, but to cover the high points:




  • UV Filters are designed to reduce or eliminate UV rays coming through the lens. At this point, they're primarily used as lens protection, though the utility of that is debatable (I come down on the 'not so useful' side of that argument, but others disagree) versus the possible side effects (such as ghosting). So my rating is a 1 there, but that's a personal choice and you have to decide whether the 'protection' is worth the potential side effects to you.





  • Color correcting filters are designed to help create the correct white balance. For example, to color correct for tungsten light, you might use a 85B filter and that will help ensure that white is, in fact, white when you take the shot. My rating is, again, low, perhaps around a 2 here because white balance is easily corrected if you shoot RAW. This is a film holdover.




  • A Polarizer is designed to cut certain types of glare. There are two types: linear and circular. They are basically the same: a filter that can be rotated to block light polarized perpendicular to the filter's transmission axis, which in turn cuts glare. For modern cameras, the circular polarizer is the one you want, as a linear one may interfere with the autofocus function. Bear in mind that when using one you will lose a couple of stops of light. My rating is a 5 if you ever want to shoot subjects around or through water or glass.




  • Neutral Density filters are designed to reduce light, an action often seen as counter-productive in a practice devoted to capturing light. However, ND filters allow for a significantly slower shutter speed under very bright light without affecting the color of the light. Very handy if you want to slow down things like a waterfall to create that dreamy look as the water cascades through the rocks. ND filters come in a variety of strengths, and it is common practice to stack them as needed to increase the strength. This is a 4 for me because I like water shots of all kinds.





  • Graduated Neutral Density filters are similar to the above except that they reduce their density as they progress. As with standard ND, it's about light reduction, but intended to be more controlled so that, for example, you can reduce the brightness of the sky in order to even out with the brightness of the land. Again, there are various levels of strength, but unlike most of the above, these are best when they are not circular and directly affixed to the lens. You want a filter that is much larger than the front of the lens so that you can adjust the position of the gradation as you need for the shot. Cokin makes a good system for this purpose, though you are not limited to their filters if you get the system. Also a 4, though most useful in the landscape world which I don't do a lot of.




  • Special Effects filters run the gamut, including options like a starlight filter that creates a 'star'-like effect on light sources. These are all over the map and some can be fun, though you should bear in mind that many are gimmicky. Maybe a 3 because of the fun factor, some of which you can achieve through other options such as creative bokeh kits. Still, with the right subject, a special effects filter can produce a cool result.
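The shutter-speed arithmetic behind ND filters (including stacking, mentioned above) is simple doubling: each stop of density halves the light, so the exposure time doubles. A sketch:

```python
def shutter_with_nd(base_shutter_s, nd_stops):
    # Each stop of ND halves the light, so the shutter time doubles.
    return base_shutter_s * 2 ** nd_stops

# Stacking a 3-stop and a 6-stop ND gives 9 stops total:
shutter_with_nd(1 / 500, 3 + 6)  # ~1 second instead of 1/500 s
```

That's how a waterfall shot metered at 1/500 s in bright light becomes a one-second silky-water exposure.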




As for all of the above, in regards to price, you get what you pay for to a large degree and it is really all over the map. Better filters aren't cheap and, if you are getting them, it's better to pay the price knowing what you want from it. Having said that, options like the Cokin filter system can really help if you have a large lens collection. I went the Cokin route because I have 5 different filter sizes to deal with and I didn't want to be buying certain filters 5 times. Now, Cokin filters are very good, but not great, though the good news is that there are third party manufacturers that produce filters for the system and some of those really are great. I like the system, your mileage may vary.


software - How to migrate from Picasa to Lightroom?


I've been a long time user of Picasa (the Windows desktop application, not Picasa Web Albums), and have roughly 40k photos. The photos are a mix of my own digital snapshots, and scans of my 35mm film and old family photos. Now that Google has announced they're ending support for Picasa (not that it was ever well supported, but at least it was free) I'm thinking about moving to Lightroom.


The problem is that Picasa is, like Lightroom, a non-destructive editor. Thousands of my photos have edits (mostly straightening and crops), which I would like to carry over to Lightroom. As I see it, I have two options:



  1. Save the photos within Picasa before importing into Lightroom. Picasa saves a copy of the original in a hidden subdirectory, and then saves the photo with the edits applied. (Or equivalently export all my photos to a new directory tree.) I think that Lightroom will ignore the hidden subdirectories, so all history will be lost in Lightroom, and I won't have easy access to the originals any more.

  2. Import the photos as-is into Lightroom, and then redo the edits. I won't know which photos have been edited, or what I did to them. Many hours of work lost.


Does anyone have experience with either method, or advice? Ideally I'd like to have Lightroom know about the edits, so that I have access to both the originals and the edited version within Lightroom. Is that possible?
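One way to soften option 2 is to at least find out which photos Picasa has edited. Picasa typically keeps the untouched original in a hidden subfolder next to the edited file (commonly named `.picasaoriginals`, or `Originals` in older versions); assuming that layout, a sketch that inventories edited photos:

```python
import os

BACKUP_DIRS = {".picasaoriginals", "Originals"}  # names vary by Picasa version

def edited_photos(root):
    """Yield (edited_path, original_path) pairs where Picasa kept a backup."""
    for dirpath, dirnames, filenames in os.walk(root):
        if os.path.basename(dirpath) in BACKUP_DIRS:
            album = os.path.dirname(dirpath)
            for name in filenames:
                edited = os.path.join(album, name)
                if os.path.exists(edited):  # edited copy sits beside the hidden folder
                    yield edited, os.path.join(dirpath, name)
```

Such a list won't recreate the edits inside Lightroom, but it tells you exactly which files to revisit, so no work is silently lost.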




pinhole cameras - How properly expose a long exposure in daylight?


I was shooting some lake waves with some nice color in them, and I wanted to do a long exposure to smooth out the wave and color, similar to the effect you see from a long exposure of whitewater or a waterfall.



However, the image was way too overexposed, even at 1/2 a second (f/20, 100 ISO):


enter image description here


I went to the camera store and asked. The guy at the store offered two solutions:



  1. Shoot an HDR shot, either in camera, or compose one in photoshop.

  2. Get a variable neutral density filter.


However, I wonder: is there a way to make a very small aperture in a lens cap, a la a poor man's pinhole camera, that would reduce light sufficiently to properly expose a very long exposure?


Are there other methods available?



Answer




You are spot on! Use a sewing needle heated in a candle flame and carefully pierce the lens cap. Make a trial exposure and enlarge the hole if needed. Keep in mind that a tiny pinhole will induce diffraction that degrades the image. Also, a pinhole has enormous depth of field. Anyway, experimentation leads to discovery.
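If you want to estimate how small to make the hole, a commonly used optimum (balancing geometric blur against diffraction) is d ≈ 1.9·√(f·λ). A sketch for a typical setup, taking the cap-to-sensor distance as the "focal length":

```python
import math

def optimal_pinhole_mm(focal_mm, wavelength_nm=550):
    # Rayleigh-style optimum balancing diffraction against geometric blur;
    # 550 nm is roughly the middle of the visible spectrum.
    wavelength_mm = wavelength_nm * 1e-6
    return 1.9 * math.sqrt(focal_mm * wavelength_mm)

d = optimal_pinhole_mm(50)  # ~0.32 mm for a 50 mm cap-to-sensor distance
f_number = 50 / d           # ~f/160 -- about 6 stops dimmer than f/20
```

That 6-stop reduction is exactly the sort of light loss that turns an overexposed 1/2-second shot into a multi-second exposure.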


Monday 15 June 2015

photo editing - Any way to use Nik Software under Linux?


The only way I've found is to install Nik Software on Windows 7 in VirtualBox. The problem is that it's working really slow, though the host system is an Intel Core i7 with 8GB RAM, and all the other programs work fine in VirtualBox. Has anyone got a solution for this problem?


I use different versions of Kubuntu / KDE Neon.




How to hold a watch for product photography?


As commented, I'm doing some product photography for smartwatches, I want to achieve certain "poses" with the watch...


smartWatch


I have tried different options like plasticine but, the shape and angle of the resulting images are not as good as the sample image.


So I need to find a way that I can photograph different watches with the same reproducible angle on my lightbox...



Answer



Hang them.





  1. Use some wooden sticks tied with cotton threads hardened with glue.




  2. Use a metal hanger, unbent and reshaped.




  3. Use a PVC rig and hang the watches using "invisible" nylon thread hooked with unbent paper clips. I like this one best: it is the least intrusive in the shot. It can become bulky, but you can cut different rigs for different projects.





enter image description here


PVC is the DIY photographer's best friend. I do not know the exact diameter, but the one I use a lot is around 2.1cm or 3/4 inch. The caps are important because they stabilize it.


enter image description here


Be imaginative and have fun to construct new rigs for each project.


You may still need to use some paper clips to keep some straps in the correct shape.


enter image description here


canon - Why don't semi-automatic modes on a DSLR take flash into account when calculating exposure?


I've purchased a TTL flash today and was surprised to learn that the camera (Canon 77D) is apparently not taking it into account when you set it to aperture priority or any other automatic mode. The proposed exposure settings are similar with or without the flash, even though there's obviously much more potential light when you have it on.


Why is it so? Can't the camera estimate the light available from the flash based on the information provided through TTL?



Answer





Why don't semi-automatic modes on a DSLR take flash into account when calculating exposure?



They do. But when using Aperture Priority exposure mode with flash in low light situations, the camera usually assumes the photographer wishes to use a technique called 'slow shutter sync' or 'dragging the shutter'. When using Av mode with TTL flash for brightly lit scenes the camera usually assumes the photographer wishes to use the flash for "fill."


Before we can understand what happens during TTL flash metering, we must first understand how metering works in general. There is no single "correct" exposure value for an entire scene, there are only correct exposure values for objects with a specific luminance value within that scene. If "correct" exposure is equivalent to 18% gray, then only one luminance value in a scene can be rendered at that level. Everything brighter than the object rendered 18% gray will be closer to saturation, everything darker will be closer to black. Many scenes include differences in brightness that are greater than a camera's ability to record. Either some of the scene will be pure white, some of it will be pure black, or maybe even some of both.


When we choose a simple metering mode, we are telling the camera what part of the scene we are most concerned with exposing properly or we are telling the camera to expose for the areas halfway between the darkest and brightest parts of the scene. With more sophisticated metering modes we are telling the camera to compare the scene to a database in the camera and apply the proper settings to the scene based on which preloaded scenario our current scene most closely matches.


Here's where adding flash can get confusing: the flash will not raise everything in the scene in terms of 'stops' by the same amount. Consider two scenarios:




  • Fill flash. If a scene has 5 stops of contrast between the darkest and brightest parts of the scene, that means the brightest parts are reflecting 32 times as much light per cm² as the darkest parts (2^5 = 32). Assume all objects are roughly the same distance from the flash and the camera. If we add enough light to quadruple the amount of light reflected by the objects in the shadows (4x the light = two stops), we only increase the light from the highlights by 3/32, which works out to about 1/8 stop (log₂(35/32) ≈ 0.13). That is less than half the smallest 1/3-stop adjustment you can make by changing the aperture or shutter speed of your camera! What we accomplished was bringing the shadows two stops closer to the brightness of the highlights without adding any significant light to the highlights. Think of it this way: if you have two buckets the same size, one with 1 inch of water in it and the other with 32 inches, and you add 3 inches of water to both buckets, the first bucket now contains four times as much water as it had before, but the other bucket only has about 1.09x as much water as before.





  • Slow Sync. If a dark subject is fairly close to the camera/flash, but the background is lit by ambient light, the camera meters for the background to set shutter speed and/or aperture and then adds enough flash to properly expose the subject in the foreground. Since the power of a flash falls to 1/4 for each doubling of distance, the flash raises the brightness of the foreground much more than it raises the brightness of the background. And if the background is brighter to begin with than the subject in the foreground, the difference in the flash's effect is even greater: not only is the background receiving less of the flash's light per cm², but the flash's light is also a smaller fraction of the total light the camera receives from the background.
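
The arithmetic behind both scenarios can be sketched in a few lines of Python. The numbers below are the hypothetical 5-stop scene from the fill-flash example above, not anything a real TTL system reports:

```python
import math

def stops(ratio):
    """Exposure difference in stops between two light levels."""
    return math.log2(ratio)

# Fill flash: shadows at 1 unit, highlights at 32 units (5 stops apart).
shadow, highlight = 1.0, 32.0
flash = 3.0  # enough added light to quadruple the shadows

shadow_gain = stops((shadow + flash) / shadow)           # 2.0 stops
highlight_gain = stops((highlight + flash) / highlight)  # ~0.13 stops

print(f"shadows brighten by {shadow_gain:.2f} stops")
print(f"highlights brighten by {highlight_gain:.2f} stops")

# Slow sync: flash intensity falls with the square of distance, so a
# subject at 2 m receives 4x the flash light of a background at 4 m.
def relative_flash(distance, reference=1.0):
    return (reference / distance) ** 2

print(relative_flash(2.0), relative_flash(4.0))  # 0.25 0.0625
```

Running this shows the shadows gaining two full stops while the highlights gain barely an eighth of a stop, which is the whole point of fill flash.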




So now the question is, "How does the camera tell each situation apart?" The answer to that is also twofold: It depends on the way the camera has been programmed and on the settings you have selected. Different camera makers design their TTL flash logic systems differently. And within a particular camera model the settings you choose tell the camera how you want it to act under different scenarios. Most recent TTL systems use the AF distance information reported by the lens and the focus point selected in the viewfinder to determine which area of the frame includes the subject. In general the brighter the scene the more likely the camera will try to provide fill flash and the darker the scene the more likely the camera will attempt proper exposure of the subject using the flash as the primary light source and then expose the rest of the scene as well as it can.


My Canon 5D Mark II, for instance, will assume I want to use slow sync when I am in Av mode and will allow for shutter speeds as long as 30 seconds to properly expose the background in low light situations (I can modify that to 1/60 second or even to 1/200 second via custom functions). On the other hand, if I am in P mode it will use a shutter speed of 1/60 second at the slowest and use enough flash to properly expose the subject at that Tv. It will adjust the aperture to try and properly expose the rest of the scene, but if the widest aperture and 1/60 second is not enough the background will still be dark. Likewise, if the background is much brighter than the subject it will use full flash power and attempt to reduce the exposure as much as it can with a smaller aperture and/or a faster shutter speed up to the camera's flash sync speed.


For more, please see:


What exactly does TTL flash sets its power to?
Why is flash TTL metering independent from ambient light metering?

How do TTL flash metering systems calculate how much power is needed?


How do I get my Canon 60D to use short shutter speeds with flash in Av mode?
Canon 600D Exposure Lock and Flash Exposure Lock
Why is my Metz 58 AF-2 using long shutter values when my Canon 60D is in Av mode?
What is this shadow in my photo?
Taking a skyline photo with a person closer to the camera during night time


What is "Dragging the Shutter"?


How do you photograph artwork in a glass picture frame?


I'm trying to sell a picture of a piece of artwork for charity, and I'd like to photograph it. It's in a glass frame with lots of glare. Any ideas of what I can do to photograph it so it looks its absolute best? Thanks!



Answer



When it comes to glass it's all about lighting direction.


You want to make sure that when you look at the picture through the camera, neither the reflection of the light source nor anything lit by your light source is visible.


Hold up, I'll draw a diagram:




Glass and other shiny objects reflect light back in one direction (like a ball bouncing off a wall). The painting, which is diffuse, reflects some light this way and also scatters some light back in every direction.


In the setup above, the light from a directional source hits the glass and carries on, missing the camera, thus the glass isn't visible! The same light hits the painting and some of it does get reflected into the camera so the painting does show up.


The important thing is to use a directional source, as it's possible for light from your light source to bounce off a white wall and hit the glass at an angle that does go down the lens, and this shows up as flare.



This often happens when you have a white wall behind the camera, as in the above example.


The moral of the story is to make sure you can't see the reflection of the light source in the glass when standing behind the camera, and to try to limit what the light hits (ideally it should hit nothing but the painting). You can make any light source directional by placing opaque objects around it to block the light in certain directions.
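
The "ball bouncing off a wall" behavior is just the mirror-reflection rule: a ray with direction D reflecting off a surface with unit normal N leaves along R = D - 2(D·N)N. A minimal sketch of that rule, with vectors made up purely for illustration:

```python
def reflect(d, n):
    """Mirror-reflect direction vector d off a surface with unit normal n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# Glass hangs on a wall; its normal points out toward the room (+x).
normal = (1.0, 0.0, 0.0)

# Light arriving head-on bounces straight back toward the camera: glare.
print(reflect((-1.0, 0.0, 0.0), normal))  # (1.0, 0.0, 0.0)

# Light arriving at 45 degrees bounces off to the side, missing the camera.
print(reflect((-1.0, 1.0, 0.0), normal))  # (1.0, 1.0, 0.0)
```

This is why lighting the artwork from the side works: the specular bounce leaves at the mirrored angle and never enters the lens, while the diffuse reflection off the painting still reaches the camera.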


Sunday 14 June 2015

Can I live view my Nikon D3200 via laptop/ tablet etc?



I am wanting to take self portraits using my Nikon D3200 and was wondering is there a way I can stream the live view through a laptop, tablet or some other screen?



I want to have the screen facing me so I can see what the picture will look like before I capture it so it's not a case of clicking and hoping.


I have an AV out (from camera) to USB lead and also an AV out (from camera) to the yellow and white lead


Hope that makes sense


Thanks




digital - Why don't mainstream sensors use CYM filters instead of RGB?


From what I understand most digital cameras have a sensor where each pixel-sensor has three sub-sensors, each one with an R,G and B filter. RGB is obviously the more fundamental colour model since it directly corresponds with the receptors (cones) in the human eye.


However, RGB filters necessarily cut out two thirds of white light to get their component. Surely cameras would benefit from shorter exposure times if the filters were instead CYM where each element cuts out only one third of the light? The camera's processor can still save the image in whatever format the consumer wants since a CYM datapoint can be converted easily to an RGB one.


I know this is sometimes done in astrophotography where three separate B&W photos are taken with CYM filters.


Am I just wrong and this is, in fact, what's already done - or is there a good reason for an RGB sensor?



Answer



First, a little background to clear up a slight misunderstanding on your part.


The vast majority of color digital cameras have a Bayer filter that masks each pixel with a color filter: Red, Green, or Blue.¹ The RAW data does not include any color information, but only a luminance value for each pixel.




However, RGB filters necessarily cut out two thirds of white light to get their component.



Not really. There's a lot of green light that makes it past the 'red' and 'blue' filters. There's a lot of 'red' light and a good bit of 'blue' light that makes it past the 'green' filter. There's some 'blue' light that makes it past the 'red' filter and vice-versa. The wavelengths that the 'Green' and 'Red' filters are centered on are very close to one another, and 'Red' is usually centered somewhere between 580nm and 600nm, which is more in 'yellow-orange' territory than 'red'. The "peaks" of the filters in a typical Bayer array aren't aligned with the wavelengths we describe as "red", "green", and "blue."


enter image description here


So in a sense, our cameras are really YGV (Yellow-Green-Violet) as much as they are RGB. Our color reproduction systems (monitors, printers, web presses, etc.) are what are RGB, CMYK, or some other combination of colors.


enter image description here


This mimics the human eye, where our 'red' cones are centered around 565nm, which is a greenish yellow, as opposed to our 'green' cones that are centered around 540nm, which is green with just a tint of yellow mixed in. For more about how both the human vision system and our cameras create "color" out of the portion of the electromagnetic radiation spectrum we call "light", please see: Why are Red, Green, and Blue the primary colors of light?


There's no hard cutoff between the filter colors, such as with a filter used on a scientific instrument that only lets a very narrow band of wavelengths through. It's more like the color filters we use on B&W film. If we use a red filter with B&W film all of the green objects don't disappear or look totally black, as they would with a hard cutoff. Rather, the green objects will look a darker shade of grey than red objects that are similarly bright in the actual scene.


Just as with the human eye, almost all Bayer filters include twice as many "Green" pixels as they do "Red" or "Blue" pixels. In other words, every other pixel is masked with "Green" and the remaining half are split between "Red" and "Blue". So a 20MP sensor would have roughly 10M Green, 5M Red, and 5M Blue pixels. When the luminance values from each pixel are interpreted by the camera's processing unit, the differences between adjacent pixels masked with different colors are used to interpolate a Red, Green, and Blue value (which actually correspond to somewhere around 640, 530, and 480 nanometers, respectively) for each pixel. Each color is additionally weighted to roughly match the sensitivity of the human eye, so the "Red" pixels carry a little more weight than the "Blue" ones do.



The process of converting monochrome luminance values from each pixel into an interpolated RGB value for each pixel is known as demosaicing. Since most camera manufacturers use proprietary algorithms to do this, using third party RAW convertors such as Adobe Camera RAW or DxO Optics will yield slightly different results than using the manufacturer's own RAW convertor. There are some sensor types, such as the Foveon, that do have three color sensitive layers stacked on top of each other. But the manufacturers claim such a sensor with three 15MP layers stacked on each other is a 45MP sensor. In reality such an arrangement yields the same amount of detail as an approximately 30MP conventional Bayer masked sensor. The problem with Foveon type sensors, at least thus far, has been poorer noise performance in low light environments.
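
As a rough illustration of demosaicing, here is a minimal sketch that averages same-colored neighbors on a tiny, made-up RGGB mosaic. Real converters use far more sophisticated (and proprietary) algorithms, so treat this purely as a toy:

```python
# A tiny 4x4 RGGB Bayer mosaic: each cell holds one luminance value,
# and its color is implied by its position in the repeating 2x2 tile.
mosaic = [
    [100, 60, 110, 62],   # R G R G
    [ 55, 40,  58, 42],   # G B G B
    [105, 61, 115, 64],   # R G R G
    [ 57, 41,  59, 44],   # G B G B
]

def color_at(y, x):
    """Which filter color masks the pixel at (y, x) in an RGGB pattern."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def demosaic_pixel(y, x):
    """Bilinear estimate of (R, G, B) at one pixel from its 3x3 neighborhood."""
    rgb = {}
    for c in "RGB":
        vals = [mosaic[j][i]
                for j in range(max(0, y - 1), min(4, y + 2))
                for i in range(max(0, x - 1), min(4, x + 2))
                if color_at(j, i) == c]
        rgb[c] = sum(vals) / len(vals)
    return rgb["R"], rgb["G"], rgb["B"]

# An interior "Blue" pixel keeps its own B value and borrows R and G
# from its neighbors.
print(demosaic_pixel(1, 1))  # (107.5, 58.5, 40.0)
```

Each pixel starts with a single luminance value, and the missing two color channels are interpolated, which is exactly why two different RAW converters can produce slightly different colors from the same file.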


So why don't most digital cameras use CYM filters instead of RGB¹ filters? The primary reason is color accuracy as defined by the human perception of the different wavelengths of light. It is much more difficult to interpolate color values accurately using values from adjacent pixels when using a CYM mask than when using an "RGB" mask.¹ So you give up a little light sensitivity to gain color accuracy. After all, most commercial photography at the highest levels is either done with controlled lighting (such as a portrait studio where it is easy enough to add light) or from a tripod (which allows longer exposure times to collect more light). And the demands of professional photographers are what drives the technology that then finds its way down to the consumer grade products.


¹ Except the three color filters for most Bayer masked "RGB" cameras are really 'blue-with a touch of violet', 'Green with a touch of yellow', and somewhere between 'Yellow with a touch of green' (which mimics the human eye the most) and 'Yellow with a lot of orange' (which seems to be easier to implement for a CMOS sensor).


color - Are RGB numeric values equal to CMYK percentages?


are RGB numeric values equal to CMYK percentages?




Where to focus when shooting landscapes?


When I am shooting landscapes, where (at what point) should I focus when using Auto Focus?


Or if I am shooting the lit up skyline of the city, what should be my focus point?


The reason I ask is that in AF-S I have one small focus point, but if I am shooting the entire skyline, wouldn't it throw the rest out of focus? Or should I change to Manual Focus?




resolution - Why are two pictures that are the same dimensions/dpi such different file sizes?


I am working on a project where my two images must be AS SMALL AS POSSIBLE. I have scaled the two images to the same dimensions/dpi, and I've checked the color profile in Photoshop CS6 and they both look the same. They were both saved out at the same JPEG compression.


Can someone please explain to me how these two images ended up being such drastically different sizes? The dog one is 97 KB and the bunny one is 576 KB.


enter image description here



enter image description here


So after following the suggestion I changed the embedded color profiles of both of the above images and they are now nearly the same file size. However, I have two more that do have the same embedded color profile and again, these are drastically different sizes. Can you explain why?


enter image description here


enter image description here



Answer



The first two images both have embedded color profiles. The smaller one has Adobe RGB, and the larger one has "TIFF RGB", which happens to consume more space.


My guess is you probably want these to be sRGB anyway, with no embedded color profile.


In the second case, it's the details. The hand photograph has big areas of the same color, a lot of blur, and not very many sharp lines. That's ideal for compression. The bikes and trees are full of contrast and intricate detail. That's much harder to compress.


Try running a strong gaussian blur over the second image and watch how it shrinks when you save it. That doesn't solve your problem, but should make clear what's happening.
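
The effect is easy to reproduce with any general-purpose compressor. JPEG uses a lossy DCT rather than Deflate, but the principle is the same: here is a sketch using Python's zlib on synthetic data standing in for smooth versus detailed image content:

```python
import random
import zlib

random.seed(0)
n = 64 * 1024

# "Smooth" data: long, slow gradients, like large blurred areas of one color.
smooth = bytes((i // 256) % 256 for i in range(n))

# "Detailed" data: rapid, unpredictable variation, like foliage and spokes.
detailed = bytes(random.randrange(256) for _ in range(n))

print(len(zlib.compress(smooth)))    # a few hundred bytes
print(len(zlib.compress(detailed)))  # close to the original 65536
```

Both inputs are exactly the same size, yet the predictable one shrinks to a tiny fraction of the unpredictable one, just like the hand photo versus the bikes-and-trees photo.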


Saturday 13 June 2015

presentation - What alternatives to glass are there for framing glossy prints?


I ordered photos through an online service. Print is pretty good; the photos are glossy, so I'm wondering if there is any alternative to glass. I think putting glossy photos under glass would make them shine even more. However glass also protects them from dust, although that might not be that important considering most of the paintings don't have glass on them. However I've seen some nice prints done, where the paper goes around the edges of the frame so that the photo is seen on the sides as well. I don't know if that's possible with already printed photos or should I order this style straight from print shop.


Photos are 10x8 inch, which is roughly A4 size.


What options do I have?




Is it best to transport a camera with lens attached, or off?


I'm traveling right now (I'm in Aruba) and I bought a Canon 60D with the kit lens (EF-S 18-55mm). I've never used a camera with a removable lens like this one; my last one was a Panasonic Lumix DMC-FZ18. I'm going to a lot of places to take photos, so what is the best way to carry the camera and the lens in the case? My ideas:




  • Remove the lens and keep the camera body without any lens attached (just with the body cap)

  • Leave the lens on the camera as it is


Which one is correct? And if I have more gear, like more lenses, a flash, and other accessories, how should I carry them?



Answer



Take a look at this picture:


enter image description here


This is how I carry my camera (Nikon D80), 2-3 lenses, and a flash, plus a small tripod (not in this picture). Everything is easy and fast to access, and it stays safe during transport in a backpack (I travel a lot by bicycle, so you probably know how much everything shakes there).


The camera bag is a Lowepro (I don't remember the model; I can look it up if necessary), plus Adorama Slinger lens cases (they attach to the camera bag on both sides).



Everything is still working well (knock-knock on wood).


lens - Should I be worried about getting dust inside my SLR?


I recently went to Silverstone to watch the F1 qualifying and took my DSLR camera with me. Throughout the day, I switched lenses a few times, but each time I was very conscious that there was a lot of dust around (by the end of the day, my clothes were covered in dust).


Every time I changed lenses, I was very protective of the SLR body and made sure the amount of time the lens mount opening was exposed was minimised. Was I being too protective, or is it very easy to get dust in the SLR body when changing lenses?



Answer



In my experience, it is pretty easy to get dust, as well as other unwanted junk, inside your camera body. I follow a pretty rigorous routine when changing my lenses so that I minimize the exposure time of everything: sensor, rear lens element, etc. Despite my attempts to be careful, even a short, random gust can blow in the most astonishing things, and they can drastically affect your photos.


A couple months ago I was out taking shots of birds (one of my first times trying bird photography) and I changed a lens. I had exposed the camera sensor for only a few seconds, but a dried fragment of a grass blade ended up inside my camera body. It took a while to find it at first, as I was looking at the shutter and sensor assembly. After some time, I finally realized it was actually stuck to the prism that redirects light to my viewfinder.


Since then, I've completely accepted my paranoia about switching lenses. Better safe than sorry. ;)


color - Why does my white picture have a blue hue?




I took this picture in a softbox illuminated by three 5000 K, 10-watt LED flood lights (one on each side and a third on top).


I'm using a Samsung Galaxy S5 to take the picture, ISO 100 and Matrix Metering.


Why am I getting that blue hue? And why does it look like there are horizontal waves of white and gray throughout the picture?


Thank you for your help!




color management - How to calibrate 2 monitors identically?



Is it possible to calibrate correctly and identically (or nearly) 2 monitors? I'm really getting into more professional photography, so I recently bought an external monitor, and a Spyder to calibrate it. However, after calibrating the 2 devices, I find that the result on each monitor is exactly the same as the sRGB ICC profile. Fine by me, but the 2 monitors still show very different results in terms of color and contrast, which makes it impossible for me to work on my photographs.


How can I fix this? I'm considering sending back the Spyder 5 to get a refund, since it's apparently not having any effect. Should I buy a different one?


I know I don't have the best equipment, but I'm on a really tight budget so I hope to be able to make the most of it.


Thanks for your help everyone


My gear:


Asus Vivobook s550ca / External Monitor BenQ GW2255 (recently purchased) / Spyder 5 Express (recently purchased) / Sony A33 and recently purchased Sony A77 mII


enter image description here


EDIT: There is already a question concerning a similar problem How do I calibrate two displays to the same color? (LCD, LED backlight and CCFL backlight) but I sincerely am not qualified to say whether the other one answers my problem or not - I'm really not an expert, which is why I'm asking a question in the first place.



Answer





Is it possible to calibrate correctly and identically (or nearly) 2 monitors?



Only if they're identical display types. There are many different types of LCD display, and several non-LCD display types besides. Two different display types may simply be incapable of producing the same color gamut, brightness levels, evenness of illumination, contrast, etc.


ASUS doesn't say what kind of LCD your laptop has, but it's probably TN. Your external display is a VA type LCD, which is contrastier than TN, and more even in terms of lighting, but not as good as IPS. Most touch screen technologies also affect image quality, because they put arrays of microscopic stuff in between the actual LCD and your eyes.


You might feel that these difficulties make calibration pointless, but it isn't so. Proper calibration brings your monitor as close to the objective ideal as its technology makes possible. If you edit your photos on a properly-calibrated monitor, they will also look good on other calibrated monitors, even if they don't look exactly the same as on yours. On uncalibrated monitors, your photos may not look as good as you would like, but that's unavoidable; this was true before you calibrated your display, too.



I find that the result on each monitor is exactly the same as the sRGB ICC profile.



Are you setting the per-monitor ICC profile? Simply building the profile with your calibration tool may not be enough. You might have to install it manually, or manually select it in the OS's display settings, depending on how the calibration software works.




which makes it impossible for me to work on my photographs



Nonsense. Your external display is almost certainly a better display, objectively speaking, so do your actual photo edits on that monitor. Use the laptop display for auxiliary things, such as app palettes, email, a web browser, etc.



I don't really "trust" it, seems even after calibration, manual and with the Spyder, it's still really bright and saturated. My pics look really good straight from the camera, it seems really weird to have such quality from my old camera to be honest. Also it looks like all my old pictures suffer too much contrast due to my former photoshop editing - but they dont look so contrasted on the prints i used for an exhibition a while ago. I'm lost!



Out of the box, most displays are too bright. Manufacturers do that to make them "pop" under bright fluorescent retail store lighting. Home and office lighting is typically much dimmer, so you need to turn the display brightness down quite a bit.


Calibration should have taken care of saturation; it should now be correct. If your previously-edited photos look overly saturated on the new, calibrated monitor, it's because you (or your camera) punched the saturation up to compensate for the poorer capabilities of the old display. You might want to go back and recheck your best photos, to see if their adjustments should be dialed back a bit. It's happened to me, too.


As for the rest of your frustration, I offer this bit of wisdom: A man with a watch always knows what time it is. A man with two watches is never sure.


Why is the front element of a telephoto lens larger than a wide angle lens?

A wide angle lens has a wide angle of view, therefore it would make sense that the front of the lens would also be wide. A telephoto lens ha...