Tuesday 31 October 2017

Macro photography with an ultra-wide lens and extension tubes?


While reading some reviews of the Canon EF-S 10-22mm f/3.5-4.5 USM, I was a bit surprised to notice that the lens is supposed to be compatible with 12mm and 25mm extension tubes, and that with those one should be able to get maximum magnifications better than 1:1.


An ultra-wide lens for macro work sounds like a fun alternative, but does it really work in practice at all?


Have people had any success with taking macro shots using ultra-wides and extension tubes? Do you have any working distance at all between the lens and your subject? What about lighting the subject, does it become near-impossible with the lens shadowing everything?



Answer



Here's the Estonian reverse of a 1 euro cent, shot with my widest lens, a Zenitar 16, at f/11 on 19mm of extension tubes, giving 1.18x magnification:


Estonian reverse of 1 euro-cent
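
As a side note, that 1.18x figure is roughly what simple thin-lens arithmetic predicts: the magnification added by extension tubes is approximately the extension divided by the focal length, which is why tubes are so dramatically effective on short lenses. A quick sketch (a ballpark approximation only; real lens designs deviate from it):

```python
# Rough magnification gained from extension tubes (thin-lens approximation).
# Real lenses, especially ultra-wides with complex designs, deviate from this,
# so treat the result as a ballpark figure rather than an exact value.

def magnification_with_tubes(focal_length_mm, extension_mm, native_magnification=0.0):
    """Total magnification is roughly native magnification + extension / focal length."""
    return native_magnification + extension_mm / focal_length_mm

print(magnification_with_tubes(16, 19))   # ~1.19x: the Zenitar 16 example above
print(magnification_with_tubes(50, 19))   # ~0.38x: the same tubes on a 50mm lens
```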


Not much room for lighting indeed; sidelight, or glow-through with a translucent subject, seem to be the only options:



making of


portrait - What do I need to get photos with a uniform black background (not with post)?


Often I see portraits such as the one below and like the way they look. I want to know how I can get this look in my shots.



I am sure there are professional materials you can get to outfit a studio that will deliver such results. I have no clue what these items would be; can I get clued in?


Also, I was wondering if there is a DIY way to achieve this as well. Maybe with a black bed sheet or something?


I want to set something up in a room in my house for studio type shots. I am looking to get an idea of what items I need to get this done and how the setup should go.


black background portrait



Answer



To get a pure black background you need space, not material. The easiest way to get a black background is to shoot outdoors at night. It doesn't matter what your background is like, provided it's not too close and doesn't have its own light sources. This was shot in my garden:



Distance is always key. If you are working indoors, even with a specialist photographic black backdrop, it's really hard to get it jet black in your photos. "Black" objects still reflect some light, so if you are lighting from the front and any of your key light hits the background, the effect is easily scuppered.


This is where the inverse square law comes in: if the light is twice the distance from your background as it is from the subject, the background gets 4x less light. Three times further away and it's 9x less. This allows you to make a white wall turn black if it's far enough away.
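
To put numbers on that, here is a minimal sketch of the arithmetic (plain inverse square law, ignoring reflections and modifiers):

```python
import math

# Inverse square law: illumination falls with the square of the distance from
# the light. Compare the light hitting the background to the light hitting
# the subject, and express the difference in stops.
def relative_light(subject_dist, background_dist):
    ratio = (subject_dist / background_dist) ** 2
    stops = math.log2(1 / ratio)
    return ratio, stops

print(relative_light(1, 2))   # (0.25, 2.0): a quarter of the light, 2 stops darker
print(relative_light(1, 3))   # (~0.11, ~3.2): a ninth of the light, ~3.2 stops darker
```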




The above image was shot in front of a white projector screen! That's about as bright white as objects get. But due to the lighting being many times closer to the grey subject, the background appears absolutely jet black.



Now the flash in the elephant example was extremely close, but the effect scales up to people. The above image was shot in front of a white wall; although the wall doesn't go pure black, it's suitable for most purposes. (This was actually part of a multi-light setup with the background light off; I could have moved further from the wall.)



This image demonstrates the opposite effect: the background was replaced with a piece of black card, of similar reflectance to many commercial photographic backgrounds. Now enough light is hitting this "black" object for it to appear white!


Here's an example of what goes wrong if you don't have enough space, even with the right gear:



There wasn't sufficient space (or a large enough background) to have the background further from the subject, and the result is a not-quite-black background which shows up creases and other imperfections that need to be 'shopped out. Not ideal.




Now, your actual question referred to creating a uniform background (not necessarily 100% pure black). Any black material could be used, but unless you get it jet black, the weave plus any creases are going to show up. This can be remedied by throwing the background out of focus with a fast lens. Unfortunately this also requires space (or a very big aperture), so ultimately I'm afraid there are few options for the space-limited!



Monday 30 October 2017

technique - Would an 85mm lens have allowed me to blur the background better in this photo of a DJ in action?


This is the picture I took:





I had a Nikon D610 with a 24-70mm f/2.8 lens on it, used in manual mode; the settings were f/2.8, shutter speed 1/20 sec, and focal length 62mm. Also, I was as close to him as possible, about 2 meters away, and the background was about one meter behind him.


I will be happy to hear as many issues and suggestions as you have on the photo, but my main question is this: notice the background is not blurred enough. It doesn't need to be fully blurred, but a bit more blur would make the background less distracting. So do I need an 85mm lens to achieve this? Or should I be able to do it with this lens too and just need more practice? Or, with these distances between me and him and between him and the background, is it simply not possible?



Answer



Okay, so, really, two aspects here. First, would an 85mm lens let you blur the background more? Probably, because the framing for a longer focal length decreases the apparent depth of field, and because there are reasonably-priced and readily available 85mm prime lenses with wide apertures and generally nice technical image quality. But it's not a particularly magic number — any long, fast lens will give you that kind of result. If you want more of that than you can get with your current lens, and you can't get closer for the framing you want, any lens with a longer focal length and the same or wider max aperture will do.
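
To put rough numbers on that first point, here is a sketch using a simple thin-lens model and the distances from the question. The ~2.74m figure is just the distance that keeps the same framing with an 85mm lens, and the exact values would vary with the real lens design:

```python
# Diameter on the sensor of the blur circle for a background point, with the
# subject in focus. Thin-lens approximation; real lenses will differ somewhat.
def blur_disc_mm(focal_mm, f_number, subject_dist_mm, background_offset_mm):
    aperture = focal_mm / f_number                       # entrance pupil diameter
    magnification = focal_mm / (subject_dist_mm - focal_mm)
    background_dist = subject_dist_mm + background_offset_mm
    return aperture * magnification * background_offset_mm / background_dist

# The shot as taken: 62mm, f/2.8, subject ~2m away, background ~1m behind him.
print(blur_disc_mm(62, 2.8, 2000, 1000))   # ~0.24 mm

# An 85mm f/1.8, stepped back to ~2.74m to keep the same framing:
print(blur_disc_mm(85, 1.8, 2742, 1000))   # ~0.40 mm, roughly 1.7x more blur
```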


But, second (and unstated): are you sure that that's what you really want? Getting rid of "distractions" is an easy way to get straightforward, obvious results (and therefore win Internet photo competitions), but isn't necessarily automatically better. In this particular case, I think the records in the background add significantly to the context of the image and would be much more distracting were they out of focus.


In fact, I wish for more sharpness in the turntable in the foreground left — I want more depth of field, not less. The whatever-it-is behind the DJ's head is a bit unfortunate, but I'd solve that by moving slightly, or by removing it digitally, or just not worrying about it. (A little more blur wouldn't remove it anyway — what you need is lighting separation, and in this situation that's probably not under your control.)


If I were to be concerned about anything technical in this image overall, my main concern would be the blown-out details on the DJ, probably due to poor lighting. His hands are particularly unfortunate here, since they are naturally a center of attention — I'd go so far as to say that for me, they are the main part of what the image is about, so it hurts to have them looking so... compromised. Unfortunately, this is a really tough situation — and a different lens won't really help that aspect (85mm or otherwise).


dslr - Does a sensor count the number of photons that hit it?


I'm interested in the grayscale image case. In a sensor there is an array of cavities which collect photons. (Source: Cambridge in Colour Digital Camera Sensors)


Does each cavity count the number of signals (or peaks) generated by each photon? Or is there one signal which is the sum of all photons (in which case the size of the signal should depend on photon energy presumably)?


And also, I'm guessing each cavity corresponds to a pixel?


Additional references would be appreciated.



Answer



Your link discusses how a CCD (charge coupled device) image sensor works. Note that CCDs have applications besides image sensors, but the vast majority of CCDs are used as image sensors, and that is the only application I will discuss.



CCDs


In typical CCDs used for color image sensing, each CCD cell has a color filter over it. The most commonly used pattern groups 4 cells together with one red filter, one blue filter, and two green filters. Each filter only lets through photons of its corresponding color, i.e. within a certain frequency band. A greyscale CCD simply doesn't have these filters.


A CCD (when used as an image sensor) at its core is a photon counting device. A photon that is incident upon the active region of a CCD excites an electron through the photoelectric effect which is then stored within that cell of the CCD. This process continues as long as photons hit the cell causing electrons to accumulate within each cell.


Your camera lens projects an image of the scene you are taking a picture of onto the CCD. This is the same as in a film camera, except with film in place of the CCD. Each pixel corresponds to one cell within the CCD. In the case of a color image, each pixel is the product of one or more filtered cells, depending on the algorithm and cell location. The simplest algorithm groups each set of 4 filtered cells into a single pixel. However, it is common for interpolation schemes to increase the number of full-color pixels to equal the number of CCD cells.
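
As an illustration of that simplest grouping algorithm, here is a minimal sketch (an RGGB cell layout is assumed; real cameras use far more sophisticated interpolation to get one full-color pixel per cell):

```python
import numpy as np

# Group each 2x2 block of Bayer-filtered cells (RGGB layout assumed) into one
# full-color pixel, averaging the two green cells. Output is half-resolution.
def simple_demosaic(raw):
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    g  = (g1 + g2) / 2.0
    return np.dstack([r, g, b])

raw = np.random.randint(0, 4096, size=(8, 8)).astype(float)  # fake 12-bit readout
print(simple_demosaic(raw).shape)  # (4, 4, 3): one RGB pixel per 2x2 cell group
```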


Photon Energy Dependence


The signal does depend on photon energy, but only as a threshold. In order for a photon to generate an electron through the photoelectric effect it must have a certain amount of energy. This amount of energy is the "bandgap" energy of the semiconductor. The bandgap energy of silicon is about 1.1 eV, meaning photons with a wavelength of about 1100 nm and lower will be detected. As you continue to increase photon energy the signal remains constant at one electron per photon. Once your photons have twice the bandgap energy, or more, an incident photon can generate two electrons, but it is fairly rare.
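
The ~1100 nm cutoff follows directly from the bandgap: a photon's energy in electron-volts is hc divided by its wavelength. A quick check of the arithmetic:

```python
# Cutoff wavelength implied by a bandgap energy: lambda = h * c / E.
h = 4.135667e-15   # Planck constant in eV*s
c = 2.998e8        # speed of light in m/s

def cutoff_wavelength_nm(bandgap_ev):
    return h * c / bandgap_ev * 1e9

print(cutoff_wavelength_nm(1.1))   # ~1127 nm for silicon's ~1.1 eV bandgap
```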


Once you have decided you are done taking your image, the shutter is closed and it is time to read out what image was captured in the CCD. To read out the image, the charge within each cell is shifted over one column within its row. The first column is then read out. This can be done either by measuring the current needed to discharge the cell, or by measuring the voltage of the cell while knowing its capacitance; both can tell you how many electrons were stored in that cell. After the first column is read out, the cells are all shifted again, and this repeats until all cells have been read.


Non-Idealities


There are a number of factors that prevent typical CCDs from giving you an exact photon count. There is a significant amount of thermal noise that can only be reduced by lowering the temperature well below what is reasonable for a handheld camera to be capable of. There can be leakage within the CCD cells which can cause electrons to escape the cell, or move into nearby cells, which prevents an accurate count. There will also be photons that reflect off the cell, and therefore aren't counted.


However, none of this changes the fact that a CCD counts photons. It just means it isn't a very precise photon counter. More on this below.





Does a CCD Count Photons?


I believe it does, but it comes down to the definition of "count". Let's consider an analogy.


Alice, Bob, and Chris each own an apple orchard. They want to know how many apples have fallen off the trees in their orchards. To do this they use a Tennis Ball Coupled Device (TBCD). It might look like an ordinary basket, but trust me, it's a TBCD. Alice, Bob, and Chris walk through their orchards putting a tennis ball in the TBCD for each apple they see on the ground. By the time they have finished, each has a number of tennis balls in the TBCD equal to the number of apples that fell off the trees.


To figure out how many apples fell off the trees, Alice, Bob, and Chris each use a different method. Alice proceeds to count out the number of tennis balls in her TBCD. When she is done, she knows exactly the number of apples she saw. Bob is not as patient as Alice and uses an advanced computer vision system to automatically count the tennis balls in his TBCD. When he is done, he knows approximately the number of apples he saw, but there is a small error because the CV system isn't perfect. Chris can't afford such a system, nor is he as patient as Alice, so he weighs his TBCD and, using the weight of a tennis ball, determines approximately how many tennis balls there are.


Now here is the question. Who of these people used a system that counted the number of apples that fell in their orchards? Each at one point had a number of tennis balls equal to the number of apples. Does the readout method impact whether or not the TBCD counts apples that fell onto the ground?


The TBCD is (unsurprisingly) directly comparable to a cell in a CCD. It stores a number of electrons equal to the number of photons it captured. This most certainly qualifies as a photon count. Then, depending on your readout circuit, you might get a more or less precise reading of this value. Is it a count? If my image sensor counts the number of photons, but doesn't tell anyone, did it still count the number of photons? As I said earlier, I think this comes down to your definition of count, but I believe a CCD qualifies as a photon counting device.


lens - How can the iPhone 5S have such a big aperture (f/2.2)?


I thought that in order to have a big aperture such as f/2.2, a big amount of light should be able to reach the sensor, and in order to do that, a big lens was needed.


How is it possible that in the iPhone 5S, which has such a small camera lens, the aperture can be so wide?



Answer



Firstly, the iPhone 5 lens more or less has to be f/2.2 due to the small pixel size: the effects of diffraction that start to creep in at f/11 on a DSLR start to creep in at about f/1.45 on a 5.6mm (diagonal) sensor!



I thought that in order to have a big aperture such as f/2.2, a big amount of light should be able to reach the sensor, and in order to do that, a big lens was needed.




The figure f/2.2 actually means a large amount of light per unit area. Given the tiny sensor in the iPhone 5, this means there is still only a small amount of light overall being transmitted by the lens.


An f/2.2 lens has an entrance pupil (the apparent size of the aperture when looking through the centre of the lens) whose diameter is equal to the focal length divided by 2.2.


The focal length of the iPhone lens is 4.1mm so the entrance pupil is 1.86mm, which is not difficult to achieve in a small package. Compare this to a 35mm f/2.0 lens for a DSLR, which has an entrance pupil which is 17.5mm in diameter!
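
The arithmetic is as simple as it sounds; a quick sketch:

```python
# Entrance pupil diameter = focal length / f-number.
def entrance_pupil_mm(focal_mm, f_number):
    return focal_mm / f_number

print(entrance_pupil_mm(4.1, 2.2))   # ~1.86 mm: the iPhone lens
print(entrance_pupil_mm(35, 2.0))    # 17.5 mm: a 35mm f/2 DSLR lens
print(entrance_pupil_mm(600, 4.0))   # 150 mm: the Canon 600mm f/4 (see below)
```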


Aside #1: entrance pupil diameters


From the above it would seem that an ultra-wide for the much larger APS-C sensor, with a focal length of 8mm and an f-stop of 4.0, should be about the same size as the iPhone lens, as its entrance pupil is hardly any larger. However, these lenses are many, many times larger. To explain why, we need to go a bit deeper into lens design.


In order to be precise I use the term "entrance pupil", instead of the "physical aperture" (the hole in the lens barrel where the iris is located). The important factor for lens performance is not how large the aperture is, but how large it appears to be from the outside world. The Canon 600mm f/4 lens has an entrance pupil that is a whopping 150mm wide! Yet the aperture itself is located in the middle of the lens where there is clearly no space for a 150mm opening.


So you might read from this that a large entrance pupil lens doesn't have to be physically large; however, in order for the aperture to appear 150mm wide, the opening at the front of the lens has to be at least 150mm. And if you look at the Canon 600mm f/4, this is clearly the case, with its dinner-plate-sized front element!


Entrance pupil size and front element diameter are extremely well correlated for longer focal lengths, but as you get into ultra-wides the correspondence breaks down; our 8mm f/4.0 lens should, on paper, have a tiny front element. The catch is that for the lens to be f/4.0, the physical hole that appears to be 2mm wide must be visible from across the whole field of view, which for an ultra-wide is considerable; hence the large bulbous front element.


Due to the smaller sensor the iPhone lens has a much smaller field of view compared to its focal length, hence the range of angles the physical aperture has to be visible from is much reduced, allowing the front element (and thus the size of the lens as a whole) to be much smaller than that of the APS-C lens.


Aside #2: phone camera lens design



Having a small f-number like f/2.2 is associated not only with large lenses but with expensive lenses too. Whilst f/2 lenses appear on some compacts, they tend to be high-end models. So the obvious question is how the iPhone camera achieves a relatively large aperture at a price that is economical for inclusion in a smartphone.


The answer to this question is that the lens is made from aspherical plastic elements. Asphericals made of glass are very expensive to manufacture; however, the iPhone lens is so small that its elements can be molded from plastic. This is cheap, but it only works for small elements, as plastic would expand and contract too much with heat when scaled up.


The Nokia 808 PureView is the best example of this, being a five-element all-aspherical design, which would cost an absolute fortune to make from glass (if it were even possible with today's processes) and reportedly outresolves the Zeiss 50 f/2 (taking the image circle into account). See this link for more info, including a cross-section image of the lens showing the sort of curves that DSLR lens designers can only dream of!


http://ramrao.abajirao.com/photography/nokia-800pv-lens.html (Broken. Use Wayback Machine link)


darkroom - Enlarging - how to increase exposure time without closing the aperture too much?


When enlarging, let's say the sharpest f-stop on your 135mm lens is f8, but at that aperture you need to expose for such a short period of time, say <1sec, that dodging and burning become impossible. How can you use the sharpest aperture of your enlarging lens (i.e. not stop down to f22 or f32) with a bright image like that?


I am using an Omega D2 that I bought used for printing 4x5 enlargements.



Answer



When I had this problem a while back, I solved it by installing a weaker lamp in my enlarger: 75W instead of the 150W it came with. I am not sure if that is an option with the Omega D2, but it is surely less hassle than ND filters.
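
For anyone doing the arithmetic: if light output scaled linearly with lamp wattage (a rough assumption for incandescent bulbs), halving the wattage would cost about one stop, i.e. double the exposure time. A sketch:

```python
import math

# Stops of exposure change implied by an exposure-time change.
def stops_gained(t_old_s, t_new_s):
    return math.log2(t_new_s / t_old_s)

print(stops_gained(1, 2))   # 1.0 stop: e.g. a 1s exposure becomes 2s
print(stops_gained(1, 8))   # 3.0 stops: what a 3-stop ND filter would buy you
```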



Sunday 29 October 2017

focal length - So my iPhone 6 camera lens is as wide as my full frame 35mm DSLR lens?


I am trying to understand what the deal is with "full-frame" and what difference it makes, so I did a silly test:


I sat on my bed, aligned the iPhone 6 camera with the door of the room, and took a picture... then aligned my full-frame camera with its 35mm prime lens, took the same shot, and compared the area covered in both pics. Actually, the iPhone 6 covered a little more.


So now I am confused. Can someone please explain what is happening? Does that mean the room is small, and at that distance the 35mm is acting more like a zoom lens than the iPhone? And if I go outside to an open area and repeat the test, will I see a difference?



Answer



You're right that the angle of view of the iPhone camera is a little bit wider than a 35mm lens on a full-frame film camera. Up until this point, you're not really confused. But the part after that, about the small room and zoom and distance — definitely confused. :)


"Zoom" means the ability to change the field of view — it isn't magnification. See What is the difference between a telephoto lens and a zoom lens? or How do zoom, magnification, and focal length relate? for more. Your 35mm prime lens (by definition, since it is a prime lens) does not zoom, so the angle of view does not change. Moving outdoors won't affect that. The iPhone also has a prime lens, but a smaller one, with a much shorter focal length — but, because the imaging area (the sensor) is also much smaller, the field of view captured is also much smaller — to the point where we call the lenses "equivalent" for the different sensor sizes.


Hopefully this diagram will make things more clear:



focal length comparison


Based on the specs I found online the focal length should actually be a little different, but that's just details. The key thing you can see here is that even though the lens and sensor pairs are different sizes, they form (roughly) equivalent angles. That means it doesn't matter what you're pointing at — they'll roughly get the same field of view (with, as you observed, the iPhone showing just a little bit more).
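
If you prefer numbers to a diagram, the horizontal angle of view is 2 * atan(sensor width / (2 * focal length)). The sensor and focal-length figures below are approximate published specs, so treat the output as ballpark:

```python
import math

# Horizontal angle of view from sensor width and focal length.
def angle_of_view_deg(sensor_width_mm, focal_mm):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

print(angle_of_view_deg(36.0, 35.0))   # ~54.4 deg: 35mm lens on full frame
print(angle_of_view_deg(4.8, 4.15))    # ~60.1 deg: iPhone 6 camera, slightly wider
```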


Imagine moving the blue line up so that it's at the 4.15mm focal length, same as the red iPhone line. Redraw the blue dotted lines from the edges of that through the point, and you can see that if you somehow used that lens on your bigger sensor, it would result in a very wide angle. (There's an added complication in that the iPhone lens isn't designed to cover that whole full-frame area with its image circle, because it doesn't need to, and in fact it would be practically impossible to make a 4.15mm lens which did, but in the world of theory, that's an entirely separate issue. It's only the part of the image cone which is recorded which matters.)


Or, imagine moving the red line down, using a true 35mm lens with the iPhone sensor. This would result in a very narrow cone.


Saturday 28 October 2017

photoshop - Batch resize images to a particular ratio without cropping


I'm looking for a way to batch resize photos to a particular ratio without cropping. For example, if I have pictures with the 2x3 ratio which is default for my camera, and I want to print them at the 5x7 ratio, which is a bit narrower, I will lose part of the picture if I crop it to 5x7. Instead I want to fill part of the picture with either white or black and center the image (padding on one side only is fine too).


If there's a Photoshop Action that can do it, that would be ideal. The sizes mentioned above are just examples; it would be good if I could set the destination photo size. The pictures I have are in various ratios: 2x3, 5x7, 16x9, and some squares. (It's fine if it's one size to one size; I can group pictures with the same ratio and run the action a few times.)


I know there are answers for normal resizing, but I can't find one which will fill and resize the pictures.
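
For reference, the underlying fill-and-center operation is straightforward to script outside Photoshop. A minimal sketch using Python's Pillow imaging library (the filenames are hypothetical):

```python
from PIL import Image

# Pad an image to a target aspect ratio with a solid color, centering the
# original and never cropping. A sketch, not a polished batch tool.
def pad_to_ratio(path_in, path_out, ratio_w, ratio_h, fill="black"):
    img = Image.open(path_in)
    w, h = img.size
    target = ratio_w / ratio_h
    if w / h > target:                       # too wide: add height
        new_w, new_h = w, round(w / target)
    else:                                    # too tall or narrow: add width
        new_w, new_h = round(h * target), h
    canvas = Image.new(img.mode, (new_w, new_h), fill)
    canvas.paste(img, ((new_w - w) // 2, (new_h - h) // 2))
    canvas.save(path_out)

pad_to_ratio("photo_2x3.jpg", "photo_5x7.jpg", 5, 7)   # hypothetical filenames
```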




Thursday 26 October 2017

lens - Since it's not "focal length", what is the term for the distance at which things are in focus?



I was under the impression that the "focal length" of a lens is the distance at which stuff appears in focus. (E.g., perhaps I set the camera so that objects 3 meters away appear sharp, and anything nearer or further than that is blurry.) But everything I've read seems to suggest that focal length is actually a slightly odd way of describing the field of view of the lens, and actually has nothing to do with focus at all. (?)


So what's the correct term for “stuff at this distance will be in focus” then? (I.e., the thing you change with the focus ring.) If I want stuff 3 meters away to appear sharp, what parameter have I set to 3 meters?



Answer



Focal length is the distance between the lens and the sensor when a subject at infinity is in focus; it is not the distance to the subject.


The term for the distance to the subject in focus is the focus distance, and it is measured from the image plane (the sensor/film plane). The distance from the lens to the subject is called the working distance, which can be significantly shorter; the difference matters most in macro photography. The zone which appears in focus on either side (front and back) of the subject is the depth of field. This varies with the aperture: depth of field increases as the aperture gets smaller (as the f-number gets larger). All else being equal, depth of field is greater at f/4 than at f/2.


So if you focus on an object 3 meters away with a focal length of 18mm and aperture of f/11, everything from 1m to infinity will be in focus. However, if you focus on the same subject with the same aperture with a focal length of 135mm, the near focus limit is 2.9m and the far focus limit is 3.1m - the depth of field is only 20cm deep, in other words.
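
Those limits can be reproduced with the standard hyperfocal-distance formulas; the exact numbers shift a little depending on the circle of confusion you assume (0.03mm is a common full-frame value). A sketch:

```python
# Near/far limits of acceptable focus using the standard thin-lens DoF
# formulas. coc_mm is the circle of confusion; 0.03mm is typical full frame.
def dof_limits_m(focal_mm, f_number, subject_dist_m, coc_mm=0.03):
    f, s = focal_mm, subject_dist_m * 1000
    hyperfocal = f * f / (f_number * coc_mm) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    far = s * (hyperfocal - f) / (hyperfocal - s) if s < hyperfocal else float("inf")
    return near / 1000, far / 1000

print(dof_limits_m(18, 11, 3))    # ~(0.75, inf): nearly everything is sharp
print(dof_limits_m(135, 11, 3))   # ~(2.85, 3.16): only ~30 cm is in focus
```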


depth of field - How can I maximize the "blurry background, sharp subject" (bokeh) effect?


I know that this effect occurs when there's a shallow depth of field.


My question is, what are the various ways I can increase this effect in my photos when I'm taking them?


Note that I'm not asking how to use an editing program to achieve this effect in post-production, after the image is captured.




Where exactly would the focal length of a lens in the ray diagram fall?


If a lens is specified as (EF 50mm f/1.8), where exactly is the 50mm distance if I were to show it in a ray diagram? Considering that a lens is made up of multiple elements, I could not find a satisfactory answer anywhere.



Answer



You cannot find an answer because there isn't a single one. It depends on the exact formula and shape of the lens elements and their positions at the set focus distance. Note that most lenses are specified for focus at infinity, so the 50mm f/1.8 may not be exactly 50mm at other distances.


From your question, I guess you already know that this would be easy if your lens were a single-element lens. With a complex 50mm lens, there may not be an exact 50mm distance you can point to. You could check where it forms the image manually by shining a light through the lens elements and finding where it is most focused.


Wednesday 25 October 2017

zoom - What features really matter in a point and shoot camera?


I am planning to buy my first new digital camera. I have now been confused for more than 20 days about how to choose one.


I want to use the camera for domestic purposes.


The Canon IXUS 220 HS looks good, but it is only 12 MP and has only 5x zoom. But there is also the Canon A3300, which is 16 MP.


So I don't know what the basis of selecting a camera is. What is image stabilization? When should I buy a CMOS sensor camera? How much optical zoom is sufficient? How much does noise matter in the camera? What about HD video recording: if a camera lacks it, how much will that affect viewing on my 32 inch TV?


The cameras I'm currently looking at are Nikon s3100, Canon IXUS 220 HS, Canon IXUS 110 HS, Canon A3300, and Sony W370/B. This will be my first camera and I can't take any risk because I cannot afford a new one in the next few years.



Answer



I would put the number of megapixels at the bottom of the priority list. Megapixels matter, but once you get past about 10 you'll have enough for anything you're likely to do. In particular, there won't be a noticeable difference between 12 and 16: theoretically you'll be able to print a 16MP image 15% larger, but that's only if the lens resolves more detail and that detail is not lost to noise.
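
That 15% comes straight from the square root of the pixel-count ratio, since print dimensions scale linearly while megapixels scale with area:

```python
import math

# Linear print-size gain from a megapixel increase: sqrt of the pixel ratio.
print(math.sqrt(16 / 12))   # ~1.15, i.e. a 16MP image prints ~15% larger than 12MP
```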


Image stabilization prevents blur due to camera shake when you take a photo. It becomes more important the longer the zoom. With a 5x zoom it's probably not important, but once you get 10x or more you should definitely look for a stabilized model. How long a zoom you need is really up to you.



All small-sensor point-and-shoot cameras will produce noisy images in low light; it's just something you need to be aware of so you can plan your shooting accordingly.


HD video is a useful feature, I've seen some really good quality clips from this sort of camera, so if you plan to take videos at all I'd look at one that records HD.


Tuesday 24 October 2017

point and shoot - What non-toy camera would be good for a child?


Looking for a camera for my six-year-old. We have one of the V-Tech child cameras, which he loves, but the pictures it takes are amazingly awful. Basically, I am looking for a rugged point-and-shoot.




equipment recommendation - Is there a compact camera that offers secure deletion of pictures?


Does any manufacturer produce a compact camera which has the ability to securely delete pictures from the memory card? Or, failing that, have a format feature which really blanks the card rather than just deleting the FAT?


I work for a healthcare organization and we're trying to find the easiest way to wipe photos of patients after they have been transferred to our network.


I'm aware that we can load the memory into a PC and wipe it there, but I'd like to find something built in, for convenience.




equipment recommendation - Why is 50mm no longer the default kit lens for prosumer SLRs?



Every time someone asks me for a kit recommendation I point them towards some sort of prime lens with a standard focal length. Similarly, almost anyone I talk to about the ideal budget kit swears that fixed lenses get much better quality photographs than similarly priced zoom lenses.


What factors contributed to the default 'kit' lens changing from a 50mm prime to a zoom lens in consumer and prosumer grade cameras? Why wasn't the 50mm replaced by a 35mm lens, which would be a standard focal length on a crop sensor? I am interested to know, from a manufacturing and distribution point of view, why broad-range variable zoom lenses replaced the default SLR lens.



Answer



Presumably because the people who buy their first DSLRs mostly come from the point-and-shoot world and care about the versatility afforded by the zoom more than about image quality. Also, a 50mm is way too long to be a good "default" lens with an APS-C camera, and good-quality ~30mm lenses are, due to certain quirks of optics, much more complex (and thus more expensive) than 50mm ones.


Also, the sort of image quality now common even in cheap kit lenses was simply not possible to achieve in an affordable zoom a few decades back. A prime was back then pretty much the only option.


EDIT


To elaborate on what I meant by "certain quirks of optics": lenses with a focal length shorter than the flange focal distance (or register distance) have to employ a special retrofocus design to make the optics work out. This basically entails adding a diverging group at the front of the lens, making it a reverse telephoto. The register distance of the Canon EF and EF-S mounts is 44 mm; Nikon's F mount has 46.5 mm.


This question has a good answer by Matt Grum with some illustrative pictures.


An APS-C camera could probably be designed to have a correspondingly shorter register distance, but it would be incompatible with lenses designed for full frame.


Monday 23 October 2017

post processing - What should I consider for cropping aspect ratios?


The question What things should one take into account when looking at cropping a photo? asks about factors that affect composition during cropping.


I'm interested in a very specific sub-set of this, which is barely mentioned in the answers to this other question.



What factors should I consider when choosing an aspect ratio? Obviously for photos I intend to print, I may wish to choose a standard size, or when fitting a photo into a specific spot in a magazine or web site, I need to choose a specific size/aspect ratio.


But given full artistic freedom, what aspect ratios are considered most pleasing, in what contexts, and why?




Sunday 22 October 2017

exposure - How to avoid vignetting when using a variable ND filter


My camera is a Nikon D700.


Lenses: 28-300mm and 24-70mm.


I have a 77mm variable ND filter.



How do I avoid vignetting around the edges of the filter?


Thank you.




prints - What effects do matte, semi-glossy and glossy paper have on the photo?


What impact does printing on matte, semi-glossy or glossy paper have on my photos?



This answer says that dynamic range is greater with glossy paper. What other effects does the paper type have on the output?



Answer



Glossy papers have darker blacks, so you can display a wider tonal range and get a better perception of contrast and sharpness. But those blacks (the whole image, really) are glossy and reflect the world around them. So you may have nice deep blacks, but if they reflect your white shirt or a window, you won't enjoy them as much. That can become an even bigger problem for images mounted under glass. So there are tradeoffs, and you need to pick the right compromise.


Many high-quality prints are printed on pigment printers these days. The prints are great, but when using glossy paper you can see so-called gloss differential, where certain areas of the print are glossier than others.


Glossy prints are also less forgiving when mounting on a board - small bumps and imperfections are more likely to be visible.


Film ISO and Push/Pull Processing


I am looking to start shooting film and have been having a look at different 35mm films. What confuses me is film that can be shot at different ISOs/exposure bias. For example, Kodak says that Portra 800 "delivers best-in-class underexposure latitude, with the ability to push to 1600."


Does this mean that:


a) I have to set the camera to ISO 1600 and request the film be push-processed.


or


b) I can over- or under-expose each frame at will, and the film plus normal processing will cope with this?


Supplementary to this: if the camera I am using does not have a manual setting for ISO but relies on the DX code, do I have to get some sort of sticker to cover the DX strip? If so, does anyone know of a (mail order) place I can get these in the UK?



Thanks



Answer



You can simply underexpose by a stop and rely on the printer's ability to recover a usable image from the film, of course, but that's going to come at a much higher cost in terms of image quality than requesting push-processing.


One of the neat things about print (negative) film is that it (usually) has a pretty enormous latitude—it can tolerate quite a bit of over- or under-exposure (say, two stops over and one and a third under) before you really can't recover a usable print from standard film processing. With a one-stop underexposure, your picture will be thin (lacking contrast) and grainy. The point of push processing is to overdevelop somewhat to restore the contrast you lost by underexposing.


If you know ahead of time that you will be shooting an ISO 800 film at 1600 (or if the most important images on the roll were shot at 1600), then requesting a one-stop push will give much higher-quality images. If there are just a handful of images on the roll that are shot at a higher speed and they are no more important than images shot at the film's nominal speed (or your preferred speed for that batch), then the one-stop underexposure and normal processing will just result in somewhat less-than-optimal prints (or scans) for those images.


So if you're using a minilab, you can simply underexpose and let the printer's autocorrection do what it do. Push processing may be an extra cost you're not willing to shell out for, and one stop of underexposure isn't earth-shattering with a print film. If the ultimate quality matters, then push processing will result in images with better colour, contrast and grain, but at a cost. It can't be gang-processed with eleventy-seven other rolls unless the processor has scheduled push and pull run times for the machine, so a local lab will have to make time for your job. And even if you are going to be running a full processor load (submitting multiple rolls at once), you're not going to get normal pricing (just in case you might think it works that way all of the time and come in with one roll somewhere down the line). A pro lab probably won't charge quite as much extra for the push or pull, since that's their stock in trade, but then you have to deal with the pro lab base price and turn-around time.


As an aside, non-overridable DX indexing is an unspeakable evil. I rarely shot any film other than Velvia at its nominal speed—it was usually a third under for chromes, a third to two-thirds over for colour negs, and my N was usually a full stop over for the B&W/developer combos I was using. It varied by batch/lot number, but the ISO-rated speed almost never gave me the contrast and saturation I wanted. If the camera uses electrical contacts for the DX code and has a manual ASA/ISO setting for uncoded films, anything from Scotch tape on up will work fine; you only need special stickers if the camera has no way to set film speed manually. I see you've found a source, but needless to say, they're not as easy to find as they once were (but then, neither is 35mm film).


Saturday 21 October 2017

What is the equivalent resolution of 35mm film?




Possible Duplicate:
What megapixel value is equivalent to which ISO film?
Is it true that '80s 35mm photofilm had quality corresponding to 24 megapixels?




I know this is a little bit like comparing apples with oranges but I am trying to find the point where digital photography caught up with conventional film photography.


Today's high-end full-frame digital cameras have 35mm sensors, the same size as a frame of conventional 35mm film. These two technologies (digital and analog) are fundamentally different, but at the end of the day the images produced are consumed alike.


Now, consider a 35mm analog image recorded on top-quality film with top-quality equipment. What would be the resolution (megapixels) and color depth of an equivalent digital image?




How loose can the lens mount be?


I've noticed that there is a little give in my camera system at the EF mount. It is really hard to see, but I can feel it sometimes and first noticed it when I had the lens attached to a monopod and gripped the camera body. Even though I was supporting all the weight with the monopod I could still feel the body shifting around slightly. This seems to happen with my more expensive equipment and my cheaper equipment, but if there's something wrong I want to make sure I get my expensive equipment in for repairs before the warranty expires.


Thank you.



Answer



As long as the faces are flush together and the movement is rotational only, I wouldn't worry unduly. The mechanism will have a degree of backlash to account for variations in tolerance across all the different lens manufacturers since the introduction of the EOS EF mount.


However, if there is a gap between the lens mount and the body, you'd notice this with a reasonably sized lens on (70-200mm), as the weight of the lens pulls down and a gap opens at the top of the mount. This would suggest the mechanism that pulls the two together is not working, and that should be looked at.



If so, then it is the body that needs attention, since all your lenses show the same effect.


But personally, my lenses (Canon L, 'normal', and Sigma) all have a little rotational play, and I wouldn't be worried.


Projects to do if you have an additional camera



So I bought my 7D a while ago, meaning my older 30D gets very little use, and it could definitely have a better life. I would like to find some projects that would put this extra camera back to work. I'm open to other suggestions, but I've primarily thought of projects in these two directions:


A project that requires two cameras
I can't really think of anything here. Maybe something along the lines of capturing a moment with both cameras, at different angles/focus/etc., at the same time.


A project that modifies the additional camera
I would be open to suggestions such as Magic Lantern, CHDK and such things, but I assume the 30D may be a bit dated for such mods, as I haven't found any that support it. I'm also open to suggestions such as infra-red modifications.


And of course "just keep it as a backup" is a perfectly valid answer.


I am looking for answers more precisely for the 30D but to keep the information as useful as possible, feel free to give more general answers.



Answer



Things to do with two or more cameras:





  • 3D Photography: You can set up both cameras with remote triggers, mounted on a tripod accessory that holds both. Then you can take photos simultaneously from two points (you would have to scale down the one from the 7D) and merge them into a 3D image.




  • Time-lapses are great to do with a second camera because they keep your camera busy for a long time. Actually, if you do not mind having both cameras busy, feel free to do a 3D time-lapse!




  • A DSLR can be modified for infrared photography. This is a costly modification and is not easily reversible, so most people do it with a second camera.





  • Create How-To Photography tutorials :)




  • Stop-Motion videos can be done with one camera but with two you can make a stop-motion video and a making-of-stop-motion stop-motion video. OK, I'm running out of ideas!




Friday 20 October 2017

long exposure - Are there any resources or websites for finding areas which have low levels of light pollution at night?


Are there any websites which map light pollution levels so I know where to go for decent long exposure night shots? Or general resources for that matter?




Answer



I would start at the International Dark Sky Association. They have detailed maps for North America and also a link to the Blue Marble Navigator for the rest of the world. Although a little older, I find these maps of North America more readable.


Do photographers see ambiguity in the color of the blue/black (gold/white) dress?


Okay, so, this has taken the Internet by storm today... You've probably seen it and lots of commentary.


the other famous blue dress


Apparently, many people see this as gold and white; to me, it's unambiguously blue. There are a number of articles (for example on Wired) explaining that this is an optical illusion and going into details about what most photographers already know well — the human vision system's mechanism for coping with changing light sources, and white balance and all that.



Try as I might to see it the other way, it just appears to be a blue dress, poorly photographed and with bad attention to the lighting. (And my perception happens to be correct; see this update on the original.) But many of my friends insist that it is either "clearly" white/gold, or at least ambiguous. And many of them are not... crazy people... and many are even artists, but none a serious/enthusiast/expert photographer.


So...



  • Is it that my years of experience with digital photography and lighting have trained my brain to the point where I'm seeing it differently from the uninitiated? (See How to recognize different lighting color temperatures? — recognition of the color of light is certainly something that can be learned?)

  • Or is it that many people have terribly calibrated monitors, compounding the problem? I know that most consumer monitors come with a very high default color temperature, blue-shifting everything, so I kind of suspect that it is at least a major factor. (Except, I showed my children on my system, and they see it as "white and kind of bronze".)

  • Or is it really something that varies from person to person, with a background in photography not having anything to do with it?


I know this is an Internet meme thing, but I'm specifically interested in the photographer's perspective. I don't need a recap of the Wired article — I know all that. I want to know if it's still true for people with experience looking at photographs and lighting. The dress is blue, and I'm wondering if being used to thinking about the color of light (to the point where it's automatic) made it natural to see it correctly (and basically whether photographers are more likely than the general public to be among those who see it correctly).




Or, to come at this from another direction:





  1. As a photographer, can you explain a plausible lighting situation where this could be a white dress? The only one that would make sense to me is if the dress were strongly lit by daylight or a daylight-equivalent source, and the background in tungsten and not lit by that same daylight. How could I take a white and gold dress and shoot it this way using standard interior lights (that is, no colored gels) and with global white balance as the only color-tweaking tool?




  2. Could you recreate a different scene, using either blue and black or gold and white, which would cause the same visual consternation? What elements would be necessary to do so?




If you are able to answer either of those questions, does the fact that you can answer meaningfully play into how you perceive the original?




Thursday 19 October 2017

Principles of correcting perspective in software


I am interested in photographing buildings such that the vertical lines are preserved, as shown in the image on the right.


tilt shift image


I understand that this can be achieved with a tilt-shift lens. However, my Fujifilm X100 has a fixed lens, so I won't be able to do this with my current camera.



So I am interested in achieving a similar result using software. Note that I am not after that "miniature" look with shallow depth of field at a long distance, which is what most software billed as "fake tilt-shift" seems to reproduce.


I simply want my vertical lines to be preserved.


Is this as simple as projecting a rectangular image onto a trapezoid? Just squashing the bottom edge of the rectangle? Or is it more involved than this? Is the projected shape more complex than a trapezoid, perhaps?


distortion


I don't have access to Photoshop, but I do use Gimp and Inkscape, and if needed I can write my own software. To clarify, I am after the actual geometric transformation.



Answer



Although many lenses that have the ability to control one also have the ability to control the other, tilt and shift are two different movements.


The "miniature" look is achieved using tilt movements. The optical axis of the lens is tilted away from a perpendicular angle with respect to the imaging plane.


tilt movement


Correcting perspective in terms of converging lines is accomplished optically using the shift movement. The 90° angle of the lens' optical axis with respect to the imaging plane is maintained. The center of the lens' optical axis is moved away from the center of the film or sensor while remaining perpendicular to the plane occupied by the film/sensor.



shift movement


Assuming the building is perpendicular to the ground, this has the same effect as pointing the lens' optical axis horizontally and then cropping away all but the top part of the image. Shift lenses have larger image circles than the formats they are designed to be used with, so the field of view can be shifted fairly far to one side or another of the image circle cast by the lens and we don't have to give up resolution by cropping a significant portion of the image.




It is possible to tilt the camera up at the building and correct for the converging lines using software. Of course, any time we start remapping pixels we give up a small but measurable amount of resolution.



Is this as simple as projecting a rectangular image onto a trapezoid? Just squashing the bottom edge of the rectangle? Or is it more involved than this? Is the projected shape more complex that a trapezoid perhaps?



It depends upon the characteristics of the lens you are using and whether or not the camera provides lens distortion correction. If your lens has any significant geometric distortion, you'll need to correct for that before doing your trapezoidal projection to correct for the perspective "distortion" caused by your shooting position. Be careful, though. If your camera already does geometric distortion correction automatically you don't need to apply it again.
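
To make the transformation concrete: squashing the top or bottom edge maps the frame onto a trapezoid, which is a special case of the general perspective transform (a homography). This is the same warp GIMP's perspective tool applies, and it is a few lines with OpenCV. A sketch, with made-up corner coordinates and filenames:

```python
import cv2
import numpy as np

# Keystone correction as a perspective (homography) warp. In practice the four
# source points come from marking the corners of something that should be
# rectangular, e.g. the building facade; these coordinates are made up.
img = cv2.imread("building.jpg")
h, w = img.shape[:2]

src = np.float32([[300, 100], [900, 100], [1100, 800], [100, 800]])   # converging verticals
dst = np.float32([[100, 100], [1100, 100], [1100, 800], [100, 800]])  # straightened

M = cv2.getPerspectiveTransform(src, dst)
corrected = cv2.warpPerspective(img, M, (w, h))
cv2.imwrite("building_corrected.jpg", corrected)
```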


software - How do popular free RAW editors/converters compare to each other on Windows?



I'm looking for a free RAW editor/converter for Windows. Can you tell me some of their strong/weak points compared with Capture NX and/or Adobe Photoshop Elements?



EXIF editing would be a nice bonus.






Answer



The camera manufacturer can sometimes offer an excellent RAW-to-JPEG converter. One reason to use the manufacturer's software is that no one else knows better how to interpret the RAW information. The light- and lens-specific data especially can be quite tricky for anyone other than the camera's manufacturer to fully interpret and post-process.





In the Nikon world, there's ViewNX, which ships for free with the DSLRs and is also downloadable for free here. It's excellent for first-pass editing of photos, including Exposure, White Balance, Sharpness, Contrast, Brightness, Highlight and Shadow Protection (very impressive), Color Booster, D-Lighting HS, and Axial Color Aberration. You can also do all your Metadata edits here.


Of course, it's not as full-featured as their expensive, and terribly slow, paid version: Capture NX.


UPDATE: Nikon's Capture NX-D is now free




Canon's own Digital Photo Professional (DPP) is included with every Canon DSLR. It can be downloaded for free from Canon's website, but you must have a valid camera serial number to download it. Apart from the obvious advantage of no additional expense, the primary advantage to using DPP is that the same proprietary algorithms used to encode .crw and .cr2 files are used to decode them. It has a fairly full set of non-destructive adjustments that can be made on a global level, including a basic HDR tool. RAW files may be exported as 16-bit TIFFs to other image editors for further adjustment when desired. It features the Digital Lens Optimizer (DLO), which corrects for several lens aberrations (spherical aberration, curvature of field, astigmatism, comatic aberration, sagittal halo, chromatic aberration of magnification, axial chromatic aberration).




For Sony cameras, it would be the Image Data Converter software. It used to be two separate programs, Image Data Lightbox and Image Data Converter SR, but they were combined into one package in 2012. There are no requirements for download, as there are for Canon and Olympus. It processes RAW files but offers next to nothing for images already in JPEG format. The RAW features are also limited; for example, you can't crop and resize in the same go. You can convert one RAW image, save the recipe, and then apply it in a batch process to other images without needing to open each RAW file separately.


Link to Sony eSupport software pages




Olympus offers Image Viewer 3 for Olympus camera owners. The download will not begin without a camera serial number filled into a field on the download page. Image Viewer 3 is a nice upgrade from the old Olympus Master 2 and the not-so-old Image Viewer 2. The selection of possible operations is good for RAW and also for images already in JPEG format. When saving to JPEG you can also include IPTC info in the file.



Link to Olympus software download



Wednesday 18 October 2017

neutral density - Which filter would I use for daytime lightning long exposure?



I'm new to the photography scene, and although I managed to take a reasonable night-time lightning strike on a long exposure, during the daytime I can't seem to get it right (too bright). A friend told me to use an ND filter. Is this something to consider, and what type? My research led me to so many different kinds.


I am using a Canon 600D with the 18-55mm kit lens.




What is the reference plane used when the minimum focus distance is measured?


Once you know the MFD (minimum focus distance) for a particular lens, is that the distance from the subject to the front of the lens or to the camera sensor?




Why don't the zoom buttons work on my DSLR?


I am a newbie to the new camera I just bought. It is a Nikon D5200 with the original 18-55 lens that came on the camera. My question is: when I turn the lens to 55 and then push the plus and minus buttons at the bottom of my camera to zoom in, it appears to be zooming in, but then I take the photo and the picture is not zoomed in at all.





I have a new Canon T3i and was able to take and capture zoomed-in photos with the lens that came with the camera (EF-S 18-55mm) as well as a Canon zoom lens, the EF 75-300mm 1:4-5.6. Now when I zoom in with either lens and take a photo, when I review it, it shows the original image from before zooming. Is it a defect in the camera or inexperience of the operator?




(Two separate questions from separate users but with the same root cause.)




Tuesday 17 October 2017

What do I need to trigger a remote flash with a Nikon D3100?


I'd like to get into doing some remote lighting, but really don't get the best way to do it.


I've got a Nikon D3100, so no "commander mode" with the built-in flash. My current external flash is a SB-600.



Answer



You've got a few options:




  • You could use a few universal translators and do it wired.

  • You could use a cheap radio trigger (I personally use the Cowboy Strobist with the D3100 and SB-600).

  • You could use the highly reputed classic Pocket Wizard.

  • Or, if you want wireless TTL, you could look at MiniTT Pocket Wizards.

  • You can use the SU-4 Nikon remote to make any body a Nikon CLS commander (but this is line-of-sight then).

  • You could use optical slave triggers and trigger with the onboard flash (but this is line-of-sight then).


I've had good, reliable luck with the cheap Cowboy Strobist radio triggers and think they're a great way to get started.


software - How do I get started with RAW photography?


Having read about the benefits of shooting in RAW as opposed to JPEG (for example this question), I'd like to have a go with it myself. However, I'm not really sure how to go about it.


I can turn RAW mode on in my camera (that's easy!), but what do I do with the files when I get them on to my computer? I presume I need to process them to get the RAW data into something that I can work with, but do I need a specific tool for my camera (a Finepix S5100), or will something like the GIMP do what I want? I'm also a little confused about how I then take advantage of the abilities that RAW gives me when I'm processing it on the computer — do RAW processing tools have more processing options than standard tools?



Answer



To get started, set up a few shots from a tripod and shoot them in both JPEG and RAW. Most DSLRs can do that simultaneously, but I suspect your camera may not have that option because its writing pipeline is slow (it would otherwise lock up your camera for 20-45s, IIRC).


Then load the RAW into any conversion software and see if you can produce an image which YOU prefer to the in-camera JPEG. Play with the conversion controls: sharpness, saturation, contrast, curve, etc. Don't go with the default conversion unless you want to waste your time, because that will almost always produce the same JPEG as the camera (some advanced programs will let you define your own default conversion, though, usually called a preset).


REMEMBER: The RAW advantage is about what YOU can do with the image. Most mediums cannot even show all the nuances in a JPEG (nearly no LCD monitor can), so it is more about having control over the final image than about showing one with more color tones.



After a few rounds, you'll be able to judge if it is for you or not. There will be a cost in terms of space, speed and workflow. Particularly since you do not have a DSLR, every time you shoot a RAW image, it will be a while before you can shoot again. Then you have to realize that if you don't take the time to make the output better than what the camera does, you're not getting much out of RAW. If you do, realize that you could have been shooting more instead. Ask yourself what you prefer and what is worth it.


Sunday 15 October 2017

lens - Why are so many kit-lenses parfocal if it's an expensive feature?


According to this answer, parfocal lenses (lenses that can zoom in/out without losing focus) are relatively rare and more expensive.


However, the original poster's kit-lens is parfocal. Additionally, the kit-lens that came with my Sony a390 and the kit lens with my brother's Nikon D3200 (both entry-level DSLR's, with presumably cheapo kit-lenses) are parfocal as well.


Does anyone know why, if this is a feature primarily of expensive lenses, so many cheapo kit-lenses seem to support it? What features of the lens cause it to be parfocal or not?




equipment recommendation - Is the screw for attaching tripod heads to tripods standardized?


I was gifted a tripod creatively named: PT-TPM665-C


I am quite happy with it, but the head can be frustrating to work with. It feels geared more towards videography than photography and needs some fighting to get into the desired angle. I am thinking a ball head will be a great upgrade for it.


The problem is, I am a newbie and have no knowledge of different tripod mounting systems. From what I can tell it's a 1/4"-ish screw.


Is it a standard size? Does this mount size have a name?


enter image description here





Recommendations for Photo Editing/Organization Software?


I have a MacBook Pro, while my wife has a PC. We have a NAS device on our network where we store all of our pictures. This gives us the ability to both view our photos from one simple location, without having to copy them back and forth to each other's machines. It does, however, make it difficult to find good photo editing software.


On the PC, my wife uses Picasa and it works great with this setup. The best part about it is the ability to "watch" folders for changes. If I upload new photos, she automatically gets them in her library, and vice versa. I've downloaded Picasa for the Mac, but found it very buggy. The organization isn't always 100% accurate, and sometimes the thumbnails don't match up properly with the photos themselves. Plus, performance has always been severely lacking when compared to the PC version.


Normally, I use Lightroom or Aperture for serious photo editing. However, it's nice to have something like Picasa for general photo viewing, or to make simple touchups or share photos to Flickr or Facebook. I've tried iPhoto on my Mac, but it lacks the "watched folders" feature, which is a real dealbreaker for me. Does anyone have any recommendations for good photo organization and editing software? Like Picasa but better executed on the Mac? What about Photoshop Elements? I've heard good things about it, but haven't found out if it has the watched folder feature. I've seen Adobe Bridge, but that doesn't give me the ability to make simple edits in the same program.


I'd really appreciate any recommendations you can give!




lens - Why does my Canon 7D get blurry pictures with a 70-200mm f/2.8 IS (series 1)?


I'm wondering if it's normal for a Canon 7D with its crop sensor to get unsharp pictures with a 70-200mm f/2.8? I just bought a used 70-200 and it seems a little bit blurry. Maybe it needs calibration? IS seems to work fine, and AF seems to be working fine in the viewfinder too. But when I open the images on the computer, they're not as great as I expected.





effect - How do I edit a photo to have an old vintage look?



How do I edit a photo to have an old vintage look? I'm looking to achieve this sort of vintage, film-like look. I'm looking for a way to do this without purchasing an "auto" effect program, by using just the basic controls. It seems like desaturation is a start, but is there more to it?


My basic editing program of choice is Aperture (but I assume Lightroom and anything else out there have the same basic set of controls).


enter image description here


enter image description here
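For what it's worth, here is a rough sketch of one common recipe - desaturate, flatten the contrast, lift the blacks, and add a warm cast - using the Pillow Python library. The exact numbers are my own guesses, not an established formula; treat them as sliders to play with:

from PIL import Image, ImageEnhance

img = Image.open("photo.jpg").convert("RGB")      # hypothetical input file
img = ImageEnhance.Color(img).enhance(0.55)       # partial desaturation
img = ImageEnhance.Contrast(img).enhance(0.85)    # flatten the contrast

# Lift the black point and warm the image, channel by channel.
r, g, b = img.split()
r = r.point(lambda v: min(255, int(v * 0.92 + 34)))   # strongest lift: warm cast
g = g.point(lambda v: min(255, int(v * 0.88 + 26)))
b = b.point(lambda v: min(255, int(v * 0.80 + 18)))   # weakest: faded blues
Image.merge("RGB", (r, g, b)).save("vintage.jpg")

The same moves map directly onto Aperture/Lightroom sliders: saturation down, contrast down, blacks lifted, white balance warmed.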




Saturday 14 October 2017

software - Why doesn't editing photos from Lightroom in Photoshop include the Lightroom adjustments?


Mac, Lightroom 4.0, Photoshop CS5.1



So I import a photo from my camera using Lightroom (CR2 from EOS 5D Mk II)


Yes it's a terrible photo, but it demonstrates the problem


Do some Lightroom adjustments to edit the image (Done here to display the obvious difference)


Dropped exposure - now it looks like this


So I right click the photo, edit in->photoshop


I don't know why I felt you needed a screenshot of this


and it opens the file in Photoshop, but prior to the adjustments I made in Lightroom.


and this is what it looks like in Photoshop


I've only recently upgraded to LR4 but this is happening to all my photos and it's very annoying. I'm sure LR3 gave me the option to "Edit with Lightroom adjustments" or something similar, but now it seems it doesn't. I looked through the settings and couldn't find anything I could change. Has anyone else experienced this little bit of "functionality" in LR4?



Answer




I've experienced the same issue with Lightroom 4.0 and CS5; I just downloaded the Release Candidate for ACR 6.7, which you can get here, and that seems to have resolved my issues. Hopefully that helps!


post processing - What is the most effective way to remove a fence in the foreground?


I enjoy visiting a good aviary to capture birds, but often they are caged. My strategy is then to move as close as possible to the fence, resulting in a blurred fence foreground and the subject in focus:


enter image description here


Still, often the blur lines are clearly visible, which I then try to reduce or remove in post-processing. And that's what my question is about... are there any best practices and effective techniques to get rid of this foreground fence?



Answer



This is a "preprocessing" answer based on my answer to a recent question by @Brandon K about lens effects.
This relates to the "anti-focusing" of objects closer than 1 focal length to lens:
Stopping the bars being visible saves needing to try a difficult post processing job.


In the diagram below (submitted by original questioner, modified by me) it can be seen that objects which are closer than one focal length to the lens are NEVER focused - they are "defocused" with their image being dispersed and spread out across the picture - in some cases to create the blur lines that you see.





  • The biggest "trick" is to get the bars or mesh as close to the lens face as possible. Against the glass is ideal but is obviously dangerous to the health of the lens.




  • Set lens to maximum aperture. A lens with a large maximum aperture helps (see the sketch after this list). I get good results at f1.8, as with the green and orange bird below, BUT the black/white/orange toucan was shot at f6.3.




  • Longer focal length helps - both to reduce the depth of field of the subject and to give a greater distance between the focal point and the lens.





  • Having the subject close but far enough away that the depth of field does not extend anywhere near the bars helps BUT most birds are not liable to cooperate.
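To put some rough numbers on the aperture and focal-length bullets above, here is a back-of-envelope Python sketch (my own arithmetic, not from the original answer). The rule of thumb it encodes - an obstruction at the lens face largely melts away once it is much narrower than the entrance pupil - is a simplification, and the one-third cutoff is an arbitrary illustrative threshold:

def entrance_pupil_mm(focal_length_mm, f_number):
    # The entrance pupil (effective aperture) diameter is focal length / f-number.
    return focal_length_mm / f_number

# (focal length mm, f-number, bar thickness mm) - bar thickness is a guess
for focal, f_num, bar in [(50, 1.8, 3), (50, 16, 3), (250, 6.3, 3)]:
    pupil = entrance_pupil_mm(focal, f_num)
    verdict = "bar mostly melts away" if bar < pupil / 3 else "bar stays visible"
    print(f"{focal}mm f/{f_num}: pupil ~{pupil:.1f} mm vs {bar} mm bar -> {verdict}")

The 50mm at f1.8 has a pupil of roughly 28 mm, which is why cage bars pressed against it all but vanish, while the same lens at f16 (pupil ~3 mm) cannot hide them. Note that the mesh shot described below worked even at 18mm f6.3, so closeness to the front element matters at least as much as pupil size - the sketch captures only one of the effects at work.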




The original questioner @Brandon K provided the diagram below and asked or suggested that




  • ... it seems that close objects shouldn't even appear on film based on the diagrams.





  • "Anything close enough to form a virtual image is not focused onto the focusing screen"






What he and you describe does happen, but the defocusing of objects closer than a focal length from the lens is progressive as the distance inside the focal point increases - just as the diagram suggests. Objects do not simply "vanish" as they come inside the critical distance; rather, they become progressively more indistinct the closer they get to the lens face.


The pictures below show reasonably extreme examples of this "feature" being used to good effect to nearly completely remove close foreground items from the photo. Foreground objects (in this case a heavy mesh and cage bars) which are closer to the lens than its focal length are "anti-focused" to the point of near invisibility.


Diagram below from Brandon modified by me - cage bars and lens focal length added:


enter image description here


See sample photos


This is one of my standard "tricks" for photographing objects in cages and similar environments where there is an incomplete obscuring layer that you can get right up against. An extremely useful "trick".



In this photo there are cage bars very close to the front element of the lens - as close as I could get them. I use this method to successfully "drop out" even quite solid bars; in this case they're normal-thickness cage bars. The distance to the front element is under 50mm and it's a 50mm f1.8 lens. There ARE some optical effects present, but they are not normally noticed by most viewers. A higher-res version is here - click the download icon (second from right at the top of the photo) for a much better look at what you CAN'T see.


CAGE BARS BETWEEN BIRD & VIEWER


enter image description here


This is an even better example, in that there is a small-pitch, very thick square mesh between the camera and the subject (I think no more than 20mm squares - I can check other photos). This was shot with an 18-250 lens at 18mm, f6.3.* See the photos below showing the mesh that was present in the second photo. Visually the mesh ruins the bird's presentation, yet the camera "sees" the bird far better than the eye can.
Same photo on facebook here


VERY THICK & UGLY SQUARE MESH BETWEEN BIRD & VIEWER


enter image description here


enter image description here




(*) I originally said that this was taken with a 50mm f1.8 lens but after checking the original I have changed the details, as above.



What are the applications for high ISO shooting?




High-end EOS cameras seem to have a very high ISO range. According to some charts, the prosumer 6D has an extrapolated ISO of 102400 and the 1D X can go up to ISO 400K. These numbers seem insanely high to me.


In the film era, if you wanted to maximize picture quality, you tended to stay at ISO 100 or below. From all I read, this is still the case: ISO 100 gives the best digital capture quality.


I understand that now you can take pictures in near darkness that couldn’t be obtained a decade ago.



What I read in Is high ISO useful for photography? seems to confirm that it is useful in some situations, but that it is better to stick to the base ISO.


For regular use, what are the applications for high ISO shooting?




metadata - Adding data to an EXIF file


What is the easiest way to add data to an EXIF file? For example, if I wanted to add a copyright notice to my images, can that be easily added to the EXIF data?
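One scriptable way to do this - a minimal sketch using the third-party piexif Python package; the file name and the notice text are placeholders of my own:

import piexif

exif_dict = piexif.load("photo.jpg")    # read the existing EXIF structure
exif_dict["0th"][piexif.ImageIFD.Copyright] = b"Copyright 2017 Jane Doe"
exif_dict["0th"][piexif.ImageIFD.Artist] = b"Jane Doe"
piexif.insert(piexif.dump(exif_dict), "photo.jpg")   # write it back in place

ExifTool, which appears later in this document, can do the same from the command line via its -Copyright and -Artist tags.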




dslr - Are refurbished digital SLR cameras okay to buy (and worth the small savings)?


I ran across a few refurbished cameras (from reputable sites) and my question is this:


Are refurbished cameras normally okay to buy? (Note: I am referring to DSLR cameras, body plus one 18-55mm OEM lens.) I ask because there are some items that 90% of the time I will not buy refurbished. But I have no clue when it comes to cameras: is this a safe way to start, or is it worth paying the extra $100-150 to buy brand new?


One other note: I am not a professional. I'm looking to get into the world of DSLRs at a basic entry level to gain knowledge and experience using a DSLR, and of course save some money.




Answer



I bought my Canon XSi/450D refurbished from B&H about 3 years ago, and was very happy with it. I did eventually notice a bright red pixel on all of my pictures (although typically only visible in night shots). I don't blame B&H or the refurbished state of the camera for this; just the fact that pixels sometimes die.


When I replaced it about 4 months ago, I had settled on a T3i/600D, and I would gladly have gone with a refurbished unit again, but there weren't any available at the time from B&H. (I think I saw some refurbs on Amazon, but the price was almost the same as new.)


I think the main thing to consider when buying refurbished is the warranty it comes with. Not so much because you'll expect to use the warranty, but because how long and extensive the warranty is says a lot about how much the vendor believes in their refurbished equipment. If they only offer a 7-day warranty, they probably don't test their refurbished equipment sufficiently to stand behind it.


Friday 13 October 2017

autofocus - How can I enable back button focus and disable focusing with shutter button on a Nikon D5500?


I assigned the AE-L/AF-L button to activate focusing on my Nikon D5500, but I can still focus using the shutter button.


I would like to know if it is possible to use the AE-L/AF-L button to set focus only and the shutter button to take photos only.



Answer




According to page 267 of the D5500 Reference Manual, using custom setting f2 to set the AE-L/AF-L button to AF-ON prevents the shutter release button from focusing.


p. 267


Are these spots on the sensor or the lens?




Equipment is Canon EOS 5D3 (5760x3840) with a 24-105 f/4 L lens at 105mm, ISO 200.


The following are crops from the same small (676x449) section of a couple of blue sky shots I did to test for sensor dirt. The first image is unmodified other than Raw->JPEG conversion in Lightroom with no setting changes, and shows some faint dark circles.



f/22 unedited


To enhance the circles I cranked up contrast and clarity to 100%, giving this:



f/22 contrast-enhanced


However, the same area of an image taken 1 second earlier at f/7.1 shows no evidence of these spots, even after cranking up the contrast/clarity to 100%.




f/7.1 contrast-enhanced




  1. The spots seem to be more prevalent towards the edges of the original (full-size) image, although there is one spot near the center. None of the spots are visible at f/7.1.

  2. All the spots appear to be the same size, but have different densities.


Given all this evidence can I deduce that these are lens defects (or dirt on a lens element)?


Since they're completely undetectable at "normal" apertures I don't consider this a problem. I just would like to know what I'm seeing.


BTW, the obvious test is to shoot the same image with a different lens and compare, but I won't be able to do that for several days and thought this might be an interesting question for photo.se.





How does Photoshop/Lightroom get the Colour temperature of a Raw image?


I captured a Raw image using my Canon 450D. When I imported this RAW (CR2) file into Lightroom and Photoshop CS5, it showed the temperature as 4900 and the white balance setting as "As shot". When I checked the EXIF data associated with this raw CR2 file, there was no mention of the colour temperature setting in it.


I used IrfanView to see this EXIF data. IrfanView + the Canon raw plugin can open this Canon raw CR2 file.





  1. So how does Photoshop/Lightroom compute the colour temperature from the Raw image data?




  2. I would also be interested in knowing what kind of algorithm/mathematical computation it does to get this temperature number.




  3. Could it be possible that the CR2 raw file has this colour temperature information embedded in it, but IrfanView's EXIF display somehow missed or mangled it?





Any pointers would be useful.



Answer



It is in the EXIF data, but the info is under the Canon maker tag. For any EXIF-related tasks, I wholeheartedly recommend ExifTool by Phil Harvey.


Here's an example of a real file (which coincidentally was shot with Canon 450D)


$ exiftool -canon:"WB_RGGB*" -canon:"*temp*" MG_5366.CR2
WB RGGB Levels As Shot     : 2270 1024 1024 1520
WB RGGB Levels Auto        : 2270 1024 1024 1520
WB RGGB Levels Measured    : 2267 1023 1024 1518
WB RGGB Levels Daylight    : 2245 1024 1024 1425
WB RGGB Levels Shade       : 2595 1024 1024 1197
WB RGGB Levels Cloudy      : 2422 1024 1024 1299
WB RGGB Levels Tungsten    : 1660 1075 1075 2222
WB RGGB Levels Fluorescent : 1960 1024 1024 1945
WB RGGB Levels Kelvin      : 2245 1024 1024 1425
WB RGGB Levels Flash       : 2485 1024 1024 1273
Camera Temperature         : 18 C
Color Temperature          : 5200
Color Temp As Shot         : 4955
Color Temp Auto            : 4955
Color Temp Measured        : 4955
Color Temp Daylight        : 5200
Color Temp Shade           : 7000
Color Temp Cloudy          : 6000
Color Temp Tungsten        : 3200
Color Temp Fluorescent     : 3776
Color Temp Kelvin          : 5189
Color Temp Flash           : 6310

NB: Windows users: double-check that you use double-quotes, not single quotes.





EDIT: The Color Temp values are "nice to know" data, but they are purely informational. The Kelvin figures are probably based on the camera's WB calculations, and post-processing software most likely uses the WB RGGB Levels data instead.


I tested this by changing the Color Temp As Shot value from 5200 to 7000 and opening the file in Photoshop (Adobe Camera Raw). Nothing changed.


Then I changed the WB RGGB Levels As Shot value of a copy of the original file from 2270 1024 1024 1520 to 1000 1000 1000 1000 and the image changed to this:


wb rggb change


I did not change the Color Temp As Shot value, but Adobe Camera Raw now shows the temperature as 2150 (tint -144).


Summa summarum: Adobe Camera Raw calculates the "Color Temperature" from the EXIF data - from the WB_RGGBLevels* tags under the Canon group (inside the Maker Notes group).
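As an illustration of what such a calculation could look like, here is a hedged Python sketch: it treats the reciprocal of the RGGB gains as the illuminant's colour, pretends camera RGB is linear sRGB (the real pipeline uses a camera-specific matrix we don't have), and applies McCamy's standard CCT approximation. Without the real camera matrix the number is only indicative - don't expect it to reproduce Adobe's 4955 K:

def cct_from_wb_rggb(levels):
    r, g1, g2, b = levels
    g = (g1 + g2) / 2
    # The gains neutralise the illuminant, so the illuminant's raw colour
    # is proportional to the reciprocal of the gains.
    cr, cg, cb = 1 / r, 1 / g, 1 / b
    # Linear sRGB -> CIE XYZ (a stand-in for the unknown camera matrix).
    X = 0.4124 * cr + 0.3576 * cg + 0.1805 * cb
    Y = 0.2126 * cr + 0.7152 * cg + 0.0722 * cb
    Z = 0.0193 * cr + 0.1192 * cg + 0.9505 * cb
    x, y = X / (X + Y + Z), Y / (X + Y + Z)
    # McCamy's approximation for correlated colour temperature.
    n = (x - 0.3320) / (0.1858 - y)
    return 449 * n**3 + 3525 * n**2 + 6823.3 * n + 5520.33

print(round(cct_from_wb_rggb([2270, 1024, 1024, 1520])))   # the "As Shot" levels above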


Why did the shutter speed increase when I raised ISO and lowered exposure compensation in Aperture Priority mode?



I have a Sony A6000 camera and I am studying photography. I was experimenting with ISO and exposure compensation while in Aperture Priority mode and encountered this situation:




  • I set the aperture to f/3.5 and the ISO to 3200. The exposure compensation was set to +1. This resulted in a 1/100 shutter speed being chosen by the camera when taking the photo.


  • Then I changed the ISO to 6400 and the exposure compensation to 0 and took a new photo, without changing the composition at all. This time the shutter speed chosen by the camera was 1/500, and I don't understand why.




Based on what I learned so far the chosen shutter speed should have been the same as before, because I raised the ISO by 1 EV and also lowered the exposure by 1 EV. So in the end the EV value was the same.



Am I missing something? Are my assumptions wrong or could there be some other setting on the camera that is influencing the results?



Answer



Both of your changes were in the same direction with regard to shutter speed, not in offsetting directions.


Keep in mind that exposure values (EV) with higher numbers are for brighter lighting conditions and require shorter shutter times, narrower apertures, or lower ISO. Exposure values with lower numbers are for dimmer lighting conditions and require longer shutter times, wider apertures, or higher ISO. If the light itself is unchanged, selecting a lower EV therefore increases exposure and gives a brighter image, while selecting a higher EV reduces exposure and gives a dimmer one.


Lower EVs let more light into the camera to compensate for dimmer lighting conditions. For ISO 100, EV 0 is f/1 at one second, or f/1.4 @ 2 seconds, or f/0.7 @ 1/2 second, etc. For ISO 100, EV 15 is f/16 @ 1/125 second, or f/11 @ 1/250, or f/8 @ 1/500, etc. The light needed to get a proper exposure at EV 15 is 2^15 (32,768) times brighter than the light needed to get a proper exposure at EV 0.


Assuming the lighting and the scene remains constant:



  • Raising ISO with no other changes results in a lower EV and a brighter image

  • Lowering ISO with no other changes results in a higher EV and a dimmer image

  • Increasing/lengthening the shutter time with no other changes results in a lower EV and a brighter image


  • Decreasing/shortening the shutter time with no other changes results in a higher EV and a dimmer image

  • Opening the aperture/lowering the f-number with no other changes results in a lower EV and a brighter image

  • Closing the aperture/increasing the f-number with no other changes results in a higher EV and a dimmer image

  • Increasing Exposure Compensation results in a lower EV and a brighter image, but not because the EC number has changed. The real change is the difference in Tv (longer), Av (wider), or ISO (higher) that the camera applies when you enter a higher EC number.

  • Decreasing Exposure Compensation results in a higher EV and a darker image, but not because the EC number has changed. The real change is the difference in Tv (shorter), Av (narrower), or ISO (lower) that the camera applies when you enter a lower EC number.


Consider if you had only made one change at a time:



  • Moving from ISO 3200 at [+1 EC] to ISO 3200 at [0 EC] will result in the shutter time decreasing (getting shorter) from 1/100 to 1/200. You have told the camera you want the exposure to be half as bright. You have raised the EV by one stop without raising the actual brightness of the lighting by one stop. Aperture is the same. ISO is the same. Shutter time is one stop shorter. The image will be one stop dimmer.

  • Moving from ISO 3200 at [0 EC] to ISO 6400 at [0 EC] will result in the shutter time decreasing (getting shorter) from 1/200 to 1/400. You have doubled the sensor's amplification, so the camera will halve the shutter time to compensate. You have not changed the EV at all; you've swapped sensitivity for exposure time. When you doubled ISO from 3200 to 6400 (brighter by one stop), the camera halved exposure time from 1/200 second to 1/400 second (dimmer by one stop). The two changes offset each other and the image is the same brightness.



Since your camera probably sets exposure in 1/3-stop increments, the 1/3-stop difference between 1/400 and 1/500 second is rounding error. Your initial reading of f/3.5, ISO 3200, 1/100 second could have actually metered somewhere between 1/100 and 1/125 second. Your second reading of f/3.5, ISO 6400, 1/500 second could have actually metered somewhere between 1/400 and 1/500 second. The first meter reading was rounded down slightly to the nearest 1/3 stop; the second was rounded up slightly.
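The whole exchange can be condensed into a few lines of arithmetic. In this Python sketch the metered scene brightness (expressed as an EV at ISO 100) is back-computed from the first reading, which is an assumption on my part; everything else follows from the standard EV = log2(N^2/t) relation:

import math

def shutter_time(f_number, iso, ec, ev100):
    # Positive EC asks for a brighter image, i.e. a lower target EV
    # and therefore a longer shutter time.
    ev_target = ev100 + math.log2(iso / 100) - ec
    return f_number**2 / 2**ev_target

EV100 = 6.26   # back-computed so the first case lands on ~1/100 s (my assumption)
for iso, ec in [(3200, +1), (3200, 0), (6400, 0), (3200, -1)]:
    t = shutter_time(3.5, iso, ec, EV100)
    print(f"ISO {iso}, EC {ec:+d}: 1/{round(1 / t)} s")

It prints 1/100, 1/200, 1/400, 1/400 - matching the reasoning above, with the camera's 1/3-stop rounding accounting for the 1/400 vs 1/500 discrepancy.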



You said that "Moving from ISO 3200 at [+1 EC] to ISO 3200 at [0 EC] will result in the shutter time decreasing (getting shorter) from 1/100 to 1/200". That means that going even lower [-1 EC] will result in the shutter time decreasing even further, to 1/400, is that correct?



Yes. If ISO and aperture are unchanged, changing EC from zero to -1 will result in a shutter time half as long. In this case 1/400. You have reduced the amount of light allowed into the camera by one stop. The image will be dimmer.



So then, based on the rest of your answer, the camera will set the same 1/400 shutter value in both these instances: ISO 6400 at [0 EC] and ISO 3200 at [-1 EC]?



Yes, you are correct. If the lighting is unchanged and the meter measures the same amount of light for both readings: If f/3.5, ISO 6400 [0 EC] results in a Tv of 1/400, then f/3.5, ISO 3200 [-1 EC] will also result in a Tv of 1/400. The second image will be one stop dimmer.




There must be something fundamental I'm missing, because to my mind they would be two stops dimmer, not one. First stop because the sensor is less sensitive, the second stop because I told the camera I wanted lower exposure (-1). And that's also why it's strange to me that the Tv is the same.



You are shooting in aperture priority. You had the Av set at f/3.5 and the ISO set manually at 6400. The camera computed a Tv of 1/400 second when EC was set to zero. You then changed the ISO from 6400 to 3200 and changed the EC from 0 to -1.


In this instance, you told the camera you wanted one stop lower exposure (by changing EC from 0 to -1). But you had already manually changed the ISO from 6400 to 3200 (-1 stop) yourself, so you had already made the change you needed to get one stop lower exposure with the same Av (f/3.5) and Tv (1/400). The camera recognized that you had changed the ISO (-1 stop) and you had left the Av the same (+/-0 stops) when you told it you wanted one stop darker exposure. That resulted in the same Tv (+/-0): 1/400 second. The net change was one stop darker due to the lower ISO you had already changed.


When you enter a higher exposure compensation number, you are telling the camera you want a brighter image. To get a brighter image, the camera must use a lower exposure value. To get a lower EV, the camera must open the aperture wider, expose for longer, or increase the ISO.


When you enter a lower EC number, you are telling the camera you want a darker image. To get a darker image, the camera must use a higher exposure value. To get a higher EV, the camera must make the aperture narrower, expose for a shorter time, or decrease the ISO.


The only things that determine how bright or dim your image is when the lighting is constant are ISO, Av, and Tv. Exposure compensation (EC) does not tell the camera to change the way it develops the raw file, it only tells the camera that you want the image to be exposed brighter or dimmer and the camera will change Tv, ISO, or Av to expose the image with the brightness you want. If you have manually set the ISO and have manually selected the aperture by shooting in Aperture Priority exposure mode, the only thing left that the camera can calculate is the shutter time. If you have left the aperture unchanged, lowered the ISO by one stop, and then told the camera you want one stop less exposure, there's nothing left for the camera to change. It will use the same shutter time as before.


Thursday 12 October 2017

Effective focal length with crop sensor, and detail of image


I have in fact read the other threads, and I am still not clear on this topic.


For any given lens - say a 400mm EF or FX telephoto - I am shooting a subject in low light; for argument's sake let's say I am shooting an eye chart (the kind you read in the military to check your vision).


The distance of the subject is such that no manner of focus or camera settings or photoshop manipulation will make the bottom line of the chart readable. The 2nd and 3rd lines to the bottom are almost readable when shot with the 400mm lens on a full frame body.


Will using a crop-sensor body with the 400mm lens (giving an effective focal length of 600mm) actually give me better detail in my final image, and make those unreadable lines readable?


For argument's sake, assume the camera bodies and sensors are identical in every regard except the crop factor, and forget arguments about final photo print size, pixel size, depth of field, etc. - worry only about the final ability of the camera system to resolve detail in written words at extreme distances...


Logic would dictate that the answer is no; otherwise why wouldn't they make 5x or 10x crop sensor cameras so people could take advantage of insane effective focal lengths from their lenses? However I am seeing both answers when I google this, and a lot of confusing explanations.


Thanks!




lens - Where should one go in terms of equipment for event shooting after a D7000 kit?


I'm still new at photography -- really, it is going to be a long while before I can get close to the full potential of the D7000 kit. However, Microsoft recently has been giving me the opportunity to photograph things like their Imagine Cup World Finals (which is, incidentally, where I am now).


I'm not really looking at spending lots of money on equipment at this point -- I just bought my first serious camera and it's going to be a long time before I will really be able to seriously use better lenses, or other niceties. However, as long as I get to go to these events, I'd like to be able to capture what I see there.


Consider this failed shot of Team Endeavor_Design from Romania, currently competing at the world finals:


Failed Shot #1


Note how the background is reasonably sharp (indicating correct handholding technique + VR in use).


This failed shot demonstrates the opposite effect -- the subject got tracked just fine but everything except for the one subject is a blur. (This image is also a composition failure but ignore that for the sake of discussion here)



Failed Shot #2


For both shots, the D7000 is about maxed out in terms of sensitivity at ISO 3200. As it is, I had to throw away a lot of sharpness in order to remove the noise at that sensitivity.


As I see it, the only way to save these shots (the first in particular) would be to somehow get more light into the equation. A tripod won't solve my issue here because, as the first shot demonstrates, the camera is perfectly able to capture a sharp image while handheld (and just as important, a tripod is unwieldy at an event). That leaves me two options:



  1. A Fast Lens. This is probably going to be the 35mm f/1.8 or the 50mm f/1.8. (All the fast zooms are in the thousands of dollars range which I don't intend to spend anytime soon)

  2. Some form of flash.


I'm unsure what would be the best place to go from here though. The lens might be more versatile, because many venues won't allow use of a flash, and sometimes there's no nearby ceiling or wall handy to bounce off for a reasonable shot. On the other hand, going with a faster lens takes away zoom ability. Moreover, a fast lens might get me two stops maximum over my current lens (see the quick check below), while a flash would work in pitch darkness. As I've not shot a ton of events before, though, I'm not sure which path is better.


What should I do?
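To sanity-check the "two stops maximum" estimate above, here is a quick Python sketch (my own arithmetic; the kit-lens aperture and the metered shutter speed are assumptions):

from math import log2

kit_f, prime_f = 3.5, 1.8          # kit zoom wide open vs a 35/50mm f/1.8
stops = 2 * log2(kit_f / prime_f)  # ~1.9 stops
print(f"{stops:.1f} stops gained")

t_kit = 1 / 50                     # hypothetical metered shutter with the kit lens
t_prime = t_kit / 2**stops         # same scene and ISO, prime wide open
print(f"1/{1 / t_kit:.0f} s  ->  1/{1 / t_prime:.0f} s")

Roughly a quadrupling of shutter speed (1/50 s to about 1/190 s) - often the difference between motion blur and a keeper, which is why the answer below leans towards the fast prime.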



Answer




You need a faster lens. The D7000 is a good camera which is able to produce reasonably clean images even at ISO 1600. If you get a 35mm f/1.8, you'll be able to increase your shutter speed and thus get rid of the motion blur. Using flash could be an alternative to consider, but a lot of events don't allow flash and it sometimes annoys people, so I'd suggest a faster lens over a flash - if possible, something with f/1.4.


Also, shoot a lot of pictures so your chances of getting a technically and compositionally sound picture are higher. Event photography requires taking more pictures than most other genres, so bear with it.


Why is the front element of a telephoto lens larger than a wide angle lens?

A wide angle lens has a wide angle of view, therefore it would make sense that the front of the lens would also be wide. A telephoto lens ha...