Thursday, 30 July 2015

Is a Canon Rebel T3 an appropriate entry-level DSLR?


I'm starting photography and am wondering if the Canon Rebel T3 is a good choice.


My only real constraints are that I want to be able to take "burst shots" (such as at sports events) and low-light shots.


Thank you



Answer



Entry-level DSLRs can do burst shots, but none of them is particularly strong at it.


With RAW, the Canon T3 can shoot only two frames per second and has to stop after five frames. JPEG is a little better: 3 fps for up to 830 frames.


The Nikon D3100 is faster with RAW (3 fps) and has a larger/faster buffer (13 RAW frames, unlimited JPEG).



The Pentax K-r has the fastest frame rate, 6 fps (25 JPEG / 12 RAW), but you might find its tracking AF inadequate and too slow for sports events.
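As a rough back-of-envelope sketch of what these numbers mean in practice, the sustainable burst length is simply buffer depth divided by frame rate (figures as quoted above):

```python
# How long can each body sustain its burst before the buffer fills?
# seconds = buffer depth (frames) / frame rate (fps).
bodies = {
    "Canon T3 (RAW)":    (2.0, 5),
    "Canon T3 (JPEG)":   (3.0, 830),
    "Nikon D3100 (RAW)": (3.0, 13),
    "Pentax K-r (JPEG)": (6.0, 25),
    "Pentax K-r (RAW)":  (6.0, 12),
}

for name, (fps, depth) in bodies.items():
    print(f"{name}: {depth / fps:.1f} s of continuous shooting")
```

The T3's five-frame RAW buffer lasts only about 2.5 seconds, which is why it struggles with longer action sequences.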


Tuesday, 28 July 2015

photoshop - Is it possible to reproduce a colour tone (of any photos) perfectly (by curves)?


Is it possible to mimic a colour tone perfectly? (most likely using curves.)


For example, I want photos taken by my digital camera to have the colour tone of film photographs (let's say Velvia).


Or, for example, I want photos taken by my Nikon DSLR to have the colour tone of my Fujifilm images. (Nikon's Picture Control allows us to set curves on a PC and import them into the camera.)


I know some photographers with very good photo-editing skills can reproduce colour tones that are 99% alike, but they are never 100% alike. And post-editing photos one by one is not time-effective.





To achieve this, I have an idea. I don't know if it works or not.




  1. Print out a colour board. A simple colour board would contain 3 columns and 16 rows: the columns would be Red, Green, and Blue, and the rows a gradient of values from 0 to 255. (A complete colour board would contain 256 x 256 x 256 colours.)




  2. Then I use my Fujifilm camera to take a photo of the colour board.




  3. Thirdly, set my Nikon DSLR to the same settings as my Fujifilm, and take a photo of the same colour board in its "standard" colour tone, under the same conditions.





  4. What we want now, is a curve that transforms the Nikon's "standard" photo, to the Fujifilm's colour tone.




  5. To get that curve, we first analyse the Fujifilm's colour board by the Eyedropper Tool.




  6. Ideally, taking the Red column as an example, a "standard" image would result in values (0 0 0), (16 0 0), (32 0 0) ... (240 0 0).





  7. But nothing is ideal in the real world, and that's no problem. We are now analysing the Fujifilm rendering of the colour board. Let's say Fujifilm has a darker style: (0 0 0), (15 0 0), (29 0 0) ... (230 0 0).




  8. Then we analyse the Nikon's "standard" image. Let's say Nikon's image is a bit brighter: (1 0 0), (16 0 0), (33 0 0) ... (241 0 0).




  9. Here comes the curve in Nikon's Picture Control (this also works for a Photoshop curves adjustment). If we set the Red curve's input 1 to output 0, 16 -> 15, 33 -> 29, ..., 241 -> 230 (and likewise for the Green and Blue curves), I guess that, for this colour board image, we can get exactly the Fujifilm colour tone.
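Steps 5-9 amount to building a per-channel lookup table from sampled control points, filling the gaps by linear interpolation. A minimal sketch in Python; the control points are the example numbers from steps 7-8, and real use would sample all 16 patches per channel:

```python
def build_curve(samples):
    """samples: list of (input, output) control points, ascending by input.
    Returns a 256-entry lookup table via linear interpolation."""
    lut = []
    for v in range(256):
        if v <= samples[0][0]:
            lut.append(samples[0][1])
            continue
        if v >= samples[-1][0]:
            lut.append(samples[-1][1])
            continue
        # find the surrounding control points and interpolate between them
        for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
            if x0 <= v <= x1:
                t = (v - x0) / (x1 - x0)
                lut.append(round(y0 + t * (y1 - y0)))
                break
    return lut

# Red-channel control points: Nikon reading -> Fujifilm reading
red_curve = build_curve([(1, 0), (16, 15), (33, 29), (241, 230)])
print(red_curve[16], red_curve[33])  # the sampled points map exactly
```

Green and blue curves would be built the same way, and each pixel's channels passed through its own table.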





Do you think this "adjustment curve" can theoretically transform every Nikon "standard" image into a Fujifilm-toned image?




Digital zoom versus optical zoom for minimising camera shake


Here's the scenario: every so often our garden plays host to some small deer. Naturally, I try to take pictures of them. From inside the house, I can get a decent shot with full zoom on my camera. The difficulty is that the kids also like to see the deer and one particularly wriggly child needs to be lifted up to be able to see them. So sometimes I'm trying to take a picture one-handed with a wriggly child in the other arm. Not the best conditions for a steady shot!


My camera has enough resolution that I could take a photo with a lower zoom factor and then crop it later and still have enough detail for a decent photograph. So, given the above circumstances, which is going to provide me with the least blurred photograph: full zoom or least zoom with later cropping (which is what I mean by "digital zoom" in the title of the question)? Note that I'm only interested in the blur on the deer itself, I don't care about the rest, and (for the purposes of this question) I want to focus (ha ha) on blur from camera shake.


Also, because I have a small child under one arm, in this circumstance I keep the camera in "P" mode, letting it select the best aperture and shutter speed. I guess that this might make a difference.



Answer



Assuming you crop to the same area and print to the same size, the amount of camera-movement blur should be indistinguishable between the two cases. Using a shorter focal length will decrease the impact by exactly the amount you have to crop and enlarge.
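The equivalence can be sketched with small-angle arithmetic: angular shake of theta radians smears the image by roughly focal length times theta on the sensor, and cropping then enlarging scales the smaller blur back up by exactly the same factor. The shake angle below is purely illustrative:

```python
import math

theta = math.radians(0.05)  # assumed shake angle, illustrative only

def final_blur_mm(focal_mm, enlargement):
    sensor_blur = focal_mm * theta   # blur recorded on the sensor
    return sensor_blur * enlargement # blur at the final print size

# 200 mm at 10x print enlargement vs 100 mm cropped 2x (so 20x total)
full_zoom = final_blur_mm(200, enlargement=10)
half_zoom = final_blur_mm(100, enlargement=20)
print(full_zoom, half_zoom)  # identical
```

Which is why the decision comes down to the other factors discussed next, not the shake itself.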


So, it comes down to other factors. The most important (from the narrow viewpoint of camera-shake blur) may be that your zoom lens may allow a wider aperture at shorter focal lengths, which would let you speed up the shutter, reducing blur.


But in real use, I don't think that's necessarily going to make up for the decrease in resolution.


I think what I'd do is ask the kids to let you take turns taking pictures and holding them. They'll appreciate looking at the pictures, too.


How to get good results with the built-in popup flash?


How can I take good pictures using the built-in popup flash of my camera?


Usually I try to use available light only, but sometimes there just isn't enough light for a proper exposure (at a reasonable shutter speed).


I know about all the drawbacks of the popup flash, I know it's hard to take good pictures with it — but it's all I have at the moment.


Please don't tell me to just get an external flash (I know, but I don't have the budget at the moment). My camera is a Canon 550D (Rebel T2i in the US) if it makes any difference.



Answer



The problem with the pop-up flash is that it's a small, directional point of light, aiming directly at the subject. This gives harsh shadows behind the subject and makes the photo generally unflattering.



There are a number of things that you can do to make this better:



  1. Make the point of light larger. The professionals often use massive softboxes - large uniformly illuminated boxes of light which give very soft shadows which are very flattering. You can make your source of light slightly larger by buying or making a diffuser - something that sits in front of your flash and diffuses the small point into a larger point of light.

  2. Bounce the flash off the ceiling. This is like turning the ceiling into a massive softbox. You can achieve this by putting a small piece of white card at a 45 degree angle in front of the flash. Experiment until you get the best results.

  3. Use a longer exposure to balance the flash and the ambient light. Try putting the camera on a tripod or stable surface and allowing a long exposure as well as the flash. This is good for capturing a specific moment but allowing the background light to fill in the dark shadows that you would get otherwise. This is often set up as an automatic scene mode called "Night Portrait" or something similar.


Monday, 27 July 2015

jpeg - How much post processing advantage is gained when scanning 35mm negatives as TIFF rather than JPG?


I am sending some C41 colour film to be developed. I'm not going to be requesting any prints, just scans of the negative to CD. I can choose between high resolution, high quality JPG or TIFF.


The TIFF files are more expensive and will be much larger files.


Although I'm not intending to do extensive post processing on these pictures, I may make small adjustments or bigger adjustments when I feel a picture needs it. I am used to shooting RAW on my digital camera and the extra PP latitude this brings when editing my photos in Lightroom.


I know that theoretically a TIFF file has the potential to retain more data than a lossy JPG, giving more PP latitude.



My question is does this theoretical benefit translate to a real post processing benefit when making minor/medium adjustments in Lightroom?


Would the benefit be of a similar magnitude to that of RAW over high quality JPG, or much less?


On a secondary note: I understand that there are various options when saving to TIFF (e.g. 8bit vs 16bit) though I do not fully understand what advantages these give. If I choose TIFF do I need to make sure the lab is going to use particular settings in order to get the benefit over JPG?



Answer



If the TIFF files are only 8-bit and the resolution is the same, then there will be very little difference (unless the JPEG compression is set very high). The only difference will be slight artefacts in high-frequency areas and potentially lower colour resolution if chroma sub-sampling is used on the JPEGs.


Additionally, if the scan resolution is high compared to what the film can actually resolve, there will be little difference between TIFF and JPEG, as both will contain more information than the original film.


It seems to me that they're just trying to create an artificial differentiation to increase revenue. The only time I'd consider paying half again as much for TIFFs would be under the following conditions:



  • The quality of the original negatives was very high

  • The TIFF files and scanner were both more than 8-bit
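The bit-depth condition matters because editing latitude collapses quickly at 8 bits. A small sketch (pure arithmetic, with an assumed gamma-2.2 brightening push on the darkest tenth of the tonal range) counts how many distinct output tones survive at each depth:

```python
def distinct_levels(bits):
    maxv = 2 ** bits - 1
    # the darkest 10% of the scale, quantized at this bit depth
    shadows = range(0, maxv // 10)
    # apply a strong brightening curve, then count surviving output tones
    out = {round((v / maxv) ** (1 / 2.2) * 255) for v in shadows}
    return len(out)

eight = distinct_levels(8)     # few distinct tones -> visible banding
sixteen = distinct_levels(16)  # far smoother gradation after the push
print(eight, sixteen)
```

The 8-bit shadows end up spread across the brightened range with large gaps between tones, which is exactly the posterization you see after heavy adjustments.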



lighting - Is it OK to use different kinds of strobes together?


I currently have 2 Vivitar 285 strobes.
These are working very nicely for me.


I would like to buy 2 more strobes, and I would be quite happy with the Vivitars, but I would also like to consider either the LumoPro LP160 or something similar.


I typically shoot with the flashes off-camera in manual mode.
I use a cheap radio trigger.


My question is: will I have problems if I start mixing different makes of strobe?


Thank you for any advice.



Answer



In day to day use, you should not have any problems mixing manually controlled strobes, as long as the color of light from both is the same. I think you'd be fine.



timelapse - What is bulb ramping?


What does the term bulb ramping refer to, and what does it bring to the world of time-lapse photography?


It seems you need a specific type of intervalometer for bulb ramping. How do these differ from the average intervalometer?



Answer



Bulb ramping, or bramping, is a means of automatically adjusting exposure settings to maintain a specific exposure value (EV) throughout the duration of a time-lapse sequence. Bulb ramping intervalometers can be simple and cheap, or complex and expensive, depending on the results they can provide. Cheaper ones, and many DIY projects that you can follow to build your own, tend to produce fairly apparent jumps when exposure settings are changed, resulting in less-than-ideal results when a time-lapse sequence is stitched together into a video. More expensive intervalometers that offer bulb ramping capabilities tend to produce much finer adjustments over more frames, greatly reducing or eliminating visible jumps in exposure value in the final video.


It's somewhat possible to achieve bulb ramping with a normal intervalometer and automatic settings in a camera body. Things like auto ISO and a priority mode will usually achieve some degree of bulb ramping, but the results are often unpredictable. Using automatic and priority modes sometimes limits your options as well, such as only outputting JPEG images, or being limited in how much "ramping" can occur. If you want the best results, buying or building a high-quality bramper that supports fine adjustments over the duration of a time-lapse sequence will be necessary.
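The "fine adjustments over more frames" idea can be sketched numerically: if ambient light falls by some number of stops over the sequence, a good bramper spreads the compensating shutter change smoothly over every frame instead of jumping a whole stop at a time. All numbers below are assumptions for illustration:

```python
frames = 240           # e.g. a 2-hour lapse shot at 30 s intervals
start_shutter = 1 / 60 # seconds; assumed starting exposure
stops_to_ramp = 6.0    # assumed total fall in ambient light (sunset)

# exposure time doubles per stop, interpolated evenly across all frames
schedule = [start_shutter * 2 ** (stops_to_ramp * i / (frames - 1))
            for i in range(frames)]

step_stops = stops_to_ramp / (frames - 1)  # change per frame, in stops
print(f"first {schedule[0]:.4f}s, last {schedule[-1]:.2f}s, "
      f"{step_stops:.3f} stop/frame")
```

At a fortieth of a stop per frame the brightness change between consecutive frames is invisible in the finished video, which is the whole point of a fine-grained bramper.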


Sunday, 26 July 2015

point and shoot - Why is there too much noise in a fog photograph?



I have been told that this photo has terrible noise.
I wasn't aware of that!


Morning scene, F4.8, Shutter: 1/50, ISO 100, Focal 14.4mm.


The fog was dense of course, but still it wasn't "too" dark.
Canon Powershot SX210 IS


Why is there too much noise in a fog photograph? Is the small sensor the culprit?





Answer



I am fairly certain that what people are telling you is "noise" is actually JPEG compression artifacts. Unlike RAW images, JPEG images use a form of lossy compression, which means some degree of detail is permanently lost when saving a JPEG image. From what I can tell, the photo you posted is not noisy at all; however, it does display a fair amount of mild JPEG artifacting and a little bit of banding (usually called posterization) in the bottom portion.


This is a pretty normal consequence of JPEG compression with low-contrast scenes and smooth gradients. JPEG compression is excellent when there is a lot of detail, but it tends to degrade quality quickly when there is minimal detail, and it does quite poorly with smooth gradients. Your photo has both a low amount of detail (by design; it's a photo of fog, after all) and a fairly smooth gradient from top to bottom. You might have been able to avoid the artifacts and posterization with a lower level of compression (a higher JPEG "quality" setting), but to fully eliminate the negatives you would have had to use one of the top two JPEG quality settings, which pretty much eliminates any space savings.



If you want to avoid the quality issues of JPEG, shoot RAW if you can. Whenever you save a JPEG, from RAW or anything else, try to keep the quality setting around 70-80 (or "high"), as that usually produces useful space savings without degrading quality too much.


Why are mirrorless cameras much slower than DSLRs?


This is a follow-up to a previous question of mine regarding a specific camera (the Pentax K-01).


I've been comparing cycle times (shot-to-shot times in single-shot mode) of interchangeable-lens mirrorless cameras with traditional DSLRs, and all of them, even the highest-end ones (like the Olympus OM-D E-M5 or Panasonic GH3), are slower than even entry-level DSLRs.


How is that possible? The electronics should be the same, and they don't have the burden of a mirror going up and down between shots. There must be a technical reason (maybe some CPU time goes to feeding the live view before the image is encoded?), because from a commercial point of view it makes no sense.


I've taken the cycle times from the performance tab of the review pages at imaging-resource.


The average CSC has cycle times around 0.7~1.2s, the flagship ones around 0.5s. The average DSLR has cycle times around 0.4s and the flagships around 0.25s.




Saturday, 25 July 2015

post processing - How do I get the Instagram "Lux" effect without using Instagram?



I am kind of fascinated with the new Lux effect in Instagram. I want to get the same effect myself during post processing. This is what I was able to achieve after playing with Levels, Contrast, Saturation and Shadows in iPhoto.


My questions:



  1. Is it possible to get such an effect using basic tools like Picasa or iPhoto? If so, how?

  2. Is it possible to get such an effect in Photoshop/PS Elements/Lightroom (and the like)? If so, how?


Original Photo: Original Image


With the Instagram Lux effect and "Low Fi" filter: Photo with Instagram Lux effect


What I could get: (at best, with my limited knowledge) photo after playing with Levels, Contrast, Saturation and Shadows in iPhoto



Answer




So, here's what I got in just a few minutes using two basic tools: Curves, and Unsharp mask:


replication attempt


I used Gimp, but this is basic stuff any decent image editing software will have. Here's all I did. First, I used the curves tool to dramatically increase the black point, increasing shadow contrast:


shadow contrast up


Then, I pulled the curve upwards to brighten the (new) midtones:


more midtones


I didn't mess with the color channels at all; this is all the global "value" curve. I made these adjustments by eye, watching the tone of the house as I worked.
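For anyone wanting to script it, those two curve moves collapse into a single 0-255 lookup table: raise the black point, then lift the new midtones with a gamma below 1. The black point and gamma values here are guesses to experiment with, not Instagram's actual numbers:

```python
BLACK_POINT = 40      # assumed; "dramatically increase the black point"
MIDTONE_GAMMA = 0.8   # assumed; gamma < 1 brightens the midtones

def lux_curve(v):
    # clip everything below the new black point, rescale the remainder,
    # then apply the midtone lift
    t = max(0.0, (v - BLACK_POINT) / (255 - BLACK_POINT))
    return round((t ** MIDTONE_GAMMA) * 255)

lut = [lux_curve(v) for v in range(256)]
# shadows are crushed to 0, midtones lifted, white stays white
print(lut[0], lut[40], lut[128], lut[255])
```

Applying this table per channel (or to a value/luminosity layer) reproduces the contrast move; the unsharp-mask step described next is separate.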


Having done that, I resized to 612×612 (the size of your Instagram example), and then used an unsharp mask with a radius of 10 pixels and a very high strength.


This doesn't look exactly like your image, but I think we're in the ballpark.


There's a sort of glow over the lower part of the house that's missing, and I couldn't replicate it with global adjustments without destroying the tones in the sky and the detail on the tree branches on the left; I suspect that the filter applies a graduated vignette/glow/"light leak" effect somewhere in the pipeline. If you compare the top half of my attempt to the Instagram output, you'll see they're really close; the difference is in the lower part.



The original has flat lighting; this fake burst of light is part of what adds dynamic interest, but it also feels a little bit like cheating: Instagram is not just capturing what's there with a funky filter, but altering the reality of the scene.




Update: this is with just an unsharp mask with a radius of 100 and a strength of 2.0 in Gimp (Photoshop measures strength differently, but basically about 10× higher than one would normally use when going for a natural-looking image).


with just unsharp mask


The curves approach gives a lot more control and it's still what I recommend, but for quick and dirty replication of the effect, this might be all you need.


focus - I don't need anything to be blurred. I need everything to be focused



The scene is at a distance of, say, 10 feet. I own a Nikon D7200 and an 18-140 mm lens.


There are three cats here and there (not together), and many of my images focused on one or two cats but not all three, because the third was a bit away from the other two. In another photo, the camera focused only on the isolated cat and not the other two.


I need a solution where all three cats are in focus, the surroundings are in focus, and nothing is blurred.


Out of many photos, only a few had all the cats and the background in focus. But I want almost all of them to be like that.


For reference, check this photo. This is one of the ones that came out the way I wanted. But in the other photos the cats were not all in one place, which is why the camera focused on only one or two of them. The background is just a plain wall, so there is no background depth to worry about.




Friday, 24 July 2015

Is there an automated way of color correcting using a color card in Capture One?


With the X-Rite Colorchecker Passport and their accompanying software, it's really simple to generate a DNG profile to get perfectly accurate colors in Lightroom.


Using X-Rite's Colorchecker Passport with Lightroom


I want to know if there's a similarly automated way of perfectly correcting colors—via the X-Rite Colorchecker or any color reference—in Capture One. Here's the workflow I am looking for:



  1. Set up lighting.


  2. Take a shot of the color card.

  3. Do the photo shoot.

  4. Import photos into Capture One.

  5. On the shot of the color card, adjust white balance by clicking on a grey square.

  6. On the same shot of the color card, do some magic to perfectly align colors.

  7. Apply these adjustments to all the remaining shots.


It's that magical step six that I'm looking for. I've seen how to do this manually. I'm wondering if there's an automated (fast, repeatable) way of doing this.


(I'm fully aware that Capture One uses ICC profiles only, while X-Rite outputs DNG profiles.)



Answer




You need to use X-Rite's i1Profiler software. Unlike the ColorChecker Passport software, it can create ICC profiles from the ColorChecker that Phase One can use. It looks like it only comes bundled with one of the X-Rite i1 products.


See: http://www.colourspace.xyz/creating-camera-profiles-for-capture-one/


exposure - Why is my Metz 58 AF-2 using long shutter values when my Canon 60D is in Av mode?


About a year ago I bought a Metz 58 AF-2 for my Canon 60D. For lack of free time to experiment with the automatic functions, I had used it only in manual mode with other manual flashes, without problems. The few times I used it in auto mode (E-TTL 2 without HSS) I noticed some problems, but I assumed I had set it up incorrectly.


The problem is that when the flash is in ETTL 2 or ETTL 2 HSS mode (attached to the camera), the shots are sometimes (apparently randomly) underexposed.


In my last experiment I noticed something that made me suspect a real hardware problem. If I set the camera to Av (aperture priority) mode (e.g. f/3.5) and the flash to the mode just described, with the head tilted toward the ceiling, then when I half-press the shutter button the camera sets a much too long exposure time, between 1 and 2 seconds. I expected that, with a very powerful flash attached, it would instead shorten the exposure to something like 1/300 and increase the flash output.


These extra-long exposure times occur when I lock the ISO at 125. If I set ISO to Auto, it reduces the exposure time to 1/60 but raises the ISO to 1600... In no way does the camera seem to take into account that there is a GN 58 flash attached that could improve the shot.


The camera is certainly aware that a capable flash is attached: it uses the flash's AF-assist beam, and I can control the flash parameters from the camera menu. The flash also fires when I press the shutter button, but the photos are underexposed.



Is this behavior normal? How could I solve it?




workflow - Is there an EXIF standard for tagging people in photographs?


There are a number of websites that let you tag people in photos, such as Facebook and now flickr, but is there an EXIF standard for tagging who is in a photo, and where they are?



Answer



Adobe's XMP metadata standard supports information defined by the Metadata Working Group (MWG), which includes a definition of how to store face tagged data. See:


Adobe XMP: http://www.adobe.com/products/xmp/standards.html
MWG: http://www.metadataworkinggroup.com/ (click through to the specifications, download the PDF, and see page 51 onward).


So while this isn't "EXIF" per se, it is metadata stored in the image. I'm just starting now to explore how widely supported this is.
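For a feel of what the MWG face-region metadata looks like inside the XMP packet, here is a rough sketch. The namespace URIs and property names are from memory of the MWG specification, so verify them against the PDF before relying on them, and the name and coordinates below are made up:

```xml
<rdf:Description
    xmlns:mwg-rs="http://www.metadataworkinggroup.com/schemas/regions/"
    xmlns:stArea="http://ns.adobe.com/xmp/sType/Area#">
  <mwg-rs:Regions rdf:parseType="Resource">
    <mwg-rs:RegionList>
      <rdf:Bag>
        <rdf:li rdf:parseType="Resource">
          <mwg-rs:Type>Face</mwg-rs:Type>
          <mwg-rs:Name>Jane Example</mwg-rs:Name>
          <!-- x/y/w/h are normalized fractions of image width/height -->
          <mwg-rs:Area stArea:x="0.45" stArea:y="0.30"
                       stArea:w="0.10" stArea:h="0.15"
                       stArea:unit="normalized"/>
        </rdf:li>
      </rdf:Bag>
    </mwg-rs:RegionList>
  </mwg-rs:Regions>
</rdf:Description>
```

The normalized coordinates mean the tag survives resizing, which is part of why this scheme caught on.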


blur - Blurred images with Canon 1100D


I have a Canon 1100D which I bought last year. The image quality used to be good, but recently I have noticed that the images are fairly blurred. Even with a short shutter speed, a stable base for the camera, and the self-timer to avoid human error, the output images are blurred and unclear.


Are there any camera settings that I need to calibrate? The blurry images are very annoying.



Answer



Not a lot comes to mind on the camera body side, but several things on the lens side may result in blurry or out-of-focus photos:




  • Auto-focus switch set to On?

  • Image stabilizing switch set to On?

  • Image stabilizing malfunction?

  • Smudges/dirt on the front element of the lens?

  • Smudges/dirt on the rear element of the lens?

  • Auto-focus malfunction? Stuck focus ring?

  • Fungus growth inside lens?

  • Moisture condensed inside the lens?

  • Internal misalignment of lens elements after a hit/bump?



To test some of these: first check and clean both ends of the lens; set the camera to manual focus, image stabilizer off, a small aperture (f/16-f/22) and a fast shutter speed (1/250 sec or faster); place the camera on a steady surface or a tripod; carefully focus manually as best you can; then use the 10-second timer and shoot a photo of a contrasty subject in lighting good enough that the ISO does not rise too high.


If the result is still blurred, try another lens.


equipment recommendation - What is the best Canon telephoto zoom lens (around $1500-$2000) for general use?


Looking for a good telephoto zoom lens from Canon, I am considering:




  • Canon EF 100-400mm f/4.5-5.6L IS USM

  • Canon EF 70-300mm f/4-5.6L IS USM

  • Canon EF 70-200mm f/2.8L IS II USM



What do you recommend?


exposure - What exactly does a TTL flash set its power to?



I understand that TTL flash means the flash fires a preflash and tries to calculate the right exposure for that particular scene. That part makes sense, but what exactly is the "right exposure"? Does it use the same 18% grey rule? And does it use the same metering mode as the normal exposure?


So say you are using spot metering and pointing into a pitch-black background. I guess what's going to happen is that TTL will preflash, the metering will pick up the fact that the flash isn't doing much, and it will then go to maximum power? But that could throw things off in the foreground?




Thursday, 23 July 2015

phase detection - How does autofocus micro adjustment (AFMA) really work?


One of my answers was edited to include autofocus micro adjustment (AFMA) as a camera choosing criterion. The edit linked to another answer explaining the AFMA more.



My understanding of autofocus is that:



  • Autofocus is closed loop if using contrast detect, so AFMA should be unnecessary for contrast detect autofocus because the focus is actually confirmed in contrast detect autofocus, at the imaging sensor.

  • Open loop autofocus such as phase detect autofocus calculates the predicted adjustment to the focus based on the current position and the predicted optimal position, so if this doesn't work, how much it will be off is dependent on the current position of the focusing.


Thus, my questions are:



  • Is AFMA per-lens or a global adjustment?

  • Can AFMA actually fully correct issues in poor-focusing lens+body combinations?

  • What actually is AFMA adjusting? Is it an additive offset between the predicted position and the actually chosen position? Or is it a multiplicative correction to the amount by which focus is adjusted?


  • Is AFMA used for contrast detect autofocus? Is it used for Canon's dual pixel autofocus?

  • How on earth can AFMA work if the operation of the AF system depends on the current focus position? I mean, if the focusing isn't working, shouldn't the amount of error depend on the current focus position? If you are photographing an object/person 2 meters away and the current focus position is 1 meter away, that is different from the current focus position being at infinity.


About the only way AFMA would help, in my opinion, is if the camera body has a slightly different optical distance from the lens to the phase-detect autofocus sensor than from the lens to the imaging sensor; then an additive offset could be beneficial. Is that why AFMA is used? Essentially I mean that having focus at the PDAF sensor is not the same as having focus at the imaging sensor.



Answer




Is AFMA per-lens or a global adjustment?



In theory, it could be either depending on how the camera designer approaches it, but the usual case is per-lens for reasons I'll go into below. Making it global would be a ham-fisted way to go about it unless the camera only has one lens.




Can AFMA actually fully correct issues in poor-focusing lens+body combinations?



Not without characterizing the behavior of every lens at every combination of focus point, focal length and distance setting, which gets impractical quickly on systems with a lot of points. The camera manufacturers have very likely done some variant on that exercise and found that one adjustment per lens is sufficient.



What actually is AFMA adjusting?



It's just a way to tell the AF system that when it comes up with answer x, the correct answer is really x + k, where k is some constant. The units involved are known only to the camera manufacturer. Think of AFMA as you would making an adjustment for throwing a basketball into the basket: if you're constantly hitting the front of the rim, your throw needs to be adjusted so the ball lands some distance further back.



Is AFMA used for contrast detect autofocus? Is it used for Canon's dual pixel autofocus?




Where you use it has more to do with how the camera is constructed than how the focus is detected.


Taking focus measurements directly from the imager makes AFMA redundant because the AF system would already be basing its actions on exactly what will be recorded.


When the path from the lens to the AF sensors differs from the path to the imager, mechanical tolerances can make the distances vary, making for a difference in what's in focus on each. The manufacturer calibrates for that as best it can at the factory, and if there was a way for you to do that at home, there would be a separate adjustment for it. Because lenses are mechanical beasts with their own mechanical tolerances, it makes more sense to screw on a lens and adjust the entire system from end to end.


Usually, you don't need the adjustment at all if the body and lens are aligned to the manufacturer's specs. I have one lens that needs correction, and that's not surprising since it's 25 years old, has seen a lot of use and has never been to the shop for adjustment.



How on earth can AFMA work if the operation of the AF system is dependent on the current focus position?



It isn't. All the AF sensors do is give enough information to make an in-focus/not-in-focus decision and, optionally, whether it's too far back or forward. The body moves the lens until it gets an in-focus indication or it finds the two positions where it makes the transition from out in one direction to out in the other. (That's a bit of an oversimplification, but it's enough for this discussion.)



I mean, if the focusing isn't working, shouldn't the amount off be dependent on the current focus position?




It might be, but if you look at how focusing is accomplished in most lenses, odds are good the error will be a fixed amount rather than something nonlinear. Someone who understands optics better than I do can comment on whether or not lenses behave non-linearly as the focus position changes, but my suspicion is that, given everything else, it's not enough to matter.


depth of field - How can a Frazier lens achieve that massive DOF?


I've read on PetaPixel about this "Frazier lens" that can achieve a massive DOF. The video shows the capabilities of this lens, and I must say I'm very impressed.



How can this lens achieve such a DOF? Most of the shots were made on a sunny day, so I'm guessing it has a tiny diaphragm.


Does anyone have more technical information about this lens?



Answer



The lens is nothing magical and does not have "infinite depth of field" as some have claimed. However it does achieve a very deep depth of field, by a combination of short focal length, small aperture and tilted plane of focus. It was developed by wildlife photographer/filmmaker Jim Frazier who was fed up with the limitations of traditional lenses for shooting wildlife subjects close up. According to Jim the device started out as a mirror on a stick attached to a camera which allowed ground level shots without the camera or operator lying on the ground. The device needed to be refined, as he found himself panning left when the subject went right, due to the mirror!


The "Frazier lens" is really a system of lenses, the main body of which is a wide adaptor, i.e. the opposite of a teleconverter. This unit accepts one of a series of "taking lenses" of different focal lengths. These are traditional optics that have been specially modified for the system, including sealing the units to keep dust out and locking the controls (aperture is set via controls on the main lens unit).


Traditional macro lenses use a long focal length to achieve a comfortable working distance (a long focal length allows 1:1 magnification at a greater physical distance between the subject and front of the lens). A downside of a long focal length is decreased depth of field.


The Frazier lens system allows wide-angle macro shots. It also includes a prism element that lets the lens body articulate to get close to small subjects while the camera body stays further away, making up for the short working distance. Here's the lens in use, showing the articulation:



The lens also tilts the plane of focus (like a tilt-shift) to maximise the depth of field with respect to the ground plane (where most subjects/items of interest are likely to be).


It's worth noting that the apparent depth of field is much larger when you look at low resolution imagery such as standard definition video, as depth of field is defined as the range in which objects are "acceptably sharp". When you downsample you lose the ability to distinguish really sharp areas and thus everything can look "acceptably sharp". You too can achieve really deep depth of field with a DSLR using an ultra wide lens if you downsample your images to 0.3 megapixels.
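To put rough numbers on this, here is a quick sketch (mine, not from the original answer) using the standard thin-lens DoF formulas. The circle-of-confusion (CoC) values are illustrative assumptions: ~0.030 mm is a common full-frame print criterion, while the larger value stands in for heavily downsampled output where a much bigger blur circle still looks "acceptably sharp".

```python
def hyperfocal(f_mm, N, coc_mm):
    """Hyperfocal distance (mm): H = f^2 / (N * c) + f."""
    return f_mm ** 2 / (N * coc_mm) + f_mm

def dof_limits(f_mm, N, coc_mm, s_mm):
    """Near/far limits (mm) of acceptable sharpness when focused at s_mm."""
    H = hyperfocal(f_mm, N, coc_mm)
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far

# 14 mm lens at f/8, focused at 2 m. The second CoC models output
# downsampled so far that a 3x larger blur circle is still "sharp".
for coc in (0.030, 0.090):
    near, far = dof_limits(14, 8, coc, 2000)
    far_txt = "infinity" if far == float("inf") else f"{far / 1000:.2f} m"
    print(f"CoC {coc} mm: sharp from {near / 1000:.2f} m to {far_txt}")
```

The larger CoC pushes the near limit much closer to the camera, which is exactly the "everything looks sharp in SD video" effect described above.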



canon - Can I set a button to a particular focus distance on 5DIII (and ML)?


Can I, by simply pressing a button, set the autofocus to a precise, pre-registered distance value? (And then, I guess, lock AF.)


The goal is simple: hyperfocal focusing. On my zoom lens, there are no distance markings between 1 m and infinity, so it is impossible to judge the focus distance this way.


The only workaround I actually have is to keep a piece of string cut to the precalculated hyperfocal distance and autofocus on the end of that string. But in practice, in the situations I will be in, that will be impossible.


A friend of mine said he can actually do that with his old Pentax.



I looked everywhere in the Canon menus and everywhere in the Magic Lantern menus, but couldn't find anything like that.


Is that possible?



Answer



With the Canon system, pre-set focus distances are registered in the lens, rather than the body. The only lenses of which I am aware that offer this feature are the Super Telephoto series. But even those lenses wouldn't really help you very much for what you say you want to do.


There is no single hyperfocal distance for a given lens at a given focal length. Changing any one of several parameters will also change the calculated hyperfocal distance. These parameters include: aperture, sensor size, intended display size, intended viewing distance, etc. Change any one of them and the hyperfocal distance also changes.


For example, I can figure the hyperfocal distance for my 100mm lens set at f/8 mounted on my Canon FF camera, based on an intended display size of an 8x12" print viewed from 12 inches: the hyperfocal distance calculates to 155 feet. If I shoot the photo with the focus point aimed at 155 feet and make an 8x12" print, everything from about 77 feet to infinity will look like it is in focus.


But what happens if I decide to make a 16x24" print from the same image file? Using the larger print size and assuming the same viewing distance, the same DoF calculator says the depth of field now only extends from 103 feet to 312 feet when the camera was focused at 155 feet! To get the hyperfocal distance for a 16x24" print using the same camera and lens set at f/8, I should have focused at 308 feet.


Why is this so? Because depth of field is an illusion.
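The arithmetic above follows from the standard approximation H ≈ f²/(N·c) + f, where the circle of confusion c halves when the print's linear dimensions double at the same viewing distance. The CoC values in this sketch are my own illustrative assumptions, so the results land near (not exactly on) the 155/308-foot figures quoted above:

```python
def hyperfocal_m(f_mm, N, coc_mm):
    """Hyperfocal distance in metres: H = f^2 / (N * c) + f."""
    return (f_mm ** 2 / (N * coc_mm) + f_mm) / 1000

# 100 mm lens at f/8 on full frame. Assumed CoC of 0.027 mm for an
# 8x12" print viewed from 12"; a 16x24" print at the same viewing
# distance doubles the enlargement, so the tolerable CoC halves.
h_8x12 = hyperfocal_m(100, 8, 0.027)
h_16x24 = hyperfocal_m(100, 8, 0.0135)
print(f"{h_8x12:.0f} m vs {h_16x24:.0f} m")  # roughly 152 ft vs 304 ft
```

Whatever exact CoC your calculator assumes, the key relationship holds: double the enlargement, halve c, and the hyperfocal distance roughly doubles.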


Wednesday, 22 July 2015

How can I convert a proprietary raw image format to Adobe DNG?


I have a proprietary raw camera image file taken with an Aptina camera.


Its raw file format is 16 bits per color channel, 'GRBG' mode, with file extension .raw. No headers, nothing, just plain raw Bayer samples. I want to convert this format to DNG, as none of the raw image decoders (dcraw, gimp, …) seem to support it.





  1. Is there any tool/converter which can handle the Aptina sensor's raw Bayer format and allow me to convert it to DNG format?




  2. If not, I want to write a small C program to convert it myself. Where can I find the detailed specification of the DNG format, its structure, etc.? Is there any document explaining the DNG format?





Answer



DNG files are based on the TIFF/EP standard, ISO 12234-2, (they're essentially bitmaps with extra metadata) so if you start out with an appropriate TIFF I/O library that will get you part way, but you'll need to fill in the extra data required by DNG, which could be tricky.


Raw converters need to know more than just the pixel intensities. Other relevant information includes pixel shape and orientation, and the properties of the dyes: just knowing they go GRBG is not enough; you may need to know the precise shade (or, more accurately, the frequency response) of each dye to create the colour representation, as this varies between manufacturers. I think DNGs handle this by means of an embedded "camera profile".
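As a starting point, here is a minimal sketch of reading the headerless samples before any DNG wrapping happens. It assumes 16-bit little-endian samples and a known width/height; both are assumptions, since the file has no header to tell you:

```python
import struct

def read_bayer_raw(path, width, height, little_endian=True):
    """Read a headerless file of 16-bit Bayer samples into a row-major
    list of rows. Width/height must be known out-of-band (no header)."""
    fmt = ("<" if little_endian else ">") + "H" * width
    rows = []
    with open(path, "rb") as fh:
        for _ in range(height):
            rows.append(list(struct.unpack(fmt, fh.read(width * 2))))
    return rows

def cfa_color(x, y):
    """Colour of the filter at pixel (x, y) in a GRBG mosaic:
    row 0 is G R G R ..., row 1 is B G B G ..."""
    if y % 2 == 0:
        return "G" if x % 2 == 0 else "R"
    return "B" if x % 2 == 0 else "G"
```

From here, a DNG writer wraps these samples in a TIFF structure with DNG-specific tags such as CFAPattern and ColorMatrix1; the authoritative reference for question 2 is Adobe's freely downloadable DNG Specification.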



edit:


There's an open source photo management application called digiKam which can write DNG files, so your best bet would probably be to look through its source and try to extract the DNG encoding functions.


portrait - What aperture should I use to photograph people and why?


What f/# do you use to photograph people, and why? I know this varies from shot to shot.


Let's suppose that you are photographing either a single person or a couple, and that it is outside on a somewhat overcast day. What depth of field do you prefer to use, and why?


What about indoors in a more-controlled setting — what is typically used in studio photography?



What about in other conditions?



Answer



Totally depends on your goal. Think background first. What story do you want to tell? Epic background, big mountains, looking to deliver a sense of grandeur with your subject? Go big: f/22 or narrower if you have it. If you want to really isolate your subject and use the background as simple tone, a splash of delightful color, open up to f/1.4. Be careful here, as you may have parts of the face go out of focus. Step back with a long lens and you'll eliminate that problem (by increasing the overall depth of field).


Safe bet though: f/2.8 - f/5.6. This will give you moderate depth (keeping the person in focus and throwing the background softly out).
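To see why "open up to isolate, step back to recover the face" works, here is a rough thin-lens estimate (my own sketch, not part of the original answer) of the blur-disc size of a background at infinity:

```python
def bg_blur_mm(f_mm, N, subject_mm):
    """Approximate on-sensor blur-disc diameter of a background at
    infinity when focused on a subject at subject_mm (thin-lens model)."""
    pupil = f_mm / N  # entrance-pupil diameter
    return pupil * f_mm / (subject_mm - f_mm)

# 85 mm lens, subject 2 m away. A full-frame sensor is only 24 mm tall,
# so a multi-millimetre blur disc means the background dissolves entirely:
for N in (1.4, 2.8, 5.6, 22):
    print(f"f/{N}: blur disc ~{bg_blur_mm(85, N, 2000):.2f} mm")
```

The blur disc shrinks in direct proportion to the f-number, which is why the f/2.8-f/5.6 range lands in the "softly out of focus but recognisable" middle ground.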


Good luck!


Monday, 20 July 2015

canon - Can I trigger Yongnuo YN-622N and YN-622C together at the same time?


I am a Nikon user and my friend uses Canon. If we use Yongnuo YN-622N & YN-622C can we trigger both the Nikon and Canon flashes at the same time?



Answer



No, you can't.


To quote from v5.0 of The Other YN-622C User Guide II, page 20:



The Canon YN-622C is NOT compatible with the Nikon YN-622N. The camera codes are not the same.




This actually makes sense when you consider the completely different pin/contact arrangement on the hotshoe and the inevitable differences in signal protocols.


The only way I could theorize that you might get YN-622 gear to work between both systems would be to switch all the gear to 603 compatibility mode, or to try the legacy mode that mirrorless shooters use; but then, of course, you'd lose all the added TTL/HSS/remote commanding capability of the 622s across systems.


Manual triggers, like the RF-603 II/YN-560-TX/YN-560III-IV/YN-660 combo might work, but you'd want to be sure that whatever unit you put onto the camera as the on-shoe transmitter is compatible with the specific camera body (check the pins on the foot).


workflow - Why store both JPEG and raw?


DSLRs often have the ability to store both a JPEG and a raw file.


Given that the primary benefit of in-camera JPEG over raw is the smaller filesize, and that JPEG+raw is going to store even more data than raw alone, it seems like you're just wasting space on your card and making your workflow more complicated if you store both.



Why bother storing both JPEG and raw in camera, instead of just a raw file?



Answer



I am an amateur photographer going semi-pro and even though I still only use RAW I have come across a few occasions where RAW+JPEG was needed (or at least would be a great convenience):



  • ready-to-email files (like @rowland-shaw wrote) - sometimes you need to get your photos out there as fast as possible

  • light photo files to browse through - given that your workflow might include taking a look at your photos from a not-so-capable computer (or other device) before importing them, or even during the shoot, it is faster to load a 1.2MB JPEG than a 15MB RAW file

  • timelapse - OK, this is overkill, but when shooting timelapse I want to have a bunch of small JPEGs ready to be opened in QuickTime to check the result, and then go through the RAWs


In general, JPEGs are for fast preview on other devices (other than your camera) while RAWs are for editing.


autofocus - Is the color of the AF assist light important?


What is the difference between white AF assist lights and the darkroom-style red/orange active autofocus assist lights?



Answer



AF assist (if that's what you're referring to) is simply a way to illuminate a dark subject to help the camera focus. The colour of the light isn't really important - what's more important is that it (a) is bright enough to allow the camera to focus (b) doesn't adversely affect the metering and (c) is efficient in terms of battery use. A red LED would seem to be the best way to meet all these criteria, which is why it is the most common kind. Of course LED technology is improving all the time, which may explain why some cameras can get away with a green beam.


Sunday, 19 July 2015

Saturday, 18 July 2015

autofocus - How can I use AF+MF on Fujinon lenses with hard stop focus rings?


Fujifilm cameras have an AF+MF mode which autofocuses when the shutter button is half-depressed, then lets you correct it using the focus ring.


However, some Fuji focus rings have hard stops, so you can't turn the ring past a certain point. This creates a problem because the scale of the focus ring does not change after the AF stage. This means that depending on where the ring was before AF, you can't correct the focus past a certain distance.


This picture explains the problem better:


AF-S


Is there a setting or fix for this problem? I want to be able to use AF-S, then correct it if the camera gets it wrong, and this limits my ability to do so.


In theory it can definitely be altered, since Fuji lenses are focus-by-wire.




sensor - What does expanded ISO mean?



Especially with regards to image quality.


I am particularly interested in when you expand to a lower ISO, say 50 rather than 100. Generally the standard high ISO is too noisy, with the expanded ISO even more so.





lens - What can cause different aberration on left side and right side of photo?


I have a photo of a starry sky in which I find different kinds of aberrations on the left side and the right side of the photo. It seems consistent across various photos so far, but this lens is quite new to me and I've had only so much time to get familiar with it.


The lens is a Samyang 14 mm f/2.8 ED AS IF UMC, but do not take my sample photo as proof of low quality. The sample was taken in high-humidity weather with a cheap entry-level system camera by an inexperienced star photographer - end of disclaimer.


This photo was taken with a 20 sec exposure, f/4.0 and ISO 3200. Focus should be "quite good", for the photo is the sharpest one from a set of five that I took by manual focus bracketing. Image stabilization is off because I have the camera on a tripod and am using a wireless remote shutter release. The Samyang 14mm lens is made for full frame cameras, but mine has an APS-C size sensor, which means I'm not even getting the worst of the aberrations in my photos :)


Center of photo at 100% pixels:
enter image description here


Left and right side near edges:
enter image description here enter image description here


On the left side some of the stars appear to be vertically split in half.
On the right side the brightest stars have a bright blue shadow cast to their right, and less bright stars have a lesser shadow.



What can cause the aberrations to be different on either side of the photo?


Full size fine quality jpeg image here (12 MB).



Answer



What you're seeing is most likely the artifact of a slight misalignment of the lens: the sensor plane is not exactly aligned with one or more lens elements, which gives rise to slight variations from left to right.


All lenses have a degree of misalignment, and generally it is more visible the shorter the focal length. So for a 14mm lens even a minuscule misalignment can be visible in images. I would expect that your 14mm lens is within factory spec.


Normally such effects are masked by the depth of field but shooting a starfield at infinity with a wide angle lens is a bit of a torture test for the lens designer!




Edit: on closer inspection the artifacts look like comatic aberration (that's comatic, not chromatic), which is common in wide angle lenses and is probably the result of a slight misalignment of a lens element/group. You might be able to send the lens in for service, but it may be deemed to be within spec.


Friday, 17 July 2015

How can I avoid vignetting when shooting in raw with a Canon Powershot?


I'm a very inexperienced person when it comes to photography and raw processing. I'm using a Canon SX 60HS camera. No attachments — filter/hood/etc. — of any sort were attached to the lens.



After switching to raw+jpeg mode, I started noticing black round corners for some of the images. I had never experienced this while using camera in automatic mode.


After researching, I found out this is called vignetting and is a limitation or function of the lens and some other parameters at the time the shot is taken.


Why does this not happen when taking the picture in auto mode? Even the jpeg images generated in raw+jpeg mode do not show vignetting. Only raw images show it.


If it matters, I'm using Darktable 2.4.1 on Ubuntu. Since I don't have access to Windows PC, I can not test on Canon software.


I understand vignetting can happen. However, the camera's image preview does not show vignetting. Is there any way to detect vignetting in the camera while I'm at the scene, so that I can adjust parameters and take the picture again? What sort of information should I check for possible vignetting? I tried to find a common pattern, but so far no luck.




Answer



Many modern digital cameras are designed with the expectation that lens compromises will be... de-compromised... in RAW processing. That appears to be the case here.


The preview image is based on the JPEG rendering, which includes this processing. The RAW file, however, does not.


The amount of apparent vignetting will not be dependent on the scene, but on the lens zoom and aperture at which the picture was taken — that's why you're seeing differences in different situations.



The answer here is to use the lens correction module in darktable to apply this correction to your images as you process them. There's no other way around it — it is simply how the camera was designed.
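Conceptually, what such a correction does is apply a radial gain map that boosts the corners. Here is an illustrative sketch using the cos⁴ natural-falloff model; the 30° corner angle and the `strength` parameter are my own assumptions for demonstration, not a measured profile for this lens (darktable's lens correction module uses real measured profiles instead):

```python
import math

def vignette_gain(x, y, width, height, strength=1.0):
    """Gain to apply at pixel (x, y) to undo cos^4-style light falloff.
    Assumes an illustrative ~30 degree field half-angle at the corner;
    `strength` scales that angle. Not a measured lens profile."""
    cx, cy = (width - 1) / 2, (height - 1) / 2
    r = math.hypot(x - cx, y - cy) / math.hypot(cx, cy)  # 0 centre, 1 corner
    theta = r * strength * math.radians(30)
    return 1.0 / math.cos(theta) ** 4

# Centre needs no correction; the extreme corner gets a strong boost:
print(vignette_gain(960, 540, 1920, 1080))  # ~1.0
print(vignette_gain(0, 0, 1920, 1080))      # ~1.78
```

Multiplying every pixel by its gain flattens the frame, which is exactly the operation the camera's JPEG engine performs silently and the raw file skips.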


camera basics - What is flash duration?


When speaking in terms of flash specifications, what is flash duration and how does it impact exposure?



Answer



Simply, it's the duration the flash is actually on, emitting light.



This doesn't impact exposure per se; as long as the same amount of light is emitted, you'll get the same exposure. But it can affect the result: as the flash duration gets shorter, it has a better ability to freeze motion.


For most photography this won't matter very much. 1/1000s is a typical duration for decent flashguns at full power1, which is more than sufficient to freeze normal motion in a photograph. Other aspects will have much more bearing on the results; flash power, shutter speed, and so forth.
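The freezing effect is easy to quantify: the blur is simply subject speed multiplied by the effective exposure time, which is the flash duration whenever the flash dominates the ambient light. A trivial sketch (the 5 m/s figure is an illustrative guess for fast hand movement):

```python
def motion_blur_mm(speed_m_s, duration_s):
    """Distance (mm) a subject travels while the light pulse lasts."""
    return speed_m_s * duration_s * 1000

# Gesturing hands might move at roughly 5 m/s:
print(motion_blur_mm(5, 1 / 1000))   # ~5 mm of travel at a 1/1000 s pulse
print(motion_blur_mm(5, 1 / 20000))  # ~0.25 mm at a short low-power pulse
```

This is why speedlights at low power (which fire much shorter pulses) are popular for droplet and splash photography.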


The one example I can imagine where shorter flash durations would be particularly desirable is for high-speed photography.



  1. Studio strobes are sometimes rated faster, but thanks to their different characteristics, may be slower in practice. More detail here: http://www.scantips.com/speed.html


Thursday, 16 July 2015

How to white-balance photos shot in mixed-lighting environments?


I have a dSLR and I often find myself taking pictures of people in 'mixed lighting' environments (e.g. tungsten lighting and daylight, fluorescent and tungsten, or even the ‘nightmare lighting scenario’ of mixed fluorescent, tungsten and daylight). Since white balancing won't work (or at least it won't work completely) to remove the color cast from these sorts of mixed environments, what can I do to manage the multiple types of lighting in my environment?




Asked by Finer Recliner:


I was at a wedding recently, and the reception hall had these huge windows that let a lot of sunlight through. The overhead lamps used inside the reception hall had a strong yellow tint to them. Given the two different types of light sources, "white" seems to have a vastly different definition in different parts of the photo. I found that a lot of my photos were near impossible to correct the white balance in post production (I shot in RAW).


Here is an example from the set I shot at the wedding:


mixed lighting at wedding



If I set the white balance relative to something outdoors, everything inside looks too yellow (as seen). If I set the white balance relative to something indoors, everything that falls in the sunlight looks too blue. Neither looks particularly "good".


So, does anyone have tips for how to handle this sort of situation next time I take a shot? I'll also accept answers that offer post-processing advice.


// As an aside, I'm just a photography hobbyist... not a professional wedding photographer ;)



Answer



It is important to understand that different types of lighting will produce different ‘color casts’ to the light in a photograph. While the eye is great at correcting for color ‘on-the-fly,’ our cameras aren’t very good at the task of adjusting in mixed environments at all. This can result in severely yellow/orange pictures, or sickly green ones, depending on the lighting present at the location. Generally the approaches to correcting for mixed-environment lighting depend on how much control you will have over the environment. The solution isn't to do all of these things, but to know about each of them such that you can do one (or more) depending on the situations you encounter.




Complete control


(These solutions assume complete ability to control the ‘non-studio’ environment where the pictures will be taken. Because of the nature of completely controlling a location, to some extent it also assumes a large(ish) budget, and a good amount of time to engineer the environment.)



  • Turn off the ‘contrasting’ light source(s) - The first thing to look for is whether you can turn off one (or more) of the competing light sources. In terms of ‘easy solutions’ this is always what I look at first because if I am able to remove ‘offending’ light sources with the flip of a switch, I can often bring the color cast back to ‘normal’ (or close to normal anyway). On occasion I have even found myself turning off lamps and gaffer-taping or clamping my own flashes inside of the lamp to provide the illumination of the subject from a ‘correct’ direction (such as when the lamp is in the frame so having it on would be expected).


  • Overpower the ambient lighting - If you aren’t in a position to turn off the contrasting light sources, the ‘next best thing’ might be to simply overpower the offending light with your own light sources. This works best if you’re able to throw enough light to completely light the scene yourself; if you only have enough power to light the subject OR the background, but not the subject AND the background, this will be tough to pull off. I’ve included this one because I know photographers who do the kinds of shoots where they have the time to engineer solutions like this. For those of us who work more ‘on-the-fly,’ or are hobbyists, it probably isn’t practical to completely engineer the lighting in this manner.

  • Gel everything - This is the ‘Hollywood’ solution for using practical locations and if you look at many ‘behind the scenes’ extras on DVDs you can often see combinations of gels being used everywhere… CTO gels on all the tungsten, CTG on the fluorescent, ND or CTB gels on all the windows, etc. While it often isn’t practical to gel everything, doing so will essentially ensure a proper color balance. I’ve included this one in the interest of completeness, but for ‘most of us’ this will be an impractical option short of working on a large-scale paid shoot (where this sort of thing is done all the time)…




Partial control


(These solutions can often work in situations where complete control of the environment is not possible, but you do have some time to plan your photography at least a bit ahead of time or prepare the environment)



  • Gel something - If I’m in an environment that contains 3 different types of lighting I may not have the time, materials, or wherewithal to gel everything… But if given the choice I’ll often choose to gel something. Generally speaking the thing that I try to gel is the fluorescents if possible because a picture with an orange color cast is easier to manage and looks more ‘correct’ to the eye. A picture with a heavy green color cast on the other hand… That only looks good if you’re in the Matrix.

  • Control the angles - This will be situation dependent, but sometimes it’s possible to eliminate ‘offending’ light sources simply by not shooting in the direction of the light source. For example, I recently shot a wedding reception in a gym with lots of ugly fluorescent lighting up above, as well as one wall of the gym being all windows. The solution ended up being to balance for the fluorescents and simply not shooting anything with my camera pointing towards the windows (though admittedly not shooting towards the big bay windows was also because it was a daytime wedding and the daylight would have blown out all my pictures).

  • White balance for the main subject(s) - If ‘all else fails’ and you have no other options available, at least white balance for the subject. This will (more or less) get the color cast right for the skin tones, which are the most important element of most pictures. The rest of the colors in the photo may be off, but for some photos this will be forgivable, and in others you have a few options still available to you in post.





No control


(Though these can be solutions that are arrived at ahead of time, since they are post-production solutions they can also be used as ‘last ditch’ efforts, where either no color control was attempted during the shoot, or the attempts at color control were not successful.)



  • Convert to B&W - If you weren’t able to do anything to control the mixed lighting and/or the lighting came out in an unattractive way despite attempts to control it, one answer may be to simply make the picture black and white in order to remove all color cast.

  • De-saturate in post - This is especially effective in situations where tungsten lighting has made pictures too orange. Often it can be an easy fix to simply ‘desaturate to taste’ and add a bit of blue (to tone down the pinks that invariably result from the desaturation).

  • Mask the area(s) with improper color cast and rebalance them - Time consuming, but certainly an option if you have pictures that you need to keep, but the color cast is wrong. It is possible to come up with something relatively good looking by correcting the photograph for skin tones, and then using masks to further correct any areas where the remaining color is off.
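For the masking/rebalancing approach, the per-zone correction itself is just per-channel scaling: pick a patch that should be neutral within one lighting zone, then scale red and blue so the patch matches its green channel. A minimal sketch (the pixel values are made up for illustration):

```python
def wb_scale_from_neutral(r, g, b):
    """Channel multipliers that make a should-be-neutral patch grey
    (green is kept fixed; red and blue are scaled to match it)."""
    return g / r, 1.0, g / b

def apply_wb(pixel, scale):
    """Apply the multipliers to one 8-bit RGB pixel, clipping at 255."""
    return tuple(min(255, round(c * s)) for c, s in zip(pixel, scale))

# A white shirt under tungsten might read (240, 200, 140):
scale = wb_scale_from_neutral(240, 200, 140)
print(apply_wb((240, 200, 140), scale))  # → (200, 200, 200)
# In an editor you would apply these multipliers only within the mask
# covering that lighting zone, computing a different triple per zone.
```

Raw processors do essentially this under the hood for a global white balance; masking just restricts where each set of multipliers applies.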



post processing - How to preserve detail when downscaling scanned photos?


I'm mixing analog with digital photography — I tend to take better photos when I can't look at a small digital screen, as it unconsciously forces me to get the picture right before I shoot instead of checking afterwards whether it succeeded. I also like the grain film gives me. However, most shots aren't worth spending much time in the darkroom, and scanning them is a lot faster. I'm wondering how best to convert the film to digital.


Scanning at lower than the maximum (non-interpolated) resolution of my image scanner tends to exaggerate the contrast of the grain and screw up the mid-tones because of that.1 On the other hand, the resulting size of the scans is major overkill in most cases (I don't print them at A0 size), so I want to downscale the pictures. However, the grain confuses most methods and results in loss of detail and/or loss of the characteristic grain. For example, bicubic sharper tends to result in a "bigger" grain than the original. Now, the best compromise I've found so far is simply using bilinear interpolation, but I doubt that this is the best method to preserve both overall details and the look of the grain.



In short: when processing the scanned negative, how do I "best" preserve both image quality and the characteristic look of the grain?


As Stan Rogers pointed out below, there's a difference between truly scanning the grain of the film, and scanning the "grainy" character of film at lower resolutions. This question is about the latter; Stan Rogers explains how to do the first.


I'm looking for alternatives to the standard Photoshop options to try out, and reasons for each. Is the fact that every film has a different grain characteristic a problem? Are customized advanced noise reduction methods perhaps an option?


1. I'm using a Minolta DiMAGE Scan Elite 5400, using the software provided by Minolta.


As requested, here is a scanned picture with different resize methods. I just scanned this, scanning a B&W negative to get a better tonality range. I turned off the grain dissolver, automatic dust and scratch removal, etc., for obvious reasons. On the right the pictures are upscaled again with Nearest Neighbor. I set the 8 bit scan to 16 bits before downscaling, assuming it reduces the risk of banding if I choose to mess with curves, channel mixers, etc. No other modifications (usually I first retouch dust & scratches, and in the end channel mix the picture to B&W, then switch to 8 bits again).


The Different Scaling Methods




dslr - Why can't my SLR autofocus on certain parts of a scene?


I had my Nikon D7000 on single-point auto focus and it just would not focus on the spots I wanted. I have marked the spots that could not be focused on with red circles, and the parts that could be focused on with green circles.


Could not focus on red circles


I was using a Nikon 35 mm f/1.8, although I don't think this was a lens problem, since it has happened before with other lenses too.



Answer



Short answer: Current autofocus systems only work when the AF area contains high contrast. The places where it doesn't work don't contain enough, and the areas which do work, do.





Here's what's going on in more detail:


There are two different types of autofocus systems in modern cameras.


One is contrast-detection AF, which is used in most point-and-shoot cameras and in live view in most DSLRs. This works by moving the lens back and forth until the setting which gives the most contrast between adjacent pixels is found. Obviously, this requires actual contrast in the subject — you can't focus on an all-white wall.
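The contrast-detection idea can be sketched as maximising a simple sharpness metric; a flat grey patch scores near zero at every lens position, which is exactly why those AF points give up. (Illustrative code, not any camera's actual algorithm:)

```python
def contrast_score(rows):
    """Toy focus metric: sum of squared differences between
    horizontally adjacent pixel values. Higher = more edge contrast."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in rows for i in range(len(row) - 1))

flat_grey = [[128, 128, 129, 128]] * 4   # a plain grey wall
edge_patch = [[20, 20, 230, 230]] * 4    # strong vertical edge
print(contrast_score(flat_grey))    # near zero: nothing to maximise
print(contrast_score(edge_patch))   # large: easy to maximise
# Contrast-detect AF racks the lens, re-reads the patch, and keeps the
# position that maximises this score; on the flat patch the score barely
# changes with focus, so the search never converges.
```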


The other type is phase-detect AF, which uses a beam-splitter to tell whether patterns of light and dark are back- or front-focused, and moves the lens accordingly. (This is what you're probably using with your Nikon DSLR.) Phase-detect AF also requires the focus area to have a pattern with high contrast in order to work.


There's more technical information about this here: How does autofocus work?, if you're curious.


In your example, the red points which don't work for focus are on a relatively plain gray area. There's simply not enough contrast for the AF system to do its thing. The green points, where you are able to focus, have clear, high-contrast detail — perfect.


metadata - Identify date from code on back of printed photo


I am digitizing all of my old family photos and was curious if anyone knew how to interpret this code and figure out the date the photo was taken or printed.


Either would be fine:


Sample Code #1:



032 12+00 NNNNN+15AU 0110


Sample Code #2:


046 12+00 NNNNN+16AU 0110


My guess was that 15AU and 16AU represent 15th and 16th of august but I could be off about that. I also don't know where to find the year.



Answer



Those appear to be codes from a Fuji Frontier automatic film processing lab machine or one of its older predecessors. Such machines were/are popular at mass retailers who did/do one hour photo processing and printing.


Users have some leeway in assigning what information is printed using the codes on the back of the print, so there is some variation depending on the specific user's preferences. Some mass retailers used a standardized format across all machines in all of their stores; other chains seem to have used whatever the individual tech who set up the machine selected, which can vary significantly from one location to the next. Here's what your first sample code probably means:



032 12+00 NNNNN+15AU 0110




032 - Identifies the specific machine among other machines the same operator may own. Mass retailers with fewer than 999 locations could assign a different code to every machine they owned in all of their locations using this field. This could also be used to represent the roll number, job number, or even the sequential negative/print number.


12+00 - Two codes representing the film maker and film speed along with film density. Used by the machine to apply a specific profile that had been previously entered for that particular film. Each machine could have different numbers assigned for the same film maker. Film maker '1' might be Kodak for one machine and Fuji for another. Film speed '2' might be for ISO 100, ISO 200, ISO 400, etc., depending on the numbers assigned for that machine. The two numbers after the '+' symbol usually related to film density. If the film was processed "straight" (i.e. ISO 400 film was developed as ISO 400) and the resulting density of the negatives was "average" (kind of like our modern expectation of the average brightness of a digital photo being 18% gray) it was usually +00. If the negatives were darker or lighter than normal then a '+' or '-' number would be applied to bring them back to an expected "average". This is where an operator paying attention could notice that the shots were supposed to be brighter or darker than "average" and use a more appropriate number.


NNNNN - represents the amount of correction manually entered by the operator for cyan/magenta/yellow (some machines reversed the order to yellow/cyan/magenta) and two user assignable parameters. If the letter 'N' is used, it means the default setting for that particular machine (at the time the print was made) was used and no additional manually entered correction was done. Since the user of each machine could assign their own default profiles and custom changes for particular films (identified using the XX+XX code) the amount of correction in this field is pretty much meaningless unless the machine's software version and profile loaded into the machine at the time the print was made is known.


+16AU - Identifies auto-correction applied by the machine's automatic routines. AU is for 'Auto', not 'August'.


0110 - Another user assignable sequence number. It could be the job, roll, or print number for that day.
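The field layout described above can be split mechanically. This parser is purely illustrative: it follows this answer's interpretation of one common layout, and real backprints vary by operator:

```python
import re

def parse_backprint(code):
    """Split a Fuji-Frontier-style backprint into the fields described
    above; returns None if the code doesn't follow this layout."""
    m = re.fullmatch(
        r"(?P<machine>\d{3}) "
        r"(?P<film>\d{2})(?P<density>[+-]\d{2}) "
        r"(?P<manual>[A-Z0-9]{5})(?P<auto>[+-]\d{2}AU) "
        r"(?P<sequence>\d{4})",
        code,
    )
    return m.groupdict() if m else None

print(parse_backprint("032 12+00 NNNNN+15AU 0110"))
```

Run against the two sample codes from the question, it yields machine 032/046, film code 12, density +00, no manual correction (NNNNN), auto correction +15AU/+16AU, and sequence 0110 — and, notably, no date field at all.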


I've also seen prints from Fuji machines that use the following format. When the < xxxx> brackets are used, the number inside is almost always a sequence number corresponding to the negative number on the roll of film.


< No. 02 > 003 22-02 NNNNN-32AU 0032


When dates were included in the codes they were usually fairly obvious, such as:


APR96 001 0111 NNNN


Some stores chose to print the date on a separate line from the developing/printing information.



Wednesday, 15 July 2015

What's the difference between using black and white mode in-camera, and converting in post?


For black and white photography, will there be any quality difference between using the built-in black and white mode, or shooting in color initially and then removing the color later using Photoshop (or similar application)?


I have a high-end compact camera, the Canon S95.




lighting - How do I set up and capture an event photo booth?


It is becoming somewhat common in my area to have what is colloquially called a photo booth at events such as weddings. Usually they are simply a camera on a tripod tethered to a computer, with a lighting setup and a background. An attendant is present to press the shutter release and to prepare the subjects. In other cases the setup is more elaborate, with an actual "booth", self-service shutter release functionality (a self-timer), and strips of dye-sub prints automatically created.


I am interested in specifically the simple setup of a photo booth with a camera on a tripod and an attendant/photographer assisting to capture the images.



  • What type of lighting setup would be needed?


  • What camera and flash settings would make sense for the various subjects?

  • Can I leave it in manual mode all night and only have an attendant press the shutter release?




terminology - What are shutter actuations?


I was just reading the question How many actuations are "too many actuations"? and trying to figure out what a shutter "actuation" actually is.


The only thing I can guess is that it's the number of times the shutter is released and restored — i.e. how many photos the camera has taken — but values of 50-100k don't seem like that many. Is that really what it is?


Why is the word "actuations" used here? Is it a term manufacturers use to sound fancy or to mask that modern cameras might fail after 100,000 images?


What happens when 100k is reached (or whatever the actual lifespan is) — does the shutter fail in a way that damages the camera? Can it be repaired, or is it effectively the end of that camera body's life?




Tuesday, 14 July 2015

How do lens hoods for pancake lenses work? (Canon EF 40mm with ES-52)



Normally I use lens hoods on all of my lenses (to block flare and protect the glass). A few days ago I bought the Canon EF 40mm STM (a pancake lens). So I looked for the matching lens hood and found the tiny ES-52.


Honestly, I can't imagine that this tiny piece mounted on the lens can really block the sun or protect the lens very well. Maybe in very extreme situations with light coming in at nearly a 90-degree angle, but that can't be all.


I don't want to start an opinion-based discussion; I'm looking for facts. Am I missing something, or is the lens hood "useless"? (Yes, I know that's a bit of an overstatement, but I think you know what I mean.)



Answer



The ES-52 works a little differently than most lens hoods we are accustomed to seeing. Instead of blocking off-axis light by extending a cylinder or cone perpendicular to the image plane and centered around the optical axis, the ES-52 blocks extraneous light by placing a smaller circular opening parallel to the image plane. Because of this design difference, it may not offer as much physical protection as a conventional design would. This is balanced by the fact that it preserves the compact size of the EF 40mm f/2.8 STM, which is one of the main draws of this lens.


As with all lens hoods designed for EF lenses, the OEM hood is designed for the angle of view needed on a camera with a full-frame sensor. If you are using an APS-C camera, the hole in the middle of the hood could be even smaller, since the angle of view needed is narrower. This is the same reason some aftermarket conventional hoods for other lenses extend further out for use on APS-C cameras.


ES-52


I have seen a few reports that the ES-62 hood (including its thread adapter), designed for the EF 50mm f/1.8 II, will also fit the threads on the 40mm pancake lens, but you then lose the compactness of the 'pancake.' Since the EF 40mm f/2.8 STM provides a narrower FoV on an APS-C camera than the EF 50mm f/1.8 II does on a full-frame camera, and the front element of the 50mm lens is both recessed from the hood mounting point and wider than the front element of the 40mm lens, there should be no issues with vignetting if you choose this option. Without comparing them side by side, though, it would be hard to say which one provides the narrower acceptance angle for off-axis light when mounted on the EF 40mm f/2.8 STM.


ES-62


Here's a very crude conceptual drawing that illustrates how each type of design blocks extraneous light. Please note that it is not to scale for either of the actual hoods. Light from point sources outside the yellow lines of each cone but inside the orange ones would fall on part, but not all, of the front element. Light from outside the cone formed by the orange lines would not fall on any part of the front element.



drawing
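To make the flat-plate geometry concrete, here is a rough numeric sketch of the two cutoff angles in the drawing for light arriving from far away. All the dimensions are invented for illustration; they are not Canon's actual ES-52 measurements.

```python
import math

# Geometric sketch of a flat "aperture plate" hood like the ES-52.
# Dimensions below are made-up illustration values, not Canon's specs.
def hood_cutoff_angles(opening_radius_mm, element_radius_mm, separation_mm):
    """Return (full_shade_deg, total_shade_deg) for distant light sources.

    full_shade_deg:  beyond this off-axis angle the front element is only
                     partially lit (the 'yellow lines' in the drawing).
    total_shade_deg: beyond this angle no light reaches the front element
                     at all (the 'orange lines').
    """
    full = math.degrees(math.atan2(opening_radius_mm - element_radius_mm,
                                   separation_mm))
    total = math.degrees(math.atan2(opening_radius_mm + element_radius_mm,
                                    separation_mm))
    return full, total

# Example: 14mm opening, 11mm front element, plate 4mm out (illustrative)
full, total = hood_cutoff_angles(14.0, 11.0, 4.0)
# full is about 37 degrees, total about 81 degrees
```

The point of the sketch is that moving the plate further out or shrinking the opening tightens both angles, which is exactly the trade-off the ES-52 makes against compactness.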


Monday, 13 July 2015

Where can I find the maximum lens weight for my camera?


Where can I find information about maximum allowed lens weight for my camera?


I'm asking because I have an opportunity to get an old Soviet-built 80-200mm lens for my Nikon D3100 at a very attractive price. But then it occurred to me: the lens weighs 960g (more than the camera itself) and the camera has a plastic body, so could it damage my camera?



The Nikon website only lists compatible types of lenses, not a maximum weight. I'm pretty sure this particular lens will be OK, but this kind of information would be very useful when considering the purchase of heavy lenses.


While I'm asking about Nikon, I think the question will be relevant for any camera brand.



Answer



It's strange, but I can't find this data either; clearly there is a maximum load the mount can take, but I can't find one published by any DSLR manufacturer, even ones like Canon who have in the past manufactured a 16.5kg lens!


In the manual for the Panasonic GH1 mirrorless camera, the maximum recommended lens weight carried by the mount alone was 1kg. The Nikon F mount is bigger than the Micro Four Thirds mount, so I would expect it to take at least that much, if not more.


I can say however that you won't have problems with a 1kg lens, as the mount will take this weight — people use this type of lens all the time without a collar (tripod mount for the lens).


If a lens was shipped by the manufacturer without a tripod collar, I would take this as an indication that it is safe to mount that lens without one. The Canon EF 85mm f/1.2L is just over 1kg and ships without a collar, so I would infer that this is well within the limit (though it is a short lens and so exerts a smaller moment on the mount).


I suspect one of the reasons it isn't specified is that manufacturers can rely on common sense; I've heard people report that even a 1kg lens felt too heavy for the mount when it actually wasn't. The mount is stronger than it looks!
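To illustrate why a short lens stresses the mount less than a long one of the same weight, here is a back-of-envelope moment calculation. The center-of-gravity distances are invented for illustration, not measured from any real lens.

```python
# Back-of-envelope comparison of the bending moment two lenses of equal
# mass put on the mount. Distances are illustrative, not measured specs.
G = 9.81  # gravitational acceleration, m/s^2

def mount_moment_nm(mass_kg, cg_distance_m):
    """Bending moment at the mount: weight times the lever arm out to
    the lens's center of gravity."""
    return mass_kg * G * cg_distance_m

short_lens = mount_moment_nm(1.0, 0.04)  # CG 4 cm from the mount
long_lens  = mount_moment_nm(1.0, 0.12)  # CG 12 cm from the mount
# Same 1 kg of glass, but the longer lens puts three times the bending
# moment on the mount, which is why long telephotos get tripod collars.
```

This is why "maximum lens weight" alone would be a poor specification: the lever arm matters as much as the mass.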


equipment recommendation - Most effective directed speedlight diffusers?


Since I have a wireless flash, I figured I may as well get a tripod adapter and some sort of diffuser so I can supplement strobes, and especially use it off-camera at locations where I wouldn't haul monolights.



What are the best options for balancing diffusion with light transmission from a flash?


The problem with a flash is that it starts out quite directed, so to begin with I'm assuming it's not well suited to umbrellas — either reflective or shoot-through. Without taking advantage of reflector geometry I was thinking it would take two diffusion layers to smooth it out, but that's going to cut out a lot of its light, which is already its weak point. But if I'm over-thinking this just let me know if there's a consensus on the best directed flash diffusers.




Sunday, 12 July 2015

lens - What's the best way to create a tilt-shift photograph?


Specifically, what's the best way to go about shooting photos that have a fake-miniature quality to them and produce high-quality (and believable) images? Is there some special lens that lets you shoot these kinds of photographs without resorting to Photoshop?


(For a software post-processing effect, see How to get a miniature effect on pictures without special equipment?)



Answer




The best way absolutely is to use a dedicated tilt-shift lens. They are quite pricey, and manual control only, but that is the best option, and you asked for the best. For some examples, see Canon's TS lineup and Nikon's Perspective Control lineup.


Assuming you don't have $1000 (or more) to spend, your next best option is something like a Lensbaby. They are somewhat pricey, but only a couple hundred dollars.


If neither of these is in your budget, you could try some of the other suggestions that have come up; I won't repeat them here.


You're going to be very hard pressed to find a software solution, because it's really difficult to change the plane of focus in software, and that's what a tilt lens does.


Saturday, 11 July 2015

astrophotography - How can I simulate a long exposure photo using a set of shorter exposure photos?


I'm shooting towards the sky to capture the stars. The exposure time is 15 seconds, short enough that the stars appear still, without trails. These photos are taken continuously, one after the other, because I want to make a time-lapse video showing the "movement" of the sky (it is the Earth that moves, actually).


For that, everything's fine. But I'd also like to do one other thing. If, instead of taking all those photos, I took just one with an exposure time equal to the sum of all of them together (15 seconds times the number of photos), I would see the trails the stars leave in the sky.



Is there any way to "create" that photo from all the short-exposure ones?



Answer



You can do it with a script for The GIMP. I did it a couple of years ago and got pretty good results. Remember to keep the time between exposures as short as possible, otherwise you will get visible gaps in the trails. It also helps to take a single dark frame at the end and subtract it from the result to remove hot pixels (I had intended to incorporate that into the script, but never got around to it).


My notes for the script:



Combined with renaming the first frame to base.JPG, run "gimp -b -" with:

(let* ((filelist (cadr (file-glob "IMG*.JPG" 1)))
       ;; load the first (renamed) frame as the base image
       (img (car (gimp-file-load RUN-NONINTERACTIVE "base.JPG" "base.JPG"))))
  (while (not (null? filelist))
    (let* ((filename (car filelist))
           ;; load the next frame as a new layer on top
           (layer (car (gimp-file-load-layer RUN-NONINTERACTIVE img filename))))
      (gimp-image-add-layer img layer 0)
      ;; lighten-only keeps the brighter pixel, accumulating the trails
      (gimp-layer-set-mode layer LIGHTEN-ONLY-MODE)
      ;; merge immediately so memory use stays constant
      (gimp-image-merge-visible-layers img CLIP-TO-IMAGE))
    (set! filelist (cdr filelist)))
  (gimp-file-save RUN-NONINTERACTIVE img (car (gimp-image-flatten img))
                  "test2.jpg" "test2.jpg"))

For subtracting the dark frame, my notes say, "I opened this as a layer on the composite image (the result of my gimp script), and set the dark layer's mode to Difference."
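If you'd rather not use GIMP, the same lighten-only stacking can be sketched in Python with NumPy: a "lighten" blend is simply the pixel-wise maximum across frames. The tiny demo arrays and the commented file-loading snippet are illustrative only.

```python
import numpy as np

# Lighten-only stacking, as in the GIMP script: for each pixel, keep the
# brightest value seen across all frames. Optionally subtract a dark
# frame afterwards to suppress hot pixels.
def stack_lighten(frames, dark=None):
    out = frames[0]
    for frame in frames[1:]:
        out = np.maximum(out, frame)  # pixel-wise "lighten" blend
    if dark is not None:
        # subtract the dark frame, clamping so values stay in 0..255
        out = np.clip(out.astype(np.int16) - dark, 0, 255).astype(np.uint8)
    return out

# Tiny synthetic demo: two 1x2 RGB 'frames'; the brighter pixel wins.
a = np.array([[[10, 10, 10], [200, 200, 200]]], dtype=np.uint8)
b = np.array([[[50, 50, 50], [100, 100, 100]]], dtype=np.uint8)
trails = stack_lighten([a, b])
# trails == [[[50, 50, 50], [200, 200, 200]]]

# With real files you would do something like (hypothetical names,
# using Pillow):  frames = [np.asarray(Image.open(p), dtype=np.uint8)
#                           for p in sorted(glob.glob("IMG*.JPG"))]
```

Merging with a running maximum, rather than holding all frames in memory, mirrors the merge-inside-the-loop choice in the GIMP script.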


Friday, 10 July 2015

Why does wide angle lens adapter + large aperture make images blurry?


I have a kit 18-55mm lens on my Canon. I also have a .45 wide-angle lens that I use on the kit lens. The results are fairly sharp.


When I try to use the .45 wide-angle adapter on the 50mm at f/1.4, it produces blurry images. Stopping the 50mm down to f/5.6 or smaller, the images become sharper.


The 50mm produces good images on its own, so it's not the 50mm lens. The 18-55mm also produces good images, both by itself and with the .45, so it's neither the kit lens nor the .45 wide-angle adapter.


Keep in mind that I manually focused, had a tripod, good light, and was shooting still photography.



So, why is this happening? Why can't I use a wide-angle adapter on my 50mm at a large aperture? And why does it make the photo blurry?




telephoto - Is a macro lens suitable for distant subjects - wildlife, sports, portraiture?


Some macro lenses have a really nice focal length that would make them a nice prime-telephoto lens, but are there any downsides of using a macro lens when shooting distant subjects (besides the lack of zooming)?



Answer



Most prime macro lenses are suitable for distant subjects. However, there are some exceptions:




  • the king of macro photography, Canon MP-E 65, will not focus far enough to fit more than an eye or nose on a portrait;





  • some macro lenses, like the Pentax DA 35 Limited Macro, have a short focal length and suit distant subjects only for environmental shots showing context rather than details of the subject; lenses shorter than about 50mm on APS-C or 75mm on full frame are generally not considered suitable as portrait lenses;




  • some zoom lenses are also sold as "macro" lenses; generally they have a variable aperture similar to consumer zooms. You can take portraits with them, but you have to use other tricks to get good background separation (e.g. a distant background, a plain background, or lighting the subject so the background is underexposed).




Macro lenses are made to be comfortable for precise manual focusing (because that's how macro is mostly done), so their large focusing range is spread over almost a full turn of the focusing ring. This means autofocus can be a bit slow, especially if there's no focus-range limit switch and the lens hunts through the whole range. Pre-focusing to an approximate distance can help in many cases.


Another disadvantage of macro lenses compared to regular primes is their moderate maximum aperture for a prime of similar focal length (especially those preferred for low light, fast action, or portraiture), usually in the range of f/2.8 to f/4.5; for macro work, more would be overkill. The Tamron 60mm f/2.0 is a surprising exception; unfortunately, at 60mm you have to be so close to the subject that you will scare away living critters, and lighting becomes challenging, so it has somewhat limited use in the macro world.


The smaller maximum aperture means less flexibility in getting a thin depth of field. And since peak sharpness typically comes a stop or two down from wide open, a slower maximum aperture pushes the sharpest aperture slower still, forcing harder compromises between sharpness and background separation by DOF.



That said, an f/2.8 macro lens is still on par with professional zooms aperture-wise.
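To put rough numbers on the depth-of-field point, here is a sketch using the standard thin-lens DOF approximation. The circle of confusion, focal length, and subject distance are illustrative assumptions (0.019mm is a commonly quoted APS-C value), comparing the two ends of the typical macro-lens aperture range mentioned above.

```python
# Standard thin-lens depth-of-field approximation via the hyperfocal
# distance. All numbers are illustrative assumptions, not measured data.
def dof_mm(focal_mm, f_number, subject_mm, coc_mm=0.019):
    """Total depth of field (mm) for a subject at subject_mm."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return far - near

# A 90mm macro focused on a portrait subject 2 m away (APS-C CoC):
wide_open = dof_mm(90, 2.8, 2000)  # at f/2.8, roughly 50 mm of DOF
stopped   = dof_mm(90, 4.5, 2000)  # at f/4.5, roughly 80 mm of DOF
```

So the difference between an f/2.8 and an f/4.5 macro lens is well under a stop and a half of depth-of-field control at portrait distances, which is the flexibility the answer above is describing.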


fujifilm - How do I evaluate a camera's JPEG tone curve options when the manufacturer only provides flowery prose?


There aren't many options for RAW development for Fujifilm's X-Pro 1 because of its unique sensor layout. However, Fujifilm has clearly spent a lot of time on the in-camera JPEG results, which get high praise, especially for the rendition of skin tones.



There are a number of options for controlling these, including tweaks to "color" (by which they appear to largely mean saturation of color) and changes to highlight and shadow "tone", but the Big Switch is what Fujifilm calls Film Simulation. In addition to a number of monochrome options, these are described in the manual like this:



  • Provia (Standard) Standard color reproduction. Suited to a wide range of subjects, from portraits to landscapes.

  • Velvia (Vivid) A high-contrast palette of saturated colors, suited to nature photos.

  • Astia (Soft) Enhances the range of hues available for skin tones in portraits while preserving the bright blues of daylight skies. Recommended for outdoor portrait photography.

  • PRO Negative High Offers slightly more contrast than PRO Negative Standard. Recommended for outdoor portrait photography.

  • PRO Negative Standard A soft-toned palette. The range of hues available for skin tones is enhanced, making this a good choice for studio portrait photography.


These descriptions are better than some I've seen, but they're pretty short on specifics. In my testing, I've seen that what the "Pro Negative" settings actually do is make tanned or pink skin very pale, and strongly boost contrast in the shadows — somewhat at odds with the description.


Is there a way to get a better handle on what these effects do, and which I want to be using if I'm shooting in JPEG? (Because, hey, that's where the money went.) I can take a bunch of test shots, but going through all the permutations seems time consuming (especially since the in-camera RAW development is tedious). Is there a shortcut to understanding?



I used the "film simulation bracketing" feature to make these:


ProviaVelviaAstia


Which are Provia, Velvia, and Astia respectively — to my mind, that's "neutral to the point of bland", "overcooked", and "I would say overcooked if I hadn't just seen the previous".


My Pentax camera has a range of image-tone options similar to these "film simulation" choices, but for each of them the rear screen shows a hexagonal color spider chart illustrating the effect of the settings. It isn't a perfect representation, but it gives some idea of what's going on.


Is there a straightforward way to produce those for this camera (or for any generic camera), or has someone already done the work?
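My current best idea for a shortcut is to measure the curves myself: photograph a gray step wedge once, develop it through each film simulation, read the patch values, and interpolate a transfer curve per simulation. A rough sketch of what I have in mind (the patch values below are invented for illustration, not measured from the X-Pro 1):

```python
import numpy as np

# Sketch of empirically characterizing a JPEG tone curve: compare patch
# values from a neutral reference rendering against the same patches in
# the camera's JPEG, then interpolate the mapping between them.
# These numbers are made up for illustration.
reference = np.array([0, 32, 64, 96, 128, 160, 192, 224, 255])
camera    = np.array([0, 20, 55, 95, 140, 180, 212, 236, 255])

def tone_curve(x):
    """Map a reference value to the camera rendering by interpolation."""
    return np.interp(x, reference, camera)

lifted = tone_curve(128)
# lifted == 140.0: in this made-up example, the simulation lifts the
# midtones, i.e. it adds contrast through the middle of the range.
```

Repeating this per channel (and per film simulation, via the bracketing feature) would give exactly the kind of spider-chart-style summary I'm after, but I'd be happy to learn someone has already done the work.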




Thursday, 9 July 2015

inspiration - How to get back to photography after having a long break?


I recently had to take an almost year-long break from photography. I'm not sure how much I've missed; maybe a few new equipment releases or some new technologies. I think I am willing and ready to get active in photography again, but since I've been inactive for quite a while I am struggling with it. The local communities are all new faces, and I don't think at this stage I can restart from zero. I bought a new lens hoping it would kick-start me, but nope!


I know this sounds a bit silly, but maybe someone out there has been in my shoes and can share their experience with me?



Answer




If you are having trouble getting excited, I can understand why you took the option of buying some new gear to kick yourself into a new creative mood.


Unfortunately, this question needs a very personal answer, so allow me to throw a few options out here. I am an on-and-off shooter, sometimes taking breaks for a month or more, and then I find something that kicks me back into the mood.



  1. New gear - you mentioned that already, but still: if you find something that really interests you from a challenge or technical point of view, this can be really nice, provided that you have a chance to use it properly. Things like Lensbabies, T/S lenses, pinholes, filters, fisheyes, etc. can really get you dreaming about what you could do with them, but you have to be able to actually apply them where and when you shoot. I normally see a photo shot with an effect first and then try to replicate it, and might get the gear for that.

  2. New inspiration - I find that looking at new books and magazines that showcase others' creative work always spurs me to try something similar. I am mostly not good enough to replicate things, but I can always find some work that is just a margin away from mine, and because I like what they do, I want to try to move in that direction.

  3. Old gear - we sometimes spend a lot of money on equipment and then don't use it. Reading reviews of the gear I already have often shows me examples and fields of use that I haven't explored. It somehow 'guilts' me into using that gear, and often I find those applications promising, since the gear is recommended for that specific thing!

  4. New places - if I look at my city's map and see places, restaurants, or landscapes that I haven't seen before or have no photos of, it makes me think I should go there and take a look. The camera comes along more just in case, but I often end up taking at least some interesting shots, since I am interested in the place, not the photo.

  5. New people - similar to point 2: when I talk to others about their work, I get inspired to try something similar or get my own ideas from what was mentioned. If the people around you have changed, I would consider that an opportunity rather than anything else.

  6. Downgrade - having too much equipment makes me feel I am almost too much in control. Going out with a simpler camera or only one lens can be very interesting.

  7. Challenges - photo challenge websites (like http://www.dpchallenge.com/) can get you into gear, since they throw you a phrase like "Absurdity", and their deadlines also give you a small kick to get going.



I guess that for me the subject is more important than the picture, and once I find something interesting, the drive to take a photo of it comes along.


lighting - How to reduce shadows on product photos when there is no room between the subject and the background?


I sell horse equipment, and my photos are shot against a white background with the items on hooks right against the wall or backdrop. I get a lot of shadows, but it is the only way to display the products. I would love to blow out the background, but there is no space between the product and the background.




exposure - Why won't my ISO go below 6400 when I shoot indoors?



I'm having an issue with my Canon 6D. Whenever I shoot photos inside my house they turn out so grainy! I've tried turning the ISO down, but at anything lower than 6400 my photos are basically black. Mind you, this is mid-day with lots of light in our house, and yet the ISO on my camera will not go below 6400, which makes my photos look terrible because they are grainy instead of sharp! Please help!


The lens is a 24-105mm f/4.0. I shot at 24mm, f/5.0, 1/160s shutter, ISO 6400 (on auto, but only because anything lower I set makes the photos black!).


I'm still kind of new to this, but I do understand that ISO, aperture, and shutter speed need to be balanced for enough light to come in!
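To put numbers on it, here is my rough sketch of the exposure-value arithmetic for the settings above (my own calculation, so please correct me if it's wrong):

```python
import math

# Work backwards from the settings that gave a good exposure to the
# scene's light level, expressed as EV at ISO 100.
def scene_ev100(f_number, shutter_s, iso):
    ev_at_iso = math.log2(f_number ** 2 / shutter_s)  # EV at the used ISO
    return ev_at_iso - math.log2(iso / 100)           # normalize to ISO 100

ev = scene_ev100(5.0, 1 / 160, 6400)
# ev comes out near 6, which exposure tables list as a dim interior.
# Sunny daylight is around EV 15, roughly nine stops (about 500x) more
# light, which is why these aperture/shutter settings need ISO 6400
# indoors even when the room looks bright to the eye.
```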



