So I was reminded yet again recently why I despise color management so much, especially in Windows. This is half rant, half looking for answers, and half random things I’ve noticed about color management. Not that I was actively looking for any of this, mind you. In the weeks since I wrote about old colorimeters and modern displays, I’ve run across a couple more amusing but notable tidbits of color management fun.
Color management, at least the technical stuff behind it, is something of an arcane art to me. I don’t at all understand what’s going on, so don’t look at me to explain anything here or grant any great insight. In fact, while I’m quite comfortable profiling my displays and getting acceptable results, I still dread my quarterly re-profiling runs.
Color management in Windows is, at best, a tragically chaotic affair. The OS doesn’t do color management globally; applications have to opt into color management, and even then it seems only certain parts of them get managed, or there’s an interaction between the display profile and what’s rendered that completely eludes me.
Worse, Microsoft didn’t bother to implement color management in most OS-related areas; especially annoying to me is the desktop background. Further, where they did implement color management, it’s not always complete. For example, the Windows Photo Viewer is actually color managed, but if you use an ICC version 4 color profile, it loses the plot.
Which raises the question, why bother with ICC v4 profiles in the first place?
Not to put too fine a point on it: they give you better color accuracy.
I haven’t done anything extensive with this, but I was curious why my SpectraView display kept measuring so poorly in terms of Delta E (ΔE). For those who aren’t familiar with it, ΔE is a measure of the difference between two colors; with displays, it’s typically the difference between the color displayed and the color that should have been displayed. The lower the ΔE, the closer the colors.
For a display, a ΔE of 1 is good; the vast majority of people won’t be able to see a difference between the color that is and the color it’s supposed to be. Professional color critical displays will usually do better than that. Most consumer displays are a lot worse.
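For the curious, the original (CIE76) flavor of ΔE is nothing more than Euclidean distance in L\*a\*b\* space. Here’s a minimal Python sketch with made-up Lab values; note that modern calibration tools typically use the fancier CIEDE2000 formula, which weights the terms differently.

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two L*a*b* colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# A target color vs. what the display actually shows (made-up Lab values):
target = (50.0, 20.0, -10.0)
measured = (50.5, 20.3, -10.4)
print(round(delta_e_76(target, measured), 2))  # a sub-1 ΔE: invisible to most eyes
```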
On my desk I have three displays: a Dell E248WFP, a cheap TN-panel consumer display; a Dell UltraSharp U2408WFP, a respectable semi-high-end PVA panel; and an NEC PA241W-BK, a color-critical IPS display. The cheap TN display, after calibration, has a ΔE around 2.3; the UltraSharp, around 1.2; and the NEC is where the fun with profiles shows up.
Using an ICC version 2 table-based profile, my SpectraView II measures with a ΔE of 1.1. However, switching to a matrix-based version 4 ICC profile drops the ΔE to 0.7.
So I’ve switched back to V4 matrix profiles, and solved the somewhat confusing question of why my display suddenly seemed to tank sometime earlier this year.
Of course, one might be asking why I switched from the i1-recommended V4 matrix profile back to the V2 table profile in the first place. The answer, unsurprisingly, is Windows; specifically, the Windows Photo Viewer, which is incapable of using V4 matrix profiles. Of course, the solution to that is simply to use something other than Windows Photo Viewer to preview images, say IrfanView.
I’ve never been a huge fan of SD cards. I can point to reasons, some sound, some maybe not so much, but given the choice, I’d much prefer CF to SD. Maybe this stems from some initial bias; my first digital cameras, even the crappy point-and-shoots, were all CompactFlash based. So was my first SLR, and so are all my current SLRs.
It’s not a totally irrational bias; there are good reasons to prefer CompactFlash. For starters, the cards are bigger than SD cards. Bigger cards are easier to find if you drop them and easier to handle with gloves on. CF cards are also generally faster, even at comparable sizes and rated speeds.
Actually, let me talk about speed for a moment. That too has long been one of my problems with SD cards. They’re slow, or at least that’s always been my perception of them. I was pleasantly surprised when I benchmarked a 32GB Lexar 400x card and found it wasn’t atrociously slow. No, it wasn’t as fast as the 32GB 400x UDMA CompactFlash cards I normally use, but its performance wasn’t radically worse either (6.5% slower writes, 12.5% slower reads). The difference that matters most, write speed, is so small it isn’t even worth whining about.
Of course, SD cards have one huge advantage over CompactFlash: cost. Where a 1000x UDMA 7 CF card can run upwards of the cost of an entry-level SLR, the reasonably sized (8–16GB), reasonably fast (400x) SD cards I use are in the $10–30 range. Why put the wear and tear on expensive CF cards when their advantages aren’t needed? And yes, the NAND flash memory in these cards does wear out eventually; not on thousand-year scales, but on scales of several thousand write cycles.
Until earlier this year, I was pretty die-hard against SD cards in cameras, at least at the prosumer and pro level. I’m still not the biggest fan of them, but at the same time, other than when shooting video or when I know I’m going to want to shoot large bursts at high FPS, I’m shooting almost everything on cheap, comparatively disposable SD cards now; at least I won’t be crying when one dies.
Somewhat amusingly, I’ve also found a couple of things I like about them over CF cards. For starters, the lack of pins means I’m much less concerned about bending pins in my camera’s card slot or card reader. It’s never happened to me in the past, but it’s always been a concern in the back of my mind. I’m also increasingly becoming a fan of the push-in-and-it-pops-out release mechanism.
Going forward, the next apparent card format, XQD, adopts some of the perks I like from SD and combines them with a somewhat bigger card and faster performance from CF. That said, it looks like it’s going to be a long, slow transition to XQD, if it happens at all. As it stands, only the Nikon D4 supports it, and Lexar and SanDisk waited until well after the D4 was released to announce they would begin making XQD cards. All told, it’s been more widely adopted than CFast was, but one camera still isn’t much of a market.
In the end, what I think I’m trying to say, at least to the guys like me who’ve always shied away from SD cards, is they aren’t all that bad.
If you’re still using an old Spyder 3 LT or Pro, or really any colorimeter of that age or older, and are looking at upgrading to new wide-gamut or LED-backlit displays, the odds are good your trusty old colorimeter won’t do so well.
The simple reality is that the Spyder 3 was designed at a time when wide-gamut cold-cathode fluorescent (CCFL) and LED backlights simply weren’t considerations. Today those two technologies are increasingly prevalent in the displays we as photographers are likely to encounter when shopping for something new.
My Spyder 3 met the end of its utility almost a year ago, when I replaced my aging and dying Dell 2408WFP, a fairly standard-gamut PVA display, with an NEC PA241W-BK, a SpectraView-series wide-gamut IPS display. The Spyder 3 simply couldn’t calibrate and profile the display properly.
The limitations were reconfirmed to me just recently, when I went to help a friend set up a new LED-backlit, standard-gamut PVA panel as a secondary display on his machine. He’s also a Spyder 3 user, having had good success with Spyder products for years with his older CCFL displays. However, getting a good profile out of this LED-backlit display has been next to impossible.
So what’s the deal?
Well, a big part of this comes down to how the color calibration hardware is built. Relatively inexpensive colorimeters use multiple color filters over more or less standard photodetectors to determine what color they’re actually looking at. Since the hardware and software know the passbands of the various filters, they can compute the actual color being shown from the responses of the various detectors.
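As a rough sketch of the idea: if the instrument’s behavior is modeled as a 3×3 matrix mapping the true stimulus to the three filtered detector readings, the software just inverts that relationship. The matrix below is entirely made up for illustration; real instruments are characterized against the spectra of the backlights they were designed for, which is exactly why an old colorimeter can fall over on a backlight type its designers never planned for.

```python
# Hypothetical 3x3 matrix mapping a true XYZ stimulus to the three filtered
# detector responses. Real calibration matrices come from characterizing the
# instrument's filters against known backlight spectra.
M = [[0.9, 0.1, 0.0],
     [0.2, 0.8, 0.1],
     [0.0, 0.1, 0.9]]

def mat_vec(m, v):
    """3x3 matrix times 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def det3(m):
    """Determinant of a 3x3 matrix."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def solve3(m, v):
    """Cramer's rule: recover the stimulus from the detector responses."""
    d = det3(m)
    result = []
    for col in range(3):
        mc = [row[:] for row in m]
        for row in range(3):
            mc[row][col] = v[row]
        result.append(det3(mc) / d)
    return result

true_xyz = [40.0, 50.0, 30.0]
readings = mat_vec(M, true_xyz)   # what the three detectors report
recovered = solve3(M, readings)   # what the software computes back
print([round(x, 6) for x in recovered])
```

The failure mode the article describes is, in this picture, a matrix that no longer matches the display: feed the inversion readings produced by a backlight with a different spectrum and you recover the wrong color, no matter how carefully you measure.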
For that to work right, the hardware and software have to be designed with the prospective light sources and gamuts in mind; meaning they had to plan for the spectral output of a wide-gamut CCFL or LED backlight when they were building the hardware.
This is where the Spyder 3 falls down. Back in 2007, when the Spyder 3 was coming to market, wide-gamut CCFLs were the purview of only the most expensive professional displays, and LED backlights weren’t all that common. In the intervening years, LED backlights have become much more prevalent, and wide-gamut CCFLs appear in more and more mid-tier photographer displays.
The moral of the story is simple: if you’re looking to replace or upgrade your displays in the current market (2013), are considering any of the wide-gamut IPS or LED-backlit LCDs, and are still using a Spyder 3 LT or Spyder 3 Pro, you should probably plan on upgrading the colorimeter as well.
Canon Rumors has posted a couple of rumors lately about the possibility of a 75- or 80-megapixel camera from Canon that might in fact use a multi-level Foveon style sensor. I can’t comment on the veracity of the rumors, but it’s a strategy I’ve always liked for a number of reasons.
Multilayer sensors do a number of things that improve efficiency, mostly by eliminating cruft that Bayer-pattern sensors require. First, they can lose the low-pass filter; since the sensor samples all colors at every point, moiré isn’t a significant concern. More importantly, the efficiency-sapping color filters are gone.
The real benefit to me is the flexibility that a multi-layer sensor brings to the camera.
For those interested in the absolute best image quality, it gives you a moiré-free image that records full color information at every point. Moreover, since the sensor doesn’t need a low-pass filter and isn’t using a Bayer pattern, you get the full spatial resolution out of the sensor, not just a significant fraction of it.
For those who need frame rate over absolute maximum quality, you can read out a multilayer sensor as if it were a Bayer sensor, and suddenly you’re looking at 1/3 of the data to process and therefore up to 3x higher frame rates.
Finally, for people who want to shoot monochrome, you can treat the sensor as a true monochrome sensor by summing all three stacked photosites into a single monochrome output. The market isn’t huge, but it’s certainly big enough that Leica brought out a monochrome M, and there have been a couple of monochrome medium-format backs. And while the market for such a feature isn’t big, neither is the amount of work needed to implement it.
With that said, let’s jump back to the wild speculation on Canon Rumors’ rumored 75MP multilayer SLR from Canon.
Foveon, and Sigma, have always characterized a pixel as a single-color photosite, even though three of those photosites make up a single 3-color pixel (picture element) in the resulting image from their X3 cameras. That’s certainly a fair way to look at it, considering that a pixel on every other digital camera is also only a single, single-color photosite.
If we assume that Canon will run with that same definition, since it makes a nice big marketable number, then the rumored 75MP camera would have a spatial resolution of only 25MP, and images would be about 6124 × 4082 pixels.
A 25MP spatial-resolution sensor actually sounds a whole lot more reasonable to me than a 75MP one does. The pixel pitch would be around 5.9 microns instead of 3.4 microns. That’s well within Canon’s comfort zone for manufacturing, slotting in right between the 5D Mark III’s 6.25μm pixels and the 40D’s 5.7μm pixels, and well above the 4.3μm pixels in their APS-C offerings.
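Those pitch numbers are easy to sanity-check from the sensor area; here’s a quick sketch assuming a standard 36 × 24 mm full-frame sensor and square pixels:

```python
import math

SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0  # standard full-frame dimensions

def pixel_pitch_um(spatial_megapixels):
    """Pixel pitch in microns, assuming square pixels spread over the sensor."""
    area_mm2 = SENSOR_W_MM * SENSOR_H_MM
    pitch_mm = math.sqrt(area_mm2 / (spatial_megapixels * 1e6))
    return pitch_mm * 1000  # mm -> microns

print(round(pixel_pitch_um(25), 1))  # spatial resolution if 75MP counts each layer
print(round(pixel_pitch_um(75), 1))  # if all 75MP sat on a single layer
```

The two printed values land right on the 5.9μm and 3.4μm figures above, which is a nice cross-check that the rumor’s numbers are internally consistent with a full-frame sensor.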
From the sensor standpoint, you have comparatively big pixels, at least compared to a D800 or Canon’s APS-C cameras, which are simultaneously unencumbered by efficiency-eating color and low-pass filters. That combination has the potential to be quite impressive in terms of image quality.
Then we come to the question of resolution. Canon finally unified the EOS-1 line with the EOS-1D X. I’ve always felt the split in the past was due to technical limitations more than anything else. With a single-layer sensor, you can only reduce the image size to reduce the amount of data being read and thereby increase the frames per second.
However, if Canon implemented a system like the one I described above, there’s no need to break the 1D line apart again to meet both FPS and resolution targets. Never mind that current Digic 5+ processors are within spitting distance of being able to process the 300 14-bit megapixels per second needed to support 25MP at 12 FPS; the Canon 70D’s Digic 5+ is already pushing 141.4 14-bit MP/s, just 9.6 MP/s short of what’s needed.
Moreover, if you can do 12 FPS with 25MP of data, you can do 4 FPS with the full 75MP of data from reading all colors at all points. In one body, you get a reasonable frame rate at really good quality, or a high frame rate at a quality that would likely still be better than Canon’s current EOS-1D X or 5D Mark III.
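The neat part of that trade is that both readout modes fit the same data budget; a back-of-the-envelope check (the megapixel and FPS figures are the ones from my speculation above, not anything Canon has announced):

```python
# Data-rate budget for the rumored 75MP (25MP spatial, 3-layer) sensor,
# reading it two different ways.
SPATIAL_MP = 25  # 3-color pixels on the sensor
LAYERS = 3       # stacked photosites per pixel

bayer_style = SPATIAL_MP * 12           # one layer per pixel, 12 FPS
full_color = SPATIAL_MP * LAYERS * 4    # all three layers per pixel, 4 FPS

print(bayer_style, full_color)  # same MP/s budget either way
```

Since both modes work out to the same megapixels per second, one processing pipeline covers both the speed body and the resolution body, which is exactly why the sensor design would let Canon keep the 1D line unified.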
In short, while it might not be what Canon ultimately brings to market, the rumor certainly passes my sniff test of what seems feasible. Of course this is all speculation based on a rumor reported by a rumor site. There’s no telling whether Canon will actually bring something like this to market, but sometimes it’s fun to speculate.
Rolling shutter, a.k.a. jello-cam: anybody who’s serious about video production and using SLRs, or even high-end cinema cameras, should be familiar with its effects. Even if you’re not, if you’ve ever watched a video with an airplane’s propeller in it and the propeller was anything but straight, you’ve seen a rolling shutter at work.
Back in 2011, Zacuto rounded up a number of industry players and put the then state of the art in cameras to the test. These included the Red One, Arri Alexa, and Sony F35 on the high end, and a number of more approachable VDSLRs like Canon’s 5D Mark II, 7D, and 1D Mark IV, and Nikon’s D7000. The final segment of the 3-part series included tests of rolling shutter performance.
They used two test mechanisms: a drum with vertical lines and a rotating disk. Combined, they simulate pretty much all the cases you run into with rolling shutter; the drum covers panning, and the disk covers rapidly rotating objects, like a plane’s propeller.
I’ve replicated them, after a fashion, though this is spit-and-baling-wire engineering at its finest. Both tests are powered by my trusty DeWalt cordless drill, spinning somewhere between 0 and 450 RPM. The disk test used a circle of black matte board with some white gaffer tape strips placed across it; the drum test used an old zip-tie container (so old, in fact, that I broke it trying to put a hole in it) wrapped in white paper, with strips of black gaffer tape added for lines.
Like I said, these aren’t strictly scientific tests. Thanks to the breakage and an off-center hole, my drum test is considerably more wobbly than it should be, and I have no idea what RPM I’m actually spinning at. Both tests were shot with a 1/500th shutter; at 1/60th, there was so much motion blur that the disk became essentially gray instead of having distinct lines. The 5D shots are at f/4, ISO 800; the EOS M, at f/5.6, ISO 1600, due to the lenses used.
That said, not having done this in nearly as much of a high tech & controlled manner as the Zacuto team did, I don’t think you can readily compare my results to theirs.
VDSLRs have given photographers and videographers a huge amount of latitude in where they can shoot and in the quality of the shot. The large sensors and high-ISO capabilities make lighting much less of an issue, and advances in LED light sources have made lighting much more portable and easy to deal with.
That said, if you’re considering any kind of long-running production and want any kind of consistency to the set, you need a studio of some kind. But where? How big does it have to be? And what about a set?
When I got my 5D Mark III a year ago, it wasn’t for the video capabilities; however, having them, and the challenge of a new undertaking, planted the seed to do something with video. A lot of that has been a learning process, trying to find and adapt to what the camera is best at. VDSLRs can bring tremendous advantages to the low-budget videographer in terms of portability and image quality, but they do have drawbacks.
The biggest trick I’ve found to effectively leveraging a VDSLR is playing to its strengths, and perhaps more importantly, planning around your own limitations. VDSLRs aren’t set up to be camcorders; most don’t have autofocus, and on the ones that do, the performance is far from desirable. In a manual-focus, manual-exposure, pretty-much-manual-everything environment, many things can be difficult to shoot, especially without production aids like a follow focus or an EVF.
That said, the story isn’t no crew, no gear, no video.
The impetus for this whole project is to get some video content up on this site to go along with the text reviews. Don’t get me wrong, I like text; it’s far easier to skim text than it is to skim a video. However, I’ve found numerous concepts that can be explained in 30 seconds of video that can’t be clearly or concisely articulated through text alone.
I’m sort of throwing this one out there in hopes maybe someone has something helpful to share.
I have a love-hate relationship with camera straps, one that borders more on hate than love. When I first started the whole photography thing, I used the straps Canon shipped with their cameras. They’re thick and stiff, but the rubberized grip material holds on virtually every shirt I’ve put under them. Though, like any fixed strap, I found them getting in the way whenever they weren’t simply around my neck.
I switched from the stock straps to a Lowepro strap that uses quick-release disconnects on the ends. Talk about the worst of all worlds. When the strap was released from the quick-release bits, the short straps left attached to the camera got in the way as much as, or more than, a whole strap did. Never mind that the rubberized material on the shoulder pad was so slick my camera was constantly sliding off my shoulder. The experience with the Lowepro strap quickly sent me back to the stock Canon straps, and looking for something better.
I tried the straps OpTech USA sells, but I found their padding way too wide and uncomfortable; not my thing.
I finally ended up with one of ThinkTank Photo’s straps. It’s thin, but it holds well, and the rubber is on both sides of the strap. On top of that, it works with their Camera Support Straps and bags, which makes it a real joy to carry one or two bodies with one of their bags. I’d say their straps are probably my favorite, at least so far. However, as good as the ThinkTank straps are, they suffer from the same problem as everything else: they get in the way when they aren’t around your neck.
The real problem is that I want a strap that’s quick-release, but without dangly bits left on the camera. If I were a Nikon user, this would be a non-issue. Nikon’s bodies, at least the higher-end ones, use split rings on studs to attach the straps instead of a solid metal loop. It’s easy enough to put just about any solid carabiner or quick-release hook on the end of a strap and run with it.
Unfortunately, the situation isn’t quite so nice in Canon land, which brings me to my current conundrum.
On one hand, OpTech USA makes Adapt-Its (affiliate link): little plastic loops that can be pressed through the strap loops to provide a place to clip on a larger quick-release system. OpTech claims they can hold up to 15 pounds, but I haven’t tested that, and it’s hard for me to put a lot of confidence in a little plastic tab with a knob on the end when thousands of dollars of camera gear are hanging from it.
The other alternative I’ve been looking at is Really Right Stuff’s Mini-Clamp with strap bosses. I can run that with either my normal strap or an R-Strap and still be able to quickly attach and release it from my camera. It’s also slotted, so it can be clamped in any of Really Right Stuff’s clamps without having to take it off the camera. On the other hand, it’s a simple screw clamp with no apparent locking mechanism, and that too gives me pause when it’s the biggest point of failure keeping camera gear from plunging to the ground.
Which just leaves me with the question, what do you use, and how do you like it? Drop a comment below and let me know.