T.Stops Blog

The 3 Strip process color workflow: Building a color workflow for the GFX100 II

Welcome back to Tstops Blog.

In this installment, I want to talk about a color process I have been experimenting with for the Fujifilm GFX100 II.  Since last year, when I used it to test My Favorite Lenses, I have fallen in love with the medium format look and Fujifilm color science for cinematic storytelling.

This project for Taylor John Williams’ song Naked, directed by Bhavani Lee, was the genesis of this color workflow exploration.  After discussing the look of the project with Bhavani, I said: “I want it to look like the audience is looking through a window at real life.”  The idea is to create a look with rich colors and softer contrast, but one that still reaches the white point and black point, from which I can start grading the image.  REALLIFE_11 is the look I came up with.  It’s not designed to create the final look; it is designed to be the starting point.  That said, it does have a distinct look that, when dialed in, can make for some interesting results.  In the body of this post is a download link for the power grade in Resolve.  I encourage you to give it a try.  I built this for the GFX100 II, but it works on footage from other cameras as well.  The core benefits are rich skin tones and a subtle color contrast that seemingly gets lost with other methods.

BTS photos by : Jesse Dvorak


I loaded the LUT into the SmallHD monitor so I could review the look in real time.
Our Director Bhavani Lee
Bhavani reviewing the footage with the look applied.
Filming the B-roll with model Debora Comba

My first challenge when exploring any new camera is: what do I want this camera to look like?  I like to shoot my cameras with one look, a look that I will know inside and out.  Somewhat neutral, and usually derived from a film emulation of some kind.  From there I can bend the look around on set with light, lenses and filters.  This gives me a consistent place to expose from and makes sure the colorist has a thick negative to work with.  The LUT I use is usually passed along to the colorist to ensure they know what the intention is.

Fujifilm has built into the GFX100 II (and the rest of their lineup) the very beautiful Film Sims for still images or video.  In video, though, it’s recorded as a baked-in Rec709 look.  The Film Sims are looks that match their own classic film stocks of the past: Eterna, Reala, Astia, Provia and more.  They look beautiful in camera.  You would think that the different film simulations were readily available as post LUTs.  They aren’t, and for a very specific reason.  Film Sims are not just a LUT the camera applies.  Each film simulation actually has its own dedicated hardware built into the camera processor.  It’s much more than just a LUT; it’s a constantly flexing set of mathematical rules.  It also has to bend the look to be ready for the specific light response at different ISO values, white balances and special color options in the menus.  You can choose to enable Color Chrome FX Blue and Color Chrome Effect.  These effects change the way certain colors render to mimic the “printed” photo look.

It is much more limiting to just apply a fixed LUT to footage after the fact and expect it to look as good as an in-camera color pipeline that’s dedicated to creating the look in real time.  Fujifilm has released a small handful of FUJIFILM Eterna LUTs for the GFX100 II’s specific sensor and light response.  The Eterna LUT is a little too contrasty and desaturated overall for my taste.  It’s not my favorite look to shoot with; I never feel like it nails the color response the way I want.  One of the challenges of working with a newer camera system, especially a system that is relatively new to advanced video capabilities (the GFX line specifically), is that there is little post support for the specific color pipeline and capabilities of a new sensor.  There are some LUT packs, and FilmConvert supports the FUJIFILM X-H2S, but its sensor responds to light differently than the GFX.  While the X-H2S-specific LUTs sort of work, they’re not quite right.

Eterna LUT
My 3 Strip Process: “REALLIFE_11”

I figured, why not start from scratch?  That raised the question: what do I need?  I do find the idea of a look that flexes to the needs of the shot itself, like the built-in Film Sim, interesting.  The main goal was to make something filmic and flexible.  I wanted to build a basic power grade structure that I could drop on footage, as if it were a LUT in post, but that would give me the ability to reach a nice starting point to begin grading in a more advanced way.  I mean things like power windows, masks, color curves, etc.  During this process, I realized that in order for this to work properly, it was necessary to also build an on-set LUT that would feed this power grade with the color “nutrients” it needs via a dense negative.


Eterna LUT on Flog2
REALLIFE_11 : No grade, just the color process applied to LOG, vs the Eterna LUT in the previous example.

Over the years, I had seen some experiments with a 3 Strip process emulation in Resolve, similar in function to how Technicolor film worked.  It works by breaking the RGB image up into its individual channel components and remixing them manually.  I started experimenting with this idea by building it myself from a blank slate.  This way I would understand the “why” of how it works a bit better.  I consulted many people on how it might function and what pitfalls to watch out for.  Tim Kang, DP and color science expert, has had my back on this project for a couple of months now.  Every time I built a version (52 versions to be exact) I would find some new bug in the process.  Tim helped me immensely with designing a node structure in Resolve, and explained how and why the color structure works.  The following example is from a different project that I had to regrade, simply because it was so much easier, faster and better looking to apply the 3 Strip look.  What I struggled with in the initial grade was getting the skin tone to look right.  The green of the glass in the window was always there and overpowering.  Correcting out the green made the skin magenta, and even using the qualifier, bringing the skin back to neutral was difficult.  Using the 3 Strip process, instead of desaturating the green away, I was able to embrace it a bit more.  The skin tones were able to show through the glass, blended with the green, but still present.  The green and teal colors of the train became more vibrant, but also more individually separated, and looked truer to real life.  The image looks less muddy and cold.

FilmConvert Kodak Emulation for XH2S adjusted for GFX100
REALLIFE_11 with the 3 Strip Process.  The green tint of the glass, while present, doesn’t overpower the skin tone behind it.  You can now see both, and it looks more natural and, to my eye, realistic.


Download the .DRX and 33×33 & 65×65 versions of the on-set LUT below.  The LUTs are made using the complete process and are designed for shooting on set with the GFX100 II in Flog2.  You can load the 33×33 version into your monitor, or use the 65×65 version in Resolve to make dailies.  Pay close attention to the node-by-node effects.  The REALLIFE_11 .DRX file is designed to give a colorful but natural image to GFX100 II Flog2 footage.  Drag and drop the .DRX file into your stills gallery in Resolve.  You have to tailor it to the scene, and once it’s dialed in, you can save it as a separate still/power grade and apply it to the rest of the footage in your scene.  I recommend using the stills/power grade method instead of applying the LUT, as it keeps the mechanics of the look accessible and easy to adjust.  The saturation and contrast may not be to your liking, but that’s up to you to dial in to where you like it.  It works best if you expose to the LUT provided, then apply the .DRX for full control.  Keep reading to learn what the nodes do to the image.  If you want to experiment with footage from other cameras, just reset the TINT, SAT and Contrast nodes, then start from scratch.
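For readers curious what those 33×33 and 65×65 figures mean: they are the grid resolution per channel of a 3D LUT, so the files hold 33³ or 65³ sample points.  As a rough illustration (not part of the download; the function name and TITLE string are my own), here is a small Python sketch that writes an identity LUT in the .cube format Resolve reads:

```python
# Sketch: write an identity 3D LUT in the .cube text format.
# size=33 matches the monitor-sized LUT; pass 65 for the dailies version.
def write_identity_cube(path, size=33):
    with open(path, "w") as f:
        f.write(f'TITLE "identity_{size}"\n')
        f.write(f"LUT_3D_SIZE {size}\n")
        step = 1.0 / (size - 1)
        # .cube convention: red varies fastest, then green, then blue
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    f.write(f"{r*step:.6f} {g*step:.6f} {b*step:.6f}\n")
```

Monitors generally want the smaller 33-point file because of hardware limits, while the 65-point grid gives Resolve finer interpolation, which is presumably why both sizes are offered.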


PART I: How It Works:


The basic setup is a series of nodes in a specific order.  The initial input is the unmanipulated shot, and I label that TINT.  From there, there are three parallel nodes, Red, Green, Blue, which then come back together with a combiner node.  After the combiner node is the Saturation node, then the Contrast node, and finally the Skin node.  I separated all of these functions so that it’s easy to find where something is being affected, and you can always reset that node alone and start over.

1: Tint:  This node is where you adjust your white balance.  It’s labeled Tint specifically because the more balanced the image is before it splits up, the better time you’re going to have.  The process is more sensitive to green/magenta biases.  Try to nail your WB in camera as best you can, especially in a camera like the GFX100 II that’s recording ProRes as the digital negative.  This means, if you want a warm look, white balance to a cooler Kelvin, or vice versa.  Or if you want neutral, nail it with a white card on set.

2: RGB: This series of three parallel nodes is where the magic happens.  It’s a bit different than using the RGB Offset dials.  On the surface it seems like the same thing, but the 3 strip works slightly differently in the way it blends the colors.  It is something you feel in the image and I can’t quite explain, because I don’t really know what Resolve is doing under the hood.  It’s important to make sure the “Monochrome” and “Preserve Luminance” boxes are checked; otherwise it doesn’t work.  The power grade .DRX file is already formatted, but if you need to go back and reset either color channel node for any reason, this is the way it should look per channel.  The levels of the red, green and blue are not fixed; they can be whatever values you want blended together to get your desired effect.


So here is what’s interesting, and why this is different than using the RGB Offset dials.  The “tinted” monochrome images, it seems, are blended together in an additive color sense.  Cranking up Red doesn’t necessarily make the image substantially redder, and neither does reducing the blue channel make it yellow.  The red channel behaves as a saturation dial for skin tones and reds.  The green channel acts as a slight green/magenta bias, but only in the green and yellow parts of the image, and the blue channel acts something like overall saturation, with a bias towards blue.  I know it sounds confusing, but when you start using it you will get it.  This is also why it’s important to nail the white balance BEFORE the split: anything that’s off will become exaggerated.  If you feel your image keeps turning green no matter what you do, go back to the TINT node and push the tint slider a few points towards magenta.  If the image is too warm, or you can’t quite dial out some cool shadows with the blue channel, go back to the TINT node and gently adjust the color temperature.  The RGB 3 Strip sliders operate slightly backwards from what seems intuitive at first; drastically increasing the output from one color channel seems to push the image in the opposite direction on the color wheel.  It’s almost more like CMYK color mixing or doing printer lights on a negative.  Manipulating these sliders together, when you get it dialed in, produces a beautiful and lush color palette.  The other added benefit of splitting up the channels in this manner is that you can add serial nodes to each color channel and add in effects.  Imagine adding diffusion, grain, glow, noise reduction or some other effect on only one color channel.  It’s really fascinating to experiment with adding grain and halation per channel.
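Since Resolve’s internals aren’t public, the following is only a toy model of the additive behavior described above, assuming each channel becomes a monochrome “record” that is tinted and then summed, roughly like Technicolor dye layers.  The tint triples are arbitrary placeholders, not values from the power grade:

```python
import numpy as np

# Toy additive 3-strip sketch (NOT Resolve's actual math).
# Each channel becomes a monochrome "record", is tinted with a color,
# and the tinted records are summed additively.
def three_strip(img, tints=((1.00, 0.10, 0.05),   # red record tint
                            (0.08, 1.00, 0.12),   # green record tint
                            (0.05, 0.15, 1.00))): # blue record tint
    out = np.zeros_like(img, dtype=np.float64)
    for ch, tint in enumerate(tints):
        record = img[..., ch].astype(np.float64)   # monochrome record
        out += record[..., None] * np.asarray(tint, dtype=np.float64)
    return np.clip(out, 0.0, 1.0)
```

With pure-primary tints ((1,0,0), (0,1,0), (0,0,1)) the function returns the image unchanged; cross-bleeding the tints between records is where the blended, additive color mixing appears.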

3: Combiner Node: This is the node that takes the output from RGB and turns it back into one lane.  I have a theory that this node is also where some of the color magic happens; it may be combining the channels in a different way.

4: Saturation Node:  Often the image that gets recombined is extremely saturated.  This node lets you dial that back to earth.  The colors will settle in nicely, and then you get a chance to go back and adjust the mix.  Try experimenting not only with Saturation, but with the “Color Boost” slider here too; that can give some interesting results.  Use the Primaries / Color Wheels panel (or whatever you like, really; you can use color curves here too).

5: Contrast: This is where you put your contrast into the log curve.  I use a combination of the Lift/Gamma/Gain dials and the luminance curves to get it where I like it.  Use what works for you.

6: SKIN Node:  This node serves only ONE purpose: you isolate the skin tones with the qualifier, and go back and double check your three strip mix on the vectorscope, like below.  This is a quality control node; I don’t make adjustments here.

Everything beyond the Skin node is traditional grading: masks and qualifier adjustments, overall color grading, and the “look” you want to apply.  You could probably make a compound node out of the 3 strip section to make it simpler; I personally like to keep them expanded to be able to dig back in.  I generally do the Contrast node first, then the Tint node, then find the color balance with RGB and Saturation.  The Skin node is last, to make sure the humans look right.

Use the REALLIFE_11 power grade to set the starting point of color for each scene.  Then you can just work traditionally.

OK, so that’s how it functions.
Have a blast!


PART II: The Shooting LUT:

Now, on to the LUT building section of this project.

Because this 3 Strip process is just a manipulation of channels with no OFX in play, you can actually generate a LUT from it.  So the LUT in the Google Drive folder above is actually utilizing the 3 Strip process.  It’s carefully crafted from test charts and a variety of footage to build a “do it all” on-set LUT.  This also means that it’s not perfect overall; however, it’s the most flexible LUT I was able to build.  More on that in a moment.

With the very generous support of Tim Kang of Aputure Lighting, who provided me space to work in, I set out to design an on-set LUT that would use the full contrast range of the GFX100 II’s Flog2 gamma curve, as well as have a color palette that I found to be beautiful.  Now that I had a rough idea how the 3 Strip process worked, the first step was getting good footage into it.

It all boiled down to the following steps.

1:  LUT build for on-set use.  This is to help me use as much of the Flog2 curve as I could, to feed the post “Real World” 3 Strip process.
A: Color balance the LUT to neutral.
B: Contrast expansion of Flog2 to reach black and white points.
C: Contrast curve to add a pleasing feel to the image.

The way I did this was to shoot some tungsten and daylight test images with charts and skin tones.  Then I built a scenario where the scene had true black and white in the same frame: an LED light with diffusion, and a piece of shaded and boxed black jeweler’s velvet.  Then I collected a few various scenarios of footage on the GFX100 II.  Shots around the house, interiors, exteriors, the street, the desert and the ocean.  I used a Sekonic C-800U color meter to measure the quality of the spectrum in the tungsten and daylight sections to maintain purity.

Then what happens is you start doing what Tim Kang called LUT averaging.  You start with the color charts and a human face.  At this point I need to create really clean colors that are neutral.  I have to be able to put the tungsten look on the daylight shot and the daylight look on the tungsten shot and get minimal color shift.  At that point I split the difference between them, and that’s the color “accurate” balance.  The trick here is that there are two color-accurate points of reference.  Since the camera is balanced to the light source and both light sources are accurate and full spectrum, you can trust that the reproduction will be faithful.  The LUT I am building has to work for both daylight and tungsten, and it should not matter what the light source is so long as the spectrum is full; the white-balanced camera won’t be able to tell the difference.  Then you save that power grade.
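The “split the difference” step can be pictured numerically.  A minimal sketch, under the simplifying assumption that each balance is reduced to a per-channel RGB gain triple (the real adjustment lives in Resolve nodes, and these numbers are made up):

```python
# Toy "split the difference": average the per-channel gains of the
# tungsten-balanced and daylight-balanced grades (my simplification;
# the actual grade is a set of Resolve nodes, not a gain triple).
def split_difference(tungsten_gains, daylight_gains):
    return tuple((t + d) / 2.0 for t, d in zip(tungsten_gains, daylight_gains))
```

For example, averaging a slightly warm correction (1.06, 1.00, 0.94) against a slightly cool one (0.95, 1.00, 1.05) lands near neutral, which is the point of the exercise.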

The setup:  There is little importance to the brightness and contrast of this image.  This is just to see the chart and get perfect white balance; it’s basically still in log.  The only nodes manipulated in this phase are TINT, RGB and Saturation.

One of the Color balance samples.

That saved power grade is then applied to the contrast test scenario footage.  Without touching the color, you adjust the Contrast node until black hits 0 and white hits 100.  So now you have centered the color across two black body radiators, tungsten and the sun, and dialed in the black and white points of the sensor and log curve.

The Contrast test scenario
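Numerically, the black/white point step described above is just a levels expansion.  A minimal sketch, assuming normalized 0-1 code values (the 0 and 100 in the text are waveform IRE):

```python
import numpy as np

# Levels stretch: map the measured black level to 0 and the measured
# white level (the diffused LED) to 1, clipping anything outside.
def expand_contrast(x, black, white):
    x = np.asarray(x, dtype=np.float64)
    return np.clip((x - black) / (white - black), 0.0, 1.0)
```

This is only the endpoint stretch; the pleasing contrast curve the post describes next is shaped on top of it by eye.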

The last step in the LUT building is developing the contrast curve.  Just because I stretched the white up to hit 100% and the black down to 0% doesn’t mean the curve is nice to look at.  I took a ton of GFX100 II footage and adjusted each clip individually, saving and passing the power grade forward, then cross-referencing back to the beginning clips to see if the look still worked.  This is where the LUT averaging really starts to take effect.  In total, I made 52 versions.  I thought I was done at version 41, and named it REALLIFE_1.

Classic mistake.

10 versions later, I think I’m happy with REALLIFE_11.  All in all, this process took almost two months to complete.  Next time, with all the trial and error out of the way, I can do it in a much shorter time.

I hope you found this all useful.  Enjoy the LUT and Color workflow.  Thanks for reading.




“The Hitchcock Experiment” – A Lens Test: 8 Of My Favorite Lenses, Three Sensor Formats, One Field Of View, and the GFX100 II To Show Them All.

The brand new GFX100 II Medium format stills and video camera.
1st AC Ryan Patrick O’hara (L) and 2nd AC Danny Kim (R) Rigging the GFX100 II

Welcome back to Tstop blog!

In the last few years the film industry has changed, and nearly every element of the filmmaking process has made wondrous advances, especially camera equipment.  There are so many new digital cinematography tools available to us right now, with different sensor sizes, light sensitivities, resolutions and capabilities.  In this article, I will discuss how all of this presented me with a unique problem: how to directly compare lenses made for different formats.  Luckily, there is also a unique solution.

I am working on shooting a film that’s in the development phase.  This past winter, my director and I discussed the creative possibilities of shooting the whole film on a single focal length.  It’s a creative choice that would lend itself to the story.  Seems simple enough; as the name of this post implies, Alfred Hitchcock famously shot Psycho with only a 50mm spherical lens.  I figured, why not the classic 50mm?  It’s a great focal length for this purpose: not so tight that backing up for a wide is impossible, and pretty flattering to talent in a closeup.  I didn’t think too much of it at first.  Pick up your favorite 50mm and be done with it.  That feeling changed as soon as I started thinking about which camera system to shoot on.  I was initially just going to shoot a quick little test with my three favorite 50mm lenses: Cooke S4 T2.0, Bausch & Lomb Super Baltar T2.3 and the Zeiss Super Speed MKII/III T1.3.  A quick “Hitchcock Experiment”.

Shooting the control take of the lens test. Arri Alexa and FUJINON Premista 28-100 T2.9
A still from “The Permit”


Then something struck me.  The cameras we shoot on today aren’t necessarily S35 sensor size anymore.  As I type this, there are commercially available digital cinema cameras with sensors that range from S8mm, S16mm and S35mm to “stills” full frame and the new frontier of digital medium format.  This changes things with regard to what a “50mm” is.  When we are talking Super35, the 50mm is the next common tighter lens from a “normal” 35mm.  However, when using a stills full frame digital cinema camera, 50mm is now the normal focal length.  Step up to an even bigger sensor like digital medium format, and a 50mm falls on the slightly wider than normal end of the spectrum.  What I needed to be testing was field of view, not necessarily the actual focal length.  That is roughly 47 degrees field of view; that’s what a 50mm on Super35 sees.

50mm Cooke S4/i on the GFX100II

Having to contend with multiple formats did complicate the matter a bit, but it also offered an interesting opportunity.  Now that I’m not necessarily limited to S35 anymore, what about my other favorite lenses?  Specifically, full frame and medium format options.  I shoot stills on a FUJIFILM GFX100S, a digital medium format mirrorless stills camera with a very large 44x33mm sensor.  Full frame is 36mm x 24mm, and the 16:9 extraction for video is about 36mm x 18mm.  The GFX100S sensor uses about 44mm x 25mm in 16:9 video mode, though that’s its only option for sensor area in video.  I have two definite favorite lenses for the GFX100S that are in that 47-degree FOV range: the Pentax 645 75mm F2.8, a beautiful vintage lens from the 1980s, and the FUJIFILM 80mm F1.7, modern, crisp and sharp.  I figured I should test all my favorite lenses of each format, in their native intended format, but choosing the focal length that gives me as close as possible to that 47 degrees.  My plan was somewhat thwarted when I realized I’d have to change camera bodies for each group.  Comparing lenses directly to each other, but with footage originating from different cameras, was not going to give me quite the clean comparison I was looking for.  The different cameras would impart their look on the test subject and skew the results.

(***Very important note here: most cinema camera manufacturers have their own sense of what these formats should be, sensor-dimension-wise.  For example, the “S35” of a RED Helium is a different size than that of the Alexa 35, which is different than the Alexa XT, which is different than the Alexa Classic, which is different from the Canon C300, the Varicam, the F55, and so on.  They are all roughly close to each other, but also in general much bigger than the 16:9-shaped 3-perf film gate.  So while the target field of view (FOV) is 47 degrees, the lenses in the test will never truly produce that exact FOV.  Variations in the lenses themselves, the target formats, and whether the lenses are even available in the specific focal lengths necessary to make an exact 47-degree FOV meant I had to accept a little leeway.  I decided instead to find the lenses whose FOV in each format was still reasonably close to the look of a classic 50mm on S35.***)

My Pentax 645 75mm F2.8

I went to NAB this past spring and was discussing my challenges with the camera prep on this film with my friends at the FUJIFILM booth.  “I just wish the GFX could shoot all the smaller video formats natively, then I could test all the lenses on one camera,” I said.  They smiled and asked me to contact them in a week when NAB was over.  One NDA signing later, I was eventually introduced to the GFX100 II.  What’s different about this update to the GFX line is that FUJIFILM really put a lot of thought and care into the digital cinema side of the camera.  It’s similar in spirit to the latest small mirrorless from FUJIFILM, the X-H2S.  The new features are things like high frame rates, 10-bit 4:2:2 4K and 8K ProRes recording on board, 4K 60P, improved dynamic range and an even better Flog2 gamma, just to start.  The GFX100S, the model I have, was really only able to do up to 30fps in 4K, in an intra-style H.265 codec in 10-bit 4:2:0 color.  While its footage is still great, you are limited to the full size of the sensor and 29.97 fps as a max frame rate in 4K.  What really stood out to me was that the GFX100 II can shoot multi-format.  You can choose to shoot in full “GF” (medium format), “PREMISTA” (full frame), and a “Super35” crop on the sensor.  Medium format, full frame, S35, all on one sensor.  This was the tool I needed to be able to try out my “Hitchcock Experiment” and get “The Permit” in the can.  Now it gets interesting.


OK, so now that it’s possible to do this test with results I can actually compare directly, what exactly am I doing?

What I didn’t want to do was a dull lens test.  I wanted a scene, not a lens chart.  So I came up with a concept paying some homage to “The Conversation”, Francis Ford Coppola’s classic.  My writer friend Joseph Piscopo penned a script that accomplished what I needed technically, and with great entertainment value.  The look of this test is very similar to the intended look of the feature project, so it’s a good way to see which lens will be a good match for the mood of the film.  What you are going to see is a short film, broken up by “days” in the story.  It’s designed to be repetitive in nature, so that we could shoot each day with a different lens and its accompanying format.  The shot consists of a tracking Steadicam move, as the main character shows up to work each day, taking a long walk through a factory.  The actor’s movements and the movements of the camera are as consistent as we could possibly make them.  There are a few key moments to look out for, detailed after the test film below.

A part of the set built by production designer Abigail Stanton. She built such a textured, lived-in retro world. It looked amazing on camera.

These are the lenses in order of display in the film.  All were shot at T2.8 (T2.9 for PREMISTA zoom).

***In the film, Lens #9, the control, shows up first and last to bookend the test with a very ubiquitous known quantity, the Alexa Mini, and a popular lens I use often that I feel is very neutral, the PREMISTA 28-100 zoom.  Just to give a sense of what a familiar camera will look like in the scene.***

Lens #1
FUJIFILM GFX100 II with PENTAX 645 75mm F2.8 | GF Format – Medium Format
Lens #2
FUJIFILM GFX100 II with FUJINON GF 80mm F1.7 R WR | Autofocus | GF Format – Medium Format
Lens #3
FUJIFILM GFX100 II with ZEISS SUPREME PRIME 65mm T1.5 | Premista Format – Full Frame Format
Lens #4
FUJIFILM GFX100 II with SIGMA CINE 65mm T1.5 FF | Premista Format – Full Frame Format
Lens #5
FUJIFILM GFX100 II with FUJINON PREMISTA 28-100mm T2.9 | Premista Format – Full Frame Format
Lens #6
FUJIFILM GFX100 II with COOKE S4/i 50mm T2.0 | 35mm Format – Super35 Format
Lens #7
FUJIFILM GFX100 II with ZEISS SUPER SPEED MKII 50mm T1.3 | 35mm Format – Super35 Format
Lens #8
FUJIFILM GFX100 II with BAUSCH & LOMB SUPER BALTAR 50mm T2.3 | 35mm Format – Super35 Format
Lens #9/Control
ARRI ALEXA (Mini) with FUJINON PREMISTA 28-100mm T2.9 | Super35 Format
This is where the digital sensor Super35 format size discrepancies come into effect.  Most digital cameras use a much bigger sensor than actual 3-perf 1.78:1 (16:9) film.  The Arri Alexa Mini and GFX100 II are no different.  If the digital sensors used the actual “S35” film gate size, the correct lens to get about 47 degrees FOV would be a touch wider than 50mm.  But in this case, the sensors being bigger than a proper 3-perf frame, I had to match the FF and medium format frames to a similar field of view.  This is why the FF format is using a 65mm as a baseline, and 75/80mm for the medium format sensor size.  It’s the closest I can get and still have lenses to choose from at Keslow.
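The focal-length matching described in this note can be sketched with the standard pinhole FOV formula.  The sensor widths below are my own approximate figures (the Alexa Mini's 16:9 width is around 28.25mm; check the spec sheets for your exact camera), but they show why 65mm and 75/80mm land where they do:

```python
import math

SENSOR_WIDTHS_MM = {           # 16:9 active widths; approximate figures
    "alexa_mini_s35": 28.25,
    "full_frame": 36.0,
    "gfx_16x9_video": 43.8,    # ~44 x 25 mm extraction noted above
}

def hfov_deg(focal_mm, width_mm):
    """Horizontal field of view of a rectilinear lens (pinhole model)."""
    return math.degrees(2 * math.atan(width_mm / (2 * focal_mm)))

def matching_focal(focal_mm, src_width_mm, dst_width_mm):
    """Focal length on dst that matches the horizontal FOV on src."""
    return focal_mm * dst_width_mm / src_width_mm
```

With these widths, matching a 50mm on the Alexa-style S35 sensor gives roughly 64mm on full frame (hence the 65mm Supreme and Sigma picks) and roughly 78mm on the GFX video extraction (hence 75/80mm).  The exact degree figure depends on whether you measure horizontally or diagonally and on which “S35” you assume, which is why the post's 47-degree target is a nominal reference rather than a hard number.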

My goal is to see which of these lenses, in their native formats, S35, FF and medium format, captures the mood of the scene the best.  This is the most subjective test I can think of, and that’s the point.  There is no winner.  They are all amazing lenses, and I use them all regularly.  The goal is not to see which lens is “best” technically; the goal is to see which lens/format is “right” for the storytelling.  What makes this test unique is that, for the first time, I can compare not only the characteristics of the lenses, but how they behave in their intended format, and how the format also changes the perception of the mood and feel of the film.  Bear in mind, this is also all coming from the same camera: the exact same dynamic range, color science and light response.  The GFX100 II is currently the only cinema camera on the market that can shoot three different motion picture formats, all in 4K-8K.  This gives the cinematographer access not only to 120 years of cine glass, but to all the full frame and medium format lenses of the past, which can now be used to their fullest in their intended formats (or at least close to it).


-The room metered at a T5.6 @ 800.  I used a Formatt Hitech Firecrest .6 ND filter to maintain a T2.8 across all the lenses, as that’s my preferred shooting stop.  The filter does of course create some reflections under certain circumstances.  The most visible one is in the doorway near the red “Exit” sign.  The overhead lighting in the background makes a small reflection that gets kicked back to the sensor.  This was not visible without the filter in place, and it is not indicative of the performance of the GFX100 II.

-I opted to try autofocus with the FUJIFILM 80mm F1.7 lens on “DAY 2”.  As it’s a stills lens, and the GFX100 II was on beta firmware, I wasn’t able to use its repeatable manual focus function, and I didn’t want to misrepresent its capabilities before it’s finished.  Essentially, that function makes the AF lens behave like a manual lens with repeatable focus movements on the electronic focus ring, allowing for the use of wireless follow focus units.  But it was a cool chance to test out the GFX100 II’s face tracking ability in video.  I think it did very well; it remained locked on the subject the entire shot, only hunting when pointed at the surveillance station for a moment.

-You will see a single edit jump cut in Day One / Lens 1 PENTAX 75mm F2.8.  This was for audio purposes, as there was interference in one of the takes over his dialogue, and there was a lighting issue on the tail end of the cleanest audio take, when he reaches the surveillance table. I did not want to crop in to help seam the shots together.

-Both cameras, the GFX100 II and the Alexa Mini, were rated at ISO 800.
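The ND arithmetic in the exposure note above is the standard density-to-stops conversion: every 0.3 of ND density costs one stop, so a .6 ND takes a scene metered at T5.6 down two stops to a T2.8 shooting stop.  A quick sketch:

```python
import math

# Standard ND math: every 0.3 of neutral density cuts one stop.
def stops_from_nd(density):
    return density / 0.3

# T-stop to shoot at after the ND eats `stops` of light, starting from
# the metered stop (each stop divides the f/T-number by sqrt(2)).
def compensated_t(metered_t, stops):
    return metered_t / (math.sqrt(2) ** stops)
```

Here compensated_t(5.6, stops_from_nd(0.6)) comes out to T2.8, matching the exposure described in the note.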

Without further delay, please enjoy “The Permit”.
You can download the original file from the Vimeo link.  It’s in 4K ProRes if you would like to see it in its full quality.



BTS Video:

THE STATIONS: Some things to keep an eye on.

1 – The Roll Gate. The scene opens with the roll gate.  The exterior is 8-9 stops over key compared to the ambient interior.  What I was looking for here is how a large, broad, overexposed element affects the characteristics of the lens.  How did they handle this extreme lighting situation?  Does the artifacting take you out of the story?  Does it look “weird”?  All things to look out for.

2 – The entrance. As the character continues, we enter into the big room.   There is a small light above the talent right at the first turn designed to ping the lens from a high angle.  Does it register at all?

3 – The factory floor. As the character continues in the big room.  This is one of the longest vistas in the interior.  The parallel lights and distant background give a good sense of the bokeh, and how the location reads in relation to the character.  Is the background too out of focus to read well, etc.  This section is available light: sunlight filtering through skylights.  I was looking for loss of contrast, and any artifacts from having many smaller light sources in the frame.

4 – The doorway. The next turn is designed to mimic a neon or LED exit sign.  I was looking to see if there was any color veiling from having a ton of red light in the frame.  This was more specific to the older lenses: Super Baltar, Super Speed and Pentax.

5 – The Long Walk. This section has a row of three fresnels rigged out of frame, providing three levels of backlight. The first just hits the talent, flagged from camera. The second is tilted up slightly to hit the talent and the camera, with an additional unit hitting the lens more aggressively to create a flare. The third hits just the talent, but with a second unit hitting just the camera separately. I wanted to see how off-frame flares are handled, as well as how the background separation holds up in a wider frame.

6 – The surveillance table. Here there is some mixed light: a tungsten lamp illuminating the table, and a daylight backlight. This is the darkest part of the scene, and it’s also when the camera gets the closest. I was looking at how the focus falloff, bokeh, contrast, black level and formats affected the scene.
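For readers newer to exposure math, Station 1’s “8-9 stops over key” is a big number in linear light, since each stop doubles luminance. A quick sketch of the arithmetic (plain Python, my own illustration, nothing from the production):

```python
def stop_ratio(stops):
    """Each photographic stop doubles linear luminance."""
    return 2 ** stops

# The roll gate exterior sits 8-9 stops over the interior key:
assert stop_ratio(8) == 256   # 256x the key's luminance
assert stop_ratio(9) == 512   # 512x
```

That ratio is why a broad overexposed source like the gate is such a brutal stress test for flare and veiling.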


It is always fun to try out new cameras. In this case, I was VERY VERY impressed. The GFX100 II is stellar. Not just how it looked, but how it worked. It felt like I was using a proper cine camera. The controls were simple and straightforward. It’s built tough, featuring a full-sized HDMI connection, a strong and snug lens mount with no play, and a removable EVF. The removable EVF is great because it lowers the profile of the camera and leaves more room for accessories. We had no overheating issues; we attached an aftermarket cooling fan for extra protection, but I didn’t think it was needed. The battery life was very good, better than my current GFX100S. I shot a combination of ProRes 4K and 8K. To get the proper S35 frame size from the Super 35 mode, you have to record in 8K; it’s basically a pixel-for-pixel readout.

I used a Fotodiox Pro PL-to-GFX lens mount adapter, the same adapter I use on my GFX100S. As a testament to the quality FUJIFILM puts into their cameras, the mount was spot-on collimated on both bodies. You could use the lens marks with a measuring tape. With other mirrorless cameras I’ve owned, shimming a PL mount for one body didn’t mean it would be correctly collimated for another body in the same line. It bodes well that both were just about perfect.

We also used a Kondor Blue prototype cage to mount accessories and rails. The camera was powered from a battery plate that also powered the Teradek Bolt 6 XT, with a panel array on the receiver. I reached out to my friends at CSLA, Teradek and SmallHD and asked if they could help out with this project; I needed the most powerful wireless system they had, because I knew the walls of this factory were going to be a problem. Graciously, they let me borrow the best of what they make. Amazingly, the Bolt 6 was able to reach the video village, a SmallHD 24″ OLED monitor, with a crystal-clear signal through thick brick walls, at times over 150 feet of distance with zero line of sight. It was a lifesaver. We also used a Teradek RT wireless follow focus, and 1st AC Ryan Patrick O’Hara mapped all of the lenses so that each individual lens would have the same focus throw on the hand unit. This way, as we rehearsed the move, it would feel the same to him regardless of the lens. He is a living autofocus that never misses. It’s remarkable.

Steadicam Operator Jason Leeds, SOC, balancing his Steadicam sled.

Considering the length of the takes and the precision of the move, our Steadicam operator Jason Leeds, SOC, certainly appreciated the light weight of the build. I capped us at 3 takes per day/lens in the script. It worked out to about 30 takes in all, each over two minutes counting frame-up and slate. That’s a long time to be framed and moving with precision. Jason is an amazing operator.

The Sigma Cine 65mm on deck for its test.

Now… the part I love…. the image…. I recorded in F-Log2, FUJIFILM’s new log gamma. It’s flatter than the previous version, and it needs to be in order to store all the image data. The GFX100 II has excellent dynamic range and color. I didn’t put it on a XYLA chart, but it felt close to the Alexa Mini in overall dynamic range. The Mini had a stop or so more highlight latitude, but the GFX100 II was MUCH cleaner in the shadows than the Alexa. I would comfortably shoot it as high as ISO 3200. As you saw, in a real-world filmmaking scenario, the GFX100 II held its own against the gold standard in the industry, the Alexa Mini. I would even be bold enough to say I preferred the color science in the GFX100 II. It always locked in on the skin tones easily in the color suite, and it represented the other colors in the scene very accurately. Even the Mini sometimes shifts aquas and teals to green or blue. The GFX100 II was dead on with everything from skin, to wardrobe, to the elements in the location.

The ProRes footage is very easy to cut. While the GFX100 II can do 12-bit 4K RAW over HDMI to a recorder, I opted to shoot ProRes as I felt that’s what most users will shoot. It graded and edited like butter on an M1 Mac, even the 8K footage.

To maintain consistency, I used FUJIFILM’s F-Log2 to Eterna LUT. I metered the factory location with a Sekonic C-800 color meter and found an ambient color temperature of 5100K. The cameras were set to 5100K and not rebalanced for the rest of the shoot. The factory lighting was consistent, and we shot so quickly that the sun did not have a very drastic shift, so any color shifting you see between the days is imparted by the lens. The control footage from the Alexa, however, I did grade a little bit, using Film Convert’s FUJIFILM looks as a starting point. It tracked surprisingly well.

I realized something that’s actually kind of cool. Since you can have three different format-based DOF characteristics, you can in theory carry three “47-degree FOV” lenses and choose between them for the kind of DOF and background separation you need in a particular shot, while not changing the “rules” of maintaining the same field of view. Think of it as changing lenses without changing lenses.
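To put rough numbers on that idea: horizontal field of view depends only on sensor width and focal length, so matching a 47-degree FOV across formats just means scaling the focal length to the sensor. A quick sketch (Python; the sensor widths are my own approximations, not official figures):

```python
import math

# Approximate sensor widths in mm; these are my rough assumptions,
# not official manufacturer figures.
SENSORS = {"GFX 100 II": 43.8, "Full Frame": 36.0, "Super 35": 24.9}

def focal_for_hfov(sensor_width_mm, fov_deg):
    """Focal length that gives the requested horizontal FOV on a sensor."""
    return sensor_width_mm / (2 * math.tan(math.radians(fov_deg) / 2))

for name, width in SENSORS.items():
    print(f"{name}: ~{focal_for_hfov(width, 47.0):.0f}mm for a 47-degree FOV")
# Roughly 50mm on GFX, 41mm on full frame and 29mm on Super 35:
# the same framing with three different depth-of-field characteristics.
```

Same field of view on all three, but the longer focal length on the bigger format gives shallower depth of field at the same stop, which is exactly the “changing lenses without changing lenses” trick.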


It was important to me to really give this test a finished look. There is no more real-world trial for a lens than actually shooting something with it in the field. With the support of some amazing people, we pulled together and made something I hope was as entertaining as it was informative.

I wanted to create a world, and see how the lenses reacted to that world. Our production designer Abigail Stanton worked tirelessly and really made the place feel lived-in and real. Not only did the film have a sense of time and place, but small details like fingerprints and crumbs on the table really pulled it all together.

Production Designer Abigail Stanton on the left, and Hair and Make up expert Karissa Symmons in a moment between takes.
Karissa Symmons applying last looks to Bhavani Lee, Actor AND our 1st AD!

Josh Benson, our gaffer, spent the better part of a day carefully blocking skylights and windows, then relighting big sections of the walking segment to maintain the look and feel of the space. We had very limited tools, but the G/E team made the most of what we did have: only a handful of my own personal Rayzr lighting, three Rayzr 7 fresnels, one 2×1 Rayzr MC400 Max panel, 4 LED tubes, and 2 Rayzr mono lights. Josh somehow lit 100,000 square feet with only that, a few Cardellini clamps and a couple rolls of duvetyne. Brilliant gaffer.

Gaffer Josh Benson plotting out the lighting scheme.
Our “Automatic gate”…..

The incredible acting talent that came to help out and bring the characters to life: Mousa Hussein Kraish and Bhavani Lee.


I know what my favorite lens is out of the bunch. What’s yours? That’s the point of this. It is purely subjective.

I found this project personally very informative. It was an interesting experience seeing how certain lenses brought the scene to life. Their characteristics just happened to mesh with the content… to me… I would be willing to bet that different people have differing opinions of which lens that actually is.

I was excited that the stars aligned so that I could actually perform this test.  This would not have happened without the generous support from FUJIFILM, not only supplying me with a beta camera that was physically able to make this all possible, but supporting the production and bringing me the fabulous production team at Pairadox Studios: Jackie Merry, Casper Hanney, Nhung Nguyen and Tani Shukla.  Without their support this would have been impossible.

I also want to thank Creative Solutions LA, Teradek, Small HD, Keslow Camera, Sigma Photo Corporation and Rayzr Lighting for their support.

Thank you for reading. I hope you enjoyed it as much as I enjoyed making it.





An Examination of: Sigma fp : RAW workflow and how to get the most from the fp.

Welcome back, thank you for reading.

This is really two posts in one: this section, and a quoted section at the end from a post a week ago that highlighted a preliminary test of the Sigma fp Cinema DNG workflow. In last week’s post, it seemed initially like I had unlocked some extra dynamic range from the RAW files. I have spent the better part of the last week talking with colorists and engineers about what I was seeing. Despite being a cinematographer for about 14 years, one thing I never dove too deeply into was post production. Last fall, I decided to try to learn how to color grade properly after seeing the amazing things Steve Yedlin was doing. It was amazing how much control he was able to extract from the footage of nearly any camera by fully understanding how the color and post pipeline works in combination with on-set practices. It’s been about 7 months since I started on this “self help” project. I have been watching all the same tutorials, YouTube videos and workshops as everyone else. One thing I know: despite how much I have learned, I realized I know almost nothing. That fact became even more apparent last week, especially considering what I have learned since. I was wrong about what I was seeing in the initial test. While impressive to someone who took the footage at face value, it was really nothing miraculous. The difference was that I simply followed someone else’s workflow, Juan Melara’s, and did it “right” for once. While it initially looked like I was unlocking some added dynamic range, I was really just setting the project up at a more advanced, and more advantageous, starting point. A starting point that let me see what the fp was actually capturing, and work with it in a way that was familiar and effective. The fp footage seemed to take on better image reproduction qualities and performed nearly as well as the much more expensive and impressively specced Panasonic S1H.
So while the results are not false by any means, I did a few things… wrong. Well, not wrong; inefficient and clumsy is more the right description.

After posting my findings, I was contacted by a few colleagues who specialize in color and post. They could see that I was doing it “right,” but that I didn’t really know WHY it was right. This is why the film community is so important. There is no way I could have learned as much as I did without the time other people gave me. Mentorship is a priceless thing. I had a multi-hour conversation with a more experienced Resolve user, Tim Kang, colorist Juan Salvo, and briefly Deanan DeSilva. They explained that because Cinema DNG is an open source format, there is not necessarily a set way of filling that container with information. So you are at the mercy of how the camera writes its data into the CDNG container, and then there is the issue of how the editing/grading software is going to take that and display it to you. They may not line up. Of course, you can just work with the RAW CDNG from the fp as-is; there is nothing wrong with that method. For me, I was never happy with the highlight and shadow handling out of camera. It felt too crunchy. To help speed up the process of getting the footage where I want it, taking advice from both Tim and Juan, I found what works best for me. The method transforms the CDNG into linear space and converts it to ARRI color and LogC, just like I did in that first test. The difference, however, is how I got there. I chose ARRI color science and LogC as an output because it has the most established workflow overall: I have tons of looks, grades, and plugins that work with Alexa footage. (A quick side note: Tim Kang also showed me a very clever method for working with log files, using transformations that make them behave as if they were RAW. I will detail that in another post.)

Essentially, RAW is just RGB values coming off the sensor with no “color science” applied yet. Theoretically, if you were to de-bayer the image only and record the straight linear RGB values off the sensor, uncompressed, before any image conversions, you would have something very close to RAW. There are major manufacturers who have to manipulate their RAW output in a “half baked in” method similar to this to get around patents on recording compressed RAW. The sensor data would be de-bayered, but it wouldn’t have any white balance or look baked in, because it’s in linear gamma. White balance is a function of the balance between RGB levels being manipulated mathematically, then run through the selected color science to take the linear values and convert them into a visible image that makes sense, like 709 or various log formats. Linear data looks like it has about 3 stops of dynamic range when viewed in a traditional color space: lots of the image is so dark it looks black, there is maybe one or two stops’ worth of “mids,” then the rest blows out white and is oversaturated. This is just because traditional color and gamma spaces can’t interpret the data correctly, as there is no curve applied. So, since linear space is mathematically similar to RAW, in that it’s just manipulating RGB values to take data and fold it onto a curve that looks good in a set color space, you can use the Color Space Transform tool in Resolve to do some interesting things. You can make RAW-like image manipulations in the steps between the conversion to Linear/P3 and the output to ARRI color space/LogC. The nodes in between these steps now act like the DSP in the camera. By taking Cinema DNG RAW and setting the RAW tab in DaVinci to output linear in the widest color gamut it allows, P3 D60 and Linear, you can make your adjustments to WB, color, contrast and “ISO” with the nodes in between the input and output color space nodes.
This lets you get the image to the right place BEFORE you convert it to a readily viewable grading starting point: the ARRI color/LogC output of the node tree. Think of it as a prep cook meticulously selecting the very best from a pile of ingredients, then preparing them before handing them over to the chef. The intentional choice of which piece of meat, the very best vegetable, the consistent cuts and attention to detail will help everything cook more evenly and, in the end, taste just a little better than chopping everything up and throwing it in a pot. The reason I prefer to do it this way, as opposed to using the built-in RAW adjustments in Resolve, is that I don’t love how Resolve treats the fp footage natively; this method bypasses their process. I can essentially build my own starting point, one that shows all the dynamic range the image has to offer, in the log format of my choice. For argument’s sake, I’m going to use ARRI LogC, as it’s pretty much ubiquitous at this point in the industry. Here is a very important note, though: despite going through some very accurate transforms between color spaces and gammas, LogC and ARRI color are ultimately set up for an ARRI camera’s sensor. So just be aware, your footage may need a little tweak to work with the ARRI LUTs. But you can easily make those tweaks on the in-between transform nodes to get it to a nice place.
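To make the log-encoding step concrete, here is what the output transform is doing mathematically. The constants below are ARRI’s published LogC3 (EI 800) curve parameters; the sketch just shows the shape of the linear-to-log mapping, not Resolve’s actual implementation:

```python
import math

# ARRI's published LogC3 (EI 800) curve constants
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D = 0.247190, 0.385537
E, F = 5.367655, 0.092809

def lin_to_logc3(x):
    """Encode a linear scene value to a LogC3 (EI 800) signal value."""
    if x > CUT:
        return C * math.log10(A * x + B) + D
    return E * x + F  # linear toe below the cut point

# 18% gray lands near the familiar LogC code value of ~0.391:
print(round(lin_to_logc3(0.18), 3))
```

The log curve compresses many stops of linear data into a viewable signal, which is exactly why the linear image looks crushed and blown out until this curve is applied.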

Watch this video showing the process, and below that I made some screen grabs to help explain it in writing, as a reference to come back to if you decide to try it. (I apologize for the audio quality. All my microphones are in my office in NY and I’m quarantined in LA, so I had to use the MacBook Pro mics while operating Resolve, with the computer fans blaring.)


The master Color management settings for the project.

The starting point: 

Under Camera RAW:

Decode : Clip

White Balance: As close to what you think you shot the footage at, or a little off if you prefer warmer or cooler. It won’t matter much, because you can adjust it in the WB node with the linear RGB values.

Color Space: P3 D60, the widest one CDNG allows. (If using R3D or ARRIRAW you can choose the native color space, and other RAW formats give you more options. CDNG only offers Rec.709, Blackmagic color and P3 D60, and I don’t use the Blackmagic color space because there is no option for it in the Color Space Transform tool.)

Gamma: Linear

Highlight Recovery: Off


Then I make three nodes:

1: White Balance/Exposure

2: Output Transform

3: LUT

Also, very important: under the Color Wheels tab, turn Lum Mix to zero for the WB/Exposure node. Lum Mix adjusts saturation as you change values, to keep saturation and luminance in check, but since we are looking for a more “manual” way of doing things, it can cause problems at this stage in linear space.
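For anyone curious what the WB/Exposure node is actually doing to linear data, it boils down to per-channel multiplication: white balancing scales red and blue until a neutral reference reads equal in all three channels, and exposure is a global multiply. This is a toy Python illustration of the principle, my own sketch, not Resolve’s internals:

```python
def wb_gains(neutral_rgb):
    """Gains that make a neutral (gray) reference read equal in R, G and B.

    Convention here: normalize to the green channel, as most cameras do.
    """
    r, g, b = neutral_rgb
    return (g / r, 1.0, g / b)

def apply(pixel, gains, exposure_stops=0.0):
    """Per-channel gain plus a global exposure multiply (1 stop = x2 in linear)."""
    k = 2.0 ** exposure_stops
    return tuple(p * gain * k for p, gain in zip(pixel, gains))

# A gray card shot under warm light reads red-heavy in linear:
gray = (0.60, 0.50, 0.40)
gains = wb_gains(gray)
balanced = apply(gray, gains)
# All three channels now match the green reference: (0.50, 0.50, 0.50)
```

Because these are straight multiplies, they only behave this cleanly on linear data, which is why the adjustments live between the linear decode and the log output rather than after the curve.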


The next thing you do is add the Color Space Transform tool to the Output node. Don’t touch the other nodes yet.

Set it to:

Input Color Space: P3-D60

Input Gamma: Linear

Output Color Space: ARRI Alexa

Output Gamma: ARRI LogC

(You can also output to RedWideGamut/Log3G10 or V-Gamut/V-Log. Make sure the gamma and color space match for now, unless you REALLY know what you are doing, or are planning on grading fully manually with no camera LUTs.)

It’s all about getting to a nice starting point with fp CDNG files. Remember, this is just one of 100 ways to grade footage, but I like how this gives me control before and after the point where the log image is created and the looks are applied. Also, making changes BEFORE the conversion to log will have a much bigger effect down the pipeline, because you are “changing the ingredients,” to stick with the cooking analogy.

Below is the older post I initially made. I am including it so you can see the road to this current method of working. What is different between the two is that the additional node between the Camera RAW tab’s linear output and the Transform tool (the 2nd node) lets you fine-tune the data before it gets “baked into log.” Also, your controls with the wheels in the Color tab are finer than just picking an ISO and punching in a white balance.


“After some deeper testing….. I think the Sigma fp “might” have more dynamic range than Sigma says.

I have been testing and using the fp for some time now. According to Sigma, the fp has 12 stops of dynamic range. This has always been a bit underwhelming, as the standard these days seems to be 13-14 stops. However, the camera has so much going for it that it’s still a very versatile and intriguing camera: RAW recording, 4K, full frame, excellent low-light performance, in a package that’s smaller than my Teradek wireless video transmitter. You can build it up to be a pretty effective cinema camera, or strip it down to be a great little pocket camera.

To overcome its somewhat limited dynamic range, I had been using the old “MiniDV” technique of protecting the highlights when exposing in RAW, knowing that because the footage is so clean, I had a couple of stops to pull the image back up and bring some balance back. After spending the last 4 months doing quarantine research on color space transformation and RAW processing (thanks, Steve Yedlin, ASC), I started playing with different RAW development methods. I took the fp DNG files into Resolve and set the RAW settings to Linear gamma, P3 D60 (a Juan Melara trick). Then I applied the Color Space Transform tool from the OpenFX tab, selected the input space/gamma (P3 D60 + Linear), set the output color space/gamma to ARRI Alexa and LogC (or V-Gamut/V-Log, or RWG/Log3G10 if you need to mix cameras), and watched as 3 stops of highlight detail magically appeared…..

See attached stills from the fp RAW:

1: Camera RAW metadata: I’m not sure if the Resolve ingestion applies any curves by default, but the “Camera Metadata” setting implies not.
2: Using RAW controls: conversion to P3 D60 and linear space.

3: Using the Color Space Transform tool: conversion to ARRI Alexa color space and LogC. Pay attention to the highlights in the tree and the obnoxiously brightly lit house.

I wonder if processing this way is digging deeper into the RAW file, or if the camera just has an overly aggressive 709 curve for viewing. What did Sigma measure the claimed 12 stops of dynamic range on? The stock look, or a method that uses the full “negative”?
For comparison, here is a camera with a claimed 14+ stops in V-Log: the Lumix S1H, using the same ISO and the same lens, the Sigma 35mm F1.2. Naturally the curve and saturation are going to be different, but look at the highlights. Not far off. V-Log seems to have higher gamma and low-end placement than LogC.
Graded to match:
This is the S1H:
This is the fp:
I need to tear into this with a XYLA 20 step chart to try and measure more accurately what it’s doing. But it seems taking a different RAW development approach can get you FAR more out of the camera than what it seems to show initially.
If you have an fp, give this a try and see if you get better results.”
Hopefully this was helpful.  Happy grading!
Thanks for reading!
-Timur Civan

An Examination of: Sigma Cine Prime Lenses.


Photo Credit: Matthew Duclos

Welcome back to Tstops.

It’s been a long time since my last entry, as it’s been a busy year of shooting. This means one thing, however: something special has captured my attention. In 2016, I stopped by the Sigma booth at Photo Expo in New York. There was the usual plethora of still lenses behind glass on display, but in a far corner at the end of the booth were two cine cameras with something that looked a bit different attached: two compact twin cine-style zooms based on the now legendary Sigma 18-35 and 50-100 stills lenses. I could tell just by looking at them that this was something out of the ordinary. The housings were properly built and marked, and even had some really unique features that some of the top-end cine lenses lacked. For example, they had all kinds of technical data printed on the lens barrel itself: filter thread size, outside diameter, and the copies they had on display even had close focus information. I was intrigued.

I introduced myself to Brian Linhoff, the gentleman standing by them proudly, and hit him with a rapid-fire barrage of questions about the optics and mechanics. Were they parfocal? What kind of focus mechanism was under the hood? Sensor coverage? Coatings? MTF? Before he answered anything, he asked me who I was and what I did. It seems that for Sigma’s first foray into cinema lenses, these were questions they hadn’t heard from the photographers who stopped by the table; being primarily a photo company, they had limited exposure to motion. I explained I was a cinematographer, and I was just curious how they came up with so many clever design elements. He said it just made sense, and I agreed. The 18-35 and 50-100 were both members of Sigma’s top-shelf “Art” series of lenses. The line at the time was made up of just a few lenses: the two zooms, and an 85mm F1.4 prime, which was shockingly beautiful and even outscored the bar-raising Zeiss Otus 85mm according to DXO. My next question, naturally, was: do you have PL primes? The answer was “soon.” How much? I asked.
“Affordable” was the answer. The full 7-lens set is priced at around $24,799 at reseller Duclos Lenses. That’s a decent chunk of cash. However, as I came to realize, this set is a steal.

Flash forward several months, and I got a call from Sigma asking if I would have any interest in testing their newly developed full prime set. I mean, when you ask a lens-addicted DP if he wants to test brand new lenses, the answer is always YES! A few months later the set arrived, and I was ready to give them a whirl. I received the 14mm T2.0, 20mm T1.5, 24mm T1.5, 35mm T1.5, 50mm T1.5, 85mm T1.5 and the 135mm T2.0.


I have been shooting a very wide range of projects with them: fashion, commercial, tabletop, stills, and I even shot a film of my own.

1st AC Glen Chin calibrating the wireless focus.


On the set of my fashion film “Skull”


Using the ultra wide 14mm in a studio situation.


Here’s what I think.


The prime set is a fully rehoused version of the full ART series of primes. This is not just a set of still lenses with gears glued on; the build quality is similar to the top offerings available today. The physical size, look and feel are similar to ARRI Ultra Primes: dense, but compact. Whereas the Zeiss CP.2/3 are airy, light and feel hollow (not in a negative way), and the Cooke S4 and Mini S4s are dense and feel like there is no air space inside, I’d say the Sigma Cines fall somewhere in the middle, closer to the S4s. They feel confidently built. The focus mechanism and iris move like silk. The focus scales are longer than the still versions’, but still not quite the 300-degree rotation you would find on an S4 or an Ultra Prime. Given the price, I can understand there had to be some compromises. It uses what was described to me as a hybrid of a cam-and-rail and helicoid focus mechanism. I’m not sure what that means exactly, as it seems to be a bit of a secret within Sigma. The result is a fantastic, smooth and jitter-free focus pull: low resistance, but just enough dampening to give hand-turned focus pulls smooth starts and stops. The bodies are all aluminum, with steel mounts. As of now, the lenses are available in an electronic EF mount and a dumb PL mount; there is no iData protocol on the PL versions. The EF version, however, transmits focus and T-stop information to the camera. They are fully manual, so there is no autofocus functionality. The EF versions (I also tested the EF version) benefit greatly from an EF mount with a locking mechanism. The native Canon mount on a Canon DSLR, and the ARRI, RED and Metabones EF adapters, have zero play. Cheaper EF adapters did have some play (the Commlite adapter on the A7R2).

85mm on a RED Epic-W. The lenses are quite compact given the speed and build quality.

The markings are good, and reasonably spaced even on the longer lenses. Rehoused still lenses often suffer from having the majority of their focus range compressed into an 1/8th of a turn. While not as well spaced as the Angenieux 24-290, the longer Sigmas at least give you a 150′ mark. Many budget lenses mark somewhere around 90′, then you basically have to guess till you hit infinity. All of the Sigmas rotate well beyond infinity to make up for any miscalibrated mounts. I do wish they had a hard stop at infinity, but that means that every camera you ever put them on has to be 110% perfectly back-focused (this should be the case at all times!). The real world often has other plans. Rented cameras, lens adapters and lower-cost cameras often are not back-focused perfectly. This buffer gives you a fighting chance of hitting infinity focus even on a very miscalibrated mount. So, while some ACs find it somewhat annoying to have 3/4″ of travel past infinity, if production rented a camera that was very far off, the 1st AC would not be screwed. Not being screwed is far better than being slightly annoyed.

All in all, beautifully built, and made to last a long long time.


This is the part where I was floored. I knew the Art series was great. I had tried out the 85mm ART stills lens on an A7R2 and was blown away by the look. Absolutely razor sharp at all stops, with no compromise on sharpness wide open. The bokeh on the 85 was truly something special: smooth, with perfect flat specular discs, but not quite Gaussian. That is to say, it doesn’t look like a perfect airbrushed blur; it has texture and form. Bokeh quality is obviously subjective in its beauty; I find it gorgeous. Bokeh is a function of lens design. To achieve certain optical goals, design choices affect different parts of the image: sharpness, flare, chromatic aberration, distortion, etc. When you eliminate one, another usually suffers, so a lot of meticulous design planning goes into building a lens that gets good overall performance. In the case of the 85mm Art, it’s nearly a perfect lens. Now, an 85mm is actually one of the simpler optical designs, and making all the corrections to achieve a perfect image is within the realm of possibility. The Rokinon 85mm, for example, is definitely the jewel of that set, and only $300: sharp, low CA, low distortion, etc. The trouble is matching the rest of the set to that standard. The Rokinons (and Xeens) show how hard that is at a low price point.

Still photo. 24mm T1.5


Still from a commercial. 85mm T1.5


Still from a commercial project. 50mm T2.0

The crazy part of the Sigma Art lenses, and by extension the Sigma Cine set, is that they just don’t break down. The optical perfection of the 85 carries all the way through the entire set. In fact, the 14mm T2.0 is one of the fastest ultra-wide lenses available. The Cooke S4 14mm T2, Master Prime 14mm T1.3, Ultra Prime 14mm T1.9 and Leica Summilux-C 16mm are all fast wides, and they all cost as much as a car. However, the Sigma set has another ace up its sleeve. Not only are they fast and optically incredible, they ALL cover full frame. None of the aforementioned high-end cine primes can cover a full frame sensor, let alone VistaVision, which is even bigger. The 14mm Sigma, at an incredible T2.0 (F1.8), covers the RED Monstro VistaVision sensor. Even wide open, the 14mm is TACK sharp in the corners. It’s a nature and astro cinematographer’s dream lens. Every lens in the set is a dream lens.

The 14mm @ T2.0. Notice how sharp the corners are. Shot on an A7R2, full frame.


14mm full frame on the A7R2. The combination of the A7R2’s image stabilization and the fast aperture let me shoot this handheld at a 1/5th of a second exposure time. A tea candle in a paper balloon and a cell phone screen are what is lighting this scene.

Just to be fair, the weakest lens in the set is the 20mm. By weak, I mean it is maybe 5% less sharp wide open at T1.5 than every other lens in the set; by T2.0, you would never know it had a flaw. I am gushing over these lenses because they deserve it. On a S35 frame, I think these lenses would hold their own against Master Primes, Summilux-Cs and Cooke 5/i’s (I will shoot a blind test soon to see if my impressions hold water). That is what they feel like on the monitor: tippy-top-shelf glass. Mechanically, the shorter focus scale and, I’m sure, some build quality differences will be apparent, but when you take into consideration that the average price of a Sigma is about $3,499 per lens, vs. $25,000 per lens for the ARRI/Leica/Cooke, I think the value is clear.

50mm Sigma Cine Still. T1.5


100% Crop. The resolution is a great match for the A7R2 and 4K – 8K cameras.

But let me not get ahead of myself. Just take a look. Here are a few moving examples. I made these clips to put the Sigmas through their paces in real-world scenarios: a mix of natural light, available light, “no light,” lit interior/exterior and tightly lit tabletop.

This first video is a fashion film. Its intended finish is monochrome; I included a second color version for those of you who want to see it all in color. The third video is an assortment of shots from different cameras and very different circumstances: tabletop, timelapse and documentary. Pay close attention to the small things. Notice how, despite the incredible resolution, it has a gentle focus roll-off. It looks to me like a hybrid: the sharpness of a Master Prime, the focus roll-off of a Cooke, the subtle color tone of a Leica, and its very own flavor of bokeh. Lenses this clean are usually boring to look at. With these lenses, the cleanliness is not sterile; it is the source of their look.


The look they give is sophisticated but still has some character. With the exception of the two widest lenses, the set has almost no distortion, and none of them have significant chromatic aberration, even wide open. The 14mm does, however, have a very strange distortion when you get to close focus. It's almost like a donut-shaped area that distorts differently. On an S35 frame it's less pronounced, and it only happens when focusing under 18″. When the focus is further out, it's nearly perfect: lines are dead straight and you get no curvature of horizontal lines as you tilt the camera. I think the internal elements just get too close to (or far from) the big glass ball that makes up the front element. The 20mm has a tiny bit of barrel distortion, but nothing jarring to the eye. It would only show up if you were shooting a grid and were physically close to it. At distance, the distortion cleans up to the eye.

Using the Sigmas for high speed motion control.


The small size and light weight of the Phantom VEO/Sigma combo means the arm can move faster, without sacrificing any image quality.


The Negatives:

So maybe they aren’t 100% perfect but I only found a few issues.

One: The primes do exhibit some slight breathing. It only becomes apparent when you near the close focus end of the lenses. It’s subdued but more noticeable on the wider end. That said, even the worst offender still breathes a lot less than most of the S4’s. If you have shot on S4’s recently, you will know that while present, their breathing is totally acceptable and hasn’t stopped them from becoming the most popular set of lenses of the last 25 years.  I think the Sigmas are absolutely acceptable. Would it be better to have none? Of course. Will it kill a shot? No. Remember the price tag. It seems this is where the compromise happened.

Two: The close focus on some of the lenses is not the best, specifically the 85mm. The CF on the wider end is excellent, and although the 135mm is 35″ close focus, it’s so tight that it feels almost like a macro.

14mm: 11″ CF

20mm: 11″ CF

24mm: 10″ CF

35mm: 12″ CF

50mm: 16″ CF

85mm: 34″ CF

135mm:  35″ CF

The 85mm is kinda the odd man out. As it's adapted from a still lens, I suppose this is another compromise. Jumping from the 16″ CF of the 50mm to 2′10″ on the 85mm is a pretty big jump. The 85 is not a tight enough focal length to get really close to an object at that distance. You can fill the screen with a face, but you may not get an eye to fill the frame. If your set is the 5-lens set, you may have to jump through some hoops to get an ECU: diopters, etc. I found the 135mm very useful for tabletop and product work. With some post cropping, we got some very close-up shots. You saw the water droplets hitting the ice in the examples above. The resolution allowed us to crop in a bit without the shot standing out in a bad way.
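For a rough sense of why the 85mm's close focus feels limiting, a simple thin-lens estimate works: at close focus, magnification is about m = f/(s − f), so the subject height that fills the frame is the sensor height divided by m. The sketch below is my own back-of-envelope helper (`field_height_at_cf` is a made-up name), and it treats the quoted CF as subject-to-lens distance, which isn't strictly true since CF is measured from the sensor plane — so take the numbers as ballpark only:

```python
# Rough thin-lens estimate of how tightly each lens can frame at its
# quoted close focus. Assumption: the quoted CF approximates the
# subject-to-lens distance (really it's measured from the sensor plane).

SENSOR_HEIGHT_MM = 24.0  # full-frame vertical dimension

def field_height_at_cf(focal_mm, cf_inches):
    s = cf_inches * 25.4            # subject distance, mm
    m = focal_mm / (s - focal_mm)   # thin-lens magnification at that distance
    return SENSOR_HEIGHT_MM / m     # subject height that fills the frame, mm

for focal, cf in [(50, 16), (85, 34), (135, 35)]:
    h = field_height_at_cf(focal, cf)
    print(f'{focal}mm @ {cf}" CF: ~{h / 25.4:.1f}" fills the frame vertically')
```

By this estimate the 85mm can only fill the frame with roughly a face-sized subject at its close focus, while the 135mm, despite a similar CF distance, frames noticeably tighter — which matches how it felt on set.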

Three: The lenses vary in overall length. Not by huge amounts, but going from the 20mm to the 135mm means you are going to have to move the matte box around a little when swapping lenses. The gear positions, however, are static: the follow focus and iris motor don't move the whole day, regardless of lens. ACs find it mildly annoying.

One other little thing. Over the time I have spent with the Sigmas, the Pelican case they came in only held 5 lenses, so the 14mm and 135mm had to travel in another case, which is slightly inconvenient. This is not the lenses' fault, but it did make me question whether to carry the 14 and 135 around on every shoot. If you do go for the whole set, it is probably a good idea to have a custom case built to hold them all. I believe Duclos Lenses is working on a special case that holds all 7.


In Conclusion:

Despite a few small drawbacks, I think this set is the best deal in professional-level lenses money can buy. They are fast and amazingly sharp, with a great new look, beautiful bokeh, and the kind of build quality that will last you decades with proper maintenance. Not to mention they are rather affordable considering the level of lenses they seem to be competing with in terms of performance.


Thank you for reading. Till the next examination!



Follow me on Instagram @Timurcivan for more of my work and life on set.



An A7R2 Love Story… How the A7R2 Can Be Surprisingly Affordable, and a Slog2 Workflow for Stills

Welcome back!

Last fall, my trusty Canon 5D MkI (yes, the Mk 1) finally called it quits after a decade of use. I was on a job in the salt flats of Bonneville, Utah, shooting a land speed record for the Triumph race team. I figured I'd get some beautiful photos while I was out there. Sadly, the 5D sputtered, locked up, and gave me every error message the camera could come up with. I loved this camera. It had a certain "mojo" that is difficult to replicate:


What to do? I had a couple of Canon lenses, so naturally I was excited for the MkIV and the 5DSr. As a perpetual user of RED, Alexa and Phantom, I wasn't necessarily looking for a DSLR/mirrorless camera that had video as its main attraction. I decided to go after pure still photo power. After testing both Canon cameras, I felt the technology of the MkIV was impressive, but sitting at 30MP it didn't seem to be catching up with the D810's 36MP. I loved the resolution of the 5DSr, although its image quality didn't seem like a quantum leap forward. With what was, in my eyes, only acceptable dynamic range and moderate noise performance, I felt a bit perplexed. I wanted the technologically advanced 5D MkIV to have the resolution of the 5DSr.

My friend mentioned to me, "Have you considered the A99II?" I scoffed at first. I wanted a tried-and-true optical finder, and at the time I was convinced that the only usable autofocus came from a true optical mirror setup with phase detection. However, a challenge had been posed, and I looked into it. After all, we all have to do our due diligence. The A99II started looking more and more interesting: 399 AF points with a translucent mirror… hey, this thing may actually be able to do it! But then came the realization that none of my Canon lenses would work on the system, which basically put it out of the running. It would mean reinvesting in top-end Sony G glass as well as a new $3,400 body. Still, my interest in Sony was piqued. What else did they have? I checked DxO to see what cameras were at or near the top of their sensor score list, and there it was: the A7R2, with a 98 overall score, 13.9 stops of dynamic range, and an E-mount that let me, at least temporarily, continue to use my Canon glass. Hmmm… I shoot mainly portraits and landscapes, so even with a Metabones adapter I would have usable AF for a still subject, keep my lenses, get 42+ megapixels, 13.9 stops of dynamic range, no OLPF, and uncompressed RAW. The perks on top of that were that it shot 4K video, maintained the dynamic range with Slog2, and, because of the E-mount, I could put any lens on earth on this camera. As you all know, I LOVE vintage lenses.

Upon looking into the system further, I was still bugged by the lack of an optical finder. That is, till I realized that other than the 5D MkI, I hadn't looked through an optical finder since I shot with an Alexa Studio four years ago, and an ARRICAM LT 8 years ago. I was hanging onto something that honestly wasn't really an issue anymore.

Ok, so it seemed I was leaning a bit towards an A7R2… But what else does the E-mount do? After many late nights reading up on the E-mount system, I learned that Sony makes an A-mount to E-mount adapter, the LA-EA4. What is special about this? Well, Sony bought Minolta some years ago and took their lens technology with them. Minolta, though not hugely popular back in the day, had one of the earliest autofocus systems available. I'm talking nearly 35 years old; however, those lenses all still work on the A-mount, especially with the LA-EA4 adapter, as it has the screw-drive mechanism to activate the older autofocus system built in! This let me put a whole catalog of vintage Minolta Maxxum (their equivalent of "L series"), Vivitar Series 1 and Tokina lenses on a modern camera with native autofocus functionality. I searched online, and new-old-stock Maxxum lenses are often less than $100, some as low as $50. So I picked up a used A7R2, a used LA-EA4 adapter, and a Vivitar 19-35 F3.5, Tokina 28-70 F2.8, Minolta Maxxum 50mm F1.4, Tokina 90mm F2.5 Macro (similar to my Vivitar Series 1 90mm, but with autofocus!) and a Minolta Maxxum 70-210 F4 "Beercan," all in immaculate shape. Total cost? $2,997. That's the A7R2 body, LA-EA4 adapter, and ALL of the lenses for $200 less than the cost of a new A7R2 body. The lenses are for the most part great and have a vintage feel with nearly modern AF performance. The only hitch, though interesting (is it even a bad thing?), is that the LA-EA4 adapter has its own built-in translucent-mirror 9-point diamond-pattern phase detection AF system. It bypasses the 399 AF points in the R2. This did not bother me, as I only use the center point anyhow. Think about that for a minute. For the photographer stepping up from an entry-level camera, for less than the price of a flagship body you get everything. I'd dare to say that it almost makes the A7R2 an entry-level camera, with incredible room to grow over time.
Imagine being fully outfitted optically from 19mm to 210mm, with good AF and the elusive vintage lens look, for less than $3,000, taxed and shipped. Oh, and another thing… the LA-EA4 and lenses will work on ANY E-mount camera: A5000, A7, A6500, etc. Really makes stepping up easier, no?

To round out the system, I got a Tilta cage and a Metabones PL adapter, and worked out a trade of some old gear for an Odyssey 7Q+ to record 4K. I mean, when the 4K looks so good, why not use it, right? (I will go over the video capabilities in another post.)

Now, back to the photos. I went wild shooting and enjoyed every last second. These are from a few things: my wife riding a horse, some astro work, and some shooting from a helicopter on election night.


I was having a ball. I found, however, that there was one small issue with the Sony system. It is what has plagued every Sony camera since the VX1000 DV camera: skin color. At least, in-camera skin tone rendition. There is just something about the way Sony cameras render skin tone. It isn't "wrong"… it's actually too accurate. Cameras like the Canons (video and photo) seem to enhance the skin tone ranges, as do Nikons; the ARRI Alexa DEFINITELY enhances the skin tones, and the RED system has made vast improvements in this department. Sony looks like it has used the same color science since 1994… ok, maybe that's a bit harsh, but you get the idea. The color quality is recoverable in post, but it requires work to dial it in.

Stock A7R2 in camera skin tones: See how the skin just kind of seems like a flat wash of color? I liken it to taking a tack sharp black and white photo, then colorizing it.

I discovered something curious in stills mode, however: the Picture Profiles, the camera's gamma and color matrix settings, include Slog2. This is primarily intended so that when you shoot video, you can preserve all the dynamic range of the camera. However, you can also take stills with Slog2 gamma engaged. Hmmmm… I was curious. Slog2 is Slog2; it's a reliable standard. I bet the Slog2-based LUTs we use for films would grade the stills. I looked online for some Slog2 LUTs that had promise, but many were overpowering and just made the image look unnatural. I then found a company called Omeneo; they had the kind of stuff I was looking for. I bought their pack called Omeneo Primer for A7R2. Their LUT pack is specifically designed to "de-Sony" the image in terms of color quality. It adds some contrast to the LOG image to make it look nicer, but renders the image with a much softer toe and shoulder, preserving detail. They state on their website that the Primers are intended to give you a good starting place from which to grade further. I found it really just brings the Slog2 image to life in an amazing way.

I did a very simple test. I shot in Extra Fine JPEG mode to preserve as much information as possible; Slog2 gamma won't apply to RAW images, so you have to shoot JPEG in some fashion.
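If you're curious what "applying a LUT" actually does under the hood, here is a minimal sketch in Python. Most LUT packs ship as plain-text `.cube` files: a `LUT_3D_SIZE` header followed by a grid of RGB output triples, red index varying fastest. Grading tools look up each pixel in that grid with trilinear interpolation. The function names (`parse_cube`, `apply_lut`) are my own, and this is a per-pixel illustration, not how Resolve or Photoshop are implemented:

```python
# Sketch: parse a .cube 3D LUT and push one RGB value through it
# with trilinear interpolation (the standard approach).

def parse_cube(text):
    """Parse a .cube body into (grid size, flat list of RGB output triples)."""
    size, table = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("LUT_3D_SIZE"):
            size = int(line.split()[1])
        elif len(line.split()) == 3:
            table.append(tuple(float(p) for p in line.split()))
    return size, table

def apply_lut(rgb, size, table):
    """Trilinearly interpolate one RGB value (0..1 floats) through the LUT.
    .cube tables vary red fastest, then green, then blue."""
    def corner(r, g, b):
        return table[r + g * size + b * size * size]
    pos = [min(max(c, 0.0), 1.0) * (size - 1) for c in rgb]
    base = [min(int(p), size - 2) for p in pos]
    frac = [p - b for p, b in zip(pos, base)]
    out = []
    for ch in range(3):
        acc = 0.0
        for dr in (0, 1):        # blend the 8 surrounding grid corners
            for dg in (0, 1):
                for db in (0, 1):
                    w = ((frac[0] if dr else 1 - frac[0]) *
                         (frac[1] if dg else 1 - frac[1]) *
                         (frac[2] if db else 1 - frac[2]))
                    acc += w * corner(base[0] + dr, base[1] + dg, base[2] + db)[ch]
        out.append(acc)
    return tuple(out)
```

A real image is just this lookup repeated for every pixel, which is why the operation is fast and fully reversible in intent: the JPEG's pixel values are never re-exposed, only remapped.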

I took my wife and her friend outside and exposed the frame normally according to the internal light meter, with "Still" gamma and the "Still" color matrix. Really, the other matrices have essentially the same color feel, with either a lot of saturation or not so much. None of them seem to exude warmth or a particular style. The 709 matrix is far too saturated, and what it does wind up saturating just doesn't look great. The "Still" color matrix is saturated, but not absurdly so.

The two photos were shot within a couple of seconds of each other: one in stock "Still" gamma at normal exposure according to the meter, and the second switched to Slog2 with the same exact exposure, ISO, shutter speed, everything. The third image is the Slog2 gamma shot with the Omeneo LUT applied in Photoshop, with no adjustments other than applying the LUT.

1: Still Gamma:

2: SLog2 / S-Gamut Color

3: SLog2 S-Gamut color: Omeneo LUT. Notice the skin. It just has a rosy, warm feel, without warming the whole image. In fact, the whole image globally has more vibrant, realistic color. They mapped the Sony sensors specifically to draw out the colors in a more pleasing way.



Another Example, My mom on mother’s day last week:

Slog2 / S-Gamut:

SLog2 / S-Gamut: With Omeneo LUT: No other adjustments.


You can see the difference in skin tone rendering in the first example. It's far richer, and overall has warmth, depth and just enough saturation to look pleasing, but not unnatural. The steely grey undertone of Sony images is pretty much eliminated. Colors that under normal circumstances you would never see suddenly come to life. However, that's not all…


I posted my findings online and started a thread about the technique. In a discussion I had with Geoffrey C Bassett, I brought up this technique and he wanted to try it out. He noticed something quite remarkable: it seems that when the camera is in Slog2 mode, some interesting image processing happens. Shooting stills in Slog2 seems to eliminate a majority of chromatic aberration. Across the same test images, he also noted that the Slog2 images were a bit grainier. What I think happens is that the camera does zero noise reduction and doubles up on chromatic aberration compensation, or just does an extremely good debayer. Whatever the case, I would gladly take a slightly grainier image in exchange for beautiful color and less C/A.

Geoffrey C Bassett's test: (feel free to click his name to check out his work)

Standard Jpeg:

RAW Processed in Capture one:





100% crops:

1: The out-of-camera "Still gamma" JPEG: Notice the highlight handling, the purple fringing on the silver gears, and the bolts in the pedal gear.


2: RAW processed with Capture One: Notice that although the highlights are better handled, the purple fringing is still there!


SLOG2: unprocessed


3: SLOG2 – OMENEO LUT: Look how clean the edges of the white highlights are on the spokes. The image seems sharper because somehow the ghosting that comes from the purple fringing is removed. I think perhaps the debayer algorithm in Slog2 mode is better; it seems to process the image for more accuracy, albeit a bit noisier. As a result, the highlight handling is far better than even his RAW example. I'm not too worried about the grain, however. This is a 100% crop from an 8K-wide image; that noise gets eliminated when scaled down, or even when it's printed. Obviously, some noise reduction would also solve it if you don't like any noise in your image. Personally, I like the grain.


TEST 2: Direct sunlight.

1: Standard JPEG: Notice how the chipped paint section on the boat looks almost purple from all the C/A. Also, the paint on the boat reads white, and the water reads nearly grey.


SLOG2 – OMENEO LUT: Here the boat looks blue, and the water takes on the sky's reflection that was just not there in the JPEG version.

100% Crops:

JPEG standard – The chromatic aberration is clearly visible on the hull of the boat and the edges of the deck chair. Also note the white tag on the orange life vest.


SLOG2 – OMENEO LUT: With the purple fringing gone, the color of the boat can actually show through, thanks to a combination of better utilization of dynamic range in the highlights and the lack of C/A. In the JPEG image the boat appears white; in the Slog2-with-LUT version, the boat reads as a light blue. Also notice the lack of blotchiness in the orange life vest. The color subtleties are FAR better.

SLOG2 – Ungraded


While this doesn't replace a RAW workflow, it's a great alternative. For me, a properly exposed image using this method, with all the clarity of color it brings, is now my preferred way of taking pictures. Alternatively, you can use the A7R2 as a director's viewfinder on set: it can be set to S35 mode, and you will get a field of view that closely matches nearly all cinema cameras, even the RED (at its 5K ANSI S35 frame size). Then, after shooting a JPEG in Slog2, you can apply the LUT you want to use for the project directly to the JPEG and get a sense of how things may look. I think the trick is to expose perfectly and treat it like any LOG cinema camera. LUTs often come in different varieties based on different sensors, so they should track pretty well between the A7R2 and whatever you happen to be using.
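The viewfinder trick works because horizontal field of view depends only on focal length and sensor width: FOV = 2·atan(w / 2f). A quick sketch comparing the A7R2's APS-C crop to a typical Super 35 cine gate shows how close the match is. The sensor widths below are my own approximations, not official specs, and `hfov_deg` is a made-up helper name:

```python
import math

# Quick horizontal field-of-view comparison for the "director's
# viewfinder" idea. Sensor widths are ballpark assumptions.

def hfov_deg(focal_mm, sensor_width_mm):
    """Horizontal field of view in degrees for a given focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

A7R2_S35 = 23.9   # approximate width of the A7R2's APS-C crop, mm
CINE_S35 = 24.9   # approximate width of a common Super 35 cine gate, mm

for f in (25, 35, 50, 85):
    print(f"{f}mm: A7R2 S35 {hfov_deg(f, A7R2_S35):.1f}° "
          f"vs cine S35 {hfov_deg(f, CINE_S35):.1f}°")
```

The two columns land within a couple of degrees of each other at every focal length, which is why framing on the A7R2 in crop mode translates so directly to the cinema camera.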

I would likely shoot JPEG + RAW to use this technique, keeping the RAW as a backup in case the image needs major adjustments. I would really like to figure out a way of making Photoshop or Lightroom export an Slog2 + S-Gamut TIFF from the RAW files, so you could make the necessary exposure adjustments and then use the LUTs to bring out the colors hiding in there, but from an uncompressed 12-bit space. I think if you are careful, you can get away with shooting Slog2 to Extra Fine JPEG and forgo RAW altogether, so long as you expose properly. You can adjust exposure a bit in Photoshop before you apply the LUT for a small correction; it still looks pretty good up to a stop of push or pull.
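One reason a log-encoded JPEG tolerates a push or pull so gracefully: under a log curve, doubling the scene light adds a roughly constant offset to the encoded value, so a uniform brightness shift before the LUT approximates a real exposure change instead of clipping one end of the range. The toy curve below is a generic log encoding for illustration only, not Sony's actual Slog2 math:

```python
import math

# Toy demonstration: in a log encoding, one stop of extra light is
# close to a constant additive step in code values, at any brightness.
# This is a generic illustrative curve, NOT the real Slog2 formula.

def toy_log_encode(linear, black_offset=0.01):
    """Encode a linear scene value with a simple log2 curve."""
    return math.log2(linear + black_offset)

step_low = toy_log_encode(0.2) - toy_log_encode(0.1)   # +1 stop, shadows
step_high = toy_log_encode(0.4) - toy_log_encode(0.2)  # +1 stop, mids
print(step_low, step_high)  # nearly equal: the per-stop step is ~constant
```

That near-constant step is why a modest exposure nudge in Photoshop, applied before the LUT, behaves predictably across the whole tonal range — and also why it falls apart past a stop or so, once the offset starts pushing values into the curve's toe or shoulder.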

I think this is a cool way of working, I hope you find it useful.

Thanks for reading!


You can follow my Instagram or Twitter for more photos and life on set.