Maxwell Render vs Photograph
Which is the Photo and which is the Maxwell Render?
This is the Maxwell Render – which is still a work in progress
This is the Real Photograph
Work by Dmitriy Berdnichenko – an excellent result.
From now on we are offering a SALE on Next Limit products of up to 30% OFF:
30% off RealFlow for Cinema 4D and RealFlow for Maya until June 30th, while everything else is 20% off until July 15 as part of the ‘World Cup Sale Promotion’.
*This is a limited offer; all price discounts currently offered online via the direct link (click the image) expire on July 15, 2018.
This week on the blog we have a different case study – one that shows how Maxwell Render for photorealistic background plates is part of a solution for combining photography and CGI.
Talented photographer and CG artist Matthew Cherry is a skilled Maxwell user who has brought to our attention a number of gorgeous projects over the years. Now he has agreed to take us behind the scenes of one of them. Let’s enjoy the cinematic feeling and learn more about his workflow!
A photographer and cinematographer, Matthew Cherry’s visual storytelling ability resonates in both his portraiture and conceptual imagery. His work has the ability to intrigue, delight, and inspire the viewer while merging art, theater, and photography. His dramatic use of lighting builds depth and a rich palette that creates a cinematic tone within his work, while detailed sets and polished styling exude glamour and sophistication.
By working with extraordinary stylists, makeup artists and prop masters, Matthew and his team continue to create amazing visionary scenes, both realistic and fictional. Matthew draws his inspiration from a wide range of sources, most notably Italian and French cinema, American film noir and American jazz artists of the 40s and 50s. In addition to his creative work behind the lens, Matthew has been involved in business marketing for the past fifteen years and has designed and run numerous local, national and international marketing campaigns. You can check out Matthew’s website, and follow him on Facebook, Twitter or Vimeo.
Midas was conceived of as a personal/promotional project designed to showcase our studio’s ability to create photorealistic environments that can serve as background plates for talent that is shot in-studio. As advertising and editorial budgets continue to shrink, it is often too costly to produce the kind of iconic shots many envision. This is especially true of smaller agencies looking to “make their mark” by producing innovative work with high production values. While many agencies do use CGI in their workflow, they tend to think of it more as a special effect.
By integrating CGI in a cinematic manner, we believe that more compelling artwork can be created in which costs are dramatically reduced and production values remain high.
As a freelance studio working on a promotional project, all of the input was ours. I conceived the shot and did the art direction and casting.
The objective of this shot was to create a polished example of a celebrity portrait of the type used in film and television advertising.
This type of shot meant that casting and wardrobe were both critical to achieving the final look. To that end, my wardrobe stylist, Tanya Seeman, created the look for the talent, which included creating a handmade, couture outfit for the woman and custom accents for the man.
Given current trends in high-end television programming, we wanted to produce a modern photograph that paid homage to the music scene of the 70s. The concept was a producer who had “the touch” for producing gold records. Touch of Gold records became the back story and “Midas” was born.
Creating an entire “gold” room and making it appear to be real was a much harder challenge than first anticipated. With Maxwell it is easy to create a convincing gold material. However, if all the objects had been made of actual gold, even with the use of dirt and dust maps, it would have looked obviously fake.
The challenge was to create objects that could plausibly exist within the real world, with authentic materials that still gave the impression of a gold room.
This meant creating not only a variety of gold metal materials but marble, leather, lacquer paint, plastic and even paper. To overcome this challenge we first produced the shot as if we had to source all the props in the real world and build a physical set. This provided real-world analogs to what we wanted in the scene, and we were able to see what was really possible. These references were invaluable in creating believable materials. Additionally, we made sure that any variable that was used was mapped. Because Maxwell does not allow for the use of procedural nodes, this meant first creating the maps in either Substance or Photoshop. In addition to mapping all variables, most materials have some level of dust, scratches and fingerprints applied to them as well. While not always readily apparent, I think it still registers with the viewer and contributes to the authenticity of the material.
My training is as a photographer and cinematographer, so for me, Maxwell is a natural rendering solution as it allows me to use the skill set I already have to produce beautiful renders. Also, as a photographer, I take light very seriously. If the light in a scene does not behave properly and does not interact with the materials in just the right way, the effect is ruined for me. My goal is to create the most photorealistic images possible, and to my eye, Maxwell does very well in this regard.
Everything in the frame, other than the two actors, is a render. Even the chair the male actor is sitting in is a render.
I created all the materials that are found in the scene. Sometimes I would create them from scratch and other times I would use the Material Gallery or the Material Assistants as a starting point. My workflow is to begin a model or project in Maya, whether I use a purchased model as a starting point or model an item from scratch. Since I often use purchased models, my first task is to clean up the geometry and re-UV the model, since the existing geometry and UVs are often problematic. After I clean up the model in Maya, I then bring it into ZBrush to sculpt in a bit more realism. Once I have a finished, working model, I then decide if I can use a repeating texture or if I need to paint a texture in ZBrush, Mari or Substance Painter. To create base textures, I also use Substance Designer. Once I’ve put together all the maps, I create the MXM material using the Maya plugin. The rule is that any variable worth having is worth mapping, so every slot gets a map. Once the texture is created, I then add additional layers for dust, scratches, fingerprints, etc. Then I test render the prop in a virtual studio with HDRI lighting to see how it reacts.
I use Multilight all the time and it has become a pretty indispensable part of my workflow. The ability to adjust lighting post render is fantastic.
Because I tend to render to a pretty high resolution (6K to 8K) extra sampling helps to keep render times down.
In this case, because there were so many surfaces that needed to get clean, I just let the whole scene render to sample level 20.
I try to do as much in the render as possible and the Multilight feature makes that much easier. However, I still do a fair bit of post-production within either After Effects or, in this case, Photoshop. Most of this has to do with comping in the talent which we shoot in studio. In order to make this a seamless process, we take our custom camera setup from Maya and duplicate it exactly in the real world, including height, rotation and tilt as well as ISO, Shutter Speed and F-Stop. Additionally, we use the lighting information from Maxwell along with the light positions in Maya to create the lighting plan that we use when photographing the talent in studio. By duplicating what appears in the scene, we are able to create a seamless effect. When doing this it is important to remember to place the same lights in the scene that you will be using to light the talent. So, for this shot, there is a large (virtual) scrim positioned just in front of the chair, above the talent and angled down at a 45-degree angle. This matched the studio lighting setup for the talent. Without this step, the lighting in the scene won’t match the light on the talent and they will look comped in.
I am beyond pleased with the final result.
I have since shown this image to numerous art directors who have been blown away by the amount of realism in the set.
NONE of them thought this was a render until they saw the behind the scenes video (see below) that we created to showcase the construction of the scene. Based on that, I consider this a huge success.
Maxwell Render 4.2.02 has been released, and we have just uploaded the new version to the Portal.
We updated Maxwell Render, Maxwell Studio, all 3D integration plugins, postproduction plugins and Multilight app.
We hope you enjoy it.
You can get it from the Customer Portal in “My Downloads” area: https://portal.nextlimit.com
It is mainly a bug fix release. You’ll find both Maxwell and Studio much more stable.
These are the release notes:
Publish date: Fri, 09 Feb 2018
-Feature: New triangle ID channel.
-Feature: UV render channel is now multiple, with a maximum of 16 UV channels.
-Feature: You can cycle the images on channels that are multiple (custom alpha and UV) with arrows at the sides of its selector.
-Feature: Display channel name below channel selector.
-Feature: New override flag on references to select Object ID behaviour from 3 modes.
-Change: Instances of hidden objects / references are shown now.
-Change: Type, units and default values for some Stereo and Fish lenses parameters.
-Fixed: Paths or dependencies with special characters cause Maxwell to fail the render.
-Fixed: Lens extension: Stereo and fish rendered always as center.
-Fixed: Reflectance channel may be wrong on textured instances.
-Fixed: Flip x and Flip y projection is not correct on Lat-Long and fish lenses.
-Fixed: Neck on Lat-Long lenses.
-Fixed: Fast multilight with simulens crashes on any change.
-Fixed: An instance pointing to a hidden object crashes on production.
-Fixed: Hiding all instances made the original object hide too.
-Fixed: Custom alpha channel ignores instances of mxs references.
-Fixed: Resuming deep alpha channel crashes on some scenes.
-Fixed: Grass extension: Instances of objects with grass inside a group have wrong grass transform.
-Fixed: Grass extension: Instances of references of objects with grass have no grass.
-Fixed: Render via Network button in OSX
-Fixed: Referenced emitter materials crash Maxwell.
-Feature: New triangle ID channel.
-Feature: UV channel is now multiple, with a maximum of 4 UV channels.
-Fixed: If you have non-continuous UVs channels on an object (i.e. 0, 2, 5), crashed at render.
-Fixed: Emitter map was always sampled on channel 0.
-Fixed: Alpha channel was a few pixels wider than it should be.
-Fixed: Texture properties (brightness, contrast, saturation) are not applied correctly.
-Fixed: Wrong UV index assignment to instances.
-Fixed: If you had non-continuous UVs channels on an object (i.e. 0, 2, 5), index was changed at saving (0, 1, 2) and it crashed at render.
-Fixed: Quick double-click on fire button could make studio crash.
-Fixed: Key shortcuts shouldn’t work on blocked cameras.
-Fixed: Issues on hide / unhide multiple selected objects with groups.
-Fixed: Xref objects didn’t hide using isolate selected option on fire.
-Fixed: Deleting an emitter component on material with Fire running crashed Studio.
-Fixed: Newly created instances could disappear on the viewport.
-Fixed: Material Id was randomized on a material clone.
-Fixed: If a reference visibility override flag was marked in a scene, all flags appeared marked in the UI when loading.
-Improvement: Int and double fields don’t accept letters, and the “Esc” key cancels editing.
-Improvement: Randomize Id option on materials and objects.
-Fixed: Render via Network button in OSX
-Fixed: Faculty licenses showed as unregistered when using render in viewport.
-Improvement: Return error when the user tries to save a scene with an object with two or more UV channels with the same ID.
-Fixed: Pesky bug related to wrong handling of UV projectors’ ChannelIDs.
-Fixed: If gallery path didn’t end with a slash, it created a folder for each material.
-Fixed: Texture picker with floating preview had no size after app restart.
-Fixed: Unload texture didn’t clean procedural textures.
-Fixed: It asked for missing textures twice.
-Fixed: Connection failed if Monitor was launched before Manager on an IPv6 network.
-Fixed: Batch renders were sent with Denoiser always activated.
-Fixed: Error shown if the node-locked license was not named “maxwell_license.lic”
-Fixed: PyMaxwell module renamed from “pymaxwell” to “pymaxwell4”.
-Removed exclusive functions to access custom alpha channels: getAlphaCustomBuffers, getAlphaCustomBuffer, getNumberOfAlphaCustomBuffers.
-Removed exclusive functions to access shadow channels: getShadowBuffers, getShadowBuffer, getNumberOfShadowBuffers.
-New methods to access all channels like they are multiple: getNumSubChannels, getExtraBuffer (with subChannel index) and getExtraBuffers.
-New methods to know the UV array index from the ID and vice versa: getUVIdxFromId, getUVIdFromIdx.
Recent Maxwell Forum Post (a good idea to join):
I need some advice on how to improve this work.
These images you see are only part of the work.
The client tells me that the images are “flat”.
Can I use some colour-contrast lights (warm/cold tones)?
I await your advice. Thank you.
I would suggest this:
1. Use less uniform lighting. Try to create basic contrast between shadowed and illuminated areas.
2. Use slightly more complex materials (different roughness / mapped roughness). Most of your materials look like uniform textures. Use shaders that are more “material” oriented than texture oriented. Use textures to map more material channels (especially roughness). Use more reflections.
3. Be sure not to exceed a value of 225 for any of the RGB channels of the material colour (golden rule). Using 255,255,255 for a white colour will lower the contrast of the image dramatically.
And you can always play with gamma, burn, an “S”-curve … in post-production.
You probably already know all of this. Maybe Mihai will send you some more advanced tips.
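As a quick illustration of point 3, the “golden rule” clamp can be sketched in a few lines of Python (a hypothetical helper for illustration only, not part of any Maxwell API):

```python
def clamp_material_rgb(rgb, limit=225):
    """Clamp each channel of an (R, G, B) material colour to the given limit,
    so a 'white' of (255, 255, 255) can never flatten the image contrast."""
    return tuple(min(channel, limit) for channel in rgb)

print(clamp_material_rgb((255, 255, 255)))  # (225, 225, 225)
print(clamp_material_rgb((200, 180, 40)))   # already-safe colours pass through unchanged
```

The same idea applies regardless of tool: keep diffuse colours a little below pure white so the renderer still has headroom for highlights.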
Find a new client – these images look good enough to me!
The only thing I could see in these images, as was suggested before, is that the light sources eat each other, producing a very evenly illuminated image; there are very few contrast areas where the surfaces might look more interesting.
I’d tone down some lights, especially the neon ones, or just lower them to 0 and start raising them one by one to look for more interesting lighting.
Some materials could use a bit more interesting shading; most of them are quite Lambertian (so to speak).
The growing process of digitalisation, with digital simulation with Maxwell Render, machine-to-machine communication and automation, is paving the way for a new revolution: that of so-called Industry 4.0. It is anticipated that the inclusion of simulation and machine learning technologies will have an even greater impact than any previous revolution, fostering a genuine transformation of companies and manufacturing processes.
The factories of the future will not only be fully connected and motorized, but supervised by artificial intelligence units, which will look after the constant optimization of all the processes. The training and design of this intelligence, together with that of humans, will have a vital role for physics simulations.
The design of machines and robots will occur in virtual environments, where it will be possible to iterate without the need to produce a real prototype, since the virtual prototype will have equally exact physical values. The same setting will subsequently serve to train the artificial intelligence units in undertaking optimal tasks (of maximum efficiency and security), thanks to automatic reprogramming with deep learning techniques.
In the logistics of products, too, or the performance of robots in different environmental conditions, Next Limit has a great deal to offer. The simulation of fluids (traditionally associated with audiovisual production) can be applied to a new field, making it possible to simulate complex interactions between automated machinery and its settings, or between products and the chain of production, packaging and transportation.
from Ben / Pylon Technical
The Maxwell Render 4.1 release is a big one. If you haven’t updated to Maxwell 4 yet, now is the time.
If you are an existing customer and want to upgrade your Maxwell Render for any modeller, you can do so here. Just pick your 2 plugins (Studio can be one choice).
If you’re new to Maxwell and want to kick the tyres first, download a trial copy. Please come back here and we can help with your new version.
Please note: The following is oriented to the formZ implementation.
There’s now an integrated denoiser. When enabled, there will be NO noise in your final rendering. This means Maxwell-quality renderings in a fraction of the time it took previously – in either CPU Production or GPU mode. Expect 2x – 8x faster architectural visualizations.
Denoiser Video Tutorial in Studio (Click CC button for English subtitles)
Next up, we have a new Light Mixer. When used in conjunction with Fire, you can interactively fine-tune the contribution of all lights and emitter materials in your scene from one convenient panel. It’s a lot like Multilight, inside formZ.
Maxwell for formZ supports formZ Light groups both in the Light Mixer and in Multilight. Intensity-override groups are represented as a single slider in both environments.
Using materials from the online materials database in formZ is now just a click away.
• Integrated Denoiser support. (Maxwell Options > Engine tab)
• New Light Mixer for interactively balancing the intensity of lights, light groups, and material emitters. (Extensions > Maxwell Render > Light Mixer…)
• formZ Light Group support: All lights in a formZ light group with intensity override are now represented by a single Multilight slider. (Enable in Maxwell Options > Translate tab)
• Improved access to MXM file browser. (Material Parameters > Maxwell Representation > Referenced MXM option)
• Improved access to online MXM gallery browser. (Material Parameters > Maxwell Representation > Referenced MXM option)
• Direct support for X-Rite AxF Materials (Material Parameters > Maxwell representation).
• Direct support for TableBrdf Materials (Material Parameters > Maxwell representation).
• Support for rendering with the number of available logical processors, minus a specified number. (Set a negative number in Maxwell Options > Engine Tab > CPU Threads)
• Plugin now uses central Maxwell installation.
• Update to the Maxwell 4.1 engine.
• Grass Modifiers using formZ material option now restored correctly on open.
• Denoiser At Each SL/At End parameter now correctly saved in project.
We are back with some valuable tips from our very own Product Specialist, Fernando Tella. Fer is a long-time Maxwell user, ever since the alpha version in 2005! Ten years later he joined the Maxwell team. Now he helps customers with technical issues, and also does product demos, tutorials and training sessions, and helps with product development. In this blogpost he will help you grasp Maxwell 4’s new top feature – the Denoiser – so you can take advantage of its full potential. Here we go 🙂
As a start, it’s important to understand some basic concepts:
The new Denoiser integration gives you usable denoised images as the render progresses. In a similar way to how Maxwell progressively cleans the image, the denoised image evolves with the render. At the beginning, when the sampling level is very low, the render will look blurry, and as the render gets more defined you’ll notice it starts to “learn” where the limits of the objects are, along with the materials, the features of the textures, etc. As the sampling level goes up, the denoised image and the original render get closer and closer, so we can say the denoised image also converges to the natural solution over time, just like the original render, but instead of noise you get usable images (or even perfect ones) in the meantime.
The following video shows how the denoised and render image evolves as the render goes on, from sampling level 4 to 13 (Scene by Maxwell Xpert David de las Casas).
So, in the worst scenario it will take the same time as the non-denoised image, and in the best you will save a lot of time.
TIP 2: Use the Denoiser and Extra Sampling combo
If you find that some particular materials get too blurred when using the Denoiser (for example, in the case of grainy textures), but most of the image looks good at some particular SL, you can combine two different techniques: Denoiser and Extra Sampling.
The idea you have to keep in mind is that the longer the render goes on, the less information the Denoiser will have to guess and the more it will be based on the true render, so if one particular texture is problematic, usually you only have to render it longer.
Based on this, if you have a particularly problematic material where the texture itself is grainy, like sand, sugar or the towel in this case, you only have to render the whole image to the needed SL and then use Extra Sampling for that material or object to render to a higher SL only in that part.
Please note that, for the moment, when using Extra Sampling, the Denoiser will only be calculated at the end of the render, not at each SL, so you’ll have to wait until the end to see the denoised result.
TIP 3: For objects behind transparent geometries, use the Shadow channel
When an important part of your scene contains objects behind transparent ones, you’ll notice the extra channels used in Fast mode (Normals, Position and Reflectance) won’t give information about what’s behind the glass.
In this case the render will take more time (around 1.5× the time without the Shadow channel), so it is wise to test which is better: rendering with the Shadow channel, or letting the render reach a higher SL without it. The result in terms of time can be very similar.
TIP 4: Use saved MXI files to denoise after closing Maxwell
For the moment, once Maxwell is closed after rendering an image with the Denoiser, the two passes cannot be loaded again into the interface to resume them or make changes and Re-Denoise. Nevertheless, if you have at least a couple of mxi files of the same frame (and with the required channels), you can make Maxwell denoise them by running the following commands through command line:
mximerge -folder:"folder containing the mxi files of the same frame" -coopdenoiser:"output path and name of the denoised image" -target:"path of the merged mxi files"
Then you ask – “But where can I find the two mxi files saved while rendering with Denoiser?”
See below the paths where they are stored depending on the OS. You will find two files named as your current scene and ending with _render1.mxi and _render2.mxi:
In Windows they are stored under: C:\Users\<username>\AppData\Local\Temp\maxwellrendertmp
In MacOS they are stored in a random folder under: private/var/folders/…../T/maxwellrendertmp/
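Putting Tip 4 together, a small Python sketch can assemble the command for you. The helper name and the example paths are hypothetical; the mximerge flags are the ones shown above:

```python
def build_mximerge_command(mxi_folder, denoised_out, merged_out):
    """Assemble the command line that asks Maxwell to denoise a folder of
    saved MXI passes of the same frame (see Tip 4)."""
    return (f'mximerge -folder:"{mxi_folder}" '
            f'-coopdenoiser:"{denoised_out}" '
            f'-target:"{merged_out}"')

# Example with the Windows temp location mentioned above (username is illustrative):
cmd = build_mximerge_command(
    r"C:\Users\me\AppData\Local\Temp\maxwellrendertmp",
    r"C:\renders\Denoised_cool_render.png",
    r"C:\renders\cool_render_merged.mxi",
)
print(cmd)
```

From there you could pass the string to your shell, or adapt the folder argument to the macOS temp location instead.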
TIP 5: Use a macro for the denoised image name
When setting the denoised image name, instead of setting a specific name, it might be better to use a macro like %scenename% (including the % signs).
This creates the perfect name, as the macro is replaced by the scene name when saving the file, and it avoids having to rename the denoised image file when you want to make different versions of your rendered files.
For example, if you are rendering a scene named cool_render.mxs, you can set the Denoiser output path as Denoised_%scenename%.png
Once the render is finished, you will get a file called Denoised_cool_render.png.
Another useful macro could be %camera%, which is replaced by the name of the active camera.
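Conceptually, the macro expansion is a plain string substitution. This little sketch is only an illustration of the behaviour described above, not Maxwell’s actual implementation:

```python
def expand_macros(pattern, scene_name, camera_name):
    """Replace the %scenename% and %camera% macros in an output-path pattern."""
    return (pattern
            .replace("%scenename%", scene_name)
            .replace("%camera%", camera_name))

# Rendering cool_render.mxs through a camera named "Camera01":
print(expand_macros("Denoised_%scenename%.png", "cool_render", "Camera01"))
# Denoised_cool_render.png
```

Because the scene name is baked in at save time, the same Denoiser output setting keeps working as you move between scenes.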
Here is the list of all the supported macros, in case you find this convenient.
I hope you find these tips useful and that they help you master the use of the Denoiser! 🙂
Maxwell Render 4.1 Denoiser Usage
Post on Maxwell Forum June 19 2017
This is an interesting post on the usage and effects of the Denoiser functionality inside Maxwell Render 4.1, based on a render made at SL 12.
If you turn denoise on, what happens is Maxwell actually renders two renders at the SL you specified in render options. Besides these two renders it also renders reflectance, normals and position channels which it will use to decide what is detail in textures/materials that should be kept and what is noise. It also compares the noise in those two renders to further help with the accurate noise reduction, without removing details like normal noise reduction can do.
The difference between the “fast” and “accurate” presets is that the accurate one also renders a shadow channel (in addition to all the others) and this might be of help to determine where to apply NR. But so far I haven’t found much difference and the thing is the shadow channel takes a long time to render. So far I’m using the fast preset so it skips the shadow channel.
I also set the scene’s SL to about SL 11, because remember you get two renders with NR, so the two merged together are more like SL 12. It depends on the scene how high your initial SL should be, but I think it will be pretty rare to go above SL 12. Maybe only in very difficult scenes where the noise is still very coarse even at SL 13 or so.