NVIDIA RTX Performance Explained


NVIDIA RTX performance explained by Paul Arden – miGenius

NVIDIA RTX technology was announced late last year and has gathered a lot of coverage in the press. Many software vendors have been scrambling to implement support for it since then, and there has been a lot of speculation about what is possible with RTX. Now that Iray RTX is finally about to be part of RealityServer, we can talk about what RTX means for our customers and where it will be most beneficial for you.


The Iray RTX speed-up is highly scene dependent but can be substantial. If your scene has low geometric complexity you are likely to see only a small improvement. Larger scenes can see speed-ups of around 2x, while extremely complex scenes can even see a 3x speed-up.

What is RTX?

RTX is both software and hardware. The key enabling innovation introduced with RTX hardware is a new type of accelerator unit within the GPU called an RT Core. These cores are dedicated purely to performing ray-tracing operations and can do so significantly faster than using traditional general purpose GPU compute. Performance will depend on how many RT Cores your card has. The Quadro RTX 6000 for example has 72 RT Cores.

Quadro RTX 6000

Alongside the new hardware, NVIDIA has introduced various APIs and SDKs which let software developers access these new RT Cores. In the gaming world, for example, RTX hardware is accessed through the Microsoft DirectX Raytracing (DXR) API, while production rendering tools such as Iray use NVIDIA OptiX.

Rendering software must be modified to take advantage of the new software APIs and SDKs in order to access the hardware. With RTX hardware and the latest RealityServer release, the portion of rendering work performed by Iray that involves ray intersection and computation of acceleration structures (see below) can be offloaded to the new RT Core hardware, greatly speeding up that part of the rendering computation.

Ray Intersection and Acceleration Structures

Ray intersection is the work of determining whether a ray (think of it as a straight line) crosses a given primitive (e.g., a triangle). We won't cover exactly how path tracers like Iray work, but Disney have a great video, Practical Guide to Path Tracing, which gives you a good idea of the basics. You'll quickly see that ray intersection is key to making this work.

While the mathematics involved in checking whether a ray intersects a primitive is relatively simple (at least for a triangle), scenes today can easily contain millions or even hundreds of millions of primitives. To make matters worse, for typical scenes you also need to perform these checks for millions of rays. That's millions of primitives times millions of rays: a whole lot of computation.
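To make that concrete, here is a minimal, purely illustrative sketch in Python of one standard ray–triangle test, the Möller–Trumbore algorithm. This is not Iray's implementation (that work happens on the GPU); it just shows the kind of arithmetic that must run for every ray/primitive pair, which is exactly the work RT Cores execute in hardware.

    def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
        """Moller-Trumbore test: distance t along the ray to the hit, or None."""
        def sub(a, b):   return [a[i] - b[i] for i in range(3)]
        def dot(a, b):   return sum(a[i] * b[i] for i in range(3))
        def cross(a, b): return [a[1] * b[2] - a[2] * b[1],
                                 a[2] * b[0] - a[0] * b[2],
                                 a[0] * b[1] - a[1] * b[0]]
        edge1, edge2 = sub(v1, v0), sub(v2, v0)
        pvec = cross(direction, edge2)
        det = dot(edge1, pvec)
        if abs(det) < eps:                  # ray is parallel to the triangle plane
            return None
        inv_det = 1.0 / det
        tvec = sub(origin, v0)
        u = dot(tvec, pvec) * inv_det
        if u < 0.0 or u > 1.0:              # outside the first barycentric bound
            return None
        qvec = cross(tvec, edge1)
        v = dot(direction, qvec) * inv_det
        if v < 0.0 or u + v > 1.0:          # outside the second barycentric bound
            return None
        t = dot(edge2, qvec) * inv_det
        return t if t > eps else None       # only count hits in front of the origin

    # A ray pointing down the z-axis hits the unit triangle in the z=0 plane:
    print(ray_triangle_intersect([0.25, 0.25, 1.0], [0.0, 0.0, -1.0],
                                 [0, 0, 0], [1, 0, 0], [0, 1, 0]))  # 1.0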

Naively checking for intersections with all primitives doesn't cut it; you'd be waiting years for your images. To speed things up, ray tracing almost always uses an acceleration structure. This uses some pre-computation to split the scene into a hierarchy of primitives that can be tested rapidly, quickly eliminating large numbers of those primitives from consideration.

As a very simple example, imagine you have a scene with a million primitives distributed fairly evenly. If you cut the scene into two groups, you can first test whether a ray intersects the volume of one of the groups, and if it does not you can immediately exclude half of the primitives. By nesting structures like this you can progressively test until you reach the primitive that is intersected.
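Here is a hedged sketch of that two-group idea in Python (a toy structure of our own, not Iray's): each node stores an axis-aligned bounding box, and a ray only descends into boxes it actually hits, so a single cheap box test can discard an entire subtree of primitives.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Node:
        box_min: tuple                      # axis-aligned bounds of everything below
        box_max: tuple
        primitives: List = field(default_factory=list)   # filled only at leaves
        left: Optional["Node"] = None
        right: Optional["Node"] = None

    def ray_hits_box(origin, direction, box_min, box_max):
        """Standard slab test: does the ray pass through the axis-aligned box?"""
        t_near, t_far = float("-inf"), float("inf")
        for axis in range(3):
            if abs(direction[axis]) < 1e-12:
                if not box_min[axis] <= origin[axis] <= box_max[axis]:
                    return False            # parallel to this slab and outside it
            else:
                t0 = (box_min[axis] - origin[axis]) / direction[axis]
                t1 = (box_max[axis] - origin[axis]) / direction[axis]
                t_near = max(t_near, min(t0, t1))
                t_far = min(t_far, max(t0, t1))
        return t_near <= t_far and t_far >= 0.0

    def candidates(node, origin, direction):
        """Collect only the primitives whose enclosing boxes the ray touches."""
        if node is None or not ray_hits_box(origin, direction,
                                            node.box_min, node.box_max):
            return []                       # one test culls the whole subtree
        if node.left is None and node.right is None:
            return list(node.primitives)    # leaf: these still need exact tests
        return (candidates(node.left, origin, direction) +
                candidates(node.right, origin, direction))

Each primitive that survives this culling still goes through an exact intersection test like the triangle check sketched earlier; RT Cores accelerate precisely this traverse-then-intersect loop.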

While this is a massively over-simplified example, and there is a lot of subtlety and nuance in implementing a highly optimised system for this, the basic principle remains the same: devise a cheap test that can eliminate as many primitives from consideration as possible. RT Core hardware accelerates both the querying of acceleration structures and the ray intersection calculations, making the whole process significantly faster.

BVH

Enough Already, How Much Faster?

It depends. Yes, everyone hates this answer, but there is no way around it here. So far we have seen a typical range, for practical scenes, from a 5% improvement up to around a 3x speed-up. That is a pretty wide range, so what determines how much faster it will be? We didn't describe ray intersection above just for fun.

Notice that when we talked about ray intersection we never mentioned materials, textures, lighting, global illumination, shadows or any of the other jargon commonly associated with photorealistic rendering. That is because for a renderer to do its job, it has to do much more than just ray intersection, even if it calls itself a ray tracer.

All of the calculations needed for solving light transport, evaluating materials, looking up textures, computing procedural functions and so on are still performed on the traditional GPU compute hardware using CUDA (at least in the case of Iray). This portion of the rendering calculation is not accelerated by RTX. So how much ray intersection is done in a typical render with Iray, for example?

RT Compute Ratio

In many scenes we found that ray intersection makes up only 20% of the total rendering work. This is a very important point. Even if the new RT Cores made ray intersection infinitely fast, so that it took no time at all, 80% of the work in that scene would still remain. A 100 second render would still take 80 seconds with RTX acceleration, giving a speed-up of 1.25x (25%). Of course, ray intersection is not free with RTX, just faster, so the real speed-up would be lower than this; it is the hypothetical upper limit.

If you have a scene where 60% of the work is ray intersection you will naturally see a much more significant speed-up. In that case, on a 100 second render with an infinitely fast ray intersector you would still have 40 seconds of rendering, giving a speed-up of 2.5x (150%) at the hypothetical upper limit. In general we have found RTX provides the greatest benefit in very complex scenes with millions of triangles, and also in scenes that heavily exploit instancing.
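Both worked examples are just Amdahl's law applied to rendering. A small sketch of the arithmetic (a helper of our own, not anything from the Iray API):

    def rtx_speedup(ray_fraction, rt_factor=float("inf")):
        """Overall speed-up when only the ray-intersection share of the work
        (ray_fraction, between 0 and 1) is accelerated by rt_factor."""
        return 1.0 / ((1.0 - ray_fraction) + ray_fraction / rt_factor)

    # The two examples from the text, with intersection made infinitely fast:
    print(rtx_speedup(0.2))        # ~1.25x upper limit when 20% is intersection
    print(rtx_speedup(0.6))        # ~2.5x upper limit when 60% is intersection
    # A more realistic case: 60% intersection, RT Cores "only" 5x faster there:
    print(rtx_speedup(0.6, 5.0))   # ~1.92x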

Real-world Performance Testing

We took 14 scenes we had available and tested them on a Quadro RTX 6000 card with Iray 2018.1.3 and Iray RTX 2019.1.0 to evaluate the speed-up.

In the results above we have also included, where available, an estimate of the percentage of rendering time spent on ray tracing. This gives a clear picture of how directly that ratio affects the speed-up you get from RTX hardware. It is also clear that the more complex scenes benefit far more, since they spend more of their time doing ray intersection.

Unfortunately, the strong scene dependence of RTX performance means there is no single number that describes the performance advantage when RTX is integrated into a full renderer like Iray. Any way you cut it, you'll definitely get a performance boost from RTX; exactly how much will depend on your scenes.

One bonus not considered here is that the inference part of the AI Denoiser in Iray can be accelerated by the Tensor Core units on RTX cards, much as was seen on Volta based hardware. This can be quite useful at larger image sizes when using the denoiser. There is also a more general performance bump that comes with the new version of Iray, unrelated to RTX, and a significant speed-up in Iray Interactive mode for real-time use cases.

Tesla T4 and Cloud Providers

The NVIDIA Tesla T4 card, which is increasingly used in the data center and becoming available at various cloud providers, actually also contains RT Core hardware (40 RT Cores) even though it doesn't carry the RTX branding. This isn't emphasised in marketing material, so it is easy to miss.

For many of our customers, availability of hardware at popular cloud providers is important, since they are often not deploying on their own hardware. As of writing, Google Compute Engine has the Tesla T4 in general availability, while Amazon Web Services has Tesla T4 based instances in beta that should reach general availability soon.

Making a Decision

We get a lot of questions from customers about whether they should be looking at RTX hardware for RealityServer. It certainly gives you more options to consider, and it is now important to think about your content as well when making a purchasing decision. If you deal with highly complex scenes, there is little doubt that RTX is worthwhile, and the price points of RTX hardware compared to, say, Volta based hardware make it very compelling, even if RTX cards don't quite reach the performance of Volta on smaller scenes. Compared to Pascal or Maxwell based cards, RTX cards are a clear winner in price/performance, and they walk all over older Kepler based cards.

The best way to make a decision is to benchmark a representative scene or scenes from your own content rather than relying on our generic benchmark tests. Our benchmarks will give you a good feel for the difference between cards as a baseline, but you need to test your own data to determine how much additional benefit you will get on RTX hardware. If you're a RealityServer customer, or are considering purchasing RealityServer and have scene data, we can help you with these tests; contact us to learn more.

Making-of Bathroom renders



Leonardo Giomarelli (ilgioma on the official Maxwell forums), the creator of some great materials on MZ (CopperBrassPlastic), has been kind enough to give us a glimpse of his workflow in creating these beautiful interior renders of a modern bathroom.

I started working with MaxwellZone about a year ago, producing sets of materials based on photographic references, which you can find in the shop on this site. Recently, while testing Maxwell Render on interior projects, I produced a small scene and, after talking with Mihai, we decided to share on the blog the main steps on which I based my work.

Choosing a project

Personally, I find most of the inspiration for my projects on Pinterest; I think this social network is structured in an exceptional way for those who need to check trends, color palettes, compositions and so on. But you must be careful not to stay too faithful to a single image, to avoid the replication effect, which I do not like. If the project I want to do is a bathroom, I also look for references to bedrooms, kitchens or anything else among the images guiding the creation of my work. In any case, for my inspiration I always prefer photographic shots to other projects realised in CGI; it is always good to have something real as the first reference.

From idea to 3D

Here there is very little to say… knowing how to model well and reproduce details will immediately bring the project to a higher level. Obviously, for the fabrics you cannot do without Marvelous Designer.

Creating good lighting

Choosing the type of lighting is essential for creating engaging images. Of all the known techniques I have always preferred the Sky Dome, because it gives me a neutral, very soft light to use as the primary light. Depending on the design and the type of product to be represented, I always add support lights to enhance certain areas, or to have a light complementary to the primary one, perhaps warmer, so as to give dynamism to the shot. The following pictures show the Sky Dome settings and the layout of the lights used in this job.

A little trick: as a base for the architecture I always put a disc or a plane to simulate pavement outside the building, which naturally helps bring the light reflected by the Sky Dome into the room. On this disc or plane (it makes no difference) I always apply a very simple material with a medium gray color, RGB 180, never white. This is to avoid blowing out the areas near the openings.

The materials

Up to this point everything is relatively simple but, believe me, using approximate, "too perfect" materials will undo every effort. Metal materials in particular can push the image towards photorealism if well calibrated. For my project I made extensive use of the materials available on MaxwellZone, so in addition to steel I also used the Ceramic Zirconia as a starting point for the tub and the elements of the radiators.

When I have to make simple glass, like the vase, I rely on the Maxwell Render presets; I find them very easy to use and at the same time effective. For the other materials, the textiles and walls, I used textures from some models purchased on Bentanji. I think this is also a very useful resource for us Maxwell users.

The finishing touches

There is no rendering without post-production, but it is necessary to prepare all the render passes well. In the image I show which passes I always render out.

As for the Custom Alpha channels, I usually use them only if I have objects that are difficult to select with just the Material ID or Object ID; in this case, the branches and flowers. Below I show the images that make up the project before and after the adjustments in Photoshop. As you will see, some images need only very slight adjustments, while others need more work to show their full potential…

This image has been worked on very little; I only wanted to give it a warm tone that, in my opinion, improves the look.

Here, however, the adjustments were heavier. I wanted an emotional shot, so I pushed a bloom effect typical of backlighting. To obtain it, simply select the opening area, fill the selection with white and then apply a blur filter, increasing the value until a credible effect is obtained. The original shot was also underexposed; in this case I fixed it directly in Photoshop, but I could have done it with Multilight.

Just small adjustments here, to even out the tonality with the previous shots.

In the presence of metal details, I like to include some chromatic aberration in the renderings. It is an artifact present in photographs. Here's a link explaining how it arises in photography and how to reproduce it in Photoshop.

I hope you enjoyed this short making-of, maybe in the future we can deal with more specific aspects related to some processing phase.

See you soon
ilgioma

Follow Leonardo Giomarelli on Twitter or Facebook

Maxwell Render vs Photograph



Which is the Photo and which is the Maxwell Render?

This is the Maxwell Render – which is still a work in progress

This is the Real Photograph

 

Work by Dmitriy Berdnichenko – an excellent result.

 

 

Huge Material Library for Maxwell 4



With the release of Maxwell 4, and the significant performance improvements afforded by the Denoiser (since the Maxwell 4.2 release), we now have over 5,800 materials available for use in Maxwell 4.

See the extensive library here


MAXWELL RENDER FOR PHOTOREALISTIC BACKGROUND PLATES




Hi guys,

This week on the blog we have a different kind of case study – one that shows how Maxwell Render is used for photorealistic background plates as part of a solution for combining photography and CGI.

For more demonstrations of how it is done, see Maxwell TV Episode 4, presented by Matthew Cherry.

Talented photographer and CG artist Matthew Cherry is a skilled Maxwell user who has brought to our attention a number of gorgeous projects over the years. Now he has agreed to take us behind the scenes of one of them. Let’s enjoy the cinematic feeling and learn more about his workflow!

A photographer and cinematographer, Matthew Cherry has a visual storytelling ability that resonates in both his portraiture and his conceptual imagery. His work intrigues, delights and inspires the viewer while merging art, theater and photography. His dramatic use of lighting builds depth and a rich palette that creates a cinematic tone within his work, while detailed sets and polished styling exude glamour and sophistication.

By working with extraordinary stylists, makeup artists and prop masters, Matthew and his team continue to create amazing visionary scenes, both realistic and fictional. Matthew draws his inspiration from a wide range of sources, most notably Italian and French cinema, American film noir and the American jazz artists of the 40s and 50s. In addition to his creative work behind the lens, Matthew has been involved in business marketing for the past fifteen years and has designed and run numerous local, national and international marketing campaigns. You can check out Matthew's website, and follow him on Facebook, Twitter or Vimeo.

THE CHALLENGE

Midas was conceived of as a personal/promotional project designed to showcase our studio’s ability to create photorealistic environments that can serve as background plates for talent that is shot in-studio. As advertising and editorial budgets continue to shrink, it is often too costly to produce the kind of iconic shots many envision. This is especially true of smaller agencies looking to “make their mark” by producing innovative work with high production values. While many agencies do use CGI in their workflow, they tend to think of it more as a special effect.

By integrating CGI in a cinematic manner, we believe that more compelling artwork can be created in which costs are dramatically reduced and production values remain high.

THE PROCESS

As a freelance studio working on a promotional project, all of the input was ours: I conceived the shot and did the art direction and casting.

The objective of this shot was to create a polished example of a celebrity portrait of the type used in film and television advertising.

This type of shot meant that casting and wardrobe were both critical to achieving the final look. To that end, my wardrobe stylist, Tanya Seeman, created the look for the talent, which included creating a handmade, couture outfit for the woman and custom accents for the man.

Given current trends in high-end television programming, we wanted to produce a modern photograph that paid homage to the music scene of the 70s. The concept was a producer who had “the touch” for producing gold records. Touch of Gold records became the back story and “Midas” was born.

Creating an entire "gold" room and making it appear real was a much harder challenge than first anticipated. With Maxwell it is easy to create a convincing gold material. However, if all the objects had been made of actual gold, even with the use of dirt and dust maps, the result would have looked obviously fake.

The challenge was to create objects that could plausibly exist within the real world, with authentic materials that still gave the impression of a gold room.

This meant creating not only a variety of gold metal materials, but also marble, leather, lacquer paint, plastic and even paper. To overcome this challenge we first produced the shot as if we had to source all the props in the real world and build a physical set. This provided real-world analogs for what we wanted in the scene, and we were able to see what was really possible. These references were invaluable in creating believable materials. Additionally, we made sure that any variable that was used was mapped. Because Maxwell does not allow the use of procedural nodes, this meant first creating the maps in either Substance or Photoshop. In addition to mapping all variables, most materials have some level of dust, scratches and fingerprints applied to them as well. While not always readily apparent, I think it still registers with the viewer and contributes to the authenticity of the material.

THE ROLE OF MAXWELL

My training is as a photographer and cinematographer, so for me, Maxwell is a natural rendering solution as it allows me to use the skill set I already have to produce beautiful renders. Also, as a photographer, I take light very seriously. If the light in a scene does not behave properly and does not interact with the materials in just the right way, the effect is ruined for me. My goal is to create the most photorealistic images possible, and to my eye, Maxwell does very well in this regard.

Everything in the frame, other than the two actors, is a render. Even the chair the male actor is sitting in is a render.

I created all the materials found in the scene. Sometimes I would create them from scratch and other times I would use the Material Gallery or the Material Assistants as a starting point. My workflow is to begin a model or project in Maya, whether I use a purchased model as a starting point or model an item from scratch. Since I often use purchased models, my first task is to clean up the geometry and re-UV the model, since the existing geometry and UVs are often problematic. After I clean up the model in Maya, I bring it into ZBrush to sculpt in a bit more realism. Once I have a finished, working model, I decide whether I can use a repeating texture or whether I need to paint a texture in ZBrush, Mari or Substance Painter. To create base textures, I also use Substance Designer. Once I've put together all the maps, I create the MXM material using the Maya plugin. The rule is that any variable worth having is worth mapping, so every slot gets a map. Once the texture is created, I add additional layers for dust, scratches, fingerprints, etc. Then I test render the prop in a virtual studio with HDRI lighting to see how it reacts.

I use Multilight all the time and it has become a pretty indispensable part of my workflow. The ability to adjust lighting post render is fantastic.

While I have tried out the new Multilight | Standalone app, if I need to relight I do it either in Photoshop or After Effects. Extra sampling also plays an important role in my workflow.

Because I tend to render at a pretty high resolution (6K to 8K), extra sampling helps to keep render times down.

In this case, because there were so many surfaces that needed to come out clean, I just let the whole scene render to sampling level 20.

POSTPRO

I try to do as much in the render as possible, and the Multilight feature makes that much easier. However, I still do a fair bit of post-production in either After Effects or, in this case, Photoshop. Most of this has to do with comping in the talent, which we shoot in-studio. To make this a seamless process, we take our custom camera setup from Maya and duplicate it exactly in the real world, including height, rotation and tilt, as well as ISO, shutter speed and f-stop. Additionally, we use the lighting information from Maxwell, along with the light positions in Maya, to create the lighting plan we use when photographing the talent in the studio. By duplicating what appears in the scene, we are able to create a seamless effect. When doing this it is important to remember to place the same lights in the scene that you will be using to light the talent. So, for this shot, there is a large (virtual) scrim positioned just in front of the chair, above the talent and angled down at 45 degrees. This matched the studio lighting setup for the talent. Without this step, the lighting in the scene won't match the light on the talent and they will look comped in.

THE FINAL PIECE

I am beyond pleased with the final result.

I have since shown this image to numerous art directors who have been blown away by the amount of realism in the set.

NONE of them thought this was a render until they saw the behind-the-scenes video (see below) that we created to showcase the construction of the scene. Based on that, I consider this a huge success.

 

Project credits:

  • Matthew Cherry – Art Direction & CGI, Photographer
  • Dan Galli – Assistant
  • Tanya Seeman – Props and Wardrobe
  • Delina Medhin – Hair & Makeup
  • Shermon Solo Braithwaite – Male Talent
  • Melinda Berry – Female Talent

Software used:

  • Maya
  • ZBrush
  • Mari
  • Substance Painter and Substance Designer
  • Maxwell Render with the Maya plugin
  • Photoshop and After Effects

Hardware used:

  • Mac Pro
  • Wacom Tablet
  • X-Rite Color Management System
  • Mamiya RZ67 ProIID with a Leaf Credo 80 Digital Back
  • Hensel Lighting

Maxwell Render 4.2.02 released


Maxwell Render 4.2.02 has been released, and we have just uploaded the new version to the Portal.

 

We have updated Maxwell Render, Maxwell Studio, all 3D integration plugins, the postproduction plugins and the Multilight app.

We hope you enjoy it.

You can get it from the Customer Portal in the “My Downloads” area: https://portal.nextlimit.com

It is mainly a bug-fix release. You'll find both Maxwell and Studio much more stable.

These are the release notes:

4.2.0.2
Publish date: Fri, 09 Feb 2018

MAXWELL RENDER
-Feature: New triangle ID channel.
-Feature: UV render channel is now multiple, with a maximum of 16 UV channels.
-Feature: You can cycle the images on channels that are multiple (custom alpha and UV) with arrows at the sides of its selector.
-Feature: Display channel name below channel selector.
-Feature: New override flag on references to select Object ID behaviour from 3 modes.
-Change: Instances of hidden objects / references are now shown.
-Change: Type, units and default values for some Stereo and Fish lens parameters.
-Fixed: Paths or dependencies with special characters cause Maxwell to fail the render.
-Fixed: Lens extension: Stereo and fish rendered always as center.
-Fixed: Reflectance channel may be wrong on textured instances.
-Fixed: Flip x and Flip y projection is not correct on Lat-Long and fish lenses.
-Fixed: Neck on Lat-Long lenses.
-Fixed: Fast multilight with simulens crashes on any change.
-Fixed: An instance pointing to a hidden object crashes on production.
-Fixed: Hiding all instances made the original object hide too.
-Fixed: Custom alpha channel ignores instances of mxs references.
-Fixed: Resuming deep alpha channel crashes on some scenes.
-Fixed: Grass extension: Instances of objects with grass inside a group have wrong grass transform.
-Fixed: Grass extension: Instances of references of objects with grass have no grass.
-Fixed: Render via Network button in OSX
-Fixed: Referenced emitter materials crash Maxwell.

MAXWELL GPU
-Feature: New triangle ID channel.
-Feature: UV channel is now multiple, with a maximum of 4 UV channels.
-Fixed: If you had non-continuous UV channels on an object (e.g. 0, 2, 5), it crashed at render.
-Fixed: Emitter map was always sampled on channel 0.
-Fixed: Alpha channel was a few pixels wider than it should be.
-Fixed: Texture properties (brightness, contrast, saturation) were not applied correctly.
-Fixed: Wrong UV index assignment to instances.

MAXWELL STUDIO
-Fixed: If you had non-continuous UV channels on an object (e.g. 0, 2, 5), the indices were changed on saving (to 0, 1, 2) and it crashed at render.
-Fixed: Quick double-click on fire button could make studio crash.
-Fixed: Key shortcuts shouldn’t work on blocked cameras.
-Fixed: Issues on hide / unhide multiple selected objects with groups.
-Fixed: Xref objects didn’t hide using isolate selected option on fire.
-Fixed: Deleting an emitter component on material with Fire running crashed Studio.
-Fixed: Newly created instances could disappear on the viewport.
-Fixed: Material Id was randomized on a material clone.
-Fixed: If a reference visibility override flag was marked on a scene, all flags were marked in the UI on loading.
-Improvement: Int and double fields don't accept letters, and the “Esc” key cancels editing.
-Improvement: Randomize Id option on materials and objects.
-Fixed: Render via Network button in OSX
-Fixed: Faculty licenses showed as unregistered when using render in viewport.
-Improvement: Return an error when the user tries to save a scene containing an object with two or more UV channels with the same ID.
-Fixed: Pesky bug related to wrong handling of UV projectors' ChannelIDs.

MATERIAL EDITOR
-Fixed: If gallery path didn’t end with a slash, it created a folder for each material.
-Fixed: Texture picker with floating preview had no size after app restart.
-Fixed: Unload texture didn’t clean procedural textures.
-Fixed: It asked for missing textures twice.

MAXWELL NETWORK
-Fixed: Connection failed if Monitor was launched before Manager on an IPv6 network.
-Fixed: Batch renders were sent with the Denoiser always activated.
-Fixed: Error shown if the node-locked license was not named “maxwell_license.lic”.

PYMAXWELL EDITOR
-Fixed: PyMaxwell module renamed from “pymaxwell” to “pymaxwell4”.

MAXWELL SDK
-Removed exclusive functions to access custom alpha channels: getAlphaCustomBuffers, getAlphaCustomBuffer, getNumberOfAlphaCustomBuffers.
-Removed exclusive functions to access shadow channels: getShadowBuffers, getShadowBuffer, getNumberOfShadowBuffers.
-New methods to access all channels as multiple channels: getNumSubChannels, getExtraBuffer (with subChannel index) and getExtraBuffers.
-New methods to get the UV array index from an ID and vice versa: getUVIdxFromId, getUVIdFromIdx.

TIPS FOR MAXWELL RENDER

Recent Maxwell forum post (a good idea to join):

 

Hello everyone,

I need some advice on how to improve this work.
These images are only part of the project.
The client tells me the images look “flat”.
Can I use some color-contrast lights (warm/cold tones)?
I await your suggestions. Thank you

Re: tips for improving images

Hello Andrea,

I would suggest this:
1. Use less uniform lighting. Try to create basic contrast between shadowed and illuminated areas.
2. Use slightly more complex materials (different roughness / mapped roughness). Most of your materials look like uniform textures. Use shaders that are more "material" oriented than texture oriented. Use textures to map more material channels (especially roughness). Use more reflections.
3. Be sure not to exceed a value of 225 in any RGB channel of a material colour (golden rule). Using 255,255,255 for white will lower the contrast of the image dramatically.

And you can always play with gamma, burn, an "S"-curve… in postproduction.

You probably already know all of this. Maybe Mihai will send you some more advanced tips :).

Pa3k

Pomrose

Find a new client, these images look good enough to me! :)

good luck

Max


The only thing I could see in these images, as was suggested already, is that the light sources eat each other, producing a very evenly illuminated image; there are very few contrast areas where the surfaces might look more interesting.

I'd tone down some lights, especially the neon ones, or just lower them to 0 and start raising them one by one to look for more interesting lighting.

Some materials could use a bit more interesting shading; most of them are quite Lambertian (so to speak).


Mihai


To what has been said about the lighting, I'd add: try to have a bit more influence from the outside lighting, with a dusk/dawn HDR.
Maxwellzone.com – tutorials, training and other goodies related to Maxwell Render

Digital Simulation with Maxwell Render





Digital Simulation

The growing process of digitalisation, machine-to-machine communication and automation is paving the way for a new revolution: that of so-called Industry 4.0. It is anticipated that the inclusion of simulation and machine learning technologies will have an even greater impact than any previous revolution, fostering a genuine transformation of companies and their manufacturing processes.

The factories of the future will not only be fully connected and motorized, but supervised by artificial intelligence units which look after the constant optimisation of all processes. Physics simulation will play a vital role in the training and design of this intelligence, together with that of the humans working alongside it.

The design of machines and robots will take place in virtual environments, where it will be possible to iterate without producing a physical prototype, since the virtual prototype will behave with the same physical accuracy. The same environment will subsequently serve to train the artificial intelligence units to carry out their tasks optimally (with maximum efficiency and safety), thanks to automatic reprogramming using deep learning techniques.

Next Limit also has a great deal to offer in the logistics of products and in the performance of robots under different environmental conditions. The simulation of fluids (traditionally associated with audiovisual production) can be applied to this new field, making it possible to simulate complex interactions between automated machinery and its surroundings, or between products and the chain of production, packaging and transportation.

The Maxwell Render 4.1 release is a big one

from Ben / Pylon Technical

The Maxwell Render 4.1 release is a big one. If you haven’t updated to Maxwell 4 yet, now is the time.

If you are an existing customer and want to upgrade your Maxwell Render 3.2.1.5, you can do so for any modeller here. Just pick your 2 plugins (Studio can be one choice).

If you're new to Maxwell and want to kick the tyres first, download a trial copy. Then come back here and we can help with your new version.

Maxwell Render 4.1


Please note: The following is oriented to the formZ implementation.


DENOISER
There's now an integrated denoiser. When enabled, there will be NO noise in your final rendering. This means Maxwell-quality renderings in a fraction of the time they took previously – in either CPU Production or GPU mode. Expect 2x – 8x faster architectural visualizations.

Denoiser Overview

Denoiser Video Tutorial in Studio (Click CC button for English subtitles)

Denoiser in Maxwell

Denoiser control in formZ

 

LIGHT MIXER
Next up, we have a new Light Mixer. When used in conjunction with Fire, you can interactively fine-tune the contribution of all lights and emitter materials in your scene from one convenient panel. It’s a lot like Multilight, inside formZ.

Light Mixer in formZ

LIGHT GROUPS
Maxwell for formZ supports formZ Light groups both in the Light Mixer and in Multilight. Intensity-override groups are represented as a single slider in both environments.

 

ONLINE MATERIALS
Using materials from the online materials database in formZ is now just a click away.

Materials Browser

Referenced MXM controls in formZ

 

NEW MATERIAL TYPES
Xrite AxF and TableBRDF material types are now supported directly by the plugin.

RELEASE NOTES
Changes

• Integrated Denoiser support. (Maxwell Options > Engine tab)
• New Light Mixer for interactively balancing the intensity of lights, light groups, and material emitters. (Extensions > Maxwell Render > Light Mixer…)
• formZ Light Group support: All lights in a formZ light group with intensity override are now represented by a single Multilight slider. (Enable in Maxwell Options > Translate tab)
• Improved access to MXM file browser. (Material Parameters > Maxwell Representation > Referenced MXM option)
• Improved access to online MXM gallery browser. (Material Parameters > Maxwell Representation > Referenced MXM option)

• Direct support for X-Rite AxF Materials (Material Parameters > Maxwell representation).
• Direct support for TableBrdf Materials (Material Parameters > Maxwell representation).
• Support for rendering with the number of available logical processors, minus a specified number. (Set a negative number in Maxwell Options > Engine Tab > CPU Threads)
• Plugin now uses central Maxwell installation.
• Update to the Maxwell 4.1 engine.

Fixes
• Grass Modifiers using formZ material option now restored correctly on open.

• Denoiser At Each SL/At End parameter now correctly saved in project.

5 TIPS TO USE THE DENOISER IN MAXWELL 4




Hello everyone,

We are back with some valuable tips from our very own Product Specialist, Fernando Tella. Fer is a long-time Maxwell user – ever since the alpha version in 2005! Ten years later he joined the Maxwell team. Now he helps customers with technical issues, and also does product demos, tutorials and training and helps with product development. In this blog post he will help you grasp Maxwell 4's new top feature – the Denoiser – so you can take advantage of its full potential. Here we go 🙂

THE BASICS

As a start, it’s important to understand some basic concepts:

  • When using the Denoiser, Maxwell launches two renders.
  • It also automatically activates some extra channels that help preserve the details of the image, such as texture details, the shapes of the objects, materials, etc.
  • All this is done automatically when launching a render with the Denoiser.

A DIFFERENT KIND OF NOISE (A CLEAN ONE)

The new Denoiser integration gives you usable denoised images as the render progresses. Just as Maxwell progressively cleans the image, the denoised image evolves with the render. At the beginning, when the sampling level is very low, the render will look blurry; as the render gets more defined, you'll notice it starting to “learn” where the limits of the objects are, the materials, the features of the textures, and so on. As the sampling level goes up, the denoised image and the original render get closer and closer, so we can say the denoised image also converges to the true solution over time, just like the original render – but instead of noise, you get usable (or even perfect) images in the meantime.

The following video shows how the denoised and render image evolves as the render goes on, from sampling level 4 to 13 (Scene by Maxwell Xpert David de las Casas).

So, in the worst case, it will take the same time as the non-denoised image, and in the best case, you will save a lot of time.

COMBINING TECHNIQUES

TIP 2: Use the Denoiser and Extra Sampling combo

If you find that some particular materials get too blurred when using the Denoiser (for example, in the case of grainy textures), but most of the image looks good at some particular SL, you can combine two different techniques: Denoiser and Extra Sampling.

The idea to keep in mind is that the longer the render goes on, the less the Denoiser has to guess and the more its output is based on the true render, so if one particular texture is problematic, you usually only have to render it longer.

Based on this, if you have a particularly problematic material where the texture itself is grainy, like sand, sugar or the towel in this case, you only have to render the whole image to the needed SL and then use Extra Sampling for that material or object, rendering to a higher SL only in that part.

Please note that, for the moment, when using Extra Sampling, the Denoiser will only be calculated at the end of the render, not at each SL, so you’ll have to wait until the end to see the denoised result.

OBJECTS BEHIND TRANSPARENT GEOMETRIES

TIP 3: For objects behind transparent geometries, use the Shadow channel

When an important part of your scene contains objects behind transparent ones, you’ll notice the extra channels used in Fast mode (Normals, Position and Reflectance) won’t give information about what’s behind the glass.

In these cases, you could consider using Accurate mode, which adds Shadow channel, as it will give information about what’s behind the glass and this will improve the Denoiser result.

In this case the render will take more time (around 1.5x the time without the Shadow channel), so it is wise to test which works better: rendering with the Shadow channel, or letting the render reach a higher SL without it. The result in terms of time can be very similar.

RE-DENOISE AFTER CLOSING MAXWELL

TIP 4: Use saved MXI files to denoise after closing Maxwell

For the moment, once Maxwell is closed after rendering an image with the Denoiser, the two passes cannot be loaded back into the interface to resume them or to make changes and re-denoise. Nevertheless, if you have at least a couple of MXI files of the same frame (with the required channels), you can make Maxwell denoise them by running the following command from the command line:

mximerge -folder:"folder containing the mxi files of the same frame" -coopdenoiser:"output path and name of the denoised image" -target:"path of the merged mxi file"

Then you ask – “But where can I find the two mxi files saved while rendering with Denoiser?”

See below the paths where they are stored depending on the OS. You will find two files named after your current scene and ending with _render1.mxi and _render2.mxi:

In Windows they are stored under: C:\Users\<username>\AppData\Local\Temp\maxwellrendertmp

In MacOS they are stored in a random folder under: private/var/folders/…../T/maxwellrendertmp/
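As a purely hypothetical example (the scene name and paths below are made up; substitute your own), on Windows the full command might look like this:

    mximerge -folder:"C:\Users\you\AppData\Local\Temp\maxwellrendertmp" -coopdenoiser:"C:\renders\cool_render_denoised.png" -target:"C:\renders\cool_render_merged.mxi"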

MACRO IN DENOISED IMAGE NAME

TIP 5: Use a macro for the denoised image name

When setting the denoised image name, instead of a specific name it might be better to use a macro like %scenename% (including the % signs).

It creates the perfect name: the macro is replaced by the scene name when the file is saved, which avoids having to rename the denoised image when you make different versions of your rendered files.

For example, if you are rendering a scene named cool_render.mxs, you can set the Denoiser output path as Denoised_%scenename%.png 

Once the render is finished, you will get a file called Denoised_cool_render.png.

Another useful macro could be %camera%, which is replaced by the name of the active camera.

Here is the list of all the supported macros, in case you find this convenient.

I hope you find these tips useful and that they help you master the Denoiser! 🙂

Cheers!

Fernando