The latest version of formZ, years in the making, is about to be released on Jan 7 (the second week of January 2020). It will be available both as a perpetual license (A$1,500, on offer for A$1,070 until Jan 6) and as a new 12-month annual subscription license for A$495 until Jan 6.
The details are:
Improved application interface with more workspace and easy-to-manage palettes.
Classic draft/layout space features.
v6 draft files open directly.
Rectangular and Radial array tools.
Scale to Size tool.
Updated file translators including direct PSD and SketchUp export.
New instance engine makes components significantly faster!
It’s with great pleasure that we announce the release of Corona Renderer 5 for Cinema 4D! On the Corona Core side, this version focuses on optimizations: saving memory for displacement, and both memory and render time for caustics. On the Cinema 4D-specific side there are also a great many improvements, including multiple skies for use in LightMix, the addition of the Select Material, Select Shader and MultiShader, greatly improved handling of proxies, and more!
NEW FEATURES VIDEO
No time to read things in detail and want the quick overview? We’ve got you covered with the New Features video!
Grab the latest version while you read! It’s available at:
Demo Refresh: If you have tried a previous version of Corona Renderer for Cinema 4D and your 45-day trial has expired, then you’ll be pleased to hear that we have automatically refreshed the demo period to give everyone an extra 14 days! Simply download and install Corona Renderer 5 for Cinema 4D from the link above, and activate the demo license right within Cinema 4D! Enjoy!
2.5D Displacement offers significant savings in memory usage
Optimizations to caustics result in memory savings and faster rendering
Greatly improved handling of Corona Proxies, which can now include animation, and have better handling of large numbers of proxies that use the same data
Multi Shader added, to allow randomizing colors or textures between objects or sub-objects
UVWRandomizer updated to include the Mesh Element mode (to allow randomization across sub-objects) and Object buffer ID mode
Select Shader and Select Material added, allowing an object to store multiple shaders or materials (you can think of it as like a self-contained “mini-library” that the object stores with itself)
Corona Sky object replaces the Corona Sky tag, allowing multiple environments in LightMix, and use of any shader as environment lighting
And of course lots of quality of life improvements, bug fixes, and UI improvements!
Maxwell is all about quality. Period. This has been our sacred mantra for more than 15 years of development. Our secret sauce is a physically correct, unbiased spectral engine, which produces not only beautiful images but also lighting-accurate simulations. In fact, Maxwell is considered the ground truth in rendering and CGI production. We strive to inspire others, and you inspire us.
If you have ever heard that Maxwell is slow…well, it was. Our commitment to developing the most accurate render engine on the market wasn’t negotiable. The story has changed in Maxwell 5. A fully rewritten multi-GPU core now delivers final results in minutes and accurate previews in seconds, keeping physical accuracy intact. With multiple GPUs working in parallel you’ll get an unprecedented Maxwell experience. With Maxwell 5, time is now in your hands.
The new Cloud Render service gives you access to the most powerful machines available in the cloud (up to 96 cores), speeding up the render process and thus improving your productivity. Cloud render jobs can be easily dispatched from Maxwell Studio and the plugins, freeing the local computer from high CPU loads. Cloud Render helps you use your time and resources more efficiently.
Maxwell 5 is seamlessly integrated in most of the major 3D/CAD software solutions and available separately:
Studio offers an independent production/rendering environment to create, edit and render Maxwell scenes. Render nodes and network tools for advanced deployments are included. Maxwell is available for Windows, macOS and Linux.
Epic Games’ Datasmith Plugin Available Free of Charge in Unreal Studio Beta
Smart Technology is excited to announce that Epic Games has introduced native support for Cinema 4D in the latest release of Unreal Engine 4.23. Integration for Cinema 4D is enabled via the Datasmith plugin, presently a feature of the free Unreal Studio Beta.
Unreal Engine is the industry-leading suite of production-proven development tools used to create some of the world’s most beloved games, including Fortnite, and is popular with many in the Cinema 4D community. The ability to bring assets from Cinema 4D directly into Unreal Engine and quickly iterate offers a more seamless content creation experience for high-end, real-time animation and motion graphics workflows from game trailers to pre-rendered graphics in broadcast, to immersive AR and VR visualizations.
The new level of integration allows .c4d files to be imported directly into Unreal Engine, with support for scene hierarchies, geometry, materials, lights, cameras, and baked animations. The ‘Save for Cineware’ command in Cinema 4D lets users easily bake complex procedural motion graphics directly into real-time scenes through the Unreal Engine Sequencer cinematic editor.
Set up a shelter or cover to avoid direct sunlight
Special Objects for Scanning
For transparent, highly reflective, or dark objects, please spray powder before scanning
Printable Data Output
Able to export watertight 3D models directly for 3D printing
OBJ; STL; ASC; PLY; P3; 3MF
Scanner Body Weight
1.13 kg (including the USB 3.0 cable)
Win7; Win8; Win10 (64-bit)
Graphics card: NVIDIA GTX 1060 or higher; video memory: >4 GB; processor: Intel i7-8700; memory: 32 GB; interface: high-speed USB 3.0
Graphics card: Quadro P1000 or above, or NVIDIA GTX 660 or higher; processor: Intel Xeon E3-1230, Intel i5-3470, or Intel i7-3770; interface: high-speed USB 3.0; memory: 8 GB
*Volumetric accuracy refers to the relationship between 3D data accuracy and object size; accuracy decreases by 0.3 mm per 100 cm. This result was obtained by measuring sphere centers under markers alignment.
Versatile Scan Modes & Align Modes
Handheld Rapid Scan, Handheld HD Scan, Fixed Scan
NVIDIA RTX Performance explained by Paul Arden – miGenius
NVIDIA RTX technology was announced late last year and has gathered a lot of coverage in the press. Many software vendors have been scrambling to implement support for it since then and there has been a lot of speculation about what is possible with RTX. Now that Iray RTX is finally about to be part of RealityServer we can talk about what RTX means for our customers and where it will be most beneficial for you.
The Iray RTX speed-up is highly scene dependent but can be substantial. If your scene has low geometric complexity, you are likely to see only a small improvement. Larger scenes can see around a 2x speed-up, while extremely complex scenes can even see a 3x speed-up.
What is RTX?
RTX is both software and hardware. The key enabling innovation introduced with RTX hardware is a new type of accelerator unit within the GPU called an RT Core. These cores are dedicated purely to performing ray-tracing operations and can do so significantly faster than using traditional general purpose GPU compute. Performance will depend on how many RT Cores your card has. The Quadro RTX 6000 for example has 72 RT Cores.
Alongside the new hardware, NVIDIA has introduced various APIs and SDKs which enable software developers to access these new RT Cores. For example, in the gaming world RTX hardware is accessed through the Microsoft DirectX Raytracing (DXR) API, while production rendering tools such as Iray use OptiX.
Rendering software must be modified to take advantage of the new software APIs and SDKs in order to access the hardware. With RTX hardware and the latest RealityServer release, the portion of rendering work performed by Iray that involves ray intersection and computation of acceleration structures (see below) can be offloaded to the new RT Core hardware, greatly speeding up that part of the rendering computation.
Ray Intersection and Acceleration Structures
Ray intersection is the work of determining whether a ray (just think of it as a straight line) crosses through a given primitive (e.g., a triangle). We won’t cover exactly how path-tracers like Iray work but Disney have a great video Practical Guide to Path Tracing which gives you a good idea of the basics. You’ll quickly see that ray intersection is key to making this work.
While the mathematics involved in checking whether a ray intersects a primitive is relatively simple (at least for a triangle), scenes today can easily contain millions or even hundreds of millions of primitives. To make matters worse, for typical scenes you also need to perform these checks for millions of rays. That’s millions of primitives times millions of rays: a whole lot of computation.
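To make the per-ray work concrete, here is a minimal ray–triangle intersection test in Python using the Möller–Trumbore algorithm, one common approach; this is purely illustrative and not necessarily how Iray or the RT Cores implement it:

```python
# Möller–Trumbore ray–triangle intersection (a common approach;
# illustrative sketch only, not Iray's actual implementation).
def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Return distance t along the ray to the triangle, or None on a miss."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]
    def dot(a, b): return sum(a[i]*b[i] for i in range(3))

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, edge2)
    det = dot(edge1, p)
    if abs(det) < eps:            # ray parallel to the triangle's plane
        return None
    inv_det = 1.0 / det
    t_vec = sub(origin, v0)
    u = dot(t_vec, p) * inv_det   # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, edge1)
    v = dot(direction, q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(edge2, q) * inv_det
    return t if t > eps else None

# A ray cast along +z from the origin hits a triangle in the z=5 plane:
hit = ray_triangle_intersect([0, 0, 0], [0, 0, 1],
                             [-1, -1, 5], [1, -1, 5], [0, 1, 5])
# → 5.0 (distance along the ray)
```

Cheap as each test is, running it naively for every ray against every primitive is what makes the brute-force approach hopeless at scene scale.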
Naively checking for intersections with all primitives doesn’t cut it; you’d be waiting years for your images. To speed things up, ray-tracing almost always uses an acceleration structure as well. This uses some pre-computation to split the scene into a hierarchy of primitives that can be tested rapidly, quickly eliminating large numbers of primitives from consideration.
As a very simple example, imagine you have a scene with a million primitives distributed fairly evenly. If you cut the scene into two groups, you can first test whether a ray intersects the volume of one of the groups; if it does not, you can immediately exclude half of the primitives. By nesting structures like this you can progressively test until you reach the primitive that is intersected.
While this is a massively over-simplified example and there is a lot of subtlety and nuance to implementing a highly optimised system for this, the basic principle remains the same. Devise a cheap test that can eliminate as many primitives from consideration as possible. RT Core hardware accelerates the query of acceleration structures and the ray intersection calculations making the whole process significantly faster.
Enough Already, How Much Faster?
It depends. Yes, everyone hates this answer, but there’s no way around it here. So far we’ve seen a typical range, for practical scenes, of 5% to 300%. That is a pretty wide range, so what determines how much faster it will be? We didn’t describe ray intersection above just for fun.
Notice that when we talked about ray intersection we never mentioned materials, textures, lighting, global illumination, shadows, or any of the other jargon commonly associated with photorealistic rendering. That is because for a renderer to do its job, it has to do much more than just ray intersections, even if it calls itself a ray-tracer.
All of the calculations needed for solving light transport, evaluating materials, looking up textures, calculating procedural functions and so on are still being performed on the traditional GPU compute hardware using CUDA (at least in the case of Iray). This portion of the rendering calculation is not being accelerated by RTX. So how much ray intersection is being done in a typical rendering with Iray for example?
In many scenes, we found that ray intersection comprises only 20% of the total work performed by rendering. This is a very important point. Even if the new RT Cores made ray intersection infinitely fast so that it took no time, 80% of the work in that scene would still remain. So a 100-second render would still take 80 seconds with RTX acceleration, giving a speed-up of 1.25x (25%). Of course, ray intersection is not free with RTX, just faster, so the actual speed-up would be lower; this is the hypothetical upper limit.
If you have a scene where 60% of the work is ray intersection, you will naturally see a much more significant speed-up. In that case, on a 100-second render with an infinitely fast ray intersector you would still have 40 seconds of rendering, giving a speed-up of 2.5x (250%) at the hypothetical upper limit. In general, we have found RTX provides the greatest benefit in very complex scenes with millions of triangles, and in scenes that heavily exploit instancing.
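These upper limits are an instance of Amdahl’s law: the achievable speed-up is bounded by the fraction of the work being accelerated. The arithmetic can be sketched like this:

```python
# Amdahl's-law view of the RTX speed-up figures above.
# ray_fraction is the share of render time spent on ray intersection.
def max_speedup(ray_fraction):
    """Hypothetical upper limit if ray intersection became free."""
    return 1.0 / (1.0 - ray_fraction)

def speedup(ray_fraction, accel_factor):
    """More realistic: ray intersection becomes accel_factor times faster."""
    return 1.0 / ((1.0 - ray_fraction) + ray_fraction / accel_factor)

print(max_speedup(0.2))   # 1.25  – scene that is 20% ray intersection
print(max_speedup(0.6))   # 2.5   – scene that is 60% ray intersection
print(speedup(0.6, 5.0))  # about 1.92 – a finite 5x intersection speed-up
```

The 5x acceleration factor in the last line is an assumed illustrative value, not a measured RT Core figure; the point is simply that any finite acceleration lands below the hypothetical limit.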
Real-world Performance Testing
We took 14 scenes we had available and tested them on a Quadro RTX 6000 card with Iray 2018.1.3 and Iray RTX 2019.1.0 to evaluate the speed-up.
Above you can see we’ve also included an estimate of the percentage of the rendering time that is associated with ray tracing where available. This gives a clear picture of how this directly affects how much speed-up you get by using RTX hardware. It is also clear that the more complex scenes benefit a lot more since they spend more time doing ray intersection.
Unfortunately the strong scene dependence of RTX performance means there is no single number you can give to describe the performance advantage when integrated into a full renderer like Iray. Any way you cut it, you’ll definitely get a performance boost from RTX, exactly how much will depend on your scenes.
One bonus not considered here is that the inference part of the AI Denoiser in Iray can be accelerated by the Tensor Core units on the RTX cards, much the same way as was seen on Volta based hardware. This can be quite useful on larger image sizes when using the denoiser. There is also a more general performance bump that comes with the new version of Iray that is unrelated to RTX and a significant speed up in Iray Interactive mode for real-time use cases.
Tesla T4 and Cloud Providers
The NVIDIA Tesla T4 card, which is increasingly used in the data center and becoming available at various cloud providers, actually also contains RT Core hardware (40 RT Cores), even though it doesn’t have the RTX branding. This isn’t emphasised in marketing material, so it is easy to miss.
For many of our customers, availability of hardware at popular cloud providers is important, since they are often not deploying on their own hardware. As of writing, Google Compute Engine has the Tesla T4 in general availability, while Amazon Web Services has Tesla T4-based instances in beta, with general availability expected soon.
Making a Decision
We get a lot of questions from customers about whether they should be looking at RTX hardware for RealityServer. It certainly gives you more options to consider, and it is now important to think about your content as well when making a purchasing decision. If you deal with highly complex scenes, there is little doubt that RTX is worthwhile, and the price points of RTX hardware compared to, say, Volta-based hardware make it very compelling, even if it doesn’t quite reach Volta’s performance on smaller scenes. Compared to Pascal- or Maxwell-based cards, RTX cards are a clear winner in price/performance, and they walk all over older Kepler-based cards.
The best way to make a decision is to benchmark a representative scene or scenes from your own content, rather than our generic benchmark tests. Our benchmarks will give you a good feel for the difference between cards as a baseline, but you need to test your own data to determine how much additional benefit you will get from RTX hardware. If you’re a RealityServer customer, or considering purchasing RealityServer, and have scene data, we can help you with these tests; contact us to learn more.
Leonardo Giomarelli (ilgioma on the Maxwell official forums), who is the creator of some great materials on MZ (Copper, Brass, Plastic) has been kind enough to give us a glimpse of his workflow in creating these beautiful interior renders of a modern bathroom.
I started working with MaxwellZone about a year ago, producing sets of materials based on photographic references found in this site’s shop. Lately, while testing Maxwell Render on interior projects, I produced a small setting and, talking with Mihai, we decided to share on the blog the main steps of my work.
Choosing a project
Personally, I find most of the inspiration for my projects on Pinterest; I think this social network is exceptionally well structured for anyone who needs to check trends, color palettes, compositions, and so on. But you must be careful not to stay too faithful to a single image, to avoid the replication effect, which I don’t like. If the project I want to do represents a bathroom, I also look for references to bedrooms, kitchens, or anything else beyond the guideline images for my work. In any case, for my inspiration I always prefer photographic shots to other projects realized in CGI; it is always good to have something real as the primary reference.
From idea to 3D
Here there is very little to say… knowing how to model well and reproduce details will immediately bring the project to a higher level. Obviously, for the fabrics you cannot do without Marvelous Designer.
Creating good lighting
Choosing the type of lighting is essential for creating engaging images. Of all the known techniques, I have always preferred the Sky Dome because it offers a neutral, very soft light to use as the primary light. Depending on the design and the type of product being represented, I add support lights to enhance certain areas, or a light complementary to the primary, perhaps warmer, to give dynamism to the shot. The following pictures show the Sky Dome settings and the layout of the lights used in this job.
A little trick: as a base for the architecture I always put a disc or a plane to simulate pavement outside the building, which naturally contributes to bringing the light reflected by the Sky Dome into the room. To this disc or plane (it makes no difference which) I always apply a very simple material with a medium gray color, RGB 180, never white. This avoids burning out the areas near the openings.
Up to this point everything is relatively simple but, believe me, using approximate, too-“perfect” materials will cancel out every effort. Metal materials in particular can push the image towards photorealism if well calibrated. For my project I made extensive use of the materials available on MaxwellZone, so in addition to steel I also used the Ceramic Zirconia as a starting point for the tub and the elements of the radiators.
When I have to make simple glass, like the vase, I rely on the Maxwell Render presets, which I find very easy to use and effective at the same time. For other materials, such as the textiles and walls, I used textures from some models purchased on Bentanji. I think this is also a very useful resource for us Maxwell users.
The finishing touches
There is no rendering without post-production, but it is necessary to prepare all the render passes well. The image shows the passes I always render out.
As for the Custom Alpha channels, I usually use them only when I have objects that are difficult to select with just the MaterialID or ObjectID; in this case, the branches and flowers. Below I show the images that make up the project before and after the adjustments in Photoshop. As you will see, some images need only very slight adjustments, while others need more work to show their full potential…
This image has been worked on very little; I only wanted to give it a warm tone that, in my opinion, improves the look.
Here, however, the adjustments were heavier. I wanted an emotional shot, so I pushed a bloom effect typical of backlighting. To obtain it, simply select the opening area, fill the selection with white, and then increase a blur filter’s value until a credible effect is obtained. The original shot was also underexposed; in this case I fixed it directly in Photoshop, but I could have done it with the Multilight.
Just small adjustments to even out the tonality with the previous shots.
In the presence of metal details, I like to include some chromatic aberration in the renderings; it is an artifact present in photographs. Here’s a link explaining how it is generated in photography and how to recreate it in Photoshop.
I hope you enjoyed this short making-of, maybe in the future we can deal with more specific aspects related to some processing phase.
Thinking out of the box with Cinema 4D by Tim Clapham
Every year Sydney hosts the Vivid Festival of Light, Music and Ideas that includes outdoor immersive light installations and projections.
Like the installation shown here on the Sydney Opera House sails, created with Cinema 4D by Tim Clapham and his team. The festival also includes performances by local and international musicians, and an ideas-exchange forum featuring public talks and debates with leading creative thinkers.