Epic Games’ Datasmith Plugin Available Free of Charge in Unreal Studio Beta
Smart Technology is excited to announce that Epic Games has introduced native support for Cinema 4D in the latest release of Unreal Engine, 4.23. Integration for Cinema 4D is enabled via the Datasmith plugin, presently a feature of the free Unreal Studio Beta.
Unreal Engine is the industry-leading suite of production-proven development tools used to create some of the world’s most beloved games, including Fortnite, and is popular with many in the Cinema 4D community. The ability to bring assets from Cinema 4D directly into Unreal Engine and quickly iterate offers a more seamless content creation experience for high-end, real-time animation and motion graphics workflows from game trailers to pre-rendered graphics in broadcast, to immersive AR and VR visualizations.
The new level of integration allows .c4d files to be imported directly into Unreal Engine with support for scene hierarchies, geometry, materials, lights, cameras, and baked animations. The 'Save for Cineware' command in Cinema 4D allows users to easily bake complex procedural motion graphics directly into real-time scenes via the Unreal Engine Sequencer cinematic editor.
NVIDIA RTX Performance, explained by Paul Arden – miGenius
NVIDIA RTX technology was announced late last year and has gathered a lot of coverage in the press. Many software vendors have been scrambling to implement support for it since then and there has been a lot of speculation about what is possible with RTX. Now that Iray RTX is finally about to be part of RealityServer we can talk about what RTX means for our customers and where it will be most beneficial for you.
Iray RTX speed-up is highly scene dependent but can be substantial. If your scene has low geometric complexity, you are likely to see only a small improvement. Larger scenes can see around a 2x speed-up, while extremely complex scenes can even see a 3x speed-up.
What is RTX?
RTX is both software and hardware. The key enabling innovation introduced with RTX hardware is a new type of accelerator unit within the GPU called an RT Core. These cores are dedicated purely to performing ray-tracing operations and can do so significantly faster than using traditional general purpose GPU compute. Performance will depend on how many RT Cores your card has. The Quadro RTX 6000 for example has 72 RT Cores.
Alongside the new hardware, NVIDIA has introduced various APIs and SDKs which give software developers access to these new RT Cores. In the gaming world, for example, RTX hardware is accessed through the Microsoft DirectX Raytracing API (DXR), while production rendering tools such as Iray use NVIDIA OptiX.
Rendering software must be modified to take advantage of the new software APIs and SDKs in order to access the hardware. With RTX hardware and the latest RealityServer release, the portion of rendering work performed by Iray that involves ray intersection and computation of acceleration structures (see below) can be offloaded to the new RT Core hardware, greatly speeding up that part of the rendering computation.
Ray Intersection and Acceleration Structures
Ray intersection is the work of determining whether a ray (just think of it as a straight line) crosses through a given primitive (e.g., a triangle). We won't cover exactly how path-tracers like Iray work, but Disney has a great video, Practical Guide to Path Tracing, which gives you a good idea of the basics. You'll quickly see that ray intersection is key to making this work.
While the mathematics involved in checking if a ray intersects a primitive is relatively simple (at least for a triangle), scenes today can easily contain millions or even hundreds of millions of primitives. To make matters worse, for typical scenes you also need to perform these checks for millions of rays. That's millions of primitives times millions of rays: a whole lot of computation.
Naively checking for intersections with all primitives doesn't cut it; you'd be waiting years for your images. To speed things up, ray-tracing almost always uses an acceleration structure. This uses some pre-computation to split the scene into a hierarchy of groups of primitives that can be tested rapidly, eliminating large numbers of primitives from consideration quickly.
As a very simple example, imagine you have a scene with a million primitives distributed fairly evenly. If you cut the scene into two groups, you can first test whether a ray intersects the bounding volume of one of the groups; if it does not, you can immediately exclude half of the primitives. By nesting structures like this you can progressively narrow the search until you reach the primitive that is intersected.
While this is a massively over-simplified example and there is a lot of subtlety and nuance to implementing a highly optimised system for this, the basic principle remains the same. Devise a cheap test that can eliminate as many primitives from consideration as possible. RT Core hardware accelerates the query of acceleration structures and the ray intersection calculations making the whole process significantly faster.
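The two-group idea can be sketched in a few lines of Python. This is a toy illustration of the principle only, not how Iray or OptiX implement it: test a ray against each group's bounding box first, and only run per-primitive tests inside a box the ray actually hits.

```python
# Toy illustration of an acceleration structure: axis-aligned bounding
# boxes (AABBs) let us skip whole groups of primitives that a ray
# cannot possibly hit. All names here are illustrative, not any real API.

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does the ray (starting at origin) enter the box?"""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:          # ray parallel to, and outside, this slab
                return False
        else:
            t0, t1 = (lo - o) / d, (hi - o) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_near, t_far = max(t_near, t0), min(t_far, t1)
            if t_near > t_far:            # slabs don't overlap along the ray
                return False
    return True

def intersect(ray_origin, ray_dir, groups):
    """groups: list of (box_min, box_max, primitives). Returns how many
    per-primitive tests we would actually have to run; primitives are
    stand-ins for real triangles."""
    tested = 0
    for box_min, box_max, prims in groups:
        if not ray_hits_aabb(ray_origin, ray_dir, box_min, box_max):
            continue                      # skip every primitive in this group
        tested += len(prims)              # real ray/triangle tests would go here
    return tested

# Two groups of 500,000 primitives each; a ray pointing at only one of
# them never tests the other half of the scene.
left  = ((-2.0, -1.0, -1.0), (-1.0, 1.0, 1.0), [None] * 500_000)
right = (( 1.0, -1.0, -1.0), ( 2.0, 1.0, 1.0), [None] * 500_000)
print(intersect((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), [left, right]))  # 500000
```

Nesting boxes inside boxes turns this linear scan into a tree walk, which is exactly the query that RT Cores accelerate in hardware.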
Enough Already, How Much Faster?
It depends. Yes, everyone hates this answer, but there's no way around it here. So far we've seen a typical range, for practical scenes, of 5% to 300%. That is a pretty wide range, so what determines how much faster it will be? We didn't describe ray intersection above just for fun.
Notice that when we talked about ray intersection we never mentioned materials, textures, lighting, global illumination, shadows or any of the other jargon commonly associated with photorealistic rendering. That is because, to do its job, a renderer has to do much more than just ray intersections, even if it calls itself a ray-tracer.
All of the calculations needed for solving light transport, evaluating materials, looking up textures, calculating procedural functions and so on are still being performed on the traditional GPU compute hardware using CUDA (at least in the case of Iray). This portion of the rendering calculation is not being accelerated by RTX. So how much ray intersection is being done in a typical rendering with Iray for example?
In many scenes, we found that ray intersection comprises only 20% of the total work being performed by rendering. This is a very important point. Even if the new RT Cores were to make ray intersection infinitely fast so that it takes no time, 80% of the work still remains in that scene. So a 100 second render would still take 80 seconds with RTX acceleration, giving a speed-up of 1.25x (25%). Of course, ray intersection is not free with RTX, just faster, so the speed-up would be lower than this but this is the hypothetical upper limit.
If you have a scene where 60% of the work is ray intersection you will naturally see a much more significant speed-up. In that case on a 100 second render, with an infinitely fast ray intersector you still have 40 seconds of rendering, giving a speed-up of 2.5x (250%) at the hypothetical upper limit. In general we have found RTX provides the greatest benefit in very complex scenes with millions of triangles and also scenes that heavily exploit instancing.
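The arithmetic in the two examples above is just Amdahl's law applied to rendering. A few lines of Python make it easy to plug in your own numbers; the function name and structure are our own sketch, not part of any NVIDIA or Iray API:

```python
def rtx_speedup(ray_fraction, rt_core_speedup):
    """Overall render speed-up when only the ray-intersection share of
    the work (ray_fraction, between 0 and 1) is accelerated by a factor
    of rt_core_speedup. This is Amdahl's law: the remaining work still
    runs at normal speed on the general-purpose GPU cores."""
    remaining = (1.0 - ray_fraction) + ray_fraction / rt_core_speedup
    return 1.0 / remaining

# The hypothetical upper limits from the text (infinitely fast RT Cores):
print(round(rtx_speedup(0.2, float("inf")), 2))  # 1.25 -- 20% ray work
print(round(rtx_speedup(0.6, float("inf")), 2))  # 2.5  -- 60% ray work

# A more realistic case: 60% ray work, RT Cores "only" 5x faster.
print(round(rtx_speedup(0.6, 5.0), 2))  # 1.92
```

As the last line shows, even a large per-intersection speed-up translates into a smaller overall gain, which is why complex, intersection-heavy scenes benefit the most.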
Real-world Performance Testing
We took 14 scenes we had available and tested them on a Quadro RTX 6000 card with Iray 2018.1.3 and Iray RTX 2019.1.0 to evaluate the speed-up.
Alongside the results above, we also included an estimate of the percentage of rendering time associated with ray-tracing, where available. This gives a clear picture of how directly it affects the speed-up you get from RTX hardware. It is also clear that the more complex scenes benefit a lot more, since they spend more time doing ray intersection.
Unfortunately the strong scene dependence of RTX performance means there is no single number you can give to describe the performance advantage when integrated into a full renderer like Iray. Any way you cut it, you’ll definitely get a performance boost from RTX, exactly how much will depend on your scenes.
One bonus not considered here is that the inference part of the AI Denoiser in Iray can be accelerated by the Tensor Core units on the RTX cards, much the same way as was seen on Volta based hardware. This can be quite useful on larger image sizes when using the denoiser. There is also a more general performance bump that comes with the new version of Iray that is unrelated to RTX and a significant speed up in Iray Interactive mode for real-time use cases.
Tesla T4 and Cloud Providers
The NVIDIA Tesla T4 card, which is increasingly used in the data center and becoming available at various cloud providers, actually also contains RT Core hardware (40 RT Cores) even though it doesn't carry the RTX branding. This isn't emphasised in marketing material, so it is easy to miss.
For many of our customers, availability of hardware at popular cloud providers is important, since they are often not deploying on their own hardware. As of writing, Google Compute Engine has the Tesla T4 in general availability, while Amazon Web Services has Tesla T4-based instances in beta that should reach general availability soon.
Making a Decision
We get a lot of questions from customers on whether they should be looking at RTX hardware for RealityServer. It certainly gives you more options to consider, and it is now important to think about your content as well when making a purchasing decision. If you deal with highly complex scenes, there is little doubt that RTX is worthwhile, and the price points of RTX hardware, compared to, say, Volta-based hardware, make it very compelling even if it doesn't quite reach the performance of Volta on smaller scenes. Compared to Pascal- or Maxwell-based cards, RTX cards are a pretty clear winner in price/performance, and they walk all over older Kepler-based cards.
The best way to make a decision is to benchmark a representative scene or scenes from your own content rather than our generic benchmark tests. Our benchmarks will give you a good feeling for the difference between cards as a baseline, but you need to test your own data to determine how much additional benefit you will get on RTX hardware. If you’re a RealityServer customer or considering purchasing RealityServer and have scene data we can help you with these tests, contact us to learn more.
Thinking out of the box with Cinema 4D by Tim Clapham
Every year Sydney hosts the Vivid Festival of Light, Music and Ideas, which includes outdoor immersive light installations and projections, like the one shown here on the Sydney Opera House sails, created with Cinema 4D by Tim Clapham and his team. The festival also includes performances by local and international musicians, and an ideas exchange forum featuring public talks and debates with leading creative thinkers.
Cineware for Illustrator Version 1.2 and R20 available
The update features a number of workflow optimizations, updates and fixes for several bugs and issues under macOS and Windows.
Today we’re happy to announce the next major update for Cineware for Illustrator. Thanks to the support of numerous users from the 2D and 3D communities we were able to integrate a number of great new features into the plug-in.
Highlights in Version 1.2:
Support for the new R20 render engines, including support for Sketch & Toon
Users can define new camera positions from within Illustrator
Materials can be duplicated and named from within Illustrator
Improved installation process, including a Welcome screen and helpful tips to get started
Polished the overall user interface and fixed a long list of bugs
Cineware is just one year old and we are continually improving it to make users’ lives easier. Your feedback and input really help us make it even better. So let us know what you think! Please tell us everything, the good, the bad and even the ‘I don’t get it?’.
You can contact us at: firstname.lastname@example.org
Your feedback helps us to improve and develop Cineware for Illustrator continuously!
Download Cineware for Illustrator 1.2 by clicking the image above.
Groundbreaking Update of the 3D Application Delivers Advanced Features and Streamlined Workflows to Creative Professionals
FRIEDRICHSDORF, Germany — August 1, 2018 — MAXON today unveiled Cinema 4D Release 20 (R20), a break-through version of its iconic 3D design and animation software. Release 20 introduces high-end features for VFX and motion graphics artists including node-based materials, volume modeling, robust CAD import and a dramatic evolution of the MoGraph toolset. MAXON will debut Cinema 4D R20 live and online (C4DLive.com) at the upcoming SIGGRAPH 2018 convention August 14-16, in Vancouver, BC.
“We are excited to be delivering high-end tools and features that will streamline workflow and push the industry in new and exciting directions,” says David McGavran, CEO at MAXON Computer GmbH. “Over the last decade, our MoGraph toolset has revolutionized the broadcast graphics industry. The new Fields system in R20 offers the next evolution in Cinema 4D’s signature workflow.”
Key highlights in Release 20 include:
Node-Based Materials – Provide new possibilities for creating materials, from simple references to complex shaders, in a node-based editor. With more than 150 nodes to choose from that perform different functions, artists can combine nodes to easily build complex shading effects for greater creative flexibility. For an easy start, users new to a node-based material workflow can still rely on the user interface of Cinema 4D’s standard Material Editor, with the corresponding node material created automatically in the background. Node-based materials can be packaged into assets with user-defined parameters exposed in an interface similar to Cinema 4D’s classic Material Editor.
MoGraph Fields – New capabilities in this industry-leading procedural animation toolset offer an entirely new way to define the strength of effects by combining falloffs – from simple shapes to shaders or sounds and objects and formulas. Artists can layer Fields with standard mixing modes and remap their effects. Group multiple Fields and use them to control effectors, deformers, weights, and more.
CAD Data Import – Popular CAD formats can be directly and seamlessly imported into Cinema 4D R20 with a simple drag and drop. A unique scale-based tessellation interface allows for adjustment of detail to build amazing visualizations. STEP, Solidworks, JT, Catia V5 and IGES formats are supported.
Volume Modeling – Create complex models by adding or subtracting basic shapes in Boolean-type operations using Cinema 4D R20’s OpenVDB–based Volume Builder and Mesher. Procedurally build organic or hard-surface volumes using any Cinema 4D object including new Field objects. Volumes can be exported in sequenced .vdb format for use in any application or render engine that supports OpenVDB.
ProRender Enhancements – ProRender in Cinema 4D R20 extends the GPU-rendering toolset with key features including sub-surface scattering, motion blur and multi-passes. Also included are an updated ProRender core, support for Apple’s Metal 2 technology, out-of-core textures and other enhancements.
Core Technology Modernization – As part of the transition to a more modern core in Cinema 4D, R20 comes with substantial API enhancements, the new node framework, further development on the new modeling framework, and a new UI framework.
Pricing, Availability / Upgrade Path
All current MSA users with active agreements through September 2018 will get the R20 update automatically. Cinema 4D Release 20 is scheduled for availability in September 2018, for both macOS and Windows.
All other users with versions older than the current R19 can upgrade their existing package to R19 and receive a free MSA (12-month support agreement), ensuring they too will receive R20 on release.
About MAXON
Headquartered in Friedrichsdorf, Germany, MAXON Computer is a developer of professional 3D modeling, painting, animation and rendering solutions. Its award-winning Cinema 4D and BodyPaint 3D software products have been used extensively to help create everything from stunning visual effects in top feature films, TV shows and commercials, to cutting-edge game cinematics for AAA games, as well as for medical illustration, architectural and industrial design applications. MAXON has offices in Germany, USA, United Kingdom, Canada, France, Japan and Singapore. MAXON products are available directly from its website and its worldwide distribution network. MAXON is part of the Nemetschek Group.
We’re back with an update for RealFlow | Cinema 4D! We didn’t just fix bugs, but added lots of nice features as well.
These functions give you more possibilities, flexibility, and control. For special offers, see the Smarttec site.
A NEW VERTEX MAP APPROACH
Have you ever made use of the mesh engine’s → vertex maps to enhance your fluid renders? If so, you certainly know that vertex maps were previously limited to speed. But RealFlow’s → fluid and material solvers offer many more channels, so with this update you now have a wider choice: we’ve added vorticity, age, and weight maps.
Furthermore we’ve introduced a new, much more intuitive and artist-friendly workflow. In previous versions you had to deal with an abstract “Scale” parameter. Instead of guessing a value it’s now possible to adjust speed, age, and vorticity precisely through → ranges – or let RealFlow | Cinema 4D do the work with the new “Auto” mode.
The icing on the cake is that you can now evaluate the changes in Cinema 4D’s viewport, as you’re used to doing with native Cinema 4D vertex maps. This means that you no longer have to create preview renders to see the result of your settings. Truly a huge time saver.
And to give you an impression of how these vertex maps affect your fluids we’ve created some videos for you. This clip is a side-by-side comparison of the speed, vorticity, and age channels:
In this video you can see four differently coloured fluids. Weight maps are used to create the colour mixing effects in areas where the fluids touch and interact. To create softer colour transition we have applied the new → “Smoothing Length Scale” parameter to the mesh:
Updates to the maps’ ranges are applied automatically and displayed in the viewport, but changes to “Smoothing Length Scale” require the meshes to be recreated.
This neat helper has been added to ease the process of adjusting daemons. RealFlow | Cinema 4D’s new → “Visualizer” is able to make forces visible and even show how they evolve and change over time. Now you have full control over daemons and instant visual feedback.
You can choose between arrows, lines, and points – and you can also display these modes as streamlets. Streamlets trace the forces over a short timespan, resulting in a curved view of the force that gives you a sense of direction. One of the most interesting features is that the “Visualizer” also shows the combined result from multiple daemons. You can decide which daemons should be visualized together with simple drag and drop. Here we have a bounded → “Noise Field”, → “Vortex”, and an animated → “Attractor”:
The “Visualizer” works with the following force-based daemons: “Attractor”, “Gravity”, “DSpline”, “Noise Field”, “Vortex”, and “Wind”. For obvious reasons you can’t visualize k daemons or the “Filter”.
Another, very important, novelty is the introduction of time offsets for cached fluids. So far you haven’t been able to shift the start and end frames of your particle and mesh sequences, for example if you wanted to synchronize them with other animated assets in your scene. In many cases it was necessary to batch rename the files or do other fancy things. But those days are over now.
Furthermore, there’s a global → “Frame Offset” located in the “Scene” object.
The two offsets combine: total offset = node-specific offset + global frame offset
This way you’ll be able to shift simulation nodes freely and independently from each other, and define custom time offsets in both positive and negative directions.
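The offset rule above is simple enough to sketch in code. The function below is our own illustration of how the two offsets combine, not part of the RealFlow | Cinema 4D API, and the sign convention (a positive offset shifts the cached sequence later in the scene) is our assumption:

```python
def cache_frame(scene_frame, node_offset=0, global_offset=0):
    """Which cached simulation frame is read at a given scene frame.
    Total offset = node-specific offset + global frame offset, and
    both offsets may be positive or negative (hypothetical sketch)."""
    return scene_frame - (node_offset + global_offset)

# A mesh sequence shifted 10 frames later via its node offset, with the
# whole scene shifted another 5 frames via the global "Frame Offset":
print(cache_frame(40, node_offset=10, global_offset=5))  # 25
```

Because the two offsets simply add, each simulation node can be shifted independently while the global offset moves everything at once.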
(MANY) MORE IMPROVEMENTS
All in all we’ve improved the plugin’s overall robustness. In particular, → GPU simulations with a → “Filter” had been unstable under certain conditions – an issue that has now been fixed.
Another important fix: initial states are now kept when you remove a simulation’s cache files. This may sound like a side note, but in fact it saves you lots of time!
And since we’ve been jumping on the “visualization train” there’s another new function: the → “Image” emitter is now capable of showing the attached image/pattern in the viewport. This will help you to identify the areas of emission. Furthermore, this emitter now supports animated textures, for example Cinema 4D’s noise types.
And let’s not forget the → “Fill” emitter. This neat little helper not only makes it easier to fill your objects with particles; they can also be covered with a layer of particles.
The connection to Cinema 4D’s Thinking Particles module became much more robust, less RAM-intense, and got a new → workflow, making it easier to keep track of multiple particles/TP sources.
Our friends at → Jawset Visual Computing, the makers of TurbulenceFD, also surprised us with a neat feature: it’s no longer necessary to create Thinking Particles from RealFlow fluids in conjunction with TurbulenceFD. All you have to do is apply a “TurbulenceFD” emitter directly to a RealFlow emitter, fluid, rigid, or elastic container. This improvement requires at least version v1.0 Rev 1435.
Many emitters (“Circle”, “Image”, “Square”, and “Triangle”) provide a → “Volume” parameter. This option allows you to create a defined initial volume of particles. A new handle in the emitters’ viewport gizmo lets you define this volume simply by dragging the handle, but of course you can still use numerical values as well.
Finally, we’ve added a falloff to the → “DSpline” daemon.
Of course, we’ve also improved the plugin’s overall stability, and updated to the latest Dyverso library. Experienced users will be happy to hear that the new 2.5 functions can be highlighted in Cinema 4D’s user interface.
Those new to 3D will also learn how Cineware for Illustrator can enrich your 2D workflow for packaging design or when creating custom illustrations.
Dimitris demonstrates a complete workflow in Adobe Illustrator from importing a 3D object to adjusting camera angles and easily placing labels through to the final rendering.
RealFlow for Cinema 4D – Physics simulation and awesome motion fluid effects
A few years ago we started a partnership with MAXON, the creators of Cinema 4D, and in the meantime this connection has become a real friendship. You will find a MAXON booth at almost every important event, road show, or convention. And there is always a top-class lineup of international Cinema 4D artists giving workshops, showcasing their current work, and sharing incredibly useful tips.
For direct links to product details and local pricing in AUD$ go to our estore or call 02 99394000
The MAXON stage at IBC (courtesy of MAXON).
But it’s not just the artists, it’s also the friendly and highly professional attitude of the entire MAXON team: they really care, and they always have the best get-togethers and parties. This year, Peggy Beck and her team organized a 3-hour boat trip through the canals of Amsterdam. It must have been great, but of course I missed it, because I had a rather late flight and arrived in Amsterdam when the party was almost over. I guess that’s life… 🙂
My own presentation was the last one on Sunday from 5-6 pm. Despite this late time slot I was positively surprised how many visitors dropped in to watch the workshop. Another big plus is that the live feed brings the show to everyone who’s interested, but not able to be present. Finally, the MAXON team provides an edited recording and shares all videos on their YouTube channel.
The workshop’s topic was
RealFlow | Cinema 4D 2.0 – Multi-Physics in a Nutshell
Here I show you how to create and control complex interactions between RealFlow’s different solvers and material types. Furthermore, you will learn fundamental things about “Collider” and “Volume” tags, simulation settings, and the workflow for → rigid/elastic deformers.
If you’re interested in watching the recording just do it here or directly on YouTube where you will find the presentations of my fellow artists:
Having a good time together is important, but that’s just one aspect. All these shows mean that you’re present, help users with their doubts and questions, and gather feedback. It’s about connecting with real people. And it’s also about learning and sharing methods and knowledge. The feedback I get during these events is of particular importance for us and – at least I think so – for the customer. It definitely makes a difference whether you have the opportunity to talk with someone face to face or only via a communication channel. Sure, in most cases there’s no other way than using forums or social media, but a personal talk helps both sides understand each other much better.