💡 Tips & Optimizations
Tips to improve rendering speed and quality
Anti-Aliasing
Anti-aliasing is supported by Render To Disk and is recommended for best results; however, the available anti-aliasing options depend on which render pipeline you are using. See the related Unity documentation for configuring anti-aliasing.
Hard Drive Speed
The biggest impact on render time is usually disk speed. Render to the fastest drive available, such as an internal solid-state drive. Because most of the rendering time is spent saving files, a fast drive can offer a significant improvement.
EXR, TGA, and PNG files are larger than JPEG, so they take longer to save to disk and require considerably more free space. For previews of work in progress, JPEG is the best choice.
Memory Buffers
Another factor that affects render time is the number of frame buffer swaps and the amount of memory being moved each frame. When rendering to large formats, especially when using cubemaps and/or multiple render textures, this can add overhead that would not occur during normal playback. Besides lowering the render settings, make sure your computer has ample memory and, if need be, upgrade to faster RAM.
Graphics Card
In most cases the GPU is not the cause of slow rendering, but it does contribute to render time. As a general rule, for both maximum quality and performance, the highest-tier graphics card you can afford will allow you to render more, better, and faster.
Do your research to find a card suitable for your rendering needs, and make sure you have a power supply and motherboard that can handle the performance demands of rendering.
Heavy Scenes and Low Frame Rate
If your scene contains huge assets, your render settings are maxed out, or you're using many render features, the scene may play at a very low frame rate. This is fine as long as it doesn't exhaust GPU memory and cause a crash. Rendering will output correctly, albeit somewhat slower. This is good news for those more focused on final quality than on real-time playback.
Rendering Alpha Channels
An alpha channel is supported by the PNG, TGA, and EXR formats (but not JPEG). This renders unpremultiplied RGB with an alpha channel. Note that the alpha is created by the render pipeline, and you may need to adjust shaders and settings for transparent objects to render correctly. Alpha channels work best for solid geometry.
Bake Audio Reactive Behaviors
Any Audio Reactive behaviors in the scene must be baked before rendering, because audio cannot play back normally while rendering frame by frame.
If audio-driven animations are not baked, they will not respond to audio as expected in the final render. Other simulated effects, such as physics, do not have this requirement and should perform as expected.
Multi-Eye Rendering
Certain features, such as screen-space rendering effects, some shaders, and post-processing filters, may not be supported with stereoscopic or cubemap rendering setups. This can result in anomalies between the left and right eye, or noticeable seams along the edges of a cubemap.
To avoid issues, test early and settle on rendering features that work with multi-eye rendering and cubemaps. Also refer to Unity's documentation for single-pass and multi-pass rendering.
Fullscreen Post Processing Effects
When rendering with cubemaps (VR 360 or fulldome), post-processing effects such as bloom will likely cause seams and other artifacts across the cubemap faces. This is because each face is rendered separately and the effects are applied in screen space before being stitched together into the cubemap. Render features that cause such anomalies must be avoided; however, there is a workaround that requires a bit more setup.
Please see the scene RenderFulldomePostProcess for a working example.
Overlay Camera Capture As Final
To apply post processing and other effects to the whole fulldome or equirectangular image, a separate camera may be used for the final render.
This requires some extra steps in the setup and rendering:
The main camera must output to a render texture, matching the final output size and using the maximum bit depth. Floating point is recommended to best preserve HDR colors and lighting.
The main camera can use post processing, but nothing that adversely affects cubemaps, such as bloom. The main camera then prerenders the fulldome image to the render texture assigned to the camera, which is then ready for the next pass.
A second camera (assigned as the Overlay Camera with Capture As Final) renders the render texture (computed in the previous step) mapped onto an unlit quad matching 1:1 to the camera view. The quad and camera should be on a separate layer so that it renders separately from the main camera and scene.
The second camera applies post processing to the final rendered image captured for output. Note that this camera should be set to 'Base', not 'Overlay' mode (in URP).
Render features such as shadows and light probes can be disabled for the quad and final camera as a further optimization.
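The two-camera setup described above can be sketched in a small script. This is a minimal, hypothetical example: the field names (finalCamera, screenQuad), the "Overlay" layer name, and the square output size are assumptions for illustration, not part of Render To Disk itself.

```csharp
using UnityEngine;

// Hypothetical setup sketch for the Overlay Camera / Capture As Final workflow.
// The main camera prerenders the fulldome image to a floating-point render
// texture; a second (Base mode in URP) camera views that texture on an unlit
// quad and applies the full-frame post processing.
public class OverlayCaptureSetup : MonoBehaviour
{
    public Camera mainCamera;      // renders the fulldome/equirectangular image
    public Camera finalCamera;     // Base mode camera captured as final output
    public Renderer screenQuad;    // unlit quad matching finalCamera's view 1:1
    public int outputSize = 4096;  // should match the final output resolution

    void Start()
    {
        // Floating-point format best preserves HDR colors and lighting.
        var rt = new RenderTexture(outputSize, outputSize, 24,
                                   RenderTextureFormat.ARGBFloat);
        rt.Create();

        mainCamera.targetTexture = rt;        // first pass renders to the texture
        screenQuad.material.mainTexture = rt; // quad displays the prerendered image

        // Keep the quad and final camera on their own layer so they render
        // separately from the main camera and scene. "Overlay" is an assumed
        // layer name that would need to exist in the project's layer settings.
        int overlayLayer = LayerMask.NameToLayer("Overlay");
        screenQuad.gameObject.layer = overlayLayer;
        finalCamera.cullingMask = 1 << overlayLayer;
    }
}
```

In an actual project these assignments can also be made manually in the Inspector; the script only illustrates how the pieces connect.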
Also note that these setups vary slightly when using HDRP or another render pipeline.
Avoiding Cubemap Seams
There are multiple reasons why seams may appear in cubemap rendering (for fulldome, 360, or fisheye formats). The first step is to identify and isolate each item or effect causing seams in the render. In most cases, one of the following solutions will provide a workaround. However, particularly with third-party add-ons, there may be situations with no solution, and you may need to find a different approach altogether.
Solution #1: For post processing, shaders, or any effects applied in screen space, the only viable alternative is to defer these effects to a secondary post-processing pass or, if possible, to change them to be bound to world space.
Some effects, such as bloom and vignette, simply cannot be used when rendering to cubemaps without causing noticeable seams.
See the example camera rigs that use post processing. There are limitations to this approach, so it may not solve every problem, and you may need to seek an alternative to achieve the desired effect.
Solution #2: Culling is another reason that objects may not appear on all sides of the cubemap, particularly with particles and certain effects. This can be solved in some cases by disabling Occlusion Culling on the camera.
TIP #1: When working with VFX (Visual Effect Graph), select the graph in the Project view and, in the Inspector, change the culling option to "Always recompute and simulate". You may also need to set the bounds size of the Initialize Particle module to a very large value, such as 1000, 1000, 1000.
TIP #2: Avoid using Camera Fade or particles that face the camera plane. It's fine for particles to face the camera's world position, but anything oriented along the camera plane or to screen space will appear boxed in the cubemap.