

You’ve probably heard about blend modes available in image and video editing tools like Photoshop or After Effects. Blending is an important technique for content creation and has long since become an integral part of these tools. Say you need Color Dodge blending for a particle system. Or your UI artist made beautiful assets in Photoshop, but some of them use the Soft Light blend mode. Or maybe you’ve wanted to create some weird Lynch-esque effect with Divide blending and apply it to a 3D mesh. In this article I will describe the mechanics behind popular blend modes and try to simulate their effect inside the Unity game engine. In case you are looking for a complete, ready-made solution for blend mode effects in Unity, try this package on the Asset Store.

Blending algorithms

For starters, let’s define what exactly we need to do. Consider two graphic elements, where one overlaps the other. This is an example of the Normal blend mode: each pixel of the lower layer (a) is completely overridden by the pixels of the layer that covers it (b). It is the most trivial case possible; most objects “blend” like this. In the Screen blend mode, by contrast, the colors of both layers are inverted, multiplied, and then inverted again. Let’s implement this algorithm in the Cg programming language (a sketch of the Screen, Overlay and Darken functions follows this section). Note that we pass the alpha value of the upper layer (b.a) to the alpha of the resulting color (r.a), so that the opacity of the material can still be controlled. The Overlay algorithm is conditional: for the “dark” areas we multiply the colors, while for the “light” ones we use an expression similar to Screen. In Darken mode the RGB components of the layers are compared and only the “darkest” of each is kept in the resulting color. The other modes are implemented in similar ways, and describing all of them would be boring, so (in case you are interested) I’ve put implementations of the 18 remaining blend modes in Cg here: /Elringus/d21c8b0f87616ede9014. Thus, in general terms, the problem can be summarized as follows: for each pixel of the object (b), find the pixel drawn “behind” it (a) and blend their colors using the selected algorithm.
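The original code listings are not reproduced in this excerpt, so here is a minimal sketch of how the Screen, Overlay and Darken formulas described above can be written as Cg helper functions. The function names and the exact structure are assumptions; the formulas and the b.a-to-r.a convention follow the text.

// Sketch: "a" is the lower (background) layer, "b" is the upper layer.
// b.a is carried into the result so the material's opacity stays controllable.

fixed4 Screen (fixed4 a, fixed4 b)
{
    // Invert both layers, multiply, then invert again.
    fixed4 r = 1.0 - (1.0 - a) * (1.0 - b);
    r.a = b.a;
    return r;
}

fixed4 Overlay (fixed4 a, fixed4 b)
{
    // Multiply for the "dark" areas of the lower layer,
    // a Screen-like expression for the "light" ones.
    fixed4 r = a < 0.5 ? 2.0 * a * b : 1.0 - 2.0 * (1.0 - a) * (1.0 - b);
    r.a = b.a;
    return r;
}

fixed4 Darken (fixed4 a, fixed4 b)
{
    // Keep the darker of the two values per RGB component.
    fixed4 r = min(a, b);
    r.a = b.a;
    return r;
}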

Having all the blending algorithms implemented, it may seem that only the trivial part of the work is left: getting “a”, the color of the pixels located “behind” our object. However, this step proved to be the most problematic. To get that “a” layer we need to access the frame buffer, and we need to do it inside the fragment shader in order to perform the blending per pixel. At the same time, the logic of the rendering pipeline won’t allow us to do this directly: the final image (which contains the data we need) is formed only after the fragment shader has executed, so we can’t access it from the Cg program. In fact, the need for the final image data within the fragment shader occurs quite often. Most post-processing effects, for example, are unthinkable without access to the final image inside the fragment function. For such cases there is the so-called render-to-texture technique: the data from the frame buffer is copied into a special texture, which is exposed for reading on a subsequent fragment shader run. There are several ways to work with render textures in Unity; in our case, the most appropriate is GrabPass. GrabPass is a special pass type: it grabs the contents of the screen where the object is about to be drawn into a texture, and this texture can then be used in subsequent passes for advanced image-based effects. Let’s create a simple shader for UI graphics, add a GrabPass to it, and return the result of blending the colors with the Darken algorithm; a sketch of such a GrabDarken.shader is given below.
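The GrabDarken.shader source itself is missing from this excerpt, so the following is only a minimal sketch of what such a shader could look like, assuming a simple transparent UI setup: a GrabPass captures the backdrop into _GrabTexture, and the fragment function samples it at the current screen position and applies the Darken formula. The shader path "UI/GrabDarken" and the variable names other than _MainTex and _GrabTexture are illustrative.

Shader "UI/GrabDarken"
{
    Properties
    {
        _MainTex ("Sprite Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }

        // Grab what is already drawn behind the object into _GrabTexture.
        GrabPass { }

        Pass
        {
            Blend SrcAlpha OneMinusSrcAlpha

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            sampler2D _GrabTexture;

            struct v2f
            {
                float4 pos     : SV_POSITION;
                float2 uv      : TEXCOORD0;
                float4 grabPos : TEXCOORD1;
            };

            v2f vert (appdata_full v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.texcoord;
                // Screen-space coordinates used to sample the grabbed backdrop.
                o.grabPos = ComputeGrabScreenPos(o.pos);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 b = tex2D(_MainTex, i.uv);                                 // upper layer: our object
                fixed4 a = tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(i.grabPos));  // lower layer: the backdrop
                fixed4 r = min(a, b);                                             // Darken: keep the darker components
                r.a = b.a;                                                        // keep the object's opacity
                return r;
            }
            ENDCG
        }
    }
}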

To evaluate the result, we will use the same textures we used in Photoshop when demonstrating the blend algorithms. As you can see, the results in Unity and in Photoshop are visually identical. Render to texture is, however, a rather “heavy” operation. Even on an average PC, using more than a hundred such effects leads to a noticeable frame-rate drop, and the situation is aggravated by the fact that GrabPass performance falls as the display resolution grows. In my case, even a couple of UI objects using GrabPass in an otherwise empty scene dropped the frame rate below 20 FPS; imagine what the performance would be if we ran such a procedure on an iPad with an ultra-HD display. One optimization suggests itself: why not use a single, unified GrabPass? The original image remains unchanged within a frame, so it should be possible to “grab” it once and then reuse the data for all subsequent blend operations. Unity allows us to implement this plan in a convenient way.
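This excerpt ends here, so the following is only a hedged illustration of the mechanism Unity’s documentation offers for sharing a grab, not necessarily the exact approach the article goes on to use: a GrabPass given an explicit texture name grabs the screen only once per frame for the first object that uses that name, and other shaders can reuse the result. The name _SharedGrabTexture is an illustrative placeholder.

// An anonymous GrabPass { } re-grabs the screen for every object that uses it.
// A named grab is shared: Unity performs it only once per frame for the first
// object using the given texture name; other materials can reuse that texture.
GrabPass { "_SharedGrabTexture" }

// Inside CGPROGRAM the shared texture is then declared like any other sampler:
sampler2D _SharedGrabTexture;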
