Houdini VOP raytracer part 4
Posted on September 3, 2013
Spatial Aliasing
So far, we have been sending a single ray from the centre of each pixel. You may have noticed that this gives us very chunky edges. You might think that increasing the number of pixels would eliminate the problem, but it won't: we can make the pixels smaller, yet the jaggies on the edges will persist. Additionally, if a scene object is too small or too far away, it might fall in between rays and we will never register its existence. This problem is called spatial aliasing. There are a few ways of fixing it, but we will focus on one popular technique.
Stochastic supersampling
Supersampling is one of the spatial anti-aliasing methods. The idea behind it is to create a few instances of our ray, position each one randomly within the pixel (instead of always at the centre, as we have done so far) and average the resulting colour values. This random supersampling pattern is also known as stochastic sampling. It avoids the regularity of grid supersampling; however, due to the irregularity of the pattern, samples can end up redundant in some areas of the pixel and lacking in others.
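To make the idea concrete before we start wiring nodes, here is a minimal VEX sketch of a single stochastic sample. It is just an illustration of what the network will do, not a node from our setup; "sample_index" is an illustrative counter.

```vex
// A single stochastic sample: a random offset inside the pixel
// replaces the fixed centre (u = v = 0.5).
int    sample_index = 0;           // illustrative: index of the current sample
vector r = rand(sample_index);     // three random values in [0, 1)
float  u = r.x;                    // jittered U on the pixel primitive
float  v = r.y;                    // jittered V (the third component is unused)
```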
In order to apply this technique, we will have to make a few modifications to our existing VOP nodes. So far we were working with a single point and a single ray. Now we will create multiple ray instances, let the user specify how many are needed, and apply the same calculations for each one of our rays.
Create a new For Loop node inside our VOP renderer and cut/paste into it all of our nodes except the input parameters and globals. Additionally, create a spare integer parameter, call it "samples" and connect it to "end", the blue input of the For Loop node. It determines the number of samples to calculate, from 1 up to the value the user provides in the newly created spare parameter ("samples"). Connect the rest of the spare parameters to the "next" input of the For Loop node and reconnect the nodes inside accordingly. As we are going to sum the colour values of all the rays in a pixel, we need to create a constant vector parameter (0,0,0) that we will use to accumulate them (see picture below). Name it "Initial Color". To make life easier, make sure all your spare parameters have proper labels.
Now we are able to create multiple ray instances per pixel, but each instance would have the same position on the primitive – U: 0.5, V: 0.5. To randomise those values, create a Random VOP node, connect it to the _I sub-input of the For Loop node and change its signature to "1D Integer input, 3D Vector". That gives us a random value per ray in all three axes, but we only need two of them, so create a Vector to Float VOP node and connect it to the "prim_P_attrib" Primitive Attribute VOP node as shown below.
UPDATE: At this point our samples are randomised per iteration, so every pixel gets the same seed and therefore the same jitter pattern. To randomise positions per pixel, we need to bring in the global point number and multiply it by the iteration value just before plugging it into the Random VOP node.
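In VEX terms, the fix looks roughly like the sketch below. Note that I use "@ptnum * samples + i" here, a slight variation on the multiply described above, which guarantees a unique seed for every sample of every pixel:

```vex
int samples = chi("samples");   // our "samples" spare parameter

for (int i = 0; i < samples; i++)
{
    // iteration alone would repeat the same jitter on every pixel;
    // mixing in the point number makes the seed unique per pixel
    vector r = rand(@ptnum * samples + i);
    float  u = r.x;             // jittered U for this ray
    float  v = r.y;             // jittered V for this ray
}
```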
To sum the colour values, create an Add VOP node, add the "Initial Color" sub-input to the shadowed colour we have been using as the output so far, and connect the result to the "Initial Color" sub-output.
We are almost done. If you tried to render with the setup as it stands, you would notice that the image turns brighter and brighter with each additional ray instance. To fix this, we need to average the summed colours: simply divide the "Initial Color" output of the For Loop node by the number of samples set in the spare parameter and connect the calculated value to the Cd global output.
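Put together, the whole supersampling network behaves like this wrangle-style sketch. Here "trace_scene()" is a hypothetical stand-in for everything our existing nodes compute for a single ray:

```vex
int    samples = chi("samples");   // spare parameter, e.g. 6
vector accum   = {0, 0, 0};        // the "Initial Color" accumulator

for (int i = 0; i < samples; i++)
{
    vector r = rand(@ptnum * samples + i);   // per-pixel, per-sample seed
    accum += trace_scene(r.x, r.y);          // hypothetical: colour of one jittered ray
}

// averaging prevents the image from brightening with each extra sample
@Cd = accum / samples;
```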
Set the number of samples to a value higher than 1 (e.g. 6). You should get an anti-aliased picture as in the example below (right).
Adding a diffuse model
In the first part of this tutorial I covered the diffuse reflection model. Now is a good time to apply it to our VOP renderer. We already know that diffuse is calculated as the dot product of the incoming light vector and the surface normal. To calculate the incoming light vector, simply subtract the light position from "last_HIT_pos" (the last hit position). As we are only interested in its direction, add a Normalize VOP node to unify the vector's length to 1.0.
TIP: The order of subtraction is crucial as it defines the vector's direction. A different order will lead to wrong results.
Add a Dot Product node and connect the light direction vector we just calculated and the normalised "prim_N_attrib" (primitive normal vector). As I explained in the first part of this tutorial, the dot product gives values between -1 and 1, which results in negative values in shadowed areas. To fix that, add a Clamp VOP node and set its Min value to 0 and Max value to 1. The last step is to multiply our raw colour by the calculated diffuse value. Connect a Multiply VOP node between the raw colour output and switch2, as in the example below.
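The whole diffuse term boils down to a few lines of VEX. This sketch uses the conventional surface-to-light direction (light position minus hit position), so keep the subtraction order consistent with your network; "light_pos", "hit_pos", "N" and "raw_colour" stand in for the corresponding values in our node setup:

```vex
// Lambert diffuse: how directly the surface faces the light.
vector L       = normalize(light_pos - hit_pos);  // surface-to-light direction
float  diffuse = clamp(dot(N, L), 0.0, 1.0);      // negative (shadowed) values clamped to 0
vector Cd      = raw_colour * diffuse;            // darken the raw colour by the diffuse term
```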
If everything went well, you should see a nicely diffused image.