Houdini VOP raytracer part 2

Posted on August 15, 2013

[Image: render01]

A critical issue in image synthesis is determining the correct colour of each pixel. One way of finding that colour is to average the colours of the light rays that strike that pixel.

But how do we find those light rays, and what colours are they?

Forward tracing and backward tracing

We can think of a light ray as the straight path followed by a light particle (called a photon) as it travels through space. In the physical world, a photon traveling from a light source can have its frequency changed when it collides with an object. When a photon enters our eye, receptors on our retina recognise the different frequencies and the brain translates them into colours. Colours can also add, both on film and in the eye: for example, if a photon of red frequency and a green photon both arrive at our eye simultaneously, we will perceive the sum of the colours: yellow.

[Image: photons]

If we consider a pixel on the image plane, which of the photons in a 3D scene actually contribute to that pixel?
Photons leave the light source and bounce around the scene; some of them get absorbed, some get reflected. Only photons that eventually hit the screen and then pass into our eye contribute to the image.

[Image: photonsForward]

This technique describes an approximation of how the real world works. When we follow the path of each photon from its origin (the light) into the scene, we call it forward ray tracing.
You might think that this would be a good way of creating a picture, and you would be correct. But this technique would require simulating millions of photons per second with different frequencies. Many of those photons would hit objects that we would never see, others would just pass right through the scene, and only a small portion would eventually hit our image plane and then our eye.

The key insight for computational efficiency is to reverse the problem by following photons backwards. Considering a point on the image plane, we can easily find the path followed by a photon that hit that point: it is the line joining that point and our eye. We call it a ray. Knowing the position in the scene from which our photon bounced, we can continue tracking its path back to its origin. We call this process backward ray tracing. By doing so, we efficiently analyse only the selection of rays that actually make a difference to the light radiated in the direction we care about.
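
Before we build this with nodes, here is a minimal VEX sketch of the whole idea for a single pixel sample (a hypothetical Attribute Wrangle version; the eye_pos parameter and the scene being wired into the second input are my assumptions, not part of the provided scene):

    // Backward ray tracing for one pixel sample (hypothetical sketch).
    vector eye = chv("eye_pos");             // eye position (assumed parameter)
    vector dir = normalize(@P - eye) * 50;   // ray from the eye through this pixel
    vector hitpos;                           // filled in by intersect()
    vector hituv;                            // parametric uv of the hit
    int prim = intersect(1, @P, dir, hitpos, hituv);  // scene on input index 1
    if (prim >= 0)
        @Cd = primuv(1, "Cd", prim, hituv);  // colour at the exact hit point
    else
        @Cd = {0, 0, 0};                     // the ray hit nothing: background

The rest of this post builds roughly this logic, one VOP node at a time.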

[Image: raytraceH]

Houdini Scene

At this point, we can start building our Houdini scene so we can easily follow the next steps.

Download: screenStart.hipnc

First, we prepare our image plane display. My idea was to use the polygons of a simple grid as “pixels”. In the provided scene, I have prepared an “EYE” object with grid geometry that we will use as the image plane. You can change its size, position and resolution at any time.

[Image: eyeParms]

In the “SCENE” object you will find all the necessary data, together with the grid image plane already imported for you. Unfortunately, a VOP SOP works only on points, and if we used the grid object directly, each polygon's colour would be interpolated between its points.

[Image: polypoint]

To solve this problem, we can create one point per polygon and use it as a “pixel sample”. First, we need to create the same number of points as desired “pixels” (for this I have used a Line SOP), and we will position them inside a VOP SOP.

[Image: pixel]
Let's jump inside the VOP SOP and start setting it up. To position our “pixel samples” inside each “pixel”, we will use a Primitive Attribute VOP node to evaluate a UV position on the polygon we are interested in. As the “file”, we specify the VOP SOP input that carries the grid; remember that opinputpath input indices start from 0 (0-3, not 1-4), so the index 1 below refers to the second input:

op:`opinputpath(".", 1)`

I won't be covering Houdini expressions in detail, but you can read about them in the manual [opinputpath]. Next, we connect the point number to the primitive number input, as we have exactly the same number of points (“pixel samples”) as primitives (“pixels”). The “u” and “v” inputs are coordinates on each primitive; we want our points in the centre, so just enter 0.5 in both parameters. The output of the Primitive Attribute VOP node is the world position at the specified UV coordinate of each primitive. Let's call it “new_P” and connect it to the P output.
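
If you prefer to read the network as code, this step amounts to roughly the following VEX (a sketch; the grid is assumed on input index 1, as wired above):

    // Place each point ("pixel sample") at the centre of its polygon.
    int prim = @ptnum;               // one point per primitive, same numbering
    vector uv = set(0.5, 0.5, 0.0);  // u = v = 0.5: the centre of the primitive
    @P = primuv(1, "P", prim, uv);   // world position on the grid polygon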

[Image: screenSetup]

You should now get nicely spaced points in the centre of each polygon.
Next, we will create a vector for our ray: the vector going from the eye through the evaluated pixel. We need the eye position; the easiest way is to create a new vector parameter and fetch the eye position at its origin point:

X: point("../EYE_POS", 0, "P", 0)
Y: point("../EYE_POS", 0, "P", 1)
Z: point("../EYE_POS", 0, "P", 2)

After subtracting the eye position from new_P, we get the direction vector of our ray.

TIP: Remember that subtraction order matters. The wrong order will give the opposite vector!
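
In VEX terms this is a one-liner (a sketch; the op: path to EYE_POS is an assumption, adjust it to wherever that node lives in your scene):

    // Direction from the eye through this pixel sample. Order matters:
    // new_P - eye, not eye - new_P, or the ray points the wrong way.
    vector eye = point("op:/obj/SCENE/EYE_POS", "P", 0);  // path assumed
    vector raydir = @P - eye;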

[Images: vopRayDir, rayDir]

First Ray

Let's build the ray itself. For this purpose, we are going to use the Intersect VOP node. It casts a ray from an origin point in a desired direction and returns the exact position of the hit point. Again, as the “file” input, provide a geometry parameter with the expression

op:`opinputpath(".", 2)`

This points to the third input of our VOP SOP node (index 2): our scene geometry, which the Intersect VOP will collide with. Our points (“pixels”) will be the ray origin, and the ray direction is what we calculated above.

Now we know where our ray might eventually hit something. The Intersect VOP also provides the number of the primitive that was hit, together with the exact UV coordinate, so again we can use a Primitive Attribute VOP node to evaluate it. As the “file”, connect the same geometry parameter we used for the Intersect VOP, and connect the prim, u and v inputs to the corresponding outputs of the Intersect VOP. The “Attribute” parameter is what we want to evaluate at that exact hit point; for now, we will use “Cd” (colour). Connect the “adata” output of the Primitive Attribute VOP to the “Cd” output node.
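
As VEX, the Intersect plus Primitive Attribute pair corresponds to something like this (a sketch; the scene geometry is assumed on input index 2, and raydir comes from the subtraction above):

    // Cast the ray and evaluate the colour at the exact hit location.
    vector hitpos;                   // world position of the hit
    vector hituv;                    // parametric uv of the hit
    int hitprim = intersect(2, @P, raydir, hitpos, hituv);
    @Cd = primuv(2, "Cd", hitprim, hituv);  // beware: hitprim can be -1!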

[Image: buildRay]

The output might be a little different than expected. In my case, the background is pink and the scene geometry seems to be cut off. What happened?

[Image: firstRay]

If we check the “prim” output of the Intersect VOP, we will notice a lot of negative [-1] values. This is a debug output informing us that there are places where the ray of the Intersect VOP hasn't found anything (there was no direct collision with any geometry). The Primitive Attribute VOP node reads the [-1] as primitive number [0], because we cannot have negative primitive numbers; primitive [0] happens to be the pink sphere in my case.

Another problem is that our scene is not fully visible; it seems to be cut off. As we can read in the Intersect VOP node documentation:

Intersect will only check for intersections within the maximum length of the “Ray Direction” – [0, length(raydir)].


It means that our rayDir vector might be too short. We can fix both of these problems fairly easily. First, let's extend the ray length. After subtracting the eye position from new_P, add a Normalize node; this makes the length of our vector 1.0. Then multiply it by a spare parameter, which I have called ‘Clipping Far Point’. If we multiply our vector by 50, it will have a length of 50, and so on.
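
The VEX equivalent of the fix (a sketch; the channel name clip_far is an assumption):

    // Make the direction unit length, then scale it to the far clipping
    // distance so intersect() searches the whole range [0, clip_far].
    float clipfar = chf("clip_far");  // the 'Clipping Far Point' parameter
    vector raydir = normalize(@P - eye) * clipfar;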

[Image: rayLength]

To find the rays which haven't hit any surface, we will use an if statement (a Switch node in this case). Create a Compare VOP node and connect its first input to the “prim” output of the Intersect VOP. We will test whether the input value is equal (==) to [-1]: if it is, the node returns True (1); if not, False (0). Create a Switch node and connect the output of the Compare node to the switch index. Input 1 is our False (0) branch, where the ray did not return [-1], so connect the “Cd” output of the Primitive Attribute there. Input 2 is the True branch, where the primitive number from the Intersect VOP equals [-1]; create a spare colour parameter, call it Background Colour, and connect it there.
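
In VEX the whole Compare/Switch construct collapses to a single conditional (a sketch; the channel name background_colour is an assumption):

    // Misses return hitprim == -1: give those rays the background colour.
    vector bg = chv("background_colour");  // the 'Background Colour' parameter
    @Cd = (hitprim == -1) ? bg : primuv(2, "Cd", hitprim, hituv);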

[Image: background]

[Image: renderParameters]

Now we should see the full-colour scene with a black background, as shown below (the animation and the ray are added only for visualisation purposes).

[Image: render01]





