Tuesday, January 26, 2016

Relighting Tool, position pass based



Without any doubt, this is the most complicated tool I have made so far.

This is the second part of the solution for the small particles in the lighthouse of the movie "The End". We wanted to illuminate the particles with the wall lighting. To avoid a huge 3D render time, I found a 2D solution.

I discovered BlinkScript last year, and I have to say that it gives Nuke a whole new power!

This gizmo relights an object using a luminance pass and a position pass instead of a point light. For that, it uses two Blink scripts: one for optimisation, one for the actual computation.

Basically, the gizmo compares each pixel of the target position pass to every pixel of the lighting position pass. If they are close enough, it relights the target object according to the distance and the amount of light in the area.

Let's go through it step by step:


The tolerance knob controls the black point of the Grade node, which lets the user choose the amount of light above which the relighting is active.

Here is the first Expression node. Its simple expression means:
if alpha < 0: alpha = 0; else: keep alpha as it is
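A minimal Python sketch of how those two nodes could be driven (the node names, the knob wiring and the tolerance value here are my assumptions, not the gizmo's actual internals):

import nuke

tolerance = 0.2                              # example value of the gizmo's tolerance knob
grade = nuke.toNode("Grade1")                # hypothetical Grade node
grade['blackpoint'].setValue(tolerance)      # everything below the tolerance is pushed to 0 or below
expr = nuke.toNode("Expression1")            # hypothetical Expression node
expr['expr3'].setValue("a < 0 ? 0 : a")      # alpha channel: clamp the negative values to 0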

The first Blink script


As I said, the gizmo compares each pixel of the target position pass to every pixel of the lighting position pass.
For an HD image, as we had for the movie, that means:
1280 * 720 = 921600
921600 pixels compared each time to 921600 pixels
So... 921600 * 921600 = 849,346,560,000 comparisons. That is way too many.

So I had to optimize things. First, the Reformat node downscales the lighting pass a bit: that helps.

Then, this Blink script checks every pixel once. It keeps, packed in order, only the ones whose alpha (the luminance) is not black.

Instead of having that:


this Blink script gives us that:


Because Blink scripts run multi-threaded, I used the alpha of the pixel (0,-1) as a mutex: the current write position in the image is saved in this pixel.

Here is the code:
kernel glow3D_Kernel : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessPoint, eEdgeClamped> src; //target position
  Image<eReadWrite, eAccessRandom> dst; //the output image (read+write, since it also holds the mutex pixel)

  //In define(), parameters can be given labels and default values.
param:
  int width;

  void define() {
    defineParam(width, "Width", 1280);
  }

  void process() { //this function is executed for every pixel
    if (src(3) != 0) { //if the alpha is not black
      int position = (int)dst(0, -1, 3); //get the pixel index where we should write, from the mutex pixel
      dst(0, -1, 3) = (float)(position + 1); //set the mutex pixel to the next pixel index
      int x = position % width; //get x and y from the pixel index
      int y = position / width;
      dst(x, y) = src(); //write the current pixel value at that packed position
    }
  }
};
The color is still the position pass, and the alpha is still the luminance. But the pixels are now packed in order, so we no longer have to check the whole image for each pixel.

The second Blink script


This is where everything happens. This Blink script takes two inputs: the light position pass and the target object position pass.

The script runs through the target object image; for each pixel, if the alpha is not 0, it goes through the light position pass.

For each pixel in the light position pass, it checks whether the pixel is in the zone defined by the user. If it is, it weights the luminance value by the distance and the decay set by the user.

The result is added to the light amount of the target object pixel and the denominator is incremented.
The denominator is simply the number of pixels affecting the target pixel.

After running through the light position pass, the accumulated luminance is divided by the denominator, and the value is written into the target object pixel.

Here is the code:
kernel glow3D_Kernel : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessPoint, eEdgeClamped> posDst; //target position
  Image<eRead, eAccessRandom, eEdgeClamped> posRef; //reference (light) position pass
  Image<eWrite> dst; //the output image

param:
  float size; //size of the glow
  float decay;
  int width;

  //In define(), parameters can be given labels and default values.
  void define() {
    defineParam(size, "Size", 1.0f);
    defineParam(decay, "Decay", 1.0f);
    defineParam(width, "Width", 1280);
  }

  //The init() function is run before any calls to process().
  void init() {
  }

  //This function computes the 3D distance between two points (thanks, Pythagoras :) )
  float distance(float ref0, float targ0, float ref1, float targ1, float ref2, float targ2) {
    float dist = sqrt((targ0-ref0)*(targ0-ref0) + (targ1-ref1)*(targ1-ref1) + (targ2-ref2)*(targ2-ref2));
    return dist;
  }

  void process(int2 pos) {
    if (posDst(3) != 0) { //if alpha is not 0
      float tempDist;
      int denominator = 0;
      float lightAmount = 0;
      float weight;
      int i = 0;
      int x = i % width;
      int y = i / width;
      while (posRef(x, y, 3) != 0) { //this loop runs through the light pass, but stops when alpha == 0
        tempDist = distance(posRef(x, y, 0), posDst(0), posRef(x, y, 1), posDst(1), posRef(x, y, 2), posDst(2)); //get the distance between the two pixels
        if (tempDist < size) { //if the pixel is in the area
          weight = 1 - (tempDist / size); //compute the weight from the distance
          weight = pow(weight, decay); //apply the decay
          lightAmount += (posRef(x, y, 3) * weight); //add this light contribution to the light amount
          denominator++; //one more pixel affecting the target pixel
        }
        i += 1;
        x = i % width;
        y = i / width;
      }
      if (denominator != 0) {
        dst() = lightAmount / denominator; //divide the light amount by the number of contributing pixels
      }
      else {
        dst() = 0;
      }
    }
    else {
      dst() = 0;
    }
  }
};
This way, the light is calculated according to the distance, the light amount and the decay, and it gives really smooth results.

Here is a result of the particle system used with this gizmo:

Since this method doesn't compute the pixels that are outside the field of view, I overscanned the 3D renders to get better results.

I hope this was clear; if you have any questions, please ask! :)

Light particle system


http://www.nukepedia.com/gizmos/particles/partsystem

I did this gizmo for the movie "The End".

We needed small particles in the lighthouse; they had to react to the light of the wall and be fast to render, because it's a 40-second shot.
This is the trick I found.

This gizmo is heavily inspired by the TX_Fog gizmo by Tomas Lefebvre:
http://www.nukepedia.com/gizmos/3d/tx_fog

Actually, it's nothing more than a lot of cards with a lot of dots.



A card is distorted by a DisplaceGeo node, which is animated by a noise.
Then this card is fed into as many TransformGeo nodes as the user wants, which all have an offset chosen by the user.

The Scene node gathers the moved cards, and a final TransformGeo moves the whole system.

Here is the code that sits behind the reload button and creates all the cards:
nuke.thisNode().begin() #get into the gizmo
scene = nuke.toNode("Scene1")
ctrl = nuke.toNode("ctrl")
displace = nuke.toNode("DisplaceGeo1")

#delete the previous transform geos connected to the scene
for i in range(scene.inputs()):
    if scene.input(i):
        nuke.delete(scene.input(i))
    scene.setInput(i, None)

card = nuke.toNode("Card1")
xpos = card['xpos'].value()
ypos = card['ypos'].value()
transformList = []

#create the transform geo nodes
#ctrl['nbCard'] = number of cards asked by the user

for i in range(int(ctrl['nbCard'].value())):
    offset = str(i)
    transform = nuke.createNode("TransformGeo", inpanel=False)
    transform['xpos'].setValue(xpos + 200*i)
    transform['ypos'].setValue(ypos + 150)

    #set the values of the transform geo, with the offset
    for a in [0, 1, 2]:
        for b in ['translate', 'rotate']:
            transform[b].setExpression("ctrl." + b + "." + str(a) + "*" + offset, a)
    transform.setInput(0, displace)
    transformList.append(transform)


#reconnect the scene
i = 0
for n in transformList:
    scene.setInput(i, n)
    i += 1

In the TransformGeo, the expression ends up like this:
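For the fourth card (i = 3), for example, the expression generated on the first translate value is ctrl.translate.0*3, so each card is offset a little more than the previous one.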


Once again, if you want to know more about this gizmo, please ask!

Tools in bulk

I'll drop some tools here without much explanation; there is not much to say about them, but they can be useful!

Fake occlusion:


A little gizmo that reproduces something like an occlusion pass from two position passes:


http://www.nukepedia.com/gizmos/3d/fake-occlusion


Smooth cam:

A little script to smooth a camera using an integral.
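A minimal sketch of the idea, assuming the smoothing is a running average over a window of frames (a discrete integral divided by the window length):

def smooth(values, radius=2):
    # Average each sample over a window of (2*radius + 1) frames.
    out = []
    for i in range(len(values)):
        window = values[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out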


Multi Nuke-script render:

A useful script that lets you render several Write nodes, even if they are not from the same Nuke script.
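A rough Python sketch of the idea (the script paths below are hypothetical placeholders):

import nuke

scripts = ["/path/to/shot_a_comp.nk", "/path/to/shot_b_comp.nk"]  # hypothetical paths
for path in scripts:
    nuke.scriptOpen(path)
    first = int(nuke.root()['first_frame'].value())
    last = int(nuke.root()['last_frame'].value())
    for write in nuke.allNodes("Write"):
        nuke.execute(write, first, last)
    nuke.scriptClear()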

Sunday, January 24, 2016

Convert Normal Pass Between World and Camera



http://www.nukepedia.com/gizmos/3d/cfakepathconvert_normal

Let's talk about normals!

Normals are vectors that indicate a face's direction relative to a reference (the camera or the world).

To change this reference from the camera to the world, or the other way around, we have to use a rotation matrix.

For that, a ColorMatrix node filled with the first three columns of the camera's world matrix does the job pretty well!
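As a minimal sketch of what that ColorMatrix does (plain Python, and the row/column convention is my assumption): converting a world-space normal to camera space is just three dot products with the camera axes.

def world_to_camera(normal, cam_axes):
    # cam_axes: the camera's X, Y and Z axes expressed in world space
    # (the first three columns of the camera world matrix).
    # Each camera-space component is the dot product of the world-space
    # normal with the corresponding camera axis.
    return [sum(axis[k] * normal[k] for k in range(3)) for axis in cam_axes]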

But there is one catch: this method doesn't take the focal length into account!
So here is my trick:

I use three cameras with three different orientations: X, Y and Z:


These three cameras shoot a sphere (parented to the camera) to get the reference normals for each axis. This way, the reference takes the focal length into consideration.

Then, for each axis, I compute a scalar product with a MergeExpression node between the original normal pass and the reference normal pass I got from shooting the sphere.
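A small Python sketch of such a node (the setup details are assumptions; the expression itself is just the dot product between the A and B inputs):

import nuke

# One MergeExpression per axis: A is the original normal pass,
# B is the reference normal pass rendered through the oriented camera.
dot = nuke.createNode("MergeExpression", inpanel=False)
for knob in ("expr0", "expr1", "expr2"):
    dot[knob].setValue("Ar*Br + Ag*Bg + Ab*Bb")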


I copy the three results into a new pass, and I get the converted normals.

For the facing ratio, it's simply the Z (the blue channel) of the camera-space normals. :)

Sunday, November 29, 2015

Unlimited Camera Transition



http://www.nukepedia.com/gizmos/3d/fusioncam

This is actually the first interesting script I made in Nuke.
I wanted to improve the common expression used to make a camera transition:

(1 - sliderValue)*Camera1.translate + sliderValue*Camera2.translate

In order to use it with an unlimited number of cameras, I modified a Switch node: an easy way to get an unlimited number of input branches.

Then the expression goes like this (gizmo is the created Switch node):

(1 - (sliderValue - int(sliderValue))) * gizmo.input(int(sliderValue)).translate +
(sliderValue - int(sliderValue)) * gizmo.input(int(sliderValue) + 1).translate

Subtracting the integer part of sliderValue conforms it between 0 and 1; the expression then works the same, but instead of Camera1 and Camera2 we have Camera(int(sliderValue)) and Camera(int(sliderValue) + 1). For example, with sliderValue = 1.25, the blend is 75% of input 1 and 25% of input 2.

This expression doesn't work when sliderValue == int(sliderValue), so I had to add an exception:
if sliderValue == int(sliderValue): gizmo.input(int(sliderValue)).translate

I'm not using the real Nuke TCL syntax here, just explaining the logic. You can find the full expression in the gizmo.
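As a rough Python sketch of the same blend logic (plain math over a hypothetical list of translate values, not the gizmo's actual TCL):

def blend_translate(cameras, slider):
    # cameras: list of [x, y, z] translate values, one per input camera
    i = int(slider)
    if slider == i:               # the exception: slider exactly on a camera
        return cameras[i]
    t = slider - i                # fractional part, between 0 and 1
    return [(1 - t) * a + t * b for a, b in zip(cameras[i], cameras[i + 1])]

# Example: slider = 1.25 gives 75% of camera index 1 and 25% of camera index 2.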

If you have any trouble installing, using or understanding this tool, please ask. I'll be very happy to help!

Saturday, November 28, 2015

Hello world

      Since this summer, I've been thinking that I could create a blog referencing the tools and other things I've created. This weekend, I decided that I must. If those tools have been useful to me, they can be useful to someone else. And, most of all, it makes no sense to try to improve something without sharing what you've done.

      I'm Simon Moreau, born in 1992. My first contact with visual effects was when I was about 14 years old, with After Effects, lightsabers and making shitty movies with my brothers. You know what I mean!

      When I realized that I was able to spend hours on these more or less useless things without getting tired, I understood that it could be my job.

      I did my studies at ArtFX in Montpellier, where I specialized in compositing, and made this amazing movie with a crazy team:

      Here is the link to the making-of: https://vimeo.com/132258695

      Visual effects are magic tricks: as magicians, we have to make the audience think that what they see is real. But in reality, we made it with cardboard and Scotch tape. That's the fun of it! :)


      I'm clearly more interested in the way of making something than in the result. It's like preferring building the model to playing with the Lego! So I've been digging into how all those things we use work, in order to use them more wisely and, sometimes, to improve them.


      I learned scripting and everything related to it by myself, thanks to the internet. Or should I say, thanks to the incredible number of people sharing their knowledge. Thank you, humanity.

      I'm now working as a Pipeline TD and compositor at Automatik VFX in Berlin.

      Here are my demo reels, if you want to take a look: