Visualizing UVs and UV maps

Ripped right from Wikipedia: “UV mapping is the 3D modeling process of projecting a 2D image to a 3D model’s surface.”

If you’re familiar with 3D modeling, then UV maps might be second nature to you: they let you give the objects you create texture, they let you manipulate the way light interacts with your objects, and in some cases they let you bake lighting right in to save render time. Some programs (my favorite being Blender) even let you displace geometry in a similar manner to this tutorial I posted a while ago.

While you can do UV mapping in your 3D modeling software and import the result into TouchDesigner, circumstances may arise where you need to dynamically alter your UVs to achieve the right effect.

While 3D modeling programs have built fancy front ends and developed ways to streamline the process of perfectly setting your UVs, this process can be somewhat frustrating in TouchDesigner. In this tutorial I hope to shed some light on how TouchDesigner handles UVs and how to get control over your textures.

What is a UV anyway?
A UV coordinate is a normalized set of numbers (each between 0 and 1) that looks up a pixel in a 2D image. “U” corresponds to the x component of the image, while “V” is the y component. UV coordinates start from the bottom-left corner of the image at (0,0); the exact middle point of any image is (0.5, 0.5), and the top-right corner is (1,1). There are even 3D texture images that utilize a third component, W, which is a topic for another day.
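To make the lookup concrete, here is a minimal sketch in plain Python (not TouchDesigner code) of how a normalized UV pair maps to a pixel index; the function name is just for illustration:

```python
def uv_to_pixel(u, v, width, height):
    """Map a normalized UV pair to integer pixel indices.

    (0, 0) is the bottom-left corner of the image and (1, 1) the
    top-right, so v scales along the image height from the bottom up.
    """
    # Clamp to the last pixel so u = 1 or v = 1 stays inside the image.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return x, y

# The exact middle (0.5, 0.5) of a 1920x1080 image:
print(uv_to_pixel(0.5, 0.5, 1920, 1080))  # (960, 540)
```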

A good way to grasp UV is by looking at how the CPU thinks about geometry.

Build a simple network with a Box SOP and a SOP to DAT. Drag the Box SOP into the SOP to DAT and the table will populate with rows and columns.

[Image: SopData]

Every row contains information about a single point of the Box. “Hey wait, cubes only have eight corners. Why are there 24 rows of information?” That’s a great question, voice in my head. Let’s break the box down into what defines it. There are 6 surfaces, and each surface has 4 corners. 4 × 6 = 24, meaning that each corner of the Box is actually made out of 3 points, one for each surface that meets there. If you scan the SOP to DAT you will actually see rows where the position columns (denoted as P(#)) are the same.
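The arithmetic is easy to check outside TouchDesigner. Here is a quick sketch in plain Python (names are purely illustrative) that builds each face of a unit cube from its corner positions and shows the same 8-versus-24 count:

```python
from itertools import product

# The 8 unique corner positions of a unit cube.
corners = list(product((0, 1), repeat=3))

# One face per axis/side pair: the 4 corners lying on that side.
# A corner shared by 3 faces gets stored once per face it belongs to.
faces = [
    [c for c in corners if c[axis] == side]
    for axis in range(3)
    for side in (0, 1)
]

points_per_face = [len(face) for face in faces]
total_points = sum(points_per_face)
print(len(corners), total_points)  # 8 unique corners, 24 stored points
```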

We can see how these points are arranged as they correspond to the surfaces by changing the “Extract” parameter in the SOP to DAT to “Primitives.” Primitives are the polygon components that make up a 3D object.

[Image: Primitives]
Now we see a completely different picture: six rows, each containing information about a primitive. In the “vertices” column we can see the points that build each surface, and “close” refers to whether the loop is closed or not. However, this is not where UV is denoted. For that you have to change the “Extract” parameter again, to “Vertices.”

[Image: Vertices]

This view gives you some more useful information. It shows you which primitive each point belongs to in the “index” column, the index of the point within the primitive in “vindex”, and then the all-important UV data. As you can see, we have normalized numbers showing the horizontal element [uv(0)], the vertical element [uv(1)], and the depth element [uv(2)].

We can easily make a network to visualize this arrangement.

Start by creating an additional SOP to DAT and pointing it to the box you have already created. Set its “Extract” parameter to “Primitives.” You should now have one DAT that reads out the vertices and one that reads out the primitives.

[Image: VerticesAndPrim]

Next, take two Select DATs and connect one to each SOP to DAT. Select only the UV columns from the vertices table, and only the vertices and close columns from the primitives table. Do not include the column names.
[Image: selectedVertsPrims]

Throw an Add SOP into your network. This is where we will rebuild the UVs into a shape we can see. Drag the vertices DAT into the “Points Table” parameter of the Add SOP and the primitives DAT into the “Polygons Table” parameter. Pull the Add SOP into a render network, give the GEO a wireframe material, and you should see a representation of the UV coordinates as a flat grid along the XY axis.

[Image: UVrenderNetwork]
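If you want a feel for the shape of those two tables, here is a hedged sketch in plain Python (not TouchDesigner code, and the data is illustrative) of what a single face contributes: the points table reuses the UV values as XYZ positions, and the polygons table lists point indices plus a close flag:

```python
# Points table: one row per vertex, with the UV coordinates standing in
# for XYZ positions (uv(0), uv(1), uv(2) become X, Y, Z).
points = [
    (0.0, 0.0, 0.0),  # bottom-left corner of the face in UV space
    (1.0, 0.0, 0.0),  # bottom-right
    (1.0, 1.0, 0.0),  # top-right
    (0.0, 1.0, 0.0),  # top-left
]

# Polygons table: one row per primitive, listing the indices of its
# points in order, plus a close flag so the wireframe loop is drawn shut.
polygons = [
    ("0 1 2 3", 1),
]

# Sanity check: every index a polygon references must exist as a point.
for row, close in polygons:
    assert all(0 <= int(i) < len(points) for i in row.split())
```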

Each section of this grid is a side of the box. If you replace the box with a different SOP, you can see how the other standard 3D meshes’ UVs are arranged.

We can take this a step further and see the image we are mapping by creating another GEO with a 1×1 grid rendered inside it. Make sure to transform the center of the grid so that it is +0.5 in X and +0.5 in Y; this will ensure that the grid is aligned with the UV rendering we have already made.

In order to put an image on that grid, create a Constant MAT and a Movie File In TOP (I grabbed the box map image because it works well with the box). Apply the Movie File In TOP to the Constant MAT as a color map. Throw a camera and a Render TOP into the mix and you can see how the UV coordinates align with the image texture.

[Image: UVAligned]

You can see this even more clearly if you create another render network that applies the texture to the SOP you are working with, like I did below.

[Image: WholeNetwork]
