Ian Shelanskey

Visualizing UVs and UV maps

Ripped right from Wikipedia: “UV mapping is the 3D modeling process of projecting a 2D image to a 3D model’s surface.”

If you’re familiar with 3D modeling, then UV maps might be second nature to you: they let you give the objects you create texture. They let you manipulate the way light interacts with your objects, and in some cases they let you bake lighting right in to save render time. Some programs (my favorite being Blender) even let you displace geometry in a similar manner to this tutorial I posted a while ago.

While you can do UV mapping in your 3D modeling software and import it to TouchDesigner, circumstances may arise where you need to dynamically alter your UVs to achieve the right effect.

While 3D modeling programs have built fancy front ends and streamlined the process of setting your UVs perfectly, this process can be somewhat frustrating in TouchDesigner. In this tutorial I hope to shed some light on how TouchDesigner handles UVs and how to get control over your textures.

What is a UV anyway?
A UV is a normalized set of numbers (each between 0 and 1) used to look up pixels in a 2D image. “U” corresponds to the x component of the image, while “V” is the y component. UV coordinates start from the bottom-left corner of the image at (0,0); the exact middle point of any image is (0.5, 0.5), and the top-right corner is (1,1). There are even 3D textures that use a third component, W, but that is a topic for another day.
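
To make the lookup concrete, here is a tiny plain-Python sketch of how a UV pair maps to a pixel position; the 1024 x 1024 resolution is just an example:

# Illustration only: map a normalized UV pair to a pixel position
# in an image of a given resolution.
def uv_to_pixel(u, v, width, height):
    # u and v are expected in the 0-1 range; (0, 0) is the bottom-left pixel.
    x = round(u * (width - 1))
    y = round(v * (height - 1))
    return x, y

print(uv_to_pixel(0.5, 0.5, 1024, 1024))   # (512, 512), the middle of the image
print(uv_to_pixel(1.0, 1.0, 1024, 1024))   # (1023, 1023), the top-right pixel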

A good way to grasp UV is by looking at how the CPU thinks about geometry.

Build a simple network with a Box SOP and a SOP to DAT. Drag the Box SOP into the SOP to DAT and the table will populate with columns and rows.

[Image: SopData]

Every row contains information about a single point of the Box. “Hey wait, cubes only have eight corners. Why are there 24 rows of information?” That’s a great question, voice in my head. Let’s break the box down into what defines it. There are 6 surfaces, and each surface has 4 corner points. 4 x 6 = 24, meaning each corner of the Box is actually made up of 3 coincident points, one for each face that meets there. If you scan the SOP to DAT you will actually see rows where the position columns (denoted P(0), P(1), P(2)) are the same.
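
To make that arithmetic concrete, here is a small standalone Python sketch (the coordinates are generated here, not read from the DAT) showing how 6 faces of 4 corner points each give 24 rows that collapse back down to 8 unique positions:

from itertools import product

# The 8 unique corners of a unit cube centered at the origin.
corners = list(product((-0.5, 0.5), repeat=3))

# Each of the 6 faces re-lists the 4 corners lying on that face, giving 6 * 4 = 24 points.
faces = [
    [c for c in corners if c[axis] == side]
    for axis in range(3)
    for side in (-0.5, 0.5)
]
points = [p for face in faces for p in face]

print(len(points))         # 24 rows, like the SOP to DAT
print(len(set(points)))    # 8 unique positions, the geometric corners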

We can see how these points are arranged as they correspond to the surfaces by changing the Extract parameter in the SOP to DAT to “Primitives”. Primitives are the polygon components that make up a 3D object.

[Image: Vertexs]
Now we see a completely different picture: six rows, each containing information about a primitive. In the vertices column we can see the points that build each surface, and the close column tells us whether the loop is closed or not. However, this is not where UV is stored. For that you have to change the “Extract” parameter again, this time to “Vertices.”

[Image: Vertices]

This view gives you some more useful information. It shows you which primitive each point belongs to in the “index” column, the index of the point within the primitive in “vindex”, and then the all-important UV data. As you can see, we have normalized numbers showing the horizontal element [uv(0)], the vertical element [uv(1)], and the depth element [uv(2)].

We can easily make a network to visualize this arrangement.

Start by creating an additional SOP to DAT and pointing it at the box you have already created. Set its “Extract” parameter to “Primitives”. You should now have one DAT that reads out the vertices and one that reads out the primitives.

[Image: VerticesAndPrim]

Next take two Select DATs and connect one to each SOP to DAT. Select only the uv columns from the vertices DAT and only the vertices and close columns from the primitives DAT. Do not include the column names.
[Image: selectedVertsPrims]

Throw an Add SOP into your network. This is where we will rebuild the UVs into a shape we can see. Drag the vertices DAT into the Points Table parameter of the Add SOP and the primitives DAT into the Polygons Table. Pull the Add SOP into a render network, give the GEO a wireframe material, and you should see a representation of the UV coordinates as a flat grid along the XY axis.

[Image: UVrenderNetwork]
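
For reference, here is an illustration (not values copied from the network) of the kind of rows those two DATs feed the Add SOP, shown for a single square face; the exact close value will be whatever your primitives DAT reports:

# Illustration only: the shape of the data driving the Add SOP.
# Points table: one row per vertex, with uv(0), uv(1), uv(2) standing in for X, Y, Z.
points_rows = [
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0],
]

# Polygons table: one row per primitive, holding the space-separated point indices
# from the vertices column and the value from the close column.
polygon_rows = [
    ['0 1 2 3', 1],
]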

Each section of this grid is a side of the box. If you replace the box with a different SOP, you can see how the other standard 3D meshes’ UVs are arranged.

We can take this a step further and see the image we are mapping to by creating another GEO with a 1×1 grid rendered inside it. Make sure to translate the center of the grid by +0.5 in X and +0.5 in Y; since the grid starts out centered at the origin and spans -0.5 to +0.5, this shift makes it cover 0 to 1 and lines it up with the UV rendering we have already made.

In order to put an image on that grid, create a Constant MAT and a Movie File In TOP (I grabbed the box map image because it works well with the box). Apply the Movie File In TOP to the Constant MAT as a color map. Throw a camera and a Render TOP into the mix and you can see how the UV coordinates align with the image texture.

[Images: ImageUValigned, UVAligned]

You can see this even more clearly if you create another render network that applies the texture to the SOP you are working with, like I did below.

[Image: WholeNetwork]

Forgive the dust.

Hi All,
I’ve switched my hosting around so my site is a bit more stable, and in the process some of my tutorial content has been lost. I am doing my best to get it all back up, but it’s slow going. If you have specific questions about any tutorial here, please contact me at ishelanskey@gmail.com.

Thanks!

3D Content Part 3: Simulating Physics in Blender to TouchDesigner

This tutorial will help you leverage Blender’s physics engine to create complex 3D animations quickly. It will also walk through programmatic solutions for texturing and playing back those animations.

The tutorial uses 3D assets from my previous tutorial on camera mapping which you can find here: http://www.design.ianshelanskey.com/technology/3d-content-part-1-camera-mapping/

3D Content Part 2: UV mapping and Importing Animations to TouchDesigner

Here is a useful workflow for creating complex, timeline-based animations in Blender 2.7 and importing them into TouchDesigner.

This tutorial will take you through UV mapping and editing to easily line up 2D graphics with their 3D counterparts, then show how to generate realtime textures for the 3D geometry.

Here is a link to the Blender file used in the tutorial.

3D Content Part 1: Camera Mapping

Hey all. If you’re like me and don’t have enough money for a LIDAR scanner but need to make precision 3D models for projection mapping or content creation, here’s a relatively quick and cost-effective method.
All you need is a decent camera and a 3D modeling program like Blender. For this tutorial I am using my cell phone camera and Blender 2.74.

Start out by finding a good angle of the set or object you will be projecting onto. This method is a lot easier if you are working with rectilinear objects. Try to find an angle where you can see as many sides of the object as possible (top, side, and front are best). Take the photo with your camera as parallel to the ground as possible; this will make it easier to align in the virtual environment. It’s generally best if you can be as close to the angle of the projector as possible.

[Image: IMAG0280]

Next, take your image into your 3D modeling environment and set it to show up in the background of your camera. In Blender there is an option to add a background image at the bottom of the right-side toolbar. I find it easier to set the image to the “Front” option and adjust the opacity like so.

Add a ground plane, wall, or cube to compare the perspective of the image to that of the camera. I find it works better if I add an empty for the camera to look at, then align elements I know to be square with the virtual elements I placed. In this case I am concerned with how the converging lines at the corner of the wall line up with the virtual representation. If I can get these to align perfectly, I can reproduce the rest of the image easily. This part takes time and patience. I recommend finding the focal length of the camera you are using and setting the virtual camera to match.
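
If you would rather set the focal length from Blender’s Python console than from the camera panel, a minimal sketch looks like this; the camera name and both numbers are placeholders for your own values:

import bpy

# Assumes the default camera object is named "Camera"; swap in your own name.
cam = bpy.data.objects['Camera'].data

cam.lens = 28.0          # focal length in millimetres (placeholder; use your camera's value)
cam.sensor_width = 4.6   # sensor width in millimetres, if you know it for your camera (placeholder)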

Once the reference elements are in place, it’s time to start placing the other elements. You should not touch your camera alignment unless something looks very off.

I’ve added a simple cube that is going to become one of the boxes in the image. Tab into Edit Mode and move the origin of the box to one of its corners; this will make it easier to align with the image. Place that corner of the box on the matching corner in the image and rotate until the edges align, then scale each side appropriately, referencing the image.

Once you have everything lined up and looking right, you’re done! You can now start UV mapping and planning out some interesting visuals.

For more information on Camera Mapping, check out this tutorial.

Animating Dictionaries in TouchDesigner

I came across the need to animate CHOPs based on a cued sequence. I have found that the Animation CHOP is a little difficult to work with quickly in a theatre tech-week scenario, so I created a way of cuing parameters with Python dictionaries and storage.

A dictionary in Python is a data structure that works like an array or list, but instead of referring to things by index, it refers to things by key. Matthew Ragan has a fantastic video about storing lists and dictionaries here: http://matthewragan.com/2015/03/31/thp-494-598-python-lists-touchdesigner/
For more information refer to this documentation: https://docs.python.org/2/tutorial/datastructures.html
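
As a quick plain-Python comparison (the names below are only illustrative):

# A list refers to items by position (index)...
panel_positions = [0.0, 0.25, 1.0]
print(panel_positions[1])    # 0.25

# ...while a dictionary refers to items by key.
cue = {'cue_number': 5, 'time': 3.0, 'panel1_x': 0.25}
print(cue['panel1_x'])       # 0.25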

All of the recording takes place in this network.

[Image: cue_Comp]

CHOP values are converted into a table, and then this script stores the values as a dictionary. The key for each stored dictionary corresponds to the cue it’s recorded under. The positions are stored using a for loop like Matt showed in his video. A stored dictionary looks like this:

[Image: Examine_Storage]
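
The actual record script lives in the network above, but a minimal sketch of the idea in TouchDesigner Python might look like this; the operator names, the key format, and where the dictionary gets stored are all assumptions:

# Sketch only: grab the current channel values from a CHOP and store them
# as a dictionary keyed by cue number. Operator names are placeholders.
vals = op('null_positions')                       # CHOP holding the current panel positions
cue_num = int(op('cue_counter')['chan1'].eval())  # whatever drives your current cue number

cue = {}
for c in vals.chans():      # one entry per channel: name -> current value
    cue[c.name] = c.eval()
cue['time'] = 3.0           # transition time for this cue

parent().store('cue' + str(cue_num), cue)         # keep it in this COMP's storage (or wherever you prefer)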

I can fetch any of these dictionaries at any time to transition between them. The “GO” button drives a count that keeps track of where we currently are in the cue stack. When this count changes, it sets the value of a table to the cue we want to call. This table update triggers a DAT Execute, which runs this script:

[Image: AB_Controller]

This controls the AB switcher that runs the animation. I made a module for unpacking dictionaries into a table, called “cues.unpackCue()”. Inside a Base called local, and another Base inside it called modules, I made a Text DAT to hold this module.

[Image: Unpack_Cues]

There are three arguments passed into this function: num, to, and t. “num” is the cue number you want to unpack, “to” is the table you want to unpack it into, and “t” sets the transition time pulled from the dictionary.
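
My actual module isn’t reproduced here, but under the assumption that the cues were stored as dictionaries (as sketched above), a rough version of unpackCue might look like this:

# Sketch only: num is the cue number, to is the table DAT to fill,
# and t is assumed to be a Constant CHOP whose first value drives the transition time.
def unpackCue(num, to, t):
    cue = parent().fetch('cue' + str(num))   # fetch from wherever the record script stored it

    to.clear()
    for name, value in cue.items():
        if name == 'time':
            t.par.value0 = value             # load the transition time
        else:
            to.appendRow([name, value])      # one row per channel: name, target value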

This all gets unpacked at the AB switcher, into the table that wasn’t used last (the one that would be up if we were playing). Once a cue is unpacked, the time is loaded and the switch flips, so a speed increments or decrements and in turn drives a crossfade.

Recording all of this happens in this script, which runs when the record button is hit.

[Image: StoreScript]

TouchDesigner Previs – The Hour We Knew Nothing of Each Other

Part of my work on ASU’s production of The Hour We Knew Nothing of Each Other involved making some previs software. I wanted to be able to work with the scenic designer and the director to devise a specific portion of the show called the ‘Panel Dance’ and explore what the media might look like during it.

I made a mock-up of ASU’s Galvin Playhouse stage in Vectorworks, then exported it as an .FBX over to TouchDesigner. This allowed me to record different panel placements with the scenic designer and start to work through some ideas I had for projection. All of us could then collaborate on what would be most interesting for each scene.

I developed this simple interface to move the panels easily and record their positions.

[Image: Interface]

All of the recording takes place in this container. CHOP values are converted into a table, and then a script stores the values in a dictionary as a “cue”.

Cues are then called up by an AB switcher controller to this portion of the network.

[Image: cue_Comp]

These values drive the geometry in the render network. Lights in TouchDesigner have a parameter called Projector Map, which allows you to simulate projectors in the render network. I could then test out content before getting into the space by setting the output of my playback system as the projector map. This setup allowed the director, the scenic designer, and me to build parts of the show before getting into the space, which proved super helpful in the overall process.

[Image: View]

Twitter in TouchDesigner

EDIT: This is now way easier to do from within TouchDesigner using the Threaded Web COMP here: http://derivative.ca/Forum/viewtopic.php?f=22&t=9164. Tutorial to come soon.

OH BOY! This one is going to be exciting!

Somewhere in my meanderings in TouchDesigner, I decided it was a good idea to try this. Following my mantra of “How hard can it be?”, I set out to give it a shot.

I recently posted this project that uses the method I am going to show you.
http://www.design.ianshelanskey.com/uncategorized/touchdesigner-twitter-live-feed-asu-emerge-hud/

Once the information is in TouchDesigner you can use it for just about anything; it’s getting it there that becomes a headache. That’s the part of the problem I will focus on in this tutorial.

First things first: head over to GitHub and download Python Twitter Tools (https://github.com/sixohsix/twitter). This is a Python wrapper for Twitter’s Application Programming Interface (API), which handles how software interacts with Twitter and can do things like post tweets, search for hashtags, and pull tweets from other users’ timelines.

Extract the files from the ZIP to your working folder. Mine is on my desktop called twitterTest.

I like to work in the command line, but most of this should also be doable through a GUI.

In CMD navigate to the project folder and find a file called setup.py:

chdir Desktop/twitterTest
chdir twitter-master

Once you have found it, install it using this command:

python setup.py install

You should see a bunch of text scroll by saying “Installing…”. Once this has completed, test that the module installed by typing into CMD:

python
import twitter

If you don’t get an error, you’ve installed the Python Twitter API! Now the real fun begins.

I’m sorry to say, but you will need a Twitter account in order to use this, so start there. Once you have made a new account, you need to become a Twitter developer. This is super easy: head over to https://apps.twitter.com/, sign in, and click Create App. You should be met with this screen:

[Image: TwitterCreateApp]

Fill out the form and agree to the terms.

The next page will have information we need in order to access twitter. Keep it open.

Open up your favorite text editor and let’s make some Twitter magic happen.

In the Text editor paste the following:

from twitter import *
import os

# Run the OAuth dance once and cache the credentials locally.
MY_TWITTER_CREDS = os.path.expanduser('~/.my_app_credentials')
if not os.path.exists(MY_TWITTER_CREDS):
    oauth_dance("$$NAME_OF_APP$$", '$$OAUTH_KEY$$', '$$OAUTH_SECRET$$', MY_TWITTER_CREDS)

oauth_token, oauth_secret = read_token_file(MY_TWITTER_CREDS)

twitter = Twitter(auth=OAuth(oauth_token, oauth_secret, '$$OAUTH_KEY$$', '$$OAUTH_SECRET$$'))

# Pull your home timeline and dump the first tweet to a text file.
t = twitter.statuses.home_timeline()
log = open('tweets.txt', 'w')

log.write(str(t[0]))

log.close()

Replace the $$...$$ placeholders with the information from the Twitter Developer page, and we should be all set to be authenticated by Twitter.
Save the program as a ‘.py’ file, jump back into the command line, find it, and run:

python twitterTest.py

A small script will run, and then a window will pop up asking you to sign into Twitter again. Do so, and then copy the pin they give you into the command line, where it is asking for the pin. Hit enter and you should be all authenticated. If you look in your project folder, there will be a new .txt file called ‘tweets’. Open it and you will see the mess you’ve gotten yourself into.

[Image: UnparsedTweet]

This is one tweet. It’s one big dictionary that holds data and several smaller dictionaries inside it. I went ahead and parsed through it to make it a bit easier to read:

[Image: ParsedTweet]

Here you can see the structure behind what is happening. Each piece of data has a key associated with it. In order to find the text of the tweet, for instance, you need to change the code in the program to this:

from twitter import *
import os

MY_TWITTER_CREDS = os.path.expanduser('~/.my_app_credentials')
if not os.path.exists(MY_TWITTER_CREDS):
    oauth_dance("$$NAME_OF_APP$$", '$$OAUTH_KEY$$', '$$OAUTH_SECRET$$', MY_TWITTER_CREDS)

oauth_token, oauth_secret = read_token_file(MY_TWITTER_CREDS)

twitter = Twitter(auth=OAuth(oauth_token, oauth_secret, '$$OAUTH_KEY$$', '$$OAUTH_SECRET$$'))

t = twitter.statuses.home_timeline()
log = open('tweets.txt', 'w')

log.write(str(t[0]['text']))

log.close()

This will create a dictionary out of the first tweet (index 0) in your timeline, then look for the data associated with the key ‘text’. If you run this program you should get just the text from the last tweet on your timeline. You can run a for loop that iterates through the last 100 tweets by changing the program to this:


 

from twitter import *
import os

MY_TWITTER_CREDS = os.path.expanduser('~/.my_app_credentials')
if not os.path.exists(MY_TWITTER_CREDS):
    oauth_dance("$$NAME_OF_APP$$", '$$OAUTH_KEY$$', '$$OAUTH_SECRET$$', MY_TWITTER_CREDS)

oauth_token, oauth_secret = read_token_file(MY_TWITTER_CREDS)

twitter = Twitter(auth=OAuth(oauth_token, oauth_secret, '$$OAUTH_KEY$$', '$$OAUTH_SECRET$$'))

t = twitter.statuses.home_timeline()
log = open('tweets.txt', 'w')

# Write the text of every tweet in the timeline, one after another.
for i in range(len(t)):
    log.write(str(t[i]['text']))

log.close()

Some information in the tweet is stored as another dictionary within the larger dictionary. Hashtags work that way. The hashtag list lives in a dictionary called ‘entities’, which also holds the keys ‘user_mentions’ and ‘urls’. If a tweet has multiple hashtags and you want the text from all of them, you must write a for loop that iterates through them.


hashtags = t[0]['entities']['hashtags']

for i in range(len(hashtags)):
    print(hashtags[i]['text'])

Run the script, and you should see all of the hashtags print out in the console.

This is great, but how do I make cool visuals out of them in TouchDesigner?!

As of the date of this post, I have had issues with how TouchDesigner communicates over the network socket. It seems to glitch and drop frames for a brief moment while it cooks, then loads the data you wanted. To get around this, I ran a Python program from the command line that writes the information I need into a .TXT file, then imported that file into Touch. Here is the documentation for how to do that: https://docs.python.org/2/tutorial/inputoutput.html
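
As a rough sketch of what that external program can look like, reusing the authenticated twitter object from the scripts above (the file name and polling interval are placeholder choices):

import time

# Assumes 'twitter' has already been authenticated exactly as in the scripts above.
while True:
    t = twitter.statuses.home_timeline()

    # Rewrite the whole file each pass; Touch re-reads it when Refresh is on.
    log = open('tweets.txt', 'w')
    for tweet in t:
        log.write(str(tweet['text']) + '\n')
    log.close()

    time.sleep(60)   # wait a minute between requests to stay under Twitter's rate limits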

In the parameters of the DAT that reads that file, turn on Refresh, and now whenever the file is saved Touch will reload it.

Awesome: now you have a separate program pulling new data from the internet for you while Touch keeps chugging along at its normal pace. All that’s left is to structure the data in a way that’s easily manageable in Touch.

If your data is a 1D array, with one piece of data per index, the best way to do this is to just separate each line you write in your .TXT file with ‘\n’. ‘\n’ is an end-of-line (EOL) character commonly used in text files and data structures. If you add it to the end of your ‘.write()’ statement, it will create a new line that you can read in Touch.

log.write(str(t[0]['text']) + '\n')

The file loads up to look something like this.

[Image: TdTweets]

From here you can convert to a table and replicate, clone, and texture instance to your heart’s content.

[Image: Tweets_replicated]

But suppose you have more than a 1D array. Maybe you want a username, the date, and the tweet text per index. To do this we are going to take advantage of a parameter on the Convert DAT: “Split Cells at”. This parameter lets you look for a string in a text DAT and convert it into a table where each cell is split at that string. Generally when I use this technique I use ‘|’, because you don’t normally see that character in data. So in the Python Twitter program, your ‘.write()’ statement looks like this:


log.write(str(t[i]['created_at']) + '|' + str(t[i]['favorite_count']) + '|' + str(t[i]['text']) + '\n')

(Protip: use the same idea with the Substitute DAT in Touch to have scripts adapt to new information without the hassle of figuring out where the quotes go, or converting data types. I use this technique in my GOTO Cue example here: http://www.design.ianshelanskey.com/uncategorized/building-tools-to-make-programming-easier-goto-cue )

After passing the ‘filein’ to a Convert DAT with the ‘Split Cells at’ parameter set to ‘|’, we get this:

[Image: 4D_Tweet]

Now each column is a different piece of data. This makes it super easy to give instances and replicated objects different parameters.

From here, the world is your oyster. You can use this data in tons of different ways. Here I made a data visualization of NASA’s last 100 tweets and ordered them by date. The size of each circle corresponds to the number of favorites the post got, and the color corresponds to the time it was posted.

[Image: NasaTweets demo]

I used this exact same method of retrieving information that Touch would stall over for this Voice Recognition project:
http://www.design.ianshelanskey.com/technology/topology-of-words-fall-14-ame-530/
