Ian Shelanskey

Tracking People in TouchDesigner with OpenPTrack and Point Clouds

In any interactive installation environment there are plenty of ways to get user input and data that will drive your visuals and give the audience a feeling of connectedness with your installation. One of the more interesting approaches I have used recently is tracking people and their movement around the space.

This concept is not new by any means and has taken many different forms over the years (blob tracking, motion capture, BlackTrax). However, these systems can be WAY too expensive, a pain to set up, or unable to give you reliable data.

OpenPTrack is open-source person-tracking software that runs on a distributed network of depth cameras. The system constructs a streaming point cloud and then segments out the forms of people so it can track them around the space. Then, like most tracking systems, it sends the data out onto the network for other programs to use. Go check out their website to learn more.

I worked with them to develop two TouchDesigner components for taking in the streaming data. You can find them in my GitHub repo. One is a TOX file which streams in the data and parses it with Python. The other is a C++ CHOP DLL, which works much faster than the TOX when tracking a lot of people (20+).
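To give a sense of what the Python side involves: OpenPTrack streams its tracking data as JSON packets over the network, which you can catch in TouchDesigner with a UDP In DAT callback. Here is a minimal sketch – the field names (tracks, id, x, y) and the 'track_positions' table are assumptions for illustration; the actual TOX on GitHub is more complete:

import json

# Callback for a UDP In DAT -- fires once per packet received.
def onReceive(dat, rowIndex, message, bytes, peer):
	packet = json.loads(message)                # OpenPTrack streams JSON over UDP
	table = op('track_positions')               # hypothetical Table DAT for results
	table.clear()
	for track in packet.get('tracks', []):      # one entry per tracked person
		# field names assumed from OpenPTrack's output format
		table.appendRow([track['id'], track['x'], track['y']])
	return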

 

City Infrastructure Game – Town Preview

Changing tack a bit: I am going to create an interactive game using this tracking system. Unity is a perfect platform to build it in, and I have plenty of resources to make the game engaging.

I am commandeering another project I am working on in Unity, an infrastructure management game – albeit stripping out a lot of the more complicated modeling and math, and focusing on just maintaining power infrastructure.

The gameplay is simple: you are the power manager for the city. Your job is to make sure everyone has power and that all of your infrastructure is maintained. As time passes, core elements of the power grid start to fail, and you must go fix them by standing on them. If power goes out in part of your city, public opinion drops and you earn points more slowly. The object of the game is to earn the highest score before the timer runs out.
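To make the rules concrete, here is the scoring loop sketched in Python pseudocode – the game itself is built in Unity, so this only shows the logic, and every name and constant is a placeholder:

# Illustrative scoring tick -- all names and numbers are invented for this sketch.
def score_tick(city, dt):
	for station in city.stations:
		if station.failed:
			city.opinion -= 0.01 * dt              # outages erode public opinion
			if station.player_standing_on:
				station.repair_progress += dt      # standing on a station repairs it
	city.score += city.opinion * dt                # high opinion means faster scoring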

I did some level design for this project over the weekend. Here is a video that shows it off:


SideCoach: Node.js and TouchDesigner

Project Overview


Public speaking – capturing an audience's attention for a topic you are passionate about – is hard, especially for people who haven't spent time in an improv or acting environment. Many times, when seeking funding or trying to inform an audience about a topic, we get very rooted in what we have prepared to say and ignore the audience and the attitudes in the room. This can lead to an audience that is not only bored with what you are saying but actively resents you being onstage talking at them.

A more engaging approach is to try to read your audience and understand what’s landing and what isn’t, then adjust your topic to the audience you are presenting for.

I worked with Boyd Branch on software to assist those wishing to learn public speaking skills and improv techniques so that they can engage their audience while presenting. Boyd had developed a system for teaching and coaching speakers and wanted to get it off of paper and into digital form.

He wanted an easy way for the audience to give real-time feedback while the speaker was talking, to see which parts of the presentation needed the most work. This feedback needed to be recorded and played back afterward so the speaker could recognize points of low interest. They could then work on better ways of presenting the material in those sections in order to keep the audience engaged.

The Design


The crucial part of this software is the audience feedback. We needed an easy way for participants to provide feedback without making them jump through hoops to do so. Some of the ideas we knocked around were creating an app and asking the audience to download it, or re-purposing existing apps to suit our needs.

In the end I decided to create a web app / service which would handle receiving and recording the data from the user. More on that later.

The next critical element was aligning the recorded data with the video so that the speaker could see how they did. This was going to be done in TouchDesigner, where we could pull in the data and create a visual for it and overlay that onto a video.

The last element was being able to coach the speaker discreetly during the talk so that they might adjust for low interest in the moment. This was also going to be handled in TouchDesigner.

The web service was built on a Node.js and Express.js stack. I set up a small API that handled user data and streamed the current interest level. The service sent the average interest across users to TouchDesigner, so the coach had access to the data and it could be recorded for alignment with the recorded talk.

The web app used Materialize as its CSS framework. It makes for a pretty front end and didn't take long to set up. The app is designed so that an audience member can see the cues that the coach gives to the speaker in the HUD and use them as a teaching moment.


Connecting TouchDesigner and Node.js


The connection between TouchDesigner and Node.js was done with a TCP socket. In Node.js the code looks like this:

var net = require('net');

// Open a TCP socket to the TouchDesigner TCP/IP DAT listening on port 1337.
var client = new net.Socket();
client.setNoDelay();                 // push writes out immediately, without buffering
client.connect(1337, 'localhost', function(){
	console.log('Connected to TD Server');
});
client.on('error', () => {
	console.log("An error has occurred");
	console.log("Open TouchDesigner Server before starting.");
});
client.on('data', (data) => {        // anything TouchDesigner sends back
	console.log(data.toString());
	client.end();
});
client.on('end', () => {
	console.log('disconnected from server');
});


In order to send data to TouchDesigner just use:

client.write("DATAHERE")

Then in TouchDesigner, drop down a TCP/IP DAT and set its port to the one you used in Node.js. You should see new data appear whenever that write command is used.
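If you want to do more than watch rows appear, the TCP/IP DAT's callbacks let you handle each message in Python. A minimal sketch – the callback signature follows the TouchDesigner docs, and 'interest_log' is a hypothetical Table DAT for storing the feed:

# Callbacks DAT attached to the TCP/IP DAT.
def onReceive(dat, rowIndex, message, bytes, peer):
	# 'message' is the text the Node.js client sent with client.write()
	op('interest_log').appendRow([absTime.seconds, message.strip()])
	return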

Since this all happened in real time, I could take the data, build a graphing visual, and overlay it onto the video being recorded. That removed the need to sync up the data afterward.


Forest3 Software Suite

Project overview

While studying for my MFA at Arizona State University, I was a research assistant for Christian Ziegler. We were working on the next step in his research into matrix lighting systems, which began in his previous works: statically hung neon lights arranged in a rectangular grid.

For the next iteration we wanted to build on the idea of lights arranged in a rectangular grid – this time with lights that could be moved vertically through the full height of the space. Here is a video of a small-scale test of the system.

Throughout this process I had to design various bits of software and systems to ensure the project worked as envisioned. These projects spanned microcontroller firmware, Python services, and TouchDesigner control systems.

Software

Device Firmware:


Chris Zlaket designed and produced the circuit and electronics that control the device. We are using a 16 MHz Teensy-LC as the microcontroller, driving a 12 V DC motor and two 3.3 V surface-mount LEDs soldered back to back.

The unit reads Art-Net packets via an ENC28J60 Ethernet controller. The devices are set up just like a standard multi-channel DMX device: each is given an address and reads a number of channels starting at that address in the packet. These channels correspond to attributes we would like to set on the device, such as LED color or height. For our implementation we decided to run with 6 channels for each device (sketched in code after the list):

  1. Height (coarse) – The coarse adjustment of height.
  2. Height (fine) – The fine adjustment of height.
  3. Red – The red component of the LED color.
  4. Green – The green component of the LED color.
  5. Blue – The blue component of the LED color.
  6. Opcode – A set of operational control codes.
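As a rough illustration of how a device pulls its slice out of a packet – written in Python for readability, though the actual firmware is C++ on the Teensy:

def read_channels(dmx_frame, addr):
	# dmx_frame: the 512-byte DMX payload of an Art-Net packet
	# addr: this device's start address (1-based, per DMX convention)
	coarse, fine, red, green, blue, opcode = dmx_frame[addr - 1 : addr + 5]
	return coarse, fine, red, green, blue, opcode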

Addressing the devices

In traditional DMX systems, the DMX address of a device is set on the device itself using DIP switches or an LCD menu. Because we were using the Teensy and pinouts were at a premium, we decided against adding another component (such as a DIP switch) that would eat into more pins. The idea was tossed around to hard-code the DMX address into each device to get around this – but the thought of reprogramming each and every device was painful for a programmer who actually wants a life.

Instead I arrived at having the device request an address from a server running on the control machine.

When the device boots, it checks to make sure it has a MAC address (which is stored in EEPROM) and then starts the DHCP handshake with the router. Once it's given an IP address, the device looks up the control computer on the local network and sends a TCP request containing its MAC address. The server sends back the device's DMX address, which is assigned the first time the device is brought online.

If this whole process looks familiar, it is: this is exactly how the DHCP protocol works, just altered a bit for DMX addresses. This process allows me to address the whole array of devices without reprogramming them or knowing/dealing with their IP addresses.

Driving the motor

The motor is driven by a PID controller, which moves the motor from its current position to a setpoint as quickly as possible. With this control method, I can hook the height data I am receiving from Art-Net directly to the motor setpoint, which lets the streaming Art-Net data control the speed of the movement. The motor controller does not have to worry about timing or computing speeds, which saves processing time.
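For anyone unfamiliar with the technique, here is a minimal PID update sketched in Python – the real version runs in C++ on the Teensy; this just shows the control law:

class PID:
	def __init__(self, kp, ki, kd):
		self.kp, self.ki, self.kd = kp, ki, kd
		self.integral = 0.0
		self.prev_error = 0.0

	def update(self, setpoint, position, dt):
		# drive the output toward zero error between setpoint and measured position
		error = setpoint - position
		self.integral += error * dt
		derivative = (error - self.prev_error) / dt
		self.prev_error = error
		return self.kp * error + self.ki * self.integral + self.kd * derivative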

The two height channels in the Art-Net packet act together as a 16-bit number, which gives me very precise control over the motor as it moves: each step of the coarse value is worth 256 steps of the fine value.
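In code, combining the two channels into one setpoint (and splitting it back out) is a pair of one-liners:

coarse, fine = 3, 128                        # example channel values from the packet
height = (coarse << 8) | fine                # coarse*256 + fine: a 16-bit value, 0-65535
coarse, fine = height >> 8, height & 0xFF    # and back into two DMX channels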

Opcodes

The opcode channel in the Art-Net packet corresponds to special subroutines hard-coded in the firmware. For example, if a value of 255 is passed on the opcode channel, the device re-calibrates and re-zeroes itself. If 50 or 51 is passed, the device jogs up or down and stays there until cleared with an opcode of 53. We can also use this channel to tell the device to look up its address again, so we can re-address the system on the fly.
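In firmware this boils down to a dispatch on the channel value – sketched here in Python with the opcodes from above (the handler names and bodies are invented placeholders):

def recalibrate():  print("re-zeroing")      # placeholder subroutines
def jog_up():       print("jogging up")
def jog_down():     print("jogging down")
def clear_jog():    print("clearing jog")

OPCODES = {255: recalibrate, 50: jog_up, 51: jog_down, 53: clear_jog}

def handle_opcode(code):
	if code in OPCODES:
		OPCODES[code]()                      # run the matching subroutine, ignore unknown codes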

These codes are helpful for debugging purposes but could also be useful in show scenarios to set lights at heights and lock them there.

Address service:


This software runs on the computer that controls the system. It was written in Python so it could be cross-platform across macOS, Windows, and Linux – depending on what software package you want to use to control the system (TouchDesigner, Max, ETCnomad…). It acts much like a DHCP server, except that instead of serving IP addresses, it serves DMX addresses.

When a device checks in with this service, it looks up the device's MAC address in a database and returns the DMX address, which is then sent to the device so it can properly read Art-Net. This allows for quickly re-addressing large sections of the matrix or setting up different configurations.
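The core of such a service is small. Here is a minimal sketch of the idea using Python's standard library – the real service adds a persistent database and a UI, and the port and message format here are invented for illustration:

import socketserver

ADDRESS_TABLE = {}      # MAC address -> DMX start address
NEXT_ADDRESS = 1

class AddressHandler(socketserver.StreamRequestHandler):
	def handle(self):
		global NEXT_ADDRESS
		mac = self.rfile.readline().strip().decode()   # device sends its MAC, newline-terminated
		if mac not in ADDRESS_TABLE:                   # first check-in: assign the next free address
			ADDRESS_TABLE[mac] = NEXT_ADDRESS
			NEXT_ADDRESS += 6                          # each device occupies 6 DMX channels
		self.wfile.write((str(ADDRESS_TABLE[mac]) + '\n').encode())

if __name__ == '__main__':
	with socketserver.TCPServer(('', 9750), AddressHandler) as server:
		server.serve_forever()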

The service has a small UI which allows the user to re-address devices and look up the location and MAC address of any device in the system.

 

TouchDesigner Controller / VR Previsualizer:


This piece of the puzzle is one of the more exciting parts. It was originally written as a way to control the system but has evolved several times into a previsualizer, an override controller, a show controller, and a proxy through which other programs can access the system.

Previsualizer

In the early stages of this process we needed a way to pitch the project in order to receive more funding. I built this visualizer to let people experience the final product and convince them it was worth pursuing. The original .toe file had an optional Oculus Rift hookup so the user could see the system as if they were inside it.

The visualizer uses two images to drive the height and color of the 3D representation of the matrix. This image-based control carries through to how the system works today and makes the system easy to manipulate from many visual programming environments.
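The mapping is simple: one pixel per device, with the pixel value scaled to the device's range. A sketch in TouchDesigner Python – the TOP name and the grid indexing are assumptions for illustration:

# Sample one pixel per device from a heightmap TOP (hypothetical name 'heightmap').
heightmap = op('heightmap')
for row in range(8):
	for col in range(5):
		r, g, b, a = heightmap.sample(x=col, y=row)   # pixel values are 0.0-1.0
		height16 = int(r * 65535)                     # scale to the 16-bit height range
		coarse, fine = height16 >> 8, height16 & 0xFF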

Override controller

This element of the software allows a user to override and control sections of the system in order to sculpt the space. The override controls let you select devices by their position in the grid and manipulate their color, brightness, and height. They also let the user send opcodes to devices to trigger their on-board subroutines.

 

Show control and Proxy

The final element is the ability to run various looks into the system from video files, .tox files, or a stream from another service. Sources can be triggered from the buttons along the top, which show a preview of each source and crossfade between them via an A/B deck.

A user can make their own .tox in TouchDesigner and drag and drop it into the spare panel, which loads it as an asset they can use in the system. Any UI elements they have exposed in the .tox will show up in the panel so they can control the system.

Lastly, a user can stream input into the system via the RTSP protocol. This allows videos to be played from the internet or from movie players like VLC.


Continuing forward:

We are currently building out the system to have 40 devices in a 5×8 grid.


Playing with some realism

This week I am going to start diving head-first into some back-end development for this experience – but first I wanted to play a bit with realism and realistic textures.

I took the plants from the previous video and added in a campfire and a ground texture to see how well the realistic textures read. Video below.

There is a fun physics simulation in this scene where the plants move out of your way as you walk through them. It was fairly simple to set up in Unity: a collider object with its mesh renderer turned off. As the collider hits the rigged plants, they move out of the way and then spring back to where they were.

I am not sure if the final product will have this style or if I will embrace the "computer-y" nature of the project with a more low-poly aesthetic. Perhaps there is room for both.

Starting the project.

Here I will log my process in creating a 3D interactive storytelling environment that uses room-scale AR as the primary design tool. I will post progress updates, tips, and other potential paths this technology could take in live event and performance design.

The main technologies I will use to develop this system are the Unity3D game engine, TouchDesigner, Blender, and the OptiTrack motion capture system. I have chosen these based on my prior expertise and their availability.

I hope to catalog my entire process in video documentation and will post the resulting content here.

Below is a preview demo of the system. I will be posting more about how the system works, including the math, shaders, and methods used, as this blog continues. This video has not been altered – the scene displayed in the frame is projected content on the floor of a stage.

 

Data management with Python

Right now I am working on some software to educate public speakers. One of the requirements is that it record and store data gathered in real time during a talk, and that the data be accessible later for playback and analysis. The data needs to be managed in such a way that I can easily search through it given a few tags and pull the corresponding information.

I'm developing in a program called TouchDesigner, which uses Python 3. In TouchDesigner, COMP objects are easily extended using Python classes, so that is where I will start. Matthew Ragan (https://matthewragan.com) has a tremendously helpful tutorial on 'Understanding Extensions' here.

I will post the full code here with some comments, but to see it in action download the file on GitHub: https://github.com/raganmd/TD-Examples/tree/master/shelanskey/Data_Management

import os
import string
import random
import json

class DB:
	def __init__(self):
		'''Initializes records cache if nothing exists.
		Looks in current filepath for a DB collection and creates one if it doesn't exist. 
		Preloads DB with json metadata. 
		
		Args: 
			self: self
		
		Output: 
		
		'''
		self.filepath = parent().par.Path.eval() + '/db' 	#grab path from parent's custom parameters (eval() returns the string value)
		if not os.path.isdir(self.filepath):				#check to see if that path exists
			os.makedirs(self.filepath)						#make new directory if it doesn't
			
			f = open(self.filepath+'/cache.txt', "w")		#create a file called cache
			jsonPrescriptData = {}
			jsonPrescriptData['meta'] = "DB Record Cache"
			jsonPrescriptData['records'] = [] 
			f.write(json.dumps(jsonPrescriptData))     		#load it with metadata
			f.close()
			
			print('> New Database created at: ' + self.filepath)
		else:
			print('> Mounting Database at: '+ self.filepath)
			print("|\n| Number of records: "+ str( len(json.loads(self.unpack())['records'])) )
		return
		
	def Insert(self, record):
		'''Inserts key and name with pointer to folder holding content
		
		Args:
			self: self
			record: a dict that is to be inserted into the cache
		
		Output: 
			returns path to new folder
		'''
		mypath = self.filepath								
		key = self.keygen()									#create unique key
		newFolder = mypath + '/records/' + key				
		if not os.path.isdir(newFolder):					#check if it exists
			os.makedirs(newFolder)							#create new
			record['Key'] = key								#pack cache record with directory name
			self.pack(record)
												
			print("> Successfully created record!")
			print("|\n| Unique Identifier: " + key)
			print("| Record data: \n" + json.dumps(record, sort_keys=True, indent=4, separators=(',', ': ')))
		return newFolder
	
	
	def Query(self, query):
		'''Queries cache for records that match search parameters
		
		Args:
			self: self
			query: a dict that is used to lookup and match cache data
					{
						"Name": "Ian",
					 	"Age": 24
					}
		
		Output:
			prints query results.
		'''
		cache = json.loads(self.unpack())
		for key in list(query.keys()):
			data = query[key]
			cache["records"] = [item for item in cache["records"] if key in item and item[key] == data]
		
		print("Query Results: \n" + json.dumps(cache, sort_keys=True, indent=4, separators=(',', ': ')))
		return
	
		
	def Delete(self):
		return
	
	
	def pack(self, record):
		'''
		Appends a record to the JSON cache on disk
		'''
		f = open(self.filepath+'/cache.txt', 'r')
		cache = json.loads(f.read())
		f.close()
		
		cache['records'].append(record)
		f = open(self.filepath+'/cache.txt', 'w')
		f.write(json.dumps(cache, sort_keys=True, indent=4, separators=(',', ': ')))
		f.close()
		return
		
		
	def unpack(self):
		'''
		Unpacks Json DB Item
		'''
		f = open(self.filepath+'/cache.txt', 'r')
		data = f.read()
		f.close()
		return data
	
	
	def keygen(self, size=6, chars=string.ascii_uppercase + string.digits):
		'''
		Generates key for folder.  
		'''
		return ''.join(random.choice(chars) for _ in range(size))
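
To round things out, here is how the extension might be used once it's attached to a COMP – the component name and record fields are placeholders (the Query example mirrors the docstring above):

# Hypothetical usage, assuming the extension is attached to a COMP named 'db':
db = op('db')
folder = db.Insert({"Name": "Ian", "Age": 24})   # creates a keyed folder and caches the record
db.Query({"Name": "Ian"})                        # prints every record where Name == "Ian"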

 

Simple Stereoscopy visuals in TouchDesigner

Today I'm going to take you through the basics of creating stereoscopic visuals. If you don't have a set of anaglyph 3D glasses lying around, you can make them fairly easily with plastic glasses frames from a movie theater and some red and blue lighting gel. Here is a picture of the glasses I made.

It's important that you get lighting gel with a low transmission factor for the other color: you want the least amount of blue light passing through the red filter, and the least amount of red light passing through the blue filter.

Stereoscopy works by tricking your brain into perceiving depth; at the end of the day it is just an optical illusion. The illusion happens when you show the eyes two separate images that the brain can converge into one.

Let's start by building a simple render network with a twist: we will have two cameras and two Render TOPs. Set the "left eye" camera on the left Render TOP and the "right eye" camera on the right Render TOP. Below you will see an example with labels.

[Image: render network with two cameras and two Render TOPs]

The two cameras will capture two different views of the same object. The trick is to make the eye separation, or "interocular distance", small enough that the brain can still converge the images.

Create an Object null called "parent" and make it the parent of the two cameras. Select both cameras and set their translate to 0,0,0. Then translate the parent null 5 units in the Z direction; the cameras should move back with it.

[Image: the two cameras parented to the Object null]

Once you have done that, translate the designated left camera -0.05 units in the X direction and the right camera +0.05 units in the X direction. This provides the small amount of separation that creates the effect.

Drop down one more Object null and name it "lookAT". Set this null as the look-at point for both cameras and translate it -10 units in the Z direction.

[Image: placement of the lookAT null]
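
If you prefer to set the rig up by script, here is a minimal sketch in TouchDesigner Python – the camera names 'cam_left' and 'cam_right' are assumptions; 'parent' and 'lookAT' match the nulls above:

# Position the stereo rig -- run from a Text DAT at the level where the COMPs live.
op('parent').par.tz = 5                 # pull both cameras back together
op('cam_left').par.tx = -0.05           # half the interocular distance each way
op('cam_right').par.tx = 0.05
op('lookAT').par.tz = -10               # convergence target in front of the rig
for cam in (op('cam_left'), op('cam_right')):
	cam.par.lookat = 'lookAT'           # aim both cameras at the shared target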

You now have a very basic stereoscopic rig. If you have some other means of viewing stereoscopy, this is as far as this tutorial may be helpful; the rest of the post focuses on anaglyph rendering.

Now let's make it work with our homemade anaglyph glasses. After each of the Render TOPs, place a Channel Mix TOP. Take a look at the 3D glasses you have or made and note which eye is red and which is blue. For me the left eye is red, so I want to remove all blue from the left-eye render, and vice versa for the right. In the Channel Mix TOP parameters I will only allow red pixels through; do the same for the other eye and color.

[Image: Blue eye channel settings]
[Image: Red eye channel settings]


Now take those two TOPs and composite them together with a Composite TOP, setting the operation to Add. Right-click and view the Composite TOP and put your glasses on. You should see your torus with a bit of depth to it. This becomes even more apparent if you rotate the Geo by putting the expression absTime.frame in the rotate X parameter.

[Image: the full stereoscopy network]

Play around with settings like the interocular distance and the position of the lookAT null to get the effect just right. Then have fun creating sweet stereoscopic visuals.
