
Topology of words – Fall ’14 AME 530

This project explores our perception of words in space. Using the Windows SAPI speech-recognition engine, TouchDesigner, and a bit of Python, I wrote a handy set of programs that listens, interprets (or in many cases misinterprets), and displays the words that are said, each moving with a velocity derived from how loudly it was spoken. The result looks something like this:

[Demo video]
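The volume-to-velocity mapping itself happens in TouchDesigner, but conceptually it is just a remap of the microphone level onto a speed range. A minimal sketch of that idea in Python (the function name and the ranges here are hypothetical, not from the project):

def velocity_from_level(level, quiet=0.05, loud=1.0, v_min=10.0, v_max=400.0):
    # Map a normalized audio level (0..1) onto an on-screen speed.
    # quiet/loud and v_min/v_max are made-up example ranges.
    t = (level - quiet) / (loud - quiet)
    t = max(0.0, min(1.0, t))  # clamp so whispers and shouts stay in range
    return v_min + t * (v_max - v_min)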

The programming for this is pretty simple. I found a few libraries that wrap Windows SAPI in Python:
https://pypi.python.org/pypi/SpeechRecognition/
https://pypi.python.org/pypi/speech/0.5.2
https://code.google.com/p/pyspeech/
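For a taste of what these look like in use, here is a minimal listen-and-transcribe loop with the first library (API names as documented in recent SpeechRecognition releases; the project itself uses the speech wrapper, as the full code further down shows):

import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    print("Say something...")
    audio = r.listen(source)
try:
    print("Heard: " + r.recognize_google(audio))
except sr.UnknownValueError:
    print("Could not make that out.")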

Combining one of these with some string manipulation and a counter, I was able to pull any number of the most common words out of the speech buffer. Code below:

import speech
import sys
import threading
import re
from collections import Counter

common_count = 100   # how many of the most common words to write out
reduc_time = 60.0    # seconds between each halving of common_count

def callback(phrase, listener):
    # Append every recognized phrase to the running speech buffer.
    fo = open("spbuff.txt", "a")
    fo.write(phrase + " ")
    fo.close()
    print(": " + phrase)
    if phrase == "turn off":
        speech.say("Goodbye.")
        listener.stoplistening()
        sys.exit()

def sort():
    # Count every word in the speech buffer, then drop any word that
    # appears in the exclusion dictionary (the, and, a, and so on).
    excl = open('CompDict.txt')
    my_list = excl.readlines()
    sb = open('spbuff.txt')
    words = re.findall(r'\w+', sb.read())
    cap_words = [word.upper() for word in words]
    word_counts = Counter(cap_words)
    print(word_counts)
    cap_keys = []
    for word in my_list:
        key_words = re.findall(r'\w+', word)
        cap_keys += [keyword.upper() for keyword in key_words]
    print("_____________EXCLUSION DICTIONARY_______________")
    print(cap_keys)
    print("_______________________________________________________")
    for key in cap_keys:
        if key in word_counts:
            del word_counts[key]
    sb.close()
    excl.close()
    return word_counts

def reduc():
    # Every reduc_time seconds, halve the number of words reported,
    # converging on the single most common word in the buffer.
    global common_count
    sortd = sort()
    threading.Timer(reduc_time, reduc).start()
    if common_count > 1:
        common_count = common_count // 2
    else:
        common_count = 1
    op = open("outputWords.txt", "w")
    op.write(str(sortd.most_common(common_count)))
    print(str(sortd.most_common(common_count)))
    op.close()

listener = speech.listenforanything(callback)
reduc()
while listener.islistening():
    print(common_count)

This code halves the number of reported words every minute, eventually converging on the single most common word uttered (after the exclusion dictionary strips out “the,” “and,” “me,” “a,” and many others). Starting from 100, the count steps through 50, 25, 12, 6, 3, and finally 1 over about six minutes.
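On the display side, whatever reads outputWords.txt gets the Python repr of a list of (word, count) tuples, so any consumer with a Python layer (TouchDesigner included) can recover the structure directly. A minimal sketch, assuming the file format produced by reduc() above:

import ast

def read_common_words(path="outputWords.txt"):
    # Parse the repr of [('WORD', count), ...] written by reduc().
    with open(path) as f:
        text = f.read().strip()
    return ast.literal_eval(text) if text else []

# e.g. [('TOPOLOGY', 12), ('WORDS', 7)] -- scale each word by its count
for word, count in read_common_words():
    print(word, count)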


This project could prove useful for improvised performance, and potentially as a realtime word cloud for a discussion. The “seed” words do not have to be spoken; they could come from a paper or article currently under discussion. Used that way, there is significant potential for rediscovering meaning by placing topics adjacent to one another.

Written words and sentences have a very specific topology relative to one another. There must be a subject (a noun) and a predicate (what that noun is doing). We use written words so often because we can pull meaning out of them in this very logical way. There are computer programs that can narrow down the exact topic of an article just by looking at sentence structure. This leads to a very linear way of understanding written text, which is good for following specific, narrow arguments built on linear logic, but not good for inspiring creative concepts or new ways of thinking about a topic beyond the author's original argument.

This program takes sentence structure and throws it out the window. Relying only on the most common words in a buffer, arranged dynamically in space, we can start to attack a problem with a non-sequitur line of reasoning. This introduces a new topology to how we understand text, one that relies on distance, movement, and our brain's ability to generate meaning from mess, rather than the mere adjacency of static topics in a linear sense. By injecting text into this new topology, we connect arguments in ways the author might not have intended, pulling new truths and new creative energy from them.
