novel camera – http://www.thehypertext.com/2015/12/01/novel-camera/ – Tue, 01 Dec 2015 17:10:37 +0000

I have spent the last few months completing a novel I started a long time ago and turning it into a non-linear interactive experience. For my final project in several classes, I have transferred this novel into a printer-equipped camera to make a new and different type of photographic experience.

[Photos of the antique camera]

Inside the antique camera is a Raspberry Pi with a camera module behind the lens. The flow of passages is controlled by a single, handwritten JSON file. When the tags Clarifai detects in an image overlap with the tags assigned to a passage, and that passage comes next in a storyline that has already begun, the passage is printed out. If no suitable passage can be found, the camera instead prints poetry generated by a recursive context-free grammar and seeded with words detected in the image.

[Photo]

This week, I am planning to add a back-end component that will allow photos taken with the camera to be preserved as albums, and printed passages to be read later online. For now, here is the JSON file that controls the order of output:

{
    "zero": {
        "tags": ["moon", "swamp", "marble", "north america", "insect", "street"],
        "order": 0,
        "next": ["story"]
    },
    "guam_zero": {
    	"tags": ["computer", "technology", "future", "keyboard", "politics"],
    	"order": 0,
    	"next": ["guam_one"]
    },
    "guam_one": {
    	"tags": ["computer", "technology", "future", "keyboard", "politics"],
    	"order": 1,
    	"next": []
    },
    "dream_zero": {
    	"tags": ["dream", "dark", "night", "sleep", "bed", "bedroom", "indoors"],
    	"order": 0,
    	"next": ["chess_board"]
    },
    "chess_board": {
    	"tags": ["dream", "dark", "night", "sleep", "bed", "bedroom", "indoors"],
    	"order": 2,
    	"next": ["black_queen", "black_pawn", "black_king", "black_rook", "white_king", "white_knight"]
    },
    "black_queen": {
    	"tags": ["dream", "dark", "black", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "queen"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "black_pawn": {
    	"tags": ["dream", "dark", "black", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "pawn"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "black_king": {
    	"tags": ["dream", "dark", "black", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "king"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "black_rook": {
    	"tags": ["dream", "dark", "black", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "rook", "castle"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "white_king": {
    	"tags": ["dream", "dark", "white", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "king"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "white_knight": {
    	"tags": ["dream", "dark", "white", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "knight"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "wake_up": {
    	"tags": ["dream", "dark", "night", "sleep", "bed", "bedroom", "indoors"],
    	"order": 4,
    	"next": []
    },
    "forget": {
    	"tags": ["man", "men", "boy"],
    	"order": 0,
    	"next": []
    },    
    "story": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "street", "woman", "women", "girl"],
    	"order": 1,
    	"next": ["miss_vest", "forget"]
    },
    "miss_vest": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "street", "woman", "women", "girl"],
    	"order": 2,
    	"next": ["envelope", "forget"]
    },
    "envelope": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "street", "woman", "women", "girl", "paper", "envelope", "mail"],
    	"order": 3,
    	"next": ["apartment", "forget"]
    },
    "apartment": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "street", "woman", "women", "girl", "paper", "envelope", "mail"],
    	"order": 4,
    	"next": ["email"]
    },
    "email": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "woman", "women", "girl", "paper", "envelope", "mail", "computer", "technology"],
    	"order": 5,
    	"next": ["match"]
    },
    "match": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "paper", "envelope", "mail", "computer", "technology"],
    	"order": 5,
    	"next": ["smithpoint", "morning"]
    },
    "morning": {
    	"tags": ["day", "sun", "bedroom", "bed", "breakfast", "morning", "dream", "dark", "night"],
    	"order": 6,
    	"next": ["call"]
    },
    "call": {
    	"tags": ["phone", "telephone", "technology", "computer"],
    	"order": 7,
    	"next": ["smithpoint"]
    },
    "smithpoint": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "bar", "drink", "alcohol", "wine", "beer"],
    	"order": 8,
    	"next": ["drive", "forget"]
    },
    "drive": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "bar", "drink", "alcohol", "wine", "beer"],
    	"order": 9,
    	"next": ["take_pill", "toss_pill"]
    },
    "take_pill": {
    	"tags": ["drug", "pill", "man", "men", "boy", "bar", "night", "drink", "alcohol", "wine", "beer"],
    	"order": 10,
    	"next": ["meet_stranger_drugs", "john_home"]
    },
    "toss_pill": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "girl", "street", "woman", "women"],
    	"order": 10,
    	"next": ["meet_stranger_no_drugs"]
    },
    "meet_stranger_drugs": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "bar", "drink", "alcohol", "wine", "beer"],
    	"order": 11,
    	"next": ["john_home"]
    },
    "meet_stranger_no_drugs": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "bar", "drink", "alcohol", "wine", "beer"],
    	"order": 11,
    	"next": ["painting"]
    },
    "painting": {
    	"tags": ["painting", "art", "moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "bar", "drink", "alcohol", "wine", "beer"],
    	"order": 12,
    	"next": []
    },
    "john_home": {
    	"tags": ["drug", "pill", "man", "men", "boy", "bar", "night", "drink", "alcohol", "wine", "beer"],
    	"order": 13,
    	"next": []
    }

}

And here is the code that’s currently running on the Raspberry Pi:

import RPi.GPIO as GPIO
from Adafruit_Thermal import *
import time
import os
import sys
import json
import picamera
from clarifai.client import ClarifaiApi
from pattern.en import referenced

import gen

# Init Clarifai
os.environ["CLARIFAI_APP_ID"] = "nAT8dW6B0Oc5qA6JQfFcdIEr-CajukVSOZ6u_IsN"
os.environ["CLARIFAI_APP_SECRET"] = "BnETdY6wtp8DmXIWCBZf8nE4XNPtlHMdtK0ISNJQ"
clarifai_api = ClarifaiApi() # Assumes Env Vars Set

# Init System Paths
APP_PATH = os.path.dirname(os.path.realpath(__file__))
IMG_PATH = os.path.join(APP_PATH, 'img')
TALE_PATH = os.path.join(APP_PATH, 'tales')

# Init tale_dict
with open(os.path.join(APP_PATH, 'tales_dict.json'), 'r') as infile:
    tale_dict = json.load(infile)

# Seen tales
seen_tales = list()

# Init Camera
camera = picamera.PiCamera()

# Init Printer
printer = Adafruit_Thermal("/dev/ttyAMA0", 9600, timeout=5)
printer.boldOn()

# Init GPIO
# With camera pointed forward...
# LEFT:  11 (button), 15 (led)
# RIGHT: 13 (button), 16 (led)
GPIO.setmode(GPIO.BOARD)
ledPins = (15,16)
butPins = (11,13)

for pinNo in ledPins:
    GPIO.setup(pinNo, GPIO.OUT)

for pinNo in butPins:
    GPIO.setup(pinNo, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Open Grammar Dict
with open(os.path.join(APP_PATH, 'weird_grammar.json'), 'r') as infile:
    grammar_dict = json.load(infile)

def blink_left_right(count):
    ledLeft, ledRight = ledPins
    for _ in range(count):
        GPIO.output(ledRight, False)
        GPIO.output(ledLeft, True)
        time.sleep(0.2)
        GPIO.output(ledRight, True)
        GPIO.output(ledLeft, False)
        time.sleep(0.2)
    GPIO.output(ledRight, False)

def to_lines(sentences):
    def sentence_to_lines(text):
        LL = 32
        tokens = text.split(' ')
        lines = list()
        curLine = list()
        charCount = 0
        for t in tokens:
            charCount += (len(t)+1)
            if charCount > LL:
                lines.append(' '.join(curLine))
                curLine = [t]
                charCount = len(t)+1
            else:
                curLine.append(t)
        lines.append(' '.join(curLine))
        return '\n'.join(lines)
    sentence_lines = map(sentence_to_lines, sentences)
    return '\n\n'.join(sentence_lines)

def open_tale(tale_name):
    with open(os.path.join(TALE_PATH, tale_name), 'r') as infile:
        tale_text = to_lines(
            filter(lambda x: x.strip(), infile.read().strip().split('\n'))
        )
    return tale_text

def pick_tale(tags, next_tales):
    choice = str()
    record = 0
    for tale in tale_dict:
        if tale in next_tales or tale_dict[tale]['order'] == 0:
            score = len(set(tale_dict[tale]['tags']) & set(tags))
            if tale in next_tales and score > 0 and not tale in seen_tales:
                score += 100
            if score > record:
                choice = tale
                record = score
    return choice


blink_left_right(5)
imgCount = 1
cur_tale = str()


while True:
    inputLeft, inputRight = map(GPIO.input, butPins)
    if inputLeft != inputRight:
        try:
            img_fn = str(int(time.time()*100))+'.jpg'
            img_fp = os.path.join(IMG_PATH, img_fn)

            camera.capture(img_fp)

            blink_left_right(3)

            result = clarifai_api.tag_images(open(img_fp))
            tags = result['results'][0]['result']['tag']['classes']

            if cur_tale:
                next_tales = tale_dict[cur_tale]['next']
            else:
                next_tales = list()

            tale_name = pick_tale(tags, next_tales)
            cur_tale = tale_name

            if tale_name:
                lines_to_print = open_tale(tale_name)
                seen_tales.append(tale_name)

            else:
                grammar_dict["N"].extend(tags)

                if not inputLeft:
                    sentences = [gen.make_polar(grammar_dict, 10, sent=0) for _ in range(10)]
                elif not inputRight:
                    sentences = [gen.make_polar(grammar_dict, 10) for _ in range(10)]
                else:
                    sentences = gen.main(grammar_dict, 10)

                lines_to_print = to_lines(sentences)

            prefix = '\n\n\nNo. %i\n\n'%imgCount

            printer.println(prefix+lines_to_print+'\n\n\n')

            grammar_dict["N"] = list()
            imgCount += 1
        except:
            blink_left_right(15)
            print sys.exc_info()

    elif (not inputLeft) and (not inputRight):
        offCounter = 0
        for _ in range(100):
            inputLeft, inputRight = map(GPIO.input, butPins)
            if (not inputLeft) and (not inputRight):
                time.sleep(0.1)
                offCounter += 1
                if offCounter > 50:
                    os.system('sudo shutdown -h now')
            else:
                break

Click here for a Google Drive folder with all the passages from the novel.

artificial intelligence – http://www.thehypertext.com/2015/10/27/artificial-intelligence/ – Tue, 27 Oct 2015 19:06:42 +0000

For my current project in Temporary Expert, I have been experimenting with artificially intelligent voice interfaces in order to build an art piece with similar functionality to the Amazon Echo, but with unexpected properties.

[Image: Amazon Echo]

My robot will take the form of a benevolent computer virus. Using tools like pyautogui and the Python webbrowser library, it will respond to user inquiries by opening documents, typing, and displaying web pages. It will also talk back to users using Apple’s text-to-speech utility.
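
The system-control side can be sketched in a few lines. This is a minimal illustration rather than the project’s actual code: the respond dispatcher and its intents are hypothetical, while webbrowser, pyautogui, and the macOS say utility are the real tools named above.

import subprocess
import webbrowser

import pyautogui  # simulates keyboard input

def speak(text):
    # Apple's built-in text-to-speech utility (macOS `say`)
    subprocess.call(['say', text])

def respond(intent):
    # Hypothetical dispatcher: map a detected intent to a screen action
    if intent == 'show_news':
        webbrowser.open('http://www.nytimes.com')
        speak('Here is the news.')
    elif intent == 'take_note':
        speak('Opening a note for you.')
        pyautogui.typewrite('Dear user,\n', interval=0.05)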

I am building this robot using Wit.ai, a deep learning tool for making voice interfaces. Using the tool’s dashboard, I have been training my robot to respond to various user intents.

[Screenshot of the Wit.ai dashboard]
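
Behind the dashboard, Wit.ai is queried over HTTP. A sketch of the round trip: the token is a placeholder, and the 'outcomes' response shape reflects the Wit.ai API as of late 2015, so treat the parsing as an assumption.

import requests

WIT_TOKEN = 'WIT-TOKEN-GOES-HERE'  # placeholder credential

def detect_intent(utterance):
    # Send a transcribed utterance to Wit.ai and return the top intent
    response = requests.get(
        'https://api.wit.ai/message',
        params={'q': utterance},
        headers={'Authorization': 'Bearer ' + WIT_TOKEN}
    )
    outcomes = response.json().get('outcomes', [])
    if outcomes:
        return outcomes[0].get('intent')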

The core of the experience will be a therapy bot similar to ELIZA, but with some additional capabilities. When this project is complete, I believe it will provide an interesting take on artificial intelligence. By using AI tools for purposes other than those they were designed for, I hope to make users question whether the tool they are using is in fact sentient and aware of their presence.

Sound Camera, Part II – http://www.thehypertext.com/2015/10/06/sound-camera-part-ii/ – Tue, 06 Oct 2015 02:20:44 +0000

Using JavaScript and Python Flask, I created a functional software prototype of the Sound Camera: rossgoodwin.com/soundcamera

The front-end JavaScript code is available on GitHub. Here is the primary back-end Python code:

import os
import json
import uuid
from base64 import decodestring
import time
from random import choice as rc
from random import sample as rs
import re

import PIL
from PIL import Image
import requests
import exifread

from flask import Flask, request, abort, jsonify
from flask.ext.cors import CORS
from werkzeug import secure_filename

from clarifai.client import ClarifaiApi

app = Flask(__name__)
CORS(app)

app.config['UPLOAD_FOLDER'] = '/var/www/SoundCamera/SoundCamera/static/img'
IMGPATH = '/var/www/SoundCamera/SoundCamera/static/img/'

clarifai_api = ClarifaiApi()

@app.route("/")
def index():
    return "These aren't the droids you're looking for."

@app.route("/img", methods=["POST"])
def img():
	request.get_data()
	if request.method == "POST":
		f = request.files['file']
		if f:
			filename = secure_filename(f.filename)
			f.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
			new_filename = resize_image(filename)
			return jsonify(uri=main(new_filename))
		else:
			abort(501)

@app.route("/b64", methods=["POST"])
def base64():
	if request.method == "POST":
		fstring = request.form['base64str']
		filename = str(uuid.uuid4())+'.jpg'
		file_obj = open(IMGPATH+filename, 'w')
		file_obj.write(fstring.decode('base64'))
		file_obj.close()
		return jsonify(uri=main(filename))

@app.route("/url")
def url():
	img_url = request.args.get('url')
	response = requests.get(img_url, stream=True)
	orig_filename = img_url.split('/')[-1]
	if response.status_code == 200:
		with open(IMGPATH+orig_filename, 'wb') as f:
			for chunk in response.iter_content(1024):
				f.write(chunk)
		new_filename = resize_image(orig_filename)
		return jsonify(uri=main(new_filename))
	else:
		abort(500)


# def allowed_img_file(filename):
#     return '.' in filename and \
# 		filename.rsplit('.', 1)[1].lower() in set(['.jpg', '.jpeg', '.png'])

def resize_image(fn):
    longedge = 640
    orientDict = {
        1: (0, 1),
        2: (0, PIL.Image.FLIP_LEFT_RIGHT),
        3: (-180, 1),
        4: (0, PIL.Image.FLIP_TOP_BOTTOM),
        5: (-90, PIL.Image.FLIP_LEFT_RIGHT),
        6: (-90, 1),
        7: (90, PIL.Image.FLIP_LEFT_RIGHT),
        8: (90, 1)
    }

    imgOriList = []
    try:
        f = open(IMGPATH+fn, "rb")
        exifTags = exifread.process_file(f, details=False, stop_tag='Image Orientation')
        if 'Image Orientation' in exifTags:
            imgOriList.extend(exifTags['Image Orientation'].values)
    except:
        pass

    img = Image.open(IMGPATH+fn)
    w, h = img.size
    newName = str(uuid.uuid4())+'.jpeg'
    if w >= h:
        wpercent = (longedge/float(w))
        hsize = int((float(h)*float(wpercent)))
        img = img.resize((longedge,hsize), PIL.Image.ANTIALIAS)
    else:
        hpercent = (longedge/float(h))
        wsize = int((float(w)*float(hpercent)))
        img = img.resize((wsize,longedge), PIL.Image.ANTIALIAS)

    for val in imgOriList:
        if val in orientDict:
            deg, flip = orientDict[val]
            img = img.rotate(deg)
            if flip != 1:
                img = img.transpose(flip)

    img.save(IMGPATH+newName, format='JPEG')
    os.remove(IMGPATH+fn)
    
    return newName

def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in xrange(0, len(l), n):
        yield l[i:i+n]

def get_tags(fp):
    fileObj = open(fp)
    result = clarifai_api.tag_images(fileObj)
    resultObj = result['results'][0]
    tags = resultObj['result']['tag']['classes']
    return tags

def genius_search(tags):
    access_token = 'ACCESS-TOKEN-GOES-HERE'
    payload = {
        'q': ' '.join(tags),
        'access_token': access_token
    }
    endpt = 'http://api.genius.com/search'
    response = requests.get(endpt, params=payload)
    results = response.json()
    hits = results['response']['hits']
    
    artists_titles = []
    
    for h in hits:
        hit_result = h['result']
        if hit_result['url'].endswith('lyrics'):
            artists_titles.append(
                (hit_result['primary_artist']['name'], hit_result['title'])
            )
    
    return artists_titles

def spotify_search(query):
    endpt = "https://api.spotify.com/v1/search"
    payload = {
        'q': query,
        'type': 'track'
    }
    response = requests.get(endpt, params=payload)
    result = response.json()
    result_zero = result['tracks']['items'][0]
    
    return result_zero['uri']

def main(fn):
    tags = get_tags(IMGPATH+fn)
    for tag_chunk in chunks(tags,3):
        artists_titles = genius_search(tag_chunk)
        for artist, title in artists_titles:
            try:
                result_uri = spotify_search(artist+' '+title)
            except IndexError:
                pass
            else:
                return result_uri


if __name__ == "__main__":
    app.run()

It uses the same algorithm discussed in my prior post. Now that I have the opportunity to test it more, I am not quite satisfied with the results it is providing. First of all, they are not entirely deterministic (you can upload the same photo twice and end up with two different songs in some cases). Moreover, the results from a human face — which I expect to be a common use case — are not very personal. For the next steps in this project, I plan to integrate additional data including GPS, weather, time of day, and possibly even facial expressions in order to improve the output.

The broken cameras I ordered from eBay have arrived, and I have been considering how to use them as cases for the new models. I also purchased a GPS module for my Raspberry Pi, so the next Sound Camera prototype, with new features integrated, will likely be a physical version. I’m planning to use this Kodak Brownie camera (c. 1916):

[Photo of the Kodak Brownie camera]

Candidate Image Explorer – http://www.thehypertext.com/2015/09/17/candidate-image-explorer/ – Thu, 17 Sep 2015 15:53:26 +0000

For this week’s homework in Designing for Data Personalization with Sam Slover, I made progress on a project that I’m working on for Fusion as part of their 2016 US Presidential Election coverage. I began this project by downloading all the images from each candidate’s Twitter, Facebook, and Instagram accounts — about 60,000 in total — then running those images through Clarifai’s convolutional neural networks to generate descriptive tags.
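
The batch-tagging step isn’t shown in this post, but it would look roughly like this. A sketch only: the candidate_images folder and output file are hypothetical, while the tag_images call and response structure match the clarifai.client usage in my other projects.

import json
import os

from clarifai.client import ClarifaiApi

clarifai_api = ClarifaiApi()  # assumes CLARIFAI_APP_ID/_SECRET env vars are set

IMG_DIR = 'candidate_images'  # hypothetical local folder of downloaded images

tag_data = {}
for fn in os.listdir(IMG_DIR):
    if not fn.lower().endswith(('.jpg', '.jpeg', '.png')):
        continue
    with open(os.path.join(IMG_DIR, fn)) as img_file:
        result = clarifai_api.tag_images(img_file)
    # Same response structure used by the camera projects elsewhere on this site
    tag_data[fn] = result['results'][0]['result']['tag']['classes']

with open('tags.json', 'w') as outfile:
    json.dump(tag_data, outfile)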

With all the images hosted on Amazon S3, and the tag data hosted on Parse.com, I created a simple page where users can explore the candidates’ images by topic and by candidate. The default is all topics and all candidates, but users can narrow the selection of images displayed by making multiple selections from each field. Additionally, more images will load as you scroll down the page.

[Screenshots of the candidate image explorer]

Unfortunately, the AI-enabled image tagging doesn’t always work as well as one might hope.

[Screenshot of a mis-tagged image]

Here’s the page’s JavaScript code:

var name2slug = {};
var slug2name = {};

Array.prototype.remove = function() {
    var what, a = arguments, L = a.length, ax;
    while (L && this.length) {
        what = a[--L];
        while ((ax = this.indexOf(what)) !== -1) {
            this.splice(ax, 1);
        }
    }
    return this;
}

Array.prototype.chunk = function(chunkSize) {
    var array=this;
    return [].concat.apply([],
        array.map(function(elem,i) {
            return i%chunkSize ? [] : [array.slice(i,i+chunkSize)];
        })
    );
}

function dateFromString(str) {
	var m = str.match(/(\d+)-(\d+)-(\d+)T(\d+):(\d+):(\d+)Z/);
	var date = new Date(Date.UTC(+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6])); // JS months are 0-indexed
	var options = {
	    weekday: "long", year: "numeric", month: "short",
	    day: "numeric", hour: "2-digit", minute: "2-digit"
	};
	return date.toLocaleTimeString("en-us", options);
}

function updatePhotos(query) {
	$.ajax({
		url: 'https://api.parse.com/1/classes/all_photos?limit=1000&where='+JSON.stringify(query),
		type: 'GET',
		dataType: 'json',
		success: function(response) {
			// console.log(response);
			$('#img-container').empty();

			var curChunk = 0;
			var resultChunks = response['results'].chunk(30);

			function appendPhotos(chunkNo) {

				resultChunks[chunkNo].map(function(obj){
					var date = dateFromString(obj['datetime'])
					var imgUrl = "https://s3-us-west-2.amazonaws.com/electionscrape/" + obj['source'] + "/400px_" + obj['filename'];
					var fullImgUrl = "https://s3-us-west-2.amazonaws.com/electionscrape/" + obj['source'] + "/" + obj['filename'];
					$('#img-container').append(
						$('<div class=\"grid-item\"></div>').append(
							'<a href=\"'+fullImgUrl+'\"><img src=\"'+imgUrl+'\" width=\"280px\"></a><p>'+slug2name[obj['candidate']]+'</p><p>'+date+'</p><p>'+obj['source']+'</p>'
						) // not a missing semicolon
					);
					// console.log(obj['candidate']);
					// console.log(obj['datetime']);
					// console.log(obj['source']);
					// console.log(obj['filename']);
				});

			}

			appendPhotos(curChunk);

			window.onscroll = function(ev) {
			    if ((window.innerHeight + window.scrollY) >= document.body.offsetHeight) {
			        curChunk++;
			        appendPhotos(curChunk);
			    }
			};


		},
		error: function(response) { "error" },
		beforeSend: setHeader
	});
}

function setHeader(xhr) {
	xhr.setRequestHeader("X-Parse-Application-Id", "ID-GOES-HERE");
	xhr.setRequestHeader("X-Parse-REST-API-Key", "KEY-GOES-HERE");
}

function makeQuery(candArr, tagArr) {

	var orArr = tagArr.map(function(tag){
		return { "tags": tag };
	});

	if (tagArr.length === 0 && candArr.length > 0) {
		var query = {
			'candidate': {"$in": candArr}
		};
	}
	else if (tagArr.length > 0 && candArr.length === 0) {
		var query = {
			'$or': orArr
		};
	}
	else if (tagArr.length === 0 && candArr.length === 0) {
		var query = {};
	}
	else {
		var query = {
			'candidate': {"$in": candArr},
			'$or': orArr
		};
	}

	updatePhotos(query);

}

(function(){

$('.grid').masonry({
  // options
  itemSelector: '.grid-item',
  columnWidth: 300
});

var selectedCandidates = [];
var selectedTags = [];

$.getJSON("data/candidates.json", function(data){
	var candNames = Object.keys(data).map(function(slug){
		var name = data[slug]['name'];
		name2slug[name] = slug;
		slug2name[slug] = name;
		return name;
	}).sort();

	candNames.map(function(name){
		$('#candidate-dropdown').append(
			'<li class=\"candidate-item\"><a href=\"#\">'+name+'</a></li>'
		);
	});

	$('.candidate-item').click(function(){
		var name = $(this).text();
		var slug = name2slug[name];
		if ($.inArray(slug, selectedCandidates) === -1) {
			selectedCandidates.push(slug);
			makeQuery(selectedCandidates, selectedTags);
			console.log(selectedCandidates);
			$('#selected-candidates').append(
				$('<button class=\"btn btn-danger btn-xs cand-select-btn\"><span class=\"glyphicon glyphicon-remove\" aria-hidden=\"true\"></span>'+name+'</button>')
					.click(function(){
						$(this).fadeOut("fast", function(){
							selectedCandidates.remove(name2slug[$(this).text()]);
							makeQuery(selectedCandidates, selectedTags);
							console.log(selectedCandidates);
						});
					}) // THIS IS NOT A MISSING SEMI-COLON
			);
		}
	});
});


$.getJSON("data/tags.json", function(data){
	var tags = data["tags"].sort();
	tags.map(function(tag){
		$('#tag-dropdown').append(
			'<li class=\"tag-item\"><a href=\"#\">'+tag+'</a></li>'
		);
	});

	$('.tag-item').click(function(){
		var tag = $(this).text();
		if ($.inArray(tag, selectedTags) === -1) {
			selectedTags.push(tag);
			makeQuery(selectedCandidates, selectedTags);
			console.log(selectedTags);
			$('#selected-tags').append(
				$('<button class=\"btn btn-primary btn-xs tag-select-btn\"><span class=\"glyphicon glyphicon-remove\" aria-hidden=\"true\"></span>'+tag+'</button>')
					.click(function(){
						$(this).fadeOut("fast", function(){
							selectedTags.remove($(this).text());
							makeQuery(selectedCandidates, selectedTags);
							console.log(selectedTags);
						});
					})
			);
		}
	});
});

makeQuery(selectedCandidates, selectedTags);

})();

word.camera, Part II – http://www.thehypertext.com/2015/05/08/word-camera-part-ii/ – Fri, 08 May 2015 21:50:25 +0000

Click Here for Part I


[Photo of the word.camera device]


For my final projects in Conversation and Computation with Lauren McCarthy and This Is The Remix with Roopa Vasudevan, I iterated on my word.camera project. I added a few new features to the web application, including a private API that I used to enable the creation of a physical version of word.camera inside a Mamiya C33 TLR.

The current version of the code remains open source and available on GitHub, and the project continues to receive positive mentions in the press.

On April 19, I announced two new features for word.camera via the TinyLetter email newsletter I advertised on the site.

Hello,

Thank you for subscribing to this newsletter, wherein I will provide occasional updates regarding my project, word.camera.

I wanted to let you know about two new features I added to the site in the past week:

word.camera/albums You can now generate ebooks (DRM-free ePub format) from sets of lexographs.

word.camera/postcards You can support word.camera by sending a lexograph as a postcard, anywhere in the world for $5. I am currently a graduate student, and proceeds will help cover the cost of maintaining this web application as a free, open source project.

Also:

word.camera/a/XwP59n1zR A lexograph album containing some of the best results I’ve gotten so far with the camera on my phone.

1, 2, 3 A few random lexographs I did not make that were popular on social media.

Best,

Ross Goodwin
rossgoodwin.com
word.camera

Next, I set to work on the physical version. I decided to use a technique I developed on another project earlier in the semester to create word.camera epitaphs composed of highly relevant paragraphs from novels. To ensure fair use of copyrighted materials, I determined that all of this additional data would be processed locally on the physical camera.

I developed a collection of data from a combination of novels that are considered classics and those I personally enjoyed, and I included only paragraphs over 99 characters in length. In total, the collection contains 7,113,809 words from 48 books.
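
A sketch of how such a corpus can be assembled. The manifest and file paths are hypothetical and get_concepts is a stub; the output matches the lit.json structure the matching code below expects.

import json

def paragraphs(path):
    # Yield paragraphs over 99 characters from a plain-text novel
    with open(path) as infile:
        for p in infile.read().split('\n\n'):
            p = ' '.join(p.split())
            if len(p) > 99:
                yield p

def get_concepts(text):
    # Stub: in practice, concept -> relevance scores, e.g. from AlchemyAPI
    # (see the alchemy() call in the camera code below)
    return {}

books = [('Moby-Dick', 'Herman Melville', 'moby_dick.txt')]  # hypothetical manifest

lit = []
for title, author, path in books:
    for p in paragraphs(path):
        lit.append({
            'title': title,
            'author': author,
            'text': p,
            'concepts': get_concepts(p)
        })

with open('lit.json', 'w') as outfile:
    json.dump(lit, outfile)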

Below is an infographic showing all the books used in my corpus, and their relative included word counts (click on it for the full-size image).

[Infographic: books in the corpus and their relative word counts]

To build the physical version of word.camera, I purchased the following materials:

  • Raspberry Pi 2 board
  • Raspberry Pi camera module
  • Two (2) 10,000 mAh batteries
  • Thermal receipt printer
  • 40 female-to-male jumper wires
  • Three (3) extra-small prototyping perf boards
  • LED button

After some tinkering, I was able to put together the arrangement pictured below, which could print raw word.camera output on the receipt printer.

[Photo of the prototype electronics]

I thought for a long time about the type of case I wanted to put the camera in. My original idea was a photobooth, but I felt that a portable camera—along the lines of Matt Richardson’s Descriptive Camera—might take better advantage of the Raspberry Pi’s small footprint.

Rather than fabricating my own case, I determined that an antique film camera might provide a familiar exterior to draw in people not familiar with the project. (And I was creating it for a remix-themed class, after all.) So I purchased a lot of three broken TLR film cameras on eBay, and the Mamiya C33 was in the best condition of all of them, so I gutted it. (N.B. I’m an antique camera enthusiast—I own a working version of the C33’s predecessor, the C2—and, despite its broken condition, cutting open the bellows of the C33 felt sacrilegious.)

I laser cut some clear acrylic I had left over from the traveler’s lamp project to fill the lens holes and mount the LED button on the back of the camera. Here are some photos of the finished product:

[Photos of the finished camera]

And here is the code that’s running on the Raspberry Pi (the crux of the matching algorithm is in the findIntersection function):

import uuid
import picamera
import RPi.GPIO as GPIO
import requests
from time import sleep
import os
import json
from Adafruit_Thermal import *
from alchemykey import apikey
import time

# SHUTTER COUNT / startNo GLOBAL
startNo = 0

# Init Printer
printer = Adafruit_Thermal("/dev/ttyAMA0", 19200, timeout=5)
printer.setSize('S')
printer.justify('L')
printer.setLineHeight(36)

# Init Camera
camera = picamera.PiCamera()

# Init GPIO
GPIO.setmode(GPIO.BCM)

# Working Dir
cwd = '/home/pi/tlr'

# Init Button Pin
GPIO.setup(21, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Init LED Pin
GPIO.setup(20, GPIO.OUT)

# Init Flash Pin
GPIO.setup(16, GPIO.OUT)

# LED and Flash Off
GPIO.output(20, False)
GPIO.output(16, False)

# Load lit list
lit = json.load( open(cwd+'/lit.json', 'r') )


def blink(n):
    for _ in range(n):
        GPIO.output(20, True)
        sleep(0.2)
        GPIO.output(20, False)
        sleep(0.2)

def takePhoto():
    fn = str(int(time.time()))+'.jpg' # TODO: Change to timestamp hash
    fp = cwd+'/img/'+fn
    GPIO.output(16, True)
    camera.capture(fp)
    GPIO.output(16, False)
    return fp

def getText(imgPath):
    endPt = 'https://word.camera/img'
    payload = {'Script': 'Yes'}
    files = {'file': open(imgPath, 'rb')}
    response = requests.post(endPt, data=payload, files=files)
    return response.text

def alchemy(text):
    endpt = "http://access.alchemyapi.com/calls/text/TextGetRankedConcepts"
    payload = {"apikey": apikey,
               "text": text,
               "outputMode": "json",
               "showSourceText": 0,
               "knowledgeGraph": 1,
               "maxRetrieve": 500}
    headers = {'content-type': 'application/x-www-form-urlencoded'}
    r = requests.post(endpt, data=payload, headers=headers)
    return r.json()

def findIntersection(testDict):
    returnText = ""
    returnTitle = ""
    returnAuthor = ""
    recordInter = set(testDict.keys())
    relRecord = 0.0
    for doc in lit:
        inter = set(doc['concepts'].keys()) & set(testDict.keys())
        if inter:
            relSum = sum([doc['concepts'][tag]+testDict[tag] for tag in inter])
            if relSum > relRecord: 
                relRecord = relSum
                recordInter = inter
                returnText = doc['text']
                returnTitle = doc['title']
                returnAuthor = doc['author']
    doc = {
        'text': returnText,
        'title': returnTitle,
        'author': returnAuthor,
        'inter': recordInter,
        'record': relRecord
    }
    return doc

def puncReplace(text):
    replaceDict = {
        '&#8212;': '---',
        '&#8211;': '--',
        '&#8216;': "\'",
        '&#8217;': "\'",
        '&#8220;': '\"',
        '&#8221;': '\"',
        '&#180;': "\'",
        '&#235;': 'e',
        '&#241;': 'n'
    }

    for key in replaceDict:
        text = text.replace(key, replaceDict[key])

    return text


blink(5)
while 1:
    input_state = GPIO.input(21)
    if not input_state:
        GPIO.output(20, True)
        try:
            # Get Word.Camera Output
            print "GETTING TEXT FROM WORD.CAMERA..."
            wcText = getText(takePhoto())
            blink(3)
            GPIO.output(20, True)
            print "...GOT TEXT"

            # Print
            # print "PRINTING PRIMARY"
            # startNo += 1
            # printer.println("No. %i\n\n\n%s" % (startNo, wcText))

            # Get Alchemy Data
            print "GETTING ALCHEMY DATA..."
            data = alchemy(wcText)
            tagRelDict = {concept['text']:float(concept['relevance']) for concept in data['concepts']}
            blink(3)
            GPIO.output(20, True)
            print "...GOT DATA"

            # Make Match
            print "FINDING MATCH..."
            interDoc = findIntersection(tagRelDict)
            print interDoc
            interText = puncReplace(interDoc['text'].encode('ascii', 'xmlcharrefreplace'))
            interTitle = puncReplace(interDoc['title'].encode('ascii', 'xmlcharrefreplace'))
            interAuthor = puncReplace(interDoc['author'].encode('ascii', 'xmlcharrefreplace'))
            blink(3)
            GPIO.output(20, True)
            print "...FOUND"

            grafList = [p for p in wcText.split('\n') if p]

            # Choose primary paragraph
            primaryText = min(grafList, key=lambda x: x.count('#'))
            url = 'word.camera/i/' + grafList[-1].strip().replace('#', '')

            # Print
            print "PRINTING..."
            startNo += 1
            printStr = "No. %i\n\n\n%s\n\n%s\n\n\n\nEPITAPH\n\n%s\n\nFrom %s by %s" % (startNo, primaryText, url, interText, interTitle, interAuthor)
            printer.println(printStr)

        except:
            print "SOMETHING BROKE"
            blink(15)

        GPIO.output(20, False)

Thanks to a transistor pulsing circuit that keeps the printer’s battery awake, and some code that automatically tethers the Raspberry Pi to my iPhone, the Fiction Camera is fully portable. I’ve been walking around Brooklyn and Manhattan over the past week making lexographs—the device is definitely a conversation starter. As a street photographer, I’ve noticed that people seem to be more comfortable having their photograph taken with it than with a standard camera, possibly because the visual image (and whether they look alright in it) is far less important.

As a result of these wanderings, I’ve accrued quite a large number of lexograph receipts. Earlier iterations of the receipt design contained longer versions of the word.camera output. Eventually, I settled on a version that contains a number (indicating how many lexographs have been taken since the device was last turned on), one paragraph of word.camera output, a URL to the word.camera page containing the photo + complete output, and a single high-relevance paragraph from a novel.

[Scans of lexograph receipts]

I also demonstrated the camera at ConvoHack, our final presentation event for Conversation and Computation, which took place at Babycastles gallery, and passed out over 50 lexograph receipts that evening alone.

[Photos from the ConvoHack demonstration]

Photographs by Karam Byun

Often, when photographing a person, the camera will output a passage from a novel featuring a character description that subjects seem to relate to. Many people have told me the results have qualities that remind them of horoscopes.

The Mechanical Turk’s Ghost, Part V – http://www.thehypertext.com/2015/01/05/the-mechanical-turks-ghost-part-v/ – Mon, 05 Jan 2015 02:13:40 +0000

For my final project in Automata with Nick Yulman, I completed work on my musical chess experience, the Mechanical Turk’s Ghost. Along with adding a case, I changed the music to an original score and added solenoids beneath the board that fire when the Stockfish chess engine detects one player is within range of checkmate.
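
The detection logic isn’t included in this post, but the idea can be sketched with the python-chess engine bindings. The pin number and mate threshold are illustrative, and the API shown is the current python-chess one rather than whatever the installation actually ran.

import time

import chess
import chess.engine
import RPi.GPIO as GPIO

SOLENOID_PIN = 18  # illustrative BCM pin

GPIO.setmode(GPIO.BCM)
GPIO.setup(SOLENOID_PIN, GPIO.OUT)

engine = chess.engine.SimpleEngine.popen_uci('stockfish')

def mate_in_range(board, threshold=5):
    # Ask Stockfish whether either player has a forced mate within `threshold` moves
    info = engine.analyse(board, chess.engine.Limit(depth=15))
    mate = info['score'].relative.mate()  # None when no forced mate is found
    return mate is not None and abs(mate) <= threshold

def fire_solenoid():
    GPIO.output(SOLENOID_PIN, True)
    time.sleep(0.05)
    GPIO.output(SOLENOID_PIN, False)

board = chess.Board()  # updated move by move during play
if mate_in_range(board):
    fire_solenoid()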

Here are some additional sketches and photos of the finished product:

[Sketches and photos of the finished product]

The drawer left ample space for a variable voltage power supply (for the solenoids), a pair of speakers (to amplify the music), and my MacBook Pro (to run Stockfish).

[Photos of the drawer]

Here’s a look beneath the board:

[Photos beneath the board]

Fiction Generator, Part IV – http://www.thehypertext.com/2014/12/21/fiction-generator-part-iv/ – Sun, 21 Dec 2014 03:04:53 +0000

Prior Installments:
Part I
Part II
Part III

For my final project in Comm Lab: Networked Media with Daniel Shiffman, I put the Fiction Generator online at fictiongenerator.com. VICE/Motherboard ran an article about my website, and I exhibited the project at the ITP Winter Show.

[Composite image]

After reading William S. Burroughs’ essay about the cut-up technique, I decided to implement an algorithmic version of it in the generator (sketched below). I also refactored my existing code and added a load screen, with this animation:

[Animation of a robot holding a book]
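
The cut-up itself reduces to a few lines: split the text into short runs of words, shuffle the runs, and splice them back together. A minimal sketch of the idea, not the generator’s exact implementation.

import random

def cut_up(text, chunk_words=4):
    # Burroughs-style cut-up: slice the text into short word runs,
    # shuffle them, and join the shuffled runs back into one stream
    words = text.split()
    chunks = [words[i:i + chunk_words]
              for i in range(0, len(words), chunk_words)]
    random.shuffle(chunks)
    return ' '.join(' '.join(chunk) for chunk in chunks)

print(cut_up('the quick brown fox jumps over the lazy dog '
             'while the cat watches from the windowsill'))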

I am running a Linux/Apache/Flask stack at the moment. Here’s a screen shot of the website in its current state:

[Screenshot of fictiongenerator.com]

Stenogloves, Part III – http://www.thehypertext.com/2014/12/09/stenogloves-part-iii/ – Tue, 09 Dec 2014 05:40:07 +0000

Prior Installments:
Part I
Part II

On Wednesday, I presented my progress thus far on the Stenogloves for my final project in Introduction to Physical Computing with Tom Igoe. Since my last post, I have connected the prototype keyboard to an Arduino Micro, developed an algorithm for translating chords into keystrokes, updated the typing tutor game I had demonstrated previously, and iterated through three chord layouts.

Here is the current prototype in action, with my final chord layout and updated typing tutor game:

After connecting the keyboard I discussed in my previous post to an Arduino Micro, I developed the following Arduino sketch for detecting chords and translating them into keystrokes:

int pins[10] = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11};
int keyStatus[10];
int keyStatus2[10];
boolean waiting = false;
char ctrlKey = KEY_LEFT_CTRL;

boolean alt = false;

//int chords[1024] = {0, 116, 115, 117, 114, 0, 0, 118, 111, 39, 62, 0, 112, 0, 113, 119, 32, 46, 58, 93, 59, 0, 0, 125, 44, 0, 9, 0, 91, 0, 123, 45, 0, 84, 83, 85, 82, 0, 0, 86, 79, 0, 0, 0, 80, 0, 81, 87, 10, 0, 0, 0, 0, 0, 0, 0, 63, 47, 0, 0, 0, 0, 0, 0, 110, 0, 0, 0, 0, 0, 0, 0, 120, 0, 0, 0, 0, 0, 0, 0, 49, 0, 0, 0, 0, 0, 0, 0, 43, 0, 0, 0, 0, 0, 0, 0, 78, 0, 0, 0, 0, 0, 0, 0, 88, 0, 0, 0, 0, 0, 0, 0, 64, 92, 0, 0, 0, 0, 0, 0, 61, 0, 0, 0, 0, 0, 0, 0, 105, 0, 0, 0, 0, 0, 0, 0, 121, 0, 0, 0, 0, 0, 0, 0, 50, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 73, 0, 0, 0, 0, 0, 0, 0, 89, 0, 0, 0, 0, 0, 0, 0, 35, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 106, 0, 0, 0, 0, 0, 0, 0, 107, 0, 0, 0, 108, 0, 109, 0, 51, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 74, 0, 0, 0, 0, 0, 0, 0, 75, 0, 0, 0, 76, 0, 77, 0, 36, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 124, 0, 0, 0, 101, 0, 0, 0, 0, 0, 0, 0, 122, 0, 0, 0, 0, 0, 0, 0, 52, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 69, 0, 0, 0, 0, 0, 0, 0, 90, 0, 0, 0, 0, 0, 0, 0, 37, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 60, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 53, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 94, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 102, 0, 0, 0, 0, 0, 96, 0, 0, 0, 0, 0, 0, 0, 0, 0, 54, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 70, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 38, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 103, 0, 0, 0, 0, 0, 0, 0, 104, 0, 0, 0, 0, 0, 0, 0, 55, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 71, 0, 0, 0, 0, 0, 0, 0, 72, 0, 0, 0, 0, 0, 0, 0, 42, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 97, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 56, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 65, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 40, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 34, 0, 0, 0, 0, 0, 0, 0, 0, 126, 0, 0, 0, 0, 0, 0, 57, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 41, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 48, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 33, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 98, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 66, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 99, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 67, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 100, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 95, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 68, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 500};
//int chords[1024] = {0, 116, 115, 117, 114, 0, 0, 118, 111, 39, 62, 0, 112, 0, 113, 119, 32, 46, 58, 93, 59, 0, 0, 125, 44, 0, 9, 0, 91, 0, 123, 45, 0, 84, 83, 85, 82, 0, 0, 86, 79, 0, 0, 0, 80, 0, 81, 87, 10, 0, 0, 0, 0, 0, 0, 0, 63, 47, 0, 0, 0, 0, 0, 0, 110, 120, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 49, 0, 0, 0, 0, 0, 0, 0, 43, 0, 0, 0, 0, 0, 0, 0, 78, 88, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 64, 92, 0, 0, 0, 0, 0, 0, 61, 0, 0, 0, 0, 0, 0, 0, 105, 109, 108, 0, 107, 0, 0, 0, 106, 0, 0, 0, 0, 0, 0, 0, 50, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 73, 77, 76, 0, 75, 0, 0, 0, 74, 0, 0, 0, 0, 0, 0, 0, 35, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 51, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 36, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 124, 0, 0, 0, 101, 121, 104, 0, 103, 0, 0, 0, 102, 0, 0, 0, 0, 0, 0, 0, 52, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 69, 89, 72, 0, 71, 0, 0, 0, 70, 0, 0, 0, 0, 0, 0, 0, 37, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 60, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 53, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 94, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 96, 0, 0, 0, 0, 0, 0, 0, 0, 0, 54, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 38, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 55, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 42, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 97, 122, 100, 0, 99, 0, 0, 0, 98, 0, 0, 0, 0, 0, 0, 0, 56, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 65, 90, 68, 0, 67, 0, 0, 0, 66, 0, 0, 0, 0, 0, 0, 0, 40, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 34, 0, 0, 0, 0, 0, 0, 0, 0, 126, 0, 0, 0, 0, 0, 0, 57, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 41, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 48, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 33, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 95, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 500};
int chords[1024] = {0, 116, 115, 0, 114, 0, 0, 0, 111, 39, 62, 0, 112, 0, 113, 0, 32, 46, 58, 93, 59, 0, 0, 125, 44, 0, 9, 0, 91, 0, 123, 45, 0, 84, 83, 0, 82, 0, 0, 0, 79, 0, 0, 0, 80, 0, 81, 0, 10, 0, 0, 0, 0, 0, 0, 0, 63, 47, 0, 0, 0, 0, 0, 0, 110, 117, 118, 0, 119, 0, 0, 0, 120, 0, 0, 0, 0, 0, 0, 0, 49, 0, 0, 0, 0, 0, 0, 0, 43, 0, 0, 0, 0, 0, 0, 0, 78, 85, 86, 0, 87, 0, 0, 0, 88, 0, 0, 0, 0, 0, 0, 0, 64, 92, 0, 0, 0, 0, 0, 0, 61, 0, 0, 0, 0, 0, 0, 0, 105, 109, 108, 0, 107, 0, 0, 0, 106, 0, 0, 0, 0, 0, 0, 0, 50, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 73, 77, 76, 0, 75, 0, 0, 0, 74, 0, 0, 0, 0, 0, 0, 0, 35, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 51, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 36, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 124, 0, 0, 0, 101, 121, 104, 0, 103, 0, 0, 0, 102, 0, 0, 0, 0, 0, 0, 0, 52, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 69, 89, 72, 0, 71, 0, 0, 0, 70, 0, 0, 0, 0, 0, 0, 0, 37, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 60, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 53, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 94, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 96, 0, 0, 0, 0, 0, 0, 0, 0, 0, 54, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 38, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 55, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 42, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 97, 122, 100, 0, 99, 0, 0, 0, 98, 0, 0, 0, 0, 0, 0, 0, 56, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 65, 90, 68, 0, 67, 0, 0, 0, 66, 0, 0, 0, 0, 0, 0, 0, 40, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 34, 0, 0, 0, 0, 0, 0, 0, 0, 126, 0, 0, 0, 0, 0, 0, 57, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 41, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 48, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 33, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 95, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 500};

void setup() {
  for (int i=0; i<10; i++) {
    pinMode(pins[i], INPUT_PULLUP);
  }
  Keyboard.begin();
}

void loop() {
  checkKeys();
  if (keyPressed()) {
    waitForRelease();
  } else {
    waiting = true;
  }
  
}

void checkKeys() {
  for (int i=0; i<10; i++) {
    int keyState = digitalRead(pins[i]);
    if (keyState == HIGH) {
      keyStatus[i] = 0;
    } else {
      keyStatus[i] = 1;
    }
  }
}

void checkKeys2() {
  for (int i=0; i<10; i++) {
    int keyState = digitalRead(pins[i]);
    if (keyState == HIGH) {
      keyStatus2[i] = 0;
    } else {
      keyStatus2[i] = 1;
    }
  }
}

void waitForRelease() {
  checkKeys();
  delay(10);
  checkKeys2();
  boolean released = oneToZero();
  while (!released) {
    checkKeys();
    delay(10);
    checkKeys2();
    released = oneToZero();
  }
  if (waiting) recordChord();
  waiting = false;
  delay(10);
}

void recordChord() {
  int ch = convert_bin2dec();
  int toType = chords[ch];
  if (toType < 256) {
    Keyboard.write(toType);
  } else {
    
    if (toType == 500) {
      alt = !alt;
      Keyboard.press(ctrlKey);
      delay(100);
      Keyboard.releaseAll();
    }    
    
  }
}

boolean keyPressed() {
  boolean kp = false;
  for (int i=0; i<10; i++) {
    if (keyStatus[i] == 1) kp = true;
  }
  return kp;
}

boolean oneToZero() {
  boolean released = false;
  for (int i=0; i<10; i++) {
    if (keyStatus[i] == 1 && keyStatus2[i] == 0) {
      released = true;
    }
  }
  return released;
}

int convert_bin2dec() {
    int val = 0;
    for ( int i = 0; i<=9 ; ++i ) {
        val = (val << 1) | keyStatus[i];
    }
    return val;
}

I experimented with a number of possible solutions involving timing windows in which a chord would be detected. However, I eventually determined that the best solution would involve detecting the chord upon key release rather than key press. The sketch above waits for any key to be released, then records the chord detected immediately prior to release.

Note that there are three arrays named “chords”—the first two are commented out. Unfortunately, the Arduino Micro’s limited storage capacity could not accommodate more than one 1024-unit integer array of chords at a time. Thus, switching between potential chord layouts required uploading a new sketch to the Arduino each time.

[Screenshot of the typing tutor game]

After developing the Arduino software, I updated the typing tutor game for use with the keyboard. Rather than a timed animation, I changed the code so that the text cursor doesn’t advance to the next letter until the prior letter has been typed. Additionally, I implemented a score system based on chord accuracy, a hint screen that pops up for 3 seconds after the grave accent (a.k.a. backtick: ‘`’, which is currently left and right ring and middle fingers together) is typed, and an “easy mode” in which the hint screen is displayed constantly and score is not kept.

After receiving feedback on my initial chord layout in class, I decided to try a new layout that included more two-finger chords (rather than three- and four-finger chords), with more coordination between right and left hands. Here is the raw JSON file for this layout:

{"\b": 640, " ": 16, "$": 240, "(": 560, ",": 24, "0": 656, "4": 272, "8": 528, "<": 320, "@": 112, "D": 546, "H": 290, "L": 162, "P": 44, "T": 33, "X": 97, "\\": 113, "`": 390, "d": 514, "h": 258, "l": 130, "p": 12, "t": 1, "x": 65, "|": 252, "#": 176, "'": 9, "+": 88, "/": 57, "3": 208, "7": 464, ";": 20, "?": 56, "C": 548, "G": 292, "K": 164, "O": 40, "S": 34, "W": 47, "[": 28, "_": 976, "c": 516, "g": 260, "k": 132, "o": 8, "s": 2, "w": 15, "{": 30, "\n": 48, "\"": 576, "&": 432, "*": 496, ".": 17, "2": 144, "6": 400, ":": 18, ">": 10, "B": 552, "F": 296, "J": 168, "N": 96, "R": 36, "V": 39, "Z": 545, "^": 368, "b": 520, "f": 264, "j": 136, "n": 64, "r": 4, "v": 7, "z": 513, "~": 585, "\t": 26, "!": 688, "%": 304, ")": 624, "-": 31, "1": 80, "5": 336, "9": 592, "=": 120, "A": 544, "E": 288, "I": 160, "M": 161, "Q": 46, "U": 35, "Y": 289, "]": 19, "a": 512, "e": 256, "i": 128, "m": 129, "q": 14, "u": 3, "y": 257, "}": 23}

As with the initial version, each typed character has a corresponding integer, which is translated into a 10-digit binary number corresponding to the 10-finger chord that must be typed.
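
For example, decoding one of those integers back into fingers. The finger order below is inferred from the mapping itself ('a' = 512 is the left pinky alone, and space = 16 is the right thumb), so treat it as a reconstruction rather than the project's own code.

def chord_fingers(value):
    # Expand a chord integer into its 10-bit finger pattern:
    # most significant bit = left pinky, least significant bit = right pinky
    fingers = ['LP', 'LR', 'LM', 'LI', 'LT', 'RT', 'RI', 'RM', 'RR', 'RP']
    bits = format(value, '010b')
    return [f for f, b in zip(fingers, bits) if b == '1']

print(chord_fingers(512))  # 'a' -> ['LP']
print(chord_fingers(544))  # 'A' -> ['LP', 'LT'] (left thumb is shift)
print(chord_fingers(1))    # 't' -> ['RP']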

I tested this layout extensively, and found that test subjects preferred it (almost) unanimously to the initial layout. The exact reasons varied, but I observed that individuals had an easier time typing two-finger chords than chords that involved three or more fingers. Typing speed was also 50% faster on average compared to the initial layout.

Accounting for these observations, I set out to devise another improved layout that would incorporate even more two-finger chords. In the prior layout, the letters ‘V’ and ‘W’ still involved three- and four-finger combinations. In this layout, all letters except for ‘P’ and ‘Q’ involve two-finger combinations with the left and right hands together.

{"\b": 640, " ": 16, "$": 240, "(": 560, ",": 24, "0": 656, "4": 272, "8": 528, "<": 320, "@": 112, "D": 546, "H": 290, "L": 162, "P": 44, "T": 33, "X": 104, "\\": 113, "`": 390, "d": 514, "h": 258, "l": 130, "p": 12, "t": 1, "x": 72, "|": 252, "#": 176, "'": 9, "+": 88, "/": 57, "3": 208, "7": 464, ";": 20, "?": 56, "C": 548, "G": 292, "K": 164, "O": 40, "S": 34, "W": 100, "[": 28, "_": 976, "c": 516, "g": 260, "k": 132, "o": 8, "s": 2, "w": 68, "{": 30, "\n": 48, "\"": 576, "&": 432, "*": 496, ".": 17, "2": 144, "6": 400, ":": 18, ">": 10, "B": 552, "F": 296, "J": 168, "N": 96, "R": 36, "V": 98, "Z": 545, "^": 368, "b": 520, "f": 264, "j": 136, "n": 64, "r": 4, "v": 66, "z": 513, "~": 585, "\t": 26, "!": 688, "%": 304, ")": 624, "-": 31, "1": 80, "5": 336, "9": 592, "=": 120, "A": 544, "E": 288, "I": 160, "M": 161, "Q": 46, "U": 97, "Y": 289, "]": 19, "a": 512, "e": 256, "i": 128, "m": 129, "q": 14, "u": 65, "y": 257, "}": 23}

If I move forward with the Stenogloves, the layout above is most likely what I will integrate. Much work remains on the punctuation marks, which I am not yet satisfied with. Backspace, in particular, involves an awkward two-finger combination with the left hand, and that can be improved with the real estate gained from the new keyboard layout. In a new version of the keyboard layout, most of the punctuation marks would, in fact, resemble the alphabetical characters from the initial version of the layout (simple two- and three-finger chords).

keyboard-top

keyboard-side

Stenogloves, Part II http://www.thehypertext.com/2014/11/20/stenogloves-part-ii/ Thu, 20 Nov 2014 16:47:30 +0000 http://www.thehypertext.com/?p=341 For my final project in Introduction to Physical Computing, I am making a set of chorded keyboard gloves for quick typing in any setting.

For my final project in Introduction to Physical Computing, I had discussed creating a navigation system for a 3D browser using a pair of gloves with force-sensitive resistors in the fingertips. After further consideration and several discussions with Tom Igoe, I have altered my plan for this project.

For me, the most interesting part was going to be the proposed “typing mode” for the gloves. So, I’m going to focus on that part alone—making a pair of general-purpose typing gloves, or “stenogloves” as I’ve begun calling them.

My first step was to develop a chorded, 10-key typing system and a simple typing tutor game to learn the system. To accomplish this, I examined the Google Ngram data on English letter frequency. The data set spans over 3.5 trillion letters; here are the frequency counts for each letter:

Screen Shot 2014-11-20 at 10.17.50 AM

[Chart via Peter Norvig]

At first, I attempted to create the typing system using the simplest one- and two-finger chords, assigning the easiest chords to the most frequent letters: single-finger chords for the most common letters, simple two-finger chords for less common letters, and more complex two- and three-finger chords for the least common ones. After creating this initial draft of the typing system, I attempted to mime my way through the alphabet, only to discover that such a system would be incredibly difficult to learn.

The system needed a common reference point, ideally one that would allow for a mnemonic to make the system easy to learn, so I decided to try an alphabetical orientation. In this scheme, the eight most common letters in alphabetical order ('A', 'E', 'I', 'N', 'O', 'R', 'S', 'T', which together account for 65% of all keystrokes) are mapped to single-finger chords on the eight non-thumb fingers, in sequential order from left pinky to right pinky with palms facing down. Up to the letter 'T', the letters that fall between these eight key letters are typed by adding the appropriate number of fingers to the preceding single-finger chord. (For example, 'A' is left pinky alone, 'B' is left pinky + left ring, 'C' is left pinky + left ring + left middle, and so on.) After the 'T' chord, the system continues in the opposite direction: right pinky + right ring = 'U', and so on. 'X', 'Y', and 'Z' have special chords (left index + right index, left middle + right index, left ring + right index) because of where they fall in the alphabet. The left thumb is reserved for shift, and the right thumb is reserved for space / numbers / punctuation.

After creating this system, I found I could mime my way through the alphabet very quickly and easily, which suggests the system itself can be learned without much difficulty. I also added chords for the numbers 0-9 and every punctuation mark, turned all the chords into 10-digit binary numbers, and converted these numbers to integers so the whole mapping can be read into any computer program as a JSON file.

Here is the Python script I used to generate that JSON file:

import json

aToZ = {'A': ['LP'],
		'B': ['LP','LR'],
		'C': ['LP','LR','LM'],
		'D': ['LP','LR','LM','LI'],
		'E': ['LR'],
		'F': ['LR','LM'],
		'G': ['LR','LM','LI'],
		'H': ['LR','LM','LI','RI'],
		'I': ['LM'],
		'J': ['LM','LI'],
		'K': ['LM','LI','RI'],
		'L': ['LM','LI','RI','RM'],
		'M': ['LM','LI','RI','RM','RR'],
		'N': ['LI'],
		'O': ['RI'],
		'P': ['RI','RM'],
		'Q': ['RI','RM','RR'],
		'R': ['RM'],
		'S': ['RR'],
		'T': ['RP'],
		'U': ['RR','RP'],
		'V': ['RM','RR','RP'],
		'W': ['RI','RM','RR','RP'],
		'X': ['LI','RI'],
		'Y': ['LM','RI'],
		'Z': ['LR','RI']}

aToZbin = {'a':[],'b':[],'c':[],'d':[],'e':[],'f':[],'g':[],'h':[],'i':[],'j':[],'k':[],'l':[],'m':[],'n':[],'o':[],'p':[],'q':[],'r':[],'s':[],'t':[],'u':[],'v':[],'w':[],'x':[],'y':[],'z':[]}

# Chord bit order: left pinky, ring, middle, index, left thumb,
# right thumb, then right index, middle, ring, pinky
fingers = ['LP', 'LR', 'LM', 'LI', 'LT', 'RT', 'RI', 'RM', 'RR', 'RP']

# lower case letters: one binary digit per finger, 1 = pressed
for key, value in aToZ.iteritems():
	for finger in fingers:
		if finger in value:
			aToZbin[key.lower()].append(1)
		else:
			aToZbin[key.lower()].append(0)

# capital letters: the lowercase chord plus the left thumb (shift)
for key in aToZbin.keys():
	l = aToZbin[key]
	m = l[:]
	m[4] = 1
	aToZbin[key.upper()] = m

# numbers 0-9: right thumb plus a left-hand combination
aToZbin[0] = [1,0,1,0,0,1,0,0,0,0]
aToZbin[1] = [0,0,0,1,0,1,0,0,0,0]
aToZbin[2] = [0,0,1,0,0,1,0,0,0,0]
aToZbin[3] = [0,0,1,1,0,1,0,0,0,0]
aToZbin[4] = [0,1,0,0,0,1,0,0,0,0]
aToZbin[5] = [0,1,0,1,0,1,0,0,0,0]
aToZbin[6] = [0,1,1,0,0,1,0,0,0,0]
aToZbin[7] = [0,1,1,1,0,1,0,0,0,0]
aToZbin[8] = [1,0,0,0,0,1,0,0,0,0]
aToZbin[9] = [1,0,0,1,0,1,0,0,0,0]



# symbols !-): the corresponding digit chord plus the left thumb (shift)
num_symbols = ['!', '@', '#', '$', '%', '^', '&', '*', '(', ')']
for key in aToZbin.keys():
	if key in range(10):
		l = aToZbin[key]
		m = l[:]
		m[4] = 1
		aToZbin[num_symbols[key]] = m

# space and return
aToZbin[' '] = [0,0,0,0,0,1,0,0,0,0]
aToZbin['\n'] = [0,0,0,0,1,1,0,0,0,0]

# / ?
aToZbin['/'] = [0,0,0,0,1,1,1,0,0,1]
aToZbin['?'] = [0,0,0,0,1,1,1,0,0,0]

# = +
aToZbin['='] = [0,0,0,1,1,1,1,0,0,0]
aToZbin['+'] = [0,0,0,1,0,1,1,0,0,0]

# < >
aToZbin['<'] = [0,1,0,1,0,0,0,0,0,0]
aToZbin['>'] = [0,0,0,0,0,0,1,0,1,0]

# [ ]
aToZbin['['] = [0,0,0,0,0,1,1,1,0,0]
aToZbin[']'] = [0,0,0,0,0,1,0,0,1,1]

# { }
aToZbin['{'] = [0,0,0,0,0,1,1,1,1,0]
aToZbin['}'] = [0,0,0,0,0,1,0,1,1,1]

# " '
aToZbin['\"'] = [1,0,0,1,0,0,0,0,0,0]
aToZbin['\''] = [0,0,0,0,0,0,1,0,0,1]

# , ; : . (and +space)
aToZbin[','] = [0,0,0,0,0,1,1,0,0,0]
aToZbin[';'] = [0,0,0,0,0,1,0,1,0,0]
aToZbin[':'] = [0,0,0,0,0,1,0,0,1,0]
aToZbin['.'] = [0,0,0,0,0,1,0,0,0,1]
aToZbin[', '] = [1,0,0,0,0,1,1,0,0,0]
aToZbin['; '] = [1,0,0,0,0,1,0,1,0,0]
aToZbin[': '] = [1,0,0,0,0,1,0,0,1,0]
aToZbin['. '] = [1,0,0,0,0,1,0,0,0,1]

# underscore, dash, ndash, mdash
aToZbin['_'] = [1,1,1,1,0,1,0,0,0,0]
aToZbin['-'] = [0,0,0,0,0,1,1,1,1,1]
aToZbin[u"\u2013"] = [0,0,1,1,0,1,1,1,0,0]
aToZbin[u"\u2014"] = [0,1,1,1,0,1,1,1,1,0]

# \ |
aToZbin['\\'] = [0,0,0,1,1,1,0,0,0,1]
aToZbin['|'] = [0,0,1,1,1,1,1,1,0,0]

# ~ `
aToZbin['~'] = [1,0,0,1,0,0,1,0,0,1]
aToZbin['`'] = [0,1,1,0,0,0,0,1,1,0]


# print the table and verify that no two characters share a chord

print aToZbin

jb = aToZbin.values()

print len(jb)

unique_jb = []
duplicate = []

for i in jb:
	if i not in unique_jb:
		unique_jb.append(i)
	else:
		duplicate.append(i)

print len(unique_jb)

print duplicate


# Convert each chord's binary digits to an integer and build a reverse lookup

chords = {}
lookup = [[] for _ in range(1024)]

for key, value in aToZbin.iteritems():
	number = int(''.join([str(i) for i in value]), 2)
	chords[unicode(key)] = number
	lookup[number].append(key)

print chords
print lookup

with open('data.txt', 'w') as outfile:
	json.dump(chords, outfile)

And here’s the JSON file:

{": ": 530, "\u2014": 478, " ": 16, "$": 240, "(": 560, ",": 24, "0": 656, "4": 272, "8": 528, "<": 320, "@": 112, "D": 992, "H": 488, "L": 236, "P": 44, "T": 33, "X": 104, "\\": 113, "`": 390, "d": 960, "h": 456, "l": 204, "p": 12, "t": 1, "x": 72, "|": 252, "\u2013": 220, "#": 176, "'": 9, "+": 88, "/": 57, "3": 208, "7": 464, ";": 20, "?": 56, "C": 928, "G": 480, "K": 232, "O": 40, "S": 34, "; ": 532, "W": 47, "[": 28, "_": 976, "c": 896, "g": 448, "k": 200, "o": 8, "s": 2, "w": 15, "{": 30, "\n": 48, "\"": 576, "&": 432, ". ": 529, "*": 496, ".": 17, "2": 144, "6": 400, ":": 18, ">": 10, "B": 800, "F": 416, "J": 224, "N": 96, "R": 36, "V": 39, "Z": 296, "^": 368, "b": 768, "f": 384, "j": 192, "n": 64, "r": 4, "v": 7, "z": 264, "~": 585, "!": 688, "%": 304, ", ": 536, ")": 624, "-": 31, "1": 80, "5": 336, "9": 592, "=": 120, "A": 544, "E": 288, "I": 160, "M": 238, "Q": 46, "U": 35, "Y": 168, "]": 19, "a": 512, "e": 256, "i": 128, "m": 206, "q": 14, "u": 3, "y": 136, "}": 23}

Using this data, I made a simple typing tutor game in Processing that pulls the text from any news article on the web for users to type out. The code is available on Github.
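
The Processing source is on Github rather than reproduced here, but the article-fetching step can be approximated in a few lines of Python (a hypothetical sketch, not the game's actual code; the URL is a stand-in):

import re
import urllib2

# Fetch an article page and crudely extract its paragraph text
url = 'http://example.com/article'  # stand-in URL
html = urllib2.urlopen(url).read()

paragraphs = re.findall(r'<p[^>]*>(.*?)</p>', html, re.DOTALL)
text = ' '.join(re.sub(r'<[^>]+>', '', p) for p in paragraphs)

print(text[:200])  # the tutor feeds text like this to the player, chord by chord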

The next step was to begin tinkering with actual gloves, and so I purchased a pair of inexpensive motorcycle gloves to experiment with.

oneal gloves

I also needed to decide on an actuation method for each fingertip. I chose mechanical keyboard switches over force-sensitive resistors because I knew the switches would be easier to work with and would provide better tactile feedback for users. After a significant amount of research on mechanical keyboard components, I settled on Cherry MX Blue switches for their tactile feel and clicky responsiveness.

Here is a cross-sectional GIF of a Cherry MX Blue switch:

Blue

Tom Igoe suggested I build a simple keyboard before attaching keys to the gloves. However, I was eager to begin working with the gloves, so I turned the right one into a mouse glove using parts from a wireless mouse I purchased. Next, I plan to mount an accelerometer on the left glove, then mount keyboard switches to the fingertips on both gloves.

image_3

image_1

image_8

image_17

After playtesting the mouse glove, I built a 10-key keyboard by mounting Cherry MX Blue switches to a wooden board. I still need to connect the switches to an Arduino in order to test this keyboard, which I hope to do very soon.
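
Once the switches are wired to the Arduino, the host side of the test can stay simple. The sketch below is a rough plan rather than working project code: it assumes, hypothetically, that the Arduino debounces the switches and prints each completed chord as one integer per line over serial, and it uses the pyserial library with a port name that will vary by machine.

import json
import serial  # pyserial

# Load the chord layout (character -> integer) and invert it for lookup
with open('data.txt') as f:
    chords = json.load(f)
int_to_char = dict((v, k) for k, v in chords.items())

# Port name and baud rate are assumptions; adjust for your setup
port = serial.Serial('/dev/tty.usbmodem1411', 9600)

while True:
    line = port.readline().strip()
    if not line:
        continue
    chord = int(line)  # the 10-finger chord, already encoded as an integer
    if chord in int_to_char:
        print(int_to_char[chord])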

image_26

image_27

image_30

image_35

General Update http://www.thehypertext.com/2014/09/29/general-update/ Mon, 29 Sep 2014 06:24:41 +0000 http://www.thehypertext.com/?p=177 I've been so busy the past two weeks that I failed to update this blog. But documentation is important, and that's why I'm going to take a moment to fill you in on all my recent activities. This post will cover all the projects I've been working on.

I’ve been so busy the past two weeks that I failed to update this blog. But documentation is important, and that’s why I’m going to take a moment to fill you in on all my recent activities. This post will cover all the projects I’ve been working on, primarily:

  • Applications Presentation on September 16
  • ITP Code Poetry Slam on November 14
  • The Mechanical Turk’s Ghost
  • Che55

On Tuesday, September 16, I helped deliver a presentation to our class in Applications. Yingjie Bei, Rebecca Lieberman, and Supreet Mahanti were in my group, and we used my Poetizer software to create an interactive storytelling exercise for the entire audience. Sarah Rothberg was kind enough to record the presentation, and Rebecca posted it on Vimeo.

I’ve also been organizing an ITP Code Poetry Slam, which will take place at 6:30pm on November 14. Submissions are now open, and I’m hoping the event will serve as a conduit for productive dialogue between the fields of poetry and computer science. Announcements regarding judges, special guests, and other details to come.

Various explorations related to the Mechanical Turk’s Ghost [working title] have consumed the rest of my time. While I wait for all the electronic components I need to arrive, I have been focusing on the software aspects of the project, along with some general aspects of the hardware.

The first revision to the preliminary design I sketched out in my prior post resulted from a friend's suggestion. Rather than using conductive pads on the board, I now plan to use Hall effect sensors mounted beneath the board that will react to tiny neodymium magnets embedded in each chess piece. If everything works properly, this design should be far less visible, and thus less intrusive to the overall experience. I ordered 100 sensors and 500 magnets, and I look forward to experimenting with them when they arrive.

In the meantime, the parts I listed in my prior post arrived, and I was especially excited to begin working with the Raspberry Pi. I formatted an 8GB SD card and put NOOBS on it, then booted up the Raspberry Pi and installed Raspbian, a free operating system based on Debian Linux that is optimized for the Pi’s hardware.

r_pi

The Stockfish chess engine will be a major component of this project, and I was concerned that its binaries would not compile on the Raspberry Pi. The makefile documentation listed a number of options for system architecture, none of which exactly matched the ARM v6 chip on the Raspberry Pi.

First, I tried the "ARMv7" option. The compiler ran for about 10 minutes before experiencing errors and failing. I then tried several other options, none of which worked. I was about to give up completely and resign myself to running the chess engine on my laptop, when I noticed the "profile-build" option. I had never heard of profile-guided optimization (PGO), but I tried using the command "make profile-build" rather than "make build" along with the option for unspecified 32-bit architecture. This combination allowed Stockfish to compile without any issues. Here is the command that I used (from the /Stockfish/src folder):

$ make profile-build ARCH=general-32

With Stockfish successfully compiled on the Raspberry Pi, I copied the binary executable to the system path (so that I could script the engine using the Python subprocess library), then tried running the Python script I wrote to control Stockfish. It worked without any issues:

ghost
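
The control script itself isn't reproduced here, but its core is plain UCI spoken over a pipe. Here is a minimal sketch of the idea, assuming the stockfish binary is on the path (the position and search depth are arbitrary examples):

import subprocess

# Start the engine; universal_newlines gives us text-mode pipes
engine = subprocess.Popen(['stockfish'],
                          stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE,
                          universal_newlines=True)

def send(command):
    engine.stdin.write(command + '\n')
    engine.stdin.flush()

send('uci')
send('position startpos moves e2e4 e7e5')
send('go depth 12')

# 'info' lines carry a centipawn score; 'bestmove' ends the search
score = None
while True:
    line = engine.stdout.readline().strip()
    if ' score cp ' in line:
        score = int(line.split(' score cp ')[1].split()[0])
    if line.startswith('bestmove'):
        break

print(score)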

My next set of explorations revolved around the music component of the project. As I specified in my prior post, I want the device to generate music. I took some time to consider what type of music would be most appropriate, and settled on classical music as a starting point. Classical music is ideal because so many great works are in the public domain, and because so many serious chess players enjoy listening to it during play. (As anecdotal evidence, the Chess Forum in Greenwich Village, a venue where chess players congregate to play at all hours of the day and night, plays nothing but classical music all the time. I have been speaking to one of the owners of the Chess Forum about demonstrating my prototype device there once it is constructed.)

Generating a classical music mashup using data from the game in progress was the first idea I pursued. For this approach, I imagined that two classical music themes (one for black, one for white) could be combined in a way that reflected the relative strength of each side at any given point in the game. (A more complex approach might involve algorithmic music generation, but I am not ready to pursue that option just yet.) Before pursuing any prototyping or experimentation, I knew that the two themes would need to be suitably different (so as to distinguish one from the other) but also somewhat complementary in order to create a pleasant listening experience. A friend of mine who studies music suggested pairing one song (or symphony or concerto) in a major key with another song in the relative minor key.

Using YouTube Mixer, I was able to prototype the overall experience by fading back and forth between two songs. I started by pairing Beethoven's Symphony No. 9 with Rachmaninoff's Piano Concerto No. 3, and I was very satisfied with the results of hearing the two pieces played at once.

I then worked on creating a music mashup script to pair with my chess engine script. My requirements seemed very simple: I would need a script that could play two sound files at once and control their respective volume levels independently, based on the fluctuations in the score calculated by the chess engine. The script would also need to be able to run on the Raspberry Pi.

These requirements ended up being more difficult to fulfill than I anticipated. I explored many Python audio libraries, including pyo, PyFluidSynth, mingus, and pygame's mixer module. I also looked into driving SoX, a command-line audio utility, through the Python subprocess library. Unfortunately, every option was either too complex for the job or too limited to perform the required tasks.

Finally, on Gabe Weintraub’s suggestion, I looked into using Processing for my audio requirements and discovered a library called Minim that could do everything I needed. I then wrote the following Processing sketch:

import ddf.minim.*;

Minim minim1;
Minim minim2;
AudioPlayer player1;
AudioPlayer player2;

float gain1 = 0.0;
float gain2 = 0.0;
float tgtGain1 = 0.0;
float tgtGain2 = 0.0;
float level1 = 0.0;
float level2 = 0.0;
float lvlAdjust = 0.0;

BufferedReader reader;
String line;
float score = 0;

void setup() {
  minim1 = new Minim(this);
  minim2 = new Minim(this);
  player1 = minim1.loadFile("valkyries.mp3");
  player2 = minim2.loadFile("Rc3_1.mp3");
  player1.play();
  player1.setGain(-80.0);
  player2.play();
  player2.setGain(6.0);
}

void draw() {
  reader = createReader("score.txt");
  try {
    line = reader.readLine();
  } catch (IOException e) {
    e.printStackTrace();
    line = null;
  }
  print(line);
  if (line != null) {
    score = float(line);  // latest engine evaluation, in centipawns
  }
  
  level1 = (player1.left.level() + player1.right.level()) / 2;
  level2 = (player2.left.level() + player2.right.level()) / 2;  

  // Map the engine score to target gains, then adjust for the relative
  // loudness of the two tracks at this instant
  lvlAdjust = map(level1 - level2, -0.2, 0.2, -1, 1);
  tgtGain1 = map(score, -1000, 1000, -30, 6);
  tgtGain2 = map(score, 1000, -1000, -30, 6);
  tgtGain1 = tgtGain1 * (lvlAdjust + 1);
  tgtGain2 = tgtGain2 / (lvlAdjust + 1);
  
  gain1 = player1.getGain();
  gain2 = player2.getGain();
  
  print(' ');
  print(gain1);
  print(' ');
  print(gain2);
  print(' ');
  print(level1);
  print(' ');
  println(level2);
  
  if (level2 > level1) {
    tgtGain2 -= 0.1;
  } else if (level1 > level2) {
    tgtGain1 -= 0.1;
  }
  
  player1.setGain(tgtGain1);
  player2.setGain(tgtGain2);
}

The script above reads score values from a file created by the Python script that controls the chess engine. The score values are then mapped to gain levels for each of the two tracks that are playing. I input a chess game move by move into the terminal, and the combination of scripts worked as intended by fading between the two songs based on the relative positions of white and black in the chess game.
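
The handoff between the two programs is just a file write on the Python side, something along these lines (a sketch; the original script isn't shown):

# Overwrite score.txt after each move so the Processing sketch picks
# up the latest engine evaluation on its next draw() pass
def write_score(score):
    with open('score.txt', 'w') as f:
        f.write(str(score))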

Unfortunately, a broader issue with my overall approach became highly apparent: the dynamic qualities of each song overshadowed most of the volume changes that occurred as a result of the game. In other words, each song got louder and quieter at various points by itself, and that was more noticeable than the volume adjustments the script was making. I attempted to compensate for these natural volume changes by normalizing the volume of each song based on its relative level compared to the other song (the level measurements and gain adjustments in the draw() function above). This did not work as effectively as I hoped, and resulted in some very unpleasant sound distortions.

After conferring with my Automata instructor, Nick Yulman, I have decided to take an alternate approach. Rather than playing two complete tracks and fading between them, I plan to record stems (individual instrument recordings) using the relevant MIDI files, and then create loop tracks that will be triggered at various score thresholds. I am still in the process of exploring this approach and will provide a comprehensive update in the near future.

In the meantime, I have been learning about using combinations of digital and analog inputs and outputs with the Arduino, and using various input sensors to control motors, servos, solenoids, and RGB LEDs:

photo 3

In Introduction to Computational Media, we are learning about object oriented programming, and Dan Shiffman asked us to create a Processing sketch using classes and objects this week. As I prepare to create a physical chessboard, I thought it would be appropriate to make a software version to perform tests. Che55 (which I named with 5s as an homage to Processing's original name, "Proce55ing") was the result.

che55

Che55 is a fully functional chess GUI, written in Processing. Only legal moves can be made, and special moves such as en passant, castling, and pawn promotion have been accounted for. I plan to link Che55 with Stockfish in order to create chess visualizations and provide game analysis, and to prototype various elements of the Mechanical Turk's Ghost, including the musical component. I left plenty of space around the board for additional GUI elements, which I'm currently working on implementing. All of the code is available on Github.

Unfortunately, I cannot claim credit for the chess piece designs. Rather, I was inspired by an installation I saw at the New York MoMA two weeks ago called Thinking Machine 4 by Martin Wattenberg and Marek Walczak (also written in Processing).

That’s all for now. Stay tuned for new posts about each of these projects. I will try to keep this blog more regularly updated so there (hopefully) will be no need for future multi-project megaposts like this one. Thanks for reading.
