Netflix for Robots http://www.thehypertext.com/2015/12/10/netflix-for-robots/ Thu, 10 Dec 2015 06:08:24 +0000

For my final project in Learning Machines, I forced a deep learning machine to watch every episode of The X-Files.

Watching every episode of The X-Files in high school, on Netflix DVDs that came in the mail (remember those?), seemed like the thing to do. It was a great show, with nine seasons of 20+ episodes apiece. So it only seemed fair to provide a robot friend with the same experience.

I’m currently running NeuralTalk2, a truly wonderful piece of open source image captioning software built on convolutional and recurrent neural networks. The software requires a GPU to train models, so I’m running it on an Amazon Web Services GPU instance. At ~50 cents per hour, it’s a lot more expensive than Netflix.

Andrej Karpathy wrote NeuralTalk2 in Torch, which is based on Lua, and it requires a lot of dependencies. However, it was much easier to set up than the Deep Dream code I experimented with over the summer.

The training process has involved a lot of trial and error: learning sometimes simply stalls, and the machine often wants to issue the same caption for every image.

Rather than training the machine on a standard image-caption dataset, I trained it on dialogue from subtitles paired with frames extracted at 10-second intervals from every episode of The X-Files. This is just an experiment, and I’m not expecting stellar results.
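
For the curious, one way to prepare that kind of training set is to pull a frame every 10 seconds with ffmpeg and pair each frame with the nearest line of subtitle dialogue. A rough sketch follows (file names, paths, and frame counts are hypothetical, and this is not necessarily the exact pipeline I used):

import re
import subprocess

# Hypothetical file names; the real pipeline looped over every episode
VIDEO = 'xfiles_s01e01.mkv'
SUBS = 'xfiles_s01e01.srt'

# Extract one frame every 10 seconds with ffmpeg
subprocess.call([
    'ffmpeg', '-i', VIDEO,
    '-vf', 'fps=1/10',  # one frame per 10 seconds
    'frames/s01e01_%05d.jpg'
])

def parse_srt(path):
    """Return (start_seconds, dialogue) tuples from an .srt subtitle file."""
    entries = []
    with open(path) as f:
        blocks = f.read().strip().split('\n\n')
    for block in blocks:
        lines = block.split('\n')
        if len(lines) < 3:
            continue
        m = re.match(r'(\d+):(\d+):(\d+)', lines[1])  # start of the timing line
        if not m:
            continue
        h, mnt, s = map(int, m.groups())
        entries.append((h * 3600 + mnt * 60 + s, ' '.join(lines[2:])))
    return entries

subs = parse_srt(SUBS)

# Pair each extracted frame with the nearest line of dialogue
# (frame n is sampled at roughly (n - 1) * 10 seconds)
captions = {}
for n in range(1, 300):  # hypothetical frame count
    t = (n - 1) * 10
    nearest = min(subs, key=lambda entry: abs(entry[0] - t))
    captions['s01e01_%05d.jpg' % n] = nearest[1]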

That said, the robot is already spitting out some pretty weird and genuinely creepy lines. I can’t wait until I have a version that’s trained well enough to feed in new images and get varied results.

[Screenshots: sample output captions from the partially trained model]

Sound Camera, Part III http://www.thehypertext.com/2015/10/21/sound-camera-part-iii/ Wed, 21 Oct 2015 22:10:27 +0000

I completed the physical prototype of the sound camera inside the enclosure I specified in my prior post, the Kodak Brownie Model 2.


[Photo: the completed sound camera in the Kodak Brownie enclosure]

I started by adding a shutter button to the top of the enclosure. I used a Cherry MX Blue mechanical keyboard switch that I had left over from a project last year.

[Photo: shutter button mounted on top of the enclosure]

The battery and Raspberry Pi just barely fit into the enclosure:

[Photos: battery and Raspberry Pi inside the enclosure]

The Raspberry Pi camera module is wedged snugly beneath the camera’s front plate:

[Photo: Raspberry Pi camera module behind the front plate]

In addition to playing the song, I added some functionality that provides a bit of context to the user. Using the pico2wave text-to-speech utility, the camera speaks the tags aloud before playing the song. Additionally, using SoX, the camera plays an initialization tone, generated from the color histogram of the image, before reading the tags.

Here’s the code that’s currently running on the Raspberry Pi:

from __future__ import unicode_literals

import os
import json
import uuid
import time
from random import choice as rc
from random import sample as rs
import re
import subprocess

import RPi.GPIO as GPIO
import picamera
from clarifai.client import ClarifaiApi
import requests
from PIL import Image

import sys
import threading

import spotify

import genius_token

# SPOTIFY STUFF

# Assuming a spotify_appkey.key in the current dir
session = spotify.Session()

# Process events in the background
loop = spotify.EventLoop(session)
loop.start()

# Connect an audio sink
audio = spotify.AlsaSink(session)

# Events for coordination
logged_in = threading.Event()
logged_out = threading.Event()
end_of_track = threading.Event()

logged_out.set()


def on_connection_state_updated(session):
    if session.connection.state is spotify.ConnectionState.LOGGED_IN:
        logged_in.set()
        logged_out.clear()
    elif session.connection.state is spotify.ConnectionState.LOGGED_OUT:
        logged_in.clear()
        logged_out.set()


def on_end_of_track(self):
    end_of_track.set()

# Register event listeners
session.on(
    spotify.SessionEvent.CONNECTION_STATE_UPDATED, on_connection_state_updated)
session.on(spotify.SessionEvent.END_OF_TRACK, on_end_of_track)

# Assuming a previous login with remember_me=True and a proper logout
# session.relogin()
# session.login(genius_token.spotify_un, genius_token.spotify_pwd, remember_me=True)

# logged_in.wait()

# CAMERA STUFF

# Init Camera
camera = picamera.PiCamera()

# Init GPIO
GPIO.setmode(GPIO.BCM)

# Button Pin
GPIO.setup(18, GPIO.IN, pull_up_down=GPIO.PUD_UP)

IMGPATH = '/home/pi/soundcamera/img/'

clarifai_api = ClarifaiApi()

def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in xrange(0, len(l), n):
        yield l[i:i+n]

def take_photo():
    fn = str(int(time.time()))+'.jpg' # TODO: Change to timestamp hash
    fp = IMGPATH+fn
    camera.capture(fp)
    return fp

def get_tags(fp):
    fileObj = open(fp)
    result = clarifai_api.tag_images(fileObj)
    resultObj = result['results'][0]
    tags = resultObj['result']['tag']['classes']
    return tags

def genius_search(tags):
    access_token = genius_token.token
    payload = {
        'q': ' '.join(tags),
        'access_token': access_token
    }
    endpt = 'http://api.genius.com/search'
    response = requests.get(endpt, params=payload)
    results = response.json()
    hits = results['response']['hits']
    
    artists_titles = []
    
    for h in hits:
        hit_result = h['result']
        if hit_result['url'].endswith('lyrics'):
            artists_titles.append(
                (hit_result['primary_artist']['name'], hit_result['title'])
            )
    
    return artists_titles

def spotify_search(query):
    endpt = "https://api.spotify.com/v1/search"
    payload = {
        'q': query,
        'type': 'track'
    }
    response = requests.get(endpt, params=payload)
    result = response.json()
    result_zero = result['tracks']['items'][0]
    
    return result_zero['uri']

def main(fn):
    # Tag the image with Clarifai, search Genius for lyrics matching the tags
    # (three tags at a time), and return the first hit that also exists on
    # Spotify, along with a spoken byline.
    tags = get_tags(fn)
    for tag_chunk in chunks(tags,3):
        artists_titles = genius_search(tag_chunk)
        for artist, title in artists_titles:
            try:
                result_uri = spotify_search(artist+' '+title)
            except IndexError:
                pass
            else:
                print tag_chunk
                byline = "%s by %s" % (title, artist)
                print byline
                to_read = ', '.join(tag_chunk) + ". " + byline
                return to_read, result_uri

def play_uri(track_uri):
    # Play a track
    # audio = spotify.AlsaSink(session)
    session.login(genius_token.spotify_un, genius_token.spotify_pwd, remember_me=True)
    logged_in.wait()
    track = session.get_track(track_uri).load()
    session.player.load(track)
    session.player.play()


def stop_track():
    session.player.play(False)
    session.player.unload()
    session.logout()
    logged_out.wait()
    audio._close()

def talk(msg):
    proc = subprocess.Popen(
        ['bash', '/home/pi/soundcamera/play_text.sh', msg]
    )
    proc.communicate()

def play_tone(freqs):
    freq1, freq2 = freqs
    proc = subprocess.Popen(
        ['play', '-n', 'synth', '0.25', 'saw', "%i-%i" % (freq1, freq2)]
    )
    proc.communicate()

def histo_tone(fp):
    im = Image.open(fp)
    hist = im.histogram()
    vals = map(sum, chunks(hist, 64)) # list of 12 values
    print vals
    map(play_tone, chunks(vals,2))

if __name__ == "__main__":
    input_state = True
    new_state = True
    hold_counter = 0
    while 1:
        input_state = GPIO.input(18)
        if not (input_state and new_state):
            talk("capturing")

            # Hold for 15 seconds to turn off
            while not GPIO.input(18):
                time.sleep(0.1)
                hold_counter += 1
                if hold_counter > 150:
                    os.system('shutdown now -h')
                    sys.exit()

            # Reset hold counter
            hold_counter = 0

            # Else take photo
            try:
                img_fp = take_photo()
                msg, uri = main(img_fp)
                histo_tone(img_fp)
                talk(msg)
                play_uri(uri)
            except:
                print sys.exc_info()

            # Wait for playback to complete or Ctrl+C
            try:
                while not end_of_track.wait(0.1):
                    # If new photo, play new song
                    new_state = GPIO.input(18)
                    if not new_state:
                        stop_track()
                        # time.sleep(2)
                        break
            except KeyboardInterrupt:
                pass


Sound Camera, Part II http://www.thehypertext.com/2015/10/06/sound-camera-part-ii/ Tue, 06 Oct 2015 02:20:44 +0000

Using JavaScript and Python Flask, I created a functional software prototype of the Sound Camera: rossgoodwin.com/soundcamera

The front-end JavaScript code is available on GitHub. Here is the primary back-end Python code:

import os
import json
import uuid
from base64 import decodestring
import time
from random import choice as rc
from random import sample as rs
import re

import PIL
from PIL import Image
import requests
import exifread

from flask import Flask, request, abort, jsonify
from flask.ext.cors import CORS
from werkzeug import secure_filename

from clarifai.client import ClarifaiApi

app = Flask(__name__)
CORS(app)

app.config['UPLOAD_FOLDER'] = '/var/www/SoundCamera/SoundCamera/static/img'
IMGPATH = '/var/www/SoundCamera/SoundCamera/static/img/'

clarifai_api = ClarifaiApi()

@app.route("/")
def index():
    return "These aren't the droids you're looking for."

@app.route("/img", methods=["POST"])
def img():
	request.get_data()
	if request.method == "POST":
		f = request.files['file']
		if f:
			filename = secure_filename(f.filename)
			f.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
			new_filename = resize_image(filename)
			return jsonify(uri=main(new_filename))
		else:
			abort(501)

@app.route("/b64", methods=["POST"])
def base64():
	if request.method == "POST":
		fstring = request.form['base64str']
		filename = str(uuid.uuid4())+'.jpg'
		file_obj = open(IMGPATH+filename, 'w')
		file_obj.write(fstring.decode('base64'))
		file_obj.close()
		return jsonify(uri=main(filename))

@app.route("/url")
def url():
	img_url = request.args.get('url')
	response = requests.get(img_url, stream=True)
	orig_filename = img_url.split('/')[-1]
	if response.status_code == 200:
		with open(IMGPATH+orig_filename, 'wb') as f:
			for chunk in response.iter_content(1024):
				f.write(chunk)
		new_filename = resize_image(orig_filename)
		return jsonify(uri=main(new_filename))
	else:
		abort(500)


# def allowed_img_file(filename):
#     return '.' in filename and \
# 		filename.rsplit('.', 1)[1].lower() in set(['.jpg', '.jpeg', '.png'])

def resize_image(fn):
    longedge = 640
    orientDict = {
        1: (0, 1),
        2: (0, PIL.Image.FLIP_LEFT_RIGHT),
        3: (-180, 1),
        4: (0, PIL.Image.FLIP_TOP_BOTTOM),
        5: (-90, PIL.Image.FLIP_LEFT_RIGHT),
        6: (-90, 1),
        7: (90, PIL.Image.FLIP_LEFT_RIGHT),
        8: (90, 1)
    }

    imgOriList = []
    try:
        f = open(IMGPATH+fn, "rb")
        exifTags = exifread.process_file(f, details=False, stop_tag='Image Orientation')
        if 'Image Orientation' in exifTags:
            imgOriList.extend(exifTags['Image Orientation'].values)
    except:
        pass

    img = Image.open(IMGPATH+fn)
    w, h = img.size
    newName = str(uuid.uuid4())+'.jpeg'
    if w >= h:
        wpercent = (longedge/float(w))
        hsize = int((float(h)*float(wpercent)))
        img = img.resize((longedge,hsize), PIL.Image.ANTIALIAS)
    else:
        hpercent = (longedge/float(h))
        wsize = int((float(w)*float(hpercent)))
        img = img.resize((wsize,longedge), PIL.Image.ANTIALIAS)

    for val in imgOriList:
        if val in orientDict:
            deg, flip = orientDict[val]
            img = img.rotate(deg)
            if flip != 1:
                img = img.transpose(flip)

    img.save(IMGPATH+newName, format='JPEG')
    os.remove(IMGPATH+fn)
    
    return newName

def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in xrange(0, len(l), n):
        yield l[i:i+n]

def get_tags(fp):
    fileObj = open(fp)
    result = clarifai_api.tag_images(fileObj)
    resultObj = result['results'][0]
    tags = resultObj['result']['tag']['classes']
    return tags

def genius_search(tags):
    access_token = 'd2IuV9fGKzYEWVnzmLVtFnm-EYvBQKR8Uh3I1cfZOdr8j-BGVTPThDES532dym5a'
    payload = {
        'q': ' '.join(tags),
        'access_token': access_token
    }
    endpt = 'http://api.genius.com/search'
    response = requests.get(endpt, params=payload)
    results = response.json()
    hits = results['response']['hits']
    
    artists_titles = []
    
    for h in hits:
        hit_result = h['result']
        if hit_result['url'].endswith('lyrics'):
            artists_titles.append(
                (hit_result['primary_artist']['name'], hit_result['title'])
            )
    
    return artists_titles

def spotify_search(query):
    endpt = "https://api.spotify.com/v1/search"
    payload = {
        'q': query,
        'type': 'track'
    }
    response = requests.get(endpt, params=payload)
    result = response.json()
    result_zero = result['tracks']['items'][0]
    
    return result_zero['uri']

def main(fn):
    tags = get_tags(IMGPATH+fn)
    for tag_chunk in chunks(tags,3):
        artists_titles = genius_search(tag_chunk)
        for artist, title in artists_titles:
            try:
                result_uri = spotify_search(artist+' '+title)
            except IndexError:
                pass
            else:
                return result_uri


if __name__ == "__main__":
    app.run()


It uses the same algorithm discussed in my prior post. Now that I have had the opportunity to test it more, I am not quite satisfied with the results it is providing. First of all, they are not entirely deterministic (in some cases, you can upload the same photo twice and end up with two different songs). Moreover, the results from a human face — which I expect to be a common use case — are not very personal. For the next steps in this project, I plan to integrate additional data, including GPS, weather, time of day, and possibly even facial expressions, in order to improve the output.

The broken cameras I ordered from eBay have arrived, and I have been considering how to use them as cases for the new models. I also purchased a GPS module for my Raspberry Pi, so the next Sound Camera prototype, with new features integrated, will likely be a physical version. I’m planning to use this Kodak Brownie camera (c. 1916):

[Photo: Kodak Brownie camera, c. 1916]

Author Cameras http://www.thehypertext.com/2015/09/09/author-cameras/ Wed, 09 Sep 2015 19:58:10 +0000

For my primary project in Project Development Studio with Stefani Bardin, I am planning to make 3-5 more physical word cameras. These models will iterate on my prior physical word camera by printing relevant passages from specific authors, based on convolutional neural network analysis of captured images.

I have not yet chosen the authors I plan to embed in these cameras, or decided how I will present the extracted text. I also have tentative plans for a new iteration of the talking surveillance camera I developed last semester, but more on that in future posts.

This week, I spent some time on eBay finding a few broken medium- and large-format cameras to use as cases. Here’s what I bought (for $5 to $25 each):

[Photos: eBay listing images of the broken cameras]

I am currently waiting to receive them so that I can start planning the builds. Below is a list of the additional parts that will be required for each camera:

Raspberry Pi 2 ($40)
85.60mm x 56mm x 21mm (or roughly 3.37″ x 2.21″ x 0.83″)

Raspberry Pi Camera Board ($30)
25mm x 20mm x 9mm

Buck Converter ($10)
51mm x 26.3mm x 14mm (L x W x H)

7.4V Li-ion Battery Pack ($90)
22mm (0.9″) x 104mm (4.1″) x 107mm (4.2″)
OR two USB batteries ($40)

Thermal Printer ($25 from China or $50 from U.S.)
~4 1/8″ (105mm) x 2 1/4″ (58mm) for rectangular hole
~58mm deep

On/Off Switch ($1)
18.60mm x 12.40mm rectangular hole
13.9mm deep

LED Button ($5)
Shutter button, user will hold for 3 seconds to turn off Raspberry Pi
16mm round hole
~1.5″ deep

1/4-size permaproto board ($3)

1/4″ Acrylic ($12) or Broken Medium Format TLR ($30-69)

Jumper Wires ($2)

word.camera, Part II http://www.thehypertext.com/2015/05/08/word-camera-part-ii/ Fri, 08 May 2015 21:50:25 +0000

Click Here for Part I


[Photo: the physical word.camera]


For my final projects in Conversation and Computation with Lauren McCarthy and This Is The Remix with Roopa Vasudevan, I iterated on my word.camera project. I added a few new features to the web application, including a private API that I used to enable the creation of a physical version of word.camera inside a Mamiya C33 TLR.

The current version of the code remains open source and available on GitHub, and the project continues to receive positive mentions in the press.

On April 19, I announced two new features for word.camera via the TinyLetter email newsletter I advertised on the site.

Hello,

Thank you for subscribing to this newsletter, wherein I will provide occasional updates regarding my project, word.camera.

I wanted to let you know about two new features I added to the site in the past week:

word.camera/albums: You can now generate ebooks (DRM-free ePub format) from sets of lexographs.

word.camera/postcards: You can support word.camera by sending a lexograph as a postcard, anywhere in the world for $5. I am currently a graduate student, and proceeds will help cover the cost of maintaining this web application as a free, open source project.

Also:

word.camera/a/XwP59n1zR: A lexograph album containing some of the best results I’ve gotten so far with the camera on my phone.

1, 2, 3: A few random lexographs I did not make that were popular on social media.

Best,

Ross Goodwin
rossgoodwin.com
word.camera

Next, I set to work on the physical version. I decided to use a technique I developed on another project earlier in the semester to create word.camera epitaphs composed of highly relevant paragraphs from novels. To ensure fair use of copyrighted materials, I determined that all of this additional data would be processed locally on the physical camera.

I built the collection from a combination of classic novels and novels I personally enjoyed, including only paragraphs over 99 characters in length. In total, it contains 7,113,809 words from 48 books.
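
The corpus itself is just a JSON file (the lit.json loaded by the camera code below). A rough sketch of how such a file can be assembled (read each novel as plain text, keep paragraphs over 99 characters, and tag each one with AlchemyAPI concept extraction) might look like this; the books dictionary and file paths are hypothetical, and the AlchemyAPI call mirrors the one in the camera code:

import json
import requests
from alchemykey import apikey  # same key file the camera code imports

# Hypothetical mapping of (title, author) to plain text files
books = {('Moby-Dick', 'Herman Melville'): 'texts/moby_dick.txt'}

def concepts_for(text):
    # Same AlchemyAPI endpoint the camera uses at capture time
    endpt = "http://access.alchemyapi.com/calls/text/TextGetRankedConcepts"
    payload = {"apikey": apikey, "text": text, "outputMode": "json",
               "showSourceText": 0, "knowledgeGraph": 1, "maxRetrieve": 500}
    r = requests.post(endpt, data=payload,
                      headers={'content-type': 'application/x-www-form-urlencoded'})
    data = r.json()
    return {c['text']: float(c['relevance']) for c in data.get('concepts', [])}

lit = []
for (title, author), path in books.items():
    with open(path) as f:
        paragraphs = [p.strip() for p in f.read().split('\n\n')]
    for p in paragraphs:
        if len(p) > 99:  # keep only paragraphs over 99 characters
            lit.append({'title': title, 'author': author,
                        'text': p, 'concepts': concepts_for(p)})

with open('lit.json', 'w') as f:
    json.dump(lit, f)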

Below is an infographic showing all the books used in my corpus, and their relative included word counts (click on it for the full-size image).

[Infographic: the books in the corpus and their relative word counts]

To build the physical version of word.camera, I purchased the following materials:

  • Raspberry Pi 2 board
  • Raspberry Pi camera module
  • Two (2) 10,000 mAh batteries
  • Thermal receipt printer
  • 40 female-to-male jumper wires
  • Three (3) extra-small prototyping perf boards
  • LED button

After some tinkering, I was able to put together the arrangement pictured below, which could print raw word.camera output on the receipt printer.

[Photo: the working arrangement printing raw word.camera output]

I thought for a long time about the type of case I wanted to put the camera in. My original idea was a photobooth, but I felt that a portable camera—along the lines of Matt Richardson’s Descriptive Camera—might take better advantage of the Raspberry Pi’s small footprint.

Rather than fabricating my own case, I decided that an antique film camera might provide a familiar exterior to draw in people unfamiliar with the project. (And I was creating it for a remix-themed class, after all.) So I purchased a lot of three broken TLR film cameras on eBay; the Mamiya C33 was in the best condition of the three, so I gutted it. (N.B. I’m an antique camera enthusiast—I own a working version of the C33’s predecessor, the C2—and, despite its broken condition, cutting open the bellows of the C33 felt sacrilegious.)

I laser cut some clear acrylic I had left over from the traveler’s lamp project to fill the lens holes and mount the LED button on the back of the camera. Here are some photos of the finished product:

[Photos: the finished word.camera built into the Mamiya C33]

And here is the code that’s running on the Raspberry Pi (the crux of the matching algorithm is in the findIntersection function):

import uuid
import picamera
import RPi.GPIO as GPIO
import requests
from time import sleep
import os
import json
from Adafruit_Thermal import *
from alchemykey import apikey
import time

# SHUTTER COUNT / startNo GLOBAL
startNo = 0

# Init Printer
printer = Adafruit_Thermal("/dev/ttyAMA0", 19200, timeout=5)
printer.setSize('S')
printer.justify('L')
printer.setLineHeight(36)

# Init Camera
camera = picamera.PiCamera()

# Init GPIO
GPIO.setmode(GPIO.BCM)

# Working Dir
cwd = '/home/pi/tlr'

# Init Button Pin
GPIO.setup(21, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Init LED Pin
GPIO.setup(20, GPIO.OUT)

# Init Flash Pin
GPIO.setup(16, GPIO.OUT)

# LED and Flash Off
GPIO.output(20, False)
GPIO.output(16, False)

# Load lit list
lit = json.load( open(cwd+'/lit.json', 'r') )


def blink(n):
    for _ in range(n):
        GPIO.output(20, True)
        sleep(0.2)
        GPIO.output(20, False)
        sleep(0.2)

def takePhoto():
    fn = str(int(time.time()))+'.jpg' # TODO: Change to timestamp hash
    fp = cwd+'/img/'+fn
    GPIO.output(16, True)
    camera.capture(fp)
    GPIO.output(16, False)
    return fp

def getText(imgPath):
    endPt = 'https://word.camera/img'
    payload = {'Script': 'Yes'}
    files = {'file': open(imgPath, 'rb')}
    response = requests.post(endPt, data=payload, files=files)
    return response.text

def alchemy(text):
    endpt = "http://access.alchemyapi.com/calls/text/TextGetRankedConcepts"
    payload = {"apikey": apikey,
               "text": text,
               "outputMode": "json",
               "showSourceText": 0,
               "knowledgeGraph": 1,
               "maxRetrieve": 500}
    headers = {'content-type': 'application/x-www-form-urlencoded'}
    r = requests.post(endpt, data=payload, headers=headers)
    return r.json()

def findIntersection(testDict):
    returnText = ""
    returnTitle = ""
    returnAuthor = ""
    recordInter = set(testDict.keys())
    relRecord = 0.0
    for doc in lit:
        inter = set(doc['concepts'].keys()) & set(testDict.keys())
        if inter:
            relSum = sum([doc['concepts'][tag]+testDict[tag] for tag in inter])
            if relSum > relRecord: 
                relRecord = relSum
                recordInter = inter
                returnText = doc['text']
                returnTitle = doc['title']
                returnAuthor = doc['author']
    doc = {
        'text': returnText,
        'title': returnTitle,
        'author': returnAuthor,
        'inter': recordInter,
        'record': relRecord
    }
    return doc

def puncReplace(text):
    replaceDict = {
        '—': '---',
        '–': '--',
        '‘': "\'",
        '’': "\'",
        '“': '\"',
        '”': '\"',
        '´': "\'",
        'ë': 'e',
        'ñ': 'n'
    }

    for key in replaceDict:
        text = text.replace(key, replaceDict[key])

    return text


blink(5)
while 1:
    input_state = GPIO.input(21)
    if not input_state:
        GPIO.output(20, True)
        try:
            # Get Word.Camera Output
            print "GETTING TEXT FROM WORD.CAMERA..."
            wcText = getText(takePhoto())
            blink(3)
            GPIO.output(20, True)
            print "...GOT TEXT"

            # Print
            # print "PRINTING PRIMARY"
            # startNo += 1
            # printer.println("No. %i\n\n\n%s" % (startNo, wcText))

            # Get Alchemy Data
            print "GETTING ALCHEMY DATA..."
            data = alchemy(wcText)
            tagRelDict = {concept['text']:float(concept['relevance']) for concept in data['concepts']}
            blink(3)
            GPIO.output(20, True)
            print "...GOT DATA"

            # Make Match
            print "FINDING MATCH..."
            interDoc = findIntersection(tagRelDict)
            print interDoc
            interText = puncReplace(interDoc['text'].encode('ascii', 'xmlcharrefreplace'))
            interTitle = puncReplace(interDoc['title'].encode('ascii', 'xmlcharrefreplace'))
            interAuthor = puncReplace(interDoc['author'].encode('ascii', 'xmlcharrefreplace'))
            blink(3)
            GPIO.output(20, True)
            print "...FOUND"

            grafList = [p for p in wcText.split('\n') if p]

            # Choose primary paragraph
            primaryText = min(grafList, key=lambda x: x.count('#'))
            url = 'word.camera/i/' + grafList[-1].strip().replace('#', '')

            # Print
            print "PRINTING..."
            startNo += 1
            printStr = "No. %i\n\n\n%s\n\n%s\n\n\n\nEPITAPH\n\n%s\n\nFrom %s by %s" % (startNo, primaryText, url, interText, interTitle, interAuthor)
            printer.println(printStr)

        except:
            print "SOMETHING BROKE"
            blink(15)

        GPIO.output(20, False)

Thanks to a transistor pulsing circuit that keeps the printer’s battery awake, and some code that automatically tethers the Raspberry Pi to my iPhone, the Fiction Camera is fully portable. I’ve been walking around Brooklyn and Manhattan over the past week making lexographs—the device is definitely a conversation starter. As a street photographer, I’ve noticed that people seem to be more comfortable having their photograph taken with it than with a standard camera, possibly because the visual image (and whether they look alright in it) is far less important.

As a result of these wanderings, I’ve accrued quite a large number of lexograph receipts. Earlier iterations of the receipt design contained longer versions of the word.camera output. Eventually, I settled on a version that contains a number (indicating how many lexographs have been taken since the device was last turned on), one paragraph of word.camera output, a URL to the word.camera page containing the photo + complete output, and a single high-relevance paragraph from a novel.

[Scans: a selection of lexograph receipts]

I also demonstrated the camera at ConvoHack, our final presentation event for Conversation and Computation, which took place at Babycastles gallery, and passed out over 50 lexograph receipts that evening alone.

[Photos: demonstrating the camera at ConvoHack]

Photographs by Karam Byun

Often, when photographing a person, the camera will output a passage from a novel featuring a character description that subjects seem to relate to. Many people have told me the results have qualities that remind them of horoscopes.

Dr. Gonzo http://www.thehypertext.com/2015/02/19/dr-gonzo/ Thu, 19 Feb 2015 02:57:21 +0000

For my first project in Conversation and Computation with Lauren McCarthy, I created a therapist bot with the voice of Hunter S. Thompson. The bot currently runs in the terminal, but I am working on a web version. All my code is on GitHub.

[Screenshot: Dr. Gonzo running in the terminal]

To make Dr. Gonzo, I used AlchemyAPI concept extraction to tag each paragraph of a large corpus of Hunter S. Thompson’s writing. I fed the tagged corpus into a MongoDB database, which I query with PyMongo. I used Pattern and NLTK to parse and categorize user input and match it to documents in the database. Database entries are appended with text generated from a template engine; the template engine also handles the first several user requests in every session.
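
Conceptually, the retrieval step is a concept-overlap search. A minimal sketch of that step, with hypothetical database and field names (the real version also runs user input through Pattern and NLTK before matching, and layers template text on top), might look like this:

from pymongo import MongoClient

# Hypothetical database/collection names; documents look like
# {'text': '...', 'concepts': {'fear': 0.92, 'politics': 0.71, ...}}
paragraphs = MongoClient().gonzo.paragraphs

def best_paragraph(input_concepts):
    """Return the Thompson paragraph whose stored concept tags best overlap the input's.

    input_concepts is a {concept: relevance} dict for the user's message,
    e.g. the output of an AlchemyAPI concept-extraction call.
    """
    best_doc, best_score = None, 0.0
    for doc in paragraphs.find():
        shared = set(doc['concepts']) & set(input_concepts)
        # Score overlap by summed relevance on both sides
        score = sum(doc['concepts'][c] + input_concepts[c] for c in shared)
        if score > best_score:
            best_doc, best_score = doc, score
    return best_doc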

Here are a few more screenshots of the doctor in action:

[Screenshots: the doctor in action]


Fiction Generator, Part IV http://www.thehypertext.com/2014/12/21/fiction-generator-part-iv/ Sun, 21 Dec 2014 03:04:53 +0000

Prior Installments:
Part I
Part II
Part III

For my final project in Comm Lab: Networked Media with Daniel Shiffman, I put the Fiction Generator online at fictiongenerator.com. VICE/Motherboard ran an article about my website, and I exhibited the project at the ITP Winter Show.

After reading William S. Burroughs’ essay on the cut-up technique, I decided to implement an algorithmic version of it in the generator. I also refactored my existing code and added a load screen, with this animation:

[Animation: a robot holding a book]
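
The cut-up itself is easy to express algorithmically: slice the source text into short fragments, then reassemble the fragments in random order. A minimal sketch (not necessarily the exact routine in the generator):

import random

def cut_up(text, min_len=3, max_len=8):
    """Cut text into random-length word fragments and reassemble them in random order."""
    words = text.split()
    fragments = []
    i = 0
    while i < len(words):
        n = random.randint(min_len, max_len)
        fragments.append(' '.join(words[i:i+n]))
        i += n
    random.shuffle(fragments)
    return ' '.join(fragments)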

I am running a Linux/Apache/Flask stack at the moment. Here’s a screen shot of the website in its current state:

[Screenshot: fictiongenerator.com]
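
For reference, serving a Flask app behind Apache generally just takes mod_wsgi and a small entry-point file along these lines (a generic sketch with hypothetical paths, not my exact configuration):

# /var/www/ficgen/ficgen.wsgi -- hypothetical path; Apache points here
# via a WSGIScriptAlias directive in the site config
import sys
sys.path.insert(0, '/var/www/ficgen')

# mod_wsgi looks for a callable named "application"
from app import app as application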

ITP Code Poetry Slam http://www.thehypertext.com/2014/12/09/itp-code-poetry-slam-2014/ Tue, 09 Dec 2014 08:45:53 +0000

Who Is Code Shakespeare?

[Image: Code Shakespeare graphic]

Several months ago, I asked the question above. On November 14, 2014, I believe the first ITP Code Poetry Slam may have brought us closer to an answer.

Amid the bustle of final projects over the past month, I haven’t had a chance to post anything about the code poetry slam I organized in November. Needless to say, the event was an enormous success, thanks mostly to the incredible judges and presenters. I hope to organize another one in 2015.

The judges brought a wealth of experience from a variety of different fields, which provided for some extraordinary discussion. They were:

This was the schedule for the slam, as written by me on the whiteboard wall of room 50 at ITP:

[Photo: the whiteboard schedule]

Rather than providing a blow-by-blow account of proceedings, I’ll direct you to Hannes Bajohr, who did just that.

The entries truly speak for themselves. Those who presented (in order of presentation) were:

 


Participants and attendees: Please let me know if any of the names or links above need to be changed. Also, if your name is not linked, and you’d like me to link it to something, let me know!

If you missed the ITP Code Poetry Slam, you can attend or submit your work for the Stanford Code Poetry Slam in January.

Fiction Generator, Part II http://www.thehypertext.com/2014/11/20/fiction-generator-part-ii/ Thu, 20 Nov 2014 03:39:13 +0000

For background, see my previous post on this project.

After scraping about 5000 articles from tvtropes.org to retrieve descriptions for characters and settings, Sam Lavigne suggested I scrape erowid.org to dig up some exposition material. I proceeded to scrape 18,324 drug trip reports from the site, and integrated that material into the generator.
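
Both scrapers follow the same basic shape: fetch a page, keep its paragraph text, move on. A generic sketch is below (URLs, selectors, and pacing are hypothetical; the real scrapers were tailored to each site's markup):

import time
import requests
from bs4 import BeautifulSoup

def scrape_paragraphs(urls, delay=1.0):
    """Fetch each URL and return its paragraph text, pausing politely between requests."""
    results = {}
    for url in urls:
        response = requests.get(url)
        if response.status_code != 200:
            continue
        soup = BeautifulSoup(response.text, 'html.parser')
        # Keep the visible paragraph text; real selectors depend on each site's markup
        results[url] = [p.get_text().strip() for p in soup.find_all('p')]
        time.sleep(delay)
    return results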

While this project remains unfinished—I’m considering adding material from many other websites, which is why I’m calling it a “collective consciousness fiction generator”—it is now generating full-length “novels” (300+ pages at 8.5×11, 12pt font). I invited my fellow ITP students to have themselves inserted into novels, and they responded with over 50 suggestions for novel titles. The generated PDFs are available for viewing/download on Google Drive.

I decided to create covers for 3 of my favorite novels the software has generated. Click on the covers below to see those PDFs:

[Covers: Infinite Splendour, Parallel Synchronized Randomness, and Tricks of the Trade]

Here is the current state of the code that’s generating these novels:

import random

# N.B. outputFileName and char_match() are defined earlier in the script (not shown here)

latex_special_char_1 = ['&', '%', '$', '#', '_', '{', '}']
latex_special_char_2 = ['~', '^', '\\']

outputFile = open("output/"+outputFileName+".tex", 'w')

openingTexLines = ["\\documentclass[12pt]{book}",
				   "\\usepackage{ucs}",
				   "\\usepackage[utf8x]{inputenc}",
				   "\\usepackage{hyperref}",
				   "\\title{"+outputFileName+"}",
				   "\\author{collective consciousness fiction generator\\\\http://rossgoodwin.com/ficgen}",
				   "\\date{\\today}",
				   "\\begin{document}",
				   "\\maketitle"]

closingTexLine = "\\end{document}"

for line in openingTexLines:
	outputFile.write(line+"\n\r")
outputFile.write("\n\r\n\r")

intros = char_match()

for x, y in intros.iteritems():

	outputFile.write("\\chapter{"+x+"}\n\r")

	chapter_type = random.randint(0, 4)
	bonus_drug_trip = random.randint(0, 1)
	trip_count = random.randint(1,4)


	# BLOCK ONE

	if chapter_type in [0, 3]:

		for char in y[0]:
			if char == "`":
				outputFile.seek(-1, 1)
			elif char in latex_special_char_1:
				outputFile.write("\\"+char)
			elif char in latex_special_char_2:
				if char == '~':
					outputFile.write("")
				elif char == '^':
					outputFile.write("")
				elif char == '\\':
					outputFile.write("-")
				else:
					pass
			else:
				outputFile.write(char)

	elif chapter_type in [1, 4]:

		for char in y[2]:
			if char == "`":
				outputFile.seek(-1, 1)
			elif char in latex_special_char_1:
				outputFile.write("\\"+char)
			elif char in latex_special_char_2:
				if char == '~':
					outputFile.write("")
				elif char == '^':
					outputFile.write("")
				elif char == '\\':
					outputFile.write("-")
				else:
					pass
			else:
				outputFile.write(char)

	elif chapter_type == 2:

		for char in y[1][0]:
			if char == "`":
				outputFile.seek(-1, 1)
			else:
				outputFile.write(char)

	outputFile.write("\n\r\n\r\n\r")

	
	# BLOCK TWO

	if chapter_type == 0:

		for char in y[2]:
			if char == "`":
				outputFile.seek(-1, 1)
			elif char in latex_special_char_1:
				outputFile.write("\\"+char)
			elif char in latex_special_char_2:
				if char == '~':
					outputFile.write("")
				elif char == '^':
					outputFile.write("")
				elif char == '\\':
					outputFile.write("-")
				else:
					pass
			else:
				outputFile.write(char)

	elif chapter_type == 1:

		for char in y[0]:
			if char == "`":
				outputFile.seek(-1, 1)
			elif char in latex_special_char_1:
				outputFile.write("\\"+char)
			elif char in latex_special_char_2:
				if char == '~':
					outputFile.write("")
				elif char == '^':
					outputFile.write("")
				elif char == '\\':
					outputFile.write("-")
				else:
					pass
			else:
				outputFile.write(char)

	elif chapter_type in [3, 4]:

		for char in y[1][0]:
			if char == "`":
				outputFile.seek(-1, 1)
			else:
				outputFile.write(char)

	elif chapter_type == 2 and bonus_drug_trip:

		for tripIndex in range(trip_count):

			for char in y[1][tripIndex+1]:
				if char == "`":
					outputFile.seek(-1, 1)
				else:
					outputFile.write(char)

	else:
		pass

	outputFile.write("\n\r\n\r\n\r")


	# BLOCK THREE

	if chapter_type in [0, 1, 3, 4] and bonus_drug_trip:

		for tripIndex in range(trip_count):

			for char in y[1][tripIndex+1]:
				if char == "`":
					outputFile.seek(-1, 1)
				else:
					outputFile.write(char)

		outputFile.write("\n\r\n\r\n\r")

	else:
		pass


outputFile.write("\n\r\n\r")
outputFile.write(closingTexLine)


outputFile.close()


print '\"output/'+outputFileName+'.tex\"'


UPDATE: Part III

The Mechanical Turk’s Ghost, Part III http://www.thehypertext.com/2014/10/19/the-mechanical-turks-ghost-part-iii/ Sun, 19 Oct 2014 22:01:50 +0000

We have begun work on our midterm assignments for Automata, and we were asked to present our concepts for this week's class. I have decided to pursue my chess idea, the Mechanical Turk's Ghost, and will discuss its implementation in this post.

CONCEPT

My midterm project will be a chess set that generates music and ejects pieces from the board based on Stockfish chess engine analytics. My eventual plan is to implement a physical (hardware) version of the chess set, using magnets in the pieces, Hall Effect sensors in the board, and solenoids beneath the board. However, I may rely on a software version (a chess GUI rather than a physical board) as my initial prototype. Such a version would still be connected to a physical board with solenoids beneath it to demonstrate that aspect of the project.

COMPOSITION

The chess board will be connected to the Stockfish chess engine — the world’s most powerful chess engine, which also happens to be open source. The engine will provide real-time analytics for games-in-progress, providing a score (above 0 if white is winning, below 0 if black is winning), along with the “best move” from any given board position. Mapping these variables to music will provide auditory feedback for players, turning an otherwise normal game of chess into “advanced chess” (chess where both players have access to engine analytics), but without the traditional chess engine interface. The solenoids beneath the board will provide an element of surprise and a unique way to signal that the game has ended, due to one player coming within range of a checkmate.
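
As a sketch of the engine side, both values can be pulled from Stockfish over UCI. The snippet below uses the python-chess library (one possible way to talk to the engine, not necessarily the final implementation) to get the score and best move for the current position:

import chess
import chess.engine

# Talk to Stockfish over UCI
engine = chess.engine.SimpleEngine.popen_uci("stockfish")
board = chess.Board()  # in the real version, reconstructed from the sensed board state

# Brief analysis of the current position
info = engine.analyse(board, chess.engine.Limit(time=0.5))

# Positive means white is better, negative means black is better
score = info["score"].white().score(mate_score=10000)
best_move = info["pv"][0]
# score and best_move are the two values to map to sound (and to the LEDs)

engine.quit()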

CONTEXT

Creating an auditory interface for the game of chess could have interesting consequences, both for chess itself and for the possibility of applying such an interface to other games. I am not sure how auditory feedback will affect the game, but I hope it will make players more acutely aware of their relative strategic positions at all times. Ideally, it would provide an avenue for improvement by helping people think more like computer chess engines.

BILL OF MATERIALS

Chess board & housings for Hall Effect sensors
64 Hall Effect sensors
32 (or more) magnets
4 solenoids
1 Arduino Mega
1 Raspberry Pi
16 multiplexor ICs
64 LEDs (if “best move” feature implemented)

TECHNICAL DRAWINGS & IMAGES

[Initial drawing: board design with conductive pads instead of Hall effect sensors]

[Rendering: Hall effect sensor enclosure, for the laser cutter]

[Photo: Hall effect sensor enclosure prototype]

[Screenshot: Che55, the chess GUI (software version)]

SIGNAL CHAIN

Magnets >> Hall Effect Sensors >> Multiplexors >> Arduino >> Raspberry Pi (>> Music) >> Arduino >> Multiplexors >> Solenoids/LEDs
