code poetry – THE HYPERTEXT
http://www.thehypertext.com

Netflix for Robots
http://www.thehypertext.com/2015/12/10/netflix-for-robots/
Thu, 10 Dec 2015

For my final project in Learning Machines, I forced a deep learning machine to watch every episode of The X-Files.

Watching every episode of The X-Files in high school, on Netflix DVDs that came in the mail (remember those?), seemed like the thing to do. It was a great show, with 9 seasons of 20+ episodes apiece. So it only seemed fair to provide a robot friend with the same experience.

I’m currently running NeuralTalk2, a truly wonderful piece of open source image captioning code built on convolutional and recurrent neural networks. The software requires a GPU to train models, so I’m running it on an Amazon Web Services GPU server instance. At ~50 cents per hour, it’s a lot more expensive than Netflix.

Andrej Karpathy wrote NeuralTalk2 in Torch, which is based on Lua, and it requires a lot of dependencies. However, it was a lot easier to set up than the Deep Dream code I experimented with over the summer.

The training process has involved a lot of trial and error. The learning process seems to just halt sometimes, and the machine often wants to issue the same caption for every image.

Rather than training the machine with a standard image caption dataset, I trained it with dialogue from subtitles and matching frames extracted at 10-second intervals from every episode of The X-Files. This is just an experiment, and I’m not expecting stellar results.
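Here’s a minimal sketch of that frame-extraction step using ffmpeg’s fps filter (directory names and file extensions are illustrative):

import os
import subprocess

# Grab one frame every 10 seconds from each episode with ffmpeg;
# 'fps=1/10' means one output frame per ten seconds of video, so
# frame N lines up with the subtitle dialogue around t = 10*N seconds.
# (Assumes an episodes/ directory of .mkv files and an empty frames/ directory.)
for fn in os.listdir('episodes'):
    if fn.endswith('.mkv'):
        name = os.path.splitext(fn)[0]
        subprocess.call(['ffmpeg', '-i', os.path.join('episodes', fn),
                         '-vf', 'fps=1/10',
                         os.path.join('frames', name + '_%05d.jpg')])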

That said, the robot is already spitting out some pretty weird and genuinely creepy lines. I can’t wait until I have a version that’s trained well enough to feed in new images and get varied results.


]]>
word.camera exhibition
http://www.thehypertext.com/2015/11/24/word-camera-exhibition/
Tue, 24 Nov 2015


This week, I’ve been exhibiting my ongoing project, word.camera, at IDFA DocLab in Amsterdam. My installation consists of four cameras:

  1. The original word.camera physical prototype
  2. The sound camera physical prototype
  3. A new word.camera model that uses a context-free grammar to generate poems based on the images it captures (see the sketch below)
  4. A talking, pan-tilt-zoom surveillance camera that looks for faces in the hallway and then describes them aloud. (See also: this Motherboard video)
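To give a sense of how the context-free grammar camera works: image tags seed the terminals of a small grammar, which is expanded recursively into lines of verse. Here’s a minimal sketch of the idea (the grammar and tags below are invented for illustration; the camera’s actual grammar is more elaborate):

import random

# Toy context-free grammar: nonterminals expand until only words remain,
# and the special '<TAG>' terminal is filled with a tag from the image.
grammar = {
    'LINE': [['the', '<TAG>', 'VERB', 'ADV'],
             ['a', 'ADJ', '<TAG>', 'VERB']],
    'VERB': [['dissolves'], ['waits'], ['burns']],
    'ADV': [['slowly'], ['at dusk'], ['again']],
    'ADJ': [['pale'], ['electric'], ['forgotten']],
}

def expand(symbol, tags):
    if symbol == '<TAG>':
        return random.choice(tags)
    if symbol not in grammar:
        return symbol  # plain word: nothing left to expand
    production = random.choice(grammar[symbol])
    return ' '.join(expand(s, tags) for s in production)

image_tags = ['streetlight', 'rain', 'stranger']  # e.g. from the image tagger
for _ in range(3):
    print expand('LINE', image_tags)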


During the exhibition, I was also invited to deliver two lectures. Here are my slides from the first lecture:

And here’s a video of the second one:


Visitors are able to reserve the portable cameras for half-hour blocks by leaving their ID at the volunteer kiosk. I have really enjoyed watching people borrow and use my cameras.


word.camera, Part II
http://www.thehypertext.com/2015/05/08/word-camera-part-ii/
Fri, 08 May 2015

Click Here for Part I




For my final projects in Conversation and Computation with Lauren McCarthy and This Is The Remix with Roopa Vasudevan, I iterated on my word.camera project. I added a few new features to the web application, including a private API that I used to enable the creation of a physical version of word.camera inside a Mamiya C33 TLR.

The current version of the code remains open source and available on GitHub, and the project continues to receive positive mentions in the press.

On April 19, I announced two new features for word.camera via the TinyLetter email newsletter I advertised on the site.

Hello,

Thank you for subscribing to this newsletter, wherein I will provide occasional updates regarding my project, word.camera.

I wanted to let you know about two new features I added to the site in the past week:

word.camera/albums You can now generate ebooks (DRM-free ePub format) from sets of lexographs.

word.camera/postcards You can support word.camera by sending a lexograph as a postcard, anywhere in the world for $5. I am currently a graduate student, and proceeds will help cover the cost of maintaining this web application as a free, open source project.

Also:

word.camera/a/XwP59n1zR A lexograph album containing some of the best results I’ve gotten so far with the camera on my phone.

1, 2, 3 A few random lexographs I did not make that were popular on social media.

Best,

Ross Goodwin
rossgoodwin.com
word.camera

Next, I set to work on the physical version. I decided to use a technique I developed on another project earlier in the semester to create word.camera epitaphs composed of highly relevant paragraphs from novels. To ensure fair use of copyrighted materials, I determined that all of this additional data would be processed locally on the physical camera.

I developed a collection of data from a combination of novels that are considered classics and those I personally enjoyed, and I included only paragraphs over 99 characters in length. In total, the collection contains 7,113,809 words from 48 books.
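For a rough picture of how such an index can be assembled: each qualifying paragraph gets paired with its title, author, and a dictionary of concept tags with relevance scores, which is the structure the matching code further down expects to find in lit.json. A minimal sketch, assuming one plain-text file per book and the same AlchemyAPI concept call used in the camera code below (the book list shown is illustrative):

import json

# Illustrative book list; the real corpus has 48 entries
books = [('Moby-Dick', 'Herman Melville', 'books/moby_dick.txt')]

lit = []
for title, author, path in books:
    text = open(path).read()
    for graf in text.split('\n\n'):
        if len(graf) > 99:  # only paragraphs over 99 characters
            data = alchemy(graf)  # alchemy() as defined in the camera code below
            concepts = {c['text']: float(c['relevance'])
                        for c in data['concepts']}
            lit.append({'title': title, 'author': author,
                        'text': graf, 'concepts': concepts})

json.dump(lit, open('lit.json', 'w'))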

Below is an infographic showing all the books in my corpus and their relative included word counts.

To build the physical version of word.camera, I purchased the following materials:

  • Raspberry Pi 2 board
  • Raspberry Pi camera module
  • Two (2) 10,000 mAh batteries
  • Thermal receipt printer
  • 40 female-to-male jumper wires
  • Three (3) extra-small prototyping perf boards
  • LED button

After some tinkering, I was able to put together the arrangement pictured below, which could print raw word.camera output on the receipt printer.


I thought for a long time about the type of case I wanted to put the camera in. My original idea was a photobooth, but I felt that a portable camera—along the lines of Matt Richardson’s Descriptive Camera—might take better advantage of the Raspberry Pi’s small footprint.

Rather than fabricating my own case, I decided an antique film camera might provide a familiar exterior to draw in people unfamiliar with the project. (And I was creating it for a remix-themed class, after all.) So I purchased a lot of three broken TLR film cameras on eBay; the Mamiya C33 was in the best condition of the three, so I gutted it. (N.B. I’m an antique camera enthusiast—I own a working version of the C33’s predecessor, the C2—and, despite its broken condition, cutting open the bellows of the C33 felt sacrilegious.)

I laser cut some clear acrylic I had left over from the traveler’s lamp project to fill the lens holes and mount the LED button on the back of the camera. Here are some photos of the finished product:


And here is the code that’s running on the Raspberry Pi (the crux of the matching algorithm is in the findIntersection function):

import uuid
import picamera
import RPi.GPIO as GPIO
import requests
from time import sleep
import os
import json
from Adafruit_Thermal import *
from alchemykey import apikey
import time

# SHUTTER COUNT / startNo GLOBAL
startNo = 0

# Init Printer
printer = Adafruit_Thermal("/dev/ttyAMA0", 19200, timeout=5)
printer.setSize('S')
printer.justify('L')
printer.setLineHeight(36)

# Init Camera
camera = picamera.PiCamera()

# Init GPIO
GPIO.setmode(GPIO.BCM)

# Working Dir
cwd = '/home/pi/tlr'

# Init Button Pin
GPIO.setup(21, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Init LED Pin
GPIO.setup(20, GPIO.OUT)

# Init Flash Pin
GPIO.setup(16, GPIO.OUT)

# LED and Flash Off
GPIO.output(20, False)
GPIO.output(16, False)

# Load lit list
lit = json.load( open(cwd+'/lit.json', 'r') )


def blink(n):
    for _ in range(n):
        GPIO.output(20, True)
        sleep(0.2)
        GPIO.output(20, False)
        sleep(0.2)

def takePhoto():
    fn = str(int(time.time()))+'.jpg' # TODO: Change to timestamp hash
    fp = cwd+'/img/'+fn
    GPIO.output(16, True)
    camera.capture(fp)
    GPIO.output(16, False)
    return fp

def getText(imgPath):
    endPt = 'https://word.camera/img'
    payload = {'Script': 'Yes'}
    files = {'file': open(imgPath, 'rb')}
    response = requests.post(endPt, data=payload, files=files)
    return response.text

def alchemy(text):
    endpt = "http://access.alchemyapi.com/calls/text/TextGetRankedConcepts"
    payload = {"apikey": apikey,
               "text": text,
               "outputMode": "json",
               "showSourceText": 0,
               "knowledgeGraph": 1,
               "maxRetrieve": 500}
    headers = {'content-type': 'application/x-www-form-urlencoded'}
    r = requests.post(endpt, data=payload, headers=headers)
    return r.json()

def findIntersection(testDict):
    returnText = ""
    returnTitle = ""
    returnAuthor = ""
    recordInter = set(testDict.keys())
    relRecord = 0.0
    for doc in lit:
        inter = set(doc['concepts'].keys()) & set(testDict.keys())
        if inter:
            relSum = sum([doc['concepts'][tag]+testDict[tag] for tag in inter])
            if relSum > relRecord: 
                relRecord = relSum
                recordInter = inter
                returnText = doc['text']
                returnTitle = doc['title']
                returnAuthor = doc['author']
    doc = {
        'text': returnText,
        'title': returnTitle,
        'author': returnAuthor,
        'inter': recordInter,
        'record': relRecord
    }
    return doc

def puncReplace(text):
    replaceDict = {
        '—': '---',
        '–': '--',
        '‘': "\'",
        '’': "\'",
        '“': '\"',
        '”': '\"',
        '´': "\'",
        'ë': 'e',
        'ñ': 'n'
    }

    for key in replaceDict:
        text = text.replace(key, replaceDict[key])

    return text


blink(5)
while 1:
    input_state = GPIO.input(21)
    if not input_state:
        GPIO.output(20, True)
        try:
            # Get Word.Camera Output
            print "GETTING TEXT FROM WORD.CAMERA..."
            wcText = getText(takePhoto())
            blink(3)
            GPIO.output(20, True)
            print "...GOT TEXT"

            # Print
            # print "PRINTING PRIMARY"
            # startNo += 1
            # printer.println("No. %i\n\n\n%s" % (startNo, wcText))

            # Get Alchemy Data
            print "GETTING ALCHEMY DATA..."
            data = alchemy(wcText)
            tagRelDict = {concept['text']:float(concept['relevance']) for concept in data['concepts']}
            blink(3)
            GPIO.output(20, True)
            print "...GOT DATA"

            # Make Match
            print "FINDING MATCH..."
            interDoc = findIntersection(tagRelDict)
            print interDoc
            interText = puncReplace(interDoc['text'].encode('ascii', 'xmlcharrefreplace'))
            interTitle = puncReplace(interDoc['title'].encode('ascii', 'xmlcharrefreplace'))
            interAuthor = puncReplace(interDoc['author'].encode('ascii', 'xmlcharrefreplace'))
            blink(3)
            GPIO.output(20, True)
            print "...FOUND"

            grafList = [p for p in wcText.split('\n') if p]

            # Choose primary paragraph
            primaryText = min(grafList, key=lambda x: x.count('#'))
            url = 'word.camera/i/' + grafList[-1].strip().replace('#', '')

            # Print
            print "PRINTING..."
            startNo += 1
            printStr = "No. %i\n\n\n%s\n\n%s\n\n\n\nEPITAPH\n\n%s\n\nFrom %s by %s" % (startNo, primaryText, url, interText, interTitle, interAuthor)
            printer.println(printStr)

        except:
            print "SOMETHING BROKE"
            blink(15)

        GPIO.output(20, False)

Thanks to a transistor pulsing circuit that keeps the printer’s battery awake, and some code that automatically tethers the Raspberry Pi to my iPhone, the Fiction Camera is fully portable. I’ve been walking around Brooklyn and Manhattan over the past week making lexographs—the device is definitely a conversation starter. As a street photographer, I’ve noticed that people seem to be more comfortable having their photograph taken with it than with a standard camera, possibly because the visual image (and whether they look alright in it) is far less important.

As a result of these wanderings, I’ve accrued quite a large number of lexograph receipts. Earlier iterations of the receipt design contained longer versions of the word.camera output. Eventually, I settled on a version that contains a number (indicating how many lexographs have been taken since the device was last turned on), one paragraph of word.camera output, a URL to the word.camera page containing the photo + complete output, and a single high-relevance paragraph from a novel.


I also demonstrated the camera at ConvoHack, our final presentation event for Conversation and Computation, which took place at Babycastles gallery, and passed out over 50 lexograph receipts that evening alone.


Photographs by Karam Byun

Often, when photographing a person, the camera will output a passage from a novel featuring a character description that subjects seem to relate to. Many people have told me the results have qualities that remind them of horoscopes.

word.camera
http://www.thehypertext.com/2015/04/11/word-camera/
Sat, 11 Apr 2015

lexograph /ˈleksəʊɡɹɑːf/ (n.)
A text document generated from digital image data


Last week, I launched a web application and a concept for photographic text generation that I have been working on for a few months. The idea came to me while working on another project, a computer generated screenplay, and I will discuss the connection in this post.

word.camera is responsive — it works on desktop, tablet, and mobile devices running recent versions of iOS or Android. The code behind it is open source and available on GitHub, because lexography is for everyone.


Users can share their lexographs using unique URLs. Of all the lexographs I’ve seen generated by users since the site launched (there are now almost 7,000), this one, shared on reddit’s /r/creativecoding, stuck with me the most: http://word.camera/i/7KZPPaqdP

I was surprised when the software noticed and commented on the singer in the painting behind me: http://word.camera/i/ypQvqJr6L

I was inspired to create this project while working on another one. This semester, I received a grant from the Future of Storytelling Initiative at NYU to produce a computer generated screenplay, and I had been thinking about how to generate text that’s more cohesive and realistically descriptive, meaning that it would transition between related topics in a logical fashion and describe a scene that could realistically exist (no “colorless green ideas sleeping furiously”), in order to make filming the screenplay possible. After playing with the Clarifai API, which uses convolutional neural networks to tag images, it occurred to me that including photographs in my input corpus, rather than relying on text alone, could provide those qualities. word.camera is my first attempt at producing that type of generative text.

At the moment, the results are not nearly as grammatical as I would like them to be, and I’m working on that. The algorithm extracts tags from images using Clarifai’s convolutional neural networks, then expands those tags into paragraphs using ConceptNet (a lexical relations database developed at MIT) and a flexible template system. The template system enables the code to build sentences that connect concepts together.
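As a rough illustration of the expansion step, here’s a minimal sketch that queries ConceptNet’s public REST API for terms related to an image tag and slots them into a single sentence template (the endpoint, relation, and template here are assumptions for illustration, not word.camera’s actual grammar or templates):

import requests

# Illustrative only: look up things ConceptNet relates to an image tag,
# then drop them into a fixed sentence template.
def related_concepts(tag, relation='AtLocation', limit=5):
    r = requests.get('http://api.conceptnet.io/query',
                     params={'start': '/c/en/' + tag,
                             'rel': '/r/' + relation,
                             'limit': limit})
    edges = r.json().get('edges', [])
    return [e['end']['label'] for e in edges]

tag = 'dog'
for place in related_concepts(tag):
    print 'The %s I saw made me think of %s.' % (tag, place)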

This project is about augmenting our creativity and presenting images in a different format, but it’s also about creative applications of artificial intelligence technology. When we imagine the type of artificial intelligence we’ll have in the future, based on what we’ve read in science fiction novels, we think of a robot that can describe and interact with its environment with natural language. I think that creating the type of AI we imagine in our wildest sci-fi fantasies is not only an engineering problem, but also a design problem that requires a creative approach.

I hope lexography eventually becomes accepted as a new form of photography. As a writer and a photographer, I love the idea that I could look at a scene and photograph it because it might generate an interesting poem or short story, rather than just an interesting image. And I’m not trying to suggest that word.camera is the final or the only possible implementation of that new art form. I made the code behind word.camera open source because I want others to help improve it and make their own versions — provided they also make their code available under the same terms, which is required under the GNU GPLv3 open source license I’m using. As the technology gets better, the results will get better, and lexography will make more sense to people as a worthy artistic pursuit.

I’m thrilled that the project has received worldwide attention from photography blogs and a few media outlets, and I hope users around the world continue enjoying word.camera as I keep working to improve it. Along with improving the language, I plan to expand the project by offering a mobile app and generated downloadable ebooks so that users can enjoy their lexographs offline.



Click Here for Part II

Fiction Generator, Part IV
http://www.thehypertext.com/2014/12/21/fiction-generator-part-iv/
Sun, 21 Dec 2014

Prior Installments:
Part I
Part II
Part III

For my final project in Comm Lab: Networked Media with Daniel Shiffman, I put the Fiction Generator online at fictiongenerator.com. VICE/Motherboard ran an article about my website, and I exhibited the project at the ITP Winter Show.


After reading William S. Burroughs’ essay about the cut-up technique, I decided to implement an algorithmic version of it in the generator. I also refactored my existing code and added a load screen, featuring an animation of a robot holding a book.

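The cut-up pass itself is simple to sketch: slice the text into word fragments of irregular length, shuffle them, and splice them back together. A minimal version of the idea (fragment lengths are arbitrary):

import random

def cut_up(text, min_len=3, max_len=7):
    # Burroughs-style cut-up: split the text into word fragments of
    # irregular length, shuffle them, and rejoin them.
    words = text.split()
    fragments = []
    i = 0
    while i < len(words):
        n = random.randint(min_len, max_len)
        fragments.append(' '.join(words[i:i+n]))
        i += n
    random.shuffle(fragments)
    return ' '.join(fragments)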

I am running a Linux/Apache/Flask stack at the moment. Here’s a screenshot of the website in its current state:


Fiction Generator, Part III
http://www.thehypertext.com/2014/12/09/fiction-generator-part-iii/
Tue, 09 Dec 2014

Prior Installments:
Part I
Part II

For my final project in Introduction to Computational Media with Daniel Shiffman, I presented my fiction generator (working title: “FicGen”). Since my previous post about this project, I have added a graphical user interface and significantly expanded/refactored my code, which I moved to a new repository on GitHub. I have also submitted this project as my entry in the ITP Winter Show. For my Networked Media final project, which is due Friday, I plan to put FicGen online.

Here is a screenshot of the GUI, which I implemented in Processing:


When I presented this project in our final ICM class on Tuesday, November 25, the only working elements in the GUI were the text fields and the big red button. Now, most of the buttons and sliders are functional as well. When the user pushes the red button, a Python script generates the novel and emails it to the user in PDF format.
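The emailing step is ordinary smtplib plumbing. Here’s a minimal sketch of sending a finished PDF as an attachment (the addresses, server, and credentials are placeholders):

import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.application import MIMEApplication
from email.mime.text import MIMEText

def email_novel(pdf_path, recipient):
    # Build a multipart message with the PDF attached
    msg = MIMEMultipart()
    msg['Subject'] = 'Your generated novel'
    msg['From'] = 'ficgen@example.com'  # placeholder sender
    msg['To'] = recipient
    msg.attach(MIMEText('Your novel is attached. Enjoy!'))
    with open(pdf_path, 'rb') as f:
        pdf = MIMEApplication(f.read(), _subtype='pdf')
    pdf.add_header('Content-Disposition', 'attachment',
                   filename=pdf_path.split('/')[-1])
    msg.attach(pdf)
    server = smtplib.SMTP('smtp.example.com', 587)  # placeholder server
    server.starttls()
    server.login('user', 'password')  # placeholder credentials
    server.sendmail(msg['From'], [recipient], msg.as_string())
    server.quit()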

After creating the GUI above, I expanded the material I am using to generate the novels by scraping content from two additional sources: over 2,000 sci-fi/horror stories from scp-wiki.net, and over 47,000 books from Project Gutenberg. I then significantly refactored my code to accommodate these additions. My new Python program, ficgen.py, is far more object-oriented and organized than my previous plotgen script, which had become somewhat of a mess by the time I presented my project in class two weeks ago.
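Each scrape was a straightforward Requests-and-BeautifulSoup job. Here’s a minimal sketch of fetching one SCP article (the URL pattern and the 'page-content' element are assumptions about the site’s markup, not the exact scraper):

import requests
from bs4 import BeautifulSoup

def scrape_scp(number):
    # Fetch one SCP article and return its main body text
    url = 'http://www.scp-wiki.net/scp-%03d' % number
    soup = BeautifulSoup(requests.get(url).text, 'html.parser')
    content = soup.find(id='page-content')  # assumed container element
    if content is None:
        return ''
    return '\n\n'.join(p.get_text() for p in content.find_all('p'))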

Here’s the current code:

import math
import argparse
import random
from random import choice as rc
from random import sample as rs
from random import randint as ri
import string
from zipfile import ZipFile

import nltk
import en

from g_paths import gPaths
from erowid_experience_paths import erowidExpPaths
from tropes_character import characterTropeFiles
from tropes_setting import settingTropeFiles
from scp_paths import scpPaths
from firstnames_f import fFirstNames
from firstnames_m import mFirstNames
from surnames import surnames


# TODO:
# [X] CLEAN UP TROPE FILE PATHS LIST
# [ ] Fix "I'm" and "I'll" problem
# [ ] Add Plot Points / Narrative Points / Phlebotinum
# [ ] subtrope / sub-trope
# [ ] add yelp reviews
# [ ] add livejournal
# [X] add SCP

# System Path

sysPath = "/Users/rg/Projects/plotgen/ficgen/"


# Argument Values

genre_list = ['literary', 'sci-fi', 'fantasy', 'history', 'romance', 'thriller', 
			  'mystery', 'crime', 'pulp', 'horror', 'beat', 'fan', 'western', 
			  'action', 'war', 'family', 'humor', 'sport', 'speculative']
conflict_list = ['nature', 'man', 'god', 'society', 'self', 'fate', 'tech', 'no god', 'reality', 'author']
narr_list = ['first', '1st', '1', 'third', '3rd', '3', 'alt', 'alternating', 'subjective', 
			 'objective', 'sub', 'obj', 'omniscient', 'omn', 'limited', 'lim']

parser = argparse.ArgumentParser(description='Story Parameters')
parser.add_argument('--charnames', nargs='*', help="Character Names")
parser.add_argument('--title', help="Story Title")
parser.add_argument('--length', help="Story Length (0-999)")
parser.add_argument('--charcount', help="Character Count (0-999)")
parser.add_argument('--genre', nargs='*', help="Genre", choices=genre_list)
parser.add_argument('--conflict', nargs='*', help="Conflict", choices=conflict_list)
parser.add_argument('--passion', help="Passion (0-999)")
parser.add_argument('--verbosity', help="Verbosity (0-999)")
parser.add_argument('--realism', help="Realism (0-999)")
parser.add_argument('--density', help="Density (0-999)")
parser.add_argument('--accessibility', help="Accessibility (0-999)")
parser.add_argument('--depravity', help="Depravity (0-999)")
parser.add_argument('--linearity', help="Linearity (0-999)")
parser.add_argument('--narrator', nargs='*', help="Narrative PoV", choices=narr_list)
args = parser.parse_args()


# ESTABLISH SYSTEM-WIDE COEFFICIENTS/CONSTANTS

# tsv = trope setting volume
TSV = (int(args.length)/2.0 + int(args.realism)/6.0 + int(args.passion)/3.0)/1000.0
if 'fan' in args.genre:
	TSV += 1.0
TSV = int(math.ceil(2.0*TSV))

# cc = actual number of extra characters / MAKE EXPONENTIAL
CC = int(math.exp(math.ceil(int(args.charcount)/160.0))/2.0)+10

# chc = chapter count
CHC = int(math.exp(math.ceil(int(args.length)/160.0))/2.0)+10

# dtv = drug trip volume
DTV = (int(args.length)/4.0 + int(args.realism)/12.0 + int(args.passion)/6.0 + int(args.depravity)*1.5)/1000.0
if 'beat' in args.genre:
	DTV += 1.0
if 'society' in args.conflict:
	DTV += 1.0
DTV = int(math.ceil(5.0*DTV))

# scp = scp article volume
SCP = int(args.length)/1000.0
if bool(set(['sci-fi', 'horror']) & set(args.genre)):
	SCP += 1.0
if bool(set(['tech', 'no god', 'reality', 'nature', 'god']) & set(args.conflict)):
	SCP += 1.0
SCP = int(math.ceil(2.0*SCP))

# den = length (in chars) of project gutenberg excerpts
DEN = int(args.density)*10

# ggv = gutenberg excerpt volume
GGV = (int(args.length) + int(args.density))/500.0
if 'literary' in args.genre:
	GGV += 2.0
GGV = int(math.ceil(5.0*GGV))

# chl = chapter length as percent of potential chapter length
CHL = int(args.length)/1000.0


# file text fetchers
def get_file(fp):

	f = open(sysPath+fp, 'r')
	t = f.read()
	f.close()

	return t

def get_zip(fp):

	fileName = fp.split('/')[-1]
	noExtName = fileName.split('.')[0]
	txtName = noExtName + ".txt"

	ff = ZipFile(fp, 'r')
	fileNames = ff.namelist()
	oo = ff.open(fileNames[0], 'r')
	tt = oo.read()
	oo.close()
	ff.close()

	return tt



# CLASSES

class Character(object):

	def __init__(self, firstName, lastName):
		self.firstName = firstName
		self.lastName = lastName
		self.introDesc = ""
		self.scenes = []
		self.drugTrips = []
		self.scpReports = [] 
		self.gbergExcerpts = []
		self.friends = [] # list of objects


class Chapter(object):

	def __init__(self, charObj):
		self.charObj = charObj
		self.title = ""
		self.blocks = []


	def title_maker(self):
		charTitle = ri(0, 2)

		if not bool(charTitle):

			ttl = self.charObj.firstName + " " + self.charObj.lastName

		else:
			
			titleSource = ri(0, 3)

			if titleSource == 0:
				textSource = rc(self.charObj.scenes)
			elif titleSource == 1:
				textSource = rc(self.charObj.drugTrips)
			elif titleSource == 2:
				textSource = rc(self.charObj.scpReports)
			elif titleSource == 3:
				textSource = rc(self.charObj.gbergExcerpts)

			tokens = nltk.word_tokenize(textSource)

			if len(tokens) > 20:
				index = ri(0, len(tokens)-10)
				titleLen = ri(2, 6)
				ttl = ' '.join(tokens[index:index+titleLen])
			else:
				ttl = self.charObj.firstName + " " + self.charObj.lastName

		self.title = ttl


	def chapter_builder(self):
		blockList = [self.charObj.introDesc] + self.charObj.scenes + self.charObj.drugTrips + self.charObj.scpReports + self.charObj.gbergExcerpts
		
		random.shuffle(blockList)

		stopAt = int(math.ceil(CHL*len(blockList)))

		blockList = blockList[:stopAt]

		self.blocks = blockList

		# self.blocks.append("stuff")



class Novel(object):

	def __init__(self):
		self.title = args.title
		self.characters = [] # list of characters
		self.chapters = [] # list of chapters

	def generate(self):
		self.make_chars()
		self.assemble_chapters()
		self.make_tex_file()


	def make_tex_file(self):
		# Look at PlotGen for this part
		outputFileName = self.title

		latex_special_char_1 = ['&', '%', '$', '#', '_', '{', '}']
		latex_special_char_2 = ['~', '^', '\\']

		outputFile = open(sysPath+"output/"+outputFileName+".tex", 'w')

		openingTexLines = ["\\documentclass[12pt]{book}",
						   "\\usepackage{ucs}",
						   "\\usepackage[utf8x]{inputenc}",
						   "\\usepackage{hyperref}",
						   "\\title{"+outputFileName+"}",
						   "\\author{collective consciousness fiction generator\\\\http://rossgoodwin.com/ficgen}",
						   "\\date{\\today}",
						   "\\begin{document}",
						   "\\maketitle"]

		closingTexLine = "\\end{document}"

		for line in openingTexLines:
			outputFile.write(line+"\n\r")
		outputFile.write("\n\r\n\r")

		for ch in self.chapters:

			outputFile.write("\\chapter{"+ch.title+"}\n\r")
			outputFile.write("\n\r\n\r")

			rawText = '\n\r\n\r\n\r'.join(ch.blocks)

			try:
				rawText = rawText.decode('utf8')
			except:
				pass
			try:
				rawText = rawText.encode('ascii', 'ignore')
			except:
				pass

			i = 0
			for char in rawText:

				if char == "\b":
					outputFile.seek(-1, 1)
				elif char in latex_special_char_1 and rawText[i-1] != "\\":
					outputFile.write("\\"+char)
				elif char in latex_special_char_2 and not rawText[i+1] in latex_special_char_1:
					outputFile.write("-")
				else:
					outputFile.write(char)

				i += 1

			outputFile.write("\n\r\n\r")

		outputFile.write("\n\r\n\r")
		outputFile.write(closingTexLine)

		outputFile.close()

		print '\"'+sysPath+'output/'+outputFileName+'.tex\"'


	def assemble_chapters(self):
		novel = []

		for c in self.characters:
			novel.append(Chapter(c))

		for ch in novel:
			ch.title_maker()
			ch.chapter_builder()

		random.shuffle(novel) # MAYBE RETHINK THIS LATER

		self.chapters = novel


	def make_chars(self):
		# establish gender ratio
		charGenders = [ri(0,1) for _ in range(CC)]
		
		# initialize list of characters
		chars = []

		# add user defined characters
		for firstlast in args.charnames:
			fl_list = firstlast.split('_')  # Note that split is an underscore!
			chars.append(Character(fl_list[0], fl_list[1]))

		# add generated characters
		for b in charGenders:
			if b:
				chars.append(Character(rc(fFirstNames), rc(surnames)))
			else:
				chars.append(Character(rc(mFirstNames), rc(surnames)))

		# establish list of intro scenes
		introScenePaths = rs(characterTropeFiles, len(chars))

		# establish list of settings
		settings = rs(settingTropeFiles, len(chars)*TSV)

		# establish list of drug trips
		trips = rs(erowidExpPaths, len(chars)*DTV)

		# establish list of scp articles
		scps = rs(scpPaths, len(chars)*SCP)

		# establish list of gberg excerpts
		gbergs = rs(gPaths.values(), len(chars)*GGV)

		i = 0
		j = 0
		m = 0
		p = 0
		s = 0
		for c in chars:

			# make friends
			c.friends += rs(chars, ri(1,len(chars)-1))
			if c in c.friends:
				c.friends.remove(c)

			# add introduction description
			c.introDesc = self.personal_trope([c], introScenePaths[i])

			# add setting scenes
			for k in range(TSV):
				c.scenes.append(self.personal_trope([c]+c.friends, settings[j+k]))

			# add drug trip scenes
			for n in range(DTV):
				c.drugTrips.append(self.personal_trip([c]+c.friends, trips[m+n]))

			# add scp articles
			for q in range(SCP):
				c.scpReports.append(self.personal_scp([c]+c.friends, scps[p+q]))

			# add gberg excerpts
			for t in range(GGV):
				c.gbergExcerpts.append(self.personal_gberg([c]+c.friends, gbergs[s+t]))

			i += 1
			j += TSV
			m += DTV
			p += SCP
			s += GGV

		self.characters = chars


	def personal_trope(self, charList, filePath):
		text = get_file(filePath)
		# text = text.decode('utf8')
		# text = text.encode('ascii', 'ignore')

		if len(charList) == 1:
			characterTrope = True
		else:
			characterTrope = False

		try:

			pos = en.sentence.tag(text)
			wordtag = map(list, zip(*pos))
			words = wordtag[0]
			tags = wordtag[1]

			for i in range(len(words)):
				charRef = rc([rc(charList), charList[0]])
				if words[i].lower() == "character" and i > 0:
					words[i-1] = charRef.firstName
					words[i] = charRef.lastName

				elif tags[i] == "PRP":
					words[i] = charRef.firstName
				elif tags[i] == "PRP$":
					words[i] = charRef.firstName+"\'s"
				elif tags[i] in ["VBD", "VBG", "VBN", "VBZ"]:
					try:
						words[i] = en.verb.past(words[i], person=3, negate=False)
					except:
						pass

				if characterTrope:

					if words[i] == "have":
						words[i] = "has"
					elif words[i] == "are":
						words[i] = "is"

			punc = [".", ",", ";", ":", "!", "?"]

			for i in range(len(words)):
				if words[i] in punc:
					words[i] = '\b'+words[i]

			final_text = " ".join(words)

			if characterTrope:

				mainCharRef = rc(charList)

				index = string.find(final_text, mainCharRef.firstName)

				if final_text[index+len(mainCharRef.firstName)+1:index+len(mainCharRef.firstName)+1+len(mainCharRef.lastName)] == mainCharRef.lastName:
					final_text = final_text[index:]
				else:
					final_text = mainCharRef.firstName+" "+mainCharRef.lastName+final_text[index+len(mainCharRef.firstName):]

			replacements = {"trope": "clue", "Trope": "clue", "TROPE": "CLUE"}

			for x, y in replacements.iteritems():
				final_text = string.replace(final_text, x, y)

		except:
			
			final_text = ""


		return final_text


	def personal_trip(self, charList, tripPath):

		fileText = get_file(tripPath)
		splitText = fileText.split('\\vspace{2mm}')
		endOfText = splitText[-1]
		text = endOfText[:len(endOfText)-15]

		try:

			pos = en.sentence.tag(text)
			wordtag = map(list, zip(*pos))
			words = wordtag[0]
			tags = wordtag[1]

			for i in range(len(words)):

				charRef = rc([rc(charList), charList[0]])

				if tags[i] == "PRP":
					words[i] = charRef.firstName
				elif tags[i] == "PRP$":
					words[i] = charRef.firstName+"\'s"
				elif tags[i] in ["VBD", "VBG", "VBN", "VBZ"]:
					try:
						words[i] = en.verb.past(words[i], person=3, negate=False)
					except:
						pass
				else:
					pass

			punc = [".", ",", ";", ":", "!", "?"]

			for i in range(len(words)):
				if words[i] in punc:
					words[i] = '\b'+words[i]

			final_text = " ".join(words)

			final_text = string.replace(final_text, "\\end{itemize}", "")
			final_text = string.replace(final_text, "\\begin{itemize}", "")
			final_text = string.replace(final_text, "\\end{center}", "")
			final_text = string.replace(final_text, "\\begin{center}", "")
			final_text = string.replace(final_text, "\\ldots", " . . . ")
			final_text = string.replace(final_text, "\\egroup", "")
			final_text = string.replace(final_text, "EROWID", "GOVERNMENT")
			final_text = string.replace(final_text, "erowid", "government")
			final_text = string.replace(final_text, "Erowid", "Government")

		except:

			final_text = ""

		return final_text


	def personal_scp(self, charList, scpPath):

		text = get_file(scpPath)

		text = string.replace(text, "SCP", charList[0].lastName)
		text = string.replace(text, "Foundation", charList[0].lastName)

		try:

			pos = en.sentence.tag(text)
			wordtag = map(list, zip(*pos))
			words = wordtag[0]
			tags = wordtag[1]

			for i in range(len(words)):

				charRef = rc(charList)

				if tags[i] == "PRP":
					words[i] = charRef.firstName
				elif tags[i] == "PRP$":
					words[i] = charRef.firstName+"\'s"
				elif tags[i] in ["VBD", "VBG", "VBN", "VBZ"]:
					try:
						words[i] = en.verb.past(words[i], person=3, negate=False)
					except:
						pass
				else:
					pass

			punc = [".", ",", ";", ":", "!", "?"]

			for i in range(len(words)):
				if words[i] in punc:
					words[i] = '\b'+words[i]

			final_text = " ".join(words)

		except:

			final_text = ""

		return final_text



	def personal_gberg(self, charList, gPath):

		full_text = ""
		while full_text == "":
			try:
				full_text = get_zip(gPath)
			except:
				full_text = ""
				gPath = rc(gPaths.values())

		endPart = full_text.split("*** START OF THIS PROJECT GUTENBERG EBOOK ")[-1]
		theMeat = endPart.split("*** END OF THIS PROJECT GUTENBERG EBOOK")[0]

		theMeat = string.replace(theMeat, "\r\n", " ")

		
		if len(theMeat) < DEN+5:
			text = theMeat
		else:
			startLoc = int(len(theMeat)/2.0 - DEN/2.0)
			text = theMeat[startLoc:startLoc+DEN]

		spLoc = text.find(" ")
		text = text[spLoc+1:]

		try:
			pos = en.sentence.tag(text)
			wordtag = map(list, zip(*pos))
			words = wordtag[0]
			tags = wordtag[1]

			for i in range(len(words)):

				charRef = rc([rc(charList), charList[0]])

				if tags[i] == "PRP":
					words[i] = charRef.firstName
				elif tags[i] == "PRP$":
					words[i] = charRef.firstName+"\'s"
				elif tags[i] in ["VBD", "VBG", "VBN", "VBZ"]:
					try:
						words[i] = en.verb.past(words[i], person=3, negate=False)
					except:
						pass
				else:
					pass

			punc = [".", ",", ";", ":", "!", "?"]

			for i in range(len(words)):
				if words[i] in punc:
					words[i] = '\b'+words[i]

			final_text = " ".join(words)

		except:
			final_text = ""


		return final_text


	def print_chars(self):

		c = self.make_chars()
		for character in c:
			print 'INTRO DESC'
			print '\n\n'
			print character.introDesc
			print '\n\n'
			print 'SCENES'
			print '\n\n'
			for s in character.scenes:
				print s
			print '\n\n'
			print 'DRUG TRIPS'
			print '\n\n'
			for t in character.drugTrips:
				print t
			print '\n\n'
			print 'SCP REPORTS'
			print '\n\n'
			for p in character.scpReports:
				print p
			print '\n\n'
			print 'GBERG EXCERPTS'
			print '\n\n'
			for q in character.gbergExcerpts:
				print q
			print '\n\n'




foobar = Novel()
foobar.generate()

The program’s argument values, which I parse with the Python argparse library, are designed to be supplied by the GUI. However, they can also be entered manually in the terminal.

Typing python ficgen.py -h in the terminal will yield the following help text:

usage: ficgen.py [-h] [--charnames [CHARNAMES [CHARNAMES ...]]]
                 [--title TITLE] [--length LENGTH] [--charcount CHARCOUNT]
                 [--genre [{literary,sci-fi,fantasy,history,romance,thriller,mystery,crime,pulp,horror,beat,fan,western,action,war,family,humor,sport,speculative} [{literary,sci-fi,fantasy,history,romance,thriller,mystery,crime,pulp,horror,beat,fan,western,action,war,family,humor,sport,speculative} ...]]]
                 [--conflict [{nature,man,god,society,self,fate,tech,no god,reality,author} [{nature,man,god,society,self,fate,tech,no god,reality,author} ...]]]
                 [--passion PASSION] [--verbosity VERBOSITY]
                 [--realism REALISM] [--density DENSITY]
                 [--accessibility ACCESSIBILITY] [--depravity DEPRAVITY]
                 [--linearity LINEARITY]
                 [--narrator [{first,1st,1,third,3rd,3,alt,alternating,subjective,objective,sub,obj,omniscient,omn,limited,lim} [{first,1st,1,third,3rd,3,alt,alternating,subjective,objective,sub,obj,omniscient,omn,limited,lim} ...]]]

Story Parameters

optional arguments:
  -h, --help            show this help message and exit
  --charnames [CHARNAMES [CHARNAMES ...]]
                        Character Names
  --title TITLE         Story Title
  --length LENGTH       Story Length (0-999)
  --charcount CHARCOUNT
                        Character Count (0-999)
  --genre [{literary,sci-fi,fantasy,history,romance,thriller,mystery,crime,pulp,horror,beat,fan,western,action,war,family,humor,sport,speculative} [{literary,sci-fi,fantasy,history,romance,thriller,mystery,crime,pulp,horror,beat,fan,western,action,war,family,humor,sport,speculative} ...]]
                        Genre
  --conflict [{nature,man,god,society,self,fate,tech,no god,reality,author} [{nature,man,god,society,self,fate,tech,no god,reality,author} ...]]
                        Conflict
  --passion PASSION     Passion (0-999)
  --verbosity VERBOSITY
                        Verbosity (0-999)
  --realism REALISM     Realism (0-999)
  --density DENSITY     Density (0-999)
  --accessibility ACCESSIBILITY
                        Accessibility (0-999)
  --depravity DEPRAVITY
                        Depravity (0-999)
  --linearity LINEARITY
                        Linearity (0-999)
  --narrator [{first,1st,1,third,3rd,3,alt,alternating,subjective,objective,sub,obj,omniscient,omn,limited,lim} [{first,1st,1,third,3rd,3,alt,alternating,subjective,objective,sub,obj,omniscient,omn,limited,lim} ...]]
                        Narrative PoV
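For example, a command along these lines (all values illustrative) will generate a complete novel and print the path of the resulting .tex file:

$ python ficgen.py --title "Test Novel" \
      --charnames Alice_Able Bob_Baker --charcount 300 \
      --length 500 --genre sci-fi horror --conflict tech society \
      --passion 400 --verbosity 500 --realism 300 --density 600 \
      --accessibility 500 --depravity 200 --linearity 500 --narrator third

Note that character names are given as First_Last pairs joined by underscores, which the make_chars method splits apart.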

Finally, here are some sample novels generated by the new code (titles chosen by volunteers):

ITP Code Poetry Slam
http://www.thehypertext.com/2014/12/09/itp-code-poetry-slam-2014/
Tue, 09 Dec 2014

Who Is Code Shakespeare?


Several months ago, I asked the question above. On November 14, 2014, I believe the first ITP Code Poetry Slam may have brought us closer to an answer.

In all the bustle of final projects being due in the past month, I haven’t had a chance to post anything about the code poetry slam I organized in November. Needless to say, the event was an enormous success, thanks mostly to the incredible judges and presenters. I hope to organize another one in 2015.

The judges brought a wealth of experience from a variety of different fields, which provided for some extraordinary discussion. They were:

This was the schedule for the slam, as written by me on the whiteboard wall of room 50 at ITP:


Rather than providing a blow-by-blow account of proceedings, I’ll direct you to Hannes Bajohr, who did just that.

The entries truly speak for themselves. Those who presented (in order of presentation) were:



Participants and attendees: Please let me know if any of the names or links above need to be changed. Also, if your name is not linked, and you’d like me to link it to something, let me know!

If you missed the ITP Code Poetry Slam, you can attend or submit your work for the Stanford Code Poetry Slam in January.

Fiction Generator, Part II
http://www.thehypertext.com/2014/11/20/fiction-generator-part-ii/
Thu, 20 Nov 2014

For background, see my previous post on this project.

After scraping about 5000 articles from tvtropes.org to retrieve descriptions for characters and settings, Sam Lavigne suggested I scrape erowid.org to dig up some exposition material. I proceeded to scrape 18,324 drug trip reports from the site, and integrated that material into the generator.

While this project remains unfinished—I’m considering adding more material from many other websites, which is why I’m calling it a “collective consciousness fiction generator”—it is now generating full-length “novels” (300+ pages, 8.5×11, 12pt font). I invited my fellow ITP students to insert themselves into novels, and they responded with over 50 suggestions for novel titles. The generated PDFs are available for viewing/download on Google Drive.

I decided to create covers for 3 of my favorite novels the software has generated. Click on the covers below to see those PDFs:

[Covers: Infinite Splendour, Parallel Synchronized Randomness, and Tricks of the Trade]

Here is the current state of the code that’s generating these novels:

import random

# NOTE: this is an excerpt; outputFileName and char_match() are defined
# earlier in the full script
latex_special_char_1 = ['&', '%', '$', '#', '_', '{', '}']
latex_special_char_2 = ['~', '^', '\\']

outputFile = open("output/"+outputFileName+".tex", 'w')

openingTexLines = ["\\documentclass[12pt]{book}",
				   "\\usepackage{ucs}",
				   "\\usepackage[utf8x]{inputenc}",
				   "\\usepackage{hyperref}",
				   "\\title{"+outputFileName+"}",
				   "\\author{collective consciousness fiction generator\\\\http://rossgoodwin.com/ficgen}",
				   "\\date{\\today}",
				   "\\begin{document}",
				   "\\maketitle"]

closingTexLine = "\\end{document}"

for line in openingTexLines:
	outputFile.write(line+"\n\r")
outputFile.write("\n\r\n\r")

intros = char_match()

for x, y in intros.iteritems():

	outputFile.write("\\chapter{"+x+"}\n\r")

	chapter_type = random.randint(0, 4)
	bonus_drug_trip = random.randint(0, 1)
	trip_count = random.randint(1,4)


	# BLOCK ONE

	if chapter_type in [0, 3]:

		for char in y[0]:
			if char == "`":
				outputFile.seek(-1, 1)
			elif char in latex_special_char_1:
				outputFile.write("\\"+char)
			elif char in latex_special_char_2:
				if char == '~':
					outputFile.write("")
				elif char == '^':
					outputFile.write("")
				elif char == '\\':
					outputFile.write("-")
				else:
					pass
			else:
				outputFile.write(char)

	elif chapter_type in [1, 4]:

		for char in y[2]:
			if char == "`":
				outputFile.seek(-1, 1)
			elif char in latex_special_char_1:
				outputFile.write("\\"+char)
			elif char in latex_special_char_2:
				if char == '~':
					outputFile.write("")
				elif char == '^':
					outputFile.write("")
				elif char == '\\':
					outputFile.write("-")
				else:
					pass
			else:
				outputFile.write(char)

	elif chapter_type == 2:

		for char in y[1][0]:
			if char == "`":
				outputFile.seek(-1, 1)
			else:
				outputFile.write(char)

	outputFile.write("\n\r\n\r\n\r")

	
	# BLOCK TWO

	if chapter_type == 0:

		for char in y[2]:
			if char == "`":
				outputFile.seek(-1, 1)
			elif char in latex_special_char_1:
				outputFile.write("\\"+char)
			elif char in latex_special_char_2:
				if char == '~':
					outputFile.write("")
				elif char == '^':
					outputFile.write("")
				elif char == '\\':
					outputFile.write("-")
				else:
					pass
			else:
				outputFile.write(char)

	elif chapter_type == 1:

		for char in y[0]:
			if char == "`":
				outputFile.seek(-1, 1)
			elif char in latex_special_char_1:
				outputFile.write("\\"+char)
			elif char in latex_special_char_2:
				if char == '~':
					outputFile.write("")
				elif char == '^':
					outputFile.write("")
				elif char == '\\':
					outputFile.write("-")
				else:
					pass
			else:
				outputFile.write(char)

	elif chapter_type in [3, 4]:

		for char in y[1][0]:
			if char == "`":
				outputFile.seek(-1, 1)
			else:
				outputFile.write(char)

	elif chapter_type == 2 and bonus_drug_trip:

		for tripIndex in range(trip_count):

			for char in y[1][tripIndex+1]:
				if char == "`":
					outputFile.seek(-1, 1)
				else:
					outputFile.write(char)

	else:
		pass

	outputFile.write("\n\r\n\r\n\r")


	# BLOCK THREE

	if chapter_type in [0, 1, 3, 4] and bonus_drug_trip:

		for tripIndex in range(trip_count):

			for char in y[1][tripIndex+1]:
				if char == "`":
					outputFile.seek(-1, 1)
				else:
					outputFile.write(char)

		outputFile.write("\n\r\n\r\n\r")

	else:
		pass


outputFile.write("\n\r\n\r")
outputFile.write(closingTexLine)


outputFile.close()


print '\"output/'+outputFileName+'.tex\"'


UPDATE: Part III

General Update
http://www.thehypertext.com/2014/09/29/general-update/
Mon, 29 Sep 2014

I’ve been so busy the past two weeks that I failed to update this blog. But documentation is important, and that’s why I’m going to take a moment to fill you in on all my recent activities. This post will cover all the projects I’ve been working on, primarily:

  • Applications Presentation on September 16
  • ITP Code Poetry Slam on November 14
  • The Mechanical Turk’s Ghost
  • Che55

On Tuesday, September 16, I helped deliver a presentation to our class in Applications. Yingjie Bei, Rebecca Lieberman, and Supreet Mahanti were in my group, and we utilized my Poetizer software to create an interactive storytelling exercise for the entire audience. Sarah Rothberg was kind enough to record the presentation, and Rebecca posted it on Vimeo:


I’ve also been organizing an ITP Code Poetry Slam, which will take place at 6:30pm on November 14. Submissions are now open, and I’m hoping the event will serve as a conduit for productive dialogue between the fields of poetry and computer science. Announcements regarding judges, special guests, and other details to come.

Various explorations related to the Mechanical Turk’s Ghost [working title] have consumed the rest of my time. While I wait for all the electronic components I need to arrive, I have been focusing on the software aspects of the project, along with some general aspects of the hardware.

The first revision to the preliminary design I sketched out in my prior post resulted from a friend’s suggestion. Rather than using conductive pads on the board, I now plan to use Hall effect sensors mounted beneath the board that will react to tiny neodymium magnets embedded in each chess piece. If everything works properly, this design should be far less visible, and thus less intrusive to the overall experience. I ordered 100 sensors and 500 magnets, and I look forward to experimenting with them when they arrive.

In the meantime, the parts I listed in my prior post arrived, and I was especially excited to begin working with the Raspberry Pi. I formatted an 8GB SD card and put NOOBS on it, then booted up the Raspberry Pi and installed Raspbian, a free operating system based on Debian Linux that is optimized for the Pi’s hardware.


The Stockfish chess engine will be a major component of this project, and I was concerned that its binaries would not compile on the Raspberry Pi. The makefile documentation listed a number of options for system architecture, none of which exactly matched the ARM v6 chip on the Raspberry Pi.

First, I tried the “ARMv7” option. The compiler ran for about 10 minutes before experiencing errors and failing. I then tried several other options, none of which worked. I was about to give up completely and resign myself to running the chess engine on my laptop when I noticed the “profile-build” option. I had never heard of profile-guided optimization (PGO), but I tried using the command “make profile-build” rather than “make build”, along with the option for unspecified 32-bit architecture. This combination allowed Stockfish to compile without any issues. Here is the command that I used (from the /Stockfish/src folder):

$ make profile-build ARCH=general-32

With Stockfish successfully compiled on the Raspberry Pi, I copied the binary executable to the system path (so that I could script the engine using the Python subprocess library), then tried running the Python script I wrote to control Stockfish. It worked without any issues:
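For reference, here’s a minimal sketch of that kind of UCI session over a pipe: set up a position, search, and read the centipawn score back from the engine’s output (the moves and search depth are illustrative):

import subprocess

# Talk to Stockfish over stdin/stdout using the UCI protocol
engine = subprocess.Popen(['stockfish'], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE)

def send(cmd):
    engine.stdin.write(cmd + '\n')
    engine.stdin.flush()

send('uci')
send('position startpos moves e2e4 e7e5')
send('go depth 12')

# 'info ... score cp N' lines carry the evaluation in centipawns
# (from the side to move); 'bestmove' marks the end of the search
score = None
while True:
    line = engine.stdout.readline()
    if line.startswith('info') and ' score cp ' in line:
        score = int(line.split(' score cp ')[1].split()[0])
    if line.startswith('bestmove'):
        break
print score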


My next set of explorations revolved around the music component of the project. As I specified in my prior post, I want the device to generate music. I took some time to consider what type of music would be most appropriate, and settled on classical music as a starting point. Classical music is ideal because so many great works are in the public domain, and because so many serious chess players enjoy listening to it during play. (As anecdotal evidence, the Chess Forum in Greenwich Village, a venue where chess players congregate to play at all hours of the day and night, plays nothing but classical music all the time. I have been speaking to one of the owners of the Chess Forum about demonstrating my prototype device there once it is constructed.)

Generating a classical music mashup using data from the game in progress was the first idea I pursued. For this approach, I imagined that two classical music themes (one for black, one for white) could be combined in a way that reflected the relative strength of each side at any given point in the game. (A more complex approach might involve algorithmic music generation, but I am not ready to pursue that option just yet.) Before pursuing any prototyping or experimentation, I knew that the two themes would need to be suitably different (so as to distinguish one from the other) but also somewhat complementary in order to create a pleasant listening experience. A friend of mine who studies music suggested pairing one song (or symphony or concerto) in a major key with another song in the relative minor key.

Using YouTube Mixer, I was able to prototype the overall experience by fading back and forth between two songs. I started by pairing Beethoven’s Symphony No. 9 with Rachmaninoff’s Piano Concerto No. 3, and I was very satisfied with the results.

I then worked on creating a music mashup script to pair with my chess engine script. My requirements seemed very simple: I would need a script that could play two sound files at once and control their respective volume levels independently, based on the fluctuations in the score calculated by the chess engine. The script would also need to be able to run on the Raspberry Pi.

These requirements ended up being more difficult to fulfill than I anticipated. I explored many Python audio libraries, including pyo, PyFluidSynth, mingus, and pygame’s mixer module. I also looked into using SoX, a command line audio utility, through the python subprocess library. Unfortunately, all of these options were either too complex or too simple to perform the required tasks.

Finally, on Gabe Weintraub’s suggestion, I looked into using Processing for my audio requirements and discovered a library called Minim that could do everything I needed. I then wrote the following Processing sketch:

import ddf.minim.*;

Minim minim1;
Minim minim2;
AudioPlayer player1;
AudioPlayer player2;

float gain1 = 0.0;
float gain2 = 0.0;
float tgtGain1 = 0.0;
float tgtGain2 = 0.0;
float level1 = 0.0;
float level2 = 0.0;
float lvlAdjust = 0.0;

BufferedReader reader;
String line;
float score = 0;

void setup() {
  minim1 = new Minim(this);
  minim2 = new Minim(this);
  player1 = minim1.loadFile("valkyries.mp3");
  player2 = minim2.loadFile("Rc3_1.mp3");
  player1.play();
  player1.setGain(-80.0);
  player2.play();
  player2.setGain(6.0);
}

void draw() {
  reader = createReader("score.txt");
  try {
    line = reader.readLine();
  } catch (IOException e) {
    e.printStackTrace();
    line = null;
  }
  print(line); 
  score = float(line);
  
  level1 = (player1.left.level() + player1.right.level()) / 2;
  level2 = (player2.left.level() + player2.right.level()) / 2;  

  lvlAdjust = map(level1 - level2, -0.2, 0.2, -1, 1);
  tgtGain1 = map(score, -1000, 1000, -30, 6);
  tgtGain2 = map(score, 1000, -1000, -30, 6);
  tgtGain1 = tgtGain1 * (lvlAdjust + 1);
  tgtGain2 = tgtGain2 / (lvlAdjust + 1);
  
  gain1 = player1.getGain();
  gain2 = player2.getGain();
  
  print(' ');
  print(gain1);
  print(' ');
  print(gain2);
  print(' ');
  print(level1);
  print(' ');
  println(level2);
  
  if (level2 > level1) {
    tgtGain2 -= 0.1;  // duck whichever track is currently louder
  } else if (level1 > level2) {
    tgtGain1 -= 0.1;
  }
  
  player1.setGain(tgtGain1);
  player2.setGain(tgtGain2);
}

The script above reads score values from a file created by the Python script that controls the chess engine. The score values are then mapped to gain levels for each of the two tracks that are playing. I input a chess game move by move into the terminal, and the combination of scripts worked as intended by fading between the two songs based on the relative positions of white and black in the chess game.

Unfortunately, a broader issue with my overall approach became highly apparent: the dynamic qualities of each song overshadowed most of the volume changes that occurred as a result of the game. In other words, each song got louder and quieter at various points by itself, and that was more noticeable than the volume adjustments the script was making. I attempted to compensate for these natural volume changes by normalizing the volume of each song based on its relative level compared to the other song (see the level and lvlAdjust calculations, and the final gain adjustments in draw(), in the code above). This did not work as effectively as I hoped, and resulted in some very unpleasant sound distortions.

After conferring with my Automata instructor, Nick Yulman, I have decided to take an alternate approach. Rather than playing two complete tracks and fading between them, I plan to record stems (individual instrument recordings) using the relevant MIDI files, and then create loop tracks that will be triggered at various score thresholds. I am still in the process of exploring this approach and will provide a comprehensive update sometime in the near future.

In the meantime, I have been learning about using combinations of digital and analog inputs and outputs with the Arduino, and using various input sensors to control motors, servos, solenoids, and RGB LEDs:


In Introduction to Computational Media, we are learning about object oriented programming, and Dan Shiffman asked us to create a Processing sketch using classes and objects this week. As I prepare to create a physical chessboard, I thought it would be appropriate to make a software version to perform tests. Che55 (which I named with 5’s as an homage to Processing’s original name, “Proce55ing“) was the result.


Che55 is a fully functional chess GUI, written in Processing. Only legal moves can be made, and special moves such as en passant, castling, and pawns reaching the end of the board have been accounted for. I plan to link Che55 with Stockfish in order to create chess visualizations and provide game analysis, and to prototype various elements of the Mechanical Turk’s Ghost, including the musical component. I left plenty of space around the board for additional GUI elements, which I’m currently working on implementing. All of the code is available on Github.

Unfortunately, I cannot claim credit for the chess piece designs. Rather, I was inspired by an installation I saw at the New York MoMA two weeks ago called Thinking Machine 4 by Martin Wattenberg and Marek Walczak (also written in Processing).

That’s all for now. Stay tuned for new posts about each of these projects. I will try to keep this blog more regularly updated so there (hopefully) will be no need for future multi-project megaposts like this one. Thanks for reading.

Wikipoet
http://www.thehypertext.com/2014/09/04/wikipoet/
Thu, 04 Sep 2014

Wikipoet is a program I wrote in Python that generates simple, iterative poems using the raw text from Wikipedia articles (retrieved via the MediaWiki API) and NLTK.

Wikipoet begins with a single word and uses the Wikipedia article for that word to find likely noun and adjective combinations. It then prints a stanza with the following structure:

[word]
[adjective] [word]
[adjective], [adjective] [word]
[adjective], [adjective], [adjective] [word]
[adjective], [adjective], [adjective], [adjective] [word]
[word] [noun], [word] [noun], [word] [noun]
[word] [noun], [word] [noun], [word] [noun]
[word] [noun / new word]
[new word]

Wikipoet then repeats this operation for the new word. The stanzas can continue indefinitely.
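Under the hood, the pairing step comes down to part-of-speech tagging with NLTK. Here’s a minimal sketch of one way to pull candidate adjectives for a word out of its article text (fetching the article is elided, and the actual Wikipoet heuristics are more involved):

import nltk

def find_adjectives(article_text, word):
    # Tag the article text and collect adjectives that appear
    # immediately before the target word (Penn Treebank 'JJ*' tags)
    tagged = nltk.pos_tag(nltk.word_tokenize(article_text))
    adjectives = []
    for (w1, t1), (w2, _) in zip(tagged, tagged[1:]):
        if t1.startswith('JJ') and w2.lower() == word:
            adjectives.append(w1.lower())
    return adjectives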

Here’s an example:

computer
former computer
flash, military computer
many, full, best computer
all, later, more, earlier computer
computer design, computer help, computer say
computer reference, computer voice, computer central processing unit
computer job
job
principal job
risky, creative job
critical, national, many job
lowly, steady, poor, primary job
job satisfaction, job reference, job preparation
job system, job look, job retention
job want
want
noble want
four, like want
more, strong, most want
human, some, american, many want
want can, want production, want protection
want level, want story, want item
want character
character
classical character
novel, new character
other, written, first character
greek, various, practical, set character
character construction, character actor, character words
character see, character page, character volume
character pick
pick
game pick
original, american pick
used, all, first pick
bay, star, early, specific pick
pick brand, pick use, pick set
pick title, pick people, pick peter
pick page
page
side page
modern, all page
other, past, early page
south, worldwide, beginning, electronic page
page format, page declaration, page band
page technology, page business, page address
page stop
stop
three stop
full, former stop
total, black, used stop
top, safe, international, white stop
stop code, stop nation, stop destruction
stop period, stop frank, stop part
stop closure
closure
prompt closure
epistemic, tight closure
early, short, social closure
transitive, deductive, other, cognitive closure
closure operator, closure process, closure rule
closure operation, closure law, closure map
closure series
series
kind series
systematic, sequential series
geologic, former, odd series
world, fixed, ordered, funny series
series flora, series movie, series sequence
series tone, series world, series step
series year
year
actual year
received, minor year
mass, cultural, done year
scheduled, united, martian, keen year
year consultation, year master, year trend
year personal, year level, year lord
year high

Depending on the length of the poem desired and the speed of one’s internet connection, Wikipoet can take a relatively long time to produce its output. The poem above took approximately 30 minutes to produce with a standard broadband connection.

While creating Wikipoet, I realized that I could improve the quality of its adjective-noun pairings by first producing a set of candidate adjective-noun combinations, then searching for each combination and removing any that appears fewer than 10 times in Wikipedia search results.

Here is the code that accomplishes that using the MediaWiki API and the Python Requests library, where y is a list of adjectives:

import requests

# Assumed setup for this excerpt: a Requests session, the MediaWiki API
# endpoint, and a User-Agent header; 'word' is the base noun being paired
s = requests.Session()
url = 'https://en.wikipedia.org/w/api.php'
headers = {'User-Agent': 'Wikipoet'}

for i in y[:]:
    search_string = "\"" + i + ' ' + word + "\""
    payload = {'action': 'query', 'list': 'search',
               'format': 'json', 'srsearch': search_string, 
               'srlimit': 1, 'srprop': 'snippet',
               'srwhat': 'text'}
    r = s.get(url, params=payload, headers=headers)
    json_obj = r.json()
    hits = int(json_obj['query']['searchinfo']['totalhits'])
    if hits < 10:
        y.remove(i)
    else:
        pass

The primary utility of Wikipoet is its ability to find meaningful adjectives to pair with nouns and nouns to pair with adjectives. I plan to integrate this process into future projects.
