computational media – THE HYPERTEXT
http://www.thehypertext.com

word.camera, Part II
http://www.thehypertext.com/2015/05/08/word-camera-part-ii/
Fri, 08 May 2015 21:50:25 +0000

Click Here for Part I




For my final projects in Conversation and Computation with Lauren McCarthy and This Is The Remix with Roopa Vasudevan, I iterated on my word.camera project. I added a few new features to the web application, including a private API that I used to enable the creation of a physical version of word.camera inside a Mamiya C33 TLR.

The current version of the code remains open source and available on GitHub, and the project continues to receive positive mentions in the press.

On April 19, I announced two new features for word.camera via the TinyLetter email newsletter I advertised on the site.

Hello,

Thank you for subscribing to this newsletter, wherein I will provide occasional updates regarding my project, word.camera.

I wanted to let you know about two new features I added to the site in the past week:

word.camera/albums You can now generate ebooks (DRM-free ePub format) from sets of lexographs.

word.camera/postcards You can support word.camera by sending a lexograph as a postcard, anywhere in the world for $5. I am currently a graduate student, and proceeds will help cover the cost of maintaining this web application as a free, open source project.

Also:

word.camera/a/XwP59n1zR A lexograph album containing some of the best results I’ve gotten so far with the camera on my phone.

1, 2, 3 A few random lexographs I did not make that were popular on social media.

Best,

Ross Goodwin
rossgoodwin.com
word.camera

Next, I set to work on the physical version. I decided to use a technique I developed on another project earlier in the semester to create word.camera epitaphs composed of highly relevant paragraphs from novels. To ensure fair use of copyrighted materials, I determined that all of this additional data would be processed locally on the physical camera.

I assembled the corpus from a combination of classic novels and novels I personally enjoyed, including only paragraphs over 99 characters in length. In total, the corpus contains 7,113,809 words from 48 books.
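The corpus-building script itself isn't shown in the post, but the filtering step described above can be sketched roughly as follows (the function names and the exact paragraph-splitting rule are my assumptions):

```python
import re

def keep_paragraphs(raw_text, min_chars=100):
    # Split the raw novel text on blank lines and keep only
    # paragraphs over 99 characters, as described in the post.
    paragraphs = [p.strip() for p in re.split(r'\n\s*\n', raw_text)]
    return [p for p in paragraphs if len(p) >= min_chars]

def total_words(paragraphs):
    # Naive whitespace word count across the kept paragraphs.
    return sum(len(p.split()) for p in paragraphs)
```

Running something like this over all 48 books and summing `total_words` would yield the corpus word count reported above.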

Below is an infographic showing all the books used in my corpus and their relative included word counts.


To build the physical version of word.camera, I purchased the following materials:

  • Raspberry Pi 2 board
  • Raspberry Pi camera module
  • Two (2) 10,000 mAh batteries
  • Thermal receipt printer
  • 40 female-to-male jumper wires
  • Three (3) extra-small prototyping perf boards
  • LED button

After some tinkering, I was able to put together the arrangement pictured below, which could print raw word.camera output on the receipt printer.

[Photo: the printing arrangement]

I thought for a long time about the type of case I wanted to put the camera in. My original idea was a photobooth, but I felt that a portable camera—along the lines of Matt Richardson’s Descriptive Camera—might take better advantage of the Raspberry Pi’s small footprint.

Rather than fabricating my own case, I decided that an antique film camera might provide a familiar exterior to draw in people unfamiliar with the project. (And I was creating it for a remix-themed class, after all.) So I purchased a lot of three broken TLR film cameras on eBay; the Mamiya C33 was in the best condition of the three, so I gutted it. (N.B. I'm an antique camera enthusiast, and I own a working version of the C33's predecessor, the C2; despite its broken condition, cutting open the bellows of the C33 felt sacrilegious.)

I laser cut some clear acrylic I had left over from the traveler’s lamp project to fill the lens holes and mount the LED button on the back of the camera. Here are some photos of the finished product:

[Photos of the finished camera]

And here is the code that's running on the Raspberry Pi (the crux of the matching algorithm is the findIntersection function):

import uuid
import picamera
import RPi.GPIO as GPIO
import requests
from time import sleep
import os
import json
from Adafruit_Thermal import *
from alchemykey import apikey
import time

# SHUTTER COUNT / startNo GLOBAL
startNo = 0

# Init Printer
printer = Adafruit_Thermal("/dev/ttyAMA0", 19200, timeout=5)
printer.setSize('S')
printer.justify('L')
printer.setLineHeight(36)

# Init Camera
camera = picamera.PiCamera()

# Init GPIO
GPIO.setmode(GPIO.BCM)

# Working Dir
cwd = '/home/pi/tlr'

# Init Button Pin
GPIO.setup(21, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Init LED Pin
GPIO.setup(20, GPIO.OUT)

# Init Flash Pin
GPIO.setup(16, GPIO.OUT)

# LED and Flash Off
GPIO.output(20, False)
GPIO.output(16, False)

# Load lit list
lit = json.load( open(cwd+'/lit.json', 'r') )


def blink(n):
    for _ in range(n):
        GPIO.output(20, True)
        sleep(0.2)
        GPIO.output(20, False)
        sleep(0.2)

def takePhoto():
    fn = str(int(time.time()))+'.jpg' # TODO: Change to timestamp hash
    fp = cwd+'/img/'+fn
    GPIO.output(16, True)
    camera.capture(fp)
    GPIO.output(16, False)
    return fp

def getText(imgPath):
    endPt = 'https://word.camera/img'
    payload = {'Script': 'Yes'}
    files = {'file': open(imgPath, 'rb')}
    response = requests.post(endPt, data=payload, files=files)
    return response.text

def alchemy(text):
    endpt = "http://access.alchemyapi.com/calls/text/TextGetRankedConcepts"
    payload = {"apikey": apikey,
               "text": text,
               "outputMode": "json",
               "showSourceText": 0,
               "knowledgeGraph": 1,
               "maxRetrieve": 500}
    headers = {'content-type': 'application/x-www-form-urlencoded'}
    r = requests.post(endpt, data=payload, headers=headers)
    return r.json()

def findIntersection(testDict):
    returnText = ""
    returnTitle = ""
    returnAuthor = ""
    recordInter = set(testDict.keys())
    relRecord = 0.0
    for doc in lit:
        inter = set(doc['concepts'].keys()) & set(testDict.keys())
        if inter:
            relSum = sum([doc['concepts'][tag]+testDict[tag] for tag in inter])
            if relSum > relRecord: 
                relRecord = relSum
                recordInter = inter
                returnText = doc['text']
                returnTitle = doc['title']
                returnAuthor = doc['author']
    doc = {
        'text': returnText,
        'title': returnTitle,
        'author': returnAuthor,
        'inter': recordInter,
        'record': relRecord
    }
    return doc

def puncReplace(text):
    replaceDict = {
        '—': '---',
        '–': '--',
        '‘': "\'",
        '’': "\'",
        '“': '\"',
        '”': '\"',
        '´': "\'",
        'ë': 'e',
        'ñ': 'n'
    }

    for key in replaceDict:
        text = text.replace(key, replaceDict[key])

    return text


blink(5)
while 1:
    input_state = GPIO.input(21)
    if not input_state:
        GPIO.output(20, True)
        try:
            # Get Word.Camera Output
            print "GETTING TEXT FROM WORD.CAMERA..."
            wcText = getText(takePhoto())
            blink(3)
            GPIO.output(20, True)
            print "...GOT TEXT"

            # Print
            # print "PRINTING PRIMARY"
            # startNo += 1
            # printer.println("No. %i\n\n\n%s" % (startNo, wcText))

            # Get Alchemy Data
            print "GETTING ALCHEMY DATA..."
            data = alchemy(wcText)
            tagRelDict = {concept['text']:float(concept['relevance']) for concept in data['concepts']}
            blink(3)
            GPIO.output(20, True)
            print "...GOT DATA"

            # Make Match
            print "FINDING MATCH..."
            interDoc = findIntersection(tagRelDict)
            print interDoc
            interText = puncReplace(interDoc['text'].encode('ascii', 'xmlcharrefreplace'))
            interTitle = puncReplace(interDoc['title'].encode('ascii', 'xmlcharrefreplace'))
            interAuthor = puncReplace(interDoc['author'].encode('ascii', 'xmlcharrefreplace'))
            blink(3)
            GPIO.output(20, True)
            print "...FOUND"

            grafList = [p for p in wcText.split('\n') if p]

            # Choose primary paragraph
            primaryText = min(grafList, key=lambda x: x.count('#'))
            url = 'word.camera/i/' + grafList[-1].strip().replace('#', '')

            # Print
            print "PRINTING..."
            startNo += 1
            printStr = "No. %i\n\n\n%s\n\n%s\n\n\n\nEPITAPH\n\n%s\n\nFrom %s by %s" % (startNo, primaryText, url, interText, interTitle, interAuthor)
            printer.println(printStr)

        except:
            print "SOMETHING BROKE"
            blink(15)

        GPIO.output(20, False)

Thanks to a transistor pulsing circuit that keeps the printer’s battery awake, and some code that automatically tethers the Raspberry Pi to my iPhone, the Fiction Camera is fully portable. I’ve been walking around Brooklyn and Manhattan over the past week making lexographs—the device is definitely a conversation starter. As a street photographer, I’ve noticed that people seem to be more comfortable having their photograph taken with it than with a standard camera, possibly because the visual image (and whether they look alright in it) is far less important.

As a result of these wanderings, I’ve accrued quite a large number of lexograph receipts. Earlier iterations of the receipt design contained longer versions of the word.camera output. Eventually, I settled on a version that contains a number (indicating how many lexographs have been taken since the device was last turned on), one paragraph of word.camera output, a URL to the word.camera page containing the photo + complete output, and a single high-relevance paragraph from a novel.

[Photos of lexograph receipts]

I also demonstrated the camera at ConvoHack, our final presentation event for Conversation and Computation, which took place at Babycastles gallery, and passed out over 50 lexograph receipts that evening alone.

[Photos from ConvoHack]

Photographs by Karam Byun

Often, when photographing a person, the camera will output a passage from a novel featuring a character description that subjects seem to relate to. Many people have told me the results have qualities that remind them of horoscopes.

Text Clock
http://www.thehypertext.com/2015/04/11/text-clock/
Sat, 11 Apr 2015 07:50:22 +0000

Over winter break, I turned the complete text of Project Gutenberg into a clock. The page can initially take some time to load, but once it does, it will update every minute.
[Screenshots of the Text Clock]

I wrote the clock template in JavaScript, and performed the text extraction using Python scripts. The site is running on GitHub pages, and all the code for the clock template and relevant data is on GitHub.
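The extraction scripts aren't included in the post, but indexing Gutenberg passages by the clock times they mention might look something like this (the regex, which only catches digital-style times like "3:33", and the function name are my assumptions; the real scripts presumably also handle spelled-out times):

```python
import re

# Matches digital-clock times such as "3:33" or "12:07" in prose.
TIME_RE = re.compile(r'\b(1[0-2]|0?[1-9]):([0-5][0-9])\b')

def times_mentioned(passage):
    # Return each (hour, minute) pair found, so passages can be
    # bucketed by the minute they describe and looked up by the
    # clock template each time it ticks.
    return [(int(h), int(m)) for h, m in TIME_RE.findall(passage)]
```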

word.camera
http://www.thehypertext.com/2015/04/11/word-camera/
Sat, 11 Apr 2015 05:12:58 +0000

lexograph /ˈleksəʊɡɹɑːf/ (n.)
A text document generated from digital image data

 

Last week, I launched a web application and a concept for photographic text generation that I have been working on for a few months. The idea came to me while working on another project, a computer generated screenplay, and I will discuss the connection in this post.

word.camera is responsive — it works on desktop, tablet, and mobile devices running recent versions of iOS or Android. The code behind it is open source and available on GitHub, because lexography is for everyone.

 

[Screenshots of word.camera]

 

Users can share their lexographs via unique URLs. Of all the lexographs users have generated since the site launched (there are now almost 7,000), this one, shared on reddit's /r/creativecoding, stuck with me the most: http://word.camera/i/7KZPPaqdP

I was surprised when the software noticed and commented on the singer in the painting behind me: http://word.camera/i/ypQvqJr6L

I was inspired to create this project while working on another one. This semester, I received a grant from the Future of Storytelling Initiative at NYU to produce a computer generated screenplay, and I had been thinking about how to generate text that is more cohesive and realistically descriptive: text that transitions between related topics in a logical fashion and describes a scene that could realistically exist (no "colorless green ideas sleeping furiously"), so that filming the screenplay would be possible. After playing with the Clarifai API, which uses convolutional neural networks to tag images, it occurred to me that including photographs in my input corpus, rather than relying on text alone, could provide those qualities. word.camera is my first attempt at producing that type of generative text.

At the moment, the results are not nearly as grammatical as I would like them to be, and I’m working on that. The algorithm extracts tags from images using Clarifai’s convolutional neural networks, then expands those tags into paragraphs using ConceptNet (a lexical relations database developed at MIT) and a flexible template system. The template system enables the code to build sentences that connect concepts together.
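As a rough illustration of the tag-expansion idea (this is not the actual word.camera code: the relations below are hard-coded stand-ins for ConceptNet lookups, and the template strings are invented):

```python
import random

# Stand-in for ConceptNet: a few (relation, concept) pairs per tag.
# In the real system these would come from ConceptNet queries.
RELATIONS = {
    'dog': [('IsA', 'animal'), ('RelatedTo', 'loyalty')],
    'tree': [('IsA', 'plant'), ('RelatedTo', 'shade')],
}

# Sentence templates keyed by relation type.
TEMPLATES = {
    'IsA': 'I see a {concept}, which is a kind of {related}.',
    'RelatedTo': 'The {concept} makes me think of {related}.',
}

def expand_tag(tag, rng=random):
    # Turn one image tag into a sentence by choosing a known
    # relation and filling in the matching template.
    relations = RELATIONS.get(tag)
    if not relations:
        return 'There is a {}.'.format(tag)
    rel, related = rng.choice(relations)
    return TEMPLATES[rel].format(concept=tag, related=related)
```

Chaining expansions like these, with templates that reference the previous sentence's concepts, is what lets the generated paragraphs connect one concept to the next.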

This project is about augmenting our creativity and presenting images in a different format, but it's also about creative applications of artificial intelligence technology. When we imagine the type of AI we'll have in the future, based on what we've read in science fiction novels, we picture a robot that can describe and interact with its environment in natural language. Creating the type of AI we imagine in our wildest sci-fi fantasies is not only an engineering problem, but also a design problem that requires a creative approach.

I hope lexography eventually becomes accepted as a new form of photography. As a writer and a photographer, I love the idea that I could look at a scene and photograph it because it might generate an interesting poem or short story, rather than just an interesting image. And I’m not trying to suggest that word.camera is the final or the only possible implementation of that new art form. I made the code behind word.camera open source because I want others to help improve it and make their own versions — provided they also make their code available under the same terms, which is required under the GNU GPLv3 open source license I’m using. As the technology gets better, the results will get better, and lexography will make more sense to people as a worthy artistic pursuit.

I’m thrilled that the project has received worldwide attention from photography blogs and a few media outlets, and I hope users around the world continue enjoying word.camera as I keep working to improve it. Along with improving the language, I plan to expand the project by offering a mobile app and generated downloadable ebooks so that users can enjoy their lexographs offline.


 

Click Here for Part II

Fiction Generator, Part IV
http://www.thehypertext.com/2014/12/21/fiction-generator-part-iv/
Sun, 21 Dec 2014 03:04:53 +0000

Prior Installments:
Part I
Part II
Part III

For my final project in Comm Lab: Networked Media with Daniel Shiffman, I put the Fiction Generator online at fictiongenerator.com. VICE/Motherboard ran an article about my website, and I exhibited the project at the ITP Winter Show.


 

After reading William S. Burroughs' essay about the cut-up technique, I decided to implement an algorithmic version of it in the generator. I also refactored my existing code and added a load screen with this animation:

[Load screen animation: robot holding a book]

I am running a Linux/Apache/Flask stack at the moment. Here's a screenshot of the website in its current state:

[Screenshot of fictiongenerator.com]

Fiction Generator, Part III
http://www.thehypertext.com/2014/12/09/fiction-generator-part-iii/
Tue, 09 Dec 2014 19:00:04 +0000

Prior Installments:
Part I
Part II

For my final project in Introduction to Computational Media with Daniel Shiffman, I presented my fiction generator (working title: “FicGen”). Since my previous post about this project, I have added a graphical user interface and significantly expanded/refactored my code, which I moved to a new repository on GitHub. I have also submitted this project as my entry in the ITP Winter Show. For my Networked Media final project, which is due Friday, I plan to put FicGen online.

Here is a screenshot of the GUI, which I implemented in Processing:

[Screenshot of the FicGen GUI]

When I presented this project in our final ICM class on Tuesday, November 25, the only working elements in the GUI were the text fields and the big red button. Now, most of the buttons and sliders are functional as well. After the user pushes the red button, a Python script emails the completed novel to them in PDF format.
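The mailer script itself isn't shown; a minimal sketch of assembling such an email with Python's standard library might look like this (the addresses and SMTP host are placeholders, and this is Python 3 rather than the Python 2 used elsewhere in the post):

```python
import smtplib
from email.message import EmailMessage

def build_novel_email(pdf_bytes, pdf_name, recipient,
                      sender='ficgen@example.com'):
    # Assemble a message with the generated novel attached as a PDF.
    msg = EmailMessage()
    msg['Subject'] = 'Your generated novel'
    msg['From'] = sender
    msg['To'] = recipient
    msg.set_content('Your generated novel is attached.')
    msg.add_attachment(pdf_bytes, maintype='application',
                       subtype='pdf', filename=pdf_name)
    return msg

def send_novel(msg, smtp_host='localhost'):
    # Hand the finished message to an SMTP server.
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```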

After creating the GUI above, I expanded the material I am using to generate the novels by scraping content from two additional sources: over 2,000 sci-fi/horror stories from scp-wiki.net, and over 47,000 books from Project Gutenberg. I then significantly refactored my code to accommodate these additions. My new Python program, ficgen.py, is far more object oriented and organized than my previous plotgen script, which had become somewhat of a mess by the time I presented my project in class two weeks ago.

Here’s the current code:

import math
import argparse
import random
from random import choice as rc
from random import sample as rs
from random import randint as ri
import string
from zipfile import ZipFile

import nltk
import en

from g_paths import gPaths
from erowid_experience_paths import erowidExpPaths
from tropes_character import characterTropeFiles
from tropes_setting import settingTropeFiles
from scp_paths import scpPaths
from firstnames_f import fFirstNames
from firstnames_m import mFirstNames
from surnames import surnames


# TODO:
# [X] CLEAN UP TROPE FILE PATHS LIST
# [ ] Fix "I'm" and "I'll" problem
# [ ] Add Plot Points / Narrative Points / Phlebotinum
# [ ] subtrope / sub-trope
# [ ] add yelp reviews
# [ ] add livejournal
# [X] add SCP

# System Path

sysPath = "/Users/rg/Projects/plotgen/ficgen/"


# Argument Values

genre_list = ['literary', 'sci-fi', 'fantasy', 'history', 'romance', 'thriller', 
			  'mystery', 'crime', 'pulp', 'horror', 'beat', 'fan', 'western', 
			  'action', 'war', 'family', 'humor', 'sport', 'speculative']
conflict_list = ['nature', 'man', 'god', 'society', 'self', 'fate', 'tech', 'no god', 'reality', 'author']
narr_list = ['first', '1st', '1', 'third', '3rd', '3', 'alt', 'alternating', 'subjective', 
			 'objective', 'sub', 'obj', 'omniscient', 'omn', 'limited', 'lim']

parser = argparse.ArgumentParser(description='Story Parameters')
parser.add_argument('--charnames', nargs='*', help="Character Names")
parser.add_argument('--title', help="Story Title")
parser.add_argument('--length', help="Story Length (0-999)")
parser.add_argument('--charcount', help="Character Count (0-999)")
parser.add_argument('--genre', nargs='*', help="Genre", choices=genre_list)
parser.add_argument('--conflict', nargs='*', help="Conflict", choices=conflict_list)
parser.add_argument('--passion', help="Passion (0-999)")
parser.add_argument('--verbosity', help="Verbosity (0-999)")
parser.add_argument('--realism', help="Realism (0-999)")
parser.add_argument('--density', help="Density (0-999)")
parser.add_argument('--accessibility', help="Accessibility (0-999)")
parser.add_argument('--depravity', help="Depravity (0-999)")
parser.add_argument('--linearity', help="Linearity (0-999)")
parser.add_argument('--narrator', nargs='*', help="Narrative PoV", choices=narr_list)
args = parser.parse_args()


# ESTABLISH SYSTEM-WIDE COEFFICIENTS/CONSTANTS

# tsv = trope setting volume
TSV = (int(args.length)/2.0 + int(args.realism)/6.0 + int(args.passion)/3.0)/1000.0
if 'fan' in args.genre:
	TSV += 1.0
TSV = int(math.ceil(2.0*TSV))

# cc = actual number of extra characters / MAKE EXPONENTIAL
CC = int(math.exp(math.ceil(int(args.charcount)/160.0))/2.0)+10

# chc = chapter count
CHC = int(math.exp(math.ceil(int(args.length)/160.0))/2.0)+10

# dtv = drug trip volume
DTV = (int(args.length)/4.0 + int(args.realism)/12.0 + int(args.passion)/6.0 + int(args.depravity)*1.5)/1000.0
if 'beat' in args.genre:
	DTV += 1.0
if 'society' in args.conflict:
	DTV += 1.0
DTV = int(math.ceil(5.0*DTV))

# scp = scp article volume
SCP = int(args.length)/1000.0
if bool(set(['sci-fi', 'horror']) & set(args.genre)):
	SCP += 1.0
if bool(set(['tech', 'no god', 'reality', 'nature', 'god']) & set(args.conflict)):
	SCP += 1.0
SCP = int(math.ceil(2.0*SCP))

# den = length (in chars) of project gutenberg excerpts
DEN = int(args.density)*10

# ggv = gutenberg excerpt volume
GGV = (int(args.length) + int(args.density))/500.0
if 'literary' in args.genre:
	GGV += 2.0
GGV = int(math.ceil(5.0*GGV))

# chl = chapter length as percent of potential chapter length
CHL = int(args.length)/1000.0


# file text fetchers
def get_file(fp):

	f = open(sysPath+fp, 'r')
	t = f.read()
	f.close()

	return t

def get_zip(fp):

	fileName = fp.split('/')[-1]
	noExtName = fileName.split('.')[0]
	txtName = noExtName + ".txt"

	ff = ZipFile(fp, 'r')
	fileNames = ff.namelist()
	oo = ff.open(fileNames[0], 'r')
	tt = oo.read()
	oo.close()
	ff.close()

	return tt



# CLASSES

class Character(object):

	def __init__(self, firstName, lastName):
		self.firstName = firstName
		self.lastName = lastName
		self.introDesc = ""
		self.scenes = []
		self.drugTrips = []
		self.scpReports = [] 
		self.gbergExcerpts = []
		self.friends = [] # list of objects


class Chapter(object):

	def __init__(self, charObj):
		self.charObj = charObj
		self.title = ""
		self.blocks = []


	def title_maker(self):
		charTitle = ri(0, 2)

		if not bool(charTitle):

			ttl = self.charObj.firstName + " " + self.charObj.lastName

		else:
			
			titleSource = ri(0, 3)

			if titleSource == 0:
				textSource = rc(self.charObj.scenes)
			elif titleSource == 1:
				textSource = rc(self.charObj.drugTrips)
			elif titleSource == 2:
				textSource = rc(self.charObj.scpReports)
			elif titleSource == 3:
				textSource = rc(self.charObj.gbergExcerpts)

			tokens = nltk.word_tokenize(textSource)

			if len(tokens) > 20:
				index = ri(0, len(tokens)-10)
				titleLen = ri(2, 6)
				ttl = ' '.join(tokens[index:index+titleLen])
			else:
				ttl = self.charObj.firstName + " " + self.charObj.lastName

		self.title = ttl


	def chapter_builder(self):
		blockList = [self.charObj.introDesc] + self.charObj.scenes + self.charObj.drugTrips + self.charObj.scpReports + self.charObj.gbergExcerpts
		
		random.shuffle(blockList)

		stopAt = int(math.ceil(CHL*len(blockList)))

		blockList = blockList[:stopAt]

		self.blocks = blockList

		# self.blocks.append("stuff")



class Novel(object):

	def __init__(self):
		self.title = args.title
		self.characters = [] # list of characters
		self.chapters = [] # list of chapters

	def generate(self):
		self.make_chars()
		self.assemble_chapters()
		self.make_tex_file()


	def make_tex_file(self):
		# Look at PlotGen for this part
		outputFileName = self.title

		latex_special_char_1 = ['&', '%', '$', '#', '_', '{', '}']
		latex_special_char_2 = ['~', '^', '\\']

		outputFile = open(sysPath+"output/"+outputFileName+".tex", 'w')

		openingTexLines = ["\\documentclass[12pt]{book}",
						   "\\usepackage{ucs}",
						   "\\usepackage[utf8x]{inputenc}",
						   "\\usepackage{hyperref}",
						   "\\title{"+outputFileName+"}",
						   "\\author{collective consciousness fiction generator\\\\http://rossgoodwin.com/ficgen}",
						   "\\date{\\today}",
						   "\\begin{document}",
						   "\\maketitle"]

		closingTexLine = "\\end{document}"

		for line in openingTexLines:
			outputFile.write(line+"\n\r")
		outputFile.write("\n\r\n\r")

		for ch in self.chapters:

			outputFile.write("\\chapter{"+ch.title+"}\n\r")
			outputFile.write("\n\r\n\r")

			rawText = '\n\r\n\r\n\r'.join(ch.blocks)

			try:
				rawText = rawText.decode('utf8')
			except:
				pass
			try:
				rawText = rawText.encode('ascii', 'ignore')
			except:
				pass

			i = 0
			for char in rawText:

				if char == "\b":
					outputFile.seek(-1, 1)
				elif char in latex_special_char_1 and rawText[i-1] != "\\":
					outputFile.write("\\"+char)
				elif char in latex_special_char_2 and not rawText[i+1] in latex_special_char_1:
					outputFile.write("-")
				else:
					outputFile.write(char)

				i += 1

			outputFile.write("\n\r\n\r")

		outputFile.write("\n\r\n\r")
		outputFile.write(closingTexLine)

		outputFile.close()

		print '\"'+sysPath+'output/'+outputFileName+'.tex\"'


	def assemble_chapters(self):
		novel = []

		for c in self.characters:
			novel.append(Chapter(c))

		for ch in novel:
			ch.title_maker()
			ch.chapter_builder()

		random.shuffle(novel) # MAYBE RETHINK THIS LATER

		self.chapters = novel


	def make_chars(self):
		# establish gender ratio
		charGenders = [ri(0,1) for _ in range(CC)]
		
		# initialize list of characters
		chars = []

		# add user defined characters
		for firstlast in args.charnames:
			fl_list = firstlast.split('_')  # Note that split is an underscore!
			chars.append(Character(fl_list[0], fl_list[1]))

		# add generated characters
		for b in charGenders:
			if b:
				chars.append(Character(rc(fFirstNames), rc(surnames)))
			else:
				chars.append(Character(rc(mFirstNames), rc(surnames)))

		# establish list of intro scenes
		introScenePaths = rs(characterTropeFiles, len(chars))

		# establish list of settings
		settings = rs(settingTropeFiles, len(chars)*TSV)

		# establish list of drug trips
		trips = rs(erowidExpPaths, len(chars)*DTV)

		# establish list of scp articles
		scps = rs(scpPaths, len(chars)*SCP)

		# establish list of gberg excerpts
		gbergs = rs(gPaths.values(), len(chars)*GGV)

		i = 0
		j = 0
		m = 0
		p = 0
		s = 0
		for c in chars:

			# make friends
			c.friends += rs(chars, ri(1,len(chars)-1))
			if c in c.friends:
				c.friends.remove(c)

			# add introduction description
			c.introDesc = self.personal_trope([c], introScenePaths[i])

			# add setting scenes
			for k in range(TSV):
				c.scenes.append(self.personal_trope([c]+c.friends, settings[j+k]))

			# add drug trip scenes
			for n in range(DTV):
				c.drugTrips.append(self.personal_trip([c]+c.friends, trips[m+n]))

			# add scp articles
			for q in range(SCP):
				c.scpReports.append(self.personal_scp([c]+c.friends, scps[p+q]))

			# add gberg excerpts
			for t in range(GGV):
				c.gbergExcerpts.append(self.personal_gberg([c]+c.friends, gbergs[s+t]))

			i += 1
			j += TSV
			m += DTV
			p += SCP
			s += GGV

		self.characters = chars


	def personal_trope(self, charList, filePath):
		text = get_file(filePath)
		# text = text.decode('utf8')
		# text = text.encode('ascii', 'ignore')

		if len(charList) == 1:
			characterTrope = True
		else:
			characterTrope = False

		try:

			pos = en.sentence.tag(text)
			wordtag = map(list, zip(*pos))
			words = wordtag[0]
			tags = wordtag[1]

			for i in range(len(words)):
				charRef = rc([rc(charList), charList[0]])
				if words[i].lower() == "character" and i > 0:
					words[i-1] = charRef.firstName
					words[i] = charRef.lastName

				elif tags[i] == "PRP":
					words[i] = charRef.firstName
				elif tags[i] == "PRP$":
					words[i] = charRef.firstName+"\'s"
				elif tags[i] in ["VBD", "VBG", "VBN", "VBZ"]:
					try:
						words[i] = en.verb.past(words[i], person=3, negate=False)
					except:
						pass

				if characterTrope:

					if words[i] == "have":
						words[i] = "has"
					elif words[i] == "are":
						words[i] = "is"

			punc = [".", ",", ";", ":", "!", "?"]

			for i in range(len(words)):
				if words[i] in punc:
					words[i] = '\b'+words[i]

			final_text = " ".join(words)

			if characterTrope:

				mainCharRef = rc(charList)

				index = string.find(final_text, mainCharRef.firstName)

				if final_text[index+len(mainCharRef.firstName)+1:index+len(mainCharRef.firstName)+1+len(mainCharRef.lastName)] == mainCharRef.lastName:
					final_text = final_text[index:]
				else:
					final_text = mainCharRef.firstName+" "+mainCharRef.lastName+final_text[index+len(mainCharRef.firstName):]

			replacements = {"trope": "clue", "Trope": "clue", "TROPE": "CLUE"}

			for x, y in replacements.iteritems():
				final_text = string.replace(final_text, x, y)

		except:
			
			final_text = ""


		return final_text


	def personal_trip(self, charList, tripPath):

		fileText = get_file(tripPath)
		splitText = fileText.split('\\vspace{2mm}')
		endOfText = splitText[-1]
		text = endOfText[:len(endOfText)-15]

		try:

			pos = en.sentence.tag(text)
			wordtag = map(list, zip(*pos))
			words = wordtag[0]
			tags = wordtag[1]

			for i in range(len(words)):

				charRef = rc([rc(charList), charList[0]])

				if tags[i] == "PRP":
					words[i] = charRef.firstName
				elif tags[i] == "PRP$":
					words[i] = charRef.firstName+"\'s"
				elif tags[i] in ["VBD", "VBG", "VBN", "VBZ"]:
					try:
						words[i] = en.verb.past(words[i], person=3, negate=False)
					except:
						pass
				else:
					pass

			punc = [".", ",", ";", ":", "!", "?"]

			for i in range(len(words)):
				if words[i] in punc:
					words[i] = '\b'+words[i]

			final_text = " ".join(words)

			final_text = string.replace(final_text, "\\end{itemize}", "")
			final_text = string.replace(final_text, "\\begin{itemize}", "")
			final_text = string.replace(final_text, "\\end{center}", "")
			final_text = string.replace(final_text, "\\begin{center}", "")
			final_text = string.replace(final_text, "\\ldots", " . . . ")
			final_text = string.replace(final_text, "\\egroup", "")
			final_text = string.replace(final_text, "EROWID", "GOVERNMENT")
			final_text = string.replace(final_text, "erowid", "government")
			final_text = string.replace(final_text, "Erowid", "Government")

		except:

			final_text = ""

		return final_text


	def personal_scp(self, charList, scpPath):

		text = get_file(scpPath)

		text = string.replace(text, "SCP", charList[0].lastName)
		text = string.replace(text, "Foundation", charList[0].lastName)

		try:

			pos = en.sentence.tag(text)
			wordtag = map(list, zip(*pos))
			words = wordtag[0]
			tags = wordtag[1]

			for i in range(len(words)):

				charRef = rc(charList)

				if tags[i] == "PRP":
					words[i] = charRef.firstName
				elif tags[i] == "PRP$":
					words[i] = charRef.firstName+"\'s"
				elif tags[i] in ["VBD", "VBG", "VBN", "VBZ"]:
					try:
						words[i] = en.verb.past(words[i], person=3, negate=False)
					except:
						pass
				else:
					pass

			punc = [".", ",", ";", ":", "!", "?"]

			for i in range(len(words)):
				if words[i] in punc:
					words[i] = '\b'+words[i]

			final_text = " ".join(words)

		except:

			final_text = ""

		return final_text



	def personal_gberg(self, charList, gPath):

		full_text = ""
		while full_text == "":
			try:
				full_text = get_zip(gPath)
			except:
				full_text = ""
				gPath = rc(gPaths.values())

		endPart = full_text.split("*** START OF THIS PROJECT GUTENBERG EBOOK ")[-1]
		theMeat = endPart.split("*** END OF THIS PROJECT GUTENBERG EBOOK")[0]

		theMeat = string.replace(theMeat, "\r\n", " ")

		
		if len(theMeat) < DEN+5:
			text = theMeat
		else:
			startLoc = int(len(theMeat)/2.0 - DEN/2.0)
			text = theMeat[startLoc:startLoc+DEN]

		spLoc = text.find(" ")
		text = text[spLoc+1:]

		try:
			pos = en.sentence.tag(text)
			wordtag = map(list, zip(*pos))
			words = wordtag[0]
			tags = wordtag[1]

			for i in range(len(words)):

				charRef = rc([rc(charList), charList[0]])

				if tags[i] == "PRP":
					words[i] = charRef.firstName
				elif tags[i] == "PRP$":
					words[i] = charRef.firstName+"\'s"
				elif tags[i] in ["VBD", "VBG", "VBN", "VBZ"]:
					try:
						words[i] = en.verb.past(words[i], person=3, negate=False)
					except:
						pass
				else:
					pass

			punc = [".", ",", ";", ":", "!", "?"]

			for i in range(len(words)):
				if words[i] in punc:
					words[i] = '\b'+words[i]

			final_text = " ".join(words)

		except:
			final_text = ""


		return final_text


	def print_chars(self):

		c = self.make_chars()
		for character in c:
			print 'INTRO DESC'
			print '\n\n'
			print character.introDesc
			print '\n\n'
			print 'SCENES'
			print '\n\n'
			for s in character.scenes:
				print s
			print '\n\n'
			print 'DRUG TRIPS'
			print '\n\n'
			for t in character.drugTrips:
				print t
			print '\n\n'
			print 'SCP REPORTS'
			print '\n\n'
			for p in character.scpReports:
				print p
			print '\n\n'
			print 'GBERG EXCERPTS'
			print '\n\n'
			for q in character.gbergExcerpts:
				print q
			print '\n\n'




foobar = Novel()
foobar.generate()

The program’s argument values, which I handle with the Python argparse library, are designed to be supplied by the GUI. However, they can also be entered manually in the terminal.

Typing python ficgen.py -h in the terminal will yield the following help text:

usage: ficgen.py [-h] [--charnames [CHARNAMES [CHARNAMES ...]]]
                 [--title TITLE] [--length LENGTH] [--charcount CHARCOUNT]
                 [--genre [{literary,sci-fi,fantasy,history,romance,thriller,mystery,crime,pulp,horror,beat,fan,western,action,war,family,humor,sport,speculative} [{literary,sci-fi,fantasy,history,romance,thriller,mystery,crime,pulp,horror,beat,fan,western,action,war,family,humor,sport,speculative} ...]]]
                 [--conflict [{nature,man,god,society,self,fate,tech,no god,reality,author} [{nature,man,god,society,self,fate,tech,no god,reality,author} ...]]]
                 [--passion PASSION] [--verbosity VERBOSITY]
                 [--realism REALISM] [--density DENSITY]
                 [--accessibility ACCESSIBILITY] [--depravity DEPRAVITY]
                 [--linearity LINEARITY]
                 [--narrator [{first,1st,1,third,3rd,3,alt,alternating,subjective,objective,sub,obj,omniscient,omn,limited,lim} [{first,1st,1,third,3rd,3,alt,alternating,subjective,objective,sub,obj,omniscient,omn,limited,lim} ...]]]

Story Parameters

optional arguments:
  -h, --help            show this help message and exit
  --charnames [CHARNAMES [CHARNAMES ...]]
                        Character Names
  --title TITLE         Story Title
  --length LENGTH       Story Length (0-999)
  --charcount CHARCOUNT
                        Character Count (0-999)
  --genre [{literary,sci-fi,fantasy,history,romance,thriller,mystery,crime,pulp,horror,beat,fan,western,action,war,family,humor,sport,speculative} [{literary,sci-fi,fantasy,history,romance,thriller,mystery,crime,pulp,horror,beat,fan,western,action,war,family,humor,sport,speculative} ...]]
                        Genre
  --conflict [{nature,man,god,society,self,fate,tech,no god,reality,author} [{nature,man,god,society,self,fate,tech,no god,reality,author} ...]]
                        Conflict
  --passion PASSION     Passion (0-999)
  --verbosity VERBOSITY
                        Verbosity (0-999)
  --realism REALISM     Realism (0-999)
  --density DENSITY     Density (0-999)
  --accessibility ACCESSIBILITY
                        Accessibility (0-999)
  --depravity DEPRAVITY
                        Depravity (0-999)
  --linearity LINEARITY
                        Linearity (0-999)
  --narrator [{first,1st,1,third,3rd,3,alt,alternating,subjective,objective,sub,obj,omniscient,omn,limited,lim} [{first,1st,1,third,3rd,3,alt,alternating,subjective,objective,sub,obj,omniscient,omn,limited,lim} ...]]
                        Narrative PoV
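The core of such an interface can be sketched with argparse. This is a condensed, illustrative sketch reproducing only a few of the options from the help text above, not the actual ficgen.py source:

```python
import argparse

# A condensed sketch of the ficgen.py command-line interface;
# only a handful of the options shown above are reproduced here.
parser = argparse.ArgumentParser(description="Story Parameters")
parser.add_argument("--charnames", nargs="*", help="Character Names")
parser.add_argument("--title", help="Story Title")
parser.add_argument("--length", type=int, help="Story Length (0-999)")
parser.add_argument("--narrator", nargs="*",
                    choices=["first", "1st", "1", "third", "3rd", "3"],
                    help="Narrative PoV")

# Parsing an explicit list instead of sys.argv, for demonstration.
args = parser.parse_args(["--title", "Infinite Splendour",
                          "--length", "300", "--narrator", "third"])
print(args.title, args.length, args.narrator)
```

Because each option is declared with `nargs="*"` where multiple values are allowed, argparse collects those values into lists automatically.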

Finally, here are some sample novels generated by the new code (titles chosen by volunteers):

]]>
http://www.thehypertext.com/2014/12/09/fiction-generator-part-iii/feed/ 3
Fiction Generator, Part II http://www.thehypertext.com/2014/11/20/fiction-generator-part-ii/ http://www.thehypertext.com/2014/11/20/fiction-generator-part-ii/#comments Thu, 20 Nov 2014 03:39:13 +0000 http://www.thehypertext.com/?p=328 After scraping about 5000 articles from tvtropes.org to retrieve descriptions for characters and settings, Sam Lavigne suggested I scrape erowid.org to dig up some exposition material. I proceeded to scrape 18,324 drug trip reports from the site, and integrated that material into the generator.

Read More...

]]>
For background, see my previous post on this project.

After scraping about 5000 articles from tvtropes.org to retrieve descriptions for characters and settings, Sam Lavigne suggested I scrape erowid.org to dig up some exposition material. I proceeded to scrape 18,324 drug trip reports from the site, and integrated that material into the generator.

While this project remains unfinished—I’m considering adding more material from many other websites, which is why I’m calling it a “collective consciousness fiction generator”—it is now generating full-length “novels” (300+ pages, 8.5×11, 12pt font). I invited my fellow ITP students to insert themselves into novels, and they responded with over 50 suggestions for novel titles. The generated PDFs are available for viewing/download on Google Drive.

I decided to create covers for 3 of my favorite novels the software has generated. Click on the covers below to see those PDFs:

[Covers: Infinite Splendour, Parallel Synchronized Randomness, Tricks of the Trade]

Here is the current state of the code that’s generating these novels:

import random

latex_special_char_1 = ['&', '%', '$', '#', '_', '{', '}']
latex_special_char_2 = ['~', '^', '\\']

outputFile = open("output/"+outputFileName+".tex", 'w')

openingTexLines = ["\\documentclass[12pt]{book}",
				   "\\usepackage{ucs}",
				   "\\usepackage[utf8x]{inputenc}",
				   "\\usepackage{hyperref}",
				   "\\title{"+outputFileName+"}",
				   "\\author{collective consciousness fiction generator\\\\http://rossgoodwin.com/ficgen}",
				   "\\date{\\today}",
				   "\\begin{document}",
				   "\\maketitle"]

closingTexLine = "\\end{document}"

for line in openingTexLines:
	outputFile.write(line+"\n\r")
outputFile.write("\n\r\n\r")

intros = char_match()

for x, y in intros.iteritems():

	outputFile.write("\\chapter{"+x+"}\n\r")

	chapter_type = random.randint(0, 4)
	bonus_drug_trip = random.randint(0, 1)
	trip_count = random.randint(1,4)


	# BLOCK ONE

	if chapter_type in [0, 3]:

		for char in y[0]:
			if char == "`":
				outputFile.seek(-1, 1)
			elif char in latex_special_char_1:
				outputFile.write("\\"+char)
			elif char in latex_special_char_2:
				if char == '~':
					outputFile.write("")
				elif char == '^':
					outputFile.write("")
				elif char == '\\':
					outputFile.write("-")
				else:
					pass
			else:
				outputFile.write(char)

	elif chapter_type in [1, 4]:

		for char in y[2]:
			if char == "`":
				outputFile.seek(-1, 1)
			elif char in latex_special_char_1:
				outputFile.write("\\"+char)
			elif char in latex_special_char_2:
				if char == '~':
					outputFile.write("")
				elif char == '^':
					outputFile.write("")
				elif char == '\\':
					outputFile.write("-")
				else:
					pass
			else:
				outputFile.write(char)

	elif chapter_type == 2:

		for char in y[1][0]:
			if char == "`":
				outputFile.seek(-1, 1)
			else:
				outputFile.write(char)

	outputFile.write("\n\r\n\r\n\r")

	
	# BLOCK TWO

	if chapter_type == 0:

		for char in y[2]:
			if char == "`":
				outputFile.seek(-1, 1)
			elif char in latex_special_char_1:
				outputFile.write("\\"+char)
			elif char in latex_special_char_2:
				if char == '~':
					outputFile.write("")
				elif char == '^':
					outputFile.write("")
				elif char == '\\':
					outputFile.write("-")
				else:
					pass
			else:
				outputFile.write(char)

	elif chapter_type == 1:

		for char in y[0]:
			if char == "`":
				outputFile.seek(-1, 1)
			elif char in latex_special_char_1:
				outputFile.write("\\"+char)
			elif char in latex_special_char_2:
				if char == '~':
					outputFile.write("")
				elif char == '^':
					outputFile.write("")
				elif char == '\\':
					outputFile.write("-")
				else:
					pass
			else:
				outputFile.write(char)

	elif chapter_type in [3, 4]:

		for char in y[1][0]:
			if char == "`":
				outputFile.seek(-1, 1)
			else:
				outputFile.write(char)

	elif chapter_type == 2 and bonus_drug_trip:

		for tripIndex in range(trip_count):

			for char in y[1][tripIndex+1]:
				if char == "`":
					outputFile.seek(-1, 1)
				else:
					outputFile.write(char)

	else:
		pass

	outputFile.write("\n\r\n\r\n\r")


	# BLOCK THREE

	if chapter_type in [0, 1, 3, 4] and bonus_drug_trip:

		for tripIndex in range(trip_count):

			for char in y[1][tripIndex+1]:
				if char == "`":
					outputFile.seek(-1, 1)
				else:
					outputFile.write(char)

		outputFile.write("\n\r\n\r\n\r")

	else:
		pass


outputFile.write("\n\r\n\r")
outputFile.write(closingTexLine)


outputFile.close()


print '\"output/'+outputFileName+'.tex\"'
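The character-escaping logic repeated in each block above boils down to a single transformation, sketched here as a standalone function (the backtick/seek backspace trick is omitted for clarity):

```python
# LaTeX special characters, as in the lists at the top of the script.
latex_special_char_1 = ['&', '%', '$', '#', '_', '{', '}']

def latex_escape(text):
    # Backslash-escape the first group; drop '~' and '^'; map '\' to '-'.
    out = []
    for char in text:
        if char in latex_special_char_1:
            out.append('\\' + char)
        elif char in ('~', '^'):
            pass
        elif char == '\\':
            out.append('-')
        else:
            out.append(char)
    return "".join(out)

print(latex_escape("100% pure $ignal & noise"))  # 100\% pure \$ignal \& noise
```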

 

UPDATE: Part III

]]>
http://www.thehypertext.com/2014/11/20/fiction-generator-part-ii/feed/ 1
Fiction Generator http://www.thehypertext.com/2014/11/11/fiction-generator/ http://www.thehypertext.com/2014/11/11/fiction-generator/#comments Tue, 11 Nov 2014 03:35:39 +0000 http://www.thehypertext.com/?p=297 For my Introduction to Computational Media final project, I will be creating a fiction generator using text files scraped from tvtropes.org along with natural language processing in Python.

Read More...

]]>
For my Introduction to Computational Media final project, I will be creating a fiction generator using text files scraped from tvtropes.org along with natural language processing in Python. All the code I have written so far is available on GitHub, and will remain available in that repository as it evolves.

I am interested in natural language processing and natural language generation due to my background as a writer. After I learned Python over the summer, my first major project was a poetry generator. Since then, I have aspired to create a fiction generator—along the lines of The Great Automatic Grammatizator, a fictional machine that appears in a short story by Roald Dahl—but have lacked the skills or framework to pursue such a project.

After spending a significant amount of time on tvtropes.org, a self-described wiki of “the tricks of the trade for writing fiction,” I believe I have found the raw material I need to create such a project. The audience for this project will be fiction writers with writer’s block who need a raw, original framework for a new story.


[Comic via Strewth! by Josh Way]

The tvtropes wiki contains an extraordinary variety of fiction “tropes”—recurring motifs, themes, or elements that are present in nearly all fiction. I found an effective way to scrape the raw text of trope articles on the site using the Beautiful Soup library with Python. The following code is the function I am using to scrape each article:

import requests
from bs4 import BeautifulSoup

def scrape(url):
	r = requests.get(url)
	doc = r.text
	soup = BeautifulSoup(doc)
	wikitext = soup.find(id="wikitext")
	approvedTags = ["em", "strong", "a", "ul", "ol", "li"]
	scraped = []
	for c in wikitext.children:
		# NavigableString children raise on item/attribute access,
		# so fall back to False in that case
		try:
			tagClass = c['class']
		except (TypeError, KeyError, AttributeError):
			tagClass = False

		try:
			childTag = c.name
		except (TypeError, KeyError, AttributeError):
			childTag = False

		if childTag and childTag not in approvedTags:
			pass
		elif childTag and childTag in approvedTags:
			if tagClass:
				if tagClass[0] == "twikilink":
					scraped.append(c.string.lower())
				elif tagClass[0] == "urllink":
					scraped.append(c.contents[0])
			elif childTag == "ul" or childTag == "ol":
				for d in c.children:
					scraped.append(d.contents[0])
			else:
				scraped.append(c.string)

		else:
			scraped.append(c)

	bad_values = ['\n', None]
	scraped = [s for s in scraped if s not in bad_values]

	article = "".join(scraped)
	article = article.split('\n')
	article = '\n\n'.join(article)

	return article

I decided to start with characters because, after all, every story needs characters. And for the characters, I started with their names, because every character needs a name—ideally, a first name and a last name.

I used US Social Security Administration records for first names and US Census data on surnames to create a name generator. I am using a data set containing all of the surnames that appeared at least 100 times in the 2000 US Census (151,671 surnames), and all of the first names that were given to at least 5 individuals in any year between 1880 and 2013 (1,792,091 first names). Additionally, I have separated the first names by gender.
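The generator itself reduces to two random choices, presumably uniform over the distinct names rather than weighted by frequency, which would explain the unusual results noted below. A sketch with illustrative stand-in lists (the real data sets are far larger):

```python
import random

# Illustrative stand-ins for the SSA first-name and Census surname data.
first_names = ["Tolbert", "Lanya", "Caden", "Annemarie", "Marco"]
surnames = ["Routten", "Kazel", "Loranger", "Neukam", "Paider"]

def random_name():
    # A uniform choice over distinct names makes uncommon names
    # just as likely to appear as common ones.
    return random.choice(first_names) + " " + random.choice(surnames)

print(random_name())
```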

Here is a list of 25 random names generated from the aforementioned data:

Tolbert Routten
Jakaylah Gaugler
Lanya Kazel
Apolonio Buddemeyer
Josearmando Viloa
Shakur Litwinski
Lashaunta Cariello
Carolee Chatt
Tya Shuda
Estus Stubben
Caden Loranger
Aneatra Grueneich
Aleks Ronquille
Jeiel Seller
Balin Fosnow
Keymari Ketrow
Annemarie Neukam
Tobe Peaks
Lois Reebel
Adaly Detling
Marco Paider
Coolidge Troughton
Miles Chmara
Lucky Dehen
Marcellus Mussenden

Due to the nature of the data (more uncommon names, fewer common names), the names generated tend to be rather unusual. However, for the purposes of a fiction generator, this property may be desirable.

The next step I took was to pair these fictitious characters with character tropes. I scraped 2,219 articles on tvtropes.org, each on a specific trope that would apply to a particular character. I then used the NodeBox English Linguistics library to replace personal pronouns (e.g. he, she, it, they) with first names and the phrase “the character” or “this character” (or any word followed by “character”) with first name + last name. I also used NodeBox Linguistics to convert all the verbs in each article to past tense, and to replace every instance of the word trope with “clue” (in order to avoid the appearance that these are articles about tropes rather than particular characters). Finally, I used a Python script to generate 3-9 character names, pair each character with a trope, and add each character’s name into the converted trope text.
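The pronoun-substitution pass can be sketched without the NodeBox dependency by supplying part-of-speech tags by hand for a toy sentence (in the actual code the tags come from en.sentence.tag, and verbs are also shifted to past tense):

```python
def personalize(tagged_words, first_name):
    # Replace personal (PRP) and possessive (PRP$) pronouns with the
    # character's first name, as described above.
    out = []
    for word, tag in tagged_words:
        if tag == "PRP":
            out.append(first_name)
        elif tag == "PRP$":
            out.append(first_name + "'s")
        else:
            out.append(word)
    return " ".join(out)

tagged = [("She", "PRP"), ("grabbed", "VBD"), ("her", "PRP$"),
          ("coat", "NN"), ("and", "CC"), ("left", "VBD")]
print(personalize(tagged, "Rashelle"))  # Rashelle grabbed Rashelle's coat and left
```

The repetitive, slightly off-kilter results this produces are visible throughout the sample output below.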

Here is some sample output:

Rashelle Roholt is cute, sweet, innocent and extremely huggable. Incidentally Rashelle is also varied shades of violent, unstable, and downright insane. Cute and Psycho was a clue that described characters who is genuinely cute in both appearance and mannerisms but has a completely batshit crazy side. Sometimes there is distinctly different sides which may be showed equally, but other times Rashelle is mostly one or the other, the killer rabbit displayed moments of sweetness and relative-sanity or the cutie showed hints of a dark psychotic nature. Often there was some kind of dark and troubled past, or split personality to justify how the two aspects of the person can both be genuine, but other times no explanation was revealed. The primary difference between this clue and the yandere was that the Cute and Rashelle Roholt was not drove by an obsessive needed to possess a friend or lover. Rashelle’s motivation, if Rashelle has one, can vary immensely. Rashelle also don’t necessarily has to be provoked to enter Rashelle’s Psycho-state, but can switch for reasons observers would be hard-pressed to determine. Cute and Psycho was a sister clue to killer rabbit, yandere and enfant terrible, and closely related to psychopathic manchild and beware the nice ones, though while there was frequent overlap between these clues was one doesn’t necessarily mean Rashelle Roholt qualified for another. If the “cute” part was real, then Rashelle Roholt was the fake cutie instead. Characters of this type tend to be female, though male examples do exist. the ophelia was someone whose psychosis was part of the cute picture, rather than a contrasted to Rashelle. In some anime fandoms Rashelle Roholt was referred to as a yangire, an informal fanspeak term which was a portmanteau of yandere and kireru ( a word meant “to snap or lose one’s temper”). It’s also used to refer to ax-crazy versions of the fake cutie. Not to be confused with fangire, which was a species of monster vampire.

Lissa Kanaday’s best friend and partner pled with Lissa to stop, Lissa won’t bring “her” back, and Lissa just put Lissa in danger. Yet still the hero persiste. A few acts later, he’s got beat on by the giant mook, Lissa looked like it’s all went to fade to black when… Lissa’s partner showed up, gun in hand! Wait, why was Lissa pointed the tranquilizer gun at hi— When Lissa woke up, the friend was terribly distraught. Says Lissa tried to get Lissa to stop, that Lissa warned Lissa what would happen. Saving Lissa was out of Lissa’s hands now, it’s all on Lissa’s head. Wait, what?The best friend had was in league with ( or was ) the big bad behind the whole plot. However, Lissa genuinely like the hero and would rather Lissa live a long and happy life. Lissa might try a circled monologue to bring Lissa onboard, but chances is Lissa already knew the hero’s moral code was such that he’d just be wasted both Lissa’s time by did Lissa. Still, Lissa just might try, for old time’s sake. Compounding matters, he’s usually a straw traitor to some horrible ideal, was either directly or indirectly responsible for much of the hero’s recent suffered, and/or was covered Lissa up. Compare evil former friend. Contrast friendly enemy and lived with the villain. not to be confused with another type of big bad friend. If the hero was was chummy with the big bad, that’s go karting with bowser. evil all along was for anyone who turned out to be evil, not just friends. Related to Lissa was held Lissa back. This was a Spoiler Clue, so beware.

Jaclene Desharnais. However, while Jaclene may first appear to be the hero’s equal or even superior in combat, subsequent battles will establish the Brute as was the goliath to the hero’s david. Jaclene was usually a bully, incapable of empathy, and, more often than not, also very stupid, though there is exceptions. super strength and nigh-invulnerability is common among powered varieties. Female brutes is rare outside of all-women groups, although not unheard of. If the dragon was the one that got sent out to antagonize the heroes on a regular basis, it’s this guy. Jaclene was usually the lowest-ranking member of the inner circle’s hierarchy, and as such generally got little respect from Jaclene, though Jaclene may exercise authority over the mooks. Jaclene was often the first opponent the heroes face after Jaclene’s successes require that someone more capable be sent to take care of Jaclene. Jaclene tended to be either blindly loyal or just too thickheaded and incompetent to ever stand a chance of overthrew the leaders. Despite Jaclene’s role as the primary brute force of the evil army, Jaclene was rarely ever as strong as the dragon. One thing to keep in mind with Jaclene Desharnais type was that it’s the role and rank as opposed to just the personality that defined Jaclene. Pete from the walt disney canon was a classic example of the Brute personality type: a big dumb bully that just loved to throw Jaclene’s own weight around. However, he’s generally used as a big bad ( or, in works like Kingdom Hearts II, the dragon). As such, in most appearances, Jaclene was not technically a Brute. Jaclene Desharnais type often showed up as part of the five-bad band dynamic ( in fact, Jaclene’s presence was often what defined it). Jaclene can also show up as a member of the quirky miniboss squad, but ( like all the other members ) will lose most of Jaclene’s threat level by virtue of Jaclene’s quirkiness. 
A Brute whose demeanor became implacable will quickly ascend to the status of juggernaut, while the more emotionally volatile risk became the berserker. Be wary too, recruiters, of a Brute who pets the dog, lest Jaclene prove to be a closet gentle giant and may very well eventually heel-face turn on Jaclene. Considering Jaclene’s aforementioned general role as the mean, stupid, and disrespected meat shield for Jaclene’s team, the Brute tended to be especially susceptible to humble pie and the humiliation conga. Compare: smash mook.

Fidelia Nollet must sacrifice something else… Fidelia’s good name, Fidelia’s reputation and Fidelia’s integrity. Fidelia Nollet attempted a Zero Approval Gambit will knowingly risk – or deliberately seek – a 0% approval rated and paint Fidelia in a bad light in order to achieve some greater good. This might involve falsely confessed to a crime Fidelia did commit, or Fidelia might involve Fidelia was an enormous jerkass contrary to Fidelia’s usual nature. The net result was that Fidelia will be hated, hunted or disgraced for all time. In short, Fidelia willingly became a hero with bad publicity. Note that this was a short-term trick. A Zero Approval Gambit was usually permanent or took a huge amount of work to undo. This was an inverse of villain with good publicity; compare good was not nice, necessarily evil, noble demon, what the hell, hero?, break Fidelia’s heart to save Fidelia. Can result in a hero with an f in good. Sometimes did to facilitate a genghis gambit. Often a job hazard of the agent provocateur. Most of the time Fidelia involved became a silent scapegoat.

Luciann Aspengren see a demon, god, or someone or something else otherworldly you’d expect Luciann to be easy to tell if Luciann was a man or a woman. but when he/she/they had qualities of both? or lacked qualities of either? or can change from one to another with neither was the confirmed default? Luciann is otherworldly and sexually ambiguous.There might be reasons why or how the demon, spirit, etc., was a hermaphrodite, can change sex, etc., be Luciann either magical corruption, or the creators did want to give Luciann Aspengren a definitive sex, so Luciann made Luciann ambiguous. Sometimes it’s just a striking detail that reminded the audience that this individual was a mundane creature, and that the shape they’re in now might just be a form Luciann is comfortable with. See also no biological sex, hermaphrodite, voluntary shapeshifting and ambiguous gender. May cross over with shapeshifters do Luciann for a change. In Ashura, the god of war from Apos from The goddess Kanzeon Bosatsu from Aleister Crowley of Envy the shapeshifting homunculus from The angel that appeared in the anime of Desire from In Gozer the Gozerian, the Sumerian god from In Larry Niven’s In In In In In Mallory from The Metrons in the The angels in Some of the demons in the Although the Judeo-Christian God was usually referred to by male personal pronouns, this was more convention than canon. Several Biblical verses show God identified with roles which western culture would generally consider feminine. Most languages ( English included ) do not has any gender-neutral personal pronouns and God was referred to as “He” because most societies in which the Bible was wrote was patriarchal. Inari Okami, the Shinto God of fertility, rice, agriculture, foxes, industry and worldly success, was generally considered to be neither male nor female, though like YHWH, masculine or feminine aspects is often emphasized depended on the context and the region. This was true for many other Kami as well. 
Angels and demons in Christianity is sometimes considered to be sexless because Luciann don’t reproduce in Heaven or Hell and so would not needed to be male or female. The Bible always referred to Luciann as male, and “the sons of god,” who is generally but not always thought to be angels, sired the The Egyptian god Hapi was generally considered male ( included had one or more wives), but was also pictured with breasts to represent Luciann’s ability to nurture and feed people ( he’s a god of the Nile). Not surprisingly, Luna, the main moon-deity of Accoring to The Despair Embodied in In While all of the Daedra princes ( a loose analog of Demon Lords And Archdevils, only with The Cloud of Darkness from Minogame from The Soulthirster, Uryuoms in The Egyptian Pharaoh Akhenaten IV, most famous for tried to change the nation to

Alisyn Hafner good looked, but Alisyn tend to be described as better looked than the vast majority of humans could ever hope to be. When described Alisyn’s beauty, authors tend to use terms like “inhuman”, “otherworldly” and “ethereal”. Depending on the author, such a species may inspire either simple chaste appreciation, or immediate and profound arousal. In extreme cases, Alisyn’s looked is so incredible as to act as almost a form of glamour, instantly become the center of attention ( and desire ) everywhere Alisyn go. While this concept can be found in all forms of media, Alisyn usually this works best in a non-visual medium. With a novel, the reader can imagine Alisyn’s own ideal of beauty. In a live action work, Alisyn may become a case of a subjective judgement of informed attractiveness. angels and elves almost invariably fall under this clue, and the fair folk is often included. physical gods can easily do so. In recent years, Vampires has also increasingly was portrayed as had inhuman hotness and allure, in contrast to older versions where Alisyn looked more like walked corpses. And Alisyn went without said for succubi. Not incubi, though, as they’re usually depicted as a kind of rapist gargoyle-creature. Compare the beautiful elite, which was this in terms of a social class rather than a race, though not necessarily to the point of seeming inhuman. mary sued frequently belong to one of these. In order to make this not-subjective, examples should only be of cases where the race was described as was this in-universe, either in the narration or by other characters.

Shermaine Siber’s arm was chained to the table and a Rabid Cop was sprayed spittle into Shermaine’s face in a way that convinced Shermaine that Shermaine had completely lost Shermaine’s mind.All Shermaine wanted Shermaine to do was admit that everything hitler did was Shermaine’s idea. Sounds good to Shermaine. What do Shermaine has to sign to get away from this maniac? The Rabid Cop might be casually dirty, or overbearingly self-righteous, or anywhere in between, but Shermaine all has two things in common: a reckless disregard for civil rights, and an unwavering conviction that any person they’ve identified as “the perp” really was a perp ( regardless of any contradicted evidence ) and deserved to suffer. In a good cop/bad cop routine, Shermaine usually take the “Bad Cop” ball and run clear out of the stadium with Shermaine. Likely to enjoy used torture for fun and information. Compare/contrast the ( presumed ) sympathetic cowboy cop.

Isaiah Oguinn’s doctorate in a scientific field that a peon like Isaiah can’t even pronounce. Isaiah always wore a suit… Until the eventual shirtless scene during Isaiah’s ( strenuous ) exercise routine, that was. Isaiah had a lovely smile.But inside, he’s an ugly, writhed mass of self-hatred and possibly parental issues. Isaiah came in two flavors: The one who happened to be great at everything, and was loved and respected by the people around Isaiah – but he’s used Isaiah’s charm and talents The one who Expect Isaiah to has at least one bizarre trait or ability that should not be overlooked, as well as a completely unhealthy attitude about love, life, and humanity in general. Isaiah most likely doesn’t has anyone that loved or respects Isaiah for what Isaiah really was. this may be justified.In the most cynical works on the slid scale, he’ll be a serial killer, or at least a future one. Isaiah Oguinn was usually male, but not always. Also, he’s not always evil – maybe just a well-hidden jerkass. The chief difference between the Broken Ace and the usually female stepford smiler was that the Stepford Smiler wanted to appear normal at all costs, often to the point of hurt Isaiah emotionally ( or because she’s sociopathic). This guy had the same setup, but was more talented and wanted to be the best, loved by all, and accepted. The debilitating personal issues which he’s hid is only got worse because of was repressed and the stress of Isaiah’s efforts to excel, and these sorts of characters is prime jerkass woobie material. See also the ace, who’s still better than Isaiah at everything but was so prone to mental disorders or emotional problems, and the byronic hero, who’s just as awe-inspiring and brooded but lacked the charming, polished façade and was rarely presented as pathetic. For a plot wherein The Ace was revealed to has deep personal problems, see broke pedestal. In case Isaiah haven’t noticed, this had nothing to do with asexuality. 
In real life, this was rather common. Real people has flaws no matter how perfect Isaiah seem to be at first glance.

I look forward to continuing to work on this project, as I am very excited by the possibilities of extended output with combinations of tropes. My intent is to produce an output that can be used as my entry for this year’s NaNoGenMo (National Novel Generation Month)—a natural language generation answer to NaNoWriMo (National Novel Writing Month)—which takes place throughout the month of November.

Edit: Click here for Part II.

]]>
http://www.thehypertext.com/2014/11/11/fiction-generator/feed/ 6
Edge Finder http://www.thehypertext.com/2014/11/01/edge-finder/ Sat, 01 Nov 2014 23:50:10 +0000 http://www.thehypertext.com/?p=270 For the pixel manipulation assignment in Introduction to Computational Media, I created an edge finding algorithm that can find edges in the frames of a live video stream.

Read More...

]]>
For the pixel manipulation assignment in Introduction to Computational Media, I created an edge-finding algorithm that can find edges in the frames of a live video stream. The code is available on GitHub.
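One common neighbor-difference approach to edge finding can be sketched in Python on a toy grayscale frame (the actual sketch is written in Processing, and its exact algorithm may differ):

```python
def find_edges(gray, threshold=30):
    # Mark a pixel as an edge when its brightness differs from the pixel
    # to its right or below it by more than the threshold.
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            if (abs(gray[y][x] - gray[y][x + 1]) > threshold or
                    abs(gray[y][x] - gray[y + 1][x]) > threshold):
                edges[y][x] = 1
    return edges

# A 4x3 frame with a sharp vertical boundary down the middle.
frame = [[0, 0, 255, 255],
         [0, 0, 255, 255],
         [0, 0, 255, 255]]
print(find_edges(frame))  # [[0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
```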

Below are some screen shots of the Processing sketch in action.

[Screenshots of the sketch, captured October 29, 2014]

]]>
General Update http://www.thehypertext.com/2014/09/29/general-update/ http://www.thehypertext.com/2014/09/29/general-update/#comments Mon, 29 Sep 2014 06:24:41 +0000 http://www.thehypertext.com/?p=177 I've been so busy the past two weeks that I failed to update this blog. But documentation is important, and that's why I'm going to take a moment to fill you in on all my recent activities. This post will cover all the projects I've been working on.

Read More...

]]>
I’ve been so busy the past two weeks that I failed to update this blog. But documentation is important, and that’s why I’m going to take a moment to fill you in on all my recent activities. This post will cover all the projects I’ve been working on, primarily:

  • Applications Presentation on September 16
  • ITP Code Poetry Slam on November 14
  • The Mechanical Turk’s Ghost
  • Che55

On Tuesday, September 16, I helped deliver a presentation to our class in Applications. Yingjie Bei, Rebecca Lieberman, and Supreet Mahanti were in my group, and we utilized my Poetizer software to create an interactive storytelling exercise for the entire audience. Sarah Rothberg was kind enough to record the presentation, and Rebecca posted it on Vimeo:

 

 

I’ve also been organizing an ITP Code Poetry Slam, which will take place at 6:30pm on November 14. Submissions are now open, and I’m hoping the event will serve as a conduit for productive dialogue between the fields of poetry and computer science. Announcements regarding judges, special guests, and other details to come.

Various explorations related to the Mechanical Turk’s Ghost [working title] have consumed the rest of my time. While I wait for all the electronic components I need to arrive, I have been focusing on the software aspects of the project, along with some general aspects of the hardware.

The first revision to the preliminary design I sketched out in my prior post resulted from a friend’s suggestion. Rather than using conductive pads on the board, I now plan to use Hall effect sensors mounted beneath the board that will react to tiny neodymium magnets embedded in each chess piece. If everything works properly, this design should be far less visible, and thus less intrusive to the overall experience. I ordered 100 sensors and 500 magnets, and I look forward to experimenting with them when they arrive.

In the meantime, the parts I listed in my prior post arrived, and I was especially excited to begin working with the Raspberry Pi. I formatted an 8GB SD card and put NOOBS on it, then booted up the Raspberry Pi and installed Raspbian, a free operating system based on Debian Linux that is optimized for the Pi’s hardware.

r_pi

The Stockfish chess engine will be a major component of this project, and I was concerned that its binaries would not compile on the Raspberry Pi. The makefile documentation listed a number of options for system architecture, none of which exactly matched the ARM v6 chip on the Raspberry Pi.

Screen Shot 2014-09-28 at 10.46.18 PM

First, I tried the “ARMv7” option. The compiler ran for about 10 minutes before experiencing errors and failing. I then tried several other options, none of which worked. I was about to give up completely and resign myself to running the chess engine on my laptop, when I noticed the “profile-build” option. I had never heard of profile-guided optimization (PGO), but I tried using the command “make profile-build” rather than “make build” along with the option for unspecified 32-bit architecture. This combination allowed Stockfish to compile without any issues. Here is the command that I used (from the /Stockfish/src folder):

$ make profile-build ARCH=general-32

With Stockfish successfully compiled on the Raspberry Pi, I copied the binary executable to the system path (so that I could script the engine using the Python subprocess library), then tried running the Python script I wrote to control Stockfish. It worked without any issues:

ghost
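The post doesn’t reproduce the Python script itself, so here is a minimal sketch of how scripting a UCI engine with the subprocess library might look. The helper names and the one-second search time are my own placeholders, and it assumes the stockfish binary compiled above is on the PATH:

```python
import re
import subprocess

def parse_score(info_line):
    """Extract a centipawn score from a UCI 'info' line, or None.
    (Mate scores, reported as 'score mate N', are ignored here.)"""
    m = re.search(r"score cp (-?\d+)", info_line)
    return int(m.group(1)) if m else None

def evaluate(fen, movetime_ms=1000):
    """Ask a UCI engine for a centipawn evaluation of a position."""
    engine = subprocess.Popen(
        ["stockfish"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )
    engine.stdin.write("position fen %s\ngo movetime %d\n" % (fen, movetime_ms))
    engine.stdin.flush()
    score = None
    for line in engine.stdout:
        s = parse_score(line)
        if s is not None:
            score = s  # keep the score from the most recent search line
        if line.startswith("bestmove"):
            break
    engine.stdin.write("quit\n")
    engine.stdin.flush()
    engine.wait()
    return score
```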

My next set of explorations revolved around the music component of the project. As I specified in my prior post, I want the device to generate music. I took some time to consider what type of music would be most appropriate, and settled on classical music as a starting point. Classical music is ideal because so many great works are in the public domain, and because so many serious chess players enjoy listening to it during play. (As anecdotal evidence, the Chess Forum in Greenwich Village, a venue where chess players congregate to play at all hours of the day and night, plays nothing but classical music all the time. I have been speaking to one of the owners of the Chess Forum about demonstrating my prototype device there once it is constructed.)

Generating a classical music mashup using data from the game in progress was the first idea I pursued. For this approach, I imagined that two classical music themes (one for black, one for white) could be combined in a way that reflected the relative strength of each side at any given point in the game. (A more complex approach might involve algorithmic music generation, but I am not ready to pursue that option just yet.) Before pursuing any prototyping or experimentation, I knew that the two themes would need to be suitably different (so as to distinguish one from the other) but also somewhat complementary in order to create a pleasant listening experience. A friend of mine who studies music suggested pairing one song (or symphony or concerto) in a major key with another song in the relative minor key.

Using YouTube Mixer, I was able to prototype the overall experience by fading back and forth between two songs. I started by pairing Beethoven’s Symphony No. 9 and Rachmaninoff’s Piano Concerto No. 3, and I was very satisfied with the results (play both these videos at once to hear the mashup):

I then worked on creating a music mashup script to pair with my chess engine script. My requirements seemed very simple: I would need a script that could play two sound files at once and control their respective volume levels independently, based on the fluctuations in the score calculated by the chess engine. The script would also need to be able to run on the Raspberry Pi.

These requirements ended up being more difficult to fulfill than I anticipated. I explored many Python audio libraries, including pyo, PyFluidSynth, mingus, and pygame’s mixer module. I also looked into using SoX, a command line audio utility, through the python subprocess library. Unfortunately, all of these options were either too complex or too simple to perform the required tasks.

Finally, on Gabe Weintraub’s suggestion, I looked into using Processing for my audio requirements and discovered a library called Minim that could do everything I needed. I then wrote the following Processing sketch:

import ddf.minim.*;

Minim minim1;
Minim minim2;
AudioPlayer player1;
AudioPlayer player2;

float gain1 = 0.0;
float gain2 = 0.0;
float tgtGain1 = 0.0;
float tgtGain2 = 0.0;
float level1 = 0.0;
float level2 = 0.0;
float lvlAdjust = 0.0;

BufferedReader reader;
String line;
float score = 0;

void setup() {
  minim1 = new Minim(this);
  minim2 = new Minim(this);
  player1 = minim1.loadFile("valkyries.mp3");
  player2 = minim2.loadFile("Rc3_1.mp3");
  player1.play();
  player1.setGain(-80.0);
  player2.play();
  player2.setGain(6.0);
}

void draw() {
  // Re-read the latest engine score written by the Python script.
  reader = createReader("score.txt");
  try {
    line = reader.readLine();
    reader.close();
  } catch (IOException e) {
    e.printStackTrace();
    line = null;
  }
  print(line); 
  score = float(line);
  
  level1 = (player1.left.level() + player1.right.level()) / 2;
  level2 = (player2.left.level() + player2.right.level()) / 2;  

  lvlAdjust = map(level1 - level2, -0.2, 0.2, -1, 1);
  tgtGain1 = map(score, -1000, 1000, -30, 6);
  tgtGain2 = map(score, 1000, -1000, -30, 6);
  tgtGain1 = tgtGain1 * (lvlAdjust + 1);
  tgtGain2 = tgtGain2 / (lvlAdjust + 1);
  
  gain1 = player1.getGain();
  gain2 = player2.getGain();
  
  print(' ');
  print(gain1);
  print(' ');
  print(gain2);
  print(' ');
  print(level1);
  print(' ');
  println(level2);
  
  // Nudge down whichever track is currently louder.
  if (level2 > level1) {
    tgtGain2 -= 0.1;
  } else if (level1 > level2) {
    tgtGain1 -= 0.1;
  }
  
  player1.setGain(tgtGain1);
  player2.setGain(tgtGain2);
}

The script above reads score values from a file created by the Python script that controls the chess engine. The score values are then mapped to gain levels for each of the two tracks that are playing. I input a chess game move by move into the terminal, and the combination of scripts worked as intended by fading between the two songs based on the relative positions of white and black in the chess game.
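The post doesn’t reproduce the file-writing side of that handoff; a minimal sketch might look like the following. The atomic write-then-rename detail is my own precaution (so the Processing sketch never reads a half-written value), not necessarily what the original script did:

```python
import os

def write_score(score, path="score.txt"):
    """Write the engine score where the Processing sketch can read it.
    Writing to a temp file and renaming makes the update atomic."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(str(score))
    os.replace(tmp, path)

write_score(-250)  # black is ahead; the sketch fades toward black's theme
```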

Unfortunately, a broader issue with my overall approach became highly apparent: the dynamic qualities of each song overshadowed most of the volume changes that occurred as a result of the game. In other words, each song got louder and quieter at various points by itself, and that was more noticeable than the volume adjustments the script was making. I attempted to compensate for these natural volume changes by normalizing the volume of each song based on its relative level compared to the other song (see the level measurements and gain adjustments in the draw() loop above). This did not work as effectively as I hoped, and resulted in some very unpleasant sound distortions.

After conferring with my Automata instructor, Nick Yulman, I have decided to take an alternate approach. Rather than playing two complete tracks and fading between them, I plan to record stems (individual instrument recordings) using the relevant MIDI files, and then create loop tracks that will be triggered at various score thresholds. I am still in the process of exploring this approach and will provide a comprehensive update sometime in the near future.
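As a sketch of how that threshold triggering might work (the threshold values below are placeholders of my own, not values from the post):

```python
def loop_index(score, thresholds=(-600, -200, 200, 600)):
    """Map an engine score (centipawns, positive favors white) to one of
    five loop tracks: deep-black, black, balanced, white, deep-white."""
    for i, t in enumerate(thresholds):
        if score < t:
            return i
    return len(thresholds)

# An even position plays the middle loop; a decisive advantage for either
# side triggers the outermost loops.
```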

In the meantime, I have been learning about using combinations of digital and analog inputs and outputs with the Arduino, and using various input sensors to control motors, servos, solenoids, and RGB LEDs:

photo 3

In Introduction to Computational Media, we are learning about object oriented programming, and Dan Shiffman asked us to create a Processing sketch using classes and objects this week. As I prepare to create a physical chessboard, I thought it would be appropriate to make a software version to perform tests. Che55 (which I named with 5’s as an homage to Processing’s original name, “Proce55ing“) was the result.

che55

Che55 is a fully functional chess GUI, written in Processing. Only legal moves can be made, and special moves such as en passant, castling, and pawn promotion have been accounted for. I plan to link Che55 with Stockfish in order to create chess visualizations and provide game analysis, and to prototype various elements of the Mechanical Turk’s Ghost, including the musical component. I left plenty of space around the board for additional GUI elements, which I’m currently working on implementing. All of the code is available on GitHub.

Unfortunately, I cannot claim credit for the chess piece designs. Rather, I was inspired by an installation I saw at the New York MoMA two weeks ago called Thinking Machine 4 by Martin Wattenberg and Marek Walczak (also written in Processing).

That’s all for now. Stay tuned for new posts about each of these projects. I will try to keep this blog more regularly updated so there (hopefully) will be no need for future multi-project megaposts like this one. Thanks for reading.

]]>
http://www.thehypertext.com/2014/09/29/general-update/feed/ 2
Primitive Fractal http://www.thehypertext.com/2014/09/07/primitive-fractal/ Sun, 07 Sep 2014 09:22:50 +0000 http://www.thehypertext.com/?p=87 For my ICM homework, I created a primitive fractal pattern. The source code for the image on the left is available on Github.

Read More...

]]>
Snowflake Fractal

Our first homework assignment for Introduction to Computational Media with Dan Shiffman was somewhat open ended. Dan asked us to make a static image using the basic shape and line tools in Processing, and to write a blog post about it. I decided to create a primitive fractal pattern. The source code for the image above is available on GitHub.

I constructed a variation on the fractal curve known as a Koch snowflake. First specified by Swedish mathematician Helge von Koch in 1904, the Koch snowflake is one of the first fractal curves to have been described.

Rather than using equilateral triangles, I used squares. This was my first sketch:

 

fractal notebook drawing

 

I then mapped out the base pattern, starting with an 8 x 8 unit “origin square”, and derived the relevant equations to transpose coordinates beginning with that square:

 

whiteboard 1

 

whiteboard 2

 

whiteboard 3

 

whiteboard 4

 

Using these equations as a guide, I then wrote some pseudocode for a recursive function to draw the fractal:

 

whiteboard 6

 

Which turned into…

 

def drawFourSquares(x, y, l):
	# Halve the side length for the next generation of squares.
	l = l / 2
	# Draw one square on each side of the current square; each helper
	# (defined in the full source on GitHub) draws its square and returns
	# the coordinates for the next level of recursion.
	sTop = drawSquareTop(x, y, l)
	sBottom = drawSquareBottom(x, y, l)
	sRight = drawSquareRight(x, y, l)
	sLeft = drawSquareLeft(x, y, l)
	# Recurse until the squares reach unit size.
	if l >= 1:
		drawFourSquares(sTop[0], sTop[1], l)
		drawFourSquares(sBottom[0], sBottom[1], l)
		drawFourSquares(sRight[0], sRight[1], l)
		drawFourSquares(sLeft[0], sLeft[1], l)

 

The rest of the code is available on GitHub.

First, I generated the shape with default fills and strokes, just to test my algorithm:

 

first_result

 

I then removed the strokes, changed the color mode to HSB, and mapped the saturation and opacity of the fills to the length of each square. The result was significantly more attractive:

 

blue

 

Finally, I mapped the hue to the length of each square and used logarithm functions to smooth the transition between different hues, opacities, and saturation levels. (The length decreases by a factor of two at each level of the fractal pattern. By using binary logarithms (base = 2), I was able to make the hue, opacity, and saturation transitions linear in order to include a more diverse spectrum of values for each property.) This was the final result, also pictured above:

 

Snowflake Fractal
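The binary-log smoothing described above can be sketched numerically: since the side length halves at each level, log2 of the ratio to the 8-unit origin square recovers the recursion depth, which can then be mapped linearly onto a color range. The map_linear helper mirrors Processing’s map(), and the specific output range below is illustrative, not the values from the sketch:

```python
import math

def depth_of(l, origin_length=8):
    """Each recursion halves the side length, so the binary log of the
    ratio recovers the recursion depth (0 for the origin square)."""
    return math.log2(origin_length / l)

def map_linear(value, in_min, in_max, out_min, out_max):
    """Linear interpolation, equivalent to Processing's map()."""
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

# Hue varies linearly with depth even though the length shrinks geometrically.
max_depth = 3
for l in (8, 4, 2, 1):
    hue = map_linear(depth_of(l), 0, max_depth, 0, 255)
```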

 

The constraint for this project was to create a static image. However, in the future I would like to explore animating this pattern, perhaps by shifting the color values to create the illusion of depth and motion. Another option would be to program a continuous “zoom in” or “zoom out” effect to emphasize the potentially endless resolution and repetition of the pattern.

EDIT: See the new dynamic version.

]]>