novel camera
http://www.thehypertext.com/2015/12/01/novel-camera/
Tue, 01 Dec 2015 17:10:37 +0000

I have spent the last few months completing a novel I started a long time ago and turning it into a non-linear interactive experience. For my final project in several classes, I have transferred this novel into a printer-equipped camera to make a new and different type of photographic experience.

[Photos: IMG_1321, IMG_1439, IMG_1442]

 

Inside the antique camera is a Raspberry Pi with a camera module behind the lens. The flow of passages is controlled by a single, handwritten JSON file. When the tags Clarifai detects in an image overlap with the tags assigned to a passage, and that passage comes next in a storyline that has already begun, the passage is printed out. For example, a photo tagged night, bed, and bedroom would start the dream storyline with the dream_zero passage, and subsequent photos could advance it through chess_board to one of the chess-piece passages. If no passage can be found, the camera prints poetry generated by a recursive context-free grammar, built from words detected in the image.

[Photo: IMG_1317]

 

This week, I am planning to add a back-end component that will preserve the photos taken as albums and make the printed passages readable online later. For now, here is the JSON file that controls the order of output:

{
    "zero": {
        "tags": ["moon", "swamp", "marble", "north america", "insect", "street"],
        "order": 0,
        "next": ["story"]
    },
    "guam_zero": {
    	"tags": ["computer", "technology", "future", "keyboard", "politics"],
    	"order": 0,
    	"next": ["guam_one"]
    },
    "guam_one": {
    	"tags": ["computer", "technology", "future", "keyboard", "politics"],
    	"order": 1,
    	"next": []
    },
    "dream_zero": {
    	"tags": ["dream", "dark", "night", "sleep", "bed", "bedroom", "indoors"],
    	"order": 0,
    	"next": ["chess_board"]
    },
    "chess_board": {
    	"tags": ["dream", "dark", "night", "sleep", "bed", "bedroom", "indoors"],
    	"order": 2,
    	"next": ["black_queen", "black_pawn", "black_king", "black_rook", "white_king", "white_knight"]
    },
    "black_queen": {
    	"tags": ["dream", "dark", "black", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "queen"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "black_pawn": {
    	"tags": ["dream", "dark", "black", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "pawn"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "black_king": {
    	"tags": ["dream", "dark", "black", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "king"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "black_rook": {
    	"tags": ["dream", "dark", "black", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "rook", "castle"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "white_king": {
    	"tags": ["dream", "dark", "white", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "king"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "white_knight": {
    	"tags": ["dream", "dark", "white", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "knight"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "wake_up": {
    	"tags": ["dream", "dark", "night", "sleep", "bed", "bedroom", "indoors"],
    	"order": 4,
    	"next": []
    },
    "forget": {
    	"tags": ["man", "men", "boy"],
    	"order": 0,
    	"next": []
    },    
    "story": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "street", "woman", "women", "girl"],
    	"order": 1,
    	"next": ["miss_vest", "forget"]
    },
    "miss_vest": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "street", "woman", "women", "girl"],
    	"order": 2,
    	"next": ["envelope", "forget"]
    },
    "envelope": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "street", "woman", "women", "girl", "paper", "envelope", "mail"],
    	"order": 3,
    	"next": ["apartment", "forget"]
    },
    "apartment": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "street", "woman", "women", "girl", "paper", "envelope", "mail"],
    	"order": 4,
    	"next": ["email"]
    },
    "email": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "woman", "women", "girl", "paper", "envelope", "mail", "computer", "technology"],
    	"order": 5,
    	"next": ["match"]
    },
    "match": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "paper", "envelope", "mail", "computer", "technology"],
    	"order": 5,
    	"next": ["smithpoint", "morning"]
    },
    "morning": {
    	"tags": ["day", "sun", "bedroom", "bed", "breakfast", "morning", "dream", "dark", "night"],
    	"order": 6,
    	"next": ["call"]
    },
    "call": {
    	"tags": ["phone", "telephone", "technology", "computer"],
    	"order": 7,
    	"next": ["smithpoint"]
    },
    "smithpoint": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "bar", "drink", "alcohol", "wine", "beer"],
    	"order": 8,
    	"next": ["drive", "forget"]
    },
    "drive": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "bar", "drink", "alcohol", "wine", "beer"],
    	"order": 9,
    	"next": ["take_pill", "toss_pill"]
    },
    "take_pill": {
    	"tags": ["drug", "pill", "man", "men", "boy", "bar", "night", "drink", "alcohol", "wine", "beer"],
    	"order": 10,
    	"next": ["meet_stranger_drugs", "john_home"]
    },
    "toss_pill": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "girl", "street", "woman", "women"],
    	"order": 10,
    	"next": ["meet_stranger_no_drugs"]
    },
    "meet_stranger_drugs": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "bar", "drink", "alcohol", "wine", "beer"],
    	"order": 11,
    	"next": ["john_home"]
    },
    "meet_stranger_no_drugs": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "bar", "drink", "alcohol", "wine", "beer"],
    	"order": 11,
    	"next": ["painting"]
    },
    "painting": {
    	"tags": ["painting", "art", "moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "bar", "drink", "alcohol", "wine", "beer"],
    	"order": 12,
    	"next": []
    },
    "john_home": {
    	"tags": ["drug", "pill", "man", "men", "boy", "bar", "night", "drink", "alcohol", "wine", "beer"],
    	"order": 13,
    	"next": []
    }

}

And here is the code that’s currently running on the Raspberry Pi:

import RPi.GPIO as GPIO
from Adafruit_Thermal import *
import time
import os
import sys
import json
import picamera
from clarifai.client import ClarifaiApi
from pattern.en import referenced

import gen

# Init Clarifai
os.environ["CLARIFAI_APP_ID"] = "nAT8dW6B0Oc5qA6JQfFcdIEr-CajukVSOZ6u_IsN"
os.environ["CLARIFAI_APP_SECRET"] = "BnETdY6wtp8DmXIWCBZf8nE4XNPtlHMdtK0ISNJQ"
clarifai_api = ClarifaiApi() # Assumes Env Vars Set

# Init System Paths
APP_PATH = os.path.dirname(os.path.realpath(__file__))
IMG_PATH = os.path.join(APP_PATH, 'img')
TALE_PATH = os.path.join(APP_PATH, 'tales')

# Init tale_dict
with open(os.path.join(APP_PATH, 'tales_dict.json'), 'r') as infile:
    tale_dict = json.load(infile)

# Seen tales
seen_tales = list()

# Init Camera
camera = picamera.PiCamera()

# Init Printer
printer = Adafruit_Thermal("/dev/ttyAMA0", 9600, timeout=5)
printer.boldOn()

# Init GPIO
# With camera pointed forward...
# LEFT:  11 (button), 15 (led)
# RIGHT: 13 (button), 16 (led)
GPIO.setmode(GPIO.BOARD)
ledPins = (15,16)
butPins = (11,13)

for pinNo in ledPins:
    GPIO.setup(pinNo, GPIO.OUT)

for pinNo in butPins:
    GPIO.setup(pinNo, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Open Grammar Dict
with open(os.path.join(APP_PATH, 'weird_grammar.json'), 'r') as infile:
    grammar_dict = json.load(infile)

def blink_left_right(count):
    ledLeft, ledRight = ledPins
    for _ in range(count):
        GPIO.output(ledRight, False)
        GPIO.output(ledLeft, True)
        time.sleep(0.2)
        GPIO.output(ledRight, True)
        GPIO.output(ledLeft, False)
        time.sleep(0.2)
    GPIO.output(ledRight, False)

def to_lines(sentences):
    def sentence_to_lines(text):
        LL = 32
        tokens = text.split(' ')
        lines = list()
        curLine = list()
        charCount = 0
        for t in tokens:
            charCount += (len(t)+1)
            if charCount > LL:
                lines.append(' '.join(curLine))
                curLine = [t]
                charCount = len(t)+1
            else:
                curLine.append(t)
        lines.append(' '.join(curLine))
        return '\n'.join(lines)
    sentence_lines = map(sentence_to_lines, sentences)
    return '\n\n'.join(sentence_lines)

def open_tale(tale_name):
    with open(os.path.join(TALE_PATH, tale_name), 'r') as infile:
        tale_text = to_lines(
            filter(lambda x: x.strip(), infile.read().strip().split('\n'))
        )
    return tale_text

def pick_tale(tags, next_tales):
    # Choose the passage whose tags overlap most with the image tags.
    # Only storyline openings (order 0) and passages that continue the
    # current storyline are candidates; unseen continuations with any
    # overlap get a large bonus so stories take priority over openings.
    choice = str()
    record = 0
    for tale in tale_dict:
        if tale in next_tales or tale_dict[tale]['order'] == 0:
            score = len(set(tale_dict[tale]['tags']) & set(tags))
            if tale in next_tales and score > 0 and not tale in seen_tales:
                score += 100
            if score > record:
                choice = tale
                record = score
    return choice


blink_left_right(5)
imgCount = 1
cur_tale = str()


while True:
    inputLeft, inputRight = map(GPIO.input, butPins)
    # Buttons are active-low (pull-ups): exactly one pressed means "take a photo"
    if inputLeft != inputRight:
        try:
            img_fn = str(int(time.time()*100))+'.jpg'
            img_fp = os.path.join(IMG_PATH, img_fn)

            camera.capture(img_fp)

            blink_left_right(3)

            result = clarifai_api.tag_images(open(img_fp))
            tags = result['results'][0]['result']['tag']['classes']

            if cur_tale:
                next_tales = tale_dict[cur_tale]['next']
            else:
                next_tales = list()

            tale_name = pick_tale(tags, next_tales)
            cur_tale = tale_name

            if tale_name:
                lines_to_print = open_tale(tale_name)
                seen_tales.append(tale_name)

            else:
                grammar_dict["N"].extend(tags)

                if not inputLeft:
                    sentences = [gen.make_polar(grammar_dict, 10, sent=0) for _ in range(10)]
                elif not inputRight:
                    sentences = [gen.make_polar(grammar_dict, 10) for _ in range(10)]
                else:
                    sentences = gen.main(grammar_dict, 10)

                lines_to_print = to_lines(sentences)

            prefix = '\n\n\nNo. %i\n\n'%imgCount

            printer.println(prefix+lines_to_print+'\n\n\n')

            grammar_dict["N"] = list()
            imgCount += 1
        except:
            blink_left_right(15)
            print sys.exc_info()

    # Both buttons held down: wait roughly five seconds, then shut the Pi down
    elif (not inputLeft) and (not inputRight):
        offCounter = 0
        for _ in range(100):
            inputLeft, inputRight = map(GPIO.input, butPins)
            if (not inputLeft) and (not inputRight):
                time.sleep(0.1)
                offCounter += 1
                if offCounter > 50:
                    os.system('sudo shutdown -h now')
            else:
                break

 

Click here for a Google Drive folder with all the passages from the novel.

word.camera, Part II
http://www.thehypertext.com/2015/05/08/word-camera-part-ii/
Fri, 08 May 2015 21:50:25 +0000

Click Here for Part I




For my final projects in Conversation and Computation with Lauren McCarthy and This Is The Remix with Roopa Vasudevan, I iterated on my word.camera project. I added a few new features to the web application, including a private API that I used to enable the creation of a physical version of word.camera inside a Mamiya C33 TLR.

The current version of the code remains open source and available on GitHub, and the project continues to receive positive mentions in the press.

On April 19, I announced two new features for word.camera via the TinyLetter email newsletter I advertised on the site.

Hello,

Thank you for subscribing to this newsletter, wherein I will provide occasional updates regarding my project, word.camera.

I wanted to let you know about two new features I added to the site in the past week:

word.camera/albums: You can now generate ebooks (DRM-free ePub format) from sets of lexographs.

word.camera/postcards: You can support word.camera by sending a lexograph as a postcard, anywhere in the world for $5. I am currently a graduate student, and proceeds will help cover the cost of maintaining this web application as a free, open source project.

Also:

word.camera/a/XwP59n1zR: A lexograph album containing some of the best results I’ve gotten so far with the camera on my phone.

1, 2, 3: A few random lexographs I did not make that were popular on social media.

Best,

Ross Goodwin
rossgoodwin.com
word.camera

Next, I set to work on the physical version. I decided to use a technique I developed on another project earlier in the semester to create word.camera epitaphs composed of highly relevant paragraphs from novels. To ensure fair use of copyrighted materials, I determined that all of this additional data would be processed locally on the physical camera.

I built the corpus from a combination of classic novels and books I personally enjoyed, including only paragraphs over 99 characters in length. In total, the collection contains 7,113,809 words from 48 books.
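The matching code further down expects each entry in lit.json to carry its text, title, author, and a dictionary of AlchemyAPI concepts mapped to relevance scores. Here is a minimal sketch of how such a corpus might be assembled; the build_corpus and tag_concepts helpers and the input file layout are hypothetical, and the AlchemyAPI request mirrors the alchemy() function in the camera code below:

import json
import requests

from alchemykey import apikey  # same key module the camera code imports

ALCHEMY_ENDPOINT = "http://access.alchemyapi.com/calls/text/TextGetRankedConcepts"

def tag_concepts(text):
    # Ask AlchemyAPI for ranked concepts, as the camera does at runtime
    payload = {"apikey": apikey, "text": text, "outputMode": "json",
               "showSourceText": 0, "knowledgeGraph": 1, "maxRetrieve": 500}
    headers = {'content-type': 'application/x-www-form-urlencoded'}
    r = requests.post(ALCHEMY_ENDPOINT, data=payload, headers=headers)
    return {c['text']: float(c['relevance']) for c in r.json()['concepts']}

def build_corpus(books):
    # books: a list of dicts like {'title': ..., 'author': ..., 'path': ...}
    corpus = []
    for book in books:
        with open(book['path']) as f:
            paragraphs = [p.strip() for p in f.read().split('\n')
                          if len(p.strip()) > 99]
        for p in paragraphs:
            corpus.append({'title': book['title'], 'author': book['author'],
                           'text': p, 'concepts': tag_concepts(p)})
    with open('lit.json', 'w') as out:
        json.dump(corpus, out)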

Below is an infographic showing all the books used in my corpus, and their relative included word counts (click on it for the full-size image).

[Infographic: books in the corpus and their relative word counts]

To build the physical version of word.camera, I purchased the following materials:

  • Raspberry Pi 2 board
  • Raspberry Pi camera module
  • Two (2) 10,000 mAh batteries
  • Thermal receipt printer
  • 40 female-to-male jumper wires
  • Three (3) extra-small prototyping perf boards
  • LED button

After some tinkering, I was able to put together the arrangement pictured below, which could print raw word.camera output on the receipt printer.

[Photo of the prototype arrangement (IMG_0354)]

I thought for a long time about the type of case I wanted to put the camera in. My original idea was a photobooth, but I felt that a portable camera—along the lines of Matt Richardson’s Descriptive Camera—might take better advantage of the Raspberry Pi’s small footprint.

Rather than fabricating my own case, I determined that an antique film camera might provide a familiar exterior to draw in people not familiar with the project. (And I was creating it for a remix-themed class, after all.) So I purchased a lot of three broken TLR film cameras on eBay; the Mamiya C33 was in the best condition of the three, so I gutted it. (N.B. I’m an antique camera enthusiast—I own a working version of the C33’s predecessor, the C2—and, despite its broken condition, cutting open the bellows of the C33 felt sacrilegious.)

I laser cut some clear acrylic I had left over from the traveler’s lamp project to fill the lens holes and mount the LED button on the back of the camera. Here are some photos of the finished product:

[Photos of the finished camera]

And here is the code that’s running on the Raspberry Pi (the crux of the matching algorithm is the findIntersection function):

import uuid
import picamera
import RPi.GPIO as GPIO
import requests
from time import sleep
import os
import json
from Adafruit_Thermal import *
from alchemykey import apikey
import time

# SHUTTER COUNT / startNo GLOBAL
startNo = 0

# Init Printer
printer = Adafruit_Thermal("/dev/ttyAMA0", 19200, timeout=5)
printer.setSize('S')
printer.justify('L')
printer.setLineHeight(36)

# Init Camera
camera = picamera.PiCamera()

# Init GPIO
GPIO.setmode(GPIO.BCM)

# Working Dir
cwd = '/home/pi/tlr'

# Init Button Pin
GPIO.setup(21, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Init LED Pin
GPIO.setup(20, GPIO.OUT)

# Init Flash Pin
GPIO.setup(16, GPIO.OUT)

# LED and Flash Off
GPIO.output(20, False)
GPIO.output(16, False)

# Load lit list
lit = json.load( open(cwd+'/lit.json', 'r') )


def blink(n):
    for _ in range(n):
        GPIO.output(20, True)
        sleep(0.2)
        GPIO.output(20, False)
        sleep(0.2)

def takePhoto():
    fn = str(int(time.time()))+'.jpg' # TODO: Change to timestamp hash
    fp = cwd+'/img/'+fn
    GPIO.output(16, True)
    camera.capture(fp)
    GPIO.output(16, False)
    return fp

def getText(imgPath):
    endPt = 'https://word.camera/img'
    payload = {'Script': 'Yes'}
    files = {'file': open(imgPath, 'rb')}
    response = requests.post(endPt, data=payload, files=files)
    return response.text

def alchemy(text):
    endpt = "http://access.alchemyapi.com/calls/text/TextGetRankedConcepts"
    payload = {"apikey": apikey,
               "text": text,
               "outputMode": "json",
               "showSourceText": 0,
               "knowledgeGraph": 1,
               "maxRetrieve": 500}
    headers = {'content-type': 'application/x-www-form-urlencoded'}
    r = requests.post(endpt, data=payload, headers=headers)
    return r.json()

def findIntersection(testDict):
    # Find the corpus paragraph whose AlchemyAPI concepts overlap most with
    # the concepts extracted from the word.camera output: the winner has the
    # highest summed relevance over the intersecting concepts.
    returnText = ""
    returnTitle = ""
    returnAuthor = ""
    recordInter = set(testDict.keys())
    relRecord = 0.0
    for doc in lit:
        inter = set(doc['concepts'].keys()) & set(testDict.keys())
        if inter:
            relSum = sum([doc['concepts'][tag]+testDict[tag] for tag in inter])
            if relSum > relRecord: 
                relRecord = relSum
                recordInter = inter
                returnText = doc['text']
                returnTitle = doc['title']
                returnAuthor = doc['author']
    doc = {
        'text': returnText,
        'title': returnTitle,
        'author': returnAuthor,
        'inter': recordInter,
        'record': relRecord
    }
    return doc

def puncReplace(text):
    replaceDict = {
        '—': '---',
        '–': '--',
        '‘': "\'",
        '’': "\'",
        '“': '\"',
        '”': '\"',
        '´': "\'",
        'ë': 'e',
        'ñ': 'n'
    }

    for key in replaceDict:
        text = text.replace(key, replaceDict[key])

    return text


blink(5)
while 1:
    input_state = GPIO.input(21)
    if not input_state:
        GPIO.output(20, True)
        try:
            # Get Word.Camera Output
            print "GETTING TEXT FROM WORD.CAMERA..."
            wcText = getText(takePhoto())
            blink(3)
            GPIO.output(20, True)
            print "...GOT TEXT"

            # Print
            # print "PRINTING PRIMARY"
            # startNo += 1
            # printer.println("No. %i\n\n\n%s" % (startNo, wcText))

            # Get Alchemy Data
            print "GETTING ALCHEMY DATA..."
            data = alchemy(wcText)
            tagRelDict = {concept['text']:float(concept['relevance']) for concept in data['concepts']}
            blink(3)
            GPIO.output(20, True)
            print "...GOT DATA"

            # Make Match
            print "FINDING MATCH..."
            interDoc = findIntersection(tagRelDict)
            print interDoc
            interText = puncReplace(interDoc['text'].encode('ascii', 'xmlcharrefreplace'))
            interTitle = puncReplace(interDoc['title'].encode('ascii', 'xmlcharrefreplace'))
            interAuthor = puncReplace(interDoc['author'].encode('ascii', 'xmlcharrefreplace'))
            blink(3)
            GPIO.output(20, True)
            print "...FOUND"

            grafList = [p for p in wcText.split('\n') if p]

            # Choose primary paragraph
            primaryText = min(grafList, key=lambda x: x.count('#'))
            url = 'word.camera/i/' + grafList[-1].strip().replace('#', '')

            # Print
            print "PRINTING..."
            startNo += 1
            printStr = "No. %i\n\n\n%s\n\n%s\n\n\n\nEPITAPH\n\n%s\n\nFrom %s by %s" % (startNo, primaryText, url, interText, interTitle, interAuthor)
            printer.println(printStr)

        except:
            print "SOMETHING BROKE"
            blink(15)

        GPIO.output(20, False)

Thanks to a transistor pulsing circuit that keeps the printer’s battery awake, and some code that automatically tethers the Raspberry Pi to my iPhone, the Fiction Camera is fully portable. I’ve been walking around Brooklyn and Manhattan over the past week making lexographs—the device is definitely a conversation starter. As a street photographer, I’ve noticed that people seem to be more comfortable having their photograph taken with it than with a standard camera, possibly because the visual image (and whether they look alright in it) is far less important.

As a result of these wanderings, I’ve accrued quite a large number of lexograph receipts. Earlier iterations of the receipt design contained longer versions of the word.camera output. Eventually, I settled on a version that contains a number (indicating how many lexographs have been taken since the device was last turned on), one paragraph of word.camera output, a URL to the word.camera page containing the photo + complete output, and a single high-relevance paragraph from a novel.

[Scans of lexograph receipts]

I also demonstrated the camera at ConvoHack, our final presentation event for Conversation and Computation, which took place at Babycastles gallery, and passed out over 50 lexograph receipts that evening alone.

[Photos from the ConvoHack demonstration]

Photographs by Karam Byun

Often, when photographing a person, the camera will output a passage from a novel featuring a character description that subjects seem to relate to. Many people have told me the results have qualities that remind them of horoscopes.

word.camera
http://www.thehypertext.com/2015/04/11/word-camera/
Sat, 11 Apr 2015 05:12:58 +0000

lexograph /ˈleksəʊɡɹɑːf/ (n.)
A text document generated from digital image data

 

Last week, I launched a web application and a concept for photographic text generation that I have been working on for a few months. The idea came to me while working on another project, a computer generated screenplay, and I will discuss the connection in this post.

word.camera is responsive — it works on desktop, tablet, and mobile devices running recent versions of iOS or Android. The code behind it is open source and available on GitHub, because lexography is for everyone.

 

[Screenshots of word.camera]

 

Users can share their lexographs using unique URLs. Of all the lexographs I’ve seen generated by users since the site launched (there are now almost 7,000), this one, shared on reddit’s /r/creativecoding, stuck with me the most: http://word.camera/i/7KZPPaqdP

I was surprised when the software noticed and commented on the singer in the painting behind me: http://word.camera/i/ypQvqJr6L

I was inspired to create this project while working on another one. This semester, I received a grant from the Future of Storytelling Initiative at NYU to produce a computer-generated screenplay, and I had been thinking about how to generate text that’s more cohesive and realistically descriptive, meaning that it would transition between related topics in a logical fashion and describe a scene that could realistically exist (no “colorless green ideas sleeping furiously”), in order to make filming the screenplay possible. After playing with the Clarifai API, which uses convolutional neural networks to tag images, it occurred to me that including photographs in my input corpus, rather than relying on text alone, could provide those qualities. word.camera is my first attempt at producing that type of generative text.

At the moment, the results are not nearly as grammatical as I would like them to be, and I’m working on that. The algorithm extracts tags from images using Clarifai’s convolutional neural networks, then expands those tags into paragraphs using ConceptNet (a lexical relations database developed at MIT) and a flexible template system. The template system enables the code to build sentences that connect concepts together.
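To make that pipeline concrete, here is a minimal sketch of the tag-expansion idea, not the actual word.camera implementation: it queries the current public ConceptNet API at api.conceptnet.io (which may differ from the ConceptNet version the project used), and the sentence templates are purely illustrative.

import requests

# Illustrative relation -> sentence templates (not word.camera's templates)
TEMPLATES = {
    'IsA': '{start} is {end}.',
    'UsedFor': '{start} is used for {end}.',
    'AtLocation': 'You might find {start} at {end}.',
    'HasProperty': '{start} can be {end}.',
}

def expand_tag(tag, limit=3):
    # Look the tag up in ConceptNet and turn a few English edges into sentences
    url = 'http://api.conceptnet.io/c/en/' + tag.replace(' ', '_')
    edges = requests.get(url).json().get('edges', [])
    sentences = []
    for edge in edges:
        rel = edge['rel']['label']
        if rel in TEMPLATES and edge['start'].get('language') == 'en' \
                and edge['end'].get('language') == 'en':
            sentences.append(TEMPLATES[rel].format(
                start=edge['start']['label'], end=edge['end']['label']))
        if len(sentences) >= limit:
            break
    return ' '.join(sentences)

# e.g. expand_tag('camera') might produce something like
# "a camera is used for taking a photo. ..."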

This project is about augmenting our creativity and presenting images in a different format, but it’s also about creative applications of artificial intelligence technology. I think that when we think about the type of artificial intelligence we’ll have in the future, based on what we’ve read in science fiction novels, we think of a robot that can describe and interact with its environment with natural language. I think that creating the type of AI we imagine in our wildest sci-fi fantasies is not only an engineering problem, but also a design problem that requires a creative approach.

I hope lexography eventually becomes accepted as a new form of photography. As a writer and a photographer, I love the idea that I could look at a scene and photograph it because it might generate an interesting poem or short story, rather than just an interesting image. And I’m not trying to suggest that word.camera is the final or the only possible implementation of that new art form. I made the code behind word.camera open source because I want others to help improve it and make their own versions — provided they also make their code available under the same terms, which is required under the GNU GPLv3 open source license I’m using. As the technology gets better, the results will get better, and lexography will make more sense to people as a worthy artistic pursuit.

I’m thrilled that the project has received worldwide attention from photography blogs and a few media outlets, and I hope users around the world continue enjoying word.camera as I keep working to improve it. Along with improving the language, I plan to expand the project by offering a mobile app and generated downloadable ebooks so that users can enjoy their lexographs offline.


 

Click Here for Part II

Dr. Gonzo
http://www.thehypertext.com/2015/02/19/dr-gonzo/
Thu, 19 Feb 2015 02:57:21 +0000

For my first project in Conversation and Computation with Lauren McCarthy, I created a therapist bot with the voice of Hunter S. Thompson. The bot currently runs in the terminal, but I am working on a web version. All my code is on GitHub.

[Screenshot of a Dr. Gonzo session]

To make Dr. Gonzo, I used AlchemyAPI concept extraction to tag each paragraph of a large corpus of Hunter S. Thompson’s writing. I fed the tagged corpus into a MongoDB database, which I query with PyMongo. I used Pattern and NLTK to parse and categorize user input and match it to documents in the database. Database entries are appended with text generated from a template engine. Additionally, my template engine handles the first several user requests in every session.
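Here is a minimal sketch of how that lookup could work with PyMongo. The database and collection names, field names, and the fallback behavior are assumptions for illustration, not the actual Dr. Gonzo schema:

from pymongo import MongoClient

# Hypothetical database/collection names for illustration
client = MongoClient()
paragraphs = client['gonzo']['paragraphs']

def best_match(user_concepts):
    # Pull every tagged paragraph that shares at least one concept with the
    # user's input, then keep the one with the largest overlap.
    candidates = list(paragraphs.find({'concepts': {'$in': user_concepts}}))
    if not candidates:
        return None  # fall back to the template engine in this case
    return max(candidates,
               key=lambda doc: len(set(doc['concepts']) & set(user_concepts)))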

Here are a few more screenshots of the doctor in action:

[Screenshots of Dr. Gonzo sessions]

 

Fiction Generator, Part IV
http://www.thehypertext.com/2014/12/21/fiction-generator-part-iv/
Sun, 21 Dec 2014 03:04:53 +0000

Prior Installments:
Part I
Part II
Part III

For my final project in Comm Lab: Networked Media with Daniel Shiffman, I put the Fiction Generator online at fictiongenerator.com. VICE/Motherboard ran an article about my website, and I exhibited the project at the ITP Winter Show.


After reading William S. Burroughs’ essay about the cut-up technique, I decided to implement an algorithmic version of it in the generator (a minimal sketch of the idea appears below). I also refactored my existing code and added a load screen, with this animation:

[Animation: a robot holding a book]
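For reference, here is a minimal, illustrative cut-up function, not the generator’s actual implementation: it slices a text into short fragments and shuffles them, in the spirit of Burroughs’ technique.

import random

def cut_up(text, min_len=3, max_len=8):
    # Split the text into fragments of 3-8 words, then shuffle the fragments
    words = text.split()
    fragments = []
    i = 0
    while i < len(words):
        n = random.randint(min_len, max_len)
        fragments.append(' '.join(words[i:i+n]))
        i += n
    random.shuffle(fragments)
    return ' '.join(fragments)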

I am running a Linux/Apache/Flask stack at the moment. Here’s a screenshot of the website in its current state:

[Screenshot of fictiongenerator.com]

Fiction Generator, Part III
http://www.thehypertext.com/2014/12/09/fiction-generator-part-iii/
Tue, 09 Dec 2014 19:00:04 +0000

Prior Installments:
Part I
Part II

For my final project in Introduction to Computational Media with Daniel Shiffman, I presented my fiction generator (working title: “FicGen”). Since my previous post about this project, I have added a graphical user interface and significantly expanded/refactored my code, which I moved to a new repository on GitHub. I have also submitted this project as my entry in the ITP Winter Show. For my Networked Media final project, which is due Friday, I plan to put FicGen online.

Here is a screenshot of the GUI, which I implemented in Processing:

[Screenshot of the FicGen GUI]

When I presented this project in our final ICM class on Tuesday, November 25, the only working elements in the GUI were the text fields and the big red button. Now, most of the buttons and sliders have functionality as well. After the user pushes the red button, a Python script emails them the completed novel in PDF format.

After creating the GUI above, I expanded the material I am using to generate the novels by scraping content from two additional sources: over 2,000 sci-fi/horror stories from scp-wiki.net, and over 47,000 books from Project Gutenberg. I then significantly refactored my code to accommodate these additions. My new Python program, ficgen.py, is far more object oriented and organized than my previous plotgen script, which had become somewhat of a mess by the time I presented my project in class two weeks ago.

Here’s the current code:

import math
import argparse
import random
from random import choice as rc
from random import sample as rs
from random import randint as ri
import string
from zipfile import ZipFile

import nltk
import en

from g_paths import gPaths
from erowid_experience_paths import erowidExpPaths
from tropes_character import characterTropeFiles
from tropes_setting import settingTropeFiles
from scp_paths import scpPaths
from firstnames_f import fFirstNames
from firstnames_m import mFirstNames
from surnames import surnames


# TODO:
# [X] CLEAN UP TROPE FILE PATHS LIST
# [ ] Fix "I'm" and "I'll" problem
# [ ] Add Plot Points / Narrative Points / Phlebotinum
# [ ] subtrope / sub-trope
# [ ] add yelp reviews
# [ ] add livejournal
# [X] add SCP

# System Path

sysPath = "/Users/rg/Projects/plotgen/ficgen/"


# Argument Values

genre_list = ['literary', 'sci-fi', 'fantasy', 'history', 'romance', 'thriller', 
			  'mystery', 'crime', 'pulp', 'horror', 'beat', 'fan', 'western', 
			  'action', 'war', 'family', 'humor', 'sport', 'speculative']
conflict_list = ['nature', 'man', 'god', 'society', 'self', 'fate', 'tech', 'no god', 'reality', 'author']
narr_list = ['first', '1st', '1', 'third', '3rd', '3', 'alt', 'alternating', 'subjective', 
			 'objective', 'sub', 'obj', 'omniscient', 'omn', 'limited', 'lim']

parser = argparse.ArgumentParser(description='Story Parameters')
parser.add_argument('--charnames', nargs='*', help="Character Names")
parser.add_argument('--title', help="Story Title")
parser.add_argument('--length', help="Story Length (0-999)")
parser.add_argument('--charcount', help="Character Count (0-999)")
parser.add_argument('--genre', nargs='*', help="Genre", choices=genre_list)
parser.add_argument('--conflict', nargs='*', help="Conflict", choices=conflict_list)
parser.add_argument('--passion', help="Passion (0-999)")
parser.add_argument('--verbosity', help="Verbosity (0-999)")
parser.add_argument('--realism', help="Realism (0-999)")
parser.add_argument('--density', help="Density (0-999)")
parser.add_argument('--accessibility', help="Accessibility (0-999)")
parser.add_argument('--depravity', help="Depravity (0-999)")
parser.add_argument('--linearity', help="Linearity (0-999)")
parser.add_argument('--narrator', nargs='*', help="Narrative PoV", choices=narr_list)
args = parser.parse_args()


# ESTABLISH SYSTEM-WIDE COEFFICIENTS/CONSTANTS

# tsv = trope setting volume
TSV = (int(args.length)/2.0 + int(args.realism)/6.0 + int(args.passion)/3.0)/1000.0
if 'fan' in args.genre:
	TSV += 1.0
TSV = int(math.ceil(2.0*TSV))

# cc = actual number of extra characters / MAKE EXPONENTIAL
CC = int(math.exp(math.ceil(int(args.charcount)/160.0))/2.0)+10

# chc = chapter count
CHC = int(math.exp(math.ceil(int(args.length)/160.0))/2.0)+10

# dtv = drug trip volume
DTV = (int(args.length)/4.0 + int(args.realism)/12.0 + int(args.passion)/6.0 + int(args.depravity)*1.5)/1000.0
if 'beat' in args.genre:
	DTV += 1.0
if 'society' in args.conflict:
	DTV += 1.0
DTV = int(math.ceil(5.0*DTV))

# scp = scp article volume
SCP = int(args.length)/1000.0
if bool(set(['sci-fi', 'horror']) & set(args.genre)):
	SCP += 1.0
if bool(set(['tech', 'no god', 'reality', 'nature', 'god']) & set(args.conflict)):
	SCP += 1.0
SCP = int(math.ceil(2.0*SCP))

# den = length (in chars) of project gutenberg excerpts
DEN = int(args.density)*10

# ggv = gutenberg excerpt volume
GGV = (int(args.length) + int(args.density))/500.0
if 'literary' in args.genre:
	GGV += 2.0
GGV = int(math.ceil(5.0*GGV))

# chl = chapter length as percent of potential chapter length
CHL = int(args.length)/1000.0


# file text fetchers
def get_file(fp):

	f = open(sysPath+fp, 'r')
	t = f.read()
	f.close()

	return t

def get_zip(fp):

	fileName = fp.split('/')[-1]
	noExtName = fileName.split('.')[0]
	txtName = noExtName + ".txt"

	ff = ZipFile(fp, 'r')
	fileNames = ff.namelist()
	oo = ff.open(fileNames[0], 'r')
	tt = oo.read()
	oo.close()
	ff.close()

	return tt



# CLASSES

class Character(object):

	def __init__(self, firstName, lastName):
		self.firstName = firstName
		self.lastName = lastName
		self.introDesc = ""
		self.scenes = []
		self.drugTrips = []
		self.scpReports = [] 
		self.gbergExcerpts = []
		self.friends = [] # list of objects


class Chapter(object):

	def __init__(self, charObj):
		self.charObj = charObj
		self.title = ""
		self.blocks = []


	def title_maker(self):
		charTitle = ri(0, 2)

		if not bool(charTitle):

			ttl = self.charObj.firstName + " " + self.charObj.lastName

		else:
			
			titleSource = ri(0, 3)

			if titleSource == 0:
				textSource = rc(self.charObj.scenes)
			elif titleSource == 1:
				textSource = rc(self.charObj.drugTrips)
			elif titleSource == 2:
				textSource = rc(self.charObj.scpReports)
			elif titleSource == 3:
				textSource = rc(self.charObj.gbergExcerpts)

			tokens = nltk.word_tokenize(textSource)

			if len(tokens) > 20:
				index = ri(0, len(tokens)-10)
				titleLen = ri(2, 6)
				ttl = ' '.join(tokens[index:index+titleLen])
			else:
				ttl = self.charObj.firstName + " " + self.charObj.lastName

		self.title = ttl


	def chapter_builder(self):
		blockList = [self.charObj.introDesc] + self.charObj.scenes + self.charObj.drugTrips + self.charObj.scpReports + self.charObj.gbergExcerpts
		
		random.shuffle(blockList)

		stopAt = int(math.ceil(CHL*len(blockList)))

		blockList = blockList[:stopAt]

		self.blocks = blockList

		# self.blocks.append("stuff")



class Novel(object):

	def __init__(self):
		self.title = args.title
		self.characters = [] # list of characters
		self.chapters = [] # list of chapters

	def generate(self):
		self.make_chars()
		self.assemble_chapters()
		self.make_tex_file()


	def make_tex_file(self):
		# Look at PlotGen for this part
		outputFileName = self.title

		latex_special_char_1 = ['&', '%', '$', '#', '_', '{', '}']
		latex_special_char_2 = ['~', '^', '\\']

		outputFile = open(sysPath+"output/"+outputFileName+".tex", 'w')

		openingTexLines = ["\\documentclass[12pt]{book}",
						   "\\usepackage{ucs}",
						   "\\usepackage[utf8x]{inputenc}",
						   "\\usepackage{hyperref}",
						   "\\title{"+outputFileName+"}",
						   "\\author{collective consciousness fiction generator\\\\http://rossgoodwin.com/ficgen}",
						   "\\date{\\today}",
						   "\\begin{document}",
						   "\\maketitle"]

		closingTexLine = "\\end{document}"

		for line in openingTexLines:
			outputFile.write(line+"\n\r")
		outputFile.write("\n\r\n\r")

		for ch in self.chapters:

			outputFile.write("\\chapter{"+ch.title+"}\n\r")
			outputFile.write("\n\r\n\r")

			rawText = '\n\r\n\r\n\r'.join(ch.blocks)

			try:
				rawText = rawText.decode('utf8')
			except:
				pass
			try:
				rawText = rawText.encode('ascii', 'ignore')
			except:
				pass

			i = 0
			for char in rawText:

				if char == "\b":
					outputFile.seek(-1, 1)
				elif char in latex_special_char_1 and rawText[i-1] != "\\":
					outputFile.write("\\"+char)
				elif char in latex_special_char_2 and not rawText[i+1] in latex_special_char_1:
					outputFile.write("-")
				else:
					outputFile.write(char)

				i += 1

			outputFile.write("\n\r\n\r")

		outputFile.write("\n\r\n\r")
		outputFile.write(closingTexLine)

		outputFile.close()

		print '\"'+sysPath+'output/'+outputFileName+'.tex\"'


	def assemble_chapters(self):
		novel = []

		for c in self.characters:
			novel.append(Chapter(c))

		for ch in novel:
			ch.title_maker()
			ch.chapter_builder()

		random.shuffle(novel) # MAYBE RETHINK THIS LATER

		self.chapters = novel


	def make_chars(self):
		# establish gender ratio
		charGenders = [ri(0,1) for _ in range(CC)]
		
		# initialize list of characters
		chars = []

		# add user defined characters
		for firstlast in args.charnames:
			fl_list = firstlast.split('_')  # Note that split is an underscore!
			chars.append(Character(fl_list[0], fl_list[1]))

		# add generated characters
		for b in charGenders:
			if b:
				chars.append(Character(rc(fFirstNames), rc(surnames)))
			else:
				chars.append(Character(rc(mFirstNames), rc(surnames)))

		# establish list of intro scenes
		introScenePaths = rs(characterTropeFiles, len(chars))

		# establish list of settings
		settings = rs(settingTropeFiles, len(chars)*TSV)

		# establish list of drug trips
		trips = rs(erowidExpPaths, len(chars)*DTV)

		# establish list of scp articles
		scps = rs(scpPaths, len(chars)*SCP)

		# establish list of gberg excerpts
		gbergs = rs(gPaths.values(), len(chars)*GGV)

		i = 0
		j = 0
		m = 0
		p = 0
		s = 0
		for c in chars:

			# make friends
			c.friends += rs(chars, ri(1,len(chars)-1))
			if c in c.friends:
				c.friends.remove(c)

			# add introduction description
			c.introDesc = self.personal_trope([c], introScenePaths[i])

			# add setting scenes
			for k in range(TSV):
				c.scenes.append(self.personal_trope([c]+c.friends, settings[j+k]))

			# add drug trip scenes
			for n in range(DTV):
				c.drugTrips.append(self.personal_trip([c]+c.friends, trips[m+n]))

			# add scp articles
			for q in range(SCP):
				c.scpReports.append(self.personal_scp([c]+c.friends, scps[p+q]))

			# add gberg excerpts
			for t in range(GGV):
				c.gbergExcerpts.append(self.personal_gberg([c]+c.friends, gbergs[s+t]))

			i += 1
			j += TSV
			m += DTV
			p += SCP
			s += GGV

		self.characters = chars


	def personal_trope(self, charList, filePath):
		text = get_file(filePath)
		# text = text.decode('utf8')
		# text = text.encode('ascii', 'ignore')

		if len(charList) == 1:
			characterTrope = True
		else:
			characterTrope = False

		try:

			pos = en.sentence.tag(text)
			wordtag = map(list, zip(*pos))
			words = wordtag[0]
			tags = wordtag[1]

			for i in range(len(words)):
				charRef = rc([rc(charList), charList[0]])
				if words[i].lower() == "character" and i > 0:
					words[i-1] = charRef.firstName
					words[i] = charRef.lastName

				elif tags[i] == "PRP":
					words[i] = charRef.firstName
				elif tags[i] == "PRP$":
					words[i] = charRef.firstName+"\'s"
				elif tags[i] in ["VBD", "VBG", "VBN", "VBZ"]:
					try:
						words[i] = en.verb.past(words[i], person=3, negate=False)
					except:
						pass

				if characterTrope:

					if words[i] == "have":
						words[i] = "has"
					elif words[i] == "are":
						words[i] = "is"

			punc = [".", ",", ";", ":", "!", "?"]

			for i in range(len(words)):
				if words[i] in punc:
					words[i] = '\b'+words[i]

			final_text = " ".join(words)

			if characterTrope:

				mainCharRef = rc(charList)

				index = string.find(final_text, mainCharRef.firstName)

				if final_text[index+len(mainCharRef.firstName)+1:index+len(mainCharRef.firstName)+1+len(mainCharRef.lastName)] == mainCharRef.lastName:
					final_text = final_text[index:]
				else:
					final_text = mainCharRef.firstName+" "+mainCharRef.lastName+final_text[index+len(mainCharRef.firstName):]

			replacements = {"trope": "clue", "Trope": "clue", "TROPE": "CLUE"}

			for x, y in replacements.iteritems():
				final_text = string.replace(final_text, x, y)

		except:
			
			final_text = ""


		return final_text


	def personal_trip(self, charList, tripPath):

		fileText = get_file(tripPath)
		splitText = fileText.split('\\vspace{2mm}')
		endOfText = splitText[-1]
		text = endOfText[:len(endOfText)-15]

		try:

			pos = en.sentence.tag(text)
			wordtag = map(list, zip(*pos))
			words = wordtag[0]
			tags = wordtag[1]

			for i in range(len(words)):

				charRef = rc([rc(charList), charList[0]])

				if tags[i] == "PRP":
					words[i] = charRef.firstName
				elif tags[i] == "PRP$":
					words[i] = charRef.firstName+"\'s"
				elif tags[i] in ["VBD", "VBG", "VBN", "VBZ"]:
					try:
						words[i] = en.verb.past(words[i], person=3, negate=False)
					except:
						pass
				else:
					pass

			punc = [".", ",", ";", ":", "!", "?"]

			for i in range(len(words)):
				if words[i] in punc:
					words[i] = '\b'+words[i]

			final_text = " ".join(words)

			final_text = string.replace(final_text, "\\end{itemize}", "")
			final_text = string.replace(final_text, "\\begin{itemize}", "")
			final_text = string.replace(final_text, "\\end{center}", "")
			final_text = string.replace(final_text, "\\begin{center}", "")
			final_text = string.replace(final_text, "\\ldots", " . . . ")
			final_text = string.replace(final_text, "\\egroup", "")
			final_text = string.replace(final_text, "EROWID", "GOVERNMENT")
			final_text = string.replace(final_text, "erowid", "government")
			final_text = string.replace(final_text, "Erowid", "Government")

		except:

			final_text = ""

		return final_text


	def personal_scp(self, charList, scpPath):

		text = get_file(scpPath)

		text = string.replace(text, "SCP", charList[0].lastName)
		text = string.replace(text, "Foundation", charList[0].lastName)

		try:

			pos = en.sentence.tag(text)
			wordtag = map(list, zip(*pos))
			words = wordtag[0]
			tags = wordtag[1]

			for i in range(len(words)):

				charRef = rc(charList)

				if tags[i] == "PRP":
					words[i] = charRef.firstName
				elif tags[i] == "PRP$":
					words[i] = charRef.firstName+"\'s"
				elif tags[i] in ["VBD", "VBG", "VBN", "VBZ"]:
					try:
						words[i] = en.verb.past(words[i], person=3, negate=False)
					except:
						pass
				else:
					pass

			punc = [".", ",", ";", ":", "!", "?"]

			for i in range(len(words)):
				if words[i] in punc:
					words[i] = '\b'+words[i]

			final_text = " ".join(words)

		except:

			final_text = ""

		return final_text



	def personal_gberg(self, charList, gPath):

		full_text = ""
		while full_text == "":
			try:
				full_text = get_zip(gPath)
			except:
				full_text = ""
				gPath = rc(gPaths.values())

		endPart = full_text.split("*** START OF THIS PROJECT GUTENBERG EBOOK ")[-1]
		theMeat = endPart.split("*** END OF THIS PROJECT GUTENBERG EBOOK")[0]

		theMeat = string.replace(theMeat, "\r\n", " ")

		
		if len(theMeat) < DEN+5:
			text = theMeat
		else:
			startLoc = int(len(theMeat)/2.0 - DEN/2.0)
			text = theMeat[startLoc:startLoc+DEN]

		spLoc = text.find(" ")
		text = text[spLoc+1:]

		try:
			pos = en.sentence.tag(text)
			wordtag = map(list, zip(*pos))
			words = wordtag[0]
			tags = wordtag[1]

			for i in range(len(words)):

				charRef = rc([rc(charList), charList[0]])

				if tags[i] == "PRP":
					words[i] = charRef.firstName
				elif tags[i] == "PRP$":
					words[i] = charRef.firstName+"\'s"
				elif tags[i] in ["VBD", "VBG", "VBN", "VBZ"]:
					try:
						words[i] = en.verb.past(words[i], person=3, negate=False)
					except:
						pass
				else:
					pass

			punc = [".", ",", ";", ":", "!", "?"]

			for i in range(len(words)):
				if words[i] in punc:
					words[i] = '\b'+words[i]

			final_text = " ".join(words)

		except:
			final_text = ""


		return final_text


	def print_chars(self):

		c = self.make_chars()
		for character in c:
			print 'INTRO DESC'
			print '\n\n'
			print character.introDesc
			print '\n\n'
			print 'SCENES'
			print '\n\n'
			for s in character.scenes:
				print s
			print '\n\n'
			print 'DRUG TRIPS'
			print '\n\n'
			for t in character.drugTrips:
				print t
			print '\n\n'
			print 'SCP REPORTS'
			print '\n\n'
			for p in character.scpReports:
				print p
			print '\n\n'
			print 'GBERG EXCERPTS'
			print '\n\n'
			for q in character.gbergExcerpts:
				print q
			print '\n\n'




foobar = Novel()
foobar.generate()

The program’s argument values, which I handle with the Python argparse library, are designed to be supplied by the GUI. However, they can also be entered manually in the terminal.
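For example, a manual run might look like the following (the title, names, and numeric values here are hypothetical placeholders):

python ficgen.py --title "Example Novel" --charnames John_Smith Mary_Jones \
    --length 500 --charcount 300 --genre sci-fi horror --conflict tech \
    --passion 400 --verbosity 500 --realism 300 --density 600 \
    --accessibility 500 --depravity 200 --linearity 700 --narrator third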

Typing python ficgen.py -h in the terminal will yield the following help text:

usage: ficgen.py [-h] [--charnames [CHARNAMES [CHARNAMES ...]]]
                 [--title TITLE] [--length LENGTH] [--charcount CHARCOUNT]
                 [--genre [{literary,sci-fi,fantasy,history,romance,thriller,mystery,crime,pulp,horror,beat,fan,western,action,war,family,humor,sport,speculative} [{literary,sci-fi,fantasy,history,romance,thriller,mystery,crime,pulp,horror,beat,fan,western,action,war,family,humor,sport,speculative} ...]]]
                 [--conflict [{nature,man,god,society,self,fate,tech,no god,reality,author} [{nature,man,god,society,self,fate,tech,no god,reality,author} ...]]]
                 [--passion PASSION] [--verbosity VERBOSITY]
                 [--realism REALISM] [--density DENSITY]
                 [--accessibility ACCESSIBILITY] [--depravity DEPRAVITY]
                 [--linearity LINEARITY]
                 [--narrator [{first,1st,1,third,3rd,3,alt,alternating,subjective,objective,sub,obj,omniscient,omn,limited,lim} [{first,1st,1,third,3rd,3,alt,alternating,subjective,objective,sub,obj,omniscient,omn,limited,lim} ...]]]

Story Parameters

optional arguments:
  -h, --help            show this help message and exit
  --charnames [CHARNAMES [CHARNAMES ...]]
                        Character Names
  --title TITLE         Story Title
  --length LENGTH       Story Length (0-999)
  --charcount CHARCOUNT
                        Character Count (0-999)
  --genre [{literary,sci-fi,fantasy,history,romance,thriller,mystery,crime,pulp,horror,beat,fan,western,action,war,family,humor,sport,speculative} [{literary,sci-fi,fantasy,history,romance,thriller,mystery,crime,pulp,horror,beat,fan,western,action,war,family,humor,sport,speculative} ...]]
                        Genre
  --conflict [{nature,man,god,society,self,fate,tech,no god,reality,author} [{nature,man,god,society,self,fate,tech,no god,reality,author} ...]]
                        Conflict
  --passion PASSION     Passion (0-999)
  --verbosity VERBOSITY
                        Verbosity (0-999)
  --realism REALISM     Realism (0-999)
  --density DENSITY     Density (0-999)
  --accessibility ACCESSIBILITY
                        Accessibility (0-999)
  --depravity DEPRAVITY
                        Depravity (0-999)
  --linearity LINEARITY
                        Linearity (0-999)
  --narrator [{first,1st,1,third,3rd,3,alt,alternating,subjective,objective,sub,obj,omniscient,omn,limited,lim} [{first,1st,1,third,3rd,3,alt,alternating,subjective,objective,sub,obj,omniscient,omn,limited,lim} ...]]
                        Narrative PoV

Finally, here are some sample novels generated by the new code (titles chosen by volunteers):

Fiction Generator
http://www.thehypertext.com/2014/11/11/fiction-generator/
Tue, 11 Nov 2014 03:35:39 +0000

For my Introduction to Computational Media final project, I will be creating a fiction generator using text files scraped from tvtropes.org along with natural language processing in Python. All the code I have written so far is available on GitHub, and will remain available in that repository as it evolves.

I am interested in natural language processing and natural language generation due to my background as a writer. After I learned Python over the summer, my first major project was a poetry generator. Since then, I have aspired to create a fiction generator—along the lines of The Great Automatic Grammatizator, a fictional machine that appears in a short story by Roald Dahl—but have lacked the skills or framework to pursue such a project.

After spending a significant amount of time on tvtropes.org, a self-described wiki of “the tricks of the trade for writing fiction,” I believe I have found the raw material I need to create such a project. The audience for this project will be fiction writers with writer’s block who need a raw, original framework for a new story.


[Comic via Strewth! by Josh Way]

The tvtropes wiki contains an extraordinary variety of fiction “tropes”—recurring motifs, themes, or elements that are present in nearly all fiction. I found an effective way to scrape the raw text of trope articles on the site using the Beautiful Soup library with Python. The following code is the function I am using to scrape each article:

import requests
from bs4 import BeautifulSoup

def scrape(url):
	r = requests.get(url)
	doc = r.text
	soup = BeautifulSoup(doc)
	wikitext = soup.find(id="wikitext")
	approvedTags = ["em", "strong", "a", "ul", "ol", "li"]
	scraped = []
	for c in wikitext.children:
		try:
			tagClass = c['class']
		except TypeError:
			tagClass = False
		except KeyError:
			tagClass = False
		except AttributeError:
			tagClass = False

		try:
			childTag = c.name
		except TypeError:
			childTag = False
		except KeyError:
			childTag = False
		except AttributeError:
			childTag = False

		if childTag and childTag not in approvedTags:
			pass
		elif childTag and childTag in approvedTags:
			if tagClass:
				if tagClass[0] == "twikilink":
					scraped.append(c.string.lower())
				elif tagClass[0] == "urllink":
					scraped.append(c.contents[0])
			elif childTag == "ul" or childTag == "ol":
				for d in c.children:
					scraped.append(d.contents[0])
			else:
				scraped.append(c.string)

		else:
			scraped.append(c)

	bad_values = ['\n', None]
	scraped = [s for s in scraped if not s in bad_values]

	article = "".join(scraped)
	article = article.split('\n')
	article = '\n\n'.join(article)

	return article

I decided to start with characters because, after all, every story needs characters. And for the characters, I started with their names, because every character needs a name—ideally, a first name and a last name.

I used US Social Security Administration records for first names and US Census data on surnames to create a name generator. I am using a data set containing all of the surnames that appeared at least 100 times in the 2000 US Census (151,671 surnames), and all of the first names that were used for at least 5 individuals in a given year for all years between 1880-2013 (1,792,091 first names). Additionally, I have separated the first names by gender.
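The generator itself is simple: pick a gender at random, then pair a random first name with a random surname. A minimal sketch, using a few placeholder names in place of the full SSA and Census lists (which the ficgen.py code in Part III above imports as fFirstNames, mFirstNames, and surnames):

import random

# Placeholder lists -- the real generator loads the full SSA/Census data
fFirstNames = ["Lanya", "Carolee", "Annemarie"]
mFirstNames = ["Tolbert", "Caden", "Miles"]
surnames = ["Routten", "Loranger", "Chmara"]

def random_name():
    # Pick a gender at random, then pair a first name with a surname
    firsts = random.choice([fFirstNames, mFirstNames])
    return random.choice(firsts) + " " + random.choice(surnames)

print "\n".join(random_name() for _ in range(25))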

Here is a list of 25 random names generated from the aforementioned data:

Tolbert Routten
Jakaylah Gaugler
Lanya Kazel
Apolonio Buddemeyer
Josearmando Viloa
Shakur Litwinski
Lashaunta Cariello
Carolee Chatt
Tya Shuda
Estus Stubben
Caden Loranger
Aneatra Grueneich
Aleks Ronquille
Jeiel Seller
Balin Fosnow
Keymari Ketrow
Annemarie Neukam
Tobe Peaks
Lois Reebel
Adaly Detling
Marco Paider
Coolidge Troughton
Miles Chmara
Lucky Dehen
Marcellus Mussenden

Due to the nature of the data (more uncommon names, fewer common names), the names generated tend to be rather unusual. However, for the purposes of a fiction generator, this property may be desirable.

The next step I took was to pair these fictitious characters with character tropes. I scraped 2,219 articles on tvtropes.org, each on a specific trope that would apply to a particular character. I then used the NodeBox English Linguistics library to replace personal pronouns (e.g. he, she, it, they) with first names and the phrase “the character” or “this character” (or any word followed by “character”) with first name + last name. I also used NodeBox Linguistics to convert all the verbs in each article to past tense, and to replace every instance of the word trope with “clue” (in order to avoid the appearance that these are articles about tropes rather than particular characters). Finally, I used a Python script to generate 3-9 character names, pair each character with a trope, and add each character’s name into the converted trope text.
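Here is a condensed sketch of that substitution step, using the same NodeBox Linguistics calls (en.sentence.tag and en.verb.past) that appear in the Part III code above; the one-line trope text in the example is made up.

import en  # NodeBox English Linguistics library

def personalize(trope_text, first_name, last_name):
    # Tag each word, swap pronouns for the character's first name, attach the
    # full name after the word "character", and push verbs into the past tense
    words, tags = map(list, zip(*en.sentence.tag(trope_text)))
    for i in range(len(words)):
        if words[i].lower() == "character" and i > 0:
            words[i-1], words[i] = first_name, last_name
        elif tags[i] == "PRP":
            words[i] = first_name
        elif tags[i] == "PRP$":
            words[i] = first_name + "'s"
        elif tags[i] in ["VBD", "VBG", "VBN", "VBZ"]:
            try:
                words[i] = en.verb.past(words[i], person=3, negate=False)
            except:
                pass
    return " ".join(words).replace("trope", "clue")

print personalize("This character is always late, and she knows it.",
                  "Rashelle", "Roholt")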

Here is some sample output:

Rashelle Roholt is cute, sweet, innocent and extremely huggable. Incidentally Rashelle is also varied shades of violent, unstable, and downright insane. Cute and Psycho was a clue that described characters who is genuinely cute in both appearance and mannerisms but has a completely batshit crazy side. Sometimes there is distinctly different sides which may be showed equally, but other times Rashelle is mostly one or the other, the killer rabbit displayed moments of sweetness and relative-sanity or the cutie showed hints of a dark psychotic nature. Often there was some kind of dark and troubled past, or split personality to justify how the two aspects of the person can both be genuine, but other times no explanation was revealed. The primary difference between this clue and the yandere was that the Cute and Rashelle Roholt was not drove by an obsessive needed to possess a friend or lover. Rashelle’s motivation, if Rashelle has one, can vary immensely. Rashelle also don’t necessarily has to be provoked to enter Rashelle’s Psycho-state, but can switch for reasons observers would be hard-pressed to determine. Cute and Psycho was a sister clue to killer rabbit, yandere and enfant terrible, and closely related to psychopathic manchild and beware the nice ones, though while there was frequent overlap between these clues was one doesn’t necessarily mean Rashelle Roholt qualified for another. If the “cute” part was real, then Rashelle Roholt was the fake cutie instead. Characters of this type tend to be female, though male examples do exist. the ophelia was someone whose psychosis was part of the cute picture, rather than a contrasted to Rashelle. In some anime fandoms Rashelle Roholt was referred to as a yangire, an informal fanspeak term which was a portmanteau of yandere and kireru ( a word meant “to snap or lose one’s temper”). It’s also used to refer to ax-crazy versions of the fake cutie. Not to be confused with fangire, which was a species of monster vampire.

Lissa Kanaday’s best friend and partner pled with Lissa to stop, Lissa won’t bring “her” back, and Lissa just put Lissa in danger. Yet still the hero persiste. A few acts later, he’s got beat on by the giant mook, Lissa looked like it’s all went to fade to black when… Lissa’s partner showed up, gun in hand! Wait, why was Lissa pointed the tranquilizer gun at hi— When Lissa woke up, the friend was terribly distraught. Says Lissa tried to get Lissa to stop, that Lissa warned Lissa what would happen. Saving Lissa was out of Lissa’s hands now, it’s all on Lissa’s head. Wait, what?The best friend had was in league with ( or was ) the big bad behind the whole plot. However, Lissa genuinely like the hero and would rather Lissa live a long and happy life. Lissa might try a circled monologue to bring Lissa onboard, but chances is Lissa already knew the hero’s moral code was such that he’d just be wasted both Lissa’s time by did Lissa. Still, Lissa just might try, for old time’s sake. Compounding matters, he’s usually a straw traitor to some horrible ideal, was either directly or indirectly responsible for much of the hero’s recent suffered, and/or was covered Lissa up. Compare evil former friend. Contrast friendly enemy and lived with the villain. not to be confused with another type of big bad friend. If the hero was was chummy with the big bad, that’s go karting with bowser. evil all along was for anyone who turned out to be evil, not just friends. Related to Lissa was held Lissa back. This was a Spoiler Clue, so beware.

Jaclene Desharnais. However, while Jaclene may first appear to be the hero’s equal or even superior in combat, subsequent battles will establish the Brute as was the goliath to the hero’s david. Jaclene was usually a bully, incapable of empathy, and, more often than not, also very stupid, though there is exceptions. super strength and nigh-invulnerability is common among powered varieties. Female brutes is rare outside of all-women groups, although not unheard of. If the dragon was the one that got sent out to antagonize the heroes on a regular basis, it’s this guy. Jaclene was usually the lowest-ranking member of the inner circle’s hierarchy, and as such generally got little respect from Jaclene, though Jaclene may exercise authority over the mooks. Jaclene was often the first opponent the heroes face after Jaclene’s successes require that someone more capable be sent to take care of Jaclene. Jaclene tended to be either blindly loyal or just too thickheaded and incompetent to ever stand a chance of overthrew the leaders. Despite Jaclene’s role as the primary brute force of the evil army, Jaclene was rarely ever as strong as the dragon. One thing to keep in mind with Jaclene Desharnais type was that it’s the role and rank as opposed to just the personality that defined Jaclene. Pete from the walt disney canon was a classic example of the Brute personality type: a big dumb bully that just loved to throw Jaclene’s own weight around. However, he’s generally used as a big bad ( or, in works like Kingdom Hearts II, the dragon). As such, in most appearances, Jaclene was not technically a Brute. Jaclene Desharnais type often showed up as part of the five-bad band dynamic ( in fact, Jaclene’s presence was often what defined it). Jaclene can also show up as a member of the quirky miniboss squad, but ( like all the other members ) will lose most of Jaclene’s threat level by virtue of Jaclene’s quirkiness. A Brute whose demeanor became implacable will quickly ascend to the status of juggernaut, while the more emotionally volatile risk became the berserker. Be wary too, recruiters, of a Brute who pets the dog, lest Jaclene prove to be a closet gentle giant and may very well eventually heel-face turn on Jaclene. Considering Jaclene’s aforementioned general role as the mean, stupid, and disrespected meat shield for Jaclene’s team, the Brute tended to be especially susceptible to humble pie and the humiliation conga. Compare: smash mook.

Fidelia Nollet must sacrifice something else… Fidelia’s good name, Fidelia’s reputation and Fidelia’s integrity. Fidelia Nollet attempted a Zero Approval Gambit will knowingly risk – or deliberately seek – a 0% approval rated and paint Fidelia in a bad light in order to achieve some greater good. This might involve falsely confessed to a crime Fidelia did commit, or Fidelia might involve Fidelia was an enormous jerkass contrary to Fidelia’s usual nature. The net result was that Fidelia will be hated, hunted or disgraced for all time. In short, Fidelia willingly became a hero with bad publicity. Note that this was a short-term trick. A Zero Approval Gambit was usually permanent or took a huge amount of work to undo. This was an inverse of villain with good publicity; compare good was not nice, necessarily evil, noble demon, what the hell, hero?, break Fidelia’s heart to save Fidelia. Can result in a hero with an f in good. Sometimes did to facilitate a genghis gambit. Often a job hazard of the agent provocateur. Most of the time Fidelia involved became a silent scapegoat.

Luciann Aspengren see a demon, god, or someone or something else otherworldly you’d expect Luciann to be easy to tell if Luciann was a man or a woman. but when he/she/they had qualities of both? or lacked qualities of either? or can change from one to another with neither was the confirmed default? Luciann is otherworldly and sexually ambiguous.There might be reasons why or how the demon, spirit, etc., was a hermaphrodite, can change sex, etc., be Luciann either magical corruption, or the creators did want to give Luciann Aspengren a definitive sex, so Luciann made Luciann ambiguous. Sometimes it’s just a striking detail that reminded the audience that this individual was a mundane creature, and that the shape they’re in now might just be a form Luciann is comfortable with. See also no biological sex, hermaphrodite, voluntary shapeshifting and ambiguous gender. May cross over with shapeshifters do Luciann for a change. In Ashura, the god of war from Apos from The goddess Kanzeon Bosatsu from Aleister Crowley of Envy the shapeshifting homunculus from The angel that appeared in the anime of Desire from In Gozer the Gozerian, the Sumerian god from In Larry Niven’s In In In In In Mallory from The Metrons in the The angels in Some of the demons in the Although the Judeo-Christian God was usually referred to by male personal pronouns, this was more convention than canon. Several Biblical verses show God identified with roles which western culture would generally consider feminine. Most languages ( English included ) do not has any gender-neutral personal pronouns and God was referred to as “He” because most societies in which the Bible was wrote was patriarchal. Inari Okami, the Shinto God of fertility, rice, agriculture, foxes, industry and worldly success, was generally considered to be neither male nor female, though like YHWH, masculine or feminine aspects is often emphasized depended on the context and the region. This was true for many other Kami as well. Angels and demons in Christianity is sometimes considered to be sexless because Luciann don’t reproduce in Heaven or Hell and so would not needed to be male or female. The Bible always referred to Luciann as male, and “the sons of god,” who is generally but not always thought to be angels, sired the The Egyptian god Hapi was generally considered male ( included had one or more wives), but was also pictured with breasts to represent Luciann’s ability to nurture and feed people ( he’s a god of the Nile). Not surprisingly, Luna, the main moon-deity of Accoring to The Despair Embodied in In While all of the Daedra princes ( a loose analog of Demon Lords And Archdevils, only with The Cloud of Darkness from Minogame from The Soulthirster, Uryuoms in The Egyptian Pharaoh Akhenaten IV, most famous for tried to change the nation to

Alisyn Hafner good looked, but Alisyn tend to be described as better looked than the vast majority of humans could ever hope to be. When described Alisyn’s beauty, authors tend to use terms like “inhuman”, “otherworldly” and “ethereal”. Depending on the author, such a species may inspire either simple chaste appreciation, or immediate and profound arousal. In extreme cases, Alisyn’s looked is so incredible as to act as almost a form of glamour, instantly become the center of attention ( and desire ) everywhere Alisyn go. While this concept can be found in all forms of media, Alisyn usually this works best in a non-visual medium. With a novel, the reader can imagine Alisyn’s own ideal of beauty. In a live action work, Alisyn may become a case of a subjective judgement of informed attractiveness. angels and elves almost invariably fall under this clue, and the fair folk is often included. physical gods can easily do so. In recent years, Vampires has also increasingly was portrayed as had inhuman hotness and allure, in contrast to older versions where Alisyn looked more like walked corpses. And Alisyn went without said for succubi. Not incubi, though, as they’re usually depicted as a kind of rapist gargoyle-creature. Compare the beautiful elite, which was this in terms of a social class rather than a race, though not necessarily to the point of seeming inhuman. mary sued frequently belong to one of these. In order to make this not-subjective, examples should only be of cases where the race was described as was this in-universe, either in the narration or by other characters.

Shermaine Siber’s arm was chained to the table and a Rabid Cop was sprayed spittle into Shermaine’s face in a way that convinced Shermaine that Shermaine had completely lost Shermaine’s mind.All Shermaine wanted Shermaine to do was admit that everything hitler did was Shermaine’s idea. Sounds good to Shermaine. What do Shermaine has to sign to get away from this maniac? The Rabid Cop might be casually dirty, or overbearingly self-righteous, or anywhere in between, but Shermaine all has two things in common: a reckless disregard for civil rights, and an unwavering conviction that any person they’ve identified as “the perp” really was a perp ( regardless of any contradicted evidence ) and deserved to suffer. In a good cop/bad cop routine, Shermaine usually take the “Bad Cop” ball and run clear out of the stadium with Shermaine. Likely to enjoy used torture for fun and information. Compare/contrast the ( presumed ) sympathetic cowboy cop.

Isaiah Oguinn’s doctorate in a scientific field that a peon like Isaiah can’t even pronounce. Isaiah always wore a suit… Until the eventual shirtless scene during Isaiah’s ( strenuous ) exercise routine, that was. Isaiah had a lovely smile.But inside, he’s an ugly, writhed mass of self-hatred and possibly parental issues. Isaiah came in two flavors: The one who happened to be great at everything, and was loved and respected by the people around Isaiah – but he’s used Isaiah’s charm and talents The one who Expect Isaiah to has at least one bizarre trait or ability that should not be overlooked, as well as a completely unhealthy attitude about love, life, and humanity in general. Isaiah most likely doesn’t has anyone that loved or respects Isaiah for what Isaiah really was. this may be justified.In the most cynical works on the slid scale, he’ll be a serial killer, or at least a future one. Isaiah Oguinn was usually male, but not always. Also, he’s not always evil – maybe just a well-hidden jerkass. The chief difference between the Broken Ace and the usually female stepford smiler was that the Stepford Smiler wanted to appear normal at all costs, often to the point of hurt Isaiah emotionally ( or because she’s sociopathic). This guy had the same setup, but was more talented and wanted to be the best, loved by all, and accepted. The debilitating personal issues which he’s hid is only got worse because of was repressed and the stress of Isaiah’s efforts to excel, and these sorts of characters is prime jerkass woobie material. See also the ace, who’s still better than Isaiah at everything but was so prone to mental disorders or emotional problems, and the byronic hero, who’s just as awe-inspiring and brooded but lacked the charming, polished façade and was rarely presented as pathetic. For a plot wherein The Ace was revealed to has deep personal problems, see broke pedestal. In case Isaiah haven’t noticed, this had nothing to do with asexuality. In real life, this was rather common. Real people has flaws no matter how perfect Isaiah seem to be at first glance.

I look forward to continuing work on this project, as I am very excited by the possibilities of longer output built from combinations of tropes. My intent is to produce output that can serve as my entry for this year’s NaNoGenMo (National Novel Generation Month), a natural language generation answer to NaNoWriMo (National Novel Writing Month), which takes place throughout the month of November.

Edit: Click here for Part II.
