Spring 2015 – THE HYPERTEXT
http://www.thehypertext.com

Traveler’s Lamp, Part II
Fri, 08 May 2015
http://www.thehypertext.com/2015/05/08/travelers-lamp-part-ii/

Click Here for Part I



Last week, Joanna Wrzaszczyk and I completed the first version of our dynamic light sculpture, inspired by Italo Calvino’s Invisible Cities and the Traveling Salesman Problem. We have decided to call it the Traveler’s Lamp.

Here is the midterm presentation that Joanna and I delivered in March:

[Embedded slides: midterm presentation]

We received a lot of feedback after that presentation, which resulted in a number of revisions to the lamp’s overall design. Here are some sketches I made during that process:

[Design sketches]

Since that presentation, Joanna and I successfully designed and printed ten city-nodes for the lamp. Here is the deck from our final presentation, which contains renderings of all the city-nodes:

[Embedded slides: final presentation, with renderings of all ten city-nodes]

We built the structure from laser-cut acrylic, fishing line, and 38-gauge wire. The top and base plates of the acrylic scaffolding are laser-etched with the first and last pages, respectively, of Invisible Cities. We fabricated the wood base from 3/4″ plywood on ITP’s CNC router.

Here are some photos of the assembled lamp:

[Photo gallery: the assembled lamp]

Here’s a sketch, by Joanna, of the x-y-z coordinate plot that we fed into the computer program:

[Sketch: x-y-z coordinates of the city-nodes]

And finally, here’s some of the Python code that’s running on the Raspberry Pi:

import random
from time import sleep

import RPi.GPIO as GPIO

# pins (a list of GPIO pin numbers, one per city-node LED), locDict (a dict
# mapping each pin to its city-node's x-y-z coordinates), and distance() are
# defined elsewhere in the script

def tsp():
    # Greedy nearest-neighbor walk: light up each city as it is visited
    startingPin = random.choice(pins)
    pins.remove(startingPin)
    GPIO.output(startingPin, True)
    sleep(0.5)
    while pins:
        distances = []
        for p in pins:
            dist = distance(locDict[startingPin], locDict[p])
            distances.append((dist, p))
            GPIO.output(p, True)
            sleep(0.5)
            GPIO.output(p, False)
        distances.sort(key=lambda x: x[0])
        nextPin = distances[0][1]
        GPIO.output(nextPin, True)
        sleep(0.5)
        pins.remove(nextPin)
        startingPin = nextPin
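
The function above leans on a few globals defined earlier in the script: pins, locDict, and distance(). Here’s a minimal sketch of what those definitions might look like, assuming straight-line distance over the x-y-z coordinates from Joanna’s plot; the pin numbers and coordinates are hypothetical stand-ins, with three of the ten city-nodes shown:

import math

# Hypothetical BCM pin numbers, one per city-node LED
pins = [4, 17, 27]

# Hypothetical x-y-z coordinates (cm) for each city-node, keyed by pin
locDict = {
    4: (0.0, 0.0, 10.0),
    17: (3.5, 8.0, 22.5),
    27: (12.0, 4.0, 15.5),
}

def distance(a, b):
    # Euclidean distance between two (x, y, z) points
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))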

word.camera, Part II
Fri, 08 May 2015
http://www.thehypertext.com/2015/05/08/word-camera-part-ii/

Click Here for Part I




For my final projects in Conversation and Computation with Lauren McCarthy and This Is The Remix with Roopa Vasudevan, I iterated on my word.camera project. I added a few new features to the web application, including a private API that I used to enable the creation of a physical version of word.camera inside a Mamiya C33 TLR.

The current version of the code remains open source and available on GitHub, and the project continues to receive positive mentions in the press.

On April 19, I announced two new features for word.camera via the TinyLetter email newsletter I advertised on the site.

Hello,

Thank you for subscribing to this newsletter, wherein I will provide occasional updates regarding my project, word.camera.

I wanted to let you know about two new features I added to the site in the past week:

word.camera/albums You can now generate ebooks (DRM-free ePub format) from sets of lexographs.

word.camera/postcards You can support word.camera by sending a lexograph as a postcard, anywhere in the world for $5. I am currently a graduate student, and proceeds will help cover the cost of maintaining this web application as a free, open source project.

Also:

word.camera/a/XwP59n1zR A lexograph album containing some of the best results I’ve gotten so far with the camera on my phone.

1, 2, 3 A few random lexographs I did not make that were popular on social media.

Best,

Ross Goodwin
rossgoodwin.com
word.camera

Next, I set to work on the physical version. I decided to use a technique I developed on another project earlier in the semester to create word.camera epitaphs composed of highly relevant paragraphs from novels. To ensure fair use of copyrighted materials, I determined that all of this additional data would be processed locally on the physical camera.

I built the collection from a combination of acknowledged classics and novels I personally enjoyed, and I included only paragraphs over 99 characters in length. In total, the collection contains 7,113,809 words from 48 books.
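
The corpus-construction code isn’t included here, but the filtering step is simple to sketch. Here’s a minimal version, assuming the novels live as plain-text files in a books/ directory (the layout and field names are hypothetical); the real lit.json also stores a title, an author, and AlchemyAPI concept tags for each paragraph, which this sketch omits:

import os
import json

corpus = []
for fn in os.listdir('books'):  # hypothetical directory of plain-text novels
    with open(os.path.join('books', fn)) as f:
        text = f.read()
    # Split on blank lines and keep only paragraphs over 99 characters
    for graf in text.split('\n\n'):
        graf = ' '.join(graf.split())
        if len(graf) > 99:
            corpus.append({'text': graf, 'source': fn})

with open('lit.json', 'w') as f:
    json.dump(corpus, f)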

Below is an infographic showing all the books used in my corpus and their relative included word counts.

[Infographic: the 48 books in the corpus, scaled by included word count]

To build the physical version of word.camera, I purchased the following materials:

  • Raspberry Pi 2 board
  • Raspberry Pi camera module
  • Two (2) 10,000 mAh batteries
  • Thermal receipt printer
  • 40 female-to-male jumper wires
  • Three (3) extra-small prototyping perf boards
  • LED button

After some tinkering, I was able to put together the arrangement pictured below, which could print raw word.camera output on the receipt printer.

[Photo: the Raspberry Pi, camera module, batteries, and thermal printer wired together]

I thought for a long time about the type of case I wanted to put the camera in. My original idea was a photobooth, but I felt that a portable camera—along the lines of Matt Richardson’s Descriptive Camera—might take better advantage of the Raspberry Pi’s small footprint.

Rather than fabricating my own case, I determined that an antique film camera might provide a familiar exterior to draw in people unfamiliar with the project. (And I was creating it for a remix-themed class, after all.) So I purchased a lot of three broken TLR film cameras on eBay; the Mamiya C33 was in the best condition of the three, so I gutted it. (N.B. I’m an antique camera enthusiast—I own a working version of the C33’s predecessor, the C2—and, despite its broken condition, cutting open the bellows of the C33 felt sacrilegious.)

I laser cut some clear acrylic I had left over from the traveler’s lamp project to fill the lens holes and mount the LED button on the back of the camera. Here are some photos of the finished product:

[Photo gallery: the finished camera]

And here is the code that’s running on the Raspberry Pi (the crux of the matching algorithm is in the findIntersection() function):

import uuid
import picamera
import RPi.GPIO as GPIO
import requests
from time import sleep
import os
import json
from Adafruit_Thermal import *
from alchemykey import apikey
import time

# SHUTTER COUNT / startNo GLOBAL
startNo = 0

# Init Printer
printer = Adafruit_Thermal("/dev/ttyAMA0", 19200, timeout=5)
printer.setSize('S')
printer.justify('L')
printer.setLineHeight(36)

# Init Camera
camera = picamera.PiCamera()

# Init GPIO
GPIO.setmode(GPIO.BCM)

# Working Dir
cwd = '/home/pi/tlr'

# Init Button Pin
GPIO.setup(21, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Init LED Pin
GPIO.setup(20, GPIO.OUT)

# Init Flash Pin
GPIO.setup(16, GPIO.OUT)

# LED and Flash Off
GPIO.output(20, False)
GPIO.output(16, False)

# Load lit list
lit = json.load( open(cwd+'/lit.json', 'r') )


def blink(n):
    for _ in range(n):
        GPIO.output(20, True)
        sleep(0.2)
        GPIO.output(20, False)
        sleep(0.2)

def takePhoto():
    fn = str(int(time.time()))+'.jpg' # TODO: Change to timestamp hash
    fp = cwd+'/img/'+fn
    GPIO.output(16, True)
    camera.capture(fp)
    GPIO.output(16, False)
    return fp

def getText(imgPath):
    endPt = 'https://word.camera/img'
    payload = {'Script': 'Yes'}
    files = {'file': open(imgPath, 'rb')}
    response = requests.post(endPt, data=payload, files=files)
    return response.text

def alchemy(text):
    endpt = "http://access.alchemyapi.com/calls/text/TextGetRankedConcepts"
    payload = {"apikey": apikey,
               "text": text,
               "outputMode": "json",
               "showSourceText": 0,
               "knowledgeGraph": 1,
               "maxRetrieve": 500}
    headers = {'content-type': 'application/x-www-form-urlencoded'}
    r = requests.post(endpt, data=payload, headers=headers)
    return r.json()

def findIntersection(testDict):
    returnText = ""
    returnTitle = ""
    returnAuthor = ""
    recordInter = set(testDict.keys())
    relRecord = 0.0
    for doc in lit:
        inter = set(doc['concepts'].keys()) & set(testDict.keys())
        if inter:
            relSum = sum([doc['concepts'][tag]+testDict[tag] for tag in inter])
            if relSum > relRecord: 
                relRecord = relSum
                recordInter = inter
                returnText = doc['text']
                returnTitle = doc['title']
                returnAuthor = doc['author']
    doc = {
        'text': returnText,
        'title': returnTitle,
        'author': returnAuthor,
        'inter': recordInter,
        'record': relRecord
    }
    return doc

def puncReplace(text):
    replaceDict = {
        '—': '---',
        '–': '--',
        '‘': "\'",
        '’': "\'",
        '“': '\"',
        '”': '\"',
        '´': "\'",
        'ë': 'e',
        'ñ': 'n'
    }

    for key in replaceDict:
        text = text.replace(key, replaceDict[key])

    return text


blink(5)
while 1:
    input_state = GPIO.input(21)
    if not input_state:
        GPIO.output(20, True)
        try:
            # Get Word.Camera Output
            print "GETTING TEXT FROM WORD.CAMERA..."
            wcText = getText(takePhoto())
            blink(3)
            GPIO.output(20, True)
            print "...GOT TEXT"

            # Print
            # print "PRINTING PRIMARY"
            # startNo += 1
            # printer.println("No. %i\n\n\n%s" % (startNo, wcText))

            # Get Alchemy Data
            print "GETTING ALCHEMY DATA..."
            data = alchemy(wcText)
            tagRelDict = {concept['text']:float(concept['relevance']) for concept in data['concepts']}
            blink(3)
            GPIO.output(20, True)
            print "...GOT DATA"

            # Make Match
            print "FINDING MATCH..."
            interDoc = findIntersection(tagRelDict)
            print interDoc
            interText = puncReplace(interDoc['text'].encode('ascii', 'xmlcharrefreplace'))
            interTitle = puncReplace(interDoc['title'].encode('ascii', 'xmlcharrefreplace'))
            interAuthor = puncReplace(interDoc['author'].encode('ascii', 'xmlcharrefreplace'))
            blink(3)
            GPIO.output(20, True)
            print "...FOUND"

            grafList = [p for p in wcText.split('\n') if p]

            # Choose primary paragraph
            primaryText = min(grafList, key=lambda x: x.count('#'))
            url = 'word.camera/i/' + grafList[-1].strip().replace('#', '')

            # Print
            print "PRINTING..."
            startNo += 1
            printStr = "No. %i\n\n\n%s\n\n%s\n\n\n\nEPITAPH\n\n%s\n\nFrom %s by %s" % (startNo, primaryText, url, interText, interTitle, interAuthor)
            printer.println(printStr)

        except:
            print "SOMETHING BROKE"
            blink(15)

        GPIO.output(20, False)

Thanks to a transistor pulsing circuit that keeps the printer’s battery awake, and some code that automatically tethers the Raspberry Pi to my iPhone, the Fiction Camera is fully portable. I’ve been walking around Brooklyn and Manhattan over the past week making lexographs—the device is definitely a conversation starter. As a street photographer, I’ve noticed that people seem to be more comfortable having their photograph taken with it than with a standard camera, possibly because the visual image (and whether they look alright in it) is far less important.

As a result of these wanderings, I’ve accrued quite a large number of lexograph receipts. Earlier iterations of the receipt design contained longer versions of the word.camera output. Eventually, I settled on a version that contains a number (indicating how many lexographs have been taken since the device was last turned on), one paragraph of word.camera output, a URL to the word.camera page containing the photo + complete output, and a single high-relevance paragraph from a novel.

[Photo gallery: lexograph receipts]

I also demonstrated the camera at ConvoHack, our final presentation event for Conversation and Computation, which took place at Babycastles gallery, and passed out over 50 lexograph receipts that evening alone.

[Photos from ConvoHack at Babycastles]

Photographs by Karam Byun

Often, when photographing a person, the camera will output a passage from a novel featuring a character description that subjects seem to relate to. Many people have told me the results have qualities that remind them of horoscopes.

word.camera
Sat, 11 Apr 2015
http://www.thehypertext.com/2015/04/11/word-camera/

lexograph /ˈleksəʊɡɹɑːf/ (n.)
A text document generated from digital image data

Last week, I launched a web application and a concept for photographic text generation that I have been working on for a few months. The idea came to me while working on another project, a computer generated screenplay, and I will discuss the connection in this post.

word.camera is responsive — it works on desktop, tablet, and mobile devices running recent versions of iOS or Android. The code behind it is open source and available on GitHub, because lexography is for everyone.

[Screenshots: word.camera on desktop and mobile]

Users can share their lexographs using unique URLs. Of all the lexographs I’ve seen generated by users since the site launched (there are now almost 7,000), this one, shared on reddit’s /r/creativecoding, stuck with me the most: http://word.camera/i/7KZPPaqdP

I was surprised when the software noticed and commented on the singer in the painting behind me: http://word.camera/i/ypQvqJr6L

I was inspired to create this project while working on another one. This semester, I received a grant from the Future of Storytelling Initiative at NYU to produce a computer generated screenplay, and I had been thinking about how to generate text that’s more cohesive and realistically descriptive, meaning that it would transition between related topics in a logical fashion and describe a scene that could realistically exist (no “colorless green ideas sleeping furiously”), in order to make filming the screenplay possible. After playing with the Clarifai API, which uses convolutional neural networks to tag images, it occurred to me that including photographs in my input corpus, rather than relying on text alone, could provide those qualities. word.camera is my first attempt at producing that type of generative text.

At the moment, the results are not nearly as grammatical as I would like them to be, and I’m working on that. The algorithm extracts tags from images using Clarifai’s convolutional neural networks, then expands those tags into paragraphs using ConceptNet (a lexical relations database developed at MIT) and a flexible template system. The template system enables the code to build sentences that connect concepts together.
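
To make the template idea concrete, here’s a toy sketch with a small hand-written table standing in for real ConceptNet lookups; the relations, templates, and tags below are all hypothetical, and the actual system is far more flexible:

import random

# Stand-in for ConceptNet: each tag maps to (relation, related concept) pairs
conceptRelations = {
    'tree': [('IsA', 'plant'), ('AtLocation', 'forest')],
    'street': [('AtLocation', 'city'), ('UsedFor', 'walking')],
}

# One sentence template per relation type
templates = {
    'IsA': 'The %s, like any %s, stood plainly in view.',
    'AtLocation': 'You would expect to find a %s in a %s, and here it was.',
    'UsedFor': 'The %s seemed to be there for %s.',
}

def expand(tag):
    # Build a sentence connecting an image tag to a related concept
    relation, related = random.choice(conceptRelations.get(tag, [('IsA', 'thing')]))
    return templates[relation] % (tag, related)

print ' '.join(expand(t) for t in ['tree', 'street'])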

This project is about augmenting our creativity and presenting images in a different format, but it’s also about creative applications of artificial intelligence technology. When we imagine the artificial intelligence of the future, based on what we’ve read in science fiction novels, we picture a robot that can describe and interact with its environment in natural language. Creating the type of AI we imagine in our wildest sci-fi fantasies is not only an engineering problem, but also a design problem that requires a creative approach.

I hope lexography eventually becomes accepted as a new form of photography. As a writer and a photographer, I love the idea that I could look at a scene and photograph it because it might generate an interesting poem or short story, rather than just an interesting image. And I’m not trying to suggest that word.camera is the final or the only possible implementation of that new art form. I made the code behind word.camera open source because I want others to help improve it and make their own versions — provided they also make their code available under the same terms, which is required under the GNU GPLv3 open source license I’m using. As the technology gets better, the results will get better, and lexography will make more sense to people as a worthy artistic pursuit.

I’m thrilled that the project has received worldwide attention from photography blogs and a few media outlets, and I hope users around the world continue enjoying word.camera as I keep working to improve it. Along with improving the language, I plan to expand the project with a mobile app and downloadable generated ebooks so that users can enjoy their lexographs offline.

Click Here for Part II

GutenFlag
Tue, 10 Mar 2015
http://www.thehypertext.com/2015/03/10/gutenflag/

For my final project in Storage Wars: Narrating Digital Archives with Michael Connor, I generated new metadata for the Project Gutenberg ebook archive using AlchemyAPI natural language concept extraction. I then used that database to create a Twitter bot (@GutenFlag) that recommends books to Twitter users based on topics in their most recent tweets.

The code for the Twitter bot, along with the metadata the bot uses, is available on GitHub.
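
In the meantime, here’s a rough sketch of the recommendation step, assuming the generated metadata maps each Gutenberg ebook ID to a dictionary of AlchemyAPI concept scores (the structure and sample values below are hypothetical):

# Hypothetical shape of the metadata: Gutenberg ebook ID -> concept scores
bookConcepts = {
    11: {'fantasy': 0.92, 'childhood': 0.71},  # Alice's Adventures in Wonderland
    84: {'science': 0.88, 'monster': 0.85},    # Frankenstein
}

def recommend(tweetConcepts):
    # Score each book by the summed relevance of concepts it shares with
    # the user's recent tweets, then return the best match
    def score(bookID):
        shared = set(bookConcepts[bookID]) & set(tweetConcepts)
        return sum(bookConcepts[bookID][c] + tweetConcepts[c] for c in shared)
    return max(bookConcepts, key=score)

print recommend({'monster': 0.9, 'electricity': 0.4})  # -> 84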

[MORE TO COME]

Traveler’s Lamp
Sat, 21 Feb 2015
http://www.thehypertext.com/2015/02/21/invisible-salesman/

For my primary project in Sculpting Data into Everyday Objects with Esther Cheung and Scott Leinweber, Joanna Wrzaszczyk and I will be creating a lamp to visualize the traveling salesman problem between a set of cities that Italo Calvino described in Invisible Cities.

This project began with a personal fascination I have with graph data. A graph is a mathematical diagram of connections between vertices (a.k.a. nodes) and edges (a.k.a. links). Graphs can be directed (meaning the edges point in specific directions) or undirected, and generally look like this:

[Example graph diagram; credit: mathinsight.org]

Graphs are widely applicable data structures, relevant to a broad range of fields. The traveling salesman problem (TSP), in its classical form, involves a set of cities along with data comprising the distance from each city to every other city. Given a salesman who starts in any given city, what is the optimal path for the salesman to take in order to visit every city once and return to the city from which s/he began?
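
The TSP is famously hard in general, but at the lamp’s scale it stays manageable; for a handful of cities you can even solve it exactly by brute force. Here’s a sketch over hypothetical x-y-z coordinates for four city-nodes:

from itertools import permutations
from math import sqrt

# Hypothetical x-y-z positions for four city-nodes
cities = {'A': (0, 0, 0), 'B': (3, 1, 2), 'C': (1, 4, 0), 'D': (2, 2, 5)}

def dist(a, b):
    # Straight-line distance between two cities
    return sqrt(sum((p - q) ** 2 for p, q in zip(cities[a], cities[b])))

def tourLength(tour):
    # Total length of the closed loop, returning to the starting city
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

best = min(permutations(cities), key=tourLength)
print best, tourLength(best)

An exhaustive search like this scales factorially with the number of cities, which is why larger instances call for heuristics or approximation algorithms.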

The lamp Joanna and I are designing will be a three-dimensional set of vertices, each a 3D printed city designed according to the specifications of one of Calvino’s Invisible Cities. The cities/vertices will be connected with light pipe, fed by LEDs, to visualize a computer algorithm (likely running on an Arduino or Raspberry Pi) solving the traveling salesman problem between the cities in real time.

We plan to print our cities on the Connex500 printer at NYU AMS as intricate white or black structures embedded inside clear plastic. The Connex500 can make prints like this:

[Sample Connex500 print; credit: 3ders.org]

We plan to make our cities inside spheres. I designed the first one based on the first city in the book, described here:

Leaving there and proceeding for three days toward the east, you reach Diomira, a city with sixty silver domes, bronze statues of all the gods, streets paved with lead, a crystal theater, a golden cock that crows each morning on a tower. All these beauties will already be familiar to the visitor, who has seen them also in other cities. But the special quality of this city for the man who arrives there on a September evening, when the days are growing shorter and the multicolored lamps are lighted all at once at the doors of the food stalls and from a terrace a woman’s voice cries ooh!, is that he feels envy toward those who now believe they have once before lived an evening identical to this and who think they were happy, that time.

I focused on the description of “sixty silver domes” and made this in Rhino:

[Rhino renderings: the Diomira city-node]

The model of a 4cm-diameter sphere contains two holes: one on the top for an LED or light pipe connection, and one going all the way through to hang the city inside a clear outer enclosure.

Before creating the city above, I created another object in Rhino, representative of what I hope we can achieve with the lamp as a whole:

[Rhino renderings: concept model for the full lamp]

Click Here for Part II

Dr. Gonzo
Thu, 19 Feb 2015
http://www.thehypertext.com/2015/02/19/dr-gonzo/

For my first project in Conversation and Computation with Lauren McCarthy, I created a therapist bot with the voice of Hunter S. Thompson. The bot currently runs in the terminal, but I am working on a web version. All my code is on GitHub.

[Screenshot: Dr. Gonzo running in the terminal]

To make Dr. Gonzo, I used AlchemyAPI concept extraction to tag each paragraph of a large corpus of Hunter S. Thompson’s writing. I fed the tagged corpus into a MongoDB database, which I query with PyMongo. I used Pattern and NLTK to parse and categorize user input, and match it to documents in the database. Database entries are appended with text generated from a template engine. Additionally, my template engine handles the first several user requests in every session.
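
Here’s a rough sketch of the lookup step, assuming each tagged paragraph lives in a MongoDB collection with a concepts field (the database, collection, and field names below are hypothetical):

from pymongo import MongoClient

db = MongoClient()['gonzo']  # assumes a local MongoDB instance

def matchParagraph(userConcepts):
    # Find tagged paragraphs sharing at least one concept with the user's
    # input, then return the one with the greatest overlap
    candidates = list(db.paragraphs.find({'concepts': {'$in': userConcepts}}))
    if not candidates:
        return None  # caller falls back to the template engine
    return max(candidates,
               key=lambda doc: len(set(doc['concepts']) & set(userConcepts)))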

Here are a few more screenshots of the doctor in action:

[Screenshots: more sessions with Dr. Gonzo]
