THE HYPERTEXT | http://www.thehypertext.com

Netflix for Robots | http://www.thehypertext.com/2015/12/10/netflix-for-robots/ | Thu, 10 Dec 2015

For my final project in Learning Machines, I forced a deep learning machine to watch every episode of The X-Files.

In high school, watching every episode of The X-Files on Netflix DVDs that came in the mail (remember those?) seemed like the thing to do. It was a great show, with 9 seasons of 20+ episodes apiece. So, it only seemed fair to provide a robot friend with the same experience.

I’m currently running NeuralTalk2, a truly wonderful piece of open source image captioning software built from convolutional and recurrent neural networks. It requires a GPU to train models, so I’m running it on an Amazon Web Services GPU server instance. At ~50 cents per hour, it’s a lot more expensive than Netflix.

Andrej Karpathy wrote NeuralTalk2 in Torch, which is written in Lua, and it has a lot of dependencies. However, it was a lot easier to set up than the Deep Dream code I experimented with over the summer.

The training process has involved a lot of trial and error. The learning process seems to just halt sometimes, and the machine often wants to issue the same caption for every image.

Rather than training the machine on a standard image caption dataset, I trained it on dialogue from the subtitles, paired with frames extracted at 10-second intervals from every episode of The X-Files. This is just an experiment, and I’m not expecting stellar results.
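The preprocessing step isn't shown in this post, but as a rough illustration (not the actual pipeline), pairing frames with subtitle dialogue might look something like this, assuming ffmpeg is installed and each episode's subtitles have already been parsed into (start_seconds, end_seconds, text) tuples:

import os
import subprocess

FRAME_INTERVAL = 10  # seconds between extracted frames

def extract_frame(video_path, seconds, out_path):
    # Grab a single frame at the given timestamp with ffmpeg.
    subprocess.call([
        'ffmpeg', '-ss', str(seconds), '-i', video_path,
        '-frames:v', '1', '-y', out_path
    ])

def dialogue_at(seconds, subtitles):
    # Return the subtitle line on screen at this timestamp, if any.
    for start, end, text in subtitles:
        if start <= seconds <= end:
            return text
    return None

def build_pairs(video_path, subtitles, duration, out_dir):
    # Produce (frame path, caption) training pairs for one episode.
    pairs = []
    for t in range(0, duration, FRAME_INTERVAL):
        frame_path = os.path.join(out_dir, '%06d.jpg' % t)
        extract_frame(video_path, t, frame_path)
        caption = dialogue_at(t, subtitles)
        if caption:
            pairs.append((frame_path, caption))
    return pairs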

That said, the robot is already spitting out some pretty weird and genuinely creepy lines. I can’t wait until I have a version that’s trained well enough to feed in new images and get varied results.

[Screenshots: sample captions generated from X-Files frames, 9 Dec 2015]

novel camera | http://www.thehypertext.com/2015/12/01/novel-camera/ | Tue, 01 Dec 2015

I have spent the last few months completing a novel I started a long time ago and turning it into a non-linear interactive experience. For my final project in several classes, I have transferred this novel into a printer-equipped camera to make a new and different type of photographic experience.

[Photos of the printer-equipped camera]

Inside the antique camera is a Raspberry Pi with a camera module behind the lens. The flow of passages is controlled by a single, handwritten JSON file. When the tags Clarifai detects in an image overlap with the tags assigned to a passage, and that candidate passage occurs next in a storyline that has already begun, the passage is printed out. If no passage can be found, the camera prints poetry generated by a recursive context-free grammar and constructed from words detected in the image.
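The gen module that produces the fallback poetry isn't included in this post, but the core of a recursive context-free grammar expander fits in a few lines. Here is a toy sketch in the same spirit; the grammar rules below are invented for illustration and are not the contents of weird_grammar.json:

import random

# Toy grammar in the same spirit as weird_grammar.json (rules invented for
# illustration). Lowercase strings are terminals; uppercase keys are symbols.
grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["a", "N"]],
    "VP": [["V", "NP"], ["V"]],
    "N":  ["moon", "swamp", "insect"],   # detected image tags get appended here
    "V":  ["dissolves", "remembers", "waits"]
}

def expand(symbol, depth=0, max_depth=8):
    # Recursively expand a symbol into a string of terminals.
    if symbol not in grammar or depth > max_depth:
        return symbol
    rule = random.choice(grammar[symbol])
    if isinstance(rule, list):
        return ' '.join(expand(s, depth + 1, max_depth) for s in rule)
    return rule  # a terminal drawn from a word list

print(expand("S"))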


This week, I am planning to add a back end component that will allow photos taken to be preserved as albums, and passages printed to be read later online. For now, here is the JSON file that controls the order of output:

{
    "zero": {
        "tags": ["moon", "swamp", "marble", "north america", "insect", "street"],
        "order": 0,
        "next": ["story"]
    },
    "guam_zero": {
    	"tags": ["computer", "technology", "future", "keyboard", "politics"],
    	"order": 0,
    	"next": ["guam_one"]
    },
    "guam_one": {
    	"tags": ["computer", "technology", "future", "keyboard", "politics"],
    	"order": 1,
    	"next": []
    },
    "dream_zero": {
    	"tags": ["dream", "dark", "night", "sleep", "bed", "bedroom", "indoors"],
    	"order": 0,
    	"next": ["chess_board"]
    },
    "chess_board": {
    	"tags": ["dream", "dark", "night", "sleep", "bed", "bedroom", "indoors"],
    	"order": 2,
    	"next": ["black_queen", "black_pawn", "black_king", "black_rook", "white_king", "white_knight"]
    },
    "black_queen": {
    	"tags": ["dream", "dark", "black", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "queen"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "black_pawn": {
    	"tags": ["dream", "dark", "black", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "pawn"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "black_king": {
    	"tags": ["dream", "dark", "black", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "king"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "black_rook": {
    	"tags": ["dream", "dark", "black", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "rook", "castle"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "white_king": {
    	"tags": ["dream", "dark", "white", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "king"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "white_knight": {
    	"tags": ["dream", "dark", "white", "night", "sleep", "bed", "bedroom", "indoors", "chess", "game", "knight"],
    	"order": 3,
    	"next": ["wake_up"]
    },
    "wake_up": {
    	"tags": ["dream", "dark", "night", "sleep", "bed", "bedroom", "indoors"],
    	"order": 4,
    	"next": []
    },
    "forget": {
    	"tags": ["man", "men", "boy"],
    	"order": 0,
    	"next": []
    },    
    "story": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "street", "woman", "women", "girl"],
    	"order": 1,
    	"next": ["miss_vest", "forget"]
    },
    "miss_vest": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "street", "woman", "women", "girl"],
    	"order": 2,
    	"next": ["envelope", "forget"]
    },
    "envelope": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "street", "woman", "women", "girl", "paper", "envelope", "mail"],
    	"order": 3,
    	"next": ["apartment", "forget"]
    },
    "apartment": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "street", "woman", "women", "girl", "paper", "envelope", "mail"],
    	"order": 4,
    	"next": ["email"]
    },
    "email": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "woman", "women", "girl", "paper", "envelope", "mail", "computer", "technology"],
    	"order": 5,
    	"next": ["match"]
    },
    "match": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "paper", "envelope", "mail", "computer", "technology"],
    	"order": 5,
    	"next": ["smithpoint", "morning"]
    },
    "morning": {
    	"tags": ["day", "sun", "bedroom", "bed", "breakfast", "morning", "dream", "dark", "night"],
    	"order": 6,
    	"next": ["call"]
    },
    "call": {
    	"tags": ["phone", "telephone", "technology", "computer"],
    	"order": 7,
    	"next": ["smithpoint"]
    },
    "smithpoint": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "bar", "drink", "alcohol", "wine", "beer"],
    	"order": 8,
    	"next": ["drive", "forget"]
    },
    "drive": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "bar", "drink", "alcohol", "wine", "beer"],
    	"order": 9,
    	"next": ["take_pill", "toss_pill"]
    },
    "take_pill": {
    	"tags": ["drug", "pill", "man", "men", "boy", "bar", "night", "drink", "alcohol", "wine", "beer"],
    	"order": 10,
    	"next": ["meet_stranger_drugs", "john_home"]
    },
    "toss_pill": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "girl", "street", "woman", "women"],
    	"order": 10,
    	"next": ["meet_stranger_no_drugs"]
    },
    "meet_stranger_drugs": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "bar", "drink", "alcohol", "wine", "beer"],
    	"order": 11,
    	"next": ["john_home"]
    },
    "meet_stranger_no_drugs": {
    	"tags": ["moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "bar", "drink", "alcohol", "wine", "beer"],
    	"order": 11,
    	"next": ["painting"]
    },
    "painting": {
    	"tags": ["painting", "art", "moon", "swamp", "marble", "north america", "insect", "night", "man", "men", "boy", "bar", "drink", "alcohol", "wine", "beer"],
    	"order": 12,
    	"next": []
    },
    "john_home": {
    	"tags": ["drug", "pill", "man", "men", "boy", "bar", "night", "drink", "alcohol", "wine", "beer"],
    	"order": 13,
    	"next": []
    }

}

And here is the code that’s currently running on the Raspberry Pi:

import RPi.GPIO as GPIO
from Adafruit_Thermal import *
import time
import os
import sys
import json
import picamera
from clarifai.client import ClarifaiApi
from pattern.en import referenced

import gen

# Init Clarifai
os.environ["CLARIFAI_APP_ID"] = "nAT8dW6B0Oc5qA6JQfFcdIEr-CajukVSOZ6u_IsN"
os.environ["CLARIFAI_APP_SECRET"] = "BnETdY6wtp8DmXIWCBZf8nE4XNPtlHMdtK0ISNJQ"
clarifai_api = ClarifaiApi() # Assumes Env Vars Set

# Init System Paths
APP_PATH = os.path.dirname(os.path.realpath(__file__))
IMG_PATH = os.path.join(APP_PATH, 'img')
TALE_PATH = os.path.join(APP_PATH, 'tales')

# Init tale_dict
with open(os.path.join(APP_PATH, 'tales_dict.json'), 'r') as infile:
    tale_dict = json.load(infile)

# Seen tales
seen_tales = list()

# Init Camera
camera = picamera.PiCamera()

# Init Printer
printer = Adafruit_Thermal("/dev/ttyAMA0", 9600, timeout=5)
printer.boldOn()

# Init GPIO
# With camera pointed forward...
# LEFT:  11 (button), 15 (led)
# RIGHT: 13 (button), 16 (led)
GPIO.setmode(GPIO.BOARD)
ledPins = (15,16)
butPins = (11,13)

for pinNo in ledPins:
    GPIO.setup(pinNo, GPIO.OUT)

for pinNo in butPins:
    GPIO.setup(pinNo, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Open Grammar Dict
with open(os.path.join(APP_PATH, 'weird_grammar.json'), 'r') as infile:
    grammar_dict = json.load(infile)

def blink_left_right(count):
    ledLeft, ledRight = ledPins
    for _ in range(count):
        GPIO.output(ledRight, False)
        GPIO.output(ledLeft, True)
        time.sleep(0.2)
        GPIO.output(ledRight, True)
        GPIO.output(ledLeft, False)
        time.sleep(0.2)
    GPIO.output(ledRight, False)

def to_lines(sentences):
    def sentence_to_lines(text):
        LL = 32
        tokens = text.split(' ')
        lines = list()
        curLine = list()
        charCount = 0
        for t in tokens:
            charCount += (len(t)+1)
            if charCount > LL:
                lines.append(' '.join(curLine))
                curLine = [t]
                charCount = len(t)+1
            else:
                curLine.append(t)
        lines.append(' '.join(curLine))
        return '\n'.join(lines)
    sentence_lines = map(sentence_to_lines, sentences)
    return '\n\n'.join(sentence_lines)

def open_tale(tale_name):
    with open(os.path.join(TALE_PATH, tale_name), 'r') as infile:
        tale_text = to_lines(
            filter(lambda x: x.strip(), infile.read().strip().split('\n'))
        )
    return tale_text

def pick_tale(tags, next_tales):
    # Choose the passage whose tags overlap most with the image tags,
    # strongly favoring unseen passages that continue the current storyline.
    choice = str()
    record = 0
    for tale in tale_dict:
        if tale in next_tales or tale_dict[tale]['order'] == 0:
            score = len(set(tale_dict[tale]['tags']) & set(tags))
            if tale in next_tales and score > 0 and not tale in seen_tales:
                score += 100
            if score > record:
                choice = tale
                record = score
    return choice


blink_left_right(5)
imgCount = 1
cur_tale = str()


while True:
    inputLeft, inputRight = map(GPIO.input, butPins)
    if inputLeft != inputRight:
        try:
            img_fn = str(int(time.time()*100))+'.jpg'
            img_fp = os.path.join(IMG_PATH, img_fn)

            camera.capture(img_fp)

            blink_left_right(3)

            result = clarifai_api.tag_images(open(img_fp))
            tags = result['results'][0]['result']['tag']['classes']

            if cur_tale:
                next_tales = tale_dict[cur_tale]['next']
            else:
                next_tales = list()

            tale_name = pick_tale(tags, next_tales)
            cur_tale = tale_name

            if tale_name:
                lines_to_print = open_tale(tale_name)
                seen_tales.append(tale_name)

            else:
                grammar_dict["N"].extend(tags)

                if not inputLeft:
                    sentences = [gen.make_polar(grammar_dict, 10, sent=0) for _ in range(10)]
                elif not inputRight:
                    sentences = [gen.make_polar(grammar_dict, 10) for _ in range(10)]
                else:
                    sentences = gen.main(grammar_dict, 10)

                lines_to_print = to_lines(sentences)

            prefix = '\n\n\nNo. %i\n\n'%imgCount

            printer.println(prefix+lines_to_print+'\n\n\n')

            grammar_dict["N"] = list()
            imgCount += 1
        except:
            blink_left_right(15)
            print sys.exc_info()

    elif (not inputLeft) and (not inputRight):
        offCounter = 0
        for _ in range(100):
            inputLeft, inputRight = map(GPIO.input, butPins)
            if (not inputLeft) and (not inputRight):
                time.sleep(0.1)
                offCounter += 1
                if offCounter > 50:
                    os.system('sudo shutdown -h now')
            else:
                break

 

Click here for a Google Drive folder with all the passages from the novel.

Sound Camera, Part III | http://www.thehypertext.com/2015/10/21/sound-camera-part-iii/ | Wed, 21 Oct 2015

I completed the physical prototype of the sound camera inside the enclosure I specified in my prior post, the Kodak Brownie Model 2.


[Photo: the completed prototype]

I started by adding a shutter button to the top of the enclosure. I used a Cherry MX Blue mechanical keyboard switch that I had left over from a project last year.

[Photo: the shutter button]

The battery and Raspberry Pi just barely fit into the enclosure:

[Photos: the battery and Raspberry Pi inside the enclosure]

The Raspberry Pi camera module is wedged snugly beneath the camera’s front plate:

[Photo: the camera module behind the front plate]

In addition to playing the song, I added some functionality that provides a bit of context to the user. Using the pico2wave text-to-speech utility, the camera speaks the tags aloud before playing the song. Additionally, using SoX, the camera plays an initialization tone generated from the color histogram of the image before reading the tags.

Here’s the code that’s currently running on the Raspberry Pi:

from __future__ import unicode_literals

import os
import json
import uuid
import time
from random import choice as rc
from random import sample as rs
import re
import subprocess

import RPi.GPIO as GPIO
import picamera
from clarifai.client import ClarifaiApi
import requests
from PIL import Image

import sys
import threading

import spotify

import genius_token

# SPOTIFY STUFF

# Assuming a spotify_appkey.key in the current dir
session = spotify.Session()

# Process events in the background
loop = spotify.EventLoop(session)
loop.start()

# Connect an audio sink
audio = spotify.AlsaSink(session)

# Events for coordination
logged_in = threading.Event()
logged_out = threading.Event()
end_of_track = threading.Event()

logged_out.set()


def on_connection_state_updated(session):
    if session.connection.state is spotify.ConnectionState.LOGGED_IN:
        logged_in.set()
        logged_out.clear()
    elif session.connection.state is spotify.ConnectionState.LOGGED_OUT:
        logged_in.clear()
        logged_out.set()


def on_end_of_track(self):
    end_of_track.set()

# Register event listeners
session.on(
    spotify.SessionEvent.CONNECTION_STATE_UPDATED, on_connection_state_updated)
session.on(spotify.SessionEvent.END_OF_TRACK, on_end_of_track)

# Assuming a previous login with remember_me=True and a proper logout
# session.relogin()
# session.login(genius_token.spotify_un, genius_token.spotify_pwd, remember_me=True)

# logged_in.wait()

# CAMERA STUFF

# Init Camera
camera = picamera.PiCamera()

# Init GPIO
GPIO.setmode(GPIO.BCM)

# Button Pin
GPIO.setup(18, GPIO.IN, pull_up_down=GPIO.PUD_UP)

IMGPATH = '/home/pi/soundcamera/img/'

clarifai_api = ClarifaiApi()

def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in xrange(0, len(l), n):
        yield l[i:i+n]

def take_photo():
    fn = str(int(time.time()))+'.jpg' # TODO: Change to timestamp hash
    fp = IMGPATH+fn
    camera.capture(fp)
    return fp

def get_tags(fp):
    fileObj = open(fp)
    result = clarifai_api.tag_images(fileObj)
    resultObj = result['results'][0]
    tags = resultObj['result']['tag']['classes']
    return tags

def genius_search(tags):
    access_token = genius_token.token
    payload = {
        'q': ' '.join(tags),
        'access_token': access_token
    }
    endpt = 'http://api.genius.com/search'
    response = requests.get(endpt, params=payload)
    results = response.json()
    hits = results['response']['hits']
    
    artists_titles = []
    
    for h in hits:
        hit_result = h['result']
        if hit_result['url'].endswith('lyrics'):
            artists_titles.append(
                (hit_result['primary_artist']['name'], hit_result['title'])
            )
    
    return artists_titles

def spotify_search(query):
    endpt = "https://api.spotify.com/v1/search"
    payload = {
        'q': query,
        'type': 'track'
    }
    response = requests.get(endpt, params=payload)
    result = response.json()
    result_zero = result['tracks']['items'][0]
    
    return result_zero['uri']

def main(fn):
    tags = get_tags(fn)
    for tag_chunk in chunks(tags,3):
        artists_titles = genius_search(tag_chunk)
        for artist, title in artists_titles:
            try:
                result_uri = spotify_search(artist+' '+title)
            except IndexError:
                pass
            else:
                print tag_chunk
                byline = "%s by %s" % (title, artist)
                print byline
                to_read = ', '.join(tag_chunk) + ". " + byline
                return to_read, result_uri

def play_uri(track_uri):
    # Play a track
    # audio = spotify.AlsaSink(session)
    session.login(genius_token.spotify_un, genius_token.spotify_pwd, remember_me=True)
    logged_in.wait()
    track = session.get_track(track_uri).load()
    session.player.load(track)
    session.player.play()


def stop_track():
    session.player.play(False)
    session.player.unload()
    session.logout()
    logged_out.wait()
    audio._close()

def talk(msg):
    proc = subprocess.Popen(
        ['bash', '/home/pi/soundcamera/play_text.sh', msg]
    )
    proc.communicate()

def play_tone(freqs):
    freq1, freq2 = freqs
    proc = subprocess.Popen(
        ['play', '-n', 'synth', '0.25', 'saw', "%i-%i" % (freq1, freq2)]
    )
    proc.communicate()

def histo_tone(fp):
    im = Image.open(fp)
    hist = im.histogram()
    vals = map(sum, chunks(hist, 64)) # list of 12 values
    print vals
    map(play_tone, chunks(vals,2))

if __name__ == "__main__":
    input_state = True
    new_state = True
    hold_counter = 0
    while 1:
        input_state = GPIO.input(18)
        # Triggered when the button is pressed (input pulled low), or when
        # new_state went low because a new photo interrupted playback.
        if not (input_state and new_state):
            talk("capturing")

            # Hold for 15 seconds to turn off
            while not GPIO.input(18):
                time.sleep(0.1)
                hold_counter += 1
                if hold_counter > 150:
                    os.system('shutdown now -h')
                    sys.exit()

            # Reset hold counter
            hold_counter = 0

            # Else take photo
            try:
                img_fp = take_photo()
                msg, uri = main(img_fp)
                histo_tone(img_fp)
                talk(msg)
                play_uri(uri)
            except:
                print sys.exc_info()

            # Wait for playback to complete or Ctrl+C
            try:
                while not end_of_track.wait(0.1):
                    # If new photo, play new song
                    new_state = GPIO.input(18)
                    if not new_state:
                        stop_track()
                        # time.sleep(2)
                        break
            except KeyboardInterrupt:
                pass

 

Sound Camera, Part II | http://www.thehypertext.com/2015/10/06/sound-camera-part-ii/ | Tue, 06 Oct 2015

Using JavaScript and Python Flask, I created a functional software prototype of the Sound Camera: rossgoodwin.com/soundcamera

The front-end JavaScript code is available on GitHub. Here is the primary back-end Python code:

import os
import json
import uuid
from base64 import decodestring
import time
from random import choice as rc
from random import sample as rs
import re

import PIL
from PIL import Image
import requests
import exifread

from flask import Flask, request, abort, jsonify
from flask.ext.cors import CORS
from werkzeug import secure_filename

from clarifai.client import ClarifaiApi

app = Flask(__name__)
CORS(app)

app.config['UPLOAD_FOLDER'] = '/var/www/SoundCamera/SoundCamera/static/img'
IMGPATH = '/var/www/SoundCamera/SoundCamera/static/img/'

clarifai_api = ClarifaiApi()

@app.route("/")
def index():
    return "These aren't the droids you're looking for."

@app.route("/img", methods=["POST"])
def img():
	request.get_data()
	if request.method == "POST":
		f = request.files['file']
		if f:
			filename = secure_filename(f.filename)
			f.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
			new_filename = resize_image(filename)
			return jsonify(uri=main(new_filename))
		else:
			abort(501)

@app.route("/b64", methods=["POST"])
def base64():
	if request.method == "POST":
		fstring = request.form['base64str']
		filename = str(uuid.uuid4())+'.jpg'
		file_obj = open(IMGPATH+filename, 'w')
		file_obj.write(fstring.decode('base64'))
		file_obj.close()
		return jsonify(uri=main(filename))

@app.route("/url")
def url():
	img_url = request.args.get('url')
	response = requests.get(img_url, stream=True)
	orig_filename = img_url.split('/')[-1]
	if response.status_code == 200:
		with open(IMGPATH+orig_filename, 'wb') as f:
			for chunk in response.iter_content(1024):
				f.write(chunk)
		new_filename = resize_image(orig_filename)
		return jsonify(uri=main(new_filename))
	else:
		abort(500)


# def allowed_img_file(filename):
#     return '.' in filename and \
# 		filename.rsplit('.', 1)[1].lower() in set(['.jpg', '.jpeg', '.png'])

def resize_image(fn):
    longedge = 640
    orientDict = {
        1: (0, 1),
        2: (0, PIL.Image.FLIP_LEFT_RIGHT),
        3: (-180, 1),
        4: (0, PIL.Image.FLIP_TOP_BOTTOM),
        5: (-90, PIL.Image.FLIP_LEFT_RIGHT),
        6: (-90, 1),
        7: (90, PIL.Image.FLIP_LEFT_RIGHT),
        8: (90, 1)
    }

    imgOriList = []
    try:
        f = open(IMGPATH+fn, "rb")
        exifTags = exifread.process_file(f, details=False, stop_tag='Image Orientation')
        if 'Image Orientation' in exifTags:
            imgOriList.extend(exifTags['Image Orientation'].values)
    except:
        pass

    img = Image.open(IMGPATH+fn)
    w, h = img.size
    newName = str(uuid.uuid4())+'.jpeg'
    if w >= h:
        wpercent = (longedge/float(w))
        hsize = int((float(h)*float(wpercent)))
        img = img.resize((longedge,hsize), PIL.Image.ANTIALIAS)
    else:
        hpercent = (longedge/float(h))
        wsize = int((float(w)*float(hpercent)))
        img = img.resize((wsize,longedge), PIL.Image.ANTIALIAS)

    for val in imgOriList:
        if val in orientDict:
            deg, flip = orientDict[val]
            img = img.rotate(deg)
            if flip != 1:
                img = img.transpose(flip)

    img.save(IMGPATH+newName, format='JPEG')
    os.remove(IMGPATH+fn)
    
    return newName

def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in xrange(0, len(l), n):
        yield l[i:i+n]

def get_tags(fp):
    fileObj = open(fp)
    result = clarifai_api.tag_images(fileObj)
    resultObj = result['results'][0]
    tags = resultObj['result']['tag']['classes']
    return tags

def genius_search(tags):
    access_token = 'd2IuV9fGKzYEWVnzmLVtFnm-EYvBQKR8Uh3I1cfZOdr8j-BGVTPThDES532dym5a'
    payload = {
        'q': ' '.join(tags),
        'access_token': access_token
    }
    endpt = 'http://api.genius.com/search'
    response = requests.get(endpt, params=payload)
    results = response.json()
    hits = results['response']['hits']
    
    artists_titles = []
    
    for h in hits:
        hit_result = h['result']
        if hit_result['url'].endswith('lyrics'):
            artists_titles.append(
                (hit_result['primary_artist']['name'], hit_result['title'])
            )
    
    return artists_titles

def spotify_search(query):
    endpt = "https://api.spotify.com/v1/search"
    payload = {
        'q': query,
        'type': 'track'
    }
    response = requests.get(endpt, params=payload)
    result = response.json()
    result_zero = result['tracks']['items'][0]
    
    return result_zero['uri']

def main(fn):
    tags = get_tags(IMGPATH+fn)
    for tag_chunk in chunks(tags,3):
        artists_titles = genius_search(tag_chunk)
        for artist, title in artists_titles:
            try:
                result_uri = spotify_search(artist+' '+title)
            except IndexError:
                pass
            else:
                return result_uri


if __name__ == "__main__":
    app.run()

 

It uses the same algorithm discussed in my prior post. Now that I have the opportunity to test it more, I am not quite satisfied with the results it is providing. First of all, they are not entirely deterministic (you can upload the same photo twice and end up with two different songs in some cases). Moreover, the results from a human face — which I expect to be a common use case — are not very personal. For the next steps in this project, I plan to integrate additional data including GPS, weather, time of day, and possibly even facial expressions in order to improve the output.

The broken cameras I ordered from eBay have arrived, and I have been considering how to use them as cases for the new models. I also purchased a GPS module for my Raspberry Pi, so the next Sound Camera prototype, with new features integrated, will likely be a physical version. I’m planning to use this Kodak Brownie camera (c. 1916):

[Photo: Kodak Brownie camera, c. 1916]

Candidate Image Explorer | http://www.thehypertext.com/2015/09/17/candidate-image-explorer/ | Thu, 17 Sep 2015

For this week’s homework in Designing for Data Personalization with Sam Slover, I made progress on a project that I’m working on for Fusion as part of their 2016 US Presidential Election coverage. I began this project by downloading all the images from each candidate’s Twitter, Facebook, and Instagram account — about 60,000 in total — then running those images through Clarifai‘s convolutional neural networks to generate descriptive tags.
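The scraping and tagging pass isn't shown here, but as a rough sketch, tagging a folder of downloaded images with the same Clarifai client I use elsewhere might look something like this (the directory path and output file are placeholders):

import os
import json

from clarifai.client import ClarifaiApi

clarifai_api = ClarifaiApi()  # assumes CLARIFAI_APP_ID / CLARIFAI_APP_SECRET are set

def tag_directory(img_dir, out_fp='tags.json'):
    # Tag every image in img_dir and write a filename -> tags mapping to disk.
    tags_by_file = {}
    for fn in sorted(os.listdir(img_dir)):
        if not fn.lower().endswith(('.jpg', '.jpeg', '.png')):
            continue
        with open(os.path.join(img_dir, fn), 'rb') as f:
            result = clarifai_api.tag_images(f)
        tags_by_file[fn] = result['results'][0]['result']['tag']['classes']
    with open(out_fp, 'w') as outfile:
        json.dump(tags_by_file, outfile, indent=2)
    return tags_by_file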

With all the images hosted on Amazon S3, and the tag data hosted on parse.com, I created a simple page where users can explore the candidates’ images by topic and by candidate. The default is all topics and all candidates, but users can narrow the selection of images displayed by making multiple selections from each field. Additionally, more images will load as you scroll down the page.

[Screenshots of the Candidate Image Explorer interface]

Unfortunately, the AI-enabled image tagging doesn’t always work as well as one might hope.

[Screenshot: an image with inaccurate tags]

Here’s the page’s JavaScript code:

var name2slug = {};
var slug2name = {};

Array.prototype.remove = function() {
    var what, a = arguments, L = a.length, ax;
    while (L && this.length) {
        what = a[--L];
        while ((ax = this.indexOf(what)) !== -1) {
            this.splice(ax, 1);
        }
    }
    return this;
}

Array.prototype.chunk = function(chunkSize) {
    var array=this;
    return [].concat.apply([],
        array.map(function(elem,i) {
            return i%chunkSize ? [] : [array.slice(i,i+chunkSize)];
        })
    );
}

function dateFromString(str) {
	var m = str.match(/(\d+)-(\d+)-(\d+)T(\d+):(\d+):(\d+)Z/);
	var date = new Date(Date.UTC(+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6])); // months are zero-indexed
	var options = {
	    weekday: "long", year: "numeric", month: "short",
	    day: "numeric", hour: "2-digit", minute: "2-digit"
	};
	return date.toLocaleTimeString("en-us", options);
}

function updatePhotos(query) {
	$.ajax({
		url: 'https://api.parse.com/1/classes/all_photos?limit=1000&where='+JSON.stringify(query),
		type: 'GET',
		dataType: 'json',
		success: function(response) {
			// console.log(response);
			$('#img-container').empty();

			var curChunk = 0;
			var resultChunks = response['results'].chunk(30);

			function appendPhotos(chunkNo) {

				resultChunks[chunkNo].map(function(obj){
					var date = dateFromString(obj['datetime'])
					var imgUrl = "https://s3-us-west-2.amazonaws.com/electionscrape/" + obj['source'] + "/400px_" + obj['filename'];
					var fullImgUrl = "https://s3-us-west-2.amazonaws.com/electionscrape/" + obj['source'] + "/" + obj['filename'];
					$('#img-container').append(
						$('<div class=\"grid-item\"></div>').append(
							'<a href=\"'+fullImgUrl+'\"><img src=\"'+imgUrl+'\" width=\"280px\"></a><p>'+slug2name[obj['candidate']]+'</p><p>'+date+'</p><p>'+obj['source']+'</p>'
						) // not a missing semicolon
					);
					// console.log(obj['candidate']);
					// console.log(obj['datetime']);
					// console.log(obj['source']);
					// console.log(obj['filename']);
				});

			}

			appendPhotos(curChunk);

			window.onscroll = function(ev) {
			    if ((window.innerHeight + window.scrollY) >= document.body.offsetHeight) {
			        curChunk++;
			        appendPhotos(curChunk);
			    }
			};


		},
		error: function(response) { console.log("error"); },
		beforeSend: setHeader
	});
}

function setHeader(xhr) {
	xhr.setRequestHeader("X-Parse-Application-Id", "ID-GOES-HERE");
	xhr.setRequestHeader("X-Parse-REST-API-Key", "KEY-GOES-HERE");
}

function makeQuery(candArr, tagArr) {

	var orArr = tagArr.map(function(tag){
		return { "tags": tag };
	});

	if (tagArr.length === 0 && candArr.length > 0) {
		var query = {
			'candidate': {"$in": candArr}
		};
	}
	else if (tagArr.length > 0 && candArr.length === 0) {
		var query = {
			'$or': orArr
		};
	}
	else if (tagArr.length === 0 && candArr.length === 0) {
		var query = {};
	}
	else {
		var query = {
			'candidate': {"$in": candArr},
			'$or': orArr
		};
	}

	updatePhotos(query);

}

(function(){

$('.grid').masonry({
  // options
  itemSelector: '.grid-item',
  columnWidth: 300
});

var selectedCandidates = [];
var selectedTags = [];

$.getJSON("data/candidates.json", function(data){
	var candNames = Object.keys(data).map(function(slug){
		var name = data[slug]['name'];
		name2slug[name] = slug;
		slug2name[slug] = name;
		return name;
	}).sort();

	candNames.map(function(name){
		$('#candidate-dropdown').append(
			'<li class=\"candidate-item\"><a href=\"#\">'+name+'</a></li>'
		);
	});

	$('.candidate-item').click(function(){
		var name = $(this).text();
		var slug = name2slug[name];
		if ($.inArray(slug, selectedCandidates) === -1) {
			selectedCandidates.push(slug);
			makeQuery(selectedCandidates, selectedTags);
			console.log(selectedCandidates);
			$('#selected-candidates').append(
				$('<button class=\"btn btn-danger btn-xs cand-select-btn\"><span class=\"glyphicon glyphicon-remove\" aria-hidden=\"true\"></span>'+name+'</button>')
					.click(function(){
						$(this).fadeOut("fast", function(){
							selectedCandidates.remove(name2slug[$(this).text()]);
							makeQuery(selectedCandidates, selectedTags);
							console.log(selectedCandidates);
						});
					}) // THIS IS NOT A MISSING SEMI-COLON
			);
		}
	});
});


$.getJSON("data/tags.json", function(data){
	var tags = data["tags"].sort();
	tags.map(function(tag){
		$('#tag-dropdown').append(
			'<li class=\"tag-item\"><a href=\"#\">'+tag+'</a></li>'
		);
	});

	$('.tag-item').click(function(){
		var tag = $(this).text();
		if ($.inArray(tag, selectedTags) === -1) {
			selectedTags.push(tag);
			makeQuery(selectedCandidates, selectedTags);
			console.log(selectedTags);
			$('#selected-tags').append(
				$('<button class=\"btn btn-primary btn-xs tag-select-btn\"><span class=\"glyphicon glyphicon-remove\" aria-hidden=\"true\"></span>'+tag+'</button>')
					.click(function(){
						$(this).fadeOut("fast", function(){
							selectedTags.remove($(this).text());
							makeQuery(selectedCandidates, selectedTags);
							console.log(selectedTags);
						});
					})
			);
		}
	});
});

makeQuery(selectedCandidates, selectedTags);

})();

 

 

word.camera | http://www.thehypertext.com/2015/04/11/word-camera/ | Sat, 11 Apr 2015

lexograph /ˈleksəʊɡɹɑːf/ (n.)
A text document generated from digital image data

 

Last week, I launched a web application and a concept for photographic text generation that I have been working on for a few months. The idea came to me while working on another project, a computer generated screenplay, and I will discuss the connection in this post.

word.camera is responsive — it works on desktop, tablet, and mobile devices running recent versions of iOS or Android. The code behind it is open source and available on GitHub, because lexography is for everyone.

 

[Screenshots of word.camera output]

 

Users can share their lexographs using unique URLs. Of all the lexographs I’ve seen generated by users since the site launched (there are now almost 7,000), this one, shared on reddit’s /r/creativecoding, stuck with me the most: http://word.camera/i/7KZPPaqdP

I was surprised when the software noticed and commented on the singer in the painting behind me: http://word.camera/i/ypQvqJr6L

I was inspired to create this project while working on another one. This semester, I received a grant from the Future of Storytelling Initiative at NYU to produce a computer generated screenplay, and I had been thinking about how to generate text that’s more cohesive and realistically descriptive, meaning that it would transition between related topics in a logical fashion and describe a scene that could realistically exist (no “colorless green ideas sleeping furiously”), in order to make filming the screenplay possible. After playing with the Clarifai API, which uses convolutional neural networks to tag images, it occurred to me that including photographs in my input corpus, rather than relying on text alone, could provide those qualities. word.camera is my first attempt at producing that type of generative text.

At the moment, the results are not nearly as grammatical as I would like them to be, and I’m working on that. The algorithm extracts tags from images using Clarifai’s convolutional neural networks, then expands those tags into paragraphs using ConceptNet (a lexical relations database developed at MIT) and a flexible template system. The template system enables the code to build sentences that connect concepts together.
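To make the template idea concrete, here is a toy sketch that slots detected tags (and, in the full version, concepts related to them via ConceptNet) into sentence templates. The templates and the related() stub below are invented for illustration; they are not the actual word.camera templates:

import random

# Invented example templates; the real word.camera templates are more elaborate
# and draw related concepts from ConceptNet rather than the related() stub here.
templates = [
    "There is {tag} in the frame, and {tag2} just beyond it.",
    "The {tag} suggests {related}.",
    "{Tag} and {tag2}, together in one image."
]

def related(tag):
    # Stand-in for a ConceptNet lookup that would return a related concept.
    return tag + "-ness"

def describe(tags):
    t1, t2 = random.sample(tags, 2)
    template = random.choice(templates)
    return template.format(
        tag=t1, tag2=t2, Tag=t1.capitalize(), related=related(t1)
    )

print(describe(["moon", "swamp", "insect", "street"]))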

This project is about augmenting our creativity and presenting images in a different format, but it’s also about creative applications of artificial intelligence technology. I think that when we think about the type of artificial intelligence we’ll have in the future, based on what we’ve read in science fiction novels, we think of a robot that can describe and interact with its environment with natural language. I think that creating the type of AI we imagine in our wildest sci-fi fantasies is not only an engineering problem, but also a design problem that requires a creative approach.

I hope lexography eventually becomes accepted as a new form of photography. As a writer and a photographer, I love the idea that I could look at a scene and photograph it because it might generate an interesting poem or short story, rather than just an interesting image. And I’m not trying to suggest that word.camera is the final or the only possible implementation of that new art form. I made the code behind word.camera open source because I want others to help improve it and make their own versions — provided they also make their code available under the same terms, which is required under the GNU GPLv3 open source license I’m using. As the technology gets better, the results will get better, and lexography will make more sense to people as a worthy artistic pursuit.

I’m thrilled that the project has received worldwide attention from photography blogs and a few media outlets, and I hope users around the world continue enjoying word.camera as I keep working to improve it. Along with improving the language, I plan to expand the project by offering a mobile app and generated downloadable ebooks so that users can enjoy their lexographs offline.


 

Click Here for Part II

Edge Finder | http://www.thehypertext.com/2014/11/01/edge-finder/ | Sat, 01 Nov 2014

For the pixel manipulation assignment in Introduction to Computational Media, I created an edge finding algorithm that can find edges in the frames of a live video stream. The code is available on GitHub.

Below are some screen shots of the Processing sketch in action.

[Screenshots of the edge finder running on live video]

Primitive Fractal | http://www.thehypertext.com/2014/09/07/primitive-fractal/ | Sun, 07 Sep 2014

[Image: the finished snowflake fractal]

Our first homework assignment for Introduction to Computational Media with Dan Shiffman was somewhat open ended. Dan asked us to make a static image using the basic shape and line tools in Processing, and to write a blog post about it. I decided to create a primitive fractal pattern. The source code for the image above is available on GitHub.

I constructed a variation on the fractal curve known as a Koch snowflake. First specified by Swedish mathematician Helge von Koch in 1904, the Koch snowflake is one of the first fractal curves to have been described.

Rather than using equilateral triangles, I used squares. This was my first sketch:

[Image: fractal notebook sketch]

I then mapped out the base pattern, starting with an 8 x 8 unit “origin square”, and derived the relevant equations to transpose coordinates beginning with that square:

[Images: whiteboard derivations]

Using these equations as a guide, I then wrote some pseudocode for a recursive function to draw the fractal:

[Image: whiteboard pseudocode]

Which turned into…

def drawFourSquares(x, y, l):
	# Halve the side length, draw a square on each side of the current
	# square, then recurse on each new square until the length reaches 1.
	l = l / 2
	sTop = drawSquareTop(x, y, l)
	sBottom = drawSquareBottom(x, y, l)
	sRight = drawSquareRight(x, y, l)
	sLeft = drawSquareLeft(x, y, l)
	if l >= 1:
		drawFourSquares(sTop[0], sTop[1], l)
		drawFourSquares(sBottom[0], sBottom[1], l)
		drawFourSquares(sRight[0], sRight[1], l)
		drawFourSquares(sLeft[0], sLeft[1], l)

 

The rest of the code is available on GitHub.

First, I generated the shape with default fills and strokes, just to test my algorithm:

 

[Image: first result with default fills and strokes]

 

I then removed the strokes, changed the color mode to HSB, and mapped the saturation and opacity of the fills to the length of each square. The result was significantly more attractive:

 

[Image: result with HSB fills mapped to square length]

 

Finally, I mapped the hue to the length of each square and used logarithm functions to smooth the transition between different hues, opacities, and saturation levels. The length decreases by a factor of two at each level of the fractal pattern, so by using binary logarithms (base 2) I was able to make the hue, opacity, and saturation transitions linear and include a more diverse spectrum of values for each property. This was the final result, also pictured above:

 

[Image: the final fractal]
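As a small illustration of that mapping (the numeric ranges below are examples, not the exact values from my sketch): because the side length halves at each level, its binary logarithm steps down by one per level, so driving hue, saturation, and opacity from log2(length) makes them change linearly across levels.

from math import log

MAX_LEN = 8.0  # side length of the origin square

def level_fraction(length):
    # length halves at each recursion level, so log2(length) drops by exactly
    # one per level; dividing by log2(MAX_LEN) gives a value that moves
    # linearly from 1.0 (origin square) down to 0.0 (unit squares).
    return log(length, 2) / log(MAX_LEN, 2)

def fill_for(length):
    # Example ranges only; the real sketch maps hue, saturation, and opacity
    # over ranges chosen by eye.
    f = level_fraction(length)
    hue = 160 + 60 * f
    saturation = 100 + 155 * f
    opacity = 80 + 175 * f
    return (hue, saturation, opacity)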

 

The constraint for this project was to create a static image. However, in the future I would like to explore animating this pattern, perhaps by shifting the color values to create the illusion of depth and motion. Another option would be to program a continuous “zoom in” or “zoom out” effect to emphasize the potentially endless resolution and repetition of the pattern.

EDIT: See the new dynamic version.

More Poetizer Output | http://www.thehypertext.com/2014/09/02/more-poetizer-output/ | Tue, 02 Sep 2014

I thought I’d share a few more computer poems with you. But first, a new video:

For more information, see my previous blog entry.

Now, here are some of the best poems it has produced. This is only a small collection… I have literally thousands of poems like these:

1. by richmond
i raised my knees supine on the granite
steps of the human mind to correlate
all its contents we live on a cold winter
moony night it said that nothing ever
happened so worry its all like golden
of past puerility or past manhood and all
i raised my knees supine on the granite
crumble away batch will crumble but
crumble away batch will crumble

2. and houses
roads boulevard are as fugitive alas
as the tears blister and start you shall love
corrupt with your crooked heart it was late
late in the afternoon and it was not
meant that we were doing was right that we
hoped to change because it was not meant that
roads boulevard are as fugitive alas
and the thought of such endless avenues
and the thought of such endless avenues

3. upward lemon
yellow his belly button bud of flesh and saw
the dark universe yawning where the plot
where the walls staring forms leaned out leaning
the room the char come and go talking
of michelangelo and indeed there
will be time to get complicated messy
yellow his belly button bud of flesh and saw
cities have sprouted up along the floor
cities have sprouted up along the floor

4. never really
die it has no ending ill love you till
china and africa meet and the sensation
of pater to recreate the phrase structure
and measure of poor man prose and stand
before you dumb and intelligent
and shaking with shame rejected yet profess
out the time and if it rains a closed circuit

5. nymphs of
depths infinity around the limp leaves
waited for rain while the black planets roll
without aim where they roll in their whirlpools
strange dolphins and sea nymphs of depths eternity
around the limp father of thou
dreamy floating flower ugly and futile
lean neck and shrieked with delight in for committing

6. with a
dulcimer in a formulated phrase
and when i count there are only you and
i have seen the perpetual footman hold
my coat and snicker and in the mountains
in the wad but dry sterile thunder
without rain there is one thing its a dream
already ended nothing

7. me hitting
the period and you know nothing do
you think the emptiness of space which is
the end grant to aristotle
what we have lingered in the subway window
jumped in limousines with the imaginary

8. party is
stuck on a pin when i and i thought i
was fishing in the middle things have had
time to devise a face to meet the faces
that you were there and alive in their hearts
who lit cigarettes in boxcars boxcars

9. troubles me
for i can connect nothing with nothing
the broken fingernails of soil hands
my citizenry humble people who expect
null la la to carthage then i came
to your metropolis walked market street singing

10. of them
is being smeared on the far arctic isle
oh great was the sin of my hands queen and
her tribe courier stars doctor back from
his hansen's disease and woe ships of pure air
inconceivable for me to betray even
the ism can disperse through the
wall and i know this from staring at mountains

11. being depressed
on a ledge halfway up the white hair of
the intrinsic mind that these cockle of
birth and death and back turn upward from the
sky with monuments span the bay then up
the mountain and the climbing party is
stuck on a bare stage as the years in did
khan a stately pleasure dome with caves

12. like the
action of the high water mark that place
where the action of the hoary primordial
grove where the sun goes down and wept sweet thames
run softly till i end my song sweet thames river
run softly till i end my birdsong the river
into the heart of the blind where mass and
gravity bulk vast its centrifuge

13. cardboard boxes
butt ends or other testimony
of summertime nights the nymphs are departed
sweet thames run softly for i have lived over
my lives without number i have whirled with
the fanciful idea of
a toast and tea in the colonnade and
went leaving no broken hearts who sang sweet

14. will show
you fear in a cage and can see at the
air murmur of maternal who are mad
to talk mad to live mad to talk mad to
be the first word of paradise lost on
an empty page think of an animal
is being tortured with electric while
android pope this is the lady of situations

15. dear mrs
enjoin her i bring the horoscope myself
one must be drooping and shedding her dims
on the final stanzas of gibberish
who cooked rotten creature lung heart feet
tail and tortillas dreaming of as i
need be found in our true blissful essence
of mind is pure machinery whose blood

16. in the
form of my spirit and great is the merchant
and this and this is early on years before
the machinery of other things of things
more foreign and more distant in space and in
realms where the glass held up by standards wrought
with vines from which a minute there is not
even another individual who is herself
clinically depressed person can not get
straight you are thinking think i think we are
efflorescence all sorts of shapes and smells and after
that long kiss i near lost my intimation yes he
said i blaspheme i cant bear to look at you
and me would it have been the same you are
behind bars you are in the earth at the
romance of the wards of the king must die
unheard in dim song of my hands queen and
her only i cant bear to look so antique
and her presence the monition of
her heart but for her lips roses from the
superhuman tomb

17. ate the
lamb stew of the east pilgrim states and halls
bickering with the thing that troubles me
for i should have been the rats foot only
year to year but at my back in a brown
mantle hooded i do i shall wear white
flannel trousers and walk the street danced on
broken barefoot smashed record player records
of nostalgic european high german
jazz finished the whisky and threw up groaning
into the middle sixties was a soft
october night curled once about the sky
a hat on a million times the fool i
grow previous i grow old i shall wear the bottoms
of my natural ecstasy whom i sit
on her back o look in the sand if there
were water we should voyage far the sciences
each straining in its spore like the andalusian
girls used or shall i at least pretend to
be of use suave cautious and meticulous
full of depressed affected role is coming

18. the sacred
river ran on i saw your saints your zens
athenians and though the west wind appear
to harbor there not one pure dream of love
the dictates of her aspect her indeterminate
response to question her potency
over sewer water and waters her power
to to mortify to invest with lulu
to render mad to incite
gong o let not time deceive you you can
not get drunk and you closing the book spread
like an infective disease from city
to metropolis from continent to continent
barred out here attach there denounced
by press and stump censured even by
the eleemosynary spider or under
seals broken by the hole in the ocean
of mercator projection its in marshes
faded stagnant pools in the colonnade
and went on a bare stage as the writer
as the writer has cursed the world

19. [untitled]
shriek come the break of day being driven to
madness with fright i have haunted the tombs
of the wards of the low on whom assurance
sits as a silk hat on a screen would it
have been as often of the band and blew
the suffering of the states naked
mind for love was the color of television

20. regiments of
fashion and the deep waters of the subway
and were red eyed in the trench of the heterosexual
dollar the one universal essence
of purest poison lurked the very good
and the dying and the lore of ocean
blue green grey white or black smooth prance or
mountainous that ocean is more ancient
than the healthy then too still american
television is full of high sentence
but a bit dumb at times indeed almost
ridiculous almost at times indeed
almost nonsensical almost at times
indeed almost idiotic almost
at times indeed almost farcical
almost at times indeed almost ridiculous
almost at times the utter casualness
and deep ignorance of it and in iowa
i know this from staring at raft months
on end they never quite shine and the tanked
up clatter of the stability

21. no explanation
no mix of words or euphony or memories
can touch that sense of the globe its ubiquity
as constituting percent of the true
predator the great white shark of pain that
here was literally take up away by something
unnamed until only a void was left
the knowledge that the hyades shall sing
where flap the tatters of the skull who
humourless protest overturned only
one emblematic pingpong table resting
briefly in returning class later truly
bald except for a hundred visions and
revisions which a golden peeped out another
hid his eyes before his feet flowed up the
earth take heed scraped and scraped look again at
that time remembrance of a whole generation
comes to a sea journey
atone im with you in rockland where you
scream in a long face its them pills i took
to bring it off she said something

22. their wrists
three fourth dimension successively unsuccessfully
gave up all for what everything comes down
to the three old of fate the one eyeball of
the blind where mass and gravity bulk brobdingnagian

Hmap | http://www.thehypertext.com/2014/08/31/hmap/ | Sun, 31 Aug 2014

Another project I completed over the summer with help from a friend: a Python script for color histogram mapping between two images. We decided to call the program Hmap.

The script takes two images and applies the color histogram from the designated “source” image onto the designated “target” image.
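The full implementation is on GitHub, but as a rough illustration of the underlying idea, classic histogram matching for a pair of grayscale images can be done by aligning their cumulative histograms. This sketch uses NumPy and Pillow and is not necessarily how Hmap itself does it:

import numpy as np
from PIL import Image

def match_histogram(source_fp, target_fp, out_fp):
    # Load both images as 8-bit grayscale arrays.
    src = np.asarray(Image.open(source_fp).convert('L'))
    tgt = np.asarray(Image.open(target_fp).convert('L'))

    # Build normalized cumulative histograms (CDFs) for each image.
    src_hist, _ = np.histogram(src.flatten(), bins=256, range=(0, 256))
    tgt_hist, _ = np.histogram(tgt.flatten(), bins=256, range=(0, 256))
    src_cdf = np.cumsum(src_hist).astype(float) / src.size
    tgt_cdf = np.cumsum(tgt_hist).astype(float) / tgt.size

    # For each target gray level, find the source level with the nearest CDF,
    # then remap every target pixel through that lookup table.
    lookup = np.searchsorted(src_cdf, tgt_cdf)
    lookup = np.clip(lookup, 0, 255).astype(np.uint8)
    Image.fromarray(lookup[tgt]).save(out_fp)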

Here’s an example with black and white images (photographs by Ansel Adams).

Source Image:

[Image: Ansel Adams photograph used as the histogram source]

Target Image:

[Image: Ansel Adams photograph used as the target]

Result:

[Image: the target with the source’s histogram applied]

 

To see an example with color images, check out this page.

 
