programming – THE HYPERTEXT http://www.thehypertext.com

Netflix for Robots http://www.thehypertext.com/2015/12/10/netflix-for-robots/ Thu, 10 Dec 2015 06:08:24 +0000

For my final project in Learning Machines, I forced a deep learning machine to watch every episode of The X-Files.

In high school, watching every episode of The X-Files on Netflix DVDs that came in the mail (remember those?) seemed like the thing to do. It was a great show, with nine seasons of 20+ episodes apiece. So it only seemed fair to give a robot friend the same experience.

I’m currently running NeuralTalk2, a truly wonderful open source image captioning system built from convolutional and recurrent neural networks. The software requires a GPU to train models, so I’m running it on an Amazon Web Services GPU instance. At ~50 cents per hour, it’s a lot more expensive than Netflix.

Andrej Karpathy wrote NeuralTalk2 in Torch, a framework built on Lua, and it requires a lot of dependencies. Even so, it was much easier to set up than the Deep Dream code I experimented with over the summer.

The training process has involved a lot of trial and error. The learning process seems to just halt sometimes, and the machine often wants to issue the same caption for every image.

Rather than training the machine on a standard image-caption dataset, I trained it on subtitle dialogue paired with frames extracted at 10-second intervals from every episode of The X-Files. This is just an experiment, and I’m not expecting stellar results.
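Here’s a rough sketch of that pairing step (not the exact script I used; the file layout and the file_path/captions format for NeuralTalk2’s preprocessing are assumptions, and ffmpeg needs to be installed):

import os
import re
import subprocess

# matches the timing line of an .srt block, e.g. 00:01:02,500 --> 00:01:05,000
TIME_RE = re.compile(r'(\d+):(\d+):(\d+),(\d+) --> (\d+):(\d+):(\d+),(\d+)')

def parse_srt(path):
    # return a list of (start_seconds, end_seconds, text) tuples
    entries = []
    with open(path) as f:
        blocks = f.read().replace('\r\n', '\n').split('\n\n')
    for block in blocks:
        lines = block.strip().splitlines()
        if len(lines) < 3:
            continue
        m = TIME_RE.search(lines[1])
        if not m:
            continue
        g = [int(x) for x in m.groups()]
        start = g[0]*3600 + g[1]*60 + g[2] + g[3]/1000.0
        end = g[4]*3600 + g[5]*60 + g[6] + g[7]/1000.0
        entries.append((start, end, ' '.join(lines[2:])))
    return entries

def dialogue_at(entries, t):
    # return whatever subtitle line is on screen at time t, or None
    for start, end, text in entries:
        if start <= t <= end:
            return text
    return None

def extract_pairs(video_path, srt_path, out_dir, interval=10):
    # grab one frame every `interval` seconds and pair it with its dialogue
    entries = parse_srt(srt_path)
    duration = max(end for _, end, _ in entries)
    pairs = []
    t = 0
    while t < duration:
        frame_fp = os.path.join(out_dir, '%s_%06d.jpg' % (os.path.basename(video_path), t))
        subprocess.call(['ffmpeg', '-ss', str(t), '-i', video_path,
                         '-frames:v', '1', '-y', frame_fp])
        line = dialogue_at(entries, t)
        if line:
            pairs.append({'file_path': frame_fp, 'captions': [line]})
        t += interval
    return pairs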

That said, the robot is already spitting out some pretty weird and genuinely creepy lines. I can’t wait until I have a version that’s trained well enough to feed in new images and get varied results.


word.camera exhibition http://www.thehypertext.com/2015/11/24/word-camera-exhibition/ Tue, 24 Nov 2015 19:58:34 +0000

This week, I’ve been exhibiting my ongoing project, word.camera, at IDFA DocLab in Amsterdam. My installation consists of four cameras:

  1. The original word.camera physical prototype
  2. The sound camera physical prototype
  3. A new word.camera model that uses a context-free grammar to generate poems based on the images it captures (a rough sketch of this step appears below the list)
  4. A talking, pan-tilt-zoom surveillance camera that looks for faces in the hallway and then describes them aloud. (See also: this Motherboard video)
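For the grammar camera (number 3 above), the generation step works roughly like the sketch below: concept tags extracted from the captured image fill open slots in a context-free grammar, and each line of the poem comes from expanding the grammar at random. The grammar here is a toy stand-in for the much larger one on the camera, and exactly how the tags are slotted in is simplified:

import random

# toy grammar: uppercase keys are nonterminals, each mapping to a list of
# possible expansions; 'TAG' is filled from the image's concept tags
GRAMMAR = {
    'LINE': [
        ['the', 'TAG', 'of', 'a', 'TAG'],
        ['TAG', 'like', 'a', 'TAG', 'in', 'the', 'TAG'],
        ['no', 'TAG', 'without', 'its', 'TAG'],
    ],
}

def expand(symbol, tags):
    # recursively expand a symbol, pulling TAG words from the image tags
    if symbol == 'TAG':
        return [random.choice(tags)]
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word, emit as-is
    words = []
    for sym in random.choice(GRAMMAR[symbol]):
        words.extend(expand(sym, tags))
    return words

def poem_from_tags(tags, lines=3):
    return '\n'.join(' '.join(expand('LINE', tags)) for _ in range(lines))

# e.g. poem_from_tags(['window', 'rain', 'street', 'light'])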

 

During the exhibition, I was also invited to deliver two lectures. Here are my slides from the first lecture:

And here’s a video of the second one:

 

Visitors can reserve the portable cameras in half-hour blocks by leaving their ID at the volunteer kiosk. I have really enjoyed watching people borrow and use them.

Run-length encoding algorithm http://www.thehypertext.com/2015/10/28/run-length-encoding-algorithm/ Wed, 28 Oct 2015 01:06:58 +0000

For our first assignment in Learning Machines, Patrick asked us to implement run-length encoding in Python.

Below is my code, which includes encode and decode functions.

def encode(u):
	# init tracking vars
	enc = list()
	cur = list(u[0])
	# iterate through string
	for i in range(1, len(u)):
		if u[i] == u[i-1]:
			cur.append(u[i])
		else:
			enc.append((len(cur), u[i-1]))
			cur = list(u[i])
	# handle last character
	enc.append((len(cur), cur[0]))

	def rep(tup):
		# encode using backtick as delimiter
		count, char = tup
		if count == 1:
			return char
		else:
			return "%s`%i`" % (char, count)

	return ''.join(map(rep, enc))

def decode(e):
	# init tracking vars
	result = list()
	cur_num = list()
	# switch var
	on_num = False
	# iterate through encoded string
	for i, char in enumerate(e):
		# if delimiter found...
		if char == "`":
			# switch on/off
			on_num = not on_num
			# if closing delimeter
			if not on_num:
				result.append(
					rep_char*int(''.join(cur_num))
				)
				cur_num = list()
			# if opening delimeter
			elif on_num and i > 0:
				# repeated char is last
				# added to result
				rep_char = result.pop()
		# if not delimiter and not on number
		elif not on_num:
			result.append(char)
		# if not delimiter and on number
		else:
			cur_num.append(char)
	return ''.join(result)



if __name__ == '__main__':
	import sys
	to_encode = sys.argv[1]
	encoded = encode(to_encode)
	decoded = decode(encoded)
	print encoded
	print decoded
	assert to_encode == decoded
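For a quick sense of the encoded format (run counts delimited with backticks), here is what the two functions produce for a short example:

>>> encode('aaabcc')
'a`3`bc`2`'
>>> decode('a`3`bc`2`')
'aaabcc'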

 

artificial intelligence http://www.thehypertext.com/2015/10/27/artificial-intelligence/ Tue, 27 Oct 2015 19:06:42 +0000

For my current project in Temporary Expert, I have been experimenting with artificially intelligent voice interfaces in order to build an art piece with similar functionality to the Amazon Echo, but with unexpected properties.


My robot will take the form of a benevolent computer virus. Using tools like pyautogui and the Python webbrowser library, it will respond to user inquiries by opening documents, typing, and displaying web pages. It will also talk back to users with Apple’s text-to-speech utility.
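To give a sense of how those pieces fit together, here’s a minimal sketch (the intent names and spoken lines are placeholders rather than the real ones, and the say command assumes macOS):

import subprocess
import webbrowser

import pyautogui

def say(text):
    # speak aloud with the macOS text-to-speech utility
    subprocess.call(['say', text])

def respond(intent):
    # perform a visible action on the host machine for a recognized intent
    if intent == 'show_weather':
        # open a page in the user's default browser
        webbrowser.open('http://forecast.weather.gov')
        say("Here is the weather. I hope it is what you wanted.")
    elif intent == 'leave_note':
        # type into whatever window currently has focus, one character at a time
        pyautogui.typewrite('I am always here if you need me.', interval=0.08)
        say("I left you a note.")
    else:
        say("I do not understand, but I am listening.")

# e.g. respond('show_weather')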

I am building this robot using Wit.ai, a deep learning tool for making voice interfaces. Using the tool’s dashboard, I have been training my robot to respond to various user intents.


The core of the project will be a therapy bot similar to ELIZA, but with some added capabilities. When this project is complete, I believe it will provide an interesting take on artificial intelligence. By using AI tools for purposes other than the ones they were designed for, I hope to make users question whether the tool they are using is in fact sentient and aware of their presence.

Sound Camera, Part III http://www.thehypertext.com/2015/10/21/sound-camera-part-iii/ Wed, 21 Oct 2015 22:10:27 +0000

I completed the physical prototype of the sound camera inside the enclosure I specified in my prior post, the Kodak Brownie Model 2.



I started by adding a shutter button to the top of the enclosure. I used a Cherry MX Blue mechanical keyboard switch that I had left over from a project last year.


The battery and Raspberry Pi just barely fit into the enclosure:

[Photos: battery and Raspberry Pi inside the enclosure]

The Raspberry Pi camera module is wedged snugly beneath the camera’s front plate:

[Photo: Raspberry Pi camera module behind the front plate]

In addition to playing the song, I added some functionality that gives the user a bit of context. Using the pico2wave text-to-speech utility, the camera speaks the tags aloud before playing the song. Additionally, using SoX, the camera plays an initialization tone generated from the color histogram of the image before reading the tags.

Here’s the code that’s currently running on the Raspberry Pi:

from __future__ import unicode_literals

import os
import json
import uuid
import time
from random import choice as rc
from random import sample as rs
import re
import subprocess

import RPi.GPIO as GPIO
import picamera
from clarifai.client import ClarifaiApi
import requests
from PIL import Image

import sys
import threading

import spotify

import genius_token

# SPOTIFY STUFF

# Assuming a spotify_appkey.key in the current dir
session = spotify.Session()

# Process events in the background
loop = spotify.EventLoop(session)
loop.start()

# Connect an audio sink
audio = spotify.AlsaSink(session)

# Events for coordination
logged_in = threading.Event()
logged_out = threading.Event()
end_of_track = threading.Event()

logged_out.set()


def on_connection_state_updated(session):
    if session.connection.state is spotify.ConnectionState.LOGGED_IN:
        logged_in.set()
        logged_out.clear()
    elif session.connection.state is spotify.ConnectionState.LOGGED_OUT:
        logged_in.clear()
        logged_out.set()


def on_end_of_track(self):
    end_of_track.set()

# Register event listeners
session.on(
    spotify.SessionEvent.CONNECTION_STATE_UPDATED, on_connection_state_updated)
session.on(spotify.SessionEvent.END_OF_TRACK, on_end_of_track)

# Assuming a previous login with remember_me=True and a proper logout
# session.relogin()
# session.login(genius_token.spotify_un, genius_token.spotify_pwd, remember_me=True)

# logged_in.wait()

# CAMERA STUFF

# Init Camera
camera = picamera.PiCamera()

# Init GPIO
GPIO.setmode(GPIO.BCM)

# Button Pin
GPIO.setup(18, GPIO.IN, pull_up_down=GPIO.PUD_UP)

IMGPATH = '/home/pi/soundcamera/img/'

clarifai_api = ClarifaiApi()

def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in xrange(0, len(l), n):
        yield l[i:i+n]

def take_photo():
    fn = str(int(time.time()))+'.jpg' # TODO: Change to timestamp hash
    fp = IMGPATH+fn
    camera.capture(fp)
    return fp


def get_tags(fp):
    fileObj = open(fp)
    result = clarifai_api.tag_images(fileObj)
    resultObj = result['results'][0]
    tags = resultObj['result']['tag']['classes']
    return tags

def genius_search(tags):
    access_token = genius_token.token
    payload = {
        'q': ' '.join(tags),
        'access_token': access_token
    }
    endpt = 'http://api.genius.com/search'
    response = requests.get(endpt, params=payload)
    results = response.json()
    hits = results['response']['hits']
    
    artists_titles = []
    
    for h in hits:
        hit_result = h['result']
        if hit_result['url'].endswith('lyrics'):
            artists_titles.append(
                (hit_result['primary_artist']['name'], hit_result['title'])
            )
    
    return artists_titles

def spotify_search(query):
    endpt = "https://api.spotify.com/v1/search"
    payload = {
        'q': query,
        'type': 'track'
    }
    response = requests.get(endpt, params=payload)
    result = response.json()
    result_zero = result['tracks']['items'][0]
    
    return result_zero['uri']

def main(fn):
    tags = get_tags(fn)
    for tag_chunk in chunks(tags,3):
        artists_titles = genius_search(tag_chunk)
        for artist, title in artists_titles:
            try:
                result_uri = spotify_search(artist+' '+title)
            except IndexError:
                pass
            else:
                print tag_chunk
                byline = "%s by %s" % (title, artist)
                print byline
                to_read = ', '.join(tag_chunk) + ". " + byline
                return to_read, result_uri

def play_uri(track_uri):
    # Play a track
    # audio = spotify.AlsaSink(session)
    session.login(genius_token.spotify_un, genius_token.spotify_pwd, remember_me=True)
    logged_in.wait()
    track = session.get_track(track_uri).load()
    session.player.load(track)
    session.player.play()


def stop_track():
    session.player.play(False)
    session.player.unload()
    session.logout()
    logged_out.wait()
    audio._close()

def talk(msg):
    proc = subprocess.Popen(
        ['bash', '/home/pi/soundcamera/play_text.sh', msg]
    )
    proc.communicate()

def play_tone(freqs):
    freq1, freq2 = freqs
    proc = subprocess.Popen(
        ['play', '-n', 'synth', '0.25', 'saw', "%i-%i" % (freq1, freq2)]
    )
    proc.communicate()

def histo_tone(fp):
    im = Image.open(fp)
    hist = im.histogram()
    vals = map(sum, chunks(hist, 64)) # list of 12 values
    print vals
    map(play_tone, chunks(vals,2))

if __name__ == "__main__":
    input_state = True
    new_state = True
    hold_counter = 0
    while 1:
        input_state = GPIO.input(18)
        if not (input_state and new_state):
            talk("capturing")

            # Hold for 15 seconds to turn off
            while not GPIO.input(18):
                time.sleep(0.1)
                hold_counter += 1
                if hold_counter > 150:
                    os.system('shutdown now -h')
                    sys.exit()

            # Reset hold counter
            hold_counter = 0

            # Else take photo
            try:
                img_fp = take_photo()
                msg, uri = main(img_fp)
                histo_tone(img_fp)
                talk(msg)
                play_uri(uri)
            except:
                print sys.exc_info()

            # Wait for playback to complete or Ctrl+C
            try:
                while not end_of_track.wait(0.1):
                    # If new photo, play new song
                    new_state = GPIO.input(18)
                    if not new_state:
                        stop_track()
                        # time.sleep(2)
                        break
            except KeyboardInterrupt:
                pass

 

Sound Camera, Part II http://www.thehypertext.com/2015/10/06/sound-camera-part-ii/ Tue, 06 Oct 2015 02:20:44 +0000

Using JavaScript and Python Flask, I created a functional software prototype of the Sound Camera: rossgoodwin.com/soundcamera

The front-end JavaScript code is available on GitHub. Here is the primary back-end Python code:

import os
import json
import uuid
from base64 import decodestring
import time
from random import choice as rc
from random import sample as rs
import re

import PIL
from PIL import Image
import requests
import exifread

from flask import Flask, request, abort, jsonify
from flask.ext.cors import CORS
from werkzeug import secure_filename

from clarifai.client import ClarifaiApi

app = Flask(__name__)
CORS(app)

app.config['UPLOAD_FOLDER'] = '/var/www/SoundCamera/SoundCamera/static/img'
IMGPATH = '/var/www/SoundCamera/SoundCamera/static/img/'

clarifai_api = ClarifaiApi()

@app.route("/")
def index():
    return "These aren't the droids you're looking for."

@app.route("/img", methods=["POST"])
def img():
	request.get_data()
	if request.method == "POST":
		f = request.files['file']
		if f:
			filename = secure_filename(f.filename)
			f.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
			new_filename = resize_image(filename)
			return jsonify(uri=main(new_filename))
		else:
			abort(501)

@app.route("/b64", methods=["POST"])
def base64():
	if request.method == "POST":
		fstring = request.form['base64str']
		filename = str(uuid.uuid4())+'.jpg'
		file_obj = open(IMGPATH+filename, 'w')
		file_obj.write(fstring.decode('base64'))
		file_obj.close()
		return jsonify(uri=main(filename))

@app.route("/url")
def url():
	img_url = request.args.get('url')
	response = requests.get(img_url, stream=True)
	orig_filename = img_url.split('/')[-1]
	if response.status_code == 200:
		with open(IMGPATH+orig_filename, 'wb') as f:
			for chunk in response.iter_content(1024):
				f.write(chunk)
		new_filename = resize_image(orig_filename)
		return jsonify(uri=main(new_filename))
	else:
		abort(500)


# def allowed_img_file(filename):
#     return '.' in filename and \
# 		filename.rsplit('.', 1)[1].lower() in set(['.jpg', '.jpeg', '.png'])

def resize_image(fn):
    longedge = 640
    orientDict = {
        1: (0, 1),
        2: (0, PIL.Image.FLIP_LEFT_RIGHT),
        3: (-180, 1),
        4: (0, PIL.Image.FLIP_TOP_BOTTOM),
        5: (-90, PIL.Image.FLIP_LEFT_RIGHT),
        6: (-90, 1),
        7: (90, PIL.Image.FLIP_LEFT_RIGHT),
        8: (90, 1)
    }

    imgOriList = []
    try:
        f = open(IMGPATH+fn, "rb")
        exifTags = exifread.process_file(f, details=False, stop_tag='Image Orientation')
        if 'Image Orientation' in exifTags:
            imgOriList.extend(exifTags['Image Orientation'].values)
    except:
        pass

    img = Image.open(IMGPATH+fn)
    w, h = img.size
    newName = str(uuid.uuid4())+'.jpeg'
    if w >= h:
        wpercent = (longedge/float(w))
        hsize = int((float(h)*float(wpercent)))
        img = img.resize((longedge,hsize), PIL.Image.ANTIALIAS)
    else:
        hpercent = (longedge/float(h))
        wsize = int((float(w)*float(hpercent)))
        img = img.resize((wsize,longedge), PIL.Image.ANTIALIAS)

    for val in imgOriList:
        if val in orientDict:
            deg, flip = orientDict[val]
            img = img.rotate(deg)
            if flip != 1:
                img = img.transpose(flip)

    img.save(IMGPATH+newName, format='JPEG')
    os.remove(IMGPATH+fn)
    
    return newName

def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in xrange(0, len(l), n):
        yield l[i:i+n]

def get_tags(fp):
    fileObj = open(fp)
    result = clarifai_api.tag_images(fileObj)
    resultObj = result['results'][0]
    tags = resultObj['result']['tag']['classes']
    return tags

def genius_search(tags):
    access_token = 'TOKEN-GOES-HERE'
    payload = {
        'q': ' '.join(tags),
        'access_token': access_token
    }
    endpt = 'http://api.genius.com/search'
    response = requests.get(endpt, params=payload)
    results = response.json()
    hits = results['response']['hits']
    
    artists_titles = []
    
    for h in hits:
        hit_result = h['result']
        if hit_result['url'].endswith('lyrics'):
            artists_titles.append(
                (hit_result['primary_artist']['name'], hit_result['title'])
            )
    
    return artists_titles

def spotify_search(query):
    endpt = "https://api.spotify.com/v1/search"
    payload = {
        'q': query,
        'type': 'track'
    }
    response = requests.get(endpt, params=payload)
    result = response.json()
    result_zero = result['tracks']['items'][0]
    
    return result_zero['uri']

def main(fn):
    tags = get_tags(IMGPATH+fn)
    for tag_chunk in chunks(tags,3):
        artists_titles = genius_search(tag_chunk)
        for artist, title in artists_titles:
            try:
                result_uri = spotify_search(artist+' '+title)
            except IndexError:
                pass
            else:
                return result_uri


if __name__ == "__main__":
    app.run()

 

It uses the same algorithm discussed in my prior post. Now that I have the opportunity to test it more, I am not quite satisfied with the results it is providing. First of all, they are not entirely deterministic (you can upload the same photo twice and end up with two different songs in some cases). Moreover, the results from a human face — which I expect to be a common use case — are not very personal. For the next steps in this project, I plan to integrate additional data including GPS, weather, time of day, and possibly even facial expressions in order to improve the output.

The broken cameras I ordered from eBay have arrived, and I have been considering how to use them as cases for the new models. I also purchased a GPS module for my Raspberry Pi, so the next Sound Camera prototype, with new features integrated, will likely be a physical version. I’m planning to use this Kodak Brownie camera (c. 1916):

[Photo: Kodak Brownie camera]

So it goes. http://www.thehypertext.com/2015/10/06/so-it-goes/ Tue, 06 Oct 2015 01:02:23 +0000

Kurt Vonnegut once gave a brief, delightful lecture on the shapes of stories:

 

This was the primary inspiration for my latest project, which features Kurt Vonnegut’s complete works, analyzed for sentiment, and visualized as interactive word clouds. I developed it entirely in front-end JavaScript, and it’s currently hosted on GitHub pages: rossgoodwin.com/vonnegut


Users can scrub through the sentiment graph of each book from start to finish and see a word cloud displayed for each position on the slider. Each word cloud represents 10 paragraphs of the book. Along with the rises and dips in the graph, sentiment values are indicated by the color of the word cloud text, which ranges from dark green (highly positive) to bright red (highly negative).

Rather than simply using word count or frequency for the size of the words, I used TF-IDF scores. (Each 10-paragraph block was treated as one document, and each book was treated as an independent set of documents.) As a result, the largest words in each word cloud are those that make their section most distinctive within that book.
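Concretely, a word’s score in a given block is its frequency in that block weighted by how few other blocks in the same book contain it. A stripped-down version of that calculation (the actual notebooks also handle tokenization, stop words, and the sentiment scores) might look like this:

import math
from collections import Counter

def tfidf_by_section(sections):
    # sections: a list of word lists, one per 10-paragraph block of a single book
    # returns one dict of word -> tf-idf score per section
    n = len(sections)
    doc_freq = Counter()
    for words in sections:
        doc_freq.update(set(words))  # count each word once per section
    scores = []
    for words in sections:
        counts = Counter(words)
        total = float(len(words))
        scores.append(dict(
            (w, (c / total) * math.log(n / float(doc_freq[w])))
            for w, c in counts.items()
        ))
    return scores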

The first steps in creating this project were to parse Vonnegut’s books, perform TF-IDF calculations for each word and sentiment analysis for each 10-paragraph segment, then store the resulting data in a set of JSON files. Here are the iPython Notebooks where I completed these steps:

Once I had the JSON files, I used D3 to create the word clouds and Chart.js to create the line graphs. The sliders are HTML range inputs, modified with custom CSS. I wanted to create the appearance of long, semi-transparent planchettes sliding over the graphs. Getting the sliders to line up with the graphs precisely was particularly challenging, as was providing the option to click on the graphs in any location and automatically move the sliders to that location.

Here is my JavaScript code, in its current state:

(function() {

Number.prototype.map = function (in_min, in_max, out_min, out_max) {
  return (this - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}

function titleCase(str) {
    return str.replace(/\w\S*/g, function(txt){return txt.charAt(0).toUpperCase() + txt.substr(1).toLowerCase();});
}

// Charts.js global config
Chart.defaults.global.animation = false;
Chart.defaults.global.tooltipEvents = [];
Chart.defaults.global.scaleFontFamily = "'Cousine', monospace";
Chart.defaults.global.showScale = false;

// var spectrum = ['#F22613', '#E74C3C', '#D35400', '#F2784B', '#95A5A6', '#68C3A3', '#4DAF7C', '#3FC380', '#2ECC71'];
var spectrum = ["#f22613", "#f25749", "#f28379", "#f2b0aa", "#95a5a6", "#add9c2", "#74b391", "#45996c", "#1e824c"];


$("#key-block").append(
  '<div id=\"key-text-box\"><p class=\"text-center lead small\" style=\"margin-left: 7px;\">&lt;&lt;&lt; negative | positive &gt;&gt;&gt;</p></div>'
);

spectrum.map(function(hex){
  $("#key-block").append(
    '<div class=\"key-color\" style=\"background-color:'+hex+';\"></div>'
  );
});

function updateCloud(bookslug, section) {

  $.getJSON("data/vonnegut-"+section+".json", function(data){

    // var factor = Math.pow(data[bookslug]['tfidf'].length, 2);

    var layout = d3.layout.cloud()
        .size([800, 500])
        .words(data[bookslug]['tfidf'].map(function(d) {
          return {text: d[0], size: d[1] * 500};
        }))
        .padding(3)
        .rotate(function() { return 0; }) // return ~~(Math.random() * 2) * 90
        .font("Cousine")
        .fontSize(function(d) { return d.size; })
        .on("end", draw);
    layout.start();

    function draw(words) {

      var overallContainer = d3.select("#"+bookslug);

      overallContainer.select("svg").remove();
      overallContainer.select("a").remove();

      var svgContainer = overallContainer.append("svg")
          .attr("width", layout.size()[0])
          .attr("height", layout.size()[1])
          .attr("class", "svg-cont");

      var wordCloud = svgContainer.append("g")
          .attr("transform", "translate(" + layout.size()[0] / 2 + "," + layout.size()[1] / 2 + ")")
        .selectAll("text")
          .data(words)
        .enter().append("text")
          .transition().duration(500)
          .style("font-size", function(d) { return d.size + "px"; })
          .style("font-family", "Cousine")
          .style("fill", function(d, i) {
              var sentiment = data[bookslug]['sentiment'];
              var ix = Math.min(spectrum.length - 1, Math.floor(((sentiment + 1)/2)*spectrum.length));
              return spectrum[ix];
          })
          .attr("text-anchor", "middle")
          .attr("transform", function(d) {
            return "translate(" + [d.x, d.y] + ")rotate(" + d.rotate + ")";
          })
          .text(function(d) { return d.text; });

      var title = titleCase(data[bookslug]['title']);

      var labelText = overallContainer
                      .append("a")
                      .attr("href", "http://www.amazon.com/exec/obidos/external-search/?field-keywords=%s"+title+"&mode=blended")
                      .attr("class", "twitter-link")
                      .attr("target", "_blank")
                      .text(title);

      overallContainer.transition()
          .style("opacity", 1.0)
          .delay(1000)
          .duration(3000);
    }

  });

}

$.getJSON("data/sentiment.json", function(sent){
$.getJSON("data/vonnegut-0.json", function(data){
  $("#loadinggif").fadeOut("slow");
  Object.keys(data).sort().map(function(slug){
    $("#vis").append(
      '<div id=\"'+slug+'\" class=\"col-md-12 transparent text-center\"></div>'
    );

    $("#"+slug).append(
      '<canvas class="chart-canvas" id=\"'+slug+'-chart\" width=\"800\" height=\"150\"></canvas>'
    );

    var ctx = document.getElementById(slug+"-chart").getContext("2d");

    var xLabels = [];

    for (var i=0;i<data[slug]['length'];i++) {
      xLabels.push('');
    }

    var chartData = {
        labels: xLabels,
        datasets: [
            {
                label: titleCase(data[slug]['title']),
                fillColor: "rgba(210, 215, 211, 0.7)",
                strokeColor: "rgba(189, 195, 199, 1)",
                pointColor: "rgba(210, 215, 211, 1)",
                pointStrokeColor: "#fff",
                pointHighlightFill: "#fff",
                pointHighlightStroke: "rgba(220,220,220,1)",
                data: sent[slug]
            }
        ]
    };

    var chartOptions = {
      pointDot : false,
      pointHitDetectionRadius : 5,
      scaleShowVerticalLines: false,
      bezierCurve: false
    };

    var myNewChart = new Chart(ctx).Line(chartData, chartOptions);

    var stepCount = data[slug]['length'] - 1;

    $("#"+slug).append(
      '<div class=\"scrubber\"><input id=\"'+slug+'-scrub\" type=\"range\" min=\"0\" max=\"'+stepCount+'\" value=\"0\" step=\"1\"></div>'
    );

    $("#"+slug+"-chart").on("click", function(evt){
      var activePoints = myNewChart.getPointsAtEvent(evt);
      var xPos = activePoints[Math.floor(activePoints.length/2)].x;
      var ix = Math.floor(xPos.map(0, 800, 0, data[slug]['length']));
      console.log(xPos);
      console.log(ix);
      $('#'+slug+'-scrub').val(ix);
      updateCloud(slug, ix);
    });

    // Play Button
    $('#'+slug).append(
      '<button type=\"button\" id=\"'+slug+'-btn\" class=\"btn btn-default btn-xs play-btn\" aria-label=\"Play\"><span class=\"glyphicon glyphicon-play\" aria-hidden=\"true\"></span></button>'
    );

    $('#'+slug).append(
      '<button type=\"button\" id=\"'+slug+'-btn-pause\" class=\"btn btn-default btn-xs play-btn\" aria-label=\"Pause\"><span class=\"glyphicon glyphicon-pause\" aria-hidden=\"true\"></span></button>'
    );

    // Load First Clouds
    updateCloud(slug, 0);

    var play;

    $('#'+slug+'-btn').click(function(){

      console.log('clicked ' + slug);
      autoAdvance();
      play = setInterval(function(){
        autoAdvance();
      }, 5000);

      function autoAdvance(){
          var scrubVal = $('#'+slug+'-scrub').val();
          console.log(data[slug]['length']);
          if (scrubVal >= data[slug]['length']-1) {
            console.log("EOR");
            clearInterval(play);
          }
          console.log(scrubVal);
          var newVal = parseInt(scrubVal, 10) + 1;
          $('#'+slug+'-scrub').val(newVal);
          updateCloud(slug, newVal);
      }

    });



    $('#'+slug+'-btn-pause').click(function(){
      clearInterval(play);
    });


    $("#"+slug+"-scrub").on("input", function(){
      var sectNo = $(this).val();
      console.log(sectNo);
      updateCloud(slug, sectNo);
    });
  });
});
});



})();

 

The rest of my front-end code can be found on GitHub.

Candidate Image Explorer http://www.thehypertext.com/2015/09/17/candidate-image-explorer/ Thu, 17 Sep 2015 15:53:26 +0000

For this week’s homework in Designing for Data Personalization with Sam Slover, I made progress on a project that I’m working on for Fusion as part of their 2016 US Presidential Election coverage. I began this project by downloading all the images from each candidate’s Twitter, Facebook, and Instagram accounts — about 60,000 in total — then running those images through Clarifai’s convolutional neural networks to generate descriptive tags.
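The tagging pass was essentially a loop over the downloaded files with Clarifai’s Python client, along these lines (a simplified sketch; the directory layout and output file here are hypothetical):

import os
import json

from clarifai.client import ClarifaiApi

clarifai_api = ClarifaiApi()

def tag_directory(img_dir, out_path):
    # tag every image in img_dir and dump a filename -> tags mapping as JSON
    tags_by_file = {}
    for fn in os.listdir(img_dir):
        if not fn.lower().endswith(('.jpg', '.jpeg', '.png')):
            continue
        with open(os.path.join(img_dir, fn), 'rb') as f:
            result = clarifai_api.tag_images(f)
        tags_by_file[fn] = result['results'][0]['result']['tag']['classes']
    with open(out_path, 'w') as out:
        json.dump(tags_by_file, out)

# e.g. tag_directory('img/bernie-sanders/instagram', 'tags/bernie-sanders.json')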

With the images hosted on Amazon S3 and the tag data hosted on Parse.com, I created a simple page where users can explore the candidates’ images by topic and by candidate. The default is all topics and all candidates, but users can narrow the selection of images displayed by making multiple selections in each field. Additionally, more images load as you scroll down the page.


Unfortunately, the AI-enabled image tagging doesn’t always work as well as one might hope.


Here’s the page’s JavaScript code:

var name2slug = {};
var slug2name = {};

Array.prototype.remove = function() {
    var what, a = arguments, L = a.length, ax;
    while (L && this.length) {
        what = a[--L];
        while ((ax = this.indexOf(what)) !== -1) {
            this.splice(ax, 1);
        }
    }
    return this;
}

Array.prototype.chunk = function(chunkSize) {
    var array=this;
    return [].concat.apply([],
        array.map(function(elem,i) {
            return i%chunkSize ? [] : [array.slice(i,i+chunkSize)];
        })
    );
}

function dateFromString(str) {
	var m = str.match(/(\d+)-(\d+)-(\d+)T(\d+):(\d+):(\d+)Z/);
	var date = new Date(Date.UTC(+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6])); // JS months are 0-indexed
	var options = {
	    weekday: "long", year: "numeric", month: "short",
	    day: "numeric", hour: "2-digit", minute: "2-digit"
	};
	return date.toLocaleTimeString("en-us", options);
}

function updatePhotos(query) {
	$.ajax({
		url: 'https://api.parse.com/1/classes/all_photos?limit=1000&where='+JSON.stringify(query),
		type: 'GET',
		dataType: 'json',
		success: function(response) {
			// console.log(response);
			$('#img-container').empty();

			var curChunk = 0;
			var resultChunks = response['results'].chunk(30);

			function appendPhotos(chunkNo) {

				resultChunks[chunkNo].map(function(obj){
					var date = dateFromString(obj['datetime'])
					var imgUrl = "https://s3-us-west-2.amazonaws.com/electionscrape/" + obj['source'] + "/400px_" + obj['filename'];
					var fullImgUrl = "https://s3-us-west-2.amazonaws.com/electionscrape/" + obj['source'] + "/" + obj['filename'];
					$('#img-container').append(
						$('<div class=\"grid-item\"></div>').append(
							'<a href=\"'+fullImgUrl+'\"><img src=\"'+imgUrl+'\" width=\"280px\"></a><p>'+slug2name[obj['candidate']]+'</p><p>'+date+'</p><p>'+obj['source']+'</p>'
						) // not a missing semicolon
					);
					// console.log(obj['candidate']);
					// console.log(obj['datetime']);
					// console.log(obj['source']);
					// console.log(obj['filename']);
				});

			}

			appendPhotos(curChunk);

			window.onscroll = function(ev) {
			    if ((window.innerHeight + window.scrollY) >= document.body.offsetHeight) {
			        curChunk++;
			        appendPhotos(curChunk);
			    }
			};


		},
		error: function(response) { "error" },
		beforeSend: setHeader
	});
}

function setHeader(xhr) {
	xhr.setRequestHeader("X-Parse-Application-Id", "ID-GOES-HERE");
	xhr.setRequestHeader("X-Parse-REST-API-Key", "KEY-GOES-HERE");
}

function makeQuery(candArr, tagArr) {

	var orArr = tagArr.map(function(tag){
		return { "tags": tag };
	})

	if (tagArr.length === 0 && candArr.length > 0) {
		var query = {
			'candidate': {"$in": candArr}
		};
	}
	else if (tagArr.length > 0 && candArr.length === 0) {
		var query = {
			'$or': orArr
		};
	}
	else if (tagArr.length === 0 && candArr.length === 0) {
		var query = {};
	}
	else {
		var query = {
			'candidate': {"$in": candArr},
			'$or': orArr
		};
	}

	updatePhotos(query);

}

(function(){

$('.grid').masonry({
  // options
  itemSelector: '.grid-item',
  columnWidth: 300
});

var selectedCandidates = [];
var selectedTags = [];

$.getJSON("data/candidates.json", function(data){
	var candNames = Object.keys(data).map(function(slug){
		var name = data[slug]['name'];
		name2slug[name] = slug;
		slug2name[slug] = name;
		return name;
	}).sort();

	candNames.map(function(name){
		$('#candidate-dropdown').append(
			'<li class=\"candidate-item\"><a href=\"#\">'+name+'</a></li>'
		);
	});

	$('.candidate-item').click(function(){
		var name = $(this).text();
		var slug = name2slug[name];
		if ($.inArray(slug, selectedCandidates) === -1) {
			selectedCandidates.push(slug);
			makeQuery(selectedCandidates, selectedTags);
			console.log(selectedCandidates);
			$('#selected-candidates').append(
				$('<button class=\"btn btn-danger btn-xs cand-select-btn\"><span class=\"glyphicon glyphicon-remove\" aria-hidden=\"true\"></span>'+name+'</button>')
					.click(function(){
						$(this).fadeOut("fast", function(){
							selectedCandidates.remove(name2slug[$(this).text()]);
							makeQuery(selectedCandidates, selectedTags);
							console.log(selectedCandidates);
						});
					}) // THIS IS NOT A MISSING SEMI-COLON
			);
		}
	});
});


$.getJSON("data/tags.json", function(data){
	var tags = data["tags"].sort();
	tags.map(function(tag){
		$('#tag-dropdown').append(
			'<li class=\"tag-item\"><a href=\"#\">'+tag+'</a></li>'
		);
	});

	$('.tag-item').click(function(){
		var tag = $(this).text();
		if ($.inArray(tag, selectedTags) === -1) {
			selectedTags.push(tag);
			makeQuery(selectedCandidates, selectedTags);
			console.log(selectedTags);
			$('#selected-tags').append(
				$('<button class=\"btn btn-primary btn-xs tag-select-btn\"><span class=\"glyphicon glyphicon-remove\" aria-hidden=\"true\"></span>'+tag+'</button>')
					.click(function(){
						$(this).fadeOut("fast", function(){
							selectedTags.remove($(this).text());
							makeQuery(selectedCandidates, selectedTags);
							console.log(selectedTags);
						});
					})
			);
		}
	});
});

makeQuery(selectedCandidates, selectedTags);

})();

 

 

Sound Camera http://www.thehypertext.com/2015/09/14/sound-camera/ Mon, 14 Sep 2015 04:06:42 +0000

I have been looking for ways to push the conceptual framework behind word.camera to another domain. This week, I have been prototyping a script that chooses music based on photographs. Ideally, the end result will be a wearable camera / music player that selects tracks for you based on your environment. Unfortunately, the domain sound.camera has been claimed, but I’m still planning to use the name “Sound Camera” for this project.


My code:
iPython Notebook

The script I wrote gets concept words from the image via Clarifai, then searches song lyrics for those words on Genius, then finds the song on Spotify. Below are some images I put through the algorithm. You can click on each one to hear the song that resulted, though you will need to login to Spotify to do so.

 

[Example images (each linked to the resulting song): Vladimir Putin, a street scene, a landscape, a cat]

The next step will be to get this code working on a Raspberry Pi inside one of the film camera bodies I just received via eBay.

Author Cameras http://www.thehypertext.com/2015/09/09/author-cameras/ Wed, 09 Sep 2015 19:58:10 +0000

For my primary project in Project Development Studio with Stefani Bardin, I am planning to make 3-5 more physical word cameras. These models will iterate on my prior physical word camera by printing relevant passages from specific authors, based on convolutional neural network analysis of captured images.

I have not yet chosen the authors I plan to embed in these cameras, or decided how I will present the extracted text. I also have tentative plans for a new iteration of the talking surveillance camera I developed last semester, but more on that in future posts.

This week, I spent some time on eBay finding a few broken medium- and large-format cameras to use as cases. Here’s what I bought (for $5 to $25 each):

[Photos: the four broken cameras purchased on eBay]

I am currently waiting to receive them so that I can start planning the builds. Below is a list of the additional parts required for each camera:

Raspberry Pi 2 ($40)
85.60mm x 56mm x 21mm (or roughly 3.37″ x 2.21″ x 0.83″)

Raspberry Pi Camera Board ($30)
25mm x 20mm x 9mm

Buck Converter ($10)
51 * 26.3 * 14 (L * W * H) (mm)

7.4V LiIon Battery Pack ($90)
22mm (0.9″) x 104mm (4.1″) x 107mm (4.2″)
OR two USB batteries ($40)

Thermal Printer ($25 from China or $50 from U.S.)
~4 1/8″ (105mm) x 2 1/4″ (58mm) for rectangular hole
~58mm deep

On/Off Switch ($1)
18.60mm x 12.40mm rectangular hole
13.9mm deep

LED Button ($5)
Shutter button, user will hold for 3 seconds to turn off Raspberry Pi
16mm round hole
~1.5″ deep

1/4-size permaproto board ($3)

1/4″ Acrylic ($12) or Broken Medium Format TLR ($30-69)

Jumper Wires ($2)
