THE HYPERTEXT — http://www.thehypertext.com

So it goes.
http://www.thehypertext.com/2015/10/06/so-it-goes/
Tue, 06 Oct 2015

Kurt Vonnegut's complete works, analyzed for sentiment and visualized as interactive TF-IDF word clouds.

Kurt Vonnegut once gave a brief, delightful lecture on the shapes of stories:

 

This was the primary inspiration for my latest project, which features Kurt Vonnegut’s complete works, analyzed for sentiment, and visualized as interactive word clouds. I developed it entirely in front-end JavaScript, and it’s currently hosted on GitHub pages: rossgoodwin.com/vonnegut

[Screenshots: interactive sentiment graphs and word clouds for three books]

 

Users can scrub through the sentiment graph of each book from start to finish and see a word cloud displayed for each position on the slider. Each word cloud represents 10 paragraphs of the book. Along with the rises and dips in the graph, sentiment values are indicated by the color of the word cloud text, which ranges from dark green (highly positive) to bright red (highly negative).
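The score-to-color mapping can be sketched in a few lines of Python (an illustration of the idea, not the site's code, which does this in JavaScript; the hex values are the spectrum the page uses):

```python
# Spectrum from the visualization: index 0 is bright red (most negative),
# index 8 is dark green (most positive).
spectrum = ["#f22613", "#f25749", "#f28379", "#f2b0aa", "#95a5a6",
            "#add9c2", "#74b391", "#45996c", "#1e824c"]

def sentiment_color(score):
    """Map a sentiment score in [-1, 1] to one of the nine colors."""
    ix = int((score + 1) / 2 * len(spectrum))
    return spectrum[min(ix, len(spectrum) - 1)]  # clamp score == 1.0

sentiment_color(-1.0)  # "#f22613" (bright red)
sentiment_color(0.0)   # "#95a5a6" (neutral gray)
sentiment_color(1.0)   # "#1e824c" (dark green)
```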

Rather than simply using word count or frequency for the size of the words, I used TF-IDF scores. (Each 10 paragraph block was treated as one document, and each book was treated as an independent set of documents.) As a result, the largest words in each word cloud are those that make their respective section unique in the context of the entire book.
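The scoring works roughly like this (a simplified sketch, not the actual notebook code; a real run would tokenize the books' paragraphs rather than these toy sections):

```python
import math
from collections import Counter

def tfidf_per_section(sections):
    """Score each word in each section against the whole book.

    `sections` is a list of token lists, one per 10-paragraph block;
    the book itself serves as the document collection.
    """
    n = len(sections)
    df = Counter()  # document frequency: how many sections contain each word
    for section in sections:
        df.update(set(section))
    scores = []
    for section in sections:
        tf = Counter(section)
        total = len(section)
        scores.append({
            word: (count / total) * math.log(n / df[word])
            for word, count in tf.items()
        })
    return scores

sections = [
    "so it goes billy pilgrim".split(),
    "billy pilgrim came unstuck so it goes".split(),
    "so it goes so it goes".split(),
]
scores = tfidf_per_section(sections)
# Words appearing in every section score 0 (log(n/n) == 0), while words
# unique to one section, like "unstuck", score highest there.
```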

The first steps in creating this project were to parse Vonnegut’s books, perform TF-IDF calculations for each word and sentiment analysis for each 10-paragraph segment, then store the resulting data in a set of JSON files. Here are the iPython Notebooks where I completed these steps:

Once I had the JSON files, I used D3 to create the word clouds and Chart.js to create the line graphs. The sliders are HTML range inputs, modified with custom CSS. I wanted to create the appearance of long, semi-transparent planchettes sliding over the graphs. Getting the sliders to line up with the graphs precisely was particularly challenging, as was providing the option to click on the graphs in any location and automatically move the sliders to that location.

Here is my JavaScript code, in its current state:

(function() {

Number.prototype.map = function (in_min, in_max, out_min, out_max) {
  return (this - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}

function titleCase(str) {
    return str.replace(/\w\S*/g, function(txt){return txt.charAt(0).toUpperCase() + txt.substr(1).toLowerCase();});
}

// Charts.js global config
Chart.defaults.global.animation = false;
Chart.defaults.global.tooltipEvents = [];
Chart.defaults.global.scaleFontFamily = "'Cousine', monospace";
Chart.defaults.global.showScale = false;

// var spectrum = ['#F22613', '#E74C3C', '#D35400', '#F2784B', '#95A5A6', '#68C3A3', '#4DAF7C', '#3FC380', '#2ECC71'];
var spectrum = ["#f22613", "#f25749", "#f28379", "#f2b0aa", "#95a5a6", "#add9c2", "#74b391", "#45996c", "#1e824c"];


$("#key-block").append(
  '<div id=\"key-text-box\"><p class=\"text-center lead small\" style=\"margin-left: 7px;\">&lt;&lt;&lt; negative | positive &gt;&gt;&gt;</p></div>'
);

spectrum.forEach(function(hex){
  $("#key-block").append(
    '<div class=\"key-color\" style=\"background-color:'+hex+';\"></div>'
  );
});

function updateCloud(bookslug, section) {

  $.getJSON("data/vonnegut-"+section+".json", function(data){

    // var factor = Math.pow(data[bookslug]['tfidf'].length, 2);

    var layout = d3.layout.cloud()
        .size([800, 500])
        .words(data[bookslug]['tfidf'].map(function(d) {
          return {text: d[0], size: d[1] * 500};
        }))
        .padding(3)
        .rotate(function() { return 0; }) // return ~~(Math.random() * 2) * 90
        .font("Cousine")
        .fontSize(function(d) { return d.size; })
        .on("end", draw);
    layout.start();

    function draw(words) {

      var overallContainer = d3.select("#"+bookslug);

      overallContainer.select("svg").remove();
      overallContainer.select("a").remove();

      var svgContainer = overallContainer.append("svg")
          .attr("width", layout.size()[0])
          .attr("height", layout.size()[1])
          .attr("class", "svg-cont");

      var wordCloud = svgContainer.append("g")
          .attr("transform", "translate(" + layout.size()[0] / 2 + "," + layout.size()[1] / 2 + ")")
        .selectAll("text")
          .data(words)
        .enter().append("text")
          .transition().duration(500)
          .style("font-size", function(d) { return d.size + "px"; })
          .style("font-family", "Cousine")
          .style("fill", function(d, i) {
              var sentiment = data[bookslug]['sentiment'];
              // Map sentiment in [-1, 1] to a spectrum index; clamp so a
              // score of exactly 1 doesn't run off the end of the array.
              var ix = Math.floor(((sentiment + 1)/2)*spectrum.length);
              return spectrum[Math.min(ix, spectrum.length - 1)];
          })
          .attr("text-anchor", "middle")
          .attr("transform", function(d) {
            return "translate(" + [d.x, d.y] + ")rotate(" + d.rotate + ")";
          })
          .text(function(d) { return d.text; });

      var title = titleCase(data[bookslug]['title']);

      var labelText = overallContainer
                      .append("a")
                      .attr("href", "http://www.amazon.com/exec/obidos/external-search/?field-keywords="+title+"&mode=blended")
                      .attr("class", "twitter-link")
                      .attr("target", "_blank")
                      .text(title);

      overallContainer.transition()
          .style("opacity", 1.0)
          .delay(1000)
          .duration(3000);
    }

  });

}

$.getJSON("data/sentiment.json", function(sent){
$.getJSON("data/vonnegut-0.json", function(data){
  $("#loadinggif").fadeOut("slow");
  Object.keys(data).sort().map(function(slug){
    $("#vis").append(
      '<div id=\"'+slug+'\" class=\"col-md-12 transparent text-center\"></div>'
    );

    $("#"+slug).append(
      '<canvas class="chart-canvas" id=\"'+slug+'-chart\" width=\"800\" height=\"150\"></canvas>'
    );

    var ctx = document.getElementById(slug+"-chart").getContext("2d");

    var xLabels = [];

    for (var i=0;i<data[slug]['length'];i++) {
      xLabels.push('');
    }

    var chartData = {
        labels: xLabels,
        datasets: [
            {
                label: titleCase(data[slug]['title']),
                fillColor: "rgba(210, 215, 211, 0.7)",
                strokeColor: "rgba(189, 195, 199, 1)",
                pointColor: "rgba(210, 215, 211, 1)",
                pointStrokeColor: "#fff",
                pointHighlightFill: "#fff",
                pointHighlightStroke: "rgba(220,220,220,1)",
                data: sent[slug]
            }
        ]
    };

    var chartOptions = {
      pointDot : false,
      pointHitDetectionRadius : 5,
      scaleShowVerticalLines: false,
      bezierCurve: false
    };

    var myNewChart = new Chart(ctx).Line(chartData, chartOptions);

    var stepCount = data[slug]['length'] - 1;

    $("#"+slug).append(
      '<div class=\"scrubber\"><input id=\"'+slug+'-scrub\" type=\"range\" min=\"0\" max=\"'+stepCount+'\" value=\"0\" step=\"1\"></div>'
    );

    $("#"+slug+"-chart").on("click", function(evt){
      var activePoints = myNewChart.getPointsAtEvent(evt);
      var xPos = activePoints[Math.floor(activePoints.length/2)].x;
      var ix = Math.floor(xPos.map(0, 800, 0, data[slug]['length']));
      console.log(xPos);
      console.log(ix);
      $('#'+slug+'-scrub').val(ix);
      updateCloud(slug, ix);
    });

    // Play Button
    $('#'+slug).append(
      '<button type=\"button\" id=\"'+slug+'-btn\" class=\"btn btn-default btn-xs play-btn\" aria-label=\"Play\"><span class=\"glyphicon glyphicon-play\" aria-hidden=\"true\"></span></button>'
    );

    $('#'+slug).append(
      '<button type=\"button\" id=\"'+slug+'-btn-pause\" class=\"btn btn-default btn-xs play-btn\" aria-label=\"Pause\"><span class=\"glyphicon glyphicon-pause\" aria-hidden=\"true\"></span></button>'
    );

    // Load First Clouds
    updateCloud(slug, 0);

    var play;

    $('#'+slug+'-btn').click(function(){

      console.log('clicked ' + slug);
      autoAdvance();
      play = setInterval(function(){
        autoAdvance();
      }, 5000);

      function autoAdvance(){
          var scrubVal = $('#'+slug+'-scrub').val();
          if (scrubVal >= data[slug]['length']-1) {
            console.log("EOR");
            clearInterval(play);
            return; // stop advancing at the end of the book
          }
          var newVal = parseInt(scrubVal, 10) + 1;
          $('#'+slug+'-scrub').val(newVal);
          updateCloud(slug, newVal);
      }

    });



    $('#'+slug+'-btn-pause').click(function(){
      clearInterval(play);
    });


    $("#"+slug+"-scrub").on("input", function(){
      var sectNo = $(this).val();
      console.log(sectNo);
      updateCloud(slug, sectNo);
    });
  });
});
});



})();

 

The rest of my front-end code can be found on GitHub.

Candidate Image Explorer
http://www.thehypertext.com/2015/09/17/candidate-image-explorer/
Thu, 17 Sep 2015

For this week’s homework in Designing for Data Personalization with Sam Slover, I made progress on a project that I’m working on for Fusion as part of their 2016 US Presidential Election coverage. I began this project by downloading all the images from each candidate’s Twitter, Facebook, and Instagram accounts — about 60,000 in total — then running those images through Clarifai's convolutional neural networks to generate descriptive tags.

With all the images hosted on Amazon s3, and the tag data hosted on parse.com, I created a simple page where users can explore the candidates’ images by topic and by candidate. The default is all topics and all candidates, but users can narrow the selection of images displayed by making multiple selections from each field. Additionally, more images will load as you scroll down the page.

[Screenshots: the image explorer filtered by various topics and candidates]

Unfortunately, the AI-enabled image tagging doesn’t always work as well as one might hope.

[Screenshot: an image with incongruous machine-generated tags]

Here’s the page’s JavaScript code:

var name2slug = {};
var slug2name = {};

Array.prototype.remove = function() {
    var what, a = arguments, L = a.length, ax;
    while (L && this.length) {
        what = a[--L];
        while ((ax = this.indexOf(what)) !== -1) {
            this.splice(ax, 1);
        }
    }
    return this;
}

Array.prototype.chunk = function(chunkSize) {
    var array=this;
    return [].concat.apply([],
        array.map(function(elem,i) {
            return i%chunkSize ? [] : [array.slice(i,i+chunkSize)];
        })
    );
}

function dateFromString(str) {
	var m = str.match(/(\d+)-(\d+)-(\d+)T(\d+):(\d+):(\d+)Z/);
	// JavaScript months are zero-indexed, so subtract 1 from the month
	var date = new Date(Date.UTC(+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6]));
	var options = {
	    weekday: "long", year: "numeric", month: "short",
	    day: "numeric", hour: "2-digit", minute: "2-digit"
	};
	return date.toLocaleTimeString("en-us", options);
}

function updatePhotos(query) {
	$.ajax({
		url: 'https://api.parse.com/1/classes/all_photos?limit=1000&where='+JSON.stringify(query),
		type: 'GET',
		dataType: 'json',
		success: function(response) {
			// console.log(response);
			$('#img-container').empty();

			var curChunk = 0;
			var resultChunks = response['results'].chunk(30);

			function appendPhotos(chunkNo) {

				// Guard against scrolling past the last chunk of results
				if (chunkNo >= resultChunks.length) return;

				resultChunks[chunkNo].map(function(obj){
					var date = dateFromString(obj['datetime'])
					var imgUrl = "https://s3-us-west-2.amazonaws.com/electionscrape/" + obj['source'] + "/400px_" + obj['filename'];
					var fullImgUrl = "https://s3-us-west-2.amazonaws.com/electionscrape/" + obj['source'] + "/" + obj['filename'];
					$('#img-container').append(
						$('<div class=\"grid-item\"></div>').append(
							'<a href=\"'+fullImgUrl+'\"><img src=\"'+imgUrl+'\" width=\"280px\"></a><p>'+slug2name[obj['candidate']]+'</p><p>'+date+'</p><p>'+obj['source']+'</p>'
						) // not a missing semicolon
					);
					// console.log(obj['candidate']);
					// console.log(obj['datetime']);
					// console.log(obj['source']);
					// console.log(obj['filename']);
				});

			}

			appendPhotos(curChunk);

			window.onscroll = function(ev) {
			    if ((window.innerHeight + window.scrollY) >= document.body.offsetHeight) {
			        curChunk++;
			        appendPhotos(curChunk);
			    }
			};


		},
		error: function(response) { console.error(response); },
		beforeSend: setHeader
	});
}

function setHeader(xhr) {
	xhr.setRequestHeader("X-Parse-Application-Id", "ID-GOES-HERE");
	xhr.setRequestHeader("X-Parse-REST-API-Key", "KEY-GOES-HERE");
}

function makeQuery(candArr, tagArr) {

	var orArr = tagArr.map(function(tag){
		return { "tags": tag };
	});

	if (tagArr.length === 0 && candArr.length > 0) {
		var query = {
			'candidate': {"$in": candArr}
		};
	}
	else if (tagArr.length > 0 && candArr.length === 0) {
		var query = {
			'$or': orArr
		};
	}
	else if (tagArr.length === 0 && candArr.length === 0) {
		var query = {};
	}
	else {
		var query = {
			'candidate': {"$in": candArr},
			'$or': orArr
		};
	}

	updatePhotos(query);

}

(function(){

$('.grid').masonry({
  // options
  itemSelector: '.grid-item',
  columnWidth: 300
});

var selectedCandidates = [];
var selectedTags = [];

$.getJSON("data/candidates.json", function(data){
	var candNames = Object.keys(data).map(function(slug){
		var name = data[slug]['name'];
		name2slug[name] = slug;
		slug2name[slug] = name;
		return name;
	}).sort();

	candNames.map(function(name){
		$('#candidate-dropdown').append(
			'<li class=\"candidate-item\"><a href=\"#\">'+name+'</a></li>'
		);
	});

	$('.candidate-item').click(function(){
		var name = $(this).text();
		var slug = name2slug[name];
		if ($.inArray(slug, selectedCandidates) === -1) {
			selectedCandidates.push(slug);
			makeQuery(selectedCandidates, selectedTags);
			console.log(selectedCandidates);
			$('#selected-candidates').append(
				$('<button class=\"btn btn-danger btn-xs cand-select-btn\"><span class=\"glyphicon glyphicon-remove\" aria-hidden=\"true\"></span>'+name+'</button>')
					.click(function(){
						$(this).fadeOut("fast", function(){
							selectedCandidates.remove(name2slug[$(this).text()]);
							makeQuery(selectedCandidates, selectedTags);
							console.log(selectedCandidates);
						});
					}) // THIS IS NOT A MISSING SEMI-COLON
			);
		}
	});
});


$.getJSON("data/tags.json", function(data){
	var tags = data["tags"].sort();
	tags.map(function(tag){
		$('#tag-dropdown').append(
			'<li class=\"tag-item\"><a href=\"#\">'+tag+'</a></li>'
		);
	});

	$('.tag-item').click(function(){
		var tag = $(this).text();
		if ($.inArray(tag, selectedTags) === -1) {
			selectedTags.push(tag);
			makeQuery(selectedCandidates, selectedTags);
			console.log(selectedTags);
			$('#selected-tags').append(
				$('<button class=\"btn btn-primary btn-xs tag-select-btn\"><span class=\"glyphicon glyphicon-remove\" aria-hidden=\"true\"></span>'+tag+'</button>')
					.click(function(){
						$(this).fadeOut("fast", function(){
							selectedTags.remove($(this).text());
							makeQuery(selectedCandidates, selectedTags);
							console.log(selectedTags);
						});
					})
			);
		}
	});
});

makeQuery(selectedCandidates, selectedTags);

})();

 

 

Traveler’s Lamp, Part II
http://www.thehypertext.com/2015/05/08/travelers-lamp-part-ii/
Fri, 08 May 2015

Click Here for Part I



Last week, Joanna Wrzaszczyk and I completed the first version of our dynamic light sculpture, inspired by Italo Calvino’s Invisible Cities and the Traveling Salesman Problem. We have decided to call it the Traveler’s Lamp.

Here is the midterm presentation that Joanna and I delivered in March:

[Slides: midterm presentation deck]

We received a lot of feedback after that presentation, which resulted in a number of revisions to the lamp’s overall design. Here are some sketches I made during that process:

[Sketches: revised lamp designs]

Since that presentation, Joanna and I successfully designed and printed ten city-nodes for the lamp. Here is the deck from our final presentation, which contains renderings of all the city-nodes:

[Slides: final presentation deck with renderings of all ten city-nodes]

We built the structure from laser-cut acrylic, fishing line, and 38-gauge wire. The top and base plates of the acrylic scaffolding are laser etched with the first and last page, respectively, from Invisible Cities. We fabricated the wood base on ITP’s CNC router from 3/4″ plywood.

Here are some photos of the assembled lamp:

[Photos: the assembled lamp]

Here’s a sketch, by Joanna, of the x-y-z coordinate plot that we fed into the computer program:

[Sketch: x-y-z coordinate plot of the city-nodes]

And finally, here’s some of the Python code that’s running on the Raspberry Pi:

def tsp():
    # Greedy nearest-neighbor tour: light each city as it is visited.
    startingPin = random.choice(pins)
    pins.remove(startingPin)
    GPIO.output(startingPin, True)
    sleep(0.5)
    while pins:  # repeat until every city has been visited
        distances = []
        for p in pins:
            dist = distance(locDict[startingPin], locDict[p])
            distances.append((dist, p))
            GPIO.output(p, True)  # flash each candidate city
            sleep(0.5)
            GPIO.output(p, False)
        distances.sort(key=lambda x: x[0])
        nextPin = distances[0][1]  # nearest unvisited city
        GPIO.output(nextPin, True)
        sleep(0.5)
        pins.remove(nextPin)
        startingPin = nextPin

Traveler’s Lamp
http://www.thehypertext.com/2015/02/21/invisible-salesman/
Sat, 21 Feb 2015

For my primary project in Sculpting Data into Everyday Objects with Esther Cheung and Scott Leinweber, Joanna Wrzaszczyk and I will be creating a lamp to visualize the traveling salesman problem between a set of cities that Italo Calvino described in Invisible Cities.

This project began with a personal fascination with graph data. A graph is a mathematical structure made up of vertices (a.k.a. nodes) connected by edges (a.k.a. links). Graphs can be directed (meaning the edges point in specific directions) or undirected, and generally look like this:

[Diagram: an undirected graph; credit: mathinsight.org]
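In code, a small undirected graph is commonly represented as an adjacency list; a minimal Python sketch (illustrative only, not part of the lamp's software):

```python
# An undirected graph as an adjacency list: each vertex maps to the
# set of vertices it shares an edge with.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}

# In an undirected graph, every edge is recorded at both endpoints.
assert all(u in graph[v] for u, neighbors in graph.items()
           for v in neighbors)

# Edge count: half the sum of the vertex degrees.
num_edges = sum(len(neighbors) for neighbors in graph.values()) // 2  # 4
```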

Graphs are widely applicable data structures, relevant to a broad range of fields. The traveling salesman problem (TSP), in its classical form, involves a set of cities along with data comprising the distance from each city to every other city. Given a salesman who starts in any given city, what is the optimal path for the salesman to take in order to visit every city once and return to the city from which s/he began?
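Solving TSP exactly is expensive — brute force must check (n−1)!/2 tours — so a small installation typically animates a heuristic instead. Here is a minimal sketch of the greedy nearest-neighbor approach, assuming 2D city coordinates (an illustration of the technique, not our lamp's code):

```python
import math

def nearest_neighbor_tour(coords, start=0):
    """Greedy TSP approximation: always visit the closest unvisited city."""
    unvisited = set(range(len(coords))) - {start}
    tour = [start]
    while unvisited:
        here = coords[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, coords[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour + [start]  # close the loop back to the starting city

tour = nearest_neighbor_tour([(0, 0), (0, 1), (5, 5), (1, 1)])
# tour == [0, 1, 3, 2, 0]
```

The result isn't guaranteed optimal, but it's fast and makes for a legible light animation: one city lights up after another, each the nearest not yet visited.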

The lamp Joanna and I are designing will be a three-dimensional set of vertices, each a 3D printed city designed according to the specifications of one of Calvino’s Invisible Cities. The cities/vertices will be connected with light pipe, connected to LEDs, that will visualize a computer algorithm (likely running on an Arduino or Raspberry Pi) solving the traveling salesman problem in real time between the cities.

We plan to print our cities on the Connex500 printer at NYU AMS as intricate white or black structures embedded inside clear plastic. The Connex500 can make prints like this:

[Photo: an embedded-structure print from the Connex500; credit: 3ders.org]

We plan to make our cities inside spheres. I designed the first one based on the first city in the book, described here:

Leaving there and proceeding for three days toward the east, you reach Diomira, a city with sixty silver domes, bronze statues of all the gods, streets paved with lead, a crystal theater, a golden cock that crows each morning on a tower. All these beauties will already be familiar to the visitor, who has seen them also in other cities. But the special quality of this city for the man who arrives there on a September evening, when the days are growing shorter and the multicolored lamps are lighted all at once at the doors of the food stalls and from a terrace a woman’s voice cries ooh!, is that he feels envy toward those who now believe they have once before lived an evening identical to this and who think they were happy, that time.

 

I focused on the description of “sixty silver domes” and made this in Rhino:

[Renderings: the Diomira city-node in Rhino]

The model of a 4cm-diameter sphere contains two holes: one on the top for an LED or light pipe connection, and one going all the way through to hang the city inside a clear outer enclosure.

Before creating the city above, I created another object in Rhino, representative of what I hope we can achieve with the lamp as a whole:

[Renderings: early concept model of the full lamp]

 


 

Click Here for Part II

Primitive Fractal, Part II
http://www.thehypertext.com/2014/09/14/primitive-fractal-part-ii/
Sun, 14 Sep 2014


SEIZURE WARNING FOR VIDEOS

UPDATE: I was able to solve the lag problem with a combination of two solutions: stopping further recursion when the origin square falls outside a certain window and stopping the creation of new levels when the pattern zooms in. On Abhishek Singh's suggestion, I also added comments to my code. The Github repository has been updated (see the “main” folder).

Above is a new video of the Processing sketch in action. My original post is below…

For this week’s ICM homework, Dan Shiffman asked us to experiment with rule-based animation, motion, and interaction. I decided to expand on the primitive fractal pattern I developed last week and recorded the results in the video above. All the code is available on Github.

The first goal I tried to accomplish was zooming in on the pattern. The only feasible way I found to accomplish this was to regenerate the shape over and over again with different parameters inside a draw loop. By increasing the origin square’s width, I could make the entire pattern grow.

origin = 256

def draw():
  global origin  # 'origin' is reassigned below, so it must be declared global
  background(0,0,100)
  noStroke()
  fill(7, 30, 100, 100)
  rect(256, 256, origin, origin)
  drawFourSquares(256, 256, origin)
  origin *= 1.01

Using *= rather than += allowed for smooth growth of the entire pattern: multiplying by a constant factor grows the square by a fixed percentage each frame, so the apparent zoom speed stays constant, whereas adding a constant amount would make the zoom appear to slow down as the square grows.
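The difference is easy to check numerically (a toy comparison, not part of the sketch):

```python
# Compare the per-frame growth *ratio* of multiplicative vs additive growth.
size_mul, size_add = 256.0, 256.0
mul_ratios, add_ratios = [], []
for _ in range(5):
    mul_ratios.append(size_mul * 1.01 / size_mul)    # always 1.01
    add_ratios.append((size_add + 2.56) / size_add)  # shrinks toward 1
    size_mul *= 1.01
    size_add += 2.56

# mul_ratios stays at 1.01 every frame (constant 1% zoom);
# add_ratios starts at 1.01 but decreases each frame.
```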

Next, I made the colors in the pattern shift. I accomplished this by using the frameCount variable with a modulo operation to make another variable (count) increment from 1 to the number of levels in the fractal.

def draw():
  count = frameCount % (log(origin)/log(2))
  background(0, 0, 100)
  noStroke()
  fill(100-(abs(log(origin)/log(2) - count - 1)/(log(origin)/log(2)))*93,
       100-((log(origin)/log(2))/(log(origin)/log(2)))*70,
       100,
       30+((log(origin)/log(2))/(log(origin)/log(2)))*70)
  rect(origin, origin, origin, origin)
  drawFourSquares(origin, origin, origin, count)

Finally, I combined the two effects to create one visualization:

def draw():
  global origin  # reassigned at the bottom of the loop
  count = frameCount % (log(origin)/log(2))
  background(0, 0, 100)
  noStroke()
  fill(100-(abs(log(origin)/log(2) - count - 1)/(log(origin)/log(2)))*93,
       100-((log(origin)/log(2))/(log(origin)/log(2)))*70,
       100,
       30+((log(origin)/log(2))/(log(origin)/log(2)))*70)
  rect(256, 256, origin, origin)
  drawFourSquares(256, 256, origin, count)
  origin *= 1.01

The main issue I encountered was frame rate lag: the more detailed the fractal, the more the program would lag. I experimented with adding an acceleration factor to overcome the lag, but that seemed to make the lag accelerate along with the animation. I hope to learn more about potential solutions to this issue; perhaps there is a way to get Processing to “ignore” shapes outside a specific field of view. (This has since been solved. See UPDATE at the top of the post.)

To get an idea of how much processing power I’m using when I run this sketch, I used the “top” command in the terminal. These were the results:

[Screenshot: output of `top` while the sketch runs]

As I observed, Java (via Processing) was using over 100% of one CPU core. I would like to run this sketch on a more powerful computer at some point to see what happens.

Primitive Fractal
http://www.thehypertext.com/2014/09/07/primitive-fractal/
Sun, 07 Sep 2014

[Image: the snowflake fractal]

Our first homework assignment for Introduction to Computational Media with Dan Shiffman was somewhat open-ended. Dan asked us to make a static image using the basic shape and line tools in Processing, and to write a blog post about it. I decided to create a primitive fractal pattern. The source code for the image above is available on Github.

I constructed a variation on the fractal curve known as a Koch snowflake. First specified by Swedish mathematician Helge von Koch in 1904, the Koch snowflake is one of the first fractal curves to have been described.

Rather than using equilateral triangles, I used squares. This was my first sketch:

 

[Sketch: fractal notebook drawing]

 

I then mapped out the base pattern, starting with an 8 x 8 unit “origin square”, and derived the relevant equations to transpose coordinates beginning with that square:

 

[Whiteboard photos: coordinate transposition equations]

 

Using these equations as a guide, I then wrote some pseudocode for a recursive function to draw the fractal:

 

[Whiteboard photo: pseudocode for the recursive function]

 

Which turned into…

 

def drawFourSquares(x, y, l):
	l = l / 2
	sTop = drawSquareTop(x, y, l)
	sBottom = drawSquareBottom(x, y, l)
	sRight = drawSquareRight(x, y, l)
	sLeft = drawSquareLeft(x, y, l)
	if l >= 1:
		drawFourSquares(sTop[0], sTop[1], l)
		drawFourSquares(sBottom[0], sBottom[1], l)
		drawFourSquares(sRight[0], sRight[1], l)
		drawFourSquares(sLeft[0], sLeft[1], l)

 

The rest of the code is available on GitHub.

First, I generated the shape with default fills and strokes, just to test my algorithm:

 

[Image: first test render with default fills and strokes]

 

I then removed the strokes, changed the color mode to HSB, and mapped the saturation and opacity of the fills to the length of each square. The result was significantly more attractive:

 

[Image: the pattern in blue with mapped saturation and opacity]

 

Finally, I mapped the hue to the length of each square and used logarithm functions to smooth the transitions between hues, opacities, and saturation levels. The side length halves at each level of the fractal pattern, so by taking binary logarithms (base 2) of the lengths, I was able to make the hue, opacity, and saturation transitions linear across levels, which yields a more diverse spectrum of values for each property. This was the final result, also pictured above:

 

[Image: the final snowflake fractal]
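Concretely: the side lengths form a geometric sequence, so their binary logs form an arithmetic one, and any color property mapped through log2 steps by a constant amount per level (a sketch of the idea, not the sketch's actual fill math):

```python
from math import log2

lengths = [256, 128, 64, 32, 16, 8, 4, 2, 1]  # side halves at each level
levels = [log2(side) for side in lengths]      # 8, 7, 6, ..., 0

# Adjacent lengths differ by a factor of 2, but adjacent log2 values
# differ by exactly 1 -- so a hue mapped linearly from log2(side)
# changes by the same amount at every level of the fractal.
steps = [a - b for a, b in zip(levels, levels[1:])]
```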

 

The constraint for this project was to create a static image. However, in the future I would like to explore animating this pattern, perhaps by shifting the color values to create the illusion of depth and motion. Another option would be to program a continuous “zoom in” or “zoom out” effect to emphasize the potentially endless resolution and repetition of the pattern.

EDIT: See the new dynamic version.
