a scientific thing

we are supposed to “Incorporate a scientific layer into your project.”

some scientific layers i’m interested in thinking about are 1. yochai benkler and 2. network theory.

benkler writes and speaks about open source economics, and i want to flip through his book the wealth of networks.

and i’m thinking about network theory because of tom’s class. i don’t necessarily think it’s appropriate to apply mathematical concepts to social worlds, but there may be a dataviz or something in the weak links that are cultivated through open source projects of a certain size.

who are the guantanamo detainees?

do you know? i don’t know. until a few days ago, i’d never taken the time to look through the files that wikileaks published a few years ago. wikileaks has these inmate profiles indexed by inmate number or name.

part of my interest in this is about learning how someone ends up in the atrocious place that is guantanamo without due process. how bad are these bad guys? what kind of bad are they? are they like us? did we have a role in making them? in starting to read these profiles, i’m learning about these men and am particularly interested in the “prior history” section of these documents.

another part of my interest in this data set is something about chelsea manning not being in solitary in vain. if she risked her life to leak this information, we sure as hell better do something good with it, right?

wikileaks.org interface


making a javascript array from ISN and “prior history”
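the shape i have in mind is roughly this — a sketch, where the ISNs and "prior history" text are placeholders i made up, not real records from the files:

```javascript
// placeholder records — the real ISNs and "prior history" text would come
// from the wikileaks detainee profiles; these values are made up.
const detainees = [
  { isn: "US9YM-000027DP", priorHistory: "placeholder prior history text" },
  { isn: "US9SA-000039DP", priorHistory: "" }
];

// look a detainee up by ISN, the way the wikileaks index does by number
function byISN(isn) {
  return detainees.find(d => d.isn === isn);
}

// keep only profiles that actually have a "prior history" section
const withHistory = detainees.filter(d => d.priorHistory.length > 0);
```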


first draft

3 next steps

i see my project in this studio as framing some of the technical things i’m doing outside of class in terms of what it means to build stuff with people; in other words, this project is about collaborative projects as a medium, as process.

3 next steps:

  • sketch out [in words] the main argument/hypothesis that’s floating around in my brain. something something scalable collaboration.
  • pick 3 writers, thinkers, or artists who have said things about this
  • make some wordless gifs or sketches around concepts i learn about

project statement/how i want to spend the semester:

my brain dump/storm raised a bunch of formats, sources, and projects i’m interested in. i’m really interested in the role of the web in producing, revising, distributing ideology and history. that’s too much for one semester. so to narrow things a little: this semester, i’d like to articulate, in words and pictures, something about certain kinds of tech as a medium for collaborative process, and the relationship between collaborative process and ideology.

a funny thing happened on the way to the nsa(.gov)


we talked about traceroute last week in understanding networks. this led me down a thousand rabbit holes, including this instructive powerpoint presentation, featuring:

Random Traceroute Factoid

“The default starting port in UNIX traceroute is 33434. This comes from 32768 (2^15 or the max value of a signed 16-bit integer) + 666 (the mark of Satan).”


but also more applicable things like address naming conventions and how to notice different possible relationships between network types (p21).


i wanted to see if anything interesting happened when i ran traceroute nsa.gov.

answer: not really. i ran whois on some of these, but they’re just regular ol’ cloud companies in new york and massachusetts and colorado. i guess this makes sense because the nsa probably doesn’t store all of our stuff on the same server that hosts the nsa.gov website.

from there, i tried to find an isis website, thinking that that might be a better way to find an nsa server along a traceroute. it was surprisingly difficult for me to find one via (english) google or twitter. i did learn about a quarterly isis magazine, but it had no web presence to speak of. #printnotdead

googling “nsa traceroute” pulled up a wired article from 2006 which lists the address of the folsom street web carrier hotel in san fran where the nsa was mirroring everyone’s communication. the article said to look for the string tbr2-p012201.sffca.ip.att.net in your traceroute or, really, any att.net string. still no dice. but, per the powerpoint presentation above, i was able to tell that the “sf” in there probably stands for “san francisco”.

it felt like i’d hit a dead end, so i read the manual page for traceroute to see if there were any arguments i could add to my traceroute command to give me more information. -D looked promising:

“When an ICMP response to our probe datagram is received, print the differences between the transmitted packet and the packet quoted by the ICMP response. A key showing the location of fields within the transmitted packet is printed, followed by the original packet in hex, followed by the quoted packet in hex. Bytes that are unchanged in the quoted packet are shown as underscores. Note, the IP checksum and the TTL of the quoted packet are not expected to match. By default, only one probe per hop is sent with this option.”

i dunno, maybe looking at the packet contents could be helpful? here’s a sampling:

from the definition in the manual page, we know the structure of this blob of letters and numbers is

[human-readable(ish) header]

[outbound packet contents]

[inbound packet contents with existing stuff as underscore and new stuff denoted]
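as a sanity check on my reading of that format, here's a tiny sketch of how the underscore notation could be produced — the hex strings below are made up, not from a real capture:

```javascript
// reproduce traceroute -D's underscore notation: hex digits that match
// the transmitted packet become underscores, changed ones are shown.
function underscoreDiff(sent, quoted) {
  return [...quoted]
    .map((ch, i) => (ch === sent[i] ? "_" : ch))
    .join("");
}

// e.g. a made-up word where the first byte changed in transit:
underscoreDiff("4011", "0111"); // → "01__"
```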

so, a few interesting things, although not really what i was looking for:

  • tl decrements every time it’s sent out, and always comes back as 1. this must be the “time to live”!
  • the bytes? bits? under sum always leave as 0 and come back as something slightly different. maybe this is just to notify that it’s a new packet? or maybe the TTL changes the packet a little?
  • the ts always leaves as “00” and comes back as “08”. maybe 00 means outgoing and 08 means incoming? idk, tbh.

eventually, i somehow ended up at what i think is the traceroute spec, which sort of verified pieces of this. hopefully, i can get some more insight during class this week.

see, i told you! rabbit holes…

stupid network & mother earth mother board

the dawn of the stupid network

the difference between smart networks—where scarcity of infrastructure & bandwidth mandated maximizing the efficiency of bits, value came from creating services, expansion was expen$ive, and endpoints (telco terminals, telephones) were *just* endpoints—and stupid networks—where bandwidth becomes abundant and cheap, bits go in one end and out the other, and processing happens at the endpoints.

design assumptions of telephone networks: “Theoretically, a local exchange can serve up to 10,000 telephones, e.g., with numbers 762-0000 through 762-9999. The design assumption, though, is that only a certain percentage of these lines, maybe one in 10, are active at any one time.” when more people use phones, or when the internet happens, this assumption breaks the system.
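the arithmetic behind that break, roughly — the 10,000-line and 1-in-10 figures are from the quote; the 2-in-10 internet-era figure is just mine, for illustration:

```javascript
// 10,000 lines per exchange, provisioned for ~1 in 10 active at once
const lines = 10000;
const provisionedActive = lines / 10;   // 1,000 concurrent calls

// a 3-minute voice call vs. a dial-up session that stays up for hours:
// if even 2 in 10 lines are now active at once, demand is double what
// the exchange was engineered for (illustrative number, not from the text)
const internetEraActive = lines * 0.2;  // 2,000 concurrent calls
const overload = internetEraActive / provisionedActive; // 2x
```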

interesting to note the revenue-generating/value-adding things these companies came up with:

  • call routing
  • caller options (press 1 for…)
  • database lookup based on number you call from

“Stupid Networks have three basic advantages over Intelligent Networks – abundant infrastructure; underspecification; and a universal way of dealing with underlying network details, thanks to IP (Internet Protocol)”

“repertoire of different data handling techniques” makes it possible to handle lots of different kinds of traffic on the same infrastructure.

mother earth mother board

jesus christ, neal stephenson is obnoxious. but once you get past that:

“The cyberspace-warping power of wires, therefore, changes the geometry of the world of commerce and politics and ideas that we live in. The financial districts of New York, London, and Tokyo, linked by thousands of wires, are much closer to each other than, say, the Bronx is to Manhattan.”

“wires have never been perfectly transparent carriers of data; they have always degraded the information put into them.”

“(the distinction between countries and companies is hazy in the telco world)”

“Without rubber and another kind of tree resin called gutta-percha, it would not have been possible to wire the world.”

“Virtually all communications between countries take place through a very small number of bottlenecks, and the available bandwidth simply isn’t that great.”

unlike cable over land, where the surrounding air doesn't interfere because it's a bad conductor, cable underwater has this technical challenge: “the ocean serves as the ground wire.”

“Daily and Wall preside over this [FLAG] operation, which is Western at the top and pure Thai at the ground level”

“Nynex and AT&T have their offices a short distance from each other in Manhattan, but the war between them is being fought in trenches in Thailand, glass office towers in Tokyo, and dusty government ministries in Egypt.”

“Cables have always been financed and built by telecoms, which until very recently have always been government-backed monopolies.” privatization of infrastructure was a game-changer.

“In deep water, where the majority of FLAG is located, the work is done by cable ships and has more in common with space exploration than with any terrestrial activity.”

everything goes in a Big Room Full of Expensive Stuff. “Early cable technicians were sometimes startled to see their cables suddenly jerk loose from their moorings inside the station – yanking the guts out of expensive pieces of equipment – and disappear in the direction of the ocean, where a passing ship had snagged them.”

“The first cables carried telegraphy, which is as purely digital as anything that goes on inside your computer. The cables were designed that way because the hackers of a century and a half ago understood perfectly well why digital was better. A single bit of code passing down a wire from Porthcurno to the Azores was apt to be in sorry shape by the time it arrived, but precisely because it was a bit, it could easily be abstracted from the noise, then recognized, regenerated, and transmitted anew.”

cue mr. shannon’s gorgeous drawring:


initial ramblings

this semester, i’m taking a project development studio. i’d like to use the time and structure of this class to think and write about some projects i’ll be working on:

  • the p5-web-editor with cassie
  • a project about networks with surya

these are both very different projects, but i’m interested in thinking about what draws me to them and what they might have in common.

let’s start with the p5 web editor. this is an open-source tool for learning how to code with the creative coding language, p5.js.

the project with surya is different. this is a teaching and advocacy tool meant to make the process of learning about networks engaging and fun. i’ll be working on web episodes and thinking a lot about audience and tbd stuff.

the things that excite me most about any project are 1. the ideas behind the project and 2. who i get to work with. i am totally thrilled to work with both of the people leading these projects because i think they’re thoughtful and creative and kind and really smart.

switching gears. the backdrop of everything always for me is kafka and judith butler and hannah arendt. since i read eichmann in jerusalem a million years ago, i have never stopped being haunted by the idea that what makes people do evil things is a lack of imagination, an inability to think. what leads to this state of affairs? what is the role of bureaucracy here? and ideology? where can i possibly intervene?  what special opportunities does the internet present? demand? what about code? collaborative projects?

both of these projects are open-source or have elements of open-source thinking. i want to use the time in this studio to get specific about the difference between “free” and “collaborative” and “open-source.” they are not all the same. further: i think part of my excitement about open-source comes from a belief in the mcluhan thing that the medium is the message. if we are collaborating, if we are thinking and teaching each other along the way, we cannot be doing harm. of course, this is not always true. i wanna think about when it is true and when it’s not true.

barabasi open-source fail

over the course of completing our first series of class readings, i did an open-source fail: i forgot that the whole world is not a github repo and i shared a pdf of a section of a copyrighted book with our class. tom asked that i remove the pdf from our class email group out of respect for copyrights. i was surprised that i’d broken a rule and i contacted the nyu library, where i’d gotten the ebook originally, to find out more. here’s the response i got from the library’s legal specialist:


a few link hops away from the one she posted, i found a bunch of stuff about copyright and fair use. there’s my “mass e-mail to your class” right next to the high risk stop light:


this is all very curious to me. what’s the point of having the ability to easily download a portion or an entire book to pdf if it’s not to share the pdf? i know the correct answer is “to read or print out exactly one copy for yourself and yourself only!” but the reality is that sharing digitally is an affordance of having a digital file. counter arguments just don’t make sense.

what makes more sense is admitting that we have a bunch of real mismatches here: between the affordances of digital information and the needs of knowledge producers and distributors to be compensated for their work. surely, there are precedents re: how to deal with this problem. the camera and the printing press are also technologies for copying and sharing the work of a single person. i’d be curious to learn more about the histories of those and this question of copyrights in their wake.

another interesting thing i found:

“Copyright law provides a classroom exception in section 110(1) that allows instructors to display or show entire copyrighted works during the course of a face-to-face classroom session.”

i love a good loophole. would it count as “a face-to-face classroom session” to share a digital copy of a book as long as each page also contained a photo or video of the professor’s face? i’m only partly joking. my point is: how can a rule like that possibly hold up in this era of MOOCs? who is it there for anyway?

anyway. barabasi is brilliant. i’m excited for the next few chapters, which you can rest assured i will never share with anyone ever. these parts of the reading about the google outage were also interesting:

“Google previously suffered a similar outage when Pakistan was allegedly trying to censor a video on YouTube and the National ISP of Pakistan null routed the service’s IP addresses. Unfortunately, they leaked the null route externally. Pakistan Telecom’s upstream provider, PCCW, trusted what Pakistan Telecom was sending them and the routes spread across the Internet. The effect was YouTube was knocked offline for around 2 hours.”

“When I figured out the problem, I contacted a colleague at Moratel to let him know what was going on. He was able to fix the problem at around 2:50 UTC / 6:50pm PST. Around 3 minutes later, routing returned to normal and Google’s services came back online.”

update: i sent a snarky question back to the email librarian, to which she graciously responded.

“It’s why we fight so hard for open access. I encourage you and your classmates to make your scholarship OA and to encourage your professors to do the same. In the meantime, we provide access to what we can given the contractual restraints publishers put on us.

Welcome to the world (business) of scholarly publishing. Glad you’re fired up. Join us in fighting to (legally) change it.”