we’re talking about ~list comprehensions~ in class so i’m testing this demo code on the executive order i was playing with earlier.

$ only-5-letter-words.py <EO1-clean.txt >test.txt

dummy sites for herbivore

eve and i put up some really basic form websites, one on HTTP and one on HTTPS, because we want to demo how HTTP form submissions are totally interceptable. the sites need some cleanup, but the test was still helpful and i think we’re on the right track! check it out:


the cut up method, uncreative writing, delusions of whiteness, et al.

the cut up method, william burroughs

into this: “You can not will spontaneity. But you can introduce the unpredictable spontaneous factor with a pair of scissors.”


i’ve been wanting to try this method with windows open on my computer, like this:

uncreative writing, kenneth goldsmith

“the act of pushing language around”

“gift economies, open-source cultures…” 🙄

“Lethem’s piece is a self-reflexive, demonstrative work of unoriginal genius.” o rly?

“For them, the act of writing is literally moving language from one place to another, proclaiming that context is the new content.” ah, like financialization! the value is in the asset bundling. or something.

delusions of whiteness in the avant garde, cathy park hong

“the luxurious opinion that anyone can be ‘post-identity’”

“expired snake oil”, “masturbatory exegesis”

what does she mean by this? “in complete transcription, in total paratactic scrambling,”

“Here is how Dworkin and Goldsmith characterize Zong: “the ethical inadequacies of that legal document . . . do not prevent their détournement in the service of experimental writing.” God forbid that maudlin and heavy-handed subjects like slavery and mass slaughter overwhelm the form!”

omg omg i promise i will not c&p the entire essay, but… “To be an identity politics poet is to be anti-intellectual, without literary merit, no complexity, sentimental, manufactured, feminine, niche-focused, woefully out-of-date and therefore woefully unhip, politically light, and deadliest of all, used as bait by market forces’ calculated branding of boutique liberalism. Compare that to Marxist—and often male—poets whose difficult and rigorous poetry may formally critique neoliberalism but is never “just about class” in the way that identity politics poetry is always “just about race,” with little to no aesthetic value.”

“…say a few more panels on forgotten subaltern poetry for the next wax museum conference?” 😱


all of trump’s executive orders are hosted on the whitehouse.gov site like blog posts. i pulled them all down using curl and put them into individual text files (which, btw, i wonder if there is a feed of these somewhere?):

i already loved HTML, but now i love it even more because each <tag> is followed by a newline. oh right obvi this is because browsers are trying to parse things just like i am. i am a browser. not a very good browser yet :-/

i tried wc -l and wc -m to count lines and chars. i’m sure one day these commands will be useful for something?
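they're at least tiny to redo in python (with the caveat that the real wc -m is locale-aware, so len() is only approximately the same thing):

```python
# toy python version of wc -l and wc -m; len() does all the work.
# wc -l counts newline characters, which splitlines() matches as long
# as the text ends with a trailing newline
def wc(text):
    return {"lines": len(text.splitlines()), "chars": len(text)}
```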

i cleaned up one of the executive orders by deleting all the stuff at the top and bottom, leaving just the <div>s and <p>s that contained EO content. i should figure out how to do this programmatically. maybe

  1. look through file with my actual eyeballs
  2. grep parent div that i want and remove anything before it.
  3. not sure how i would find the closing </div> for that section to delete everything after…
  4. line.strip() to get everything on its own line
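a sketch of what steps 2–4 might look like in python. there's a big assumption baked in: that the opening <div> and its closing </div> can each be found with a marker string, which nested divs would totally break (hence the uncertainty in step 3). extract_section is my own made-up helper, not anything from class:

```python
# sketch of doing the cleanup programmatically; assumes the EO content sits
# between a recognizable opening <div> and its closing </div>, which is
# naive (nested divs would break the closing-tag search in step 3)
def extract_section(html, start_marker, end_marker):
    lines = html.splitlines()
    # step 2: find the parent div i want, dropping everything before it
    start = next(i for i, line in enumerate(lines) if start_marker in line)
    # step 3: find the closing tag, dropping everything after it
    end = next(i for i, line in enumerate(lines[start:], start) if end_marker in line)
    # step 4: strip whitespace so each tag sits cleanly on its own line
    return [line.strip() for line in lines[start:end + 1]]
```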

anyway, i did it manually for now. i used the randomizer.py code from class to shuffle the lines of the first executive order.

$ cat EO1-clean.txt | python randomizer.py > output.txt
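randomizer.py came from class, so i won't claim this is exactly it, but a line shuffler only needs a few lines of python:

```python
# sketch of a randomizer.py-style line shuffler: read every line from
# stdin, shuffle the order, write them back out
import random
import sys

def shuffle_lines(lines):
    shuffled = list(lines)    # copy so callers keep their original order
    random.shuffle(shuffled)  # shuffles in place (and returns None)
    return shuffled

if __name__ == "__main__":
    sys.stdout.writelines(shuffle_lines(sys.stdin.readlines()))
```

(also, no cat needed: python randomizer.py < EO1-clean.txt > output.txt does the same thing.)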

and then, since it was HTML, i put it in an index.html file.

i tried to do a slightly different version that splits each line at “law” and removes that word, then joins everything back together, and shuffles.

but it just resulted in a thousand AttributeErrors
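i can't be sure without the broken script in front of me, but one classic way to get AttributeErrors here is using the return value of random.shuffle(), which shuffles in place and returns None. a version of the idea that sidesteps that trap:

```python
# a working sketch of the split-at-"law" idea; splitting on the word also
# removes it, and the shuffle has to happen in place, not via a return value
import random

def cut_at_word(lines, word="law"):
    pieces = ["".join(line.split(word)) for line in lines]
    random.shuffle(pieces)  # in place; chaining off its return value gives you None
    return pieces
```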

so but what i really want to do is a cutup of both content and HTML tags and have the css still apply to the tags. that way, i’d get divs and buttons and menus all over the place.

update: the application formerly known as ajooba

we had our first semi-official user testing session! i made a document that i thought could guide our session, but it was actually hard to stick to once we got started. i think for future user testing sessions, none of us devs/designers should be in the room because we talk too much and wanna go into great detail about what we ~intended~ with different features. we still got great feedback because we were showing the tool to people who know a ton about teaching and technology. next time will be a more formal thing, and i think we’ll write a script/list of objectives for someone else to administer.

eve took amazing notes, which we converted into a list of issues file-able on github:

she’s busy animating transitions that will hopefully make some of the information we’re conveying more intuitive to understand. i’m really excited about what she’s working on.

meanwhile, i’ve been trying to wrap my brain around the code base since i stopped keeping up with it after surya did a big awesome refactor a few months ago. the front is vue.js with vuex. the back is node.js. the back talks to the front via sockets. that’s about where i’m at right now. i made this sketch of vue files:

and am redoing the annotated file tree exercise:

|-- NOTES.md
|-- README.md
|-- app
|  |-- App.vue
|  |-- components
|  |  |-- Console.vue
|  |  |-- InfoBar.vue
|  |  |-- NavMenu.vue
|  |  |-- ToolBar.vue
|  |  |-- tools
|  |  |  |-- Network.vue - frontend table for network view (columns, ability to sort, etc)
|  |  |  |-- Sniffer.vue - frontend table for sniffer view (columns, ability to sort, key code shortcuts, etc)
|  |  |  `-- SnifferPayload.vue - frontend window for payload view in sniffer tool; this is where the HTTP vs HTTPS text lives
|  |  `-- viz
|  |     |-- Grid.js
|  |     |-- Viz.vue - for testing; deleted
|  |     |-- VizTree.vue - vue template for visualization area
|  |     `-- VizTreeStyleParams.js
|  |-- filters
|  |  `-- index.js
|  |-- main.js - everything that's in the app div, rendered in index.html
|  `-- store
|     |-- modules
|     |  |-- sniffer.js - includes actions, getters, mutations re: newPacket, clearSnifferInfo
|     |  |-- toolbar.js - includes actions, getters, mutations re: currentTool, currentView, toolRunning, toolNames, clearToolbarInfo
|     |  `-- network-info.js
|     |-- actions.js
|     |-- getters.js
|     |-- index.js
|     `-- mutation-types.js - this is basically a list of all the ways to commit new data to the vuex store. this is how state is updated in network view, sniffer view, and toolbar side panel. all these mutation types are used in the 'modules' files: sniffer.js, toolbar.js, network-info.js
|-- assets
|  `-- imgs
|     `-- play.png
|-- dist
|  |-- build.js
|  `-- public
|     `-- fonts
|        |-- photon-entypo.eot
|        |-- photon-entypo.ttf
|        `-- photon-entypo.woff
|-- fonts
|-- herbivore-darwin-x64 - binary
|-- index.html - thing that renders app div from app/main.js, build.js script
|-- main.js - first thing to launch. sets new browser window, sets sudo permissions for sniffing, sets sockets that talk to vue.js and network scripts
|-- network-scripts
|  |-- Network.js - network constructor function; outputs to terminal. to init, _getHostNamePromise, _getHostName, _scanArpTable, _pingSubnet, _getAllHostnames, _checkHost, _getHostBuffer, cmd, start, stop
|  |-- Sniffer.js
|  |-- ToolManager.js
|  |-- old
|  |  |-- pcap-parser.js
|  |  `-- tcp-test.js
|  `-- pcap-filters.js
|-- package.json
|-- styles - photon styling, plus old scss stuff
|  |-- main.scss
|  `-- photon.css
`-- webpack.config.js - webpack builds the dist/build.js file that's rendered in index.html

a bunch of experiments with amazon annual reports

unix text processing commands in python

tr 'value' for 'surveillance'

python string methods docs for future reference

experiment 1: grep, then print line up to 100 words

$ grep 'We' amz_1997_shareholder_letter.txt | python strip_line.py >amz_we_1997.txt

$ grep 'We' amz_2015_shareholder_letter.txt | python strip_line.py >amz_we_2015.txt
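strip_line.py isn't shown above; my sketch of the “print line up to 100 words” part just truncates each grepped line to its first 100 words:

```python
# sketch of strip_line.py: pass each line through, truncated to its
# first 100 words (rejoined with single spaces)
import sys

def first_n_words(line, n=100):
    return " ".join(line.split()[:n])

if __name__ == "__main__":
    for line in sys.stdin:
        print(first_n_words(line))
```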

experiment 2: grep, then print line between 50 and 100 chars

$ grep 'We' amz_1997_shareholder_letter.txt | python strip_line_50_100.py >amz_50-100_1997.txt

$ grep 'We' amz_2015_shareholder_letter.txt | python strip_line_50_100.py >amz_50-100_2015.txt

grep 'user base' fb_2015_annual_report.txt | python strip_line_50_100.py >fb_user_base_chunk_2015.txt
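and strip_line_50_100.py is the same idea but filtering instead of truncating; my sketch of it only passes through lines whose length lands between 50 and 100 characters:

```python
# sketch of strip_line_50_100.py: keep only the lines whose length
# (without the trailing newline) falls between 50 and 100 characters
import sys

def lines_between(lines, lo=50, hi=100):
    return [line for line in lines if lo <= len(line.rstrip("\n")) <= hi]

if __name__ == "__main__":
    sys.stdout.writelines(lines_between(sys.stdin))
```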

i think some questions these experiments raise for me are:

  • okay, i have some blocks of text. i’m not used to thinking about chunks of text in a structural way. what do i do with the chunks?
  • how do i break up the chunks in a programmatic way?
  • do the chunks have anything to do with the content?
  • highlighting the corporate jargon-y-ness of annual reports is not very interesting. what’s a more interesting thing to do with corporate jargon?
  • workflow! should i create a file for each new experiment? only good experiments? what’s the best way to name these slight variations? how do i document the command? i love the idea of these commands with slight variations as a score <3

we believe these lawsuits are without merit

i was trying to do a different thing when i entered
$ cat 2015-Annual-Report.txt | grep 'connect' | tr '.' '\n' | sort
but this was part of the output and i like it:

i looked for a related thing in a different text:
$ grep 'We' blackreconstruction.rtf | sort
there are some We’s in quotes:

and some not:


$ ~/command/line/adventure

our first homework assignment for reading and writing electronic text with allison parrish is, shockingly, to read and write electronic text.

i completed this great series of command line exercises; read padgett; loved some of these sentences from loss pequeño glazier’s “grep: a grammar” either because i’m a dork or a sucker for obscurantist writing about writing or both:

“writing as the action of production (process). That is, to a viewpoint where it’s the procedure or algorithm that counts, the output being simply a by-product of that activity.”

“Such materiality is evident in concrete conceptions of language: “literal strings,” “strings,” “regular expressions,” and “compound expressions” are among the way language is viewed in the world of grep.”

“Like the hole in Pollock’s paint can, a grep is an opening into the world of the materiality of words constituting the electronic text file.”

but my favorite part was the command line adventure of installing pdftotext. when you download the precompiled binary from the website, you get this:

i’m used to a GUI interface for installing stuff, so this was new. i opened the INSTALL instructions, which say:

for step 1, i couldn’t figure out how to copy an entire directory so i just cp’d one executable at a time.

with step 2, i ran into a problem.

the terminal kept telling me i was using the cp command wrong. some googling revealed that this happened because /usr/local/man/man1 doesn’t exist on my mac; the man pages actually live in /usr/share/man/man1.

i sudo installed in the correct directory and checked that the page existed with man pdftotext:

voila! then, running pdftotext 2015-Annual-Report.pdf exported the 2015 facebook annual report to a text file with the same name in the same directory where i ran the command.