SYN flooding test

i’ve been testing the idea i described in my last post. in the vid below, i use james woolley’s python script to flood the ball drop game with syn packets. i execute the script to send 2000 packets consecutively to the ip and port the server is listening on. while that’s going, i connect to the game with the ball drop client. the client seems to connect with no delay, which is unfortunate, since a delay is exactly what i want. i’m not sure whether i’ve set something up wrong or whether 2000 packets just isn’t that many.
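to make concrete what each of those 2000 packets actually is: a TCP segment with only the SYN flag set. this is not woolley’s script (which uses scapy); it’s just a stdlib sketch that builds the 20-byte TCP header by hand so you can see the fields. the ports and sequence number are stand-ins, and the checksum is left at zero since the point here is the flags byte.

```python
import struct

def tcp_syn_header(src_port, dst_port, seq=0):
    flags = 0x02          # SYN bit only
    offset = 5 << 4       # data offset: 5 x 32-bit words (20 bytes), no options
    return struct.pack("!HHLLBBHHH",
                       src_port, dst_port,   # source and destination ports
                       seq, 0,               # sequence number; ack is unused in a bare SYN
                       offset, flags,
                       65535,                # advertised window
                       0, 0)                 # checksum (left 0 here) and urgent pointer

hdr = tcp_syn_header(54321, 8080)
print(len(hdr), hex(hdr[13]))   # 20-byte header, flags byte 0x2
```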


well, here we are, a thousand rabbit holes later. when i tried executing the syn flood script with 20,000 packets and the connection didn’t change, i decided i was doing something wrong. i looked into a few other kinds of attacks, but ultimately came back to this one. below are some characteristic screen shots of my wireshark readings.

in the first one, you see a SYN from my assigned port to port 8080, a SYN/ACK from 8080 back to me, and an RST from me to 8080. an RST tells the other side to close the connection because something’s gone wrong, and the client started by the python script kept sending them out when the connections were (predictably) broken. sometimes a message from the actual ball drop client would get through in the middle of all this SYN/ACK/RSTing, which suggested to me that maybe there was something multi-threaded about the connection? and if there are multiple threads, how do you know how many there are and what their capacities are? and how do you block them all? 😱
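the SYN/SYN-ACK/RST labels wireshark shows are just names for bits in the TCP flags byte. here’s a small sketch of that decoding; the bit values come from the TCP spec, not from anything in my capture:

```python
# TCP flag bits, as defined in the TCP spec (RFC 793)
TCP_FLAGS = {0x01: "FIN", 0x02: "SYN", 0x04: "RST",
             0x08: "PSH", 0x10: "ACK", 0x20: "URG"}

def flag_names(flags_byte):
    """turn one TCP flags byte into the label wireshark would show, e.g. 0x12 -> SYN/ACK."""
    return "/".join(name for bit, name in sorted(TCP_FLAGS.items())
                    if flags_byte & bit)

# the three packets from the screenshot, as flag bytes:
print(flag_names(0x02))  # SYN      (me -> 8080)
print(flag_names(0x12))  # SYN/ACK  (8080 -> me)
print(flag_names(0x04))  # RST      (me -> 8080)
```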


here’s another thing that happened: a bunch of SYN packets with no RST packets (great!), with ball drop client messages in between (not great!). the ball drop client messages here are the two not-gray ones in the middle. the arrow is pointing to the payload, which is “L”, for directing the paddle to go left.


so, like, wtf? i went back to where i’d originally gotten the syn flood python script. i realized that i’d totally skipped the first script, which blocks RST packets from being sent out. alas, the utility it relies on, iptables, is for linux and isn’t a thing on mac os x. i went hunting for alternatives and learned about pf.conf and tried to implement a block on RST flags using this solution. double alas. i could edit pf.conf, but couldn’t get the new rules to load.
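for the record, the kind of rule i was trying to get pf to load looked roughly like this, assuming the server is on port 8080. i never got it loaded and running, so treat this as an untested sketch:

```
# pf.conf fragment: drop outgoing RST segments headed for the game port,
# so the kernel can't tear down the half-open connections (untested)
block drop out proto tcp from any to any port 8080 flags R/R
```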


paddleball game

our last assignment was to “design a device that can connect to a server using a TCP socket connection to play a game.”

the game? “This is a multiplayer game in which players collaborate to keep a ball from hitting the ground. Each player has a paddle, and can bound the ball off her paddle. When the ball bounces off your paddle, you get a point. Only the first bounce counts, though; subsequent bounces don’t get you points. But if the ball bounces off another player’s paddle then back to yours, you score again. You can keep scoring forever by bouncing the ball back and forth.” you can learn more about it here.
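the scoring rule reads more clearly as code. here’s a toy model of it, with made-up paddle ids: only consecutive repeat bounces on the same paddle are ignored, so bouncing off another player’s paddle re-arms yours.

```python
def score_bounces(bounces):
    """given the ordered list of paddle ids the ball hit, return points per paddle."""
    points = {}
    last = None
    for paddle in bounces:
        if paddle != last:              # first bounce, or re-armed by another paddle
            points[paddle] = points.get(paddle, 0) + 1
        last = paddle
    return points

# me, me again (no point), you, me (re-armed, point again)
print(score_bounces(["me", "me", "you", "me"]))  # {'me': 2, 'you': 1}
```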

in thinking about how to get the highest score, it’s helpful to have the server code available because it lets me know how the game works. tom gave us some examples of sanctioned ways to play: how to hook up a joystick or an arduino or whatever. but he also mentioned surya, allison burtch, and jon wasserman’s packet injection hack. i’m more interested in a hack than in making a controller, so i’ve been thinking about different options.

i think this part of the server code presents an interesting opportunity:

i imagined it would be possible for starting positions to look like this, in which case we would get a point for every new paddle the ball hit on the way down to the bottom:


i tested this by running the server on my computer and logging in from the game client and my phone:

this confirms to me that it’s possible to get a cascade thing happening if all clients log into the game in quick succession. so now i’m thinking about how to fill up the server’s buffer and only accept connections under certain conditions, say, when there are 10 clients lined up to log into the game. i’m just starting to play with scapy, so we’ll see if i can pull this off in time for tuesday.
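the “buffer” in question is the server’s listen backlog: connections queue up between listen() and accept(), and that queue is exactly what a SYN flood tries to fill. a minimal stdlib sketch of the queue (the backlog size and port are arbitrary here):

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
server.listen(2)                      # backlog of ~2 pending connections
host, port = server.getsockname()

# a client that connects but is never accept()ed sits in that backlog queue;
# a SYN flood stuffs the queue with half-open connections instead
pending = socket.create_connection((host, port))
conn, _ = server.accept()             # accept() pops the connection off the queue
conn.close(); pending.close(); server.close()
```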


there’s something called a SYN flooding attack that might work for filling up the buffer, and i found a script by james woolley that works with scapy to do it. the problem is that i’m not sure how to control how long the buffer stays full. i don’t want to totally break the server; i just want to block connections for a certain amount of time. i hope i don’t have to resort to doing math 😱
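one way to sidestep the math: flood for a duration instead of a fixed packet count. a sketch, with the actual packet send stubbed out; in the real thing send_syn would be a scapy send() of an IP/TCP packet with flags="S" (that’s the standard scapy idiom, not taken from woolley’s script):

```python
import time

def flood_for(seconds, send_syn):
    """call send_syn repeatedly until the deadline passes; return how many were sent."""
    sent = 0
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        send_syn()
        sent += 1
    return sent

# stub that just counts calls instead of touching the network
count = flood_for(0.05, lambda: None)
print("calls in 0.05s:", count)
```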

what problem are we solving?

wireshark and debookee already exist, so what’s the problem we’re trying to solve with this tool?

  1. ours isn’t designed for engineers or netsec people. we think everyone should know how networks work, and we’re designing the experience and interface around this tool to be engaging and legible to people outside of tech bubbles. tl;dr: we’re creating a different context around the information in yr packets.
  2. the tool is interesting and useful on its own, but we’re also eventually releasing it with short episodes that break down recent headlines about hacking (“iot botnet attack!”) so normal humans can understand them.

    fox news explains ddos

packet sniffing prototype

i mentioned many posts ago that i’m working with surya mattu on *stuff* about networks. what started out as *stuff* has narrowed in focus and we’ve started building. we have an ugly prototype of a web-based packet-sniffing tool that lets users on the same network explore how their computers are talking to each other. this has gone through many iterations, mostly riffing on the wireshark interface:

versions 1-4:


version 5:


we’ve settled on a combination of these two, with a toggle that lets you switch between a visual view and hex view. i’ll post screen shots once that’s all built out.

back end

version 5 above has an actual back end, which surya built and which lives here:

what’s happening here? this is an express server application that uses a node library called node_pcap to capture network packets. right now, the server listens for packets on port 3000, parses them, and saves them out to a json file called test.json. separately, the express server serves a static file called packet.json, so if you visit http://localhost:3000/data/packet.json you’ll see the json file that’s used to create the front end visualization in the screen shot above. eventually, we want the server to serve up the packets it sniffs and saves, but for now we’re simulating that situation.

front end

the front end, which lives here, is built with webpack and vue.js. the back end listens for packets traveling over the network, saves and parses them into a json file, then serves that json file to the front end via a socket over port 7777.


stuff that’s happening/stuff to do:

  • meetings!
  • i want to apply to be a part of this workshop on work and automation
  • to read: 1, 2, 3

part one: i was originally interested in the overlap i see between open source software projects and grassroots organizing. this now seems like only half of what i’m interested in. we don’t organize just for the hell of it, just like we don’t “build community” for the hell of it. organizing builds collective power. collective power is important to the extent that it results in material improvements to people’s lives. i’m curious about the extent to which the scalability and accessibility of a distributed open source model can be used to expand the reach of and participation in IRL social movements.

part two: everyone’s talking about digital literacy. but to what end? i’m interested in a digital literacy that empowers people as civic participants. i’m interested in a digital literacy that helps activists build tools to leverage power as workers and users of digital platforms. currently, these platforms benefit from intentional information asymmetries, but the reality is that they would not exist without us. how do we leverage this to demand accountability, fair wages, transparency? this looks different in different kinds of cases. in particular, i’m interested in two kinds of recent cases:

  1. instacart strike.
  2. facebook collaboration with police to censor police killing of korryn gaines.

these are technical design challenges. maybe they look like apps or browser extensions or encrypted email lists or a combination of all the above or something else entirely.

so that’s it. those are the things. going forward, i need to work on elaborating what i see as the lay of the land, the problems, and the opportunities here. aside from that, there are some prototypes to be very sketchily sketched and design challenges to think through. everyone’s talking about algorithmic accountability lalala, but what if we never get access to proprietary algorithms? what if, instead, we hack our way around them the way instacart workers did? i’m interested in a digital literacy, a kind of thinking aided by a technical skill set, that makes this possible on a larger scale.

thoughts on 6a68’s thoughts on open product development

jared wrote really interesting posts about open product development and building a community of contributors around test pilot. through emailing him some long rambly thoughts in response, i actually got to think through some ideas i’ve been mulling over around

  • the overlap between grassroots organizing spaces and open source software
  • infrastructure for communities of contributors that enables them to continue to grow and learn and build together
  • how to mitigate burnout & other problems

so. ramblings below.

“some things the open product dev piece brought up for me:

– it seems like your “generating and discussing…” section assumes that the test pilot community contains/is made up of people with skill sets that mirror mozilla’s staff product teams: roughly, ux/ur & dev. so my first question is: is that true? if not, an issue becomes how to improve the pipeline to the test pilot community so that it becomes true. another issue is what if the ratios are off? more ux than devs or vice versa? how can people plug in with different skill sets? or is the test pilot target audience people who can either do user research or dev?

– another thing i’ve seen play out in activist organizing spaces, which actually feel structurally similar to what i’ve seen in open source in some ways, is that people tend to fall into roles. one person always does logistics, one person always writes press releases, the rest of the people always wait to be assigned something, etc. in open source, i wonder if this would play out like: a few people always come up with new product ideas, a few other people always do research stuff, etc. how do you feel about this? should there be an effort for people to try out other roles and build other skill sets? i think there are pros and cons to trying new roles AND really building ONE skill set, so i’m just putting out the question. related: seems like another thing i’m getting at is a power law distribution of labor and credit for contributions. is that sustainable/how will that play out over time? will it lead to burnout in open source in the way it often leads to burnout in activist spaces? if so, is it important to mitigate in some way?

– one last thing! i am not really familiar with different ways mozilla volunteers plug in or how they build community, so take this next part with that grain of salt. in my experience building community IRL, it seems that different kinds of gathering/communicating structures bring different kinds of people together. so i guess i wonder whether discourse forums/forums in general being the primary mode of collaboration means that certain kinds of people will be drawn to or put off by participating? i think you, john, and/or wil actually already brought this up. maybe marketing can help with pitching the collaboration in different ways to different audiences and providing additional/alternative ways to plug in. for example, maybe IRL test pilot hackathons could be a way of reaching folks and bringing people together in a way that the forum wouldn’t.”

coding together at scale

i found a paper i’m really excited about! this finding really sticks out to me: “We find a very low reciprocity of the social ties, which is remarkably different from the findings of studies of other types of social networks.”

this reminds me of another line that’s been stuck in my brain for a few months now: “Email, texting and messaging apps are social reciprocity factories” from tristan harris’s very insightful article last spring. harris is talking about how so many social technologies hijack our attention and agency. in this context, github’s “low reciprocity of social ties” feels like a positive characteristic. i wonder if there is something here about the importance of weak links in social networks. barabasi talks a little about this in linked which i’m still reading for tom’s class.

i do wish there were more in this study about github projects as nodes for connecting people, even if they’re not exactly collaborators on a branch. i only skimmed, so maybe it actually is in there. either way, excited to have a lead here.