adventures in elisp

samer masterson

changes in ram usage from 2gb to 8gb in this old laptop

I’ve been aching for an x86 laptop for a while now, and I found my Dad’s old laptop, an HP ProBook 4510s from 2009, in his closet. After successfully installing Ubuntu 14.04 (after multiple failed attempts to put Arch on it), it became a usable laptop! It only had 2gb of RAM, though, and I kept bumping into issues with it. Starting the terminal immediately after closing it was noticeably sluggish, and it would start swapping if I had too many tabs open in Firefox. I was constantly closing unused tabs and applications so I could have enough space to breathe.

I bought 8gb of RAM (after buying 6gb of DDR2 RAM that didn’t fit, whoops) and thought it would be fun to record the results! I used a simple shell script hooked up to /proc/meminfo to gather the data, and I recorded my memory usage for about the same amount of time for the before and after data:


OUT=mem_after  # or whatever you want to name the output file
MATCH='MemTotal|MemFree|Buffers|Cached'  # the /proc/meminfo fields to record

while true ; do
    echo "start" $(date +%s) >> "$OUT"
    /bin/grep -E -e "$MATCH" /proc/meminfo >> "$OUT"
    sleep 5
done

And here are the results:

06-01-2018: Unfortunately, the results have been lost to the sands of time

And what are my conclusions? I’m not exactly sure :P

surprising: DOM manipulation by modifying innerHTML faster than using appendChild

For my multiplayer snake game, I needed to be able to modify an unordered list (“ul”) by wiping all the “li” tags under it and generating them again for the scores of all of the snakes. My first idea was to create a string of “li” tags for all of my scores, and then set that as the innerHTML property of my “ul” tag. I’m vaguely familiar with the idea that DOM operations are expensive, though, and so I reached out to the DOM wizards I knew for more efficient ways of doing the same thing. They told me to try creating an “li” element and appending that to ul with appendChild.

So, I had two ways of doing the same thing, but I wanted to know which way was faster. The “appendChild” method seemed “closer to the metal” than setting innerHTML to a gigantic string, but modifying the DOM multiple times might be more expensive than doing it all at once. I knew I wouldn’t be able to go to bed without knowing which technique was better, so I measured the speed of both, and the results were surprising!
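For the record, the two approaches look roughly like this. This is a sketch, not the actual code from test.js — the `scoresToHtml`/`renderScores` names and the score format are mine:

```javascript
// Method 1: build one big string, then assign it to innerHTML in one shot.
function scoresToHtml(scores) {
  return scores.map(s => "<li>" + s.name + ": " + s.score + "</li>").join("");
}
// usage: ul.innerHTML = scoresToHtml(scores);

// Method 2: create an <li> element per score and append each one.
// `doc` is the document object (passed in so the function is easy to test).
function renderScores(ul, scores, doc) {
  ul.innerHTML = "";                    // wipe the old list
  for (const s of scores) {
    const li = doc.createElement("li");
    li.textContent = s.name + ": " + s.score;
    ul.appendChild(li);                 // one DOM mutation per item
  }
}
```

In the browser you’d call it as `renderScores(document.querySelector("ul"), scores, document)`.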

All of the tests are run on Google Chrome 34.0.1847.134 on a Samsung ARM Series 3 Chromebook. The only browser tab running is the test. The times are rounded to the nearest thousandth.
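The harness itself is simple. Here’s a sketch of the kind of timing loop I mean (the function names are illustrative, not the actual contents of test.js):

```javascript
// Compute mean and (population) standard deviation of a list of samples.
function stats(samples) {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance =
    samples.reduce((a, b) => a + (b - mean) ** 2, 0) / samples.length;
  return { mean, stddev: Math.sqrt(variance) };
}

// Run fn `trials` times and time each run in milliseconds.
function timeTrials(fn, trials) {
  const samples = [];
  for (let i = 0; i < trials; i++) {
    const start = Date.now();
    fn();
    samples.push(Date.now() - start);
  }
  return stats(samples);
}
```

In a browser, `performance.now()` is a finer-grained clock than `Date.now()` if it’s available.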

Test results for the innerHTML method:
For 30 trials with 100000 elements in the inner loop
Average: 684.133 ms
Standard Deviation: 45.281 ms

06-01-2018: Unfortunately, this image has been lost to the sands of time

Test results for the appendChild method:
For 30 trials with 100000 elements in the inner loop
Average: 880.3 ms
Standard Deviation: 280.129 ms

06-01-2018: Unfortunately, this image has been lost to the sands of time

The innerHTML method is faster and more consistent than the appendChild method. The appendChild method has three outliers that spike to almost 2000 ms (!!!), whereas the innerHTML times are much more tightly clustered. The standard deviation for appendChild is really high, and I tried to rein it in by increasing the number of trials, but that didn’t have any effect on it.

The results for appendChild cluster right below 800 ms if you disregard the outliers, which is higher than the average time for innerHTML by about 100 ms.

Who would have guessed that the best way to make large changes to the DOM would be to set innerHTML to a gigantic string? :) I’m happy with this, because I think the innerHTML method is clearer.

If you want to try this at home, here are the files I used to test it: test.html and test.js. If you notice anything interesting, send me an email (samer{@}this domain) because I’d like to hear about it! Also, if there are any other ways of doing this, I’m interested in running more tests. :)

(Shout out to Julia Evans for her awesome introduction to ipython notebook and pandas!)

compilers - week 1

I’m on track so far! I got 100% on my lexer. Flex is a ton of fun to use – you don’t worry about the implementation, all your focus goes towards getting your map[regexp]token correct (that’s Go syntax). Lexers, consider yourselves demystified!
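If you haven’t seen the rules-table style before, here’s a toy sketch of the idea in JavaScript — not Flex (which generates C and does the matching for you), and the token names are only illustrative:

```javascript
// Each rule pairs an anchored regexp with the token it produces.
// Rules are tried in order; a null token means "skip this text".
const rules = [
  [/^\s+/, null],                       // skip whitespace
  [/^class\b/, "CLASS"],                // keyword
  [/^[0-9]+/, "INT_CONST"],             // integer literal
  [/^[A-Za-z_][A-Za-z0-9_]*/, "IDENT"], // identifier
];

function lex(input) {
  const tokens = [];
  while (input.length > 0) {
    let matched = false;
    for (const [re, tok] of rules) {
      const m = input.match(re);
      if (m) {
        if (tok) tokens.push([tok, m[0]]);
        input = input.slice(m[0].length);
        matched = true;
        break;
      }
    }
    if (!matched) throw new Error("bad input at: " + input[0]);
  }
  return tokens;
}
```

Note the `\b` on the keyword rule: it keeps `classes` from lexing as the keyword `class` plus `es`.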

The project was specified differently from how they did it at my old university. For the lexer, I was given the Cool manual, a reference for the tools, a reference lexer, and a slightly underspecified specification as the assignment. The hardest part was getting started. I needed to read a fair bit of material before I could figure out what the hell I was supposed to do with Flex’s rules. Having the reference lexer was invaluable, and comparing its output to my lexer’s output made it super easy to make progress when the assignment instructions were unclear. And the grader dumped a buttload of failed test cases on me after I thought I was finished, which was also helpful :P. Going in, I was inexperienced at making software resilient to all possible input. Slowly making my lexer more and more robust was rewarding.

At my old university, our projects were spec’d out to an insane level of detail, and any ambiguity was considered a bug in the spec. We weren’t given reference implementations or test cases. And I hadn’t encountered any projects where we were expected to report errors for bad input. The difference may have been that I just never reached the “higher” level classes, but I have a hunch that Stanford expects a bit more from their students, too. It’s a lot more fun to have to discover the answer to ambiguities on your own, in a twisted kind of way :P.

medium reading time

Graphed the top 100 Medium articles of March vs. their reading time. Thought it would be more interesting!

06-01-2018: Unfortunately, the results have been lost to the sands of time

The median article length is 5 minutes. I’m not sure what that says about the site.




me: melody just asked me if i'm liking the cs program at gmu
me: i was like, heh i feel like i'm doing more on my own
kais: LOL

programming feels

A friend just sent me this via facebook chat, a feeling every programmer knows all too well:

I can't code in Scheme
god fucking damn.
*describes code*
Nyak nyak nyak
I am a lisp god.
a lisp snake

Feel that rush of dopamine!

radiohead - then and now

Radiohead’s a band that’s almost defined by how much they change (but I could talk extensively about how they really haven’t!). Creep is pretty famous outside of Radiohead’s fanbase as the only Radiohead song they know/like, and inside as a song that Radiohead disowned because of how popular it became! I’m a fan, though, and it’s fun to look at how Radiohead does “Creep” over the years.

1994 - blond thom

The first thing to notice is that Thom is a beautiful man. The second is that he’s not holding a guitar. Radiohead’s earlier work frequently abused the fact that the band has three guitarists, and it’s a rare performance where Thom isn’t holding one. He’s really melodic in this performance, and it reflects how the studio version of the song sounds. You can tell that Thom still isn’t mature in his vocal performance, though, and he uses some sounds that are relegated to Pablo Honey. But the bridge (“she’s running”) is super intense! Jonny Greenwood’s lead into the chorus (the “chuh chunk” part) is really noisy and fun in this version. The band takes a lot more liberty with noise and drone.

Now let’s take a look at the song in 2009:

The difference in Thom’s singing is extraordinary! He takes an incredibly “lazy” approach to this song, something he was a lot more willing to experiment with in the later years of the band. I have half a mind to think he’s taking the piss out of the song, but they certainly weren’t pressured into playing it, given the vast body of their later work. Other “later” versions of Creep have the same lazy vibe. Also, I’m really digging his 2009 hair (not a fan of the leather jacket, though!). He really belts it out during the “she’s running” bridge, but it doesn’t sound at all like the studio version. Still, an emotional performance.

Both performances are fantastic. While I think that the 1994 version is more impressive, the 2009 one definitely fits in with Radiohead’s contemporary sound. It’s fun to hear Radiohead’s later version of Creep, because I don’t think I’ve heard any covers of the song that take as many liberties with it as Radiohead does.

kiev: my new favorite band

Radiohead is one of my favorite bands, and I’ve been looking for bands with a similar sound for a while. One of my friends recommended Kiev, and they are fantastic. I get so many Radiohead vibes from this group.

My personal favorite (so far) - Found the Real Found. Super radiohead-y.

To the woman at the beginning: it’s my song too <3 - Opening in G. Listen to that freaking saxophone! What bands have saxophonists???

I’m purchasing an album as soon as I have the money to!

Also, one last comment before I close. When a friend says, “hey, you like \<band>, you should really listen to \<other_band> because they’re similar (I promise)!”, they really mean “hey! I like \<other_band> a lot, and I’m going to recommend them to you in the hopes that you like them too, even though they’re only superficially (if at all) similar to \<band>!”, so I was incredibly surprised when Kiev turned out to match my interests almost exactly.