Category Archives: Optimization

How to monitor packets to/from Amazon Beanstalk / EC2 instances

If you need to answer such questions as

  • Is my client correctly indicating that it can handle gzip-compressed responses?
  • Is my Beanstalk-hosted webapp providing gzip-compressed responses?

then you may have no choice but to monitor packet traffic on the Beanstalk side, especially if your client is running on a mobile OS.

How can one do this? Let’s take the most difficult case and assume the only machine you have for development work runs Windows:

  1. If you didn’t associate a keypair .pem file when launching your Beanstalk environment, then create a new environment that does have such an association.
  2. On your Windows client machine, install VMware Player 6+ and download an ISO for Ubuntu 13+.
  3. Using Player, create a virtual Ubuntu desktop. You’ll also want to install VMware Tools, which can be tricky.
  4. In Player, go to Player | Manage | Virtual Machine Settings | Options | Shared Folders, set it to “Always enabled”, and add the folder containing your keypair .pem file.
  5. In Ubuntu’s dock, select Ubuntu Software Center, type Wireshark into the search box, and install it.
  6. In Ubuntu, open a terminal (e.g. via Ctrl + Alt + T)
  7. You should see the folder containing your .pem file if you run

    ls -l /mnt/hgfs

    Sometimes, even after you’ve installed VMware Tools, Ubuntu still fails to access the shared folder. In that case, I’ve tried reinstalling by rerunning the installer:

    sudo ./Desktop/vmware-tools-distrib/vmware-install.pl

    Let’s assume it worked and the full path is /mnt/hgfs/Projects/bs-david.pem

  8. Look in your EC2 console for the Public DNS of your instance.
    1. If you aren’t sure which EC2 instance your Beanstalk app is running on, go to Service = Elastic Beanstalk | Application = your app | Configuration | Edit button | Instances | gear icon | Custom AMI ID. Note down this ID.
    2. Go to Service = EC2 | Running Instances, and find that ID under the AMI ID column. Click in the Name cell of this row.
    3. The Public DNS of this instance will be listed under the Description tab (and immediately above the tabs).
  9. First, verify that ssh can connect. In the Ubuntu terminal, enter

    ssh -t -i /mnt/hgfs/Projects/bs-david.pem ec2-user@<public-dns>

    If that doesn’t work, it might be that your network blocks the port SSH uses; for example, I have to switch from my office’s Ethernet to its wifi.

  10. If you succeeded, the prompt should now be [ec2-user@ip-99-99-99-99 ~]
  11. At the ec2 prompt, enter

    sudo tcpdump -i eth0 -s 65535 -w test.pcap

  12. If you succeeded, the terminal should show “tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes” and then appear to hang while it captures the packet traffic.
  13. Using your client, send some of the requests you want to investigate. When you’re done, do Ctrl + C in the terminal
  14. Type

    ls -l test.pcap

    to ensure the file exists and is nonempty

  15. Move the pcap file from EC2 to the Ubuntu desktop by logging out of the ec2 instance and then using scp to fetch the file:

    scp -i /mnt/hgfs/Projects/bs-david.pem ec2-user@<public-dns>:test.pcap .

    Take note of the dot at the end, indicating you want to save the remote file to the current local directory.

  16. If you succeeded, you should see running updates like this: test.pcap 100% 16MB 203.8KB/s 01:18
  17. Verify that the file was transferred fully by typing

    ls -l test.pcap

    The file size should be the same as that reported by ls when you ran it on the ec2 instance.

  18. Load the pcap file into Wireshark by typing

    wireshark -r test.pcap &

  19. In Wireshark, near the right end of the toolbar, click the “Edit/apply display filter…” button. In the dialog that pops up, scroll down in the Display Filter list, select “http”, and click OK.
  20. Back in the main Wireshark window, the second pane from the top should now show an expandable entry for Hypertext Transfer Protocol.
  21. In the top pane, select the row for the request or response you’re interested in. Then in the second pane, expand the Hypertext Transfer Protocol entry.
  22. For the particular scenario of checking that gzip compression is happening: the client GETs should include gzip among the values of the Accept-Encoding header, and the server responses should have HTTP status 200, a Content-Encoding: gzip header, and a body (shown at the right end of the third pane) that is not human-readable.
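Once you know what a correct exchange looks like in Wireshark, the same gzip check can be done programmatically on raw response bytes. A minimal Python sketch (the response below is constructed by hand for illustration, not captured from Beanstalk):

```python
import gzip

def parse_http_response(raw: bytes):
    """Split a raw HTTP response into (status code, headers dict, body bytes)."""
    head, _, body = raw.partition(b"\r\n\r\n")
    lines = head.decode("iso-8859-1").split("\r\n")
    status = int(lines[0].split()[1])
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return status, headers, body

# A hand-built gzip-compressed response, the shape a correctly
# configured server sends when the client offered "Accept-Encoding: gzip".
payload = gzip.compress(b'{"ok": true}')
raw = (b"HTTP/1.1 200 OK\r\n"
       b"Content-Type: application/json\r\n"
       b"Content-Encoding: gzip\r\n"
       b"Content-Length: " + str(len(payload)).encode() + b"\r\n"
       b"\r\n") + payload

status, headers, body = parse_http_response(raw)
assert status == 200
assert headers["content-encoding"] == "gzip"
# The body on the wire is unreadable binary, but decompresses cleanly.
print(gzip.decompress(body))  # b'{"ok": true}'
```

Note that the Content-Type stays whatever the resource is (here application/json); it is the Content-Encoding header, not the Content-Type, that signals compression.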

StackOverflow describes an alternative method that works even when you don’t have access to the server, if you can direct the mobile client to use your workstation as a wifi access point.

Making games more fun with artificial stupidity

If one buys into Daniel Dennett’s proposed use of “the intentional stance” to generate explanations and predictions of human behavior (say, in an AI program that observes a person and tries to find ways of helping), then accounting for human error is a tough problem (because the stance assumes rationality and errors aren’t rational). That’s one reason I’m interested in errors.

Game AI faces a similar problem in that some games like chess and pool/billiards allow a computer player to make very good predictions many steps ahead, often beyond the ability of human players. Such near-optimal skill makes the computer players not much fun. One has to find ways of making the computers appear to play at a similar level of skill as whatever human they play against.

I just came across a very interesting article on the topic of how to make computer players have plausibly non-optimal skills. Here’s a good summarizing quote:

In order to provide an exciting and dynamic game, the AI needs to manipulate the gameplay to create situations that the player can exploit.

In pool this could mean, instead of blindly taking a shot and not caring where the cue ball ends up, the AI should deliberately fail to pot the ball and ensure that the cue ball ends up in a place where the player can make a good shot.

An interesting anecdote from the article is that the author created a pool-playing program that understood the physics of the simulated table and balls so well that it could unfailingly knock any ball it wanted to into a pocket. The program didn’t make any attempt to have the cue ball stop at a particular position after the target ball was pocketed, however. Yet naive human players interpreted the computer’s plays as trying to optimize the final position of the cue ball, apparently because they projected human abilities onto the program, and humans cannot unfailingly pocket any ball but seemingly are pretty good at having the cue ball stop where they want.
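The article’s core idea — deliberately suboptimal play that sets the human up — can be sketched in a few lines. A toy Python illustration (the shot names, the scoring functions, and the difficulty knob are all hypothetical, not from the article):

```python
import random

def pot_quality(shot):
    """How likely this shot is to pot a ball (0..1)."""
    return shot["pot_chance"]

def leave_quality_for_opponent(shot):
    """How good the resulting cue-ball position is for the *human* (0..1)."""
    return shot["opponent_leave"]

def choose_shot(shots, difficulty):
    """difficulty in [0, 1]: 1.0 = always play to win, 0.0 = always be generous."""
    if random.random() < difficulty:
        # Play to win: maximize our own chance of potting.
        return max(shots, key=pot_quality)
    # Artificial stupidity: deliberately fail to pot, but pick the miss
    # that leaves the cue ball where the player can make a good shot.
    misses = [s for s in shots if s["pot_chance"] < 0.5]
    candidates = misses or shots
    return max(candidates, key=leave_quality_for_opponent)

shots = [
    {"name": "safe pot",    "pot_chance": 0.9, "opponent_leave": 0.1},
    {"name": "gentle miss", "pot_chance": 0.2, "opponent_leave": 0.9},
    {"name": "wild break",  "pot_chance": 0.4, "opponent_leave": 0.3},
]

print(choose_shot(shots, difficulty=1.0)["name"])  # safe pot
print(choose_shot(shots, difficulty=0.0)["name"])  # gentle miss
```

At intermediate difficulty values the AI mixes the two behaviors, which is what makes the stupidity look plausibly human rather than scripted.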


Steve Souders’ 14 rules for faster-loading websites

Within a few years, if you notice in Firebug that a site doesn’t adhere to these, you should ask yourself why…

These rules are an excerpt from Steve Souders’ best-seller, High Performance Web Sites.

Since modular techniques for building dynamic pages (such as .ascx files in ASP.NET) don’t immediately work well with some of the rules (such as placing all scripts at the bottom of the page), new patterns are sure to emerge.

Memory leaks due to iframes in IE (also how to file-upload via dojo)

1) Changing the src of an iframe in IE may cause later events to fire multiple times – once for each change of src. See this forum reply:

06-24-2006, 09:21 AM
Yeah, I see what you mean, big time slow down in IE. No problem in FF. I didn’t test any others. I thought it might be a memory problem so I tracked memory usage in Task Manager. No real problem with memory usage, but I noticed actual CPU usage was spiking and then getting pegged at 100%. The more I loaded pages into the iframe in IE after that, the longer CPU usage would remain pegged at 100%, and this corresponded exactly with the amount of time that the frame was blank. I then had a look at your source code and saw that you had commented out this line:

//currentfr.detachEvent("onload", readjustIframe) // Bug fix line

Those two little red slashes at the beginning make it a comment. Why did you do that? I’m like 99% sure that this is the problem as that is an IE specific line designed to prevent multiple instances of the resizing event. Without that line, each time you load something into the iframe an event gets attached to it. After 20 loads, you have 20 events all firing at the same time. Almost has to be it. Just remove the red slashes and you should be fine.
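The failure mode the poster describes — attaching a fresh onload handler on every load without detaching the previous one — is easy to reproduce in any event system. A toy Python sketch (the EventSource class is a hypothetical stand-in for IE’s attachEvent/detachEvent, not dojo or browser code):

```python
class EventSource:
    """Minimal attach/detach event dispatcher."""
    def __init__(self):
        self.handlers = []
    def attach(self, fn):
        self.handlers.append(fn)
    def detach(self, fn):
        if fn in self.handlers:
            self.handlers.remove(fn)
    def fire(self):
        for fn in list(self.handlers):
            fn()

calls = []
def readjust():
    calls.append("readjust")

# Buggy pattern: attach on every "load", never detach.
onload = EventSource()
for _ in range(3):
    onload.attach(readjust)
onload.fire()
print(len(calls))  # 3 -- one firing per leaked attachment

# Fixed pattern (the commented-out "bug fix line"): detach before reattaching.
calls.clear()
onload2 = EventSource()
for _ in range(3):
    onload2.detach(readjust)
    onload2.attach(readjust)
onload2.fire()
print(len(calls))  # 1
```

After 20 loads the buggy version fires the handler 20 times per event, which matches the CPU-pegging behavior described above.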

2) Alex Russell of DojoToolkit explains that memory problems can occur with IE in both DOM event handlers and XHR, due to the browser’s reference-counting mechanism not realizing that it can recover some closures after they’re no longer needed. See his post (which also explains how dojo can be used for file uploads).