jBoxer

I change the directions of small pieces of metal for a living.

Scraping the bottom of the “Developer Productivity” barrel


Patrick Smacchia writes a blog post about how buying a solid-state drive (SSD) will produce a worthwhile increase in developer productivity.

He cites a few self-run benchmarks. A couple are impressive: a certain build shaves off 2 minutes and 14 seconds (a little under 50% faster), and running 1846 NUnit tests goes from 2 minutes and 40 seconds down to 37 seconds (about 80% faster). The rest… not so much. Most are cutting scant seconds off of already-quick processes.

I do believe that there are some expenditures that are worth it. Developers spend 8 hours a day sitting, so buying them extremely comfortable chairs makes a huge difference. Similarly, developers spend 8 hours a day looking at their monitors, and buying them each two large monitors allows them to keep their work and minds better organized.

This SSD claim does not seem to pass muster. Look at the difference. An internal 80GB Intel SSD costs $559.99. How much does a normal 5400RPM internal 80GB drive cost? Well, I can get this Western Digital one for just over $40.00.

That means the Intel one costs almost 14 times as much. If I’m buying this for my developers, I want to make sure it’s cost-effective. Let’s say I pay my developers an average of $52.00 per hour (to make the numbers easy). That means, to justify the extra $520.00 I’m spending to get this SSD, it needs to save them 10 hours over the course of their time with me. That’s 268 of those builds mentioned earlier, or 293 of those sets of NUnit tests.
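For the curious, here’s a quick back-of-the-envelope script that checks this break-even math (the dollar figures and timings are the ones quoted above; everything else is plain arithmetic):

ssd_cost    = 559.99                        # Intel 80GB SSD
hdd_cost    = 40.00                         # Western Digital 80GB 5400RPM drive
hourly_rate = 52.00                         # developer pay per hour

extra_cost    = ssd_cost - hdd_cost         # ~$520.00
hours_to_save = extra_cost / hourly_rate    # ~10 hours to break even

build_savings = 2 * 60 + 14                 # 134 seconds saved per build
test_savings  = (2 * 60 + 40) - 37          # 123 seconds saved per NUnit run

print hours_to_save * 3600 / build_savings  # ~268 builds
print hours_to_save * 3600 / test_savings   # ~293 NUnit runs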

Is this an outrageous number? No. But it’s also not an obvious win, like buying a pair of large monitors or a comfortable chair. It’s scraping the bottom of the barrel, to say the least. Unless he’s got a super awesome chair, a couple of 24-inch monitors, and all the other productivity staples, I’d say his money would be much better spent elsewhere.

Why some love computers and others hate them


Last night, I spent two hours trying to get PHP (a programming language) working on my Mac. Eventually, I realized it had been working for about an hour and fifty minutes of the time, but Firefox had cached the error page. Once I cleared my cache, everything was fine. Did I hate my computer for wasting my time? No, I felt a sense of accomplishment from finally solving the problem.

Today, I spent another two hours trying to fix the problems with this blog and get it back online. I went through about 25 poorly-written articles on various aspects of the problem (integration between PHP, nginx, FastCGI, and lighttpd, in case you’re interested). I also struggled with prewritten scripts containing awful formatting errors, such as the " (double-quote) character replaced with '' (two apostrophes), forcing me to hunt down every one of these errors. Eventually, I figured out the problem, fixed all the scripts, and got it working (as you can see). Did I feel a sense of frustration at the poor documentation? No, I felt a sense of self-satisfaction from solving the problem, figuring it out (virtually) myself, and learning something new in the process.

From what I understand, this is the sort of thing that makes many people swear off programming or any complicated computer work: if it takes too much effort to track down and fix the problem, it’s too frustrating to be worth it. I have no problem with this, and in fact, I feel the same way in many other areas. But for some reason, with computers it’s the opposite for me. The longer I have to spend on something, the better I feel when I finally fix it (except for the rare occasion when it was a stupid mistake on my part, in which case I feel annoyed with myself, not with the concept of computers).

I have a feeling that this is true for many people in my field. It’s the difference between the person who goes “I tried to learn programming, but you have to get everything right! If you get one little thing wrong, the whole thing breaks!” and the person (like me) who gleefully recounts a late-night six-hour debugging session like it was the most exhilarating thing to happen in a long time.

I don’t think either of these attitudes is “right” or “wrong”, but I do think one of them is more indicative of some sort of mental illness, and I don’t think it bodes well for me.

Don’t wonder if you are stupid


I’m reading a book on Mac programming called Cocoa Programming for Mac OS X. One passage, though, has nothing to do with programming at all; it’s a piece of advice on believing in yourself:

While learning something new, many students will think, “Damn, this is hard for me. I wonder if I am stupid.” Because stupidity is such an unthinkably terrible thing in our culture, the students will then spend hours constructing arguments that explain why they are intelligent yet are having difficulties. The moment you start down this path, you have lost your focus.

I used to have a boss named Rock. Rock had earned a degree in astrophysics from Cal Tech and had never had a job in which he used his knowledge of the heavens. Once I asked him whether he regretted getting the degree. “Actually, my degree in astrophysics has proved to be very valuable,” he said. “Some things in this world are just hard. When I am struggling with something, I sometimes think ‘Damn, this is hard for me. I wonder if I am stupid,’ and then I remember that I have a degree in astrophysics from Cal Tech; I must not be stupid.”

I think this is great advice. There have been many times when I’ve been struggling with something outside of computer science (like calculus or my Oceanography class last semester). Just as I start to consider giving up, I remember that I’ve found a way to succeed at some very difficult things, things much more difficult than these. That gives me the confidence to keep banging my head against the wall until I finally get it.

Array slicing is messed up


The mechanics of array-slicing (and, by extension, string-slicing) have bothered me since my very first Computer Science class. For those of you who aren’t programmers but are braving the programmer warning, let me explain a little. Arrays in programming are essentially lists of data. For example, I could keep an array of test grades, and it might look something like this:

array = [99, 72, 85, 85, 100, 61, 88, 32]

Arrays throw beginners off a little bit, because indices are zero-based. This means that if I want to access specific elements in an array, the first element is considered element 0, the second is element 1, etc. An example using the above array:

print array[0]  # prints '99' (the 1st element in the list)
print array[4]  # prints '100' (the 5th element in the list)
print array[7]  # prints '32' (the 8th and last element in the list)

The concept of “array-slicing” means creating a new array out of a subsequence from a previous array. For example, a “slice” of the above array could be:

newArray = [85, 100, 61]  # a slice containing the 4th-6th elements of the original array

Most programming languages provide a function that allows you to take a “slice” of an already-existing array. They do this by asking you to specify where in the original array you’d like to start the slice, and where you’d like to end it. This is where my complaint comes in. Most languages ask you to specify two numbers: the index of the first element you want, and the index after the last element you want. For example, to get [85, 100, 61] out of the original array, I would do the following:

newArray = array.slice(3, 6)  # creates a new array with the 4th, 5th, and 6th elements from the original array

To me, this is silly and unnecessarily confusing. Why is the second number the index after the last element you want? Wouldn’t it make more sense for it to just be the index of the last element you want, like the first number is the index of the first element you want?

I’ve heard people say that this is intended to make it easier to take a slice of the last N elements in a list. This would be because of the length() function that arrays in most languages have. A quick example:

array = [99, 72, 85, 85, 100, 61, 88, 32]
print array.length()  # prints '8', since there are 8 elements in the array

You can combine this with the array slicing function to take a slice of the last N elements in an array, like this:

array = [99, 72, 85, 85, 100, 61, 88, 32]
newArray = array.slice(3, array.length())  # equivalent to newArray = array.slice(3, 8)

Since length() gives back the length of the list, it’s guaranteed to give back a number one higher than the highest index (since, as we said before, indices are zero-based). This means that plugging length() into the second position of the slice() function (which refers to the index after the last index you want) is the equivalent of saying “up to the last item in my array.”
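Python, for one, uses exactly this convention in its built-in slice syntax: the end index is excluded, passing len() as the end means “through the last element,” and you can even omit the end entirely:

array = [99, 72, 85, 85, 100, 61, 88, 32]
print array[3:6]           # [85, 100, 61] (element at index 6 is NOT included)
print array[3:len(array)]  # [85, 100, 61, 88, 32]
print array[3:]            # same thing; omitting the end means "to the end"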

So I can see the convenience of this. But I refuse to believe that this is the primary reason for this ass-backwards method of array-slicing. At least in my experience (which I admit is not comprehensive, and this may be where my misunderstanding lies), slicing the end of an array is not that common, and certainly not common enough to warrant the obfuscation of a commonly-used function. If array-slicing were done the way I want (where the last number refers to the index of the last element you want), it would be easy enough to do it like this:

array = [99, 72, 85, 85, 100, 61, 88, 32]
newArray = array.slice(3, array.length() - 1)  # equivalent to newArray = array.slice(3, 7)

In fact, it would be even easier to provide a second version of slice() (and I believe some languages actually do this) which only asks for the first index. When the language sees that you’ve only given one index instead of two, it assumes you want everything from that index through the end of the list.

array     = [99, 72, 85, 85, 100, 61, 88, 32]
newArray1 = array.slice(3, 5)  # using the inclusive convention: [85, 100, 61]
newArray2 = array.slice(3)     # equivalent to array.slice(3, 7) = [85, 100, 61, 88, 32]
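Just to make the proposal concrete, here’s a sketch of what an inclusive-end slice could look like if you rolled it yourself in Python (inclusive_slice is a made-up name for illustration, not a real API):

def inclusive_slice(array, first, last=None):
    # Hypothetical helper using the inclusive-end convention proposed above;
    # when 'last' is omitted, slice through the final element.
    if last is None:
        last = len(array) - 1
    return array[first:last + 1]

array = [99, 72, 85, 85, 100, 61, 88, 32]
print inclusive_slice(array, 3, 5)  # [85, 100, 61]
print inclusive_slice(array, 3)     # [85, 100, 61, 88, 32]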

Obviously, this is a pretty minor thing in programming; it just strikes me as weird, especially because every language does it this way. The majority of the time, when I have to do an array-slice, I look the method up online to make sure I remember how the indexing works. It’s unusual for such a commonly-used function (aside from one with lots of extra syntax, like date() in PHP) to require a trip to the documentation on every use. I’m not suggesting that it’s straight-up wrong and should be changed immediately; I just don’t understand why it works this way in every language.