Student of Code

19 Jun

Janie Clayton has spoken about imposter syndrome before, something all too common in the programming community. While her recent post, Soul Searching, didn't mention it directly, I couldn't help but feel it tugging away. Admittedly this may simply be me imposing my view on her writing, but that's somewhat beside my point, if you'll indulge me for a moment.

I share many of her concerns, and I greatly admire her willingness to be upfront and open about her experience and her expectations going forward. Putting yourself out there in such a way, especially online, takes a lot of courage and humility. It's a sign of both a strong person and, I believe, a good programmer. In doing so she has inspired me to take a step in the same direction.

So, in hopes that it might help a little, or at least add some perspective, here is my story…

I'm 33 years old now and I've been programming since I was a kid (1993). I don't count what little BASIC programming I messed around with on my older brother's Commodore 64 in the '80s. While that was, in large part, what started me down my programming path, I can't say I even had a BASIC understanding of what was going on at that young age. I've never had a "job" as a programmer.

As a teenager in the '90s I worked on shareware games for the Mac with a now long-defunct company called Gaz Software. I mostly wrote the game engine pieces like the blitters (graphics drawing routines) and physics. By today's standards these would be considered a joke. In those days we didn't have OpenGL or shaders; we drew with the CPU, copying memory from an offscreen buffer to the screen buffer, typically using a third buffer as a 1-bit mask to decide whether or not to copy a given pixel (a pixel being a single byte; remember, we were dealing with 256 glorious colors back then). This was very simple stuff, like using alloc to get a chunk of memory and memcpy to copy the bytes around. This is how most shareware game graphics were done back then (I can't speak to high-end games; I don't have any experience there). At the time simple games might use QuickDraw (the System 7 equivalent of Core Graphics these days), but if you needed performance you wrote your own blitters. I'll sketch roughly what one of those blits looked like below.

Back then I didn't really understand what I was doing. Sure, I knew I could pass a reference to some memory into a function and use memcpy to copy its contents to another chunk of memory, but if you had asked me even the simplest of questions, like "Is that memory on the stack or the heap?", I probably would have given you a confused look and asked "What do you mean?" In fact, the name of this blog comes from a time when I was working with a friend's graphics code and saw this:

// a fast numBytesPerRow / 4
numRowLongs = numBytesPerRow >> 2;

I wondered what black magic >> was doing that made it a fast divide by 4. (Shifting right by two bits drops the two lowest bits, which for an unsigned integer is exactly integer division by four.) To this day I still love those simple bitwise operations; something about manipulating bits at that level is just fun.
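As promised, here's a rough sketch of the kind of masked blit I was describing. It's a reconstruction from memory, not Gaz Software's actual code, and all the names and parameters are made up for illustration. For readability the mask here spends a whole byte per pixel; a true 1-bit mask would pack eight pixels into each byte.

/* A toy 256-color masked blit in the spirit of the ones described
 * above. All names and parameters are hypothetical.
 *   src:  offscreen buffer holding the sprite's pixels (1 byte each)
 *   mask: nonzero means "copy this pixel" (unpacked for clarity)
 *   dst:  destination buffer, e.g. the screen's back buffer
 */
static void MaskedBlit(const unsigned char *src,
                       const unsigned char *mask,
                       unsigned char *dst,
                       int width, int height,
                       int srcRowBytes, int dstRowBytes)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (mask[x]) {           /* per-pixel copy-or-skip */
                dst[x] = src[x];     /* one pixel is one byte  */
            }
        }
        src  += srcRowBytes;         /* step every pointer     */
        mask += srcRowBytes;         /* down one row           */
        dst  += dstRowBytes;
    }
}

A real blitter would copy four bytes at a time as longs wherever the mask allowed, which is where tricks like numBytesPerRow >> 2 earned their keep, but the idea is the same: the CPU, a few buffers, and nothing else.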

It wouldn't be until 2008 that I would ship my first full game on my own. In March of 2008, when Apple announced the iPhone would be getting a full SDK, I leapt at the chance to write code for it. At the time I was programming as a hobby and still writing in Carbon, so I had to learn Cocoa, and fast. Thankfully, I have a very understanding wife who supports me, so I spent countless nights learning about Object Oriented Programming (OOP) and Model View Controller (MVC) design. Once again, I didn't fully understand what I was doing. Despite all of that, I managed to cobble together my first game, and two months after the App Store opened I shipped Consumed.

If any decent programmer had seen that first code I wrote for Consumed, their brain probably would have had an embolism. Despite the architecture of that code being, well, mush, I was still immensely proud of it, or at least I tried to be. But part of me couldn't find joy in it. I knew the code had problems. Sure, it worked: the gameplay had no flaws and I didn't have any crashing bugs, but I knew the code could be better. "Your Code Is Bad And You Should Feel Bad" is all I heard inside my head, even though I had gone from not even being able to read Objective-C to shipping a full game in a matter of months, square brackets be damned.

I learned Core Animation, researched AI algorithms like minimax, discovered their performance constraints, like the memory implications of a deeply recursive search, and learned how to address them (hello, NSAutoreleasePool). I used what little performance profiling knowledge I could muster to discover exactly what was taking so much time in the AI loop, ultimately rewriting the performance-critical code in plain C and using a better algorithm (alpha-beta pruning) to make the AI virtually unbeatable at higher levels, even on the original iPhone's hardware. Consumed even got featured in the "What We're Playing" section of the App Store. Though it was never a commercial success, I learned a lot through building and shipping it. And…I still felt like I didn't have a clue what I was doing.
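For a sense of what that NSAutoreleasePool trick looked like, here's a sketch in 2008-era Objective-C with manual reference counting. The Board class and its methods are hypothetical stand-ins, not Consumed's real code, and for brevity this is plain minimax; the shipping version layered alpha-beta pruning on top.

#import <Foundation/Foundation.h>

// Hypothetical game-state class, declared only to keep the sketch
// self-contained; Consumed's real classes were nothing like this.
@interface Board : NSObject
- (NSInteger)score;                       // heuristic board value
- (NSArray *)legalMoves;
- (Board *)boardByApplyingMove:(id)move;  // returns an autoreleased board
@end

static NSInteger Minimax(Board *board, NSUInteger depth, BOOL maximizing)
{
    if (depth == 0) return [board score];

    // Give each level of the recursion its own pool. Without this,
    // every autoreleased board created below lives until the run
    // loop's top-level pool drains; in a deep search, that's enough
    // dead objects to exhaust memory on the original iPhone.
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    NSInteger best = maximizing ? NSIntegerMin : NSIntegerMax;
    for (id move in [board legalMoves]) {
        Board *next = [board boardByApplyingMove:move];
        NSInteger score = Minimax(next, depth - 1, !maximizing);
        if (maximizing ? score > best : score < best) {
            best = score;
        }
    }

    [pool drain];  // reclaim this level's temporaries before unwinding
    return best;
}

The same idea lives on today as the @autoreleasepool block.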

Fast forward to today, and I'm still doing software development on the side. I've got a few apps on the App Store, I've "shipped" my first web app, done a few contract jobs, and I continually read everything I can get my hands on. None of my apps have "taken off" and I can't quit my day job, but that won't stop me from coding. It's as much a part of who I am as anything else that defines me. After twenty-plus years of doing this I still feel like a beginner; I know I have so much left to learn. That both excites and scares me.

All this is to say: when you're evaluating yourself, be honest, but don't be too hard on yourself. There will always be someone who is smarter, someone who knows more than you, and that is a good thing! It means that there is still more for you to do, more to learn, more to experience. If you ever find yourself convinced that you've learned it all, all you've really learned is how not to learn anymore.

I remain, as ever, a student of code.