iOS Apps for Love

30 Jun

Brent Simmons talks about how, if you’re an indie developer working on iOS apps, you’re likely doing it out of love and not for profit – whether you admit this to yourself or not.

It’s a sad fact that making a profit on iOS apps feels like an uphill battle these days – it arguably has been since the launch of the AppStore. Most people expect apps to be cheap – 99¢ or $1.99 is common – but for a developer to make a living at those prices you would need to sell around 300–400 copies of an app every single day of the year. And that’s if you’re a one-person shop with basically zero overhead. This simply isn’t happening.

Most indies have resorted to doing contract work on the side or, like me (and Brent for that matter), relying on a day job to pay the bills while doing their iOS development during the evenings and on weekends. Actually I do a combination of all three…and it’s taxing.

I hope this is temporary. I love iOS development and I’m hopeful one day in the future I’ll be able to do it full-time.


27 Jun

I’ve rewritten my Chip8 emulator in Swift.

I’m still learning Swift, so I won’t say this is the right way to do an emulator in Swift. But it is fully functional and does take advantage of some sweet Swift features, like switching on tuples while ignoring values:

switch opcodeTuple {
case (0xF, _, 6, 5):
  // do stuff for Fx65
// ... other opcodes ...
}
This helps to make the opcode switch statement nice and clean.
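For context, opcodeTuple here is just the four nibbles of the 16-bit opcode pulled apart so they can be pattern matched. A sketch of the decode (the masks are standard, the names are mine):

```swift
let opcode: UInt16 = 0xF265   // an Fx65 instruction, with x = 2

// Pull the opcode apart into its four nibbles for pattern matching.
let opcodeTuple = (
    (opcode & 0xF000) >> 12,   // 0xF
    (opcode & 0x0F00) >> 8,    // x = 2
    (opcode & 0x00F0) >> 4,    // 6
    opcode & 0x000F            // 5
)
```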

Swift has strong type safety…actually, Swift has REALLY strong type safety. For example, if you have an unsigned 8-bit integer (UInt8, used throughout Chip8) and you want to store it in an Int, you have to explicitly convert it:

let myInt = Int(my8bitInt)

This can be a bit of a pain. Don’t get me wrong, I think type safety is a good thing – I’ve been bitten by type conversion “magic” in languages like JavaScript before. But this seems like overkill. I can see the case for requiring the conversion when going from a larger integer to a smaller one (UInt16 to UInt8, for example), since the value might not fit and you want to be certain you’re aware of what’s going on there. But from smaller to larger? The smaller integer will always fit into the larger type. Maybe this is improved in Swift 2, I haven’t checked yet.
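For what it’s worth, the narrowing direction really does need the strictness: converting a UInt16 that doesn’t fit into a UInt8 traps at runtime, so you mask first. A quick sketch:

```swift
let wide: UInt16 = 0x1234

// UInt8(wide) would trap at runtime: 0x1234 doesn't fit in 8 bits.
// Masking down to the low byte first makes the conversion safe.
let low = UInt8(wide & 0x00FF)   // 0x34
```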

The main hassle I ran into while rewriting it in Swift was accessing array elements using UInt8 and UInt16. Luckily there is an easy solution to this.

extension Array {
  subscript (index: UInt8) -> T {
    get {
      return self[ Int(index) ]
    }
    set {
      self[ Int(index) ] = newValue
    }
  }
  subscript (index: UInt16) -> T {
    get {
      return self[ Int(index) ]
    }
    set {
      self[ Int(index) ] = newValue
    }
  }
}

That small extension on Array allows us to both get and set elements in an Array using UInt8 and UInt16 indexes. This comes in handy when accessing elements in the Chip8 memory (an array of UInt8) using a Chip8 register (UInt8) as the index.
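With the extension in scope, using a register as an index looks something like this (the register and value here are just made up for illustration):

```swift
// Condensed version of the extension above, for context.
extension Array {
    subscript(index: UInt8) -> T {
        get { return self[Int(index)] }
        set { self[Int(index)] = newValue }
    }
}

var memory = [UInt8](count: 4096, repeatedValue: 0)
let vx: UInt8 = 0x0A            // a register value used as an index

memory[vx] = 0xFF               // no Int(...) dance at the call site
let byte = memory[vx]           // reads back 0xFF
```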

One other issue I ran into is overflow. In C, overflow isn’t protected against. If you have a UInt8, set it to 255 (the max it can hold), and then add 3 to it, it will overflow and become 2. Meaning, once a UInt8 goes past its max value it just starts back over again at 0. If you’re subtracting, underflow does the same thing (but backwards).

In Swift, however, integers are protected from overflow and underflow. The compiler is helpful here and tries to catch it at compile time and warn you.

But in some cases (common in Chip8) you’re adding two unknown integers together, so it’s impossible to catch at compile time. Instead, you’ll notice at runtime, because your app will crash (bad app, bad!).


Chip8 expects overflow and underflow, and luckily this is easy to handle in Swift: just use the &+ operator instead of + (or &- instead of -) wherever you need overflow / underflow.
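A quick illustration of the wrapping operators:

```swift
var v: UInt8 = 255
// v = v + 3   // would trap at runtime: arithmetic overflow
v = v &+ 3     // wraps around: 255 + 3 becomes 2

var w: UInt8 = 1
w = w &- 3     // wraps the other way: 1 - 3 becomes 254
```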

I would still like to move the code to a more modern Swift style. Right now it feels a bit too rooted in C. Breaking the main Chip8 emulator class out into parts, like say an ALU extension, might be a nice start. Also, instead of using UInt8 and UInt16 everywhere, it might make sense to have custom types that better represent the Chip8 architecture – maybe a Chip8 Nibble struct and a Chip8 Byte struct? I’ll keep thinking about it.

Here’s the repo on GitHub.

There’s the correct way, then there’s the right way.

25 Jun

I’ve been tinkering with my Chip-8 emulator in my spare time, and I kept circling around a bug that annoyed me but I couldn’t seem to track down.

In a few ROMs the emulator behaved strangely – it’s hard to put it in more precise terms than that, which is in large part what made it tricky to track down. Check out this image of the TicTacToe game, for example:

[Image: chip8_bug_tictactoe]

Clearly something is going on in that top left corner. You can actually play in that square (and a few others) as many times as you like, the game won’t stop you from playing over your opponent. To further complicate matters you can’t play in the top center square, or a few others, at all.

Ok, maybe the ROM itself is broken? Well, check out Connect-4. It has a similar issue: you can play on top of your opponent here as well, which messes up the graphics. Notice that square piece? That doesn’t belong there. That’s a square peg in a round hole.


At first I thought this was probably a bug in my Chip8 graphics routine (opcode DXYN). Since Chip-8 systems use XOR drawing and set a “collision” flag if two pixels are drawn on top of one another, it seemed reasonable to think that maybe I wasn’t setting that flag correctly.

After checking and rechecking it a couple of times I felt certain that wasn’t the problem. Plus, it was still bothering me that part of the bug didn’t seem to depend on graphics at all – like when we couldn’t play in the top center square of the TicTacToe game.

I was certain one of my opcodes was behaving incorrectly, but which one?

I decided to check them one by one against the spec to make certain I was doing exactly what the spec called for.

I pulled up Cowgod’s Chip-8 Technical Reference and set about checking each of my opcodes.

I had gotten through 33 of the 35 opcodes and hadn’t found anything wrong. But then, on the very last two opcodes I noticed something…

Fx55 - LD [I], Vx
Store registers V0 through Vx in memory starting at location I.

The interpreter copies the values of registers V0 through Vx into memory, starting at the address in I.

My code looked like this

unsigned char X = GetX(opcode);
for (unsigned char r = 0; r <= X; r++) {
  memory[I+r] = V[r];
}
I = I + X + 1;
pc += 2;

Odd – why am I modifying the I register? Well, it turns out the original CHIP-8 interpreter’s FX55 and FX65 modified I in this manner. However, on current implementations I is left unchanged. I had written my Chip8 emulator to the original spec, but these ROMs expect to be run on the more current implementations that don’t modify I.

Easy enough. I want my emulator to match what almost every ROM expects these days, which means I just need to delete that line in FX55 and FX65. As to why most implementations are not in line with the spec? I’m not sure. If I had to guess, I’d say CHIP-8 emulators took off with this bug in them, people built ROMs expecting them to work that way, and by the time the deviation from the spec was noticed it was too late – but that’s just supposition.

Just because the spec says do it one way, doesn’t make it the right way to do it.

With that said, if you were shipping a real product you should probably take a different approach. Like maybe exposing an option to simulate either the original CHIP-8 spec or the newer style. Or better yet: check a bunch of common ROMs, see which ones are affected by this bug (maybe by checking their use of the affected opcodes), and save their checksums. Then you could auto-detect them and use the correct style without the user ever having to worry about it (at least in most cases).
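Sketching that auto-detect idea in Swift (since that’s where this emulator is headed anyway) – note the checksums below are made-up placeholders, not real ROM hashes:

```swift
enum LoadStoreStyle {
    case Original   // Fx55/Fx65 advance I, per the spec
    case Modern     // Fx55/Fx65 leave I unchanged
}

// A dumb-but-deterministic checksum; a real table would more likely
// use CRC32 or SHA-1 of the ROM bytes.
func checksum(rom: [UInt8]) -> UInt32 {
    return rom.reduce(UInt32(0)) { ($0 &* 31) &+ UInt32($1) }
}

// Placeholder entries -- not checksums of real ROMs.
let knownROMs: [UInt32: LoadStoreStyle] = [
    0xDEADBEEF: .Original,
]

func styleForROM(rom: [UInt8]) -> LoadStoreStyle {
    // Default to Modern, since that's what most ROMs expect.
    return knownROMs[checksum(rom)] ?? .Modern
}
```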

Here’s the commit on GitHub with the changes, if you’re interested.

clamp() in Swift

22 Jun

I’m writing my newest app entirely in Swift. The language is still so new that it’s hard to know what exactly it means to write code in the Swift style. Despite some hurdles with the type safety, like not being able to add an Int8 to an Int without first converting one of them, it’s been fun.

I noticed a common theme in part of my code as I was cleaning stuff up today. It goes something like this…

if proposedIndex < 0 {
  return 0
} else if proposedIndex < anArray.count-1 {
  return proposedIndex
} else {
  return anArray.count-1
}

This is a pretty standard case of clamping. In the past I’d include my standard math code that handles clamping floats, integers, etc. But we’re in a Swift world now, so let’s take this as an opportunity to learn a bit more about Swift and its style.

I figured I could make this clearer. At first I rewrote it like so…

var i = min(proposedIndex, anArray.count-1)
    i = max(i, 0)

Not bad, but we lost immutability. Don’t worry, we can get it back and compact it down a bit more at the same time…

let i = min(max(proposedIndex, 0), anArray.count-1)

Well, it’s certainly more compact and quicker to type, but it’s not any clearer. If anything, the compact version is worse – it looks more like C than Swift. So I got to thinking: I wonder what this would look like if it were truly written in Swift style. Let’s look at the prototype for min.

func min<T: Comparable>(x: T, y: T) -> T

Ah, of course. It’s using generics. So anything that conforms to the Comparable protocol can be min’ed or max’ed (ya, I used them as verbs, what ya gonna do about it?). Let’s make a function that handles this for us.

func clamp<T: Comparable>(value: T, lower: T, upper: T) -> T {
  return min(max(value, lower), upper)
}
Now we can use it like so…

let i = clamp(proposedIndex, 0, anArray.count-1)

Now that is both compact and clear. It somewhat surprised me that this isn’t included in the Swift standard library; it seems like a prime candidate for it. Maybe I missed it?
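And since it’s generic over Comparable, the same three lines clamp more than just array indexes:

```swift
func clamp<T: Comparable>(value: T, lower: T, upper: T) -> T {
    return min(max(value, lower), upper)
}

let volume  = clamp(1.3, 0.0, 1.0)   // Doubles clamp to 1.0
let percent = clamp(-5, 0, 100)      // Ints clamp to 0
let letter  = clamp("m", "a", "f")   // even Strings: "f"
```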

Here is a Gist

Chip8 Emulator

20 Jun

Today I open sourced my CHIP-8 emulator. This also happens to be my first open source project.

It’s for the Mac, but the CHIP-8 emulator core is written in plain C, so it should be easily portable to other platforms.

Emulators in and of themselves are not a huge interest of mine, but having a better understanding of how a PC works is. I have an electronics background and I’ve always been fascinated with the use of gates and timers, registers and ROM, and so on and so forth. I’m interested in learning more about how things like interrupts and buses work.

Someday I’ll get around to building my own computer. I don’t mean in the sense of buying a CPU, a motherboard, etc, and plugging them together. I mean sitting down and soldering integrated circuits together. My dream would be to build something like an Apple I. But for now I’ll just dabble in emulating PCs in my spare time so I can learn more.

Student of Code

19 Jun

Janie Clayton has spoken about imposter syndrome before, something all too common in the programming community. While her recent post on Soul Searching didn’t mention it directly, I couldn’t help but feel it tugging away. Admittedly this may simply be me imposing my view on her writing, but that’s somewhat beside my point if you’ll indulge me for a moment.

I share many of her concerns, and I greatly admire her courage to be upfront and open about her experience and her expectations going forward. Putting yourself out there in such a way, especially online, takes a lot of courage and humility. It’s a sign of both a strong person and, I believe, a good programmer. In doing so she has inspired me to take a step in the same direction.

So, in hopes that it might help a little, or at least add some perspective, here is my story…

I’m 33 years old now and I’ve been programming since I was a kid (1993). I don’t count what little BASIC programming I messed around with on my older brother’s Commodore 64 in the 80’s. While that was, in large part, what started me down my programming path, I can’t say I even had a BASIC understanding of what was going on at that young age.

I’ve never had a “job” as a programmer. As a teenager in the 90’s I worked on shareware games for the Mac with a now long-defunct company called Gaz Software. I mostly wrote the game engine pieces, like the blitters (graphics drawing routines) and physics. By today’s standards these would be considered a joke. In those days we didn’t have OpenGL or shaders; we would draw using the CPU to copy memory from an offscreen buffer to the screen buffer, typically using a 3rd buffer as a 1-bit mask to decide whether or not to copy a given pixel (a pixel being a single byte; remember, we were dealing with 256 glorious colors back then). This was very simple stuff, like using alloc to get a chunk of memory and memcpy to copy the bytes around. This is how most shareware game graphics were done back then (I can’t speak to high-end games, I don’t have any experience there). At the time simple games might use QuickDraw (the System 7 equivalent of CoreGraphics these days), but if you needed performance you wrote your own blitters.

Back then I didn’t really understand what I was doing. Sure, I knew I could pass a reference to some memory into a function and use memcpy to copy its contents to another chunk of memory, but if you had asked me even the simplest of questions, like “Is that memory on the stack or the heap?”, I probably would have given you a confused look and asked “What do you mean?”. In fact, the name of this blog comes from a time when I was working with a friend’s graphics code and saw this

// a fast numBytesPerRow / 4
numRowLongs = numBytesPerRow >> 2;

I wondered what black magic >> was doing that made it a fast divide by 4? To this day I still love those simple bitwise operations, something about manipulating the bits at that level, it’s just fun.

It wouldn’t be until 2008 that I would ship my first full game on my own. In March of 2008 when Apple announced the iPhone would be getting a full SDK I leapt at the chance to write code for it. At the time I was programming as a hobby and still writing in Carbon. So I had to learn Cocoa, and fast. Thankfully, I have a very understanding wife who supports me. So I spent countless nights learning about Object Oriented Programming (OOP) and Model View Controller (MVC) design. I, once again, didn’t fully understand. Despite all of that I managed to cobble together my first game and two months after the AppStore opened I shipped Consumed.

If any decent programmer had seen that first code I wrote for Consumed their brain probably would have had an embolism. Despite the architecture of that code being, well, mush, I was still immensely proud of it, or at least I tried to be. But part of me couldn’t find joy in it. I knew the code had problems. Sure it worked, the gameplay worked without any flaws, I didn’t have any crashing bugs, but I knew the code could be better. “Your Code Is Bad And You Should Feel Bad” is all I heard inside my head – even though I had gone from not even being able to read Objective-C to shipping a full game in a matter of months, square brackets be damned. I learned CoreAnimation, researched different AI algorithms like mini-max, discovered the performance constraints of those algorithms, like the memory implications of doing a recursive loop and how to address that – hello NSAutoreleasePool. I used what little performance profiling knowledge I could muster to discover exactly what was taking so much time in the AI loop, ultimately rewriting the performance-critical code in plain C and using a better algorithm (alpha-beta pruning) to make the AI virtually unbeatable at higher levels, even on the original iPhone’s hardware. Consumed even got featured in the “What We’re Playing” section of the AppStore. Though it was never a commercial success, I learned a lot through building and shipping it. And…I still felt like I didn’t have a clue what I was doing.

Fast forward to today and I’m still doing software development on the side. I’ve got a few apps on the AppStore, I’ve “shipped” my first web app, done a few contract jobs, and I continually read everything I can get my hands on. None of my apps have “taken off” and I can’t quit my day job, but that won’t stop me from coding. It’s as much who I am as anything else that defines me. After twenty+ years of doing this I still feel like a beginner, I know I have so much left to learn. That both excites and scares me at the same time.

All this is to say that when you’re evaluating yourself, be honest, but don’t be hard on yourself. There will always be someone who is smarter, someone who knows more than you, and that is a good thing! It means that there is still more for you to do, more to learn, more to experience. If you ever find yourself convinced that you’ve learned it all, all you’ve learned is how not to learn anymore.

I remain, as ever, a student of code.

Isn’t It Obvious?

15 Jun

Jonathan Penn is an amazing speaker. I didn’t get to go to NSNorth, but luckily a video of his talk (along with others from NSNorth) is available online.

It’s not software related, but rather it relates to us as human-beings. It’s good. You should check it out.

Isn’t It Obvious?

App Architecture: Fetching JSON

11 Jun

I’m working on an app that needs to fetch some JSON from a server. The JSON doesn’t change frequently, but it may change a couple of times throughout the day at unset intervals. We also need to ensure we fetch new data if the user’s location has changed significantly, say by at least a kilometer.

The simplest (lazy) approach is to just fetch the latest JSON whenever the app needs to display the data. This is also the worst approach. It means we’re hitting the network constantly. What happens if some object requests the data multiple times within a few seconds? It doesn’t make sense to perform a whole new network request each time. It also means we have a significant delay in returning results, since we have to wait for the JSON to download each time. Even a very small JSON file can take upwards of a minute to download on a slow cell network (Yes, sometimes you still end up on 3G or even ::gasp:: Edge). So we can see that while this way would ensure we get the newest data each time, including when the user’s location changes, it does so at a significant performance penalty.

We need a better method. One obvious way we can improve on this is by only requesting data if the user’s location has changed significantly. Is this something the Fetcher class should care about or should it be left up to the requesting object? Let’s pick a path and see where it leads us…

Smart JSONFetcher
In this scenario we’ve decided that JSONFetcher should handle deciding whether the location change is significant enough to warrant a new fetch. This means we’ve made the JSONFetcher class aware of more than its basic task. In a sense we are piling extra work onto JSONFetcher. How does the fetcher know what we consider a significant change in location? Do we pass it in each time a fetch is called? Is it a variable we set on a fetcher instance? Is it hardcoded into the fetcher class? All of these have trade-offs in usability and flexibility. It also means that our JSONFetcher class is now specialized in fetching JSON that has a location associated with it. Is that desired?

Dumb JSONFetcher
How about if we decide the only thing the fetcher needs to know is what it wants to fetch? In this case we leave handling location changes up to the requesting object. We still have to do the work of deciding when to fetch based on location changes, but now the requester is handling that. The work still exists, but we’ve shifted it onto the user of the class rather than handling it for them.

At first glance you might think “Clearly, I want the Smart JSONFetcher”. But in my experience when you say you’ve made a class smart what you’re really saying is you’ve made the class complex. We want to avoid complexity. A side effect of removing the location information in the Dumb JSONFetcher is that it’s now capable of handling any requested JSON, not just ones that have a location associated with them. This is great, now we have a class that we can easily reuse anytime we need to fetch some JSON. We can build on top of this “Dumb” JSONFetcher to make it more specific to our app’s needs. We’ll take a look at this in a bit.

For now, let’s revisit our first requirement: not fetching every time we’re asked. This sounds counterintuitive. After all, if some object is requesting the information, clearly we want to give it the latest and greatest. But the requester likely doesn’t know (and shouldn’t need to know) that the JSON only changes a few times throughout the day. Luckily, if we use NSURLSession (and you really should) we get caching for free. But there’s a catch. There’s always a catch. If the server doesn’t set cache headers you’ll run into issues. As Murphy would have it, the server I’m dealing with doesn’t set cache headers. The workaround is to override willCacheResponse. This allows you to set proper cache times, perhaps something like 5 minutes.
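Roughly what that override looks like – the class name and the five-minute value are my own choices, and this is a sketch rather than production code:

```swift
import Foundation

class JSONFetcher: NSObject, NSURLSessionDataDelegate {
    let cacheSeconds = 300   // our "fresh enough" window

    func URLSession(session: NSURLSession, dataTask: NSURLSessionDataTask,
                    willCacheResponse proposedResponse: NSCachedURLResponse,
                    completionHandler: (NSCachedURLResponse?) -> Void) {
        if let http = proposedResponse.response as? NSHTTPURLResponse,
           let url = http.URL {
            // The server sends no cache headers, so graft our own on
            // before the response goes into the cache.
            var headers = (http.allHeaderFields as? [String: String]) ?? [:]
            headers["Cache-Control"] = "max-age=\(cacheSeconds)"
            if let patched = NSHTTPURLResponse(URL: url, statusCode: http.statusCode,
                                               HTTPVersion: "HTTP/1.1",
                                               headerFields: headers) {
                completionHandler(NSCachedURLResponse(response: patched,
                                                      data: proposedResponse.data,
                                                      userInfo: proposedResponse.userInfo,
                                                      storagePolicy: .Allowed))
                return
            }
        }
        completionHandler(proposedResponse)
    }
}
```

With this delegate set on the session, repeated fetches inside the five-minute window come straight from the cache.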

Ok, so now we have a JSONFetcher class that will fetch our JSON, but only if it’s truly stale. Let’s go back to our location problem. We can extend our JSONFetcher class to give it the location validation our app needs. Let’s add a category, JSONFetcher+Location, with a method fetchJSON:withLocation:. This method will take a location and only execute the fetchJSON: method if it’s outside our threshold.
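The decision inside fetchJSON:withLocation: boils down to a small pure function, something like this (the names and the one-kilometer threshold are my own illustration):

```swift
import CoreLocation

// true means "the user has moved far enough that we should force a
// fresh fetch"; false means the cached fetch path is fine.
func shouldFetchForLocation(newLocation: CLLocation,
                            lastFetchLocation: CLLocation?,
                            threshold: CLLocationDistance = 1000) -> Bool {
    if let last = lastFetchLocation {
        return last.distanceFromLocation(newLocation) >= threshold
    }
    return true   // never fetched before: always fetch
}
```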

This is getting pretty good. Let’s step back and take a look at where we’re at. The JSONFetcher+Location category is ensuring our location is valid and returning the cached data if it’s within our time threshold. What happens if the location isn’t valid? The fetcher goes ahead and executes the download and we wait for new data. That seems appropriate – we wouldn’t want to display incorrect data to the user. Ok, what happens if the location is valid but we’ve exceeded the time threshold? The fetcher checks whether the cached data is still valid based on the cache policy we set and fetches only if needed.

So that’s the path I think I’m headed down for now. The main point I’m trying to make here isn’t the best way to download some JSON; rather, it’s about investing the time to think these things through. I may still end up needing to handle this differently, but I’ve put enough forethought into it that I feel fairly confident. Keeping things modular and easy to refactor is key to good development.