Writing a game with F# and RxNA

I wanted to try my hand at writing a short F# program and using functional
reactive programming in it. Because a lot of this would be new to me, I
deliberately chose to write a simple shooter game.

F# is a functional programming language on the .Net platform. It integrates
easily with existing .Net code and has decent tooling in Visual Studio. Some of
the features I really like are immutability and type inference. Immutability
means that I don’t have to worry as much about the global state of the system
while working on some detail. Type inference means that I don’t have to
specify types all the time; the compiler can figure them out from the
context (usually; sometimes it does need a little bit of help).

Rx (or Reactive Extensions) is a neat library for composing programs using
observables, queries and schedulers. I found that by using Rx, my program
naturally fell into small pieces, or streams, that communicated with each other.
Whenever something changed, all the dependent things reacted too.
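To illustrate the idea (this is not RxNA's actual API, just a minimal Python sketch of how push-based streams let dependent pieces react to changes):

```python
class Observable:
    """A minimal push-based stream: every subscriber reacts to each value."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def on_next(self, value):
        # Push the new value to every dependent piece of the program.
        for callback in self._subscribers:
            callback(value)

    def map(self, transform):
        # Derive a new stream whose values depend on this one.
        derived = Observable()
        self.subscribe(lambda value: derived.on_next(transform(value)))
        return derived


# A stream of key presses, and a derived stream of player movements.
keys = Observable()
moves = keys.map(lambda key: {"left": (-1, 0), "right": (1, 0)}.get(key, (0, 0)))

received = []
moves.subscribe(received.append)
keys.on_next("left")
keys.on_next("right")
# received is now [(-1, 0), (1, 0)]
```

In the real libraries the operators, scheduling and unsubscription are far richer, but the shape is the same: derived streams update automatically whenever their source changes.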

MonoGame is an open source library for cross-platform game development. The
API is really close to the now-defunct XNA, and there are plenty of resources out
there, so finding help or information usually isn’t particularly difficult.

RxNA is an open source library that adds a set of observables suited for
interacting with MonoGame.


qc is a Python implementation of Haskell’s QuickCheck library. Its purpose is to generate input data for tests. So instead of writing multiple tests for arbitrary values, or writing data generation logic inside the test, the developer can decorate the test case and qc takes care of the rest.

The following example has a test case called test_simple_move, which tests that taking a step in a random direction actually moves the character to the desired location. The @forall(tries=5, direction=integers(low = 1, high = 8)) decorator before the function runs the test five times, with direction randomly selected between 1 and 8 (inclusive).

@forall(tries=5, direction=integers(low = 1, high = 8))
def test_simple_move(self, direction):
    """Test that taking a single step is possible"""
    self.character.location = (5, 5)
    # expected_location is indexed by direction (1-8); index 0 is unused
    expected_location = [(0, 0),
                         (5, 4), (6, 4), (6, 5), (6, 6),
                         (5, 6), (4, 6), (4, 5), (4, 4)]
    # assumed API: move(direction) takes a single step in that direction
    self.character.move(direction)
    assert self.character.location == expected_location[direction]

It’s worth noting that since the @forall decorator sits between the function and the test framework (nose in my case), the setup function is not automatically run between the tries. From the point of view of nose, this is just a single test.

This is a pretty nifty tool for testing. More than randomly generated integers are supported too, of course; even creation of user-defined objects is supported.

There is also work in progress version for .NET, that I might find useful in the future.

TeamCity and dotCover

I recently had an opportunity to play around with TeamCity and dotCover while trying to measure test coverage. The basic idea is explained well in a blog post, but I found a couple of gotchas (reading the manual is what I do as a last resort, as usual).

The first was that the analysed dll-files need to come from a debug build. pdb-files alone might do the trick, but the easiest way to get those from all of our projects was to make a debug build. This can of course lead to funny situations if your build has been configured to output files to the same directory regardless of whether it’s a release or debug configuration. So this needs to be taken into account when collecting files automatically and packaging them for release (we don’t want to ship pdb-files along with the release dlls).

Another gotcha comes when you have configured different output directories (bin/debug and bin/release). If your build includes a testing task that matches by wildcards (*test*.dll for example), chances are that you end up with both debug and release binaries in unit testing. That is a fast way of doubling your reported test case count, though.
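The wildcard gotcha is easy to sketch (hypothetical file names; matching shown with Python's fnmatch rather than TeamCity's actual matcher):

```python
import fnmatch

# Both configurations were built, so both copies sit in the output tree.
build_output = [
    "bin/debug/Example.Tests.dll",
    "bin/debug/Example.Core.dll",
    "bin/release/Example.Tests.dll",
    "bin/release/Example.Core.dll",
]

# A wildcard like *test*.dll happily matches both copies of the test assembly,
# so every test runs twice.
matched = [path for path in build_output
           if fnmatch.fnmatch(path.lower(), "*test*.dll")]
print(matched)
# ['bin/debug/Example.Tests.dll', 'bin/release/Example.Tests.dll']
```

Anchoring the pattern to a single configuration directory (bin/release/*test*.dll, for example) avoids the duplication.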

The third one is that when you finally get everything up and running, you need to know how to read the results. It seems that the total coverage reported covers only those dll-files that actually got picked up while testing (which actually makes sense if you think about it). The dll-files that were not used by the tests are not reported and are not included in the total statistics.

Shortening the feedback loop

One of the reasons I like doing test driven development is the really short feedback loop it offers (there are many others, but this post is about the feedback loop). The basic idea is:

  • Write a test that demonstrates the lack of the wanted feature
  • Start developing

While developing (Repeat until acceptance test is passed):

  • Write failing unit test
  • Make the test pass
  • Refactor

All the time, you keep running all the tests that have been written before. This gives confidence that nothing is accidentally broken while the new stuff is being developed and the code is being refactored. As soon as the acceptance test passes, the feature is ready. Of course it might lack some detail that the acceptance test doesn’t cover, but for those you can always write more acceptance tests.

This is really nifty and I like doing it. While working with .Net code, the cycle looks like this (at a more detailed level):

  • Write test, compile, run tests
  • Code to make the test pass, compile, run tests
  • Refactor, compile, run tests

While working with Python, the cycle gets shorter:

  • Write test, run tests
  • Code to make the test pass, run tests
  • Refactor, run tests
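The Python cycle above fits in a single file. A tiny sketch (greet is a hypothetical feature, shown at the point where the test already passes):

```python
import unittest


# Step 2: just enough production code to make the test pass.
def greet(name):
    return "Hello, %s!" % name


# Step 1: this test was written first and watched fail before greet existed.
class TestGreeter(unittest.TestCase):
    def test_greets_by_name(self):
        self.assertEqual(greet("World"), "Hello, World!")


# Step 3: refactor, re-running the test after every change.
```

Running `python -m unittest` after each step is the whole loop; no compile step in between.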

This morning while biking to work, I had an idea: it would be really cool if the test suite on my local workstation worked in a similar way to the test suite in continuous integration.

I’m of course not the first one to think of this. The first two hits were autonose and nosy. I tried autonose first, but couldn’t get it to install because of a mismatch in an md5 hash. With nosy I was luckier and got it installed and quickly set up. I still need to start it manually in my top-level source folder; the system then starts itself and runs all the tests found inside the folder hierarchy. After that, every time a source file is changed, the tests are automatically run. For a fully automated solution on Linux, there’s nosier.
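A minimal sketch of what a nosy-style watcher does under the hood (simple polling with the standard library; the real tools are more sophisticated):

```python
import os


def snapshot(root):
    """Map every .py file under root to its last modification time."""
    times = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".py"):
                path = os.path.join(dirpath, name)
                times[path] = os.path.getmtime(path)
    return times


def watch(root, run_tests, poll=lambda: True):
    """Re-run the test suite whenever any watched file changes."""
    previous = snapshot(root)
    run_tests()  # run once at startup
    while poll():
        current = snapshot(root)
        if current != previous:
            previous = current
            run_tests()
```

In real use run_tests would shell out to the test runner, and the loop would sleep between polls; the poll parameter here just makes the loop stoppable.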

After this, cycle is shortened even more:

  • Write test
  • Make test pass
  • Refactor

On the .Net side, there are of course equivalent tools. The first one I found is called NCrunch, and it looks quite promising. I haven’t tested it yet, but I probably should.


Python and Mock

So, last week I was happily mocking away and nothing seemed to be a problem. This week I ran into a part of the program that makes mocking hard: properties.

For background, I’m currently using the Mock framework by Michael Foord and Konrad Delong for mocking in Python. The framework is really nice and hardly ever gets in your way, until you run into properties.

In the manual, two different solutions are given. One of them is to subclass Mock, define the needed property on the new class, and use a Mock to track the calls. The other option is to use patching.

These do get the work done, but both look somewhat messy in the test code. For now, I have settled on subclassing Mock and defining the needed properties there.
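A sketch of the subclassing approach (hit_points is a hypothetical property name; shown here with the modern unittest.mock module, but the standalone mock library works the same way):

```python
from unittest.mock import Mock  # originally: from mock import Mock


class CharacterMock(Mock):
    # The property forwards to an auto-created child mock,
    # so the calls can still be configured and tracked.
    @property
    def hit_points(self):
        return self.hit_points_getter()


character = CharacterMock()
character.hit_points_getter.return_value = 10

assert character.hit_points == 10
character.hit_points_getter.assert_called_once_with()
```

The property lives on the class, so normal attribute lookup finds it before Mock's attribute machinery kicks in, while the getter body still goes through a trackable mock.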

.Net and Moq

On the .Net side of things, I have started playing around with Moq. It’s simple to use and supports .Net 3.5. After some initial trial and error, using it has proven to be pretty straightforward. I’ll write more about our experiences when I’ve used it some more.

Advanced technologies

So, for the advanced technologies course we’re required to do some self-study and focus on advanced technologies. I want to concentrate on one or two fields at most, and chose Microsoft’s .Net as my focus.

My colleague had some great suggestions for sources of webcasts and gave me the two links below. They probably contain more information than I will ever have time to watch, so some prioritisation is in order before I dive in.

http://channel9.msdn.com and http://www.dnrtv.com/