# Hy, toolz and currying

Currying (or should we say Schönfinkeling, as it was developed by Moses Schönfinkel and then further developed by Haskell Curry, or even Fregeling, as Gottlob Frege originally introduced the idea) is the technique of translating a function that takes multiple parameters into a sequence of functions that each take a single parameter (thanks Wikipedia, I couldn’t figure out a more succinct way of saying that).
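The idea can be sketched in plain Python. This is not toolz’s actual implementation (toolz’s `curry` is more flexible), just a hypothetical minimal helper showing what the translation means:

```python
from functools import partial

def curry(fn, arity):
    """Turn fn into a chain of single-argument functions."""
    def curried(arg):
        if arity == 1:
            # All arguments collected, call the original function.
            return fn(arg)
        # Bind one argument and keep collecting the rest.
        return curry(partial(fn, arg), arity - 1)
    return curried

def add3(a, b, c):
    return a + b + c

add_one_two = curry(add3, 3)(1)(2)
print(add_one_two(3))  # prints 6
```

Each call supplies exactly one argument and returns a new function, until the original function can finally be invoked.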

# Learning NumPy Array

I recently got a review copy of Ivan Idris’ “Learning NumPy Array” from Packt Publishing. I have read his earlier book “NumPy Cookbook” and found it useful, so my expectations were quite high when I started.

The book is not a huge brick, but it still has enough content for almost 150 pages. As usual, the first chapter is dedicated to installing NumPy, Matplotlib, SciPy and IPython on various operating systems. While the information is good, I think just pointing to online resources would have been sufficient.

The second chapter is reserved for NumPy basics. This is where things start to get interesting if you haven’t worked with NumPy and arrays before. It is a good idea to read this chapter carefully if you aren’t familiar with NumPy: later chapters build on top of the foundation laid here and are easier to follow when you understand the basics.

Starting from the third chapter, the book dives into the details of NumPy arrays and the tools available for working with them. I like the fact that each subsequent chapter is built around a theme (basic data analysis, simple predictive analytics and signal processing techniques) with concrete examples. Most examples are built around various kinds of weather data, but there’s a little bit of stock data thrown into the mix too. Mathematical foundations are explained only briefly because of the book’s limited number of pages, but there is enough detail for the reader to understand what is going on, and more information is readily available on the internet.

Near the end of the book, there is a short chapter about profiling, debugging and testing. I found the part about testing especially brief and not that useful, but this is a book about NumPy, after all, and not about testing. This is probably the weakest part of the book and could have been left out; the pages used for this chapter could have been spent explaining NumPy in more detail.

The last chapter of the book briefly touches on other related libraries. It’s good to know how NumPy relates to, for example, SciPy and scikit-learn.

All in all, I found the book very enjoyable to read and easy to follow. Sometimes the graphics got a bit in the way, such as when textual output was shown as an image of text instead of plain text (so the font differed slightly or the output had a differently coloured background). The author is already working on his next book, “Learning Python Data Analysis”, which also sounds quite interesting and is expected to come out in 2015.

# Python, Behave and Mockito-Python

This article was originally published in November 2012 issue of Open Source for You. It is republished here under Creative Commons Attribution-Share Alike 3.0 Unported license.

Here’s an article that explores behaviour-driven development with Python for software developers interested in automated testing. Behaviour-driven development approaches the designing and writing of software using executable specifications that can be written by developers, business analysts and quality assurance together, to create a common language that helps different people communicate with each other.

This article has been written using Python 2.6.1, but any sufficiently new version will do. Behave is a tool written by Benno Rice and Richard Jones, and it allows users to write executable specifications. Behave has been released under a BSD licence and can be downloaded from http://pypi.python.org/pypi/behave. This article assumes that version 1.2.2 is being used. Mockito-Python is a framework for creating test doubles for Python. It has been released under an MIT licence, and can be downloaded from: https://bitbucket.org/szczepiq/mockito-python/. This article assumes the use of version 0.5.0, but almost any version will do.

Our project
For our project, we are going to write a simple program that simulates an automated breakfast machine. The machine knows how to prepare different types of breakfast, and has a safety mechanism to stop it when something goes wrong. Our customer has sent us the following requirements for it:

“The automated breakfast machine needs to know how to make breakfast. It can boil eggs, both hard and soft. It can fry eggs and bacon. It also knows how to toast bread to two different specifications (light and dark). The breakfast machine also knows how to squeeze juice from oranges. If something goes wrong, the machine will stop and announce an error. You don’t have to write a user interface; we will create that. Just make easy-to-understand commands and queries that can be used to control the machine.”

Layout of the project
Let’s start by creating a directory structure for our project:

```
mkdir -p breakfast/features/steps
```

The breakfast directory will contain our code for the breakfast machine. The features directory is used for storing specifications for features that the customer listed. The steps directory is where we specify how different steps in the specifications are done. If this sounds confusing, don’t worry, it will soon become clear.
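By the end of the article, the project tree will look roughly like this (breakfast.py, ui.py, the feature file and the step file are all created in later sections):

```
breakfast/
├── breakfast.py
├── ui.py
└── features/
    ├── boiling_eggs.feature
    └── steps/
        └── eggs.py
```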

The first specification
In the case of Behave, the specification is a file containing structured natural language text that specifies how a given feature should behave. A feature is an aspect of a program—for example, boiling eggs. Each feature can have one or more scenarios that can be thought of as use cases for the given feature. In our example, the scenario is hard-boiling a single egg. First, we need to tackle boiling the eggs. That sounds easy enough. Let’s start by creating a new file in the features directory, called boiling_eggs.feature, and enter the following specification:

```
Feature: Boiling eggs
As a user
In order to have breakfast
I want the machine to boil eggs

Background:
Given machine is standing by

Scenario: boil hard egg
Given machine has 10 eggs
And egg setting is hard
And amount of eggs to boil is set to 1
When machine boils eggs
Then there should be 1 boiled egg
And eggs should be hard
```

It does not look like much, but we are just getting started. Notice how the specification reads almost like a story. Instead of talking about integers, objects and method calls, the specification talks about machines, eggs and boiling. It is much easier for non-coders to read, understand and comment on this kind of text. At this point, we can already try running our tests, by moving into the breakfast directory and issuing the command behave. Behave will try to run our first specification, and will fail because we have neither written the steps nor implemented the actual machine. Because Behave is really helpful, it will tell us how to proceed:

You can implement step definitions for undefined steps with these snippets:

```
@given(u'machine is standing by')
def impl(context):
    assert False
```

Now we need to define what is to be done when the breakfast machine is standing by and has 10 eggs ready to be boiled. Create the file eggs.py in the steps directory and add the following code in it:

```
from breakfast import BreakfastMachine
from behave import *

@given(u'machine is standing by')
def impl(context):
    context.machine = BreakfastMachine()

@given(u'machine has {egg_amount} eggs')
def impl(context, egg_amount):
    context.machine.eggs = int(egg_amount)
```

Steps are the bridge between the specification and the system being tested. They map the natural-language sentences into function calls that the computer can understand.

The first function defines what will happen when there should be a breakfast machine standing by. Let’s create a new instance of BreakfastMachine and store it in context, which is a special object that Behave keeps track of. It is passed from step to step, and can be used to relay information between them. Eventually, we will use it to assert that the specification has been executed correctly.

The second function defines the code that is executed when there is a step ‘machine has x eggs’, where x can be anything (in our example it is 10). {egg_amount} is automatically parsed and passed as a parameter to the function, which has to have an identically named parameter. Note that the parameters are Unicode strings, and thus need to be converted to integers in our example.

If we were to run Behave at this point, we would get an error message that BreakfastMachine cannot be imported. This, of course, is because we have not yet written it. It might feel strange to start coding from this end of the problem (specifications and tests), instead of diving headlong into coding BreakfastMachine. The advantage of approaching the task from this direction is that we can first think about how we would like our new object to behave and interact with other objects, and write tests or specifications that capture this. Only after we know how we would like to use the new object do we start writing it.

In order to continue, let’s create BreakfastMachine in the file breakfast.py and save it in the breakfast directory. This is the class our client asked us to write and which we want to test:

```
class BreakfastMachine(object):

    def __init__(self):
        super(BreakfastMachine, self).__init__()
        self.eggs = 0
        self.egg_hardness = 'soft'
        self.eggs_to_boil = 0
        self.boiled_eggs = 0
        self.boiled_egg_hardness = None

    def boil_eggs(self):
        pass
```

Steps to implement a hardness setting for eggs and the amount of eggs to boil are quite similar to setting the total amount of eggs available. The difference is that we are not creating a new BreakfastMachine, but using the one that has been stored in context earlier. This way, we configure the machine step by step, according to the specification. You can keep running Behave after each addition to eggs.py to see what kind of reports it will output. This is a good way of working, because Behave guides you regarding what needs to be done next, in order to fulfil the specification. In eggs.py, add the following:

```
@given(u'egg setting is {egg_hardness}')
def impl(context, egg_hardness):
    context.machine.egg_hardness = egg_hardness

@given(u'amount of eggs to boil is set to {amount}')
def impl(context, amount):
    context.machine.eggs_to_boil = int(amount)
```

Up to this point, we have been configuring the breakfast machine. Now it is time to get serious and actually instruct the machine to boil our eggs, and verify afterwards that we got what we wanted. Add the following piece of code to eggs.py:

```
@when(u'machine boils eggs')
def impl(context):
    context.machine.boil_eggs()

@then(u'there should be {amount} boiled egg')
def impl(context, amount):
    assert context.machine.boiled_eggs == int(amount)

@then(u'eggs should be {hardness}')
def impl(context, hardness):
    assert context.machine.boiled_egg_hardness == hardness
```

The first step is to instruct our breakfast machine to boil eggs. The next two are to ascertain that the results of boiling are what we wanted. If you run Behave at this point, an error will be displayed, because the machine does not yet know how to boil eggs. Change breakfast.py to the following final version, and the tests should pass:

```
class BreakfastMachine(object):

    def __init__(self):
        super(BreakfastMachine, self).__init__()
        self.eggs = 0
        self.egg_hardness = 'soft'
        self.eggs_to_boil = 0
        self.boiled_eggs = 0
        self.boiled_egg_hardness = None

    def boil_eggs(self):
        if self.eggs_to_boil <= self.eggs:
            self.boiled_eggs = self.eggs_to_boil
            self.boiled_egg_hardness = self.egg_hardness
            self.eggs = self.eggs - self.eggs_to_boil
        else:
            self.boiled_eggs = 0
            self.boiled_egg_hardness = None
```

Now we have a passing scenario and some code; so what’s next? You could experiment with what we have now, and see if you can write another scenario and try to soft-boil an egg, or boil multiple eggs. Just add a new scenario in boiling_eggs.feature after the first one, leaving an empty line in between. Do not repeat the feature or background sections—those are needed only once. After experimenting for a bit, continue to the second specification.
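For example, a soft-boiling scenario reusing the step patterns we already defined might look like this (a sketch, not from the customer’s requirements):

```
Scenario: boil soft eggs
Given machine has 10 eggs
And egg setting is soft
And amount of eggs to boil is set to 2
When machine boils eggs
Then there should be 2 boiled egg
And eggs should be soft
```

Because the existing steps are parameterised, no new step code is needed for this scenario.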

The second specification
Let’s start by reviewing the customer’s requirements listed at the beginning of this article.

Boiling varying numbers of eggs to different specifications, i.e., hard or soft, is taken care of. Frying and toasting should be easy to implement, since it is similar to boiling eggs. “Just make easy-to-understand commands and queries that can be used to control the machine,” catches our attention, and we ask for a clarification from our customer, who sends us an API specification that tells us how their user interface is going to interact with our machine. Since they are still working on it, we’ve got only the part that deals with eggs. The following is for ui.py in the breakfast directory:

```
class BreakfastUI(object):

    def __init__(self):
        super(BreakfastUI, self).__init__()

    def eggs_boiled(self, amount, hardness):
        pass

    def error(self, message):
        pass
```

Their user interface is not ready yet, and their API specification is not completely ready either. But we got some of it, and it is good enough for us to start working with.

We can first tackle sending error messages to the user interface. Let’s add the following code at the end of boiling_eggs.feature:

```
Scenario: boiling too many eggs should give an error
Given machine has 1 eggs
And amount of eggs to boil is set to 5
When machine boils eggs
Then there should be error message "not enough eggs"
```

The next step, like before, is to implement new steps in eggs.py:

```
from breakfast import BreakfastMachine
from ui import BreakfastUI
from mockito import verify, mock
from behave import *

@given(u'machine is standing by')
def impl(context):
    context.ui = mock(BreakfastUI)
    context.machine = BreakfastMachine(context.ui)

...

@then(u'there should be error message "{message}"')
def impl(context, message):
    verify(context.machine.ui).error(message)
```

We also need to modify our BreakfastMachine to connect to the user interface when it starts up. We do this by modifying the __init__ method in breakfast.py as per the following:

```
def __init__(self, ui):
    super(BreakfastMachine, self).__init__()
    self.ui = ui
    self.eggs = 0
    self.egg_hardness = 'soft'
    self.eggs_to_boil = 0
    self.boiled_eggs = 0
    self.boiled_egg_hardness = None
```

There are two interesting bits in eggs.py. The first is in the method where we set the machine to stand by. Instead of using a real BreakfastUI object (which wouldn’t have any implementation anyway), we create a test double that looks like BreakfastUI, but does not do anything when called. However, it can record all the calls, their parameters and order.

The second interesting part is the function where we verify that an error message has been delivered to the UI. We call verify, pass the UI object as a parameter to it, and then specify which method and parameters should be checked. Both verify and mock are part of Mockito, and offer us tools to check the interactions or the behaviour of objects.

If we run Behave after these modifications, we are going to get a new error message, as shown below:

```
Then there should be error message "not enough eggs" # features\steps\eggs.py:35
Assertion Failed:
Wanted but not invoked: error(u'not enough eggs')
```

This tells us that the specification expected a call to the method error, with the parameter ‘not enough eggs’. However, our code does not currently do that, so the specification fails. Let’s fix that and modify how the machine boils eggs (breakfast.py):

```
def boil_eggs(self):
    if self.eggs_to_boil <= self.eggs:
        self.boiled_eggs = self.eggs_to_boil
        self.boiled_egg_hardness = self.egg_hardness
        self.eggs = self.eggs - self.eggs_to_boil
    else:
        self.boiled_eggs = 0
        self.boiled_egg_hardness = None
        self.ui.error('not enough eggs')
```

We added a call to the UI object’s error method in order to let the user interface know that there was an error, and that the user should be notified about it. After this modification, Behave should run again without errors and give us a summary:

```
1 feature passed, 0 failed, 0 skipped
2 scenarios passed, 0 failed, 0 skipped
12 steps passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.0s
```

There is a distinction between these two approaches. In the first one, we wanted to verify the state of the system after it boiled the eggs. We checked that the amount of eggs boiled matched our specification, and that they were of the correct hardness.

In the second case, we did not even have a real user interface to work with. Instead of writing one from scratch in order to be able to run tests, we created a test double: an object that looks like a real user interface, but isn’t one. With this double, we could verify that the breakfast machine calls the correct methods with the correct parameters.
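The same interaction-checking style is also available in the standard library’s unittest.mock, in case Mockito-Python is not an option. A minimal sketch of the equivalent of mock()/verify() used above:

```python
from unittest.mock import Mock

# A test double standing in for the UI, like mock(BreakfastUI) above.
ui = Mock()

# The code under test would make this call when eggs run out.
ui.error('not enough eggs')

# Verify the interaction, analogous to verify(ui).error('not enough eggs').
ui.error.assert_called_once_with('not enough eggs')
```

The double records every call made on it, so the assertion fails if the method was never invoked, or was invoked with different parameters.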

Now we have the basics of boiling eggs. If you want, you can continue from here and add more features to the breakfast machine, like frying eggs and bacon, and squeezing juice out of oranges. Try adding a call to the user interface after the eggs have been boiled, and verify that you are reporting the correct number of eggs and the specified hardness.

After getting comfortable with the approaches and techniques shown here, you can start learning more about Behave and Mockito-Python. Good places to start are their download pages, as both offer short tutorials.

In this brief example, we had a glimpse of several different ways of writing executable specifications. We started our project from a natural-language file that specified how we wanted our machine to boil eggs. After this specification, we wrote an implementation for each step, and used that to help us to write our actual breakfast machine code. After learning how to verify the state of a system, we switched our focus to verifying how objects behave and communicate with each other. We finished with a breakfast machine that can boil a given number of soft or hard-boiled eggs, and that will issue a notification to the user interface in case there are not enough eggs in the machine.

# New learning project

I learn by doing, and I have never really learned anything about web development. So in order to fix that, I started a new hobby project (as if I didn’t have enough of them already): unexpected-raptor.

Unexpected-raptor is a web site written with Python and Django. The goal is to write a tool that helps me manage our group’s BattleTech games. BattleTech is a really great and fun game, but running a full mercenary company by hand can get a bit tedious because of all the calculations that need to be taken care of between games.

Since writing a straight conversion of the rules into digital form would be breaching the IP rights of the respective owners (I seem to recall that Microsoft actually owns the rights for digital games), I’m writing a more general system that can be configured to take care of BattleTech too.

This is not the first of its kind; there exist multiple products that cover various parts of the process. But since I’m doing this to learn web development and not to game (although that’s a nice bonus), I’ll be rolling out yet another tool.

# satin-python is now available on PyPI

PyPI is the Python package index (also known as the Cheese Shop). Satin is a UI testing library that I have been tinkering with on and off while developing Herculeum. I have now released version 0.1.0 of Satin on PyPI, in order to make it easier for others to download it.

The change is reflected in the requirements-dev file of Herculeum. Now it is possible to install almost all dependencies of the game with a single command:

```
pip install -r requirements-dev
```

# Having fun with context managers

Working with databases most of the time means working with transactions. You open a transaction, perform data manipulation and commit the transaction to finish up. If something goes wrong, instead of committing you perform a rollback, which undoes everything you were doing inside the transaction. Sometimes you want to commit the transaction after a single simple database operation; sometimes you want to string lots of simple database operations together and commit only after all of them have been successfully performed. Nested transactions are one solution for this; another is to let the business logic dictate when the operation is finished. The latter option means that the data access layer should not care about transactions.

Python and Hy have a very nice and elegant way of handling transactions with SQLite databases. The following code is a function that is used to save a person into a database. The parameter person is a dict and connection is an SQLite connection. There is no transaction handling on this level.

```
(defn save-person [person connection]
  (let [[cursor (.cursor connection)]
        [params (, (:name person) (:id person))]
        [person-id (:id person)]]
    (if person-id
      (.execute cursor "update person set name=? where OID=?" params)
      (do (.execute cursor "insert into person (name, OID) values (?, ?)" params)
          (let [[new-person-id cursor.lastrowid]]
            (assoc person :id new-person-id))))
    person))
```

The following code is a test that saves and loads a person from a database (an in-memory database in this case). The transaction handling is on this level, although it might not be apparent at first glance.

```
(defn test-save-person []
  (with [connection (create-schema (get-in-memory-connection))]
    (let [[person {:id None :name "Pete"}]
          [saved-person (save-person person connection)]
          ;; load-person is assumed to be defined alongside save-person
          [loaded-person (load-person (:id saved-person) connection)]]
      (assert-that (:name loaded-person) (is- (equal-to "Pete"))))))
```

The magic is in the with statement. The connection can be used as a context manager, which delegates transaction management to it. When the with block is entered, a transaction is automatically started. When the with block ends, the transaction is automatically committed, causing the changes to be written into the database. If there is an exception during the execution of the with block and the user code does not handle it, with catches it and performs a rollback.

The solution looks really nice and elegant. The drawback is that it is not possible to call another function that uses a with block from within a with block, because that would cause problems with transaction management.
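The same pattern in plain Python looks like this (a minimal sketch using sqlite3’s in-memory database):

```python
import sqlite3

connection = sqlite3.connect(':memory:')
connection.execute('create table person (name text)')

# Entering the with block starts a transaction; a clean exit commits
# it, and an unhandled exception triggers a rollback instead.
with connection:
    connection.execute('insert into person (name) values (?)', ('Pete',))

names = [row[0] for row in connection.execute('select name from person')]
print(names)  # prints ['Pete']
```

Note that the with block commits the transaction but does not close the connection; that still has to be done separately.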

# Running nosetests for Hy

Getting nosetests to work with Hy was easier than I thought in the end. The trick I used was to create a Python module that acts as an entry point. In that module I first import Hy, which in turn lets me import my test modules:

```
import hy
from .test_person_repository import *
```

After that I can run nosetests and end up with:

```
tuukka@new-loft:~/programming/python/sandvalley/src/sandvalley/tests$ nosetests
..
----------------------------------------------------------------------
Ran 2 tests in 0.015s

OK
```

Pretty nifty, I must say. Having to import everything from the modules in an entry-point module is a bit of a hassle, but that’s something that I should be able to automate.

The same method can be used to call Hy code from a Python program.

# hy

I stumbled upon a new programming language called hy. Basically, it is for Python what Clojure is for Java: a LISP implementation. I have played around with it a bit, mainly for my Sandvalley project, and at least so far I find it easier to use than Clojure. I think this is probably because the Python standard library is much more familiar to me than Java’s.

If I have understood everything correctly, interfacing hy and Python is extremely simple. hy programs are turned into Python abstract syntax trees, and the interpreter does not see any difference at all. There are some conventions for how, for example, function names are mapped (some-function gets turned into some_function).

Hy seems to have quite an active development community. What I currently miss is documentation, but eventually that will pick up too.

(sorry, no code samples this time, still have to figure out how things mesh together :))

# Code samples in API documentation

Nat Pryce has an interesting idea for maintaining code samples in API documentation. The gist is that the examples are maintained as programs in the same source control repository as the API itself. This allows them to be tested and kept in sync with the evolving API. Special markup in the programs is used to write the prose and explanations. From the source code, an HTML document is generated that shows the prose and the actual program. As the reader navigates the explanations one by one, only the current explanation is shown, which reduces clutter and guides the reader through the program step by step.

I think this is a pretty nifty tool. Especially the ability to test the examples is important in order to keep them in sync with the code. My approach has been to use Sphinx + DocTest, as shown in the pyherc documentation.

More can be read from here. The tool itself is available here.

# Refactoring tests

Work on the magic system continues. Last time I added the ability to create effects for spells. This time I work on triggering those effects. Since the test for this step is really close to the previous one, and the code required to make it pass is just a couple of lines, I’ll concentrate on writing about something else: refactoring tests.

The test I wrote is as follows:

```
def test_triggering_effect(self):
    """
    Casting a spell should trigger the effect
    """
    effects_factory = mock(EffectsFactory)
    effect = mock(Effect)
    dying_rules = mock(Dying)
    when(effects_factory).create_effect(key = 'healing wind',
                                        target = self.character).thenReturn(effect)

    effect_handle = EffectHandle(trigger = 'on spell hit',
                                 effect = 'healing wind',
                                 parameters = {},
                                 charges = 1)

    spell = (SpellBuilder()
                .with_effect_handle(effect_handle)
                .with_target(self.character)
                .build())

    spell.cast(effects_factory,
               dying_rules)

    verify(effect).trigger(dying_rules)
```

You probably notice how similar it is to the code I wrote in the previous step. It didn’t bother me when I was working on making it pass, but as soon as I was done with that, it was time to clean things up.

The first step was to move these two similar tests into a separate test class called TestSpellEffects. Maintenance is easier when similar tests, or tests for a certain piece of functionality, are close to each other.

Next I extracted the duplicate code from each test and moved it into a setup function:

```
def setup(self):
    """
    Setup test cases
    """
    self.character = (CharacterBuilder()
                        .build())

    self.effects_factory = mock(EffectsFactory)
    self.effect = mock(Effect)
    self.dying_rules = mock(Dying)
    when(self.effects_factory).create_effect(key = 'healing wind',
                                             target = self.character).thenReturn(self.effect)

    self.effect_handle = EffectHandle(trigger = 'on spell hit',
                                      effect = 'healing wind',
                                      parameters = {},
                                      charges = 1)

    self.spell = (SpellBuilder()
                    .with_effect_handle(self.effect_handle)
                    .with_target(self.character)
                    .build())
```

The setup function is run once for each test, and it is the perfect place for code that would otherwise be duplicated. It’s a good idea to pay close attention to what actually ends up in setup and to keep it as small as possible. The more code there is in the setup function, the harder the tests are to read, at least for me.

After this change the actual tests were much smaller:

```
def test_creating_effect(self):
    """
    Casting a spell should create effects it has
    """
    self.spell.cast(self.effects_factory,
                    self.dying_rules)

    verify(self.effects_factory).create_effect(key = 'healing wind',
                                               target = self.character)

def test_triggering_effect(self):
    """
    Casting a spell should trigger the effect
    """
    self.spell.cast(self.effects_factory,
                    self.dying_rules)

    verify(self.effect).trigger(self.dying_rules)
```

Much nicer than the first version. This highlights the fact that test code is just as important as production code, and deserves equal attention and care.