Let your users direct the tech

I was trying to explain to my co-founder Liz last night the changes that I’d been making to Cake, the system that we use to process enquiries from our website, when she frowned and said, “that sounds really complicated.”

It stopped me in my tracks – technology is supposed to be enabling our company to move faster! She went on, “Trello is so simple in comparison!” She’s right on that – Trello IS simple, but it only covers one part of the process, while Cake is trying to do all the things. But it was time to stop ignoring that niggling feeling that I was losing my direction on what I was building, and revisit the system with the person who actually uses it every day.

So co-founder-2 (Phil) and I sat down today, hooked his laptop up to a large TV screen and started putting orders through on the staging site, all the way from the customer placing them, through to the supplier accepting them, and the customer confirming and paying. I played the part of the customer, and Phil took on the two hats of the supplier and himself, doing his daily job of processing orders for You Chews. It was a bit confusing – would have been better with Liz to act as supplier!

Things started off slowly. I’m using a Kanban system to pull changes through to production, and I’m releasing changes sometimes several times a day. For Phil, that means that the system and processes he uses are changing every day – and even when it seems to me like it’s for the better, sometimes it’s just a PITA. It’s not until you see somebody else using the system that you built that you start to see whether or not it will actually work!

The first order was a bit of a struggle. It became obvious fairly quickly that Cake needs to update the order status automatically – having that as a manual extra step makes the process significantly more confusing.

With the second order, we discovered some bugs in the steps to email the supplier – luckily they were just config issues, so I fixed them, and we were off again.

The third and fourth orders started to go more smoothly. Phil was following the steps he had noted down for himself, knew that there would be upcoming changes to automate the status updates, and asked loads of questions. He consistently deleted one part of the email that would go to suppliers, so it was really clear that it wasn’t needed. Towards the end of the meeting he was even getting excited – at one point he exclaimed, “This is great! This is great!” We discovered a ton of places where small changes like wording could make the process smoother, and I walked away with a much clearer vision of what I need to create to make the system useful for him. Phil left saying that he’d enjoyed the meeting – we spent about two hours in that room, so believe me that is quite an achievement!

It reinforced a few things for me. Firstly, and most obviously, test the system with real users. They can be customers or staff – and don’t just do a quick test: let them use it a few times, see whether providing help and direction makes a difference, and watch what they do. The more they use the system, the more useful feedback they’ll be able to give you.

Secondly, if you’re working in a fast-paced environment, constantly pushing through changes, it’s a good thing to stop and review the bigger picture every so often. It doesn’t need to be a big, formal process – I spent five minutes sketching a diagram of what I was trying to do, and fifteen minutes talking it through, and was far more confident that I was heading in the right direction.

Lastly: if you are going to do user testing, expect to walk away with a longer to-do list!

Debugging Cucumber tests on Codeship

I’ve been running into a few problems recently where my Cucumber tests fail in the build but not on my local machine. It’s often a timing issue – the test steps run slightly faster than the JavaScript on the build machine – but it can be really hard to catch.

A couple of things have worked well. First of all, I’ve added the capybara-screenshot gem to my Cucumber tests – setting it up was as simple as adding it to the Gemfile and putting one line in the env.rb for my tests:

require 'capybara-screenshot/cucumber'
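
The Gemfile side is just the gem itself – a minimal sketch (putting it in the test group is my preference, not a requirement):

# Gemfile
group :test do
  gem 'capybara-screenshot'
end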

Every time a test fails, it captures an HTML dump and a screenshot image. I’ve discovered that running tests without JavaScript – which uses the default headless Capybara driver – does not create an image, and that using the Selenium driver in Chrome throws an error, but neither has really caused me an issue.

The next step was accessing the details of the failures on the build. I’m using Codeship, an awesome cloud-based CI platform that offers 100 builds a month for FREE (plenty for a solo developer on a startup!). Codeship allows me to create a debug build and access it via SSH, which is great because now I can generate the images and HTML from the failed tests.

The last problem to solve was getting the images off the build machine. There are probably a few ways to skin this cat – I started off just using the HTML, but a picture is not only worth a thousand words, it’s usually a much faster way of seeing the root of a problem than picking through the HTML code. Turns out SFTP works perfectly – I just use the server IP and port from the SSH command that Codeship provides:

sftp -P [port number] user@ip_address
get [image file name]

I do still need to ssh in and run the failing tests, but finally I can see the broken screens! Much faster.

Using Variants in Rails 4

I’ve been using Variants in Rails 4 to change my layouts for one controller action. It’s really straightforward – just set the variant name as a symbol, for example:

request.variant = :popup
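
I set the variant in the controller; here’s a minimal sketch (the controller name and the params check are illustrative, not my actual logic):

class ThingsController < ApplicationController
  before_action :set_variant

  def new
  end

  private

  # use the popup variant when the page is requested as a popup
  def set_variant
    request.variant = :popup if params[:popup].present?
  end
end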

Then name the template file with the variant name in it, e.g.

new.html+popup.haml

Another neat trick I discovered is that you can also use the variant to control the layout file. I wanted a different layout for my popup content – one that didn’t include all of the usual bumph like headers and menus. Turns out I didn’t need any other code; I just created a template in my layouts folder with the variant in its name:

website.html+popup.haml

In addition to that, I can also use it with partials. Inside my haml file, I reference a partial as I would normally, and when the variant is set, Rails automatically picks the variant version of the partial if one exists.

...
= render partial: 'customer' 
...
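
With the variant set, the lookup works something like this (the paths are illustrative):

app/views/orders/_customer.html+popup.haml   used when request.variant is :popup
app/views/orders/_customer.html.haml         used otherwise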

Now I can use the popup variant across multiple controllers with this same simple layout.

I also discovered that setting variant to nil causes a problem. I wanted to do this:

request.variant = find_my_variant
...
...
...
def find_my_variant
  return nil if some_logic_determining_no_variant
  :variant_name
end

Instead I have to do this:

request.variant = find_my_variant unless find_my_variant.nil?
...
...
...
def find_my_variant
  return nil if some_logic_determining_no_variant
  :variant_name
end

I’m interested to know if there is a way to set a “blank” or default variant so that I can remove that extra check …

Try and Fetch

One of my colleagues showed me this awesome little method in Rails called try.

How many times have you written code to check for nil like this?

def get_value
  if @my_object.nil?
    ""
  else
    @my_object.value
  end
end

(well I have, many times!)

You can call try on an object when you’re not sure whether or not it will respond to that method. If it doesn’t, then you just get back nil:

def get_value
  @my_object.try(:value)
end

It’s particularly useful if you want to call a method on something you get back from a hash that may not be there:

@my_object[:my_key].try(:method)

While reading up on this, I also discovered the fetch method for Hashes, which allows me to specify a default value to return if the key is missing. (Unlike [], fetch with no default raises a KeyError for a missing key, which catches typos early.)

This means I can clean up stuff like this:

my_array = @my_hash[:my_key]
my_array.each {|array_item| ... some code ... } unless my_array.nil?

To this:

@my_hash.fetch(:my_key, []).each {|array_item| ... some code ... }
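
Putting try and fetch together – a small runnable sketch with a made-up settings hash:

require 'active_support/core_ext/object/try' # Object#try outside a full Rails app

settings = { timeouts: [5, 10] }

settings[:timeouts].try(:first)                  # => 5
settings[:missing].try(:first)                   # => nil, no NoMethodError
settings.fetch(:missing, []).each { |t| puts t } # no-op, no nil check needed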

Fewer if statements, less branching, and so hopefully fewer bugs to write!

Controlling Asset Precompilation in Rails

I’ve run into issues recently with asset precompilation after introducing new gems containing partial Sass files (Bootstrap and Font Awesome).

Rails allows you to specify patterns or Procs to determine which files should – or in our case should not – be precompiled, like this:

Rails.application.config.assets.precompile = [Proc.new {|e| !(e =~ /(font-awesome|bootstrap)\/_/) } ]

Upgrading to RSpec 3

I recently upgraded our Rails application from RSpec 2.99 to RSpec 3, which is currently in its second beta. I was expecting it to be nice and straightforward, but sadly it was not! This was partly because we had been using RR for mocking and stubbing, and its syntax was not compatible with RSpec 3.

In the process of upgrading I learned a bunch about the new RSpec “allow” syntax, which in my opinion is far nicer than the RR one we were using.

Here’s how to stub:

allow(thing).to receive(:method).with(args).and_return(other_thing)

Mocking can be done in the same way by substituting “expect” for “allow” – although in most cases the tests read better if you verify that the stub received the method, using the following matcher:

expect(thing).to have_received(:method).with(args)
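
Here’s the whole stub-then-verify pattern in one runnable spec (the Greeter class and the printer double are made up for illustration):

require 'rspec/autorun'

class Greeter
  def initialize(printer)
    @printer = printer
  end

  def greet(name)
    @printer.print("Hello, #{name}!")
  end
end

RSpec.describe Greeter do
  it "prints a greeting" do
    printer = double("printer")
    allow(printer).to receive(:print)

    Greeter.new(printer).greet("Liz")

    expect(printer).to have_received(:print).with("Hello, Liz!")
  end
end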

Note that this is different from the previous RR syntax, which was expect(thing).to have_received.method(args).

You can also use argument matchers, for example:

expect(thing).to have_received(:method).with(a_kind_of(Class))

And you can verify how many times a method was called:

expect(object).to have_received(:method).at_least(:once)
expect(object).to have_received(:method).exactly(3).times

The .with(args) and .and_return(other_thing) methods are optional. You can also invoke a different function:

allow(thing).to receive(:method) { |args| block }

Or call the original function:

allow(thing).to receive(:method).and_call_original

Another thing we used fairly often was any_instance_of. This is now cleaner (RR used to take a block):

allow_any_instance_of(Class).to receive(:method).and_return(value)
allow_any_instance_of(Class).to receive(:method) { |instance, args| block }

If you pass a block, the first argument when it gets called is the instance it is called on.

In RSpec 3, be_false and be_true are deprecated. Instead, use eq false or eq true. You can use be in place of eq, but when the test fails you get a longer error message, pointing out that the failure may be due to an incorrect object reference, which is irrelevant and kind of annoying.

Using RSpec mocks means that we can create new mock or stub objects using double(Class_or_name) rather than Object.new, which results in tidier error messages and clearer test code.

Stubbing a chain of methods can also be a handy tool – I only found one place where we used it, but it is useful when chaining together methods to search models.

allow(object).to receive_message_chain(:method1, :method2)
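
For example, with a hypothetical Article model and a couple of scopes:

allow(Article).to receive_message_chain(:published, :recent).and_return([article])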

More info:

  1. https://relishapp.com/rspec/rspec-mocks/docs
  2. https://github.com/rspec/rspec-mocks

Update: it turns out I was missing a configuration option in RSpec. It should have worked with RR by doing this:

RSpec.configure do |rspec|
  rspec.mock_with :rr
end

Thanks Myron for clearing this up :)

Scala – Day 1

I was looking forward to Chapter 5 of Seven Languages in Seven Weeks: Scala. I’ve heard quite a bit about it in the last few weeks at various user groups, and I’m hoping to get my hands on it at some point in my upcoming work with Atlassian, so this was a good time to dive in. As a personality, Scala is assigned Edward Scissorhands in the book: “awkward, and sometimes amazing”.

I tried at first to install it with Homebrew, which just failed with a 404, so I downloaded the package and installed it manually, which worked fine.

Day 1 was pretty straightforward – type a few things into the console and have a look at what you get back. This chapter delves into loops and ranges and compares Scala with both Java and Ruby, finishing up with some simple class definitions and traits. As with most of the chapters so far, it very quickly introduces a lot of ideas – not much detail, but enough to get me thinking.

In the self study for Day 1, the first questions are reasonably simple.

  1. Here’s a link to the Scala API
  2. There are lots of blog posts comparing Java to Scala, most covering just one aspect. I liked this write-up based on a year of experience with Scala.
  3. A discussion of val versus var.

The next challenge was to write a game that will take a tic-tac-toe board (noughts and crosses for the Brits …) and determine a winner. The bonus part of the challenge would be to make it into a game that two people could play, so I attacked this part first.

I started off using Sublime Text 2, then decided to switch to IntelliJ with the Scala plugin. I like Sublime, but was hoping IntelliJ would give me better auto-completion, refactoring tools and keyboard shortcuts. It seems to work OK – I had to point IntelliJ at my Scala installation, and it still pops up some errors even though the code compiles and runs just fine. Perhaps in Days 2 and 3 I’ll dig into those a bit more.

In writing the code for the game, I tripped up on a few things. I had Martin Odersky’s book Programming in Scala to refer to as well, which helped me solve most things really quickly.

Firstly, the chapter hadn’t covered how to declare a return type for a function. Scala doesn’t require the return keyword – the last expression in the function is its return value – but if you define a function using the procedure syntax (no equals sign, as in the first example below), it returns Unit.

Here’s a function without a return type:

def myMethod() {

}

And with:

def myMethod() : Int = {
  42 // the last expression is the return value
}

In my next mistake, I tried to create an array and then add items to it – Arrays are mutable in Scala, but you can’t change the size of them. I didn’t even have a good reason for doing this, except that I thought it would make the code prettier, so I changed it to a List (immutable) instead. And it looked fine :)

I don’t know a lot about functional programming yet, so I did have a couple of classes in my solution. I wanted to make sure I had no mutable objects though.

When I finished my initial solution, I had three files – I ended up taking the lazy way out, copying all the classes into one file and running it from the terminal with scala tictactoe.scala. Here’s the initial attempt. I like that it doesn’t have any mutable objects, it’s simple, I don’t have to worry about blanks, and the map method to get the positions is nice and simple. I don’t much like the magic winning combinations in the Judge class, and I don’t like that it will barf if you don’t enter the moves in a valid format.

Next challenge: on to Day 2, and also trying to extend the tic-tac-toe game for the bonus challenge!

Frying my brain with Prolog

I recently picked up the Seven Languages in Seven Weeks book again, with the intention to start where I left off with Prolog, around the end of Day 1. As I was doing research for the exercises, I came across a great description that summed up exactly how I felt about this chapter:

“Today, Prolog broke my brain. The chapter started with recursion, lists, tuples, and pattern matching, all of which were tolerable if you’ve had prior exposure to functional programming. However, after that, we moved onto using unification as the primary construct for problem solving, and the gears in my head began to grind.”

At first it seemed fairly easy to follow – very different to anything I’d done before, but that’s why I started with the Seven Languages book: to learn about new and different techniques in programming.

Reading through the Day 2 section about lists and recursion, I started to find myself getting lost in the examples, and it took a long time to understand what was going on. I couldn’t complete the Day 2 exercises without a little help from the interwebz, although by the time I finished working through them, I did understand what was going on. Switching my brain from solving problems in terms of algorithms to thinking in terms of rules continued to bite me throughout the chapter, though.

Some of the things I learned through Day 2 are pretty basic, but for a complete newbie to Prolog, they weren’t obvious.

In the factorial exercise, I realised that within a rule, I could add a line to validate the parameters – in this case that X > 0. Super obvious maybe, but Day 1 was all about matching rules and so this was new.

The next thing I learned was that you can have two versions of a rule with different conditions. I was already creating multiple versions of a rule to unify specific values such as 0 for factorial, but this was a different way to think about it.

As I worked through the sudoku and queens exercises, I still found myself wanting to do something like this:

Diags1 = [R1+C1, R2+C2, R3+C3, R4+C4, R5+C5, R6+C6, R7+C7, R8+C8],

… which just doesn’t work!

I did get there in the end with the queens solution, with a little help from the book to point me in the right direction for the diagonals.

In conclusion, I definitely learned a lot from this chapter, but I struggled plenty as well! It was worth fighting through to the end though, as the concepts did start to make sense!

I don’t think we should always pair, all the time. There, I said it.

I’ve been working in “agile” teams for several years, and since starting at ThoughtWorks one practice that we always seem to use and promote is pair programming.

I think pair programming is great, for a few reasons:

  • Knowledge sharing and avoiding silos or single points of failure.
  • Bringing people up to speed – especially for new team members and juniors.
  • Building relationships and communication. I found after my first project as a developer, I had much closer relationships with the developers I’d paired with than any of the other people I’d worked with as a BA on previous projects. This makes for better teams.
  • Collective code ownership – if you’re not the only one working on the code, you can’t feel too much like you own it.
  • Better decision making – by having two people discuss and agree on a solution.
  • Faster problem solving – especially in complex systems.
  • Promotes consistency of code style and standards, especially if the pairs rotate.

Yay! So should we always pair all the time?

I would actually say No to that. Having spent a decent amount of time pairing, and experienced many eager and reluctant pairs, been the “junior” and the “senior”, on both work and fun projects, I’ve found there are definitely some frustrations.

  • It’s exhausting. If you’ve ever done a solid day of development on a difficult project with a pair, you probably came away shattered.
  • After working with the same person for a long time, both people stop learning from each other. A lack of rotation also means you still end up with knowledge silos – just made up of two people rather than one.
  • Sometimes, pairs are well matched, but more often one person is significantly faster, usually because they’re more knowledgeable about the codebase or the work being done. Over time, this can be frustrating.
  • Pairing on simple problems can feel like a bit of a waste. I’ve definitely been in this position towards the end of a project.
  • Some people just don’t like pairing. Even I don’t like pairing when it’s all the time.
  • I’m not convinced that pairing significantly reduces the number of bugs when you practice TDD and have a good suite of automated tests. The navigator may see obvious errors first, but more often than not the automated tests find the more interesting ones before the code is even checked in (although it does help having two people to solve them).

My last project was a two-person delivery gig. My colleague Hari and I discussed upfront whether we would pair, and in the end we didn’t – for some of the reasons above, but also because some of the benefits of pairing were much less on a two-person team:

  • We didn’t really have a complex problem to solve – it was a reasonably simple, small website.
  • Six weeks of pairing with me would probably have driven poor Hari insane.
  • We were sat directly next to each other and were constantly discussing the project, so we didn’t need to work on the same code to make ourselves communicate or share decisions.

As well as agreeing to talk a lot, share important decisions about the code design, and refactor each other’s code where we saw a need to, we also decided to rotate the stories we were working on to try and avoid any silos or single points of failure.

So how did it work?

In retrospect, I do think it was the right decision. I discovered that I actually enjoyed working alone (which felt like a terrible admission for a while) – although I still prefer to work in a larger team with some pairing, I now believe that developers also need a break from pairing some of the time. It’s a matter of finding the right things to pair on, and the right time to work alone. For us, I think pairing would have slowed us down.

However, at the end of the project, I discovered that there were definitely some gaps in my knowledge around the things that Hari had implemented. We were probably not rigorous enough about recognising when we did need to pair and doing it, and towards the end of the project as time grew tighter we did not swap stories enough. We were pretty good at changing each other’s code, and I think that in general our coding style was fairly consistent – although this was probably the case before we even started working together, it seems to be a ThoughtWorks “thing”!

From a quality perspective, only one bug was reported in UAT, and it wasn’t even much of a bug (the calendar started on the wrong day) – and I think this is because we were pretty rigorous about our automated testing practices, including integration and acceptance testing with Cucumber.

In future, I would still promote pairing but perhaps a little bit less dogmatically than I used to. I want to make sure I continue to use it when there are clear benefits, and especially when introducing new team members or for complex problems. However, I would like to try and ensure developers have more breaks from pairing and that work is organised to allow for that, as well as a good degree of rotation. How often the rotation happens, and what proportion of time is spent pairing versus working alone, I think should always be dependent on the team, the problem at hand and the situation.

CoffeeScript

On my last project, we decided to try out CoffeeScript. I was pleasantly surprised at how easy it was to get started with it and how nice it was to use, instead of JavaScript.

I’m a fan of object-oriented JavaScript code, but there are many different ways to structure the JavaScript (using the prototype, or constructing an object each time with private methods, to name a couple). I’ve seen several medium-to-large JavaScript codebases that use a range of techniques with no consistent pattern. CoffeeScript solves this problem by giving me a standard way to create new classes (which IMO is much easier to read and understand than JavaScript methods defined on the prototype). It’s performant (since the methods are defined on the prototype) and tries to produce easily-readable, JSLint-able code. (I had my first look at the JavaScript produced by ClojureScript last night, and it’s definitely much harder to navigate than the code produced by CoffeeScript).

Here are some tips to get started:

  1. The back-end was .NET and we were working in Visual Studio, so we used the free Web Workbench plugin to generate JavaScript files from our .coffee files. It regenerates the files automatically on every save, which was really handy, and errors appear in the Visual Studio output window.
  2. If you’re using the Node Package Manager, you can install CoffeeScript with that:
    npm install -g coffee-script
    You can then use the coffee command to compile coffeescript files:
    coffee -c myfile.coffee
    This also works with wildcards:
    coffee -c src/*.coffee

I was pretty amazed how easy it was to integrate CoffeeScript with other JavaScript frameworks, including jQuery and Jasmine. Jasmine tests in CoffeeScript look something like this (a sketch in the style we used – the Calculator class is made up):
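
describe "Calculator", ->
  it "adds two numbers", ->
    calculator = new Calculator() # hypothetical class under test
    expect(calculator.add(2, 3)).toBe 5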

What this means is that if you want to start using CoffeeScript, you can – you don’t even need to rewrite any of the existing JavaScript if you don’t want to.

In general, CoffeeScript reads much more like English than JavaScript – === is replaced by is, !== becomes isnt, you can use unless instead of if (!) and it also has Ruby-style string interpolation, just to name a few nice things.

We did run in to a couple of things that tripped us up, so here are some things to watch out for.

Classes are not global

Chances are, you’ll be creating classes across a number of files. If you just create your classes using class MyClass, you won’t be able to instantiate one in another file using new MyClass, since CoffeeScript wraps the code in each file in its own closure (which is a Good Thing).

You can solve this by defining classes on the global object, i.e. class window.MyClass. However, the better practice is to use namespaces, defining each class as class MyNamespace.MyClass. We added a line to the top of all of our files to ensure the parent namespace was defined:

window.MyNamespace or= {}

Binding to this

You can access properties of the object with the @ symbol – @propertyName compiles to this.propertyName in the generated JavaScript.

However, for methods that are to be used as the targets of events, you will need to bind to the original value of this. CoffeeScript makes this easy: you just use => to define the function instead of ->.

Here’s an example of when you don’t need to bind to the original value of this (a sketch – the class is made up):
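
class MyNamespace.Counter
  constructor: ->
    @count = 0

  # called directly on the instance, so `this` is already the Counter
  increment: ->
    @count += 1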

And an example of binding to an event target, when you do need the original value of this to access properties (again a sketch, using jQuery as in the rest of our code):
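
class MyNamespace.Menu
  constructor: (@title) ->
    # with a thin arrow, `this` inside the handler would be the clicked DOM element;
    # the fat arrow on open keeps it bound to the Menu instance
    $('#menu').on 'click', @open

  open: =>
    console.log "Opening #{@title}"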

Is it worth a try?

I would definitely say yes! In particular, for larger JavaScript projects, if you don’t have an established consistent way of writing JavaScript, or if you’re writing object-oriented JavaScript code, it will probably simplify a lot of the code base and remove the danger of accidental bugs like creating global variables.

If you’re more in favour of functional-style JavaScript, or if you’re a JavaScript guru and just luuuurrrrve those curly braces, then you probably won’t get as much from CoffeeScript. Maybe you could try ClojureScript instead ;)