Using Variants in Rails 4

I’ve been using Variants in Rails 4 to change my layouts for one controller action. It’s really straightforward – just set the variant name as a symbol, for example:

request.variant = :popup

Then name the template file with the variant name in it, e.g.

new.html+popup.haml

Another neat trick I discovered is that you can also use the variant to control the layout file. I wanted to have a different layout for my popup content, one that didn’t include all of the usual bumph like headers and menus. It turns out I didn’t need any other code – I just created a template in my layouts folder with the variant in its name:

website.html+popup.haml

In addition to that, I can also use it with partials. Inside my haml file, I reference a partial as I would normally, and when the variant is set, it automatically chooses the partial with the variant if there is one.

...
= render partial: 'customer' 
...

Now I can use the popup variant across multiple controllers with this same simple layout.

I also discovered that setting variant to nil causes a problem. I wanted to do this:

request.variant = find_my_variant
...
...
...
def find_my_variant
  return nil if some_logic_determining_no_variant
  :variant_name
end

Instead I have to do this:

request.variant = find_my_variant unless find_my_variant.nil?
...
...
...
def find_my_variant
  return nil if some_logic_determining_no_variant
  :variant_name
end
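One way to avoid calling find_my_variant twice is to memoize it. Here’s a minimal plain-Ruby sketch of the idea (this is not real Rails – the controller, the popup variant and the logic method are all placeholders, and request is faked with an OpenStruct):

```ruby
require 'ostruct'

# A minimal sketch (not real Rails) of memoizing the variant lookup so
# the nil check doesn't run the logic twice. All names are placeholders,
# and `request` is faked with an OpenStruct.
class PopupsController
  attr_reader :request, :lookups

  def initialize
    @request = OpenStruct.new
    @lookups = 0 # counts how often the real logic runs
  end

  def show
    request.variant = find_my_variant if find_my_variant
  end

  private

  def find_my_variant
    # The defined? guard memoizes nil results too, unlike a plain ||=
    return @my_variant if defined?(@my_variant)
    @lookups += 1
    @my_variant = some_logic_determining_no_variant ? nil : :popup
  end

  def some_logic_determining_no_variant
    false
  end
end
```

The defined? guard means the logic runs at most once per request, even when it decides there is no variant.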

I’m interested to know if there is a way to set a “blank” or default variant so that I can remove that extra check …

Try and Fetch

One of my colleagues showed me this awesome little method in Rails called try.

How many times have you written code to check for nil like this?

def get_value
  if @my_object.nil?
    ""
  else
    @my_object.value
  end
end

(well I have, many times!)

You can call try on an object if you’re not sure whether or not it will respond to a method. If it doesn’t, then you just get back nil:

def get_value
  @my_object.try(:value)
end

It’s particularly useful if you want to call a method on something you get back from a hash that may not be there:

@my_object[:my_key].try(:method)
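For what it’s worth, Rails 4’s try behaves roughly like this simplified plain-Ruby sketch (I’ve called it try_sketch to avoid clobbering the real thing; the real implementation handles blocks and a few edge cases differently):

```ruby
# A simplified sketch of how Rails 4's Object#try behaves:
# call the method if the receiver responds to it, otherwise return nil.
class Object
  def try_sketch(method_name, *args, &block)
    public_send(method_name, *args, &block) if respond_to?(method_name)
  end
end

"hello".try_sketch(:upcase)        # => "HELLO"
nil.try_sketch(:upcase)            # => nil (nil doesn't respond to upcase)
42.try_sketch(:no_such_method)     # => nil rather than NoMethodError
```

Because nil doesn’t respond to most methods, the nil case falls out naturally without needing a special NilClass override in this sketch.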

While reading up on this, I also discovered the fetch method for Hashes, which allows me to specify a default value to return if the key is missing.

This means I can clean up stuff like this:

my_array = @my_hash[:my_key]
my_array.each {|array_item| ... some code ... } unless my_array.nil?

To this:

@my_hash.fetch(:my_key, []).each {|array_item| ... some code ... }
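fetch also takes a block, which is only evaluated when the key is missing – and with no default at all it raises a KeyError, which is handy for surfacing typos in key names (the hash below is just for illustration):

```ruby
h = { name: "Alice" }

h.fetch(:nickname, "none")                # => "none"
h.fetch(:name, "none")                    # => "Alice"
h.fetch(:nickname) { |key| "no #{key}" }  # block only runs on a miss

begin
  h.fetch(:nmae) # typo: raises KeyError instead of silently returning nil
rescue KeyError => e
  e.message      # mentions the missing key
end
```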

Fewer if statements, less branching, and so hopefully fewer bugs to write!

Controlling Asset Precompilation in Rails

I’ve run into issues recently with precompilation after introducing new gems containing partial SASS files (Bootstrap and Font Awesome).

Rails allows you to specify patterns or Procs to determine which files should – or in our case should not – be precompiled, like this:

Rails.application.config.assets.precompile = [Proc.new {|e| !(e =~ /(font-awesome|bootstrap)\/_/) } ]
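Since the precompile list is plain Ruby, the proc can be sanity-checked in a console against some sample asset paths (these paths are made up for illustration):

```ruby
# The same filter logic as above, checked against hypothetical paths:
# SASS partials from the gems are excluded, everything else compiles.
filter = Proc.new { |e| !(e =~ /(font-awesome|bootstrap)\/_/) }

filter.call("bootstrap/_variables.scss") # => false (partial: skipped)
filter.call("font-awesome/_core.scss")   # => false (partial: skipped)
filter.call("bootstrap/bootstrap.scss")  # => true  (still precompiled)
filter.call("application.css")           # => true
```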

Upgrading to RSpec 3

I recently upgraded our Rails application to use RSpec 3, which is currently in its second beta, from 2.99. I was expecting it to be nice and straightforward, but sadly it was not! This was partly because we had been using RR for mocking and stubbing, and its syntax was not compatible with RSpec 3.

In the process of upgrading I learned a bunch about the new RSpec “allow” syntax, which in my opinion is far nicer than the RR one we were using.

Here’s how to stub:

allow(thing).to receive(:method).with(args).and_return(other_thing)

Mocking can be done in the same way by substituting “expect” for “allow” – although in most cases the tests read better if you test that the stub received the method using the following matcher:

expect(thing).to have_received(:method).with(args)

Note that this is different to the previous RR syntax, which was expect(thing).to have_received.method(args).

You can also use argument matchers, for example:

expect(thing).to have_received(:method).with(a_kind_of(Class))

And you can verify how many times a method was called:

expect(object).to have_received(:method).at_least(:once)
expect(object).to have_received(:method).exactly(3).times

The .with(args) and .and_return(other_thing) methods are optional. You can also invoke a different function:

allow(thing).to receive(:method) { |args| block }

Or call the original function:

allow(thing).to receive(:method).and_call_original

Another thing we used fairly often was any_instance_of. This is now cleaner (RR used to take a block):

allow_any_instance_of(Class).to receive(:method).and_return(other_thing)
allow_any_instance_of(Class).to receive(:method) { |instance, args| block }

If you pass a block, the first argument when it gets called is the instance it is called on.

In RSpec 3, be_false and be_true are deprecated. Instead, use eq false or eq true. You can use be in place of eq, but when the test fails you get a longer error message, pointing out that the failure may be due to an incorrect object reference, which is irrelevant and kind of annoying.

Using RSpec mocks means that we can create new mock or stub objects using double(Class_or_name) rather than Object.new, which results in tidier error messages and clearer test code.

Stubbing a chain of methods may also be a handy tool – I only found one place where we used it, but it is useful if we’re chaining together methods to search models.

allow(object).to receive_message_chain(:method1, :method2)

More info:

  1. https://relishapp.com/rspec/rspec-mocks/docs
  2. https://github.com/rspec/rspec-mocks

Update: it turns out I was missing a configuration option in RSpec. It should have worked with RR by doing this:

RSpec.configure do |rspec|
  rspec.mock_with :rr
end

Thanks Myron for clearing this up :)

Scala – Day 1

I was looking forward to Chapter 5 of Seven Languages in Seven Weeks: Scala. I’ve heard quite a bit about it in the last few weeks at various user groups, and I’m hoping to get my hands on it at some point in my upcoming work with Atlassian, so this was a good time to dive in. As a personality, Scala is assigned Edward Scissorhands in the book: “awkward, and sometimes amazing”.

I tried at first to install it with Homebrew, which just failed with a 404, so I downloaded the package and installed it manually, which worked fine.

Day 1 was pretty straightforward – type a few things into the console and have a look at what you get back. This chapter delves into loops and ranges and compares Scala with both Java and Ruby, finishing up with some simple class definitions and traits. As with most of the chapters so far, it very quickly introduces a lot of ideas – not much detail, but enough to get me thinking.

In the self study for Day 1, the first questions are reasonably simple.

  1. Here’s a link to the Scala API
  2. There are lots of blog posts comparing Java to Scala, most covering just one aspect. I liked this write-up based on a year of experience with Scala.
  3. A discussion of val versus var.

The next challenge was to write a game that will take a tic-tac-toe board (noughts and crosses for the Brits …) and determine a winner. The bonus part of the challenge would be to make it into a game that two people could play, so I attacked this part first.

I started off using Sublime Text 2, then decided to switch to IntelliJ with the Scala plugin. I like Sublime, but was hoping IntelliJ would give me better auto completion, refactoring tools and keyboard shortcuts. It seems to work OK – I had to point IntelliJ towards my Scala installation, and it is still popping up with some errors although it does compile and run just fine. Perhaps in Day 2 and 3 I’ll dig into those a bit more.

In writing the code for the game, I tripped up on a few things. I had Martin Odersky’s book Programming in Scala to refer to as well, which helped me solve most things really quickly.

The first was that the chapter hadn’t covered how to declare the return type of a function. Scala doesn’t require the return keyword, but if you don’t declare a return type (and leave off the =), the function returns Unit.

Here’s a function without a return type:

def myMethod() {

}

And with:

def myMethod(): Int = {

}

In my next mistake, I tried to create an array and then add items to it – Arrays are mutable in Scala, but you can’t change the size of them. I didn’t even have a good reason for doing this, except that I thought it would make the code prettier, so I changed it to a List (immutable) instead. And it looked fine :)

I don’t know a lot about functional programming yet, so I did have a couple of classes in my solution. I wanted to make sure I had no mutable objects though.

When I finished my initial solution, I had three files – I ended up taking the lazy way of copying all the classes into the one file and running it from the terminal with scala tictactoe.scala. Here’s the initial attempt. I like that it doesn’t have any mutable objects, it’s simple, I don’t have to worry about blanks, and I like the simple map call to get the positions. I don’t much like the magic winning combinations in the Judge class, and I don’t like that it will barf if you don’t enter the moves in a valid format.

Next challenge: on to Day 2, and also trying to extend the tic-tac-toe game for the bonus challenge!

Frying my brain with Prolog

I recently picked up the Seven Languages in Seven Weeks book again, with the intention to start where I left off with Prolog, around the end of Day 1. As I was doing research for the exercises, I came across a great description that summed up exactly how I felt about this chapter:

“Today, Prolog broke my brain. The chapter started with recursion, lists, tuples, and pattern matching, all of which were tolerable if you’ve had prior exposure to functional programming. However, after that, we moved onto using unification as the primary construct for problem solving, and the gears in my head began to grind.”

At first, it seemed fairly easy to follow, very different to anything I’d done before, but that’s why I started with the Seven Languages book, to learn about new and different techniques in programming.

Reading through the day two section about lists and recursion, I started to find myself getting lost in the examples, and it took a long time to understand what was going on. I couldn’t complete the day 2 exercises without a little help from the interwebz, although by the time I finished working through them, I did understand what was going on. Trying to switch my brain from solving the problems in terms of algorithms to thinking in terms of rules continued to bite me throughout the chapter though.

Some of the things I learned through Day 2 are pretty basic, but for a complete newbie to Prolog, they weren’t obvious.

In the factorial exercise, I realised that within a rule, I could add a line to validate the parameters – in this case that X > 0. Super obvious maybe, but Day 1 was all about matching rules and so this was new.

The next thing I learned was that you can have two versions of a rule with different conditions. I was already creating multiple versions of a rule to unify specific values such as 0 for factorial, but this was a different way to think about it.

As I worked through the sudoku and queens exercises, I still found myself wanting to do something like this:


Diags1 = [R1+C1, R2+C2, R3+C3, R4+C4, R5+C5, R6+C6, R7+C7, R8+C8],

… which just doesn’t work!

I did get there in the end with the queens solution, with a little help from the book to point me in the right direction for the diagonals.

In conclusion, I definitely learned a lot from this chapter, but struggled a lot as well! It was worth fighting through to the end though, as the concepts did start to make sense!

I don’t think we should always pair, all the time. There, I said it.

I’ve been working in “agile” teams for several years, and since starting at ThoughtWorks one practice that we always seem to use and promote is pair programming.

I think pair programming is great, for a few reasons:

  • Knowledge sharing and avoiding silos or single points of failure.
  • Bringing people up to speed – especially for new team members and juniors.
  • Building relationships and communication. I found after my first project as a developer, I had much closer relationships with the developers I’d paired with than any of the other people I’d worked with as a BA on previous projects. This makes for better teams.
  • Collective code ownership – if you’re not the only one working on the code, you can’t feel too much like you own it.
  • Better decision making – by having two people discuss and agree on a solution.
  • Faster problem solving – especially in complex systems.
  • Promotes consistency of code style and standards, especially if the pairs rotate.

Yay! So should we always pair all the time?

I would actually say No to that. Having spent a decent amount of time pairing, and experienced many eager and reluctant pairs, been the “junior” and the “senior”, on both work and fun projects, I’ve found there are definitely some frustrations.

  • It’s exhausting. If you’ve ever done a solid day of development on a difficult project with a pair, you probably came away shattered.
  • After working with the same person for a long time, both people stop learning from each other. A lack of rotation also means you still end up with knowledge silos – just made up of two people rather than one.
  • Sometimes, pairs are well matched, but more often one person is significantly faster, usually because they’re more knowledgeable about the codebase or the work being done. Over time, this can be frustrating.
  • Pairing on simple problems can feel like a bit of a waste. I’ve definitely been in this position towards the end of a project.
  • Some people just don’t like pairing. Even I don’t like pairing when it’s all the time.
  • I’m not convinced that pairing significantly reduces the number of bugs when you practice TDD and have a good suite of automated tests. The navigator may see obvious errors first, but more often than not the automated tests find the more interesting ones before the code is even checked in (although it does help having two people to solve them).

My last project was a two-person delivery gig. My colleague Hari and I discussed upfront whether we would pair, and in the end we didn’t – for some of the reasons above, but also because some of the benefits of pairing were much less on a two-person team:

  • We didn’t really have a complex problem to solve – it was a reasonably simple, small website.
  • Six weeks of pairing with me would probably have driven poor Hari insane.
  • We were sat directly next to each other and were constantly discussing the project, so we didn’t need to work on the same code to make ourselves communicate or share decisions.

As well as agreeing to talk a lot, share important decisions about the code design, and refactor each other’s code where we saw a need to, we also decided to rotate the stories we were working on to try and avoid any silos or single points of failure.

So how did it work?

In retrospect, I do think it was the right decision. I discovered that I actually enjoyed working alone (which felt like a terrible admission for a while) – although I still prefer to work in a larger team with some pairing, I now believe that developers also need a break from pairing some of the time. It’s a matter of finding the right things to pair on, and the right time to work alone. For us, I think pairing would have slowed us down.

However, at the end of the project, I discovered that there were definitely some gaps in my knowledge around the things that Hari had implemented. We were probably not rigorous enough about recognising when we did need to pair and doing it, and towards the end of the project as time grew tighter we did not swap stories enough. We were pretty good at changing each other’s code, and I think that in general our coding style was fairly consistent – although this was probably the case before we even started working together, it seems to be a ThoughtWorks “thing”!

From a quality perspective, only one bug was reported in UAT, and it wasn’t even much of a bug (calendar starts on the wrong day) – and I think this is because we were pretty rigorous around our automated testing practices, including integration and acceptance testing with Cucumber.

In future, I would still promote pairing but perhaps a little bit less dogmatically than I used to. I want to make sure I continue to use it when there are clear benefits, and especially when introducing new team members or for complex problems. However, I would like to try and ensure developers have more breaks from pairing and that work is organised to allow for that, as well as a good degree of rotation. How often the rotation happens, and what proportion of time is spent pairing versus working alone, I think should always be dependent on the team, the problem at hand and the situation.

CoffeeScript

On my last project, we decided to try out CoffeeScript. I was pleasantly surprised at how easy it was to get started with it and how nice it was to use, instead of JavaScript.

I’m a fan of object-oriented JavaScript code, but there are many different ways to structure the JavaScript (using the prototype, or constructing an object each time with private methods, to name a couple). I’ve seen several medium-to-large JavaScript codebases that use a range of techniques with no consistent pattern. CoffeeScript solves this problem by giving me a standard way to create new classes (which IMO is much easier to read and understand than JavaScript methods defined on the prototype). It’s performant (since the methods are defined on the prototype) and tries to produce easily-readable, JSLint-able code. (I had my first look at the JavaScript produced by ClojureScript last night, and it’s definitely much harder to navigate than the code produced by CoffeeScript).

Here are some tips to get started:

  1. The back-end was .NET and we were working in Visual Studio, so we used the free Web Workbench plugin to generate JavaScript files from our coffee files. It updates the files automatically on every save, which was really handy. Errors appear in the Visual Studio output window.
  2. If you’re using the Node Package Manager, you can install CoffeeScript with that:
    npm install -g coffee-script
    You can then use the coffee command to compile coffeescript files:
    coffee -c myfile.coffee
    This also works with wildcards:
    coffee -c src/*.coffee

I was pretty amazed how easy it was to integrate CoffeeScript with other JavaScript frameworks, including jQuery and Jasmine – we wrote our Jasmine tests in CoffeeScript too.

What this means is that if you want to start using CoffeeScript, you can – you don’t even need to rewrite any of the existing JavaScript if you don’t want to.

In general, CoffeeScript reads much more like English than JavaScript – === is replaced by is, !== becomes isnt, you can use unless instead of if (!) and it also has Ruby-style string interpolation, just to name a few nice things.

We did run in to a couple of things that tripped us up, so here are some things to watch out for.

Classes are not global

Chances are, you’ll be creating classes across a number of files. If you just create your classes using class MyClass, you won’t be able to create one in another file using new MyClass, since CoffeeScript wraps the code in each file in its own closure (which is a Good Thing).

You can solve this by defining your classes as class window.MyClass. However, the better practice is to use namespaces. We added a line to the top of each of our files to ensure the parent namespace was defined:


window.MyNamespace or= {}

Binding to this

You can access properties of the object with the @ symbol. Inside the generated JavaScript code, you will have a reference to this.propertyName.
However, for methods that are to be used as the targets of events, you will need to bind to the original value of this. CoffeeScript makes this easy: you just use => to define the function instead of ->.

In short: use -> when you don’t need the original value of this, and => when binding to an event target where you do need it to access properties.

Is it worth a try?

I would definitely say yes! In particular, for larger JavaScript projects, if you don’t have an established consistent way of writing JavaScript, or if you’re writing object-oriented JavaScript code, it will probably simplify a lot of the code base and remove the danger of accidental bugs like creating global variables.

If you’re more in favour of functional-style JavaScript, or if you’re a JavaScript guru and just luuuurrrrve those curly braces, then you probably won’t get as much from CoffeeScript. Maybe you could try ClojureScript instead ;)

First steps with Clojure

Over the last year, I learned Ruby. I’m not amazing at it and there’s still a lot to learn, but I know enough to be useful with it. In the spirit of learning a new language every year, the next one I wanted to tackle was Clojure.

I also decided to try using Emacs, since I had heard the Clojure support was pretty good.

After installing Emacs on my Mac (which is running OS X 10.7), I spent the morning surfing teh interwebz and trying a few different things to get Clojure working. Here’s how it looked in the end:

  1. Install Emacs from here.
  2. Read the tutorial to understand the basic shortcuts (C = control, M = alt below)
  3. Install marmalade – inside the directory ~/.emacs.d create the file init.el and enter:

    (require 'package)
    (add-to-list 'package-archives
                 '("marmalade" . "http://marmalade-repo.org/packages/") t)
    (package-initialize)

  4. Run M-x package-refresh-contents
  5. M-x package-install starter-kit
  6. Install clojure-mode by pressing M-x package-install and choose clojure-mode.
  7. Install Leiningen by following the instructions here: https://github.com/technomancy/leiningen
  8. Install swank-clojure: run “lein plugin install swank-clojure 1.3.3”
    Note – I initially installed 1.3.2 which gave me an error when I tried to run it from Emacs.
  9. Create a new project with Leiningen.
  10. Open Emacs, navigate to a project file and type M-x clojure-jack-in.

Once this was working, I also discovered a couple of key shortcuts that were really useful:

  • C-c C-k compiles the code in the current buffer.
  • C-x C-e evaluates the expression immediately before the cursor.

Now I’m up and running, I’ve been able to complete all of the 4Clojure Koans at the elementary level. You don’t actually need to have Clojure running locally, although I found it helped to figure out what was going on.

I’m also working on the Clojure Koans on github.

So far I’m finding both to be really fun and good learning resources ☺

Some of the best guidance I found

Finishing up with Io

As I reached Day 2 with Io, I continued to struggle with the syntax – in particular the temptation to write object.method instead of dropping the ‘.’. I felt that I was starting to get some of the concepts of the language, like the ability to rewrite core methods and override operators.

However, I have also struggled throughout Day 2 and Day 3 with the lack of easily available documentation – the reference and guide on the main Io website did not seem to explain many of the things I needed to know, including how to read input from the console. I resorted in some cases to “cheating” by reading other people’s solutions to the exercises (although it’s not really cheating, since the point is to learn :)).

As I wrote my solutions, particularly the longer ones, they felt slightly clumsy – as though I wasn’t taking advantage of Io’s strengths. In the middle of Day 2 and start of Day 3, I really wasn’t enjoying working with the language, although by the end of Day 3 when I introduced the new Xml_Element object, it felt more natural. I’m glad I finished the chapter, although I’m still not really sure where I would use Io.

Overall, the main thing I’ve got out of the book so far is a greater understanding of meta-programming, which I’m hoping will help me take better approaches to new programming problems.

All of my solutions are on Github Gists:

Here are some highlights from the difficulties I ran into …

I was baffled for Day 2, Exercise 2 as to how to keep the original division method, but a quick internet search (and a minor cheat) revealed the solution: I needed to store the original division method in another slot first:

When I reached Exercise 7, I discovered from other solutions that serialization in Io is actually really, really easy. All objects have a serialized method that writes them out to a string. When reading the object back from a file, all that was needed was to assign the result of doFile("filename.txt") to a new object. Writing to files is easy too – here’s the full solution:

For Exercise 8, I spent a considerable amount of time trying to find out how to read input from the console – turns out you can use File standardInput readLine. This gives you a string, so you need to use asNumber to complete this exercise.

Day 3, Exercise 1, was a massive struggle – at first I could not figure out how to pass the right indent, and I ended up looking around at some other examples before I figured out that I needed to:

  1. Write the parent node
  2. Add one to the indent
  3. Process the child nodes
  4. Remove one from the indent
  5. Close the parent node

I also discovered that using indent := indent + 1 doesn’t work – I guess := creates a new slot rather than updating the existing one, so you need to use indent = indent + 1 instead. Here’s the finished code:

In Exercise 2, I discovered after some internet searching and experimenting that Io has two built-in methods, curlyBrackets (as used in the book) and squareBrackets – these can both be overridden to allow lists or maps, or anything else, to be created using {} or []. I guess this might be obvious to some people, but I was quite confused by it!

Here’s the list code:

The final exercise, adding attributes to the XML Builder, had me stumped for some time – in fact, I took a break from it overnight and a solution came to me in the shower :) My difficulty was figuring out when to write the contents of the XML node: I didn’t want a separate method, but couldn’t write out the contents inside the loop.

The final solution still feels slightly clumsy but it does work (indents and all):