Introducing Spectie, a behavior-driven-development library for RSpec

Posted by ryan Mon, 02 Nov 2009 03:34:00 GMT

I'm a firm believer in the importance of top-down and behavior-driven development. I often start writing an integration test as the first step to implementing a story. When I started doing Rails development, the expressiveness of Ruby encouraged me to start building a DSL to easily express the way I most-often wrote integration tests. In the pre-RSpec days, this was just a subclass of ActionController::IntegrationTest that encapsulated the session management code to simplify authoring tests from the perspective of a single user. As the behavior-driven development idea started taking hold, I adapted the DSL to more-closely match those concepts, and finally integrated it with RSpec. The result of this effort was Spectie (rhymes with necktie).

The primary goal of Spectie is to provide a simple, straight-forward way for developers to write BDD-style integration tests for their projects in a way that is most natural to them, using existing practices and idioms of the Ruby language.

Here is a simple example of the Spectie syntax in a Rails integration test:

Feature "Compelling Feature" do
  Scenario "As a user, I would like to use a compelling feature" do
    Given :i_have_an_account, :email => ""
    And   :i_have_logged_in

    When  :i_access_a_compelling_feature

    Then  :i_am_presented_with_stunning_results
  end

  def i_have_an_account(options)
    @user = create_user(options[:email])
  end

  def i_have_logged_in
    log_in_as @user
  end

  def i_access_a_compelling_feature
    get compelling_feature_path
    response.should be_success
  end

  def i_am_presented_with_stunning_results
    response.should have_text("Simply stunning!")
  end
end


Spectie is available on GitHub, Gemcutter, and RubyForge. The following should get it installed quickly for most people:

% sudo gem install spectie

For more information on using Spectie, visit the project page on GitHub.

Why not Cucumber or Coulda?

At the time that this is being written, Cucumber is the new hotness in BDD integration testing. My reasons for sticking with Spectie instead of switching to Cucumber like the rest of the world are as follows:

  • Using regular expressions in place of normal Ruby method names seems like a potential maintenance nightmare, above and beyond the usual potential.
  • The layer of indirection that is created in order to write tests in plain text doesn't seem worth the cost of maintenance in most cases.
  • Separating a feature from its "step definitions" seems mostly unnecessary. I like keeping my scenarios and steps in one file until the feature becomes sufficiently big that it warrants extra organizational consideration.

These reasons are more-or-less the same as those given by Evan Light, who recently published Coulda, which is his solution for avoiding the cuke. What sets Spectie apart from Coulda is its reliance on and integration with RSpec. The Spectie 'Feature' statement has the same behavior as an RSpec 'describe' statement, and the 'Scenario' statement is the same as the RSpec 'example' and 'it' statements. By building on RSpec, Spectie can take advantage of the contextual nesting provided by RSpec, and rely on RSpec to provide the BDD-style syntax within what I've been calling a scenario statement (the words after the Given/When/Thens). Coulda is built directly on Test::Unit. I'm a firm believer in code reuse, and RSpec is the de facto standard for writing BDD-style tests. Spectie, then, is a feature-driven skin on top of RSpec for writing BDD-style integration tests. To me, it only makes sense to do things that way; as RSpec evolves, so will Spectie.

Rails Plugin for Mimicking SSL requests and responses

Posted by ryan Fri, 14 Nov 2008 23:33:42 GMT

The Short

I've written a plugin for Ruby on Rails that allows you to test SSL-dependent application behavior that is driven by the ssl_requirement plugin without the need to install and configure a web server with SSL.

Learn more

The Long

A while back, I wanted the Selenium tests for a Ruby on Rails app I was working on to cover the SSL requirements and allowances of certain controller actions in the system, as defined using functionality provided by the ssl_requirement plugin. I also wanted this SSL-dependent behavior to occur when I was running the application on my local development machines. I had two options:

  1. Get a web server configured with SSL running on my development machines, as well as on the build server.

  2. Patch the logic used by the system to determine if a request is under SSL or not, as well as the logic for constructing a URL under SSL, so that the system can essentially mimic an SSL request without a server configured for SSL.

Since I had multiple Selenium builds on the build server, setting up an SSL server involved adding a host name to the loopback for each build, so that Apache could switch between virtual hosts for the different server ports. I also occasionally ran web servers on my development machines on ports other than the default 3000, as did everyone else on the team, which meant we'd each have had to go through the setup process for multiple servers on those machines as well. We would need to do all of this work in order to test application logic that, strictly speaking, didn't even require the use of an actual SSL server. Given that the only thing I was interested in testing was that requests to certain actions either redirected or didn't, depending on their SSL requirements, all I really needed was to make the application mimic an SSL request.

Mimicking an SSL request in conjunction with the ssl_requirement plugin, without an SSL server, consisted of patching four things:

  1. ActionController::UrlRewriter#rewrite_url - Provides logic for constructing a URL from options and route parameters

    If provided, the :protocol option normally serves as the part before the :// in the constructed URL.

    The method was patched so that the constructed URL always starts with "http://". If :protocol is equal to "https", this causes an "ssl" key to be added to the query string of the constructed URL, with a value of "1".

  2. ActionController::AbstractRequest#protocol - Provides the protocol used for the request.

    The normal value is one of "http" or "https", depending on whether the request was made under SSL or not.

    The method was patched so that it always returns "http".

  3. ActionController::AbstractRequest#ssl? - Indicates whether or not the request was made under SSL.

    The normal value is determined by checking if the request header HTTPS is equal to "on" or HTTP_X_FORWARDED_PROTO is equal to "https".

    The method was patched so that it checks for a query parameter of "ssl" equal to "1".

  4. SslRequirement#ensure_proper_protocol - Used as the before_filter on a controller that includes the ssl_requirement plugin module, which causes the redirection to an SSL or non-SSL URL to occur, depending on the requirements defined by the controller.

    This method was patched so that, instead of replacing the protocol used on the URL with "http" or "https", it either adds or removes the "ssl" query parameter.

For more information, installation instructions, and so on, please refer to the plugin directly at:

Enabling/disabling observers for testing

Posted by ryan Thu, 10 Apr 2008 02:53:50 GMT

If you use ActiveRecord observers in your application and are concerned about the isolation of your model unit tests, you probably want some way to disable/enable observers. Unfortunately, Rails doesn't provide an easy way to do this. So, here's some code I threw together a while ago to do just that.

require 'set'

module ObserverTestHelperMethods
  def observer_instances
    ActiveRecord::Base.observers.collect do |observer|
      observer_klass = \
        if observer.respond_to?(:to_sym) # observer given as a String/Symbol
          observer.to_s.camelize.constantize
        elsif observer.respond_to?(:instance) # observer given as a class
          observer
        end
      observer_klass.instance
    end
  end

  def observed_classes(observer=nil)
    observed = Set.new
    (observer.nil? ? observer_instances : [observer]).each do |observer|
      observed += (observer.send(:observed_classes) + observer.send(:observed_subclasses))
    end
    observed
  end

  def observed_classes_and_their_observers
    observers_by_observed_class = {}
    observer_instances.each do |observer|
      observed_classes(observer).each do |observed_class|
        observers_by_observed_class[observed_class] ||= Set.new
        observers_by_observed_class[observed_class] << observer
      end
    end
    observers_by_observed_class
  end

  def disable_observers(options={})
    except = options[:except]
    observed_classes_and_their_observers.each do |observed_class, observers|
      observers.each do |observer|
        unless observer.class == except
          observed_class.delete_observer(observer)
        end
      end
    end
  end

  def enable_observers(options={})
    except = options[:except]
    observer_instances.each do |observer|
      unless observer.class == except
        observed_classes(observer).each do |observed_class|
          observer.send :add_observer!, observed_class
        end
      end
    end
  end
end
Include this in a Test::Unit::TestCase or in your RSpec configuration, whatever floats your boat. Here's a stupid example:

class SomethingCoolTest < Test::Unit::TestCase
  include ObserverTestHelperMethods

  def setup
    disable_observers
  end

  def teardown
    enable_observers
  end

  def test_without_observers
    # ...
  end
end


When you go to test the behavior of the observer itself, simply disable/enable like the following to disable/enable all observers except the one you're testing:

class DispassionateObserverTest < Test::Unit::TestCase
  include ObserverTestHelperMethods

  def setup
    disable_observers :except => DispassionateObserver
  end

  def teardown
    enable_observers :except => DispassionateObserver
  end

  def test_without_observers_except_dispassionate_observer
    # ...
  end
end


Bug: composite_primary_keys and belongs_to with :class_name option

Posted by ryan Sat, 17 Nov 2007 02:39:44 GMT

For those of you using the composite_primary_keys gem as of version 0.9.0, you may encounter an issue if you try to do something like:

class Reading < ActiveRecord::Base
  belongs_to :reader, :class_name => "User"
end

When a User is loaded up from the database via the reader association, the CPK modification to ActiveRecord::Reflection::AssociationReflection#primary_key_name incorrectly returns "user_id" as the primary key name. If you encounter this issue, I've submitted a patch against revision 124 that can be obtained here.

Hopefully this will get fixed in the next release. More hopefully, I won't need to care by then.

Learning Ruby Meta-programming with MetaKoans

Posted by ryan Sun, 23 Sep 2007 03:29:00 GMT

As I mentioned previously, the MetaKoans Ruby Quiz (#67) is a great way to learn meta-programming. However, it had some shortcomings. I've used MetaKoans as a training tool, and something I hear a lot is that it's unclear why certain things make a koan, or set of koans, pass. One reason for this confusion is that, oftentimes while puzzling through the solution, a student will do something that causes multiple koans to pass at once. Due to the way that the koans are structured, I wasn't able to find a way to make a single koan pass at a time.

I've addressed this shortcoming by restructuring the koans so that the problem can be solved incrementally, one koan at a time. While restructuring the koans, I wrote a solution to each in turn, and saved that solution to its own knowledge file. Each file is a small refactoring from the one before it, ultimately building up to the final solution.

For the purposes of future training sessions, I've also started adding documentation to each refactoring, explaining how it changed from the previous one, and why it changed the way it did. The documentation isn't complete yet, but it's a start.

The restructured MetaKoans, along with the individual refactorings of my solution, can be found in the project repository. Feel free to check it out.

A little explanation

You'll notice that the knowledge files in my solution follow the pattern: knowledge_for_koan_XX_Y.rb. The XX number is the koan that the knowledge is a solution for. The Y number is the ordered refactoring index, with 1 being the first, most straight-forward solution, and subsequent indices being refinements of the original.

The reason for this structuring is that the straight-forward, brute-force solution to a koan often isn't the optimal one. So, I'd make refactorings to show how the code could, IMHO, be improved.


Thanks go to ara.t.howard for coming up with the original MetaKoans quiz. It's been an extremely informative tool for myself and many others.

Task Dependencies in Capistrano 2.0

Posted by ryan Fri, 21 Sep 2007 04:33:00 GMT

I've been tooling around with Capistrano 2.0 for the past couple of days. I've decided that the more mature Capistrano gets, the more it seems to be, at its core, a remote rake system with a really good suite of predefined tasks specific to Rails deployment issues. Some things Rake has that Cap 2 seems to be lacking are the ability to define dependent tasks, as well as tasks that are executed only once (the first time), with subsequent invocations being skipped.

So, here it is:

Just put that wherever you want (for Rails, vendor or vendor/plugins seems to make sense), and then require capistrano.rb in your deploy.rb file or wherever else it might make sense for you.

Here's an example deploy.rb that uses the two new bits of functionality with the URL above exported to /vendor/cap_task_dependencies:

File: config/deploy.rb

require File.expand_path(File.dirname(__FILE__) + "/../vendor/cap_task_dependencies/capistrano")

namespace :prerequisites do
  task :some_task1, :once => true do
    # this task will only be invoked once
  end
end

task :some_task2, :once => true do
  # this task will only be invoked once
end

task :dependent_task,
  :depends => ["prerequisites:some_task1", :some_task2] do
  # this task will be invoked as many times as it's called,
  # and it will call some_task1 and some_task2 each time, but
  # they will only be invoked once each
end

task :combo_task do
  # combo_task combines two tasks but still, prerequisites:some_task1
  # and some_task2 will be invoked only once each.
  # prerequisites:some_task1 will be invoked from the first line in
  # combo_task and some_task2 as a dependency of dependent_task
  prerequisites.some_task1
  dependent_task
end

I think that just about covers it. Look at the RSpec examples if you want more info.

As always, I'd love your feedback.

Passing Arrays and Nested Params to url_for in Rails

Posted by ryan Wed, 07 Feb 2007 21:12:00 GMT

A few weeks ago (okay, more than a few weeks ago, it took me a while to write this), I discussed the problems involved with passing nested hash parameters to named routes in Rails. My coding pair and I discovered another bug (still using rev 5522) when passing hash parameters to a named route in Rails, this time when the hash contains arrays. For example, consider the following call to a named route:

person_url(:name => ['Ryan', 'Kinderman'])

In order for the params hash to get decoded properly on the server, each element of the array must be encoded as a separate name[] parameter in the resulting URL's query string. Unfortunately, the elements instead get joined into a single value, separated by %2F. For those of you unfamiliar with CGI escaping, the %2F translates into the '/' character. So, you end up with a params hash in the controller where params[:name] == ['Ryan/Kinderman']. How disappointing. To get around this in the past, I've chosen to either split the hash value on '/', or use my own encoding of arrays that Rails can handle, and then simply decode them myself within the controller. In the above example, I could have done something like:

person_url(:name => {0 => 'Ryan', 1 => 'Kinderman'})

Of course, without the patch I described a few weeks ago, this kind of thing would not be possible either, because Rails can't encode nested hash parameters.
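The workaround of encoding the array as an index-keyed hash relies on decoding it back into an array in the controller. A decoding helper (my own sketch, not part of the patch) might look like:

```ruby
# Convert a hash like {0 => 'Ryan', 1 => 'Kinderman'} (or with string
# keys, as Rails params would present them) back into an ordered array.
def params_hash_to_array(hash)
  hash.keys.sort_by { |key| key.to_i }.map { |key| hash[key] }
end
```

Sorting the keys numerically preserves the original element order regardless of how the hash was received.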

What I present here is a detailed explanation of the problem, with instructions at the end on how to install my plugin patch to fix it. My explanation and patch address the issues for both nested and array parameters. There are a number of methods involved in the solution to this problem. It may be useful at this point for you to refer to Jamis Buck's excellent articles on the gory details of Rails route recognition and generation.

When you call link_to or url_for, either explicitly or through the named route *_url and *_path methods, the route parameters are processed through a call sequence that includes options_as_params and Route#build_query_string.

The problems start in the call to options_as_params. This method is not recursive, and processing nested parameters is a recursive problem. The next issue with options_as_params is not actually in the method, but in the to_param method that it calls. If you look at the Rails implementation of Array#to_param, you'll see that all it's doing is joining the elements into a '/' separated string. This doesn't get processed back into separate array elements when the request is received by the controller. So, in the case when value is an Array instance during a call to options_as_params, the resulting string is encoded incorrectly.
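For reference, the behavior of Array#to_param described above can be reproduced in a few lines. It's reimplemented here under a different name, since this is just an illustration of what the Rails method did at the time:

```ruby
# What Array#to_param effectively did: join the elements with '/',
# losing the boundaries between them.
def to_param_like(array)
  array.collect { |element| element.to_s }.join('/')
end
```

Once the elements are joined, there is no way for the server to tell a two-element array from a single string containing a slash.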

The other specific issue lies in the Route#build_query_string method. Take a look at the method, and notice the part that looks like:

if value.class == Array
  key << '[]'
else
  value = [ value ]
end
The check for the Array class causes a problem when passing an array to url_for as an option parameter when that array comes from the params hash from within a controller action (*whew*, that was a mouthful!). This is because what you thought was an array is actually an instance of ActionController::Routing::PathSegment::Result. To be honest, I don't know why this is happening. I looked at the code and realized that it'd take me longer to figure out than what I wanted to spend at the time. However, if someone could explain it to me, I'd love to hear it. In any case, to solve this particular problem, the conditional needs to be changed from a check for only Array to Array and any subclasses using something like the is_a? method.

So, those are the issues involved in why array and nested hash parameters don't work properly in calls to url_for. Rather than going through my solution, I'm offering it as a Rails plugin with full unit test coverage, and plan to submit it as an actual patch to the Rails team, with the code cleaned up a bit more. Maybe there are reasons why this sort of thing isn't supported, but I can't think what they might be. I'll post updates here if and when I get more information on this. If you have comments or questions on this patch or parts of the code, please let me know.

You can install the Rails plugin by typing the following into your command-line: ruby script/plugin install git:// To see the issues I've discussed first-hand, after installing the plugin, take a look at controller_test.rb.

Addendum: I checked, and as of revision 6141 of Rails, the issues covered by this article are still present, and the plugin still fixes them.

Addendum (2007/04/03): I've just got around to confirming that, as rwd's commented, the bug has been fixed. If you're using revision 6343 or later of Rails, you probably aren't going to need this patch. Yay!

Problems with the metakoans.rb Ruby Quiz

Posted by ryan Mon, 05 Feb 2007 23:25:00 GMT

There are indeed many ways to solve Ruby Quiz #67: metakoans.rb, as James Edward Gray II says. By "solve," I mean getting all of the "koans" to pass. But you don't have to actually solve the quiz correctly to make all of the koans pass. Here's a solution that passes all of the koans, but doesn't solve the problem completely:

class Module
  def attribute(params, &block)
    initial = nil
    if params.is_a?(Hash)
      name = params.keys[0]
      initial = params.values[0]
    else
      name = params
    end
    define_attribute_methods(name, initial, &block)
  end

  def define_attribute_methods(name, initial, &block)
    define_method(name) do
      initial ||= instance_eval(&block) if block_given?

      @attr ||= initial # note: every attribute shares the same @attr variable
    end
    define_method("#{name}=") do |value|
      @attr = value
    end
    define_method("#{name}?") do
      !!send(name)
    end
  end
end

While I was working out my solution, there were a number of times when all of the koans passed, but I had a sense that my solution wasn't correct. Similarly, when I was using this quiz as a Ruby teaching tool, a number of people told me that they had solved the quiz but didn't know why the code passed all of the koans. For a self-testing quiz, this is a problem. Don't get me wrong, this quiz is awesome, but it'd be better if it had more thorough assertions to ensure that the "student" has correct "knowledge".

The first change that I made to the assertions was to change assertions like:

assert { (c.a = nil) == nil }

to assertions like:

assert { c.a = nil; c.a == nil }
While these two assertions are similar, they are not the same, and the difference is subtle. The first assertion checks the value of the assignment expression c.a = nil, which in Ruby is always the right-hand side, nil, no matter what the c.a= method actually does. The second assertion tests that the return value of the c.a method is nil after the assignment, which is, I think, what the metakoans.rb author intended. Without the second assertion, the return value of the c.a method could be incorrect after the internal attribute variable has changed via a call to c.a=.

The second change that I made was to make it so that, rather than using the number 44 as the default value for the 'a' attribute in every koan, I incremented the number by one. For koans that had a second attribute that took a block as a default value, I had them return a + 1 instead of simply a. This is important, because without this change, a single instance variable, such as the one I use in the erroneous solution above, could be used as the value for all attributes, and the koans would still pass.

You can get my metakoans.rb with the updated assertions here.

I'm not sure if there are other ways to make the koans pass without the solution being correct. If there are, please let me know, and I'll update the file.

Learn metaprogramming with Ruby Quiz #67: metakoans.rb

Posted by ryan Fri, 02 Feb 2007 21:21:00 GMT

If you're getting into metaprogramming with Ruby, a great way to learn is by solving the metakoans.rb Ruby quiz. I assigned it as a task for a training course recently, and one person told me that it was the most fun they've had programming.

I did, however, find a small problem with the way that the koans were structured. If you solve Koan 6 by setting a class-level variable for the default value, and then implement the attribute getter so that it returns that value if it's defined, koans 7-9 pass. To avoid this particular problem, I changed the use of the number 42 in koans 7-9 each to be a different number. This will ensure that the actions of one test do not fool the assertions of another.

Failing Quickly When Testing For Performance

Posted by ryan Fri, 24 Nov 2006 05:46:00 GMT

I was working with an algorithm today that I discovered had a bug that caused it to run for an unacceptable amount of time, hogging a lot of system resources in the process. Whenever I find a bug in a piece of code I'm working on, I write a failing unit test for it that defines the correct behavior. For this algorithm, I needed to define what an "acceptable amount of time" was in the test, and then test for that level of performance so that the test results were consistent across multiple computers with possibly differing resource loads and load fluctuations. I also needed to ensure that the test would fail as quickly as possible in the event that the algorithm did not perform as desired.

The method containing the algorithm takes a string parameter such as "1-4, 23, 50-52", specified as user input and representing a range of numbers. It then generates an array of numbers; for the string previously mentioned, the array would contain the numbers 1, 2, 3, 4, 23, 50, 51, and 52. The method also takes an optional parameter for the maximum amount of numbers that would be acceptable for it to generate, since generating an array containing all numbers for a range string like "1-9999999999999" would send the generating system into epileptic fits, complete with bus lines frothing. As you may have guessed, this was where the problem was: The method in question generated all of the numbers in the specified range string, and then it checked to see if the amount of numbers generated exceeded the specified maximum.

I needed to define an acceptable response time for a given maximum size of the generated array of numbers for my test. It seems to me that it should take the same amount of time for the algorithm to complete with a range for 10 numbers with a maximum resulting array size of 5 as it does with a range for 10 million, billion, or squigillion numbers with the same result size. Basically, when the algorithm determines that the given range will exceed the maximum, it should end. The challenge here is that different computers will have different timings to reach the maximum, so a reasonably-accurate system-specific timing expectation needed to be calculated.

For this purpose, I wrote a method that determines the range of acceptable response times for the algorithm, given a desired number count, maximum result size, and the number of sample timings to make, since timings will differ slightly from one invocation to another.

def acceptable_timing(number_count, result_size_limit, sample_count=10)
  timings = []
  sample_count.times do
    # NumberRangeGenerator stands in for the class name, which was lost
    # from the original post.
    generator ="1-#{number_count}", result_size_limit)
    start_time =
    generator.numbers
    end_time =
    timings << end_time - start_time
  end
  0.0..average(timings) + standard_deviation(timings)
end
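Note that acceptable_timing leans on average and standard_deviation helpers that aren't shown in the post; minimal versions (my own, for completeness) could look like:

```ruby
def average(values)
  values.inject(0.0) { |sum, value| sum + value } / values.size
end

def standard_deviation(values)
  mean = average(values)
  # Population variance: mean of the squared deviations from the mean.
  variance = values.inject(0.0) { |sum, value| sum + (value - mean) ** 2 } / values.size
  Math.sqrt(variance)
end
```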

The next challenge was testing the numbers method with a range string that represents a large set of numbers, but using the same result_size_limit that was used in the call to acceptable_timing. I decided that a range of 9999999 numbers was sufficiently large to determine that the timing was acceptable; after all, it should take the same amount of time with the same result size limit as if I were to use 100 numbers, right? However, the problem with using a set of 9999999 numbers is that, with the bug, the test will hang for an extremely long time and hog a lot of system resources. We want our tests to fail as fast as possible, and give a useful error message if and when that failure occurs.

To ensure that the test fails fast, I decided to launch a separate thread to call the method under test so that I can stop it as soon as it's determined that it's taking longer than the acceptable amount of time to return.

def completes_within?(threshold, &block)
  start_time =
  thread = &block
  while true
    unless threshold.include?( - start_time)
      thread.kill # stop the runaway work so the test fails fast
      return false
    end
    return true if thread.stop?
  end
end

And finally, the test:
def test_numbers_fails_fast_when_result_size_limit_exceeded
  range_size = 9999999
  result_size_limit = 5
  # NumberRangeGenerator stands in for the class name, which was lost
  # from the original post.
  generator ="1-#{range_size}", result_size_limit)

  acceptable_amount_of_time = acceptable_timing(100, result_size_limit)

  assert_equal true, \
    completes_within?(acceptable_amount_of_time) { generator.numbers }, \
    "Exceeded acceptable time to determine that range of #{range_size} " + \
    "numbers exceeds limit of #{result_size_limit}"
end

I considered using a range size smaller than 9999999 to avoid the threading and make the solution simpler. My reasoning for not doing that is, if I were to pick a smaller number, it would still have to be sufficiently larger than the range size I used to determine the acceptable amount of time for the method under test to return. The larger range size gives me confidence that a failed timing is not just because of a resource spike on the computer running the test, at least if the test is supposed to fail. If I have to pick a large number anyways, it's going to take the test longer to fail, thus violating the idea of fail-fast testing. Therefore, I might as well just abort the method as soon as I know it's going to take too long.

To further improve the reliability of this test, the completes_within? method could be called multiple times and, if a success is ever achieved, the test passes. However, this would make the test run longer, so the choice of whether to use it or not should depend on the variation in resource load that is expected amongst the computers that will be running the tests. If the tests are running on a dedicated machine, this technique probably wouldn't be needed.
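That retry idea can be sketched generically: run a check up to N times and pass if any attempt succeeds. In this article's case, the block would wrap completes_within?; the helper name is my own.

```ruby
# Pass if any of `attempts` runs of the block returns true.
def eventually?(attempts = 3)
  attempts.times { return true if yield }
  false
end

# e.g.:
#   eventually?(3) { completes_within?(acceptable_amount_of_time) { generator.numbers } }
```

A genuine failure still costs N timed attempts, which is the added-runtime tradeoff described above.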

In order to gain 100% confidence that there will be no false negatives in the test results, the structure of the code could be modified so that it can be determined whether the algorithm is considering the result limit while it generates the numbers, or afterwards, as in the case of the buggy version of the algorithm. The tradeoff here is that a certain amount of the algorithm logic must be externalized so that the necessary assertions can be set up in the test. This makes the algorithm itself less adaptable to change, as some changes could make the test fail inappropriately, since not only would the results be getting tested, but also the way in which the algorithm works.