I was recently responsible for building a suite of Behat-based tests for an existing site using the excellent Behat Drupal Extension. After initially wrapping my head around Behat with some simple tests, I moved on to some more complex tests for the site’s business logic, involving the creation of nodes and the voting API. Happy with my test feature, I ran it a couple of times only to get a different result every time; sometimes it would pass, sometimes it would fail early and sometimes it would fail on the penultimate step. What was going on?
A little background
Ordinarily when practicing TDD, the goal is to write a test that initially fails, then to write code that makes the test pass. While I wasn’t able to practice TDD from the outset of this project, the goal was to ensure that important existing functionality wouldn’t be broken while implementing new functionality. A test that passes or fails inconsistently makes this a difficult methodology to practice, as it becomes impossible to pinpoint the true source of a failure.
It’s also important to note that for this test feature I was using the Selenium2 driver for Mink to simulate a user’s actions on the site. In this particular case, we needed Javascript support to click a voting API link that was created dynamically.
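For context, the Selenium2 session is typically wired up in behat.yml via the Mink extension. A rough sketch is below; the exact extension key and the base_url and wd_host values are assumptions that vary by Behat/MinkExtension version and environment:

```yaml
default:
  extensions:
    Behat\MinkExtension\Extension:
      base_url: 'http://mysite.local'              # assumed local site URL
      javascript_session: selenium2                # use Selenium2 for @javascript scenarios
      selenium2:
        wd_host: 'http://localhost:4444/wd/hub'    # assumed Selenium server address
```

Scenarios tagged with @javascript will then run through the Selenium2 session rather than the headless default.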
Giddy up!
After some time attempting to diagnose why tests would fail inconsistently, I eventually narrowed it down to a caveat of Behat:
Often, especially when using Mink to test web applications, you will find that Behat goes faster than your web application can keep up – it will try and click links or perform actions before the page has had chance to load, and therefore result in a failing test, that would have otherwise passed.
Essentially, the website under test was sometimes responding too slowly, resulting in failed tests. In the end it was a simple fix in this case: enabling CSS and Javascript aggregation made the test results consistent. This goes to highlight that site performance isn’t just important for end-users or live sites, but also for development and testing environments!
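For reference, on a Drupal 7 site the same aggregation settings can be toggled from the command line with Drush (the variable names below assume Drupal 7; the UI equivalent is the Performance page under Configuration » Development):

```shell
# Enable CSS and JavaScript aggregation (Drupal 7 variable names).
drush vset preprocess_css 1
drush vset preprocess_js 1
# Clear caches so the aggregated files are regenerated.
drush cc all
```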
Whoa there!
On the other hand, there are times when you might want to deliberately slow your tests down. A large response, a slow database query or a long AJAX request (although Selenium2 seems to be quite good at waiting for AJAX requests to complete) may trip Behat up in much the same way as above; however, these cannot always be worked around simply by tweaking your site’s performance.
To slow down the execution of a test I implemented a function similar to the spin function described here in my FeatureContext:
/**
 * @Given /^I wait (\d+) seconds$/
 */
public function iWaitSeconds($wait) {
  for ($i = 0; $i < $wait; $i++) {
    sleep(1);
  }
}
It should then be possible to write a feature as such:
Scenario: Waiting for a really slow page
  Given I am on "a/slow/page"
  And I wait 30 seconds
  Then I should see "Something that took a long time to load"
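A more robust alternative to a fixed wait is the spin pattern mentioned above: retry a condition until it succeeds or a timeout expires. Here is a rough, standalone sketch; the spin() name and the $lambda callable are illustrative (in practice this would be a method on your FeatureContext checking something via Mink):

```php
<?php
// Minimal sketch of the "spin" pattern: retry a condition once per second
// until it returns true, or throw once the timeout expires. This avoids
// always paying the full cost of a fixed sleep.
function spin(callable $lambda, $wait = 10) {
  for ($i = 0; $i < $wait; $i++) {
    try {
      if ($lambda()) {
        return true;
      }
    } catch (Exception $e) {
      // Condition not ready yet (e.g. element not found); ignore and retry.
    }
    sleep(1);
  }
  throw new Exception("Spin timed out after {$wait} seconds.");
}
```

Because it returns as soon as the condition holds, a fast page costs almost nothing, while a slow one still gets the full timeout to catch up.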
At home on the Drupal-prairie
Behat as a BDD testing tool for Drupal seems to be steadily gaining in popularity. Some of the things Behat is capable of, such as this cool technique demonstrated by Jonathan Jordan of Metal Toad, make it a highly practical tool in ensuring your site is functioning correctly or helping you track down problems when it's not.