Your RSpec test suite will degrade over time. As it endures the introduction of feature after feature, entropy takes hold. Innocent additions and minor neglect can have pernicious effects. Some familiar scenarios might include:
The FactoryGirl cascade
A new requirement emerges: users must have an associated address. The fix is probably easy: slap some validations into the User model and add a line of code to the user factory.
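A sketch of what that change might look like (the model and factory code here are assumptions, not the actual diff):

```ruby
# app/models/user.rb
class User < ActiveRecord::Base
  belongs_to :address
  validates :address, presence: true
end

# spec/factories/users.rb -- one innocent-looking line
FactoryGirl.define do
  factory :user do
    sequence(:email) { |n| "user#{n}@example.com" }
    address # builds the associated address factory on every user
  end
end
```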
Thanks, FactoryGirl. Look how easy you made that. You’re probably a cool gem.
But unbeknownst to you, the address factory requires all states/provinces to be seeded in the database via other factories. And the seed data creation isn’t idempotent because your coworker, me, Ben, didn’t really think about that. So now creating 10 users implies you’re creating hundreds of states/provinces.
Greedy factories will quietly produce cascades of slow database traffic.
Allowing HTTP traffic to third parties
Our application’s newsfeed looks less credible when littered with user avatar placeholder images, but we can’t force users to upload their gorgeous mugs. Luckily, there is an old trick for this issue: Gravatar.
Years ago, in a moment of youthful indiscretion, maybe in high school or something, our users uploaded avatars to Gravatar. Armed with just an email, we can conjure these avatars from the Gravatar service and fill our app with faces.
However, there is one caveat. If a user hasn’t uploaded an avatar AND they don’t have a Gravatar, we should hide them from the newsfeed.
I, Ben, your actual coworker and dear friend, did twice the appropriate amount of bath salts and got to work on this feature. If a user in the newsfeed lacks a local avatar, we make a synchronous request to Gravatar to check for an avatar there. I’ll probably make the argument that since this is cached in production, the performance penalties are acceptable.
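A sketch of what that synchronous check might look like, using only the Ruby standard library (the class and method names are assumptions, not the real application code):

```ruby
require "digest/md5"
require "net/http"

class GravatarCheck
  # Gravatar serves avatars at a URL derived from the MD5 of the email;
  # d=404 asks Gravatar to return a 404 instead of a default image.
  def self.url_for(email)
    hash = Digest::MD5.hexdigest(email.strip.downcase)
    "https://www.gravatar.com/avatar/#{hash}?d=404"
  end

  # One HTTP round trip per user lacking a local avatar: cheap when cached
  # in production, painfully slow when repeated throughout a test suite.
  def self.exists?(email)
    uri = URI(url_for(email))
    Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
      http.head(uri.request_uri).code == "200"
    end
  end
end
```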
But guess where caching doesn’t happen? Your test suite. Get #rekt.
I used to believe in feature spec purity. That is, when writing feature specs, the application should be treated as a black box 100% of the time, and data should only be created organically via the UI.
Every time you need a logged-in user, you must exercise the log in form. Every time a user needs to send a message to a friend, the friend needs to sign up, verify his or her email address, and accept a friend request – all via the UI.
This approach had the advantage of being highly principled and the disadvantage of being utterly worthless. Not only is it criminally slow, but respecifying the sign-in flow hundreds of times isn’t useful.
I have learned the error of my ways. Minimal database or session manipulation isolated behind reusable, declarative methods is acceptable to ensure that features are specified once and only once. But I learned this only after bringing a test suite to its knees.
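As a sketch of the idea (the helper names are hypothetical, and the session shortcut assumes something like the rack_session_access gem):

```ruby
# spec/support/session_helpers.rb -- the sign-in flow is specified once,
# and everywhere else we use a declarative shortcut.
module SessionHelpers
  # Exercised by exactly one feature spec, the one that specifies sign-in.
  def sign_in_through_ui(user)
    visit "/sign_in"
    fill_in "Email", with: user.email
    fill_in "Password", with: "password"
    click_button "Sign in"
  end

  # Everywhere else: minimal, reusable session manipulation.
  def sign_in(user)
    page.set_rack_session(user_id: user.id)
  end
end

RSpec.configure do |config|
  config.include SessionHelpers, type: :feature
end
```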
I’m really sorry
Guys, I’m sorry I did that to our test suite. I want to help turn things around.
Getting to the root of the problem
There are many ways to attack the problem of test suite slowness: limit database access, avoid loading the entire Rails environment, don't allow objects to integrate deeply, prevent third-party API integration, use a faster web driver, increase parallelism on CI, delete specs, and so on.
If you’re writing medium to large-sized Rails applications, you probably have thousands of tests, and it can be difficult to discern where your optimization efforts are best spent. Some useful guidance might include:
- Which specs are the slowest?
- Which specs generate the most queries?
- Which specs generate the most requests?
To acquire such information, we’ll need to hook into Rails to receive notifications when queries and requests are executed, and we’ll need to hook into RSpec to learn when spec examples start and finish to get timing data.
Hooking into RSpec
We can register a listener class to receive notifications at various times throughout the RSpec lifecycle.
Consider a trivial RSpec example:
Which produces the output:
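Something along these lines, assuming the default progress formatter (timings will vary):

```
$ rspec spec/arithmetic_spec.rb
..

Finished in 0.004 seconds
2 examples, 0 failures
```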
Let's focus on four hooks: :start, :example_started, :example_passed, and :example_failed. We'll create a Listener class to respond to these lifecycle events in our spec helper.
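A sketch of what that wiring might look like in a spec helper (the listener body here just prints each hook as it fires; the hook names are RSpec's own reporter notifications):

```ruby
# spec_helper.rb
class Listener
  def start(notification)
    puts "start"
  end

  def example_started(notification)
    puts "example_started: #{notification.example.description}"
  end

  def example_passed(notification)
    puts "example_passed"
  end

  def example_failed(notification)
    puts "example_failed"
  end
end

RSpec.configure do |config|
  config.reporter.register_listener(
    Listener.new, :start, :example_started, :example_passed, :example_failed
  )
end
```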
The hooks are invoked at the appropriate moments in the lifecycle of the spec and our console output looks like this:
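With a listener that prints each hook name (plus the example description when it starts), a two-example run might print (descriptions hypothetical):

```
start
example_started: adds
example_passed
example_started: subtracts
example_passed
```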
The notification argument passed to each hook is particularly interesting because we can use it to access metadata about the example.
Adding Rudimentary Profiling
Let's leverage RSpec hooks to inject profiling. Our first pass at profiling will simply write the example description, run time, and outcome to a file. Create a new file to hold the profiling code and require "profiling" in your spec helper.
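The original profiling file isn't reproduced here; a minimal sketch might look like this (the file path and line format are assumptions):

```ruby
# profiling.rb -- first pass: description, outcome, and run time per example
class Profiler
  def initialize(path = "tmp/spec_profiles.txt")
    @path = path # assumes the tmp/ directory exists
  end

  def example_started(notification)
    @started_at = Time.now
  end

  def example_passed(notification)
    record(notification, "passed")
  end

  def example_failed(notification)
    record(notification, "failed")
  end

  private

  def record(notification, outcome)
    duration = Time.now - @started_at
    File.open(@path, "a") do |file|
      file.puts "#{notification.example.description} | #{outcome} | #{duration.round(3)}s"
    end
  end
end

# In spec_helper.rb:
#   config.reporter.register_listener(
#     Profiler.new, :example_started, :example_passed, :example_failed
#   )
```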
This produces output like the following in the profile file:
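The example descriptions and exact line format here are hypothetical:

```
logged-in user sees the newsfeed | passed | 4.21s
user without an avatar is hidden | passed | 9.83s
user can update a profile | failed | 2.77s
```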
Upgrading to CSV
We’re making progress, but flat text files are not an ideal format. Changing to CSV format opens up the wide world of spreadsheet manipulation.
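A CSV-flavored sketch of the same listener, using Ruby's standard csv library (the headers and path are assumptions):

```ruby
require "csv"

class CsvProfiler
  HEADERS = %w[description outcome seconds].freeze

  def initialize(path = "tmp/spec_profiles.csv")
    @path = path
    # Write the header row once, the first time the file is created.
    CSV.open(@path, "w") { |csv| csv << HEADERS } unless File.exist?(@path)
  end

  def example_started(notification)
    @started_at = Time.now
  end

  def example_passed(notification)
    record(notification, "passed")
  end

  def example_failed(notification)
    record(notification, "failed")
  end

  private

  def record(notification, outcome)
    CSV.open(@path, "a") do |csv|
      csv << [notification.example.description, outcome, (Time.now - @started_at).round(3)]
    end
  end
end
```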
Logging Queries and Requests
The duration of a spec is useful information, but it would be nice to complement the profiling with more actionable data: chiefly, the number of SQL queries issued and the number of requests made to the application.
To pull query and request data, we'll use the Rails instrumentation API, ActiveSupport::Notifications, which synchronously instruments events for a variety of Rails subsystems, including ActiveRecord and ActionController.
For example, we can log every SQL query to the console, assuming we have an ActiveRecord model called User.
We’ll see queries logged in the console:
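The output might look something like this (hypothetical queries from a spec run):

```
BEGIN
INSERT INTO "users" ("email", "created_at", "updated_at") VALUES (?, ?, ?)
SELECT "users".* FROM "users" WHERE "users"."id" = 1 LIMIT 1
ROLLBACK
```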
Notice that, in addition to the users table queries, we have logged a database ROLLBACK, which is an artifact of testing with transactional fixtures. We can avoid counting such queries in our profiling by creating a blacklist pattern (partially borrowed from the rails-footnotes gem).
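A sketch of such a pattern (the exact list of ignored statements is an assumption, adapted from the idea in rails-footnotes):

```ruby
# Transaction management and housekeeping statements that shouldn't count
# against a spec's query tally.
IGNORED_QUERIES = /^(PRAGMA|SHOW|BEGIN|COMMIT|ROLLBACK|RELEASE|SAVEPOINT)/i

def counted_query?(sql)
  sql !~ IGNORED_QUERIES
end
```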
Let's augment our Listener class to count queries and requests using ActiveSupport::Notifications subscriptions.
Profiling over time
If test suite performance degrades over time, we need to profile performance over time. To accomplish this, we’ve extracted profiling into a gem which records the current git SHA and commit date, as well as everything we’ve covered so far. Results are stored in an SQLite database, allowing you to query and aggregate as necessary.
Install the gem by adding it to the Gemfile.
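For example:

```ruby
# Gemfile
group :development, :test do
  gem "rspec_profiling"
end
```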
Then add the RSpec hooks in your spec helper.
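The wiring typically amounts to a single require, though the exact line may vary by gem version (check the gem's README):

```ruby
# spec_helper.rb
require "rspec_profiling/rspec"
```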
Finally, set up the profiling database and collect a sample.
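The commands below assume the gem's rake tasks; task names may vary by version:

```shell
rake rspec_profiling:install   # create the SQLite profiling database
rspec                          # run the suite, collecting a sample
```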
We can review the profiling results in the rspec_profiling console:
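A hypothetical session (the actual query interface is defined by the gem; treat these calls and numbers as illustrative):

```
$ rake rspec_profiling:console
> results.count
=> 1403
> results.order(:time).last.description
=> "user without an avatar is hidden"
```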
Moving through history
To understand how performance may have regressed over the lifetime of a project, we can iterate over our git history as demonstrated in this script.
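The original script isn't reproduced here; a minimal version might look like this (the script name and commit count are assumptions):

```shell
#!/bin/sh
# profile_history.sh: run the suite against each of the last ten commits
# so the profiler records results per SHA.
set -e

for sha in $(git rev-list --max-count=10 master); do
  git checkout "$sha"
  bundle install --quiet
  bundle exec rspec || true   # keep iterating even if a commit has failures
done

git checkout master
```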
Add the script to the root Rails directory and modify its permissions:
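Assuming the script is saved as profile_history.sh (name hypothetical):

```shell
chmod +x profile_history.sh
./profile_history.sh
```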
Once you’ve collected profiles, you can illustrate test suite runtimes with a visualization tool like spark.
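For instance, feeding per-commit suite runtimes in seconds (hypothetical numbers) to spark prints a tiny bar chart in the terminal:

```shell
spark 218 224 231 299 301 335 369 402
```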
Profiling your specs can help you identify and mitigate test suite performance issues and even genuine application performance issues. It’s a useful tool to stash in your development toolbox, and sometimes it’s the only way to keep me, Ben, your handsome and universally beloved coworker, in check.