Tag Archives: Rails

Karma & Browserify

Recently I had the pleasure of setting up Angular tests within a project. For testing, we decided on the Karma test runner from the Angular team; basically, the idea was that we wanted to stick with the runner built and used by the Angular team itself.

The project is currently a Rails gem, and there are a bunch of nice tutorials on how to integrate the two, but this project has aspirations to become a full-blown Angular app without any direct Rails dependencies. So, solutions like the rails_karma gem or rails assets were not an option.

The good news is that the project in question used Browserify, with the sprockets-browserify gem integrating it into Rails. After a bit of investigation, we came upon the karma-browserify node module. This way, the resources under test were still served by Browserify, in the same manner as in the live application, but Rails was not needed to run the tests. The module essentially adds Karma preprocessors that make Karma aware of Browserify-served resources. In the end, this made our tests run fast, without involving Rails, which is a nice bonus.

The CoffeeScript configuration that ended up running the test suite:


module.exports = (config) ->
  config.set
    basePath: '..'

    frameworks: [
      'jasmine'
      'browserify'
    ]

    preprocessors:
      'app/assets/javascripts/index.coffee': ['browserify'] # index contains all requires
      'spec/**/*.coffee': ['coffee']

    browserify:
      extensions: ['.coffee']
      transform: ['coffeeify']
      watch: true
      debug: true

    files: [
      'spec/support/karma_init.coffee'
      'app/assets/javascripts/index.coffee' # load the application, have browserify serve it
      {
        # watch application files, but do not serve them from Karma since they are served by browserify
        pattern: 'app/assets/javascripts/*.+(coffee|js)'
        watched: true
        included: false
        served: false
      }
      'spec/support/**/*.+(coffee|js)' # load spec dependencies
      'spec/javascripts/**/*_spec.+(coffee|js)' # load the specs
    ]

    exclude: []
    reporters: ['progress']
    port: 9876
    colors: true
    logLevel: config.LOG_INFO
    autoWatch: true
    browsers: ['PhantomJS']
    captureTimeout: 60000
    singleRun: false

Just run karma with karma start spec/karma.config.coffee and that’s it.



Deploying Discourse to Heroku

Recently I stumbled upon Discourse. Finally someone tackled that problem: forums, while rich in content, have been so dull and unfriendly for so long. Anyway, I wanted to get it up and running for myself, preferably on some cloud infrastructure, to play around. I've had previous experience with Heroku, so I chose that setup, for no other reason than familiarity. Discourse's own forum has plenty of other options, so feel free to investigate.

As for the Heroku setup, there are existing guides, see [4] and [5]. The difference between them is that with Discourse's default configuration samples you get Open Redis support, while I wanted to use the Redis Cloud add-on, similar to Swrobel's take on it [4]. I also wanted to use Autoscaler for Sidekiq to reduce the total cost on Heroku, as noted in Discourse's document. All in all, I ended up with a mix of ideas from both sources. Add-ons used on Heroku:

Currently free plans are used for all of them as you can see below.

Heroku Configuration for a Discourse instance

To be able to repeat the process and easily deploy updates from Discourse's GitHub repo, I've created a small Bash script. It basically performs the following tasks:

  • goes into your local Discourse git repo
  • creates a new branch based on the current branch you're on (all changes must be committed)
  • creates the Redis configuration using the provided sample, tweaked for Redis Cloud
  • adds mail configuration to the production environment
  • removes clockwork from the Procfile
  • sets the Ruby version to 2.0 in the Gemfile
  • configures Sidekiq for Autoscaler
  • creates a temporary database and migrates it
  • precompiles assets using the above database
  • drops the temporary database
  • commits all configuration changes along with the precompiled assets
  • pushes it all to Heroku
  • migrates the database on Heroku
  • and finally deletes the deployment branch

You can find the script here; feel free to use it, change it, do with it as you see fit 🙂 The main idea was to do as little manual work as possible, at least for this phase, where I follow the default instructions closely. This of course might change, but for now it is quite all right.

Prerequisites for executing the script:

  • You need to be able to run Discourse locally or at least to be able to precompile assets
  • This means you need:
    • PostgreSQL
    • Ruby 2.0
    • Cloned Discourse Github repo

When setting up PostgreSQL, take care to enable HStore on it [2] and to set the appropriate discourse user permissions [3]. You can easily change the username and password to match your setup.

Also, you need to replace the SECRET_TOKEN within the script and match it to your own setup. I took care to match it to the one set for the Heroku instance (heroku config:get SECRET_TOKEN).

Aside from the script, you should still follow the instructions in the documents mentioned above to e.g. create the initial user, set up Amazon S3 uploads etc.

Finally, the forum is up:

Running Discourse

Resources:

  1. Installing PostgreSQL on Ubuntu
  2. Enabling HStore for PostgreSQL
  3. Setting PostgreSQL permissions
  4. Swrobel’s take on Heroku deployment
  5. Discourse’s own Heroku deployment instruction
  6. Related topic on Discourse’s forum

Scraping Amazon item offers

In my pet project Dealesque, I am trying to compare all the offers on a number of Amazon items, the idea being that it can help decide which offers to use to minimize shipping and total cost. Using the Amazon Product Advertising API was the logical first step, but it doesn't return all the offers for an item. It does, however, return a "more offers URL" for each item. Hence, the old scrapin' was due, and none too late!

Plain wget-like action would not suffice, since Amazon takes care to block unwanted traffic. So, the mechanize gem to the rescue! It actually allows you to impersonate a real browser:

agent = Mechanize.new { |agent| agent.user_agent_alias = 'Mac Safari' }

After that, you can navigate the site, click away, read any forms etc.
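
For example, a quick sketch of that kind of navigation (the link text and URL here are purely illustrative):

page = agent.get('https://www.amazon.com')
link = page.link_with(text: /Deals/) # find a link by its visible text
page = agent.click(link) if link     # follow it like a browser would
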
For scraping, what I actually ended up doing was getting the content of the "more offers URL" page and parsing it with Nokogiri. Something like:

page = agent.get(more_offers_url)
root = Nokogiri::HTML(page.content.strip)
scrape_content(root)
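
The scrape_content part then just walks the document; a rough sketch, with CSS selectors that are purely illustrative since Amazon's actual markup is more convoluted (and changes often):

def scrape_content(root)
  # each offer row: price, shipping and seller (selectors are made up for illustration)
  root.css('div.offer').map do |offer|
    {
      price: offer.css('span.price').text.strip,
      shipping: offer.css('span.shipping').text.strip,
      seller: offer.css('div.seller a').text.strip
    }
  end
end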

For the current development stage, this is doing just fine. Unfortunately, it will not suffice for production use. There will probably be some traffic throttling from Amazon, and some benchmarking will be needed to determine the limits. Proxying the requests will probably be required too. But I leave this for some other time.

The result of scraping the offers for picked items:

Dealesque picked items


Mixin Less & Sass with twitter-bootstrap-rails

When working with a Rails app, if you want to use Twitter Bootstrap as the base of your design, there are plenty of options out there, from manual import to a more than fair list of existing gems. Probably the best place to start is the "Twitter Bootstrap Basics" RailsCast by Ryan Bates. I decided on the twitter-bootstrap-rails gem by Seyhun Akyürek, mostly because it maintains a fresh version of Bootstrap. The usual setup and usage are covered in the mentioned cast and related material, but the gem uses Less instead of Sass. I really wanted to work with Sass, so the question was: how to combine the two?

The answer actually ended up being pretty simple. Let's say the gem has generated a Less stylesheet named "bootstrap_and_overrides.css.less". If you want to use the classes and mixins from it, all you have to do is place the following at the top of your Sass file:

  @import "bootstrap_and_overrides";

Now all the definitions from the Less part of the story are available in your Sass too. An added bonus is that you can now separate your HTML from the style completely and just add Twitter-related classes in your Sass stylesheet. For example, here is how you can set the image container div width to Twitter's "span2" class.

The Haml source:

.item
  .image_container
    .image
    ....

Sass style-sheet:

@import "bootstrap_and_overrides";
.item {
  .image_container {
    @extend .span2;
  }
}

The resulting HTML is:

<div class="item">
  <div class="image_container">
    <div class="image">...

The end result is that "span2" will be applied to the image container, because the "image_container" CSS class is actually extended with the "span2" mixin.

A lot like Ruby, go figure!

I haven't decided whether to keep using this style or not, just playing with it for now, but it is good to know you can actually mix apples & oranges 🙂


Rails secret

The secret_token.rb initializer has the word "secret" in it for a reason. Unfortunately, open-sourced Rails applications tend to forget this. Ignoring the file in the SCM of your choice is not enough; the secret token is needed for Rails to work. So, how can we avoid publishing this information and still be able to configure the secret token in production? The answer is simple: either put your secret token in a .gitignored file and load it in secret_token.rb, or use the environment. So, how about we use both?

class ConfigurationError < StandardError; end

# config/secret_token.yml is .gitignored and expected to look like:
#   production:
#     token: '...'
secret_token_file = 'config/secret_token.yml'
secret_token = ENV['SECRET_TOKEN'] # the environment wins if both are set
secret_token ||= YAML.load_file(secret_token_file)[Rails.env]['token'] if File.exists?(secret_token_file)
raise ConfigurationError.new("Could not load secret token from environment or #{File.expand_path(secret_token_file)}") unless secret_token

Dealesque::Application.config.secret_token = secret_token

This configuration has the added bonus that you can deploy to Heroku without any changes to the code.

SECRET_TOKEN can be added as an environment variable, or a config var as Heroku calls them. The procedure is simple: open up your console and:


heroku config:set SECRET_TOKEN=...aba5a33f3dbad3b694f6154b...

And that’s it 🙂


Nokogiri vs Crack & Hashie

Recently I wrote about using the Vacuum gem for accessing the Amazon Product Advertising API. As the result of API calls, it returns the vanilla XML response from the Amazon API. There are gems that return POROs, but the "bare metal" access I get from Vacuum made it a great gem to use for several reasons (a breakdown of those reasons and of the other existing gems is a topic for another post). Parsing the Amazon responses is a bit of a funny issue, since information is located at various points. Here's an example:

  root = Nokogiri::XML(response.body).remove_namespaces!
  items = parse_items(root)

  def parse_items(node)
    node.xpath('//Items/Item').map do |item_node|
      create_item_from(item_node)
    end
  end

  def create_item_from(node)
    attributes = {}
    attributes[:id] = parse_value(node, './ASIN')
    attributes[:title] = parse_value(node, './ItemAttributes/Title')
    attributes[:url] = parse_value(node, './DetailPageURL')
    attributes[:group] = parse_value(node, './ItemAttributes/ProductGroup')
    attributes[:images] = parse_item_images(node)
    Item.new(attributes)
  end

  def parse_item_images(node)
    image_sets = node.xpath('./ImageSets/ImageSet')
    return if image_sets.children.size == 0

    image_set = image_sets.find {|image_set| image_set.attribute('Category').value == 'primary'} || image_sets.first
    image_set.xpath('./*').inject(Hash.new) do |images, image_node|
      image = create_item_image_from(image_node)
      images[image.type] = image
      images
    end
  end

  def create_item_image_from(node)
    attributes = {}
    attributes[:url] = parse_value(node, './URL')
    attributes[:height] = parse_value(node, './Height', :to_i)
    attributes[:width] = parse_value(node, './Width', :to_i)
    attributes[:type] = node.name.gsub("Image", "").downcase
    ItemImage.new(attributes)
  end

  def parse_value(node, path, apply_method = nil)
    nodes = node.xpath(path)
    if nodes.first
      value = nodes.first.content
      value = value.respond_to?(:strip) ? value.strip : value
      apply_method ? value.send(apply_method) : value
    end
  end

As you can see, my domain objects don't map exactly to the Amazon structure. But that's what this parser is all about: it translates API responses into what I need in my domain. What bothered me was that I needed to use hard-coded paths to specific pieces of information, so I went looking for another solution. This is where the Crack & Hashie gems come in. The first one converts the received XML to a Hash, and the second one enables nicer Hash traversal. The code seems nicer:

  data = Hashie::Mash.new(Crack::XML.parse(response.body))
  items = parse_items(data)

  def parse_items(data)
    items_data = data.ItemSearchResponse.Items.Item
    if items_data.kind_of?(Array)
      items_data.map {|item_data| create_item_from(item_data)}
    else
      [create_item_from(items_data)]
    end
  end

  def create_item_from(data)
    attributes = {}
    attributes[:id] = data.ASIN
    attributes[:title] = data.ItemAttributes.Title
    attributes[:url] = data.DetailPageURL
    attributes[:group] = data.ItemAttributes.ProductGroup
    attributes[:images] = parse_item_images(data)
    Item.new(attributes)
  end

  def parse_item_images(data)
    return unless data.respond_to?(:ImageSets)

    image_set = data.ImageSets.ImageSet
    image_set = image_set.find { |image_set| image_set.Category == 'primary' } || image_set.first if image_set.kind_of?(Array)
    image_set.keys.select { |key| key =~ /.*Image/ }.inject(Hash.new) do |images, key|
      image = create_item_image_from(image_set.send(key), key)
      images[image.type] = image
      images
    end
  end

  def create_item_image_from(item_data, type)
    attributes = {}
    attributes[:url] = item_data.URL
    attributes[:height] = item_data.Height.to_i
    attributes[:width] = item_data.Width.to_i
    attributes[:type] = type.gsub("Image", "").downcase
    ItemImage.new(attributes)
  end

So, the result, although similar, seems nicer to me. A bit more code-like, fewer strings, even a bit less code. Definitely a win so far. But there are some issues.

The first one is that with XML parsing I don't have to think much about collections: they are always the same, regardless of the number of child nodes. With Crack/Hashie, I suddenly needed to think about it, since the combination converts a collection of one into a direct child. Hence the Array check in the parse_items method. I don't like making such checks, but OK, it was limited and specific enough not to hurt me later.

The second issue was performance. Even while testing, everything suddenly seemed slower. At first I attributed this to my fatigue, but just to be sure, I made a small performance test. It consisted of parsing a predefined XML document (a recorded Amazon API response with 9 items; you can see an example here) 100 times. The test routine can be found here. The results were more than interesting:

Seconds:      1st    2nd    3rd    4th    5th    Average
Nokogiri      1.18   1.26   1.27   1.33   1.23   1.25
Crack/Hashie  7.06   7.87   7.56   7.58   7.44   7.50
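
For reference, the test routine boils down to something like this (a sketch; the parser class names and fixture path are illustrative, not the actual test code):

  require 'benchmark'

  xml = File.read('spec/fixtures/amazon_item_search_response.xml')

  puts Benchmark.measure { 100.times { NokogiriParser.new.parse(xml) } }
  puts Benchmark.measure { 100.times { CrackHashieParser.new.parse(xml) } }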

It seems the extra baggage from Crack and Hashie makes that solution about six times slower. This was more than enough reason to abandon the approach and just live with plain XML and Nokogiri. But at least now I know why 🙂


Stubbing Amazon API calls using VCR

When integrating with external services, it is wise to test those interactions. But then again, it soon becomes tedious and slow if you need to repeat tests and wait for those external services' responses. So the VCR gem comes to the rescue! Much has been written about it, so I won't go into details, but what put a smile on my face was the solution for an integration test where I needed to test searching Amazon item listings using the Vacuum gem. To test my code, I needed a way to:

  • tell VCR to record the Vacuum gem request and Amazon API response
  • reuse it for subsequent test runs

Two things bothered me:

  • Vacuum uses Excon for HTTP layer
• Amazon API calls are signed, making two identical search calls have different URIs – the difference being e.g. in the Timestamp part

So how does one hook into these layers for test purposes? Fortunately, VCR comes with solutions for both issues.

There is a hook_into VCR configuration option for Excon. Essentially this means VCR can intercept Vacuum calls, great! Configuration is simple: just add the :excon hook in the spec helper, as sketched below.
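
A minimal sketch of that part of the spec helper (the cassette location matches the one mentioned later):

require 'vcr'

VCR.configure do |c|
  c.cassette_library_dir = 'spec/fixtures/vcr_cassettes'
  c.hook_into :excon # let VCR intercept the Excon requests made by Vacuum
end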

For signed Amazon API requests, VCR magic was needed 🙂 As you probably know, VCR saves the request and response to the appropriate cassette. For the tests within a cassette, it tries to match the request from the test to the saved ones by comparing the HTTP method and URI, as explained here. I couldn't use that, since Amazon API requests are signed, remember? And the existing matchers were of no help either. But VCR also allows for custom matchers. So, I created a custom matcher that compares the search keywords from the request URI, and that was it!
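
Here is a sketch of such a matcher; register_request_matcher is VCR's API, while the matcher name and the Keywords parameter handling come from my use case:

require 'cgi'
require 'uri'

VCR.configure do |c|
  c.register_request_matcher :amazon_keywords do |real, recorded|
    # compare only the Keywords query parameter, ignoring Timestamp, Signature etc.
    keywords = ->(request) { CGI.parse(URI(request.uri).query.to_s)['Keywords'] }
    keywords.call(real) == keywords.call(recorded)
  end
end

A spec then opts in with VCR.use_cassette('amazon_search', match_requests_on: [:method, :amazon_keywords]).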

Now, on the first test run, VCR records the Amazon API request and response to the configured location (spec/fixtures/vcr_cassettes in my case). Subsequent runs reuse those recordings, which is more than OK for development tasks. If one needs to refresh the Amazon API response, just delete the saved cassette(s) and the sequence is repeated. Another choice I had to make was whether to store those responses in SCM or not. In the end, I decided not to save them: the search action is not destructive or otherwise dangerous, so any developer can repeat the process without cost. Mind you, in some other use case, e.g. when billing some action over a payment gateway, it would probably be wise to store the responses.


Deploying Rails with Twitter Bootstrap on Heroku

Lately I've been playing with deploying a Rails application to Heroku. A most pleasurable experience, but I've had some issues with Twitter Bootstrap. The reason is that Heroku discourages the usage of the therubyracer gem, see https://devcenter.heroku.com/articles/rails3x-asset-pipeline-cedar#therubyracer for details. In my development environment I used the twitter-bootstrap-rails gem, and I wanted to keep using it along with the Less support. There are some other, non-Less solutions, as described at the Ruby Source here, but those didn't seem appealing. So, the only solution I could find was to precompile the assets and push them to Heroku. Aside from that, there were a few more issues to resolve:

  • Use of the bootstrap_flash helper doesn't work when precompiling assets – the solution was to copy the helper into my Rails project, as described here
  • To be able to continue using the therubyracer gem in development, I had to move it to the development group in the Gemfile, as sketched after this list
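
In the Gemfile, that amounts to something like this sketch:

# Gemfile -- therubyracer only where assets are compiled locally
group :development do
  gem 'therubyracer'
end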

Now the procedure is pretty simple: just precompile the assets and push to Heroku, and that's it? Well, almost 🙂

I also wanted to avoid using the precompiled assets in development, since that requires a bit of manual work (one needs to precompile for each run); at the same time, I didn't want to forget to precompile them before pushing to Heroku. Hence, I put together a small shell script that:

  • creates a separate branch from the current one
  • precompiles the assets
  • pushes that branch to Heroku – it needs to be pushed to master
  • deletes the new branch


In memory session store for Rails 3

For the past two days I've been playing with the idea of having an in-memory session store in a Rails application. The 4KB cookie limit soon became an issue, and I didn't want to use Redis, Memcached or even the database for this. In production, those services have their place for sure, but for development it seems silly to ask every developer on the team to have them installed and running just to be able to run the application. A development environment is by its nature far from production, even when intentionally similar (e.g. the same OS), so it makes sense to use as few dependencies as possible.

So, I went searching for an in-memory session store. At first, there were some references to memory_store, but unfortunately those were all Rails 2 related. There were even some attempts at re-implementing it in Rails 3, but those, of course, failed.

Finally, this idea of implementing it for Rails 3 led me to reading the session-related code in Rails. And bingo, there it was! The session store in Rails 3 has a couple of implementations, one of them being the CacheStore. And since Rails ships with an in-memory cache store, the solution was pretty much straightforward. You can read up on cache store options in Rails here.

In the project, the following changes were needed:

# /config/initializers/session_store.rb
CrazyApp::Application.config.session_store :cache_store

# /config/environments/development.rb
config.cache_store = :memory_store

This will in effect put all your session data in the Rails process memory. After you kill Rails, it all goes away. Ideal for development 🙂

In case you need the data to persist, you can use the file store, which will by default save the session data under tmp so you can inspect it. The file store is actually the default cache store.
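
A minimal sketch of that variant (the explicit path argument is optional; tmp/cache is the Rails default):

# /config/environments/development.rb
config.cache_store = :file_store, "#{Rails.root}/tmp/cache"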

There is one issue with this setup: you can't have separate stores for session and cache data. If that is ever needed, a new session store implementation would be required, but let's cross that bridge when the time comes.


Running specs without Rails

For most of the Rails work I do, I prefer to run specs without Rails as much as possible. The core of the system is usually in POROs, so there is no need to wait for the Rails environment to load. The idea is picked up from the web, of course, but with a little twist. In all the PORO specs I reference spec_helper_without_rails.rb, configured along the lines of the sketch below. Note that some parts were left out because they are not relevant to this topic; the important part is that Rails is not referenced at all.
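
The helper itself is not reproduced here, but a minimal sketch of the idea could look like this (the load path is illustrative):

# spec/spec_helper_without_rails.rb -- note that Rails is never required
require 'rspec'

# make the POROs requirable without loading the Rails environment
$LOAD_PATH.unshift File.expand_path('../../app/models', __FILE__)

RSpec.configure do |config|
  config.mock_with :rspec
end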

Now, to run those specs, one recommendation is to use RSpec tags. What bothered me is that the Rails environment gets loaded after all, because RSpec seems to load all specs before filtering them out. This resulted in the PORO specs being executed / filtered out correctly, but the load time was as slow as before. The trick, eventually, is to run RSpec only for those specs that use the above Rails-free helper. And for that I created a shell/bash script that does just that:
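
The original is a small shell script; the same idea, sketched in Ruby with an assumed file layout:

#!/usr/bin/env ruby
# collect only the specs that reference the Rails-free helper and hand them to RSpec
specs = Dir['spec/**/*_spec.rb'].select do |file|
  File.read(file).include?('spec_helper_without_rails')
end

exec('rspec', *specs) unless specs.empty?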

Now, the PORO specs are fast again (~3000 per second on my machine) and life is beautiful again too 🙂
