57 posts about #rails

Fix for Form Post-Submit Redirection Issue with Turbo Drive

With Rails 7 came the addition of Hotwire, which includes Turbo. Turbo Drive, which is enabled by default, intercepts link clicks and form submissions. In the case of the latter, some issues may occur if the back end tries to do a redirect. For instance, let’s suppose we have the following controller code:

class SubscriptionsController < ApplicationController
  def create
    # code...
    redirect_to root_path
  end
end

Then we have the following form:

<%= form_with url: subscriptions_path, method: :post do |f| %>
  <%# form fields... %>
  <%= f.button 'Subscribe', id: 'submit-button' %>
<% end %>

After submitting the form, we get a “302 Found” response because of the redirect the back end is trying to do. Notice we are submitting the form using the POST method. In this case we may run into problems like Turbo losing the CSRF token, caching issues, etc., which can result in the app not performing the redirect or rendering some strange partial output. It’s worth noting that this issue doesn’t happen if we submit the form using the GET method.

One easy solution to this problem is to disable Turbo with the data-turbo="false" flag and let the form be submitted the “old way”:

<%= form_with url: subscriptions_path, data: { turbo: false }, method: :post do |f| %>
  <%# form fields... %>
  <%= f.button 'Subscribe', id: 'submit-button' %>
<% end %>
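
Alternatively, if you want to keep Turbo enabled, responding with a 303 status is usually enough for Turbo Drive to follow the redirect after a POST. A small sketch of that variant:

class SubscriptionsController < ApplicationController
  def create
    # code...
    # 303 See Other tells Turbo Drive to follow the redirect with a GET request
    redirect_to root_path, status: :see_other
  end
end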

InstantClick behavior in Turbo

Today I learned about the new default InstantClick behavior in Turbo. Turbo now prefetches a link when you hover over it and swaps in the page's contents after you click it, resulting in instantaneous page changes most of the time. There's a window of ~300ms between the hover and the click event, so if you want to optimize for this, make sure your backend can respond inside that window.

This is enabled by default but you can opt out of it by adding data-turbo-prefetch="false" to specific links or to whole containers. Add it to your main container to disable it completely.
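
For example, to opt a single link or a whole container out of prefetching (the link text and path below are made up):

<%# pricing_path is just an example route helper %>
<%= link_to 'Pricing', pricing_path, data: { turbo_prefetch: false } %>

<div data-turbo-prefetch="false">
  <%# none of the links inside this container will be prefetched %>
</div>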

More info and demos here: https://github.com/hotwired/turbo/pull/1101

:use_route in controller specs inside engines

If you have controller specs inside your Rails engine, you'll need to pass use_route: :my_engine when doing the request. Otherwise you'll get an UrlGenerationError since the dummy app is not running and the engine is not mounted!

describe MyEngine::MainController, type: :controller do
  it "does something" do
    get :index, params: { use_route: :my_engine }
    # ...
  end
end

Reduce your slug size on Heroku!

You don't need app/javascript and node_modules after compiling assets. Enhance the assets:precompile task and delete them so they don't get added to your Heroku slug! Small slugs allow for fast boot up times and better scaling. This also helps stay below the 500MB size limit!

Rake::Task["assets:precompile"].enhance do
  next unless Rails.env.production?

  ["#{Dir.pwd}/app/javascript", "#{Dir.pwd}/node_modules"].each do |dir_path|
    FileUtils.rm_rf(dir_path)
  end
end

Setting up multiple Okta orgs with the same Omniauth Oauth2 Strategy in Rails

Hello There,

I want to share what I did in order to support a second Okta organization in a Ruby on Rails application using the omniauth_oktaoauth gem.

I started with a basic spec to make sure I don't break anything:

RSpec.describe 'Sign in with Okta', type: :request do
  describe "Using Okta" do
    let(:user) { User.find_by email: user_email }
    let(:okta_oauth) do
      {
        provider: okta_provider.to_s,
        uid: "123456789",
        info: {
          name: "John Doe",
          email: user_email,
        }
      }
    end

    before(:each) do
      OmniAuth.config.test_mode = true
      OmniAuth.config.mock_auth[okta_provider] = OmniAuth::AuthHash.new(okta_oauth)
    end

    context "When using existing configuration" do
      let(:okta_provider) { :oktaoauth }
      let(:user_email) { "john.okta@existing.domain" }

      it "keeps working" do
        post "/users/auth/oktaoauth"
        follow_redirect!
        expect(user.email).to eq(user_email)
      end
    end
  end
end

With this in place, I proceeded to write another spec to ensure the new option would work:

context "When using second okta org" do
  let(:okta_provider) { :second_okta }
  let(:user_email) { "john.okta@new.domain" }

  it "signs the right user in" do
    post "/users/auth/second_okta"
    follow_redirect!

    expect(user.email).to eq(user_email)
  end
end

After this, I followed the usual steps:

# user.rb
devise :omniauthable, omniauth_providers: [:oktaoauth, :second_okta]

Then, modify the initializer to tell OmniAuth about the new strategy:

# config/initializers/okta.rb
....
config.omniauth(:second_okta,
                Rails.configuration.second_okta.client_id,
                Rails.configuration.second_okta.client_secret,
                name: "second_okta",
                request_path: "/users/auth/second_okta",
                callback_path: "/users/auth/second_okta/callback",
                scope: "openid profile email",
                fields: %w[profile email],
                client_options: {
                  site: Rails.configuration.second_okta.url,
                  authorize_url: "#{Rails.configuration.second_okta.auth_issuer}/v1/authorize",
                  token_url: "#{Rails.configuration.second_okta.auth_issuer}/v1/token"
                },
                redirect_uri: "#{Rails.configuration.app.base_domain}/users/auth/second_okta/callback",
                issuer: Rails.configuration.second_okta.auth_issuer,
                strategy_class: OmniAuth::Strategies::Oktaoauth)

At the beginning, it looked easy to add a second option using the same strategy, but after dealing with the omniauth internals and the omniauth_oktaoauth source code itself, I found that I needed to specify name and strategy_class to override the defaults. Something inside the source code did not work out of the box, so I also had to specify request_path and callback_path explicitly (I'll dig deeper later and send a patch if it's a bug).

After making those changes, it worked just fine.

Thanks!

Automatically set your Ngrok tunnel as default Rails host for development environment

Ever needed to test your Rails app's emails or background jobs in a local environment but also wanted to expose it through a public URL? Here's a handy trick to dynamically set your default URL options based on Ngrok's current public URL.

Note that you should fire up ngrok before starting your local server. Also this will just grab the first tunnel you have active.

Code: config/initializers/default_url_options.rb

if Rails.env.local?
  ngrok_results = `curl -s -X GET -H "Authorization: Bearer <NGROK_API_KEY>" -H "Ngrok-Version: 2" https://api.ngrok.com/tunnels`
  ngrok_results = JSON.parse(ngrok_results)
  public_url = ngrok_results.dig("tunnels", 0, "public_url")
  host = public_url.gsub("https://", "")
  Rails.application.routes.default_url_options = { host: host }
else
  Rails.application.routes.default_url_options[:host] = "<YOUR PRODUCTION HOST>"
end

How it Works:

  • Checks if the environment is local.
  • Fetches the current Ngrok public URL using the Ngrok API.
  • Parses the JSON response to get the public URL.
  • Updates the default_url_options for the Rails application with this public URL.

Note: Replace <NGROK_API_KEY> and <YOUR PRODUCTION HOST> with your actual keys and host.


Feel free to tweak it!

Use OpenTelemetry gems to track your app's performance

Instead of going with expensive services like New Relic or Datadog, trace your Rails app's performance using the OpenTelemetry gems.

First, add the gems to your Gemfile:

gem 'opentelemetry-sdk'
gem 'opentelemetry-exporter-otlp'
gem 'opentelemetry-instrumentation-all'

Then, add this inside config/initializers/opentelemetry.rb

require 'opentelemetry/sdk'
require 'opentelemetry/exporter/otlp'
require 'opentelemetry/instrumentation/all'

OpenTelemetry::SDK.configure do |c|
  c.service_name = '<YOUR_SERVICE_NAME>'
  c.use_all() # enables all instrumentation!
end
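
If instrumenting everything feels heavy, the configurator also accepts individual instrumentations; a sketch, assuming the usual gem-provided instrumentation names:

OpenTelemetry::SDK.configure do |c|
  c.service_name = '<YOUR_SERVICE_NAME>'
  # each c.use call needs the matching opentelemetry-instrumentation-* gem installed
  c.use 'OpenTelemetry::Instrumentation::Rack'
  c.use 'OpenTelemetry::Instrumentation::Rails'
  c.use 'OpenTelemetry::Instrumentation::ActiveRecord'
end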

Finally, launch your application and point it to a collector, like the OpenTelemetry Collector, Grafana Agent or SigNoz. Most of them have cloud or self-hosted versions.

Enjoy your observability!

Make Rails properly decode hashes and arrays in JSONB fields the way god intended

In what I'm going to call the greatest piece of pedantic fuckery of all time in Rails history, sgrif made JSON fields take primitives (including strings!!!!), instead of properly converting strings into Arrays and Hashes the way that God intended. In the years since, this one single peabrained decision, inexplicably rubber stamped by the rest of rails core, has surely cost millennia's worth of headscratching, uncontrollable sobbing, teeth gnashing, and rending of garments amongst poor Rails engineers like myself who wonder why, on an utterly non-deterministic basis, their hashes turn into strings when going through the Postgres washing machine.

Unsure if you're having this problem yourself? Are you getting random no implicit conversion of Symbol into Integer (TypeError) errors in your code? That's what I'm talking about.

To fix this abomination and cast out the sgrif demon forever (or at least until they refactor ActiveRecord::Type modules again), simply toss the following file into your initializers and breathe easier.

# config/initializers/fix_active_record_jsonb.rb

ActiveRecord::Type::Json.class_eval do
  # this is a json field, thus always decode it
  def deserialize(value)
    ActiveSupport::JSON.decode(value) rescue nil
  end

  def serialize(value)
    if value.is_a?(::Array) || value.is_a?(::Hash)
      ::ActiveSupport::JSON.encode(value)
    elsif value.is_a?(::String) && value.start_with?("{", "[") && value.end_with?("}", "]")
      value
    elsif value.respond_to?(:to_json)
      value.to_json
    else
      value
    end
  end
end

Footnote: Apparently, I need to waste precious time of my life revisiting this topic every 5 years or so.

Fix inspect on Devise models

Have you wondered why User and other Devise models don't print properly in your console? Instead of nice pretty printed output, even if you're using a pretty printer, you still get a long, ugly, unreadable string.

Today I finally got fed up enough to do something about it, and here is the solution:

Chuck this into the bottom of your config/initializers/devise.rb file and you're good to go. It removes the overriding of the inspect method that is the culprit.

Devise::Models::Authenticatable.remove_method(:inspect)

But Obie, what about Chesterton's Fence!?!?!

My answer is that if you're paranoid about the possibility of inspect being called by a logger while a plain-text password happens to be in scope, then by all means override the method instead of just removing it, but doing so is left as an exercise to the reader. (Hint: start overriding it and Github CoPilot will do the rest.)

Crypt and decrypt messages with ActiveSupport::MessageEncryptor in Rails

In the Rails console, run the following lines:

key = SecureRandom.base64(24) # random 32-character key, e.g. "PALkim1eXHeyGxFWhf+B4OvEYm6LXLtm"
crypt = ActiveSupport::MessageEncryptor.new(key)
encrypted_data = crypt.encrypt_and_sign('my favorite beer is La María') # "KgIkPJsn9n3JV4Y...=="
decrypted_data = crypt.decrypt_and_verify(encrypted_data) # decrypted message should match the original
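
Outside the console you'll likely want a stable key derived from your app secret rather than a random one each time; a sketch using ActiveSupport::KeyGenerator (the salt string is arbitrary):

# 'encryptor' is an arbitrary salt; key_len matches the cipher's expected key size
len   = ActiveSupport::MessageEncryptor.key_len
key   = ActiveSupport::KeyGenerator.new(Rails.application.secret_key_base).generate_key('encryptor', len)
crypt = ActiveSupport::MessageEncryptor.new(key)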

How do you guys set up naked domains in Heroku apps?

Let's say:

I have mydomain.mx and I want to allow anybody to type in the browser

www.mydomain.mx
# or
mydomain.mx
# or
http://mydomain.mx
# or
http://www.mydomain.mx

And to resolve/redirect to the secure version of it -> https://mydomain.mx (non www)

It's basically done by following this approach:

  1. Go to the settings section on heroku.com and enable the SSL option (only available on hobby and paid plans)
  2. Add your domain with both variations (www and non-www) in that same settings section

  3. Create the two CNAME records in your DNS provider (I'm using cloudflare.com for free)

  4. Create a redirect rule in your application (this depends on the technology and language you are using); in my case, as I'm using Rails, it was a matter of adding this to the top of the config/routes.rb file:

  match '(*any)',
    to: redirect(subdomain: ''),
    via: :all,
    constraints: { subdomain: 'www' }

There you go! Here is a live example (site under construction as of November 26th, 2021):

www.valoralo.mx
http://valoralo.mx
http://www.valoralo.mx
valoralo.mx

All of them will resolve to the same domain: https://valoralo.mx

Separate health check endpoint using puma

Puma offers a way to query its internal stats by enabling a control app on a separate port. This can be useful when we need to know if the app is alive, and it's different from normal health check endpoints because the request does not get processed by Rails at all.

To enable this functionality, all you need to do is to add this line in your puma.rb file:

activate_control_app 'tcp://0.0.0.0:9293', { no_token: true }

It will start a second web server on port 9293 that can be queried by monitoring tools or even a load balancer's health check.

ecruz@Edwins-MBP % curl 'http://127.0.0.1:9293/stats'
{"started_at":"2021-10-15T21:39:55Z","workers":2,"phase":0,"booted_workers":2,"old_workers":0,"worker_status":[{"started_at":"2021-10-15T21:39:55Z","pid":44969,"index":0,"phase":0,"booted":true,"last_checkin":"2021-10-15T21:40:05Z","last_status":{"backlog":0,"running":5,"pool_capacity":5,"max_threads":5,"requests_count":0}},{"started_at":"2021-10-15T21:39:55Z","pid":44970,"index":1,"phase":0,"booted":true,"last_checkin":"2021-10-15T21:40:05Z","last_status":{"backlog":0,"running":5,"pool_capacity":5,"max_threads":5,"requests_count":0}}]}%
ecruz@Edwins-MBP %

Check the documentation for more options/usages

Eager load rails associations with nested scopes

It is common to apply extra scopes when fetching AR relationships. For example, if we have countries and states, we might want all the countries starting with the letter A and all their states starting with the letter B. This typically creates an N+1 query problem, since we need to iterate over each country and fetch its states, but Rails provides a way to eager load these associations easily:

states_scope = State.where("name ilike 'b%'")
countries = Country.where("name ilike 'a%'")
# This is the magic
ActiveRecord::Associations::Preloader.new.preload(countries, :states, states_scope)

# Now you can iterate countries and access their states without N+1 queries
countries.map { |country| { country.id => country.states.size } }

Normally, you would have to define another relationship in order to eager load the association, but it is not needed using this approach:

class Country < AR::Base
  has_many :states
  has_many :states_starting_with_b, -> { where("name ilike 'b%'") }, foreign_key: :country_id, class_name: "State"
end

# Then
Country.includes(:states_starting_with_b).where("name ilike 'a%'")

But this approach does not scale; it requires defining tons of associations.
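
Note that on newer Rails versions (7+) the Preloader call above changed to keyword arguments; a rough equivalent sketch:

# Rails 7+ keyword-argument form of the same preload
ActiveRecord::Associations::Preloader.new(
  records: countries,
  associations: :states,
  scope: states_scope
).call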

Rollback a specific Rails migration

bundle exec rails db:migrate:down VERSION=202101010000001

Where 202101010000001 is the version of the migration you want to roll back.

Rails + Facebook Oauth Locally with SSL

Recently I needed to test OAuth with Facebook locally, and after creating and configuring the app, everything was working wonderfully ...

until it was not.

Facebook now forces SSL, so I had to set it up locally by creating a self-signed certificate and running my server with it.

  1. Create your certificate. This script creates it as localhost.mumoc.crt and localhost.mumoc.key (mumoc is my username on my working machine).
name=localhost.$(whoami)
openssl req \
  -new \
  -newkey rsa:2048 \
  -sha256 \
  -days 3650 \
  -nodes \
  -x509 \
  -keyout $name.key \
  -out $name.crt \
  -config <(cat <<-EOF
  [req]
  distinguished_name = req_distinguished_name
  x509_extensions = v3_req
  prompt = no
  [req_distinguished_name]
  CN = $name
  [v3_req]
  keyUsage = nonRepudiation, digitalSignature, keyEncipherment
  extendedKeyUsage = serverAuth
  subjectAltName = @alt_names
  [alt_names]
  DNS.1 = $name
  DNS.2 = *.$name
EOF
)

Make sure to at least add digitalSignature and keyEncipherment to keyUsage or you won't be able to use it in Chrome.

  2. Trust the certificate (I moved it to a config/ssl directory inside my app folder)
mv localhost.mumoc.* config/ssl
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain config/ssl/localhost.mumoc.crt
  3. Run the server, binding SSL to the key and certificate
rails s -b 'ssl://localhost:3000?key=config/ssl/localhost.mumoc.key&cert=config/ssl/localhost.mumoc.crt'

Control email deliveries in Rails with a custom Interceptor

You can easily tell Rails to control email delivery with a custom mailer interceptor; all you need to do is implement the class method delivering_email:

class DeliverOrNotEmailInterceptor
  def self.delivering_email(email)
    # skip delivery when every recipient is on the special domain
    email.perform_deliveries = email.to.none? { |address| address.end_with?('special-domain.com') }
  end
end

# config/initializer/email_interceptors.rb
ActionMailer::Base.register_interceptor(DeliverOrNotEmailInterceptor)

How to check if an ActiveRecord association was already eager loaded

If you need to know whether an association was already loaded via eager loading or whether you should fetch it, you can always call the loaded? method:

if user.contact_phones.loaded?
  user.contact_phones.detect { |phone| phone.primary }
else
  user.contact_phones.find_by(primary: true)
end

Using Gem::Dependency class manually to ensure version matching

I needed to add a mechanism to ensure that some actions of a controller were available only for specific versions, and I thought it was super similar to what a Gemfile does, so I looked into using Gem::Dependency to solve it. Turns out it was super easy:

class Controller
  def feeds_one
    check_for_version_support('>= 1.0', '< 3')

    render json: SomeData.all
  end

  def feeds_two
    check_for_version_support('= 1.0')

    render json: SomeData.all
  end

  def feeds_three
    check_for_version_support('>= 2.1')

    render json: SomeData.all
  end

  private

  def check_for_version_support(*specification)
    checker = Gem::Dependency.new(action_name, specification)

    return if checker.match?(action_name, params[:version])

    raise VersionNotSupported, "Version not supported"
  end
end

If you're having issues installing gems...

...try running rm -rf vendor/cache inside your app's root directory. It looks like the cache can sometimes cause compilation issues while building gem extensions, so getting rid of it fixes the problem. I can't guarantee this works 100% of the time, but it's worth a try if it can save you a headache.

Using Upsert with Rails

Some DBMSs natively support an operation that behaves like "update or insert", and Rails recently added the upsert method to take advantage of it.

It is useful for updating information that does not need to run validations, in a super performant way. Here's an example:

# We usually do this:
activity = current_user.activity
if activity
  activity.update(last_seen_at: Time.current)
else
  current_user.create_activity(last_seen_at: Time.current)
end

As you can see, it does not need to run any validations; it just needs to update last_seen_at if the record exists or create a new one. It performs two queries: one to instantiate the activity object and a second one that performs the real update/insert statement.

That can be replaced with the following code, which performs just a single query and takes care of either creating the record or updating the existing one:

Activity.upsert({ last_seen_at: Time.current, user_id: current_user.id}, unique_by: :user_id)

To make this really work (assuming you use PostgreSQL), you have to add a unique index on user_id and modify the default values of created_at and updated_at in a migration like this:

query = <<-SQL
  ALTER TABLE #{Activity.table_name}
  ALTER COLUMN created_at SET DEFAULT CURRENT_TIMESTAMP,
  ALTER COLUMN updated_at SET DEFAULT CURRENT_TIMESTAMP
SQL
execute(query)
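
The unique index mentioned above would go in that same migration; something like this (assuming the table is called activities):

# assuming the table is called activities
add_index :activities, :user_id, unique: true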

Better usage of Rails logger

Logging useful data is a hard task, but there's one specific method that helps improve the experience of logging actually useful information: tagged. It adds extra tags to the log message, making it easier to debug:

Rails.logger.tagged('Super App') do
  Rails.logger.info('Log Message')
end

This will result in something like this:

[Super App] Log Message

As you can see, it prepends useful information. You could add personalized data to trace logs, for example:

class ApplicationController < ActionController::Base
  around_action :add_logger_tags

  def add_logger_tags
    Rails.logger.tagged(logging_tags) do
      yield
    end
  end

  def logging_tags
    [
      "Request Id: #{request.id}",
      "Session Id: #{session.id}",
      current_user && "User id: #{current_user.id}"
    ].compact
  end
end

And you will have super nice logs to read
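
If request metadata is all you need, Rails can also tag every request's log lines for you via config.log_tags (a minimal sketch):

# config/application.rb
# both entries are methods called on the incoming request object
config.log_tags = [:request_id, :remote_ip]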

Printing queries executed by ActiveRecord

Starting with Rails 6, there's a handy way to make ActiveRecord query logs more verbose; they will include the line of code that triggered each query:

ActiveRecord::Base.verbose_query_logs = true

Full documentation here

No more adding a custom logger to ActiveRecord::Base.logger

Is it possible to render a "sidecar" partial for a ViewComponent?

Let's say I have

class MyComponent < ViewComponent::Base
end

and 2 view files in

app/components/my_component/my_component.html.erb
app/components/my_component/_some_partial.html.erb

In my_component.html.erb, I want to be able to do:

render "some_partial"

But without a special configuration, that looks under the views directory for said partial. I don't want to extract that partial to its own component, nor do I want it floating by itself in the view directory.

The first step is to tell Rails it can look for templates in the view components directory

class ApplicationController < ActionController::Base
  append_view_path "#{Rails.root}/app/components"

Keep in mind that view contexts are based on the currently executing controller, so <%= render :some_partial %> in a PostsController (even within a ViewComponent class) will look for a partial in a subdirectory /posts or /application.

To make sure Rails finds your partial, use an absolute path when you render it:

<%= render "/my_component/some_partial" %>

Hat tip to Roli in the StimulusReflex discord

Updating ActiveRecord models without loading an instance using Postgresql

Sometimes we need to update an ActiveRecord model but it is not necessary to load an instance. The normal flow would be the following:

profile = UserProfile.find_by(user_id: id)
profile.update(last_seen_at: Time.now)

The problem with this is that we load a useless instance of UserProfile. It is not needed, and on a high-traffic site an extra select query can count a lot. Luckily, Rails has addressed this with the upsert command:

UserProfile.upsert({ last_seen_at: Time.now, user_id: id }, unique_by: :user_id)

This will use PostgreSQL's native upsert to update the record if it exists or insert a new one. No select is performed; everything happens in a single query instead of two.

To make it really work, you need to modify your created_at and updated_at columns to default to CURRENT_TIMESTAMP.

Using Rails to migrate columns from JSON to JSONB in Postgresql

Postgres offers the json data type to store any structure easily, but one disadvantage is that filtering by properties stored in a json column is super slow. One simple fix, before refactoring the whole implementation, is to migrate the column to jsonb: since it is stored in binary form, it supports indexes. An easy and safe way to do it is as follows:

class ModifyJSONDataDataType < ActiveRecord::Migration[6.0]
  def up
    add_column :table_name, :data_jsonb, :jsonb, default: '{}'

    # Copy data from old column to the new one
    TableName.update_all('data_jsonb = data::jsonb')

    # Rename columns instead of modify their type, it's way faster
    rename_column :table_name, :data, :data_json
    rename_column :table_name, :data_jsonb, :data
  end

  def down
    safety_assured do
      rename_column :table_name, :data, :data_jsonb
      rename_column :table_name, :data_json, :data
    end
  end
end

Then, using another migration (so we can disable the DDL transaction), add an index to it:

class AddIndexToDataInTableName < ActiveRecord::Migration[6.0]
  disable_ddl_transaction!

  def change
    add_index :table_name, :data, name: "data_index", using: :gin, algorithm: :concurrently

    # You can even add indexes to virtual properties:
    # add_index :table_name, "((data->'country')::text)", :name => "data_country_index", using: 'gin', algorithm: :concurrently
  end
end
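
Once the GIN index is in place, containment queries on the column can take advantage of it. A quick sketch (the table and keys are made up):

# finds rows whose data column contains the given key/value pair; can use the GIN index
TableName.where("data @> ?::jsonb", { country: "MX" }.to_json)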

Invoking ActiveRecord::Migration functions in command line

If, for some weird reason, you need to run migration commands in your Rails console, just do the following:

ActiveRecord::Migration.add_index :table, :col, name: 'col_index'

Remove Rails model's fields without crashing on production

If you are experiencing problems after removing an attribute from a Rails model, it is probably because ActiveRecord has cached that field. This is normal and intended to improve performance in production environments, but when removing a field it may cause errors because Rails is trying to read/write a column that doesn't exist anymore. For Rails 5 and newer we have the ignored_columns setting to fix this issue; simply add it to your model like this:

class MyModel < ApplicationRecord
  self.ignored_columns = %w[field_to_ignore]
end

And you are good to go! If you want you and your team to be warned about this kind of change, consider using the strong_migrations gem: https://github.com/ankane/strong_migrations

Native Pub/Sub in Rails with ActiveSupport::Notifications

If you want to use the pub/sub design pattern inside your Rails app, there's no need to add extra dependencies; you can use ActiveSupport::Notifications to do the job.

Example:

class Order
  # methods

  def complete
    update(completed_at: Time.zone.now, etc: 'some')
    ActiveSupport::Notifications.instrument("order_completed", { number: number })
  end
end

module Subscribers
  class SendConfirmationEmail
    def self.subscribe!
      ActiveSupport::Notifications.subscribe("order_completed") do |_name, _start, _finish, _id, params|
        order = Order.find_by number: params[:number]
        OrderMailer.confirm_order(order).deliver_later
      end
    end
  end
end

module Subscribers
  class UpdateCustomerCRM
    def self.subscribe!
      ActiveSupport::Notifications.subscribe("order_completed") do |_name, _start, _finish, _id, params|
        order = Order.find_by number: params[:number]
        CrmIntegration.update_customer(order.customer.email, order.total_amount)
      end
    end
  end
end

# etc
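
The subscriber classes still have to be registered at boot; a minimal sketch of an initializer (the file name is just a suggestion) that wires them up:

# config/initializers/subscribers.rb
# register every subscriber once at boot
Subscribers::SendConfirmationEmail.subscribe!
Subscribers::UpdateCustomerCRM.subscribe!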

Rails find_by using relationships

If you want to find records via a relationship you can do it easily:

product = Product.joins(:variants).find_by(variants: { sku: 'SKU' })

Instead of:

product = Variant.find_by(sku: 'SKU')&.product

Use discard_on to discard the job with no attempts to retry

Discard the job with no attempts to retry, if the exception is raised. This is useful when the subject of the job, like an Active Record, is no longer available, and the job is thus no longer relevant.

You can also pass a block that'll be invoked. This block is yielded with the job instance as the first and the error instance as the second parameter.

Example 1:

class SearchIndexingJob < ActiveJob::Base
  discard_on ActiveJob::DeserializationError
  discard_on(CustomAppException) do |job, error|
    ExceptionNotifier.caught(error)
  end

  def perform(record)
    # Will raise ActiveJob::DeserializationError if the record can't be deserialized
    # Might raise CustomAppException for something domain specific
  end
end

Example 2:

class UserNotFoundJob < ActiveJob::Base
  discard_on ActiveRecord::RecordNotFound



  def perform(user_id)
    @user = User.find(user_id)
    @user.do_some_thing
  end
end

Source

When to eager load relationships in Rails and when it is not that good

Rails provides a way to eager load relationships when fetching objects. The main idea is to avoid N+1 queries, but when is it not a good idea to do it?

Good

When rendering unique results that can not be cached, for example: table reports

Why? Most of the time you need to display related information

orders = Order.includes(:user, :line_items).completed

Try to avoid

When you use fragment cache

Why? Eager loading is executed before rendering, regardless of whether the final result is already cached or not. With eager loading, those queries will always run; but if you allow the N+1 queries, they will run only once to fill the cache, and that's it.

products = Product.includes(:categories, variants: [:price]).search(keywords)

Use product.id and updated_at as the fragment cache key and fetch the data from the database only when needed; no extra info such as variants, categories, or prices is required up front.

Beware of calling #count on Active Record relations!

Given code like this:

records = Record.includes(:related).all # Eager-loads to prevent N+1 queries...
records.each do |record|
  puts record.related.count # => ... but this produces N+1 queries anyway!
end

If you run this, you'll notice you get an N+1 queries problem, even though we're using #includes. This happens because of record.related.count. Remember, record.related here is not an Array but an instance of CollectionProxy, and its #count method always reaches out to the database. Use #length or #size instead to solve this issue.

records = Record.includes(:related).all
records.each do |record|
  puts record.related.length # Problem solved!
end

Migration operation that should run only in one direction

Disclaimer: I know it's not recommended to do data mutation in schema migrations. But if you want to do it anyway, here's how you do a one-way operation, using the reversible method.

class AddAds < ActiveRecord::Migration[5.0]
  def change
    create_table :ads do |t|
      t.string :image_url, null: false
      t.string :link_url, null: false
      t.integer :clicks, null: false, default: 0
      t.timestamps
    end

    reversible do |change|
      change.up do
        Ad.create(image_url: "https://www.dropbox.com/s/9kevwegmvj53whd/973983_AdforCodeReview_v3_0211021_C02_021121.png?dl=1", link_url: "http://pages.magmalabs.io/on-demand-github-code-reviews-for-your-pull-requests")
      end
    end
  end
end

Lotties with Rails 6 and Webpacker

1) Install Lottie Player:

npm install --save @lottiefiles/lottie-player

2) Require it at app/javascript/packs/application.js

require('@lottiefiles/lottie-player');

3) Set webpacker to load jsons at config/webpacker.yml

static_asset_extensions:
  - .json

4) Put your lotties jsons wherever you want e.g. app/javascript/images/lotties

5) Render the lottie-player tag in your HTML

%lottie-player{ autoplay: true,
                loop: true,
                src: asset_pack_path('media/images/lotties/mylottie.json') }

6) Profit

Period of Time with Ruby on Rails and Integers | ActiveSupport::Duration

From the Rails console, try the following:

irb(main):001:0> period_of_time = 10.minutes
=> 10 minutes
irb(main):002:0> period_of_time.class
=> ActiveSupport::Duration
irb(main):003:0> period_of_time = 10.hours
=> 10 hours
irb(main):004:0> period_of_time.class
=> ActiveSupport::Duration
irb(main):005:0> period_of_time.to_i
=> 36000

Useful when you work with the Devise authentication gem, e.g. to expire the session after a certain period of time.

class User < ApplicationRecord
  devise :database_authenticatable, :timeoutable,
    timeout_in: (ENV['EXPIRATION_TIME_IN_MINUTES'] || 10).to_i.minutes
end

And so on...

User.where(created_at: 20.days.ago..10.minutes.ago)

How to add timeouts to slow queries

Sometimes some of your queries are taking too long to execute; you can specify optimizer hints and define timeouts for those queries.

Employee.optimizer_hints("MAX_EXECUTION_TIME(5000)").all

It will raise a StatementTimeout exception if the query takes longer than the specified time to execute.

Example (for PostgreSQL with pg_hint_plan):

Employee.optimizer_hints("SeqScan(employees)", "Parallel(employees 8)")

Example (for MySQL):

Employee.optimizer_hints("MAX_EXECUTION_TIME(50000)", "NO_INDEX_MERGE(employees)")

There are many causes for sudden slow queries in many databases, such as missing indexes, bad caching, and general performance issues.

But this is a topic for another day!

ActiveModel: Rails 6.1.0 - *_previously_changed? accepts :from and :to keyword arguments

*_previously_changed? accepts :from and :to keyword arguments like *_changed? since Rails 6.1.0

task.update!(status: :archived)
task.status_previously_changed?(from: "active", to: "archived")
# => true

Rails 5.1 added :default option to belongs_to

From the changelog

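In short, belongs_to now accepts a :default option whose lambda is evaluated when the association isn't supplied; a rough sketch from memory (model names made up):

class Post < ApplicationRecord
  belongs_to :author
  # hypothetical models; the lambda runs in the record's own context
  belongs_to :account, default: -> { author.account }
end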

Now if only someone would add it to has_one relationships. Tim says it's harder because then you're setting attributes on another object.

Getting Heroku Review Apps to Work

A few years ago I heard about a project called Fourchette, which facilitated setting up one Heroku app per pull request on a project (aka review apps). I remember being all like THAT'S FREAKING BRILLIANT! Then I went back to whatever I was doing and never did anything about it.

Well, this week I finally had the time and inclination to get review apps working on Heroku. The instructions are out there, but they gave me enough trouble that I figured I'd document the gotchas for posterity.

#1. Understand the app.json file, really

We already had a tiny app.json file that we had created in connection with getting Heroku CI to run our test suite. All it had was an environments section that looked like this:

"environments": {
  "test": {
     "env": {
      "DEBUG_MAIL": "true",
      "OK_TO_SEED": "true"
    },
    "addons":[
      "heroku-postgresql:hobby-basic",
      "heroku-redis:hobby-dev"
    ]

When I started trying to get review apps to work, I simply created a pull request, and followed the dashboard instructions for creating review apps, assuming that since we already had an app.json file that it would just work. Nope, not at all.

After much thrashing, what finally got me over the hump was understanding the purpose of app.json from first principles, which didn't happen until I read this description of the Heroku Platform API. App.json originally came about as a way to automate the creation of an entire Heroku project, not just a CI or Review configuration. It predates CI and Review Apps and has been in essence repurposed.

#2. Add all your ENV variables

The concept of ENV variables being inherited from the designated parent app really threw me for a loop at first. I figured that the only ENV variables that needed to be declared in the env section of app.json would be the ones I was overriding with a fixed value. Wrong again.

After much trial-and-error, I ended up with a list of all the same ENV variables as my staging environment. Some with fixed values, but most just marked as required.

"env": {
    "AWS_ACCESS_KEY_ID": {
      "required": true
    },
    "AWS_SECRET_ACCESS_KEY": {
      "required": true
    },

This won't make sense if you're thinking that app.json is specifically for setting up Review Apps (see #1 above.)

#3. Understand the lifecycle, especially with regards to add-ons

After everything was mostly working (meaning that I was able to get past the build stage and actually access my web app via the browser) I still kept getting errors related to the Redis server being missing. To make a long story short, not only did I have to add it to the addons section, but I also had to delete the review app altogether and create it again, so that addons would be created. (Addons are not affected by redeployment.)

"addons":[
  "heroku-postgresql:hobby-basic",
  "heroku-redis:hobby-dev",
  "memcachier:dev"
],

In retrospect, I realize that the reason that was totally unclear is that my review app's Postgres add-on was automatically created, even before I added an addons section to app.json. (Initially I thought it was coming from the test environment.)

I still don't know if Postgres is added by default to all review apps, or inherited from the parent app.

#4. Post deploy to the rescue

There's at least one thing you want to do once, every time a new review app is created, and that is to load your database schema. You probably want to seed data also.

"scripts": {
  "postdeploy": "OK_TO_SEED=true bundle exec rails db:schema:load db:seed"
}

As an aside, I have learned to put an OK_TO_SEED conditional check around destructive seed operations to help prevent running in production. This is especially important if you run your staging instances in production mode, like you should.

How to setup Heroku Rails app to handle yarn.lock

One of the nicest features of Rails 5 is its integration with Yarn, the latest and greatest package manager for Node.js. Using it means you can install JavaScript dependencies for your app just as easily as you use Bundler to install Ruby gems.

Now one of the biggest problems you face when using any sort of Node package management is the combinatorial explosion of libraries downloaded in order to do anything of significance.

Given that reality, you really do not want to add node_modules to your project's git repository, any more than you would want to add all the source code of your gems. Instead, you add node_modules to your .gitignore file.

Yarn adds a file to the root of your Rails app called yarn.lock. Today I learned that if you include the Node.js buildpack to your project on Heroku, it will recognize yarn.lock and install any required node modules for you. You just have to make sure that it runs first in the build chain.

heroku buildpacks:add --index 1 heroku/nodejs

Side note: If you use Heroku CI then you'll need to set up your test environment with the extra buildpack too, by adding a new section to app.json.

"buildpacks": [
{ "url": "heroku/nodejs" },
{ "url": "heroku/ruby" }
]

Note that the nodejs buildpack expects a test script to be present in package.json. If you don't have one already, just add a dummy directive there. Almost anything will work; I just put an echo statement.

"scripts": {
    "test": "echo 'no tests in js'"
  },

Cloudflare Flexible SSL mode breaks Rails 5 CSRF

Putting this out there since I didn't find anything on StackOverflow or other places concerning this problem, which I'm sure I'm not the first to run into. CloudFlare is great, especially as a way to set-and-forget SSL on your site, along with all the other benefits you get. It acts as a proxy to your app, and if you set its SSL mode to Flexible then you don't have to have an SSL certificate set up on your server. This used to be a big deal when SSL certificates were expensive. (You could argue that since Let's Encrypt and free SSL certificates it's not worth using Flexible mode anymore.)

Anyway, I digress. The point of this TIL is that if you proxy https requests to an http endpoint in Rails 5, you'll get the dreaded InvalidAuthenticityToken exception whenever you try to submit any forms. It has nothing to do with the forgery_protection_origin_check before action in ApplicationController.

The dead giveaway that you're having this problem is in your logs. Look for the following two lines near each other.

WARN -- : [c2992f72-f8cc-49a2-bc16-b0d429cdef20] HTTP Origin header (https://www.example.com) didn't match request.base_url (http://www.example.com)  
...
FATAL -- : [c2992f72-f8cc-49a2-bc16-b0d429cdef20] ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken): 
Aug 13 18:08:48 pb2-production app/web.1: F, [2017-08-14T01:08:48.226341 #4] FATAL -- : [c2992f72-f8cc-49a2-bc16-b0d429cdef20]    

The solution is simple. Make sure you have working SSL and HTTPS on Heroku (or wherever you're serving your Rails application.) Turn Cloudflare SSL to Full mode. Problem solved.

Run Puma in Single mode for development

Turns out how to switch between single and clustered modes of Puma is super unclear in the (little to non-existent) documentation. You'd think that setting WEB_CONCURRENCY to 1 would do it, but you actually have to set it to zero. Meaning you don't want to spin up any child processes.
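
Concretely, that looks something like this in config/puma.rb (a sketch; the ENV names mirror Heroku's conventions):

# config/puma.rb
# zero workers = single mode (no child processes are forked)
workers Integer(ENV.fetch("WEB_CONCURRENCY", 0))

threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads threads_count, threads_count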

return while rendering partial collections

Ruby's next keyword only works in the context of a loop or enumerator method.

So if you're rendering a collection of objects using Rails render partial: collection, how do you skip to the next item?

Since partials are compiled into methods in a dynamically generated view class, you can simulate next by using an explicit return statement. It will short-circuit the rendering of your partial template and iteration will continue with the next element of the collection.

For example

-# app/views/users/_user.haml
- return if user.disabled?
%li[user]
  rest of your template...

Using default_url_options in RSpec with Rails 5

There's a long-standing bug in the integration of controller testing into RSpec that prevents you from easily setting default_url_options for your controller specs. As far as I can tell, it doesn't get fixed because the RSpec team considers the problem a bug in Rails, and the Rails team does not care if RSpec breaks.

I'm talking about the issue you run into when you're trying to work with locale settings passed into your application as namespace variables in routes.rb like this:

scope "/:locale" do
    devise_for :users,  #...and so on

Today I learned that the inability to set a default :locale parameter can be maddening. Your specs will fail with ActionView::Template::Error: No route matches errors:

1) Devise::RegistrationsController POST /users should allow registration
     Failure/Error: %p= link_to 'Confirm my account', confirmation_url(@resource, confirmation_token: @token)

     ActionView::Template::Error:
       No route matches {"action":"show","confirmation_token":"pcyw_izS8GchnT-R3EGz","controller":"devise/confirmations"} missing required keys: [:locale]

The reason is that ActionController::TestCase ignores normal settings of default_url_options in ApplicationController or your config/environments/test.rb. No other intuitive attempt at a workaround worked either. Frustratingly, it took me around an hour to debug and come up with a monkeypatch-style workaround. The existing workarounds that I could find online are all broken in Rails 5.

So here it is:

# spec/support/fix_locales.rb
ActionController::TestCase::Behavior.module_eval do
  alias_method :process_old, :process

  def process(action, *args)
    if params = args.first[:params]
      params["locale"] = I18n.default_locale
    end
    process_old(action, *args)
  end
end

Note the assumption that you are passing params in your spec using a symbol key and not the string "params".

CSS Autoprefixer OMG!!!

Posting on Rails channel, since there is a gem for using this amazing tool with your Rails apps. Using Autoprefixer, you no longer have to worry about writing or maintaining vendor-specific CSS properties. (The ones with the dash prefixes.) You just use the latest W3C standards, and the rest is taken care of for you with post-processing.

FactoryGirl, WebMock, VCR, Fog and CarrierWave

In the interest of fast suite runs (amongst other reasons) you want to make sure that your specs are not dependent on remote servers as they do their thing. One of the more popular ways of achieving this noble aim is by using a gem called WebMock, a library for stubbing and setting expectations on HTTP requests in Ruby.

The first time you use WebMock, code that calls external servers will break.

WebMock::NetConnectNotAllowedError:
       Real HTTP connections are disabled. Unregistered request: GET https://nueprops.s3.amazonaws.com/test...

       You can stub this request with the following snippet:

       stub_request(:get, "https://nueprops.s3.amazonaws.com...

Now maintaining that stub code is often painful, so you probably want to use a gem called VCR to automate the process. VCR works really well. After instrumenting your spec correctly, you run it once to generate a cassette, which is basically a YAML file that captures the HTTP interaction(s) of your spec with the external servers. Subsequent test runs use the cassette file instead of issuing real network calls.

Creation and maintenance of cassettes that mock interaction with JSON-based web services is easy. Services that talk binary? Not so much. And almost every modern Rails project I've ever worked on uses CarrierWave (or Paperclip) to handle uploads to AWS. If you try to use VCR on those requests, you're in for a world of annoyance.

Enter Fog, the cloud-abstraction library that undergirds those uploader's interactions with AWS S3. It has a somewhat poorly documented, yet useful mock mode. Using this mode, I was able to make WebMock stop complaining about CarrierWave trying to upload fixture files to S3.

However, the GET requests generated in my specs were still failing. Given that I'm using the venerable FactoryGirl gem to generate my test data, I was able to eventually move the stub_request calls out of my spec and into a better abstraction level.

factory :standard_star do
  sequence(:name) { |n| "Cat Wrangler #{n}" }
  description "Excellence in project management of ADD people"
  icon { Rack::Test::UploadedFile.new('spec/support/stars/cat-wrangler.jpg') }
  image { Rack::Test::UploadedFile.new('spec/support/stars/cat-wrangler.jpg') }
  after(:create) do |s, e|
    WebMock.stub_request(:get, "https://nueprops.s3.amazonaws.com/test/uploads/standard_star/image/#{s.name.parameterize}/cat-wrangler.jpg").
             to_return(:status => 200, :body => s.image.file.read)

    WebMock.stub_request(:get, "https://nueprops.s3.amazonaws.com/test/uploads/standard_star/icon/#{s.name.parameterize}/cat-wrangler.jpg").
             to_return(:status => 200, :body => s.icon.file.read)

  end
end

When counter_cache on wrong side of association

Absentmindedly put a counter_cache declaration on the has_many instead of where it belongs (pun intended.)

Rails 5 will complain in the most cryptic way it possibly can, which is to raise the following exception

ActiveModel::MissingAttributeError: can't write unknown attribute `true`

If you get that error, now you know how to fix it. Good luck and godspeed.
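
For reference, the declaration goes on the belongs_to side, like so (models made up):

class Comment < ApplicationRecord
  # hypothetical models; requires a comments_count column on posts
  belongs_to :post, counter_cache: true
end

class Post < ApplicationRecord
  has_many :comments
end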

Change Rails default generators to Sass

Rails inexplicably defaults to SCSS when generating stylesheets. Maybe for the same reasons that DHH doesn't like Haml?

Anyway, to fix it just add the following directive to config/environments/development.rb:

config.sass.preferred_syntax = :sass

Rails 5 Attributes API + JSONb Postgres columns

As of when I'm writing this (Jan 2017), support for using ActiveRecord store with Postgres JSONb columns is a bit of a shit-show. I'm planning to help fix it as soon as I have some time to spare, but for the moment if you want a better way of supporting these valuable column types in your Rails 5 app, use the new Attributes API. Plus get much improved performance with the Oj gem.

Here's how to make it work. First, define a :jsonb type to replace the native one.

class JsonbType < ActiveModel::Type::Value
  include ActiveModel::Type::Helpers::Mutable

  def type
    :jsonb
  end

  def deserialize(value)
    if value.is_a?(::String)
      Oj.load(value) rescue nil
    else
      value
    end
  end

  def serialize(value)
    if value.nil?
      nil
    else
      Oj.dump(value)
    end
  end

  def accessor
    ActiveRecord::Store::StringKeyedHashAccessor
  end
end

Next, register it in an initializer.

ActiveRecord::Type.register(:jsonb, JsonbType, override: true)

Note that the JsonbType class will need to be somewhere in your loadpath.

Now just declare the attribute at the top of your ActiveRecord model like this:

class User < ApplicationRecord
  attribute :preferences, :jsonb, default: {}

ActiveRecord test objects made easy

If you're testing an ActiveRecord model mixin in your application, you might be tempted to unit test it in the context of one of your app's models. However, that would violate your test isolation and introduce complexities related to the behavior of the model.

A better solution is to make an Active Record class just for your test, and the fact that you can invoke schema definitions on the fly makes it super easy. Here's the top of one of my specs, illustrating the technique.

require 'rails_helper'

ActiveRecord::Schema.define do
  create_table :test_objects, force: true do |t|
    t.jsonb :jobs, null: false, default: {}
  end
end

class TestObject < ApplicationRecord
  include WorkerRegistry
end

RSpec.describe WorkerRegistry do
  let(:test_object) { TestObject.create }

  ...

Adding your own datetime formats to Rails

An aspect of Rails that I adore is how it has a place for nearly everything you need to do. One of those things is to format dates/times using the strftime method. Instead of tucking away custom strftime patterns in constants, you can configure them onto the native Rails formatter, accessed via time.to_s(:format_name)

DateTime formats are shared with Time and stored in the Time::DATE_FORMATS hash. Use your desired format name as the hash key and either a strftime string or Proc instance that takes a time or datetime argument as the value.

# config/initializers/time_formats.rb
Time::DATE_FORMATS[:month_and_year] = '%B %Y'
Time::DATE_FORMATS[:short_ordinal] = lambda { |time| time.strftime("%B #{time.day.ordinalize}") }

Here's one of the formats that I've been using lately, to get my times into a more familiar form.

Time::DATE_FORMATS[:short_time] =
   lambda { |time| time.strftime('%I:%M%p').gsub('AM','am').gsub('PM','pm').gsub(':00','') }
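
Then anywhere a Time needs formatting (the output shown is just illustrative):

Time.current.to_s(:month_and_year) # => e.g. "January 2017"
Time.current.to_s(:short_time)     # => e.g. "05:07pm"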

Easily add Toastr flash notices to Rails apps

I learned about Toastr JavaScript library last week and have been delighted to use it instead of more traditional flash messaging.

First of all get the Toastr sources. I opted to link to them on CDNJS:

= stylesheet_link_tag 'https://cdnjs.cloudflare.com/ajax/libs/toastr.js/2.1.3/toastr.min.css'
= javascript_include_tag 'https://cdnjs.cloudflare.com/ajax/libs/toastr.js/2.1.3/toastr.min.js'

Next I defined some extra flash types in my application_controller.rb file to match Toastr's native notification types and enable use of the built-in styling.

class ApplicationController < ActionController::Base
  add_flash_types :success, :info, :warning, :error
  ...

Finally, add the following block of JavaScript to the bottom of a layout template (or whatever shared partial contains your JS and CSS includes).

- flash.keys.each do |key|
  - toastr_key = key
  - toastr_key = 'info' if key == 'notice'
  - toastr_key = 'warning' if key == 'alert'
  :javascript
    $(function() {
      toastr["#{toastr_key}"]("#{flash[key]}");
    });

Lines 2 and 3 establish a mapping from conventional Rails notice and alert so that I don't have to hack libraries like Devise which rely on them.

Easy.

Don't roll your own slug code, use FriendlyId

This finding was a pleasant surprise. For years, I've been writing the same kind of boilerplate code to override to_param on my model classes and generate unique slugs. Turns out there's a really well-written library that does that, with some worthwhile additional functionality. Check out FriendlyId for easy slug generation and even the ability to preserve history of slugs after changes, so that it's possible to do 301 redirects with just a couple lines of code.
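
For reference, a minimal FriendlyId setup, with the :history module enabling those 301 redirects, looks roughly like this (the model and attribute are made up):

class Post < ApplicationRecord
  extend FriendlyId
  # hypothetical model; :history needs the friendly_id_slugs table from the gem's generator
  friendly_id :title, use: [:slugged, :history]
end

# Look up by slug (falls back to old slugs when :history is enabled)
Post.friendly.find(params[:id])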

Set defaults for JSONb postgres columns in Rails

Make sure to pass the migration a native Ruby hash as the default value. DO NOT pass it a string representation of a hash, thinking that it'll work (as valid JSON).

DO THIS

t.jsonb :preferences, default: {}, null: false

NOT

t.jsonb :preferences, default: '{}', null: false

It'll break in a maddeningly non-obvious way. Take my word for it. Also there is this relevant StackOverflow post which saved my ass.

Use Nodemon to auto-restart Rails server

The handy-dandy Nodemon tool is not just for Node. Today I whipped up an invocation that can restart my Rails server whenever there are changes in the config directory tree. Super useful when working heavily with i18n, since changing translation files requires bouncing the server to see changes reflected in the view.

$ nodemon --watch config -e rb,yml --exec "rails server"

Better enumerated types in Active Record

Rails 4 added support for enumerations in Active Record classes. That's cool, but what's cooler is how it has been reimagined by Foraker Labs in Denver, based on the seriously underrated gem Enumerated Type.

Please go read the blog post about it right now, it'll take 5-10 minutes and I promise you won't regret it.

How Turbolinks handles redirects

When you visit location /one and the server redirects you to location /two, you expect the browser’s address bar to display the redirected URL. However, Turbolinks makes requests using XMLHttpRequest, which transparently follows redirects. There’s no way for Turbolinks to tell whether a request resulted in a redirect without additional cooperation from the server.

To work around this problem, Rails sends a Turbolinks-Location header in response to a visit that was redirected using redirect_to, and Turbolinks will replace the browser’s topmost history entry with the value provided. If for some reason you are performing redirects manually (so-to-speak, without using the redirect_to helper method), then you'll have to take care of adding the header yourself.
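
If you do find yourself redirecting by hand, the workaround boils down to setting that header before responding; a rough sketch (destination_url is a placeholder):

# somewhere in a controller action that redirects without redirect_to
# destination_url is whatever URL you are sending the visitor to
response.headers["Turbolinks-Location"] = destination_url
head :found, location: destination_url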