Sequel is a lightweight database access toolkit for Ruby.
- Sequel provides thread safety, connection pooling and a concise
DSL for constructing SQL queries and table schemas.
- Sequel includes a comprehensive ORM layer for mapping records to
Ruby objects and handling associated records.
- Sequel supports advanced database features such as prepared
statements, bound variables, stored procedures, savepoints,
two-phase commit, transaction isolation, master/slave
configurations, and database sharding.
- Sequel currently has adapters for ADO, Amalgalite, DataObjects,
DB2, DBI, Firebird, IBM_DB, Informix, JDBC, MySQL, Mysql2, ODBC,
OpenBase, Oracle, PostgreSQL, SQLite3, Swift, and TinyTDS.
Sequel 3.34.0 has been released and should be available on the gem
mirrors.
= New PostgreSQL Extensions
- A pg_array extension has been added, supporting PostgreSQL's
numeric and string array types. Both single dimensional and
multi-dimensional array types are supported. Array values are
returned as instances of Sequel::Postgres::PGArray, which is a
delegate class of Array. You can turn an existing array into
a PGArray using Array#pg_array.

If you are using arrays in model objects, you need to load
support for that:

  DB.extend Sequel::Postgres::PGArray::DatabaseMethods

This makes schema parsing and typecasting of array columns work
correctly.

This extension also allows you to use PGArray objects and arrays
in bound variables when using the postgres adapter with pg.
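
As a rough sketch of typical usage (the posts table and its
tag_ids integer[] column are hypothetical):

  Sequel.extension :pg_array
  DB.extend Sequel::Postgres::PGArray::DatabaseMethods

  # Hypothetical posts table with a tag_ids integer[] column
  DB[:posts].insert(:title=>'a', :tag_ids=>[1, 2, 3].pg_array)
  DB[:posts].get(:tag_ids) # => [1, 2, 3] (a Sequel::Postgres::PGArray)
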
- A pg_hstore extension has been added, supporting PostgreSQL's hstore
type, which is a simple hash with string keys and string or NULL
values. hstore values are retrieved as instances of
Sequel::Postgres::HStore, which is a delegate class of Hash. You
can turn an existing hash into an hstore using Hash#hstore.

If you are using hstores in model objects, you need to load
support for that:

  DB.extend Sequel::Postgres::HStore::DatabaseMethods

This makes schema parsing and typecasting of hstore columns work
correctly.

This extension also allows you to use HStore objects and hashes
in bound variables when using the postgres adapter with pg.
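
A similar rough sketch for hstore (the pages table and its attrs
hstore column are hypothetical):

  Sequel.extension :pg_hstore
  DB.extend Sequel::Postgres::HStore::DatabaseMethods

  # Hypothetical pages table with an attrs hstore column
  DB[:pages].insert(:name=>'home', :attrs=>{'lang'=>'en', 'draft'=>nil}.hstore)
  DB[:pages].get(:attrs) # => {"lang"=>"en", "draft"=>nil} (an HStore)
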
- A pg_array_ops extension has been added, making it easier to call
PostgreSQL array operators and functions using plain ruby code.
Examples:

  a = :array_column.pg_array
  a[1]         # array_column[1]
  a[1][2]      # array_column[1][2]
  a.push(1)    # array_column || 1
  a.unshift(1) # 1 || array_column
  a.any        # ANY(array_column)
  a.join       # array_to_string(array_column, '', NULL)

If you are also using the pg_array extension, you can turn
a PGArray object into a query object, which allows you to run
operations on array literals:

  a = [1, 2].pg_array.op
  a.push(3)    # ARRAY[1,2] || 3
- A pg_hstore_ops extension has been added, making it easier to call
PostgreSQL hstore operators and functions using plain ruby code.
Examples:

  h = :hstore_column.hstore
  h['a']          # hstore_column -> 'a'
  h.has_key?('a') # hstore_column ? 'a'
  h.keys          # akeys(hstore_column)
  h.to_array      # hstore_to_array(hstore_column)

If you are also using the pg_hstore extension, you can turn
an HStore object into a query object, which allows you to run
operations on hstore literals:

  h = {'a' => 'b'}.hstore.op
  h['a']          # '"a"=>"b"'::hstore -> 'a'
- A pg_auto_parameterize extension has been added for automatically
using bound variables for all queries. For example, it can take
code such as:

  DB[:table].where(:column=>1)

and do:

  SELECT * FROM table WHERE column = $1; -- [1]

Note that automatically parameterizing queries is not generally
faster unless the bound variables are large (i.e. long text/bytea
values). Also, there are multiple corner cases when automatically
parameterizing queries, though most can be worked around by
adding explicit casts.
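
As an illustration of what adding an explicit cast looks like
(the created_at column is hypothetical, and whether a given query
needs a cast depends on the query), using the new Sequel.cast
method described below:

  # An explicit cast gives the bound parameter a known type
  DB[:table].where(:created_at=>Sequel.cast('2012-03-15', Date))
  # roughly: SELECT * FROM table WHERE (created_at = CAST($1 AS date)) -- ['2012-03-15']
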
- A pg_statement_cache extension has been added that works with the
pg_auto_parameterize extension for automatically caching prepared
statements and reusing them when using the postgres adapter with
pg. The combination of these two extensions makes it possible to
take an entire Sequel application and turn most or all of the
queries into prepared statements.

Note that these two extensions do not necessarily improve
performance. For simple queries, they actually hurt performance.
They do help for complex queries, but in all cases, it's faster
to use Sequel's prepared statements API manually.
= Other New Extensions
- A query_literals extension has been added that makes the select,
group, and order methods operate similarly to the filter methods, in
that if they are given a regular string as their first argument,
they treat it as a literal string, with additional arguments, if
any, used as placeholder values. This extension allows you to
write code such as:

  DB[:table].select('a, b, ?', 2).group('a, b').order('c')

Without query_literals:

  SELECT 'a, b, ?', 2 FROM table GROUP BY 'a, b' ORDER BY 'c'

With query_literals:

  SELECT a, b, 2 FROM table GROUP BY a, b ORDER BY c

Sequel's default handling in this case is to treat the strings as
SQL string literals (quoted values), which is generally not desired
and on some databases not even valid syntax. In general, you'll
probably want to use this extension for all of a database's
datasets, which you can do via:

  Sequel.extension :query_literals
  DB.extend_datasets(Sequel::QueryLiterals)

The next major version of Sequel (4.0.0) will probably integrate
this extension into the core library.
- A select_remove extension has been added that adds
Dataset#select_remove, for removing selected columns/expressions
from a dataset:

  ds = DB[:table]
  # Assume table has columns a, b, and c
  ds.select_remove(:c)
  # SELECT a, b FROM table

  # Removal by column alias
  ds.select(:a, :b___c, :c___b).select_remove(:c)
  # SELECT a, c AS b FROM table

  # Removal by expression
  ds.select(:a, :b___c, :c___b).select_remove(:c___b)
  # SELECT a, b AS c FROM table

This method makes it easier to select all columns except for the
columns given. This is common in cases where a table has a few
large columns that are expensive to retrieve. This method does
have some corner cases, so read the documentation before using it.
- A schema_caching extension has been added that makes it possible for
Database instances to dump the cached schema metadata to a
marshalled file, and load the cached schema metadata from the file.
This can be significantly faster than reparsing the schema from the
database, especially for databases with high latency.

bin/sequel -S has been added to dump the schema for the given
database to a file, and DB.load_schema_cache(filename) can be used
to populate the schema cache inside your application. This should
be done after creating the Database object but before loading your
model files.

Note that Sequel does no checking to ensure that the cached schema
currently reflects the state of the database. That is up to the
application.
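
A rough sketch of the workflow (the file name and connection
string are hypothetical):

  # Dump the schema cache from the command line:
  #   bin/sequel -S schema.cache postgres://localhost/myapp

  # In the application, before loading the model files:
  Sequel.extension :schema_caching
  DB = Sequel.connect('postgres://localhost/myapp')
  DB.load_schema_cache('schema.cache')
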
- A null_dataset extension has been added, which adds
Dataset#nullify for creating a dataset that will not issue a
database query. It implements the null object pattern for
datasets, and is probably most useful in methods that must return
a dataset, but can determine that such a dataset will never return
a row.
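
A brief sketch of how a nullified dataset behaves (the items
table is hypothetical):

  Sequel.extension :null_dataset

  ds = DB[:items].nullify
  ds.each{|row| p row} # no rows yielded, no query issued
  ds.all               # => []
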
= New Plugins
- A static_cache plugin has been added, allowing you to cache a model
statically. This plugin is useful for models whose tables do not
change while the application is running, such as lookup tables.
When using this plugin, the following methods will no longer require
queries:

  - Primary key lookups (e.g. Model[1])
  - Model.all calls
  - Model.each calls
  - Model.map calls without an argument
  - Model.to_hash calls without an argument

The statically cached model instances are frozen so they are not
accidentally modified.
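
A rough sketch, using a hypothetical lookup-table model:

  class StatusType < Sequel::Model
    plugin :static_cache
  end

  StatusType[1]         # no query, returned from the cache
  StatusType.all        # no query
  StatusType.map(:name) # map with an argument, so this may still query
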
- A many_to_one_pk_lookup plugin has been added that changes the
many_to_one association retrieval code to do a simple primary
key lookup on the associated class in most cases. This results
in significantly better performance, especially if the
associated model is using a caching plugin (either caching
or static_cache).
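
A rough sketch combining it with static_cache (the models are
hypothetical):

  class Artist < Sequel::Model
    plugin :static_cache
  end

  class Album < Sequel::Model
    plugin :many_to_one_pk_lookup
    many_to_one :artist
  end

  # Retrieval uses Artist[album.artist_id], which hits the static cache
  Album.first.artist
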
= Core Extension Replacements
- Most of Sequel's core extensions now have equivalent methods defined
on the Sequel module:

  :column.as(:alias) → Sequel.as(:column, :alias)
  :column.asc → Sequel.asc(:column)
  :column.desc → Sequel.desc(:column)
  :column.cast(Integer) → Sequel.cast(:column, Integer)
  :column.cast_numeric → Sequel.cast_numeric(:column)
  :column.cast_string → Sequel.cast_string(:column)
  :column.extract(:year) → Sequel.extract(:year, :column)
  :column.identifier → Sequel.identifier(:column)
  :column.ilike('A%') → Sequel.ilike(:column, 'A%')
  :column.like('A%') → Sequel.like(:column, 'A%')
  :column.qualify(:table) → Sequel.qualify(:table, :column)
  :column.sql_subscript(1) → Sequel.subscript(:column, 1)
  :function.sql_function(1) → Sequel.function(:function, 1)
  'some SQL'.lit → Sequel.lit('some SQL')
  'string'.to_sequel_blob → Sequel.blob('string')
  {:a=>1}.case(0) → Sequel.case({:a=>1}, 0)
  {:a=>1}.sql_negate → Sequel.negate(:a=>1)
  {:a=>1}.sql_or → Sequel.or(:a=>1)
  [[1, 2]].sql_value_list → Sequel.value_list([[1, 2]])
  [:a, :b].sql_string_join → Sequel.join([:a, :b])
  ~{:a=>1} → Sequel.~(:a=>1)
  :a + 1 → Sequel.+(:a, 1)
  :a - 1 → Sequel.-(:a, 1)
  :a * 1 → Sequel.*(:a, 1)
  :a / 1 → Sequel./(:a, 1)
  :a & 1 → Sequel.&(:a, 1)
  :a | 1 → Sequel.|(:a, 1)
- You can now wrap any object in a Sequel expression using
Sequel.expr. This is similar to the sql_expr extension, but
without defining the sql_expr method on all objects:

  1.sql_expr → Sequel.expr(1)

The sql_expr extension now just has Object#sql_expr call
Sequel.expr.
- Virtual Rows now have methods defined that handle the standard
mathematical operators:

  select{|o| o.+(1, :a)} # SELECT (1 + a)

the standard inequality operators:

  where{|o| o.>(2, :a)} # WHERE (2 > a)

and the standard boolean operators:

  where{|o| o.&({:a=>1}, o.~(:b=>1))} # WHERE ((a = 1) AND (b != 1))

Additionally, there is now direct support for creating literal
strings in instance_evaled virtual row blocks, using backticks:

  where{a > `some crazy SQL`} # WHERE (a > some crazy SQL)

This doesn't override Kernel.`, since virtual rows use a
BasicObject subclass. Previously, using ` would result in calling
the SQL function named ` with the given string, which probably
isn't valid syntax on most databases.
- You can now require 'sequel/no_core_ext' to load Sequel without the
core extensions. The previous way of setting the
SEQUEL_NO_CORE_EXTENSIONS constant or environment variable before
loading Sequel still works.
- The core extensions have been moved from Sequel's core library into
an extension that is loadable with Sequel.extension. This extension
is still loaded by default for backwards compatibility. However,
the next major version of Sequel will no longer load this extension
by default (though it will still be available to load manually).
- You can now check if the core extensions have been loaded by using
Sequel.core_extensions?.
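
As an illustration, an application that avoids the core extensions
might look like this sketch (the connection string and table are
hypothetical):

  require 'sequel/no_core_ext'

  Sequel.core_extensions? # => false

  DB = Sequel.connect('postgres://localhost/myapp')
  # Use the Sequel module methods instead of the core extension methods
  DB[:albums].where(Sequel.ilike(:name, 'A%')).order(Sequel.desc(:id))
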
= Foreign Keys in the Schema Dumper
- Database#foreign_key_list has been added that gives an array of
foreign key constraints on the table. It is currently implemented
on MySQL, PostgreSQL, and SQLite, and may be implemented on other
database types in the future. Each entry in the return array is
a hash, with at least the following keys present:

:columns :: An array of columns in the given table
:table :: The table referenced by the columns
:key :: An array of columns referenced (in the table specified by
  :table), but can be nil on certain adapters if the primary
  key is referenced.

The hash may also contain entries for:

:deferrable :: Whether the constraint is deferrable
:name :: The name of the constraint
:on_delete :: The action to take ON DELETE
:on_update :: The action to take ON UPDATE
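
A rough sketch of a possible return value (the table, column, and
constraint names are hypothetical, and the exact entries vary by
adapter):

  DB.foreign_key_list(:albums)
  # => [{:columns=>[:artist_id], :table=>:artists, :key=>[:id],
  #      :name=>:albums_artist_id_fkey, :on_delete=>:no_action,
  #      :on_update=>:no_action}]
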
- The schema_dumper extension now dumps foreign key constraints on
databases that support Database#foreign_key_list. On such
databases, dumping a schema migration will dump the tables in
topological order, such that referenced tables always come before
referencing tables.

In case there is a circular dependency, Sequel breaks the
dependency and adds separate foreign key constraints at the end
of the migration. However, when a circular dependency is broken,
the migration can probably not be migrated down.

Foreign key constraints can also be dumped as a separate migration
using Database#dump_foreign_key_migration, similar to how
Database#dump_indexes_migration works.
- When using bin/sequel -C to copy databases, foreign key constraints
are now copied if the source database supports
Database#foreign_key_list.
= Other New Features
- Dataset#to_hash_groups and #select_hash_groups have been added.
These methods are similar to #to_hash and #select_hash in that they
return a hash, but hashes returned by *_hash_groups methods have
arrays of all matching values, unlike the *_hash methods which
just use the last matching value. Example:

  DB[:table].all
  # => [{:a=>1, :b=>2}, {:a=>1, :b=>3}, {:a=>2, :b=>4}]

  DB[:table].to_hash(:a, :b)
  # => {1=>3, 2=>4}

  DB[:table].to_hash_groups(:a, :b)
  # => {1=>[2, 3], 2=>[4]}
- Model#set_fields and #update_fields now accept :missing=>:skip and
:missing=>:raise options, allowing them to be used in more cases.
:missing=>:skip skips missing entries in the hash, instead of
setting the field to the default hash value. :missing=>:raise
raises an error for missing fields, similar to
strict_param_setting = true. It's recommended that these options
be used in new code in preference to #set_only and #update_only.
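
A rough sketch (the Album model and its columns are hypothetical):

  album = Album[1]

  # :copies_sold is not in the hash, so it is skipped instead of
  # being set to the default hash value (nil)
  album.set_fields({:name=>'RF'}, [:name, :copies_sold], :missing=>:skip)

  # Raises an error because :copies_sold is missing from the hash
  album.update_fields({:name=>'RF'}, [:name, :copies_sold], :missing=>:raise)
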
- Database#drop_table? has been added, for dropping tables if they
already exist. This uses DROP TABLE IF EXISTS on the databases that
support it. Database#supports_drop_table_if_exists? has been added
for checking whether the database supports that syntax.
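
For example (widgets is a hypothetical table):

  DB.drop_table?(:widgets)
  # DROP TABLE IF EXISTS widgets -- where supported; otherwise the
  # table is only dropped if it exists

  DB.supports_drop_table_if_exists? # => true or false
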
- Database#create_join_table has been added that allows easy
creation of many_to_many join tables:

  DB.create_join_table(:album_id=>:albums, :artist_id=>:artists)

This uses real foreign keys for both of the columns, uses a
composite primary key of both of the columns, and adds an
additional composite index of the columns in reverse order. The
primary key and additional index should ensure that almost all
operations on the join table can benefit from an index.

In terms of customization, the values in the hash can be hashes
themselves for column specific options, and an additional options
hash can also be given to override some of the default settings.

Database#drop_join_table also exists and takes the same options
as create_join_table. It mostly exists to make it easy to
reverse migrations that use create_join_table.
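
Since drop_join_table takes the same argument, the pair reverses
cleanly in a migration; a brief sketch:

  Sequel.migration do
    up{create_join_table(:album_id=>:albums, :artist_id=>:artists)}
    down{drop_join_table(:album_id=>:albums, :artist_id=>:artists)}
  end
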
- Model#freeze has been added that freezes a model such that it
works correctly in a read-only state. Before, it used the standard
Object#freeze, which broke some things that should work, and
allowed changes that shouldn't be allowed (like modifying the
instance's values).
- ConnectionPool#all_connections has been added, which yields each
available connection in the pool to the block. For threaded pools,
it does not yield connections that are currently being used by
other threads. When using this method, it is important to only
operate on the yielded connection objects, and not make any
modifications to the pool itself. The pool is also locked until
the method returns.
- ConnectionPool#after_connect= has been added, allowing you to
change a connection pool's after_connect proc after instantiating
the pool.
- ConnectionPool#disconnection_proc= has been added, allowing you to
change a connection pool's disconnection_proc after instantiating the
pool.
- A Model.cache_anonymous_models accessor has been added, and can be
set to false to disable the caching of classes created by
Sequel::Model(). This caching is only useful if you want to reload
the model's file without getting a superclass mismatch. This
setting is true by default for backwards compatibility, but may be
changed to false in a later version, so you should manually set it to
true if you are using code reloading.
- Model.instance_dataset has been added for getting the dataset used
for model instances (a naked dataset restricted to a single row).
- Dataset#with_sql_delete has been added for running the given SQL
string as a delete and returning the number of rows modified. It's
designed as a replacement for with_sql(sql).delete, which is slower
as it requires cloning the dataset.
- The :on_update and :on_delete entries for foreign_key now accept
string arguments which are used literally.
- Prepared statement objects now have a log_sql accessor that can be
turned on to log the entire SQL statement instead of just the
prepared statement name.
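
A brief sketch (the dataset and statement name are hypothetical):

  ps = DB[:items].where(:id=>:$i).prepare(:select, :select_item)
  ps.log_sql = true
  ps.call(:i=>1) # the log line now contains the full SELECT statement
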
- Dataset#multi_replace has been added on MySQL. This is similar to
multi_insert, but uses REPLACE instead of INSERT.
- Dataset#explain has been added to MySQL. You can use an
:extended=>true option to use EXPLAIN EXTENDED.
- A Database#type_supported? method has been added on PostgreSQL to
check if the database supports the given type:

  DB.type_supported?(:hstore)
- Database#reset_conversion_procs has been added to the postgres
adapter, for use by extensions that modify the default conversion
procs and want to have the database use the updated defaults.
- A Database#convert_infinite_timestamps accessor has been added to
the postgres adapter, allowing you to return infinite timestamps as
nil, a string, or a float.
- SQL::PlaceholderLiteralString objects can now use a placeholder
array, where placeholder values are inserted between array elements.
This is about 2.5-3x faster than using a string with ? placeholders,
and allows usage of ? inside the array:

  Sequel.lit(["(", " ? ", ")"], 1, 2) # (1 ? 2)
- SQL::Subscript#[] has been added for accessing members of a
multi-dimensional array:

  Sequel.subscript(:column, 1)[2][3] # column[1][2][3]
- SQL::Wrapper has been added for wrapping arbitrary objects in a
Sequel expression object.
- SQL::QualifiedIdentifier objects can now contain arbitrary Sequel
expressions. Before, they could only contain a few expression
types. This makes it easier to add extensions to support
PostgreSQL row-valued types.
= Performance Improvements
- Model.[] when called with a primary key has been made about 110%
faster for most models by avoiding cloning datasets.
- Model.[] when called without arguments or with a single nil argument
is much faster as it now returns nil immediately instead of issuing
a database query.
- Model#delete and Model#destroy have been made about 75% faster for
most models by using a static SQL string.
- Model.new is now twice as fast when passed an empty hash.
- Model#set is now four times as fast when passed an empty hash.
- Model#this has been made about 85% faster by reducing the number of
dataset clones needed from 3 to 1.
- Some proc activations have been removed, giving minor speedups when
running on MRI.
= Other Improvements
- Database#uri and #url now return the connection string given
to Sequel.connect. Previously, they tried to reconstruct the
url using the database's options, but that didn't work well in
corner cases.
- Database#inspect now shows the URL and/or options given when
connecting to the database. Previously, it showed the URL, or
all of the database's options if constructing the URL raised an
error.
- Sequel no longer checks for prepared transactions support when
using transactions unless a prepared transaction is specifically
requested.
- The schema utility dataset cached in the Database object is now
reset if you use Database#extend_datasets, ensuring that the new
value will use the given extension.
- The prepared_statements* plugins now log the full SQL by default.
Since the user doesn't choose the name of the prepared statements,
it was often difficult to determine what SQL was actually run if
you were only looking at a subsection of the SQL log.
- The nested_attributes plugin's delete/remove support now works
correctly when a false value is given for _delete/_remove and
strict_param_setting is true.
- The hook_class_methods and validation_class_methods plugins
now work correctly when subclassing if the subclass attempts to
create instances inside Model.inherited.
- The caching plugin has been refactored. Model.cache_get_pk and
cache_delete_pk have been added for retrieving/deleting from the
cache by primary key. Model.cache_key is now a public method.
- The typecast_on_load plugin now works correctly when saving
new model objects when insert_select is supported.
- In the sql_expr extension, nil.sql_expr is no longer treated as
a boolean value. It is now treated as a value with generic type.
- The postgres adapter no longer issues a query to map type names to
type oids if no named conversion procs have been registered.
- The postgres adapter now works around issues in ruby-pg by
supporting fractional seconds for Time/DateTime values, and
supporting SQL::Blob (bytea) values with embedded "\0" characters.
- The postgres adapter now supports pre-defining the PG_NAMED_TYPES
and PG_TYPES constants. This is so extensions can define them,
so they don't have to load the postgres adapter file first. If
extensions need to use these constants, they should do:

  PG_NAMED_TYPES = {} unless defined?(PG_NAMED_TYPES)
  PG_TYPES = {} unless defined?(PG_TYPES)

That way they work whether they are loaded before or after the
postgres adapter.
- Sequel now correctly adds the RETURNING clause when building
queries on PostgreSQL 8.2-9.0. Sequel 3.31.0 added support for
returning values from delete/update queries in PostgreSQL 8.2-9.0,
but didn't change the literalization code to use the RETURNING
clause on those versions.
- The jdbc/postgres adapter now converts Java arrays
(Java::OrgPostgresqlJdbc4::Jdbc4Array) to ruby arrays.
- Tables and schemas with embedded ' characters are now handled
correctly when parsing primary keys and sequences on PostgreSQL.
- Identifiers are now escaped on MySQL and SQLite. Previously they
were quoted, but internal ` characters were not doubled.
- Fractional seconds for the time type are now returned correctly on
jdbc (assuming they are returned as java.sql.Time values by JDBC).
- Multiple changes were made to ensure that Sequel works correctly
when the core extensions are not loaded.
- Composite foreign key constraints are now retained when emulating
alter_table operations on SQLite. Previously, only single
foreign key constraints were retained.
- An error is no longer raised when no indexes exist when calling
Database#indexes on jdbc/sqlite.
- A possible SystemStackError has been fixed in the SQLite adapter,
when trying to delete a dataset that uses a having clause and no
where clause.
- ROLLUP/CUBE support now works correctly on Microsoft SQL Server
2005.
- Unsigned tinyint types are now recognized in the schema dumper.
- Using primary_key :column, :type=>Bignum now works correctly on H2.
Previously, the column created was not autoincrementing.
- Using a bound variable for a limit is now supported in the ibmdb
adapter on ruby 1.9.
- Connecting to PostgreSQL via the swift adapter has been fixed when
using newer versions of swift.
- The mock adapter now handles calling the Database#execute methods
directly (instead of via a dataset).
- The mock adapter now has the ability to have per-shared adapter
specific initialization code executed. This has been used to fix
some bugs when using the shared postgres adapter.
- The pretty_table extension has been split into two extensions, one
that adds a method to Dataset and one that just adds the
PrettyTable class. Also, PrettyTable.string has been added to get
a string copy of the table.
- A spec_model_no_assoc task has been added for running model specs
without the association plugin loaded. This is to check that the
SEQUEL_NO_ASSOCIATIONS setting works correctly.
= Deprecated Features to be Removed in Sequel 3.35.0
- Ruby <1.8.7 support is now deprecated.
- PostgreSQL <8.2 support is now deprecated.
- Dataset#disable_insert_returning on PostgreSQL is now deprecated.
Starting in 3.35.0, RETURNING will always be used to get the
primary key value when inserting.
- Array#all_two_pairs? is now deprecated. It was part of the core
extensions, but the core extensions have been refactored to no
longer require it. As it doesn't specifically relate to creating
Sequel expression objects, it is being removed. The private
Array#sql_expr_if_all_two_pairs method is deprecated as well.
= Other Backwards Compatibility Issues
- The generic Bignum type now uses bigint on SQLite, similar to
other databases. The integer type was previously used. The only
exception is for auto incrementing primary keys, which still use
integer for Bignum as SQLite doesn't support autoincrementing
columns other than integer.
- On SQLite, Dataset#explain now returns a string, similar to
PostgreSQL (and now MySQL).
- When using the JDBC adapter, Java::OrgPostgresqlUtil::PGobject
objects are converted to ruby strings if the dataset is set to
convert types (the default setting). This is to support the
hstore extension, but it could have unforeseen effects if custom
types were used.
- For PostgreSQL connection objects, #primary_key and #sequence now
require that their arguments be provided as already literalized
strings. Note that these methods are being removed in the next
version because they will not be needed after PostgreSQL <8.2
support is dropped.
- Database#uri and #url now return a string or nil, but never raise
an exception. Previously, they would either return a string
or raise an exception.
- The Model @simple_pk and @simple_table instance variables should
no longer be modified directly. Instead, the setter methods should
be used.
- Model.primary_key_lookup should no longer be called with a nil
value.
- Logging of prepared statements on some adapters has been changed
slightly, so log parsers might need to be updated.
- Dataset#identifier_append and #table_ref_append no longer treat
literal strings and blobs specially. Previously, they were treated
as identifiers.
- Dataset#qualified_identifier_sql_append now takes 3 arguments, so
any extensions that override it should be modified accordingly.
- Some internally used constants and private methods have been
deleted:

  Database::CASCADE
  Database::NO_ACTION
  Database::SET_DEFAULTS
  Database::SET_NULL
  Database::RESTRICT
  Dataset::COLUMN_ALL

or moved:

  MySQL::Dataset::AFFECTED_ROWS_RE → MySQL::Database
  MySQL::Dataset#affected_rows → MySQL::Database

- The sql_expr extension no longer creates the
Sequel::SQL::GenericComplexExpression class.
Thanks,
Jeremy
- {Website}[http://sequel.rubyforge.org]
- {Source code}[https://github.com/jeremyevans/sequel]
- {Blog}[http://sequel.heroku.com]
- {Bug tracking}[https://github.com/jeremyevans/sequel/issues]
- {Google group}[http://groups.google.com/group/sequel-talk]
- {RDoc}[http://sequel.rubyforge.org/rdoc]