The split_array_nil extension overrides Sequel's default handling of IN/NOT IN with arrays of values to do specific nil checking. For example,
ds = DB[:table].where(:column=>[1, nil])
By default, that produces the following SQL:
SELECT * FROM table WHERE (column IN (1, NULL))
However, because NULL = NULL is not true in SQL (it is NULL), this will not return rows in the table where the column is NULL. This extension allows for an alternative behavior more similar to ruby, which will return rows in the table where the column is NULL, using a query like:
SELECT * FROM table WHERE ((column IN (1)) OR (column IS NULL))
Similarly, for NOT IN queries:
ds = DB[:table].exclude(:column=>[1, nil])
# Default:
#   SELECT * FROM table WHERE (column NOT IN (1, NULL))
# with split_array_nil extension:
#   SELECT * FROM table WHERE ((column NOT IN (1)) AND (column IS NOT NULL))
To use this extension with a single dataset:
ds = ds.extension(:split_array_nil)
To use this extension for all of a database's datasets:
DB.extension(:split_array_nil)
This adds the following dataset methods:
[]= :: filter with the first argument, update with the second
insert_multiple :: insert multiple rows at once
set :: alias for update
to_csv :: return string in csv format for the dataset
db= :: change the dataset's database
opts= :: change the dataset's opts
It is only recommended to use this for backwards compatibility.
You can load this extension into specific datasets:
ds = DB[:table]
ds.extension(:sequel_3_dataset_methods)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:sequel_3_dataset_methods)
The select_remove extension adds Sequel::Dataset#select_remove for removing existing selected columns from a dataset. It's not part of Sequel core as it is rarely needed and has some corner cases where it can't work correctly.
You can load this extension into specific datasets:
ds = DB[:table]
ds.extension(:select_remove)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:select_remove)
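Once loaded, a sketch of what it does (column names illustrative):

ds = DB[:table].select(:a, :b, :c)
ds.select_remove(:c) # SELECT a, b FROM table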
The eval_inspect extension changes inspect for Sequel::SQL::Expression subclasses to return a string suitable for ruby's eval, such that
eval(obj.inspect) == obj
is true. That invariant holds for most of ruby's simple classes such as String, Integer, Float, and Symbol, but it does not hold for classes such as Time, Date, and BigDecimal. Sequel attempts to handle situations where instances of these classes are a component of a Sequel expression.
To load the extension:
Sequel.extension :eval_inspect
This extension allows Sequel's postgres adapter to automatically parameterize all common queries. Sequel's default behavior has always been to literalize all arguments unless specifically using parameters (via :$arg placeholders and the prepare/call methods). This extension makes Sequel take all string, numeric, date, and time types and automatically turn them into parameters. Example:
# Default
DB[:test].where(:a=>1)
# SQL: SELECT * FROM test WHERE a = 1

DB.extension :pg_auto_parameterize
DB[:test].where(:a=>1)
# SQL: SELECT * FROM test WHERE a = $1 (args: [1])
This extension is not necessarily faster or safer than the default behavior. In some cases it is faster, such as when using large strings. However, there are also some known issues with this approach:
Because of the way it operates, it has no context to make a determination about whether to literalize an object or not. For example, if it comes across an integer, it will turn it into a parameter. That breaks code such as:
DB[:table].select(:a, :b).order(2, 1)
Since it will use the following SQL (which isn't valid):
SELECT a, b FROM table ORDER BY $1, $2
To work around this, you can either specify the columns manually or use a literal string:
DB[:table].select(:a, :b).order(:b, :a)
DB[:table].select(:a, :b).order(Sequel.lit('2, 1'))
In order to avoid many type errors, it attempts to guess the appropriate type and automatically casts all placeholders. Unfortunately, if the type guess is incorrect, the query will be rejected. For example, the following works without automatic parameterization, but fails with it:
DB[:table].insert(:interval=>'1 day')
To work around this, you can just add the necessary casts manually:
DB[:table].insert(:interval=>'1 day'.cast(:interval))
You can also work around any issues that come up by disabling automatic parameterization by calling the no_auto_parameterize method on the dataset (which returns a clone of the dataset).
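For example, a sketch reusing the ordinal ORDER BY query from above:

DB[:table].select(:a, :b).no_auto_parameterize.order(2, 1)
# SQL: SELECT a, b FROM table ORDER BY 2, 1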
It is likely there are other corner cases I am not yet aware of when using this extension, so use this extension with caution.
This extension is only compatible when using the pg driver, not when using the old postgres driver or the postgres-pr driver.
The graph_each extension adds Dataset#graph_each and makes Dataset#each call graph_each if the dataset has been graphed. Dataset#graph_each splits result hashes into subhashes per table:
DB[:a].graph(:b, :id=>:b_id).all
# => [{:a=>{:id=>1, :b_id=>2}, :b=>{:id=>2}}]
You can load this extension into specific datasets:
ds = DB[:table]
ds.extension(:graph_each)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:graph_each)
The hash_aliases extension allows Dataset#select and Dataset#from to treat a hash argument as an alias specification, with keys being the expressions and values being the aliases, which was the historical behavior before Sequel 4. It is only recommended to use this for backwards compatibility.
You can load this extension into specific datasets:
ds = DB[:table]
ds.extension(:hash_aliases)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:hash_aliases)
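For example, once loaded (a sketch; column and alias names are illustrative):

DB[:table].select(:column=>:c).sql
# => "SELECT column AS c FROM table"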
This adds a Sequel::Dataset#to_dot method. The to_dot method returns a string that can be processed by graphviz's dot program in order to get a visualization of the dataset. Basically, it shows a version of the dataset's abstract syntax tree.
To load the extension:
Sequel.extension :to_dot
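A sketch of typical usage, writing the output to a file for graphviz (file names illustrative):

Sequel.extension :to_dot
File.open('ds.dot', 'w'){|f| f.write(DB[:table].where(:a=>1).to_dot)}
# Then process with: dot -Tpng ds.dot > ds.png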
The pagination extension adds the Sequel::Dataset#paginate and each_page methods, which return paginated (limited and offset) datasets with some helpful methods that make creating a paginated display easier.
This extension uses Object#extend at runtime, which can hurt performance.
You can load this extension into specific datasets:
ds = DB[:table]
ds.extension(:pagination)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:pagination)
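Once loaded, a sketch of what a paginated dataset provides (results illustrative):

ds = DB[:table].paginate(1, 25) # first page, 25 records per page
ds.current_page # => 1
ds.page_count   # => total number of pages
ds.next_page    # => 2, or nil if this is the last page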
This _pretty_table extension is only for internal use. It adds the Sequel::PrettyTable class without modifying Sequel::Dataset.
To load the extension:
Sequel.extension :_pretty_table
This changes Sequel's literalization of IN/NOT IN with an empty array value to not return NULL even if one of the referenced columns is NULL:
DB[:test].where(:name=>[])
# SELECT * FROM test WHERE (1 = 0)
DB[:test].exclude(:name=>[])
# SELECT * FROM test WHERE (1 = 1)
The default Sequel behavior is to respect NULLs, so that when name is NULL, the expression returns NULL.
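For comparison, the default NULL-respecting behavior produces:

DB[:test].where(:name=>[])
# SELECT * FROM test WHERE (name != name)
DB[:test].exclude(:name=>[])
# SELECT * FROM test WHERE (name = name)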
You can load this extension into specific datasets:
ds = DB[:table]
ds.extension(:empty_array_ignore_nulls)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:empty_array_ignore_nulls)
The query_literals extension changes Sequel's default behavior of the select, order and group methods so that if the first argument is a regular string, it is treated as a literal string, with the rest of the arguments (if any) treated as placeholder values. This allows you to write code such as:
DB[:table].select('a, b, ?', 2).group('a, b').order('c')
The default Sequel behavior would literalize that as:
SELECT 'a, b, ?', 2 FROM table GROUP BY 'a, b' ORDER BY 'c'
Using this extension changes the literalization to:
SELECT a, b, 2 FROM table GROUP BY a, b ORDER BY c
This extension makes select, group, and order methods operate like filter methods, which support the same interface.
There are very few places where Sequel's default behavior is desirable in this area, but for backwards compatibility, the defaults won't be changed until the next major release.
You can load this extension into specific datasets:
ds = DB[:table]
ds.extension(:query_literals)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:query_literals)
The schema_caching extension adds a few methods to Sequel::Database that make it easy to dump the parsed schema information to a file, and load it from that file. Loading the schema information from a dumped file is faster than parsing it from the database, so this can save bootup time for applications with large numbers of models.
Basic usage in application code:
DB = Sequel.connect('...')
DB.extension :schema_caching
DB.load_schema_cache('/path/to/schema.dump')

# load model files
Then, whenever the database schema is modified, write a new cached file. You can do that with bin/sequel's -S option:
bin/sequel -S /path/to/schema.dump postgres://...
Alternatively, if you don't want to dump the schema information for all tables, and you aren't worried about race conditions, you can use the following in your application code:
DB = Sequel.connect('...')
DB.extension :schema_caching
DB.load_schema_cache?('/path/to/schema.dump')

# load model files

DB.dump_schema_cache?('/path/to/schema.dump')
With this method, you just have to delete the schema dump file if the schema is modified, and the application will recreate it for you using just the tables that your models use.
Note that it is up to the application to ensure that the dumped cached schema reflects the current state of the database. Sequel does no checking to ensure this, as checking would take time and the purpose of this code is to take a shortcut.
The cached schema is dumped in Marshal format, since it is the fastest and it handles all ruby objects used in the schema hash. Because of this, you should not attempt to load the schema from an untrusted file.
The date_arithmetic extension adds the ability to perform database-independent addition/subtraction of intervals to/from dates and timestamps.
First, you need to load the extension into the database:
DB.extension :date_arithmetic
Then you can use the Sequel.date_add and Sequel.date_sub methods to return Sequel expressions:
add = Sequel.date_add(:date_column, :years=>1, :months=>2, :days=>3)
sub = Sequel.date_sub(:date_column, :hours=>1, :minutes=>2, :seconds=>3)
In addition to specifying the interval as a hash, there is also support for specifying the interval as an ActiveSupport::Duration object:
require 'active_support/all'
add = Sequel.date_add(:date_column, 1.years + 2.months + 3.days)
sub = Sequel.date_sub(:date_column, 1.hours + 2.minutes + 3.seconds)
These expressions can be used in your datasets, or anywhere else that Sequel expressions are allowed:
DB[:table].select(add.as(:d)).where(sub > Sequel::CURRENT_TIMESTAMP)
The pg_range_ops extension adds support to Sequel's DSL to make it easier to call PostgreSQL range functions and operators.
To load the extension:
Sequel.extension :pg_range_ops
The most common usage is passing an expression to Sequel.pg_range_op:
r = Sequel.pg_range_op(:range)
If you have also loaded the pg_range extension, you can use Sequel.pg_range as well:
r = Sequel.pg_range(:range)
Also, on most Sequel expression objects, you can call the pg_range method:
r = Sequel.expr(:range).pg_range
If you have loaded the core_extensions extension, or you have loaded the core_refinements extension and have activated refinements for the file, you can also use Symbol#pg_range:
r = :range.pg_range
This creates a Sequel::Postgres::RangeOp object that can be used for easier querying:
r.contains(:other)     # range @> other
r.contained_by(:other) # range <@ other
r.overlaps(:other)     # range && other
r.left_of(:other)      # range << other
r.right_of(:other)     # range >> other
r.starts_after(:other) # range &> other
r.ends_before(:other)  # range &< other
r.adjacent_to(:other)  # range -|- other
r.lower     # lower(range)
r.upper     # upper(range)
r.isempty   # isempty(range)
r.lower_inc # lower_inc(range)
r.upper_inc # upper_inc(range)
r.lower_inf # lower_inf(range)
r.upper_inf # upper_inf(range)
See the PostgreSQL range function and operator documentation for more details on what these functions and operators do.
If you are also using the pg_range extension, you should load it before loading this extension. Doing so will allow you to use PGRange#op to get a RangeOp, allowing you to perform range operations on range literals.
The arbitrary_servers extension allows you to connect to arbitrary servers/shards that were not defined when you created the database. To use it, you first load the extension into the Database object:
DB.extension :arbitrary_servers
Then you can pass arbitrary connection options for the server/shard to use as a hash:
DB[:table].server(:host=>'...', :database=>'...').all
Because Sequel can never be sure that the connection will be reused, arbitrary connections are disconnected as soon as the outermost block that uses them exits. So this example uses the same connection:
DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c|
  DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c2|
    # c == c2
  end
end
But this example does not:
DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c|
end
DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c2|
  # c != c2
end
You can use this extension in conjunction with the server_block extension:
DB.with_server(:host=>'...', :database=>'...') do
  DB.synchronize do
    # All of these use the host/database given to with_server
    DB[:table].insert(...)
    DB[:table].update(...)
    DB.tables
    DB[:table].all
  end
end
Anyone using this extension in conjunction with the server_block extension may want to do the following so that you don't need to call synchronize separately:
def DB.with_server(*)
  super{synchronize{yield}}
end
Note that this extension only works with the sharded threaded connection pool. If you are using the sharded single connection pool, you need to switch to the sharded threaded connection pool before using this extension.
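A sketch of one way to ensure the sharded threaded pool is used: pass a :servers option (even an empty hash) when connecting, before loading the extension:

DB = Sequel.connect('postgres://...', :servers=>{})
DB.extension :arbitrary_servers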
The pg_hstore_ops extension adds support to Sequel's DSL to make it easier to call PostgreSQL hstore functions and operators.
To load the extension:
Sequel.extension :pg_hstore_ops
The most common usage is taking an object that represents an SQL expression (such as a :symbol), and calling Sequel.hstore_op with it:
h = Sequel.hstore_op(:hstore_column)
If you have also loaded the pg_hstore extension, you can use Sequel.hstore as well:
h = Sequel.hstore(:hstore_column)
Also, on most Sequel expression objects, you can call the hstore method:
h = Sequel.expr(:hstore_column).hstore
If you have loaded the core_extensions extension, or you have loaded the core_refinements extension and have activated refinements for the file, you can also use Symbol#hstore:
h = :hstore_column.hstore
This creates a Sequel::Postgres::HStoreOp object that can be used for easier querying:
h - 'a' # hstore_column - CAST('a' AS text)
h['a']  # hstore_column -> 'a'
h.concat(:other_hstore_column)       # ||
h.has_key?('a')                      # ?
h.contain_all(:array_column)         # ?&
h.contain_any(:array_column)         # ?|
h.contains(:other_hstore_column)     # @>
h.contained_by(:other_hstore_column) # <@
h.defined        # defined(hstore_column)
h.delete('a')    # delete(hstore_column, 'a')
h.each           # each(hstore_column)
h.keys           # akeys(hstore_column)
h.populate(:a)   # populate_record(a, hstore_column)
h.record_set(:a) # (a #= hstore_column)
h.skeys          # skeys(hstore_column)
h.slice(:a)      # slice(hstore_column, a)
h.svals          # svals(hstore_column)
h.to_array       # hstore_to_array(hstore_column)
h.to_matrix      # hstore_to_matrix(hstore_column)
h.values         # avals(hstore_column)
See the PostgreSQL hstore function and operator documentation for more details on what these functions and operators do.
If you are also using the pg_hstore extension, you should load it before loading this extension. Doing so will allow you to use HStore#op to get an HStoreOp, allowing you to perform hstore operations on hstore literals.
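For example, a sketch assuming both extensions are loaded in that order (table and key names illustrative):

Sequel.extension :pg_hstore, :pg_hstore_ops
h = Sequel.hstore('a'=>'b').op
DB[:table].where(h.has_key?('a')).all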
The null_dataset extension adds the Dataset#nullify method, which returns a cloned dataset that will never issue a query to the database. It implements the null object pattern for datasets.
To load the extension:
Sequel.extension :null_dataset
The most common usage is probably in a method that must return a dataset, where the method knows the dataset shouldn't return anything. With standard Sequel, you'd probably just add a WHERE condition that is always false, but that still results in a query being sent to the database, and can be overridden using unfiltered, the OR operator, or a UNION.
Usage:
ds = DB[:items].nullify.where(:a=>:b).select(:c)
ds.sql # => "SELECT c FROM items WHERE (a = b)"
ds.all # => [] # no query sent to the database
Note that there is one case where a nullified dataset will send a query to the database. If you call columns on a nullified dataset and the dataset doesn't have an already cached version of the columns, it will create a new dataset with the same options to get the columns.
This extension uses Object#extend at runtime, which can hurt performance.
The pg_row_ops extension adds support to Sequel's DSL to make it easier to deal with PostgreSQL row-valued/composite types.
To load the extension:
Sequel.extension :pg_row_ops
The most common usage is passing an expression to Sequel.pg_row_op:
r = Sequel.pg_row_op(:row_column)
If you have also loaded the pg_row extension, you can use Sequel.pg_row as well:
r = Sequel.pg_row(:row_column)
Also, on most Sequel expression objects, you can call the pg_row method:
r = Sequel.expr(:row_column).pg_row
If you have loaded the core_extensions extension, or you have loaded the core_refinements extension and have activated refinements for the file, you can also use Symbol#pg_row:
r = :row_column.pg_row
There's only fairly basic support currently. You can use the [] method to access a member of the composite type:
r[:a] # (row_column).a
This can be chained:
r[:a][:b] # ((row_column).a).b
If you've loaded the pg_array_ops extension, there is also support for composite types that include arrays, or arrays of composite types:
r[1][:a] # (row_column[1]).a
r[:a][1] # (row_column).a[1]
The only other support is the splat method:
r.splat # (row_column.*)
The splat method is necessary if you are trying to reference a table's type when the table has the same name as one of its columns. For example:
DB.create_table(:a){Integer :a; Integer :b}
Let's say you want to reference the composite type for the table:
a = Sequel.pg_row_op(:a)
DB[:a].select(a[:b]) # SELECT (a).b FROM a
Unfortunately, that doesn't work, as it references the integer column, not the table. The splat method works around this:
DB[:a].select(a.splat[:b]) # SELECT (a.*).b FROM a
Splat also takes an argument which is used for casting. This is necessary if you want to return the composite type itself, instead of the columns in the composite type. For example:
DB[:a].select(a.splat).first
# SELECT (a.*) FROM a
# => {:a=>1, :b=>2}
By casting the expression, you can get a composite type returned:
DB[:a].select(a.splat(:a)).first
# SELECT (a.*)::a FROM a
# => {:a=>"(1,2)"} # or {:a=>{:a=>1, :b=>2}} if the "a" type
#    has been registered with the pg_row extension
This feature is mostly useful for a different way to graph tables:
DB[:a].join(:b, :id=>:b_id).select(Sequel.pg_row_op(:a).splat(:a),
                                   Sequel.pg_row_op(:b).splat(:b))
# SELECT (a.*)::a, (b.*)::b FROM a INNER JOIN b ON (b.id = a.b_id)
# => {:a=>{:id=>1, :b_id=>2}, :b=>{:id=>2}}
Adds the Sequel::Migration and Sequel::Migrator classes, which allow the user to easily group schema changes and migrate the database to a newer version or revert to a previous version.
To load the extension:
Sequel.extension :migration
This extension adds a statement cache to Sequel's postgres adapter, with the ability to automatically prepare statements that are executed repeatedly. When combined with the pg_auto_parameterize extension, it can take Sequel code such as:
DB.extension :pg_auto_parameterize, :pg_statement_cache
DB[:table].filter(:a=>1)
DB[:table].filter(:a=>2)
DB[:table].filter(:a=>3)
And use the same prepared statement to execute the queries.
The backbone of this extension is a modified LRU cache. It considers both the last executed time and the number of executions when determining which queries to keep in the cache. It only cleans the cache when a high water mark has been passed, and removes queries until it reaches the low water mark, in order to avoid thrashing when you are using more than the maximum number of queries. To avoid preparing queries when it isn't necessary, it does not prepare them on the server side unless they are being executed more than once. The cache is very tunable, allowing you to set the high and low water marks, the number of executions before preparing the query, and even use a custom callback for determining which queries to keep in the cache.
Note that automatically preparing statements does have some issues. Most notably, if you change the result type that the query returns, PostgreSQL will raise an error. This can happen if you have prepared a statement that selects all columns from a table, and then you add or remove a column from that table. This extension does attempt to check that case and clear the statement caches if you use alter_table from within Sequel, but it cannot fix the case when such a change is made externally.
This extension only works when the pg driver is used as the backend for the postgres adapter.
The filter_having extension allows Dataset#filter, and, or and exclude to operate on the HAVING clause if the dataset already has a HAVING clause, which was the historical behavior before Sequel 4. It is only recommended to use this for backwards compatibility.
You can load this extension into specific datasets:
ds = DB[:table]
ds.extension(:filter_having)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:filter_having)
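Once loaded, a sketch of the changed behavior (column names illustrative):

ds = DB[:table].group(:a).having{sum(:b) > 10}
ds.filter(:c=>1) # condition is added to the HAVING clause,
                 # since the dataset already has one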
The LooserTypecasting extension loosens the default database typecasting for the following types:
:float :: use to_f instead of Float()
:integer :: use to_i instead of Integer()
:decimal :: don't check string conversion with Float()
:string :: silently allow hash and array conversion to string
To load the extension into the database:
DB.extension :looser_typecasting
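A sketch of the difference, using Database#typecast_value directly (input value illustrative):

DB.typecast_value(:integer, '12a') # raises Sequel::InvalidValue by default
DB.extension :looser_typecasting
DB.typecast_value(:integer, '12a') # => 12 (uses to_i)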
The set_overrides extension adds the Dataset#set_overrides and Dataset#set_defaults methods which provide a crude way to control the values used in INSERT/UPDATE statements if a hash of values is passed to Dataset#insert or Dataset#update. It is only recommended to use this for backwards compatibility.
You can load this extension into specific datasets:
ds = DB[:table]
ds.extension(:set_overrides)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:set_overrides)
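For example, a sketch of the two methods (generated column order may vary):

ds = DB[:table]
ds.set_defaults(:a=>1).insert(:b=>2)  # INSERT INTO table (b, a) VALUES (2, 1)
ds.set_defaults(:a=>1).insert(:a=>3)  # INSERT INTO table (a) VALUES (3)
ds.set_overrides(:a=>1).insert(:a=>3) # INSERT INTO table (a) VALUES (1)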
The meta_def extension is designed for backwards compatibility with older Sequel code that uses the meta_def method on Database, Dataset, and Model classes and/or instances. It is not recommended for usage in new code. To load this extension:
Sequel.extension :meta_def
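A sketch of what it provides (method name illustrative):

Sequel.extension :meta_def
DB.meta_def(:app_name){'my_app'}
DB.app_name # => "my_app"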
The thread_local_timezones extension allows you to set a per-thread timezone that will override the default global timezone while the thread is executing. The main use case is for web applications that execute each request in its own thread, and want to set the timezones based on the request.
To load the extension:
Sequel.extension :thread_local_timezones
The most common example is having the database always store time in UTC, but have the application deal with the timezone of the current user. That can be done with:
Sequel.database_timezone = :utc

# In each thread:
Sequel.thread_application_timezone = current_user.timezone
This extension is designed to work with the named_timezones extension.
This extension adds the thread_application_timezone=, thread_database_timezone=, and thread_typecast_timezone= methods to the Sequel module. It overrides the application_timezone, database_timezone, and typecast_timezone methods to check the related thread local timezone first, and use it if present. If the related thread local timezone is not present, it falls back to the default global timezone.
There is one special case of note. If you have a default global timezone and you want to have a nil thread local timezone, you have to set the thread local value to :nil instead of nil:
Sequel.application_timezone = :utc
Sequel.thread_application_timezone = nil
Sequel.application_timezone # => :utc
Sequel.thread_application_timezone = :nil
Sequel.application_timezone # => nil
The connection_validator extension modifies a database's connection pool to validate that connections checked out from the pool are still valid, before yielding them for use. If it detects an invalid connection, it removes it from the pool and tries the next available connection, creating a new connection if no available connection is valid. Example of use:
DB.extension(:connection_validator)
As checking connections for validity involves issuing a query, which is potentially an expensive operation, the validation checks are only run if the connection has been idle for longer than a certain threshold. By default, that threshold is 3600 seconds (1 hour), but it can be modified by the user. Set it to -1 to validate connections on every checkout:
DB.pool.connection_validation_timeout = -1
Note that if you set the timeout to validate connections on every checkout, you should probably manually control connection checkouts on a coarse basis, using Database#synchronize. In a web application, the optimal place for that would be a rack middleware. Validating connections on every checkout without setting up coarse connection checkouts will hurt performance, in some cases significantly. Note that setting up coarse connection checkouts reduces the concurrency level achievable. For example, in a web application, using Database#synchronize in a rack middleware will limit the number of concurrent web requests to the number of connections in the database connection pool.
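A minimal sketch of such a middleware, assuming a DB constant (class name illustrative):

class CoarseConnectionCheckout
  def initialize(app, db)
    @app = app
    @db = db
  end

  # Hold a single connection for the duration of the request, so the
  # validation check runs at most once per request instead of per query.
  def call(env)
    @db.synchronize{@app.call(env)}
  end
end

# In config.ru:
# use CoarseConnectionCheckout, DB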
Note that this extension only affects the default threaded and the sharded threaded connection pool. The single threaded and sharded single threaded connection pools are not affected. As the only reason to use the single threaded pools is for speed, and this extension makes the connection pool slower, there's not much point in modifying this extension to work with the single threaded pools. The threaded pools work fine even in single threaded code, so if you are currently using a single threaded pool and want to use this extension, switch to using a threaded pool.
The pretty_table extension adds Sequel::Dataset#print and the Sequel::PrettyTable class for creating nice-looking plain-text tables. Example:
+--+-------+
|id|name   |
|--+-------|
|1 |fasdfas|
|2 |test   |
+--+-------+
You can load this extension into specific datasets:
ds = DB[:table]
ds.extension(:pretty_table)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:pretty_table)
The pg_array_ops extension adds support to Sequel's DSL to make it easier to call PostgreSQL array functions and operators.
To load the extension:
Sequel.extension :pg_array_ops
The most common usage is passing an expression to Sequel.pg_array_op:
ia = Sequel.pg_array_op(:int_array_column)
If you have also loaded the pg_array extension, you can use Sequel.pg_array as well:
ia = Sequel.pg_array(:int_array_column)
Also, on most Sequel expression objects, you can call the pg_array method:
ia = Sequel.expr(:int_array_column).pg_array
If you have loaded the core_extensions extension, or you have loaded the core_refinements extension and have activated refinements for the file, you can also use Symbol#pg_array:
ia = :int_array_column.pg_array
This creates a Sequel::Postgres::ArrayOp object that can be used for easier querying:
ia[1]    # int_array_column[1]
ia[1][2] # int_array_column[1][2]
ia.contains(:other_int_array_column)     # @>
ia.contained_by(:other_int_array_column) # <@
ia.overlaps(:other_int_array_column)     # &&
ia.concat(:other_int_array_column)       # ||
ia.push(1)    # int_array_column || 1
ia.unshift(1) # 1 || int_array_column
ia.any        # ANY(int_array_column)
ia.all        # ALL(int_array_column)
ia.dims       # array_dims(int_array_column)
ia.length     # array_length(int_array_column, 1)
ia.length(2)  # array_length(int_array_column, 2)
ia.lower      # array_lower(int_array_column, 1)
ia.lower(2)   # array_lower(int_array_column, 2)
ia.join           # array_to_string(int_array_column, '', NULL)
ia.join(':')      # array_to_string(int_array_column, ':', NULL)
ia.join(':', ' ') # array_to_string(int_array_column, ':', ' ')
ia.unnest     # unnest(int_array_column)
See the PostgreSQL array function and operator documentation for more details on what these functions and operators do.
If you are also using the pg_array extension, you should load it before loading this extension. Doing so will allow you to use PGArray#op to get an ArrayOp, allowing you to perform array operations on array literals.
The columns_introspection extension attempts to introspect the selected columns for a dataset before issuing a query. If it thinks it can guess correctly at the columns the query will use, it will return the columns without issuing a database query.
This method is not fool-proof; it's possible that some databases will use column names that Sequel does not expect. Also, it may not correctly handle all cases.
To attempt to introspect columns for a single dataset:
ds.extension(:columns_introspection)
To attempt to introspect columns for all datasets on a single database:
DB.extension(:columns_introspection)
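Once loaded, a sketch of a case it can handle (column names illustrative):

ds = DB[:table].select(:a, :b)
ds.columns # => [:a, :b], returned without issuing a query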
The query extension adds Sequel::Dataset#query which allows a different way to construct queries instead of the usual method chaining. See Sequel::Dataset#query for details.
You can load this extension into specific datasets:
ds = DB[:table]
ds.extension(:query)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:query)
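A sketch of the block-based style it enables (table and column names illustrative):

ds = DB[:items].query do
  select :x, :y
  where{x > 1}
  order :y
end
# Same as: DB[:items].select(:x, :y).where{x > 1}.order(:y)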
The server_block extension adds the Database#with_server method, which takes a shard argument and a block, and makes it so that access inside the block will use the specified shard by default.
First, you need to enable it on the database object:
DB.extension :server_block
Then you can call with_server:
DB.with_server(:shard1) do
  DB[:a].all                 # Uses shard1
  DB[:a].server(:shard2).all # Uses shard2
end
DB[:a].all # Uses default
You can even nest calls to with_server:
DB.with_server(:shard1) do
  DB[:a].all # Uses shard1
  DB.with_server(:shard2) do
    DB[:a].all # Uses shard2
  end
  DB[:a].all # Uses shard1
end
DB[:a].all # Uses default
Note that this extension assumes the following shard names should use the server/shard passed to with_server: :default, nil, :read_only. All other shard names will cause the standard behavior to be used.
The constraint_validations extension is designed to easily create database constraints inside create_table and alter_table blocks. It also adds relevant metadata about the constraints to a separate table, which the constraint_validations model plugin uses to setup automatic validations.
To use this extension, you first need to load it into the database:
DB.extension(:constraint_validations)
Note that you should only need to do this when modifying the constraint validations (i.e. when migrating). You should probably not load this extension in general application code.
You also need to make sure to add the metadata table for the automatic validations. By default, this table is called sequel_constraint_validations.
DB.create_constraint_validations_table
This table should only be created once. For new applications, you generally want to create it first, before creating any other application tables.
Because migrations instance_eval the up and down blocks on a database, using this extension in a migration can be done via:
Sequel.migration do
  up do
    extension(:constraint_validations)
    # ...
  end
  down do
    extension(:constraint_validations)
    # ...
  end
end
However, note that you cannot use change migrations with this extension, you need to use separate up/down migrations.
The API for creating the constraints with automatic validations is similar to the validation_helpers model plugin API. However, instead of having separate validates_* methods, it just adds a validate method that accepts a block to the schema generators. Like the create_table and alter_table blocks, this block is instance_evaled and offers its own DSL. Example:
DB.create_table(:table) do
  Integer :id
  String :name

  validate do
    presence :id
    min_length 5, :name
  end
end
instance_eval is used in this case because create_table and alter_table already use instance_eval, so losing access to the surrounding receiver is not an issue.
Here's a breakdown of the constraints created for each constraint validation method:
All constraints except unique unless :allow_nil is true :: CHECK column IS NOT NULL
presence (String column) :: CHECK trim(column) != ''
exact_length 5 :: CHECK char_length(column) = 5
min_length 5 :: CHECK char_length(column) >= 5
max_length 5 :: CHECK char_length(column) <= 5
length_range 3..5 :: CHECK char_length(column) >= 3 AND char_length(column) <= 5
length_range 3...5 :: CHECK char_length(column) >= 3 AND char_length(column) < 5
format /foo\d+/ :: CHECK column ~ 'foo\d+'
format /foo\d+/i :: CHECK column ~* 'foo\d+'
like 'foo%' :: CHECK column LIKE 'foo%'
ilike 'foo%' :: CHECK column ILIKE 'foo%'
includes ['a', 'b'] :: CHECK column IN ('a', 'b')
includes [1, 2] :: CHECK column IN (1, 2)
includes 3..5 :: CHECK column >= 3 AND column <= 5
includes 3...5 :: CHECK column >= 3 AND column < 5
unique :: UNIQUE (column)
There are some additional API differences:
Only the :message and :allow_nil options are respected. The :allow_blank and :allow_missing options are not respected.
A new option, :name, is respected, for providing the name of the constraint. It is highly recommended that you provide a name for all constraint validations, as otherwise, it is difficult to drop the constraints later.
The includes validation only supports an array of strings, an array of integers, or a range of integers.
There are like and ilike validations, which are similar to the format validation but use a case sensitive or case insensitive LIKE pattern. LIKE patterns are very simple, so many regexp patterns cannot be expressed by them, but only a couple databases (PostgreSQL and MySQL) support regexp patterns.
When using the unique validation, column names cannot have embedded commas. For similar reasons, when using an includes validation with an array of strings, none of the strings in the array can have embedded commas.
The unique validation does not support an arbitrary number of columns. For a single column, just the symbol should be used, and for an array of columns, an array of symbols should be used. There is no support for creating two separate unique validations for separate columns in a single call.
A drop method can be called with a constraint name in an alter_table validate block to drop an existing constraint and the related validation metadata (see the sketch after this list).
While it is allowed to create a presence constraint with :allow_nil set to true, doing so does not create a constraint unless the column has String type.
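A sketch of dropping a constraint validation inside alter_table (constraint name illustrative):

DB.alter_table(:table) do
  validate do
    drop :name_min_length
  end
end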
Note that this extension has the following issues on certain databases:
MySQL does not support check constraints (they are parsed but ignored), so using this extension does not actually set up constraints on MySQL, except for the unique constraint. It can still be used on MySQL to add the validation metadata so that the plugin can setup automatic validations.
On SQLite, adding constraints to a table is not supported, so it must be emulated by dropping the table and recreating it with the constraints. If you want to use this plugin on SQLite with an alter_table block, you should drop all constraint validation metadata using drop_constraint_validations_for(:table=>'table'), and then readd all constraints you want to use inside the alter table block, making no other changes inside the alter_table block.
Top level module for Sequel
There are some module methods that are added via metaprogramming, one for each supported adapter. For example:
DB = Sequel.sqlite # Memory database
DB = Sequel.sqlite('blog.db')
DB = Sequel.postgres('database_name', :user=>'user', :password=>'password',
  :host=>'host', :port=>5432, :max_connections=>10)
If a block is given to these methods, it is passed the opened Database object, which is closed (disconnected) when the block exits, just like a block passed to connect. For example:
Sequel.sqlite('blog.db'){|db| puts db[:users].count}
Sequel currently adds methods to the Array, Hash, String and Symbol classes by default. You can either require 'sequel/no_core_ext' or set the SEQUEL_NO_CORE_EXTENSIONS constant or environment variable before requiring sequel to have Sequel not add methods to those classes.
For a more expanded introduction, see the README. For a quicker introduction, see the cheat sheet.
Hash of adapters that have been used. The key is the adapter scheme symbol, and the value is the Database subclass.
Deprecated alias for HookFailed, kept for backwards compatibility
Array of all databases to which Sequel has connected. If you are developing an application that can connect to an arbitrary number of databases, delete the database objects from this or they will not get garbage collected.
Proc that is instance evaled to create the default inflections for both the model inflector and the inflector extension.
The major version of Sequel. Only bumped for major changes.
The minor version of Sequel. Bumped for every non-patch level release, generally around once a month.
The tiny version of Sequel. Usually 0, only bumped for bugfix releases that fix regressions from previous versions.
The version of Sequel you are using, as a string (e.g. "2.11.0")
Whether to cache the anonymous models created by Sequel::Model(). This is required for reloading them correctly (avoiding the superclass mismatch). True by default for backwards compatibility.
Sequel converts two digit years in Dates and DateTimes by default, so 01/02/03 is interpreted as January 2nd, 2003, and 12/13/99 is interpreted as December 13, 1999. You can override this to treat those dates as January 2nd, 0003 and December 13, 0099, respectively, by:
Sequel.convert_two_digit_years = false
Sequel can use either Time or DateTime for times returned from the database. It defaults to Time. To change it to DateTime:
Sequel.datetime_class = DateTime
For ruby versions less than 1.9.2, Time has a limited range (1901 to 2038), so if you use datetimes out of that range, you need to switch to DateTime. Also, before 1.9.2, Time can only handle local and UTC times, not other timezones. Note that Time and DateTime objects have a different API, and in cases where they implement the same methods, they often implement them differently (e.g. + using seconds on Time and days on DateTime).
Sets whether or not to attempt to handle NULL values correctly when given an empty array. By default:
DB[:a].filter(:b=>[])  # SELECT * FROM a WHERE (b != b)
DB[:a].exclude(:b=>[]) # SELECT * FROM a WHERE (b = b)
However, some databases (e.g. MySQL) will perform very poorly with this type of query. You can set this to false to get the following behavior:
DB[:a].filter(:b=>[])  # SELECT * FROM a WHERE 1 = 0
DB[:a].exclude(:b=>[]) # SELECT * FROM a WHERE 1 = 1
This may not handle NULLs correctly, but can be much faster on some databases.
Lets you create a Model subclass with its dataset already set. source should be an instance of one of the following classes:
Database :: Sets the database for this model to source. Generally only useful when subclassing directly from the returned class, where the name of the subclass sets the table name (which is combined with the Database in source to create the dataset to use).
Dataset :: Sets the dataset for this model to source.
other :: Sets the table name for this model to source. The class will use the default database for model classes in order to create the dataset.
The purpose of this method is to set the dataset/database automatically for a model class, if the table name doesn't match the implicit name. This is neater than using set_dataset inside the class, and doesn't require a bogus query for the schema.
# Using a symbol
class Comment < Sequel::Model(:something)
  table_name # => :something
end

# Using a dataset
class Comment < Sequel::Model(DB1[:something])
  dataset # => DB1[:something]
end

# Using a database
class Comment < Sequel::Model(DB1)
  dataset # => DB1[:comments]
end
# File lib/sequel/model.rb, line 37
def self.Model(source)
  if cache_anonymous_models && (klass = Sequel.synchronize{Model::ANONYMOUS_MODEL_CLASSES[source]})
    return klass
  end
  klass = if source.is_a?(Database)
    c = Class.new(Model)
    c.db = source
    c
  else
    Class.new(Model).set_dataset(source)
  end
  Sequel.synchronize{Model::ANONYMOUS_MODEL_CLASSES[source] = klass} if cache_anonymous_models
  klass
end
Returns true if the passed object could be a specifier of conditions, false otherwise. Currently, Sequel considers hashes and arrays of two element arrays as condition specifiers.
Sequel.condition_specifier?({})       # => true
Sequel.condition_specifier?([[1, 2]]) # => true
Sequel.condition_specifier?([])       # => false
Sequel.condition_specifier?([1])      # => false
Sequel.condition_specifier?(1)        # => false
# File lib/sequel/core.rb, line 113
def self.condition_specifier?(obj)
  case obj
  when Hash
    true
  when Array
    !obj.empty? && !obj.is_a?(SQL::ValueList) && obj.all?{|i| i.is_a?(Array) && (i.length == 2)}
  else
    false
  end
end
Creates a new database object based on the supplied connection string and optional arguments. The specified scheme determines the database class used, and the rest of the string specifies the connection options. For example:
DB = Sequel.connect('sqlite:/')          # Memory database
DB = Sequel.connect('sqlite://blog.db')  # ./blog.db
DB = Sequel.connect('sqlite:///blog.db') # /blog.db
DB = Sequel.connect('postgres://user:password@host:port/database_name')
DB = Sequel.connect('sqlite:///blog.db', :max_connections=>10)
If a block is given, it is passed the opened Database object, which is closed when the block exits. For example:
Sequel.connect('sqlite://blog.db'){|db| puts db[:users].count}
For details, see the "Connecting to a Database" guide. To set up a master/slave or sharded database connection, see the "Master/Slave Databases and Sharding" guide.
# File lib/sequel/core.rb, line 142
def self.connect(*args, &block)
  Database.connect(*args, &block)
end
Convert the exception to the given class. The given class should be Sequel::Error or a subclass. Returns an instance of klass with the message and backtrace of exception.
# File lib/sequel/core.rb, line 155
def self.convert_exception_class(exception, klass)
  return exception if exception.is_a?(klass)
  e = klass.new("#{exception.class}: #{exception.message}")
  e.wrapped_exception = exception
  e.set_backtrace(exception.backtrace)
  e
end
# File lib/sequel/deprecated_core_extensions.rb, line 1
def Sequel.core_extensions?
  true
end
# File lib/sequel/core.rb, line 79
def empty_array_handle_nulls=(v)
  Sequel::Deprecation.deprecate('Sequel.empty_array_handle_nulls=', 'Please switch to loading the empty_array_ignore_nulls plugin if you wish empty array handling to ignore nulls')
  @empty_array_handle_nulls = v
end
Load all Sequel extensions given. Extensions are just files that exist under sequel/extensions in the load path, and are just required. Generally, extensions modify the behavior of Database and/or Dataset, but Sequel ships with some extensions that modify other classes that exist for backwards compatibility. In some cases, requiring an extension modifies classes directly, and in others, it just loads a module that you can extend other classes with. Consult the documentation for each extension you plan on using for usage.
Sequel.extension(:schema_dumper)
Sequel.extension(:pagination, :query)
# File lib/sequel/core.rb, line 173
def self.extension(*extensions)
  extensions.each{|e| Kernel.require "sequel/extensions/#{e}"}
end
Set the method to call on identifiers going into the database. This affects the literalization of identifiers by calling this method on them before they are input. Sequel upcases identifiers in all SQL strings for most databases, so to turn that off:
Sequel.identifier_input_method = nil
to downcase instead:
Sequel.identifier_input_method = :downcase
Other String instance methods work as well.
# File lib/sequel/core.rb, line 188
def self.identifier_input_method=(value)
  Database.identifier_input_method = value
end
Set the method to call on identifiers coming out of the database. This affects the literalization of identifiers by calling this method on them when they are retrieved from the database. Sequel downcases identifiers retrieved for most databases, so to turn that off:
Sequel.identifier_output_method = nil
to upcase instead:
Sequel.identifier_output_method = :upcase
Other String instance methods work as well.
# File lib/sequel/core.rb, line 204
def self.identifier_output_method=(value)
  Database.identifier_output_method = value
end
Yield the Inflections module if a block is given, and return the Inflections module.
# File lib/sequel/model/inflections.rb, line 4
def self.inflections
  yield Inflections if block_given?
  Inflections
end
The exception class raised if there is an error parsing JSON. This can be overridden to use an alternative json implementation.
# File lib/sequel/core.rb, line 210
def self.json_parser_error_class
  JSON::ParserError
end
# File lib/sequel/core.rb, line 91
def k_require(*a)
  Sequel::Deprecation.deprecate('Sequel.k_require', 'Please switch to Kernel.require')
  Kernel.require(*a)
end
The preferred method for writing Sequel migrations, using a DSL:
Sequel.migration do
  up do
    create_table(:artists) do
      primary_key :id
      String :name
    end
  end
  down do
    drop_table(:artists)
  end
end
Designed to be used with the Migrator class, part of the migration extension.
# File lib/sequel/extensions/migration.rb, line 280
def self.migration(&block)
  MigrationDSL.create(&block)
end
Convert given object to json and return the result. This can be overridden to use an alternative json implementation.
# File lib/sequel/core.rb, line 216
def self.object_to_json(obj, *args)
  obj.to_json(*args)
end
Parse the string as JSON and return the result. This can be overridden to use an alternative json implementation.
# File lib/sequel/core.rb, line 222
def self.parse_json(json)
  JSON.parse(json, :create_additions=>false)
end
Convert each item in the array to the correct type, handling multi-dimensional arrays. For each element in the array or subarrays, call the converter, unless the value is nil.
# File lib/sequel/core.rb, line 237
def self.recursive_map(array, converter)
  array.map do |i|
    if i.is_a?(Array)
      recursive_map(i, converter)
    elsif i
      converter.call(i)
    end
  end
end
Require all given files which should be in the same or a subdirectory of this file. If a subdir is given, assume all files are in that subdir. This is used to ensure that the files loaded are from the same version of Sequel as this file.
# File lib/sequel/core.rb, line 251
def self.require(files, subdir=nil)
  Array(files).each{|f| super("#{File.dirname(__FILE__).untaint}/#{"#{subdir}/" if subdir}#{f}")}
end
Set whether Sequel is being used in single threaded mode. By default, Sequel uses a thread-safe connection pool, which isn't as fast as the single threaded connection pool, and also has some additional thread safety checks. If your program will only have one thread, and speed is a priority, you should set this to true:
Sequel.single_threaded = true
# File lib/sequel/core.rb, line 262
def self.single_threaded=(value)
  @single_threaded = value
  Database.single_threaded = value
end
Splits the symbol into three parts. Each part will either be a string or nil.
For columns, these parts are the table, column, and alias. For tables, these parts are the schema, table, and alias.
# File lib/sequel/core.rb, line 276
def self.split_symbol(sym)
  case s = sym.to_s
  when COLUMN_REF_RE1
    [$1, $2, $3]
  when COLUMN_REF_RE2
    [nil, $1, $2]
  when COLUMN_REF_RE3
    [$1, $2, nil]
  else
    [nil, s, nil]
  end
end
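For example, a sketch using the historical double-underscore qualification and triple-underscore aliasing notation:

Sequel.split_symbol(:table__column___alias) # => ["table", "column", "alias"]
Sequel.split_symbol(:table__column)         # => ["table", "column", nil]
Sequel.split_symbol(:column)                # => [nil, "column", nil]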
Converts the given string into a Date object.
Sequel.string_to_date('2010-09-10') # Date.civil(2010, 09, 10)
# File lib/sequel/core.rb, line 292
def self.string_to_date(string)
  begin
    Date.parse(string, Sequel.convert_two_digit_years)
  rescue => e
    raise convert_exception_class(e, InvalidValue)
  end
end
Converts the given string into a Time or DateTime object, depending on the value of Sequel.datetime_class.
Sequel.string_to_datetime('2010-09-10 10:20:30') # Time.local(2010, 09, 10, 10, 20, 30)
# File lib/sequel/core.rb, line 304
def self.string_to_datetime(string)
  begin
    if datetime_class == DateTime
      DateTime.parse(string, convert_two_digit_years)
    else
      datetime_class.parse(string)
    end
  rescue => e
    raise convert_exception_class(e, InvalidValue)
  end
end
Converts the given string into a Sequel::SQLTime object.
v = Sequel.string_to_time('10:20:30') # Sequel::SQLTime.parse('10:20:30')
DB.literal(v) # => '10:20:30'
# File lib/sequel/core.rb, line 320
def self.string_to_time(string)
  begin
    SQLTime.parse(string)
  rescue => e
    raise convert_exception_class(e, InvalidValue)
  end
end
Unless in single threaded mode, protects access to any mutable global data structure in Sequel. Uses a non-reentrant mutex, so calling code should be careful.
# File lib/sequel/core.rb, line 336
def self.synchronize(&block)
  @single_threaded ? yield : @data_mutex.synchronize(&block)
end
Uses a transaction on all given databases with the given options. This:
Sequel.transaction([DB1, DB2, DB3]){...}
is equivalent to:
DB1.transaction do
  DB2.transaction do
    DB3.transaction do
      ...
    end
  end
end
except that if Sequel::Rollback is raised by the block, the transaction is rolled back on all databases instead of just the last one.
Note that this method cannot guarantee that all databases will commit or rollback. For example, if DB3 commits but attempting to commit on DB2 fails (maybe because foreign key checks are deferred), there is no way to uncommit the changes on DB3. For that kind of support, you need to have two-phase commit/prepared transactions (which Sequel supports on some databases).
# File lib/sequel/core.rb, line 371
def self.transaction(dbs, opts={}, &block)
  unless opts[:rollback]
    rescue_rollback = true
    opts = opts.merge(:rollback=>:reraise)
  end
  pr = dbs.reverse.inject(block){|bl, db| proc{db.transaction(opts, &bl)}}
  if rescue_rollback
    begin
      pr.call
    rescue Sequel::Rollback
      nil
    end
  else
    pr.call
  end
end
Deprecated alias for Sequel.require, kept for backwards compatibility.
# File lib/sequel/core.rb, line 389
def self.ts_require(*args)
  Sequel::Deprecation.deprecate('Sequel.ts_require', 'Please switch to Sequel.require')
  require(*args)
end
# File lib/sequel/core.rb, line 393
def self.tsk_require(*args)
  Sequel::Deprecation.deprecate('Sequel.tsk_require', 'Please switch to Kernel.require')
  Kernel.require(*args)
end
The version of Sequel you are using, as a string (e.g. "2.11.0")
# File lib/sequel/version.rb, line 15
def self.version
  VERSION
end
If the supplied block takes a single argument, yield an SQL::VirtualRow instance to the block argument. Otherwise, evaluate the block in the context of a SQL::VirtualRow instance.
Sequel.virtual_row{a}         # Sequel::SQL::Identifier.new(:a)
Sequel.virtual_row{|o| o.a{}} # Sequel::SQL::Function.new(:a)
# File lib/sequel/core.rb, line 405
def self.virtual_row(&block)
  vr = VIRTUAL_ROW
  case block.arity
  when -1, 0
    vr.instance_exec(&block)
  else
    block.call(vr)
  end
end