Issues RUNNING mysql gem

Hi, all.

With unbelievable difficulty, I have the mysql gem installed without
warnings with the following components:

Mac OS X Snow Leopard 10.6.2
ruby 1.8.7 (2008-08-11 patchlevel 72) [universal-darwin10.0]
Rails 2.3.5
mysql (2.8.1) [gem version]
MySQL Server 5.1.44

I even rebuilt my whole project a few minutes ago, thinking an
unsuccessful previous install might have failed to generate my database.
Yet when I run rake db:migrate, I get the following error:

iMacMike:pfmpe mikemontagne$ rake db:migrate --trace
(in /Users/mikemontagne/rproj/pfmpe)
** Invoke db:migrate (first_time)
** Invoke environment (first_time)
** Execute environment
** Execute db:migrate
rake aborted!
Unknown database 'pfmpe_development'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:589:in `real_connect'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:589:in `connect'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:203:in `initialize'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `new'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `mysql_connection'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:435:in `initialize'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `new'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `up'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:383:in `migrate'
/Library/Ruby/Gems/1.8/gems/rails-2.3.5/lib/tasks/databases.rake:116
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:617:in `call'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:617:in `execute'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:612:in `each'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:612:in `execute'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:578:in `invoke_with_call_chain'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:571:in `invoke_with_call_chain'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:564:in `invoke'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:2019:in `invoke_task'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1997:in `top_level'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1997:in `each'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1997:in `top_level'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:2036:in `standard_exception_handling'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1991:in `top_level'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1970:in `run'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:2036:in `standard_exception_handling'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1967:in `run'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/bin/rake:31
/usr/bin/rake:19:in `load'
/usr/bin/rake:19


Excuse me for posting all of this, but I thought there might be
something someone else would catch in this that I didn’t.

In any case, yes, MySQL Server is running. I’m using MySQL Workbench
5.2.16 beta for local administration. The root account requires no
password, and these are the functional contents of my database.yml:

development:
  adapter: mysql
  encoding: utf8
  reconnect: false
  database: pfmpe_development
  pool: 5
  username: root
  password:
  socket: /tmp/mysql.sock

# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.

test:
  adapter: mysql
  encoding: utf8
  reconnect: false
  database: pfmpe_test
  pool: 5
  username: root
  password:
  socket: /tmp/mysql.sock

production:
  adapter: mysql
  encoding: utf8
  reconnect: false
  database: pfmpe_production
  pool: 5
  username: root
  password:
  socket: /tmp/mysql.sock

Does anybody have any idea what I still have fudged up here? Why isn’t
the database being created? Are we supposed to manually create our
database schema when working with the MySQL gem?

Regards,

mike

On Sun, Mar 21, 2010 at 6:25 PM, Mike M. [email protected]
wrote:

Does anybody have any idea what I still have fudged up here? Why isn’t
the database being created? Are we supposed to manually create our
database schema when working with the MySQL gem?

You have to create your databases regardless of what DB you’re
using – as I just pointed out in your previous thread.

Are you reading any of the responses?


Hassan S. ------------------------ [email protected]
twitter: @hassan

Sorry for this. It’s my first go-around with RoR/MySQL. Got it:

rake db:create RAILS_ENV=development

Then:

rake db:migrate --trace
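
(A note in passing: a bare rake db:create already defaults to the
development environment, and Rails 2.3 also ships a task that creates
every database defined in config/database.yml in one go:

rake db:create       # creates the database for the current RAILS_ENV
rake db:create:all   # creates the development, test, and production databases

Both read their settings from config/database.yml.)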


Still, if anybody has any general pointers about how to go about
developing MySQL locally… much appreciated.

Regards,

mike

Hassan S. wrote:

Are you reading any of the responses?


Hassan S. ------------------------ [email protected]
twitter: @hassan

:)

Sorry, Hassan. I hadn’t seen your response yet. Up until I posted, I had
read the whole thread.

Thanks for the answer, and let me ask you one thing then:

What’s the basic procedure for local development and deployment of the
tables? Do you suffer the database-agnostic field definitions of the
local development environment (db:create and db:migrate); or, wanting to
test with real field definitions, do you go ahead and refine your table
declarations locally, as you would in what you deploy to the server?

Thanks,

mike

On Sun, Mar 21, 2010 at 7:24 PM, Mike M. [email protected]
wrote:

What’s the basic procedure for local development and deployment of the
tables? Do you suffer the database-agnostic field definitions of the
local development environment (db:create and db:migrate); or, wanting to
test with real field definitions, do you go ahead and refine your table
declarations locally, as you would in what you deploy to the server?

I’m not sure what you mean by “real field definitions” – can you go
into a little more detail? What is it you think you can’t do within the
scope of migrations?

And “deploy to the server” can cover a lot of different scenarios, so
you might want to describe what you mean by that, too.


Hassan S. ------------------------ [email protected]
twitter: @hassan

Hassan S. wrote:

On Sun, Mar 21, 2010 at 7:24 PM, Mike M. [email protected]
wrote:

What’s the basic procedure for local development and deployment of the
tables? Do you suffer the database-agnostic field definitions of the
local development environment (db:create and db:migrate); or, wanting to
test with real field definitions, do you go ahead and refine your table
declarations locally, as you would in what you deploy to the server?

I’m not sure what you mean by “real field definitions” – can you go
into a little more detail? What is it you think you can’t do within the
scope of migrations?

And “deploy to the server” can cover a lot of different scenarios, so
you might want to describe what you mean by that, too.


Hassan S. ------------------------ [email protected]
twitter: @hassan

  1. “Real field definitions”:

There’s a big difference between scaffolding:

script/generate scaffold tablename amendment_id:integer
first_name:string middle_name:string last_name:string
anonymous_signature:boolean email_address:string email_bounces:integer
referrer_email:string referrer_id:integer address1:string
address2:string city:string c_code:string
state_province_or_territory:string zip_or_postal_code:string
comment:string

And the usual SQL route to creating a table:

CREATE TABLE tablename (
  id integer NOT NULL AUTO_INCREMENT PRIMARY KEY,
  ip integer,
  amendment_id integer,
  first_name varchar(25),
  middle_name varchar(25),
  last_name varchar(30),
  anonymous_signature tinyint,
  email_address varchar(100),
  email_bounces tinyint,
  referrer_email varchar(100),
  address1 varchar(100),
  address2 varchar(100),
  city varchar(25),
  c_code varchar(2),
  state_province_or_territory varchar(2),
  zip_or_postal_code varchar(10),
  comment varchar(125),
  created_at datetime,
  updated_at datetime
) ENGINE = INNODB;

In the former, you’re not even passing the environment the information
it needs to build the tables you want. It is critical, then, to convey
your intended design to deployment. A varchar(25) (the intended field
definition) requires a tenth of the disk resources of a string field
(which translates to varchar(255)).

Migrating the database-agnostic definitions of scaffolding therefore
does not produce the intended schema result (see the sketch after this
list).

  2. Thus the concern about migrating the scaffolding schema’s limited
    means of defining an intended schema with automated deployment
    processes, which still lack even the data from which your intended
    schema could be deduced. You’ve never provided it.
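
For reference, the migration that the scaffold command above generates
looks roughly like this (abridged and approximate – the exact output
varies with the Rails version):

class CreateTablenames < ActiveRecord::Migration
  def self.up
    create_table :tablenames do |t|
      t.integer :amendment_id
      t.string :first_name   # emitted as varchar(255) in MySQL by default
      t.string :c_code       # likewise varchar(255), though 2 characters would do
      # ... the remaining columns follow the same pattern ...
      t.timestamps
    end
  end

  def self.down
    drop_table :tablenames
  end
end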

So, what I’m asking is, how do you usually handle these issues in RoR?
Do you just tolerate the consequences of scaffolding; or do you test and
develop on your intended schema; and, if you modify the schema to your
intentions with additional tools, is that preserved or destroyed by
further migrations?

My inclination is to want to revise/refine the resultant schema to my
intended table design. I’m also concerned about how RoR deployment might
inadvertently affect my schema on a destination server. How do you
ensure migrations to a production table don’t affect the intended schema
of installed tables?

These are regular concerns for anyone with a background in RDBMS, just
now getting their feet wet in RoR.

Thanks,

m

On Sun, Mar 21, 2010 at 9:43 PM, Mike M. [email protected]
wrote:

These are regular concerns for anyone with a background in RDBMS, just
now getting their feet wet in RoR.

Let it go, at least for a short vacation :)

Do a quick search on “premature optimization”.

Consider that the whole point of an agile framework like Rails is to
facilitate getting features to customers, quickly.

Having to optimize your app to handle too-many-page-views is a
good problem, but you can deal with it closer to the actual event.


Hassan S. ------------------------ [email protected]
twitter: @hassan

Hassan S. wrote:

On Sun, Mar 21, 2010 at 9:43 PM, Mike M. [email protected]
wrote:

These are regular concerns for anyone with a background in RDBMS, just
now getting their feet wet in RoR.

Let it go, at least for a short vacation :)

Do a quick search on “premature optimization”.

Consider that the whole point of an agile framework like Rails is to
facilitate getting features to customers, quickly.

Having to optimize your app to handle too-many-page-views is a
good problem, but you can deal with it closer to the actual event.


Hassan S. ------------------------ [email protected]
twitter: @hassan

No sarcasm intended… but I’m not about to buy the idea that “premature
optimization” is the root of all evil; nor that starting out with
obtuse translations of data types, which broadly impose a tenfold
overuse of resources, is “agile,” when the necessary definitions of
the field types are obvious from the start. “Agility” (IMO) would be
to broadcast those obvious definitions to every phase of your design –
at least so that, with great grief, you aren’t forced to revise them
later. After the short vacation, then, what’s the solution? We sit around
at our desks and exult in the fact that we proved a theory of “premature
optimization”?

No way. If it’s worth doing, it’s worth doing right. We’re not talking
about “premature” optimization. We’re talking about starting out with a
concept which will get us there with the least stress all the way. That
goal isn’t a challenge to be deferred. It’s a fundamental and
indispensable building block of not turning a small job into a lifetime job.

I’m not saying this just to argue. I don’t care if you dispense with
database design right out of the gate, even though it’s such a basic
issue that any reasonably skilled RDBMS developer can envision that
design right out of the gate (with little downstream refinement required,
at least)… Real agility would be to record that prescription in the
“agile framework” once, from the beginning, so that it’s a job done, and
you can proceed “agilely” from the beginning.

Let’s not argue about this; let’s get at the answers – become truly
agile as agile can be.

No one betters the path to the end by purposely starting off with a
wrong principle, which a) simply multiplies work, at least because it has
to be straightened out somewhere; and which b) circumvents a whole set
(usually a huge one) of facts which, as a consequence, are never “tested”
under real conditions. The assumption that the latter is “premature
optimization” is a huge contributing factor in what is wrong with
software today. How does my “agile” application react to 3-character
input to an intended (and obvious) 2-character field, when my “agile”
(incoherent?) development environment simply won’t accept the
2-character field definition (or will it? – that was my question) –
instead translating it to a whopping 255 characters? (!)

The answer to my question should not only be easy… it should be a
fundamental skill. You mean nobody cares? A field has to handle
2-character country codes for instance, and every “agile” developer
simply lives with 255-character, auto-translated field definitions, to
“deal with” that later – on a living system?

No way.

I was hoping you were going to tell me that all I have to do is, instead
of:

def self.up
  create_table :amendments do |t|
    t.string :c_code
    t.string :c_name
    t.integer :tran_id
    t.integer :priority

    t.timestamps
  end
end

I could replace this with some syntax for executing SQL (which I’m about
to look up all the same):

def self.up
  execute <<-SQL
    CREATE TABLE amendments (
      id integer NOT NULL AUTO_INCREMENT PRIMARY KEY,
      c_code varchar(2),
      c_name varchar(35),
      tran_id integer,
      priority tinyint,
      created_at datetime,
      updated_at datetime
    ) ENGINE=InnoDB;
  SQL
end

So this would fit my concept of “agility,” because the design is
obvious, and because we do this obvious chore once – eliminating the
unnecessary downstream work.

My concerns about this still are: does our agile framework want/insist
upon creating the timestamp fields (which this SQL CREATE would
produce)? Does the framework itself produce the id field as well? Do we
leave these fields out of our SQL statement? Can I just change the
template code to such an SQL statement and BE agile?

That’s my question. I don’t want to start out, obviously needing to plan
for 2-character fields, and suffer whatever downstream ramifications
of an “agility” which requires that I alter my table structure to the
obvious, intended structure (as opposed to “premature optimization”)…
with all the resultant, unnecessary difficulties of doing so on a
“production” database, hobbled by a notion that declaring and working
from merely obvious principles is “premature optimization.”

So let’s not argue about this. I should be able to start with my
intended design far faster than I can eventually modify a wrong design
– imposed in the name of “agility” – into the right one (which, at the
least, is twice the work).

Regards,

mike

On 22 March 2010 16:08, Mike M. [email protected] wrote:

So let’s not argue about this. I should be able to start with my
intended design far faster than I can eventually modify a wrong design
– imposed in the name of “agility” – into the right one (which, at the
least, is twice the work).

If you don’t like the field definitions generated by the scaffold then
you can edit the migration before you run it, or run another migration
later. Is there anything in your sql table creation that cannot be
specified in migration code?
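
(To illustrate the “run another migration later” option – a minimal
sketch, with illustrative table and column names:

class TightenCCode < ActiveRecord::Migration
  def self.up
    # shrink a scaffold-generated varchar(255) down to 2 characters
    change_column :amendments, :c_code, :string, :limit => 2
  end

  def self.down
    change_column :amendments, :c_code, :string
  end
end
)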

Colin

On 22 March 2010 04:43, Mike M. [email protected] wrote:

In the former, you’re not even passing the information to the
environment to build the tables you want. Of course then, it’s critical
to convey your intended design to deployment. A varchar(25) (intended
field designation) requires 1/10th the disk resources that a string
field (which translates to varchar(255)).

In the case of MySQL, I believe the length of a varchar only
specifies the maximum; it does not affect the actual space used
in the db, as the data is stored as variable-length records. I don’t
know about PostgreSQL.

Colin

Colin L. wrote:

On 22 March 2010 04:43, Mike M. [email protected] wrote:

In the former, you’re not even passing the information to the
environment to build the tables you want. Of course then, it’s critical
to convey your intended design to deployment. A varchar(25) (intended
field designation) requires 1/10th the disk resources that a string
field (which translates to varchar(255)).

In the case of MySQL, I believe the length of a varchar only
specifies the maximum; it does not affect the actual space used
in the db, as the data is stored as variable-length records. I don’t
know about PostgreSQL.

Colin

If that were the case, there would be little if any reasonable purpose
in specifying a VARCHAR() length. What you say may apply to the various
text field types, where a buffer of the maximum length would handle a
field of any length up to that maximum… and the data could (best) be
stored as a variable-length stream – thus conserving disk space.

As to your preceding post:

If you don’t like the field definitions generated by the scaffold then
you can edit the migration before you run it, or run another migration
later.

No can do:

Scaffold will accept syntax like this:

script/generate scaffold amendment c_code:string c_name:string
tran_id:integer priority:integer

It will not accept syntax like this:

script/generate scaffold amendment c_code:varchar(2) c_name:varchar(35)
tran_id:tinyint priority:tinyint

Nor can you change the field prescriptions in the
xxxxdatetimexxxx_amendments.rb schema.

Is there anything in your sql table creation that cannot be
specified in migration code?

All of the above.

Worse, I have to support huge traffic and humongous amounts of data
which make it absolutely essential to conserve disk volume. If
deployment happened to translate these incoherent field definitions to
my actual production tables, I’d be in huge trouble.

But these are just simple, basic concerns for any implementation.

Does anybody handle these issues?

I’m not asking for the sake of a discussion. Here’s something like what
I would like to do in the xxxxdatetimexxxx_amendments.rb schema for
instance:

class CreateAmendments < ActiveRecord::Migration
  def self.up
    # raw DDL in place of create_table
    execute <<-SQL
      CREATE TABLE amendments (
        id integer NOT NULL AUTO_INCREMENT PRIMARY KEY,
        c_code varchar(2),
        c_name varchar(35),
        tran_id integer,
        priority tinyint,
        created_at datetime,
        updated_at datetime
      ) ENGINE=InnoDB;
    SQL
  end
end
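
(A related caveat, as an aside: once a migration uses raw SQL like this,
the default schema.rb dump cannot represent the vendor-specific pieces.
Rails 2.3 can dump the schema as SQL instead – a sketch:

# config/environment.rb
config.active_record.schema_format = :sql

With that set, rake db:structure:dump writes a *_structure.sql file
under db/, and the test database is rebuilt from that file rather than
from schema.rb.)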

So this is a matter of a simple question:

Is it feasible to do things this way?

Answering that question to the standard it will be tested against means
knowing what the ramifications and requisites of doing so are. Do I, for
instance, need to specify the id field and the “timestamps” fields (which
are actually, of course, DATETIME values)?

This sort of technique is what I envisioned having to do to preserve
the goals of my design schema. What we’re calling “agile” abstracts
the processes so far away from us that the abstraction is a huge
interference with doing the job right – unless of course someone can
answer this simple question: Is it feasible to do things this way?

If not – that’s certainly not agility. End of story.

If so – how? What do we have to do to negotiate our intentions
throughout design, testing, and deployment?

Regards,

mike

As others have posted, modify your migration file before running it if
you are concerned about the size of varchars.

For example, you can do the following:

def self.up
  create_table :mytable, :force => true do |t|
    t.string :field_name, :null => false, :limit => 50,
             :default => "Some value"
  end
end

Look into the foreign_key_migrations gem if you want to set up PK/FK
relationships in your migrations. I have not discovered anything I
can’t handle in a migration for table creation/alteration.

E. Litwin wrote:

As others have posted, modify your migration file before running it if
you are concerned about the size of varchars.

For example, you can do the following:

def self.up
  create_table :mytable, :force => true do |t|
    t.string :field_name, :null => false, :limit => 50,
             :default => "Some value"
  end
end

Look into the foreign_key_migrations gem if you want to set up PK/FK
relationships in your migrations. I have not discovered anything I
can’t handle in a migration for table creation/alteration.

Primary and foreign key migration is supported. It’s not the issue.

Have you found that you can migrate varchar() field types as varchar()
field types, instead of strings?

If so, how? That would answer the question.

E. Litwin wrote:

As others have posted, modify your migration file before running it if
you are concerned about the size of varchars.

For example, you can do the following:

def self.up
  create_table :mytable, :force => true do |t|
    t.string :field_name, :null => false, :limit => 50,
             :default => "Some value"
  end
end

Look into the foreign_key_migrations gem if you want to set up PK/FK
relationships in your migrations. I have not discovered anything I
can’t handle in a migration for table creation/alteration.


Quote from resource:

Migrations support all the basic data types: string, text, integer,
float, datetime, timestamp, time, date, binary and boolean:

string - is for small data types such as a title.
text - is for longer pieces of textual data, such as the description.
integer - is for whole numbers.
float - is for decimals.
datetime and timestamp - store the date and time into a column.
date and time - store either the date only or time only.
binary - is for storing data such as images, audio, or movies.
boolean - is for storing true or false values.


You don’t see any VARCHAR(), do you? Any TINYINT, SMALLINT, BIGINT,
DOUBLE, DOUBLE PRECISION, TINYBLOB, TINYTEXT, BLOB, TEXT, MEDIUMBLOB,
MEDIUMTEXT, LONGBLOB, LONGTEXT, ENUM, or SET, do you?

Neither do I.

We know that MySQL doesn’t support a native boolean type – of course, we
use TINYINT with 0 or 1 for false/true respectively.
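
(The Rails MySQL adapter performs that translation itself – a small
illustration, with a hypothetical column name:

t.boolean :anonymous_signature   # emitted as anonymous_signature tinyint(1) in MySQL

so a migration-level boolean maps onto MySQL’s TINYINT(1) convention.)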

But unless we can specify our schema with an SQL statement, it doesn’t
look like there’s much of a way even to support many vital schemas, much
less migrate them.

Is there a trick to coerce scaffolding into creating such fields? I
haven’t found one, or I wouldn’t be asking.

Is it “premature optimization” to see that I’m going to run into trouble
here if I need these field types? It never has been in any other tools
I’ve worked with; in fact, it’s regular practice.

So the question remains, how do you support migration of these vital
field types? Answering that does not involve imagining it doesn’t
matter, or that we can’t account for it and that that’s just fine.

Hassan S. wrote:

On Mon, Mar 22, 2010 at 4:03 PM, Mike M. [email protected]
wrote:

So the question remains, how do you support migration of these vital
field types?

They are apparently vital to you; do not assume that’s true of every
developer and every situation.

Yes, you can use your own custom SQL in a migration. If you are
so focused on this level of control I have to wonder why you haven’t
(apparently) read the API doc for ActiveRecord::Migration.

The downside (of at least the examples you’ve used) is you’ve tied
yourself to a specific DB vendor’s product. I’m currently working on
more instances than not where multiple vendors are involved.

Leaving the DB description in Rails’ abstract form means I can work
with those different DBs easily. That’s much more important to my
projects than saving 253 bytes of disk space somewhere.

FWIW,

Hassan S. ------------------------ [email protected]
twitter: @hassan

Of course, you recognize the importance then – “premature optimization”
is not about accounting for your issues to the depth of the first 253
bytes… half a dozen columns across half a dozen tables – some of which
may contain up to dozens of millions of records. It’s not about 253
bytes. It’s about 253 bytes per field, per so many records, per so many
tables.

As to why you’re merging tables from multiple vendors, I have no idea.
But certainly still, unless a vendor doesn’t support a vital field type
(in which case, your vendor would be chosen for your needed support),
there’s no limitation imposed by specifying a data schema you would want
to preserve for a great variety of reasons; nor, certainly, would an
RoR ability to sustain a needed schema inhibit your development
capabilities.

On the contrary, only if you accept the cost of 253 bytes, on whatever
the scale, at every turn, would the ability to sustain an intended data
schema inhibit your project development – and then, only to the degree
of specifying the translated field types, instead of the specific types
most of us require for better reasons.


On Mon, Mar 22, 2010 at 4:58 PM, Mike M. [email protected]
wrote:

Of course, you recognize the importance then – “premature optimization”
is not about accounting for your issues to the depth of the first 253
bytes… half a dozen columns across half a dozen tables – some of which
may contain up to dozens of millions of records.

I do lots of small projects, where tables may contain, oh, dozens of
records, maybe hundreds :)

Your concern about disk usage simply doesn’t apply.

As to why you’re merging tables from multiple vendors, I have no idea.

I’m not talking about “merging tables”; I’m talking about running the
app on different DBs.

That’s simply incompatible with your examples, and is why abstraction
of DB characteristics is a good thing for many developers.


Hassan S. ------------------------ [email protected]
twitter: @hassan

Hassan S. wrote:


I have to wonder why you haven’t
(apparently) read the API doc for ActiveRecord::Migration.

FWIW,

Hassan S. ------------------------ [email protected]
twitter: @hassan

It’s your assumption that I’m not studying the API documentation. I’ve
had the pages open for days, and I just posted some material from the
API documentation. Your point would be that if you had studied the API
documentation and knew how to implement the usual goals of database
schemas, you might cite what I’ve missed.

I’m waiting for a reply from a developer in India on one possibility (he
might know).

The closest I’ve found in the API documentation so far is, type_to_sql
in ActiveRecord::ConnectionAdapters::MysqlAdapter.

Here’s its source implementation:

# File lib/active_record/connection_adapters/mysql_adapter.rb, line 493
def type_to_sql(type, limit = nil, precision = nil, scale = nil)
  return super unless type.to_s == 'integer'

  case limit
  when 1; 'tinyint'
  when 2; 'smallint'
  when 3; 'mediumint'
  when nil, 4, 11; 'int(11)'  # compatibility with MySQL default
  when 5..8; 'bigint'
  else raise(ActiveRecordError, "No integer type has byte size #{limit}")
  end
end
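
(Reading that source, incidentally, suggests the integer widths are
reachable from a plain migration via :limit – a sketch, with
illustrative column names:

t.integer :priority,  :limit => 1   # maps to tinyint
t.integer :hit_count, :limit => 8   # maps to bigint
)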

That’s a far cry from handling all the field types, isn’t it?

Or is it?

Do you know how to implement that to produce VARCHAR() lengths from
1..255, even if it isn’t important to you?

Why make this a spitting match? Either you know how to do this or you
don’t.

On Mon, Mar 22, 2010 at 5:21 PM, Mike M. [email protected]
wrote:

It’s your assumption that I’m not studying the API documentation.

Just from the fact that you seemed in doubt whether you could use
raw SQL in a migration; the doc has an example of doing just that.

Do you know how to implement that to produce VARCHAR() lengths from
1..255, even if it isn’t important to you?

Why make this a spitting match? Either you know how to do this or you
don’t.

Who’s making it a “spitting match”? I do know, and it’s in the document
I already pointed you at – the ActiveRecord::Migration API.

Simple example:

class CreateDogs < ActiveRecord::Migration
  def self.up
    create_table :dogs do |t|
      t.string :name,     :limit => 100
      t.string :nickname, :limit => 242
      t.string :weight,   :limit => 5
      t.string :country,  :limit => 2

      t.timestamps
    end
  end

  def self.down
    drop_table :dogs
  end
end

mysql> show create table dogs\G
*************************** 1. row ***************************
       Table: dogs
Create Table: CREATE TABLE `dogs` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
  `nickname` varchar(242) COLLATE utf8_unicode_ci DEFAULT NULL,
  `weight` varchar(5) COLLATE utf8_unicode_ci DEFAULT NULL,
  `country` varchar(2) COLLATE utf8_unicode_ci DEFAULT NULL,
  `created_at` datetime DEFAULT NULL,
  `updated_at` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
1 row in set (0.00 sec)

mysql>


Hassan S. ------------------------ [email protected]
twitter: @hassan

On Mon, Mar 22, 2010 at 5:26 PM, Mike M. [email protected]
wrote:

You’re the first I’ve heard of who is unconcerned with redundantly
upscaling data storage resources on a scale of 255/2, ostensibly to
support an application on different engines, when all you’d have to do
to do so properly would be to conditionally implement an explicit schema
– calling such a simple thing “premature optimization.”

I have no idea what you’re going on about – “ostensibly”, as in you
don’t believe anyone would ever use different databases at different
stages of a single app’s lifecycle? Seriously?

Believe me, I already understand why.

? “why” what?


Hassan S. ------------------------ [email protected]
twitter: @hassan