The following tables have "is_obsolete" fields of differing data types:
MySQL, Postgres:
    cvterm   INT
    pub      BOOLEAN
    feature  BOOLEAN
    stock    BIT

Sybase:
    cvterm   TINYINT
    pub      BIT
    feature  BIT
    stock    BIT
MySQL treats BOOLEAN as a synonym for TINYINT(1): http://dev.mysql.com/doc/refman/5.0/en/numeric-type-overview.html
This means the following query is valid in MySQL but will fail in Postgres:

    select feature_id, is_obsolete from feature where is_obsolete = 0 limit 3;
This will work in both MySQL and Postgres:

    select feature_id, is_obsolete from feature where is_obsolete = 'f' limit 3;
I think it would be a good idea to change all is_obsolete columns to an INT or TINYINT data type in order to:
a) standardize our SQL across database vendors and avoid problems like the above, and
b) support the use case of is_obsolete in the cvterm table, which anticipates multiple obsolete versions of an object.
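A hypothetical sketch of point (b), assuming is_obsolete is an integer. The table contents are invented, and Python's sqlite3 stands in for the real database just to keep the example runnable:

```python
import sqlite3

# Hypothetical data: with an integer is_obsolete, the value can count
# obsoleted versions of a term. 0 means current; N marks the Nth
# obsoleted version of the same name.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cvterm (name TEXT, is_obsolete INTEGER)")
conn.executemany("INSERT INTO cvterm VALUES (?, ?)",
                 [("wing", 0),        # current term
                  ("wing vein", 1),   # first obsoleted version
                  ("wing vein", 2)])  # second obsoleted version

# "Is it obsolete at all?" remains a simple non-zero test.
n_obsolete = conn.execute(
    "SELECT COUNT(*) FROM cvterm WHERE is_obsolete > 0").fetchone()[0]
print(n_obsolete)  # prints 2
```

A BOOLEAN column cannot represent the distinction between the two obsoleted versions above; an integer column can, without losing the simple boolean-style test.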
I'd prefer to see us move toward the GMOD standard schema when possible, rather than even further away from it. Our API here is Perl, and DBI usually handles the casting of these values when they are bound via placeholders. While a literal 0 fails against a BOOLEAN field in PostgreSQL, for example, it gets cast correctly by the DBD if placeholders are used.
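Roughly what the placeholder behaviour looks like, sketched with Python's sqlite3 driver rather than Perl DBI, and SQLite rather than Postgres, so this is purely illustrative of the principle:

```python
# Illustrative only: when a value goes in through a placeholder, the
# driver adapts it to the column's storage type, so the caller never
# writes a vendor-specific literal (0 vs 'f') into the SQL text.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feature (feature_id INTEGER, is_obsolete BOOLEAN)")
conn.executemany("INSERT INTO feature VALUES (?, ?)",
                 [(1, False), (2, True), (3, False)])

# A Python bool passed via a placeholder is adapted by the driver;
# no literal 0 or 'f' appears in the statement itself.
rows = conn.execute(
    "SELECT feature_id FROM feature WHERE is_obsolete = ?", (False,)
).fetchall()
print([r[0] for r in rows])  # prints [1, 3]
```

The equivalent in Perl DBI would bind the value through `execute`'s placeholder arguments, letting the DBD handle the cast per vendor.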
In the feature table, for example, is_obsolete is BOOLEAN as defined by GMOD. I'd rather switch it back to that, and only deviate from the public standard when a datatype has no match in the target database vendor.