From: Gregory W. <gw...@co...> - 2004-05-07 15:31:30
Hi,

I've been using HaskellDB to interface to a PostgreSQL database and have come across a problem with "SERIAL" fields.

In PostgreSQL, a SERIAL column is normally never written to; it is automatically assigned the next integer. When read, the column is never empty, so it carries the "non-null" attribute, and DBDirect therefore classifies it as not nullable. But since the column is never written in normal use, "non-null" here does not have its usual meaning of "must be present in an insert". In HaskellDB, however, failing to write the column causes a type error, as the positional parameters seem to be misaligned.

It is possible to work around this, but only very inefficiently: the maximum value of the SERIAL column can be read, and the next entry written with max() + 1 in that field. This involves scanning the whole table for every insertion, and it isn't possible to simply cache the maximum in the Haskell program, because it may not be the only client accessing the database. (A rough sketch of what I mean is in the P.S. below.)

Is there some simple way to fix this? I can imagine a record constructor that expands to nothing, allowing one to leave "non-null" fields empty in insertions. That could cause run-time errors, but it's no worse than the situation now: if you write a non-unique value to a SERIAL column, you get a run-time error anyway. Or should I just hack up the output of DBDirect by hand to make SERIAL fields nullable? That has the drawback of returning them as (Maybe ...) in queries, when they are really guaranteed to be non-null.

I admit that using many "SERIAL" fields is an ugly way to structure a database (I didn't write the schema). But it seems that schemas which depend on autoincrementing values use them everywhere, and they are very awkward to handle under the current HaskellDB arrangement.

Any ideas as to how I might proceed?

Best Wishes,
Greg

Gregory Wright
Antiope Associates LLC
gw...@an...
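
P.S. For concreteness, here is roughly what the max() + 1 workaround looks like. The table and field names (myTable, serialId, payload) are made up, and I'm writing the HaskellDB combinators (table, project, <<, #, !, constant, query, insert) from memory, so please read this as a sketch of the shape of the problem rather than as working code:

    -- Hypothetical DBDirect-style names; combinator spellings are approximate.
    insertWithSerial db newPayload =
        do -- Read back every value of the SERIAL column and take the maximum.
           -- This is the expensive part: it scans the whole table on every
           -- insertion.
           rs <- query db (do t <- table myTable
                              project (serialId << t!serialId))
           let next = 1 + maximum (0 : map (!serialId) rs)
           -- Insert with the SERIAL field filled in explicitly, since leaving
           -- it out is a type error.  If another client has inserted in the
           -- meantime, this fails at run time with a uniqueness violation.
           insert db myTable (serialId << constant next #
                              payload  << constant newPayload)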