| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2004 | | 6 | 11 | 5 | 23 | | 5 | | 13 | | 10 | 1 |
| 2005 | 1 | 18 | | 5 | 6 | 2 | 2 | 2 | 10 | | 1 | 5 |
| 2006 | 2 | | 11 | | | | 34 | 5 | | 1 | | |
| 2007 | 4 | | 2 | 1 | | | | 3 | 5 | 3 | 14 | 15 |
| 2008 | 13 | 3 | 12 | 16 | 4 | 2 | 26 | | | | | 3 |
| 2009 | | 4 | 13 | 22 | 25 | 2 | 10 | 2 | 41 | 5 | 9 | |
| 2010 | 3 | 4 | | 5 | | | | 5 | 1 | 4 | 8 | |
| 2011 | 2 | | 3 | | 1 | | | | | | 2 | |
| 2012 | | | | 3 | | | | | | 5 | | 1 |
| 2013 | | | 4 | | | | | | | | | 4 |
| 2014 | 3 | | | | | | | | | | | |
| 2016 | | | | | | 1 | | | | | | |
| 2017 | | 1 | | | | | | | | | | |
|
From: Ganesh S. <ga...@ea...> - 2008-04-13 14:30:59
|
Hi all,
The haskelldb repository has now been moved to code.haskell.org:
http://code.haskell.org/haskelldb/
This means that we can add new committers. You'll need an account,
which can be requested here:
http://community.haskell.org/admin/account_request.html
Once you have that you can email Bjorn or myself to be added to the
project group.
Cheers,
Ganesh |
|
From: Peter G. <pe...@gm...> - 2008-04-07 11:10:26
|
To kick the horse one more time:
On 04/04/2008, at 12:08 AM, Justin Bailey wrote:
> [...] What makes GROUP BY filthy?
It forces me to enumerate fields. I can't say:
SELECT * FROM ... GROUP BY something-star-like
For comparison, here's a relatively-GROUP BY-free rendition of that
previously-sent monstrosity:
SELECT ID.*, T.C FROM Items_Descriptions AS ID,
(SELECT IT.tag AS T, count(IT.item_id) AS C
FROM Items_Tags AS IT
GROUP BY IT.tag) AS T
WHERE ID.lang=$1
AND ID.item_id=T.T
param $1 = 'en'
As the Items_Tags table has 2 columns, I'm not too bothered about that
GROUP BY.
Bjorn, congrats on starting a company.
cheers
peter
|
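Peter's complaint above comes down to the fact that GROUP BY over every selected column, with no aggregate applied, computes the same rows as SELECT DISTINCT. A small list-based sketch (plain Haskell, not haskelldb code; `distinctRows` and `groupByAll` are invented names for illustration) shows the equivalence:

```haskell
import qualified Data.List as L

-- DISTINCT: keep one copy of each row.
distinctRows :: Ord a => [a] -> [a]
distinctRows = map head . L.group . L.sort

-- GROUP BY over *all* columns with no aggregate: each group
-- collapses to its (identical) representative row.
groupByAll :: Ord a => [a] -> [a]
groupByAll = map head . L.groupBy (==) . L.sort
```

The two agree on any table, which is why DISTINCT suffices exactly when you would otherwise have to enumerate every column in the GROUP BY list.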
|
From: Peter G. <pe...@gm...> - 2008-04-05 04:05:56
|
On 04/04/2008, at 10:09 PM, Justin Bailey wrote:
> On Thu, Apr 3, 2008 at 10:32 PM, Peter Gammie <pe...@gm...>
> wrote:
>> Sorry, the problem has been debugged out of existence, so I no
>> longer know
>> what precisely the problem was. In my case the SQL was definitely
>> incorrect.
>> I still have that (I think) if you want to see it.
>
> Please!
OK, here it is, it's huge:
SELECT item_id6 as item_id,
parent6 as parent,
root6 as root,
date_posted6 as date_posted,
date_last_update6 as date_last_update,
item_type6 as item_type,
lang6 as lang,
owner6 as owner,
title6 as title,
summary6 as summary,
contents6 as contents,
intfield6 as intfield
FROM (SELECT item_id5 as item_id6,
parent3 as parent6,
root3 as root6,
date_posted3 as date_posted6,
date_last_update3 as date_last_update6,
item_type5 as item_type6,
lang5 as lang6,
owner5 as owner6,
title5 as title6,
summary5 as summary6,
contents5 as contents6,
COUNT(item_id4) as intfield6
FROM (SELECT item_id as item_id4,
tag as tag4
FROM items_tags as T1) as T1,
(SELECT item_id2 as item_id3,
parent1 as parent3,
root1 as root3,
date_posted1 as date_posted3,
date_last_update1 as date_last_update3,
item_type2 as item_type3,
title2 as title3
FROM (SELECT item_id as item_id2,
item_type as item_type2,
lang as lang2,
title as title2,
date_posted as date_posted1,
date_last_update as date_last_update1
FROM items_descriptions as T1) as T1,
(SELECT item_id as item_id1,
parent as parent1,
root as root1
FROM items as T1) as T2
WHERE ((item_type2 = 'tag') AND (lang2 = 'en')) AND
(item_id1 = item_id2)) as T2,
(SELECT item_id as item_id5,
item_type as item_type5,
lang as lang5,
owner as owner5,
title as title5,
summary as summary5,
contents as contents5
FROM items_descriptions as T1) as T3
WHERE (item_id3 = item_id5) AND (item_id3 = tag4)
GROUP BY item_id6,
parent6,
root6,
date_posted6,
date_last_update6,
item_type6,
lang6,
owner6,
title6,
summary6,
contents6
ORDER BY item_type3 ASC,
title3 ASC) as T1
Putting this into PostgreSQL's psql, I get:
ERROR: column "t2.item_type3" must appear in the GROUP BY clause or
be used in an aggregate function
I can send you the SQL that creates the database if you want to play
with it more.
cheers
peter
|
|
From: Peter G. <pe...@gm...> - 2008-04-05 03:59:02
|
On 04/04/2008, at 10:51 PM, Justin Bailey wrote:
> I like this, and I wonder if Peter's code (mentioned in another
> thread) addresses it. I'd be glad to take a shot at unifying the two
> approaches, once his patch is in.
Err, patches to what? I've only hacked HSQL, not HaskellDB. As I said
before, I'm in the process of removing my dependency on HaskellDB... I
asked Bjorn about it as he was/is the de facto HSQL maintainer.
cheers
peter |
|
From: Bjorn B. <bj...@br...> - 2008-04-04 20:26:12
|
On Fri, Apr 4, 2008 at 10:20 PM, Justin Bailey <jgb...@gm...> wrote:
> I just spent most of the day creating some TH functions that generate
> DirectDB style table and field definitions. It allows me to create a
> module that exports a table like so:
>
> module My_Table where
>
> $(mkDBDirectTable "my_table"
> [("event_dtl_id", [t|Int|])
> , ("event_hdr_id", [t|Int|])
> , ("foo", [t|Int|])
> , ("bar", [t|Int|])])
>
> The above is a table name and a list of fields with types. TH
> generates appropriate Haskell source so the definition looks just
> like something DBDirect would produce:
>
> module My_Table where
>
> type My_table =
> (RecCons Pts_dlvry_carrier_flight_id (Expr Int)
> (RecCons Pts_dlvry_carrier_id (Expr Int) ...))
>
> my_table :: Table My_table
> my_table = baseTable "my_table" $
> hdbMakeEntry Event_dtl_id # ...
>
> data Event_dtl_id = Event_dtl_id
>
> instance FieldTag Event_dtl_id where
> fieldName _ = "event_dtl_id"
>
> event_dtl_id :: Attr Event_dtl_id
> event_dtl_id = mkAttr Event_dtl_id
> ....
>
> I also have a function that creates a field using TH, which is very
> useful when you need "fake" fields in a projection:
>
> $(mkDBDirectField "filtered" [t|Bool|])
>
> which produces something like:
>
> data Filtered = Filtered
>
> instance FieldTag Filtered where
> fieldName _ = "filtered"
>
> filtered :: Attr Filtered
> filtered = mkAttr Filtered
>
> Is there a place in the haskelldb source for TH utilities like this?
>
> Justin
Nice! I seem to recall there being such code somewhere in the past,
but I have no idea where.
It should probably go in a separate cabal package, since it adds a dep
on TH. Maybe add a haskelldb-th dir in the repo and put the code +
.cabal there.
I should move the repo to code.haskell.org to remove me as a
bottleneck for your work.
/Bjorn
|
|
From: Justin B. <jgb...@gm...> - 2008-04-04 20:21:03
|
I just spent most of the day creating some TH functions that generate
DirectDB style table and field definitions. It allows me to create a
module that exports a table like so:
module My_Table where
$(mkDBDirectTable "my_table"
[("event_dtl_id", [t|Int|])
, ("event_hdr_id", [t|Int|])
, ("foo", [t|Int|])
, ("bar", [t|Int|])])
The above is a table name and a list of fields with types. TH
generates appropriate Haskell source so the definition looks just
like something DBDirect would produce:
module My_Table where
type My_table =
(RecCons Pts_dlvry_carrier_flight_id (Expr Int)
(RecCons Pts_dlvry_carrier_id (Expr Int) ...))
my_table :: Table My_table
my_table = baseTable "my_table" $
hdbMakeEntry Event_dtl_id # ...
data Event_dtl_id = Event_dtl_id
instance FieldTag Event_dtl_id where
fieldName _ = "event_dtl_id"
event_dtl_id :: Attr Event_dtl_id
event_dtl_id = mkAttr Event_dtl_id
....
I also have a function that creates a field using TH, which is very
useful when you need "fake" fields in a projection:
$(mkDBDirectField "filtered" [t|Bool|])
which produces something like:
data Filtered = Filtered
instance FieldTag Filtered where
fieldName _ = "filtered"
filtered :: Attr Filtered
filtered = mkAttr Filtered
Is there a place in the haskelldb source for TH utilities like this?
Justin
|
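Justin's `mkDBDirectField` splice itself is not shown, and Template Haskell's declaration API varies by GHC version, so here is a hypothetical string-level sketch of the source text such a splice would emit for a field (the `fieldDecls` name is invented; the type argument is omitted because it does not appear in the generated declarations quoted above):

```haskell
import Data.Char (toUpper)

-- Hypothetical sketch: the declarations a mkDBDirectField-style
-- splice generates for a (non-empty) field name.
fieldDecls :: String -> [String]
fieldDecls name =
  [ "data " ++ con ++ " = " ++ con
  , "instance FieldTag " ++ con ++ " where"
  , "  fieldName _ = " ++ show name
  , name ++ " :: Attr " ++ con
  , name ++ " = mkAttr " ++ con
  ]
  where
    -- Field tag constructors are the capitalized field name.
    con = toUpper (head name) : tail name
```

A real implementation would build these as `Dec` values with `Language.Haskell.TH` rather than strings, but the output shape is the same.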
|
From: Bjorn B. <bj...@br...> - 2008-04-04 16:13:45
|
On Fri, Apr 4, 2008 at 5:51 PM, Justin Bailey <jgb...@gm...> wrote:
> On Fri, Apr 4, 2008 at 8:28 AM, Bjorn Bringert <bj...@br...> wrote:
> > this looks great. How do you substitute in values for parameters in queries?
>
> That's something I didn't look at, but is needed. Since I'm using
> haskelldb just to generate SQL I haven't concentrated on the runtime
> side. However, see below ..
>
> > Here, prepare and queryBind are something along these lines (but more
> > general to support multiple arguments, and with some type class
> > constraints):
> >
> > prepare :: Database -> (a -> Query (Rel b)) -> PreparedQuery (a -> Rel b)
> >
> > queryBind :: Database -> PreparedQuery (a -> Rel b) -> a -> IO [b]
>
> I like this, and I wonder if Peter's code (mentioned in another
> thread) addresses it. I'd be glad to take a shot at unifying the two
> approaches, once his patch is in.
>
> In any case, do you think the parameters should show up in the type of
> Query (Rel b)? Any ideas how that could look, and how to make sure
> queries can still be rendered to SQL w/o runtime values (so I can
> still get my SQL w/o having to execute it)?
Hmm, I'm not sure that the Query should have the parameters in it,
since a Query is different from a PreparedQuery. To render a
parametrized query as SQL, you could have a function (needs a better
name):
showParamSql :: (a -> Query (Rel b)) -> String
This would have to invent names for the parameters, so it needs some
trickery inside. Or maybe, forcing you to supply a name for each
parameter:
showParamSql :: (a -> Query (Rel b)) -> String -> String
(All of the above needs to be extended with type-trickery to handle
multiple parameters)
/Björn |
|
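The "supply a name for each parameter" trick Björn sketches can be modelled in miniature: feed the query-building function a placeholder expression that carries the chosen name instead of a runtime value. This is a toy model, not the real haskelldb API (`Expr` here is just a string wrapper, and `byId` is an invented example query):

```haskell
-- Toy model of rendering a parametrized query without a runtime value:
-- the "parameter" is an Expr whose rendering is the supplied name.
newtype Expr a = Expr { unExpr :: String }

showParamSql :: (Expr a -> String) -> String -> String
showParamSql mkQuery name = mkQuery (Expr name)

-- Example query builder: renders to SQL with whatever stands in
-- for the parameter.
byId :: Expr Int -> String
byId p = "SELECT * FROM items WHERE item_id = " ++ unExpr p
```

For example, `showParamSql byId "?"` renders to `SELECT * FROM items WHERE item_id = ?`, which is the placeholder form a prepared-statement API expects.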
From: Justin B. <jgb...@gm...> - 2008-04-04 15:51:25
|
On Fri, Apr 4, 2008 at 8:28 AM, Bjorn Bringert <bj...@br...> wrote:
> this looks great. How do you substitute in values for parameters in queries?
That's something I didn't look at, but is needed. Since I'm using
haskelldb just to generate SQL I haven't concentrated on the runtime
side. However, see below ..
> Here, prepare and queryBind are something along these lines (but more
> general to support multiple arguments, and with some type class
> constraints):
>
> prepare :: Database -> (a -> Query (Rel b)) -> PreparedQuery (a -> Rel b)
>
> queryBind :: Database -> PreparedQuery (a -> Rel b) -> a -> IO [b]
I like this, and I wonder if Peter's code (mentioned in another
thread) addresses it. I'd be glad to take a shot at unifying the two
approaches, once his patch is in.
In any case, do you think the parameters should show up in the type of
Query (Rel b)? Any ideas how that could look, and how to make sure
queries can still be rendered to SQL w/o runtime values (so I can
still get my SQL w/o having to execute it)?
Justin |
|
From: Bjorn B. <bj...@br...> - 2008-04-04 15:28:16
|
On Thu, Apr 3, 2008 at 11:53 PM, Justin Bailey <jgb...@gm...> wrote:
> All,
>
> Attached you'll find my patch for using SQL functions and parameters
> in queries. By functions, I mean the ability to define a function
> prototype that will be used in a query. For example, if you need the
> 'lower' function in a query, you can define lower as:
>
> > lower :: Expr a -> Expr (Maybe String)
> > lower str = func "lower" str
>
> The function can then be used in a query:
>
> qry1 = do
> tbl <- table ...
> project $ col1 << lower (tbl ! col2)
>
> If the function is used in an inappropriate place, a compile time
> error occurs. The arguments to the function do not have to be
> expressions:
>
> > data DatePart = Day | Century deriving Show
>
> > datePart :: DatePart -> Expr (Maybe CalendarTime) -> Expr (Maybe Int)
> > datePart date col = func "date_part" (constant $ show date) col
>
> Aggregates are easy to define:
>
> > every :: Expr Bool -> ExprAggr Bool
> > every col = func "every" col
>
> Because haskelldb implements aggregates to always take one argument
> and only one argument, a compile time error occurs if an aggregate
> with a different number of arguments is defined.
>
> One problem with this mechanism is the type signatures can be too
> strict. For example, lower above cannot be used where an "Expr String"
> or even "Expr BStrN" (i.e., a bounded string) is expected. I'm not
> sure if this library should solve that or if coercion functions should
> be defined by the user. Suggestions welcome.
>
> Parameters allow generated queries to be used with the "prepared
> statement" facility of most databases. Any parameter defined is
> rendered as a "?" in the subsequent SQL, and it is expected the user
> will again supply appropriate values when the SQL is executed. Both
> named and positional parameters can be defined:
>
> qry1 = do
> ...
> restrict (tbl1 ! col1 .==. namedParam "my_param" constNull)
> restrict (tbl1 ! col2 .==. param constNull)
>
> When a parameter is defined, a default value must be given for it.
> This feature is probably not useful to many, but does allow queries
> to be generated which do not contain any placeholders. 'constNull'
> just means the parameter has a NULL default value.
>
> After SQL has been generated, the "name" of parameters is lost. It is
> very important that parameters can be retrieved in the order they
> will appear in the final query. The function 'queryParameters',
> exported from HaskellDB, does this. It returns a list of [Param]
> values. Param is just (Either Int String), so a named parameter is
> represented by (Right "...") while a positional parameter is (Left
> <val>).
>
> In summary the patch provides:
>
> * Allows SQL functions to be defined and used in queries
> * Allows positional and named parameters to be used in queries
> * Some minor bug fixes.
>
> Comments welcome!
>
> Justin
>
> p.s. I developed this against PostgreSQL; please let me know if I
> have introduced compatibility problems.
Hi Justin,
this looks great. How do you substitute in values for parameters in queries?
There could be an alternative approach to parameters that lets you
make them type safe by using lambda expression. I'm not sure that this
is implementable, but it ought to be. Here's how I imagine it working:
selectById :: Int -> Query (Rel ...)
selectById i =
do t <- table foo
restrict (t ! id .==. constant i)
...
test :: Database -> IO [...]
test db =
do q <- prepare db selectById
queryBind db q 42
Here, prepare and queryBind are something along these lines (but more
general to support multiple arguments, and with some type class
constraints):
prepare :: Database -> (a -> Query (Rel b)) -> PreparedQuery (a -> Rel b)
queryBind :: Database -> PreparedQuery (a -> Rel b) -> a -> IO [b]
/Björn
|
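Björn's `prepare`/`queryBind` design above can be sketched without a real database by letting a "query" be a function from the parameter to result rows; the phantom-style `PreparedQuery a b` type is what keeps the bind site type safe. A toy model (invented names, not the proposed haskelldb signatures, which also take a `Database`):

```haskell
-- Toy model of the prepare/queryBind sketch: a "prepared query"
-- remembers how to run the query once a parameter is supplied.
newtype PreparedQuery a b = PreparedQuery (a -> [b])

prepare :: (a -> [b]) -> PreparedQuery a b
prepare = PreparedQuery

queryBind :: PreparedQuery a b -> a -> IO [b]
queryBind (PreparedQuery run) x = return (run x)

-- A selectById-style query over an in-memory table.
selectById :: Int -> [(Int, String)]
selectById i = [ r | r@(k, _) <- [(1, "foo"), (42, "bar")], k == i ]
```

The point of the design is visible in the types: `queryBind` only accepts an `a` matching the parameter the query was prepared with, so mismatched bindings are compile-time errors.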
|
From: Justin B. <jgb...@gm...> - 2008-04-04 15:09:55
|
On Thu, Apr 3, 2008 at 10:32 PM, Peter Gammie <pe...@gm...> wrote:
> Sorry, the problem has been debugged out of existence, so I no longer know
> what precisely the problem was. In my case the SQL was definitely incorrect.
> I still have that (I think) if you want to see it.
Please!
> Bjorn: I've added parameterised executions and queries to HSQL and
> HSQL/PostgreSQL. Do you want these patches? One advantage is that one does
> not need to do any string escaping.
I just submitted a patch that allows parameters to be used in queries,
but I didn't do any work on actually executing them (since I am only
interested in the generated SQL). I'm also using HDBC over HSQL.
However, maybe my patch will work for you? If both are accepted I'd be
willing to try and get them working together ...
Justin |
|
From: Peter G. <pe...@gm...> - 2008-04-04 05:32:37
|
Justin,
On 04/04/2008, at 12:08 AM, Justin Bailey wrote:
> On Wed, Apr 2, 2008 at 9:48 PM, Peter Gammie <pe...@gm...>
> wrote:
>> I'm confused by the "unique" function. Why does it not just use SELECT
>> DISTINCT <blah>? GROUP BY semantics are filthy.
>
> I can speak to unique, as I provided the patch which implemented it
> (http://tinyurl.com/2u9hmf). When I started with haskelldb, it added
> "distinct" to all queries. That behavior didn't work for what I wanted
> to do. I coded unique so 'distinct' behavior was still accessible, but
> wasn't the default. My experience with SQL has been to use GROUP BY
> over DISTINCT, which is why it generates the code it does. What makes
> GROUP BY filthy?
It's filthy because it wasn't working for me. :-) I'm no SQL expert so
I won't argue with you further.
>> roughly I am trying to pass an aggregation function around and when I
>> use it in a record construction I get this problem.
>
> I ran into a problem when passing around aggregates recently that didn't
> group quite correctly - the SQL was correct but the query result was
> not. It's referenced in this email (http://tinyurl.com/33apbb) - are
> you seeing the same thing?
Sorry, the problem has been debugged out of existence, so I no longer
know what precisely the problem was. In my case the SQL was definitely
incorrect. I still have that (I think) if you want to see it.
I've decided to abandon HaskellDB as there are just a few things I
can't get it to do, and I'm running out of time. It's easier (and
perhaps no less safe?) for me to just use HSQL directly. In any case
the number of database accesses will be much reduced.
Bjorn: I've added parameterised executions and queries to HSQL and
HSQL/PostgreSQL. Do you want these patches? One advantage is that one
does not need to do any string escaping.
cheers
peter |
|
From: Justin B. <jgb...@gm...> - 2008-04-03 21:53:47
|
All,
Attached you'll find my patch for using SQL functions and parameters
in queries. By functions, I mean the ability to define a function
prototype that will be used in a query. For example, if you need the
'lower' function in a query, you can define lower as:
> lower :: Expr a -> Expr (Maybe String)
> lower str = func "lower" str
The function can then be used in a query:
qry1 = do
tbl <- table ...
project $ col1 << lower (tbl ! col2)
If the function is used in an inappropriate place, a compile time
error occurs. The arguments to the function do not have to be
expressions:
> data DatePart = Day | Century deriving Show
> datePart :: DatePart -> Expr (Maybe CalendarTime) -> Expr (Maybe Int)
> datePart date col = func "date_part" (constant $ show date) col
Aggregates are easy to define:
> every :: Expr Bool -> ExprAggr Bool
> every col = func "every" col
Because haskelldb implements aggregates to always take one argument
and only one argument, a compile time error occurs if an aggregate
with a different number of arguments is defined.
One problem with this mechanism is the type signatures can be too
strict. For example, lower above cannot be used where an "Expr String"
or even "Expr BStrN" (i.e., a bounded string) is expected. I'm not
sure if this library should solve that or if coercion functions should
be defined by the user. Suggestions welcome.
Parameters allow generated queries to be used with the "prepared
statement" facility of most databases. Any parameter defined is
rendered as a "?" in the subsequent SQL, and it is expected the user
will again supply appropriate values when the SQL is executed. Both
named and positional parameters can be defined:
qry1 = do
...
restrict (tbl1 ! col1 .==. namedParam "my_param" constNull)
restrict (tbl1 ! col2 .==. param constNull)
When a parameter is defined, a default value must be given for it.
This feature is probably not useful to many, but does allow queries
to be generated which do not contain any placeholders. 'constNull'
just means the parameter has a NULL default value.
After SQL has been generated, the "name" of parameters is lost. It is
very important that parameters can be retrieved in the order they
will appear in the final query. The function 'queryParameters',
exported from HaskellDB, does this. It returns a list of [Param]
values. Param is just (Either Int String), so a named parameter is
represented by (Right "...") while a positional parameter is (Left
<val>).
In summary the patch provides:
* Allows SQL functions to be defined and used in queries
* Allows positional and named parameters to be used in queries
* Some minor bug fixes.
Comments welcome!
Justin
p.s. I developed this against PostgreSQL; please let me know if I
have introduced compatibility problems.
|
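The `Param = Either Int String` convention above (named parameters as `Right`, positional as `Left`) can be sketched with a toy `queryParameters`-style traversal. This is not the patch's code: the placeholder representation and the positional numbering scheme are assumptions made for illustration.

```haskell
-- Sketch of the Param convention described above.
type Param = Either Int String

-- A toy stand-in for a query's placeholders, in query order.
data Placeholder = Named String | Positional

-- Collect parameters in the order they appear in the final SQL;
-- positional parameters are numbered as encountered (an assumption).
queryParams :: [Placeholder] -> [Param]
queryParams = go 1
  where
    go _ []                  = []
    go n (Named s      : ps) = Right s : go n ps
    go n (Positional   : ps) = Left n  : go (n + 1) ps
```

Preserving query order is the important property: once the SQL is rendered, every placeholder is just `?`, so this list is the only record of which value goes where.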
|
From: Justin B. <jgb...@gm...> - 2008-04-03 21:40:16
|
Please ignore this message - my fumble fingers sent it while it was
still being composed ... Complete message to follow ;)
---------- Forwarded message ----------
From: Justin Bailey <jgb...@gm...>
Date: Thu, Apr 3, 2008 at 2:39 PM
Subject: Patch for review: SQL functions and parameters
To: has...@li...
All,
Attached you'll find my patch for using SQL functions and parameters
in queries. By functions, I mean the ability to define a function
prototype that will be used in a query. For example, if you need the
'lower' function in a query, you can define lower as:
> lower :: Expr a -> Expr (Maybe String)
> lower str = func "lower" str
The function can then be used in a query:
qry1 = do
tbl <- table ...
project $ col1 << lower (tbl ! col2)
If the function is used in an inappropriate place, a compile time
error occurs. The arguments to the function do not have to be
expressions:
> data DatePart = Day | Century deriving Show
> datePart :: DatePart -> Expr (Maybe CalendarTime) -> Expr (Maybe Int)
> datePart date col = func "date_part" (constant $ show date) col
Aggregates are easy to define:
> every :: Expr Bool -> ExprAggr Bool
> every col = func "every" col
Because haskelldb implements aggregates to always take one argument,
and only one argument, a compile time error occurs if an aggregate
with a different number of arguments is defined.
One problem with this mechanism is the type signatures can be too
strict. For example, lower above cannot be used where an "Expr String"
or even "Expr BStrN" (i.e., a bounded string) is expected. I'm not
sure if this library should solve that or if coercion functions should
be defined by the user.
Parameters allow generated queries to be used with the "prepared
statement" facility of most databases. Any parameter defined is
rendered as a "?" in the subsequent SQL, and it is expected the user
will again supply appropriate parameters. Both named and positional
parameters can be defined:
qry1 = do
...
restrict (tbl1 ! col1 .==. namedParam "my_param" constNull)
restrict (tbl1 ! col2 .==. param constNull)
When a parameter is defined, a default value must be given for it.
This feature is probably not useful to many, but does allow queries
to be generated which do not contain any placeholders.
|
|
From: Justin B. <jgb...@gm...> - 2008-04-03 21:39:13
|
All,
Attached you'll find my patch for using SQL functions and parameters
in queries. By functions, I mean the ability to define a function
prototype that will be used in a query. For example, if you need the
'lower' function in a query, you can define lower as:
> lower :: Expr a -> Expr (Maybe String)
> lower str = func "lower" str
The function can then be used in a query:
qry1 = do
tbl <- table ...
project $ col1 << lower (tbl ! col2)
If the function is used in an inappropriate place, a compile time
error occurs. The arguments to the function do not have to be
expressions:
> data DatePart = Day | Century deriving Show
> datePart :: DatePart -> Expr (Maybe CalendarTime) -> Expr (Maybe Int)
> datePart date col = func "date_part" (constant $ show date) col
Aggregates are easy to define:
> every :: Expr Bool -> ExprAggr Bool
> every col = func "every" col
Because haskelldb implements aggregates to always take one argument,
and only one argument, a compile time error occurs if an aggregate
with a different number of arguments is defined.
One problem with this mechanism is the type signatures can be too
strict. For example, lower above cannot be used where an "Expr String"
or even "Expr BStrN" (i.e., a bounded string) is expected. I'm not
sure if this library should solve that or if coercion functions should
be defined by the user.
Parameters allow generated queries to be used with the "prepared
statement" facility of most databases. Any parameter defined is
rendered as a "?" in the subsequent SQL, and it is expected the user
will again supply appropriate parameters. Both named and positional
parameters can be defined:
qry1 = do
...
restrict (tbl1 ! col1 .==. namedParam "my_param" constNull)
restrict (tbl1 ! col2 .==. param constNull)
When a parameter is defined, a default value must be given for it.
This feature is probably not useful to many, but does allow queries
to be generated which do not contain any placeholders.
|
|
From: Justin B. <jgb...@gm...> - 2008-04-03 17:08:30
|
On Wed, Apr 2, 2008 at 9:48 PM, Peter Gammie <pe...@gm...> wrote:
> Bjorn,
>
> I'm confused by the "unique" function. Why does it not just use SELECT
> DISTINCT <blah>? GROUP BY semantics are filthy.
I can speak to unique, as I provided the patch which implemented it
(http://tinyurl.com/2u9hmf). When I started with haskelldb, it added
"distinct" to all queries. That behavior didn't work for what I wanted
to do. I coded unique so 'distinct' behavior was still accessible, but
wasn't the default. My experience with SQL has been to use GROUP BY
over DISTINCT, which is why it generates the code it does. What makes
GROUP BY filthy?
> roughly I am trying to pass an aggregation function around and when I
> use it in a record construction I get this problem.
I ran into a problem when passing around aggregates recently that didn't
group quite correctly - the SQL was correct but the query result was
not. It's referenced in this email (http://tinyurl.com/33apbb) - are
you seeing the same thing?
> Here's a plan: Add a new function:
>
> project' :: [Opt] -> ... blah, same as project
>
> where
>
> data Opt = DISTINCT | LIMIT Int | OFFSET Int
>
> I want to specify that these Opts apply to *this* projection, i.e. we
> need to track the column names passed to project'. I propose to do
> this by hacking the Project constructor in PrimQuery:
>
> data PrimQuery = ... | Project Assoc PrimQuery | ...
>
> becomes
>
> data PrimQuery = ... | Project [Opt] Assoc Assoc PrimQuery | ...
Have you considered wrangling the existing Special value to do this?
It already handles "TOP" and "ORDER". However, ORDER BY seems to get
propagated out to the outermost select so maybe it's not feasible.
Justin |
|
From: Peter G. <pe...@gm...> - 2008-04-03 04:48:25
|
Bjorn,
I'm confused by the "unique" function. Why does it not just use SELECT
DISTINCT <blah>? GROUP BY semantics are filthy.
What is the intention with HaskellDB as far as safety goes? I have a
query now that causes PostgreSQL to complain that I'm not doing enough
GROUP BYing. I don't want to use GROUP BY at all, it's unnecessary for
my query. (I actually want to use DISTINCT, but am busily hacking
around needing either.) Where do these come from? I can easily send
you the generated SQL but it is not easy to isolate the query itself;
roughly I am trying to pass an aggregation function around and when I
use it in a record construction I get this problem.
Here's a plan: Add a new function:
project' :: [Opt] -> ... blah, same as project
where
data Opt = DISTINCT | LIMIT Int | OFFSET Int
I want to specify that these Opts apply to *this* projection, i.e. we
need to track the column names passed to project'. I propose to do
this by hacking the Project constructor in PrimQuery:
data PrimQuery = ... | Project Assoc PrimQuery | ...
becomes
data PrimQuery = ... | Project [Opt] Assoc Assoc PrimQuery | ...
The first Assoc is the specific fields the Opts apply to, the other
assoc is as it is now. Hopefully the changes to SQL generation are
obvious. Do you have an opinion on any of this? Is this on the right
track? I definitely need LIMIT and OFFSET as they seem to be the only
way to efficiently limit the result sets in HOPE. If you have other
ideas, please share.
Are you interested in fixing the UTF8 issues in HSQL and HaskellDB? If
so, I'll send you some patches to HSQL, and later for HaskellDB. I've
only hacked HSQL and HSQL-PostgreSQL, so it would need some love from
someone else for the other backends. I can help with that, but have
little time to do it myself.
cheers
pete |
|
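Peter's proposed `Opt` type splits naturally at SQL generation time: DISTINCT modifies the SELECT keyword, while LIMIT and OFFSET trail the query. A small sketch of that rendering (invented helper `applyOpts`; not part of any actual patch, and ignoring the per-column Assoc tracking Peter proposes):

```haskell
-- Peter's proposed options, with a sketch of how a generator
-- might place each one in the rendered SQL.
data Opt = Distinct | Limit Int | Offset Int
  deriving Eq

-- DISTINCT attaches to SELECT; LIMIT/OFFSET are trailing clauses.
applyOpts :: [Opt] -> String -> String
applyOpts opts sql = select ++ sql ++ concatMap trailer opts
  where
    select = if Distinct `elem` opts then "SELECT DISTINCT " else "SELECT "
    trailer (Limit n)  = " LIMIT "  ++ show n
    trailer (Offset n) = " OFFSET " ++ show n
    trailer Distinct   = ""
```

For example, `applyOpts [Distinct, Limit 10] "* FROM t"` yields `SELECT DISTINCT * FROM t LIMIT 10`.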
From: Justin B. <jgb...@gm...> - 2008-03-26 23:40:31
|
On Tue, Mar 25, 2008 at 2:08 PM, Bjorn Bringert <bj...@br...> wrote:
> I'm thinking that it should be possible to get around this problem by
> adding an explicit cartesian product operator to use instead of >>= /
> do. But then we are well on our way to a non-monadic interface.
How does this function look:
prod :: Query (Rel r) -> Query (Rel r)
prod (Query qs) = Query make
  where
    make (currentAlias, currentQry) =
      -- Take the query to add and run it first, using the current
      -- alias as a seed.
      let (Rel otherAlias otherScheme, (newestAlias, otherQuery)) =
            qs (currentAlias, Empty)
          -- Effectively renames all columns in otherQuery to make
          -- them unique in this query.
          assoc = zip (map (fresh newestAlias) otherScheme)
                      (map (AttrExpr . fresh otherAlias) otherScheme)
      -- Produce a query which is a cross product of the other query
      -- and the current query.
      in (Rel newestAlias otherScheme,
          (newestAlias + 1, times (Project assoc otherQuery) currentQry))
I modeled it after the binrel function. The long variable names
reflect me trying to figure out how the state (i.e. alias) was
threaded around. Continuing the example above, I can define:
cmb2x = do
t1 <- prod tbl1
t2 <- prod tbl2
project $ id1 << t1 ! id1 #
id2 << t2 ! id2
Evaluating cmb2x gives this SQL:
SELECT id13 as id1,
id26 as id2
FROM (SELECT id25 as id26
FROM (SELECT COUNT(id24) as id25
FROM (SELECT id2 as id24
FROM sample_table as T1
WHERE id1 < 0) as T1) as T1) as T1,
(SELECT id12 as id13
FROM (SELECT COUNT(id11) as id12
FROM (SELECT id1 as id11
FROM sample_table as T1
WHERE id1 > 0) as T1) as T1) as T2
which is far too nested. However, a one line change in mergeProject in
Optimize.hs:
safe :: Assoc -> Bool
-- merge projections that are all aggregates or no aggregates
safe assoc = not (any (isAggregate . snd) assoc)
          || all (isAggregate . snd) assoc
gives SQL like this:
SELECT id13 as id1,
id26 as id2
FROM (SELECT COUNT(id2) as id26
FROM sample_table as T1
WHERE id1 < 0) as T1,
(SELECT COUNT(id1) as id13
FROM sample_table as T1
WHERE id1 > 0) as T2
Which is just what I want.
Justin
|
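The one-line optimizer change above amounts to: a projection may be merged only when its columns are all aggregates or all non-aggregates. Modelling Assoc as a list of (column, is-aggregate) pairs, a simplification of the real haskelldb type, the predicate is:

```haskell
-- Simplified model of the amended `safe` check in mergeProject:
-- merge only when the projection is all aggregates or no aggregates.
type Assoc = [(String, Bool)]  -- (column name, is this an aggregate?)

safe :: Assoc -> Bool
safe assoc = not (any snd assoc) || all snd assoc
```

A mixed projection (some aggregates, some plain columns) is the one case that must stay nested, which is why the original `safe` only allowed the no-aggregates case.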
|
From: Justin B. <jgb...@gm...> - 2008-03-25 21:26:13
|
On Tue, Mar 25, 2008 at 2:08 PM, Bjorn Bringert <bj...@br...> wrote:
> Hi Justin,
>
> this is a very good observation, and a good argument for why using a
> Monad for constructing queries is a bad idea. Heffalump has proposed
> an alternative functional API which should make it easier to avoid
> problems like this.
Do you have a link to that? I would be interested in looking at it.
> But Monads should be associative (see
> http://www.haskell.org/haskellwiki/Monad_Laws for a little bit of
> explanation), so this should be equivalent to:
>
> do
> table1
> aggregate1
> table2
> aggregate2
> rest
>
> Which gives you the result that you didn't want. The conclusion is
> that with the current API, you can't compose aggregate queries the way
> you want. Queries in the monadic interface should be read line by
A very good point. Aggregates are weird in SQL anyways so I'm not
surprised this shows up as a bug.
> I'm thinking that it should be possible to get around this problem by
> adding an explicit cartesian product operator to use instead of >>= /
> do. But then we are well on our way to a non-monadic interface.
Can you think of a way to inject a primitive query into an existing
relation? I tried to write a "seedQuery" function but it gave me an
assertion failure in optimize:
seedQuery :: (PrimQuery, Rel r) -> Query (Rel r)
seedQuery (prim, rel) = Query (\(x, qry) -> (rel, (x, prim)))
The signature for seedQuery follows from runQueryRel in Query.hs, so
you would use it as:
do x <- seedQuery . runQueryRel $ { do ... }
   y <- seedQuery . runQueryRel $ { do ... }
   rest
If this function worked, a query could be built in isolation and the
problem would be avoided (and maybe even still preserving
associativity?).
Justin |
|
From: Bjorn B. <bj...@br...> - 2008-03-25 21:08:07
|
On Tue, Mar 25, 2008 at 8:32 PM, Justin Bailey <jgb...@gm...> wrote:
> One of the benefits of haskelldb is the ability to decompose a query
> into component parts and put them together in different ways. I
> believe I have found a bug in the library that makes this hard to
> do. This literate email demonstrates the bug. The full source is given
> at the end, but I want to talk about the cause here. Imagine two
> queries that project one aggregate column each:
>
> tbl1 = do
> x <- table ...
> restrict ...
> project $ ... id1 << count ( ... )
>
> tbl2 = do
> x <- table ...
> restrict ..
> project $ .. id2 << count ( ... )
>
> When each query is run individually, the SQL produced is as expected
> (i.e., "select xx as id1 from (select count(...) as xx from ..)").
> When the queries are combined by hand:
>
> cmb1 = do
> x1 <- table ..
> x2 <- table ..
> restrict ( x1 ! ...)
> restrict (x2 ! ...)
> project $ id1 << count ( ... ) # id2 << count (...)
>
> The SQL produced is also as expected:
>
> SELECT id13 as id1,
> id23 as id2
> FROM (SELECT COUNT(id11) as id13,
> COUNT(id22) as id23
> FROM (SELECT id2 as id22
> FROM sample_table as T1
> WHERE id1 < 0) as T1,
> (SELECT id1 as id11
> FROM sample_table as T1
> WHERE id1 > 0) as T2) as T1
>
> The aggregates are at the same level, which is very important.
> However, if the individual queries are reused:
>
> cmb2 = do
> x1 <- tbl1
> x2 <- tbl2
> project $ id1 << x1 ! id1 # id2 << x2 ! id2
>
> the SQL produced is NOT correct. It will execute but the aggregates
> are at different levels:
>
> SELECT id12 as id1,
> id24 as id2
> FROM (SELECT COUNT(id23) as id24,
> id12
> FROM (SELECT id2 as id23
> FROM sample_table as T1
> WHERE id1 < 0) as T1,
> (SELECT COUNT(id11) as id12
> FROM (SELECT id1 as id11
> FROM sample_table as T1) as T1
> WHERE id11 > 0) as T2
> GROUP BY id12) as T1
>
> When this query is executed, it will return NO rows if "select ID2 as
> ID23 .." returns no rows. In the correct query, 0 is returned in both
> columns (as expected with aggregates). That looks like a bug to me
> (though possibly it's a quirk in postgres).
>
> The same SQL is produced by this, which is definitely counter-intuitive:
>
> cmb3 = do
> t1 <- do { x <- table ...; restrict ...; project $ id1 << count (x ! id1) }
> t2 <- do { x <- table ...; restrict ...; project $ id2 << count (x ! id2) }
> project $ id1 << t1 ! id1 #
> id2 << t2 ! id2
>
> The bug occurs when the t2 value is constructed. The "table" operation
> causes a "Binary Times" value to be placed in the PrimQuery being
> built, BEFORE the aggregate projection is added. I would expect the
> query for t2 to be built in isolation first, and then added via Times
> to the existing PrimQuery.
>
> This structure can be seen by loading this email into GHCi and
> entering cmb1Qry, cmb2Qry, and cmb3Qry at the prompt. tbl1Qry and
> tbl2Qry also show the individual queries in isolation. tbl1SQL,
> tbl2SQL, etc. are provided which show the SQL generated by each
> different query.
>
> There really isn't a workaround for this bug, except to redefine
> queries. I am not sure how to fix it as I tried many different
> approaches. Any suggestions?
>
> Justin
Hi Justin,
this is a very good observation, and a good argument for why using a
Monad for constructing queries is a bad idea. Heffalump has proposed
an alternative functional API which should make it easier to avoid
problems like this.
This is your counter-intuitive example:
do
do { table1; aggregate1 }
do { table2; aggregate2 }
rest
But Monads should be associative (see
http://www.haskell.org/haskellwiki/Monad_Laws for a little bit of
explanation), so this should be equivalent to:
do
table1
aggregate1
table2
aggregate2
rest
Which gives you the result that you didn't want. The conclusion is
that with the current API, you can't compose aggregate queries the way
you want. Queries in the monadic interface should be read line by
line, applying the operations to the "current relation". This is not
necessarily intuitive, especially not in Haskell. I think that Query
was originally made a Monad just to take advantage of do-notation.
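[Editor's note: the law can be checked concretely with any state-like monad. The following standalone sketch (plain mtl State, nothing from HaskellDB) records each "query operation" as a string, and shows that the nested and flat do-blocks produce exactly the same trace:]

```haskell
import Control.Monad.State (State, execState, modify)

-- Record each "query operation", mimicking how Query threads a
-- PrimQuery through the monad.
step :: String -> State [String] ()
step s = modify (++ [s])

nested, flat :: State [String] ()
nested = do
  do { step "table1"; step "aggregate1" }
  do { step "table2"; step "aggregate2" }
flat = do
  step "table1"
  step "aggregate1"
  step "table2"
  step "aggregate2"

-- By the associativity law, execState nested [] == execState flat []:
-- the inner do-blocks provide no isolation at all.
```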
I'm thinking that it should be possible to get around this problem by
adding an explicit cartesian product operator to use instead of >>= /
do. But then we are well on our way to a non-monadic interface.
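[Editor's note: for illustration only, here is one shape such an explicit-product combinator could take. The Q type and its string "plan" are invented here and are far simpler than HaskellDB's PrimQuery; the point is only that each operand is a closed value before it is combined:]

```haskell
-- Each query value is *closed*: its plan (including any aggregate
-- projection) is finished before it can be combined with anything else.
data Q a = Q { plan :: String, shape :: a } deriving Show

-- A toy aggregate query over one table.
aggr :: String -> String -> Q Int
aggr fun tbl = Q ("(SELECT " ++ fun ++ " FROM " ++ tbl ++ ")") 0

-- Explicit cartesian product: combines two finished plans, so neither
-- query's aggregate can leak into the other's scope.
times :: Q a -> Q b -> Q (a, b)
times (Q p a) (Q q b) = Q (p ++ " CROSS JOIN " ++ q) (a, b)
```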
/Björn
> \begin{code}
>
> import Database.HaskellDB
> import Database.HaskellDB.DBLayout
>
> tbl1 :: Query (Rel (RecCons Id1 (Expr Int) RecNil))
> tbl1 = do
> t1 <- table sample_table
> restrict (t1 ! id1 .>. constant 0)
> project $ id1 << count (t1 ! id1)
>
> tbl2 :: Query (Rel (RecCons Id2 (Expr Int) RecNil))
> tbl2 = do
> t1 <- table sample_table
> restrict (t1 ! id1 .<. constant 0)
> project $ id2 << count (t1 ! id2)
>
> cmb1 :: Query (Rel (RecCons Id1 (Expr Int) (RecCons Id2 (Expr Int) RecNil)))
> cmb1 = do
> t1 <- table sample_table
> t2 <- table sample_table
> restrict (t1 ! id1 .>. constant 0)
> restrict (t2 ! id1 .<. constant 0)
> project $ id1 << count(t1 ! id1) #
> id2 << count(t2 ! id2)
>
> cmb2 :: Query (Rel (RecCons Id1 (Expr Int) (RecCons Id2 (Expr Int) RecNil)))
> cmb2 = do
> t1 <- tbl1
> t2 <- tbl2
> project $ id1 << t1 ! id1 #
> id2 << t2 ! id2
>
> cmb3 :: Query (Rel (RecCons Id1 (Expr Int) (RecCons Id2 (Expr Int) RecNil)))
> cmb3 = do
> t1 <- do { x <- table sample_table; restrict (x ! id1 .>. constant
> 0); project $ id1 << count (x ! id1) }
> t2 <- do { x <- table sample_table; restrict (x ! id1 .<. constant
> 0); project $ id2 << count (x ! id2) }
> project $ id1 << t1 ! id1 #
> id2 << t2 ! id2
>
> tbl1SQL = putStrLn $ showSql tbl1
> tbl2SQL = putStrLn $ showSql tbl2
>
> cmb1SQL = putStrLn $ showSql cmb1
> cmb2SQL = putStrLn $ showSql cmb2
> cmb3SQL = putStrLn $ showSql cmb3
>
> tbl1Qry = putStrLn $ showQuery tbl1
> tbl2Qry = putStrLn $ showQuery tbl2
>
> cmb1Qry = putStrLn $ showQuery cmb1
> cmb2Qry = putStrLn $ showQuery cmb2
> cmb3Qry = putStrLn $ showQuery cmb3
>
> data Id1 = Id1
> data Id2 = Id2
>
> instance FieldTag Id1 where fieldName _ = "id1"
> instance FieldTag Id2 where fieldName _ = "id2"
>
> id1 :: Attr Id1 Int
> id1 = mkAttr Id1
>
> id2 :: Attr Id2 Int
> id2 = mkAttr Id2
>
> sample_table :: Table (RecCons Id1 (Expr Int) (RecCons Id2 (Expr Int) RecNil))
> sample_table = baseTable "sample_table" $
> hdbMakeEntry Id1 #
> hdbMakeEntry Id2
>
> \end{code}
|
|
From: Justin B. <jgb...@gm...> - 2008-03-25 19:32:17
|
One of the benefits of haskelldb is the ability to decompose a query
into component parts and put them together in different ways. I
believe I have found a bug in the library that makes this hard to
do. This literate email demonstrates the bug. The full source is given
at the end, but I want to talk about the cause here. Imagine two
queries that project one aggregate column each:
tbl1 = do
x <- table ...
restrict ...
project $ ... id1 << count ( ... )
tbl2 = do
x <- table ...
restrict ..
project $ .. id2 << count ( ... )
When each query is run individually, the SQL produced is as expected
(i.e., "select xx as id1 from (select count(...) as xx from ..)").
When the queries are combined by hand:
cmb1 = do
x1 <- table ..
x2 <- table ..
restrict ( x1 ! ...)
restrict (x2 ! ...)
project $ id1 << count ( ... ) # id2 << count (...)
The SQL produced is also as expected:
SELECT id13 as id1,
id23 as id2
FROM (SELECT COUNT(id11) as id13,
COUNT(id22) as id23
FROM (SELECT id2 as id22
FROM sample_table as T1
WHERE id1 < 0) as T1,
(SELECT id1 as id11
FROM sample_table as T1
WHERE id1 > 0) as T2) as T1
The aggregates are at the same level, which is very important.
However, if the individual queries are reused:
cmb2 = do
x1 <- tbl1
x2 <- tbl2
project $ id1 << x1 ! id1 # id2 << x2 ! id2
the SQL produced is NOT correct. It will execute but the aggregates
are at different levels:
SELECT id12 as id1,
id24 as id2
FROM (SELECT COUNT(id23) as id24,
id12
FROM (SELECT id2 as id23
FROM sample_table as T1
WHERE id1 < 0) as T1,
(SELECT COUNT(id11) as id12
FROM (SELECT id1 as id11
FROM sample_table as T1) as T1
WHERE id11 > 0) as T2
GROUP BY id12) as T1
When this query is executed, it will return NO rows if "select ID2 as
ID23 .." returns no rows. In the correct query, 0 is returned in both
columns (as expected with aggregates). That looks like a bug to me
(though possibly it's a quirk in postgres).
The same SQL is produced by this, which is definitely counter-intuitive:
cmb3 = do
t1 <- do { x <- table ...; restrict ...; project $ id1 << count (x ! id1) }
t2 <- do { x <- table ...; restrict ...; project $ id2 << count (x ! id2) }
project $ id1 << t1 ! id1 #
id2 << t2 ! id2
The bug occurs when the t2 value is constructed. The "table" operation
causes a "Binary Times" value to be placed in the PrimQuery being
built, BEFORE the aggregate projection is added. I would expect the
query for t2 to be built in isolation first, and then added via Times
to the existing PrimQuery.
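[Editor's note: the difference can be modelled with a stripped-down query tree. The constructors below are simplified stand-ins for HaskellDB's real PrimQuery, chosen only to make the shapes visible:]

```haskell
-- Simplified stand-in for HaskellDB's PrimQuery.
data PrimQuery
  = BaseTable String
  | Project String PrimQuery   -- e.g. an aggregate projection
  | Times PrimQuery PrimQuery
  deriving (Eq, Show)

tbl1 :: PrimQuery
tbl1 = Project "count(id1)" (BaseTable "sample_table")

-- What the monadic interface builds for cmb2/cmb3: tbl2's `table` call
-- Times-joins onto the query already in progress (tbl1), and only then
-- is tbl2's aggregate projection added, so it wraps the whole product.
actual :: PrimQuery
actual = Project "count(id2)" (Times tbl1 (BaseTable "sample_table"))

-- What one would expect: each query finished in isolation, then joined.
expected :: PrimQuery
expected = Times tbl1 (Project "count(id2)" (BaseTable "sample_table"))
```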
This structure can be seen by loading this email into GHCi and
entering cmb1Qry, cmb2Qry, and cmb3Qry at the prompt. tbl1Qry and
tbl2Qry also show the individual queries in isolation. tbl1SQL,
tbl2SQL, etc. are provided which show the SQL generated by each
different query.
There really isn't a workaround for this bug, except to redefine
queries. I am not sure how to fix it as I tried many different
approaches. Any suggestions?
Justin
\begin{code}
import Database.HaskellDB
import Database.HaskellDB.DBLayout
tbl1 :: Query (Rel (RecCons Id1 (Expr Int) RecNil))
tbl1 = do
t1 <- table sample_table
restrict (t1 ! id1 .>. constant 0)
project $ id1 << count (t1 ! id1)
tbl2 :: Query (Rel (RecCons Id2 (Expr Int) RecNil))
tbl2 = do
t1 <- table sample_table
restrict (t1 ! id1 .<. constant 0)
project $ id2 << count (t1 ! id2)
cmb1 :: Query (Rel (RecCons Id1 (Expr Int) (RecCons Id2 (Expr Int) RecNil)))
cmb1 = do
t1 <- table sample_table
t2 <- table sample_table
restrict (t1 ! id1 .>. constant 0)
restrict (t2 ! id1 .<. constant 0)
project $ id1 << count(t1 ! id1) #
id2 << count(t2 ! id2)
cmb2 :: Query (Rel (RecCons Id1 (Expr Int) (RecCons Id2 (Expr Int) RecNil)))
cmb2 = do
t1 <- tbl1
t2 <- tbl2
project $ id1 << t1 ! id1 #
id2 << t2 ! id2
cmb3 :: Query (Rel (RecCons Id1 (Expr Int) (RecCons Id2 (Expr Int) RecNil)))
cmb3 = do
t1 <- do { x <- table sample_table; restrict (x ! id1 .>. constant
0); project $ id1 << count (x ! id1) }
t2 <- do { x <- table sample_table; restrict (x ! id1 .<. constant
0); project $ id2 << count (x ! id2) }
project $ id1 << t1 ! id1 #
id2 << t2 ! id2
tbl1SQL = putStrLn $ showSql tbl1
tbl2SQL = putStrLn $ showSql tbl2
cmb1SQL = putStrLn $ showSql cmb1
cmb2SQL = putStrLn $ showSql cmb2
cmb3SQL = putStrLn $ showSql cmb3
tbl1Qry = putStrLn $ showQuery tbl1
tbl2Qry = putStrLn $ showQuery tbl2
cmb1Qry = putStrLn $ showQuery cmb1
cmb2Qry = putStrLn $ showQuery cmb2
cmb3Qry = putStrLn $ showQuery cmb3
data Id1 = Id1
data Id2 = Id2
instance FieldTag Id1 where fieldName _ = "id1"
instance FieldTag Id2 where fieldName _ = "id2"
id1 :: Attr Id1 Int
id1 = mkAttr Id1
id2 :: Attr Id2 Int
id2 = mkAttr Id2
sample_table :: Table (RecCons Id1 (Expr Int) (RecCons Id2 (Expr Int) RecNil))
sample_table = baseTable "sample_table" $
hdbMakeEntry Id1 #
hdbMakeEntry Id2
\end{code}
|
|
From: Bjorn B. <bj...@br...> - 2008-03-24 10:57:37
|
Hi Justin,
On Thu, Mar 20, 2008 at 12:06 AM, Justin Bailey <jgb...@gm...> wrote:
> All,
>
> A feature haskelldb is lacking is a way to use SQL functions in
> queries which are not defined by the library. For example, the "trim"
> function is used frequently but can only be included in a haskelldb
> query through the 'literal' combinator or by writing your own function
> which creates the appropriate PrimExpr value.
Yes, that would be nice.
> What I'd like to be able to do is use SQL functions in a type-safe
> way, just like those included in the library. For example, a trim
> function would look like:
>
> trim :: Expr (Maybe a) -> Expr (Maybe String) -- Nulls are still null
Hmm, what if you want to apply this to a NOT NULL value? Should trim
be overloaded? Or maybe there should be a some kind of lifting
function to handle Maybes?
> To use the function in a project is as simple as:
>
> project $ rawNameField << customers ! Customers.name #
> name << trim (customers ! Customers.name)
>
>
> Some functions take multiple arguments. For example, 'rpad', which
> takes a column, a padding length, and an optional padding character.
> Notice not all arguments are expressions:
>
> rpad :: Expr a -> Expr Int -> Maybe String -> Expr (Maybe String)
>
> Sometimes the aggregate expressions provided by haskelldb aren't
> enough. The 'every' function is an aggregate found on postgresql:
>
> every :: Expr Bool -> ExprAggr Bool
>
> The code below allows these functions to be implemented in terms of
> two combinators - func and arg. It is intended to be included in the
> Query module, and only func and arg would be exported. Questions I'd like
> answered:
>
> * Is the approach over-engineered? Is there a simpler way?
> * Is the feature useful?
> * Comments on the implementation?
>
> After the code the bodies of the functions above are given.
>
> -- Used to construct type-safe function definitions with
> -- arbitrarily typed expressions. See func and arg to use these.
> -- The data and instances below are modeled on RecNil/RecCons from
> -- HDBRec.
> data ExprNil = ExprNil
> data ExprCons a b = ExprCons a b
>
> instance ToPrimExprs ExprNil where
> toPrimExprs ~ExprNil = []
>
> instance (ExprC e, ToPrimExprs r) => ToPrimExprs (ExprCons (e a) r) where
> toPrimExprs ~(ExprCons e r) = primExpr e : toPrimExprs r
>
> class (ExprC e) => MakeFunc e r where
> {- | Combinator which can be used to define SQL functions which will
> appear in queries. Each argument for the function is specified by the arg
> combinator. All arguments must be chained together via function composition.
> Examples include:
>
> lower :: Expr a -> Expr (Maybe String)
> lower str = func "lower" $ arg str
>
> The arguments to the function do not have to be Expr if they can
> be converted to Expr:
>
> data DatePart = Day | Century deriving Show
>
> datePart :: DatePart -> Expr (Maybe CalendarTime) -> Expr (Maybe Int)
> datePart date col = func "date_part" $ arg (constant $ show date) . arg col
>
> Aggregate functions can also be defined:
>
> every :: Expr Bool -> ExprAggr Bool
> every col = func "every" $ arg col
>
> Note that type signatures are required on each function defined, as func is
> defined in a typeclass. Also, because of the implementation of aggregates,
> only one argument can be provided.-}
> func :: (ExprC e, ToPrimExprs r)
>      => String                            -- ^ The name of the function
>      -> (ExprNil -> ExprCons (Expr d) r)  -- ^ The arguments, specified via
>                                           --   arg combinators joined by composition
>      -> e o                               -- ^ The resulting expression
>
> -- | This instance ensures only one argument can be provided to
> -- an aggregate function.
> instance MakeFunc ExprAggr ExprNil where
> func = funcA
>
> funcA :: String -> (ExprNil -> ExprCons (Expr a) ExprNil) -> ExprAggr o
> funcA name args = ExprAggr (AggrExpr (AggrOther name) (primExpr .
> unExprs $ (args ExprNil)))
> where
> unExprs :: ExprCons (Expr a) ExprNil -> Expr a
> unExprs ~(ExprCons e _) = e
>
> -- | This instance allows any number of expressions to be used as
> -- arguments to a non-aggregate function.
> instance MakeFunc Expr a where
> func = funcE
>
> funcE :: (ToPrimExprs r) => String -> (ExprNil -> ExprCons (Expr e)
> r) -> Expr o
> funcE name args = Expr (FunExpr name (toPrimExprs (args ExprNil)))
>
> -- | Used to specify an individual argument to a SQL function definition. This
> -- combinator must be strung together with function composition. That chain
> -- must be provided to the func combinator.
> arg :: (Expr a) -> (c -> ExprCons (Expr a) c)
> arg expr = ExprCons expr
>
> I modeled the code above after the (#) function and the RecCons/RecNil
> types from the HDBRec module. In this case, a "list" of expressions is
> built through composition with the arg combinator and handed to the
> func combinator. Depending on the result type (Expr or ExprAggr), one
> of the two instances is selected. Since ExprAggr can only take one
> argument, the instance is only defined for (ExprCons x ExprNil). A
> really ugly error will result if more than one argument is defined for
> an aggregrate. Non-aggregates, however, can take any number of
> arguments.
Hmm, I get the feeling that this could be implemented in a simpler way.
If we ignore the one-argument restriction on aggregates, couldn't you
use the same trick as Text.Printf or the Remote class in HaXR (see
http://darcs.haskell.org/haxr/Network/XmlRpc/Client.hs). The aggregate
restriction could be handled by having a separate 'func' for
aggregates, with a simpler type.
Also, why are ExprNil and ExprCons needed? Couldn't func produce a
PrimExpr internally?
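[Editor's note: the Text.Printf-style trick Bjorn mentions can be sketched roughly as follows. PrimExpr, Expr, and the FuncResult class here are simplified stand-ins invented for illustration, not HaskellDB's actual definitions:]

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- Simplified stand-ins; HaskellDB's real PrimExpr/Expr are richer.
data PrimExpr = FunExpr String [PrimExpr] | ConstExpr String
  deriving (Eq, Show)

newtype Expr a = Expr PrimExpr

-- The Text.Printf trick: the instance chosen for the result type decides
-- whether to consume another Expr argument or emit the finished call.
class FuncResult r where
  mkFunc :: String -> [PrimExpr] -> r

instance FuncResult (Expr a) where
  mkFunc name args = Expr (FunExpr name (reverse args))

instance FuncResult r => FuncResult (Expr a -> r) where
  mkFunc name args (Expr e) = mkFunc name (e : args)

-- Variadic: takes as many Expr arguments as the result type demands.
func :: FuncResult r => String -> r
func name = mkFunc name []

-- Usage: the type signature fixes the arity.
trim :: Expr String -> Expr (Maybe String)
trim = func "trim"

rpad :: Expr String -> Expr Int -> Expr (Maybe String)
rpad = func "rpad"
```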
> The implementation of the motivating functions is then:
>
> trim str = func "trim" $ arg str
>
> rpad str len (Just char) = func "rpad" $ arg str . arg (constant
> len) . arg (constant char)
> rpad str len Nothing = func "rpad" $ arg str . arg (constant len)
>
> every col = func "every" $ arg col
>
> Thanks in advance to any responders!
>
> Justin
With the solution I refer to above, you should be able to write them
as (not tested of course):
trim str = func "trim" str
rpad str len (Just char) = func "rpad" str (constant len) (constant char)
rpad str len Nothing = func "rpad" str (constant len)
every col = funcAggr "every" col
/Bjorn
|
|
From: Bjorn B. <bj...@br...> - 2008-03-22 22:53:56
|
On Thu, Mar 20, 2008 at 12:42 AM, Justin Bailey <jgb...@gm...> wrote:
> On Tue, Mar 18, 2008 at 10:52 AM, Bjorn Bringert <bj...@br...> wrote:
> > Yes, this is a good idea. Does anyone have anything ready to go in a
> > new release that they haven't pushed yet? Does the current darcs
> > version build and work ok enough for the current users?
>
> Bjorn,
>
> You'll find attached a patch which adds the ability to recover field
> information from a Query. I originally proposed the idea back in
> January[1] - finally got a patch ready. I found a need for it when
> generating SQL and other meta-data from queries - I needed to know the
> types of each column in my query but couldn't get that out of the
> Query monad easily. This patch does it.
>
> It also patches the cabal file so the library is compiled with -O2
> under GHC and gives a homepage so a link will show up on the hackage
> page.
>
> Justin

Thanks! I have pushed this now.

/Björn
|
|
From: Justin B. <jgb...@gm...> - 2008-03-19 23:42:38
|
On Tue, Mar 18, 2008 at 10:52 AM, Bjorn Bringert <bj...@br...> wrote:
> Yes, this is a good idea. Does anyone have anything ready to go in a
> new release that they haven't pushed yet? Does the current darcs
> version build and work ok enough for the current users?

Bjorn,

You'll find attached a patch which adds the ability to recover field
information from a Query. I originally proposed the idea back in
January[1] - finally got a patch ready. I found a need for it when
generating SQL and other meta-data from queries - I needed to know the
types of each column in my query but couldn't get that out of the
Query monad easily. This patch does it.

It also patches the cabal file so the library is compiled with -O2
under GHC and gives a homepage so a link will show up on the hackage
page.

Justin

[1] http://tinyurl.com/3y84al (or
http://sourceforge.net/mailarchive/forum.php?thread_name=a45dff840801241048w24cadd3aob0291ff6933ab9b2%40mail.gmail.com&forum_name=haskelldb-users)
|
|
From: Justin B. <jgb...@gm...> - 2008-03-19 23:06:51
|
All,
A feature haskelldb is lacking is a way to use SQL functions in
queries which are not defined by the library. For example, the "trim"
function is used frequently but can only be included in a haskelldb
query through the 'literal' combinator or by writing your own function
which creates the appropriate PrimExpr value.
What I'd like to be able to do is use SQL functions in a type-safe
way, just like those included in the library. For example, a trim
function would look like:
trim :: Expr (Maybe a) -> Expr (Maybe String) -- Nulls are still null
To use the function in a project is as simple as:
project $ rawNameField << customers ! Customers.name #
name << trim (customers ! Customers.name)
Some functions take multiple arguments. For example, 'rpad', which
takes a column, a padding length, and an optional padding character.
Notice not all arguments are expressions:
rpad :: Expr a -> Expr Int -> Maybe String -> Expr (Maybe String)
Sometimes the aggregate expressions provided by haskelldb aren't
enough. The 'every' function is an aggregate found on postgresql:
every :: Expr Bool -> ExprAggr Bool
The code below allows these functions to be implemented in terms of
two combinators - func and arg. It is intended to be included in the
Query module, and only func and arg would be exported. Questions I'd like
answered:
* Is the approach over-engineered? Is there a simpler way?
* Is the feature useful?
* Comments on the implementation?
After the code the bodies of the functions above are given.
-- Used to construct type-safe function definitions with
-- arbitrarily typed expressions. See func and arg to use these.
-- The data and instances below are modeled on RecNil/RecCons from
-- HDBRec.
data ExprNil = ExprNil
data ExprCons a b = ExprCons a b
instance ToPrimExprs ExprNil where
toPrimExprs ~ExprNil = []
instance (ExprC e, ToPrimExprs r) => ToPrimExprs (ExprCons (e a) r) where
toPrimExprs ~(ExprCons e r) = primExpr e : toPrimExprs r
class (ExprC e) => MakeFunc e r where
{- | Combinator which can be used to define SQL functions which will
appear in queries. Each argument for the function is specified by the arg
combinator. All arguments must be chained together via function composition.
Examples include:
lower :: Expr a -> Expr (Maybe String)
lower str = func "lower" $ arg str
The arguments to the function do not have to be Expr if they can
be converted to Expr:
data DatePart = Day | Century deriving Show
datePart :: DatePart -> Expr (Maybe CalendarTime) -> Expr (Maybe Int)
datePart date col = func "date_part" $ arg (constant $ show date) . arg col
Aggregate functions can also be defined:
every :: Expr Bool -> ExprAggr Bool
every col = func "every" $ arg col
Note that type signatures are required on each function defined, as func is
defined in a typeclass. Also, because of the implementation of aggregates,
only one argument can be provided.-}
func :: (ExprC e, ToPrimExprs r)
     => String                            -- ^ The name of the function
     -> (ExprNil -> ExprCons (Expr d) r)  -- ^ The arguments, specified via
                                          --   arg combinators joined by composition
     -> e o                               -- ^ The resulting expression
-- | This instance ensures only one argument can be provided to
-- an aggregate function.
instance MakeFunc ExprAggr ExprNil where
func = funcA
funcA :: String -> (ExprNil -> ExprCons (Expr a) ExprNil) -> ExprAggr o
funcA name args = ExprAggr (AggrExpr (AggrOther name)
                                     (primExpr . unExprs $ (args ExprNil)))
where
unExprs :: ExprCons (Expr a) ExprNil -> Expr a
unExprs ~(ExprCons e _) = e
-- | This instance allows any number of expressions to be used as
-- arguments to a non-aggregate function.
instance MakeFunc Expr a where
func = funcE
funcE :: (ToPrimExprs r) => String -> (ExprNil -> ExprCons (Expr e) r) -> Expr o
funcE name args = Expr (FunExpr name (toPrimExprs (args ExprNil)))
-- | Used to specify an individual argument to a SQL function definition. This
-- combinator must be strung together with function composition. That chain
-- must be provided to the func combinator.
arg :: (Expr a) -> (c -> ExprCons (Expr a) c)
arg expr = ExprCons expr
I modeled the code above after the (#) function and the RecCons/RecNil
types from the HDBRec module. In this case, a "list" of expressions is
built through composition with the arg combinator and handed to the
func combinator. Depending on the result type (Expr or ExprAggr), one
of the two instances is selected. Since ExprAggr can only take one
argument, the instance is only defined for (ExprCons x ExprNil). A
really ugly error will result if more than one argument is defined for
an aggregate. Non-aggregates, however, can take any number of
arguments.
The implementation of the motivating functions is then:
trim str = func "trim" $ arg str
rpad str len (Just char) = func "rpad" $
    arg str . arg (constant len) . arg (constant char)
rpad str len Nothing = func "rpad" $ arg str . arg (constant len)
every col = func "every" $ arg col
Thanks in advance to any responders!
Justin
|
|
From: Bjorn B. <bj...@br...> - 2008-03-18 18:12:00
|
On Tue, Mar 18, 2008 at 6:52 PM, Bjorn Bringert <bj...@br...> wrote:
> On Sun, Mar 16, 2008 at 5:32 AM, Gwern Branwen <gw...@gm...> wrote:
> > I think a new release of HaskellDB needs to be made. The current
> > version on Hackage is just too old:
> >
> >   gwern@localhost:1039~/hswebforms>cabal install haskelldb   [2:02AM]
> >   'haskelldb-0.10' is cached.
> >   [1 of 1] Compiling Main ( Setup.hs, dist/setup/Main.o )
> >   Linking dist/setup/setup ...
> >   Configuring haskelldb-0.10...
> >   Warning: No 'build-type' specified. If possible use 'build-type: Simple'.
> >   Preprocessing library haskelldb-0.10...
> >   Building haskelldb-0.10...
> >
> >   src/Database/HaskellDB.hs:87:7:
> >       Could not find module `Text.PrettyPrint.HughesPJ':
> >         it is a member of package pretty-1.0.0.0, which is hidden
> >   cabal: Error: some packages failed to install:
> >   haskelldb-0.10 failed during the building phase.
> >
> > The latest release on Hackage is not even updated for the split-base
> > of GHC 6.8.x.
> >
> > Since the Darcs repository worked fine, I think an sdist tarball needs
> > to be made and uploaded.
>
> Yes, this is a good idea. Does anyone have anything ready to go in a
> new release that they haven't pushed yet? Does the current darcs
> version build and work ok enough for the current users?
>
> > Another issue is that <http://haskelldb.sourceforge.net/> says:
> >
> >   You can also try asking HaskellDB questions in the Haskell IRC
> >   channel, #haskelldb at freenode.net.
> >
> > #haskelldb was wholly unpopulated when I joined. So this line should
> > be removed.
>
> Yes, thanks for pointing that out. I'll dig up my SourceForge
> credentials and try to fix that.
>
> I'm traveling at the moment, so I may take a while to respond to e-mail.

Oh, and another thing: it would be great to have a new HSQL release, so
that we could release a working haskelldb-hsql. Does anyone feel like
prodding the HSQL guys? I seem to recall that HSQL got a new maintainer
recently (check the list archives).

/Björn
|