From: Jojakim S. <JS...@de...> - 2002-11-21 11:25:03
I have a question regarding the FirebirdSql.Data.NGDS.GDS.FixNull(XSQLVAR
xsqlvar) method:
When executing a statement with an input parameter of value DBNull and type
FbType.VarChar or FbType.Char, the FixNull function sets up (as I understand
it) a dummy (null) value which is transferred to the server. The actual
indication that the parameter is DBNull is done by setting xsqlvar.sqlind
= -1.
private void FixNull(XSQLVAR xsqlvar)
{
    if ((xsqlvar.sqlind == -1) && (xsqlvar.sqldata == null))
    {
        switch (xsqlvar.sqltype & ~1)
        {
            case SQL_TEXT:
                xsqlvar.sqldata = new byte[xsqlvar.sqllen];
                break;

            case SQL_VARYING:
                xsqlvar.sqldata = new byte[0];
                break;
            // ...
Why are the dummy values FixNull uses for SQL_TEXT and SQL_VARYING arrays
of bytes and not strings? Since WriteSQLDatum casts xsqlvar.sqldata to a
string, the result is an InvalidCastException.
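To illustrate the mismatch (a standalone snippet, not provider code):

    object sqldata = new byte[0];  // what FixNull assigns for SQL_VARYING
    string s = (string)sqldata;    // throws InvalidCastException at runtime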
I did a quick test and assigned xsqlvar.sqldata = string.Empty in the
SQL_VARYING case, and everything seems to work now.
But I'm not sure about the SQL_TEXT case. Is it required to always write the
exact number of bytes for a CHAR column? And what about a column with
UNICODE_FSS encoding: do I have to write the number of characters declared
in the table creation statement, or the number of octets reserved for the
column (for UNICODE_FSS this is 3 times the declared value)?
If the latter is the case, I think the way WriteSQLDatum works isn't
correct either:
case SQL_TEXT:
    if (((string)sqldata).Length != xsqlvar.sqllen)
    {
        throw new GDSException(isc_rec_size_err);
    }
    db.output.WriteOpaque(
        Encoding.Default.GetBytes((string)sqldata),
        xsqlvar.sqllen);
    break;

case SQL_VARYING:
    if (((string)sqldata).Length > xsqlvar.sqllen)
    {
        throw new GDSException(isc_rec_size_err);
    }
    db.output.WriteInt(((string)sqldata).Length);
    db.output.WriteOpaque(
        Encoding.Default.GetBytes((string)sqldata),
        ((string)sqldata).Length);
    break;
While debugging I found that xsqlvar.sqllen contains the amount of octets
reserved for the column's data; for textual columns with UNICODE_FSS
encoding this is 3 times the declared character count. So comparing
((string)sqldata).Length with xsqlvar.sqllen as the condition for throwing
an exception isn't correct.
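A concrete instance (numbers assume a CHAR(10) CHARACTER SET UNICODE_FSS
column; again only a snippet for discussion):

    // For CHAR(10) UNICODE_FSS the server reports sqllen = 30 (3 * 10 octets).
    string sqldata = "abcdefghij";              // 10 characters, fits the column
    int sqllen = 30;                            // octets reserved for the column
    bool wouldThrow = sqldata.Length != sqllen; // true, so isc_rec_size_err is
                                                // thrown although the value fits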
Likewise, just writing out the bytes that result from
Encoding.Default.GetBytes() is not correct, because a string of just 10
ASCII chars in the range 0 to 127 results in a byte array of length 10, yet
in the SQL_TEXT case WriteOpaque is directed to write out 30 bytes.
In the SQL_VARYING case, if you write a Unicode char with a 3-byte UTF-8
mapping, you only write 1 byte, as string.Length will return 1.
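The SQL_VARYING problem in isolation (standalone demo):

    using System;
    using System.Text;

    class LengthDemo
    {
        static void Main()
        {
            string s = "\u20AC";                     // the Euro sign: one .NET char
            byte[] utf8 = Encoding.UTF8.GetBytes(s); // but three octets in UTF-8
            Console.WriteLine(s.Length);             // 1
            Console.WriteLine(utf8.Length);          // 3
        }
    }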
Another thing: the usage of Encoding.Default should be replaced by
Encodings.GetFromFirebirdEncoding(connection.Charset).
So the key points are:
- When writing out a string, is it necessary to write the number of
characters or the number of bytes the string needs in the connection's
encoding?
- Use the correct encoding!
- Remove inconsistencies in the type of data stored in xsqlvar.sqldata
for textual data types. Current state: ReadSQLDatum uses the byte[] version,
and the byte[] to string conversion is done in FbResultSet; WriteSQLDatum
expects strings; FixNull requires WriteSQLDatum to expect byte[]. (A rough
sketch of corrected write cases follows below.)
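For discussion, here is a sketch of what corrected SQL_TEXT/SQL_VARYING
write cases could look like: encode first, then do all length checks in
octets. The helper name WriteTextDatum and the space padding for CHAR are
my assumptions, not verified behavior; WriteOpaque, WriteInt, GDSException
and isc_rec_size_err are the names from the code quoted above.

    // Sketch only. enc would come from
    // Encodings.GetFromFirebirdEncoding(connection.Charset).
    private void WriteTextDatum(XSQLVAR xsqlvar, string value, Encoding enc)
    {
        byte[] bytes = enc.GetBytes(value);

        // Compare octets against octets, not characters against octets.
        if (bytes.Length > xsqlvar.sqllen)
        {
            throw new GDSException(isc_rec_size_err);
        }

        switch (xsqlvar.sqltype & ~1)
        {
            case SQL_TEXT:
                // CHAR: always emit exactly sqllen octets, padding with
                // spaces (0x20). The padding character is an assumption.
                byte[] padded = new byte[xsqlvar.sqllen];
                for (int i = 0; i < padded.Length; i++)
                {
                    padded[i] = 0x20;
                }
                Array.Copy(bytes, padded, bytes.Length);
                db.output.WriteOpaque(padded, xsqlvar.sqllen);
                break;

            case SQL_VARYING:
                // VARCHAR: emit the octet count, then exactly that many octets.
                db.output.WriteInt(bytes.Length);
                db.output.WriteOpaque(bytes, bytes.Length);
                break;
        }
    }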
Excuse the long message. Any comments are welcome. I would take the time to
correct the behavior once agreement is reached on what is correct.
Thanx,
Joja