wdb-users Mailing List for wdb - weather/water database system (Page 3)
Brought to you by: falkenroth, michaeloa
Message archive by month:
2008: May (1), Jun (5), Jul (1), Nov (7)
2010: Dec (6)
2011: Apr (5), May (1), Jun (2), Jul (1), Oct (1), Nov (1)
2012: Mar (5), Apr (38), May (9), Jun (2), Dec (2)
2014: Jun (10), Oct (1)
From: Michael A. <mic...@me...> - 2012-04-12 12:18:59
Hi,

GRIB_API is not required for the configuration of WDB, so there is no --with-gribapi option. I'm guessing that configure is failing a check for a header file in boost::filesystem that is somehow missing, but it is hard to be sure based on just fragments of the log; please post the complete configure output.

Regards,
Michael A.

_______________________________________________
WDB-Users mailing list
WDB...@li...
https://lists.sourceforge.net/lists/listinfo/wdb-users
From: Filippo L. <fil...@gm...> - 2012-04-12 12:06:54
Hi,

This is my configure command:

sudo ./configure --with-postgis=/usr/share/postgresql/9.1/contrib/postgis-1.5/ --with-gribapi=/usr/lib/ --with-boost=/usr/lib/

It runs, but there are a few errors. First:

checking string usability... no
..
checking string usability... no
..

and at the end, the summary:

wdb 1.2.0
Configuration:
-------------------------------------------------------------------------
Database Name:  wdb
Source Code:    .
Host System:    i686-pc-linux-gnu

Prefix:         /usr/local
Binaries:       ${prefix}
Manuals:        ${datarootdir}/man
Data:           ${datarootdir}
System Config:  ${prefix}/etc

CPPFLAGS: -I/usr/include/postgresql -I/usr/include/postgresql/8.4/server -I/usr/include/postgresql -I/usr/lib//include -I./src/blob -I./src/callInterface/api -I./src/callInterface/core -I./src/callInterface/core/extractGridData -I./src/callInterface/core/test -I./src/callInterface/types -I./src/callInterface/types/test -I./src/callInterface/util -I./src/cleaningProgram -I./src/common/configuration -I./src/common/exception -I./src/common/logHandler -I./src/common/math -I./src/common/projection -I./src/common/projection/test -I./src/database -I./src/admin -I./src/admin/operations -I./src/admin/ui/cmdLine -I./test/utility/testWrite -I./test/unit -I./examples -I./examples/C++ -I./examples/sql
LDFLAGS: -L/usr/lib//lib
LIBS: -lproj -lreadline -lpqxx -lboost_program_options -lboost_date_time -lboost_regex -lboost_filesystem -lboost_thread -llog4cpp -lnsl
-----------------------------------------------------------------------

I also get:

configure: WARNING: unrecognized options: --with-gribapi

Then I run make, but it stops at:

File.Tpo -c -o src/admin/operations/libwdbAdmin_a-gribFile.o `test -f 'src/admin/operations/gribFile.cpp' || echo './'`src/admin/operations/gribFile.cpp
src/admin/operations/gribFile.cpp: In constructor 'GribFile::GribFile(const boost::filesystem3::path&)':
src/admin/operations/gribFile.cpp:40:31: error: 'const class boost::filesystem3::path' has no member named 'native_directory_string'
src/admin/operations/gribFile.cpp:42:55: error: 'const class boost::filesystem3::path' has no member named 'native_file_string'
make[1]: *** [src/admin/operations/libwdbAdmin_a-gribFile.o] Error 1
make[1]: Leaving directory "/home/filippo/Documenti/wdb"
make: *** [all] Error 2

I don't understand.
Thank you!
Filippo
From: Michael A. <mic...@me...> - 2012-04-12 10:49:55
WDB requires access to two files, postgis.sql and spatial_ref_sys.sql, which are installed by PostGIS. Configure will try to look for these scripts in $SHAREDIR/contrib (see pg_config to find SHAREDIR). Some versions of the PostGIS package install these files in unexpected locations, so check for them first. Once you have located the files, you should be able to run "configure --with-postgis=PATH", where PATH is the location of the files, and get past that error.

Regards,
Michael A.
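A sketch of that search on a Debian-style system (all paths here are examples; adjust for your installation):

```shell
# Ask PostgreSQL where its share directory is (falls back to a guess
# if pg_config is not on the PATH):
SHAREDIR=$(pg_config --sharedir 2>/dev/null || echo /usr/share/postgresql)
echo "share dir: $SHAREDIR"

# PostGIS packages sometimes install the scripts elsewhere, so search:
find /usr/share -name postgis.sql -o -name spatial_ref_sys.sql 2>/dev/null || true

# Then point configure at the directory that contains both files, e.g.:
# ./configure --with-postgis=/usr/share/postgresql/9.1/contrib/postgis-1.5
```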
From: Filippo L. <fil...@gm...> - 2012-04-12 09:02:50
Hi,

I have some problems following the instructions for the installation of WDB. Beforehand, I installed these packages:

g++; make; git-core; libreadline-dev; automake; libtool; postgresql-8.4; postgresql-server-dev-8.4; postgresql-8.4-postgis; libpqxx3-dev; libboost-dev; libboost-date-time-dev; libboost-program-options-dev; libboost-regex-dev; libboost-filesystem-dev; libboost-thread-dev; liblog4cpp5-dev; libcppunit-dev; libgrib-api-dev; libproj-dev; libglib2.0-dev; libgeos-dev; xmlto

I got this error message, and I don't know what's wrong:

"Cannot find postgis SQL files. Ensure that these files are installed in the share directory of your PostgreSQL installation, or explicitly specify its location using the --with-postgis=PATH option".

Please help me!
Thank you so much in advance,
Filippo
From: Michael A. <mic...@me...> - 2012-04-04 19:48:44
I have now cleaned up the wiki markup in the WDB documentation, so it should now be (mostly) readable. Please note that parts of this documentation are still somewhat out of date. In particular, new functions for the handling of metadata have been added in the latest versions of WDB and still need to be documented.

Regards,
Michael A.
From: Michael A. <mic...@me...> - 2012-04-04 13:05:48
I've now put up wiki versions of some of the WDB documentation on wdb.met.no:

https://wdb.met.no/doku.php?id=manual:start

As mentioned, it is not entirely up to date, and it needs to be cleaned up (html2wiki isn't perfect). But hopefully it can be of a little help, and if you find something missing, let us know (useful for us to improve the docs).

Regards,
Michael A.
From: Michael A. <mic...@me...> - 2012-04-04 12:35:34
GitHub does not build a nice distribution package, so you will need to build configure yourself. This can be done with ./autogen.sh

Regards,
Michael A.
From: Filippo L. <fil...@gm...> - 2012-04-04 10:01:46
Hi,

From https://github.com/wdb/wdb/downloads I downloaded wdb-wdb-wdb_1.1.0-34-g13dd814.zip. It's impossible to install the program: ./configure doesn't run; there is only configure.ac... What's wrong in my installation steps?

Thank you so much, and Happy Easter,
Filippo Locci
From: Michael A. <mic...@me...> - 2012-04-02 14:26:17
Hi,

Unfortunately, we have not managed to update the documentation properly since we moved from SourceForge to GitHub, so most of the user documentation has not been published. If you compile the WDB core project and run "make html", it builds a set of HTML documents. Alternatively, wait a couple of days and I will see about copying the HTML across to our web servers (I am not working full time this week due to Easter).

WDB Core: https://github.com/wdb/wdb

The documentation should hopefully provide answers to a fair number of your questions. The WDB core database basically stores data using 7 dimensions: Data Provider, Place (Geographic Location), Reference Time, Valid Time, Value Parameter, Level, and Data Version. Your data should fit easily into that data model. You will no doubt need to add metadata for data provider and location, and perhaps also for the value parameters. An example project that demonstrates how to add metadata can be seen here: https://github.com/metno/wdb-metadata

Reading and writing data to and from the database is done through an SQL function interface (referred to as WCI); there is fairly complete documentation of this in the core package. The two key functions here are wci.read(...) and wci.write(...). Given a text file, it should be fairly easy to write a script or program that parses the text file and creates appropriately formatted wci.write(...) statements. This is the normally recommended solution; the downside is that it can be a bit slow. Alternatively, there is a loading module referred to as fastload in the wdb-contrib project, which does batch loading of point data into the database from text streams: https://github.com/wdb/wdb-contrib/tree/master/fastload

For reading data, there are essentially two options: through the SQL interface (wci.read) or through the Apache module: https://github.com/wdb/wdb2ts

The latter will probably require some slight modification to make it suitable for your use, but it is an excellent starting point if this kind of web service is what you require (WDB2TS returns data as either XML or comma-separated text). If your needs are more for a traditional web page, then something using SQL in PHP/ASP/Java, or whatever your preferred language is, would probably be better.

Regards,
Michael A.

----- Original Message -----
> Thank you for your answer. We have data from an automatic station on
> Everest, for example temperature, humidity, albedo, and rain/snow from
> the weather station, and air quality (ozone content) from the
> atmospheric station. Moreover, for each station we have metadata. We
> have point data for each station (longitude, latitude, and measures).
> The problem is: reading data from the station (.txt), extracting the
> information, and then populating the DB and sharing it on the web.
> Look at the attached picture for a better explanation (I hope!)
> Best regards,
> Filippo
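A minimal sketch of the "parse the text file and emit wci.write(...) statements" approach described above. The column layout, the parameter and level names, and the wci.write argument list are all assumptions for illustration; check the WCI documentation shipped with the WDB core package for the real signature.

```python
def to_wci_write(line):
    """Turn one semicolon-separated observation line into a hypothetical
    wci.write(...) statement (argument order is illustrative only)."""
    station, lon, lat, valid_time, value = line.strip().split(";")
    return (
        "SELECT wci.write("
        f"{float(value)}::real, "
        f"'POINT({float(lon)} {float(lat)})', "   # place as WKT
        f"'{valid_time}'::timestamptz, "          # valid time
        "'air temperature', "                     # value parameter (assumed name)
        "'height above ground', 2, 2);"           # level (assumed)
    )

# One line from a hypothetical AWS log file:
print(to_wci_write("everest_aws;86.92;27.98;2012-03-30 12:00:00+00;-12.5"))
```

The generated statements can then be piped into psql against the WDB database.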
From: Michael A. <mic...@me...> - 2012-03-30 13:03:55
Hi,

WDB is intended to be a general database for the storage of weather/water data, so my answer to that would be yes. It also has an Apache module/REST interface for extracting time series data from the database (https://github.com/wdb/wdb2ts), though some modification would probably be required for your purposes.

What kind of data are you looking to store: gridded files, individual data points, or both? WDB is currently heavily optimized for gridded data, but not for points (though we are currently working on that).

Regards,
Michael A.

P.S. Due to the Easter holidays, replies on this mailing list may be somewhat slow during the next week.
From: Filippo L. <fil...@gm...> - 2012-03-30 12:22:12
Hi,

I'm a PhD student at the University of Cagliari. My group and I are designing and implementing a meteorological-atmospheric DB for data and metadata from AWSs at high altitudes. Is your WDB an appropriate tool for this aim? We don't produce forecasts; we need only a DB to store the data and the elaborated data, and then share them on the web with the scientific community.

Thank you,
Filippo
From: Michael A. <mic...@me...> - 2011-11-01 10:31:00
Hi,

Unfortunately, this is a result of met.no choosing not to follow the WMO/ECMWF standard in its internal file formats. WMO/ECMWF uses amount as its unit of precipitation (thus kg/m2). Presumably for historical reasons (as I can think of no practical reason), our internal file formats at met.no use thickness (originally mm ~ kg/m2, which makes the SI unit m). Unfortunately, our official FELT-to-CF conversion follows the choices of our file format, which is why, since WDB follows our CF standards where possible, precipitation in our internal files gets converted to lwe_thickness (liquid water equivalent thickness).

This is an issue that needs to be dealt with by met.no's internal metadata organization, as the problem is general (currently, NetCDF files created internally use lwe_thickness, whilst external NetCDF files, or those generated from GRIB, are likely to have precipitation amount). There are similar issues with other parameters in our dataset.

Regards,
Michael A.
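The "mm ~ kg/m2" equivalence above is why the conversion, though inconvenient for udunits, is at least trivial: with liquid water at a nominal 1000 kg/m3, a liquid-water-equivalent depth in metres maps directly to a mass per unit area. A sketch (the function name is mine, not part of WDB):

```python
WATER_DENSITY = 1000.0  # kg/m3, nominal density of liquid water

def lwe_thickness_to_amount(depth_m):
    """Convert lwe thickness of precipitation [m] to amount [kg/m2]."""
    return depth_m * WATER_DENSITY

# 5 mm of rain:
print(lwe_thickness_to_amount(0.005))  # -> 5.0 kg/m2
```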
From: Lisbeth B. <lis...@me...> - 2011-10-31 09:46:25
The unit of "lwe thickness of precipitation amount" is "m", while the other precipitation parameters have unit "kg/m2". See below. This is very inconvenient, since udunits cannot convert from "m" to "kg/m2". Why not use the same unit for all kinds of precipitation?

Lisbeth

valueparametername                            | valueunitname
----------------------------------------------+--------------
local 80th percentile of precipitation amount | kg/m2
local 20th percentile of precipitation amount | kg/m2
lwe thickness of precipitation amount         | m
local 50th percentile of precipitation amount | kg/m2
From: Michael A. <mic...@me...> - 2011-07-01 10:19:36
There is a bug in bilload 1.0.0. Essentially, it uses a deprecated function in libwdbload-1.0.0 which was removed in libwdbload-1.0.1; the error you are seeing is due to running against libwdbload 1.0.1. We will soon release a bilload 1.0.1 which fixes this problem.

Regards,
Michael A.
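For the "any advice for debugging?" question below: the standard toolbox for an undefined-symbol error is ldd, nm, and c++filt. A sketch (the library path is a guess; binutils provides nm and c++filt):

```shell
# Which shared libraries does the binary link against, and are they resolved?
ldd /usr/lib/wdb/bilLoad || true

# Does the installed loader library actually export the missing symbol?
# (the library filename is a guess; check what the wdb packages install)
nm -D --defined-only /usr/lib/libwdbload.so 2>/dev/null | grep LoaderDatabaseConnection || true

# Demangle the symbol to a readable C++ constructor signature:
echo '_ZN3wdb4load24LoaderDatabaseConnectionC1ERKSsS3_iii' | c++filt || true
```

Here the demangled signature would reveal that the binary expects a constructor overload that the installed library version no longer provides, which matches the version-mismatch diagnosis above.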
From: Jan I. P. <ja...@me...> - 2011-06-29 09:53:25
Hi! What has happened to http://wdb.met.no? -JI |
From: Jan I. P. <ja...@me...> - 2011-06-29 09:48:26
# apt-get -y install wdb=1.0.0~rc1-1
# apt-get -y install wdb-bilload=1.0.0-1

# /usr/lib/wdb/bilLoad
/usr/lib/wdb/bilLoad: symbol lookup error: /usr/lib/wdb/bilLoad: undefined symbol: _ZN3wdb4load24LoaderDatabaseConnectionC1ERKSsS3_iii

Any advice for debugging?

-JI
From: Michael A. <mic...@me...> - 2011-05-03 08:16:04
----- Original Message -----
> Concerning the bitstream manipulation in memory or in IO and
> experience with netcdf-files:
> a) you gain only performance if the data you don't read is >>
> blocksize of the disk (usually 8-32k, I've worked with disks with bs = 20M)

Yes, I expect this will be very data/hardware dependent.

> b) usual wdb-fields are < 3000x3000 (max felt-size)

Well, hopefully we will begin to see more data from GRIB2 and perhaps even NetCDF being used in WDB. :-)

> => You will only gain in the slowest dimension since the fastest
> is < 1bs
>
> In netcdf-files I have seen performance boosts when slicing from 3 or
> 4d data. The gain for 2d data was usually very low.
>
> Of course, this might sum up when read very often as usual in wdb. And
> it might be neglected completely by the disk-cache when read very
> often...

There is also the element that the data returned by Postgres is in its binary string format; since that does not support random access, cutting and slicing is not going to be optimal there. Whether this actually makes any real difference in terms of performance, however, is an open question. But it should be relatively easy to do some testing to determine this, once we can schedule some time to work on it.

Regards,
Michael A.
From: Heiko K. <Hei...@me...> - 2011-04-29 15:03:07
|
Concerning the bitstream manipulation in memory or in IO and experience with netcdf-files: a) you gain only performance if the data you don't read is >> blocksize of the disk (usually 8-32k, I've worked with disks with bs = 20M) b) usual wdb-fields are < 3000x3000 (max felt-size) => You will only gain in the slowest dimension since the fastest is < 1bs In netcdf-files I have seen performance boosts when slicing from 3 or 4d data. The gain for 2d data was usually very low. Of course, this might sum up when read very often as usual in wdb. And it might be neglected completely by the disk-cache when read very often... Heiko On 2011-04-29 15:47, Michael Akinde wrote: > Hi, > > That is certainly one way of handling it. In principle, though, there is no reason why a function such as what you suggest here should form part of WCI (WDB). It is essentially a generic "take a binary stream and cut it up" function. This does not mean we can not implement it in the WCI (would no doubt be most practical from a packaging/administration POV), but it does give me pause. > > The other thing that I am concerned about with such an implementation, is doing the bitstream manipulation in database memory. I would much prefer to do the chopping up while retrieving the data from disk - thereby reducing file I/O - rather than retrievie the entire field into memory first. Reduced file I/O combined with lower memory usage should - in principle - make the best performance, which is why I think this is not a trivial feature. > > Perhaps we can do some experimentation with this while working on the features for the climate portal webservices. It could certainly be very worthwhile. I'll put this into the task list. > > Regards, > > Michael A. > > ----- Original Message ----- >> Hi Michael, >> >> I still believe that this function can be implemented in a very simple >> way, by adding a 'SLICE' server-side function, e.g. 
>> >> select SLICE(grid, 17, 349, 18, 138) from wci.fetch(4467622, >> NULL::wci.grid); >> >> This would return a slice of the data from wci.fetch, starting with >> x=17 >> and x-size=349, and y=18 and y-size=138. Just server-side fiddling >> with >> the data, no handling of geometry. >> >> >> If you are brave, you could try to implement a slicing-language as in >> IDL/PDL or Fortran, i.e. SLICE(grid, "17:349,18:138"), but since all >> wci.grids are 2d?, this might be overkill. >> >> Heiko >> >> On 2011-04-26 10:52, Michael Akinde wrote: >>> Hi Heiko, >>> >>> Sorry about the late reply. Had a bit of work with WDB/Yr occupy my >>> time followed by easter. >>> >>> If I understand your suggestion correctly, I expect you are thinking >>> here of extracting the (rectangular) polygon as binary data from the >>> database, the same way you would with an entire field. Is that >>> correct? >>> >>> This would certainly be a neat feature to add, though it is not >>> entirely trivial to implement this easily in the current WCI >>> interface (binary retrieval is done over wci.fetch, and that does >>> not currently support the specification of geography - that is only >>> possible in the independent wci.read call). An extension of the >>> existing interface should be possible, though. We should certainly >>> try and do some testing to determine what magnitude of improvement >>> may be possible by such an implementation when we get the >>> opportunity. >>> >>> Regards, >>> >>> Michael A. >>> >>> ----- Original Message ----- >>>> Hi, >>>> >>>> it would be nice to have a subsetting possibility in wdb. In many >>>> cases >>>> I am only interested in smaller areas of the original data, and >>>> with >>>> the >>>> projection and axes information from wdb, I can describe the >>>> rectangular >>>> subset of the field I'm interested in. 
>>>> >>>> This would greatly reduce the network latency between the client >>>> and >>>> the >>>> server, and it is a common option in client/server protocols for >>>> gridded >>>> data, e.g. OPeNDAP or WCS. >>>> >>>> This is a completely different option than e.g. fetching polygons. >>>> >>>> Best regards, >>>> >>>> Heiko >> >> -- >> Dr. Heiko Klein Tel. + 47 22 96 32 58 >> Development Section / IT Department Fax. + 47 22 69 63 55 >> Norwegian Meteorological Institute http://www.met.no >> P.O. Box 43 Blindern 0313 Oslo NORWAY -- Dr. Heiko Klein Tel. + 47 22 96 32 58 Development Section / IT Department Fax. + 47 22 69 63 55 Norwegian Meteorological Institute http://www.met.no P.O. Box 43 Blindern 0313 Oslo NORWAY |
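Heiko's block-size argument at the top of this thread can be checked with back-of-the-envelope arithmetic. The sketch below (Python, with an assumed 3000x3000 grid of float4 values and a 32k block size — these numbers are illustrative, not taken from any WDB installation) shows why skipping data along the fastest dimension saves no disk blocks, while skipping whole rows does:

```python
# Back-of-the-envelope check: slicing a 2D grid only saves disk I/O
# when the data you skip exceeds the disk block size.
# Assumed (hypothetical) numbers: 3000x3000 grid, float4, 32k blocks.

NX, NY = 3000, 3000          # grid dimensions (x is the fastest dimension)
VALUE_SIZE = 4               # bytes per float4
BLOCK_SIZE = 32 * 1024       # upper end of the 8-32k range mentioned above

row_bytes = NX * VALUE_SIZE  # bytes per row along the fastest dimension

# One full row is smaller than a single disk block, so skipping part of
# a row cannot avoid reading the block that row lives in:
fastest_dim_gain = row_bytes > BLOCK_SIZE

# Skipping whole rows (the slowest dimension) does skip many blocks:
rows_skipped = 2000
blocks_skipped = (rows_skipped * row_bytes) // BLOCK_SIZE

print(row_bytes)         # 12000 -> less than one 32k block
print(fastest_dim_gain)  # False
print(blocks_skipped)    # 732
```

This matches the observation that 2d slicing gains little while slicing 3d/4d data (where the skipped volume is much larger) shows real boosts.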
From: Michael A. <mic...@me...> - 2011-04-29 13:47:28
|
Hi, That is certainly one way of handling it. In principle, though, there is no reason why a function such as the one you suggest here should form part of WCI (WDB). It is essentially a generic "take a binary stream and cut it up" function. This does not mean we cannot implement it in the WCI (it would no doubt be most practical from a packaging/administration POV), but it does give me pause. The other thing that concerns me about such an implementation is doing the bitstream manipulation in database memory. I would much prefer to do the chopping up while retrieving the data from disk - thereby reducing file I/O - rather than retrieve the entire field into memory first. Reduced file I/O combined with lower memory usage should - in principle - give the best performance, which is why I think this is not a trivial feature. Perhaps we can do some experimentation with this while working on the features for the climate portal webservices. It could certainly be very worthwhile. I'll put this on the task list. Regards, Michael A. ----- Original Message ----- > Hi Michael, > > I still believe that this function can be implemented in a very simple > way, by adding a 'SLICE' server-side function, e.g. > > select SLICE(grid, 17, 349, 18, 138) from wci.fetch(4467622, > NULL::wci.grid); > > This would return a slice of the data from wci.fetch, starting with > x=17 > and x-size=349, and y=18 and y-size=138. Just server-side fiddling > with > the data, no handling of geometry. > > > If you are brave, you could try to implement a slicing-language as in > IDL/PDL or Fortran, i.e. SLICE(grid, "17:349,18:138"), but since all > wci.grids are 2d?, this might be overkill. > > Heiko > > On 2011-04-26 10:52, Michael Akinde wrote: > > Hi Heiko, > > > > Sorry about the late reply. Had a bit of work with WDB/Yr occupying my > > time, followed by Easter. 
> > > > If I understand your suggestion correctly, I expect you are thinking > > here of extracting the (rectangular) polygon as binary data from the > > database, the same way you would with an entire field. Is that > > correct? > > > > This would certainly be a neat feature to add, though it is not > > entirely trivial to implement this easily in the current WCI > > interface (binary retrieval is done over wci.fetch, and that does > > not currently support the specification of geography - that is only > > possible in the independent wci.read call). An extension of the > > existing interface should be possible, though. We should certainly > > try and do some testing to determine what magnitude of improvement > > may be possible by such an implementation when we get the > > opportunity. > > > > Regards, > > > > Michael A. > > > > ----- Original Message ----- > >> Hi, > >> > >> it would be nice to have a subsetting possibility in wdb. In many > >> cases > >> I am only interested in smaller areas of the original data, and > >> with > >> the > >> projection and axes information from wdb, I can describe the > >> rectangular > >> subset of the field I'm interested in. > >> > >> This would greatly reduce the network latency between the client > >> and > >> the > >> server, and it is a common option in client/server protocols for > >> gridded > >> data, e.g. OPeNDAP or WCS. > >> > >> This is a completely different option than e.g. fetching polygons. > >> > >> Best regards, > >> > >> Heiko > > -- > Dr. Heiko Klein Tel. + 47 22 96 32 58 > Development Section / IT Department Fax. + 47 22 69 63 55 > Norwegian Meteorological Institute http://www.met.no > P.O. Box 43 Blindern 0313 Oslo NORWAY |
From: Michael A. <mic...@me...> - 2011-04-26 10:27:25
|
Hi Heiko, Sorry about the late reply. Had a bit of work with WDB/Yr occupying my time, followed by Easter. If I understand your suggestion correctly, I expect you are thinking here of extracting the (rectangular) polygon as binary data from the database, the same way you would with an entire field. Is that correct? This would certainly be a neat feature to add, though it is not entirely trivial to implement this easily in the current WCI interface (binary retrieval is done over wci.fetch, and that does not currently support the specification of geography - that is only possible in the independent wci.read call). An extension of the existing interface should be possible, though. We should certainly try and do some testing to determine what magnitude of improvement may be possible by such an implementation when we get the opportunity. Regards, Michael A. ----- Original Message ----- > Hi, > > it would be nice to have a subsetting possibility in wdb. In many > cases > I am only interested in smaller areas of the original data, and with > the > projection and axes information from wdb, I can describe the > rectangular > subset of the field I'm interested in. > > This would greatly reduce the network latency between the client and > the > server, and it is a common option in client/server protocols for > gridded > data, e.g. OPeNDAP or WCS. > > This is a completely different option than e.g. fetching polygons. > > Best regards, > > Heiko > > _______________________________________________ > WDB-Users mailing list > WDB...@li... 
> https://lists.sourceforge.net/lists/listinfo/wdb-users |
From: Heiko K. <Hei...@me...> - 2011-04-26 09:38:53
|
Hi Michael, I still believe that this function can be implemented in a very simple way, by adding a 'SLICE' server-side function, e.g. select SLICE(grid, 17, 349, 18, 138) from wci.fetch(4467622, NULL::wci.grid); This would return a slice of the data from wci.fetch, starting with x=17 and x-size=349, and y=18 and y-size=138. Just server-side fiddling with the data, no handling of geometry. If you are brave, you could try to implement a slicing-language as in IDL/PDL or Fortran, i.e. SLICE(grid, "17:349,18:138"), but since all wci.grids are 2d?, this might be overkill. Heiko On 2011-04-26 10:52, Michael Akinde wrote: > Hi Heiko, > > Sorry about the late reply. Had a bit of work with WDB/Yr occupying my time, followed by Easter. > > If I understand your suggestion correctly, I expect you are thinking here of extracting the (rectangular) polygon as binary data from the database, the same way you would with an entire field. Is that correct? > > This would certainly be a neat feature to add, though it is not entirely trivial to implement this easily in the current WCI interface (binary retrieval is done over wci.fetch, and that does not currently support the specification of geography - that is only possible in the independent wci.read call). An extension of the existing interface should be possible, though. We should certainly try and do some testing to determine what magnitude of improvement may be possible by such an implementation when we get the opportunity. > > Regards, > > Michael A. > > ----- Original Message ----- >> Hi, >> >> it would be nice to have a subsetting possibility in wdb. In many >> cases >> I am only interested in smaller areas of the original data, and with >> the >> projection and axes information from wdb, I can describe the >> rectangular >> subset of the field I'm interested in. 
>> >> This would greatly reduce the network latency between the client and >> the >> server, and it is a common option in client/server protocols for >> gridded >> data, e.g. OPeNDAP or WCS. >> >> This is a completely different option than e.g. fetching polygons. >> >> Best regards, >> >> Heiko -- Dr. Heiko Klein Tel. + 47 22 96 32 58 Development Section / IT Department Fax. + 47 22 69 63 55 Norwegian Meteorological Institute http://www.met.no P.O. Box 43 Blindern 0313 Oslo NORWAY |
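For illustration, here is a client-side sketch of what the proposed SLICE(grid, x0, xsize, y0, ysize) semantics would amount to, assuming a row-major grid of float4 values as returned by wci.fetch. The function name and parameter order are taken from Heiko's example above; the Python helper itself is hypothetical and not part of WCI:

```python
import struct

def slice_grid(raw, nx, x0, xsize, y0, ysize, value_size=4):
    """Extract a rectangular slice from a row-major grid of float4
    values, mirroring the proposed SLICE(grid, x0, xsize, y0, ysize).
    `raw` is the bytea payload; `nx` is the full grid width."""
    out = bytearray()
    for y in range(y0, y0 + ysize):
        # byte offset of (x0, y) in the flat row-major buffer
        start = (y * nx + x0) * value_size
        out += raw[start:start + xsize * value_size]
    return bytes(out)

# Tiny demonstration on a 4x3 grid holding the floats 0..11:
nx, ny = 4, 3
raw = struct.pack('<12f', *range(12))
sub = slice_grid(raw, nx, x0=1, xsize=2, y0=1, ysize=2)
print(struct.unpack('<4f', sub))  # (5.0, 6.0, 9.0, 10.0)
```

A server-side implementation would do the same offset arithmetic, but — as discussed above — ideally while reading from disk, so that skipped rows are never pulled into memory at all.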
From: Heiko K. <Hei...@me...> - 2011-04-15 09:43:31
|
Hi, it would be nice to have a subsetting possibility in wdb. In many cases I am only interested in smaller areas of the original data, and with the projection and axes information from wdb, I can describe the rectangular subset of the field I'm interested in. This would greatly reduce the network latency between the client and the server, and it is a common option in client/server protocols for gridded data, e.g. OPeNDAP or WCS. This is a completely different option than e.g. fetching polygons. Best regards, Heiko |
From: Michael A. <mic...@me...> - 2010-12-17 16:14:31
|
----- Original Message ----- > Hi, > > Postgres defines that the protocol uses network byte order: > http://www.postgresql.org/docs/8.1/static/protocol.html > Binary representations for integers use network byte order (most > significant byte first). For other data types consult the > documentation or source code to learn about the binary > representation. As you note, for other data types (such as ByteA), they pretty much leave it undefined. There is a reason why the PostgreSQL developers generally do not recommend using binary transfer in libPQ. > I don't buy the performance argument! Shuffling bytes is about the > fastest thing a computer can do, orders of magnitude faster than > receiving data over the network, or using the ASCII escaped protocol. > Most data-formats have a strict built-in byte-order, e.g. netcdf, jpeg Shuffling thousands of bytes still takes time and every millisecond counts. Nevertheless I do not disagree with your basic premise - it may very well be that the added operations are cheap enough that we can implement this without a significant hit on performance. We cannot test and implement this at the current time, however, given the resources available to us. The issue has been registered as a ticket on sourceforge (you can sign in to monitor it). If we get some downtime next year, it will be one thing we should look to test. Regards, Michael A. > On 2010-12-17 13:59, Michael Akinde wrote: > > Hi, > > > > Ok - after having thought about the issue a bit more and looked over > > a little code, I now think I understand what is going on. > > > > ----- Original Message ----- > >> the data is not identical to the data from the files. The data in > >> files is usually stored in very special data-types, in the case of > >> grib not even usual computer (IEEE) datatypes. The data stored > >> in WDB is identical to the data-presentation within the > >> import-program, after reading the data. 
> > > > Note I said "extracted from the files", not the actual data format > > in the files. :-) > > > > In any case, I am wrong about that - after checking the code, I can > > see that the FELT loader, for instance, corrects for the endianness > > in the files (as it should). > > > >> But there is no possibility in WDB to determine the > >> data-presentation > >> within the import-program. > > > > Nor should there be. Generally speaking, WDB should always provide > > data in a consistent byte order (though exactly which byte order > > unfortunately seems to be missing from the documentation). This is > > definitely an omission that we need to rectify. > > > > The answer to which byte order is: whatever is the native byte order > > of the database server (this is the same way we deal with other > > similar problems that may affect formatting of data such as locale). > > In other words, given our linux servers, the binary data is stored > > in little endian. > > > > This is not a problem on a heterogeneous network as long as the > > loading programs are operated correctly (i.e., on the same or > > similar > > machine as the database itself). It is the loading programs that > > have the responsibility of ensuring that data is transformed > > correctly when stored in the database. As the loading programs > > are currently set up to run on the database servers, the only > > way that data could be inconsistent is if there is a bug in > > the loading programs (not unlikely... ) or if a user starts writing > > in data directly with the wrong byte order (and the latter we can't > > guard against). > > > > The issues you are seeing, however, are probably due to the use of > > libpq in binary mode. If I understand correctly, libpq in binary > > mode always transfers data in big endian. This means that normal > > SQL data values that you receive will of course be returned in this > > format. 
> > > > The ByteA data that you retrieve from the database is simply > > binary data, though. This means that libpq will return the data > > without modification (it has no idea whether this is BE or LE), > > and of course you receive the data in its "native" format - little > > endian. As WDB/WCI has no way of knowing whether you are requesting > > data using binary or text mode, it is not possible to handle this > > programmatically prior to retrieving the data. > > > > This is a somewhat unexpected inconsistency (I had completely > > forgotten > > that libpq in binary mode treated data this way). > > > > We could probably fix the issue by mandating that WDB always stores > > its binary data using BE, but I am not convinced that this would > > be a good solution as it would add an endian conversion overhead > > to every single write and read operation to and from binary data. As > > such, I think the system will stay as is for now (at least until > > we have time to check that a change such as the above can be done > > without loss of performance). > > > > We should add a WCI function to indicate the endianness of the > > database > > server, if there isn't one built into Postgres, of course. > > > > Hopefully the above clarified the issue somewhat. > > > > Regards, > > > > Michael A. > > > >> On 2010-12-16 21:06, Michael Akinde wrote: > >>> Hi, > >>> > >>> Keep in mind that the fields stored in WDB are essentially > >>> identical > >>> to the data extracted from the files (although we do a little > >>> rotation and scaling to ensure scanmode and units are consistent). > >>> > >>> It is unclear to me whether your problem here is with real or > >>> synthetic data? WDB does not add extra functionality to libpq, so > >>> if > >>> the problem is synthetic (i.e., with libpq), this question is > >>> better > >>> directed to their list. > >>> > >>> If the problem is with data from the grid.oslo database, we should > >>> probably check the source data. 
The bilLoader, which is used to > >>> load > >>> that database, is not a fully developed system, so it may not > >>> distinguish between little endian and big endian files (as I > >>> recall, > >>> BIL files can be stored in both formats, though I do not recollect > >>> that our files differed in this respect from our regular data). > >>> > >>> Regards, > >>> > >>> Michael A. > >>> > >>> ----- Original Message ----- > >>>> Hi, > >>>> > >>>> while testing the binary postgres protocol, we found something > >>>> very > >>>> strange. Contrary to the postgres documentation, we didn't have to > >>>> change > >>>> the byte-order of the floats in the bytea stream returned. > >>>> > >>>> I have now made an additional test, and it is like this: > >>>> > >>>> Returning a ::float4 from postgres delivers an IEEE754 float in > >>>> network > >>>> byte order. (meaning we need to switch bytes when running on > >>>> linux) > >>>> > >>>> Code to restore: > >>>> (http://doxygen.postgresql.org/pqformat_8c-source.html#l00510) > >>>> > >>>> 1) int numberx = ntohl(*((uint32_t *) ptrNumberX)); > >>>> 2) float* numberxf = reinterpret_cast<float*>(&numberx); > >>>> > >>>> > >>>> Returning a bytea field from wdb (which is float) returns floats > >>>> in > >>>> little endian byte order. > >>>> Code to restore: > >>>> > >>>> 2) dataF = reinterpret_cast<float*>(ptrGridBytea); > >>>> > >>>> > >>>> We've had lots of problems when switching from IRIX machines to > >>>> Linux > >>>> because of the problems with the endianness. I've seen that > >>>> PostGIS > >>>> uses > >>>> functions to explicitly give the endianness of the machine > >>>> (http://postgis.refractions.net/docs/ST_AsBinary.html). Is this > >>>> taken > >>>> care of with WDB? Does it work with mixed clients? > >>>> > >>>> Best wishes, > >>>> > >>>> Heiko > >>>> > -- > Dr. Heiko Klein Tel. + 47 22 96 32 58 > Development Section / IT Department Fax. + 47 22 69 63 55 > Norwegian Meteorological Institute http://www.met.no > P.O. 
Box 43 Blindern 0313 Oslo NORWAY |
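The byte-order behaviour discussed in this thread can be illustrated in a few lines of Python. The struct format codes play the role of the C ntohl/reinterpret_cast snippets quoted above; the values themselves are made up for the example:

```python
import struct

value = 1.5

# A float4 column fetched over the binary protocol arrives in network
# byte order (big endian), so a little-endian client must decode it
# with an explicit network-order format ('!'):
wire = struct.pack('!f', value)          # what libpq binary mode delivers
decoded = struct.unpack('!f', wire)[0]

# A bytea field is passed through verbatim: if the server wrote the
# floats in its native little-endian order, decode them as such ('<'):
blob = struct.pack('<3f', 1.0, 2.0, 3.0)
fields = struct.unpack('<3f', blob)

print(decoded)  # 1.5
print(fields)   # (1.0, 2.0, 3.0)
```

This is exactly the inconsistency Michael describes: the protocol defines a byte order for typed values, while bytea is an opaque byte string whose order is whatever the writer used.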