Hello metaf2xml,
my environment settings are:
C:\Perl64 with perl 5, version 22
C:\curl with curl 7.52.1 (x86_64-pc-win32)
C:\Perl64\lib\CGI
C:\Perl64\lib\DBI with DBI-1.636
C:\Perl64\lib\DBD with DBD-Pg-3.5.3
C:\xsltproc
C:\opt\metaf2xml-2.1
PostgreSQL 9.6.2 for Windows x86-64
PostGIS 2.3.2
I am able to run metaf2xml-2.1 properly.
For instance, C:\WINDOWS\system32>perl /opt/metaf2xml-2.1/bin/metaf.pl lang=en format=xml type_synop=wmo src_synop=ogimet msg_synop="07255" gives me XML in the command window.
Please, my questions are:
1) Could I run metaf.pl for a specific date, e.g. 01/01/2017?
2) Could I store the XML in a local directory, e.g. C:\XMLoutput\, via metaf.pl arguments?
3) How can I export the generated XML directly (without storing it in a local directory) into PostgreSQL? I manage to connect via DBI and create tables in Perl, but I don't know how to combine metaf.pl and DBI to copy the XML directly into PostgreSQL.
Thanks a lot in advance for your answers
Gwen
Please forget question 2), the answer is simply:
C:\WINDOWS\system32>perl /opt/metaf2xml-2.1/bin/metaf.pl lang=en format=xml type_synop=wmo src_synop=ogimet msg_synop="07255"> C:\XMLoutput\test_output.xml
Hi Gwen,
thanks for providing your environment settings.
1) Could I run metaf.pl for a specific date, e.g. 01/01/2017?
Yes, you can. Please use the parameter "end_date" with the format "YYYY-MM-DD hh:mm", e.g.:
perl metaf.pl ... end_date="2017-01-01 13:00"
The start date is determined by subtracting the value for the parameter "hours", but it is only used with mode=summary.
However, whether you really get any data for the selected period depends on the data source. For a list of sources please see here.
2) Could I store the XML in a local directory, e.g. C:\XMLoutput\, via metaf.pl arguments?
It's not possible with arguments (primarily, this is a CGI script; a parameter to write to any file on the Web server would not be a good idea), but you can use the redirection feature of the shell (cmd.exe) to redirect the output to a file, e.g.:
perl metaf.pl > output.xml
3) How can I export the generated XML directly (without storing it in a local directory) into PostgreSQL?
Sorry, metaf.pl cannot store data in a database. metaf.pl can only get data from a database with tables/views fitting exactly to the SQL queries in metaf.pl.
Kind regards,
Thomas
Thomas,
Thanks a lot for the great job you did with metaf2xml and your quick answer.
Questions 1) and 2) work fine.
For question 3), I read support request 19 where a connection to MySQL was discussed. You advised to "Use the Perl modules from metaf2xml and the modules DBI and DBD::mysql (should be available in Ubuntu) in one Perl script, with a callback function to copy the data into a usable structure and the prepare()/execute() functions from DBI to INSERT the data directly." Therefore I tried to implement it with PostgreSQL.
I am able to read a local CSV file and copy it into PostgreSQL via "pg_putcopydata", but I was wondering how to implement your above advice.
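A minimal sketch of such a glue script (an illustration, not from the original thread; the synops table layout is the one created later in this discussion, and the connection parameters are placeholders):
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# capture the XML that metaf.pl prints to standard output
# (the command is the one shown earlier in this thread)
my $xml = `perl /opt/metaf2xml-2.1/bin/metaf.pl lang=en format=xml type_synop=wmo src_synop=ogimet msg_synop="07255"`;
die "metaf.pl failed\n" if $? != 0 || !defined $xml || $xml eq '';

# placeholder connection parameters - adjust to your setup
my $dbh = DBI->connect('dbi:Pg:dbname=weather', 'username', 'password',
                       { RaiseError => 1 });

# synops(object_name varchar, object_value xml), as created further below
my $sth = $dbh->prepare('INSERT INTO synops(object_name, object_value)
                         VALUES (?, XMLPARSE(DOCUMENT ?::text))');
$sth->execute('synop_07255.xml', $xml);
$dbh->disconnect;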
I have tested downloading SYNOPs for WMO 07255 for the 24 hours up to 2017-03-02 00:00 ("YYYY-MM-DD hh:mm"), i.e. from 2017-03-01 at midnight to 2017-03-02 at midnight. On the one hand, Ogimet displays 6.2 mm of precipitation in the 24h details.
On the other hand, with metaf2xml I tried mode all (C:\WINDOWS\system32>perl /opt/metaf2xml-2.1/bin/metaf.pl lang=en format=xml mode=all hours=24 type_synop=wmo src_synop=ogimet msg_synop="07255" end_date="2017-03-02 00:00"> C:\XMLoutput\all.xml)
and mode summary (C:\WINDOWS\system32>perl /opt/metaf2xml-2.1/bin/metaf.pl lang=en format=xml mode=summary hours=24 type_synop=wmo src_synop=ogimet msg_synop="07255" end_date="2017-03-02 00:00"> C:\XMLoutput\summary.xml), but neither gave me the same precipitation amount as the one displayed by Ogimet.
Please, what is the difference between mode all and mode summary?
Why don't they lead to the same precipitation amount?
Kind regards
Gwen
Hi Gwen,
I am able to read a local CSV file and copy it into PostgreSQL via "pg_putcopydata", but I was wondering how to implement your above advice.
Pg COPY with CSV files has some limitations:
- you need to escape CSV/COPY special characters in the CSV file: the CSV field delimiter, the CSV line separator, the CSV quoting character, and the COPY NULL character(s),
- COPY cannot store references to rows (i.e. their ids) in other tables, or use references; this information must already be in the CSV,
- maybe you need to delete duplicate data manually before or after the COPY.
But if it works for your setup you don't need prepare()/execute() (for SQL INSERT) or callback functions, you just need to create the CSV files. Maybe you can even use "psql" to COPY them.
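For reference, a minimal sketch of the COPY route via DBD::Pg's pg_putcopydata/pg_putcopyend (the target table and CSV columns are invented for illustration):
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=weather', 'username', 'password',
                       { RaiseError => 1 });

# hypothetical target table: obs(station text, obstime timestamp, precip numeric)
$dbh->do('COPY obs (station, obstime, precip) FROM STDIN WITH (FORMAT csv)');

open my $fh, '<', 'obs.csv' or die "obs.csv: $!";
while (my $line = <$fh>) {
    $dbh->pg_putcopydata($line);   # each line must already be valid CSV
}
close $fh;
$dbh->pg_putcopyend();             # terminate the COPY stream
$dbh->disconnect;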
Anyway, I just saw in your new comment below that you now want to insert XML directly, so this is probably not relevant anymore.
Please, what is the difference between mode all and mode summary?
Sorry, the description in the manual is confusing, this will be fixed in the next version. mode=all does not do anything, only mode=summary switches to summary mode.
Ogimet displays 6.2 mm of precipitation in the 24h details.
...
neither gave me the same precipitation amount as the one displayed by Ogimet.
The 24h details in the "Daily summary" from Ogimet show values which are computed by Ogimet. metaf2xml does not compute any summary values; it can only show values from the original reports. mode=summary for metaf.pl just means that all reports for the selected period are shown, and it changes the output format: only selected values are shown, one row per observation (TAF: forecast period).
The "Daily summary" from Ogimet is probably computed from the average/sum/maximum/minimum of values from previous reports of a station. Note that some of these values may be actually impossible to compute because the reports simply do not contain sufficient information. E.g. if a report from 06:00 contains a maximum for the last 12 hours, it remains unknown if this maximum occurred before or after midnight, and it is impossible to say on which day (from 00:00 to 23:59) it occurred.
Back to the 6.2 mm/24h. This value is wrong. The cause is probably that France reports in BUFR format, and Ogimet converts BUFR messages to the traditional alpha-numeric code (TAC). TAC allows only 2 values for precipitation (one in section 1, one in section 3), so Ogimet does not show them in the "Decoded synop data" view and obviously does not use them for its summaries. The TAC created by Ogimet can be seen on the detail page. If you click on "this BUFR report" at the bottom of the detail page you come to the decoded BUFR page. Click on "Subset 0" and you see that Bourges actually reports 5 values: 0.2 mm/1h, 1.4 mm/3h, 2.0 mm/6h, 5.4 mm/12h, and: 6.8 mm/24h. Tada! That should be the value displayed in the "Daily summary", but Ogimet does not use it.
I even downloaded the BUFR archive for LFPW 2017-03-02 00:00, extracted the file 20170302001211_ISMN01_LFPW_020000.bufr, and decoded it with
(Geo::BUFR and the BUFR tables are required) to be able to compare the BUFR with the converted TAC. (This uncovered a minor bug in 2.1: 020013:360 and 020013:810 are shown as "not processed", which is wrong. It will be fixed in the next version.) You can see which values are missing in TAC, and you can also see another disadvantage of converting BUFR to TAC: values in TAC must be rounded (e.g. visibility) or even encoded as a range (e.g. base of lowest cloud).
Kind regards,
Thomas
Thomas,
Regarding question 3), I found the method to download and extract values from XML with PostgreSQL in the Stack Overflow question xml-data-to-postgres. This method works with XML such as the one in the example from the Stack Overflow question import-xml-files-to-postgresql, but not with the XML from metaf.pl.
I really don't know why; I don't even manage to simply load the metaf XML into PostgreSQL. Is there a trick in the structure?
I should then be able to extract values with the function f_xml_extract_val().
I used:
--create table to store xml docs
CREATE TABLE synops
(
object_name character varying(50) NOT NULL PRIMARY KEY,
object_value xml
);
--insert xml doc
INSERT INTO synops(object_name, object_value)
VALUES ('summary.xml', (
SELECT XMLPARSE(DOCUMENT convert_from(pg_read_binary_file('C:\Program Files\PostgreSQL\9.6\data\xml_data\summary.xml'), 'UTF8'))
));
--extract values with stackoverflow xml-data-to-postgres.
SELECT f_xml_extract_val('//precipAmount/@v', (SELECT object_value FROM synops), 'all');
Best regards
Gwenael
Hi Gwenael,
//precipAmount is under /data/reports, and everything under //reports is in the namespace http://metaf2xml.sourceforge.net/2.1. The full xpath would be
//*[local-name()="precipAmount" and namespace-uri()="http://metaf2xml.sourceforge.net/2.1"]/@v
but you can probably omit the namespace-uri() condition. I haven't found a shorter notation.
Kind regards,
Thomas
Hi Thomas,
To be honest I am impressed by the quality of your help, that is really nice. Thanks a lot.
Both
SELECT f_xml_extract_val('//*[local-name()="precipAmount" and namespace-uri()="http://metaf2xml.sourceforge.net/2.1"]/@v', (SELECT object_value FROM synops), 'all')
and
SELECT f_xml_extract_val('//*[local-name()="precipAmount"]/@v', (SELECT object_value FROM synops), 'all')
work fine.
I was wondering whether to apply a similar path for "hours", but I need only the hours for precipitation.
Please, how can I extract //data/reports/synop/precipitation/timeBeforeObs/hours/@v?
SELECT f_xml_extract_val('//data/reports/synop/precipitation/timeBeforeObs/hours/@v', (SELECT object_value FROM synops), 'all') will not work.
Regarding XML to PostgreSQL, I am also trying a Perl parser such as XML::DOM to insert the values from outside PostgreSQL.
Best regards
Gwenael
Hi Gwenael,
thanks!
I found a shorter notation :-) The Pg documentation says that the 3rd argument of xpath() can be an array of namespace mappings. E.g. with the namespace alias m, the precipAmount can be selected with:
xpath('//m:precipAmount/@v', (SELECT object_value FROM synops), ARRAY[ARRAY['m', 'http://metaf2xml.sourceforge.net/2.1']])
and to get the hours, the first argument for xpath() should be:
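presumably of this shape, by analogy with the precipAmount example (a reconstruction, not the original reply):
'//m:precipitation/m:timeBeforeObs/m:hours/@v'
For illustration, a complete query under the same assumption:
SELECT xpath('//m:precipitation/m:timeBeforeObs/m:hours/@v', object_value, ARRAY[ARRAY['m', 'http://metaf2xml.sourceforge.net/2.1']]) FROM synops;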
You just need to extend the function f_xml_extract_val() to use it.
Kind regards,
Thomas
Hi Thomas,
Below Perl code works fine too ;)
Hi Thomas,
Please, how do you write a METAR request?
and
should lead to the same XML result. However, the METAR request (the 2nd one) is not correct.
Thanks
Gwenael
Hi Gwenael,
to get METAR or TAF for ICAO codes, please use type_metaf=icao.
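A command of the following shape presumably does it (only type_metaf=icao is confirmed above; the other parameter names are guesses by analogy with the SYNOP ones):
perl /opt/metaf2xml-2.1/bin/metaf.pl lang=en format=xml type_metaf=icao src_metaf=ogimet msg_metaf="LFLD"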
Kind regards,
Thomas
Hi Thomas,
for station 07255/LFLD
returns <metar s="LFLD NIL">.
Please, do I have to modify another argument?
Best regards
Gwenael
Hi Gwenael,
no, you did everything right. It's just that LFLD only reported "NIL" every half hour for the 24-hour period you selected, 48 times. I checked some other periods for this airport but I also always get "NIL". Have you ever seen a real METAR from this airport?
Kind regards,
Thomas
Hi Thomas,
SYNOP works fine for this station, therefore I was assuming that METAR would too, but it seems that is not the case.
By the way, I noticed that calling perl /opt/metaf2xml-2.1/bin/metaf.pl directly in a loop leads to empty results even though there are data. Up to now I have no explanation; perhaps the speed of the Ogimet server is at stake.
Well, so if I change method and download a batch of non-decoded SYNOP reports (http://www.ogimet.com/display_synops2.php?lang=en&lugar=07255&tipo=ALL&ord=REV&nil=SI&fmt=html&ano=2017&mes=04&day=14&hora=09&anof=2017&mesf=04&dayf=15&horaf=09&send=send) and then process them to get XMLs, please what are the steps to implement with metaf2xml?
Kind regards
Gwenael
Hi Gwenael,
calling perl /opt/metaf2xml-2.1/bin/metaf.pl directly in a loop leads to empty results
If your requests were for historical data, the cause is that Ogimet.com limits the number of requests for such data.
if I change method and download a batch of non-decoded SYNOP reports
As the messages are embedded in HTML, you need to extract them. Then you can feed (pipe) them to metaf2xml.pl; it can read messages from standard input, and the input must be one complete message per line.
Kind regards,
Thomas
Hi Thomas,
The code below works for one line, but how do I make it work for the complete batch?
I read from the metaf2xml example that I have to use the here-doc (EOF) method, but I still don't manage to implement it.
#perl E:\perl\test1.pl
#!/usr/bin/perl
use strict;
use warnings;
use feature qw{say};

my $string = `curl "http://www.ogimet.com/cgi-bin/getsynop?begin=201704230000&end=201704232359&state=France"`;
#say $string;
my ($firstLine) = $string =~ /((.*?\n){1})/;    # extract the first line
say $firstLine;
$string =~ s/=*//g;    # remove the '=' terminators (= or ==) from the messages
($firstLine) = $string =~ /((.*?\n){1})/;
#say $string;
say $firstLine;
my @fields = split(/,/, $firstLine);
my $filein = $fields[6];    # the message is the 7th comma-separated field
say $filein;
my $fileout = "E:\\perl\\xml_input\\France.xml";
my @join_cmd = ('perl /opt/metaf2xml-2.1/bin/metaf2xml.pl -o ', $fileout, ' "', $filein, '"');
my $join_cmd = join('', @join_cmd);
print "$join_cmd \n";
my $xmlfile = qx/$join_cmd/;
Hi Thomas,
Please, another question: how do you implement the metaf2xml example?
The code below is wrong.
Hi Thomas,
I tried to test the multi-line metaf2xml operation and even tried the example from a previous discussion, but I am missing something.
Please, do I have to add a "SYNOP" prefix or an "AAXX" argument in some way?
Hi Gwenael,
metaf2xml.pl expects each message as a separate command line argument. As space is a delimiter for arguments and the messages are not quoted, each message item becomes a separate argument. You could use:
my $cmd = 'perl /opt/metaf2xml-2.1/bin/metaf2xml.pl -o' . $fileout . ' "' . join('" "', @nondecodedinputreports) . '"';
However, providing messages as command line arguments has several disadvantages:
- It is not performant to fork a Perl script from a Perl script.
- It doesn't scale. With increasing length/number of messages you could hit the limit for the maximum command length/number of arguments of the operating system.
- It's not safe. You need to quote/filter out all characters in the messages which are special to the program which executes the command, i.e. characters for redirection (<, >, |), variable expansion (%, $), and quoting (', "). One malicious message may wipe your entire hard disk.
If you already have the data in a Perl variable, the simplest solution is to use the module metaf2xml::parser directly (decode_ogimet.pl):
It doesn't scale (all messages must be in memory) but you avoid the forking and quoting.
A better solution would be to use
- curl to get the data,
- Perl (1) to extract the messages, and
- metaf2xml.pl to parse them:
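A sketch of such a pipeline (Unix shell quoting; the getsynop CSV layout with the message in the 7th field, terminated by '=', is taken from the script above; that metaf2xml.pl needs no extra option to read standard input is an assumption):
curl "http://www.ogimet.com/cgi-bin/getsynop?begin=201704230000&end=201704232359&state=France" | perl -ne 'chomp; my @f = split /,/, $_, 7; next unless defined $f[6]; $f[6] =~ s/=+$//; print "$f[6]\n";' | perl /opt/metaf2xml-2.1/bin/metaf2xml.pl -o France.xml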
(1) Perl is overkill here, but I assume you don't have sed, awk, grep -o, ...
Last variant, which avoids the scaling issue and the 2 Perl commands: use decode_ogimet.pl above but instead of setting and splitting $nondecodedinput, use "while (<>)" and read the curl output line by line and parse it:
curl ... | perl decode_ogimet.pl
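A sketch of what that script could look like (start_xml with o => \$xml and finish are confirmed below; parse_report() as the per-message function and its 'SYNOP' argument are assumptions):
#!/usr/bin/perl
use strict;
use warnings;
use metaf2xml::parser;

my $xml;
metaf2xml::parser::start_xml { o => \$xml };
while (<>) {                          # read the curl output line by line
    chomp;
    my @f = split /,/, $_, 7;         # getsynop: the message is the 7th field
    next unless defined $f[6];
    $f[6] =~ s/=+$//;                 # strip the trailing '=' terminators
    metaf2xml::parser::parse_report($f[6], 'SYNOP');   # function name assumed
}
metaf2xml::parser::finish;
print $xml;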
For the other discussion entry, please just replace $nondecodedinput with $filein. The example for metaf2xml.pl uses Unix shell syntax, and Windows (cmd.exe) does not understand "<<".
Kind regards,
Thomas
Hi Thomas,
Thanks a lot for all these explanations.
How do you save the XML into a memory variable instead of a local file?
exports to memory, but how can I store all the parsing in a string variable to later implement an XML parsing?
Thanks
Gwenael
Hi Gwenael,
if you want the XML in a variable (e.g. $xml) you can use the script decode_ogimet.pl from above with lines 8 and 10 changed:
- line 8 becomes: my $xml;
- line 10 becomes: metaf2xml::parser::start_xml { o => \$xml };
After invoking metaf2xml::parser::finish, $xml will contain the XML for all messages.
Kind regards,
Thomas
Dear Thomas,
I am installing a virtual machine via puphpet (http://puphpet.com).
By default, the Perl version is v5.22.1 and PostgreSQL is 9.6.3. I want to install metaf2xml.
Please, my questions concern:
1) module dependencies
- Do I have to install CGI, DBI, DBD, curl, xsltproc and Geo::BUFR first?
- Which method do I have to use? For instance, sudo cpanm DBI works fine, while sudo cpanm DBD doesn't, and I have to use cpan[1]> install "DBD::Pg".
2) metaf2xml
- Do I simply have to download https://sourceforge.net/projects/metaf2xml/files/2.2/ and then run sudo rpm -U metaf2xml-2.2-1.noarch.rpm?
3) decode_ogimet.pl libraries
- How do I set the "use" in decode_ogimet.pl to run the script on Ubuntu?
Hi,
1) module dependencies
- Do I have to install CGI, DBI, DBD, curl, xsltproc and Geo::BUFR first?
This depends on what you want to do. If you need just metaf2xml::parser you need none of them, except Geo::BUFR if you want to decode binary BUFR messages with metaf2xml. For other requirements of metaf2xml please see the file INSTALL (or http://metaf2xml.sourceforge.net/install.html) and the section DEPENDENCIES of the relevant man pages (e.g. from http://metaf2xml.sourceforge.net/#Links). If anything is wrong or missing or unclear in the documentation please say so.
- Which method do I have to use?
It is best to check first if the software is pre-packaged for your distribution and use its package management to install it. E.g. for Ubuntu 16.10, DBI is provided as libdbi-perl, DBD::Pg as libdbd-pg-perl, xsltproc as xsltproc, curl as curl.
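E.g. (a standard apt invocation, for illustration):
sudo apt-get install libdbi-perl libdbd-pg-perl xsltproc curl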
For instance, sudo cpanm DBI works fine, while sudo cpanm DBD doesn't, and I have to use cpan[1]> install "DBD::Pg".
There is no single module DBD. DBD stands for "Database dependent"; each DB has its own DBD module. For PostgreSQL this is DBD::Pg, so "sudo cpanm DBD::Pg" should work (but it is not needed if you use the pre-packaged libdbd-pg-perl).
Ubuntu does not use RPM for package management, and it's not recommended to use 2 different package managers. Please see the documentation of "alien" for how to convert the RPM. Or you could download one of the source packages of metaf2xml and use the installation script install.pl, or (if you want to use just metaf2xml::parser) simply use the modules wherever you extracted them, without installation.
3) decode_ogimet.pl libraries
- How do I set the "use" in decode_ogimet.pl to run the script on Ubuntu?
You can use "use lib" with the path where the modules are depending on your setup, or you can set/add the path in the environment variable PERL5LIB. To use the module metaf2xml::parser version 2.2 in a Perl script, use:
use metaf2xml::parser 2.002;
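For example (the extraction path is an assumption; point "use lib" at the directory that contains the metaf2xml/ module directory):
use lib '/opt/metaf2xml-2.2/lib';
use metaf2xml::parser 2.002;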
Kind regards,
Thomas