<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Recent changes to SQLEnterDataIntoDatabase</title><link>https://sourceforge.net/p/carbones/wiki/SQLEnterDataIntoDatabase/</link><description>Recent changes to SQLEnterDataIntoDatabase</description><language>en</language><lastBuildDate>Fri, 20 Mar 2015 14:20:03 -0000</lastBuildDate><atom:link href="https://sourceforge.net/p/carbones/wiki/SQLEnterDataIntoDatabase/feed" rel="self" type="application/rss+xml"/><item><title>SQLEnterDataIntoDatabase modified by Anonymous</title><link>https://sourceforge.net/p/carbones/wiki/SQLEnterDataIntoDatabase/</link><description>&lt;div class="markdown_content"&gt;&lt;h1 id="superceded"&gt;SUPERSEDED&lt;/h1&gt;
&lt;h2 id="these-instructions-are-superceded-by-the-later-instructions-in-howtosetupperiodandvariablebaseddatabasestructure"&gt;&lt;strong&gt;These instructions are superceded by the later instructions in &lt;a class="" href="/p/carbones/wiki/HowToSetupPeriodAndVariableBasedDatabaseStructure"&gt;HowToSetupPeriodAndVariableBasedDatabaseStructure&lt;/a&gt;&lt;/strong&gt;&lt;/h2&gt;
&lt;hr /&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;The data for the graph pages are created from the CCDAS output by offline post-processing. When a fresh dataset is generated, it must be uploaded into the PostgreSQL database &lt;strong&gt;carbones_graph&lt;/strong&gt;. This page explains how to do this. &lt;/p&gt;
&lt;p&gt;It is also necessary to follow these steps when setting up the graph database on a machine for the first time, as described under &lt;a class="" href="../DevQuickStart_GraphDatabase"&gt;DevQuickStart_GraphDatabase&lt;/a&gt;. &lt;/p&gt;
&lt;h2 id="details"&gt;Details&lt;/h2&gt;
&lt;p&gt;The post-processing creates the data in text-file format. You will need to obtain these files; currently the latest files are on the CERC network at the following location. &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="nl"&gt;P:&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="n"&gt;FM&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="n"&gt;FM862_CARBONES&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="n"&gt;Web&lt;/span&gt; &lt;span class="n"&gt;site&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="mi"&gt;20110506&lt;/span&gt;&lt;span class="n"&gt;_20yearGraphData&lt;/span&gt;\
&lt;/pre&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Copy the text files to a local directory. &lt;/li&gt;
&lt;li&gt;It is advisable to avoid deeply nested directories or directory names containing spaces. &lt;/li&gt;
&lt;li&gt;You will need to change the paths in the SQL scripts below from &lt;code&gt;E:/PostgreSQL/Data/&lt;/code&gt; to the correct path to the new data on your machine. &lt;/li&gt;
&lt;li&gt;It is necessary to use forward slashes &lt;code&gt;/&lt;/code&gt; as path delimiters even on Windows. &lt;/li&gt;
&lt;/ul&gt;
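&lt;p&gt;The path fix above can be scripted rather than done by hand. The sketch below assumes the COPY statements have been saved in a file (the file name &lt;code&gt;load_graphdata.sql&lt;/code&gt; and the target directory are hypothetical; substitute your own): &lt;/p&gt;

```shell
# Illustrative only: rewrite the hard-coded data directory in the SQL scripts
# in one pass instead of editing each COPY statement individually.
# 'load_graphdata.sql' and '/home/user/carbones-data' are made-up names.

# Create a small sample script (in practice you would start from the real one).
cat > load_graphdata.sql <<'EOF'
COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fnee.txt'
WITH DELIMITER ','
CSV HEADER;
EOF

# Replace the old path with the local one. Keep forward slashes: PostgreSQL
# expects them as path delimiters even on Windows.
sed 's|E:/PostgreSQL/Data|/home/user/carbones-data|g' load_graphdata.sql > load_local.sql
```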
&lt;p&gt;1. Enter the data (in CSV format): &lt;/p&gt;
&lt;p&gt;This SQL will load the data. &lt;/p&gt;
&lt;p&gt;On a Windows machine we have found it best not to execute all of these statements in one operation, but only two or three at a time, particularly for the "raw" data, which are very large. &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Babove.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Bbelow.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Btot.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fbbur.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fbbur_raw.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Ffos.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Ffos_raw.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fgpp.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fgppresp.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fgppresp_raw.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fgpp_raw.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fle.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fle_raw.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fnatural.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fnatural_raw.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fnee.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fnee_crop.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fnee_crop_raw.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fnee_for.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fnee_for_raw.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fnee_gras.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fnee_gras_raw.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fnee_raw.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Foce.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Foce_raw.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fresp.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Fresp_raw.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Ftot.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/Ftot_raw.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/LAI.txt'
WITH DELIMITER ','
CSV HEADER;

COPY graphdata_new(startdate, "value", series_id, region_id)
FROM 'E:/PostgreSQL/Data/SOC.txt'
WITH DELIMITER ','
CSV HEADER;
&lt;/pre&gt;&lt;/div&gt;
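&lt;p&gt;Since the statements differ only in the file name, they can also be generated from the list of data files instead of being maintained by hand. A minimal sketch, with a hypothetical output file name, a shortened file list, and an illustrative data directory: &lt;/p&gt;

```shell
# Generate the repetitive COPY statements from a list of data files.
# DATA_DIR and the (shortened) file list are illustrative; substitute the
# real directory and the full set of text files.
DATA_DIR='E:/PostgreSQL/Data'
for f in Fnee Fnee_raw LAI SOC; do
  cat <<EOF
COPY graphdata_new(startdate, "value", series_id, region_id)
FROM '${DATA_DIR}/${f}.txt'
WITH DELIMITER ','
CSV HEADER;

EOF
done > load_graphdata.sql
```

The resulting &lt;code&gt;load_graphdata.sql&lt;/code&gt; can then be executed a few statements at a time, as recommended above.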
&lt;p&gt;2. Run the ANALYZE command &lt;/p&gt;
&lt;p&gt;The PostgreSQL documentation &lt;a class="" href="http://www.postgresql.org/docs/9.0/static/sql-analyze.html" rel="nofollow"&gt;advises&lt;/a&gt; running the ANALYZE command after making major changes to the contents of a table, so that the query planner has up-to-date statistics and query performance does not degrade. &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;ANALYZE&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This command can also be run from pgAdmin by right-clicking a table (or database) and choosing &lt;code&gt;Maintenance...&lt;/code&gt;. The &lt;code&gt;graphdata_new&lt;/code&gt; table will need to be analyzed. &lt;/p&gt;
&lt;p&gt;3. That's all! &lt;/p&gt;
&lt;h2 id="hints"&gt;Hints&lt;/h2&gt;
&lt;p&gt;It takes a long time to import the data. You don't necessarily need to import it all to check that things are working properly. &lt;/p&gt;
&lt;p&gt;You can start just with "Fnee", which is the daily/monthly/yearly data for "Net ecosystem flux". You won't be able to test any other variables or the 3-hourly data, but you can see whether the graphs are working on your local machine. &lt;/p&gt;
&lt;p&gt;You can test that the stored procedure is running correctly by executing a SQL query like this: &lt;code&gt;select * from linegraph('Fnee', 'day', 'mean', 'Global', '1990-05-01', '1990-05-10');&lt;/code&gt;&lt;/p&gt;
&lt;h2 id="background-information"&gt;Background information&lt;/h2&gt;
&lt;p&gt;These SQL queries were created automatically. The code snippets for this have been saved at &lt;a class="" href="../CodeSnippetCreateSQL"&gt;CodeSnippetCreateSQL&lt;/a&gt;. They may be useful if we need to generate new SQL for additional variables. &lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Anonymous</dc:creator><pubDate>Fri, 20 Mar 2015 14:20:03 -0000</pubDate><guid>https://sourceforge.net4d115a0c596a40228e4419bb0a19b5c9b149dfd1</guid></item></channel></rss>