I figured out how to massage the news.dat file to reimport it as CSV (using phpMyAdmin) into the pgv_news table. Essentially there are five fields per row, separated by semicolons and surrounded by double quotes, like "field1";"field2"; and so on, with no trailing semicolon. Each row starts on a new line. news.dat contains other information and runs the rows together, so it takes some massaging to get it into a suitable CSV format. The phpMyAdmin import function is understandably picky about blank lines (use a <P> instead) and about embedded double quotes and semicolons in the text, so those have to be scrubbed beforehand.
It takes some work, but if you've lost a lot of news or journal articles you wanted to keep, or you have some old ones you'd like to add back in (just give them unused index numbers), it's not hard at all. I'd recommend exporting your existing pgv_news table as plain CSV with the default delimiters and comparing that to the news.dat file before you try it. Of course, if you take good and frequent database backups, you shouldn't need to do this. But it's nice to be able to.
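To make the scrubbing concrete, here's a rough sketch in Python of the kind of cleanup described above: embedded quotes and semicolons get replaced, blank lines become <P>, and the result is written as a semicolon-delimited, fully double-quoted CSV. The field names, the sample row, and the exact structure of news.dat are my assumptions, not anything guaranteed by PGV; adapt them to what your own file actually contains.

```python
# Hypothetical sketch of the scrub-and-export step. The five-field row
# layout and the sample values are assumptions; check them against an
# export of your own pgv_news table first.
import csv

def scrub(text):
    """Remove the characters the phpMyAdmin import chokes on."""
    text = text.replace('"', "'")   # embedded double quotes -> single quotes
    text = text.replace(';', ',')   # embedded semicolons -> commas
    # blank lines break the import; substitute a <P> tag instead
    return '\n'.join(line if line.strip() else '<P>'
                     for line in text.split('\n'))

# One list of five fields per news item, pulled out of news.dat by hand
# or by whatever parsing your copy of the file needs.
rows = [
    ["1", "user1", "gedcom1", "1181505600",
     scrub('Title\n\nBody text; with "quotes" and semicolons')],
]

with open('pgv_news.csv', 'w', newline='') as fh:
    writer = csv.writer(fh, delimiter=';', quotechar='"',
                        quoting=csv.QUOTE_ALL, lineterminator='\n')
    writer.writerows(rows)
```

The resulting file has the "field1";"field2" shape described above, which you can then diff against a plain-CSV export of the existing table before importing.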
It'd be great if you'd capture what you learned (with queries and so forth) in the Wiki.
I... am wiki-illiterate.
But I guess it's time I learn.
There is a bit of a learning curve, but it's not bad. Write up your tutorial in a text editor first and use only the most basic Wiki formatting there. Add a space before each line in a block of code, for instance, so it displays as preformatted. Other tags use = signs: =Main Heading=, ==Subheading==, and so forth. Then paste it into a new Wiki doc and fine-tune it there. I got through it, so you can too. :)
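Putting those tags together, a minimal draft page might look like this (the section names are just placeholders, and the exact markup dialect may vary by wiki, so double-check against your wiki's own help page):

```
= Recovering lost news items =

== Step 1: export the existing table ==

Ordinary paragraph text needs no markup at all.

 Lines indented with a single leading space
 display as preformatted code.
```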
Okay, thanks for the encouragement. I think I've submitted something.