udf-dlm-develop Mailing List for UDF-DLM: IDL interface for UDF format (Page 3)
From: Eduardo S. <es...@la...> - 2000-01-27 21:47:05

Could someone please confirm for me whether "tables" and "operations" are
associated with each other? For instance, if a PIDF "unit" defines two
tables (10 and 2) and two operations (7 and 5), is table 10 associated with
operation 7, and table 2 with operation 5?

My fpidf.pl script (which generates a readable display out of the PIDF)
currently groups tables with tables, and operations with operations:

    7 [ -1.0, 64.0 ] : Unitless : Telemetry : Raw
        10,2
        7,5

...but I've just checked in a change that groups each table with its
"associated" (?) operation:

    7 [ -1.0, 64.0 ] : Unitless : Telemetry : Raw
        t10,op7
        +2,5

Does the latter make more sense?

Thanks in advance,
^E

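A minimal C sketch of the pairwise reading Eduardo asks about (the array
names are illustrative, not the real PIDF structures):

    #include <stdio.h>

    int main(void)
    {
        int tbls[] = { 10, 2 };   /* tables declared by the unit     */
        int ops[]  = {  7, 5 };   /* operations declared by the unit */

        /* pairwise interpretation: tbls[i] goes with ops[i] */
        for (int i = 0; i < 2; i++)
            printf("t%d,op%d\n", tbls[i], ops[i]);   /* t10,op7  t2,op5 */
        return 0;
    }
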
From: Ed S. <es...@la...> - 2000-01-27 17:24:09

More info: I now think the problem is a reversal between "tables" and
"operations" in the PIDF. I fixed libbase_udf/conv_units.c so it uses
unsigned chars, and it promptly barfed back with

    CONVERT_TO_UNITS: INVALID TABLE NUMBER

That seems quite reasonable behavior. A look at the VIDF, then, showed
*table* 0,1 with *operation* 150. Is this making sense to anyone?

^E

From: Ed S. <es...@la...> - 2000-01-27 17:11:56
|
Chris, [ summary: Ed believes the bug is in the declaration of PidfStr.h:Tbls ] It's dying in convert_to_units(), a UDF routine. Some observations: * it only dies when using units "7" and "8", both of which refer to a table or operation called "150". * Accessing the data with /RAW, or by hardcoding unit "4" (which does not refer to the "150"), results in consistent success. * the SEGV only happens on the second or third call to convert_to_units. A bit of further prodding leads me to believe that the problem is with the PidfStr.h include file. It declares "ByTe_1 *Tbls", instead of "u_ByTe_1". Since the values aren't explicitly declared unsigned, the compiler seems to be expanding 150 to -106 (signed int). My setup: -- observations are using IMHIH and IMHIMCPL vinsts. -- data files is the 200001523 dataset from ifeds5 (I was unable to find 199919012 on ifeds5). -- VIDFs and PIDFs are the latest from pluto: 200002114 Thanks, ^E |
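A minimal, self-contained C demonstration of the failure mode Ed suspects: a
table number of 150 stored in a plain signed char becomes -106 when widened
to int (variable names here are illustrative, not from PidfStr.h):

    #include <stdio.h>

    int main(void)
    {
        signed char   s = 150;  /* out of range for signed char: on typical
                                   two's-complement systems this becomes -106 */
        unsigned char u = 150;  /* always 150 */

        printf("signed:   %d\n", s);   /* -106 */
        printf("unsigned: %d\n", u);   /* 150  */
        return 0;
    }
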
From: Kusterer, M. <Mar...@jh...> - 2000-01-27 16:11:46

Hi,

When we upgraded to PIDF/VIDF versions 200001815 and 200002114, we found
that UDF DLM library versions udf-dlm-0.50 and 0.51 would core dump in the
function UDF_Read. I can get those versions of the PIDF and VIDF to work
with the udf-dlm-0.24 version.

I believe that the changes for PIDF/VIDF version 200002114 are:

    HENA: the H, HE, and CNO tables to get to flux were changed as
    requested. All changes in the VIDF.

Can anyone give me an idea of how to proceed to fix this? I do not feel that
I am in any way up to speed on the DLM yet.

thanks,
martha

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Martha Bolz Kusterer   4-178      Balt. (443) 778-7276 or
Space Department                  Wash. (240) 228-7276 (office)
Applied Physics Laboratory        (443) 778-1093 (FAX)
11100 Johns Hopkins Road          mar...@jh... (Internet)
Laurel, MD 20723-6099

From: Ed S. <es...@la...> - 2000-01-24 22:04:42

Steve,

I'm having a hard time understanding your note -- but that's because I don't
really understand all the UDF "units" lingo. One thing you said, though,
concerns me:

> Two units are convenient for housekeeping, because I don't have to modify
> Chris's unit definitions. The first unit (Chris's) isolates the
> housekeeping byte. The second unit (which I added) multiplies
> the byte by the appropriate scaling constant.

Is it your intent to invoke convert_to_units() with the first set of tables,
then with the second? I don't think that's how it's supposed to work (but
I'm not certain). My understanding is that a "unit" thingy is
self-sufficient, and you can ask for unit "1" or "2", but not "1, then 2".
In other words, if you're adding a table/ops thingy that converts telemetry
units to volts, I think you need to include the first step (conversion to
telemetry) in your units definition.

Support for my belief comes from the MENA housekeeping PIDF (ImMnHkA), which
lists duplicate units for some "sensors", e.g.:

    s#  name          v#  units
    --  ----          --  -----
    24  Min CPU Temp  15  1,6

...and those units are defined as follows:

    #  range            labels                            tables / ops
    -  -----            ------                            ------------
    1  [ 0.0, 260.0 ]   : Unitless : Telemetry : Raw      8,0    7,5
    6  [ -55.0, 95.0 ]  : Degrees C : Temperature : Temp  8,0,9  7,5,0

See how unit "6" duplicates the 8,0 and 7,5 "steps" of unit 1?

Of course, I'm not really sure. I'm not sure about _anything_ with UDF. If
I'm wrong, then please let me know, because this will affect every
instrument!

Also, here are some suggestions for things you can try:

1) Use my "fpidf.pl" script, included in the CVS repository. The PIDF file
   is an unreadable and unmaintainable abomination. My script tries to
   present it in a human-readable form. Perhaps it will be of help to you in
   defining your units. Although the output isn't labeled, since this is
   just my personal hack, it should be fairly easy to read if you're
   familiar with PIDF.

2) Try putting some printf()s just before the main convert_to_units() call,
   something such as:

       printf("convert_to_units: units=%d, Ntbls=%d\n",
              fh->units[sensor], pu[fh->units[sensor]].NTbls);

   ...and see if you get what you expect.

3) If all else fails, could you mail me a copy of your PIDF?

Please keep me posted. This could be a serious issue, if I'm wrong in my
handling of units.

Thanks, and best wishes,
^E

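A minimal C sketch of the "self-sufficient unit" reading Ed argues for, with
hypothetical operation codes and constants standing in for real table
lookups (the actual convert_to_units() in libbase_udf differs in detail):

    #include <stdio.h>

    /* Illustrative operation codes -- not the real UDF ones. */
    enum { OP_ADD = 0, OP_MUL = 5 };

    /* One step of a unit's chain; `constant` stands in for a table lookup. */
    struct step { double constant; int op; };

    /* A unit is self-sufficient: apply its entire chain to the raw value. */
    double apply_unit(const struct step *chain, int n, double raw)
    {
        double v = raw;
        for (int i = 0; i < n; i++)
            v = (chain[i].op == OP_MUL) ? v * chain[i].constant
                                        : v + chain[i].constant;
        return v;
    }

    int main(void)
    {
        /* "temperature" repeats the telemetry step, then adds its own,
         * just as unit 6 above repeats the steps of unit 1             */
        struct step telemetry[] = { {   1.0, OP_MUL } };
        struct step degrees_c[] = { {   1.0, OP_MUL },
                                    {   0.5, OP_MUL },
                                    { -55.0, OP_ADD } };

        printf("telemetry: %g\n", apply_unit(telemetry, 1, 205.0)); /* 205  */
        printf("deg C:     %g\n", apply_unit(degrees_c, 3, 205.0)); /* 47.5 */
        return 0;
    }
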
From: Steve G. <ge...@ss...> - 2000-01-24 21:32:43

I've edited the FUV housekeeping PIDF, getting all the volts and amps
properly scaled. The UDF retrievals come out OK when I use a C program,
matching what we get from our LZ-based displays.

Harald's program using IDL and the DLM does not yield the correct results.
For example, "DPU_P5VMON" returns 8.507e+07 instead of 5.02 -- it's our
5-Volt service. Large numbers like these are being returned for all voltages
and amperages, but some housekeeping items do come out right. The difference
is whether the PIDF specifies 1 unit or 2. When 2 units are specified, the
garbage result comes back; when 1 unit is specified, we get the right
result. To check this, I modified the PIDF to set 1 unit for DPU_P5VMON.
This yields a value of 205, which is correct for the raw, unscaled byte. One
unit works; two units don't.

I already made this kind of mistake in my C interface to UDF. I had to
change my code to copy tables from the first unit, then append more tables
from the second unit, or as many units as there are in the PIDF.

Two units are convenient for housekeeping, because I don't have to modify
Chris's unit definitions. The first unit (Chris's) isolates the housekeeping
byte. The second unit (which I added) multiplies the byte by the appropriate
scaling constant.

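A minimal C sketch of the fix Steve describes for his own interface --
concatenating the table/operation steps from every unit a sensor lists. The
structure and names here are assumptions, not the actual UDF types:

    #include <string.h>

    #define MAX_STEPS 16

    struct unit_steps {
        int ntbls;
        int tbls[MAX_STEPS];
        int ops[MAX_STEPS];
    };

    /* Append src's tables/ops onto dst, as when a sensor lists two
     * (or more) units in the PIDF.                                 */
    void append_unit(struct unit_steps *dst, const struct unit_steps *src)
    {
        if (dst->ntbls + src->ntbls > MAX_STEPS)
            return;  /* sketch only: real code should report an error */
        memcpy(dst->tbls + dst->ntbls, src->tbls, src->ntbls * sizeof(int));
        memcpy(dst->ops  + dst->ntbls, src->ops,  src->ntbls * sizeof(int));
        dst->ntbls += src->ntbls;
    }
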
From: Ed S. <es...@la...> - 2000-01-21 20:24:23

Steve,

Good to hear from you! Thanks for the note... but:

> These are the starting and stopping azimuth angle of the
> instrument, as of the time the data was taken.

My question -- and I'm probably just not phrasing it right -- is whether
there's just one "start_az" and "stop_az" value for each record(*), or if
there's one for each "sensor"/"ancillary", or matrix row, or whatever.

(*) I use "record" in the sense of "one data struct returned by UDF_READ()".
That is, basically an iteration over the read_drec() function with the
"Forward" bit off.

For example: IMMIMAGE comprises 54 "sensors", each of which is a vector of
length 5. Does one START_AZ apply to all these values, or could there be 5
different START_AZes, or 54?

Am I making any sense?

Thanks again, and regards,
^E

From: Steve G. <ge...@ss...> - 2000-01-21 20:16:38

> 3. Every data structure contains [START_AZ and STOP_AZ]

These are the starting and stopping azimuth angles of the instrument, as of
the time the data was taken. Chris has the official offset angle from
spacecraft +X to each instrument. Knowing the spin rate, he can calculate an
instrument azimuth for any given time. Because FUV does
time-delayed-integration (TDI), these azimuths don't mean much for image
data, but they could be of some use for housekeeping.

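A minimal sketch of the calculation Steve describes; the function name,
arguments, and units (degrees, seconds) are assumptions -- only the
offset-plus-spin-rate idea comes from the message:

    #include <math.h>

    /* Hypothetical helper: instrument azimuth at time t, given the fixed
     * offset from spacecraft +X (degrees) and the spin rate (deg/s).    */
    double instrument_azimuth(double offset_deg, double spin_dps,
                              double t, double t_ref)
    {
        double az = fmod(offset_deg + spin_dps * (t - t_ref), 360.0);
        return (az < 0.0) ? az + 360.0 : az;   /* normalize to [0, 360) */
    }
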
From: Ed S. <es...@la...> - 2000-01-20 23:10:52

>> Are they one scalar value per record?
>
> Yes, it is always one scalar per record, even for the matrix

Whoops, my bad. I wasn't clear. I know my code is returning it as a scalar
per record... that's what I'm assuming, so that's what my code does. My
question was actually to the list, asking if anyone knew what the start_az
and stop_az were *supposed* to be.

Does anyone know?

Thanks,
^E

From: Harald F. <hf...@ap...> - 2000-01-20 23:06:44

> I'll bet good money that the "16" you're seeing is intended to be
> the bit length...

We will look into this, and I will let you know as soon as you owe us some
money, or we owe something to you (I actually fear the latter will be the
case).

> Are they one scalar value per record?

Yes, it is always one scalar per record, even for the matrix-containing
files, which otherwise produce vectors of 256 elements even where there
should only be a scalar. (I forgot to mention this in my previous mail, but
I already mentioned it earlier.)

This here is the first part of the data structure, and there is some "break"
in it after the STOP_AZ, because all previous fields are scalars; everything
after it becomes a 256-vector (except the 256x256 image):

    IDL> help,d,/struc
    ** Structure IMFHWIC, 52 tags, length=178240:
       BTIME           STRUCT    -> UDF_TIME Array[1]
       ETIME           STRUCT    -> UDF_TIME Array[1]
       D_QUAL          DOUBLE          6.0000000
       START_AZ        FLOAT          0.00000
       STOP_AZ         FLOAT         -2801.27
       SCAN_INDEX      FLOAT     Array[256]
       WIC_PIXELS      UINT      Array[256, 256]
       PRODUCT_CODE    FLOAT     Array[256]

=========================================================
Harald U. Frey            Space Sciences Lab    phone: 510-643-3323
University of California                        fax:   510-643-2624
Berkeley, CA 94720-7450                         email: hf...@ss...

From: Ed S. <es...@la...> - 2000-01-20 22:57:00

Harald,

Thanks for your progress report. I'm glad we're farther along.

> % UDF_OPEN: Unrecognized sensor type, 16

I have a strooong hunch that this is a bug in the VIDF file. The "sensor
type" has always, in my experience, been one of:

    0  (unsigned 8/16/32 bits)
    1  (signed ditto)
    2  (float)

In fact, the VIDF "spec"[1] only documents values up to 6. There's a number
that goes close by, though, which is (ta-dah) the bit-length indicator of
whether it's 8, 16, or 32 bits. I'll bet good money that the "16" you're
seeing is intended to be the bit length...

Have I mentioned recently my opinion that the PIDF/VIDF files are
unmaintainable?

> 2. I'm still in the process of cross-checking, but the data which I can
> read are fine and the same as we expect from our level-zero data.

Yahoo! I'm delighted to hear it!

> 3. Every data structure contains [START_AZ and STOP_AZ]

Yeah, I'm not too sure what they are. They seem to be set when one does a
"read_drec()", but I don't know if I'm getting the dimensions right. Are
they one scalar value per record? Maybe these are valid only for certain
instruments? Certain "sensors"? If anyone on this list has any knowledge of
these, please let us know. Also, I see preposterous values there too
sometimes. I don't know why.

> 4. When I read our WIC, SI12 and SI13 files I always get a
> % Program caused arithmetic error: Floating overflow

Interesting. I see it too. Yes, it's probably related to the matrix
thingies... I'll investigate further, as time allows, and let you know.

Regards,
^E

[1] http://pluto.space.swri.edu/~frio/VIDFPg/SenInfo.html

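A minimal C sketch of the interpretation Ed proposes. The type codes 0/1/2
come from his message; the function itself and the idea of checking them
this way are illustrative:

    /* Sensor "type" is a storage class (0/1/2); the 8/16/32 bit width
     * lives in a separate field.  A type of 16 therefore suggests the
     * bit-length value ended up in the type field of the VIDF.        */
    const char *sensor_type_name(int type)
    {
        switch (type) {
        case 0:  return "unsigned integer (8/16/32 bits)";
        case 1:  return "signed integer (8/16/32 bits)";
        case 2:  return "float";
        default: return "unrecognized (bit length in the wrong field?)";
        }
    }
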
From: Harald F. <hf...@ap...> - 2000-01-20 21:41:05

Hi everybody,

I thought I should give you a short status report after some days of
programming with the DLM interface.

1. So far we can read all but two UDF files for FUV. One of them does not
   contain reasonable data, so it's a file problem. For the other file we
   get an error message:

       IDL> fh = UDF_Open(key, start_time, end_time, _Extra=_e )
       % UDF_OPEN: Unrecognized sensor type, 16

   At the moment I don't know if it is a problem with the file or with the
   DLM. Did anybody see a similar thing?

2. I'm still in the process of cross-checking, but the data which I can read
   are fine and the same as we expect from our level-zero data.

3. Every data structure contains the two fields

       START_AZ        FLOAT          0.00000
       STOP_AZ         FLOAT          1.00000   (or another number, like -2801.27)

   It doesn't bother me; I'm just wondering where these data come from,
   because they are definitely not in our FUV files.

4. When I read our WIC, SI12 and SI13 files I always get a

       IDL> d = udf_read(fh)
       % Program caused arithmetic error: Floating overflow

   Is it just a coincidence that these three files contain matrix data, and
   all the other files don't?

So far from here,
Harald

=========================================================
Harald U. Frey            Space Sciences Lab    phone: 510-643-3323
University of California                        fax:   510-643-2624
Berkeley, CA 94720-7450                         email: hf...@ss...

From: Harald F. <hf...@ap...> - 2000-01-13 22:21:16

Hi everybody,

First of all I have to send a huge "Thank you" to Ed Santiago at Los Alamos.
His DLM-UDF interface allows us to read all our data from UDF directly into
IDL. The previous problems with reading images are solved. I confirmed with
thermal-vacuum-test data that the images of all FUV instruments in the UDF
files and in our level-zero files are identical. I'm now working on the
"minor details" to display everything.

Harald

P.S. for Ed: Whenever we find an opportunity to meet, you deserve one (or
more) beers from me/us.

=========================================================
Harald U. Frey            Space Sciences Lab    phone: 510-643-3323
University of California                        fax:   510-643-2624
Berkeley, CA 94720-7450                         email: hf...@ss...

From: Joerg-Micha J. <jm...@Or...> - 2000-01-13 15:19:35

I wanted to share some feedback from EUV (UArizona) on their status
concerning the use of DLMs. This is FYI only at this point; no one should
feel obliged to act. They are getting core dumps, and I am in the process of
finding out why...

Cheers, Joerg-Micha

---------- Forwarded message ----------
Date: Wed, 12 Jan 2000 11:56:43 -0700 (MST)
From: Terry Forrester <te...@ve...>
To: Joerg-Micha Jahn <jm...@or...>
Cc: Bob King <bo...@ve...>, Bill Sandel <sa...@ve...>
Subject: Re: IMAGE analysis software status

Hi Micha:

I wanted to let you know that we (EUV) are still interested in the UDF-IDL
interface. I installed the latest software (version 0.24) and ran through
all of our data types, with pretty much the same results as before. I'll
include a table below with what I found. Like FUV, our skymap data is stored
in matrix form, so we would also need support for that data type. With the
current software, reading an EUV skymap only returns the first column of the
matrix (1 out of 600), so it's not too useful as it stands.

Terry

Results of tests with udf-dlm package (version 0.24)
----------------------------------------------------
January 12, 2000

    Packet Type               Result
    -----------               ------
    IMESnIMG  sky map         OK, but only reads 1 50-element column of the matrix
    IMESnWSZ  WSZT            core dumps
    IMESnEVT  event counts    core dumps
    IMESnCHS  histogram       OK
    IMESTAT   status          OK
    IMETAPE   cmd tape        OK
    IMEINTRN  internals       core dumps
    IMEDHSKP  housekeeping    core dumps
    IMEMEMD   memory dump     OK, but only reads 256 elements

--
/* Terry Forrester                 te...@Ar...            */
/* http://vega.lpl.arizona.edu/~terryf                    */
/* Lunar and Planetary Laboratory  (520)-621-4539         */
/* University of Arizona           Tucson, AZ 85721       */

From: Eduardo S. <es...@la...> - 2000-01-13 15:12:33

Folks,

I've checked in new udf.c and udf.h files. These now support reading matrix
sensors. I have tested with ImFhWIC, and I seem to get back something
reasonable.

The code is available in the CVS repository only; I will not make an
official release yet. Now, I know some of you don't know CVS, and may not
feel like investing the time to learn it. The only thing I can say to that
is to give you my word that it is worth your while. Once you've spent some
time with CVS, you will understand my point of view.

Instructions for accessing the UDF-DLM repository are at:

    http://sourceforge.net/cvs/?group_id=868

Note that there's a link at the bottom, "Browse CVS Repository", which
provides full Web access to everything. Some helpful links to CVS
documentation and introductions may be found at:

    http://sfdocs.sourceforge.net/sfdocs/
    http://cvsbook.red-bean.com/

(the latter is an online copy of the "CVS book", a fairly decent reference
for newbies and experts alike).

-----------------------

Some important caveats about the checked-in source code:

* these changes do not apply to /GROUP and /COLLAPSE! That is left as an
  exercise for the reader.
* regular (non-matrix) sensors _may_ be broken! I haven't even had time to
  run tests... the purpose of this checkin is to provide the preliminary
  hooks, and a baseline for further editing.

Enjoy,
^E

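For readers new to CVS, an anonymous SourceForge checkout of that era would
typically look like the following; the hostname and module name are
assumptions based on the project name, so the page linked above remains the
authoritative source:

    cvs -d:pserver:anonymous@cvs.udf-dlm.sourceforge.net:/cvsroot/udf-dlm login
    cvs -z3 -d:pserver:anonymous@cvs.udf-dlm.sourceforge.net:/cvsroot/udf-dlm checkout udf-dlm

(The login step prompts for a password; for anonymous access it is empty.)
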
From: Eduardo S. <es...@la...> - 2000-01-12 16:29:27

Folks,

Please be advised that UDF-DLM is completely broken with respect to handling
multiple files. Here are the problems:

* 0.24 will not handle NEXT_FILE_STATUS. It appears to work if you do
  multiple UDF_OPEN()s, but as I mentioned before, that is completely
  against my intent in writing this package.
* 0.25 is even worse. I tried to fix a bug in ImmImage, related to the fact
  that each record actually consists of 32 "read_drec()" calls, but
  file_pos() seems to be able to position you smack in the middle of those
  32 records. The "fix" I implemented for this seems to have made things
  worse.

I don't know what else to say right now. This is pretty critical
functionality that's missing, but I have no idea how to fix it. Please, if
you know *anything at all* about how UDF works, could you drop me a line? I
need to know (1) how to position things properly in UDF_OPEN, and (2) what
to do with NEXT_FILE_STATUS.

Thanks in advance for your help. Thanks also to Evelyn Lee and Harald Frey
for bringing this to my attention.

^E

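A minimal C sketch of the alignment problem Eduardo describes: if a logical
record spans 32 read_drec() sub-records, a raw file position must be rounded
down to a record boundary before reading. The constant and helper are
illustrative, not from udf.c:

    #define SUBRECS_PER_RECORD 32

    /* Round an absolute sub-record index down to the start of its
     * logical record, so a read never begins mid-record.          */
    long align_to_record_start(long subrec_index)
    {
        return subrec_index - (subrec_index % SUBRECS_PER_RECORD);
    }
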
From: Ed S. <es...@la...> - 2000-01-12 13:53:55

Harald,

> I don't think that the problem is in UDF_EOF.

I still have a feeling that it might be, but need to spend some time
investigating to make sure. Since I may not have time today, I mentioned
some possibilities in my last note, in case anyone else feels like tracking
down the problem.

Stepping back up in your note, though...

> You are able to fix this in the IDL program if you check the etime of the
> present record against your end_time, and if you have reached the udf_eof(fh)

Are you suggesting that as a temporary hack, or a permanent measure? It goes
pretty much 100% against what I've tried to do in UDF-DLM, which is to
eliminate the need for unwieldy code.

Furthermore, I'd like to get on my soapbox yet again. This is getting to be
an unpleasant habit... [Harald, this flame is not directed at you. If
anything, it's directed towards those of us in the industry who have allowed
the state of the art to get so bad.]

*** If something doesn't work as expected, GRIPE AT ME!!! ***

Come on, I can take it. Heck, I'm obnoxious and insufferable; why not take
the chance to put me down a notch? More importantly, though: I'm a
professional. I take great pride in my work. My code is ultimately the only
measure of my worth. When I hear that something isn't working as it should
be -- especially something as critical as multi-file reads -- it immediately
gets my attention, and gets placed _very high_ on my priority list (alas,
right now my priority list is kind of overloaded).

However, the worst thing I can hear is someone saying "I know about that,
here's a way to work around it". That implies I'm not even expected to
produce acceptable-quality software. Ouch.

Or perhaps you've assumed that I know about the problem? Remember, I'm
working on MENA only -- and I'm only budgeted 12%. Worse, most of the time I
can't even get data files to _install_ for other instruments (don't get me
started about UDF), so my testing and experience is woefully limited.

Maybe you think you can deal with some problems? If so, think about the next
gal or guy: isn't it better to report (or fix) a bug than to have every end
user encounter it, track it down, and work around it?

Please, don't accept mediocrity. Not from me, not from anyone. Microsoft has
built an empire by shipping crap (glitzy crap, though) and convincing the
customer that they're to blame for its failings. Come on, let's fight that
attitude. Look, if you encounter a problem, it is NOT your fault, it's
*MINE*! MINE MINE MINE, ALL MINE! I want to hear about it. Beat me over the
head with it, if necessary. Whip me, hurt me (whoops, sorry, I get carried
away sometimes. Where's that backspace key?)

I may not have time to work on it (e.g., the "matrix" code). Or maybe I
don't have the knowledge (multi-file reads?) or ability. But I'm not (yet)
too old to learn new tricks, and spare time shows up occasionally.

The Bottom Line: DEMAND EXCELLENCE! Be LOUD and IRRITATING about it! I may
not deliver, but come on, give me a chance to try. Besides, if you don't
demand perfection, you'll certainly end up with less.

Ed the Obnoxious, over and out.

From: Harald F. <hf...@ap...> - 2000-01-12 02:03:51

I see two things which might cause the problems with reading over the end of
files when a longer time span is given. (And I'm not sure what Evelyn's code
looks like.)

1. If your IDL code contains something like this:

       WHILE NOT udf_eof(fh) DO BEGIN
          ...
       ENDWHILE

   you cannot expect to get more than the content of the UDF file with the
   first correct time.

2. If your IDL code contains something like:

       REPEAT BEGIN
          ...
       ENDREP UNTIL (whatever EQ something)

   then the whole process crashes at udf.c line 1356 with the following
   message:

       % UDF_READ: failed to fill_data: @ sensor 0

At the moment I don't see how udf_read can recognize that it has to close
the existing UDF file, and which other file to open in order to get the next
times right. You are able to fix this in the IDL program if you check the
etime of the present record against your end_time; once you have reached
udf_eof(fh), you close the file and advance to the next UDF file. I don't
think that the problem is in UDF_EOF. Am I wrong?

Harald

=========================================================
Harald U. Frey            Space Sciences Lab    phone: 510-643-3323
University of California                        fax:   510-643-2624
Berkeley, CA 94720-7450                         email: hf...@ss...

From: Ed S. <es...@la...> - 2000-01-12 01:12:57

I may have spoken too soon. In UDF_EOF() there's a check for
NEXT_FILE_STATUS. I have no idea what this does, or what NEXT_FILE_STATUS
means, but putting that code in made things work. [Yes, I know this is the
Superstition School Of Programming... but with these time pressures, and the
hellacious complexity of UDF, we gotta just do what works, even if it's
black magic!]

Evelyn and Harald, please try hacking inside UDF_EOF, where it does the
check for NEXT_FILE_STATUS, and see if that fixes anything. The catch is, I
suspect some accompanying fixes may be necessary in UDF_READ() -- but I have
no idea what those fixes might be.

If anyone on this list is UDF-savvy enough to know what NEXT_FILE_STATUS
means, please drop us a line! We would be eternally grateful for your advice
and wisdom.

^E

From: Joerg-Micha J. <jm...@Or...> - 2000-01-12 00:59:29

> Unfortunately, my code assumes a "sane" dataset. It assumes
> linear time, and if it encounters a record with an out-of-range
> timestamp, it's not going to try reading past it just to see
> if it's an anomaly. Perhaps it should? I hope not.

According to Chris, it will be possible for the preliminary data (i.e. the
data put on anonymous ftp by Rick Burley shortly after DSN download) to have
not only data gaps, but also other time defects. Only the data distributed
on DVD are supposedly guaranteed to have been cleaned up. My guess is that
we will mostly see time gaps in both cases, just more severe in the
preliminary data. As long as we can handle that, we should be fine.

Jörg-Micha

+-------------------------------------------------------------------------+
 Jörg-Micha Jahn, Ph.D.             Space Science Department
 Phone: (210) 522-2491              Southwest Research Institute
 FAX: (210) 520-9935                6220 Culebra Road
 E-mail: jm...@sw...                San Antonio, TX 78238-5166, USA
+-------------------------------------------------------------------------+

From: Ed S. <es...@la...> - 2000-01-12 00:52:31

> only the data for the day specified in the begin time (day 186 only)
> was returned.

Have you tried doing a read from, say, [1999,186] to [2099,365]? I've seen
that problem several times before, and each time it has been due to the
presence of out-of-sequence records.

Unfortunately, my code assumes a "sane" dataset. It assumes linear time, and
if it encounters a record with an out-of-range timestamp, it's not going to
try reading past it just to see if it's an anomaly. Perhaps it should? I
hope not.

Please let me know if that isn't the problem...

^E

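A minimal, runnable C sketch of the "sane dataset" assumption Ed describes
(the timestamps are made up): the first out-of-range or backwards timestamp
ends the scan, so records after the anomaly are never reached.

    #include <stdio.h>

    int main(void)
    {
        /* day-of-year timestamps, with one out-of-sequence record */
        double times[] = { 186.1, 186.5, 2099.0, 187.2, 188.0 };
        double t_end = 190.0, prev = 0.0;

        for (int i = 0; i < 5; i++) {
            if (times[i] < prev || times[i] > t_end) {
                printf("stopping at record %d (t=%g)\n", i, times[i]);
                break;   /* days 187-188 are never read */
            }
            prev = times[i];
        }
        return 0;
    }
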
From: Harald F. <hf...@ap...> - 2000-01-11 22:08:28

We are running version 24. We have one file which contains data for days 187
and 188, and these data are read without problems. However, if I try to read
from two UDF files by providing a longer time span over several days, only
the data from the first file are read.

Harald

=========================================================
Harald U. Frey            Space Sciences Lab    phone: 510-643-3323
University of California                        fax:   510-643-2624
Berkeley, CA 94720-7450                         email: hf...@ss...

From: Evelyn L. <Eve...@gs...> - 2000-01-11 21:57:49

Hi,

We're just wondering if anyone else has seen this particular scenario when
running udf-dlm version 0.24 or version 0.25.

In version 0.24, when trying to access data over multiple days with the time
period set as, for example:

    begin time: day 186
    end time:   day 190

only the data for the day specified in the begin time (day 186 only) was
returned.

In version 0.25, when executing the same time range as in the above example,
only the first day with available data was returned (which in our case was
day 182), regardless of the begin time specified.

Thanks.

_________________________________________________________
Science Data Systems Branch      Eve...@gs...
Code 586, Bldg 23 Rm W337        voice: (301) 286-1487
_________________________________________________________

From: Harald F. <hf...@ap...> - 2000-01-11 19:58:08

We are working on a different path here, but so far we haven't gotten
exciting results. We know which of our data are stored in matrix format.
What if we write a separate routine which, after the call to udf_read, just
reads the matrix with consecutive calls to read_drec, and we then exchange
the data part of the structure with the result of the matrix read? How about
this?

Harald

=========================================================
Harald U. Frey            Space Sciences Lab    phone: 510-643-3323
University of California                        fax:   510-643-2624
Berkeley, CA 94720-7450                         email: hf...@ss...

From: Joerg-Micha J. <jm...@Or...> - 2000-01-11 19:49:10

> From: Ed Santiago <es...@la...>

Excellent sleuthing. Your summary is identical to what I figured needed to
be done. One difference, though:

> If you read the VIDF/PIDF, you already know a matrix is coming

Well, maybe the VIDF. I'm pretty sure that the "matrix" indicator is nowhere
in the PIDF. Near as I can tell, you need to read_idf( ..., _SmpId, ... ) to
figure out if it's a matrix (3).

> Now, what about the structure? You know
> only *after* the first read_drec, how big the matrix is going to be. Will
> this be easy to implement in the code??

Well, not too hard, maybe. You'll need to do a read_drec() in the UDF_OPEN()
function, or possibly in udfdlm_make_idl_struct(). The trick is going to be
what to do with the struct. The code in udfdlm_make_idl_struct() is going to
have to do special magic with the s_dims[] definition, and that isn't going
to be pretty. (...)

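A minimal C sketch of the detect-and-defer approach discussed above. The
sample-id value 3 and the "dimensions known only after the first
read_drec()" constraint come from the messages; the names and enum are
illustrative:

    /* Hypothetical flag: per the discussion, a VIDF sample id of 3
     * marks a matrix sensor.                                        */
    enum { SMPID_MATRIX = 3 };

    int is_matrix_sensor(int smp_id)
    {
        return smp_id == SMPID_MATRIX;
    }

    /* For matrix sensors the IDL structure (s_dims[]) cannot be sized
     * at open time; it must be built after the first data record has
     * been read and the true matrix dimensions are known.            */
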