fx2lib-devel Mailing List for fx2lib (Page 2)
Status: Beta
Brought to you by: mulicheng
This list is closed, nobody may subscribe to it.
Archive (message counts by month):

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2008 |     |     |     |     |     |     |     |     |     |     |     | 1   |
| 2009 | 16  | 2   | 35  | 4   | 9   | 5   | 20  | 2   | 10  | 14  | 12  | 11  |
| 2010 | 8   |     |     |     | 2   |     | 6   | 8   | 4   |     |     |     |
| 2011 | 4   | 10  | 25  |     |     | 4   | 11  | 2   | 11  |     |     |     |
| 2012 | 1   |     |     | 10  | 1   |     |     |     |     |     |     |     |
| 2013 | 2   |     |     |     | 3   |     |     |     |     |     |     |     |
| 2014 |     |     |     |     |     |     |     | 8   |     |     |     |     |
From: Maarten B. <sou...@ds...> - 2011-09-17 19:30:44
Hi,

Daniel wrote:
> I think it probably "just works" in this case because the descriptor
> table is defined in assembly and its address is forced in the linker.
>
> Hence the declaration only affects how the C code accesses it.

Ah yes, I forgot about this.

Chris wrote:
> You're the last person in the world I would want to argue with about
> SDCC, but my observations *appear* to contradict you:

It works because the descriptors are initialized in assembly.

> * xdata is initialised with explicit values at 0x3f00 and 0xe000 by the
>   RAM load and 0xC2 EEPROM load before any SDCC-generated code executes
>
> > > FWIW, I just tried the code->xdata + DSCR_AREA=e000 change on the
> > > firmware in FPGALink[2] and it works fine.
> >
> > Are you using C0 or C2 mode? C2 mode cannot load 0xe000
> > according to the datasheet. Can the driver download to
> > 0xe000 directly?
>
> A 0xC2 loader can definitely target 0xe000. There's a note in section
> 3.4.3 of the TRM:
>
>     Serial EEPROM data can be loaded only into these three on-chip RAM
>     spaces:
>     * Program / Data RAM at 0x0000-0x1FFF
>     * Data RAM at 0xE000-0xE1FF
>     * The CPUCS register at 0xE600 [bit 0]
>
> (Actually the TRM is a bit wrong here because it can actually load
> program/data RAM at 0x0000-0x3fff, not 0x0000-0x1fff, but the point
> about 0xe000-0xe1ff stands).

I admit I did not read further than 3.2 EZ-USB Startup Modes, which states:

    Note: Although the EZ-USB can perform C2 Loads from EEPROMs as large
    as 64 KB, code can only be downloaded to the 16K of on-chip RAM.

On second thought I think "code" here means code space, since 0xe000 is xdata only. I had read it as 'binary data'.

Sorry for the noise,
Maarten
From: Chris M. <fx...@m3...> - 2011-09-17 08:35:36
Apologies, re-sending from correct account.

On Fri, 2011-09-16 at 21:55 +0200, Maarten Brock wrote:
> And how is that xdata filled? SDCC assumes the code
> memory is stored in a ROM-like memory and xdata is
> volatile. So during startup it copies initialized data
> from code to xdata. So you have two copies in memory and
> thus your memory footprint has grown. If you use --no-
> xinit-opt it is not copied but initialized by lots of
> instructions, usually taking even more code memory.
>
> I guess it works because SDCC's initialization code
> copied it from lower memory to xdata at 0xe000. I also
> guess you missed the downside.

My compile lines look like this:

    sdcc -mmcs51 --code-size 0x3c00 --xram-size 0x0200 --xram-loc 0x3c00 -Wl"-b DSCR_AREA=0xe000" -Wl"-b INT2JT=0x3f00" -DEEPROM -c --disable-warning 85 -I sdcc -I../../../3rd/fx2lib/include -I../../../common firmware.c

And my link lines look like this:

    sdcc -mmcs51 --code-size 0x3c00 --xram-size 0x0200 --xram-loc 0x3c00 -Wl"-b DSCR_AREA=0xe000" -Wl"-b INT2JT=0x3f00" -DEEPROM -o firmware.hex app.rel firmware.rel infra.rel jtag.rel prom.rel sync.rel descriptors.rel -L../../../3rd/fx2lib/lib fx2.lib

I tried a RAM load and a 0xC2 EEPROM load. The final 0xC2 record for *code* does a load of 0x185 bytes at 0x21AB. That is immediately followed by a 0xC2 record to load 0xB8 bytes of xdata at 0x3f00, followed by a 0xC2 record to load 0xAC bytes of xdata descriptors at 0xe000. This is naturally before any SDCC init code has been called.

The init code I get from SDCC (2.9.0) looks like this:

    ;--------------------------------------------------------
    ; global & static initialisations
    ;--------------------------------------------------------
        .area HOME (CODE)
        .area GSINIT (CODE)
        .area GSFINAL (CODE)
        .area GSINIT (CODE)
        .globl __sdcc_gsinit_startup
        .globl __sdcc_program_startup
        .globl __start__stack
        .globl __mcs51_genXINIT
        .globl __mcs51_genXRAMCLEAR
        .globl __mcs51_genRAMCLEAR
    ; firmware.c:35: volatile bit dosud=FALSE;
        clr _dosud
    ; firmware.c:36: volatile bit dosuspend=FALSE;
        clr _dosuspend
        .area GSFINAL (CODE)
        ljmp __sdcc_program_startup
    ;--------------------------------------------------------
    ; Home
    ;--------------------------------------------------------
        .area HOME (CODE)
        .area HOME (CODE)
    __sdcc_program_startup:
        lcall _main
    ; return from main will lock up
        sjmp .

> SDCC does not support the concept of preinitialized
> xdata as the FX2 has when it gets the firmware from the
> driver or the I2C eeprom.

You're the last person in the world I would want to argue with about SDCC, but my observations *appear* to contradict you:

* xdata is initialised with explicit values at 0x3f00 and 0xe000 by the RAM load and 0xC2 EEPROM load, before any SDCC-generated code executes.
* When the firmware starts up, it initialises only the globals declared as 'bit' before calling main().
* There is only one copy of the descriptor values in the 0xC2 records, targeting 0xe000 directly; no copying appears to be done by the SDCC startup code.

> > FWIW, I just tried the code->xdata + DSCR_AREA=e000 change on the
> > firmware in FPGALink[2] and it works fine.
>
> Are you using C0 or C2 mode? C2 mode cannot load 0xe000
> according to the datasheet. Can the driver download to
> 0xe000 directly?

A 0xC2 loader can definitely target 0xe000. There's a note in section 3.4.3 of the TRM:

    Serial EEPROM data can be loaded only into these three on-chip RAM spaces:
    * Program / Data RAM at 0x0000-0x1FFF
    * Data RAM at 0xE000-0xE1FF
    * The CPUCS register at 0xE600 [bit 0]

(Actually the TRM is a bit wrong here because it can actually load program/data RAM at 0x0000-0x3fff, not 0x0000-0x1fff, but the point about 0xe000-0xe1ff stands.)

Chris
From: Chris M. <fx...@m3...> - 2011-09-17 08:25:16
> Is that really necessary?
>
> Yes.

Evelyn the modified dog asks "why?"

> Hmm I'm not sure, I can't try it declared as __xdata at 0x3e00 because my code is too big :)

Done. Works fine (I tested a RAM load and a 0xC2 EEPROM load, declared xdata at 0x3e00 and 0xe000).

> > [2] http://www.makestuff.eu/wordpress/?page_id=1400
>
> Nice project :)

Thanks!
From: Daniel O'C. <doc...@gs...> - 2011-09-17 01:55:57
On 17/09/2011, at 5:25, Maarten Brock wrote:
>> OK I see now. Sorry, I was unaware of the 512 bytes of RAM at 0xe000.
>
> But these can only be used as RAM according to the
> datasheet.

Yes, you can't put code there.

>> If we just go ahead and change the storage class of the string
>> descriptors from "code" to "xdata" in setupdat.c[1] (i.e. no preprocessor
>> guards), will it conceivably break anything? It looks to me like
>> changing it to "xdata" merely gives apps the option of relocating
>> DSCR_AREA to 0xe000 (which may be used for "xdata" only, not "code"),
>> but with NO downside.
>>
>> Or am I missing something?
>
> And how is that xdata filled? SDCC assumes the code
> memory is stored in a ROM-like memory and xdata is
> volatile. So during startup it copies initialized data
> from code to xdata. So you have two copies in memory and
> thus your memory footprint has grown. If you use --no-
> xinit-opt it is not copied but initialized by lots of
> instructions, usually taking even more code memory.
>
> I guess it works because SDCC's initialization code
> copied it from lower memory to xdata at 0xe000. I also
> guess you missed the downside.

Hmm, I didn't consider this, however I just examined the output file and I don't think it is doing that. i.e. I did this:

    gobjcopy -I ihex -O binary obj/ugsio.ihx obj/ugsio.bin
    hexdump -Cv obj/ugsio.bin

and I see:

    <snip>
    0000e000 12 01 00 02 ff ff ff 40 53 47 01 00 01 00 01 02 |.......@SG......|
    0000e010 04 01 0a 06 00 02 ff ff ff 40 01 00 09 02 35 00 |.........@....5.|
    <snip>
    0000e080 07 05 08 02 00 02 00 00 04 03 09 04 22 03 47 00 |............".G.|
    0000e090 65 00 6e 00 65 00 73 00 69 00 73 00 20 00 53 00 |e.n.e.s.i.s. .S.|
    0000e0a0 6f 00 66 00 74 00 77 00 61 00 72 00 65 00 0c 03 |o.f.t.w.a.r.e...|
    <snip>

> SDCC does not support the concept of preinitialized
> xdata as the FX2 has when it gets the firmware from the
> driver or the I2C eeprom.

I think it probably "just works" in this case because the descriptor table is defined in assembly and its address is forced in the linker. Hence the declaration only affects how the C code accesses it.

>> FWIW, I just tried the code->xdata + DSCR_AREA=e000 change on the
>> firmware in FPGALink[2] and it works fine.
>
> Are you using C0 or C2 mode? C2 mode cannot load 0xe000
> according to the datasheet. Can the driver download to
> 0xe000 directly?
>
> My advice is to keep these constants in code memory. And
> if you do want to change them at runtime cast a code
> pointer to xdata pointer to access them.
>
> If the driver really supports loading to 0xe000 you can
> choose to store the constants there. You can also choose
> to use 0xe000 as xdata as it was meant.

I am using C2 mode, and the firmware is loaded into RAM using fxload. EEPROM has a PID of 0xff01 and devd (somewhat like udev, but for FreeBSD) looks for this and runs fxload. The firmware has a PID of 0x0001 and the kernel driver attaches to this.

-- 
Daniel O'Connor
software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
From: Daniel O'C. <doc...@gs...> - 2011-09-17 01:31:50
On 16/09/2011, at 22:43, Chris McClelland wrote:
> OK I see now. Sorry, I was unaware of the 512 bytes of RAM at 0xe000.
>
> Your patch jumps through hoops to ensure that unless you set
> DSCR_ADDRSPACE, everything remains exactly as it is today. Is that
> really necessary?

Yes.

> If we just go ahead and change the storage class of the string
> descriptors from "code" to "xdata" in setupdat.c[1] (i.e. no preprocessor
> guards), will it conceivably break anything? It looks to me like
> changing it to "xdata" merely gives apps the option of relocating
> DSCR_AREA to 0xe000 (which may be used for "xdata" only, not "code"),
> but with NO downside.
>
> Or am I missing something?

Hmm, I'm not sure; I can't try it declared as __xdata at 0x3e00 because my code is too big :)

> FWIW, I just tried the code->xdata + DSCR_AREA=e000 change on the
> firmware in FPGALink[2] and it works fine.

OK. Can you try declaring it as __xdata but still at 0x3e00?

> Also, was your diff from a branch or something? The trunk has string
> descriptors declared like this[1] in setupdat.c:
>
>     extern code WORD dev_dscr;
>     extern code WORD dev_qual_dscr;
>     extern code WORD highspd_dscr;
>     extern code WORD fullspd_dscr;
>     extern code WORD dev_strings;
>
> i.e. s/__code/code/g

Yes, I changed them because my version of sdcc complains that code/xdata/etc. is deprecated and you should use __code/__xdata/etc. See this github pull request:

    https://github.com/mulicheng/fx2lib/pull/2/files

> [2] http://www.makestuff.eu/wordpress/?page_id=1400

Nice project :)

-- 
Daniel O'Connor
software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
From: Maarten B. <sou...@ds...> - 2011-09-16 19:56:03
> OK I see now. Sorry, I was unaware of the 512 bytes of RAM at 0xe000.

But these can only be used as RAM according to the datasheet.

> If we just go ahead and change the storage class of the string
> descriptors from "code" to "xdata" in setupdat.c[1] (i.e. no preprocessor
> guards), will it conceivably break anything? It looks to me like
> changing it to "xdata" merely gives apps the option of relocating
> DSCR_AREA to 0xe000 (which may be used for "xdata" only, not "code"),
> but with NO downside.
>
> Or am I missing something?

And how is that xdata filled? SDCC assumes the code memory is stored in a ROM-like memory and xdata is volatile. So during startup it copies initialized data from code to xdata. So you have two copies in memory and thus your memory footprint has grown. If you use --no-xinit-opt it is not copied but initialized by lots of instructions, usually taking even more code memory.

I guess it works because SDCC's initialization code copied it from lower memory to xdata at 0xe000. I also guess you missed the downside.

SDCC does not support the concept of preinitialized xdata as the FX2 has when it gets the firmware from the driver or the I2C eeprom.

> FWIW, I just tried the code->xdata + DSCR_AREA=e000 change on the
> firmware in FPGALink[2] and it works fine.

Are you using C0 or C2 mode? C2 mode cannot load 0xe000 according to the datasheet. Can the driver download to 0xe000 directly?

My advice is to keep these constants in code memory. And if you do want to change them at runtime, cast a code pointer to an xdata pointer to access them.

If the driver really supports loading to 0xe000 you can choose to store the constants there. You can also choose to use 0xe000 as xdata, as it was meant.

Maarten
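(For what it's worth, a minimal sketch of the pointer cast Maarten describes. The symbol and offset below are illustrative only, not fx2lib's actual layout; the point is that the FX2's on-chip program memory is RAM, so a constant placed in __code space can still be written at run time by addressing the same location through an __xdata pointer.)

    #include <fx2types.h>               /* fx2lib's BYTE/WORD typedefs */

    extern __code BYTE dev_dscr;        /* a constant linked into code space */

    /* Illustrative only: patch one byte of a code-space constant at run time
       by re-addressing it as xdata (the MOVX write hits the same on-chip RAM). */
    static void poke_code_byte(BYTE offset, BYTE value)
    {
        __xdata BYTE *p = (__xdata BYTE *)&dev_dscr;
        p[offset] = value;
    }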
From: Chris M. <fx...@m3...> - 2011-09-16 13:13:42
OK I see now. Sorry, I was unaware of the 512 bytes of RAM at 0xe000.

Your patch jumps through hoops to ensure that unless you set DSCR_ADDRSPACE, everything remains exactly as it is today. Is that really necessary?

If we just go ahead and change the storage class of the string descriptors from "code" to "xdata" in setupdat.c[1] (i.e. no preprocessor guards), will it conceivably break anything? It looks to me like changing it to "xdata" merely gives apps the option of relocating DSCR_AREA to 0xe000 (which may be used for "xdata" only, not "code"), but with NO downside.

Or am I missing something?

FWIW, I just tried the code->xdata + DSCR_AREA=e000 change on the firmware in FPGALink[2] and it works fine.

Also, was your diff from a branch or something? The trunk has string descriptors declared like this[1] in setupdat.c:

    extern code WORD dev_dscr;
    extern code WORD dev_qual_dscr;
    extern code WORD highspd_dscr;
    extern code WORD fullspd_dscr;
    extern code WORD dev_strings;

i.e. s/__code/code/g

Chris

[1] https://github.com/mulicheng/fx2lib/blob/master/lib/setupdat.c#L273
[2] http://www.makestuff.eu/wordpress/?page_id=1400

On Fri, 2011-09-16 at 21:30 +0930, Daniel O'Connor wrote:
> I think there's a good argument for putting it at 0xe000 because it saves precious code memory :)
>
> I don't think there is any performance consideration one way or the other.
>
> The only reason I'd think to leave it would be because of existing code (which isn't necessarily a small thing!)
From: Daniel O'C. <doc...@gs...> - 2011-09-16 12:00:31
On 16/09/2011, at 20:26, Chris McClelland wrote:
> Interesting. But I think we should ask ourselves where the string
> descriptors really belong. Are the arguments for putting the string
> descriptors in code mem or xdata mem equally valid? If so then it boils
> down to the arbitrary positioning of the code/xdata boundary. If not
> then we should choose one or the other and stick with it.

I think there's a good argument for putting it at 0xe000 because it saves precious code memory :)

I don't think there is any performance consideration one way or the other.

The only reason I'd think to leave it would be because of existing code (which isn't necessarily a small thing!)

-- 
Daniel O'Connor
software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
From: Chris M. <fx...@m3...> - 2011-09-16 11:14:33
Hi Dan,

Interesting. But I think we should ask ourselves where the string descriptors really belong. Are the arguments for putting the string descriptors in code mem or xdata mem equally valid? If so then it boils down to the arbitrary positioning of the code/xdata boundary. If not then we should choose one or the other and stick with it.

Chris

On Fri, 2011-09-16 at 17:57 +0930, Daniel O'Connor wrote:
> After bashing my head against the desk for a bit I realised it was because they are declared as __code rather than __xdata. The device descriptor still works because (I think) it uses SUDPTRH/L.
From: Daniel O'C. <doc...@gs...> - 2011-09-16 08:27:57
Hi,

I recently found myself running out of space on my FX2 firmware so I did a bit of remapping things to squeeze out some more room. One thing I did was to put the descriptors at 0xe000 and this worked, however I found that string descriptors didn't work. After bashing my head against the desk for a bit I realised it was because they are declared as __code rather than __xdata. The device descriptor still works because (I think) it uses SUDPTRH/L.

I have the following diff:

Index: lib/setupdat.c
===================================================================
RCS file: /usr/local/Genesis/cvs/micro/lib/fx2lib/lib/setupdat.c,v
retrieving revision 1.1.1.1
diff -u -p -r1.1.1.1 setupdat.c
--- lib/setupdat.c	26 Feb 2011 06:40:02 -0000	1.1.1.1
+++ lib/setupdat.c	16 Sep 2011 08:15:34 -0000
@@ -271,11 +271,14 @@ BOOL handle_set_feature() {
 /* these are devined in dscr.asm
    and need to be customized then
    linked in by the firmware manually */
-extern __code WORD dev_dscr;
-extern __code WORD dev_qual_dscr;
-extern __code WORD highspd_dscr;
-extern __code WORD fullspd_dscr;
-extern __code WORD dev_strings;
+#ifndef DSCR_ADDRSPACE
+#define DSCR_ADDRSPACE __code
+#endif
+extern DSCR_ADDRSPACE WORD dev_dscr;
+extern DSCR_ADDRSPACE WORD dev_qual_dscr;
+extern DSCR_ADDRSPACE WORD highspd_dscr;
+extern DSCR_ADDRSPACE WORD fullspd_dscr;
+extern DSCR_ADDRSPACE WORD dev_strings;
 
 WORD pDevConfig = (WORD)&fullspd_dscr;
 WORD pOtherConfig = (WORD)&highspd_dscr;

To go with this I have a local-only diff to modify SDCCFLAGS in lib/Makefile to set it.

My earlier changes were to build code with:

    CFLAGS= -mmcs51 --code-size 0x3e00 --xram-size 0x0100 --xram-loc 0x3e00 --stack-auto
    LDFLAGS= -Wl"-b DSCR_AREA=0xe000" -Wl"-b INT2JT=0x3f00"

i.e. I cut RAM in half after checking the SDCC report. Also it seems that the 'default' (i.e. what's in the examples) --xram-loc is conservative.

-- 
Daniel O'Connor
software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
From: Phil B. <phi...@gm...> - 2011-07-09 16:48:24
Hmm...I think the problem might lie in my handling of the setup data. Right now I'm just using the bulk-mode example code from fx2lib and I don't think that's correct for my application. I will have to investigate that further.

-Phil

On Fri, Jul 8, 2011 at 2:35 AM, Xiaofan Chen <xia...@gm...> wrote:
> On Fri, Jul 8, 2011 at 1:58 PM, Phil Behnke <phi...@gm...> wrote:
> > I think the sleep code is one of the problems, but if I lower or get rid of
> > it, libUSB starts to fail (in both python and C) with a LIBUSB_ERROR_IO
> > error.
>
> You might want to use multi-thread approach. But I do not know much
> about this. libusb-1.0's async APIs are quite complicated when used
> in a multithread application.
> http://www.libusb.org/wiki/libusb-1.0
> http://libusb.sourceforge.net/api-1.0/io.html
> http://libusb.sourceforge.net/api-1.0/group__asyncio.html
>
> You might try this simple option.
> "2) Repeatedly call libusb_handle_events() in blocking mode from a
> dedicated thread."
>
> Or if you want to have some challenges, try the following which
> is quite POSIX centric and not that good for Windows.
> http://libusb.sourceforge.net/api-1.0/mtasync.html
>
> --
> Xiaofan
From: Xiaofan C. <xia...@gm...> - 2011-07-08 06:35:46
On Fri, Jul 8, 2011 at 1:58 PM, Phil Behnke <phi...@gm...> wrote:
> I think the sleep code is one of the problems, but if I lower or get rid of
> it, libUSB starts to fail (in both python and C) with a LIBUSB_ERROR_IO
> error.

You might want to use a multi-thread approach. But I do not know much about this. libusb-1.0's async APIs are quite complicated when used in a multithread application.

http://www.libusb.org/wiki/libusb-1.0
http://libusb.sourceforge.net/api-1.0/io.html
http://libusb.sourceforge.net/api-1.0/group__asyncio.html

You might try this simple option:
"2) Repeatedly call libusb_handle_events() in blocking mode from a dedicated thread."

Or if you want to have some challenges, try the following, which is quite POSIX-centric and not that good for Windows.
http://libusb.sourceforge.net/api-1.0/mtasync.html

-- 
Xiaofan
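(For reference, a minimal sketch of the "dedicated event thread" option quoted above, assuming libusb-1.0 on a POSIX host; the thread and flag names and the shutdown handling are illustrative only.)

    #include <libusb-1.0/libusb.h>
    #include <pthread.h>

    static volatile int keep_running = 1;   /* cleared by the main thread on shutdown */

    /* libusb-1.0 runs transfer callbacks from inside libusb_handle_events(),
       so one thread looping on it services all outstanding async transfers. */
    static void *event_thread(void *arg)
    {
        libusb_context *ctx = arg;
        while (keep_running)
            libusb_handle_events(ctx);      /* blocks until there is USB activity */
        return NULL;
    }

    /* Usage sketch:
       libusb_context *ctx; libusb_init(&ctx);
       pthread_t tid; pthread_create(&tid, NULL, event_thread, ctx);
       ... submit transfers with libusb_submit_transfer() ...
       keep_running = 0; pthread_join(tid, NULL); libusb_exit(ctx);
       (a real program would use libusb_handle_events_timeout() so the
        loop can notice keep_running going to zero). */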
From: Phil B. <phi...@gm...> - 2011-07-08 05:58:39
I think the sleep code is one of the problems, but if I lower or get rid of it, libUSB starts to fail (in both python and C) with a LIBUSB_ERROR_IO error.

On Fri, Jul 8, 2011 at 1:43 AM, Xiaofan Chen <xia...@gm...> wrote:
> On Fri, Jul 8, 2011 at 1:16 PM, Xiaofan Chen <xia...@gm...> wrote:
> > On Fri, Jul 8, 2011 at 12:26 PM, Phil Behnke <phi...@gm...> wrote:
> > > Xiaofan,
> > >
> > > Thanks for the help. I'm using Linux with a 2.6.38.2 custom compiled
> > > kernel. In addition to the python application, I wrote a basic C program to
> > > do the same thing, but I still have the same problem. Both the
> > > python-libusb1 app and the C program are using async transfers. The maximum
> > > isoc data rate of 24MBps should hopefully be enough, since my data from the
> > > FPGA is at 19MBps. I've been experimenting with the data rate from the FPGA
> > > to the FX2; I can slow the data rate way down to ~6MBps and everything works
> > > great, but any faster than that and it starts to fail. :-/
> >
> > I see. It may not be that easy to achieve 24MB/s since your host side
> > code and the FPGA side logic need to be both good.
> >
> > I suggest you to post questions to libusb mailing list. I remember
> > there are reports there about sustaining this 24MB/s isoc transfer
> > and 35MB-45MB/s bulk transfer speed using FX2.
>
> One quick test you can try. You might want to reduce the
> transfer buffer size, now it is 1024 * 500, which may be
> a bit too big. What if you reduce it to 1024 * 30? You
> are submitting 5 transfers simultaneously which is good.
>
>     transfer_object1=self.iso_xfer_object.fx2.getTransfer(500)
>     transfer_object1.setIsochronous(endpoint=0x82,buffer_or_len=(1024*500),callback=self.createDataThread)
>
> And try to get rid of the sleep code.
>
>     #must sleep or LibUSB will start to fail
>     time.sleep(0.005)
>
> --
> Xiaofan
From: Xiaofan C. <xia...@gm...> - 2011-07-08 05:43:35
On Fri, Jul 8, 2011 at 1:16 PM, Xiaofan Chen <xia...@gm...> wrote:
> On Fri, Jul 8, 2011 at 12:26 PM, Phil Behnke <phi...@gm...> wrote:
>> Xiaofan,
>>
>> Thanks for the help. I'm using Linux with a 2.6.38.2 custom compiled
>> kernel. In addition to the python application, I wrote a basic C program to
>> do the same thing, but I still have the same problem. Both the
>> python-libusb1 app and the C program are using async transfers. The maximum
>> isoc data rate of 24MBps should hopefully be enough, since my data from the
>> FPGA is at 19MBps. I've been experimenting with the data rate from the FPGA
>> to the FX2; I can slow the data rate way down to ~6MBps and everything works
>> great, but any faster than that and it starts to fail. :-/
>
> I see. It may not be that easy to achieve 24MB/s since your host side
> code and the FPGA side logic need to be both good.
>
> I suggest you to post questions to the libusb mailing list. I remember
> there are reports there about sustaining this 24MB/s isoc transfer
> and 35MB-45MB/s bulk transfer speed using FX2.

One quick test you can try: you might want to reduce the transfer buffer size; now it is 1024 * 500, which may be a bit too big. What if you reduce it to 1024 * 30? You are submitting 5 transfers simultaneously, which is good.

    transfer_object1=self.iso_xfer_object.fx2.getTransfer(500)
    transfer_object1.setIsochronous(endpoint=0x82,buffer_or_len=(1024*500),callback=self.createDataThread)

And try to get rid of the sleep code:

    #must sleep or LibUSB will start to fail
    time.sleep(0.005)

-- 
Xiaofan
From: Xiaofan C. <xia...@gm...> - 2011-07-08 05:16:11
On Fri, Jul 8, 2011 at 12:26 PM, Phil Behnke <phi...@gm...> wrote:
> Xiaofan,
>
> Thanks for the help. I'm using Linux with a 2.6.38.2 custom compiled
> kernel. In addition to the python application, I wrote a basic C program to
> do the same thing, but I still have the same problem. Both the
> python-libusb1 app and the C program are using async transfers. The maximum
> isoc data rate of 24MBps should hopefully be enough, since my data from the
> FPGA is at 19MBps. I've been experimenting with the data rate from the FPGA
> to the FX2; I can slow the data rate way down to ~6MBps and everything works
> great, but any faster than that and it starts to fail. :-/

I see. It may not be that easy to achieve 24MB/s since your host side code and the FPGA side logic need to be both good.

I suggest you to post questions to the libusb mailing list. I remember there are reports there about sustaining this 24MB/s isoc transfer and 35MB-45MB/s bulk transfer speed using FX2.

> I'm not sure how to determine my USB chipset.

You can use lspci.
http://wiki.debian.org/HowToIdentifyADevice/PCI

> Chris - Thanks for your help. I will check out FPGALink on your site as I
> am using the Nexys2 board. (Great blog BTW; I've been using it as a
> reference over the past couple months while developing this project.)

I am not familiar with FPGAs, but I might try this in the future with the Spartan 3E (or 3A or 3AN) Starter Kit we have at work.

-- 
Xiaofan
From: Phil B. <phi...@gm...> - 2011-07-08 04:27:10
Xiaofan,

Thanks for the help. I'm using Linux with a 2.6.38.2 custom compiled kernel. In addition to the python application, I wrote a basic C program to do the same thing, but I still have the same problem. Both the python-libusb1 app and the C program are using async transfers. The maximum isoc data rate of 24MBps should hopefully be enough, since my data from the FPGA is at 19MBps. I've been experimenting with the data rate from the FPGA to the FX2; I can slow the data rate way down to ~6MBps and everything works great, but any faster than that and it starts to fail. :-/

I'm not sure how to determine my USB chipset.

Chris - Thanks for your help. I will check out FPGALink on your site as I am using the Nexys2 board. (Great blog BTW; I've been using it as a reference over the past couple months while developing this project.)

-Phil

On Thu, Jul 7, 2011 at 9:51 PM, Xiaofan Chen <xia...@gm...> wrote:
> On Fri, Jul 8, 2011 at 12:33 AM, Phil Behnke <phi...@gm...> wrote:
> > Hi All,
> >
> > I've been having an issue streaming data from an FPGA to a PC using the FX2LP
> > and LibUSB. The FPGA is sending 8 bits to the FX2 at a rate of 20MHz (152
> > Mbps) using isochronous mode. The problem I'm having is that I cannot get
> > data from the FX2's buffer to the PC fast enough and the buffer becomes
> > full, causing me to miss some data. I'm not sure where the bottleneck is.
> > The FX2 is set to iso mode, with 1024 bytes per packet, and 3 packets per
> > microframe, so I should have enough bandwidth to empty the buffer. I've
> > attached my firmware (fpga.c), descriptor file, and libusb driver. The
> > driver was written in Python using python-libusb1 wrappers. I have a LED on
> > the FPGA board which will read the FULL flag on the FX2 and light an LED
> > when full. By inspection, it looks like the LED is lit at about 50% duty
> > cycle. I've been working on this for quite a while and would really
> > appreciate any tips.
>
> BTW, your device violates the USB spec since you need to have
> zero bandwidth for the default alt interface (0) for isoc, which means
> that you need to add that and put your high-speed, high-bandwidth
> isoc endpoint in alt interface 1.
>
> But this might not cause a problem for you depending on the OS,
> since the OS may not reject it.
>
> --
> Xiaofan
From: Xiaofan C. <xia...@gm...> - 2011-07-08 01:51:13
On Fri, Jul 8, 2011 at 12:33 AM, Phil Behnke <phi...@gm...> wrote:
> Hi All,
>
> I've been having an issue streaming data from an FPGA to a PC using the FX2LP
> and LibUSB. The FPGA is sending 8 bits to the FX2 at a rate of 20MHz (152
> Mbps) using isochronous mode. The problem I'm having is that I cannot get
> data from the FX2's buffer to the PC fast enough and the buffer becomes
> full, causing me to miss some data. I'm not sure where the bottleneck is.
> The FX2 is set to iso mode, with 1024 bytes per packet, and 3 packets per
> microframe, so I should have enough bandwidth to empty the buffer. I've
> attached my firmware (fpga.c), descriptor file, and libusb driver. The
> driver was written in Python using python-libusb1 wrappers. I have a LED on
> the FPGA board which will read the FULL flag on the FX2 and light an LED
> when full. By inspection, it looks like the LED is lit at about 50% duty
> cycle. I've been working on this for quite a while and would really
> appreciate any tips.

BTW, your device violates the USB spec since you need to have zero bandwidth for the default alt interface (0) for isoc, which means that you need to add that and put your high-speed, high-bandwidth isoc endpoint in alt interface 1.

But this might not cause a problem for you depending on the OS, since the OS may not reject it.

-- 
Xiaofan
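(As a sketch of the descriptor layout Xiaofan describes: alternate setting 0 of the interface reserves no isochronous bandwidth, and the 3 x 1024-byte isoc IN endpoint lives only in alternate setting 1, which the host selects with SET_INTERFACE before streaming. The endpoint address 0x82 is taken from the thread; all other byte values are illustrative, not Phil's actual descriptor file.)

    /* Illustrative fragment of a configuration descriptor (byte values assumed). */
    static const unsigned char iso_iface_dscrs[] = {
        /* Interface 0, alt setting 0: bNumEndpoints = 0, so no isoc bandwidth
           is reserved while this (default) setting is selected. */
        9, 0x04, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0xFF, 0x00,

        /* Interface 0, alt setting 1: one isochronous IN endpoint. */
        9, 0x04, 0x00, 0x01, 0x01, 0xFF, 0xFF, 0xFF, 0x00,

        /* Endpoint 0x82 (EP2 IN), bmAttributes 0x01 = isochronous,
           wMaxPacketSize 0x1400 = 1024 bytes with bits 12:11 = 2
           (3 transactions per microframe), bInterval 1. */
        7, 0x05, 0x82, 0x01, 0x00, 0x14, 0x01,
    };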
From: Xiaofan C. <xia...@gm...> - 2011-07-08 01:22:15
On Fri, Jul 8, 2011 at 12:33 AM, Phil Behnke <phi...@gm...> wrote:
> Hi All,
>
> I've been having an issue streaming data from an FPGA to a PC using the FX2LP
> and LibUSB. The FPGA is sending 8 bits to the FX2 at a rate of 20MHz (152
> Mbps) using isochronous mode. The problem I'm having is that I cannot get
> data from the FX2's buffer to the PC fast enough and the buffer becomes
> full, causing me to miss some data. I'm not sure where the bottleneck is.

Typically it is on the PC side, and typically it is for IN transfers.

> The FX2 is set to iso mode, with 1024 bytes per packet, and 3 packets per
> microframe, so I should have enough bandwidth to empty the buffer.

That is high-speed, high-bandwidth isoc, and the best speed you can get is 24MB/sec (3 x 1024B / 125us). The host will be able to schedule the transfer, but you must make sure that the IN request is always there. Using async transfers helps.

What is your OS and your USB chipset? They play a part as well.

> I've attached my firmware (fpga.c), descriptor file, and libusb driver. The
> driver was written in Python using python-libusb1 wrappers. I have a LED on
> the FPGA board which will read the FULL flag on the FX2 and light an LED
> when full. By inspection, it looks like the LED is lit at about 50% duty
> cycle. I've been working on this for quite a while and would really
> appreciate any tips.

I am not familiar with python-libusb1; pyusb is more mature. Unfortunately it does not support isoc transfer yet. Maybe you want to use libusb-1.0 and C to see if that helps.

Take out all the other USB devices attached to the same root hub and see if that helps.

This thread may help; it is not for isoc transfer but the idea should be the same.
http://libusb.6.n5.nabble.com/Fwd-FT2232H-asynchronous-maximum-data-rates-td4519549.html

You can probably post the question to the libusb mailing list. There are quite a few USB experts there (both Linux and Windows).
http://www.libusb.org

Travis has got an FX2LP development board and he will try to develop benchmark firmware to test high-speed, high-bandwidth isoc transfer for libusbk in the future. Right now we are using the AVR UC3 (AVR32) sponsored by Atmel for high-speed USB related testing, but it only supports a max packet size of 512 and the MCU core may still be too slow to deal with high-speed USB.
http://code.google.com/p/usb-travis/

-- 
Xiaofan
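(For reference, a minimal libusb-1.0 sketch of keeping IN requests queued the way Xiaofan suggests: several isochronous transfers submitted up front and requeued from the callback. Endpoint 0x82 and the five-transfers-in-flight figure come from the thread; the per-transfer packet count is illustrative, and the interface's isoc alt setting is assumed to have been selected already with libusb_claim_interface()/libusb_set_interface_alt_setting().)

    #include <libusb-1.0/libusb.h>
    #include <stdlib.h>

    #define NUM_XFERS     5     /* transfers kept in flight, as in Phil's script */
    #define PKTS_PER_XFER 32    /* microframes covered by one transfer (illustrative) */

    static void cb(struct libusb_transfer *xfr)
    {
        /* consume xfr->buffer using xfr->iso_packet_desc[i].actual_length, then
           requeue immediately so an IN request is always pending for the endpoint */
        libusb_submit_transfer(xfr);
    }

    static int queue_iso_in(libusb_device_handle *dev)
    {
        /* For a high-bandwidth endpoint this returns wMaxPacketSize multiplied by
           the transactions-per-microframe count, i.e. 3 * 1024 here. */
        int pkt = libusb_get_max_iso_packet_size(libusb_get_device(dev), 0x82);

        for (int i = 0; i < NUM_XFERS; i++) {
            struct libusb_transfer *xfr = libusb_alloc_transfer(PKTS_PER_XFER);
            unsigned char *buf = malloc((size_t)pkt * PKTS_PER_XFER);
            libusb_fill_iso_transfer(xfr, dev, 0x82, buf, pkt * PKTS_PER_XFER,
                                     PKTS_PER_XFER, cb, NULL, 0 /* no timeout */);
            libusb_set_iso_packet_lengths(xfr, pkt);
            if (libusb_submit_transfer(xfr) < 0)
                return -1;
        }
        return 0;   /* a libusb_handle_events() loop must be running to fire cb() */
    }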
From: Xiaofan C. <xia...@gm...> - 2011-07-08 01:04:46
On Fri, Jul 8, 2011 at 8:51 AM, Chris McClelland <fx...@m3...> wrote:
> FWIW, even with the FIFO flags clamped to endlessly source (or sink)
> data, I have never managed to get bulk reads (or writes) using LibUSB to
> give more than 25MiB/s on Linux and 18MiB/s on Windows.

Try libusb-1.0 under Linux and it may help, since it supports async APIs (for all transfer types).

Under Windows, you can try to use the libusb-1.0 Windows backend, but it does not support isoc transfer since it uses WinUSB. On the other hand, libusb-win32 has its own async API and supports isoc transfer.
http://sourceforge.net/apps/trac/libusb-win32/wiki/libusbwin32_documentation
http://sourceforge.net/apps/trac/libusb-win32/wiki/libusbwin32_examples

> Admittedly that is for bulk transfers which have more overheads than
> iso, and I have only tried the 0.1.x releases of LibUSB because they're
> API-compatible across Windows, MacOS and Linux.

The libusb-1.0 API works under Windows, Mac OS X and Linux. The problem is that the 1.0.9 release keeps being delayed (the first official release to support Windows). But you can use libusb-pbatard.
http://www.libusb.org/
http://www.libusb.org/wiki/windows_backend

> The only other thing I can suggest is to compare the throughput you get
> with my firmware, host code and VHDL[1], but unless your FPGA board is a
> Digilent Nexys2, you will have to do some work on the pin constraints
> for your board. And remember this only deals with bulk, not iso.
>
> Sorry I can't be more helpful!
>
> Chris
>
> [1] http://www.makestuff.eu/wordpress/?page_id=1400

-- 
Xiaofan
From: Chris M. <fx...@m3...> - 2011-07-08 00:51:16
FWIW, even with the FIFO flags clamped to endlessly source (or sink) data, I have never managed to get bulk reads (or writes) using LibUSB to give more than 25MiB/s on Linux and 18MiB/s on Windows.

Admittedly that is for bulk transfers, which have more overheads than iso, and I have only tried the 0.1.x releases of LibUSB because they're API-compatible across Windows, MacOS and Linux.

The only other thing I can suggest is to compare the throughput you get with my firmware, host code and VHDL[1], but unless your FPGA board is a Digilent Nexys2, you will have to do some work on the pin constraints for your board. And remember this only deals with bulk, not iso.

Sorry I can't be more helpful!

Chris

[1] http://www.makestuff.eu/wordpress/?page_id=1400

On Thu, 2011-07-07 at 12:33 -0400, Phil Behnke wrote:
> Hi All,
>
> I've been having an issue streaming data from an FPGA to a PC using the
> FX2LP and LibUSB. The FPGA is sending 8 bits to the FX2 at a rate of
> 20MHz (152 Mbps) using isochronous mode. The problem I'm having is
> that I cannot get data from the FX2's buffer to the PC fast enough and
> the buffer becomes full, causing me to miss some data. I'm not sure
> where the bottleneck is. The FX2 is set to iso mode, with 1024 bytes
> per packet, and 3 packets per microframe, so I should have enough
> bandwidth to empty the buffer. I've attached my firmware (fpga.c),
> descriptor file, and libusb driver. The driver was written in Python
> using python-libusb1 wrappers. I have a LED on the FPGA board which
> will read the FULL flag on the FX2 and light an LED when full. By
> inspection, it looks like the LED is lit at about 50% duty cycle.
> I've been working on this for quite a while and would really
> appreciate any tips.
>
> Thanks!
>
> Best Regards,
> Phil Behnke
From: Phil B. <phi...@gm...> - 2011-07-07 16:33:48
Hi All,

I've been having an issue streaming data from an FPGA to a PC using the FX2LP and LibUSB. The FPGA is sending 8 bits to the FX2 at a rate of 20MHz (152 Mbps) using isochronous mode. The problem I'm having is that I cannot get data from the FX2's buffer to the PC fast enough and the buffer becomes full, causing me to miss some data. I'm not sure where the bottleneck is. The FX2 is set to iso mode, with 1024 bytes per packet, and 3 packets per microframe, so I should have enough bandwidth to empty the buffer. I've attached my firmware (fpga.c), descriptor file, and libusb driver. The driver was written in Python using python-libusb1 wrappers. I have a LED on the FPGA board which will read the FULL flag on the FX2 and light an LED when full. By inspection, it looks like the LED is lit at about 50% duty cycle. I've been working on this for quite a while and would really appreciate any tips.

Thanks!

Best Regards,
Phil Behnke
From: Dennis M. <djm...@gm...> - 2011-06-15 18:05:24
I had made a similar optimization in a couple of firmwares I'd written. The Keil FW framework had a one-at-a-time write, I think for some historical reason, and originally fx2lib provided equal/similar functionality. Perhaps there was an eeprom that wouldn't work correctly writing multiple bytes at a time or something. Anyhow, we saw the same thing. We were able to write an entire prom out in a second or two.

On 6/15/11 6:45 AM, S S wrote:
> Hi Folks,
>
> Could I suggest we modify the i2c eeprom_write function in i2c.c to the
> code given below?
>
> The current function writes one byte at a time on the eeprom. This is
> less than optimum as each write triggers a refresh on the eeprom which
> refreshes a whole page (64 bytes).
> That is sub-optimal on two accounts: 1- the eeprom endurance is
> specified in number of page refresh (OK at 1 million it might not be an
> issue) but 2- it takes about 64 times longer than needed to write a
> program into the eeprom.
> Modifying the function to write 64 bytes at a time (which fit nicely in
> one EP0 transfer) cuts down the program writing time from 73s down to 2s.
>
> Cheers,
>
> Sébastien
>
> BOOL eeprom_write(BYTE prom_addr, WORD addr, WORD length, BYTE* buf)
> {
>     BYTE addr_len=0;
>     BYTE addr_buffer[2];
>     BYTE bs;
>     BYTE *data_buffer_ptr = buf;
>     BYTE *last_data_ptr = buf + length;
>
>     if (EEPROM_TWO_BYTE) {
>         addr_len = 2;
>         addr_buffer[0] = MSB(addr);
>         addr_buffer[1] = LSB(addr);
>     }
>     else {
>         addr_len = 1;
>         addr_buffer[0] = LSB(addr);
>     }
>
>     while ( data_buffer_ptr < last_data_ptr ) {
>         if ( (last_data_ptr - data_buffer_ptr) > MAX_EEP_WRITE) { // Should not be the case if data is from an EP0 transfer
>             bs = MAX_EEP_WRITE;
>         }
>         else bs = last_data_ptr - data_buffer_ptr;
>         if ( ! i2c_write ( prom_addr, addr_len, addr_buffer, bs, data_buffer_ptr ) ) return FALSE;
>         addr += bs; // Potentially more data to come so remember to increase the address and buffer pointer
>         data_buffer_ptr += bs;
>     }
>     return TRUE;
> }
From: S S <ss...@ho...> - 2011-06-15 12:45:17
Hi Folks,

Could I suggest we modify the i2c eeprom_write function in i2c.c to the code given below?

The current function writes one byte at a time on the eeprom. This is less than optimum, as each write triggers a refresh on the eeprom which refreshes a whole page (64 bytes). That is sub-optimal on two accounts: 1) the eeprom endurance is specified in number of page refreshes (OK, at 1 million it might not be an issue), but 2) it takes about 64 times longer than needed to write a program into the eeprom. Modifying the function to write 64 bytes at a time (which fits nicely in one EP0 transfer) cuts the program writing time from 73s down to 2s.

Cheers,

Sébastien

BOOL eeprom_write(BYTE prom_addr, WORD addr, WORD length, BYTE* buf)
{
    BYTE addr_len=0;
    BYTE addr_buffer[2];
    BYTE bs;
    BYTE *data_buffer_ptr = buf;
    BYTE *last_data_ptr = buf + length;

    while ( data_buffer_ptr < last_data_ptr ) {
        // Set the (possibly advanced) target address for this chunk
        if (EEPROM_TWO_BYTE) {
            addr_len = 2;
            addr_buffer[0] = MSB(addr);
            addr_buffer[1] = LSB(addr);
        } else {
            addr_len = 1;
            addr_buffer[0] = LSB(addr);
        }

        if ( (last_data_ptr - data_buffer_ptr) > MAX_EEP_WRITE ) { // Should not be the case if data is from an EP0 transfer
            bs = MAX_EEP_WRITE;
        } else {
            bs = last_data_ptr - data_buffer_ptr;
        }

        if ( !i2c_write( prom_addr, addr_len, addr_buffer, bs, data_buffer_ptr ) )
            return FALSE;

        addr += bs; // Potentially more data to come, so increase the address and buffer pointer
        data_buffer_ptr += bs;
    }
    return TRUE;
}
From: Chris M. <fx...@m3...> - 2011-06-06 14:40:39
String descriptor zero is not really a string; it is just a list of supported 16-bit language codes. Furthermore, string descriptor zero is the only one which contains any language codes. Thus, for devices supporting English it is correctly declared in fw/dscr.a51:

string0:
    .db string0end - string0    ; len
    .db DSCR_STRING_TYPE
    .db 0x09, 0x04              ; 0x0409 is the language code for English.
string0end:

If you wanted to support another language in addition to English, the intent was that you would add another entry on to the end of that list, e.g. Polish is 0x0415.

When the host requests a string with GET_DESCRIPTOR, it supplies the language code (guaranteed to be either 0x0409 or 0x0415 in this case, since that's the list returned by descriptor zero) in wIndex. Unfortunately, handle_get_descriptor() in setupdat.c does not consider the language code in wIndex; it assumes there is only one language. So fx2lib works OK as long as there is only one language (whether English or something else), but querying for the same descriptor in multiple languages always returns the same string. This is a bug.

I propose to add an extra structure in fw/dscr.a51 which selects which string table to use based on the supplied language code. Something like this:

_lang_table:
    .word 0x0409, eng_string1_begin
    .word 0x040c, fra_string1_begin

_string0:
    .db string0_end - _string0  ; len
    .db DSCR_STRING_TYPE
    .db 0x09, 0x04              ; 0x0409 is the language code for US English.
    .db 0x0c, 0x04              ; 0x040c is the language code for French
string0_end:

eng_string1_begin:
    .db eng_string1_end - eng_string1_begin
    .db DSCR_STRING_TYPE
    .ascii "The Dog"            ; Actually a list of .ascii 'T'; .db 0 for each char
eng_string1_end:

eng_string2_begin:
    .db eng_string2_end - eng_string2_begin
    .db DSCR_STRING_TYPE
    .ascii "FooBar v1"
eng_string2_end:
    .dw 0x0000

fra_string1_begin:
    .db fra_string1_end - fra_string1_begin
    .db DSCR_STRING_TYPE
    .ascii "Le Chien"
fra_string1_end:

fra_string2_begin:
    .db fra_string2_end - fra_string2_begin
    .db DSCR_STRING_TYPE
    .ascii "FooBar v1"
fra_string2_end:
    .dw 0x0000

...then modify handle_get_descriptor() to select a string table from _lang_table based on the langid supplied in wIndex. For the single-language case it should be possible to use some conditional compilation, omit the lang_table and fall back to the current behaviour.

Any objections to that approach?

Chris

On Sun, 2011-06-05 at 12:04 +0200, Zbigniew Karkuszewski wrote:
> Hi,
>
> I keep getting a string 0 descriptor error message on linux while
> downloading my firmware build with fx2lib. I have noticed that in dscr_asm
> file there is only one byte devoted to the language id in the string 0
> descriptor. The USB 2.0 TRM says it should be two bytes (word) since the
> language codes are 16-bit long.
>
> Same thing for the STRING_DSCR structure in setupdat.h.
>
> Cheers!
> zbyszek
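(A rough C sketch of the matching lookup handle_get_descriptor() would then need. The struct layout, names other than _lang_table, and the fallback behaviour are assumptions about how the proposal might be implemented, not existing fx2lib code.)

    typedef struct {
        WORD langid;            /* e.g. 0x0409 (US English), 0x040c (French) */
        __code BYTE *strings;   /* first string descriptor for that language */
    } LANG_ENTRY;

    extern __code LANG_ENTRY lang_table[];  /* _lang_table from dscr.a51, terminated by langid 0 */

    /* Return string descriptor 'index' (1-based) for the requested langid, or NULL.
       Each descriptor starts with its own length byte; a language's table ends
       with a zero byte, matching the ".dw 0x0000" terminator in the proposal. */
    static __code BYTE *find_string_dscr(WORD langid, BYTE index)
    {
        __code LANG_ENTRY *e = lang_table;
        __code BYTE *s;
        BYTE i;

        while (e->langid && e->langid != langid)
            ++e;
        if (!e->langid)
            e = lang_table;     /* unknown langid: fall back to the first language */

        s = e->strings;
        for (i = 1; i < index; ++i) {
            if (!*s)
                return NULL;    /* fewer strings than requested */
            s += *s;            /* skip over this descriptor by its length byte */
        }
        return *s ? s : NULL;
    }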