From: Giacomo G. <gi...@ga...> - 2003-09-09 16:10:31
Hi all,

OK, I underestimated the video scroll function... it takes 1900 us on a P3 Celeron. So the ABORT 64 error with a system TICK of 1000 is normal... anyway, in one-shot mode it shouldn't happen. I suggest changing ll_timer to skip the abort test in the one-shot case.

I also modified the scroll function:

void _scroll(char attr, int x1, int y1, int x2, int y2)
{
    register int x, y;
    WORD xattr = attr << 8, w;
    LIN_ADDR v = (LIN_ADDR)(0xB8000 + active_page*(2*PAGE_SIZE));

    /* Copy each row of the window up by one row, one word at a time */
    for (y = y1+1; y <= y2; y++)
        for (x = x1; x <= x2; x++) {
            w = lmempeekw((LIN_ADDR)(v + 2*(y*cons_columns+x)));
            lmempokew((LIN_ADDR)(v + 2*((y-1)*cons_columns+x)), w);
        }
    /* After the loop y == y2+1, so (y-1) blanks the last row of the window */
    for (x = x1; x <= x2; x++)
        lmempokew((LIN_ADDR)(v + 2*((y-1)*cons_columns+x)), xattr);
}

//#define OPTIMIZED
#ifdef OPTIMIZED

void scroll(void)
{
    int x;
    WORD xattr = bios_attr << 8;
    LIN_ADDR v = (LIN_ADDR)(0xB8000 + active_page*(2*PAGE_SIZE));

    /* Move rows 1..rows-1 up by one row in a single pass */
    memcpy((LIN_ADDR)(v),
           (LIN_ADDR)(v + 2*cons_columns),
           cons_columns*(cons_rows-1)*2);
    /* Blank the last row with the current attribute */
    for (x = 0; x <= cons_columns-1; x++)
        lmempokew((LIN_ADDR)(v + 2*((cons_rows-1)*cons_columns+x)), xattr);
}

#else

void scroll(void)
{
    _scroll(bios_attr, 0, 0, (cons_columns-1), (cons_rows-1));
}

#endif

With the optimized one I got 1200 us instead of 1900... not much, but it's called many times.

Regards,
Giacomo
From: Luca A. <luc...@em...> - 2003-09-10 07:43:14
Hi,

> OK, I underestimated the video scroll function... it takes 1900 us on
> a P3 Celeron.

Since there is some I/O involved, this long time is not completely surprising. However, most of the text output functions (as they are implemented now) are just crap, and they should be rewritten. I wanted to do that, but I never found the time (as usual ;). As you note, for example, the generic scroll function is implemented through _scroll, and this is just bad.

If someone is going to reimplement the relevant parts of libcons, I am more than willing to commit the changes. In particular:
- all the functions that support windows (I remember _clear() and _scroll()) should be renamed to win_*() and moved to a different file (something like wincons.c);
- all the functions calling win_* functions should be reimplemented in a simpler, optimized way.

> So the ABORT 64 error with a system TICK of 1000 is normal...

Yes, doing a message() inside an irq handler is a _very_ bad idea.

> anyway, in one-shot mode it shouldn't happen. I suggest changing
> ll_timer to skip the abort test in the one-shot case.

I am willing to make this change, however... In this particular situation, the abort 64 is more than motivated even in the one-shot case. Guys, we are spending more than 1 ms in the int handler!!! The kernel must complain, otherwise we are going to have a lot of funny bugs in the future...
Maybe I can just change the overrun test (converting it into a test on the amount of time spent in the handler) and print a warning if things are going bad...

> I also modified the scroll function:
>
> void _scroll(char attr, int x1, int y1, int x2, int y2)
> {
>     register int x, y;
>     WORD xattr = attr << 8, w;
>     LIN_ADDR v = (LIN_ADDR)(0xB8000 + active_page*(2*PAGE_SIZE));
>
>     for (y = y1+1; y <= y2; y++)
>         for (x = x1; x <= x2; x++) {
>             w = lmempeekw((LIN_ADDR)(v + 2*(y*cons_columns+x)));
>             lmempokew((LIN_ADDR)(v + 2*((y-1)*cons_columns+x)), w);
>         }
>     for (x = x1; x <= x2; x++)
>         lmempokew((LIN_ADDR)(v + 2*((y-1)*cons_columns+x)), xattr);
> }
>
> //#define OPTIMIZED
> #ifdef OPTIMIZED
>
> void scroll(void)
> {
>     int x;
>     WORD xattr = bios_attr << 8;
>     LIN_ADDR v = (LIN_ADDR)(0xB8000 + active_page*(2*PAGE_SIZE));
>
>     memcpy((LIN_ADDR)(v),
>            (LIN_ADDR)(v + 2*cons_columns),
>            cons_columns*(cons_rows-1)*2);

Uhmmm... maybe memmove()? The two memory areas overlap... OK, since dest < src, memcpy() works well here, but memmove() would be safer... Can you measure any performance difference if you use memmove()?

Luca
--
_____________________________________________________________________________
Copy this in your signature, if you think it is important:
N O W A R ! ! !
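For reference, the memmove() variant Luca suggests would look like the sketch below. It assumes the same libcons globals and helpers as the quoted code (active_page, cons_columns, cons_rows, bios_attr, lmempokew()); it illustrates the suggested change, it is not code from the tree.

#include <string.h>   /* memmove() */

void scroll(void)
{
    int x;
    WORD xattr = bios_attr << 8;
    LIN_ADDR v = (LIN_ADDR)(0xB8000 + active_page*(2*PAGE_SIZE));

    /* Rows 1..rows-1 move up by one row; source and destination overlap,
       and memmove() is defined for overlapping regions, unlike memcpy(). */
    memmove((LIN_ADDR)(v),
            (LIN_ADDR)(v + 2*cons_columns),
            cons_columns*(cons_rows-1)*2);

    /* Blank the freed last row with the current attribute. */
    for (x = 0; x < cons_columns; x++)
        lmempokew((LIN_ADDR)(v + 2*((cons_rows-1)*cons_columns+x)), xattr);
}

Since dest < src here, a forward copy is safe either way; memmove() just makes that correctness explicit instead of relying on the copy direction memcpy() happens to use.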
From: Paolo G. <pao...@ti...> - 2003-09-10 08:03:51
> I am willing to make this change, however... In this particular
> situation, the abort 64 is more than motivated even in the one-shot
> case. Guys, we are spending more than 1 ms in the int handler!!! The
> kernel must complain, otherwise we are going to have a lot of funny
> bugs in the future...
> Maybe I can just change the overrun test (converting it into a test on
> the amount of time spent in the handler) and print a warning if things
> are going bad...

My opinion is that the action to be taken when a timer overrun is detected should be left to the kernel layer. Printing a message or exiting with ll_abort is not a good idea, because ll_abort just exits without saying a word; in general, the kernel should have the possibility to do at least something like:
- clean up all its things before exiting;
- send a signal to some process saying "hey! something strange happened!", leaving the application the possibility to ignore it or to exit properly;
- adjust the timer period adaptively ;-)

Also note that a clock overrun is -really- critical if it makes you lose the time reference... but not so critical if it happens when using the TSC (which has a longer lifetime).

(BTW, Giacomo sent you a patch about TSC/APIC support in the event mechanism of OSLib... I have not checked whether it has been rejected or included in the CVS tree... if you want, we can post it again in the patch section of the SourceForge website...)

Also note that the clock overrun problem often happens at initialization time, so, since things have not yet started properly, we should allow some flexibility.

I would propose some kind of hook that is set to ll_abort by default, but that can be redefined by the user, in a similar way to what happens for the irqs/exceptions/...

bye

PJ
--
Paolo Gai <pao...@ti...>
Scuola S. Anna
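A minimal sketch of the hook Paolo proposes, assuming a hypothetical registration function: the name ll_set_overrun_handler() and the typedef are illustrative, chosen to mirror the way irq/exception handlers are redefined; only ll_abort() is an existing OSLib call.

typedef void (*overrun_hook_t)(int code);

/* Default action: today's behaviour, i.e. give up immediately. */
static void default_overrun_handler(int code)
{
    ll_abort(code);
}

static overrun_hook_t timer_overrun_hook = default_overrun_handler;

/* Hypothetical registration call; returns the previous hook so the
   kernel can chain or restore it.  Passing NULL restores the default. */
overrun_hook_t ll_set_overrun_handler(overrun_hook_t hook)
{
    overrun_hook_t old = timer_overrun_hook;
    timer_overrun_hook = (hook != NULL) ? hook : default_overrun_handler;
    return old;
}

In the timer handler, the direct ll_abort(64) then becomes timer_overrun_hook(64), and a kernel that wants to clean up, signal a process, or adapt the tick can install its own handler at init time.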
From: Luca A. <luc...@em...> - 2003-09-11 15:03:14
Hi all,

> > oslib/kl/intevt.c:65: return at line 70 without sti(); also warning in comments
> bug

Is it OK if I remove the cli()/sti() from irq_bind()? (It is the only OSLib function that protects itself with cli()/sti().) In this way, the responsibility for locking/unlocking is left to the kernel (right now, this seems to me the correct thing to do...).

Also, this would answer the "open question" in oq.txt (I forgot about it ;-).

> > oslib/kl/intevt.c:111: cli() possibly without a following sti()? or are we deep in something else?
> should be ok...

Yes, it is OK. It is re-disabling the interrupts that were enabled at line 106.

Luca
--
_____________________________________________________________________________
Copy this in your signature, if you think it is important:
N O W A R ! ! !
From: Paolo G. <pao...@ti...> - 2003-09-11 15:31:02
> Is it OK if I remove the cli()/sti() from irq_bind()? (It is the only
> OSLib function that protects itself with cli()/sti().) In this way, the
> responsibility for locking/unlocking is left to the kernel (right now,
> this seems to me the correct thing to do...).
>
> Also, this would answer the "open question" in oq.txt (I forgot about it
> ;-).

From my point of view, OSLib functions should not be protected with cli()/sti(). That is, it is the responsibility of the upper layer (the kernel) to disable the interrupts when needed.

That is at least what I always assumed when I implemented the Shark kernel layer...

bye

PJ
--
Paolo Gai <pao...@ti...>
Scuola S. Anna
From: Luca A. <luc...@em...> - 2003-09-12 09:39:01
On Thu, 2003-09-11 at 16:30, Paolo Gai wrote:
> > Is it OK if I remove the cli()/sti() from irq_bind()? (It is the only
> > OSLib function that protects itself with cli()/sti().) In this way, the
> > responsibility for locking/unlocking is left to the kernel (right now,
> > this seems to me the correct thing to do...).
> >
> > Also, this would answer the "open question" in oq.txt (I forgot about it
> > ;-).
>
> From my point of view, OSLib functions should not be protected with
> cli()/sti().

Ok, I am going to remove cli()/sti() from irq_bind()...

> That is, it is the responsibility of the upper layer (the kernel) to
> disable the interrupts when needed.
>
> That is at least what I always assumed when I implemented the Shark
> kernel layer...

Well, it was an open question... Time to close it :-)

Luca
--
_____________________________________________________________________________
Copy this in your signature, if you think it is important:
N O W A R ! ! !
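With cli()/sti() gone from irq_bind(), a kernel layer that can be entered with interrupts enabled has to bracket the call itself. A sketch of the pattern, with the caveat that kern_irq_bind() is a hypothetical wrapper and the exact irq_bind() prototype in OSLib may differ; treat this as the locking pattern, not the real API:

/* Hypothetical kernel-side wrapper: the kernel, not OSLib, now owns
   the critical section around handler registration. */
int kern_irq_bind(int irq, void (*handler)(int n))
{
    int result;

    cli();                            /* enter critical section */
    result = irq_bind(irq, handler);
    sti();                            /* leave critical section */
    return result;
}

If the wrapper can itself be called with interrupts already disabled, saving and restoring the flags register would be safer than an unconditional sti().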
From: Ian B. <ia...@cs...> - 2003-09-16 11:33:23
We've got a serious problem of clock drift in Shark: over 1 second, sys_gettime() loses about 50 us compared to the same machine running Linux.

I note that oslib/kl/event.c and oslib/ll/sys/ll/time.h make use of a magic number 1197, relating to 1.19718 MHz. Should this be 1.193182 MHz?

I'm not sure about this, particularly as the ratio of these two frequencies (about 1.00335) does not match our clock drift discrepancy (about 1.00005).

ian
From: Luca A. <luc...@em...> - 2003-09-16 11:58:21
Hi Ian,

I do not remember where the 1197 comes from... I suspect you are right, and it should be 1193. Try using 1193 and see if it works better... ;-)

BTW, Shark also has a "time correction" mechanism based on the RTC... Did you try it?

Luca
--
_____________________________________________________________________________
Copy this in your signature, if you think it is important:
N O W A R ! ! !
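The number under discussion is the PIT input clock, used when converting timer ticks to time. A sketch of the conversion, where the function name is illustrative and the real code keeps the constant pre-divided (1197 resp. 1193):

/* PIT input clock: 1193182 Hz, i.e. ~1.193 ticks per microsecond.
   The old code effectively assumed 1.19718 MHz via the 1197 constant. */
#define PIT_HZ 1193182UL

/* Microseconds elapsed for a given number of PIT ticks.  The 64-bit
   intermediate avoids overflowing 32 bits when ticks is large. */
static unsigned long pit_ticks_to_us(unsigned long ticks)
{
    return (unsigned long)(((unsigned long long)ticks * 1000000ULL) / PIT_HZ);
}

Using 1197 where 1193 belongs misstates elapsed time by a factor of about 1.0033, i.e. roughly 3.3 ms per second, far more than the measured 50 us/s; this is consistent with Ian's remark that the frequency ratio does not match the observed drift.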
From: Giacomo G. <gi...@ga...> - 2003-09-16 14:00:09
Hi all,

The precision of the timer in Shark comes from the calibration routine that counts the CPU clocks in 1 ms. This value is printed during the init step of a Shark demo (the clk_per_msec). I rewrote this calibration routine starting from the Linux one... but they are not the same, and a difference of a few microseconds is normal. Anyway, after a test session with an external timer (controlled through the parallel port), I think Linux is the less precise of the two.

The timer precision depends on the relation between the PIT and the TSC. Considering that the PIT has a 16-bit counter, this precision cannot be better than about 1 part in 65536.

With the RTC corrections (the IRQ8 handler) I tried to reduce the long-term drift... its effect cannot be seen over 1 second, but over hours.

The old timer routine of Shark (the current one in OSLib) uses this 1197 value, which is no longer significant (the system tick is now decoupled from the timer count), and it introduced a bigger error.

bye
Giacomo

Ian Broster wrote:

> We've got a serious problem of clock drift in Shark: over 1 second,
> sys_gettime() loses about 50 us compared to the same machine running
> Linux.
>
> I note that oslib/kl/event.c and oslib/ll/sys/ll/time.h make use of a
> magic number 1197, relating to 1.19718 MHz. Should this be 1.193182 MHz?
>
> I'm not sure about this, particularly as the ratio of these two
> frequencies (about 1.00335) does not match our clock drift discrepancy
> (about 1.00005).
>
> ian
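A sketch of the calibration flow Giacomo describes, with the caveat that pit_wait_ms() is a hypothetical stand-in for the real "program PIT channel 2 and poll it" code in advtimer.c; the rdtsc instruction and the clk_per_msec idea are as discussed above:

/* Read the Pentium 64-bit time-stamp counter (GCC inline asm, i386;
   the "=A" constraint returns the edx:eax pair). */
static inline unsigned long long rdtsc(void)
{
    unsigned long long t;
    __asm__ __volatile__("rdtsc" : "=A" (t));
    return t;
}

/* Illustrative calibration: count TSC cycles across one PIT-timed
   millisecond.  The PIT's ~838 ns tick bounds how precisely that
   1 ms gate is actually known. */
unsigned long calibrate_clk_per_msec(void)
{
    unsigned long long start, end;

    start = rdtsc();
    pit_wait_ms(1);        /* hypothetical: busy-wait 1 ms on the PIT */
    end = rdtsc();

    /* CPU clocks per millisecond; the timer code can then turn a TSC
       delta into time as delta / clk_per_msec milliseconds. */
    return (unsigned long)(end - start);
}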
From: Paolo G. <pao...@ti...> - 2003-09-16 12:23:23
Yes... we also noticed that. I never checked the real frequency of the system, but we noted that it was not keeping time properly compared with an external alarm.

Giacomo implemented a clock correction algorithm based on the Pentium TSC that also uses the real-time clock (RTC) of the PC. It was submitted as a patch to Luca, and it is currently part of the Shark distribution. You have to check the value of the following two variables in oslib/advtimer.c:

unsigned char use_tsc = 1;  // Enable the TSC counter mode
unsigned char use_cmos = 1; // Enable the RTC correction

...they should both be 1.

I also think that Giacomo prepared a small note on the clock correction, but I believe he has not published it yet... (Giacomoooo???)

bye
PJ

On Tue, 2003-09-16 at 13:27, Ian Broster wrote:
> We've got a serious problem of clock drift in Shark: over 1 second,
> sys_gettime() loses about 50 us compared to the same machine running
> Linux.
>
> I note that oslib/kl/event.c and oslib/ll/sys/ll/time.h make use of a
> magic number 1197, relating to 1.19718 MHz. Should this be 1.193182 MHz?
>
> I'm not sure about this, particularly as the ratio of these two
> frequencies (about 1.00335) does not match our clock drift discrepancy
> (about 1.00005).
>
> ian
--
Paolo Gai <pao...@ti...>
Scuola S. Anna
From: Ian B. <ia...@cs...> - 2003-09-16 13:06:46
> unsigned char use_tsc = 1;  // Enable the TSC counter mode
> unsigned char use_cmos = 1; // Enable the RTC correction

This didn't seem to help: the clock drift is the same.

i