From: Petr V. <van...@vc...> - 2000-03-19 01:09:10
On Fri, Mar 17, 2000 at 03:13:40PM +0100, Pavel Machek wrote:
> > > From: Pavel Machek <pa...@su...>
> > > To: lin...@bu..., lin...@su..., mi...@ch...,
> > > jsi...@ac...
> > > PS: Would it be possible to declare that fbcon is re-entrant, but may
> > > mess the screen up in such case?
> > No. Accesses to accelerators cannot be interrupted. But it is possible
> > to place down_trylock around them...
> Well, then critical sections around accelerator could be wrapped in
> spin_lock_irq...
It is not a good idea. Accelerated operations can be very long.
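(Roughly what I had in mind with the trylock, just a sketch; accel_sem and the blit routine are made up, not taken from any existing driver:)

#include <asm/semaphore.h>

static DECLARE_MUTEX(accel_sem);

static void fbcon_accel_op(void)
{
	/* Do not spin with IRQs off for the whole operation; just skip
	 * the accelerated path if somebody is already in it. */
	if (down_trylock(&accel_sem)) {
		/* re-entered: give up and accept a messed up screen */
		return;
	}
	/* ... program the accelerator, which may take a long time ... */
	up(&accel_sem);
}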
> > > + * Bigger buffer means better console writing performance, but worse
> > > + * latency of console switches.
> > > -char con_buf[PAGE_SIZE];
> > > -#define CON_BUF_SIZE PAGE_SIZE
> > > +#define CON_BUF_SIZE (PAGE_SIZE/10)
> > > +char con_buf[CON_BUF_SIZE];
> > Is not 400 too small? It is ~3 lines...
> No. Scrolling 3 lines takes _lots_ of time.
Under normal setups no scrolling occurs (i.e. hardware panning is used).
> > > +static int softint_missed = 0;
> > > -static void console_softint(unsigned long ignored)
> > > +static void console_softint(unsigned long ignored)
> > > +{
> > > + run_task_queue(&con_task_queue);
> > > + softint_missed = 1;
> > > + if (down_trylock(&console_sem)) {
> > > + printk( "console_softint request dropped\n" );
> > Is it really good idea to do printk() when console_sem is held?
> Yes. It _must_ work that way.
With the current code you cannot do printk() from fbcon/fbdev (because
console_lock is held, so on SMP a deadlock occurs on printk()), so if
your code fixes that, fine.
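(To show what I mean, a rough sketch only, not Pavel's hunk, whose tail is cut off above anyway; console_sem, softint_missed, con_task_queue and the printk are the symbols from that fragment, the ordering here is my guess:)

static DECLARE_MUTEX(console_sem);
static int softint_missed = 0;

static void console_softint(unsigned long ignored)
{
	if (down_trylock(&console_sem)) {
		/* console_sem is held by somebody else; doing printk()
		 * here is only safe if printk() itself can cope with the
		 * console being held, which is the question above */
		printk("console_softint request dropped\n");
		softint_missed = 1;
		return;
	}
	softint_missed = 0;
	run_task_queue(&con_task_queue);	/* flush queued console work */
	up(&console_sem);
}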
> > > - spin_unlock_irq(&console_lock);
> > > ret = copy_to_user(buf, con_buf_start, orig_count);
> > You do not have to have locked console here... You need only
> I know, but I do not care: copy_from_user is faster operation than
> scrolling.
I'm sorry. As we are probably holding the big kernel lock here anyway,
there is no reason to unlock console_sem (except to allow console_softint
to run?).
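(To illustrate, a rough sketch and not the real vcs_read(): copy_to_user() can sleep on a page fault, so it must not run under spin_lock_irq(&console_lock), but keeping the console_sem semaphore held over it is legal; it only delays console_softint for that long.)

#include <linux/types.h>
#include <linux/errno.h>
#include <asm/semaphore.h>
#include <asm/uaccess.h>

extern struct semaphore console_sem;	/* the semaphore from the patch above */

static ssize_t vcs_read_sketch(char *buf, const char *con_buf_start,
			       size_t orig_count)
{
	down(&console_sem);
	/* ... fill con_buf from screen memory while the console cannot
	 * change under us ... */
	if (copy_to_user(buf, con_buf_start, orig_count)) {
		up(&console_sem);
		return -EFAULT;
	}
	up(&console_sem);
	return orig_count;
}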
Best regards,
Petr Vandrovec
van...@vc...