From: Nathan H. <na...@ma...> - 2001-01-01 22:02:10
On Mon, Jan 01, 2001 at 12:14:27PM +0100, Damon LoCascio wrote:
> Hey all,
>
> Just answered a question on the Dri-Users list about building the
> mga_hal driver. In doing this I came across something that I thought
> everyone should know about, so that silly questions are not asked
> twice. Having said that, could someone tell me WHY -DUSE_KATMAI_ASM
> is the default?

Because the GL code has assembly paths for all the common CPUs and
dynamically hooks in the appropriate paths at runtime. This allows for
a single binary that is optimised on all CPUs.

The most likely problem is that the user has compiled their kernel for
P3 support when they don't have a P3. I'm no expert on how the Katmai
detection works, though; I think Gareth would know.

You might like to find out where in check_os_katmai_support you're
bombing out. The easiest way to do this is to remote-attach a debugger
to the X server. Failing that, use printf()/fflush() in the code.

> From my mail:
>
> CAVEATS: 1)
>
> I am going to mention this on the developer list as well, but it's
> worth saying twice because I got burned myself. It seems that the
> XFree86 4.0.2 tree, when building the GL libraries, defaults to using
> the USE_KATMAI_ASM macro as well as any MMX or 3DNow! extensions your
> CPU may support. This was fatal on my machine, an AMD K6-III: any
> object compiled against that libGL.so, or even one trying to use it
> dynamically, would core dump with an illegal instruction. If you are
> using an Intel processor, the chances are you won't even notice, and
> you may even get a bit of a speed increase.
>
> To disable this macro if you need to, define
>
>     MesaUseKatmai NO
>
> in the same place as the Matrox macro, and it will compile without
> that particular type of asm.
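For completeness, "the same place as the Matrox macro" means your Imake
site configuration file, which in an XFree86 4.x source tree is
typically config/cf/host.def, and the setting is a cpp #define. A
sketch, assuming that layout (check your own tree's config/cf for the
exact file and for the related MMX/3DNow! knobs):

```
/* config/cf/host.def -- site-specific build overrides */
#define MesaUseKatmai NO   /* skip the SSE ("Katmai") assembly paths */
```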
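For anyone unfamiliar with the "single binary, optimised on all CPUs"
scheme: the idea is that the library probes the CPU once at startup and
patches function pointers to the best available implementation. A
minimal sketch of that pattern follows; the function names and the
trivial probe are illustrative, not Mesa's actual symbols.

```c
#include <stdio.h>

/* Hypothetical scalar and SSE ("Katmai") implementations of the same
 * operation. Here both are plain C so the sketch runs anywhere; in the
 * real library the second would be hand-written assembly. */
static void transform_c(float *v, int n)
{
    for (int i = 0; i < n; i++)
        v[i] *= 2.0f;
}

static void transform_katmai(float *v, int n)
{
    for (int i = 0; i < n; i++)
        v[i] *= 2.0f;
}

/* Dispatch pointer, patched once at startup. Defaults to the portable
 * C path so nothing breaks if init is never run. */
static void (*transform)(float *, int) = transform_c;

/* Stand-in for the real CPUID/OS probe (the role that
 * check_os_katmai_support plays in Mesa). Hard-wired to "no" here. */
static int cpu_has_katmai(void)
{
    return 0;
}

static void init_dispatch(void)
{
    if (cpu_has_katmai())
        transform = transform_katmai;   /* fast SSE path */
    else
        transform = transform_c;        /* portable fallback */
}
```

The crash Damon saw is exactly what happens when the probe step is
wrong or skipped: the SSE path gets selected (or compiled in
unconditionally) on a CPU that lacks those instructions, and the first
SSE opcode raises SIGILL.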
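On the printf()/fflush() suggestion: the flush is the important part.
If the server dies on an illegal instruction right after the printf,
the message may still be sitting in stdio's buffer and you will never
see it. A trace helper along these lines (the function name and message
format are my own, purely illustrative) avoids that:

```c
#include <stdio.h>

/* Print a progress marker and flush immediately, so the message
 * survives even if the process is killed by SIGILL on the very next
 * instruction. */
static void trace(const char *where)
{
    fprintf(stderr, "check_os_katmai_support: %s\n", where);
    fflush(stderr);   /* don't leave the message in a stdio buffer */
}
```

Sprinkle calls like trace("before signal handler setup") through the
function and the last line printed tells you where it bombed.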