Hi all,
 
I've ported Mesa to an ARM9 embedded system, using the linux-osmesa config as a basis. All the demos work fine and some OpenGL games run OK.
 
Now I want to optimize it to run faster, if possible.
Constraints of my system: ARM9 CPU, no floating-point unit, 16 bpp color depth (RGB565), 480x272 screen resolution.
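
For reference, I set up the context on the target roughly like this (a minimal sketch; the real code has more error handling around the framebuffer copy):

#include <GL/osmesa.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdlib.h>

#define WIDTH  480
#define HEIGHT 272

int main(void)
{
   /* 16bpp RGB565 context to match the display format */
   OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGB_565, NULL);
   if (!ctx) {
      fprintf(stderr, "OSMesaCreateContext failed\n");
      return 1;
   }

   /* one GLushort (5-6-5) per pixel */
   GLushort *buffer = malloc(WIDTH * HEIGHT * sizeof(GLushort));

   /* for OSMESA_RGB_565 the type must be GL_UNSIGNED_SHORT_5_6_5 */
   if (!OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_SHORT_5_6_5,
                          WIDTH, HEIGHT)) {
      fprintf(stderr, "OSMesaMakeCurrent failed\n");
      return 1;
   }

   /* ... render with normal GL calls, then blit 'buffer' to the LCD ... */

   OSMesaDestroyContext(ctx);
   free(buffer);
   return 0;
}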
 
I've made the following changes to the src/mesa/main/config.h file:
 
/** Maximum order (degree) of curves */
#define MAX_EVAL_ORDER 12             /* was 30 */

/** Max texture palette / color table size */
#define MAX_COLOR_TABLE_SIZE 128      /* was 256 */

/** Subpixel precision for antialiasing, window coordinate snapping */
#define SUB_PIXEL_BITS 2              /* was 4 */

/** Size of histogram tables */
#define HISTOGRAM_TABLE_SIZE 128      /* was 256 */

/** Bits per depth buffer value (max is 32). */
#define DEFAULT_SOFTWARE_DEPTH_BITS 4 /* was 16 */

/** Bits per color channel: 8, 16 or 32 */
#define CHAN_BITS 8
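
To check whether changes like these actually take effect, the buffer depths can be queried at runtime with standard GL calls, e.g.:

#include <stdio.h>
#include <GL/gl.h>

static void print_buffer_bits(void)
{
   GLint depth, r, g, b;
   glGetIntegerv(GL_DEPTH_BITS, &depth);
   glGetIntegerv(GL_RED_BITS,   &r);
   glGetIntegerv(GL_GREEN_BITS, &g);
   glGetIntegerv(GL_BLUE_BITS,  &b);
   printf("depth bits: %d  color bits: %d/%d/%d\n", depth, r, g, b);
}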

I also tried to play with the feature defines, but they seem to cause compile errors.
Are there any other hints for making Mesa faster? Are the changes I made safe?
Is it possible to switch to pure integer processing, i.e. change the GLfloat type to an integer (fixed-point) type somehow?
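
By "pure integer processing" I mean something like 16.16 fixed point; a generic sketch of the arithmetic I have in mind (not Mesa code):

#include <stdint.h>

/* generic 16.16 fixed point, just to illustrate the idea */
typedef int32_t fixed_t;

#define FIXED_SHIFT 16
#define FIXED_ONE   (1 << FIXED_SHIFT)

static inline fixed_t float_to_fixed(float f)
{
   return (fixed_t)(f * FIXED_ONE);
}

static inline fixed_t fixed_mul(fixed_t a, fixed_t b)
{
   /* widen to 64 bits so the intermediate product doesn't overflow */
   return (fixed_t)(((int64_t)a * b) >> FIXED_SHIFT);
}

static inline fixed_t fixed_div(fixed_t a, fixed_t b)
{
   return (fixed_t)(((int64_t)a << FIXED_SHIFT) / b);
}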
 
Sincerely,
 
Serg