From: Gareth H. <ga...@va...> - 2001-03-09 00:18:51

Gareth Hughes wrote:
>
> My approach for this is perhaps slightly different to yours. I was
> thinking more along the lines of having the compiled functions stored as
> strings, which can be copied and edited by the context as needed. This
> allows the context to insert hard-coded memory references and so on.
> Similarly, I've been kicking around a design of a dynamic software
> renderer, which is built from chunks of compiled code that can be
> tweaked and chained together depending on the current GL state etc. I
> don't think actually "compiling" code is the answer -- it's more a
> customization of pre-compiled code to suit the current context.

I should also add that functions can be built up from basic blocks, and
these blocks are stored as strings and are edited/chained together to
form the function as required.

-- Gareth

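To make the basic-block idea concrete, here is a minimal sketch that
assembles a function from string-stored blocks, assuming a
compile_function() helper like the one Josh posts further down this
page; the block contents and the blend flag are hypothetical:

#include <string.h>

/* Basic blocks stored as strings; the context picks and chains them
 * according to the current GL state, then hands the result to the
 * compiler. */
static const char *block_head =
    "void span(int n, unsigned *dst, const unsigned *src)\n"
    "{\n"
    "    int i;\n"
    "    for (i = 0; i < n; i++) {\n"
    "        unsigned c = src[i];\n";
static const char *block_blend =
    "        c = (c >> 1) & 0x7f7f7f7f;   /* hypothetical blend block */\n";
static const char *block_tail =
    "        dst[i] = c;\n"
    "    }\n"
    "}\n";

static const char *
build_span_source(int blend_enabled)
{
    static char buf[1024];
    buf[0] = '\0';
    strcat(buf, block_head);
    if (blend_enabled)
        strcat(buf, block_blend);    /* splice the optional block in */
    strcat(buf, block_tail);
    return buf;                      /* feed to compile_function() */
}
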
From: Gareth H. <ga...@va...> - 2001-03-09 00:07:05

Josh Vanderhoof wrote:
>
> > I believe that when SGI do this kind of thing in their Windoze
> > software-OpenGL, they were generating x86 machine code directly
> > into memory without using compilers or even assemblers. That's
> > a viable technique...at least for the most popular CPU types.
>
> That's the obvious way to approach the problem, but there are problems
> there too. Say I have a routine that uses 1 more variable than there
> are registers in the worst case, but usually fits in the registers.
> Should the x86 generator have a register allocator? How about if some
> versions of the generated code have common subexpressions? Should the
> generator have a CSE pass? The generator will eventually start to
> look like a bad compiler that can only generate x86 code.

My approach for this is perhaps slightly different to yours. I was
thinking more along the lines of having the compiled functions stored as
strings, which can be copied and edited by the context as needed. This
allows the context to insert hard-coded memory references and so on.
Similarly, I've been kicking around a design of a dynamic software
renderer, which is built from chunks of compiled code that can be
tweaked and chained together depending on the current GL state etc. I
don't think actually "compiling" code is the answer -- it's more a
customization of pre-compiled code to suit the current context.

-- Gareth

From: Allen A. <ak...@po...> - 2001-03-08 22:00:33

I'm a great fan of dynamic code generation. Here are two of my favorite
online references on the subject:

http://www.cs.washington.edu/research/projects/unisw/DynComp/www/
http://www.cs.columbia.edu/~library/TR-repository/reports/reports-1992/cucs-039-92.ps.gz

Plenty more available from Google, of course.

Now back to the discussion... I tend to agree with Steve.

Allen

From: Josh V. <ho...@na...> - 2001-03-08 21:46:45

"Stephen J Baker" <sj...@li...> writes:

> I think the problems are significant:
>
> 1) Whilst it'll (perhaps) improve frame rates once the code
>    has been compiled, you could easily get a several-second
>    pause when you first refer to something that triggers this
>    action. In some applications, that would be disastrous.

This doesn't seem like a big problem to me. In most cases, you could
compile the entire pipeline in a fraction of a second. If you had to
compile something really big, I guess you could fork() the compiler at
a low priority level and use a general-purpose routine until the
compile finishes.

> 2) You are depending on picking a valid compiler correctly and
>    that there is a compiler on the system at all. There are
>    many (and growing) platforms where software-only Mesa might
>    make sense (eg http://www.agendacomputing.com - a Linux-based
>    PDA) where no compiler exists on the target machine.

You're right about this. I would never suggest that Mesa not function
without having a compiler available at run time. Lots of people run
Linux without installing a compiler.

> I believe that when SGI do this kind of thing in their Windoze
> software-OpenGL, they were generating x86 machine code directly
> into memory without using compilers or even assemblers. That's
> a viable technique...at least for the most popular CPU types.

That's the obvious way to approach the problem, but there are problems
there too. Say I have a routine that uses 1 more variable than there
are registers in the worst case, but usually fits in the registers.
Should the x86 generator have a register allocator? How about if some
versions of the generated code have common subexpressions? Should the
generator have a CSE pass? The generator will eventually start to look
like a bad compiler that can only generate x86 code.

Josh

From: Stephen J B. <sj...@li...> - 2001-03-08 19:40:39

On 8 Mar 2001, Josh Vanderhoof wrote:

> Keith Whitwell <ke...@va...> writes:
>
> > You may want to avoid sharing the nv-specific stuff, but any progress
> > on otf codegen has lots of application beyond that extension -- I can
> > think of a dozen uses for something like this.
>
> What would you guys think of putting stuff like this in Mesa?

<snip compile-on-the-fly stuff>

I think the problems are significant:

1) Whilst it'll (perhaps) improve frame rates once the code
   has been compiled, you could easily get a several-second
   pause when you first refer to something that triggers this
   action. In some applications, that would be disastrous.

2) You are depending on picking a valid compiler correctly and
   that there is a compiler on the system at all. There are
   many (and growing) platforms where software-only Mesa might
   make sense (eg http://www.agendacomputing.com - a Linux-based
   PDA) where no compiler exists on the target machine.

I believe that when SGI do this kind of thing in their Windoze
software-OpenGL, they were generating x86 machine code directly
into memory without using compilers or even assemblers. That's
a viable technique...at least for the most popular CPU types.

----
Steve Baker                            (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training       (817)619-2466 (Fax)
Work: sj...@li...                      http://www.link.com
Home: sjb...@ai...                     http://web2.airmail.net/sjbaker1

From: Josh V. <ho...@na...> - 2001-03-08 19:24:34

Keith Whitwell <ke...@va...> writes:

> You may want to avoid sharing the nv-specific stuff, but any progress
> on otf codegen has lots of application beyond that extension -- I can
> think of a dozen uses for something like this.

What would you guys think of putting stuff like this in Mesa?

#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

void *
compile_function(const char *name, const char *source, void **r_handle)
{
    char name_c[256], name_so[256], cmdline[768];
    FILE *fp;
    size_t len = strlen(source);
    pid_t pid = getpid();
    void *handle, *sym;

    sprintf(name_c, "/tmp/%ld_%.200s.c", (long) pid, name);
    sprintf(name_so, "/tmp/%ld_%.200s.so", (long) pid, name);

    fp = fopen(name_c, "w");
    if (fp == NULL)
        return NULL;
    if (fwrite(source, len, 1, fp) != 1) {
        fclose(fp);
        remove(name_c);
        return NULL;
    }
    fclose(fp);

    sprintf(cmdline, "gcc -O2 -fomit-frame-pointer -shared %s -o %s",
            name_c, name_so);
    if (system(cmdline) != 0) {
        remove(name_c);
        return NULL;
    }
    remove(name_c);

    handle = dlopen(name_so, RTLD_NOW);
    remove(name_so);
    if (handle == NULL)
        return NULL;

    sym = dlsym(handle, name);
    if (sym == NULL) {
        dlclose(handle);
        return NULL;
    }

    *r_handle = handle;
    return sym;
}

void
free_function(void *handle)
{
    dlclose(handle);
}

static const char *convert_template =
"void NAME(void *vsrc, void *vdst, int count)\n"
"{\n"
"    SRC_TYPE *src = vsrc;\n"
"    DST_TYPE *dst = vdst;\n"
"    int i;\n"
"    unsigned int s, d, r, g, b, a;\n"
"\n"
"    for (i = 0; i < count; i++) {\n"
"        s = src[i];\n"
"\n"
"        r = s >> SRC_RPOS;\n"
"        r &= (1 << SRC_RSZ) - 1;\n"
"#if SRC_RSZ < DST_RSZ\n"
"        r = r * ((1 << DST_RSZ) - 1) / ((1 << SRC_RSZ) - 1);\n"
"#else\n"
"        r >>= SRC_RSZ - DST_RSZ;\n"
"#endif\n"
"        r <<= DST_RPOS;\n"
"\n"
"        g = s >> SRC_GPOS;\n"
"        g &= (1 << SRC_GSZ) - 1;\n"
"#if SRC_GSZ < DST_GSZ\n"
"        g = g * ((1 << DST_GSZ) - 1) / ((1 << SRC_GSZ) - 1);\n"
"#else\n"
"        g >>= SRC_GSZ - DST_GSZ;\n"
"#endif\n"
"        g <<= DST_GPOS;\n"
"\n"
"        b = s >> SRC_BPOS;\n"
"        b &= (1 << SRC_BSZ) - 1;\n"
"#if SRC_BSZ < DST_BSZ\n"
"        b = b * ((1 << DST_BSZ) - 1) / ((1 << SRC_BSZ) - 1);\n"
"#else\n"
"        b >>= SRC_BSZ - DST_BSZ;\n"
"#endif\n"
"        b <<= DST_BPOS;\n"
"\n"
"        a = s >> SRC_APOS;\n"
"        a &= (1 << SRC_ASZ) - 1;\n"
"#if SRC_ASZ < DST_ASZ\n"
"        a = a * ((1 << DST_ASZ) - 1) / ((1 << SRC_ASZ) - 1);\n"
"#else\n"
"        a >>= SRC_ASZ - DST_ASZ;\n"
"#endif\n"
"        a <<= DST_APOS;\n"
"\n"
"        d = r | g | b | a;\n"
"        dst[i] = d;\n"
"    }\n"
"}\n";

int
convert_pixels(void *src, int src_pixel_size,
               int src_rpos, int src_gpos, int src_bpos, int src_apos,
               int src_rsz, int src_gsz, int src_bsz, int src_asz,
               void *dst, int dst_pixel_size,
               int dst_rpos, int dst_gpos, int dst_bpos, int dst_apos,
               int dst_rsz, int dst_gsz, int dst_bsz, int dst_asz,
               int count)
{
    char buf[10000], name[200];
    char *p = buf;
    void (*fn)(void *src, void *dst, int count);
    void *handle;

    switch (dst_pixel_size) {
    case 1: case 2: case 4:
        break;
    default:
        return 0;
    }

    sprintf(name, "convert_%d%d%d%d_%d%d%d%d",
            src_rsz, src_gsz, src_bsz, src_asz,
            dst_rsz, dst_gsz, dst_bsz, dst_asz);

    p += sprintf(p, "#define NAME %s\n", name);

    p += sprintf(p, "#define SRC_TYPE ");
    switch (src_pixel_size) {
    case 1: p += sprintf(p, "unsigned char\n"); break;
    case 2: p += sprintf(p, "unsigned short\n"); break;
    case 4: p += sprintf(p, "unsigned int\n"); break;
    default: return 0;
    }
    p += sprintf(p, "#define SRC_RPOS %d\n", src_rpos);
    p += sprintf(p, "#define SRC_GPOS %d\n", src_gpos);
    p += sprintf(p, "#define SRC_BPOS %d\n", src_bpos);
    p += sprintf(p, "#define SRC_APOS %d\n", src_apos);
    p += sprintf(p, "#define SRC_RSZ %d\n", src_rsz);
    p += sprintf(p, "#define SRC_GSZ %d\n", src_gsz);
    p += sprintf(p, "#define SRC_BSZ %d\n", src_bsz);
    p += sprintf(p, "#define SRC_ASZ %d\n", src_asz);

    p += sprintf(p, "#define DST_TYPE ");
    switch (dst_pixel_size) {
    case 1: p += sprintf(p, "unsigned char\n"); break;
    case 2: p += sprintf(p, "unsigned short\n"); break;
    case 4: p += sprintf(p, "unsigned int\n"); break;
    default: return 0;
    }
    p += sprintf(p, "#define DST_RPOS %d\n", dst_rpos);
    p += sprintf(p, "#define DST_GPOS %d\n", dst_gpos);
    p += sprintf(p, "#define DST_BPOS %d\n", dst_bpos);
    p += sprintf(p, "#define DST_APOS %d\n", dst_apos);
    p += sprintf(p, "#define DST_RSZ %d\n", dst_rsz);
    p += sprintf(p, "#define DST_GSZ %d\n", dst_gsz);
    p += sprintf(p, "#define DST_BSZ %d\n", dst_bsz);
    p += sprintf(p, "#define DST_ASZ %d\n", dst_asz);

    strcpy(p, convert_template);

    fn = compile_function(name, buf, &handle);
    if (fn != NULL)
        fn(src, dst, count);
    else
        return 0;
    free_function(handle);
    return 1;
}

int
main(void)
{
    unsigned int src[8] = {
        0xff555555, 0x55ff5555, 0x5555ff55, 0x555555ff,
        0x00555555, 0x55005555, 0x55550055, 0x55555500
    };
    unsigned short dst[8];
    int i;

    i = convert_pixels(src, 4,        /* pixel size */
                       16, 8, 0, 24,  /* offsets for r, g, b, a */
                       8, 8, 8, 8,    /* sizes for r, g, b, a */
                       dst, 2,
                       8, 4, 0, 12,
                       4, 4, 4, 4,
                       8);            /* count */
    if (i == 0)
        return 1;
    for (i = 0; i < 8; i++)
        printf("%08x %04x\n", src[i], dst[i]);
    return 0;
}

From: Keith W. <ke...@va...> - 2001-03-08 16:35:35

Brian Paul wrote:
>
> Jeff Epler wrote:
> >
> > On Tue, Mar 06, 2001 at 08:02:46PM -0500, Brian Paul wrote:
> > > GL_NV_vertex_program is a really nice, exciting extension. It would
> > > be great to have it in Mesa. The issue is getting permission from
> > > NVIDIA to implement it in open-source. "NVIDIA Proprietary" is pretty
> > > clearly plastered over the pdf file.
> >
> > You're right. I had read a version with the "proprietary" watermark,
> > but when I saw the version in this other PDF file the watermark was
> > gone and I thought it must have been released without restriction.
> > However, it's still marked as proprietary. What does this mean for
> > the status of the code I've written based on the document? Should I
> > avoid sharing it with anybody at this time?

You may want to avoid sharing the nv-specific stuff, but any progress on
otf codegen has lots of application beyond that extension -- I can think
of a dozen uses for something like this.

Keith

From: Brian P. <br...@va...> - 2001-03-08 15:38:21

Back in October, luc-eric rousseau wrote:
>
> Hello,
> for research, I would like to modify Mesa to support 16 bit per channel
> texture, as well as higher precision render buffers, either 16 bit or
> single-precision floating point.
>
> The goal here is to use Mesa to render (non-interactively) in software
> high precision 3D images made with 16 bit texture source, without loss
> of quality. The current precision of the vertex coordinates is fine,
> it's the precision of the shading and the alpha blending that matters.
> The rendered images are not meant for PC monitors (which are only 8 bit
> per channel max).
>
> My question is: how much trouble would it be? Of course, I've already
> looked at the source, but I would like to get an idea of how much of
> the code I would have to change, and if there are roadblocks I would
> hit. I should also point out that I'm lazy :^)

Got some good news for you. Mesa now has support for 16-bit color
channels. There's probably a number of bugs to fix yet but I ran some
tests last night and it seems to work. I haven't exercised texturing
yet so there could be problems there.

If you want to play with it, here's what you have to do:

1. Get the latest Mesa CVS trunk code.
2. Edit src/config.h and change CHAN_BITS to 16.
3. Edit src/Makefile.X11 and remove DRIVER_SOURCES from the OBJECTS
   assignment.
4. make -f Makefile.X11 linux-debug (this'll help if you find bugs)
5. Mesa/tests/osdemo16.c is a modified version of the osdemo.c program
   which renders a 16-bit/channel image and writes it as an
   8-bit/channel targa file (dropping the least significant bits).

Only the OSMesa interface supports 16-bit channels at this time.

Let me know how it works if you try it.

-Brian

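The "dropping the least significant bits" in step 5 is just a right
shift per channel; a one-line sketch (the helper name is hypothetical):

#include <GL/gl.h>

/* Collapse a 16-bit color channel to 8 bits for the targa output,
 * keeping the 8 most significant bits. */
static GLubyte
chan16_to_ubyte(GLushort c)
{
    return (GLubyte) (c >> 8);
}
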
From: Brian P. <br...@va...> - 2001-03-07 15:34:57

Andrew Richardson wrote:
>
> I can't remember if this was originally posted here or on DRI but...
>
> If you flush the VB down into the driver (and card) then I have a
> question. I want to show video using ogl as the renderer (cf glmovie
> from loki). So I want to paste multiple textures onto multiple quads.
> Say I have two textures and two quads, I want two texture units to do
> the texturing, one quad-texture each. The code is like
>
> glClientActiveTextureARB ( GL_TEXTURE0_ARB );

You probably want glActiveTextureARB() here. glClientActiveTextureARB()
is for selecting vertex array sets.

> glTexture2D(/*texture 0*/);
> glCallList(QUAD0);
>
> glClientActiveTextureARB ( GL_TEXTURE1_ARB );
> glTexture2D(/*texture 1*/);
> glCallList(QUAD1);
>
> The second glTexture2D call will flush the VB, but to where?

If there are any queued rendering commands, they'll get executed and
retired.

> Does the card process all of texture/quad 0's stuff? If so I don't see
> how I can use both texture units at the same time for texture
> procedures that don't share vertices. Of course I may not be
> implementing the dual tex-unit stuff correctly but even if I'm not,
> the question still stands: what happens with multi-texturing when the
> two (or more) units don't share vertices?

I think you're confused about how multitexture works. Texture units
don't know anything about vertices. Each vertex processed has N sets of
texture coordinates associated with it - one set per texture unit. The
application of texture environments is pipelined so all active units
contribute to processing each fragment (pixel). I don't understand what
you're trying to do.

> Also how does Mesa deal with references to GL_TEXTURE~N~_ARB where ~N~
> is larger than the actual number of tex-units,

Trying to address or enable ~N~ would cause an error and be a no-op.

> does it map them to the texture units that ARE present or is it in the
> ogl spec that it returns an error?

You should read the spec. It's an error.

-Brian

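To make Brian's point concrete, here is a sketch of multitexturing a
single quad with both units using the standard ARB_multitexture entry
points (assuming the GL headers declare them, as Mesa's do); the texture
object ids are hypothetical and created elsewhere:

#include <GL/gl.h>

static void
draw_multitextured_quad(GLuint tex0, GLuint tex1)
{
    /* Select server-side state per unit with glActiveTextureARB(),
     * not glClientActiveTextureARB(). */
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex0);

    glActiveTextureARB(GL_TEXTURE1_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex1);

    /* Every vertex carries one set of texcoords per unit; both units
     * then contribute to every fragment of this one quad. */
    glBegin(GL_QUADS);
    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f);
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 0.0f);
    glVertex2f(-1.0f, -1.0f);
    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 0.0f);
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 0.0f);
    glVertex2f(1.0f, -1.0f);
    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 1.0f);
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 1.0f);
    glVertex2f(1.0f, 1.0f);
    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 1.0f);
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 1.0f);
    glVertex2f(-1.0f, 1.0f);
    glEnd();
}
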
From: Andrew R. <and...@uc...> - 2001-03-07 12:51:36

I can't remember if this was originally posted here or on DRI but...

If you flush the VB down into the driver (and card) then I have a
question. I want to show video using ogl as the renderer (cf glmovie
from loki). So I want to paste multiple textures onto multiple quads.
Say I have two textures and two quads, I want two texture units to do
the texturing, one quad-texture each. The code is like

glClientActiveTextureARB ( GL_TEXTURE0_ARB );
glTexture2D(/*texture 0*/);
glCallList(QUAD0);

glClientActiveTextureARB ( GL_TEXTURE1_ARB );
glTexture2D(/*texture 1*/);
glCallList(QUAD1);

The second glTexture2D call will flush the VB, but to where? Does the
card process all of texture/quad 0's stuff? If so I don't see how I can
use both texture units at the same time for texture procedures that
don't share vertices. Of course I may not be implementing the dual
tex-unit stuff correctly but even if I'm not, the question still stands:
what happens with multi-texturing when the two (or more) units don't
share vertices?

Also how does Mesa deal with references to GL_TEXTURE~N~_ARB where ~N~
is larger than the actual number of tex-units? Does it map them to the
texture units that ARE present or is it in the ogl spec that it returns
an error?

Thanks

Andy

--
                       \\\|///
                     \\  - -  //
                      (  @ @  )
+------------------o00o----(_)----o00o---------------------------------+
| Andy Richardson                  Dept. of Chemistry                   |
| t(w): +44-20-7679 (4718)         University College London            |
| f(w): +44-20-7679 (4560)         Gordon Street                        |
| e: and...@uc...                  London WC1E 6BT UK                   |
+------------------------------0ooo-------------------------------------+
                     ooo0      (   )
                     (   )      ) /
                      \ (      (_/
                       \_)

From: Brian P. <br...@va...> - 2001-03-07 04:46:39

Jeff Epler wrote:
>
> On Tue, Mar 06, 2001 at 08:02:46PM -0500, Brian Paul wrote:
> > GL_NV_vertex_program is a really nice, exciting extension. It would
> > be great to have it in Mesa. The issue is getting permission from
> > NVIDIA to implement it in open-source. "NVIDIA Proprietary" is pretty
> > clearly plastered over the pdf file.
>
> You're right. I had read a version with the "proprietary" watermark,
> but when I saw the version in this other PDF file the watermark was
> gone and I thought it must have been released without restriction.
> However, it's still marked as proprietary. What does this mean for the
> status of the code I've written based on the document? Should I avoid
> sharing it with anybody at this time?

I can't really answer that, but I'd be careful.

-Brian

From: Brian P. <br...@va...> - 2001-03-07 04:44:01

Gareth Hughes wrote:
>
> Jeff Epler wrote:
> >
> > By the way, can someone tell me what the DST instruction is good for?
> > I don't recall seeing it used in any of their example vector shader
> > snippets.
>
> It computes the distance attenuation vector from two scalars. Result
> is (1, d, d^2, 1/d).

It could be used for computing the GL_CONSTANT/LINEAR/QUADRATIC_ATTENUATION
factors in lighting and the point size for GL_EXT_point_parameters,
among other things.

-Brian

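For reference, DST's per-component behaviour (as the NV_vertex_program
spec defines it) reduces to the following; the C names are illustrative
only. Feeding src0 = (*, d*d, d*d, *) and src1 = (*, 1/d, *, 1/d) yields
(1, d, d^2, 1/d):

static void
vp_dst(float dest[4], const float src0[4], const float src1[4])
{
    dest[0] = 1.0f;
    dest[1] = src0[1] * src1[1];   /* (d*d) * (1/d) = d */
    dest[2] = src0[2];             /* d*d */
    dest[3] = src1[3];             /* 1/d */
}

A DP3 of the result against (k0, k1, k2) followed by RCP then gives the
usual 1/(k0 + k1*d + k2*d^2) attenuation factor Brian mentions.
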
From: Gareth H. <ga...@va...> - 2001-03-07 03:55:47

Jeff Epler wrote:
>
> By the way, can someone tell me what the DST instruction is good for?
> I don't recall seeing it used in any of their example vector shader
> snippets.

It computes the distance attenuation vector from two scalars. Result is
(1, d, d^2, 1/d).

-- Gareth

From: Jeff E. <je...@us...> - 2001-03-07 03:44:41

On Tue, Mar 06, 2001 at 08:02:46PM -0500, Brian Paul wrote:
> GL_NV_vertex_program is a really nice, exciting extension. It would
> be great to have it in Mesa. The issue is getting permission from
> NVIDIA to implement it in open-source. "NVIDIA Proprietary" is pretty
> clearly plastered over the pdf file.

You're right. I had read a version with the "proprietary" watermark,
but when I saw the version in this other PDF file the watermark was gone
and I thought it must have been released without restriction. However,
it's still marked as proprietary. What does this mean for the status of
the code I've written based on the document? Should I avoid sharing it
with anybody at this time?

> An interpreter-based implementation would be fine for starters, and
> probably be easiest to debug, and most portable. Ideally, vertex
> programs would be compiled into 3dNow or SSE-optimized assembly code.

Yes, that's probably the way I should have approached the problem.

By the way, can someone tell me what the DST instruction is good for?
I don't recall seeing it used in any of their example vector shader
snippets.

Jeff

From: Gareth H. <ga...@va...> - 2001-03-07 03:19:40

Brian Paul wrote:
>
> > I'm not disagreeing re: the math, I'm concerned about shifting
> > GLubytes up by 8. Maybe it does automatic conversion to GLushort and
> > thus you won't overflow. On testing this, it appears that this is
> > the case. Oh well :-)
>
> gcc may be doing what I intend but it probably would be safer to put
> a cast in there for the sake of other compilers.

Yeah, it never hurts to be explicit about these things.

> Good catch.

Thanks :-)

-- Gareth

From: Brian P. <br...@va...> - 2001-03-07 03:12:43

Gareth Hughes wrote:
>
> Brian Paul wrote:
> >
> > OK, this sounds good. I see why you were asking.
>
> Cool :-)
>
> > I think it's correct. To convert a GLubyte color in [0,255] to a
> > 16-bit GLchan value in [0,65535] you'd do:
> >
> >     chan = b * 65535 / 255;
> >
> > which is equivalent to:
> >
> >     chan = (b << 8) | b;
> >
> > Here's the proof:
> >
> > #include <assert.h>
> > int main(int argc, char *argv[])
> > {
> >    int b;
> >    for (b = 0; b < 256; b++) {
> >       int c1 = b * 65535 / 255;
> >       int c2 = (b << 8) | b;
> >       assert(c1 == c2);
> >    }
> > }
> >
> > :)
>
> I'm not disagreeing re: the math, I'm concerned about shifting GLubytes
> up by 8. Maybe it does automatic conversion to GLushort and thus you
> won't overflow. On testing this, it appears that this is the case. Oh
> well :-)

gcc may be doing what I intend but it probably would be safer to put a
cast in there for the sake of other compilers. Good catch.

-Brian

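Note that in C the GLubyte operand is promoted to int before the shift
(the usual arithmetic conversions), which is why the test passes; a cast
just makes that explicit. A cast-qualified version along the lines Brian
suggests - a sketch, not necessarily the macro as committed:

#define UBYTE_TO_CHAN(b)  ((GLchan) ((((GLuint) (b)) << 8) | (GLuint) (b)))
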
From: Gareth H. <ga...@va...> - 2001-03-07 03:06:53

Brian Paul wrote:
>
> OK, this sounds good. I see why you were asking.

Cool :-)

> I think it's correct. To convert a GLubyte color in [0,255] to a
> 16-bit GLchan value in [0,65535] you'd do:
>
>     chan = b * 65535 / 255;
>
> which is equivalent to:
>
>     chan = (b << 8) | b;
>
> Here's the proof:
>
> #include <assert.h>
> int main(int argc, char *argv[])
> {
>    int b;
>    for (b = 0; b < 256; b++) {
>       int c1 = b * 65535 / 255;
>       int c2 = (b << 8) | b;
>       assert(c1 == c2);
>    }
> }
>
> :)

I'm not disagreeing re: the math, I'm concerned about shifting GLubytes
up by 8. Maybe it does automatic conversion to GLushort and thus you
won't overflow. On testing this, it appears that this is the case. Oh
well :-)

-- Gareth

From: Brian P. <br...@va...> - 2001-03-07 03:03:33

Jeff Epler wrote:
>
> After reading about NV_vertex_program, an OpenGL extension which is
> supported in hardware on the GeForce3 in Windows, I became interested
> in implementing a software version of the feature. NVidia has recently
> documented this feature in a comprehensive pdf document of OpenGL
> extensions.
>
> I have a (somewhat tested) parser for the language and a (nearly
> untested) 3DNow!/gas code generator for the vertex program language
> (one instruction unimplemented, two instructions partially
> implemented, the rest implemented but I've never actually executed the
> code, just eyeballed it). This code is written in Python, and is thus
> far from likely to be suitable for inclusion in its current form.
>
> I have implemented none of the added functions in Mesa, since I am
> unfamiliar with the rendering pipeline. Right now all that exists is a
> commandline program which takes the vertex program source and emits
> assembly code suitable for feeding to GNU as.
>
> If you're interested in having a gander at my code, just drop me a
> line. It's a ~20k tarball file which should run on any system with
> Python 1.5 or newer installed.
>
> I'd also like to hear what thoughts people have on the
> NV_vertex_program extension, on this list if it's considered topical,
> otherwise in private e-mail.

GL_NV_vertex_program is a really nice, exciting extension. It would be
great to have it in Mesa. The issue is getting permission from NVIDIA
to implement it in open-source. "NVIDIA Proprietary" is pretty clearly
plastered over the pdf file.

I'll be at the OpenGL ARB meeting next week. I'll talk to the NVIDIA
folks about it.

As much as I'm a fan of Python, we'd need a C implementation. As for
your prototype and interfacing with Mesa, it shouldn't be hard to
cleanly abstract the inputs and outputs as simple data structures which
could later be adapted for Mesa.

An interpreter-based implementation would be fine for starters, and
probably be easiest to debug, and most portable. Ideally, vertex
programs would be compiled into 3dNow or SSE-optimized assembly code.

-Brian

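A minimal sketch of the interpreter-based route Brian suggests - the
instruction encoding and register file here are hypothetical, not
Mesa's:

#include <stddef.h>

typedef enum { VP_MOV, VP_ADD, VP_MUL, VP_DP4 } vp_opcode;

typedef struct {
    vp_opcode op;
    int dst, src0, src1;   /* indices into the register file */
} vp_instruction;

/* Execute a parsed program against a flat array of 4-component float
 * registers (inputs, temporaries and outputs preloaded by the caller). */
static void
vp_execute(const vp_instruction *prog, size_t len, float regs[][4])
{
    size_t i;
    int c;
    for (i = 0; i < len; i++) {
        float *d = regs[prog[i].dst];
        const float *a = regs[prog[i].src0];
        const float *b = regs[prog[i].src1];
        switch (prog[i].op) {
        case VP_MOV:
            for (c = 0; c < 4; c++) d[c] = a[c];
            break;
        case VP_ADD:
            for (c = 0; c < 4; c++) d[c] = a[c] + b[c];
            break;
        case VP_MUL:
            for (c = 0; c < 4; c++) d[c] = a[c] * b[c];
            break;
        case VP_DP4:
            d[0] = d[1] = d[2] = d[3] =
                a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
            break;
        }
    }
}
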
From: Brian P. <br...@va...> - 2001-03-07 02:54:29

Gareth Hughes wrote:
>
> Brian Paul wrote:
> >
> > With 3.5, drivers can implement _any_ format or type of texture
> > images they want. They just have to provide the appropriate
> > FetchTexel() functions so the software fallbacks can operate.
> > FetchTexel() should return GLchan values.
> >
> > But yes, the s/w Mesa teximage routines should be written with the
> > GLchan type so that 16-bit and float color components will be
> > possible.
>
> texImage->TexFormat can't be NULL, so there has to be texture formats
> for the s/w texture images. I'm trying to make this as clean as
> possible, as the old "Mesa formats" were based on GLubyte per channel
> only.
>
> > Originally, the "Mesa formats" and texutil.[ch] were just helper
> > routines for drivers; core Mesa knew nothing about them. It sounds
> > like you're bringing that into core Mesa. I'm not sure of all the
> > ramifications of that.
>
> Only the texture format stuff. The texutil code is used by drivers to
> convert into hardware-friendly texture formats, but the texstore
> utilities need to have corresponding gl_texture_format structures for
> tightly-packed GLchan images.
>
> Now, the gl_texture_image structure has a pointer to a
> gl_texture_format structure that defines the internal format of the
> image. The gl_texture_format has the component sizes, bytes per texel,
> and custom FetchTexel routines that are plugged into the main
> gl_texture_image structure.

OK, this sounds good. I see why you were asking.

> > It was compiling about a month ago. I started testing OSMesa with
> > 16-bit color channels but didn't do any verification. It's
> > definitely not ready for prime time yet.
>
> Okay, I was just wondering if it was possible to test it out. Some of
> the colormac.h macros look a little wrong, particularly this one at 16
> bits:
>
> #define UBYTE_TO_CHAN(b) ((GLchan) (((b) << 8) | (b)))

I think it's correct. To convert a GLubyte color in [0,255] to a 16-bit
GLchan value in [0,65535] you'd do:

    chan = b * 65535 / 255;

which is equivalent to:

    chan = (b << 8) | b;

Here's the proof:

#include <assert.h>
int main(int argc, char *argv[])
{
   int b;
   for (b = 0; b < 256; b++) {
      int c1 = b * 65535 / 255;
      int c2 = (b << 8) | b;
      assert(c1 == c2);
   }
}

:)

-Brian

From: Gareth H. <ga...@va...> - 2001-03-07 02:16:15

Brian Paul wrote:
>
> With 3.5, drivers can implement _any_ format or type of texture images
> they want. They just have to provide the appropriate FetchTexel()
> functions so the software fallbacks can operate. FetchTexel() should
> return GLchan values.
>
> But yes, the s/w Mesa teximage routines should be written with the
> GLchan type so that 16-bit and float color components will be possible.

texImage->TexFormat can't be NULL, so there has to be texture formats
for the s/w texture images. I'm trying to make this as clean as
possible, as the old "Mesa formats" were based on GLubyte per channel
only.

> Originally, the "Mesa formats" and texutil.[ch] were just helper
> routines for drivers; core Mesa knew nothing about them. It sounds
> like you're bringing that into core Mesa. I'm not sure of all the
> ramifications of that.

Only the texture format stuff. The texutil code is used by drivers to
convert into hardware-friendly texture formats, but the texstore
utilities need to have corresponding gl_texture_format structures for
tightly-packed GLchan images.

Now, the gl_texture_image structure has a pointer to a gl_texture_format
structure that defines the internal format of the image. The
gl_texture_format has the component sizes, bytes per texel, and custom
FetchTexel routines that are plugged into the main gl_texture_image
structure.

> It was compiling about a month ago. I started testing OSMesa with
> 16-bit color channels but didn't do any verification. It's definitely
> not ready for prime time yet.

Okay, I was just wondering if it was possible to test it out. Some of
the colormac.h macros look a little wrong, particularly this one at 16
bits:

#define UBYTE_TO_CHAN(b) ((GLchan) (((b) << 8) | (b)))

-- Gareth

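The gl_texture_format arrangement Gareth describes might look roughly
like this; the field and function names are illustrative, not the actual
Mesa declarations, and the GLchan typedef stands in for a CHAN_BITS==16
build:

#include <GL/gl.h>

typedef GLushort GLchan;     /* 16-bit channel build, for illustration */

struct gl_texture_image;     /* the image this format describes */

struct gl_texture_format {
    GLubyte RedBits, GreenBits, BlueBits, AlphaBits;  /* component sizes */
    GLuint  TexelBytes;                               /* bytes per texel */
    /* Custom fetch routine, plugged into gl_texture_image so the
     * software fallbacks can read texels back as GLchan values. */
    void (*FetchTexel2D)(const struct gl_texture_image *texImage,
                         GLint i, GLint j, GLchan *texel);
};
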
From: Jeff E. <je...@us...> - 2001-03-07 00:30:46

After reading about NV_vertex_program, an OpenGL extension which is
supported in hardware on the GeForce3 in Windows, I became interested in
implementing a software version of the feature. NVidia has recently
documented this feature in a comprehensive pdf document of OpenGL
extensions.

I have a (somewhat tested) parser for the language and a (nearly
untested) 3DNow!/gas code generator for the vertex program language (one
instruction unimplemented, two instructions partially implemented, the
rest implemented but I've never actually executed the code, just
eyeballed it). This code is written in Python, and is thus far from
likely to be suitable for inclusion in its current form.

I have implemented none of the added functions in Mesa, since I am
unfamiliar with the rendering pipeline. Right now all that exists is a
commandline program which takes the vertex program source and emits
assembly code suitable for feeding to GNU as.

If you're interested in having a gander at my code, just drop me a line.
It's a ~20k tarball file which should run on any system with Python 1.5
or newer installed.

I'd also like to hear what thoughts people have on the NV_vertex_program
extension, on this list if it's considered topical, otherwise in private
e-mail.

Have a nice day,
Jeff

From: Brian P. <br...@va...> - 2001-03-06 16:46:26

Gareth Hughes wrote:
>
> Is the plan to support GLushort or GLfloat per component texture
> images, as well as the regular GLubyte per component?

With 3.5, drivers can implement _any_ format or type of texture images
they want. They just have to provide the appropriate FetchTexel()
functions so the software fallbacks can operate. FetchTexel() should
return GLchan values.

But yes, the s/w Mesa teximage routines should be written with the
GLchan type so that 16-bit and float color components will be possible.

> If so, I'll change what was the old-style Mesa formats to be based on
> GLchan, which would give us things like _mesa_format_rgba_chan or
> _mesa_format_default_rgba instead of _mesa_format_rgba8888 and the
> like.

Originally, the "Mesa formats" and texutil.[ch] were just helper
routines for drivers; core Mesa knew nothing about them. It sounds like
you're bringing that into core Mesa. I'm not sure of all the
ramifications of that.

> And, I'm assuming that GLchan != GLubyte doesn't work at the moment,
> correct?

It was compiling about a month ago. I started testing OSMesa with
16-bit color channels but didn't do any verification. It's definitely
not ready for prime time yet.

-Brian

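A sketch of a driver-supplied FetchTexel() routine for a tightly packed
RGBA8888 image, returning GLchan values as Brian describes. The struct
fields, the GLchan typedef, the UBYTE_TO_CHAN definition, and the exact
Mesa 3.5 signature are all assumptions here:

#include <GL/gl.h>

typedef GLushort GLchan;                         /* CHAN_BITS==16 build */
#define UBYTE_TO_CHAN(b) ((GLchan) (((b) << 8) | (b)))

struct gl_texture_image_sketch {
    GLvoid *Data;    /* packed texels */
    GLuint  Width;
};

static void
fetch_texel_rgba8888(const struct gl_texture_image_sketch *texImage,
                     GLint i, GLint j, GLchan *texel)
{
    const GLuint *data = (const GLuint *) texImage->Data;
    GLuint t = data[j * texImage->Width + i];    /* R,G,B,A from MSB down */
    texel[0] = UBYTE_TO_CHAN((t >> 24) & 0xff);  /* R */
    texel[1] = UBYTE_TO_CHAN((t >> 16) & 0xff);  /* G */
    texel[2] = UBYTE_TO_CHAN((t >>  8) & 0xff);  /* B */
    texel[3] = UBYTE_TO_CHAN( t        & 0xff);  /* A */
}
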
From: Gareth H. <ga...@va...> - 2001-03-06 16:18:28

Is the plan to support GLushort or GLfloat per component texture images,
as well as the regular GLubyte per component? If so, I'll change what
was the old-style Mesa formats to be based on GLchan, which would give
us things like _mesa_format_rgba_chan or _mesa_format_default_rgba
instead of _mesa_format_rgba8888 and the like.

And, I'm assuming that GLchan != GLubyte doesn't work at the moment,
correct?

-- Gareth

From: Keith W. <ke...@va...> - 2001-03-06 00:20:17

> > I've found a way for this to happen. Can you confirm that this
> > occurs via 'i810_emit_contiguous_verts()' ?
>
> Yes:
>
> #0 0x403e8e03 in emit_wgt0t1 (ctx=0x81b58b0, start=0, end=1,
>    dest=0x40017004, stride=40) at ../../tnl_dd/t_dd_vbtmp.h:165
> #1 0x403f1e70 in i810_emit_contiguous_verts (ctx=0x81b58b0, start=0,
>    count=1) at i810vb.c:446

OK. I've got a fix. Check_tex_sizes() wasn't getting called except from
the 'old' rastersetup paths. I'm doing it now in RenderStart().

I'll check in fixes to the dri drivers & revert this change.

Keith

From: Brian P. <br...@va...> - 2001-03-06 00:06:20

Keith Whitwell wrote:
>
> Brian Paul wrote:
> >
> > Keith Whitwell wrote:
> > >
> > > Brian Paul wrote:
> > > >
> > > > CVSROOT: /cvsroot/mesa3d
> > > > Module name: Mesa
> > > > Repository: Mesa/src/tnl_dd/
> > > > Changes by: brianp@usw-pr-cvs1. 01/03/05 14:40:10
> > > >
> > > > Modified files:
> > > >     Mesa/src/tnl_dd/: t_dd_vbtmp.h
> > > >
> > > > Revision   Changes     Path
> > > > 1.6        +160 -116   Mesa/src/tnl_dd/t_dd_vbtmp.h
> > > >
> > > > Log message:
> > > >     fixed segfaults when tex unit 1 enabled, but not unit 0 (conform)
> > >
> > > Brian,
> > >
> > > This should already be handled by the code in t_dd_vbtmp.h which
> > > checks if texcoordptr[0] is null, and if so assigns it to
> > > texcoordptr[1] (the maximally numbered texcoordptr must be nonnull).
> > >
> > > The question is why this mechanism didn't work in your case - what
> > > hardware demonstrated the problem?
> >
> > i810 running 'conform -1 multitex.c'. VB->TexCoordPtr[0] was null and
> > causing the segfault, around line 188 in t_dd_vbtmp.h.
>
> I've found a way for this to happen. Can you confirm that this occurs
> via 'i810_emit_contiguous_verts()' ?

Yes:

#0  0x403e8e03 in emit_wgt0t1 (ctx=0x81b58b0, start=0, end=1,
    dest=0x40017004, stride=40) at ../../tnl_dd/t_dd_vbtmp.h:165
#1  0x403f1e70 in i810_emit_contiguous_verts (ctx=0x81b58b0, start=0,
    count=1) at i810vb.c:446
#2  0x403d538e in i810_render_poly_verts (ctx=0x81b58b0, start=0,
    count=4, flags=777) at ../../tnl_dd/t_dd_dmatmp.h:378
#3  0x403d581b in i810_run_render (ctx=0x81b58b0, stage=0x8258fd4)
    at i810render.c:170
#4  0x4037ef6c in _tnl_run_pipeline (ctx=0x81b58b0) at t_pipeline.c:158
#5  0x4037d57b in _tnl_run_cassette (ctx=0x81b58b0, IM=0x8264020)
    at t_imm_exec.c:329
#6  0x4037d6e5 in exec_vert_cassette (ctx=0x81b58b0, IM=0x8264020)
    at t_imm_exec.c:391
#7  0x4037d7ba in _tnl_execute_cassette (ctx=0x81b58b0, IM=0x8264020)
    at t_imm_exec.c:429
#8  0x40373fbe in _tnl_flush_immediate (IM=0x8264020) at t_imm_api.c:57
#9  0x40373ffe in _tnl_flush_vertices (ctx=0x81b58b0, flags=1)
    at t_imm_api.c:67
#10 0x40307e04 in _mesa_ReadPixels (x=0, y=0, width=64, height=64,
    format=6407, type=5126, pixels=0x82c9700) at readpix.c:48

-Brian