From: Sottek, M. J <mat...@in...> - 2001-12-17 18:15:21
>>Sorry for the delay. Finally I found time to read the document
>>and write my comments...
>> struct fb_display_info {
>> u32 id;
>> u32 xres;
>> u32 yres;
>Are these visible xres and yres?
"Display" is ONLY visible. The biggest design feature was the separation
of "Display Timings" from "Mode". Display is what you are outputting
to a display device. Mode is what you are rendering. So yes this is
visible only.
>Are these the sync margins? What about hsync_len and vsync_len?
No. The user should forget they ever knew anything about syncs and
such. They are for the driver to decide. These margins _may_ be
translated into sync fudging to move the picture on a CRT, but that
would be bad. Most hardware vendors have a fixed set of timings and
don't shift them around. Monitor vendors don't like you messing with
the syncs either. Calculate them according to the VESA guidelines
and leave it at that.
What this may be good for is the new DDC that allows altering of the
monitor positioning. Or on a "virtual" display of some type it could
move the data within a larger image, or if the image is "centered"
on a digital display this could alter the position.
The point is that the user doesn't care. If the driver can let you move
the picture around on the screen it will do so. Otherwise nothing
happens.
> struct fb_mode_info {
> u32 width;
> u32 height;
>Are these virtual width and height?
Yes, but don't think of it that way. It is the resolution that you are
rendering. So if your hardware is creating 1024x768 images but your
display is only showing 800x600, then yes, this is the same as "virtual"
resolution.
"Virtual" resolution is really used when it shouldn't be. It should
mean a resolution bigger than what is being rendered. Like multiple
desktops in X. Data that doesn't really exist in a physical manner
all the time == virtual. There is nothing virtual about a plain 10x7
framebuffer... even when your display is only showing 8x6. But I
digress, everyone uses virtual to mean "Bigger than your display" so
there is no changing it now :)
>There exist many different variants of YUV: 4:4:4, 4:2:2, 4:2:0,
>Depending on the hardware, YUV modes can be interlaced and/or
>interleaved. E.g. some Set-Top Box (STB) hardware splits the YUV
>data in four parts: Y even, Y odd, UV even, and UV odd.
>What about RGBA?
>Are these all packed? What about planar modes and interleaved
>planes?
>So Amiga HAM6 and HAM8 can be considered NONSTD.
>What about weird endianness? E.g. RGB565 can look very weird
>(GGGBBBBBRRRRRGGG) if bytes have to be swapped, but not 16-bit words.
>On some architectures (e.g. ARM), video memory accesses must always
>be x-bit wide.
Ok all these fall into the same category. I made no attempt in the
design doc to cover all possible pixel formats. The concept was
that instead of trying to describe the pixels with offsets, depths,
etc. just use a 32 bit unique id. 2^32 is plenty of pixel formats to
cover whatever odd stuff we come across.
Now, secondly: I realize that just numbering the pixel formats 0, 1, 2...
as we come across them isn't going to work. User apps would end up with
tons of code like this:
switch (pixel_format) {
case 0x12345678:
case 0x32123456:
case 0x23145634:
        do_something();
        break;
case 0x43245689:
        something();
        break;
}
So I provided some ideas about how to "group" the id's into bitfields
that would help out. So you can do something like this in user apps
if you wanted to support all RGB32 variants.
if ((pixel_format & (RGB | 32BIT)) == (RGB | 32BIT)) {
        format_info = rgb32_format_table[(pixel_format & ID_MASK) >> 8];
}
red = (red << format_info->red_shift) & format_info->red_mask;
or some such thing
The benefits are that, given a good set of groupings, you are never
going to be unable to represent a new format. When you use depths,
offsets, masks, etc., you are always going to be unable to represent some
types, and you've wasted a bunch of driver space and time in the process.
Drivers can use ids very fast. They already know exactly what they
are going to support, and it is a small number. The driver shouldn't
have to decode pixel formats from descriptions just to figure out
how to render a character. It should just do a simple switch on
the 4 or 5 formats it supports.
>Don't you need bit field positions? Other fields than R, G, B, and A.
Not in the driver I don't. Each one is unique. If you have 10 different
RGBA32's then they all look the same except for the id bits. Look
the positions up in a table (in user-space).
>No XOR index (index of the color entry that inverts the
>underlying pixel)? Flashing cursors?
Yeah, there are probably lots of odd cursors out there to account for.
>Perhaps hardware animation (some STBs have that)?
That can work. Just use a pixel format that has a bunch of images in
whatever format such a beast would need. This is probably done with
a bunch of similar cursor images stacked up in adjacent memory? Works
just fine. Probably want to keep that "clever" pixel format as a
device private one.
>So different layers/planes are different surfaces. STBs can do
>different layers (a few YUV, a few RGB/CLUT for OSD, cursor),
>with alpha blending between the layers, changeable order, ...
Surfaces are a representation of memory regions that are useful to
user apps. Most likely you would have one for each "plane" but
they are just a way to access and get information about the memory.
You need to use another interface to "control" the surfaces.
i.e. The framebuffer is a surface. You use set_mode(), get_mode(),
set_display(), etc. to "control" the framebuffer. The surface just
tells you where the memory is, whether it is dirty, etc.
For overlays we need a set of interfaces to "control" the features.
I haven't made an attempt on those yet. We have bigger problems
to solve :)
Note that you can do whatever you want with a surface. If it is a
driver private surface type you can have driver private ways of
controlling it. Overlays and such may be so complicated that we just
decide to export driver private interfaces and wrap them in user-space
with a library. No need to make the driver work harder than it must.
>This is for packed pixels only. For (interleaved) bitplanes you
>need an offset to the next line and next plane.
Not really. You have to take into account the pixel format. If the
format is such that you have interleaved data then you should know
"how" the data is interleaved. If it is planar data then you should
know where the next plane begins. For instance, a planar YUV 4:2:0
format would be defined to have the U plane at offset width*height
into the surface, with the U plane's width being width/2 (and its
height being height/2). All surfaces with the same pixel format have
the same layout.
Since this is of course a device file we are talking about, you
don't have to actually have the memory layout as described. The U
and V data could be somewhere else, but you just use the "defined"
area of the device file as the access point.
>Is the lseek() really needed? This means we need 2 syscalls
>(context switches) instead of 1.
This is _very_ open for comment. I just wanted to provide an example
to show people that using a command file doesn't mean writing ASCII
into a file to be parsed by the kernel. ASCII is only good for human
interfaces. The interface between an application and the kernel has
no use for ASCII.
I had assumed (in hindsight this assumption was complete garbage on
my part) that reading/writing a device file worked over NFS as
long as you didn't do any "device" type things (mmap, ioctl). This
isn't the case. Clearly it IS possible to make some type of filesystem
that has network capability of this fashion; perhaps devfs can
do this??
So I'm sure the way I described isn't the best. We just need to come
up with one that is easy on both ends. Removing the seek is an easy
first step.
>> KERNEL INTERFACE:
>No overlay console? ;-)
Actually I was thinking of attaching a kernel interface to a
surface so you could do something like that. I'm still thinking
about it.
>> fb_put(u8 *data, u16 srcx, u16 srcy, u16 width, u16 height, u16
^^^^
> void *?
Sure, that is cleaner.
>BTW, on many STBs, the whole screen is not one pixmap, but you can
>have multiple non-overlapping regions (windows), using different
>pixel formats.
Well, that requires some driver-specific applications anyway, right?
Just define the framebuffer as a driver-private pixel format and
make each "region" have its own surface. Write to the surfaces and
never to the whole framebuffer surface.
-> copyrect
-> fillrect
-> imageblit
If you want to keep all the character drawing at the DI layer, then
all the console functions can go away. Just use put(), set(), get(),
copy().
I haven't looked at imageblit yet. If it requires the driver to support
pixel formats that are not native, then it is totally unacceptable.
The kernel guys have said forever that sound format conversion doesn't
belong in the kernel. Image format conversion isn't any different.
If the hardware supports the conversion, then fine. Otherwise punt.
I was thinking about Petr's comments on needing to be able to access
the hardware conversion capabilities. My put() and get() don't have
a format type... for the reasons above. I was thinking that I could
add a list of alternative formats to each surface. Then you could either
write to the surface in its native format or in an alternative format
that the hardware can convert.
> fb_base: This is a kernel-virtual address that points to the
^^^^^^^^^^^^^^^^^^^^^^^^???
> beginning of the framebuffer. This address can be written to in
> a manner such as this "*fb_base = 0;". Typically this is an
> ioremapped version of the physical base address of the
> framebuffer.
What is the problem here? Are you wondering about the terminology?
Virtual address == paged. Kernel address == one for use in the kernel.
A kernel virtual address is a paged address for use in the kernel,
as opposed to a physical or bus address, which you need to remap, or
a user address, which is virtual but lives in the private address
space of the process.
-Matt