I have a pretty nifty idea, and I think much of the code for it is already
done. The idea is this: use the framebuffer console (CONFIG_FB) in
combination with a modified version of the virtual framebuffer device
(CONFIG_FB_VIRTUAL) and VNC http://www.uk.research.att.com/vnc/. All that
is required is to get the virtual framebuffer device working with UML (which
I have compiled, but not tried), and write a simple VNC server for the
virtual framebuffer device.
Why, you ask? This would allow remote administration of your Linux system
the instant the networking card comes alive. This system could be a UML
(without those pesky X Windows), or it could be a server running linuxbios
half way 'round the world. The benefit in UML is that you would have a
one-window interface into your UML box, with an ALT-F1, ALT-F2 style
interface to switch between your virtual terminals. (Note: a virtual
terminal is different from the virtual FB.) The benefit in non-UML uses is
that you would never have to be at the physical console for anything.
Ideally, I'd like to integrate this kernel VNC driver into the
http://www.linuxbios.org project, so I can have a server that truly doesn't ever
need any interface except an Ethernet port.
What would this require?
It seems to me that most of the work is done already. Network drivers are
in the kernel. If I understand the virtual FB device properly
(drivers/video/vfb.c), it provides a nice bitmap image of what the console
looks like, without the need for any actual VGA device. All this system
needs is a TCP/IP stack and a VNC server to send the framebuffer contents
to a remote computer. I don't think the VNC stuff is difficult. I don't
know anything about doing TCP from the kernel.
Does this make sense? If so, is anyone willing to give me pointers on how
to do some of the tricky bits? I've never done any real kernel programming,
so pointers would be greatly appreciated. (Heck, for that matter, a
finished patch to the kernel would be even more greatly appreciated ;-)