geekos-devel Mailing List for GeekOS (Page 8)
Status: Pre-Alpha
Brought to you by:
daveho
From: Michael Lucas-S. <u32...@st...> - 2002-03-20 12:04:39
Ahh. It's true. You get that .bochsrc file set up right and suddenly it all works.

Or not, in this case. Unfortunately GeekOS locked up instantly. Odd. I'll try the 'experimental' version next. I'd rather pick up playing with it from the non-C++ version though.

Michael

Michael Lucas-Smith wrote:
> Hi,
>
> I've just set up GeekOS on my linux box. I had to remove the [ORG] tags
> from the .asm files to get nasm to accept them.
>
> I've installed bochs and everything has compiled.
>
> Question: How do I run it? :) - There's no documentation in the .tar.gz
> or on the webpage about how to run it.
>
> Michael
>
> _______________________________________________
> Geekos-devel mailing list
> Gee...@li...
> https://lists.sourceforge.net/lists/listinfo/geekos-devel
From: Michael Lucas-S. <u32...@st...> - 2002-03-20 11:12:33
Hi,

I've just set up GeekOS on my linux box. I had to remove the [ORG] tags from the .asm files to get nasm to accept them.

I've installed bochs and everything has compiled.

Question: How do I run it? :) - There's no documentation in the .tar.gz or on the webpage about how to run it.

Michael
From: David H. <da...@cs...> - 2002-03-11 17:53:30
On Mon, Mar 11, 2002 at 11:24:30AM -0600, Parc wrote:
> Hidden temporaries are only a problem when you don't keep them in mind.
> A well constructed default ctor will keep overhead to a minimum. Once
> you're in C++, you should be passing by reference for the most part anyway.

I agree to some extent. For the most part, however, I don't think objects with by-value semantics will be necessary in the kernel. Currently, I think there is only one place where a struct is allocated on the stack. In general, all interesting objects will be allocated in the kernel heap.

> What are you using to allocate/free memory then? Ignoring the internal
> free pool, if you don't use new/delete, you're going to screw up
> inheritance down the line.

- kmalloc() and kfree() for the memory allocation
- explicit call to the constructor via the placement new operator
- explicit call to the destructor

There are two reasons I don't want dynamic new or delete. First, at some point I may want an additional parameter to kmalloc() specifying whether the caller wants to wait for memory if there is a shortage. This is hard to do with "new". Second, and more importantly, I want to be able to return a null pointer to indicate any problem with the creation of the object. C++ doesn't allow new to return null. You're supposed to use exceptions to signal errors in a constructor, and I definitely don't want to use exceptions (mostly because I don't understand the runtime requirements for C++ exceptions as implemented in GNU C++).

So, I make the constructor and destructor private, and use static member functions to do object creation. The static function does the resource creation. If successful, it invokes the constructor, which cannot fail.

> Multiple inheritance would cause more overhead, just to clarify my point.
> Further, implementation of inheritance is the responsibility of the
> compiler and is not specified by the standard. Most compilers use a
> vtable, but not all (not that it matters since you can only compile with
> gcc). It's entirely possible (albeit stupid) to not have O(c) lookup time
> for a function.

Multiple inheritance is evil! My brain gets tied in knots just trying to think about how it works. I do not expect to use multiple inheritance, ever.

-Dave
From: Parc <ne...@ri...> - 2002-03-11 17:28:14
On Mon, Mar 11, 2002 at 12:09:30PM -0500, David Hovemeyer wrote:
> On Mon, Mar 11, 2002 at 10:35:01AM -0600, Parc wrote:
> > On Mon, Mar 11, 2002 at 11:15:14AM -0500, David Hovemeyer wrote:
> > [snip]
> > > is a clear benefit. I want to avoid over-use of C++ features,
> > > especially constructors and destructors (which can cause major
> > > performance overheads if used carelessly).
> >
> > What? Constructors and destructors don't cause performance overhead.
> > You can't avoid calling ctors/dtors.
>
> Right, I wasn't very specific. What I meant was "constructors
> and destructors implicitly called for objects used by value",
> especially for hidden temporaries. I have seen this become
> a huge source of overhead in C++ programs.

Hidden temporaries are only a problem when you don't keep them in mind. A well constructed default ctor will keep overhead to a minimum. Once you're in C++, you should be passing by reference for the most part anyway.

> I've consciously avoided using the dynamic "new" and "delete" operators,
> so we can completely avoid automatic calls to constructors and
> destructors made by the compiler. (These calls can be made explicitly
> as needed.)

What are you using to allocate/free memory then? Ignoring the internal free pool, if you don't use new/delete, you're going to screw up inheritance down the line.

> > Perhaps what you intended to say was virtual functions? Those would
> > cause some significant overhead.
>
> Yes, there is some overhead associated with virtual function calls.
> However, they are exactly equivalent to calling a function through
> a table of function pointers, and every OS kernel I have seen
> uses this technique.
>
> -Dave

Multiple inheritance would cause more overhead, just to clarify my point. Further, implementation of inheritance is the responsibility of the compiler and is not specified by the standard. Most compilers use a vtable, but not all (not that it matters since you can only compile with gcc). It's entirely possible (albeit stupid) to not have O(c) lookup time for a function.

-parc
From: David H. <da...@cs...> - 2002-03-11 17:16:20
On Mon, Mar 11, 2002 at 12:09:30PM -0500, David Hovemeyer wrote:
> I've consciously avoided using the dynamic "new" and "delete" operators,
> so we can completely avoid automatic calls to constructors and
> destructors made by the compiler. (These calls can be made explicitly
> as needed.)

Just to clarify, the above statement applies to dynamically allocated objects only. Ctor/dtor calls are of course mandatory for objects used by value. However, I would like to avoid using objects by value as much as possible.

Also, for plain structs having no members with constructors/destructors, I'm pretty sure it's legal for the compiler to make the implicit constructor and destructor no-ops which can be optimized away. This falls under the category of "no extra overhead for C programs", which is one of the design goals of C++.

-Dave
From: David H. <da...@cs...> - 2002-03-11 17:09:49
On Mon, Mar 11, 2002 at 10:35:01AM -0600, Parc wrote:
> On Mon, Mar 11, 2002 at 11:15:14AM -0500, David Hovemeyer wrote:
> [snip]
> > is a clear benefit. I want to avoid over-use of C++ features,
> > especially constructors and destructors (which can cause major
> > performance overheads if used carelessly).
>
> What? Constructors and destructors don't cause performance overhead.
> You can't avoid calling ctors/dtors.

Right, I wasn't very specific. What I meant was "constructors and destructors implicitly called for objects used by value", especially for hidden temporaries. I have seen this become a huge source of overhead in C++ programs.

I've consciously avoided using the dynamic "new" and "delete" operators, so we can completely avoid automatic calls to constructors and destructors made by the compiler. (These calls can be made explicitly as needed.)

> Perhaps what you intended to say was virtual functions? Those would
> cause some significant overhead.

Yes, there is some overhead associated with virtual function calls. However, they are exactly equivalent to calling a function through a table of function pointers, and every OS kernel I have seen uses this technique.

-Dave
From: Parc <ne...@ri...> - 2002-03-11 16:39:23
On Mon, Mar 11, 2002 at 11:15:14AM -0500, David Hovemeyer wrote:
[snip]
> is a clear benefit. I want to avoid over-use of C++ features,
> especially constructors and destructors (which can cause major
> performance overheads if used carelessly).

What? Constructors and destructors don't cause performance overhead. You can't avoid calling ctors/dtors.

Perhaps what you intended to say was virtual functions? Those would cause some significant overhead.

-parc

> -Dave
From: David H. <da...@cs...> - 2002-03-11 16:15:37
I finished rewriting the experimental version of GeekOS in C++, and put up a file release. You can download it from

  http://prdownloads.sourceforge.net/geekos/geekos-0.1.0.tar.gz

You can also keep current with this branch using anonymous cvs. Use the commands:

  cvs -d:pserver:ano...@cv...:/cvsroot/geekos login
  cvs -z3 -d:pserver:ano...@cv...:/cvsroot/geekos \
      co oo-geekos

I am quite pleased with the improvement that has resulted from using classes and inheritance. There is a base "kernel_object" class which implements reference counting and automatic cleanup when the refcount becomes zero. A smart pointer class called "kobj_ptr" is used to automatically remove a reference from a kernel_object when it goes out of scope. This is a big win, because it makes the compiler responsible for cleaning up state on the error paths of resource creation and initialization code, which is one of the most tedious and error-prone aspects of C programming. Another obvious benefit to using C++ is that we can express object interfaces using a more natural syntax.

Most of the kernel is still written using functions and structs. Some of this code may be converted to objects later on, if there is a clear benefit. I want to avoid over-use of C++ features, especially constructors and destructors (which can cause major performance overheads if used carelessly).

-Dave
From: David H. <da...@cs...> - 2002-03-06 21:55:23
I'm planning on re-writing the kernel in C++, since a number of object interfaces are already in place, with more on the way. My theory is that C++, if applied very carefully, will result in cleaner and more maintainable interfaces. I don't plan to use either templates or exceptions, which should avoid much of the bloat traditionally associated with C++. As a proof of concept, I have a local source tree which compiles as C++ code. No significant changes were required.

I'm also working on a cache system for files. That will pave the way for filesystem implementations (a filesystem's job being basically to schedule disk I/O for file buffers).

-Dave
From: David H. <da...@cs...> - 2002-03-03 21:21:47
I committed some code to define an interface for block devices:

  include/sys/geekos/blockdev.h
  kern/blockdev.c

Basically, each request to read or write a block is represented by a block_device_request object, which specifies the block number, memory buffer, etc. A condition variable (protected by the mutex in the block device object) is used to signal the completion of the request.

I also wrote a ramdisk driver as an example of how the interface works:

  include/sys/geekos/ramdisk.h
  drivers/ramdisk.c

It's not a very good model for real block devices, since it's completely synchronous. A real block device driver would be asynchronous, since things like disks are slow, and the requests take a lot of time to complete. I haven't actually tested any of the code, so there may be bugs.

The next thing I plan to work on is a file cache for filesystems to use. It would handle the tasks of calling the underlying block device driver to schedule I/O and caching the buffers in memory. The pages of a file cache would be used by the read() and write() system calls, and also by the VM system (for memory-mapped file I/O).

-Dave
From: David H. <da...@cs...> - 2002-02-22 14:59:10
On Thu, Feb 21, 2002 at 02:54:05AM +0000, Vishal Soni wrote:
> I am planning to implement a MINIX like file system. But before starting
> this I wanted to make certain points clear.

Great idea!

> Do we have a system call interface ready or do we need to implement one?
>
> If we don't have a system call interface I could look into it first,
> because I think it makes sense to implement a system call layer before
> implementing a file system.

A rudimentary system call interface exists:

  kern/syscall.c
  user/libc/include/syscall.h
  user/libc/src/syscall.c

The system call number is passed in eax, parameters are passed in other registers, and the return value is in eax. There are copy_to_user() and copy_from_user() functions to copy data to/from user space. Right now they don't check that the user buffer is valid, i.e. that it is actually in user space, is readable or writable, and that the pages are present. I'm planning on fixing them soon.

The real issue, of course, is that we need a user/kernel API and we need to decide on how to implement it. I would really like suggestions on how to approach this.

In the virtual memory code, there is an abstraction called memory_object, which is a collection of pages. User processes can map these into their VM space. The "memory_object" has a pointer to a "pager", which is a source and sink for data. (If this sounds familiar, the ideas were all stolen from 4.4 BSD, which took its VM system from Mach.) I'd like the design of file I/O to use these memory_objects as the file buffer cache, so that read() and write() system calls are automatically coherent with memory-mapped file I/O.

Right now, I'm thinking the design would be something like the following:

- you implement a "disk pager", which implements the pager interface for IDE devices, using your IDE driver
- the filesystem uses the disk pager to communicate with the hard drive
- the filesystem handles requests to open and close files and directories residing in it
- an instance of a file is represented by a "file pager", which handles the I/O, and a memory object, which caches the pages

Then a "file instance" object in the OS would have a reference to the memory object, and would use it to read/write/seek/etc.

There are lots of unresolved issues here, such as:

- how does the filesystem store and cache metadata?
- how do we implement semantics such as copy-on-write mappings? I think BSD uses stacks of memory objects for this.
- when do we commit writes to the underlying file pager (to initiate real output to disk)?
- how do we do page stealing?
- if the number of references to a memory object drops to zero, we probably don't want to throw it away immediately, since it might be used again. Whose responsibility is it to cache recently used persistent memory objects?

Thoughts?

-Dave
From: Vishal S. <vis...@ho...> - 2002-02-21 02:54:23
Hi,

I am planning to implement a MINIX like file system. But before starting this I wanted to make certain points clear.

Do we have a system call interface ready or do we need to implement one? If we don't have a system call interface I could look into it first, because I think it makes sense to implement a system call layer before implementing a file system.

Let me know what you think.

Thanks,
Vishal Soni
From: David H. <da...@cs...> - 2002-02-12 20:37:46
Hi Harry,

I think eventually we should implement a terminal driver (emulating something like a VT100) built on top of the existing screen and keyboard support. Then user programs could communicate with it via escape sequences. Or, as you mention, we could do this at the user library level.

Note that all new development in GeekOS is being done on an experimental version; details at

  http://geekos.sourceforge.net/docs/experimental.html

Right now there are no file releases for the experimental version, but it is available through anonymous CVS.

-Dave

On Tue, Feb 12, 2002 at 04:00:23AM -0600, Harry Glinos wrote:
> So far the OS seems pretty neat. I have gotten it to run on my laptop
> (more details on the forums on SourceForge) and I would like to extend
> some of the features a little. I was messing around with the keyboard
> driver and I was able to have it delete characters already on the screen.
> After I finished that, I noticed that the driver would be a bad place
> to get the backspace to do its thing. I think that the Read_line function
> in libuser.c (I think that is where it is) needs to be changed to
> accommodate backspace functionality. I wanted to make a small little
> command line program for it with some built-in commands. To do that,
> libuser.c needs to be fixed. I don't think it would be that hard to take
> the code that I put in my copy of keyboard.c and move it into libuser.c.
> Maybe while I'm at it, I could add some new functions to it too.
From: Harry G. <hg...@ya...> - 2002-02-12 10:00:27
So far the OS seems pretty neat. I have gotten it to run on my laptop (more details on the forums on SourceForge) and I would like to extend some of the features a little. I was messing around with the keyboard driver and I was able to have it delete characters already on the screen. After I finished that, I noticed that the driver would be a bad place to get the backspace to do its thing. I think that the Read_line function in libuser.c (I think that is where it is) needs to be changed to accommodate backspace functionality. I wanted to make a small little command line program for it with some built-in commands. To do that, libuser.c needs to be fixed. I don't think it would be that hard to take the code that I put in my copy of keyboard.c and move it into libuser.c. Maybe while I'm at it, I could add some new functions to it too.

Harry Glinos
ha...@gl...

> Message: 1
> Date: Sun, 10 Feb 2002 15:04:42 -0500
> From: David Hovemeyer <da...@cs...>
> To: gee...@li...
> Subject: [Geekos-devel] user mode works again
>
> User mode works again. Right now a user context's virtual memory
> structures and associated memory pages are not tracked properly, so that's
> the next thing I will work on.
>
> We need to start thinking about how to present OS services to
> user applications. At the moment I'm leaning towards a traditional
> monolithic kernel with POSIX (at least a subset) as the API.
> However, I'm open to suggestions.
>
> -Dave
From: David H. <da...@cs...> - 2002-02-11 16:29:46
Hi Ricardo,

Reading an introductory OS textbook is a good place to start, for example:

  http://www.amazon.com/exec/obidos/ASIN/0136386776/qid=1013444582/sr=1-4/ref=sr_1_4/002-2234997-7412850
  http://www.amazon.com/exec/obidos/ASIN/0471417432/qid=1013444666/sr=1-1/ref=sr_1_1/002-2234997-7412850

There are also lots of good online resources, such as:

  http://www.mega-tokyo.com/os/os-faq.html
  http://www.execpc.com/~geezer/osd/index.htm

Good luck,
Dave

On Sun, Feb 10, 2002 at 09:38:52AM -0600, de...@wo... wrote:
> Hi!
> I'm new in this list, and i want to know how a kernel works so i'll learn
> how your kernel works an then help you for make it better.
>
> if any can give me an advice for lean faster, please tell me
>
> if you don't understand excuseme but i'm from mexico and i don't speak
> english so good.
>
> thanks
>
> ricardo
From: <de...@wo...> - 2002-02-10 21:49:48
Hi!

I'm new on this list, and I want to learn how a kernel works, so that I can learn how your kernel works and then help you make it better.

If anyone can give me advice on how to learn faster, please tell me.

If you don't understand, excuse me, but I'm from Mexico and I don't speak English very well.

Thanks,
Ricardo
From: David H. <da...@cs...> - 2002-02-10 20:04:52
User mode works again. Right now a user context's virtual memory structures and associated memory pages are not tracked properly, so that's the next thing I will work on.

We need to start thinking about how to present OS services to user applications. At the moment I'm leaning towards a traditional monolithic kernel with POSIX (at least a subset) as the API. However, I'm open to suggestions.

-Dave
From: David H. <da...@cs...> - 2002-02-07 16:23:59
On Tue, Feb 05, 2002 at 11:25:25PM +0000, Vishal Soni wrote:
> Hi,
>
> The IDE driver is almost ready... just need to do some final phases of
> testing.

Wow, that was fast work!

> Now I have questions regarding the interface and certain implementations.
>
> Do we need to implement the disk scheduling "elevator algorithm" or do
> you just want a basic read/write ability to the disk? I think this would
> be required down the line for sure. But for now I could give you the
> basic driver to test the VM swapping.

Basic I/O is fine for now. (Also, paging to disk is months in the future, as I'm still trying to get basic virtual memory to work.)

> Again a question about LBA and CHS... after looking into some
> implementations of file systems like FFS and EXT2, I observed that disk
> geometry plays an important role in file system performance.
>
> So most of the implementations talk in terms of cylinders, heads, and
> sectors. So do we want an LBA ability in our driver?

Disclaimer: I really don't know much about hard drives or file systems. Having said that, it's my impression that the geometry reported by modern drives is somewhat fictitious. For example, there are more physical sectors on outer tracks than inner tracks, so the uniform CHS model isn't really strictly accurate. I'd be interested to see any documentation, papers, experimental results, etc. on this topic.

In general, GeekOS is at such an early stage that I'd rather keep things as simple as possible, and wait until later to think about performance. So, my vote is still for LBA.

-Dave
From: David H. <da...@cs...> - 2002-02-07 15:32:47
FYI for anyone tracking changes to the "experimental" version of GeekOS (http://geekos.sourceforge.net/docs/experimental.html): I'm tracking down bugs in the virtual memory code, so things may be sort of broken for a while. However, plain kernel threads still work, so you can just comment out the code in Main() which starts the user processes.

At the moment, the problem seems to be that a single user mode process runs fine, but when multiple user mode processes are running there are random triple faults (induced by a page fault at a kernel address). Anyone who can think of good techniques for debugging this kind of problem, please let me know :-)

-Dave
From: Vishal S. <vis...@ho...> - 2002-02-05 23:25:40
Hi,

The IDE driver is almost ready... just need to do some final phases of testing. Now I have questions regarding the interface and certain implementations.

Do we need to implement the disk scheduling "elevator algorithm" or do you just want a basic read/write ability to the disk? I think this would be required down the line for sure. But for now I could give you the basic driver to test the VM swapping.

Again a question about LBA and CHS... after looking into some implementations of file systems like FFS and EXT2, I observed that disk geometry plays an important role in file system performance. So most of the implementations talk in terms of cylinders, heads, and sectors. So do we want an LBA ability in our driver?

Thanks,
Vishal
From: David H. <da...@cs...> - 2002-02-01 21:42:20
I put up a document describing the GeekOS coding style:

  http://geekos.sourceforge.net/docs/coding.html

It would be nice if all GeekOS code was written using the same style. I might be willing to change aspects of the guidelines, if there is strong opposition. Please let me know what you think. Personally, I like the style described on the web page, but I can be flexible.

Thanks,
Dave
From: David H. <da...@cs...> - 2002-01-30 15:43:27
On Tue, Jan 29, 2002 at 07:02:24PM -0600, 'Parc' wrote:
> On Tue, Jan 29, 2002 at 06:19:48PM -0500, David Hovemeyer wrote:
> [snip]
> > Is the idea that you'd program a one-shot timer, and wait for
> > it to go off? Would the resolution be sufficient?
>
> Yes and no. I'm still researching a bit, but I'll have a design
> made up pretty soon, and should have an implementation pretty quickly.
> The basic idea is that I'm going to fire off the RTC with a 2x divider.
> That gives me an 8MHz clock. When you nanosleep, I'll fire it up, and
> turn it off when the timer is done. It looks stupid, though, so I'm
> still thinking about it.

Sounds good.

[snip]

> > I looked at the Linux BogoMIPS code, and it's pretty hairy.
> > So, I hacked something similar into timer.c; basically, it runs
> > a similar loop, and determines how many iterations occur in one
> > timer tick. The patch is attached to this email. (I'm going to
> > commit it, so you can also just cvs update.)
>
> I really don't like that. The compiler can optimize that completely out
> if it feels like it. Also, CPU caching changes how it behaves. If
> something invalidates your cache while you're spinning, the timing won't
> be accurate.

I rewrote the loop in inline volatile asm, so it won't get optimized out. (Will commit this soon.)

The code does rely a bit too intimately on gcc to not mess it up. The main dangers are (1) that the stack pointer might change between when the eip is saved and reset by the interrupt handler (which would cause massive lossage), and (2) that the "count" variable might not get allocated in a register, or might occupy different registers at different times, which would result in an invalid timing result. Case (1) is not likely to happen, especially if we compile with optimization. Case (2) is a bit more likely, but I think making the counter a "register" variable solves this case. Anyway, the code works.

Regarding a cache invalidation (due to an interrupt occurring?) messing up the timing: yes, this could certainly happen. However, I think that's OK. The semantics of this sort of delay would be "wait for *at least* the specified amount of time". If it waits a bit longer, no big deal.

> BogoMIPs are just bogus. There are variables that I don't think are
> good to rely on.

Here's my take on using a spin loop to delay. This is for really short delays (400 ns is less than a millionth of a second). It's too short a period to schedule another thread, so the processor may as well be completely idle.

> Then again, I don't do OS coding for a living, so
> I could be wrong... :)

I would also like to invoke this disclaimer :-)

-Dave
From: 'Parc' <ne...@ri...> - 2002-01-30 01:03:54
On Tue, Jan 29, 2002 at 06:19:48PM -0500, David Hovemeyer wrote:
[snip]
> Is the idea that you'd program a one-shot timer, and wait for
> it to go off? Would the resolution be sufficient?

Yes and no. I'm still researching a bit, but I'll have a design made up pretty soon, and should have an implementation pretty quickly. The basic idea is that I'm going to fire off the RTC with a 2x divider. That gives me an 8MHz clock. When you nanosleep, I'll fire it up, and turn it off when the timer is done. It looks stupid, though, so I'm still thinking about it.

[snip]

> > NOP gets optimized out by bochs, and some processors may do odd things.
> > VMWare would also not like it. Another way to do it might be to watch
> > the DRAM refresh status. It runs at 66MHz, so watch its flip-flops
> > for enough time.
>
> Interesting idea, although wouldn't it actually be dependent on
> the motherboard and processor?

I realized that yes, it would be. Maybe. All my references are for older machines. I'll look some more.

> [snip]
>
> Linux uses a hard coded loop of (roughly) the form:
>
>     while ( count > 0 )
>         --count;
>
> The BogoMIPS value is the result of a test that calibrates how many
> iterations of this loop can execute per second.
>
> I looked at the Linux BogoMIPS code, and it's pretty hairy.
> So, I hacked something similar into timer.c; basically, it runs
> a similar loop, and determines how many iterations occur in one
> timer tick. The patch is attached to this email. (I'm going to
> commit it, so you can also just cvs update.)

I really don't like that. The compiler can optimize that completely out if it feels like it. Also, CPU caching changes how it behaves. If something invalidates your cache while you're spinning, the timing won't be accurate.

> I haven't actually written the delay function yet, since I want to
> see whether it works on real hardware. (It seems to be pretty stable
> in Bochs; on my 450 MHz Ultra 60 it can execute 1464 iterations of
> the loop in one tick :-) I also need to write the loop in assembly
> so the same code gets executed in the delay function and the
> timing function.
>
> -Dave

BogoMIPS is just bogus. There are variables that I don't think are good to rely on. Then again, I don't do OS coding for a living, so I could be wrong... :)

-parc
From: David H. <da...@cs...> - 2002-01-29 23:19:53
On Tue, Jan 29, 2002 at 04:07:52PM -0600, 'Parc' wrote:
> On Tue, Jan 29, 2002 at 04:06:58PM -0800, Vishal Soni wrote:
> > Use timer interrupts: I can use them if they are not going to be used by
> > any other components of the kernel. This also is not a good idea because
> > we frequently need to wait for 400ns in IDE-drive communication. This
> > will increase the number of interrupts. Probably a couple of them each
> > time you try to access a sector.
>
> Give me a little time and I can probably get either an RTC timer or an
> 8254 secondary timer together. I believe the most common way is with
> the RTC.

Is the idea that you'd program a one-shot timer, and wait for it to go off? Would the resolution be sufficient?

> > Second is to use a loop with NOPs in it and execute it to get
> > approximately a 400 nanosecond delay. E.g., if NOP takes 4 clock cycles
> > and my clock has a 100ns duty cycle, I can get a 400ns delay by putting
> > 4 NOPs in my delay loop.
>
> NOP gets optimized out by bochs, and some processors may do odd things.
> VMWare would also not like it. Another way to do it might be to watch
> the DRAM refresh status. It runs at 66MHz, so watch its flip-flops
> for enough time.

Interesting idea, although wouldn't it actually be dependent on the motherboard and processor?

> > If you guys have any ideas regarding this, let me know. I will try to
> > look into the Linux source for reference on how they handle such
> > situations. I know Linux uses something called BogoMIPS. But I am not
> > sure how it works.
>
> See above.

Linux uses a hard coded loop of (roughly) the form:

    while ( count > 0 )
        --count;

The BogoMIPS value is the result of a test that calibrates how many iterations of this loop can execute per second.

I looked at the Linux BogoMIPS code, and it's pretty hairy. So, I hacked something similar into timer.c; basically, it runs a similar loop, and determines how many iterations occur in one timer tick. The patch is attached to this email. (I'm going to commit it, so you can also just cvs update.)

I haven't actually written the delay function yet, since I want to see whether it works on real hardware. (It seems to be pretty stable in Bochs; on my 450 MHz Ultra 60 it can execute 1464 iterations of the loop in one tick :-) I also need to write the loop in assembly so the same code gets executed in the delay function and the timing function.

-Dave
From: 'Parc' <ne...@ri...> - 2002-01-29 22:09:04
On Tue, Jan 29, 2002 at 04:06:58PM -0800, Vishal Soni wrote:
> Hi,
>
> I need a calibrated delay of approx. 400ns for communicating with IDE
> drives.
>
> There are two possible options.
>
> Use timer interrupts: I can use them if they are not going to be used by
> any other components of the kernel. This also is not a good idea because
> we frequently need to wait for 400ns in IDE-drive communication. This
> will increase the number of interrupts. Probably a couple of them each
> time you try to access a sector.

Give me a little time and I can probably get either an RTC timer or an 8254 secondary timer together. I believe the most common way is with the RTC.

> Second is to use a loop with NOPs in it and execute it to get
> approximately a 400 nanosecond delay. E.g., if NOP takes 4 clock cycles
> and my clock has a 100ns duty cycle, I can get a 400ns delay by putting
> 4 NOPs in my delay loop.

NOP gets optimized out by bochs, and some processors may do odd things. VMWare would also not like it. Another way to do it might be to watch the DRAM refresh status. It runs at 66MHz, so watch its flip-flops for enough time.

> This works fine if we assume all processors have a 100ns clock duty
> cycle... but this is not the case, because different versions of
> Intel/AMD processors have different speeds.
>
> If you guys have any ideas regarding this, let me know. I will try to
> look into the Linux source for reference on how they handle such
> situations. I know Linux uses something called BogoMIPS. But I am not
> sure how it works.

See above.

> Let me know if you guys have some other amazing ideas to handle this
> situation.

The least invasive way right now is to use the DRAM refresh. Look for a nanosleep to appear shortly, since you need it. I imagine the controller can handle longer pauses than required, so you could just sleep(1). Yeah, it's WAY too long, but if it works...

> Thanks,
> Vishal

No prob

-parc