capros-devel Mailing List for CapROS
Status: Beta. Brought to you by: clandau.
Message counts by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2005 |  |  |  |  | 55 | 20 | 42 | 15 | 34 | 27 | 51 | 5 |
| 2006 | 10 | 3 | 9 | 1 | 11 | 5 |  | 1 |  |  | 1 |  |
| 2007 |  |  |  |  | 35 | 43 | 24 | 39 |  | 2 | 37 | 77 |
| 2008 | 2 | 23 | 3 | 13 | 1 | 7 |  | 13 | 1 |  |  | 7 |
| 2009 | 36 | 49 | 3 |  |  | 7 | 5 | 13 | 23 | 5 |  |  |
| 2010 |  | 40 | 1 | 19 |  | 16 |  |  |  | 13 |  |  |
| 2011 |  |  |  | 2 |  |  | 1 |  |  |  |  |  |
| 2012 |  |  |  |  |  |  |  |  | 7 | 2 |  |  |
| 2013 | 8 |  | 4 |  |  |  |  |  |  |  |  |  |
| 2022 | 1 |  |  |  |  |  |  | 1 |  |  |  |  |
From: Charlie L. <ch...@ch...> - 2022-08-15 21:25:40
Hello CapROS friends!

Work is progressing on getting CapROS to build on current machines with current compilers. Better compiler checking has uncovered some coding issues and two bugs. Device drivers are based on code taken from Linux kernel version 2.6.22, which reached end of life in 2008 and no longer compiles. I am in the process of updating all Linux code to version 5.10.113. CapROS rewrites the code for Linux semaphores, mutexes, spinlocks, wait queues, etc. to work in a CapROS process, so this is a fair amount of work.

Many thanks to William ML Leslie for contributions to both CapROS and the cross tools.

The documentation, which is on the web, is mostly lacking at this point. What is there is handwritten HTML. Any suggestions for a better documentation solution will be gratefully received.

Thanks for your interest and support!

-- Charlie Landau he/him/his (Why Pronouns Matter <https://www.mypronouns.org/>)
From: Charlie L. <ch...@ch...> - 2022-01-14 17:34:41
Hello CapROS friends!

I recently retired from my day job and am restarting work on CapROS. The passage of time has broken a number of things, and you can no longer buy any machine that the code runs on. I appreciate your patience while we bring it back to working order.

At this point I have moved the CapROS source from cvs on SourceForge to git on GitHub. The history is preserved. See https://github.com/capros-os and the capros repository therein. The CapROS web site is now hosted from the GitHub sources. It is at the same URL: http://www.capros.org/

This capros-devel email list is still hosted at SourceForge. I didn't see an option to do this on GitHub.

My goal is to get the code building and working again, port it to a Raspberry Pi, and improve the documentation.

Thanks for your interest and support!

-Charlie
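[Editor's note: for anyone wanting to follow along, fetching the relocated source with its full history is a standard clone; the repository name matches the message above, and the inspection commands are just an example.]

```shell
# Clone the CapROS repository from its new home on GitHub.
# The cvs history was preserved in the conversion.
git clone https://github.com/capros-os/capros.git
cd capros

# The oldest commits in the log date back to the cvs era.
git log --reverse --oneline | head -5
```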
From: Matt R. <ra...@gm...> - 2013-03-17 22:52:40
On Sun, Mar 17, 2013 at 3:05 PM, Jonathan S. Shapiro <sh...@er...> wrote:

> Matt:
>
> There are several topics in your note, so I'm a bit confused. Let me try to take them in turn.
>
>> I think a centos vm is closest to what I was advocating, and the path of least resistance from where we are...
>
> If the goal is to help people get set up quickly, I think that setting up a reference VM image isn't helpful in practice. It is much faster to net-install a CentOS image (or any other major distribution) by pulling from the major repositories than it is to copy a reference virtual machine image.
>
> Once you have the base image installed, the Coyotos/CapROS host-xenv package exists to ensure that any additional packages you need get installed.

Yeah, I suppose my main reason for a complete VM was that it's simple to host a single image containing everything via bit torrent/file locker/whatever, but if you don't mind hosting again that point is moot.

>> so I started to look into llvm and outside of the pesky libstdc++ dependency...
>
> I'm not clear why we would look at LLVM, unless perhaps for performance reasons. There are some compiler features that LLVM is missing for kernel compiles, but it might be worth a look.

Basically I just wanted to evaluate whether llvm suffered the same fate as a gcc port (requiring posix emulation). If a native port of llvm were possible and able to compile domains, it seemed worthwhile to then spend the effort switching the cross toolchain to use it, as a first step in a longer-term effort. If llvm-on-capros were a dead end like GCC, it'd seem like a waste of time/effort to me. As it is, I didn't see any serious roadblocks, just a lot of time/effort.

The other thing is that the llvm toolchain's stance on cross compiling is fairly different from what GCC does. By default llvm/clang is built to support all the targets, so they are all cross compilers by default. As such, it seemed like there were some possible maintenance-burden reductions there, e.g. if we could provide a driver library for anything needed that wasn't built into the centos llvm package or whatever. Anyhow... just the changes in the landscape since the cross tools were created seemed to warrant a reevaluation.

> But if the goal is to be able to write domains in C++, I think a better path is to get libstdc++ and g++ ported.

This wasn't the goal; in fact I was looking at linking C++ statically to avoid this, treating it merely as a dependency of the compiler. Though I did note that supporting this would be a requirement for any genode-on-capros http://www.genode.org effort.
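[Editor's note: the "cross compiler by default" point is easy to demonstrate: a stock clang can emit object code for a non-native target just by being handed a target triple, with no separately built compiler. The triple below is only an example; a real CapROS target would still need its own headers, libraries, and linker setup.]

```shell
# One clang binary, many targets: compile the same file
# for two different architectures with no extra toolchain.
echo 'int f(int x) { return x + 1; }' > f.c
clang --target=armv7-none-eabi -c f.c -o f-arm.o
clang --target=x86_64-unknown-linux-gnu -c f.c -o f-x86.o
```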
From: Jonathan S. S. <sh...@er...> - 2013-03-17 22:06:04
Matt:

There are several topics in your note, so I'm a bit confused. Let me try to take them in turn.

> I think a centos vm is closest to what I was advocating, and the path of least resistance from where we are...

If the goal is to help people get set up quickly, I think that setting up a reference VM image isn't helpful in practice. It is *much* faster to net-install a CentOS image (or any other major distribution) by pulling from the major repositories than it is to copy a reference virtual machine image.

Once you have the base image installed, the Coyotos/CapROS host-xenv package exists to ensure that any additional packages you need get installed.

On Sun, Mar 17, 2013 at 1:56 PM, Matt Rice <ra...@gm...> wrote:

> At that point I started evaluating the toolchain itself, with a focus on coyotos/capros eventually self hosting. Without posix emulation I don't think it'll ever run the GNU tools; filenames are ubiquitous throughout all of them, with the bfd library wanting to open()/close().

Yes. These tools are designed to live in a POSIX world. There is no realistic hope of separating them from that world.

> so I started to look into llvm and outside of the pesky libstdc++ dependency...

I'm not clear why we would look at LLVM, unless perhaps for performance reasons. There are some compiler features that LLVM is missing for kernel compiles, but it might be worth a look.

But if the goal is to be able to write domains in C++, I think a better path is to get libstdc++ and g++ ported.

Jonathan
From: Matt R. <ra...@gm...> - 2013-03-17 20:56:48
On Sat, Mar 16, 2013 at 11:12 PM, Jonathan S. Shapiro <sh...@er...> wrote:

> I want to describe what is being done, but I also want to explain why we are not adopting the "build it yourself" approach that Matt Rice suggested back in January.

Hmm, I see I mentioned the 'everybody builds a toolchain' model. I didn't intend to advocate it; I don't really like it, for all the reasons you have given. I only really mentioned it for the sake of completeness. I think a centos vm is closest to what I was advocating, and the path of least resistance from where we are, so I am definitely willing to go along with that route. What follows is just a little update on what I was looking into before this decision, just for the record...

---

I spent a little time evaluating small self-hosting distros, and kinda settled on www.baserock.org as the ideal VM, for me at least, in that it's minimal and uses sources directly from version control (git), which helps manage the capros/coyotos-specific patches to the toolchain. It has a way (with the trebuchet tool) that we can provide a delta containing the cross compiler tools that gets applied to the standard image. The main issue is that it is currently undergoing a rather furious development pace; I was hoping to attack it once it settles down somewhat.

At that point I started evaluating the toolchain itself, with a focus on coyotos/capros eventually self hosting. Without posix emulation I don't think it'll ever run the GNU tools; filenames are ubiquitous throughout all of them, with the bfd library wanting to open()/close().

So I started to look into llvm, and outside of the pesky libstdc++ dependency, the linker scripts are the main impediment to complete self hosting, as the llvm tools don't really have any linker scripts, relying on gnu tools for that. Which means that at least for compiling the kernel we need gnu tools; still, having the ability to compile normal executables would be great. The llvm tools themselves appear to generally leave filenames at the main() level, so they seem a much more straightforward task to port. So I started looking into the various standard C++ library implementations, which is about where I ran out of steam for the time being. Definitely not the path of least resistance.
From: Jonathan S. S. <sh...@er...> - 2013-03-17 06:12:57
Back in January there was a discussion about cross compilers. For various reasons I seem to be back in the mode of maintaining cross compilers, and I'm refreshing the Coyotos/CapROS cross tools in order to get my head back into the cross tool build middens. I'm not resuming work on Coyotos, so I'm not going to have time for much ongoing maintenance, but we all want to have a set of working tools. After talking with Charlie Landau, we have settled on a course of action. I want to describe what is being done, but I also want to explain why we are *not* adopting the "build it yourself" approach that Matt Rice suggested back in January.

Building a cross-tool chain is a seriously hairy process. Some steps have to be done multiple times (e.g. GCC). It's delicate, finicky, error-prone, cranky, not friendly to tool version updates, and generally a pain in the ass. By the time I stopped, I had 20 virtual machines running different OS versions and architectures, each dedicated to building the tools for one of the many possible development host environments. Revising the cross tools involved a 40 hour compilation process. It was *ridiculous*.

*Packaging* the cross tools was very helpful. It provided a way to ensure that all tool dependencies were satisfied, and it also meant that every developer was using bit-identical compilers with bit-identical results. Those two things, taken together, go a long way toward eliminating bugs arising from the cross environment itself. The fact that you could add the tool repository with a single RPM command, and then install the cross tools for your target with a single YUM command, and get the updates conveyed to you automatically, made things awfully convenient. It's a lot of work to get all of that right, and having people build the cross tools by hand is just begging for trouble.

The last time I reviewed this process was in 2010, and some of the constraints have changed considerably. At this point, I'm prepared to make three assumptions about development host machines that were problematic in 2010:

1. All dev machines are now x86_64.
2. All dev machines now have hardware virtualization support.
3. Disk space is a lot less constrained than it used to be.

This is true for OSX machines, and also for nearly all Windows desktops and laptops. If you bought a netbook you may be out of luck.

What this means in practical terms is that it is now reasonable to declare that Coyotos/CapROS development should happen on *one* operating system version, and that you should be prepared to install a modest-sized virtual machine for that purpose. In my experience, a virtual machine for Coyotos or CapROS development can fit comfortably in 40G, and can *probably* be done in 25G.

We have been going crazy chasing the Fedora Core revision cycle, and given the ubiquitous availability of virtualization today, it just doesn't make sense to keep doing that. The packages that I built for CentOS 5.1 in 2010 work today, unchanged, on CentOS 5.9. That beats the heck out of rebuilding things every six months, which I just don't have time to do.

So the plan from this point forward is that Coyotos/CapROS development tool packages will *only* be built for CentOS, and *only* for development from an x86_64 development host. I'm just now finishing the rebuild for CentOS5, and I'll be starting on the build for CentOS6 shortly. Once that is done, I will go through and try to bring the tools up to more current versions of binutils and GCC, and make those available through the test repository.

Jonathan
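[Editor's note: the "single RPM command, single YUM command" workflow described above would look something like this. The repository URL and package names are hypothetical, loosely modeled on the eros-os.com YUM layout mentioned elsewhere in this thread.]

```shell
# Add the cross-tool repository (package name and URL are illustrative).
sudo rpm -ivh http://www.eros-os.com/YUM/coyotos/capros-tools-repo.noarch.rpm

# Install the cross tools for one target (package name is illustrative).
sudo yum install capros-xenv-arm

# Later tool updates then arrive through the normal update path:
sudo yum update
```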
From: Matt R. <ra...@gm...> - 2013-01-30 06:23:09
On 1/29/13, Charles Landau <cl...@ma...> wrote:

> On 1/29/13 9:33 AM, Matt Rice wrote:
>> Unfortunately some legalities have gotten in the way,
>>
>> https://fedoraproject.org/wiki/Legal:Distribution?rd=Legal/Distribution
>>
>> so not only would we/I have to distribute the sources for our cross compilers, but the sources for anything originating from fedora as well
>
> If you "distribute" the cross-compilation environment, and the sources for the cross compilers, by giving the URL of a server with that data (e.g. SourceForge), then why can't you "distribute" the source of the rest of Fedora by giving the URL of the Fedora servers?

Well, GPL section 3 is supposed to work like so:

3a(1): distributor gives binaries + source to person A; person A gives binaries + source to person B; person B has source + binaries.

3a(2): distributor gives binaries + source to person A; person A gives binaries + a 3b offer to person B; person B has binaries + offer.

3b: distributor gives binaries + an offer to person A; person A gives binaries + a 3c offer to person B; person B has binaries + the ability to get source.

(There are of course more variations...)

In the 3a(1) case, the distributor has fulfilled his obligations under the license via "provided that you also do one of the following" (3a, 3b, or 3c) and has no requirement to give the sources to person B.

If you take a look at the format of the repository http://www.eros-os.com/YUM/coyotos/Fedora/12/ you will see:

SRPMS/
i386/

Under the following paragraph, this satisfies the obligations of 3a, with the key piece being 'from the same place':

"If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code."

So, from my perspective as a distributor, I can:

a) pass on the source code that was given to me
b) pass on an offer for the source code (which may in fact require a piece of paper, not sure)
c) pass on the offer that was given to me

and since I have no offer, I cannot choose c.

From what I can tell without digging a whole lot yet, Scientific Linux, mentioned by Lorens, has a somewhat novel approach to this: 'sites' distribute images that overlay on top of the standard Scientific Linux distribution. In that case, we'd only be distributing the overlay containing our cross compilers, allowing us to host the sources/binaries for only the cross compilers. Not entirely sure, though.
From: Charles L. <cl...@ma...> - 2013-01-30 04:56:05
On 1/29/13 9:33 AM, Matt Rice wrote:

> Unfortunately some legalities have gotten in the way,
>
> https://fedoraproject.org/wiki/Legal:Distribution?rd=Legal/Distribution
>
> so not only would we/I have to distribute the sources for our cross compilers, but the sources for anything originating from fedora as well

If you "distribute" the cross-compilation environment, and the sources for the cross compilers, by giving the URL of a server with that data (e.g. SourceForge), then why can't you "distribute" the source of the rest of Fedora by giving the URL of the Fedora servers?
From: Lorens K. <ero...@ta...> - 2013-01-29 19:00:13
On Tue, Jan 29, 2013 at 09:33:46AM -0800, Matt Rice wrote:

> https://fedoraproject.org/wiki/Legal:Distribution?rd=Legal/Distribution
>
> so not only would we/I have to distribute the sources for our cross compilers, but the sources for anything originating from fedora as well

I'm annoyed at their interpretation that "three years" means three years from the day somebody else gives a physical copy to yet somebody else. I'd have interpreted it to mean three years from the date they stop distributing binaries from their central site.

> given the general purpose nature of a distribution like fedora, it contains a ton of stuff which is not necessarily essential to our purposes, so we are on the hook twice for any of that...
>
> at that point it makes more sense to me, to roll a custom distribution or a different distribution which might better fit our purposes (and all of the work that that entails/maintaining 2 os's instead of one).

Is Scientific Linux (www.scientificlinux.org) very much different from Fedora? I haven't found any license on their site other than a quote of the GPL, no comments like Fedora has, but they still have their first version from 2004 online. Otherwise I'd go the Debian way.

FWIW, I have a Virtualbox VDI of a Fedora-thirteenish that successfully compiled CapROS; it's 3.8 GB ready-to-boot and 1.1 GB gzipped.
From: Matt R. <ra...@gm...> - 2013-01-29 17:33:54
On Fri, Jan 25, 2013 at 8:29 PM, Charles Landau <cl...@ma...> wrote:

> On 1/25/13 7:29 PM, Matt Rice wrote:
>> I suppose what I should probably do is just try it once and see if it works for everybody or not, and then we can decide?
>
> That seems like the best plan.

Unfortunately some legalities have gotten in the way:

https://fedoraproject.org/wiki/Legal:Distribution?rd=Legal/Distribution

So not only would we/I have to distribute the sources for our cross compilers, but the sources for anything originating from Fedora as well. Given the general-purpose nature of a distribution like Fedora, it contains a ton of stuff which is not necessarily essential to our purposes, so we are on the hook twice for any of that...

At that point it makes more sense to me to roll a custom distribution, or a different distribution which might better fit our purposes (and all the work that that entails/maintaining 2 OS's instead of one). I'll keep looking into how small I can make Fedora, with the overall footprint in mind.
From: Charles L. <cl...@ma...> - 2013-01-26 04:29:34
On 1/25/13 7:29 PM, Matt Rice wrote:

> On 1/25/13, Charles Landau <cl...@ma...> wrote:
>> Matt, IIUC, you are proposing:
>>
>> 1. The developer has a PC running his favorite flavor and version of Linux.
>> 2. Under that system there is a VM running Fedora Core 10 (the latest version that I know has working cross-compilers). The developer compiles and perhaps develops on that system.
>
> I was actually wanting to update them to a newer fedora.
>
> I'm currently running them patched on fc16, but willing to update the cross-compilers for whatever version we decide upon, and upload a fork of the existing tools, including the patches and scripts to build the image, to a repository.
>
> In other words, I'm already maintaining them locally, and trying to figure out a decent way to distribute them that doesn't require me to compile them for a bunch of different fedora versions; something I can preferably sign and stick on a file locker.
>
> In fact updating them will make it a lot easier given the newer fedora image generation tools.
>
> I suppose it comes down to: I'm far too lazy to generate them the way shap did; the excess compiles every time the tools change or fedora releases a new version would make me dread rather than enjoy doing it.

So IIUC you are saying we (that is, you) can still update the tools to newer versions (using a mechanism that I don't understand but that works for you) and release them to others (using a mechanism that you are working out now), but you can do it on a schedule that is independent of (and less frequent than updates to) the distribution and version of the Linux on the host PC.

> I suppose what I should probably do is just try it once and see if it works for everybody or not, and then we can decide?

That seems like the best plan.

>> What about this alternative: Use the (non-cross x86) tools (compiler and libraries) that come with the current version of Fedora Core (might work on other Linuxes too). We would lose the features that CapROS adds to the current cross tools. There are two that I know of:
>
> I suppose, I mean keynix showed it could be done. I think it's going to be a lot more work than just banging our existing compilers into shape, and might end up like a swan dive into the La Brea tar pits :-)

The work I know about doesn't seem like a lot, but the work we don't know about could be a tar pit.

> I think also disable the crt0 that comes with the linux compiler, as it probably wouldn't play nice.

I don't recall exactly how crt0 gets loaded in Linux nor in CapROS; this would need to be figured out. DOMCRT0 in src/build/make/makevars.mk is no longer used, but was once used to explicitly load crt0, and we could go back to that.

> the pain doesn't really stop there: when porting anything alien that uses configure, it will recognize the compiler as a gnu-linux compiler, and do the wrong thing too.

src/base/domain/openssl is one of those "alien" things, and instead of using configure, the Makefile explicitly builds for the CapROS targets.

>> This doesn't solve the problem of getting modern tools for developing CapROS for the ARM target, but it might make it easier, because it's easier to build cross tools if you don't have to integrate the CapROS-specific stuff (try Googling "cross linux from scratch").
>
> right, it's a lot of pain for a partial solution.
From: Matt R. <ra...@gm...> - 2013-01-26 03:29:30
On 1/25/13, Charles Landau <cl...@ma...> wrote:

> Matt, IIUC, you are proposing:
>
> 1. The developer has a PC running his favorite flavor and version of Linux.
> 2. Under that system there is a VM running Fedora Core 10 (the latest version that I know has working cross-compilers). The developer compiles and perhaps develops on that system.

I was actually wanting to update them to a newer fedora.

I'm currently running them patched on fc16, but willing to update the cross-compilers for whatever version we decide upon, and upload a fork of the existing tools, including the patches and scripts to build the image, to a repository.

In other words, I'm already maintaining them locally, and trying to figure out a decent way to distribute them that doesn't require me to compile them for a bunch of different fedora versions; something I can preferably sign and stick on a file locker.

In fact updating them will make it a lot easier given the newer fedora image generation tools.

I suppose it comes down to: I'm far too lazy to generate them the way shap did; the excess compiles every time the tools change or fedora releases a new version would make me dread rather than enjoy doing it.

I suppose what I should probably do is just try it once and see if it works for everybody or not, and then we can decide?

> 3. There is probably a second VM for running x86 CapROS.

Right. I also do a 3rd VM running CapROS-arm under emulation on x86, though it doesn't quite get to executing user mode instructions yet. I have kind of ignored this part because it's something that should probably happen on the developer's (1) machine rather than having to deal with nested VMs.

> I don't understand what you are proposing with a loopback device.

We can drop this from the discussion if you would like; it's somewhat orthogonal. But grub-install/fdisk want to work with /dev block devices to create bootable partitions etc., and that requires root, or fiddling with permissions/groups, which is unacceptable when building on the developer's (1) machine; that would typically be documented as 'post compile instructions', but could be done on a VM with no harm, just some extra copying/disk usage. So it's something that could make the above (3) images easier to build.

> #2 would be essentially the same as the Linux development system I use, except mine is on a real PC.
>
> One downside of this is that CapROS builds would remain stuck on the FC10 tools and wouldn't be able to take advantage of any subsequent progress.

Hopefully answered this above.

> What about this alternative: Use the (non-cross x86) tools (compiler and libraries) that come with the current version of Fedora Core (might work on other Linuxes too). We would lose the features that CapROS adds to the current cross tools. There are two that I know of:

I suppose, I mean keynix showed it could be done. I think it's going to be a lot more work than just banging our existing compilers into shape, and might end up like a swan dive into the La Brea tar pits :-)

> 1. The regular C libraries have procedures such as read() and open() that make system calls designed for the host system (e.g. Linux). CapROS does not support these and the CapROS cross libraries do not have them. The difference would be that if your program attempts to use these procedures, you will get a runtime error (crashed process) instead of a link-time error. On the plus side, sprintf() might work and xsprintf() and kprintf() would be eliminated or simplified.
>
> 2. There are some modules (in src/base/lib/domain/crt) for CapROS-specific runtime initialization. CapROS Makefiles would have to be modified to explicitly include these modules when linking. This should not be hard because linking is done with Make macros. _sbrk() and _exit() do need to be supported, and the CapROS versions could be explicitly linked before scanning the C library.
>
> Am I missing any other issues?

I think also disable the crt0 that comes with the linux compiler, as it probably wouldn't play nice.

The pain doesn't really stop there: when porting anything alien that uses configure, it will recognize the compiler as a gnu-linux compiler, and do the wrong thing too.

> This doesn't solve the problem of getting modern tools for developing CapROS for the ARM target, but it might make it easier, because it's easier to build cross tools if you don't have to integrate the CapROS-specific stuff (try Googling "cross linux from scratch").

Right, it's a lot of pain for a partial solution.
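[Editor's note: the "explicitly include these modules when linking" idea being discussed can be sketched as a link line: suppress the host toolchain's startup files and default libraries, then name the CapROS-specific runtime objects yourself, ahead of the C library, so they win the symbol search. Every file name below is a hypothetical placeholder, not an actual CapROS build path.]

```shell
# Hypothetical link step with GNU gcc/ld: drop the Linux crt0 and
# default libraries, then supply the CapROS runtime modules
# (crt0, _sbrk, _exit) explicitly before -lc.
gcc -nostartfiles -nodefaultlibs -o mydomain \
    capros-crt0.o capros-sbrk.o capros-exit.o \
    mydomain.o \
    -lc -lgcc
```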
From: Charles L. <cl...@ma...> - 2013-01-25 22:28:34
Matt, IIUC, you are proposing:

1. The developer has a PC running his favorite flavor and version of Linux.
2. Under that system there is a VM running Fedora Core 10 (the latest version that I know has working cross-compilers). The developer compiles and perhaps develops on that system.
3. There is probably a second VM for running x86 CapROS.

I don't understand what you are proposing with a loopback device.

#2 would be essentially the same as the Linux development system I use, except mine is on a real PC.

One downside of this is that CapROS builds would remain stuck on the FC10 tools and wouldn't be able to take advantage of any subsequent progress.

What about this alternative: Use the (non-cross x86) tools (compiler and libraries) that come with the current version of Fedora Core (might work on other Linuxes too). We would lose the features that CapROS adds to the current cross tools. There are two that I know of:

1. The regular C libraries have procedures such as read() and open() that make system calls designed for the host system (e.g. Linux). CapROS does not support these, and the CapROS cross libraries do not have them. The difference would be that if your program attempts to use these procedures, you will get a runtime error (crashed process) instead of a link-time error. On the plus side, sprintf() might work, and xsprintf() and kprintf() would be eliminated or simplified.

2. There are some modules (in src/base/lib/domain/crt) for CapROS-specific runtime initialization. CapROS Makefiles would have to be modified to explicitly include these modules when linking. This should not be hard because linking is done with Make macros. _sbrk() and _exit() do need to be supported, and the CapROS versions could be explicitly linked before scanning the C library.

Am I missing any other issues?

This doesn't solve the problem of getting modern tools for developing CapROS for the ARM target, but it might make it easier, because it's easier to build cross tools if you don't have to integrate the CapROS-specific stuff (try Googling "cross linux from scratch").

-Charlie

On 1/25/13 12:17 PM, Matt Rice wrote:

> I've been wanting to get back into capros/coyotos recently.
>
> There are 2 distinct issues:
> a) the main issue is my lack of ability to host a yum repository for the binaries
> b) the frequency of fedora release cycles (6 months), multiplied by the number of architectures/OS's, means a lot of compiling/maintenance, because everybody has their own upgrade schedule on which they upgrade their fedora installs, even if we managed to limit ourselves to a single OS.
>
> With regard to a): the yum repositories require a specific URL format with which they must comply, which makes it difficult to find free hosting. I suppose I could talk to the 'rpmfusion.org' guys and see if I could host as part of that project (seems like a weird solution to me).
>
> A different mechanism entirely has come to mind which would try to attack these 2 problems directly. It has its own unique set of issues... I wanted to see what your opinion was.
>
> The alternative mechanism would be to host a virtual machine image which has a linux kernel & all of the dependencies required to build capros on it/cross compilers and whatnot built into the virtual machine.
>
> This solves a), since it's a single image which can be hosted anywhere, like mega.com or even on the sourceforge capros page itself. It solves b), since we are no longer dependent on fedora's release cycle. It also solves c), the requirement of fedora; now we'd just require qemu/kvm or 'openstack' or eucalyptus or something (haven't gotten down to the details). Theoretically anyone running fedora or ubuntu or whatever should be able to run it.
>
> It's probably:
> z) slower (not sure how much)
> y) requires virtualization instructions on the bootstrapping machine
> x) requires some synchronization mechanism to get binaries out of it and sources to it, so it'd probably complicate the build process.
>
> I personally would probably just ssh into the virtual machine to develop, since I never run anything more than bash and vi anyway. That's probably not feasible if someone wants to run some gui editor; if that's the case this might be more painful than it's worth, since it takes the build system away from a process everyone is familiar with.
>
> w) requires the initial investment in building the vm images; essentially we maintain a minimal linux or bsd os which we use to bootstrap capros. We could also use the existing RPMs/rpm building and fedora tools, e.g. https://github.com/wgwoods/lorax/blob/master/README.livemedia-creator to create the cross compilers vm image; in that case we're staying pretty much the same, we just never build for more than 1 version of fedora... this is probably the simplest way to go. The image will be somewhat larger than we could pull off with a purpose-built vm (do we care?)
>
> Another benefit of the whole 'vm image' based design comes to mind: my qemu image script generation things currently use 'libguestfs' to work around the requirement for root access/fiddling with security permissions for mounting a loopback device, so I can install grub for x86 or uboot for arm, or whatever bootloader really.
>
> This sort of thing is quite nice in that it allows the build scripts to spit out images runnable under emulation, or that you can install from dd, without any fiddling with the local machine, or multiple steps.
>
> libguestfs essentially wraps up a qemu instance, granting the user root permissions on that virtual machine, allowing them to mount a loopback device.
>
> I've really been hampered by the bug here: https://bugzilla.redhat.com/show_bug.cgi?id=737261 in that I have to fiddle with my RPM database in order to get grub1 installed, and hack the system-provided libguestfs to work again (2 years of that and no fix in sight, as it's a stalemate between maintainers).
>
> By running the cross compilers in a VM we can quite easily allow the build to mount loopback devices for the parts of the image-building process that want it, bypassing libguestfs and the fedora politics involved.
>
> Anyhow, let me know your feelings on this subject and whether you think this would make it easier to continue development of capros given the current cross compiler situation.
>
> Another alternative is the openembedded.org style of mechanism, where everybody builds the cross compilers; this is essentially what we have today, I suppose with the exception that they do it all in one pass (I haven't looked at their stuff in a while/since the yocto project merge though; it is in all likelihood somewhat linux-target specific).
From: Matt R. <ra...@gm...> - 2013-01-25 20:17:57
|
I've been wanting to get back into capros/coyotos recently.

There are 2 distinct issues:
a) the main issue is my lack of ability to host a yum repository for the binaries
b) the frequency of fedora release cycles (6 months) multiplied by the number of architectures/OSes means a lot of compiling/maintenance, because everybody has their own upgrade schedule for their fedora installs, even if we managed to limit ourselves to a single OS.

With regard to a): the yum repositories require a specific URL format with which they are required to comply, which makes it difficult to find free hosting. I suppose I could talk to the 'rpmfusion.org' guys and see if I could host as part of that project (seems like a weird solution to me).

A different mechanism entirely has come to mind which would attack these 2 problems directly. It has its own unique set of issues... I wanted to see what your opinion was.

The alternative mechanism would be to host a virtual machine image which has a linux kernel and all of the dependencies required to build capros (cross compilers and whatnot) built into the virtual machine.

This solves a), since it's a single image which can be hosted anywhere, like mega.com or even on the sourceforge capros page itself. It solves b), since we are no longer dependent on fedora's release cycle. It also solves c), the requirement of fedora: now we'd just require qemu/kvm or 'openstack' or eucalyptus or something (haven't gotten down to the details). Theoretically anyone running fedora or ubuntu or whatever should be able to run it.

It's probably:
z) slower (not sure how much)
y) requires virtualization instructions on the bootstrapping machine
x) requires some synchronization mechanism to get binaries out of it and sources into it, so it'd probably complicate the build process.

I personally would probably just ssh into the virtual machine to develop, since I never run anything more than bash and vi anyway. That's probably not feasible if someone wants to run some gui editor; if that's the case this might be more painful than it's worth, since it takes the build system away from a process everyone is familiar with.

w) requires the initial investment in building the vm images; essentially we maintain a minimal linux or bsd os which we use to bootstrap capros. We could also use the existing RPMs/rpm-building and fedora tools, e.g. https://github.com/wgwoods/lorax/blob/master/README.livemedia-creator, to create the cross-compilers vm image. In that case we're staying pretty much the same, we just never build for more than 1 version of fedora... This is probably the simplest way to go; the image will be somewhat larger than we could pull off with a purpose-built vm (do we care?).

Another benefit of the whole 'vm image' based design comes to mind: my qemu image script generation things currently use 'libguestfs' to work around the requirement for root access / fiddling with security permissions for mounting a loopback device, so I can install grub for x86, or uboot for arm, or whatever bootloader really.

This sort of thing is quite nice in that it allows the build scripts to spit out images runnable under emulation, or that you can install from dd, without any fiddling with the local machine or multiple steps.

libguestfs essentially wraps up a qemu instance, granting the user root permissions on that virtual machine and allowing them to mount a loopback device.

I've really been hampered by the bug here: https://bugzilla.redhat.com/show_bug.cgi?id=737261, in that I have to fiddle with my RPM database in order to get grub1 installed, and keep hacking the system-provided libguestfs to work again (2 years of that and no fix in sight, as it's a stalemate between maintainers).

By running the cross compilers in a VM we can quite easily allow the build to mount loopback devices for the parts of the image-building process that want it, bypassing libguestfs and the fedora politics involved.

Anyhow, let me know your feelings on this subject and whether you think this would make it easier to continue development of capros given the current cross-compiler situation.

Another alternative is the openembedded.org style of mechanism, where everybody builds the cross compilers; this is essentially what we have today, I suppose with the exception that they do it all in one pass (I haven't looked at their stuff in a while / since the yocto project merge, though; it is in all likelihood somewhat linux-target specific).
|
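The loopback approach Matt describes (building a bootable image without libguestfs) might look roughly like the sketch below. Only the `dd` step is unprivileged; the commented commands are illustrative, would run as root inside the build VM, and the filenames and sizes are my own choices, not from the mail:

```shell
# Create a small raw disk image; this needs no special privileges.
dd if=/dev/zero of=disk.img bs=1M count=16 status=none
ls -l disk.img

# The privileged half, for reference (run as root on the build VM):
#   LOOP=$(losetup --find --show disk.img)       # attach to a free loop device
#   mkfs.ext2 "$LOOP" && mount "$LOOP" /mnt
#   grub-install --root-directory=/mnt "$LOOP"   # or u-boot for ARM targets
#   umount /mnt && losetup -d "$LOOP"
```

This is exactly the part that libguestfs otherwise wraps in a qemu appliance; running inside a VM where the build has root makes the plain losetup/mount path available again.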
From: Matt R. <ra...@gm...> - 2012-10-23 21:46:57
|
On Sun, Oct 21, 2012 at 7:37 PM, Charles Landau <cl...@ma...> wrote:
> On 9/9/12 3:47 AM, Matt Rice wrote:
>> I shall refrain from further hijacking the capros list for
>> coyotos stuff/patches
>
> My apologies, but I have just now gotten time to review this thread, which started out with Kitty Guy trying to build CapROS. Unfortunately I don't have any knowledge to contribute on tools issues.
>
> CapROS relies on tools that are built under the Coyotos project. If anyone succeeds in building a current version of the tools, in a way that would work for the CapROS tools too, PLEASE let us know on the CapROS list. I am stuck on Fedora Core 10 for my CapROS work.

I just tested building/running capros under qemu on fedora 16, with the patch series I sent in ccs-xenv.tar.gz, built using 'make build' in the ccs-xenv/SPECS directory. I can test fedora 17, or the 18 beta, later if you'd like.

Until recently there were some hacks you could do to run the old RPMs on newer fedoras; those don't seem to work anymore (or I can't remember how). I don't really have any way to host a yum repository, though.
|
From: Charles L. <cl...@ma...> - 2012-10-22 02:54:40
|
On 9/9/12 3:47 AM, Matt Rice wrote:
> I shall refrain from further hijacking the capros list for
> coyotos stuff/patches

My apologies, but I have just now gotten time to review this thread, which started out with Kitty Guy trying to build CapROS. Unfortunately I don't have any knowledge to contribute on tools issues.

CapROS relies on tools that are built under the Coyotos project. If anyone succeeds in building a current version of the tools, in a way that would work for the CapROS tools too, PLEASE let us know on the CapROS list. I am stuck on Fedora Core 10 for my CapROS work.

-Charlie Landau
|
From: Matt R. <ra...@gm...> - 2012-09-09 10:47:58
|
On Sat, Sep 8, 2012 at 5:34 AM, Matt Rice <ra...@gm...> wrote:
> with that everything compiles, but doesn't function as expected due to
> an exception after enabling interrupts.

FWIW this appears to be a qemu issue (it didn't happen with older qemus, and doesn't happen with newer ones). It's sending out a fault which is reserved by Intel... I haven't been able to track down the origin of this in the affected qemu versions; qemu doesn't even have this one in their list of exceptions. Given its dubious nature, I'm a bit skeptical of working around a qemu bug, and of whether that would be appropriate should we get it onto hardware. Reports on actual hardware: http://news-posts.aplawrence.com/970.html

That said, I shall refrain from further hijacking the capros list for coyotos stuff/patches. Apologies.
|
From: Matt R. <ra...@gm...> - 2012-09-08 12:34:15
|
On Wed, Sep 5, 2012 at 9:15 AM, Matt Rice <ra...@gm...> wrote:
> On Wed, Sep 5, 2012 at 8:12 AM, Kitty Guy <kit...@ma...> wrote:
>>
> I seem to recall getting coyotos working sometime after that capidl
> issue arrived, only vaguely remember; anyhow I will have to look
> around for patches and test them.

Attached is a series of patches, including some of Kitty Guy's patches (those didn't work here: still unused variables; guessing we built for different targets). RPM builds don't seem to require the symlink to ldscripts that non-rpm ccs-xenv builds do; haven't looked into it further yet.

With that everything compiles, but doesn't function as expected due to an exception after enabling interrupts.

I had a go at trying to finish up the capidl change, but for now it was just easier to revert. Not sure if we need to worry about older boosts in the file_string->string thing; I don't have one around to test.
|
From: Matt R. <ra...@gm...> - 2012-09-05 16:15:55
|
On Wed, Sep 5, 2012 at 8:12 AM, Kitty Guy <kit...@ma...> wrote:
>
> "Jonathan S. Shapiro" <sh...@er...> wrote:
>> On Wed, Sep 5, 2012 at 7:16 AM, Kitty Guy <kit...@ma...> wrote:
>>
>> Concerning the image builder, I would need to look at the specific input case.
>> The coyotos project has been mothballed. There is nobody working on it at this time, and the tool chains have not been brought forward to run on current versions of Fedora. I don't have the time to bring them forward, so I'm afraid you may be out of luck here.

FWIW, in light of recent events in 'glibc-land' I've been contemplating the idea of reinvestigating the glibc port (or eglibc, should glibc not adopt eglibc's 'option groups' mechanism, which seems to at least be infrastructure for what shap originally desired from glibc, though AFAICT there isn't currently a way to expunge all posix stuff). Anyhow, such an endeavor would seem to require a toolchain update, so I don't mind spending time on it if there is interest, and maybe otherwise.

I seem to recall getting coyotos working sometime after that capidl issue arrived; I only vaguely remember. Anyhow, I will have to look around for patches and test them.
|
From: Kitty G. <kit...@ma...> - 2012-09-05 15:12:50
|
"Jonathan S. Shapiro" <sh...@er...> wrote:
> On Wed, Sep 5, 2012 at 7:16 AM, Kitty Guy <kit...@ma...> wrote:
>> Hello,
>>
>> Thanks for your quick reply.
>>
>> "Jonathan S. Shapiro" <sh...@er...> wrote:
>>> Kitty:
>>> 1. The process for building these tools is delicate, and (at this time) really only works if you proceed by rebuilding the RPMs.
>>
>> That could take care of the symlinks. The build errors will obviously remain even when built as RPM.
>
> The right solution, really, is to refresh the entire cross-tool chain. That is likely a week-long effort, and I don't have the time to do it.

I expect some of the issues would be fixed by updating to current versions of binutils and gdb.

>> I was able to run the hello world demo of capros (built with capros tools) but coyotos fails linking any native binaries.
>
> There could be two reasons for this. It sounds like your build of the tool chain may not have been as intended. But even if there were no tool chain problems, you would hit a problem with your first use of the coyotos capidl tool. I was in the middle of a rework of capidl when the project was mothballed. The changes to capidl were never completed.
>
>> The first problem is that the coyotos binaries have unresolved symbols to the kernel and other binaries which the linker does not like.
>
> They can't have unresolved symbols to the kernel or other binaries. I believe the unresolved symbols are the ones that would have been the output from capidl. Adding the '-r' flag is definitely not what you want.

Yes, I expected that there would be some library to provide the symbols but did not find any.

>> Concerning the image builder, I would need to look at the specific input case.
>
> The coyotos project has been mothballed. There is nobody working on it at this time, and the tool chains have not been brought forward to run on current versions of Fedora. I don't have the time to bring them forward, so I'm afraid you may be out of luck here.

The files are what is in the tree. Since capidl is broken the input is bogus. I am not surprised building the image fails.

Cheers

kg
-----------------------------------------------------
Mail.be, WebMail and Virtual Office
http://www.mail.be
|
From: Kitty G. <kit...@ma...> - 2012-09-05 14:16:42
|
Hello,

Thanks for your quick reply.

"Jonathan S. Shapiro" <sh...@er...> wrote:
> Kitty:
> 1. The process for building these tools is delicate, and (at this time) really only works if you proceed by rebuilding the RPMs.

That could take care of the symlinks. The build errors will obviously remain even when built as RPM.

> 2. The coyotos and capros tool chains really are not the same. The coyotos tools implement compilation models that the capros tools do not.
> 3. Trying to build capros using the coyotos tool chain won't work.

Then hardcoding the tool prefix is not that much of an issue, I guess. I was able to run the hello world demo of capros (built with capros tools) but coyotos fails linking any native binaries.

The first problem is that the coyotos binaries have unresolved symbols to the kernel and other binaries, which the linker does not like. I tried adding the -r flag like this:

 $(BUILDDIR)/Constructor: $(OBJECTS)
-	$(GCC) -small-space $(GPLUSFLAGS) $(OBJECTS) $(LIBS) $(STDLIBDIRS) -o $@
+	$(GCC) -small-space $(GPLUSFLAGS) $(OBJECTS) $(LIBS) $(STDLIBDIRS) -r -o $@

The other problem is that the image builder does not like the image descriptions, which is something I cannot work around because I have no idea about these. Maybe looking at the spec in more detail would help. Anyway, the error:

make[6]: Entering directory `/home/kg/coyotos/src/base/test/bring-up/Handler'
/home/kg/coyotos/host/bin/mkimage -t i386 -o BUILD/i386-unknown-coyotos/Handler.img -I. -LBUILD/i386-unknown-coyotos --MD --MF BUILD/i386-unknown-coyotos/.Handler.img.m -f Handler.mki
/home/kg/coyotos/usr/include/mki/coyotos/Util.mki:54:8 Inappropriate capability type to get_l2g()
get_l2g(0x8558e40)
  at /home/kg/coyotos/usr/include/mki/coyotos/Util.mki:54:8
==(...)
  at /home/kg/coyotos/usr/include/mki/coyotos/Util.mki:54:29
load_small_image(0x853afd0, 0x85383f0, 0x853b160)
  at Handler.mki:28:22
make_ironman_process(0x853afd0, 0x85383f0, 0x853b160)
  at Handler.mki:36:17

Cheers

kg
-----------------------------------------------------
Mail.be, WebMail and Virtual Office
http://www.mail.be
|
From: Jonathan S. S. <sh...@er...> - 2012-09-05 13:29:25
|
Kitty:

1. The process for building these tools is delicate, and (at this time) really only works if you proceed by rebuilding the RPMs.

2. The coyotos and capros tool chains really are not the same. The coyotos tools implement compilation models that the capros tools do not.

3. Trying to build capros using the coyotos tool chain won't work.

Jonathan

On Wed, Sep 5, 2012 at 3:57 AM, Kitty Guy <kit...@ma...> wrote:
> Hello,
>
> I tried building capros but the current cross-tools won't build.
>
> I applied some patches to get the tools built:
> https://bitbucket.org/kittyguy/ccs-xenv/changesets
>
> There are still some problems. Once, the configure in coytools would fail all tests with sed complaining it cannot open conftest.c. After poking around to find what the problem was, it went away and I cannot reproduce it anymore.
>
> Another issue is that the coyotos linker scripts are installed in a wrong place, and ld does not link any of the capros binaries because it would not find them. Needed some symlinks to make the script usable:
>
> $ readlink /capros/host/lib/ldscripts/elf_i386_coyotos_small.xc
> ../../i386-unknown-capros/lib/ldscripts/elf_i386_coyotos_small.xc
>
> Finally, it is quite stupid that the coyotos and capros build systems insist on having each a copy of the same tools under a different name. I tried to use tools built with the default coyotos prefix, but the SSL binary would not build. Settling on one name would save quite a bit of disk space.
>
> Cheers
>
> kg
> -----------------------------------------------------
> Mail.be, WebMail and Virtual Office
> http://www.mail.be
|
From: Kitty G. <kit...@ma...> - 2012-09-05 11:17:00
|
Hello,

I tried building capros but the current cross-tools won't build.

I applied some patches to get the tools built:
https://bitbucket.org/kittyguy/ccs-xenv/changesets

There are still some problems. Once, the configure in coytools would fail all tests with sed complaining it cannot open conftest.c. After poking around to find what the problem was, it went away and I cannot reproduce it anymore.

Another issue is that the coyotos linker scripts are installed in a wrong place, and ld does not link any of the capros binaries because it would not find them. Needed some symlinks to make the script usable:

$ readlink /capros/host/lib/ldscripts/elf_i386_coyotos_small.xc
../../i386-unknown-capros/lib/ldscripts/elf_i386_coyotos_small.xc

Finally, it is quite stupid that the coyotos and capros build systems insist on having each a copy of the same tools under a different name. I tried to use tools built with the default coyotos prefix, but the SSL binary would not build. Settling on one name would save quite a bit of disk space.

Cheers

kg
-----------------------------------------------------
Mail.be, WebMail and Virtual Office
http://www.mail.be
|
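The symlink workaround above can be reproduced in a scratch tree like this. The directory layout matches the paths shown in the mail, but the tree root is my own choice, and `elf_i386_coyotos_small.xc` here is an empty stand-in for the real linker script:

```shell
ROOT=$PWD/capros-scratch
mkdir -p "$ROOT/host/lib/ldscripts" \
         "$ROOT/host/i386-unknown-capros/lib/ldscripts"
# Stand-in for the linker script the coyotos install actually provides.
touch "$ROOT/host/i386-unknown-capros/lib/ldscripts/elf_i386_coyotos_small.xc"
# Symlink it into the directory where ld looks for it, using the same
# relative target that the readlink output in the mail shows.
ln -sf ../../i386-unknown-capros/lib/ldscripts/elf_i386_coyotos_small.xc \
       "$ROOT/host/lib/ldscripts/elf_i386_coyotos_small.xc"
readlink "$ROOT/host/lib/ldscripts/elf_i386_coyotos_small.xc"
```

A relative symlink target keeps the fix working if the whole install tree is moved.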
From: Matt R. <ra...@gm...> - 2011-07-05 13:29:53
|
Zarutian (cc'd) asked about setting up the capros dev tools in a vm; attached is a shell script that does that (it works here :). You'll want to modify ks.cfg, at least the rootpw line; the default password is... 'pasword'. You do not want to use this ks.cfg outside of qemu (it's going to wipe the disk).

It's a little big-boned, at 3G... I spent too much time trying to trim the fat; I failed.

At the grub prompt, you need to select the 'basic video' option, and hit tab to edit the kernel command line, appending the following:

ks=floppy:/ks.cfg

After it's done, you should be able to:

qemu-kvm -hda fedora.qcow

Check out capros from cvs, and follow the normal build instructions.

Hope that helps.
|
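Editing the `rootpw` line the mail warns about can be scripted rather than done by hand. The ks.cfg contents below are a minimal stand-in (only the rootpw line is taken from the mail), and the replacement password is obviously just an example:

```shell
# Minimal stand-in for the relevant part of the kickstart file.
printf 'lang en_US.UTF-8\nrootpw pasword\n' > ks.cfg
# Replace the default password with one of your own before installing.
sed -i 's/^rootpw .*/rootpw use-a-real-password/' ks.cfg
grep '^rootpw' ks.cfg
```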
From: Matt R. <ra...@gm...> - 2011-04-14 08:39:44
|
On Wed, Apr 13, 2011 at 10:01 PM, Matt Rice <ra...@gm...> wrote:
> nothing terribly important but food for thought
>
> so in attempting to port to cortex-A8 I inevitably let my mind wander to the question: in what ways can we leverage the additional security extensions provided, "TrustZone" [1] || [2]
>
> unfortunately it doesn't seem to be a natural fit for capability systems. by splitting things into secure and insecure worlds, in some ways you could potentially avoid the need for attenuation, but you inevitably lose the ability to do fine-grained access control: the ability to give secure access to one device, yet withhold it from another while providing 'non-secure' access.

This probably deserves a better explanation. What I mean is: say you have a keyboard driver. You can then attenuate this to a "non-exclusive keyboard capability" and an "exclusive keyboard capability"; the exclusive keyboard capability then cuts off all non-exclusive access to the keyboard until some time.

With a direct mapping of the trustzone stuff to capabilities, you could potentially hand out "keyboard capabilities" like candy, along with "secure" and "non-secure" capabilities that get passed to the keyboard capability. Setting the device to secure mode then magically disables all insecure access, but it seems your granularity is limited to a single "secure bit". (My limited understanding of the domain protection model available in the mmu may mean this is more flexible than I believe, but still you are limited to 16 domains.)

> though, it could maybe be used in combination with attenuation; it'd have to be in ways which neither compromise nor tie us to this specific implementation.

And this would just add some extra assurance: should someone somehow get access to the keyboard location, they'd need to also be running with a secure bit.

> If nothing else, it can be used as possibly intended, transparently and on top of a system oblivious to it.
>
> I guess I'm curious if anyone else has any thoughts/knows of research done on the subject. Googling doesn't really seem to provide anything but marketing stuff.
>
> http://infocenter.arm.com/help/topic/com.arm.doc.prd29-genc-009492c/index.html
> (pdf)
> http://infocenter.arm.com/help/topic/com.arm.doc.prd29-genc-009492c/PRD29-GENC-009492C_trustzone_security_whitepaper.pdf
|