linuxtuples-devel Mailing List for LinuxTuples
Brought to you by: wware
Archive: May 2006 (9 messages), Sep 2006 (12), May 2009 (2).
From: Robert B. <R.G...@uv...> - 2009-05-08 11:49:22
Hi Will,

Bram moved from SARA to Canada (Slant Six Games) a couple of years ago, so I am not sure whether he received your email. Thanks for the update. We're not actively using LinuxTuples anymore, but should the need arise, I will make sure to remember your email.

Best,
-- Rob

--
Robert Belleman, PhD, Informatics Institute, Faculty of Science
Universiteit van Amsterdam, Science Park 107, 1098 XG Amsterdam
The Netherlands. T: +31 20 525 7272/7462 F: +31 20 525 7419
W: http://www.science.uva.nl/~robbel/ E: R.G...@uv...

Will Ware wrote:
> I've been migrating some of my old bits of code, including LinuxTuples,
> to Google Code:
> http://code.google.com/p/wware-misc/source/browse/#svn/trunk/linuxtuples-1.03
>
> I haven't taken the time to really get familiar with everything
> available on the Google Code website. They have some of the things
> Sourceforge offers, and are probably more agreeable in many ways,
> including using Subversion instead of CVS.
>
> Bram, I remember you had some changes you had made which, if memory
> serves, were the most recent things to go into it. If you get a chance
> to make sure I haven't stupidly left them out, it might make sense to
> promote LinuxTuples out of the "wware-misc" area into its own project area.
>
> Will Ware
From: Will W. <ww...@al...> - 2009-05-08 03:16:05
I've been migrating some of my old bits of code, including LinuxTuples, to Google Code:

http://code.google.com/p/wware-misc/source/browse/#svn/trunk/linuxtuples-1.03

I haven't taken the time to really get familiar with everything available on the Google Code website. They have some of the things Sourceforge offers, and are probably more agreeable in many ways, including using Subversion instead of CVS.

Bram, I remember you had some changes you had made which, if memory serves, were the most recent things to go into it. If you get a chance to make sure I haven't stupidly left them out, it might make sense to promote LinuxTuples out of the "wware-misc" area into its own project area.

Will Ware
From: Will W. <ww...@al...> - 2006-09-15 15:21:07
On 9/15/06, Bram Stolk <br...@sa...> wrote:
> For me, N is low enough not to notice perf gains with binary
> or hash lookups. My gut feeling is that it's not worth the effort.
> But then again, I don't know what you use the tspace for.

I'm not using it for anything at the moment; I just had the hashing idea in the back of my mind and thought it might be time to think about it again. I'll add a note about it as a possibility, but I won't do anything with it.

As for using STL classes, I'd prefer to stay with pure C. Moving to C++ might discourage some potential users or limit portability. Maybe I'm just being old-fashioned and should allow myself to be dragged into the 1990s.

Will
From: Will W. <ww...@al...> - 2006-09-15 14:48:11
On 9/15/06, Robert Belleman <ro...@sc...> wrote:
> A hashtable definitely makes sense. We had one application here where
> we exchange tuples that contain megabytes of data in one of the first
> three elements. And we don't want to force users into adopting certain
> style habits in the structure of their tuples. As an alternative, you
> could compute a hash key from the first N bytes of the first elements
> in a tuple (perhaps md5sum?) and use lists for tuples that hash-clash.

http://en.wikipedia.org/wiki/Hash_table

Using a list for the clashes is common practice, and necessary if you can't guarantee clashes will never occur. So you always have an O(N) subsearch at the end, but it can be a lot smaller if your hashtable is clever. (And as I mentioned earlier, you only lock a portion of the tuple space, which helps concurrent requests.) For small hashes, the hashtable can simply be an array in memory with a lookup time of O(1). For larger hashes it needs some kind of memory-efficient sparse representation, and the time to search it will go up. But it's still faster than making the whole tuple space one big linked list, as we have it now.

I agree that we don't want to enforce style rules on tuples. If we must, we should let users choose their own rules, e.g. tag their tuples with a user-specified hashing policy.

It turns out MD5 isn't great as a hashing algorithm. It does a lot of work to make the hash cryptographically secure; hashtables for searching don't require anything like that. The Wikipedia article discusses the choice of hash function and suggests some better options. Google has a hash library that might be relevant, though it's in C++ rather than C. http://goog-sparsehash.sourceforge.net/

Hashing only the first N-or-fewer bytes of an element is definitely a good idea. Assuming we decide to tag tuples with hashing policies, we can get any of these requests for searches:

* A search with the same hashing policy: the best case, all the benefits of hashing are in my favor.
* A search with a different hashing policy: I should try searching by its policy first, because that search is fast and relevant, and it locks only a portion of the tuple space. If that is unsuccessful, I need to search the entire space.
* A search with no specified hashing policy: try each of the existing policies first (since, as before, hashed searches are faster and less invasive) and then search the entire space.

We could avoid the last two ugly cases by declaring that each hashing policy represents a tuple space disjoint from all other tuple spaces on the same server. There would still be an unhashed, policy-less space, but we would encourage users to specify policies for better performance.

Disjoint spaces isn't a bad idea. Chances are that if you're putting out a tuple, you have a general idea who the intended recipient is, or at least you need to specify the tuple's format in general terms, and it's not much additional effort to include a hashing policy in that specification.

The simplest hash policy is to say that a tuple should be hashed only on a single element, and specify which element, e.g. the second. The next simplest policy I can imagine would use bits in a 32-bit word to select which of the first 32 elements should be hashed. That would require a sparse-representation hash table, since a full table would be way too huge.

Will
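A minimal sketch, in C, of the single-element hashing policy discussed above, under stated assumptions: the struct and constant names are hypothetical (the real LinuxTuples structures may differ), the table is a fixed power-of-two array of chains, and FNV-1a stands in for MD5 as a cheap non-cryptographic hash.

#include <stdint.h>
#include <stddef.h>

#define NBUCKETS    4096   /* power of two, so masking replaces modulo */
#define HASH_PREFIX 64     /* hash at most the first 64 bytes, per Robert's idea */

/* FNV-1a: cheap, non-cryptographic, good enough for table lookup. */
static uint32_t fnv1a(const unsigned char *data, size_t len)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 16777619u;
    }
    return h;
}

/* Hypothetical policy: hash exactly one designated element. */
struct hash_policy { int elt_index; };

/* Hypothetical tuple element: raw bytes plus length. */
struct element { const unsigned char *bytes; size_t len; };

/* Bucket index for the element selected by the policy. */
static uint32_t bucket_for(const struct element *e)
{
    size_t n = e->len < HASH_PREFIX ? e->len : HASH_PREFIX;
    return fnv1a(e->bytes, n) & (NBUCKETS - 1);
}

A template whose policy element is a wildcard cannot use the table at all and falls back to the full linear scan, which corresponds to the "search the entire space" cases above.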
From: Bram S. <br...@sa...> - 2006-09-15 13:55:52
Will Ware wrote:
> Does a hashtable on the server make sense? The idea would be to
> generate hashes for the first three elements of the tuple, which would
> index into a cubical table of linked lists of stored tuples. When
> reading or getting, depending on the number and placement of
> wildcards, the cube would be searched at a point, or a line segment,
> or a square. There would need to be separate hashtables for tuples
> shorter than three elements. I chose three only because I can easily
> envision a cube, and envisioning a tesseract takes more work.
>
> For very large numbers of tuples, and searches with few wildcards,
> this would make reading and getting much more efficient. It would be a
> fair amount of work to implement, but the payoff might justify it.

In theory, yes: hashtables give you O(1) lookup, where we now have O(n). I don't think that a 3D map would be req'd. Just hash all elts to a single hash value, and use STL's std::hash_map<>.

In practice, you have to be very careful. Only for very large N does hashing make sense. std::map<> takes e.g. O(log N), and is likely to be just as fast. For me, N is low enough not to notice perf gains with binary or hash lookups.

Before you undertake a major effort, I suggest you do a small test on std::list, std::map and std::hash_map, to see at what N you will reap performance benefits. STL has built-in hashing funcs for strings, and probably floats and ints as well, so they match your linuxtuples primitives. You would need to do the hashing recursively for nested tuples, else your hashing distribution goes wrong.

My gut feeling is that it's not worth the effort. But then again, I don't know what you use the tspace for. If you have 100M tuples in it, I would pursue it.

Bram

> The question of whether the hashtable should be linear, square,
> cubical, etc. would depend on the average or expected number of
> wildcards in the first few elements of a template, and there would be
> a tradeoff with hashtable memory size on the server. If the first two
> elements are rarely wildcards, but the third is usually a wildcard,
> then a cube offers no advantage over a square.
>
> Bram, AFAIK you are currently the heaviest user of LinuxTuples on the
> planet. Do you have a sense for the distribution of wildcards in the
> first few elements? Would that be an easy statistic to collect while
> your system is running? Any thoughts on how many bits a hash value
> should have?
>
> Will
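Bram's crossover measurement can be tried without STL. Below is a rough, self-contained C sketch (hypothetical throwaway test, not LinuxTuples code) that times one worst-case lookup in a linear list against a chained hash table as N grows; a real test would repeat each lookup many times, average, and free its allocations.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define NBUCKETS 65536

struct node { char key[16]; struct node *next; };

static unsigned hash(const char *s)
{
    unsigned h = 2166136261u;
    while (*s) { h ^= (unsigned char)*s++; h *= 16777619u; }
    return h & (NBUCKETS - 1);
}

int main(void)
{
    for (long n = 1000; n <= 1000000; n *= 10) {
        struct node *list = NULL;
        struct node **table = calloc(NBUCKETS, sizeof *table);
        for (long i = 0; i < n; i++) {
            struct node *a = malloc(sizeof *a), *b = malloc(sizeof *b);
            snprintf(a->key, sizeof a->key, "k%ld", i);
            strcpy(b->key, a->key);
            a->next = list; list = a;          /* plain linked list */
            b->next = table[hash(b->key)];     /* chained hash table */
            table[hash(b->key)] = b;
        }
        const char *want = "k0";   /* tail of the list: worst case for the scan */
        struct node *p;
        clock_t t0 = clock();
        for (p = list; p && strcmp(p->key, want); p = p->next)
            ;
        clock_t t1 = clock();
        for (p = table[hash(want)]; p && strcmp(p->key, want); p = p->next)
            ;
        clock_t t2 = clock();
        printf("N=%7ld  list: %6ld us  hash: %6ld us\n", n,
               (long)((t1 - t0) * 1000000 / CLOCKS_PER_SEC),
               (long)((t2 - t1) * 1000000 / CLOCKS_PER_SEC));
    }
    return 0;
}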
From: Robert B. <ro...@sc...> - 2006-09-15 13:55:33
Bram Stolk wrote:
[...]
> Take e.g. peer 2 peer: I've always been bothered by the use of tcp in
> torrents. It makes absolutely no sense! You get chunks from the peers
> all the time... if one is lost, no problem: you'll get it from another
> peer later. Yet, bittorrent does TCP.

I've been wondering for some time now why nobody has ever written a distributed tuplespace implementation that uses p2p communication protocols. These would be ideal for large applications with many, many clients distributed over a wide area.

-- Rob
From: Bram S. <br...@sa...> - 2006-09-15 13:41:41
Will Ware wrote:
> On 9/14/06, Bram Stolk <br...@sa...> wrote:
>> As an optimization, I've added a best-effort communication channel
>> (UDP based) to the tuple server.
>
> That sounds interesting. In what kind of situation does that come up?
> Any rough guess of the TCP overhead?

TCP overhead is enormous. I don't have exact figures, but all sorts of handshaking is required between parties to set up, use, and close a connection. Each handshake takes a round trip, so a TCP communication is several times slower than your ping's rtt. A single UDP datagram should take half your ping's rtt: 0.5*rtt instead of N*rtt for TCP.

This is a big issue on WANs. The speed of light is lower than you think: it takes light almost 0.1 sec to travel from Europe to CA, USA, and back again.

> I could imagine that if I had zillions of jobs to delegate, I'd
> put_be() all the requests, collect the answers for a while, and then
> check to see which answers I hadn't gotten, and put_be() those again
> (or maybe just put() them, since their smaller number might make the
> TCP overhead acceptable). Is this the kind of usage you have in mind?

I have a different use: I'm working with Robert Belleman to use the tspace for viz/collab work. A lot of information in a visualization loop is not essential. In video, missing a few pixels, or even an entire video frame, does not affect the usability of the video. In steering, losing joystick data for a fraction of a second is no biggie (on the other hand, you cannot afford to lose e.g. a QUIT button press). In rendering 3D frames, missing a changed transformation of an object is surmountable if you will receive a new trf next frame.

All of these are cases where computer output is generated to be interpreted by a human, and often the human is too slow to notice short disruptions. The cost of reliable communication is not worth it there.

But the scenario you described makes a lot of sense too. When processing in bulk, where order is not important, you could work with udp. Take e.g. peer 2 peer: I've always been bothered by the use of tcp in torrents. It makes absolutely no sense! You get chunks from the peers all the time... if one is lost, no problem: you'll get it from another peer later. Yet, bittorrent does TCP. I started writing a p2p system on UDP in python some time ago, but that's gathering dust now.

Bram
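To put illustrative numbers on this (rough figures, not measurements): Amsterdam to California is roughly 8,800 km, and light in fiber travels at about 2*10^8 m/s, so one direction costs about 44 ms and a round trip about 88 ms. A TCP exchange needing, say, four round trips for setup, request/response, and teardown then costs around 350 ms, while a single best-effort datagram costs half a round trip, about 44 ms.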
From: Will W. <ww...@al...> - 2006-09-15 13:19:51
Another potential benefit of a hashtable occurs to me. Currently we have one mutex that locks the entire tuple space. With a hashtable, we could lock only portions of the tuple space, leaving the rest available for concurrent accesses. This might also help performance.

Will
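A minimal sketch of the partial-locking idea, assuming a chained hash table guarded by one pthread mutex per bucket (all names hypothetical; today's server holds a single lock over the whole space):

#include <pthread.h>

#define NBUCKETS 4096

struct tuple_node { /* stored tuple payload lives here */ struct tuple_node *next; };

static struct tuple_node *bucket[NBUCKETS];
static pthread_mutex_t bucket_lock[NBUCKETS];   /* one lock per chain */

static void locks_init(void)   /* called once at server startup */
{
    for (int i = 0; i < NBUCKETS; i++)
        pthread_mutex_init(&bucket_lock[i], NULL);
}

/* Search one chain under its own lock; every other chain stays available
 * to concurrent clients.  The visitor runs while the lock is held, so it
 * must copy (or unlink) a matching tuple before the lock is released and
 * another thread gets a chance to free it. */
static void search_bucket(unsigned h, void (*visit)(struct tuple_node *))
{
    unsigned b = h % NBUCKETS;
    pthread_mutex_lock(&bucket_lock[b]);
    for (struct tuple_node *p = bucket[b]; p; p = p->next)
        visit(p);
    pthread_mutex_unlock(&bucket_lock[b]);
}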
From: Will W. <ww...@al...> - 2006-09-15 13:12:26
Does a hashtable on the server make sense? The idea would be to generate hashes for the first three elements of the tuple, which would index into a cubical table of linked lists of stored tuples. When reading or getting, depending on the number and placement of wildcards, the cube would be searched at a point, or a line segment, or a square. There would need to be separate hashtables for tuples shorter than three elements. I chose three only because I can easily envision a cube, and envisioning a tesseract takes more work.

For very large numbers of tuples, and searches with few wildcards, this would make reading and getting much more efficient. It would be a fair amount of work to implement, but the payoff might justify it.

The question of whether the hashtable should be linear, square, cubical, etc. would depend on the average or expected number of wildcards in the first few elements of a template, and there would be a tradeoff with hashtable memory size on the server. If the first two elements are rarely wildcards, but the third is usually a wildcard, then a cube offers no advantage over a square.

Bram, AFAIK you are currently the heaviest user of LinuxTuples on the planet. Do you have a sense for the distribution of wildcards in the first few elements? Would that be an easy statistic to collect while your system is running? Any thoughts on how many bits a hash value should have?

Will
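A sketch in C of the cubical index described above (all names hypothetical). Three per-element hashes pick a chain; a wildcard in the template widens the probe from a point to a line or a plane along that axis:

#define SIDE 64   /* 64^3 = 262,144 chains; trades memory for selectivity */

struct chain;     /* linked list of stored tuples, as the server keeps now */
static struct chain *cube[SIDE][SIDE][SIDE];

/* Template hash: idx[k] is the hash of element k, or -1 for a wildcard. */
struct probe { int idx[3]; };

/* Visit every chain the template could match: a point with no wildcards,
 * a line with one, a plane with two, the whole cube with three. */
static void for_each_candidate(const struct probe *p,
                               void (*visit)(struct chain *))
{
    int lo[3], hi[3];
    for (int k = 0; k < 3; k++) {
        lo[k] = (p->idx[k] < 0) ? 0 : p->idx[k];
        hi[k] = (p->idx[k] < 0) ? SIDE - 1 : p->idx[k];
    }
    for (int x = lo[0]; x <= hi[0]; x++)
        for (int y = lo[1]; y <= hi[1]; y++)
            for (int z = lo[2]; z <= hi[2]; z++)
                visit(cube[x][y][z]);
}

Each per-element hash only needs to be reduced modulo SIDE, so six bits per axis (18 bits total) already addresses this whole cube, which bears on the how-many-bits question.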
From: Bram S. <br...@sa...> - 2006-09-14 22:33:41
Hi,

As an optimization, I've added a best-effort communication channel (UDP based) to the tuple server. With this, a put_be() will use UDP to put a tuple into space, using best effort. I'll add replace_be() later. get() and read() make little sense UDP-based, because the comm for those ops is two-way, and if half of the comm can be lost, it would cause too much headache.

Currently, the put_be() either succeeds or fails, and of course there is no feedback to the caller. In practice, dropped datagrams are extremely rare. Performance-wise it is a very fast operation: no handshaking is done, just a simple one-way comm. This should be very useful in high-latency networks.

TODO: UDP is limited by datagram size, so tuples cannot be too large. Currently, I have no check on size yet.

Bram
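A rough client-side sketch of what a best-effort put over UDP can look like, including the size check the TODO calls for. The function name and the assumption that the tuple is already serialized into a byte buffer are hypothetical; this is not the actual LinuxTuples wire protocol.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Stay well under the ~64 KB UDP limit; fragmentation starts much
 * earlier, so larger tuples should fall back to the TCP put(). */
#define MAX_BE_PAYLOAD 8192

int put_be(const char *host, int port, const void *buf, size_t len)
{
    if (len > MAX_BE_PAYLOAD)
        return -1;   /* too big for best effort; caller should use put() */

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    if (inet_pton(AF_INET, host, &addr.sin_addr) != 1) {
        close(fd);
        return -1;
    }

    /* One one-way datagram: no handshake, no ack, no feedback. */
    ssize_t n = sendto(fd, buf, len, 0,
                       (struct sockaddr *)&addr, sizeof addr);
    close(fd);
    return n == (ssize_t)len ? 0 : -1;
}

A dropped or oversized datagram simply means the tuple never lands in the space, which is exactly the contract put_be() promises.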
From: Robert B. <ro...@sc...> - 2006-09-13 13:12:08
Hi Will,

Bram and I are chasing some bugs in LinuxTuples; could you please grant me access to your CVS repo (willware.net:/usr/local/cvsroot) so I can sync my sources?

Thanks, best,
-- Rob

--
Robert Belleman, PhD, Informatics Inst., Faculty of Science
Universiteit van Amsterdam, Kruislaan 403, 1098 SJ Amsterdam
the Netherlands. Tel: +31 20 525 7510 - Fax: +31 20 525 7419
http://www.science.uva.nl/~robbel/ - ro...@sc...
From: Bram S. <br...@sa...> - 2006-09-06 15:58:28
Will Ware wrote:
> That sounds useful. I can't see any reason not to check it into the
> head. Feel free to do so.

Great! It's in your CVS server now.

Bram

--
Bram Stolk, VR Engineer
SARA, Amsterdam. tel +31 20 592 3000

"Windows is a 32-bit extension to a 16-bit graphical shell for an 8-bit
operating system originally coded for a 4-bit microprocessor by a 2-bit
company that can't stand 1 bit of competition."
From: Bram S. <br...@sa...> - 2006-09-06 14:35:56
Will,

A replacement of a tuple in the space is currently an awkward process: you repeatedly do a non-blocking GET, followed by a PUT. It typically takes 3 connections to the server to replace a single tuple.

I'm planning to extend the set of operations with a REPLACE that takes two arguments: a template to specify the tuples that need scrapping, and a tuple that replaces the scrapped tuples.

Let me know if you want me to put this change in the tree; otherwise I will keep it in my working tree only, or maybe do a cvs branch, as it would be quite a big change.

Bram

--
Bram Stolk, VR Engineer
SARA, Amsterdam. tel +31 20 592 3000

"Windows is a 32-bit extension to a 16-bit graphical shell for an 8-bit
operating system originally coded for a 4-bit microprocessor by a 2-bit
company that can't stand 1 bit of competition."
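A sketch of how the proposed operation might look from the client's side, next to the loop it replaces. Everything here is hypothetical (declarations invented for illustration); only the GET/PUT/REPLACE semantics come from the proposal above.

/* Hypothetical client API, invented for illustration. */
struct conn;
struct tuple;
extern struct tuple *get_nonblocking(struct conn *c, struct tuple *tmpl);
extern void put(struct conn *c, struct tuple *t);
extern void tuple_free(struct tuple *t);
extern void tuple_replace(struct conn *c, struct tuple *tmpl, struct tuple *fresh);

/* Today: repeated GET then PUT, typically 3 connections per replacement. */
void replace_by_hand(struct conn *c, struct tuple *tmpl, struct tuple *fresh)
{
    struct tuple *old;
    while ((old = get_nonblocking(c, tmpl)) != NULL)
        tuple_free(old);   /* scrap every tuple matching the template */
    put(c, fresh);         /* then insert the replacement */
}

/* Proposed: one REPLACE request, applied in a single step on the server. */
void replace_with_op(struct conn *c, struct tuple *tmpl, struct tuple *fresh)
{
    tuple_replace(c, tmpl, fresh);
}

Besides saving connections, a server-side REPLACE closes the window in which another client can observe the space with the old tuples gone but the new one not yet inserted.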
From: Bram S. <br...@sa...> - 2006-09-06 14:18:18
Hi,

I did a major update on the cvs tree. From the ChangeLog:

Much more rigorous error checking. Improved logging. Use higher backlog parameter for bind(). Status report on stderr of tuple_server: to use this, set the LINUXTUPLES_STATUS environment variable. The server will show the status of all its threads:

  '-' means no thread
  'b' means the thread is blocked on a get/read operation
  'a' means the thread tries to get access to the tuple space
  'r' means the thread is in a recv() system call
  'p' means the thread is currently putting a tuple in the space

By the way: there is still a race condition on SMP machines. It occurs a lot less than in the semaphore version, but it still does every once in a while, in complex environments. It manifests itself in recv()'s that fail, and connections that are closed prematurely. It does not occur on a non-SMP test machine I tried.

In the version I checked in, every system call is now checked for return status, which helps robustness. 1000 iterations of a 26-way ping-pong work flawlessly, even on SMP, and run in merely a couple of seconds.

bram

--
Bram Stolk, VR Engineer
SARA, Amsterdam. tel +31 20 592 3000

"Windows is a 32-bit extension to a 16-bit graphical shell for an 8-bit
operating system originally coded for a 4-bit microprocessor by a 2-bit
company that can't stand 1 bit of competition."
From: Will W. <ww...@al...> - 2006-05-12 14:27:54
This is some information that Sourceforge sent me about changes to their CVS service, including some discussion of the server problems of the last week.

Will Ware

---------- Forwarded message ----------
From: SourceForge.net Team <no...@so...>
Date: May 11, 2006 7:17 PM
Subject: SourceForge.net: CVS service offering changes
To: ww...@al...

Greetings,

You are receiving this mail because you are a project admin for a SourceForge.net-hosted project.

One of our primary services, CVS, suffered a series of interrelated, critical hardware failures in recent weeks. We understand how frustrating this CVS outage must be to you and your users; however, our top priority remains preservation of the integrity of your data.

The series of CVS hardware failures prompted us to expedite the deployment of planned improvements to our CVS infrastructure, drawing upon much of the knowledge that we gained from our Subversion deployment. Our improved CVS service architecture, which we plan to deploy tomorrow afternoon (2006-05-12), will offer greater performance and stability and will eliminate several single points of failure. The Site Status page (https://www.sf.net/docs/A04) will be updated as soon as the new infrastructure is rolled out. In the interim, please read the important information provided below to learn about how these changes will affect your project.

Summary of changes, effective 2006-05-12:

1. Hostname for CVS service

   Old: cvs.sourceforge.net
   New: PROJECT_UNIX_NAME.cvs.sourceforge.net

   This change will require new working copies to be checked out of all repositories (so control files in the working copy will point to the right place). We will be updating the instructions we supply, but instructions that your team has written within documentation, etc. will need to be updated.

   cvs -d:pserver:ano...@cv...:/cvsroot/gaim co gaim

   would be changed to

   cvs -d:pserver:ano...@ga...:/cvsroot/gaim co gaim

2. ViewCVS

   We are moving from ViewCVS to its successor, ViewVC. ViewVC is currently in use for our Subversion service.

3. Sync delay

   Old: CVS pserver, tarballs and ViewCVS provided against a separate server which is a minimum of three hours behind developer CVS.
   New: ViewVC will be provided against developer CVS (it will be current). CVS pserver will be provided against a secondary server (not the developer server) with a maximum expected delay of two hours. Follow-up work is planned (this infrastructure takes us 80% of the way) to essentially eliminate the sync delay.

4. Read-only rsync service

   As a new service offering, we are now providing read-only rsync access against developer CVS. This allows projects to efficiently make on-demand backups of their entire CVS repository. All projects should be making regular backups of their CVS repository contents using this service.

5. Nightly tarball service

   Nightly tarball service is being dropped in lieu of the read-only rsync service. Projects which currently depend on nightly tarballs for repository backups will need to begin using rsync to make a backup copy of their repository contents. We see this as a major functional improvement. For a number of reasons, tarballs have fallen out of sync with the data in the repository at times in the past few years, and tarballs required a substantial amount of additional disk and I/O to generate. The move to read-only rsync allows backups to be produced on demand, with an update frequency chosen by the project.

6. Points of failure

   In the past, developer CVS service for all projects was provided from a single host, and CVS pserver service was provided from individual backend heads based on a split of the data. Under our new design, developer CVS and most of our CVS-related services are provided from one of ten CVS hosts (count subject to increase with growth). Each host is independent, and makes a backup copy of the repository data of another host (which is used to provide the pserver CVS service). Failure of a single host will impact only the availability of data on that host. Since the data is split among a larger number of hosts, the size of data impacted by an individual host outage is substantially smaller, and the time required for us to restore service will be substantially shorter.

This rapid architecture change has been made possible specifically by the research we performed for our recent launch of Subversion service. We've applied our best practices, produced a substantial amount of internal documentation, and kept an eye toward maintainability. This effort has allowed us to deploy this new architecture quickly once hardware was received, and will permit us to quickly scale this service horizontally as growth and demand require. Many other minor improvements have also been made to make the service offering less trouble-prone; the most important are listed above.

For a full description of the new service offering, and for information on how to use the services described above, please refer to the site documentation for the CVS service after the service has been launched: https://www.sf.net/docs/E04

Thank you,
The SourceForge.net Team
From: Bram S. <br...@sa...> - 2006-05-11 16:31:49
Note that SF cvs servers had a hard disk crash yesterday, so the migration will have to wait anyway...

Bram

-----Original Message-----
From: lin...@li... on behalf of Will Ware
Sent: Thu 5/11/2006 18:13
To: lin...@li...
Subject: [Linuxtuples-devel] Migration to Subversion

Here is a document the Python guys put together about migrating to Subversion.

http://www.python.org/dev/peps/pep-0347/

I still don't know when I'll find a block of time when I can give it the concentration I expect it will deserve, but at least I wanted to share the information.

Will Ware
From: Will W. <ww...@al...> - 2006-05-11 16:13:53
Here is a document the Python guys put together about migrating to Subversion.

http://www.python.org/dev/peps/pep-0347/

I still don't know when I'll find a block of time when I can give it the concentration I expect it will deserve, but at least I wanted to share the information.

Will Ware
From: Bram S. <br...@sa...> - 2006-05-11 11:06:52
Bram Stolk wrote:
> How can we fix this? As it is, linuxtuples' usefulness under python
> is heavily reduced.

Ah... never mind... I've found Py_BEGIN_ALLOW_THREADS. When this macro is used in py_linuxtuples.c, threads can be properly rescheduled when linuxtuples is blocking on I/O.

Bram

--
Bram Stolk, VR Engineer
SARA, Amsterdam. tel +31 20 592 3000

"Windows is a 32-bit extension to a 16-bit graphical shell for an 8-bit
operating system originally coded for a 4-bit microprocessor by a 2-bit
company that can't stand 1 bit of competition."
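The pattern, sketched for a hypothetical wrapper in py_linuxtuples.c (the helper names are invented; only Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS are the real CPython macros): release the interpreter lock around the blocking call, and reacquire it before touching Python objects again.

#include <Python.h>

struct tuple;     /* hypothetical: the C library's tuple type */
struct context;   /* hypothetical: an open server connection */

/* Hypothetical helpers; tuple_get() blocks on the server socket. */
extern struct tuple *tuple_get(struct context *ctx, struct tuple *tmpl);
extern struct context *get_context(PyObject *self);
extern struct tuple *template_from_args(PyObject *args);
extern PyObject *tuple_to_pyobject(struct tuple *t);

static PyObject *py_get(PyObject *self, PyObject *args)
{
    struct tuple *tmpl = template_from_args(args);  /* GIL held: touches Python objects */
    struct tuple *result;

    Py_BEGIN_ALLOW_THREADS         /* release the GIL... */
    result = tuple_get(get_context(self), tmpl);    /* ...while blocked on I/O */
    Py_END_ALLOW_THREADS           /* reacquire the GIL */

    return tuple_to_pyobject(result);  /* safe again to build Python objects */
}

With this in place, the starvation demo below runs as expected: the main thread keeps printing while the worker blocks in get().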
From: Bram S. <br...@sa...> - 2006-05-11 09:36:37
Hello,

I'm afraid that linuxtuples is currently not suited for multithreaded python code. In Python, the global interpreter lock is released for operations that block, e.g. I/O ops or a time.sleep() op. If a thread blocks on such an op, other python threads get a chance to use the interpreter. In linuxtuples, the get() and read() operations will block, yet do not relinquish the lock, causing other threads to starve.

The code below demonstrates the problem. If you comment out the conn.get(t) operation, both threads happily execute their iterations. With the get() operation in there, the main thread starves.

How can we fix this? As it is, linuxtuples' usefulness under python is heavily reduced.

Bram

import sys
import os
import time
import linuxtuples
import threading

def read_from_space():
    while True:
        print "#"
        t = ("does", "not", "exist")
        conn.get(t)
        time.sleep(0.5)

conn = linuxtuples.connect()
thread = threading.Thread(target=read_from_space)
thread.setDaemon(True)
thread.start()

while True:
    print "*"
    time.sleep(0.1)

--
Bram Stolk, VR Engineer
SARA, Amsterdam. tel +31 20 592 3000

"Windows is a 32-bit extension to a 16-bit graphical shell for an 8-bit
operating system originally coded for a 4-bit microprocessor by a 2-bit
company that can't stand 1 bit of competition."
From: Bram S. <br...@sa...> - 2006-05-10 16:16:28
Hi,

To improve the efficiency, I've added socket shutdown calls to linuxtuples. These are very appropriate in the linuxtuples case, because you can shut down each direction separately. This is such a good match because linuxtuples communications are typically two-stage affairs:

  client opens connection
  client sends tuple (template)
  client receives tuple
  client closes connection

After the client send stage (which is a server recv stage), the client can advise its peer that no more sending will be done. Similarly, the server can advise its peer that no more listening will be done.

Using this scheme, I noticed that all the TIME_WAIT sockets move from the server side to the client side, where they do less harm (because the burden is shared by multiple clients). Moreover, the socket FAQ also recommends shutdown() prior to close(), so they should be in there in any case.

Also, I've added more rigorous error checking and error printing. This helps in cases where the tuple rate is extremely high, causing the OS to deplete its socket address space.

Once a repo is back up, I'll commit this.

bram

--
Bram Stolk, VR Engineer
SARA, Amsterdam. tel +31 20 592 3000

"Windows is a 32-bit extension to a 16-bit graphical shell for an 8-bit
operating system originally coded for a 4-bit microprocessor by a 2-bit
company that can't stand 1 bit of competition."
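A minimal sketch of the two-stage exchange with a directional shutdown, client side. The socket is assumed already connected over TCP, and the send/recv helpers are hypothetical stand-ins for the library's (de)serialization:

#include <sys/socket.h>
#include <unistd.h>

struct tuple;

/* Hypothetical helpers that (de)serialize a tuple over the socket. */
extern int send_tuple(int sock, const struct tuple *t);
extern struct tuple *recv_tuple(int sock);

struct tuple *exchange(int sock, const struct tuple *tmpl)
{
    struct tuple *reply = NULL;

    if (send_tuple(sock, tmpl) == 0) {
        /* Stage 1 done: advise the server that no more data will
         * arrive from us, without closing our receive direction. */
        shutdown(sock, SHUT_WR);
        reply = recv_tuple(sock);
    }
    /* Full close; the TIME_WAIT state now sits on the client side. */
    close(sock);
    return reply;
}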
From: Bram S. <br...@sa...> - 2006-05-10 13:51:15
Will Ware wrote:
> and improvements that make it better than CVS, but I think the problem
> we are both having at the moment is due to server downtime, not
> deficiencies in CVS. Switching to Subversion wouldn't guarantee more
> server uptime.

Oh, but in practice it does! SF's SVN is much more reliable. It is the cvs service that keeps breaking down. I do a lot of work for opende.sf.net, and since this project switched from cvs to svn, the reliability is so much better.

> Info about Subversion on Sourceforge
> http://sourceforge.net/docman/display_doc.php?docid=31070&group_id=1
>
> I think I'll need some time to wrap my brain around Subversion. It's
> one of those things I've intended to learn about for a while. I have
> no problem with migrating from CVS to Subversion but I am not prepared
> to do it immediately.

Ok. There is a specific 'svn for cvs users' tutorial out there that may help. The basics are pretty simple: version nrs (revisions) are designated to entire source trees. If you change a single README, the rev goes up for the tree. Commits are done on a collection of files: if you change a .h, a .c and a README and commit in one go, the change is a single entity that brings the tree to a new revision nr. You typically work with the main branch of the repository, which is called 'trunk'.

Other than that, the cmds are very similar:

  cvs co        -> svn co
  cvs commit    -> svn commit
  cvs update    -> svn update
  cvs update -n -> svn status
  cvs annotate  -> svn annotate
  ...etc...

Also: history is preserved when going from SF cvs to SF svn. Bonus: passwds are only req'd when needed (committing) and not for simple stuff like diff, update, annotate, etc.

bram

--
Bram Stolk, VR Engineer
SARA, Amsterdam. tel +31 20 592 3000

"Windows is a 32-bit extension to a 16-bit graphical shell for an 8-bit
operating system originally coded for a 4-bit microprocessor by a 2-bit
company that can't stand 1 bit of competition."
From: Will W. <ww...@al...> - 2006-05-10 13:36:22
On 5/10/06, Bram Stolk <br...@sa...> wrote:
> I stumbled upon another race condition. When tuples are removed
> between unlock and sending, their memory can be freed, causing a
> 'Bad address' in the send() syscall. The fix is to move the
> YIELDACCESS down (and do it twice: also in early return).

Alternatively, we could do reference counting on tuples, so that we didn't have to lock the whole space for all that time. Dumping is something you'd do when debugging the space, and hopefully an infrequent occurrence, so the effort of adding reference counts is probably not warranted.

> I cannot commit to cvs, as cvs is now completely gone: both
> developer cvs and user cvs are not working.

It doesn't work for me either. Sourceforge also offers Subversion as an alternative to CVS. I understand that Subversion has many features and improvements that make it better than CVS, but I think the problem we are both having at the moment is due to server downtime, not deficiencies in CVS. Switching to Subversion wouldn't guarantee more server uptime.

Info about Subversion on Sourceforge
http://sourceforge.net/docman/display_doc.php?docid=31070&group_id=1

I think I'll need some time to wrap my brain around Subversion. It's one of those things I've intended to learn about for a while. I have no problem with migrating from CVS to Subversion but I am not prepared to do it immediately.

Will Ware
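A sketch of the reference-counting alternative Will mentions (hypothetical names, not the server source): each stored tuple is pinned while a dump is sending it, so a concurrent removal cannot free memory that send() is still reading.

#include <pthread.h>
#include <stdlib.h>

struct tuple;
extern void tuple_destroy(struct tuple *t);

struct tuple_entry {
    struct tuple *t;              /* the stored tuple */
    int refcount;                 /* 1 while stored, +1 per in-flight send */
    struct tuple_entry *next;
};

static pthread_mutex_t space_lock = PTHREAD_MUTEX_INITIALIZER;

static void entry_ref(struct tuple_entry *e)   /* called with space_lock held */
{
    e->refcount++;
}

static void entry_unref(struct tuple_entry *e)
{
    pthread_mutex_lock(&space_lock);
    int gone = (--e->refcount == 0);
    pthread_mutex_unlock(&space_lock);
    if (gone) {                   /* freed only once no sender still holds it */
        tuple_destroy(e->t);
        free(e);
    }
}

A dump would then take space_lock once, entry_ref() every entry, release the lock, send the tuples at leisure, and entry_unref() each entry afterwards.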
From: Bram S. <br...@sa...> - 2006-05-10 11:33:25
Hello,

To initiate this mailing list: I stumbled upon another race condition. It is less serious, as it only happens when doing a dump operation. What happens is this: for the dump op, the server locks the tspace to create a list of its contents. Then it releases the lock. After the release it sends the tuples. However... when tuples are removed between unlock and sending, their memory can be freed, causing a 'Bad address' in the send() syscall.

The fix is to move the YIELDACCESS down (and do it twice: also in the early return).

I cannot commit to cvs, as cvs is now completely gone: both developer cvs and user cvs are not working.

Bram

--
Bram Stolk, VR Engineer
SARA, Amsterdam. tel +31 20 592 3000

"Windows is a 32-bit extension to a 16-bit graphical shell for an 8-bit
operating system originally coded for a 4-bit microprocessor by a 2-bit
company that can't stand 1 bit of competition."
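In outline, the fix keeps the space locked until the last tuple has gone out. YIELDACCESS is the macro Bram names; everything else below is a hypothetical reconstruction of the dump handler, not the actual source:

struct tuple_list;
extern void GETACCESS(void);      /* assumed counterpart; only YIELDACCESS is named above */
extern void YIELDACCESS(void);
extern struct tuple_list *collect_all_tuples(void);
extern void send_tuples(int sock, struct tuple_list *l);

void handle_dump(int sock)
{
    GETACCESS();
    struct tuple_list *contents = collect_all_tuples();
    if (contents == NULL) {
        YIELDACCESS();            /* the fix applies in the early return too */
        return;
    }
    send_tuples(sock, contents);  /* nothing can free these while we hold the lock */
    YIELDACCESS();                /* previously this was released before
                                     send_tuples(), which opened the race */
}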