pycs-devel Mailing List for Python Community Server (Page 15)
From: Georg B. <gb...@mu...> - 2003-05-20 08:19:04
|
Hi!

I just made a small change to PyCS with regard to handling of multiple matching locations in access restrictions. Before this change, if any location matched your URL and was satisfied for the logged-in user, the user was permitted to access the page. Now all locations that match a URL need to be satisfied. The reason:

- set up a restriction for the complete blog as /
- set up special admin restrictions for /backup/

Before, this would allow all users that can access / to access /backup/, too. This is not what would be intended: you would have to set up single locations for every subfolder of your blog and for the main index.html to achieve what was expected. With the current CVS, if you access /backup/ your user needs to be in groups such that both paths are allowed. To get this, set up as follows:

- / with group users, and all users in that group
- /backup/ with group admins, with only the admins in that group

or

- / with groups users and admins (users are normal users, admins are admins)
- /backup/ with only group admins

I'll change the "official" documentation of this, too.

bye, Georg
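The change described above can be sketched in a few lines. This is a hypothetical illustration of the "all matching locations must be satisfied" rule, not the actual PyCS code; the `Location` class and `may_access` function are invented names.

```python
class Location:
    def __init__(self, prefix, groups):
        self.prefix = prefix       # path prefix, e.g. "/" or "/backup/"
        self.groups = set(groups)  # groups allowed at this location

    def matches(self, path):
        return path.startswith(self.prefix)

def may_access(path, user_groups, locations):
    """Old behaviour: ANY matching location satisfied -> allow.
    New behaviour (shown here): EVERY matching location must be satisfied."""
    matching = [loc for loc in locations if loc.matches(path)]
    if not matching:
        return True  # no restriction configured for this path
    return all(loc.groups & set(user_groups) for loc in matching)

locations = [
    Location("/", ["users", "admins"]),
    Location("/backup/", ["admins"]),
]

# A normal user can read the blog but not the backups, since /backup/
# matches both locations and the second one requires group admins:
print(may_access("/2003/05/20.html", ["users"], locations))   # True
print(may_access("/backup/dump.tgz", ["users"], locations))   # False
print(may_access("/backup/dump.tgz", ["admins"], locations))  # True
```

Under the old any-match rule, the second call would have returned True, which is exactly the hole this commit closes.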
|
From: Phillip P. <pp...@my...> - 2003-05-16 01:25:24
|
Ooh, great. I've been thinking about what sort of API to make for defining permissions in my standalone search engine, but seeing as you've already defined this, I'll just use it ;-) I'll make a tool to take a PyCS users.conf file and push it up to a server using this API, and then if you get it so PyDS can set access restrictions on > 1 server at once (wait until we've got integration with my search engine though), everything will just come out automatically.

Cheers, Phil

On Thu, May 15, 2003 at 03:51:44PM +0200, Georg Bauer wrote:
> [full text of Georg's access-restriction API announcement, reproduced as the next message below]
|
From: Georg B. <gb...@mu...> - 2003-05-15 13:52:59
|
Hi!

The following link: http://pyds.muensterland.org/weblog/2003/05/15.html#P118 announces a description of a way to set up access restrictions on blogs with an XML-RPC API. This is to allow blogging tools to set up user accounts and access restrictions from within the tool. I am planning to build such a tool for PyDS.

It's much like a <Location ~ /^..../> ... </Location> block in Apache, only there is only _one_ password list and restrictions are always based on groups. The blogging tools can easily build a GUI on top of the API.

One way to use this is to set up access restrictions for your backup path, so others can't access your backups. Or to set up private categories that are rendered and upstreamed, but should only be read by a group of visitors you personally know.

Phil: I have applied patches to search.py, but since I don't use ht://Dig, I can't test it. So you'd better look over my code before you upgrade pycs.net ;-)

swish.py doesn't honor the access restrictions, but since it only gives out the URL, I don't see big problems with this. Users could use the swish searching to discover what a document might be about, though. Phil once wrote that he had applied patches to swish.py to hook it into the access stuff, but I can't find them and so didn't bother to add my own stuff ;-)

This XML-RPC API is mostly ignorant of the kind of community server it runs on, so it could be implemented on top of PSS or RCS, too.

Comments? Ideas? Bugs?

bye, Georg
|
From: Georg B. <gb...@mu...> - 2003-04-29 22:24:08
|
Hi!

> I know that Radio and NNW both look at the Last-Modified header, and I
> think they do ETags too. If you subscribe to a comment feed in Radio,
> do you see 304 messages in your log?

Me? Yep. But I am using PyDS and I _know_ that it supports ETag :-)

> You could always generate Last-Modified from the date of the last
> comment, I guess ... is it stored in a parseable format? (I can't
> remember :)

Yep, that would be possible. I actually thought about it, but it's more work than just getting an MD5 hexdigest, so I just stopped after ETag worked ;-)

bye, Georg
|
From: Phillip P. <pp...@my...> - 2003-04-29 22:05:08
|
Great! I know that Radio and NNW both look at the Last-Modified header, and I think they do ETags too. If you subscribe to a comment feed in Radio, do you see 304 messages in your log? You could always generate Last-Modified from the date of the last comment, I guess ... is it stored in a parseable format? (I can't remember :)

Cheers, Phil

On Tue, Apr 29, 2003 at 06:03:21PM +0200, Georg Bauer wrote:
> [full text of Georg's ETag announcement, reproduced as the next message below]
|
From: Georg B. <gb...@mu...> - 2003-04-29 16:07:52
|
Hi!

I just checked in a change to modules/system/comments.py and trackback.py to add an ETag header and to respect If-None-Match headers. This could preserve bandwidth if RSS aggregators reading the comment feeds respect the ETag header. Don't know how many aggregators support it, though. Last-Modified headers (and If-Modified-Since) would be possible, too - but would require more work.

The way the ETag is generated: it's just an MD5 hex digest of the HTML source. So if it has the same content, it generates the same ETag. The ETag header is always added, so it will help with caches when accessing the normal HTML pages, too - as some browsers nowadays respect the ETag header (and many proxies do).

The _content_ is always generated, so this doesn't preserve CPU usage. It just preserves bandwidth by sending out a 304 instead of the content when nothing changed. So it shouldn't break anything.

bye, Georg
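The scheme described above is simple enough to sketch end to end. This is an illustrative reconstruction, not the actual comments.py code; `make_etag` and `respond` are invented names.

```python
import hashlib

def make_etag(html):
    # The ETag is just the MD5 hex digest of the rendered page, so
    # identical content always yields an identical ETag.
    return '"%s"' % hashlib.md5(html).hexdigest()

def respond(html, if_none_match=None):
    """Return (status, headers, body). The page is always rendered
    first, so this saves bandwidth (a 304 instead of the body),
    not CPU time -- matching the behaviour described above."""
    etag = make_etag(html)
    if if_none_match == etag:
        return 304, {"ETag": etag}, b""
    return 200, {"ETag": etag}, html

page = b"<html><body>comment feed</body></html>"
status, headers, body = respond(page)               # first fetch: 200
status2, _, body2 = respond(page, headers["ETag"])  # revalidation: 304
```

An aggregator that stores the ETag and sends it back as If-None-Match gets an empty 304 on every poll until a comment actually changes the page.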
|
From: Kevin A. <al...@se...> - 2003-04-20 00:15:39
|
> From: Georg Bauer
>
> > What I would like to do is simplify the nav bar for everything except
> > the home page, but so far it looks like the only template that ever
> > gets used is the home page template. That means that the large number
> > of nav items
>
> If I recall correctly, the homepage template is the one that is
> primarily used. The main template is mostly used in the desktop site
> stuff and pages like that (and maybe in non-weblog-related pages on the
> cloud, too). So the only idea I can offer would be to put in some
> conditional stuff into your homepage template to prohibit rendering of
> stuff that shouldn't be on the main index.html page.
>
> But please don't ask me what could trigger this conditional, as I never
> hacked up something like that and don't have a Radio installation to
> check against any more :-)

Gee Georg, I expected you to say "You can't do it with Radio, but it is simple with PyDS." I need some extra incentive to move off of Radio completely :)

ka
|
From: Georg B. <gb...@mu...> - 2003-04-19 17:05:07
|
Hi!

> What I would like to do is simplify the nav bar for everything except
> the home page, but so far it looks like the only template that ever
> gets used is the home page template. That means that the large number
> of nav items

If I recall correctly, the homepage template is the one that is primarily used. The main template is mostly used in the desktop site stuff and pages like that (and maybe in non-weblog-related pages on the cloud, too). So the only idea I can offer would be to put in some conditional stuff into your homepage template to prohibit rendering of stuff that shouldn't be on the main index.html page.

But please don't ask me what could trigger this conditional, as I never hacked up something like that and don't have a Radio installation to check against any more :-)

bye, Georg
|
From: Kevin A. <al...@se...> - 2003-04-19 16:39:41
|
I think this is a Radio problem, not something in PyCS, but maybe someone here will know for sure or have a workaround. http://altis.pycs.net/

In my Radio Userland\www directory (Windows 2000 box) there is a #homeTemplate.txt which appears to be what is used to render all of the pages going to the community server. This corresponds to what Radio calls the "Home page template". There is also a "Main template" which corresponds to the #template.txt file in the www directory. Based on the description of the main template, it seems like that is the template that should be used to render everything except the main category pages and home page.

What I would like to do is simplify the nav bar for everything except the home page, but so far it looks like the only template that ever gets used is the home page template. That means that the large number of nav items I would like to use on the home page for topics, blogrolls, etc. also gets used on individual day pages, which ends up looking bad because of the limitations of the CSS template. Plus it is just a whole lot of extra bytes that those pages don't need; it is enough to have the blogroll... on just the home page.

Anyone know how to get around this?

ka
|
From: Phillip P. <ph...@my...> - 2003-04-16 13:14:00
|
Good point. Anyone on the list interested in going?

----- Original Message -----
From: "Kevin Altis"
To: <pp@...>
Cc: <gb@...>
Sent: Wednesday, April 16, 2003 6:03 PM
Subject: anyone representing PyCS/PyDS... at OSCOM?

> http://www.oscom.org/
>
> http://www.oscom.org/Conferences/Cambridge/Program/
>
> Do you know of anyone in the states going to OSCOM in May that can
> represent the great stuff you're doing? I figure neither of you would
> be traveling to Boston/Cambridge.
>
> OTOH, maybe PyCS... isn't considered in the realm of content management?
>
> http://www.oscom.org/matrix/index.html
>
> ka
|
From: Phillip P. <ph...@my...> - 2003-04-14 11:00:02
|
Georg:

> Should the fields urlSearch and flHasSearch in the
> xmlStorageSystem.getServerCapabilities call be filled in by swish.py or
> search.py? If yes, what should they include? Maybe this should be an
> option, so the admin can put in what search method he prefers?

Good point. There's a flag in pycs.conf now to say if you have ht://Dig all set up, so I guess if that's set, it would make sense for getServerCapabilities to return the right things.

Cheers, Phil
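The behaviour Phil is suggesting could look roughly like this. The config key `enablehtdig` and the capability fields `flHasSearch`/`urlSearch` are taken from the thread itself, but the function shape and the search URL path are assumptions, not the real PyCS source.

```python
def get_server_capabilities(conf, base_url):
    """Only advertise search support when the admin has enabled
    ht://Dig in pycs.conf (illustrative sketch, not the actual
    xmlStorageSystem.getServerCapabilities implementation)."""
    caps = {"flHasSearch": False, "urlSearch": ""}
    if conf.get("enablehtdig"):
        caps["flHasSearch"] = True
        # Hypothetical search endpoint path for illustration only:
        caps["urlSearch"] = base_url + "/system/search.py"
    return caps

# With ht://Dig enabled, clients learn where to send searches;
# without it, the capability stays switched off.
enabled = get_server_capabilities({"enablehtdig": True}, "http://www.pycs.net")
disabled = get_server_capabilities({}, "http://www.pycs.net")
```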
|
From: Georg B. <gb...@mu...> - 2003-04-11 12:09:48
|
Hi! I added two little changes so that PyCS gives a clue that it supports trackback. This allows PyDS or other clients that know about those fields to automatic activate trackback support in the rendered stuff. The changes are in xmlStorageSystem.getServerCapabilities and in radioCommunityServer.getInitialResources. bye, Georg |
|
From: Georg B. <gb...@mu...> - 2003-04-11 12:05:25
|
Hi! Should the fields urlSearlch and flHasSearlch in the xmlStorageSystem.getServerCapabilities call be filled in to swish.py or search.py? If yes, what should the include? Maybe this should be an option, so the admin can put in what search method he prefers? bye, Georg |
|
From: Georg B. <gb...@mu...> - 2003-04-08 15:03:53
|
Hi! The full feed for trackbacks returned path elements referencing comments.py and not trackback.py. The fixed version is now in CVS. bye, Georg |
|
From: Phillip P. <pp...@my...> - 2003-03-31 09:34:42
|
Hey, If anyone updated their setup after I put the search code in CVS, you'll want to update again b/c it looks like it broke the redirect code, which affects many things (oops!). (The fix is in CVS now). Thanks to Hal Wine for finding this! Cheers, Phil :) |
|
From: Phillip P. <ph...@my...> - 2003-03-29 10:52:42
|
I just committed some search stuff into CVS. This isn't everything just
yet - there's a decent-sized patch to ht://Dig itself to get it to build as
a Python module, and a small patch to Medusa:
The big deal here is that I'm using fork() to run the search code in a
separate process (because it's used to running as a CGI, and doesn't appear
to clean up after itself very well), which requires a couple of changes to
let the child processes exit properly. Also the authorizer and rewriting
code needed a bit of refactoring to work properly here.
Can't talk much now (in a hurry) but I wanted to post this so this is all
documented at least a little bit. Will blog more later and post the
ht://Dig patch.
Here are my CVS comments:
===
Added ht://Dig integration.
Makefile: added modules/system/search.py to the install list
authorizer.py: made it possible to call various functions without spitting
out a whole lot of junk to stdout
pycs.conf: added a new 'enablehtdig' variable that must be set before
search.py will do anything that would be dangerous without a patched copy of
medusa (i.e. fork)
pycs.py, pycs_settings.py, pycs_auth_handler.py: now we store a copy of the
authorizer so search.py can grab it later on (better than having to read it
each time)
pycs_module_handler.py: now we pass on SystemExit exceptions instead of
catching them, so an exiting child process actually exits (NB: we need
co-operation from medusa here - this is where the aforementioned patch to
medusa/http_server.py comes in)
pycs_rewrite_handler.py: moved the core url rewriting stuff into a new
method, so search.py can rewrite things as necessary when doing the security
checks
www/index.html: added 'search' link
===
And here's the medusa patch:
===
RCS file: /cvsroot/oedipus/medusa/http_server.py,v
retrieving revision 1.10
diff -u -r1.10 http_server.py
--- http_server.py 18 Dec 2002 14:55:44 -0000 1.10
+++ http_server.py 29 Mar 2003 10:47:59 -0000
@@ -495,6 +495,8 @@
# This isn't used anywhere.
# r.handler = h # CYCLE
h.handle_request (r)
+ except SystemExit:
+ raise
except:
self.server.exceptions.increment()
(file, fun, line), t, v, tbinfo = asyncore.compact_traceback()
===
Cheers,
Phil :)
|
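The fork-and-reap pattern Phil describes can be sketched like this. This is an illustrative POSIX-only sketch, not the actual search.py code; `run_in_child` is an invented name.

```python
import os

def run_in_child(fn, *args):
    """Run fn in a forked child process and reap it, roughly the way
    the search code runs htsearch in a separate process and lets the
    OS clean up after it."""
    pid = os.fork()
    if pid == 0:
        # Child: do the work, then _exit() so we never fall back into
        # the server's event loop. Note that sys.exit() raises
        # SystemExit, which a dispatch loop must re-raise rather than
        # swallow -- that is what the medusa patch above arranges.
        try:
            fn(*args)
            os._exit(0)
        except BaseException:
            os._exit(1)
    # Parent: wait for the child so it doesn't linger as a zombie.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)
```

Because the child calls `os._exit()` itself, no cleanup handlers or buffered state from the parent process leak through, which is the point of isolating a CGI-style program like htsearch this way.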
|
From: Georg B. <gb...@mu...> - 2003-03-27 17:43:07
|
Hi!

> This is only rudimentarily tested, I just hacked it together. But maybe
> someone else can have a look and give it a try. I will create a
> description of how to change your template and stuff to support it with
> PyDS.

Ok, some small bugfixes and now it looks like it works. I have added trackback rendering support to http://pyds.muensterland.org/ so you have something to try it on. Since it fully works on the server, it is available for all CS clients like bzero, Radio, PyDS or whatever someone cooks up. PyDS currently doesn't have trackback client support, but I think I will add that soon.

bye, Georg
|
From: Georg B. <gb...@mu...> - 2003-03-27 16:58:10
|
Hi!

> Any takers?

Yep, me :-) Ok, I have hacked up some really crude support. I copied all the comments stuff, renamed it to trackback stuff and changed some form fields and some formatting. Now you can post trackback pings to PyCS, and your blog software can use the same mechanisms it uses for comments to show trackbacks (just substitute trackback.py for comments.py in the URLs - and the link parameter is gone, you don't need it any more). Trackbacks are stored in their own table, you can create RSS feeds for them and you can show the number of trackbacks in your blog using Javascript.

This is only rudimentarily tested, I just hacked it together. But maybe someone else can have a look and give it a try. I will create a description of how to change your template and stuff to support it with PyDS.

bye, Georg
|
From: Georg B. <gb...@mu...> - 2003-03-26 18:48:49
|
Hi!

How about implementing a trackback ping endpoint for PyCS based on the comments database? Trackbacks could just go into the comments like other comments, too. The article URL would go into the link, the blog title into the name, and the excerpt and other stuff into the comment body. This would give PyCS users a way to get trackback pings without worrying about yet another thing to think of. Just put trackback links into your pages.

I remember that someone asked for small tasks that are complete in themselves. Maybe this would be such a task :-)

Any takers?

bye, Georg
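The field mapping Georg proposes can be written down directly. The ping fields (`url`, `title`, `excerpt`, `blog_name`) are the standard TrackBack ping parameters; the comment-record keys here are illustrative, not the actual PyCS comment schema.

```python
def trackback_to_comment(ping):
    """Map a TrackBack ping into a comment record, following the
    proposal above: article URL -> link, blog title -> name,
    excerpt (and other stuff) -> comment body. Sketch only."""
    body = ping.get("excerpt", "")
    if ping.get("title"):
        # Prefix the entry title so it survives in the comment body.
        body = "%s: %s" % (ping["title"], body)
    return {
        "link": ping.get("url", ""),
        "name": ping.get("blog_name", ""),
        "comment": body,
    }

ping = {
    "url": "http://example.org/2003/03/26.html",
    "title": "Re: trackback for PyCS",
    "excerpt": "Georg suggests reusing the comments database...",
    "blog_name": "Some Weblog",
}
comment = trackback_to_comment(ping)
```

Storing the result through the existing comment machinery is what makes this a "small task complete in itself": the rendering, RSS, and Javascript-count paths all come for free.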
|
From: Georg B. <gb...@mu...> - 2003-03-22 13:11:02
|
Hi!

Due to the changes to URL parsing in medusa, the URLs for search access were quite different, and so several identical queries were seen as different queries. Another problem from the very first version of zeitgeist.py was that it didn't see searches by several google sites as the same search, but as different searches. Both problems are now solved in CVS: a search is now identified only by its terms coupled with the search engine (so it still makes a difference whether HORROR was searched on google or aol, but not on google.de versus google.ca). What I currently don't do is ignore the case of the search term, as I don't know whether all search engines do that, too.

Another change is that the maximum font size is now 36 and not 72 as before. Those very large fonts really stood out far too much :-) Other changes over the last weeks include several additional search engines, both in searches.py and zeitgeist.py.

bye, Georg
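The normalisation described above can be sketched as reducing a referrer URL to an (engine, term) pair. The engine table and query-parameter names below are illustrative guesses, not the real zeitgeist.py tables, but the behaviour matches the description: google.de and google.ca collapse to one engine, google and aol stay distinct, and the term's case is preserved.

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical engine -> query-parameter table for illustration:
ENGINES = {
    "google": "q",
    "aol": "query",
    "yahoo": "p",
}

def normalize_referrer(url):
    """Reduce a search referrer to (engine, term), or None if it is
    not a recognised search engine. Case of the term is deliberately
    kept, since not every engine searches case-insensitively."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    query = parse_qs(parts.query)
    for engine, param in ENGINES.items():
        if engine in host.split("."):
            terms = query.get(param)
            if terms:
                return (engine, terms[0])
    return None
```

With this keying, HORROR searched on google.de and google.ca counts as one query, while the same term on aol remains a separate entry.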
|
From: Phillip P. <pp...@my...> - 2003-03-13 22:07:57
|
> Hmm. fork doesn't exist in all OS - for example I seem to recall some
> problems with regard to Windows in that area (windows only supporting
> threading under python, not forking).
>
> And we have to take precautions if the forked process needs to talk to
> metakit databases [...]

Good points. In this case, the forking is restricted to a _very_ small function, so there's no need for extra locking. I guess we'd need to hack something up with CreateProcess if we want it to run on Windows. But then, does ht://Dig run on Windows anyway? :-)

Cheers, Phil
|
From: Georg B. <gb...@mu...> - 2003-03-13 21:38:17
|
Hi!

> it in the body of the app, and get PyCS to regenerate the page when it
> sees an RSS file being upstreamed...

That was my idea. Just a combined blog of everything that's upstreamed on the community server. Something like weblogs.com on steroids - not only the weblogs that are updated, but the actual updates in the feeds. Might be interesting to have something like that. And it might stimulate community building a bit.

bye, Georg
|
From: Phillip P. <pp...@my...> - 2003-03-13 21:23:39
|
> > a little mini-aggregator for pycs.net.
>
> Nice. You couldn't let Dave Winer go away with his little thing, right?
> ;-)

Haha... actually it was inspired by one of the other free blogging sites (freeroller or blogalia or something) which has a big aggregated feed of everything on the front page.

> Hmm. I already envision automatic inclusion of specially marked local
> rss feeds to give a non-admin-overhead community feed. Maybe with added
> auto-rss-generating, so the community feed can be included easily by
> others without the need to include all local feeds. Maybe some markup
> in the rss of the local feed and some auto-recognizing of RSS feeds
> with that markup in the Upstreaming-Code of xmlStorageSystem? Hmm.

Generating another RSS feed out of this sounds like a plan. If we put some sort of templating in there, we can just have multiple output templates and generate whatever format we like, I guess. I guess if we swap over to the parser from PyDS (I'm using Mark Pilgrim's one atm, just because I already had a copy), we can include it in the body of the app, and get PyCS to regenerate the page when it sees an RSS file being upstreamed...

Cheers, Phil :)
|
From: Georg B. <gb...@mu...> - 2003-03-13 12:03:45
|
Hi!

> http://www.pycs.net/allyourrss.html
>
> a little mini-aggregator for pycs.net.

Nice. You couldn't let Dave Winer go away with his little thing, right? ;-)

Hmm. I already envision automatic inclusion of specially marked local RSS feeds to give a non-admin-overhead community feed. Maybe with added auto-RSS-generating, so the community feed can be included easily by others without the need to include all local feeds. Maybe some markup in the RSS of the local feed and some auto-recognizing of RSS feeds with that markup in the upstreaming code of xmlStorageSystem? Hmm.

bye, Georg
|
From: Georg B. <gb...@mu...> - 2003-03-13 11:59:46
|
Hi!

> I've decided that the safest way to run htsearch is to fork inside the
> module script, run htsearch in the child process, and let the OS clean
> up after it.

Hmm. fork doesn't exist in all OSes - for example, I seem to recall some problems with regard to Windows in that area (Windows only supporting threading under Python, not forking).

And we have to take precautions if the forked process needs to talk to metakit databases, as those are only single-user (though it might be that they are only single-write but multiple-read, I didn't check that). That's why in PyDS every tool has its own lock object, so I can protect database access against concurrent access from background threads.

bye, Georg