phplib-users Mailing List for PHPLIB
From: Giancarlo <gia...@na...> - 2002-12-03 22:30:03
|
Joe Stewart wrote: >>JS> With which directory structure? >>I'd keep existing (php-lib) directory structure - it's much more clean >>then in -stable (I even think that the directory structure in -stable >>should be changed accordingly). >> The change in structure is only a minimal startup quirk, but I think most existing users would favor the traditional simple structure, if only because it offers a quick upgrade path. I was surprised myself to see that many core .inc files (local4, auth4) in the php-lib structure are really not needed and can be merged into a single one. Less maintenance. The php-lib structure is surely better suited to developing custom or exotic extensions, so in the end it is good. Not at the start, though. Such a 'dual' situation should anyway be terminated ASAP, otherwise next time we'll have to backfix between the two again. I think the sooner the two cvs branches have the same structure, never mind which, the better. I wouldn't delay the reunification. Gian PS Why not keep the new structure, but move what is included by prepend.php3/4 into the main (php) dir, and leave the subdirs for contribs and development? Then one can have a 'compat' single php tree that works with the old prepend, and go looking for fancy extra modules in the subdirs. If someone is skilled enough to need them, he can bear the extra effort. This could be a third way: the 3 session*.inc, user*.inc and auth.inc go back to the php dir; development, exotic and contributed extras stay in their respective dirs. But never mind; anything is ok with me, but I wouldn't delay a unified structure. |
From: Aric C. <gre...@pe...> - 2002-12-03 20:59:55
|
OK, this may be simple but it's eluding me: I have a single entry page for my application. I want parts of it protected, depending on what parameters are passed to it. The unprotected parts I want accessible without a cookie being set or a session id put into all the URLs. I.e., having the default auth set to 'nobody' isn't what I want. How can I get it to *only* make a cookie / add session ids when somebody has actually really logged in? |
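One way to approach the question above, sketched in plain PHP sessions rather than the PHPLIB API (all function names here are hypothetical, introduced only for illustration): defer session creation until the moment of a successful login, so anonymous page views never receive a cookie or a rewritten URL.

```php
<?php
// Hypothetical sketch (NOT the PHPLIB API): start a session only once
// the visitor actually authenticates, so anonymous pages stay cookie-free.

function ensure_session_started()
{
    // session_id() returns '' until session_start() has run.
    if (session_id() === '') {
        session_start();
    }
}

function credentials_ok($user, $password)
{
    // Placeholder check; a real application would consult its user table.
    return $user === 'demo' && $password === 'secret';
}

function login($user, $password)
{
    if (!credentials_ok($user, $password)) {
        return false;
    }
    ensure_session_started();      // the cookie is issued only here
    $_SESSION['user'] = $user;
    return true;
}
```

The unprotected entry points simply never call `login()` (or `ensure_session_started()`), so no session id is ever created for them.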
From: Aric C. <gre...@pe...> - 2002-12-03 18:39:24
|
> OK, I can see where you are going with this. I would approach this > slightly differently. > > I would build a database with the translations in it. Columns like: > lang char(2), # ISO country code > varname char(16), # contains the template varname > trans text, # the translated text > with indexes on lang and varname > > The template file would contain lots of tags like: > <h1>{TITLELABEL}</h1> > > You can extract all the template varnames from the template file with > a preg_match_all(). Then loop through the results generating a massive > SQL query which pulls all the text translations required for that page > out of the database with one query. Then loop through the SQL result > set and call $t->set_var($db->f('varname'), $db->f('trans')) for each > record. And that's why you have the getText() method which you can replace with whatever method you choose -- be it loading a language file or doing a SQL query. To support your method of doing a "massive SQL query" the getText() method could be required to accept a string or an array of all the strings in the template. I'm not sure of the most efficient way to do all this. I was thinking something like doing a preg_replace_callback() and using the getText() method as the callback. > > Your PHP script wouldn't even need to know the template varnames :) That's pretty much the whole idea. :) |
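The `preg_replace_callback()` idea floated above can be sketched as follows. This is an assumed, minimal implementation, not PHPLIB code: `getText()` and the `$translations` array are stand-ins for whatever lookup (language file or SQL query) the application registers.

```php
<?php
// Sketch of the getText()-as-callback idea (assumed names, not PHPLIB):
// replace every {"literal string"} placeholder in a template by running
// it through a translation lookup via preg_replace_callback().

$translations = array(
    'This is the page title' => 'Dies ist der Seitentitel',
);

function getText($str)
{
    global $translations;
    // Fall back to the original string when no translation exists.
    return isset($translations[$str]) ? $translations[$str] : $str;
}

function translate_template($html)
{
    return preg_replace_callback(
        '/\{"([^"]*)"\}/',                       // matches {"..."} tags
        function ($m) { return getText($m[1]); },
        $html
    );
}

echo translate_template('<h1>{"This is the page title"}</h1>');
// → <h1>Dies ist der Seitentitel</h1>
```

Swapping the array lookup in `getText()` for the "massive SQL query" approach (collect all matches with `preg_match_all()` first, fetch them in one query, then substitute) changes only the lookup, not the template-side mechanism.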
From: Rob H. <rob...@ws...> - 2002-12-03 13:36:17
|
And now you are willing to make the situation one layer more complex? I do not know what ISPs in Russia are doing; I can only speak for those in the US and Western Europe, and a large number of them are moving to caches in an attempt to remain competitive. As high speed services such as DSL are deployed at a fraction of the cost of traditional dedicated services, the business customers that carry the brunt of the cost are no longer doing so. That means the ISP has to reduce cost, and they are currently doing that with higher oversubscription of the bandwidth. Then, to reduce the bandwidth requirements, they are installing cache farms not only close to the POPs but at the NAP also, to cache anything that they can - to the point where large emails are being delayed until the network throughput drops below a threshold. What this boils down to is that currently roughly 1/3 to 1/2 of traffic is routed through a cache. AOL is the largest ISP currently doing this, but Earthlink also does, and AT&T, BT, and MCI are in trials. AT&T is about to come out of those trials. There are several ISPs, in Western Europe in particular, where we have heard from users that they could get to sites even though the server sitting in front of me was turned off, so I know that it is fairly widespread. Take into account that another 1/3 of the traffic is from behind NAT devices, and the IP address becomes not only an unreliable marker but a false layer of security. I understand the goals, but there just is not a reasonable answer right now. If the information being transmitted is that sensitive, then it should be done with a hard token anyway. You have to realize that XSS vulnerabilities are not highly exploited because that may get someone access to one account, one credit card #, one bank account. Attacks are mainly aimed at the system as a whole - the web server, db server, application itself - because the potential payoff is much higher... 
Rob Hutton Web Safe www.wsafe.com ********************************************************************** Introducing Symantec Client Security - Integrated Anti-Virus, Firewall, and Intrusion Detection for the Client. Learn more: http://enterprisesecurity.symantec.com/symes238.cfm?JID=2&PID=11624271 > -----Original Message----- > From: Maxim Derkachev [mailto:max...@bo...] > Sent: Monday, December 02, 2002 12:13 PM > To: Rob Hutton > Cc: php...@li... > Subject: Re[6]: [Phplib-users] new Session4 changes > > > > RH> So what you end up with is something that sounds good on > paper, but causes > RH> problems in MANY situations. Can you imagine being the > sysadmin and trying > RH> to figure out why sessions just go away for SOME people, SOME > of the time? > > Well, I'm just this kind of sysadmin, and I'm fed up of persuading > users to turn their cookies on, because many of them don't read that > cookies must work on our site, others don't know what the cookies are > and don't know how to turn it on back. And sometimes browsers f@#k up and > don't handle cookies properly (e.g. Opera6.01 and IE6 in some > installations). And sometimes proxies cache cookies with other headers > and send them to everybody using those proxies. I have the situation > when the session just go away for SOME people, in SOME circumstances, > and I have it now. And I understand that we live in not such a > perfect world > where things are beyond our control in spite of rich and available > specifications. So, don't expect that this is so uncommon - such > things will last forever. > I just sat down and calculated a bit. Let us have the stats of cookie > usage - the stats are available everywhere. In Russia it is common > that the rate of cookie-disabled browsers is from 3 to 5 per cent. So, > we have a risk that 5 per sent of users won't be able to use our > service if we enforce cookie usage. Then, what is the probability of > changing client's IP during the session. 
I don't think it would be > more then 30% (in the worst case). So, we can lower the risk by 2/3 > without a loss (since we enforced cookies on everybody before). Does > it matter? I think, yes, I'll have only third of those people who may > experience problems with my site. > > > > > > -- > Best regards, > Maxim Derkachev mailto:max...@bo... > IT manager, > Symbol-Plus Publishing Ltd. > phone: +7 (812) 324-53-53 > www.books.ru, www.symbol.ru > > |
From: Rob H. <rob...@ws...> - 2002-12-03 13:23:09
|
Mitigation of Cross Site Scripting attacks is the responsibility of the user and, more importantly, the application developer, by following simple best practices such as encoding and decoding URLs. There are many stupid things that could be done outside of this that would have the same end result. This is just an auth package. It is not a "make the application secure" package. Unfortunately, there is not a good answer for this right now. Again, feel free to put the IP test in, just make it default to off and put a warning in the documentation about its use. Rob Hutton Web Safe www.wsafe.com |
From: Giancarlo <gia...@na...> - 2002-12-03 09:51:04
|
> If session hijacking is of concern, the site must be running SSL. There are two classes of worry. The first is that anyone, without any skill or tools, can propose an URL that already contains a session_id, which he will later steal. Against this, SSL won't do anything. In this same class of malice are cross site scripting injections, which will exec JavaScript and post the cookies to someone else. These are all artisanal tools, quite simple to use. The second class of worries is that someone can be sniffing packets. But if this is the case, session_id stealing is really little to worry about, because the guy has already spoofed the DNS and is virtually in control of the server, so he's probably not too interested in what is passing through httpd. You are already cooked all the way ;-) What worries most is the first class of cracks, because they are so damn easy that any computer illiterate can rig them up just by means of an URL or by posting a js tag. I read something bad about the 'Apache' cookie (mod_usertrack?), saying that it was only suitable for tracking behaviour but not for security. Dunno about SSL_SESSION_ID, but the plain apache session id (?Apache=) contains a part with the IP. I already tried to stick with that instead of PHP4, but I stopped after reading about this. > In which case perhaps the SSL_SESSION_ID Apache Environment Variable > would be a better thing to track than IP address? > > I'm not sure under what circumstances that would be re-negotiated > though! > > ...R. > > > ------------------------------------------------------- > This SF.net email is sponsored by: Get the new Palm Tungsten T > handheld. Power & Color in a compact size! > http://ads.sourceforge.net/cgi-bin/redirect.pl?palm0002en > _______________________________________________ > Phplib-users mailing list > Php...@li... > https://lists.sourceforge.net/lists/listinfo/phplib-users > |
From: Richard A. <rh...@ju...> - 2002-12-03 05:00:24
|
At 13:59 +0300 2/12/02, Maxim Derkachev wrote: > IMCO. The only marker I could see by now is the user's IP address - > everything else is even less reliable. If session hijacking is of concern, the site must be running SSL. In which case perhaps the SSL_SESSION_ID Apache Environment Variable would be a better thing to track than IP address? I'm not sure under what circumstances that would be re-negotiated though! ...R. |
From: Richard A. <rh...@ju...> - 2002-12-03 03:01:08
|
At 17:11 -0800 2/12/02, Aric Caley wrote: >Well, because right now, the template might look like this: > ><h1>{TITLELABEL}</h2> > >And the code would have this: > >$template->set_var(array("TITLELABEL" => getText("This is the page >title"))); OK, I can see where you are going with this. I would approach this slightly differently. I would build a database with the translations in it. Columns like: lang char(2), # ISO country code varname char(16), # contains the template varname trans text, # the translated text with indexes on lang and varname The template file would contain lots of tags like: <h1>{TITLELABEL}</h1> You can extract all the template varnames from the template file with a preg_match_all(). Then loop through the results generating a massive SQL query which pulls all the text translations required for that page out of the database with one query. Then loop through the SQL result set and call $t->set_var($db->f('varname'), $db->f('trans')) for each record. Your PHP script wouldn't even need to know the template varnames :) ...R. |
From: Aric C. <gre...@pe...> - 2002-12-03 01:12:26
|
> Hi Aric, > > What is the difference between: > > >I then ... put appropriate {} tags into the templates. > > and > > >A translatable string could be identified in the > >template like so: {"this is a string"}. > > > In both cases you are adding {} tags to the template file, > which seems to me like the same amount of work. Well, because right now, the template might look like this: <h1>{TITLELABEL}</h1> And the code would have this: $template->set_var(array("TITLELABEL" => getText("This is the page title"))); Now if there's a lot of text like that, your set_var() gets big. With the other method, it goes something like this: <h1>{"This is the page title"}</h1> Which for the template designer is a little nicer - he sees the actual text at least. The template designer might also do the language translation file, so he could add text in both places and not have to bother the code guy to add more set_var()s. The code would then be something like this: class MyTemplate extends Template { function getText($str) { /* whatever code you need here to load a language file and look up $str to get a new string */ return $str; // defaults to the same string } } And then that's it for your application. No more set_var() for all those strings, because the template class calls the getText() method whenever it sees a {""}. Does that make more sense? |
From: Richard A. <rh...@ju...> - 2002-12-03 00:53:46
|
At 14:44 -0800 2/12/02, Aric Caley wrote: >I've been adapting my code to use a getText() function to provide >translated strings. Hi Aric, What is the difference between: >I then ... put appropriate {} tags into the templates. and >A translatable string could be identified in the >template like so: {"this is a string"}. In both cases you are adding {} tags to the template file, which seems to me like the same amount of work. ...R. |
From: Aric C. <gre...@pe...> - 2002-12-02 22:44:56
|
I've been adapting my code to use a getText() function to provide translated strings. Using templates, I then set_var() all the strings from getText() and put appropriate {} tags into the templates. It occurred to me that this could be much simpler. Why not have the template class do it for you? You could "register" your custom getText() function (subclass the template object). Then whenever the template parser encounters a translatable string it calls the function. A translatable string could be identified in the template like so: {"this is a string"}. This would reduce the amount of coding needed, and would make it easier for the template designer. I think it would be pretty easy to add. Any thoughts? |
From: Virilo T. A. <vi...@su...> - 2002-12-02 21:35:13
|
I've finished my first web page using phplib 7.2d. Now I'm searching for a web server for it. Which requirements does a web hosting service need to run phplib? I have several doubts due to the fact that I need to modify some configuration files during installation, and it's possible that my future web host won't let me modify these files. Also, the include files directory isn't within the web page directory tree. Is it possible to use phplib without modifying php.ini or httpd.conf, and within the web directory tree? Is it less secure? Thanks. |
From: Rob H. <rob...@ws...> - 2002-12-02 20:41:34
|
I can have 3 or 4 different implementations of default auth, each page using a different one, each with different settings. Or, I can have 3 or 4 different implementations of default auth, each page implementing all of them under an if-then statement. I can change modes back and forth, etc. All I have to do is have a function that allows the progression of the SID from one type to another. Rob Hutton Web Safe www.wsafe.com > -----Original Message----- > From: php...@li... > [mailto:php...@li...]On Behalf Of Giancarlo > Sent: Monday, December 02, 2002 2:12 PM > To: phplib-users > Subject: Re: [Phplib-users] new Session4 changes > > > > > What I said is that, upon certain not so uncommon > prerequisites, it can > > be difficult to have a twin mode/fallback_mode that fits all cases, from > > the bot to the cookie_only authed user... > > use_cookie_only is better for security and authentication; the problem is > it's an all_or_nothing choice that has to be enforced either everywhere > or nowhere. So people decide not to use it. If it were possible to > enforce it only in determined cases, it'd be better. > Think of the default_auth case. You cannot specify different session > classes for that page, because the same page has to cater for both authed and > unauthed users. So how do you impose use_cookie_only only on those > authed? No way; it's a policy to be adopted either everywhere, or given > up. And people give it up. > Similar is the session save_handler type. You cannot, at a certain > point, e.g. once authenticated, migrate the anonymous 'file' storage to > the more secure db. 
> > Gian |
From: Giancarlo <gia...@na...> - 2002-12-02 19:11:58
|
> What I said is that, upon certain not so uncommon prerequisites, it can > be difficult to have a twin mode/fallback_mode that fits all cases, from > the bot to the cookie_only authed user... use_cookie_only is better for security and authentication; the problem is it's an all_or_nothing choice that has to be enforced either everywhere or nowhere. So people decide not to use it. If it were possible to enforce it only in determined cases, it'd be better. Think of the default_auth case. You cannot specify different session classes for that page, because the same page has to cater for both authed and unauthed users. So how do you impose use_cookie_only only on those authed? No way; it's a policy to be adopted either everywhere, or given up. And people give it up. Similar is the session save_handler type. You cannot, at a certain point, e.g. once authenticated, migrate the anonymous 'file' storage to the more secure db. Gian |
From: Maxim D. <max...@bo...> - 2002-12-02 17:13:41
|
RH> So what you end up with is something that sounds good on paper, but causes RH> problems in MANY situations. Can you imagine being the sysadmin and trying RH> to figure out why sessions just go away for SOME people, SOME of the time? Well, I'm just this kind of sysadmin, and I'm fed up with persuading users to turn their cookies on, because many of them don't read that cookies must work on our site, others don't know what cookies are and don't know how to turn them back on. And sometimes browsers f@#k up and don't handle cookies properly (e.g. Opera 6.01 and IE6 in some installations). And sometimes proxies cache cookies with other headers and send them to everybody using those proxies. I have the situation where sessions just go away for SOME people, in SOME circumstances, and I have it now. And I understand that we live in a less than perfect world where things are beyond our control in spite of rich and available specifications. So, don't expect that this is so uncommon - such things will last forever. I just sat down and calculated a bit. Let us take the stats of cookie usage - the stats are available everywhere. In Russia it is common that the rate of cookie-disabled browsers is from 3 to 5 per cent. So, we have a risk that 5 per cent of users won't be able to use our service if we enforce cookie usage. Then, what is the probability of the client's IP changing during the session? I don't think it would be more than 30% (in the worst case). So, we can lower the risk by 2/3 without a loss (since we enforced cookies on everybody before). Does it matter? I think yes: I'll have only a third of those people who may experience problems with my site. -- Best regards, Maxim Derkachev mailto:max...@bo... IT manager, Symbol-Plus Publishing Ltd. phone: +7 (812) 324-53-53 www.books.ru, www.symbol.ru |
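The back-of-envelope arithmetic above (5% cookie-disabled visitors, at most 30% of sessions seeing an IP change) can be checked directly. The percentages are the poster's own estimates, not measured data:

```php
<?php
// Check of the figures in the message above: if cookies are mandatory,
// ~5% of visitors are locked out. If cookie-less visitors are instead
// allowed URL sessions guarded by an IP check, only those among them
// whose IP changes mid-session (worst case ~30%) hit a problem.

$cookie_disabled = 0.05;   // fraction of visitors with cookies off
$ip_change       = 0.30;   // worst-case fraction whose IP changes

$affected  = $cookie_disabled * $ip_change;       // 0.015 of all visitors
$reduction = 1 - $affected / $cookie_disabled;    // 0.70: ~2/3 fewer

printf("affected: %.3f of visitors, problem pool shrinks by %.0f%%\n",
       $affected, $reduction * 100);
```

So the "lower the risk by 2/3" claim holds under these assumptions: the pool of potentially broken sessions drops from 5% to 1.5% of visitors.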
From: Maxim D. <max...@bo...> - 2002-12-02 16:30:44
|
G> PHP4 Serialization and URL rewriting (trans_sid) are valuable indeed, G> file savehandler could be accommodated, but propagation is faulty and G> has no abstraction and is rigidly tied to all the rest. As I said, if I G> could choose pieces of it... The propagation is faulty, but I don't see a way to make it better (remember, it should be *generic*). The session start, read, save and propagation methods (URL rewriting is only part of the propagation mechanism) are pretty well abstracted; they deal with an abstract savehandler and serializer. That's why the module itself has no means of knowing whether the session already exists before actually starting it - the module would have to dig into the savehandler, so the abstraction would leak. So, in this case, all the savehandlers would have to implement this logic. Not to mention that this check can raise performance problems. We can check whether the session exists using a marker. It can be a simple boolean value - e.g. $_SESSION['session_active']. I just used REMOTE_ADDR instead of a boolean marker in order to save extra info about the session, to use it in the anti-hijack part. But we may do such things in a library only - if we introduce a new predefined internal global/session variable in the core part of the PHP system, it can break someone's applications because of a naming conflict. That's why the module doesn't check for session existence; it just issues a new one with the SID it gets from one of the predefined sources. So, the propagation is not so faulty - it is generic and clearly abstracted from other parts. G> I meant the 'second half' can be used only once and must be chosen among G> a fair enough batch of pregenerated ones. I don't follow you here... How can this harden the session security? -- Best regards, Maxim Derkachev mailto:max...@bo... IT manager, Symbol-Plus Publishing Ltd. phone: +7 (812) 324-53-53 www.books.ru, www.symbol.ru |
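The REMOTE_ADDR-as-marker scheme described above can be sketched as a single validity check. This is a hypothetical helper, not PHPLIB or PHP-core code; the session-data array layout is an assumption for illustration:

```php
<?php
// Hypothetical sketch of the marker idea from the message above: a SID
// whose stored session lacks the expected marker is treated as forged
// or expired, so the application discards it and issues a fresh SID.

function session_is_genuine($session_data, $remote_addr)
{
    // A SID merely "announced" by the client has no stored data at all.
    if (empty($session_data)) {
        return false;
    }
    // The marker must have been written when the session was created.
    if (!isset($session_data['REMOTE_ADDR'])) {
        return false;
    }
    // Doubling as the anti-hijack check: the IP must still match.
    return $session_data['REMOTE_ADDR'] === $remote_addr;
}
```

On a mismatch, the policy question debated in this thread applies: either silently issue a new anonymous session (lenient) or close the session outright (strict, hurting NAT and dialup users whose IP legitimately changes).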
From: Rob H. <rob...@ws...> - 2002-12-02 16:25:15
|
> -----Original Message----- > From: php...@li... > [mailto:php...@li...]On Behalf Of Maxim > Derkachev > Sent: Monday, December 02, 2002 11:01 AM > To: Rob Hutton > Cc: php...@li... > Subject: Re[4]: [Phplib-users] new Session4 changes > > > RH> I'm not following. If the session doesn't have an IP then > it's not valid? > Right, it's not valid since it doesn't "exist" yet (in case when we > always save client's IP, with every request). If we have a SID, but > the session, associated with the SID is empty (or, at least, does not > contain an IP, which should be there), than the most probably that the > SID was hand-crafted or obsolete (the new session is born > automatically), so we trash this SID and issue a new one, with a new > session. But either the session is valid, which means it is going to have an IP because the session setup puts it in, or it does not exist. There will never be a situation where the session exists without an IP. Unless you are only going to put in the IP in the case of PUTS and GETS, but what happens when you change modes? > > RH> What if the IP doesn't match? Is it invalid also? > This depends on your policy. I suppose, the "less evil" would be to > close the session if the SID comes from anything but the cookie (we > have 4 sources for SID in PHP). Cookie-enabled clients won't suffer. > But the right way is of course to make this behavior turned on/off as > desired. The less evil way would be to ignore the mismatch and keep going, which will soon be almost a requirement. > > RH> Most search tools ignore redirects as well as javascript, etc. We are > RH> talking about protected sites that people do not want indexed > anyway. If > RH> they were for public consumption, then there would be no need > to protect > RH> them. > Wrong. Tell this to Amazon :) The public sites must be protected, if > they are not going to loose their clients. > > RH> There is no good way of addressing this. There are many > papers, etc. 
on > RH> doing so, but they are all at the protocol level and involve the user > RH> posessing a personal key signed by a trusted third party. When DNSSec > RH> becomes more widely deployed, then there will be a reasonable > way of doing > RH> something, but not until then. > > We live just now. And when somebody can snoop into your personal > details without any serious effort, the problem should be solved. > > RH> As it stands now, if the client has cookies enabled, then > there is a fairly > RH> secure way of tracking them. If not, there is not a > secure/reliable way. > RH> That is the whole reason cookies were invented. To store > bits of data on > RH> the client's machine because there was no reliable way of > tieing it to them > RH> next time. How are you going to handle HP, which has a whole > class B block, > RH> but also has several thousand IPs behind masquerading machines. > > I do know about proxies and masquerading. I also know that a dialup > users often receive a different IP after reconnect. But we should |
From: Rob H. <rob...@ws...> - 2002-12-02 16:17:41
|
> RH> I'm not following. If the session doesn't have an IP then > it's not valid? > Right, it's not valid since it doesn't "exist" yet (in case when we > always save client's IP, with every request). If we have a SID, but > the session, associated with the SID is empty (or, at least, does not > contain an IP, which should be there), than the most probably that the > SID was hand-crafted or obsolete (the new session is born > automatically), so we trash this SID and issue a new one, with a new > session. > > RH> What if the IP doesn't match? Is it invalid also? > This depends on your policy. I suppose, the "less evil" would be to > close the session if the SID comes from anything but the cookie (we > have 4 sources for SID in PHP). Cookie-enabled clients won't suffer. > But the right way is of course to make this behavior turned on/off as > desired. So what you end up with is something that sounds good on paper, but causes problems in MANY situations. Can you imagine being the sysadmin and trying to figure out why sessions just go away for SOME people, SOME of the time? AHHHHH!!!! It is going to have to be turned off for anything over the Internet at a minimum, and also for anything in large corporations, as they also use NAT, and provides a false layer of security. > > RH> Most search tools ignore redirects as well as javascript, etc. We are > RH> talking about protected sites that people do not want indexed > anyway. If > RH> they were for public consumption, then there would be no need > to protect > RH> them. > Wrong. Tell this to Amazon :) The public sites must be protected, if > they are not going to loose their clients. Um, I just went through this on three different sites. DMOZ refused to list them until there were no script tags in several pages that used javascript for navigation. 
In the rules that they sent me, Excite, Google, Alta Vista, and I don't remember who else uses them now, do not traverse to pages based on Javascript, header redirects, and META redirects. They require you to have a script block that the robot uses to traverse. The Amazon thing is a completely different situation. You can go to Amazon and browse a tremendous amount of their site without ever logging in. The rest is addressable with a robots.txt and a good security structure. > > RH> There is no good way of addressing this. There are many > papers, etc. on > RH> doing so, but they are all at the protocol level and involve the user > RH> posessing a personal key signed by a trusted third party. When DNSSec > RH> becomes more widely deployed, then there will be a reasonable > way of doing > RH> something, but not until then. > > We live just now. And when somebody can snoop into your personal > details without any serious effort, the problem should be solved. If it is this serious, then none of this should be used. It is trivial to pull a cookie from the browser cache. We had a conversation on the list about this several weeks ago. If it is sensitive, then use HTTPS and require cookies. There simply is no in between at this point. > > RH> As it stands now, if the client has cookies enabled, then > there is a fairly > RH> secure way of tracking them. If not, there is not a > secure/reliable way. > RH> That is the whole reason cookies were invented. To store > bits of data on > RH> the client's machine because there was no reliable way of > tieing it to them > RH> next time. How are you going to handle HP, which has a whole > class B block, > RH> but also has several thousand IPs behind masquerading machines. > > I do know about proxies and masquerading. I also know that a dialup > users often receive a different IP after reconnect. But we should > decide what should be more secure way - to let everybody take over > someone else's session or restrict them. 
We can not make all the > people to use cookies. Yes, but it is not doable at this point. I ask that you do a little bit of research and read several of the papers on this very subject before delving into a solution. This IP scheme has been tried and does not work. Look at all the problems with deploying IPSEC. Even if the endpoints are interoperable, it still does not work a lot of the time because the end points are masqueraded. And that is without the ISPs trying. > > RH> The IP address can easily be forged. It is called a man-in-the-middle > RH> attack, and in this case, simply requires that you have a > machine that is > RH> behind the same proxy server or nat device to defeat this scheme. > > Yes, but if we have a man in the middle, all the efforts are useless - > he can simply sniff our traffic. Cookies won't save us, only SSL will. Right. So require cookies and SSL on personal or sensitive information. Put that in the docs with the reasons why and call it quits. Do not provide something that is a false sense of security. > > RH> I vote that the documentation cover the security aspect, and > leave it at > RH> that until there is a widely deployed public key system that can be > RH> leveraged instead of trying to invent something that provides > an artificial > RH> sense of security and will cause problems. > > We don't invent something - look at Amazon, again. They are always > provide SID in the URLs, but I've never heard that they are > vulnerable. They don't use pubkeys, they have their pages perfectly > indexed by robots, and they deal with the problems, somehow. Try to > access their pages with someone else's SID. We can endlessly discuss > the theoretical matters, but we live now and should deal with this > now, somehow, and should not wait for someone with the magic wand, I > suppose. Try turning cookies off and accessing Amazon. Try buying from them with cookies off. Try turning SSL support off and doing so. 
Now try reconnecting, making sure you have a different IP, and refreshing. They use session cookies. They check the protocol, and they make sure the GET parameter and the cookie match. The reason the GET parameter is there is, first, historical, and second, for ease of passing the session to other sites.

> --
> Best regards,
> Maxim Derkachev  mailto:max...@bo...
> IT manager,
> Symbol-Plus Publishing Ltd.
> phone: +7 (812) 324-53-53
> www.books.ru, www.symbol.ru
|
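For what it's worth, the double check Rob describes (the SID in the URL must agree with the SID in the session cookie, so a leaked or indexed URL alone is not enough) can be sketched in a few lines of PHP. This is purely illustrative - not Amazon's or PHPLIB's actual code, and the `sid` key name is made up:

```php
<?php
// Hypothetical sketch: accept a request only when the SID carried in
// the URL (GET) matches the SID stored in the session cookie.
// An attacker who obtains only the URL cannot pass this check.

function sid_matches(array $get, array $cookie): bool {
    // Both sources must be present, and must carry the same value.
    return isset($get['sid'], $cookie['sid'])
        && $get['sid'] === $cookie['sid'];
}
```

In real code `$get` and `$cookie` would be `$_GET` and `$_COOKIE`; arrays are used here so the check is easy to exercise in isolation.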
From: Maxim D. <max...@bo...> - 2002-12-02 16:01:43
|
RH> I'm not following. If the session doesn't have an IP then it's not valid?

Right, it's not valid, since it doesn't "exist" yet (in the case where we always save the client's IP with every request). If we have a SID but the session associated with that SID is empty (or, at least, does not contain an IP, which should be there), then most probably the SID was hand-crafted or obsolete (a new session is born automatically), so we trash this SID and issue a new one, with a new session.

RH> What if the IP doesn't match? Is it invalid also?

This depends on your policy. I suppose the "lesser evil" would be to close the session if the SID comes from anything but the cookie (we have 4 sources for the SID in PHP). Cookie-enabled clients won't suffer. But the right way is, of course, to make this behavior something that can be turned on/off as desired.

RH> Most search tools ignore redirects as well as javascript, etc. We are
RH> talking about protected sites that people do not want indexed anyway. If
RH> they were for public consumption, then there would be no need to protect
RH> them.

Wrong. Tell that to Amazon :) Public sites must be protected too, if they are not going to lose their clients.

RH> There is no good way of addressing this. There are many papers, etc. on
RH> doing so, but they are all at the protocol level and involve the user
RH> possessing a personal key signed by a trusted third party. When DNSSec
RH> becomes more widely deployed, then there will be a reasonable way of doing
RH> something, but not until then.

We live in the present. And when somebody can snoop into your personal details without any serious effort, the problem should be solved.

RH> As it stands now, if the client has cookies enabled, then there is a fairly
RH> secure way of tracking them. If not, there is not a secure/reliable way.
RH> That is the whole reason cookies were invented: to store bits of data on
RH> the client's machine because there was no reliable way of tying it to them
RH> next time.
RH> How are you going to handle HP, which has a whole class B block,
RH> but also has several thousand IPs behind masquerading machines?

I do know about proxies and masquerading. I also know that dialup users often receive a different IP after reconnect. But we should decide which way is more secure - to let everybody take over someone else's session, or to restrict them. We cannot make all the people use cookies.

RH> The IP address can easily be forged. It is called a man-in-the-middle
RH> attack, and in this case, simply requires that you have a machine that is
RH> behind the same proxy server or NAT device to defeat this scheme.

Yes, but if we have a man in the middle, all the efforts are useless - he can simply sniff our traffic. Cookies won't save us, only SSL will.

RH> I vote that the documentation cover the security aspect, and leave it at
RH> that until there is a widely deployed public key system that can be
RH> leveraged instead of trying to invent something that provides an artificial
RH> sense of security and will cause problems.

We don't invent anything - look at Amazon, again. They always provide the SID in the URLs, but I've never heard that they are vulnerable. They don't use pubkeys, they have their pages perfectly indexed by robots, and they deal with the problems, somehow. Try to access their pages with someone else's SID. We can endlessly discuss theoretical matters, but we live now and should deal with this now, somehow, and should not wait for someone with a magic wand, I suppose.

--
Best regards,
Maxim Derkachev  mailto:max...@bo...
IT manager,
Symbol-Plus Publishing Ltd.
phone: +7 (812) 324-53-53
www.books.ru, www.symbol.ru
|
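A minimal sketch of the "session exists only if an IP is registered with it" policy Maxim describes, using an in-memory array in place of a real session store. All names are illustrative - this is not the actual Session4 addition being discussed:

```php
<?php
// Hypothetical sketch: a session "exists" only if an originating IP
// was recorded when it was created. An unknown or IP-less SID is
// trashed and a fresh one issued; a mismatched IP is treated (under
// the strictest policy) as a possible hijack.

function new_sid(): string {
    // Period-style random SID; not cryptographically strong.
    return md5(uniqid((string)mt_rand(), true));
}

function validate_session(array &$store, ?string $sid, string $client_ip): string {
    if ($sid === null || !isset($store[$sid]) || empty($store[$sid]['ip'])) {
        // SID unknown, obsolete, or hand-crafted: issue a new session.
        $sid = new_sid();
        $store[$sid] = ['ip' => $client_ip];
    } elseif ($store[$sid]['ip'] !== $client_ip) {
        // Policy decision: on IP mismatch, drop the old session.
        unset($store[$sid]);
        $sid = new_sid();
        $store[$sid] = ['ip' => $client_ip];
    }
    return $sid;
}
```

As the thread points out, this breaks legitimately for load-balanced proxies and reconnecting dialup users, which is exactly the objection Rob raises.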
From: Giancarlo <gia...@na...> - 2002-12-02 15:10:55
|
PHP4 serialization and URL rewriting (trans_sid) are valuable indeed, and the file savehandler could be accommodated, but propagation is faulty, has no abstraction, and is rigidly tied to all the rest. As I said, if I could choose pieces of it...

> G> Why not 'preissue' a fair enough number of 'second_half of session
> G> info', and save them in the session, so that only one among those can
> G> be appended or cookie_appended? Use-once
> G> random md5 digests. Would that be really bullet_proof..
>
> It does not resolve the problem, because that "second part" is saved
> with the session, and we have access to this part since we know the
> SID.

I meant that the 'second half' can be used only once and must be chosen from a fair enough batch of pregenerated ones.

> Yes, but sometimes the personal info should be present on [almost] every
> page, if the user is authenticated. And the search bot travels the
> same pages as other users; they may have auth info while the bot
> may not (and they will see slightly different pages). It's a common
> case.

What I said is that, under certain not-so-uncommon prerequisites, it can be difficult to have a twin mode/fallback_mode that fits all cases, from the bot to the cookie_only authed user...

Gian
|
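Giancarlo's "pregenerated batch of use-once digests" idea could be sketched roughly as follows. The helper names are hypothetical, not PHPLIB code, and as Maxim notes the scheme still leaks if the attacker can read the session store:

```php
<?php
// Hypothetical sketch: pre-generate a batch of use-once MD5 tokens,
// keep them server-side with the session, and accept a request only
// if it presents an unused token, which is then burned.

function issue_tokens(int $n): array {
    $tokens = [];
    for ($i = 0; $i < $n; $i++) {
        // true marks the token as still unused.
        $tokens[md5(uniqid((string)mt_rand(), true))] = true;
    }
    return $tokens;
}

function consume_token(array &$tokens, string $presented): bool {
    if (!isset($tokens[$presented]) || $tokens[$presented] !== true) {
        return false; // unknown or already spent
    }
    $tokens[$presented] = false; // each token is valid exactly once
    return true;
}
```

A replayed URL then fails on the second use, which is the "use once" property Giancarlo is after; the open problem in the thread is how the next valid token reaches a cookieless client without itself travelling in the URL.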
From: Rob H. <rob...@ws...> - 2002-12-02 15:02:07
|
If what you are saying is that there should be a session save handler that simply abstracts the internal PHP4 session stuff, then I would agree. If what you are saying is that it should be the only option, then I disagree. There are several reasons for implementing variable persistence outside of a core library that you really cannot rewrite without getting into trouble. The most basic of these is that you have several options for how to store session variables. Not to mention the ability to do preprocessing before freezing, having access to the session from a database, performing data capture on sessions for logging purposes, etc. Sometimes, speed is not everything.

Rob Hutton
Web Safe
www.wsafe.com

**********************************************************************
Introducing Symantec Client Security - Integrated Anti-Virus, Firewall, and Intrusion Detection for the Client. Learn more: http://enterprisesecurity.symantec.com/symes238.cfm?JID=2&PID=11624271

> -----Original Message-----
> From: php...@li...
> [mailto:php...@li...]On Behalf Of Maxim
> Derkachev
> Sent: Monday, December 02, 2002 8:31 AM
> To: Giancarlo
> Cc: php...@li...
> Subject: Re[2]: [Phplib-users] new Session4 changes
>
> You talk about implementations, while I pointed out the overall
> session strategy limitations. The *implementations* you mentioned use
> the same basics, the main one of which is HTTP, which is unreliable
> because it has no *internal* state handling - cookies were invented
> to help, but they don't always function, as we know. If the HTTP
> protocol had its own internal session handling, there would be much
> less headache in making the session work reliably and securely. And
> there could be 1000+ session implementations in PHP, but they would
> all be like twins from the network point of view. That's why most of
> them would act just like backends for the standard PHP session module
> - just like another savehandler - because nobody wants to reinvent
> bicycles.
> All the logic needed to implement state propagation has already been
> coded there. And the weaknesses and unreliability are at the heart of
> the whole system - HTTP. The PHP core guys could make the module check
> whether the SID already exists before starting a new session, and much
> more, but performance and/or flexibility would suffer, so they
> preferred to be wise and leave the exact implementation of security
> constraints to the users. And every decent session module, even if it
> uses its own start() and SID propagation logic, will face the same
> dilemma. Something tells me that the good extensions will go the
> standard way.
>
> G> Look, now there is the msession module that seems to suit different needs.
> G> There may come others. That is in fact just one of many session modules,
> G> and with some tightly-tied constraints. If you cannot have it regenerate
> G> into a new session, most mutancy, which means flexibility, is lost. I
> G> saw a PHP SOAP module on sourceforge whose first and only concern is
> G> sessions.
>
> --
> Best regards,
> Maxim Derkachev  mailto:max...@bo...
> IT manager,
> Symbol-Plus Publishing Ltd.
> phone: +7 (812) 324-53-53
> www.books.ru, www.symbol.ru
>
> -------------------------------------------------------
> This sf.net email is sponsored by: ThinkGeek
> Welcome to geek heaven.
> http://thinkgeek.com/sf
> _______________________________________________
> Phplib-users mailing list
> Php...@li...
> https://lists.sourceforge.net/lists/listinfo/phplib-users
|
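For readers unfamiliar with the hook being argued over: "just another savehandler" refers to PHP4's session_set_save_handler(), which keeps PHP's SID generation and propagation logic but delegates storage to user code - which is also where Rob's preprocessing and logging hooks would live. A minimal flat-file sketch (the paths and function names are illustrative, not PHPLIB's):

```php
<?php
// Minimal flat-file save handler sketch for PHP's session module.
// Storage directory is an assumption for this example.
$GLOBALS['sess_dir'] = '/tmp';

function sess_open($save_path, $name) { return true; }
function sess_close() { return true; }

function sess_read($sid) {
    $file = $GLOBALS['sess_dir'] . '/sess_' . basename($sid);
    // The read callback must return the serialized data, or ''.
    return is_readable($file) ? (string)file_get_contents($file) : '';
}

function sess_write($sid, $data) {
    // This is the natural place for pre-freeze processing or logging.
    $file = $GLOBALS['sess_dir'] . '/sess_' . basename($sid);
    return file_put_contents($file, $data) !== false;
}

function sess_destroy($sid) {
    @unlink($GLOBALS['sess_dir'] . '/sess_' . basename($sid));
    return true;
}

function sess_gc($maxlifetime) { return true; }

// Register the callbacks with the session module.
session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                         'sess_write', 'sess_destroy', 'sess_gc');
```

Swapping the file functions for database queries gives exactly the CT_SQL-style container Maxim benchmarked, without touching the SID logic.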
From: Rob H. <rob...@ws...> - 2002-12-02 15:02:05
|
> 1. The check for session existence is trivial. In my addition to
> Session4, the session exists if the originator's IP is registered with
> the session. If we have a SID in the request and don't have *any* IP in
> the session, the session does not exist, so the SID should be
> discarded and a new session with a new random SID should be emitted.

I'm not following. If the session doesn't have an IP then it's not valid? What if the IP doesn't match? Is it invalid also? This simply will not work.

> 2. We should always try to avoid extra redirects - the main reason is
> that those redirects can be misunderstood by the search robots, and
> the site will not be properly indexed.

Most search tools ignore redirects as well as javascript, etc. We are talking about protected sites that people do not want indexed anyway. If they were for public consumption, then there would be no need to protect them.

> 3. We should do something so that session hijacks are made [almost]
> impossible. The old Session class has the same issue, and the session
> can be hijacked using both PHPLib's Session3 and PHP4 session module
> weaknesses. The only way I see is to bind the session, under some
> circumstances, to a marker that cannot be forged. The only such marker
> I see is the user's IP address - the others can be easily rewritten in
> the request variables. Given that the cookie is one of the hardest
> things to forge (just because of the nature of the attack - the
> attacker tries to persuade the victim to click a URL), we can ease
> the restrictions for cookie users.

There is no good way of addressing this. There are many papers, etc. on doing so, but they are all at the protocol level and involve the user possessing a personal key signed by a trusted third party. When DNSSec becomes more widely deployed, then there will be a reasonable way of doing something, but not until then.
As it stands now, if the client has cookies enabled, then there is a fairly secure way of tracking them. If not, there is not a secure/reliable way. That is the whole reason cookies were invented: to store bits of data on the client's machine because there was no reliable way of tying it to them next time. How are you going to handle HP, which has a whole class B block, but also has several thousand IPs behind masquerading machines?

The IP address can easily be forged. It is called a man-in-the-middle attack, and in this case it simply requires that you have a machine behind the same proxy server or NAT device to defeat this scheme.

I vote that the documentation cover the security aspect, and leave it at that until there is a widely deployed public key system that can be leveraged, instead of trying to invent something that provides an artificial sense of security and will cause problems.

> So, a site with such a restrictive policy could recommend cookies to
> everybody and enforce cookie usage by all users that may change their
> IP during the session.
>
> --
> Best regards,
> Maxim Derkachev  mailto:max...@bo...
> IT manager,
> Symbol-Plus Publishing Ltd.
> phone: +7 (812) 324-53-53
> www.books.ru, www.symbol.ru
|
From: Rob H. <rob...@ws...> - 2002-12-02 15:02:02
|
Also, X-Forwarded-For does not work reliably - probably less so than the IP address.

Rob Hutton
Web Safe
www.wsafe.com

> -----Original Message-----
> From: php...@li...
> [mailto:php...@li...]On Behalf Of Maxim
> Derkachev
> Sent: Monday, December 02, 2002 5:59 AM
> To: Richard Archer
> Cc: php...@li...
> Subject: Re[7]: [Phplib-users] new Session4 changes
>
> Hello Richard,
>
> Monday, December 02, 2002, 1:35:36 PM, you wrote:
>
> >> Well, I know that. But it does not resolve the session hijack issue.
>
> RA> Well, using IP address is not a viable solution in any case.
> RA> Too many ISPs run load balancing proxy servers. Mine for instance :)
>
> The check mentioned affects only cookieless clients with changing IPs
> (if they change IP several times during the session, providing the SID
> in the URL or POST body only). I suppose we could also check
> X-Forwarded-For ... In any case, a possibility to avoid session
> hijacks should be added, IMCO. The only marker I can see right now is
> the user's IP address - everything else is even less reliable.
>
> --
> Best regards,
> Maxim Derkachev  mailto:max...@bo...
> IT manager,
> Symbol-Plus Publishing Ltd.
> phone: +7 (812) 324-53-53
> www.books.ru, www.symbol.ru
|
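To make Rob's caveat concrete: X-Forwarded-For is just a plain, client-suppliable, comma-separated header, and the left-most "original client" entry is the easiest part for an attacker to forge. A small illustrative parser (hypothetical helper, not PHPLIB code):

```php
<?php
// Hypothetical helper: extract the claimed original client from an
// X-Forwarded-For header. The value is client-controlled, so it is a
// hint at best and must never outrank the actual TCP peer address.

function first_forwarded_for(?string $header): ?string {
    if ($header === null || trim($header) === '') {
        return null;
    }
    // Proxies append hops as a comma-separated list; the left-most
    // entry is the claimed (and trivially forgeable) original client.
    $hops = array_map('trim', explode(',', $header));
    return $hops[0] !== '' ? $hops[0] : null;
}
```

Which is exactly why the thread concludes it is "probably less reliable than the IP address" for binding a session.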
From: Rob H. <rob...@ws...> - 2002-12-02 15:01:57
|
If GET or POST supersedes and overrules the IP address, then logically the IP test does nothing.

Rob Hutton
Web Safe
www.wsafe.com

> -----Original Message-----
> From: php...@li...
> [mailto:php...@li...]On Behalf Of Maxim
> Derkachev
> Sent: Monday, December 02, 2002 5:59 AM
> To: Richard Archer
> Cc: php...@li...
> Subject: Re[7]: [Phplib-users] new Session4 changes
>
> Hello Richard,
>
> Monday, December 02, 2002, 1:35:36 PM, you wrote:
>
> >> Well, I know that. But it does not resolve the session hijack issue.
>
> RA> Well, using IP address is not a viable solution in any case.
> RA> Too many ISPs run load balancing proxy servers. Mine for instance :)
>
> The check mentioned affects only cookieless clients with changing IPs
> (if they change IP several times during the session, providing the SID
> in the URL or POST body only). I suppose we could also check
> X-Forwarded-For ... In any case, a possibility to avoid session
> hijacks should be added, IMCO. The only marker I can see right now is
> the user's IP address - everything else is even less reliable.
>
> --
> Best regards,
> Maxim Derkachev  mailto:max...@bo...
> IT manager,
> Symbol-Plus Publishing Ltd.
> phone: +7 (812) 324-53-53
> www.books.ru, www.symbol.ru
|
From: Maxim D. <max...@bo...> - 2002-12-02 13:59:19
|
G> >> The old PHPLib Session class is damn slow compared to the native PHP4
G> >> sessions.
G> I think that comes from the save_handler most of all

I guess it's the old serialize/unserialize most of all. When I first tested Session4(_custom) with the same savehandler (CT_SQL) as the old Session, Session4 performed at least 2 times faster.

G> >> 1. The check for session existence is trivial. In my addition to the
G> >> Session4 the session exists if the originator's IP is registered with
G> Why not 'preissue' a fair enough number of 'second_half of session
G> info', and save them in the session, so that only one among those can
G> be appended or cookie_appended? Use-once
G> random md5 digests. Would that be really bullet_proof..

It does not resolve the problem, because that "second part" is saved with the session, and we have access to this part since we know the SID. And this second part can leak with the next request - we would have to append it to the URL for the cookie-disabled. This is also non-standard behavior, so we cannot use the standard trans_sid URL rewriting here.

G> >> 2. We should always try to avoid extra redirects - the main reason is
G> >> that those redirects can be misunderstood by the search robots,
G> This problematic is really at the opposite of the one regarding privacy
G> and hijacking. That's why you may want more mutations, and more fallback
G> paths, depending on the situation. After all, if you have private areas
G> accessible from anything between cookieless_always_authenticated and
G> cookie_only_always_reauth (or any mix), and also want robots to sneak
G> around safely, you need the possibility to mutate the policy in different
G> ways for the cases; no one policy can fit all.

Yes, but sometimes the personal info should be present on [almost] every page, if the user is authenticated. And the search bot travels the same pages as other users; the users may have auth info while the bot may not (and they will see slightly different pages). It's a common case.
--
Best regards,
Maxim Derkachev  mailto:max...@bo...
IT manager,
Symbol-Plus Publishing Ltd.
phone: +7 (812) 324-53-53
www.books.ru, www.symbol.ru
|