From: Alex R. <al...@se...> - 2002-06-27 00:00:28
Hey everyone,

I can't help but think that we keep falling off track in our focus on this project. We haven't even completed discussion about the threat model or security targets yet, but we keep getting drawn into discussions about design. To help clarify a bit, I submit the following general principles for your review.

Often I feel that we are treating the project as just another software development project. Security-critical software development MUST be different from "normal" software development in a couple of ways (I'll explain the reasons as I go).

1.) Simple wins.

In "traditional" software development, features win. The product that implements the most features will seem to be the most attractive and therefore will sell/distribute the most copies. Combined with first-mover and network advantages, the features==market share equation has been slowly ingrained into the minds of software developers the world over. In the security world, this axiom breaks completely.

The enemy of security is complexity (in all of its forms). That which is harder to do is that much harder to do correctly, and when it comes to security, we have seen spectacular failures over and over again because this principle has been broken. The most successful security products (OpenBSD, MULTICS, etc.) all have one thing in common: they do things in simple, obvious ways that may be neither fast nor elegant, but they work... and work... and work...

The trend towards kernelization of security features is a direct result of this maxim, because it can allow developers to significantly decrease the size of the Trusted Computing Base (TCB), which can then be verified much more thoroughly. Systems such as tmach, NT, etc., all make use of this principle in the design of their security-critical subsystems. The larger the TCB (more code running with privileges), the higher the probability that attacks will be successful (as witnessed by IIS). Those familiar with the principle of least privilege may appreciate the simplicity axiom as a restatement or a generalization of that principle.

This axiom suggests that it may be desirable to consider "simple and verified" superior to "elegant and flexible" in the design of our toolkit.

2.) Defend against what is likely, not what is possible.

This runs counter to the normal software development practice of feature-focused development. When it comes to security, priorities must be set for which threats will be considered and dealt with, rather than letting security be determined by the "does it compile and run?" test.

If a defender spends 50% of his time defending against a threat that has an incidence coefficient of 0.0002 but a damage (monetary loss, whatever) of 50,000, but only spends 10% of his time defending against a problem with an incidence of 5 but with a damage of "only" 5, is he doing the right thing?

Clearly not.

While the threat with incidence of 0.0002 may be technically interesting, it is clearly not the source of most loss, and so should be given appropriate priority when crafting a defense.

This rule suggests that when drafting a threat model and security target, it is highly desirable to consult data about the attacks (frequency, success rates, mitigation factors) carried out against systems currently in production in the same or similar veins.
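As a concrete reading of the example above, expected loss is simply incidence multiplied by damage, and worked through, the figures already make the point: the common, low-damage problem is the larger source of loss. A minimal sketch, with the class and variable names purely illustrative and not part of any proposed toolkit:

```java
// Back-of-the-envelope comparison of the two hypothetical threats above,
// ranking them by expected loss (incidence x damage) rather than by how
// technically interesting they are. All figures come from the example.
public class ExpectedLossExample {
    public static void main(String[] args) {
        double rareIncidence = 0.0002, rareDamage = 50000;   // the "interesting" threat
        double commonIncidence = 5,    commonDamage = 5;     // the "boring" threat

        double rareLoss = rareIncidence * rareDamage;        // 10.0
        double commonLoss = commonIncidence * commonDamage;  // 25.0

        System.out.println("Rare threat expected loss:   " + rareLoss);
        System.out.println("Common threat expected loss: " + commonLoss);
        // The common, low-damage threat accounts for more expected loss,
        // so it deserves the larger share of defensive effort.
    }
}
```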
3.) The human factor cannot be ignored.

Software developers are notorious for just throwing things onto interfaces, betraying a lack of understanding of (or interest in) how end users will wind up using the interfaces they produce. Security design challenges this assumption by recognizing that humans are the weakest link in most security-sensitive systems.

Like it or not, humans are involved in almost every security-critical system. By analogy with point #1, the fact that humans are involved suggests that human error can be a significant risk factor and so should be dealt with in the use and operation of security-critical systems. In the case of our project, this may suggest that it is highly desirable to distribute some form of education/training/awareness material with the toolkit.

4.) Environments Change

Assumptions made about environmental factors must be either overt (the system cannot function without them) or constantly checked (assumption==valid or die), as a change in one of these assumptions in all likelihood _will_ undermine the security engineered into an application or product (a rough sketch of such a check follows this message). Take, for instance, the case of ATMs. At one point, cryptography for ATMs was handled by an "exchange" box that was external to the ATM itself. This machine handled the communication between the bank and external networks. The assumption here was that such machines would be physically close to an ATM, because ATMs were simply too expensive to place outdoors or to have more than one at any branch.

When the price of ATMs fell dramatically, they started to be placed in many locations not accounted for by this assumption. Yet in some systems, encryption was still carried out in the central exchange machine located in the bank branch, meaning that ATMs not near the bank were transmitting information in the clear for possibly miles. The environmental assumptions had not been designed in such a way that the system would break when they changed. As a result, very insecure systems were built on top of what was once a demonstrably secure infrastructure.

I believe that unless we (as a team) use these axioms in judging the quality of a design/implementation for our project, we may wind up creating a really whiz-bang toolkit that doesn't do very much to ease the burden of developers in creating secure web applications.

So what does everyone else think? Can we develop with these principles?

--
Alex Russell
al...@Se...
al...@ne...
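A minimal sketch of the "assumption==valid or die" idea from point 4: each environmental assumption is made explicit and re-checked, and the application fails closed when one no longer holds. The EnvironmentAssumption interface and AssumptionGuard class are hypothetical names used only for illustration, not a proposed toolkit API.

```java
// Minimal fail-closed guard: every environmental assumption is stated
// explicitly and re-evaluated; the application stops rather than running
// insecurely when one of them no longer holds.
import java.util.List;

interface EnvironmentAssumption {
    String describe();      // e.g. "crypto is performed on the ATM itself"
    boolean stillHolds();   // re-check the assumption against the environment
}

public class AssumptionGuard {
    public static void verifyOrDie(List<EnvironmentAssumption> assumptions) {
        for (EnvironmentAssumption a : assumptions) {
            if (!a.stillHolds()) {
                // assumption==valid or die: refuse to continue.
                throw new IllegalStateException(
                        "Environmental assumption no longer holds: " + a.describe());
            }
        }
    }
}
```

Calling verifyOrDie at startup (and periodically thereafter) keeps the assumptions overt rather than letting them silently decay, which is the failure mode the ATM example describes.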
From: Mark C. <ma...@cu...> - 2002-06-27 02:21:34
Alex,

You're a man after my own heart ;-) Models like Biba and Bell-LaPadula should have taught us far more than appears to have been the case!

1. Monolithic kernels - great example.

2. Agreed. Remember, you don't have to feed the world from day one. Why not define requirements and then prioritize them? Lower priorities can be dealt with in release 2.0, etc.

3. Agreed. I would have thought that something more than good documentation seems appropriate. I can get you help with that when the time comes.

So why not take this approach? (This is based on the RUP - the Rational Unified Process.)

1. Develop a Vision Document. http://www.rational.com/media/worldwide/singapore/introducing_processes.pdf is one example on the web, but you'll find lots. The vision document includes features, not requirements. It includes short sections on the problem we are trying to solve, objectives, risks, resources, etc.

2. Develop a set of prioritized requirements.

3. From the requirements, develop your use cases and move on.

Remember, iterative development works better than waterfall development as well.

I can help with this sort of stuff if you want. I can certainly take a first pass at a Vision document on Friday. Just let me know!
From: Gabriel L. <ga...@bu...> - 2002-06-27 03:43:08
Alex,

I think these principles are very good. I totally agree with them. But I think in some ways we are still stuck in the vision thing. One of the key problems with the vision side of things is that I think we are all close to each other but slightly different, so we are having a communication issue where we are just missing each other.

What I'd like to do is take Mark's suggestion to heart about looking at requirements. In fact, I know this turns the Rational model on its head a little, but I'd like to pull together a list of very high-level requirements first and foremost and use that to help us drive the vision thing. Why? Primarily because I don't think we have developed a common vocabulary yet, and so we are like ships passing in the night.

Here are some examples of what I mean by very high-level requirements, and I'm going to pick on the topics of the earlier vote and some of the earlier discussions. Canonicalization has been a big part of our discussion and is generally agreed upon as something we need to do. But in my mind this is really just a solution to a requirement that seems to be going unspoken. That requirement is that attackers shouldn't be able to obfuscate an attack so that we cannot block it. The way we are looking at solving this is by normalizing all input so that we can apply a single set of tests to any input (a rough sketch follows this message). I have no bones to pick with this as the solution, but I'd like to get us to look at the higher-level requirements first, so that we can get a full-coverage idea of what kinds of things we need to think about and our approach to them.

I think this should be conducted in a brainstorming fashion, and I think it probably makes sense to do it on this email list. What I will do is be the note taker. I'd like to see people put together emails that run through their high-level requirements for what they think this thing needs to do, along with a little bit of explanation, so we all can start to get a grip on what each other is talking about and what each other's terminology is. Let's not eliminate anything at this point.

Once we have all these in place, I think we can look at them and try to organize them. My hope is that out of this process we can come up with goals, long term and short term, and requirements for different releases of the software, and that these basic requirements are the building blocks of those larger concepts.

-gabe
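A rough sketch of the normalize-then-test approach described above, assuming canonicalization means repeated URL-decoding to a fixed point followed by a single allow-list check. The class name, method names, and the particular allow-list are hypothetical, not a proposed interface for the toolkit.

```java
// Canonicalize-then-validate sketch: repeatedly URL-decode the input until
// it reaches a fixed point, so nested encodings cannot hide a payload, then
// apply one set of validation tests to the normalized form.
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.regex.Pattern;

public class CanonicalizeExample {
    private static final Pattern SAFE = Pattern.compile("[A-Za-z0-9_\\- ]*");
    private static final int MAX_DECODE_PASSES = 10;   // defensive bound on the loop

    static String canonicalize(String input) {
        String current = input;
        for (int i = 0; i < MAX_DECODE_PASSES; i++) {
            String decoded = URLDecoder.decode(current, StandardCharsets.UTF_8);
            if (decoded.equals(current)) {
                return current;                         // fixed point reached
            }
            current = decoded;
        }
        throw new IllegalArgumentException("input did not reach a canonical form");
    }

    static boolean isAllowed(String input) {
        return SAFE.matcher(canonicalize(input)).matches();
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("hello%2520world"));   // double-encoded space -> true
        System.out.println(isAllowed("%253Cscript%253E"));  // double-encoded <script> -> false
    }
}
```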