From: Victor B. <vb...@gm...> - 2009-11-05 18:45:47
|
Hi all,

I decided to install all / most of the available plugins on my developer instance in order to learn more about them and get a feel for what is available for MantisBT. What I noticed is that with several of these plugins, I get PHP notices in very basic workflows. For example:

1. FAQ Plugin: when adding a new question or viewing existing questions.
2. Product-Matrix: when adding a new product.

The errors blocked me from testing further, and in some cases actually blocked me from using MantisBT itself, for example if the plugin breaks on the View Issues page or the view issue page. This reduces the user's perception of MantisBT's stability, and in some cases the errors will not be localized to just the plugin, but will affect the whole MantisBT instance.

I wonder if we should make sure that by default we have a higher level of error reporting that applies to MantisBT and the plugins. In the past, we assumed a smaller number of dependencies and heavy usage of the generated code, and hence we caught any of these issues. I would like to default to showing / halting on all errors, to make sure that all developers and plugin developers find these issues during development and not when the code is deployed by others who have a stricter error reporting configuration.

Ideas are welcome.

-Victor. |
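The stricter default Victor describes could be expressed through MantisBT's `$g_display_errors` option in `config_inc.php`. This is only a sketch, not the shipped default; the exact set of `DISPLAY_ERROR_*` constants and severity keys depends on the MantisBT version in use:

```php
<?php
# Sketch only (assumes MantisBT's $g_display_errors option and the
# DISPLAY_ERROR_HALT constant from the 1.2-era codebase): halt on
# notices and warnings so developers and plugin authors see them
# immediately instead of shipping them.
$g_display_errors = array(
	E_WARNING      => DISPLAY_ERROR_HALT,
	E_NOTICE       => DISPLAY_ERROR_HALT,
	E_USER_ERROR   => DISPLAY_ERROR_HALT,
	E_USER_WARNING => DISPLAY_ERROR_HALT,
	E_USER_NOTICE  => DISPLAY_ERROR_HALT,
);

# Independently of MantisBT's own error handler, PHP itself can be
# asked to report everything during development:
error_reporting( E_ALL );
```

A default like this would surface the FAQ and Product-Matrix notices on the first page load rather than leaving them for end users to find.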
From: John R. <jr...@le...> - 2009-11-05 18:54:41
|
Victor Boctor wrote:
> 2. Product-Matrix: When adding a new product.

I'm slowly trying to sweep through this and make sure that there are no notices produced, but sometimes it's not obvious when/where it happens, or I don't find them in my normal usage. If you find any, please report them on http://leetcode.net/mantis

> I wonder if we should make sure that by default we have a higher level
> of error reporting that applies to MantisBT and the plugins. ... I would
> like to default to showing / halting on all errors by default to make
> sure that all developers and plugin developers find these issues during
> development and not when deployed by others who have a more strict error
> reporting configuration.

I'd be fine with that. I tend to run Mantis in my own installs with all errors set to halt, and I would imagine that defaults like that could only help encourage better code in the first place...

--
John Reese
LeetCode.net |
From: David H. <hic...@op...> - 2009-11-06 00:04:04
|
On Thu, 2009-11-05 at 10:45 -0800, Victor Boctor wrote:
> I wonder if we should make sure that by default we have a higher level
> of error reporting that applies to MantisBT and the plugins. In the
> past, we assumed a smaller number of dependencies and a lot of usage
> of the generated code and hence we caught any of these issues. I
> would like to default to showing / halting on all errors by default to
> make sure that all developers and plugin developers find these issues
> during development and not when deployed by others who have a more
> strict error reporting configuration.

I fully agree. It'll help us improve MantisBT by bringing more attention to the warnings and other small defects that exist. That said, I think all developers and plugin authors should be turning on maximum-verbosity error reporting. But you could then argue that it's impossible for them to test the plugin or the MantisBT core with 100% coverage. Therefore, having general users see warnings and errors will lead them to report those errors quickly, so that we can solve them.

Regards,
David |
From: Gianluca S. <gi...@gm...> - 2009-11-06 07:55:55
|
On Fri, Nov 6, 2009 at 1:03 AM, David Hicks <hic...@op...> wrote:
> I fully agree. It'll help us improve MantisBT by bringing more attention
> to the warnings and other small defects that exist. Saying that, I think
> all developers and plugin authors should be turning on maximum verbosity
> error reporting. But you could then argue that it's impossible for them
> to test the plugin or the MantisBT core with 100% coverage.

That's why IMHO we should work on improving the test coverage (and that reminds me, I wanted to thank Robert for the work on the SOAP API).

--
Gianluca Sforna

http://morefedora.blogspot.com
http://www.linkedin.com/in/gianlucasforna |
From: Robert M. <rob...@gm...> - 2009-11-09 21:33:11
|
On Fri, Nov 6, 2009 at 9:55 AM, Gianluca Sforna <gi...@gm...> wrote:
> On Fri, Nov 6, 2009 at 1:03 AM, David Hicks <hic...@op...> wrote:
>> I fully agree. It'll help us improve MantisBT by bringing more attention
>> to the warnings and other small defects that exist. Saying that, I think
>> all developers and plugin authors should be turning on maximum verbosity
>> error reporting. But you could then argue that it's impossible for them
>> to test the plugin or the MantisBT core with 100% coverage.
>
> That's why IMHO we should work on improving the tests coverage (and
> that reminds me I wanted to thank Robert for the work on the SOAP API)

Thanks :-) Well, to state the obvious, I added those tests because it suited me and because it was possible. As I may have said previously, it was easy to add tests to the SOAP API since there was a framework to build on, and SOAP tests are pretty much self-contained.

I would've liked to add some tests for some core utilities, e.g. the string API, but I think that almost all function calls have a tendency to hit the database, which makes it unwieldy to write _unit_ tests, which by definition should be self-contained.

Having a background in the 'YouAreGonnaNeedIt PleaseOverEngineerIt' Java world, I can say that one way to go is to wrap these dependencies, like we did for the MantisEnum class. This might involve e.g. having a MantisConfig object which by default hits the database, just like the current config_api.php, and is either placed in a registry (global) object when a script starts executing or is passed to each function call. When testing, we simply set up a dummy MantisConfig which is pre-configured with an array of options, to ensure we get the behaviour we need.

Another way is to forget unit tests and focus on end-to-end tests, using something like Selenium RC ( http://seleniumhq.org/projects/remote-control/ ) to drive the whole browser interface. These, like the SOAP API tests, bring in no dependencies - just clean up after yourself - and require no other changes. On the other hand, they are more brittle, slower, and need more care when making changes - adding a row can mess up the fixture.

FWIW, I would favour the first approach, but I think any kind of extra testing would be good. What do you think?

Robert

--
Sent from my (old) computer |
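The dependency-wrapping idea Robert describes could look roughly like this. All of the names here (ConfigStore, DbConfigStore, ArrayConfigStore) are invented for illustration and do not exist in MantisBT core; only config_get() is an existing API:

```php
<?php
// Hypothetical sketch: a small interface wrapping configuration access,
// with a production implementation that delegates to the database-backed
// config_api.php, and an in-memory double for unit tests.
interface ConfigStore {
	public function get( $p_option, $p_default = null );
}

// Production: delegate to the existing config_api.php behaviour.
// (Defined but never called here, since config_get() needs a live
// MantisBT installation.)
class DbConfigStore implements ConfigStore {
	public function get( $p_option, $p_default = null ) {
		return config_get( $p_option, $p_default );
	}
}

// Test double: pre-configured with an array of options, never touches
// the database.
class ArrayConfigStore implements ConfigStore {
	private $options;

	public function __construct( array $p_options ) {
		$this->options = $p_options;
	}

	public function get( $p_option, $p_default = null ) {
		return isset( $this->options[$p_option] )
			? $this->options[$p_option]
			: $p_default;
	}
}

// In a unit test, inject the double instead of hitting the database:
$t_config = new ArrayConfigStore( array( 'allow_anonymous_login' => false ) );
```

Whether the store is passed to each function or parked in a global registry at script start is the design choice Robert leaves open; either way, the code under test no longer cares where its options come from.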
From: John R. <jr...@le...> - 2009-11-09 21:47:00
|
Robert Munteanu wrote:
> Another way is to forget unit tests and focus on end-to-end tests,
> using something like Selenium RC (
> http://seleniumhq.org/projects/remote-control/ ) and run the whole
> browser interface. These , like the SOAP API, bring in no dependencies
> - just clean up after yourself - and require no other changes, but on
> the other hand are more brittle, slower, and need more care when
> making changes - adding a row can mess up the fixture.

Selenium has been mentioned before, and in principle I really like the idea of getting that set up. In practice, though, none of the core team (that I know of) have any experience with Selenium, and even less time to spend learning it and setting it up.

I welcome anyone familiar with it to set up and contribute a test suite, documentation on how to set it up and run it for Mantis, as well as any necessary bits to automate the process.

--
John Reese
LeetCode.net |
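For anyone considering picking this up: at the time, the usual way to drive Selenium RC from PHP was PHPUnit's Selenium extension. The sketch below is a guess at what a MantisBT smoke test might look like; the page script names are real MantisBT files, but the base URL and credentials are placeholders, and nothing here runs without a Selenium RC server and browser installed:

```php
<?php
// Hypothetical Selenium RC smoke test for MantisBT, using PHPUnit's
// Selenium extension. Placeholder URL/credentials; requires a running
// Selenium RC server, so this is a sketch rather than a working test.
class LoginSmokeTest extends PHPUnit_Extensions_SeleniumTestCase {
	protected function setUp() {
		$this->setBrowser( '*firefox' );
		$this->setBrowserUrl( 'http://localhost/mantisbt/' );
	}

	public function testLoginAndViewIssues() {
		$this->open( 'login_page.php' );
		$this->type( 'username', 'administrator' );
		$this->type( 'password', 'root' );
		$this->clickAndWait( 'css=input[type=submit]' );

		// A PHP notice leaking into the markup would show up as text
		// on the page, which ties back to Victor's original complaint.
		$this->open( 'view_all_bug_page.php' );
		$this->assertTextNotPresent( 'Notice:' );
	}
}
```

This illustrates both sides of Robert's trade-off: no changes to MantisBT itself are needed, but the test is coupled to page structure and fixture data.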
From: Victor B. <vb...@gm...> - 2009-11-10 07:39:58
|
On Mon, Nov 9, 2009 at 1:46 PM, John Reese <jr...@le...> wrote:
> Robert Munteanu wrote:
> > Another way is to forget unit tests and focus on end-to-end tests,
> > using something like Selenium RC (
> > http://seleniumhq.org/projects/remote-control/ ) and run the whole
> > browser interface. These , like the SOAP API, bring in no dependencies
> > - just clean up after yourself - and require no other changes, but on
> > the other hand are more brittle, slower, and need more care when
> > making changes - adding a row can mess up the fixture.
>
> Selenium has been mentioned before, and in principle I really like the
> idea of getting that set up. In practice though, none of the core (that
> I know of) have any experience with Selenium, and even less time to
> spend learning it and setting it up.
>
> I welcome anyone familiar with it to set up and contribute a test suite,
> documentation on how to set it up and run it for Mantis, as well as any
> necessary bits to automate the process.

I think there are benefits in both options. However, if I had to choose one, it would for sure be the unit test approach. That is exactly why I developed the MantisEnum example. I've also set up the SOAP test cases, since I found no other reliable way to check whether the SOAP API is broken and to verify fixes (in the past I used the MantisConnect .NET API unit tests to achieve the same, but I couldn't get others to run them, and hence I decided to move the unit tests closer to the code).

I know that my suggestions below will probably be perceived as big changes and may suggest a re-write. However, I personally don't believe in re-writes for open source projects; I believe that if we have a goal / direction, we can gradually move towards it. I would love to see us refactor to achieve the following:

1. Refactor business logic with the following characteristics:
   a. Use classes rather than functions / use the new coding conventions suggested by Gianluca from the Zend Framework.
   b. Don't include view logic (i.e. formatting, UI messages, etc).
   c. Throw exceptions rather than trigger errors. This will be useful for unit tests as well as the SOAP API and any future APIs.
   d. Abstract access to configuration / database so that we can have production / unit test implementations of such stores.

2. Change the core APIs to maintain the same interface / semantics on top of the classes developed in 1. This is an interim solution until all page scripts are changed to use the classes directly, after which the APIs can be retired. We will probably also have to provide some grace period before retiring these APIs, to allow plugins to move to the new classes.

3. Have a common pattern for the data access layer. I would rather these classes use a common infrastructure / interface than be 100% custom per entity (Paul is looking at an ADOdb replacement; we should have a strategy as part of this effort).

4. Utility APIs like the string API should be refactored into easily testable code like MantisEnum.

5. Refactor print_api, or any APIs that generate HTML, into APIs that return a data structure / array / value and others that render it. This moves us closer to templates and allows us to unit test the APIs that prepare the data. At one point, I started prepare_api to move into it the logic from print_api for preparing the data, and also moved some of the logic to the entity-specific APIs (e.g. users, bugs, files, etc).

6. Make sure we have unit tests and continuous integration going for all refactored code, documentation, etc.

7. For the viewing / templates logic, we can use plain PHP as the template engine, or we can potentially use a framework. However, I think the above refactoring is a stepping stone before taking a dependency on a templating engine while our core code spits out HTML.

This doesn't mean we shouldn't consider the GUI-based end-to-end tests, and I welcome any contributions there. However, as a core team, I think we have the skills necessary to deliver on unit testing, and we should make our best effort to gradually move in this direction.

-Victor |
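Point 5 of Victor's plan can be sketched as follows. The function names here are invented for illustration (they are not existing MantisBT APIs): one function prepares plain data, the other renders it, so the data step can be unit tested without parsing HTML:

```php
<?php
// Hypothetical illustration of the prepare/render split: a
// print_api-style <option> list generator, split into a "prepare"
// function returning pure data and a "render" function formatting it.

function prepare_status_option_list( array $p_statuses, $p_current ) {
	$t_rows = array();
	foreach ( $p_statuses as $t_id => $t_label ) {
		$t_rows[] = array(
			'value'    => $t_id,
			'label'    => $t_label,
			'selected' => ( $t_id === $p_current ),
		);
	}
	return $t_rows; // pure data: easy to assert on in a unit test
}

function render_status_option_list( array $p_rows ) {
	$t_html = '';
	foreach ( $p_rows as $t_row ) {
		$t_html .= sprintf( "<option value=\"%d\"%s>%s</option>\n",
			$t_row['value'],
			$t_row['selected'] ? ' selected="selected"' : '',
			htmlspecialchars( $t_row['label'] ) );
	}
	return $t_html;
}
```

The render half could later be replaced by a template without touching the prepare half, which is the stepping-stone property Victor is after in point 7.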