|
From: <ma...@li...> - 2002-01-07 18:52:10
|
Dieter,

Inexact searching aside, primary key and foreign key indexes, along with candidate key indexes, should be provided at a minimum for each relation in the system. You will also find that RDBMSs nowadays are substantially more intelligent about using partial keys and making opportunistic use of available indexes to accelerate ad hoc queries. If you provide no indexing, the RDBMS cannot use this intelligence.

There is no sound argument against doing this in the general case. (For example, if most users have relatively small installations, they will not pay a substantial reindexing penalty in any case.) Within limits, I think benchmarking will show that a range of indexes should be provided. True, some queries can't or won't take advantage of indexing; over time, you simply minimize your reliance on those sorts of queries.

Qaexl has provided a large number of indexes, more by far than I would have assumed most users would need (or want). However, he based his list on benchmarking results, so maybe he has found some useful speedups worth looking into, yes?

Matt

On Mon, 7 Jan 2002, Dieter Simader wrote:
> Indices are only useful for an exact search, not a fuzzy search as is used
> on most of the queries looking for a name or partnumber.
>
> example:
> dws=# create index parts_idx on parts (partnumber);
> CREATE
> dws=# explain select * from parts where partnumber = '123';
> NOTICE: QUERY PLAN:
>

--
Matt Benjamin
The Linux Box
206 South Fifth Ave. Suite 150
Ann Arbor, MI 48104
tel. 734-761-4689   fax. 734-769-8938
cel. 734-216-5309   pgr. 734-431-0118
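A hedged aside on the "partial keys" point: a multicolumn btree can serve queries that constrain only its leading column(s). The table and column names below come from the thread's schema; the index itself is only an illustration, not something SQL-Ledger ships.

create index ar_cust_date_idx ON ar (customer, transdate);
-- constrains only the leading column, yet the planner can still use the index:
explain select * from ar where customer = 42;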
|
From: Dieter S. <dsi...@sq...> - 2002-01-07 18:25:18
|
Indices are only useful for an exact search, not a fuzzy search as is used on most of the queries looking for a name or partnumber.

example:

dws=# create index parts_idx on parts (partnumber);
CREATE
dws=# explain select * from parts where partnumber = '123';
NOTICE: QUERY PLAN:
Index Scan using parts_idx on parts (cost=0.00..2.01 rows=1 width=120)
EXPLAIN

dws=# explain select * from parts where partnumber LIKE '%123';
NOTICE: QUERY PLAN:
Seq Scan on parts (cost=0.00..17.06 rows=1 width=120)
EXPLAIN

You see that the index is not used at all on a 'LIKE' search. If you remove the % from the backend code, the index is used:

dws=# explain select * from parts where partnumber LIKE '123';
NOTICE: QUERY PLAN:
Index Scan using parts_idx on parts (cost=0.00..2.01 rows=1 width=120)
EXPLAIN

If you run a database with 10,000+ tuples in the parts table, it certainly pays off to change the backend from a fuzzy search to an exact search. You lose a lot of flexibility, but the performance increase makes it all worthwhile. With the exception of having an index for the ID, dates and table pointers, it does not make much sense to create an index for a name, partnumber or invnumber.

Dieter Simader      http://www.sql-ledger.org      (780) 472-8161
DWS Systems Inc.    Accounting Software            Fax: 478-5281
=========== On a clear disk you can seek forever ===========

On Mon, 7 Jan 2002, Ho-Sheng Hsiao wrote:
> On Sun, Jan 06, 2002 at 09:11:11AM -0800, Wes Warner wrote:
> > Could you please post these commands?
>
> Sure. You can cut-and-paste or save it into a file:
>
> BEGIN WORK;
> create index acc_trans_tid_idx ON acc_trans (trans_id);
> create index acc_trans_cid_idx ON acc_trans (chart_id);
> create index acc_trans_tdate_idx ON acc_trans (transdate);
> create unique index parts_id_idx ON parts (id);
> create index parts_lower_desc_idx ON parts (lower(description));
> create index parts_lower_pnumber_idx ON parts (lower(partnumber));
> create unique index ap_id_idx ON ap (id);
> create index ap_transdate_idx ON ap (transdate);
> create index ap_datepaid_idx ON ap (datepaid);
> create index ap_invnumber_idx ON ap (invnumber);
> create index ap_ordnumber_idx on ap (ordnumber);
> create index ap_vendor_id_idx ON ap (vendor);
> create unique index ar_id_idx ON ar (id);
> create index ar_transdate_idx ON ar (transdate);
> create index ar_datepaid_idx ON ar (datepaid);
> create index ar_invnumber_idx ON ar (invnumber);
> create index ar_customer_id_idx ON ar (customer);
> create unique index customer_id_idx ON customer (id);
> create index customer_macc_idx ON customer (macc);
> create index customer_lower_name_idx ON customer (lower(name));
> create unique index vendor_id_idx ON vendor (id);
> create index vendor_macc_idx ON vendor (macc);
> create index vendor_lower_name_idx ON vendor (lower(name));
> create index customer_tax_idx ON customertax (customer_id);
> create index vendor_tax_idx ON vendortax (vendor_id);
> create index parts_tax_idx ON partstax (parts_id);
> create index chart_link_idx ON chart (link);
> create index tax_idx ON tax (chart_id);
> create unique index gl_idx ON gl (id);
> create index gl_transdate_idx ON gl (transdate);
> COMMIT WORK;
>
> For 1.8.0, at the very least add:
>
> BEGIN WORK;
> create unique index oe_id_idx ON oe (id);
> create index oe_ordnumber_idx ON oe (ordnumber);
> create index oe_transdate_idx ON oe (transdate);
> COMMIT WORK;
>
> I have not tested this stuff against a 1.8.x platform, so Your Mileage
> May Vary.
>
> For those who want to tinker: I've chosen these indexes based on
> several criteria.
>
> - All primary keys MUST have a UNIQUE INDEX. That would be customer.id,
> vendor.id, parts.id, oe.id, ap.id, ar.id, etc. MySQL has a PRIMARY KEY
> syntax which might even be in standard ANSI SQL and therefore
> PostgreSQL -- declaring within a table that a field is a primary key
> automagically creates the unique index, which in turn gives the
> optimizer something to work with.
>
> - Then, the secondary indexes are chosen on common JOIN points or WHERE
> clauses. Having messed with the guts of the code for 80 hours last
> week, I had a pretty good idea of what was called frequently. For
> example, in searching for parts by description, the code uses lower()
> -- which I understand has been taken out in 1.8.x -- so in the above I
> had a
>
> CREATE INDEX customer_lower_name_idx ON customer (lower(name));
>
> That's supposed to make the search faster. Another commonly searched
> criterion is dates, so I made sure all the transaction date fields
> have an index. I used guesstimates. If I were in the "slow and
> careful" rather than the "fast and sloppy" mode, I would have used
> EXPLAIN for each of the queries and had PG tell me precisely which
> indices were being used and which ones weren't.
>
> I might have gone overboard and added too many fields. That typically
> affects INSERTs and UPDATEs, where for each index the backend has to
> add another entry into the index. What's fun, though, is that inserts
> into the indices are typically O(log n) rather than O(n) (e.g. 20
> instead of 1 million). So there's a tradeoff between what sort of
> indices are necessary and what aren't. Certainly there's room for
> tweaks.
>
> I timed the operation by calling the modules via commandline:
>
> cd sql-ledger
> time ./ar.pl "login=myloginstuff&action=something& ... "
>
> It works assuming you've created a login where no passwords are
> required.
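A hedged workaround for the trailing-wildcard case above, on PostgreSQL releases that provide reverse() (the 7.x series discussed in this thread did not): index the reversed string, then rewrite the suffix search as an anchored prefix search. The index name is hypothetical; parts and partnumber come from the thread.

create index parts_rev_idx on parts (reverse(partnumber) text_pattern_ops);
-- partnumber LIKE '%123' becomes an anchored, index-friendly scan:
explain select * from parts where reverse(partnumber) LIKE reverse('123') || '%';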
|
From: Roderick A. A. <raa...@ti...> - 2002-01-07 16:50:19
|
On Mon, 7 Jan 2002, te...@li... wrote:
> Hello all and Happy New 2002!
>
> Do you know any documentation source about the stuff related with
> accounting, ar, ap, general terms and so on, better if in
> electronic form?

Earlier this year this URI was provided. I have just tried to access it but got a timeout error. TMMV.

http://www.onlinewbc.org/Docs/finance/bkpg_acct.html#intro

Cheers,
Rod

--
Let Accuracy Triumph Over Victory
Zetetic Institute    "David's Sling"    Marc Stiegler
|
From: <te...@li...> - 2002-01-07 16:06:34
|
Hello all and Happy New 2002!

Do you know any documentation source about the stuff related with accounting, ar, ap, general terms and so on, better if in electronic form?

It should be useful to have some technical (from the end user point of view) knowledge base or reference, when trying to obtain some technical (from S.A and DBA point of view) results during implementation.

Thanks in advance.

Bye,

Gianluca Cecchi
|
From: <ma...@li...> - 2002-01-07 15:22:40
|
Ho-Sheng,

All SQL-based RDBMSs use indexes, and use them in essentially the same manner (from the DBA's point of view). If SQL-Ledger isn't supplying indexing for tables and relations, it's just a bug. (Dieter?) It does look like my 1.6.1 setup is missing a lot of obvious foreign keys/indexes.

You might supply the SQL statements (CREATE INDEX) you developed to the list, as it sounds like this would help people.

Matt

On Sun, 6 Jan 2002, Ho-Sheng Hsiao wrote:
> On Sun, Jan 06, 2002 at 11:26:33AM +0100, Roland Stoker wrote:
> > Ehhh guys????
> >
> > Indexes make search-queries go from O(n) to O(log n)
> > Read O as order
> > That's about the biggest performance difference you can get in a database!!!!!

--
Matt Benjamin
The Linux Box
206 South Fifth Ave. Suite 150
Ann Arbor, MI 48104
tel. 734-761-4689   fax. 734-769-8938
cel. 734-216-5309   pgr. 734-431-0118
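A minimal sketch of the indexing behavior at issue, with illustrative table shapes rather than SQL-Ledger's actual schema: a PRIMARY KEY declaration creates its unique index automatically, while a foreign-key column gets no index unless you create one.

create table customer_demo (id integer primary key, name text);
create table ar_demo (
    id integer primary key,
    customer integer references customer_demo (id)  -- no index is created for this FK column
);
create index ar_demo_customer_idx on ar_demo (customer);  -- so add one explicitly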
|
From: Ho-Sheng H. <qa...@ne...> - 2002-01-07 08:29:22
|
On Sun, Jan 06, 2002 at 09:11:11AM -0800, Wes Warner wrote:
> Could you please post these commands?

Sure. You can cut-and-paste or save it into a file:

BEGIN WORK;
create index acc_trans_tid_idx ON acc_trans (trans_id);
create index acc_trans_cid_idx ON acc_trans (chart_id);
create index acc_trans_tdate_idx ON acc_trans (transdate);
create unique index parts_id_idx ON parts (id);
create index parts_lower_desc_idx ON parts (lower(description));
create index parts_lower_pnumber_idx ON parts (lower(partnumber));
create unique index ap_id_idx ON ap (id);
create index ap_transdate_idx ON ap (transdate);
create index ap_datepaid_idx ON ap (datepaid);
create index ap_invnumber_idx ON ap (invnumber);
create index ap_ordnumber_idx on ap (ordnumber);
create index ap_vendor_id_idx ON ap (vendor);
create unique index ar_id_idx ON ar (id);
create index ar_transdate_idx ON ar (transdate);
create index ar_datepaid_idx ON ar (datepaid);
create index ar_invnumber_idx ON ar (invnumber);
create index ar_customer_id_idx ON ar (customer);
create unique index customer_id_idx ON customer (id);
create index customer_macc_idx ON customer (macc);
create index customer_lower_name_idx ON customer (lower(name));
create unique index vendor_id_idx ON vendor (id);
create index vendor_macc_idx ON vendor (macc);
-- originally posted as "ON customer (lower(name))"; vendor is presumably intended:
create index vendor_lower_name_idx ON vendor (lower(name));
create index customer_tax_idx ON customertax (customer_id);
create index vendor_tax_idx ON vendortax (vendor_id);
create index parts_tax_idx ON partstax (parts_id);
create index chart_link_idx ON chart (link);
create index tax_idx ON tax (chart_id);
create unique index gl_idx ON gl (id);
create index gl_transdate_idx ON gl (transdate);
COMMIT WORK;

For 1.8.0, at the very least add:

BEGIN WORK;
create unique index oe_id_idx ON oe (id);
create index oe_ordnumber_idx ON oe (ordnumber);
create index oe_transdate_idx ON oe (transdate);
COMMIT WORK;

I have not tested this stuff against a 1.8.x platform, so Your Mileage May Vary.

For those who want to tinker: I've chosen these indexes based on several criteria.

- All primary keys MUST have a UNIQUE INDEX. That would be customer.id, vendor.id, parts.id, oe.id, ap.id, ar.id, etc. MySQL has a PRIMARY KEY syntax which might even be in standard ANSI SQL and therefore PostgreSQL -- declaring within a table that a field is a primary key automagically creates the unique index, which in turn gives the optimizer something to work with.

- Then, the secondary indexes are chosen on common JOIN points or WHERE clauses. Having messed with the guts of the code for 80 hours last week, I had a pretty good idea of what was called frequently. For example, in searching for parts by description, the code uses lower() -- which I understand has been taken out in 1.8.x -- so in the above I had a

  CREATE INDEX customer_lower_name_idx ON customer (lower(name));

  That's supposed to make the search faster. Another commonly searched criterion is dates, so I made sure all the transaction date fields have an index. I used guesstimates. If I were in the "slow and careful" rather than the "fast and sloppy" mode, I would have used EXPLAIN for each of the queries and had PG tell me precisely which indices were being used and which ones weren't.

I might have gone overboard and added too many fields. That typically affects INSERTs and UPDATEs, where for each index the backend has to add another entry into the index. What's fun, though, is that inserts into the indices are typically O(log n) rather than O(n) (e.g. 20 instead of 1 million). So there's a tradeoff between what sort of indices are necessary and what aren't. Certainly there's room for tweaks.

I timed the operation by calling the modules via commandline:

cd sql-ledger
time ./ar.pl "login=myloginstuff&action=something& ... "

It works assuming you've created a login where no passwords are required.

--
-Qaexl-
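The "slow and careful" mode described above would look roughly like this; the query shapes are guesses from the thread, not taken from SQL-Ledger's code, and the plans noted are what one would hope for, not guaranteed.

explain select * from ar where transdate between '2001-01-01' and '2001-12-31';
-- ideally: Index Scan using ar_transdate_idx, rather than Seq Scan on ar
explain select * from customer where lower(name) = 'acme tools';
-- ideally: Index Scan using customer_lower_name_idx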
|
From: Dieter S. <dsi...@sq...> - 2002-01-06 23:43:52
|
Yes, the simple way is to add an expense account (Sales Discount) and check the 'Deposit' box under 'Receivables'. Now when you add a payment, you enter the amount the customer paid you in your bank / cash account and add another payment with the difference using the 'Sales Discount' account.

You can apply the same for vendor discounts, but you add a 'Purchase Discount' account in the income section instead and flag it as a payment account.

Dieter Simader      http://www.sql-ledger.org      (780) 472-8161
DWS Systems Inc.    Accounting Software            Fax: 478-5281
=========== On a clear disk you can seek forever ===========

On Sun, 6 Jan 2002, alta wrote:
> What is the best way (simplest and quickest) to adjust the
> monetary value in an invoice form?
>
> For example, suppose a customer takes the 2% early-pay discount, or
> they under-pay a small amount. How do I easily adjust the sql-ledger
> books to show the invoice fully paid?
>
> This happens often, so I am looking for an efficient way with
> sql-ledger. In my previous system I had an invoice field for
> adjustments, which was simple, clear, and fast.
>
> In asking this question, I have a hunch my novice bookkeeping skills
> are showing through. Your suggestions appreciated.
>
> Thanks ... Reed
|
From: alta <al...@al...> - 2002-01-06 22:06:47
|
What is the best way (simplest and quickest) to adjust the monetary value in an invoice form?

For example, suppose a customer takes the 2% early-pay discount, or they under-pay a small amount. How do I easily adjust the sql-ledger books to show the invoice fully paid?

This happens often, so I am looking for an efficient way with sql-ledger. In my previous system I had an invoice field for adjustments, which was simple, clear, and fast.

In asking this question, I have a hunch my novice bookkeeping skills are showing through. Your suggestions appreciated.

Thanks ... Reed

--
Reed White - ALTA RESEARCH - www.alta-research.com
Phone: 877-360-2582 - Email: al...@al...
|
From: Wes W. <ufo...@ea...> - 2002-01-06 17:11:54
|
Could you please post these commands?

Thanks,
Wes

> If anyone asks, I'll post the sql commands to add the indexes that'll
> work on a 1.6.x system and a "off the top of my head" for a 1.8.x
> system ... my indices were really ad hoc (fast and sloppy) and I
> hadn't taken the time to really analyze and profile each of the
> queries to get the optimal mix.
|
From: Ho-Sheng H. <qa...@ne...> - 2002-01-06 14:38:04
|
On Sun, Jan 06, 2002 at 11:26:33AM +0100, Roland Stoker wrote:
> Ehhh guys????
>
> Indexes make search-queries go from O(n) to O(log n)
> Read O as order
> That's about the biggest performance difference you can get in a database!!!!!

Hey, I remember comp sci 101. I think. Maybe I should put that tidbit into a database ;-)

> Say you have a million records in a table, the search query goes from 1
> million compares to 20 or less depending on the index-type of the db.

I've read through the PostgreSQL docs, and PG has three index types. Btree seems to be the most common. It doesn't matter much though, as you said, since these are "orders": 1 million compares on a 1.1 GHz server is still going to be slow compared to 20 or less on, say, a 200 MHz server.

> That's some kick-ass speed difference.....

Hell yeah!

> I assumed they were in place.

Well, whether they were or not, they definitely are here to stay for me. I mean, we were using this sucker for updating inventory and adding in barcodes. Updating a single item required the computer to search-partnumber, click-to-edit, update, save, callback to search-partnumber -- each required a hit on the server. Each one of those operations _used_ to take about 2 or 3 seconds (13,000 part numbers). That adds up to at least 12 to 18 seconds per part in that procedure. Now it's taking less than a second.

So before, the computer was taking longer than the humans... which led to some other issues, like employees slacking off (little incentive to be more efficient, since the computer would still hang you up) or getting frustrated. I watched how four of the employees worked: they would click and THEN wait until the browser finished drawing the screen. Or they would start clicking on the submit buttons more often, not understanding that clicking on submit would just reset the search time and thereby take even longer!

BTW: there are lots of little things you can do to help someone who has to enter in volumes of info per day ... things as simple as using Javascript to move the keyboard focus to the most commonly filled field help tremendously. I stole ... um, imitated the code from google.com. We can now just take the barcode scanner and shoot each item, boom boom boom, and the retail customer can pay us faster :-)

I'm not saying all of this about the users to sneer at them or to talk about the mythical "average dumb (frustrated) user". A business builds on OPT -- Other People's Time -- so the owner can play golf and get richer. (Well, at least I do. Heh.) It's easier to build and tweak interface systems that automate as much of the stupid and silly things as possible than to teach someone how to learn and adapt and think on their feet. The latter quickly leads to politically incorrect ideas.

The old system this business was using had its own problems, but the response time between keypresses and screen updates was probably less than 0.5s ... much less than that. Trained personnel can fly through that, knowing all the shortcuts and keystrokes, etc. -- one reason some of us use UNIX and CLIs rather than GUIs. (The downside is that the turnover rate for people here is too high -- a newbie staff member would come in and get overwhelmed with the old POS interface. OPT not effectively used.) The owner, though, LIKES SQL-Ledger, since it integrates all the data about cash coming in and cash going out (and therefore my income), and the software works on his existing Linux server. Cool, eh?

If anyone asks, I'll post the sql commands to add the indexes that'll work on a 1.6.x system and a "off the top of my head" for a 1.8.x system ... my indices were really ad hoc (fast and sloppy) and I hadn't taken the time to really analyze and profile each of the queries to get the optimal mix.

For the SL developers here working on database apps for the first time who want to find out more about how PostgreSQL uses indexes, check out the interactive documentation at http://www.postgresql.org or http://www3.us.postgresql.org (mirror). Search under "index" or "indices" and look for the document with the title "Indices".

-Qaexl-
|
From: Roland S. <sql...@st...> - 2002-01-06 10:14:58
|
Ehhh guys????

Indexes make search-queries go from O(n) to O(log n). Read O as order. That's about the biggest performance difference you can get in a database!!!!!

Say you have a million records in a table: the search query goes from 1 million compares to 20 or less, depending on the index-type of the db. That's some kick-ass speed difference..... I assumed they were in place.

Roland

On Sunday 6 January 2002 10:31, you wrote:
> Hi
>
> I noticed that my install of PostgreSQL didn't add indexes to the
> tables. I have heard that those indexes were automatically created...
> but I listed them with \dt on the psql tool, and there was only one
> index, the one for the chart table.
>
> I ran an informal test on an "end-of-day" module using data from this
> client. There were 13 transactions. The stock RedHat PostgreSQL 7.1
> install took approx 8 to 9 seconds for it to load. I followed the
> optimization hints on PHPBuilder and increased the shared buffer and
> sort memory, which shaved it down to 7 seconds. (Shared memory from
> 1 MB to 50 MB. There were postings there that say all that really does
> is force the kernel to use less memory for disk cache. I do not know.)
>
> Then I added the indexes: unique indexes for ar, ap, parts, customer,
> vendor, and various integer and text fields in those and acc_trans,
> the taxes, etc. etc., including one that indexes the lower-cased
> version of the customer/vendor/partnumber names.
>
> That same app went from 7 seconds to 0.7 seconds the _first_ time. The
> subsequent queries were 0.2 seconds.
>
> This was all done on a PII 200 with only 64 MB of RAM, running Apache
> and Samba, during off-business hours.
>
> A difference of 10 seconds down to less than 1 second is pretty
> significant, especially considering that the people up front doing the
> point-of-sale need to access the data quickly. The bookkeeper working
> here gets frustrated at the "slowness" of the "new system", which
> discouraged the adoption. I'll see about the psychological impact
> tomorrow ... I figure that the extra time it takes to do inserts is
> less noticeable than the improvements on select queries.
>
> Another informal and non-rigorous test used about 1 year worth of
> backdata in accounts receivable to do the Aging function. Before
> indexes = timing out the browser (the operation could not be
> completed). Now it's completed in about 20 seconds.
>
> What I am wondering is if anyone has run into performance problems and
> has fixed/optimized/tuned/tweaked and is willing to share.
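Roland's "20 or less" figure is simply the base-2 logarithm of the table size: a btree lookup touches on the order of log2(n) entries. As a quick check, on any PostgreSQL whose log() accepts a base argument:

select ceil(log(2, 1000000.0));  -- 20 compares, give or take, for a million-row btree lookup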
|
From: Richard L. <ri...@th...> - 2002-01-06 09:43:09
|
On Sunday 06 January 2002 17:00, bob Bevins wrote:
> How can one remove themselves from this mailing list? I've looked on
> the site and it didn't explain it.
>
> bob

When you joined you must have got a welcome email including the following (with your email address at the end):

  If you ever want to unsubscribe or change your options (eg, switch to
  or from digest mode, change your password, etc.), visit your
  subscription page at:
  https://lists.sourceforge.net/lists/options/sql-ledger-users/you%40yourdomain

It's always worth keeping a copy of those welcome messages ;-)

--
richard
|
From: Ho-Sheng H. <qa...@ne...> - 2002-01-06 09:31:49
|
Hi,

I noticed that my install of PostgreSQL didn't add indexes to the tables. I have heard that those indexes were automatically created... but I listed them with \dt on the psql tool, and there was only one index, the one for the chart table.

I ran an informal test on an "end-of-day" module using data from this client. There were 13 transactions. The stock RedHat PostgreSQL 7.1 install took approx 8 to 9 seconds for it to load. I followed the optimization hints on PHPBuilder and increased the shared buffer and sort memory, which shaved it down to 7 seconds. (Shared memory from 1 MB to 50 MB. There were postings there that say all that really does is force the kernel to use less memory for disk cache. I do not know.)

Then I added the indexes: unique indexes for ar, ap, parts, customer, vendor, and various integer and text fields in those and acc_trans, the taxes, etc. etc., including one that indexes the lower-cased version of the customer/vendor/partnumber names.

That same app went from 7 seconds to 0.7 seconds the _first_ time. The subsequent queries were 0.2 seconds.

This was all done on a PII 200 with only 64 MB of RAM, running Apache and Samba, during off-business hours.

A difference of 10 seconds down to less than 1 second is pretty significant, especially considering that the people up front doing the point-of-sale need to access the data quickly. The bookkeeper working here gets frustrated at the "slowness" of the "new system", which discouraged the adoption. I'll see about the psychological impact tomorrow ... I figure that the extra time it takes to do inserts is less noticeable than the improvements on select queries.

Another informal and non-rigorous test used about 1 year worth of backdata in accounts receivable to do the Aging function. Before indexes = timing out the browser (the operation could not be completed). Now it's completed in about 20 seconds.

What I am wondering is if anyone has run into performance problems and has fixed/optimized/tuned/tweaked and is willing to share.

--
-Qaexl-
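To repeat the check described above -- whether a stock install ships any indexes at all -- psql's \di lists them; the equivalent from plain SQL, on PostgreSQL versions that provide the pg_indexes view, is:

select tablename, indexname from pg_indexes order by tablename;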
|
From: bob B. <bo...@rb...> - 2002-01-06 04:59:22
|
How can one remove themselves from this mailing list? I've looked on the site and it didn't explain it.

bob
|
From: Dieter S. <dsi...@sq...> - 2002-01-05 18:49:01
|
The comma was not translated into an underscore when locales.pl builds the function strings. Add the comma in lines 93 & 94 so it reads like this

$english_sub =~ s/( |-|,)/_/g;
$translated_sub =~ s/( |-|,)/_/g;

or download 1.8.1

Dieter Simader      http://www.sql-ledger.org      (780) 472-8161
DWS Systems Inc.    Accounting Software            Fax: 478-5281
=========== On a clear disk you can seek forever ===========

On Sat, 5 Jan 2002, Philip Reetz wrote:
> Hello,
>
> I'm having a problem. I'm using the German translations for sql-ledger.
> On a few actions (deleting an order, deleting an invoice, etc.) I get
> the following error message:
>
> Error!
>
> Edit locale/de/ap|ar|gl and add the variable (ja__eintragung_löschen) in
> the $self{subs} section!
>
> I did that. Originally the entry reads (ja,_eintragung_löschen).
> But after I run locales.pl the modifications vanish and I still have the
> same error message.
>
> Any idea?
>
> Thanks.
>
> Bye,
> Philip
>
> --
> LINET Services
> Bunkus, Geisler und Reetz GbR
>
> Rebenring 33          Tel.: 0531-280 191 71
> 38106 Braunschweig    Fax.: 0531-280 191 72
>
> http://www.linet-services.de
> mailto:in...@li...
|
From: Philip R. <p....@li...> - 2002-01-05 14:58:19
|
Hello,

I'm having a problem. I'm using the German translations for sql-ledger. On a few actions (deleting an order, deleting an invoice, etc.) I get the following error message:

Error!

Edit locale/de/ap|ar|gl and add the variable (ja__eintragung_löschen) in the $self{subs} section!

I did that. Originally the entry reads (ja,_eintragung_löschen). But after I run locales.pl the modifications vanish and I still have the same error message.

Any idea?

Thanks.

Bye,
Philip

--
LINET Services
Bunkus, Geisler und Reetz GbR

Rebenring 33          Tel.: 0531-280 191 71
38106 Braunschweig    Fax.: 0531-280 191 72

http://www.linet-services.de
mailto:in...@li...
|
From: <ga...@si...> - 2002-01-04 16:51:08
|
Dear All,

I am looking for the ER (entity-relationship) diagram of sql-ledger 1.8.0. Can anyone help me?

Best regards,
gab

==================================================================
Sina free e-mail  http://sinamail.sina.com.hk
|
From: Dieter S. <dsi...@sq...> - 2002-01-04 16:11:05
|
Hi Peter!

For multiple shipping addresses you'd have to add another table (shipto) and link it to the 'ar' table.

Dieter Simader      http://www.sql-ledger.org      (780) 472-8161
DWS Systems Inc.    Accounting Software            Fax: 478-5281
=========== On a clear disk you can seek forever ===========

On Fri, 4 Jan 2002, ma...@ds... wrote:
> Hi Peter,
>
> Why not create all retail branches in the client table, with billing
> info to head office and shipping info to the retail branch? I did not
> check if SL offers 2 addresses for the client, but it would be an easy
> fix to simply add a 2nd address in the client table.
>
> Here is an example:
>
> Tim Hortons - Head Office  |
> 1 Central Avenue           |  Billing Address
> Suite 5000                 |
> New York, NY               |
> 123456                     |
>
> Tim Hortons - Plattsburg   |
> 2 Main Road                |  Shipping Address
> Plattsburg, New York       |
> 44455555                   |
>
> ...
>
> Good luck!
> Sergio
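A hedged sketch of the table Dieter describes: only the table's name and its link to the 'ar' table come from his reply; the column names are assumptions.

create table shipto (
    trans_id integer,    -- links the address to a transaction (ar.id)
    shiptoname text,
    shiptoaddress text
);
create index shipto_trans_id_idx on shipto (trans_id);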
|
From: <ma...@ds...> - 2002-01-04 15:45:27
|
Hi Peter,

Why not create all retail branches in the client table, with billing info to head office and shipping info to the retail branch? I did not check if SL offers 2 addresses for the client, but it would be an easy fix to simply add a 2nd address in the client table.

Here is an example:

Tim Hortons - Head Office  |
1 Central Avenue           |  Billing Address
Suite 5000                 |
New York, NY               |
123456                     |

Tim Hortons - Plattsburg   |
2 Main Road                |  Shipping Address
Plattsburg, New York       |
44455555                   |

...

Good luck!
Sergio
|
From: Peter D. <mer...@ma...> - 2002-01-04 10:19:17
|
Hi,

Personally I have no problem with customers and vendors having been separated. I'm in the distribution business, and the chance that a person or firm is both customer and vendor at the same time is very small.

But I've got a different problem: I would like to have multiple entries for the delivery address. For example, if I've got a customer who owns a chain of shops, I have to invoice the head office and make individual packing lists for every shop to make multiple deliveries.

Peter D.
|
From: Steve D. <sd...@sw...> - 2002-01-04 02:24:23
|
Hi Oscar.

Oscar Buijten wrote:
> Hi Steve,
>
> I was just wondering how you plan to calculate the commissions.
> My view (but I guess that there are many possibilities) is that it would
> be based on either:
> - a percentage of the turnover generated by the employee per month
> - a percentage of the profit margin ( (sell - cost) * % ) generated by
> the employee per month

This is going to vary greatly, so I'm trying to achieve flexibility. You would set up the various commission rates per employee, e.g. hardware sales - 10%, consulting revenue - 20%; then the pay period entry form will allow entry of time stats like 8 hours sick time, the hardware sales for the employee, and their consulting revenues. The next employee in the list could be a salaried accountant, who wouldn't have any commissions. Calculations will be based on what applies for an individual employee and the pay period data entered. (A sketch of this kind of setup follows this message.)

Reporting options are infinite. If it's in the db, it can be extracted, filtered, and sorted any way that is necessary.

> Are you looking to do something like this (tracking the sales people) or
> do you have something else in mind?
>
> Thanks for letting me know.
>
> Regards,
>
> Oscar
>
> Steve Doerr wrote:
>
> > Hi. I have a copy of the payroll features at (please contact me for a
> > tarball):
> >
> > http://business.dynodns.net/cgi-bin/sql-ledger-alpha/sql-ledger/login.pl
> >
> > I haven't updated the menu for 1.8 yet, and it needs the following:
> >
> > 1. Search/edit code
> > 2. uid fixed in deduction setup
> > 3. fix adding more than one deduction per employee
> > 4. change commission/overtime fields to drop down (same logic as
> > deductions)
> > 5. allow multiple commission/overtime rates per employee (same logic
> > as deductions)
> > 6. pay period entry screen and post routine
> > 7. a little synching w/ 1.8 (stylesheets, auth menu, etc.)
> >
> > Dieter has code elsewhere in SL that can be adapted to do most
> > everything that needs to be done. I'm going to finish the rest of the
> > screens and start stubbing in functions over the next few weeks.
> >
> > This is a big extension of SL. It will probably end up having 1/4 to
> > 1/3 as much code as the rest of SL. I'm a professional accountant and
> > an amateur programmer, so I've not been able to solve my coding
> > problems as quickly as I would like.
> >
> > If anyone interested in payroll function thinks they could work with
> > what I've done, please let me know.
> >
> > Steve
> >
> > Oscar Buijten wrote:
> >
> >> Hi!
> >>
> >> I was just wondering if the payroll development is progressing.
> >> Any news on this one?
> >>
> >> Thanks,
> >>
> >> Oscar
>
> --
> Oscar Buijten
>
> Tel: +33.4.67.57.97.45
> Fax: +33.4.67.57.97.46
> GSM: +33.6.20.84.15.22
>
> Email: os...@el...
> Web: www.elbie.com
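The sketch referenced in Steve's reply above. Every table and column name here is hypothetical -- nothing like this ships with SQL-Ledger 1.8 -- it only illustrates per-employee, per-revenue-type commission rates; a salaried employee with no rate rows simply drops out of the join.

create table commission_rate (
    employee_id integer,
    revenue_type text,      -- e.g. 'hardware', 'consulting'
    rate numeric(5,4)       -- 0.1000 = 10%
);
create table period_revenue (
    employee_id integer,
    revenue_type text,
    amount numeric(12,2)    -- revenue booked in the pay period
);
-- commission owed per employee for the period:
select r.employee_id, sum(r.amount * c.rate) as commission
from period_revenue r
join commission_rate c on c.employee_id = r.employee_id
                      and c.revenue_type = r.revenue_type
group by r.employee_id;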
|
From: alta <al...@al...> - 2002-01-04 00:37:18
|
I have seen what appears to be a lockup, though it eventually clears. It occurs with the Konqueror browser and a large number of customers in the database. The CPU is totally consumed for a while. This happens in the Invoice and Invoice Print forms with a large number of customers. Apparently, Konqueror cannot gracefully handle a drop-down list with 4000 entries.

... Reed

On Thursday 03 January 2002 15:32, you wrote:
> First off, thanks & congratulations to those who got 1.8 out of the
> door on time & to spec. I've been tracking the discussion on the
> database schema and wholeheartedly agree with the point that the
> superb execution of SL far outweighs any question marks of the
> relational purity of the database. Anyway ....
>
> A question and a problem.
>
> 1. We take an order for a recurring service against which we invoice
> monthly. Therefore I'm really pleased to see the Sales Order
> functionality. I would expect to be able to transcribe information
> from the order to the invoice automatically, but this doesn't seem to
> be the case. Any pointers on how I could do this so I don't have to
> retype the item lines each month?
>
> 2. My problem is that on two occasions my system has completely locked
> running SQL-Ledger 1.8. It's so locked that it needed a power cycle to
> reboot. I'm trying to recreate the problem with all monitoring turned
> on, and of course I'll report any findings. Has anybody ever seen
> anything similar?
>
> Regards,
> Roy
>
> Roy Smith
>
> Mobile: +44 7785 298738
> Fax: +44 870 136 9579
> e-mail: xy...@bt...

--
Reed White - ALTA RESEARCH - www.alta-research.com
Phone: 877-360-2582 - Email: al...@al...
|
From: Roy S. <xy...@bt...> - 2002-01-03 23:37:29
|
First off, thanks & congratulations to those who got 1.8 out of the door on
time & to spec. I've been tracking the discussion on the database schema and
wholeheartedly agree with the point that the superb execution of SL far
outweighs any question marks of the relational purity of the database.
Anyway ....
A question and a problem.
1. We take an order for a recurring service against which we invoice monthly.
Therefore I'm really pleased to see the Sales Order functionality. I would
expect to be able to transcribe information from the order to the invoice
automatically, but this doesn't seem to be the case. Any pointers on how I
could do this so I don't have to retype the item lines each month?
2. My problem is that on two occasions my system has completely locked
running SQL-Ledger 1.8. It's so locked that it needed a power cycle to
reboot. I'm trying to recreate the problem with all monitoring turned on,
and of course I'll report any findings. Has anybody ever seen anything
similar?
Regards,
Roy
Roy Smith
Mobile: +44 7785 298738
Fax: +44 870 136 9579
e-mail: xy...@bt...
|
|
From: Dieter S. <dsi...@sq...> - 2002-01-03 22:29:05
|
The default customer is the last customer you used when you a) added a new AR transaction or invoice, or b) edited a transaction or invoice. The same principle applies to order entry, vendor invoices and AP transactions.

I still have to dig up the 'POS' code and plug it in. It's probably going to be in one of the patch releases.

Dieter Simader      http://www.sql-ledger.org      (780) 472-8161
DWS Systems Inc.    Accounting Software            Fax: 478-5281
=========== On a clear disk you can seek forever ===========

On Thu, 3 Jan 2002, alta wrote:
> Eric ...
>
> Re: Your question about 5000 or more customers.
>
> My customer DB has 4000 customers. My observations follow:
>
> - Speed of the DB is adequate, but I had problems with browsers.
>
> - The pull-down list on the invoice form is so long that Konqueror
> misbehaves. Transitioning from the Invoice screen to the Print
> Invoice screen hangs the CPU for 45 seconds! So, I must use Mozilla.
>
> - Finding the customer in the pull-down list is a chore. I would
> rather have no pull-down list, search for the customer in the
> customer search form, then go to the invoice form and have the
> correct customer waiting for me. I modified Version 1.6 to do this,
> but have not yet gotten around to modifying Version 1.8. The change
> allowed Konqueror to work quickly, and was user-friendly.
>
> - I also modified Version 1.6 so that the new Invoice would display
> the last customer created, unless the Invoice screen was entered via
> the customer search screen. In the latter case, the Invoice screen
> displayed the last customer looked at, which I feel is an important
> usability concept.
>
> Much of the credit for these mods goes to Dieter, who was helpful in
> suggesting ways to do it.
>
> ... Reed
|
From: alta <al...@al...> - 2002-01-03 21:59:04
|
Eric ...

Re: Your question about 5000 or more customers.

My customer DB has 4000 customers. My observations follow:

- Speed of the DB is adequate, but I had problems with browsers.

- The pull-down list on the invoice form is so long that Konqueror misbehaves. Transitioning from the Invoice screen to the Print Invoice screen hangs the CPU for 45 seconds! So, I must use Mozilla.

- Finding the customer in the pull-down list is a chore. I would rather have no pull-down list, search for the customer in the customer search form, then go to the invoice form and have the correct customer waiting for me. I modified Version 1.6 to do this, but have not yet gotten around to modifying Version 1.8. The change allowed Konqueror to work quickly, and was user-friendly.

- I also modified Version 1.6 so that the new Invoice would display the last customer created, unless the Invoice screen was entered via the customer search screen. In the latter case, the Invoice screen displayed the last customer looked at, which I feel is an important usability concept.

Much of the credit for these mods goes to Dieter, who was helpful in suggesting ways to do it.

... Reed