Re: [htmltmpl] Re: eWeek Reviews Bricolage
From: Mathew R. <mat...@re...> - 2004-08-12 05:14:35
> > you are kidding right?
>
> Not at all. That doesn't mean I expect to convince anyone though.
> This is the kind of wisdom that usually only comes from experience!

I'll ignore that...

> > ACID capabilities and all that...
> > proper locking semantics...
> > long history with native support for transactions...
> > proper SQL transaction semantics...
>
> Over-rated. Sure, if I was building banking software I might have a
> different opinion, but I'm not. A simple database with few critical
> bugs protects my data better than your fancy ACID transactions! :-)
> How can I be so sure? I've worked with big complex systems running on
> both databases. I've watched Bricolage completely destroy user data
> despite using PostgreSQL's transaction support. In contrast, Krang
> hasn't lost data yet, as far as I know. A few careful locks in the
> right places seem to be just what the doctor ordered for a moderately
> complex content management system. And if the catastrophic happens,
> like a system crash, that's what nightly backups are for. Nightly
> backups might not be good enough for all applications, but they're
> good enough for a content-management system.

> > As you said, people can make spaghetti out of anything - how this
> > makes MySQL 'better', I don't understand.
>
> Experience. Wrestle with a database strewn with triggers,
> constraints, abstract types and functions sometime.

umm - yes, I do this already... the cluster I am currently involved in building is meant to store 100+ billion records... see below...

> You'll be begging to be back in the moderate mess of a badly designed
> MySQL DB. There's less there, so there's just less to do badly. It may
> not be an empirical fact, but I didn't present it as such!

My team is currently building a "parallel query cluster" (eg "PARALLEL hash(id) SELECT id,value FROM row_data WHERE some_clause"), where the part we are building is the 'PARALLEL hash(...)' bit.
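To make the 'PARALLEL hash(...)' idea concrete, here is a minimal sketch of hash-based query fan-out. Everything in it is hypothetical illustration: `node_for`, `parallel_select`, and the in-memory `nodes` dict stand in for what, in the real cluster, would be SQL dispatched to remote database instances.

```python
# Hypothetical sketch of a "PARALLEL hash(id)" layer: rows are routed to a
# node by hashing their id, and a query is fanned out to every node and the
# partial results merged. Nodes are simulated as in-memory dicts; a real
# cluster would send the rewritten SELECT to remote PostgreSQL servers.
from collections import defaultdict

N_NODES = 4  # stand-in for the ~100 nodes described in the post

def node_for(row_id: int) -> int:
    """Pick the node that owns a row - the hash(id) part of PARALLEL hash(id)."""
    return hash(row_id) % N_NODES

# Simulated per-node storage: node index -> {id: value}
nodes = defaultdict(dict)

def insert(row_id: int, value: str) -> None:
    """Store a row on whichever node its id hashes to."""
    nodes[node_for(row_id)][row_id] = value

def parallel_select(predicate) -> list:
    """Run the WHERE clause on every node and merge the partial results."""
    results = []
    for shard in nodes.values():
        results.extend((i, v) for i, v in shard.items() if predicate(i, v))
    return sorted(results)

for i in range(10):
    insert(i, f"value-{i}")

# Roughly: SELECT id, value FROM row_data WHERE id < 3
print(parallel_select(lambda i, v: i < 3))
# → [(0, 'value-0'), (1, 'value-1'), (2, 'value-2')]
```

The point of the sketch is only the routing shape: inserts land on exactly one node, while a SELECT must visit all of them and merge, which is why the hard part of such a system is the fan-out layer rather than the per-node storage.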
The idea is that you have a cluster of, say, 100 nodes, with the usual schema features such as foreign keys, etc. Then you query it from your data processing engine...

...given that I have been using databases since I left uni at 24 (I am 31 now), I'd say that I have enough experience...

We prefer Open Source tools (which is why we use H::T), high reliability and high performance -> we chose PostgreSQL over Oracle; MySQL wasn't even a consideration...

In any case, I personally think that MySQL is good enough for most tasks; and since this is a little off-thread for H::T, I'll now shut my big fat trap...

Mathew