LTSP Server Sizing
Figuring out how large to make the server is a very subjective issue that involves lots of variables, and there's no perfect formula to get it right every time.
But, we do have some general rules and "best practices" for at least getting reasonably close.
RAM is the most important consideration
When determining how large to make the server, the single most important item is the amount of memory you put in it. By default, all of the applications run on the server, using the server's CPU(s) and memory. So, it is critical that you have enough memory for all of the applications that all of the users might be running at the same time.
If you don't have enough memory, the server will start to swap. This is a bad thing: it doesn't matter how many processors you have, or how fast they are, because once the server starts swapping, even the fastest CPU in the world won't help. Insufficient memory also makes programs unnecessarily slow to load, because Linux caches files in RAM; loading time for a large application like OpenOffice can be cut in half when its files are already cached. If you are running a web server (Apache, MySQL and PHP), or other services, you will need more RAM still. Note that some motherboards have two to eight DIMM sockets per CPU, but may not be able to run at full speed with more than four populated. Read the fine manual.
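One quick way to tell whether a server is already dipping into swap is to look at /proc/meminfo. A minimal sketch (the function takes the file's text so it can be tested anywhere; SwapTotal and SwapFree are the standard Linux field names):

```python
# Sketch: how much swap is in use, parsed from /proc/meminfo-style text.
# SwapTotal and SwapFree are standard Linux fields; values are in kB.

def swap_used_kb(meminfo_text):
    """Return kilobytes of swap currently in use."""
    fields = {}
    for line in meminfo_text.splitlines():
        if ":" in line:
            key, rest = line.split(":", 1)
            fields[key.strip()] = int(rest.split()[0])
    return fields["SwapTotal"] - fields["SwapFree"]
```

On a running server, pass it `open("/proc/meminfo").read()`. A value that stays well above zero while users are working is a sign the box needs more RAM.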
So, how much memory do you need?
The formula that we've been using is this:
Total Ram = 256MB + ( 50MB * number_of_terminals )
The initial 256MB of RAM is for the server itself, without any terminals. Then, plan on around 50MB per terminal. For extra services like web and database, raise the base figure to 512MB or 1024MB, or run those services on a separate server.
So, if you want to host 10 LTSP terminals on your server, the formula works out like this:
Total Ram = 256 + ( 50 * 10 ) = 256 + 500 = 756MB
That's a reasonable estimate of the amount of memory you should have in the server. BUT, before we can say definitively that 756MB will work for you, you really need to consider the applications that the users will be running. If they all want to run Firefox, OpenOffice, Evolution, and a full-featured desktop environment like Gnome or KDE, and have all those apps running at the same time, then more like 70MB per terminal would be appropriate.
Then again, if the environment is tightly controlled, and the user is only allowed to run specific applications that don't require much RAM, and they are using a light-weight window manager such as IceWM or Xfce, then perhaps you could get by with much less. Maybe 30MB per terminal would be enough.
You also need to consider usage patterns. If you are setting up a system for a school, where 30 users will all be running the same applications at the same time, you might need a bit more RAM. But, you might find that in a typical office setting, where at any given time, some people are in meetings, others are on the phone, and only about 1/3 of the terminals are actually being used, then you'll find that less RAM is required.
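The sizing rule above can be put into a few lines of code. The 30/50/70 MB-per-terminal profiles come straight from the figures in the text; the function and profile names are just illustrative:

```python
# Sketch of the sizing rule: 256MB base plus a per-terminal figure that
# depends on how heavy the desktop workload is (numbers from the text).

PER_TERMINAL_MB = {"light": 30, "typical": 50, "heavy": 70}

def total_ram_mb(terminals, profile="typical", base_mb=256):
    """Estimated server RAM in MB for the given number of terminals."""
    return base_mb + PER_TERMINAL_MB[profile] * terminals

# The worked example from the text: 10 typical terminals.
print(total_ram_mb(10))            # 756
# A 30-seat lab where everyone runs a full Gnome/KDE desktop at once.
print(total_ram_mb(30, "heavy"))   # 2356
```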
Which type of CPU should you use, and how many?
SMP is good
Multiple processors are a very good idea in an LTSP environment. With all users sharing a server for their applications, a single user running an intense program that hogs the CPU can have a very negative impact on the other users' sessions. The Linux kernel does a great job of scheduling the tasks, but a single process can still bog down a system. Having more than one CPU can definitely help.
Multiple CPUs vs. Dual Core vs. Hyperthreading
If you can go with multiple real CPUs, then go for it. The Dual Core CPUs are still pretty new, and I don't have any experience with them yet, but the performance should be close to that of multiple separate CPUs.
If you can't go with multiple CPUs or Dual Core CPUs, then Hyperthreading is the next best thing. It's not quite the same as multiple CPUs, as only certain parts of the CPU are duplicated. In my experience, I've seen about a 1.3x improvement between a single CPU and a Hyperthreaded CPU of the same clock speed.
A rule of thumb for how much CPU power is needed: on a typical system with a 3000MHz CPU, a single user will use from one to three percent of the CPU, so, as long as the server isn't swapping, one such CPU should be able to handle thirty users. If you have sixty users, use a dual-CPU or dual-core machine with an SMP kernel. There are Opteron motherboards with four sockets and the possibility of a second layer of four more, so with today's technology, one could estimate an upper limit of around four hundred eighty users on a single server. Considering redundancy and the prices of components, it may be cheaper and better to split the load across multiple servers well before that point...
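The rule of thumb above reduces to a one-line estimate. Using the worst-case figure of three percent of a 3GHz-class CPU per user (the function name is illustrative):

```python
# Rough CPU sizing from the rule of thumb in the text: a user takes
# roughly 1-3% of a 3000MHz CPU, so plan around the 3% worst case.
import math

def cpus_needed(users, pct_per_user=3.0):
    """3GHz-class CPUs needed, assuming each user takes pct_per_user percent."""
    return max(1, math.ceil(users * pct_per_user / 100.0))

print(cpus_needed(30))   # 1: fits on a single CPU
print(cpus_needed(60))   # 2: dual CPU or dual core
```

Remember this only holds when the server isn't swapping; RAM runs out long before CPU on most LTSP servers.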
32-bit CPUs vs. 64-bit
There are several new 64-bit CPUs available, and they can greatly increase the number of terminals you could put on one server. It's important to run a 64-bit version of Linux on those servers, to get the full benefit of the CPUs. Also, there are still many applications that aren't compiled for 64-bit yet, which can cause you some performance problems, so keep that in mind. Eventually, any piece of software you can think of will be built for 64-bit, so that won't be an issue.
Intel vs AMD
All I can say here is: pick the vendor you are most comfortable with. There is a see-saw battle between these vendors, now playing out in the courtroom, where AMD has charged Intel with unfair trade practices. AMD has lately had a lead with the Opteron and AMD64, but Intel is no slouch. The Opteron has an on-chip memory controller, so there are theoretical reasons why it is best for a multiprocessor environment. In the 32-bit market, AMD had a noticeable price advantage, but that disappeared when the AMD64 came out. Prices are dropping...
Which type of disks should you use?
Use the fastest disks you can afford, and definitely consider RAID, to reduce the chance that a failed drive might take down the whole system. Remember, if the server fails, it could be a whole lot of users that are sitting idle, while the server is being repaired.
SCSI drives and controllers are the gold standard for servers, and if you are planning more than about 20 terminals, then you should definitely consider nice fast 15k RPM drives connected to a SCSI RAID controller.
SATA is shaping up to be a very nice technology for low-priced drives. The performance is very good, and you can't beat the ease of cabling the drives inside the box.
My personal favorite setup is a pair of SATA drives connected to a 3ware RAID controller in a mirrored (RAID-1) configuration. You should be able to have 10-20 happy users on a system with this configuration. Beyond 20 users, consider having more drives, in a striped/mirrored (RAID-10) configuration.
The old ATA drives are OK for a small LTSP server with a couple of users. Use a couple of them with an ATA RAID controller, or use software RAID, to get better performance out of them.
A Note about RAID-1 performance
An anonymous poster had this to say about RAID-1 performance:
NOTE: RAID 1 significantly lowers performance on the server, since all the data written to the volume will be written twice (one time on each disk). I noticed this on a Debian Potato server, with the RAID system controlled by software. I can't tell if a hardware-controlled RAID would have the same performance decrease.
But, here's what I have to say about it:
The comment above about RAID-1 causing a performance issue on the server applies ONLY when using SOFTWARE RAID, and is NOT true of hardware RAID. Even then, software RAID-1 only marginally lowers performance, because the extra work happens only during WRITING. RAID-1 actually improves READ performance, because only one drive needs to be read, and the *md* driver is smart enough to read from the disk that can get to the data faster. That is, each disk has its own set of heads, and those heads are NOT kept in sync between the two drives, so at any given moment one drive may be able to reach the data faster than the other. And, if you think about it, systems tend to do a LOT more reading than writing, so I'd take better read performance at a slight cost in write performance any day. (Jim McQuillan - jam@Ltsp.org)
Consider using the noatime or nodiratime mount options on RAID-1, as without them every read effectively becomes a write. VagrantCascadian - 08 Dec 2008
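Those mount options go in the fourth field of /etc/fstab. A sketch of such a line; the device name, mount point, and filesystem type here are only examples:

```
# /etc/fstab - device, mount point and filesystem type are illustrative
/dev/md0    /    ext3    defaults,noatime,nodiratime    0    1
```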
How fast should the network be?
10Mbit/100Mbit/Gigabit per second
In an LTSP environment, each terminal connects to a hub or switch, and that is connected to the server. ALL of the network traffic therefore flows between the switch and the server, so you should put the fastest link possible on that connection.
You should consider using a gigabit connection between the server and the switch, and then use 100Mbit between the clients and the switch.
And you really should be using a switch instead of a hub. The performance of a switch is much better than that of a hub.
What about power supplies?
While your choice of power supply doesn't affect the speed of the server, you really should consider redundant power supplies. With multiple users depending on the server, a failed power supply can ruin your day, so having redundancy there can be money well spent. Considering that you and many others will be depending on the server, and that you will save a bundle using thin clients, use a better brand of power supply. The cheap beige cases often come with very cheap power supplies whose fans may fail within a few months.
Again, the information above is there to give you a rough idea of what you need to build a decent LTSP environment. Your mileage may vary. Also, you really need to consider what the term _Acceptable Performance_ means to you and your users. In most cases, if you can't afford the best/fastest hardware, it's still better to put something out there for the users. That is, even a slow system is better than no system.
Finally, if you have any information to add to this document, to help people better understand how to build the right size server, please feel free to edit this page and add your information.