User Limits & Scalability

2009-09-12
2013-04-23
  • Adam Peters

    Adam Peters - 2009-09-12

    Are there any user limits with once:radix?  What are some options for scalability?

  • onceradix

    onceradix - 2009-09-13

    Adam

    Firstly, I started out trying to write a brief answer but found it necessary to cover all the issues. I’ll try to be brief but you raise an important topic that needs a detailed response.

    You will find our definition of user limits far more conservative (and realistic) than what alternative technologies claim. Having x00 users connected to a server is NOT the same as having x00 users try to work that same server into an early grave! The latter is what we talk about when we say ‘concurrent users’. The numbers may not seem so impressive but they are more honest and reliable.

    We have a client with 45 staff in Perth, Western Australia and another 10 in Melbourne. That is roughly the same distance as San Diego to Boston. They run a 2 Mbps SHDSL connection into a very basic A$2,500 Dell server. They have a high level of transactions and get consistently good performance. They are not our largest site but their setup is quite typical of what you can expect without spending a lot of money.

    There are a number of issues to consider regarding scalability, including server, network and browser performance and the level of complexity of the end-user application. Ultimately, scalability can be measured in terms of the speed with which data is returned from the server then rendered on the browser, as well as the stability of the server when handling large numbers of users.

    **Server Performance**

    Let’s deal with server stability first: Vadzim Karacharski did a lot of work on a stability issue under load earlier this year. It turned out to be caused by the database connection pooler. He gathered a lot of data during that investigation and we now have a high level of confidence that up to 100 users on a typical server is feasible (assuming that the server has enough processing power and memory). We have never been able to get more than about 50 active users in live situations to hit the server concurrently, though we have simulated much larger numbers. Our idea of active testing is to hit the server hard with a range of intensive operations.

    Our simulated tests are intensive but lack the human feedback and some of the randomness of live testing. While simulations are a useful tool, there is nothing like the real thing! You can expect particularly high levels of activity in the sorts of applications that once:radix is used for. Also, in our experience, the level of complexity of the application has been inversely proportional to the number of users. No doubt that is not always the case, but where it is, it will certainly affect scalability.

    When we talk about concurrent users, we mean users actively hitting the server with high-level transactions (e.g. creating invoices, purchase orders, opening jobs, creating contact records, etc.), not simply being logged on. In practice, we find that the number of concurrent users peaks at about 25–50% of the total users logged on but often sits at only a few per cent.

    An organisation employing 200 staff would typically have about 50 users logged on, with occasional peak loads of 100. Of course, if you were running a call centre, for example, the number of staff logged on would be a lot higher. But then the types of transactions being processed would probably be less complex: setting up a pickup for a freight company is a simple transaction, so it would not generate a lot of server activity.

    Keeping everything in the one server (database and Tomcat): a multi-Quad-Core server properly configured with interlaced RAID drives is probably large enough to handle hundreds of users. But if it weren't, I'd start by moving JasperReports onto a separate server. It is highly Java-intensive, and we notice most CPU activity when it is compiling large reports. If you find the system is still slow after that, I'd add extra Tomcat deployments with Apache doing the load sharing.
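    As a rough sketch of that Apache load-sharing arrangement (the hostnames, ports and path here are hypothetical, not part of once:radix):

```apache
# Hypothetical httpd front end sharing load across two Tomcat deployments.
# Requires mod_proxy, mod_proxy_ajp and mod_proxy_balancer to be loaded.
<Proxy "balancer://radix-cluster">
    BalancerMember "ajp://tomcat1.example.com:8009" route=node1
    BalancerMember "ajp://tomcat2.example.com:8009" route=node2
    # Keep each user's session pinned to one Tomcat instance.
    ProxySet stickysession=JSESSIONID
</Proxy>
ProxyPass        "/radix" "balancer://radix-cluster"
ProxyPassReverse "/radix" "balancer://radix-cluster"
```

    The `route` values would need matching `jvmRoute` settings in each Tomcat's server.xml so that sticky sessions work.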

    Server-side loads are split between PostgreSQL and Java. It takes a lot to stretch PostgreSQL, but if you find the database is consuming too much CPU time, one option is to install it on a separate server. I have no hard data, but I expect that a grunty server running PostgreSQL could comfortably handle 1,000 concurrent users. After that, there are several cluster options available. In databases where operations are predominantly reads (e.g. search engines), master-slave clustering is feasible. Typical once:radix applications, however, involve a high level of read/create/write/delete transactions. There are conflicting opinions about the feasibility of multi-master clustering, and we've never needed to make it work, but it is a strategy that deserves some investigation.
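    For a sense of what moving the database onto its own box involves, the pieces look roughly like this (hostnames, addresses and database names are illustrative; check your own once:radix installation for the real connection settings):

```
# postgresql.conf on the dedicated database server:
listen_addresses = '*'        # accept connections from the Tomcat host

# pg_hba.conf: allow the application server to reach the database
host  radix_db  radix_user  192.168.1.10/32  md5

# then point the application's JDBC URL at the new host, e.g.
# jdbc:postgresql://db.example.com:5432/radix_db
```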

    You may want customers to access the system via the owebAPI. That's fine. Connect your website to it and you can add a lot more users as the transaction rate tends to be a lot lower.

    You may want to run multiple owner organisations on the same database to separate operations or you may be able to spread the load by having entirely different databases for different divisions within the organisation.

    And you can use the owebAPI to create connections between systems if there is data you want to share, or you could look at using JBoss for that purpose. In short, you have plenty of options to grow on the server side.

    **Network Performance**

    There is not a lot that can be said about this: the faster the network and the lower the latency, the better the result. We have done a lot to minimize the amount of data moving between client and server, so the network doesn't have a big impact on overall performance. I live in country Victoria and have to put up with a service so slow I'm embarrassed to admit what it is, yet I connect to systems in most parts of the world from here and performance is quite acceptable. Our Prime Minister has promised to bring our broadband services up to the same standard that Canada had 10 years ago. I can't wait!

    **Browser Performance**

    once:radix fetches a maximum of 10 records at a time and caches them client-side, so the speed with which the client can render the data and process scripts bound to onLoad and onShow events is a critical factor in determining the user experience. As indicated above, JasperReports will show signs of stressing the server long before oCLI does.
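    The fetch-and-cache behaviour can be sketched in ordinary JavaScript (this is an illustration of the idea, not once:radix source; `fakeFetch` stands in for the real server call):

```javascript
// Sketch of fetch-by-page caching: the client asks the server only for
// 10-record pages it has not seen, and re-renders cached pages with no
// round trip. (Illustrative only - not once:radix code.)
const PAGE_SIZE = 10;

function makePagedCache(fetchPage) {
  const cache = new Map(); // page number -> array of records
  return function getPage(page) {
    if (!cache.has(page)) {
      cache.set(page, fetchPage(page)); // server round trip only on a miss
    }
    return cache.get(page); // cached pages render instantly
  };
}

// Stand-in for the server call: returns record ids for one page.
let roundTrips = 0;
function fakeFetch(page) {
  roundTrips++;
  return Array.from({ length: PAGE_SIZE }, (_, i) => page * PAGE_SIZE + i);
}

const getPage = makePagedCache(fakeFetch);
getPage(0);              // first view: hits the "server"
getPage(0);              // scrolling back: served from cache
console.log(roundTrips); // 1
```

    The point is that scrolling back through already-fetched records costs no network time at all, which is why rendering speed dominates the user experience.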

    We began development of once:radix before Firefox existed; it has come a long way since it moved on from Firebird! At Firefox 2, we tried to achieve Internet Explorer and Safari compatibility. As I recall, Firefox was 16 times faster than IE 6 and five times faster than Safari. We are about to address this question again, and with IE 8 we hope to be able to support Internet Explorer. Currently, the only way to connect non-Mozilla browsers to once:radix is via the owebAPI.

    Most of the speed issues relate to rendering. With the new JIT JavaScript compiler in Firefox 3.5, once:radix is similar in performance to FileMaker for delivering data to the user – yes, even over the Internet! With some government departments and large corporations refusing to consider Firefox as a replacement for Internet Explorer, we do need to achieve IE 8 compatibility.

    The other exciting development of the last six years has been the evolution of the Intel CPU. I was sitting at a Quad-Core Mac at the South Australia* Tourism Commission recently. Their server is a very basic Linux box running over an ordinary 100Base-T network. Their design unit switched from a FileMaker application to once:fabrik. Their Studio Manager, Mr David White, commented, “It’s the fastest web application I’ve ever seen and much faster than Filemaker.”

    I believe this demonstrates better than any other example I could give that it is the client side where performance is critical.

    Does that mean you need to install Quad-Core desktop machines to make once:radix work effectively? No, of course not. My own machine is an old PowerBook G4. It should have been turned into an anchor years ago, yet performance under Firefox 3.5 is more than acceptable. What it does demonstrate is that this is a moving target and time is on our side. As Moore’s Law continues to push development forward, things will only get better for once:radix.

    **Measuring Performance**

    Finally, if you want to quantify how your system is performing, try switching to Debug mode by adding ?d=d to the address line before logging on (or &d=d after logging on) and pressing the Enter key. Actual transaction times, measured in milliseconds, are shown in the Error Console.

    Applying my version of Heisenberg’s Uncertainty Principle: Debug mode may degrade performance somewhat, but it will give you a fair idea of what is happening.
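    In the same spirit as those Error Console timings, a generic wrapper for measuring a transaction in milliseconds looks like this (plain JavaScript, not once:radix code):

```javascript
// Illustrative timing wrapper: logs elapsed milliseconds for any function,
// much like the transaction times Debug mode writes to the Error Console.
function timed(label, fn) {
  return function (...args) {
    const start = Date.now();
    const result = fn(...args);
    console.log(label + ": " + (Date.now() - start) + " ms");
    return result;
  };
}

// Usage: wrap a "transaction" and call it as normal.
const loadRecords = timed("loadRecords", function () {
  let total = 0;                        // stand-in for real work
  for (let i = 0; i < 1e6; i++) total += i;
  return total;
});
loadRecords();
```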

    *When visiting Australia, this is the place to start – the world’s best red wines, seafood, brilliant beaches and Kangaroo Island (our best kept secret).

