
The Anvil Podcast: TurnKey Linux

Rich: This is Rich Bowen. I’m speaking with Liraz Siri. Liraz is involved in the TurnKey Linux project.

If the embedded player below doesn’t work for you, you can download the audio in mp3 or ogg format.

You can subscribe to this, and future podcasts, in iTunes or elsewhere, at http://feeds.feedburner.com/sourceforge/podcasts, and it’s also listed in the iTunes store. The music at the beginning and end of this recording is used by the generous permission of the Arianne project.

Rich: Can you tell us what this project is about and how it got started, and what sort of images the TurnKey Linux project offers?

Liraz: A few years ago … three years ago, we noticed that there was an increasing proliferation of Linux-based open source software – really great stuff – that most people we worked with had no idea even existed. And those that did … if we mentioned that, well, there’s a great piece of software for that, a lot of people would say, well, it’s going to be difficult to set up, or we tried that, and there was an issue, and we gave up.

Now, experts didn’t have this problem. They had a lot of experience. But even if you’re an expert with one application, you might run into issues with another, and it’s just not worth the bother, so you never evaluate it – even though it might be perfect for the problem you’re trying to solve.

So back then we were developing infrastructure to put together pre-integrated Linux solutions for something else – an entirely different commercial application. We thought, wouldn’t it be neat if we took that development infrastructure and used it to start creating pre-packaged Linux solutions for the most popular software? That was how it started – basically trying to take the expertise that goes into putting together really good Linux systems by someone who knows what they’re doing, and glue together components that make it easier for someone who doesn’t necessarily know a lot about Linux system administration to get things done – to hit the ground running very quickly.

Our first appliances were very basic. We did three solutions: the LAMP stack, Drupal and Joomla, which are content management systems. They’re very popular. And it turned out that people liked the concept, and they started giving us feedback and a community started forming around the project. We gradually introduced more solutions, updates, and new features. And eventually we got to the point where we have a library of 45 solutions, which we’re now working on expanding to over 100 solutions for the next release.

That’s pretty much how the project got to the point where it is today in terms of the virtual appliance library.

boot screen

There’s also some innovation on the side of … how do you … let’s say you want to start an online shop. The first step is deciding what sort of online shop software is right for you. There are various solutions, and what we’d like to do is take you from the very first stages, where you don’t necessarily know what sort of software to use, so you can evaluate the different solutions, and when you find something that’s right for you, you can very easily deploy it to the cloud. We have some features to make that much easier than it would be with a conventional Linux system. It’s called TurnKey Backup and Migration, and it allows you to do completely automated backups and then completely automated restores, to pretty much anywhere. So you can back up a system that’s running in a virtual machine, and you can restore it to a server running on real hardware, or a server running in the cloud or on the other side of the world.

That’s pretty much TurnKey in a nutshell.

Rich: I noticed that there appears to be a commercial venture that’s attached to this as well. Can you tell me something about that?

Liraz: About a year and a half ago people started talking about deploying TurnKey in the cloud. They wanted to see support for TurnKey at their favorite VPS providers, or cloud platforms. It used to be pretty difficult to do that. So we started working to see … well, it’s great if you can install a Linux system locally, but that sort of limits your options, because most people don’t really run server software at home. If you really want to move it into production then you’re going to want to host it somewhere. And you want the solution to be supported on a hosting platform.

So we started looking for options, and it turned out that Amazon, at the time, had a really good cloud infrastructure service. That made it very easy for us to plug into their system and start offering our users the ability to deploy these solutions on Amazon. Initially we created Amazon machine images and tried letting users deploy them themselves, through the regular Amazon Web Services tools. That turned out to be difficult. There were all sorts of issues.

If you’re setting up a machine on the Amazon cloud, you have to set up firewall rules, and you can’t have a default root password because that makes the system very insecure, so you’re using SSH key authentication … there are all these small issues that, again, aren’t problematic for an expert, but somebody who isn’t an expert gets entangled in them.

So we created the TurnKey Hub, which streamlines the whole operation, and makes it very easy for you to get started using Amazon cloud – things that would’ve previously been rather difficult. As part of doing that, we also created a business model for TurnKey.

Most of the things that the TurnKey Hub provides are basically free. The service provides backup and migration – that’s completely free. We don’t make any money off of that. Dynamic DNS services, free monitoring – all that is part of the free service. Right now, the only thing we charge for is cloud deployment. Say you want to deploy a server in the cloud: we have basically two options, where you can decide if you want to pay a 10% premium to deploy your server – you pay 10% extra on top of the regular Amazon Web Services fees. Alternatively, we have a different plan that allows you to pay a fixed monthly fee, and then there are no premiums on usage fees.

We also provide a higher-end version of that plan that also includes support for businesses that want that kind of commitment.

That’s basically TurnKey’s first commercial offering. In the future we might expand that.

Right now we’d like to focus on expanding the number of solutions we provide. We’re also doing things like supporting Debian and supporting 64-bit, which has been on the list for a while. There have been some infrastructure issues with that, but we’re finally going to solve them in the next version so people can use this for higher-end applications as well. That hasn’t really been a big problem up until now, because most TurnKey users have been on the low end, but once you start getting serious you want 64-bit support. That’s something we’re going to be releasing in the next version.

For an Open Source project, the community is, maybe, if not the most important aspect, one of the most important, because if you don’t have a community, then, why not be a proprietary project and have all the advantages of being able to sell your software? Why give stuff away and not have people participate, and feel they’re involved, and that this is something that they can contribute back to?

It’s really important, in our belief structure, to have people involved, and have people feel that TurnKey is something they can contribute back to. There are a few ways that users contribute right now. The easiest way is to participate in the community forums. We have community-edited documentation, and our bug tracker, and a blueprint for people to suggest features and discuss what they’d like to see in the next version, which is great.

The next level of contribution after that, which requires a bit more involvement and actual development: right now we have a software development kit called TurnKey Linux Patch – tklpatch – which makes it very easy for users to take any appliance and customize it to their needs. We have a core appliance that is the lowest common denominator of all our appliances. It has the basic features: the web interface, the backup and migration capability. It’s actually one of our most popular appliances, because a lot of users are taking that and just tweaking it to fit their needs. Some of them are going a step further and posting their contributions back on the website, so we can discuss them. Most contributions are going to make it into the next release as new appliances, and from then on TurnKey Linux maintains them as additional appliances.

There’s a big benefit. This is basically the advantage of Open Source in general. If you contribute back to the community, then you have to do less work in the future yourself, because you have people working with you and making your specific use case better, because that’s also something they are interested in.

One of our goals for this year is to take the development infrastructure that is powering Turnkey and make it public, and make it very accessible. Right now there’s the software development kit, but we want to make it possible for people to develop TurnKey appliances basically the same way we do, so this project doesn’t rely so much on our core development team, and anybody can contribute at any level.

Right now, the source code to the appliances is public, but the fabrication system that takes that source code and assembles appliances from various package repositories – that’s something we’re cleaning up right now, and we’d like to build an interface so people can roll their own with the same tools, and with the same power, that we have.

We’re hoping that once we do that, we can really relieve the labor bottleneck that has been limiting what we can do. When you depend on a small core development team, everything has to go through us. We can expand to maybe a hundred or a hundred and fifty appliances, but there’s potential for so much more – especially when you get to client-side applications.

There’s a ton of good client-side applications. Right now we’re only doing server-side applications, but in the next release we’ll be releasing a TurnKey client-side appliance – sort of a TurnKey appliance core, which we’d like to see as the basis for a whole range of client-side applications. We don’t even know, necessarily, what sort of applications people are going to be really interested in, or which will eventually be successful, even though there are a few that are probably obvious, such as rescue disks, kiosks, stuff like that, and maybe privacy distributions. Who knows? Once you get the tools out there and people can use them freely, then you can just let innovation happen. And the things that people are interested in, they’ll flock towards.

We want to create an ecosystem – an infrastructure that is completely open source, where people can feel that they belong and that they can contribute back to. Then TurnKey can get to the point where it really lives up to the potential we think this approach has.

Rich: Thanks so much for talking with me. I wish you a whole bunch of success with your project.

Liraz: We’re really excited. It’s been a lot of fun so far. Thanks for having me. I hope your listeners check out our website and maybe try one of our appliances, then tell us what they think.

Rich: Thanks so much.

Liraz: Bye bye.

The Anvil Podcast: LEAF

This week I’m speaking with David Brooke from the LEAF project. LEAF is a very small, security-focused distribution of Linux ideal for running on very low-end hardware.

You can subscribe to this, and future podcasts, in iTunes or elsewhere, at http://feeds.feedburner.com/sourceforge/podcasts, and it’s also listed in the iTunes store.

If the embedded player below doesn’t work, you can download the audio in mp3 or ogg formats.

Rich: This is Rich Bowen. I’m speaking with David Brookes from the LEAF project. Thanks for speaking with me today. Could you give us an overview of what the LEAF project is, and what its goals are, and how the project got started to begin with?


David: Sure, Rich. Let me have a go at doing that. I must point out that I’m one of the newbies on the project, so I can’t claim to have been around since the very inception. I’m going to try to speak on behalf of my co-developers, and give a view of what we’re trying to achieve with the LEAF project.

The first thing to say is that LEAF is an acronym. It stands for Linux Embedded Appliance Framework. Really, those words do capture the essence of what we’re about.

Linux, obviously.

Embedded, as in, we do target running Linux on some fairly low-end machines. Right now we are looking only at X86 hardware. That’s what we run on. Pretty much the smallest X86 machine you can think of will happily run the LEAF distribution.

Right now we’re looking at expanding the supported hardware – CPU chipsets – to run on other devices. The X86 hardware isn’t really at the low end of the scale any more. There are other alternatives that are rather smaller, rather less resource-intensive in terms of power. But right now it’s X86. Really, going back a few years to when the LEAF project got started up – when it was spawned from other initiatives – it was targeting very low-end X86 hardware, and for a long time we had the ambition of running off of a single floppy disk. So it was a fairly low resource requirement, a very small distribution, to fit and boot off of a single floppy, then run from memory once it had booted, and make efficient use of the kind of hardware that was around in the year-2000 time frame.

Appliance, as in: although there’s a focus on firewalling, network routing, that kind of functionality, it is a more generic platform. So really anything you can do with Linux, you can do with a LEAF distribution.

The Framework part of the name touches on that same thing. It is extensible as a framework. It isn’t just for networking and firewalling, although those are a focus for it. Other network-based solutions – around file servers, around PBX, voice over IP – built on the same platform are eminently feasible with the basic distribution.

In terms of what it will do – it will do anything that a standard Linux distribution will do. It is low-end in terms of its demands on resources. Pretty high performance, as best we can get it given those constraints, and pretty flexible.

It is tailored, in its bare distributions, for various network based purposes, but it is extensible beyond that.

R: I saw something on the website about wireless access points. I used to have a flashable WAP. Is it used for that as well?

D: It’s used for that kind of purpose. Not as part of any commercial offerings as far as I’m aware. But if you look at the technology that’s used, I certainly see equivalent componentry in terms of the software applications that run on access points from 3Com and other vendors – the same kinds of solutions get used that we have available within LEAF. And certainly people are successfully able to build wireless access points.

R: The other thing I noticed on the website is that you mention several forks of your code base, other distributions of it. What’s the philosophy there regarding other distributions of your base?


D: That’s a good question. It really dates back to the early days of the LEAF project – back to about the year 2000. There’s quite a nice diagram on the website that shows how various projects have merged together and then forked away. One of the initial projects that we based the LEAF solution on was the Linux Router Project (LRP), and personally that’s how I came across LEAF. I initially started around 1999, using LRP to run my own router, and stuck with it for a while. Then I became aware that LRP wasn’t being actively developed, after the year 2000, or maybe 2001, and that the LEAF team were developing LEAF Bering, which I then switched to.

There were then various evolutions. What tends to happen is that there’s an initiative to make a fairly major change. For example, there was a change from using conventional libc to micro libc or uclibc, which is now used for the main active branch of the project. These major changes happen, and then things stabilize. The developers and users get to be happy with what’s happening. Then there’s another requirement to change the status quo because maybe people have different ambitions. That’s happened quite recently.

My active involvement has been quite recent – perhaps the last two years or so, during which version 3 of Bering uclibc was stable and working fine for a lot of people. It was based on the Linux kernel version 2.4, which did impose certain constraints. It was constraining some of the functionality, especially around IPv6 and some of the firewalling. And some of the add-on applications were constrained in terms of the versions that were compatible with that 2.4 kernel.

Version 4 of the project, which has the 2.6 kernel, has caught up with some of the key software components like the Shorewall firewall. We were quite far behind in terms of the version of that we were using, and we’ve managed to catch up.

What we’ve found is that the footprint of the solution has grown quite dramatically. The 2.6 kernel is a lot bigger than the 2.4 kernel. The new versions of Shorewall actually rely on Perl. It’s a cut-down Perl installation but it is a Perl installation, which is more resource intensive both in terms of disk space and runtime space. That has meant that the aspiration of fitting on a single floppy disk has had to go out the window. But then again, there’s no real user-base out there that relies on floppy disks any more. We’re in a compact flash world, and other physical disk media have taken over.

That was a big change both in the developer community, and in the focus – not so much the focus of the project, but the focus in terms of removing some of the constraints and some of the policies of being very small, and focusing more on performance than on very very small size.

R: Where do you see the project going in coming years? What sort of additional functionality are you going to try to put in there?

D: Functionality-wise, I think we’re already pretty rich. In terms of the number of packages, it’s getting on for 200 individual application packages that are available. We do find that people add to those, so as different developers have a requirement to run a new application, or maybe switch from one bit of technology to another, they’ll add another package and grow the functionality that way. That’s organic evolution, with extra bits of solution being added on, rounding out the functionality available in the framework.

There’s also the move away from the lock-in to the X86 CPU set. That’s an active development right now, to go from compiling on X86 for X86 to compiling on X86 for ARM or other processors. That will open up a much wider range of hardware – and actually rather cheaper hardware. One of the constraints right now is that the low-power hardware tends to be quite expensive. Embedded X86 solutions tend to be reasonably expensive because they’re quite low-volume production, whereas there are other alternatives out there that are a lot more affordable. We’d like to target those. That does give us challenges in terms of the compilation environment. With all of the projects doing cross-compilation, we’re quite sensitive to how the developers of the applications have written their makefiles, and it’s tough to give a completely isolated build environment for some of the applications without the host libraries and other things leaking in. Making that switch to a much cleaner cross-compilation setup is going to be a challenge, but it’s quite a keen thing to do, and I think it will open the project up to a wider user base, and a wider set of use cases for different applications.

R: It’s always kind of cool for me to see these projects that have been around for ten plus years and are still active. I’m always kind of curious when a project decides that they’re done, and there’s nothing else to do. It’s cool to see that this project is not only active, but still bringing in new developers with fresh ideas.

D: It is cool, and I think that that’s what keeps it alive in some respects. What I’ve found is that the developers – there aren’t that many; we’d like to have more developers, as I guess other projects would – all seem to have the same kind of ideas; we think the same way about things. As with many of these projects, it’s cool to work with people who have the same kind of ambitions. And we do work well together as a team. We’re quite complementary in how we focus on different areas: we take responsibility in different areas, and then support each other in other areas.

It almost feels like there have been a number of generations of the LEAF project, handing over from one generation to the next as new developers come on board with new ideas, and the ability to spend quite a lot of time in some cases. Some of the guys devote quite a lot of time to developing the project. There’s a wider community that can benefit from their involvement and investment, and their work.

R: You mentioned that there’s always room for developers. What areas of the project are itching for that new blood?

D: There are the things that I’ve got on my list:

I think projects always tend to struggle with documentation. That’s certainly one thing that I’ve been trying to contribute.

I’m not really a deep development person. I’ve done software development for about 25 years, or thereabouts. I don’t claim that I’m the world’s best software developer. Because of the things I do in my real job, and the things that I do for myself … when I’m using a new bit of technology, or a new system, I do tell myself to write down documentation for my own benefit, so that I don’t forget the next time, so I can reinstall something, or understand why I chose to configure things the way that I did. It’s quite a small step for me then to contribute that to the wider community, and write it for a wider audience. So I’ve done some work on the documentation, bringing it up to date for the new version 4 of the project.

We used to use the DocBook technology for our documentation. It’s good technology, but I think it puts quite a few people off from contributing to it. It wasn’t that easy to make a simple change. You had to change a file, and then make sure that it’s compliant with the schema and that kind of thing.

We took a fairly bold step, which I instigated, to move to a wiki-based documentation platform, which makes it easier for people to contribute to, and easy to make minor changes to. I think that has proved quite valuable in terms of … if I spot a typo or some slight error I’ll go and fix that, as do the other developers.

There is quite a lot of documentation. There are quite a lot of undocumented features. There are quite a lot of packages without any documentation. And I moved across some of the older documentation without doing much in the way of validation or improvement.

So that’s one area where we can always use new people.

The other thing I’d say is testing. That’s one of our weaknesses. We don’t have a full test harness. We can do some fairly major surgery to the kernel, or other parts of the distribution, and we don’t have an easy way of checking that it still works for all of the wide range of functionality that we have. I’d be delighted if someone contributed something more around a formal test harness, or some sort of regression test scripts. Either a very small contribution there, or a bigger contribution, would be pretty neat.

Another thing, I’d say, which is open to contribution from other people, is that we were constrained by using the 2.4 kernel for a while. In moving to the 2.6 kernel we’ve had some of the brakes removed in terms of updates to some of the application packages. I’m aware that we haven’t actually gone around all of the packages and done the updates. Right now we are distributing some pretty old upstream application versions. It would be nice for someone to go through and just check on those – which ones are out of date, which ones could be updated.

And then the other thing that we are working on at the moment, which is maybe another top priority, is that there is a web interface for administration of a LEAF system which does give quite a good view of a running system in terms of status. It’s not fully featured in terms of administration updates. I tend to use a command line – I’ll ssh into a box and use the command-line menus that we have for administration, as do many of the developers. In terms of an entry point for new users, I know that the command-line world does tend to put people off, and a lot of the users tend to rely on the web interface. We can always improve on that. I wouldn’t claim that it’s a very advanced web interface. It’s designed for low-power devices, so it’s not that clever in terms of the technology on the back end. But I know some other projects have got something more flashy on the front end. So there are some options there for lowering the bar for new users coming on board, because right now it’s not that friendly for a new user. The documentation is helping out in some respects on that, but there’s always more we can do to try and make it really easy for a new user to come on board and get up to speed quickly.

The other core developers … we’re all over the place. This is very much a global project. It’s not a case of there being a small team that has worked together; we’ve all come to LEAF and found it a good cause to contribute to. We have core developers in Germany and other parts of Europe, some in the US, and I’m based in England. It’s quite good to have people coming from different countries, different cultures, and contributing to the same goal.

The other thing I would say is that I have a day job. I’m not doing this as a day job. I have a day job working in IT. I have a bit of a history in software development. I’m not a developer any more.

One of the reasons I do contribute some time to this project is that it keeps me grounded. These days I’m specifying systems, working with other people, getting teams to do work on my behalf. I’m getting to be the more senior person in my day job. But it’s always good to know where the technology’s at, what’s practical in terms of availability or performance or reliability of a solution. Certainly I’ve found that Linux is a solid platform. I think the industry has now woken up to that. For a while it was seen as a flaky, hobby kind of project, and that’s not really the case. And with LEAF I find that it’s a very stable platform.

Being a developer I do tend to rebuild to new versions pretty frequently, and reboot more often than others would do, but I know that a lot of users have had LEAF running for literally years on end without a reboot. It’s a pretty solid platform. And pretty secure. You can cut it down to a very limited set of installed files, which makes it quite secure – which is obviously great for a network-based, security-focused installation.

I must thank one of my colleagues based in Germany who felt his English wasn’t good enough to actually speak up on this call. To be honest he does a lot more work on LEAF than I do and he gave me some very useful notes for this session, so I’m plagiarizing some of his comments, and using some of mine as well.

R: Thank you so much for your time.

D: Thanks, Rich, no problem at all. Good speaking with you.

The OS Wars: We Have A Winner

Amy Vernon (@AmyVernon)

Update: See this post about the “unknown” and “other” categories in the stats below.

It’s clear who has won the OS wars: The user.

Just a few short years ago, Apple computers were little more than afterthoughts outside of artists’ circles. They certainly were not the go-to computers for anyone serious about programming or software development. That was left to the Windows and Linux users.

At conferences, on Sourceforge, and in other open-source communities, the OS battle to be fought was clearly Windows vs Linux. Those who liked Microsoft could call upon the massive numbers of users. Those who preferred Linux could hold themselves up as the true standard-bearers of open source.

You would not have shown your face at, say, ApacheCon, with a MacBook.

In conversation with none other than SourceForge’s new Community Growth Hacker, Rich Bowen (from whom I shamelessly stole the opening sentence of this post), it’s clear the open source community has matured to the point where the platform matters little – it’s the product, the result, that’s important.

We combed through about two years’ worth of data on SourceForge, looking at the platforms of the users who downloaded projects, and millions more Mac users are downloading open source projects now than in February 2010. Over the same period, Windows downloads have increased by a much smaller percentage and Linux downloads have actually declined.

Windows stats

Mac Stats

Linux Stats

And let’s not forget those in the “other” category where the operating system of the folks who downloaded was unknown:

Other stats

There were a few data points I found especially interesting, though a bit puzzling: April appears to be a slow month for downloading software on SourceForge. If you look at all platforms, for each year, there was a significant dip in downloads.

Why? Perhaps it’s Spring fever. The fact that it’s an across-the-board dip two years running suggests it has some statistical significance. We’d need more information – and data from more years – to determine just what that significance is. I’d love to hear theories from readers in the comments, though.

Full stats

A column on oStatic last year dissected the complex relationship Apple has had with open source, and pointed out how it made sense that Apple both used open source in its operating system and contributed code back to the community.

Apple isn’t big enough to control the programs people will use on their computers, the author pointed out, so the best alternative was to help ensure no one could, as Microsoft very nearly did in the 1990s. Helping keep the open source community robust helps prevent another near-monopoly like Internet Explorer was in that decade.

The SourceForge download data aren’t the only stats that show the rise of the Mac in open source.

Evans Data Corp. this summer released a survey that showed Mac had surpassed Linux as a development platform. The survey, conducted in June, was of 400 professional software developers. While developers are still targeting Linux for development more than Macs, they’re using Mac as the actual platform more.

The developers are increasingly making their software work well across multiple platforms, too. A good deal of SourceForge downloads are on two, three or more platforms.

A cursory survey showed that most projects downloaded primarily for one or two platforms appeared to be much more utilitarian than those downloaded on all three platforms.

Projects such as TortoiseSVN and WinMerge are popular with Windows users. iTerm is popular with Mac and Linux users, enabling the setup of a Mac terminal emulator. Fink, naturally, is downloaded by Mac and Linux users, as it eases the integration of open source projects into their Mac and Darwin environments. X-Chat Aqua brings IRC to Mac and Linux.

An exception to this trend appears to be Linux users, who love downloading UTube Ripper, which allows them to download YouTube videos and convert them. Not altogether surprising that Linux users bucked the trend, though, given that common sense would say they’re much more likely to seek out open source for most of their software needs.

On the flip side, many of the programs downloaded regularly by users regardless of platform tended to be alternatives to expensive proprietary software, and therefore more useful to a wide variety of people.

Projects such as Audacity for audio editing, Gimp (Windows and Mac versions) for image editing, Sweet Home 3D for virtual interior design, Celestia for 3D visualizations of outer space and Hugin for panorama stitching and processing showed up as big downloads for Microsoft, Mac and Linux.

An interesting statistic down the road will be when iOS and Android downloads start increasing. As tablets grab hold of more of the market, more open source projects will be made available for those OSes and for smartphone OSes – of which Apple’s iOS and Android are the most common. No doubt, some of the downloads in the “other” category are for those OSes.

It’s heartening to see so much diversity in the open source community – the idea behind open source is, after all, freedom of choice.

Amy Vernon was a professional newspaper journalist for 20 years before working as a freelance writer and consultant for a variety of publications. She has covered open source for the enterprise for Network World and consumer technology for Hot Hardware, among other sites. She uses Adium, OpenOffice, NeoOffice, SeaMonkey and other open source programs on a near-daily basis.