As we’ve previously announced, SourceForge.net has been the target of a directed attack. We have completed the first round of analysis and now have a much more solid picture of what happened, the extent of the impact, and our plan to reduce the risk of future attacks. We’re still working hard on fixing things, but we wanted to share what we know with the community.
We discovered the attack on Wednesday and have been working hard to get things back in order since then. While several boxes were compromised, we believe we caught things before the attack escalated beyond its first stages.
Our early assessment of which services and hosts were impacted, and the decision to disable CVS, ishell, file uploads, and project web updates, appear to have prevented any further escalation of the attack and any data corruption activities.
We expect to continue work on validating data through the weekend and to begin restoring services early next week. There is a lot of data to be validated and these tests will take some time to run. We’ll provide a more detailed timeline as we know more.
We recognize that we could get services back online faster if we cut corners on data validation. We know downtime causes serious inconvenience for some of you. But given the negative consequences of corrupted data, we feel it’s vital to take the time to validate everything that could potentially have been touched.
Attack Description
The general course of the attack was pretty standard. There was a root privilege escalation on one of our platforms which permitted exposure of credentials that were then used to access machines with externally-facing SSH. Our network partitioning prevented escalation to other zones of our network.
This is the point where we found the attack, locked down servers, and began work on analysis and response.
Immediate Response
Our initial response included many of the standard steps:
* analysis of the attack and log files on the compromised servers
* methodically checking all other services and servers for exploits
* further network lockdown and updating of server credentials
Service shutdown
Once we knew the attack was present, we locked down the impacted hosts to reduce the risk of escalation from those servers to other hosts and to prevent possible data-gathering activity.
This strategy resulted in service downtime for:
* CVS Hosting
* ViewVC
* New Release upload capability
* ProjectWeb/shell
Password invalidation
Our analysis uncovered (among other things) a hacked SSH daemon, which was modified to capture passwords. We don’t have reason to believe the attacker was successful in collecting passwords. But the presence of this daemon, and server-level access to one-way hashed and encrypted password data, led us to take the precautionary measure of invalidating all SourceForge user account passwords. Users have been asked to recover account access by email.
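For those curious what a forced invalidation like this can look like, here is a minimal, hypothetical sketch: every stored hash is replaced with an unusable value and a one-time recovery token is mailed out. The schema, column names, and mail helper below are illustrative assumptions, not our actual implementation.

```python
# Hypothetical sketch only: the schema, column names, and the mailer are
# illustrative assumptions, not the actual SourceForge implementation.
import secrets
import sqlite3

def send_reset_email(email: str, token: str) -> None:
    # Stand-in for a real mailer; in practice this would send a link
    # containing the one-time recovery token.
    print(f"Recovery link for {email}: https://example.org/reset?token={token}")

def invalidate_all_passwords(db_path: str) -> None:
    """Make every stored password hash unusable and issue recovery tokens."""
    conn = sqlite3.connect(db_path)
    users = conn.execute("SELECT id, email FROM users").fetchall()
    for user_id, email in users:
        token = secrets.token_urlsafe(32)  # one-time recovery token
        conn.execute(
            "UPDATE users SET password_hash = '!', reset_token = ? WHERE id = ?",
            (token, user_id),
        )
        send_reset_email(email, token)
    conn.commit()
```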
Data Validation
It’s better to be safe than sorry, so we’ve decided to perform a comprehensive validation of project data, from file releases to SCM commits. We will compare data against pre-attack backups and identify anything changed or added. We will review that data, and will also refer anything suspicious to individual project teams for further assessment as needed.
The validation work is a precaution: while we don’t have evidence of any data tampering, we’d much prefer to burn a bunch of CPU cycles verifying everything than to discover later that some extra special trickery led to some undetected badness.
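To give a rough idea of what this kind of validation involves, here is a minimal sketch (not our production tooling) that walks a current tree and a pre-attack backup, hashes both, and flags files that were added or modified. The paths and the choice of SHA-256 are assumptions made for the example.

```python
# Illustrative sketch only: compare a live tree against a pre-attack
# backup and report anything added or modified since the backup.
import hashlib
import os

def hash_tree(root: str) -> dict[str, str]:
    """Map each relative path under root to the SHA-256 of its contents."""
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as fh:
                digests[rel] = hashlib.sha256(fh.read()).hexdigest()
    return digests

def diff_against_backup(current_root: str, backup_root: str) -> None:
    current, backup = hash_tree(current_root), hash_tree(backup_root)
    for rel, digest in sorted(current.items()):
        if rel not in backup:
            print(f"ADDED    {rel}")   # new since the backup; needs review
        elif backup[rel] != digest:
            print(f"MODIFIED {rel}")   # contents differ; needs review
```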
Service Restoration
Now that most of the analysis is done, we’ve started the next stage of our efforts, which includes the obvious work of restoring compromised boxes from bare metal, and implementing a number of new controls to reduce the likelihood of future attacks.
We will of course also be updating the credentials that reside on these hosts, and we have taken quite a few additional steps to further lock down access to these machines.
We are in the process of bringing services back one by one, as data validation is completed and the newly configured hosts come online. We expect data validation to continue through the weekend, with service restoration getting into full swing early next week.
File Release Services
Many folks have suggested that the most likely motivation for an attack against SourceForge would be to corrupt project releases.
We’ve found no evidence of this, but are taking extraordinary care to make sure that we don’t somehow distribute corrupted release files.
We are performing validation of data against stored hashes, backups, and additional data copies.
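As an illustration of the hash-based piece of that validation, here is a small hypothetical sketch that re-hashes each release file and compares it against a stored manifest. The manifest format (one digest and path per line) and the use of SHA-256 are assumptions for the example, not a description of our actual tooling.

```python
# Hypothetical sketch: the manifest format (one "digest  path" pair per
# line) and SHA-256 are assumptions made for this example.
import hashlib

def verify_releases(manifest_path: str) -> bool:
    """Return True only if every listed file still matches its recorded digest."""
    all_ok = True
    with open(manifest_path) as manifest:
        for line in manifest:
            expected, path = line.strip().split(maxsplit=1)
            with open(path, "rb") as fh:
                actual = hashlib.sha256(fh.read()).hexdigest()
            if actual != expected:
                print(f"MISMATCH: {path}")  # flag for manual review
                all_ok = False
    return all_ok
```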
We expect to restore these services first, as soon as data validation is completed.
Project Web
One attack vector that impacts our services directly is the shared project web space. So, let’s talk about that in a bit more detail.
Sourceforge.net has been around a long time, and security decisions made a decade ago are now being reassessed. In most cases past decisions were made around the general principle that we trust open source developers to work together, play nice, and generally do the right thing. Services were rolled out based on widespread trust for the developer community. And that philosophy served us well.
But in the years since then, we’ve evolved from hundreds of sf.net users to millions, and in many cases it’s time to reassess the balance between widespread trust and security. Project Web is a prime example of this, and we’ve been working at a deliberate pace to isolate project web space, and have begun rolling out the new “secure project web” service to many of our projects.
This new secure project web includes a new security model that moves us away from shared hosting while preserving the scalability we need for mass hosting.
Because of this attack, we’ll be accelerating the rollout of Secure Project Web services as part of the process of bringing the project web service back online. This will allow us to provide both improved functionality and better security.
CVS
CVS service is one of SourceForge.net’s oldest services and, due to limitations in CVS itself, cannot readily live on our scalable network storage solution. Validation of this data is going to require several days, and we anticipate that this service will be restored sometime in the latter part of the week.
We are also considering the end-of-life of the CVS service, and hope to support users in migrating from CVS to Subversion in the coming months. Subversion generally provides command parity with CVS, and many of our users have made this transition successfully in the past.
From SVN, projects can move to Git if desired.
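For projects wondering what such a migration involves, a common path is the cvs2svn tool, which converts a CVS repository into a fresh Subversion repository. The sketch below drives it from Python; the repository paths are placeholders and the exact options a given project needs may well differ.

```python
# Hedged sketch of a CVS-to-Subversion conversion driven from Python;
# paths are placeholders and the exact cvs2svn options may differ per project.
import subprocess

def convert_cvs_module(cvs_module_path: str, new_svn_repo: str) -> None:
    """Build a fresh Subversion repository from a CVS module using cvs2svn."""
    subprocess.run(
        ["cvs2svn", "--svnrepos", new_svn_repo, cvs_module_path],
        check=True,  # raise if the conversion fails
    )

if __name__ == "__main__":
    convert_cvs_module("/cvsroot/exampleproject/module", "/srv/svn/exampleproject")
```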
Looking forward
We are very much committed to the ongoing process of improving our security, and we will continue making behind-the-scenes improvements to our infrastructure on a regular basis. This isn’t a one-time event; it’s a process, and we’re going to stay fully engaged over the long term.
I’d like to end on a more personal note. I’ve been working with our Ops team a lot this week, and I think we can all say that the patience and support we’ve received from the community have been the best part of a very bad week.
Thanks again for all the support and encouragement.