Apologies in advance for any misconceptions in this post, for I am but
a kernel newbie.
I have been designing what I believe is a very effective system to render
even root exploits on a host relatively powerless. But obviously at some
level you need a trusted layer that "cannot" be exploited - or, at least,
is very difficult to exploit.
I was wondering to what extent the UML kernel can be secured against
exploits, by special measures on the host - and to what extent, if any,
such ideas have been implemented so far.
Let's assume for these purposes that the host is secured and for all
practical purposes "unexploitable". Then, would these ideas be possible?
And would they actually decrease risk?
1. Make all executable code in the UML kernel read-only.
2. Prevent the UML kernel from making itself writeable again.
(This is something that cannot be done with the kernel running directly
on a real x86 architecture, AFAIK, but can be done with UML. Hence a
UML selling point for the ultra-paranoid?)
3. Restrict the executable segment for the UML kernel so that kernel
data and stack(s) are not executable, using something like exec-shield.
Given these measures, my plan essentially involves introducing an
"archangel" security layer in the kernel, which follows constraints
set by a "God process" running on another machine (perhaps a different
UML). The archangel and the God process communicate via a socket. The
archangel must obviously prevent any process running on the UML from
interfering with or spoofing that communication channel.
The default constraints prohibit modifying system files and
firewall rules; when a sysadmin needs to do some work, they
instruct the God process to temporarily relax certain constraints -
but only for the files etc. they need to access, only from a particular
process tree, and only until they have finished working with them.
(The archangel could also do an unspoofable md5sum of files, to verify
that no rogue code has taken the opportunity left by relaxed
constraints to perform some unauthorised modifications. Rogue code in
that scenario is most likely to be run as a result of the sysadmin
running code from an untrusted source.)
It might seem like an overcomplicated or byzantine system, but I
believe it offers a good balance between security, performance and
system maintainability. Move too much logic into the God process
and you lose performance (and simplicity); move too much logic
into the read-only space (i.e. in my model, the kernel, but it
doesn't have to be the kernel) and you lose maintainability;
move the God process onto the same UML as the archangel and you
lose security, because then rogue code can pretend to be the
sysadmin. (The idea is that the machine hosting the God process
has fewer services running, and is therefore less likely to be
exploited. The truly paranoid could run it on a different OS
altogether, to reduce the chance of the God machine and the
original UML being cracked simultaneously - and cracking both
at the same time is the only way to gain unauthorised
privileged access to the original UML.)
All of this would benefit from a "host-hardened" kernel. Is this