
About That: Linux Mint’s site hack


The Linux Mint site hack a few weeks ago has brought to the fore how lackadaisical the security efforts behind some projects’ hosting and distribution sites can be. The truth of the matter, though, is that without a dedicated resource to look after this aspect, any effort can quickly grow stale and obsolete.

The tools and workflows required to keep sites and software packages secure are moving targets and a full-time effort, and the demand for the latest and greatest software does not help one bit: a culture of blind trust has washed in on the back of the false mantra that “Linux is inherently more secure.”

No, it is not, and its growing popularity is demonstrating this. Linux is set up so that you can look into your security and manage it more easily, but security does not come without at least some planning and consideration. Jumping to BSD will not save us either. Improving our tooling and workflows is the only viable, forward-looking strategy we have at the moment – and it’s lacking.

Matt Hartley’s synopsis of the event is worth a read; what follows is a copy of my initial reactions to his article.

It seems to me there are three things that need implementing as broadly as possible:

1/ GnuPG tooling must be made easy from both the GUI and the CLI – manage sources of public keys in one operation, add or remove keys in a second, and verify target files in a third. No more, no less; make it easy. (A sketch of what this looks like with today’s CLI follows this list.)

2/ Decentralize the public key servers so that they can sync public keys to each other, but a breach and key replacement on one server does not cascade to the other servers. A client should query multiple key servers for redundant security. (A rough client-side version of this check is sketched after the list.)

3/ ISO signing (and indeed, any software signing) needs to happen off-network. Take your binary, put it on a USB stick. Scan the stick on another system running a different OS (BSD, Solaris, Haiku, whatever) that is offline, then perform the signing on a third machine which is also offline. It’s more cumbersome, but with core software like this, it’s likely worth it. (The signing step itself is sketched below as well.)
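
For reference, a minimal sketch of what those three operations look like with today’s gpg CLI – the key ID, the keyserver and the filenames are placeholders, not any project’s actual values:

    # Operation 1: fetch a public key from a keyserver you have chosen to trust.
    gpg --keyserver hkps://keyserver.ubuntu.com --recv-keys 0xDEADBEEFDEADBEEF
    # Operation 2: remove a key you no longer trust.
    gpg --delete-keys 0xDEADBEEFDEADBEEF
    # Operation 3: verify a downloaded ISO against its detached signature.
    gpg --verify linuxmint.iso.sig linuxmint.iso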
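
gpg will not cross-check servers for you on its own, but a rough sketch of the client-side redundancy check I have in mind (key ID and server list purely illustrative) could be as simple as fetching the same key from independent servers into throwaway keyrings and comparing the fingerprints:

    KEYID=0xDEADBEEFDEADBEEF
    for server in hkps://keyserver.ubuntu.com hkp://pgp.mit.edu; do
        tmp=$(mktemp -d)
        # Fetch the key into an isolated, temporary keyring for this server.
        gpg --homedir "$tmp" --keyserver "$server" --recv-keys "$KEYID"
        # Print the fingerprint; all servers must agree before the key is trusted.
        gpg --homedir "$tmp" --fingerprint "$KEYID"
        rm -rf "$tmp"
    done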
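
And for the off-network signing step itself, something along these lines on the air-gapped machine – the ISO name and the sha256sum.txt convention are assumptions for the sake of the example:

    # On the offline signing machine, with the ISO carried over on a USB stick:
    sha256sum linuxmint-17.3-cinnamon-64bit.iso > sha256sum.txt
    gpg --armor --detach-sign sha256sum.txt   # produces sha256sum.txt.asc
    # Carry sha256sum.txt and sha256sum.txt.asc back out on the stick;
    # users then check both the hash and the signature:
    #   sha256sum -c sha256sum.txt
    #   gpg --verify sha256sum.txt.asc sha256sum.txt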

A culture of blind trust is another issue – there’s this very worrying tendency to want the latest and greatest, which leads to (as you point out) adding PPAs ad hoc, using the AUR without a second thought, and simply running git clone ; make ; make install … not to mention vagrant up, pulling Docker images, and so on.

This comes from developers wanting new releases to build against (buffer overflows in there?), users wanting the latest software (backdoors in there?), and rookie sysadmins wanting to make their lives easier (hoo hoo, get hold of a sysadmin’s account, now we’re talking!).

We’re no better off than someone grabbing a re-pimped GIMP from SourceForge when we do this.

Why not stay with slow movers like Debian, CentOS or a BSD? Because you can’t guarantee that all packages are going to receive security updates for their entire lifetime. Case in point: the ownCloud debacle, where they said “don’t use the versions from the repos, we’re not maintaining them.” OK, fine, let’s add a PPA with continually updated code. Whoops – back to square one.

I use the repos precisely because I am counting on them to be a base web of trust. If I can’t trust the repos, and I can’t trust ad-hoc PPAs without deeper work, what are my options?

We’re far from out of the woods…

