
Moving “/” when it runs out of space (Ubuntu 14.04)


My root (“/”) partition filled up nearly to the brim recently on one of my test servers, so I decided it was time to move it elsewhere… but how?

You can’t normally just add another disk and copy files over – there’s a bit of jiggery-pokery to be done… but it’s not all that difficult. I’d recommend doing this at least once on a test system before ever needing to do it on a bare metal install…

What you will need:

  • a LiveCD of your operating system
  • about 20 minutes of work
  • some time for copying
  • A BACKUP OF YOUR DATA – in case things go horribly wrong
  • a note of which partition maps to what mount point

For reference, my partitions were initially laid out as follows:

/dev/sda1 : /boot
/dev/sda3 : /
/dev/sda5 : /home

I then added a new disk to my machine.


In this walk-through, I will refer to your target PC, the one whose “/” needs moving, as “your PC” from now on. If you’re using a VM, the guest is the machine I am referring to – you needn’t do anything in the host.

Note that in my setup, the “/boot” and “/home” directories are on their own partitions. If you don’t have this as your standard setup, I highly recommend you look at partitioning in this way now – it helps massively when doing long-term maintenance, such as this!

1/ Boot your PC from the LiveCD.

I recommend you use the same CD as the one from which you installed the OS initially, but any Linux distribution of the same architecture (x86_32, AMD64, ARM, etc.) should do.

Once the Live environment is started, open a command line, and switch to root, or make sure you can use sudo.

2/ Prepare the new root

Use lsblk to identify all currently attached block devices.

I am assuming that /dev/sdb is the new disk. Adjust these instructions accordingly of course.

You want to first partition the new drive: run the command `fdisk /dev/sdb` as root

Type `o` to create a new, empty DOS partition table.

Type `n` to create a new partition – you will be asked for details (partition number, start and end sectors); adjust at will or accept the defaults.

Type `w` to write the changes to disk.

As root, run `mkfs.ext4 /dev/sdb1`

Your new drive is ready to accept files…

3/ Copy the files

Make directories for the old and new roots, and copy the files over

mkdir newroot
mkdir oldroot
sudo mount /dev/sda3 oldroot
sudo mount /dev/sdb1 newroot
sudo rsync -av oldroot/ newroot/

Note: in the rsync command, be sure to include the trailing slashes: “oldroot/ newroot/” and not “oldroot newroot” – otherwise rsync copies the directory itself rather than its contents!

Go do something worthwhile – this copy may take some time, depending on how full the partition was…

4/ Modify fstab

Run the following command to get the UUID of the new drive:

sudo blkid /dev/sdb1

Keep a copy of that UUID

Edit your fstab file

sudo vim newroot/etc/fstab

Keep a note of the old UUID

Change the UUID of the line for your old / partition to the new UUID you just got; and save.
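The substitution can also be scripted with sed; a sketch that rehearses the edit on a scratch copy of an fstab-style file first (the UUIDs here are invented examples, to be replaced with the values from blkid):

```shell
# Rehearse the fstab UUID swap on a scratch file before touching the real one
OLD_UUID="1111aaaa-0000-0000-0000-000000000000"   # example: your old / UUID
NEW_UUID="2222bbbb-0000-0000-0000-000000000000"   # example: blkid /dev/sdb1

cat > /tmp/fstab.test <<EOF
UUID=$OLD_UUID /     ext4 errors=remount-ro 0 1
UUID=3333cccc-0000-0000-0000-000000000000 /home ext4 defaults 0 2
EOF

# Swap only the root partition's UUID; the /home line is untouched
sed -i "s/$OLD_UUID/$NEW_UUID/" /tmp/fstab.test
grep '^UUID' /tmp/fstab.test
```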

5/ Edit the grub.cfg

Mount your old /boot partition into the *new* location

sudo mount /dev/sda1 newroot/boot

Now we edit newroot/boot/grub/grub.cfg

sudo vi newroot/boot/grub/grub.cfg

Locate instances of the old UUID and change them to the new UUID

Quick way: instead of using `vi` you could use `sed` instead

sudo sed -e 's/OLD-UUID-FROM-BEFORE/NEW-UUID-ON-NEW-DISK/g' -i newroot/boot/grub/grub.cfg

Of course, adjust the UUID strings appropriately. If you have never used sed before, read up on it. Keep a copy of the original grub.cfg file too, in case you mess it up first time round.

In the above command, the “-e” option defines a replacement pattern ‘s/ORIGINAL/REPLACEMENT/g’ (where ‘g’ means ‘globally’ – replace every occurrence, not just the first on each line); the “-i” option indicates that the file specified should be modified in place, instead of writing the changes to stdout and leaving the file unmodified. Using the “-r” option, you can also make use of extended regular expressions, including capturing groups.
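As a small illustration of the `-r` form with a capturing group (run here on a sample line rather than on grub.cfg itself):

```shell
# Capturing group example: wrap whatever UUID appears in double quotes.
# \1 in the replacement refers back to the bracketed group in the pattern.
echo 'search --fs-uuid abcd-1234' \
  | sed -r 's/--fs-uuid ([0-9a-f-]+)/--fs-uuid "\1"/'
# prints: search --fs-uuid "abcd-1234"
```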

After making the change, reboot. Remember to remove the LiveCD from the slot so the machine starts from the hard disk.

6/ Reboot, and rebuild grub.cfg

If all has gone well, you should now find your Ubuntu install rebooting fine. Open a terminal and run

df -h

See that your root is now mounted from the new disk, with the extra space!

There’s just one more thing to do – make the grub.cfg changes permanent. Run the following:

sudo update-grub

This will update the grub config file with relevant info from the new setup.

You have successfully moved your “/” partition to a new location. Enjoy the extra space!

Watcher-RSS: Your own, Personal, Feeder


I finally got around to putting together some initial code for that thing I wanted – a script to detect changes in a page and produce an RSS entry as an outcome.

“watcher-rss” is just that – a simple script that can be called by cron to check a page for a significant area, and generate an RSS “feed” in response.

It’s designed such that you need to define a bash handler script that sets the required variables; after that it can generate an RSS entry in response to anything. Read more
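The handler idea can be sketched roughly as below. The variable names here are illustrative, not watcher-rss's actual interface (consult the script itself for the real ones), and the page fetch is stood in for by a local file:

```shell
# Hypothetical sketch of a cron-driven page check emitting one RSS <item>.
# Variable names are invented for illustration, not watcher-rss's real API.
PAGE_FILE=/tmp/watched-page.html
FEED_URL="http://example.com/news"

# Stand-in for the fetched page (the real script would fetch the URL)
printf '<html><h2>Site news: v2 released</h2></html>\n' > "$PAGE_FILE"

# The "significant area": here, simply the first <h2> heading on the page
LATEST=$(grep -o '<h2>[^<]*</h2>' "$PAGE_FILE" | head -n1)

# Emit a single RSS item describing the detected content
cat > /tmp/watcher-item.xml <<EOF
<item>
  <title>Change detected on watched page</title>
  <link>$FEED_URL</link>
  <description>$LATEST</description>
  <pubDate>$(date -R)</pubDate>
</item>
EOF
cat /tmp/watcher-item.xml
```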

Freelancing – lessons learned

Back in April, for various reasons in my personal circumstances, I decided to give freelancing a go. I quit my permanent job, and set myself up as a sole trader. I wish I had done this from the very start, back when I was at university…

Being self employed allows you to try your hand at a number of different roles before you decide to settle on a specialization – if you specialize at all! From an employer’s perspective, they are getting a skilled individual who they can keep on or let go of easily; from your perspective, you have the freedom to take on a variety of projects and diversify your experience fairly fast.

Here’s what I’ve taken away from my experience so far:

  1. Sole traders have greater freedom than temp workers
  2. You can be a sole trader and be in education or full-time employment
  3. Tax is complicated
  4. Take customers with you

Read more

Moving ownCloud from Ubuntu default repo to openSUSE build service repo


ownCloud is a popular self-hosted replacement to cloud storage services such as Dropbox, Box.net, Google Drive and SkyDrive: ownCloud lets you retain ownership of the storage solution, and host it wherever you want, without being at the mercy of service providers’ usage policies and advertising-oriented data-mining.

Recently the ownCloud developers asked the Ubuntu repo maintainers to remove the ownCloud server package from their repos.

The reason is that older versions of ownCloud have vulnerabilities that don’t necessarily get patched: whilst the original ownCloud developers plug the holes in the versions they support, they cannot guarantee that these fixes propagate to the code managed by distribution repos – and Ubuntu is widely used as a base for other distros. For example, Ubuntu 12.04 is still supported and forms the base for many derivatives, yet has ownCloud 5 in its repos – a version not managed by the ownCloud developers.

The ownCloud developers recommend using the openSUSE build service repository where they publish the latest version of ownCloud, and from which you can get the newest updates as they arrive.

If you’ve installed ownCloud from the Ubuntu 14.04 repositories, and you want to move over to the openSUSE build repo, here’s how you do it.

If moving up from ownCloud 5, consider migrating first to version 6 by way of a PPA or an older OC6 TAR… I’ll have to leave it up to you to find those for yourselves…

Backing up

These instructions are generic. You MUST test this in a VM before performing the steps on your live system.

Mantra: do not trust instructions/code snippets from the internet blindly if you are unsure of what exactly they will do.

Backup the database

Make a backup of the specific database used for ownCloud as per your database’s documentation.

For a simple MySQL dump do:

mysqldump -u $OC_USER "-p$OC_PASS" $OC_DATABASE > owncloud_db.sql.bkp

replacing, of course, the placeholders as appropriate.

Backing up the directories

If you installed ownCloud on Ubuntu 14.04 from the regular repos, you’ll find the following key locations:

  • the main ownCloud directory is /usr/share/owncloud (call it $OCHOME)
  • the $OCHOME/config directory is a symlink to /etc/owncloud
  • the $OCHOME/data directory is a symlink to /var/lib/owncloud
  • the $OCHOME/apps directory is where your ownCloud apps are installed

If this is not already the case, it wouldn’t hurt to change things to match this setup.

It would also be a very good idea to make a tar backup of these folders to ensure you have a copy should the migration go awry. You have been warned.

Moving apps, data and config folders

Move your ownCloud data directory to some location (for this example /var/lib/owncloud, but it could be anywhere); move your ownCloud config directory to /etc/owncloud.

It is simply a good idea not to have your data directory directly accessible under $OCHOME/data.

It is also good to keep the more frequently-changing apps directory in /var/owncloud-apps rather than lumped straight into the ownCloud home directory. Note that this directory also contains the “native” ownCloud apps, which get updated with each version of ownCloud – not just custom apps.

Once you have moved these folders out, $OCHOME should no longer have data and config symlinks in it. As these are symlinks you can simply rm $OCHOME/{data,config}

If you get an error about these being actual directories rather than symlinks… you haven’t actually moved them. If they do not exist at all, that’s fine.
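Assembled as commands, the moves above look roughly like this. The sketch rehearses them in a scratch directory that mimics the stock Ubuntu layout, so it can be run safely; on the real system you would drop the $SANDBOX prefix and use sudo:

```shell
# Rehearsal of the pre-uninstall moves, in a sandbox instead of the live system
SANDBOX=/tmp/oc-migration-demo
OCHOME=$SANDBOX/usr/share/owncloud

# Fake the stock layout: apps is a real directory, data/config are symlinks
mkdir -p "$OCHOME/apps" "$SANDBOX/etc/owncloud" "$SANDBOX/var/lib/owncloud"
ln -s "$SANDBOX/etc/owncloud"     "$OCHOME/config"
ln -s "$SANDBOX/var/lib/owncloud" "$OCHOME/data"

# The migration moves: apps out to its own home, the symlinks removed
mv "$OCHOME/apps" "$SANDBOX/var/owncloud-apps"
rm "$OCHOME/data" "$OCHOME/config"

ls "$OCHOME"   # should now contain no data, config or apps entries
```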

Uninstall and reinstall ownCloud

Uninstall ownCloud (do NOT purge!! – purging would also delete the configuration files under /etc/owncloud)

apt remove owncloud

And add the new repo as per the instructions in http://software.opensuse.org/download/package?project=isv:ownCloud:community&package=owncloud

For Ubuntu 14.04 this is

wget http://download.opensuse.org/repositories/isv:ownCloud:community/xUbuntu_14.04/Release.key
apt-key add - < Release.key
echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/community/xUbuntu_14.04/ /' >> /etc/apt/sources.list.d/owncloud.list
apt-get update && apt-get install owncloud

This makes the repository trusted (key download) then updates the sources and installs directly from the openSUSE repo.

NOTE – if you are following these instructions for Ubuntu 12.04 or any distro shipping a version older than ownCloud 6, you may want to consider upgrading to OC6 first before converting to the latest version 7 – make sure you test this scenario in a VM before doing anything drastic!

Restore files

The new ownCloud is installed at /var/www/owncloud

Remove the directory at $OCHOME, then move /var/www/owncloud to $OCHOME so that it takes up the exact same place your old ownCloud directory was at.

Disable the automatically added owncloud site

a2disconf owncloud

And optionally delete /etc/apache2/conf-available/owncloud.conf

Now remove the default data and config directories, and link back in the other directories that you had cautiously moved out previously

rm -rf $OCHOME/{data,config}
#ln -s /var/owncloud-apps $OCHOME/apps
ln -s /etc/owncloud $OCHOME/config

(the apps line is commented out – because you must check what apps you are specifically restoring before squashing the default apps directory)

Finally edit $OCHOME/config/config.php to be sure that it points to the correct locations. Notably check that the $OCHOME/apps location exists, and that the data folder is pointing to the right place (especially if you had to move it).
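A quick scripted sanity check for that last step, assuming the standard config key names (‘datadirectory’ is ownCloud’s key for the data path; verify the exact keys against your own config.php). It is rehearsed here on a sample file; point OCHOME at the real install when using it:

```shell
# Pull the configured data path out of config.php and check it exists on disk.
# Sample layout built in /tmp for demonstration purposes.
OCHOME=/tmp/oc-check-demo
mkdir -p "$OCHOME/config" /tmp/oc-check-demo-data

cat > "$OCHOME/config/config.php" <<'EOF'
<?php
$CONFIG = array (
  'datadirectory' => '/tmp/oc-check-demo-data',
);
EOF

# \K keeps only the quoted value after the key
DATA_DIR=$(grep -Po "'datadirectory'\s*=>\s*'\K[^']+" "$OCHOME/config/config.php")
echo "configured data dir: $DATA_DIR"
test -d "$DATA_DIR" && echo "data dir exists"
```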


Now go to your ownCloud main page in your web browser. You will be told that ownCloud needs to be updated to the newer version 7 – this will be done automatically.

Once done, ensure that everything is working as expected – add/remove files, navigate around ownCloud web, check that your apps are all working…


If you do need to revert,

apt remove owncloud
rm /etc/apt/sources.list.d/owncloud.list
apt update && apt install owncloud

Finally proceed to restoring the files as above – or from backup TARs

Additionally, you will want to restore the old version of the database.

mysql -u $OC_USER "-p$OC_PASS" $OC_DATABASE < owncloud_db.sql.bkp

Smash – the snack

At some point I’m going to compile a list of tips and recipes for lunching at the office – for situations where you only have a spoon, bowl, fridge, microwave and kettle, like many office and lab kitchens I’ve seen so far…

Here’s one such recipe.

Smash – a brand of instant mashed potatoes in the UK – is a fairly easy to come by commodity. Most of my friends shudder at the thought of it and wonder why I even keep any in my pantry.

Now I know better than to use it to serve anybody but myself – but I also know that it can be a perfectly decent fill-in for when you’re too lazy/tired to cook, and can be made to taste perfectly fine – it’s just desiccated potatoes. The trick is not to follow the instructions on the packet – the end result of doing so does indeed taste awful.

Thankfully, an equivalent product I used to eat in France, called Mousseline, gives much better instructions – which I simply re-use here.

  • 6-7 tbsp Smash flakes
  • 100 mL milk
  • 100 mL boiling water
  • knob of butter
  • salt & pepper

Put the Smash flakes in a bowl, along with the butter, salt and pepper, then add the milk.

Add the boiling water now, and stir repeatedly. At first it’ll start off as watery nothingness, and gradually coalesce into creamy potato-y snackage.

If necessary, pass it through the microwave for about 15 seconds before stirring – the flakes absorb the liquids better when hot, and the cold milk makes the boiling water tepid.

Add some peas (not the tinned stuff mind – heated from frozen is best) and ham or frankfurters for a more filling dish.

Voila, easy to make at the office, or in the evening if you want to be minimalist in effort.



On most distributions using GTK+ or Qt desktop environments, you can use a graphical password prompt to grant administrative rights to a graphical application by invoking gksu or kdesu – instead of the usual su command.

Strangely enough though, the Red Hat family of systems uses neither – instead relying on an independent tool called beesu.

I emailed the developer asking them what the motivation for a separate tool was, and wanted to share the answer.

Read more

Moving WordPress

I just moved my WordPress blog from its old location on a regular webhost to a new location on a VPS.

I just wanted to share some learning notes on how to go about it – I can imagine they’ll be useful to me too in the future!

[EDIT — I have just checked the official way of doing this and I must say, it seems much more involved. I think it might be old and many issues are now resolved, but for the sake of caution, here are their instructions {2013-06-13 : http://codex.wordpress.org/Moving_WordPress} I was using WordPress 3.9 during the upgrade]

  1. Get a dump of the database dedicated to your wordpress instance as an SQL file – it contains the actual posts and comments from your blog
  2. get a copy of your full wordpress directory – any media files uploaded are here, along with the actual PHP pages
  3. copy these over to the new location
  4. (unpack the wordpress web directory to the desired web directory, if necessary)
  5. re-create the same user in your new DBMS instance, and use the same password (you can find that in wp-config.php file in your wordpress web files), granting it the appropriate permissions
  6. restore the database by using the SQL dump file from your earlier backup – this will restore the entire database as a complete copy – for example, run the SQL file through your DBMS’s CLI tool
    • $> mysql -u wpuser -p"wp_password" wp_database < wpdb_dump.sql
  7. log in to the database and run the following, where “http://example.com” is the base URL of your new wordpress blog – wordpress redirects any access to wp-admin to this URL, so it needs updating or you’ll always go back to the old URL’s wordpress
    • update wp_options set option_value='http://example.com/' where option_name='home' or option_name='siteurl';
  8. In the wp-config.php file in the root of your wordpress directory, change the DB_HOST property to point to the new database location
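Step 8 can be done by hand in an editor, or scripted with sed. A sketch against a sample wp-config.php (the hostnames here are invented placeholders):

```shell
# Point DB_HOST at the new database server, rehearsed on a sample file
NEW_DB_HOST="db.newhost.example.com"   # placeholder: your new DB location

cat > /tmp/wp-config.sample.php <<'EOF'
<?php
define('DB_NAME', 'wpdb');
define('DB_USER', 'wpuser');
define('DB_HOST', 'old.dbhost.example.com');
EOF

# Replace whatever host is currently defined with the new one
sed -i "s/define('DB_HOST', '[^']*')/define('DB_HOST', '$NEW_DB_HOST')/" \
    /tmp/wp-config.sample.php
grep DB_HOST /tmp/wp-config.sample.php
```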

You should now be able to go to the new URL and everything should be as it was!


SSL on Apache and tunneling VPN with OpenVPN on Ubuntu

This post is now superseded by a friendlier and more efficient method: https://ducakedhare.co.uk/?p=1512

The following are a bunch of quick notes about setting up security certificates, enabling OpenVPN and forcing all traffic through a VPN tunnel, and adding SSL

It’s all tailored for Ubuntu 12.04 / 14.04 servers, and exists primarily as learning notes. I may or may not come to cleaning them up.

OpenVPN details and discussion can be found at https://help.ubuntu.com/14.04/serverguide/openvpn.html

A longer description of Apache SSL activation can be found at https://www.digitalocean.com/community/tutorials/how-to-create-a-ssl-certificate-on-apache-for-ubuntu-12-04

Read more

Checking for a Mac resource fork

Following a mishap with a backup some time ago in which I lost the contents of my text clippings, I have started investigating resource fork detection for my rsync pre-processor.

The pre-processor will eventually be python-based, with a couple of extra scripts for platform-specific operations that arise. I need it to be able to back up using rsync, or some Windows backup program with equivalent functions, to a FAT32 target, without any data loss.

Pre-processing the backup source directory includes:

  • split files greater than 4 GB into individual chunks – for backing up to FAT32 drives
    • mark chunks for use in restore process
    • use an exclusion for file sizes greater than 4 GB
    • remove chunks post sync
  • OS X specific – detect files with resource forks
    • determine if a child of a grouped directory, if so bubble up to grouped directory and tar-gz
    • else tar-gz the file itself
    • add the original file/directory to rsync exclusion list
    • remove the tar-gz post-sync
  • build a database of current tree state – to “detect” moved files and reduce transfer time
    • build hardlink repository of all files in the source directory
    • replace previous state database with current tree state and remove hardlink repo post-sync

From the restoration side, re-building the chunks and unpacking the tar’d files need to be handled.
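The chunking step can be done with coreutils’ `split`, and the restore side with `cat`. A minimal sketch of the split-and-restore round trip, with the chunk size shrunk to a few bytes for demonstration (a real run would use something just under 4 GB, e.g. `-b 4000M`):

```shell
# Round-trip demo of file chunking for FAT32 targets: split, then reassemble.
mkdir -p /tmp/chunk-demo && cd /tmp/chunk-demo
printf 'some large file contents\n' > bigfile

# -d gives numeric suffixes so the chunks sort (and restore) in order
split -b 8 -d bigfile bigfile.chunk.     # bigfile.chunk.00, .01, ...
cat bigfile.chunk.* > bigfile.restored   # restore side: concatenate in order

cmp bigfile bigfile.restored && echo "round trip OK"
```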

For detecting resource forks on Mac, this script should work:

if [ "old$(sw_vers | grep -Po '10\.[0-6]\.')" = "old" ]; then
    MYFILE="$1/..namedfork/rsrc" # 10.7+ path to the resource fork
else
    MYFILE="$1/rsrc" # different format pre-10.7
fi

FOUNDRES="yes$(ls -l "$MYFILE" 2>/dev/null | grep -Po "0\s+[0-9]+\s+[a-zA-Z]")"
# a zero-length resource fork makes grep return the matching string

# FOUNDRES is exactly "yes" if the resource fork is non-zero
if [ "$FOUNDRES" = "yes" ]; then
    echo "RSRC present on $MYFILE"
else
    echo "RSRC absent from $MYFILE"
fi

Use at will.

About that: Is TAILS an essential distro or just an added tinfoil hat?

A tech blogger put up a piece I came across on Tux Machines, asking whether TAILS, a security-oriented Linux distro designed to afford the user anonymity, was just another tinfoil hat for the over-imaginative conspiracy theorists.

I couldn’t just let this be, as I believe that TAILS is actually very legitimately useful to certain people and professions – namely journalists, students and activists – and that the article was likely to gain page views over time. Below is my own answer.

Original article is http://openbytes.wordpress.com/2014/05/16/tails-an-essential-distro-or-an-accessory-to-compliment-a-tin-foil-hat-for-the-average-user/

For the TLDR – TAILS is not aimed at the average home user, but at non-technical users who actually do need to take their online safety into serious consideration.

… it’s a bit of a straw man attack …

The real question is – where is the merit in deriding the approach and considerations TAILS addresses?

Read more