
Fixing Broken Kernel Packages in Debian/Ubuntu

Sometimes you just hit really bad luck, or you’ve done something without due caution. Having too small a /boot partition, or uninstalling the wrong package can cause a system to be non-upgradable, or even non-bootable.

Filled up boot partition

If the boot partition filled up, then kernel upgrades will continually fail until space is cleared. Unfortunately, this also means that attempts to uninstall kernels through APT will fail too, because the package manager must try to finish the last failed install operation before it can proceed to further work.
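You can confirm the situation by checking the free space on the partition:

df -h /boot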

You can use this script to forcefully remove excess kernel images, and then run apt-get -f install:



wget https://raw.githubusercontent.com/taikedz/handy-scripts/master/bin/rmkernel.sh -O rmkernel.sh

# Keep 2 most recent kernels
bash rmkernel.sh 2 | sudo bash

# Fix broken installation process
sudo apt-get -f install

This forcibly removes the excess kernels, then runs the dependency fix, hopefully completing the incomplete kernel installation that previously kept failing.

In future, during regular maintenance, remember to run sudo apt-get autoclean && sudo apt-get autoremove. You can automate this by placing an appropriate script in /etc/cron.daily/
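For instance, a minimal sketch of such a script (the name apt-cleanup is hypothetical; the file must be executable, with no dot in its name, for run-parts to pick it up):

#!/bin/sh
# /etc/cron.daily/apt-cleanup -- hypothetical daily cleanup script
apt-get autoclean
apt-get autoremove -y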

Debian/Ubuntu keeps booting to memory test / cannot find kernel

If no kernel can be found, the system cannot boot. You need to rescue the system at this point.

These steps describe the process using an Ubuntu Server DVD, but a similar workflow applies to pretty much any standard GNU/Linux system.

1. Boot from the Ubuntu Server installation DVD

To boot from DVD in a hypervisor, power off the VM, edit its configuration, and choose to mount a CD/DVD from the filesystem (or from a datastore in a hosted environment).

You may find you need to force entry into the BIOS configuration to ensure that the CD drive is booted from before the first hard disk.

2. Once booted in to Ubuntu, choose “Rescue a broken system” from the first menu.

You will be asked a few questions, such as network setup; answer as appropriate.

3. You will eventually be asked to choose a root partition; choose the appropriate one (usually the largest partition on /dev/sda)

If prompted to mount the separate /boot partition, do so

4. Get a shell “in the installer”; you will be informed that the target (the main system you are rescuing) is mounted at /target

You will need to copy the installer environment’s /etc/resolv.conf over to /target/etc/resolv.conf (unlink the existing /target/etc/resolv.conf first, though)
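A sketch of those commands, run from the installer shell before you chroot; the bind mounts are optional insurance, in case package operations later complain about missing /dev or /proc:

unlink /target/etc/resolv.conf
cp /etc/resolv.conf /target/etc/resolv.conf

mount --bind /dev /target/dev
mount --bind /proc /target/proc
mount --bind /sys /target/sys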

5. Switch to your target system by running chroot /target

You will now be in the same context as your original server. Run bash to get a bash shell (by default you start in sh)

6. Run the following. Note that the dpkg section is one pipeline; do not forget the “|” characters! This is a modified version of what exists in the rmkernel.sh script from above, which essentially purges all existing kernel installation data to start anew.

dpkg --list 'linux-image*' |
  grep ii |
  awk '{print $2}' |
  while read; do
    dpkg --force-all --remove "$REPLY"
  done

apt-get update && apt-get install linux-image-generic

7. Power off the machine, ensure there is no CD in the disk drive any more, and bring the machine back up; the system should now boot.


If the above still does not produce at least one bootable kernel, you may need to install a different/new kernel. Search for “linux-image” and install the latest:

# find a suitable kernel image
apt-cache search linux-image

# In this example, the package chosen from the above step is linux-image-4.4.0-109-generic
apt-get install linux-image-4.4.0-109-generic

You will now have a specific kernel version, which may or may not continue to receive updates; you need to consider moving to a new server, or further fixing the existing one.

Installing SliTaz GNU/Linux


Recently I’ve been playing with SliTaz GNU/Linux, a tiny distro that can be made to operate even the oldest of PCs!

This article is a short bootstrap to get you started with the essentials on SliTaz 4.0

What is SliTaz?

SliTaz is an independent GNU/Linux distribution, with its own package manager, repositories and kernel, focused on providing an OS for embedded devices and computers with extremely small amounts of power and storage.

It is extremely lightweight: its standard ISO is about 35 MiB in size, booting from cold to desktop in as little as 15 seconds, and ab initio it takes up 32 MiB of RAM with the Gtk Openbox desktop and no extra applications.

Whilst it can be used as a lightweight desktop environment, its more likely applications include

  • an embedded Linux
  • an SSH gateway
  • an easily reloaded web kiosk
  • a portable PC troubleshooting/rescuing disk
  • other such uses where slick features are shunned in favour of lightness and efficiency.

A GUI desktop environment is provided for those who are afraid of being in full-on command line mode, but to maintain its lightness, there are no traces of such heavy packages as LibreOffice or Firefox.

Out of the box you get

  • the lightweight Leafpad text editor (if you’re not content with nano or vi!)
  • the Midori web browser
  • the sakura terminal emulator
  • and mtPaint if you need to edit pictures…

and apart from that, not excessively more. That’s all you really need.

There’s a web-based GUI manager running on localhost for managing the computer, but make no mistake – this systems is more appropriate for seasoned Linux hobbyists who are OK filling in the documentation gaps…

There is even a Raspberry Pi version of SliTaz available, to squeeze the most performance out of your Pi.


GUI install

On the LiveCD, to configure SliTaz, boot into the “Gtk” version; then open the web browser, go to http://tazpanel:82 and enter the root login details. By default, the user is “root” and the password is “root”.

Once in TazPanel, you can manage the system, upgrade packages, install new software – and more!

Go to the final menu entry labelled “Install” and choose to “Install SliTaz”

For the purposes of this guide, we are just going to do a very simple install. If you’re comfortable with partitioning, go wild.

You will be offered the opportunity to launch GParted – click that button. You will be shown a map of the first hard drive. If you have multiple hard drives, BE CAREFUL with which one you choose – the operation we are about to perform will erase the disk you operate on.

Choose the disk in the disk chooser menu in the top right – probably /dev/sda if you have only one disk. Again CHOOSE WISELY.

Once a disk is chosen wisely, use the Device menu and choose to Create Partition Table

Then choose Partition menu: New partition

Leave the defaults and hit OK, then click the Apply button to apply the changes. At this point the disk is erased and a new partition is set up. You have been warned!

Exit GParted, scroll down and choose to “Continue installation”

Most options should be fairly self explanatory.

Under Hard Disk Drive, choose the drive you just erased in GParted (/dev/sda for example) and choose to format the partition as “ext4”

Set the host name to whatever you want.

Change the root password to something sensible.

Set the default user as desired.

Remember to tick “Install Grub bootloader” or you’ll have a non-bootable system…

Click “Proceed to SliTaz installation”. After a few seconds… SliTaz is installed. Reboot!

You’ll have to set up your locale and keyboard just once more and voila, a desktop Linux that boots in seconds!

Command line install

Here’s the simple recipe for installing SliTaz from the command line. Note that even if started from the LiveCD headless, this install will install a full GUI and take up around 100MB of space.

The first thing to know is that the installer is invoked by the command tazinst.

The second thing to know is that you need to create a configuration file for it.

The third thing you need to know is that you need to partition your disk first. Naturally, this is what we’ll do first.


Type these keys in order to erase /dev/sda and set up a single partition. If you have never done this before, read up on partitioning with fdisk first; it’s a whole topic on its own! Hit return after each line, of course.

fdisk -l
fdisk /dev/sda
o    (create a new partition table)
n    (create a new partition - just hit return to accept the defaults)
w    (write the changes to disk)

Great, you have a new partition on /dev/sda1. Now create the config file.

tazinst new configfile

Once you have created the configuration file, edit it.

vi configfile

Three key things you need to change are as follows (an example excerpt follows this list):

  • TGT_PARTITION – the partition you will be installing on – in our case, /dev/sda1 or whichever you configured earlier
  • TGT_FS – the file system you want to use – for example, ext4
  • TGT_GRUB – “yes” unless you intend on installing Grub manually afterwards.
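In the configuration file itself, those settings might look like this (an illustrative excerpt; check the generated file’s comments for the exact syntax of your tazinst version):

TGT_PARTITION="/dev/sda1"
TGT_FS="ext4"
TGT_GRUB="yes"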

Finally, run

tazinst install configfile

After a few seconds, the install will be finished and you can reboot.

Post-install customizations

SliTaz is very light. Extremely light. You might even say it’s missing some packages you would expect as standard. You should think about doing some initial setup…

tazpkg -gi vim
tazpkg -gi htop
tazpkg -gi tmux
tazpkg -gi sudo
tazpkg -gi iptables # ...and whatever else you want...
# one tazpkg invocation per item to install

/etc/init.d/dropbear start # start the dropbear SSH server

# add the dropbear SSH server to the startup daemons:
vim /etc/rcS.conf
#    RUN_DAEMONS="... dropbear"

# change the Grub boot timeout:
vim /boot/grub/menu.lst
#    timeout 2

# and remember to add your own users to the sudoers file (visudo)

And that’s about it. Some extra commands that may be different from what you may know from elsewhere:

poweroff # instead of shutdown
tazpkg recharge # sync package list
tazpkg info (package)
tazpkg description (package)
tazpkg search (string)
tazpkg get-install (package name) # install from repo
tazpkg get (package name) # download from repo
tazpkg install (TGZ file) # install from local file

Bonus – tpgi

Instead of directly using the restrictive tazpkg, try using my wrapper 🙂

Switch to root and run the following

tazpkg -gi git
tazpkg -gi bash
git clone git://github.com/taikedz/handy-scripts
cp handy-scripts/bin/tpgi /usr/bin/

This will set up the tpgi command which you can use to make life with tazpkg a little easier… run the command without arguments for help. Try:

tpgi install htop vim tmux sudo

Now you can install multiple packages from one line….!

tpgi search and gcc 3.4

Searches for packages containing the term “gcc” then filters the results for version 3.4

I Won’t Go Back to Buying Mac


Here’s a little topic I wanted to explore in written form – why I have used Mac for so long, why I still have a Mac as my main desktop…. and why despite this I won’t buy Mac again.

I Used to Love the Mac

My first computers were of course not mine – they were my dad’s. I have a vague recollection of us having a PC with 8” floppy drives and having to type commands… this was probably in 1987 or so. But that memory never really took hold, for very soon after, my dad bought a Mac: an LC II that I think is still in the cellar due to me insisting on not throwing it out.

It was graphical, it was friendly. It supported 16 colours (and not just 8 colours like many PCs still shipped with as standard). There was no command line, you could just click for everything. It was a revolution in home computing and we were on the cutting edge.

We were continually treated, with Macs, to the newest and greatest home technology: stable systems to run months without a single application crash (System 7.5.1 I particularly single out), advanced graphical UIs (Mac OS 9 was great comfort to the eye at the time), easily automated applications via AppleScript, including a fully scriptable Netscape Navigator; the first laptops and desktops with built-in Wifi, the first LCD desktops where the entire computer was hardly wider than the screen, the advent of UNIX-based systems on the home computer. Every Mac shipped with a full productivity suite included (what would become iWork), as well as a full media editing suite (photo editing, video sequencing, and audio production, which collectively would become iLife), and a couple of well-designed, full-on 3D games to boot. There was hardly anything you couldn’t do with a Mac I thought…. except perhaps write programs for Windows.

When the time came for me to go to university, I believed I would have to get a Windows PC to allow me to do some proper programming, not knowing that we’d be using many different, equally viable (or more so) systems to program on. It was a mistake I do not regret: it had great learning benefits, gave me an understanding of the Windows paradigm so many people endure, and the ability to operate in the average workplace. But after that laptop died (in a literal puff of smoke, after an ill-fated attempt to “repair” it), I was back to buying a Mac in 2007.

Even in 2011 I was agonizing over whether or not to spend hard-earned cash on a new MacBook Pro. I drew up my list of pros and cons, and decided, over a solitary steak and pint, that yes, I did want that Mac after all.

It would be the last Mac I would ever personally buy.

The Mac – the good

The year is 2015. I still have that MacBook Pro. And it still serves as my main workhorse for spinning up Linux virtual machines. 4 years on, and it’s still the most powerful computer in my home.

It has a quad-core, hyper-threaded i7 processor at 2.2 GHz, effectively showing up as 8 cores – the same processor family as found in entry-level business servers. I’ve upped the RAM to 16 GB. It has a 500 GB HDD.

Most computers even today ship with 4 GB RAM and a lesser i5 processor clocked at 1.7 GHz and not hyper-threaded, and still a 500 GB drive.

Needless to say, that Mac was a fantastic investment, as it remains more powerful than an equivalently priced Windows PC on today’s market.

So why will I never buy Mac again? Put simply: Apple has chosen to go where I will not follow.

Apple – the Bad

Even back in 2011, the Apple Genius who was trying to sell to me was extolling the benefits of the new MacBooks with no CD/DVD drive: “who uses CDs these days anyway?” Well I do, for one. I experiment with computing, and in doing so sometimes break my systems. I need to reinstall the system sometimes. The one time I needed to reinstall OSX, I had to purchase a brand new copy. Gone are the days of providing a free re-installation DVD. These days, you’re lucky if you can connect anything at all.

I don’t tie up my bandwidth with movies and music I have to wait for and download, online, every time I want to consume them. I still buy DVDs and CDs because, in case you haven’t noticed, online “purchase” does not allow you to own a copy – just the license to watch, if it’s still available on the provider’s website (remember mycokemusic.com?). We do not own “our” online movies and music – only the permission to watch them, which can be revoked at any time – with no refunds.

I have become a near-full Linux convert. I use Linux for my personal machines at work, my secondary and tertiary laptops run Linux, and my private cloud servers all run Linux.

Only my Mac doesn’t run Linux, and that only because when I tried to install Linux on it, the graphics card and wireless card decided to throw a hissy fit. Apple’s choice of highly-proprietary components meant that despite the best efforts of open source developers, Apple held on closely to the proprietary mantra: the machine is Ours, you only have a license to use it. You can’t even “own” something as rustic as a tractor these days.

I feel I am not in control of my Mac because I have been told what I can and cannot run on it. I own the machine, but not the software. If it breaks, I just get to keep the pieces – not the ability to tweak and fix.

My hardware today

My preferred computer for “getting things done” nowadays, the one I am currently typing away on, is a Lenovo Flex 15. Lenovo makes very good hardware; its pro line, the ThinkPads, are durable business machines, much like the MacBook Pros in quality.

They’re also generally highly compatible with open source drivers and mainstream Linux distributions. Where I’d hesitate before buying a Dell or HP laptop as to whether I think Linux will work on them, I have virtually no qualms when buying a Lenovo laptop, knowing it will likely take the erasure of Windows just fine. Not that this necessarily won’t change in the future.

Open Source – Freedom and Privacy

Lenovo was in the news recently for a piece of advertising software called Superfish they had included in new laptops and desktops for a few months in their Windows deployments. This particular set of software and configurations meant that not only were users seeing even more advertising in their web browsing experience, but the advertising solution also broke the core security mechanisms that keep all other parts of the system secure. Lenovo makes great hardware, but they aren’t immune to woefully bad decisions.

Thankfully, they reverted their decision to include this software as soon as their technical analysts realized what had happened, and issued fixes, but it has damaged the company’s reputation.

People like me, who chose to erase Windows completely, were not affected.

This is why I use Open Source Free Software: to maintain control over my own digital assets, and freedom in my digital life. I am fully aware that my digital identity is tightly woven into my real-world identity, whether I want it to be or not.

I now run Linux on nearly everything – more specifically, I run Ubuntu on my laptops, and a mix of Ubuntu and CentOS on my servers.

I can choose what software is on it. I can choose what software is not on it (have you noticed how there is some software on Windows that you cannot get rid of for love nor money… pestering you for upgrades at best, selling you out at worst). I don’t have to pay an arm and a leg for it either.

What’s more, I remain in control of my data. Not only on my computer, but also in the Cloud. Windows will try to shove you onto SkyDrive and Office 365 Online. Apple is shoe-horning you into iCloud services (yeah, sync your photos all over the place, you can trust Apple… hmmm)… Google is trying to get into both spaces of storing all your photos “for” you and getting up in the online office suite as well. You can’t get an offline Adobe Creative Suite anymore – just keep up the eternal payments if you want to continue being able to access your Photoshop and Illustrator projects. At least they didn’t discontinue their editing suite altogther like Apple did with Aperture. Gone is your investment.

If I ever stopped paying once for any of these applications or services, or if the service is suddenly discontinued, I would stand to lose all my data – everything I’ve purchased, everything I’ve created, either because I no longer have the software to read the files, or because the files themselves have been whisked away to an online vault “for my convenience”. That’s why there’s hardly any storage on Chromebooks. Surrender your data to the Cloud.

I am staying firmly on Linux and virtual private servers that I control and can pull data off of as I wish. I can fully program the computer to make it do what I want – and stop it from doing things I don’t want it to do (granted, some tasks are easier than others, but at least it’s actually possible in the first place).

One Linux distribution in particular, Ubuntu (the very same I use!), tried to follow the Big Boys like Apple, Google and Microsoft: Canonical announced a partnership with Amazon in the form of search functionality, where any keywords used for a file search were also sent to Amazon and other online providers. Thankfully, it was easy to purge from the system the minute I heard of it. You cannot defenestrate such “features” with the other Big Three.

Building Trust

I use open source software from centralized trusted software repositories (which were the spiritual precursors to app stores) – I don’t need to hunt around on the Internet to find some software whose source I do not know. On Windows, I constantly need to fret before installing an app: Does it have a virus? Does it have a trojan? Will it send all my purchasing, credit card details, photos and other identity to some unknown third party?

What I get from the centralized repositories constitutes my base web of trust –  and that base web offers a collection of software so large and varied that I know I can get a tool for any job, be it office, media, programming, scientific or leisure, and more.

No piracy = no legal troubles AND no viruses.

Or at least, a vastly reduced risk compared to downloading anything willy-nilly from random websites. And personally, I expand that web of trust with informed decisions.

I use LibreOffice which allows me to read and save in Microsoft’s document format if I need to, but I mainly use the Open Document Format to ensure I can still edit them in decades to come, and that I can share documents with anybody who does not want to shell out for Office Pro, Office 365 or GoogleDocs.

I use ownCloud for my file synchronization so that I can keep control over what is stored, and where. It replaces services such as Dropbox, Google Drive, SkyDrive and iCloud without trying to force me to store online-only and forgo local copies. If my account on one of those services were terminated, there’s no guarantee I’d still have the data it ran away with. ownCloud is in my control, and I know I have the copies locally too.

I use Krita and the GNU Image Manipulation Program instead of Photoshop, Inkscape instead of Illustrator, Scribus instead of InDesign, digiKam instead of Lightroom. I don’t need to be online to do any of this.

I choose freedom.

In the words of Richard Stallman and the Free Software movement: “Free Software is a matter of freedom, not price.”

Piracy might make things surreptitiously free (as in “a free lunch”), but it still ties you to the control systems and spyware that are rife on the Internet.

Apple, like so many other computer manufacturers and software licensors, has taken a route I cannot go down, one I will not follow. It has taken a route that specifically makes it difficult for me to remain free. It has taken a route that stifles experimentation and learning. It has taken a route that privileges perpetually tying-in my spending on one side, as well as the monetization of my identity on the other, whilst at the same time denying me ownership both of what I purchase and what I create, and where the only solutions are either piracy… or just leaving altogether.

(Graphic of my own creation, released under CC 4.0 Attribution Share-Alike. Anyone who wants to make a better derivative is most welcome!)

Moving “/” when it runs out of space (Ubuntu 14.04)


My root (“/”) partition filled up nearly to the brim recently on one of my test servers, so I decided it was time to move it elsewhere… but how?

You can’t normally just add another disk and copy files over – there’s a bit of jiggery-pokery to be done… but it’s not all that difficult. I’d recommend doing this at least once on a test system before ever needing to do it on bare a metal install…

What you will need:

  • a LiveCD of your operating system
  • about 20 minutes of work
  • some time for copying
  • A BACKUP OF YOUR DATA – in case things go horribly wrong
  • a note of which partition maps to what mount point

For reference, my partitions were initially laid out like this:

/dev/sda1 : /boot
/dev/sda3 : /
/dev/sda5 : /home

I then added a new disk to my machine


In this walk-through, I will refer to the target PC, the one whose “/” needs moving, as “your PC” from now on. If you’re using a VM, that’s the machine I am referring to; you needn’t do anything in the host.

Note that in my setup, the “/boot” and “/home” directories are on their own partitions. If you don’t have this as your standard setup, I highly recommend you look at partitioning in this way now – it helps massively when doing long-term maintenance, such as this!

1/ Boot your PC from the LiveCD.

I recommend you use the same CD as from where you installed the OS initially, but probably any Linux with the same architecture will do (x86_32, AMD64, ARM, etc)

Once the Live environment is started, open a command line, and switch to root, or make sure you can use sudo.

2/ Prepare the new root

Use lsblk to identify all currently attached block devices.

I am assuming that /dev/sdb is the new disk. Adjust these instructions accordingly of course.

You want to first partition the new drive: run the command `fdisk /dev/sdb` as root

Type `o` to create a new partition table – you will be asked for details, adjust as you wish or accept the defaults

Type `n` to create a new partition. Adjust at will or accept the defaults

Type `w` to write the changes to disk.

As root, run `mkfs.ext4 /dev/sdb1`

Your new drive is ready to accept files…

3/ Copy the files

Make directories for the old and new roots, and copy the files over

mkdir newroot
mkdir oldroot
sudo mount /dev/sda3 oldroot
sudo mount /dev/sdb1 newroot
sudo rsync -av oldroot/ newroot/

Note: in the rsync command, be sure to include the slashes at the end: “oldroot/ newroot/” and not “oldroot newroot”!
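To illustrate the difference:

rsync -av oldroot/ newroot/ # copies the contents of oldroot into newroot
rsync -av oldroot newroot/  # would create newroot/oldroot instead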

Go do something worthwhile – this copy may take some time, depending on how full the partition was…

4/ Modify fstab

Run the following command to get the UUID of the new drive:

sudo blkid /dev/sdb1

Keep a copy of that UUID

Edit your fstab file

sudo vim newroot/etc/fstab

Keep a note of the old UUID

Change the UUID in the line for your old / partition to the new UUID you just obtained, and save.
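For illustration, the root entry might change like this (both UUIDs here are invented for the example):

# before:
UUID=0a1b2c3d-1111-2222-3333-444455556666 / ext4 errors=remount-ro 0 1
# after:
UUID=9f8e7d6c-7777-8888-9999-000011112222 / ext4 errors=remount-ro 0 1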

5/ Edit the grub.cfg

Mount your old /boot partition into the *new* location

sudo mount /dev/sda1 newroot/boot

Now we edit newroot/boot/grub/grub.cfg

sudo vi newroot/boot/grub/grub.cfg

Locate instances of the old UUID and change them to the new UUID

Quick way: instead of using `vi`, you could use `sed`:

sudo sed -e 's/OLD-UUID-FROM-BEFORE/NEW-UUID-ON-NEW-DISK/g' -i newroot/boot/grub/grub.cfg

Of course, adjust the UUID strings appropriately. If you have never used sed before, read up on it. Keep a copy of the original grub.cfg file too, in case you mess it up first time round.

In the above command, the “-e” option defines a replacement pattern ‘s/ORIGINAL/REPLACEMENT/g’ (where ‘g’ means ‘globally’, i.e. throughout the entire file); the “-i” option indicates that the file specified should be modified in place, instead of writing the changes to stdout and leaving the file unmodified. With the “-r” option, you can also make use of extended regular expressions, including capturing groups.

After making the change, reboot. Remember to start from the hard disk: remove the Live CD from the slot.

6/ Reboot, and rebuild grub.cfg

If all has gone well, you should now find your Ubuntu install rebooting fine. Open a terminal and run

df -h

See that your root is now mounted from the new disk, with the extra space!

There’s just one more thing to do – make the grub.cfg changes permanent. Run the following:

sudo update-grub

This will update the grub config file with relevant info from the new setup.

You have successfully moved your “/” partition to a new location. Enjoy the extra space!

Watcher-RSS: Your own, Personal, Feeder


I finally got around to putting together some initial code for that thing I wanted – a script to detect changes in a page and produce an RSS entry as an outcome.

“watcher-rss” is just that – a simple script that can be called by cron to check a page for a significant area, and generate an RSS “feed” in response.

It’s designed such that you need to define a bash handler script that sets the required variables; after that it can generate an RSS entry in response to anything. Read more

Moving ownCloud from Ubuntu default repo to openSUSE build service repo

ownCloud example

ownCloud is a popular self-hosted replacement to cloud storage services such as Dropbox, Box.net, Google Drive and SkyDrive: ownCloud lets you retain ownership of the storage solution, and host it wherever you want, without being at the mercy of service providers’ usage policies and advertising-oriented data-mining.

Recently, the ownCloud developers asked the Ubuntu repo maintainers to remove the ownCloud server package from their repos.

The reason is that older versions of ownCloud have vulnerabilities that don’t necessarily get patched: whilst the original ownCloud developers plug the holes in the versions they support, they cannot guarantee that these fixes propagate to the code managed by the repos – and Ubuntu is widely used as a base for other distros. For example, Ubuntu 12.04 is still supported and forms the base for many derivatives, and has ownCloud 5 in its repos – but that package is not managed by the ownCloud developers.

The ownCloud developers recommend using the openSUSE build service repository where they publish the latest version of ownCloud, and from which you can get the newest updates as they arrive.

If you’ve installed ownCloud from the Ubuntu 14.04 repositories, and you want to move over to the openSUSE build repo, here’s how you do it.

If moving up from ownCloud 5, consider migrating first to version 6 by way of a PPA or an older OC6 TAR… I’ll have to leave it up to you to find those for yourselves…

Backing up

These instructions are generic. You MUST test this in a VM before performing the steps on your live system.

Mantra: do not trust instructions/code snippets from the internet blindly if you are unsure of what exactly they will do.

Backup the database

Make a backup of the specific database used for ownCloud as per your database’s documentation.

For a simple MySQL dump do:

mysqldump -u $OC_USER "-p$OC_PASS" $OC_DATABASE > owncloud_db.sql.bkp

replacing, of course, the placeholders as appropriate.

Backing up the directories

If you installed ownCloud on Ubuntu 14.04 from the regular repos, you’ll find the following key points of the layout:

  • the main owncloud directory is /usr/share/owncloud (call it $OCHOME)
  • the $OCHOME/config directory is a symlink to /etc/owncloud
  • the $OCHOME/data directory is a symlink to /var/lib/owncloud
  • the $OCHOME/apps directory is where your ownCloud apps are installed

If this is not already the case, it wouldn’t hurt to change things to match this setup.

It would also be a very good idea to make a tar backup of these folders to ensure you have a copy should the migration go awry. You have been warned.

Moving apps, data and config folders

Move your ownCloud data directory to some location (for this example /var/lib/owncloud, but it could be anywhere); move your ownCloud config directory to /etc/owncloud.

It’s probably simply a good idea to not have your data directory directly accessible under $OCHOME/data

It is also probably good to keep the more frequently changing apps directory in /var/owncloud-apps, rather than lumped straight into the ownCloud home directory. Note that this directory also contains the “native” ownCloud apps, which get updated with each version of ownCloud – not just custom apps.
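If yours are real directories rather than symlinks already pointing to those locations, the moves might look like this (a sketch assuming $OCHOME is /usr/share/owncloud as above; adjust to your layout):

OCHOME=/usr/share/owncloud
mv "$OCHOME/data" /var/lib/owncloud
mv "$OCHOME/config" /etc/owncloud
mv "$OCHOME/apps" /var/owncloud-apps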

Once you have moved these folders out, $OCHOME should no longer have data and config symlinks in it. As these are symlinks you can simply rm $OCHOME/{data,config}

If you get an error about these being actual directories that cannot be removed because they are not empty… you haven’t actually moved them. If they do not exist, of course, that’s fine.

Uninstall and reinstall ownCloud

Uninstall ownCloud (do NOT purge!!)

apt remove owncloud

And add the new repo as per the instructions in http://software.opensuse.org/download/package?project=isv:ownCloud:community&package=owncloud

For Ubuntu 14.04 this is

wget http://download.opensuse.org/repositories/isv:ownCloud:community/xUbuntu_14.04/Release.key
apt-key add - < Release.key
echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/community/xUbuntu_14.04/ /' >> /etc/apt/sources.list.d/owncloud.list
apt-get update && apt-get install owncloud

This makes the repository trusted (key download) then updates the sources and installs directly from the openSUSE repo.

NOTE – if you are following these instructions for Ubuntu 12.04 or any distro shipping a version older than ownCloud 6, you may want to consider upgrading to OC6 first before converting to the latest version 7 – make sure your test this scenario in a VM before doing anything drastic!

Restore files

The new ownCloud is installed at /var/www/owncloud

Remove the directory at $OCHOME, then move /var/www/owncloud to $OCHOME so that it takes up the exact same place your old ownCloud directory was at.
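In command form, that might be (again assuming $OCHOME is /usr/share/owncloud, and that your backups are safe):

OCHOME=/usr/share/owncloud
rm -rf "$OCHOME"
mv /var/www/owncloud "$OCHOME"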

Disable the automatically added owncloud site

a2disconf owncloud

And optionally delete /etc/apache2/conf-available/owncloud.conf

Now remove the default data and config directories, and link back in the other directories that you had cautiously moved out previously

rm -rf $OCHOME/{data,config}
#ln -s /var/owncloud-apps $OCHOME/apps
ln -s /etc/owncloud $OCHOME/config

(the apps line is commented out – because you must check what apps you are specifically restoring before squashing the default apps directory)

Finally edit $OCHOME/config/config.php to be sure that it points to the correct locations. Notably check that the $OCHOME/apps location exists, and that the data folder is pointing to the right place (especially if you had to move it).


Now go to your ownCloud main page in your web browser. You will be told that ownCloud needs to be updated to the newer version 7 – this will be done automatically.

Once done, ensure that everything is working as expected – add/remove files, navigate around ownCloud web, check that your apps are all working…


If you do need to revert,

apt remove owncloud
rm /etc/apt/sources.list.d/owncloud.list
apt update && apt install owncloud

Finally proceed to restoring the files as above – or from backup TARs

Additionally, you will want to restore the old version of the database.

mysql -u $OC_USER "-p$OC_PASS" $OC_DATABASE < owncloud_db.sql.bkp

Mounting Drives in Linux


Mounting drives in Linux is a task that sometimes needs to be performed when the auto-mounting mechanism doesn’t apply, and for neophytes it can be challenging. The forums are replete with problems about mounting drives, systems not mounting drives upon plugging in a USB stick or inserting a CD, and permissions confusion.

The following post aims to explain as many parts of the manual process as reasonable, covering the /dev folder, mount and umount commands, fstab, umask and some particularities on filesystems and newly created disks.

The topic is fairly heavy, with many offshoot topics, and I want to keep this post as straight-to-the-task as possible, so a lot of the explanations will urge you to look up info elsewhere if you want more in-depth discussion. Generally, doing a web search on the name in underlined italics will be sufficient. I also use bold text for example snippets that you’ll need to replace, and pink text for text you would type at the command line, with green monospace text reserved for output.

Questions answered

  • How do I mount my USB key in Linux?
  • Why does my USB always mount as root?
  • How do I automatically mount a drive in Linux?
  • Why can’t I write to my USB in Linux?
  • How do I use the mount command?


Response to: Will 2014 be the “year of the Linux Desktop”?

An open poll for opinions on LinuxVoice.com asks whether the tired and still popular question “is 20XX going to be the year of the Linux Desktop?” is still relevant.

My take on it is below, but in brief (TL;DR): it is no longer relevant technologically, it is relevant and in progress from an industrial point of view, and it is most definitely still relevant when it comes to users at home with no technical skills. The question beyond that is: do we even want non-techies using Linux?

Installing applications on the GNU/Linux command line

This post answers these questions:

  • How do I install software on Ubuntu/Linux Mint/Bodhi/(a Linux based on Debian) from the command line?
  • How do I install software on CentOS/Korora/Fuduntu/(a Linux based on Fedora) from the command line?
  • How do I use apt-get/yum?
  • When should I use apt-get update?
  • When should I use yum update?
  • What is the build-essential package for?
  • Why should I install the “Development Tools” package group?
  • I tried installing from source using the ./make command but I get lots of build errors.


There are several ways to install applications on Linux, depending on your distribution, but for the purpose of this particular article, I am going to focus on the Debian family, which use the apt-get utility – this includes the more popular distributions, Ubuntu and Linux Mint – and the Fedora family, which use the yum utility, as do CentOS and Scientific Linux.

One way to install applications is to use the graphical software manager, but I try not to use that. I decided to get more familiar with installing from the command line, for two reasons: a number of programs you’ll come across may require it, and more importantly, if the install fails, it’s best to be able to see the full list of messages from the install – these are generally hidden by the automatic installers.

Repositories and package managers

To start off with, package managers are tools that download your software from repositories and install it, keeping track of what’s installed, and in which version.

In the most simplistic terms, the repositories are online servers that host numerous versions of a vast number of applications, and the package manager is the utility that connects to these repositories, and installs applications, which come in the form of a number of packages. Depending on the Linux distribution, the corresponding repository may systematically make the latest versions of applications available, or only the latest known stable and fully tested version.

For example, Fedora tends to have the most up to date applications; the repository servers are maintained by Red Hat (who make Fedora) who make these latest versions available through their servers.

On the other hand, Debian will rigorously test and approve packages before releasing them to their main Stable repository. This emphasis on stability means that packages available normally are quite old already, unless you opt to subscribe to the “Testing” repository, or even “Sid”, the so-called “unstable” repository.

The Ubuntu family, like many derivative families, tend to be in between these two extremes, releasing fairly recent versions, but not after a certain amount of testing.

Other offshoot distributions may also use another distro’s repository so long as they are compatible. Thus Linux Mint, which is based on Ubuntu, uses Canonical’s repository primarily, whilst also providing their own.
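These repositories are configured in plain text files on the system. On an APT-based distro, for instance, they are listed in /etc/apt/sources.list (and in /etc/apt/sources.list.d/); an entry looks something like this illustrative Ubuntu 14.04 line:

deb http://archive.ubuntu.com/ubuntu trusty main restricted universe multiverse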

You may think a repository then is hardly more than an App Store for GNU/Linux, but there’s a more technical side to it – these repositories do not only hold applications, but libraries, code packages that other applications can use. Windows users are perhaps more familiar with the “missing DLL” errors, Mac users probably know they need to hunt for other apps to support the app they are trying to run.

Each GNU/Linux package also comes with a list of packages it depends on. The package manager on your computer then figures out what it needs to install first, what versions and in what order, before installing the package you told it to install. It will make sure another program that uses a different version of the package doesn’t mess with your first app; and when you come to uninstalling the package, it will also remove any packages that are no longer necessary in your system.
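You can inspect a package’s declared dependencies yourself; both families provide query commands for this:

apt-cache depends <package>
yum deplist <package>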

Package manager applications

Packages can come in different formats depending on distro, and the trio of package type, package installer and package manager identifies the distribution family of any distro.

Ubuntu, Linux Mint, and Knoppix, all popular distros in their own rights, use the DEB packages as prescribed by their parent distribution Debian, installed by the dpkg package installer, nowadays piloted by the APT package management tools.

Mageia, Red Hat Enterprise Linux and Yellow Dog Linux are all members of the Red Hat/Fedora family, and thus use RPM packages, the rpm package installer, and the yum package manager (which came from Yellow Dog itself: yum stands for “Yellowdog Updater, Modified”).

ArchLinux and its derivatives such as Chakra and Manjaro use specially structured .tar.gz files, and the package manager and installer pacman.

Slackware breaks the mold in that it just has .tgz files, and for a long time it did not have a package manager. A number of third-party tools arose in the child distros, including slapt-get, netpkg and slackpkg, and finally swaret, which features dependency resolution – something the others do not support themselves.

A number of other systems and families exist, but those are the most common. For the rest of this article, I will focus on apt-get and yum: they are the two with which I am the most familiar, and they are used in the two largest and most common families.

The command line, and sudo


In the following text, command prompts are indicated by starting with “$>” – so if a command is listed as

$> apt-get update

this means to open the terminal program and run the command “apt-get update”.

About sudo

You will find most commands in this article start with the word “sudo”.


sudo (pronounced “soo-doo” by some, I say it “soo-doh” like “pseudo”) is a special command, which means “run the following command with root privilege.” Most of the time, you are best not running any commands as root – this is a failsafe to prevent you from doing something silly, like deleting your entire system whilst performing an everyday command. Which, yes, you can end up doing if you’re not careful. sudo tells the machine you really mean it. Before it does anything, it will ask you for your password, and will then check if you are indeed allowed to run admin commands. If you run a second sudo command soon after, it will not ask you again, with a timeout depending on distro.

First time setup

Before you do any installation, the package manager needs to be aware of all the latest versions, and crucially, dependencies. Before a session of installing with APT, you must execute:

$> sudo apt-get update

This will update APT’s package library.


YUM on the other hand updates its repository list every time it is run, so you don’t need to worry about this.

After freshly installing a distro, it is useful to install the development and supporting build tools you are likely to need in the future – for APT this is:

$> sudo apt-get install build-essential

For YUM this is:

$> sudo yum install @development-tools

This will install a number of other packages that are generally required to build applications from source, most notably the GNU C and C++ compiler suites and libraries, as well as python build tools and perl tools.

Makefiles, the build scripts which usually ship with source code downloaded from the web, depend on these tools being installed on your system; trying to run make without them will generate a slew of errors.


The basic APT command for installing a package is

$> sudo apt-get install <package>

Under YUM, there is a difference between a single package with dependencies, and a named group of packages. For an individual package, the following is used:

$> sudo yum install <package>

For package groups, the following two lines are equivalent:

$> sudo yum install @group-package
$> sudo yum groupinstall "Group Package"

See the examples from the previous section for comparison.

You will be asked whether to proceed, after having been given a summary of what packages will be updated and how much extra space will eventually be taken on your system.
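A few companion commands you will likely want around installs, following the same pattern in both families:

$> apt-cache search <term>
$> sudo apt-get remove <package>

$> sudo yum search <term>
$> sudo yum remove <package>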


Building from source code

If you download the source code for a package from the web, you’ll normally find a README and a ‘Makefile’. The Makefile contains instructions for building in various modes. Most often, simply switching to the directory and running the following will work (but please, do always read the README file first – do it the courtesy of its name!):

$> make
$> sudo make install

This also holds true if you’re given an installer file – for example, the VirtualBox Guest Additions has a script for installing the VirtualBox add-ons, which requires the build-essential package to be installed. It doesn’t tell you so though – instead, it advises that it couldn’t find gcc (the C compiler).

Updating your OS

Depending on which distro you are using, you may need to upgrade your system frequently, occasionally, or not at all.

Debian, RedHat and CentOS tend to make any individual release supported for a long time, meaning that they will continue to actively develop and push security fixes and bugfixes for that release. You probably will only truly need to reinstall the system once every 7 years or so, a trait which makes these distros suitable for servers.

Fedora on the other hand likes to only have the latest software on hand, and regularly releases new distro versions every six months. Each is supported only for a year after the following version is released, which means under the current six-month cycle, any individual release is only supported for 18 months after its initial release. You’ll need to re-install the system again after that time. In this situation, it is a good idea to isolate your /home directory from the system. For people who tend to upgrade their machines regularly, or tend to experiment a lot on secondary systems (and break these in the process), this is not so much of an issue.

Linux Mint advises that you should be able to keep your current installation as long as you wish: from the Linux Mint upgrade notes: “Unless you need to, or unless you really want to, there’s no reason for you to upgrade.” – check the Linux Mint Community pages for more info. I personally disagree with this stance, as it seems to imply that you can choose to stay on a release that is no longer supported, from a security standpoint. As stated in the linked page, any one release only receives updates for 18 months until it is abandoned. Unless you are yourself well versed in security maintenance (and even then), it is probably not a good idea to stay on a system that is not receiving security updates.

Finally, there is also the rolling release model, adopted for example by Linux Mint Debian Edition and ArchLinux, which simply updates the current installation as and when you run the upgrade function of the package manager.


That’s the end of the tutorial. Have fun installing lots of fun apps – take a look at this list for a start, and happy installing!