
Install a secure web server on Linux

Setting up a secure connection on your Apache web server is straightforward on Linux: all the tools are at your disposal, and in just a few commands you can be fully set up.

The following instructions are for Ubuntu and CentOS, and cover generating a self-signed certificate.
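
For a flavour of the self-signed route, here is a minimal sketch using OpenSSL; the file paths and the 365-day validity are examples of my own, not part of the original instructions.

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ssl/private/server.key \
    -out /etc/ssl/certs/server.crt
# Then point your Apache SSL virtual host at these files via the
# SSLCertificateFile and SSLCertificateKeyFile directives, and reload Apache.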

For an overview of free and cheap SSL certificates, see http://webdesign.about.com/od/ssl/tp/cheapest-ssl-certificates.htm. These certificates from Certificate Authorities only certify that the certificate was issued to the person who controls the domain. They are fine for internal sites and personal home pages, but not for eCommerce sites.

For an overview of Enhanced Validation certificates (more expensive but more globally trusted), see http://webdesign.about.com/od/ssl/tp/cheapest-ev-ssl-certificates.htm. These certificates are issued against a real-world check of your identity, thus carrying a higher cost and higher trust. They are suitable for high-traffic sites and commercial sites that want to be properly identified; they are overkill for small project sites and testing.

LVM Cheat Guide

This article is also featured on my professional blog at http://helpuse.com

There are a number of commands to know for basic Logical Volume Management; the most effective approach is to remember the three layers and how they interact, so that you can manage LVM volumes efficiently and autonomously.

I. Devices and Volumes

Physical devices

On the disk partitions side, there are:

  • An actual device: a hard disk, SSD, USB drive, etc.
    • for example, /dev/sda or /dev/hdb
  • Partitions on the drives
    • for example /dev/sda1 or /dev/sdb3

The tools to manipulate these are

  • lsblk – to identify block devices easily
    • or df if lsblk is not available
  • fdisk – for partitioning

Logical constructs

In LVM there are 3 layers:

  • The Physical Volume
    • rather than referring to a device, it actually refers to a partition on a device
    • It is generally also referred to with the partition name
    • for example /dev/sda1 or /dev/sdb3
  • The Volume Group
    • this identifies a grouped pool of Physical Volumes that can be used together in the group
    • it is referred to by name, for example LvGroup (its Logical Volumes appear under /dev/LvGroup/)
  • The Logical Volume
    • a collection of Physical Volumes from the same group
    • There can be multiple Logical Volumes per Volume Group
    • The Logical Volume looks to applications like a single partition
    • A Logical Volume can incorporate or release Physical Volumes in its group
    • For example, /dev/LvGroup/LvVolume (equivalently /dev/mapper/LvGroup-LvVolume)

The tools used to manage these are divided into three sets, each with multiple operations:

  • pv*, vg* and lv*
  • for all three, *scan, *display, *create
  • for vg and lv, the added operations *extend, *remove
  • each set has many more of its own operations, use tab completion on the start of the command-set to show them.

II. Operations

The easiest way to remember the order of operations is to think of it this way: A physical device gets divided into partitions, and the partitions are reassembled into groups to form logical volumes.

As such, the first operations divide the physical devices into partitions; each partition is then prepped, added to the appropriate volume group, and assigned to a logical volume, which is expanded to incorporate it. Finally, the filesystem needs to be expanded to the full extent of the volume.
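
For orientation, here is a quick end-to-end sketch of the whole sequence; the device name /dev/sdb1 and the names vg_data and lv_data are examples of my own.

# Assumes /dev/sdb1 is a fresh partition of type 8e (Linux LVM)
sudo pvcreate /dev/sdb1                               # prep the partition as a Physical Volume
sudo vgcreate vg_data /dev/sdb1                       # create a Volume Group containing it
sudo lvcreate --extents 100%FREE -n lv_data vg_data   # one Logical Volume using all free space
sudo mkfs.ext4 /dev/vg_data/lv_data                   # put a filesystem on it
sudo mount /dev/vg_data/lv_data /mnt                  # and mount it

Each of these steps is detailed below.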

1. Device Preparation : Partitioning

Identify or create a partition you want to add to your LVM space.

You can use sudo fdisk /dev/sdX to create or manipulate partitions.

The partitions you want to add to volume management must have the partition type 8e : “Linux LVM”

If the partitions you are creating are on the same device as one your system is currently using, you will need to get the kernel to re-read the partition table (for example with partprobe), or even reboot if your root partition resides there.

2. Prepare the partition for LVM : Physical Volume

Use pvscan to identify existing Physical Volumes.

Use pvdisplay for detailed information about each.

Use pvcreate $PARTITION (where $PARTITION is the partition device, for example /dev/sdb1) to add physical volume information to the partition.

Use pvscan to confirm that it is recognized.

3. Associate the Physical Volume : Volume Group

Use vgscan to identify existing Volume Groups

Use vgdisplay to print detailed information about them.

a. Creating new Volume Groups

Use vgcreate $VOLUMENAME $PV to create a new Volume Group

b. Add a Physical Volume to an existing Volume Group

Use vgextend $VOLUMENAME $PV

4. Assign the Physical Volume : Logical Volume

Use lvscan to identify Logical Volumes attached to your machine.

Use lvdisplay to get detailed information

a. Creating new Logical Volumes

Use lvcreate --extents 100%FREE --name $LVNAME $VOLUMENAME (optionally appending $PV to restrict allocation to that Physical Volume) to create a Logical Volume incorporating 100% of the currently free space in the Volume Group. Note that “100%FREE” has no space character in it.

Finally, you need to create a filesystem on it.

mkfs.ext4 $LV where $LV is the device path.

Use lvdisplay for detailed information on the Logical Volumes on your system.

b. Adding a Physical Volume to an existing Logical Volume

Use lvextend --extents +50%FREE $LV $PV to add 50% (for example) of the currently free space to the Logical Volume identified by $LV, where $LV is the path to the Logical Volume (for example /dev/LvGroup/LvVolume) and $PV optionally restricts which Physical Volume the new extents are allocated from. Note that “+50%FREE” does not have a space in it.

After extending the Logical Volume, the filesystem on it still needs to be grown to make use of the added space. To do this:

Use resize2fs $LV where $LV is the path to the Logical Volume.

You may be asked to run a filesystem check (e2fsck -f) before resize2fs will complete the procedure.

Done

You can now mount the logical volume.
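
For example, a quick sketch using the hypothetical names from above:

sudo mkdir -p /mnt/data
sudo mount /dev/LvGroup/LvVolume /mnt/data
# optionally add a line to /etc/fstab so it mounts at boot:
#   /dev/LvGroup/LvVolume  /mnt/data  ext4  defaults  0  2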

Tunneling Around Connection Madness

Some servers are behind multiple layers of Citrix, RDP, re-Citrix and SSH, which creates all manner of problems for maintenance and support teams: copying files is painful, and sometimes even copy/pasting from your desktop to the customer machine’s console doesn’t work.

You can counter this by remote-tunnelling from their remote server to a publicly available server in your control (call it myServ1), then connecting to myServ1 and looping back through the firewall, so that you only jump through one hoop instead of several.

The advantage of this technique is to be able to work in your own browser, and in your own terminal (PuTTY, KiTTY or whatever you wish) straight on your desktop.

To do this, follow the steps below. They may seem long, but in practice the process is quite short.

Method 1 : Daisy-Chained Tunnels

This method allows you to operate in a single window most of the time, and benefits from the reduced overhead on one of the “connections” (on the loopback address). The disadvantage is that when copying files you will generally find you need to do a two-step copy.

The commands (TL;DR)

In summary:

The ports we define are

  • $RTUNP the port on myServ1 that tunnels back to the customer’s SSH port. Make sure this is unique per customer.
  • $DYNP the port for bridging the dynamic forwarding, on myServ1
  • $PROXYP the local SOCKS proxy port that you set in PuTTY and in your browser to use the dynamic forward

Then there are 3 commands to run in order:

  • On your desktop: ssh serveruser@myServ1 -L $PROXYP:localhost:$DYNP
    • which in PuTTY is a local port forward from source $PROXYP to remote localhost:$DYNP
  • On the customer’s machine: ssh -fNC -R $RTUNP:localhost:22 serveruser@myServ1 -o ServerAliveInterval=20 -o ServerAliveCountMax=5 &
  • On myServ1: ssh -c arcfour customer@localhost -p $RTUNP -D $DYNP

Step 1 : Connect to myServ1

Connect to myServ1 with local port forwarding

ssh serveruser@myServ1 -L 8080:localhost:5580

We use a local forward so we forward our desktop’s 8080 to myServ1’s 5580 – we will be using this later.

We perform this forwarding over localhost because myServ1’s firewall may be locked down on the ports we would otherwise want to use.

Step 2 : Connect to customer’s machine

Go through the multiple hoops to get to the customer’s machine, and run the following:

ssh -fNC -R 5500:localhost:22 serveruser@myServ1 -o ServerAliveInterval=20 -o ServerAliveCountMax=5 &

Get the PID of the tunnel process.

ps aux | grep ssh

So long as you remembered to include the & at the end of the ssh command, you can now close your ugly Citrix/RDP/SSH/etc hoops session.

Step 3 : Connect to the customer

Now, on the myServ1 console you opened earlier, SSH to localhost on the remote-tunnel port you specified (5500), opening a dynamic forward on the bridging port you chose (5580):

ssh -c arcfour customer@localhost -p 5500 -D 5580

If forwarding was used for someone else before, the SSH key check will fail and you’ll get an alarming warning about the host’s identity changing. You need to edit ~/.ssh/known_hosts and remove the last line that refers to localhost.

In the previous command, we SSH to localhost on the tunnel connecting myServ1’s 5500 to the customer’s 22. Since this is localhost, we can use weak encryption (-c arcfour) to reduce the computational overhead of SSH chaining.

The dynamic port forward allows us to use a dynamic proxy on myServ1’s 5580

Since we set up the initial myServ1 connection from our desktop’s 8080 to myServ1’s 5580, we are effectively chaining our desktop’s 8080 to the customer’s network through the dynamic proxy on myServ1’s 5580.

You can now point any SOCKS-aware tool at the locally forwarded dynamic port (here 8080) as usual, and resolve IPs directly in the customer’s environment.
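
For example, a quick sketch (the internal hostname is hypothetical):

# Tell curl to use the SOCKS proxy on the desktop's forwarded port,
# resolving hostnames at the far end of the tunnel:
curl --socks5-hostname localhost:8080 http://intranet.customer.example/
# In Firefox, a manual SOCKS v5 proxy of localhost:8080 with remote DNS
# enabled gives the same effect.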

Copying files

You need to copy to myServ1 first using pscp or WinSCP, then scp the file to the client

scp -P $RTUNP /path/on/myServ1 customer@localhost:./ # from myServ1 to customer, over the remote tunnel

 

Method 2 : Tunnel Through Tunnel

To be able to directly scp/WinSCP from your desktop to the client machine, you could open the remote tunnel at the customer first; then open a first connection to myServ1 from your desktop, then a second PuTTY session tunnelling through the first.

This causes two PuTTY windows to be open, and has a more expensive SSH overhead (not so good when one end is slow for any reason or when there’s a fair amount of dropped packets on the network), but your second connection is “direct” to the customer.

On the customer’s machine

ssh -fNC -R $RTUNP:localhost:22 serveruser@myServ1 -o ServerAliveInterval=20 -o ServerAliveCountMax=5 &

Get the PID of the tunnel process.

ps aux | grep ssh

On your desktop

ssh serveruser@myServ1 -L 22:localhost:$RTUNP

Which in PuTTY is a local port forward from your desktop source port 22 to remote localhost:$RTUNP, the remote tunnel to the customer on myServ1

On your desktop again

ssh customer@localhost -D 8080

This is, as far as PuTTY is concerned, a direct connection, so if you start WinSCP on it, you copy directly from your desktop to the customer’s machine.

If forwarding was used for someone else before, the SSH key check will fail and you’ll get an alarming warning about the host’s identity changing. You need to edit your registry under HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\SshHostKeys and remove the appropriate entry that refers to localhost.

We do not use a weaker cipher here, because we still have to protect the connection from the desktop to myServ1 before it enters the outer tunnel.

Disconnecting from the customer

Kill the PID that you noted down earlier – don’t keep this connection open.

kill -9 <pid>

We need to explicitly do this, especially since we have set the keepalive options on the original SSH remote tunnel.

Even if you did not specify keepalive options, some connections are pretty persistent…
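
If you lost the PID, a quick way to find lingering tunnels (a sketch; adjust the pattern and the port to your own values):

# On the customer's machine: find the backgrounded reverse tunnel
ps aux | grep "ssh -fNC -R"
# On myServ1: check whether the remote-forward port is still listening
netstat -tlnp | grep $RTUNP    # or: ss -tlnp | grep $RTUNP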

 

Installing SliTaz GNU/Linux


Recently I’ve been playing with SliTaz GNU/Linux, a tiny distro that can be made to run on even the oldest of PCs!

This article is a short bootstrap to get you started with the essentials on SliTaz 4.0

What is SliTaz?

SliTaz is an independent GNU/Linux distribution, with its own package manager, repositories and kernel, focused on providing an OS for embedded devices and computers with extremely small amounts of power and storage.

It is extremely lightweight: its standard ISO is about 35 MiB in size, it boots from cold to desktop in as little as 15 seconds, and out of the box it takes up about 32 MiB of RAM with the Gtk Openbox desktop and no extra applications.

Whilst it can be used as a lightweight desktop environment, its more likely applications are as

  • an embedded Linux
  • an SSH gateway
  • an easily reloaded web kiosk
  • a portable PC troubleshooting/rescuing disk
  • and other such uses where slick features are shunned in favour of lightness and efficiency.

A GUI desktop environment is provided for those who are afraid of being in full-on command line mode, but to maintain its lightness, there are no traces of such heavy packages as LibreOffice or Firefox.

Out of the box you get

  • the lightweight Leafpad text editor (if you’re not content with nano or vi!)
  • the Midori web browser
  • the sakura terminal emulator
  • and mtPaint if you need to edit pictures…

and apart from that, not excessively more. That’s all you really need.

There’s a web-based GUI manager running on localhost for managing the computer, but make no mistake – this system is more appropriate for seasoned Linux hobbyists who are OK filling in the documentation gaps…

There is even a Raspberry Pi version of SliTaz available to squeeze the most performance out of your Pi.


GUI install

On the LiveCD, to configure SliTaz, boot into the “Gtk” version; then open the web browser, go to http://tazpanel:82 and enter the root login details. By default, the user is “root” and the password is “root”.

Once in TazPanel, you can manage the system, upgrade packages, install new software – and more!

Go to the final menu entry labelled “Install” and choose to “Install SliTaz”

For the purposes of this guide, we are just going to do a very simple install. If you’re comfortable with partitioning, go wild.

You will be offered the opportunity to launch GParted – click that button. You will be shown a map of the first hard drive. If you have multiple hard drives, BE CAREFUL with which one you choose – the operation we are about to perform will erase the disk you operate on.

Choose the disk in the disk chooser menu in the top right – probably /dev/sda if you have only one disk. Again CHOOSE WISELY.

Once a disk is chosen wisely, use the Device menu and choose to Create Partition Table

Then choose Partition menu: New partition

Leave the defaults and hit OK, then click the Apply button to apply the changes. At this point the disk is erased and a new partition is set up. You have been warned!

Exit GParted, scroll down and choose to “Continue installation”

Most options should be fairly self explanatory.

Under Hard Disk Drive, choose the drive you just erased in GParted (/dev/sda for example) and choose to format the partition as “ext4”

Set the host name to whatever you want.

Change the root password to something sensible.

Set the default user as desired.

Remember to tick “Install Grub bootloader” or you’ll have a non-bootable system…

Click “Proceed to SliTaz installation”. After a few seconds… SliTaz is installed. Reboot!

You’ll have to set up your locale and keyboard just once more and voila, a desktop Linux that boots in seconds!

Command line install

Here’s the simple recipe for installing SliTaz from the command line. Note that even if started headless from the LiveCD, this will install a full GUI and take up around 100 MB of space.

The first thing to know is that the installer is invoked by the command tazinst.

The second thing to know is that you need to create a configuration file for it.

The third thing you need to know is that you need to partition your disk first. Naturally, this is what we’ll do first.

WARNING – PARTITIONING ERASES THE DISK THAT YOU ARE PARTITIONING

Type these keys in order to erase /dev/sda and set up a single partition. If you have never done this before, read up on partitioning with fdisk first – it’s a whole topic on its own! Hit return after each line, of course.

fdisk -l
fdisk /dev/sda
o
n
p
1
1
 (just hit return here to accept the default)
a
1
w

Great, you have a new partition on /dev/sda1. Now create the config file.

tazinst new configfile

Once you have created the configuration file, edit it.

vi configfile

Three key things you need to change are as follows:

  • TGT_PARTITION – the partition you will be installing on – in our case, /dev/sda1 or whichever you configured earlier
  • TGT_FS – the file system you want to use – for example, ext4
  • TGT_GRUB – “yes” unless you intend on installing Grub manually afterwards.
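
By way of illustration, after editing, the relevant lines of the config file might look something like this (the values are examples matching the partitioning above):

# Hypothetical excerpt of 'configfile' after editing
TGT_PARTITION="/dev/sda1"
TGT_FS="ext4"
TGT_GRUB="yes"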

Finally, run

tazinst install configfile

After a few seconds, the install will be finished and you can reboot.

Post-install customizations

SliTaz is very light. Extremely light. You might even say it’s missing some packages you would expect as standard. You should think about doing some initial setup…

su
tazpkg -gi vim
tazpkg -gi htop
tazpkg -gi tmux
tazpkg -gi sudo
tazpkg -gi iptables # ...and whatever else you want...
# one tazpkg call per package to install
/etc/init.d/dropbear start # start the SSH server

vim /etc/rcS.conf
# add dropbear to the RUN_DAEMONS line so the SSH server starts at boot
#   RUN_DAEMONS=" ...dropbear"

vim /boot/grub/menu.lst
# change the boot menu timeout
#   timeout 2

visudo
# add your own users to the sudoers file

And that’s about it. Some extra commands that differ from what you may know from elsewhere:

poweroff # instead of shutdown
tazpkg recharge # sync package list
tazpkg info (package)
tazpkg description (package)
tazpkg search (string)
tazpkg get-install (package name) # install from repo
tazpkg get (package name) # download from repo
tazpkg install (TGZ file) # install from local file

Bonus – tpgi

Instead of directly using the restrictive tazpkg, try using my wrapper 🙂

Switch to root and run the following

tazpkg -gi git
tazpkg -gi bash
git clone git://github.com/taikedz/handy-scripts
cp handy-scripts/bin/tpgi /usr/bin/

This will set up the tpgi command which you can use to make life with tazpkg a little easier… run the command without arguments for help. Try:

tpgi install htop vim tmux sudo

Now you can install multiple packages in one line!

tpgi search and gcc 3.4

Searches for packages containing the term “gcc” then filters the results for version 3.4

Moving “/” when it runs out of space (Ubuntu 14.04)


My root (“/”) partition filled up nearly to the brim recently on one of my test servers, so I decided it was time to move it elsewhere… but how?

You can’t normally just add another disk and copy files over – there’s a bit of jiggery-pokery to be done… but it’s not all that difficult. I’d recommend doing this at least once on a test system before ever needing to do it on a bare metal install…

What you will need:

  • a LiveCD of your operating system
  • about 20 minutes of work
  • some time for copying
  • A BACKUP OF YOUR DATA – in case things go horribly wrong
  • a note of which partition maps to what mount point

For reference, my partitions were initially laid out like this:

/dev/sda1 : /boot
/dev/sda3 : /
/dev/sda5 : /home

I then added a new disk to my machine

/dev/sdb

In this walk-through, I will refer to your target PC, the one whose “/” needs moving, as “your PC” from now on. If you’re using a VM that’s the one I am referring to – you needn’t do anything in the host.

Note that in my setup, the “/boot” and “/home” directories are on their own partitions. If you don’t have this as your standard setup, I highly recommend you look at partitioning in this way now – it helps massively when doing long-term maintenance, such as this!

1/ Boot your PC from the LiveCD.

I recommend you use the same CD as from where you installed the OS initially, but probably any Linux with the same architecture will do (x86_32, AMD64, ARM, etc)

Once the Live environment is started, open a command line, and switch to root, or make sure you can use sudo.

2/ Prepare the new root

Use lsblk to identify all currently attached block devices.

I am assuming that /dev/sdb is the new disk. Adjust these instructions accordingly of course.

You want to first partition the new drive: run the command `fdisk /dev/sdb` as root

Type `o` to create a new, empty partition table.

Type `n` to create a new partition; you will be asked for details, so adjust at will or accept the defaults

Type `w` to write the changes to disk.

As root, run `mkfs.ext4 /dev/sdb1`

Your new drive is ready to accept files…

3/ Copy the files

Make directories for the old and new roots, and copy the files over

mkdir newroot
mkdir oldroot
sudo mount /dev/sda3 oldroot
sudo mount /dev/sdb1 newroot
sudo rsync -av oldroot/ newroot/

Note: in the rsync command, specifically add the slashes at the end: “oldroot/ newroot/” and not “oldroot newroot”!! The trailing slash on the source makes rsync copy the directory’s contents rather than the directory itself.

Go do something worthwhile – this copy may take some time, depending on how full the partition was…

4/ Modify fstab

Run the following command to get the UUID of the new drive:

sudo blkid /dev/sdb1

Keep a copy of that UUID

Edit your fstab file

sudo vim newroot/etc/fstab

Keep a note of the old UUID

Change the UUID of the line for your old / partition to the new UUID you just got; and save.
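
The relevant fstab line might end up looking something like this (the UUID and options here are purely illustrative):

# /etc/fstab entry for the new root partition
UUID=1234abcd-56ef-78ab-90cd-ef1234567890  /  ext4  errors=remount-ro  0  1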

5/ Edit the grub.cfg

Mount your old /boot partition into the *new* location

sudo mount /dev/sda1 newroot/boot

Now we edit newroot/boot/grub/grub.cfg

sudo vi newroot/boot/grub/grub.cfg

Locate instances of the old UUID and change them to the new UUID

Quick way: instead of using `vi`, you could use `sed`:

sudo sed -e 's/OLD-UUID-FROM-BEFORE/NEW-UUID-ON-NEW-DISK/g' -i newroot/boot/grub/grub.cfg

Of course, adjust the UUID strings appropriately. If you have never used sed before, read up on it. Keep a copy of the original grub.cfg file too, in case you mess it up first time round.

In the above command, the “-e” option defines a replacement pattern ‘s/ORIGINAL/REPLACEMENT/g’ (where ‘g’ means ‘globally’, i.e. throughout the entire file); the “-i” option indicates that the file specified should be modified in place, instead of writing the changes to stdout and leaving the file unmodified. With the “-r” option, you can also make use of extended regular expressions, including capturing groups.

After making the change, reboot. Remember to boot from the hard disk, and remove the Live CD from the drive.

6/ Reboot, and rebuild grub.cfg

If all has gone well, you should now find your Ubuntu install rebooting fine. Open a terminal and run

df -h

See that your root is now mounted from the new disk, with the extra space!

There’s just one more thing to do – make the grub.cfg changes permanent. Run the following:

sudo update-grub

This will update the grub config file with relevant info from the new setup.

You have successfully moved your “/” partition to a new location. Enjoy the extra space!

Moving ownCloud from Ubuntu default repo to openSUSE build service repo


ownCloud is a popular self-hosted replacement for cloud storage services such as Dropbox, Box.net, Google Drive and SkyDrive: ownCloud lets you retain ownership of the storage solution, and host it wherever you want, without being at the mercy of service providers’ usage policies and advertising-oriented data-mining.

Recently the ownCloud developers asked the Ubuntu repo maintainers to remove owncloud server from their repos.

The reason is that older versions of ownCloud have vulnerabilities that don’t necessarily get patched: whilst the ownCloud developers plug the holes in the versions they support, they cannot guarantee that those fixes propagate to the packages managed by distro repos – and Ubuntu is widely used as a base for other distros. For example, Ubuntu 12.04 is still supported and forms the base for many derivatives, yet ships ownCloud 5 in its repos, and that package is not managed by the ownCloud developers.

The ownCloud developers recommend using the openSUSE build service repository where they publish the latest version of ownCloud, and from which you can get the newest updates as they arrive.

If you’ve installed ownCloud from the Ubuntu 14.04 repositories, and you want to move over to the openSUSE build repo, here’s how you do it.

If moving up from ownCloud 5, consider migrating first to version 6 by way of a PPA or an older OC6 TAR… I’ll have to leave it up to you to find those for yourselves…

Backing up

These instructions are generic. You MUST test this in a VM before performing the steps on your live system.

Mantra: do not trust instructions/code snippets from the internet blindly if you are unsure of what exactly they will do.

Backup the database

Make a backup of the specific database used for ownCloud as per your database’s documentation.

For a simple MySQL dump do:

mysqldump -u $OC_USER "-p$OC_PASS" $OC_DATABASE > owncloud_db.sql.bkp

replacing, of course, the placeholders as appropriate.

Backing up the directories

If you installed ownCloud on Ubuntu 14.04 from the regular repos, you’ll find the following key locations:

  • Main owncloud directory is in /usr/share/owncloud (call it $OCHOME)
  • The $OCHOME/config directory is a symlink to /etc/owncloud
  • the $OCHOME/data directory is a symlink to /var/lib/owncloud
  • the $OCHOME/apps is where your ownCloud apps are installed
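
You can check this quickly (a sketch – adjust $OCHOME if yours differs):

OCHOME=/usr/share/owncloud
ls -ld "$OCHOME"/{config,data,apps}   # config and data should show up as symlinks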

If this is not already the case, it wouldn’t hurt to change things to match this setup.

It would also be a very good idea to make a tar backup of these folders to ensure you have a copy should the migration go awry. You have been warned.

Moving apps, data and config folders

Move your ownCloud data directory to some location (for this example /var/lib/owncloud, but it could be anywhere); move your ownCloud config directory to /etc/owncloud.

It’s probably a good idea not to have your data directory directly accessible under $OCHOME/data.

It is also probably a good idea to keep the more changeable apps directory in /var/owncloud-apps instead of lumped straight into the ownCloud home directory. Note that this directory also contains the “native” ownCloud apps, which get updated with each version of ownCloud – not just custom apps.
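
If yours are real directories rather than symlinks, the moves might look like this (a sketch assuming the example paths above, and that the target paths do not already exist):

OCHOME=/usr/share/owncloud
mv "$OCHOME/data"   /var/lib/owncloud    # data directory, out of the web root
mv "$OCHOME/config" /etc/owncloud        # config directory
mv "$OCHOME/apps"   /var/owncloud-apps   # apps directory, native apps included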

Once you have moved these folders out, $OCHOME should no longer have data and config symlinks in it. As these are symlinks you can simply rm $OCHOME/{data,config}

If you get an error because these are actual directories rather than symlinks… you haven’t actually moved them. If they do not exist at all, of course, that’s fine.

Uninstall and reinstall ownCloud

Uninstall ownCloud (do NOT purge!!)

apt remove owncloud

And add the new repo as per the instructions in http://software.opensuse.org/download/package?project=isv:ownCloud:community&package=owncloud

For Ubuntu 14.04 this is

wget http://download.opensuse.org/repositories/isv:ownCloud:community/xUbuntu_14.04/Release.key
apt-key add - < Release.key
echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/community/xUbuntu_14.04/ /' >> /etc/apt/sources.list.d/owncloud.list
apt-get update && apt-get install owncloud

This makes the repository trusted (key download) then updates the sources and installs directly from the openSUSE repo.

NOTE – if you are following these instructions for Ubuntu 12.04 or any distro shipping a version older than ownCloud 6, you may want to consider upgrading to OC6 first before converting to the latest version 7 – make sure your test this scenario in a VM before doing anything drastic!

Restore files

The new ownCloud is installed at /var/www/owncloud

Remove the directory at $OCHOME, then move /var/www/owncloud to $OCHOME so that it takes up the exact same place your old ownCloud directory was at.
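
Concretely, something like the following (a sketch – double-check $OCHOME before removing anything):

OCHOME=/usr/share/owncloud
rm -rf "$OCHOME"                 # remove the leftover old install directory
mv /var/www/owncloud "$OCHOME"   # put the new install in its place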

Disable the automatically added owncloud site

a2disconf owncloud

And optionally delete /etc/apache2/conf-available/owncloud.conf

Now remove the default data and config directories, and link back in the other directories that you had cautiously moved out previously

rm -rf $OCHOME/{data,config}
#ln -s /var/owncloud-apps $OCHOME/apps
ln -s /etc/owncloud $OCHOME/config

(the apps line is commented out – because you must check what apps you are specifically restoring before squashing the default apps directory)

Finally edit $OCHOME/config/config.php to be sure that it points to the correct locations. Notably check that the $OCHOME/apps location exists, and that the data folder is pointing to the right place (especially if you had to move it).
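
A quick way to eyeball these settings (the path in the comment is illustrative):

grep -E "datadirectory|apps_paths" "$OCHOME/config/config.php"
# e.g. the data directory entry should read something like:
#   'datadirectory' => '/var/lib/owncloud',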

Update

Now go to your ownCloud main page in your web browser. You will be told that ownCloud needs to be updated to the newer version 7 – this will be done automatically.

Once done, ensure that everything is working as expected – add/remove files, navigate around ownCloud web, check that your apps are all working…

Reverting

If you do need to revert,

apt remove owncloud
rm /etc/apt/sources.list.d/owncloud.list
apt update && apt install owncloud

Finally proceed to restoring the files as above – or from backup TARs

Additionally, you will want to restore the old version of the database.

mysql -u $OC_USER "-p$OC_PASS" $OC_DATABASE < owncloud_db.sql.bkp

Moving wordpress

I just moved my WordPress blog from its old location on a regular webhost to a new location on a VPS.

I just wanted to share some learning notes on how to go about it – I can imagine they’ll be useful to me too in the future!

[EDIT — I have just checked the official way of doing this and I must say, it seems much more involved. I think it might be old and that many of the issues are now resolved, but for the sake of caution, here are their instructions {2013-06-13 : http://codex.wordpress.org/Moving_WordPress}. I was using WordPress 3.9 during the move.]

  1. Get a dump of the database dedicated to your wordpress instance as an SQL file – it contains the actual posts and comments from your blog
  2. get a copy of your full wordpress directory – any media files uploaded are here, along with the actual PHP pages
  3. copy these over to the new location
  4. (unpack the wordpress web directory to the desired web directory, if necessary)
  5. re-create the same user in your new DBMS instance, with the same password (you can find it in the wp-config.php file among your wordpress web files), granting it the appropriate permissions
  6. restore the database by using the SQL dump file from your earlier backup – this will restore the entire database as a complete copy – for example, run the SQL file through your DBMS’s CLI tool
    • $> mysql -u wpuser -p"wp_password" < wpdb_dump.sql
  7. log in to the database and run the following, where “http://example.com” is the base URL of your new wordpress blog – wordpress redirects any access to wp-admin to this URL, so it needs updating or you’ll always go back to the old URL’s wordpress
    • update wp_options set option_value='http://example.com/' where option_name='home' or option_name='siteurl';
  8. In the wp-config.php file in the root of your wordpress directory, change the DB_HOST property to point to the new database location
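
For reference, the wp-config.php settings involved look something like this (the values are examples, not taken from any real install):

// wp-config.php database settings on the new host
define('DB_NAME', 'wpdb');            // same database name as before
define('DB_USER', 'wpuser');          // the user you re-created
define('DB_PASSWORD', 'wp_password'); // the same password as before
define('DB_HOST', 'localhost');       // point this at the new database host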

You should now be able to go to the new URL and everything should be as it was!

 

Sony Walkman MTP workarounds…

This post answers the questions:

  • What is MTP (Media Transfer Protocol)?
  • How can I manage files on my media device by simply copying files?
  • How do I manage a MTP USB device through the file manager on Linux?
  • How do I use jmtpfs?

So… I’ve just gone and bought a little Sony Walkman Series B and was looking forward to trying it out on my Xubuntu 14.04 install with Clementine.

The Walkman Series B comes with an integrated USB connector for transfer and recharging, which I plugged into a USB 3.0 port. Alas, rather than presenting a common interface such as a block filesystem, the device offers only a Media Transfer Protocol (MTP) interface.

Media Transfer Protocol is a way for a device such as a phone, camera or music player to present itself to the computer, and it generally requires special software to add and remove files. To wit:

Problem #1 – after double-clicking the desktop icon, it took a few seconds to mount, during which I double-clicked it again (thinking my taps on the touchpad may have been too light…) and was presented with a “could not mount” error. Ouch. But it mounted after that.

Problem #2 – it was not mounted as a block device (read: regular hard disk) but over MTP. Maybe it’s Thunar, or maybe it’s an inherent limitation in the protocol, but this did not allow me to move/add/delete files at all.

Problem #3 – After firing up Clementine, it seems the device was detected as practically full. I had a number of Clementine crashes over my various attempts at copying files from my library over as well. Ghastly procedure.

Workaround – after doing a little bit of reading up, I found that the most seamless solution was to mount the device as a regular filesystem by way of a FUSE-based tool called “jmtpfs”, installed with the incantation

sudo apt-get install jmtpfs

Now you can use the command line to mount and unmount the first MTP device found. I opted to create a script to do this for me: I created a script in my ~/bin directory (which is already in my PATH) called “mmtpfs” containing:

#! /bin/bash

# Mount-point root, and per-device directory taken from the second argument
MTPMAIN=~/mmtpfs
MTPDIR="$MTPMAIN/$2"

if [ "${1}k" = '-mk' ]; then
    mkdir -p "$MTPDIR"
    jmtpfs "$MTPDIR"
elif [ "${1}k" = '-uk' ]; then
    fusermount -u "$MTPDIR"
    rmdir "$MTPDIR" # only removes the end directory if empty
    rmdir "$MTPMAIN"
else
    echo Help:
    cat << EOT
Mount the first MTP device to ~/mmtpfs/MOUNTDIR

Only specify the name of a directory as MOUNTDIR

Mount device:
    $0 -m MOUNTDIR

Unmount device:
    $0 -u MOUNTDIR
EOT
fi

Now I can just open the terminal and type

mmtpfs -m sony

To mount the device or

mmtpfs -u sony

to unmount the device. I might make a Zenity script and a .desktop entry to make it even easier to manage should I have to add this to a user’s configuration.
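
For what it’s worth, a sketch of what such a .desktop entry might look like (the file name and the path to the script are hypothetical):

# ~/.local/share/applications/mmtpfs-sony.desktop
[Desktop Entry]
Type=Application
Name=Mount Sony Walkman
Comment=Mount the first MTP device under ~/mmtpfs/sony
Exec=/home/youruser/bin/mmtpfs -m sony
Terminal=true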

Mounting Drives in Linux


Mounting drives in Linux is a task that sometimes needs to be performed when the auto-mounting mechanism doesn’t apply, and for neophytes can be challenging. The forums are replete with problems about mounting drives, the system not mounting drives upon plugging in the USB or inserting a CD, and permissions confusions.

The following post aims to explain as many parts of the manual process as reasonable, covering the /dev folder, mount and umount commands, fstab, umask and some particularities on filesystems and newly created disks.

The topic is fairly heavy, with many offshoot topics, and I want to keep this post as straight-to-the-task as possible, so a lot of the explanations will urge you to look up info elsewhere if you want more in-depth discussion. Generally, doing a web search on the name in underlined italics will be sufficient. I also use bold text for example snippets that you’ll need to replace, and pink text for text you would type at the command line, with green monospace text reserved for output.

Questions answered

  • How do I mount my USB key in Linux?
  • Why does my USB always mount as root?
  • How do I automatically mount a drive in Linux?
  • Why can’t I write to my USB in Linux?
  • How do I use the mount command?
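
As a taste of the manual process the full post walks through, a minimal sketch (the device name and mount point are examples):

lsblk                           # identify the device
sudo mkdir -p /mnt/usb          # create a mount point
sudo mount /dev/sdb1 /mnt/usb   # mount the partition
sudo umount /mnt/usb            # unmount when finished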


Choosing your GNU/Linux Distro


Most of the time, the differences between one distro and another aren’t so important – most distros work much the same as one another, as we are swiftly reminded whenever we ask “what’s the best distro for beginners?”

But what about drilling deeper into that question and instead asking, “what are the key differentiating factors between desktop distros?”

Here’s the list of things I consider when deciding whether to take the time to download and install a distro, or to recommend it.

This post aims to answer the questions:

  • What’s the best distro for new Linux users?
  • How do I know what Linux distro is right for me?
  • What are the main differences between Linux distros?
  • What Linux distros are backed by companies?
