
Tunneling Around Connection Madness

Some servers sit behind multiple layers of Citrix, RDP, re-Citrix and SSH, which creates all manner of problems for maintenance and support teams: copying files becomes a chore, and sometimes even copy/pasting from your desktop to the customer machine’s console is a struggle.

You can counter this by opening a remote tunnel from the customer’s server to a publicly available server under your control (call it myServ1), then connecting to myServ1 and looping back through the firewall – that is, turning several hops into just one.

The advantage of this technique is that you can work in your own browser, and in your own terminal (PuTTY, KiTTY or whatever you prefer), straight from your desktop.

To do this, follow the steps below. It may seem long, but it’s actually quite short.

Method 1 : Daisy-Chained Tunnels

This method allows you to operate in a single window most of the time, and benefits from reduced overhead on one of the “connections” (the one over the loopback address). The disadvantage is that copying files generally requires a two-step copy.

The commands (TL;DR)

In summary:

The ports we define are

  • $RTUNP – the port on myServ1 that tunnels back to the customer’s SSH port. Make sure this is unique per customer.
  • $DYNP – the port on myServ1 used to bridge the dynamic forwarding
  • $PROXYP – the SOCKS proxy port that you set in PuTTY and in your browser to use the dynamic forward

Then there are 3 commands to run in order:

  • On your desktop: ssh serveruser@myServ1 -L $PROXYP:localhost:$DYNP
    • which in PuTTY is a local port forward from source $PROXYP to remote localhost:$DYNP
  • On the customer’s machine: ssh -fNC -R $RTUNP:localhost:22 serveruser@myServ1 -o ServerAliveInterval=20 -o ServerAliveCountMax=5 &
  • On myServ1: ssh -c arcfour customer@localhost -p $RTUNP -D $DYNP

Step 1 : Connect to myServ1

Connect to myServ1 with local port forwarding

ssh serveruser@myServ1 -L 8080:localhost:5580

We use a local forward to map our desktop’s 8080 to myServ1’s 5580 – we will be using this later.

We perform this extra forwarding over the localhost because myServ1’s firewall may be locked down on the ports we would otherwise want to use.

Step 2 : Connect to customer’s machine

Go through the multiple hoops to get to the customer’s machine, and run the following:

ssh -fNC -R 5500:localhost:22 serveruser@myServ1 -o ServerAliveInterval=20 -o ServerAliveCountMax=5 &

Get the PID of the tunnel process.

ps aux | grep ssh

So long as you remembered to include the & at the end of the ssh command, you can now close your ugly Citrix/RDP/SSH/etc hoops session.

Step 3 : Connect to the customer

Now, on the myServ1 console you opened earlier, ssh to localhost on the remote-tunnel port you specified, adding a dynamic forward on the other port you specified

ssh -c arcfour customer@localhost -p 5500 -D 5580

If forwarding was used for someone else before, the SSH key check will fail and you’ll get an alarming warning about the host’s identity changing. You need to edit ~/.ssh/known_hosts and remove the last line that refers to localhost.
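
If you’d rather not edit the file by hand, ssh-keygen can remove the entry for you – a quick sketch, assuming the tunnel port 5500 used in this example:

ssh-keygen -R "[localhost]:5500" # remove the stale known_hosts entry for that host/port combination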

In the previous command, we SSH to localhost on the tunnel connecting myServ1’s 5500 to the customer’s 22. Since this is localhost, we can use weak encryption (-c arcfour) to reduce the computational overhead of SSH chaining.

The dynamic port forward allows us to use a dynamic proxy on myServ1’s 5580

Since we set up the initial myServ1 connection from our desktop’s 8080 to myServ1’s 5580, we are effectively chaining our desktop’s 8080 to the customer’s network through the dynamic proxy on myServ1’s 5580.

You can use a dynamic SOCKS proxy tool on the locally forwarded dynamic port (here 8080) like usual to resolve IPs directly in the customer’s environment.
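
For instance, any tool that speaks SOCKS can be pointed at the forwarded port – a quick sketch (the internal host name is hypothetical):

curl --socks5-hostname localhost:8080 http://internal-app.customer.lan/ # names resolve on the customer's side of the tunnel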

Copying files

You need to copy the file to myServ1 first using pscp or WinSCP, then scp it on to the customer.
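
For the first hop, something like this from your desktop should do (a sketch – the path and file name are hypothetical):

pscp C:\path\to\somefile serveruser@myServ1:/tmp/ # from your desktop to myServ1

Then, from myServ1, the second hop goes through the remote tunnel: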

scp -P 5500 /file/on/myServ1 customer@localhost:./ # from myServ1 to the customer, over the remote tunnel

 

Method 2 : Tunnel Through Tunnel

To be able to directly scp/WinSCP from your desktop to the client machine, you could open the remote tunnel at the customer first; then open a first connection to myServ1 from your desktop, then a second PuTTY session tunnelling through the first.

This causes two PuTTY windows to be open, and has a more expensive SSH overhead (not so good when one end is slow for any reason or when there’s a fair amount of dropped packets on the network), but your second connection is “direct” to the customer.

On the customer’s machine

ssh -fNC -R $RTUNP:localhost:22 serveruser@myServ1 -o ServerAliveInterval=20 -o ServerAliveCountMax=5 &

Get the PID of the tunnel process.

ps aux | grep ssh

On your desktop

ssh serveruser@myServ1 -L 22:localhost:$RTUNP

Which in PuTTY is a local port forward from your desktop source port 22 to remote localhost:$RTUNP, the remote tunnel to the customer on myServ1

On your desktop again

ssh customer@localhost -D 8080

This is, as far as PuTTY is concerned, a direct connection – so if you start WinSCP over it, you copy directly from your desktop to the customer’s machine.
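
From the command line, for example, a copy can go straight through (a sketch; the file name is hypothetical):

pscp somefile.txt customer@localhost:/tmp/ # rides the local port-22 forward straight to the customer's machine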

If forwarding was used for someone else before, the SSH key check will fail and you’ll get an alarming warning about the host’s identity changing. You need to open the registry at HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\SshHostKeys and remove the appropriate value that refers to localhost.

We do not normally use a weaker cipher here, because we still have to protect the connection from the desktop to myServ1 before it enters the outer tunnel.

Disconnecting from the customer

Kill the PID that you noted down earlier – don’t keep this connection open.

kill -9 <pid>

We need to explicitly do this, especially since we have set the keepalive options on the original SSH remote tunnel.

Even if you did not specify keepalive options, some connections are pretty persistent…

 

Installing SliTaz GNU/Linux


Recently I’ve been playing with SliTaz GNU/Linux, a tiny distro that can be made to operate even the oldest of PCs!

This article is a short bootstrap to get you started with the essentials on SliTaz 4.0.

What is SliTaz?

SliTaz is an independent GNU/Linux distribution, with its own package manager, repositories and kernel, focused on providing an OS for embedded devices and computers with extremely little power and storage.

It is extremely lightweight: its standard ISO is about 35 MiB in size, booting from cold to desktop in as little as 15 seconds, and ab initio it takes up 32 MiB of RAM with the Gtk Openbox desktop and no extra applications.

Whilst it can be used as a lightweight desktop environment, its main application would more likely be for use as

  • an embedded Linux
  • an SSH gateway
  • an easily reloaded web kiosk
  • a portable PC troubleshooting/rescuing disk
  • any other use where slick features are shunned in favour of lightness and efficiency.

A GUI desktop environment is provided for those who are afraid of being in full-on command line mode, but to maintain its lightness, there are no traces of such heavy packages as LibreOffice or Firefox.

Out of the box you get

  • the lightweight Leafpad text editor (if you’re not content with nano or vi!)
  • the Midori web browser
  • the sakura terminal emulator
  • and mtPaint if you need to edit pictures…

and apart from that, not excessively more. That’s all you really need.

There’s a web-based GUI manager running on localhost for managing the computer, but make no mistake – this system is more appropriate for seasoned Linux hobbyists who are OK with filling in the documentation gaps…

There is even a Raspberry Pi version of SliTaz available to squeeze the most performance out of your Pi.


GUI install

On the LiveCD, to configure SliTaz, boot into the “Gtk” version; then open the web browser, go to http://tazpanel:82 and enter the root login details. By default, the user is “root” and the password is “root”.

Once in TazPanel, you can manage the system, upgrade packages, install new software – and more!

Go to the final menu entry labelled “Install” and choose to “Install SliTaz”

For the purposes of this guide, we are just going to do a very simple install. If you’re comfortable with partitioning, go wild.

You will be offered the opportunity to launch GParted – click that button. You will be shown a map of the first hard drive. If you have multiple hard drives, BE CAREFUL with which one you choose – the operation we are about to perform will erase the disk you operate on.

Choose the disk in the disk chooser menu in the top right – probably /dev/sda if you have only one disk. Again CHOOSE WISELY.

Once a disk is chosen wisely, use the Device menu and choose to Create Partition Table

Then choose Partition menu: New partition

Leave the defaults and hit OK, then click the Apply button to apply the changes. At this point the disk is erased and a new partition is set up. You have been warned!

Exit GParted, scroll down and choose to “Continue installation”

Most options should be fairly self explanatory.

Under Hard Disk Drive, choose the drive you just erased in GParted (/dev/sda for example) and choose to format the partition as “ext4”

Set the host name to whatever you want.

Change the root password to something sensible.

Set the default user as desired.

Remember to tick “Install Grub bootloader” or you’ll have a non-bootable system…

Click “Proceed to SliTaz installation”. After a few seconds… SliTaz is installed. Reboot!

You’ll have to set up your locale and keyboard just once more and voila, a desktop Linux that boots in seconds!

Command line install

Here’s the simple recipe for installing SliTaz from the command line. Note that even if started headless from the LiveCD, this process will install a full GUI and take up around 100 MB of space.

The first thing to know is that the installer is invoked by the command tazinst.

The second thing to know is that you need to create a configuration file for it.

The third thing you need to know is that you need to partition your disk first. Naturally, this is what we’ll do first.

WARNING – PARTITIONING ERASES THE DISK THAT YOU ARE PARTITIONING

Type these keys in order to erase /dev/sda and set up a single partition. If you have never done this before…. read up on partitioning with fdisk. It’s a whole topic on its own! Hit return for each new line of course.

fdisk -l        # list all disks and partitions, so you can double-check the target
fdisk /dev/sda  # open the target disk in fdisk
o               # create a new, empty DOS partition table
n               # new partition
p               # primary
1               # partition number 1
1               # start at the first cylinder/sector
                # (just hit return here to accept the default end – use the whole disk)
a               # toggle the bootable flag
1               # ...on partition 1
w               # write the changes to disk and exit

Great, you have a new partition on /dev/sda1. Now create the config file.

tazinst new configfile

Once you have created the configuration file, edit it.

vi configfile

Three key things you need to change are as follows:

  • TGT_PARTITION – the partition you will be installing on – in our case, /dev/sda1 or whichever you configured earlier
  • TGT_FS – the file system you want to use – for example, ext4
  • TGT_GRUB – “yes” unless you intend on installing Grub manually afterwards.
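
For a setup matching this walkthrough, the relevant lines in configfile would end up looking roughly like this (a sketch – check the generated file for the exact syntax it expects):

TGT_PARTITION="/dev/sda1"
TGT_FS="ext4"
TGT_GRUB="yes"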

Finally, run

tazinst install configfile

After a few seconds, the install will be finished and you can reboot.

Post-install customizations

SliTaz is very light. Extremely light. You might even say it’s missing some packages you would expect as standard. You should think about doing some initial setup…

su
tazpkg -gi vim
tazpkg -gi htop
tazpkg -gi tmux
tazpkg -gi sudo
tazpkg -gi iptables # ...and whatever else you want...
# one tazpkg per item to install
/etc/init.d/dropbear start # SSH server

vim /etc/rcS.conf
# add the dropbear SSH server to the startup daemons, e.g.:
#   RUN_DAEMONS=" ... dropbear"

vim /boot/grub/menu.lst
# change the boot menu timeout, e.g.:
#   timeout 2

visudo
# add your own users to the sudoers file

And that’s about it. Some extra commands that may be different from what you may know from elsewhere:

poweroff # instead of shutdown
tazpkg recharge # sync package list
tazpkg info (package)
tazpkg description (package)
tazpkg search (string)
tazpkg get-install (package name) # install from repo
tazpkg get (package name) # download from repo
tazpkg install (TGZ file) # install from local file

Bonus – tpgi

Instead of directly using the restrictive tazpkg, try using my wrapper :-)

Switch to root and run the following

tazpkg -gi git
tazpkg -gi bash
git clone git://github.com/taikedz/handy-scripts
cp handy-scripts/bin/tpgi /usr/bin/

This will set up the tpgi command which you can use to make life with tazpkg a little easier… run the command without arguments for help. Try:

tpgi install htop vim tmux sudo

Now you can install multiple packages from one line….!

tpgi search and gcc 3.4

Searches for packages containing the term “gcc” then filters the results for version 3.4

I Won’t Go Back to Buying Mac


Here’s a little topic I wanted to explore in written form – why I have used Mac for so long, why I still have a Mac as my main desktop…. and why despite this I won’t buy Mac again.

I Used to Love the Mac

My first computers were of course not mine – they were my dad’s. I have a vague recollection of us having a PC with 8” floppy drives and having to type commands… this was probably in 1987 or so. But that memory never really took hold, for very soon after, my dad bought a Mac: an LC II that I think is still in the cellar due to me insisting on not throwing it out.

It was graphical, it was friendly. It supported 16 colours (and not just 8 colours like many PCs still shipped with as standard). There was no command line, you could just click for everything. It was a revolution in home computing and we were on the cutting edge.

We were continually treated, with Macs, to the newest and greatest home technology: stable systems to run months without a single application crash (System 7.5.1 I particularly single out), advanced graphical UIs (Mac OS 9 was great comfort to the eye at the time), easily automated applications via AppleScript, including a fully scriptable Netscape Navigator; the first laptops and desktops with built-in Wifi, the first LCD desktops where the entire computer was hardly wider than the screen, the advent of UNIX-based systems on the home computer. Every Mac shipped with a full productivity suite included (what would become iWork), as well as a full media editing suite (photo editing, video sequencing, and audio production, which collectively would become iLife), and a couple of well-designed, full-on 3D games to boot. There was hardly anything you couldn’t do with a Mac I thought…. except perhaps write programs for Windows.

When the time came for me to go to university, I believed I would have to get a Windows PC to allow me to do some proper programming, not knowing that we’d be using many different systems that were equally viable (or more so) for programming on. It was a mistake I do not regret, as it had great learning benefits for me, gave me the ability to understand the Windows paradigm so many people endure, and let me operate in the average workplace; but after that laptop died (in a literal puff of smoke after an ill-fated attempt to “repair” it), I was back to buying a Mac in 2007.

Even in 2011 I was agonizing over whether or not to spend hard-earned cash on a new MacBook Pro or not. I drew up my list of pros and cons, and decided, over a solitary steak and pint, that yes, I did want that Mac after all.

It would be the last Mac I would ever personally buy.

The Mac – the good

The year is 2015. I still have that MacBook Pro. And it still serves as my main workhorse for spinning up Linux virtual machines. 4 years on, and it’s still the most powerful computer in my home.

It has a quad-core i7 hyper-threaded processor at 2.2 GHz, effectively showing up as 8 cores – it’s the same processor family as found on entry-level business servers. I’ve upped the RAM to 16 GB. It has a 500 GB HDD.

Most computers even today ship with 4 GB RAM and a lesser i5 processor clocked at 1.7 GHz and not hyper-threaded, and still a 500 GB drive.

Needless to say, that Mac was a fantastic investment, as it remains still more powerful than an equivalently priced Windows PC on today’s market.

So why will I never buy Mac again? Put simply: Apple has chosen to go where I will not follow.

Apple – the Bad

Even back in 2011, the Apple Genius who was trying to sell to me was extolling the benefits of the new MacBooks with no CD/DVD drive: “who uses CDs these days anyway?” Well I do, for one. I experiment with computing, and in doing so sometimes break my systems. I need to reinstall the system sometimes. The one time I needed to reinstall OSX, I had to purchase a brand new copy. Gone are the days of providing a free re-installation DVD. These days, you’re lucky if you can connect anything at all.

I don’t tie up my bandwidth with movies and music I have to wait for and download, online, every time I want to consume them. I still buy DVDs and CDs because, in case you haven’t noticed, online “purchase” does not allow you to own a copy – just the license to watch, if it’s still available on the provider’s website (remember mycokemusic.com?). We do not own “our” online movies and music – only the permission to watch them, which can be revoked at any time – with no refunds.

I have become a near-full Linux convert. I use Linux for my personal machines at work, my secondary and tertiary laptops run Linux, and my private cloud servers all run Linux.

Only my Mac doesn’t run Linux, and that only because when I tried to install Linux on it, the graphics card and wireless card decided to throw a hissy fit. Apple’s choice of highly-proprietary components meant that despite the best efforts of open source developers, Apple held on closely to the proprietary mantra: the machine is Ours, you only have a license to use it. You can’t even “own” something as rustic as a tractor these days.

I feel I am not in control of my Mac because I have been told what I can and cannot run on it. I own the machine, but not the software. If it breaks, I just get to keep the pieces – not the ability to tweak and fix.

My hardware today

My preferred computer for “getting things done” nowadays, the one I am currently typing away on, is a Lenovo Flex 15. Lenovo do very good hardware; their pro line, the ThinkPads, are durable business machines much like the MacBook Pros in quality.

They’re also generally highly compatible with open source drivers and mainstream Linux distributions. Where I’d hesitate before buying a Dell or HP laptop, unsure whether Linux would work on it, I have virtually no qualms when buying a Lenovo laptop, knowing it will likely take the erasure of Windows just fine. Not that this couldn’t change in the future.

Open Source – Freedom and Privacy

Lenovo was in the news recently for a piece of advertising software called Superfish they had included in new laptops and desktops for a few months in their Windows deployments. This particular set of software and configurations meant that not only were users seeing even more advertising in the web browsing experience, but implementing the advertising solution was also breaking the very core security mechanisms that keep all other parts of the system secure. Lenovo makes great hardware, but they aren’t immune to woefully bad decisions.

Thankfully, they reverted their decision to include this software as soon as their technical analysts realized what had happened, and issued fixes, but it has damaged the company’s reputation.

Persons like myself who chose to erase Windows completely were not affected.

This is why I use Open Source Free Software: to maintain control over my own digital assets, and freedom in my digital life. I am fully aware that my digital identity is tightly woven into my real-world identity, whether I want it to be or not.

I now run Linux on nearly everything – more specifically, I run Ubuntu on my laptops, and a mix of Ubuntu and CentOS on my servers.

I can choose what software is on it. I can choose what software is not on it (have you noticed how there is some software on Windows that you cannot get rid of for love nor money… pestering you for upgrades at best, selling you out at worst?). I don’t have to pay an arm and a leg for it either.

What’s more, I remain in control of my data. Not only on my computer, but also in the Cloud. Windows will try to shove you onto SkyDrive and Office 365 Online. Apple is shoe-horning you into iCloud services (yeah, sync your photos all over the place, you can trust Apple… hmmm)… Google is trying to get into both spaces, storing all your photos “for” you and muscling in on the online office suite as well. You can’t get an offline Adobe Creative Suite anymore – just keep up the eternal payments if you want to continue being able to access your Photoshop and Illustrator projects. At least they didn’t discontinue their editing suite altogether, like Apple did with Aperture. Gone is your investment.

If I ever stopped paying for any of these applications or services, or if a service were suddenly discontinued, I would stand to lose all my data – everything I’ve purchased, everything I’ve created – either because I no longer have the software to read the files, or because the files themselves have been whisked away to an online vault “for my convenience”. That’s why there’s hardly any storage on Chromebooks. Surrender your data to the Cloud.

I am staying firmly on Linux and virtual private servers that I control and can pull data off of as I wish. I can fully program the computer to make it do what I want – and stop it from doing things I don’t want it to do (granted, some tasks are easier than others, but at least it’s actually possible in the first place).

One Linux distribution in particular, Ubuntu (the very same I use!), tried to follow the Big Boys like Apple, Google and Microsoft: Canonical announced a partnership with Amazon in the form of search functionality, where any keywords used for a file search were also sent to Amazon and other online providers. Thankfully, it was easy to purge from the system the minute I heard of it. You cannot defenestrate such “features” with the other Big Three.

Building Trust

I use open source software from centralized trusted software repositories (which were the spiritual precursors to app stores) – I don’t need to hunt around on the Internet to find some software whose source I do not know. On Windows, I constantly need to fret before installing an app: Does it have a virus? Does it have a trojan? Will it send all my purchasing, credit card details, photos and other identity to some unknown third party?

What I get from the centralized repositories constitutes my base web of trust –  and that base web offers a collection of software so large and varied that I know I can get a tool for any job, be it office, media, programming, scientific or leisure, and more.

No piracy = no legal troubles AND no viruses.

Or at least, a vastly reduced risk compared to downloading anything willy-nilly from random websites. And personally, I expand that web of trust with informed decisions.

I use LibreOffice which allows me to read and save in Microsoft’s document format if I need to, but I mainly use the Open Document Format to ensure I can still edit them in decades to come, and that I can share documents with anybody who does not want to shell out for Office Pro, Office 365 or GoogleDocs.

I use ownCloud for my file synchronization so that I can keep control over what is stored, and where. It replaces services such as DropBox, Google Drive, Sky Drive and iCloud without trying to force me to store online-only and forgo local copies. If my account is terminated on the latter services, there’s no guarantee I’ll also still have the data that it ran away with. ownCloud is in my control, and I know I have the copies locally too.

I use Krita and the GNU Image Manipulation Program instead of PhotoShop, InkScape instead of Illustrator, Scribus instead of InDesign, digiKam instead of Lightroom. I don’t need to be online to do any of this.

I choose freedom.

In the words of Richard Stallman and the Free Software movement: “Free Software is a matter of Freedom, not price.”

Piracy might make things surreptitiously free (as in “a free lunch”), but still ties you to the control systems and spyware that is rife on the Internet.

Apple, like so many other computer manufacturers and software licensors, has taken a route I cannot go down, one I will not follow. It has taken a route that specifically makes it difficult for me to remain free. It has taken a route that stifles experimentation and learning. It has taken a route that privileges perpetually tying-in my spending on one side, as well as the monetization of my identity on the other, whilst at the same time denying me ownership both of what I purchase and what I create, and where the only solutions are either piracy… or just leaving altogether.

(Graphic of my creation – “forget piracy” – released under CC 4.0 Attribution Share-Alike. Anyone who wants to make a better derivative is most welcome…!!)

About that: GNU/Linux (and cousins) are a big family – and the kids are growing up

It does seem that Linux has become too big and complex – but perhaps not in the way we think.

Some voices indicate that the problem is being un-UNIXy, others insist that systemd is the heart of all woes (well it certainly threw oil on the fire…), others think it’s just too bloated… some even think it is just not popular enough (non sequitur??)

Personally, I think it suffers from a very basic meat-space problem: identity. There are too many distros for Linux to be named homogeneously, and too many people with strong (mostly valid) visions of what it should do for them to be reconcilable across the board.

It is clear to me that no Linux distro family is interested in doing what the others do – like siblings with a rivalry – but some “parent figures” are trying to push them all into the same roles.

Rather than recognize that they need to have their own spaces and come into their own, “unification” attempts are falling flat because they simply do not match up to the ideals the now grown-up youngsters hold.

My response to a comment on iTWire’s article follows:

My general stance is that Linux’s “killer app” is “Linux” :-)

To elaborate on that, maybe I should stretch my explanation to mean: a platform on which I can both play games and surf the web, whilst at the same time doing development work and granularly controlling its maintenance, depending on the role I want it to fill in any particular deployment… The lack of household-name fame is not particularly a problem. Technical capability has always come first, and that has not hampered its growth.

I do think (though I am no longbeard!) that there needs to be a more pointed differentiation between the desktop/laptop purpose (where you want everything to work with little fuss) and the server purpose (no GUIs, high control, high debuggability)

Perhaps the danger we are seeing is the trend for a one-size-fits-all approach swallowing the ecosystem whole, whether it be systemd or any other project, mind: we tend to see Linux distros as one big close-knit family, and thus keep attempting to unite them under a common set of tools, orchestrators and platforms.

Maybe it’s time we stopped looking at Linux that way. UNIX split off into various mildly-related subgroups, which still thrive, and it looks like Linux could soon do the same. Ubuntu is on course to becoming its own thing, and Chrome OS has taken Gentoo and created something barely related, even though they are deployed on the same kernel as the rest of the distros. Android has already dropped the GNU side and, whilst it is still “Linux”, it cannot be considered in the same family as the desktop and server distros we have at the moment.

So yeah – I’d say, stop thinking of “Linux” as “a cohesive group of operating system variants” and start looking at key families of distros as operating systems in their own right. Allow more individualization in the nuclear family to allow a broader unification of the genealogy – our BSD cousins are doing well in server space, and if they are not taken into the unification attempts, they may well fill the server niche in our stead whilst we remain with the consumer and mobile markets… who knows who will get gadget-space.

Stop trying to cram all the family members in the same bed. It’s time for the kids to fly the nest.

Moving “/” when it runs out of space (Ubuntu 14.04)


My root (“/”) partition filled up nearly to the brim recently on one of my test servers, so I decided it was time to move it elsewhere… but how?

You can’t normally just add another disk and copy files over – there’s a bit of jiggery-pokery to be done… but it’s not all that difficult. I’d recommend doing this at least once on a test system before ever needing to do it on a bare-metal install…

What you will need:

  • a LiveCD of your operating system
  • about 20 minutes of work
  • some time for copying
  • A BACKUP OF YOUR DATA – in case things go horribly wrong
  • a note of which partition maps to what mount point

For reference, my partitions were initially laid out as follows:

/dev/sda1 : /boot
/dev/sda3 : /
/dev/sda5 : /home

I then added a new disk to my machine

/dev/sdb

In this walk-through, I will refer to your target PC, the one whose “/” needs moving, as “your PC” from now on. If you’re using a VM that’s the one I am referring to – you needn’t do anything in the host.

Note that in my setup, the “/boot” and “/home” directories are on their own partitions. If you don’t have this as your standard setup, I highly recommend you look at partitioning in this way now – it helps massively when doing long-term maintenance, such as this!

1/ Boot your PC from the LiveCD.

I recommend you use the same CD as from where you installed the OS initially, but probably any Linux with the same architecture will do (x86_32, AMD64, ARM, etc)

Once the Live environment is started, open a command line, and switch to root, or make sure you can use sudo.

2/ Prepare the new root

Use lsblk to identify all currently attached block devices.

I am assuming that /dev/sdb is the new disk. Adjust these instructions accordingly of course.
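
To make the output easier to read, you can ask lsblk for just the columns you care about (a sketch; the available columns vary slightly by version):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT # list each device with its size, type and current mount point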

You want to first partition the new drive: run the command `fdisk /dev/sdb` as root

Type `o` to create a new partition table – you will be asked for details, adjust as you wish or accept the defaults

Type `n` to create a new partition. Adjust at will or accept the defaults

Type `w` to write the changes to disk.

As root, run `mkfs.ext4 /dev/sdb1`

Your new drive is ready to accept files…

3/ Copy the files

Make directories for the old and new roots, and copy the files over

mkdir newroot
mkdir oldroot
sudo mount /dev/sda3 oldroot
sudo mount /dev/sdb1 newroot
sudo rsync -av oldroot/ newroot/

Note: in the rsync command, specifically add the slashes at the end: “oldroot/ newroot/” and not “oldroot newroot” !!

Go do something worthwhile – this copy may take some time, depending on how full the partition was…

4/ Modify fstab

Run the following command to get the UUID of the new drive:

sudo blkid /dev/sdb1

Keep a copy of that UUID

Edit your fstab file

sudo vim newroot/etc/fstab

Keep a note of the old UUID

Change the UUID of the line for your old / partition to the new UUID you just got; and save.
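
Purely as an illustration (these UUIDs are made up), the edit amounts to something like this:

# before:
# UUID=11111111-2222-3333-4444-555555555555  /  ext4  errors=remount-ro  0  1
# after:
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  /  ext4  errors=remount-ro  0  1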

5/ Edit the grub.cfg

Mount your old /boot partition into the *new* location

sudo mount /dev/sda1 newroot/boot

Now we edit newroot/boot/grub/grub.cfg

sudo vi newroot/boot/grub/grub.cfg

Locate instances of the old UUID and change them to the new UUID

Quick way: instead of using `vi`, you could use `sed`

sudo sed -e 's/OLD-UUID-FROM-BEFORE/NEW-UUID-ON-NEW-DISK/g' -i newroot/boot/grub/grub.cfg

Of course, adjust the UUID strings appropriately. If you have never used sed before, read up on it. Keep a copy of the original grub.cfg file too, in case you mess it up first time round.

In the above command, the “-e” option defines a replacement pattern ‘s/ORIGINAL/REPLACEMENT/g’ (where ‘g’ means ‘globally’, i.e. every occurrence in the file); the “-i” option indicates that the specified file should be modified in place, instead of writing the changes to stdout and leaving the file unmodified. With the “-r” option, you can also make use of extended regular expressions, including capturing groups.

After making the change, reboot. Remember to remove the Live CD from the slot so that the machine starts from the hard disk.

6/ Reboot, and rebuild grub.cfg

If all has gone well, you should now find your Ubuntu install rebooting fine. Open a terminal and run

df -h

See that your root is now mounted from the new disk, with the extra space!

There’s just one more thing to do – make the grub.cfg changes permanent. Run the following:

sudo update-grub

This will update the grub config file with relevant info from the new setup.

You have successfully moved your “/” partition to a new location. Enjoy the extra space!

Watcher-RSS : Your own, Personal, Feeder


I finally got around to putting together some initial code for that thing I wanted – a script to detect changes in a page and produce an RSS entry as an outcome.

watcher-rss” is just that – a simple script that can be called by cron to check a page for a significant area, and generate an RSS “feed” in response.

It’s designed such that you need to define a bash handler script that sets the required variables; after that it can generate an RSS entry in response to anything.

Moving ownCloud from Ubuntu default repo to openSUSE build service repo


ownCloud is a popular self-hosted replacement to cloud storage services such as Dropbox, Box.net, Google Drive and SkyDrive: ownCloud lets you retain ownership of the storage solution, and host it wherever you want, without being at the mercy of service providers’ usage policies and advertising-oriented data-mining.

Recently the ownCloud developers asked the Ubuntu repo maintainers to remove owncloud server from their repos.

The reason for this is that older versions of ownCloud have vulnerabilities that don’t necessarily get patched: whilst the original ownCloud developers plug the holes in the versions they support, they cannot guarantee that these fixes propagate to the code managed by distro repos – and Ubuntu is widely used as a base for other distros. For example, Ubuntu 12.04 is still supported and forms the base for many derivatives, and has ownCloud 5 in its repos – but that package is not managed by the ownCloud developers.

The ownCloud developers recommend using the openSUSE build service repository where they publish the latest version of ownCloud, and from which you can get the newest updates as they arrive.

If you’ve installed ownCloud from the Ubuntu 14.04 repositories, and you want to move over to the openSUSE build repo, here’s how you do it.

If moving up from ownCloud 5, consider migrating first to version 6 by way of a PPA or an older OC6 TAR… I’ll have to leave it up to you to find those for yourselves…

Backing up

These instructions are generic. You MUST test this in a VM before performing the steps on your live system.

Mantra: do not trust instructions/code snippets from the internet blindly if you are unsure of what exactly they will do.

Backup the database

Make a backup of the specific database used for ownCloud as per your database’s documentation.

For a simple MySQL dump do:

mysqldump -u $OC_USER "-p$OC_PASS" $OC_DATABASE > owncloud_db.sql.bkp

replacing, of course, the placeholders as appropriate.

Backing up the directories

If you installed ownCloud on Ubuntu 14.04 directly from the regular repos, you’ll find the following layout:

  • Main owncloud directory is in /usr/share/owncloud (call it $OCHOME)
  • The $OCHOME/config directory is a symlink to /etc/owncloud
  • the $OCHOME/data directory is a symlink to /var/lib/owncloud
  • the $OCHOME/apps directory is where your ownCloud apps are installed

If this is not already the case, it wouldn’t hurt to change things to match this setup.

It would also be a very good idea to make a tar backup of these folders to ensure you have a copy should the migration go awry. You have been warned.
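
A minimal sketch of such a backup, assuming the standard locations listed above:

tar -czpf owncloud-dirs-backup.tar.gz /usr/share/owncloud /etc/owncloud /var/lib/owncloud # keep this somewhere safe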

Moving apps, data and config folders

Move your ownCloud data directory to some location (for this example /var/lib/owncloud, but it could be anywhere); move your ownCloud config directory to /etc/owncloud.

It’s probably a good idea not to have your data directory directly accessible under $OCHOME/data

It is also probably good to keep the more changeable apps directory in /var/owncloud-apps instead of lumping it straight into the ownCloud home directory. Note that this directory also contains the “native” ownCloud apps, which get updated with each version of ownCloud – not just custom apps.
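
In shell terms, the move looks roughly like this (a sketch, assuming the standard $OCHOME of /usr/share/owncloud):

mv /usr/share/owncloud/apps /var/owncloud-apps # keep the apps out of the package-managed directory
# if data and config are real directories rather than the packaged symlinks, move them too:
# mv /usr/share/owncloud/data /var/lib/owncloud
# mv /usr/share/owncloud/config /etc/owncloud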

Once you have moved these folders out, $OCHOME should no longer have data and config symlinks in it. As these are symlinks you can simply rm $OCHOME/{data,config}

If you get an error about these being actual directories that cannot be removed because they are not empty… you haven’t actually moved them. If they do not exist at all, that’s fine.

Uninstall and reinstall ownCloud

Uninstall ownCloud (do NOT purge!!)

apt remove owncloud

And add the new repo as per the instructions in http://software.opensuse.org/download/package?project=isv:ownCloud:community&package=owncloud

For Ubuntu 14.04 this is

wget http://download.opensuse.org/repositories/isv:ownCloud:community/xUbuntu_14.04/Release.key
apt-key add - < Release.key
echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/community/xUbuntu_14.04/ /' >> /etc/apt/sources.list.d/owncloud.list
apt-get update && apt-get install owncloud

This makes the repository trusted (key download) then updates the sources and installs directly from the openSUSE repo.

NOTE – if you are following these instructions for Ubuntu 12.04 or any distro shipping a version older than ownCloud 6, you may want to consider upgrading to OC6 first before converting to the latest version 7 – make sure your test this scenario in a VM before doing anything drastic!

Restore files

The new ownCloud is installed at /var/www/owncloud

Remove the directory at $OCHOME, then move /var/www/owncloud to $OCHOME so that it takes up the exact same place your old ownCloud directory was at.
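
In commands, roughly (a sketch – double-check that $OCHOME is set, and that your backups exist, before the rm):

rm -rf "$OCHOME" # remove the old install (you backed it up earlier)
mv /var/www/owncloud "$OCHOME" # the fresh install takes its place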

Disable the automatically added owncloud site

a2disconf owncloud

And optionally delete /etc/apache2/conf-available/owncloud.conf

Now remove the default data and config directories, and link back in the other directories that you had cautiously moved out previously

rm -rf $OCHOME/{data,config}
#ln -s /var/owncloud-apps $OCHOME/apps
ln -s /etc/owncloud $OCHOME/config

(the apps line is commented out – because you must check what apps you are specifically restoring before squashing the default apps directory)

Finally edit $OCHOME/config/config.php to be sure that it points to the correct locations. Notably check that the $OCHOME/apps location exists, and that the data folder is pointing to the right place (especially if you had to move it).
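
A quick way to eyeball the relevant settings from the shell (the key names shown are the stock ownCloud ones – verify against your own config.php):

grep -E "datadirectory|apps_paths" "$OCHOME/config/config.php" # the paths printed should match where you moved things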

Update

Now go to your ownCloud main page in your web browser. You will be told that ownCloud needs to be updated to the newer version 7 – this will be done automatically.

Once done, ensure that everything is working as expected – add/remove files, navigate around ownCloud web, check that your apps are all working…

Reverting

If you do need to revert,

apt remove owncloud
rm /etc/apt/sources.list.d/owncloud.list
apt update && apt install owncloud

Finally proceed to restoring the files as above – or from backup TARs

Additionally, you will want to restore the old version of the database.

mysql -u $OC_USER "-p$OC_PASS" $OC_DATABASE < owncloud_db.sql.bkp

beesu


On most distributions using GTK+ or Qt desktop environments, you can use a graphical password prompt to grant administrative rights to a graphical application by invoking gksu or kdesu – instead of the usual su command.

Strangely enough though, the Red Hat family of systems uses neither – instead relying on an independent tool called beesu.

I emailed the developer asking them what the motivation for a separate tool was, and wanted to share the answer.


SSL on Apache and tunneling VPN with OpenVPN on Ubuntu

This post is now superseded by a friendlier and more efficient method: https://ducakedhare.co.uk/?p=1512

The following are a bunch of quick notes about setting up security certificates, enabling OpenVPN and forcing all traffic through a VPN tunnel, and adding SSL

It’s all tailored for Ubuntu 12.04 / 14.04 servers, and exists primarily as learning notes. I may or may not come to cleaning them up.

OpenVPN details and dialectic can be found at https://help.ubuntu.com/14.04/serverguide/openvpn.html

A longer description of Apache SSL activation can be found here: https://www.digitalocean.com/community/tutorials/how-to-create-a-ssl-certificate-on-apache-for-ubuntu-12-04


About that: Is TAILS an essential distro or just an added tinfoil hat?

A tech blogger put up a piece I came across on Tux Machines, asking whether TAILS, a security-oriented Linux distro designed to afford the user anonymity, was just another tinfoil hat for the over-imaginative conspiracy theorists.

I couldn’t just let this be, as I believe that TAILS is actually very legitimately useful to certain people and professions – namely journalists, students and activists – and the article was likely to gain page views over time. Below is my own answer.

Original article is http://openbytes.wordpress.com/2014/05/16/tails-an-essential-distro-or-an-accessory-to-compliment-a-tin-foil-hat-for-the-average-user/

For the TLDR – TAILS is not aimed at the average home user, but at non-technical users who actually do need to take their online safety into serious consideration.

…. it’s a bit of a straw man attack …

The real question is – where is the merit in deriding the approach and considerations TAILS addresses?
