
Moving ownCloud from Ubuntu default repo to openSUSE build service repo


ownCloud is a popular self-hosted replacement for cloud storage services such as Dropbox, Box.net, Google Drive and SkyDrive: ownCloud lets you retain ownership of the storage solution and host it wherever you want, without being at the mercy of service providers’ usage policies and advertising-oriented data mining.

Recently the ownCloud developers asked the Ubuntu repo maintainers to remove the ownCloud server package from their repos.

The reason is that older versions of ownCloud have vulnerabilities that don’t necessarily get patched: whilst the original ownCloud developers plug the holes in the versions they support, they cannot guarantee that these fixes propagate to the packages managed by distro repos – and Ubuntu is widely used as a base for other distros. For example, Ubuntu 12.04 is still supported and forms the base for many derivatives, and ships ownCloud 5 in its repos – a version no longer managed by the ownCloud developers.

The ownCloud developers recommend using the openSUSE build service repository where they publish the latest version of ownCloud, and from which you can get the newest updates as they arrive.

If you’ve installed ownCloud from the Ubuntu 14.04 repositories, and you want to move over to the openSUSE build repo, here’s how you do it.

If moving up from ownCloud 5, consider migrating first to version 6 by way of a PPA or an older OC6 TAR… I’ll have to leave it up to you to find those for yourselves…

Backing up

These instructions are generic. You MUST test this in a VM before performing the steps on your live system.

Mantra: do not trust instructions/code snippets from the internet blindly if you are unsure of what exactly they will do.

Backup the database

Make a backup of the specific database used for ownCloud as per your database’s documentation.

For a simple MySQL dump do:

mysqldump -u $OC_USER "-p$OC_PASS" $OC_DATABASE > owncloud_db.sql.bkp

replacing, of course, the placeholders as appropriate.

Backing up the directories

If you installed ownCloud on Ubuntu 14.04 from the regular repos, you’ll find the following layout:

  • Main owncloud directory is in /usr/share/owncloud (call it $OCHOME)
  • The $OCHOME/config directory is a symlink to /etc/owncloud
  • the $OCHOME/data directory is a symlink to /var/lib/owncloud
  • the $OCHOME/apps is where your ownCloud apps are installed

If this is not already the case, it wouldn’t hurt to change things to match this setup.

It would also be a very good idea to make a tar backup of these folders to ensure you have a copy should the migration go awry. You have been warned.
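That tar backup can be a single command; the paths below are the ones assumed from the stock layout described above, so adjust them to match your actual install first:

```shell
# Paths assumed from the stock Ubuntu 14.04 layout described above --
# check them against your own system before running
tar -czpf "owncloud-backup-$(date +%F).tar.gz" \
    /usr/share/owncloud /etc/owncloud /var/lib/owncloud
```

The -p flag preserves permissions, which matters when restoring a web-served directory.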

Moving apps, data and config folders

Move your ownCloud data directory to some location (for this example /var/lib/owncloud, but it could be anywhere); move your ownCloud config directory to /etc/owncloud.

It’s probably a good idea not to have your data directory directly accessible under $OCHOME/data anyway.

It is also probably good to keep the more variable apps directory in /var/owncloud-apps instead of lumping it straight into the ownCloud home directory. Note that this directory also contains the “native” ownCloud apps, which get updated with each version of ownCloud – not just custom apps.

Once you have moved these folders out, $OCHOME should no longer have data and config symlinks in it. As these are symlinks you can simply rm $OCHOME/{data,config}

If you get an error because these turn out to be actual directories rather than symlinks… you haven’t actually moved them. If they do not exist at all, of course, that’s fine.
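Putting the moves above together, a minimal sketch (paths are the assumed stock locations and $OCHOME is shorthand, as before – test in a VM first):

```shell
OCHOME=/usr/share/owncloud   # assumed location from the stock Ubuntu package

# Keep the apps directory outside the ownCloud home
mv "$OCHOME/apps" /var/owncloud-apps

# data and config are normally symlinks to /var/lib/owncloud and
# /etc/owncloud; if so, the targets stay in place and only the links go
[ -L "$OCHOME/data" ]   && rm "$OCHOME/data"
[ -L "$OCHOME/config" ] && rm "$OCHOME/config"
```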

Uninstall and reinstall ownCloud

Uninstall ownCloud (do NOT purge!!)

apt remove owncloud

And add the new repo as per the instructions in http://software.opensuse.org/download/package?project=isv:ownCloud:community&package=owncloud

For Ubuntu 14.04 this is

wget http://download.opensuse.org/repositories/isv:ownCloud:community/xUbuntu_14.04/Release.key
apt-key add - < Release.key
echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/community/xUbuntu_14.04/ /' >> /etc/apt/sources.list.d/owncloud.list
apt-get update && apt-get install owncloud

This makes the repository trusted (key download) then updates the sources and installs directly from the openSUSE repo.

NOTE – if you are following these instructions for Ubuntu 12.04 or any distro shipping a version older than ownCloud 6, you may want to consider upgrading to OC6 first before converting to the latest version 7 – make sure you test this scenario in a VM before doing anything drastic!

Restore files

The new ownCloud is installed at /var/www/owncloud

Remove the directory at $OCHOME, then move /var/www/owncloud to $OCHOME so that it takes up the exact same place your old ownCloud directory was at.

Disable the automatically added owncloud site

a2disconf owncloud

And optionally delete /etc/apache2/conf-available/owncloud.conf

Now remove the default data and config directories, and link back in the other directories that you had cautiously moved out previously

rm -rf $OCHOME/{data,config}
#ln -s /var/owncloud-apps $OCHOME/apps
ln -s /etc/owncloud $OCHOME/config

(the apps line is commented out – because you must check what apps you are specifically restoring before squashing the default apps directory)

Finally edit $OCHOME/config/config.php to be sure that it points to the correct locations. Notably check that the $OCHOME/apps location exists, and that the data folder is pointing to the right place (especially if you had to move it).
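For reference, the relevant config.php entries look something like this – the values are examples matching the paths used above, so keep your own existing values for everything else:

```php
// Excerpt from $OCHOME/config/config.php -- example values only
'datadirectory' => '/var/lib/owncloud',
'apps_paths' => array(
    array(
        'path' => '/var/owncloud-apps',  // the apps dir moved out earlier
        'url' => '/apps',
        'writable' => true,
    ),
),
```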

Update

Now go to your ownCloud main page in your web browser. You will be told that ownCloud needs to be updated to the newer version 7 – this will be done automatically.

Once done, ensure that everything is working as expected – add/remove files, navigate around ownCloud web, check that your apps are all working…

Reverting

If you do need to revert,

apt remove owncloud
rm /etc/apt/sources.list.d/owncloud.list
apt update && apt install owncloud

Finally, restore the files as above – or from your tar backups.

Additionally, you will want to restore the old version of the database.

mysql -u $OC_USER "-p$OC_PASS" < owncloud_db.sql.bkp

beesu


On most distributions using GTK+ or Qt desktop environments, you can use a graphical password prompt to grant administrative rights to a graphical application by invoking gksu or kdesu – instead of the usual su command.

Strangely enough though, the Red Hat family of systems uses neither – instead relying on an independent tool called beesu.

I emailed the developer asking them what the motivation for a separate tool was, and wanted to share the answer.


Moving wordpress

I just moved my WordPress blog from its old location on a regular webhost to a new location on a VPS.

I just wanted to share some learning notes on how to go about it – I can imagine they’ll be useful to me too in the future!

[EDIT — I have just checked the official way of doing this and I must say, it seems much more involved. I think it might be old and many of the issues are now resolved, but for the sake of caution, here are their instructions {2013-06-13 : http://codex.wordpress.org/Moving_WordPress}. I was using WordPress 3.9 during the move.]

  1. Get a dump of the database dedicated to your wordpress instance as an SQL file – it contains the actual posts and comments from your blog
  2. get a copy of your full wordpress directory – any media files uploaded are here, along with the actual PHP pages
  3. copy these over to the new location
  4. (unpack the wordpress web directory to the desired web directory, if necessary)
  5. re-create the same user in your new DBMS instance, and use the same password (you can find that in wp-config.php file in your wordpress web files), granting it the appropriate permissions
  6. restore the database by using the SQL dump file from your earlier backup – this will restore the entire database as a complete copy – for example, run the SQL file through your DBMS’s CLI tool
    • $> mysql -u wpuser -p"wp_password" < wpdb_dump.sql
  7. log in to the database and run the following, where “http://example.com” is the base URL of your new wordpress blog – wordpress redirects any access to wp-admin to this URL, so it needs updating or you’ll always go back to the old URL’s wordpress
    • update wp_options set option_value='http://example.com/' where option_name='home' or option_name='siteurl';
  8. In the wp-config.php file in the root of your wordpress directory, change the DB_HOST property to point to the new database location
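For step 8, the relevant lines of wp-config.php look like this – all values below are placeholders, so reuse the credentials from your old install:

```php
// wp-config.php excerpt -- placeholder values only
define('DB_NAME', 'wpdb');
define('DB_USER', 'wpuser');
define('DB_PASSWORD', 'wp_password');
define('DB_HOST', 'localhost'); // change this to point at the new database host
```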

You should now be able to go to the new URL and everything should be as it was!

 

SSL on Apache and tunneling VPN with OpenVPN on Ubuntu

This post is now superseded by a friendlier and more efficient method: https://ducakedhare.co.uk/?p=1512

The following are a bunch of quick notes about setting up security certificates, enabling OpenVPN and forcing all traffic through a VPN tunnel, and adding SSL

It’s all tailored for Ubuntu 12.04 / 14.04 servers, and exists primarily as learning notes. I may or may not come to cleaning them up.

OpenVPN details and dialectic can be found at https://help.ubuntu.com/14.04/serverguide/openvpn.html

A longer description of Apache SSL activation can be found here: https://www.digitalocean.com/community/tutorials/how-to-create-a-ssl-certificate-on-apache-for-ubuntu-12-04


Checking for a Mac resource fork

Following a mishap with a backup some time ago in which I lost the contents of my text clippings, I have started investigating resource fork detection for my rsync pre-processor.

The pre-processor will eventually be python-based, with a couple of extra scripts for platform-specific operations that arise. I need it to be able to back up using rsync, or some Windows backup program with equivalent functions, to a FAT32 target, without any data loss.

Pre-processing the backup source directory includes:

  • split files greater than 4 GB into individual chunks – for backing up to FAT32 drives
    • mark chunks for use in restore process
    • use an exclusion for file sizes greater than 4 GB
    • remove chunks post sync
  • OS X specific – detect files with resource forks
    • determine if a child of a grouped directory, if so bubble up to grouped directory and tar-gz
    • else tar-gz the file itself
    • add the original file/directory to rsync exclusion list
    • remove the tar-gz post-sync
  • build a database of current tree state – to “detect” moved files and reduce transfer time
    • build hardlink repository of all files in the source directory
    • replace previous state database with current tree state and remove hardlink repo post-sync
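As a sketch of the first bullet group, splitting oversized files into FAT32-safe chunks could look like this – the $SRC variable and the “.chunk.” naming are my own illustrative assumptions, not part of any existing tool:

```shell
# Sketch: pre-process files larger than 4 GB for a FAT32 target.
# $SRC and the ".chunk." suffix are illustrative assumptions.
SRC=/path/to/backup/source
find "$SRC" -type f -size +4G -print0 |
while IFS= read -r -d '' f; do
    # 4000m chunks stay safely under the FAT32 4 GiB file-size limit
    split -b 4000m -d "$f" "$f.chunk."
done
```

Restoration would then be a matter of `cat file.chunk.* > file`, after which the chunks can be removed.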

From the restoration side, re-building the chunks and unpacking the tar’d files need to be handled.

For detecting resource forks on Mac, this script should work:

#!/bin/bash
# Report whether the file passed as $1 has a non-empty resource fork

MYFILE="$1/..namedfork/rsrc"
# pre-10.7 systems expose the fork at "file/rsrc" instead
# (note: OS X's BSD grep has no -P, so stick to basic regex)
if sw_vers -productVersion | grep -q "^10\.[0-6]\."; then
    MYFILE="$1/rsrc"
fi

# [ -s ] is true if the fork exists and has a size greater than zero
if [ -s "$MYFILE" ]; then
    echo "RSRC present on $MYFILE"
else
    echo "RSRC absent from $MYFILE"
fi

Use at will.

About that: Is TAILS an essential distro or just an added tinfoil hat?

A tech blogger put up a piece I came across on Tux Machines, asking whether TAILS, a security-oriented Linux distro designed to afford the user anonymity, was just another tinfoil hat for the over-imaginative conspiracy theorists.

I couldn’t let this one be, as I believe that TAILS is actually very legitimately useful to certain people and professions – namely journalists, students and activists – and the article was likely to keep gaining page views over time. Below is my own answer.

Original article is http://openbytes.wordpress.com/2014/05/16/tails-an-essential-distro-or-an-accessory-to-compliment-a-tin-foil-hat-for-the-average-user/

For the TLDR – TAILS is not aimed at the average home user, but at non-technical users who actually do need to take their online safety into serious consideration.

…. it’s a bit of a straw man attack …

The real question is – where is the merit in deriding the approach and considerations TAILS addresses?


Call it “Open Source Free Software”

Freedom and Open Fields

I ranted previously about my annoyance at the name “Free Software,” wherein the name is too easily misconstrued to mean freebie (but still proprietary) software like Dropbox, or the Yahoo toolbar. Further thinking about the naming issue, I ended up deciding to call it “Open Source Free Software” instead.

There are two adjective groups in the name: “Open Source” and “Free”, with the latter being interpretable in two ways: freedom and freebie.

Due to the way adjectives apply in English, “Free [Open Source Software]” sounds like it is in opposition to a futile notion of “Proprietary Open Source Software.” More popularly, with the emphasis on “Free”, we end up with the same issue of looking like we could be talking about sketchy downloads.

“Open Source [Free Software]” on the other hand moves the emphasis to the openness, and is in opposition only to “closed source proprietary software,” since “closed source libre software” makes no sense. Even if the listener misunderstands “Free,” they can still understand that it is open to tinkering – which is the freedom we want anyhow.

             | Open Source
Free(dom)    | code is open, software promotes user freedom
Free(bies)   | code is open but copyrighted – we can study it to make a Free(dom) version

Thus we focus on openness as a vehicle for software freedom, instead of leaving potentially damaging emphasis on an ambiguous word.

Varying “Free” on its interpretation against openness/closedness, we get:

             | Open Source                                                                | Closed source
Free(dom)    | code is open, software promotes user freedom                               | Makes no sense
Free(bies)   | code is open but copyrighted – we can study it to make a Free(dom) version | code is closed and copyrighted – the kind of software the FSF are against

There is still a question about whether to include binary blobs in the open source project, since doing so would disqualify it from being Free – but that discussion would have taken place anyhow.

The point is, emphasising openness more easily leads to a discussion on freedom. Emphasising “Free-ness” just makes people shy away – not because of the implications of “freedom” but because of the warning flags around “freebies.”

For the sake of the Linux Desktop: forget about Linux.

There is no such thing as “Linux” as an operating system. Yet we all think we’re marching under the same banner when we’re clearly not. Between desktop environments, package managers, display managers and the rest, we’re a highly uncoordinated bunch. If anyone from the outside should dare ask for a consistent response, they’re greeted with everything from hand-holding to cold shrugs, from slaps on the back to slaps in the face.

I am not proposing that we as techies forget about Linux, nor that we let individual projects’ managers and leaders forget about the community and their Open Source Free Software roots – but to recognize that whilst for us “Linux” is a selling point, for the masses that drive adoption and support, the label “Linux” is a big turn-off.

From a technologist’s standpoint, when we talk about Linux, we all generally know what we mean: a kernel that forms the core of a vast series of operating systems – including Android, which isn’t GNU/Linux but is nonetheless powered by the Linux kernel.

For persons outside of the tech-sphere, the concept of “Linux” is at best a moot point, if not an actual source of confusion. Numerous times, we’ve tried to explain what “distro” means and why there are different “desktop environments,” whilst applications are downloaded by “package managers” from “repositories”… all of this heavily discussed in “the community.”

The Curse of Choice

For us techies, choice is good. In Linux-land, choice is sacred. To us, choice is Freedom.

To the average person, choice is Hell.

Distro flame wars aside, there’s the obvious fragmentation of the developer community. Some develop for KDE, some for GNOME, some independently. Some have packaged for DEB whilst RPM riders have to get the tarball. Arch always gets a tarball. Notifications and panel icons aren’t always there in all desktop environments. And that’s before all the other varieties.

Even if you try not to stray far and stay on “the most popular distro for newbies” a.k.a. Ubuntu, you still have Lubuntu, Xubuntu and Kubuntu confusion, with Linux Mint and Linux Mint Debian, each with Cinnamon and MATE flavours, close behind. A newbie asks how to change their desktop and is asked in return “what desktop environment are you using?” The newbie answers, “I dunno, it’s blue, I want it to be pink.”

Try telling Average Joe that Manjaro (Xfce), CentOS (GNOME 2), Ubuntu (Unity) and Chrome OS (Chrome) are all Linux whilst merrily swapping through screenshots. They’ll ask you what tea you’ve been snorting. Kinda like showing up at a botanists’ convention and being told that apples, hazelnuts and tomatoes are all fruit, broccoli and artichokes are actually flowers, and that all of them are, indeed, classified under “Vegetable.” The geeks know this. The rest of us model the ecosystem in a very different way.

Hand holding

The other issue is the consistent inability of techies to grasp the very idea that installing a new OS is fraught with danger and uncertainty for a non-tinkerer. Consumers have heard that Linux is hard. They’ve heard it doesn’t run Windows (or Windows applications). They’ve heard it might void their warranty. We reassure them that it isn’t hard and tell them to try it, and next thing you know they’ve hosed their data, their printer can’t be found right in front of them, and what’s all that scary text on the screen?

And yet we repeatedly see the technology blogs preaching to the choir, “Why DistroX is a great replacement for Windows XP,” with that chipper encouragement to “give it a go” — as if the kind of person who would keep XP for so long would know exactly what to do after reading the article!

There have been plenty of stories in forums, podcasts and so forth where so-and-so managed to put their grandma on Linux Mint and she loved it; or moved their dad to Ubuntu on Unity, so what’s the fuss; or the uber-geek who moved their wife onto a Gentoo or Arch setup and, with them at the ready to help, their loved one was absolutely fine. Those are, no doubt, great wins on an individual level, but they are kind of moot on their own, and collectively amount to little more than a pat-on-the-head level of success.

But then there are the bigger, more ambitious goals: the City of Munich famously distributing Ubuntu 12.04 CDs for free in the public libraries was one. Whilst the City successfully completed their migration with a host of consultants and internal IT technicians, out in the town, how many people knew what to do with these Ubuntu CDs? Free coasters? Pocket-mirrors to store in your glove compartment? Arts and crafts projects? And did all the gendarmes in France start coming home declaring “ma chérie! I am going to replace the Windows on our PC with Linux?” Mais oui bien sûr, Maurice.

Any time you point out how difficult it is for the average user to install Linux, thousands of commenters are poised at the ready to tell you how easy it is. And I concur that, for me and you, it is dead simple. For grandad and the arts major who pointedly shied away from the school ICT course, installing a new OS is a big freaking deal.

The enthusiasts who are dead set on convincing us that it’s easy have never actually sat down and talked to an actual user. No doubt they’ve gone and done the install for them, and heard “oh it wasn’t so hard then!” after they had gone and done all the hard work.

In reality, if you were to leave an average user to do the switch on their own, you’d need to write a complete manual, many pages long. And that’s just to install the system. There’s a fair amount of IT to be learned before you can go thwack your PC with a new OS. You’ll probably find after a few weeks that Windows is still on their computer and the manual has found its useful place levelling a table leg.

Seriously, have you ever tried to have the conversation about replacing someone’s desktop with Linux when they’re not a close acquaintance and you’re not going to be on hand at their every issue?

Sure there’s a fantastic community out there – of techies speaking techno-babble.

Even in AskUbuntu, which is supposed to be the forum of the “most popular distro” (read: the distro with the lowest skill barrier to entry), the talk rapidly becomes super-technical for anyone who hasn’t tried to understand their computer before.

Forget that it’s “Linux”

Ubuntu has dropped the word “Linux” from its name, and I don’t think that’s a bad thing. Sure, Free Software purists will rail against its choice of Convenience over Freedom, but face it: even setting aside the ethical and philosophical discussion, getting people to switch from one technology to another – a core technology that changes everything about the way they operate on a practical level – introduces a whole load of issues and fears, and just saying “you’re freer” will win you no votes.

Most people are actually absolutely fine with “software Freedom” and would gladly shy away from the lock-in and software slavery dungeons of Windows and OS X – but if it means being out in the cold of Fedora or Trisquel, fending for themselves in a world they were never brought up in, they’ll gladly and consciously choose their old masters. When you’re a slave of the castle, you can either keep serving or run away into the wilderness, hoping you have enough survival skills to patch wounds and pick the right berries, let alone stay safe from the wolves and the elements around you.

For me, Ubuntu (or rather, Xubuntu) is an answer. Ubuntu just being “Ubuntu OS” gives provider companies something to home in on. Those of us riding all manner of Linux – Fedora, Arch, Trisquel and the like – can deal with hardware intricacies, repackaging Ubuntu-oriented packages and such; let software authors just focus on supporting “Ubuntu Linux” (and let’s ensure they do it as Open Source if not Free Software); and give members of the general public the option to say “I’m running Ubuntu.”

By letting average users just think in terms of a coherent brand, we can move them to a platform that prepares them that much more for Freedom, and improves the state of support for Linux as a whole, removing over time the practical barriers to adoption. Imagine the state of consumer desktops where Ubuntu has mostly replaced Windows, and where the average user still doesn’t know they’re using Linux. Imagine Canonical does something…. rash and unethical. The move from Ubuntu to Mint, CentOS or OpenSUSE is so much easier to do both from a “sell” point and a practical point of view, because we’re not switching the ecosystem. And to average users, this is seamless. There are no questions about supported apps or document compatibility.

Saying “I have an Ubuntu” lets us know we’re dealing with a consumer setup. After all, we don’t ask Mac users to recognise they are running BSD, nor Windows users to know that their “NT kernel” is insecure.

Running “Linux” is still (and will probably always remain) a badge of technical honour. Saying one is running “Ubuntu Linux” is still running Ubuntu as a technician. Just let the average person have their plain “Ubuntu” and be happy.

Sony Walkman MTP workarounds…

This post answers the questions:

  • What is MTP (Media Transfer Protocol)?
  • How can I manage files on my media device by simply copying files?
  • How do I manage an MTP USB device through the file manager on Linux?
  • How do I use jmtpfs?

So… I’ve just gone and bought a little Sony Walkman Series B and was looking forward to trying it out on my Xubuntu 14.04 install with Clementine.

The Walkman Series B comes with an integrated USB connector for transfer and recharging, which I plugged into a USB 3.0 port. Alas, rather than exposing a common interface such as a block file system, the device offers a Media Transfer Protocol (MTP) interface only.

Media Transfer Protocol is a way for a device such as a phone, camera or music player to present itself to the computer, and it generally requires special software to add or remove files. To wit:

Problem #1 – after double-clicking the desktop icon, it took a few seconds to mount, during which I double-clicked it again (thinking my taps on the touchpad may have been too light…) and was presented with a “could not mount” error. Ouch. But it mounted after that.

Problem #2 – it was not mounted as a block device (read: regular hard disk) but as an MTP transfer. Maybe it’s Thunar, or maybe it’s an inherent limitation of the protocol, but this did not allow me to move/add/delete files at all.

Problem #3 – After firing up Clementine, it seems the device was detected as practically full. I had a number of Clementine crashes over my various attempts at copying files from my library over as well. Ghastly procedure.

Workaround – after a little reading up, I found that the most seamless solution was to mount the device as a regular file system by way of a FUSE tool called “jmtpfs”, installed with the incantation

sudo apt-get install jmtpfs

Now you can use the command line to mount and unmount the first MTP device found using various commands. I opted to create a script to do this for me: I created a script in my ~/bin directory (which is already in my path) called “mmtpfs” containing:

#! /bin/bash

MTPMAIN=~/mmtpfs
MTPDIR="$MTPMAIN/$2"

if [ "${1}k" = '-mk' ]; then
    mkdir -p "$MTPDIR"
    jmtpfs "$MTPDIR"
elif [ "${1}k" = '-uk' ]; then
    fusermount -u "$MTPDIR"
    rmdir "$MTPDIR" # only removes the directory if empty
    rmdir "$MTPMAIN"
else
    echo Help:
    cat << EOT
Mount the first MTP device to ~/mmtpfs/MOUNTDIR

Only specify the name of a directory as MOUNTDIR

Mount device:
    $0 -m MOUNTDIR

Unmount device:
    $0 -u MOUNTDIR
EOT
fi

Now I can just open the terminal and type

mmtpfs -m sony

To mount the device or

mmtpfs -u sony

to unmount the device. I might make a Zenity script and a .desktop entry to make it even easier to manage should I have to add this to a user’s configuration.

About that: getting out of walled gardens by using Blockchain?

ReadWrite is running a piece touting Blockchain as the panacea for the problem of Walled Gardens (because these in themselves are somehow stifling innovation).

The article does a poor job as far as I can tell, from reading it and from seeing the comments, of linking the two aspects, and I had to read a bit further to understand why this is potentially a game changer. Personally, I’m not sure it is. Below is the comment I added to the article:

Blockchain is a protocol that ensures the identification and integrity of a piece of data and its iterations over time.

Apps are created and delivered in a walled garden. Where’s the connection?

I did go and read some of the linked articles and I think I can see where this was meant to be going:

The idea is that we want to develop apps that can leverage the power of “App+Cloud” type systems – that is, an app that accesses your data online – without seeing your data siloed on that company’s servers forever more.

For example, a notepad app, with lots of your notes in it, accessible from all of your devices (desktop, laptop, tablet, phone, watch, fridge [hey, you might have a shopping list note]). Your data is stored somewhere online, on some company’s cloud infrastructure, and all these apps query that company’s server.

Now if your data is in a peer-to-peer store, modified via blockchain-encoded transactions, then it is also distributed around the Internet, with changes recognized and stored by the network. I guess the goal is to move the data to a truly disembodied cloud, and not just pigeon holes.

So you change a line in your shopping list on your fridge. The entire network validates this, changes the corresponding peered data, and that change is spread, verifying against the blockchain for the latest status. Your data is stored peer to peer, potentially anonymized and encrypted (ha!), and is resilient to deletion.

However, we still have walled gardens where the actual apps have restrictions on their on-device functionality, and we can’t necessarily control which servers they actually talk to. The reason apps are more interesting than mobile browsers is the interaction with the hardware on the device.

So it’s an interesting start, but as long as there are practical advantages to using an app, we won’t escape walled gardens on mobile devices.

In the mean time, if you’re really concerned about walling, try open source, try full computers, and try the real world. We’re all over here making stuff. Innovating freely.