
Solving “Permission denied” when using ‘locate’

On some Linux machines you might encounter a permissions error when trying to use locate as a regular user:

locate: can not open `/var/lib/mlocate/mlocate.db': Permission denied

I’m not entirely sure when this comes about, but it is the case on a number of AWS CentOS 6 machines.

The reason is multi-fold, and the following commands, run as root, enable the use of locate as a regular user: you need the database’s directory to be readable and executable globally, and you need the locate command to be group-executable with the SGID bit set.

chown root:slocate /usr/bin/locate
chmod g+x /usr/bin/locate
chmod g+s /usr/bin/locate
chmod a+rx /var/lib/mlocate
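As a sanity check afterwards, the permissions should look something like the following (dates and sizes will differ, and on some distributions the group is mlocate rather than slocate):

ls -l /usr/bin/locate   # expect -rwxr-sr-x root slocate – note the ‘s’ where group execute would be
ls -ld /var/lib/mlocate # expect drwxr-xr-x – readable and traversable by all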


Install a secure web server on Linux

Setting up a secure connection on your Apache web server is straightforward on Linux: all the tools are at your disposal, and in just a few commands you can be fully set up.

The following instructions are for Ubuntu and CentOS, and cover generating a self-signed certificate.
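As a minimal sketch of the certificate-generation step, assuming OpenSSL is installed (paths are illustrative – Ubuntu conventionally keeps certificates under /etc/ssl, CentOS under /etc/pki/tls; you would also enable mod_ssl, via a2enmod ssl on Ubuntu or yum install mod_ssl on CentOS):

# generate a 2048-bit key and a self-signed certificate valid for one year
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ssl/private/server.key \
    -out /etc/ssl/certs/server.crt
# then point SSLCertificateFile and SSLCertificateKeyFile at these in your Apache SSL config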

For an overview of free and cheap SSL certificates, see http://webdesign.about.com/od/ssl/tp/cheapest-ssl-certificates.htm. These certificates from Certificate Authorities certify only that the certificate was issued to the party controlling the domain. They are fine for internal sites and personal home pages, but not for eCommerce sites.

For an overview of Extended Validation certificates (more expensive but more globally trusted), see http://webdesign.about.com/od/ssl/tp/cheapest-ev-ssl-certificates.htm. These certificates are issued against a real-world check of your identity, and thus carry a higher cost and higher trust. They are suitable for commercial sites and for high-traffic sites that want to be properly identified; they are overkill for small project sites and testing.

LVM Cheat Guide

This article is also featured on my professional blog at http://helpuse.com

There are a number of commands to know for basic Logical Volume Management; the most effective approach is to remember the three layers and how they interact, so that you can manage LVM volumes efficiently and autonomously.

I. Devices and Volumes

Physical devices

On the disk partitions side, there are:

  • An actual device: a hard disk, SSD, USB drive, etc.
    • for example, /dev/sda or /dev/hdb
  • Partitions on the drives
    • for example /dev/sda1 or /dev/sdb3

The tools to manipulate these are

  • lsblk – to identify block devices easily
    • or df if lsblk is not available
  • fdisk – for partitioning

Logical constructs

In LVM there are 3 layers:

  • The Physical Volume
    • rather than referring to a device, it actually refers to a partition on a device
    • It is generally also referred to with the partition name
    • for example /dev/sda1 or /dev/sdb3
  • The Volume Group
    • this identifies a grouped pool of Physical Volumes that can be used together
    • for example LvGroup, which appears as the directory /dev/LvGroup
  • The Logical Volume
    • a slice of storage allocated from the pooled space of its Volume Group
    • There can be multiple Logical Volumes per Volume Group
    • The Logical Volume looks to applications like a single partition
    • A Logical Volume can be extended onto, or moved off of, Physical Volumes in its group
    • For example, /dev/LvGroup/LvVolume (also reachable as /dev/mapper/LvGroup-LvVolume)

The tools used to manage these are divided into three sets, each with multiple operations:

  • pv*, vg* and lv*
  • for all three, *scan, *display, *create
  • for vg and lv, the added operations *extend, *remove
  • each set has many more of its own operations, use tab completion on the start of the command-set to show them.

II. Operations

The easiest way to remember the order of operations is to think of it this way: A physical device gets divided into partitions, and the partitions are reassembled into groups to form logical volumes.

As such, the first operations divide the physical devices into partitions; these are then prepped, added to the appropriate volume group and assigned to a logical volume, and the logical volume is expanded to incorporate them. Finally, the filesystem needs to be expanded to the full extents of the volume.

1. Device Preparation : Partitioning

Identify or create a partition you want to add to your LVM space.

You can use sudo fdisk /dev/sdX to create or manipulate partitions.

The partitions you want to add to volume management must have the partition type 8e : “Linux LVM”

If the partitions you are creating are on the same device as a partition your system is currently using, you will need to remount it, or even reboot if your root partition resides there.
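As a sketch of the keystrokes (the device /dev/sdb is a stand-in – use whatever you identified with lsblk):

sudo fdisk /dev/sdb
# inside fdisk:
#   n   – create the new partition (accept the defaults as appropriate)
#   t   – change the partition type
#   8e  – the “Linux LVM” type tag
#   w   – write the table and exit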

2. Prepare the partition for LVM : Physical Volume

Use pvscan to identify existing Physical Volumes.

Use pvdisplay for detailed information about each.

Use pvcreate $PARTITION (where $PARTITION is the partition device, for example /dev/sdb1) to add physical volume information to the partition.

Use pvscan to confirm that it is recognized.
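For example, assuming /dev/sdb1 is the partition prepared in the previous step:

pvscan               # list existing Physical Volumes
pvcreate /dev/sdb1   # write LVM metadata to the partition
pvscan               # confirm the new PV is recognized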

3. Associate the Physical Volume : Volume Group

Use vgscan to identify existing Volume Groups

Use vgdisplay to print detailed information about them.

a. Creating new Volume Groups

Use vgcreate $VOLUMENAME $PV to create a new Volume Group

b. Add a Physical Volume to an existing Volume Group

Use vgextend $VOLUMENAME $PV
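For example, using the illustrative names from this article:

vgcreate LvGroup /dev/sdb1   # create a new group named LvGroup containing one PV
vgextend LvGroup /dev/sdc1   # or add a further PV to the existing group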

4. Assign the Physical Volume : Logical Volume

Use lvscan to identify Logical Volumes attached to your machine.

Use lvdisplay to get detailed information

a. Creating new Logical Volumes

Use lvcreate --extents 100%FREE -n $LVNAME $VOLUMENAME (where $LVNAME is the name for the new volume) to create a Logical Volume incorporating 100% of the currently free space in the Volume Group. Note that “100%FREE” has no space character in it.

Finally, you need to create a filesystem on it.

mkfs.ext4 $LV where $LV is the device path.

Use lvdisplay for detailed information on the Logical Volumes on your system.
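Putting this step together, with the same illustrative names (all of them stand-ins):

lvcreate --extents 100%FREE -n LvVolume LvGroup   # use all free space in the group
mkfs.ext4 /dev/LvGroup/LvVolume                   # create a filesystem on the new volume
lvdisplay                                         # confirm the result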

b. Adding a Physical Volume to an existing Logical Volume

Use lvextend --extents +50%FREE $LV $PV to add 50% (for example) of the currently free space on $PV to the Logical Volume identified by $LV; where $LV is the path to the Logical Volume, for example /dev/LvGroup/LvVolume. Note that “+50%FREE” does not have a space in it.

After adding a Physical Volume to a Logical Volume, the filesystem on the Logical Volume still needs to make use of the added space. To do this:

Use resize2fs $LV, where $LV is the path to the Logical Volume as above.

You may be requested to run a disk check first before completing the procedure.
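A worked example with the same illustrative names – the e2fsck step is only needed if resize2fs requests it, typically when the volume is unmounted:

lvextend --extents +50%FREE /dev/LvGroup/LvVolume /dev/sdc1
e2fsck -f /dev/LvGroup/LvVolume   # the disk check, if requested
resize2fs /dev/LvGroup/LvVolume   # grow the ext filesystem into the new space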

Done

You can now mount the logical volume.
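For example (the mount point is illustrative):

mkdir -p /mnt/data
mount /dev/LvGroup/LvVolume /mnt/data
df -h /mnt/data   # confirm the size is as expected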

Tunneling Around Connection Madness

Some servers are behind multiple layers of Citrix, RDP, re-Citrix and SSH, creating all manner of problems for maintenance and support teams when copying files – and sometimes even when just copy/pasting from your desktop to the customer machine’s console.

You can counter this by remote-tunnelling from their remote server to a publicly available server in your control (call it myServ1), then connecting to myServ1 and looping back through the firewall – that is, making it only one hoop to jump through instead of several.

The advantage of this technique is to be able to work in your own browser, and in your own terminal (PuTTY, KiTTY or whatever you wish) straight on your desktop.

To do this, follow the steps below. It may seem long, but in fact it’s quite short.

Method 1 : Daisy-Chained Tunnels

This method allows you to operate in a single window most of the time, and benefits from the reduced overhead on one of the “connections” (on the loopback address). The disadvantage is that when copying files you will generally find you need to do a two-step copy.

The commands (TL;DR)

In summary:

The ports we define are

  • $RTUNP the port on myServ1 that tunnels back to the customer’s SSH port. Make sure this is unique per customer.
  • $DYNP the port for bridging the dynamic forwarding, on myServ1
  • $PROXYP the SOCKS proxy port that you set in PuTTY and in your browser to use the dynamic forward

Then there are 3 commands to run in order:

  • On your desktop: ssh serveruser@myServ1 -L $PROXYP:localhost:$DYNP
    • which in PuTTY is a local port forward from source $PROXYP to remote localhost:$DYNP
  • On the customer’s machine: ssh -fNC -R $RTUNP:localhost:22 serveruser@myServ1 -o ServerAliveInterval=20 -o ServerAliveCountMax=5 &
  • On myServ1: ssh -c arcfour customer@localhost -p $RTUNP -D $DYNP

Step 1 : Connect to myServ1

Connect to myServ1 with local port forwarding

ssh serveruser@myServ1 -L 8080:localhost:5580

We use a local forward so we forward our desktop’s 8080 to myServ1’s 5580 – we will be using this later.

We need to perform some forwarding on the localhost if myServ1’s firewall is locked down on the ports we’d want to use.

Step 2 : Connect to customer’s machine

Go through the multiple hoops to get to the customer’s machine, and run the following:

ssh -fNC -R 5500:localhost:22 serveruser@myServ1 -o ServerAliveInterval=20 -o ServerAliveCountMax=5 &

Get the PID of the tunnel process.

ps aux | grep ssh

So long as you remembered to include the & at the end of the ssh command, you can now close your ugly Citrix/RDP/SSH/etc hoops session.

Step 3 : Connect to the customer

Now, on the myServ1 console you opened earlier, SSH to localhost on the remote tunnel port you specified, adding a dynamic forward on the other port you specified:

ssh -c arcfour customer@localhost -p 5500 -D 5580

If forwarding was used for someone else before, the SSH key check will fail and you’ll get an alarming warning about the host’s identity changing. You need to edit ~/.ssh/known_hosts and remove the last line that refers to localhost.

In the previous command, we SSH to localhost on the tunnel connecting myServ1’s 5500 to the customer’s 22. Since this is localhost, we can use weak encryption (-c arcfour) to reduce the computational overhead of SSH chaining.

The dynamic port forward allows us to use a dynamic proxy on myServ1’s 5580

Since we set up the initial myServ1 connection from our desktop’s 8080 to myServ1’s 5580, we are effectively chaining our desktop’s 8080 to the customer’s network through the dynamic proxy on myServ1’s 5580.

You can use a dynamic SOCKS proxy tool on the locally forwarded dynamic port (here 8080) like usual to resolve IPs directly in the customer’s environment.
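For instance, to test the chain from your desktop with curl (the internal address is purely a stand-in for a host on the customer’s network):

curl --socks5-hostname localhost:8080 http://10.1.2.3/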

Copying files

You need to copy to myServ1 first using pscp or WinSCP, then scp the file to the client

scp -P <yourport> file/on/myServ1 mycustomer:./ # from myServ1 to customer
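Concretely, with the example port from earlier (the Windows-side path and file name are illustrative):

pscp C:\files\patch.tgz serveruser@myServ1:   # step 1: desktop to myServ1
scp -P 5500 patch.tgz customer@localhost:./   # step 2: myServ1 to customer, via the tunnel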

 

Method 2 : Tunnel Through Tunnel

To be able to directly scp/WinSCP from your desktop to the client machine, you could open the remote tunnel at the customer first; then open a first connection to myServ1 from your desktop, then a second PuTTY session tunnelling through the first.

This causes two PuTTY windows to be open, and has a more expensive SSH overhead (not so good when one end is slow for any reason or when there’s a fair amount of dropped packets on the network), but your second connection is “direct” to the customer.

On the customer’s machine

ssh -fNC -R $RTUNP:localhost:22 serveruser@myServ1 -o ServerAliveInterval=20 -o ServerAliveCountMax=5 &

Get the PID of the tunnel process.

ps aux | grep ssh

On your desktop

ssh serveruser@myServ1 -L 22:localhost:$RTUNP

Which in PuTTY is a local port forward from your desktop source port 22 to remote localhost:$RTUNP, the remote tunnel to the customer on myServ1

On your desktop again

ssh customer@localhost -D 8080

This, as far as PuTTY is concerned, is a direct connection – so if you start WinSCP on it, you copy directly from your desktop to the customer’s machine.

If forwarding was used for someone else before, the SSH key check will fail and you’ll get an alarming warning about the host’s identity changing. You need to edit your registry under HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\SshHostKeys and remove the appropriate key that refers to localhost.

We do not normally use weaker encryption here, because we still have to protect the connection from the desktop to myServ1 before it enters the outer tunnel.

Disconnecting from the customer

Kill the PID that you noted down earlier – don’t keep this connection open.

kill -9 <pid>

We need to explicitly do this, especially since we have set the keepalive options on the original SSH remote tunnel.

Even if you did not specify keepalive options, some connections are pretty persistent…

 

Installing SliTaz GNU/Linux


Recently I’ve been playing with SliTaz GNU/Linux, a tiny distro that can be made to operate even the oldest of PCs!

This article is a short bootstrap to get you started with the essentials on SliTaz 4.0.

What is SliTaz?

SliTaz is an independent GNU/Linux distribution, with its own package manager, repositories and kernel, focused on providing an OS for embedded devices and computers with extremely limited processing power and storage.

It is extremely lightweight: its standard ISO is about 35 MiB in size, it boots from cold to desktop in as little as 15 seconds, and ab initio it takes up 32 MiB of RAM with the Gtk Openbox desktop and no extra applications.

Whilst it can be used as a lightweight desktop environment, its main application would more likely be for use as

  • an embedded Linux
  • an SSH gateway
  • an easily reloaded web kiosk
  • a portable PC troubleshooting/rescuing disk
  • and other such uses where slick features are shunned in favour of lightness and efficiency.

A GUI desktop environment is provided for those who are afraid of being in full-on command line mode, but to maintain its lightness, there are no traces of such heavy packages as LibreOffice or Firefox.

Out of the box you get

  • the lightweight Leafpad text editor (if you’re not content with nano or vi!)
  • the Midori web browser
  • the sakura terminal emulator
  • and mtPaint if you need to edit pictures…

and apart from that, not excessively more. That’s all you really need.

There’s a web-based GUI manager running on localhost for managing the computer, but make no mistake – this system is more appropriate for seasoned Linux hobbyists who are OK with filling in the documentation gaps…

There is even a Raspberry Pi version of SliTaz available, to squeeze the most performance out of your Pi.


GUI install

On the LiveCD, to configure SliTaz, boot into the “Gtk” version; then open the web browser and go to http://tazpanel:82 and enter the root login details. By default, the user is “root” and the password is “root”.

Once in TazPanel, you can manage the system, upgrade packages, install new software – and more!

Go to the final menu entry labelled “Install” and choose to “Install SliTaz”

For the purposes of this guide, we are just going to do a very simple install. If you’re comfortable with partitioning, go wild.

You will be offered the opportunity to launch GParted – click that button. You will be shown a map of the first hard drive. If you have multiple hard drives, BE CAREFUL with which one you choose – the operation we are about to perform will erase the disk you operate on.

Choose the disk in the disk chooser menu in the top right – probably /dev/sda if you have only one disk. Again CHOOSE WISELY.

Once a disk is chosen wisely, use the Device menu and choose to Create Partition Table

Then choose Partition menu: New partition

Leave the defaults and hit OK, then click the Apply button to apply the changes. At this point the disk is erased and a new partition is set up. You have been warned!

Exit GParted, scroll down and choose to “Continue installation”

Most options should be fairly self explanatory.

Under Hard Disk Drive, choose the drive you just erased in GParted (/dev/sda for example) and choose to format the partition as “ext4”

Set the host name to whatever you want.

Change the root password to something sensible.

Set the default user as desired.

Remember to tick “Install Grub bootloader” or you’ll have a non-bootable system…

Click “Proceed to SliTaz installation”. After a few seconds… SliTaz is installed. Reboot!

You’ll have to set up your locale and keyboard just once more and voila, a desktop Linux that boots in seconds!

Command line install

Here’s the simple recipe for installing SliTaz from the command line. Note that even if started headless from the LiveCD, this will install a full GUI and take up around 100 MB of space.

The first thing to know is that the installer is invoked by the command tazinst.

The second thing to know is that you need to create a configuration file for it.

The third thing you need to know is that you need to partition your disk first. Naturally, this is what we’ll do first.

WARNING – PARTITIONING ERASES THE DISK THAT YOU ARE PARTITIONING

Type these keys in order to erase /dev/sda and set up a single partition. If you have never done this before…. read up on partitioning with fdisk. It’s a whole topic on its own! Hit return for each new line of course.

fdisk -l
fdisk /dev/sda
o   # create a new, empty DOS partition table
n   # new partition
p   # primary
1   # partition number 1
1   # first cylinder – accept the suggested start
    # (just hit return here to accept the default end)
a   # toggle the bootable flag…
1   # …on partition 1
w   # write the changes and exit

Great, you have a new partition on /dev/sda1. Now create the config file.

tazinst new configfile

Once you have created the configuration file, edit it.

vi configfile

Three key things you need to change are as follows:

  • TGT_PARTITION – the partition you will be installing on – in our case, /dev/sda1 or whichever you configured earlier
  • TGT_FS – the file system you want to use – for example, ext4
  • TGT_GRUB – “yes” unless you intend on installing Grub manually afterwards.
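After editing, the relevant lines might look something like this – the values are the ones from the walkthrough above; check the generated file for the exact syntax it expects:

TGT_PARTITION="/dev/sda1"
TGT_FS="ext4"
TGT_GRUB="yes"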

Finally, run

tazinst install configfile

After a few seconds, the install will be finished and you can reboot.

Post-install customizations

SliTaz is very light. Extremely light. You might even say it’s missing some packages you would expect as standard. You should think about doing some initial setup…

su
tazpkg -gi vim
tazpkg -gi htop
tazpkg -gi tmux
tazpkg -gi sudo
tazpkg -gi iptables # ...and whatever else you want...
# one tazpkg per item to install
/etc/init.d/dropbear start # SSH server
vim /etc/rcS.conf
# add the dropbear SSH server to the startup daemons:
#   RUN_DAEMONS="... dropbear"
vim /boot/grub/menu.lst
# change the boot menu timeout:
#   timeout 2
visudo
# add your own users to the sudoers file

And that’s about it. Some extra commands that may be different from what you may know from elsewhere:

poweroff # instead of shutdown
tazpkg recharge # sync package list
tazpkg info (package)
tazpkg description (package)
tazpkg search (string)
tazpkg get-install (package name) # install from repo
tazpkg get (package name) # download from repo
tazpkg install (TGZ file) # install from local file

Bonus – tpgi

Instead of directly using the restrictive tazpkg, try using my wrapper :-)

Switch to root and run the following

tazpkg -gi git
tazpkg -gi bash
git clone git://github.com/taikedz/handy-scripts
cp handy-scripts/bin/tpgi /usr/bin/

This will set up the tpgi command which you can use to make life with tazpkg a little easier… run the command without arguments for help. Try:

tpgi install htop vim tmux sudo

Now you can install multiple packages from one line….!

tpgi search and gcc 3.4

Searches for packages containing the term “gcc” then filters the results for version 3.4

I Won’t Go Back to Buying Mac


Here’s a little topic I wanted to explore in written form – why I have used Mac for so long, why I still have a Mac as my main desktop…. and why despite this I won’t buy Mac again.

I Used to Love the Mac

My first computers were of course not mine – they were my dad’s. I have a vague recollection of us having a PC with 8” floppy drives and having to type commands… this was probably in 1987 or so. But that memory never really took hold, for very soon after, my dad bought a Mac: an LC II that I think is still in the cellar due to me insisting on not throwing it out.

It was graphical, it was friendly. It supported 16 colours (and not just 8 colours like many PCs still shipped with as standard). There was no command line, you could just click for everything. It was a revolution in home computing and we were on the cutting edge.

We were continually treated, with Macs, to the newest and greatest home technology: stable systems to run months without a single application crash (System 7.5.1 I particularly single out), advanced graphical UIs (Mac OS 9 was great comfort to the eye at the time), easily automated applications via AppleScript, including a fully scriptable Netscape Navigator; the first laptops and desktops with built-in Wifi, the first LCD desktops where the entire computer was hardly wider than the screen, the advent of UNIX-based systems on the home computer. Every Mac shipped with a full productivity suite included (what would become iWork), as well as a full media editing suite (photo editing, video sequencing, and audio production, which collectively would become iLife), and a couple of well-designed, full-on 3D games to boot. There was hardly anything you couldn’t do with a Mac I thought…. except perhaps write programs for Windows.

When the time came for me to go to university, I believed I would have to get a Windows PC to allow me to do some proper programming, not knowing that we’d be using many different systems that were equally viable (or more so) for programming on. It was a mistake I do not regret: it taught me a great deal, gave me an understanding of the Windows paradigm so many people endure, and the ability to operate in the average workplace. But after that laptop died (in a literal puff of smoke, after an ill-fated attempt to “repair” it), I was back to buying a Mac in 2007.

Even in 2011 I was agonizing over whether or not to spend hard-earned cash on a new MacBook Pro or not. I drew up my list of pros and cons, and decided, over a solitary steak and pint, that yes, I did want that Mac after all.

It would be the last Mac I would ever personally buy.

The Mac – the good

The year is 2015. I still have that MacBook Pro. And it still serves as my main workhorse for spinning up Linux virtual machines. 4 years on, and it’s still the most powerful computer in my home.

It has a quad-core i7 hyper-threaded processor at 2.2 GHz, effectively showing up as 8 cores – it’s the same processor family as found on entry-level business servers. I’ve upped the RAM to 16 GB. It has a 500 GB HDD.

Most computers even today ship with 4 GB RAM and a lesser i5 processor clocked at 1.7 GHz and not hyper-threaded, and still a 500 GB drive.

Needless to say, that Mac was a fantastic investment, as it remains more powerful than an equivalently priced Windows PC on today’s market.

So why will I never buy Mac again? Put simply: Apple has chosen to go where I will not follow.

Apple – the Bad

Even back in 2011, the Apple Genius who was trying to sell to me was extolling the benefits of the new MacBooks with no CD/DVD drive: “who uses CDs these days anyway?” Well I do, for one. I experiment with computing, and in doing so sometimes break my systems. I need to reinstall the system sometimes. The one time I needed to reinstall OSX, I had to purchase a brand new copy. Gone are the days of providing a free re-installation DVD. These days, you’re lucky if you can connect anything at all.

I don’t tie up my bandwidth with movies and music I have to wait for and download, online, every time I want to consume them. I still buy DVDs and CDs because, in case you haven’t noticed, online “purchase” does not allow you to own a copy – just the license to watch, if it’s still available on the provider’s website (remember mycokemusic.com?). We do not own “our” online movies and music – only the permission to watch them, which can be revoked at any time – with no refunds.

I have become a near-full Linux convert. I use Linux for my personal machines at work, my secondary and tertiary laptops run Linux, and my private cloud servers all run Linux.

Only my Mac doesn’t run Linux, and that only because when I tried to install Linux on it, the graphics card and wireless card decided to throw a hissy fit. Apple’s choice of highly proprietary components means that, despite the best efforts of open source developers, the proprietary mantra holds: the machine is Ours; you only have a license to use it. You can’t even “own” something as rustic as a tractor these days.

I feel I am not in control of my Mac because I have been told what I can and cannot run on it. I own the machine, but not the software. If it breaks, I just get to keep the pieces – not the ability to tweak and fix.

My hardware today

My preferred computer for “getting things done” nowadays, the one I am currently typing away on, is a Lenovo Flex 15. Lenovo make very good hardware; their pro line, the ThinkPads, are durable business machines, much like the MacBook Pros in quality.

They’re also generally highly compatible with open source drivers and mainstream Linux distributions. Where I’d hesitate before buying a Dell or HP laptop, wondering whether Linux would work on it, I have virtually no qualms when buying a Lenovo laptop, knowing it will likely take the erasure of Windows just fine. Not that this couldn’t change in the future.

Open Source – Freedom and Privacy

Lenovo was in the news recently for a piece of advertising software called Superfish, which it had included in new laptops and desktops for a few months in its Windows deployments. This particular set of software and configurations meant that not only were users seeing even more advertising in their web browsing experience, but implementing the advertising solution also broke the very core security mechanisms that keep all other parts of the system secure. Lenovo makes great hardware, but they aren’t immune to woefully bad decisions.

Thankfully, they reversed the decision to include this software as soon as their technical analysts realized what had happened, and issued fixes, but it has damaged the company’s reputation.

Persons like myself who chose to erase Windows completely were not affected.

This is why I use Open Source Free Software: to maintain control over my own digital assets, and freedom in my digital life. I am fully aware that my digital identity is tightly woven into my real-world identity, whether I want it to be or not.

I now run Linux on nearly everything – more specifically, I run Ubuntu on my laptops, and a mix of Ubuntu and CentOS on my servers.

I can choose what software is on it. I can choose what software is not on it (have you noticed how there is some software on Windows that you cannot get rid of for love nor money… pestering you for upgrades at best, selling you out at worst?). I don’t have to pay an arm and a leg for it either.

What’s more, I remain in control of my data – not only on my computer, but also in the Cloud. Windows will try to shove you onto SkyDrive and Office 365 Online. Apple is shoe-horning you into iCloud services (yeah, sync your photos all over the place, you can trust Apple… hmmm)… Google is trying to get into both spaces, storing all your photos “for” you and muscling into the online office suite as well. You can’t get an offline Adobe Creative Suite anymore – just keep up the eternal payments if you want to continue being able to access your Photoshop and Illustrator projects. At least they didn’t discontinue their editing suite altogether, like Apple did with Aperture. Gone is your investment.

If I ever stopped paying for any of these applications or services, or if a service were suddenly discontinued, I would stand to lose all my data – everything I’ve purchased, everything I’ve created – either because I no longer have the software to read the files, or because the files themselves have been whisked away to an online vault “for my convenience”. That’s why there’s hardly any storage on Chromebooks. Surrender your data to the Cloud.

I am staying firmly on Linux and virtual private servers that I control and can pull data off of as I wish. I can fully program the computer to make it do what I want – and stop it from doing things I don’t want it to do (granted, some tasks are easier than others, but at least it’s actually possible in the first place).

One Linux distribution in particular, Ubuntu (the very same I use!), tried to follow the Big Boys like Apple, Google and Microsoft: Canonical announced a partnership with Amazon in the form of search functionality, where any keywords used for a file search were also sent to Amazon and other online providers. Thankfully, it was easy to purge from the system the minute I heard of it. You cannot defenestrate such “features” with the other Big Three.

Building Trust

I use open source software from centralized trusted software repositories (which were the spiritual precursors to app stores) – I don’t need to hunt around on the Internet to find some software whose source I do not know. On Windows, I constantly need to fret before installing an app: Does it have a virus? Does it have a trojan? Will it send all my purchasing, credit card details, photos and other identity to some unknown third party?

What I get from the centralized repositories constitutes my base web of trust – and that base web offers a collection of software so large and varied that I know I can get a tool for any job, be it office, media, programming, scientific or leisure, and more.

No piracy = no legal troubles AND no viruses.

Or at least, a vastly reduced risk compared to downloading anything willy-nilly from random websites. And personally, I expand that web of trust with informed decisions.

I use LibreOffice which allows me to read and save in Microsoft’s document format if I need to, but I mainly use the Open Document Format to ensure I can still edit them in decades to come, and that I can share documents with anybody who does not want to shell out for Office Pro, Office 365 or GoogleDocs.

I use ownCloud for my file synchronization so that I can keep control over what is stored, and where. It replaces services such as DropBox, Google Drive, Sky Drive and iCloud without trying to force me to store online-only and forgo local copies. If my account is terminated on the latter services, there’s no guarantee I’ll also still have the data that it ran away with. ownCloud is in my control, and I know I have the copies locally too.

I use Krita and the GNU Image Manipulation Program instead of PhotoShop, InkScape instead of Illustrator, Scribus instead of InDesign, digiKam instead of Lightroom. I don’t need to be online to do any of this.

I choose freedom.

In the words of Richard Stallman and the Free Software movement: “Free Software is a matter of Freedom, not price.”

Piracy might make things surreptitiously free (as in “a free lunch”), but still ties you to the control systems and spyware that is rife on the Internet.

Apple, like so many other computer manufacturers and software licensors, has taken a route I cannot go down, one I will not follow. It has taken a route that specifically makes it difficult for me to remain free. It has taken a route that stifles experimentation and learning. It has taken a route that privileges perpetually tying in my spending on one side and monetizing my identity on the other, whilst at the same time denying me ownership both of what I purchase and what I create – and where the only solutions are either piracy… or just leaving altogether.

(Graphic of my creation, released under CC 4.0 Attribution Share-Alike. Anyone who wants to make a better derivative is most welcome…!!)

About that: Thalys’s response to All out

Thalys, a Franco-Belgian train operator, recently suffered a backlash from an All Out campaign after a member of Thalys’s partner staff reprimanded a lesbian couple for kissing on the platform, denouncing the activity as “intolerable.”

Thalys yesterday released a French language press release, which I have opted to translate below.

Please note that this translation has not been performed from a professional standpoint, and that only Thalys’s original official press release is relevant for further quoting.


What Cameron Doesn’t Realize: Encryption Keeps Us SAFER

To Mr David Cameron, Prime Minister and person responsible for our (lack of) safety.

This is war – and you know it. A defensive war against those who would, and do, assail us. War against those who seek to undermine our values. War against those who attack us, day after day, relentlessly, on our streets and in our homes.

And amidst this ongoing conflict, you would have us break down the walls of the only fortress protecting us so as to better see our enemies charging.

You call for the private encryption of our personal messages to be undermined, and even describe it as thoroughly undesirable – for the purpose, you say, of facilitating public protection, and the promise of a safer Britain. It will be no such thing – quite the contrary – should your stance prevail.

The rogues who attacked Charlie Hebdo, the London buses and 9/11 were all already known to Intelligence. You have more means than the mere electronic surveillance of their messages. You are the government. You can access airport records at will. You have CCTV on every major street and transport link. You intercept physical mail. You can bug our hardware. You impose police checks and searches anywhere and anywhen. You monitor bank transfers. You have the legal mandate to pry open or seize the property of any private enterprise, and through international agreements, the power to reach even overseas.

I do not doubt that a government can carry out surveillance, nor that it will. Even non-governmental groups can crack highly secure networks, given sufficient determination. Just ask any computer security expert – the first thing they ever teach us is that no system is 100% “unhackable”.

Were I sufficiently deluded I would demand that you stop such mass trawling. But I see no point in such advocacy on my behalf. It will happen whether I wish it or not, with my knowledge or without. For the government to demand that private communications cease to exist outright, in reality, makes it marginally easier for your intelligence services to reap information.

However it makes an unfathomable difference to any others who would (and already do try to) get control of us or those we hold dear, whilst driving the poster-criminals away from surveillance’s reach.

You say you want to better monitor terrorists and violent criminals. Would the most dangerous use your government-sanctioned communication tools to operate? No – they would simply switch to other channels of communication and “go dark” once more. Years of your agencies’ efforts to best mine the Internet and otherwise-secure communications would surely go to waste – for none but the most incapable “terrorists” would remain there, and your agencies would have to play catch-up in an entirely new arena. It is astounding that they are there at all – and that, in fact, is a benefit to you.

In the mean time, the rest of us will be fed to the wolves.

In reality, encryption has never protected us from government spying. It has only ever protected us from non-government spying.

The holes opened up by GCHQ and the NSA (and other lower-profile national security agencies) are already letting in criminal hackers – known in the trade as “crackers.” Computer systems will always have issues, as every computer scientist, engineer and technician knows from day one. We work hard to plug them as soon as we – or others – find them. And yet you bore more holes behind our backs.

The attacks on Sony and the leaks of celebrity photos from Apple demonstrate how easily compromised computer systems can be, even when dutifully guarded.

With a mass policy of non-encryption, we would open ourselves to ills no government could guard against, no matter how otherwise benevolent it were.

We already have open networks in the form of free Wireless in airports, hotels and cafes, ready to testify to the dangerous absurdity of not encrypting one’s communications. Any computer enthusiast with a modicum of technological education and a standard laptop can snoop the details of anything unencrypted. One needn’t even look underground or seek to circumvent anything for such tools: this is what was shown by the Firesheep debacle, which proved that websites badly needed encryption – not to save us from the government, but from merely unscrupulous fellow network users.

Our devices connect automatically to these networks because we let them, rather than having to remember passwords and type them in conscientiously. We are all ripe for the picking. And anyone can set up a network to trick our devices. Making better technology will not solve our desire for convenience, and crackers will always be ahead of the game – it’s what makes them such formidable foes.

The cracks employed by News of the World were unsophisticated as it was; without the safeguards and encryption there would be no need for them at all – all our communications would be laid bare to anyone who so much as desired to listen in.

Who would be listening? Crooks out for a quick buck, perhaps. Set up a little device and listen in on rich investors’ casual discussions, face to face or over some “private” chatting channel – at the club house, in a restaurant, in a hotel bar or the like… Some people wonder how crackers get information on certain transactions… It’s easier than Hollywood lets on…

Who else would be listening? Oh nobody but insurers and marketers, eager to have the first word in negotiations. They know who’s depressive and who’s terminally ill. Up the premiums. And crooks too. They’ll know who’s bought the latest PC, which model from which store. Let’s call them and impersonate a Customer Service representative to con them.

Who else would be listening? Only the local thugs who know how to use the government tapping loopholes to get onto some family’s network – cause their bills to skyrocket by hacking their smart energy meters, cause their fridges to turn off over the holidays and everything to spoil, overheat ill-secured sensors and cause a fire even as they sleep, browse private files to dig up dirt, monitor their children’s movements… and hold the home owner to ransom.

Lovely house and family they had there…. pity if anything were to happen to it.

Who else would be listening? Not to sound alarmist, but an open, unencrypted network would be a boon for predatory paedophiles and other sex offenders, who could operate all the more efficiently. For every one paedophile who would no longer be sharing vile pictures through the Internet, a thousand more could spy on any one family out and about one sunny afternoon. Photos of our children shared with our loved ones would be available for anyone to intercept and recognize (see how quickly the Chinese “human flesh search engine” can identify a person from casual shots). Our daily habits and patterns would be open to anyone to see, analyze and mine. The kids get home at this time. The parents get back at that time. The parents are out to dinner on Tuesday evening. Interesting information on that couple we spied on in the cafe last Sunday. And if the paedophiles were the ones supplying the laptops and phones… what then? (Yes, we’ve already seen something like this happen.)

Who else would be listening? Maybe the disgruntled neighbour. Maybe the local bullies. Maybe some sect that really has it in for you. Maybe some ill-advised political activist hell-bent on attacking a candidate and any of their supporters.

Mr Cameron, I can’t comment on the rest of your political decisions. I disagree with your policies, but I am not an expert in any of those matters. I don’t like what you’ve done to welfare, I don’t like the Conservatives’ privatization of what I believe to be national infrastructure such as the NHS, I don’t like your government’s stance on immigration, nor how it is undermining education, and I am disappointed that my vote to stay with the Union this past November seems to have come back to bite me. And so forth. Frankly, I have not educated myself enough in those areas to comment on them properly. Suffice to say I disagree, and will need to leave it at that.

But I am competent in computing, as can be anybody studious enough. You seem to think cracking is only within the capability of the grimly determined – but it is within the grasp of even the most puerile of pranksters. All you have shown is that you persist in ignorance and lack of judgement, from a position of power and authority – a very dangerous combination.

You would feed us to the wolves to gauge just how hungry they were, and take a cannon to your own castle out of spite.

Read more:

[1] Cameron wants nobody to have privacy. http://readwrite.com/2015/01/13/david-cameron-encryption-messaging-apps-imessage-whatsapp-snapchat

[2] Encryption makes us safer. http://www.forbes.com/sites/kashmirhill/2010/10/25/firesheep-why-you-may-never-want-to-use-an-open-wi-fi-network-again/

[3] The surveillance state made corporate (and private) espionage worse. http://www.bloomberg.com/news/2014-04-11/nsa-said-to-have-used-heartbleed-bug-exposing-consumers.html

[4] Letting companies know too much about their users tends to backfire. http://theweek.com/articles/441995/uber-growing-threat-corporate-surveillance

[5] There are people you trust spying on your children in their own bedrooms. http://www.macworld.com/article/1146666/macbook_spycam.html

[6] Why privacy matters. http://www.groklaw.net/article.php?story=20130818120421175

Protect your privacy and freedom

EFF.org

https://www.openrightsgroup.org/

About that: GNU/Linux (and cousins) are a big family – and the kids are growing up

It does seem that Linux has become too big and complex – but perhaps not in the way we think.

Some voices indicate that the problem is being un-UNIXy, others insist that systemd is the heart of all woes (well it certainly threw oil on the fire…), others think it’s just too bloated… some even think it is just not popular enough (non sequitur??)

Personally, I think it suffers from a very basic meat-space problem: identity. There are too many distros for Linux to be homogeneously named, and too many people with strong (mostly valid) visions of what it should do for these to be reconcilable across the board.

It is clear to me that no Linux distro family is interested in doing what the others do – they are like rival siblings – yet some “parent figures” are trying to push them all into the same roles.

Rather than recognizing that they need their own spaces to come into their own, “unification” attempts are falling flat because they simply do not match up to the ideals the now-grown-up youngsters hold.

My response to a comment on iTWire’s article follows:

My general stance is that Linux’s “killer app” is “Linux” :-)

To elaborate on that, maybe I should stretch my explanation to mean: a platform on which I can both play games and surf the web, whilst at the same time doing development work and granularly controlling its maintenance, depending on the role I want it to fit on any particular deployment… The lack of household-name fame is not particularly a problem. Technical capability has always come first, and that has not hampered its growth.

I do think (though I am no longbeard!) that there needs to be a more pointed differentiation between desktop/laptop purposes (where you want everything to work with little fuss) and server purposes (no GUIs, high control, high debuggability).

Perhaps the danger we are seeing is the trend for a one-size-fits-all approach swallowing the ecosystem whole, whether it be systemd or any other project: we tend to see Linux distros as one big close-knit family, and thus create attempts to unite them under a common set of tools, orchestrators and platforms.

Maybe it’s time we stopped looking at Linux that way. UNIX split off into various mildly-related sub-groups, which still thrive, and it looks like Linux could soon do the same. Ubuntu is on course to becoming its own thing, and Chrome has taken Gentoo and created something barely related, even though they are deployed on the same kernel as the rest of the distros. Android has already dropped the GNU side and, whilst it is still “Linux”, it cannot be considered in the same family as the desktop and server distros we have at the moment.

So yeah – I’d say, stop thinking of “Linux” as “a cohesive group of operating system variants” and start looking at key families of distros as operating systems in their own right. Allow more individualization within the nuclear family, to enable a broader unification of the genealogy – our BSD cousins are doing well in server space, and if they are not taken into the unification attempts, they may well fill the server niche in our stead whilst we remain with the consumer and mobile markets… who knows who will get gadget-space.

Stop trying to cram all the family members in the same bed. It’s time for the kids to fly the nest.

Technical Support as a Career

“I work in technical support” is probably one of the less impressive admissions at a sociable meetup, and to be fair, it has never been glamorous, nor will it ever be. The most admiration you’ll probably get is “Oh wow; hey, I have this computer problem actually, you see it …. (badly summarized problem in absence of the broken thing…) … do you think it’s a virus?”

However, it is a viable career (with its admitted share of dead ends), with training on offer in the right companies, and plenty of potential for exposure to the core of businesses and some Real Computing (TM).

The following is a quick profile description of the most common configurations, if you were ever curious or looking to move into IT – and one or two profiles to avoid as much as you can.