
Panic Button – EFI woes

[Image: keep calm and try the defaults]

Thank you to all at Linux Unplugged who chimed in to give their advice, as well as Rod Smith on SuperUser who provided an extensive, spot-on answer even given my mostly incorrect description and assumptions.

I’ll admit that I panicked when I sent the piece in: I might have just turned a friend’s PC into a brick within the very first hour of our buying it! Luckily, he himself was already comfortable using Linux – he just hadn’t installed it in a very long time – so he had the patience, and the Windows-aversion, to bear with me.

It turns out, this had nothing to do with Secure Boot, and after this saga, I have a slightly better understanding of what Secure Boot is, what scenarios it’s meant to apply in, and what went wrong…


SSH fingerprint

Warnings about changed SSH host identities should be taken seriously – Man-In-The-Middle attacks are where an impersonator gets between you and your destination. They can sniff your traffic, capture access codes and passwords, and even gain control of your computers.

When you ssh to a server, SSH checks the fingerprint returned by the server against what you have in your .ssh/known_hosts file.

If, when you SSH, you get a big warning that the identity of the server has changed, you may want to investigate…

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
5e:6f:20:b5:06:c1:3e:a7:7b:55:a6:9c:be:dc:79:24.
Please contact your system administrator.

If you have no other means of getting to the server, get a different computer on a different network (for example, a different laptop on your tethered data plan – just get onto a different network!) and try again. It’s important to note that the other computer needs to have previously contacted the remote server, otherwise it will simply prompt you to merrily add the remote server’s key. Check the fingerprint to see how it matches up. If it is the same one you are being warned about, DO NOT ADD IT.
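
Incidentally, you can inspect the fingerprint your own client has cached for a host without connecting to it. A quick sketch, assuming a reasonably recent OpenSSH, with remote.example.com as a placeholder hostname:

ssh-keygen -l -F remote.example.com # print the cached fingerprint(s) from ~/.ssh/known_hosts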

If you have direct console access to the affected server (VNC console given to you via your provider, or a physical terminal), you can run

ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key

This will print the fingerprint for the key.

If this is the same as the one you are being warned about, it is possibly safe to assume it’s the real server, and your known_hosts file is wrong. Try turning the SSH service off on the server (via the console) and using ssh to connect – if you get a timeout or connection refused, instead of the big mean error, you might — MIGHT — be safe.

If however the fingerprint returned is different from what you see, you may want to check the other identity files:

for x in /etc/ssh/*key*; do echo -n "$x : "; ssh-keygen -lf "$x"; done

This will list the fingerprints of all the key files in that directory. If none of them match the fingerprint you are warned about, you are likely not connecting to the server you intended to reach – whether through an attack or mis-routing, you should consider the route to the host unsafe – and know that SSH is blocking you for your protection.
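
Conversely, once you have confirmed on the console that the new key really does belong to your server, you can remove the stale cached entry and accept the new key on your next connection – a sketch, with remote.example.com again as a placeholder:

ssh-keygen -R remote.example.com # remove the old known_hosts entries for that host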

Terminal escape characters (‘^]’ , etc.)

If you are trying to troubleshoot a connection issue, you have probably used the telnet tool. Telnet is an old socket protocol which, for the purposes of this explanation, simply opens a network socket to the other server and passes data through in plain text.

For example, if you want to see if an SMTP server is running on a remote server 1.2.3.4, you would telnet to it on port 25:

telnet 1.2.3.4 25

If you wanted to check if an FTP server was running, you would instead run it against port 21, and so on – look up different protocols online and find out their “default port” for plain-text traffic (note that sending passwords in plain text is a bad idea in general – but some services still allow you to do it. Tsk tsk.)

On Mac OS X, Linux and BSD, when you launch telnet, your session will probably open with the following statement

tai@demoserver:~/$ telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 demoserver.local ESMTP Postfix (Ubuntu)

The topic of this article is: what’s that escape character??

You can in fact come out of the telnet stream. In the above example, if you simply typed the usual end-of-input character ( Ctrl+D ), the byte would just be sent along the wire – and not get caught by the telnet application itself.

To do that, we first need to type the escape character. So how do you type it?

There are two sequences needed to type it:

( Ctrl + “v” ) this causes the input to wait for a special character

( Ctrl + “]” ) this provides the special character

This generates a single character, denoted as “^]”. Send it by pressing return. This returns you to the local telnet prompt. You can now issue a ( Ctrl + D ) command to exit.

This technique also works elsewhere.

For example, you can display text in colour:

$> echo "^[[1;31mhello^[[0m"

Where the “ ^[ ” sequence is a special character typed in the same two-step way as described above – ( Ctrl + “v” ), then ( Ctrl + “[” ). Note this is using “[” and not “]”.

^[ — special character for output stream control

[ — formatting code follows

1 — bold true (can be “0” to turn off bold)

;31 — red; 32 is green, 33 yellow, 34 blue. Try other combinations.

m — end modification

Text entry in this form also works when editing text in vi for example – when the resulting file is output via `cat` or `less -R`, you get colours and bolds!

Note that unless you include the code to turn off custom colours (^[[0m = “terminal default”) then the rest of your command line will keep the last selected colour mode.
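
If typing literal escape characters into a command feels fiddly, you can let the shell expand the escape for you. A sketch of two equivalent approaches – printf with an octal escape, and tput, which looks the codes up in the terminfo database:

printf '\033[1;31mhello\033[0m\n' # \033 is the same escape character as ^[
printf '%shello%s\n' "$(tput bold; tput setaf 1)" "$(tput sgr0)" # bold red, then reset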

 

[Image: coloured text in a terminal]

Solving “Permission denied” when using ‘locate’

On some Linux machines you might encounter a permissions error when trying to use locate as a regular user:

locate: can not open `/var/lib/mlocate/mlocate.db': Permission denied

I’m not entirely sure when this comes about, but it is the case on a number of AWS CentOS 6 machines.

The reason is multi-fold, and the following commands, run as root, enable the ability to use locate as a regular user. You need the database’s directory to be readable and executable globally, and you need the locate command to be executable with SGID.

chown root:slocate /usr/bin/locate
chmod g+x /usr/bin/locate
chmod g+s /usr/bin/locate
chmod a+rx /var/lib/mlocate
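
You can then verify the result – a rough sketch of what the listing should show (the slocate group, the SGID “s” bit on the binary, and a world-readable directory; exact permission bits, sizes and dates will vary with your starting state):

ls -ld /usr/bin/locate /var/lib/mlocate
# -rwxr-sr-x 1 root slocate ... /usr/bin/locate
# drwxr-xr-x 2 root slocate ... /var/lib/mlocate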


Install a secure web server on Linux

Setting up a secure connection on your Apache web server is very straightforward on Linux — all the tools are at your disposal, and in just a few commands, you can be fully set up.

The following instructions are for Ubuntu and CentOS, and cover generating a self-signed certificate.
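
As a taster of the self-signed route, the key and certificate pair can be generated in a single openssl command – a sketch, where the file paths and the www.example.com name are placeholders to adapt:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ssl/private/example.com.key \
    -out /etc/ssl/certs/example.com.crt \
    -subj "/CN=www.example.com"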

For an overview of free and cheap SSL certificates, see http://webdesign.about.com/od/ssl/tp/cheapest-ssl-certificates.htm. These certificates from Certificate Authorities only certify that the certificate was issued to the person controlling the domain. They are fine for internal sites and personal home pages, but not for eCommerce sites.

For an overview of Extended Validation certificates (more expensive but more globally trusted), see http://webdesign.about.com/od/ssl/tp/cheapest-ev-ssl-certificates.htm. These certificates are issued against a real-world check of your identity, thus carrying a higher cost and higher trust. They are suitable for high-traffic sites that want to be properly identified and for commercial sites; they are overkill for small project sites and testing.

LVM Cheat Guide

This article is also featured on my professional blog at http://helpuse.com

There are a number of commands to know when doing basic Logical Volume Management; it is probably most efficient to remember the three layers and how they interact, so as to be able to manage LVM volumes efficiently and autonomously.

I. Devices and Volumes

Physical devices

On the disk partitions side, there are:

  • An actual device: a hard disk, SSD, USB drive, etc
    • for example, /dev/sda or /dev/hdb
  • Partitions on the drives
    • for example /dev/sda1 or /dev/sdb3

The tools to manipulate these are

  • lsblk – to identify block devices easily
    • or df if lsblk is not available
  • fdisk – for partitioning

Logical constructs

In LVM there are 3 layers:

  • The Physical Volume
    • rather than referring to a device, it actually refers to a partition on a device
    • It is generally also referred to with the partition name
    • for example /dev/sda1 or /dev/sdb3
  • The Volume Group
    • this identifies a grouped pool of Physical Volumes that can be used together in the group
    • for example, a group named LvGroup (the group is referred to by name, rather than by a device path)
  • The Logical Volume
    • a collection of Physical Volumes from the same group
    • There can be multiple Logical Volumes per Volume Group
    • The Logical Volume looks to applications like a single partition
    • A Logical Volume can incorporate or release Physical Volumes in its group
    • For example, /dev/LvGroup/LvVolume (also visible as /dev/mapper/LvGroup-LvVolume)

The tools used to manage these are divided into three sets, each with multiple operations:

  • pv*, vg* and lv*
  • for all three, *scan, *display, *create
  • for vg and lv, the added operations *extend, *remove
  • each set has many more of its own operations, use tab completion on the start of the command-set to show them.

II. Operations

The easiest way to remember the order of operations is to think of it this way: A physical device gets divided into partitions, and the partitions are reassembled into groups to form logical volumes.

As such, the first operations divide the physical devices into partitions, after which they are prepped, added to the appropriate volume group, added to a logical volume, and the logical volume is expanded to incorporate it. Finally the system needs to expand the filesystem to the full extents of the volume.

1. Device Preparation : Partitioning

Identify or create a partition you want to add to your LVM space.

You can use sudo fdisk /dev/sdX to create or manipulate partitions.

The partitions you want to add to volume management must have the system tag 8e : “Linux LVM”

If the partitions you are creating are on the same device as a partition your system is currently using, you will need to remount it – or even reboot, if your root partition resides there.
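
As a sketch, the interactive fdisk dialogue for tagging an existing partition might look like this (assuming a second disk /dev/sdb whose first partition is destined for LVM):

fdisk /dev/sdb
t # change a partition's system tag
1 # partition number 1
8e # "Linux LVM"
w # write changes and exit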

2. Prepare the partition for LVM : Physical Volume

Use pvscan to identify existing Physical Volumes.

Use pvdisplay for detailed information about each.

Use pvcreate $PARTITION (where $PARTITION is a /dev/sdX as appropriate) to add physical volume information to the partition.

Use pvscan to confirm that it is recognized.

3. Associate the Physical Volume : Volume Group

Use vgscan to identify existing Volume Groups

Use vgdisplay to print detailed information about them.

a. Creating new Volume Groups

Use vgcreate $VOLUMENAME $PV to create a new Volume Group

b. Add a Physical Volume to an existing Volume Group

Use vgextend $VOLUMENAME $PV
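
For example, a sketch with concrete names, assuming the Physical Volume prepared in the previous step was /dev/sdb1:

vgcreate LvGroup /dev/sdb1 # create a brand-new group containing the PV
# ...or, if LvGroup already exists:
vgextend LvGroup /dev/sdb1 # pool the PV into the existing group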

4. Assign the Physical Volume : Logical Volume

Use lvscan to identify Logical Volumes attached to your machine.

Use lvdisplay to get detailed information

a. Creating new Logical Volumes

Use lvcreate --extents 100%FREE --name $LVNAME $VOLUMENAME (where $LVNAME is the name to give the new volume) to create a Logical Volume incorporating 100% of the currently free space in the Volume Group. Note that “100%FREE” has no space character in it.

Finally, you need to create a filesystem on it.

mkfs.ext4 $LV where $LV is the device path.

Use lvdisplay for detailed information on the Logical Volumes on your system.

b. Adding a Physical Volume to an existing Logical Volume

Use lvextend --extents +50%FREE $LV $PV to add 50% (for example) of the currently free space on $PV to the Logical Volume identified by $LV; where $LV is the path to the Logical Volume, for example /dev/LvGroup/LvVolume. Note that “+50%FREE” does not have a space in it.

After adding a Physical Volume to a Logical Volume, the filesystem on the Logical Volume still needs to make use of the added space. To do this:

Use resize2fs $LV where $LV is the path of the Logical Volume.

You may be requested to run a disk check first before completing the procedure.
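
Pulling section II together, a sketch of the complete growth sequence, assuming a freshly tagged /dev/sdb1 and an existing volume /dev/LvGroup/LvVolume:

pvcreate /dev/sdb1 # prep the partition as a Physical Volume
vgextend LvGroup /dev/sdb1 # pool it into the Volume Group
lvextend --extents +100%FREE /dev/LvGroup/LvVolume # grow the Logical Volume into all the new space
resize2fs /dev/LvGroup/LvVolume # grow the filesystem to match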

Done

You can now mount the logical volume.

Tunneling Around Connection Madness

Some servers are behind multiple layers of Citrix, RDP, re-Citrix and SSH, creating all manner of problems for maintenance and support teams when copying files – and sometimes even when just copy/pasting from your desktop to the customer machine’s console.

You can counter this by remote-tunnelling from their remote server to a publicly available server in your control (call it myServ1), then connecting to myServ1 and looping back through the firewall – that is, making it only one hop instead of several.

The advantage of this technique is to be able to work in your own browser, and in your own terminal (PuTTY, KiTTY or whatever you wish) straight on your desktop.

To do this, follow the below. It may seem long, but it’s quite short in fact.

Method 1 : Daisy-Chained Tunnels

This method allows you to operate in a single window most of the time, and benefits from the reduced overhead on one of the “connections” (on the loopback address). The disadvantage is that when copying files you will generally find you need to do a two-step copy.

The commands (TL;DR)

In summary:

The ports we define are

  • $RTUNP the port on myServ1 that tunnels back to the customer’s SSH port. Make sure this is unique per customer.
  • $DYNP the port for bridging the dynamic forwarding, on myServ1
  • $PROXYP the SOCKS proxy port that you set in PuTTY and in your browser to use the dynamic forward

Then there are 3 commands to run in order:

  • On your desktop: ssh serveruser@myServ1 -L $PROXYP:localhost:$DYNP
    • which in PuTTY is a local port forward from source $PROXYP to remote localhost:$DYNP
  • On the customer’s machine: ssh -fNC -R $RTUNP:localhost:22 serveruser@myServ1 -o ServerAliveInterval=20 -o ServerAliveCountMax=5 &
  • On myServ1: ssh -c arcfour customer@localhost -p $RTUNP -D $DYNP

Step1 : Connect to myServ1

Connect to myServ1 with local port forwarding

ssh serveruser@myServ1 -L 8080:localhost:5580

We use a local forward so we forward our desktop’s 8080 to myServ1’s 5580 – we will be using this later.

We need to perform some forwarding on the localhost if myServ1’s firewall is locked down on the ports we’d want to use.

Step 2 : Connect to customer’s machine

Go through the multiple hoops to get to the customer’s machine, and run the following:

ssh -fNC -R 5500:localhost:22 serveruser@myServ1 -o ServerAliveInterval=20 -o ServerAliveCountMax=5 &

Get the PID of the tunnel process.

ps aux | grep ssh

So long as you remembered to include the & at the end of the ssh command, you can now close your ugly Citrix/RDP/SSH/etc hoops session.

Step 3 : Connect to the customer

Now, on the myServ1 console you opened earlier, SSH to localhost on the remote-tunnel port you specified, adding a dynamic forward on the bridging port:

ssh -c arcfour customer@localhost -p 5500 -D 5580

If forwarding was used for someone else before, the SSH key check will fail and you’ll get an alarming warning about the host’s identity changing. You need to edit ~/.ssh/known_hosts and remove the last line that refers to localhost.

In the previous command, we SSH to localhost over the tunnel connecting myServ1’s 5500 to the customer’s 22. Since this is localhost, we can use weak encryption (-c arcfour) to reduce the computational overhead of SSH chaining.

The dynamic port forward allows us to use a dynamic proxy on myServ1’s 5580

Since we set up the initial myServ1 connection from our desktop’s 8080 to myServ1’s 5580, we are effectively chaining our desktop’s 8080 to the customer’s network through the dynamic proxy on myServ1’s 5580.

You can use a dynamic SOCKS proxy tool on the locally forwarded dynamic port (here 8080) like usual to resolve IPs directly in the customer’s environment.
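
Command-line tools can use the same proxy – a sketch with curl, where internal.customer.host is a hypothetical name only resolvable inside the customer’s network:

curl --socks5-hostname localhost:8080 http://internal.customer.host/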

Copying files

You need to copy to myServ1 first using pscp or WinSCP, then scp the file to the client

scp -P $RTUNP file/on/myServ1 customer@localhost:./ # from myServ1 to the customer, over the reverse tunnel

 

Method 2 : Tunnel Through Tunnel

To be able to directly scp/WinSCP from your desktop to the client machine, you could open the remote tunnel at the customer first; then open a first connection to myServ1 from your desktop, then a second PuTTY session tunnelling through the first.

This causes two PuTTY windows to be open, and has a more expensive SSH overhead (not so good when one end is slow for any reason or when there’s a fair amount of dropped packets on the network), but your second connection is “direct” to the customer.

On the customer’s machine

ssh -fNC -R $RTUNP:localhost:22 serveruser@myServ1 -o ServerAliveInterval=20 -o ServerAliveCountMax=5 &

Get the PID of the tunnel process.

ps aux | grep ssh

On your desktop

ssh serveruser@myServ1 -L 22:localhost:$RTUNP

Which in PuTTY is a local port forward from your desktop source port 22 to remote localhost:$RTUNP, the remote tunnel to the customer on myServ1

On your desktop again

ssh customer@localhost -D 8080

This, as far as PuTTY is concerned, is a direct connection – so if you start WinSCP on it, you copy directly from your desktop to the customer’s machine.

If forwarding was used for someone else before, the SSH key check will fail and you’ll get an alarming warning about the host’s identity changing. You need to edit your registry at HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\SshHostKeys and remove the appropriate key that refers to localhost.

We do not use a weaker connection here normally, because we still have to protect the connection from the desktop to myServ1 before it enters the outer tunnel.

Disconnecting from the customer

Kill the PID that you noted down earlier – don’t keep this connection open.

kill -9 <pid>

We need to explicitly do this, especially since we have set the keepalive options on the original SSH remote tunnel.

Even if you did not specify keepalive options, some connections are pretty persistent…

 

Installing SliTaz GNU/Linux


Recently I’ve been playing with SliTaz GNU/Linux, a tiny distro that can be made to run on even the oldest of PCs!

This article is a short bootstrap to get you started with the essentials on SliTaz 4.0

What is SliTaz?

SliTaz is an independent GNU/Linux distribution, with its own package manager, repositories and kernel, focused on providing an OS for embedded devices and computers with extremely small amounts of processing power and storage.

It is extremely lightweight: its standard ISO is about 35 MiB in size, booting from cold to desktop in as little as 15 seconds, and ab initio takes up 32 MiB of RAM with the Gtk openbox desktop and no extra applications.

Whilst it can be used as a lightweight desktop environment, its main application would more likely be for use as

  • an embedded Linux
  • an SSH gateway
  • an easily reloaded web kiosk
  • a portable PC troubleshooting/rescuing disk
  • and other such uses where slick features are shunned in favour of lightness and efficiency.

A GUI desktop environment is provided for those who are afraid of being in full-on command line mode, but to maintain its lightness, there are no traces of such heavy packages as LibreOffice or Firefox.

Out of the box you get

  • the lightweight Leafpad text editor (if you’re not content with nano or vi!)
  • the Midori web browser
  • the sakura terminal emulator
  • and mtPaint if you need to edit pictures…

and apart from that, not excessively more. That’s all you really need.

There’s a web-based GUI manager running on localhost for managing the computer, but make no mistake – this system is more appropriate for seasoned Linux hobbyists who are OK with filling in the documentation gaps…

There is even a Raspberry Pi version of SliTaz available, to squeeze the most performance out of your Pi.


GUI install

On the LiveCD, to configure SliTaz, boot into the “Gtk” version; then open the web browser, go to http://tazpanel:82 and enter the root login details. By default, the user is “root” and the password is “root”.

Once in TazPanel, you can manage the system, upgrade packages, install new software – and more!

Go to the final menu entry labelled “Install” and choose to “Install SliTaz”

For the purposes of this guide, we are just going to do a very simple install. If you’re comfortable with partitioning, go wild.

You will be offered the opportunity to launch GParted – click that button. You will be shown a map of the first hard drive. If you have multiple hard drives, BE CAREFUL with which one you choose – the operation we are about to perform will erase the disk you operate on.

Choose the disk in the disk chooser menu in the top right – probably /dev/sda if you have only one disk. Again CHOOSE WISELY.

Once a disk is chosen wisely, use the Device menu and choose to Create Partition Table

Then choose Partition menu: New partition

Leave the defaults and hit OK, then click the Apply button to apply the changes. At this point the disk is erased and a new partition is set up. You have been warned!

Exit GParted, scroll down and choose to “Continue installation”

Most options should be fairly self explanatory.

Under Hard Disk Drive, choose the drive you just erased in GParted (/dev/sda for example) and choose to format the partition as “ext4”

Set the host name to whatever you want.

Change the root password to something sensible.

Set the default user as desired.

Remember to tick “Install Grub bootloader” or you’ll have a non-bootable system…

Click “Proceed to SliTaz installation”. After a few seconds… SliTaz is installed. Reboot!

You’ll have to set up your locale and keyboard just once more and voila, a desktop Linux that boots in seconds!

Command line install

Here’s the simple recipe for installing SliTaz from the command line. Note that even if started headless from the LiveCD, this will install a full GUI and take up around 100 MB of space.

The first thing to know is that the installer is invoked by the command tazinst.

The second thing to know is that you need to create a configuration file for it.

The third thing you need to know is that you need to partition your disk first. Naturally, this is what we’ll do first.

WARNING – PARTITIONING ERASES THE DISK THAT YOU ARE PARTITIONING

Type these keys in order to erase /dev/sda and set up a single partition. If you have never done this before…. read up on partitioning with fdisk. It’s a whole topic on its own! Hit return for each new line of course.

fdisk -l # identify your disks first
fdisk /dev/sda
o # create a new, empty partition table
n # new partition
p # primary
1 # partition number 1
1 # first cylinder
  # (just hit return here to accept the default last cylinder)
a # toggle the bootable flag...
1 # ...on partition 1
w # write changes and exit

Great, you have a new partition on /dev/sda1. Now create the config file.

tazinst new configfile

Once you have created the configuration file, edit it.

vi configfile

Three key things you need to change are as follows:

  • TGT_PARTITION – the partition you will be installing on – in our case, /dev/sda1 or whichever you configured earlier
  • TGT_FS – the file system you want to use – for example, ext4
  • TGT_GRUB – “yes” unless you intend to install Grub manually afterwards.
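
The three lines might then end up looking something like this (a sketch – check the comments in the generated configfile for the exact expected syntax):

TGT_PARTITION="/dev/sda1"
TGT_FS="ext4"
TGT_GRUB="yes"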

Finally, run

tazinst install configfile

After a few seconds, the install will be finished and you can reboot.

Post-install customizations

SliTaz is very light. Extremely light. You might even say it’s missing some packages you would expect as standard. You should think about doing some initial setup…

su
tazpkg -gi vim
tazpkg -gi htop
tazpkg -gi tmux
tazpkg -gi sudo
tazpkg -gi iptables # ...and whatever else you want...
# one tazpkg call per item to install
/etc/init.d/dropbear start # start the dropbear SSH server

vim /etc/rcS.conf
# add the dropbear SSH server to the startup daemons line:
#   RUN_DAEMONS=" ...dropbear"

vim /boot/grub/menu.lst
# change the boot timeout:
#   timeout 2

visudo
# add your own users to the sudoers list

And that’s about it. Some extra commands that may be different from what you may know from elsewhere:

poweroff # instead of shutdown
tazpkg recharge # sync package list
tazpkg info (package)
tazpkg description (package)
tazpkg search (string)
tazpkg get-install (package name) # install from repo
tazpkg get (package name) # download from repo
tazpkg install (TGZ file) # install from local file

Bonus – tpgi

Instead of directly using the restrictive tazpkg, try using my wrapper 🙂

Switch to root and run the following

tazpkg -gi git
tazpkg -gi bash
git clone git://github.com/taikedz/handy-scripts
cp handy-scripts/bin/tpgi /usr/bin/

This will set up the tpgi command which you can use to make life with tazpkg a little easier… run the command without arguments for help. Try:

tpgi install htop vim tmux sudo

Now you can install multiple packages from one line….!

tpgi search and gcc 3.4

Searches for packages containing the term “gcc” then filters the results for version 3.4

I Won’t Go Back to Buying Mac

[Image: Mac keyboard]

Here’s a little topic I wanted to explore in written form – why I have used Mac for so long, why I still have a Mac as my main desktop…. and why despite this I won’t buy Mac again.

I Used to Love the Mac

My first computers were of course not mine – they were my dad’s. I have a vague recollection of us having a PC with 8” floppy drives and having to type commands… this was probably in 1987 or so. But that memory never really took hold, for very soon after, my dad bought a Mac: an LC II that I think is still in the cellar due to me insisting on not throwing it out.

It was graphical, it was friendly. It supported 16 colours (and not just 8 colours like many PCs still shipped with as standard). There was no command line, you could just click for everything. It was a revolution in home computing and we were on the cutting edge.

We were continually treated, with Macs, to the newest and greatest home technology: stable systems to run months without a single application crash (System 7.5.1 I particularly single out), advanced graphical UIs (Mac OS 9 was great comfort to the eye at the time), easily automated applications via AppleScript, including a fully scriptable Netscape Navigator; the first laptops and desktops with built-in Wifi, the first LCD desktops where the entire computer was hardly wider than the screen, the advent of UNIX-based systems on the home computer. Every Mac shipped with a full productivity suite included (what would become iWork), as well as a full media editing suite (photo editing, video sequencing, and audio production, which collectively would become iLife), and a couple of well-designed, full-on 3D games to boot. There was hardly anything you couldn’t do with a Mac I thought…. except perhaps write programs for Windows.

When the time came for me to go to university, I believed I would have to get a Windows PC to allow me to do some proper programming, not knowing that we’d be using many different, equally viable (or even more so) systems for programming on. It was a mistake I do not regret, as it had great learning benefits for me, gave me the ability to understand the Windows paradigm so many people endure, and the ability to operate in the average workplace; but after that laptop died (in a literal puff of smoke, after an ill-fated attempt to “repair” it), I was back to buying a Mac in 2007.

Even in 2011 I was agonizing over whether or not to spend hard-earned cash on a new MacBook Pro or not. I drew up my list of pros and cons, and decided, over a solitary steak and pint, that yes, I did want that Mac after all.

It would be the last Mac I would ever personally buy.

The Mac – the good

The year is 2015. I still have that MacBook Pro. And it still serves as my main workhorse for spinning up Linux virtual machines. 4 years on, and it’s still the most powerful computer in my home.

It has a quad-core i7 hyper-threaded processor at 2.2 GHz, effectively showing up as 8 cores – it’s the same processor family as found on entry-level business servers. I’ve upped the RAM to 16 GB. It has a 500 GB HDD.

Most computers even today ship with 4 GB of RAM, a lesser i5 processor clocked at 1.7 GHz without hyper-threading, and still a 500 GB drive.

Needless to say, that Mac was a fantastic investment, as it remains more powerful than an equivalently priced Windows PC on today’s market.

So why will I never buy Mac again? Put simply: Apple has chosen to go where I will not follow.

Apple – the Bad

Even back in 2011, the Apple Genius who was trying to sell to me was extolling the benefits of the new MacBooks with no CD/DVD drive: “who uses CDs these days anyway?” Well I do, for one. I experiment with computing, and in doing so sometimes break my systems. I need to reinstall the system sometimes. The one time I needed to reinstall OSX, I had to purchase a brand new copy. Gone are the days of providing a free re-installation DVD. These days, you’re lucky if you can connect anything at all.

I don’t tie up my bandwidth with movies and music I have to wait for and download, online, every time I want to consume them. I still buy DVDs and CDs because, in case you haven’t noticed, online “purchase” does not allow you to own a copy – just the license to watch, if it’s still available on the provider’s website (remember mycokemusic.com?). We do not own “our” online movies and music – only the permission to watch them, which can be revoked at any time – with no refunds.

I have become a near-full Linux convert. I use Linux for my personal machines at work, my secondary and tertiary laptops run Linux, and my private cloud servers all run Linux.

Only my Mac doesn’t run Linux, and that only because when I tried to install Linux on it, the graphics card and wireless card decided to throw a hissy fit. Apple’s choice of highly-proprietary components meant that despite the best efforts of open source developers, Apple held on closely to the proprietary mantra: the machine is Ours, you only have a license to use it. You can’t even “own” something as rustic as a tractor these days.

I feel I am not in control of my Mac because I have been told what I can and cannot run on it. I own the machine, but not the software. If it breaks, I just get to keep the pieces – not the ability to tweak and fix.

My hardware today

My preferred computer for “getting things done” nowadays, the one I am currently typing away on, is a Lenovo Flex 15. Lenovo do very good hardware; their pro line, the ThinkPads, are durable business machines much like the MacBook Pros in quality.

They’re also generally highly compatible with open source drivers and mainstream Linux distributions. Where I’d hesitate before buying a Dell or HP laptop, wondering whether Linux will work on it, I have virtually no qualms when buying a Lenovo laptop, knowing it will likely take the erasure of Windows just fine. Not that this couldn’t change in the future.

Open Source – Freedom and Privacy

Lenovo was in the news recently for a piece of advertising software called Superfish they had included in new laptops and desktops for a few months in their Windows deployments. This particular set of software and configurations meant that not only were users seeing even more advertising in the web browsing experience, but implementing the advertising solution was also breaking the very core security mechanisms that keep all other parts of the system secure. Lenovo makes great hardware, but they aren’t immune to woefully bad decisions.

Thankfully, they reverted their decision to include this software as soon as their technical analysts realized what had happened, and issued fixes, but it has damaged the company’s reputation.

Persons like myself who chose to erase Windows completely were not affected.

This is why I use Open Source Free Software: to maintain control over my own digital assets, and freedom in my digital life. I am fully aware that my digital identity is tightly woven into my real-world identity, whether I want it to be or not.

I now run Linux on nearly everything – more specifically, I run Ubuntu on my laptops, and a mix of Ubuntu and CentOS on my servers.

I can choose what software is on it. I can choose what software is not on it (have you not noticed how there is some software on Windows that you cannot get rid of for love nor money… pestering you for upgrades at best, selling you out at worst). I don’t have to pay an arm and a leg for it either.

What’s more, I remain in control of my data. Not only on my computer, but also in the Cloud. Windows will try to shove you onto SkyDrive and Office 365 Online. Apple is shoe-horning you into iCloud services (yeah, sync your photos all over the place, you can trust Apple… hmmm). Google is trying to get into both spaces, storing all your photos “for” you and muscling in on the online office suite as well. You can’t get an offline Adobe Creative Suite anymore – just keep up the eternal payments if you want to continue being able to access your Photoshop and Illustrator projects. At least they didn’t discontinue their editing suite altogether, like Apple did with Aperture. Gone is your investment.

If I ever stopped paying once for any of these applications or services, or if the service is suddenly discontinued, I would stand to lose all my data – everything I’ve purchased, everything I’ve created, either because I no longer have the software to read the files, or because the files themselves have been whisked away to an online vault “for my convenience”. That’s why there’s hardly any storage on Chromebooks. Surrender your data to the Cloud.

I am staying firmly on Linux and virtual private servers that I control and can pull data off of as I wish. I can fully program the computer to make it do what I want – and stop if from doing things I don’t want it to do (granted, some tasks are easier than others, but at least it’s actually possible in the first place.)

One Linux distribution in particular, Ubuntu (the very same I use!), tried to follow the Big Boys like Apple, Google and Microsoft: Canonical announced a partnership with Amazon in the form of search functionality, where any keywords used for a file search were also sent to Amazon and other online providers. Thankfully, it was easy to purge from the system the minute I heard of it. You cannot defenestrate such “features” with the other Big Three.

Building Trust

I use open source software from centralized trusted software repositories (which were the spiritual precursors to app stores) – I don’t need to hunt around on the Internet to find some software whose source I do not know. On Windows, I constantly need to fret before installing an app: Does it have a virus? Does it have a trojan? Will it send all my purchasing, credit card details, photos and other identity to some unknown third party?

What I get from the centralized repositories constitutes my base web of trust – and that base web offers a collection of software so large and varied that I know I can get a tool for any job, be it office, media, programming, scientific or leisure, and more.

No piracy = no legal troubles AND no viruses.

Or at least, a vastly reduced risk compared to downloading anything willy-nilly from random websites. And personally, I expand that web of trust with informed decisions.

I use LibreOffice which allows me to read and save in Microsoft’s document format if I need to, but I mainly use the Open Document Format to ensure I can still edit them in decades to come, and that I can share documents with anybody who does not want to shell out for Office Pro, Office 365 or GoogleDocs.

I use ownCloud for my file synchronization so that I can keep control over what is stored, and where. It replaces services such as DropBox, Google Drive, Sky Drive and iCloud without trying to force me to store online-only and forgo local copies. If my account is terminated on the latter services, there’s no guarantee I’ll also still have the data that it ran away with. ownCloud is in my control, and I know I have the copies locally too.

I use Krita and the GNU Image Manipulator instead of PhotoShop, InkScape instead of Illustrator, Scribus instead of InDesign, digiKam instead of Lightroom. I don’t need to be online to do any of this.

I choose freedom.

In the words of Richard Stallman and the Free Software movement: “Free Software is a matter of Freedom, not price.”

Piracy might make things surreptitiously free (as in “a free lunch”), but it still ties you to the control systems and spyware that are rife on the Internet.

Apple, like so many other computer manufacturers and software licensors, has taken a route I cannot go down, one I will not follow. It has taken a route that specifically makes it difficult for me to remain free. It has taken a route that stifles experimentation and learning. It has taken a route that privileges perpetually tying-in my spending on one side, as well as the monetization of my identity on the other, whilst at the same time denying me ownership both of what I purchase and what I create, and where the only solutions are either piracy… or just leaving altogether.

[Image: “forget piracy” – a graphic of my creation, released under CC 4.0 Attribution Share-Alike. Anyone who wants to make a better derivative is most welcome!]

About that: Thalys’s response to All out

Thalys, a French national train operator, recently suffered a backlash from an All Out campaign after a member of Thalys’s partner staff reprimanded a lesbian couple for kissing on the platform, denouncing the activity as “intolerable”.

Thalys yesterday released a French language press release, which I have opted to translate below.

Please note that this translation has not been performed from a professional standpoint, and that only Thalys’s original official press release is relevant for further quoting.
