Getting into the world of Linux might seem a little daunting to anyone who doesn’t come from a formal computing background. For starters, there seem to be so many different “Linuxes” to choose from. Ubuntu, distributed and maintained by Canonical, seems to be the most popular for home users, whilst a newer contender, Linux Mint, is on the rise. Are they the best? Are they easy to use?
- Is [such-and-such-Linux] for you?
- Will you be able to use it on your own?
- How do you get help with no support hotline?
- Does help for one Linux work on another Linux?
- Does a program on one Linux run on another?
- Can you run Windows programs on Linux?
- Help, I’m scared of the command line!
- What is Linux anyway?
The chapters in this post are:
- All Linuxes are Linux but aren’t Linux
- What is GNU? What are distros?
- Linux Distros and Families
- Desktop environments
- Graphical package managers
- Command line
- Package managers
- Root, users, and the sudo command
- Running Windows programs on Linux
- How to get help
1) All Linuxes are Linux but aren’t Linux
First things first. Strictly speaking, “Linux” refers to a small piece of software that sits at the very heart of the operating system – the Linux kernel.
Sparing you the details, it’s what is known as a monolithic kernel, which means that the drivers and such that control the machine are coded into it. Some distros (we’ll get to what that means exactly in a minute) ship with very basic versions of the kernel, some with all manner of driver functionality built-in. Sometimes you’ll find you “need to recompile the kernel” to get a driver working.
What does this mean for the average user? It means that most Linux variants already support your computer – when you install a mainstream Linux variant, or distro, you often do not need to hunt around for drivers. There are exceptions to this observation, but it holds true for most desktop PCs and a wide range of laptops, as well as printers, camcorders, microphones etc.
The Linux kernel on its own however does not do much for us. It lacks the basic tools and programs to make a computer usable in the slightest. That’s where GNU/Linux and Distros come in.
2) What is GNU? What are Distros?
The GNU project, to sum it up extremely briefly and over-simply, started out as a project to build fresh, new versions of all the essential tools that run on a UNIX system. UNIX is an operating system, just like Windows and OS X are operating systems, and just as proprietary too.
The GNU project aimed to recreate its tools and software in versions that were non-proprietary – freely distributable with no limitations, and where the source code was not held ransom by the copyright holders. Instead, the code was placed under “Copyleft” protection (a modified way of asserting copyright), meaning any project using it had to make its own code available under the same license.
GNU however was not an operating system, but a collection of software for an operating system. It lacked a kernel to power it, and that’s where Linux came in: combining the Linux kernel and the GNU applications (and with some more hard work to get things running), GNU/Linux was born, a free, libre, gratis operating system.
But even at that, an operating system does not do word processing or photo editing – word processors and photo editors do. You need applications to do more useful stuff. You could indeed distribute a bare GNU/Linux to someone and let them compile their own apps. Or you could include apps with your distribution copy. You could include lots of apps or very few, only apps for office work, only apps that are known to be stable; you could include scarce driver support in your distribution’s kernel… you get the idea.
A GNU/Linux Distro then is a packaging of:
- A compiled Linux kernel (with or without extra trimmings)
- Software from the GNU project
- Any other custom tools built for the project (most distros have brand-specific tools)
- Any other apps the distributor might care to include and tweak
- Customized settings (affecting what tools run in the background from the start)
- Modified software: most of the packages above will have been altered from their original source to fit better into the distro’s own customizations
When we say we’re “running Linux,” what we really mean is that we’re running an operating system that uses the Linux kernel and the GNU tools – GNU/Linux. Many differences thereafter are cosmetic in nature: with enough tweaking, you could arguably make one distro look and feel like another. However, because the software shipped with each distro may have been modified to better suit it, a package compiled for one distro may or may not run properly on a different distro – as with many compatibility questions in the GNU/Linux world, “it depends.”
It should be noted that from this point down, I am going to be mentioning variations, differences and alternatives in the Linux world, all of which are hotly contested and have long entrenched camps. There is arguably no One Single, Better GNU/Linux, no completely better tool, and at the end of the day, even the most tenacious hardliners recognise that the diversity is what makes the GNU/Linux ecosystem thrive and flourish as it does. Which GNU/Linux is better for you is thus entirely up to you, based on what you can read about it, and find out by searching on the Internet at large.
In the GNU/Linux world, there is a strong emphasis on self-learning, so I’d advise looking up any new idea you come across or that you don’t fully get.
I include links throughout this article to refer to topics of greater interest, but don’t let that lull you into any false sense of completeness. Keep searching!
3) Linux Distros and Families
So why are there so many distros out there? The reason for them existing is that each one is packaged and delivered by a group of people with specific common goals, and each group differs from another. Some distros are compatible with one another, others aren’t, but the easiest way to think of distros is in terms of families.
Any GNU/Linux distro can make a copy of the base of another distro, rip out any internals, and replace them, to be packaged with different kernel tweaks, desktop environments, bundled applications etc. This is called forking – akin to the idea of a fork in the road, where one distro continues on its path, and another forks off of it to pursue a different direction.
Each family I mention here has two specific properties:
- It uses the Linux kernel
- When it first came into existence, it was built from the ground up – not forked from a previous distro. This generally means it has its own package management, a custom kernel, and policies affecting the way the filesystem is structured.
Debian is a distribution whose primary goal is system stability. Each released version of Debian GNU/Linux consists of a stable Linux kernel, known and tested stable versions of the GNU packages, a stable version of the GNOME Desktop (details further down), and known stable versions of other software such as office productivity suites, email applications, and media players. All of these are modified so that everything plays together nicely, and everything is subsequently tested for stability. The result is a highly stable release, but one which never runs the latest versions of software, due to the long integration and testing phase.
Forks off the Debian distribution do not have to adhere to any of these philosophies – some distributions prioritise stability enough to produce a stable system, but with newer software, perhaps a more concerted testing effort, or use different Desktops, and so forth. As said previously – a fork can rip out the internals and replace them at will.
Many popular desktop distributions are based off of Debian, including the ubiquitous Ubuntu (with the goal of bumping Windows off the top spot as the de facto standard system on new computers) and its main rival Linux Mint (which emphasises letting the community drive the design decisions). Other notable mentions include Bodhi (whose goal is to be a better lightweight desktop system, with no frills, using the Enlightenment desktop environment) and elementary OS (whose goal is to provide the most intuitive and useful desktop environment – and whose design, it is fair to say, borrows many of the better choices from Apple’s OS X).
Another very notable mention is the Knoppix project, which aims to produce a useful distro that can be run straight from a CD or DVD without needing to be installed on the machine – it loads straight to the computer’s live memory, after which the CD can be removed entirely. You can access an existing filesystem on the local hard drive and work with that, but once you reboot without the Knoppix disc, you’re back into your original system. This concept is called “Live CD” and has since been adopted by a number of distros in bare-bones fashion if only for showcasing the OS before installing it. It has also spawned a sub-family of its own within the Debian family.
Many more abound, and forks off of Debian directly and off of other forks have led to a wide range of distros in this family alone, as this tree attests.
The other main family of GNU/Linux distros is the Fedora family, from RedHat Inc. RedHat’s goal is to create a solid enterprise operating system, the “RedHat Enterprise Linux” distro (RHEL), which is a stable distro. Since stability brings what it does in terms of software aging, and not to be outdone by competitors, RedHat also publishes its other flagship distro, Fedora, a fast-moving distro whose each iteration generally becomes obsolete within a year and a half.
Whilst the source code to RHEL is freely available to download, a paid-for subscription is required to get any official help. The business model thus suits companies that need some form of guarantee of support and code fixes – something not many other Linux distros provide.
RedHat consequently also provides a system geared more towards server administration, computer network configuration, identity servers and the like.
The Fedora family has two notable forks. The first is RedHat Enterprise Linux itself, a stable operating system forked every few years off of a version of Fedora. This allows users to stay on the “bleeding edge” with Fedora (sacrificing guarantees of stability, in direct contrast with the Debian philosophy), while every so often a version of Fedora is designated by RedHat Inc to form the basis of the next RHEL.
The other major fork is the CentOS project, which stands for “Community ENTerprise Operating System” and is actually a fork of the RHEL project. Its goal is to remain as close as possible to 100% binary compatible with the “upstream vendor” (RHEL): anything that runs on RHEL should run on CentOS, and vice versa. It allows smaller companies to benefit from the power of RedHat, its commitment to enterprises, and an open community – but without the price tag, at the cost of delays in receiving fixes that RedHat’s paying customers requested, the inability to submit bugs to RedHat, and no SLAs for fixes.
RedHat used to be the parent to Fedora, but now Fedora is the parent to the rest. This is difficult to show in diagrams, and for historical reasons, many still refer to this family as the RedHat family.
If you look at RedHat’s family tree, you can see that there are significantly fewer sub-trees, most distros forking directly off of the RedHat project itself.
Slackware, Gentoo, Arch
I mention three different families in this one section, as they are geared much more specifically towards experienced Linux users. Aaron Griffin of Arch Linux expresses the idea behind these families of distros clearly:
If you try to hide the complexity of the system, you’ll end up with a more complex system.
- Slackware started off as a distribution maintained mainly for self-interest, with no commitment – hence its name. Its goal was to provide a distro that was structurally simple and made few changes to upstream software distributors’ packages. Note that this distro breaks my earlier rule about not being forked: it was in fact forked from an unpopular distro, SLS, which died very soon after, so strictly speaking the family is SLS – but that branch is dead, and Slackware was its only known fork. The king is dead, long live the king.
- Initially named Enoch, Gentoo was renamed to its current title early on, and its goal is to provide the most bare-bones operational Linux out of the box – the kernel, the standard tools, and a package manager. Everything else, from drivers to network connectivity to custom screen size support, needs to be configured by the user. Typically, this is the system for users who need a base to start from, but who need to ensure that no extraneous drivers or packages are ever included.
- Finally, Arch Linux takes a “configuration file oriented” approach to operating system management, meaning that most operations and changes in setup are communicated to the system through well-structured, well-documented config files.
I hope that gives you an idea of how different distributions come to being, why they continue, and what advantages this diversity brings. But more importantly, it lays the ground for the rest of the discussion, and how it affects you as a user looking for help. For a full GNU/Linux family tree, check out the graph at this page.
4) Desktop environments
A Desktop Environment is the graphical interface you typically see as a user, as well as the set of applications it brings with it, such as text editors, word processors, and system management tools. The graphical interface is provided by a component called the Window Manager, which controls how windows look, where menus are placed, what the mouse does, window effects, sounds, icons, how alerts display, and so on. MS Windows uses a desktop environment called “Windows Explorer”; Mac OS X uses one called “Finder”. The variety of desktop environments in Linux can be confusing when you come from either of those, since on Windows and Mac you never switch from one to the other – you never run, say, Finder on Windows, or Windows Explorer on Mac.
But in a GNU/Linux system, you can change desktop environment just as easily as switching Internet browser.
For example, the two most popular environments (as bundled with distros) are GNOME and KDE, both of which (until GNOME 3 – the MATE environment carries the torch in GNOME 2’s stead) provide a fairly Windows-like system menu in the bottom left of the desktop, set in a task bar. An alternative environment named LXDE does this too, but lacks bells and whistles, which enables it to take up less memory. All provided multiple desktops, or workspaces, long before OS X did, and Windows still doesn’t include this as standard.
The Enlightenment environment (currently at version 0.17 and known as E17) is also lightweight, and features the ability to add customizable panels (in lieu of task bars) and brings up the system menu (akin to right-clicking) any time you click on the desktop, which negates the need to go to the top left of the screen where the system menu is kept, or the need for a system menu location at all.
GNOME 3 completely removed the very idea of the Desktop, featuring an activities-oriented interface much like a smartphone (though without the one-window-full-screen-only nastiness Windows 8 implemented). Ubuntu’s controversial Unity also breaks away from the desktop metaphor; it introduced features such as including results from Amazon Inc in searches made from the system menu, even when you were just looking for files on your computer – implying that your personal file searches were going through Amazon, and thus a breach of privacy. This has since been fixed (end of September 2013).
Pantheon, which is part of the elementary OS project, uses a permanent menu at the top of the screen to display statuses and an application menu, and a Dock at the bottom. Its styling borrows heavily from the OS X Finder (smooth grey, folder icons and all), although it does not allow files to reside on the desktop, encouraging users to keep their files organised.
Nearly all desktop environments are built on top of the X Window System, which is a framework for the base element concepts such as windows, buttons, progress bars and so forth. More recently a new windowing framework called Wayland has started to appear; it promises to be better than X, but it is still in very early stages and will not be ready any time soon.
5) Graphical package managers
Well before the rise of the app stores, GNU/Linux distributors were running software repositories online providing the latest versions of Open Source software, Free Software, or Free and Open Source Software.
Most desktop distros provide graphical front-ends (GUIs – Graphical User Interfaces) to their package managers, allowing users to simply search for packages and install them, along with any prerequisite software, at the click of a button.
In these package managers, you can search for the name of a package (say, LibreOffice, VLC Player, etc.) and a slew of related packages will appear. Some distros also have an additional, simpler system such as the Ubuntu Software Centre or the Fedora AppCenter, in which just the application to install will be shown.
In the case of the simple front end, you might be required to hunt down the more generic title, and the package manager will then notify you that a number of other packages need to be downloaded to support it – all fine and dandy. The more sophisticated software centres will hide this from you. Whether that is good or bad is up to you to decide…
As for choosing between a Software Centre and a GUI package manager such as Synaptic, I won’t dwell too long on the subject – the specifics are covered under the Package Manager topic below – but it is important to note that these graphical tools are simply pretty windows sitting in front of their more versatile command-line cores.
6) Command line
About that command line. It’s that place where everything you do is via a keyboard, and the mouse is simply a silly little adornment on your desk, the touchpad an ineffectual space on your laptop. It’s where you need to type text to tell the computer what to do.
It is seen as the bane of regular users. But it needn’t be – it is the most powerful tool a system administrator can have at hand, especially with the right “shell.”
Unless you’re running one of the more advanced-user-oriented distros, you won’t often need to face the command line, and even if you do you’ll generally have a browser open at the same time as doing so, with instructions from a How-To page guiding you through. It’s not as scary a beast as you might think, although it definitely requires some concentration when doing something new – lest you put a dent in your operating system, or erase files you shouldn’t.
The program that runs the command line when you’re using a primarily graphical interface goes by a few different names – a shell, command shell, terminal, console, terminal emulator, to name a few. Generally, its name is listed as some variant of “Terminal” and is filed under the applications menu under “System tools.”
The terminal program then runs a “special program” which understands commands and has some other advanced syntax options for chaining together several commands. It’s a bit like typing computer source code into a text editor, with the difference that every time you go to start a new line, what you just typed gets sent to be run. If you copy several lines of commands from a web page and paste them into your command shell, each line runs as a command. You can save these commands to a text file, and run that text file as a command.
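As a minimal sketch of that last point (the file name and its contents here are just examples):

```shell
# Put two commands into a text file -- this makes it a "shell script"
cat > myscript.sh <<'EOF'
#!/bin/bash
echo "Hello from my first script"
pwd
EOF

# Mark the file as executable, then run it
chmod +x myscript.sh
./myscript.sh
```

Each line in the file runs exactly as if you had typed it at the prompt; the first line simply tells the system which shell should interpret the rest.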
The “special program” I mentioned earlier is a shell program (do you see where all the names came from now?). The most common shell program on GNU/Linux is “bash”, but there are others. “bash” is the program name, and also stands for “Bourne Again Shell”, a GNU replacement for the UNIX-proprietary “Bourne Shell”, itself known simply by its program name “sh”. Another is the “C shell” which, apart from the pun, is named for borrowing its syntax from the C programming language.
The differences between all of these are mainly regarding improved syntax for scripting, and built-in commands. The details are out of the scope of this post, but I think you get the picture. If a complex command or script does not work for you, it may be because you are running the wrong shell. Default shells can be changed (Google it) and you can start a different shell just by typing its name.
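A quick way to see this in action – each shell is just a program, so you can run one from inside another (this sketch assumes bash and the plain sh shell are both installed, which they are on virtually every distro):

```shell
# Ask the minimal 'sh' shell to run a single command and exit:
sh -c 'echo "hello from sh"'

# Your default login shell is recorded in the SHELL environment variable:
echo "Default shell: $SHELL"

# To switch permanently, change your default shell (you will be asked
# for your password):
#   chsh -s /bin/sh
```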
7) Package manager
In the earlier days of GNU/Linux distributions, programs were distributed as source code to be compiled for the specific system. Binary packages could be created however, for deployment on other machines running the same distro, or at least using the same Package Management System.
A package manager is basically the application that lives on your operating system and installs and uninstalls software. Unlike the typical installers you get on Windows and Mac OS X, however, the package manager knows exactly what you already have installed on your machine and what versions they are; most importantly, it can decide where libraries go – in the Windows world, these are the “DLLs” (Dynamic-Link Libraries) oft mentioned as being… missing.
For example, in Windows, Acrobat installs a version of Adobe’s PDF processing DLLs to the Windows system directory, making them available across the system. Some third-party PDF creators use the same DLLs, but ship different versions. If their installer does not cater for the idea that other PDF software may already exist and have installed the DLLs, it overwrites them with incompatible versions, causing crashes. The difficulty of managing which applications need what versions of which DLLs, and from where, is known as Dependency Hell – a problem GNU/Linux systems avoid by using the package manager, which retains authority over what libraries go where and in what versions. Each package file specifies what libraries it needs, and the package manager accommodates accordingly.
Early on, Debian created its own package manager tool, ‘dpkg’, which installs packages distributed as DEB files. In parallel, RedHat developed their own package type, the RPM package file, which was managed by the corresponding ‘rpm’ program. DEB and RPM files can be shared in CDs and DVDs, just like installers for Mac and Windows programs can.
Package managers are typically used on the command line, their birthplace, and are commands (or “incantations”, as shell commands are commonly jokingly called) that look like “dpkg -i mypackage.deb” (in this case, dpkg installing a DEB file).
Computers connected to a network can pull files from the Internet, from servers called software repositories – often these repositories are owned and managed by the developers of the distro, so Debian has its own repo servers, as do RedHat and Ubuntu. This is not a rule or requirement though, and some distros can choose to connect to other repositories. The only real requirement is that the repository serves the expected file type, though caveats about code tweaks as mentioned earlier in section 2 apply. Thus Bodhi Linux connects to Ubuntu’s servers as both use DEB files, and Bodhi was forked from the Ubuntu project. Independent repositories can be created, and some companies run multiple repositories, for example one repository for “Free and Open Source Software” and another for “free but proprietary software” such as some drivers, and programs from proprietary vendors, like Adobe Flash.
A separate program is often used to take advantage of this online ability, which in turn calls the original package manager. Debian’s main repository manager is called APT; RedHat uses the YUM tool, which originated in the YellowDog distro – a less common case in which such a fundamental tool is brought from a derivative back into the parent (and here, the main family) distro, helping it spread to the other derivatives.
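On a Debian-family system, the everyday incantations look something like this (htop and mypackage.deb are just illustrative names; anything that modifies the system needs root, hence sudo, so those lines are shown commented out):

```shell
# Refresh the local index of what the repositories currently offer:
#   sudo apt-get update
# Install a package -- APT resolves and fetches dependencies for you:
#   sudo apt-get install htop
# Install a standalone DEB file directly with dpkg:
#   sudo dpkg -i mypackage.deb
# RedHat-family equivalents:
#   sudo yum install htop
#   sudo rpm -i mypackage.rpm

# Read-only queries need no root at all:
dpkg -l | head -n 5       # the first few packages installed on this system
apt-cache policy bash     # available versions of a package, and their repos
```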
Repositories can be both added and removed from your GNU/Linux installation, and some software will ask you to add a repository before you can install the software.
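On Debian-family systems, the list of configured repositories is just plain text under /etc/apt, so you can inspect exactly where your software comes from (the PPA name below is purely illustrative):

```shell
# The repository configuration lives here:
ls /etc/apt/

# On Ubuntu and its derivatives, a helper exists for adding third-party
# repositories, after which the package index must be refreshed:
#   sudo add-apt-repository ppa:some-team/some-app
#   sudo apt-get update
```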
As further examples, Arch Linux uses the Pacman package manager, Gentoo uses Portage, and Slackware uses ‘pkgtool’ (which, it should be noted, does not do the crucial dependency checking – but then, Slackware does not cater for the bells and whistles either).
8) Root, users, and the sudo command
Another important concept in the GNU/Linux world is the existence of the ‘root’ user profile. This user has what could be described as God Authority in the realm of the machine. It can modify the system, change critical parameters, and change the security settings of anything. Handled badly, it can break programs and nullify security. It can wipe the system even as it is running.
Needless to say, it is NOT good to allow just about anyone access to this account. It is even bad for you to be using this account day-to-day. Imagine if your user profile had the abilities of the root profile and you downloaded something unstable (or even malicious) from the Internet. As a non-root user, this would not be so much of a problem from a system point of view – but if you were root, your machine itself could be totally compromised, or ruined.
Of course, sometimes you do need to perform actions as root. This is where the ‘sudo’ command comes into play. If your user profile is set up to be able to use sudo (in the user-friendly distros it often is – check in your system preferences that your user is an “admin”), you can begin a command with the keyword “sudo” to run that command as root. For example, to install software in Debian you need to run the apt-get command as root, so the command would be “sudo apt-get install htop” instead of just “apt-get install htop”. You are then asked to provide your own password, which confirms to ‘sudo’ that you are indeed the authorized user (and not someone stopping by your machine!), and it then runs the command with root authority. Once done, any subsequent command runs without root authority, and you will need to use “sudo” again to run another root command.
Some manuals and guides will advise you to use the “su” command, substituting “sudo” with “su -c”. Normally you can use su to log in at the command line as a different user (for example, by issuing “su bob” to log in as the user ‘bob’; you’ll be asked for bob’s password), but invoked with no arguments, it logs you in as root (so long as you know the root password).
When working on a laptop, it’s generally better just to use sudo so you don’t run any programs as root by accident.
When you are administering a server, nearly everything you do will be as root (note: when you are administering a server only!), in which case su becomes the preferable option. There are further subtleties and nuances in how each works relative to the other. Whilst starting out, you may want to stick to sudo…
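A harmless way to see the difference is the whoami command, which simply prints the name of the user running it:

```shell
# Who am I right now? Prints your own username.
whoami

# The same command run through sudo executes as root, so it would print
# "root" -- after asking for YOUR password:
#   sudo whoami

# The su equivalent asks for ROOT's password instead:
#   su -c whoami
```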
9) Running Windows programs on Linux
One impediment to moving to Linux is the fact that so many common applications in the office and home spaces are Windows-only. There are however plenty of alternatives in the Free/Open Source world that run natively on Linux – just search for your favourite app over at AlternativeTo.net or at OSAlt.com for suggestions as to what alternatives to get. They even offer alternatives for Mac and Windows as well!
If however you just cannot part with your favourite Windows application, you can use a tool called Wine, perhaps packaged in its more friendly form, branded PlayOnLinux. Wine provides a compatibility layer – DLLs and structures such as a Windows registry and path parsing – within the GNU/Linux environment, allowing Windows programs to “think” they’re running in a Windows box. It’s not always complete, and some large and complex Windows apps may not work, but for the most part, programs remain largely functional. The WineHQ maintains AppDB, a constantly updated database of applications that have been tried with Wine, recording which ones work perfectly, partially, under certain conditions, or not at all. The info really is all out there.
10) How to get help
I have left the MOST IMPORTANT AND CRUCIAL topic for last. Really it should be: how to assist yourself (and avoid being shot down on forums). I must insist on the crucial self-help implications of running your own GNU/Linux machine. One thing that is extremely important to note about the GNU/Linux community is that using Linux means that you’re ready to research and learn, and work for your dinner.
No details, but an external link instead. Sure sign a Wikipedia page has been edited by a *nix user… pic.twitter.com/OR2KrAq0yz
— Tai Kedzierski (@TaiKedz) August 21, 2013
Anytime you come across a new command or are instructed to use a command for the first time, you MUST get into the habit of reading its manual.
You’ll hear about things called “man” pages, which are the manuals that can be accessed on the command line. To display the manual text, the `man` command uses a program called `less`, which is an advanced text display application for the command line, complete with search capabilities. To view the manual for command mycommand, type `man mycommand`.
Assuming you currently have access to a Linux machine (check my other Linux primers for guidance on installing one without uninstalling your current Windows/Mac by using virtual machines), let’s check the manual page for the less command.
$> man less
- You can scroll down the text with the “z” key and scroll up with the “w” key.
- To find a string of text, type a “/” character and then the text to find, such as “/pattern“
- The word “pattern” will be highlighted in the text, at the top of the page displayed. Type a lowercase “n” to go to the next occurrence of the word. Hold shift and press “n” (effectively an uppercase “N”) to search for the previous occurrence.
- Press “q” to quit.
The above features are common to man and less: man uses less to display manual pages; less is a command to display any text file. For example, if you list the contents of the log directory:
$> ls /var/log
You should see a number of log files. For example, there may be a log of all Xorg activity. To view it, type
$> less /var/log/Xorg.0.log
Unlike places like Yahoo Answers or your friendly Facebook group, Linux forums come in varying shades of sanguine should you fail to explain exactly what you are trying to do, why you are trying to do it, and what steps you have personally already taken to try to solve the problem on your own.
The friendlier communities will often be helpful to absolute newcomers and take you by the hand when you ask a very beginner question. The Ubuntu and Linux Mint forums have a policy of being nice, and as such they are often considered the better places to start if you’re just dipping your toe into the GNU/Linux world for the first time.
This is in deliberate contrast to other forums, where failing to be diligent upfront can be met with resistance ranging from people responding with links to the Google home page, or telling you to “RTFM” or that “PEBKAC”, to outright name-calling and derision. The Linux kernel development team itself is particularly well known for being unforgiving to persons who haven’t done their homework before addressing them.
But it’s not all doom and gloom!
The following is a quick checklist of what to do before posting for help in a forum. Note that when I say “search” I mean use Google, or Bing, or DuckDuckGo, or whichever search engine is your favourite, so long as it searches the web, not just its own servers like Yahoo Answers or Answers.com do.
These steps may sound pedantic but, when considering how many people get burned every day in failing to do this research, I believe it is necessary to spell things out.
a) Formulate your issue descriptively in one sentence
It seems pretty obvious, but too many people skip this step. Three reasons to do it:
i) It helps clarify to yourself what you’re trying to achieve, and sets in motion the thought process of what you need to know to fix your problem
ii) You can use this description to do a web search on the issue. Try phrasing the issue five different ways with different words, and search them all.
iii) It forms the basis of a title for a forum post, should you eventually need it
If you’re trying to solve a problem where an error message appears, write down the full error message and search for it. Most of the time, the solution is in the first five posts.
b) If your search turns up nothing, search again, read everything
It is important to note that searching on just one or two sentences doesn’t mean you’ve been diligent. You may have phrased it awkwardly the first time, or not used the specific terms for the issue.
Follow any links returned that look even remotely related. You may find that the discussion mentions some important vocabulary you can use to search again, or brings forth other considerations you haven’t yet tried that solve a whole variety of problems. Read any man pages for commands that look even vaguely relevant. You might be able to construct your own solution.
Also, remember that individual GNU/Linux parts are common to many different distros. So if you want to install a program on Ubuntu’s command line, you might want to search “install with apt-get” rather than “Install on Ubuntu command line” for a more general search that will be just as relevant.
If you’re trying to customize your Fedora desktop wallpaper, don’t search for “change desktop on Fedora” but “change wallpaper in GNOME 3” for example, if that’s what you’re using. Be precise, and use the right search terms.
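Of course, to use the right terms you first need to know what you are actually running. Two quick commands help here; note that the desktop environment variable may come back empty on a server or at a bare console:

```shell
# Kernel version: useful when searching for driver or hardware issues.
uname -r

# Desktop environment, e.g. "GNOME" or "KDE"; set by most modern
# graphical sessions, empty otherwise.
echo "$XDG_CURRENT_DESKTOP"
```

Armed with these, “change wallpaper in GNOME 3” becomes an obvious search to construct.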
c) When posting, be descriptive and complete
Avoid titling a post with either of these phrases: “help!” or “[X] not working”. Also, never claim you’ve found a bug from the outset of a post. You’re just asking for pain. Follow the advice in section a).
Never say “I’m getting an error” without actually writing out what the error message says, and detailing what you found out when you put that in a search engine.
NEVER EVER simply say “[X] is not working” without specifying precisely what behaviour you are seeing, and what you were expecting instead.
Finally (and this is where most well-meaning people trip up) ALWAYS include each of the following:
- what search terms you’ve used, and what documentation you may have read
- what assumptions you’ve made or what behaviour you are expecting
- and what actions you have already tried, and what specifically happened.
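Putting the whole checklist together, a post might be structured something like this (the title and every detail here are invented, purely for illustration):

```
Title: Wi-Fi drops every few minutes on Ubuntu 22.04

What I'm trying to do: ...
Exact error message / observed behaviour: ...
What I expected instead: ...
What I searched for and what I read: ...
What I've already tried, and what happened: ...
```

A post shaped like this answers the helper’s first three questions before they even ask them, which is precisely what earns you a friendly reply.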
d) Do not fan the flames of war
A last word of advice (and this applies to all forums and comment threads, technical or not): no matter how ugly a response you get back on a forum, IGNORE ITS UGLINESS. You are being “trolled.” If you respond in kind, you’re starting what’s called a flame war. This descends into chaos very quickly, with nobody winning, bruised egos all around, and no amount of moderation helps.
It may be tempting to correct someone else’s behaviour, level an equal retort at them, or defend your position in the face of a less astute criticism. Don’t.
I’ve been trolled before, and I still occasionally get trolled successfully when I respond to more detailed technical responses laced with troll venom.
If you do see trolling happening, and you want to moderate: again, don’t. You’ll be drawn into the war. If you do want to take positive action, respond to the asker’s original post with a technical solution, ignoring any trolling going on around you. If you cannot provide a solution, just shut the tab. It’s not worth the aggravation.
Hopefully this stands as a practical guide to bootstrapping your mind into Linux-mode. If you’re interested in getting your feet wet and your hands dirty with GNU/Linux, why not give it a go using a Virtual Machine? I’ve prepped a quick guide for getting started with VMs and first steps using Canonical’s lightweight Lubuntu distro.
Good luck then, new GNU/Linux user, and happy learning!