Limiting your children’s internet access effectively using OpenDNS

There comes a time in a nerdy parent’s life when your kids are old enough to get interested in using the internet. In my opinion there’s nothing wrong with digging through your pile of old hardware and assembling a reasonably good PC from it for your 10-year-old. But how do you keep your kids from inadvertently finding their way into the less desirable 90% of the internet? Read on to learn how to get the best results with adequate effort.


The Why of filtering

Recently my son (5th grade) claimed that other kids in his class were already allowed to use the internet and that he would be the only one left out. Do the other parents really not care at all? And at the last parent-teacher conference the class teacher told me that he announces everything class-related through his Facebook page. Yes – those are the people who are responsible for teaching our children media competency. He completely lost me when he claimed that getting a Facebook account is no big deal – it’s completely free. Excuse me? It appears that most people have no idea what’s going on on the internet. They occasionally get spam emails. They have heard that there are porn sites. But what could possibly happen?

What to control?

There are essentially three different things that your kids may want from the internet: web surfing, email and installing software. Each of them poses different threats. Porn sites are not for 10-year-olds. Neither are social media sites that let children ruin their lives by posting inconsiderate content. Phishing emails may trick them into unwanted behavior. And thoughtlessly installing random software will easily get tons of adware, spyware and malware onto their computers. So all three areas should be dealt with.

Web filtering

There is a reason why decent web filtering usually requires commercial solutions that are not affordable to anyone but large companies: it is hard. Why is it hard?

  • Filtering HTTP requires at least a piece of extra software that intercepts HTTP requests and checks whether they are desirable (e.g. Squid)
  • Filtering HTTPS (which more and more web sites use) requires heavily invasive actions that are either unsuccessful or even undermine the intended safety of the encrypted HTTP connection (a deliberate man-in-the-middle pattern with incomplete or no certificate validity checks)
  • Commercial products do that with a mixture of automated scans and manual moderation, with multiple employees categorizing web sites. If you start managing a whitelist (block everything except a list of known web sites) then you will spend countless hours adding entries to that list because common web sites load their content from various other sources and CDNs (content delivery networks). If you instead try to manage a blacklist you would have to get a complete list of bad web sites – not just the dozen sites that come to your mind but literally over a billion existing domains. About 1.5 new domains are registered every second. Don’t even try to catch up with that.

Fortunately there is a good approach for filtering web sites that is less intrusive and at the same time easier to implement: DNS filtering. Every time you request a web site your system needs to find out the IP address for the host name you are requesting. For example for http://workaround.org/ your operating system needs to get the IP address for the “workaround.org” host name. If you get the actual IP address of my web server then you can reach my web site. However if the response leads you somewhere else then that other server gets the HTTP request. And if that other server sends you a web page containing an error message instead, you get content filtering one step before HTTP filtering could even occur. It doesn’t even matter whether you intended to make an HTTP request – it works just as well for any other protocol like HTTPS.

DNS-based filtering

My first idea was to use a local name server (I’m running BIND here anyway for my local domain) together with a blacklist like MESD. But on second thought I didn’t trust such a list to be maintained well enough. Fortunately there are services that offer exactly such DNS servers and let you configure which kinds of sites you want to reach. One of them is OpenDNS. They publish neither their software nor their blacklists – so “open” is a bit misleading – but at least their service is free (beer) for personal use. Now if you can make your children’s PCs use that DNS server then you won’t need any other kind of URL filter.
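A quick way to see what such a filtering name server does is to query it directly with dig. The resolver address below is one of the two OpenDNS servers that also appear in the configuration examples later in this article; the domain name is just a placeholder:

# ask an OpenDNS resolver directly; a domain in a category that you blocked
# will not resolve to the real web server but to the OpenDNS block page
dig +short some-blocked-site.example @208.67.222.222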

Pros:

  • does not require maintaining any lists – you just have to make up your mind which categories of web sites you find unacceptable
  • easy to implement

Cons:

  • privacy: the external DNS filtering service gets to know which web sites you tried to access
  • tampering: this kind of filtering only works as long as the user can’t use another name server or manage their /etc/hosts entries

So in the simplest case you get a free (beer) account at OpenDNS, configure the forbidden categories of web sites in their dashboard and enter their DNS servers in your internet router’s settings so that everyone in your network uses OpenDNS from now on. But that way you apply the same filtering policy to all users. If you don’t mind that, you are done.

Filtering only on certain computers

However you may want to apply the filtering policy only to your children’s computers. So you need to change their network settings. On an Ubuntu workstation you can still use DHCP to get an IP address in your network but override the DNS servers in the connection’s IPv4 settings.
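If you prefer the command line over clicking through the network settings dialog, a minimal sketch using NetworkManager’s nmcli could look like this (the connection name “Wired connection 1” is just an assumption – check “nmcli connection show” for yours):

# keep using DHCP for the IP address but ignore the DNS servers it hands out
nmcli connection modify "Wired connection 1" ipv4.ignore-auto-dns yes
# use the OpenDNS resolvers instead
nmcli connection modify "Wired connection 1" ipv4.dns "208.67.222.222 208.67.220.220"
# re-activate the connection so the change takes effect
nmcli connection up "Wired connection 1"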

Caveat: Your children may locate that setting and switch back to the usual DNS server. But let’s not worry about that for the moment.

Optional: Assigning the OpenDNS name server from your DHCP server

If you are like me then you are running your own DHCP server in your network. I’m using the isc-dhcp-server package on a Debian server. So instead of changing the network settings on each PC you can assign the OpenDNS name servers by sending your children’s PCs a special DNS setting when they request an address via DHCP. Edit your /etc/dhcp/dhcpd.conf file and add a section for each child’s computer:

host kid1 {
    hardware ethernet 9a:d1:83:d2:50:e4;
    option domain-name-servers 208.67.222.222, 208.67.220.220;
}

Of course you will have to find out the hardware (MAC) address of your child’s network card and use it instead of the one in my example. No further settings are needed on the PC itself.
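Two quick ways to look up that MAC address – either on the child’s PC or on your Debian DHCP server (the leases path below is the Debian default for isc-dhcp-server):

# on the child's PC: list all network interfaces; the MAC is the "link/ether" value
ip link show
# or on the DHCP server: show the MAC addresses of clients that already got a lease
grep "hardware ethernet" /var/lib/dhcp/dhcpd.leases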

Mandatory: Do you have a dynamic IP address?

Most internet providers just assign a dynamic IP address to you that changes frequently. Unfortunately OpenDNS would not be able to identify who you are once your IP address changes. So you need to tell them your new IP address to make your policy apply to the new address as well. That requires two steps.

First you have to give your network (actually just your public IP address, behind which you probably hide your whole network using network address translation (NAT)) a unique label. Log into your OpenDNS dashboard and locate the network settings. Then edit your network and give it a name. You will need that name again in a minute.

OpenDNS uses a DynDNS-style update mechanism that you can use to tell them your current IP address. The configuration is explained in one of their support articles. All you need to do is install the ddclient package and use the configuration described in their configuration example. From then on, whenever your IP address changes, OpenDNS will learn about it and apply your desired filtering policy to every request coming from your new IP address.
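Roughly like this on your Debian server – the actual /etc/ddclient.conf contents come from the OpenDNS support article, so they are not repeated here:

# install the dynamic DNS update client
apt-get install ddclient
# after editing /etc/ddclient.conf as described by OpenDNS,
# run one update in the foreground to verify that it works
ddclient -daemon=0 -verbose -noquiet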

Make tampering harder by using a firewall

Your children may realize that you changed their DNS server to block web sites and try to change it back. To prevent the use of any other DNS server you have to add appropriate rules to your firewall. For example if you are using Shorewall (which I can highly recommend on any Linux server) then these rules will do:

# Allow DNS requests to the OpenDNS servers
ACCEPT  lan  inet:208.67.222.222,208.67.220.220  tcp,udp 53
# Forbid using any other name servers for the children's PC having the IP address 10.7.0.42
REJECT  lan:10.7.0.42  inet   tcp,udp 53

# Otherwise allow any access to the internet...
ACCEPT  lan    inet
# ...and the server itself
ACCEPT  lan    $FW

Caveat: Your children could just disable DHCP and use a fixed IP address in your network that is not restricted. But unless you intend to introduce VLANs and IEEE 802.1X-based network authentication you can probably live with that risk. A family therapist may be a more adequate solution if you have parental problems that require such extreme security measures. If you just want to notice such attempts you can run arpwatch in your network.
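arpwatch needs hardly any configuration for that – it records which MAC address uses which IP address and reports new or changed pairings via syslog and mail to root:

# install arpwatch on your Debian server
apt-get install arpwatch
# see which stations it has noticed so far
grep arpwatch /var/log/syslog | tail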

Browser plugins

So far you have prevented access to unwanted web sites. But you may still want to keep the internet less distracting and confusing by using the mandatory set of browser plugins:

  • Adblock or Adblock Edge will get rid of any advertisements
    (I have no mercy for web sites that earn their money by showing so many ads that even an experienced internet user can hardly spot the actual content)
  • Ghostery will block all the trackers from commercial nosey big-data voyeurs

Filtering email

At the beginning of this article I also mentioned email as a potential threat. Why do spam, phishing and malware (viruses, trojans) work so well? Because even many adults can’t identify them and fall for them. If nobody paid attention to them then the spam problem would be long gone. But if adults can’t tell good from bad then how are your kids expected to do that?

As a nerdy and worried parent I strongly recommend that you don’t use a freemail service like Google, GMX, 1&1 or whatever. Those services are free (beer) for a reason – think about it. If you haven’t already then set up your own mail service using my ISPmail tutorial. That way you have full control over incoming emails and can set up server-based filtering rules. I have configured rules that only allow emails to my children from trusted sender addresses. All other emails end up in my inbox and I forward them if appropriate. (Of course I won’t keep doing that indefinitely as my kids grow up.)

Installing software

As a sysadmin I find it unbelievable how many people fall for malware like browser bars. Of course I know how easy it is to get infected. You need to install Java… you forget to disable the checkbox that installs McAfee crap or an Ask.com browser disenhancement… there you go. (One more reason why I think people should stop using Oracle Java. A company that needs to wind up its users like that should face a boycott.) And on a typical PC-magazine-style download web site I’d like to see you figure out which of the dozen download links actually gets you what you wanted (hint: it’s the tiny one two screens down that consists of barely readable text – not the huge blinking button at the top).

What is my suggestion? Simple: use Linux – e.g. Ubuntu – on your workstation. If you are using Windows you need to get all kinds of common applications from third-party sources. Your Firefox browser may be a hoax if you get it from the wrong source. Oracle pollutes your computer with browser bars. You inadvertently get dozens of pointless “download helpers”. On a modern Linux distribution you get most of your software from the built-in software manager. No need to dig through the internet to find trusted applications. You can’t really go wrong with Ubuntu’s “Software Center” application.

Conclusion

Now that we have dealt with the technical measures it’s time to spend time with your children and explore the internet together. Don’t let them do that on their own. And don’t count on schools to teach them any kind of media competency because most of the teachers are as helpless as the pupils. But above all: let your kids explore IT. Compared to my generation, today’s kids have become surprisingly ignorant of the underlying technologies that power the internet. We really need to change that.

I hope you found this article helpful. Let me know what your experiences are with children and the internet in the comments below.

Updating the BIOS on Lenovo laptops from Linux using a USB flash stick

Aren’t hardware manufacturers funny? They either require an old-fashioned operating system (Windows) or museum hardware (floppy drives) to update a BIOS. Apparently they never learn and are instead busy adding features like DRM and UEFI to make our lives even more miserable.

However updating the BIOS on my Lenovo X230 laptop was surprisingly easy once I learned how to do that (kudos to a G+ post I stumbled upon).

  1. Go to support.lenovo.com (or better use a search engine because the Lenovo website is beautiful but technically pretty broken and slow) and search for the BIOS upgrade for your laptop model.
  2. Download the most recent ISO file. Look for “BIOS bootable update CD”.
  3. Convert the ISO image using the geteltorito utility (if you don’t have it: apt-get install genisoimage).
    Example:
    geteltorito -o bios.img g2uj18us.iso
  4. Insert any USB stick you have lying around into your laptop. The image file is just 50 MB in size so even USB sticks with low capacity will work. Keep in mind that the stick will be completely overwritten.
  5. If you are in a graphical environment then unmount the USB stick again.
  6. Find out the device name of the stick. Open a terminal window and enter “dmesg | tail”. You are looking for a line like:
    [ 2101.614860] sd 6:0:0:0: [sdb] Attached SCSI disk
    The “sdb” tells you that your USB stick is available as /dev/sdb. Don’t just assume it’s sdb – if the stick got another device name on your laptop you will destroy your data. (The sketch after this list shows a way to double-check.)
  7. Copy the image to the USB stick:
    dd if=bios.img of=/dev/sdb bs=1M
  8. Reboot your laptop.
  9. After the Lenovo logo appears press ENTER.
  10. Press F12 to make your laptop boot from something other than your hard disk.
  11. Select the USB stick.
  12. Make sure your laptop has its power supply plugged in. (It will refuse to update otherwise.)
  13. Follow the instructions.
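Before overwriting anything I like to compare the size and model that lsblk reports with the stick I just plugged in. A rough sketch of steps 6 and 7 – /dev/sdb is just an example, use whatever device name your stick actually got:

# show all block devices with their size and model name
lsblk -o NAME,SIZE,MODEL
# if the stick really is /dev/sdb, write the converted image to it
dd if=bios.img of=/dev/sdb bs=1M
# make sure everything is written before you remove the stick
sync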

Tracking last logins with Dovecot

There is a newer version of the ISPmail guide available if you are using Debian Jessie!

Per request I have added a section to the Wheezy ISPmail tutorial about how to track last logins with the Dovecot mail server. This can be useful if you operate a semi-public mail server and want to expire email accounts that have not been used in a while. (Not to be confused with the “expire” Dovecot plugin whose purpose is to automatically delete emails in certain mail folders like the Trash.)

Docking and undocking Linux laptops with nVidia GPUs using disper

Do you have trouble switching the display when docking and undocking your Linux laptop? In this article I will show you how to use disper and keyboard shortcuts to do that reliably if you are using nVidia’s annoyingly broken RandR-incompatible graphics driver.

Many companies nowadays give their employees laptops instead of classical PCs under their desks. And that’s a good idea because you can carry the laptop to meetings or hack sessions and keep all your digital information in one place instead of printing it out, carrying it around and typing it in later. It gets even better if your employer allows you to use your favorite Linux distribution instead of Windows on your laptop. Using Linux on a laptop has become increasingly simple with modern distributions like Ubuntu. But the laptop manufacturers still suck badly at providing proper drivers for anything other than Windows™. That’s especially true for graphics hardware from nVidia which is the perfect example of ruining great hardware by failing to provide proper drivers. Modern desktop environments usually detect if your screen resolution changes or if you dock to or undock from your laptop’s docking station. Unfortunately nVidia’s proprietary driver still doesn’t support the common standard (RandR). They ship their own tool (“NVIDIA X Server Settings”) to switch monitors. Well, it works sometimes. At other times it alters your X server’s configuration (which is something that Linux nerds used to do 10-20 years ago) and requires an X server restart. Yeah, right.

But do not despair – here is a simple fix. The key is disper – a simple-to-use command-line tool to switch between different monitors. You should find it packaged for your favorite desktop Linux distribution. I personally use Xubuntu so I installed it using:

$> sudo apt-get install disper

Disper is pretty simple and its manpage describes its options. Check that disper finds your monitors:

$> disper -l

display DFP-0: AU Optronics Corporation
resolutions: 1600×900
display DFP-2: Eizo S2402W
resolutions: 640×480, 800×600, 1024×768, 1280×960, 1280×1024, 1680×1050, 1600×1200, 1920×1200

You can see here that DFP-0 is my laptop’s own monitor with a resolution of 1600×900 pixels. And I have an Eizo monitor connected to my docking station. If your output looks similar you can try to switch between the monitors:

$> disper -s      (enables the “primary” display – usually the laptop’s built-in monitor)

$> disper -S      (enables the “secondary” display – usually the monitor connected to your docking station)

$> disper -c      (clones the display – will only work if both displays can use the same resolution)

$> disper -e      (extends your desktop over both monitors)

Now all you have to do is assign keyboard shortcuts for your favorite settings.
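On Xubuntu (Xfce) you can do that either in the keyboard settings dialog or on the command line with xfconf-query – here is a sketch for the two shortcuts I use; other desktop environments have their own shortcut dialogs:

# Win-d: switch to the external monitor on the docking station
xfconf-query -c xfce4-keyboard-shortcuts -p "/commands/custom/<Super>d" -n -t string -s "disper -S"
# Win-u: switch back to the laptop's built-in monitor
xfconf-query -c xfce4-keyboard-shortcuts -p "/commands/custom/<Super>u" -n -t string -s "disper -s"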

That’s it. When I dock my laptop I press Win-d and the external monitor will display my screen. And when I undock I press Win-u and my desktop moves to the laptop’s built-in monitor. Perfect.

(And next time please get a laptop with a well-supported Intel GPU if you can. You will love it.)

Backups with rsnapshot to external USB drives

How long has it been since you last backed up your Linux system? Let me guess – you tried various backup systems and hate all of them? Let me show you how to use rsnapshot and an inexpensive external USB drive to back up precious data easily.

Zabbix: How escalations work

Zabbix is a very complex piece of software that takes weeks to fully understand. One of the most interesting, most complex and least documented features is escalations. They are used to define a schedule of what Zabbix is supposed to do when a certain event occurs – like waking up the system administrator or running some automatic emergency cleanup tasks.

The “why” of escalations

One of the reasons I suggest you use escalations instead of just standard actions is delayed notifications. If you are using trigger dependencies then chances are that in case of a network problem you still get spammed with alerts until the dependencies step in. I recommend reading my wiki article on delayed notifications. (It seems that zabbix.com has removed my nice wiki article. I should have kept a copy.)

Further scenarios could be:

  • if the SMS on your cell phone does not wake you up then a little later try to wake your backup admin or your boss
  • get an instant alert message via Jabber but, if you did not acknowledge it, send an additional SMS five minutes later
  • if a certain process dies frequently then let Zabbix try to restart the service three times before it alerts you
  • repeat an alert every 10 minutes in case you missed the first one

Configuring escalations

Take a look at this screenshot that I will use to explain how escalations work (taken from the wiki article I mentioned above).

There is a lot of information in this configuration screen. To enable escalations click on the checkbox in the upper left corner. Then the top right box with the escalation plan (Action operations) will appear.

The escalation plan

Here you see which operation is run when. As soon as the Action conditions defined on the lower left are met, the escalation plan starts to run – beginning at step 0. Every “period” seconds Zabbix advances to the next escalation step. In my example the period is 120 seconds. So two minutes after the action started, step 1 runs. After another 120 seconds step 2 runs, and so on. You can change the period to a new default value during a step but I find that confusing and would advise against it. To keep things simple I recommend setting period=60 so that the steps correspond to minutes.

Steps

If you add or edit an operation in the list of operations on the top right then you see the “From” and “To” fields. This is the range of steps that a certain operation should be run at. In my example it’s from 2 to 2. Which means: this operation is supposed to be run at step 2 only. Other examples:

  • from 0 to 2: run the operation at step 0 (immediately), step 1 (after 1 x period = 120 seconds) and step 2 (after 2 x period = 240 seconds)
  • from 1 to 0: run the operation at step 1 (after 1 x period = 120 seconds) and then every “period” (= 120 seconds) until infinity
  • from 2 to 2: run the operation at step 2 only (after 2 x period = 240 seconds)

Only if not yet acknowledged

An extra goodie is the condition on the lower right. It means that this action is only run if the event has not been acknowledged yet. If anyone acknowledges the problem within the first 120 seconds then the condition would not match and nobody would get a notification. This is a workflow I can definitely recommend.

Stupid caveat

Caveat: you must always add an Action condition “Trigger value = PROBLEM” at the bottom left. Otherwise the escalation plan would start not only when this action starts but also after the recovery from the original network/server problem. This would lead to a delayed recovery message, which is totally useless. (I have not yet understood why this is not an automatic/implicit condition.)

Recovery messages

Regarding the recovery: once the original problem that started the action is remedied, this is called a recovery. If you tick the “Recovery message” checkbox then Zabbix sends out recovery messages – but only to the users who have been informed of the problem. If the escalation plan would have informed your boss 10 minutes later but you fixed the problem in time, then your boss will not get a (confusing) recovery message. Zabbix does the right thing.

Tired of Nagios and Cacti? Try Zabbix.

One of my professional duties over the past ten years has been monitoring systems. Even my diploma thesis was dedicated to distributed monitoring (although my professor sucked badly). Apart from a few custom-programmed scripts to analyze special situations (e.g. proxy clusters) I used tools that fellow administrators will find familiar: Nagios and Cacti. And another less famous text-configuration-based monitoring tool called Cricket. That worked well somehow, but Cricket was hard to learn for my coworkers and Cacti seemed unreliable and fundamentally broken in terms of SNMP checking. Besides, why do I have to set up availability checking in Nagios and then set up checks of the same parameters in another piece of software just to draw graphs? Then in 2009 I came across an open-source software I hadn’t heard of before: Zabbix. And although it has a few rough edges it seems way more professional than other common tools (the commercial tools I saw were even worse than the open-source variants). I tried it and after a lot of reading and experimenting it looks like it has good potential to replace Nagios and Cacti. So I thought I’d sum up my personal experiences with all of these tools.

Nagios. Its makers claim that it’s the “industry standard in IT infrastructure monitoring”. Honestly it’s a great tool but considering how many years it has been around it has barely evolved. During my diploma thesis in the year 2000 I wrote an alternative software that I called “MrNetwork” which dealt with flaws that Nagios hasn’t fixed even today. Still, Nagios is a tool I have used for many years and it is very reliable. Advantages:

  • open source
  • large community
  • many powerful plugins (and own plugins are easy to create: just write a program that prints a one-line string and set a certain return code)
  • easy-to-use web frontend
  • debugging plugins is moderately simple.
  • many thought-out features like host groups or notification options that make your life easier
  • dependencies (so that you don’t get 100 alerts if a router between the Nagios server and other servers went down)
  • nagvis plugin with a great interactive editor that draws nice management-suitable graphs (although I found the ndo2db interface hard to set up at first and a little flaky)

Annoyances:

  • The focus is on availability checking – you don’t get fancy graphs of the values that are monitored (e.g. how the CPU load developed over time). So you’ll need a second tool and have to set up the same checks there just to get graphs. But availability percentages are computed automatically.
  • Textual configuration that has so many different settings that you need to look up the parameters often. A web-based configuration would probably be better (and is available as an add-on but I haven’t tested it).
  • Third-party plugins are often so badly programmed and so barely documented that it frequently appeared easier to reinvent the wheel. (“Look, ma, I can has plugins.”)
  • Some views on the web interface are not very obvious (e.g. clicking on the title of a host group gives a nice view of all hosts with all services).
  • Many plugins don’t have corresponding configuration entries so you have to find out how they work and write the configuration entries yourself (and those which are preconfigured take some archeology to find out which parameters they expect). This is a huge time-waster for beginners. And in your services configuration you have to cross-check the checks configuration to understand the meaning of each parameter. Or do you remember what “check_http 80!john!doe!10!30!body” is supposed to check?
  • Every  set of parameters of a certain plugin requires a distinct configuration entry. The plugins have dozens of configuration switches that you may need one day. Want to set a timeout on HTTP checks? Write another check configuration. Want to check for a certain string in the HTTP response? Write another check configuration. And so on.
  • Most checks are run from the Nagios server itself (the NRPE plugin to do the checks on the respective remote systems somehow refused to work properly here) which is suboptimal and puts a lot of load on the server.
  • By default every alert triggers a notification. So if you can’t define proper dependencies (e.g. if you want to check your web server in all 30 supported languages and there is some logical error) then you will get spammed with alerts.

Cricket. As Nagios does not support plotting graphs of the monitored values I was in need of another piece of software. Basically Cricket is a frontend to RRD (which stores data in a rotating/round-robin file that keeps data of the last X minutes/hours/days). It has a textual configuration that takes a lot of getting used to. Its main principle is inheritance of settings – they call it a “configuration tree”. That means you have a master DEFAULTS file that contains general settings like how to query SNMP. In a subdirectory you define a certain class of devices that you want to monitor – e.g. routers (the DEFAULTS are inherited to this level). Within the routers directory you can then just define a list of routers you want to monitor. All settings are inherited from “above” (the parent directories). It’s more of a geek tool for shell lovers. Advantages:

  • very quick to monitor a large set of similar devices once the general device class is defined
  • simple web interface
  • very reliable
  • can monitor SNMP values (it does this very well) or execute external scripts – thus can be easily extended
  • flexible graphing – you can sum up values of two graphs into a new graph (aka “mtargets” – multiple targets)
  • different check frequencies can be configured for different subtrees through cron (by default values are collected every 5 minutes – this can be set as low as one minute if needed)

Annoyances:

  • the textual configuration is error-prone (leading to funny Perl errors that can be hard to debug)
  • users may expect to see all parameters of a certain device instead of all devices having a certain parameter (“Give me the statistics of router42” instead of “Let’s see the temperature of all routers we have.”)
  • customizing the graphs (drawn by RRD tool) isn’t trivial
  • Frequency of checks is by default 5 minutes. Before RRD can draw the first value of a graph it needs three values. So you’ll be waiting 15 minutes before you see any results.
  • RRD rounds data by default. So the yearly graph doesn’t show the peaks that the daily graphs do. (This can be fixed by graphing the maximum values instead of the average values.) Long-time archiving of graphed data is not possible without throwing away the RRD files and manually recreating them. Changing the monitoring frequency (aka “heartbeat”) is not possible either without throwing away the data and starting from scratch.
  • No proper built-in alerting in case certain thresholds are exceeded.

Cacti. Another frontend to RRD – and a pretty sophisticated one. Nearly everything is configured through its web interface. And the result is beautiful. It’s not entirely reliable though, and SNMP support (at least in version 0.8.7b) is a big fail. I like Cacti because its user interface is much better than Cricket’s but it’s less reliable and flexible. Advantages:

  • Beautiful and (for most features) simple web interface. Nice features like graphs that can be zoomed using Javascript.
  • Fine-grained permissions system. So a certain user may get read-only access to a certain subtree.
  • The tree where graphs are placed can be configured freely so you get exactly the view you want.

Disadvantages:

  • Doesn’t hide the RRD magic very well. The user is easily confused by templates, data sources and the like.
  • Graphing sometimes just stops working for no reason or values are missing although the server isn’t overloaded and other software doesn’t show such outages. According to a quick search on the lazyweb I’m not the only one seeing such effects.
  • Setting up many systems means a lot of clicking in the web interface. Setting up new kinds of checks (aka “templates”) means even more clicking and is very error-prone.
  • The quality of some third-party templates I tested was pretty bad. Creating new templates is tedious, error-prone, frustrating and close to black magic. Nothing for the casual user at least.
  • Doesn’t handle SNMP correctly (this is the biggest fail in my opinion and makes it unusable here). Although it knows how to query indexes (e.g. ifDescr to get the names of your network interfaces) it just seems to store fixed OIDs. So once an SNMP table changes the order or number of its items (which isn’t unusual) suddenly other parameters get graphed.
  • Frequency of checks is by default 5 minutes. Increasing the frequency leads to missing data and wrong results.
  • As it uses RRD and needs 3 valid values you won’t see that your monitoring fails until you wait 3×5=15 minutes. Not suitable for impatient non-smokers like me. 🙂
  • Debugging failed checks is close to impossible. If a check fails then I find myself clicking around randomly trying to find typos because the alternative is digging around in database entries.
  • The web interface is sometimes confusing. A refresh of SNMP tables is done by clicking a meaningless green circle icon. Adding new items to a list is done by clicking an inconspicuous “Add” link that doesn’t even look like a link.
  • Another UI confusion: graphs are created from the “devices” view. But they are deleted from the “graph management” view.
  • No alerting in case certain thresholds are exceeded. Another tool like Nagios would still be needed to notify you.
  • Can’t sum up multiple targets so monitoring failover clusters doesn’t work well.
  • RRD averages data when putting daily values into weekly values, weekly into monthly and monthly into yearly ones. So the yearly graph doesn’t show the peaks that the daily graphs do. (This can be fixed by graphing the maximum values instead of the average values.) Long-time archiving of detailed graphed data is not possible (RRD). Changing the monitoring frequency (aka “heartbeat”) is not possible either without throwing away the data and starting from scratch.

Zenoss. People pointed me to Zenoss which is supposed to offer the same functionality as other monitoring systems but in a much more integrated way. So this short list is more a quick one-day experimental impression than a thorough analysis. But in the end much of the fuss is just marketing. Advantages:

  • Beautiful web interface
  • Nice gimmicks like Google Maps integration to show you where in the world your servers are down. This only makes sense when monitoring networks with several/many remote locations.
  • Can partly discover parameters of systems automatically. That works for static routes (although I wonder why the heck I want to monitor static routes), file systems and network interfaces. On the other hand processes can’t be discovered automatically and have to be set up manually.
  • Does not come with a specific agent but plays rather well with plain old SNMP.
  • Can monitor Windows through native WMI.
  • Large fanbase.

Annoyances:

  • Web interface feels slow (Zope is bloated)
  • Opaque operation. It does not tell you what’s actually going on. You add monitoring and have to check back later to see whether anything worked the way you want.
  • Questionable reliability. In a test here I was monitoring a running process. The process was said to be down and suddenly went to “up” after a while although nothing had changed on the system.
  • Just one dashboard. Several dashboards similar to Zabbix “screens” would be nice.
  • Configuration and data is spread across MySQL, the internal Zope database storage and RRD files on disk.
  • The dependency graph is a nice Flash-based applet displaying how the systems are connected. But it does not show any details about the systems aside from whether they are up or down. And clicking on a host does not take you anywhere – it just centers the view on that system. Beautiful, but it could do so much more than just look beautiful.
  • I dislike the way that things are configured. The context menu with the “down arrow” needs to be used. I’d prefer simple “add” or “delete” actions instead of navigating the menu all the time. Looks like Javascript is used wrongly here.
  • Limited open-source version. Full version needs to be paid for.

Zabbix (1.8.2). I’m using the backported Zabbix 1.8.2 on Debian Lenny here. Debian Lenny’s native 1.4 version lacked some interesting features like proper SNMP handling. Zabbix seems close to the perfect monitoring system I had always dreamt of. I would have designed it differently in some aspects though.

Advantages:

  • Availability Monitoring (like Nagios) and graphing (like Cacti) is combined into one tool.
  • Highly configurable. User John may just get an SMS for problems of high severity during the weekend and on weekdays get a Jabber message. Even automatic actions like restarting services can be set up.
  • The notifications actually help the person who gets the message. “Low disk space on /var on web5” with an additional comment is pretty helpful even when sent via SMS. Notifications are completely customizable with macro variables.
  • Very performant. A Zabbix agent can be installed on the monitored systems (available for several operating systems – even for Windows) and gathers the information on each system efficiently. The agent can even call scripts or shell one-liners to gather information. This kind of data collection is very efficient. You will need a server with good I/O performance and a lot of RAM though so that the database can work efficiently. At first I ran Zabbix in a virtualized Debian server on a VMware host with 1 GB of RAM. The database access became so slow that showing graphs or recent events made me stare at a busy mouse cursor for up to a minute. On a server with 4 GB of RAM, a large MySQL key buffer and SAS disks the system runs well again.
  • Collecting items (gather information about system parameters) happens at set intervals. You don’t have to wait for several minutes until you see results (it usually takes half a minute). Each item can have a custom check interval. So you can check for the CPU load every 30 seconds but check the number of free inodes on /home just once an hour.
  • Fast web interface.
  • Sophisticated monitoring of web sites. Zabbix can follow a path of simulated mouse clicks on a web site and check for functionality and response time.
  • Real-time graphs. Values are by default collected every 30 seconds. You quickly see where you are going.
  • Permissions system. Certain users can be limited to certain views.
  • Gathered data is stored in a database (MySQL, PostgreSQL, SQLite) instead of an inflexible RRD file. Storage periods (aka “history”) can be configured freely. Backing up the database is all there is to do.
  • Templates (that can even link to further templates) save time in setting up many checks.
  • Graphs (plots of values over time) can be customized like which items are plotted and in what way. Even pie charts are possible.
  • Even parameters that don’t get an explicit graph can be graphed at any time. E.g. if the agent has tracked the CPU load on a system that you never cared about then you can graph it with one mouse click.
  • Screens and slide shows can be used for high-level views (aka “dashboards”) or to be displayed on a big geeky display. They can combine textual display of the status as well as clocks, ad-hoc graphs or predefined graphs.
  • Very flexible trigger expressions. For example you can tell a trigger to fire if the average system load over the last 15 minutes is above a certain value. As all measured parameters are stored in a backend database you can use all kinds of mathematical expressions. Like firing a trigger if the average number of running processes during the last half hour is above 50. All other software I tested just has access to the last value gathered.
  • Alerting/notifications can be scripted easily by using shell scripts.
  • Remote monitoring made easy by using a Zabbix proxy.
  • Paid support and paid custom programming available. But the software is completely open-sourced.
  • 320-page PDF manual with screenshots and nice references. (Although I’d personally prefer an online help. Currently the “?” link within the web interface just points to the PDF that you can download.)

Annoyances:

  • A lot of mouse moving and clicking is required to set things up. For example to get an alert when the free space on a disk on a certain server gets too low you need to set up hosts, items, triggers and actions. Some of the clicking seems redundant. E.g. I didn’t find a way to create triggers automatically for a set of checks. If I monitor how full the “/home” partition is then I’d like to set a threshold in the same configuration step.
  • Takes a little patience understanding the concepts (because there is no hidden magic) although they make sense after half a day.
  • The web interface is crammed full of features. For casual users it’s confusing to navigate. In a real-life network you find yourself setting host variables and juggling templates, and unless you remember everything you did you will not get a good overview of your configuration. Zabbix is very complex but in my opinion it would need an even better web interface to deal with its features properly.
  • The map editor was close to unusable at first but has improved in version 1.8. It still takes a lot of time to set up the maps. The map editor could really use fewer mouse clicks to set up the map. I was also missing a feature to add current item values to the map (like the server room temperature or bandwidth on our load balancer). You can just add triggers which occupy a lot of space on the map, too.
  • An item can only return one value. Sometimes you need to return a good/bad state plus some additional information. Nagios for example delivers a return code for OK/WARNING/ALERT and also a text string. In Zabbix these are different items.
  • Zabbix gets painful when you want to monitor different assets of the same kind: different network interfaces, disk partitions, MySQL instances or web server ports. Templates are pretty useless here. You will have to copy every item and trigger.
  • Does not detect the available assets of a monitored server automatically. Imagine that you want to monitor the space on all disk partitions of a system. You will have to copy over or create the check items manually, or define all possible checks in a template and disable those you don’t need. Cacti handles that better by offering you a list of partitions to monitor. Zenoss can do that partly. The Zabbix agent should be able to handle such service discovery automatically. (The built-in “Discovery” feature rather detects new servers in a given network range automatically. But that’s something different.)
  • Hard to debug. Why was an action not run? Who would get alerted for a certain trigger? Why has a value become ‘unknown’ without a reason? Of course there are reasons for what Zabbix does. But it often takes clicking and guessing instead of telling the user.

See my Zabbix screencasts if you like to learn more.

See also: Ben Rockwood’s blog. Further similar software I didn’t test thoroughly: Hyperic and Opsview. All of the above tools are great. I don’t mean to say that “Zenoss is total crap”, for example. The differences are subtle. And whether a piece of software suits your needs really depends on your expectations. I love that all this software is available as open source. And a totally unscientific but fun analysis of the community is counting the number of active people in the respective channels on the Freenode IRC network:

  • #nagios: 133 users
  • #cacti: 58 users
  • #cricket: 2 users
  • #zenoss: 54 users
  • #zabbix: 61 users

Either Nagios has the largest fanbase or perhaps it means that the majority of people need help with it. 🙂

Renaming multiple files

If you need to rename a larger number of files following a certain pattern then you will long for an automated solution. The ‘rename’ command helps you here; it is (at least on my Debian installation) part of the Perl installation. All you need to know is the basics of regular expressions to define how the renaming should happen.

Say you want to add a ‘.old’ to every file name in your current directory. At the end of each name ($) a ‘.old’ will be appended:

rename 's/$/.old/' *

Or you want to make the filenames lowercase:

rename 'tr/A-Z/a-z/' *

Or you want to squeeze runs of repeated letters into a single character:

rename 'tr/a-zA-Z//s' *

Or you have many JPEG files that look like “img0000154.jpg” but you want the first five zeros removed as you don’t need them:

rename 's/img00000/img/' *.jpg

In fact you can use any Perl expression as an argument. The actual documentation for the ‘s’ and ‘y’/’tr’ operators is found in the ‘perlop’ manpage.

Pipes and redirection

Many system administrators seem to have problems with the concepts of pipes and redirection in a shell. A coworker recently asked me how to deal with log files and how to find the information he was looking for. This article tries to shed some light on it.

Input / Output of shell commands

Many of the basic Linux/UNIX shell commands work in a similar way. Every command that you start from the shell gets three channels assigned:

  • STDIN (channel 0):
    Where your command draws its input from. If you don’t specify anything special this will be your keyboard input.
  • STDOUT (channel 1):
    Where your command’s output is sent to. If you don’t specify anything special the output is displayed in your shell.
  • STDERR (channel 2):
    If anything goes wrong the command will send error messages here. By default this output is also displayed in your shell.

Try it yourself. The most basic command that just passes everything through from STDIN to STDOUT is the ‘cat’ command. Just open a shell and type ‘cat’ and press Enter. Nothing seems to happen. But actually ‘cat’ is waiting for input. Type something like “hello world”. Every time you press ‘Enter’ after a line ‘cat’ will output your input. So you will get an echo of everything you type. To let ‘cat’ know that you are done with the input send it an ‘end-of-file’ (EOF) signal by pressing Ctrl-D on an empty line.
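A transcript of that little experiment might look like this – every line you type is immediately echoed back by cat until you press Ctrl-D:

$> cat
Hello internet!
Hello internet!
Nerds rule.
Nerds rule.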

The pipe(line)

A more interesting application of the STDIN/STDOUT is to chain commands together. The output of the first command becomes the input of the second command. Imagine the following chain:

The contents of the file /var/log/syslog are sent (as input) to the grep command. grep filters the stream for lines containing the word ‘postfix’ and outputs them. The next grep picks up what was filtered and filters it further for the word ‘removed’. So now we have only lines containing both ‘postfix’ and ‘removed’. And finally these lines are sent to ‘wc -l’, a shell command that counts the lines of its input. In my case it found 27 such lines and printed that number to my shell. In shell syntax this reads:

cat /var/log/syslog | grep 'postfix' | grep 'removed' | wc -l

The ‘|’ character is called a pipe. A sequence of commands joined together with pipes is called a pipeline.

Useless use of ‘cat’

Actually ‘cat’ is supposed to be used for concatenating files – like “cat file1 file2”. But some administrators abuse the command to feed something into a pipeline. That’s bad style and the reason why Randal L. Schwartz (a seasoned programmer) used to hand out virtual “Useless use of cat” awards. Shell commands can usually take a filename as their last argument as input. So this would be right:

grep something /var/log/syslog | wc -l

While this works, it is considered bad style:

cat /var/log/syslog | grep something | wc -l

Or, since grep even has a “-c” option to count matching lines, the whole task could be done with just grep:

grep -c something /var/log/syslog

Using files as input and output

Output (STDOUT)

Instead of using the keyboard for input and the screen for output you can use files. While

date

shows you the current date on the console you can use

date >currentdatefile

to redirect the output of the command (STDOUT) to the file named ‘currentdatefile’.

Input (STDIN)

This also works as input. The command

grep something

will search for the word ‘something’ in what you type on your keyboard. But if you want to look for ‘something’ in a file called ‘somefile’ you could run

grep something <somefile

Input and output

You can also redirect both input and output in the same command. A politically incorrect way to copy a file would be

cat <oldfile >newfile

Of course you would use ‘cp’ for that purpose in real life.

Errors (STDERR)

So far this covers STDIN (<) and STDOUT (>). But you can also redirect the STDERR channel by using “2>”. An example would be

grep something <somefile >resultfile 2>errorfile

2>&1 magic

Many admins stumble when it comes to redirecting one channel to another. Say you want to redirect both STDOUT and STDERR to the same file. Then you cannot do

grep something >resultfile 2>resultfile

It will only redirect the STDOUT (>) there and keep the ‘resultfile’ open so “2>” fails to write to it. Instead you need to do

grep something >resultfile 2>&1

This redirects STDOUT (1) to the ‘resultfile’ and tells STDERR (2) to send the output to what STDOUT is set to (also ‘resultfile’).

What does not work is this order:

grep something 2>&1 >resultfile

It may look right to us humans but in fact it does not redirect STDERR to the ‘resultfile’. The explanation: the shell interprets this line from left to right. So first “2>&1” is evaluated, which means “send STDERR to whatever STDOUT is currently set to”. As STDOUT is usually just printed to the shell, STDERR will also be sent to the shell. Next the shell finds “>resultfile” which sends STDOUT to the ‘resultfile’ but does not touch the previous destination of STDERR. So STDERR output will still end up in the shell.
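You can watch the difference with any command that produces both regular output and an error message. Here ls is asked for one file that exists and one that doesn’t – the file names are arbitrary:

# both the listing and the error message end up in out.txt
ls /etc/hostname /nonexistent >out.txt 2>&1
# the error message still appears on the shell; only the listing goes to out.txt
ls /etc/hostname /nonexistent 2>&1 >out.txt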

Interesting commands

  • grep
    Filters out lines with certain search words. “grep -v” searches for all lines that do not contain the search word.
  • sort
    Sorts the output alphabetically (needs to wait for EOF before doing its work). “sort -n” sorts numerically. “sort -u” filters out duplicate lines.
  • wc
    Word count. Counts the bytes, words and lines. “wc -l” just outputs how many lines were counted.
  • awk
    A sophisticated language (similar to Perl) that can be used to do something with every line. “awk ‘{print $3}'” outputs the third column of every line.
  • sed (stream editor)
    A search/replace tool to change something in every line.
  • less
    Useful at the end of a pipe. Allows you to browse through the output one page at a time. (“less” refers to a similar but less capable tool called “more” that allowed you to see the first page and then press ‘Space’ to view ‘more’.)
  • head
    Shows the first ten lines only. “head -50” shows the first 50 lines.
  • tail
    Shows the last ten lines only. “tail -50” shows the last 50 lines. “tail -f” follows a certain file.
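To see how several of these commands play together, here is a classic example that prints the five busiest client IP addresses from an Apache-style access log. The log file path is just an example, and “uniq -c” (which counts adjacent duplicate lines) sneaks in as an extra tool:

# first column = client IP; count how often each one appears and show the top 5
awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -5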

Why you should not use Python’s easy_install carelessly on Debian

(Hint: This article talks about the Pylons web framework which is essentially dead. But the warning about easy_install and pip is still valid.)

This article tries to make clear why blindly running setuptools/ez_setup on Debian/Ubuntu is dangerous and will happily break your operating system.

I loved the Pylons web framework. A significant number of developers were using it. It could be installed easily through a simple – though not sophisticated – tool called easy_install. easy_install is a wrapper/bootstrapper around the Python setuptools.

Setuptools isn’t terribly bad per se

Pylons depends on a number of Python modules. It would be a real pain to download and install all the right versions of these modules from different web sites just to run Pylons. Fortunately the setuptools download the dependencies and install them on your system. They use the PyPI Python package index that lists information on available packages and where setuptools can get them from. If your operating system has no sane way of tracking installed software (like Windows or UNIX flavors that do not have a package management) then this approach is fine.

Setuptools is not a decent software package management

Modern operating systems like Linux distributions, however, come with a sophisticated package management. Its job is to track what software is installed and which files belong to which package. You can install and uninstall packages (without leaving debris on your system), upgrade them all easily, install security patches, keep your configuration upon upgrades and track dependencies and conflicts. So it’s similar to setuptools but does a whole lot more. Setuptools does not even allow you to uninstall a module. After all we are not in the dark ages of operating systems any more where you just install software and the regular way of tidying up your operating system is to reinstall it because it breaks after a few months anyway. I haven’t had to reinstall Debian on my system for years even though I installed tons of software to play around with and removed it later. Actually the only explanation I have for why setuptools is so widespread is that many Python developers use operating systems without software management. And don’t get me started about its version numbers. Anyway, setuptools is a quick’n’dirty way to install Python packages.

Apt versus Setuptools

Debian has a large number of Python modules available as Debian packages through the advanced package tool (apt). Installing Pylons is just a matter of running “aptitude install python-pylons”. Do you smell the trouble already? Apt does not care about Python modules installed via setuptools and vice versa. So it’s possible to install a certain module twice in different locations. Depending on the order of your PYTHONPATH you may end up using different versions of the same module. Some people argue that they know which Python modules they installed through apt and which through setuptools. That’s a dangerous game to play. Since both know how to install dependent packages/modules you will quickly install software you are not aware of. Welcome to the mess.
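If you are unsure which copy of a module your interpreter actually picks up, you can ask Python itself (Pylons is just the example module here, and this is Python 2 era syntax):

# prints the file the module was loaded from: /usr/lib/... usually means the
# Debian package, /usr/local/... or an egg directory means setuptools installed it
python -c "import pylons; print pylons.__file__"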

Debian’s modules are always outdated

Yeah, right. That is the old rumor. People who say that still have not understood the difference between the “stable” and “unstable” branches of Debian. Debian consists of pretty up-to-date packages. Whatever is available via setuptools is likely available as a Debian package within days, too. So you will not miss anything. Debian releases a “stable” version roughly every two years though. Of course a lot changes within two years. But many people favor stability over new features. They are happy that their desktops will work smoothly for two years. If you prefer the newest software then just use the “unstable” branch instead of the “stable” one. The name “unstable” is misleading. It doesn’t mean that your system will constantly be broken if you use it. It just contains the newest software that couldn’t be tested as thoroughly yet. So my personal recommendation to Debian users is: use “unstable” if you like to stay up to date.

Jailed by virtualenv

If you do not like to bring your system to “unstable” and still want to run Pylons applications you can create a virtual environment (virtualenv). It bootstraps a Python environment into a directory that does not conflict with the rest of your operating system. I wouldn’t recommend that for daily work. But it allows you to use setuptools to install the newest software. And if you get fed up or do not need it any more then you just remove that directory and no harm is done. An additional advantage is that you can run different projects that each require different versions of software. Perhaps one project only works with a package of version <= 0.3 while another project needs at least 0.4. In this case you just give each project what it needs.
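A minimal sketch of that workflow – the directory and package names are just examples:

# create an isolated Python environment in ./pylonsenv
virtualenv pylonsenv
# work inside it for this shell session
source pylonsenv/bin/activate
# easy_install now installs into ./pylonsenv instead of your system
easy_install Pylons
# when you do not need it any more, simply throw the directory away
rm -rf pylonsenv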

The drawback is that you do not get security updates from apt so you are completely on your own. And it’s not really suitable for working on a daily basis because you hardly get access to anything outside of that jail.

So whose fault is all this?

The actual problem is that the creator of setuptools didn’t care about the needs of Linux distributions that come with some kind of package management. Red Hat has just recently discovered that they have the same problem. So it’s not just something a mad Debian developer came up with to confuse the world. Perhaps sometime in the future we will have apt-aware setuptools or a way to automatically create Debian packages from PyPI modules.

What should you do then?

My personal recommendation:

  • Use “unstable” for development
  • Deploy your application in a virtual environment

Whatever you do – NEVER install a Python module system-wide with setuptools/ez_setup on your Debian system. And – no – /usr/local is not a safe place for Python modules either.

I told you so. 😉
