Zabbix video series available

In case you haven't heard – I have been working on a video series for learning how to use the Zabbix open-source network monitoring software properly. Finally the work is done and the series is available at PacktPub. Feels good. 🙂

Making the best of Zabbix maps

There are different ways to access the data that Zabbix has gathered for you. If you are looking for a graphical way then you will want to try out maps. In this article I will give you an example of how to create useful maps and add live data to labels.

First, an apology to everyone who has eagerly waited for this article, which was actually supposed to become a screencast. I wasted lots of hours trying to find a decent workflow to produce WebM videos, so I decided to put this article in writing and deal with screencasts later.

The default map

Let us begin. Make sure that you have a Zabbix server and Zabbix web frontend up and running and log in. Navigate to Configuration and Maps and you will get the list of available maps. By default you will only have the “Local network” map which is quite boring. Note that there are two links here. One is called “Local network” and it will take you to the graphical map editor:

The other is called “Edit” and shows the properties of the map:

The graphical editor

The important settings are explained in the diagram above. But let’s take a deeper look at the graphical editor:

You see a map with a grid of 50×50 pixel squares. If you place icons here they snap to the nearest grid location. There are also “+” and “-” buttons at the top to add or remove icons and links. An icon can be one of:

  • Host
    A host icon represents a host that you monitor with Zabbix. Its label can directly show problems affecting that host.
  • Map
    A map icon is a link pointing to another map. You could create an overview map and if the user clicked on such a map icon it would load another map with a more detailed view.
  • Trigger
    This is an icon that can have different appearance depending on the status of a trigger (disabled, maintenance, problem or normal operation).
  • Host group
    This kind of icon represents an entire group of hosts. It is useful if your map is a bird’s eye view and you just want to know if there are any problems in your group of hosts (e.g. Linux hosts or hosts in a certain geographical location). This way you don’t have to add all the single hosts to the map.
  • Image
    Just a static image. You can give it a custom label. You can add further Images using Configuration, General, Images, Create image.

Labels and macros

Apropos labels: let me show you an example that adds some coolness to your maps:

I hope that this diagram is not too confusing. The red captions are just explanations and are not part of the map. But the green lines and boxes are part of my example map. As you can see the two hosts fry and screenshots are connected to the Internet icon. The two hosts have a custom label showing the hostname and the current CPU load. Let’s see how this magic works. When you click on an icon, a popup window appears containing the settings for that icon. The icon I use here is a host icon. The most interesting part is how I define the label:

The term in curly braces “{ … }” is called a macro. See also the Zabbix documentation on macros. The {HOSTNAME} macro expands to the name of the host connected to the icon (which is selected in the fourth line called “Host”). Since Zabbix 2.0 you should use {HOST.HOST} instead because {HOSTNAME} has been deprecated. “system.cpu.load[,avg1]” refers to the one-minute average CPU load on this system – I copied this name from the item’s key. And “.last(0)” is important so that you get the newest value of this item and not just the item itself. To see what the macro expands to, switch on the “Expand macros” feature in the top line of the maps editor.
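
To give you an idea, a host icon label in this style could look roughly like the following. Treat it as a sketch of the {host:key.function(parameter)} macro notation rather than an exact copy of my screenshot; fry is the host from the example map and your item keys may differ:

    {HOSTNAME}
    CPU load: {fry:system.cpu.load[,avg1].last(0)}

The first line expands to the host name, the second pulls the newest one-minute load average out of that item’s history.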

Links

So far I am sure that you can follow me and add your own icons and set labels. Adding links is not tricky either – if you know how to create them. Imagine that you created two new icons:

Let us create a link between these two icons. A link is nothing special – just a visual line connecting two icons. To create a link you need to select both icons (holding the Ctrl key while clicking each of them) and click on “Link +” on the top:

To configure the appearance of the line and what label is displayed on it you can edit its settings. This is a bit non-intuitive because the popup window does not show the link – it rather offers a “mass update” of the properties of all selected icons. So just click on one of the two icons and the popup window will change to this:

The diagram gives you an idea of how to reach the link settings – just follow the numbers in order. Now if you want to add live data to the link as in my previous example you can use macros again. The only difference is that you cannot use the {HOSTNAME} macro here, because there is no such thing as “this host” – the link is connected to two hosts. So you must specify the hostname explicitly. Other than that the macro should look familiar to you:
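
For instance, a link label built from such macros might look roughly like this (the hostname fry and the interface name eth0 are simply the values from my example setup):

    in:  {fry:net.if.in[eth0].last(0)}
    out: {fry:net.if.out[eth0].last(0)}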

Obviously “net.if.in” and “net.if.out” specify the network throughput on the “eth0” interface here. On the bottom there is a Link indicators box that allows you to change the appearance of the link line depending on triggers. In this example the line is a solid bold green line by default. But if the configured trigger fires the line will be displayed as a red dashed line. That way you can visualize any information you want. For example I use it to visualize the connections of the network backbone or the lag between database masters and slaves.

I hope that this article helped you understand basic maps and what you can do with labels and macros. More about maps can be found in the Zabbix documentation. Have fun creating your own maps. And let me know if this has helped you.

Zabbix: How escalations work

Zabbix is a very complex piece of software that takes weeks to fully understand. One of the most interesting, most complex and least documented features is escalations. They are used to define a schedule of what Zabbix is supposed to do when a certain event occurs – like waking up the system administrator or running some automatic emergency cleanup tasks.

The “why” of escalations

One of the reasons I suggest you use escalations instead of just standard actions is delayed notifications. If you are using trigger dependencies then chances are that in case of a network problem you still get spammed with alerts until the dependencies step in. I recommend reading my wiki article on delayed notifications. (Seems that zabbix.com has removed my nice wiki article. I should have copied it.)

Further scenarios could be:

  • if the SMS on your cell phone does not wake you up then a little later try to wake your backup admin or boss
  • get an instant alert message via Jabber but if you did not read it send an additional SMS five minutes later
  • if a certain process dies frequently then let Zabbix try to restart the service three times before it alerts you (see the sketch after this list)
  • repeat an alert every 10 minutes in case you missed the first one
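
For the service restart scenario the operation type would be a remote command instead of a message. In Zabbix 1.8 such a command is written as host:command, so a restart step might look roughly like this (hostname, init script and sudo setup are made up for illustration, and the agent on that host must allow remote commands via EnableRemoteCommands=1):

    web5:sudo /etc/init.d/apache2 restart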

Configuring escalations

Take a look at this screenshot that I will use to explain how escalations work (taken from the wiki article I mentioned above):

There is a lot of information in this configuration screen. To enable escalations click on the checkbox in the upper left corner. Then the top right box with the escalation plan (Action operations) will appear.

The escalation plan

Here you will see which operation runs when. As soon as the Action conditions defined on the lower left are met, the escalation plan starts to run – beginning at step 0. Every “period” seconds Zabbix advances to the next escalation step. In my example the period is 120 seconds, so two minutes after the action started step 1 runs. After another 120 seconds step 2 runs, and so on. You can change the period to a new default value during a step, but I find that confusing and would advise against it. To keep things simple I recommend setting the period to 60 seconds so that the steps correspond to minutes.

Steps

If you add or edit an operation in the list of operations on the top right then you see the “From” and “To” fields. This is the range of steps that a certain action should be run at. In my example it’s from 2 to 2. Which means: this operation is supposed to be run at step 2 only. Other examples:

  • from 0 to 2: run the operation at step 0 (immediately), step 1 (after 1 x period = 120 seconds) and step 2 (after 2 x period = 240 seconds)
  • from 1 to 0: run the operation at step 1 (after 1 x period = 120 seconds) and then keep repeating it every period (120 seconds) indefinitely
  • from 2 to 2: run the operation at step 2 only (after 2 x period = 240 seconds)

Only if not yet acknowledged

An extra goodie is the condition on the lower right. It means that this action is only run if the event has not been acknowledged yet. If anyone acknowledges the problem within the first 120 seconds then the condition would not match and nobody would get a notification. This is a workflow I can definitely recommend.

Stupid caveat

Caveat: you must always add an Action condition “Trigger value = PROBLEM” at the bottom left. Otherwise the escalation plan would start not only when the problem occurs but also after the recovery from the original network/server problem. This would lead to a delayed recovery message which is totally useless. (I have not yet understood why this is not an automatic/implicit condition.)

Recovery messages

Regarding the recovery: once the original problem that started the action is remedied, this is called a recovery. If you tick the “Recovery message” checkbox then Zabbix sends out recovery messages – but only to the users who have been informed of the problem. If the escalation plan would have informed your boss 10 minutes later but you fixed the problem before that, then your boss will not get a (confusing) recovery message. Zabbix does the right thing.

Tired of Nagios and Cacti? Try Zabbix.

One of my professional duties over the past ten years was monitoring systems. Even my diploma thesis was dedicated to distributed monitoring (although my professor sucked badly). Apart from a few custom-programmed scripts to analyze special situations (e.g. proxy clusters) I used tools that fellow administrators will find familiar: Nagios and Cacti. And another less famous text-configuration-based monitoring tool called Cricket. That worked well somehow, but Cricket was hard for my coworkers to learn and Cacti seemed unreliable and fundamentally broken in terms of SNMP checking. Besides, why do I have to set up availability checking in Nagios and then set up checks of the same parameters in another piece of software just to draw graphs? Then in 2009 I came across an open-source tool I hadn’t heard of before: Zabbix. Although it has a few rough edges it seems way more professional than other common tools (the commercial tools I saw were even worse than the open-source variants). After a lot of reading and trying it looks like it has good potential to replace Nagios and Cacti. So I thought I’d sum up my personal experiences with all of these tools.

Nagios. Its makers claim that it is the “industry standard in IT infrastructure monitoring“. Honestly it’s a great tool, but considering how many years it has existed it has barely evolved. During my diploma thesis in the year 2000 I wrote an alternative piece of software called “MrNetwork” that dealt with flaws that Nagios hasn’t even fixed today. Still, Nagios is a tool I have used for many years and it is very reliable. Advantages:

  • open source
  • large community
  • many powerful plugins (your own plugins are easy to create: just write a program that prints a one-line status string and exits with a certain return code – see the sketch after this list)
  • easy-to-use web frontend
  • debugging plugins is moderately simple.
  • many thought-out features like host groups or notification options that make your life easier
  • dependencies (so that you don’t get 100 alerts if a router between the Nagios server and other servers went down)
  • NagVis plugin with a great interactive editor that draws nice management-suitable maps (although I found the ndo2db interface hard to set up at first and a little flaky)
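
To illustrate how simple the plugin protocol is, here is a minimal sketch of a custom check in Python. The disk-space check and its thresholds are made up; all Nagios actually evaluates is the single output line and the exit code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN):

    #!/usr/bin/env python
    # Minimal sketch of a custom Nagios plugin (check and thresholds are illustrative).
    import os
    import sys

    WARN_FREE = 20.0  # warn below 20 % free space on /
    CRIT_FREE = 10.0  # critical below 10 % free space on /

    try:
        stat = os.statvfs("/")
        free = 100.0 * stat.f_bavail / stat.f_blocks
    except OSError as err:
        print("DISK UNKNOWN - %s" % err)
        sys.exit(3)

    if free < CRIT_FREE:
        print("DISK CRITICAL - %.1f%% free on /" % free)
        sys.exit(2)
    elif free < WARN_FREE:
        print("DISK WARNING - %.1f%% free on /" % free)
        sys.exit(1)
    print("DISK OK - %.1f%% free on /" % free)
    sys.exit(0)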

Annoyances:

  • The focus is on availability checking – you don’t get fancy graphs of the values that are monitored (e.g. how the CPU load developed over time). So you need a second tool and have to set up the same checks there just to get graphs. But availability percentages are computed automatically.
  • Textual configuration that has so many different settings that you need to look up the parameters often. A web-based configuration would probably be better (and is available as an add-on but I haven’t tested it).
  • Third-party plugins are often so badly programmed and barely documented that it often appeared easier to reinvent the wheel. (“Look, ma, I can has plugins.”)
  • Some views on the web interface are not very obvious (e.g. clicking on the title of a host group gives a nice view of all hosts with all services).
  • Many plugins don’t have corresponding configuration entries so you have to find out how they work and write configuration entries yourself (and those which are preconfigured take some archeology to find out which parameters they expect). This is a huge time-waster for beginners. And in your services configuration you have to cross-check the command configuration to understand the meaning of each parameter. Or do you remember what “check_http 80!john!doe!10!30!body” is supposed to check? (See the sketch after this list.)
  • Every set of parameters of a certain plugin requires a distinct configuration entry. The plugins have dozens of configuration switches that you may need one day. Want to set a timeout on HTTP checks? Write another check configuration. Want to check for a certain string in the HTTP response? Write another check configuration. And so on.
  • Most checks are run from the Nagios server itself (the NRPE plugin to do the checks on the respective remote systems somehow refused to work properly here) which is suboptimal and puts a lot of load on the server.
  • By default every alert triggers a notification. So if you can’t define proper dependencies (e.g. if you want to check your web server in all 30 supported languages and there is some logical error) then you will get spammed with alerts.
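
To see what the cryptic check_http example above means in practice, here is a hypothetical pair of Nagios configuration entries. The command name, host and flags are made up; the point is that the service entry only passes positional arguments separated by “!”, and you have to read the command definition to learn what each position stands for:

    define command {
        command_name  check_http_auth
        command_line  $USER1$/check_http -H $HOSTADDRESS$ -p $ARG1$ -a $ARG2$:$ARG3$ -w $ARG4$ -c $ARG5$ -s $ARG6$
    }

    define service {
        use                  generic-service
        host_name            web5
        service_description  HTTP login page
        check_command        check_http_auth!80!john!doe!10!30!body
    }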

Cricket. As Nagios does not support plotting graphs of the monitored values I was in need of another piece of software. Basically Cricket is a frontend to RRD (which stores data in a rotating/round-robin file that keeps data of the last X minutes/hours/days). It has a textual configuration that takes a lot of getting used to. Its main principle is inheritance of settings – they call it a “configuration tree”. This means you have a master DEFAULTS file that contains general settings like how to query SNMP. In a subdirectory you define a certain class of devices that you want to monitor – e.g. routers (the DEFAULTS are inherited at this level). Within the routers directory you can just define a list of routers you want to monitor. All settings are inherited from “above” (parent directories). It’s more a geek tool for shell lovers. Advantages:

  • very quick to monitor a large set of similar devices once the general device class is defined
  • simple web interface
  • very reliable
  • can monitor SNMP values (it does this very well) or execute external scripts – thus can be easily extended
  • flexible graphing – you can sum up values of two graphs into a new graph (aka “mtargets” – multiple targets)
  • different check frequencies can be configured for different subtrees through cron (by default values are collected every 5 minutes – this can be set as low as one minute if needed)

Annoyances:

  • the textual configuration is error-prone (leading to funny Perl errors that can be hard to debug)
  • users may expect to see all parameters of a certain device instead of a certain parameter across all devices (“Give me the statistics of router42” instead of “Let’s see the temperature of all routers we have.”)
  • customizing the graphs (drawn by RRD tool) isn’t trivial
  • Frequency of checks is by default 5 minutes. Before RRD can draw the first value of a graph it needs three values. So you’ll be waiting 15 minutes before you see any results.
  • RRD rounds data by default. So the yearly graph doesn’t show the peaks that the daily graphs do. (This can be fixed by not graphing the average values but the maximum values.) Long-time archiving of graphed data is not possible without throwing away the RRD files and manually customizing them. Changing the monitoring frequency (aka “heartbeat”) is not possible either without throwing away the data and starting from scratch either.
  • No proper built-in alerting in case certain thresholds are exceeded.

Cacti. Another frontend to RRD – and a pretty sophisticated one. Nearly everything is configured through its web interface. And the result is beautiful. It’s not entirely reliable though and SNMP support (at least in version 0.8.7b) is a big fail. I like Cacti because its user interface is much better than Cricket’s, but it’s less reliable and flexible. Advantages:

  • Beautiful and (for most features) simple web interface. Nice features like graphs that can be zoomed using Javascript.
  • Fine-grained permissions system. So a certain user may get read-only access to a certain subtree.
  • The tree where graphs are placed can be configured freely so you get exactly the view you want.

Disadvantages:

  • Doesn’t hide the RRD magic very well. The user is easily confused by templates, data sources and the like.
  • Graphing sometimes just stops working for no reason or values are missing although the server isn’t overloaded and other software doesn’t show such outages. According to a quick search on the lazyweb I’m not the only one with such effects.
  • Setting up many systems means a lot of clicking in the web interface. Setting up new kinds of checks (aka “templates”) means even more clicking and is very error-prone.
  • The quality of some third-party templates I tested was pretty bad. Creating new templates is tedious, error-prone, frustrating and close to black magic. Nothing for the casual user at least.
  • Doesn’t handle SNMP correctly (this is the biggest fail in my opinion and makes it unusable here). Although it knows how to query indexes (e.g. ifDescr to get the names of your network interfaces) it just seems to store fixed OIDs. So once the SNMP tables change the order or number of items (which isn’t unusual), suddenly other parameters get graphed.
  • Frequency of checks is by default 5 minutes. Increasing the frequency leads to missing data and wrong results.
  • As it uses RRD and needs 3 valid values you won’t see that your monitoring fails until you wait 3×5=15 minutes. Not suitable for impatient non-smokers like me. 🙂
  • Debugging failed checks is close to impossible. If a check fails then I find myself clicking around randomly trying to find typos because the alternative is digging around in database entries.
  • The web interface is sometimes confusing. A refresh of SNMP tables is done by clicking a meaningless green circle icon. Adding new items to a list is done by clicking an inconspicuous “Add” link that doesn’t even look like a link.
  • Another UI confusion: graphs are created from the “devices” view. But they are deleted from the “graph management” view.
  • No alerting in case certain thresholds are exceeded. Another tool like Nagios would still be needed to notify you.
  • Can’t sum up multiple targets so monitoring failover clusters doesn’t work well.
  • RRD averages data when putting daily values into weekly values, weekly into monthly and monthly into yearly. So the yearly graph doesn’t show the peaks that the daily graphs do. (This can be fixed by not graphing the average values but the maximum values.) Long-time archiving of detailed graphed data is not possible (RRD). Changing the monitoring frequency (aka “heartbeat”) is not possible either without throwing away the data and starting from scratch.

Zenoss. People pointed me to Zenoss which is supposed to offer the same functionality as other monitoring systems but is much more integrated. So this short list is more a quick one-day experimental impression than a thorough analysis. But in the end much of the fuss is just marketing. Advantages:

  • Beautiful web interface
  • Nice gimmicks like Google Maps integration to show you where your servers are down worldwide. Only makes sense when monitoring networks with several/many remote locations.
  • Can partly discover parameters of systems automatically. That works for static routes (although I wonder why the heck I want to monitor static routes), file systems and network interfaces. On the other hand processes can’t be discovered automatically and have to be set up manually.
  • Does not come with a specific agent but plays rather well with plain old SNMP.
  • Can monitor Windows through native WMI.
  • Large fanbase.

Annoyances:

  • Web interface feels slow (Zope is bloated)
  • Opaque operation. It does not tell you what’s actually going on. You can add monitoring and check back later to see whether anything worked the way you want.
  • Questionable reliability. In a test here I was monitoring a running process. The process was said to be down and suddenly went to “up” after a while although nothing had changed on the system.
  • Just one dashboard. Several dashboards similar to Zabbix “screens” would be nice.
  • Configuration and data is spread across MySQL, the internal Zope database storage and RRD files on disk.
  • The dependency graph is a nice Flash-based applet displaying how the systems are connected. But it does not show any details about the systems aside from whether they are up or down. And clicking on a host does not take you anywhere but center on the system. Beautiful but it could do so much more than just look beautiful.
  • I dislike the way that things are configured. The context menu with the “down arrow” needs to be used. I’d prefer simple “add” or “delete” actions instead of navigating the menu all the time. Looks like Javascript is used wrongly here.
  • Limited open-source version. Full version needs to be paid for.

Zabbix (1.8.2). I’m using the backported Zabbix 1.8.2 on Debian Lenny here. Debian Lenny’s native 1.4 version lacked some interesting features like proper SNMP handling. Zabbix seems close to the perfect monitoring system I had always dreamt of. I would have designed it differently in some aspects though.

Advantages:

  • Availability monitoring (like Nagios) and graphing (like Cacti) are combined in one tool.
  • Highly configurable. User John may just get an SMS for problems of high severity during the weekend and on weekdays get a Jabber message. Even automatic actions like restarting services can be set up.
  • The notifications actually help the person who gets the message. “Low disk space on /var on web5” with an additional comment is pretty helpful even when sent via SMS. Notifications are completely customizable with macro variables.
  • Very performant. A Zabbix agent can be installed on the systems (available for several operating systems – even for Windows) which gathers the information on each system efficiently. The agent can even call scripts or shell one-liners to gather information (see the sketches after this list). This kind of data collection is very efficient. You will need a server with good I/O performance and a lot of RAM though so that the database can work efficiently. At first I virtualized Zabbix on a Debian server on a VMware host with 1 GB of RAM. The database access became so slow that showing graphs or recent events made me stare at a busy mouse cursor for up to a minute. On a server with 4 GB of RAM, a large MySQL key buffer and SAS disks the system runs well again.
  • Collecting items (gather information about system parameters) happens at set intervals. You don’t have to wait for several minutes until you see results (it usually takes half a minute). Each item can have a custom check interval. So you can check for the CPU load every 30 seconds but check the number of free inodes on /home just once an hour.
  • Fast web interface.
  • Sophisticated monitoring of web sites. Zabbix can follow a path of simulated mouse clicks on a web site and check for functionality and response time.
  • Real-time graphs. Values are by default collected every 30 seconds. You quickly see where you are going.
  • Permissions system. Certain users can be limited to certain views.
  • Gathered data is stored in a database (MySQL, PostgreSQL, SQLite) instead of an inflexible RRD file. Storage periods (aka “history”) can be configured freely. Backing up the database is all there is to be done.
  • Templates (that can even link to further templates) save time in setting up many checks.
  • Graphs (plots of values over time) can be customized like which items are plotted and in what way. Even pie charts are possible.
  • Even parameters that don’t get an explicit graph can be graphed at any time. E.g. if the agent has tracked the CPU load on a system that you never cared about, you can graph that with one mouse click.
  • Screens and slide shows can be used for high-level views (aka “dashboards”) or to be displayed on a big geeky display. They can combine textual display of the status as well as clocks, ad-hoc graphs or predefined graphs.
  • Very flexible trigger expressions. For example you can tell a trigger to fire if the average system load over the last 15 minutes is above a certain value. As all measured parameters are stored in a backend database you can use all kinds of mathematical expressions – like firing a trigger if the average number of running processes during the last half hour is above 50 (see the sketch after this list). All other software I tested just has access to the last value gathered.
  • Alerting/notifications can be scripted easily by using shell scripts.
  • Remote monitoring made easy by using a Zabbix proxy.
  • Paid support and paid custom programming available. But the software is completely open-sourced.
  • 320 page PDF manual with screenshots and nice references. (Although I’d personally prefer an online help. Currently the “?” link within the web interface just points to the PDF that you can download.)
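
Two small sketches of what the last points look like in practice. First, a hypothetical UserParameter line in zabbix_agentd.conf that turns a shell one-liner into a custom agent item (the key name users.logged_in is made up):

    UserParameter=users.logged_in,who | wc -l

Second, a trigger expression in the {host:key.function(parameter)} style that Zabbix 1.8 uses. Roughly, the “average number of running processes over the last half hour above 50” example could be written like this (the hostname web5 is illustrative):

    {web5:proc.num[].avg(1800)}>50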

Annoyances:

  • A lot of mouse moving and clicking is required to set things up. For example, to set up an alert if the free space on a disk on a certain server is getting too low, you need to set up hosts, items, triggers and actions. Some of the clicking seems redundant. E.g. I didn’t find a way to create triggers automatically for a set of checks. If I monitor how full the “/home” partition is then I’d like to set a threshold in the same configuration step.
  • Takes a little patience to understand the concepts (because there is no hidden magic), although they make sense after half a day.
  • The web interface is crammed full of features. For casual users it’s confusing to navigate. In a real-life network you find yourself setting host variables and juggling templates, and unless you remember everything you did you will not get a good overview of your configuration. Zabbix is very complex but in my opinion it would need an even better web interface to deal with its features properly.
  • The map editor was close to unusable at first but has improved in version 1.8. It still takes a lot of time to set up the maps. The map editor could really use fewer mouse clicks to set up the map. I was also missing a feature to add current item values to the map (like the server room temperature or bandwidth on our load balancer). You can just add triggers which occupy a lot of space on the map, too.
  • You can only return one value per item. Sometimes you need to return a good/bad value plus some additional information. Nagios for example delivers a return code for OK/WARNING/CRITICAL and also a text string. In Zabbix these are different items.
  • Zabbix gets painful when you want to monitor different assets of the same kind: different network interfaces, disk partitions, MySQL instances or web server ports. Templates are pretty useless here. You will have to copy every item and trigger.
  • Does not detect the available assets on a monitored server automatically. Imagine that you want to monitor the space on all disk partitions of a system. You will have to copy over or create the check items manually, or define all possible checks in a template and disable those you don’t need. Cacti handles that better by offering you a list of partitions to monitor. Zenoss can do that partly. The Zabbix agent should be able to handle such service discovery automatically. (The built-in “Discovery” feature rather seems to detect new servers in a given network range automatically. But that’s something different.)
  • Hard to debug. Why was an action not run? Who would get alerted for a certain trigger? Why has a value become ‘unknown’ without an apparent reason? Of course there are reasons for what Zabbix does. But finding them often takes clicking and guessing because Zabbix does not tell the user.

See my Zabbix screencasts if you would like to learn more.

See also: Ben Rockwood’s blog. Further similar software I didn’t test thoroughly: Hyperic and Opsview. All of the above tools are great. I don’t mean to say that “Zenoss is total crap”, for example. The differences are subtle, and whether a piece of software suits your needs really depends on your expectations. I love that all this software is available as open source. A totally unscientific but fun way to gauge the communities is counting the number of active people in the respective channels on the Freenode IRC network:

  • #nagios: 133 users
  • #cacti: 58 users
  • #cricket: 2 users
  • #zenoss: 54 users
  • #zabbix: 61 users

Either Nagios has the largest fanbase or perhaps it means that the majority of people need help with it. 🙂
