Updating the BIOS on Lenovo laptops from Linux using a USB flash stick

Aren’t hardware manufacturers funny? They either require an old-fashioned operating system (Windows) or museum hardware (floppy drives) to update a BIOS. Apparently they never learn and are instead busy adding features like DRM and UEFI to make our lives even more miserable.

However, updating the BIOS on my Lenovo X230 laptop was surprisingly easy once I learned how it’s done (kudos to a G+ post I stumbled upon).

  1. Go to support.lenovo.com (or better use a search engine because the Lenovo website is beautiful but technically pretty broken and slow) and search for the BIOS upgrade for your laptop model.
  2. Download the most recent ISO file. Look for “BIOS bootable update CD”.
  3. Convert the ISO image using the geteltorito utility (if you don’t have it: apt-get install genisoimage).
    Example:
    geteltorito -o bios.img g2uj18us.iso
  4. Insert any USB stick you have lying around into your laptop. The image file is just 50 MB in size so even USB sticks with low capacity will work. Keep in mind that the stick will be completely overwritten.
  5. If you are in a graphical environment the stick may have been mounted automatically – unmount it again.
  6. Find out the device name of the stick. Open a terminal window and run “dmesg | tail”. You are looking for something like:
    [ 2101.614860] sd 6:0:0:0: [sdb] Attached SCSI disk
    The “sdb” tells you that your USB stick is available as /dev/sdb. Don’t just assume it’s sdb – if you write the image to the wrong device you will destroy the data on it.
  7. Copy the image to the USB stick (see the combined sketch after this list):
    dd if=bios.img of=/dev/sdb bs=1M
  8. Reboot your laptop.
  9. After the Lenovo logo appears press ENTER.
  10. Press F12 to make your laptop boot from something other than your hard disk.
  11. Select the USB stick.
  12. Make sure your laptop has its power supply plugged in. (It will refuse to update otherwise.)
  13. Follow the instructions.
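
For reference, here are steps 3 to 7 combined on the shell (the ISO name g2uj18us.iso and the device /dev/sdb are just the examples from above – verify the device name on your system first):

geteltorito -o bios.img g2uj18us.iso    # convert the El Torito boot image (step 3)
lsblk                                   # double-check which device really is the stick (step 6)
sudo dd if=bios.img of=/dev/sdb bs=1M   # write the image to the stick (step 7)
sync                                    # flush all caches before rebooting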

Moving contacts and calendar entries from Google to OwnCloud

As an Android user it’s hard to evade Google. You won’t be able to install apps without creating a GMail account and telling them your credit card number. But let’s at least not use GMail for your emails, contacts and calendar entries. In fact that is surprisingly simple. Let’s start with the…

Calendar

In OwnCloud enable the Calendar app if you haven’t already: log in as “admin”, click on “Apps+” at the bottom of the left navigation area and activate the Calendar app.

Install CalDAV-Sync (yes, costs a bit of money) on your Android phone. Run the application and set up a CalDAV account. The CalDAV URL can be found in the OwnCloud web interface when you navigate to your calendar and click on the cog wheel icon in the upper right corner. It looks something like cloud.example.org/remote.php/caldav/

Log into your Google Calendar and go to the settings hidden behind the cog wheel icon in the upper right corner. Click on “Calendar” at the top and look for an “export calendar” link. Your browser will download a ZIP archive that contains an ICS file. Unpack that ZIP archive. Navigate to the files section of your OwnCloud in the web browser and upload the ICS file to any folder using drag-and-drop. Then click on the file in the browser and confirm to have that ICS file imported into your OwnCloud calendar. Now your calendar has been copied to OwnCloud.

Finally remove the appointments from your Google calendar and stop the sync on your smartphone. You can remove the Google calendar neither from Google nor from your phone – it’s a pest. Make your OwnCloud calendar the new default on your phone. (If you own a Samsung S4 I suggest downloading the original “Google Calendar” app because the Samsung calendar app does not allow you to get rid of the Google calendar.)

Contacts

(Pretty similar to the calendar setup above.)

In OwnCloud enable the Contacts app if you haven’t already: log in as “admin”, click on “Apps+” at the bottom of the left navigation area and activate the Contacts app.

Install the CardDAV-Sync app (still beta and thus free) on your Android device. Run the application and set up a CardDAV account. The CardDAV URL can be found in the OwnCloud web interface when you navigate to the contacts app and click on the cog wheel icon in the lower right corner and then on the globe icon. It looks something like cloud.example.org/remote.php/carddav/addressbooks/johndoe/contacts

Log into your GMail account and click on “More” and “Export”. Choose to export all contacts in vCard format. Navigate to the files section of your OwnCloud in the web browser and upload the VCF file to any folder using drag-and-drop. Then click on the file in the browser and confirm to have those contacts imported into your OwnCloud contacts.

And finally remove your contacts from GMail and stop the sync.

One step closer to Google-independence – yeah!

Docking and undocking Linux laptops with nVidia GPUs using disper

Do you have trouble switching the display when docking and undocking your Linux laptop? In this article I will show you how to use disper and keyboard shortcuts to do that reliably if you are using nVidia’s annoyingly broken RandR-incompatible graphics driver.

Many companies nowadays give their employees laptops instead of classical PCs under their desks. And that’s a good idea because you can carry a laptop to meetings or hack sessions and keep all your digital information in one place instead of printing it out, carrying it around and typing it in later. It gets even better if your employer allows you to use your favorite Linux distribution instead of Windows on your laptop. Using Linux on a laptop has become increasingly simple with modern distributions like Ubuntu. But the laptop manufacturers still suck badly at providing proper drivers for anything other than Windows™. That’s especially true for graphics hardware from nVidia, which is the perfect example of ruining great hardware by failing to provide proper drivers. Modern desktop environments usually detect when your screen resolution changes or when you dock to or undock from your laptop’s docking station. Unfortunately nVidia’s proprietary driver still doesn’t support the common standard (RandR). They ship their own tool (“NVIDIA X Server Settings”) to switch monitors. Well, it works sometimes. At other times it alters your X server’s configuration (which is something that Linux nerds used to do 10-20 years ago) and requires an X server restart. Yeah, right.

But do not despair – here is a simple fix. The key is disper – a simple-to-use command-line tool to switch between different monitors. You should find it packaged for your favorite desktop Linux distribution. I personally use Xubuntu so I installed it using

$> sudo apt-get install disper

Disper is pretty simple and its manpage describes its options. Check that disper finds your monitors:

$> disper -l

display DFP-0: AU Optronics Corporation
resolutions: 1600×900
display DFP-2: Eizo S2402W
resolutions: 640×480, 800×600, 1024×768, 1280×960, 1280×1024, 1680×1050, 1600×1200, 1920×1200

You can see here that DFP-0 is my laptop’s own monitor with a resolution of 1600×900 pixels. And I have an Eizo monitor connected to my docking station. If your output looks similar you can try to switch between the monitors:

$> disper -s      (enables the “primary” display – usually the laptop’s built-in monitor)

$> disper -S      (enables the “secondary” display – usually the monitor connected to your docking station)

$> disper -c      (clones the display – will only work if both displays can use the same resolution)

$> disper -e      (extends your desktop over both monitors)

Now all you have to do is assign keyboard shortcuts for your favorite settings.

That’s it. When I dock my laptop I press Win-d and the external monitor will display my screen. And when I undock I press Win-u and my desktop moves to the laptop’s built-in monitor. Perfect.
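
If your desktop environment has no shortcut editor you could use xbindkeys instead. This is just a minimal sketch of a ~/.xbindkeysrc (xbindkeys is an extra package you would have to install; Mod4 is the Win key):

# dock: switch to the external monitor
"disper -S"
    Mod4 + d

# undock: switch back to the built-in monitor
"disper -s"
    Mod4 + u

Run “xbindkeys” once after editing the file (and add it to your session autostart) to activate the bindings.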

(And next time please get a laptop with a well-supported Intel GPU if you can. You will love it.)

Backups with rsnapshot to external USB drives

How long has it been since you last backed up your Linux system? Let me guess – you tried various backup systems and hate all of them? Let me show you how to use rsnapshot and an inexpensive external USB drive to back up precious data easily.

Renaming multiple files

If you need to rename a larger number of files following a certain pattern you will long for an automated solution. The ‘rename’ command, which (at least on my Debian installation) is part of the Perl installation, helps you here. All you need to know is the basics of regular expressions to define how the renaming should happen.

Say you want to add a ‘.old’ to every file in your current directory. At the end of each filename (matched by ‘$’) a ‘.old’ will be appended:

rename 's/$/.old/' *

Or you want to make the filenames lowercase:

rename 'tr/A-Z/a-z/' *

Or you want to remove all double characters:

rename 'tr/a-zA-Z//s' *

Or you have many JPEG files that look like “img00000154.jpg” but you want the first five zeros removed as you don’t need them:

rename 's/img00000/img/' *.jpg

In fact you can use any Perl operator as an argument. The actual documentation for the ‘s’ and ‘y’/‘tr’ operators is found in the ‘perlop’ manpage.

Pipes and redirection

Many system administrators seem to have problems with the concepts of pipes and redirection in a shell. A coworker recently asked me how to deal with log files and how to find the information he was looking for. This article tries to shed some light on it.

Input / Output of shell commands

Many of the basic Linux/UNIX shell commands work in a similar way. Every command that you start from the shell gets three channels assigned:

  • STDIN (channel 0):
    Where your command draws the input from. If you don’t specify anything special this will be your keyboard input.
  • STDOUT (channel 1):
    Where your command’s output is sent to. If you don’t specify anything special the output is displayed in your shell.
  • STDERR (channel 2):
    If anything goes wrong the command will send error messages here. By default this output is also displayed in your shell.

Try it yourself. The most basic command that just passes everything through from STDIN to STDOUT is the ‘cat’ command. Just open a shell and type ‘cat’ and press Enter. Nothing seems to happen. But actually ‘cat’ is waiting for input. Type something like “hello world”. Every time you press ‘Enter’ after a line ‘cat’ will output your input. So you will get an echo of everything you type. To let ‘cat’ know that you are done with the input send it an ‘end-of-file’ (EOF) signal by pressing Ctrl-D on an empty line.
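
Such a session could look like this – the second line of each pair is ‘cat’ echoing my input, and Ctrl-D ends it:

$> cat
hello world
hello world
this is fun
this is fun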

The pipe(line)

A more interesting application of STDIN/STDOUT is to chain commands together. The output of the first command becomes the input of the second command. Imagine the following chain:

The contents of the file /var/log/syslog are sent (as input) to the grep command. grep filters the stream for lines containing the word ‘postfix’ and outputs them. The next grep picks up what was filtered and filters it further for the word ‘removed’. So now we have only lines containing both ‘postfix’ and ‘removed’. And finally these lines are sent to ‘wc -l’ which is a shell command counting the lines of its input. In my case it found 27 such lines and printed that number to my shell. In shell syntax this reads:

cat /var/log/syslog | grep 'postfix' | grep 'removed' | wc -l

The ‘|’ character is called a pipe. A sequence of such commands joined together with pipes is called a pipeline.

Useless use of ‘cat’

Actually ‘cat’ is supposed to be used for concatenating files, like “cat file1 file2”. But some administrators abuse the command to feed something into a pipeline. That’s bad style and the reason why Randal L. Schwartz (a seasoned programmer) used to hand out virtual “Useless use of cat” awards. Most shell commands can take a filename as the last argument as input. So this would be right:

grep something /var/log/syslog | wc -l

While this works, it is considered bad style:

cat /var/log/syslog | grep something | wc -l

Or, if you knew that grep even has a “-c” option to count lines, the whole task could be done with just grep:

grep -c something /var/log/syslog

Using files as input and output

Output (STDOUT)

Instead of using the console for input and the screen for output you can use files. While

date

shows you the current date on the console you can use

date >currentdatefile

to redirect the output of the command (STDOUT) to the file named ‘currentdatefile’.

Input (STDIN)

This also works as input. The command

grep something

will search for the word ‘something’ in what you type on your keyboard. But if you want to look for ‘something’ in a file called ‘somefile’ you could run

grep something <somefile

Input and output

You can also redirect both input and output in the same command. A politically incorrect way to copy a file would be

cat <oldfile >newfile

Of course you would use ‘cp’ for that purpose in real life.

Errors (STDERR)

So far this covers STDIN (<) and STDOUT (>), but you can also redirect the STDERR channel using “2>”. An example would be

grep something <somefile >resultfile 2>errorfile

2>&1 magic

Many admins stumble when it comes to redirecting one channel to another. Say you want to redirect both STDOUT and STDERR to the same file. Then you cannot do

grep something >resultfile 2>resultfile

Both redirections would open the ‘resultfile’ independently and the two output streams would overwrite each other. Instead you need to do

grep something >resultfile 2>&1

This redirects STDOUT (1) to the ‘resultfile’ and tells STDERR (2) to send the output to what STDOUT is set to (also ‘resultfile’).

What does not work is this order:

grep something 2>&1 >resultfile

It may look right to us humans but in fact it does not redirect STDERR to the ‘resultfile’. The explanation: the shell interprets this line from left to right. First “2>&1” is evaluated, which means “send STDERR to the same destination that STDOUT is currently set to”. As STDOUT is usually just printed to the shell, STDERR will also go to the shell. Next the shell finds “>resultfile” which sends STDOUT to the ‘resultfile’ but does not touch the previous destination of STDERR. So the STDERR output will still end up on the shell.
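
You can watch the difference with any command that is guaranteed to fail – the directory name here is just an example that should not exist:

ls /nonexistent >out 2>&1     # both regular output and the error end up in 'out'
ls /nonexistent 2>&1 >out     # the error still appears on the shell; 'out' stays empty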

Interesting commands

  • grep
    Shows only the lines containing a certain search word. “grep -v” inverts this and shows all lines that do not contain the search word.
  • sort
    Sorts the output alphabetically (needs to wait until EOF before doing its work). “sort -n” sorts numerically. “sort -u” filters out duplicate lines.
  • wc
    Word count. Counts the bytes, words and lines. “wc -l” just outputs how many lines were counted.
  • awk
    A sophisticated language (similar to Perl) that can be used to do something with every line. “awk '{print $3}'” outputs the third column of every line. (See the combined example after this list.)
  • sed (stream editor)
    A search/replace tool to change something in every line.
  • less
    Useful at the end of a pipe. Allows you to browse through the output one page at a time. (“less” refers to a similar but less capable tool called “more” that allowed you to see the first page and then press ‘Space’ to view ‘more’.)
  • head
    Shows the first ten lines only. “head -50” shows the first 50 lines.
  • tail
    Shows the last ten lines only. “tail -50” shows the last 50 lines. “tail -f” follows a certain file.
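
As a combined example, this pipeline would show the ten most active IP addresses in an Apache access log (the log path is just an example, and the IP address is assumed to be the first column; “uniq -c” counts how often each line occurs):

awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head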

NFS: sec=sys or ruin your day

And once again I was bitten by problems on my Debian laptop mounting directories from the file server via NFS. After a Debian dist-upgrade I couldn’t log in to KDE any more. Shell login worked somehow but I quickly found out that I could neither read nor write any files there. An “ls -al” showed me the right permissions (not just unmappable numeric UIDs or something of that kind) and an “id -a” confirmed that my LDAP PAM configuration still worked. But reading or writing any file just led to “Permission denied”.

I’m not sure if it was an update of the nfs-common package in Sid. But I had the same problem before and it took me hours until I finally figured out that Debian’s NFS seems to use non-standard defaults – namely for the “sec” parameter when mounting the NFS share. According to the Solaris documentation the default is “sec=sys” which means that NFS uses the locally acquired UIDs and GIDs – from /etc/passwd, NIS or LDAP/PAM. But on Debian it seems to default to “sec=krb5” or something. As I have close to no idea how to set up Kerberos and don’t want to (and talking to other people, hardly anyone has used Kerberos either) I figure that it’s not really a sane default. I didn’t even ask for NFSv4 – just NFSv3. Perhaps I inadvertently set some /etc/default/nfs-common configuration setting wrong or whatever. It was just strange. So I set “sec=sys” in the options of the NFS share in my /etc/fstab and the problem was fixed.
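
For reference, the relevant line in my /etc/fstab looks roughly like this (server name and paths are placeholders for your setup):

fileserver:/export/home   /home   nfs   rw,nfsvers=3,sec=sys   0   0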

Actually I wonder what network file systems other people use in a Debian environment. NFS somehow feels antiquated to me anyway.

Using tcpdump and Wireshark to sniff and analyse your network traffic

Sometimes a network service is just not behaving the way it should. And the log files do not help you either. Then it is time to use the power of tcpdump and Wireshark to get a deeper look on what is actually happening on the wire.

If you have X11 running on the host in question you may just start Wireshark and record the traffic there. However you often need to record traffic on a machine that you can only log into through SSH. Don’t panic – you can still analyze the traffic.

Sniff the network

Run tcpdump on the server in question. The number of options is pretty large. But most of the time the call is rather simple. Try something like this:

tcpdump -lnni eth0 -w dump -s 65535 host web01 and port 80

That will record traffic on the interface eth0, write the output (in raw format) to the file named dump, record whole packets (65535 bytes maximum) instead of just the first few bytes, and use the filter expression host web01 and port 80 to capture only traffic to or from the server called web01 on the HTTP port (80).

Run this command and let it run while you want to record network activity. When you are done, just stop the process (Ctrl-C) and you have the raw data in the dump file.
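
If you want a quick first look while still on the server, tcpdump can read the dump file back itself (-n suppresses name resolution):

tcpdump -nr dump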

Analyze the dumpfile in Wireshark

Now copy this dump file over to a workstation where you have X11 running – scp should do it. Then start Wireshark and load the file (or just run wireshark dump). You will be shown what happened when.

In the top window you will see one line for each packet. Click on a packet that you would like to inspect more deeply and you will be shown the protocol stack in the lower left corner. The layers correspond to the OSI model and are used to transport the packet. Unless you are debugging your switch environment you can safely ignore the lower layers (shown as the first lines) and expand (click on the ‘+’) the HTTP protocol. Wireshark is very smart and will try to interpret the network traffic so you get a clearer view of the protocol. It knows about most common protocols like HTTP, SMTP, POP3 and a few hundred more.

Wireshark on the console

If you just need a tad more information than what tcpdump delivers but want to see it on the console and in real time then tshark may be for you. Give it a try.
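
For example, this sketch mirrors the tcpdump capture from above (same example interface and filter):

tshark -i eth0 -f 'host web01 and port 80'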

Mounting flash sticks or memory cards on Debian

Usually modern desktop Linux distributions make it easy to automatically mount external storage media like USB flash sticks. But if all else fails this article may help you.

Kernel

Compile a kernel with SCSI disk support (CONFIG_BLK_DEV_SD), multiple LUN support (CONFIG_SCSI_MULTI_LUN – otherwise the x-in-1 card reader will not work) and USB storage support (CONFIG_USB_STORAGE).

Plug it in

Plug the device/flash card in and watch your syslog. It should show something like:

May 12 19:09:57 aldi kernel: usb 2-1: new full speed USB device using address 5
May 12 19:09:57 aldi kernel: scsi5 : SCSI emulation for USB Mass Storage devices
May 12 19:09:57 aldi kernel:   Vendor: MATSHITA  Model: DMC-FZ20          Rev: 0100
May 12 19:09:57 aldi kernel:   Type:   Direct-Access                      ANSI SCSI revision: 02
May 12 19:09:57 aldi kernel: SCSI device sde: 246017 512-byte hdwr sectors (126MB)
May 12 19:09:57 aldi kernel: sde: assuming Write Enabled
May 12 19:09:57 aldi kernel: sde: assuming drive cache: write through
May 12 19:09:57 aldi kernel:  sde: sde1
May 12 19:09:57 aldi kernel: Attached scsi removable disk sde at scsi5, channel0, id 0, lun 0
May 12 19:09:57 aldi kernel: Attached scsi generic sg4 at scsi5, channel 0, id 0, lun 0,  type 0
May 12 19:09:57 aldi kernel: USB Mass Storage device found at 5
May 12 19:09:57 aldi udev[6641]: creating device node '/dev/sg4'
May 12 19:09:57 aldi udev[6624]: configured rule in '/etc/udev/rules.d/z_hal-plugdev.rules[2]' applied, 'sde' becomes '%k'
May 12 19:09:57 aldi udev[6624]: creating device node '/dev/sde'
May 12 19:09:57 aldi udev[6658]: configured rule in '/etc/udev/rules.d/z_hal-plugdev.rules[2]' applied, 'sde1' becomes '%k'
May 12 19:09:57 aldi udev[6658]: creating device node '/dev/sde1'
May 12 19:09:57 aldi scsi.agent[6666]:      sd_mod: can't be loaded (for disk)

You can see what kind of device was detected and that udev has created the device nodes /dev/sg4, /dev/sde and /dev/sde1 which you could mount. Or better: let the system tell you the partition scheme on the device:

$> fdisk -l /dev/sde

Disk /dev/sde: 125 MB, 125960704 bytes
8 heads, 32 sectors/track, 961 cylinders
Units = cylinders of 256 * 512 = 131072 bytes

     Device Boot      Start         End      Blocks   Id  System
/dev/sde1                 1         961      122959+   6  FAT16

Fixed mountpoint

It’s not very nifty to have to guess which device node the card gets. So you can use udev to assign it a fixed device name.

Try a:

udevinfo -a -p /sys/block/sde

At least one device will probably spit out something like this:

device '/sys/block/sde' has major:minor 8:64
  looking at class device '/sys/block/sde':
    SUBSYSTEM="block"
    SYSFS{dev}="8:64"
    SYSFS{range}="16"
    SYSFS{removable}="1"
    SYSFS{size}="246017"
    SYSFS{stat}="      12      348      360      286        0        0        0       0        0      286      286"

follow the class device's "device"
  looking at the device chain at '/sys/devices/pci0000:00/0000:00:1d.1/usb2/2-1/2-1:1.0/host5/5:0:0:0':
    BUS="scsi"
    ID="5:0:0:0"
    DRIVER="sd"
    SYSFS{detach_state}="0"
    SYSFS{device_blocked}="0"
    SYSFS{max_sectors}="240"
    SYSFS{model}="DMC-FZ20        "
    SYSFS{queue_depth}="1"
    SYSFS{rev}="0100"
    SYSFS{scsi_level}="3"
    SYSFS{state}="running"
    SYSFS{timeout}="30"
    SYSFS{type}="0"
    SYSFS{vendor}="MATSHITA"

  looking at the device chain at '/sys/devices/pci0000:00/0000:00:1d.1/usb2/2-1/2-1:1.0/host5':
    BUS=""
    ID="host5"
    DRIVER="unknown"
    SYSFS{detach_state}="0"

  looking at the device chain at '/sys/devices/pci0000:00/0000:00:1d.1/usb2/2-1/2-1:1.0':
    BUS="usb"
    ID="2-1:1.0"
    DRIVER="usb-storage"
    SYSFS{bAlternateSetting}=" 0"
    SYSFS{bInterfaceClass}="08"
    SYSFS{bInterfaceNumber}="00"
    SYSFS{bInterfaceProtocol}="50"
    SYSFS{bInterfaceSubClass}="06"
    SYSFS{bNumEndpoints}="02"
    SYSFS{detach_state}="0"
    SYSFS{iInterface}="00"

  looking at the device chain at '/sys/devices/pci0000:00/0000:00:1d.1/usb2/2-1':
    BUS="usb"
    ID="2-1"
    DRIVER="usb"
    SYSFS{bConfigurationValue}="1"
    SYSFS{bDeviceClass}="00"
    SYSFS{bDeviceProtocol}="00"
    SYSFS{bDeviceSubClass}="00"
    SYSFS{bMaxPower}="  2mA"
    SYSFS{bNumConfigurations}="1"
    SYSFS{bNumInterfaces}=" 1"
    SYSFS{bcdDevice}="0010"
    SYSFS{bmAttributes}="c0"
    SYSFS{detach_state}="0"
    SYSFS{devnum}="5"
    SYSFS{idProduct}="2372"
    SYSFS{idVendor}="04da"
    SYSFS{manufacturer}="Panasonic"
    SYSFS{maxchild}="0"
    SYSFS{product}="DMC-FZ20"
    SYSFS{speed}="12"
    SYSFS{version}=" 1.10"

  looking at the device chain at '/sys/devices/pci0000:00/0000:00:1d.1/usb2':
    BUS="usb"
    ID="usb2"
    DRIVER="usb"
    SYSFS{bConfigurationValue}="1"
    SYSFS{bDeviceClass}="09"
    SYSFS{bDeviceProtocol}="00"
    SYSFS{bDeviceSubClass}="00"
    SYSFS{bMaxPower}="  0mA"
    SYSFS{bNumConfigurations}="1"
    SYSFS{bNumInterfaces}=" 1"
    SYSFS{bcdDevice}="0206"
    SYSFS{bmAttributes}="c0"
    SYSFS{detach_state}="0"
    SYSFS{devnum}="1"
    SYSFS{idProduct}="0000"
    SYSFS{idVendor}="0000"
    SYSFS{manufacturer}="Linux 2.6.9 uhci_hcd"
    SYSFS{maxchild}="2"
    SYSFS{product}="Intel Corp. 82801EB/ER (ICH5/ICH5R) USB UHCI #2"
    SYSFS{serial}="0000:00:1d.1"
    SYSFS{speed}="12"
    SYSFS{version}=" 1.10"

  looking at the device chain at '/sys/devices/pci0000:00/0000:00:1d.1':
    BUS="pci"
    ID="0000:00:1d.1"
    DRIVER="uhci_hcd"
    SYSFS{class}="0x0c0300"
    SYSFS{detach_state}="0"
    SYSFS{device}="0x24d4"
    SYSFS{irq}="19"
    SYSFS{subsystem_device}="0x80a6"
    SYSFS{subsystem_vendor}="0x1043"
    SYSFS{vendor}="0x8086"

  looking at the device chain at '/sys/devices/pci0000:00':
    BUS=""
    ID="pci0000:00"
    DRIVER="unknown"
    SYSFS{detach_state}="0"

Look for entries like BUS="scsi" here. The line SYSFS{model}="DMC-FZ20        " seems to point to the USB device in question. So you go to /etc/udev/rules.d and create a file there (with a ".rules" suffix) which reads:

BUS=="scsi", SYSFS{model}=="DMC-FZ20*", KERNEL=="sd?1", NAME="%k", SYMLINK="lumix"

Since the files in the rules.d directory are scanned in the order of their file names you may want to call the file something like 010_my.rules.

This will make udev look for new devices on the SCSI bus (USB flash devices are handled like SCSI devices), check whether the model name starts with "DMC-FZ20" (there are often trailing spaces – thus the ‘*’) and create a symlink at /dev/lumix so we don’t have to guess which /dev/sd? device it has become. Don’t forget to restart udev.
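
On a Debian of that era (sysvinit) restarting udev would look something like this (assuming the standard init script):

/etc/init.d/udev restart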

Mounting

Create a mount point for your device (mkdir /mnt/lumix) and be sure to create an entry in your /etc/fstab so it’s easy to mount:

/dev/lumix      /mnt/lumix      vfat    user,noauto             0       0

Try “mount /mnt/lumix” and you should find your data under /mnt/lumix afterwards.

Now if you use KDE it’s easy to create an icon for a partition called /dev/lumix and you can easily mount and unmount the device. Always unmount the card first or the cache will not be written to the flash card which will likely lead to data corruption on the card. You have been warned.

See also

Since you are hopefully using Debian you will find a more complete documentation on writing rules files in /usr/share/doc/udev/writing_udev_rules/index.html

Bareos/Bacula Cheat Sheet

Bacula is a nifty network-capable backup software that keeps track of the backed-up files in a database for faster retrieval in case you need a certain file back. As a big fan of cheat sheets I created this one.

What’s up?

  • Which files shall be backed up? – show filesets (I=Included, E=Excluded)
  • What’s the server doing? – status dir
  • What’s the status of a certain job? – status jobid=xx
  • What’s the client doing? – status client
  • What’s the streamer doing? – status storage
  • Anything new? – messages

Backing up

  • Start a backup – run (then choose the backup job)
  • Label a new tape – label (and run “mount” afterwards)

Restoring

The common way (a user accidentally removed a file and wants the newest version back from the tapes):

  • Use the restore command.
  • Choose option 5 (Select the most recent backup for a client).
  • Navigate and mark the files to restore using cd / ls / dir / mark / markdir / unmark / unmarkdir / lsmark / estimate / pwd / count / find.
  • Type done when finished.

Jobs

  • Last jobs – list jobs (or “list jobid=xx” for a specific job)
  • Statistics about last jobs – list jobtotal
  • Which files were backed up? – list files jobid=xx

Job status

  • T – Terminated normally
  • C – Created but not yet running
  • R – Running
  • B – Blocked
  • E – Terminated in Error
  • e – Non-fatal error
  • f – Fatal error
  • D – Verify Differences
  • A – Canceled by the user
  • F – Waiting on the File daemon
  • S – Waiting on the Storage daemon
  • m – Waiting for a new Volume to be mounted
  • M – Waiting for a Mount
  • s – Waiting for Storage resource
  • j – Waiting for Job resource
  • c – Waiting for Client resource
  • d – Waiting for Maximum jobs
  • t – Waiting for Start Time
  • p – Waiting for higher priority job to finish
  • W – Terminated with warnings

Tapes

  • Which tapes are in the pool? – list media
  • Remove a tape – delete media
  • Which pools are defined? – list pools
  • Which tapes are/were used for a certain job? – list jobmedia
  • Assign a tape to a certain pool – add
  • Change parameters of a tape – update volume

Troubleshooting

  • Erase a label on a tape – mt rewind && mt weof && mt rewind

Terminology

Catalog

The catalog is the data in an SQL database running on the Bareos server. It stores information about all assets like jobs, clients and media. Without the catalog Bareos would have no idea which files were backed up and could not restore them. In case of a disastrous loss of the catalog you need to take the latest bootstrap and restore the catalog first using that information. The catalog itself is therefore also backed up because without it the system is useless.

Volumes

Volumes (also called “media”) are either files on disk or physical tapes. When backups run they save their data to volumes. Bareos keeps track in the catalog of which data can be found on each volume. Usually multiple backups run in parallel, leading to a multiplexed stream of data written to volumes. A volume always belongs to exactly one pool. Volumes have names/labels – tapes carry the name that is printed on the barcode sticker if the library has a barcode scanner.

Pool

A set of volumes. The pool can define a maximum number of volumes, the type of volumes (e.g. disk or tape) and the retention period. For example you can have short-lived disk pools for small frequent backups. On the other hand you may have long-lived tape pools that store data for several weeks or months.

Job

A specific action like a backup, restore or copy (e.g. from disk to tape). Jobs are usually started automatically by the director following a pre-defined schedule. Multiple jobs can run in parallel and share their resources. The catalog keeps track of which jobs have run in the past in order to know which volumes would be required to restore data from them.

In the bconsole you can see the running, past and scheduled jobs by running “stat dir”.

Job definition (aka jobdef)

The definition of a job. It is not stored in the catalog but in text files in /etc/bareos.

Client

A server to be backed up. Usually a file-daemon runs on the client. The director will talk to the client to run jobs.

Fileset

Defines which files or directories to back up from a certain server. A job defines which fileset to use for a backup.

Bootstrap

A small text file usually sent out via email frequently. It is required in case of a catalog loss to find the volume that contains the last backup of the catalog.

The upstream documentation reads: “The bootstrap file contains ASCII information that permits precise specification of what files should be restored, what volume they are on, and where they are on the volume. It is a relatively compact form of specifying the information, is human readable, and can be edited with any text editor.”

The bootstrap data is not confidential and should be forwarded to an external location in case of a disaster.

Message

Bareos can send messages to the console or via email. Results of jobs are sent via email.

Schedule

A definition of how often and at what time a job can be run.
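
As an illustration, a schedule resource in the director configuration could look roughly like this sketch (names and times are made up – check the Bareos documentation for the exact syntax of your version):

Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st sat at 23:05
  Run = Differential 2nd-5th sat at 23:05
  Run = Incremental mon-fri at 23:05
}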

Storage

Defines a way to write volumes. It is used by the storage daemon. A storage can be a path on the local disks or the name of the tape device. Autochangers (aka “tape libraries”) are also supported.

Retention

A volume is locked after being written to. The retention period defines when the volume can be overwritten again.

Scratch

This applies to tape volumes only. New tapes can be introduced into the “Scratch” pool. If a pool runs out of usable volumes then Bareos will take a volume from the Scratch pool and move it into its own pool.

Levels

Backups can happen in three different levels: Full, Differential and Incremental. Only Full backups are required. The other levels can be used to save space on volumes.

  • Full: every single file defined in the assigned fileset is saved to the volume
  • Differential: only new files that were not contained in the last Full backup are backed up
  • Incremental: only new files that were not contained in the last Differential or Incremental backup are backed up

Using only Full backups is secure because if you lose a Full backup you can simply take the Full backup before it. However you are stubbornly backing up the same files time and again, thus wasting space on the volumes. Complete restores are fastest though because only the last Full backup has to be considered.

Using Full and Differential backups saves some space. During restores the last Full and last Differential backup are considered. However multiple Differential backups may store files redundantly thus wasting a little space.

Using Full and Incremental backups saves most space. However for a restore all Incremental backups since the last Full backup need to be working. If one Incremental backup is broken in the chain then only the last Full backup can be restored.

Using Full + Differential + Incremental backups saves most space while still keeping the risk of losing data low. A restore requires the Incremental backups back to the last Differential backup plus the last Full backup. This could look like:

  • Full
    • Differential
      • Incremental
      • Incremental
      • Incremental
      • Incremental
    • Differential
      • Incremental
      • Incremental
      • Incremental  <– (the backup to restore)
      • Incremental
  • Full

To restore the Incremental backup marked with the arrow you would need the Differential backup directly above it and the Full backup at the top of the list.

Which mode to use depends on the type of data to back up. Database directories usually change in their entirety so a Full backup is the best solution. File servers with millions of files gain some advantage from using Full, Differential and Incremental backups.

Dangerous configurations:

  • Using rare Full backups and relying on many intermediate Incremental backups. If any of the many Incremental backups is faulty you lose all data back to that point.
  • Losing the Full backup and only keeping Incremental backups. This may occur if the retention periods are not adequately configured for Full backups.
