Understanding LVM (logical volume manager)

LVM is a neat feature that some system administrators still shy away from. But it’s really not that hard to learn. And these are some awesome features you get:

  • Create a larger (virtual) disk from smaller disks (similar to RAID-0)
  • Extend partitions without any downtime
  • Add space by adding disks without any downtime
  • Remove unused partitions and get back the space without fragmentation
  • Take snapshots of partitions. You can try out things and just roll back. Or you can create consistent database backups without keeping the database down for long.
  • Replace disks without losing data.

LVM is just a thin layer of software between the disks on your system and the partitions. On a Debian system you just “apt install lvm2” and you are ready to go.

Terminology

Three terms are commonly used:

  • PV (physical volume). A disk. Simple as that. An SSD. A hard drive. An SD card.
  • VG (volume group). A group of disks. Take three 2 TiB disks and you get a 6 TiB volume group.
  • LV (logical volume). A fraction of such a group. Just take 200 GiB of the volume group and put a file system on it.

A diagram is worth a thousand words so let’s use an illustration:

PVs

On the left you see your three hard disks. Your computer has found them and made them accessible as /dev/sda, /dev/sdb and /dev/sdc. Usually you would create partitions on them (e.g. using cfdisk), put a file system on the partitions (mkfs) and mount them into your file system (mount /dev/sda1 /home).

But this time we create a volume group from it. So first we turn the disks into PVs so that LVM recognizes them:

pvcreate /dev/sda
pvcreate /dev/sdb
pvcreate /dev/sdc

All this does is write a little meta-data onto each disk.

You can use the “pvs” command to list the PVs you have just created:
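
The output will look roughly like this – the sizes here are just an illustration assuming three 2 TiB disks that are not yet part of any VG:

PV         VG   Fmt  Attr PSize  PFree
/dev/sda        lvm2 ---  <2.00t <2.00t
/dev/sdb        lvm2 ---  <2.00t <2.00t
/dev/sdc        lvm2 ---  <2.00t <2.00t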

VG

Next we create a new volume group (VG) from these three disks:

vgcreate vg1 /dev/sda /dev/sdb /dev/sdc

Now you have a VG called “vg1” consisting of the three disks. The “vgs” command shows you an overview:

VG     #PV #LV #SN Attr   VSize    VFree   
vg1      3   0   0 wz--n-   <6t      <6t

So you see that there is one VG called “vg1” which consists of 3 PVs (disks). And so far no LVs are using it. We will get to that in a moment. Its size is roughly 6 TiB and all of that is free to use.

Using the “vgdisplay” command shows you even more information about it.

LVs

The final step is to bite chunks out of the VG. Check out the diagram above. We want a partition for “/home” with a size of 100 GiB. So the command to create your LV is:

lvcreate -n lvhome -L 100G vg1

Pretty simple. The “-n” parameter sets the name of the new LV. “-L” is the size you want to use. And “vg1” is the name of the VG you want to cut a piece out of.

The “lvs” command will show you an overview of your LVs.

LV     VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
lvhome vg1  -wi-ao---- 100.00g

There is also an “lvdisplay” command showing more verbose information about the LV.

File system

Finally we have something to put a file system on. You have probably used partitions on devices like /dev/sda1 before. But now you are using LVM. And the device for your “lvhome” is “/dev/vg1/lvhome”. Right, it’s “dev” + VG + LV. You could also use “/dev/mapper/vg1-lvhome”.

Put an EXT4 file system onto it:

mkfs.ext4 /dev/vg1/lvhome

And mount that file system:

mount /dev/vg1/lvhome /home
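
If you want to double-check, “df” will now show the new file system (the numbers here are just an illustration):

df -h /home

Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/vg1-lvhome   98G   61M   93G   1% /home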

Summary

There are PVs (disks), VGs (groups of disks) and LVs (fractions of a VG).

To use LVM first turn disks into PVs (pvcreate), then join them to a VG (vgcreate), then take a fraction of that (lvcreate) and finally create a file system on that (/dev/vgfoo/lvbar).
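
Put together, the whole workflow from empty disks to a mounted file system is just a handful of commands (device names, the VG/LV names and the size are the examples from above):

pvcreate /dev/sda /dev/sdb /dev/sdc
vgcreate vg1 /dev/sda /dev/sdb /dev/sdc
lvcreate -n lvhome -L 100G vg1
mkfs.ext4 /dev/vg1/lvhome
mount /dev/vg1/lvhome /home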

Every part has a list and a display command. These are:

  • PV -> pvs, pvdisplay
  • VG -> vgs, vgdisplay
  • LV -> lvs, lvdisplay

Cool tricks

You may not be impressed yet. So far LVM has just made your life more complicated. Of course there is a reason for it, because now the fun part begins. These are some common features:

/home is running out of space

Oh no. Your /home partition is 99% full? With LVM this is easy to solve. If you have free space in your VG (check with “vgs”) you can just extend the logical volume. No need to unmount anything. No downtime. Let’s give the partition 20 GiB more space:

lvextend -L +20G -r /dev/vg1/lvhome

The “-r” parameter not only extends the LV but also the file system that lives on top. That allows you to enlarge a partition without taking it offline. This is the neatest feature that LVM delivers in my opinion.

If your volume group is also out of space then you could add another disk (physical volume) and use “pvcreate” and “vgextend” to enlarge it.
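
For example, if the new disk shows up as /dev/sdd (the device name is just an assumption):

pvcreate /dev/sdd
vgextend vg1 /dev/sdd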

Replace a disk by a larger disk

No problem either. Let’s assume that one of your disks (physical volumes) on /dev/sda was 2 TB and you just bought a shiny new 10 TB disk (found on /dev/sdg). Now you want to move the data over to the new disk. As usual you need to turn /dev/sdg into a PV:

pvcreate /dev/sdg

And now you can just move all blocks (aka physical extents – see below) to the new disk:

pvmove /dev/sda /dev/sdg

And finally you can remove the PV from your VG:

vgreduce vg1 /dev/sda

By the way: once a disk is a PV it doesn’t matter whether your system finds it on /dev/sdb, /dev/sdc or any other device. As long as all the necessary PVs are found somewhere the VG will work. Only if your boot loader was installed on /dev/sda you may need to re-install it when you replace that disk.

Creating snapshots

A snapshot is like taking a photo with your camera. You get an image of a situation at a certain point in time. Reality will continue to alter the world but your photo will always show that specific moment. You can still take a pen and draw something on the photo, so it’s not read-only. (It used to be read-only in LVM 1.x.) I commonly use this technique to get consistent database snapshots of large MySQL/MariaDB databases.

Let’s just say that you have a huge 1 TiB LV called “lvmysql” that is mounted to /var/lib/mysql. Running a backup of those files takes an hour. And while you back up one file after another the SQL database is accessing the various files, making arbitrary changes. So your backup would contain garbage: some files would be from minute 5 while others might be from minute 30. Such a backup is unusable.

Now let’s instead use snapshots. Briefly stop the database and take a snapshot:

lvcreate -n mysnap -L 20G -s /dev/vg1/lvmysql

Note that we use “lvcreate” to take the /dev/vg1/lvmysql LV and create a new /dev/vg1/mysnap LV. Just that the latter is a snapshot.

You can start your database again. With a bit of luck this has just taken a few seconds. And now you have a perfectly consistent copy of the MySQL data directory. You can mount this snapshot anywhere in your file system:

mount /dev/vg1/mysnap /mnt/mysnap

Now you can take your time and just make a backup of /mnt/mysnap. It won’t change.

However the magic comes at a price. Have you noticed the “-L 20G” parameter? That does not mean that the snapshot has a size of 20 GiB. After all we started with a 1 TiB LV. So why did we specify a size at all?

The answer lies in the way that snapshots work. Once you started MySQL again the data directory was changed. LVM needs to provide you with your snapshot but at the same time allow MySQL to continue doing its work. That works through a mechanism called copy-on-write. If the original LV never changed, it would remain identical to the snapshot. If however the files on the LV are changed then LVM needs to keep a copy of the snapshotted state. The more changes happen, the more space for those copies you will need. And that is what the “-L 20G” means. It gives your snapshot a 20 GiB storage area to track the changes.

The size depends on how much change you expect while the snapshot exists. If the backup takes an hour and the database typically changes 100 GiB during that period then you should give the snapshot at least that much space. The “lvs” command shows you how much of that space has been used already. So you should keep the snapshot no longer than needed for a backup. Should you hit the 100% mark then your snapshot becomes unusable and all you can do is remove it. Fortunately that won’t affect the original LV. So you won’t break your database.
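
When the backup is finished you can unmount the snapshot and remove it again:

umount /mnt/mysnap
lvremove /dev/vg1/mysnap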

Another use case for snapshots is to try out things on the snapshot. And if you like what you did then merge the changes back into the original LV. That can be done using “lvconvert --merge /dev/vg1/mysnap”. But I suggest you consult the man page of “lvconvert” before you do that.

PEs (physical extents)

When you take a closer look at a PV (e.g. with the “pvdisplay” command) you will notice terms like “PE size”, “Free PE” or “Allocated PE”. A PE (physical extent) is the smallest unit of data that LVM handles. By default it is 4 MiB. That means you can grow or shrink a logical volume only in multiples of 4 MiB. Using “lvextend” you can specify the number of extents using “-l …” (lowercase L) instead of the size “-L …” (uppercase L).
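
So with the default PE size of 4 MiB, adding 100 extents adds 400 MiB (using the “lvhome” example from above):

lvextend -l +100 -r /dev/vg1/lvhome

A handy variant of the lowercase “-l” is the percentage syntax, e.g. “-l +100%FREE” to use all the remaining free space of the VG.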

Booting from an LV

Using LVs for all partitions used to be a problem in the past. Debian created an ext2 partition for /boot to make sure the system boots. This has been unnecessary for quite a while. You can use LVs everywhere and Debian will happily boot the system.

RAID

By default LVM just concatenates your disks without any redundancy. Like RAID-0 that means you lose everything if a single disk fails. LVM supports RAID levels 1 and 5 though. Besides the LVM man pages I mainly found this web page describing it.
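
For example, to create a mirrored LV right away (this assumes your VG spans at least two PVs; the name and size are just examples):

lvcreate --type raid1 -m 1 -n lvimportant -L 100G vg1

The “-m 1” asks for one mirror, i.e. two copies of the data on different PVs.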

Debian packages are so old

Debian comes with tens of thousands of software packages that you can easily install on your system. But Debian only publishes a new “stable” release every 2-3 years. That creates the impression that Debian packages must always be up to 3 years old. And who wants to work with a three year old piece of software? Are the package maintainers lazy? Should I download my software from its own project website instead?

I feel obliged to briefly discuss this topic because it is a common source of trouble and surprises. And it may make you be

(more…)

Updating the BIOS on Lenovo laptops from Linux using a USB flash stick

Aren’t hardware manufacturers funny? They either require an old-fashioned operating system (Windows) or museum hardware (floppy drives) to update a BIOS. Apparently they never learn and are instead busy adding features like DRM and UEFI to make our lives even more miserable.

However updating the BIOS on my Lenovo X230 laptop was surprisingly easy once I learned how to do that (kudos to a G+ post I stumbled upon).

  1. Go to support.lenovo.com (or better use a search engine because the Lenovo website is beautiful but technically pretty broken and slow) and search for the BIOS upgrade of your laptop model.
  2. Download the most recent ISO file. Look for “BIOS bootable update CD”.
  3. Convert the ISO image using the geteltorito utility (if you don’t have it: apt-get install genisoimage).
    Example:
    geteltorito -o bios.img g2uj18us.iso
  4. Insert any USB stick into your laptop that you have lying around. The image file is just 50 MB in size so even USB sticks with low capacity will work. Keep in mind that the stick will be completely overwritten.
  5. If you are in a graphical environment then unmount the USB stick again.
  6. Find out the device name of the stick. Open a terminal window and enter “dmesg | tail”. You are looking for something like:
    [ 2101.614860] sd 6:0:0:0: [sdb] Attached SCSI disk
    The “sdb” tells you that your USB stick is available on /dev/sdb. Don’t just assume it’s sdb. If it’s on another device on your laptop then you will destroy your data.
  7. Copy the image to the USB stick:
    dd if=bios.img of=/dev/sdb bs=1M
  8. Reboot your laptop.
  9. After the Lenovo logo appears press ENTER.
  10. Press F12 to make your laptop boot from something else than your harddisk.
  11. Select the USB stick.
  12. Make sure your laptop has its power supply plugged in. (It will refuse to update otherwise.)
  13. Follow the instructions.

Moving contacts and calendar entries from Google to OwnCloud

As an Android user it’s hard to evade Google. You won’t be able to install apps without creating a GMail account and telling them your credit card number. But let’s at least not use GMail for your emails, contacts and calendar entries. In fact that is surprisingly simple. Let’s start with the…

Calendar

In OwnCloud enable the Calendar app if you haven’t already. Log in as “admin”, click on “Apps+” on the bottom of the left navigation area and activate the Calendar app.

Install CalDAV-Sync (yes, costs a bit of money) on your Android phone. Run the application and set up a CalDAV account. The CalDAV URL can be found in the OwnCloud web interface when you navigate to your calendar and click on the cog wheel icon in the upper right corner. It looks something like cloud.example.org/remote.php/caldav/

Log into your Google Calendar and go to the settings hidden behind the cog wheel icon in the upper right corner. Click on “Calendar” at the top. Look for an “export calendar” link. You will get a ZIP archive downloaded that contains an ICS file. Unpack that ZIP archive. Navigate to the files section of your OwnCloud in the web browser and upload the ICS file to any folder using drag-and-drop. Then click on the file in the browser and confirm that you want the ICS file imported into your OwnCloud calendar. Now your calendar has been copied to OwnCloud.

Finally remove the appointments from your Google calendar and stop the sync from your smartphone. You can remove the Google calendar neither from Google nor from your phone – it’s a pest. Make your OwnCloud calendar the new default on your phone. (If you own a Samsung S4 I suggest downloading the original “Google Calendar” app because the Samsung calendar app does not allow you to get rid of the Google calendar.)

Contacts

(Pretty similar to the above.)

In OwnCloud enable the Contacts app if you haven’t already. Log in as “admin”, click on “Apps+” at the bottom of the left navigation area and activate the Contacts app.

Install the CardDAV-Sync app (still beta and thus free) on your Android device. Run the application and set up a CardDAV account. The CardDAV URL can be found in the OwnCloud web interface when you navigate to the contacts app and click on the cog wheel icon in the lower right corner and then on the globe icon. It looks something like cloud.example.org/remote.php/carddav/addressbooks/johndoe/contacts

Log into your GMail account and click on “More” and “Export”. Choose to export all contacts in vCard format. Navigate to the files section of your OwnCloud in the web browser and upload the VCF file to any folder using drag-and-drop. Then click on the file in the browser and confirm that you want those contacts imported into your OwnCloud contacts.

And finally remove your contacts from GMail and stop the sync.

One step closer to Google-independence – yeah!

Docking and undocking Linux laptops with nVidia GPUs using disper

Do you have trouble switching the display when docking and undocking your Linux laptop? In this article I will show you how to use disper and keyboard shortcuts to do that reliably if you are using nVidia’s annoyingly broken RandR-incompatible graphics driver.

Many companies nowadays give their employees laptops instead of classical PCs under their desks. And that’s a good idea because you can carry it to meetings or hack sessions and just keep all your digital information in one place instead of printing it out, carrying it around and typing it in later. And it gets even better if your employer allows you to use your favorite Linux distribution instead of Windows on your laptop. Using Linux on a laptop has become increasingly simple with modern distributions like Ubuntu.

But still the laptop manufacturers suck badly at providing proper drivers for anything other than Windows™. That’s especially true for graphics hardware from nVidia which is the perfect example for ruining great hardware by failing to provide proper drivers. Modern desktop environments usually detect if your screen resolution changes or if you dock or undock from your laptop’s docking station. Unfortunately nVidia’s proprietary driver still doesn’t support common standards (RandR). They ship their own tool (“NVIDIA X Server Settings”) to switch monitors. Well, it works sometimes. At other times it alters your X server’s configuration (which is something that Linux nerds used to do 10-20 years ago) and requires an X server restart. Yeah, right.

But do not despair – here is a simple fix. The key is disper – a simple-to-use command-line tool to switch between different monitors. You should find it packaged for your favorite desktop Linux distribution. I personally use Xubuntu so I installed it using

$> sudo apt-get install disper

Disper is pretty simple and its manpage describes its options. Check that disper finds your monitors:

$> disper -l

display DFP-0: AU Optronics Corporation
resolutions: 1600×900
display DFP-2: Eizo S2402W
resolutions: 640×480, 800×600, 1024×768, 1280×960, 1280×1024, 1680×1050, 1600×1200, 1920×1200

You can see here that DFP-0 is my laptop’s own monitor with a resolution of 1600×900 pixels. And I have an Eizo monitor connected to my docking station. If your output looks similar you can try to switch between the monitors:

$> disper -s      (enables the “primary” display – usually the laptop’s built-in monitor)

$> disper -S      (enables the “secondary” display – usually the monitor connected to your docking station)

$> disper -c      (clones the display – will only work if both displays can use the same resolution)

$> disper -e      (extends your desktop over both monitors)

Now all you have to do is assign keyboard shortcuts for your favorite settings.
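
One possible way – assuming you use a tool like xbindkeys (any shortcut editor of your desktop environment will do just as well) – is a ~/.xbindkeysrc along these lines:

# switch to the external monitor when docking
"disper -S"
  Mod4 + d

# switch back to the built-in monitor when undocking
"disper -s"
  Mod4 + u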

That’s it. When I dock my laptop I press Win-d and the external monitor will display my screen. And when I undock I press Win-u and my desktop moves to the laptop’s built-in monitor. Perfect.

(And next time please get a laptop with a well-supported Intel GPU if you can. You will love it.)

Backups with rsnapshot to external USB drives

How long has it been since you last backed up your Linux system? Let me guess – you tried various backup systems and hate all of them? Let me show you how to use rsnapshot and an inexpensive external USB drive to back up precious data easily. (more…)

Renaming multiple files

If you need to rename a larger number of files following a certain pattern then you will long for an automated solution. The ‘rename’ command, which (at least on my Debian installation) is part of the Perl installation, helps you here. All you need to know is the basics of regular expressions to define how the renaming should happen.

Say you want to add a ‘.old’ to every file in your current directory. At the end of each filename (the ‘$’ anchor) a ‘.old’ will be appended:

rename 's/$/.old/' *

Or you want to make the filenames lowercase:

rename 'tr/A-Z/a-z/' *

Or you want to squeeze runs of repeated letters into a single one:

rename 'tr/a-zA-Z//s' *

Or you have many JPEG files that look like “img00000154.jpg” but you want the first five zeros removed as you don’t need them:

rename 's/img00000/img/' *.jpg

In fact you can use any Perl operator as an argument. The actual documentation for the ‘s’ and ‘y’/‘tr’ operators is found in the ‘perlop’ manpage.

Pipes and redirection

Many system administrators seem to have problems with the concepts of pipes and redirection in a shell. A coworker recently asked me how to deal with log files and how to find the information he was looking for. This article tries to shed some light on it.

Input / Output of shell commands

Many of the basic Linux/UNIX shell commands work in a similar way. Every command that you start from the shell gets three channels assigned:

  • STDIN (channel 0):
    Where your command draws the input from. If you don’t specify anything special this will be your keyboard input.
  • STDOUT (channel 1):
    Where your command’s output is sent to. If you don’t specify anything special the output is displayed in your shell.
  • STDERR (channel 2):
    If anything goes wrong the command will send error messages here. By default this output is also displayed in your shell.

Try it yourself. The most basic command that just passes everything through from STDIN to STDOUT is the ‘cat’ command. Just open a shell and type ‘cat’ and press Enter. Nothing seems to happen. But actually ‘cat’ is waiting for input. Type something like “hello world”. Every time you press ‘Enter’ after a line ‘cat’ will output your input. So you will get an echo of everything you type. To let ‘cat’ know that you are done with the input send it an ‘end-of-file’ (EOF) signal by pressing Ctrl-D on an empty line.
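
A short session could look roughly like this (the lines you type are echoed right back at you):

$> cat
hello world
hello world

(The second “hello world” comes from ‘cat’. Press Ctrl-D to end the input.)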

The pipe(line)

A more interesting application of the STDIN/STDOUT is to chain commands together. The output of the first command becomes the input of the second command. Imagine the following chain:

The contents of the file /var/log/syslog are sent (as input) to the grep command. grep will filter the stream for lines containing the word ‘postfix’ and output those. Now the next grep picks up what was filtered and filters it further for the word ‘removed’. So now we have only lines containing both ‘postfix’ and ‘removed’. And finally these lines are sent to ‘wc -l’ which is a shell command that counts the lines of its input. In my case it found 27 such lines and printed that number to my shell. In shell syntax this reads:

cat /var/log/syslog | grep 'postfix' | grep 'removed' | wc -l

The ‘|’ character is called a pipe. A sequence of such commands joined together with pipes is called a pipeline.

Useless use of ‘cat’

Actually ‘cat’ is supposed to be used for concatenating files. Like “cat file1 file2”. But some administrators abuse the command to put something into a pipeline. That’s bad style and the reason why Randal L. Schwartz (a seasoned programmer) used to hand out virtual “Useless use of cat” awards. Most shell commands can take a filename as their last argument and use that as input. So this would be right:

grep something /var/log/syslog | wc -l

While this works, it is considered bad style:

cat /var/log/syslog | grep something | wc -l

Or if you know that grep even has a “-c” option to count matching lines, the whole task can be done with just grep:

grep -c something /var/log/syslog

Using files as input and output

Output (STDOUT)

Instead of using the keyboard for input and the screen for output you can use files.

date

shows you the current date on the console you can use

date >currentdatefile

to redirect the output of the command (STDOUT) to the file named ‘currentdatefile’.

Input (STDIN)

This also works as input. The command

grep something

will search for the word ‘something’ in what you type on your keyboard. But if you want to look for ‘something’ in a file called ‘somefile’ you could run

grep something <somefile

Input and output

You can also redirect both input and output in the same command. A politically incorrect way to copy a file would be

cat <oldfile >newfile

Of course you would use ‘cp’ for that purpose in real life.

Errors (STDERR)

So far this covers STDIN (<) and STDOUT (>) but you can also redirect the STDERR channel by using “2>”. An example would be

grep something <somefile >resultfile 2>errorfile

2>&1 magic

Many admins stumble when it comes to redirecting one channel to another. Say you want to redirect both STDOUT and STDERR to the same file. Then you cannot do

grep something >resultfile 2>resultfile

Both redirections would open the ‘resultfile’ independently and overwrite each other’s output. Instead you need to do

grep something >resultfile 2>&1

This redirects STDOUT (1) to the ‘resultfile’ and tells STDERR (2) to send the output to what STDOUT is set to (also ‘resultfile’).

What does not work is this order:

grep something 2>&1 >resultfile

It may look right to us humans but in fact it does not redirect STDERR to the ‘resultfile’. The explanation: the shell interprets this line from left to right. So first the “2>&1” is evaluated which means “send STDERR to wherever STDOUT currently points”. As STDOUT is usually just printed to the shell, STDERR will also be sent to the shell. Next the shell finds “>resultfile” which sends STDOUT to the ‘resultfile’ but does not touch the previous destination of STDERR. So STDERR output will still end up in the shell.
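
You can see the difference yourself with a command that writes to both channels (the non-existent path is only there to provoke an error message):

ls /etc/hostname /does-not-exist >resultfile 2>&1      (both the listing and the error message end up in ‘resultfile’)

ls /etc/hostname /does-not-exist 2>&1 >resultfile      (the error message still appears in your shell)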

Interesting commands

  • grep
    Shows only the lines that contain a certain search word. “grep -v” inverts that and shows all lines that do not contain the search word.
  • sort
    Sorts the output alphabetically (needs to wait until EOF before doing its work). “sort -n” sorts numerically. “sort -u” filters out duplicate lines.
  • wc
    Word count. Counts the bytes, words and lines. “wc -l” just outputs how many lines were counted.
  • awk
    A sophisticated language (similar to Perl) that can be used to do something with every line. “awk ‘{print $3}’” outputs the third column of every line.
  • sed (stream editor)
    A search/replace tool to change something in every line.
  • less
    Useful at the end of a pipe. Allows you to browse through the output one page at a time. (“less” refers to a similar but less capable tool called “more” that allowed you to see the first page and then press ‘Space’ to view ‘more’.)
  • head
    Shows the first ten lines only. “head -50” shows the first 50 lines.
  • tail
    Shows the last ten lines only. “tail -50” shows the last 50 lines. “tail -f” follows a certain file.
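
Combined into a pipeline these little commands become quite powerful. For example, to get a quick overview of which programs logged something to the syslog (the column number is an assumption about your log format – adjust it as needed):

awk '{print $5}' /var/log/syslog | sort -u | less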

NFS: sec=sys or ruin your day

And once again I was bitten by problems on my Debian laptop mounting directories from the file server via NFS. After a Debian dist-upgrade I couldn’t log in to KDE any more. Shell login worked somehow but I quickly found out that I could neither read nor write any files there. Apparently an “ls -al” showed me the right permissions (not just unmappable numeric UIDs or something of that kind) and an “id -a” confirmed that my LDAP PAM configuration still worked. But reading or writing any file just led to “Permission denied”.

I’m not sure if it was an update of the nfs-common package in Sid. But I had the same problem before and it took me hours until I finally figured out that Debian’s NFS seems to use non-standard defaults. Namely the “sec” parameter when mounting the NFS share. According to the Solaris documentation the default is “sec=sys” which means that NFS uses the locally acquired UIDs and GIDs – from /etc/passwd, NIS or LDAP/PAM. But on Debian it seems to default to “sec=krb5” or something. As I have close to no idea how to set up Kerberos and don’t want to (and hardly anyone I talked to has used Kerberos either) I figured that it’s not really a sane default. I didn’t even ask for NFSv4 – just NFSv3. Perhaps I inadvertently set some /etc/default/nfs-common configuration setting wrong or whatever. It was just strange. So I set “sec=sys” in the options of the NFS share in my /etc/fstab and the problem was fixed.
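
The corresponding line in /etc/fstab looks roughly like this (server name and paths are of course just examples):

fileserver:/export/home  /home  nfs  rw,sec=sys,vers=3  0  0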

Actually I wonder what network file systems other people use in a Debian environment. NFS somehow feels antiquated to me anyway.

Using tcpdump and Wireshark to sniff and analyse your network traffic

Sometimes a network service is just not behaving the way it should. And the log files do not help you either. Then it is time to use the power of tcpdump and Wireshark to get a deeper look on what is actually happening on the wire.

If you have an X11 running on the host in question you may just start Wireshark and start recording the traffic. However often you need to record traffic on a machine that you can only log in to through SSH. Don’t panic – you can still analyze the traffic.

Sniff the network

Run tcpdump on the server in question. The number of options is pretty large. But most of the time the call is rather simple. Try something like this:

tcpdump -lnni eth0 -w dump -s 65535 host web01 and port 80

That will record traffic on the interface eth0, write the output (in raw format) to the file named dump, record whole packets (up to 65535 bytes) instead of just the first few bytes, and use the filter expression “host web01 and port 80” to only capture traffic to and from the server called web01 on the HTTP port (80).

Run this command and let it run for as long as you want to record network activity. When you are done, just stop the process (Ctrl-C) and you have the raw data in the dump file.

 

Analyze the dumpfile in Wireshark

Now copy this dump file over to a workstation where you have X11 running – scp should do it. Then start Wireshark and load the file (or just run “wireshark dump”). You will be shown what has happened when.

In the top window you will see one line for each packet. Click on a packet that you would like to inspect more deeply and you will be shown the protocol stack in the lower left corner. The layers correspond to the OSI model and are used to transport the packet. Unless you are debugging your switch environment you can safely ignore all the lower layers (that are shown as the first lines) and expand (click on the ‘+’) the HTTP protocol. Wireshark is very smart and will try to interpret the network traffic so you get a clearer view of the protocol. It knows about most common protocols like HTTP, SMTP, POP3 and a few hundred more.

Wireshark on the console

If you just need a tad more information than what tcpdump delivers but want to see it on the console and in real-time then tshark may be for you. Give it a try.
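
A minimal invocation looks very similar to tcpdump (the interface and the filter are just the examples used above):

tshark -i eth0 -f 'host web01 and port 80'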
