
Linux kernel USB errors -71 and -110

Linux

After an upgrade of my PC's mainboard BIOS the boot would take a minute or more to complete and sometimes the lightdm login screen would sit there but not accept keyboard input for another minute or so. Then the keyboard got enabled and I could log in normally. Everything worked fine after that bootup struggle completed. This was fully reproducible and persisted across reboots. Weird.

The kernel dmesg log showed entries that looked suspicious:

dmesg log excerpt showing USB error messages

Googling error -110 and error -71 is a bit hard. Why the USB driver does not emit useful error messages instead of archaic errno-style numbers escapes me. This is not the 80s anymore.

Citation needed (Wikipedia style)

The wisdom of the crowd says error -110 means something like "the USB port power supply was exceeded" [source].

Now lsusb -tv shows device 1-7 ... to be my USB keyboard. I somehow doubt that it wants more power than the hub is willing to provide.

The Arch Linux BBS forums recommend piecing together information from drivers/usb/host/ohci.h and (updated from their post, which is from 2012) /tools/include/uapi/asm-generic/errno.h. That is why some people consider -110 to mean "Connection timed out". Nah, not likely either.

Reading through the kernel source around drivers/usb/host did not enlighten me either. To the contrary. Uuugly. There seems to be no comprehensive list of what these error codes mean. The numbers are assigned to error conditions quite arbitrarily. And - of course - there is no documentation. "It was hard to do, so it should be hard to understand as well."

Luckily some of the random musings I read through contained some curious advice: power cycle the host. So I did, and that did not make the error go away. Other people insisted on pulling cables out of wall sockets, unplugging everything and conducting esoteric rituals. That made it dawn on me: the mainboard of course keeps the USB nicely powered in the "off" state, too. So switching the power supply off (yes, these have a separate switch, go find yours), waiting a bit for the capacitors to drain, switching things back on and ... the errors were gone, the system booted within seconds again.

So the takeaway message: If you get random error messages like

device descriptor read/64, error -110
device not accepting address 42, error -71

on devices that previously worked fine ... completely remove power from the host, the hubs and the USB devices. That way they forget they ever saw each other on the bus. And when they see each other again after that blackout, they will happily negotiate protocol details with each other once more and succeed.
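
A quick check whether your machine is affected (assuming the log lines match the ones quoted above):

dmesg | grep -E 'error -(71|110)'   # find the failing device numbers ("1-7" style)
lsusb -tv                           # map those device numbers to actual devices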

Linux App Summit - Linux applications apparently unable to produce a five page sponsorship brochure

Strategy

I've been calling out the Linux Foundation for producing their annual report on Macs. And again.

But catching the Linux App(lication) Summit producing their five page sponsorship brochure on Windows with Adobe...

Produced by Adobe InDesign 18.1 on Windows

Apparently neither GNOME nor KDE has apps sufficient to produce such content. Wtf folks.

P.S.: Just checked the Linux Foundation's 2022 Annual Report titled "Leadership in Security and Innovation" ... Adobe InDesign 18.0 (Macintosh).

Their PDF title looks like this:

PDF is hard.

Gradual improvements at the Linux Foundation

Strategy

After last year's blunder with trying to hide the Adobe toolchain and using hilarious stock photos, the Linux Foundation did much better in their 2021 annual report1 published Dec. 6, 2021.

Still they are using the Adobe toolchain (InDesign, Acrobat PDF) and my fellow Kernel2 Developer Geert was quick to point that out in the first comment on the LWN note about the publication:

LWN comment from Geert

I think it is important to call the Linux Foundation (LF) out again and again. Adobe is a Silver member of the LF, so the LF can motivate them to publish their applications for Linux. And if that is not an option, there are Free alternatives like Scribus that could well use the exposure and funds of the LF to help catch up to the market-leading product, Adobe InDesign.

Linux Foundation Annual report 2021, document properties

Personally, as a photographer, I am very happy they used stock images from Unsplash to illustrate the 2021 edition rather than the cringeworthy Shutterstock footage from last year's report.

And they gave proper credit:

Thank you section for Unsplash from the Linux Foundation 2021 annual report

Now for next year ... find an editor that knows how to spell photographers, please. And consider Scribus. And make Adobe publish their apps for Linux. Thank you.


  1. Update 07.12.2021 22:00 CET: I had to replace the link to the Linux Foundation 2021 annual report with an archive.org one as they updated the report to fix the typo as per the comment from Melissa Schmidt below. Stable URLs are not a thing, apparently. You can find their new report at https://www.linuxfoundation.org/wp-content/uploads/2021_LF_Annual_Report_120721c.pdf. Unless somebody points out more typos. There is a Last-Modified header in HTTP 1.1. Wordpress, Varnish and Nginx, serving the LF website, all support that. Diff of 2021_LF_Annual_Report_120621a and 2021_LF_Annual_Report_120721c 

  2. 08.12.2021: Geert Uytterhoeven wrote in that he is "geert" on LWN; both are very nice Geerts but different Geerts :-) 

No dog food today - the Linux Foundation annual report

Strategy

The Linux Foundation has published its annual report today. LWN calls it glossy and yeah, boy, it is shiny.

So shiny that people who work in the publishing industry immediately see this has been produced with the Adobe toolchain which - unfortunately - is one of the big software suites not yet available for Linux.

Checking the PDF file metadata reveals the keywords "open source, open standards, open hardware, open data". That is what the Linux Foundation is about. Good stuff.

Linux Foundation annual report 2020 cover

The PDF producer metadata for the annual report PDF has been set to "Linux kernel 0.12.1 for Workgroups" and the PDF creator metadata element to "Sharp Zaurus XR-5000 (Maemo5) Edition". Somebody thought it better to hide the real data and had some tongue-in-cheek ideas. Kudos.

But it would have been nicer to use Open Source software to produce the report, no?

Running strings 2020-Linux-Foundation-Annual-Report_113020.pdf | grep Adobe | wc -l gives us 1229 lines and confirms the suspicion of the toolchain.

A stale /Title (Annual Report 2020) /Producer (macOS Version 10.15.7 \(Build 19H15\) Quartz PDFContext) has been forgotten in the document to tell us about the platform.
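
If you want to check a PDF yourself, pdfinfo from the poppler-utils package prints these fields; the grep is just a convenience:

pdfinfo 2020-Linux-Foundation-Annual-Report_113020.pdf | grep -E '^(Title|Creator|Producer)'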

So, ladies and gentlemen, the Linux Foundation 2020 annual report has been produced on a Mac.

Running Adobe Creative Cloud on macOS Catalina 10.15.7.

Which is proprietary software. Its kernel (and some userland pieces) are based on BSD. Not Linux.


The image on the front page also struck me as a bit odd ... using a ballpoint pen on the laptop screen?

Unbranded laptop. Unbranded cup in the foreground.

Kid in the background not paying attention to his tablet.

All of that cries stock image so loud it hurts.

Google currently finds ~560 uses of the picture and any editorial use nicely tells us that it is © Dragana Gordic / Shutterstock.

The image is "Smiling mom working at home with her child on the sofa while writing an email. Young woman working from home, while in quarantine isolation during the Covid-19 health crisis".

See the Daily Mail for a wonderful example of the working mum in context. I hope, if her laptop had been powered on, it would have run Linux. I mean, what else would still run on an old white MacBook with an Intel "Core 2 Duo" processor from 2008?

Daily Mail screenshot of the same stock image used


Git shared hosting quirk

IT

Show https://github.com/torvalds/linux/blob/b4061a10fc29010a610ff2b5b20160d7335e69bf/drivers/hid/hid-samsung.c#L113-L118 to a friend.

Oops, eh? Yep, Linux has been backdoored.

Well, or not.

Konstantin Ryabitsev explains it nicely in a cgit mailing list email:

It is common for git hosting environments to configure all forks of the same repo to use an "object storage" repository. For example, this is what allows git.kernel.org's 600+ forks of linux.git to take up only 10GB on disk as opposed to 800GB. One of the side-effects of this setup is that any object in the shared repository can be accessed from any of the forks, which periodically confuses people into believing that something terrible has happened.

The hack was discussed on Github in Dec 2018 when it was discovered. I forgot about it again but Konstantin's mail brought the memory back and I think it deserves more attention.
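
You can reproduce the underlying git mechanism locally; --shared clones set up the objects/info/alternates indirection that such hosting setups build on. The repository names here are made up, obviously:

git init a
git -C a commit --allow-empty -m "base"
git clone --shared a b                  # b reads objects straight out of a/.git/objects
git -C a commit --allow-empty -m "secret, exists only in a"
SHA=$(git -C a rev-parse HEAD)
git -C b show $SHA                      # b happily shows a commit none of its own refs contain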

I'm sure putting some illegal content into a fork and sending a made up "blob" URL to law enforcement would go quite far. Good luck explaining the issue. "Yes this is my repo" but "no, no that's not my data" ... "yes, it is my repo but not my data" ... "no we don't want that data either, really" ... "but, but there is nothing we can do, we host on github...1".

Updates

05.11.20 Nat Friedman (CEO of Github) promises

[..] we are going to make it much more obvious when you're viewing an orphaned commit.

For context: The source code of Github (the product) had been leaked as a commit to Github's own DMCA repository. The repository has turned into a playground since Github took down the hosting for youtube-dl as the result of a DMCA complaint.

14.11.20 Seems Github now adds a warning to commits that are not in a reachable branch:

Github commit warning message

28.01.22 Github currently fails to show the warning message, so https://github.com/torvalds/linux/tree/8bcab0346d4fcf21b97046eb44db8cf37ddd6da0 is making the rounds now:

Fake commit to Linus Torvalds' kernel repo updating the README file and claiming to have deleted Linux


  1. Actually there is something you can do. Making a repo private takes it out of the shared "object storage". You can make it public again afterwards. Seems to work at least for now. 

OpenSSH taking minutes to become available, booting takes half an hour ... because your server waits for a few bytes of randomness

Linux

So, your machine now needs minutes to boot before you can ssh in where it used to be seconds before the Debian Buster update?

Problem

Linux 3.17 (2014-10-05) learnt a new syscall getrandom() that, well, gets bytes from the entropy pool. Glibc learnt about this with 2.25 (2017-02-05) and, two tries and four years after the kernel, OpenSSL used that functionality from release 1.1.1 (2018-09-11). OpenSSH implemented this natively for the 7.8 release (2018-08-24) as well.

Now the getrandom() syscall will block1 if the kernel can't provide enough entropy. And that's frequently the case during boot. Especially with VMs that have no input devices or IO jitter to source the pseudo random number generator from.
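
You can watch the kernel's entropy estimate yourself; a persistently low value during early boot goes hand in hand with blocked getrandom() callers (a rough indicator, not an exact threshold):

cat /proc/sys/kernel/random/entropy_avail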

First seen in the wild January 2017

I vividly remember not seeing my Alpine Linux VMs back on the net after the Alpine 3.5 upgrade. That was basically the same issue.

Systemd. Yeah.

Systemd makes this behaviour worse, see issues #4271, #4513 and #10621.
Basically as of now the entropy file saved as /var/lib/systemd/random-seed will not - drumroll - add entropy to the random pool when played back during boot. Actually it will. It will just not be accounted for. So Linux doesn't know. And continues blocking getrandom(). This is obviously different from SysVinit times2 when /var/lib/urandom/random-seed (that you still have lying around on updated systems) made sure the system carried enough entropy over reboot to continue working right after enough of the system was booted.

#4167 is a re-opened discussion about systemd eating randomness early at boot (hashmaps in PID 0...). Some Debian folks participate in the recent discussion and it is worth reading if you want to learn about the mess that booting a Linux system has become.

While we're talking systemd ... #10676 also means systems will use RDRAND in the future despite Ted Ts'o's warning on RDRAND [Archive.org mirror and mirrored locally as 130905_Ted_Tso_on_RDRAND.pdf, 205kB as Google+ will be discontinued in April 2019].
Update: RDRAND doesn't return random data on pre-Ryzen AMD CPUs (AMD CPU family <23) as per systemd bug #11810. It will always be 0xFFFFFFFFFFFFFFFF (2^64-1). This is a known issue since 2014, see kernel bug #85991.

Debian

Debian is seeing the same issue working up towards the Buster release, e.g. Bug #912087.

The typical issue is:

[    4.428797] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: data=ordered
[ 130.970863] random: crng init done

with delays up to tens of minutes on systems with very little external random sources.

This is what it should look like:

[    1.616819] random: fast init done
[    2.299314] random: crng init done

Check dmesg | grep -E "(rng|random)" to see how your systems are doing.

If this is not fully solved before the Buster release, I hope some of the below can end up in the release notes3.

Solutions

You need to get entropy into the random pool earlier at boot. There are many ways to achieve this and - currently - all require action by the system administrator.

Kernel boot parameter

From kernel 4.19 (Debian Buster currently runs 4.18 [Update: but will be getting 4.19 before release according to Ben via Mika]) you can set RANDOM_TRUST_CPU at compile time or random.trust_cpu=on on the kernel command line. This will make recent Intel / AMD systems trust RDRAND and fill the entropy pool with it. See the warning from Ted Ts'o linked above.
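
To set the parameter persistently on a Debian-style system with GRUB, a minimal sketch (keep your existing options in the line, then regenerate the config):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet random.trust_cpu=on"
# then apply it
update-grub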

Update: Since Linux kernel build 4.19.20-1 CONFIG_RANDOM_TRUST_CPU has been enabled by default in Debian.

Using a TPM

The Trusted Platform Module has an embedded random number generator that can be used. Of course you need to have one on your board for this to be useful. It's a hardware device.

Load the tpm-rng module (ideally from initrd) or compile it into the kernel (config HW_RANDOM_TPM). Now, the kernel does not "trust" the TPM RNG by default, so you need to add

rng_core.default_quality=1000

to the kernel command line. 1000 means "trust", 0 means "don't use". So you can choose any value in between that works for you depending on how much you consider your TPM to be unbugged.
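
A quick sanity check that the kernel actually picked up the TPM as an RNG source (same sysfs files as shown in the VirtIO section below):

modprobe tpm-rng
cat /sys/devices/virtual/misc/hw_random/rng_available   # should now list tpm-rng
cat /sys/devices/virtual/misc/hw_random/rng_current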

VirtIO (KVM, QEMU, ...)

For Virtual Machines (VMs) you can forward entropy from the host (that should be running longer than the VMs and have enough entropy) via virtio_rng.

So on the host, you do:

kvm ... -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,bus=pci.0,addr=0x7

and within the VM newer kernels should automatically load virtio_rng and use that.

You can confirm with dmesg as per above.

Or check:

# cat /sys/devices/virtual/misc/hw_random/rng_available
virtio_rng.0
# cat /sys/devices/virtual/misc/hw_random/rng_current
virtio_rng.0

Patching systemd

The Fedora bugtracker has a bash / python script that replaces the systemd random seeding with a (better) working one. The script can also serve as a good starting point if you need to script your own solution, e.g. for reading from an entropy provider available within your (secure) network.

Chaoskey

The wonderful Keith Packard and Bdale Garbee have developed a USB dongle, ChaosKey, that supplies entropy to the kernel. Hard- and software are open source.

Jitterentropy_RNG

Kernel 4.2 introduced jitterentropy_rng which will use the jitter in CPU timings to generate randomness.

modprobe jitterentropy_rng

This apparently needs a userspace daemon though (read: design mistake) so

apt install jitterentropy-rngd (available from Buster/testing).

The current version 1.0.8-3 installs nicely on Stretch. dpkg -i is your friend.

But - drumroll - that daemon doesn't seem to use the kernel module at all.

That's where I stopped looking at that solution. At least for now. There are extensive docs if you want to dig into this yourself.

Update: The Linux kernel 5.3 will have an updated jitterentropy_rng as per Commit 4d2fa8b44. This is based on the upstream version 2.1.2 and should be worth another look.

Haveged

apt install haveged

Haveged is a user-space daemon that gathers entropy through the timing jitter any CPU has. It will only run "late" in boot but may still get your openssh back online within seconds and not minutes.

It is also - to the best of my knowledge - not verified at all regarding the quality of the randomness it generates. The haveged design and history page provides an interesting read and I wouldn't recommend haveged if you have alternatives. If you have none, haveged is a wonderful solution though as it works reliably. And unverified entropy is better than no entropy. Just forget this is 2018, er, 2019 :-).

early-rng-init-tools

Thorsten Glaser has posted newly developed early-rng-init-tools in a debian-devel thread. He provides packages at http://fish.mirbsd.org/~tg/Debs/dists/sid/wtf/Pkgs/early-rng-init-tools/ .

First he deserves kudos for naming a tool for what it does. This makes it much more easily discoverable than the trend to name things after girlfriends, pets or anime characters. The implementation hooks into the early boot via initrd integration and carries over a seed generated during the previous shutdown. This and some other implementation details are not ideal and there has been quite extensive scrutiny but none that discovered serious issues. Early-rng-init-tools look like a good option for non-RDRAND (~CONFIG_RANDOM_TRUST_CPU) capable platforms.

Linus to the rescue

Luckily, at the end of September, Linus Torvalds was fed up with the entropy starvation issue and the non-conclusive discussions about (mostly) who's at fault and ... started coding.

With the kernel 5.4 release on 25.11.2019 his patch has made it into mainline. He created a try_to_generate_entropy function that uses CPU jitter to generate seed entropy for the PRNG early in boot.

In the merge commit Linus explains:

This is admittedly partly "for discussion". We need to have a way forward for the boot time deadlocks where user space ends up waiting for more entropy, but no entropy is forthcoming because the system is entirely idle just waiting for something to happen.

While this was triggered by what is arguably a user space bug with GDM/gnome-session asking for secure randomness during early boot, when they didn't even need any such truly secure thing, the issue ends up being that our "getrandom()" interface is prone to that kind of confusion, because people don't think very hard about whether they want to block for sufficient amounts of entropy.

The approach here-in is to decide to not just passively wait for entropy to happen, but to start actively collecting it if it is missing. This is not necessarily always possible, but if the architecture has a CPU cycle counter, there is a fair amount of noise in the exact timings of reasonably complex loads.

We may end up tweaking the load and the entropy estimates, but this should be at least a reasonable starting point.

So once this kernel is available in your distribution, you should be safe from entropy starvation at boot on any platform that has hardware timers (I haven't encountered one that does not in the last decade).

Ted Ts'o reviewed the approach and was fine with it, and Ahmed Dawish did some testing of the quality of the randomness generated and that seems fine, too.

Updates

14.01.2019

Stefan Fritsch, the Apache2 maintainer in Debian, OpenBSD developer and a former Debian security team member, stumbled over the systemd issue preventing Apache's libssl from initializing at boot in Debian bug #916690 - apache2: getrandom call blocks on first startup, systemd kills with timeout.

The bug has been retitled "document getrandom changes causing entropy starvation" hinting at not fixing the underlying issue but documenting it in the Debian Buster release notes.

Unhappy with this "minimal compromise" Stefan wrote a comprehensive summary of the current situation to the Debian-devel mailing list. The discussion spans over December 2018 and January 2019 and mostly iterated what had been written above already. The discussion has - so far - not reached any consensus. There is still the "systemd stance" (not our problem, fix the daemons) and the "ssh/apache stance" (fix systemd, credit entropy).

The "document in release notes" minimal compromise was brought up again and Stefan warned of the problems this would create for Buster users:

> I'd prefer having this documented in the release notes:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=916690
> with possible solutions like installing haveged, configuring virtio-rng,
> etc. depending on the situation.

That would be an extremely user-unfriendly "solution" and would lead to 
countless hours of debugging and useless bug reports.

This is exactly why I wrote this blog entry and keep it updated. We need to either fix this or tell everybody we can reach before upgrading to Buster. Otherwise this will lead to huge amounts of systems dead on the network after what looked like a successful upgrade.

Some interesting tidbits were mentioned within the thread:

Raphael Hertzog fixed the issue for Kali Linux by installing haveged by default. Michael Prokop did the same for the grml distribution within its December 2018 release.

Ben Hutchings pointed to an interesting thread on the debian-release mailing list he kicked off in May 2018. Multiple people summarized the options and the fact that there is no "general solution that is both correct and easy" at the time.

Sam Hartman identified Debian Buster VMs running under VMware as an issue, because that hypervisor does not provide virtio-rng. So Debian VMs wouldn't boot into ssh availability within a reasonable time. This is an issue for real world use cases albeit running a proprietary product as the hypervisor.

16.01.2019

Daniel Kahn Gillmor wrote in to explain a risk for VMs starting right after the boot of the host OS:

If that pool is used by the guest to generate long-term secrets because it appears to be well-initialized, that could be a serious problem.
(e.g. "Mining your P's and Q's" by Heninger et al -- https://factorable.net/weakkeys12.extended.pdf)
I've just opened https://bugs.launchpad.net/qemu/+bug/1811758 to report a way to improve that situation in qemu by default.

So ... make sure that your host OS has access to a hardware random number generator or at least carries over its random seed properly across reboots. You could also delay VM starts until the crng on the host Linux is fully initialized (random: crng init done).
Otherwise your VMs may get insufficiently seeded pseudo-random numbers and won't even know.
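
If you script VM startup, a crude sketch of such a delay on the host is polling the kernel log for the message quoted above:

until dmesg | grep -q 'random: crng init done'; do sleep 1; done
# now it is safe(r) to start the VMs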

12.03.2019

Stefan Fritsch revived the thread on debian-devel again and got a few more interesting tidbits out of the developer community:

Ben Hutchings has enabled CONFIG_RANDOM_TRUST_CPU for Debian kernels from 4.19.20-1 so the problem is somewhat contained for recent CPU AMD64 systems (RDRAND capable) in Buster.

Thorsten Glaser developed early-rng-init-tools which combine a few options to try and get entropy carried across boot and generated early during boot. He received some scrutiny as can be expected but none that would discourage me from using it. He explains that this is for early boot and thus has initrd integration. It complements safer randomness sources or haveged.

16.04.2019

The Debian installer for Buster is running into the same problem now as indicated in the release notes for RC1. Bug #923675 has details. Essentially choose-mirror waits several minutes for entropy when used with https mirrors.

08.05.2019

The RDRAND use introduced in systemd to bypass the kernel random number generator during boot falls for an AMD pre-Ryzen bug, as RDRAND on these systems doesn't return random data after a suspend / resume cycle. Added an update note to the systemd section above.

03.06.2019

Bastian Blank reports the issue is affecting Debian cloud images now as well as cloud-init generates ssh keys during boot.

10.07.2019

Added the update of jitterentropy_rng to a version based on upstream v2.1.2 into the Jitterentropy section above.

16.09.2019

The Linux Kernel Mailing List (LKML) is re-iterating the entropy starvation issue and the unwillingness of systemd to fix its usage of randomness in early boot. Ahmed S. Darwish has reported the issue leading to ext4 reproducibly blocking boot with kernel 5.3-rc8. There are a few patches floated and the whole discussion is worth reading albeit non-conclusive as of now.

Ted Ts'o says "I really very strongly believe that the idea of making getrandom(2) non-blocking and to blindly assume that we can load up the buffer with 'best efforts' randomness to be a terrible, terrible idea that is going to cause major security problems that we will potentially regret very badly. Linus Torvalds believes I am an incompetent systems designer." in this email.

In case you needed a teaser to really start reading the thread! Linus Torvalds also mentions the issue (and a primer on what "never break userspace" means) in the Linux kernel 5.3 release notes.

18.09.2019

... and Martin Steigerwald kindly noticed that I update this blog post with the relevant discussions I come across as this entropy starvation mess continues to haunt us.

25.11.2019

Added the "Linus to the rescue" section after the Linux kernel 5.4 has been released.

02.04.2020

I ran into the same issue on a Gentoo system today. Luckily OpenRC handled this gracefully but it delayed booting: syslog-ng actually hangs the boot for some time ... waiting for entropy. Argh. The Gentoo forums thread on the topic clearly listed the options:

  1. Make syslog-ng depend on haveged by adding rc_syslog_ng_need="haveged" to /etc/rc.conf (and obviously having haveged installed)
  2. Re-compiling the kernel with CONFIG_RANDOM_TRUST_CPU=y where that is an option

  1. it will return with EAGAIN in the GRND_NONBLOCK use case. The blocking behaviour when lacking entropy is a security measure as per Bug #1559 of Google's Project Zero

  2. Update 18.12.2018: "SysVinit times" ::= "The times when most Linux distros used SysVinit over other init systems." So Wheezy and previous for Debian. Some people objected to the statement, so I added this footnote as a clarification. See the discussion in the comments below. 

  3. there is no Buster branch in the release notes repository yet (17.12.2018). Update: I wrote a section for the release notes 06.05.2019 and Paul Gevers amended and committed that. So when users of affected systems read the release notes before upgrading to Buster they will hopefully not be surprised (and worried) by the long boot delays. 

Unbalanced volume (channels) on headset audio

Linux

I use a headset to make phone calls and when they are mono the great awesomeness of the Linux audio stack seems to change volume only on the active channel (e.g. the right channel). So when I listen to some music (stereo) afterwards the channels are not balanced anymore and one side is louder than the other. And this persists thanks to saving the preferences across reboots. Duh.

As usual, checking Pulseaudio (pavucontrol) is useless: it shows balanced channels.

But checking Alsa (alsamixer) revealed the issue and alsamixer can fix this, too:

Step 1: run alsamixer in a terminal and select your headset after pressing [F6]:

Alsamixer: Select sound card

Step 2: Select the headset audio output with [<-] and [->] cursor keys:

Alsamixer: Unbalanced channels on the headset (left / right channel loudness are different)

Step 3: Press [b] to balance the left and right channels:

Alsamixer: Balanced channels (left / right channel loudness) again

Step 4: Press [Esc] to exit alsamixer which will keep the changed settings (... great choice of key, [q] raises the left channel's loudness ...).

Step 5: Save this setting by running sudo alsactl store which should update /var/lib/alsa/asound.state with the fixed settings so they persist across reboots.

Step 6: Enjoy music again :-).

If you need to script this, amixer is the tool to use, e.g. amixer -c 1 set "Headset" 36.
1 is the card number which you see in alsamixer, "Headset" is the channel name, also from alsamixer (it can contain blanks, hence the quotes around the name) and 36 is the desired loudness level for both channels. See the screenshots above for where to find the data, or run aplay -l to see the cards in your PC and amixer -c 1 (with your card id) to see the channels that (virtual, USB) sound card has.
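
So a sketch of a one-shot fix with the example values from above (card 1, control "Headset", level 36 - adjust to your hardware):

amixer -c 1 set "Headset" 36,36   # set left,right explicitly to the same level
sudo alsactl store                # persist to /var/lib/alsa/asound.state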

Updating the Dell XPS 13 9360 Thunderbolt firmware to get VGA and HDMI working

IT

Last year I bought the wonderful Dell XPS 13 9360 as it is certified to work with Ubuntu Linux and is just all around an awesome device. Dell made me buy the Windows version as only that got a 1 TB NVMe-SSD option. Linux apparently is only worthy of the 512GB and below models. What product manager comes up with such a stupid idea? Are SKUs that precious? Anyways ... so I bought a Windows version and that got wiped with a Linux install immediately as that was and is its intended purpose.

Dell DA200 USB-C to HDMI/VGA/Ethernet/USB 3.0 adapter

I purchased a DA200 with the system which is Dell's USB-C to anything (HDMI/VGA/Ethernet/USB 3.0) dongle. When I got the laptop the Ethernet port and USB 3.0 via the DA200 were working right out of the box. The VGA and HDMI ports were detected by Ubuntu but there was no way to get connected screens working. They stayed black.

The device was shipped with Thunderbolt firmware NVM18 and we were told rather quickly by Dell that this would be fixed with an update. And lo and behold, Dell published firmware version NVM21 right in time for Christmas 2016. Now, unfortunately, while their BIOS updates are Windows / DOS executables that can be just shoved at the Dell UEFI flash updater - and thus the main BIOS can be updated from any OS, including Linux, without any hassle - the Intel-provided Thunderbolt update needs Windows to be installed. Or, well, there is a convoluted way to compile an out-of-tree Linux kernel module, download and compile a few sets of software and do it via Linux. That description read so lengthy that I didn't even try it. Additionally there seems to have been no progress at all in getting this more mainline in the last three months, so I chose the cheap route and installed Windows 10 on a USB thumb drive1.

This is done via the (unfortunately Windows only) Win2USB software (the free version is sufficient).

Update: There's a new bash script windows2usb that looks good and should work to get you a bootable Windows USB thumb drive in Linux. WinUSB (which stopped working somewhere in the Win10 era) has also been forked and updated into WoeUSB. And there is WinToUSBLinux, yet another shell script. Give them a try.

Once Windows has rebooted often enough to finish its own installation, you can work with the USB thumb drive install as with any Windows 10. Nice.

Dell TPM 1.2 to 2.0 firmware update

Put all the files you downloaded from Dell to update your XPS 13 into a directory on the USB thumb drive. That way Windows does not need to have any network connectivity.

I first updated the TPM 1.2 firmware to a TPM 2.0 version (DellTpm2.0_Fw1.3.2.8_V1_64.exe at the time of writing this blog entry). Now this is quite hilarious as the Windows installer doesn't do anything but put a UEFI firmware update into the EFI partition that runs on reboot. Duh. You do need to manually clear the TPM in the BIOS' security settings section (there's a clear checkbox) to be able to program new firmware onto it.
Thunderbolt firmware upgrade progress bar

Thunderbolt firmware upgrade successful

Now back in Windows, install the Thunderbolt drivers (Chipset_Driver_J95RR_WN32_16.2.55.275_A01.exe at the time of writing this) and then run Intel_TBT3_FW_UPDATE_NVM21_0THFT_A00_3.21.00.008.exe, which is the NVM21 Thunderbolt firmware update (or a later version).

Reboot again (into Linux if you want to) and (drumroll) the VGA and HDMI ports are working. Awesome.

An update log can be found on the USB thumb drive at Dell\UpdatePackage\Log\Intel_TBT3_FW_UPDATE_NVM21_0THFT_A00_3.log:

*** Dell Thunderbolt firmware update started on 4/6/2017 at 12:56:56***
Command: C:\Install\Intel_TBT3_FW_UPDATE_NVM21_0THFT_A00_3.21.00.008.exe 

Starting FW Update....
***TBT GPIO Power is Turning On:  No Dock or DockInfo.
***TBT GPIO power is turned on.

Thunderbolt Firmware Update SUCCEEDED
TBT Items Registry creation is Success at \SOFTWARE\Dell\ManageableUpdatePackage\Thunderbolt Controller:
User selected OK for reboot
System TBT NVM Current Version:BCD:00000018: New Version:BCD:00000021

Exit Code = 0 (Success) 
***Thunderbolt Firmware flash finished at 4/6/2017 at 13:00:23***

If Windows has added its boot loader entry into your UEFI options, you can easily remove that again with the Dell UEFI BIOS or efibootmgr from within Linux.
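
For the efibootmgr route, a minimal sketch; the Boot0003 entry number is made up, check the list output for the actual Windows Boot Manager entry first:

efibootmgr                    # find the "Windows Boot Manager" entry, e.g. Boot0003
sudo efibootmgr -b 0003 -B    # -b selects the entry, -B deletes it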

The whole process took me less than 30 minutes. And most of that was creating the Windows USB thumb drive. I'll keep that for future updates until Intel and Dell have sorted out the Thunderbolt update process in Linux.

Updates:

18.05.17: Intel has published a large patchset on LKML to enable Thunderbolt security levels (thus preventing DMA attacks) and get NVM firmware upgrades mainlined. Yeah!

02.05.18: Added a link to the windows2usb bash script that should remove the need to create a bootable Windows USB thumb drive with a Windows only software.

11.06.18: Added a link to WoeUSB which is currently packaged for Ubuntu in a PPA.

08.04.20: Added a link to WinToUSBLinux. A recently released shell script to create a bootable Windows USB stick from Linux.


  1. If you go the Linux route please post a minimal image somewhere (kernel, initrd, squashfs or FAT16/32 raw image) and put a link into a comment below this blog post. Thanks. 

Mozilla Firefox and Thunderbird Menu font sizes

Open Source

The font size Mozilla chose for Firefox and Thunderbird menus looks awfully large on Netbook screens. It wastes space and is visually at odds with reasonably sized content. And for some weird reason you can set the content font and size via the menu but not the font and size for the drop-down menus themselves.

As the "Theme Font & Size Changer" Add-On doesn't work reliably and phones home way too often (showing a nag screen), I dug back into how to do this "manually". Probably a decade after I fixed this the first time...

You need to create the file ~/.mozilla/firefox/*/chrome/userChrome.css with * being your profile directory (<random_number>.default usually) and you most probably have to create the chrome directory first.

The same for Thunderbird resides in ~/.thunderbird/*/chrome/userChrome.css. Here again the chrome directory will most probably need to be created first.
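
A small sketch to create the chrome directories and empty files in one go; the *.default* profile glob is an assumption, adjust it to your actual profile directory names:

for p in ~/.mozilla/firefox/*.default* ~/.thunderbird/*.default*; do
    mkdir -p "$p/chrome" && touch "$p/chrome/userChrome.css"
done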


/* Global UI font */
* { font-size: 10pt !important;
  font-family: Ubuntu !important;
}
 

needs to go into these files for Firefox or Thunderbird respectively. The curly braces are important. So copy & paste correctly. Symlinks or hardlinks are fine if those files do not need to differ between your web browser and your email client.

Restart Firefox and/or Thunderbird to see the effect.

Obviously you can choose any other font and font size in the snippet above to suit your taste and requirements.

If you are massively space-confined and don't mind a quite ugly UI, check out the Littlefox Add-on. Ugly, but optimal use of the minimal screen real estate on very small screens.

IPv6: Getting rid of the dreaded "Neighbour table overflow"

Internet

IPv6 is hard. It has many, many design flaws and the decade where we all ignored it and hoped for the better hasn't helped. So we're now all in on the protocol. Yeah.

One of the design principles is that it tries to be rather stateless in the configuration and "plug and play". But just like P&P in the good old ISA times, it just doesn't always work.

One of the common issues is that Linux bridges in IPv6 just don't work well with the router announcements that try to discover and configure the IPv6 neighbourhood.

The result is a sheer endless amount of "kernel: Neighbour table overflow." lines flooding dmesg and syslog (or journal for those on SystemD).

Oct  4 16:26:06 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: __ratelimit: 1832 callbacks suppressed
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: __ratelimit: 887 callbacks suppressed
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:23 host-260 kernel: __ratelimit: 803 callbacks suppressed

Grep -c(ount) on syslog

Lovely. Welcome to a storage DOS waiting to happen.

So first tip: cat /proc/sys/kernel/printk_ratelimit shows you the number of seconds the ratelimiter suppresses messages for. The default is 5 seconds and you can adjust it to more reasonable values in case you get heavily flooded like in the example above. Notice that this will mean your dmesg becomes rather useless as the kernel is not very selective about which messages to suppress.
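
For example, to have the ratelimiter suppress for 30 seconds instead of 5 (runtime only; set kernel.printk_ratelimit in /etc/sysctl.conf to persist):

echo 30 > /proc/sys/kernel/printk_ratelimit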

Now when you google "Neighbour table overflow", you'll find thousands of pages suggesting to increase the arp / lladdr caches and garbage collection (gc) times like so:

# Set ARP cache garbage collection interval
net.ipv4.neigh.default.gc_interval = 3600
net.ipv6.neigh.default.gc_interval = 3600

# Set ARP cache entry timeout
net.ipv4.neigh.default.gc_stale_time = 3600
net.ipv6.neigh.default.gc_stale_time = 3600

# Setup cache threshold for ARP
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096

# And the same for IPv6
net.ipv6.neigh.default.gc_thresh1 = 1024
net.ipv6.neigh.default.gc_thresh2 = 2048
net.ipv6.neigh.default.gc_thresh3 = 4096

That helps if and only if you really have 500+ IPv6 neighbours. Unless you have a badly segmented network or run in a university lab, you don't.

Now ... you may be seeing messages like "kernel: vmbr0: Multicast hash table maximum of 512 reached, disabling snooping: eth0" or "kernel: vmbr0: Multicast hash table chain limit reached: eth0" in your dmesg / syslog / journal.

That hints at what is really happening here: the bridge confused the link-local router negotiation and so you get endless ff02:: neighbour routing entries added to your caches until they overflow. So increasing the caches as in the sysctl entries above is basically pasting a band-aid over the problem.

ip route show cache table all will show you the tables. With all entries. See if you have too many ff02:: neighbours in there. If so, you should try to change your /etc/network/interfaces on Debian / Ubuntu to something similar to this:

iface vmbr0 inet6 static
   address 2a02:0100:1:1::500:1
   netmask 64
   gateway 2a02:0100:1:1::1
   post-up echo 2048 > /sys/class/net/vmbr0/bridge/hash_max
   post-up echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
   post-up echo 0 > /proc/sys/net/ipv6/conf/vmbr0/accept_ra

This obviously assumes your bridge is called vmbr0.

Red Hat/CentOS users will need to adjust the config spread throughout multiple files in /etc/sysconfig/network-scripts. The ifup-ipv6 script is a good one to look at and amend.

The increase of the hash_max entry makes your bridge survive the initial storm of (useless) router solicitations.
multicast_snooping is usually off when routing but you may need it to make sure your VMs on the bridge can be reached.
Finally we make sure the bridge does not accept router announcements. Because that is what the host system should handle.

Sometimes you may need to throw in a static route or two to reach the VMs. P&P, you remember ... ip -6 neigh add nud permanent proxy <VM:IPv6:goes::here> dev vmbr0 is your friend. Unfortunately the antidote for the dreaded "Neighbour table overflow" depends on the specific cause. So you'll have to poke around a bit. tcpdump -i eth0 -v ip6 will show you what is on the wire and tcpdump -i vmbr0 -v ip6 what's visible on the bridge.

How much memory does a process use on Linux?

Linux

Sometimes the easy questions are the hardest to answer.

Memory can mean RSS (Resident Set Size) which is the memory of a process held in RAM (so not swapped out). That does include shared memory allocations. So if you add two RSS numbers, you're probably wrong already. Still this is usually the number we look for in most practical investigations.

Then there is VSZ (Virtual Set siZe), also called SIZE. The VSZ includes the code, data and stack segments a process has allocated. And again that will count some shared address space. So usually bash will have a VSZ that's higher than its RSS.

man ps will also tell you:

   The SIZE and RSS fields don't count some parts of a process including the page tables, kernel stack, struct
   thread_info, and struct task_struct.  This is usually at least 20 KiB of memory that is always resident.

In most (if not all) practical scenarios that difference won't matter. If it did, you'd be using valgrind to look into the memory usage of your application in minute detail. Wouldn't you?

If you want to have an as-detailed-as-possible look into the memory allocations of a process, pmap <pid> will give you the information. The summary at the end is a gross over-estimation of the total memory a process has allocated as it counts all mapped memory (and may still be wrong due to de-duplication and other factors). But that number may well serve as an upper bound if you need something like that.
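
For example, the upper bound for your current shell is the last line of the pmap output:

pmap $$ | tail -n 1   # $$ is the shell's own pid; prints " total <n>K"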

For running processes


ps -eo 'pid user rss:8 size:8 cmd' --sort 'rss'
 

will give you a nice sorted list of processes and their RSS and VSZ (SIZE) in kiB (old school kB...).

For short running commands GNU time (not the bash built-in time command; apt install time on Debian-based systems) has a nice capability that's not widely known yet:


/usr/bin/time -f "RSS: %MkiB" <command>
 

will tell you the maximum RSS size the <command> has had during its lifetime. That's better than top or watch ps and trying to spot the process.

Firefox opens directories in RawTherapee ... or a tale of colliding assumptions

Internet

After installing RawTherapee, a raw photo to jpg image converter, on my Kubuntu 13.04 desktop, Mozilla Firefox started to open the Downloads directory in ... RawTherapee. Gosh. Now that's what you'd want to browse a downloads folder with.

Update-alternatives did not help as RawTherapee is not managed by the Debian alternatives system. Hm, xdg-open still works as intended and opens dolphin.

Asking Google turns up many non-working solution proposals. Hm.

Firefox: Context menu (right click) -> Open Containing Folder

But searching the collective memory is not completely useless. There is a solved bug report on the RawTherapee bug tracker that explains the issue. The .desktop file RawTherapee installs contains inode/directory as a supported mime type. Well, that's technically correct but makes no sense in the way mime associations are used. Thankfully the bug reporter was able to convince the developer of this and get this fixed for future RawTherapee releases.

So option 1: edit /usr/share/applications/rawtherapee.desktop and remove the inode/directory entry from the MimeType= line. Then regenerate the various mimetype caches.
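
Option 1 as commands; the sed expression assumes inode/directory; appears verbatim in the MimeType= line (check the file first), and update-desktop-database ships with desktop-file-utils:

sudo sed -i 's|inode/directory;||' /usr/share/applications/rawtherapee.desktop
sudo update-desktop-database /usr/share/applications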

This still raises the question of why Firefox does not use the inode/directory entry from dolphin.desktop. Also, the next update from Debian/Ubuntu will probably overwrite the fixed rawtherapee.desktop file again, as the patch from the RawTherapee developer will only be included in new releases (and these may or may not be backported to Debian/Ubuntu). So you'd need to copy the fixed rawtherapee.desktop file to ~/.local/share/applications/ or the like.

Option 2: find out what the real issue is...

Continue reading "Firefox opens directories in RawTherapee ... or a tale of colliding assumptions"