Entries tagged as updated
Your software stores are a bad idea
There is significant effort involved in making sure your apt or dnf commands always have a consistent set of servers to talk to.
That's why running separate "software stores" is a bad idea:
That way more admins need to learn how to run high availability services for dubious business opportunities to "later" monetize services. Services that nobody cares to pay for and thus opportunities that never materialize. But every company wants to find that out again. Because if Apple could do it, why shouldn't Canonical be able to do it? $$$!1!!
So, can't update Firefox on Ubuntu 22.04 right now.
At least there is https://status.snapcraft.io/:
So I can check back tomorrow if I can update my web browser ...
Update
09.11.2022 12:15 CET
The Snapcraft distribution system seems quite flaky, this is the downtime log:
Bonus points for the bad client side implementation:
dl@laptop:~$ sudo snap refresh
All snaps up to date.
# ^this is a lie, just close Firefox and ...
dl@laptop:~$ sudo snap refresh
firefox 106.0.5-1 from Mozilla** refreshed
Postscriptum
GNOME and KDE join forces to sink another 100 .. 200k USD into the void:
https://github.com/PlaintextGroup/oss-virtual-incubator/[..]/proposals/flathub-linux-app-store.md
This is an application for funding from Schmidt Futures, which is one of the investment (as in philanthropic) funds from Eric Schmidt, former Google CEO.
The application text is worth reading. Very entertaining.
Robert McQueen (GNOME, Flathub) wrote on 07.03.2023 that the PlaintextGroup/Schmidt Futures application was denied for 2023.
Get Youtube Channel ID from username
Youtube has a really nice RSS feature that is extremely well hidden.
If you postfix a Channel ID to
https://www.youtube.com/feeds/videos.xml?channel_id=<id goes here>
you get a really nice Atom 1.0 (~RSS) feed for your feedreader.
Unfortunately the Channel ID is hard to find while you are navigating Youtube with usernames in the URL.
E.g. https://www.youtube.com/c/TED is TED's channel, full of interesting content worth watching (and some assorted horse toppings, of course).
But you have to read a lot of ugly HTML / JSON in that page to find and combine
https://www.youtube.com/feeds/videos.xml?channel_id=UCAuUUnT6oDeKwE6v1NGQxug
which is the related RSS feed.
Jeff Keeling wrote a simple Youtube RSS Extractor that does well if you have a ../playlist?... or a .../channel/... URL but it will (currently) fail on user name channels or Youtube landing pages.
So how do we get the Channel ID for a Youtube user we are interested in following?
Youtube has a great API but that is gated by API keys even for the simplest calls (these came only with v3 of the API, and the previous version has been deprecated since 2015)1:
{
"error": {
"code": 403,
"message": "The request is missing a valid API key.",
"errors": [
{
"message": "The request is missing a valid API key.",
"domain": "global",
"reason": "forbidden"
}
],
"status": "PERMISSION_DENIED"
}
}
Luckily we can throw the same (example) user name DebConfVideos at curl and grep:
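A minimal sketch of such a pipeline (the /user/ URL form is an assumption, any HTML page of the channel should do):
curl -s "https://www.youtube.com/user/DebConfVideos/videos" | grep -o '"channelId":"[^"]*"' | head -n 1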
"channelId":"UC7SbfAPZf8SMvAxp8t51qtQ"
So https://www.youtube.com/feeds/videos.xml?channel_id=UC7SbfAPZf8SMvAxp8t51qtQ is the RSS feed for DebConfVideos.
We can use individual Youtube video URLs as well. With the hack above, it'll also find us the Channel ID from a Youtube video URL:
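Again a sketch, with the video URL as a placeholder:
curl -s "https://www.youtube.com/watch?v=<video id>" | grep -o '"channelId":"[^"]*"' | head -n 1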
Now, some user pages may have multiple valid RSS feeds because they contain multiple channels.
Remember the TED page from above? Well run:
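A sketch of such a loop, reusing the curl/grep approach from above (the title is fetched from each channel's Atom feed):
for id in $(curl -s "https://www.youtube.com/c/TED" | grep -o '"channelId":"[^"]*"' | cut -d'"' -f4 | sort -u); do
  echo -n "$id : "
  curl -s "https://www.youtube.com/feeds/videos.xml?channel_id=$id" | grep -o '<title>[^<]*</title>' | head -n 1
done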
This will iterate through the Channel IDs found and show you the titles. That way you can assess which one you want to add to your feedreader.
You probably want the last Channel ID listed above, the non-selective "TED" one. And that's the one from the example above.
Update
02.06.2022: smpl wrote in and has the much better solution for the most frequent use cases:
You can also get a feed directly with a username: https://www.youtube.com/feeds/videos.xml?user=<username>
The one I use most is the one for playlists (if creators remember to use them):
https://www.youtube.com/feeds/videos.xml?playlist_id=<playlist id>
For the common case you don't even need the channel ID that way.
But it is also conveniently given in a <yt:channelId> tag (or the topmost <id> tag) within the Atom XML document.
Thanks, smpl!
-
Actually it is even more complicated as some channels, like our DebConfVideos example, will only get you an incomplete result, cf. this StackOverflow entry. I.e. the forUsername iterator may not even work and the "best practice" seems to be mucking around with the search call. ↩
Leveling the playing field for non-native speakers
Updates
24.01.2022: What I love about the community is the playful creativity that inspires a game like Wordle and that in turn inspires others to create fun tools around it:
Robert Reichel has reverse engineered the Wordle application, so in case you want to play tomorrow's word today .. you can. Or have that one guess "Genius" solution experience.
JP Fosterson created a Wordle helper that is very much the Python version of my grep-foo above. In case you play regularly and can use a hand.
And Tom Lockwood wrote a Wordle solver also in Python. He blogged about it and ... is pondering to rewrite things in Rust:
I’ve decided to explore Rust for this, and so far what was taking 1GB of RAM in Python is taking, literally 1MB in Rust!
Welcome to 2022.
01.02.2022: OMG. Wordle has been bought by the New York Times "for a price in the low seven figures" (Source).
Joey Rees-Hill put it well in The Death of Wordle:
Today’s Web is dominated by platforms. The average Web user will spend most of their time on large platforms such as Instagram, Facebook, Twitter, TikTok, Google Drive/Docs, YouTube, Netflix, Spotify, Gmail, and Google Calendar, along with sites operated by large publishers such as The New York Times or The Washington Post. [..]
The Web wasn’t always this way. I’m not old enough to remember this, but things weren’t always so centralized. Web users might run their own small website, and certainly would visit a good variety of smaller sites. With the increasing availability of internet access, the Web has become incredibly commercialized, with a handful of companies concentrating Web activity on their own properties.
Wordle was a small site that gained popularity despite not being part of a corporate platform. It was wonderful to see an independent site gain attention for being simple and fun. Wordle was refreshingly free of attention-manipulating dark patterns and pushy monetization. That’s why it’s a shame to see it absorbed, to inevitably become just another feature of one large media company’s portfolio.
Still kudos to Josh Wardle, a Million Pounds for Wordle. Well done!
It was fun while it lasted. Let's see what the next Wordle will be. This one has just been absorbed into the borg collective.
Gradual improvements at the Linux Foundation
After last year's blunder with trying to hide the Adobe toolchain and using hilarious stock photos, the Linux Foundation did much better in their 2021 annual report1 published Dec. 6, 2021.
Still they are using the Adobe toolchain (InDesign, Acrobat PDF) and my fellow Kernel2 Developer Geert was quick to point that out as the first comment to the LWN note on the publication:
I think it is important to call the Linux Foundation (LF) out again and again. Adobe is a Silver member of the LF and they can motivate them to publish their applications for Linux. And if that is not an option, there are Free alternatives like Scribus that could well use the exposure and funds of LF to help catch up to the market leading product, Adobe InDesign.
Personally, as a photographer, I am very happy they used stock images from Unsplash to illustrate the 2021 edition over the cringeworthy Shutterstock footage from last year's report.
And they gave proper credit:
Now for next year ... find an editor that knows how to spell photographers, please. And consider Scribus. And make Adobe publish their apps for Linux. Thank you.
-
Update 07.12.2021 22:00 CET: I had to replace the link to the Linux Foundation 2021 annual report with an archive.org one as they updated the report to fix the typo as per the comment from Melissa Schmidt below. Stable URLs are not a thing, apparently. You can find their new report at https://www.linuxfoundation.org/wp-content/uploads/2021_LF_Annual_Report_120721c.pdf. Unless somebody points out more typos. There is a Last-Modified Header in HTTP 1.1. Wordpress, Varnish and Nginx, serving the LF website, all support that. ↩
-
08.12.2021: Geert Uytterhoeven wrote in that he is "geert" on LWN, both are very nice Geerts but different Geerts :-) ↩
The Stallman wars
So, 2021 isn't bad enough yet, but don't despair, people are working to fix that:
Welcome to the Stallman wars
Team Cancel: https://rms-open-letter.github.io/ (repo)
Team Support: https://rms-support-letter.github.io/ (repo)
Final stats are:
Team Cancel: 3019 signers from 1415 individual commit authors
Team Support: 6853 signers from 5418 individual commit authors
Git shortlog (Top 10):
rms_cancel.git (Last update: 2021-08-16 00:11:15 (UTC))
  1230  Neil McGovern
   251  Joan Touzet
    99  Elana Hashman
    73  Molly de Blanc
    36  Shauna
    19  Juke
    18  Stefano Zacchiroli
    17  Alexey Mirages
    16  Devin Halladay
    14  Nader Jafari
rms_support.git (Last update: 2021-09-29 07:14:39 (UTC))
  1821  shenlebantongying
  1585  nukeop
  1560  Ivanq
  1057  Victor
   880  Job Bautista
   123  nekonee
   101  Victor Gridnevsky
    41  Patrick Spek
    25  Borys Kabakov
    17  KIM Taeyeob
(data as of 2021-10-01)
Technical info:
Signers are counted from their "Signed / Individuals" sections. Commits are counted with git shortlog -s.
Team Cancel also has organizational signatures with Mozilla, Suse and X.Org being among the notable signatories.
The 16 original signers of the Cancel petition are added in their count.
Neil McGovern, Juke and shenlebantongying need .mailmap support as they have committed with different names.
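For reference, a .mailmap file maps variant author names and addresses onto one canonical identity; a purely hypothetical example (these addresses are made up):
# .mailmap - canonical identity on the left, variant as found in commits on the right
Neil McGovern <neil@example.org> <nmcgovern@example.com>
Neil McGovern <neil@example.org> Neil <neil@example.org>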
Further reading:
- An introductory Ars Technica article in case you wonder what this is all about.
- Debian vote mailing-list: March 2021, April 2021
- NYT Magazine on the history of cancel culture
- Ed Santos' commentary and analysis
12.04.2021 Statements from the accused
18.04.2021 Debian General Resolution
The Debian General Resolution (GR) vote of the developers has concluded to not issue a public statement at all, see https://www.debian.org/vote/2021/vote_002#outcome for the results.
It is better to keep quiet and seem ignorant than to speak up and remove all doubt.
See Quote Investigator for the many people that rephrased these words over the centuries. They still need to be recalled more often as too many people in the FLOSS community have forgotten about that wisdom...
01.10.2021 Final stats
It seems enough dust has settled on this unfortunate episode of mob activity now. Hence I stopped the cronjob that updated the stats above regularly. Team Support has kept adding signatures all the time while Team Cancel gave up very soon after the FSF decided to stand with Mr. Stallman. So this battle was decided within two months. The stamina of the accused and determined support from some dissenting web devs trumped the orchestrated outrage of well known community figures and their publicity power this time. But history teaches us that this does not mean the war is over. There will be a next opportunity to call to arms. And people will call. Unfortunately.
01.11.2024 Team Cancel is opening a new round; Team Support responds with exposing the author of "The Stallman report"
I hate to be right. Three years later than the above:
An anonymous member of team Cancel has published https://stallman-report.org/ [local pdf mirror, 504kB] to "justify our unqualified condemnation of Richard Stallman". It contains a detailed collection of quotes that are used to allege supporting (sexual) misconduct. The demand is again that Mr. Stallman "step[s] down from all positions at the FSF and the GNU project". Addressing him: "the scope and extent of your misconduct disqualifies you from formal positions of power within our community indefinitely".
Team Support has not issued a rebuttal (yet?) but has instead identified the anonymous author as Drew "sircmpwn" DeVault, a gifted software developer, but also a vocal and controversial figure in the Open Source / Free Software space. Ironically quite similar to Richard "rms" Stallman. Their piece is published at https://dmpwn.info/ [local pdf mirror, 929kB]. They also allege a proximity of Mr. DeVault to questionable "Lolita" anime preferences and societal positions to disqualify him.
Git shared hosting quirk
Show https://github.com/torvalds/linux/blob/b4061a10fc29010a610ff2b5b20160d7335e69bf/drivers/hid/hid-samsung.c#L113-L118 to a friend.
Oops 'eh? Yep, Linux has been backdoored.
Well, or not.
Konstantin Ryabitsev explains it nicely in a cgit mailing list email:
It is common for git hosting environments to configure all forks of the same repo to use an "object storage" repository. For example, this is what allows git.kernel.org's 600+ forks of linux.git to take up only 10GB on disk as opposed to 800GB. One of the side-effects of this setup is that any object in the shared repository can be accessed from any of the forks, which periodically confuses people into believing that something terrible has happened.
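One standard git mechanism for such shared object storage is the alternates file; a minimal local sketch (URLs and paths are examples, GitHub's actual setup may differ):
git clone --bare https://example.com/upstream.git shared-objects.git
git clone https://example.com/fork.git fork
echo "$(pwd)/shared-objects.git/objects" > fork/.git/objects/info/alternates
# any object in shared-objects.git is now resolvable from within "fork",
# even if it was never part of the fork's own history:
git -C fork cat-file -p <sha1 of an upstream-only object>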
The hack was discussed on Github in Dec 2018 when it was discovered. I forgot about it again but Konstantin's mail brought the memory back and I think it deserves more attention.
I'm sure putting some illegal content into a fork and sending a made up "blob" URL to law enforcement would go quite far. Good luck explaining the issue. "Yes this is my repo" but "no, no that's not my data" ... "yes, it is my repo but not my data" ... "no we don't want that data either, really" ... "but, but there is nothing we can do, we host on github...1".
Updates
05.11.20 Nat Friedman (CEO of Github) promises
[..] we are going to make it much more obvious when you're viewing an orphaned commit.
For context: The source code of Github (the product) had been leaked as a commit to Github's own DMCA repository. The repository has turned into a playground since Github took down the hosting for youtube-dl as the result of a DMCA complaint.
14.11.20 Seems Github now adds a warning to commits that are not in a reachable branch
28.01.22 Github currently fails to show the warning message, so https://github.com/torvalds/linux/tree/8bcab0346d4fcf21b97046eb44db8cf37ddd6da0 is making rounds now:
-
Actually there is something you can do. Making a repo private takes it out of the shared "object storage". You can make it public again afterwards. Seems to work at least for now. ↩
Getting rid of the Google cookie consent popup
If you clear your browser cookies regularly (as you should do), Google will annoy you with a full screen cookie consent overlay these days. And - of course - there is no "no tracking consent, technically required cookies only" button. You may log in to Google to set your preference. Yeah, I'm sure this is totally following the intent of the EU Directive 2009/136/EC (the "cookie law").
Unfortunately none of the big "anti-annoyances" filter lists seem to have picked that one up yet but the friendly folks from the Computerbase Forum [German] to the rescue. User "Sepp Depp" has created the base of the following filter set that WFM (updated by "mru62" since):
Add this to your uBlock Origin "My filters" tab:
! Google - remove cookie-consent-popup and restore scroll functionality
! Updated 26.12.2022
google.*###lb
google.*##html:style(overflow: visible !important;)
! And for Youtube
! Updated 26.12.2022
www.youtube.com##.opened
www.youtube.com###dialog
www.youtube.com##.no-transition.yt-upsell-dialog-renderer.style-scope
Upgrading Limesurvey with (near) zero downtime
Limesurvey is an online survey tool. It is very powerful and commonly used in academic environments because it is Free Software (GPLv2+) and allows for local installations that protect the data of participants and make it possible to comply with data protection regulations. This also means there are typically no load-balanced multi-server scenarios with HA databases, but simple VMs where Limesurvey runs and needs upgrading in place.
There's an LTS branch (currently 3.x) and a stable branch (currently 4.x). There's also a 2.06 LTS branch that is restricted to paying customers. The main developers behind Limesurvey offer many services from template design to custom development to support to hosting ("Cloud", "Limesurvey Pro"). Unfortunately they also charge for easy updates called "ComfortUpdate" (currently 39€ for three months) and the manual process is made a bit cumbersome to make the "ComfortUpdate" offer more attractive.
Due to Limesurvey being an old code base and UI elements not being clearly separated, most serious use cases will end up patching files and symlinking logos around template directories. That conflicts a bit with the opaque "ComfortUpdate" process where you push a button and then magic happens. Or you have downtime and a recovery case while surveys are running.
If you do not intend to use the "ComfortUpdate" offering, you can prevent Limesurvey from connecting to http://comfortupdate.limesurvey.org daily by adding the 'updatable' => false stanza shown below to limesurvey/application/config/config.php:
[...]
// Use the following config variable to set modified optional settings copied from config-defaults.php
// debug: Set this to 1 if you are looking for errors. If you still get no errors after enabling this
// then please check your error-logs - either in your hosting provider admin panel or in some /logs directory
// on your webspace.
// LimeSurvey developers: Set this to 2 to additionally display STRICT PHP error messages and get full access to standard templates
'debug'=>0,
'debugsql'=>0, // Set this to 1 to enanble sql logging, only active when debug = 2
// Mysql database engine (INNODB|MYISAM):
'mysqlEngine' => 'MYISAM'
, // Update default LimeSurvey config here
'updatable' => false,
)
);
The leading comma before the // Update default LimeSurvey config here comment is placed like that in the current default limesurvey config.php, don't let yourself get confused. Every item in a PHP array needs a comma separator, and that comma may just as well start the next line.
The basic principle of low risk, near-zero downtime, in-place upgrades is:
- Create a diff between the current release and the target release
- Inspect the diff
- Make backups of the application webroot
- Patch a copy of the application in-place
- (optional) stop the web server
- Make a backup of the production database
- Move the patched application to the production webroot
- (if 5) Start the webserver
- Upgrade the database (if needed)
- Check the application
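A minimal sketch of steps 1 to 4, assuming the current and the target release trees have been unpacked next to each other (all paths and file names are examples):
diff -ruN limesurvey-current/ limesurvey-target/ > upgrade.diff      # step 1
less upgrade.diff                                                    # step 2
tar czf limesurvey-webroot-$(date +%F).tar.gz /var/www/limesurvey    # step 3
cp -a /var/www/limesurvey /var/www/limesurvey.staged
patch -d /var/www/limesurvey.staged -p1 < upgrade.diff               # step 4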
So, in detail:
Continue reading "Upgrading Limesurvey with (near) zero downtime"Fixing the Nextcloud menu to show more than eight application icons
I have been late to adopt an on-premise cloud solution as the security of Owncloud a few years ago wasn't so stellar (cf. my comment from 2013 in Encryption files ... for synchronization across the Internet). But the follow-up product Nextcloud has matured quite nicely and we use it for collaboration both in the company and in FLOSS related work at multiple nonprofit organizations.
There is a very annoying "feature" in Nextcloud though that the designers think menu items for apps at the top need to be limited to eight or fewer to prevent information overload in the header. The whole issue discussion is worth reading as it is an archetypical example of design prevalence vs. user choice.
And of course designers think they are right. That's a feature of the trade.
And because they know better there is no user configurable option to extend those 8 items to maybe 12 or so, which would prevent the annoying overflow menu we are seeing with 10 applications in use:
Luckily code can be changed and there are many comments floating around the Internet to change const minAppsDesktop = 8. In this case it is slightly complicated by the fact that the javascript code is distributed in compressed form (aka "minified") as core/js/dist/main.js and you probably don't want to build the whole beast locally to change one constant.
Basically
const resizeMenu = () => {
const appList = $('#appmenu li')
const rightHeaderWidth = $('.header-right').outerWidth()
const headerWidth = $('header').outerWidth()
const usePercentualAppMenuLimit = 0.33
const minAppsDesktop = 8
let availableWidth = headerWidth - $('#nextcloud').outerWidth() - (rightHeaderWidth > 210 ? rightHeaderWidth : 210)
const isMobile = $(window).width() < breakpoint_mobile_width
if (!isMobile) {
availableWidth = availableWidth * usePercentualAppMenuLimit
}
let appCount = Math.floor((availableWidth / $(appList).width()))
if (isMobile && appCount > minAppsDesktop) {
appCount = minAppsDesktop
}
if (!isMobile && appCount < minAppsDesktop) {
appCount = minAppsDesktop
}
// show at least 2 apps in the popover
if (appList.length - 1 - appCount >= 1) {
appCount--
}
$('#more-apps a').removeClass('active')
let lastShownApp
for (let k = 0; k < appList.length - 1; k++) {
const name = $(appList[k]).data('id')
if (k < appCount) {
$(appList[k]).removeClass('hidden')
$('#apps li[data-id=' + name + ']').addClass('in-header')
lastShownApp = appList[k]
} else {
$(appList[k]).addClass('hidden')
$('#apps li[data-id=' + name + ']').removeClass('in-header')
// move active app to last position if it is active
if (appCount > 0 && $(appList[k]).children('a').hasClass('active')) {
$(lastShownApp).addClass('hidden')
$('#apps li[data-id=' + $(lastShownApp).data('id') + ']').removeClass('in-header')
$(appList[k]).removeClass('hidden')
$('#apps li[data-id=' + name + ']').addClass('in-header')
}
}
}
// show/hide more apps icon
if ($('#apps li:not(.in-header)').length === 0) {
$('#more-apps').hide()
$('#navigation').hide()
} else {
$('#more-apps').show()
}
}
gets compressed during build time to become part of one 15,000+ character line. The relevant portion reads:
Well, we can still patch that, can we?
Continue reading "Fixing the Nextcloud menu to show more than eight application icons"Cleaning a broken GnuPG (gpg) key
I've long said that the main tools in the Open Source security space, OpenSSL and GnuPG (gpg), are broken and only a complete re-write will solve this. And that is still pending as nobody came forward with the funding. It's not a sexy topic, so it has to get really bad before it'll get better.
Gpg has a UI that is close to useless. That won't substantially change with more bolted-on improvements.
Now Robert J. Hansen and Daniel Kahn Gillmor had somebody add ~50k signatures (read 1, 2, 3, 4 for the g{l}ory details) to their keys and - oops - they say that breaks gpg.
But does it?
I downloaded Robert J. Hansen's key off the SKS-Keyserver network.
It's a nice 45MB file when de-ascii-armored (gpg --dearmor broken_key.asc ; mv broken_key.asc.gpg broken_key.gpg).
Now a friendly:
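(the /usr/bin/time -v wrapper below is an assumption; the gpg invocation itself is the one reported in the timing output further down)
/usr/bin/time -v gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit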
pub rsa3072/0x1DCBDC01B44427C7
erzeugt: 2015-07-16 verfällt: niemals Nutzung: SC [created: 2015-07-16 expires: never usage: SC]
Vertrauen: unbekannt Gültigkeit: unbekannt [trust: unknown validity: unknown]
sub ed25519/0xA83CAE94D3DC3873
erzeugt: 2017-04-05 verfällt: niemals Nutzung: S [created: 2017-04-05 expires: never usage: S]
sub cv25519/0xAA24CC81B8AED08B
erzeugt: 2017-04-05 verfällt: niemals Nutzung: E [created: 2017-04-05 expires: never usage: E]
sub rsa3072/0xDC0F82625FA6AADE
erzeugt: 2015-07-16 verfällt: niemals Nutzung: E [created: 2015-07-16 expires: never usage: E]
[ unbekannt ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unbekannt ] (2) Robert J. Hansen <rob@enigmail.net>
[ unbekannt ] (3) Robert J. Hansen <rob@hansen.engineering>
User-ID "Robert J. Hansen <rjh@sixdemonbag.org>": 49705 Signaturen entfernt [49705 signatures removed]
User-ID "Robert J. Hansen <rob@enigmail.net>": 49704 Signaturen entfernt [49704 signatures removed]
User-ID "Robert J. Hansen <rob@hansen.engineering>": 49701 Signaturen entfernt [49701 signatures removed]
pub rsa3072/0x1DCBDC01B44427C7
erzeugt: 2015-07-16 verfällt: niemals Nutzung: SC
Vertrauen: unbekannt Gültigkeit: unbekannt
sub ed25519/0xA83CAE94D3DC3873
erzeugt: 2017-04-05 verfällt: niemals Nutzung: S
sub cv25519/0xAA24CC81B8AED08B
erzeugt: 2017-04-05 verfällt: niemals Nutzung: E
sub rsa3072/0xDC0F82625FA6AADE
erzeugt: 2015-07-16 verfällt: niemals Nutzung: E
[ unbekannt ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unbekannt ] (2) Robert J. Hansen <rob@enigmail.net>
[ unbekannt ] (3) Robert J. Hansen <rob@hansen.engineering>
Command being timed: "gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit"
User time (seconds): 3911.14
System time (seconds): 2442.87
Percent of CPU this job got: 99%
Elapsed (wall clock) time (h:mm:ss or m:ss): 1:45:56
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 107660
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 1
Minor (reclaiming a frame) page faults: 26630
Voluntary context switches: 43
Involuntary context switches: 59439
Swaps: 0
File system inputs: 112
File system outputs: 48
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
And the result is a nicely usable 3835 byte file of the clean public key. If you supply a keyring instead of --no-default-keyring it will also keep the non-self signatures that are useful for you (as you apparently know the signing party).
So it does not break gpg. It does break things that call gpg at runtime and not asynchronously. I heard Enigmail is affected, quelle surprise.
Now the main problem here is the runtime. 1h45min is just ridiculous. As Filippo Valsorda puts it:
Someone added a few thousand entries to a list that lets anyone append to it. GnuPG, software supposed to defeat state actors, suddenly takes minutes to process entries. How big is that list you ask? 17 MiB. Not GiB, 17 MiB. Like a large picture. https://dev.gnupg.org/T4592
If I were a gpg / SKS keyserver developer, I'd
- speed this up so the edit-key run above completes in less than 10 s (just getting rid of the lseek/read dance and deferring all time-based decisions should get close)
- (ideally) make the drop-sig import-filter syntax useful (date-ranges, non-reciprocal signatures, ...)
- clean affected keys on the SKS keyservers (needs coordination of sysops, drop servers from unreachable people)
- (ideally) use the opportunity to clean all keyserver filesystem and the message board over pgp key servers keys, too
- only accept new keys and new signatures on keys extending the strong set (rather small change to the existing codebase)
That way another key can only be added to the keyserver network if it contains at least one signature from a previously known strong-set key. Attacking the keyserver network would become at least non-trivial. And the web-of-trust thing may make sense again.
Updates
09.07.2019
GnuPG 2.2.17 has been released with another set of quickly bolted together fixes:
* gpg: Ignore all key-signatures received from keyservers. This change is required to mitigate a DoS due to keys flooded with faked key-signatures. The old behaviour can be achieved by adding keyserver-options no-self-sigs-only,no-import-clean to your gpg.conf. [#4607]
* gpg: If an imported keyblocks is too large to be stored in the keybox (pubring.kbx) do not error out but fallback to an import using the options "self-sigs-only,import-clean". [#4591]
* gpg: New command --locate-external-key which can be used to refresh keys from the Web Key Directory or via other methods configured with --auto-key-locate.
* gpg: New import option "self-sigs-only".
* gpg: In --auto-key-retrieve prefer WKD over keyservers. [#4595]
* dirmngr: Support the "openpgpkey" subdomain feature from draft-koch-openpgp-webkey-service-07. [#4590]
* dirmngr: Add an exception for the "openpgpkey" subdomain to the CSRF protection. [#4603]
* dirmngr: Fix endless loop due to http errors 503 and 504. [#4600]
* dirmngr: Fix TLS bug during redirection of HKP requests. [#4566]
* gpgconf: Fix a race condition when killing components. [#4577]
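If you depend on third-party key-signatures from keyservers, the old behaviour can be brought back via gpg.conf as described in the release notes above (a minimal sketch):
# ~/.gnupg/gpg.conf
keyserver-options no-self-sigs-only,no-import-clean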
Bug T4607 shows that these changes are all but well thought-out. They introduce artificial limits, like 64kB for WKD-distributed keys or 5MB for local signature imports (Bug T4591) which weaken the web-of-trust further.
I recommend not running gpg 2.2.17 in production environments without extensive testing as these limits and the unverified network traffic may bite you. Do validate your upgrade with valid and broken keys that have segments (packet groups) surpassing the above mentioned limits. You may be surprised what gpg does. On the upside: you can now refresh keys (sans signatures) via WKD. So if your buddies still believe in limiting their subkey validities, you can more easily update them bypassing the SKS keyserver network. NB: I have not tested that functionality. So test before deploying.
10.08.2019
Christopher Wellons (skeeto) has released his pgp-poisoner tool. It is a go program that can add thousands of malicious signatures to a GNUpg key per second. He comments "[pgp-poisoner is] proof that such attacks are very easy to pull off. It doesn't take a nation-state actor to break the PGP ecosystem, just one person and couple evenings studying RFC 4880. This system is not robust." He also hints at the next likely attack vector, public subkeys can be bound to a primary key of choice.
Apple Time Machine backups on Debian 9 (Stretch)
Update from 28.04.2022: Do not use the packages below any more. There is Netatalk 3.1.13 out with fixes for multiple remote code execution (RCE) bugs. Use packages from recent Debian again, they have been updated.
Netatalk 3.1.12 has been released which fixes an 18 year old RCE bug. The Medium write up on CVE-2018-1160 by Jacob Baines is quite an entertaining read.
The full release notes for 3.1.12 are unfortunately not even half as interesting.
Be sure to read the original blog post if you are new to Netatalk3 on Debian Jessie or Stretch! You'll get nowhere if you install the .debs below and don't know about the upgrade path from 2.2.x which is still in the Debian archive. So RTFA.
For Debian Buster (Debian 10) we'll have Samba 4.9 which has learnt (from Samba 4.8.0 onwards) how to emulate a SMB time machine share. I'll make a write-up on how to install this once Buster stabilizes. This luckily means there will be no need to continue supporting Netatalk in normal production environments. So I guess bug #690227 won't see a proper fix anymore. Waiting out problems helps at times, too :/.
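For reference, a minimal sketch of what such a Samba share looks like (share name and path are examples; check the vfs_fruit documentation of your Samba version before relying on it):
[timemachine]
path = /srv/timemachine
vfs objects = catia fruit streams_xattr
fruit:time machine = yes
read only = no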
Update instructions and downloads:
Continue reading "Apple Time Machine backups on Debian 9 (Stretch)"Xfce 4.12 not suspending on laptop-lid close
Xfce 4.12 as default in Ubuntu/Xubuntu 18.04 LTS did not suspend a laptop after closing the lid. In fact running
xfce4-power-manager --quit ; xfce4-power-manager --no-daemon --debug
showed that xfce4 wasn't seeing a laptop lid close event at all.
To the contrary acpi_listen nicely finds button/lid LID close and button/lid LID open events when folding the screen and opening it up again.
As so often the wonderful docs / community of Arch Linux to the rescue. This forum thread from 2015 received the correct answer in 2017:
Xfce4 basically recognizes systemd and thus disables its built-in power-management options for handling these "button events" (but doesn't tell you so in the config UI for power-manager). Systemd is configured to handle these events by default (/etc/systemd/logind.conf has HandleLidSwitch=suspend but for unknown reasons decides not to honor that).
So best is to teach Xfce4 to handle the events again as in pre-systemd times:
xfconf-query -c xfce4-power-manager -p /xfce4-power-manager/logind-handle-lid-switch -s false
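To verify the change took effect (the query should now print false):
xfconf-query -c xfce4-power-manager -p /xfce4-power-manager/logind-handle-lid-switch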
Now the UI options will work again as intended and the laptop suspends on lid close and resumes on lid open.
Update:
07.01.19: Changed XFCE -> Xfce as per Corsac's suggestion in the comments below. Thank you!
Background info:
The name "XFCE" was originally an acronym for "XForms Common Environment", but since that time it has been rewritten twice and no longer uses the XForms toolkit. The name survived, but it is no longer capitalized as "XFCE", but rather as "Xfce". The developers' current stance is that the initialism no longer stands for anything specific. After noting this, the FAQ on the Xfce Wiki comments "(suggestion: X Freakin' Cool Environment)".
(quoted from Wikipedia's Xfce article also found in the Xfce docs FAQ).
Openssh taking minutes to become available, booting takes half an hour ... because your server waits for a few bytes of randomness
So, your machine now needs minutes to boot before you can ssh in where it used to be seconds before the Debian Buster update?
Problem
Linux 3.17 (2014-10-05) learnt a new syscall getrandom() that, well, gets bytes from the entropy pool. Glibc learnt about this with 2.25 (2017-02-05) and two tries and four years after the kernel, OpenSSL used that functionality from release 1.1.1 (2018-09-11). OpenSSH implemented this natively for the 7.8 release (2018-08-24) as well.
Now the getrandom() syscall will block1 if the kernel can't provide enough entropy. And that's frequently the case during boot. Esp. with VMs that have no input devices or IO jitter to source the pseudo random number generator from.
First seen in the wild January 2017
I vividly remember not seeing my Alpine Linux VMs back on the net after the Alpine 3.5 upgrade. That was basically the same issue.
Systemd. Yeah.
Systemd makes this behaviour worse, see issues #4271, #4513 and #10621.
Basically as of now the entropy file saved as /var/lib/systemd/random-seed will not - drumroll - add entropy to the random pool when played back during boot. Actually it will. It will just not be accounted for. So Linux doesn't know. And continues blocking getrandom(). This is obviously different from SysVinit times2 when /var/lib/urandom/random-seed (that you still have lying around on updated systems) made sure the system carried enough entropy over reboot to continue working right after enough of the system was booted.
#4167 is a re-opened discussion about systemd eating randomness early at boot (hashmaps in PID 0...). Some Debian folks participate in the recent discussion and it is worth reading if you want to learn about the mess that booting a Linux system has become.
While we're talking systemd ... #10676 also means systems will use RDRAND in the future despite Ted Ts'o's warning on RDRAND [Archive.org mirror and mirrored locally as 130905_Ted_Tso_on_RDRAND.pdf, 205kB as Google+ will be discontinued in April 2019].
Update: RDRAND doesn't return random data on pre-Ryzen AMD CPUs (AMD CPU family <23) as per systemd bug #11810. It will always be 0xFFFFFFFFFFFFFFFF (2^64-1). This has been a known issue since 2014, see kernel bug #85991.
Debian
Debian is seeing the same issue working up towards the Buster release, e.g. Bug #912087.
The typical issue is:
[ 130.970863] random: crng init done
with delays up to tens of minutes on systems with very little external random sources.
This is what it should look like:
[ 2.299314] random: crng init done
Check dmesg | grep -E "(rng|random)" to see how your systems are doing.
If this is not fully solved before the Buster release, I hope some of the below can end up in the release notes3.
Solutions
You need to get entropy into the random pool earlier at boot. There are many ways to achieve this and - currently - all require action by the system administrator.
Kernel boot parameter
From kernel 4.19 (Debian Buster currently runs 4.18 [Update: but will be getting 4.19 before release according to Ben via Mika]) you can set RANDOM_TRUST_CPU at compile time or random.trust_cpu=on on the kernel command line. This will make recent Intel / AMD systems trust RDRAND and fill the entropy pool with it. See the warning from Ted Ts'o linked above.
Update: Since Linux kernel build 4.19.20-1 CONFIG_RANDOM_TRUST_CPU has been enabled by default in Debian.
Using a TPM
The Trusted Platform Module has an embedded random number generator that can be used. Of course you need to have one on your board for this to be useful. It's a hardware device.
Load the tpm-rng module (ideally from initrd) or compile it into the kernel (config HW_RANDOM_TPM).
Now, the kernel does not "trust" the TPM RNG by default, so you need to add rng_core.default_quality=1000 to the kernel command line. 1000 means "trust", 0 means "don't use". So you can choose any value in between that works for you depending on how much you consider your TPM to be unbugged.
VirtIO (KVM, QEMU, ...)
For Virtual Machines (VMs) you can forward entropy from the host (that should be running longer than the VMs and have enough entropy) via virtio_rng.
So on the host, you do:
kvm ... -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,bus=pci.0,addr=0x7
and within the VM newer kernels should automatically load virtio_rng and use that. You can confirm with dmesg as per above.
Or check:
# cat /sys/devices/virtual/misc/hw_random/rng_current
virtio_rng.0
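If the VM is managed through libvirt rather than a raw kvm/qemu command line, the same forwarding can be configured in the domain XML (a sketch):
<rng model='virtio'>
  <backend model='random'>/dev/urandom</backend>
</rng>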
Patching systemd
The Fedora bugtracker has a bash / python script that replaces the systemd rnd seeding with a (better) working one. The script can also serve as a good starting point if you need to script your own solution, e.g. for reading from an entropy provider available within your (secure) network.
Chaoskey
The wonderful Keith Packard and Bdale Garbee have developed a USB dongle, ChaosKey, that supplies entropy to the kernel. Hard- and software are open source.
Jitterentropy_RNG
Kernel 4.2 introduced jitterentropy_rng which will use the jitter in CPU timings to generate randomness.
modprobe jitterentropy_rng
This apparently needs a userspace daemon though (read: design mistake) so
apt install jitterentropy-rngd
(available from Buster/testing).
The current version 1.0.8-3 installs nicely on Stretch. dpkg -i is your friend.
But - drumroll - that daemon doesn't seem to use the kernel module at all.
That's where I stopped looking at that solution. At least for now. There are extensive docs if you want to dig into this yourself.
Update: The Linux kernel 5.3 will have an updated jitterentropy_rng as per Commit 4d2fa8b44. This is based on the upstream version 2.1.2 and should be worth another look.
Haveged
apt install haveged
Haveged is a user-space daemon that gathers entropy through the timing jitter any CPU has. It will only run "late" in boot but may still get your openssh back online within seconds and not minutes.
It is also - to the best of my knowledge - not verified at all regarding the quality of randomness it generates.
The haveged design and history page provides an interesting read and I wouldn't recommend haveged if you have alternatives. If you have none, haveged is a wonderful solution though as it works reliably. And unverified entropy is better than no entropy. Just forget this is 2019.
early-rng-init-tools
Thorsten Glaser has posted newly developed early-rng-init-tools in a debian-devel thread. He provides packages at http://fish.mirbsd.org/~tg/Debs/dists/sid/wtf/Pkgs/early-rng-init-tools/ .
First he deserves kudos for naming a tool for what it does. This makes it much more easily discoverable than the trend to name things after girlfriends, pets or anime characters. The implementation hooks into the early boot via initrd integration and carries over a seed generated during the previous shutdown. This and some other implementation details are not ideal and there has been quite extensive scrutiny but none that discovered serious issues. Early-rng-init-tools look like a good option for non-RDRAND (~CONFIG_RANDOM_TRUST_CPU) capable platforms.
Linus to the rescue
Luckily end of September Linus Torvalds was fed up with the entropy starvation issue and the non-conclusive discussions about (mostly) who's at fault and ... started coding.
With the kernel 5.4 release on 25.11.2019 his patch has made it into mainline. He created a try_to_generate_entropy function that uses CPU jitter to generate seed entropy for the PRNG early in boot.
In the merge commit Linus explains:
This is admittedly partly "for discussion". We need to have a way forward for the boot time deadlocks where user space ends up waiting for more entropy, but no entropy is forthcoming because the system is entirely idle just waiting for something to happen.
While this was triggered by what is arguably a user space bug with GDM/gnome-session asking for secure randomness during early boot, when they didn't even need any such truly secure thing, the issue ends up being that our "getrandom()" interface is prone to that kind of confusion, because people don't think very hard about whether they want to block for sufficient amounts of entropy.
The approach here-in is to decide to not just passively wait for entropy to happen, but to start actively collecting it if it is missing. This is not necessarily always possible, but if the architecture has a CPU cycle counter, there is a fair amount of noise in the exact timings of reasonably complex loads.
We may end up tweaking the load and the entropy estimates, but this should be at least a reasonable starting point.
So once this kernel is available in your distribution, you should be safe from entropy starvation at boot on any platform that has hardware timers (I haven't encountered one that does not in the last decade).
Ted Ts'o reviewed the approach and was fine and Ahmed Dawish did some testing of the quality of randomness generated and that seems fine, too.
Updates
14.01.2019
Stefan Fritsch, the Apache2 maintainer in Debian, OpenBSD developer and a former Debian security team member, stumbled over the systemd issue preventing Apache's libssl from initializing at boot in Debian bug #916690 - apache2: getrandom call blocks on first startup, systemd kills with timeout.
The bug has been retitled "document getrandom changes causing entropy starvation" hinting at not fixing the underlying issue but documenting it in the Debian Buster release notes.
Unhappy with this "minimal compromise" Stefan wrote a comprehensive summary of the current situation to the Debian-devel mailing list. The discussion spans over December 2018 and January 2019 and mostly iterated what had been written above already. The discussion has - so far - not reached any consensus. There is still the "systemd stance" (not our problem, fix the daemons) and the "ssh/apache stance" (fix systemd, credit entropy).
The "document in release notes" minimal compromise was brought up again and Stefan warned of the problems this would create for Buster users:
> I'd prefer having this documented in the release notes:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=916690
> with possible solutions like installing haveged, configuring virtio-rng,
> etc. depending on the situation.
That would be an extremely user-unfriendly "solution" and would lead to countless hours of debugging and useless bug reports.
This is exactly why I wrote this blog entry and keep it updated. We need to either fix this or tell everybody we can reach before upgrading to Buster. Otherwise this will lead to huge amounts of systems dead on the network after what looked like a successful upgrade.
Some interesting tidbits were mentioned within the thread:
Raphael Hertzog fixed the issue for Kali Linux by installing haveged by default. Michael Prokop did the same for the grml distribution within its December 2018 release.
Ben Hutchings pointed to an interesting thread on the debian-release mailing list he kicked off in May 2018. Multiple people summarized the options and the fact that there is no "general solution that is both correct and easy" at the time.
Sam Hartman identified Debian Buster VMs running under VMware as an issue, because that hypervisor does not provide virtio-rng. So Debian VMs wouldn't boot into ssh availability within a reasonable time. This is an issue for real world use cases albeit running a proprietary product as the hypervisor.
16.01.2019
Daniel Kahn Gillmor wrote in to explain a risk for VMs starting right after the boot of the host OS:
If that pool is used by the guest to generate long-term secrets because it appears to be well-initialized, that could be a serious problem.
(e.g. "Mining your P's and Q's" by Heninger et al -- https://factorable.net/weakkeys12.extended.pdf)
I've just opened https://bugs.launchpad.net/qemu/+bug/1811758 to report a way to improve that situation in qemu by default.
So ... make sure that your host OS has access to a hardware random number generator or at least carries over its random seed properly across reboots. You could also delay VM starts until the crng on the host Linux is fully initialized (random: crng init done).
Otherwise your VMs may get insufficiently generated pseudo-random numbers and won't even know.
12.03.2019
Stefan Fritsch revived the thread on debian-devel again and got a few more interesting tidbits out of the developer community:
Ben Hutchings has enabled CONFIG_RANDOM_TRUST_CPU for Debian kernels from 4.19.20-1 so the problem is somewhat contained for recent CPU AMD64 systems (RDRAND capable) in Buster.
Thorsten Glaser developed early-rng-init-tools which combine a few options to try and get entropy carried across boot and generated early during boot. He received some scrutiny as can be expected but none that would discourage me from using it. He explains that this is for early boot and thus has initrd integration. It complements safer randomness sources or haveged.
16.04.2019
The Debian installer for Buster is running into the same problem now as indicated in the release notes for RC1.
Bug #923675 has details. Essentially choose-mirror waits several minutes for entropy when used with https mirrors.
08.05.2019
The RDRAND use introduced in systemd to bypass the kernel random number generator during boot falls for an AMD pre-Ryzen bug as RDRAND on these systems doesn't return random data after a suspend / resume cycle. Added an update note to the systemd section above.
03.06.2019
Bastian Blank reports the issue is affecting Debian cloud images now as well as cloud-init generates ssh keys during boot.
10.07.2019
Added the update of jitterentropy_rng to a version based on upstream v2.1.2 into the Jitterentropy section above.
16.09.2019
The Linux Kernel Mailing List (LKML) is re-iterating the entropy starvation issue and the un-willingness of systemd to fix its usage of randomness in early boot. Ahmed S. Darwish has reported the issue leading to ext4 reproducibly blocking boot with Kernel 5.3-rc8. There are a few patches floated and the whole discussion is worth reading albeit non-conclusive as of now.
Ted Ts'o says "I really very strongly believe that the idea of making getrandom(2) non-blocking and to blindly assume that we can load up the buffer with 'best efforts' randomness to be a terrible, terrible idea that is going to cause major security problems that we will potentially regret very badly. Linus Torvalds believes I am an incompetent systems designer." in this email.
In case you needed a teaser to really start reading the thread! Linus Torvalds also mentions the issue (and a primer on what "never break userspace" means) in the Linux kernel 5.3 release notes.
18.09.2019
... and Martin Steigerwald kindly noticed that I update this blog post with the relevant discussions I come across as this entropy starvation mess continues to haunt us.
25.11.2019
Added the "Linus to the rescue" section after the Linux kernel 5.4 has been released.
02.04.2020
I ran into the same issue on a Gentoo system today. Luckily OpenRC handled this gracefully but it delayed booting: syslog-ng actually hangs the boot for some time ... waiting for entropy. Argh.
The Gentoo forums thread on the topic clearly listed the options:
- Make syslog-ng depend on haveged by adding rc_syslog_ng_need="haveged" to /etc/rc.conf (and obviously having haveged installed)
- Re-compiling the kernel with CONFIG_RANDOM_TRUST_CPU=y where that is an option
-
it will return with EAGAIN in the GRND_NONBLOCK use case. The blocking behaviour when lacking entropy is a security measure as per Bug #1559 of Google's Project Zero. ↩
-
Update 18.12.2018: "SysVinit times" ::= "The times when most Linux distros used SysVinit over other init systems." So Wheezy and previous for Debian. Some people objected to the statement, so I added this footnote as a clarification. See the discussion in the comments below. ↩
-
there is no Buster branch in the release notes repository yet (17.12.2018). Update: I wrote a section for the release notes 06.05.2019 and Paul Gevers amended and committed that. So when users of affected systems read the release notes before upgrading to Buster they will hopefully not be surprised (and worried) by the long boot delays. ↩
Prevent Ubuntu from phoning home
Ubuntu unfortunately has decided again to implement another "phone home" feature, this time transferring your lsb_release information, CPU model and speed (from /proc/cpuinfo), uptime output, most of uname -a and the curl version to an Ubuntu news web-service.
Here is the Launchpad bug report #1637800 introducing this ... web bug.
This thing runs both systemd-timer based (via /lib/systemd/system/motd-news.service and /lib/systemd/system/motd-news.timer) and on request when you log in (via /etc/update-motd.d/50-motd-news).
There has even been a bug filed about the motd advertising HBO's Silicon Valley show.
To prevent this from running (it is enabled by default on Ubuntu 17.04 and may probably propagate down to earlier versions as well), edit /etc/default/motd-news to include
ENABLED=0
so
sed -i "s/ENABLED=1/ENABLED=0/" /etc/default/motd-news # run as root
for your automated installs.
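Optionally you can also stop the systemd timer itself from firing:
systemctl disable --now motd-news.timer   # run as root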
Update:
02.07.2017: Dustin Kirkland responded to a YC "hacker news" mention of his motd spam. He mentions:
You're welcome to propose your own messages for merging, if you have a well formatted, informative message for Ubuntu users.
We'll be happy to review and include them in the future.
What could possibly go wrong?