
Weird times ... or how the New York DEC decided the US presidential elections

Internet, Strategy

November 2024 will be known as the time when the killing of Peanut, a pet squirrel, by the New York State DEC swung the US presidential elections and shaped history forever.

The hundreds of millions of dollars spent on each side, the tireless campaigning by the candidates, the celebrity endorsements ... all made for an open race for months. The investments evened each other out.

But an OnlyFans producer showing people an overreaching, bureaucracy-driven State raiding his home to confiscate a pet squirrel and kill it ... swung enough voters to decide the elections.

That is what we need to understand in times of instant worldwide publication and a mostly attention-driven economy: Human fates, elections, economic cycles and wars can be decided by people killing squirrels.

RIP, Peanut.

P.S.: Trump Media & Technology Group Corp. (DJT) stock is up 30% pre-market.

Your software stores are a bad idea

Internet, IT

There is significant effort involved in making sure your apt or dnf commands always have a consistent set of servers to talk to.

That's why running separate "software stores" is a bad idea:

[Screenshot: Snap software store down]

That way more admins need to learn how to run high-availability services for dubious business opportunities to "later" monetize services. Services that nobody cares to pay for and thus opportunities that never materialize. But every company wants to find that out again. Because if Apple could do it, why shouldn't Canonical be able to do it? $$$!1!!

So, I can't update Firefox on Ubuntu 22.04 right now.

At least there is https://status.snapcraft.io/:

[Screenshot: Snap incident / monitoring status page]

So I can check back tomorrow if I can update my web browser ...

Update

09.11.2022 12:15 CET

The Snapcraft distribution system seems quite flaky; this is the downtime log:

[Screenshot: Log of (frequent) Snapcraft outages]

Bonus points for the bad client side implementation:


dl@laptop:~$ sudo snap refresh
All snaps up to date.

# ^this is a lie, just close Firefox and ...

dl@laptop:~$ sudo snap refresh
firefox 106.0.5-1 from Mozilla** refreshed
 

Postscriptum

GNOME and KDE join forces to sink another 100 .. 200k USD into the void:

https://github.com/PlaintextGroup/oss-virtual-incubator/[..]/proposals/flathub-linux-app-store.md

This is an application for funding from Schmidt Futures, which is one of the investment (as in philanthropic) funds from Eric Schmidt, former Google CEO.

The application text is worth reading. Very entertaining.

Robert McQueen (GNOME, Flathub) wrote on 07.03.2023 that the PlaintextGroup/Schmidt Futures application was denied for 2023.

Get Youtube Channel ID from username

Internet

Youtube has a really nice RSS feature that is extremely well hidden.

If you append a Channel ID to

https://www.youtube.com/feeds/videos.xml?channel_id=<id goes here>

you get a really nice Atom 1.0 (~RSS) feed for your feedreader.

Unfortunately the Channel ID is hard to find while you are navigating Youtube with usernames in the URL.

E.g. https://www.youtube.com/c/TED is TED's channel, full of interesting and watch-worthy content (and some assorted horse toppings, of course).

But you have to read a lot of ugly HTML / JSON in that page to find and combine

https://www.youtube.com/feeds/videos.xml?channel_id=UCAuUUnT6oDeKwE6v1NGQxug

which is the related RSS feed.

Jeff Keeling wrote a simple Youtube RSS Extractor that works well if you have a ../playlist?... or a .../channel/... URL, but it will (currently) fail on username channels or Youtube landing pages.

So how do we get the Channel ID for a Youtube user we are interested in following?

Youtube has a great API, but it is gated behind API keys even for the simplest calls (the key requirement came only with v3 of the API, but the previous version has been deprecated since 2015)1:

dl@laptop:~$ curl 'https://www.googleapis.com/youtube/v3/channels?part=contentDetails&forUsername=DebConfVideos'
{
  "error": {
    "code": 403,
    "message": "The request is missing a valid API key.",
    "errors": [
      {
        "message": "The request is missing a valid API key.",
        "domain": "global",
        "reason": "forbidden"
      }
    ],
    "status": "PERMISSION_DENIED"
  }
}

Luckily we can throw the same (example) user name DebConfVideos at curl and grep:

dl@laptop:~$ curl -s "https://www.youtube.com/c/DebConfVideos/videos" | grep -Po '"channelId":".+?"'
"channelId":"UC7SbfAPZf8SMvAxp8t51qtQ"

So https://www.youtube.com/feeds/videos.xml?channel_id=UC7SbfAPZf8SMvAxp8t51qtQ is the RSS feed for DebConfVideos.

We can use individual Youtube video URLs as well. With the hack above, it'll find us the Channel ID from a Youtube video URL:

[Screenshot: Working around the Youtube API restrictions to still make use of their RSS feed]

Now, some user pages may have multiple valid RSS feeds because they contain multiple channels.

Remember the TED page from above? Well, run:

dl@laptop:~$ curl -s "https://www.youtube.com/c/TED" | grep -Po '"channelId":".+?"' | cut -d \" -f 4 | while read -r YTID ; do echo -n "Youtube-ID: $YTID " ; curl -s "https://www.youtube.com/feeds/videos.xml?channel_id=$YTID" | grep -m 1 -P -o "(?<=<title>).+(?=</title>)" ; done

This will iterate through the Channel IDs found and show you the titles. That way you can assess which one you want to add to your feedreader.

[Screenshot of the above]

You probably want the last Channel ID listed above, the non-selective "TED" one. And that's the one from the example above.

Update

02.06.2022: smpl wrote in with a much better solution for the most frequent use cases:

You can also get a feed directly with a username:
https://www.youtube.com/feeds/videos.xml?user=<username>

The one I use most is the one for playlists (if creators remember to use them).

https://www.youtube.com/feeds/videos.xml?playlist_id=<playlist id>

For the common case you don't even need the channel ID that way. But it is also conveniently given in a <yt:channelId> tag (or the topmost <id> tag) within the Atom XML document.
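
For example (a quick sketch using our DebConfVideos example from above, assuming the user= feed works for the channel in question):

dl@laptop:~$ curl -s "https://www.youtube.com/feeds/videos.xml?user=DebConfVideos" | grep -m 1 -Po '(?<=<yt:channelId>).+?(?=</yt:channelId>)'
UC7SbfAPZf8SMvAxp8t51qtQ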

Thanks, smpl!


  1. Actually it is even more complicated, as some channels, like our DebConfVideos example, will only get you an incomplete result, cf. this StackOverflow entry. I.e. the forUsername parameter may not even work, and the "best practice" seems to be mucking around with the search call. 

Getting rid of the Google cookie consent popup

Internet

If you clear your browser cookies regularly (as you should), Google will annoy you with a full-screen cookie consent overlay these days. And - of course - there is no "no tracking consent, technically required cookies only" button. You may log in to Google to set your preference. Yeah, I'm sure this is totally following the intent of the EU Directive 2009/136/EC (the "cookie law").

[Screenshot: Google cookie consent pop-up]

Unfortunately none of the big "anti-annoyances" filter lists seem to have picked that one up yet, but the friendly folks from the Computerbase Forum [German] came to the rescue. User "Sepp Depp" created the base of the following filter set that works for me (updated by "mru62" since):

Add this to your uBlock Origin "My filters" tab:

! Google - remove cookie-consent-popup and restore scroll functionality
! Updated 26.12.2022
google.*###lb
google.*##html:style(overflow: visible !important;)

! And for Youtube
! Updated 26.12.2022
www.youtube.com##.opened
www.youtube.com###dialog
www.youtube.com##.no-transition.yt-upsell-dialog-renderer.style-scope

Fixing the Nextcloud menu to show more than eight application icons

Internet

I have been late to adopt an on-premises cloud solution as the security of Owncloud a few years ago wasn't so stellar (cf. my comment from 2013 in Encryption files ... for synchronization across the Internet). But the follow-up product Nextcloud has matured quite nicely, and we use it for collaboration both in the company and in FLOSS-related work at multiple nonprofit organizations.

There is a very annoying "feature" in Nextcloud though: the designers think menu items for apps at the top need to be limited to eight or fewer to prevent information overload in the header. The whole issue discussion is worth reading as it is an archetypical example of design prevalence vs. user choice.

And of course designers think they are right. That's a feature of the trade.
And because they know better, there is no user-configurable option to extend those 8 items to maybe 12 or so, which would prevent the annoying overflow menu we are seeing with 10 applications in use:

[Screenshot of stock Nextcloud menu]

Luckily code can be changed, and there are many comments floating around the Internet suggesting to change const minAppsDesktop = 8. In this case it is slightly complicated by the fact that the JavaScript code is distributed in compressed form (aka "minified") as core/js/dist/main.js and you probably don't want to build the whole beast locally to change one constant.

Basically

const breakpoint_mobile_width = 1024;

const resizeMenu = () => {
    const appList = $('#appmenu li')
    const rightHeaderWidth = $('.header-right').outerWidth()
    const headerWidth = $('header').outerWidth()
    const usePercentualAppMenuLimit = 0.33
    const minAppsDesktop = 8
    let availableWidth = headerWidth - $('#nextcloud').outerWidth() - (rightHeaderWidth > 210 ? rightHeaderWidth : 210)
    const isMobile = $(window).width() < breakpoint_mobile_width
    if (!isMobile) {
        availableWidth = availableWidth * usePercentualAppMenuLimit
    }
    let appCount = Math.floor((availableWidth / $(appList).width()))
    if (isMobile && appCount > minAppsDesktop) {
        appCount = minAppsDesktop
    }
    if (!isMobile && appCount < minAppsDesktop) {
        appCount = minAppsDesktop
    }

    // show at least 2 apps in the popover
    if (appList.length - 1 - appCount >= 1) {
        appCount--
    }

    $('#more-apps a').removeClass('active')
    let lastShownApp
    for (let k = 0; k < appList.length - 1; k++) {
        const name = $(appList[k]).data('id')
        if (k < appCount) {
            $(appList[k]).removeClass('hidden')
            $('#apps li[data-id=' + name + ']').addClass('in-header')
            lastShownApp = appList[k]
        } else {
            $(appList[k]).addClass('hidden')
            $('#apps li[data-id=' + name + ']').removeClass('in-header')
            // move active app to last position if it is active
            if (appCount > 0 && $(appList[k]).children('a').hasClass('active')) {
                $(lastShownApp).addClass('hidden')
                $('#apps li[data-id=' + $(lastShownApp).data('id') + ']').removeClass('in-header')
                $(appList[k]).removeClass('hidden')
                $('#apps li[data-id=' + name + ']').addClass('in-header')
            }
        }
    }

    // show/hide more apps icon
    if ($('#apps li:not(.in-header)').length === 0) {
        $('#more-apps').hide()
        $('#navigation').hide()
    } else {
        $('#more-apps').show()
    }
}

gets compressed during build time to become part of one 15,000+ character line. The relevant portion reads:

var f=function(){var e=s()("#appmenu li"),t=s()(".header-right").outerWidth(),n=s()("header").outerWidth()-s()("#nextcloud").outerWidth()-(t>210?t:210),i=s()(window).width()<1024;i||(n*=.33);var r,o=Math.floor(n/s()(e).width());i&&o>8&&(o=8),!i&&o<8&&(o=8),e.length-1-o>=1&&o--,s()("#more-apps a").removeClass("active");for(var a=0;a<e.length-1;a++){var l=s()(e[a]).data("id");a<o?(s()(e[a]).removeClass("hidden"),s()("#apps li[data-id="+l+"]").addClass("in-header"),r=e[a]):(s()(e[a]).addClass("hidden"),s()("#apps li[data-id="+l+"]").removeClass("in-header"),o>0&&s()(e[a]).children("a").hasClass("active")&&(s()(r).addClass("hidden"),s()("#apps li[data-id="+s()(r).data("id")+"]").removeClass("in-header"),s()(e[a]).removeClass("hidden"),s()("#apps li[data-id="+l+"]").addClass("in-header")))}0===s()("#apps li:not(.in-header)").length?(s()("#more-apps").hide(),s()("#navigation").hide()):s()("#more-apps").show()}

Well, we can still patch that, can't we?
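
One way (a minimal sketch, assuming your installation contains exactly the minified pattern quoted above - verify with grep first, keep a backup, and remember the next Nextcloud update will overwrite the file):

# check that the minified pattern is really there (expect: 1)
grep -c '!i&&o<8&&(o=8)' core/js/dist/main.js

# back up main.js and raise the desktop minimum from 8 to 12 apps
sed -i.bak 's/!i&&o<8&&(o=8)/!i\&\&o<12\&\&(o=12)/' core/js/dist/main.js

Depending on your setup you may also have to make browsers re-fetch the patched file, e.g. by clearing their cache.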

Continue reading "Fixing the Nextcloud menu to show more than eight application icons"

Ahh, the joy of Cloudflare SNI certificates

Internet

Nice neighbourhood, https://www.amsterdam.nl...

For your copy and paste pleasure:

openssl s_client -connect www.amsterdam.nl:443 < /dev/null | openssl x509 -noout -text | grep DNS:

[Screenshot of openssl s_client -connect www.amsterdam.nl:443]

Update

03.11.18: Cloudflare fixed this mess somewhat. They now look for SNI servernames and use customer-specific certs. See:

openssl s_client -servername www.amsterdam.nl -connect www.amsterdam.nl:443 < /dev/null | openssl x509 -noout -text | grep DNS:

(notice the -servername option in the above vs. the original command, which will now fail with something like 140246838507160:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:769:)

IPv6: Getting rid of the dreaded "Neighbour table overflow"

Internet

IPv6 is hard. It has many, many design flaws, and the decade in which we all ignored it and hoped for better hasn't helped. So we're now all in on the protocol. Yeah.

One of the design principles is that it tries to be rather stateless in the configuration and "plug and play". But just like P&P in the good old ISA times, it just doesn't always work.

One of the common issues is that Linux bridges in IPv6 just don't work well with the router announcements that try to discover and configure the IPv6 neighbourhood.

The result is a seemingly endless amount of "kernel: Neighbour table overflow." lines flooding dmesg and syslog (or the journal for those on systemd).

Oct  4 16:26:06 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: __ratelimit: 1832 callbacks suppressed
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:11 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: __ratelimit: 887 callbacks suppressed
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:16 host-260 kernel: Neighbour table overflow.
Oct  4 16:26:23 host-260 kernel: __ratelimit: 803 callbacks suppressed

[Screenshot: grep -c(ount) on syslog]

Lovely. Welcome to a storage DOS waiting to happen.

So first tip: cat /proc/sys/kernel/printk_ratelimit shows you the number of seconds for which the rate limiter suppresses messages. The default is 5 seconds, and you can adjust it to more reasonable values in case you get heavily flooded like in the example above. Notice that this will make your dmesg rather useless, as the kernel is not very selective about which messages to suppress.
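
E.g. to relax it from the default 5 seconds to 30 (kernel.printk_ratelimit is the sysctl name of the same proc file; 30 is just an arbitrary example value):

dl@laptop:~$ cat /proc/sys/kernel/printk_ratelimit
5
dl@laptop:~$ sudo sysctl -w kernel.printk_ratelimit=30
kernel.printk_ratelimit = 30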

Now when you google "Neighbour table overflow", you'll find thousands of pages suggesting you increase the ARP / lladdr caches and garbage collection (gc) times like so:

# Set ARP cache garbage collection interval
net.ipv4.neigh.default.gc_interval = 3600
net.ipv6.neigh.default.gc_interval = 3600

# Set ARP cache entry timeout
net.ipv4.neigh.default.gc_stale_time = 3600
net.ipv6.neigh.default.gc_stale_time = 3600

# Setup cache threshold for ARP
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096

# And the same for IPv6
net.ipv6.neigh.default.gc_thresh1 = 1024
net.ipv6.neigh.default.gc_thresh2 = 2048
net.ipv6.neigh.default.gc_thresh3 = 4096

That helps if and only if you really have 500+ IPv6 neighbours. Unless you have a badly segmented network or run in a university lab, you don't.

Now ... you may be seeing messages like "kernel: vmbr0: Multicast hash table maximum of 512 reached, disabling snooping: eth0" or "kernel: vmbr0: Multicast hash table chain limit reached: eth0" in your dmesg / syslog / journal.

That hints at what is really happening here: the bridge confused the link-local router negotiation, and so you get endless ff02:: neighbour routing entries added to your caches until they overflow. So increasing the caches as in the sysctl entries above is basically pasting a band-aid over the problem.

ip route show cache table all will show you the tables. With all entries. See if you have too many ff02:: neighbours in there. If so, you should try to change your /etc/network/interfaces on Debian / Ubuntu similar to this:

iface vmbr0 inet6 static
   address 2a02:0100:1:1::500:1
   netmask 64
   gateway 2a02:0100:1:1::1
   post-up echo 2048 > /sys/class/net/vmbr0/bridge/hash_max
   post-up echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
   post-up echo 0 > /proc/sys/net/ipv6/conf/vmbr0/accept_ra

This obviously assumes your bridge is called vmbr0.
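
You can also poke the same knobs at runtime (as root) without bouncing the interface - these are just the post-up lines from above, run directly:

echo 2048 > /sys/class/net/vmbr0/bridge/hash_max
echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
echo 0 > /proc/sys/net/ipv6/conf/vmbr0/accept_ra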

Red Hat/CentOS users will need to adjust the config spread throughout multiple files in /etc/sysconfig/network-scripts. The ifup-ipv6 script is a good one to look at and amend.

Increasing the hash_max entry makes your bridge survive the initial storm of (useless) router solicitations.
multicast_snooping is usually off when routing, but you may need it to make sure your VMs on the bridge can be reached.
Finally, we make sure the bridge does not accept router announcements, because that is what the host system should handle.

Sometimes you may need to throw in a static route or two to reach the VMs. P&P, you remember ... ip -6 neigh add nud permanent proxy <VM:IPv6:goes::here> dev vmbr0 is your friend. Unfortunately the antidote for the dreaded "Neighbour table overflow" depends on the specific cause. So you'll have to poke around a bit. tcpdump -i eth0 -v ip6 will show you what is on the wire and tcpdump -i vmbr0 -v ip6 what's visible on the bridge.

Firefox opens directories in RawTherapee ... or a tale of colliding assumptions

Internet

After installing RawTherapee, a raw-photo-to-JPG image converter, on my Kubuntu 13.04 desktop, Mozilla Firefox started to open the Downloads directory in ... RawTherapee. Gosh. Now that's what you'd want to browse a downloads folder with.

update-alternatives did not help, as RawTherapee is not managed by the Debian alternatives system. Hm, xdg-open still works as intended and opens Dolphin.

Asking Google turns up many non-working solution proposals. Hm.

[Screenshot: Firefox context menu (right click) -> Open Containing Folder]

But searching the collective memory is not completely useless. There is a solved bug report on the RawTherapee bug tracker that explains the issue. The .desktop file RawTherapee installs contains inode/directory as a supported MIME type. Well, that's technically correct, but it makes no sense given the way MIME associations are used. Thankfully the bug reporter was able to convince the developer of this and got it fixed for future RawTherapee releases.

So option 1: edit /usr/share/applications/rawtherapee.desktop and remove the inode/directory entry from the MimeType=-line. Then regenerate the various mimetype caches.
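
On Debian / Ubuntu, regenerating those caches typically boils down to (a sketch; package manager triggers may run these for you anyway):

sudo update-desktop-database /usr/share/applications
sudo update-mime-database /usr/share/mime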

This still leaves the question why Firefox does not use the inode/directory entry from dolphin.desktop. Also, the next update from Debian/Ubuntu will probably overwrite the fixed rawtherapee.desktop file again, as the patch from the RawTherapee developer will only be included in new releases (and these may or may not be backported to Debian/Ubuntu). So you'd need to copy that fixed rawtherapee.desktop file to ~/.local/share/applications/ or the like.

Option 2: find out what the real issue is...

Continue reading "Firefox opens directories in RawTherapee ... or a tale of colliding assumptions"

Replicating BIND DNS slave configurations from BIND DNS master servers

Internet

Manually copying the BIND master zone configuration to slave servers is a tedious and error-prone process. There are configuration management systems available for large deployments, and BIND can support database backends as well, which allow synchronization virtually behind the scenes.

For my use case these options are overly complex and have unwanted risks associated with them.

The better option is to automate generating the BIND slave configuration from the BIND master named.conf.local.
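
To illustrate the idea, a minimal sketch (assuming zone stanzas in named.conf.local start on their own line like zone "example.com" { type master; ...; the master IP and the db file naming below are made up for illustration - a real setup needs more care):

#!/bin/bash
# Sketch: derive a BIND slave zone configuration from the master's named.conf.local.
MASTER_IP=192.0.2.53   # hypothetical master address

awk -v master="$MASTER_IP" '
/^zone/ {
    zone = $2
    gsub(/"/, "", zone)                # strip the quotes around the zone name
    printf "zone \"%s\" {\n", zone
    printf "\ttype slave;\n"
    printf "\tfile \"db.%s\";\n", zone
    printf "\tmasters { %s; };\n", master
    printf "};\n"
}' /etc/bind/named.conf.local > named.conf.slave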

Continue reading "Replicating BIND DNS slave configurations from BIND DNS master servers"

SyncPlaces, SortPlaces ... preserving these and other excellent Firefox add-ons

Internet

A few days ago I was re-installing a laptop of mine. Downloading the default add-ons for Firefox came to a halt as I could not find SyncPlaces, which I've been using for a few years to synchronize my bookmarks between the systems I use. As I knew the author, Andy Halford, has a homepage, I surfed there and found...

"So long Mozilla and thanks for all the fish

If you are here looking for any of my extensions (SyncPlaces, SortPlaces, CheckPlaces or SearchPlaces) then I'm sad to inform you that these are no longer available.

Unfortunately I do not have the time or the motivation to continue to support these in light of the close-minded, short-sighted and increasingly inflexible organisation that Mozilla has become. Apparently spending 1000's of hours promoting their products and providing my extensions to the general public to fill the gaps in their product, and repair bookmarks destroyed by Firefox Sync, counts for little and does not deserve any flexibility or support from them. Apparently add-on developers are second-class citizens to them whatever they may say. Surprising given that these are the main USP of Firefox, and developers are providing their time and efforts for free. My decision was taken after a series of problems the last one being the final straw."
[Screenshot: Andy Halford's homepage, 17.06.12]

I was unable to find any details on what caused this rant and the serious decision to remove his excellent add-ons from the Internet. There are a few discussions on the Mozillazine forums but nothing enlightening so far. Andy's personal homepage, family homepage and the TotalValidator product he and his wife offer are all still online; he just pulled all his popular Firefox add-ons.

Andy states on his homepage

"I will not be answering any further emails on this matter, I suggest that you direct any questions to Mozilla themselves and ask them why they were happier for me to leave than to use a little common sense." [ib.]

I still tried to contact him and asked for an interview, but he has not gotten back to me so far. I've sent a second email with the URL of this blog post.

The SyncPlaces plug-in is still prominently featured on Wikipedia's Comparison of browser synchronizers at press time.

Now this is a mess. With some ungraciously licensed closed-source software, you're probably used to planning migrations once license terms run out, or to upgrading to the much-improved successor product because "extended support" for your perfectly fine current version gets ridiculously expensive.

But with open source you have more rights, e.g. the security to use the product as long as you wish - if you have kept a copy of the code (and the license file). Because in this case the Mozilla Firefox installer .xpi's are hard to come by: Mozilla has a centralized add-on repository, and once the files are pulled from there, they quickly vanish off the net. Now don't get me wrong, these plug-ins are now unmaintained software, so you will have to migrate at some point in time, but not necessarily now. You have some time for planning and evaluating alternatives.

Continue reading "SyncPlaces, SortPlaces ... preserving these and other excellent Firefox add-ons"

ICQ web status icons

Internet

ICQ has a feature to show your online status on a web page. You need to enable it in the "Security and Privacy" settings as indicated on ICQ's support page. Once you include http://status.icq.com/online.gif?icq=<ICQ-ID>&img=5 in a web page, you can show your online status like this: [ICQ Online icon #5] (Online) and [ICQ Offline icon #5] (Offline).

Now the page that used to show all the icons available (the img=<number> parameter in the URL) 404's (with a very nice 404-cow). Try http://www.icq.com/features/web/indicator.html to see whether ICQ fixed their pages.

A friendly

seq 1 30 | xargs -n 1 -I _num_ wget -O Online_num_.gif  "http://status.icq.com/online.gif?icq=197344872&img=_num_"

helped to get the Online (and similarly the Offline) icons in the meantime.

Looking at them, I find these icons provide a nice glimpse into web design history.
ICQ was founded in 1996 and has seen a lot of changes between being sold to AOL in 1998 and on to mail.ru in 2010. Aah, the '90s ... animated gifs anyone?
As you probably risk eye cancer looking at them for too long, the full glory of the available ICQ icons is delegated to the full article view...

Continue reading "ICQ web status icons"