
Get Youtube Channel ID from username

Internet

Youtube has a really nice RSS feature that is extremely well hidden.

If you postfix a Channel ID to

https://www.youtube.com/feeds/videos.xml?channel_id=<id goes here>

you get a really nice Atom 1.0 (~RSS) feed for your feedreader.

Unfortunately the Channel ID is hard to find while you are navigating Youtube with usernames in the URL.

E.g. https://www.youtube.com/c/TED is TED's channel, full of interesting content worth watching (and some assorted horse toppings, of course).

But you have to read a lot of ugly HTML / JSON in that page to find and combine

https://www.youtube.com/feeds/videos.xml?channel_id=UCAuUUnT6oDeKwE6v1NGQxug

which is the related RSS feed.

Jeff Keeling wrote a simple Youtube RSS Extractor that does well if you have a .../playlist?... or a .../channel/... URL, but it will (currently) fail on username channels or Youtube landing pages.

So how do we get the Channel ID for a Youtube user we are interested in following?

Youtube has a great API, but it is gated behind API keys even for the simplest calls (a requirement that came with v3 of the API, while the previous version has been deprecated since 2015)1:

dl@laptop:~$ curl 'https://www.googleapis.com/youtube/v3/channels?part=contentDetails&forUsername=DebConfVideos'
{
  "error": {
    "code": 403,
    "message": "The request is missing a valid API key.",
    "errors": [
      {
        "message": "The request is missing a valid API key.",
        "domain": "global",
        "reason": "forbidden"
      }
    ],
    "status": "PERMISSION_DENIED"
  }
}

Luckily we can throw the same (example) user name DebConfVideos at curl and grep:

dl@laptop:~$ curl -s "https://www.youtube.com/c/DebConfVideos/videos" | grep -Po '"channelId":".+?"'
"channelId":"UC7SbfAPZf8SMvAxp8t51qtQ"

So https://www.youtube.com/feeds/videos.xml?channel_id=UC7SbfAPZf8SMvAxp8t51qtQ is the RSS feed for DebConfVideos.

We can start from individual Youtube video URLs as well. The hack above will also find us the Channel ID from a Youtube video URL:

Working around the Youtube API restrictions to still make use of their RSS feed
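In other words, a hedged sketch of the same trick applied to a watch page (the video ID below is just a placeholder, substitute any real one):

# VIDEO_ID_HERE is a placeholder - use any real video ID of the channel in question
curl -s "https://www.youtube.com/watch?v=VIDEO_ID_HERE" | grep -m 1 -Po '"channelId":".+?"'

The watch page embeds the uploading channel's channelId in its JSON blob just like the channel page does.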

Now, some user pages may have multiple valid RSS feeds because they contain multiple channels.

Remember the TED page from above? Well, run:

dl@laptop:~$ curl -s "https://www.youtube.com/c/TED" | grep -Po '"channelId":".+?"' | cut -d \" -f 4 | while read -r YTID ; do echo -n "Youtube-ID: $YTID " ; curl -s "https://www.youtube.com/feeds/videos.xml?channel_id=$YTID" | grep -m 1 -P -o "(?<=<title>).+(?=</title>)" ; done

This will iterate through the Channel IDs found and show you the titles. That way you can assess which one you want to add to your feedreader.

screenshot of the above

You probably want the last Channel ID listed above, the non-selective "TED" one. And that's the one from the example above.

Update

02.06.2022: smpl wrote in with a much better solution for the most frequent use cases:

You can also get a feed directly with a username:
https://www.youtube.com/feeds/videos.xml?user=<username>

The one I use most is the one for playlists (if creators remember to
use them).

https://www.youtube.com/feeds/videos.xml?playlist_id=<playlist id>

For the common case you don't even need the channel ID that way. But it is also conveniently given in a <yt:channelId> tag (or the topmost <id> tag) within the Atom XML document.
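So if you do need the Channel ID, a hedged one-liner for the common case could look like this (using the example username from above; the grep pattern is an assumption based on the Atom tags just mentioned):

# extract the channel ID from the per-user Atom feed
curl -s "https://www.youtube.com/feeds/videos.xml?user=DebConfVideos" | grep -m 1 -Po '(?<=<yt:channelId>).+?(?=</yt:channelId>)'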

Thanks, smpl!


  1. Actually it is even more complicated as some channels, like our DebConfVideos example, will only get you an incomplete result, cf. this StackOverflow entry. I.e. the forUsername parameter may not even work and the "best practice" seems to be mucking around with the search call. 

Debian Gitlab (salsa.debian.org) tricks

Debian

Debian is moving its git hosting from alioth.debian.org, an instance of Fusionforge, to salsa.debian.org, which is a Gitlab instance.

There is some background reading available on https://wiki.debian.org/Salsa/. It also has pointers to an import script to ease migration for people who move repositories. It's definitely worth hanging out in #alioth on oftc, too, to learn more about salsa / gitlab in case you have a persistent IRC connection.

As of now() salsa has 15,320 projects and 2,655 users in 298 groups.
Alioth has 29,590 git repositories (a repository being roughly equivalent to a project in Gitlab) and 30,498 users in 1,154 projects (a project there being roughly equivalent to a group in Gitlab).

So we currently have 50% of the git repositories migrated. One month after leaving beta. This is very impressive.
As Alioth has naturally accumulated some cruft, Alexander Wirt (formorer) estimates that 80% of the repositories in use have already been migrated.

So it's time to update your local .git/config URLs!

Mehdi Dogguy has written nice scripts to ease handling salsa / gitlab via the (extensive and very well documented) API. Among them is list_projects, which gets you a nice overview of the projects in a specific group. This is especially useful for the "Debian" group that contains the former collab-maint repositories, i.e. source code that can and shall be maintained by Debian Developers collectively.

Finding migrated repositories

Salsa can search quite quickly via the Web UI: https://salsa.debian.org/search?utf8=✓&search=htop

Salsa search screenshot

but finding the URL to clone the repository from is more clicks and ~4MB of data each time (yeah, the modern web), so

$ curl --silent https://salsa.debian.org/api/v4/projects?search="htop" | jq .
[
  {
    "id": 9546,
    "description": "interactive processes viewer",
    "name": "htop",
    "name_with_namespace": "Debian / htop",
    "path": "htop",
    "path_with_namespace": "debian/htop",
    "created_at": "2018-02-05T12:44:35.017Z",
    "default_branch": "master",
    "tag_list": [],
    "ssh_url_to_repo": "git@salsa.debian.org:debian/htop.git",
    "http_url_to_repo": "https://salsa.debian.org/debian/htop.git",
    "web_url": "https://salsa.debian.org/debian/htop",
    "avatar_url": null,
    "star_count": 0,
    "forks_count": 0,
    "last_activity_at": "2018-02-17T18:23:05.550Z"
  }
]

is a bit nicer.
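And if all you are after is the clone URL, a small jq filter gets you straight there (a sketch, assuming you want the SSH variant):

curl --silent https://salsa.debian.org/api/v4/projects?search="htop" | jq -r '.[].ssh_url_to_repo'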

Please note that the git URL format is a bit odd. It's either
git@salsa.debian.org:debian/htop.git or
ssh://git@salsa.debian.org/debian/htop.git.

Notice the ":" -> "/" after the hostname. Bit me once.

Finding repositories to update

At this point I found it useful to check which of the repositories I had cloned still carried the old URLs in their local .git/config:

find ~/debconf ~/my_sources ~/shared -ipath '*.git/config' -exec grep -H 'url.*git\.debian' '{}' \;

Thanks to Jörg Jaspert (Ganneff) the Debconf repositories have all been moved to Salsa now.
Hint: Bug him for his scripts if you need to do complex moves.

Updating the URLs has been an hour's work on my side and there is little you can do to speed that up if - as in the Debconf case - teams have used the opportunity to clean up and things are not as easy as using sed -i.
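For repositories that kept their path, though, a find/sed combination along these lines can handle the bulk. This is a sketch only; the old alioth URL prefix is an assumption, so check a few of your .git/config files first and adjust:

# rewrite old alioth collab-maint URLs to the new "debian" group on salsa
# (URL prefixes are assumptions - adjust to what your configs actually contain)
find ~/my_sources -ipath '*.git/config' -exec sed -i \
    's|git+ssh://git\.debian\.org/git/collab-maint/|git@salsa.debian.org:debian/|g' '{}' \;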

But there is no reason to do this more than once, so for the laptops...

Speeding up migration on multiple devices

rsync -armuvz --existing --include="*/" --include=".git/config" --exclude="*" ~/debconf/ laptop:debconf/

will rsync the .git/config files that you changed to other systems where you keep partial copies.

On these, a simple git pull to get up to the remote HEAD, or the git_pull_all one-liner from https://daniel-lange.com/archives/99-Managing-a-project-consisting-of-multiple-git-repositories.html, will suffice.

Git short URL

Stefano Rivera (tumbleweed) shared this clever trick:

git config --global url."ssh://git@salsa.debian.org/".insteadOf salsa:

This way you can git clone salsa:debian/htop.

tail -S (truncating lines to terminal width)

Open Source

The tail command has a quite glaring omission in that it can't truncate lines. Thus long log lines wrap into multiple terminal lines regardless, which makes them very hard to read.

I used to work around this using less -S and then hitting the [F] key, but that's interactive. less +F <filename> is the little-known work-around for the interactivity issue, but that still doesn't work well with pipes (tail -f logfile | grep "ERROR:" etc.).

There is a bug report from 2004 against GNU coreutils but that went nowhere.

So we're not getting a tail -S anytime soon.

Bash to the rescue: tail -S → tails

Hence I wrote this little script, tails [1kB]:

#!/bin/bash -i
# v2 from 170712: introduce loop to work around GNU coreutils issues with pipe/fifo/isatty

if [[ -z "$COLUMNS" ]] ; then
        MYCOL=$(tput cols)
else
        MYCOL=${COLUMNS}
fi
tail "$@" | while read line; do
        echo "$line" | expand | cut -c1-${MYCOL:-80}
done

Now, there are some interesting bits even in this tiny script:

The bash -i results in $COLUMNS being set within the script on a sane Linux bash. Otherwise that variable wouldn't be available, because it is a shell variable and not an environment variable. You knew that, right?

Unfortunately the bash -i doesn't get $COLUMNS set on either MacOS (X) or FreeBSD, so that's where the tput cols comes into play. It outputs the column width for the current terminal to stdout.

If all that fails tails will default to 80 columns.
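If you want to see those sources of the width on your own system, here is a quick hedged illustration (exact behaviour depends on your bash build and OS):

bash -c  'echo "COLUMNS=${COLUMNS:-unset}"'   # non-interactive shell: usually prints "unset"
bash -ic 'echo "COLUMNS=${COLUMNS:-unset}"'   # with -i: COLUMNS gets set on a sane Linux bash
tput cols                                     # fallback: prints the current terminal width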

So tails -f /var/log/apache/access.log will now look nice.

Corner case: color

If you use color codes somewhere (grep --color=always, dmesg --color=always), tails will truncate lines too early because the invisible escape sequences count toward the column limit, so the lines still won't wrap.
There is a slight risk that it may cut an escape sequence in half and mess up the terminal a bit. You could avoid that by removing the -i from the shebang line and setting $COLUMNS explicitly. But that then needs manual adjustment for each combination of colored lines (=count of ANSI sequences) and terminal width. Where possible, it is better to apply color after the tails invocation, e.g. tails -f /var/log/httpd/error.log | grep ':error' to watch for PHP errors and the like.

Mended corner cases: inconsistent tail behaviour

A first version of the script didn't use a loop but just had

tail "$@" | expand | cut -c1-${MYCOL:-80}

This would break tails -f on Debian (coreutils 8.23) / Ubuntu (coreutils 8.26), while removing the | expand would make it work. On Fedora 25 (coreutils 8.25) I couldn't get tails -f to work at all with that v1. The cut alone (so just a single command chained) already broke the pipe :-(. And nope, stdbuf didn't help.

If you have a more simple solution to work around the isatty / isfifo mess, please leave a comment!

Alternatives

If you want to show multiple log tails in parallel, highlight strings etc., multitail is worth a look.

Depending on what you want to achieve you could also tell your terminal emulator to not wrap lines:

setterm -linewrap off; less -SR +F /var/log/apache/access.log; setterm -linewrap on

Generate an indexed list of passwords

Other

Generating an indexed list of passwords without complex perl or python:

pwgen -y 20 30 | nl -w 2 -n rz -s -

Explanation:

pwgen: -y = complex passwords (including symbols); 20 = length of each password; 30 = number of passwords to generate

nl: -w 2 = use a width of two characters for the index; -n rz = right-justify, padded with leading zeros; -s - = use a dash as the separator

screenshot of pwgen | nl

scp (secure copy) a file to the same path on a remote system

IT

I've been copying a lot of files from one server to its symmetrical cluster partner this past week.

It's always

 scp /path/to/directory/file remote:/path/to/directory/

That gets boring after a while but does not really warrant setting up a full-blown config management solution like salt, puppet or ansible.

So here is scpover [1.5kB].

It reduces the effort to

 scpover /path/to/directory/file

This will copy "file" from "/path/to/directory/" into exactly the same directory on the remote cluster partner. Which system to target is detected from the hostname of the local system and hard-coded into the script.

Scpover can also take multiple filepaths on one command line so you can beam over a few related config files from multiple locations in one go.
Not an atomic change but at least all within a reasonably short timespan.
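To illustrate the idea, here is a minimal sketch of what such a wrapper boils down to. This is not the actual scpover script linked above, and the hostname-to-partner mapping is a made-up placeholder you would hard-code for your own hosts:

#!/bin/bash
# Minimal sketch of the scpover idea, not the real script:
# copy each given file to the same absolute path on the cluster partner.
# The partner mapping is a placeholder assumption - adjust to your hosts.
case "$(hostname -s)" in
    node-a) PARTNER="node-b" ;;
    node-b) PARTNER="node-a" ;;
    *)      echo "No cluster partner configured for $(hostname -s)" >&2 ; exit 1 ;;
esac

for FILE in "$@" ; do
    ABSPATH="$(readlink -f "$FILE")"
    scp "$ABSPATH" "${PARTNER}:$(dirname "$ABSPATH")/"
done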

Continue reading "scp (secure copy) a file to the same path on a remote system"

Encrypting files with openssl for synchronization across the Internet

Linux

Well, shortly after I wrote about encrypting files with a keyfile / passphrase with gpg, people asked about a solution with openssl.

You should prefer to use the gpg version linked above, but if you can't, below is a script offering the same functionality with openssl.

You basically call crypt_openssl <file> [<files...>] to encrypt file to file.aes, using the same keyfile as the gpg script (~/.gnupg/mykey001 by default).

A simple crypt_openssl -d <file.aes> [<files.aes...>] will restore the original files from the encrypted AES256 version that you can safely transfer over the Internet even using insecure channels.

Please note that you should feed compressed data to crypt_openssl whenever you can. So preferably use it on .zip or .tar.gz files.
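The core of such a script is a single openssl invocation per file. A minimal sketch of the idea (not the actual crypt_openssl script; cipher and option choices are assumptions based on the description above):

#!/bin/bash
# Minimal sketch, not the real crypt_openssl: encrypt each argument to <file>.aes
# with AES-256, reading the key material from the keyfile mentioned above.
KEYFILE="${HOME}/.gnupg/mykey001"
for FILE in "$@" ; do
    openssl enc -aes-256-cbc -salt -pass "file:${KEYFILE}" -in "$FILE" -out "${FILE}.aes"
done
# decryption would use "openssl enc -d -aes-256-cbc -pass file:..." accordingly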

Continue reading "Encrypting files with openssl for synchronization across the Internet"

Encrypting files with gpg for synchronization across the Internet

Linux

Automatically transferring (syncing) files between multiple computers is easy these days. Dropbox, owncloud or bitpocket to name a few. You can imagine I use the latter (if you want a recommendation)1.

In any case you want to encrypt what you send to be stored in "the cloud", even if it is just for a short time. There are many options for encrypting the "in flight" data. Symmetric ciphers are probably the safest and most widely researched cryptography these days, and easier to use than asymmetric key pairs in this context as well.

Encryption is notoriously hard to implement correctly and worthless when the implementation is flawed. So I looked at gpg, a well known reference implementation, and was amazed that it can neither use a proper keyfile for symmetric encryption (you can only supply a passphrase via --passphrase-file) nor does it handle multiple files on the command line consistently. You can use --multifile (wondering... why does a command need that at all?) with --decrypt and --encrypt (asymmetric public/private key pair encryption) but not with --symmetric (symmetric shared key encryption). Duh!

With a bit of scripting around the gpg shortcomings, you end up with crypt_gpg that can nicely encrypt or decrypt multiple files (symmetric cipher) in one go.
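The essential part is just a loop around gpg --symmetric. A minimal sketch of that idea (not the actual crypt_gpg script; cipher choice and options are assumptions):

#!/bin/bash
# Minimal sketch, not the real crypt_gpg: encrypt each argument symmetrically,
# reading the passphrase from a keyfile. Newer gpg2 versions may additionally
# need --pinentry-mode loopback for --passphrase-file to be honoured in batch mode.
KEYFILE="${HOME}/.gnupg/mykey001"
for FILE in "$@" ; do
    gpg --batch --yes --symmetric --cipher-algo AES256 \
        --passphrase-file "$KEYFILE" --output "${FILE}.gpg" "$FILE"
done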


  1. Dropbox is closed source so it cannot be assessed for its security. Owncloud needs a thorough code review before I would dare to run it on my systems. 

Continue reading "Encrypting files with gpg for synchronization across the Internet"

Keeping IRC nicks active

IRC

Typical IRC services usually allow you to register with nickserv and link a number of nicks to a personal account. It's quite common to have nick, nick_ and nick__ as many IRC clients auto-append underscores if the primary nickname is already in use when connecting. Obviously you can set these alternate nicknames to almost anything you like in a decent client.

Some folks also group a "vanity" nickname or two for whatever reason. To keep these active, people do the "nick shuffle" (/nick newnick, /nick oldnick) all the time:

nick shuffle on freenode

People who forget the occasional nick shuffle may end up losing a grouped nick because it became inactive. While freenode staff try to contact people before dropping linked nicks, there are occasional prunes of "old data" from the services database. And then nobody can really ask upfront.

So before the next big purge comes up, I wrote a small bash script that logs into a nickserv account and cycles through the linked nicks. A few friends and I have used it successfully for many months now.

Grab a copy of keepnick (2.4kB) and drop it into /usr/local/bin.

Keepnick expects to have an accountname, the corresponding password and then a sequence of linked nicks given on its command line.

Something like

/usr/local/bin/keepnick accountname passw0rd linked_nick linked_nick_ vanity_nick MyOtherNick

should work.

For regular use, you need to set up a cron job to call keepnick e.g. every week. So put something like the following script into /etc/cron.weekly/keepnicks_irc or create a corresponding crontab entry for keepnicks_irc if you do not have the convenient cron.* directories set up:

#!/bin/bash
#
# run keepnick for user(s) irc account(s)
# intended to be run from cron, e.g. through /etc/cron.weekly
#

KEEPNICK="/usr/local/bin/keepnick"
# better safe than sorry
PATH="/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin"
export PATH

$KEEPNICK accountname1 passw0rd1 linked_nick1 linked_nick1_ linked_nick1__
$KEEPNICK accountname2 passw0rd2 linked_nick2 linked_nick2_ linked_nick2__
 

You should see keepnick in action now every week like this:

keepnick in action

What happens here is that the IRC services package tells you that keepnick has just authenticated to your account and will now shuffle through all the nicks you asked it to. The big advantage is that it does this outside of channels, so it doesn't annoy any users. The cron job should make sure you don't forget the nick shuffle anymore.
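If you prefer a crontab entry over the cron.weekly directory, something along these lines will do (day and time are arbitrary examples):

# m  h  dom mon dow  command
17   4  *   *   1    /usr/local/bin/keepnicks_irc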

Making sure your bash supports network connections

Stock bash will support network connections but on Debian and old (=pre-karmic) Ubuntu that capability was disabled at compile time.

If you need to check whether your bash is compiled with network support, type cat < /dev/tcp/time.nist.gov/13 into a bash terminal.

In case that gives you an RFC-867 time string, you're all fine. If not, re-compile your bash with --enable-net-redirections.

Now for something more advanced (but entirely optional):

Continue reading "Keeping IRC nicks active"

zip2dir (expand a zip to a directory of the same name)

IT

I needed to expand a lot of jars (Java zips) and other zips of various names into directories of the same name for each file. With 6,239 files - some of them jars, some other zips, and many XML and other filetypes, all not properly identified by a file extension - this gets a bit too much to do manually.

So:
Finding candidates for these is easy with find . -type f.
The file is most probably a zip archive if the first two characters are "PK", good old Phil Katz' signature. A friendly head -c 2 checks that.
All combined with some rudimentary error checking:

#!/bin/bash
# There is little data security here, so know what you're doing.
# All risks in using this code are yours. It moves and deletes files quite stupidly.
# (c) Daniel Lange, 2009, v0.01, released into the public domain
if [ $# -ne 1 ] ; then
        echo "Error: $0 expects exactly one argument, a (fully qualified) path/to/a/zipfile"
        exit 1
fi
if [ ! -r "$1" ] ; then
        echo "Error: file does not exist or no read permission on $1"
        exit 2
fi
if [ ! -w "$(dirname "$1")" ] ; then
        echo "Error: cannot write to directory of $1"
        exit 3
fi
if [ "$(head -c 2 "$1")" == "PK" ] ; then
        mv "$1" "$1.tmp"
        mkdir -p "$1"
        unzip -d "$1" "$1.tmp"
        rm "$1.tmp"
else echo "$1 is not a zipfile"
fi

Download available here (1KB).

Typical usage:

 find . -type f -print0 |xargs --null -n 1 zip2dir

This will expand all zips under the current directory.
Leave the zip2dir out for a dry run (xargs will just print the filenames to the tty then). Have a look at the -exec switch when digging a bit more into what find can do for you.
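For reference, the -exec variant of the same run could look like this (functionally equivalent for this use case):

 find . -type f -exec zip2dir '{}' \;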

Cool, command-line style blog design

Private

It's very seldom that a blog design catches my eye.

Screenshot of Pete Hindle's blog

Most common templates for blog systems like Wordpress or Serendipity are very well honed. Usability, accessibility and visual design of these systems and their default templates are as good as it gets for the time being. Trying to do better usually fails. But Wordpress-CLI, which I found at Pete Hindle's blog, manages to create a unique design. It may be inspired by the google shell (gosh) or older incarnations of the concept, e.g. WebCmd, but it is unique because it requires poking around and trying stuff to expose the full functionality. A bit like an old-school rogue-like game, it invites playing with it to find out more. And it is nicely reminiscent of command interfaces to BBS systems, although the authors chose a syntax resembling a unix shell. You can try out different sub-designs at Rob McFarland's site. Obviously, usability still sucks, but it's worth it! Well done. And Pete: Please write some interesting entries in that blog now :-).

httpdate - set local date and time from a web server

Linux

While ntp may be a great protocol, I find it quite bloated and slow for the simple purpose of just setting the local date and time to a reference clock. I do not need 20ms accuracy on a notebook's clock :-). Thus I have use(d) rdate for a decade now, but the public rdate servers are slowly dying out. So I'm replacing it more and more with htpdate, which works quite nicely. It's written in C and a perl alternative is available on the author's site. There is also a forked Windows version of it available.

Developing a bit larger bash script (which syncs a few servers), I wondered whether I could realize the time sync part in bash as well.

It's quite possible:

# open a tcp connection to www.google.com
exec 3<>/dev/tcp/www.google.com/80
# say hello HTTP-style
echo -e "GET / HTTP/1.0\n\n">&3
# parse for a Date: line and with a bit of magic throw the date-string at the date command
LC_ALL=C LANG=en date --rfc-2822 --utc -s "$(head <&3 | grep -i "Date: " | sed -e s/Date\:\ //I)"
# close the tcp connection
exec 3<&-

Simple, eh?

Continue reading "httpdate - set local date and time from a web server"