tail -S (truncating lines to terminal width)

Open Source

The tail command has a quite glaring omission: it cannot truncate lines. It wraps long log lines into multiple terminal lines regardless, which makes them very hard to read.

I used to work around this using less -S and then hitting the [F] key, but that's interactive. less +F <filename> is the little-known work-around for the interactive issue, yet that still doesn't work well with pipes (tail -f logfile | grep "ERROR:" etc.).
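For reference, the interactive work-around looks like this (less -S chops long lines, +F follows the file; you leave the follow mode with Ctrl-C):

# interactive equivalent of a "tail -f -S", but useless in a pipe
less -S +F /var/log/apache/access.log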

There is a bug report from 2004 against GNU coreutils but that went nowhere.

So we're not getting a tail -S anytime soon.

Bash to the rescue: tail -S → tails

Hence I wrote this little script, tails [1kB]:

#!/bin/bash -i
# v2 from 170712: introduce loop to work around GNU coreutils issues with pipe/fifo/isatty

if [[ -z "$COLUMNS" ]] ; then
        MYCOL=$(tput cols)
else
        MYCOL=${COLUMNS}
fi
tail "$@" | while IFS= read -r line; do
        echo "$line" | expand | cut -c1-${MYCOL:-80}
done

Now, there are some interesting bits even in this tiny script:

The bash -i results in $COLUMNS being set within the script on sane Linux bash. Otherwise that variable wouldn't be available, because it is a shell variable and not an environment variable. You knew that, right?

Unfortunately the bash -i doesn't get $COLUMNS set on either MacOS (X) or FreeBSD, so that's where the tput cols comes into play. It outputs the column width for the current terminal to stdout.

If all that fails tails will default to 80 columns.

So tails -f /var/log/apache/access.log will now look nice.

Corner case: color

If you use color codes somewhere (grep --color=always, dmesg --color=always), tails will simply truncate those lines a bit too short, so they will still not wrap.
There is a slight risk that it cuts a color escape sequence in half and messes up the terminal a bit. You could change that by removing the -i from the shebang line and setting $COLUMNS explicitly. But that then needs manual adjustment for each combination of colored lines (= count of ANSI sequences) and terminal width. Better to apply color after the tails invocation where possible, e.g. tails -f /var/log/httpd/error.log | grep ':error' to watch for PHP errors and the like.

Mended corner cases: inconsistent tail behaviour

A first version of the script didn't use a loop but just had

tail "$@" | expand | cut -c1-${MYCOL:-80}

This would break tails -f on Debian (coreutils 8.23) / Ubuntu (coreutils 8.26), while removing the | expand would make it work. On Fedora 25 (coreutils 8.25) I couldn't get tails -f to work at all with that v1: the cut alone (so just a single command chained) already broke the pipe :-(. And nope, stdbuf didn't help.
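For the record, this is the kind of stdbuf line-buffering work-around that was tried and, as said above, did not help:

# attempted work-around that did not fix the pipe issue here
tail -f logfile | stdbuf -oL expand | stdbuf -oL cut -c1-${COLUMNS:-80}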

If you have a simpler solution to work around the isatty / isfifo mess, please leave a comment!

Alternatives

If you want to show multiple log tails in parallel, highlight strings etc., multitail is worth a look.

Depending on what you want to achieve you could also tell your terminal emulator to not wrap lines:

setterm -linewrap off; less -SR +F /var/log/apache/access.log; setterm -linewrap on

Cygwin automatic updates

IT

Cygwin is a fantastic product for people that need to use Microsoft Windows and require some compatibility to Linux (or BSD or UNIX in general).

Unfortunately it is not trivial to keep it updated (and thus safe) as the update process requires downloading the latest installer and then clicking through the package list again and again on every update.
No apt-get update, no emerge --update @world, no dnf update.

But ... the people at Red Hat (who now own Cygwin) are not mean, they are just not good at documenting things :-)...

Be sure to have wget installed via Cygwin; you'll need it to fetch the installer automatically.
Then drop the following batch file as cyg_update.bat into C:\cygwin64 (or wherever you have installed Cygwin1):

  1. @echo off
  2. cd /d C:\cygwin64
  3. del /Q cygwinSetup-x86_64.exe
  4. bin\wget.exe --progress=dot -S -N http://cygwin.com/setup-x86_64.exe
  5. move /y setup-x86_64.exe cygwinSetup-x86_64.exe
  6. REM S-1-1-0 is the SID for "Everyone"
  7. icacls cygwinSetup-x86_64.exe /grant *S-1-1-0:RX
  8. cygwinSetup-x86_64.exe --no-desktop --no-shortcuts --no-startmenu --quiet-mode

Once you start the batch (e.g. by double-clicking from Windows Explorer) it will download the latest installer from Cygwin.com and perform a silent update. You need to approve the Windows installer warning as you do with every manual install / update as well. So it's not a no-click update but a one-or-two-clicks update.
Still much better than the click fest without the batch file.

Cygwin update screenshot

Update

01.06.2017: I've changed icacls cygwinSetup-x86_64.exe /grant Everyone:RX to icacls cygwinSetup-x86_64.exe /grant *S-1-1-0:RX which is the SID and not language dependent. 'cause otherwise German Windows would like to see "Jeder" and French "Tous publics", Chinese "任何人", etc. Looking them up on Microsoft's terminology search is quite nice but not really scalable.


  1. If you have installed the 32bit version and/or used a different install path, adjust line 2 of the batch file accordingly. Did I need to say that? Hm, well, I did ... have a cookie. 

Managing a project consisting of multiple git repositories

IT

The core team organizing DebConf, the annual Debian developer conference, reached out to me two weeks ago to help support this year's effort a bit.

I'm very happy to do so as Debian is a cornerstone of everything I do in the Open Source/Free Software space.

Screenshot of git_pull_all with color

To get me started I got access to a lot of mailing lists and irc channels. And even more git repositories. So many that the DebConf team even has an instruction page on how the repositories all fit together.

It's unfortunately quite common to split a bigger project into many git repositories to ease access rights management and reduce the noise and data transfer volume for the average user. The downside is, everybody ends up with a dozen or more individual repositories to keep pulling. And then there's git annex for yet another level of indirection.

Joey Hess, a former Debian developer, has even written an extensive tool, myrepos, to meta-manage the different repositories and it can do quite some magic across different SCMSs1. In my case this is a bit of an overkill though.

And using myrepos may leave you confused at some point about whether to run mr or git directly for each batch of repos you have inherited over time from working on multiple projects.2 Thus I prefer the simple route:

Check out each repository into a common top-level directory (~/debconf/ in this case) and then put the following two lines into an executable script git_pull_all into that top level directory:

#!/bin/sh
find ~/debconf -mindepth 1 -maxdepth 1 -type d -exec sh -c "cd {}; test -r .git/config && git pull $*" \;

This will allow you to pull all git repos with one command and keep the normal syntax for everything else you do with each repo.

The -mindepth and -maxdepth options instruct find to run your git pull only inside each direct child of the top-level directory. So recursion depth = 1. That is the single trick there is to this.
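If you prefer a plain shell loop over find, an equivalent sketch (untested, same assumptions about the ~/debconf layout) would be:

#!/bin/sh
# same idea with a glob instead of find: one git pull per direct subdirectory
for repo in ~/debconf/*/ ; do
        [ -r "$repo/.git/config" ] && (cd "$repo" && git pull "$@")
done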

Updates:

If you like to have some color and a bit of a spaced layout for improved readability, try:

#!/bin/sh
find ~/debconf -mindepth 1 -maxdepth 1 -type d -exec sh -c "cd {}; test -r .git/config && (printf \"\033[1m\033[34m%-50s\033[0m\" \"\${PWD}:\" ; git pull $*)" \;

When you have pull.rebase=true set in your .gitconfig, you can run ./git_pull_all --no-rebase to avoid rebases in case you work somewhere and want to have the merge commits.
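For reference, that pull.rebase setting is a one-liner to turn on, and the override is just the script argument shown above:

# make git pull rebase by default for all repositories
git config --global pull.rebase true
# per-run override
./git_pull_all --no-rebase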

P.S.: The DebConf15 Heidelberg registration just opened, please check the DebConf15 homepage for news and venue information, and please register if you want to come around.


  1. Source Code Management Systems, like git, mercurial (hg) or subversion (svn). Or God forbid ... cvs. I don't like the (D)VCS (Distributed) Version Control Systems moniker. Because that's not really all these systems do. Not even the most important piece of what they do these days. 

  2. With myrepos you can still work with each individual repository via git. Just so nobody will write in "but...". 

scp (secure copy) a file to the same path on a remote system

IT

I was copying files from one server to its symmetrical cluster partner a lot last week.

It's always

 scp /path/to/directory/file remote:/path/to/directory/

That gets boring after a while but does not really warrant setting up a full blown config management solution like salt, puppet or ansible.

So here is scpover [1.5kB].

It reduces the effort to

 scpover /path/to/directory/file

This will copy "file" from "/path/to/directory/" into exactly the same directory on the remote cluster partner. Which system to target is detected from the hostname of the local system and hard-coded into the script.

Scpover can also take multiple filepaths on one command line so you can beam over a few related config files from multiple locations in one go.
Not an atomic change but at least all within a reasonably short timespan.
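The script itself is just a small download, but the core idea can be sketched in a few lines (hypothetical hostnames; the real scpover hard-codes its own host-to-partner mapping):

#!/bin/bash
# sketch of the scpover idea, not the actual script
# pick the cluster partner based on the local hostname (hypothetical names)
case "$(hostname -s)" in
        node-a) PARTNER="node-b" ;;
        node-b) PARTNER="node-a" ;;
        *) echo "No cluster partner configured for this host." ; exit 1 ;;
esac
for file in "$@"; do
        # copy each file into the identical directory on the partner
        scp "$file" "${PARTNER}:$(dirname "$file")/"
done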

Continue reading "scp (secure copy) a file to the same path on a remote system"

Encrypting files with openssl for synchronization across the Internet

Linux

Well, shortly after I wrote about encrypting files with a keyfile / passphrase with gpg people asked about a solution with openssl.

You should prefer to use the gpg version linked above, but if you can't, below is a script offering the same functionality with openssl.

You basically call crypt_openssl <file> [<files...>] to encrypt file to file.aes using the same keyfile as used in the gpg script (~/.gnupg/mykey001 by default).

A simple crypt_openssl -d <file.aes> [<files.aes...>] will restore the original files from the encrypted AES256 version that you can safely transfer over the Internet even using insecure channels.

Please note that you should feed compressed data to crypt_openssl whenever you can. So preferably use it on .zip or .tar.gz files.
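For reference, the kind of openssl invocation such a script wraps looks roughly like this (a sketch, not the actual crypt_openssl code; keyfile path as per the default mentioned above):

# encrypt (sketch; the real script may use different options)
openssl enc -aes-256-cbc -salt -pass file:"$HOME/.gnupg/mykey001" -in file.tar.gz -out file.tar.gz.aes
# decrypt
openssl enc -d -aes-256-cbc -pass file:"$HOME/.gnupg/mykey001" -in file.tar.gz.aes -out file.tar.gz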

Continue reading "Encrypting files with openssl for synchronization across the Internet"

Encrypting files with gpg for synchronization across the Internet

Linux

Automatically transferring (syncing) files between multiple computers is easy these days. Dropbox, owncloud or bitpocket to name a few. You can imagine I use the latter (if you want a recommendation)1.

In any case you want to encrypt what you send to be stored in "the cloud" even if it is just for a short time. There are many options how to encrypt the "in flight" data. Symmetric ciphers are probably the safest and most widely researched cryptography these days and easier to use than asymmetric key pairs in this context as well.

Encryption is notoriously hard to implement correctly and worthless when the implementation is flawed. So I looked at gpg, a well known reference implementation, and was amazed that it can neither use a proper keyfile for symmetric encryption (you can only supply a passphrase via --passphrase-file) nor handle multiple files on the command line consistently. You can use --multifile (wondering... why does a command need that at all?) with --decrypt and --encrypt (asymmetric public/private key pair encryption) but not with --symmetric (symmetric shared key encryption). Duh!

With a bit of scripting around the gpg shortcomings, you end up with crypt_gpg that can nicely encrypt or decrypt multiple files (symmetric cipher) in one go.
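For a single file, the underlying symmetric calls look roughly like this (plain gpg usage, not the crypt_gpg script itself; newer gpg 2.x versions may additionally want --pinentry-mode loopback):

# encrypt file -> file.gpg with a shared passphrase file
gpg --batch --symmetric --cipher-algo AES256 --passphrase-file ~/.gnupg/mykey001 file
# decrypt file.gpg back to file
gpg --batch --passphrase-file ~/.gnupg/mykey001 --output file --decrypt file.gpg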


  1. Dropbox is closed source so it cannot be assessed for its security. Owncloud needs a thorough code review before I would dare to run it on my systems. 

Continue reading "Encrypting files with gpg for synchronization across the Internet"

Creating iPhone/iPod/iPad notes from the shell

Open Source

I found a very nice script to create Notes on the iPhone from the command line by hossman over at Perlmonks.

For some weird reason Perlmonks does not allow me to reply with amendments even after I created an account. I can "preview" a reply at Perlmonks but after "create" I get "Permission Denied". Duh. vroom, if you want screenshots, contact me on IRC :-).

As I wrote everything up for the Perlmonks reply anyways, I'll post it here instead.

Against hossman's version 32 from 2011-02-22 I changed the following:

  • removed .pl from filename and documentation
  • added --list to list existing notes
  • added --hosteurope for Hosteurope mail account preferences and with it a sample how to add username and password into the script for unattended use
  • made the "Notes" folder the default (so -f Notes becomes obsolete)
  • added some UTF-8 conversions to make Umlauts work better (this is a mess in perl, see Jeremy Zawodny's writeup and Ivan Kurmanov's blog entry for some further solutions). Please try combinations of utf8::encode and ::decode, binmode utf8 for STDIN and/or STDOUT and the other hints from these linked blog entries in your local setup to get Umlauts and other non-7bit ASCII characters working. Be patient. There's more than one way to do it :-).

I /msg'd hossman the URL of this blog entry.

Continue reading "Creating iPhone/iPod/iPad notes from the shell"

Apple iPhone ring tones Linux style

Open Source

Apple has crippled the iPhone to not allow normal music files as ringtones. Business decision. Technically any sub-40-second MP4 audio file will do once you rename it to *.m4r and drag-and-drop it to the ringtones folder of your phone in iTunes. Longer ones will work, too. But you'd need a jailbroken iPhone for that as iTunes will refuse to transfer the ringtone file if it's too long. Not much of an issue imho, who keeps ringing your phone for 40 seconds or more?

There's a gazillion websites available telling you how to convert a single .mp3-file to a ringtone with or without iTunes help and there are hundreds of tools doing that for you if you can't find out how to do it with just iTunes itself. Still the ones I tried failed for me as I wanted to convert my 20 or so standard ringtones from the good old Motorola K3 to iPhone ringtones all in one go. Without having to edit each one by hand. They are already nice ringtones and have served me well for years, just too long for the iPhone and in .mp3 format.

The basic processing sequence needed is

  1. Cut the .mp3 down to 39s
  2. Convert the .mp3 -> .wav (with mplayer, normalize output gain while we're at it)
  4. Convert the .wav -> .mp4 (with faac)
  4. Clean up, GOTO 1 for next file

So below is the free shell script to create multiple ringtones in one go on any Linux system. You need to install cutmp3, mplayer and faac for it, so apt-get install cutmp3 mplayer faac on Debian or Ubuntu. cutmp3 is currently not in the portage tree for Gentoo, but you can download an ebuild from Polynomial-C's overlay (mirror). Or you just download the cutmp3 binary from Jochen Puchalla's homepage. There's no error checking in the script, so know your way around the shell before running it.

Without further ado:

#!/bin/sh
#
# convert_to_ringtone file1.mp3 [file2.mp3, ...]
# Placed into the public domain by Daniel Lange, 2011.
#

for arg
do
        echo "Processing $arg..."
        cutmp3 -c -a 0:0.0 -b 0:39.0 -i "$arg" -O "$arg.tmp"
        mplayer -vo null -vc null -af volnorm -ao pcm:fast:file=tmpfile.wav "$arg.tmp"
        faac -b 128 -c 44100 -w tmpfile.wav
        name=$(echo "$arg" | sed 's/\.mp3$//' | sed 's/ /_/g')
        mv tmpfile.m4a "$name.m4r"
        rm tmpfile.wav
        rm "$arg.tmp"
        echo "$arg done."
done
 

Wikipedia article on Apple's 1984 ad.

Update

23.12.14 Apparently the faac package in Debian and Ubuntu has had the MP4 writing capability removed in v1.28-5 and later due to a minor license incompatibility. See the Debian Changelogs. Duh.

faac (1.28-5) unstable; urgency=low
  [ Andres Mejia ]
  * Disable mp4v2 support.
    This only disables mp4v2 for the faac utility program. The faac
    utility is GPL-2 but the mp4v2 library is MPL-1.1. The two licenses
    are incompatible with each other.

So ... unfortunately you have to build faac from source yourself or pin the v1.28-4 version which is identical except for the castration anyway.
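If you go the pinning route, something along these lines should do on Debian/Ubuntu (assuming your archive or a local mirror still carries the old version):

# install the last version with mp4v2 support and keep apt from upgrading it
apt-get install faac=1.28-4
apt-mark hold faac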

Random distro dev: "Why oh why doesn't my distro ever head mainstream...?"
Hint: Because of stuff like this.

Keeping IRC nicks active

IRC

Typical IRC services usually allow you to register with nickserv and link a number of nicks to a personal account. It's quite common to have nick, nick_ and nick__ as many IRC clients auto-append underscores if the primary nickname is already in use when connecting. Obviously you can set these alternate nicknames to almost anything you like in a decent client.

Some folks also group a "vanity" nickname or two for whatever reason. To keep these active, people do the "nick shuffle" (/nick newnick, /nick oldnick) all the time:

nick shuffle on freenode

People who forget the occasional nick shuffle may end up losing a grouped nick because it became inactive. While freenode staff try to contact people before dropping linked nicks, there are occasional prunes of "old data" from the services database. And then nobody can really ask upfront.

So before the next big purge comes up, I wrote a small bash script that logs into a nickserv account and cycles through the linked nicks. A few friends and I have used it successfully for many months now.

Grab a copy of keepnick (2.4kB) and drop it into /usr/local/bin.

Keepnick expects to have an accountname, the corresponding password and then a sequence of linked nicks given on its command line.

Something like

/usr/local/bin/keepnick accountname passw0rd linked_nick linked_nick_ vanity_nick MyOtherNick

should work.

For regular use, you need to set up a cron job to call keepnick e.g. every week. So put something like the following script into /etc/cron.weekly/keepnicks_irc or create a corresponding crontab entry for keepnicks_irc if you do not have the convenient cron.* directories set up:

#!/bin/bash
#
# run keepnick for user(s) irc account(s)
# intended to be run from cron, e.g. through /etc/cron.weekly
#

KEEPNICK="/usr/local/bin/keepnick"
# better safe than sorry
PATH="/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin"
export PATH

$KEEPNICK accountname1 passw0rd1 linked_nick1 linked_nick1_ linked_nick1__
$KEEPNICK accountname2 passw0rd2 linked_nick2 linked_nick2_ linked_nick2__
 

You should see keepnick in action now every week like this:

keepnick in action

What happens here is that the IRC services package tells you that keepnick has just authenticated to your account and will now shuffle through all nicks you asked it to. The big advantage is that it does this outside of channels, so it does not annoy any users. The cron job should make sure you don't forget the nick shuffle anymore.
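If you went for the crontab entry mentioned above instead of /etc/cron.weekly, a line like this (example schedule) does the same:

# run the keepnick wrapper every Monday at 04:00
0 4 * * 1    /usr/local/bin/keepnicks_irc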

Making sure your bash supports network connections

Stock bash will support network connections but on Debian and old (=pre-karmic) Ubuntu that capability was disabled at compile time.

If you need to check whether your bash is compiled with network support, type cat < /dev/tcp/time.nist.gov/13 into a bash terminal.

In case that gives you an RFC-867 time string, you're all fine. If not, re-compile your bash with --enable-net-redirections.
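That /dev/tcp redirection is the bash feature keepnick needs. A minimal sketch of how it is used (illustrative host, nick and messages only, not the actual keepnick code):

# open a TCP connection on file descriptor 3, say hello, read the greeting
exec 3<>/dev/tcp/chat.freenode.net/6667
printf 'NICK mynick\r\nUSER mynick 0 * :keepnick\r\n' >&3
head -n 5 <&3
exec 3>&-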

Now for something more advanced (but entirely optional):

Continue reading "Keeping IRC nicks active"

zip2dir (expand a zip to a directory of the same name)

IT

I needed to expand a lot of jars (Java zips) and other zips of various names into directories of the same name for each file. With 6,239 files of which some are jars, some other zips and many xml and other filetypes all not properly identified by a file extension, this gets a bit too much to do manually.

So:
Finding candidates for these is easy with find . -type f.
The file is most probably a zip archive if the first two characters are "PK", good old Phil Katz' signature. A friendly head -c 2 checks that.
All combined with some rudimentary error checking:

#!/bin/bash
# There is little data security here, so know what you're doing.
# All risks in using this code are yours. It moves and deletes files quite stupidly.
# (c) Daniel Lange, 2009, v0.01, released into the public domain
if [ $# -ne 1 ] ; then
        echo "Error: $0 expects exactly one argument, a (fully qualified) path/to/a/zipfile"
        exit 1
fi
if [ ! -r "$1" ] ; then
        echo "Error: file does not exist or no read permission on $1"
        exit 2
fi
if [ ! -w "$(dirname "$1")" ] ; then
        echo "Error: cannot write to directory of $1"
        exit 3
fi
if [ "$(head -c 2 "$1")" == "PK" ] ; then
        mv "$1" "$1.tmp"
        mkdir -p "$1"
        unzip -d "$1" "$1.tmp"
        rm "$1.tmp"
else echo "$1 is not a zipfile"
fi

Download available here (1KB).

Typical usage:

 find . -type f -print0 |xargs --null -n 1 zip2dir

This will expand all zips under the current directory.
Leave the zip2dir out for a dry run (xargs will just print to the tty then). Look at the -exec switch when digging around a bit more into what find can do for you.
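If you'd rather use find's own -exec than the xargs pipeline, the equivalent run is:

# expand every file below the current directory, one zip2dir call per file
find . -type f -exec zip2dir {} \;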

Updated Greasemonkey script for Xing

Private

Xing has just updated its image-thumbnail naming "algorithm" once again:

You'll now find thumbnails named /2f1127fe2.3144098_s2,2.jpg, i.e. with another component like <comma><digit> added to the name. Thus the Greasemonkey script linked from my article Greasemonkey to enlarge Xing pictures needs to have its main regex amended:

Change \_s(1|2|3)?\. to read \_s(1|2|3)?(,\d)?\. in three places in the script.

Or download an updated version here. I hope "louis" will update the version hosted at userscripts.org, too.

Updates

02.05.2008 As Xing adds multi-digit numbers to the thumbnails now (like /7553bd445.4550412_s2,10.jpg), you need to replace \d with \d+ in the above regex. The linked Greasemonkey script is updated.

16.11.2011 And again the URLs changed. This time appending a pattern like ,4.57x75.jpg. The linked script again was updated. If Scriptish and the auto-updating in Greasemonkey mature, I'll add auto-updating to the script file via the @updateURL parameter.

13.05.2012 The Xing layout is still changing as they Ajaxify the site more and more. Currently the Xing XE is probably the best working enhancement to see Xing pictures in a recognisable size.

Greasemonkey to enlarge Xing pictures

Private

I use Xing to manage some of my business contacts and even some friends have profiles there as well.

The default size of contacts' pictures displayed on one's Xing homepage is 18x24px. On higher-dpi screens you can thus barely recognize the person shown. As there are multiple sizes of all images available, it's pretty easy to just take (for example) http://www.xing.com/img/users/d/f/1/f34814409.5648827_s1.jpg, remove the _s1 and see a 140x185px version of the picture.
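Just to illustrate the URL rewrite on the shell (the Greasemonkey script does the same substitution in JavaScript):

# strip the _s1 thumbnail marker from an image URL
echo "http://www.xing.com/img/users/d/f/1/f34814409.5648827_s1.jpg" | sed 's/_s[123]\././'
# -> http://www.xing.com/img/users/d/f/1/f34814409.5648827.jpg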

Greasemonkey, a Firefox extension to run user specified scripts on selected pages that you visit, can automate this with a nice script from user louis to download here. Or my updated version here.*

Greasemonkey on Xing

So, every time I now hover my mouse over a tiny Xing thumbnail, it will show the "full resolution" version of the image. Simple, efficient.

Update

* Xing changed its image naming scheme a bit, so one needs an updated Greasemonkey script for all images to work again. Link inserted into the article text ("Or my updated version here.")