Kubuntu 9.10 (karmic) 64bit firefox java plugin

Linux

For some unknown reason the (K)Ubuntu developers have not updated the Java plugin for Firefox since Jaunty (yet?).

The version that Karmic (9.10) pulls out of the multiverse repository is still jaunty's (9.04).

So when you try:

apt-get install sun-java6-plugin

you'll get something like

   Reading package lists... Done
   Building dependency tree
   Reading state information... Done
   Some packages could not be installed. This may mean that you have
   requested an impossible situation or if you are using the unstable
   distribution that some required packages have not yet been created
   or been moved out of Incoming.
   The following information may help to resolve the situation:
   
   The following packages have unmet dependencies:
     sun-java6-plugin: Depends: sun-java6-bin (= 6-15-1) but 6-16-0ubuntu1.9.04 is to be installed
   E: Broken packages

Duh.

Actually, if you have the Java Runtime Environment (JRE, package name sun-java6-jre) installed, all files needed are already present.
They are just not in the right place on the filesystem.

So, run:

sudo apt-get install sun-java6-jre   # install JRE if needed
sudo ln -s /usr/lib/jvm/java-6-sun/jre/lib/amd64/libnpjp2.so /usr/lib/mozilla/plugins/

This installs the JRE (if it's not already installed) and symlinks the Firefox plugin for Java into place so that it is found after a browser restart.
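
To check that the plugin is actually picked up, verify the symlink and look at about:plugins after restarting the browser (paths as used above, 64-bit JRE):

  # the symlink should point into the JRE's amd64 plugin directory
  ls -l /usr/lib/mozilla/plugins/libnpjp2.so
  # then restart Firefox and open about:plugins; a Java entry should be listed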

Fixing FreeNX / NoMachine NX keyboard glitches (e.g. ALTGr)

Linux

There is an add-on technology to X or VNC called NX, made by an Italian company called NoMachine. It's quite useful as it substantially speeds up working on remote desktops over slow network connections (e.g. DSL pipes).

The libraries that implement NX are released under GPLv2 by that company. A server wrapping the libraries' functionality is available as closed source from NoMachine, or as a free product (GPLv2 again) called FreeNX, written by Fabian Franz.

FreeNX itself is amazing as it is written in Bash (with a few helper functions in C). It's also able to mend some of the shortcomings of the NX architecture. E.g. stock NX requires a technical user called "nx" to be able to ssh into the NX server with a public/private keypair. FreeNX can work around that for more secure set-ups.

One issue I bumped into quite regularly with Linux clients and Linux hosts from different distributions/localisations is that the keymaps are not compatible. This usually results in the ALTGr key not being usable, so German keyboard users can't enter a pipe ("|"), tilde ("~") or backslash ("\") character. The up and down arrow keys also tend to paste weird characters into the shell. All of that makes using a shell/terminal prompt quite interesting.
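
As a quick workaround (not necessarily the fix described in the full article, see the link below), forcing a sane keymap inside the remote session often helps. The model and layout values here are just examples for a German PC keyboard and will need adjusting:

  # run inside the NX session on the remote host
  setxkbmap -model pc105 -layout de
  # then check in a terminal whether ALTGr combinations (|, ~, \) work again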

Continue reading "Fixing FreeNX / NoMachine NX keyboard glitches (e.g. ALTGr)"

Getting dual-screen (xinerama) to work with Matrox G450/550 graphics cards and Xorg 1.5

Gentoo

Gentoo finally decided to update Xorg to 1.5. Because this brings very substantial changes compared with the previous version, some things break, and there is a migration guide that you are nagged to read. After the upgrade I found that the Matrox card in one of my servers would not do Xinerama anymore, i.e. I only got the same image cloned on both screens. This is the default behaviour of the stock Xorg mga driver; it needs a proprietary HALlib to get real dual-screen capabilities. Whilst there are a few unstable ebuilds for x11-drivers/xf86-video-mga, none of them worked any better for me with Xinerama. The Gentoo ChangeLog is useless as usual. (Gentoo ebuild ChangeLogs tend to never really tell what was fixed; if you're lucky they reference a bug with a good description. But that's only if you're really lucky.)

Worse, that driver hasn't been updated by Matrox since mammals took over the earth (figuratively ... 2005). This is the typical unmaintained-closed-source-drivers-make-hardware-obsolete-sooner-than-later story. Luckily the cards are quite widely used, and clever people from the Open Source community have written guides (Tuxx-Home, Fkung) on how to dissect the proprietary driver and combine parts of it with the Open Source version so that it can be linked into recent X servers. Unfortunately, because of the architectural changes in Xorg 1.5, following these guides fails at the compile stage.

In the Matrox Forum of Alexander Griesser, the author of the first comprehensive Matrox driver install guide linked above, people currently mostly downgrade to previous Xorg versions to work around the issue.

But there is a better^Hworking solution already emerging :-P ...

Continue reading "Getting dual-screen (xinerama) to work with Matrox G450/550 graphics cards and Xorg 1.5"

httpdate - set local date and time from a web server

Linux

While NTP may be a great protocol, I find it quite bloated and slow for the simple purpose of just setting the local date and time from a reference clock. I do not need 20ms accuracy on a notebook's clock :-). Thus I have used rdate for a decade now, but the public rdate servers are slowly dying out. So I'm replacing it more and more with htpdate, which works quite nicely. It's written in C, and a Perl alternative is available on the author's site. There is also a forked Windows version available.

While developing a somewhat larger bash script (which syncs a few servers), I wondered whether I could do the time sync part in bash as well.

It's quite possible:

  # open a tcp connection to www.google.com
  exec 3<>/dev/tcp/www.google.com/80
  # say hello HTTP-style
  echo -e "GET / HTTP/1.0\n\n">&3
  # parse for a Date: line and with a bit of magic throw the date-string at the date command
  LC_ALL=C LANG=en date --rfc-2822 --utc -s "$(head <&3 | grep -i "Date: " | sed -e 's/Date: //I')"
  # close the tcp connection
  exec 3<&-

Simple, eh?
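
A slightly more defensive variant of the same trick (host name as a parameter, carriage return stripped, bail out if no Date: header came back) could look like this. It's just a sketch building on the snippet above; the function name and default host are made up for the example:

  # set the local clock from an HTTP Date: header; optional argument: host name
  settime_from_http() {
      local host=${1:-www.google.com} dateline
      # open the tcp connection and send a minimal request
      exec 3<>"/dev/tcp/${host}/80"
      printf 'GET / HTTP/1.0\r\n\r\n' >&3
      # grab the Date: header, strip the "Date: " prefix and the trailing CR
      dateline=$(head <&3 | grep -i '^Date: ' | sed -e 's/^Date: //I' | tr -d '\r')
      exec 3<&-
      # no Date: header means the connection or the reply went wrong, so give up
      [ -n "$dateline" ] || return 1
      LC_ALL=C date --utc -s "$dateline"
  }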

Continue reading "httpdate - set local date and time from a web server"

kloeri announces Exherbo, another source based Linux distribution

Linux

Bryan Østergaard (aka kloeri) announced Exherbo today. He assembled a team of (ex-)Gentoo developers including Ciaran McCreesh (ciaranm), Richard Brown (rbrown), Fernando J. Pereda (ferdy) and Alexander Færøy (eroyf) to build a new source based Linux distribution.

They would like to overcome some of the shortcomings of Gentoo, both from a technical and from a community perspective. Obviously this is easily said and hard to really achieve, so time will tell how successful that team can be. Renaming USE flags to OPTIONS and merging the platform KEYWORDS (like x86, ~x86) into the OPTIONS logic is no big deal, but getting the thousands of ebuilds^Hpackages better supported and maintained than Gentoo does will be the real deal{maker|breaker}.

Paludis, ciaranm's package manager, supports Gentoo ebuilds and can import them into Exherbo, so there is a potential migration path sketched out.*

They also add another init-system re-write ("Genesis") to the pool. An already quite crowded pool with rather shallow water, I may add.

Exherbo has nothing that is end-user-safe at the time of the announcement, so it's safe to assume kloeri's team wants to attract further development capacity :-).

Browse around the website or join folks in #exherbo if you're interested.

I asked in #exherbo what "exherbo" means ... Latin for "uproot" was the answer. How fitting.

Updates

*19.04.08: Two friendly folks wrote in to clarify that Paludis currently can only import ebuild build results into Exherbo via importare, i.e. it takes a Gentoo build result and packages it for import into the Exherbo system through Paludis.
23.05.08: Ciaranm wrote a blog entry on how to get build results into Exherbo/Paludis via importare.

Serendipity default event_s9ymarkup plugin breaking URLs that contain underscores

Serendipity

The default Serendipity mark-up plugin (event_s9ymarkup) currently breaks URLs that contain underscores.

So

http://en.wikipedia.org/wiki/Statler_%26_Waldorf

will end up

http://en.wikipedia.org/wiki/Statler</u>%26_Waldorf

because of a faulty regex. Garvin Hicking does not really want to fix this (see this s9y support forum article for the arguments pro/contra fixing it). So if you encounter this problem, your options are:

  • replace _ in URLs with %5F (aka manually urlencode it; see the example after this list)
  • remove the plugin or disable it
  • patch the plugin
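
For the first option, the Wikipedia URL from above would for example become:

  http://en.wikipedia.org/wiki/Statler%5F%26%5FWaldorf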

Patching is basically changing

plugins/serendipity_event_s9ymarkup/serendipity_event_s9ymarkup.php:

$text = preg_replace('/\b_([\S ]+?)_\b/','<u>\1</u>',$text);

to

$text = preg_replace('/\ _([\S ]+?)_\ /',' <u>\1</u> ',$text);

If you want to write things like "Haha[lol]" (which I have no real use for ...), extend the "\ " with whatever characters you'd like to allow as delimiters for underlined words besides blanks. It should only be symbols that are not valid in URLs (so none of "$-_.+!*'()," which are all valid in URLs according to RFC 1738).

You may also want to consider replacing the single underscore ("_") with two or more ("__") to make the detection that you actually wanted underlined text more reliable.

SSHd chroot and PAM

Gentoo

SSH with the chroot patch has been working fine for a number of years. Since PAM v0.99, though, things have broken if users are chrooted with the "/home/username/./" syntax as their homedir.

SSH sessions will just terminate immediately after successful logon. Doh.

Two solutions exist:

  1. Put UsePAM no into /etc/ssh/sshd_config and use the chroot patch and /./ in users' homedirs.
  2. Keep UsePAM yes. Emerge sys-auth/pam_chroot, add session required pam_chroot.so to /etc/pam.d/sshd, and either set up /etc/security/chroot.conf or add chroot_dir=/home/username/ to the pam_chroot.so line (see the sketch after this list).
    This will currently not work on amd64 though, as the Gentoo bug regarding pam_chroot has not caught any attention from the arch testers. Since July...
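
A rough sketch of both variants follows. The /etc/security/chroot.conf line format is an assumption based on common pam_chroot builds; check the documentation of the version you actually emerge:

  # option 1: /etc/ssh/sshd_config, skip PAM entirely and rely on the chroot patch
  UsePAM no

  # option 2: /etc/pam.d/sshd, keep PAM and let pam_chroot do the chrooting
  session    required    pam_chroot.so
  # ... or, skipping chroot.conf, directly on the PAM line as mentioned above:
  # session    required    pam_chroot.so chroot_dir=/home/username/

  # /etc/security/chroot.conf, assumed format: "user-regex  chroot-dir"
  ^username    /home/username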

Bugging the arch testers in #Gentoo-amd64 didn't help either:

Continue reading "SSHd chroot and PAM"

Multiple Apache VHosts on the same IP and port

Apache

Yesterday I learned again what I knew a few years ago but had since forgotten:

You cannot put multiple SSL-enabled virtual Apache hosts onto the same IP and port.

Apache cannot identify which VirtualHost to serve a request from because the payload is encrypted in its entirety. So a

Host: servertwo.tld

header cannot be parsed until the encryption has been removed. Which requires the key, which is defined in the very VHost section that could not be identified in the first place... So a name-based VirtualHost configuration like this won't work:

  Listen 443
  NameVirtualHost *:443
  <VirtualHost *:443>
     SSLEngine On
     ServerName serverone.tld:443
     SSLCertificateFile /etc/apache2/ssl/serverone.crt
     SSLCertificateKeyFile /etc/apache2/ssl/serverone.key
     [...]
  </VirtualHost>
  <VirtualHost *:443>
     SSLEngine On
     ServerName servertwo.tld:443
     SSLCertificateFile /etc/apache2/ssl/servertwo.crt
     SSLCertificateKeyFile /etc/apache2/ssl/servertwo.key
     [...]
  </VirtualHost>

It will just serve any request out of the first VirtualHost (serverone.tld) regardless of the hostname in the request headers.

There is some light at the end of this tunnel though: RFC 4366 describes an optional extension to the TLS (Transport Layer Security) client hello called "Server Name Indication" (SNI). With this the client includes a list of server names (usually just one) that it's trying to contact. Apache can easily match the name supplied by the client against a ServerName (or ServerAlias) directive from its configuration files.

SNI will be supported in mod_ssl with OpenSSL v0.9.9. Sometime in the future. There is a backport to v0.9.8 available from Stephen Henson linked here. Or you can use mod_gnutls as described by George Notaras in a recent blog entry.

In either case the above configuration snippet will "just work" once SNI is understood by Apache.

Currently Internet Explorer 7 (on Vista only, wanna upgrade? :-)), Mozilla Firefox 2+, Opera 7.6+ and KDE Konqueror 3.5+ support sending SNI. You can test your browser at Kaspar Brand's SNI test page. He also has a patch available to make Apache 2.2 mod_ssl SNI-capable when compiled against a CVS version of OpenSSL.
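
To check from the command line which certificate a server hands out for a given name, a reasonably recent openssl s_client can send the SNI extension itself (the IP and host name below are placeholders):

  # ask 192.0.2.1 for serverone.tld via SNI and print the certificate subject
  openssl s_client -connect 192.0.2.1:443 -servername serverone.tld </dev/null 2>/dev/null \
    | openssl x509 -noout -subject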

I'm rather sure that the spread of SNI-capable hosts will also provide new hacking opportunities: let's assume a system serves both intranet and Internet traffic. A client contacts the Internet IP with SSL but specifies the intranet hostname in its TLS SNI entry. Guess what will happen? Yup.

Update

02.09.2009: Gee, after two years people still read this blog entry. So I'll point you to a few updates; 2009 is not 2007 :-). SNI has made some slow progress since the original article, but the major steps forward only came this summer: Apache has had official support for SNI since 2.2.12 (tracking bug). Gentoo was early to support SNI, and Tobias Scheerbaum has written a blog entry on Apache, SSL und SNI in Gentoo (in German) summarizing how it works out of the box. Support for SNI has also been added to Debian (tracking bug), but for now the default config files don't reflect the SNI capability yet. Ubuntu will see SNI in Karmic Koala, the release scheduled for next month, i.e. "9.10" (tracking bug). Fedora has an SNI-enabled Apache from httpd-2.2.13-1.fc11 onwards (tracking bug). Tobias also states that SP3 for Windows XP enables IE6 to send SNI (SP2 is not sufficient).

Apache fails to start at boot, but works when started manually

Apache, Gentoo

Since a baselayout update Apache fails to start on Gentoo at (re-)boot of a server if that server has unused ethernet interfaces.

The symptom is that Apache fails to start on boot although it has been added to the default runlevel with
rc-update add apache2 default

This is caused by recent baselayouts not working properly with more than one ethernet interface when not all of them are up.

Thus changing depend() { need net ... } into
depend() { need net.eth0 ... } at the top of /etc/init.d/apache2 will help.

While you're at it you could also add a nice after urandom to the existing depend() construct
and make sure apr and apache are emerged with the urandom USE flag set. Reading from /dev/random to initialize the digest authentication mechanism (or SSL, for that matter) might otherwise cause Apache to block if there is not enough entropy in the random pool.
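
Putting both changes together, the depend() block at the top of /etc/init.d/apache2 might then look roughly like this (a sketch only; keep whatever other dependencies the stock init script already lists):

  depend() {
      need net.eth0      # instead of the generic "need net"
      after urandom      # wait until the entropy pool has been seeded
      # keep the remaining entries of the stock script here
  }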