I’m packaging several pieces of software for work and/or personal use. You can find i386 and amd64 packages for Ubuntu Feisty (7.04, current stable) and Gutsy* (7.10, current unstable) on my Personal Package Archive.

Currently you may find these packages:

  • Wt, AKA Witty, is a C++ library and application server for developing and deploying web applications. The API (intentionally) resembles the Qt API. Packaged by me.
  • asio, a cross-platform C++ library for network programming that provides developers with a consistent asynchronous I/O model using a modern C++ approach. It’s a dependency for Wt. Packaged by me.
  • Mini-XML is a small XML parsing library that you can use to read XML and XML-like data files in your application without requiring large non-standard libraries. It’s a dependency for Wt. Packaged by me.
  • Log4CXX, a C++ “port” of Log4j. We use it at work. Packaged by me.
  • MLDonkey, the best multinetwork P2P application. Feisty and Gutsy builds of the Debian Experimental package.
  • Intel PowerTop is a Linux tool that finds the software component(s) that make your laptop use more power than necessary while it is idle. Originally Feisty and Gutsy builds of the Debian Experimental package; I am now taking snapshots from the Subversion repository (read: these packages are the bleeding edge).
  • Qt, the best cross-platform GUI (and since Qt4, non-GUI) toolkit. These packages are built with subpixel rendering and the patented TrueType font rendering enabled. Do not use these packages if you are in the USA or any other country which recognizes those patents as valid. They do not apply in Spain, where I live.
  • Samba. The latest version. At this moment, Feisty builds of the Gutsy package.
  • libNTLM is a library that implements Microsoft’s NTLM authentication. Packaged by me.
  • Strigi is a daemon which uses a very fast and efficient crawler that can index data on your hard drive (Google Desktop-like). I improved the Debian version and I am building almost-daily snapshots from Subversion.
  • SNMP++, a C++ SNMP v1, v2 and v3 library. We use it at work. Packaged by me.
  • libTomCrypt is a fairly comprehensive, modular and portable cryptographic toolkit that provides developers with a vast array of well known published block ciphers, one-way hash functions, chaining modes, pseudo-random number generators, public key cryptography and a plethora of other routines. It’s a dependency for SNMP++. Packaged by me.
  • CMake, a cross-platform, open-source make system. Better than autotools. We use it at work. Feisty builds of the Gutsy package.
  • tolua++, C++ bindings for Lua 5.0 and Lua 5.1. I am only providing the Lua 5.1 version. Packaged by me.
  • mod_auth_ntlm_winbind. An Apache module providing NTLM authentication in cooperation with Samba 3. I improved the official packaging a bit.

* Not all packages have a Gutsy version yet, as I only started Gutsy builds today, motivated by a request on #strigi. Also, I am not providing my own Gutsy builds of packages already in Gutsy.

I am creating Debian packages for a C++ SNMP library which unfortunately does not properly set the soname or versioning information in its makefile. Shipping a library in that state is unacceptable under Debian guidelines.

If you are a C programmer you usually don’t need to worry: C has no polymorphism, and therefore no name mangling or virtual tables, so binary compatibility is much easier to keep (although changing the layout of a public struct will still break it).

If you are a C++ programmer and you don’t know what an ABI is and why binary compatibility matters, you must first read and understand what an API, an ABI and name mangling are.
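To make the name-mangling part concrete, here is a tiny example (the function names are mine, invented for illustration):

```cpp
#include <cassert>

// Two overloads share the source-level name "area", but the compiler
// gives each a distinct mangled symbol (with g++ under the Itanium
// C++ ABI these become _Z4areai and _Z4aread). The parameter types
// are encoded into the symbol name itself.
int area(int side) { return side * side; }
double area(double radius) { return 3.14159 * radius * radius; }
```

Because the signature is part of the exported symbol, changing a parameter type silently changes the symbol — one of the reasons C++ ABIs are so much more fragile than C ones.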

Even if you already knew about ABIs and name mangling, you probably don’t know about binary compatibility in shared objects (libraries). Don’t worry: ABI compatibility is non-obvious and most C++ programmers don’t know a thing about it.

The idea is quite simple: if you carelessly change the API of your library you have also changed its ABI and, most probably, due to name mangling and virtual tables (needed to support polymorphism), the new version of the library is ABI-incompatible with the older version.

There are two kinds of changes you can make: extend the API/ABI while keeping it compatible with former versions, or extend the API/ABI and break binary compatibility. There is very good information about which changes keep your library binary-compatible and which don’t in the KDE wiki and in the Qt FAQ.
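As a minimal sketch of the kind of change that breaks the ABI (the class and its members are made up for illustration):

```cpp
#include <cassert>

// Version 1.0 of a hypothetical library class:
struct Shape {
    virtual double area() const { return 0.0; }
    double x, y;
};

// The "same" class after two innocent-looking additions in version 1.1:
struct ShapeV2 {
    virtual double area() const { return 0.0; }
    virtual void draw() const {}   // new virtual: reshuffles the vtable
    double x, y;
    double rotation;               // new member: changes size and offsets
};
```

Every line of 1.0 client code still compiles against the 1.1 header, yet a binary built against 1.0 would call virtuals through the wrong vtable slot and read members at the wrong offsets in a 1.1 object.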

Once ABI compatibility is clear, the next question arises: how does my library state its binary compatibility? There are essentially two mechanisms for that: the Sun Solaris-specific mechanism (which is great, by the way) and sonames (used by Linux and most other Unices). There is plenty of information about sonames in the libtool manual and the Debian Libraries Packaging Guide, and a good example at LinuxQuestions. For more information about the Solaris mechanism, read carefully the DSO howto by Ulrich Drepper (page 35, section 3.3: “ABI Versioning”).

Two important things you have to remember:

  • If your library version is the same as your soname version, you have not understood a thing of what I said above.
  • If your library is stable, follow these guidelines.

If you really want to keep ABI compatibility for whatever reason (you are developing a widely deployed C++ library and the user might already have a newer/older version installed, you don’t want to bump versions unless totally necessary, etc), take a look at the opaque pointer technique (AKA d-pointer) popularized by Trolltech.
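A minimal sketch of the d-pointer idea (the class names are invented for the example, not taken from any real library):

```cpp
#include <cassert>
#include <string>

// Public header: clients only ever see one opaque pointer, so the
// size and layout of Counter never change between releases.
class Counter {
public:
    Counter();
    ~Counter();
    void increment();
    int value() const;
private:
    struct CounterPrivate;   // defined only in the implementation file
    CounterPrivate *d;       // the d-pointer; sizeof(Counter) is fixed
};

// --- implementation file (.cpp) ---
struct Counter::CounterPrivate {
    int count = 0;
    std::string label;       // e.g. a field added in version 1.1:
                             // no ABI break, clients keep working
};

Counter::Counter() : d(new CounterPrivate) {}
Counter::~Counter() { delete d; }
void Counter::increment() { ++d->count; }
int Counter::value() const { return d->count; }
```

Since all data members live behind the pointer, CounterPrivate can grow freely in later releases without changing sizeof(Counter) or any member offset the client’s binary depends on.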

For almost 4 years I worked for a small company, Venue Network, as a Systems Administrator. At the beginning my job meant dealing with Windows 2000 Server and Windows 2003 Server systems. As time went by I was able to introduce Linux and FreeBSD servers at some clients, saving them money and us hassle. The last 18 months there I barely touched Windows systems: the increasing demand for Linux and the storage-hungry users led me to focus on SANs, NASes and Linux. I still did some very specific (read: complex) work on web servers, but that was the exception, as I was already overloaded with work.

One day at the end of October 2006 I received an e-mail from another company saying they had read about me on the aKademy 2006 site (I gave a talk last year) and would like to know more about me. I sent them my phone number and the next day we talked on the phone for about 20 minutes: they wanted me to work as a C++/Linux/Qt developer. I told Jesús (the CTO and one of the founders of the company) I had never held a developer job. The closest thing was a summer internship in 2002 as a multimedia script writer, and I didn’t think that qualified. I was not the person they were looking for. He insisted and we arranged a meeting for the next week at their offices. Truth is, I thought Jesús was crazy and I would be wasting his time and mine, but I agreed. How could I possibly hold a job as a C++ developer? It had been years since I had programmed in C/C++, and I only developed in Ruby, as a hobby (Ruby, QtRuby, Rails, etc). My visit to Arisnova went very well: Jesús was full of confidence that I would be able to do the job, and he was so convincing even I started to believe it (actually, he was so confident that when I tried to hand him my resume, he declined it :-O).

Would it work? Venue Network was a tiny company where I held a very comfortable position and had already earned my medals; I did not need to prove anything anymore. At Arisnova I was going to start from scratch!

Fast forward to May 2007.

Turns out I accepted the offer and I have been working for Arisnova for 4 months now. My main job is porting our Integrated Platform Management System (IPMS) from Windows to Linux (auxiliary libraries, middleware, applications, everything). This software manages ships (frigates, corvettes, etc) and has been in use on Windows for several years now; ships running it have been sold to several countries and they are all very impressed with the software.

We use a lot of open source in the IPMS: Qt, Boost, ACE, ZeroC ICE, OpenSceneGraph, Lua, and the list goes on. As the building blocks were already cross-platform, the port is proving easier than everybody expected (including me).

The main innovation coming with the Linux version is the move to KDE: the Windows version depends on several ActiveX components for video, documentation, videoconferencing and some other features. Obviously ActiveX does not work on Linux, so the first thing you’d think is that we would need two different branches of code or a hell of a lot of #ifdefs. Not! (sorry, I couldn’t resist). Thankfully, being a KDE bigot is going to benefit our IPMS: KDE4 is multiplatform (Linux/Unix, Mac and Windows), therefore we will be making extensive use of KParts and almost every new technology KDE4 features: Phonon, Decibel, Strigi, etc (by the way, GNOME is not even close to this). We will also be using CMake.

As the port has progressed at a faster pace than we expected, and we’d like KDE4 to be quite stable before we invest our time in it, I have some time to fiddle with other things. Something I am looking at for the third version of our IPMS, which is currently in its inception, is Flash. Is it possible to integrate Flash in a desktop application (our GUI) and make it feel natural to the user? Will we need to embed a WebKit/Konqueror/whatever component as a "proxy" between the application and Flash? I don’t know yet, but I am currently investigating every lead: dlopen, libflashsupport, XEmbed (which has been pretty easy to use since Qt 4.1).

Summarizing, I am very happy I moved to Arisnova: the job is interesting, I am learning a lot, people are nice, I am performing way better than I (and everybody) expected and I see exciting challenges coming. Thank you guys!

I have started a new open source project called Destral. It is a command-line utility to split and join files, much like Hacha and HJSplit.

The main advantages of Destral over Hacha and HJSplit are:

  • Multiplatform
    It is written in pure C, therefore it should build on every operating system with a C compiler.
    This single utility works the same on Linux, Windows, Mac, etc; forget about using a different utility on each operating system. Same usage, same flags, same everything.
  • Destral is able to split and join using the Hacha 3.0, Hacha 3.5 and HJSplit formats. To state it clearly: Destral does not use a new split and join algorithm, and it does not need Hacha 3.0, Hacha 3.5 or HJSplit to work; I have implemented their algorithms myself.
  • Destral is intelligent and uses sensible defaults.
    Most times you will not need to tell it which split and join algorithm you want to use; it will figure it out.
    For instance, when you want to join several chunks into a file you just run destral -j myfile.0, or destral -j myfile.000, or destral -j myfile.001 (at the moment you need to provide the path to the first chunk, but this weekend I will make it intelligent enough to search for the first chunk if you pass it, for instance, chunk #3).
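For the curious, the first-chunk search could be as simple as rewriting the numeric suffix. This is only a sketch of the idea under my assumptions about the naming schemes (Hacha numbering starts at 0, HJSplit at 001), not Destral’s actual code:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Given any chunk path (e.g. "myfile.003"), keep the suffix width and
// produce candidate names for the first chunk, one per numbering origin:
// ".000"-style (starts at 0) and ".001"-style (starts at 1).
std::vector<std::string> firstChunkCandidates(const std::string &path) {
    std::size_t dot = path.rfind('.');
    std::string stem = path.substr(0, dot + 1);       // "myfile."
    std::size_t width = path.size() - dot - 1;        // digits in suffix
    std::vector<std::string> out;
    out.push_back(stem + std::string(width, '0'));              // .000
    out.push_back(stem + std::string(width - 1, '0') + "1");    // .001
    return out;
}
```

The joiner would then try each candidate in turn and open the first one that exists on disk.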

There is no release yet; if you are interested you will need to get the code via Subversion. The only dependency besides a C compiler is CMake, but it’s possible and easy to build it without CMake.

Current features:

  • Join Hacha 3.0, Hacha 3.5 and HJSplit/lxsplit files (no CRC check in Hacha files yet)
  • Multiplatform
  • It works and is very fast

Known bugs: there is an issue I just discovered with the name of the joined file under certain conditions; I will fix it soon.

Future features:

  • Fix bugs
  • Implement splitting of files, with sensible defaults: Destral will automagically select certain chunk sizes depending on the input file (it will be possible to override that using parameters).
  • GUI
  • CRC reverse engineering (the Hacha developer does not answer my e-mails, so I have no information about the CRC algorithm he is using)

In Spanish-speaking forums and websites a lot of people use Hacha (a win32-only app) to split a large file into several smaller chunks. English-speaking people prefer HJSplit, which has a Linux version called lxsplit.

On one hand, I cannot understand why people keep using these programs when you could just use a compressor (WinZip, WinRAR) with the compression ratio set to zero: it would be as fast as Hacha and HJSplit, and everybody already has WinZip and/or WinRAR. On the other hand, I cannot change people’s minds, and using Wine to run Hacha is a pain in the ass on my 64-bit Kubuntu (32-bit chroot, yadda, yadda).

I have tried to contact the author of Hacha, to no avail. I suspected the algorithm was easy, but I like to play nice: I kindly requested information about the algorithm Hacha uses to split files. After some weeks without an answer, tonight I gave KHexEdit a try and you know what? I was right: the split & join algorithm in Hacha 3.5 is extremely simple.

There is a variable-length header which consists of:

  • 5 bytes set to 0x3f
  • 4-byte CRC field. If no CRC was computed, it is 1 byte set to 0x07 followed by 3 bytes set to 0x00. If a CRC was computed, its 4 bytes are here. I have not discovered the CRC algorithm yet.
  • 5 bytes set to 0x3f
  • Variable number of bytes representing the filename of the large file (before splitting/after joining). This is plain ASCII, no Unicode involved.
  • 5 bytes set to 0x3f
  • Variable number of bytes representing an integer which is the size of the large file (before splitting/after joining). Let’s name it largeFileSize.
  • 5 bytes set to 0x3f
  • Variable number of bytes representing the size of each chunk except the first (the one which ends with ".0") and the last. Let’s call it chunkSize. The size of the first chunk is chunkSize + headerSize. The size of the last chunk is largeFileSize - (n-1)*chunkSize.
  • 5 bytes set to 0x3f

And that’s all you need to know if you want to implement the Hacha 3.5 algorithm. I will be doing that in the next few days and releasing this program under the GPL.
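To make the layout concrete, here is a sketch of a parser for it. It follows the field list above exactly; the one extra assumption, which I have not verified against real files, is that the two variable-length integers are stored as ASCII decimal digits:

```cpp
#include <cassert>
#include <string>

// Parsed Hacha 3.5 header fields, following the layout described above.
struct HachaHeader {
    bool hasCrc;                   // false when the 07 00 00 00 pattern is found
    std::string crc;               // the raw 4 CRC bytes
    std::string filename;          // plain ASCII, no Unicode
    unsigned long long largeFileSize;
    unsigned long long chunkSize;  // every chunk but the first and last
};

static const std::string kMarker(5, '\x3f');   // the 5-byte 0x3f separator

// Returns the bytes up to the next marker and advances past it.
// (Assumes well-formed input; a real parser would validate.)
static std::string nextField(const std::string &buf, std::size_t &pos) {
    std::size_t end = buf.find(kMarker, pos);
    std::string field = buf.substr(pos, end - pos);
    pos = end + kMarker.size();
    return field;
}

HachaHeader parseHachaHeader(const std::string &buf) {
    HachaHeader h;
    std::size_t pos = kMarker.size();          // skip the leading marker
    h.crc = buf.substr(pos, 4);                // CRC field is fixed-size
    pos += 4 + kMarker.size();                 // skip CRC + its marker
    h.hasCrc = (h.crc != std::string("\x07\x00\x00\x00", 4));
    h.filename = nextField(buf, pos);
    h.largeFileSize = std::stoull(nextField(buf, pos));  // assumed ASCII decimal
    h.chunkSize = std::stoull(nextField(buf, pos));      // assumed ASCII decimal
    return h;
}

// Size of the last of n chunks, from the formula above.
unsigned long long lastChunkSize(const HachaHeader &h, unsigned long long n) {
    return h.largeFileSize - (n - 1) * h.chunkSize;
}
```

Note that the filename field is scanned up to the next marker, so a filename containing five consecutive ‘?’ characters (0x3f) would confuse this sketch — another reason the real format may deserve more careful study.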

Update: I had not realized there was CRC information. The information I had here corresponds to the trivial case (no CRC), but I have yet to find out the CRC algorithm. Reversing CRC – Theory and Practice seems a good starting point.

The Parable of the Two Programmers is a 1985 tale showing why a lot of programmers (and I’d say people in IT in general) get frustrated with their work.

We are supposed to do our work quickly, diligently and perfectly on the first try. But then, once we achieve those objectives, outsiders think the work was easy and there’s no merit in it, and IT workers not only get no recognition or appreciation but are even disdained. Do you still wonder where frustration and stress come from?

Yesterday I released new versions of two of my open source projects: Javascript Browser Sniffer and XSPF for Ruby.

Version 0.5.1 of jsbrwsniff fixes the detection of Flash Player. It was not working with Flash Player >= 7.0 in Gecko browsers and not working at all in Internet Explorer due to a typo.

Version 0.3 of XSPF for Ruby is able to export XSPF playlists to M3U, SMIL, HTML and SoundBlox. These are minor features, but as a new dependency has been added (Ruby/XSLT), the version number has been upped to 0.3 instead of 0.2.1. I am currently developing the XSPF generator.