Today my Amazon.com wishlist hit 1,000 books. It’d cost about US $30,000 (roughly 22,000 EUR) to buy every book in the list. Most likely, I won’t be able to buy all those books unless I win the lottery, so that a) I have enough money to spend on books, and b) I can buy a facility to store them all.

The worst of all (actually “the best”, to me) is that there is not a single book in that list I don’t want to read, even though a stockpile of books I own but have not had time to read is already sitting on the shelves next to me.

If I had to classify them, about 40% of the books are computer-related: systems administration, software development, vintage computers, etc. Another 30% are popular science books; people who know me know I really love popular science, and most of the books I read are science books. Management books are my next target: to most people they are boring, but that 20% of my wishlist makes for interesting reading, and what you learn from them applies not only to company management but to everything in your life. The remaining 10% of the books cover miscellaneous topics: music, biographies, emergency medicine, etc.

Of course you can buy me some books: you only have to pay for them and Amazon will kindly deliver them to my home :-).

I’m packaging several pieces of software for work and/or personal use. You can find i386 and amd64 packages for Ubuntu Feisty (7.04, current stable) and Gutsy* (7.10, current unstable) on my Personal Package Archive.

Currently you may find these packages:

  • Wt, AKA Witty, is a C++ library and application server for developing and deploying web applications. The API (intentionally) resembles the Qt API. Packaged by me.
  • asio, a cross-platform C++ library for network programming that provides developers with a consistent asynchronous I/O model using a modern C++ approach. It’s a dependency for Wt. Packaged by me.
  • Mini-XML is a small XML parsing library that you can use to read XML and XML-like data files in your application without requiring large non-standard libraries. It’s a dependency for Wt. Packaged by me.
  • Log4CXX, a C++ “port” of Log4j. We use it at work. Packaged by me.
  • MLDonkey, the best multinetwork P2P application. Feisty and Gutsy builds of the Debian Experimental package.
  • Intel PowerTop is a Linux tool that finds the software component(s) that make your laptop use more power than necessary while it is idle. Originally Feisty and Gutsy builds of the Debian Experimental package; I am now taking snapshots from the Subversion repository (read: these packages are the bleeding edge).
  • Qt, the best cross-platform GUI (and since Qt4, non-GUI) toolkit. These packages are built with subpixel rendering and the patented TrueType font rendering enabled. Do not use these packages if you are in the USA or any other country which recognizes those patents as valid. They do not apply in Spain, where I live.
  • Samba. The latest version. At this moment, Feisty builds of the Gutsy package.
  • libNTLM is a library that implements Microsoft’s NTLM authentication. Packaged by me.
  • Strigi is a daemon which uses a very fast and efficient crawler to index the data on your hard drive (Google Desktop-like). I improved the Debian version and I am building almost-daily snapshots from Subversion.
  • SNMP++, a C++ SNMP v1, v2 and v3 library. We use it at work. Packaged by me.
  • libTomCrypt is a fairly comprehensive, modular and portable cryptographic toolkit that provides developers with a vast array of well known published block ciphers, one-way hash functions, chaining modes, pseudo-random number generators, public key cryptography and a plethora of other routines. It’s a dependency for SNMP++. Packaged by me.
  • CMake, cross-platform, open-source make system. Better than autotools. We use it at work. Feisty builds of the Gutsy package.
  • tolua++, C++ bindings for Lua 5.0 and Lua 5.1. I am only providing the Lua 5.1 version. Packaged by me.
  • mod_auth_ntlm_winbind. An Apache module providing NTLM authentication in cooperation with Samba 3. I improved the official packaging a bit.

* Not all packages have a Gutsy version yet, as I only started Gutsy builds today, motivated by a request on #strigi. Furthermore, I am not providing my own Gutsy builds of packages already in Gutsy.

I am gathering a team to make a proposal to hold aKademy 2008 in Valencia (Spain) in July 2008. If you want to help, contact me via e-mail (my address is at the bottom of the website). Several people have already expressed interest and I have a draft, but I need to make sure we will have enough people to deal with everything before I submit the proposal.
Update: I have created a mailing list for people who want to help us; subscription is now open.

I am creating Debian packages for an SNMP library in C++ which, unfortunately, does not properly set the soname or versioning information in its makefile. Under the Debian guidelines, it is unacceptable to ship a library in that state.

If you are a C programmer you don’t need to worry much: binary compatibility is usually not an issue in C, as it lacks polymorphism.

If you are a C++ programmer and you don’t know what an ABI is or why binary compatibility matters, you must first read and understand what an API, an ABI and name mangling are.

Even if you already knew about ABIs and name mangling, you probably don’t know about binary compatibility in shared objects (libraries). Don’t worry: ABI compatibility is non-obvious, and most C++ programmers don’t know the first thing about it.

The idea is quite simple: if you carelessly change the API of your library, you have also changed its ABI and, most probably, due to name mangling and virtual tables (needed to support polymorphism), the new version of the library is ABI-incompatible with the older version.

There are two kinds of changes you can make: extend the API/ABI while keeping it compatible with former versions, or extend the API/ABI and break binary compatibility. There is very good information about which changes keep your library binary-compatible and which break it in the KDE wiki and in the Qt FAQ.

Once ABI compatibility is clear, the next question arises: how does my library state its binary compatibility? There are essentially two mechanisms for that: the Sun Solaris-specific mechanism (which is great, by the way) and sonames (used by Linux and most other Unices). There is plenty of information about sonames in the libtool manual and the Debian Libraries Packaging Guide, and a good example at LinuxQuestions. For more information about the Solaris mechanism, read carefully the DSO howto by Ulrich Drepper (page 35, section 3.3: “ABI Versioning”).

Two important things you have to remember:

  • If your library version is the same as your soname version, you have not understood a thing of what I said above.
  • If your library is stable, follow these guidelines.

In case you really want to keep ABI compatibility for whatever reason (you are developing a widespread C++ library and the user might already have a newer/older version installed, you don’t want to bump versions unless absolutely necessary, etc.), take a look at the opaque pointer technique (AKA d-pointer) invented by Trolltech.

For almost 4 years I worked for a small company, Venue Network, as a Systems Administrator. At the beginning my job meant dealing with Windows 2000 Server and Windows 2003 Server systems. As time went by I was able to introduce Linux and FreeBSD servers at some clients, saving them money and us hassle. During my last 18 months there I barely touched Windows systems: the increasing demand for Linux and the storage-hungry users led me to focus on SANs, NASes and Linux. I still did some very specific (read: complex) work on webservers, but that was the exception, as I was already overloaded with work.

One day at the end of October 2006 I received an e-mail from another company saying they had read about me on the aKademy 2006 site (I gave a talk last year) and would like to know more about me. I sent them my phone number and the next day we talked on the phone for about 20 minutes: they wanted me to work as a C++/Linux/Qt developer. I told Jesús (the CTO and one of the founders of the company) I had never held a developer job. The closest thing I had done was a summer internship in 2002 as a multimedia script writer, but I didn’t think that qualified: I was not the person they were looking for. He insisted and we arranged a meeting for the next week at their offices. Truth is, I thought Jesús was crazy and I would be wasting his time and mine, but I agreed. How could I possibly have a job as a C++ developer? It had been years since I had programmed in C/C++, and I only developed as a hobby, in Ruby (Ruby, QtRuby, Rails, etc.). My visit to Arisnova went very well: Jesús was full of confidence that I would be able to do the job, and he was so convincing that even I started to believe it (actually, he was so confident that when I tried to hand him my resume, he declined it :-O).

Would it work? Venue Network was a tiny company where I held a very comfortable position and had already earned my medals; I did not need to prove anything anymore. At Arisnova I was going to start from scratch!

Fast forward to May 2007.

Turns out I accepted the offer, and I have now been working for Arisnova for 4 months. My main job is porting our Integrated Platform Management System from Windows to Linux (auxiliary libraries, middleware, applications, everything). This software manages ships (frigates, corvettes, etc.); it has been in use on Windows for several years now, ships running it have been sold to several countries, and they are all very impressed with the software.

We use a lot of open source for the IPMS: Qt, Boost, ACE, ZeroC ICE, OpenSceneGraph, Lua, and the list goes on. As the building blocks were already cross-platform, the port is proving easier than everybody expected (including me).

The main innovation coming with the Linux version is the move to KDE: the Windows version depends on several ActiveX components for video, documentation, videoconferencing and some other features. Obviously ActiveX does not work on Linux, so the first thing you think is that we would need two different branches of code or a hell of a lot of #ifdefs. Not! (sorry, I couldn’t resist). Thankfully, being a KDE bigot is going to benefit our IPMS: KDE4 is multiplatform (Linux/Unix, Mac and Windows), therefore we will be making extensive use of KParts and almost every new technology KDE4 features: Phonon, Decibel, Strigi, etc. (by the way, GNOME is not even close to this). We will also be using CMake.

As the port has progressed at a faster pace than we expected, and we’d like KDE4 to be quite stable before we invest our time in it, I have some time to fiddle with other things. Something I am looking at for the third version of our IPMS, which is currently in its inception, is Flash. Is it possible to integrate Flash into a desktop application (our GUI) and make it feel natural to the user? Will we need to embed a WebKit/Konqueror/whatever component as a “proxy” between the application and Flash? I don’t know yet, but I am currently investigating every lead: dlopen, libflashsupport, XEmbed (which has been pretty easy to use since Qt 4.1).

In summary, I am very happy I moved to Arisnova: the job is interesting, I am learning a lot, people are nice, I am performing way better than I (and everybody else) expected, and I see exciting challenges coming. Thank you, guys!

I have started a new open source project called Destral. It is a command-line utility to split and join files, much like Hacha and HJSplit.

The main advantages of Destral over Hacha and HJSplit are:

  • Multiplatform
    It is written in pure C, therefore it should build on every operating system with a C compiler.
    This single utility works the same on Linux, Windows, Mac, etc.: forget about using a different utility on each operating system. Same usage, same flags, same everything.
  • Destral is able to split and join using the Hacha 3.0, Hacha 3.5 and HJSplit formats. To state it clearly: Destral does not use a new split-and-join algorithm, and it does not need Hacha 3.0, Hacha 3.5 or HJSplit to work; I have implemented their algorithms myself.
  • Destral is intelligent and uses sensible defaults.
    Most times you will not need to tell it which split-and-join algorithm you want to use: it will figure it out.
    For instance, when you want to join several chunks into a file you just run destral -j myfile.0, or destral -j myfile.000, or destral -j myfile.001 (at this moment you need to provide the path to the first chunk, but this weekend I will make it intelligent enough to search for the first chunk if you pass it, for instance, chunk #3).

There is no release yet; if you are interested you will need to get the code via Subversion. The only dependency besides a C compiler is CMake, but building without CMake is possible and easy.

Current features:

  • Join Hacha 3.0, Hacha 3.5 and HJSplit/lxsplit files (no CRC check in Hacha files yet)
  • Multiplatform
  • It works and is very fast

Known bugs: there is an issue I just discovered with the name of the joined file under certain conditions; I will fix it soon.

Future features:

  • Fix bugs
  • Implement splitting of files, with sensible defaults: Destral will automagically select certain chunk sizes depending on the input file (it will be possible to override that using parameters).
  • GUI
  • CRC reverse engineering (the Hacha developer does not answer my e-mails, so I have no information about the CRC algorithm he is using)

In Spanish-speaking forums and websites a lot of people use Hacha (a win32-only app) to split a large file into several smaller chunks. English-speaking people prefer HJSplit, which has a Linux version called lxsplit.

On one hand, I cannot understand why people keep using these programs when you could just use a compressor (WinZip, WinRAR) and set the compression ratio to zero: it would be as fast as Hacha or HJSplit, and everybody already has WinZip and/or WinRAR. On the other hand, I cannot change people’s minds, and using Wine to run Hacha is a pain in the ass on my 64-bit Kubuntu (32-bit chroot, yadda, yadda).

I have tried to contact the author of Hacha to no avail. I suspected the algorithm was easy, but I like to play nice: I kindly requested information about the algorithm Hacha uses to split files. After some weeks without an answer, tonight I gave KHexEdit a try and, you know what? I was right: the split & join algorithm in Hacha 3.5 is extremely simple.

There is a variable-length header which consists of:

  • 5 bytes set to 0x3f
  • 4 bytes of CRC. If no CRC was computed, this is 1 byte set to 0x07 followed by 3 bytes set to 0x00. If a CRC was computed, its 4 bytes are here. I have not discovered the CRC algorithm yet.
  • 5 bytes set to 0x3f
  • Variable number of bytes representing the filename of the large file (before splitting/after joining). This is plain ASCII, no Unicode involved.
  • 5 bytes set to 0x3f
  • Variable number of bytes representing an integer which is the size of the large file (before splitting/after joining). Let’s name it largeFileSize.
  • 5 bytes set to 0x3f
  • Variable number of bytes representing the size of each chunk except the first (the one whose name ends with “.0”) and the last. Let’s call it chunkSize. The size of the first chunk is chunkSize + headerSize. The size of the last chunk is largeFileSize - (n-1)*chunkSize.
  • 5 bytes set to 0x3f

And that’s all you need to know to implement the Hacha 3.5 algorithm. I will be doing that in the next few days and releasing the program under the GPL.

Update: I had not realized there was CRC information. The information I had here corresponds to the trivial case (no CRC), but I have yet to find out the CRC algorithm. Reversing CRC – Theory and Practice seems like a good starting point.

It’s been five and a half years now since the Twin Towers attack, and Osama Bin Laden is yet to be found. As time goes by, more and more is known about the ties between the Bush family and the Bin Laden family, and about how the Bin Ladens were allowed to leave without any questioning right after September 11th.

I, therefore, have come up with a new hypothesis to explain why nobody has found Osama Bin Laden after more than five years of searching.

Say the Bushes fucked the Bin Ladens in one or more of their common businesses. Say little Osama did not take it too well. Say little Osama is using the Muslims to act on his behalf without the Muslims knowing.

Essentially, it goes like this: the Bush family played some dirty tricks on the Bin Laden business and Osama wanted retaliation. How do you retaliate against such a powerful “enemy”? Use someone else without their knowing. The Muslims were the perfect target: there had already been friction between the USA and the Muslim world for many years before Sept. 11th, 2001. Osama Bin Laden disguises himself as a radical Muslim cleric and calls for jihad against the USA, with great success. Most probably the plan was to leave the radical Muslim world after the WTC attack, but it was so successful, so compelling for many Muslim and anti-USA people, that he could not just disappear on Sept. 12th, and he is forced to keep acting, releasing speeches on tape from time to time to feed the followers of this radical Osama.

So, in summary, why haven’t we found Osama Bin Laden yet? I think it is because we are looking for the wrong Osama Bin Laden. We are looking for a long-bearded Osama Bin Laden, one wearing a djellaba and a turban. But according to my hypothesis (which I cannot prove), we should be looking for a Western-looking Saudi, one who dresses just like a rich French or British man would: Armani suit, most probably without a beard or moustache of any kind, doing business here and there and keeping himself far away from Muslims. He might even be pretending to be a Christian or an atheist. I make this proposition to the authorities: go and try to find that Osama Bin Laden; I am pretty confident you will find him.

One more thing: if my hypothesis is correct, Osama Bin Laden might have tried to use other useful fools before the Muslims (Hugo Chávez/Fidel Castro, radical Jews, the Chinese, etc.), but none of them dared to conduct such an attack on the USA.