Application virtualization is an umbrella term that describes software technologies that improve portability, manageability and compatibility of applications by encapsulating them from the underlying operating system on which they are executed.

A fully virtualized application is not installed in the traditional sense, although it is still executed as if it were. The application is fooled at runtime into believing that it is directly interfacing with the original operating system and all the resources managed by it, when in reality it is not.

In this context, the term “virtualization” refers to the artifact being encapsulated (application), which is quite different to its meaning in hardware virtualization, where it refers to the artifact being abstracted (physical hardware).

So Wikipedia says.

In layman's terms, application virtualization is "I want to run a third-party application without it requiring installation". You probably know it as a "portable application".

The trivial case

If we talk about Windows applications, for simple applications, application virtualization amounts to copying the .exe and any required DLL to some folder, then running the application from that folder.

Easy, huh?

Well, no.

Registry keys

Many applications add keys to the registry and will not run without those keys. Some even require other files, which are created in AppData, CommonAppData, etc.

To solve these kinds of dependencies, application virtualization software packages such as VMware ThinApp, Microsoft App-V and Citrix XenApp monitor the registry and the filesystem for changes. To do so, they require an "application virtualization preparation" procedure. It's as if you had a clean virtual machine: take a snapshot, install the application, take another snapshot, and compare both.

You could achieve more or less the same result for free by using FileMon + RegMon + RegShot.
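
To get an idea of what the snapshot side involves, here is a minimal sketch (Win32 C, error handling mostly omitted; a real tool would also dump registry values and the filesystem): it recursively dumps a registry subtree, so running it before and after the installation and diffing the two dumps is a poor man's RegShot.

    #include <stdio.h>
    #include <windows.h>

    /* Recursively print every subkey below 'key'. Dump once before and once
     * after installing the application, then diff the two dumps. */
    static void dump_keys(HKEY key, const char *path)
    {
        char name[256], subpath[2048];
        HKEY sub;
        DWORD i, len;

        printf("%s\n", path);
        for (i = 0; ; i++) {
            len = sizeof(name);
            if (RegEnumKeyExA(key, i, name, &len, NULL, NULL, NULL, NULL) != ERROR_SUCCESS)
                break;
            _snprintf(subpath, sizeof(subpath), "%s\\%s", path, name);
            if (RegOpenKeyExA(key, name, 0, KEY_READ, &sub) == ERROR_SUCCESS) {
                dump_keys(sub, subpath);
                RegCloseKey(sub);
            }
        }
    }

    int main(void)
    {
        HKEY root;

        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, "SOFTWARE", 0, KEY_READ, &root) == ERROR_SUCCESS) {
            dump_keys(root, "HKLM\\SOFTWARE");
            RegCloseKey(root);
        }
        return 0;
    }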

Of course I am simplifying more than a bit here 🙂

Common files

Then there is the problem of file redirection.

The whole point of application virtualization is to be able to run the application without installation. But if this application’s setup installed files in CommonAppData (or the application creates them), how do we force the application to look for them in our “virtualized application folder”?

The technique usually involves generating a small .exe launcher that:

  • Sets a special PATH that makes the system look where we want first (see the sketch after this list)
  • Uses hooking, DLL injection and other low-level tricks to force system calls to look into the places we want before (or even instead of) the standard places
  • Emulates the registry (in combination with hooking and DLL injection) to make sure registry keys are read from the sandbox instead of the actual registry
  • And more
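
As an illustration of the first bullet, a toy launcher could look like this (Win32 C; the sandbox path is made up, and real products combine this with the hooking tricks above):

    #include <string.h>
    #include <windows.h>

    int main(void)
    {
        /* Hypothetical sandbox folder holding the app and its DLLs */
        char newpath[32768] = "C:\\VirtApp\\sandbox;";
        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi;

        /* Prepend the sandbox to PATH so DLLs are searched there first */
        GetEnvironmentVariableA("PATH", newpath + strlen(newpath),
                                (DWORD)(sizeof(newpath) - strlen(newpath)));
        SetEnvironmentVariableA("PATH", newpath);

        /* The child process inherits the modified environment */
        if (!CreateProcessA("C:\\VirtApp\\sandbox\\app.exe", NULL, NULL, NULL,
                            FALSE, 0, NULL, NULL, &si, &pi))
            return 1;
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return 0;
    }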

In a way, we are cracking the application.

Cross-platform

What I have described above are the required steps to virtualize an application if the target platform (where the application is going to be run) is the same as the host platform (where the application virtualization was performed). For instance, if we are virtualizing the application on Windows XP SP3 32-bit and the application will run on Windows XP SP3 32-bit.

Even if the steps described above look complex to you (they are), that’s the simple case of application virtualization.

If the target platform is a different version (for instance, the app virtualization takes place on Windows XP 32-bit, while the target platforms also include Windows 7 64-bit), then it gets more complex: probably many Windows DLLs will need to be redistributed. Maybe even ntoskrnl.exe from the original system is required.

In addition to the serious licensing problem redistribution poses for anybody but Microsoft's own App-V, there is the technical side: how do you run Windows XP SP3 32-bit ntoskrnl.exe on Windows 7 SP1 64-bit? You cannot, which makes some applications (very few) impossible to virtualize.

Device drivers

The more general case of "impossible to virtualize" applications (we are always talking about application virtualization here, not full virtualization or paravirtualization) is applications that come with device drivers.

None of the application virtualization solutions that exist so far can cope with drivers, and for a good reason: installing a driver requires administration permissions, verifying certificates, loading at the proper moment, etc. All of that needs to be done at kernel level, and requiring administration permissions defeats the point of application virtualization. Then there is the technical side (the sheer complexity of the implementation), and the fact that drivers assume they run with administration permissions and may therefore perform actions that require them. If the driver is app-virtualized too, it will not run with administration permissions.

One possible remedy to the “driver virtualization problem” would be to implement a “driver loader process” that requires installation and will run with administration permissions. Think of it as the “runtime requirement” of our application virtualization solution.

I know, I know, it sounds contradictory: the whole point of my app virtualization software solution is to avoid application installation, yet my app virtualization solution requires a runtime installation? We are talking about a one-time installation, which would be perfectly acceptable in a business environment.

But why all this!?

You may wonder why this braindump.

The reason is that the job that pays my bills requires virtualization. Given that we need driver support, we resorted to VirtualBox, but we run a lot of instances of the applications we need to virtualize, which means we need a lot of Windows licenses for those virtual machines. Worse: we cannot know how many Windows licenses we need, because the client may start any number of instances and should not notice they run in virtual machines! Even worse: Microsoft has a very stupid attitude in the license negotiations and keeps trying to force us to use Windows XP Mode and/or Hyper-V, which are not acceptable to us due to their technical limitations.

So that turned on the "think different" switch in my brain and I came up with a solution: let's port Wine to Windows and use it to virtualize applications.

The case for a Wine-based Application Virtualization solution

Crazy? I don’t think so. In fact, there would be a number of advantages:

  • No need for Windows licenses
  • No need for VirtualBox licenses
  • Wine already sandboxes everything into a WINEPREFIX, and you can have as many prefixes as you want (see the example after this list)
  • You can tell Wine to behave like Windows 2000 SP3, Windows XP, Windows XP SP3, Windows 7, etc. With a simple configuration setting you can run your application in the environment it has been certified for. Compare that to configuring and deploying a full virtual machine (and thankfully this has improved a lot in VBox 4; we used to have our own import/export in VBox 3)
  • No need to redistribute or depend on Microsoft's non-freely-redistributable libraries or executables: Wine implements them
  • Wine has some limited support for drivers. Namely, it already supports USB drivers, which is all we need at this moment.
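
For instance, creating two fully independent sandboxes of the same application is as simple as this (paths are made up):

    $ WINEPREFIX=$HOME/sandboxes/app1 wine setup.exe   # first isolated install
    $ WINEPREFIX=$HOME/sandboxes/app2 wine setup.exe   # a second, fully independent one
    $ WINEPREFIX=$HOME/sandboxes/app1 winecfg          # per-prefix Windows version, DLL overrides, etc.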

Some important details…

There are of course problems too:

  • Wine only compiles partially on Windows (as a curiosity, I have compiled Wine 1.3.27 for Windows using MinGW + MSYS)
  • The most important parts of Wine are not available for Windows so far: wineserver, kernel support, etc.
  • Some very important parts of Wine itself, no matter what platform it runs on, are still missing or need improvement: MSI, support for installing the .NET Framework (Mono is not enough, it does not support C++/CLI), more and more complete Windows "core" DLLs, LdrInitializeThunk and others in NTDLL, etc.
  • Given that the Windows and Linux kernels are very different, some important architectural changes are required to get Wine running on Windows

Is it possible to get this done?

I’d say a team of 3-4 people could get something working in 1 year. This could be sold as a product that would directly compete with ThinApp, XenApp, App-V, etc.

We even flirted with the idea of developing this ourselves, but we are under too much pressure on the project that needs this solution; we cannot commit 3-4 developers for 1-2 years to something that is not the main focus of development.

Pau, I’m scared!

Yes, this is probably the most challenging "A wish a day" so far. Just take a look at the answer to "Am I Qualified?" on CodeWeavers' internships page:

Am I Qualified?

Probably not. Wine’s probably too hard for you. That’s no insult; it’s too hard for about 99% of C programmers out there. It is, frankly, the hardest programming gig on the planet, bar none. But if you think you’re up for a real challenge, read on…

Let’s say, just for the sake of argument, that you’re foolish enough that you think you might like to take a shot at hacking Wine. […]

Interactive whiteboards, also known as “digital whiteboards”, seem to be the latest trend in education. At least from the teacher’s point of view.

Given that all my immediate family are teachers, and I have taught my mom how to use the digital whiteboard at her school, I feel like I can talk a bit about them.

A digital whiteboard is essentially a big touchscreen (100" or more). Usually the image is projected with a beamer, and infrared, ultrasound or some other cheap system is used to touch-enable the board.

From a hardware point of view, it is quite simple. You can even use a WiiMote to convert any surface into a digital whiteboard thanks to Uwe Schmidt's WiimoteWhiteboard software.

The interesting part of digital whiteboards, at least for me, is software.

I have tried three software packages myself (TeamBoard, SMART Notebook and Promethean ActivInspire) but there are many more: DabbleBoard, NotateIt, Luidia eBeam, BoardWorks, etc.

All of those can be used for education. Some of them are generic enough to be fit for business meetings (DabbleBoard, NotateIt, TeamBoard), while others are highly specialized (Interact, focused on music-teaching).

I consider ActivInspire very good, with SMART as a distant second.

Now, let’s cut the crap. Why am I talking about digital whiteboards and commercial software in Planet KDE?

For starters, because there is no open source alternative. The only open source package I have found is Open Whiteboard and it has not been updated in four years.

Then there is the Linux issue. SMART works on Linux, but that's about it. And it's not even the full package: some features are missing.

And of course, in KDE we happen to have the wonderful KDE Edu project!

Back in December I thought about this: would it be possible to develop digital whiteboard software based on KDE? That's actually why I started working on KSnapshot: screen capturing was one of the missing features in SMART.

The answer to the question is "of course". However interested I am, my spare time is currently all taken by another project I am working on. I do have a clear picture of what needs to be done, though, and I'd love to mentor if someone is interested in taking over:

  1. Dissect ActivInspire, SMART, TeamBoard, eBeam, etc. They all have nice features and huge failures. Give me one day and I’d hand you a long list.
  2. Start small: use KParts and D-Bus, and take advantage of KolourPaint, KSnapshot, Flake, etc. (see the sketch after this list)
  3. The first application would be a simple single-page, vector graphics, Paint-like program: draw lines, figures, insert text boxes (with formatting), pictures, hyperlinks, etc. Add snapshotting (via a call to kbackgroundsnapshot).
  4. Then, extend that application to allow multipage "notebooks", some screen effects (like Calligra Stage's), template pages (useful for exercise sheets: "blank" pages which include date, time and letterhead), hiding the solutions, record and replay, etc.
  5. Going a bit further, we could have a special “personality” of Plasma Active for education. Let’s give some actual use to those iPads kids are receiving. Anyone involved in an educational Linux distribution knows the kind of customization I am talking about.
  6. And of course let's not forget about "apps": let's develop a framework for "educapplets". Educapplets is the name I give to small edutainment games and applications (mathematics, geography, etc.). Kind of what you can do these days with JClic, but based on JavaScript + KDE Edu QML components + something like Windows Runtime but for KDE. (This is big enough to be split into two projects apart from the whiteboard project: one for the KDE RT, another for the educapplet framework.)
  7. Your ideas?
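
To make item 2 a bit more concrete, this is roughly what reusing an existing component could look like in KDE 4 (a sketch only: whether KolourPaint ships a KPart suitable for this needs checking, and the mime type query is just an example):

    #include <KMimeTypeTrader>
    #include <KParts/ReadWritePart>

    // Hypothetical: embed an image-editing KPart as the whiteboard canvas
    KParts::ReadWritePart *createCanvasPart(QWidget *parentWidget, QObject *parent)
    {
        QString error;
        // Ask KDE for a read-write part that can handle PNG images
        KParts::ReadWritePart *part =
            KMimeTypeTrader::self()->createPartInstanceFromQuery<KParts::ReadWritePart>(
                "image/png", parentWidget, parent, &error);
        return part; // 0 if no suitable part is installed
    }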

I think this project could be developed as a collaboration between KDE Edu and "KDE Business" (a hypothetical extension of KDE PIM). Being unique (open source, multiplatform, powerful), it would have a lot of potential to carve a niche in those markets.

On the other hand, this is actually an application, something built outside KDE SC, which means it might fit better as one of the projects in that hypothetical Apache-like KDE eV I talked about a few weeks ago.

Oh, by the way: some schools seem to be adopting whiteboards for children of all ages. I am strongly against that. In my humble opinion, and that's what my experience says, computers should only be involved in the classroom after the students have mastered how to do things manually. Disagree? OK: what would you say if, when you were a student, you had been told to use a typewriter and a solar calculator instead of a notebook and a pencil? Ridiculous, isn't it? My point, exactly.

Volunteers, please comment and/or contact me.

Yesterday one of our developers messed up our git repository pretty badly. We only noticed the problem after other people had pulled, committed & pushed, updated their topic branches, etc.

To make things worse, this developer was in Bangalore and it was quite difficult to figure out over the phone what had been going on. All of this while about 30 other developers kept working. It took me a while to reconstruct what had happened, but in the end I was able to fix the repository, and colleagues from Germany and India reviewed everything and checked it was OK.

In case you are interested, the easiest solution was creating a new branch from the latest unbroken commit, cherry-picking the good commits, and merging back into master.
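
In git commands, the recovery looked roughly like this (hashes are made up; depending on the mess, master may first need a git reset --hard to the good commit, which everybody else must then follow):

    $ git checkout -b rescue abc1234      # branch off the last commit that was not broken
    $ git cherry-pick def5678 9abcde0     # re-apply the good commits, one by one
    $ git checkout master
    $ git merge rescue                    # bring the rescued line back into master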

For some reason, I couldn't but think of Toni Braxton's "Un-break My Heart". Here comes my own rendition, "Un-merge my branch":

“Un-merge my branch”

Don’t leave me in all this pain
Don’t leave me out in the fail
Come back and bring back my codeline
Come and take these changesets away
I need your rights to push it now
The conflicts are so unkind
Bring back those commits where you coded side-by-side with me

Un-merge my branch
Say you’ll code me again
Undo this havoc you caused
When you pulled in the code
And reverted out my line
Un-do these changesets
I tried so many patches
Un-merge my branch
My branch

Take back that bad commit good-bye
Bring back the joy to my repo
Don’t leave me here with these changesets
Come and cherry-pick this line away
I can’t forget the day you merged
Pull is so unkind
And coding is so cruel without you side-by-side with me

Un-merge my branch
Say you’ll code me again
Undo this havoc you caused
When you pulled in the code
And reverted out of my line
Un-do these changesets
I tried so many patches
Un-merge my branch
My branch

Don’t leave me in all this pain
Don’t leave me out in the fail
Bring back those commits where you coded side-by-side with me

Un-merge my branch
Say you’ll code me again
Undo this havoc you caused
When you pulled in the code
And reverted out of my line
Un-do these changesets
I tried so many, many patches
Un-merge my

Un-merge my branch oh admin
Code back and say you commit it
Un-merge my branch
Sweet admin
Without you I just can’t code on
Can’t code on…

It still needs some rhythm and metric work, but I’m not a musician. Volunteers for singing it? 🙂

I was at the beach today and somehow this came to my mind.

When the Apache Software Foundation was born back in 1999, its sole purpose was to support the development of the Apache HTTP server. One foundation, one project, more or less like the KDE eV and KDE until very recently.

Over the years, many more projects have been born inside the ASF (Tomcat, Xerces), or have been "adopted" by the ASF (Subversion, OpenOffice).

For many years, KDE eV has been focused only on KDE.

Recently, Necessitas, the port of Qt and the Qt SDK to Android, joined under the umbrella of the KDE eV: mailing list, git repositories, announcements, etc.

Today I had this vision: should KDE eV go further and become “the Apache Software Foundation of the Qt-related world”?

A few projects that in my humble opinion would fit very well under this new umbrella:

  • MeeGo
  • The community ports of Qt to iPhone, webOS, Haiku, etc.
  • PySide (the Python Qt bindings that nearly killed PyQt are now being killed by Nokia; what an irony)
  • A Qt "target" that compiles to an HTTP server that generates AJAX, like Wt does but implemented in terms of Qt (yes, I know about QtWui, creole, etc.; all dead)
  • … others you propose?

I think that would also make Akademy more interesting, because we would have a lot of talks on many totally different topics. Lately, when I see the program of Akademy, I feel like "same, again" (maybe because I'm subscribed to many mailing lists and I keep an eye on everything? I don't know).

What do you think? Should the KDE eV widen its scope and accept other Qt-related projects, even if they are not directly related to KDE?

I have been programming for a number of years already. I have seen others introduce bugs, and I have also introduced (and solved!) many bugs while coding. Off-by-one, buffer overflow, treating pointers as pointees, different behaviors of the same function (this is especially true for cross-platform applications), race conditions, deadlocks, threading issues. I think I have seen quite a few of the typical issues.

Yet recently I lost a lot of time to what I would call the most stupid C bug in my career so far, and probably ever.

I am porting a Unix-only application which uses tmpfile() to create temporary files:

            else if (code == 200) {     // Downloading whole file
                /* Write new file (plus allow reading once we finish) */
                g = fname ? fopen(fname, "w+") : tmpfile();
            }

For some awkward reason, Microsoft decided their implementation of tmpfile() would create temporary files in C:\, which is totally broken for normal (non-Administrator) users, and even for Administrator users under Windows 7.

In addition to that, the application did not work due to another same-function-different-behavior problem.

To make sure I did not forget about the tmpfile() issue, I dutifully added a FIXME in the code:

            else if (code == 200) {     // Downloading whole file
                /* Write new file (plus allow reading once we finish) */
                // FIXME Win32 native version fails here because Microsoft's version of tmpfile() creates the file in C:\
                g = fname ? fopen(fname, "w+") : tmpfile();
            }
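
For the record, the obvious fix is a small wrapper that creates the temporary file in %TEMP% instead; a minimal sketch (the "D" mode flag is a Microsoft fopen() extension that deletes the file when it is closed):

    #include <stdio.h>
    #include <windows.h>

    /* Like tmpfile(), but creates the file in the user's %TEMP% directory
     * instead of C:\, so it also works for non-Administrator users. */
    static FILE *win32_tmpfile(void)
    {
        char dir[MAX_PATH], name[MAX_PATH];

        if (!GetTempPathA(sizeof(dir), dir))
            return NULL;
        /* Creates a uniquely-named empty file and returns its full path */
        if (!GetTempFileNameA(dir, "tmp", 0, name))
            return NULL;
        return fopen(name, "w+bD");
    }

The call above would then become g = fname ? fopen(fname, "w+") : win32_tmpfile();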


Why

Do you use WCF? Do you generate your datacontracts by means of postbuild events? Have you tried msbuild and seen a “cannot build XXXXX.csproj because DataContracts.cs was not found” error? Then keep reading.

What

MsBuild is the “next generation” nmake for Visual Studio. It’s been available since .NET 2.0 and has been bundled with Visual Studio since MSVC2008.

Same project files…

You don't need to feed msbuild special project files (such as the Makefiles nmake required); it is smart enough to understand Visual Studio solutions.

… yet slightly different behavior

MsBuild does not behave 100% like Visual Studio (devenv), specifically in regards to:

  1. Dependencies
  2. When events are run
  3. When byproducts and referenced projects are copied

In general, this will affect you whenever you use postbuild events to generate source code or binaries which are required immediately.

In the project I am involved in now, this is especially important when generating datacontracts. We use a Visual Studio post-build event for that.

Dependencies

There are two ways to express project-to-project dependencies:

  • Project-to-project references (Add Reference->Project tab) [this mechanism also handles gathering the referenced output file]. For a .csproj, this is persisted as a <ProjectReference> item in the project file.
  • Explicitly specified dependencies (Solution properties->Project Dependencies) [with this mechanism you would usually also add a File Reference to the referenced output yourself]. This writes the "ProjectSection(ProjectDependencies)" section in the .sln. Both are shown below.
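
For reference, this is roughly what each mechanism looks like on disk (names and GUIDs are made up). The project-to-project reference in the .csproj:

    <ItemGroup>
      <ProjectReference Include="..\MyProject.Data\MyProject.Data.csproj">
        <Project>{11111111-2222-3333-4444-555555555555}</Project>
        <Name>MyProject.Data</Name>
      </ProjectReference>
    </ItemGroup>

And the explicit dependency in the .sln:

    Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MyProject.Service", "MyProject.Service\MyProject.Service.csproj", "{22222222-3333-4444-5555-666666666666}"
        ProjectSection(ProjectDependencies) = postProject
            {11111111-2222-3333-4444-555555555555} = {11111111-2222-3333-4444-555555555555}
        EndProjectSection
    EndProject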

There seems to be a long-standing bug which makes msbuild not take all the dependencies into account.

As a consequence, a solution may build fine in Visual Studio (devenv) but not build at all with msbuild. I would say this bug is fixed in VS2010 SP1, but I cannot be 100% sure because I have not performed enough testing.

In addition to that, never add a dependency as a reference-to-DLL if that DLL is part of your solution (.sln). Both MsBuild and DevEnv will choke when you switch from Debug to Release, because you would be setting a reference to a Debug DLL from a Release project, or vice versa (unless you use $(ConfigurationName) in your path, of course, but people rarely remember to do that).

When events are run

MsBuild runs the postbuild event before copying the byproducts to the output path.

Visual Studio (devenv) runs the postbuild event after copying the byproducts to the output path.

This can be solved by writing an AfterBuild event manually in the .csproj, but it is quite inconvenient because there is no GUI in Visual Studio to add, edit or remove AfterBuild events. You are alone with your text editor.
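
For illustration, a hand-written AfterBuild target at the end of the .csproj would look roughly like this (the command is just a placeholder):

    <!-- After the <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" /> line -->
    <Target Name="AfterBuild">
      <!-- Runs once byproducts and references have been copied to the output path -->
      <Exec Command="svcutil /t:metadata /dataContractOnly &quot;$(TargetPath)&quot;" />
    </Target>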

When byproducts and references are copied

As said above, msbuild runs the postbuild event before copying the byproducts. Not only that: msbuild runs the postbuild event before copying the referenced projects (DLLs) to the output path.

Visual Studio (devenv), on the other hand, runs the postbuild event after copying the byproduct and its referenced projects to the output path.

In summary

DevEnv

  1. Compile
  2. Copy byproducts (DLLs, executables, etc)
  3. Copy byproducts’ referenced project output (DLLs, etc)
  4. Run postbuild event

MsBuild

  1. Compile
  2. Run postbuild event
  3. Copy byproducts (DLLs, executables, etc)
  4. Copy byproducts’ referenced project output (DLLs, etc)

Workaround

If you need the compilation byproducts in the post-build event (as is generally the case when generating datacontracts in the postbuild event), you will need a workaround.

You would think it’d be possible to tell MsBuild to behave like devenv, right? Wrong. It is not possible.

The only possible workaround I have found is to manually copy the result of the compilation to the output path in the postbuild event. Something like this:

    call "%VS100COMNTOOLS%\vsvars32.bat"
    if exist "$(ProjectDir)Temp" del /s /f /q "$(ProjectDir)Temp"
    copy "$(ProjectDir)obj\$(ConfigurationName)\$(TargetName).dll" .
    svcutil /t:metadata /dataContractOnly /directory:"$(ProjectDir)Temp\DataContract" "$(TargetName).dll"
    svcutil /t:code /dataContractOnly /r:"$(ProjectDir)..\MyProject.Data.DataContract\bin\$(ConfigurationName)\MyProject.Data.dll" /directory:"$(ProjectDir)..\$(ProjectName).DataContract" /out:DataContracts.cs "$(ProjectDir)Temp\DataContract\*.xsd"

(where "." is the output path, from which the solution generally runs; note that $(ProjectDir) already ends with a backslash. Usually you will only need to copy & paste these lines into your postbuild event.)

Not nice, but it works.

I will be teaching a short (4-hour) training seminar on the topic of mobile development using Qt Quick at University Miguel Hernández, Elche (Spain), on Thursday 21st April and Friday 22nd April. It's one of the activities of the 7th LAN Party Elx. There will be t-shirts and stickers for attendees, and a grand prize of a Nokia C7 phone, all of it courtesy of Nokia (thanks Alexandra and Thiago!). Admission is free.


The deadline for student proposals for Google Summer of Code passed an hour ago, which means we are not accepting any new proposals and students cannot modify already-submitted ones.

I have skimmed over the list and I am impressed: 8 proposals were received for 5 of my "wishes"! :-O

There was 1 proposal for libQtGit, 1 for single applications for KDE Windows, 3 for a new KDE Windows installer, 2 for KDE Demo and 1 for KDE Dolphin "Previous versions" support.

Not bad! It’s a pity I didn’t start this series earlier, because I still have a dozen more “wishes” waiting.

Mentors: go to Melange and vote, and also follow the instructions Lydia sent to kde-soc-mentor.

CMake is a unified, cross-platform, open-source build system that enables developers to build, test and package software by specifying build parameters in simple, portable text files. It works in a compiler-independent manner and the build process works in conjunction with native build environments, such as Make, Apple's Xcode and Microsoft Visual Studio. It also has minimal dependencies, requiring only a C++ compiler. CMake is open source software and is developed by Kitware (taken from Wikipedia).

But you already know that because CMake is the build system we use in KDE 🙂

When I started to use CMake at work to port a large project from Windows to Linux, I needed to write a lot of modules ("finders"). Those modules had to work on Windows and Linux, and sometimes with different versions of some of the third-party dependencies. Parsing header files, Windows registry entries, etc. was often required.

If you have ever parsed text, you know how powerful regular expressions are. If you use old software, you know how great Perl Compatible Regular Expressions (PCRE) are, because you do not have them available :-).
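
As an example of the kind of parsing a finder does, here is a minimal sketch that extracts a version number from a header (the file name and macro are made up; note the escaping that CMake's own, PCRE-less regex engine requires):

    # Hypothetical FindFoo.cmake fragment: parse the version out of foo.h
    file(READ "${FOO_INCLUDE_DIR}/foo.h" _foo_header)
    string(REGEX MATCH "#define FOO_VERSION \"([0-9]+\\.[0-9]+)\"" _match "${_foo_header}")
    set(FOO_VERSION "${CMAKE_MATCH_1}")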


Microsoft Windows has supported a feature called Shadow Copy since Windows Server 2003, using Windows 2000 or newer as the client.

Shadow copies, better known to many users as "Previous versions", are kind of snapshots of a filesystem ("drive", for Windows users) taken every so often. They are not a replacement for backups or RAID, just a convenient method to access old versions of your files. Think of it as a revision control system for any file, except that "revisions" are taken at time intervals instead of every time there are changes, which means not all the changesets are "captured".

Samba has implemented the server part of shadow copies since version 3.0.3, released in April 2004, and makes them available to Windows clients.
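
On the Samba side, exposing snapshots to Windows clients is mostly a matter of configuration; a minimal smb.conf sketch (the share and paths are made up, and the snapshots themselves must be created by the filesystem or by a cron job):

    [documents]
        path = /srv/documents
        # snapshots must appear under the share as @GMT-YYYY.MM.DD-HH.MM.SS directories
        vfs objects = shadow_copy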

In the Unix world, you can get more or less the same features, albeit only locally (not for NFS or Samba shared folders), by using filesystems such as ZFS, HAMMER (apparently only available on DragonFly BSD), Btrfs, Zumastor and many others. The main problem is that adoption of those solutions is marginal, and they are not available through the network ("shared folders").

Sun (now Oracle) implemented a "Previous versions"-like feature for ZFS snapshots in Nautilus in October 2008.

KDE once had something like that, but for the ext3cow filesystem: an application called Time Traveler File Manager (user manual, interface design and details) by Sandeep Ranade. The website is down and ext3cow is effectively dead now.

By this time you have probably guessed my wish for today: implement "Previous versions" support for Samba, ZFS, etc. in Dolphin, the KDE file manager; or even better, in KIO, so that it is available to all KDE applications.