This is a short one and probably doable as a Summer of Code project.

The idea: add support for the Microsoft compiler and linker, and for Visual Studio projects and solutions (.sln, .vcproj, etc.), in KDevelop, at least in the Windows version.

QtCreator has support for the first part (compiler and linker).

For the second part (solutions and projects), code can probably be derived (directly or indirectly) from MonoDevelop's and CMake's. The starting point would be MSBuild support, as it's what VS2010 is based on.

Bonus points if you add C#/.NET support (Qyoto/Kimono).

In a perfectly orchestrated marketing campaign for a 100% free-libre tablet called Spark that will run KDE Plasma Active, Aaron Seigo writes today about the problems they are facing with GPL violations.

Apparently, every Chinese manufacturer is breaking the GPLv2 by not releasing the sources for their modified Linux kernel. Conversation after conversation with Zenithink (designers of the Spark), Sygic (designers of the Dreambook W7), etc. has led nowhere. To the point that CordiaTab, another similar effort using Gnome instead of KDE, has been cancelled.

I have to say I am very surprised at the lack of kernel sources. What is the Free Software Foundation doing? Why don't we seek a ban on all imports of tablets whose manufacturers don't release the full GPL source?

Apple got the Samsung GalaxyTab imports blocked in Germany and Australia for something as ethereal as patents covering the external frame design. We are talking about license infringement, which is easier to demonstrate in court.

China may ignore intellectual property but they cannot ignore business, and no imports means no business. Let’s get all GPL-infringing tablet imports banned and we will get more source in two weeks than we can digest in two years. Heck, I’m surprised Apple is not trying this in court to block Android!

FOSDEM is one of the largest gatherings of Free Software contributors in the world and happens each February in Brussels (Belgium). One of the developer rooms will be the CrossDesktop DevRoom, which will host Desktop-related talks.

Are you interested in giving a talk about open source and Qt, KDE, Enlightenment, Gnome, XFCE, Windows, Mac OS X, general desktop matters, mobile development, applications that enhance desktops and/or the web?

Hurry up and submit your proposal, deadline is December 20th!

There is more information in the Call for Talks we published one month ago.

If you are interested in Qt/KDE, come visit us at the KDE booth. If you add yourself to the KDE FOSDEM 2012 wiki page, we will be able to better organize the usual dinner on Sunday and/or smaller meetings for “special interest groups”.

 

FOSDEM is one of the largest gatherings of Free Software contributors in the world and happens each February in Brussels (Belgium). One of the developer rooms will be the CrossDesktop DevRoom, which will host Desktop-related talks.

We are now inviting proposals for talks about Free/Libre/Open-source Software on the topics of Desktop development, Desktop applications and interoperability amongst Desktop Environments. This is a unique opportunity to show novel ideas and developments to a wide technical audience.

Topics accepted include, but are not limited to: Enlightenment, Gnome, KDE, XFCE, Windows, Mac OS X, general desktop matters, applications that enhance desktops, and the web (when related to the desktop).

Talks can be very specific, such as developing mobile applications with Qt Quick; or as general as predictions for the fusion of the Desktop and the web in 5 years' time. Topics that are of interest to the users and developers of all desktop environments are especially welcome. The FOSDEM 2011 schedule might give you some inspiration.

Please include the following information when submitting a proposal: your name, the title of your talk (please be descriptive, as titles will be listed alongside around 250 others from other projects) and a short abstract of one or two paragraphs.

The deadline for submissions is December 20th 2011. FOSDEM will be held on the weekend of 4-5 February 2012. Please submit your proposals to crossdesktop-devroom@lists.fosdem.org.

Also, if you are attending FOSDEM 2012, please add yourself to the KDE community wiki page so that we can organize better. We need volunteers for the booth!

 

Red Hat‘s Matthew Garrett let the cat out of the bag about a month ago: when UEFI Secure Boot is adopted by mainboard manufacturers to satisfy Microsoft Windows 8 requirements, it may very well be the case that Linux and others (BSD, Haiku, Minix, OS/2, etc) will no longer boot.

Matthew has written about it extensively and seems to know very well what the issues are (part I, part II), the details about signing binaries and why Linux does not support Secure Boot yet.

The Free Software Foundation has also released a statement and started a campaign, which is, as usual, anti-Microsoft instead of pro-solutions.

Now let me express my opinion on this matter: this is not Microsoft’s fault.

Facts

Let's look at the facts in this controversy:

  • Secure Boot is here to stay. In my humble opinion, the idea is good and it will prevent and/or lessen malware effects, especially on Windows.
  • Binaries need to be signed with a certificate from the binaries’ vendor (Microsoft, Apple, Red Hat, etc)
  • The certificate that signs those binaries needs to be installed in the UEFI BIOS
  • Everybody wants their certificate bundled with the UEFI BIOS so that their operating system works “out of the box”
  • Given that there are many UEFI and mainboard manufacturers, getting your certificate included is not an easy task: it requires time, effort and money.

Problem

The problem stems from the fact that most Linux vendors do not have the power to get their certificates in UEFI BIOS. Red Hat and Suse will for sure get their certificates bundled in server UEFI BIOS. Debian and Ubuntu? Maybe. NetBSD, OpenIndiana, Slackware, etc? No way.

This is, in my humble opinion, a serious defect in the standard. A huge omission. Apparently, while developing the Secure Boot specification, everybody was busy talking about signed binaries, yet nobody thought for a second about how the certificates would get into the UEFI BIOS.

What should have been done

The UEFI secure boot standard should have defined an organization (a “Secure Boot Certification Authority”) that would issue and/or receive certificates from organizations/companies (Red Hat, Oracle, Ubuntu, Microsoft, Apple, etc) that want their binaries signed.

This SBCA would also be in charge of verifying the background of those organizations.

There is actually no need for a new organization: just use an existing one, such as Verisign, which already performs this task for Microsoft for kernel-level binaries (Authenticode).

Given that there is no Secure Boot Certification Authority, Microsoft asked BIOS (UEFI) developers and manufacturers to include their certificates, which looks 100% logical to me. The fact that Linux distributions do not have such power is unfortunate, but it is not Microsoft’s fault at all.

What can we do?

Given its strong ties with Intel, AMD and others, maybe the Linux Foundation could start a task force and a “Temporary Secure Boot Certification Authority” to deal with UEFI BIOS manufacturers and developers.

This task force and TSBCA would act as a proxy for smaller players such as Linux and BSD distributions.

I am convinced this is our best chance to get something done in a reasonable amount of time.

Complaining will not get us anything. Screaming at Microsoft will not get us anything. We need to propose solutions.

Wait! Non-Microsoft certificates? Why?

In addition to the missing Secure Boot Certification Authority, there is a second problem apparently nobody is talking about: what is the advantage mainboard manufacturers get from including non-Microsoft certificates?

For instance: why would Gigabyte (or any other mainboard manufacturer) include the certificate for, say, Haiku?

The benefit for Gigabyte would be negligible, and if someone with ill intentions got hold of Haiku's certificate, malware signed with it would be installable on all of Gigabyte's mainboards. This would lead to manufacturer-targeted malware, which would be fatal to Gigabyte: “oh, you want to be immune to the grandchild of Stuxnet? Buy (a computer with) an MSI mainboard, which does not include Haiku's certificate”.

Given that 99% of desktops and laptops only run Windows, the likely result of this (yet unresolved) problem is that manufacturers will only install Microsoft certificates, making their boards immune to malware signed with, say, a Slackware certificate in the wild.

If we are lucky, mainboard manufacturers will give us a utility to install more certificates at our own risk.

The first problem looks easy to solve. The second one worries me much more.

 

Application virtualization is an umbrella term that describes software technologies that improve portability, manageability and compatibility of applications by encapsulating them from the underlying operating system on which they are executed.

A fully virtualized application is not installed in the traditional sense, although it is still executed as if it were. The application is fooled at runtime into believing that it is directly interfacing with the original operating system and all the resources managed by it, when in reality it is not.

In this context, the term “virtualization” refers to the artifact being encapsulated (application), which is quite different to its meaning in hardware virtualization, where it refers to the artifact being abstracted (physical hardware).

So Wikipedia says.

In layman's terms, application virtualization is “I want to run a third-party application without it requiring installation”. You probably know it as a “portable application”.

The trivial case

For simple Windows applications, application virtualization amounts to copying the .exe and any required DLLs to some folder, then running the application from that folder.

Easy, huh?

Well, no.

Registry keys

Many applications add keys to the registry and will not run without those keys. Some even require other files, which are created in AppData, CommonAppData, etc.

To solve this kind of dependency, application virtualization software packages such as VMware ThinApp, Microsoft App-V and Citrix XenApp monitor the registry and the filesystem for changes. To do so, they require an “application virtualization preparation” procedure. It is as if you took a clean virtual machine, made a snapshot, installed the application, made another snapshot, and compared the two.

You could achieve more or less the same result for free by using FileMon + RegMon + RegShot.

Of course I am simplifying more than a bit here 🙂
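
To make the idea a bit more concrete, here is a minimal sketch in C (Win32 registry API; the key name is invented for this example) that lists the value names under one registry key, the kind of before/after listing that snapshot tools such as RegShot diff:

    /* Hypothetical example: enumerate value names under one key, as a
     * snapshot/compare tool would do before and after installing the app.
     * Link against advapi32. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY key;
        /* "Software\\ExampleVendor\\ExampleApp" is a made-up key */
        if (RegOpenKeyExA(HKEY_CURRENT_USER, "Software\\ExampleVendor\\ExampleApp",
                          0, KEY_READ, &key) != ERROR_SUCCESS)
            return 1;

        char name[256];
        DWORD index = 0;
        for (;;) {
            DWORD name_len = sizeof(name);
            if (RegEnumValueA(key, index++, name, &name_len,
                              NULL, NULL, NULL, NULL) != ERROR_SUCCESS)
                break;                 /* ERROR_NO_MORE_ITEMS ends the loop */
            printf("%s\n", name);      /* one line per value; diff two runs */
        }

        RegCloseKey(key);
        return 0;
    }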

Common files

Then there is the problem of file redirection.

The whole point of application virtualization is to be able to run the application without installation. But if the application's setup installed files in CommonAppData (or the application creates them there), how do we force the application to look for them in our “virtualized application folder”?

The technique usually involves generating a small .exe launcher (a minimal sketch follows this list) that:

  • Sets a special PATH so that our sandbox folder is searched first
  • Uses hooking, DLL injection and other low-level tricks to force system calls to look in the places we want before (or even instead of) the standard places
  • Emulates the registry (in combination with hooking and DLL injection) to make sure registry keys are taken from the sandbox instead of the actual registry
  • And more
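
Here is the launcher sketch promised above, reduced to the PATH trick only (the folder and executable names are invented; real products layer hooking and DLL injection on top of this):

    /* Minimal launcher sketch: prepend the sandbox folder to PATH, then
     * start the real executable. All paths here are hypothetical. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        const char *sandbox = "C:\\VirtualizedApp";          /* made-up sandbox folder */
        const char *target  = "C:\\VirtualizedApp\\app.exe"; /* made-up application */

        /* Prepend the sandbox folder so its DLLs and data files are found first */
        char old_path[4096] = "";
        GetEnvironmentVariableA("PATH", old_path, sizeof(old_path));

        char new_path[8192];
        snprintf(new_path, sizeof(new_path), "%s;%s", sandbox, old_path);
        SetEnvironmentVariableA("PATH", new_path);

        /* Launch the real application; it inherits the modified environment */
        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        if (!CreateProcessA(target, NULL, NULL, NULL, FALSE, 0,
                            NULL, sandbox, &si, &pi)) {
            fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
            return 1;
        }
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return 0;
    }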

In a way, we are cracking the application.

Cross-platform

What I have described above are the required steps to virtualize an application if the target platform (where the application is going to be run) is the same as the host platform (where the application virtualization was performed). For instance, if we are virtualizing the application on Windows XP SP3 32-bit and the application will run on Windows XP SP3 32-bit.

Even if the steps described above look complex to you (they are), that’s the simple case of application virtualization.

If the target platform is a different version (for instance, the app virtualization takes place on Windows XP 32-bit, while the target platform also includes Windows 7 64-bit), then it gets more complex: probably many Windows DLLs will need to be redistributed. Maybe even ntoskrnl.exe from the original system is required.

In addition to the serious licensing problem redistribution poses to anybody but Microsoft's own App-V, there is the technical side: how do you run Windows XP SP3 32-bit ntoskrnl.exe on Windows 7 SP1 64-bit? You cannot, which makes some applications (very few) impossible to virtualize.

Device drivers

The more general case for “impossible to virtualize applications” (we are always talking about application virtualization here, not full virtualization or paravirtualization) is device drivers.

None of the application virtualization solutions that exist so far can cope with drivers, and for good reason: installing a driver requires administration permissions, verifying certificates, loading at the proper moment, etc. All of that needs to be done at kernel level, which requires administration permissions, thus defeating the point of application virtualization. Then there are the technical complications (the complexity of the implementation), and the fact that drivers assume they run with administration permissions and may therefore perform actions that require them. If the driver is also app-virtualized, it will not run with administration permissions.

One possible remedy to the “driver virtualization problem” would be to implement a “driver loader process” that requires installation and will run with administration permissions. Think of it as the “runtime requirement” of our application virtualization solution.

I know, I know, it sounds contradictory: the whole point of my app virtualization solution is to avoid application installation, yet the solution itself requires a runtime installation? But we are talking about a one-time installation, which would be perfectly acceptable in a business environment.

But why all this!?

You may wonder what prompted this braindump.

The reason is that the job that pays my bills requires virtualization. Given that we need driver support, we resorted to VirtualBox, but we run a lot of instances of the applications we need to virtualize, which means we need a lot of Windows licenses for those virtual machines. Worse: we cannot know how many Windows licenses we need, because the client may start any number of them and should not notice they run in virtual machines! Even worse: Microsoft have a very stupid attitude in the license negotiations and keep trying to force us to use Windows XP Mode and/or Hyper-V, which are not acceptable to us due to their technical limitations.

So I flipped the “think different” switch in my brain and came up with a solution: let's port Wine to Windows and use it to virtualize applications.

The case for a Wine-based Application Virtualization solution

Crazy? I don’t think so. In fact, there would be a number of advantages:

  • No need for Windows licenses
  • No need for VirtualBox licenses
  • Wine already sandboxes everything into WINEPREFIX, and you can have as many of them as you want
  • You can tell wine to behave like Windows 2000 SP3, Windows XP, Windows XP SP3, Windows 7, etc. With a simple configuration setting you can run your application in the environment it has been certified for. Compare that to configuring and deploying a full virtual machine (and thankfully this has been improved a lot in VBox 4; we used to have our own import/export in VBox 3)
  • No need to redistribute or depend on Microsoft non-freely-redistributable libraries or executables: Wine implements them
  • Wine has some limited support for drivers. Namely, it already supports USB drivers, which is all we need at this moment.

Some important details…

There are of course problems too:

  • Wine only compiles partially on Windows (as a curiosity, this is Wine 1.3.27 compiled for Windows using MinGW + MSYS)
  • The most important parts of wine are not available for Windows so far: wineserver, kernel, etc
  • Some very important parts of wine itself, no matter what platform it runs on, are still missing or need improvement: MSI, support for installing the .NET framework (Mono is not enough, it does not support C++/CLI), more and more complete Windows “core” DLLs, LdrInitializeThunk and others in NTDLL, etc
  • Given that Windows and the Linux kernel are very different, some important architectural changes are required to get wine running on Windows

Is it possible to get this done?

I’d say a team of 3-4 people could get something working in 1 year. This could be sold as a product that would directly compete with ThinApp, XenApp, App-V, etc.

We even flirted with the idea of developing this ourselves, but we are under too much pressure on the project that needs this solution; we cannot commit 3-4 developers for 1-2 years to something that is not the main focus of development.

Pau, I’m scared!

Yes, this is probably the most challenging “A wish a day” so far. Just take a look at the answer to “Am I qualified?” on Codeweavers' internships page:

Am I Qualified?

Probably not. Wine’s probably too hard for you. That’s no insult; it’s too hard for about 99% of C programmers out there. It is, frankly, the hardest programming gig on the planet, bar none. But if you think you’re up for a real challenge, read on…

Let’s say, just for the sake of argument, that you’re foolish enough that you think you might like to take a shot at hacking Wine. […]

I have been programming for a number of years already. I have seen others introduce bugs, and I have also introduced (and solved!) many bugs while coding. Off-by-one errors, buffer overflows, treating pointers as pointees, different behaviors of the same function (this is especially true for cross-platform applications), race conditions, deadlocks, threading issues. I think I have seen quite a few of the typical issues.

Yet recently I lost a lot of time to what I would call the most stupid C bug in my career so far, and probably ever.

I am porting a Unix-only application which uses tmpfile() to create temporary files:

            else if (code == 200) {     // Downloading whole file
                /* Write new file (plus allow reading once we finish) */
                g = fname ? fopen(fname, "w+") : tmpfile();
            }

For some awkward reason, Microsoft decided their implementation of tmpfile() would create temporary files in C:\, which is totally broken for normal (non-Administrator) users, and even for Administrator users under Windows 7.

In addition to that, the application did not work due to another same-function-different-behavior problem.

To make sure I did not forget about the tmpfile() issue, I dutifully added a FIXME in the code:

            else if (code == 200) {     // Downloading whole file
                /* Write new file (plus allow reading once we finish) */
                // FIXME Win32 native version fails here because Microsoft's version of tmpfile() creates the file in C:\
                g = fname ? fopen(fname, "w+") : tmpfile();
            }
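
For reference, here is a minimal sketch of one possible workaround, assuming the Win32 GetTempPathA/GetTempFileNameA APIs (portable_tmpfile is a hypothetical name): create the temporary file under the user's temp directory instead of relying on Microsoft's tmpfile().

    #include <windows.h>
    #include <stdio.h>

    /* Hypothetical replacement for tmpfile() on Windows: create the temporary
     * file under %TEMP% instead of C:\. Unlike tmpfile(), the file is not
     * removed automatically when it is closed. */
    static FILE *portable_tmpfile(void)
    {
        char dir[MAX_PATH], path[MAX_PATH];

        if (GetTempPathA(sizeof(dir), dir) == 0)
            return NULL;
        /* Creates an empty, uniquely named file such as %TEMP%\tmp1234.tmp */
        if (GetTempFileNameA(dir, "tmp", 0, path) == 0)
            return NULL;

        return fopen(path, "w+b");
    }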

Read More →

CMake is a unified, cross-platform, open-source build system that enables developers to build, test and package software by specifying build parameters in simple, portable text files. It works in a compiler-independent manner and the build process works in conjunction with native build environments, such as Make, Apple's Xcode and Microsoft Visual Studio. It also has minimal dependencies (C++ only). CMake is open source software and is developed by Kitware (taken from Wikipedia).

But you already know that because CMake is the build system we use in KDE 🙂

When I started to use CMake at work to port a large project from Windows to Linux, I needed to write a lot of modules (“finders”). Those modules had to work on Windows and Linux, and sometimes with different versions of some of the third-party dependencies. Parsing header files, Windows registry entries, etc. was often required.

If you have ever parsed text, you know how powerful regular expressions are. If you use old software, you know how great Perl Compatible Regular Expressions (PCRE) are, because you do not have them available :-).

Read More →

Microsoft Windows has supported a feature called Shadow Copy since Windows Server 2003, using Windows 2000 or newer as the client.

Shadow copies, better known to many users as “Previous versions”, are snapshots of a filesystem (“drive”, for Windows users) taken every so often. They are not a replacement for backups or RAID, just a convenient method to access old versions of your files. Think of it as a revision control system for any file, except that “revisions” are taken at time intervals instead of every time there are changes, which means not all the changesets are “captured”.

Samba implements the server part of shadow copy since version 3.0.3, released in April 2004, and makes them available to Windows clients.

In the Unix world, you can get more or less the same features, albeit only locally (not for NFS or Samba shared folders), by using filesystems such as ZFS, HAMMER (apparently only available on DragonFly BSD), Btrfs, Zumastor and many others. The main problem is that adoption of those solutions is marginal, and the snapshots are not available through the network (“shared folders”).

Sun (now Oracle) implemented a “Previous versions”-like feature for ZFS snapshots in Nautilus in October 2008.

KDE once had something like that, but for the ext3cow file system. It was an application called Time Traveler File Manager (user manual, interface design and details) by Sandeep Ranade. The website is down and ext3cow is effectively dead now.

By this time you have probably guessed my wish for today: implement “Previous versions” support for Samba, ZFS, etc. in Dolphin, the KDE file manager, or even better, in KIO, so that it is available to all KDE applications.

I’m surprised this has gone mostly unnoticed: two Japanese researchers wanted to port the Kernel-based Virtual Machine from Linux to Windows. In order to do that, they created a compatibility layer that allows you to run Linux drivers on Windows (in kernel, no less!). They gave a talk titled WinKVM: Windows Kernel-based Virtual Machine at last year’s KVM Forum, the KVM Developers Conference.

Source code is available here.

Given that Linux never removes support for any hardware from the kernel unless the driver breaks the kernel badly, the first use I can think of is obsolete hardware whose drivers no longer work on modern versions of Windows. That's an often-encountered case in military and industrial environments: LAN Emulation cards, industrial devices using proprietary buses and proprietary cards to communicate with PCs, medical devices, etc. In those cases, the hardware generally outlives the computer that runs the software by decades (the software is no problem thanks to virtualization).

A second target “market” could be devices which do not have a Windows driver. Believe it or not, they exist in niche markets (and not so niche: the free Kinect driver was initially only available for Linux).