In a perfectly orchestrated marketing campaign for Spark, a 100% free-libre tablet that will run KDE Plasma Active, Aaron Seigo writes today about the problems they are facing with GPL violations.

Apparently, every Chinese manufacturer is violating the GPLv2 by not releasing the sources for their modified Linux kernel. Conversation after conversation with Zenithink (designers of the Spark), Sygic (designers of the Dreambook W7) and others has led nowhere. To the point that CordiaTab, a similar effort using Gnome instead of KDE, has been cancelled.

I have to say I am very surprised at the lack of kernel sources. What is the Free Software Foundation doing? Why don’t we seek a ban on all imports of tablets whose manufacturers don’t release the full GPL source?

Apple got imports of the Samsung Galaxy Tab blocked in Germany and Australia over something as ethereal as patents covering the external frame design. Here we are talking about license infringement, which is far easier to demonstrate in court.

China may ignore intellectual property, but it cannot ignore business, and no imports means no business. Let’s get all GPL-infringing tablet imports banned and we will get more source in two weeks than we can digest in two years. Heck, I’m surprised Apple is not trying this in court to block Android!

Apparently, HTML5 applications are the best thing since sliced bread.

HTML5 is the first platform any mobile vendor supports: iPhone, Android, Windows Phone, BlackBerry, Symbian. All of them.

Windows 8 is said to promote HTML5 as the preferred application development solution.

I used to look kindly on that. But about a month ago I started to get worried: is HTML5 good for everything?

Long-lived applications

In military, industrial, warehouse-management, medical and similar settings, it is not rare for bespoke applications to be developed and then stay in use for many years (and I really mean many: 10, 20 or even more!), receiving only very small updates once every 5 years. Those applications, not Angry Birds, are what keeps the world running: troops know what supplies they can count on, iPhones are manufactured, FedEx is able to deliver your package and your doctor is able to check your health.

But now that everybody seems to be moving to HTML5 webapps, what happens when my warehouse-management application is a webapp and changes in the newest browsers make it stop working?

Are vain upgrades the future?

Say my webapp is released in 2014 and works fine with Firefox 14.0 and Chrome 26.0, the newest browsers at release time. Fast-forward to 2020 and Firefox 14.0 and Chrome 26.0 do not even install on a Windows 10 computer! What’s the solution?

Should the customer pay for a huge update and redesign to make it work with Firefox 27.1 and Chrome 41.0 in 2020?

A virtual machine with Windows 8 and Firefox 14.0? A portable Mozilla Firefox 14.0 on Windows 10 in 2020, just to be able to use that line-of-business application that only needs a small update once or twice every 5 years? How are the virtual machine and/or portable Firefox 14.0 different from, or better than, a fat client? What’s the advantage? I’d say none!

Native applications usually do not have that kind of problem because their APIs are much more stable. You can still run Win16 applications on Windows 7!

You don’t believe me? We may soon be developing for 76 browsers!

While HTML5 may be fine for applications that are updated very often, it makes me very uneasy to see it used in environments where applications are rarely updated: SCADA systems, warehouse management, control systems, medical records, etc.

A solution is needed

It looks like that choice of technology is going to make those applications much more expensive in the medium and long term, as customers pay for “adaptations to new browsers” (sorry, I refuse to call something an “update” or “upgrade” when it adds zero value beyond being able to run on a newer browser).

Or maybe it’s about time to define actual “HTML5 profiles”. Acid3 seems to be too weak a profile: two very different browsers may both pass Acid3, yet a webapp may work in one and fail in the other due to bugs, missing or added features, etc.

Something needs to be done.

Yup, one more year I’m attending FOSDEM

I’m going to FOSDEM, the Free and Open Source Software Developers’ European Meeting.

If you are coming, feel free to add yourself to the KDE wiki page.

If you are coming to the beer event on Friday but you don’t know anybody, make sure you bring something that identifies you as a Qt/KDE hacker! In any case, a lot of us will be around the KDE booth in the K building.

I will also spend quite some time at the CrossDesktop DevRoom, which is being run by Christophe Fergeau and myself this year.

FOSDEM is one of the largest gatherings of Free Software contributors in the world and happens each February in Brussels (Belgium). One of the developer rooms will be the CrossDesktop DevRoom, which will host Desktop-related talks.

Are you interested in giving a talk about open source and Qt, KDE, Enlightenment, Gnome, XFCE, Windows, Mac OS X, general desktop matters, mobile development, applications that enhance desktops and/or web?

We have extended the deadline for a few more days, until January 8th. If you want to submit a talk proposal, hurry up!

I have to say I am very surprised to see so few Qt/KDE talk proposals. Is there nothing interesting the Qt and KDE worlds have to say to 5,000+ people?

There is more information in the Call for Talks we published a couple of months ago.

If you are interested in Qt/KDE, come visit us at the KDE booth. If you add yourself to the KDE FOSDEM 2012 wiki page, we will be able to better organize the usual dinner on Sunday and/or smaller meetings for “special interest groups”.

 

FOSDEM is one of the largest gatherings of Free Software contributors in the world and happens each February in Brussels (Belgium). One of the developer rooms will be the CrossDesktop DevRoom, which will host Desktop-related talks.

Are you interested in giving a talk about open source and Qt, KDE, Enlightenment, Gnome, XFCE, Windows, Mac OS X, general desktop matters, mobile development, applications that enhance desktops and/or web?

Hurry up and submit your proposal, deadline is December 20th!

There is more information in the Call for Talks we published one month ago.

If you are interested in Qt/KDE, come visit us at the KDE booth. If you add yourself to the KDE FOSDEM 2012 wiki page, we will be able to better organize the usual dinner on Sunday and/or smaller meetings for “special interest groups”.

 

Here I am, with 9 other people, at the KDAB office in Berlin. We are at the KDE eV sprint, talking about promo stuff, eV stuff, corporate membership, the future, etc. Really interesting stuff.

Most of us (including our intern Inu) spent the morning trying to improve Join the Game, while others went to define a policy for what to publish on the donors page, thank-you page, etc.

I’d say it has been very productive. Everybody came with very good ideas; some of them we will finish here, for others we will need to ask for help from community members (especially artists!).

The sprint continues tomorrow.

 

FOSDEM is one of the largest gatherings of Free Software contributors in the world and happens each February in Brussels (Belgium). One of the developer rooms will be the CrossDesktop DevRoom, which will host Desktop-related talks.

We are now inviting proposals for talks about Free/Libre/Open-source Software on the topics of Desktop development, Desktop applications and interoperability amongst Desktop Environments. This is a unique opportunity to show novel ideas and developments to a wide technical audience.

Topics accepted include, but are not limited to: Enlightenment, Gnome, KDE, XFCE, Windows, Mac OS X, general desktop matters, applications that enhance desktops and web (when related to desktop).

Talks can be very specific, such as developing mobile applications with Qt Quick; or as general as predictions for the fusion of Desktop and web in 5 years’ time. Topics that are of interest to the users and developers of all desktop environments are especially welcome. The FOSDEM 2011 schedule might give you some inspiration.

Please include the following information when submitting a proposal: your name, the title of your talk (please be descriptive, as titles will be listed alongside around 250 from other projects) and a short abstract of one or two paragraphs.

The deadline for submissions is December 20th 2011. FOSDEM will be held on the weekend of 4-5 February 2012. Please submit your proposals to crossdesktop-devroom@lists.fosdem.org.

Also, if you are attending FOSDEM 2012, please add yourself to the KDE community wiki page so that we can organize better. We need volunteers for the booth!

 

Red Hat‘s Matthew Garrett let the cat out of the bag about a month ago: when UEFI Secure Boot is adopted by mainboard manufacturers to satisfy Microsoft Windows 8 requirements, it may very well be the case that Linux and others (BSD, Haiku, Minix, OS/2, etc) will no longer boot.

Matthew has written about it extensively and seems to know very well what the issues are (part I, part II), the details about signing binaries and why Linux does not support Secure Boot yet.

The Free Software Foundation has also released a statement and started a campaign which is, as usual, anti-Microsoft instead of pro-solutions.

Now let me express my opinion on this matter: this is not Microsoft’s fault.

Facts

Let’s look at the facts in this controversy:

  • Secure Boot is here to stay. In my humble opinion, the idea is good and it will prevent and/or lessen malware effects, especially on Windows.
  • Binaries need to be signed with a certificate from the binaries’ vendor (Microsoft, Apple, Red Hat, etc)
  • The certificate that signs those binaries needs to be installed in the UEFI BIOS
  • Everybody wants their certificate bundled with the UEFI BIOS so that their operating system works “out of the box”
  • Given that there are many UEFI and mainboard manufacturers, getting your certificate included is not an easy task: it requires time, effort and money.
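The trust relationship those bullet points describe can be sketched in a few lines. This is a deliberately toy model, written just to illustrate the point: real Secure Boot uses X.509 certificates and RSA signatures (not HMACs), and the database of trusted certificates lives in firmware variables. All the keys and names below are made up.

```python
import hashlib
import hmac

# Toy model of the Secure Boot flow: a vendor signs its bootloader with
# its key; the firmware accepts the binary only if the signature checks
# out against one of the keys enrolled in its database. This only shows
# the trust relationship, not the actual cryptography.

def sign(vendor_key, binary):
    """Vendor-side: produce a signature for a binary (HMAC stand-in)."""
    return hmac.new(vendor_key, binary, hashlib.sha256).hexdigest()

def firmware_accepts(enrolled_keys, binary, signature):
    """Firmware-side: boot only binaries signed by an enrolled vendor."""
    return any(hmac.compare_digest(sign(k, binary), signature)
               for k in enrolled_keys)

microsoft_key = b"microsoft-key"   # enrolled by every manufacturer
redhat_key = b"redhat-key"         # enrolled... maybe
enrolled = [microsoft_key]         # a typical consumer mainboard

bootloader = b"bootloader image bytes"
sig = sign(redhat_key, bootloader)
print(firmware_accepts(enrolled, bootloader, sig))  # False: won't boot
```

The whole controversy is in that last line: a perfectly valid signature is useless if the vendor’s certificate never made it into the firmware.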

Problem

The problem stems from the fact that most Linux vendors do not have the power to get their certificates in UEFI BIOS. Red Hat and Suse will for sure get their certificates bundled in server UEFI BIOS. Debian and Ubuntu? Maybe. NetBSD, OpenIndiana, Slackware, etc? No way.

This is, in my humble opinion, a serious defect in the standard. A huge omission. Apparently, while developing the Secure Boot specification everybody was busy talking about signed binaries, yet nobody thought for a second about how the certificates would get into the UEFI BIOS.

What should have been done

The UEFI secure boot standard should have defined an organization (a “Secure Boot Certification Authority”) that would issue and/or receive certificates from organizations/companies (Red Hat, Oracle, Ubuntu, Microsoft, Apple, etc) that want their binaries signed.

This SBCA would also be in charge of verifying the background of those organizations.

There is actually no need for a new organization: just use an existing one, such as Verisign, which already performs this task for Microsoft for kernel-level binaries (Authenticode).

Given that there is no Secure Boot Certification Authority, Microsoft asked BIOS (UEFI) developers and manufacturers to include their certificates, which looks 100% logical to me. The fact that Linux distributions do not have such power is unfortunate, but it is not Microsoft’s fault at all.

What can we do?

Given its strong ties with Intel, AMD and others, maybe the Linux Foundation could start a task force and a “Temporary Secure Boot Certification Authority” to deal with UEFI BIOS manufacturers and developers.

This task force and TSBCA would act as a proxy for minorities such as Linux, BSD, etc distributions.

I am convinced this is our best chance to get something done in a reasonable amount of time.

Complaining will not get us anything. Screaming at Microsoft will not get us anything. We need to propose solutions.

Wait! Non-Microsoft certificates? Why?

In addition to the missing Secure Boot Certification Authority, there is a second problem apparently nobody is talking about: what is the advantage mainboard manufacturers get from including non-Microsoft certificates?

For instance: why would Gigabyte (or any other mainboard manufacturer) include the certificate for, say, Haiku?

The benefit for Gigabyte would be negligible, and if someone with ill intentions got hold of Haiku’s certificate, that piece of malware would be installable on all Gigabyte mainboards. This would lead to manufacturer-targeted malware, which would be fatal to Gigabyte: “oh, want to be immune to the grandchild of Stuxnet? Buy (a computer with) an MSI mainboard, which does not include Haiku’s certificate”.

Given that 99% of desktops and laptops only run Windows, the result of this (yet unresolved) problem is that manufacturers will only install Microsoft certificates: that way their boards are immune to any malware in the wild signed with, say, a Slackware certificate.

If we are lucky, mainboard manufacturers will give us a utility to install more certificates at our own risk.

The solution to the first problem looks easy to me. The solution to the second looks much more worrying to me.

 

Application virtualization is an umbrella term that describes software technologies that improve portability, manageability and compatibility of applications by encapsulating them from the underlying operating system on which they are executed.

A fully virtualized application is not installed in the traditional sense, although it is still executed as if it were. The application is fooled at runtime into believing that it is directly interfacing with the original operating system and all the resources managed by it, when in reality it is not.

In this context, the term “virtualization” refers to the artifact being encapsulated (application), which is quite different to its meaning in hardware virtualization, where it refers to the artifact being abstracted (physical hardware).

So Wikipedia says.

In layman’s terms, application virtualization means “I want to run a third-party application without it requiring installation”. You probably know it as a “portable application”.

The trivial case

If we talk about Windows applications, for simple applications, application virtualization amounts to copying the .exe and any required DLL to some folder, then running the application from that folder.

Easy, huh?

Well, no.

Registry keys

Many applications add keys to the registry and will not run without those keys. Some even require other files, which are created in AppData, CommonAppData, etc.

To solve this kind of dependency, application virtualization packages such as VMware ThinApp, Microsoft App-V and Citrix XenApp monitor the registry and the filesystem for changes. To do so, they require an “application virtualization preparation” procedure: start from a clean virtual machine, take a snapshot, install the application, take another snapshot, and compare the two.

You could achieve more or less the same result for free by using FileMon + RegMon + RegShot.

Of course I am simplifying more than a bit here 🙂

Common files

Then there is the problem of file redirection.

The whole point of application virtualization is to be able to run the application without installation. But if this application’s setup installed files in CommonAppData (or the application creates them), how do we force the application to look for them in our “virtualized application folder”?

The technique usually involves generating a small .exe launcher that:

  • Sets a special PATH that looks where we want first
  • Uses hooking, DLL injection and other low-level tricks to force system calls to look into the places we want before (or even instead of) the standard places
  • Emulates the registry (in combination with hooking and DLL injection) to make sure registry keys are taken from the sandbox instead of the actual registry
  • And more
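A toy version of the first trick, prepending the sandbox to the search path before starting the real executable, could look like the sketch below. The folder and executable names are hypothetical, and real products go much further with hooking and DLL injection; this only shows the PATH part.

```python
import os
import subprocess

# Toy launcher: prepend the sandbox folder to PATH so lookups that
# honour PATH find our copies of DLLs and files first, then start the
# real program. SANDBOX and TARGET are made-up example paths.
SANDBOX = r"C:\PortableApps\FooApp\sandbox"
TARGET = r"C:\PortableApps\FooApp\foo.exe"

def build_env(sandbox, base_env=None):
    """Copy the environment and put the sandbox first in PATH."""
    env = dict(base_env if base_env is not None else os.environ)
    env["PATH"] = sandbox + os.pathsep + env.get("PATH", "")
    return env

env = build_env(SANDBOX)
print(env["PATH"].split(os.pathsep)[0])  # the sandbox comes first

if os.path.exists(TARGET):  # only on a machine that has the app
    subprocess.run([TARGET], env=env)
```

Of course, PATH alone does not redirect CommonAppData or registry access; that is exactly where the hooking and injection machinery comes in.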

In a way, we are cracking the application.

Cross-platform

What I have described above are the required steps to virtualize an application if the target platform (where the application is going to be run) is the same as the host platform (where the application virtualization was performed). For instance, if we are virtualizing the application on Windows XP SP3 32-bit and the application will run on Windows XP SP3 32-bit.

Even if the steps described above look complex to you (they are), that’s the simple case of application virtualization.

If the target platform is a different version (for instance, the virtualization takes place on Windows XP 32-bit, while the target platforms also include Windows 7 64-bit), then it gets more complex: many Windows DLLs will probably need to be redistributed. Maybe even ntoskrnl.exe from the original system is required.

In addition to the serious problem redistribution poses in terms of licensing to anybody but Microsoft App-V, there is the technical side: how do you run Windows XP SP3 32-bit ntoskrnl.exe on Windows 7 SP1 64-bit? You cannot, which makes some applications (very few) impossible to virtualize.

Device drivers

The more general case for “impossible to virtualize applications” (we are always talking about application virtualization here, not full virtualization or paravirtualization) is device drivers.

None of the application virtualization solutions that exist so far can cope with drivers, and for a good reason: installing a driver involves administration permissions, verifying certificates, loading at the proper moment, etc., all of which needs to be done at kernel level, which defeats the point of application virtualization. Then there are the technical complications (the sheer complexity of the implementation), and the fact that drivers assume they run with administration permissions and may perform actions that require them; an app-virtualized driver would not have those permissions.

One possible remedy to the “driver virtualization problem” would be to implement a “driver loader process” that requires installation and will run with administration permissions. Think of it as the “runtime requirement” of our application virtualization solution.

I know, I know, it sounds contradictory: the whole point of my app virtualization solution is to avoid application installation, yet the solution itself requires a runtime installation? True, but we are talking about a one-time installation, which would be perfectly acceptable in a business environment.

But why all this!?

You may wonder why this braindump.

The reason is that the job that pays my bills requires virtualization. Given that we need driver support, we resorted to VirtualBox, but we run a lot of instances of the applications we need to virtualize, which means we need a lot of Windows licenses for those virtual machines. Worse: we cannot know how many Windows licenses we need, because the client may start any number of instances and should not notice they run in virtual machines! Even worse: Microsoft has a very stupid attitude in the license negotiations and keeps trying to force us to use Windows XP Mode and/or Hyper-V, which are not acceptable to us due to their technical limitations.

So I turned on the “think different” switch in my brain and came up with a solution: let’s port Wine to Windows and use it to virtualize applications.

The case for a Wine-based Application Virtualization solution

Crazy? I don’t think so. In fact, there would be a number of advantages:

  • No need for Windows licenses
  • No need for VirtualBox licenses
  • Wine already sandboxes everything into WINEPREFIX, and you can have as many of them as you want
  • You can tell Wine to behave like Windows 2000 SP3, Windows XP, Windows XP SP3, Windows 7, etc. With a simple configuration setting you can run your application in the environment it has been certified for. Compare that to configuring and deploying a full virtual machine (and thankfully this has improved a lot in VBox 4; we used to have our own import/export in VBox 3)
  • No need to redistribute or depend on Microsoft non-freely-redistributable libraries or executables: Wine implements them
  • Wine has some limited support for drivers. Namely, it already supports USB drivers, which is all we need at this moment.
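The WINEPREFIX point deserves a small illustration. Each prefix is just a directory holding its own fake C: drive and registry files, so isolating two instances of the same application boils down to setting one environment variable before launching Wine. The prefix paths below are made up, and this sketch assumes a system where the wine binary is available:

```python
import os
import subprocess

def prefix_env(prefix, base_env=None):
    """Environment for running a command inside a given Wine prefix."""
    env = dict(base_env if base_env is not None else os.environ)
    env["WINEPREFIX"] = prefix
    return env

def run_in_prefix(prefix, command):
    # Each prefix gets its own C: drive and registry, so instances
    # started this way are fully isolated from each other, e.g.:
    #   run_in_prefix(os.path.expanduser("~/prefixes/app1"), ["foo.exe"])
    #   run_in_prefix(os.path.expanduser("~/prefixes/app2"), ["foo.exe"])
    return subprocess.run(["wine"] + command, env=prefix_env(prefix))
```

Compare that one environment variable to cloning, configuring and booting a whole VirtualBox machine per instance.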

Some details with importance…

There are of course problems too:

  • Wine only compiles partially on Windows (as a curiosity, this is Wine 1.3.27 compiled for Windows using MinGW + MSYS)
  • The most important parts of wine are not available for Windows so far: wineserver, kernel, etc
  • Some very important parts of wine itself, no matter what platform it runs on, are still missing or need improvement: MSI, support for installing the .NET framework (Mono is not enough, it does not support C++/CLI), more and more complete Windows “core” DLLs, LdrInitializeThunk and others in NTDLL, etc
  • Given that Windows and the Linux kernel are very different, some important architectural changes are required to get wine running on Windows

Is it possible to get this done?

I’d say a team of 3-4 people could get something working in 1 year. This could be sold as a product that would directly compete with ThinApp, XenApp, App-V, etc.

We even flirted with the idea of developing this ourselves, but we are under too much pressure on the project that needs this solution; we cannot commit 3-4 developers for 1-2 years to something that is not the main focus of development.

Pau, I’m scared!

Yes, this is probably the most challenging “A wish a day” so far. Just take a look at the answer to “Am I qualified?” in Codeweavers’ internships page:

Am I Qualified?

Probably not. Wine’s probably too hard for you. That’s no insult; it’s too hard for about 99% of C programmers out there. It is, frankly, the hardest programming gig on the planet, bar none. But if you think you’re up for a real challenge, read on…

Let’s say, just for the sake of argument, that you’re foolish enough that you think you might like to take a shot at hacking Wine. […]

Interactive whiteboards, also known as “digital whiteboards”, seem to be the latest trend in education. At least from the teacher’s point of view.

Given that all my immediate family are teachers, and I have taught my mom how to use the digital whiteboard at her school, I feel like I can talk a bit about them.

A digital whiteboard is essentially a big touchscreen (100” or more). Usually the image is projected with a projector, and infrared, ultrasound or some other cheap system is used to touch-enable the board.

From a hardware point of view, it is quite simple. You can even use a WiiMote to convert any surface into a digital whiteboard thanks to Uwe Schmidt’s WiiMote Whiteboard software.

The interesting part of digital whiteboards, at least for me, is software.

I have tried three software packages myself (TeamBoard, SMART Notebook and Promethean ActivInspire), but there are many more: DabbleBoard, NotateIt, Luidia eBeam, BoardWorks, etc.

All of those can be used for education. Some of them are generic enough to be fit for business meetings (DabbleBoard, NotateIt, TeamBoard), while others are highly specialized (Interact, focused on music-teaching).

I consider ActivInspire very good, with SMART as a distant second.

Now, let’s cut the crap. Why am I talking about digital whiteboards and commercial software in Planet KDE?

For starters, because there is no open source alternative. The only open source package I have found is Open Whiteboard and it has not been updated in four years.

Then there is the Linux issue. SMART works on Linux, but that’s about it. And it’s not even the full package: some features are missing.

And of course, in KDE we happen to have the wonderful KDE Edu project!

Back in December I thought about this: would it be possible to develop a digital whiteboard software based on KDE? That’s actually why I started working on KSnapshot: screen capturing was one of the missing features in SMART.

The answer to the question is “of course”. However interested I am, though, my spare time is currently all taken by another project I am working on. I do have a clear picture of what needs to be done, and I’d love to mentor if someone is interested in taking over:

  1. Dissect ActivInspire, SMART, TeamBoard, eBeam, etc. They all have nice features and huge failures. Give me one day and I’d hand you a long list.
  2. Start small: use KParts and DBUS and take advantage of KolourPaint, KSnapshot, Flake, etc
  3. The first application would be a simple single-page, vector graphics, Paint-like program: draw lines, figures, insert text boxes (with formatting), pictures, hyperlinks, etc. Add snapshotting (via a call to kbackgroundsnapshot).
  4. Then, extend that application to allow multipage “notebooks”, some screen effects (like Calligra Stage‘s), template pages (useful for exercise sheets: “blank” pages that include date, time and letterhead), hiding the solutions, record and replay, etc
  5. Going a bit further, we could have a special “personality” of Plasma Active for education. Let’s give some actual use to those iPads kids are receiving. Anyone involved in an educational Linux distribution knows the kind of customization I am talking about.
  6. And of course let’s not forget about “apps”: let’s develop a framework for “educapplets”. Educapplets is the name I give to small edutainment games and applications (mathematics, geography, etc.). Kind of what you can do these days with JClic, but based on JavaScript + KDE Edu QML components + something like Windows Runtime but for KDE. (This is big enough to be split into two projects apart from the whiteboard project: one for the KDE RT, another for the educapplet framework.)
  7. Your ideas?

I think this project could be developed as a collaboration between KDE Edu and “KDE Business” (a hypothetical extension of KDE PIM). Being unique (open source, multiplatform, powerful), it would have a lot of potential to carve a niche in those markets.

On the other hand, this is actually an application, something built outside KDE SC, which means it might fit better as one of the projects in that hypothetical Apache-like KDE eV I talked about a few weeks ago.

Oh, by the way: some schools seem to be adopting whiteboards for children of all ages. I am strongly against it. In my humble opinion, and that’s what my experience says, computers should only be involved in the classroom after the students have mastered how to do things manually. Disagree? OK: what would you say if, when you were a student, you had been told to use a typewriter and a solar calculator instead of a notebook and a pencil? Ridiculous, isn’t it? My point, exactly.

Volunteers, please comment and/or contact me.