D-Bus (Desktop Bus) is a simple, open-source inter-process communication (IPC) system that lets software applications communicate with one another. It replaced DCOP in KDE 4 and has also been adopted by GNOME, Xfce and other desktops. It is, in fact, the main interoperability mechanism in the “Linux desktop” world, thanks to the freedesktop.org standards.

The architecture of D-Bus is pretty simple: there is a dbus-daemon server process which runs locally and acts as a “message broker”, and applications exchange messages through it.
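
For instance, a Qt application typically reaches the daemon through QtDBus. A minimal sketch, using the standard freedesktop.org notifications service that any Linux desktop provides:

#include <QtCore/QCoreApplication>
#include <QtCore/QDebug>
#include <QtCore/QStringList>
#include <QtDBus/QDBusConnection>
#include <QtDBus/QDBusInterface>
#include <QtDBus/QDBusReply>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Everything goes through the session instance of dbus-daemon.
    if (!QDBusConnection::sessionBus().isConnected()) {
        qWarning() << "Cannot connect to the session bus. Is dbus-daemon running?";
        return 1;
    }

    // Ask the desktop notification service what it supports.
    QDBusInterface notifications("org.freedesktop.Notifications",
                                 "/org/freedesktop/Notifications",
                                 "org.freedesktop.Notifications",
                                 QDBusConnection::sessionBus());
    QDBusReply<QStringList> reply = notifications.call("GetCapabilities");
    if (reply.isValid())
        qDebug() << "Notification capabilities:" << reply.value();

    return 0;
}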

But of course you already knew that, because you are super-smart developers and/or users.

D-Bus on Windows

What you may not know is how much damage D-Bus is doing to open source software on Windows.

A few years ago I tried to introduce kdelibs in a large cross-platform project, but it was rejected, despite some obvious advantages, mainly because of D-Bus.

Performance and reliability back then were horrible. It works much better these days, but it still scares Windows users. In fact, you may as well replace “it scares Windows users” with “it scares IT departments in the enterprise world*”.

The reason?

A dozen processes started seemingly at random, IPC with no security at all, difficulties upgrading, killing or knowing when to update applications, and more. I’m not making this up; it has already happened to me.

* Yes, I know our friends from Kolab are doing well, but how many KDE applications on the desktop have you seen outside that “isolation bubble”?

D-Bus on mobile

Another problem is that D-Bus is not available on all platforms (Android, Symbian, iOS, etc.), which makes porting KDE applications to those platforms difficult.

Sure, Android uses D-Bus internally, but that’s an implementation detail and we don’t have access to it. That means we still need a solution for platforms where you cannot run or access dbus-daemon.

Do we need a daemon?

A few months ago I was wondering: do we really need this dbus-daemon process at all?

What we have now looks like this: every application connects to the local dbus-daemon, which routes messages between them.

As you can see, D-Bus is a local IPC mechanism, i.e. it does not allow applications to communicate over the network (although technically it would not be difficult to implement). And every operating system these days has its own IPC mechanism. Why create a new one, with a new daemon? Can’t we use an existing one?

I quickly got my first answer: D-Bus was created to expose a common API (and message and data format, i.e. a common “wire protocol”) to applications, so that it’s easy to exchange information.

As for the second question, reusing an existing IPC mechanism: it’s obvious we cannot, because KDE applications run on a variety of operating systems and each of them has a different “native” IPC mechanism. Unices (Linux, BSD, etc.) may be quite similar, but Windows, Symbian, etc. are definitely very different.

No, we don’t!

So I thought: let’s use some technospeak buzzword and make HR people happy! The façade pattern!

Let’s implement a libdbusfat which offers the libdbus API on one side but talks to a native IPC service on the other side. That way we could get rid of the dbus-daemon process and use the platform’s IPC facilities. For each platform, a different “native IPC side” would be implemented: on Windows it could be COM, on Android something else, etc.
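
To make the idea a bit more concrete, here is a purely hypothetical sketch (libdbusfat does not exist, and NativeIpcBackend/FatConnection are names made up for illustration): the public libdbus entry points would keep their current signatures, but internally hand the marshalled messages to a per-platform backend instead of a socket connected to dbus-daemon.

// Hypothetical sketch only: nothing below exists today.
#include <string>
#include <vector>

// One implementation per platform: COM on Windows, something else on Android, etc.
class NativeIpcBackend
{
public:
    virtual ~NativeIpcBackend() {}
    virtual bool connect(const std::string &busAddress) = 0;
    virtual bool send(const std::vector<unsigned char> &marshalledMessage) = 0;
    virtual std::vector<unsigned char> receive() = 0;
};

// libdbusfat would re-implement the libdbus API (dbus_bus_get,
// dbus_connection_send, ...) on top of something like this, keeping the
// D-Bus wire format so applications and QtDBus compile unchanged.
class FatConnection
{
public:
    explicit FatConnection(NativeIpcBackend *backend) : m_backend(backend) {}

    bool sendMessage(const std::vector<unsigned char> &wireFormatMessage)
    {
        // Only the transport changes; the message format stays D-Bus.
        return m_backend->send(wireFormatMessage);
    }

private:
    NativeIpcBackend *m_backend;
};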

Pros

The advantage of libdbusfat would be that applications would not need any changes and would still be able to use D-Bus, which at the moment is important for cross-desktop interoperability.

On Unix platforms, applications would link to libdbus and talk to dbus-daemon.

On Windows, Android, etc, applications would link to libdbusfat and talk to the native IPC system.

By the magic of this façade pattern, we could compile, for instance, QtDBus so that it works exactly like it does currently but does not require dbus-daemon on Windows. Or Symbian. Or Android.

QtMobility?

QtMobility implements a Publish/Subscribe API with a D-Bus backend, but it serves a completely different purpose: it’s not available to glib/GTK/EFL/etc. applications and it’s implemented in terms of QtDBus (which in turn requires dbus-daemon for D-Bus services on every platform).

It’s, in fact, a perfect candidate to become a user of libdbusfat.

Cons

A lot of work.

You need to cut dbus-daemon in half, establish a clear API which can be implemented in terms of each platform’s IPC, deal with data conversion, performance, etc. Very interesting work if you’ve got the time to do it, I must say. Perfect for a Google Summer of Code, if you already know D-Bus and IPC on a couple of sufficiently different platforms (Linux and Windows, or Linux and Android, or Linux and iOS, etc.).

Summary

TL;DR: The idea is to be able to compile applications that require D-Bus without needing to change the application. This may or may not be possible on Android, depending on the API, but it certainly is for Windows.

Are you brave enough to develop libdbusfat in a Qt or KDE GSoC?

This is a short one and probably doable as a Summer of Code project.

The idea: add support for the Microsoft compiler and linker and for Visual Studio projects and solutions (.sln, .vcproj, etc.) in KDevelop, at least in the Windows version.

QtCreator has support for the first part (compiler and linker).

For the second part (solutions and projects), code can probably be derived (directly or indirectly) from MonoDevelop‘s and CMake‘s. The starting point would be MSBuild support, as it’s what VS2010 is based on.

Bonus points if you add C#/.NET support (Qyoto/Kimono).

Application virtualization is an umbrella term that describes software technologies that improve portability, manageability and compatibility of applications by encapsulating them from the underlying operating system on which they are executed.

A fully virtualized application is not installed in the traditional sense, although it is still executed as if it were. The application is fooled at runtime into believing that it is directly interfacing with the original operating system and all the resources managed by it, when in reality it is not.

In this context, the term “virtualization” refers to the artifact being encapsulated (application), which is quite different to its meaning in hardware virtualization, where it refers to the artifact being abstracted (physical hardware).

So Wikipedia says.

In layman’s terms, application virtualization means “I want to run a third-party application without installing it”. You probably know it as a “portable application”.

The trivial case

For simple Windows applications, application virtualization amounts to copying the .exe and any required DLLs to some folder, then running the application from that folder.

Easy, huh?

Well, no.

Registry keys

Many applications add keys to the registry and will not run without those keys. Some even require other files, which are created in AppData, CommonAppData, etc.
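
As an illustration of the problem (the vendor, key and value names below are made up), this is roughly what such an application does at startup, and why it refuses to run when it has merely been copied instead of installed:

#include <windows.h>
#include <iostream>

int main()
{
    // Hypothetical key that the setup program would have created.
    HKEY key;
    LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                            "SOFTWARE\\ExampleVendor\\ExampleApp",
                            0, KEY_READ, &key);
    if (rc != ERROR_SUCCESS) {
        std::cerr << "Required registry key is missing, refusing to start\n";
        return 1;
    }

    // Read a value the application depends on (e.g. where its data lives).
    char installDir[MAX_PATH] = "";
    DWORD size = sizeof(installDir);
    RegQueryValueExA(key, "InstallDir", NULL, NULL,
                     reinterpret_cast<BYTE *>(installDir), &size);
    RegCloseKey(key);

    std::cout << "Using data from " << installDir << "\n";
    return 0;
}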

To solve this kind of dependency, application virtualization packages such as VMware ThinApp, Microsoft App-V and Citrix XenApp monitor the registry and the filesystem for changes. To do so, they require an “application virtualization preparation” procedure: start from a clean virtual machine, take a snapshot, install the application, take another snapshot, and compare the two.

You could achieve more or less the same result for free by using FileMon + RegMon + RegShot.

Of course I am simplifying more than a bit here 🙂

Common files

Then there is the problem of file redirection.

The whole point of application virtualization is to be able to run the application without installation. But if the application’s setup installed files in CommonAppData (or the application creates them there), how do we force the application to look for them in our “virtualized application folder”?

The technique usually involves generating a small .exe launcher (sketched after the list) that:

  • Sets a special PATH that looks where we want first
  • Uses hooking, DLL injection and some other low-level tricks to force system calls to look into the places we want before (or even instead of) the standard places
  • Uses registry emulation (in combination with hooking and DLL injection) to make sure registry keys are read from the sandbox’s registry instead of the real registry
  • And more
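
A minimal sketch of the first trick only, using plain Win32 calls (the folder and executable names are made up; real products combine this with the hooking and injection mentioned above):

#include <windows.h>
#include <string>

int wmain()
{
    // Hypothetical folder holding the virtualized application and its DLLs.
    std::wstring appDir = L"C:\\PortableApps\\ExampleApp";

    // Prepend our folder to PATH so the child process finds our DLLs first.
    wchar_t oldPath[32767] = L"";
    GetEnvironmentVariableW(L"PATH", oldPath, 32767);
    std::wstring newPath = appDir + L";" + oldPath;
    SetEnvironmentVariableW(L"PATH", newPath.c_str());

    // Launch the real executable; it inherits the modified environment.
    std::wstring exe = appDir + L"\\ExampleApp.exe";
    STARTUPINFOW si;
    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    PROCESS_INFORMATION pi;
    ZeroMemory(&pi, sizeof(pi));
    if (!CreateProcessW(exe.c_str(), NULL, NULL, NULL, FALSE, 0,
                        NULL, appDir.c_str(), &si, &pi))
        return 1;

    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}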

In a way, we are cracking the application.

Cross-platform

What I have described above are the required steps to virtualize an application if the target platform (where the application is going to be run) is the same as the host platform (where the application virtualization was performed). For instance, if we are virtualizing the application on Windows XP SP3 32-bit and the application will run on Windows XP SP3 32-bit.

Even if the steps described above look complex to you (they are), that’s the simple case of application virtualization.

If the target platform is a different version (for instance, the app virtualization takes place on Windows XP 32-bit, while the target also includes Windows 7 64-bit), then it gets more complex: many Windows DLLs will probably need to be redistributed. Maybe even ntoskrnl.exe from the original system is required.

In addition to the serious licensing problem redistribution poses for anybody but Microsoft’s App-V, there is the technical side: how do you run the Windows XP SP3 32-bit ntoskrnl.exe on Windows 7 SP1 64-bit? You cannot, which makes some applications (very few) impossible to virtualize.

Device drivers

The more general case for “impossible to virtualize applications” (we are always talking about application virtualization here, not full virtualization or paravirtualization) is device drivers.

None of the application virtualization solutions that exist so far can cope with drivers, and for a good reason: installing a driver requires administrative permissions, verifying certificates, loading at the proper moment, etc. All of that needs to be done at kernel level, which requires administrative permissions, thus defeating the point of application virtualization. Then there are the technical complications (the complexity of the implementation), and the fact that drivers assume they run with administrative permissions and may therefore perform actions that require them. If the driver is also app-virtualized, it will not run with those permissions.

One possible remedy to the “driver virtualization problem” would be to implement a “driver loader process” that requires installation and will run with administration permissions. Think of it as the “runtime requirement” of our application virtualization solution.

I know, I know, it sounds contradictory: the whole point of my app virtualization solution is to avoid application installation, yet the solution itself requires a runtime installation? But we are talking about a one-time installation, which would be perfectly acceptable in a business environment.

But why all this!?

You may wonder why this braindump.

The reason is that the job that pays my bills requires virtualization. Given that we need driver support, we resorted to VirtualBox, but we run a lot of instances of the applications we need to virtualize, which means we need a lot of Windows licenses for those virtual machines. Worse: we cannot know how many Windows licenses we need, because the client may start any number of instances and should not notice they run in virtual machines! Even worse: Microsoft has a very stupid attitude in the license negotiations and keeps trying to force us to use Windows XP Mode and/or Hyper-V, which are not acceptable to us due to their technical limitations.

So I turned on the “think different” switch in my brain and came up with a solution: let’s port Wine to Windows and use it to virtualize applications.

The case for a Wine-based Application Virtualization solution

Crazy? I don’t think so. In fact, there would be a number of advantages:

  • No need for Windows licenses
  • No need for VirtualBox licenses
  • Wine already sandboxes everything into WINEPREFIX, and you can have as many of them as you want
  • You can tell Wine to behave like Windows 2000 SP3, Windows XP, Windows XP SP3, Windows 7, etc. With a simple configuration setting you can run your application in the environment it has been certified for. Compare that to configuring and deploying a full virtual machine (and thankfully this has been improved a lot in VBox 4; we used to have our own import/export in VBox 3)
  • No need to redistribute or depend on Microsoft non-freely-redistributable libraries or executables: Wine implements them
  • Wine has some limited support for drivers. Namely, it already supports USB drivers, which is all we need at this moment.

Some important details…

There are of course problems too:

  • Wine only compiles partially on Windows (as a curiosity, this is Wine 1.3.27 compiled for Windows using MinGW + MSYS)
  • The most important parts of wine are not available for Windows so far: wineserver, kernel, etc
  • Some very important parts of Wine itself, no matter what platform it runs on, are still missing or need improvement: MSI, support for installing the .NET framework (Mono is not enough, it does not support C++/CLI), more and more complete Windows “core” DLLs, LdrInitializeThunk and others in NTDLL, etc
  • Given that Windows and the Linux kernel are very different, some important architectural changes are required to get wine running on Windows

Is it possible to get this done?

I’d say a team of 3-4 people could get something working in 1 year. This could be sold as a product that would directly compete with ThinApp, XenApp, App-V, etc.

We even flirted with the idea of developing this ourselves, but we are under too much pressure on the project that needs this solution; we cannot commit 3-4 developers for 1-2 years to something that is not the main focus of development.

Pau, I’m scared!

Yes, this is probably the most challenging “A wish a day” so far. Just take a look at the answer to “Am I qualified?” on CodeWeavers’ internships page:

Am I Qualified?

Probably not. Wine’s probably too hard for you. That’s no insult; it’s too hard for about 99% of C programmers out there. It is, frankly, the hardest programming gig on the planet, bar none. But if you think you’re up for a real challenge, read on…

Let’s say, just for the sake of argument, that you’re foolish enough that you think you might like to take a shot at hacking Wine. […]

Interactive whiteboards, also known as “digital whiteboards”, seem to be the latest trend in education. At least from the teacher’s point of view.

Given that all my immediate family are teachers, and I have taught my mom how to use the digital whiteboard at her school, I feel like I can talk a bit about them.

A digital whiteboard is essentially a big touchscreen (100” or more). Usually the image is projected with a beamer and they use infrared, ultrasound, or some other cheap system to touch-enable the board.

From a hardware point of view, it is quite simple. You can even use a WiiMote to convert any surface into a digital whiteboard, thanks to Uwe Schmidt’s WiiMote Whiteboard software.

The interesting part of digital whiteboards, at least for me, is the software.

I have tried three software packages myself (TeamBoard, SMART Notebook and Promethean ActivInspire), but there are many more: DabbleBoard, NotateIt, Luidia eBeam, BoardWorks, etc.

All of those can be used for education. Some of them are generic enough to be fit for business meetings (DabbleBoard, NotateIt, TeamBoard), while others are highly specialized (Interact, focused on music-teaching).

I consider ActivInspire very good, with SMART as a distant second.

Now, let’s cut the crap. Why am I talking about digital whiteboards and commercial software in Planet KDE?

For starters, because there is no open source alternative. The only open source package I have found is Open Whiteboard and it has not been updated in four years.

Then there is the Linux issue. SMART works on Linux, but that’s about it. And it’s not even the full package; some features are missing.

And of course, in KDE we happen to have the wonderful KDE Edu project!

Back in December I thought about this: would it be possible to develop digital whiteboard software based on KDE? That’s actually why I started working on KSnapshot: screen capturing was one of the missing features in SMART.

The answer to the question is “of course”. However interested I am, my spare time is currently all taken up by another project I am working on. I do have a clear picture of what needs to be done, though, and I’d love to mentor if someone is interested in taking over:

  1. Dissect ActivInspire, SMART, TeamBoard, eBeam, etc. They all have nice features and huge failures. Give me one day and I’d hand you a long list.
  2. Start small: use KParts and D-Bus and take advantage of KolourPaint, KSnapshot, Flake, etc.
  3. The first application would be a simple single-page, vector-graphics, Paint-like program: draw lines and figures, insert text boxes (with formatting), pictures, hyperlinks, etc. Add snapshotting (via a call to kbackgroundsnapshot; see the sketch after this list).
  4. Then, extend that application to allow multipage “notebooks”, some screen effects (like Calligra Stage‘s), template pages (useful for exercise sheets, “blank” pages which include date, time and letterhead), hiding the solutions, record and replay, etc.
  5. Going a bit further, we could have a special “personality” of Plasma Active for education. Let’s give some actual use to those iPads kids are receiving. Anyone involved in an educational Linux distribution knows the kind of customization I am talking about.
  6. And of course let’s not forget about “apps”: let’s develop a framework for “educapplets”. Educapplets is the name I give to small edutainment games and applications (mathematics, geography, etc.). Kind of what you can do these days with JClic, but based on JavaScript + KDE Edu QML components + something like the Windows Runtime, but for KDE. (This is big enough to be split into two projects apart from the whiteboard project: one for the KDE RT, another for the educapplet framework.)
  7. Your ideas?
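
To give an idea of how small the snapshotting piece mentioned in step 3 can start, here is a minimal Qt 4 sketch (illustration only, not the actual kbackgroundsnapshot code):

#include <QtGui/QApplication>
#include <QtGui/QDesktopWidget>
#include <QtGui/QPixmap>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // Grab the whole screen, the way a whiteboard "insert snapshot" action would.
    QPixmap snapshot = QPixmap::grabWindow(QApplication::desktop()->winId());
    snapshot.save("snapshot.png", "PNG");

    return 0;
}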

I think this project could be developed as a collaboration between KDE Edu and “KDE Business” (a hypothetical extension of KDE PIM). Being unique (open source, multiplatform, powerful), it would have a lot of potential to carve out a niche in those markets.

On the other hand, this is actually an application, something built outside KDE SC, which means it might fit better as one of the projects in that hypothetical Apache-like KDE eV I talked about a few weeks ago.

Oh, by the way: some schools seem to be adopting whiteboards for children of all ages. I am strongly against it. In my humble opinion, and that’s what my experience says, computers should only be involved in the classroom after the students have mastered how to do things manually. Disagree? OK: what would you say if, when you were a student, you had been told to use a typewriter and a solar calculator instead of a notebook and a pencil? Ridiculous, isn’t it? My point exactly.

Volunteers, please comment and/or contact me.

I was at the beach today and somehow this came to my mind.

When the Apache Software Foundation was born back in 1999, its sole purpose was to support the development of the Apache HTTP server. One foundation, one project, more or less like the KDE eV and KDE until very recently.

Over the years, many more projects have been born inside the ASF (Tomcat, Xerces), or have been “adopted” by the ASF (Subversion, OpenOffice).

For many years, KDE eV has been focused only on KDE.

Recently, Necessitas, the port of Qt and the Qt SDK to Android, joined under the umbrella of the KDE eV: mailing list, git repositories, announcements, etc.

Today I had this vision: should KDE eV go further and become “the Apache Software Foundation of the Qt-related world”?

A few projects that in my humble opinion would fit very well under this new umbrella:

  • MeeGo
  • The community ports of Qt to iPhone, webOS, Haiku, etc
  • PySide (the Python Qt bindings that nearly killed PyQt and are now being killed by Nokia; what an irony)
  • A Qt “target” that compiles to an HTTP server that generates AJAX, like Wt does but implemented in terms of Qt (yes, I know about QtWui, creole, etc — all dead)
  • … others you propose?

I think that would also make Akademy more interesting, because we would have a lot of talks on many totally different topics. Lately, when I see the Akademy program, I feel like “same, again” (maybe because I’m subscribed to many mailing lists and keep an eye on everything? I don’t know).

What do you think? Should the KDE eV widen its scope and accept other Qt-related projects, even if they are not directly related to KDE?

The deadline for student proposals for Google Summer of Code passed an hour ago, which means we are not accepting any new proposals and students cannot modify already-submitted ones.

I have skimmed over the list and I am impressed: 8 proposals were received for 5 of my “wishes“! :-O

There was 1 proposal for libQtGit, 1 proposal for single applications for KDE Windows, 3 proposals for a new KDE Windows installer, 2 proposals for KDE Demo and 1 proposal for KDE Dolphin “Previous versions” support.

Not bad! It’s a pity I didn’t start this series earlier, because I still have a dozen more “wishes” waiting.

Mentors: go to Melange and vote, and also follow the instructions Lydia sent to kde-soc-mentor.

CMake is a unified, cross-platform, open-source build system that enables developers to build, test and package software by specifying build parameters in simple, portable text files. It works in a compiler-independent manner and the build process works in conjunction with native build environments, such as Make, Apple‘s Xcode and Microsoft Visual Studio. It also has minimal dependencies (it only requires a C++ compiler). CMake is open source software and is developed by Kitware. (Taken from Wikipedia.)

But you already know that because CMake is the build system we use in KDE 🙂

When I started to use CMake at work to port a large project from Windows to Linux, I needed to write a lot of modules (“finders”). Those modules had to work on Windows and Linux, and sometimes with different versions of some of the third-party dependencies. Parsing header files, Windows registry entries, etc was often required.

If you have ever parsed text, you know how powerful regular expressions are. If you use old software, you know how great Perl Compatible Regular Expressions (PCRE) are, precisely because you do not have them available :-).


Microsoft Windows has supported a feature called Shadow Copy since Windows Server 2003, using Windows 2000 or newer as the client.

Shadow copies, better known by many users as “Previous versions”, are a kind of snapshot of a filesystem (“drive”, for Windows users) taken every so often. They are not a replacement for backups or RAID, just a convenient method to access old versions of your files. Think of it as a revision control system for any file, except that “revisions” are taken at time intervals instead of every time there are changes, which means not all changesets are “captured”.

Samba has implemented the server side of shadow copies since version 3.0.3, released in April 2004, and makes them available to Windows clients.

In the Unix world, you can get more or less the same feature, albeit only locally (not for NFS or Samba shared folders), by using filesystems such as ZFS, HAMMER (apparently only available on DragonFly BSD), Btrfs, Zumastor and many others. The main problem is that adoption of those solutions is marginal and the snapshots are not available over the network (“shared folders”).

Sun (now Oracle) implemented a “Previous versions”-like feature for ZFS snapshots in Nautilus in October 2008.

KDE once had something like that, but for the ext3cow filesystem. It was an application called Time Traveler File Manager (user manual, interface design and details) by Sandeep Ranade. The website is down and ext3cow is effectively dead now.

By now you have probably guessed my wish for today: implement “Previous versions” support for Samba, ZFS, etc. in Dolphin, the KDE file manager, or even better, in KIO, so that it is available to all KDE applications.

When Stephen Elop announced Nokia was adopting Windows Phone 7 as its main development platform for the future, many of us felt a mix of fear for our jobs, incredulity (are they committing suicide?), anger at the betrayal of open source, and then curiosity. What is this WP7 about, and what does development for it look like?

The canonical development platform for WP7 consists of Visual Studio 2010 and managed applications developed in .NET 4, using C# as the programming language and WPF for the user interface. If you are not into .NET, you may be a bit lost now.

Windows Phone 7 is an embedded operating system by Microsoft. It is the successor to Windows Mobile 6.5, but incompatible with it, and it is targeted at mobile phones.

Visual Studio is Microsoft’s development environment, including IDE, compiler, debugger, profiler and much more. In my opinion, it is one of the finest applications Microsoft has developed. It is many years ahead of QtCreator, KDevelop, Eclipse or anything else.

.NET is a framework Microsoft started more than a decade ago. It includes libraries, a language-independence infrastructure, memory management, localization and many more features. Applications can be compiled as native code (for x86 or x64) or as managed code (bytecode); .NET also includes a bytecode JIT compiler. You can target the .NET platform in many languages, including, but not limited to, Visual Basic .NET, C#, F#, C++/CLI (it’s NOT the C++ you know!), etc.

Interestingly, you can also compile normal C++ to .NET, but it will contain unmanaged code (i.e. pointers and code where memory allocations/deallocations are performed by you instead of the runtime). WP7 requires that all code that runs on it be managed. This means that Qt cannot run on WP7, even though it may be compiled targeting the .NET platform and would run on desktop operating systems.
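
To give a feel for why C++/CLI is “not the C++ you know”, here is a tiny snippet (illustration only; it must be compiled with /clr and is managed code):

// C++/CLI, not standard C++: compile with cl /clr
ref class Greeter                         // a managed, garbage-collected class
{
public:
    void SayHello()
    {
        System::Console::WriteLine("Hello from managed C++/CLI");
    }
};

int main()
{
    Greeter^ g = gcnew Greeter();         // ^ is a handle, not a raw pointer
    g->SayHello();
    return 0;                             // no delete: the GC owns the object
}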

C# is a multi-paradigm programming language developed by the father of Delphi. It is very powerful and looks a lot like C++11. When I was learning C#, I couldn’t help but wonder what the actual implementation of C# features would look like behind the scenes, in C and C++.

WPF is Windows Presentation Foundation (formerly known as Avalon), the technology (library) used for the GUI. In plain terms, it’s a library which provides accelerated everything for Windows applications: widgets, 2D, 3D, scalable graphics, etc

When developing UIs in WPF, you can either use C# code:

// Create a Button with a string as its content.
Button stringContent = new Button();
stringContent.Content = "This is string content of a Button";

// Create a Button with a DateTime object as its content.
Button objectContent = new Button();
DateTime dateTime1 = new DateTime(2004, 3, 4, 13, 6, 55);

objectContent.Content = dateTime1;

// Create a Button with a single UIElement as its content.
Button uiElementContent = new Button();

Rectangle rect1 = new Rectangle();
rect1.Width = 40;
rect1.Height = 40;
rect1.Fill = Brushes.Blue;
uiElementContent.Content = rect1;

// Create a Button with a panel that contains multiple objects 
// as its content.
Button panelContent = new Button();
StackPanel stackPanel1 = new StackPanel();
Ellipse ellipse1 = new Ellipse();
TextBlock textBlock1 = new TextBlock();

ellipse1.Width = 40;
ellipse1.Height = 40;
ellipse1.Fill = Brushes.Blue;

textBlock1.TextAlignment = TextAlignment.Center;
textBlock1.Text = "Button";

stackPanel1.Children.Add(ellipse1);
stackPanel1.Children.Add(textBlock1);

panelContent.Content = stackPanel1;

or use XAML, which is an XML-based language for WPF:

<!--Create a Button with a string as its content.-->
<Button>This is string content of a Button</Button>

<!--Create a Button with a DateTime object as its content.-->
<Button xmlns:sys="clr-namespace:System;assembly=mscorlib">
  <sys:DateTime>2004/3/4 13:6:55</sys:DateTime>
</Button>

<!--Create a Button with a single UIElement as its content.-->
<Button>
  <Rectangle Height="40" Width="40" Fill="Blue"/>
</Button>

<!--Create a Button with a panel that contains multiple objects 
as its content.-->
<Button>
  <StackPanel>
    <Ellipse Height="40" Width="40" Fill="Blue"/>
    <TextBlock TextAlignment="Center">Button</TextBlock>
  </StackPanel>
</Button>


More or less, WPF is to .NET what QML is to Qt.

Ouch. This WP7 is completely different from what we are used to. Maybe we can make them less different?

People tend to believe WPF and XAML are the same thing. They are not. XAML is one of the languages used to write WPF applications (and happens to be the one 99.99% of people use), but it is not the only one. You can write WPF applications in pure C# code (it’s not unusual to mix XAML and C#, what we call “code-behind” in WPF parlance), or you could create your own XAML-equivalent language.

Or you could make my wish for today come true: create a compatibility layer for QML so that WPF applications can be developed in QML, effectively replacing XAML. It is technically possible. This would also require a glue layer for C#, similar to what we have now to bridge QML and C++. It would make porting applications from C++ and QML to C#, QML and WPF a relatively easy task, and may provide a realistic path to bring Qt applications to WP7 without a full rewrite.

Update: Fixed the XAML vs C# code, thanks ‘aa’. Also, please note that using C++ with QML and WPF would be straightforward.

April Fools’ Day has passed; it’s safe to blog again 🙂

My last wish was born when Koen Deforche (one of the best software developers I know) made an innocent comment during my FOSDEM talk this year.

Koen happens to be the guy who created Wt (pronounced ‘witty’), a C++ library and application server for developing and deploying web applications. The API mimics Qt‘s: it is widget-centric and offers complete abstraction of any web-specific application details. Where Qt has QObject, Wt has WObject; where Qt has QPainter, Wt has WPainter; both provide a very similar Model-View architecture, both have signals and slots (Wt’s being based on Boost), both provide an OpenGL widget (WebGL in the case of Wt), both allow for language translations, and in fact you can combine Qt and Wt code thanks to the wtwithqt bridge.
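
To show how close the feel is, here is, more or less, the canonical Wt “hello world” (essentially the example shipped with Wt), which reads almost like a Qt desktop application:

#include <Wt/WApplication>
#include <Wt/WBreak>
#include <Wt/WContainerWidget>
#include <Wt/WLineEdit>
#include <Wt/WPushButton>
#include <Wt/WText>

using namespace Wt;

class HelloApplication : public WApplication
{
public:
    HelloApplication(const WEnvironment& env);

private:
    WLineEdit *nameEdit_;
    WText *greeting_;

    void greet();
};

HelloApplication::HelloApplication(const WEnvironment& env)
    : WApplication(env)
{
    setTitle("Hello world");

    // Widgets are added to a tree, just like in a Qt desktop application.
    root()->addWidget(new WText("Your name, please? "));
    nameEdit_ = new WLineEdit(root());
    WPushButton *button = new WPushButton("Greet me.", root());
    root()->addWidget(new WBreak());
    greeting_ = new WText(root());

    // Signals and slots, Wt style.
    button->clicked().connect(this, &HelloApplication::greet);
}

void HelloApplication::greet()
{
    greeting_->setText("Hello there, " + nameEdit_->text());
}

WApplication *createApplication(const WEnvironment& env)
{
    return new HelloApplication(env);
}

int main(int argc, char **argv)
{
    // WRun starts the built-in HTTP(S) server and serves the application.
    return WRun(argc, argv, &createApplication);
}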

If you know me, you know I love Wt. Developing webapps as if they were desktop apps (thinking in “widgets” instead of “pages”) is very powerful. In fact, I gave up on Rails after I discovered Wt, because Rails has become a spaghetti of Ruby, JavaScript, CSS, HTML and whatnot, with so many conventions that it is very difficult to follow the code unless you know every library involved very well (not to speak of performance and memory consumption, where Wt beats everything).

Wt is not without its problems, though. In my opinion, the main needs of Wt are more widgets and an IDE. Today I’m going to talk about the IDE.
