
to thread the X server (?)

I really don’t like to read large blog posts. Anyway…

What I have done so far is a separate thread that takes care of only the injection stage of the X server event queue. Anyone interested in the results can read some of the past posts on my blog. It is currently in very good shape (synced with the post-MPX merge, all input devices live inside the thread, etc.). The implementation looks like this:

thread #1 deals with:
- injection of input events from devices

thread #2 deals with:
- processing of input events to clients
- requests from known clients (rendering things)
- new clients that try to connect (pretty easy to do)
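As a rough illustration of that split, the two threads can be thought of as a producer/consumer pair around a shared event queue. This is a minimal, self-contained sketch with stubbed device reads and client delivery, not the actual X server code:

```c
/* Illustrative two-thread split: thread #1 injects device events into a
 * shared queue, thread #2 drains it and does the client-side processing.
 * Device reads and client delivery are stubs; overflow handling omitted. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define QUEUE_SIZE 256

struct event { int device, code, value; };

static struct event queue[QUEUE_SIZE];
static int head, tail;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

/* Stub for a blocking read on an input device. */
static struct event read_device_event(void)
{
    usleep(10000);
    return (struct event){ .device = 0, .code = 1, .value = 4 };
}

/* Stub for delivering a processed event to interested clients. */
static void deliver_to_clients(const struct event *ev)
{
    printf("event: dev=%d code=%d value=%d\n", ev->device, ev->code, ev->value);
}

/* thread #1: injection of input events from devices */
static void *injection_thread(void *arg)
{
    for (;;) {
        struct event ev = read_device_event();
        pthread_mutex_lock(&lock);
        queue[tail] = ev;
        tail = (tail + 1) % QUEUE_SIZE;
        pthread_cond_signal(&nonempty);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* thread #2: event processing, client requests, new connections */
static void *processing_thread(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail)
            pthread_cond_wait(&nonempty, &lock);
        struct event ev = queue[head];
        head = (head + 1) % QUEUE_SIZE;
        pthread_mutex_unlock(&lock);
        deliver_to_clients(&ev);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, injection_thread, NULL);
    pthread_create(&t2, NULL, processing_thread, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

In the real server the queue, locking and dispatch are of course far more involved; the point is only that injection can block on the devices independently of request processing.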

fakemouse -- a driver that emulates a mouse

For my SoC project I need some mechanism to evaluate the improvement of the input thread inside X. So I wrote a simple kernel driver that emulates a mouse device moving and emitting events in a simple pattern. I don’t know if something like this already exists or if there are other ways to do it, but the solution I came up with took me only a few hours from the moment I imagined it, collected some ideas on the Web, and implemented it.
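A driver like that can be built on top of the Linux input subsystem. The sketch below is only an assumption of how such a "fakemouse" module could look (the names, the movement pattern, the timer interval and the use of the current timer API are mine, not the original driver's): it registers a virtual relative-motion device and emits a fixed movement pattern from a kernel timer.

```c
/* Hypothetical "fakemouse" sketch: register a virtual mouse with the input
 * subsystem and emit a fixed movement pattern from a kernel timer. */
#include <linux/module.h>
#include <linux/input.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

static struct input_dev *fakemouse_dev;
static struct timer_list fakemouse_timer;

static void fakemouse_emit(struct timer_list *t)
{
	/* Simple pattern: move diagonally by a few pixels per tick. */
	input_report_rel(fakemouse_dev, REL_X, 4);
	input_report_rel(fakemouse_dev, REL_Y, 4);
	input_sync(fakemouse_dev);

	mod_timer(&fakemouse_timer, jiffies + msecs_to_jiffies(10));
}

static int __init fakemouse_init(void)
{
	int err;

	fakemouse_dev = input_allocate_device();
	if (!fakemouse_dev)
		return -ENOMEM;

	fakemouse_dev->name = "fakemouse";
	fakemouse_dev->evbit[0] = BIT_MASK(EV_REL) | BIT_MASK(EV_KEY);
	fakemouse_dev->relbit[0] = BIT_MASK(REL_X) | BIT_MASK(REL_Y);
	fakemouse_dev->keybit[BIT_WORD(BTN_LEFT)] = BIT_MASK(BTN_LEFT);

	err = input_register_device(fakemouse_dev);
	if (err) {
		input_free_device(fakemouse_dev);
		return err;
	}

	timer_setup(&fakemouse_timer, fakemouse_emit, 0);
	mod_timer(&fakemouse_timer, jiffies + msecs_to_jiffies(10));
	return 0;
}

static void __exit fakemouse_exit(void)
{
	del_timer_sync(&fakemouse_timer);
	input_unregister_device(fakemouse_dev);
}

module_init(fakemouse_init);
module_exit(fakemouse_exit);
MODULE_LICENSE("GPL");
```

Once loaded, a module like this shows up as a new event device under /dev/input/, so the server sees it as an ordinary mouse while the event rate and pattern stay fully reproducible for benchmarking.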

Google Summer of Code 2008

I’m very happy to say that I was selected again to work on Google Summer of Code with the X.Org Foundation. Daniel will be my mentor again. Thanks Google. Thanks X.Org!

Last year we did some nice work separating the input event generation code of the X server into a different thread. We saw some performance improvement there, especially because the implementation no longer uses signals to wake up the server when a device emits an event. The reason is that when a process is in uninterruptible sleep (D state), signal delivery is delayed and the mouse cursor lags.
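The change in wake-up mechanism can be pictured like this (an illustrative sketch, not the server’s actual code): rather than relying on SIGIO to interrupt the main loop, a dedicated thread blocks on the device and wakes the main select() loop through a pipe, so event delivery no longer depends on when a signal happens to be delivered.

```c
/* Illustrative wake-up scheme: an input thread pokes the main select()
 * loop through a pipe instead of the loop being woken by SIGIO. */
#include <pthread.h>
#include <unistd.h>
#include <sys/select.h>
#include <stdio.h>

static int wakeup_pipe[2];   /* [0] read end watched by the main loop */

static void *device_reader(void *arg)
{
    char byte = 'x';
    for (;;) {
        usleep(10000);                      /* stand-in for a blocking device read */
        write(wakeup_pipe[1], &byte, 1);    /* wake the main loop */
    }
    return NULL;
}

int main(void)
{
    pthread_t reader;
    pipe(wakeup_pipe);
    pthread_create(&reader, NULL, device_reader, NULL);

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(wakeup_pipe[0], &rfds);
        select(wakeup_pipe[0] + 1, &rfds, NULL, NULL, NULL);

        char byte;
        read(wakeup_pipe[0], &byte, 1);
        printf("input available, dispatching\n");
    }
}
```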

Traversing X11 clients behind NAT (or X11 end-to-end connectivity)

I was thinking about how we could make remote X clients fully connectable with the server when both are behind a NAT/firewall.

We can imagine one big motivation to do this: a scenario where someone using a thin, underpowered machine wants to use the resources of some “fat” machines without having to know where they are located. Those fat machines could be arranged in a P2P network, an “X11 pool of resources”, and a list of machines would be displayed so the user could select the one he wants (e.g. the one with the least lag/load). Someone more capitalist than me could go further and imagine a provider selling X11 resources to mobile devices. Or you could simply open your home machine’s web browser from anywhere in the world. The field of applications would be huge.

VgaArbiter wiki

Today Paulo Zanoni helped me put the VgaArbiter wiki page into shape. The primary intention of that page is to bring more developers into the project, plus some users who could also help with testing. Feedback is very welcome. Here is the link.

Benchmarking it all

After a long journey I’m back at this… I did a set of benchmarks to evaluate VGA arbitration versus the use of the RAC. My goal is to evaluate the performance difference in a multi-head/multi-card environment, i.e., an Xorg using the RAC compared to one using the arbiter.

The experiments consisted of two applications running at the same time on each Xorg server, one on each screen. This is interesting because it stresses the semaphore work of the arbiter inside the kernel, creating contention between the screens. Each experiment was run ten times and the average result was taken.