Rethinking linked list insertion

There is one nice thing about looking for a new job: you meet lots of new people and have a chance to learn from them. For example, in one of the companies I was asked about something called anti-debugging. I didn’t have a clue what that was and had to ask for an explanation. Apparently, it is a set of techniques used to fool a debugger and make the code harder to debug.

Anyway, here’s something else that I learned during one of the interviews.

Read the rest of this entry »

I am looking for a new job

I am sorry to say that, but Exanet, a company that I joined less than a month ago, has been closed.

This means that I am looking for a new job. The good thing is that now your or your friend’s company has a chance to hire a programmer with ten years of experience writing applications for Linux and for the Linux kernel. So, if you can help, please pass my resume along. You can find it here.



I got a new job

You probably noticed that I didn’t write anything new for a while. Well, I was looking for a new job and didn’t have much time to write. Luckily, that is over. I am now a senior software engineer at Exanet LTD.

Exanet develops storage solutions for large organisations. ExaStore, the company’s main product, is a clustered NAS gateway solution providing highly available, distributed data storage.

MSI-X – the right way to spread interrupt load

When considering ways to spread interrupts from one device among multiple cores, I can’t help but mention MSI-X. The thing is that MSI-X is actually the right way to do the job.

Interrupt affinity, which I discussed here and here, has a fundamental problem: inevitable CPU cache misses. To see why, think about what happens when your computer receives a packet from the network. The packet belongs to some connection. With interrupt affinity, the packet may land on core X, while chances are that the previous packet on the same TCP connection landed on core Y (X ≠ Y).

Handling the packet would require the kernel to load the TCP connection object into X’s cache. But this is so inefficient – after all, the TCP connection object is already in Y’s cache. Wouldn’t it be better to handle the second packet on core Y as well?
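To make the idea concrete, here is a minimal Python sketch – my own illustration, not something from the articles above – of the flow-steering principle that MSI-X-capable NICs with multiple receive queues implement in hardware. A hash of the connection’s 4-tuple picks the receive queue, and therefore the core, so every packet of a given connection is handled by the same core and the connection object stays warm in that core’s cache. The simple hash below is just a stand-in for the Toeplitz hash real NICs use.

```python
# Sketch of per-flow steering: packets of one connection always land on
# one core. A NIC with multiple RX queues (each with its own MSI-X
# vector) does the equivalent in hardware.

import hashlib

NUM_QUEUES = 4  # assume one RX queue (and one MSI-X vector) per core


def rx_queue_for(src_ip, src_port, dst_ip, dst_port):
    """Pick an RX queue from the TCP 4-tuple.

    Because the choice depends only on the 4-tuple, every packet of a
    given connection maps to the same queue, hence the same core, and
    the connection object stays in that core's cache.
    """
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_QUEUES


# Two packets of the same connection always map to the same queue:
q1 = rx_queue_for("10.0.0.1", 40000, "10.0.0.2", 80)
q2 = rx_queue_for("10.0.0.1", 40000, "10.0.0.2", 80)
assert q1 == q2  # same connection -> same queue -> same core
```

With plain interrupt affinity there is no such per-flow mapping, so consecutive packets of one connection can bounce between cores.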

Read the rest of this entry »

Why interrupt affinity with multiple cores is not such a good thing

One of the features of the x86 architecture is the ability to spread interrupts evenly among multiple cores. The benefits of such a configuration seem obvious: interrupts consume CPU time, and by spreading them across all cores we avoid bottlenecks.

I’ve written an article explaining this mechanism in greater detail, but let me briefly remind you how it works.
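One concrete detail worth knowing: Linux exposes each interrupt’s CPU affinity as a hexadecimal bitmask in /proc/irq/&lt;N&gt;/smp_affinity, where a set bit i means CPU i is allowed to handle the interrupt. As a small illustration – the helper function below is my own, not a kernel or library API – here is how such a mask is built:

```python
# Linux exposes each interrupt's CPU affinity as a hex bitmask in
# /proc/irq/<N>/smp_affinity: bit i set means CPU i may handle the IRQ.
# This helper is an illustration only, not a kernel API.

def smp_affinity_mask(cpus):
    """Return the hex bitmask string for a set of CPU numbers."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu  # set bit `cpu` in the mask
    return format(mask, "x")


# Pinning a hypothetical IRQ 24 to CPU 2 alone would then be (as root):
#   echo 4 > /proc/irq/24/smp_affinity
print(smp_affinity_mask({2}))           # prints "4"
print(smp_affinity_mask({0, 1, 2, 3}))  # prints "f"
```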

Read the rest of this entry »

PSC for Personal Super Computer

I’ve been waiting for this for quite some time and now it is finally here. I am talking about Personal Super Computers.

Five years ago I purchased a brand new laptop computer. It is a decent machine – I am still using it today. It cost me around $1,500 U.S. Obviously, today it is less powerful than those $300 netbook computers.

Netbooks have changed things quite a bit. Yet, as a matter of fact, there is nothing special about them. This is how technological progress works: at first you have something that costs a lot, then some company that wants to break into the market releases a breakthrough product that lowers the prices. This is what happened when Asus released the eeePC – the first netbook.

This has happened many times before. It really constitutes the beauty of capitalism.

As a result, a modern netbook costs half the price and has the same processing capabilities as my good old five-year-old HP Pavilion.

But what about super computers? There is an ongoing race to build ever faster super computers. They say that the most powerful super computer today is as powerful as a human brain – an extraordinary achievement, if you think about it.

So here is one natural development that I expected to see a long time ago. According to the article below, nVidia is making and selling Personal Super Computers. Although these machines are far from the top 500 most powerful computers in the world, they still provide a very impressive few teraflops of computing power.

Here is the article:

Oh, and I forgot to mention… These computers don’t run Windows 7 :-)

2 reasons why a small package repository is better than a large one

I am in the middle of a CentOS and Ubuntu comparison frenzy. It started with an attempt to assess the quality of Linux distributions made for busy people. Today I am considering packaging.

When comparing the Ubuntu and CentOS packaging systems, the first thing that crosses my mind is that, well, size matters. Ubuntu has nearly 70,000 packages. CentOS has around 6,000.

Obviously, it is very handy to have every possible package just a couple of clicks away. Instead of hunting for the package, figuring out its versioning scheme and available architectures, or visiting the vendor’s web-site and wading through all the ads, you just open the Synaptic package manager and enter the name of the program, or a couple of keywords describing what you need. Then a couple of clicks and you’re done.

But when I started using this system I found that something in it is broken. There are several things that bother me.

Yes, most of the programs are easy to install, but still, some programs are not in the repository, and others are outdated. Here is one example.
Read the rest of this entry »

A few thoughts about Ubuntu servers and CentOS

This Saturday I tried to configure a VNC server to start in the background automatically at boot. You know, in Ubuntu you normally run a VNC server when you need it and stop it when you don’t need it anymore.

Read the rest of this entry »

“Linux Tips and Tricks”, cracking passwords and security

Carla Schroder of Linux Today has posted a nice list of her Linux tips and tricks, here.

One tip I could not make work is Cracking Passwords. The program simply refused to recognize my passwords file. I found that it might be because it doesn’t support this kind of encryption, or something like that.

Read the rest of this entry »

Mono is here to stay, period?

There has been a new development in the subject I raised a day ago. It seems that there has been some effort on Microsoft’s side to clarify the legal issue with the Mono Project. According to this article in iTWire, Microsoft will extend its Community Promise to the C# and CLI standards.

Read the rest of this entry »