The Albanian virus

There’s an old joke about the Albanian virus. This virus was just an email message reading: “I’m an Albanian virus, so I can’t do any harm to your computer. Please be so kind as to delete some of the important system files on my behalf. Thank you for understanding!”

The idea is that, in the relationship between hacker and “hackee”, there shouldn’t really be such a big intellectual or financial gap between the two.

As John Gruber pointed out, there’s a plethora of Albanian viruses in Apple’s history. The latest scam is more of a mass-marketing tool than a software one: your Mac might get attacked, therefore you have to worry. What can Apple do about it? Well, there is one thing they can do: build an “Apple antivirus”.

What should an Apple antivirus do

It should be just a very nice popup message saying: “You are completely out of trouble if you keep the system in our administration instead of any other app’s. We cannot guarantee it will function properly otherwise, therefore we strongly advise you not to manually remove any system files after purposefully suppressing our administrative control.”

This message is not new; Apple sent it when they had to accept that jailbreaking iOS is legal. There is no jailbroken iOS that is 100% risk-free, just as there is no stock iOS at any risk. (The main reason legalizing the jailbreak was bad news for Apple was that, from that moment on, they could no longer shepherd the user themselves.)

“You might…, therefore you have to…”

That’s the usual bullshit the pharmaceutical companies have been using to sell their drugs since the beginning of time. “If you feel an itch, there’s an (infinitesimal!) chance you have a terminal disease, therefore you have to use this medicine, which we happen to sell at a huge discount and for a very limited time. Hurry up, tomorrow you might be dead!”

Although you may not be in their target audience, you won’t believe how efficient this message can be! The immediate effect is that a lot of people exposed to it feel that specific itch within 20 seconds. The prophecy starts to fulfill itself! This is the most important step, after which you start to feel the urge to have that medicine.

Let’s see what “at risk” means

I can think of two meanings of this phrase.

First, you can say “I discovered this unknown open door to your system that could be used for malicious code insertion”. Windows users are very well aware of this type of risk; it has become a day-to-day reality. Google recently found that its Android users were exposed to the same type of risk, too.

This is, let’s say, a natural risk: you see the hole, evaluate the risk, then patch the hole. That specific risk is specifically addressed, and then it simply stops existing. The most important trait of this type of risk is that somebody has already discovered the hole in the system, documented it and assessed it. The hole exists.

The second meaning of risk is the one used in the example with the pharmaceutical companies: “There may be a risk (read: there is nothing that would logically forbid its existence), therefore you’re already exposed to it (read: be very afraid)”.

In this case, “your disease” has not been diagnosed and it does not exist, although there is a logical possibility of getting it in the future.

From this second perspective, systems divide in two: those that should consider this secondary risk a threat, and those that shouldn’t. (This is the distinction between systems that have to be built flawlessly and those which merely have to remain “not flawed”, without necessarily being built or born flawless.)

What kind of systems should consider a theoretical flaw as a threat

The answer here is very simple and I won’t digress: only the systems that are built incapable of feedback, or can become so — that is, only the systems that lack any internal regulatory procedures: the static systems.

For example: a diamond should be flawless (static), while a dog needn’t be (although a no-bad-habits dog is better: it’s dynamic).

Any static system that is unable to assess a risk and take specific action must be built flawless, therefore the potential holes need to be considered before its creation, not after they appear. (Humankind exposed to avian flu is a good example of a static system, while a single person is a good example of a dynamic one; humankind is, as yet, a system without feedback.)

For the sake of your sanity, let’s conclude that most complex systems, like a human being or a computer OS, are not static systems but dynamic ones; that is, they have the inner capability of assessing risks and taking appropriate action (whether that action is predefined or adaptive).

So, where’s the bullshit?

When the pharmaceutical company addressed you, it treated you as part of two different systems, while knowing that only one of them is capable of understanding and acting.

It treated you both as an element of humankind (when it showed you the statistical risk of irreversible disease or death) and as an individual (when it expected an action from you, and not from the whole of humankind).

To put it more simply, it’s like this: “We all suffer from hunger, therefore we have to eat you, Jim”. Can you feel that incorrect “therefore”?

This is only the first half of the bullshit, though. The other, implied half says that your risk as an individual is equal in kind to the risk of humankind.

Humankind, as a static system, cannot take any risk when considering a global vaccine, for example. But if that vaccine is meant to prevent some local Amazonian flu while you have never left, say, Russia, there’s not much sense in you taking that vaccine, is there?

Equating the risk potential of a dynamic system with that of a static one is the second mistake. It’s like saying “the risk of drowning in the sea is the same for both humans and dolphins”.

Back to our business: the human body, as well as a computer OS, is a complex system that only has to avoid being flawed during its lifetime, without having to be flawless by design.

Mutatis mutandis, “your Apple OS might be at risk” should never be considered a threat to the OS’s integrity unless the flaw already exists and the risk is quantifiable, because the OS was not built as a static, inert system, but as a dynamic one.

What’s the point?

If you are unable to correctly measure a risk, or to keep it enclosed in its proper category, your life will be uglier.

But that has nothing to do with my story; I just wanted to tell you the joke about the Albanian virus.