Two Desktop Search Tools are better than one

Previously I’d been tossing up between a few desktop search tools. Google Desktop Search’s sidebar and RSS collection features kept me on it for a long time. Unfortunately, the performance hit on the system was too much for my liking. Using Sysinternals’ Filemon tool, I found it would constantly read from the disk. Sometimes it would begin indexing even though I was still using my machine, and it would take about half a minute to stop indexing when I returned to it; the constant hard disk access made it difficult to start new apps or switch between running ones while the indexing finished up.

I had been using Copernic at work and found it a reasonable alternative. It boasts a speedy response and brings back the ‘don’t index while on batteries’ feature that I missed when I switched from MSN Desktop Search. It can’t be extended through plugins the way GDS or even MSN can, but it’s fast. I’m not too certain whether Copernic has picked up every email, or every reference in each document; the GDS results seemed a little more relevant, on the order of 0.01%, but given the speed and responsiveness returned to my system, as well as the freed disk space (smaller index files), it doesn’t matter in the slightest. This critique could also be down to the UI of Copernic: it’s a Windows application, again fast, but I’m still used to GDS (a used-to-ness that will probably fade as I use Copernic more).

Realising that desktop search tools allow you to specify which files you want indexed, I simply installed Copernic on my laptop alongside GDS. I disabled most of GDS’s file types so that it would only index web history, and got Copernic to do everything else (even indexing OneNote’s .one files as text files) except the history from Firefox and IE. I then deleted the GDS index using TweakGDS, which freed a couple of gigabytes of disk space, and let Copernic take over.

Best of all, my system runs better than when I had GDS alone. Copernic satisfies my desktop search needs, and I’ve still got the excellent sidebar and web clips tool that Google provides; it’s still picking up RSS feeds from new sites I visit and popping them up, as it always has.

O p u s D ä i

O p u s D ä i is a much-hyped band that I heard about through Streetwise. They are being compared to Led Zeppelin, Pink Floyd, The Mars Volta and even Tool. Whilst I don’t hear the resemblance to the former, they are being plugged as the next progressive thing, and they are beginning to catch on with me too.
To hear some of their stuff, go to this link at Download.com for three free tracks. Their MySpace page also has an extra track that is streamable. Get into it, it’s free :-S

For the moment, http://www.decoymusic.com/ is streaming the new album too. Now you too can see if they are better than Linkin Park or not 😛

C++ Compilation and Linking

Taken from Monash CSSE’s 3400 Application Development with C++ webpage:

Firstly, with the normal compile and link process:
We write our programs in a file, in a source language that the compiler understands (can parse). If we write code the compiler does not understand, it tells us with compile error messages. If we write code that it can understand but that could have problems, we get warning messages.

After the compiler has understood our code it generates machine instructions that, when executed, will do what the compiler understands we wanted the computer to do. The instructions could be native machine code, assembler (in which case the assembler output is then translated, i.e. assembled, to machine code), or even another programming language. The machine code often has references to data or functions in other files, such as other parts of our program or the standard libraries.

The result of compiling our source code file is an “object” file.

As already mentioned, the object files refer to other object files or libraries. Also, the various object files and library functions must be organised into one file that the operating system can load into memory and execute. So the various object files are loaded into memory, and all of the references mentioned earlier must be “fixed up” so that the actual address of the referenced data or function is used in the machine instructions. The libraries are also searched for the functions our code calls, and those functions are loaded into the new program as well. Not surprisingly, this process is called “linking”, and the program that does it is called the “link editor”.

If the link editor finds references but cannot find what they refer to, it reports these as errors.

The result of linking is an executable file.

OK, so where do C++ templates fit in?

When the compiler encounters a template declaration (it should be all together in a header file), it just parses it to ensure it is correct, and stores it until our code makes use of it. When we declare or define something that uses (instantiates) it, the compiler generates the new class or function from the template. This can lead to new compile errors at the point of instantiation. Then the linker will try to link our program, and it may find errors too.

Java SerialVersionUID

I’ve always heard other people complain about serialisation issues, but never had to deal with them myself until now.

Here are some quick links:

java.net: (Not So) Stupid Questions 8: serialVersionUID – the discussion that led me to find these other links.

Object Serialization – a great intro to the classes involved in the Serialization process and some means to implement versioning and backward compatibility. It uses a worked example based on an app the author wrote themselves.

Practical Guidelines for Java Serial Version ID and Serialization – covers the use of the serialver tool to generate serialVersionUIDs for your classes.
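For a concrete picture of where the serialVersionUID sits, here is a minimal round-trip (the class and field names are invented for illustration; the guides above cover the versioning and compatibility details):

```java
import java.io.*;

// Hypothetical serializable class. Declaring serialVersionUID explicitly
// pins the stream version, so compatible changes to the class (e.g. adding
// a field) won't cause an InvalidClassException on deserialisation.
class User implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;
    User(String name) { this.name = name; }
}

public class SerialDemo {
    public static void main(String[] args) throws Exception {
        // Serialise to an in-memory byte array...
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new User("ada"));
        }
        // ...and read it back; the stream's UID is checked against the class.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            User copy = (User) in.readObject();
            System.out.println(copy.name); // prints "ada"
        }
    }
}
```

Without the explicit field, the runtime computes a UID from the class structure, which is why an innocuous recompile can break old serialised data.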